Repose
Registered: Oct 2010 Posts: 225
Fast large multiplies
I've discovered some interesting optimizations for multiplying large numbers, for cases where the multiply routine's time depends on the bits of the multiplier. With a standard shift-and-add routine, each 1 bit in the multiplier costs a "bit" more time for that bit.
The method uses several ways of transforming the input to have fewer 1 bits. Normally, if every value appears equally often, half the bits are 1 on average. In my case, that becomes the worst case, and only about a quarter of the bits are 1. This can speed up any routine, even one that happens to be in ROM, by pre-processing the input and post-processing the results. The improvement is about 20%.
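The post doesn't spell out the transform, but one standard trick with this flavour is to use the complement of a multiplier byte whenever it has more than four 1 bits, since a*b = 256*a - a - a*(255-b). A rough Python cost model of the resulting bit counts (my own illustration, not necessarily Repose's exact method; the fixup cost is ignored):

```python
def popcount(b):
    # Number of 1 bits in one multiplier byte.
    return bin(b).count("1")

def best_bits(b):
    # a*b = 256*a - a - a*(255 - b), so a byte with more than four
    # 1 bits can be replaced by its complement plus a cheap fixup
    # (the fixup overhead itself is ignored in this rough model).
    return min(popcount(b), popcount(255 - b))

# Average 1 bits per multiplier byte, plain vs. transformed:
plain = sum(popcount(b) for b in range(256)) / 256
transformed = sum(best_bits(b) for b in range(256)) / 256
print(plain, transformed)  # 4.0 2.90625 - and the worst case drops from 8 to 4
```

The averages come out close to the "half becomes the worst case, about a quarter on average" figures quoted above.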
Another speedup applies when the same multiplier is used with multiple multiplicands: the multiplier's bits only need to be processed once. This can save another 15%.
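One way to picture that shared-multiplier saving (an illustration of the idea, not Repose's actual routine): decode the multiplier's set bits into an add schedule once, then reuse the schedule for every multiplicand.

```python
def add_schedule(multiplier):
    # Decode the (16-bit, assumed) multiplier's set bits once...
    return [i for i in range(16) if (multiplier >> i) & 1]

def mul_with_schedule(schedule, multiplicand):
    # ...so each extra multiplicand only pays for the shifts and
    # adds, not for re-testing the multiplier bits.
    total = 0
    for shift in schedule:
        total += multiplicand << shift
    return total

sched = add_schedule(0x0303)
results = [mul_with_schedule(sched, m) for m in (0x5456, 0x1234)]
```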
Using the square-table method would be faster still, but it uses a lot of data and a lot of code.
Would anyone be interested in this?
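For reference, the square-table method mentioned above relies on the identity a*b = f(a+b) - f(a-b) with f(x) = floor(x*x/4). A minimal Python sketch of the idea (on the 6502 the table would be two 512-entry pages of low and high bytes):

```python
# f(x) = floor(x*x/4); indexed by a+b and |a-b|.
SQ4 = [x * x // 4 for x in range(512)]

def mul8(a, b):
    # a*b = f(a+b) - f(|a-b|); the floors cancel exactly because
    # a+b and a-b always have the same parity.
    return SQ4[a + b] - SQ4[abs(a - b)]

# Exhaustive check over all 8x8-bit products:
assert all(mul8(a, b) == a * b for a in range(256) for b in range(256))
```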
JackAsser
Registered: Jun 2002 Posts: 2014
Quote: cool, I hope it's for some c64 demo thingie :)
No sorry. :) just curiosity.
Oswald
Registered: Apr 2002 Posts: 5094
Quote: No sorry. :) just curiosity.
nah next time you make a true doom fx, or you'll be kicked out :)
JackAsser
Registered: Jun 2002 Posts: 2014
Quote: nah next time you make a true doom fx, or you'll be kicked out :)
Hehe, wasn't the Andropolis vector enough? ;) There I only used 16-bit muls and divs (8.8 fixpoint), hence the coarse resolution of the level (the thickness of walls and the minimum stair-step height). I also optimised to only have axis-aligned rooms. Moving to 16.16 muls and removing the axis alignment would perhaps slow it down by 30%, but it would be able to render an untextured Doom level, I'm sure.
Repose
Registered: Oct 2010 Posts: 225
Very impressive. So if I can speed up a 16x16, that would be of practical benefit in speeding the framerate of Andropolis? Now there's a goal, would you be able to substitute my routine to see what happens? (If I can make a major speedup to your code)
ChristopherJam
Registered: Aug 2004 Posts: 1409
Quoting ChristopherJam:
Quoting Repose: "About the correction, I think you're adding things up wrong. I only use correction for those columns where it's faster, and I found the break-even at 7 adds, so it should work. All but the outer 1 or 2 columns can use it."
Ah, good point. Only remaining issue is what to do with the borrow if the correction underflows. My brain hurts..
OK, done now, and right you are; it saved me 13½ cycles on average for my ten random cases (down to 747.2 cycles now), and only cost me an extra five cycles on my best case (up to 697).
I ended up replacing the lda#offset at the starts of each of the middle six columns with an sbc id+$ff-offset,x at the ends.
My code generator has fairly generic treatment of addends now, so it takes care of all the carry logic for me. Core is now
for tb in range(8):
    bspecs=getbspecs(tb)
    doCounts=(tb<7)
    iv=initialValue[tb]
    doClears=len(bspecs)<4 and iv!=0
    op="lda"
    if tb==1:
        emit(" ldx#0")
    elif tb>1:
        emit(" txa")
        op="adc"
        if doCounts: emit(" ldx#0")
    if iv!=0:
        if doClears:
            bspecs=[pointerReturner( "#${iv:02x}".format(iv=iv), co=(iv>0xef))]+bspecs
        else:
            bspecs=bspecs+[pointerReturner( "id+${nv:02x},x".format(nv=0xff-iv), negate=True)]
    for n,s in enumerate(bspecs):
        addB(s, op, moveCarry=(n!=len(bspecs)-1), doCounts=doCounts, doClears=doClears)
        op="adc"
    emit(" sta mRes+{tb}\n\n".format(tb=tb))
Repose
Registered: Oct 2010 Posts: 225
Beautiful! The only other thing I could see is auto-generating for any precision, which isn't much of a leap. It should have an option to include the table-generator code too, and 6816(?) support.
You can estimate timing much the same way I do it by hand: include counts per column based on the number of adds, then overhead per column, and total overhead. I use variables Px for branches, with an initial estimate of Px=.5 over all multiplies. Was gonna have a P generator given a stream of numbers too, but that's really fancy of course.
Was gonna say, put the carries in the table (there's even the opportunity for a table with built-in accumulate, which would be slightly faster).
I have bad news though: I have doubts about my sec fix for post-correction :( I'm just working that out now (it's simple though, just ensure the running total is offset by 1 for each carry, including the first). Did you verify the outputs?
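A sketch of the kind of hand estimate described above, with every cycle count and probability a placeholder assumption (`ADD_CYCLES`, `COLUMN_OVERHEAD`, `BRANCH_CYCLES` and the example column counts are illustrative, not measured from any real routine):

```python
# Placeholder cycle costs - illustrative assumptions, not measurements:
ADD_CYCLES = 4        # cost of one table-based add in a column
COLUMN_OVERHEAD = 10  # per-column bookkeeping (sta/ldx etc.)
BRANCH_CYCLES = 3     # taken-branch cost, charged at probability P
P = 0.5               # initial branch-taken estimate, as above

def estimate(adds_per_column, branches_per_column, total_overhead):
    # Sum per-column add costs, per-column overhead, and expected
    # branch costs, plus a flat total overhead.
    cycles = total_overhead
    for adds, branches in zip(adds_per_column, branches_per_column):
        cycles += adds * ADD_CYCLES + COLUMN_OVERHEAD
        cycles += branches * BRANCH_CYCLES * P
    return cycles

# e.g. the eight result columns of a 32x32-bit multiply:
print(estimate([1, 2, 3, 4, 4, 3, 2, 1], [1] * 8, 60))  # -> 232.0
```

Feeding a real operand stream through the generator would refine P per branch, as suggested above.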
JackAsser
Registered: Jun 2002 Posts: 2014
Quote: Very impressive. So if I can speed up a 16x16, that would be of practical benefit in speeding the framerate of Andropolis? Now there's a goal, would you be able to substitute my routine to see what happens? (If I can make a major speedup to your code)
Not major perhaps; a lot of time is spent on line drawing and EOR-filling anyway. But maybe 30%, I dunno. It was a long time ago now and I'm not really sure about how much time is spent on what.
ChristopherJam
Registered: Aug 2004 Posts: 1409
Thanks!
I test the multiply using a set of 20 randomly selected 32-bit integers, comparing the product of each to a known result, and I time each multiply using CIA#2 with the screen blanked and IRQs disabled. So it's not comprehensive, but I'm reasonably confident.
Yes, generalising for a set of operand widths would probably be quite handy. Good idea instrumenting the generator to calculate the likely execution time; I've been meaning to do something similar for another project..
The post-fixup is fine if it's the last addition performed; then the carry from that addition can propagate through to the next column.
Do you mean 65816? I started learning that once, but the SuperCPU never really took off, and I tend to ignore most platforms between the C64 and modern hardware these days, at least for hobby coding. The REU last year was already something of a leap for me.
Repose
Registered: Oct 2010 Posts: 225
Btw, one more idea: squaring and cubing based on this can be optimized significantly as well.
Repose
Registered: Oct 2010 Posts: 225
Test your routine with these magic numbers:
00 00 54 56
00 00 03 03
If my theory is correct, that's the only case that will fail (not really, just the only one I'm bothering to solve for).
It's quite a subtle bug, and random values won't catch it; you need 'whitebox' testing.
The failure is rare, and takes the form of the high byte being higher by 1.
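For what it's worth, the pair above multiplies out to $FDFF02, whose $FF middle byte is exactly the pattern that lets a deferred carry ripple one byte further than expected. A trivial harness for catching the off-by-one upper byte (the `mul` argument stands in for whatever candidate routine is under test):

```python
def check(mul, a, b):
    # Compare a candidate 16x16 routine against a reference product;
    # the bug described above shows up as a result too high by one
    # in an upper byte.
    expect = a * b
    got = mul(a, b)
    assert got == expect, f"{a:04x}*{b:04x}: got {got:08x}, want {expect:08x}"

# Stand-in routine: Python's own multiply, which always passes.
check(lambda a, b: a * b, 0x5456, 0x0303)
print(hex(0x5456 * 0x0303))  # 0xfdff02 - the $ff middle byte is what
                             # makes the carry propagate an extra byte
```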