Repose
Registered: Oct 2010 Posts: 225 |
Fast large multiplies
I've discovered some interesting optimizations for multiplying large numbers, applicable whenever the multiply routine's time depends on the bits of the multiplier. With a standard shift-and-add routine, every 1 bit in the multiplier costs a "bit" more time for that bit.
The method uses several ways of transforming the input to have fewer 1 bits. Normally, if every value is equally likely, half the bits are 1 on average. With the transformation, that half becomes the worst case, and on average only about a quarter of the bits are 1. This can speed up any routine, even one that happens to be in ROM, purely by pre- and post-processing the arguments and results. The improvement is about 20%.
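For a taste of the kind of transformation involved, here is one classic example, the complement trick, sketched in C (the function names are only illustrative, and this one trick isn't the whole story): if the multiplier has more than half its bits set, multiply by its complement and correct afterwards, since a*m = a*(2^n - 1) - a*(~m). That alone makes "half the bits set" the worst case and about a quarter the average.

#include <stdint.h>

/* Sketch only: shift_add_mul() stands in for any routine whose cost
   grows with the number of 1 bits in the multiplier m. */
static uint32_t shift_add_mul(uint16_t a, uint16_t m)
{
    uint32_t p = 0, aa = a;
    while (m) {
        if (m & 1) p += aa;   /* each 1 bit costs an add */
        aa <<= 1;
        m >>= 1;
    }
    return p;
}

uint32_t mul_fewer_ones(uint16_t a, uint16_t m)
{
    if (__builtin_popcount(m) > 8) {   /* more than half of the 16 bits set (GCC/Clang builtin) */
        /* a*m = a*0xFFFF - a*(~m) = (a << 16) - a - a*(~m) */
        return ((uint32_t)a << 16) - a - shift_add_mul(a, (uint16_t)~m);
    }
    return shift_add_mul(a, m);
}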
Another speedup comes from applying the same multiplier to several multiplicands: the multiplier bits only need to be processed once. This can save another 15%.
Using the squares-table method would be faster still, but it uses a lot of data and a lot of code.
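(For reference, the identity behind the squares-table method: with f(x) = floor(x^2/4), a*b = f(a+b) - f(a-b) for a >= b; the floor errors cancel because a+b and a-b always have the same parity. Each byte product then costs an add, a couple of subtracts and two table lookups, with about 1 KB of tables for the low/high byte pair.)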
Would anyone be interested in this?
tlr
Registered: Sep 2003 Posts: 1790 |
Quoting Repose: Some kind of extra SPI-to-parallel circuit would help. Since the chip I mentioned has 4 MHz serial speed, it's fast enough to read a parallel port at full speed.
https://highfieldtales.wordpress.com/2011/12/04/very-fast-spi-t..
You do already have two CIA synchronous serial ports though, maybe they could be used? |
Repose
Registered: Oct 2010 Posts: 225 |
That's what I suggested earlier. Just to be clear, Martin wanted a fast, efficient ALU you would use by writing the arguments to memory, where they would be latched until the LSB was written, then immediately reading the product back. I digressed a little by suggesting a GPU cart, which takes the idea of assisted computing to its logical conclusion, though I felt it was a bit pointless by then.
I then suggested using the CIA serial port to communicate with the SPI port of a proposed ALU chip (which is still in stock). This would be a bit slow, as I think the SP port can only clock out a byte in 16 cycles.
To overcome that difficulty, one could use a circuit or chip that clocks the SPI at the full 4 MHz of the ALU chip and somehow leaves the result at the parallel port of the CIA on the Userport.
To be fair, many functions can be done fairly quickly with large tables, using either the Ramlink or REU, and I wrote sample code for both. There are also methods documented at https://wilsonminesco.com/16bitMathTables/
Large Look-up Tables for Hyperfast, Accurate, 16-Bit Scaled-Integer Math
Plus: The Unexpected Power of Scaled-Integer Math |
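As a small taste of the big-table approach, here is a sketch in C (the table name and scaling are made up; on the C64 a table this size would be generated ahead of time and loaded into REU or RAMLink memory, then fetched by DMA rather than indexed directly):

#include <math.h>
#include <stdint.h>

/* 128 KB table: one full sine period in 65536 steps, scaled to +/-32767. */
static int16_t sin_tab[65536];

void init_sin(void)
{
    const double two_pi = 6.28318530717958647692;
    for (uint32_t a = 0; a < 65536; a++)
        sin_tab[a] = (int16_t)lround(32767.0 * sin(a * two_pi / 65536.0));
}

/* 16-bit angle in, scaled 16-bit sine out: one fetch instead of any
   series evaluation. */
int16_t fast_sin(uint16_t angle)
{
    return sin_tab[angle];
}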
tlr
Registered: Sep 2003 Posts: 1790 |
Quoting Repose: That's what I suggested earlier. Just to be clear, Martin wanted a fast, efficient ALU you would use by writing the arguments to memory, where they would be latched until the LSB was written, then immediately reading the product back. I digressed a little by suggesting a GPU cart, which takes the idea of assisted computing to its logical conclusion, though I felt it was a bit pointless by then.
I then suggested using the CIA serial port to communicate with the SPI port of a proposed ALU chip (which is still in stock). This would be a bit slow, as I think the SP port can only clock out a byte in 16 cycles.
I obviously should have read more of the thread. :)
I think it'll take 32 cycles per byte: the CIA shift register manages at best one bit per four cycles, i.e. 250 kbit/s max IIRC. |
Martin Piper
Registered: Nov 2007 Posts: 722 |
I was thinking a 74LS165 or an MCP23S08 might be able to talk to the uM-FPU 32-bit floating point coprocessor and provide an interface to the C64, either via the user port or the cartridge port. |
Repose
Registered: Oct 2010 Posts: 225 |
Regarding convolutions, I remembered the term I was looking for: there is a fast convolution algorithm credited to Winograd. It works out to a lot of additions, so I think it might be slow. At the end, I think you still have to reduce the overflowed digits, taking each column sum modulo the base and carrying the rest into the next digit. |
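To make that last step concrete, here is the reduction pass sketched in C (names and sizes are just for illustration), assuming the raw product comes out as base-256 column sums that can each exceed 255:

#include <stdint.h>

/* Final "reduce mod base, carry the rest" pass for a product computed
   as a digit convolution in base 256. */
void reduce_carries(uint32_t col[], uint8_t out[], int n)
{
    uint32_t carry = 0;
    for (int i = 0; i < n; i++) {
        uint32_t s = col[i] + carry;   /* column sum plus incoming carry */
        out[i] = (uint8_t)(s & 0xFF);  /* digit = sum mod 256            */
        carry  = s >> 8;               /* rest carries into next column  */
    }
    /* any remaining carry would extend the result; ignored in this sketch */
}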
Repose
Registered: Oct 2010 Posts: 225 |
I've been working hard on my mathlib. I think I've worked out all the use cases and made completely flexible macros that work together in a system, almost like a macro language. These can produce fully optimal code in an easy way.
Covered:
-squaring
-constant multipliers
-re-using multipliers
-distributed multiplication (a*(b,c))
-chained multiplication (a*b*c)
-zp and self-mod versions; and they are the same speed
-use cases for multiple pointers
-cycle timing variables that respond to all variations
-automatic carry flag tracking/setting
-custom table generation for built-in constant add/multiply
-low-precision outputs with optional rounding
To do:
-update test program
-verify the 32x32
-add signed and other versions
-squaring
-polynomials
At least a few more weeks to release something I'm satisfied with :)
I also discovered xc_basic and I'd like to add my library to their source, which currently uses a simple shift/add loop. |
Krill
Registered: Apr 2002 Posts: 2980 |
How many and which of your routines would fit in say, just under 2 kilobytes of RAM? =) |
JackAsser
Registered: Jun 2002 Posts: 2014 |
Quote: How many and which of your routines would fit in say, just under 2 kilobytes of RAM? =)
Hehe probably none due to the square tables.
@repose: regarding signed multiplies, you can just fix that up after the unsigned multiply if you want; not optimal, but rather fast.
; Apply sign (see C=Hacking #16 for details).
; PRODUCT holds the 32-bit unsigned product of the 16-bit operands
; T1 and T2; if an operand is negative (two's complement), subtract
; the other operand from the high word of the product.
        lda T1+1        ; high byte of T1
        bpl :+          ; T1 positive? then no correction
        sec
        lda PRODUCT+2   ; PRODUCT(high word) -= T2
        sbc T2+0
        sta PRODUCT+2
        lda PRODUCT+3
        sbc T2+1
        sta PRODUCT+3
:
        lda T2+1        ; high byte of T2
        bpl :+          ; T2 positive? then no correction
        sec
        lda PRODUCT+2   ; PRODUCT(high word) -= T1
        sbc T1+0
        sta PRODUCT+2
        lda PRODUCT+3
        sbc T1+1
        sta PRODUCT+3
:
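The reason it works: a 16-bit two's complement value represents a_u - 65536 when negative, so

  (a_u - 65536*An)*(b_u - 65536*Bn) = a_u*b_u - 65536*(b_u*An + a_u*Bn) + 2^32*An*Bn

where An and Bn are 1 if the respective operand is negative. The 2^32 term falls outside the 32-bit result, so the only correction needed is subtracting each operand from the top 16 bits of the unsigned product when the other operand is negative, which is exactly what the two branches above do.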
But again, you already knew that no doubt. :) |
Repose
Registered: Oct 2010 Posts: 225 |
Thanks for the tip! I hadn't considered it in detail. However, why use 2's complement? I was thinking of just keeping the sign in a separate byte. When multiplying, the result sign is then just the EOR of the two sign bytes. Adding/subtracting could cause an overflow, which is the only case that needs a fix. |
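Roughly this, sketched in C (the types and names are only illustrative):

#include <stdint.h>

/* Sign-and-magnitude numbers: the sign lives in its own byte, the
   magnitude stays unsigned. */
typedef struct { uint8_t sign; uint16_t mag; } num16;   /* sign: 0 = +, 0xFF = - */
typedef struct { uint8_t sign; uint32_t mag; } num32;

/* Multiply: magnitudes multiplied unsigned, signs simply EORed. */
num32 sm_mul(num16 a, num16 b)
{
    num32 r;
    r.sign = a.sign ^ b.sign;
    r.mag  = (uint32_t)a.mag * b.mag;
    return r;
}

/* Add: this is where the fix-up lives.  Equal signs add magnitudes
   (watch for 16-bit overflow); different signs subtract the smaller
   magnitude from the larger and keep the larger one's sign. */
num16 sm_add(num16 a, num16 b)
{
    num16 r;
    if (a.sign == b.sign) {
        r.sign = a.sign;
        r.mag  = a.mag + b.mag;      /* may overflow 16 bits */
    } else if (a.mag >= b.mag) {
        r.sign = a.sign;
        r.mag  = a.mag - b.mag;
    } else {
        r.sign = b.sign;
        r.mag  = b.mag - a.mag;
    }
    return r;
}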
Repose
Registered: Oct 2010 Posts: 225 |
Quote: How many and which of your routines would fit in say, just under 2 kilobytes of RAM? =)
JackAsser is right; initially I'm focused on pure speed, but I realize that smaller options would be useful. You can use a half-sized table and still be fast; in fact that was the usual form of the squares method before it was fully optimized. Other than that, an unrolled shift-add loop is all I can think of (but I am so tired of seeing those :) |
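(Rough size arithmetic, assuming the usual layout: the tables hold f(x) = x^2/4 for x = 0..510, so a low-byte and a high-byte table come to about 1 KB; the fully optimized version keeps a second pair of tables so the f(a-b) lookup needs no explicit subtraction, which is how you end up at roughly 2 KB of data before any code.)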