CSDb User Forums
Forums > CSDb Entries > Release id #205568 : Spindle 3.0
2021-07-05 11:31
Bitbreaker

Registered: Oct 2002
Posts: 498
Release id #205568 : Spindle 3.0

Let's continue this here :-)
So the kernal + basic fits 10 times on a disk in your case, and the FLI 12 times. I just tried that: the kernal + basic fits 12 times on disk with still quite a few blocks left, so that's more than 105 extra blocks on disk. The FLI fits 15 times on a disk with some blocks still free, so that's at least 126 extra blocks. Speed was 6.11 KB/s for the kernal and 8.05 KB/s for the FLI at 100% CPU.
Seeing that you cache a second sector, are you making use of a 512-byte window now, or still a 256-byte window per block?
2021-07-05 11:42
Krill

Registered: Apr 2002
Posts: 2804
And I am somewhat surprised that shifting the checksumming from the GCR read+decode loop to an extra loop after reading apparently does not slow things down.

But what was the reason for that change? Smaller decoding tables and a bigger block cache?
2021-07-06 09:44
lft

Registered: Jul 2007
Posts: 369
Previously I was checksumming on the C64 side, during the transfer. But the serial protocol was changed to avoid sending a full padded sector at the end of each job. However, with variable payloads, the system can get stuck if there's a transmission error in the size prefix, and it wasn't feasible to checksum the size first (in a separate transmission) before transferring the corresponding data, especially not within one page of resident code. So I moved the checksumming to the drive.

Most of the time, under normal trackmo conditions, the drive is waiting for the host to decrunch (while the host also spends raster time on other stuff). Copying the newly read sector from the stack to a separate buffer takes time, so it's not possible to read the immediately following sector, which means the drive has nothing to do until the second-next sector appears. That's why computing the checksum on the drive doesn't affect things all that much.
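
To make the hazard concrete, here is a toy Python model (names and framing invented, not Spindle's actual protocol): a receiver that must act on a variable-length size prefix before anything can be verified is wedged by a single corrupted prefix byte, while drive-side checksumming validates the sector, size included, before the prefix is ever sent.

from collections import deque

def send_job(wire, payload, corrupt_size=False):
    # A variable-size job: one size byte, then the payload.
    size = len(payload) ^ (0x40 if corrupt_size else 0)  # maybe flip a prefix bit
    wire.append(size)
    wire.extend(payload)

def host_receive(wire):
    size = wire.popleft()              # the host must trust this byte up front
    if size > len(wire):
        raise RuntimeError("stuck: expecting %d bytes, only %d on the wire"
                           % (size, len(wire)))
    return bytes(wire.popleft() for _ in range(size))

wire = deque()
send_job(wire, b"payload")
print(host_receive(wire))              # clean prefix: fine
send_job(wire, b"payload", corrupt_size=True)
host_receive(wire)                     # corrupted prefix: the stream is desynchronised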
2021-07-06 09:57
lft

Registered: Jul 2007
Posts: 369
The decruncher does something new: The complete file is compressed in one go, with references up to 1 KB away. The compressed data is then split into blocks that are transferred independently, in any order. When a block is received, the compressed stream is decoded and literal data items are stored at their target locations.

Copy items can't be carried out yet, because they might refer to data in a block that hasn't been received, so *a representation of the copy item is stored in the gap*. These representations form a linked list.

When a part of this chain is known to be complete (typically at track boundaries), it is traversed and the delayed copy operations are performed. Meanwhile, new blocks are loaded and the next chain starts to build up.

Obviously, there are many details to get right, but it can be done in quite a small footprint, and it's fast.
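
A rough Python sketch of the scheme, with an invented 5-byte record layout (Spindle's real format and resolution order are not spelled out in the thread): literals go straight to their target addresses, while each copy item parks a record in the very gap it will later fill, chained to the previous one.

ram = bytearray(65536)
chain_head = 0          # address of the most recent deferred-copy record (0 = none)

def decode_block(dest, items):
    # items: ('lit', bytes) or ('copy', distance, length), in stream order.
    global chain_head
    addr = dest
    for item in items:
        if item[0] == 'lit':
            data = item[1]
            ram[addr:addr + len(data)] = data      # literal lands at its target
            addr += len(data)
        else:
            _, distance, length = item
            # Park a record in the gap the copy will fill: link (2 bytes),
            # distance (2), length (1). The 5-byte record implies a minimum
            # length for deferrable copies.
            record = chain_head.to_bytes(2, 'little') \
                   + distance.to_bytes(2, 'little') + bytes([length])
            ram[addr:addr + 5] = record
            chain_head = addr
            addr += length

def resolve_chain():
    # Walk the list, then perform the copies in ascending target order so a
    # copy whose source overlaps an earlier gap finds it already resolved
    # (one plausible ordering; the thread doesn't spell this detail out).
    global chain_head
    pending = []
    addr = chain_head
    while addr:
        link = int.from_bytes(ram[addr:addr + 2], 'little')
        distance = int.from_bytes(ram[addr + 2:addr + 4], 'little')
        pending.append((addr, distance, ram[addr + 4]))
        addr = link
    for target, distance, length in sorted(pending):
        for i in range(length):        # byte-wise, so overlapping copies work
            ram[target + i] = ram[target - distance + i]
    chain_head = 0

decode_block(0x2000, [('lit', b'abcdef'), ('copy', 6, 6)])
resolve_chain()
assert ram[0x2000:0x200c] == b'abcdefabcdef'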
2021-07-06 10:02
Krill

Registered: Apr 2002
Posts: 2804
Quoting lft
Most of the time, under normal trackmo conditions, the drive is waiting for the host to decrunch (while the host also spends raster time on other stuff).
Unexpected assertion. =) Are you implying that any pending transfers are stalled until a block is fully decrunched? I.e., that block decrunching isn't interruptible by new blocks rolling in?

Quoting lft
That's why computing the checksum on the drive doesn't affect things all that much.
My question was why checksumming isn't done on the fly in the read+decode loop, but in a separate loop afterwards.
But then I realised there never was a read+decode+checksum loop with Spindle. I still wonder about the speed impact if there were. =)
2021-07-06 10:21
lft

Registered: Jul 2007
Posts: 369
Quoting Krill
I still wonder about the speed impact if there were.


I tried disabling the checksumming entirely, and saw only a very minor speedup. It wasn't enough of a gain to motivate a complete rewrite of the decoder, although it might come to that in the future.
2021-07-06 10:34
lft

Registered: Jul 2007
Posts: 369
Quoting Krill
Are you implying that any pending transfers are stalled until a block is fully decrunched? I.e., that block decrunching isn't interruptible by new blocks rolling in?


My decrunching happens in two stages: The literal items are handled first, and this part has to finish before the buffer page can be reused. The copy items are handled in a second stage that can be interrupted.

But that is beside the point. In the end, the host has more work to do than the drive. If you transfer as much as possible as early as possible, you still end up with a big slab of crunched data to work through at the end. Moving things around in time doesn't affect the total duration, *except* if you insert dead time by making the host (who has the most to do) wait for the drive. That is why a large drive-side buffer is important, along with prefetching, so the drive can supply a lot of data right at the beginning.
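
A back-of-the-envelope model of that argument, with invented numbers: shuffling the schedule never beats the host's total workload, so the only thing to optimise away is dead time at the start.

host_work, drive_work = 100, 60     # frames of total work on each side (made up)
startup_wait = 5                    # host idles until the first sector is ready

total_without_prefetch = startup_wait + host_work   # dead time adds up front
total_with_prefetch = max(host_work, drive_work)    # buffered drive: host never waits
print(total_without_prefetch, total_with_prefetch)  # 105 vs 100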
2021-07-06 10:50
Sparta

Registered: Feb 2017
Posts: 35
@Lft, very cool decompression algorithm! I really like how you squeezed all eight GCR decoding tables into one page. :)

As for your graphs, I do have an observation. I re-created the 100% CPU tests with Spindle 3 and Sparkle 2, and while my results with Spindle 3 matched yours, with Sparkle 2 I got $378 frames (7380 B/s) for the Basic+Kernal test and $32b frames (8491 B/s) for the FLI test. I wonder what the cause of the difference is. Did you use LoadNext calls in your tests with Sparkle 2?

I am happy to share my test d64s.
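
For reference, these figures are self-consistent under PAL timing (50 frames per second). Working backwards from the quoted Basic+Kernal run (the roughly 128 KiB payload is an inference from the numbers, not stated in the thread):

frames = 0x378                  # 888 frames
seconds = frames / 50           # 17.76 s at PAL's 50 frames/s
payload = 7380 * seconds        # ~131069 bytes, i.e. roughly 128 KiB transferred
print(frames, seconds, round(payload))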
2021-07-06 10:57
Krill

Registered: Apr 2002
Posts: 2804
Quoting lft
If you transfer as much as possible as early as possible, you still end up with a big slab of crunched data to work through at the end. Moving things around in time doesn't affect the total duration
Hmm, that sounds sound for block-based decrunching or your new approach, but I'm not so sure about the traditional bytestream-based crunchers.

Tbh, I'm not quite sure which of those is ultimately better in a C-64 IRQ-loading context. Schemes not working on bytestreams come with a worse pack ratio pretty much by definition, due to back references being limited to something less than the entire preceding unpacked file. Then again, they may make up for that with more speed, thanks to the reduced penalty for missing blocks in an out-of-order regime.

Alas, let's all implement both benchmarks (yours and Bitfire's/Sparkle's), then ponder the results on current versions. =)
(I'll need a few more weeks to finish mine.)
2021-07-06 11:21
lft

Registered: Jul 2007
Posts: 369
Quoting Sparta
I re-created the 100% CPU tests with Spindle 3 and Sparkle 2, and while my results with Spindle 3 matched yours, with Sparkle 2 I got $378 frames (7380 B/s) for the Basic+Kernal test and $32b frames (8491 B/s) for the FLI test. I wonder what the cause of the difference is. Did you use LoadNext calls in your tests with Sparkle 2?


Hmm, that's quite a big difference. Yes, I did use LoadNext. Let's compare notes in PM.
2021-07-08 06:08
ChristopherJam

Registered: Aug 2004
Posts: 1359
Quoting lft
Copy items can't be carried out yet, because they might refer to data in a block that hasn't been received, so *a representation of the copy item is stored in the gap*. These representations form a linked list.


Oh, that's rather clever :)

It does put a lower bound on the length of a copy item, but it's quite rare to copy fewer than three or four bytes from distant locations anyway, so I can't see there being much of a compression-ratio penalty. I wonder if it would be worth having two kinds of copy items: one intra-block, and one deferred.
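
A sketch of that split on the encoder side (hypothetical encoding, reusing the assumed 5-byte deferred record from earlier): an intra-block copy can run immediately at any length, so only cross-block copies need to be long enough to hold their record in the gap.

RECORD_SIZE = 5     # bytes needed to park a deferred copy in its gap (assumed)

def classify_copy(target, distance, length, block_start):
    # Encoder-side choice between an immediate (intra-block) copy and a
    # deferred one; too-short cross-block copies fall back to literals.
    if target - distance >= block_start:
        return 'intra'                  # source already present: run now
    if length >= RECORD_SIZE:
        return 'deferred'               # gap can hold the record
    return 'literals'                   # too short to defer: emit the bytes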
2021-07-09 00:27
Krill

Registered: Apr 2002
Posts: 2804
Quoting lft
Hmm, that's quite a big difference. Yes, I did use LoadNext. Let's compare notes in PM.
Would it be possible to publish the benchmark's disk images and source?

I'm adding it to my loader's example folder (as I did with the Bitfire/Sparkle benchmark described in Loader Benchmarks) but would like to avoid reinventing the wheel and possibly skewing/biasing results. =)