Bitbreaker
Registered: Oct 2002 Posts: 508
Release id #139503 : Spindle 2.0
So with the spin mode it was easy to quickly run a speedtest with the files I usually test with (most of the files from cl13 side1).
It turns out that Spindle loads nearly as fast as bitfire with its on-the-fly depacking. While bitfire chews through the tracks a tad faster, it has to take breaks to finalize the depacking: data arrives a bit too fast at first, and blocks pile up waiting to be decrunched. Spindle keeps a continuous flow here thanks to its blockwise packing scheme.
The flip side is that the 18 files only get squeezed down to 491 blocks, whereas bitfire gets them down to 391 blocks. So Spindle leeches an additional 100 blocks in about the time bitfire needs for its extra depacking.
However, under CPU load Spindle's speed drops rapidly: with 25% load it is no faster than Krill's loader, and with 75% load it takes eons to leech the 491 blocks in :-( What's happening there?!
When is the 50x version from Krill done? :-D HCL, what's the penis length of your loader? :-D
Results here.
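To make the pile-up effect concrete, here is a toy model in C. Every number in it is a made-up placeholder, not a measured value from this benchmark; the only point is the mechanism: when blocks arrive faster than they can be decrunched, undecrunched blocks queue up and a depack tail remains after the last block is in, while a blockwise scheme never stalls but pays with extra blocks on disk.

#include <stdio.h>

int main(void) {
    /* Hypothetical per-block times -- placeholders, not measurements. */
    const double load_ms   = 40.0; /* fetch one block from disk      */
    const double depack_ms = 50.0; /* decrunch one block's worth     */

    /* Stream cruncher (bitfire-style): fewer blocks on disk, but
     * depacking is the bottleneck, so a depack tail remains after
     * the last block has been loaded.                               */
    const int stream_blocks = 391;
    double load_time    = stream_blocks * load_ms;
    double depack_time  = stream_blocks * depack_ms;
    double stream_total = depack_time > load_time ? depack_time : load_time;

    /* Blockwise cruncher (Spindle-style): each block is fully
     * decrunched before the next one is needed, so the flow stays
     * continuous -- at the cost of ~100 extra blocks on disk.       */
    const int indep_blocks = 491;
    double indep_total = indep_blocks * load_ms + depack_ms;

    printf("stream:    %.0f ms (depack tail: %.0f ms)\n",
           stream_total, stream_total - load_time);
    printf("blockwise: %.0f ms\n", indep_total);
    return 0;
}

With these placeholder rates, fetching the 100 extra blocks (100 * 40 ms) costs about as much as the stream cruncher's depack tail (391 * 10 ms), which is exactly the trade-off described above.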
chatGPZ
Registered: Dec 2001 Posts: 11386
as if :)
Fungus
Registered: Sep 2002 Posts: 686
And n0sd0s is all stolen from Magnus Lind and everyone else we could steal from, plus my own "shit loader" is largely ripped off from Action Replay. They all handle sprites and IRQ and screen off/on, IFFL, saving, what the fuck ever, and who gives a shit because it does the job. And I love you Danzig, will you marry me.
Danzig
Registered: Jun 2002 Posts: 440
Quote: And n0sd0s is all stolen from Magnus Lind and everyone else we could steal from, plus my own "shit loader" is largely ripped off from Action Replay. They all handle sprites and IRQ and screen off/on, IFFL, saving, what the fuck ever, and who gives a shit because it does the job. And I love you Danzig, will you marry me.
Sure? I've already been banged, look :D Belly-to-belly suplex
Danzig
Registered: Jun 2002 Posts: 440
Quote: as if :)
I remember how Hitmen paid Stan once :D And Bierkeule pays with krabbel-die-wand-nuff (crawl-up-the-wall booze)... Easier to cope with!
Krill
Registered: Apr 2002 Posts: 2980
Quoting lft:
Quoting Krill: Okay, I'm sceptical about those gaps, but never mind. But then, would it be possible to have the compressor handle these gaps? Such that you'd link all the small unpacked files and compress them to one big file in the end, with all the advantages of minimising loader overhead and maximising pack ratio.
Spindle already does that. When you specify multiple files for a loader call, they are packed end-to-end, with only a single half-filled sector at the end.

Doesn't Spindle only pack on a per-block basis? If so, this would mean no difference for the pack ratio and pretty much invalidate my question. Or DOES Spindle compress dictionary-based, so that packing more data for a single load results in a better pack ratio, many files or not? :)
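Whether it is "per-block" matters a lot for the pack ratio. A hedged illustration in C, with zlib standing in for a generic LZ cruncher (Spindle's actual cruncher is different, see the docs quoted below): the same data is compressed once as a single stream and once as independent 256-byte blocks, and the per-block variant resets its dictionary at every block boundary. Build with: cc demo.c -lz

#include <stdio.h>
#include <zlib.h>

#define N 4096

int main(void) {
    unsigned char src[N], out[2 * N];

    /* Test data: a 512-byte pseudo-random chunk, repeated. The
     * repetitions are trivially compressible for a whole-stream
     * cruncher, but invisible to one that packs each 256-byte
     * block independently.                                       */
    unsigned s = 1;
    for (int i = 0; i < 512; i++) {
        s = s * 1103515245u + 12345u;
        src[i] = (unsigned char)(s >> 16);
    }
    for (int i = 512; i < N; i++)
        src[i] = src[i - 512];

    /* One stream over everything: back-references cross file and
     * block boundaries, so more data per load means a better ratio. */
    uLongf len = sizeof out;
    compress2(out, &len, src, N, 9);
    printf("single stream:      %5lu bytes\n", (unsigned long)len);

    /* Independent 256-byte blocks: the dictionary resets each time. */
    unsigned long total = 0;
    for (int off = 0; off < N; off += 256) {
        uLongf blen = sizeof out;
        compress2(out, &blen, src + off, 256, 9);
        total += blen;
    }
    printf("independent blocks: %5lu bytes\n", total);
    return 0;
}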
Burglar
Registered: Dec 2004 Posts: 1101
I'm still hoping Bitbreaker will add the number of blocks used on disk for each loader+packer someday.
But I think I read in lft's docs that he allows referencing already-decrunched data if it was loaded from a previous track (and therefore must already be there). I'm sure he will enlighten us ;)
edit:
Quote: All the data for a particular loading slot is compressed into a set of sectors, such that each sector can be decompressed individually. The cruncher is an optimal-path LZ packer (based on dynamic programming) that stops as soon as the crunched data fills a disk block. Every sector contains a number of independent units, each comprising a destination address, the number of "pieces" of crunched data, a bit stream and a byte stream. Because the crunched data fits in a sector, the indices into these streams are 8-bit quantities, which speeds up the decrunching. Immediately after a track boundary, blocks may also refer to data that was loaded earlier.
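The quote pins down the properties but not the byte layout, so here is a hedged C model of decrunching such a unit. Every field, offset and the piece encoding below are invented for illustration (the real format is defined by Spindle's source); what the model does show is the property the quote stresses: with at most 256 crunched bytes per sector, the stream cursors fit in 8 bits, and match copies may reach into RAM that was filled earlier.

#include <stdint.h>

/* One crunched sector, split into a bit stream and a byte stream as
 * the docs describe. The concrete split is an assumption.           */
typedef struct {
    const uint8_t *bits;      /* flag bits: literal vs. match (assumed)    */
    const uint8_t *bytes;     /* literals, match lengths, offsets (assumed)*/
    uint8_t bitpos, bytepos;  /* 8-bit indices -- the point of the quote   */
    uint8_t bitbuf, bitcnt;
} sector_stream;

static uint8_t get_bit(sector_stream *s) {
    if (s->bitcnt == 0) {
        s->bitbuf = s->bits[s->bitpos++];  /* 8-bit index, no carry math */
        s->bitcnt = 8;
    }
    uint8_t b = s->bitbuf >> 7;
    s->bitbuf <<= 1;
    s->bitcnt--;
    return b;
}

/* Decrunch one independent unit: it carries its own destination
 * address and piece count, so it never depends on the rest of the
 * sector having been processed.                                     */
static void decrunch_unit(sector_stream *s, uint8_t *ram,
                          uint16_t dest, uint8_t npieces) {
    while (npieces--) {
        if (get_bit(s)) {                    /* assumed: 1 = literal  */
            ram[dest++] = s->bytes[s->bytepos++];
        } else {                             /* assumed: 0 = LZ match */
            uint8_t  len = s->bytes[s->bytepos++];
            uint16_t off = (uint16_t)(s->bytes[s->bytepos++] + 1);
            /* Copies may reach below this sector's own output: after
             * a track boundary, this is how earlier-loaded bytes get
             * reused, per the last sentence of the quote.            */
            while (len--) {
                ram[dest] = ram[(uint16_t)(dest - off)];
                dest++;
            }
        }
    }
}

int main(void) {
    static uint8_t ram[65536];
    /* A hand-made unit: 3 literals "abc", then a match copying them. */
    const uint8_t bits[]  = { 0xE0 };             /* 1,1,1,0           */
    const uint8_t bytes[] = { 'a', 'b', 'c', 3, 2 }; /* len=3, off=2+1 */
    sector_stream s = { bits, bytes, 0, 0, 0, 0 };
    decrunch_unit(&s, ram, 0x1000, 4);
    /* ram[$1000..$1005] now holds "abcabc". */
    return 0;
}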
Bitbreaker
Registered: Oct 2002 Posts: 508
Quoting Burglar: I'm still hoping Bitbreaker will add the number of blocks used on disk for each loader+packer someday.
Not necessary, as I used bitnax in the Krill and bitfire examples, and bb2 packs at the same rate :-D However, the loaders using standard files have only 254 bytes of payload per block, so they might end up using a few more blocks. I could also just supply .d64 files of each test. The only one that differs is Spindle; the extra blocks have been named somewhere up in this thread.
The hint from Krill sounds interesting: when I split files because they can't be loaded under I/O, I could of course reference data that was loaded and depacked earlier below $d000. I'll see what I can squeeze out there.
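For anyone wanting to play with that idea on the PC side, here is a hedged sketch using zlib's preset-dictionary call as a stand-in (bitnax is a different cruncher, and this is only the concept, not its interface): the second part of a split file is crunched with the first part as dictionary, so matches can reach into data that is already depacked in RAM below $d000. Build with: cc demo.c -lz

#include <stdio.h>
#include <string.h>
#include <zlib.h>

static unsigned long deflate_len(const unsigned char *dict, unsigned dictlen,
                                 const unsigned char *src, unsigned srclen) {
    static unsigned char out[8192];
    z_stream z;
    memset(&z, 0, sizeof z);
    deflateInit(&z, 9);
    if (dict)  /* "part 1 is already sitting in RAM" */
        deflateSetDictionary(&z, dict, dictlen);
    z.next_in   = (unsigned char *)src;
    z.avail_in  = srclen;
    z.next_out  = out;
    z.avail_out = sizeof out;
    deflate(&z, Z_FINISH);
    unsigned long n = z.total_out;
    deflateEnd(&z);
    return n;
}

int main(void) {
    static unsigned char part1[2048], part2[2048];
    /* Fake payload where part 2 repeats material from part 1 (think
     * shared tables or graphics) -- identical here, as the best case. */
    unsigned s = 7;
    for (int i = 0; i < 2048; i++) {
        s = s * 1103515245u + 12345u;
        part1[i] = (unsigned char)(s >> 16);
        part2[i] = part1[i];
    }
    printf("part2 crunched alone:           %4lu bytes\n",
           deflate_len(NULL, 0, part2, 2048));
    printf("part2 crunched with part1 dict: %4lu bytes\n",
           deflate_len(part1, 2048, part2, 2048));
    return 0;
}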
Burglar
Registered: Dec 2004 Posts: 1101
Err, yes ;) So ~391 blocks for all except Spindle, which is 25% bigger at 491 blocks. Quite a large amount to be the fastest :)
Bitbreaker
Registered: Oct 2002 Posts: 508
It's 457 blocks being used in the Spindle example (2.1, that is, which packs a bit better now). Loading blocks faster and therefore loading more of them isn't what brings the gain (compare the loadraw values where both packed and raw data were available); it is the depacker (nothing piles up that has to be finished after loading, as in my case, where loading then stalls for a moment) and having the blocks at hand faster due to preloading. That would be my guess.
Burglar
Registered: Dec 2004 Posts: 1101
That's quite a big improvement! A "Good work!" for lft :P
In terms of packed size, it'd be interesting (for me lol) if you added Krill + Exomizer to the benchmark ;)