Bitbreaker
Registered: Oct 2002 Posts: 508
Doynamite 1.x
Hi Folx,
after Doynamite was used in some recent productions and people often stumbled over the .prg/.bin pitfall, I decided to make some improvements to the packer. It can now spit out an sfx, or level-packed data including a valid load address and depack address, as well as forward literals to keep the safety margin low. Raw data can still be loaded and output without any bytes added. Also, the optimal bit lengths can now be iterated and the optimal table glued to the output file.
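To sketch what iterating the bit lengths means, here is a toy cost model in Python (illustrative only: the four-bucket scheme, the 2-bit selector and all names are my assumptions, not Doynamite's actual token format). The idea is simply to try every candidate width table against the match offsets, keep the cheapest, and glue that table to the packed file:

import itertools

def offset_cost(offset, table):
    # pick the narrowest bucket that can still hold the offset
    for width in table:
        if offset < (1 << width):
            return 2 + width        # assumed 2-bit bucket selector + offset bits
    return None                     # this table cannot encode the offset

def best_table(offsets, max_width=16):
    # exhaustively iterate all ascending 4-entry width tables
    best = None
    for table in itertools.combinations(range(1, max_width + 1), 4):
        costs = [offset_cost(o, table) for o in offsets]
        if None in costs:
            continue
        total = sum(costs)
        if best is None or total < best[0]:
            best = (total, table)
    return best                     # (total bits, winning table)

# the winning table is what would get glued to the output file
print(best_table([3, 17, 200, 42, 1000, 7]))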
I also happened to make a leaner version that lets the files get slightly bigger, but shrinks the depacker to $e0 bytes and makes depacking 5-10% faster. This might be of good use for demo systems where size matters a lot.
Any more things one could wish for?
ChristopherJam
Registered: Aug 2004 Posts: 1409
Not quite as good as WVL-S! Here are the results with tinycrunch added:
filesizes
# bin rle wvl-f wvl-s tc bb pu doyna
- ----- ----- ----- ----- ----- ----- ----- -----
1 11008 8020 4529 4151 4329 3383 3410 3265
2 4973 4314 3532 3309 3423 2648 2687 2512
3 3949 3498 2991 2617 2972 2187 2226 2108
4 7016 6456 4242 4085 4225 3681 3595 3617
5 34760 27647 25781 24895 25210 21306 20887 20405
6 31605 12511 11283 10923 11614 9194 8877 8904
7 20392 17295 12108 11285 11445 9627 9460 9289
8 5713 5407 4179 3916 3936 3251 3314 3132
9 8960 7986 6914 6896 6572 5586 5651 5430
filesize in %
# bin rle wvl-f wvl-s tc bb pu doyna
- ----- ----- ----- ----- ----- ----- ----- -----
1 100% 73% 41% 38% 39% 31% 31% 30%
2 100% 87% 71% 67% 69% 53% 54% 51%
3 100% 89% 76% 66% 75% 55% 56% 53%
4 100% 92% 60% 58% 60% 52% 51% 52%
5 100% 80% 74% 72% 73% 61% 60% 59%
6 100% 40% 36% 35% 37% 29% 28% 28%
7 100% 85% 59% 55% 56% 47% 46% 46%
8 100% 95% 73% 69% 69% 57% 58% 55%
9 100% 89% 77% 77% 73% 62% 63% 61%
number of frames to depack
# wvl-f wvl-s tc bb pu doyna
- ----- ----- ----- ----- ----- -----
1 11 13 14 15 58 27
2 5 7 7 9 38 17
3 4 6 6 7 28 12
4 8 9 9 10 43 20
5 36 39 42 59 300 119
6 20 25 25 37 126 49
7 22 25 26 32 138 60
8 6 8 8 10 43 18
9 9 12 12 16 73 32
As you can see, my sizes are always bracketed by WVL-F and WVL-S, and my decompression speed is two thirds of yours for 6.bin.
(Crunching the entire corpus took 7 seconds on a single core of a 3GHz i7. It's a fairly slack Python script; I put all recent substrings in a dict. The parameters are tuned for the lower-entropy components of JamBall2, so I might be able to improve the ratio if I play with it a bit.)
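A minimal sketch of that dict-based match finder (greedy; MIN_MATCH, MAX_OFFSET and the token format here are assumptions for illustration, not tinycrunch's actual stream layout):

MIN_MATCH = 3
MAX_OFFSET = 256                    # assumed short-offset window

def crunch(data: bytes):
    recent = {}                     # MIN_MATCH-byte substring -> latest position
    tokens, i = [], 0
    while i < len(data):
        key = data[i:i + MIN_MATCH]
        pos = recent.get(key) if len(key) == MIN_MATCH else None
        if pos is not None and i - pos <= MAX_OFFSET:
            # MIN_MATCH bytes already verified equal; extend as far as possible
            length = MIN_MATCH
            while i + length < len(data) and data[pos + length] == data[i + length]:
                length += 1
            tokens.append(('match', i - pos, length))
            step = length
        else:
            tokens.append(('literal', data[i]))
            step = 1
        # remember every substring we pass so later matches can find it
        for j in range(i, i + step):
            if j + MIN_MATCH <= len(data):
                recent[data[j:j + MIN_MATCH]] = j
        i += step
    return tokens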
WVL
Registered: Mar 2002 Posts: 902
Score \o/
ChristopherJam
Registered: Aug 2004 Posts: 1409
Yes, well done!
All I can say in my defence is that my decoder's even smaller than yours ;)
Bitbreaker
Registered: Oct 2002 Posts: 508
A bad ratio still spoils the fun, no matter how fast and tiny the depackers get. Here are some benchmark results for loading + depacking the first side of CL13 (well, most of it, minus the two files that depack under the I/O area):
bb hclfix $0ac4
lzwvl $0a08
doynax $08a9
doynax_small $08e8
doynax_small loaddecomp $0749
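Those $-values are hex; assuming they are frame counts like in the tables above, converting to decimal and relating each run to the fastest one is plain arithmetic:

results = {
    'bb hclfix': 0x0ac4,
    'lzwvl': 0x0a08,
    'doynax': 0x08a9,
    'doynax_small': 0x08e8,
    'doynax_small loaddecomp': 0x0749,
}
fastest = min(results.values())
for name, frames in results.items():
    print(f'{name:25s} {frames:5d} frames  {frames / fastest:.2f}x')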
So the additional loading overhead kills all the speed advantage. Loading and decompressing in one go gives the best results, but bloats the code a lot. The Spindle system suffers from the same problem: a bad ratio, due to having references within one block only. It still feels fast though, and I get test files loaded at around the same speed as with loaddecomp (is there a frame counter available in Spindle to prove the feeling?)
HCL
Registered: Feb 2003 Posts: 728
Thanx, now I will not waste my time ;). *This* will not keep me from winning any compo in the future, though perhaps other things will :P.
enthusi
Registered: May 2004 Posts: 677
I get the feeling that it really doesn't matter much :)
Rather something for crackers and one-filers or one-siders, maybe? :)
ChristopherJam
Registered: Aug 2004 Posts: 1409
Well, the main reason I was optimising for decoder size with tinycrunch was that it was all I had room for once the demo was decoded. It's admittedly a fairly special case; it's not often I'm scraping for every last fraction of a page.
Even the music data was interleaved into unused fragments of the character definitions (only 5 bytes of every 8 were visible).
Agreed that total time for load+decrunch is usually more significant, except in cases where you can background-load into some free space but then need to decrunch quickly between ending one part and starting the next.