Bitbreaker
Registered: Oct 2002 Posts: 508
Release id #117852 : Doynax LZ
Has anyone else discovered further misbehaviour beyond what I described in the goofs? If so, I'd add a bunch of features and release a fixed and improved version.
Bitbreaker
Registered: Oct 2002 Posts: 508
So far it works okay when the depack address % 256 == 0. It should handle other address low bytes as well, but the compressed files will then differ, as the cruncher makes sure the type bit is read again at every (output) page crossing.
Krill
Registered: Apr 2002 Posts: 2980
So you confirm that it is not a bug but the result of an optimization.
As compressed or literal runs must not cross a page boundary in the output buffer (if I understood you correctly), the same data will compress differently depending on its offset relative to the page boundaries.
With all 256 possibilities for a given data file, the maximum difference in pack ratio shouldn't be that large, should it? Just wondering.. :)
Bitbreaker
Registered: Oct 2002 Posts: 508
Well, this page-wrapping thingy is the result of the "feature" of always being able to render exactly one new page of output per call of lz_decrunch. As there is just a single entry point, a type bit has to be fetched there. If you change the depacker to depack everything in a single call, the problem can be solved easily. It can even be solved while keeping the feature (though making the depacker even bigger) by remembering whether we exited from a literal run or a match.
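The "remember whether we exited from a literal run or a match" idea can be sketched in C: a resumable decruncher that emits at most one 256-byte page per call and keeps the pending-run state in a context struct between calls. The stream format here is a made-up toy (control byte plus run), not Doynax LZ's actual bit stream, and all names are hypothetical.

```c
#include <stdint.h>

/* Toy format (NOT doynax's actual bitstream): a control byte with the
   high bit set means "copy (ctrl & 0x7f) bytes from the offset given by
   the next byte"; otherwise (ctrl) literal bytes follow. 0 terminates. */
typedef struct {
    const uint8_t *src;   /* current position in the compressed stream  */
    uint8_t *dst;         /* current position in the output buffer      */
    int lit_left;         /* literal bytes still pending from last call */
    int match_left;       /* match bytes still pending from last call   */
    int match_off;        /* backward offset of the pending match       */
    int done;
} stream_t;

/* Decompress at most one 256-byte page per call, resuming any run that
   was interrupted at the previous page boundary. Returns bytes emitted. */
static int decrunch_page(stream_t *s)
{
    int emitted = 0;
    while (!s->done && emitted < 256) {
        if (s->lit_left == 0 && s->match_left == 0) {
            uint8_t ctrl = *s->src++;       /* fetch a fresh type byte */
            if (ctrl == 0) { s->done = 1; break; }
            if (ctrl & 0x80) { s->match_left = ctrl & 0x7f; s->match_off = *s->src++; }
            else             { s->lit_left = ctrl; }
        }
        while (s->lit_left > 0 && emitted < 256) {
            *s->dst++ = *s->src++; s->lit_left--; emitted++;
        }
        while (s->match_left > 0 && emitted < 256) {
            *s->dst = s->dst[-s->match_off]; s->dst++; s->match_left--; emitted++;
        }
    }
    return emitted;
}
```

Because the pending run length survives in the struct, a run interrupted by the page limit simply continues on the next call, without re-reading a type bit at the boundary.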
HCL
Registered: Feb 2003 Posts: 728
Personally I would not see this as a feature :P. Probably that's because of my lack of intelligence.
Bitbreaker
Registered: Oct 2002 Posts: 508
Well, with this feature you could depack partially until a certain barrier, which you release once the memory is free to depack into. Also, I think this depacker was used in a game doynax did, so it has a different focus on depacking than when used in a demo?
I just removed this feature quickly to prove my assumptions. Result: same file size, and I can now depack to any address \o/
Krill
Registered: Apr 2002 Posts: 2980
It's certainly a feature for a streaming decompressor to decompress only a chunk of data with each call. You can easily implement ring buffers, decompression on demand, etc. on top of it.
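The ring-buffer side of that can be sketched in C (the buffer size, the page granularity and all names are assumptions for illustration, not anything from the actual loader): the producer deposits decrunched pages, the consumer pulls single bytes at leisure, and power-of-two index masking makes the wrap-around free.

```c
#include <stdint.h>

/* Minimal ring buffer: producer deposits fixed-size pages, consumer
   pulls single bytes. Indices grow monotonically and are masked on use,
   so head - tail is always the number of bytes in flight. */
#define RING_SIZE 1024            /* must be a power of two */
#define RING_MASK (RING_SIZE - 1)

typedef struct {
    uint8_t buf[RING_SIZE];
    unsigned head;                /* total bytes produced */
    unsigned tail;                /* total bytes consumed */
} ring_t;

static unsigned ring_free(const ring_t *r)  { return RING_SIZE - (r->head - r->tail); }
static unsigned ring_avail(const ring_t *r) { return r->head - r->tail; }

/* Producer side: copy one page of (already decrunched) data in. */
static int ring_put_page(ring_t *r, const uint8_t *page, unsigned len)
{
    if (ring_free(r) < len) return 0;         /* not enough room yet */
    for (unsigned i = 0; i < len; i++)
        r->buf[(r->head + i) & RING_MASK] = page[i];
    r->head += len;
    return 1;
}

/* Consumer side: pull a single byte; returns 0 if the buffer is empty. */
static int ring_get(ring_t *r, uint8_t *out)
{
    if (ring_avail(r) == 0) return 0;
    *out = r->buf[r->tail++ & RING_MASK];
    return 1;
}
```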
Oswald
Registered: Apr 2002 Posts: 5094
Could someone write a little summary on what this can do above/below ByteBoozer, LZWVl, and the likes, please?
doynax Account closed
Registered: Oct 2004 Posts: 212
I've been away for the last week, putting out fires and putting in 15-hour working days, so I only just noticed this thread. Well, that and I'm useless at keeping up with private e-mail.
Anyway, I'm quite impressed with Bitbreaker's work so far in finding and fixing the bugs in the packer. I think it should be painfully obvious by now that I've only tested it on my own data (and the Calgary corpus, IIRC).
Unfortunately I can't see any way of handling the 256-byte destination address wrapping without wasting two cycles/bytes, e.g. decreasing the length by one before storing it and raising carry before the ADC.

Quoting Bitbreaker: "The source is there and open? So why not develop it further, even more as it is rendered unusable on certain files this way? Doynax did not respond to Axis's mail so far, so I was just fixing it on my own in the meantime. It is not that we coders couldn't help ourselves :-)"

Go ahead, it's a public domain utility. There is nothing quite as pointless as a development tool which you can't understand and fix/modify to your liking.
Still, I'd prefer to maintain and publish the thing in an "official" release if possible. The world is littered with enough branched versions of various utilities as it is, so I'd prefer to stick to a single main line as long as is feasible.

Quoting Bitbreaker: "Well, you could depack partially by this feature until a certain barrier that you release when the memory is free to depack to. Also I think this depacker was used in a game doynax did, so it has a different focus on depacking than when using it with a demo?"

Yeah, that's the general idea. It was created for a shoot'em'up which was supposed to play as a single continuous level (think SWIV on the Amiga). The loader is responsible for continually streaming fresh data (e.g. tilemaps, attack sequences, charsets, sprites, music) into a circular buffer during spare cycles, which the game consumes at leisure.
Normally the cruncher looks at the target address if you feed it a *.prg file and handles this for you, and I assumed that no one would need to decrunch to multiple target addresses. Still, for a wider release an #ifdef to get a more traditional interface, plus a separate streaming mode in the cruncher, sounds like an excellent idea.
doynax Account closed
Registered: Oct 2004 Posts: 212
Quoting Oswald: "someone could write a little summary on what this can do above/below Byteboozer, LZWVl, and the likes please?"

What it can do? Streaming decrunching, basically. What it can't do is native crunching.
Beyond that it's just another possible trade-off between compression ratio, decrunching performance, and decruncher size. The actual encoding is virtually the same as ByteBoozer, which it was based on, but with the bits rearranged somewhat for faster decoding, plus better parsing and tweaked match offsets to try to squeeze a few extra bytes out of it.
enthusi
Registered: May 2004 Posts: 677
A bit unrelated, but are there any depackers with a minimum of RAM usage?
I.e. stream-decrunching byte-by-byte without large tables in RAM or the zeropage.
Exomizer, for instance, has that nice get_decrunched_byte but uses a 156-byte table (= large in my context *g*).
Any ideas?
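For what it's worth, a pull-style interface with only a few bytes of state can be sketched in C. Note that a real LZ depacker still needs to reach back into already-emitted output for its matches, so a truly table- and buffer-free byte puller is limited to schemes like RLE. The format and names below are hypothetical, not exomizer's.

```c
#include <stdint.h>

/* Toy RLE stream pulled one byte at a time, in the style of exomizer's
   get_decrunched_byte, with a few bytes of state and no tables.
   Control byte: high bit set = repeat the next byte (ctrl & 0x7f)
   times, otherwise (ctrl) literal bytes follow; 0 terminates.
   (Hypothetical format, not exomizer's encoding.) */
typedef struct {
    const uint8_t *src;
    int run_left;      /* remaining copies of run_byte  */
    uint8_t run_byte;
    int lit_left;      /* remaining literal bytes       */
} puller_t;

/* Return the next decrunched byte, or -1 at end of stream. */
static int get_decrunched_byte(puller_t *p)
{
    while (p->run_left == 0 && p->lit_left == 0) {
        uint8_t ctrl = *p->src++;
        if (ctrl == 0) return -1;
        if (ctrl & 0x80) { p->run_left = ctrl & 0x7f; p->run_byte = *p->src++; }
        else             { p->lit_left = ctrl; }
    }
    if (p->run_left)  { p->run_left--;  return p->run_byte; }
    p->lit_left--;
    return *p->src++;
}
```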