Krill
Registered: Apr 2002 Posts: 2980
LZW compression on C64
Hello,
i'm looking for a tool that LZW-compresses data and comes with C64 decompression code. I guess nothing like that exists yet, but i'm asking anyway.
LZW works with a dictionary that is used to look up uncompressed chunks of data, which are inserted in place of their respective compressed codes in the output data, unlike LZ77, which works with back-references to already-decompressed data, as is commonly done on the C64.
This sort of compression is usable for streaming compressed data through the C64, and hasn't been used up to its potential on the C64 yet, afaik.
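For illustration, a minimal LZW decoder sketch in C (hypothetical, not taken from any existing tool: fixed 12-bit codes, no dictionary reset, and get_code()/END_CODE are assumed placeholders for the input stream):

#include <stdint.h>
#include <stdio.h>

#define MAX_CODES 4096                /* 12-bit code space */
#define END_CODE  0xFFFF              /* assumed end-of-stream marker */

static uint16_t prefix[MAX_CODES];    /* dictionary entry = earlier code... */
static uint8_t  suffix[MAX_CODES];    /* ...plus one appended byte */

extern uint16_t get_code(void);       /* assumed: returns the next compressed code */

/* Output the string behind 'code' by unrolling the prefix chain;
   returns the string's first byte (needed for new dictionary entries). */
static uint8_t emit(uint16_t code, uint8_t *stack)
{
    int depth = 0;
    while (code >= 256) {             /* walk back to the root literal */
        stack[depth++] = suffix[code];
        code = prefix[code];
    }
    stack[depth++] = (uint8_t)code;
    while (depth--) putchar(stack[depth]);  /* reversed = original order */
    return (uint8_t)code;
}

void lzw_decode(void)
{
    uint8_t  stack[MAX_CODES];
    uint16_t next = 256;              /* first free dictionary slot */
    uint16_t old  = get_code();
    uint8_t  first = emit(old, stack);

    for (uint16_t code = get_code(); code != END_CODE; code = get_code()) {
        /* code >= next is the classic KwKwK case: the code being defined now */
        first = emit(code < next ? code : old, stack);
        if (code >= next) putchar(first);
        if (next < MAX_CODES) { prefix[next] = old; suffix[next] = first; next++; }
        old = code;
    }
}

Note how the dictionary is rebuilt on the fly from the decoded output itself, so no window over previous plaintext is required - which is what makes it look attractive for streaming at first glance.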
TNT Account closed
Registered: Oct 2004 Posts: 189
The first thing I think of when I hear LZW is GIF, so check out C64/Apple GIF viewers for decompression source. Finding a compatible PC compression routine shouldn't be too hard :)
Searching for a 6502 LZW implementation is made harder by all those pages mixing up LZW and LZ77 variants.
doynax Account closed
Registered: Oct 2004 Posts: 212
If you've got sufficient space for a dictionary then why not use it directly as a traditional LZ decompression window? That way you also get a nice streaming buffer to cover fluctuating read/consumption rates.
At any rate, I wrote a streaming LZ compressor for my game, essentially an optimized version of ByteBoozer, if you're interested.
Not to say that LZW isn't worth experimenting with, for one thing it's bound to get better compression rates on some files, but I fail to see why it would be better for streaming.
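A rough C sketch of the window-as-ring idea, with the details assumed (power-of-two window so the wrap is just a mask):

#include <stdint.h>

#define WINDOW 4096                   /* must be a power of two */
#define MASK   (WINDOW - 1)

static uint8_t  window[WINDOW];       /* LZ window and streaming buffer in one */
static uint16_t wr;                   /* free-running write index */

static void put(uint8_t b) { window[wr++ & MASK] = b; }

void emit_literal(uint8_t b) { put(b); }

void emit_match(uint16_t offset, uint16_t len)
{
    /* back-reference resolved inside the same ring: the source index
       wraps exactly like the destination does */
    while (len--) put(window[(uint16_t)(wr - offset) & MASK]);
}

The consumer then keeps its own read index into the same ring and merely has to stay less than WINDOW bytes behind wr, which is exactly the fluctuation-covering buffer mentioned above.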
Moloch
Registered: Jan 2002 Posts: 2928
C=Hacking #6 (basic compress/decompress)
and I'm assuming you've looked at the various native c64/c128 versions of zip/gzip?
HCL
Registered: Feb 2003 Posts: 728
I don't have it. Lucky you, you may get the chance to code it yourself :).
Krill
Registered: Apr 2002 Posts: 2980
doynax: Oh, you're right of course. Thanks for pointing that out.
I was thinking about LZW because of saving one extra copy step for the application i have in mind, but then i realized the overhead is the same for both solutions. I'll just go for the streaming exomizer decompressor then.
Btw., there'll be two buffers then: One ringbuffer for the compressed data, which decouples fetching blocks from the drive from actually reading them into the decompressor's input stream (with my loader, they come out of order as they pass the r/w head - for speed reasons, of course), and another ringbuffer for the decompressed data which will then be used for the LZ77 back references.
The decompressed data will also be used and copied around frantically by the actual application, and so the application's copy can't be used for back-referencing.
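The write side of the first ringbuffer could look like this in C (a sketch; the slot count, the names and the 256-byte block size are all assumed):

#include <stdint.h>
#include <stdbool.h>
#include <string.h>

#define SLOTS 42                      /* e.g. two tracks' worth of blocks */

static uint8_t ring[SLOTS][256];      /* compressed data, in file order */
static bool    filled[SLOTS];         /* which slots have arrived yet */

/* Blocks come off the r/w head in whatever order; each one lands
   in the slot corresponding to its position in the stream. */
void block_arrived(uint16_t block_index, const uint8_t *data)
{
    uint8_t slot = block_index % SLOTS;
    memcpy(ring[slot], data, 256);
    filled[slot] = true;
}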
doynax Account closed
Registered: Oct 2004 Posts: 212
Quoting Krill: "I'll just go for the streaming exomizer decompressor then."
I'd reconsider using Exomizer if I were you. That decruncher seems to use any and all tricks in the book to improve the compression rate and keep the code footprint small, at the cost of performance. I'm getting roughly a 2-3x speedup over Exomizer with my decruncher for a 1-2% worse compression ratio (though YMMV).
Not to say you should necessarily use that particular decruncher, for one thing it just wasn't written to handle multiple sector buffers, but it's definitely worth looking into faster alternatives if you intend to get any mileage out of that fancy out-of-order reader you've got.
Quoting Krill: "Btw., there'll be two buffers then: One ringbuffer for the compressed data, which decouples fetching blocks from the drive from actually reading them into the decompressor's input stream (with my loader, they come out of order as they pass the r/w head - for speed reasons, of course), and another ringbuffer for the decompressed data which will then be used for the LZ77 back references."
Care to elaborate on just how that interleave-agnostic loader thing works together with decompression? Short of buffering an entire track I don't quite see how it'd work, and even then I imagine things like switching tracks and not stepping on any toes might get tricky.
Oh, and don't forget that you probably want a sizable LZ window for decent results; it depends on the particular files you're compressing, of course, but I'd say 2-4k is reasonable. And the less predictable the read rate is, the larger your buffer needs to be to cover for it.
Krill
Registered: Apr 2002 Posts: 2980
doynax: I'll just implement both and then see; after all, there are a lot of non-stream decompressors to choose from already in my loader, and more options are always better.
What do you mean by "multiple sector buffers"? The decompressor just calls a get-next-byte routine, and will be fed the data in the correct order.
And yes, my plan is to have a sector buffer the size of two tracks, so the loader can happily read the next track while the stream feeder goes through the current track's blocks and eventually waits when a block is still missing. At this point i don't see any tricky situations to worry about except for deciding if/when to reject a block to be put in place of a yet-unread one - in favour of a possible next block with less overall overhead.
The decompression look-up buffer is easily resizeable and has no constraints regarding the other buffer's size. I don't see a connection to the reading speed, though, since the other buffer is already there for that and hands out the next input stream bytes as soon as possible.
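The reader side of such a ring might look like this (again an assumed C sketch of the same layout as above; the real thing would be 6502 code, with the slots filled from the drive side under interrupt or between calls):

#include <stdint.h>
#include <stdbool.h>

#define SLOTS 42                      /* same ring as on the write side */

static uint8_t  ring[SLOTS][256];     /* filled out of order by the receive code */
static bool     filled[SLOTS];
static uint8_t  rd_slot;              /* linear read position */
static uint16_t rd_pos;

uint8_t get_next_byte(void)           /* what the decompressor calls */
{
    while (!filled[rd_slot])
        ;                             /* gap: this block hasn't passed the head yet */
    uint8_t b = ring[rd_slot][rd_pos++];
    if (rd_pos == 256) {              /* slot drained: recycle it and move on */
        rd_pos = 0;
        filled[rd_slot] = false;
        rd_slot = (rd_slot + 1) % SLOTS;
    }
    return b;
}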
doynax Account closed
Registered: Oct 2004 Posts: 212
Quoting Krill: "I'll just implement both and then see, after all there are a lot of non-stream decompressors to choose from already in my loader, and more options are always better."
Knock yourself out (download). It should be more-or-less self-explanatory.
Quote: "What do you mean by 'multiple sector buffers'? The decompressor just calls a get-next-byte routine, and will be fed the data in the correct order."
It's inlined in a couple of places but yeah, it's not really a problem, just patch up some load addresses when moving on to the next sector. I guess I hadn't thought it through ;)
Quote: "And yes, my plan is to have a sector buffer the size of two tracks, so the loader can happily read the next track while the stream feeder goes through the current track's blocks and eventually waits when a block is still missing. At this point i don't see any tricky situations to worry about except for deciding if/when to reject a block to be put in place of a yet-unread one - in favour of a possible next block with less overall overhead."
But that's like ten and a half kilobytes of buffers!
With Exomizer I'm up to an interleave of 11 or so when doing full-time loading over a blanked screen. Even with the worst possible interleave you'd still have to spend like half the frame loading to stand a chance of gaining anything by it.
Then again if you've got the memory to spare then why not, right? ;)
Quote: "The decompression look-up buffer is easily resizeable and has no constraints regarding the other buffer's size. I don't see a connection to the reading speed though, since the other buffer is already there for that and hands out the next input stream bytes as soon as possible."
Yeah, you seem to be covering things with the track buffers instead.
It still applies to the decompression by itself, though. That is, by decompressing in advance (and in the background) rather than on demand as you consume the data, you can balance things out a bit so that the loading/crunching is done mostly during otherwise idle frames.
Perhaps that's of limited use for a demo part but in a game it means that you don't have to reserve N cycles each frame for streaming.
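In outline, with every hook assumed (none of these are real API names):

#include <stdbool.h>

extern bool frame_time_left(void);    /* e.g. a raster-position check */
extern bool input_available(void);    /* compressed bytes waiting in the ring? */
extern bool output_space_left(void);  /* room in the decompressed ring? */
extern void decrunch_step(void);      /* decode one literal or one match */

/* Called once the frame's real work is done: top up the decompressed
   ring in idle time, so busy frames can live off what quiet frames did. */
void idle_decrunch(void)
{
    while (frame_time_left() && input_available() && output_space_left())
        decrunch_step();
}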
HCL
Registered: Feb 2003 Posts: 728
..and remember, it's based on ByteBoozer :D.
Krill
Registered: Apr 2002 Posts: 2980
doynax: Thanks for the source. I guess the vanilla byteboozer decompressor is easier to change for my purposes, though. :)
Both buffers are resizeable, and the track buffer is not actually a track buffer. The blocks are put there in the correct order, and a reader just needs to increment its pointer linearly, wait if there's a gap, and wrap around if the end of the buffer has been reached. I will do some tests anyway to see how large that buffer should be.
Decompression in advance is something that the application can do if it chooses to. The loader itself does decompression on demand, and could easily report the number of contiguous input stream bytes remaining in the buffer, so the application can choose whether to have them decompressed now or later.
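That report could be as simple as counting contiguous filled slots ahead of the read position (a sketch over the same assumed ring as before):

#include <stdint.h>
#include <stdbool.h>

#define SLOTS 42

static bool     filled[SLOTS];
static uint8_t  rd_slot;
static uint16_t rd_pos;               /* bytes already consumed from the current slot */

/* How many input stream bytes can be read without waiting on the drive. */
uint16_t bytes_available(void)
{
    uint16_t n = 0;
    uint8_t  slot = rd_slot;
    while (filled[slot]) {
        n += 256;
        slot = (slot + 1) % SLOTS;
        if (slot == rd_slot) break;   /* ring completely full */
    }
    return n ? (uint16_t)(n - rd_pos) : 0;
}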