

Forums > C64 Coding > Exomizer question(s)
2021-08-29 17:26
Silver Dream !

Registered: Nov 2005
Posts: 107
Exomizer question(s)

What is the "circular buffer"

https://bitbucket.org/magli143/exomizer/src/b817e089425104856ca..

here?
2021-08-29 18:11
Krill

Registered: Apr 2002
Posts: 2839
It's the ring buffer at the heart of a streaming decruncher, allowing you to feed and retrieve data that is bigger than RAM.

This is an internal data structure where data is unpacked to initially, and from and to which back-referenced unpacked data is copied. Back-references are limited to the size of this buffer.
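
A minimal C sketch of the idea (names and sizes are illustrative, not exomizer's actual internals): literals and back-referenced bytes both pass through the ring, and an offset can never reach further back than the buffer size.

```c
#include <stdint.h>
#include <stddef.h>

#define BUF_SIZE 16384u            /* must match the cruncher's -m value */

static uint8_t ring[BUF_SIZE];
static size_t  write_pos;          /* next free slot in the ring */

/* Emit one decrunched byte: hand it to the consumer and remember it in
 * the ring so later back-references can copy it. */
static void emit(uint8_t b, void (*consume)(uint8_t))
{
    ring[write_pos] = b;
    write_pos = (write_pos + 1) % BUF_SIZE;
    consume(b);
}

/* Resolve a back-reference: re-emit 'len' bytes starting 'offset' bytes
 * behind the write position. 'offset' can never exceed BUF_SIZE, which
 * is why back-references are limited to the ring buffer's size. */
static void copy_backref(size_t offset, size_t len, void (*consume)(uint8_t))
{
    size_t src = (write_pos + BUF_SIZE - offset) % BUF_SIZE;
    while (len--) {
        emit(ring[src], consume);
        src = (src + 1) % BUF_SIZE;
    }
}
```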
2021-08-29 18:58
Silver Dream !

Registered: Nov 2005
Posts: 107
Right - the best way to get rid of a temporary brain eclipse is to post a public question ;-)

IOW - yeah, I figured that one out and gave the "circular_buffer" the size set by the -m parameter (16 KiB), but I am still unable to decrunch my data correctly. There shouldn't be any magic to this, should there? Need to do some debugging.
2021-08-29 19:10
Silver Dream !

Registered: Nov 2005
Posts: 107
OK - found the problem - works! :-) Now I need to get the ZX0 decruncher working and do a decrunch-speed comparison with my data.
2021-08-29 20:07
Krill

Registered: Apr 2002
Posts: 2839
Do you really need the streaming decruncher? The regular one is surely slightly faster and has no back-reference restrictions.
2021-08-30 09:29
Silver Dream !

Registered: Nov 2005
Posts: 107
In the general case it seems better suited, as I am not decrunching from one memory location to another (or the same one) but to an I/O port, and the compressed data _might_ be larger than available memory. In particular cases the compressed data can be fully loaded into a buffer and decompressed from there, but the decruncher would still need to be modified to either return single bytes of decrunched data or store them directly to the port by itself.

With Exomizer there seem to be incompatibilities between major versions, which kept me scratching my head for some time, but what I eventually got (call `get_decrunched_byte()` until you're done) looks and works elegantly. The downside is that decrunching is admittedly quite slow with this approach. I am trying to understand how to achieve something similar with the ZX0 decruncher provided along with `bitfire`. The expectation is that it will be substantially faster, even if the compression ratio may suffer in comparison.
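
For reference, that pull loop might look like this in C (a sketch only: the end-of-stream convention and the port address are assumptions, and the real exomizer streaming decruncher is 6502 code):

```c
#include <stdint.h>

#define IO_PORT (*(volatile uint8_t *)0xDE00)  /* assumed port address */

extern int get_decrunched_byte(void);          /* assumed: < 0 at end of data */

static void decrunch_to_port(void)
{
    int b;
    while ((b = get_decrunched_byte()) >= 0)
        IO_PORT = (uint8_t)b;                  /* one byte out per call */
}
```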
2021-08-30 11:48
MagerValp

Registered: Dec 2001
Posts: 1055
Stream decrunchers will always be rather slow: keeping a ring buffer rules out several optimizations, since you have to handle wrapping. An algorithm that is slower at in-memory decrunching but compresses better might win over a theoretically faster one. The only way to know is benchmarking; I'm curious to see how Exomizer compares if you get ZX0 to work.
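
A small C sketch of the wrapping point (illustrative, not from any particular decruncher): the flat copy is a tight pointer walk, while the ring copy pays a wrap mask on every byte.

```c
#include <stdint.h>
#include <stddef.h>

/* In-memory back-reference copy: the destination doubles as the history
 * window, so no bounds handling is needed. */
static void copy_flat(uint8_t *dst, size_t offset, size_t len)
{
    const uint8_t *src = dst - offset;
    while (len--)
        *dst++ = *src++;
}

/* Streaming back-reference copy: every index must wrap at the buffer
 * size, which blocks the tight pointer loop above. */
static void copy_ring(uint8_t *ring, size_t size, size_t pos,
                      size_t offset, size_t len)
{
    size_t src = (pos + size - offset) % size;
    while (len--) {
        ring[pos] = ring[src];
        pos = (pos + 1) % size;
        src = (src + 1) % size;
    }
}
```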
2021-08-30 13:31
Krill

Registered: Apr 2002
Posts: 2839
For your use-case (with unpacked data fitting in memory), i'd use the regular decrunchers and just put a write to the IO port after every write to output memory.

The regular decrunchers are the most optimised, and comparing those would probably give you the most robust figures.
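
Something like this, as a C sketch (the port address and helper name are made up): the byte is still stored to memory so back-references keep working, and is mirrored to the port in the same step.

```c
#include <stdint.h>

#define IO_PORT (*(volatile uint8_t *)0xDE00)  /* assumed port address */

/* The decruncher calls this instead of writing to output memory directly. */
static inline void put_byte(uint8_t *dst, uint8_t b)
{
    *dst = b;        /* normal in-memory store, keeps back-references valid */
    IO_PORT = b;     /* extra store: streams the byte out as it is produced */
}
```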
2021-08-30 18:39
Silver Dream !

Registered: Nov 2005
Posts: 107
Quoting Krill
For your use-case (with unpacked data fitting in memory)
Decrunched data won't fit without sharding. Right now I get some 60+ KiB of well-compressible data down to about five KiB, and I'll need a few such chunks. I could do the sharding, but then the compression would suffer a lot: I tried the same data in small chunks, and the compressed total was about two and a half times the size of the single compressed chunk. If reusing compression data/tables/whatever the crunchers use between chunks were possible, that could change the picture.
2021-08-30 19:08
Krill

Registered: Apr 2002
Posts: 2839
Quoting Silver Dream !
If reusing compression data/tables/whatever the crunchers use, between chunks was possible that could potentially change the picture.
With ZX0, there is such a thing. You can give the cruncher a chunk of (previously) uncompressed data, which the decruncher will then expect in memory just before the uncompressed address of the chunk being decompressed. It's used as a source for back-references to copy from.

So you could have some kind of buffer ping-pong to approximate streaming. You'll possibly have to copy a large chunk of uncompressed data from a high address to a low address between decompressing chunks, though.
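
A C sketch of that ping-pong scheme (zx0_decrunch_with_prefix() is hypothetical, as are all sizes; the idea is only that each chunk is crunched against the tail of the previous one, and that tail is copied from high to low between chunks):

```c
#include <stdint.h>
#include <string.h>

#define PREFIX  4096u               /* assumed back-reference window */
#define CHUNK  16384u               /* assumed decrunched chunk size */

static uint8_t buf[PREFIX + CHUNK]; /* prefix window + output area */

/* Hypothetical: decrunch one chunk to dst, using the PREFIX bytes just
 * below dst as pre-seeded history for back-references. */
extern void zx0_decrunch_with_prefix(const uint8_t *crunched, uint8_t *dst);

static void stream_chunks(const uint8_t *const *chunks, unsigned n,
                          void (*consume)(const uint8_t *, size_t))
{
    /* First chunk is assumed crunched without a prefix, so buf's first
     * PREFIX bytes are simply never referenced by it. */
    for (unsigned i = 0; i < n; i++) {
        zx0_decrunch_with_prefix(chunks[i], buf + PREFIX);
        consume(buf + PREFIX, CHUNK);
        /* Copy the chunk's tail down from the high address to the low
         * address so it becomes the prefix for the next chunk. */
        memmove(buf, buf + CHUNK, PREFIX);
    }
}
```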