CSDb User Forums

Forums > C64 Coding > Exomizer question(s)
2021-08-29 17:26
Silver Dream !

Registered: Nov 2005
Posts: 88
Exomizer question(s)

What is the "circular buffer"?


2021-08-29 18:11
Krill
Registered: Apr 2002
Posts: 2023
It's the ring buffer at the heart of the streaming decruncher, allowing you to feed in and retrieve data that is bigger than RAM.

It is the internal data structure into which data is initially unpacked, and from and to which back-referenced unpacked data is copied. Back-references are limited to the size of this buffer.
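The mechanics can be sketched as follows. This is not Exomizer's actual code, just a minimal illustration of how an LZ-style streaming decruncher resolves back-references through a fixed-size ring, with the buffer size standing in for the window given with the `-m` option:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Tiny ring for illustration; Exomizer's -m window would be e.g. 16 KiB. */
#define BUF_SIZE 8

static unsigned char ring[BUF_SIZE];
static size_t pos = 0;              /* next write position, wraps around */

/* Emit one decrunched byte: hand it to the consumer and remember it
 * in the ring so later back-references can reach it. */
static void emit(unsigned char b, unsigned char *out, size_t *outlen)
{
    out[(*outlen)++] = b;
    ring[pos] = b;
    pos = (pos + 1) % BUF_SIZE;
}

/* Resolve a back-reference: copy `len` bytes starting `offset` bytes
 * behind the current position, wrapping inside the ring. Offsets larger
 * than BUF_SIZE cannot be represented -- hence the restriction on
 * back-reference distance. Overlapping copies (offset < len) work
 * LZ77-style, re-reading bytes just written. */
static void copy_backref(size_t offset, size_t len,
                         unsigned char *out, size_t *outlen)
{
    size_t src = (pos + BUF_SIZE - offset) % BUF_SIZE;
    while (len--) {
        emit(ring[src], out, outlen);
        src = (src + 1) % BUF_SIZE;
    }
}
```

Every read and write goes through a modulo (or wrap check), which is exactly the per-byte overhead a regular in-memory decruncher avoids.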
2021-08-29 18:58
Silver Dream !

Registered: Nov 2005
Posts: 88
Right - the best way to get rid of a temporary brain eclipse is to post a public question ;-)

IOW - yeah, I figured that one out and provided a "circular_buffer" matching the size given with the -m parameter (16 KiB), but I am still unable to decrunch my data correctly. There shouldn't be any magic to this, should there? Need to do some debugging.
2021-08-29 19:10
Silver Dream !

Registered: Nov 2005
Posts: 88
OK - found the problem - works! :-) Now I need to get the ZX0 decruncher to work and do a decrunch-speed comparison with my data
2021-08-29 20:07
Krill
Registered: Apr 2002
Posts: 2023
Do you really need the streaming decruncher? The regular one is surely slightly faster and has no back-reference restrictions.
2021-08-30 09:29
Silver Dream !

Registered: Nov 2005
Posts: 88
In the general case it seems better suited, as I am not decrunching from one memory location to another (or the same one) but to an I/O port. And the compressed data _might_ be larger than available memory. In particular cases the compressed data can be fully loaded into a buffer and decompressed from there, but the decruncher would still need to be modified to either return single bytes of decrunched data or store them directly to the port itself.

With Exomizer there seem to be incompatibilities between major versions, which kept me scratching my head for some time, but what I eventually got with it (call `get_decrunched_byte()` until you're done) looks and works elegantly. The downside is that decrunching is admittedly quite slow with this approach. I am trying to understand how I can achieve something similar with the ZX0 decruncher provided along with `bitfire`. The expectation is that it will be substantially faster, even if the compression ratio may suffer in comparison.
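The pull loop described above can be sketched like this. `get_decrunched_byte()` is the name from the post; the body here is a stand-in draining a fixed array (the real one would run the decruncher's state machine), and `write_port()` stands in for a store to a fixed hardware address on the C64:

```c
#include <assert.h>
#include <string.h>

/* Stand-in stream source: yields one byte per call, -1 at end.
 * A real get_decrunched_byte() would advance the decruncher state. */
static const unsigned char demo_data[] = { 0x10, 0x20, 0x30 };
static unsigned int demo_pos = 0;

static int get_decrunched_byte(void)
{
    if (demo_pos >= sizeof demo_data)
        return -1;                      /* end of stream */
    return demo_data[demo_pos++];
}

/* Stand-in for the I/O port: capture writes so they can be checked.
 * On real hardware this would be a store to a fixed address. */
static unsigned char port_log[16];
static unsigned int port_len = 0;

static void write_port(unsigned char b) { port_log[port_len++] = b; }

/* Decrunch directly to the port, with no output buffer in RAM. */
static void decrunch_to_port(void)
{
    int b;
    while ((b = get_decrunched_byte()) >= 0)
        write_port((unsigned char)b);
}
```

The per-byte call overhead in this loop is one reason the streaming approach ends up slower than a straight in-memory copy loop.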
2021-08-30 11:48

Registered: Dec 2001
Stream decrunchers will always be rather slow: keeping a ring buffer means you have to handle wrapping, which rules out several optimizations. An algorithm that is slower at in-memory decrunching but compresses better might win over a theoretically faster one. The only way to know is benchmarking; curious to see how Exomizer compares if you get ZX0 to work.
2021-08-30 13:31
Krill
Registered: Apr 2002
Posts: 2023
For your use-case (with unpacked data fitting in memory), I'd use the regular decrunchers and just put a write to the I/O port after every write to output memory.

The regular decrunchers are the most optimised, and comparing those would probably give you the most robust figures.
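The suggestion boils down to a "tee" on the decruncher's output write. As a sketch, with `put_byte()` as a hypothetical output routine (not actual Exomizer code): the byte still lands in the output buffer, so back-references keep working, and one extra store forwards it to the port:

```c
#include <assert.h>

#define OUT_SIZE 64

static unsigned char out_mem[OUT_SIZE];   /* regular output buffer */
static unsigned int  out_pos = 0;
static unsigned char port_last;           /* stand-in for the I/O port */

/* Hypothetical output routine of an in-memory decruncher: store the
 * byte normally (back-references read out_mem directly, no ring needed)
 * and additionally forward it to the port. */
static void put_byte(unsigned char b)
{
    out_mem[out_pos++] = b;   /* regular write */
    port_last = b;            /* extra write to the port */
}
```

On a 6502 the extra cost is a single absolute store per output byte, which is far cheaper than the wrap handling of a ring buffer.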
2021-08-30 18:39
Silver Dream !

Registered: Nov 2005
Posts: 88
Quoting Krill
For your use-case (with unpacked data fitting in memory)
Decrunched data won't fit w/o sharding. Right now I get some 60+ KiB worth of well-compressible data down to about five KiB, and I'll need a few such chunks. I could do the sharding, but then the compression would suffer a lot: I tried the same data in small chunks, and the compressed total was about two and a half times the size of the single compressed chunk. If reusing compression data/tables/whatever the crunchers use between chunks were possible, that could potentially change the picture.
2021-08-30 19:08
Krill
Registered: Apr 2002
Posts: 2023
Quoting Silver Dream !
If reusing compression data/tables/whatever the crunchers use, between chunks was possible that could potentially change the picture.
With ZX0, there is such a thing. You can give the cruncher a chunk of previously uncompressed data, which the decruncher will then expect in memory directly before the start address of the chunk to be decompressed. It is used as a source for back-reference copies.

So you could have some kind of buffer ping-pong to approximate streaming. You'll possibly have to copy a large chunk of uncompressed data from a high address to a low address between decompressing chunks, though.
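The ping-pong idea can be sketched as follows, assuming a decruncher that (like ZX0 with a preset dictionary) expects the previous plaintext directly below the output address. `decrunch_chunk()` here is a trivial stand-in that just copies; the point is the buffer shuffling around it:

```c
#include <assert.h>
#include <string.h>

#define CHUNK 4   /* tiny for illustration */

/* Stand-in "decruncher": may read dst[-CHUNK..-1] as its dictionary.
 * A real one would decode a compressed chunk with back-references
 * reaching into that preceding region. */
static void decrunch_chunk(const unsigned char *src, unsigned char *dst)
{
    memcpy(dst, src, CHUNK);
}

/* buf holds two chunks: [0, CHUNK) is the dictionary (the previous
 * plaintext), [CHUNK, 2*CHUNK) is where the next chunk is decrunched.
 * After consuming a chunk, move it down so it becomes the dictionary
 * for the next round -- the high-to-low copy mentioned above. */
static void stream_chunks(const unsigned char *packed, int nchunks,
                          unsigned char *sink)
{
    unsigned char buf[2 * CHUNK];
    memset(buf, 0, sizeof buf);
    for (int i = 0; i < nchunks; i++) {
        decrunch_chunk(packed + i * CHUNK, buf + CHUNK);
        memcpy(sink + i * CHUNK, buf + CHUNK, CHUNK);  /* consume chunk */
        memmove(buf, buf + CHUNK, CHUNK);  /* becomes next dictionary */
    }
}
```

The `memmove` between chunks is the extra cost of this scheme; in exchange, each chunk can back-reference into the previous one, recovering some of the compression ratio lost to sharding.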