Forums > C64 Coding > Compression optimisation
2022-07-23 10:48
Krill

Registered: Apr 2002
Posts: 2839
Compression optimisation

Once in a while, 3-(or more-)sided demos appear, or 40-track images, or demos effectively filling just slightly more than 1 side.

These make me wonder whether all the tricks in the book have been used, or if that book is incomplete or whatever.

So here are a few from the top of my head, feel free to add more/comment/criticise etc.

0. Removing the zeroes
Empty chunks within a file decrease the compression ratio. Consider mem-filling and copying at run-time (init-time).
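As a rough illustration, a minimal host-side sketch of that preprocessing step (Python, hypothetical helper name, tied to no particular cruncher's file format): strip long zero runs before crunching and record them as init-time mem-fills instead.

def strip_zero_runs(image, base_addr, min_run=32):
    # segments: (address, bytes) that go into the crunched file;
    # fills: (address, length) to mem-fill with zeroes at init time
    segments, fills = [], []
    start = i = 0
    n = len(image)
    while i < n:
        if image[i] == 0:
            j = i
            while j < n and image[j] == 0:
                j += 1
            if j - i >= min_run:  # only strip runs worth a fill record
                if i > start:
                    segments.append((base_addr + start, image[start:i]))
                fills.append((base_addr + i, j - i))
                start = j
            i = j
        else:
            i += 1
    if start < n:
        segments.append((base_addr + start, image[start:n]))
    return segments, fills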

1. One file per part
Kind of obvious, as this increases compressibility trivially, but of course is not so trivial to achieve with some naturally-funky memory layouts.

2. Run-time generated code rather than code on disk
If not fully generated unrolled code, some complex pre-generated code can often still be represented by a few different tokens and literals, then inflated to actual 6502 code at run-time.
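A minimal sketch of that inflation step, in Python for clarity (on the C64 the inflater would itself be a small 6502 loop): one token describing a run of unrolled STA instructions is expanded into actual code bytes at init time.

def inflate_sta_run(count, start, stride):
    # expand one (count, start, stride) token into unrolled 6502 code:
    # count times "STA $addr" with addr advancing by stride, then RTS
    code = bytearray()
    addr = start
    for _ in range(count):
        code += bytes([0x8D, addr & 0xFF, addr >> 8])  # STA absolute
        addr += stride
    code.append(0x60)                                  # RTS
    return bytes(code)

# 200 unrolled stores are 601 bytes of code, but only one tiny token
# in the crunched file:
speedcode = inflate_sta_run(200, 0x0400, 40)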

3. Bitmap optimisation
A few tools for that exist. =)

4. Priming the decruncher
As files tend to be loaded in linear order in demos, some memory content before loading is very predictable, such as the main (non-selfmodifying) code of the previous part.
This may be used for more back-references and fewer literals when loading the next part, increasing compression ratio.
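None of the crunchers commonly used on the C64 necessarily expose this directly, but zlib's preset dictionary demonstrates the principle: back-references may point into data that is already in memory before decrunching starts.

import os, zlib

prev_part = os.urandom(8192)                        # stand-in: previous part, already in memory
next_part = prev_part[1024:3072] + os.urandom(512)  # next part shares data with it

plain = zlib.compress(next_part, 9)
c = zlib.compressobj(level=9, zdict=prev_part)      # compressor primed with prev_part
primed = c.compress(next_part) + c.flush()
print(len(plain), "bytes unprimed,", len(primed), "bytes primed")
# the other end needs zlib.decompressobj(zdict=prev_part) accordingly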

5. In-file sorting of chunks for compressibility
This is more about increasing decompression speed than compression ratio, and it's just a hunch that still needs to be backed up by hard data.
So it may pay off to put the best-compressible data at the head of a file, with progressively worse-compressing data towards the tail end.
This would let combined loading+decrunching reach full output speed sooner, and a longer incompressible tail-end means that less of the crunched file actually needs to be run through the (in-place decrunching) decruncher.
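A crude way to test that hunch (Python, zlib as a stand-in for the actual cruncher; the chunk-index table needed to restore the original order at run-time is omitted):

import zlib

def sort_chunks_by_compressibility(data, chunk=256):
    chunks = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    # estimated per-chunk compression ratio; smaller = compresses better
    ratio = lambda c: len(zlib.compress(c, 9)) / len(c)
    return sorted(chunks, key=ratio)  # best-compressing chunks first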
 
... 21 posts hidden ...
 
2022-07-26 07:01
ChristopherJam

Registered: Aug 2004
Posts: 1378
Well, generally yes (and it's an effect further exacerbated by crunchers using the most recent occurrence), but it's not uncommon to have richer seams to mine in the far distance than in the middle distance, and that's even without empty areas to skip over.
2022-07-26 08:16
wacek

Registered: Nov 2007
Posts: 501
Especially on the 1st side of E2IRA, all the different tricks in the book were used to make things fit on the disk. After all the optimisations we had, AFAIR, 4 blocks free.
There was a moment when we ran out of space and had no flip-disk part yet; this is where I went through all the parts and fine-tuned everything. Before that, one of the ideas was going 40 tracks, but I said "Fuck no" ;)
2022-07-26 08:46
Krill

Registered: Apr 2002
Posts: 2839
Quoting wacek
Especially on 1st side of E2IRA, all different tricks in the book have been used to fit on the disk.
Any tricks not mentioned so far? :)

Here's one i just remembered:

6. Consider using more than one cruncher
Across any given corpus, it's very likely that some individual files will compress better with cruncher A while others will be smaller using cruncher B.
It may pay off to use both corresponding decrunchers, and select them on a per-file basis.
One game project i'm aware of currently uses both Dali/ZX0 and Exomizer, iirc, and used Exomizer vs Subsizer before ZX0 was invented.
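The selection step itself is trivial; a host-side sketch with zlib and lzma standing in for the real crunchers (an actual build script would invoke the cruncher binaries and tag each file with the decruncher to use):

import zlib, lzma

CRUNCHERS = {
    "A": lambda d: zlib.compress(d, 9),         # stand-in for e.g. Dali/ZX0
    "B": lambda d: lzma.compress(d, preset=9),  # stand-in for e.g. Exomizer
}

def pick_cruncher(data):
    results = {name: fn(data) for name, fn in CRUNCHERS.items()}
    best = min(results, key=lambda name: len(results[name]))
    return best, results[best]

The per-file tag costs next to nothing, but both decrunchers have to stay resident, so this only pays off once the savings across the corpus exceed the size of the extra decruncher.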
2022-07-26 09:19
Oswald

Registered: Apr 2002
Posts: 5017
"Any tricks not mentioned so far? :)"

Sure, bitmaps can be stored in a different order: in linear rows, or in byte columns.

Also, tables that are generated by the assembler could often be generated in realtime instead, but at this level it's more like a size-coding compo than an X-sided trackmo.
2022-07-26 09:36
Krill

Registered: Apr 2002
Posts: 2839
Quoting Oswald
bitmaps can be stored in different order, in linear rows, or byte columns.
Sounds like either might or might not pay off depending on the specific bitmap. Not really a generic approach.

Quoting Oswald
Also, tables that are generated by the assembler could often be generated in realtime instead, but at this level it's more like a size-coding compo than an X-sided trackmo.
The really simple tables (like bitmap offset tables, N*320 or so) should be generated at run-time. When it gets more complex, don't bother until, yeah, very desperate times. =)
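For reference, a minimal sketch of such a table (Python for clarity; the equivalent 6502 init loop is only a handful of bytes, versus 50 bytes of table data in the crunched file):

BITMAP = 0x2000  # hypothetical bitmap base address

# char-row offsets into the bitmap, row * 320 for rows 0..24,
# split into lo/hi byte tables the way 6502 code would index them
lo = [(BITMAP + row * 320) & 0xFF for row in range(25)]
hi = [(BITMAP + row * 320) >> 8 for row in range(25)]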
2022-07-26 09:47
wacek

Registered: Nov 2007
Posts: 501
Quoting Oswald
"Any tricks not mentioned so far? :)"


Probably not :)
Was EORing mentioned? I don't remember if I used it in E2IRA but for sure in some 4Ks.
2022-07-26 09:52
Krill

Registered: Apr 2002
Posts: 2839
Quoting wacek
Was EORing mentioned? I don't remember if I used it in E2IRA but for sure in some 4Ks.
Can you elaborate? Simple EOR with a constant on a run of data?
2022-07-26 10:25
Oswald

Registered: Apr 2002
Posts: 5017
"Sounds like either might or might not pay off depending on the specific bitmap. Not really a generic approach."


yeah, the original bitmap layout lends itself to good RLE of locally similar areas, while runs of lines or columns need a different type of picture to pay off
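For the record, a minimal sketch of one such reordering (Python; the inverse transform has to run on the C64 before display, and whether the result crunches better depends entirely on the picture):

def cells_to_linear(bitmap):
    # convert a C64 hires bitmap (8000 bytes, 8-byte cell order) into
    # plain linear rows, so each scanline is 40 contiguous bytes
    out = bytearray(8000)
    for y in range(200):
        for col in range(40):
            out[y * 40 + col] = bitmap[(y >> 3) * 320 + col * 8 + (y & 7)]
    return bytes(out)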
2022-07-26 10:34
wacek

Registered: Nov 2007
Posts: 501
Quoting Krill
Can you elaborate? Simple EOR with a constant on a run of data?

No, I mean e.g. EORing two sets of similarly structured data with each other (e.g. bitmaps). Sometimes it improves the crunching, sometimes not ;)

In the 4Ks I have done with samples, I usually have the 256-byte $d418 table for Mahoney's digi method compressed into deltas, which always saved some bytes on compression. So generally, converting tables into deltas is always a thing to try in my book.
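Both filters are tiny and reversible; minimal sketches of each (Python, hypothetical helper names), with the usual caveat that only measuring tells whether they help with a given cruncher:

def eor_filter(a, b):
    # keep a verbatim, store b EORed against a; the inverse is the same EOR
    return bytes(x ^ y for x, y in zip(a, b))

def delta_filter(table):
    # first byte verbatim, then successive differences (mod 256);
    # undone at init time by a running sum
    return bytes([table[0]] + [(table[i] - table[i - 1]) & 0xFF
                               for i in range(1, len(table))])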
2022-07-26 10:43
Krill

Registered: Apr 2002
Posts: 2839
Quoting wacek
No, I mean e.g. EORing two sets of similarly structured data with each other (e.g. bitmaps). Sometimes it improves the crunching, sometimes not ;)
Ah. But then you'd EOR one thing with the other, but not both with each other, right? :)

Quoting wacek
In the 4Ks I have done with samples, I usually have the 256-byte $d418 table for Mahoney's digi method compressed into deltas, which always saved some bytes on compression. So generally, converting tables into deltas is always a thing to try in my book.
Yeah, but this is another of those rather specific (not generic) filters that might or might not improve compression ratio on a given piece of data and a given cruncher.