Compactor V2.0   [2021]

Compactor V2.0 Released by :

Release Date :
9 July 2021

Type :
Other Platform C64 Tool


Credits :
Code .... Sokrates of The Tuneful Eight
Test .... Enthusi of Onslaught, PriorArt, RGCD

Download :
http://csdb.dk/getinternalfile.php/215142/compactor.exe (downloads: 38)

User Comment
Submitted by Sokrates on 22 July 2021
Here you can find some details about "compactor" and the compression method it uses (in German):
User Comment
Submitted by Sokrates on 11 July 2021
@Frostbyte: It makes my day when someone finds this tool useful :-) If you have padding bits/bytes in your data, you can now try to improve your compression rate with version 2.0.
User Comment
Submitted by Frostbyte on 10 July 2021
For anyone who is still a bit confused about what this does (and for the simple ones amongst us to whom the wiki article is more confusing than explanatory, like me): this is not strictly compression; nothing needs to be decompressed at runtime for the data to be in usable form. It is rather a tool for reorganising indexed data so that similar segments can be overlapped where applicable, thus saving memory.

A very simplified example: Let's say that you have some data which is accessed via an index table. Each data segment is three bytes long. Your segments look like this (here only two for simplicity):

.byte 1, 2, 3
.byte 3, 4, 5

Then you'll have an index table like this:
.word data1
.word data2
.word data1

As you can see, here your actual data takes 6 bytes of memory. What the compactor does is find areas in the data where the entries can overlap, thus removing duplication, and create a so-called superstring into which each data entry now points. So the above turns into:

.byte 1, 2, 3, 4, 5

.var data1 = compactedData + 0
.var data2 = compactedData + 2

Your index remains the same. The end result is that your data1 is still 1,2,3 and data2 is still 3,4,5, but as the 3 is now shared between these two data entries, you've saved a byte of memory with no performance penalty at runtime whatsoever.

Of course this is a very simplified example and pointless in real life. The real benefits come into play when you have a lot of data with a bunch of similar patterns, where the overlapping may save you a meaningful amount of memory. Just recently I had a use case with 40.5k of indexed data, and with the compactor I was able to reduce it to 38.5k. 2k doesn't sound like much, unless you're running out of memory and out of CPU cycles to add proper decompression at runtime.
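The overlap-merge idea described above corresponds to the greedy approximation of the shortest common superstring problem. Here is a minimal, illustrative Python sketch of that approach (not the tool's actual code; all function names are made up), reproducing the two-segment example from this comment:

```python
def overlap(a, b):
    """Length of the longest suffix of a that is also a prefix of b."""
    for k in range(min(len(a), len(b)), 0, -1):
        if a[-k:] == b[:k]:
            return k
    return 0

def greedy_superstring(segments):
    """Repeatedly merge the pair of segments with the largest overlap."""
    segs = [list(s) for s in segments]
    while len(segs) > 1:
        best_k, bi, bj = -1, 0, 1
        for i in range(len(segs)):
            for j in range(len(segs)):
                if i == j:
                    continue
                k = overlap(segs[i], segs[j])
                if k > best_k:
                    best_k, bi, bj = k, i, j
        merged = segs[bi] + segs[bj][best_k:]  # drop the overlapping prefix
        segs = [s for idx, s in enumerate(segs) if idx not in (bi, bj)]
        segs.append(merged)
    return segs[0]

def offset_of(segment, superstring):
    """Index where the segment occurs inside the superstring."""
    seg = list(segment)
    for i in range(len(superstring) - len(seg) + 1):
        if superstring[i:i + len(seg)] == seg:
            return i
    raise ValueError("segment not found")

data = [(1, 2, 3), (3, 4, 5)]
s = greedy_superstring(data)
# s == [1, 2, 3, 4, 5]; data1 is at offset 0, data2 at offset 2,
# matching the .var definitions in the example above
```

Greedy merging is only an approximation (finding the true shortest superstring is NP-hard), but in practice it removes most of the duplication, which is all that matters here.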

Disclaimer: I'm not affiliated with Sokrates in any way, I just like the fact that this tool made my life a little bit easier just at the time when I needed it. :)
User Comment
Submitted by Sokrates on 9 July 2021
Thanks for mentioning! My assumption was that people are more interested in use-cases than in understanding the actual compression method, so I focused more on examples. But I will explain "shortest common superstring" and the "greedy approximate algorithm" in the upcoming Brotkastenfreun.de episode 008 (German podcast). I hope this will help to share the information.
User Comment
Submitted by Krill on 9 July 2021
Have you mentioned https://en.wikipedia.org/wiki/Shortest_common_supersequence_pro.. to clear up things and avoid confusion? =)
User Comment
Submitted by Sokrates on 9 July 2021
Version 2.0 is out, thanks to enthusi for testing!

From the feedback I got I noticed that some people are confused by when/how to apply this compression method.

So first of all: this is not a replacement for standard data compression. Whenever you can use standard compression, you should do so because it will VERY likely produce a better compression rate than using 'compactor'.

You should consider using this tool for data compression when standard compression methods can NOT be used, i.e. when you don't have time or memory left for decompression. So if you need some more free bytes in your program without changing the existing timing, you might find this tool useful.

This version 2.0 adds binary input/output and optional padding bit masks. With a padding bit mask you can mark bits that carry no relevant information, potentially resulting in a better compression rate. For example, for color RAM values only the lower 4 bits are relevant, so the upper 4 bits can be declared padding bits by applying the mask %11110000. The color "black" can then be represented not only by the value 0, but also by 16, 32, 64, 128 and so on. This variety increases the chance of matches with values from other arrays when searching for overlaps.
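The effect of a padding bit mask can be sketched in a few lines of Python (an illustration of the idea only, not the tool's implementation; the names are made up): two bytes are considered equal for overlap-matching purposes if they agree on every bit *not* covered by the mask.

```python
PAD_MASK = 0b11110000  # upper 4 bits carry no information (color RAM case)

def matches(a, b, mask=PAD_MASK):
    """True if a and b agree on all non-padding bits."""
    return (a & ~mask) == (b & ~mask)

# 0, 16, 32, 64 and 128 all represent "black" once padding bits are ignored:
print(matches(0, 16), matches(0, 128))  # both True
print(matches(0, 1))                    # False: differs in a relevant bit
```

With this relaxed equality, a segment of color values has many more candidate overlaps with other segments, which is exactly why the mask can improve the compaction rate.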

I know this is an even more special use-case for a compression method which already has a special use-case :-) But well, I was into it and here it is...

I hope some of you have fancy use-cases for 'compactor'! I would appreciate if you could share some results then!