Website: https://github.com/FPT-Sokrates/compactor
Download: Look for downloads on external sites: Pokefinder.org
User Comment Submitted by Sokrates on 22 July 2021

User Comment Submitted by Sokrates on 11 July 2021
@Frostbyte: makes my day if someone finds this tool useful :-) If you have padding bits/bytes in your data, you can now try to improve your compression rate with version 2.0.

User Comment Submitted by Frostbyte on 10 July 2021
For anyone who is still a bit confused about what this does (and for the simple ones amongst us to whom the wiki article is more confusing than explanatory, like me): This is not strictly compression; nothing needs to be decompressed at runtime for the data to be in usable form. It is rather a tool for reorganising indexed data so that similar segments can be overlapped where applicable, thus saving memory.
A very simplified example: Let's say that you have some data which is accessed via an index table. Each data segment is three bytes long. Your segments look like this (here only two for simplicity):
data1:
.byte 1, 2, 3
data2:
.byte 3, 4, 5
Then you'll have an index table like this:
dataindex:
.word data1
.word data2
.word data1
As you can see, here your actual data takes 6 bytes of memory. What the compactor does is find areas in the data where entries can overlap, removing duplication, and build a so-called superstring into which each data entry now points. So the above turns into:
compactedData:
.byte 1, 2, 3, 4, 5
.var data1 = compactedData + 0
.var data2 = compactedData + 2
Your index remains the same. The end result is that your data1 is still 1,2,3 and data2 is still 3,4,5, but as the 3 is now shared between these two data entries, you've saved a byte of memory with no performance penalty at runtime whatsoever.
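To see the same merge in executable form, here is a tiny Python sketch (an illustration of the idea only, not code from the compactor itself):

def overlap_len(a, b):
    """Length of the longest suffix of a that is also a prefix of b."""
    for k in range(min(len(a), len(b)), 0, -1):
        if a[-k:] == b[:k]:
            return k
    return 0

data1 = [1, 2, 3]
data2 = [3, 4, 5]

k = overlap_len(data1, data2)        # 1: the shared byte 3
compacted = data1 + data2[k:]        # [1, 2, 3, 4, 5]
off1, off2 = 0, len(data1) - k       # data1 -> +0, data2 -> +2

# Each original segment is still readable through its offset:
assert compacted[off1:off1 + len(data1)] == data1
assert compacted[off2:off2 + len(data2)] == data2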
Of course this is a very simplified example and pointless in real life. The real benefits come into play when you have a lot of data with a bunch of similar patterns, where the overlapping may save you a meaningful amount of memory. Just recently I had a use case with 40.5k of indexed data, and with the compactor I was able to reduce it to 38.5k. 2k doesn't sound like much, unless you're running out of memory and out of the CPU cycles needed for proper decompression at runtime.
Disclaimer: I'm not affiliated with Sokrates in any way; I just like the fact that this tool made my life a little bit easier just at the time when I needed it. :)

User Comment Submitted by Sokrates on 9 July 2021
Thanks for mentioning! My assumption was that people are more interested in use cases than in understanding the actual compression method, so I focused more on examples. But I will explain "shortest common superstring" and the "greedy approximate algorithm" in the upcoming Brotkastenfreun.de episode 008 (German podcast). I hope this will help to share the information.
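As a rough illustration of that greedy approach, here is a generic Python sketch (the names are made up for this example; it is not the compactor's actual implementation): segments are repeatedly merged at their largest suffix/prefix overlap until a single superstring remains.

def overlap(a, b):
    """Length of the longest suffix of a that is also a prefix of b."""
    for k in range(min(len(a), len(b)), 0, -1):
        if a[-k:] == b[:k]:
            return k
    return 0

def greedy_superstring(segments):
    # Deduplicate, then drop segments fully contained in another segment.
    segs = list(dict.fromkeys(bytes(s) for s in segments))
    segs = [s for i, s in enumerate(segs)
            if not any(s in t for j, t in enumerate(segs) if j != i)]
    while len(segs) > 1:
        # Pick the pair with the largest overlap and merge it.
        k, i, j = max((overlap(a, b), i, j)
                      for i, a in enumerate(segs)
                      for j, b in enumerate(segs) if i != j)
        merged = segs[i] + segs[j][k:]
        segs = [s for idx, s in enumerate(segs) if idx not in (i, j)]
        segs.append(merged)
    return segs[0]

print(list(greedy_superstring([[1, 2, 3], [3, 4, 5], [2, 3, 4]])))
# prints [1, 2, 3, 4, 5]

Greedy merging is only an approximation: finding the true shortest common superstring is NP-hard, but the greedy result is known to stay within a small constant factor of the optimum.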
User Comment Submitted by Krill on 9 July 2021

User Comment Submitted by Sokrates on 9 July 2021

Version 2.0 is out, thanks to enthusi for testing!
From the feedback I got, I noticed that some people are confused about when and how to apply this compression method.
So first of all: this is not a replacement for standard data compression. Whenever you can use standard compression, you should do so because it will VERY likely produce a better compression rate than using 'compactor'.
You should consider using this tool for data compression when standard compression methods can NOT be used, i.e. when you don't have time or memory left for decompression. So if you need some more free bytes in your program without changing the existing timing, you might find this tool useful.
This version 2.0 adds binary input/output and optional padding bit masks. With padding bit masks you can mark bits that carry no relevant information, which can result in a better compression rate. E.g. for color RAM values only the lower 4 bits are relevant, so the upper 4 bits can be declared padding bits by applying the mask %11110000. For the color "black" the value 0 can then be used, but also 16, 32, 64, 128 etc. This variety increases the chance of matches with values from other arrays when searching for overlaps.
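A purely illustrative Python sketch of the mask idea (the names here are made up, not the tool's actual interface): the padding mask marks the bits to ignore when testing whether two bytes can be treated as equal during the overlap search.

PAD_MASK = 0b11110000  # same mask as %11110000: upper 4 bits carry no information

def matches(a, b, mask=PAD_MASK):
    """Compare only the relevant (non-padding) bits of two bytes."""
    return (a & ~mask & 0xFF) == (b & ~mask & 0xFF)

# With this mask, 0, 16, 32, 64 and 128 all count as "black" (low nibble 0):
assert matches(0x00, 0x10)
assert matches(0x00, 0x80)
assert not matches(0x00, 0x01)  # low nibbles differ, so no match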
I know this is an even more special use case for a compression method which already has a special use case :-) But well, I was into it, and here it is...
I hope some of you have fancy use cases for 'compactor'! I would appreciate it if you could share some results!