| |
oziphantom
Registered: Oct 2014 Posts: 490 |
modify Exomizer compressor to blacklist some memory locations
Does anybody know a way to modify the Exomizer compression algorithm to blacklist the $FF0X memory locations so they are never read from, i.e. don't allow them to be used as part of a sequence?
I guess a post-process would also work, if there is a simple way to convert a sequence into literal bytes... |
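For illustration, the compressor-side idea might look something like the sketch below. This is not Exomizer's actual match finder; the names are made up, and it assumes $FF0X means $FF00-$FF07 (consistent with the byte counts later in the thread):

    #include <stdbool.h>

    #define BAD_LO 0xFF00u   /* assumed blacklist: $FF00-$FF07 */
    #define BAD_HI 0xFF07u

    /* Reject any candidate match whose source bytes overlap the
       blacklisted range; the encoder then emits literals instead. */
    bool match_allowed(unsigned src, unsigned len)
    {
        unsigned last = src + len - 1;   /* last byte the copy reads */
        return last < BAD_LO || src > BAD_HI;
    }

With a filter like this in the match finder, offsets touching the range are simply never emitted, and those bytes get covered by literals instead, at a small cost in pack ratio.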
|
... 16 posts hidden ... |
| |
ChristopherJam
Registered: Aug 2004 Posts: 1409 |
Quoting Krill: The decision should be made per sequence copy, not per source byte.
Sure, but it's still going to be slower than just blacklisting those particular source bytes. That's an extra check for every token - particularly harsh given that exo also uses copies for recently used single bytes. |
| |
Krill
Registered: Apr 2002 Posts: 2980 |
Quoting ChristopherJam: Sure, but it's still going to be slower than just blacklisting those particular source bytes.
Yes, disallowing certain memory ranges on the compressor side is the preferred option, IF it is available. :)
Quoting ChristopherJam: That's an extra check for every token - particularly harsh given that exo also uses copies for recently used single bytes.
An extra check for every sequence-copy token. However, it should be highly optimisable. The check only needs to be performed once the problematic range has actually been written to, and as its high byte is $FF, the back-reference check in flat memory space should allow for an early exit. There may be more opportunities for optimisation. |
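In C terms, the deferred check might look like this sketch (hypothetical names, not actual Exomizer or depacker code; the real thing would be a few 6502 instructions):

    #include <stdint.h>
    #include <stdbool.h>

    static bool range_written = false;  /* set once $FF00-$FF07 has been depacked */

    /* Hypothetical platform hook: read the RAM byte hidden under the I/O range. */
    uint8_t read_under_regs(const uint8_t *ram, uint16_t addr)
    {
        return ram[addr];  /* stand-in; real code would bank the registers out */
    }

    uint8_t src_read(const uint8_t *ram, uint16_t addr)
    {
        /* No cost at all until the range exists in memory; after that,
           the $FF high byte lets nearly every address exit early.      */
        if (range_written && (addr >> 8) == 0xFF && (addr & 0x00FF) <= 0x07)
            return read_under_regs(ram, addr);
        return ram[addr];
    }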
| |
oziphantom
Registered: Oct 2014 Posts: 490 |
not write, read
So if you are writing to $4000 and it wants to copy a 128-byte sequence that starts at $FE88, then the first 8 reads (as it reads from the top down) need to go through the special read-under-$FF0X code. So in order to know whether it needs the normal or the special read, you need to do
(Start + X + Len).hi = $FF, where Start and Len are 16 bits, 'then do special'; that is probably the best case. It might be faster overall to just do Start.hi + Len.hi > $FD and take the occasional false hit, rather than take the hit of doing 16-bit maths for the rest (see the sketch below).
ChristopherJam is right, Exomizer loves to do a sequence of 1 byte.
I have written to Magnus Lind, and he sees that it might be useful for other systems as well; if it's not too much work, he is happy to make "blacklist intervals" a feature of Exomizer. |
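As a sketch in C, the exact and the conservative test might look like this (hypothetical names; on the 6502 each would be a handful of instructions, done once per sequence copy):

    #include <stdint.h>
    #include <stdbool.h>

    /* Exact 16-bit test: does the top of the copy reach page $FF? */
    bool needs_special_exact(uint16_t start, uint16_t len)
    {
        return ((uint16_t)(start + len - 1) >> 8) == 0xFF;
    }

    /* Conservative 8-bit test: cheaper, never misses a hit, but can
       fire one page early (e.g. start=$FE00, len=$10).              */
    bool needs_special_cheap(uint16_t start, uint16_t len)
    {
        return (uint8_t)(start >> 8) + (uint8_t)(len >> 8) > 0xFD;
    }

The cheap test is safe because the exact high byte is the 8-bit sum plus at most a carry of one, so any copy reaching page $FF also pushes the 8-bit sum above $FD.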
| |
Krill
Registered: Apr 2002 Posts: 2980 |
Quoting oziphantom: not write, read
A sequence cannot be read from before it has been written initially (and writing is not a problem, if i have understood you right). The write pointer is strictly ascending or descending depending on depack direction, and thus the range check is superfluous before the problematic range has been written to. This is why i wrote "The check only needs to be performed once the problematic range has actually been written to". |
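Continuing the earlier sketch, the flag only needs to flip when the monotonic write pointer passes the range; for backward depack (hypothetical names):

    #include <stdint.h>
    #include <stdbool.h>

    /* Backward depack: the write pointer 'wp' only ever descends, so a
       single compare tells us when $FF00-$FF07 may have been written. */
    void after_write(uint16_t wp, bool *range_written)
    {
        if (wp <= 0xFF07)   /* wp has reached or passed the range */
            *range_written = true;
    }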
| |
oziphantom
Registered: Oct 2014 Posts: 490 |
OK, I see what you are saying: do the forward decompress, not the backwards decompress. This then means I only have to do the slow method for 255 bytes tops |
| |
Krill
Registered: Apr 2002 Posts: 2980 |
No, what i said should apply to either depack direction.
I'm not quite sure which direction would give more optimisation opportunities for the $FF0X range check at the moment, but both probably have to do with whether the write pointer vs. the back-reference read pointer crosses the 64K bank boundary or not.
But if you intend to depack while loading, forward decompression is the way to go. |
| |
oziphantom
Registered: Oct 2014 Posts: 490 |
Wait, that won't work: to use PHA for the writes one must go backwards (the 6502 stack descends).
Since it's $FF0X, going backwards (assuming you start above it; if you don't, just use a version that skips the check altogether) gives you a guaranteed 248 bytes max that won't need the check ($FFFF down to $FF08, with $FF00-$FF07 being the problematic range).
If you go forwards then you only have 248 bytes where one must check for $FF0X, however you can't use PHA to write... |
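The 248 figure as a worked check (a throwaway sketch; assumes the problematic range is $FF00-$FF07):

    #include <stdio.h>

    int main(void)
    {
        /* Backwards from $FFFF: everything above $FF07 is check-free. */
        printf("check-free going backwards: %d bytes\n", 0xFFFF - 0xFF08 + 1);
        /* Forwards: only the same top 248 bytes, written after the
           range, can hold copies that reach back into $FF00-$FF07.   */
        printf("checked going forwards:     %d bytes\n", 0xFFFF - 0xFF08 + 1);
        return 0;   /* both print 248 */
    }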
| |
oziphantom
Registered: Oct 2014 Posts: 490 |
Quoting Krill: But if you intend to depack while loading, forward decompression is the way to go.
Why is forward better for loading? (apart from it saving you flipping the file) |
| |
Krill
Registered: Apr 2002 Posts: 2980 |
Okay, then backward decompression is a given, so any potential performance differences to forward decompression are moot.
Forward decompression is usually better suited for decompression while loading, mainly because loading itself is usually performed in the forward direction. You can then decompress in-place* in the same direction. That should work for backward decompression as well, given that loading is then done in the same (backward) direction too.
* The read buffer (the loaded compressed file) is a subset of the write buffer (the decompressed file); both end at the same address when using the forward direction. For Exomizer, there are a few (3-ish) compressed bytes beyond the uncompressed data. |
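A sketch of that in-place layout arithmetic (all addresses and sizes are made-up examples, not from the thread):

    #include <stdio.h>

    int main(void)
    {
        unsigned dest_start = 0x4000;  /* hypothetical depack target    */
        unsigned dest_end   = 0x8000;  /* end of the decompressed data  */
        unsigned comp_size  = 0x1800;  /* hypothetical packed file size */
        unsigned margin     = 3;       /* the "3-ish" bytes noted above */

        /* Load the packed file so both buffers end at the same address
           (plus the margin); forward depack starting at dest_start then
           never overwrites compressed bytes it has yet to read.        */
        unsigned load_addr = dest_end + margin - comp_size;
        printf("load packed file at $%04X\n", load_addr);
        return 0;
    }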
| |
oziphantom
Registered: Oct 2014 Posts: 490 |
It's in, and it works :D |