The source is there and open, so why not develop it further, especially as it is rendered unusable on certain files this way? Doynax has not responded to Axis's mail so far, so I was just fixing it on my own in the meantime. It's not as if we coders couldn't help ourselves :-)
Well, with this feature you could depack partially, up to a certain barrier that you release once the memory is free to depack into. Also, I think this depacker was used in a game doynax did, so it has a different focus on depacking than when it is used in a demo?
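To illustrate the barrier idea: overlapped in-place depacking works by loading the packed stream into the top of the destination buffer, offset by a small safety margin, so the output can overwrite input that has already been consumed. Here is a minimal sketch using a toy RLE format of my own (count byte, value byte), not doynax's actual stream format; `inplace_decrunch` and its parameters are made up for the example.

```python
def inplace_decrunch(packed, out_len, margin):
    """Sketch of overlapped in-place depacking (toy RLE format, NOT the
    real depacker's stream): the packed data sits at the end of the
    destination buffer plus a safety margin, and decoding runs forward,
    writing over bytes of input that were already read."""
    buf = bytearray(out_len + margin)
    src = out_len + margin - len(packed)  # packed stream at the top
    buf[src:] = packed
    dst = 0
    while dst < out_len:
        # The margin guarantees the write pointer never passes the
        # read pointer; this assert is the "barrier" being respected.
        assert dst <= src, "output overran unread input"
        n, b = buf[src], buf[src + 1]     # toy RLE pair: count, value
        src += 2
        buf[dst:dst + n] = bytes([b]) * n
        dst += n
    return bytes(buf[:out_len])

# A margin of one byte suffices for this trivial stream:
# pairs (3, 'A') and (4, 'B') expand to b'AAABBBB'.
print(inplace_decrunch(bytes([3, 65, 4, 66]), 7, 1))
```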
Could someone write a little summary of what this can do compared to Byteboozer, LZWVl, and the like, please?
A bit unrelated, but are there any depackers with minimal RAM usage? I.e. stream-decrunch byte by byte without large tables in RAM or the zeropage. Exomizer has that nice get_decrunched_byte, but it uses that 156-byte table (= large in my context *g*). Any ideas?
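For the sake of discussion, here is what a table-free, byte-by-byte stream decruncher looks like in outline. This is a hypothetical minimal LZ format invented for the example (control byte selects literal/match, matches reference the output produced so far), not Exomizer's actual encoding; the only state besides the bit buffer is the destination itself, which doubles as the match window.

```python
def decrunch_bytes(src):
    """Generator yielding one decrunched byte per step, in the spirit of
    get_decrunched_byte. HYPOTHETICAL format: a control byte gives eight
    flags MSB-first, 1 = literal (one raw byte follows), 0 = match
    (length byte then offset byte, a back-reference into the output so
    far). A match length of 0 terminates the stream."""
    src = iter(src)
    out = []                    # stands in for the destination buffer
    bits, count = 0, 0
    while True:
        if count == 0:          # refill the flag buffer
            bits, count = next(src), 8
        literal = bits & 0x80
        bits = (bits << 1) & 0xFF
        count -= 1
        if literal:
            b = next(src)
            out.append(b)
            yield b
        else:
            length = next(src)
            if length == 0:     # end-of-stream marker
                return
            offset = next(src)
            for _ in range(length):
                b = out[-offset]  # copy from already-emitted output
                out.append(b)
                yield b

# Flags 1,1,0,0 -> literals 'A','B', then match len 4 offset 2, then stop.
print(bytes(decrunch_bytes(bytes([0xC0, 65, 66, 4, 2, 0]))))
```

The point is that no separate table is needed: the match window is the data you have already written, so the RAM cost beyond the destination is a handful of zeropage-sized variables.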
I put the stuff into our SVN, but I would be happy to have the most recent version from you, to be sure I based my fixes on it.
Of course things can be switched on and off with ifdefs, but a command-line option would also do.
As for the 256-byte literals, I found a nice way to branch out. Actually I won't get an overrun in the low byte, so I can fork out directly with a beq after the lda lz_scratch, before the adding starts. Here it doesn't hurt and works well :-)
		lda lz_scratch
		adc lz_dst+0		;sec
		sta lz_dst+0
		bcs .upper_rts
		.
		.
		.
.lrun_gotten:
		tay			;sec
		sbc #$01
		sta lz_scratch
I also started to save some bytes on the decruncher, but it is hard work :-)
There are decrunchers that are smaller, but usually speed is the tradeoff, as you then have to do a lot of jsrs to common code.
I was thinking more along the lines of the decruncher actually, though I suppose the users would prefer separate files.
Yeah, as you've probably noticed it was optimized for speed rather than size.