CSDb User Forums


Forums > C64 Coding > Replacing games loader ...
2018-06-20 01:03
Bacchus

Registered: Jan 2002
Posts: 154
Replacing games loader ...

OK, this is about cracking, but still highly related to coding.

Most games I have fiddled with over the years call the loader with a parameter that is the index of the file. Using that same parameter as the index into my IFFL, or converting it to a two-byte string that starts the file name, has worked well for me in most cases.

I am now facing two games that are not distributed yet (old, but no scene version is out) where a lot of data is stored directly on the disk, and the game loads it by direct track and sector. Think of action adventures: the game loads strings or other really small things by reading a T/S and then extracting the needed part. Most of it is hence not really "files", and there are so many pieces that a file per string is plainly not within reach.

I can think of a few approaches:
1) Keep it as data on disk. Allocate the sectors used and then store the files on the unallocated sectors. You can't compress it - it takes a full disk side any way you look at it. It does work and looks rather neat, but it cannot be counted as a firstie.

2) Turn a big chunk of the data into a file and push it to a REU as the first thing you do. The game becomes ever so much more playable and fast, and the file can be compressed efficiently. You do need a REU (or simply enable it in your emulator or Ultimate cart), but it's still not counted as a firstie.

3) Make one big file which you then need to scan, as the original 256-byte sectors are now 254 bytes, so a sector that was a full page is by necessity spread over two sectors in the file-based option. I guess you could also compress the sectors individually and think of the sectors as files in an IFFL - one IFFL file equals one sector. This is an ugly bitch but could be counted as a firstie.
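To make option 3 concrete, here is a minimal sketch (Python, purely for illustration - the constants are the standard 1541 figures, the function name is made up) of the arithmetic: once data lives in a file, each 256-byte sector only carries 254 data bytes (the first two are the T/S link), so a logical 256-byte page always straddles two file sectors.

```python
# How a logical 256-byte page maps into a file built from 254-byte
# sector payloads. Hypothetical helper for illustration only.
SECTOR_PAYLOAD = 254   # usable bytes per sector inside a file
PAGE_SIZE = 256        # one original raw sector / memory page

def page_to_sectors(n):
    """Return (first_sector, offset, last_sector) for logical page n."""
    start = n * PAGE_SIZE                 # byte offset of the page in the file
    first = start // SECTOR_PAYLOAD
    offset = start % SECTOR_PAYLOAD
    # 256 > 254, so a full page can never fit in one payload:
    # it always ends in the following sector (or later).
    last = (start + PAGE_SIZE - 1) // SECTOR_PAYLOAD
    return first, offset, last
```

So even page 0 spans file sectors 0 and 1, which is exactly why the big file has to be scanned rather than indexed with simple shifts.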

Any other thoughts on this technical challenge? I must admit I am growing fond of the REU option, and the firstie restriction is the only thing that holds me back. The Tink games we just released would have been perfect in a REU version. It would have saved SO much work, loading would have been near instant, and it would have been a release of two neat files.

Am I missing any options, or can someone provide some lateral thinking that opens up new options by finding approaches I have missed?
 
... 36 posts hidden ...
 
2018-06-20 20:57
chatGPZ

Registered: Dec 2001
Posts: 11100
Quote:
For the sake of argument; let's assume five files of 20 blocks and then 400 "256 byte pages". That makes a total of 500 blocks.

you are missing "how much memory is there to use for it".

eg: your example is pretty trivial if you have 800 bytes for an index table
2018-06-20 22:22
Bacchus

Registered: Jan 2002
Posts: 154
You rarely have memory - you claim it ;)

Still, IMHO tables for over 256 files are less trivial. Forget drive RAM. 400 files means 1200 bytes of tables: track, sector and offset, times 400. That sort of free memory is not common in any game.
2018-06-20 23:15
ChristopherJam

Registered: Aug 2004
Posts: 1370
…so then perhaps for each group of 4 source blocks, have the track/sector/offset of a record in the IFFL that itself contains four lightly compressed blocks, preceded by a header giving the compressed length of each of the four?

Then you only have 300 bytes of tables, 200 if you pad each group of four to a block boundary.

Of course, then you might have to read as many as three extra blocks, but they should all be on the same track most of the time.

(apologies if my terminology is off; I have zero experience in cracking, obviously)
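ChristopherJam's scheme above can be sketched roughly like this (Python for illustration; `cluster_table`, `read_record` and `decompress` are hypothetical stand-ins for the loader, not a real API): one table entry per group of four source blocks, with a four-byte length header at the front of each IFFL record.

```python
# Sketch of the 4-blocks-per-cluster IFFL lookup: the table holds one
# (track, sector, offset) entry per cluster; each record starts with a
# header of four compressed lengths, one per block in the cluster.
BLOCKS_PER_CLUSTER = 4

def locate_block(block, cluster_table, read_record, decompress):
    """Fetch original 256-byte block `block` from the clustered IFFL."""
    cluster, slot = divmod(block, BLOCKS_PER_CLUSTER)
    track, sector, offset = cluster_table[cluster]
    record = read_record(track, sector, offset)    # raw IFFL record bytes
    lengths = record[:BLOCKS_PER_CLUSTER]          # header: 4 compressed lengths
    start = BLOCKS_PER_CLUSTER + sum(lengths[:slot])
    return decompress(record[start:start + lengths[slot]])
```

For the 400-block example that means 400 / 4 = 100 table entries of 3 bytes each, i.e. the 300 bytes mentioned above.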
2018-06-21 00:57
Bacchus

Registered: Jan 2002
Posts: 154
If the compressed sectors fit on the same physical sector, the loading wouldn't be too delayed.

I agree this is a valid way forward. It's sort of a more detailed version of the line of thought already provided above.
2018-06-21 07:28
Mason

Registered: Dec 2001
Posts: 459
Well, it depends on the number of files on the tracks.

You can find out how much room you have for a buffer in memory - if it's $0800 and you can compress the files into $80 bytes each, then you load in chunks of $0800 - that would be 16 files.

16 x 128 in the IFFL would make it possible to store 768 packed $80 files in $0800 chunks. To unpack, use a method like old Level Crueler, where you load the packed files into memory and depack them afterwards.
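The chunk arithmetic in Mason's scheme is simple fixed-size indexing; a tiny sketch (Python, hypothetical names, assuming the $0800 buffer and $80 packed size from the post):

```python
# Fixed-size packed files: $80 bytes each, loaded in $0800-byte chunks,
# so 16 files per chunk. File index -> (chunk to load, byte offset in it).
CHUNK = 0x0800
PACKED = 0x80
FILES_PER_CHUNK = CHUNK // PACKED   # 16

def locate_packed(index):
    """Return (chunk_number, offset_in_chunk) for packed file `index`."""
    chunk, slot = divmod(index, FILES_PER_CHUNK)
    return chunk, slot * PACKED
```

No per-file table at all: the fixed packed size makes the position computable, at the cost of padding every file to $80 bytes.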
2018-06-21 08:15
JackAsser

Registered: Jun 2002
Posts: 1987
If you have >256 files then you already reference the file table via a 16-bit value in the code. Why is a table needed at all? Why not provide the t/s/off/size directly in the code when needed?
2018-06-21 09:07
Martin Piper

Registered: Nov 2007
Posts: 631
Quote: If you have >256 files then you already reference the file table via a 16-bit value in the code. Why is a table needed at all? Why not provide the t/s/off/size directly in the code when needed?

Depends on the level of the code hook, I suppose. Converting from the old track/sector to a new track/sector/pos table could change less in the calling code.
2018-06-21 11:17
MagerValp

Registered: Dec 2001
Posts: 1055
Quoting Bacchus
400 files means 1200 bytes tables. Track, sector and offset.


Strictly speaking you only need 10 bits to store the track and sector, if you can spare a few bytes for a small div routine. Even so, that's 900 bytes for the tables, which is a lot.

I'm really liking ChristopherJam's idea of using 1k clusters and three bytes of offsets before the first compressed sector. You still need to read the full compressed cluster, but you don't need more than a single 256-byte buffer in RAM, since you can skip the data you don't want. Performance-wise I don't think there would be a large hit from using 1k clusters, since it's a relatively small time compared to drive seeking and decrunching the data.
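MagerValp's 10-bit point follows from the 1541 zone layout: a 35-track disk has 17x21 + 7x19 + 6x18 + 5x17 = 683 blocks, so a linear block number fits in 10 bits and a small div routine recovers the T/S. A sketch of that "div routine" (Python, illustrative):

```python
# Standard 1541 speed zones: (number of tracks, sectors per track).
# 683 blocks total, so a linear block number fits in 10 bits.
ZONES = [(17, 21), (7, 19), (6, 18), (5, 17)]

def block_to_ts(block):
    """Map a linear block number (0..682) back to (track, sector)."""
    track = 1
    for tracks, sectors in ZONES:
        if block < tracks * sectors:
            return track + block // sectors, block % sectors
        block -= tracks * sectors
        track += tracks
    raise ValueError("block out of range")
```

On the 6502 the divisions would be a small loop of subtractions per zone, which is the "few bytes" being traded for the packed table.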
2018-06-21 11:55
Bacchus

Registered: Jan 2002
Posts: 154
Argh. Big reply didn't stick. Recreating the core parts: :-(

@tim I hear you - our front end guy got snowed under and can't do it now. It's still our ambition to do it...

@jackasser

# Files vs direct T/S

If data is stored in T/S format, you have a number of advantages: you have the full 256 bytes, and the data is where you left it. Using files, data "moves around". Where on the disk a data file ends up differs per disk, depending on the order in which you copy the files. Add a little note to a disk before you copy your game and the T/S used are totally different than if you copy it directly to the disk without the note. Also, interleaving differs between drive kernals: ProfessionalDOS doesn't use 10, and I also think Jiffy goes for less. Again - with files, you can make zero assumptions about where the data is.

You hence need to scan the IFFL file and build tables, as there is no way to tell programmatically where the stuff is. That luxury is reserved for the static T/S environment. You typically launch the scanner before the intro so it can do its job while you watch the intro.
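The scan itself is a single sequential pass over the big file. A toy sketch (Python; the two-byte length prefix per record is a made-up layout just to show the shape of the scan, not any particular IFFL format):

```python
# One pass over the concatenated IFFL data to build the lookup table.
# Assumes each record is prefixed with a 2-byte little-endian length.
def scan_iffl(data):
    """Return the start offset of every record in the concatenated file."""
    offsets, pos = [], 0
    while pos < len(data):
        offsets.append(pos)
        length = data[pos] | (data[pos + 1] << 8)
        pos += 2 + length
    return offsets
```

On real hardware this pass happens at loader init, which is exactly why hiding it behind the intro works so well.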

# Available memory

Your loader + depacker takes memory. If you're lucky, it fits where the old loader was. Otherwise you need to find RAM in the game which is most often a real challenge.

In the ideal IFFL case, the loader and tables fit in drive RAM. No need to fiddle with computer RAM. That works fine for scenarios where the 2k of drive RAM is enough for tables, code, sector buffer and everything else you need there. Typically you hit a brick wall at around 128 files, depending on how compact the loader is. @MagerValp moved the tables to the computer in his Uload; that works fine if you have the RAM in the computer.

It all boils down to tradeoffs, and my main question was around the scenario of 256+ files and quite restricted memory.

Clustering files gives you the benefit of smaller tables at the cost of slightly worse average loading times (a small cost!).
2018-06-21 13:22
Perplex

Registered: Feb 2009
Posts: 254
When the files are stored on disk, tracks and sectors are determined by some kind of logic, not chosen randomly. If you implement that same logic in native code, all you need is a table with the size of each file, and you can calculate the t/s and offset from that, right?
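In other words, if the placement logic is deterministic, a size-only table suffices: the start of file i is the running sum of the sizes before it, and the same placement logic maps that back to track/sector/offset. The summing half of that, as a sketch (Python, illustrative):

```python
# With a deterministic writer, a per-file size table is enough:
# the start position of file i is the sum of all earlier sizes.
def start_offsets(sizes):
    """Cumulative start offsets; sizes[i] is the length of file i."""
    offsets, total = [], 0
    for size in sizes:
        offsets.append(total)
        total += size
    return offsets
```

The remaining step, replaying the interleave/allocation logic of the original mastering tool, is game-specific and is where the real work would be.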