Forums > C64 Coding > New life for your underloved datassette unit :D
2021-10-21 02:22
Zibri
Account closed

Registered: May 2020
Posts: 304
New life for your underloved datassette unit :D

The first phase of testing just ended.
(Still in the packaging and refining phase)

But I wish to share with you all my latest accomplishment.

You might want to check this out:
https://twitter.com/zibri/status/1450979434916417540
and this:
https://twitter.com/zibri/status/1450979005117644800

The fastest example (11 kilobit/sec) has the same (or better) error resilience as "turbo250", but it is 3 times faster.

The slowest one (8 kilobit/sec) has the same error resilience as the standard Commodore slow "save", but it is 100 times faster than that and twice as fast as turbo250.

;)

Notes:

1) Faster speeds are possible if the tape is written with professional equipment or a hi-fi deck with stabilized speed and virtually no wobble.

2) If the tape is emulated (Tapuino or similar projects), the speed can go up to 34 kilobit/sec.

3) Even with a datassette, higher speeds are possible, but they highly depend on the condition of the tape, the datassette's speed stability and azimuth.
 
... 327 posts hidden ...
 
2021-11-06 11:54
Zibri
Account closed

Registered: May 2020
Posts: 304
Quote: Using imbalances in the likelihood of zeros vs ones in the source data is a very inefficient way of reducing the expected load duration, given how few bits you get per cycle from even the fastest of tape loaders.

You get much better mileage from compressing the file first, and optimising your loader to represent random data with as short a tape file as possible - and attempts at the latter date back decades. (At least if your goal is to minimise total time to load and decrunch the file.)


1) Tape encoding is based on pulse lengths, so there is an imbalance and the length of a 0 or a 1 is different in ANY tape encoding on the C64.

2) I didn't check any other code. I like to study and implement things myself, and I came up with the "non-original" idea of 2-bit encoding by myself.

3) Since 2 bits have 4 possible combinations, I just privileged the most common ones in an uncompressed file (see the sketch after this list). In a compressed file, every pair of bits should have roughly the same statistical probability.

4) My program does not do any compression or decompression, but depending on the file it "compresses" the time spent loading it.
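
As an aside, a minimal sketch in Python of the idea in point 3: a 2-bits-per-pulse scheme that hands the shortest pulse to whichever bit pair is most common in the file. This is only an illustration, not Zibri's actual encoder, and the pulse lengths are made-up placeholder values.

from collections import Counter

# Hypothetical pulse lengths in CPU cycles, shortest first.
PULSES = [296, 424, 552, 680]

def encode(data: bytes) -> list:
    # Split every byte into four bit pairs, most significant pair first.
    dibits = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            dibits.append((byte >> shift) & 0b11)
    # Rank the four possible pairs by how often they occur in this file,
    # then give the shortest pulse to the most frequent pair.
    ranking = [pair for pair, _ in Counter(dibits).most_common()]
    ranking += [p for p in range(4) if p not in ranking]
    table = {pair: PULSES[rank] for rank, pair in enumerate(ranking)}
    return [table[d] for d in dibits]

# A file full of zero bytes maps almost entirely to the shortest pulse,
# while evenly distributed (compressed-like) data averages over all four.
print(sum(encode(bytes(1000))), sum(encode(bytes(range(256)) * 4)))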

I remember many bad cracks having a decompression program which sometimes took 10-20 or even 30 seconds.

also very few programs were able to load more than 202 blocks.

And also, NONE of them had zero jitter because everyone thought that the tape "imperfection" was way more than any jitter. That's a wrong concept and I proved it even on the 1541 with my non-jittering speed test which as usual was first accused, then insulted and then copied (badly) by groepaz, but that's how things go here evidently.
2021-11-06 12:11
tlr

Registered: Sep 2003
Posts: 1790
Quoting Zibri
1) Tape encoding is based on pulse lengths, so there is an imbalance and the length of a 0 or a 1 is different in ANY tape encoding on the C64.

In general yes, but not true for RLL (1) or GCR (2).

(1) Datassette RLL Mastering Demo
(2) tapmaster 0.4 (used in the included trance sector tap)
2021-11-06 12:16
Krill

Registered: Apr 2002
Posts: 2980
Quoting Zibri
1) Tape encoding is based on pulse lengths, so there is an imbalance and the length of a 0 or a 1 is different in ANY tape encoding on the C64.
Yes, and that fact just shows that two pulse lengths encoding single bits is far from the theoretical optimum. :)
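
To put rough numbers on that point, here is a back-of-the-envelope comparison in Python. The pulse lengths are invented for illustration (PAL clock assumed) and are not those of any particular loader.

# PAL C64 CPU clock, cycles per second.
CLOCK = 985248

def kbit_per_sec(pulse_lengths, bits_per_pulse):
    # Average pulse duration, assuming evenly distributed data.
    avg = sum(pulse_lengths) / len(pulse_lengths)
    return CLOCK / avg * bits_per_pulse / 1000

print(kbit_per_sec([300, 600], 1))            # one bit per pulse, two widths: ~2.2 kbit/s
print(kbit_per_sec([300, 420, 540, 660], 2))  # two bits per pulse, four widths: ~4.1 kbit/s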
2021-11-06 12:55
SLC

Registered: Jan 2002
Posts: 52
Neo-Rio: My loader is *not* an implementation of Zibri's loader. Apart from checking out the claimed benchmarks, I have not paid much attention to technical details. But there's a big likelihood that it's based on the same principles. I have no idea about the actual implementation as I never saw the actual loader.

And Zibri, drop the attitude.

None of the ideas I used came from you. The method is not exactly new, and I'm not sure it's the first time it's implemented on tape loaders either. I'm not even sure I've still landed on the optimal solution and have a few more ideas to try out. You told me to take "my projects" elsewhere, so I did... wasn't so much talk about cooperation then.

The only thing here that somehow is right is the fact I was triggered by your attitude to give it a go :)
2021-11-06 13:06
Zibri
Account closed

Registered: May 2020
Posts: 304
Quote: Neo-Rio: My loader is *not* an implementation of Zibri's loader. Apart from checking out the claimed benchmarks, I have not paid much attention to technical details. But there's a big likelihood that it's based on the same principles. I have no idea about the actual implementation as I never saw the actual loader.

And Zibri, drop the attitude.

None of the ideas I used came from you. The method is not exactly new, and I'm not sure it's the first time it's implemented on tape loaders either. I'm not even sure I've still landed on the optimal solution and have a few more ideas to try out. You told me to take "my projects" elsewhere, so I did... wasn't so much talk about cooperation then.

The only thing here that somehow is right is the fact I was triggered by your attitude to give it a go :)


Drop the b*llsh*t.
Likelihood my a$$.
You came out of your cave after I posted my videos and explanation.
None of the ideas I used came from you.
That's why they came before mine.. oh no wait.. they came AFTER!
Hmmm... weird uh?
"wasn't so much talk about cooperation then."
If you seek cooperation, as I said, you open your own thread and then someone might contribute there. I just said you were doing it in the wrong place, and posting a TAP does not help or contribute in any way to something I already did and explained.
2021-11-06 13:52
SLC

Registered: Jan 2002
Posts: 52
And that is exactly what I did...

If you want the chain of events that led to this:

1. I posted an experiment here, you responded with rage

2. At first I didn't think much of it, but then I decided to try to give you some competition anyway, doing it OUTSIDE this thread, as per your request.

3. I then told your favorite friend Groepaz what I wanted to try and asked if he had any idea how to set up the timers for this, and he suggested the approach of cascading timers. At that point I was actually not aware of what encoding scheme you were using, and I still have no knowledge of how you implemented it, nor do I care.

4. On Wednesday I started coding to see if what Groepaz suggested would work, and it showed promising results so I kept on working on it.

5. Yesterday I decided it was time to drop it on csdb.. :-)

Also: when I asked you the specific question about the benchmarking, I only wanted a reference to know where I stood compared to my competition. It was not clear whether the benchmark was loading uncompressed data or not, because obviously loading a lot of the same bit pair would skew the measurement for you as well as for me. But apart from the benchmarks and the idea of making a loader, nothing was "borrowed" from you or your ideas.

If you really think you are the only one in the world who could come up with a two bit scheme, you're at the very best delusional.

I am not even sure it is the optimal route, which is why I am still going to try a few other approaches, returning to a three pulse scheme. Now you know that, so now I can accuse you of stealing my idea if you ever do the same!
2021-11-06 13:56
ChristopherJam

Registered: Aug 2004
Posts: 1409
Quoting Zibri
1) Tape encoding is based on pulse lengths, so there is an imbalance and the length of a 0 or a 1 is different in ANY tape encoding on the C64.


If you still think that is true, then you have not understood the discussion in Tape loaders using more than two pulse widths for data. Even you don't simply map 0 to one pulse length and 1 to another.

Quote:
2) I didn't check any other code. I like to study and implement things myself and I came up with the "non original" idea of 2 bits encoding by myself.


Sure - but in that case, you should have no trouble believing that SLC came up with many similar ideas to yours independently. Convergent evolution happens - your posts just reminded him to give it a go :) See the comment he left while I was writing this one for details!

Quote:
3) since 2 bits have 4 possible combinations I just privileged the more common in an uncompressed file. In a compressed one every couple of bits should have the same statistical probability.


This is true - and it is also an indication that smaller representations are possible.
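
A quick way to see that (the probabilities below are assumed, purely to illustrate the point): compute the Shannon entropy of the bit-pair distribution. Anything below 2 bits per pair means a shorter representation of the same data exists.

from math import log2

def entropy(probs):
    # Shannon entropy in bits per symbol.
    return -sum(p * log2(p) for p in probs if p > 0)

uniform = [0.25, 0.25, 0.25, 0.25]   # compressed / random data
skewed  = [0.55, 0.20, 0.15, 0.10]   # an assumed uncompressed-file bias

print(entropy(uniform))  # 2.0 bits per pair: nothing left to gain
print(entropy(skewed))   # ~1.68 bits per pair: roughly 16% smaller is possible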

Quote:
I remember many bad cracks having a decompression program which sometimes took 10-20 or even 30 seconds.


Yes, decompression programs are a lot faster than they used to be. 20 seconds is considered unacceptably slow these days, with the bar being well under 10 seconds for decompressing to all of memory.

Halving the size of a file that would otherwise take over a minute to load even with a super fast tape turbo will save at least 30 seconds of loading time, so as long as the decrunch takes under 30 seconds you end up ahead. Even compression by a mere 30% would still be a drastic improvement.
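
The trade-off in numbers (the figures are example values, not measurements):

def total_seconds(file_kb, packed_ratio, kbit_per_s, decrunch_s):
    # Time to load the (possibly crunched) file plus the time to decrunch it.
    load_s = file_kb * packed_ratio * 8 / kbit_per_s
    return load_s + decrunch_s

print(total_seconds(50, 1.0, 8, 0))   # 50 KB raw at 8 kbit/s: 50.0 s
print(total_seconds(50, 0.5, 8, 12))  # halved file plus a 12 s decrunch: 37.0 s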

Quote:
also very few programs were able to load more than 202 blocks.


True - but also this is rarely necessary now given modern compression codecs.

Quote:
And also, NONE of them had zero jitter because everyone thought that the tape "imperfection" was way more than any jitter. That's a wrong concept and I proved it even on the 1541 with my non-jittering speed test which as usual was first accused, then insulted and then copied (badly) by groepaz, but that's how things go here evidently.


Jitterless I can certainly see a point to, if you're pushing your throughput so close to bandwidth limitations.
2021-11-06 14:36
Neo-Rio
Account closed

Registered: Jan 2004
Posts: 63
Quote: Neo-Rio: My loader is *not* an implementation of Zibri's loader. Apart from checking out the claimed benchmarks, I have not paid much attention to technical details. But there's a big likelihood that it's based on the same principles. I have no idea about the actual implementation as I never saw the actual loader.

And Zibri, drop the attitude.

None of the ideas I used came from you. The method is not exactly new, and I'm not sure it's the first time it's implemented on tape loaders either. I'm not even sure I've still landed on the optimal solution and have a few more ideas to try out. You told me to take "my projects" elsewhere, so I did... wasn't so much talk about cooperation then.

The only thing here that somehow is right is the fact I was triggered by your attitude to give it a go :)


More than happy to be corrected here. I did not see any code or make comparisons, I only made an assumption from the chatter in this forum. So if I'm wrong, I own that mistake right there.

In any case I tried mastering both Slushload v2 and Zibri's test tap he sent me to try out.

I have a pretty woeful clone datasette. I tried aligning it and giving it a head clean, and with firmware 3.10 on an ultimate 1541-II and tape adapter I could not get either generated TAP to master properly. Slushload v2 failed to take off (0x0a speed from a file I generated, which worked in VICE), and Zibri's loaded, but the end result made some pretty cool glitch music along with corrupted graphics :D

Now granted that my tape deck was a pile of cack, and the ultimate 1541 is not the best mastering tool (it won't even let you lead in the motor before recording) -- and yet Gyrospeed-mastered TAPs I created played back OK on it.
So while Gyrospeed is not the fastest tape turbo in existence, it is *mostly* reliable across really badly maintained datasettes IMHO.

And I suppose this was the problem with datasettes back in the day - just wide variance in equipment, and nobody really giving two flying firetrucks about it because they were cheap and nasty. While it is theoretically possible to get amazing speeds, the datasettes in question need to be properly maintained - which mine certainly was not. That's even though it is an old-stock clone datasette which I haven't had for very long, and which also has a funky motor. Yet it still manages to load a lot of stuff normally.

The other thing is that emulators don't really "count" so to speak, as you should expect a perfect load every time. But this is obvious....

I am by far and away no subject matter expert on this. Only reporting so far what I'm seeing and trying to see if I can work to improve the situation. Fully expecting to be corrected on these assumptions.
2021-11-06 15:15
SLC

Registered: Jan 2002
Posts: 52
I would like to know more about what you tried that didn't work, so if you could send me a PM about it (keeping it outside of this thread etc.), I would really appreciate it.

The 1541U does btw delay a second before it starts outputting data after you start recording, which should be enough time for the signal to settle.

The challenge with writing these signals back to tape (and the reason Zibri's real-C64 approach might yield a better result, as he has full control of that process) is that you need to be more careful about how you render the signal - but nothing we do with the .tap files can affect how, for example, the 1541U handles it.
2021-11-06 15:49
Zibri
Account closed

Registered: May 2020
Posts: 304
Quoting ChristopherJam

Yes, decompression programs are a lot faster than they used to be. 20 seconds is considered unacceptably slow these days, with the bar being well under 10 seconds for decompressing to all of memory.

Halving the size of a file that would otherwise take over a minute to load even with a super fast tape turbo will save at least 30 seconds of loading time, so as long as the decrunch takes under 30 seconds you end up ahead. Even compression by a mere 30% would still be a drastic improvement.


Jitterless I can certainly see a point to, if you're pushing your throughput so close to bandwidth limitations.


Yep.. all that's right.
About the jitter, think of it this way: if you have a jitter of 3 cycles, it's like having a third of the accuracy.
HeadAlign, for example, has a total jitter of more than 20 cycles, and "most" of the movement you see is its own jitter and not the tape's.
Just saying..
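
For scale, a toy calculation (all numbers assumed, purely for illustration): the more jitter the measurement has, the fewer pulse lengths can be packed into a given range and still be told apart reliably.

def distinct_pulses(shortest, longest, loader_jitter, tape_spread):
    # How many nominal pulse lengths fit between shortest and longest
    # (CPU cycles) if each must be separated by the worst-case spread.
    spacing = loader_jitter + tape_spread + 1
    return (longest - shortest) // spacing + 1

print(distinct_pulses(296, 680, 0, 40))   # jitter-free loader: 10 usable lengths
print(distinct_pulses(296, 680, 3, 40))   # 3 cycles of loader jitter: 9
print(distinct_pulses(296, 680, 20, 40))  # 20+ cycles (figure claimed above for HeadAlign): 7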