CSDb User Forums


Forums > C64 Coding > Tape loaders using more than two pulse widths for data
2018-12-08 20:16
ChristopherJam

Registered: Aug 2004
Posts: 1370
Tape loaders using more than two pulse widths for data

I’ve been thinking a bit about a novel tape encoding, but it would rely on (among other things) having more than two pulse widths. So far as I can see, none of the old turbo loaders used a long pulse for anything beyond end of byte/end of block - the bitstream itself was just a sequence of short and medium pulses (usually around 200 cycles for short, 300 to 450 for medium).

Is there any particular reason none of the popular loaders used (eg) four pulse widths, one per bitpair? Even using quite widely separated durations of 210/320/450/600 would then lead to an average time per bit of about 197 cycles, a massive improvement on the 265 cycles per bit you’d get for 210/320.

(The idea I’ve been working on is a bit more complex than that, but if a simple bitpair scheme wouldn’t work for some reason, then the idea I have would need to be scaled back somewhat. It's promising that long pulses were at least used for framing, mind..)
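The averages quoted above are easy to sanity-check (my own sketch, not code from the thread): with four equiprobable pulse widths each pulse carries a bitpair, so cycles per bit is half the mean pulse duration.

```python
# Sanity check of the quoted averages. With four equiprobable pulse
# widths, each pulse encodes two bits, so cycles per bit is half the
# mean pulse duration; with two widths, it equals the mean duration.
widths_2 = [210, 320]            # one bit per pulse
widths_4 = [210, 320, 450, 600]  # two bits per pulse

cpb_2 = sum(widths_2) / len(widths_2) / 1  # cycles per bit, 2 widths
cpb_4 = sum(widths_4) / len(widths_4) / 2  # cycles per bit, 4 widths

print(cpb_2)  # 265.0
print(cpb_4)  # 197.5
```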
 
 
2018-12-12 10:24
enthusi

Registered: May 2004
Posts: 674
Yes, at least the backside had music on it ;-)
But two other things to note: if you press stop/play, the starting motor will also produce all sorts of pulse lengths (at least when read back via a C2N), and of course these graphs show TAP bytes, not mere data bytes, so encoded pauses are also shown as pulses. That is the reason for those seemingly spurious pixels at the same horizontal position. The others, I'd bet, are music as well.
50 cycles is _WAY_ too short unless you record/play on the very same device without touching anything (and no badlines, sprites etc., obviously).
2018-12-12 12:07
thrust64

Registered: Jun 2006
Posts: 8
That loader is mine. :)

And of course it was only meant for personal use on the same hardware. And hey, it worked for me! :D

I don't remember exactly how I came to 7 pulse lengths. But I did some statistics about bit combinations from some files I had, and then some math based on the minimum pulse length differences which seemed feasible. And that resulted in the 7 lengths for optimal performance.
2018-12-12 12:11
thrust64

Registered: Jun 2006
Posts: 8
My target speed was 10k. That also played a role, IIRC.
2018-12-12 12:30
ChristopherJam

Registered: Aug 2004
Posts: 1370
I'm impressed :)
2018-12-12 12:47
enthusi

Registered: May 2004
Posts: 674
Ah good to see you here :)
I, too, was/am impressed, hehe.
2018-12-15 00:39
ChristopherJam

Registered: Aug 2004
Posts: 1370
I've managed to verify my hypothesis that the optimal weights to use for an arithmetic encoding based loader are those that ensure each symbol outputs the same number of bits per cycle.

Approximating the arithmetic encoder with a huffman code only loses one or two percent of efficiency, so that would probably be a somewhat saner path to take.

On the way, I found a marginally faster encoding using thrust64's pulse lengths. Using output words of { 11, 01, 101, 100, 000, 0011, 0010 } results in an average cost per bit of 102.9 cycles, compared to thrust64's 108.8.
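That 102.9 figure can be reproduced with a few lines (my own reconstruction, not the linked script), under the assumption that random input bits make a codeword of length L occur with probability 2^-L:

```python
# Mean cycles per output bit for a prefix code over thrust64's pulse
# lengths (167 + 50*n cycles). With random input bits, a codeword of
# length L occurs with probability 2**-L.
durations = [167 + 50 * n for n in range(7)]
codewords = ["11", "01", "101", "100", "000", "0011", "0010"]

probs = [2.0 ** -len(w) for w in codewords]
assert abs(sum(probs) - 1.0) < 1e-12  # complete prefix code

mean_cycles_per_bit = (
    sum(p * d for p, d in zip(probs, durations))
    / sum(p * len(w) for p, w in zip(probs, codewords))
)
print(f"{mean_cycles_per_bit:.1f}")  # 102.9
```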

Have some pretty graphs, and the script that generated them.

Each graph covers a family of pulse lengths, as given by the equations in the titles. The two lines show optimal cost per output bit, as a function of the number of pulse lengths used for the encoding. This is purely for the data rate within each word; framing bits and error correction are outside the scope of this investigation, and would in practice add an overhead of 10-20%.

There's a table after each graph giving the optimal encoding for the case where seven distinct pulse lengths are used. Note that if the gap is comparable to the shortest pulse length, the optimal huffman encoding is isomorphic to transition=1, no transition=0, with some extra transitions inserted after long runs of zeros. This changes when the gap is relatively small compared to the shortest length.





Using Thomas Jentzsch's pulse spacing (167+50*n cycles)
but optimising the encoding for that spacing
+----------------+-----------------+--------------+
| Pulse duration | arithmetic code | huffman code |
+----------------+-----------------+--------------+
| 167 cycles     | 0.31886         | 11           |
| 217 cycles     | 0.22645         | 01           |
| 267 cycles     | 0.16083         | 101          |
| 317 cycles     | 0.11422         | 100          |
| 367 cycles     | 0.08112         | 000          |
| 417 cycles     | 0.05761         | 0011         |
| 467 cycles     | 0.04091         | 0010         |
+----------------+-----------------+--------------+
mean cycles per bit, arithmetic code = 101.3
mean cycles per bit,    huffman code = 102.9





Using AR Turbo's short pulse, Fast Evil's spacing
(150+80*n cycles)
+----------------+-----------------+--------------+
| Pulse duration | arithmetic code | huffman code |
+----------------+-----------------+--------------+
| 150 cycles     | 0.39979         | 0            |
| 230 cycles     | 0.24517         | 10           |
| 310 cycles     | 0.15035         | 110          |
| 390 cycles     | 0.09220         | 1110         |
| 470 cycles     | 0.05654         | 11111        |
| 550 cycles     | 0.03468         | 111101       |
| 630 cycles     | 0.02127         | 111100       |
+----------------+-----------------+--------------+
mean cycles per bit, arithmetic code = 113.4
mean cycles per bit,    huffman code = 116.2





Using AR Turbo's short pulse, Fast Evil's min spacing,
increasing gap for longer pulses (150+80*n**1.1 cycles)
+----------------+-----------------+--------------+
| Pulse duration | arithmetic code | huffman code |
+----------------+-----------------+--------------+
| 150 cycles     | 0.41389         | 0            |
| 230 cycles     | 0.25856         | 10           |
| 320 cycles     | 0.15230         | 110          |
| 416 cycles     | 0.08660         | 1110         |
| 516 cycles     | 0.04810         | 11111        |
| 618 cycles     | 0.02640         | 111101       |
| 724 cycles     | 0.01415         | 111100       |
+----------------+-----------------+--------------+
mean cycles per bit, arithmetic code = 117.9
mean cycles per bit,    huffman code = 119.7





Something a bit more conservative (180+90*n cycles)
+----------------+-----------------+--------------+
| Pulse duration | arithmetic code | huffman code |
+----------------+-----------------+--------------+
| 180 cycles     | 0.38997         | 0            |
| 270 cycles     | 0.24352         | 10           |
| 360 cycles     | 0.15207         | 110          |
| 450 cycles     | 0.09497         | 1110         |
| 540 cycles     | 0.05930         | 11110        |
| 630 cycles     | 0.03703         | 111111       |
| 720 cycles     | 0.02313         | 111110       |
+----------------+-----------------+--------------+
mean cycles per bit, arithmetic code = 132.5
mean cycles per bit,    huffman code = 136.4


https://jamontoads.herokuapp.com/csdb/201812/compare_encodings...

No, I'm not going to write a loader based on this investigation in the near future. But I hope this is useful, or at least interesting, to someone else out there.

Many thanks to SLC, Enthusi, thrust64 & tlr for their assistance.
2018-12-16 14:52
thrust64

Registered: Jun 2006
Posts: 8
Cool stuff. I remember that I did some similar research back then.

You limited your research to 7 different pulse lengths. Depending on the shortest pulse and the spacing, this may not be the optimal solution. E.g. for the last, conservative approach, I am pretty sure that a lower number of pulse lengths would give a better result.

Also, what sample data did you base your pattern distribution on? E.g. uncompressed data must result in pretty different values depending on the content, and very different from compressed data. For the latter, all bits and bit combinations should have about the same probability.
2018-12-16 21:18
ChristopherJam

Registered: Aug 2004
Posts: 1370
Quoting thrust64
Cool stuff. I remember that I did some similar research back then.


Thanks! I assumed as much.


Quote:
You limited your research to 7 different pulse lengths. Depending on the shortest pulse and the spacing, this may not be the optimal solution. E.g. for the last, conservative approach, I am pretty sure that a lower number of pulse lengths would give a better result.


My apologies - the tables are results for 7 pulse lengths, but the graphs show average cycles per bit as a function of the number of pulse lengths used, from 2 to 8. If you look at those, you can see that the more pulse lengths you use, the better your throughput (though admittedly you're most of the way to optimal by five or six).

Quote:
Also, what sample data did you base your pattern distribution on? E.g. uncompressed data must result in pretty different values depending on the content, and very different from compressed data. For the latter, all bits and bit combinations should have about the same probability.


I'm assuming random data - compression helps a *lot* if your raw speed is this low.
2018-12-16 21:55
thrust64

Registered: Jun 2006
Posts: 8
> ...but the graphs show average cycles per bit as a function of the number of pulse lengths used, from 2 to 8

Ah, my bad. Interesting that 8 is still the best even with conservative timing.

> I'm assuming random data...

Hm... With random data, shouldn't the arithmetic values be 0.5 for single bits and 0.25 for two-bit combinations? How did you get these different values? I am sure I am missing something (again).
2018-12-17 09:08
ChristopherJam

Registered: Aug 2004
Posts: 1370
Quoting thrust64
With random data, shouldn't the arithmetic values be 0.5 for single bits and 0.25 for two-bit combinations? How did you get these different values? I am sure I am missing something (again).


It's a bit inside out - the loaded file is effectively a compressed representation of the stream of pulses from the tape (something of a headfuck, I know) - so it's the probabilities of the pulses arriving in the stream that's the pertinent factor.

The key point is that longer pulses (by definition) contribute more to the length of the recording than the short pulses, so unless they contain as much information per second as the shorter pulses, they're not pulling their weight.

I initially wasn't entirely sure of my intuition on that count, so I also wrote a function that just incrementally adjusts the weights for a set of symbols and runs simulated encodings.

That took a bit more to implement but got identical results in the end; I left the code for that in the script I linked above (it's just commented out).



Short version:

If r is the optimal transmission rate in bits per cycle, then each symbol needs to encode d*r bits, where d is the duration of the symbol in question. The probabilities of the symbols sum to one, so you just need to solve sum(0.5^(d*r))=1 to find r; then you can use r to find the bits per symbol.
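That closing equation can be solved numerically in a few lines (my own sketch, shown here for the 167+50*n durations from the first table; simple bisection rather than anything from the linked script):

```python
# Solve sum(0.5**(d*r)) = 1 for the rate r (bits per cycle) by bisection,
# then read off each symbol's probability 0.5**(d*r) and bit count d*r.
durations = [167 + 50 * n for n in range(7)]

def excess(r):
    """How far the symbol probabilities are from summing to one."""
    return sum(0.5 ** (d * r) for d in durations) - 1.0

lo, hi = 1e-6, 1.0  # excess(lo) > 0, excess(hi) < 0
for _ in range(60):
    mid = (lo + hi) / 2
    if excess(mid) > 0:
        lo = mid
    else:
        hi = mid
r = (lo + hi) / 2

print(f"mean cycles per bit = {1 / r:.1f}")  # ~101.3, matching the table
for d in durations:
    print(f"{d} cycles: p = {0.5 ** (d * r):.5f}, bits = {d * r:.3f}")
```

The probability printed for the 167-cycle pulse comes out at 0.31886, agreeing with the arithmetic-code column of the first table.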