| |
doynax Account closed
Registered: Oct 2004 Posts: 212 |
Understanding 1541 byte-sync and buffering
Lately I have been attempting to work the kinks out of some old drive code. To be honest, much of it was produced by trial and error and by peeking at the code of others, so I've been putting off getting to grips with how the device _actually_ works for quite some time now.
At the moment I'm stuck trying to resolve some issues with the drive head occasionally dropping bytes during reads and injecting extra bits during writes and I've come to the conclusion that I ought to make sense of how the GCR byte buffer actually works.
Unfortunately the documentation available is somewhat lacking and it is difficult to know how far to trust the emulator sources. Incidentally, I don't suppose there is a high-quality scan of the classic 1541 schematic (the discrete version without the PLA) out there? Ideally annotated for the hardware-challenged among us :)
My mental model is that of an 8-bit shift register clocking flux transitions through as set bits to/from the drive head. Once empty/full, the next byte is placed onto/taken from the VIA2 PRA port and a byte-ready pulse is sent to the 6502 V-flag input along with VIA2 CA1. Plus there is a counter detecting >=10-bit SYNC fields during reads, at which point the shift register is reset and the SYNC signal asserted. While writing, the speed-zone divider clocks this directly, whereas during reads the clock is recovered from the flux transitions themselves, with zero bits shifted in after gaps somewhat wider than the bit period.
This broadly jibes with observations such as the initial post-SYNC $FF byte, the echoing of previously read data after a write-mode transition, and the observed behavior when a byte is read/written late. Except I still see glitches and oddities.
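For what it's worth, here is a minimal Python sketch of that mental model (my own toy simulation, nothing derived from the actual hardware): an 8-bit shift register, a byte latched every eight bits, and SYNC asserted on ten or more consecutive 1-bits, which also holds the byte counter reset:

```python
def simulate(bits):
    """Toy model of the 1541 read path: 8-bit shift register,
    a byte latched every 8 bits, SYNC on >= 10 consecutive 1-bits.
    While SYNC is active the byte counter is held reset, so the
    next byte starts at the first 0-bit after the sync run."""
    shifter = 0
    bit_count = 0
    ones_run = 0
    bytes_out = []
    for b in bits:
        ones_run = ones_run + 1 if b else 0
        if ones_run >= 10:       # SYNC detected
            shifter = 0xFF       # register is full of 1s anyway
            bit_count = 0        # byte boundary held reset
            continue
        shifter = ((shifter << 1) | b) & 0xFF
        bit_count += 1
        if bit_count == 8:
            bytes_out.append(shifter)  # byte-ready: latch into VIA2 PRA
            bit_count = 0
    return bytes_out
```

Note how a full $FF byte gets latched during the sync run itself before the counter detects SYNC, which at least mirrors the post-SYNC $FF observation above.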
For one thing there appears to be some form of handshaking affecting the byte-ready signal. One thing my code does, arrived at by trial and error, is a dummy read of the $FF byte after the sync field, without which the tag byte doesn't get extracted properly, i.e.:

    bit $1c00   ;Wait for sync
    bmi *-3
    nop $1c01   ;Dummy read (reading any other address causes trouble)
    clv
    bvc *       ;Wait for byte-ready
    lda $1c01   ;Tag byte
This is despite VIA read handshaking having been disabled, with SOE on and CA1 kept permanently asserted.
At any rate my 1541-II/1571 and Kryoflux have trouble whereas VICE 2.4 doesn't care aside from unlatching a 1571 status-bit on any VIA2 register read.
It is not immediately obvious from the Kryoflux code what is going on, but then VHDL is admittedly hard going for me. Plus, given how it handles write buffering (byte-aligning to the stream and dropping the first two bytes), I'm not putting much faith in its accuracy.
</rant>
I apologize for making a mountain out of a molehill here but I really do keep running into weird glitches which I can't quite understand and this is about the only reproducible one out of the lot ;)
Side-note: I warmly endorse the Kryoflux for anyone tinkering with the 1541 and wanting to know what is getting written out to disk |
|
| |
chatGPZ
Registered: Dec 2001 Posts: 11386 |
let me guess, you were using that 2.1 VICE from your repo? |
| |
Martin Piper
Registered: Nov 2007 Posts: 722 |
Before that. |
| |
chatGPZ
Registered: Dec 2001 Posts: 11386 |
yeah, ok. not "a few" years ago then in my book (more like 10 or even more :=)) |
| |
Martin Piper
Registered: Nov 2007 Posts: 722 |
I'm old. Time passes differently. |
| |
doynax Account closed
Registered: Oct 2004 Posts: 212 |
Quoting Fungus:
"Hrm, setting the clock divider to the wrong frequency when reading data will result in incorrect data being read. So it can't be derived from reading the incoming flux transitions."
I was just reading up on Frequency Shift Keying (like tape uses) and it would seem that the drives are using the same type of technique in hardware rather than software, since the incorrect divider frequency would change the time period during which flux transitions are valid, no? As near as I can tell it is implemented by a counter clocked at a multiple of the bit rate. On a flux transition the counter is reset and a 1-bit shifted out, whereas a lack of flux changes runs up the timer until a 0-bit is shifted out when no data seems to be forthcoming.
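A toy Python model of that counter scheme (my own sketch, with a hypothetical 4 µs bit cell rather than measured hardware values): each flux transition emits a 1-bit, and every further whole bit cell elapsing without a transition emits a 0-bit:

```python
def decode(intervals_us, cell_us=4.0):
    """Toy data separator: a transition emits a 1-bit; each extra
    whole bit cell with no transition emits a 0-bit. Input is a
    list of times (in microseconds) between flux transitions."""
    bits = []
    for dt in intervals_us:
        bits.append(1)                        # the transition itself
        zeros = int(round(dt / cell_us)) - 1  # whole empty cells, rounded
        bits.extend([0] * max(0, zeros))
    return bits
```

Decoding the same intervals against the wrong cell length then pads in spurious zeros, e.g. a "10" pattern read as "100", which is presumably the wrong-divider effect described in the quote.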
Quoting Fungus:
"Also the read is needed to clear the latched value, that's pretty straightforward and normal VIA/CIA behavior. If it's not cleared fast enough then clocked-in transitions will be missed."

Indeed, I don't know how I missed that. I had somehow completely forgotten about the internal VIA latching and gotten the notion that it was handled externally.
Quoting Martin Piper:
"Probably can now. Years ago VICE emulation with regards to drive code timing was very poor. Hoxs and real hardware were the only options."
I must try it in the newest VICE sometime. Well, Hoxs does have the advantage of being able to step cycle-by-cycle, which comes in handy when debugging something timing-critical. Plus the lazy C64/drive synchronization in VICE can be a tad confusing when single-stepping through an IEC communication loop in parallel.
Of course for general code VICE makes up for it all with being able to import label files and script breakpoints/assertions. |
| |
Fungus
Registered: Sep 2002 Posts: 686 |
Yes, it's using FSK then; the flux transitions are all edge-triggered and half waves. This makes sense since it's the technology they use for tape and modem communications too. It's old and easy to implement, and it works. I was looking at the timing diagrams in the PRG after reading this and that appears to be a correct assumption, which is easily verifiable by anyone with a scope or capture tool.
So that does mean that the divider has to be set to the correct speed or the in-clocking will produce invalid results. It's possible this could be exploited for copy-protection purposes... hrm, interesting idea. |
| |
Kabuto Account closed
Registered: Sep 2004 Posts: 58 |
Wondering how reliably a disk could be read where all pulses (= ranges of same flux direction) are shortened by some % of a bit's duration. This could be abused for creating nearly uncopyable disks.
With standard GCR and its average pulse duration of 1.6 bits shortening all of them by 25% would allow squeezing in 18% more data.
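The 18% figure checks out; a quick arithmetic sanity check in Python (the 1.6-bit average pulse length is taken from the text above):

```python
avg_pulse = 1.6               # average pulse length in bit cells (from above)
shortened = avg_pulse - 0.25  # every pulse shortened by 25% of a bit cell
gain = avg_pulse / shortened - 1.0
print(f"{gain:.1%}")          # about 18.5% more data in the same track length
```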
But copying such a disk would be impossible with standard equipment. You could slow down the motor, but that would reduce the length of 3-bit pulses (i.e. the encoded bits 1001) to 2.5 bits, and as mentioned earlier that's exactly where the electronics decide whether to treat it as 101 or 1001. |
| |
chatGPZ
Registered: Dec 2001 Posts: 11386 |
thats actually not uncommon - vmax for example did this iirc |
| |
tlr
Registered: Sep 2003 Posts: 1790 |
Quoting Fungus:
"Yes, it's using FSK then; the flux transitions are all edge-triggered and half waves. This makes sense since it's the technology they use for tape and modem communications too. It's old and easy to implement, and it works."
This is not what we'd normally call FSK. FSK as used on CBM tapes has variable bit lengths.
The scheme is rather a very crude PLL trying to lock on to the rate of bits by just resetting every time a '1' is seen.
The encoding is still just constant-length '1's or '0's, with the requirement that no more than two '0's can appear in a row.
There are two reasons for the requirement:
1. If there are too many '0's, the PLL can't keep track of the bits within the variation of speed it is required to handle.
2. There is an anomaly in the implementation that wraps around after a '1' and three '0's. When it wraps it will generate a spurious '1' (and repeat the process). This is what is seen if the track contains no flux transitions. Three '0's in a row _can_ work but is unreliable. I seem to remember that newer 1541s have more problems with these. |
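Both constraints can be checked directly against the standard 1541 4-to-5 GCR table in Python (the table is the usual published one; the brute-force check over adjacent code pairs is my own):

```python
# Standard 1541 4-to-5 GCR table, nybble value 0..15 -> 5-bit code
GCR = ["01010", "01011", "10010", "10011",
       "01110", "01111", "10110", "10111",
       "01001", "11001", "11010", "11011",
       "01101", "11101", "11110", "10101"]

def max_run(codes, bit):
    """Longest run of the given bit across any two adjacent codes."""
    worst = 0
    for a in codes:
        for b in codes:
            run = best = 0
            for c in a + b:
                run = run + 1 if c == bit else 0
                best = max(best, run)
            worst = max(worst, best)
    return worst

print(max_run(GCR, "0"))  # -> 2: never more than two 0s in a row
print(max_run(GCR, "1"))  # -> 8: at most eight 1s in a row in data
```

The 8-ones maximum across adjacent codes is presumably also why the SYNC detector can safely require ten or more consecutive 1-bits: no legal data stream can ever produce that run.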
| |
chatGPZ
Registered: Dec 2001 Posts: 11386 |
too bad mr.drew cant comment at this point :) |