| |
chatGPZ
Registered: Dec 2001 Posts: 11377 |
Accurately Measuring Drive RPM
To bring the discussion from 1541 Speed Test into the forum....
First, let's recapitulate:
The general idea is: have a "marker" on a track, then measure the time for one revolution using timers. Generally there are different ways to achieve this:
- wait for the marker and toggle an IEC line; the C64 measures the time using a CIA timer. This is what e.g. the well-known "Kwik Load" copier does. The problem is that it is PAL/NTSC specific, and it can never be 100% exact due to the timing drift between drive and C64.
- wait for the marker and measure the time using VIA timers on the drive. The problem with this is that VIA timers are only 16 bit and cannot be cascaded, so you either have to measure smaller portions at a time, or rely on the wraparound and the value being within certain bounds at the time you read it.
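To illustrate the wraparound approach: at a nominal 200000 cycles per revolution, a free-running 16-bit timer wraps three times, so the full count can be reconstructed as long as the actual speed stays within known bounds. A rough sketch in Python (the function name and bounds are my own for illustration, not taken from any of the programs mentioned):

```python
# Sketch: reconstructing a revolution time from a free-running 16-bit
# VIA timer, assuming the wrap count can be inferred from the nominal speed.
WRAP = 1 << 16                  # 16-bit timer range: 65536
NOMINAL = 200000                # cycles per revolution at exactly 300 RPM

def reconstruct(start, end, nominal=NOMINAL):
    """Reconstruct elapsed cycles from two 16-bit timer snapshots.

    VIA timers count down, so after t cycles: end = (start - t) mod WRAP.
    Assumes the true elapsed time is within +/- WRAP/2 (~32768 cycles,
    roughly +/-16%) of the nominal revolution time."""
    delta = (start - end) % WRAP
    # add whole wraps until the result is as close as possible to nominal
    wraps = round((nominal - delta) / WRAP)
    return delta + wraps * WRAP
```

This only works because the drive speed is known to be close to 300 RPM; a drive wildly off-speed would be reconstructed with the wrong wrap count.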
Now, to make either way slightly more accurate, a special kind of reference track can be used. Typically this track will contain nothing except one marker, which makes the code a bit simpler and more straightforward. This is what 1541 Speed Test does. The DOS also does something similar when formatting, to calculate the gaps. This obviously has the problem that we are overwriting said track.
Now - the question isn't how to do all this, that's a solved problem. The question is, given a specific implementation, how *accurate* is it actually, and why?
The basic math to calculate the RPM is this:
expected ideal:
300 rounds per minute
= 5 rounds per second
= 200 milliseconds per round
at 1MHz (0.001 milliseconds per clock)
= 200000 cycles per round
to calculate RPM from cycles per round:
RPM = (200000 * 300) / cycles
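The arithmetic above can be sanity-checked in a few lines (a trivial sketch; the formula is the same one given above, just rearranged as RPM = 60 * clock / cycles):

```python
# 300 RPM corresponds to exactly 200000 cycles per revolution
# at a 1 MHz drive clock.
def rpm_from_cycles(cycles, clock_hz=1_000_000):
    """RPM = 60 seconds * clock rate / cycles per revolution."""
    return 60 * clock_hz / cycles

# sanity check: 200000 cycles -> exactly 300 RPM
assert rpm_from_cycles(200000) == 300.0
# identical to the post's form: (200000 * 300) / cycles
assert rpm_from_cycles(201000) == (200000 * 300) / 201000
```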
two little test programs are here: https://sourceforge.net/p/vice-emu/code/HEAD/tree/testprogs/dri.. ... the first reads timer values between each sector header, and the total time for a revolution is accumulated from the delta times. The second leaves the timer running for one revolution and then indirectly gets the time for a revolution from that. To my own surprise, both appear to be accurate down to 3 cycles (in theory the second one should be more accurate, at least that's what I thought; I also expected more jitter than just 3 cycles).
1541 Speed Test writes a track that contains one long sync, and then 5 regular bytes which serve as the marker. It then reads 6 bytes and measures the time that takes, which equals one revolution. Somehow this produces a stable value without any jitter, which was a bit surprising to me too (I expected at least one cycle of jitter, due to the sync waiting loops). (I am waiting for the source release and will put a derived test into the VICE repo too.)
So, again, the question is... how accurate are those, and why? (A stable value alone does not mean it's accurate.) Some details are not quite clear to me, e.g. if we are writing a reference track, how much will that affect the accuracy of the following measurement? How will the result change when the reference track was written at a different speed than when doing the measuring? Will using a certain speedzone make it more or less accurate?
Bonus question: can we use https://en.wikipedia.org/wiki/Chinese_remainder_theorem with two VIA timers to make this more accurate? or is it a pointless exercise? |
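On the bonus question, the idea would be something like this: let the two VIA timers free-run with coprime reload periods, then combine the two residues via the CRT to recover the elapsed time modulo the product of the periods. A toy sketch in Python with hypothetical periods (whether the VIA latches can actually be set up this way on real hardware, and whether the two timers can be read without skew, is a separate question):

```python
# Toy sketch of the Chinese-remainder idea: two timers with coprime
# periods m1, m2 give residues r1, r2; CRT recovers t mod (m1*m2).
from math import gcd

def crt(r1, m1, r2, m2):
    """Solve t = r1 (mod m1), t = r2 (mod m2) for coprime m1, m2."""
    assert gcd(m1, m2) == 1
    inv = pow(m1, -1, m2)              # modular inverse of m1 mod m2
    return (r1 + m1 * ((r2 - r1) * inv % m2)) % (m1 * m2)

m1, m2 = 65535, 65536                  # consecutive ints are coprime
t = 200123                             # cycles for one (hypothetical) revolution
assert crt(t % m1, m1, t % m2, m2) == t
```

With these periods the combined unambiguous range is m1*m2, about 4.29e9 cycles, so the wraparound ambiguity of a single 16-bit timer disappears entirely; the open question is whether the residues can be sampled precisely enough for this to gain anything in practice.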
|
| |
chatGPZ
Registered: Dec 2001 Posts: 11377 |
Quote: Where did you get the 50ppm (+/- 50ppm?) specification?
indeed, it sounds very precise for that era. citation needed :)
Quote: That is 50 clock cycles "jitter" over 16 million
https://en.wikipedia.org/wiki/Parts_per_million |
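For reference, the conversion the link implies is straightforward: ppm is parts per million of the nominal frequency, so the absolute deviation scales with the frequency. For a 16 MHz crystal at 50 ppm that is 800 Hz, not 50 (a trivial sketch):

```python
# ppm = parts per million of the nominal frequency.
def ppm_deviation(freq_hz, ppm):
    """Absolute frequency deviation in Hz for a given ppm rating."""
    return freq_hz * ppm / 1_000_000

# 50 ppm on a 16 MHz crystal: up to 800 cycles off per second
assert ppm_deviation(16_000_000, 50) == 800.0
```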
| |
tlr
Registered: Sep 2003 Posts: 1787 |
Quoting Zibri: Probably I am not using the right terminology, but something here is not right.
I see no such deviations anywhere.
The maximum "jitter" I saw in a directly connected motor was of 0.01 and was probably induced by a very stiff new disk.
The terminology isn't the most important thing, the concepts are. From your last statement I suspect what the misconception is. The accuracy of the crystal (i.e. +/- x ppm) is something that is fairly constant for each individual crystal. Some will be +10 ppm, some -100 ppm, some +25 ppm, and so on... They will _not_ sweep around randomly within that interval, hence the _precision_ is much better.
The important point is that these are two completely different aspects of the crystal's behaviour. |
| |
Krill
Registered: Apr 2002 Posts: 2971 |
Quoting Zibri: If what you wrote until now was true (I suspect some calculation error), there would be errors all over the place since a BIT can last 4 (cycles at 1mhz) (at the slowest clock) to as little as 3.25 (cycles at 1mhz).
[...]
The maximum "jitter" I saw in a directly connected motor was of 0.01 and was probably induced by a very stiff new disk.
Note that the oscillator's frequency changes very slowly over time, and also that it might be quite stable at some frequency close to, but not quite at, 16 MHz sharp.
It will not produce "jitter" as in something you can see flicker in a short amount of time.
It will still cause an error in measurement of the absolute RPM figure with regard to a clock that is more precise than the oscillator. |
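To put a rough number on that: a constant clock offset scales the measured cycle count, and thus the computed RPM figure, proportionally. E.g. a (purely illustrative) 50 ppm offset on the drive's clock would skew a 300 RPM reading by only 0.015 RPM:

```python
# A constant oscillator offset of x ppm shifts the absolute RPM
# measurement by the same relative amount.
def rpm_error(true_rpm, clock_ppm):
    """Absolute RPM measurement error caused by a clock offset in ppm."""
    return true_rpm * clock_ppm / 1_000_000

# hypothetical 50 ppm offset at a true 300 RPM
assert rpm_error(300, 50) == 0.015
```

So the crystal's constant offset is negligible for spotting a misadjusted drive, but it puts a floor on how well the *absolute* RPM can be known.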
| |
chatGPZ
Registered: Dec 2001 Posts: 11377 |
Yeah, over time the deviation will only change a little bit, and slowly - see Unseen's diagram. |
| |
Krill
Registered: Apr 2002 Posts: 2971 |
Quoting Groepaz: Yeah, over time the deviation will only change a little bit, and slowly - see Unseen's diagram.
About that... this is from power-up of a cold device, then idling at room temperature while the oscillator asymptotically approaches zero deviation? Needs a bit more context, I'm afraid. |
| |
chatGPZ
Registered: Dec 2001 Posts: 11377 |
It's exactly that :) deviation from 16 MHz on Y, and seconds after power-up on X (so roughly 2.5h) |
| |
tlr
Registered: Sep 2003 Posts: 1787 |
Quoting Krill: Quoting Groepaz: Yeah, over time the deviation will only change a little bit, and slowly - see Unseen's diagram.
About that... this is from power-up of a cold device, then idling at room temperature while the oscillator asymptotically approaches zero deviation? Needs a bit more context, I'm afraid.
...and it would be good form to state roughly which setup was used to measure it. I assume Unseen did it right, but if done wrong, some of the curve could easily come from the test equipment itself. |
| |
chatGPZ
Registered: Dec 2001 Posts: 11377 |
What he told me is "The frequency counter had an OCXO and was warmed up an hour before the measurement". I can ask for more details if you tell me what to ask :) (But I also assume no terrible mistakes were made, he knows his stuff.) |
| |
tlr
Registered: Sep 2003 Posts: 1787 |
Quote: What he told me is "The Frequency counter had a OCXO and was warmed up an hour before the measurement". I can ask for more details if you tell me what to ask :) (But i also assume no terrible mistakes were made, he knows his stuff)
That statement, and preferably which actual instrument it was. If you put that in the readme in the repo, anybody doubting the graph could just look up the specs of the instrument and see for themselves. I would assume a few ppm of error is to be expected, which in this case is negligible. |
| |
Krill
Registered: Apr 2002 Posts: 2971 |
The deviation can be expected to increase after running a stress test for a while, no? Especially on a 1541 with that built-in PSU. |