CSDb User Forums


Error message: "Found more than one drive on IEC bus" with C128D metal and recent demos
2017-11-08 10:26
Monte Carlos

Registered: Jun 2004
Posts: 358
Error message: "Found more than one drive on IEC bus" with C128D metal and recent demos

Sorry for this lamer question. I currently use a C128D (metal), although i used a C64 for years until i ran into space problems with the complete setup.
I have attached a 1541 Ultimate and changed the internal 1571 drive to device number 10 by cutting the solder pads. The Ultimate has device number 8. This way i can watch many, mostly old, demos with the 1541U. However, the most recent demos always complain about more than one drive being enabled. Ok, i thought, unplugging the internal 1571 from the mainboard should fix the problem. I was wrong. The loaders still complain about more than one drive.
Am i wrong in assuming that unplugging the internal drive also takes it off the ATN line?
 
... 34 posts hidden ...
 
2017-11-15 16:18
lft

Registered: Jul 2007
Posts: 369
But in principle, it shouldn't be too difficult in your case, because your loader already supports many different drives. You could add compile-time switches that modify the drivecode to a dummy version that doesn't read from disk, but still follows the state machine of the communication protocol. It shouldn't actually pull the clock/data lines of course, only ATNA. Or are there complexities in the protocol that prevent this, e.g. different-length responses depending on disk contents?
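
To illustrate the idea, here is a minimal C sketch of such a passive follower: it mirrors ATN onto ATNA and merely tracks the protocol from observed edges, never pulling CLK or DATA. The states, bus model and names are invented for illustration; this is not lft's or Krill's actual drivecode (which would be 6502 assembly running on the drive).

#include <stdbool.h>

typedef enum { IDLE, HEADER, PAYLOAD } state_t;   /* hypothetical protocol phases */

typedef struct { bool atn, clk, data; } bus_t;    /* levels seen on the bus */

typedef struct {
    state_t state;
    bool    atna;          /* the only line a passive drive ever drives */
    int     bits_seen;
} passive_drive_t;

/* One polling step: mirror ATN onto ATNA right away, then advance the state
 * machine from observed CLK edges without ever pulling CLK or DATA. */
static void passive_step(passive_drive_t *d, bus_t bus, bool prev_clk)
{
    d->atna = bus.atn;                      /* answer ATN as fast as the active drive */

    switch (d->state) {
    case IDLE:
        if (bus.atn) d->state = HEADER;     /* host opened a transfer */
        break;
    case HEADER:
        if (bus.clk != prev_clk && ++d->bits_seen == 8) {
            d->bits_seen = 0;
            d->state = PAYLOAD;
        }
        break;
    case PAYLOAD:
        if (!bus.atn) d->state = IDLE;      /* transfer over, back to idle */
        break;
    }
}

int main(void)                              /* tiny smoke test of the follower */
{
    passive_drive_t d = { IDLE, false, 0 };
    bus_t bus = { true, false, false };
    passive_step(&d, bus, false);           /* ATN asserted -> HEADER, ATNA set */
    return d.state == HEADER && d.atna ? 0 : 1;
}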
2017-11-15 18:11
Krill

Registered: Apr 2002
Posts: 2969
I see several potential problems, more or less from the top of my head:

- Not interfering with the protocol in response to ATN toggles requires ATNA-setting loops to respond at least as quickly as the active drive, if not quicker due to asynchronous clocks, various bus delays due to different positions in the daisy chain, and different pull strengths.

This seems done, but:

- On top of the ATNA response, the inactive drives would need a state machine to follow, which might make the ATNA setting too slow, and may not even be possible without ambiguities, because the inactive drives cannot tell which of the two active parties just pulled the clock or data line. These are the main problems. (I haven't thoroughly analysed the protocol from the angle of third-party bus snooping yet.)

- The active drive uses a watchdog timer to exit to KERNAL protocol upon protocol breach, such as the user deciding to reset the computer at any time between or during loading. Adding this to the inactive drives might render them too slow to set ATNA, and again the decision may be impossible due to ambiguities.

- Ultimately, the ideal setup would be able to load from any of the connected drives (think RAID-0 setups or games), which implies solid bus arbitration and even more complicated code.

So for the first step, i'm thinking of somehow shoehorning in an ATN spike detection, as the current protocol never toggles ATN quicker than 18 cycles between edges. But then proper debouncing might or might not be in order to avoid false positives, potentially complicating this otherwise simple solution.
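
For concreteness, a hedged C sketch of that spike-detection idea: an ATN edge pair closer together than the protocol's 18-cycle minimum is treated as an out-of-band wake-up, with a simple two-spike debounce against false positives. The constants and names are illustrative, not Krill's actual implementation.

#include <stdbool.h>
#include <stdint.h>

#define MIN_PROTOCOL_EDGE_GAP 18u   /* cycles; regular traffic never goes below this */

typedef struct {
    uint32_t last_edge_cycle;
    bool     last_atn;
    int      short_gaps;            /* crude debounce: demand two spikes in a row */
} spike_detector_t;

/* Feed the current cycle count and ATN level; returns true on wake-up. */
static bool spike_detect(spike_detector_t *sd, uint32_t cycle, bool atn)
{
    bool woke = false;
    if (atn != sd->last_atn) {                        /* an ATN edge */
        if (cycle - sd->last_edge_cycle < MIN_PROTOCOL_EDGE_GAP) {
            if (++sd->short_gaps >= 2)                /* two short gaps = wake-up */
                woke = true;
        } else {
            sd->short_gaps = 0;                       /* normal traffic resets it */
        }
        sd->last_edge_cycle = cycle;
        sd->last_atn = atn;
    }
    return woke;
}

int main(void)                                        /* two 10-cycle spikes wake us */
{
    spike_detector_t sd = { 0, false, 0 };
    spike_detect(&sd, 100, true);                     /* long gap: ignored */
    spike_detect(&sd, 110, false);                    /* first short gap */
    return spike_detect(&sd, 120, true) ? 0 : 1;      /* second short gap: wake-up */
}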
2017-11-15 20:37
lft

Registered: Jul 2007
Posts: 369
Good arguments.

There's no need to debounce a digitally controlled signal; you might be thinking of spike suppression. But if you don't need it on the active drive, you probably don't need it on the passive drives, right?

Perhaps the timer interrupt is the answer to this one. If your protocol requires a keepalive signal anyway, then you might be able to put it on the ATN line. Then you have to make the passive drives reset the timer whenever ATN toggles (which causes a jump in the program flow anyway). But resetting the timer takes four cycles, which makes it tricky to pull off.
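
A rough C sketch of that keep-alive idea, as one reading of it: the passive drives restart a watchdog on every ATN toggle and only rejoin the bus once ATN has been quiet for a whole keep-alive interval. The timeout value and names are placeholders; Krill's reply below explains why this isn't sufficient as-is.

#include <stdbool.h>
#include <stdint.h>

#define KEEPALIVE_TIMEOUT 65536u   /* cycles; placeholder, roughly one timer period */

typedef struct {
    uint32_t timer;        /* counts down towards giving the bus back */
    bool     last_atn;
    bool     parked;       /* true while we sit in the tight follower loop */
} passive_watchdog_t;

/* Called from the follower loop: any ATN toggle restarts the watchdog (this is
 * where the costly 4-cycle timer reload would happen on a real 6502); if ATN
 * stays quiet for a whole keep-alive interval, rejoin the bus. */
static void watchdog_tick(passive_watchdog_t *w, bool atn, uint32_t elapsed)
{
    if (atn != w->last_atn) {
        w->timer    = KEEPALIVE_TIMEOUT;
        w->last_atn = atn;
    } else if (w->timer > elapsed) {
        w->timer -= elapsed;
    } else {
        w->parked = false;          /* keep-alive expired: wake up */
    }
}

int main(void)                      /* no ATN activity for two ticks -> unparked */
{
    passive_watchdog_t w = { KEEPALIVE_TIMEOUT, false, true };
    watchdog_tick(&w, false, KEEPALIVE_TIMEOUT / 2);
    watchdog_tick(&w, false, KEEPALIVE_TIMEOUT / 2);
    return w.parked ? 1 : 0;
}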
2017-11-15 22:37
Krill

Registered: Apr 2002
Posts: 2969
Quoting lft
There's no need to debounce a digitally controlled signal; You might be thinking about spike suppression.
True, but same difference when seen from and handled by software, innit? :)

Quoting lft
But if you don't need it on the active drive, you probably don't need it on the passive drives, right?
Yes, it's probably not required. Although i've always wondered why Commodore did this in the KERNAL's serial code. But then, they were extremely cautious with everything disk- or tape-related. Probably wanted to err on the safe side.

Quoting lft
If your protocol requires a keepalive signal anyway
Only for the parts where the host computer sends or receives data. There is no keep-alive mechanism when the loader is idle, but a transition away from that state is easy to detect and not timing-critical.
But there's also no keep-alive signal whenever the host computer is waiting for the next block to become ready to download (i.e., while the drive reads and decodes it).
Furthermore, once the block-ready signal is sent by the drive, it waits up to 16 regular timer periods (about 64K cycles each), as the computer might still be busy decompressing or copying a large chunk of data - there is no serial bus interrupt on the C-64, unlike with the C-128 and 1571, so the bus is polled at a few select strategic points in the code.
So, no, i don't think it's possible to simply wake up from effective bus-off by letting a keep-alive timeout expire.
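
To put numbers on that constraint, a small C sketch of the drive-side wait after "block ready", assuming a ~64K-cycle timer period and 16 periods of patience as described above; the helper functions merely simulate a host that stays busy decompressing for a while. Names and return values are invented for illustration.

#include <stdbool.h>
#include <stdint.h>

#define TIMER_PERIOD 65536u   /* ~64K cycles per regular timer period */
#define MAX_PERIODS  16       /* patience after signalling "block ready" */

/* Stand-ins for the real drive-side checks; here they just simulate a host
 * that needs 5 periods to finish decompressing before it polls the bus. */
static uint32_t waited;
static bool host_started_transfer(void) { return waited >= 5 * TIMER_PERIOD; }
static void wait_cycles(uint32_t n)     { waited += n; }

/* Returns true if the host picked the block up in time, false on timeout
 * (e.g. the user reset the computer mid-load). */
static bool wait_for_host_after_block_ready(void)
{
    for (int period = 0; period < MAX_PERIODS; period++) {
        if (host_started_transfer())
            return true;              /* host got around to polling the bus */
        wait_cycles(TIMER_PERIOD);
    }
    return false;                     /* give up, fall back to KERNAL protocol */
}

int main(void) { return wait_for_host_after_block_ready() ? 0 : 1; }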
2017-11-15 22:53
Krill

Registered: Apr 2002
Posts: 2969
It might, however, be possible to split up the protocol into bigger logical parts and then apply the timer thing.

I have no reason to believe that this wake-up thingy is impossible to pull off, it's just non-trivial and i haven't so far spent much effort on it. It's been on the list for years, but alas, so have other things above it. :)
2017-11-16 18:15
Monte Carlos

Registered: Jun 2004
Posts: 358
Isn't it merely a matter of installing the tight loop in the right drive(s)? Can't the tight loop be simply removed again after loading? Then the drive should become active again.
2017-11-16 18:26
Monte Carlos

Registered: Jun 2004
Posts: 358
Can the decision be left to the top-level program? Let it have four entry points: 1) install the loader code in the main drive, 2) install the tight loop in the other drives, 3) start loading, 4) remove the tight loop from the lower-priority drives. Call 1 and 2 before loading, then call 3, and when you are sure loading has finished, call 4.
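
Roughly, such a top-level interface might look like the following C sketch. The function names, drive-list handling and call order are hypothetical; no existing loader exposes exactly this API, it just spells out the four proposed entry points.

#include <stdio.h>

typedef unsigned char drive_t;   /* IEC device number, e.g. 8..11 */

/* 1) install the loader's drivecode in the drive actually used for loading */
static void loader_install(drive_t main_drive)          { printf("install on %d\n", main_drive); }
/* 2) park every other drive in the tight ATNA-follower loop */
static void loader_park_others(const drive_t *d, int n) { (void)d; printf("park %d drives\n", n); }
/* 3) load a file through the main drive */
static int  loader_load(const char *name)               { printf("load %s\n", name); return 0; }
/* 4) release the parked drives once loading is definitely finished */
static void loader_unpark(const drive_t *d, int n)      { (void)d; printf("unpark %d drives\n", n); }

/* Proposed sequence: 1 and 2 before loading, then 3, and 4 only once you
 * are sure loading has finished. */
int main(void)
{
    drive_t others[] = { 9, 10 };
    loader_install(8);
    loader_park_others(others, 2);
    loader_load("demo.part1");
    loader_unpark(others, 2);
    return 0;
}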
2017-11-17 11:19
Krill

Registered: Apr 2002
Posts: 2969
Adding detection of a special wake-up signal to the tight loop would make it less tight, potentially too slow for the original purpose it serves.
2017-11-17 12:53
chatGPZ

Registered: Dec 2001
Posts: 11357
YO! reminds me of that famous Moses P. quote: "You guys are so tight, when i am done with you, it will burn"
2017-11-17 13:12
Krill

Registered: Apr 2002
Posts: 2969
Quoting Groepaz
YO! reminds me of that famous Moses P. quote: "You guys are so tight, when i am done with you, it will burn"
Klostein: Not. Even. Once.