On Wed, 14 Apr 1999, Francisco Jose Montilla wrote:

> The reason is that on a master-slave setup, the slave disc is controlled
> by the master. If on a [hda-hdc]+[hdb-hdd] raid 0+1 the master device of
> *any* IDE controller fails, the slave will immediately fail also (I'd bet
> it surely will happen if it's the slave that fails, so this statement
> could be widened to "if *any* disc fails"), rendering your raid 0+1
> immediately unusable, and making recovery tougher.

I'm not so sure about this, as I think I've seen IDE drives that were
jumpered as a slave (with no master on the bus) get detected by Linux
before.

Some drives, such as Western Digitals, that have three jumper settings
(single/master/slave) may take an excessively long time to become ready if
they are jumpered as a master and there is no slave present (or the slave
has died)--or they may never become ready at all and cause the BIOS to
print a "Hard disc controller failure". Drives like this should probably
be jumpered as the slave, with some other drive that has only two jumper
settings (master/slave) acting as the master.

In any case, most of the IDE failures I've seen lately involved a bunch of
bad sectors--it's been a very long time since I've seen an IDE drive fail
in such a way that it couldn't even be detected on the IDE bus.
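
For reference, the [hda-hdc]+[hdb-hdd] raid 0+1 from the quote would look
something like this in /etc/raidtab with the 0.90 raidtools (just a
sketch--the partition numbers and chunk sizes are made up, and I'm
assuming stacked md devices, i.e. two stripes mirrored by a third array):

    # md0: stripe across the two masters (hda + hdc)
    raiddev /dev/md0
        raid-level            0
        nr-raid-disks         2
        persistent-superblock 1
        chunk-size            32
        device                /dev/hda1
        raid-disk             0
        device                /dev/hdc1
        raid-disk             1

    # md1: stripe across the two slaves (hdb + hdd)
    raiddev /dev/md1
        raid-level            0
        nr-raid-disks         2
        persistent-superblock 1
        chunk-size            32
        device                /dev/hdb1
        raid-disk             0
        device                /dev/hdd1
        raid-disk             1

    # md2: mirror of the two stripes
    raiddev /dev/md2
        raid-level            1
        nr-raid-disks         2
        persistent-superblock 1
        chunk-size            32
        device                /dev/md0
        raid-disk             0
        device                /dev/md1
        raid-disk             1

Written out this way, the failure mode Francisco describes is easy to see:
if hda dies and drags hdb down with it, md0 and md1 each lose a disc at
the same time, and the mirror is left with nothing.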

Brian
