We've run into a few odd anomalies while testing our IDE RAID-5
array (6 x 16 GB = 80 GB usable).

We started with a known-good, running array and ran e2fsck to verify
its integrity.
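
(For reference, the baseline check was roughly the following; the md
device name and mount point are only illustrative:)

    # confirm all six members are listed as active
    cat /proc/mdstat

    # force a full check of the filesystem on the array
    umount /mnt/raid
    e2fsck -f /dev/md0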

We simulated a drive failure by disconnecting a drive's power.  Whenever
the IDE channel in question also held a second drive belonging to the
RAID5 array, the array was permanently hosed and could not be used, even
though the RAID5 driver reported that it was running fine in degraded
mode (5 of 6 drives) and ALL of the remaining drives were functional and
could be accessed.

We reasoned that this was because the second IDE drive (on the channel
with the failure) also went offline for a brief instant during the
"failure".  Can anyone confirm these findings, and if so, do they imply
that the members of a RAID array must each sit on a separate IDE
channel?  (A layout along the lines of the sketch below is what we have
in mind.)
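
That is, would a configuration with only one array member per IDE
channel (masters only, no shared cables) avoid the problem?  Something
like the following - the device names, partition numbers and chunk size
here are illustrative, not our actual raidtab:

    raiddev /dev/md0
        raid-level              5
        nr-raid-disks           6
        nr-spare-disks          0
        persistent-superblock   1
        chunk-size              32
        parity-algorithm        left-symmetric
        # one member per IDE channel, every drive jumpered as master
        device                  /dev/hda1
        raid-disk               0
        device                  /dev/hdc1
        raid-disk               1
        device                  /dev/hde1
        raid-disk               2
        device                  /dev/hdg1
        raid-disk               3
        device                  /dev/hdi1
        raid-disk               4
        device                  /dev/hdk1
        raid-disk               5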

Our impression is that a RAID5 array will not shut down gracefully, and
will most likely be corrupted, if two drives fail, or even just go
offline briefly, at the same time.

    Secondly, we had several instances where the RAID5 driver reported
that it was running in degraded mode with only four out of six drives
functioning (note: this array had no spares) - a seeming impossibility
for RAID5 - yet the array continued to operate.  Is this a bug?  In
these cases e2fsck found excessive errors and none of the data was
usable.

Third, we tried restarting the array, sometimes moving drives around to
different channels, and could not get all of the drives properly
recognized by the RAID5 driver, even though we updated the /etc/raidtab
file accordingly.  Would turning off the persistent-superblock feature
help here?  (Our restart procedure is sketched below.)
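
For reference, the restart procedure we have been following is roughly
this (device name illustrative; the array is stopped before any drive
moves or raidtab edits):

    # stop the array cleanly before rearranging drives
    raidstop /dev/md0

    # ... move drives / edit /etc/raidtab to match the new layout ...

    # bring the array back up and check which members were picked up
    raidstart /dev/md0
    cat /proc/mdstat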


Many thanks for any help, suggestions, or comments,

Chris Brown
[EMAIL PROTECTED]
