> We've experienced a few odd anomalies while testing our IDE RAID-5 
> array (6 x 16GB = 80GB).

raidtools 0.90 with raid0145 patches ?

> We simulated a drive failure by disconnecting a drive's power, and if
> the IDE channel contained a second drive in the RAID5 array, the array 
> was permanently hosed and couldn't be used, even though the 
> RAID5 driver would report that it was running in ok degraded mode (5 
> of 6 drives) and ALL remaining drives were functional and could be 
> accessed.

"should not happen" :-(

> We reasoned that this was because the second IDE drive (on the
> channel with the failure) temporarily was offline for a brief instant
> during the "failure".

That should only have been noticed if it was accessing the disk at the time.

It would have shown up as having failed.
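For what it's worth, /proc/mdstat is the quick way to see that -- a missing member shows up as an underscore in the status map. Very roughly like this (device names, sizes and the exact layout of the output are made up / vary by kernel version):

```text
# cat /proc/mdstat
Personalities : [raid5]
md0 : active raid5 hdk1[5] hdi1[4] hdg1[3] hde1[2] hdc1[1] hda1[0]
      81920000 blocks level 5, 64k chunk, algorithm 2 [6/5] [UUUUU_]
```

"[6/5]" and the "_" are what you'd expect for "ok degraded, 5 of 6".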

> Can anyone confirm these findings,

Sounds wrong to me !

> and if so, do they imply that elements of a RAID array must be on separate
> IDE channels?

If going for speed, I'd do that anyway.
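i.e. one array member per IDE channel where possible. A hypothetical /etc/raidtab along those lines -- device names are examples only, and six masters means six channels, so three controllers:

```text
# hypothetical /etc/raidtab: 6-disk RAID-5, every disk a master
# on its own channel (hda/hdc/hde/hdg/hdi/hdk -- assumes 3 controllers)
raiddev /dev/md0
        raid-level              5
        nr-raid-disks           6
        nr-spare-disks          0
        persistent-superblock   1
        chunk-size              64
        device                  /dev/hda1
        raid-disk               0
        device                  /dev/hdc1
        raid-disk               1
        device                  /dev/hde1
        raid-disk               2
        device                  /dev/hdg1
        raid-disk               3
        device                  /dev/hdi1
        raid-disk               4
        device                  /dev/hdk1
        raid-disk               5
```

That way powering off any one disk can't take a sibling on the same cable with it.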

[ I've never been 100% sure whether a "master" disc failure should lose a
  "slave" (or vice versa) -- "slave only" works fine ... ]

> It is our impression that the RAID5 array will not gracefully shut down,

... in the case of going below N disks .... pass ...

> and most likely be corrupted if two drives temporarily fail,

I would have hoped that it would just "stop" ...

> or even go off line at once.

Same thing as a failure as far as the RAID code goes -- I guess it might retry a 
bit ...

>     Secondly, we had several instances where the RAID5 driver 
> reported that it was running in degraded mode with four out of six 
> drives functioning (Note: This array had no spares) - a seeming 
> impossibility, but the array continued to operate.  Is this a bug?

Certainly sounds like it to me !!

> In these cases e2fsck found excessive errors and no data could be used.

No surprise there !!

> Third, we tried restarting the array, sometimes switching drives
> around on different channels

ARGHH !!!!!

> and couldn't get all drives to be properly recognized by the RAID5 driver

Not all that surprised by that ...

The PSB does include info about what device it *thinks* it is,
but I'm not sure if that is used to cope with address changes (it would be 
nice if it could -- would allow SCSI re-numbering etc ...)

> even though we correctly updated the /etc/raidtab file.

With PSBs, raidtab isn't used !

> Would turning off the persistent-superblock feature help out here?

Definitely ...

But you really should be using PSBs ...
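With raidtools 0.90 that means something like the following (commands from memory -- double-check the man pages, and note mkraid destroys existing data):

```text
mkraid /dev/md0                 # reads /etc/raidtab, writes the superblocks
raidstart /dev/md0              # assembles the array from the superblocks
raidhotadd /dev/md0 /dev/hdc1   # re-add a replaced disk to a degraded array
```
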
