This isn't specifically FreeBSD-related, but I'm fishing to see if
anyone has observed similar behavior from an Areca RAID controller
before.  We're already in touch with their support...

Last night a disk failed in a 7-disk RAID-6 array on an ARC-1220 with 1TB
WD REmumble disks.  That's normal enough, except that rather
than taking the usual ~30 hours to rebuild the array after a failure, it
appears to have added the spare disk directly into the array and started
servicing reads from it without rebuilding it from parity!

This obviously seriously scrambled the filesystem on it, and sort of
defeats the whole point of H/W RAID in the first place.  XFS recovered
well enough, but most files are large enough to span a stripe and are
corrupted for it.

It's currently running a check and is finding lots of errors.  I'm
optimistic that its check routine might rebuild the data from parity,
but I'm glad this occurred on a log-archiving volume, so it isn't a great
loss and we don't have to restore from backups anyway.

Anyone else seen such amazing examples of FAIL from Arecas?  We've got
50 or so 3wares in production, and in the past 8 years we've only seen one
3ware tank - it destroyed the filesystem on its way out, but it also
complained as it went and wouldn't initialize since it failed its internal
diags.  Performance issues or not, at least they do their job.

-K

_______________________________________________
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to "freebsd-questions-unsubscr...@freebsd.org"
