Re: raid5 failure

2000-07-22 Thread Szilveszter Juhos

> Could this be a powersupply failure?

For example. I've seen 144 V on the motherboard; none of the drives
survived, as you can expect. It was after a storm with lightning :-)

Szilva
--
http://www.wbic.cam.ac.uk/~sj233




slink (old) raid5 recovery

2000-07-20 Thread Szilveszter Juhos

I have a quite large (~490G) raid5 array for slink (originally 2.2.13) and
managed to shut it down incorrectly. There was no hardware failure, but
ckraid did not fix the array: it seemed to get stuck at about 10-20%
completion (I've tried running it about 5 times; the completion percentage
was different every time). To revive it I upgraded the kernel (2.2.16-AC0
mingo patch + AC pre17p12 patches) and re-compiled raidtools-0.90. mkraid
--upgrade seemed to fix everything in one second, although mount complains
that "e2fsck recommended". OK, so I start e2fsck -v /dev/md1 (before
mounting), and it passes the first two steps. At step 3 (checking directory
connectivity, if I remember right) it stops. According to strace, it is
waiting in two mmap() calls, eating 99.9% of CPU (4-CPU PII
Dell PowerEdge 6300 with 2G RAM); kswapd does nothing at all. Should I
upgrade to the new-style e2fsprogs, or was it a silly idea to make a huge
raid like this with ext2fs?
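For the record, the recovery sequence described above boils down to something like the following (a sketch only: the device name /dev/md1 is taken from the post, mkraid assumes an appropriate /etc/raidtab, and the strace invocation is my addition for watching where e2fsck gets stuck):

```shell
# After upgrading the kernel and rebuilding raidtools-0.90, rewrite the
# old-style array metadata to the 0.90 persistent-superblock format:
mkraid --upgrade /dev/md1

# Check the filesystem before mounting, as mount recommended:
e2fsck -v /dev/md1

# If e2fsck appears to hang, attach strace in another terminal to see
# which system call it is sitting in:
strace -p "$(pgrep e2fsck)"
```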

Szilva
--
http://www.wbic.cam.ac.uk/~sj233