Hi again,

well today I repeated the test, pulling out the disk and waiting a
longer time - with no success. I did a raidhotremove on the failed
partitions and then added them back with raidhotadd. A cat of
/proc/mdstat revealed a quickly increasing finish time for recovery
of the mirror:

md3 : active raid1 sdb8[2] sda8[0] 6707008 blocks [2/1] [U_] recovery=0% finish=119606.5min (and rising)
Brian Murphy wrote:
> md3 : active raid1 sdb8[2] sda8[0] 6707008 blocks [2/1] [U_] recovery=0%
> finish=119606.5min (and rising)

I have also observed this problem. It seems to be specifically related
to a RAID1 configuration going into degraded mode with one disk in the
array. It certainly seems to be
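The rising estimate is easy to track over time. As a sketch, here is one way to pull the finish= field out of an mdstat status line so the trend can be logged - shown here against the sample line quoted above rather than a live /proc/mdstat:

```shell
# Sample /proc/mdstat line copied from the report above.
line='md3 : active raid1 sdb8[2] sda8[0] 6707008 blocks [2/1] [U_] recovery=0% finish=119606.5min'

# Extract the resync finish estimate (in minutes); on a healthy
# rebuild this number should fall steadily, not rise.
finish=$(printf '%s\n' "$line" | sed -n 's/.*finish=\([0-9.]*\)min.*/\1/p')
echo "finish estimate: ${finish} minutes"
```

On a live box the same sed can be pointed at /proc/mdstat itself, e.g. from a watch loop, to see whether the estimate is converging or diverging.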
On Fri, 24 Sep 1999, Brian Murphy wrote:
> Hi again,
> well today I repeated the test pulling out the disk and waiting a
> longer time - with no success. I did a raidhotremove on the failed
> partitions and then added them back with raidhotadd, a cat of
> /proc/mdstat revealed a quickly increasing finish time to recovery
> of the mirror
Mika Kuoppala wrote:
> raidsetfaulty /dev/md0 /dev/sda1

Shouldn't there be a

raidhotremove /dev/md0 /dev/sda1

before low-level removal of the SCSI device? Could explain sudden
panics, if an unexpected state occurs.

tomas/
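The ordering being asked about here can be sketched as a script. Everything in it is an assumption for illustration: raidtools-era commands (raidsetfaulty, raidhotremove, raidhotadd), a hypothetical /dev/md0 mirror with member /dev/sda1, and SCSI host/channel/id/lun numbers 0 0 0 0. DRYRUN=1 (the default) only prints the steps instead of running them:

```shell
#!/bin/sh
# Sketch of a hot-swap order: fail, remove from md, detach from the
# SCSI layer, swap the drive, reattach, re-add to md. Device names and
# SCSI 0 0 0 0 numbers are made up; adjust for the real system.
DRYRUN=${DRYRUN:-1}
plan=""
run() { plan="$plan$*;"; echo "+ $*"; [ "$DRYRUN" = 1 ] || eval "$*"; }

run "raidsetfaulty /dev/md0 /dev/sda1"   # 1. mark the partition faulty
run "raidhotremove /dev/md0 /dev/sda1"   # 2. take it out of the md array
# 3. detach the disk from the SCSI layer *before* pulling it:
run "echo 'scsi remove-single-device 0 0 0 0' > /proc/scsi/scsi"
# ...physically swap the drive, then register the replacement:
run "echo 'scsi add-single-device 0 0 0 0' > /proc/scsi/scsi"
run "raidhotadd /dev/md0 /dev/sda1"      # 4. re-add; resync should start
```

The point of steps 1-3 is exactly the concern raised above: the md layer should have released the partition before the SCSI device disappears underneath it, otherwise an unexpected state can result.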
At 05:02 PM 9/24/1999 +0200, you wrote:
> Mika Kuoppala wrote:
> > raidsetfaulty /dev/md0 /dev/sda1
>
> Shouldn't there be a
> raidhotremove /dev/md0 /dev/sda1
> before low level removal of the scsi device?
> Could explain sudden panics, if an unexpected state occurs.

That would make sense
Stephen Waters wrote:
> how about a nice hotswap step-by-step in the howto...

Sure. Does that mean you're a volunteer? 8)

tomas/
i'm an end-user who is curious about how to do this since i am in the
process of buying a 4 x 18GB-drive scsi system for use w/ software
raid5. my main problems are which scsi commands to issue and what
order they go in. if they're in a howto, then when i get the drives in
i can check out