Speaking of RAID failures... I've read all the Linux RAID
docs, HOWTOs, etc., but while they cover setting it up,
there seems to be _very_ little info on failure reporting
and recovery (e.g. the man page for raidhotadd omits its args, etc.).
Re failure reporting: is there an automated way to detect
when a drive has failed and needs replacement, other than
running grep/diff against /proc/mdstat?
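The best I've come up with so far is an untested cron sketch
along these lines (the mail recipient is just a placeholder):

    #!/bin/sh
    # Untested sketch: run from cron every few minutes.  A failed
    # component shows up in /proc/mdstat as "(F)", and as an "_" in
    # the [UUUU] status map.
    if grep -q '(F)' /proc/mdstat; then
        mail -s "RAID failure on `hostname`" root < /proc/mdstat
    fi

That catches outright failures, but it's crude -- it re-mails on
every run and says nothing about rebuild progress.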
Re recovery: it seems tedious and error-prone to have to
partition a new drive on another system, which may
involve a handful of varying-sized partitions, and then,
after insertion, issue another handful of explicit
raidhotadd's, one per partition. Are there any tools for
taking a virgin drive and, upon insertion, automatically
rebuilding whatever data layout is on the other mirror
drives, using /etc/raidtab to auto-raidhotadd?
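The closest I can get by hand is an untested sketch like the
one below. It assumes /dev/sdb is a surviving disk with the
same layout, /dev/sdc is the blank replacement, and the
partition-to-md mapping matches /etc/raidtab -- all of those
are placeholders for the real setup:

    #!/bin/sh
    # Untested sketch: clone the partition table from a surviving
    # disk onto the blank replacement with util-linux's sfdisk,
    # then hot-add each new partition back into its array.
    sfdisk -d /dev/sdb | sfdisk /dev/sdc
    raidhotadd /dev/md0 /dev/sdc1
    raidhotadd /dev/md1 /dev/sdc2

Something that read /etc/raidtab and generated those commands
itself would remove most of the room for error.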
-Jeff Rush
Thomas Scholten wrote:
>
> Hi All,
>
> A recently typed "less /proc/mdstat" showed the following:
>
> less /proc/mdstat
> Personalities : [linear] [raid0] [raid1] [raid5]
> read_ahead 1024 sectors
> md0 : active raid5 sde1[3] sdd1[2] sdc1[1](F) sdb1[0]
>       5810304 blocks level 5, 64k chunk, algorithm 2 [4/3] [U_UU]
> -----------
> unused devices: <none>
>
> Can anyone tell me what the (F) after "sdc1[1](F)" means? I hope it has
> nothing to do with failure :-/
>
> Greetings from Germany
>
> Tom