I've got raid1 running on all partitions but /boot with 2.2.10 and
raidtools-19990724. Now I'm thinking about recovery. I experimented with one
partition. Here it is in raidtab:

# /home
raiddev /dev/md7
raid-level              1
nr-raid-disks           2
nr-spare-disks          0
chunk-size              4
persistent-superblock   1

device                  /dev/hda7
raid-disk               0
device                  /dev/hdc7
raid-disk               1

It was originally built with hda7 as failed-disk 1. I tried switching it to
raid-disk 0 in raidtab (as shown above) to see what would happen. Apparently
nothing changed; cat /proc/mdstat still shows:

  Personalities : [raid1] 
  read_ahead 1024 sectors
  md2 : active raid1 hdc2[0] hda2[1] 264000 blocks [2/2] [UU]
  md5 : active raid1 hdc5[0] hda5[1] 526080 blocks [2/2] [UU]
  md6 : active raid1 hdc6[0] hda6[1] 66432 blocks [2/2] [UU]
  md7 : active raid1 hdc7[0] hda7[1] 66432 blocks [2/2] [UU]
  md8 : active raid1 hdc8[0] hda8[1] 34176 blocks [2/2] [UU]
  unused devices: <none>

even after raidstop, raidstart, and even a reboot. It appears, then, that
raidtab is only read by mkraid. Correct?
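
For the record, the device section it was originally built with looked
roughly like this (a sketch; the lines above the device list were the same
as in the raidtab quoted above):

device                  /dev/hdc7
raid-disk               0
device                  /dev/hda7
failed-disk             1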

The reason I ask is that the failed-disk directive looks like a nice way to
bring the machine back up with a new disk after a failure. However, the docs
say that failed-disk cannot be the first disk. What happens if hdc fails?
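
Concretely, what I would want to write in that case is something like this
(just a sketch to show what I mean; I don't know whether failed-disk is even
allowed to take slot 0):

device                  /dev/hda7
raid-disk               1
device                  /dev/hdc7
failed-disk             0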

If I bring the PC up with a new hdc, I expect the arrays would come up in
some kind of degraded mode; I could then do a raidhotremove, partition the
new hdc to match hda, and raidhotadd the partitions back. Is this right?
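
In other words, something like the following for each array (md2, md5, md6,
md7, md8), assuming raidhotremove and raidhotadd take the md device followed
by the member partition:

  raidhotremove /dev/md7 /dev/hdc7   # drop the failed member, if still listed
  fdisk /dev/hdc                     # recreate the same partition layout as hda
  raidhotadd /dev/md7 /dev/hdc7      # re-add; I assume resync starts on its own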
