If any device in the array is marked as failed, grub-probe decides that the
md device does not exist. It then claims that anything on top of it
(LVM, in this case) also does not exist.
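
Whether the array is in this state can be checked independently of
grub-probe; a minimal sketch on my part, assuming the same /dev/md0 as in
the transcript below:

# mdadm --detail /dev/md0 | grep -E 'State :|Failed Devices'
# cat /sys/block/md0/md/degraded

(mdadm reports e.g. "clean, degraded, recovering"; the sysfs file counts
how many members the array is currently missing.)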

Removed devices are OK, but recovering devices (being re-synced back into
the array) are not.
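
A way to tell the two cases apart from userspace (my own sketch using the
md sysfs interface, not something grub-probe does):

# cat /sys/block/md0/md/sync_action

This reports "recover" while a re-added or spare device is being rebuilt
and "idle" otherwise; a device that has merely been removed leaves the
array degraded but with sync_action at "idle".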

This means that you can't install or update grub while / is on an md
device that has failed or recovering devices:

# grub-probe /
btrfs

# mdadm --manage --fail /dev/md0 /dev/sda1
mdadm: set /dev/sda1 faulty in /dev/md0

# grub-probe /
grub-probe: error: disk `lvmid/******-****-****-****-****-****-******/******-****-****-****-****-****-******' not found.

# mdadm --manage --remove /dev/md0 /dev/sda1
mdadm: hot removed /dev/sda1 from /dev/md0

# grub-probe /
btrfs

# mdadm --manage --re-add /dev/md0 /dev/sda1
mdadm: re-added /dev/sda1

# cat /proc/mdstat 
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sda1[0](W) sdb1[1](W) sdd1[4] sdc1[3]
      976629760 blocks super 1.2 [4/3] [_UUU]
      [>....................]  recovery =  0.2% (2473984/976629760) finish=141.5min speed=114688K/sec
      bitmap: 6/8 pages [24KB], 65536KB chunk

unused devices: <none>

# grub-probe /
grub-probe: error: disk `lvmid/******-****-****-****-****-****-******/******-****-****-****-****-****-******' not found.

# cat /proc/mdstat 
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sda1[0](W) sdb1[1](W) sdd1[4] sdc1[3]
      976629760 blocks super 1.2 [4/4] [UUUU]
      bitmap: 4/8 pages [16KB], 65536KB chunk

unused devices: <none>

# grub-probe /
btrfs
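
Until grub-probe copes with this, the only workaround I can suggest (a
sketch on my part, not anything grub provides; the target disk is an
assumption) is to wait for recovery to complete before touching the grub
installation:

# while grep -q recovery /proc/mdstat; do sleep 60; done
# grub-install /dev/sda
# grub-mkconfig -o /boot/grub/grub.cfg

For a failed device, removing it from the array first (as above) also lets
grub-probe succeed.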

-- 
Simon Arlott
