*** This bug is a duplicate of bug 925280 ***
https://bugs.launchpad.net/bugs/925280
** This bug has been marked a duplicate of bug 925280
Software RAID fails to rebuild after testing degraded cold boot
** Description changed:
- I have my /home in a RAID1 configuration (/dev/md1) with a partition on
- my laptop and a second on an external disk connected via eSATA; a third
- sits on a third external disk. I booted up with one member degraded
- (external drive not plugged in) and prior to
Still weird on reboot. mdstat seems fine, but the failed member still
thinks it is active.
:~# cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sdb3[0]
      86003840 blocks [3/1] [U__]

unused devices: <none>
:~#
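For reference, one way to see what the dropped member believes about itself is to look at both the array and the member's own superblock; the device names below are only illustrative, so substitute the partition that actually belongs to md1:

:~# mdadm --detail /dev/md1        # array view: which slots are active, which are missing
:~# mdadm --examine /dev/sdc3      # superblock on the removed member, where the stale "active" state lives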
The disk was /dev/sdc in the first instance because it was plugged in after the
cd-rom had taken /dev/sdb. In the second instance it was left plugged in during
boot, so it took /dev/sdb and left /dev/sdc for the cd-rom.
Subsequent reboots with both disks plugged in, and removing my mdadm
udev override (removed
It suggested removing the old metadata from the disk before re-adding it, but I
didn't see that you did that. Do that and then try to add it again.
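Concretely, that would be something along these lines (assuming the stale member shows up as /dev/sdc3; substitute the real partition, and double-check you are zeroing the right device, since this wipes its RAID superblock):

:~# mdadm --zero-superblock /dev/sdc3    # clear the stale RAID metadata on the removed member
:~# mdadm /dev/md1 --add /dev/sdc3       # re-add it to the array
:~# cat /proc/mdstat                     # the add should kick off a resync you can watch here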
Thanks, the --zero-superblock on the device I wanted to re-add worked. I
think I understand what happened, possibly as a result of some mdadm
improvements. A verbose explanation follows.
The complexity I did not share is that my three-disk RAID1 array is
actually split between two laptops, each of which