I have a server with a pair of RAID1 disks, using partitions 1, 2
and 4 as /boot, root and an LVM volume respectively. The two disks
are /dev/sda and /dev/sdb. They have just replaced two smaller disks
where the root partition was NOT a raid device - it was just /dev/sda2 -
although there was a raided boot partition in the first partition.
The hardware only supports 2 SATA channels.
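To be concrete, the mapping (using the md names from the commands
below) is:
md0 : raid1 sda1 sdb1 -> /boot
md1 : raid1 sda2 sdb2 -> /
md2 : raid1 sda4 sdb4 -> LVM physical volume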
I wanted to revert the root partition to the same state as the one I
just took out, so I failed and removed sdb:
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
mdadm /dev/md1 --fail /dev/sdb2 --remove /dev/sdb2
mdadm /dev/md2 --fail /dev/sdb4 --remove /dev/sdb4
for each of the partitions, and shut the machine down.
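As a sanity check at this point,
cat /proc/mdstat
should have shown md0/md1/md2 each running degraded as active raid1
with only the sda member present and a [U_] status.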
I unplugged /dev/sdb, plugged the old disk in its place, and booted
up Knoppix.
I asked Knoppix to reassemble the md devices:
mdadm --assemble --scan
and it found 4 RAID devices: the three on sda, plus the one from the
old sda (now on sdb).
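What it assembled can be double-checked with
cat /proc/mdstat
mdadm --detail --scan
the latter printing an ARRAY line, with UUID, for each running array.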
So I mounted /dev/md1 and /dev/sdb2 and reverted the root partition by
copying the old contents over the new.
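The revert itself was nothing exotic - along these lines, with the
mount points purely illustrative:
mount /dev/md1 /mnt/new
mount /dev/sdb2 /mnt/old
rsync -aHAX --delete /mnt/old/ /mnt/new/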
I shut the machine down again, removed the old disk, and plugged back
in the new /dev/sdb that I had failed and removed in the first step.
HOWEVER (the punch line): when this system booted, it was not the old
reverted state but the state from before I started this cycle. In other
words, it looked as though the disk which I had failed and removed was
being used. If I ran mdadm --detail /dev/md1 (or any of the other
devices) it showed the /dev/sdb partition as the only device in the raid
pair. To sync up again I am having to add the various /dev/sda
partitions back in.
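i.e. something like
mdadm /dev/md0 --add /dev/sda1
mdadm /dev/md1 --add /dev/sda2
mdadm /dev/md2 --add /dev/sda4
and then waiting for the resync (visible in /proc/mdstat) to finish.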
SO THE QUESTION IS: what went wrong? How does a failed device end up
being used to build the operational arrays, while the other devices end
up not being included?
--
Alan Chandler
http://www.chandlerfamily.org.uk