On 09/05/2013 12:49 PM, Paul Hartman wrote:
> Hi,
>
> I woke up this morning to see the dreaded email from mdadm telling me
> one of my drives failed overnight, while I was happily dreaming about
> cute puppies and kittens installing a rainbow-colored roof on my
> house. The array is a RAID6 (two parity drives) and this is the
> current state:
>
> md0 : active raid6 sdd1[5] sdg1[4] sde1[3](F) sdh1[2] sdf1[1] sdi1[0]
>       11720009728 blocks super 1.2 level 6, 512k chunk, algorithm 2
>       [6/5] [UUU_UU]
>
> I've been using RAID in Linux for years, but this is actually the
> first time I've had a disk fail in one.
>
> If I remember correctly, the process should be as simple as:
>
> # remove the failed disk from the array:
> mdadm /dev/md0 -r /dev/sde1
>
> # pull the drive, replace with new one, partition it, then add it to the array:
> mdadm /dev/md0 -a /dev/sde1
>
> and sit back and eat popcorn while I enjoy the blinkenlights for the
> next several hours/days? :) Any advice/suggestions for managing this
> process any differently?
This is the process I always follow: http://www.howtoforge.com/replacing_hard_disks_in_a_raid1_array

The sfdisk trick (dumping the partition table from a surviving member and replaying it onto the replacement disk) will save you a bit of hassle.
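For reference, a minimal sketch of that trick combined with your planned mdadm steps. Device names (sdd as the surviving member, sde as the replacement) are assumed from your mdstat output; double-check yours before running anything destructive:

```shell
# Remove the failed partition from the array, then physically swap the drive:
mdadm /dev/md0 -r /dev/sde1

# The "sfdisk trick": copy the partition table from a healthy array member
# (here assumed to be /dev/sdd) onto the new blank disk (/dev/sde), so the
# replacement partition is identically sized and typed:
sfdisk -d /dev/sdd | sfdisk /dev/sde

# Add the new partition back and watch the rebuild progress:
mdadm /dev/md0 -a /dev/sde1
watch cat /proc/mdstat
```

One caveat: with ~3TB members your disks are presumably GPT, and older sfdisk builds only understood MBR; on such systems use `sgdisk --replicate` from gdisk instead (recent util-linux sfdisk handles GPT fine).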