Chris,

Thanks for the tip. I was able to mount the filesystem with the
degraded and recovery options. Then I deleted the faulty drive,
leaving me with the following array:

Label: media  uuid: 7b7afc82-f77c-44c0-b315-669ebd82f0c5
        Total devices 6 FS bytes used 2.40TiB
        devid    1 size 931.51GiB used 919.88GiB path /dev/mapper/SAMSUNG_HD103SI_499431FS734755p1
        devid    2 size 931.51GiB used 919.38GiB path /dev/dm-8
        devid    3 size 1.82TiB used 1.19TiB path /dev/dm-6
        devid    4 size 931.51GiB used 919.88GiB path /dev/dm-5
        devid    5 size 0.00 used 918.38GiB path /dev/dm-11
        devid    6 size 1.82TiB used 3.88GiB path /dev/dm-9
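
For reference, this is roughly how I got here; the mount point and the
member device I passed to mount are approximations rather than copies
from my shell history:

  # mount the array read-write despite the failed member
  mount -o degraded,recovery /dev/dm-8 /mnt/media
  # start removing the faulty drive from the array
  btrfs device delete /dev/dm-11 /mnt/media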


/dev/dm-11 is the failed drive. I take it that size 0 is a good sign.
I'm not really sure where to go from here. I tried rebooting the
system with the failed drive attached, and Btrfs re-adds it to the
array. Should I physically remove the drive now? Is a balance
recommended?
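
To be concrete, by a balance I mean a plain full balance along these
lines (the mount point is only an example):

  btrfs balance start /mnt/media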


Thanks,
Justin