Hi All,
on a test machine I installed four HDDs and configured them as a btrfs
RAID6 volume. As a test I removed one of the drives (/dev/sdk) while the
volume was mounted and data was being written to it. This worked well,
as far as I can tell: some I/O errors were logged to /var/log/syslog,
but the volume kept working. Unfortunately "btrfs fi sh" did not report
any missing drive, so I remounted the volume in degraded mode:
"mount -t btrfs /dev/sdx1 -o remount,rw,degraded,noatime /mnt". After
that the drive in question was reported as missing. I then plugged the
HDD back in (it shows up as /dev/sdk again) and started a balance in the
hope that this would restore the RAID6 redundancy:
"btrfs filesystem balance start /mnt". Now the volume looks like this:
$ btrfs fi sh
Label: none uuid: 28410e37-77c1-4c01-8075-0d5068d9ffc2
Total devices 4 FS bytes used 257.05GiB
devid 1 size 465.76GiB used 262.03GiB path /dev/sdi1
devid 2 size 465.76GiB used 262.00GiB path /dev/sdj1
devid 3 size 465.76GiB used 261.03GiB path /dev/sdh1
devid 4 size 465.76GiB used 0.00 path /dev/sdk1
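By the way, I am not sure how to tell from this output whether
/dev/sdk1 is actually being resynced at all. I assume (untested, and
only if my btrfs-progs version supports it) that the per-device error
counters and a scrub would at least give a hint:
$ btrfs device stats /mnt
$ btrfs scrub start -B /mnt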
How do I reintegrate /dev/sdk1? Since running "btrfs fi ba start /mnt"
does not help, I tried to remove the HDD, but
$ btrfs de de /dev/sdk1 /mnt/
ERROR: error removing the device '/dev/sdk1' - unable to go below four
devices on raid6
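If I understand the four-device minimum correctly, a workaround would
presumably be to add a fifth disk first and only then delete /dev/sdk1,
roughly like this (/dev/sdX1 standing in for a hypothetical spare):
$ btrfs device add /dev/sdX1 /mnt
$ btrfs device delete /dev/sdk1 /mnt
But I would prefer a solution that does not require an extra disk.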
Replacing the device with itself does not work either:
$ btrfs replace start -f -r /dev/sdk1 /dev/sdk1 /mnt
/dev/sdk1 is mounted
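The btrfs-replace man page also accepts the source as a devid (4 in
this case), but since the "is mounted" complaint seems to be about the
target device, I doubt the following would behave any differently:
$ btrfs replace start -f -r 4 /dev/sdk1 /mnt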
Are there ways to replace/reintegrate the HDD other than converting to
RAID 5?
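For reference, the RAID5 detour I am trying to avoid would, as far as I
understand the balance convert filters, look roughly like this
(untested sketch; metadata is RAID1 and would stay that way):
$ btrfs balance start -dconvert=raid5 /mnt
$ btrfs device delete /dev/sdk1 /mnt
$ wipefs -a /dev/sdk1
$ btrfs device add /dev/sdk1 /mnt
$ btrfs balance start -dconvert=raid6 /mnt
That would rewrite all the data twice, which is what I am hoping to
avoid.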
Here is some more information about my configuration:
$ uname -a
Linux hostname 3.13.0-30-generic #55-Ubuntu SMP Fri Jul 4 21:40:53 UTC
2014 x86_64 x86_64 x86_64 GNU/Linux
$ btrfs --version
Btrfs v3.12
$ btrfs fi df /mnt
Data, RAID6: total=263.00GiB, used=256.82GiB
System, RAID1: total=32.00MiB, used=36.00KiB
Metadata, RAID1: total=1.00GiB, used=271.13MiB