Hi,

Currently I have a RAID1 configuration on two disks, one of which is
failing.

But since the allocation is almost entirely RAID1:
btrfs fi df /mnt/disk/
Data, RAID1: total=858.00GiB, used=638.16GiB
Data, single: total=1.00GiB, used=256.00KiB
System, RAID1: total=32.00MiB, used=132.00KiB
Metadata, RAID1: total=4.00GiB, used=1.21GiB
GlobalReserve, single: total=412.00MiB, used=0.00B

There should be no problem with failing one disk... or so I thought!

btrfs dev delete /dev/sdb2 /mnt/disk/
ERROR: error removing the device '/dev/sdb2' - unable to go below two
devices on raid1
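
(From what I can tell, the usual way around this is to bring in a
replacement device first, so the device count never drops below two;
/dev/sdc2 below is just a hypothetical spare partition:

# add a third device, then remove the failing one
btrfs dev add /dev/sdc2 /mnt/disk/
btrfs dev delete /dev/sdb2 /mnt/disk/

# or, presumably, do the same in one step
btrfs replace start /dev/sdb2 /dev/sdc2 /mnt/disk/

But of course that only helps if a spare disk is available.)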

And I can't run a rebalance either, since it just reports errors
until the failing disk dies completely.
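
(For reference, the rebalance I attempted was roughly the plain full
balance:

btrfs balance start /mnt/disk/

which errors out reading from the failing disk.)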

What's even more interesting is that I can't mount just the working
disk, i.e. for the case where the other disk *has* failed and is
inaccessible... though I haven't tried physically removing it...
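
(Presumably the way to do that would be a degraded mount with only
the surviving disk attached; /dev/sda2 below is a hypothetical name
for the working partition:

# mount the filesystem with one RAID1 member missing
mount -o degraded /dev/sda2 /mnt/disk/

though I can't confirm how that behaves until I physically pull the
failing disk.)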

mdadm has --fail and --remove, I assume for exactly this reason;
perhaps something similar should be added to btrfs?
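
For comparison, with md one can do something like the following,
where /dev/md0 is a hypothetical array name:

# mark the member faulty, then drop it while the array keeps
# running in degraded mode
mdadm /dev/md0 --fail /dev/sdb2
mdadm /dev/md0 --remove /dev/sdb2

An equivalent in btrfs would make handling a dying disk much less
painful.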

uname -r
4.2.0

btrfs --version
btrfs-progs v4.1.2