FYI: I ended up wipefs'ing the drive and adding it back in. I also had
to abort the residual balance process to get the filesystem back to a
state where I could add a disk. I didn't realize this until after wiping
the drive. Maybe if I'd known to look for that I could have recovered
the drive before
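For anyone hitting the same situation, the recovery sequence described above roughly corresponds to the commands below (a sketch only — the device name /dev/sdd and mount point /mnt are placeholders, and wipefs is destructive, so double-check the device first):

```shell
# Cancel the residual balance so the filesystem will accept device changes
btrfs balance cancel /mnt

# Wipe the filesystem signatures from the faulty drive (destroys its metadata!)
wipefs -a /dev/sdd

# Add the wiped drive back into the array
btrfs device add /dev/sdd /mnt

# Rebalance so chunks get mirrored onto the re-added drive again
btrfs balance start /mnt
```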
Sandy McArthur sandy...@gmail.com wrote:
I have a 4 disk RAID1 setup that fails to {mount,btrfsck} when disk 4
is connected.
With disk 4 attached btrfsck errors with:
btrfsck: root-tree.c:46: btrfs_find_last_root: Assertion
`!(path->slots[0] == 0)' failed
(I'd have to reboot in a non-functioning state to get the full output.)
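When a RAID1 filesystem refuses a normal mount because one member is missing or bad, it can often still be mounted read-only in degraded mode to copy data off before attempting repairs (a sketch; /dev/sda and /mnt are placeholder names):

```shell
# Mount a btrfs RAID1 with a missing/faulty member, read-only,
# tolerating the absent device
mount -t btrfs -o degraded,ro /dev/sda /mnt
```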
Kai Krakow posted on Sun, 04 Aug 2013 14:41:54 +0200 as excerpted:
It is a RAID-1 so why bother with the faulty drive? Just wipe it, put it
back in, then run a btrfs balance... There should be no data loss
because all data is stored twice (two-way mirroring).
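Before relying on that, it is worth confirming the filesystem really does use the raid1 profile for both data and metadata, since a filesystem converted from a single-device layout may still carry single or raid0 chunks (a sketch; /mnt is a placeholder):

```shell
# Show which allocation profiles are in use; both Data and Metadata
# should report RAID1 before assuming every block has a second copy
btrfs filesystem df /mnt

# If either shows single/RAID0, it can be converted online, e.g.:
# btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt
```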
The caveat would be if it didn't
On Aug 4, 2013, at 4:19 PM, Duncan 1i5t5.dun...@cox.net wrote:
Kai Krakow posted on Sun, 04 Aug 2013 14:41:54 +0200 as excerpted:
It is a RAID-1 so why bother with the faulty drive? Just wipe it, put it
back in, then run a btrfs balance... There should be no data loss
because all data is stored twice (two-way mirroring).
The caveat would be if it didn't start as btrfs raid1, and
I can