On 8 September 2015 at 21:55, Ian Kumlien <ian.kuml...@gmail.com> wrote:
> On 8 September 2015 at 21:43, Ian Kumlien <ian.kuml...@gmail.com> wrote:
>> On 8 September 2015 at 21:34, Hugo Mills <h...@carfax.org.uk> wrote:
>>> On Tue, Sep 08, 2015 at 09:18:05PM +0200, Ian Kumlien wrote:
> [--8<--]
>
>>>    Physically removing it is the way to go (or disabling it using echo
>>> offline >/sys/block/sda/device/state). Once you've done that, you can
>>> mount the degraded FS with -odegraded, then either add a new device
>>> and balance to restore the RAID-1, or balance with
>>> -{d,m}convert=single to drop the redundancy to single.
>>
>> This did not work...
>
> And removing the physical device is not the answer either... until I
> did a read-only mount ;)
>
> Didn't expect it to fail with unable to open ctree like that...
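Just to recap, the suggested sequence boils down to roughly this (device
names below are only placeholders):

echo offline > /sys/block/sdX/device/state    # take the failed disk offline
mount -o degraded /dev/sdY2 /mnt/disk/        # mount the surviving member degraded
btrfs balance start -dconvert=single -mconvert=single /mnt/disk/
(or: btrfs device add <new disk> plus a plain balance, to rebuild the RAID-1)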

Someone apparently stopped too early: only one disk left => read-only
mount only. But a read-only mount => no balance (and no device delete).

I think something is wrong....

btrfs balance start -dconvert=single -mconvert=single /mnt/disk/
ERROR: error during balancing '/mnt/disk/' - Read-only file system

btrfs dev delete missing /mnt/disk/
ERROR: error removing the device 'missing' - Read-only file system

Any mount without ro becomes:
[  507.236652] BTRFS info (device sda2): allowing degraded mounts
[  507.236655] BTRFS info (device sda2): disk space caching is enabled
[  507.325365] BTRFS: bdev (null) errs: wr 2036894, rd 2031380, flush 705, corrupt 0, gen 0
[  510.983321] BTRFS: too many missing devices, writeable mount is not allowed
[  511.006241] BTRFS: open_ctree failed

And one of them has to give! ;)
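In other words, the two mount variants come down to roughly this
(presumably; device name taken from the log above, mount point as used
earlier):

mount -o degraded /dev/sda2 /mnt/disk/      # refused: writeable mount not allowed
mount -o degraded,ro /dev/sda2 /mnt/disk/   # works, but read-only => no balance, no device delete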

> [--8<--]