On 2017-03-02 12:26, Andrei Borzenkov wrote:
02.03.2017 16:41, Duncan wrote:
Chris Murphy posted on Wed, 01 Mar 2017 17:30:37 -0700 as excerpted:

[1717713.408675] BTRFS warning (device dm-8): missing devices (1)
exceeds the limit (0), writeable mount is not allowed
[1717713.446453] BTRFS error (device dm-8): open_ctree failed

[chris@f25s ~]$ uname -r
4.9.8-200.fc25.x86_64

I thought this was fixed. I'm still getting only a one-time degraded rw
mount; after that it's no longer allowed, which really doesn't make any
sense, because those single chunks are on the drive I'm trying to mount.
I don't understand what problem this proscription is trying to avoid. If
it's OK to mount rw,degraded once, then it's OK to allow it twice. If
it's not OK twice, it's not OK once.

AFAIK, no, it hasn't been fixed, at least not in mainline, because the
patches to fix it got stuck in some long-running project patch queue
(IIRC, the one for on-degraded auto-device-replace), with no timeline
known to me on mainline merge.

Meanwhile, the problem as I understand it is that at the first raid1
degraded writable mount, no single-mode chunks exist yet, but with the
second device missing, any newly allocated chunks are created single-mode.

Isn't that the root cause? I would expect it to create degraded mirrored
chunks that would be synchronized when the second device is added back.
That's exactly what it should be doing, and AFAIK that's what the correct fix should do. In the interim, though, just relaxing the degraded check to be per-chunk makes things usable, and is arguably how it should have been to begin with.

(It's not clear to me whether they are created with the first write, that
is, ignoring any free space in existing degraded raid1 chunks, or whether
that space is used up first and single-mode chunks are only created later,
when a new chunk must be allocated to continue writing because the old
ones are full.)
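For what it's worth, you can see which profiles a filesystem's chunks landed in with btrfs filesystem df. A minimal sketch (the helper function name and the /mnt mount point are my own, hypothetical; it needs root and btrfs-progs to actually show anything):

```shell
# Print per-profile chunk usage for a mounted btrfs filesystem, or a
# short note if the given path isn't one.
show_chunk_profiles() {
    mnt="$1"
    btrfs filesystem df "$mnt" 2>/dev/null \
        || echo "not a mounted btrfs filesystem: $mnt"
}

show_chunk_profiles "${MNT:-/mnt}"
# On a raid1 filesystem that was written while mounted degraded, you
# would see something like (illustrative output, not from a real system):
#   Data, RAID1: total=10.00GiB, used=9.50GiB
#   Data, single: total=1.00GiB, used=512.00MiB
#   Metadata, RAID1: total=1.00GiB, used=320.00MiB
```

Any "single" lines on a filesystem that is supposed to be all-raid1 are the leftovers being discussed here.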

So the first degraded-writable mount is allowed, because no single-mode
chunks exist yet. But once such single-mode chunks have been created, the
existing dumb algorithm won't allow further writable mounts: it sees
single-mode chunks on a multi-device filesystem, and never mind that all
of those chunks are actually on the present device, it simply doesn't
check that, and refuses a writable mount because some /might/ be on the
missing device.

The patches stuck in queue would make btrfs more intelligent about that,
having it check each chunk as listed in the chunk tree, and if at least
one copy is available (as would be the case for single-mode chunks
created after the degraded mount), writable mount would still be
allowed.  But... that's stuck in a long-running project queue with no
known timetable for merging... <grumble, grumble>... so the only way to
get it is to go find the patches and merge them yourself, in your own
build.


Will it replicate single-mode chunks when the second device is added?
Not automatically; you would need to convert them back to raid1 (or whatever other profile you want). Even with the patch, this conversion would still be needed, but at least it would (technically) work sanely.

On that note, on most of my systems I have a startup script that calls balance with the appropriate convert filters, plus the soft filter, for every fixed (non-removable) BTRFS volume on the system, to clean up after this. The actual balance call takes no time at all unless there are actually chunks to convert, so it normally has very little impact on boot times.
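A sketch of such a script follows. This is an illustration, not my actual script: the function name, the FSTAB/DRY_RUN variables, the fstab-based mount-point discovery, and the raid1 target are all assumptions here; adjust the profiles to match your setup. The soft filter is what keeps it cheap, since balance then skips chunks already in the target profile. It defaults to a dry run that only prints the commands; set DRY_RUN=0 to execute them:

```shell
#!/bin/sh
# Convert any leftover single-profile chunks back to raid1 on every
# btrfs filesystem listed in fstab. With the "soft" filter, balance
# skips chunks that already match the target profile, so this is
# effectively a no-op unless a degraded mount left single chunks behind.
FSTAB="${FSTAB:-/etc/fstab}"
DRY_RUN="${DRY_RUN:-1}"    # default: only print; set to 0 to run for real

convert_leftover_singles() {
    # Pick the mount point of every btrfs entry, skipping comment lines.
    awk '$1 !~ /^#/ && $3 == "btrfs" { print $2 }' "$FSTAB" 2>/dev/null |
    while read -r mnt; do
        cmd="btrfs balance start -dconvert=raid1,soft -mconvert=raid1,soft $mnt"
        if [ "$DRY_RUN" = "1" ]; then
            echo "$cmd"
        else
            $cmd
        fi
    done
}

convert_leftover_singles
```

A real version would also want to skip removable devices, which is why I said "fixed" volumes above; how you detect those is distribution-specific.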
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
