On 2015-07-22 07:00, Russell Coker wrote:
> On Tue, 23 Jun 2015 02:52:43 AM Chris Murphy wrote:
>> OK, I actually don't know what the intended block layer behavior is
>> when unplugging a device: whether it is supposed to vanish, or change
>> state somehow so that things that depend on it can know it's "missing",
>> or what. So the question here is, is this working as intended? If the
>> layer Btrfs depends on isn't working as intended, then Btrfs is
>> probably going to do wild and crazy things. And I don't know whether
>> the part of the block layer Btrfs depends on for this is the same as
>> (or different from) what the md driver depends on.
>
> I disagree with that statement.  BTRFS should be expected to not do wild
> and crazy things regardless of what happens with block devices.
I would generally agree with this, although we really shouldn't be trying to handle hardware failures without user intervention. If a block device disappears out from under us, we should throw a warning, and if it was the last device in the FS, kill anything that is trying to read from or write to that FS. At the very least, we should avoid hanging or panicking the system if all of the devices in an FS disappear out from under us.

> A BTRFS RAID-1/5/6 array should cope with a single disk failing or returning
> any manner of corrupted data and should not lose data or panic the kernel.
It's debatable, however, whether the array should go read-only when degraded. MD/DM RAID (AFAIK) and most hardware RAID controllers I've seen will still accept writes to a degraded array, although there are arguments for forcing it read-only as well. Personally, I think this should be controlled by a mount option so the sysadmin can decide, as it really is a policy decision.
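
To illustrate, this is roughly the shape I have in mind, expressed through mount(2). It's only a sketch: the device path and mount point are placeholders, "degraded" is the option Btrfs already uses to allow mounting with a device missing, and whether the filesystem stays writable in that state is the policy knob I'd like to see exposed (approximated here by toggling MS_RDONLY):

#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
	/* "degraded" lets Btrfs mount with a device missing; passing
	 * MS_RDONLY is the conservative policy, dropping it is the
	 * "keep accepting writes" policy.  A dedicated mount option to
	 * pick between the two is what I'm suggesting above. */
	if (mount("/dev/sdb1", "/mnt/data", "btrfs", MS_RDONLY, "degraded")) {
		perror("mount");
		return 1;
	}
	return 0;
}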

> A BTRFS RAID-0 or single disk setup should cope with a disk giving errors by
> mounting read-only or failing all operations on the filesystem.  It should not
> affect any other filesystem or have any significant impact on the system unless
> it's the root filesystem.
Or some other critical filesystem (there are still people who put /usr and/or /var on separate filesystems). Ideally, I'd love to see some kind of warning from the kernel if a filesystem gets mounted with its metadata/system profile set to raid0, and possibly have some of the userspace tools spit out such a warning as well.
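
The userspace half of that should be easy enough to prototype. A rough, untested sketch of the check follows, assuming the BTRFS_IOC_SPACE_INFO ioctl from linux/btrfs.h; the block-group flag values are hard-coded and assumed to match the on-disk format, and the program and its usage are made up:

#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/btrfs.h>

/* Assumed to match the on-disk block group flag bits. */
#define BG_SYSTEM   (1ULL << 1)
#define BG_METADATA (1ULL << 2)
#define BG_RAID0    (1ULL << 3)

int main(int argc, char **argv)
{
	struct btrfs_ioctl_space_args probe = { 0 }, *args;
	unsigned long long i;
	int fd;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <mountpoint>\n", argv[0]);
		return 1;
	}
	fd = open(argv[1], O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	/* First call with space_slots == 0 only reports how many slots exist. */
	if (ioctl(fd, BTRFS_IOC_SPACE_INFO, &probe) < 0) {
		perror("BTRFS_IOC_SPACE_INFO");
		return 1;
	}
	args = calloc(1, sizeof(*args) +
		      probe.total_spaces * sizeof(struct btrfs_ioctl_space_info));
	if (!args)
		return 1;
	args->space_slots = probe.total_spaces;
	if (ioctl(fd, BTRFS_IOC_SPACE_INFO, args) < 0) {
		perror("BTRFS_IOC_SPACE_INFO");
		return 1;
	}
	for (i = 0; i < probe.total_spaces; i++) {
		unsigned long long flags = args->spaces[i].flags;

		if ((flags & (BG_METADATA | BG_SYSTEM)) && (flags & BG_RAID0))
			fprintf(stderr,
				"warning: %s has a raid0 metadata/system profile\n",
				argv[1]);
	}
	free(args);
	close(fd);
	return 0;
}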

