On Wed, Nov 30, 2016 at 7:04 AM, Roman Mamedov <r...@romanrm.net> wrote:
> On Wed, 30 Nov 2016 07:50:17 -0500

> Also I don't know what is particularly insane about copying a 4-8 GB file onto
> a storage array. I'd expect both disks to write at the same time (like they
> do in pretty much any other RAID1 system), not one-after-another, effectively
> slowing down the entire operation by as much as 2x in extreme cases.

I don't experience this behavior. Writes to a single-profile volume
take the same amount of time as writes to a two-device raid1-profile
volume. iotop reports 2x the write bandwidth when writing to the
raid1 volume, which corresponds to simultaneous writes to both drives
in the volume. It's not an elaborate setup by any means: two laptop
drives, each in a cheap USB 3.0 case using bus power only, connected
to a USB 3.0 hub, in turn connected to an Intel NUC.
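
If anyone wants to reproduce that kind of test, a minimal sketch; the
device names and file size are placeholders, not my exact setup:

    # Make a two-device volume with raid1 for both data and metadata.
    mkfs.btrfs -f -d raid1 -m raid1 /dev/sdb /dev/sdc
    mount /dev/sdb /mnt

    # Write a few GiB while watching per-device bandwidth in another
    # terminal with 'iotop -o'.
    dd if=/dev/zero of=/mnt/testfile bs=1M count=4096 conv=fsync

With simultaneous mirrored writes you should see roughly double the
aggregate write bandwidth in iotop, and the same elapsed time as a
single-profile volume.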


>
> Comparing to Ext4, that one appears to have the "errors=continue" behavior by
> default, the user has to explicitly request "errors=remount-ro", and I have
> never seen anyone use or recommend the third option of "errors=panic", which
> is basically the equivalent of the current Btrfs practice.

I think in the context of a degraded volume, it may be appropriate to
mount degraded,ro by default rather than fail outright. But changing
the default isn't enough for the root fs use case, because the mount
command isn't even issued when udev's btrfs 'dev scan' fails to report
all devices present. In that case there is a sort of "pre check"
before mounting is even attempted, and that is what fails.
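
For context, today the degraded mount has to be asked for explicitly.
A minimal sketch, with /dev/sdb standing in for a surviving member:

    # Mount a volume with a missing device, read-only:
    mount -o degraded,ro /dev/sdb /mnt

    # For a btrfs root fs, the same option goes on the kernel command
    # line, e.g. rootflags=degraded

The "pre check" I mean is, on systemd systems, the 'btrfs ready' udev
builtin in 64-btrfs.rules, which keeps SYSTEMD_READY=0 until every
member device has been scanned.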

Also, Btrfs has fatal_errors=panic and it's not the default; rather,
we just get a mount failure. There really isn't anything quite like
this in the mdadm/lvm + other file system world, where the array runs
degraded and the file system mounts anyway; if it doesn't mount, it's
because the array isn't active and the block device doesn't even exist
yet.
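
To make the contrast concrete, with md the degraded case is handled
entirely below the file system; roughly, with hypothetical device
names:

    # Start the array even though a member is missing (--run forces
    # assembly of a degraded array):
    mdadm --assemble --run /dev/md0 /dev/sda1

    # The file system neither knows nor cares that it's degraded:
    mount /dev/md0 /mnt

If the array can't start, /dev/md0 never appears, and the mount fails
for that reason rather than by file system policy.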


> Unplugging and replugging a SATA cable of a RAID1 member should never put your
> system under the risk of a massive filesystem corruption; you cannot say it
> absolutely doesn't with the current implementation.

I can't say it absolutely doesn't even with md. Of course it
shouldn't, but users on the other fs lists (ext4, XFS, linux-raid) do
report corruption from time to time that is not the result of user
error.




-- 
Chris Murphy