On Fri, Sep 9, 2016 at 12:58 PM, Chris Murphy <li...@colorremedies.com> wrote:

>
> It should work better than it does because it works well for LVM and
> mdadm arrays.
>
> I think what's going on is the DE's mounter (udisksd) tries to mount
> each Btrfs device node, even though those nodes make up a single fs
> volume.

I updated this bug, but it looks like it's going to a maintainer who's
not reading these mails.
https://bugs.freedesktop.org/show_bug.cgi?id=87277#c3

Anyway, the problem is pretty bad, as I describe in that bug. It will
almost always cause some kind of Btrfs corruption, which only gets
repaired if everything is raid1. The main flaw is that udisksd uses
sysfs to delete the device node before unmounting the file system, so
any multiple-device Btrfs array that's automounted becomes degraded.
The silver lining is that the damage is recoverable so long as at
least the metadata is raid1, which it probably ought to be anyway,
rather than single, dup, or raid0.
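
To make the ordering concrete, here's a minimal sketch of what a safe
eject path should look like (Python, run as root; /dev/sdb, /mnt/data,
and safe_eject are hypothetical, and this illustrates the ordering,
not udisksd's actual code):

    import subprocess

    def safe_eject(mountpoint="/mnt/data", disk="sdb"):
        # 1. Unmount first, so Btrfs can flush and cleanly close all
        #    member devices of the volume.
        subprocess.run(["umount", mountpoint], check=True)
        # 2. Only then tear down the device via sysfs. Doing this step
        #    before step 1 is what yanks a member out from under a
        #    mounted multi-device Btrfs volume and leaves it degraded.
        with open(f"/sys/block/{disk}/device/delete", "w") as f:
            f.write("1")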

In any case, it's far worse and more dangerous than LVM or mdadm RAID,
which don't exhibit this behavior. So of the short-term options I see,
udisks2 (and now storaged) needs to blacklist Btrfs from automounting:
it simply isn't smart enough to avoid doing genuinely destructive
things.
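
For the record, a udev rule along these lines might be enough in the
meantime. This assumes udisks2 still honors the UDISKS_AUTO property
documented in udisks(8); the file name is just a hypothetical example:

    # /etc/udev/rules.d/99-no-btrfs-automount.rules
    # Tell udisks2 not to automount anything blkid identifies as Btrfs.
    SUBSYSTEM=="block", ENV{ID_FS_TYPE}=="btrfs", ENV{UDISKS_AUTO}="0"

After dropping that in, "udevadm control --reload" and replugging (or
"udevadm trigger") should make it take effect.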



-- 
Chris Murphy