On Wed, Nov 30, 2016 at 7:37 AM, Austin S. Hemmelgarn
<ahferro...@gmail.com> wrote:

> The stability info could be improved, but _absolutely none_ of the things
> mentioned as issues with raid1 are specific to raid1.  And in general, in
> the context of a feature stability matrix, 'OK' generally means that there
> are no significant issues with that specific feature, and since none of the
> issues outlined are specific to raid1, it does meet that description of
> 'OK'.

Maybe the gotchas page needs a one or two liner for each profile,
comparing the actual behavior to what the profile name leads the user
to believe. The overriding gotcha with all Btrfs multiple device
support is the lack of monitoring and notification beyond kernel
messages; and raid10 actually behaving more like raid0+1 is certainly
a gotcha, although 'man mkfs.btrfs' does contain a grid that very
clearly states raid10 can only safely lose one device.
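To illustrate the monitoring gap: something like the filter below is the
kind of glue a user currently has to write themselves. The sample text
mimics the shape of 'btrfs device stats' output, with invented counter
values for illustration; a cron job could run the real command on a
mountpoint and pipe it through the same awk filter.

```shell
#!/bin/sh
# Hedged sketch: sample_stats imitates 'btrfs device stats' output
# (counter name, then value); the values here are made up for the
# example. In real use you would replace the here-string with
# something like: btrfs device stats /mnt
sample_stats='[/dev/sda].write_io_errs   0
[/dev/sda].read_io_errs    3
[/dev/sda].flush_io_errs   0
[/dev/sda].corruption_errs 0
[/dev/sda].generation_errs 0'

# Print only the counters that are nonzero, i.e. the ones worth
# alerting on.
echo "$sample_stats" | awk '$2 != 0 { print $1, $2 }'
```

The real command also has a --check/-c flag that sets a nonzero exit
status when any counter is nonzero, which makes it easy to wire into a
cron job that mails on failure.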


> Looking at this another way, I've been using BTRFS on all my systems since
> kernel 3.16 (I forget what exact vintage that is in regular years).  I've
> not had any data integrity or data loss issues as a result of BTRFS itself
> since 3.19, and in just the past year I've had multiple raid1 profile
> filesystems survive multiple hardware issues with near zero issues (with the
> caveat that I had to re-balance after replacing devices to convert a few
> single chunks to raid1), and that includes multiple disk failures and 2 bad
> PSU's plus about a dozen (not BTRFS related) kernel panics and 4 unexpected
> power loss events.  I also have exhaustive monitoring, so I'm replacing bad
> hardware early instead of waiting for it to actually fail.
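The re-balance step mentioned above is worth spelling out: after a
device replacement, chunks written while the array was degraded may
carry the 'single' profile, and a convert balance rewrites them as
raid1. The sketch below only builds the command string (the mountpoint
/mnt is an example); the -dconvert/-mconvert filters are the real
balance options for this.

```shell
#!/bin/sh
# Build (rather than run) the convert-balance command, so it can be
# reviewed before use. $1 = target profile, $2 = mountpoint.
balance_cmd() {
    echo "btrfs balance start -dconvert=$1 -mconvert=$1 $2"
}

balance_cmd raid1 /mnt
# The 'soft' modifier skips chunks that already have the target
# profile, which makes a re-run much cheaper:
#   btrfs balance start -dconvert=raid1,soft -mconvert=raid1,soft /mnt
```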

Possibly nothing aids predictably reliable storage stacks more than a
healthy dose of skepticism and awareness of all the limitations. :-D

-- 
Chris Murphy