On Mon, Dec 18, 2017 at 08:06:57 -0500, Austin S. Hemmelgarn wrote:

> The fact is, the only cases where this is really an issue is if you've
> either got intermittently bad hardware, or are dealing with external

Well, RAID1+ is all about failing hardware.

> storage devices. For the majority of people who are using multi-device
> setups, the common case is internally connected fixed storage devices
> with properly working hardware, and for that use case, it works
> perfectly fine.

If you're talking about "RAID"-0 or storage pools (volume management),
that is true. But if you're implying that RAID1+ "works perfectly fine
as long as the hardware works fine", that is fundamentally wrong. If the
hardware had to work properly for the RAID to work properly, no one
would need the RAID in the first place.

> that BTRFS should not care. At the point at which a device is dropping
> off the bus and reappearing with enough regularity for this to be an
> issue, you have absolutely no idea how else it's corrupting your data,
> and support of such a situation is beyond any filesystem (including ZFS).

Supporting exactly such situations is what RAID is for. So don't blame
people for expecting this to be handled as long as you call the
filesystem feature 'RAID'. If this feature is not meant to mitigate
hardware hiccups by design (as opposed to "not implemented yet, needs
some time", which is perfectly understandable), just don't call it
'RAID'. The features that currently do work, like bit-rot mitigation
for duplicated data (dup/raid*) using checksums, are something
different from RAID itself. RAID means "survive the failure of N
devices/controllers" - I got one "RAID1" stuck read-only after a
degraded mount, not nice... That is not _expected_ to happen after a
single disk failure (without any device reappearing).
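For reference, that read-only trap is easy to reproduce on older
kernels (fixed around 4.14, if I recall correctly) with nothing but
loop devices - a rough sketch, device paths and sizes made up:

    # build a two-device btrfs RAID1 on loop devices
    truncate -s 1G /tmp/d1.img /tmp/d2.img
    losetup /dev/loop1 /tmp/d1.img
    losetup /dev/loop2 /tmp/d2.img
    mkfs.btrfs -m raid1 -d raid1 /dev/loop1 /dev/loop2
    mount /dev/loop1 /mnt

    # simulate losing the second disk
    umount /mnt
    losetup -d /dev/loop2

    # first degraded mount succeeds read-write...
    mount -o degraded /dev/loop1 /mnt
    echo test > /mnt/file   # ...but new chunks get the 'single' profile
    umount /mnt

    # second degraded mount: old kernels refuse rw because of the
    # unreplicated 'single' chunks, leaving the filesystem read-only
    mount -o degraded /dev/loop1 /mnt

So a single degraded rw mount creates chunks with no redundancy, and
every later mount refuses rw because of them - that's the "stuck
read-only" above, without any device ever reappearing.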
-- 
Tomasz Pala <go...@pld-linux.org>