On 2017-04-07 12:28, Chris Murphy wrote:
> On Fri, Apr 7, 2017 at 7:50 AM, Austin S. Hemmelgarn
> <ahferro...@gmail.com> wrote:

>> If you care about both performance and data safety, I would suggest using
>> BTRFS raid1 mode on top of LVM or MD RAID0, together with good backups and
>> good monitoring.  Statistically speaking, catastrophic hardware failures
>> are rare, and you'll usually have more than enough warning that a device
>> is failing before it actually does.  Provided you keep on top of monitoring
>> and replace disks that show signs of impending failure as soon as possible,
>> you will be no worse off in terms of data integrity than running ext4 or
>> XFS on top of an LVM or MD RAID10 volume.


> Depending on the workload, and what replication is being used by Ceph
> above this storage stack, it might make more sense to do something like
> three lvm/md raid5 arrays, and then Btrfs single data, raid1 metadata,
> across those three raid5s. That's giving up only three drives to parity
> rather than half the drives, and rebuild time after a single drive
> failure is shorter than it would be after losing a drive in a raid0 array.

Ah, I had forgotten this was a Ceph back-end system. In that case I would suggest essentially the same setup Chris did, though I would personally be a bit more conservative and use RAID6 instead of RAID5 for the LVM/MD arrays. As he said, it really depends on what higher-level replication you're doing: in particular, if you're running erasure coding instead of replication at the Ceph level, I would probably still go with BTRFS raid1 on top of LVM/MD RAID0, just to balance out the performance hit from the erasure coding.
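
To make that concrete, the three-array layout would look roughly like the
following. Device names and drive counts are just placeholders for a
12-disk box, so adjust to the actual hardware; use --level=5 for Chris's
original suggestion, --level=6 for the more conservative variant:

    # Three 4-disk MD arrays, each able to lose a drive (or two with
    # RAID6) without taking the whole filesystem down with it.
    mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sd[a-d]
    mdadm --create /dev/md1 --level=6 --raid-devices=4 /dev/sd[e-h]
    mdadm --create /dev/md2 --level=6 --raid-devices=4 /dev/sd[i-l]

    # Btrfs across the three arrays: single data (MD already provides
    # the redundancy) and raid1 metadata, so btrfs can repair metadata
    # from the second copy when a checksum fails.
    mkfs.btrfs -d single -m raid1 /dev/md0 /dev/md1 /dev/md2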
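
The raid1-on-raid0 arrangement for the erasure-coding case would be
something like this (again, purely illustrative device names):

    # Two MD RAID0 stripes for throughput; a single drive failure kills
    # a whole stripe, which is why monitoring and backups matter here.
    mdadm --create /dev/md0 --level=0 --raid-devices=6 /dev/sd[a-f]
    mdadm --create /dev/md1 --level=0 --raid-devices=6 /dev/sd[g-l]

    # Btrfs raid1 for both data and metadata across the two stripes, so
    # every block has a second copy and checksum errors can be repaired
    # from the other stripe.
    mkfs.btrfs -d raid1 -m raid1 /dev/md0 /dev/md1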
