On 2017-04-07 13:05, John Petrini wrote:
The use case actually is not Ceph, I was just drawing a comparison
between Ceph's object replication strategy vs BTRFS's chunk mirroring.
That's actually a really good comparison that I hadn't thought of
before. From what I can tell from my limited understanding of how Ceph
works, the general principles are pretty similar, except that BTRFS
doesn't understand or implement failure domains (although having CRUSH
implemented in BTRFS for chunk placement would be a killer feature IMO).
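To make the failure-domain idea concrete, here's a minimal sketch (not actual CRUSH, and certainly not Ceph or BTRFS code) of domain-aware replica placement using rendezvous hashing; the function name, device list, and domain labels are all made up for illustration:

```python
import hashlib

def place_replicas(chunk_id, devices, replicas=2):
    """Pick `replicas` devices for a chunk, at most one per failure
    domain, by ranking devices with a per-chunk hash (highest random
    weight).  A toy stand-in for CRUSH's hierarchy-aware placement.

    devices: list of (device_name, failure_domain) tuples
    """
    def score(name):
        # Deterministic pseudo-random weight for this (chunk, device) pair
        h = hashlib.sha256(f"{chunk_id}:{name}".encode()).hexdigest()
        return int(h, 16)

    chosen = []
    used_domains = set()
    # Rank all devices by score, then greedily take the best device
    # from each failure domain we haven't used yet.
    for name, domain in sorted(devices, key=lambda d: score(d[0]), reverse=True):
        if domain in used_domains:
            continue
        chosen.append(name)
        used_domains.add(domain)
        if len(chosen) == replicas:
            break
    return chosen
```

The point of the hash-ranking approach is that placement is deterministic (any node can compute where a chunk lives without a lookup table), yet the domain filter guarantees the two copies never share a host the way BTRFS raid1 chunks can share a failure domain today.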
I do find the conversation interesting, however, as I work with Ceph
quite a lot but have always gone with the default XFS filesystem for
the OSDs.
From a stability perspective, I would normally still go with XFS for
the OSDs. Most of the data integrity features provided by BTRFS are
also implemented in Ceph, so you don't gain much other than flexibility
currently by using BTRFS instead of XFS. The one advantage BTRFS has in
my experience over XFS for something like this is that recent versions
seem more likely to survive a power failure without any serious data
loss, but that's not really a common concern in Ceph's primary use
case.