On Wed, Oct 14, 2015 at 9:47 PM, Chris Murphy <li...@colorremedies.com> wrote:
> For that matter, now that GlusterFS has checksums and snapshots...
Interesting - I haven't kept up with that. Does it actually do end-to-end checksums? That is, compute the checksum at the time of storage, store the checksum in the metadata somehow, and ensure the checksum matches when the data is retrieved? I forget whether it was glusterfs or ceph I was looking at, but some of those distributed filesystems only checksum data while in transit, not while it is at rest. So, if a server claims it has a copy of the file, that copy is assumed to be good, and you never realize that even though you have 5 copies of that file distributed around, the one you ended up using differs from the other 4.

I'm also not sure whether it supports an n+1/n+2 parity model like raid5/6, or only a 2*n replication model like raid1. If I want to store 5TB of data with redundancy, I'd prefer not to need 10TB worth of drives to do it, regardless of how many systems they're spread across.

-- 
Rich
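For what it's worth, the end-to-end scheme I mean could be sketched roughly like this - checksum computed at write time, stored alongside the data, and re-verified on every read, so a replica that rotted at rest is detected instead of silently served. This is just an illustration, not any particular filesystem's actual API; the file layout and function names are made up:

```python
# Sketch of end-to-end checksumming: hash at storage time, store the
# digest next to the data, verify on retrieval. Illustrative only.
import hashlib

def put(path: str, data: bytes) -> None:
    """Write data and record its checksum at storage time."""
    digest = hashlib.sha256(data).hexdigest()
    with open(path, "wb") as f:
        f.write(data)
    with open(path + ".sha256", "w") as f:
        f.write(digest)

def get(path: str) -> bytes:
    """Read data back and verify it against the stored checksum."""
    with open(path, "rb") as f:
        data = f.read()
    with open(path + ".sha256") as f:
        stored = f.read()
    if hashlib.sha256(data).hexdigest() != stored:
        # A transit-only checksum would never catch this case: the
        # server's copy is corrupt at rest but transfers "correctly".
        raise IOError(f"checksum mismatch for {path}: corrupt at rest")
    return data
```

The point is that checksumming only the wire transfer verifies none of this - the corrupt replica arrives intact and passes.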
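To put numbers on the overhead difference, here is the 5TB example worked out - 2*n replication versus a raid5-style n+1 parity layout. The 4+1 stripe width is an arbitrary assumption just to show the arithmetic:

```python
# Storage overhead for 5TB of data under the two redundancy models
# mentioned above. The 4+1 stripe geometry is an assumed example.
data_tb = 5.0

# 2*n model (raid1-style replication): every byte stored twice.
mirror_raw = data_tb * 2                  # 10.0 TB of raw drives

# n+1 model (raid5-style, assumed 4 data + 1 parity per stripe).
n_data, n_parity = 4, 1
parity_raw = data_tb * (n_data + n_parity) / n_data   # 6.25 TB

print(mirror_raw, parity_raw)
```

Either way you survive one failure, but the parity layout needs 6.25TB of drives instead of 10TB for the same 5TB of data.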