On 2015-09-17 14:35, Chris Murphy wrote:
On Thu, Sep 17, 2015 at 11:56 AM, Gert Menke <g...@menke.ac> wrote:
Hi,

thank you for your answers!

So it seems there are several suboptimal alternatives here...

MD+LVM is very close to what I want, but md has no way to cope with silent
data corruption. So if I wanted to use a guest filesystem that has no
checksums either, I'd be out of luck.

You can use Btrfs in the guest to get at least notification of silent
data corruption. If you want recovery as well, that's a bit more
challenging. The way this was done before ZFS and Btrfs is T10 DIF/PI
(Data Integrity Field / Protection Information): the drive already keeps
internal checksums, but DIF adds further checksums that can be verified
through the entire storage stack, not just inside the drive hardware.
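
A minimal sketch of what the Btrfs-in-the-guest approach could look like
(the device name and mount point are only placeholders):

  # single data copy, duplicated metadata: corruption is detected,
  # but data is not self-healed
  mkfs.btrfs -d single -m dup /dev/vda2
  mount /dev/vda2 /mnt
  # a periodic scrub walks all checksums and reports any mismatches
  btrfs scrub start /mnt
  btrfs scrub status /mnt

Detection alone is still useful: you at least know something went bad and
which backup to restore from.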

Another way is to put a conventional fs image on e.g. GlusterFS with
checksumming enabled (and at least a distributed+replicated volume layout).
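
Roughly like this, assuming four hosts and made-up brick paths (the
bitrot detection feature needs a reasonably recent GlusterFS release):

  # four bricks with replica 2 gives a distributed+replicated volume
  gluster volume create gv0 replica 2 \
      host1:/bricks/b1 host2:/bricks/b1 host3:/bricks/b1 host4:/bricks/b1
  gluster volume start gv0
  # optional: background checksumming/scrubbing of the bricks
  gluster volume bitrot gv0 enable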

If you do this directly on Btrfs, maybe you can mitigate some of the
fragmentation issues with bcache or dm-cache; and for persistent
snapshotting, use qcow2 to do it instead of Btrfs. You'd use Btrfs
snapshots only to create a subvolume for doing backups of the images,
and then get rid of the Btrfs snapshot.
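
For example (the paths are made up, and this assumes the image directory
is itself a Btrfs subvolume):

  # persistent guest snapshot lives in the qcow2 file, not in Btrfs
  # (take it while the guest is shut off, or go through libvirt for a
  # live guest)
  qemu-img snapshot -c pre-upgrade /srv/vm/guest.qcow2
  # short-lived read-only Btrfs snapshot just for the backup run
  btrfs subvolume snapshot -r /srv/vm /srv/vm-backup
  rsync -a /srv/vm-backup/ /backup/vm-images/
  btrfs subvolume delete /srv/vm-backup

That keeps long-lived Btrfs snapshots of the image files out of the
picture, which is the point of doing the persistent snapshots in qcow2
instead.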


The other option (which for some reason I almost never see anyone suggest) is to expose two disks to the guest (ideally stored on different filesystems) and do BTRFS raid1 on top of that. In general, this is what I do when I have data integrity requirements in the guest, except that I use LVM for the storage back-end instead of a filesystem. On the other hand, most of my VMs are trivial for me to recreate, so I don't often need this and just use DM-RAID via LVM.
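
A minimal sketch of that setup with an LVM back-end (the volume group
and LV names are made up):

  # on the host: one LV per underlying physical disk
  lvcreate -L 40G -n guest1-a vg_disk1
  lvcreate -L 40G -n guest1-b vg_disk2
  # attach both LVs to the guest as separate virtual disks,
  # then inside the guest:
  mkfs.btrfs -d raid1 -m raid1 /dev/vdb /dev/vdc
  mount /dev/vdb /mnt
  # with two copies, a scrub can repair corruption, not just report it
  btrfs scrub start /mnt

The point is to keep the two virtual disks on different underlying
devices, as mentioned above, so the guest's raid1 copies don't share a
single failure domain.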
