On Tue, 3 Mar 2020 at 21:48, Stefan Priebe - Profihost AG <
s.pri...@profihost.ag> wrote:

> > You can use a full local export, piped to some hash program (this is
> > what Backurne¹ does): rbd export <image> - | xxhsum
> > Then, check the hash consistency with the original.
>
> Thanks for the suggestion, but this still needs to run an rbd export on
> the source and target snapshot every time to compare hashes? That is
> slow when you're talking about hundreds of terabytes of data, isn't it?
>

Sorry for not adding anything to solve your issue, but wouldn't *any*
method of validating that 100 TB is identical to some other 100 TB
always be slow?

It seems slightly illogical to me to mistrust that copy A is 100%
identical to copy B after some kind of sync/replication/snap/rebuild,
but then hope for a method that doesn't involve reading hundreds of TB
to make sure they actually are.
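
For what it's worth, the full-read check suggested above can at least be
scripted so that nothing is staged on disk and both sides are compared in
one go. A minimal sketch, assuming both clusters are reachable from one
host via separate --cluster configs; the cluster names, pool, image, and
snapshot below are placeholders, not anything from your setup:

    #!/bin/sh
    # Stream a full export of the same snapshot from each cluster through
    # xxhsum; any differing byte changes the resulting hash.
    # "source"/"backup" cluster names and rbd/vm-100@daily are placeholders.
    src=$(rbd --cluster source export rbd/vm-100@daily - | xxhsum | awk '{print $1}')
    dst=$(rbd --cluster backup export rbd/vm-100@daily - | xxhsum | awk '{print $1}')
    if [ "$src" = "$dst" ]; then
        echo "snapshots match ($src)"
    else
        echo "MISMATCH: source=$src backup=$dst" >&2
        exit 1
    fi

This still reads every byte on both ends, of course; that cost seems
unavoidable for a byte-level guarantee.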

-- 
May the most significant bit of your life be positive.