Hi all, 

I'm not sure I really understand how CephFS snapshot mirroring is supposed 
to work. 

I have two Ceph clusters (Pacific 16.2.4), and snapshot mirroring is set up for 
a single directory, /ec42/test, in our CephFS filesystem (this is for test 
purposes, but we plan to use it with about 50-60 directories and 1.5 PB). 
I have also created an erasure-coded pool and configured the layout of my 
/ec42 directory to use the EC pool (on both clusters). 
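For reference, the setup described above might look roughly like this; the mount point, pool name, filesystem name, and peer specification below are placeholders, not the exact ones used here:

```shell
# Point the /ec42 directory at the EC data pool via a directory layout
# (run on a mounted client, on both clusters; "ec42_data" is an example name):
setfattr -n ceph.dir.layout.pool -v ec42_data /mnt/cephfs/ec42

# Enable snapshot mirroring and add the mirrored directory on the source
# cluster ("cephfs" and the peer spec are assumptions for illustration):
ceph mgr module enable mirroring
ceph fs snapshot mirror enable cephfs
ceph fs snapshot mirror peer_add cephfs client.mirror_remote@remote cephfs
ceph fs snapshot mirror add cephfs /ec42/test
```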

I used the following steps to test the snapshot mirroring: 
- copy about 70 GB into /ec42/test on the source cluster 
- create a snapshot (mkdir /ec42/test/.snap/snap1) 
- remove 5 text files from /ec42/test (total size about 5-10 KB) 
- create another snapshot (mkdir /ec42/test/.snap/snap2) 
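The test sequence above, as shell commands on a client of the source cluster (the mount point, source data path, and file names are assumptions):

```shell
# Copy ~70 GB of test data into the mirrored directory:
cp -a /data/testset/. /mnt/cephfs/ec42/test/

# First snapshot (CephFS snapshots are created via the .snap directory):
mkdir /mnt/cephfs/ec42/test/.snap/snap1

# Remove a handful of small text files (~5-10 KB total):
rm /mnt/cephfs/ec42/test/file{1..5}.txt

# Second snapshot:
mkdir /mnt/cephfs/ec42/test/.snap/snap2
```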

What I see during cephfs-mirror execution: 
- after snap1 is created, 70 GB are transferred to the target cluster, then 
the snapshot (snap1) is created on the target cluster 
- after snap2 is created, the remote directory (on the target cluster) is 
emptied, then the 70 GB are transferred again and, finally, the second 
snapshot (snap2) is created 
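In case it helps diagnose this, I watched the mirror daemon through its admin socket; the socket path, filesystem id, and peer UUID below vary per deployment and are placeholders:

```shell
# List the commands the cephfs-mirror daemon exposes:
ceph --admin-daemon /var/run/ceph/cephfs-mirror.asok help

# Overall mirror status for the filesystem (format is <fs-name>@<fs-id>):
ceph --admin-daemon /var/run/ceph/cephfs-mirror.asok fs mirror status cephfs@1

# Per-peer sync status, including the snapshots being synchronized:
ceph --admin-daemon /var/run/ceph/cephfs-mirror.asok fs mirror peer status cephfs@1 <peer-uuid>
```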

I thought that only the diff between the snapshots would be transferred (or 
removed), but it seems that all the data in the source snapshot are pushed each 
time. Is this the design of the snapshot mirroring feature, or have I missed 
something? 
I wanted to use snapshot mirroring to back up our CephFS filesystem, but that 
will be impossible if we have to transfer 1.5 PB every day. 
Any other suggestion for backing up 1.5 PB of CephFS would also be very helpful... 

Arnaud 

_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io