Re: [ceph-users] Moving pools between clusters

2019-04-02 Thread Stefan Kooman
Quoting Burkhard Linke (burkhard.li...@computational.bio.uni-giessen.de):
> Hi,
> Images:
> 
> A straightforward approach would be to export all images with qemu-img from
> one cluster and upload them again on the second cluster. But this would
> break snapshots, protections, etc.

You can use rbd-mirror [1] (RBD mirroring requires the Ceph Jewel release or
later). You do need to be able to set the "journaling" and
"exclusive-lock" features on the rbd images (rbd feature enable
{pool-name}/{image-name} exclusive-lock journaling).
This preserves snapshots, etc. When everything is mirrored you can shut
down the VMs (all at once or one by one), promote the image(s) on the new
cluster, and have the VM(s) use the new cluster for their storage.
Note: you can also mirror a whole pool instead of mirroring on the image level.
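
Roughly, the image-mode setup and cut-over look like the sketch below. The
pool name "volumes", image name "vm-disk-1" and cluster names "old"/"new"
are placeholders; it assumes an rbd-mirror daemon runs against the new
(target) cluster and both clusters' conf/keyring files are reachable:

  # Enable mirroring on the pool (image mode) on both clusters.
  rbd --cluster old mirror pool enable volumes image
  rbd --cluster new mirror pool enable volumes image

  # Make the clusters peers of each other (client.admin as an example).
  rbd --cluster old mirror pool peer add volumes client.admin@new
  rbd --cluster new mirror pool peer add volumes client.admin@old

  # Per image: enable the required features and mirroring.
  rbd --cluster old feature enable volumes/vm-disk-1 exclusive-lock journaling
  rbd --cluster old mirror image enable volumes/vm-disk-1

  # Watch replication catch up.
  rbd --cluster new mirror pool status volumes --verbose

  # Cut-over: stop the VM, demote on the old cluster, promote on the new.
  rbd --cluster old mirror image demote volumes/vm-disk-1
  rbd --cluster new mirror image promote volumes/vm-disk-1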

Gr. Stefan

[1]: http://docs.ceph.com/docs/mimic/rbd/rbd-mirroring/

-- 
| BIT BV  http://www.bit.nl/  Kamer van Koophandel 09090351
| GPG: 0xD14839C6   +31 318 648 688 / i...@bit.nl
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Moving pools between clusters

2019-04-02 Thread Burkhard Linke

Hi,


we are about to set up a new Ceph cluster for our OpenStack cloud. Ceph 
is used for images, volumes and object storage. I'm unsure how to handle 
these cases and how to move the data correctly.



Object storage:

I consider this the easiest case, since RGW itself provides the 
necessary means to synchronize clusters. But the pools are rather small 
(~5 TB for buckets), so maybe there's an easier way? How does RGW refer 
to the various pools internally? By name? By ID? (An ID would be a 
problem, since a simple pool copy would end up with a new pool ID.)
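
One way to check this is to dump the zone configuration; the placement
entries it prints appear to be plain pool names. The zone/zonegroup name
"default" below is just the common default and may differ here:

  # Dump the zone configuration; the pool entries it prints
  # (e.g. "default.rgw.buckets.data") are names, not numeric IDs.
  radosgw-admin zone get --rgw-zone=default
  radosgw-admin zonegroup get --rgw-zonegroup=default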



Images:

A straightforward approach would be to export all images with qemu-img from
one cluster and upload them again on the second cluster. But this would
break snapshots, protections, etc.
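
For a single image, that plain copy would look roughly like the following
(pool/image names and conf paths are placeholders; it assumes both
clusters' conf/keyring files are reachable from one host):

  # Plain copy of one image between clusters; snapshots, clone
  # relationships and protection flags are NOT carried over.
  qemu-img convert -p -f raw -O raw \
      rbd:images/image-1:conf=/etc/ceph/old.conf:id=admin \
      rbd:images/image-1:conf=/etc/ceph/new.conf:id=admin

  # Equivalent with the rbd CLI, streaming via stdout/stdin:
  rbd --cluster old export images/image-1 - | rbd --cluster new import - images/image-1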



Volumes:

This is the most difficult case. The pool is the largest one affected 
(~60 TB), and many volumes are boot-from-volume instances acting as COW 
copies of an image. I would prefer not to flatten these volumes and thus 
generate a lot more data.
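
To get an idea of how much data flattening would actually duplicate, the
clone relationships can be listed up front (pool, volume and snapshot
names below are placeholders):

  # Which parent image/snapshot a volume depends on:
  rbd info volumes/volume-1234 | grep parent

  # All clones hanging off a given image snapshot:
  rbd children images/image-1@snap

  # Flattening (the step I'd like to avoid) detaches a clone at the
  # cost of copying the parent data into it:
  # rbd flatten volumes/volume-1234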



There are other pools we use outside of OpenStack, so adding the new 
hosts to the existing cluster, moving the data via CRUSH rules and 
splitting the cluster afterwards is not an option. Keeping all hosts in 
a single cluster and separating the pools logically within CRUSH is also 
undesirable for administrative reasons (but it will be the last resort if 
necessary).
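
For completeness, that last-resort CRUSH separation would look roughly
like this (bucket, host and rule names below are made up):

  # Put the new hosts under their own CRUSH root and add a rule for it.
  ceph osd crush add-bucket newroot root
  ceph osd crush move newhost1 root=newroot
  ceph osd crush rule create-replicated on-new-hosts newroot host

  # Pin a pool to the new hardware; Ceph migrates the data accordingly.
  ceph osd pool set volumes crush_rule on-new-hosts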



Any comments on this? How did you move individual pools to a new cluster 
in the past?



Regards,

Burkhard

