Hi *,

Quoting ceph-users-requ...@lists.ceph.com:
Hi *,

Facing the problem of reducing the number of PGs for a pool, I've found
various pieces of information and suggestions, but no "definitive guide"
to handling pool migration with Ceph 12.2.x. This seems to be a fairly
common problem when having to deal with "teen-age clusters", so
consolidated information would be a real help. I'm willing to start
writing things up, but don't want to duplicate information. So:
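
For context, this is the limitation that forces a migration in the
first place: on 12.2.x, pg_num can only ever be increased, never
decreased (a quick illustration; the pool name "rbd" is just a
placeholder):

   $ ceph osd pool get rbd pg_num
   pg_num: 128
   $ ceph osd pool set rbd pg_num 64   # refused: Luminous cannot decrease pg_num

So the only way to end up with fewer PGs is to move the data into a
newly created pool.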

Are there any documented "operational procedures" on how to migrate

- an RBD pool (with snapshots created by Openstack)

- a CephFS data pool

- a CephFS metadata pool

to a different pool, in order to be able to use pool settings that
cannot be changed on an existing pool?
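
For the CephFS data pool case, a partial approach that should work with
existing tools (a rough sketch, untested here; "cephfs", "new_data" and
the mount point are placeholders) is to attach an additional data pool
and rewrite files into it via file layouts:

   # create the new pool with the desired settings and attach it to the FS
   $ ceph osd pool create new_data 64
   $ ceph fs add_data_pool cephfs new_data
   # newly created files below this directory now go to the new pool
   $ setfattr -n ceph.dir.layout.pool -v new_data /mnt/cephfs
   # existing files keep their old layout and have to be rewritten
   # (e.g. copy to a new name, then rename over the original)

Note that this does not cover the metadata pool, and the original
default data pool can never be fully retired this way, since CephFS
keeps backtrace information there.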

---

RBD pools: From what I've read, RBD snapshots are "broken" after using
"rados cppool" to move the content of an "RBD pool" to a new pool.

After reading up on "rbd-mirror" and taking a glimpse at its code, it seems that the problem of preserving clones, snapshots and their relationships has already been solved for cluster-to-cluster migration.

Is this really correct? If so, it might be possible to extend the code in a fashion that allows a one-shot, intra-cluster pool-to-pool migration as a spin-off of rbd-mirror.

I was thinking along the following lines:

- run rbd-mirror in a stand-alone fashion, specifying just a source and a destination pool

- leave it to the cluster admin to take RBD "offline", so that the pool content does not change during the copy (no RBD journaling involved)

- check that the destination pool is empty. In a first version, cumulative migrations (merging multiple source pools with distinct image names) would only complicate things ;)

- sync all content from source to destination pool, in a one-shot fashion

- done (a sketch of a possible invocation follows below)
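
To make the idea more concrete, the invocation could look something
like this (purely hypothetical; neither this mode nor any of these
options exist in rbd-mirror today):

   # hypothetical one-shot, intra-cluster pool-to-pool sync
   $ rbd-mirror --one-shot \
         --source-pool images \
         --destination-pool images_new
   # the tool would enforce the preconditions from the list above:
   #  - images_new is empty
   #  - no client writes to "images" while the copy runs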

Is there anyone out there who can judge the chances of this approach better than I can? I'd be willing to spend development time on this, but starting from scratch would be rather hard, so pointers to where to look within the rbd-mirror code would be more than welcome.

Regards,
Jens
