On 12/11/2018 10:39 AM, Lionel Bouton wrote:
On 11/12/2018 at 15:51, Konstantin Shalygin wrote:

I am currently planning the migration of a large VM (MS Exchange, 300 mailboxes
and a 900 GB DB) from qcow2 on ext4 (RAID1) to an all-flash Ceph Luminous
cluster (which already holds lots of images).
The server has access to both the local and the cluster storage; I only need
to live-migrate the storage, not the machine.

I have never used live migration, as it can cause more issues, and the
VMs that have already been migrated had planned downtime.
Taking the VM offline and converting/importing it with qemu-img would take
some hours, but I would like to keep serving clients in the meantime, even
if it is slower.
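(For reference, the offline path described above would look roughly like the following; the qcow2 path, pool name and image name are placeholders, not values from this thread:

    # Shut the VM down first, then convert the qcow2 image directly into an RBD image.
    qemu-img convert -p -O raw /var/lib/libvirt/images/exchange.qcow2 \
        rbd:rbd_pool/exchange-disk
    # Sanity-check the resulting image before pointing the VM definition at it.
    rbd info rbd_pool/exchange-disk

This is the straightforward but offline route; the suggestions below aim to avoid that downtime.)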
I believe the OP is trying to use the storage migration feature of QEMU. I've never tried it and I wouldn't recommend it: it is probably not very well tested, and there is a large window for failure.

One tactic that can be used, assuming the OP is using LVM inside the VM for storage, is to add a Ceph volume to the VM (which probably needs a reboot), add the corresponding virtual disk to the VM's volume group, and then migrate all data from the logical volume(s) to the new disk. LVM uses mirroring internally during the transfer, so you get robustness by using it. It can be slow (especially with old kernels), but at least it is safe. I did a DRBD-to-Ceph migration with this process five years ago. When all logical volumes have been moved to the new disk, you can remove the old disk from the volume group.
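Concretely, that sequence run inside the guest would look something like the sketch below; the device names (/dev/vda2 for the old PV, /dev/vdb for the new Ceph-backed disk) and the volume group name (vg0) are placeholders:

    # The new Ceph-backed virtual disk appears inside the guest, e.g. as /dev/vdb.
    pvcreate /dev/vdb          # initialise it as an LVM physical volume
    vgextend vg0 /dev/vdb      # add it to the existing volume group
    pvmove /dev/vda2 /dev/vdb  # move all extents off the old PV (data is mirrored while copying)
    vgreduce vg0 /dev/vda2     # drop the old PV from the volume group
    pvremove /dev/vda2         # clear its LVM label; the old disk can then be detached

pvmove can also be interrupted and restarted, which is part of what makes this route comparatively safe.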

Assuming everything is on LVM, including the root filesystem, only the boot partition will have to be moved outside of LVM.
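For that last piece, a minimal sketch, assuming the old boot partition is /dev/vda1 and the new disk has a matching /dev/vdb1 partition (both names are placeholders, and the bootloader step depends on the distro and bootloader in use):

    # Copy the raw boot partition onto the matching partition of the new disk.
    dd if=/dev/vda1 of=/dev/vdb1 bs=4M conv=fsync status=progress
    # Make the new disk bootable on its own (GRUB example).
    grub-install /dev/vdb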

Since the OP mentioned MS Exchange, I assume the VM is running Windows. You can do the same LVM-like trick in Windows Server via Disk Management, though: add the new Ceph RBD disk to the existing data volume as a mirror, wait for it to sync, then break the mirror and remove the original disk.
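As a rough sketch, the same flow driven from diskpart inside the guest might look like this; the disk and volume numbers are placeholders, and both disks have to be dynamic disks for mirroring to be available:

    REM Elevated prompt inside the Windows guest; numbers are examples only.
    diskpart
    list disk
    select disk 1        REM the newly attached Ceph RBD disk
    convert dynamic      REM mirrored volumes require dynamic disks
    select volume 2      REM the existing Exchange data volume
    add disk=1           REM start mirroring the volume onto disk 1
    REM Wait for resynchronisation to finish, then break the mirror
    REM (break command or Disk Management) and remove the original disk.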

--
Graham Allan
Minnesota Supercomputing Institute - g...@umn.edu
