Many thanks to Nicolas who saved my life!

When the export of the disks (base + snapshots) had finished, I managed to boot the VM under libvirt/kvm with the top disk snapshot as the main disk. I then believed that reimporting the VM was the last thing left to do, but the integrated virt-v2v does not support importing VMs with external snapshots, so when the import process finished, I couldn't boot the VM.
I had to merge the snapshots with qemu tools:

qemu-img rebase -b base.raw snap2.qcow2
qemu-img commit snap2.qcow2
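
To double-check the result, the backing chain can be inspected before and after the merge (a minimal sketch, reusing the file names above):

# before: snap2.qcow2 should point back through any intermediate snapshot to base.raw
qemu-img info --backing-chain snap2.qcow2
# after rebase + commit: the merged data now lives in base.raw
qemu-img info base.raw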

And then I attached the base image of each disk to the libvirt VM before reimporting it, choosing "preallocated" for the raw disks.
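
For the attach step, something like the following should do it (a sketch: the image path and the vdb target are assumptions, "hortensia" is the VM name from the query below):

# attach the merged raw base image to the libvirt domain
virsh attach-disk hortensia /var/lib/libvirt/images/base.raw vdb --driver qemu --subdriver raw --persistent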

This is a manual method, but it was first necessary to find the disk IDs in LVM thanks to ovirt-shell: list disks --query "name=hortensia*" --show-all.
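
For example (a sketch; the engine URL and user here are placeholders):

# connect to the engine, then query the VM's disks by name
ovirt-shell -c -l "https://engine.example.com/api" -u admin@internal
[oVirt shell (connected)]# list disks --query "name=hortensia*" --show-all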

Once I found the volume group ID corresponding to the VM, I had to activate all the logical volumes with lvchange -ay /dev/... and then get the qcow2 information with qemu-img info --backing-chain.
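
Spelled out (a sketch; the VG name is the storage domain UUID and the LV name is the volumeID, both taken from the error message quoted below):

# activate every LV of the storage domain's VG, then walk the chain
lvchange -ay 961ea94a-aced-4dd0-a9f0-266ce1810177
qemu-img info --backing-chain /dev/961ea94a-aced-4dd0-a9f0-266ce1810177/a0b6d5cb-db1e-4c25-aaaf-1bbee142c60b
# note: the qcow2 backing references may be stored as relative paths, so the
# command may need to be run from within /dev/<vg>/ for the chain to resolve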

*In this specific disaster, is there something that could have been done within oVirt itself instead of exporting/reimporting, given that the VM disks on the LUN are intact and the root cause is that the references to some disks are broken in the database?*


On 06/12/2017 at 11:30, Nicolas Ecarnot wrote:
On 06/12/2017 at 11:21, Nathanaël Blanchet wrote:
Hi all,

I'm about to lose one very important VM. I shut down this VM for maintenance and then moved its four disks to a newly created LUN. This VM has 2 snapshots.

After a successful move, the VM refuses to start with this message:

Bad volume specification {u'index': 0, u'domainID': u'961ea94a-aced-4dd0-a9f0-266ce1810177', 'reqsize': '0', u'format': u'cow', u'bootOrder': u'1', u'discard': False, u'volumeID': u'a0b6d5cb-db1e-4c25-aaaf-1bbee142c60b', 'apparentsize': '2147483648', u'imageID': u'4a95614e-bf1d-407c-aa72-2df414abcb7a', u'specParams': {}, u'readonly': u'false', u'iface': u'virtio', u'optional': u'false', u'deviceId': u'4a95614e-bf1d-407c-aa72-2df414abcb7a', 'truesize': '2147483648', u'poolID': u'48ca3019-9dbf-4ef3-98e9-08105d396350', u'device': u'disk', u'shared': u'false', u'propagateErrors': u'off', u'type': u'disk'}.

I tried to merge the snapshots, export, clone from snapshot, copy the disks, and deactivate the disks, and every action fails as soon as it involves a disk.

I began to dd the LV group to build a new VM intended for a standalone libvirt/kvm; the VM more or less boots, but it is an outdated version from before the first snapshot. "lvs | grep 961ea94a" shows a lot of LVs that are supposed to be the disk snapshots. Which of them must I choose to get the VM as it was just before shutdown? I'm not used to dealing with snapshots in virsh/libvirt, so some help would be much appreciated.
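
The dd step I used looks like this (a sketch; <lv_name> and the output path are placeholders):

# copy a candidate volume out of the storage domain's VG
dd if=/dev/961ea94a-aced-4dd0-a9f0-266ce1810177/<lv_name> of=/var/tmp/disk1.img bs=1M status=progress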

Is there some lesser-known command to recover this VM in oVirt?

Thank you in advance.





Besides oVirt-specific answers, did you try to get information about the snapshot tree with qemu-img info --backing-chain on the adequate /dev/... logical volume? As you know how to dd from LVs, you could extract every needed snapshot file and rebuild your VM outside of oVirt.
Then take your time to re-import it later, safely.
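
To pick the right LV, the tags vdsm puts on each volume LV may help (a sketch; treat the tag layout as an assumption to verify on your setup):

# list the volume LVs with their tags: IU_ is the disk image the volume
# belongs to, PU_ is its parent volume in the snapshot chain
lvs --noheadings -o lv_name,lv_tags 961ea94a-aced-4dd0-a9f0-266ce1810177
# the base volume carries PU_00000000-0000-0000-0000-000000000000; the
# volume that no other LV names as its PU_ parent is the newest snapshot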


--
Nathanaël Blanchet

Supervision réseau
Pôle Infrastructures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5       
Tél. 33 (0)4 67 54 84 55
Fax  33 (0)4 67 54 84 14
blanc...@abes.fr

