On Fri, Jul 31, 2020 at 12:30 PM Strahil Nikolov <hunter86...@yahoo.com>
wrote:

> Theoretically, all the data for the snapshot is both in the Engine's DB
> and on storage (OVF file).
>
> I was afraid to migrate from 4.3  to  4.4  ,  as I was planning  to  wipe
> the engine  and  just import the VMs..., but  I was  not sure  about the
> snapshots.
>
I did not check the actual backing chain of the VM disk image. I reset my
test env and will need to repeat some time to check it. In case there is
still a chain of images referring to the child image then it could be that
we need engine DB informed about this, thus wiping out engine seems not a
viable option. There should be some engine DB import type that could
restore those images and make them visible (again, if they really exist
under the hood which is still to be checked).

>
> Most probably this is a bug.
>
> @Sandro,
>
> can you assist with this one  ?
>
> Best Regards,
> Strahil Nikolov
>
> На 31 юли 2020 г. 10:01:17 GMT+03:00, Alex K <rightkickt...@gmail.com>
> написа:
> >Has anyone been able to import a storage domain and still have access
> >to VM
> >snapshots or this might be a missing feature/bug that needs to be
> >reported?
> >Reading the redhat docs about the storage domain import it seems there
> >is
> >no mention of VM snapshots if they should be accessible following the
> >import.
> >Thanx
> >
> >On Thu, Jul 30, 2020 at 3:58 PM Alex K <rightkickt...@gmail.com> wrote:
> >
> >> Hi all,
> >>
> >> I have a dual node self hosted cluster v4.3 using gluster as storage
> >so as
> >> to test an actual scenario which will need to be followed at
> >production.
> >> The purpose is to rename the cluster FQDN to a new one, wiping out
> >any
> >> reference to the old previous FQDN. I was not successful in using the
> >> engine-rename tool or other means as there are leftovers from
> >previous FQDN
> >> that cause issues.
> >>
> >> The cluster has a data storage domain with one guest VM running on it
> >> which has one snapshot.
> >> I am testing a destructive scenario as below and I find out that when
> >> importing the storage domain to the newly configured cluster, while
> >the
> >> guest VM is imported fine, I do not see the guest VM disk snapshots.
> >>
> >> Steps that I follow for this scenario:
> >>
> >> *Initial status: *
> >> I have an ovirt cluster with two hosts named v0 and v1.
> >> The gluster storage domain is configured at a separate network where
> >the
> >> hosts are named gluster0 and gluster1.
> >> The cluster has an engine and data storage domain named "engine" and
> >"vms"
> >> respectively.
> >> The "vms" storage domain hosts one guest VM with one guest VM disk
> >> snapshot.
> >> All are configured with fqdn *localdomain.local*
> >>
> >> *# Steps to rename all cluster to new fqdn lab.local and import "vms"
> >> storage domain*
> >> 1. Set v1 ovirt host at maintenance then remove it from GUI.
> >> 2.  At v1 install fresh CentOS7 using the new FQDN lab.local
> >> 3.  at v0 set global maintenance and shutdown engine. Remove the
> >engine
> >> storage data. (complete wipe of any engine related data. What is
> >important
> >> is only VM guests and their snapshots).
> >> 4.  at v0, remove bricks belonging to "engine" and "vms" gluster
> >volumes
> >> of v1 and detach gluster peer v1.
> >>
> >> gluster volume remove-brick engine replica 1
> >> gluster1:/gluster/engine/brick force
> >> gluster volume remove-brick vms replica 1 gluster1:/gluster/vms/brick
> >force
> >> gluster peer detach gluster1
> >>
> >> 5.  On v1, prepare gluster service, reattach peer and add bricks from
> >v0.
> >> At this phase all data from vms gluster volume will be synced to the
> >new
> >> host. Verify with `gluster heal info vms`.
> >> from v0 server run:
> >>
> >> gluster peer probe gluster1
> >> gluster volume add-brick engine replica 2
> >gluster1:/gluster/engine/brick
> >> gluster volume add-brick vms replica 2 gluster1:/gluster/vms/brick
> >>
> >> At this state all gluster volume are up and in sync. We confirm "vms"
> >sync
> >> with
> >> gluster volume heal info vms
> >>
> >> 6.  At freshly installed v1 install engine using the same clean
> >gluster
> >> engine volume:
> >> hosted-engine --deploy --config-append=/root/storage.conf
> >> --config-append=answers.conf (use new FQDN!)
> >>
> >> 7.  Upon completion of engine deployment and after having ensured the
> >vms
> >> gluster volume is synced (step 5) remove bricks of v0 host (v0 now
> >should
> >> not be visible at ovirt GUI) and detach gluster peer v0.
> >> at v1 host run:
> >> gluster volume remove-brick engine replica 1
> >> gluster0:/gluster/engine/brick force
> >> gluster volume remove-brick vms replica 1 gluster0:/gluster/vms/brick
> >force
> >> gluster peer detach gluster0
> >>
> >> 8. Install fresh CentOS7 on v0 and prepare it with ovirt node
> >packages,
> >> networking and gluster.
> >> 9. At v0, attach gluster bricks from v1. Confirm sync with gluster
> >volume
> >> heal info.
> >> at v1 host:
> >> gluster peer probe gluster0
> >> gluster volume add-brick engine replica 2
> >gluster0:/gluster/engine/brick
> >> gluster volume add-brick vms replica 2 gluster0:/gluster/vms/brick
> >>
> >> 10. at engine, add entry for v0 host at /etc/hosts. At ovirt GUI, add
> >v0.
> >> /etc/hosts:
> >> 10.10.10.220 node0 v0.lab.local
> >> 10.10.10.221 node1 v1.lab.local
> >> 10.10.10.222 engine.lab.local engine
> >>
> >> 10.100.100.1 gluster0
> >> 10.100.100.2 gluster1
> >>
> >> 11. At ovirt GUI import vms gluster volume as vms storage domain.
> >> At this step I have to approve operation:
> >> [image: image.png]
> >>
> >>
> >> 12. At ovirt GUI, import VMs from vms storage domain.
> >> At this step the VM is found and imported from the imported storage
> >domain
> >> "vms", but the VM does not show the previously available disk
> >snapshot.
> >>
> >> The import of the storage domain should have retained the guest VM
> >> snapshot.
> >> How can this be troubleshooted? Do I have to keep some type of engine
> >DB
> >> backup so as to make the snapshots visible? If yes, is it possible to
> >> restore this backup to a fresh engine that has a new FQDN?
> >> Thanx very much for any advise and hint.
> >>
> >> Alex
> >>
> >>
>
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RDAE2UT23ZUK4K3LX4SIAN5I2F2NTJ3A/

Reply via email to