> On Sun, Aug 30, 2020 at 7:13 PM
> Using export domain is not a single click, but it is not that complicated.
> But this is good feedback anyway.
>
> I think the issue is gluster, not qemu-img.
>
> How did you try? transfer via the UI is completely different than
> transfer using the
On Tue, Sep 1, 2020 at 11:26 PM Nir Soffer wrote:
On Sun, Aug 30, 2020 at 7:13 PM wrote:
>
> Struggling with bugs and issues on OVA export/import (my clear favorite
> otherwise, especially when moving VMs between different types of
> hypervisors), I've tried pretty much everything else, too.
>
> Export domains are deprecated and require quite
Thanks for letting me know, I suspected that might be the case. I’ll make a
note to fix that in the playbook.
On Mon, Aug 31, 2020 at 3:57 AM Stefan Wolf wrote:
> I think I found the problem.
>
> It is case sensitive. For the export it is NOT case sensitive, but for the
> step "wait for export" it is. I've changed it and now it seems to be working.
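The fix described above can be sketched in a few lines. This is illustrative Python, not the playbook's actual code: the assumption is that the "wait for export" step looks for the export-completed event by testing whether the VM name appears in the event description, and a case-sensitive substring test misses `vmname` vs `VMName`.

```python
def export_finished(events, vm_name):
    """Return True if any event description mentions vm_name, ignoring case."""
    needle = vm_name.casefold()
    return any(needle in description.casefold() for description in events)

# The engine logs the name with different capitalization than the inventory uses:
events = ['Exporting VM VMName as an OVA to /home/backup/in_progress/VMName.ova on Host kvm360']
assert not any('vmname' in d for d in events)   # case-sensitive match fails
assert export_finished(events, 'vmname')        # case-insensitive match succeeds
```

Using `casefold()` rather than `lower()` is the safer default for caseless comparison, though for ASCII VM names the two behave identically.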
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email
Struggling with bugs and issues on OVA export/import (my clear favorite
otherwise, especially when moving VMs between different types of hypervisors),
I've tried pretty much everything else, too.
Export domains are deprecated and require quite a bit of manual handling.
Unfortunately the
Interesting, I’ve not hit that issue myself. I’d think it must somehow be
related to getting the event status. Is it happening to the same VMs every
time? Is there anything different about the VM names, or anything else that
would set them apart from the others that work?
On Sun, Aug 30, 2020 at 11:56
OK,
I've run the backup three times.
I still have two machines where it still fails on TASK [Wait for export].
I think the problem is not the timeout; in the oVirt engine the export has
already finished: "Exporting VM VMName as an OVA to
/home/backup/in_progress/VMName.ova on Host kvm360"
But
Yes, you are right,
I've already found that. But this was not really my problem. It was caused by
the HostedEngine: a long time ago I decreased its memory. It seems that this
was the problem. Now it seems to be working pretty well.
Also, if you look at the blog post linked on the GitHub page, it has info about
increasing the Ansible timeout on the oVirt engine machine. This will be
necessary when dealing with large VMs that take over 2 hours to export.
On Sun, Aug 30, 2020 at 8:52 AM Jayme wrote:
You should be able to fix it by increasing the timeout variable in main.yml. I
think the default is pretty low, around 600 seconds (10 minutes). I have mine
set to a few hours since I’m dealing with large VMs. I’d also increase the
poll interval so it’s not checking for completion every 10
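A minimal sketch of the timeout/poll-interval behavior described above. The names `timeout` and `poll_interval` mirror the variables the playbook is said to expose, but this is illustrative Python, not the playbook itself:

```python
import time

def wait_for(condition, timeout=600, poll_interval=10):
    """Poll condition() until it returns True or timeout seconds elapse."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True           # export finished within the window
        time.sleep(poll_interval) # back off between checks
    return False                  # gave up: looks like a failure to the play

# With the 600 s default, a VM whose export takes hours always "times out"
# here, even though the export itself keeps running on the engine side.
```

This is why a short timeout makes the task report failure while the engine log still shows the export completing later.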
Checking the timestamp (the difference between now and the file's mtime) of the
export file could also be an option to verify whether the export is still
ongoing, instead of using ovirt_event_info.
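The timestamp idea above could look roughly like this, assuming the export writes into a path such as the `/home/backup/in_progress/` directory mentioned earlier in the thread (the function name and the `stale_after` threshold are illustrative, not from the playbook):

```python
import os
import time

def export_still_running(path, stale_after=60):
    """True if the export file was modified within the last stale_after seconds."""
    try:
        age = time.time() - os.path.getmtime(path)
    except FileNotFoundError:
        return False  # export has not started, or the file was moved away
    return age < stale_after
```

One caveat: a file that stops growing could also mean a hung export rather than a finished one, so pairing this check with a generous overall timeout would be safer than relying on it alone.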
Hello,
>https://github.com/silverorange/ovirt_ansible_backup
I am also still using 4.3.
In my opinion this is by far the best and easiest solution for disaster
recovery. No need to install an appliance, and if there is a need to recover,
you can import the OVA into every hypervisor - no
Probably the easiest way is to export the VM as OVA. The OVA format is a
single file which includes the entire VM image along with the config. You
can import it back into oVirt easily as well. You can do this from the GUI
on a running VM and export to OVA without bringing the VM down. The export