I am working on integrating a backup solution for our oVirt environment and
am having issues with the time it takes to back up the VMs. The backup solution
simply takes a snapshot, makes a clone, and backs the clone up to a
backup server.
A 100 GB VM takes 52 minutes to back up.
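For context, that works out to roughly 33 MiB/s of sustained throughput, which is a quick sanity check against what the storage and network should be able to deliver (back-of-the-envelope only; the 100 GB figure is taken at face value as GiB):

```shell
# Rough throughput for a 100 GiB backup completing in 52 minutes
awk 'BEGIN { printf "%.1f MiB/s\n", 100 * 1024 / (52 * 60) }'
# -> 32.8 MiB/s
```

If the links and disks can do substantially better than that, the bottleneck is likely the clone step rather than the transfer to the backup server.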
Figured out that oVirt doesn't let virsh edits to a VM's XML persist
across reboots. I needed to edit the SMBIOS settings in the XML
of a VM. Once the VM is off, the XML is gone; VDSM apparently rebuilds the
XML on each boot. To get this working, this has to be done with
I am trying to get Cisco ISE installed into oVirt. I was told I needed to do a
virsh edit on the VM and change a line in the SMBIOS settings. That is all
simple enough, but after a reboot or shutdown/restart the settings revert to
what they were before. A dump of the XML in fact shows the original
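The usual way to make this kind of change survive restarts is a VDSM hook: an executable dropped into /usr/libexec/vdsm/hooks/before_vm_start/ that edits the domain XML VDSM is about to hand to libvirt (the path to that XML is passed to the hook in the _hook_domxml environment variable). The snippet below only simulates the edit such a hook would make on a throwaway file — the XML layout and product string are illustrative assumptions, not the exact values ISE needs:

```shell
# Simulate the edit a before_vm_start VDSM hook would apply. In a real hook,
# $_hook_domxml points at the domain XML file and VDSM re-reads it after the
# hook exits, so the change is reapplied on every VM start (unlike a one-off
# `virsh edit`, which VDSM overwrites).
domxml=$(mktemp)
cat > "$domxml" <<'EOF'
<domain>
  <sysinfo type='smbios'>
    <system>
      <entry name='product'>oVirt</entry>
    </system>
  </sysinfo>
</domain>
EOF
# The hook body would be essentially this one sed, run against $_hook_domxml:
sed -i "s|<entry name='product'>.*</entry>|<entry name='product'>Custom Product</entry>|" "$domxml"
grep "entry name='product'" "$domxml"
```

Hooks need to be executable and owned by root; VDSM runs everything in that directory in lexical order before each VM start.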
I found a task running in the db that was mine and was apparently just
spinning. I deleted it and the zombie process is gone.
engine=# select * from job order by start_time desc;
-[ RECORD 1 ]-+-
job_id| 7c6b899e-ada2-456e-9562-
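For anyone hitting the same thing, the cleanup was along these lines — a sketch only, with a placeholder for the job_id. Back up the engine database first; note that oVirt also ships supported cleanup scripts (taskcleaner.sh, unlock_entity.sh) under /usr/share/ovirt-engine/setup/dbutils/, which are preferable to hand-editing rows:

```shell
# Sketch: delete the stuck job row found by the SELECT above.
# <job_id> is a placeholder for the UUID from that query.
sudo -u postgres psql engine -c \
  "DELETE FROM job WHERE job_id = '<job_id>';"
```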
I tried adding a 400 GB disk to a VM. I could see the disk being built out on
the storage (mounted via NFS), but clearly something is stuck. A few hours
later nothing has finished. Nothing seems to be logging to engine.log, and
I have restarted vdsmd on all the hosts and have restarted
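Before restarting services again, it may be worth looking at what VDSM and the engine each think is still running — hedged, since exact command availability varies by version (this assumes 4.2-era vdsm-client):

```shell
# On each host: ask VDSM which tasks it still considers active.
vdsm-client Host getAllTasksInfo

# On the engine: the shipped cleanup tool for stale async tasks
# (run it with no arguments first to see its usage/help).
/usr/share/ovirt-engine/setup/dbutils/taskcleaner.sh
```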
Any help would be appreciated. I have since rebooted the 3rd gluster node which
is the arbiter. This doesn't seem to want to heal.
gluster volume heal bgl-vms-gfs info |grep Number
Number of entries: 68
Number of entries: 0
Number of entries: 68
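With the arbiter rebooted and entries not draining on their own, the standard next step is to trigger a full self-heal crawl and then watch the pending counts fall (stock gluster commands; `info summary` requires a reasonably recent gluster release):

```shell
# Kick off a full self-heal crawl on the volume, then watch pending
# entries per brick until they reach zero.
gluster volume heal bgl-vms-gfs full
gluster volume heal bgl-vms-gfs info summary
gluster volume heal bgl-vms-gfs statistics heal-count
```

If the counts stay flat, check that the self-heal daemon is running on all three nodes (`gluster volume status bgl-vms-gfs`) before digging into individual shards.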
Just a quick note: the volume in question is actually called bgl-vms-gfs. The
original message is still valid.
[root@bgl-vms-gfs03 bricks]# gluster volume heal bgl-vms-gfs info
Brick 10.8.255.1:/gluster/bgl-vms-gfs01/brick
/.shard/bd0bf192-e0e1-4b72-85cb-fa3497c555be.989
/.shard/bd0bf192-e0e1-4b7
The switches above our environment had some vPC issues and the port channels
went offline. The ports that had issues belonged to two of the gfs nodes in our
environment. We have three storage nodes total, with the third being the
arbiter. I wound up rebooting the first two nodes and everything came back ha
Hey all, quick question regarding new VM instances in oVirt. Is there any way
to grow the thin disk associated with a template on instantiation? This would
allow facilities like cloud-init resizefs to actually be useful. Been searching
around a bit and it seems like this is not a thing in oVirt.
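As far as I can tell there is no grow-on-instantiation option, but one workaround is to extend the disk right after the VM is created via the REST API, and let cloud-init's growpart/resizefs claim the space on first boot. A sketch against the v4 API — the hostname, credentials, and disk id are placeholders, and the exact update semantics should be checked against your engine version:

```shell
# Sketch: extend a thin-provisioned disk to 40 GiB (42949672960 bytes)
# after the VM has been created from the template. Placeholders throughout.
curl -s -k -u 'admin@internal:PASSWORD' \
  -H 'Content-Type: application/xml' -X PUT \
  -d '<disk><provisioned_size>42949672960</provisioned_size></disk>' \
  'https://engine.example.com/ovirt-engine/api/disks/<disk-id>'
```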
Thanks for responding.
Looks like we are using the first include option. We have lots of AD servers
around the world and latency never seems to be an issue. That option seemed
like it would be fine for us, but I switched to the recursive one and that
sped things up drastically.
Thank you very
This is still hanging us up. I have dug all around and can't seem to figure out
how to lay in these environment tweaks to speed things up. I see that 4.2.5
just surfaced, but I didn't see anything in the release notes about AAA.
Thanks in advance to anyone who can help or point me in the right direction.
I meant to include that we are running 4.2.4.5-1.el7.
Thanks!
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct:
https://www.ov