Re: [ovirt-users] Ovirt-3.4.1: How to: Upgrading Hosted Engine Cluster
----- Original Message -----
From: Daniel Helgenberger daniel.helgenber...@m-box.de
To: users@ovirt.org
Sent: Friday, May 9, 2014 6:45:36 PM
Subject: [ovirt-users] Ovirt-3.4.1: How to: Upgrading Hosted Engine Cluster

Hello,

failing to find a procedure for how to actually upgrade a HA cluster, I did the following, which turned out to work pretty well. I am somewhat new to oVirt and was amazed how well it actually went; I did not need to shut down a single VM (well, one because of memory usage; many of my running VMs have fancy stuff like iSCSI and FC LUNs via a Quantum StorNext HA cluster):

1. Set the cluster to global maintenance.
2. Log in to the oVirt engine and do the upgrade according to the release notes.
3. After the upgrade is finished and the engine is running, set the first node to local maintenance.
4. Log in to the first node and yum update (with the removal of ovirt-release as mentioned in the release notes).* I rebooted the node because of the kernel update.
5. Return to oVirt and reinstall the node from the GUI; it will be set to operational automatically.**
6. Repeat steps 3-5 for the rest of the nodes.
7. Remove global maintenance.
8. Update the last node.***

* I first tried to do this with re-install from the GUI. This failed, so I used the yum update method to update all relevant services.
** I do not know if this was necessary. I did it because hosted-engine --deploy does the same thing when adding a host.
*** I found this to be necessary because I had all my nodes in local maintenance and could not migrate the hosted engine off the last node any more. The host activation in oVirt did not remove the local maintenance set prior to the update (which it should, IMHO). It might be desirable to have a hosted-engine command option to remove local maintenance for that reason.

--
Daniel Helgenberger
m box bewegtbild GmbH
P: +49/30/2408781-22
F: +49/30/2408781-10
ACKERSTR. 19
D-10115 BERLIN
www.m-box.de www.monkeymen.tv
Geschäftsführer: Martin Retschitzegger / Michaela Göllner
Handelsregister: Amtsgericht Charlottenburg / HRB 112767

Hi Martin,

Thanks for sharing! A few notes on your nice 8 steps ;)

- There are 2 maintenance modes, covering host maintenance (local) and VM maintenance (global). Global maintenance disarms all HA hosts in the cluster, so use it with caution, as there is no failover in this mode.
- Initially these were available only on the command line [1]. Since 3.4.0 this has been integrated into the UI, so all you need to do is move a host to maintenance in order to achieve local maintenance, and activate it to remove the maintenance mode. For global maintenance, right-click the engine VM and you will see enable/disable ha-maintenance for the global mode.
- No need to re-install nodes. All you need to do is activate them.
- Basically, a standard procedure should include:
  * Move a host to maintenance, log in and update the host, then activate the host.
  * Follow the above for all other HA hosts.
  * Set the engine VM to ha maintenance (global), log in to the VM and upgrade it, then unset the VM's ha-maintenance.

Appreciate your feedback. Also, were you aware of [1] or did you look for info elsewhere? I'd like to know what we can do to improve the documentation.

Thanks again,
Doron

[1] http://www.ovirt.org/Hosted_Engine_Howto#Maintaining_the_setup

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
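The standard procedure Doron describes can also be driven from the command line, per the howto at [1]. The sketch below is a rough outline, not a verbatim recipe: it assumes the 3.4-era hosted-engine tool's --set-maintenance option, and the update/reboot steps are placeholders you would adapt to the release notes of your target version.

```shell
# Sketch of the standard upgrade procedure (assumptions: oVirt 3.4 hosted-engine
# CLI with --set-maintenance; repo/update details are placeholders).

# On each HA host in turn:
hosted-engine --set-maintenance --mode=local   # disarm the HA agent on this host only
yum -y update                                  # update vdsm, HA agent, kernel, etc.
# ... reboot here if a new kernel was installed ...
hosted-engine --set-maintenance --mode=none    # re-arm the HA agent

# Then, for the engine VM itself:
hosted-engine --set-maintenance --mode=global  # no HA monitoring cluster-wide; no failover!
# ... log in to the engine VM and run the engine upgrade there ...
hosted-engine --set-maintenance --mode=none    # restore HA monitoring
```

Note that global maintenance disables failover for the whole cluster, so keep that window as short as possible.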
Re: [ovirt-users] Ovirt-3.4.1: How to: Upgrading Hosted Engine Cluster
----- Original Message -----
From: Doron Fediuck dfedi...@redhat.com
To: Daniel Helgenberger daniel.helgenber...@m-box.de
Cc: users@ovirt.org
Sent: Saturday, May 10, 2014 9:47:29 AM
Subject: Re: [ovirt-users] Ovirt-3.4.1: How to: Upgrading Hosted Engine Cluster

[full quote of the two previous messages trimmed]

My apologies - s/Martin/Daniel.
[ovirt-users] Template deployment over NFS and hooks
I posted my original email to the devel list by mistake; my questions would be better answered by the users list.

Good morning,

I have two questions.

1. Is there a way to make hooks apply to all instances and templates, rather than setting them one at a time?
2. I believe I already have the answer to this, but I just want to be sure there isn't a better way. I have an oVirt node that basically uses NFS for everything (data and ISOs), and it is painfully slow to build an instance. I'm using a 100 Mbit switch, and I have a feeling that it needs to be upgraded. Software-wise, though, is there anything I can do to speed up template deployments to that node?

Thanks,
Richard Seguin
[ovirt-users] creating a new host
Hi,

When I create a new host, the installation fails. I receive this error:

Host host_centos65 installation failed. Command returned failure code 1 during SSH session 'r...@domain.com'.

In the log file /var/log/ovirt-engine/host-deploy/ovirt-20140510123905-domain.com-3fd2d93.log:

java.io.IOException: Command returned failure code 1 during SSH session

I have seen this error

Thank you. Regards.
[ovirt-users] gluster performance oVirt 3.4
Hi!

I created a 2-node setup with oVirt 3.4 and CentOS 6.5; for storage I created a 2-node replicated Gluster (3.5) FS on the same hosts as oVirt. The mount looks like this:

127.0.0.1:/gluster01 on /rhev/data-center/mnt/glusterSD/127.0.0.1:_gluster01 type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)

When I run a gluster test with dd, something like

dd if=/dev/zero bs=1M count=2 of=/rhev/data-center/mnt/glusterSD/127.0.0.1\:_gluster01/kaka

I'm getting ~110 MB/s, which is the 1 Gbps line speed of the Ethernet adapter. But within a VM created in oVirt, the speed is lower than 20 MB/s. Why is there such a huge difference? How can I improve the VMs' disk speed?
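One caveat worth checking before comparing numbers: a 2 MiB dd without a sync or direct flag largely measures the page cache, not the storage path. A sketch of a more comparable benchmark (paths here are examples; point them at the actual gluster mount and at a file inside the VM):

```shell
# Benchmark sketch: write enough data to exceed caches, and flush before
# dd reports, so short cached bursts don't inflate the throughput number.
TARGET=${TARGET:-/tmp/gluster-bench.bin}   # e.g. /rhev/data-center/mnt/glusterSD/127.0.0.1:_gluster01/bench
COUNT=${COUNT:-16}                         # MiB to write; use 1024+ for a real test

# conv=fsync forces a flush before dd prints its rate; on filesystems that
# support it, oflag=direct bypasses the page cache entirely.
dd if=/dev/zero of="$TARGET" bs=1M count="$COUNT" conv=fsync
rm -f "$TARGET"
```

Running the same command on the host mount and inside the VM gives a fairer host-vs-guest comparison than a small cached write.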
Re: [ovirt-users] Live Migration Fail
Hi Maurice,

I was looking at your engine and VDSM logs. It looks like the live storage migration was done on a host called Staurn, but the VDSM logs seem to be from the Beetlejuice host; can you check this, please?

Regards,
Maor

On 05/10/2014 03:33 AM, Maurice James wrote:
> Live disk migrations are still failing even after upgrade to 3.4.1 from 3.4.0. Is this still an open issue?
Re: [ovirt-users] Live Migration Fail
Beetlejuice is the SPM and the engine host. I ran the migration again and got the vdsm log from the destination host.

----- Original Message -----
From: Maor Lipchuk mlipc...@redhat.com
To: Maurice James mja...@media-node.com, users users@ovirt.org
Sent: Saturday, May 10, 2014 4:42:00 PM
Subject: Re: [ovirt-users] Live Migration Fail

[quote of the previous messages trimmed]
Re: [ovirt-users] Live Migration Fail
Seems like VDSM encountered a problem finding the drive:

Thread-221::ERROR::2014-05-10 17:36:57,134::vm::3928::vm.Vm::(snapshot) vmId=`7f341f92-134a-47e7-b7ed-e7df772806f3`::The base volume doesn't exist: {'device': 'disk', 'domainID': 'e0e65e47-52c8-41bd-8499-a3e025831215', 'volumeID': 'deae7162-1eb7-423e-9115-3e7de542c89c', 'imageID': '21484146-1a6c-4a31-896e-da1156888dfc'}

Can you please run the tree command on /rhev/data-center/? Also, can you please run ls -l under image 21484146-1a6c-4a31-896e-da1156888dfc in /rhev/data-center, to rule out any permission issues?

Does this fail only for a specific VM, or is it failing for other VMs as well?

Regards,
Maor

On 05/11/2014 12:43 AM, Maurice James wrote:
> VDSM logs from the source and destination are attached

[earlier quotes trimmed]
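The checks Maor asks for can be sketched roughly as below. This is an assumption-laden sketch: the domain and image IDs come from the error message above, and the glob stands in for whatever storage-domain mount path your setup actually uses under /rhev/data-center.

```shell
# Sketch of the requested diagnostics (IDs taken from the error above;
# the glob is a placeholder for your actual storage-domain path).
tree /rhev/data-center/

# List the volumes under the image directory from the error; missing files,
# or ownership other than vdsm:kvm with read/write access, would explain
# "The base volume doesn't exist".
ls -l /rhev/data-center/*/e0e65e47-52c8-41bd-8499-a3e025831215/images/21484146-1a6c-4a31-896e-da1156888dfc/
```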
Re: [ovirt-users] Live Migration Fail
This fails for all VM disks.

----- Original Message -----
From: Maor Lipchuk mlipc...@redhat.com
To: Maurice James mja...@media-node.com
Cc: users users@ovirt.org
Sent: Saturday, May 10, 2014 7:19:21 PM
Subject: Re: [ovirt-users] Live Migration Fail

[quote of the previous messages trimmed]
[ovirt-users] Delete snapshots
Is it possible to delete snapshots on running VMs?
Re: [ovirt-users] Delete snapshots
From what I see in the code of the remove snapshot command, the VM should be in the DOWN state for the snapshot to be removed (this is, of course, just one of the conditions).

----- Original Message -----
From: Maurice James mja...@media-node.com
To: users users@ovirt.org
Sent: Sunday, May 11, 2014 2:53:39 AM
Subject: [ovirt-users] Delete snapshots

> Is it possible to delete snapshots on running VMs?