Re: [ovirt-users] Host/storage OS upgrade: VM migrate?

2015-03-05 Thread Alan Murrell

Thanks for the reply, Darrell.

On 05/03/2015 9:26 AM, Darrell Budic wrote:

Might be safer to set up an export domain on an external drive and export your 
VMs to it, then you can import them to a clean new system after your upgrade. 
Way less to go wrong with this approach, so I’d probably recommend it.


Ah, I never thought of that.  I do have a 2TB NAS drive that supports 
NFS that I could set up as an export domain.
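
Before I add it in the web UI, I figure I should sanity-check that the share
mounts and is writable by vdsm:kvm (uid/gid 36:36), since that's what oVirt
expects. A rough Python sketch of the check I have in mind (the
nas:/export/ovirt path is just a placeholder for my NAS):

#!/usr/bin/env python
# Pre-flight check before adding an NFS share as an oVirt export domain.
# Assumption: the share should be owned by vdsm:kvm (uid/gid 36:36) and writable.
import os
import subprocess
import tempfile

SHARE = "nas:/export/ovirt"   # placeholder - replace with the real NAS export

mountpoint = tempfile.mkdtemp(prefix="export-check-")
subprocess.check_call(["mount", "-t", "nfs", SHARE, mountpoint])
try:
    st = os.stat(mountpoint)
    print("owner uid=%d gid=%d (oVirt wants 36:36)" % (st.st_uid, st.st_gid))
    probe = os.path.join(mountpoint, ".ovirt-write-test")
    open(probe, "w").close()   # raises IOError if the export is read-only
    os.unlink(probe)
    print("share is writable")
finally:
    subprocess.call(["umount", mountpoint])
    os.rmdir(mountpoint)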


Thanks!  I will give it a try!

-Alan



Re: [ovirt-users] Dell DRAC 8

2015-03-05 Thread Patrick Russell
Looks like it’s just the CMC; I can use power management on the individual sled 
DRACs using the drac5 fence agent with no problem.
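
In case it helps anyone else, this is roughly how I check a sled's DRAC from
one of the hosts before wiring it into power management. The IP and
credentials below are placeholders, and the -x (ssh) option is an assumption
from my setup, so check fence_drac5 -h on your version:

#!/usr/bin/env python
# Query a sled iDRAC with the drac5 fence agent. "-o status" only reads the
# power state, it does not fence anything. IP/credentials are placeholders.
import subprocess

DRAC_IP = "10.0.0.50"     # placeholder
DRAC_USER = "root"        # placeholder
DRAC_PASS = "changeme"    # placeholder

rc = subprocess.call(["fence_drac5",
                      "-a", DRAC_IP,
                      "-l", DRAC_USER,
                      "-p", DRAC_PASS,
                      "-x",              # assumption: ssh transport to the iDRAC
                      "-o", "status"])
print("fence_drac5 exit code: %d (0 means the agent could read the power state)" % rc)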

-Patrick


From: Volusion Inc
Date: Thursday, March 5, 2015 at 8:22 PM
To: "users@ovirt.org"
Subject: [ovirt-users] Dell DRAC 8

Anyone having success with fencing and DRAC 8 via CMC? We just received a 
couple of Dell FX2 chassis and we’re having trouble getting the fencing agents 
to work on these. It is a CMC setup similar to the Dell blade chassis, but with 
DRAC version 8.

-Patrick


[ovirt-users] Dell DRAC 8

2015-03-05 Thread Patrick Russell
Anyone having success with fencing and DRAC 8 via CMC? We just received a 
couple of Dell FX2 chassis and we’re having trouble getting the fencing agents 
to work on these. It is a CMC setup similar to the Dell blade chassis, but with 
DRAC version 8.

-Patrick


[ovirt-users] Interesting issue with gluster and ovirt3.5.1

2015-03-05 Thread Pat Pierson
I am having a very strange issue with oVirt 3.5.1 and gluster.  I have a
gluster volume with 4 nodes.  One node is specifically set as the node
hosting the gluster volume in my oVirt cluster; however, today it died.  I
tried working around it by modifying the hostname in the entry to point to
another node that is hosting the gluster volume, but I couldn't find a way
to do it other than modifying the entry in the database table directly.
Doing that didn't help either, so I resorted to removing the storage entry
by destroying it.  I am now attempting to add the data store back into
oVirt, but it fails.  ovirt-engine does not report any errors.  I did find
the following log on one of the nodes stating that my gluster volume was in
read-only mode; however, I am able to mount the volume manually and
add/remove files with no issues.
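
For what it's worth, this is roughly how I'm checking the volume from the
remaining nodes (the volume name, server and mount point below are
placeholders):

#!/usr/bin/env python
# Check brick status and verify the volume is writable when mounted the same
# way vdsm would mount it. Volume/server names are placeholders.
import os
import subprocess

VOLUME = "gv0"                      # placeholder volume name
SERVER = "gluster1.example.com"     # placeholder gluster node
MOUNTPOINT = "/mnt/gv0-test"

subprocess.call(["gluster", "volume", "status", VOLUME])

if not os.path.isdir(MOUNTPOINT):
    os.makedirs(MOUNTPOINT)
subprocess.check_call(["mount", "-t", "glusterfs",
                       "%s:/%s" % (SERVER, VOLUME), MOUNTPOINT])
try:
    probe = os.path.join(MOUNTPOINT, ".write-probe")
    open(probe, "w").close()        # raises IOError if the mount is read-only
    os.unlink(probe)
    print("volume is writable via %s" % SERVER)
finally:
    subprocess.call(["umount", MOUNTPOINT])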

Has anyone seen this before?


glusteraddvdsm.tmp.log
Description: Binary data


Re: [ovirt-users] about HA or live migration

2015-03-05 Thread Gianluca Cecchi
On 05/Mar/2015 18:58, "rino" wrote:
>
> Hi,
> I was wondering whether the same concept that VMware has for migrating
machines between nodes is available in oVirt.
>
> The scenario started when a partner told me that he has this feature in
VMware: he can move a VM between hosts without any impact, but the
limitation is only three nodes per host, because it keeps a kind of special
copy of memory and other state at every moment, so if the host fails the
machine is instantaneously ready on the other host without any impact.
>
> Is that possible in oVirt? And what happens if a host crashes with all the
VMs it has? Or is there something similar, like marking a VM as very
important and doing something special with it in case everything crashes,
or whatever other solution there is?
> I want to show that we have this in oVirt, or that it can solve this kind
of problem.
>
>
> Regards

You had better start by reading the oVirt Administration Guide at
http://www.ovirt.org/OVirt_Administration_Guide

For your particular questions, the answer is yes for almost all of them; you
can read about this in sections 6.7.1, 8.13 and 8.14.
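
Just to give an idea, with the Python SDK (ovirtsdk) something along these
lines should mark a VM as highly available and trigger a live migration. I'm
writing this from memory, so treat the method names as a sketch and check the
SDK docs; the engine URL, credentials and VM name are examples:

from ovirtsdk.api import API
from ovirtsdk.xml import params

# Example values - replace with your engine and VM.
api = API(url="https://engine.example.com/api",
          username="admin@internal", password="secret", insecure=True)

vm = api.vms.get(name="important-vm")

# Highly available: the engine restarts the VM on another host if its host dies.
vm.set_high_availability(params.HighAvailability(enabled=True, priority=100))
vm.update()

# Live migration right now; the engine picks the destination host.
vm.migrate(params.Action())

api.disconnect()
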
VMware FT (fault tolerance), as you describe in your scenario, is not
available (yet) as far as I know...
Gianluca


[ovirt-users] about HA or live migration

2015-03-05 Thread rino
Hi,
I was wondering whether the same concept that VMware has for migrating
machines between nodes is available in oVirt.

The scenario started when a partner told me that he has this feature in
VMware: he can move a VM between hosts without any impact, but the
limitation is only three nodes per host, because it keeps a kind of special
copy of memory and other state at every moment, so if the host fails the
machine is instantaneously ready on the other host without any impact.

Is that possible in oVirt? And what happens if a host crashes with all the
VMs it has? Or is there something similar, like marking a VM as very
important and doing something special with it in case everything crashes,
or whatever other solution there is?
I want to show that we have this in oVirt, or that it can solve this kind
of problem.


Regards

-- 
---
Rondan Rino
Certified in LPIC-2
LPI ID:LPI000209832
Verification Code:gbblvwyfxu
Red Hat Certified Engineer -- RHCE -- RHCVA



Blog:http://www.itrestauracion.com.ar
Cv: http://cv.rinorondan.com.ar 
http://counter.li.org  Linux User -> #517918
Viva La Santa Federacion!!
Mueran Los Salvages Unitarios!!
^^^Transcripcion de la epoca ^^^


Re: [ovirt-users] Host/storage OS upgrade: VM migrate?

2015-03-05 Thread Darrell Budic
In theory you can do this, but it takes a bit of work.

I migrated a CentOS 6 system to CentOS 7 and kept my gluster bricks intact by 
backing up and restoring /etc/glusterfs and /var/lib/glusterd, as well as the 
bricks themselves. Your mileage may vary. I was also working on a multi-server 
system and was in a position to rebuild the bricks from the running systems if 
needed. And I needed to for one set, so it wasn’t perfect. If you go this 
route, make sure you back up your engine and restore it as well; a simple copy 
of your storage domain is not enough to keep a VM in oVirt (although you can 
probably import the volume as an existing domain and get the disks back).
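
Roughly what I ran, sketched in Python (the destination path is just an
example; the tar part runs on the gluster host, engine-backup inside the
engine VM):

#!/usr/bin/env python
# Back up the gluster configuration trees and take an engine backup before
# the reinstall. Destination path is an example (e.g. an external drive).
import subprocess
import time

stamp = time.strftime("%Y%m%d-%H%M%S")
dest = "/mnt/backup"   # example mount point for the external drive

# Gluster peer/volume state lives here; the bricks themselves are untouched.
subprocess.check_call(["tar", "czf",
                       "%s/gluster-config-%s.tar.gz" % (dest, stamp),
                       "/etc/glusterfs", "/var/lib/glusterd"])

# engine-backup ships with the engine and captures the DB plus config files
# (run this part inside the engine VM on a hosted-engine setup).
subprocess.check_call(["engine-backup", "--mode=backup",
                       "--file=%s/engine-%s.tar.gz" % (dest, stamp),
                       "--log=%s/engine-backup-%s.log" % (dest, stamp)])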

Might be safer to set up an export domain on an external drive and export your 
VMs to it, then you can import them to a clean new system after your upgrade. 
Way less to go wrong with this approach, so I’d probably recommend it.
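
If you have more than a handful of VMs, the export itself is easy to script
with the Python SDK. A sketch, with method names from memory and all names
below as examples (the VMs need to be shut down first):

from ovirtsdk.api import API
from ovirtsdk.xml import params

# Example values - replace with your engine, export domain and VM names.
api = API(url="https://engine.example.com/api",
          username="admin@internal", password="secret", insecure=True)

export_domain = params.StorageDomain(name="export1")

for name in ["vm1", "vm2"]:          # the VMs you want to move
    vm = api.vms.get(name=name)
    vm.export(params.Action(storage_domain=export_domain))

api.disconnect()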

  -Darrell


> On Mar 5, 2015, at 4:59 AM, Alan Murrell  wrote:
> 
> Hello,
> 
> I currently run oVirt 3.5 on CentOS6.  It is on a single host with a 
> self-hosted engine.  Not an officially supported setup, but it is just a home 
> lab.
> 
> When the next release of oVirt comes out (3.6), I am thinking I may want to 
> upgrade to CentOS7 on both host and engine to take advantage of the newer 
> libraries and features of CentOS7.
> 
> On the host, my storage is GlusterFS and is on the same physical HDD as the 
> OS, but on a different LVM partition.
> 
> If I were to do a fresh install on the OS partition, and run through the 
> initial steps to install oVirt and GlusterFS packages, when I get to 
> configuring GlusterFS, will it be able to pick up my existing bricks and thus 
> allow me to import that existing storage into oVirt?
> 
> Alternatively, is there a way to export my VMs to an external HDD, do a 
> completely fresh install, then import the VMs back in?  I suspect I would 
> probably need to resort to a cloning tool like "CloneZilla"?
> 
> Downtime of the VMs is not an issue, since this is just a lab and there is 
> nothing production-wise running on it.
> 
> Thanks for your advice!
> 
> Regards,
> 
> Alan


[ovirt-users] Network error

2015-03-05 Thread RASTELLI Alessandro
Hi,
I get this error when I try to add a second network to a host (the management 
network is OK):
VDSGenericException: VDSErrorException: Failed to SetupNetworksVDS, 
error = Resource unavailable, code = 40
See the log below; I'm running ovirt-hosted-engine 3.5.1.1-1.el6.


2015-03-05 15:02:16,562 INFO  [org.ovirt.engine.core.bll.network.host.SetupNetworksCommand] (ajp--127.0.0.1-8702-3) [23e12e8d] Running command: SetupNetworksCommand internal: false. Entities affected :  ID: 378b60dc-8f28-486f-9feb-0349df25c4a9 Type: VDSAction group CONFIGURE_HOST_NETWORK with role type ADMIN
2015-03-05 15:02:16,577 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.SetupNetworksVDSCommand] (ajp--127.0.0.1-8702-3) [23e12e8d] START, SetupNetworksVDSCommand(HostName = beltorax, HostId = 378b60dc-8f28-486f-9feb-0349df25c4a9, force=false, checkConnectivity=true, conectivityTimeout=120,
networks=[Rete_40 {id=57f9f798-9bde-4b2f-aeee-8920f77169ac, description=null, comment=null, subnet=null, gateway=null, type=null, vlanId=null, stp=false, dataCenterId=0002-0002-0002-0002-03aa, mtu=0, vmNetwork=true, cluster=NetworkCluster {id={clusterId=null, networkId=null}, status=NON_OPERATIONAL, display=false, required=true, migration=false}, providedBy=null, label=40, qosId=null}],
bonds=[],
interfaces=[bond0 {id=ac91d426-dbe3-480b-b1af-267f2d44ffa3, vdsId=378b60dc-8f28-486f-9feb-0349df25c4a9, name=bond0, macAddress=28:80:23:df:8e:a0, networkName=Rete_40, bondOptions=miimon=100 mode=4, bootProtocol=STATIC_IP, address=10.69.40.154, subnet=255.255.255.0, gateway=10.69.40.1, mtu=0, bridged=true, type=0, networkImplementationDetails=null},
eno4 {id=d083cebe-07ec-4aab-b07b-a1014d3673be, vdsId=378b60dc-8f28-486f-9feb-0349df25c4a9, name=eno4, macAddress=c4:34:6b:b7:a7:13, networkName=ovirtmgmt, bondName=null, bootProtocol=STATIC_IP, address=10.39.193.3, subnet=255.255.255.0, gateway=, mtu=1500, bridged=true, speed=1000, type=2, networkImplementationDetails={inSync=true, managed=true}},
eno3 {id=b9fbeff5-64bc-45df-9c41-9b10ca3839f0, vdsId=378b60dc-8f28-486f-9feb-0349df25c4a9, name=eno3, macAddress=c4:34:6b:b7:a7:12, networkName=null, bondName=null, bootProtocol=DHCP, address=, subnet=, gateway=null, mtu=1500, bridged=false, speed=0, type=0, networkImplementationDetails=null},
eno2 {id=a332843c-a9ce-469f-8702-1c918e4b7358, vdsId=378b60dc-8f28-486f-9feb-0349df25c4a9, name=eno2, macAddress=c4:34:6b:b7:a7:11, networkName=null, bondName=null, bootProtocol=DHCP, address=, subnet=, gateway=null, mtu=1500, bridged=false, speed=0, type=0, networkImplementationDetails=null},
eno1 {id=cc27cada-8825-4268-97a0-acce2ef10543, vdsId=378b60dc-8f28-486f-9feb-0349df25c4a9, name=eno1, macAddress=c4:34:6b:b7:a7:10, networkName=null, bondName=null, bootProtocol=DHCP, address=, subnet=, gateway=null, mtu=1500, bridged=false, speed=0, type=0, networkImplementationDetails=null},
eno49 {id=de83b23c-ab3e-4fbe-b4af-85937ff60f16, vdsId=378b60dc-8f28-486f-9feb-0349df25c4a9, name=eno49, macAddress=28:80:23:df:8e:a0, networkName=null, bondName=bond0, bootProtocol=NONE, address=, subnet=, gateway=null, mtu=1500, bridged=false, speed=1, type=0, networkImplementationDetails=null},
eno50 {id=ae9237ca-d443-479c-abb4-a29e9efb1481, vdsId=378b60dc-8f28-486f-9feb-0349df25c4a9, name=eno50, macAddress=28:80:23:df:8e:a8, networkName=null, bondName=bond0, bootProtocol=NONE, address=, subnet=, gateway=null, mtu=1500, bridged=false, speed=1, type=0, networkImplementationDetails=null}],
removedNetworks=[],
removedBonds=[]), log id: 731097e2
2015-03-05 15:02:16,607 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.SetupNetworksVDSCommand] 
(ajp--127.0.0.1-8702-3) [23e12e8d] FINISH, SetupNetworksVDSCommand, log id: 
731097e2
2015-03-05 15:02:16,608 WARN  
[org.ovirt.vdsm.jsonrpc.client.internal.ResponseWorker] (ResponseWorker) 
Exception thrown during message processing
2015-03-05 15:02:16,608 INFO  
[org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp Reactor) 
Connecting to beltorax.skytech.local/10.39.193.3
2015-03-05 15:02:18,725 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.SetupNetworksVDSCommand] 
(ajp--127.0.0.1-8702-3) [23e12e8d] Failed in SetupNetworksVDS method
2015-03-05 15:02:18,726 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.SetupNetworksVDSCommand] 
(ajp--127.0.0.1-8702-3) [23e12e8d] 
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: 
VDSGenericException:   VDSErrorException: Failed to 
SetupNetworksVDS, error = Resource unavailable, code = 40
2015-03-05 15:02:18,727 ERROR 
[org.ovirt.engine.core.vdsb

Re: [ovirt-users] Help problem with vm after create a sub template

2015-03-05 Thread Dan Kenigsberg
On Wed, Mar 04, 2015 at 03:55:32PM +0100, nicola.gentile.to wrote:
> Good morning,
> after I create a new sub-template, several VMs of a pool remain in the
> state 'Image Locked'.
> 
> What should I do to solve this?

To receive a meaningful answer, you should most likely specify the versions
of your engine and vdsm. How many hosts are in your cluster? You may need to
peer into /var/log/vdsm/vdsm.log on the SPM at the time of creating the new
image.

What is reported in engine.log at the time of the failure?
Which type of storage do you use? (nfs? iscsi?)
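
If it helps, a quick way to pull the relevant window out of both logs (the
timestamp prefix is an example; use the minute the template was created):

#!/usr/bin/env python
# Print log lines from around the template creation. Run against
# /var/log/ovirt-engine/engine.log on the engine and /var/log/vdsm/vdsm.log
# on the SPM host. The timestamp is an example.
PATTERN = "2015-03-04 15:5"

def show(path):
    print("==== %s ====" % path)
    try:
        with open(path) as f:
            for line in f:
                if PATTERN in line:
                    print(line.rstrip())
    except IOError as err:
        print("cannot read %s: %s" % (path, err))

show("/var/log/ovirt-engine/engine.log")   # on the engine machine
show("/var/log/vdsm/vdsm.log")             # on the SPM host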


[ovirt-users] Host/storage OS upgrade: VM migrate?

2015-03-05 Thread Alan Murrell

Hello,

I currently run oVirt 3.5 on CentOS6.  It is on a single host with a 
self-hosted engine.  Not an officially supported setup, but it is just a 
home lab.


When the next release of oVirt comes out (3.6), I am thinking I may want 
to upgrade to CentOS7 on both host and engine to take advantage of the 
newer libraries and features of CentOS7.


On the host, my storage is GlusterFS and is on the same physical HDD as 
the OS, but on a different LVM partition.


If I were to do a fresh install on the OS partition, and run through the 
initial steps to install oVirt and GlusterFS packages, when I get to 
configuring GlusterFS, will it be able to pick up my existing bricks and 
thus allow me to import that existing storage into oVirt?


Alternatively, is there a way to export my VMs to an external HDD, do a 
completely fresh install, then import the VMs back in?  I suspect I 
would probably need to resort to a cloning tool like "CloneZilla"?


Downtime of the VMs is not an issue, since this is just a lab and there 
is nothing production-wise running on it.


Thanks for your advice!

Regards,

Alan