Re: [Users] oVirt 3.2 on CentOS with Gluster 3.3

2013-03-06 Thread Rob Zwissler
On Wed, Mar 6, 2013 at 12:34 AM, Shireesh Anjal san...@redhat.com wrote:

 oVirt 3.2 needs a newer (3.4.0) version of glusterfs, which is currently in
 alpha and hence not available in stable repositories.
 http://bits.gluster.org/pub/gluster/glusterfs/3.4.0alpha/

 This issue has been reported multiple times now, and I think it needs an
 update to the oVirt 3.2 release notes. I have added a note to this effect at:
 http://www.ovirt.org/OVirt_3.2_release_notes#Storage
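
For anyone checking an existing host against that requirement, a rough
way to compare the installed glusterfs with the 3.4.0 minimum quoted
above is something like the following (just a sketch built on plain rpm
queries, not an official oVirt or vdsm check):

    # sketch: compare the installed glusterfs version with the 3.4.0
    # minimum quoted above (assumes an RPM-based host such as CentOS 6)
    import subprocess
    from distutils.version import LooseVersion

    REQUIRED = LooseVersion('3.4.0')

    proc = subprocess.Popen(
        ['rpm', '-q', '--queryformat', '%{VERSION}', 'glusterfs'],
        stdout=subprocess.PIPE)
    installed = proc.communicate()[0].strip()

    if proc.returncode != 0:
        print('glusterfs does not appear to be installed')
    elif LooseVersion(installed) < REQUIRED:
        print('glusterfs %s is older than the %s oVirt 3.2 expects'
              % (installed, REQUIRED))
    else:
        print('glusterfs %s should satisfy the requirement' % installed)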


On one hand, I like oVirt; I think you guys have done a good job with
this, and it is free software, so I don't want to complain.

But on the other hand, if you put out a major, stable release (i.e.
oVirt 3.2) that relies on a critical component (the clustered
filesystem server) that is only in alpha prerelease form, not even
beta, you really should be up front and communicative about it.  My
searches turned up nothing except an offhand statement from a
GlusterFS developer, and nothing from the oVirt team until now.

It is not acceptable to expect people to run something as critical as
a cluster filesystem server in alpha form on anything short of a
development test setup.  Are any other components of oVirt 3.2
dependent on packages that have not had a stable general release?

What is the latest release of oVirt that is considered stable and
safe for use on production systems?

Rob
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Adding a bond.vlantag to ovirt-node

2013-03-06 Thread Rob Zwissler
Hi Alex, I'm using a very similar setup on our clusters; a few
caveats I encountered:

When oVirt manages the networks, it would not allow me to run
ovirtmgmt on the same interface as the data networks, so I'm running
two bonds: one for ovirtmgmt and one for the data VLANs.  It makes
sense that oVirt would not allow ovirtmgmt on the trunked bond, as
oVirt needs to be able to communicate with vdsm while modifying the
trunked bond setup.  I have tried splitting off a third bond for a
dedicated storage network with Gluster, but that has led to issues of
its own; perhaps this will work in a future version of oVirt.

oVirt 3.1 would not allow me to save a VLAN interface without an IP,
but it appears that oVirt 3.2 does, which is good, as I want to offer
my VMs a VLAN that is on the same network as the ovirtmgmt interface.
The oVirt 3.1 requirement of an IP on every bridge forces you into
having two different interfaces with IPs on the same subnet, and
without source policy routing (or some iptables work), whichever IP
sits lower in the routing table becomes essentially unusable.
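
If you do end up in that spot, the source-policy-routing workaround is
roughly the following (a sketch only; the interface name, addresses and
table number are made-up examples, not taken from my config):

    # sketch: give the second address its own routing table so its
    # traffic leaves via its own interface (assumes iproute2; run as root)
    import subprocess

    SECOND_IP = '192.0.2.12'    # example address on the second interface
    SECOND_DEV = 'vlan101'      # example VLAN bridge carrying that address
    GATEWAY = '192.0.2.1'       # example gateway on the shared subnet
    TABLE = '100'               # spare routing table number

    def ip(*args):
        # run an iproute2 command, echoing it first
        cmd = ['ip'] + list(args)
        print(' '.join(cmd))
        subprocess.check_call(cmd)

    # traffic sourced from the second address consults table 100 ...
    ip('rule', 'add', 'from', SECOND_IP, 'table', TABLE)
    # ... which routes it out the second interface instead of the main default
    ip('route', 'add', 'default', 'via', GATEWAY, 'dev', SECOND_DEV,
       'table', TABLE)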

FWIW, I'm building machines out of kickstart/cobbler with the bonded,
bridged setups, and they import and are recognized directly by oVirt,
which is very convenient.  I use LACP for the trunked bonds, and use
vlanXXX as the naming convention for the bridges.  I'm using
balance-tlb (mode 5) for the ovirtmgmt bond because, for other
reasons, LACP is not viable on that bond in our setup.  I previously
tried balance-alb (mode 6), but that is not usable for VMs, which is
another issue altogether (fixed in kernel 3.8, apparently).
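
If it helps, the post-kickstart sanity check I'd run to confirm which
mode each bond actually came up in is something like this (a rough
sketch that just reads the kernel's bonding status files; nothing
oVirt-specific):

    # sketch: report the active mode and slaves of every bond on the host
    # by reading /proc/net/bonding/* (present when the bonding driver is
    # loaded)
    import glob
    import os

    for path in sorted(glob.glob('/proc/net/bonding/*')):
        bond = os.path.basename(path)
        mode = 'unknown'
        slaves = []
        for line in open(path):
            if line.startswith('Bonding Mode:'):
                mode = line.split(':', 1)[1].strip()
            elif line.startswith('Slave Interface:'):
                slaves.append(line.split(':', 1)[1].strip())
        print('%s: %s (slaves: %s)'
              % (bond, mode, ', '.join(slaves) or 'none'))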

regards

Rob



On Wed, Mar 6, 2013 at 3:39 AM, Alex Leonhardt alex.t...@gmail.com wrote:
 All,

 I've manually added a bond.vlantag interface to a hypervisor, then added a
 bridge and slaved bond.vlantag to it.

 I've upped the interfaces (no IPs), however ovirt-engine still won't allow me
 to add the new bridged interface. I did this yesterday, so I thought maybe
 it's just a cache issue; however, it doesn't seem to update the HV's network
 config periodically?

 What can I do to get this sorted?  I don't want to have to restart the
 networking, as VMs are running and needed.

 FWIW, the setup looks like this -


 eth0
| - bond0.111  --- br1
| - bond0.112  --- ovirtmgmt
 eth1


 the change was :

 eth0
| - bond0.111  --- br1
| - bond0.112  --- ovirtmgmt
| - bond0.113  --- br2
 eth1

 I then added br2 to the ovirt-engine config; however, I'm not able to
 assign it to the bond in the network config (web admin interface) for the
 hypervisor / host.

 Also see screenshot attached.

 Thanks
 Alex

 --

 | RHCE | Senior Systems Engineer | www.vcore.co | www.vsearchcloud.com |

 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] Searching for VMs that do not have a tag

2013-03-06 Thread Rob Zwissler
I have to say, the Search & Bookmark functionality is really cool!

According to autocomplete, the only operator available for Vms:Tag is
'='.  It would be nice to have a != operator so we could have bookmarks
that show VMs that are not tagged in a certain way... or is there a
more general way to invert a query?

Also, is there any way to add an order tag to alter the display
order?  Any online docs for this stuff?

regards,

Rob
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] oVirt 3.2 on CentOS with Gluster 3.3

2013-03-05 Thread Rob Zwissler
On Mon, Mar 4, 2013 at 11:46 PM, Dan Kenigsberg dan...@redhat.com wrote:
 Rob,

 It seems that a bug in vdsm code is hiding the real issue.
 Could you do a

 sed -i 's/ParseError/ElementTree.ParseError/' /usr/share/vdsm/gluster/cli.py

 restart vdsmd, and retry?

 Bala, would you send a patch fixing the ParseError issue (and adding a
 unit test that would have caught it in time)?


 Regards,
 Dan.

Hi Dan, thanks for the quick response.  I did that, and here's what I
get now from the vdsm.log:

MainProcess|Thread-51::DEBUG::2013-03-05
10:03:40,723::misc::84::Storage.Misc.excCmd::(lambda)
'/usr/sbin/gluster --mode=script volume info --xml' (cwd None)
Thread-52::DEBUG::2013-03-05
10:03:40,731::task::568::TaskManager.Task::(_updateState)
Task=`aa1990a1-8016-4337-a8cd-1b62976032a4`::moving from state init ->
state preparing
Thread-52::INFO::2013-03-05
10:03:40,732::logUtils::41::dispatcher::(wrapper) Run and protect:
repoStats(options=None)
Thread-52::INFO::2013-03-05
10:03:40,732::logUtils::44::dispatcher::(wrapper) Run and protect:
repoStats, Return response: {'4af726ea-e502-4e79-a47c-6c8558ca96ad':
{'delay': '0.00584101676941', 'lastCheck': '0.2', 'code': 0, 'valid':
True}, 'fc0d44ec-528f-4bf9-8913-fa7043daf43b': {'delay':
'0.0503160953522', 'lastCheck': '0.2', 'code': 0, 'valid': True}}
Thread-52::DEBUG::2013-03-05
10:03:40,732::task::1151::TaskManager.Task::(prepare)
Task=`aa1990a1-8016-4337-a8cd-1b62976032a4`::finished:
{'4af726ea-e502-4e79-a47c-6c8558ca96ad': {'delay': '0.00584101676941',
'lastCheck': '0.2', 'code': 0, 'valid': True},
'fc0d44ec-528f-4bf9-8913-fa7043daf43b': {'delay': '0.0503160953522',
'lastCheck': '0.2', 'code': 0, 'valid': True}}
Thread-52::DEBUG::2013-03-05
10:03:40,732::task::568::TaskManager.Task::(_updateState)
Task=`aa1990a1-8016-4337-a8cd-1b62976032a4`::moving from state
preparing -> state finished
Thread-52::DEBUG::2013-03-05
10:03:40,732::resourceManager::830::ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}
Thread-52::DEBUG::2013-03-05
10:03:40,733::resourceManager::864::ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-52::DEBUG::2013-03-05
10:03:40,733::task::957::TaskManager.Task::(_decref)
Task=`aa1990a1-8016-4337-a8cd-1b62976032a4`::ref 0 aborting False
Thread-53::DEBUG::2013-03-05
10:03:40,742::libvirtvm::269::vm.Vm::(_getDiskStats)
vmId=`2c59dfa7-442c-46fb-8102-298db1ebc3bf`::Disk hdc stats not
available
Thread-53::DEBUG::2013-03-05
10:03:40,742::libvirtvm::269::vm.Vm::(_getDiskStats)
vmId=`2c59dfa7-442c-46fb-8102-298db1ebc3bf`::Disk vda stats not
available
Thread-53::DEBUG::2013-03-05
10:03:40,742::libvirtvm::308::vm.Vm::(_getDiskLatency)
vmId=`2c59dfa7-442c-46fb-8102-298db1ebc3bf`::Disk hdc latency not
available
Thread-53::DEBUG::2013-03-05
10:03:40,742::libvirtvm::308::vm.Vm::(_getDiskLatency)
vmId=`2c59dfa7-442c-46fb-8102-298db1ebc3bf`::Disk vda latency not
available
Thread-53::DEBUG::2013-03-05
10:03:40,743::libvirtvm::269::vm.Vm::(_getDiskStats)
vmId=`8555382a-b3fa-4a4b-a61e-a80da47478a5`::Disk hdc stats not
available
Thread-53::DEBUG::2013-03-05
10:03:40,743::libvirtvm::269::vm.Vm::(_getDiskStats)
vmId=`8555382a-b3fa-4a4b-a61e-a80da47478a5`::Disk vda stats not
available
Thread-53::DEBUG::2013-03-05
10:03:40,743::libvirtvm::308::vm.Vm::(_getDiskLatency)
vmId=`8555382a-b3fa-4a4b-a61e-a80da47478a5`::Disk hdc latency not
available
Thread-53::DEBUG::2013-03-05
10:03:40,743::libvirtvm::308::vm.Vm::(_getDiskLatency)
vmId=`8555382a-b3fa-4a4b-a61e-a80da47478a5`::Disk vda latency not
available
Thread-53::DEBUG::2013-03-05
10:03:40,744::libvirtvm::269::vm.Vm::(_getDiskStats)
vmId=`a2617d92-6145-4ba2-b40f-d793f037e031`::Disk hdc stats not
available
Thread-53::DEBUG::2013-03-05
10:03:40,744::libvirtvm::269::vm.Vm::(_getDiskStats)
vmId=`a2617d92-6145-4ba2-b40f-d793f037e031`::Disk vda stats not
available
Thread-53::DEBUG::2013-03-05
10:03:40,744::libvirtvm::308::vm.Vm::(_getDiskLatency)
vmId=`a2617d92-6145-4ba2-b40f-d793f037e031`::Disk hdc latency not
available
Thread-53::DEBUG::2013-03-05
10:03:40,744::libvirtvm::308::vm.Vm::(_getDiskLatency)
vmId=`a2617d92-6145-4ba2-b40f-d793f037e031`::Disk vda latency not
available
Thread-53::DEBUG::2013-03-05
10:03:40,745::libvirtvm::269::vm.Vm::(_getDiskStats)
vmId=`c63f8d87-e6bf-49fd-9642-90aefd1aff84`::Disk hdc stats not
available
Thread-53::DEBUG::2013-03-05
10:03:40,745::libvirtvm::269::vm.Vm::(_getDiskStats)
vmId=`c63f8d87-e6bf-49fd-9642-90aefd1aff84`::Disk vda stats not
available
Thread-53::DEBUG::2013-03-05
10:03:40,745::libvirtvm::308::vm.Vm::(_getDiskLatency)
vmId=`c63f8d87-e6bf-49fd-9642-90aefd1aff84`::Disk hdc latency not
available
Thread-53::DEBUG::2013-03-05
10:03:40,745::libvirtvm::308::vm.Vm::(_getDiskLatency)
vmId=`c63f8d87-e6bf-49fd-9642-90aefd1aff84`::Disk vda latency not
available
GuestMonitor-xor-q-nis02::DEBUG::2013-03-05
10:03:40,750::libvirtvm::269::vm.Vm::(_getDiskStats)
vmId=`2c59dfa7-442c-46fb-8102-298db1ebc3bf`::Disk hdc stats not
available

[Users] When does oVirt auto-migrate, and what does HA do?

2013-03-05 Thread Rob Zwissler
In what scenarios does oVirt auto-migrate VMs?  I'm aware that it
currently migrates VMs when putting a host into maintenance, or when
migration is selected manually via the web interface, but when else
will VMs be migrated?  Is there any automatic compensation for
resource imbalances between hosts?  I could find no documentation on
this subject; if I missed it, I apologize!

A related question: exactly what does enabling HA (Highly Available)
mode do?  The only documentation I could find on this is at
http://www.ovirt.org/OVirt_3.0_Feature_Guide#High_availability but it
is a bit vague and, being from 3.0, possibly out of date.  Can someone
briefly describe the HA migration algorithm?

Thanks,

Rob
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] oVirt 3.2 on CentOS with Gluster 3.3

2013-03-04 Thread Rob Zwissler
Running CentOS 6.3 with the following VDSM packages from dre's repo:

vdsm-xmlrpc-4.10.3-0.30.19.el6.noarch
vdsm-gluster-4.10.3-0.30.19.el6.noarch
vdsm-python-4.10.3-0.30.19.el6.x86_64
vdsm-4.10.3-0.30.19.el6.x86_64
vdsm-cli-4.10.3-0.30.19.el6.noarch

And the following gluster packages from the gluster repo:

glusterfs-3.3.1-1.el6.x86_64
glusterfs-fuse-3.3.1-1.el6.x86_64
glusterfs-vim-3.2.7-1.el6.x86_64
glusterfs-server-3.3.1-1.el6.x86_64

I get the following errors in vdsm.log:

Thread-1483::DEBUG::2013-03-04
16:35:27,427::BindingXMLRPC::913::vds::(wrapper) client
[10.33.9.73]::call volumesList with () {}
MainProcess|Thread-1483::DEBUG::2013-03-04
16:35:27,429::misc::84::Storage.Misc.excCmd::(lambda)
'/usr/sbin/gluster --mode=script volume info --xml' (cwd None)
MainProcess|Thread-1483::DEBUG::2013-03-04
16:35:27,480::misc::84::Storage.Misc.excCmd::(lambda) SUCCESS: <err>
= ''; <rc> = 0
MainProcess|Thread-1483::ERROR::2013-03-04
16:35:27,480::supervdsmServer::80::SuperVdsm.ServerCallback::(wrapper)
Error in wrapper
Traceback (most recent call last):
  File "/usr/share/vdsm/supervdsmServer.py", line 78, in wrapper
    return func(*args, **kwargs)
  File "/usr/share/vdsm/supervdsmServer.py", line 352, in wrapper
    return func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/cli.py", line 45, in wrapper
    return func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/cli.py", line 430, in volumeInfo
    except (etree.ParseError, AttributeError, ValueError):
AttributeError: 'module' object has no attribute 'ParseError'
Thread-1483::ERROR::2013-03-04
16:35:27,481::BindingXMLRPC::932::vds::(wrapper) unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/BindingXMLRPC.py", line 918, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 32, in wrapper
    rv = func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 56, in volumesList
    return {'volumes': self.svdsmProxy.glusterVolumeInfo(volumeName)}
  File "/usr/share/vdsm/supervdsm.py", line 81, in __call__
    return callMethod()
  File "/usr/share/vdsm/supervdsm.py", line 72, in <lambda>
    **kwargs)
  File "<string>", line 2, in glusterVolumeInfo
  File "/usr/lib64/python2.6/multiprocessing/managers.py", line 740, in _callmethod
    raise convert_to_error(kind, result)
AttributeError: 'module' object has no attribute 'ParseError'
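
The AttributeError is raised while Python evaluates the except clause
in /usr/share/vdsm/gluster/cli.py: etree.ParseError does not exist in
the ElementTree that ships with Python 2.6 on EL6, so whatever actually
went wrong inside the try block gets masked.  A compatibility guard
along these lines would avoid the masking (an illustrative sketch only,
not the actual vdsm code or patch; parse_volume_xml is a made-up helper
name):

    # sketch: catch XML parse failures portably across Python 2.6 and 2.7
    import xml.etree.cElementTree as etree
    from xml.parsers import expat

    try:
        _PARSE_ERRORS = (etree.ParseError,)   # ElementTree 1.3+ (Python 2.7)
    except AttributeError:                    # Python 2.6 has no ParseError
        _PARSE_ERRORS = (SyntaxError, expat.ExpatError)

    def parse_volume_xml(xml_text):
        # parse gluster's "volume info --xml" output; report unparseable
        # XML without hiding the original exception behind an AttributeError
        try:
            return etree.fromstring(xml_text)
        except _PARSE_ERRORS:
            raise RuntimeError('gluster returned XML that could not be parsed')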

Those vdsm errors correspond to the following in the engine.log:

2013-03-04 16:34:46,231 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(QuartzScheduler_Worker-86) START,
GlusterVolumesListVDSCommand(HostName = xor-q-virt01, HostId =
b342bf4d-d9e9-4055-b662-462dc2e6bf50), log id: 987aef3
2013-03-04 16:34:46,365 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
(QuartzScheduler_Worker-86) Failed in GlusterVolumesListVDS method
2013-03-04 16:34:46,366 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
(QuartzScheduler_Worker-86) Error code unexpected and error message
VDSGenericException: VDSErrorException: Failed to
GlusterVolumesListVDS, error = Unexpected exception
2013-03-04 16:34:46,367 ERROR
[org.ovirt.engine.core.vdsbroker.VDSCommandBase]
(QuartzScheduler_Worker-86) Command GlusterVolumesListVDS execution
failed. Exception: VDSErrorException: VDSGenericException:
VDSErrorException: Failed to GlusterVolumesListVDS, error = Unexpected
exception
2013-03-04 16:34:46,369 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(QuartzScheduler_Worker-86) FINISH, GlusterVolumesListVDSCommand, log
id: 987aef3
2013-03-04 16:34:46,370 ERROR
[org.ovirt.engine.core.bll.gluster.GlusterManager]
(QuartzScheduler_Worker-86) Error while refreshing Gluster lightweight
data of cluster qa-cluster1!:
org.ovirt.engine.core.common.errors.VdcBLLException: VdcBLLException:
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
VDSGenericException: VDSErrorException: Failed to
GlusterVolumesListVDS, error = Unexpected exception
at 
org.ovirt.engine.core.bll.VdsHandler.handleVdsResult(VdsHandler.java:168)
[engine-bll.jar:]
at 
org.ovirt.engine.core.bll.VDSBrokerFrontendImpl.RunVdsCommand(VDSBrokerFrontendImpl.java:33)
[engine-bll.jar:]
at 
org.ovirt.engine.core.bll.gluster.GlusterManager.runVdsCommand(GlusterManager.java:258)
[engine-bll.jar:]
at 
org.ovirt.engine.core.bll.gluster.GlusterManager.fetchVolumes(GlusterManager.java:454)
[engine-bll.jar:]
at 
org.ovirt.engine.core.bll.gluster.GlusterManager.fetchVolumes(GlusterManager.java:440)
[engine-bll.jar:]
at 
org.ovirt.engine.core.bll.gluster.GlusterManager.refreshVolumeData(GlusterManager.java:411)
[engine-bll.jar:]
at 
org.ovirt.engine.core.bll.gluster.GlusterManager.refreshClusterData(GlusterManager.java:191)
[engine-bll.jar:]
at 
org.ovirt.engine.core.bll.gluster.GlusterManager.refreshLightWeightData(GlusterManager.java:170)
[engine-bll.jar:]
at sun.reflect.GeneratedMethodAccessor73.invoke(Unknown Source)