Re: [Users] method glusterHostsList is not supported

2014-03-04 Thread René Koch

On 03/04/2014 05:14 AM, Sahina Bose wrote:


On 03/04/2014 01:59 AM, Itamar Heim wrote:

On 03/03/2014 07:26 PM, René Koch wrote:

Hi list,

My hosted engine is running again, so I want to start a new thread for
another issue with my setup.

I have a GlusterFS storage domain, which can be mounted from CLI without
problems. oVirt is 3.4 from ovirt-3.4.0-prerelease repository running on
CentOS 6.5 with latest updates (both OS and oVirt).

Both hosts, which act as hypervisors and GlusterFS nodes, are in state
"Non Operational" in oVirt because "Gluster command [Non interactive
user] failed on server ovirt-host02.dmz.linuxland.at".

In engine.log I see the entry "glusterHostsList is not supported"
(attached are the log entries when activating one of the hosts):

2014-03-03 18:17:11,764 ERROR
[org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand]
(org.ovirt.thread.pool-6-thread-21) [6eee3cbd] Command
GlusterServersListVDSCommand(HostName = ovirt-host02.dmz.linuxland.at,
HostId = dd399eeb-f623-457a-9986-a7efc69010b2) execution failed.
Exception: VDSNetworkException: org.apache.xmlrpc.XmlRpcException: type
'exceptions.Exception':method glusterHostsList is not supported

Can you give me a hint what this means and how I can activate my hosts
and storage again?
Thanks a lot!




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



sahina ?


Do you have vdsm-gluster on the node?


No, I didn't have it on (both) nodes.

After installing vdsm-gluster the storage works fine again. Thanks a lot!
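For anyone hitting the same error, the fix boils down to a few commands on the
affected host (package, service and repository names are taken from this thread;
whether a vdsm restart is strictly required is an assumption):

# check whether the gluster verbs are available to vdsm
rpm -q vdsm-gluster || echo "vdsm-gluster is missing"
# install the missing package on the CentOS 6.5 host
yum install -y vdsm-gluster
# restart vdsm so it picks up the newly installed gluster verbs (assumption)
service vdsmd restart
# then activate the host again from the webadmin (Hosts -> Activate)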

But there's one strange thing. According to oVirt logs and yum.log,
GlusterFS storage worked fine until yesterday's yum update, when the
following packages were updated:


Mar 03 10:01:09 Updated: ovirt-hosted-engine-ha-1.1.0-1.el6.noarch
Mar 03 10:01:10 Updated: otopi-1.2.0-0.5.rc.el6.noarch
Mar 03 10:01:11 Updated: ovirt-engine-sdk-python-3.4.0.6-1.el6.noarch
Mar 03 10:01:12 Updated: ovirt-hosted-engine-setup-1.1.0-1.el6.noarch
Mar 03 10:01:13 Updated: libtiff-3.9.4-10.el6_5.x86_64

According to yum.log vdsm-gluster was never installed on these hosts, 
but storage did work.


Shouldn't vdsm-gluster be a requirement for hosts and therefore be
installed during host setup?


Do you have any clue why the storage did work until the update of these
packages?



Regards,
René

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Host requirements for 3.4 compatibility

2014-03-04 Thread Lior Vernia
Hey Darren,

I don't think it is (at least I couldn't find it with a quick Google).
In fact, I can't even tell you how I knew that 4.14 goes with 3.4... It
should be documented better when 3.4 is officially released.

In general, when using beta/rc versions I would recommend following the
corresponding test day web page on ovirt.org, as these usually contain
the most up-to-date information on how to configure the yum repositories
for everything to work.

Yours, Lior.

On 03/03/14 18:20, Darren Evenson wrote:
 Hi Lior,
 
 Updating VDSM from 4.13 to 4.14 worked! Thank you!
 
 Is it documented anywhere what the required versions of libvirt and vdsm are 
 for 3.4 compatibility?
 
 - Darren
 
 -Original Message-
 From: Lior Vernia [mailto:lver...@redhat.com] 
 Sent: Monday, March 3, 2014 7:04 AM
 To: Darren Evenson
 Cc: users@ovirt.org
 Subject: Re: [Users] Host requirements for 3.4 compatibility
 
 Hi Darren,
 
 Looks to me like your VDSM version isn't up-to-date; I think those supported 
 in 3.4 clusters are >= 4.14. I would try installing the oVirt yum repo file by 
 running:
 
 sudo yum localinstall
 http://resources.ovirt.org/releases/3.4.0-rc/rpm/Fedora/20/noarch/ovirt-release-11.0.2-1.noarch.rpm
 
 Then enable the ovirt-3.4.0-prerelease repository in the repo file, then 
 install vdsm. Then let us know if that worked.
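For reference, the remaining steps on the host could look like this (the repo id
ovirt-3.4.0-prerelease is taken from the repository name mentioned in this thread,
but check the actual id in the installed repo file):

# enable the prerelease repository shipped in the ovirt-release repo file
sudo yum-config-manager --enable ovirt-3.4.0-prerelease
# pull in a 4.14.x vdsm, which is what 3.4 clusters expect
sudo yum update vdsm vdsm-cli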
 
 Yours, Lior.
 
 On 01/03/14 00:32, Darren Evenson wrote:
 I have updated my engine to 3.4 rc.

  

 I created a new cluster with 3.4 compatibility version, and then I 
 moved a host I had in maintenance mode to the new cluster.

  

 When I activate it, I get the error "Host kvmhost2 is compatible with 
 versions (3.0,3.1,3.2,3.3) and cannot join Cluster Cluster_new which 
 is set to version 3.4".

  

 My host was Fedora 20 with the latest updates:

  

 Kernel Version: 3.13.4 - 200.fc20.x86_64

 KVM Version: 1.6.1 - 3.fc20

 LIBVIRT Version: libvirt-1.1.3.3-5.fc20

 VDSM Version: vdsm-4.13.3-3.fc20

  

 So I enabled fedora-virt-preview and updated, but I still get the same 
 error, even now with libvirt 1.2.1:

  

 Kernel Version: 3.13.4 - 200.fc20.x86_64

 KVM Version: 1.7.0 - 5.fc20

 LIBVIRT Version: libvirt-1.2.1-3.fc20

 VDSM Version: vdsm-4.13.3-3.fc20

  

 What am I missing?

  

 - Darren

  



 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] method glusterHostsList is not supported

2014-03-04 Thread Sahina Bose


On 03/04/2014 01:46 PM, René Koch wrote:

On 03/04/2014 05:14 AM, Sahina Bose wrote:


On 03/04/2014 01:59 AM, Itamar Heim wrote:

On 03/03/2014 07:26 PM, René Koch wrote:

Hi list,

My hosted engine is running again, so I want to start a new thread for
another issue with my setup.

I have a GlusterFS storage domain, which can be mounted from CLI 
without
problems. oVirt is 3.4 from ovirt-3.4.0-prerelease repository 
running on

CentOS 6.5 with latest updates (both OS and oVirt).

Both hosts, which act as hypervisors and GlusterFS nodes, are in state
"Non Operational" in oVirt because "Gluster command [Non interactive
user] failed on server ovirt-host02.dmz.linuxland.at".

In engine.log I see the entry "glusterHostsList is not supported"
(attached are the log entries when activating one of the hosts):

2014-03-03 18:17:11,764 ERROR
[org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand]
(org.ovirt.thread.pool-6-thread-21) [6eee3cbd] Command
GlusterServersListVDSCommand(HostName = ovirt-host02.dmz.linuxland.at,
HostId = dd399eeb-f623-457a-9986-a7efc69010b2) execution failed.
Exception: VDSNetworkException: org.apache.xmlrpc.XmlRpcException: 
type

'exceptions.Exception':method glusterHostsList is not supported

Can you give me a hint what this means and how I can activate my hosts
and storage again?
Thanks a lot!




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



sahina ?


Do you have vdsm-gluster on the node?


No, I didn't have it on (both) nodes.

After installing vdsm-gluster the storage works fine again. Thanks a lot!

But there's one strange thing. According to oVirt logs and yum.log,
GlusterFS storage worked fine until yesterday's yum update, when the
following packages were updated:


Mar 03 10:01:09 Updated: ovirt-hosted-engine-ha-1.1.0-1.el6.noarch
Mar 03 10:01:10 Updated: otopi-1.2.0-0.5.rc.el6.noarch
Mar 03 10:01:11 Updated: ovirt-engine-sdk-python-3.4.0.6-1.el6.noarch
Mar 03 10:01:12 Updated: ovirt-hosted-engine-setup-1.1.0-1.el6.noarch
Mar 03 10:01:13 Updated: libtiff-3.9.4-10.el6_5.x86_64

According to yum.log vdsm-gluster was never installed on these hosts, 
but storage did work.


Shouldn't vdsm-gluster be a requirement for hosts and therefore be
installed during host setup?


Do you have any clue why the storage did work until the update of these
packages?


The host moving to Non-Operational state with the error "Gluster command
failed..." depends on whether "Enable gluster service" is checked
on your cluster. This checkbox indicates that you also want to manage
gluster storage provisioning on the nodes.


A recent change now checks that vdsm-gluster support is available for
such clusters. That's probably why you are seeing this error after the update.
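In practice that means every host in a cluster that has "Enable gluster service"
checked now needs vdsm-gluster. A quick manual pre-check across the hosts could
look like this (the host names are placeholders):

for h in ovirt-host01.dmz.linuxland.at ovirt-host02.dmz.linuxland.at; do
    echo "== $h =="
    ssh root@"$h" 'rpm -q vdsm-gluster || echo MISSING'
done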





Regards,
René



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] method glusterHostsList is not supported

2014-03-04 Thread Itamar Heim

On 03/04/2014 10:16 AM, René Koch wrote:

On 03/04/2014 05:14 AM, Sahina Bose wrote:


On 03/04/2014 01:59 AM, Itamar Heim wrote:

On 03/03/2014 07:26 PM, René Koch wrote:

Hi list,

My hosted engine is running again, so I want to start a new thread for
another issue with my setup.

I have a GlusterFS storage domain, which can be mounted from CLI
without
problems. oVirt is 3.4 from ovirt-3.4.0-prerelease repository
running on
CentOS 6.5 with latest updates (both OS and oVirt).

Both hosts, which act as hypervisors and GlusterFS nodes, are in state
"Non Operational" in oVirt because "Gluster command [Non interactive
user] failed on server ovirt-host02.dmz.linuxland.at".

In engine.log I see the entry "glusterHostsList is not supported"
(attached are the log entries when activating one of the hosts):

2014-03-03 18:17:11,764 ERROR
[org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand]
(org.ovirt.thread.pool-6-thread-21) [6eee3cbd] Command
GlusterServersListVDSCommand(HostName = ovirt-host02.dmz.linuxland.at,
HostId = dd399eeb-f623-457a-9986-a7efc69010b2) execution failed.
Exception: VDSNetworkException: org.apache.xmlrpc.XmlRpcException:
type
'exceptions.Exception':method glusterHostsList is not supported

Can you give me a hint what this means and how I can activate my hosts
and storage again?
Thanks a lot!




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



sahina ?


Do you have vdsm-gluster on the node?


No, I didn't have it on (both) nodes.

After installing vdsm-gluster the storage works fine again. Thanks a lot!

But there's one strange thing. According to oVirt logs and yum.log,
GlusterFS storage worked fine until yesterday's yum update, when the
following packages were updated:

Mar 03 10:01:09 Updated: ovirt-hosted-engine-ha-1.1.0-1.el6.noarch
Mar 03 10:01:10 Updated: otopi-1.2.0-0.5.rc.el6.noarch
Mar 03 10:01:11 Updated: ovirt-engine-sdk-python-3.4.0.6-1.el6.noarch
Mar 03 10:01:12 Updated: ovirt-hosted-engine-setup-1.1.0-1.el6.noarch
Mar 03 10:01:13 Updated: libtiff-3.9.4-10.el6_5.x86_64

According to yum.log vdsm-gluster was never installed on these hosts,
but storage did work.

Shouldn't vdsm-gluster be a requirement for hosts and therefore be
installed during host setup?

Do you have any clue why the storage did work until the update of these
packages?


vdsm-gluster is for managing gluster storage, not for consuming gluster storage.
Did you enable the gluster mode after installing the hosts?
(If you enabled it before installing them, or re-install[1] after
enabling gluster mode, it should have deployed vdsm-gluster as well.)


[1] after moving the host to maintenance.


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] method glusterHostsList is not supported

2014-03-04 Thread René Koch

On 03/04/2014 09:42 AM, Sahina Bose wrote:


On 03/04/2014 01:46 PM, René Koch wrote:

On 03/04/2014 05:14 AM, Sahina Bose wrote:


On 03/04/2014 01:59 AM, Itamar Heim wrote:

On 03/03/2014 07:26 PM, René Koch wrote:

Hi list,

My hosted engine is running again, so I want to start a new thread for
another issue with my setup.

I have a GlusterFS storage domain, which can be mounted from CLI
without
problems. oVirt is 3.4 from ovirt-3.4.0-prerelease repository
running on
CentOS 6.5 with latest updates (both OS and oVirt).

Both hosts, which act as hypervisors and GlusterFS nodes, are in state
"Non Operational" in oVirt because "Gluster command [Non interactive
user] failed on server ovirt-host02.dmz.linuxland.at".

In engine.log I see the entry "glusterHostsList is not supported"
(attached are the log entries when activating one of the hosts):

2014-03-03 18:17:11,764 ERROR
[org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand]
(org.ovirt.thread.pool-6-thread-21) [6eee3cbd] Command
GlusterServersListVDSCommand(HostName = ovirt-host02.dmz.linuxland.at,
HostId = dd399eeb-f623-457a-9986-a7efc69010b2) execution failed.
Exception: VDSNetworkException: org.apache.xmlrpc.XmlRpcException:
type
'exceptions.Exception':method glusterHostsList is not supported

Can you give me a hint what this means and how I can activate my hosts
and storage again?
Thanks a lot!




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



sahina ?


Do you have vdsm-gluster on the node?


No, I didn't have it on (both) nodes.

After installing vdsm-gluster the storage works fine again. Thanks a lot!

But there's one strange thing. According to oVirt logs and yum.log,
GlusterFS storage worked fine until yesterday's yum update, when the
following packages were updated:

Mar 03 10:01:09 Updated: ovirt-hosted-engine-ha-1.1.0-1.el6.noarch
Mar 03 10:01:10 Updated: otopi-1.2.0-0.5.rc.el6.noarch
Mar 03 10:01:11 Updated: ovirt-engine-sdk-python-3.4.0.6-1.el6.noarch
Mar 03 10:01:12 Updated: ovirt-hosted-engine-setup-1.1.0-1.el6.noarch
Mar 03 10:01:13 Updated: libtiff-3.9.4-10.el6_5.x86_64

According to yum.log vdsm-gluster was never installed on these hosts,
but storage did work.

Shouldn't vdsm-gluster be a requirement for hosts and therefore be
installed during host setup?

Do you have any clue why the storage did work until the update of these
packages?


The host moving to Non-Operational state with the error "Gluster command
failed..." depends on whether "Enable gluster service" is checked
on your cluster. This checkbox indicates that you also want to manage
gluster storage provisioning on the nodes.

A recent change now checks that vdsm-gluster support is available for
such clusters. That's probably why you are seeing this error after the update.



Thanks, now it's clear.
I have indeed enabled the Gluster service in my cluster (but I don't use it,
as I configured the volumes with the gluster command directly).



Regards,
René





Regards,
René




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] method glusterHostsList is not supported

2014-03-04 Thread René Koch

On 03/04/2014 09:40 AM, Itamar Heim wrote:

On 03/04/2014 10:16 AM, René Koch wrote:

On 03/04/2014 05:14 AM, Sahina Bose wrote:


On 03/04/2014 01:59 AM, Itamar Heim wrote:

On 03/03/2014 07:26 PM, René Koch wrote:

Hi list,

My hosted engine is running again, so I want to start a new thread for
another issue with my setup.

I have a GlusterFS storage domain, which can be mounted from CLI
without
problems. oVirt is 3.4 from ovirt-3.4.0-prerelease repository
running on
CentOS 6.5 with latest updates (both OS and oVirt).

Both hosts, which act as hypervisors and GlusterFS nodes, are in state
"Non Operational" in oVirt because "Gluster command [Non interactive
user] failed on server ovirt-host02.dmz.linuxland.at".

In engine.log I see the entry "glusterHostsList is not supported"
(attached are the log entries when activating one of the hosts):

2014-03-03 18:17:11,764 ERROR
[org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand]
(org.ovirt.thread.pool-6-thread-21) [6eee3cbd] Command
GlusterServersListVDSCommand(HostName = ovirt-host02.dmz.linuxland.at,
HostId = dd399eeb-f623-457a-9986-a7efc69010b2) execution failed.
Exception: VDSNetworkException: org.apache.xmlrpc.XmlRpcException:
type
'exceptions.Exception':method glusterHostsList is not supported

Can you give me a hint what this means and how I can activate my hosts
and storage again?
Thanks a lot!




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



sahina ?


Do you have vdsm-gluster on the node?


No, I didn't have it on (both) nodes.

After installing vdsm-gluster the storage works fine again. Thanks a lot!

But there's one strange thing. According to oVirt logs and yum.log,
GlusterFS storage worked fine until yesterday's yum update, when the
following packages were updated:

Mar 03 10:01:09 Updated: ovirt-hosted-engine-ha-1.1.0-1.el6.noarch
Mar 03 10:01:10 Updated: otopi-1.2.0-0.5.rc.el6.noarch
Mar 03 10:01:11 Updated: ovirt-engine-sdk-python-3.4.0.6-1.el6.noarch
Mar 03 10:01:12 Updated: ovirt-hosted-engine-setup-1.1.0-1.el6.noarch
Mar 03 10:01:13 Updated: libtiff-3.9.4-10.el6_5.x86_64

According to yum.log vdsm-gluster was never installed on these hosts,
but storage did work.

Shouldn't vdsm-gluster be a requirement for hosts and therefore be
installed during host setup?

Do you have any clue why the storage did work until the update of these
packages?


vdsm-gluster is for managing gluster storage, not for consuming gluster storage.
Did you enable the gluster mode after installing the hosts?
(If you enabled it before installing them, or re-install[1] after
enabling gluster mode, it should have deployed vdsm-gluster as well.)

[1] after moving the host to maintenance.


Yes, I did it after installing the hosts.

Would it be possible to add a check before activating this option?

I'm thinking of the following:
- Edit Cluster
- Enable Gluster Service
- OK checks if all hosts in the cluster have vdsm-gluster installed - if
not, an error message appears stating that this package is required in order to
enable the gluster service.







___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] method glusterHostsList is not supported

2014-03-04 Thread Itamar Heim

On 03/04/2014 11:15 AM, René Koch wrote:

On 03/04/2014 09:40 AM, Itamar Heim wrote:

On 03/04/2014 10:16 AM, René Koch wrote:

On 03/04/2014 05:14 AM, Sahina Bose wrote:


On 03/04/2014 01:59 AM, Itamar Heim wrote:

On 03/03/2014 07:26 PM, René Koch wrote:

Hi list,

My hosted engine is running again, so I want to start a new thread
for
another issue with my setup.

I have a GlusterFS storage domain, which can be mounted from CLI
without
problems. oVirt is 3.4 from ovirt-3.4.0-prerelease repository
running on
CentOS 6.5 with latest updates (both OS and oVirt).

Both hosts, which act as hypervisors and GlusterFS nodes, are in state
"Non Operational" in oVirt because "Gluster command [Non interactive
user] failed on server ovirt-host02.dmz.linuxland.at".

In engine.log I see the entry "glusterHostsList is not supported"
(attached are the log entries when activating one of the hosts):

2014-03-03 18:17:11,764 ERROR
[org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand]

(org.ovirt.thread.pool-6-thread-21) [6eee3cbd] Command
GlusterServersListVDSCommand(HostName =
ovirt-host02.dmz.linuxland.at,
HostId = dd399eeb-f623-457a-9986-a7efc69010b2) execution failed.
Exception: VDSNetworkException: org.apache.xmlrpc.XmlRpcException:
type
'exceptions.Exception':method glusterHostsList is not supported

Can you give me a hint what this means and how I can activate my
hosts
and storage again?
Thanks a lot!




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



sahina ?


Do you have vdsm-gluster on the node?


No, I didn't have it on (both) nodes.

After installing vdsm-gluster the storage works fine again. Thanks a
lot!

But there's one strange thing. According to oVirt logs and yum.log,
GlusterFS storage worked fine until yesterday's yum update, when the
following packages were updated:

Mar 03 10:01:09 Updated: ovirt-hosted-engine-ha-1.1.0-1.el6.noarch
Mar 03 10:01:10 Updated: otopi-1.2.0-0.5.rc.el6.noarch
Mar 03 10:01:11 Updated: ovirt-engine-sdk-python-3.4.0.6-1.el6.noarch
Mar 03 10:01:12 Updated: ovirt-hosted-engine-setup-1.1.0-1.el6.noarch
Mar 03 10:01:13 Updated: libtiff-3.9.4-10.el6_5.x86_64

According to yum.log vdsm-gluster was never installed on these hosts,
but storage did work.

Shouldn't vdsm-gluster be a requirement for hosts and therefore be
installed during host setup?

Do you have any clue why the storage did work until the update of these
packages?


vdsm-gluster is for managing gluster storage, not for consuming gluster storage.
Did you enable the gluster mode after installing the hosts?
(If you enabled it before installing them, or re-install[1] after
enabling gluster mode, it should have deployed vdsm-gluster as well.)

[1] after moving the host to maintenance.


Yes, I did it after installing the hosts.

Would it be possible to add a check before activating this option?

I'm thinking of the following:
- Edit Cluster
- Enable Gluster Service
- OK checks if all hosts in the cluster have vdsm-gluster installed - if
not, an error message appears stating that this package is required in order to
enable the gluster service.


Makes sense - please open a bug to track this. The check would probably also
need to warn the user if unreachable hosts exist.


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] method glusterHostsList is not supported

2014-03-04 Thread René Koch

On 03/04/2014 10:19 AM, Itamar Heim wrote:

On 03/04/2014 11:15 AM, René Koch wrote:

On 03/04/2014 09:40 AM, Itamar Heim wrote:

On 03/04/2014 10:16 AM, René Koch wrote:

On 03/04/2014 05:14 AM, Sahina Bose wrote:


On 03/04/2014 01:59 AM, Itamar Heim wrote:

On 03/03/2014 07:26 PM, René Koch wrote:

Hi list,

My hosted engine is running again, so I want to start a new thread
for
another issue with my setup.

I have a GlusterFS storage domain, which can be mounted from CLI
without
problems. oVirt is 3.4 from ovirt-3.4.0-prerelease repository
running on
CentOS 6.5 with latest updates (both OS and oVirt).

Both hosts, which act as hypervisors and GlusterFS nodes, are in
state "Non Operational" in oVirt because "Gluster command [Non interactive
user] failed on server ovirt-host02.dmz.linuxland.at".

In engine.log I see the entry "glusterHostsList is not supported"
(attached are the log entries when activating one of the hosts):

2014-03-03 18:17:11,764 ERROR
[org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand]


(org.ovirt.thread.pool-6-thread-21) [6eee3cbd] Command
GlusterServersListVDSCommand(HostName =
ovirt-host02.dmz.linuxland.at,
HostId = dd399eeb-f623-457a-9986-a7efc69010b2) execution failed.
Exception: VDSNetworkException: org.apache.xmlrpc.XmlRpcException:
type
'exceptions.Exception':method glusterHostsList is not supported

Can you give me a hint what this means and how I can activate my
hosts
and storage again?
Thanks a lot!




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



sahina ?


Do you have vdsm-gluster on the node?


No, I didn't have it on (both) nodes.

After installing vdsm-gluster the storage works fine again. Thanks a
lot!

But there's one strange thing. According to oVirt logs and yum.log,
GlusterFS storage worked fine until yesterday's yum update, when the
following packages were updated:

Mar 03 10:01:09 Updated: ovirt-hosted-engine-ha-1.1.0-1.el6.noarch
Mar 03 10:01:10 Updated: otopi-1.2.0-0.5.rc.el6.noarch
Mar 03 10:01:11 Updated: ovirt-engine-sdk-python-3.4.0.6-1.el6.noarch
Mar 03 10:01:12 Updated: ovirt-hosted-engine-setup-1.1.0-1.el6.noarch
Mar 03 10:01:13 Updated: libtiff-3.9.4-10.el6_5.x86_64

According to yum.log vdsm-gluster was never installed on these hosts,
but storage did work.

Shouldn't vdsm-gluster be a requirement for hosts and therefore be
installed during host setup?

Do you have any clue why the storage did work until the update of these
packages?


vdsm-gluster is for managing gluster storage, not for consuming gluster storage.
Did you enable the gluster mode after installing the hosts?
(If you enabled it before installing them, or re-install[1] after
enabling gluster mode, it should have deployed vdsm-gluster as well.)

[1] after moving the host to maintenance.


Yes, I did it after installing the hosts.

Would it be possible to add a check before activating this option?

I'm thinking of the following:
- Edit Cluster
- Enable Gluster Service
- OK checks if all hosts in the cluster have vdsm-gluster installed - if
not, an error message appears stating that this package is required in order to
enable the gluster service.


Makes sense - please open a bug to track this. The check would probably also
need to warn the user if unreachable hosts exist.


Done:
https://bugzilla.redhat.com/show_bug.cgi?id=1072274




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] [Engine-devel] oVirt February 2014 Updates

2014-03-04 Thread Jiri Moskovcak

On 03/03/2014 05:15 PM, Antoni Segura Puimedon wrote:



- Original Message -

From: Itamar Heim ih...@redhat.com
To: users@ovirt.org
Sent: Monday, March 3, 2014 3:25:07 PM
Subject: [Engine-devel] oVirt February 2014 Updates

1. Releases

- oVirt 3.3.3 was released early in the month:
http://www.ovirt.org/OVirt_3.3.3_release_notes

- oVirt 3.3.4 about to release.
http://www.ovirt.org/OVirt_3.3.4_release_notes

- oVirt 3.4.0 about to release!

2. Events
- Leonardo Vaz is organizing ovirt attendance at FISL15, the largest
FOSS conference in LATAM which will happen from 7th to 10th of May in
Porto Alegre, Brazil:
http://softwarelivre.org/fisl15

- Allon Mureinik gave a presentation on DR with oVirt at devconf.cz
http://www.slideshare.net/AllonMureinik/dev-conf-ovirt-dr


Jiří Moskovcak also presented the oVirt scheduler (@Jiri, can you attach
slides?)



- my slides (It's basically Gilad's slides from fosdem): 
http://jmoskovc.fedorapeople.org/scheduling_devconf.odp



I presented vdsm pluggable networking showing how to write parts of a 
configurator
and network hooks.
https://blog.antoni.me/devconf14/
(Better to look at it with Chromium; Firefox has a bug with SVG files)



- oVirt workshop in korea slides (korean)
http://www.slideshare.net/rogan/20140208-ovirtkorea-01
https://www.facebook.com/groups/ovirt.korea

- Rogan also presented oVirt integration with OpenStack in OpenStack
day in Korea
http://alturl.com/m3jnx

- Pat Pierson posted on basic network setup
http://izen.ghostpeppersrus.com/setting-up-networks/

- Fosdem 2014 sessions (slides and videos) are at:
http://www.ovirt.org/FOSDEM_2014

- and some at Infrastructure.Next Ghent the week after FOSDEM.

3. oVirt Activity (software)

- oVirt Jenkins plugin by Dustin Kut Moy Cheung to control VM slaves
managed by ovirt/RHEV
https://github.com/thescouser89/ovirt-slaves-plugin

- Opaque oVirt/RHEV/Proxmox client and source code released
 https://play.google.com/store/apps/details?id=com.undatech.opaque

- great to see the NUMA push from HP:
http://www.ovirt.org/Features/NUMA_and_Virtual_NUMA
http://www.ovirt.org/Features/Detailed_NUMA_and_Virtual_NUMA

4. oVirt Activity (blogs, preso's)

- oVirt has been accepted as a mentoring project for the Google
Summer of Code 2014.

- Oved Ourfali posted on Importing Glance images as oVirt templates
http://alturl.com/h7xid

- v2v had seen many active discussions. here's a post by Jon Archer on
how to Import regular kvm image to oVirt or RHEV
http://jonarcher.info/2014/02/import-regular-kvm-image-ovirt-rhev/

- great reviews on amazon.com for Getting Started with oVirt 3.3
http://alturl.com/5rk2p

- oVirt Deep Dive 3.3 slides (Chinese)
http://www.slideshare.net/mobile/johnwoolee/ovirt-deep-dive#

- oVirt intro video (russian)
http://alturl.com/it546

- how to install oVirt 3.3 on CentOS 6.5
http://www.youtube.com/watch?v=5i5ilSKsmbo

5. Related
- NetApp GA'd their Virtual Storage Console for RHEV, which is
implemented as an oVirt UI plugin (and then some)
http://captainkvm.com/2014/02/vsc-for-rhev-is-ga-today/#more-660
___
Engine-devel mailing list
engine-de...@ovirt.org
http://lists.ovirt.org/mailman/listinfo/engine-devel



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] SPICE causes migration failure?

2014-03-04 Thread Dafna Ron

Thanks Ted,

Please send the logs to the users list, since others may help if I am offline.

Thanks,

Dafna


On 03/03/2014 11:48 PM, Ted Miller wrote:
Dafna, I will get the logs to you when I get a chance.  I have an 
intern to keep busy this week, and that gets higher priority than 
oVirt (unfortunately).  Ted Miller


On 3/3/2014 12:26 PM, Dafna Ron wrote:
I don't see a reason why an open monitor would fail migration - at most, 
if there is a problem, I would close the SPICE session on the src and 
restart it at the dst.
can you please attach vdsm/libvirt/qemu logs from both hosts and 
engine logs so that we can see the migration failure reason?


Thanks,
Dafna
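For reference, on a stock oVirt setup the logs requested above usually live at:

# on each host
/var/log/vdsm/vdsm.log
/var/log/libvirt/libvirtd.log
/var/log/libvirt/qemu/<vm-name>.log
# on the engine machine
/var/log/ovirt-engine/engine.log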



On 03/03/2014 05:16 PM, Ted Miller wrote:
I just got my Data Center running again, and am proceeding with some 
setup and testing.


I created a VM (not doing anything useful)
I clicked on the Console and had a SPICE console up (viewed in Win7).
I had it printing the time on the screen once per second (while 
date;do sleep 1; done).

I tried to migrate the VM to another host and got in the GUI:

Migration started (VM: web1, Source: s1, Destination: s3, User: 
admin@internal).


Migration failed due to Error: Fatal error during migration (VM: 
web1, Source: s1, Destination: s3).


As I started the migration I happened to think "I wonder how they 
handle the SPICE console", since I think that is a link from the host 
to my machine, letting me see the VM's screen.


After the failure, I tried shutting down the SPICE console, and 
found that the migration succeeded.  I again opened SPICE and had a 
migration fail.  Closed SPICE, migration failed.


I can understand how migrating SPICE is a problem, but could we at least 
give the victim of this condition a meaningful error 
message?  I have seen a lot of questions about failed migrations 
(mostly due to attached CDs), but I have never seen this discussed. 
If I had not had that particular thought cross my brain at that 
particular time, I doubt that SPICE would have been where I went 
looking for a solution.


If this is the first time this issue has been raised, I am willing 
to file a bug.


Ted Miller
Elkhart, IN, USA

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users








--
Dafna Ron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] How to get status of async operation via Java SDK

2014-03-04 Thread Eric Bollengier
Hello,

From what I see in my tests, deleting a snapshot can take up to a few
minutes (my test platform is not really a heavy production system, but
my VMs are powered off all the time, so I have a hard time knowing why the
KVM host shows such huge CPU consumption for this operation).

So, I would like to know when the delete is actually done, and if the
status is OK. I have pieces of information about a CorrelationId, and I can
read Events, but it's not really clear how to query the status of my
operation: the Response object provides only a Type, and the
documentation is oriented toward creation.



Documentation or some examples would be welcome.
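One workaround until the SDK documentation improves is to poll at the REST level
(which the Java SDK decorators wrap): list the VM's snapshots and wait until the
deleted snapshot's id disappears. A rough sketch with curl - the engine URL,
credentials and ids are placeholders, and treating "id no longer listed" as
"delete finished" is an assumption:

VM_ID=00000000-0000-0000-0000-000000000000
SNAP_ID=11111111-1111-1111-1111-111111111111
while curl -s -k -u admin@internal:password \
        "https://engine.example.com/api/vms/$VM_ID/snapshots" | grep -q "$SNAP_ID"; do
    echo "snapshot still present, waiting..."
    sleep 10
done
echo "snapshot $SNAP_ID is gone"

The same loop can be written with the Java SDK by re-listing the VM's snapshots
collection and checking whether the snapshot id is still present.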

Thanks,

Best Regards,
Eric

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Which iso to use when installing an ovirt-node

2014-03-04 Thread Fabian Deutsch
Am Montag, den 03.03.2014, 16:49 +0100 schrieb Andy Michielsen:
 Hello,
 
 
 Which iso should I use to install an ovirt-node ?
 
 
 What's the difference between the el and fc versions? (I'm installing
 the engine on a centos 6.5 minimal server.)
 
 
 Which version should I use.

Hi Andy,

el6 is Node based on CentOS
fc* is Node based on Fedora *

Currently we only build CentOS based images. The latest image can be
found here:
http://fedorapeople.org/~fabiand/node/3.0.4

We are working on restructuring our infrastructure, after that the ISO
can be found on ovirt.org again.

- fabian


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Which ovirt-node version to use

2014-03-04 Thread Fabian Deutsch
Am Montag, den 03.03.2014, 12:16 +0100 schrieb Andy Michielsen:
 Hello,
 
 
 Which ovirt-node iso should I use to install and use with the
 ovirt-engine 3.3

Hey,

as noted in a different email, please use the vdsm33 iso from

http://fedorapeople.org/~fabiand/node/3.0.4


- fabian


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] View the Console using SPICE from Windows client

2014-03-04 Thread Alon Bar-Lev
Adding Frantisek

- Original Message -
 From: Udaya Kiran P ukiran...@yahoo.in
 To: users users@ovirt.org, Alon Bar-Lev alo...@redhat.com
 Sent: Tuesday, March 4, 2014 12:51:24 PM
 Subject: View the Console using SPICE from Windows client
 
 Hi All,
 
 I have successfully launched a VM on one of the Host (FC19 Host). Console
 option for the VM is set to SPICE. However, I am not able to see the console
 after clicking on the console button in oVirt Engine.
 
 I am accessing the oVirt Engine from a Windows 7 machine. I have tried with
 Chrome, Firefox and IE.
 
 Please suggest.
 
 Thank you.
 
 Regards,
 Udaya Kiran
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Setting up an ovirt-node

2014-03-04 Thread Fabian Deutsch
Am Freitag, den 28.02.2014, 09:47 +0100 schrieb Andy Michielsen:
 Hello,
 
 
 Will try that. Do I need to configure both NICs with a static IP
 address?

Hey,

it should be enough to only have one NIC configured.

- fabian

 Kind regards.
 
 
 
 2014-02-28 9:25 GMT+01:00 Alon Bar-Lev alo...@redhat.com:
 
 
 - Original Message -
  From: Andy Michielsen andy.michiel...@gmail.com
  To: users@ovirt.org
  Sent: Friday, February 28, 2014 10:18:55 AM
  Subject: [Users] Setting up an ovirt-node
 
  Hello,
 
 I did a clean install of an ovirt-node with the ISO provided
 by oVirt.

 Everything went fine until I logged on with the admin user
 and configured the ovirt-engine's address.

 Now I don't have any network connection any more.

 I have 2 NICs available and defined only the first one with
 a static IP.

 When I check the network settings in the admin menu it tells
 me I have several bond devices.

 If I log on as the root user, I see under
 /etc/sysconfig/network-scripts that there is an ifcfg-em1,
 an ifcfg-ovirtmgmt and an ifcfg-brem1.

 The last two devices use the same static IP that I defined
 on ifcfg-em1.

 How can I get my network back up and running, as I will need
 this to connect to the engine, which is running on another server.
 
 
 Hi,
 
 I suggest you try a different method.
 Try to enable SSH access and set up a password, but do not enter the
 engine address.
 Then add that ovirt-node via the engine's Add Host.
 
 
  Kind regards.
 
  ___
  Users mailing list
  Users@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/users
 
 
 
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Using ovirt-node with virt-manager

2014-03-04 Thread Fabian Deutsch
Am Donnerstag, den 27.02.2014, 08:44 +0100 schrieb Andy Michielsen:
 Hello,
 
 How do I connect to an ovirt-node with virt-manager?

Hey,

this is currently not possible out of the box, but I've also got an
interest in allowing this.

oVirt Node is intended to be used with Engine.

- fabian


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Data Center Non Responsive / Contending

2014-03-04 Thread Meital Bourvine
Hi Giorgio,

Can you please attach vdsm.log from both hosts, and engine.log?

Maybe try moving both hosts to maintenance, confirming the hosts have been rebooted, 
and activating them again. See if that helps.

- Original Message -
 From: Giorgio Bersano giorgio.bers...@gmail.com
 To: users@ovirt.org Users@ovirt.org
 Sent: Tuesday, March 4, 2014 12:34:49 PM
 Subject: [Users] Data Center Non Responsive / Contending
 
 Hi everyone,
 I'm asking for help again as testing my setup I put myself in a
 situation in which I can't get out.
 
 Layout: two hosts, an iSCSI storage, the engine installed as a
 regular KVM guest on another host (external to the oVirt setup). All
 CentOS 6.5, oVirt 3.4.0beta3.
 One DC (Default), one Cluster (Default), default storage type iSCSI.
 Moreover, ISO domain is another external KVM guest exposing an NFS
 share; Export Domain is in fact a VM in this Cluster.
 
 This is a preproduction setup.
 Yesterday all was running fine until I needed to do some hardware
 maintenance.
 So I decided to put the two hosts in maintenance from the webadmin
 then shutdown them in the usual way. Engine was left operational.
 Later I booted one host again, waited some time, then tried to activate
 it (from the webadmin) without success. Even switching on the second
 host didn't change anything.
 Now the Data Center status toggles between Contending and Non Responsive.
 
 None of the hosts is SPM and if I choose Select as SPM in the
 webadmin the result is this popup:
 --
  Operation Canceled
 --
 Error while executing action: Cannot force select SPM: Storage Domain
 cannot be accessed.
 -Please check that at least one Host is operational and Data Center state is
 up.
 --
 
 The two hosts are operational but the DC isn't up (downpointing red
 arrow - Non Responsive, hourglass - Contending).
 
 Storage domains are in Unknown state and if I try to Activate the
 Master Data Domain its status becomes Locked and then it fails with
 these events:
 . Invalid status on Data Center Default. Setting status to Non Responsive
 . Failed to activate Storage Domain dt02clu6070 (Data Center Default) by
 admin
 
 Connectivity seems OK. iSCSI connectivity seems OK.
 
 I'm almost certain that if I had left one host active I would have had
 zero problems.
 But I also think shutting down the full system should not be a problem.
 
 In the end I restarted the ovirt-engine service without results (as
 expected).
 
 Any thought? Logs needed?
 
 TIA,
 Giorgio.
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Recommended way to disconnect and remove iSCSI direct LUNs

2014-03-04 Thread Maor Lipchuk
Hi Boyan,


Generally we don't disconnect external LUN disks when we remove them
from oVirt management.
You can disconnect them manually from the host, or use a restart.

IIRC, one reason for that is that we might have Storage Domains which
use the same target.
Another reason is that we keep those sessions so it will be easier to
establish a connection when reusing the target.
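A manual logout from a host would look roughly like this (the target IQN and
portal are placeholders; make sure no storage domain still uses the same target):

# list the active sessions to find the target and portal of the removed LUN
iscsiadm -m session
# log out of the session that belonged to the removed direct LUN
iscsiadm -m node -T iqn.2001-05.com.equallogic:example-lun -p 10.0.0.10:3260 -u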

Regards,
Maor


On 02/26/2014 02:41 PM, Boyan Tabakov wrote:
 Hello,
 
 I have ovirt 3.3.2 running with FC19 nodes. I have several virtual
 machines that use directly attached iSCSI LUNs. Discovering, attaching
 and using new LUNs works without issues (vdsm needed some patching to
 work with Dell Equallogic, as described here
 https://sites.google.com/a/keele.ac.uk/partlycloudy/ovirt, but that's a
 separate issue). Also live migration works well between hosts and the
 LUNs get properly attached to the migration target host.
 
 However, I don't see any way to disconnect/remove LUNs that are no
 longer needed (e.g. VM is removed). What is the recommended way to
 remove old LUNs, so that the underlying iSCSI sessions are disconnected?
 Especially if a VM has been migrated between hosts, it leaves the LUNs
 connected on multiple nodes.
 
 Thank you in advance!
 
 Best regards,
 Boyan Tabakov
 
 
 
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] SD Disk's Logical Volume not visible/activated on some nodes

2014-03-04 Thread Nir Soffer
- Original Message -
 From: Nir Soffer nsof...@redhat.com
 To: Boyan Tabakov bl...@alslayer.net
 Cc: users@ovirt.org, Zdenek Kabelac zkabe...@redhat.com
 Sent: Monday, March 3, 2014 9:39:47 PM
 Subject: Re: [Users] SD Disk's Logical Volume not visible/activated on some 
 nodes
 
 Hi Zdenek, can you look into this strange incident?
 
 When user creates a disk on one host (create a new lv), the lv is not seen
 on another host in the cluster.
 
 Calling multipath -r cause the new lv to appear on the other host.
 
 Finally, lvs tells us that vg_mda_free is zero - maybe unrelated, but unusual.
 
 - Original Message -
  From: Boyan Tabakov bl...@alslayer.net
  To: Nir Soffer nsof...@redhat.com
  Cc: users@ovirt.org
  Sent: Monday, March 3, 2014 9:51:05 AM
  Subject: Re: [Users] SD Disk's Logical Volume not visible/activated on some
  nodes
   Consequently, when creating/booting
   a VM with the said disk attached, the VM fails to start on host2,
   because host2 can't see the LV. Similarly, if the VM is started on
   host1, it fails to migrate to host2. Extract from host2 log is in
   the
   end. The LV in question is 6b35673e-7062-4716-a6c8-d5bf72fe3280.
  
   As far as I could quickly track in the vdsm code, there is only a call to
   lvs
   and not to lvscan or lvchange, so the host2 LVM doesn't fully
   refresh.
   
   lvs should see any change on the shared storage.
   
   The only workaround so far has been to restart VDSM on host2, which
   makes it refresh all LVM data properly.
   
   When vdsm starts, it calls multipath -r, which ensure that we see all
   physical volumes.
   
  
   When is host2 supposed to pick up any newly created LVs in the SD
   VG?
   Any suggestions where the problem might be?
  
   When you create a new lv on the shared storage, the new lv should be
   visible on the other host. Lets start by verifying that you do see
   the new lv after a disk was created.
  
   Try this:
  
   1. Create a new disk, and check the disk uuid in the engine ui
   2. On another machine, run this command:
  
   lvs -o vg_name,lv_name,tags
  
   You can identify the new lv using tags, which should contain the new
   disk
   uuid.
  
   If you don't see the new lv from the other host, please provide
   /var/log/messages
   and /var/log/sanlock.log.
  
   Just tried that. The disk is not visible on the non-SPM node.
  
   This means that storage is not accessible from this host.
  
   Generally, the storage seems accessible ok. For example, if I restart
   the vdsmd, all volumes get picked up correctly (become visible in lvs
   output and VMs can be started with them).
   
    Lets repeat this test, but now, if you do not see the new lv, please
   run:
   
   multipath -r
   
   And report the results.
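Putting the steps from this thread together, the check on the second host is
(the disk uuid comes from the engine UI and is a placeholder here):

# should list the new lv, with the disk uuid among its tags
lvs -o vg_name,lv_name,tags | grep <disk-uuid>
# if it is missing, rescan multipath devices and check again
multipath -r
lvs -o vg_name,lv_name,tags | grep <disk-uuid>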
   
  
  Running multipath -r helped and the disk was properly picked up by the
  second host.
  
  Is running multipath -r safe while host is not in maintenance mode?
 
 It should be safe; vdsm uses it in some cases.
 
  If yes, as a temporary workaround I can patch vdsmd to run multipath -r
  when e.g. monitoring the storage domain.
 
 I suggested running multipath as debugging aid; normally this is not needed.
 
 You should see lv on the shared storage without running multipath.
 
 Zdenek, can you explain this?
 
   One warning that I keep seeing in vdsm logs on both nodes is this:
  
   Thread-1617881::WARNING::2014-02-24
   16:57:50,627::sp::1553::Storage.StoragePool::(getInfo) VG
   3307f6fa-dd58-43db-ab23-b1fb299006c7's metadata size exceeded
critical size: mdasize=134217728 mdafree=0
   
   Can you share the output of the command bellow?
   
   lvs -o
   
   uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name
  
  Here's the output for both hosts.
  
  host1:
  [root@host1 ~]# lvs -o
  uuid,name,attr,size,vg_free,vg_extent_size,vg_extent_count,vg_free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count
LV UUIDLV
Attr  LSize   VFree   Ext #Ext  Free  LV Tags
  
  VMdaSize  VMdaFree  #LV #PV
jGEpVm-oPW8-XyxI-l2yi-YF4X-qteQ-dm8SqL
  3d362bf2-20f4-438d-9ba9-486bd2e8cedf -wi-ao---   2.00g 114.62g 128.00m
  1596   917
  IU_0227da98-34b2-4b0c-b083-d42e7b760036,MD_5,PU_f4231952-76c5-4764-9c8b-ac73492ac465
 128.00m0   13   2
 
 This looks wrong - your vg_mda_free is zero - as vdsm complains.
 
 Zdenek, how can we debug this further?

I see the same issue in Fedora 19.

Can you share with us the output of:

cat /etc/redhat-release
uname -a
lvm version

Nir
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] [ANN] oVirt 3.3.4 release

2014-03-04 Thread Sandro Bonazzola
The oVirt development team is pleased to announce the general
availability of oVirt 3.3.4 as of March 4th 2014. This release
solidifies oVirt as a leading KVM management application and open
source alternative to VMware vSphere.

oVirt is available now for Fedora 19 and Red Hat Enterprise Linux 6.5
(or similar).

This release of oVirt includes numerous bug fixes.
See the release notes [1] for a list of the new features and bugs fixed.

The existing repository ovirt-stable has been updated for delivering this
release without the need of enabling any other repository.

A new oVirt Node build is also available [2].

[1] http://www.ovirt.org/OVirt_3.3.4_release_notes
[2] 
http://resources.ovirt.org/releases/3.3.4/iso/ovirt-node-iso-3.0.4-1.0.201401291204.vdsm33.el6.iso

-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] View the Console using SPICE from Windows client

2014-03-04 Thread Frantisek Kobzik
Hello,

 - which invocation method are you using? I suppose you tried 'Native' since 
you use Chrome. Does the browser offer to download the 'console.vv' file?
 - what happens when you click console?
 - have you tried VNC as well?
 - what does the engine log say? Was the ticket set successfully? (A quick way to check is sketched below.)
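Rough ways to check the ticket and the downloaded console.vv file (the log path
is the standard one; the exact log message text is an assumption):

# on the engine machine, look for the console ticket being set for the VM
grep -i "SetVmTicket" /var/log/ovirt-engine/engine.log | tail
# if a console.vv file was downloaded, it can also be opened manually
remote-viewer console.vv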

Cheers,
Franta.


- Original Message -
From: Udaya Kiran P ukiran...@yahoo.in
To: users users@ovirt.org, Alon Bar-Lev alo...@redhat.com
Sent: Tuesday, March 4, 2014 11:51:24 AM
Subject: [Users] View the Console using SPICE from Windows client

Hi All, 

I have successfully launched a VM on one of the Host (FC19 Host). Console 
option for the VM is set to SPICE. However, I am not able to see the console 
after clicking on the console button in oVirt Engine. 

I am accessing the oVirt Engine from a Windows 7 machine. I have tried with 
Chrome, Firefox and IE. 

Please suggest. 

Thank you. 

Regards, 
Udaya Kiran 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] [QE] oVirt 3.3.5 status

2014-03-04 Thread Sandro Bonazzola
Hi,
  now that 3.3.4 has been released, it's time to look at 3.3.5.
Here is the tentative timeline:

General availability: 2014-04-09
RC Build: 2014-04-02

Nightly builds are available enabling the oVirt 3.3 snapshots repositories:

# yum-config-manager --enable ovirt-3.3-snapshot
# yum-config-manager --enable ovirt-3.3-snapshot-static


As you can see, there won't be any beta release before the RC for 3.3.z, and the
same will be true for 3.4.z.
We now have nightly builds for the stable branches as well, so you can test them
whenever you want to. If you're going to test, please add yourself
as a tester on [3].


Note to maintainers:
* For Release candidate builds, we'll send to all maintainers
a reminder the week before the build on Thursday morning (UTC timezone).
Packages that won't be ready before the announced compose time won't be
added to the release candidate.
Please remember to build your packages the day before repository
composition if you want it in.

* Release notes must be filled [1]

* A tracker bug has been created [2] and shows 1 bug blocking the release:
Whiteboard  Bug ID   Summary
virt        1071997  VM is not locked on run once

* Please add bugs to the tracker if you think that 3.3.5 should not be released 
without them fixed.

[1] http://www.ovirt.org/OVirt_3.3.5_release_notes
[2] http://bugzilla.redhat.com/1071867
[3] http://www.ovirt.org/Testing/oVirt_3.3.5_testing


Thanks,


-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Data Center Non Responsive / Contending

2014-03-04 Thread Meital Bourvine
StorageDomainDoesNotExist: Storage domain does not exist: 
(u'1810e5eb-9eb6-4797-ac50-8023a939f312',)

What's the output of:
lvs
vdsClient -s 0 getStorageDomainsList

If it exists in the list, please run:
vdsClient -s 0 getStorageDomainInfo 1810e5eb-9eb6-4797-ac50-8023a939f312



- Original Message -
 From: Giorgio Bersano giorgio.bers...@gmail.com
 To: Meital Bourvine mbour...@redhat.com
 Cc: users@ovirt.org Users@ovirt.org
 Sent: Tuesday, March 4, 2014 3:18:24 PM
 Subject: Re: [Users] Data Center Non Responsive / Contending
 
 2014-03-04 12:18 GMT+01:00 Meital Bourvine mbour...@redhat.com:
  Hi Giorgio,
 
  Can you please attach vdsm.log from both hosts, and engine.log?
 
 Maybe try moving both hosts to maintenance, confirming the hosts have been
 rebooted, and activating them again. See if that helps.
 
 
 Hi Meital,
 tried but no positive outcome.
 
 I'm attaching logs regarding these last operations, as I'm beginning to
 think the problem is in the unreachable Export Domain. Also there is
 an emended copy of the DB taken last night.
 
 If not enough I'll go to look for yesterday's relevant logs.
 
 Thank you for taking care,
 Giorgio.
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Recommended way to disconnect and remove iSCSI direct LUNs

2014-03-04 Thread Boyan Tabakov
Hi Maor,

Thanks for your explanation! I suppose we'll disconnect manually, if we 
decide to use direct LUNs instead of Storage domain volumes (some 
issues with SD volumes, as explained in another mail thread of mine).

Best regards,
Boyan

On Tue Mar  4 14:20:38 2014, Maor Lipchuk wrote:
 Hi Boyan,


 Generally we don't disconnect external LUN disks when we remove them
 from oVirt management.
 You can disconnect them manually from the host, or use a restart.

 IIRC, one reason for that is that we might have Storage Domains which
 use the same target.
 Another reason is that we keep those sessions so it will be easier to
 establish a connection when reusing the target.

 Regards,
 Maor


 On 02/26/2014 02:41 PM, Boyan Tabakov wrote:
 Hello,

 I have ovirt 3.3.2 running with FC19 nodes. I have several virtual
 machines that use directly attached iSCSI LUNs. Discovering, attaching
 and using new LUNs works without issues (vdsm needed some patching to
 work with Dell Equallogic, as described here
 https://sites.google.com/a/keele.ac.uk/partlycloudy/ovirt, but that's a
 separate issue). Also live migration works well between hosts and the
 LUNs get properly attached to the migration target host.

 However, I don't see any way to disconnect/remove LUNs that are no
 longer needed (e.g. VM is removed). What is the recommended way to
 remove old LUNs, so that the underlying iSCSI sessions are disconnected?
 Especially if a VM has been migrated between hosts, it leaves the LUNs
 connected on multiple nodes.

 Thank you in advance!

 Best regards,
 Boyan Tabakov



 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users







___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] SD Disk's Logical Volume not visible/activated on some nodes

2014-03-04 Thread Boyan Tabakov
On Tue Mar  4 14:46:33 2014, Nir Soffer wrote:
 - Original Message -
 From: Nir Soffer nsof...@redhat.com
 To: Boyan Tabakov bl...@alslayer.net
 Cc: users@ovirt.org, Zdenek Kabelac zkabe...@redhat.com
 Sent: Monday, March 3, 2014 9:39:47 PM
 Subject: Re: [Users] SD Disk's Logical Volume not visible/activated on some 
 nodes

 Hi Zdenek, can you look into this strange incident?

 When user creates a disk on one host (create a new lv), the lv is not seen
 on another host in the cluster.

 Calling multipath -r cause the new lv to appear on the other host.

 Finally, lvs tells us that vg_mda_free is zero - maybe unrelated, but unusual.

 - Original Message -
 From: Boyan Tabakov bl...@alslayer.net
 To: Nir Soffer nsof...@redhat.com
 Cc: users@ovirt.org
 Sent: Monday, March 3, 2014 9:51:05 AM
 Subject: Re: [Users] SD Disk's Logical Volume not visible/activated on some
 nodes
 Consequently, when creating/booting
 a VM with the said disk attached, the VM fails to start on host2,
 because host2 can't see the LV. Similarly, if the VM is started on
 host1, it fails to migrate to host2. Extract from host2 log is in
 the
 end. The LV in question is 6b35673e-7062-4716-a6c8-d5bf72fe3280.

 As far as I could quickly track in the vdsm code, there is only a call to
 lvs
 and not to lvscan or lvchange, so the host2 LVM doesn't fully
 refresh.

 lvs should see any change on the shared storage.

 The only workaround so far has been to restart VDSM on host2, which
 makes it refresh all LVM data properly.

 When vdsm starts, it calls multipath -r, which ensure that we see all
 physical volumes.


 When is host2 supposed to pick up any newly created LVs in the SD
 VG?
 Any suggestions where the problem might be?

 When you create a new lv on the shared storage, the new lv should be
 visible on the other host. Lets start by verifying that you do see
 the new lv after a disk was created.

 Try this:

 1. Create a new disk, and check the disk uuid in the engine ui
 2. On another machine, run this command:

 lvs -o vg_name,lv_name,tags

 You can identify the new lv using tags, which should contain the new
 disk
 uuid.

 If you don't see the new lv from the other host, please provide
 /var/log/messages
 and /var/log/sanlock.log.

 Just tried that. The disk is not visible on the non-SPM node.

 This means that storage is not accessible from this host.

 Generally, the storage seems accessible ok. For example, if I restart
 the vdsmd, all volumes get picked up correctly (become visible in lvs
 output and VMs can be started with them).

 Lets repeat this test, but now, if you do not see the new lv, please
 run:

 multipath -r

 And report the results.


 Running multipath -r helped and the disk was properly picked up by the
 second host.

 Is running multipath -r safe while host is not in maintenance mode?

 It should be safe; vdsm uses it in some cases.

 If yes, as a temporary workaround I can patch vdsmd to run multipath -r
 when e.g. monitoring the storage domain.

 I suggested running multipath as debugging aid; normally this is not needed.

 You should see lv on the shared storage without running multipath.

 Zdenek, can you explain this?

 One warning that I keep seeing in vdsm logs on both nodes is this:

 Thread-1617881::WARNING::2014-02-24
 16:57:50,627::sp::1553::Storage.StoragePool::(getInfo) VG
 3307f6fa-dd58-43db-ab23-b1fb299006c7's metadata size exceeded
  critical size: mdasize=134217728 mdafree=0

 Can you share the output of the command bellow?

 lvs -o
 
 uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name

 Here's the output for both hosts.

 host1:
 [root@host1 ~]# lvs -o
 uuid,name,attr,size,vg_free,vg_extent_size,vg_extent_count,vg_free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count
   LV UUIDLV
   Attr  LSize   VFree   Ext #Ext  Free  LV Tags

 VMdaSize  VMdaFree  #LV #PV
   jGEpVm-oPW8-XyxI-l2yi-YF4X-qteQ-dm8SqL
 3d362bf2-20f4-438d-9ba9-486bd2e8cedf -wi-ao---   2.00g 114.62g 128.00m
 1596   917
 IU_0227da98-34b2-4b0c-b083-d42e7b760036,MD_5,PU_f4231952-76c5-4764-9c8b-ac73492ac465
128.00m0   13   2

 This looks wrong - your vg_mda_free is zero - as vdsm complains.

 Zdenek, how can we debug this further?

 I see the same issue in Fedora 19.

 Can you share with us the output of:

 cat /etc/redhat-release
 uname -a
 lvm version

 Nir

$ cat /etc/redhat-release
Fedora release 19 (Schrödinger’s Cat)
$ uname -a
Linux blizzard.mgmt.futurice.com 3.12.6-200.fc19.x86_64.debug #1 SMP 
Mon Dec 23 16:24:32 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
$ lvm version
  LVM version: 2.02.98(2) (2012-10-15)
  Library version: 1.02.77 (2012-10-15)
  Driver version:  4.26.0



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] Increase core or socket for vCPU

2014-03-04 Thread Tejesh M
Hi,

I have a basic doubt: while increasing vCPUs for a VM in RHEV-M, do we need
to increase cores or sockets? For instance, if someone asks for 2 vCPUs, then
should I make it 2 sockets x 1 core or 1 socket x 2 cores?

I do understand that the choice also depends on the application, but I'm
asking in general.

Sorry, if its not relevant to this forum.

Thanks
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Problem with DWH installation

2014-03-04 Thread Nicolas Ecarnot

On 11/11/2013 14:10, Yaniv Dary wrote:



- Original Message -

From: Michael Wagenknecht wagenkne...@fuh-e.de
To: Alex Lourie alou...@redhat.com
Cc: users users@ovirt.org
Sent: Wednesday, November 6, 2013 9:45:24 AM
Subject: Re: [Users] Problem with DWH installation

Hi Alex,
I can't find ovirt-engine-dwh-3.3 in the ovirt-release-el6 repo.
Can I only install it from the git tree?


There are no el6 packages for dwh yet. This is planned to be introduced in 
oVirt 3.4.


Hi,

Congratulations to everyone involved in the 3.4 release.

Could someone tell us whether the packages discussed above are now
available in el6 for 3.4?


Thank you.

--
Nicolas Ecarnot


You can install it with the packaging maven profile for now.



Yaniv



Michael


Am 05.11.2013 16:30, schrieb Alex Lourie:

Hi Michael

The dwh version you are trying to install is unfortunately no longer
compatible with ovirt-engine-3.3. You will need ovirt-engine-dwh-3.3 for
it to work.

Alex.



--
Mit freundlichen Grüßen

Michael Wagenknecht
FuH Entwicklungsgesellschaft mbH
Geschäftsführer Carola Fornoff
HRB Freiburg 701203, UID DE255007372
Elsässer Str. 18, D-79346 Endingen
Telefon +49 - 7642 - 92866 - 0

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users




--
Nicolas Ecarnot
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Data Center Non Responsive / Contending

2014-03-04 Thread Giorgio Bersano
2014-03-04 14:48 GMT+01:00 Meital Bourvine mbour...@redhat.com:
 StorageDomainDoesNotExist: Storage domain does not exist: 
 (u'1810e5eb-9eb6-4797-ac50-8023a939f312',)

 What's the output of:
 lvs
 vdsClient -s 0 getStorageDomainsList

 If it exists in the list, please run:
 vdsClient -s 0 getStorageDomainInfo 1810e5eb-9eb6-4797-ac50-8023a939f312


I'm attaching a compressed archive to avoid mangling by googlemail client.

Indeed the NFS storage with that id is not in the list of available
storage as it is brought up by a VM that has to be run in this very
same cluster. Obviously it isn't running at the moment.

You find this in the DB:

COPY storage_domain_static (id, storage, storage_name,
storage_domain_type, storage_type, storage_domain_format_type,
_create_date, _update_date, recoverable, last_time_used_as_master,
storage_description, storage_comment) FROM stdin;
...
1810e5eb-9eb6-4797-ac50-8023a939f312  11d4972d-f227-49ed-b997-f33cf4b2aa26  nfs02EXPORT  3  1  0  2014-02-28 18:11:23.17092+01  \N  t  0  \N  \N
...

Also, disks for that VM are carved from the Master Data Domain that is
not available ATM.

To put it in other words: I thought that the availability of an export domain
wasn't critical for bringing a Data Center up. Am I wrong?

Thanks,
Giorgio.


lvs+vdsclient.txt.gz
Description: GNU Zip compressed data
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Data Center Non Responsive / Contending

2014-03-04 Thread Meital Bourvine
Master data domain must be reachable in order for the DC to be up.
Export domain shouldn't affect the dc status.
Are you sure that you've created the export domain as an export domain, and not 
as a regular nfs?

- Original Message -
 From: Giorgio Bersano giorgio.bers...@gmail.com
 To: Meital Bourvine mbour...@redhat.com
 Cc: users@ovirt.org Users@ovirt.org
 Sent: Tuesday, March 4, 2014 4:16:19 PM
 Subject: Re: [Users] Data Center Non Responsive / Contending
 
 2014-03-04 14:48 GMT+01:00 Meital Bourvine mbour...@redhat.com:
  StorageDomainDoesNotExist: Storage domain does not exist:
  (u'1810e5eb-9eb6-4797-ac50-8023a939f312',)
 
  What's the output of:
  lvs
  vdsClient -s 0 getStorageDomainsList
 
  If it exists in the list, please run:
  vdsClient -s 0 getStorageDomainInfo 1810e5eb-9eb6-4797-ac50-8023a939f312
 
 
 I'm attaching a compressed archive to avoid mangling by googlemail client.
 
 Indeed the NFS storage with that id is not in the list of available
 storage as it is brought up by a VM that has to be run in this very
 same cluster. Obviously it isn't running at the moment.
 
 You find this in the DB:
 
 COPY storage_domain_static (id, storage, storage_name,
 storage_domain_type, storage_type, storage_domain_format_type,
 _create_date, _update_date, recoverable, last_time_used_as_master,
 storage_description, storage_comment) FROM stdin;
 ...
 1810e5eb-9eb6-4797-ac50-8023a939f312
 11d4972d-f227-49ed-b997-f33cf4b2aa26nfs02EXPORT 3   1
  0   2014-02-28 18:11:23.17092+01\N  t   0   \N
   \N
 ...
 
 Also, disks for that VM are carved from the Master Data Domain that is
 not available ATM.
 
 To say in other words: I thought that availability of an export domain
 wasn't critical to switch on a Data Center. Am I wrong?
 
 Thanks,
 Giorgio.
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] [QE] oVirt 3.4.0 status

2014-03-04 Thread Sandro Bonazzola
Hi,
oVirt 3.4.0 RC has been released and is currently on QA.
While we're preparing for this week's Test Day on 2014-03-06,
a few blockers have been opened.

The bug tracker [1] shows the following bugs blocking the release:

Whiteboard   Bug ID   Status    Summary
infra        1070742  POST      [database] support postgres user length within schema version
infra        1071536  POST      Notifier doesn't send any notifications via email
integration  1067058  POST      [database] old psycopg2 does not accept unicode string as port name
integration  1069193  POST      Release maven artifacts with correct version numbers
integration  1072307  POST      remote database cannot be used
virt         1069201  ASSIGNED  [REST]: Missing domain field on VM\Template object.
virt         1071997  POST      VM is not locked on run once

All remaining bugs have been re-targeted to 3.4.1.

Maintainers / Assignee:
- Please remember to rebuild your packages before 2014-03-11 09:00 UTC if you 
want them to be included in 3.4.0 GA.
- Please add the bugs to the tracker if you think that 3.4.0 should not be 
released without them fixed.
- Please provide an ETA on blocker bugs and fix them as soon as possible
- Please fill release notes, the page has been created here [2]
- Please update http://www.ovirt.org/OVirt_3.4_TestDay before 2014-02-19

Be prepared for upcoming oVirt 3.4.0 Test Day on 2014-03-06!

Thanks to all people already testing 3.4.0 RC!

[1] https://bugzilla.redhat.com/1024889
[2] http://www.ovirt.org/OVirt_3.4.0_release_notes

-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Data Center Non Responsive / Contending

2014-03-04 Thread Giorgio Bersano
2014-03-04 15:23 GMT+01:00 Meital Bourvine mbour...@redhat.com:
 Master data domain must be reachable in order for the DC to be up.
 Export domain shouldn't affect the dc status.
 Are you sure that you've created the export domain as an export domain, and 
 not as a regular nfs?


Yes, I am.

Don't know how to extract this info from DB, but in webadmin, in the
storage list, I have these info:

Domain Name: nfs02EXPORT
Domain Type: Export
Storage Type: NFS
Format: V1
Cross Data-Center Status: Inactive
Total Space: [N/A]
Free Space: [N/A]

ATM my only Data Domain is based on iSCSI, no NFS.
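For reference, the same information can also be pulled straight from the engine DB. A minimal sketch, assuming the default 'engine' database on the engine host and the column names from the COPY dump quoted above (adjust the DB user/name to your setup):

# hypothetical check; the id is the one from the dump above
psql -U postgres engine -c "SELECT storage_name, storage_domain_type, storage_type FROM storage_domain_static WHERE id = '1810e5eb-9eb6-4797-ac50-8023a939f312';"
# in that dump this row shows storage_domain_type = 3, i.e. the export domain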





 - Original Message -
 From: Giorgio Bersano giorgio.bers...@gmail.com
 To: Meital Bourvine mbour...@redhat.com
 Cc: users@ovirt.org Users@ovirt.org
 Sent: Tuesday, March 4, 2014 4:16:19 PM
 Subject: Re: [Users] Data Center Non Responsive / Contending

 2014-03-04 14:48 GMT+01:00 Meital Bourvine mbour...@redhat.com:
  StorageDomainDoesNotExist: Storage domain does not exist:
  (u'1810e5eb-9eb6-4797-ac50-8023a939f312',)
 
  What's the output of:
  lvs
  vdsClient -s 0 getStorageDomainsList
 
  If it exists in the list, please run:
  vdsClient -s 0 getStorageDomainInfo 1810e5eb-9eb6-4797-ac50-8023a939f312
 

 I'm attaching a compressed archive to avoid mangling by googlemail client.

 Indeed the NFS storage with that id is not in the list of available
 storage as it is brought up by a VM that has to be run in this very
 same cluster. Obviously it isn't running at the moment.

 You find this in the DB:

 COPY storage_domain_static (id, storage, storage_name,
 storage_domain_type, storage_type, storage_domain_format_type,
 _create_date, _update_date, recoverable, last_time_used_as_master,
 storage_description, storage_comment) FROM stdin;
 ...
 1810e5eb-9eb6-4797-ac50-8023a939f312
 11d4972d-f227-49ed-b997-f33cf4b2aa26nfs02EXPORT 3   1
  0   2014-02-28 18:11:23.17092+01\N  t   0   \N
   \N
 ...

 Also, disks for that VM are carved from the Master Data Domain that is
 not available ATM.

 To say in other words: I thought that availability of an export domain
 wasn't critical to switch on a Data Center. Am I wrong?

 Thanks,
 Giorgio.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Data Center Non Responsive / Contending

2014-03-04 Thread Meital Bourvine
Ok, and is the iscsi functional at the moment?

- Original Message -
 From: Giorgio Bersano giorgio.bers...@gmail.com
 To: Meital Bourvine mbour...@redhat.com
 Cc: users@ovirt.org Users@ovirt.org
 Sent: Tuesday, March 4, 2014 4:35:07 PM
 Subject: Re: [Users] Data Center Non Responsive / Contending
 
 2014-03-04 15:23 GMT+01:00 Meital Bourvine mbour...@redhat.com:
  Master data domain must be reachable in order for the DC to be up.
  Export domain shouldn't affect the dc status.
  Are you sure that you've created the export domain as an export domain, and
  not as a regular nfs?
 
 
 Yes, I am.
 
 Don't know how to extract this info from DB, but in webadmin, in the
 storage list, I have these info:
 
 Domain Name: nfs02EXPORT
 Domain Type: Export
 Storage Type: NFS
 Format: V1
 Cross Data-Center Status: Inactive
 Total Space: [N/A]
 Free Space: [N/A]
 
 ATM my only Data Domain is based on iSCSI, no NFS.
 
 
 
 
 
  - Original Message -
  From: Giorgio Bersano giorgio.bers...@gmail.com
  To: Meital Bourvine mbour...@redhat.com
  Cc: users@ovirt.org Users@ovirt.org
  Sent: Tuesday, March 4, 2014 4:16:19 PM
  Subject: Re: [Users] Data Center Non Responsive / Contending
 
  2014-03-04 14:48 GMT+01:00 Meital Bourvine mbour...@redhat.com:
   StorageDomainDoesNotExist: Storage domain does not exist:
   (u'1810e5eb-9eb6-4797-ac50-8023a939f312',)
  
   What's the output of:
   lvs
   vdsClient -s 0 getStorageDomainsList
  
   If it exists in the list, please run:
   vdsClient -s 0 getStorageDomainInfo 1810e5eb-9eb6-4797-ac50-8023a939f312
  
 
  I'm attaching a compressed archive to avoid mangling by googlemail client.
 
  Indeed the NFS storage with that id is not in the list of available
  storage as it is brought up by a VM that has to be run in this very
  same cluster. Obviously it isn't running at the moment.
 
  You find this in the DB:
 
  COPY storage_domain_static (id, storage, storage_name,
  storage_domain_type, storage_type, storage_domain_format_type,
  _create_date, _update_date, recoverable, last_time_used_as_master,
  storage_description, storage_comment) FROM stdin;
  ...
  1810e5eb-9eb6-4797-ac50-8023a939f312
  11d4972d-f227-49ed-b997-f33cf4b2aa26nfs02EXPORT 3   1
   0   2014-02-28 18:11:23.17092+01\N  t   0   \N
\N
  ...
 
  Also, disks for that VM are carved from the Master Data Domain that is
  not available ATM.
 
  To say in other words: I thought that availability of an export domain
  wasn't critical to switch on a Data Center. Am I wrong?
 
  Thanks,
  Giorgio.
 
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Data Center Non Responsive / Contending

2014-03-04 Thread Nicolas Ecarnot

Off-topic but...



On 04/03/2014 15:23, Meital Bourvine wrote:

Master data domain must be reachable in order for the DC to be up.
Export domain shouldn't affect the dc status.


Last month we experienced a planned and controlled complete electrical 
shutdown of our whole datacenter.
When switching everything back on, we witnessed that no matter how long we
waited or what actions we tried, our oVirt 3.3 wasn't able to start (hosts
were responsive but could not be activated) as long as our NFS server
(used only for the export domain) wasn't up and running.


I did not find the time to report it, as I thought I was the only one
seeing this, but today I see I'm not.


--
Nicolas Ecarnot
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Data Center Non Responsive / Contending

2014-03-04 Thread Liron Aravot
Hi Giorgio,
Apparently the issue is caused by the lack of connectivity to the export
domain, and then we fail on spmStart - that's obviously a bug that shouldn't
happen.
Can you open a bug for the issue?
In the meanwhile, as it seems to still exist, it seems to me that the way to
solve it would be either to fix the connectivity issue between vdsm and the
storage domain or to downgrade your vdsm version to one from before this
issue was introduced.

6a519e95-62ef-445b-9a98-f05c81592c85::WARNING::2014-03-04 13:05:31,489::lvm::377::Storage.LVM::(_reloadvgs) lvm vgs failed: 5 [] ['  Volume group 1810e5eb-9eb6-4797-ac50-8023a939f312 not found', '  Skipping volume group 1810e5eb-9eb6-4797-ac50-8023a939f312']
6a519e95-62ef-445b-9a98-f05c81592c85::ERROR::2014-03-04 13:05:31,499::sdc::143::Storage.StorageDomainCache::(_findDomain) domain 1810e5eb-9eb6-4797-ac50-8023a939f312 not found
Traceback (most recent call last):
  File /usr/share/vdsm/storage/sdc.py, line 141, in _findDomain
dom = findMethod(sdUUID)
  File /usr/share/vdsm/storage/sdc.py, line 171, in _findUnfetchedDomain
raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist: 
(u'1810e5eb-9eb6-4797-ac50-8023a939f312',)
6a519e95-62ef-445b-9a98-f05c81592c85::ERROR::2014-03-04 
13:05:31,500::sp::329::Storage.StoragePool::(startSpm) Unexpected error
Traceback (most recent call last):
  File /usr/share/vdsm/storage/sp.py, line 296, in startSpm
self._updateDomainsRole()
  File /usr/share/vdsm/storage/securable.py, line 75, in wrapper
return method(self, *args, **kwargs)
  File /usr/share/vdsm/storage/sp.py, line 205, in _updateDomainsRole
domain = sdCache.produce(sdUUID)
  File /usr/share/vdsm/storage/sdc.py, line 98, in produce
domain.getRealDomain()
  File /usr/share/vdsm/storage/sdc.py, line 52, in getRealDomain
return self._cache._realProduce(self._sdUUID)
  File /usr/share/vdsm/storage/sdc.py, line 122, in _realProduce
domain = self._findDomain(sdUUID)
  File /usr/share/vdsm/storage/sdc.py, line 141, in _findDomain
dom = findMethod(sdUUID)
  File /usr/share/vdsm/storage/sdc.py, line 171, in _findUnfetchedDomain
raise se.StorageDomainDoesNotExist(sdUUID)




- Original Message -
 From: Giorgio Bersano giorgio.bers...@gmail.com
 To: Meital Bourvine mbour...@redhat.com
 Cc: users@ovirt.org Users@ovirt.org
 Sent: Tuesday, March 4, 2014 4:35:07 PM
 Subject: Re: [Users] Data Center Non Responsive / Contending
 
 2014-03-04 15:23 GMT+01:00 Meital Bourvine mbour...@redhat.com:
  Master data domain must be reachable in order for the DC to be up.
  Export domain shouldn't affect the dc status.
  Are you sure that you've created the export domain as an export domain, and
  not as a regular nfs?
 
 
 Yes, I am.
 
 Don't know how to extract this info from DB, but in webadmin, in the
 storage list, I have these info:
 
 Domain Name: nfs02EXPORT
 Domain Type: Export
 Storage Type: NFS
 Format: V1
 Cross Data-Center Status: Inactive
 Total Space: [N/A]
 Free Space: [N/A]
 
 ATM my only Data Domain is based on iSCSI, no NFS.
 
 
 
 
 
  - Original Message -
  From: Giorgio Bersano giorgio.bers...@gmail.com
  To: Meital Bourvine mbour...@redhat.com
  Cc: users@ovirt.org Users@ovirt.org
  Sent: Tuesday, March 4, 2014 4:16:19 PM
  Subject: Re: [Users] Data Center Non Responsive / Contending
 
  2014-03-04 14:48 GMT+01:00 Meital Bourvine mbour...@redhat.com:
   StorageDomainDoesNotExist: Storage domain does not exist:
   (u'1810e5eb-9eb6-4797-ac50-8023a939f312',)
  
   What's the output of:
   lvs
   vdsClient -s 0 getStorageDomainsList
  
   If it exists in the list, please run:
   vdsClient -s 0 getStorageDomainInfo 1810e5eb-9eb6-4797-ac50-8023a939f312
  
 
  I'm attaching a compressed archive to avoid mangling by googlemail client.
 
  Indeed the NFS storage with that id is not in the list of available
  storage as it is brought up by a VM that has to be run in this very
  same cluster. Obviously it isn't running at the moment.
 
  You find this in the DB:
 
  COPY storage_domain_static (id, storage, storage_name,
  storage_domain_type, storage_type, storage_domain_format_type,
  _create_date, _update_date, recoverable, last_time_used_as_master,
  storage_description, storage_comment) FROM stdin;
  ...
  1810e5eb-9eb6-4797-ac50-8023a939f312
  11d4972d-f227-49ed-b997-f33cf4b2aa26nfs02EXPORT 3   1
   0   2014-02-28 18:11:23.17092+01\N  t   0   \N
\N
  ...
 
  Also, disks for that VM are carved from the Master Data Domain that is
  not available ATM.
 
  To say in other words: I thought that availability of an export domain
  wasn't critical to switch on a Data Center. Am I wrong?
 
  Thanks,
  Giorgio.
 
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 
___
Users mailing list
Users@ovirt.org

Re: [Users] Data Center Non Responsive / Contending

2014-03-04 Thread Giorgio Bersano
2014-03-04 15:38 GMT+01:00 Meital Bourvine mbour...@redhat.com:
 Ok, and is the iscsi functional at the moment?


I think so.
For example I see in the DB that the id of my Master Data Domain ,
dt02clu6070,  is  a689cb30-743e-4261-bfd1-b8b194dc85db then

[root@vbox70 ~]# lvs a689cb30-743e-4261-bfd1-b8b194dc85db
  LV   VG
 Attr   LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  4a1be3d8-ac7d-46cf-ae1c-ba154bc9a400
a689cb30-743e-4261-bfd1-b8b194dc85db -wi---   3,62g
  5c8bb733-4b0c-43a9-9471-0fde3d159fb2
a689cb30-743e-4261-bfd1-b8b194dc85db -wi---  11,00g
  7b617ab1-70c1-42ea-9303-ceffac1da72d
a689cb30-743e-4261-bfd1-b8b194dc85db -wi---   3,88g
  e4b86b91-80ec-4bba-8372-10522046ee6b
a689cb30-743e-4261-bfd1-b8b194dc85db -wi---   9,00g
  ids
a689cb30-743e-4261-bfd1-b8b194dc85db -wi-ao 128,00m
  inbox
a689cb30-743e-4261-bfd1-b8b194dc85db -wi-a- 128,00m
  leases
a689cb30-743e-4261-bfd1-b8b194dc85db -wi-a-   2,00g
  master
a689cb30-743e-4261-bfd1-b8b194dc85db -wi-a-   1,00g
  metadata
a689cb30-743e-4261-bfd1-b8b194dc85db -wi-a- 512,00m
  outbox
a689cb30-743e-4261-bfd1-b8b194dc85db -wi-a- 128,00m

I can read from the LVs that have the LVM Available bit set:

[root@vbox70 ~]# dd if=/dev/a689cb30-743e-4261-bfd1-b8b194dc85db/ids
bs=1M of=/dev/null
128+0 records in
128+0 records out
134217728 bytes (134 MB) copied, 0,0323692 s, 4,1 GB/s

[root@vbox70 ~]# dd if=/dev/a689cb30-743e-4261-bfd1-b8b194dc85db/ids
bs=1M |od -xc |head -20
00020101221000200030200
020   ! 022 002  \0 003  \0  \0  \0  \0  \0  \0 002  \0  \0
0200001
 \0  \0  \0  \0  \0  \0  \0  \0 001  \0  \0  \0  \0  \0  \0  \0
04000010007
001  \0  \0  \0  \0  \0  \0  \0  \a  \0  \0  \0  \0  \0  \0  \0
0603661393862633033
 \0  \0  \0  \0  \0  \0  \0  \0   a   6   8   9   c   b   3   0
100372d33342d6532343136622d64662d31
  -   7   4   3   e   -   4   2   6   1   -   b   f   d   1   -
120386231623439636435386264
  b   8   b   1   9   4   d   c   8   5   d   b  \0  \0  \0  \0
1403638343839663932
 \0  \0  \0  \0  \0  \0  \0  \0   8   6   8   4   f   9   2   9
160612d62372d6638346564622d38302d35
  -   a   7   b   f   -   4   8   d   e   -   b   0   8   5   -
200656363306539353766306364762e6f62
  c   e   0   c   9   e   7   5   0   f   d   c   .   v   b   o
22037782e307270006926de
  x   7   0   .   p   r   i  \0 336 \0  \0  \0  \0  \0  \0
[root@vbox70 ~]#

Obviously I can't read from LVs that aren't available:

[root@vbox70 ~]# dd
if=/dev/a689cb30-743e-4261-bfd1-b8b194dc85db/4a1be3d8-ac7d-46cf-ae1c-ba154bc9a400
bs=1M of=/dev/null
dd: apertura di
`/dev/a689cb30-743e-4261-bfd1-b8b194dc85db/4a1be3d8-ac7d-46cf-ae1c-ba154bc9a400':
No such file or directory
[root@vbox70 ~]#

But those LVs are the VMs' disks and I suppose their availability is
managed by oVirt.



 - Original Message -
 From: Giorgio Bersano giorgio.bers...@gmail.com
 To: Meital Bourvine mbour...@redhat.com
 Cc: users@ovirt.org Users@ovirt.org
 Sent: Tuesday, March 4, 2014 4:35:07 PM
 Subject: Re: [Users] Data Center Non Responsive / Contending

 2014-03-04 15:23 GMT+01:00 Meital Bourvine mbour...@redhat.com:
  Master data domain must be reachable in order for the DC to be up.
  Export domain shouldn't affect the dc status.
  Are you sure that you've created the export domain as an export domain, and
  not as a regular nfs?
 

 Yes, I am.

 Don't know how to extract this info from DB, but in webadmin, in the
 storage list, I have these info:

 Domain Name: nfs02EXPORT
 Domain Type: Export
 Storage Type: NFS
 Format: V1
 Cross Data-Center Status: Inactive
 Total Space: [N/A]
 Free Space: [N/A]

 ATM my only Data Domain is based on iSCSI, no NFS.





  - Original Message -
  From: Giorgio Bersano giorgio.bers...@gmail.com
  To: Meital Bourvine mbour...@redhat.com
  Cc: users@ovirt.org Users@ovirt.org
  Sent: Tuesday, March 4, 2014 4:16:19 PM
  Subject: Re: [Users] Data Center Non Responsive / Contending
 
  2014-03-04 14:48 GMT+01:00 Meital Bourvine mbour...@redhat.com:
   StorageDomainDoesNotExist: Storage domain does not exist:
   (u'1810e5eb-9eb6-4797-ac50-8023a939f312',)
  
   What's the output of:
   lvs
   vdsClient -s 0 getStorageDomainsList
  
   If it exists in the list, please run:
   vdsClient -s 0 getStorageDomainInfo 1810e5eb-9eb6-4797-ac50-8023a939f312
  
 
  I'm attaching a compressed archive to avoid mangling by googlemail client.
 
  Indeed the 

Re: [Users] Data Center Non Responsive / Contending

2014-03-04 Thread Liron Aravot


- Original Message -
 From: Giorgio Bersano giorgio.bers...@gmail.com
 To: Meital Bourvine mbour...@redhat.com
 Cc: users@ovirt.org Users@ovirt.org
 Sent: Tuesday, March 4, 2014 5:06:13 PM
 Subject: Re: [Users] Data Center Non Responsive / Contending
 
 2014-03-04 15:38 GMT+01:00 Meital Bourvine mbour...@redhat.com:
  Ok, and is the iscsi functional at the moment?
 
 
 I think so.
 For example I see in the DB that the id of my Master Data Domain ,
 dt02clu6070,  is  a689cb30-743e-4261-bfd1-b8b194dc85db then
 
 [root@vbox70 ~]# lvs a689cb30-743e-4261-bfd1-b8b194dc85db
   LV   VG
  Attr   LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
   4a1be3d8-ac7d-46cf-ae1c-ba154bc9a400
 a689cb30-743e-4261-bfd1-b8b194dc85db -wi---   3,62g
   5c8bb733-4b0c-43a9-9471-0fde3d159fb2
 a689cb30-743e-4261-bfd1-b8b194dc85db -wi---  11,00g
   7b617ab1-70c1-42ea-9303-ceffac1da72d
 a689cb30-743e-4261-bfd1-b8b194dc85db -wi---   3,88g
   e4b86b91-80ec-4bba-8372-10522046ee6b
 a689cb30-743e-4261-bfd1-b8b194dc85db -wi---   9,00g
   ids
 a689cb30-743e-4261-bfd1-b8b194dc85db -wi-ao 128,00m
   inbox
 a689cb30-743e-4261-bfd1-b8b194dc85db -wi-a- 128,00m
   leases
 a689cb30-743e-4261-bfd1-b8b194dc85db -wi-a-   2,00g
   master
 a689cb30-743e-4261-bfd1-b8b194dc85db -wi-a-   1,00g
   metadata
 a689cb30-743e-4261-bfd1-b8b194dc85db -wi-a- 512,00m
   outbox
 a689cb30-743e-4261-bfd1-b8b194dc85db -wi-a- 128,00m
 
 I can read from the LVs that have the LVM Available bit set:
 
 [root@vbox70 ~]# dd if=/dev/a689cb30-743e-4261-bfd1-b8b194dc85db/ids
 bs=1M of=/dev/null
 128+0 records in
 128+0 records out
 134217728 bytes (134 MB) copied, 0,0323692 s, 4,1 GB/s
 
 [root@vbox70 ~]# dd if=/dev/a689cb30-743e-4261-bfd1-b8b194dc85db/ids
 bs=1M |od -xc |head -20
 00020101221000200030200
 020   ! 022 002  \0 003  \0  \0  \0  \0  \0  \0 002  \0  \0
 0200001
  \0  \0  \0  \0  \0  \0  \0  \0 001  \0  \0  \0  \0  \0  \0  \0
 04000010007
 001  \0  \0  \0  \0  \0  \0  \0  \a  \0  \0  \0  \0  \0  \0  \0
 0603661393862633033
  \0  \0  \0  \0  \0  \0  \0  \0   a   6   8   9   c   b   3   0
 100372d33342d6532343136622d64662d31
   -   7   4   3   e   -   4   2   6   1   -   b   f   d   1   -
 120386231623439636435386264
   b   8   b   1   9   4   d   c   8   5   d   b  \0  \0  \0  \0
 1403638343839663932
  \0  \0  \0  \0  \0  \0  \0  \0   8   6   8   4   f   9   2   9
 160612d62372d6638346564622d38302d35
   -   a   7   b   f   -   4   8   d   e   -   b   0   8   5   -
 200656363306539353766306364762e6f62
   c   e   0   c   9   e   7   5   0   f   d   c   .   v   b   o
 22037782e307270006926de
   x   7   0   .   p   r   i  \0 336 \0  \0  \0  \0  \0  \0
 [root@vbox70 ~]#
 
 Obviously I can't read from LVs that aren't available:
 
 [root@vbox70 ~]# dd
 if=/dev/a689cb30-743e-4261-bfd1-b8b194dc85db/4a1be3d8-ac7d-46cf-ae1c-ba154bc9a400
 bs=1M of=/dev/null
 dd: apertura di
 `/dev/a689cb30-743e-4261-bfd1-b8b194dc85db/4a1be3d8-ac7d-46cf-ae1c-ba154bc9a400':
 No such file or directory
 [root@vbox70 ~]#
 
 But those LV are the VM's disks and I suppose it's availability is
 managed by oVirt
 

Please see my previous mail in this thread; the issue seems to be with the
connectivity to the NFS path, not the iSCSI one.
2014-03-04 13:15:41,167 INFO  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(DefaultQuartzScheduler_Worker-27) [1141851d] Correlation
 ID: null, Call Stack: null, Custom Event ID: -1, Message: Failed to connect 
Host vbox70 to the Storage Domains nfs02EXPORT.
 
 
  - Original Message -
  From: Giorgio Bersano giorgio.bers...@gmail.com
  To: Meital Bourvine mbour...@redhat.com
  Cc: users@ovirt.org Users@ovirt.org
  Sent: Tuesday, March 4, 2014 4:35:07 PM
  Subject: Re: [Users] Data Center Non Responsive / Contending
 
  2014-03-04 15:23 GMT+01:00 Meital Bourvine mbour...@redhat.com:
   Master data domain must be reachable in order for the DC to be up.
   Export domain shouldn't affect the dc status.
   Are you sure that you've created the export domain as an export domain,
   and
   not as a regular nfs?
  
 
  Yes, I am.
 
  Don't know how to extract this info from DB, but in webadmin, in the
  storage list, I have these info:
 
  Domain Name: nfs02EXPORT
  Domain Type: Export
  Storage Type: NFS
  Format: V1
  Cross Data-Center Status: Inactive
  Total Space: [N/A]
  Free Space: [N/A]
 
  ATM my 

Re: [Users] Data Center Non Responsive / Contending

2014-03-04 Thread Liron Aravot
adding federico
- Original Message -
 From: Liron Aravot lara...@redhat.com
 To: Giorgio Bersano giorgio.bers...@gmail.com
 Cc: users@ovirt.org Users@ovirt.org
 Sent: Tuesday, March 4, 2014 5:11:36 PM
 Subject: Re: [Users] Data Center Non Responsive / Contending
 
 
 
 - Original Message -
  From: Giorgio Bersano giorgio.bers...@gmail.com
  To: Meital Bourvine mbour...@redhat.com
  Cc: users@ovirt.org Users@ovirt.org
  Sent: Tuesday, March 4, 2014 5:06:13 PM
  Subject: Re: [Users] Data Center Non Responsive / Contending
  
  2014-03-04 15:38 GMT+01:00 Meital Bourvine mbour...@redhat.com:
   Ok, and is the iscsi functional at the moment?
  
  
  I think so.
  For example I see in the DB that the id of my Master Data Domain ,
  dt02clu6070,  is  a689cb30-743e-4261-bfd1-b8b194dc85db then
  
  [root@vbox70 ~]# lvs a689cb30-743e-4261-bfd1-b8b194dc85db
LV   VG
   Attr   LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
4a1be3d8-ac7d-46cf-ae1c-ba154bc9a400
  a689cb30-743e-4261-bfd1-b8b194dc85db -wi---   3,62g
5c8bb733-4b0c-43a9-9471-0fde3d159fb2
  a689cb30-743e-4261-bfd1-b8b194dc85db -wi---  11,00g
7b617ab1-70c1-42ea-9303-ceffac1da72d
  a689cb30-743e-4261-bfd1-b8b194dc85db -wi---   3,88g
e4b86b91-80ec-4bba-8372-10522046ee6b
  a689cb30-743e-4261-bfd1-b8b194dc85db -wi---   9,00g
ids
  a689cb30-743e-4261-bfd1-b8b194dc85db -wi-ao 128,00m
inbox
  a689cb30-743e-4261-bfd1-b8b194dc85db -wi-a- 128,00m
leases
  a689cb30-743e-4261-bfd1-b8b194dc85db -wi-a-   2,00g
master
  a689cb30-743e-4261-bfd1-b8b194dc85db -wi-a-   1,00g
metadata
  a689cb30-743e-4261-bfd1-b8b194dc85db -wi-a- 512,00m
outbox
  a689cb30-743e-4261-bfd1-b8b194dc85db -wi-a- 128,00m
  
  I can read from the LVs that have the LVM Available bit set:
  
  [root@vbox70 ~]# dd if=/dev/a689cb30-743e-4261-bfd1-b8b194dc85db/ids
  bs=1M of=/dev/null
  128+0 records in
  128+0 records out
  134217728 bytes (134 MB) copied, 0,0323692 s, 4,1 GB/s
  
  [root@vbox70 ~]# dd if=/dev/a689cb30-743e-4261-bfd1-b8b194dc85db/ids
  bs=1M |od -xc |head -20
  00020101221000200030200
  020   ! 022 002  \0 003  \0  \0  \0  \0  \0  \0 002  \0  \0
  0200001
   \0  \0  \0  \0  \0  \0  \0  \0 001  \0  \0  \0  \0  \0  \0  \0
  04000010007
  001  \0  \0  \0  \0  \0  \0  \0  \a  \0  \0  \0  \0  \0  \0  \0
  0603661393862633033
   \0  \0  \0  \0  \0  \0  \0  \0   a   6   8   9   c   b   3   0
  100372d33342d6532343136622d64662d31
-   7   4   3   e   -   4   2   6   1   -   b   f   d   1   -
  120386231623439636435386264
b   8   b   1   9   4   d   c   8   5   d   b  \0  \0  \0  \0
  1403638343839663932
   \0  \0  \0  \0  \0  \0  \0  \0   8   6   8   4   f   9   2   9
  160612d62372d6638346564622d38302d35
-   a   7   b   f   -   4   8   d   e   -   b   0   8   5   -
  200656363306539353766306364762e6f62
c   e   0   c   9   e   7   5   0   f   d   c   .   v   b   o
  22037782e307270006926de
x   7   0   .   p   r   i  \0 336 \0  \0  \0  \0  \0  \0
  [root@vbox70 ~]#
  
  Obviously I can't read from LVs that aren't available:
  
  [root@vbox70 ~]# dd
  if=/dev/a689cb30-743e-4261-bfd1-b8b194dc85db/4a1be3d8-ac7d-46cf-ae1c-ba154bc9a400
  bs=1M of=/dev/null
  dd: apertura di
  `/dev/a689cb30-743e-4261-bfd1-b8b194dc85db/4a1be3d8-ac7d-46cf-ae1c-ba154bc9a400':
  No such file or directory
  [root@vbox70 ~]#
  
  But those LV are the VM's disks and I suppose it's availability is
  managed by oVirt
  
 
 please see my previous mail on this thread, the issue seems to be with the
 connectivity to the nfs path, not the iscsi.
 2014-03-04 13:15:41,167 INFO
 [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
 (DefaultQuartzScheduler_Worker-27) [1141851d] Correlation
  ID: null, Call Stack: null, Custom Event ID: -1, Message: Failed to connect
  Host vbox70 to the Storage Domains nfs02EXPORT.
  
  
   - Original Message -
   From: Giorgio Bersano giorgio.bers...@gmail.com
   To: Meital Bourvine mbour...@redhat.com
   Cc: users@ovirt.org Users@ovirt.org
   Sent: Tuesday, March 4, 2014 4:35:07 PM
   Subject: Re: [Users] Data Center Non Responsive / Contending
  
   2014-03-04 15:23 GMT+01:00 Meital Bourvine mbour...@redhat.com:
Master data domain must be reachable in order for the DC to be up.
Export domain shouldn't affect the dc status.
Are you sure that you've 

Re: [Users] Increase core or socket for vCPU

2014-03-04 Thread Sven Kieske
AFAIK you need to increase sockets, not cores, at least
this is what ovirt does atm, and I think libvirt/qemu/kvm
do it the same way, but I'm not 100% sure.

See:
http://www.linux-kvm.org/page/CPUHotPlug
http://wiki.qemu.org/Features/CPUHotplug

But I'm also wondering if an additional socket is the
right way to go from a design perspective.

So I'd like to hear more opinions on this topic.
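For what it's worth, a quick way to compare what was configured with what the guest actually sees (a minimal sketch; the VM name is a placeholder and a Linux guest with util-linux is assumed):

# on the hypervisor: the topology libvirt was given for the VM (read-only connection)
virsh -r dumpxml myvm | grep -i topology
# inside the guest: sockets, cores per socket and threads per core as seen by the OS
lscpu | grep -E 'Socket\(s\)|Core\(s\) per socket|Thread\(s\) per core'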

On 04.03.2014 15:00, Tejesh M wrote:
 Hi,
 
 I have a basic doubt, while increasing vCPU for a VM in RHEV-M, do we need
 to increase core or sockets?  For instance, if someone asks for 2vCPU then
 should i make it 2 socket x 1 core or 1 socket x 2 core..
 
 I do understand that it depends on application too to choose. but i'm
 asking in general.
 
 Sorry, if its not relevant to this forum.
 
 Thanks


-- 
Mit freundlichen Grüßen / Regards

Sven Kieske

Systemadministrator
Mittwald CM Service GmbH  Co. KG
Königsberger Straße 6
32339 Espelkamp
T: +49-5772-293-100
F: +49-5772-293-333
https://www.mittwald.de
Geschäftsführer: Robert Meyer
St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Data Center Non Responsive / Contending

2014-03-04 Thread Liron Aravot


- Original Message -
 From: Liron Aravot lara...@redhat.com
 To: Giorgio Bersano giorgio.bers...@gmail.com
 Cc: users@ovirt.org Users@ovirt.org
 Sent: Tuesday, March 4, 2014 5:03:44 PM
 Subject: Re: [Users] Data Center Non Responsive / Contending
 
 Hi Giorgio,
 Apperantly the issue is caused because there is no connectivity to the export
 domain and than we fail on spmStart - that's obviously a bug that shouldn't
 happen.
 can you open a bug for the issue?
 in the meanwhile, as it seems to still exist - seems to me like the way for
 solving it would be either to fix the connectivity issue between vdsm and
 the storage domain or to downgrade your vdsm version to before this issue
 was introduced.

By the way, a solution we can go with is to remove the domain manually from
the engine and forcibly trigger a reconstruction of the pool metadata, so that
the issue should be resolved.

Note that if this happens for further domains in the future, the same
procedure would be required.
It's up to you; let me know which way you'd want to go.
 
 6a519e95-62ef-445b-9a98-f05c81592c85::WARNING::2014-03-04
 13:05:31,489::lvm::377::Storage.LVM::(_reloadvgs) lvm vgs failed: 5 [] ['
 Volume group 1810e5eb-9e
 b6-4797-ac50-8023a939f312 not found', '  Skipping volume group
 1810e5eb-9eb6-4797-ac50-8023a939f312']
 6a519e95-62ef-445b-9a98-f05c81592c85::ERROR::2014-03-04
 13:05:31,499::sdc::143::Storage.StorageDomainCache::(_findDomain) domain
 1810e5eb-9eb6-4797-ac50-8023a
 939f312 not found
 Traceback (most recent call last):
   File /usr/share/vdsm/storage/sdc.py, line 141, in _findDomain
 dom = findMethod(sdUUID)
   File /usr/share/vdsm/storage/sdc.py, line 171, in _findUnfetchedDomain
 raise se.StorageDomainDoesNotExist(sdUUID)
 StorageDomainDoesNotExist: Storage domain does not exist:
 (u'1810e5eb-9eb6-4797-ac50-8023a939f312',)
 6a519e95-62ef-445b-9a98-f05c81592c85::ERROR::2014-03-04
 13:05:31,500::sp::329::Storage.StoragePool::(startSpm) Unexpected error
 Traceback (most recent call last):
   File /usr/share/vdsm/storage/sp.py, line 296, in startSpm
 self._updateDomainsRole()
   File /usr/share/vdsm/storage/securable.py, line 75, in wrapper
 return method(self, *args, **kwargs)
   File /usr/share/vdsm/storage/sp.py, line 205, in _updateDomainsRole
 domain = sdCache.produce(sdUUID)
   File /usr/share/vdsm/storage/sdc.py, line 98, in produce
 domain.getRealDomain()
   File /usr/share/vdsm/storage/sdc.py, line 52, in getRealDomain
 return self._cache._realProduce(self._sdUUID)
   File /usr/share/vdsm/storage/sdc.py, line 122, in _realProduce
 domain = self._findDomain(sdUUID)
   File /usr/share/vdsm/storage/sdc.py, line 141, in _findDomain
 dom = findMethod(sdUUID)
   File /usr/share/vdsm/storage/sdc.py, line 171, in _findUnfetchedDomain
 raise se.StorageDomainDoesNotExist(sdUUID)
 
 
 
 
 - Original Message -
  From: Giorgio Bersano giorgio.bers...@gmail.com
  To: Meital Bourvine mbour...@redhat.com
  Cc: users@ovirt.org Users@ovirt.org
  Sent: Tuesday, March 4, 2014 4:35:07 PM
  Subject: Re: [Users] Data Center Non Responsive / Contending
  
  2014-03-04 15:23 GMT+01:00 Meital Bourvine mbour...@redhat.com:
   Master data domain must be reachable in order for the DC to be up.
   Export domain shouldn't affect the dc status.
   Are you sure that you've created the export domain as an export domain,
   and
   not as a regular nfs?
  
  
  Yes, I am.
  
  Don't know how to extract this info from DB, but in webadmin, in the
  storage list, I have these info:
  
  Domain Name: nfs02EXPORT
  Domain Type: Export
  Storage Type: NFS
  Format: V1
  Cross Data-Center Status: Inactive
  Total Space: [N/A]
  Free Space: [N/A]
  
  ATM my only Data Domain is based on iSCSI, no NFS.
  
  
  
  
  
   - Original Message -
   From: Giorgio Bersano giorgio.bers...@gmail.com
   To: Meital Bourvine mbour...@redhat.com
   Cc: users@ovirt.org Users@ovirt.org
   Sent: Tuesday, March 4, 2014 4:16:19 PM
   Subject: Re: [Users] Data Center Non Responsive / Contending
  
   2014-03-04 14:48 GMT+01:00 Meital Bourvine mbour...@redhat.com:
StorageDomainDoesNotExist: Storage domain does not exist:
(u'1810e5eb-9eb6-4797-ac50-8023a939f312',)
   
What's the output of:
lvs
vdsClient -s 0 getStorageDomainsList
   
If it exists in the list, please run:
vdsClient -s 0 getStorageDomainInfo
1810e5eb-9eb6-4797-ac50-8023a939f312
   
  
   I'm attaching a compressed archive to avoid mangling by googlemail
   client.
  
   Indeed the NFS storage with that id is not in the list of available
   storage as it is brought up by a VM that has to be run in this very
   same cluster. Obviously it isn't running at the moment.
  
   You find this in the DB:
  
   COPY storage_domain_static (id, storage, storage_name,
   storage_domain_type, storage_type, storage_domain_format_type,
   _create_date, _update_date, recoverable, 

Re: [Users] Data Center Non Responsive / Contending

2014-03-04 Thread Giorgio Bersano
2014-03-04 16:03 GMT+01:00 Liron Aravot lara...@redhat.com:
 Hi Giorgio,
 Apperantly the issue is caused because there is no connectivity to the export 
 domain and than we fail on spmStart - that's obviously a bug that shouldn't 
 happen.

Hi Liron,
we are reaching the same conclusion.

 can you open a bug for the issue?
Surely I will

 in the meanwhile, as it seems to still exist - seems to me like the way for 
 solving it would be either to fix the connectivity issue between vdsm and the 
 storage domain or to downgrade your vdsm version to before this issue was 
 introduced.


I have some problems with your suggestion(s):
- I cannot fix the connectivity between vdsm and the storage domain
because, as I already said, it is exposed by a VM in this very same
DataCenter, and if the DC doesn't go up, the NFS server can't either.
- I don't understand what it means to downgrade the vdsm: to which
point in time?

It seems I've put myself - again - in a chicken-and-egg situation,
where the SD depends on THIS export domain but the export domain
isn't available if the DC isn't running.

This export domain isn't that important to me. I can throw it away
without any problem.

What if we edit the DB and remove any instances related to it? Any
adverse consequences?




 6a519e95-62ef-445b-9a98-f05c81592c85::WARNING::2014-03-04 
 13:05:31,489::lvm::377::Storage.LVM::(_reloadvgs) lvm vgs failed: 5 [] ['  
 Volume group 1810e5eb-9e
 b6-4797-ac50-8023a939f312 not found', '  Skipping volume group 
 1810e5eb-9eb6-4797-ac50-8023a939f312']
 6a519e95-62ef-445b-9a98-f05c81592c85::ERROR::2014-03-04 
 13:05:31,499::sdc::143::Storage.StorageDomainCache::(_findDomain) domain 
 1810e5eb-9eb6-4797-ac50-8023a
 939f312 not found
 Traceback (most recent call last):
   File /usr/share/vdsm/storage/sdc.py, line 141, in _findDomain
 dom = findMethod(sdUUID)
   File /usr/share/vdsm/storage/sdc.py, line 171, in _findUnfetchedDomain
 raise se.StorageDomainDoesNotExist(sdUUID)
 StorageDomainDoesNotExist: Storage domain does not exist: 
 (u'1810e5eb-9eb6-4797-ac50-8023a939f312',)
 6a519e95-62ef-445b-9a98-f05c81592c85::ERROR::2014-03-04 
 13:05:31,500::sp::329::Storage.StoragePool::(startSpm) Unexpected error
 Traceback (most recent call last):
   File /usr/share/vdsm/storage/sp.py, line 296, in startSpm
 self._updateDomainsRole()
   File /usr/share/vdsm/storage/securable.py, line 75, in wrapper
 return method(self, *args, **kwargs)
   File /usr/share/vdsm/storage/sp.py, line 205, in _updateDomainsRole
 domain = sdCache.produce(sdUUID)
   File /usr/share/vdsm/storage/sdc.py, line 98, in produce
 domain.getRealDomain()
   File /usr/share/vdsm/storage/sdc.py, line 52, in getRealDomain
 return self._cache._realProduce(self._sdUUID)
   File /usr/share/vdsm/storage/sdc.py, line 122, in _realProduce
 domain = self._findDomain(sdUUID)
   File /usr/share/vdsm/storage/sdc.py, line 141, in _findDomain
 dom = findMethod(sdUUID)
   File /usr/share/vdsm/storage/sdc.py, line 171, in _findUnfetchedDomain
 raise se.StorageDomainDoesNotExist(sdUUID)




 - Original Message -
 From: Giorgio Bersano giorgio.bers...@gmail.com
 To: Meital Bourvine mbour...@redhat.com
 Cc: users@ovirt.org Users@ovirt.org
 Sent: Tuesday, March 4, 2014 4:35:07 PM
 Subject: Re: [Users] Data Center Non Responsive / Contending

 2014-03-04 15:23 GMT+01:00 Meital Bourvine mbour...@redhat.com:
  Master data domain must be reachable in order for the DC to be up.
  Export domain shouldn't affect the dc status.
  Are you sure that you've created the export domain as an export domain, and
  not as a regular nfs?
 

 Yes, I am.

 Don't know how to extract this info from DB, but in webadmin, in the
 storage list, I have these info:

 Domain Name: nfs02EXPORT
 Domain Type: Export
 Storage Type: NFS
 Format: V1
 Cross Data-Center Status: Inactive
 Total Space: [N/A]
 Free Space: [N/A]

 ATM my only Data Domain is based on iSCSI, no NFS.





  - Original Message -
  From: Giorgio Bersano giorgio.bers...@gmail.com
  To: Meital Bourvine mbour...@redhat.com
  Cc: users@ovirt.org Users@ovirt.org
  Sent: Tuesday, March 4, 2014 4:16:19 PM
  Subject: Re: [Users] Data Center Non Responsive / Contending
 
  2014-03-04 14:48 GMT+01:00 Meital Bourvine mbour...@redhat.com:
   StorageDomainDoesNotExist: Storage domain does not exist:
   (u'1810e5eb-9eb6-4797-ac50-8023a939f312',)
  
   What's the output of:
   lvs
   vdsClient -s 0 getStorageDomainsList
  
   If it exists in the list, please run:
   vdsClient -s 0 getStorageDomainInfo 1810e5eb-9eb6-4797-ac50-8023a939f312
  
 
  I'm attaching a compressed archive to avoid mangling by googlemail client.
 
  Indeed the NFS storage with that id is not in the list of available
  storage as it is brought up by a VM that has to be run in this very
  same cluster. Obviously it isn't running at the moment.
 
  You find this in the DB:
 
  COPY storage_domain_static (id, storage, 

Re: [Users] Data Center Non Responsive / Contending

2014-03-04 Thread Giorgio Bersano
2014-03-04 16:25 GMT+01:00 Liron Aravot lara...@redhat.com:


 - Original Message -
 From: Liron Aravot lara...@redhat.com
 To: Giorgio Bersano giorgio.bers...@gmail.com
 Cc: users@ovirt.org Users@ovirt.org
 Sent: Tuesday, March 4, 2014 5:03:44 PM
 Subject: Re: [Users] Data Center Non Responsive / Contending

 Hi Giorgio,
 Apperantly the issue is caused because there is no connectivity to the export
 domain and than we fail on spmStart - that's obviously a bug that shouldn't
 happen.
 can you open a bug for the issue?
 in the meanwhile, as it seems to still exist - seems to me like the way for
 solving it would be either to fix the connectivity issue between vdsm and
 the storage domain or to downgrade your vdsm version to before this issue
 was introduced.

 by the way, solution that we can go with is to remove the domain manually 
 from the engine and forcibly cause to reconstruction of the pool metadata, so 
 that issue should be resolved.


Do you mean Destroy from the webadmin?



 note that if it'll happen for further domains in the future the same 
 procedure would be required.
 up to your choice we can proceed with solution - let me know on which way 
 you'd want to go.

 6a519e95-62ef-445b-9a98-f05c81592c85::WARNING::2014-03-04
 13:05:31,489::lvm::377::Storage.LVM::(_reloadvgs) lvm vgs failed: 5 [] ['
 Volume group 1810e5eb-9e
 b6-4797-ac50-8023a939f312 not found', '  Skipping volume group
 1810e5eb-9eb6-4797-ac50-8023a939f312']
 6a519e95-62ef-445b-9a98-f05c81592c85::ERROR::2014-03-04
 13:05:31,499::sdc::143::Storage.StorageDomainCache::(_findDomain) domain
 1810e5eb-9eb6-4797-ac50-8023a
 939f312 not found
 Traceback (most recent call last):
   File /usr/share/vdsm/storage/sdc.py, line 141, in _findDomain
 dom = findMethod(sdUUID)
   File /usr/share/vdsm/storage/sdc.py, line 171, in _findUnfetchedDomain
 raise se.StorageDomainDoesNotExist(sdUUID)
 StorageDomainDoesNotExist: Storage domain does not exist:
 (u'1810e5eb-9eb6-4797-ac50-8023a939f312',)
 6a519e95-62ef-445b-9a98-f05c81592c85::ERROR::2014-03-04
 13:05:31,500::sp::329::Storage.StoragePool::(startSpm) Unexpected error
 Traceback (most recent call last):
   File /usr/share/vdsm/storage/sp.py, line 296, in startSpm
 self._updateDomainsRole()
   File /usr/share/vdsm/storage/securable.py, line 75, in wrapper
 return method(self, *args, **kwargs)
   File /usr/share/vdsm/storage/sp.py, line 205, in _updateDomainsRole
 domain = sdCache.produce(sdUUID)
   File /usr/share/vdsm/storage/sdc.py, line 98, in produce
 domain.getRealDomain()
   File /usr/share/vdsm/storage/sdc.py, line 52, in getRealDomain
 return self._cache._realProduce(self._sdUUID)
   File /usr/share/vdsm/storage/sdc.py, line 122, in _realProduce
 domain = self._findDomain(sdUUID)
   File /usr/share/vdsm/storage/sdc.py, line 141, in _findDomain
 dom = findMethod(sdUUID)
   File /usr/share/vdsm/storage/sdc.py, line 171, in _findUnfetchedDomain
 raise se.StorageDomainDoesNotExist(sdUUID)




 - Original Message -
  From: Giorgio Bersano giorgio.bers...@gmail.com
  To: Meital Bourvine mbour...@redhat.com
  Cc: users@ovirt.org Users@ovirt.org
  Sent: Tuesday, March 4, 2014 4:35:07 PM
  Subject: Re: [Users] Data Center Non Responsive / Contending
 
  2014-03-04 15:23 GMT+01:00 Meital Bourvine mbour...@redhat.com:
   Master data domain must be reachable in order for the DC to be up.
   Export domain shouldn't affect the dc status.
   Are you sure that you've created the export domain as an export domain,
   and
   not as a regular nfs?
  
 
  Yes, I am.
 
  Don't know how to extract this info from DB, but in webadmin, in the
  storage list, I have these info:
 
  Domain Name: nfs02EXPORT
  Domain Type: Export
  Storage Type: NFS
  Format: V1
  Cross Data-Center Status: Inactive
  Total Space: [N/A]
  Free Space: [N/A]
 
  ATM my only Data Domain is based on iSCSI, no NFS.
 
 
 
 
 
   - Original Message -
   From: Giorgio Bersano giorgio.bers...@gmail.com
   To: Meital Bourvine mbour...@redhat.com
   Cc: users@ovirt.org Users@ovirt.org
   Sent: Tuesday, March 4, 2014 4:16:19 PM
   Subject: Re: [Users] Data Center Non Responsive / Contending
  
   2014-03-04 14:48 GMT+01:00 Meital Bourvine mbour...@redhat.com:
StorageDomainDoesNotExist: Storage domain does not exist:
(u'1810e5eb-9eb6-4797-ac50-8023a939f312',)
   
What's the output of:
lvs
vdsClient -s 0 getStorageDomainsList
   
If it exists in the list, please run:
vdsClient -s 0 getStorageDomainInfo
1810e5eb-9eb6-4797-ac50-8023a939f312
   
  
   I'm attaching a compressed archive to avoid mangling by googlemail
   client.
  
   Indeed the NFS storage with that id is not in the list of available
   storage as it is brought up by a VM that has to be run in this very
   same cluster. Obviously it isn't running at the moment.
  
   You find this in the DB:
  
   COPY storage_domain_static (id, storage, storage_name,
   storage_domain_type, 

Re: [Users] Data Center Non Responsive / Contending

2014-03-04 Thread Liron Aravot


- Original Message -
 From: Giorgio Bersano giorgio.bers...@gmail.com
 To: Liron Aravot lara...@redhat.com
 Cc: Meital Bourvine mbour...@redhat.com, users@ovirt.org 
 Users@ovirt.org, fsimo...@redhat.com
 Sent: Tuesday, March 4, 2014 5:31:01 PM
 Subject: Re: [Users] Data Center Non Responsive / Contending
 
 2014-03-04 16:03 GMT+01:00 Liron Aravot lara...@redhat.com:
  Hi Giorgio,
  Apperantly the issue is caused because there is no connectivity to the
  export domain and than we fail on spmStart - that's obviously a bug that
  shouldn't happen.
 
 Hi Liron,
 we are reaching the same conclusion.
 
  can you open a bug for the issue?
 Surely I will
 
  in the meanwhile, as it seems to still exist - seems to me like the way for
  solving it would be either to fix the connectivity issue between vdsm and
  the storage domain or to downgrade your vdsm version to before this issue
  was introduced.
 
 
 I have some problems with your suggestion(s):
 - I cannot fix the connectivity between vdsm and the storage domain
 because, as I already said, it is exposed by a VM by this very same
 DataCenter and if the DC doesn't goes up, the NFS server can't too.
 - I don't understand what does it mean to downgrade the vdsm: to which
 point in time?
 
 It seems I've put myself - again - in a situation of the the egg or
 the chicken type, where the SD depends from THIS export domain but
 the export domain isn't available if the DC isn't running.
 
 This export domain isn't that important to me. I can throw it away
 without any problem.
 
 What if we edit the DB and remove any instances related to it? Any
 adverse consequences?
 

Ok, please perform a full db backup before attempting the following:
1. right click on the domain and choose Destroy
2. move all hosts to maintenance
3. log in to the database and run the following sql command:
update storage_pool set master_domain_version = master_domain_version + 1
where id = '{your id goes here}';
4. activate a host.
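A minimal sketch of the backup and of step 3, assuming the default 'engine' database on the engine host (the pool id is a placeholder; adjust the DB user/name to your setup):

# full backup first
pg_dump -U postgres engine > engine-backup-$(date +%F).sql
# bump the master domain version for the affected storage pool
psql -U postgres engine -c "UPDATE storage_pool SET master_domain_version = master_domain_version + 1 WHERE id = '<your-storage-pool-id>';"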
 
 
 
  6a519e95-62ef-445b-9a98-f05c81592c85::WARNING::2014-03-04
  13:05:31,489::lvm::377::Storage.LVM::(_reloadvgs) lvm vgs failed: 5 [] ['
  Volume group 1810e5eb-9e
  b6-4797-ac50-8023a939f312 not found', '  Skipping volume group
  1810e5eb-9eb6-4797-ac50-8023a939f312']
  6a519e95-62ef-445b-9a98-f05c81592c85::ERROR::2014-03-04
  13:05:31,499::sdc::143::Storage.StorageDomainCache::(_findDomain) domain
  1810e5eb-9eb6-4797-ac50-8023a
  939f312 not found
  Traceback (most recent call last):
File /usr/share/vdsm/storage/sdc.py, line 141, in _findDomain
  dom = findMethod(sdUUID)
File /usr/share/vdsm/storage/sdc.py, line 171, in _findUnfetchedDomain
  raise se.StorageDomainDoesNotExist(sdUUID)
  StorageDomainDoesNotExist: Storage domain does not exist:
  (u'1810e5eb-9eb6-4797-ac50-8023a939f312',)
  6a519e95-62ef-445b-9a98-f05c81592c85::ERROR::2014-03-04
  13:05:31,500::sp::329::Storage.StoragePool::(startSpm) Unexpected error
  Traceback (most recent call last):
File /usr/share/vdsm/storage/sp.py, line 296, in startSpm
  self._updateDomainsRole()
File /usr/share/vdsm/storage/securable.py, line 75, in wrapper
  return method(self, *args, **kwargs)
File /usr/share/vdsm/storage/sp.py, line 205, in _updateDomainsRole
  domain = sdCache.produce(sdUUID)
File /usr/share/vdsm/storage/sdc.py, line 98, in produce
  domain.getRealDomain()
File /usr/share/vdsm/storage/sdc.py, line 52, in getRealDomain
  return self._cache._realProduce(self._sdUUID)
File /usr/share/vdsm/storage/sdc.py, line 122, in _realProduce
  domain = self._findDomain(sdUUID)
File /usr/share/vdsm/storage/sdc.py, line 141, in _findDomain
  dom = findMethod(sdUUID)
File /usr/share/vdsm/storage/sdc.py, line 171, in _findUnfetchedDomain
  raise se.StorageDomainDoesNotExist(sdUUID)
 
 
 
 
  - Original Message -
  From: Giorgio Bersano giorgio.bers...@gmail.com
  To: Meital Bourvine mbour...@redhat.com
  Cc: users@ovirt.org Users@ovirt.org
  Sent: Tuesday, March 4, 2014 4:35:07 PM
  Subject: Re: [Users] Data Center Non Responsive / Contending
 
  2014-03-04 15:23 GMT+01:00 Meital Bourvine mbour...@redhat.com:
   Master data domain must be reachable in order for the DC to be up.
   Export domain shouldn't affect the dc status.
   Are you sure that you've created the export domain as an export domain,
   and
   not as a regular nfs?
  
 
  Yes, I am.
 
  Don't know how to extract this info from DB, but in webadmin, in the
  storage list, I have these info:
 
  Domain Name: nfs02EXPORT
  Domain Type: Export
  Storage Type: NFS
  Format: V1
  Cross Data-Center Status: Inactive
  Total Space: [N/A]
  Free Space: [N/A]
 
  ATM my only Data Domain is based on iSCSI, no NFS.
 
 
 
 
 
   - Original Message -
   From: Giorgio Bersano giorgio.bers...@gmail.com
   To: Meital Bourvine mbour...@redhat.com
   Cc: users@ovirt.org Users@ovirt.org
   Sent: Tuesday, March 4, 2014 4:16:19 PM
   Subject: Re: [Users] Data 

Re: [Users] Data Center Non Responsive / Contending

2014-03-04 Thread Giorgio Bersano
2014-03-04 16:37 GMT+01:00 Liron Aravot lara...@redhat.com:


 - Original Message -
 From: Giorgio Bersano giorgio.bers...@gmail.com
 To: Liron Aravot lara...@redhat.com
 Cc: Meital Bourvine mbour...@redhat.com, users@ovirt.org 
 Users@ovirt.org, fsimo...@redhat.com
 Sent: Tuesday, March 4, 2014 5:31:01 PM
 Subject: Re: [Users] Data Center Non Responsive / Contending

 2014-03-04 16:03 GMT+01:00 Liron Aravot lara...@redhat.com:
  Hi Giorgio,
  Apperantly the issue is caused because there is no connectivity to the
  export domain and than we fail on spmStart - that's obviously a bug that
  shouldn't happen.

 Hi Liron,
 we are reaching the same conclusion.

  can you open a bug for the issue?
 Surely I will

  in the meanwhile, as it seems to still exist - seems to me like the way for
  solving it would be either to fix the connectivity issue between vdsm and
  the storage domain or to downgrade your vdsm version to before this issue
  was introduced.


 I have some problems with your suggestion(s):
 - I cannot fix the connectivity between vdsm and the storage domain
 because, as I already said, it is exposed by a VM by this very same
 DataCenter and if the DC doesn't goes up, the NFS server can't too.
 - I don't understand what does it mean to downgrade the vdsm: to which
 point in time?

 It seems I've put myself - again - in a situation of the the egg or
 the chicken type, where the SD depends from THIS export domain but
 the export domain isn't available if the DC isn't running.

 This export domain isn't that important to me. I can throw it away
 without any problem.

 What if we edit the DB and remove any instances related to it? Any
 adverse consequences?


 Ok, please perform a full db backup before attempting the following:
 1. right click on the domain and choose Destroy
 2. move all hosts to maintenance
 3. log in to the database and run the following sql command:
 update storage_pool set master_domain_version = master_domain_version + 1
 where id = '{your id goes here}';
 4. activate a host.

Ok Liron, that did the trick!

Up and running again, even the VM that is supposed to be the server acting
as the export domain.

Now I have to run away as I'm late for a meeting, but tomorrow I'll file a
bug regarding this.

Thanks to you and Meital for your assistance,
Giorgio.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Data Center Non Responsive / Contending

2014-03-04 Thread Sven Kieske
Would you mind sharing the link to it?
I didn't find it.

Thanks!

On 04.03.2014 16:31, Giorgio Bersano wrote:
 can you open a bug for the issue?
 Surely I will
 

-- 
Mit freundlichen Grüßen / Regards

Sven Kieske

Systemadministrator
Mittwald CM Service GmbH  Co. KG
Königsberger Straße 6
32339 Espelkamp
T: +49-5772-293-100
F: +49-5772-293-333
https://www.mittwald.de
Geschäftsführer: Robert Meyer
St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Host requirements for 3.4 compatibility

2014-03-04 Thread Darren Evenson
Thanks, Lior. I had the oVirt repositories enabled on the machine with the 
engine; however, I did not realize I also needed the repositories on the host 
machines. Makes sense, though, especially when using a pre-release version. I 
must have had it in my head that because I was on Fedora 20 I didn't have to 
worry about that...

Cheers,

- Darren

-Original Message-
From: Lior Vernia [mailto:lver...@redhat.com] 
Sent: Tuesday, March 4, 2014 2:22 AM
To: Darren Evenson
Cc: users@ovirt.org
Subject: Re: [Users] Host requirements for 3.4 compatibility

Hey Darren,

I don't think it is (at least I couldn't find it with a quick Google).
In fact, I can't even tell you how I knew that 4.14 goes with 3.4... It should 
be documented better when 3.4 is officially released.

In general, when using beta/rc versions I would recommend following the 
corresponding test day web page on ovirt.org, as these usually contain the most 
up-to-date information on how to configure the yum repositories for everything 
to work.

Yours, Lior.

On 03/03/14 18:20, Darren Evenson wrote:
 Hi Lior,
 
 Updating VDSM from 4.13 to 4.14 worked! Thank you!
 
 Is it documented anywhere what the required versions of libvit and vdsm are 
 for 3.4 compatibility?
 
 - Darren
 
 -Original Message-
 From: Lior Vernia [mailto:lver...@redhat.com]
 Sent: Monday, March 3, 2014 7:04 AM
 To: Darren Evenson
 Cc: users@ovirt.org
 Subject: Re: [Users] Host requirements for 3.4 compatibility
 
 Hi Darren,
 
 Looks to me like your VDSM version isn't up-to-date; I think those supported 
 in 3.4 clusters are >= 4.14. I would try installing the ovirt yum repo file by 
 running:
 
 sudo yum localinstall http://resources.ovirt.org/releases/3.4.0-rc/rpm/Fedora/20/noarch/ovirt-release-11.0.2-1.noarch.rpm
 
 Then enable the ovirt-3.4.0-prerelease repository in the repo file, then 
 install vdsm, and let us know if that worked.
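 A minimal sketch of the enable-and-install part, assuming yum-utils is
 installed and that the repo id in the release file is ovirt-3.4.0-prerelease
 (the id is an assumption - check the file under /etc/yum.repos.d/ if it differs):

 # enable the prerelease repo, or set enabled=1 for it in the repo file
 sudo yum-config-manager --enable ovirt-3.4.0-prerelease
 sudo yum install vdsm
 rpm -q vdsm    # should now report a 4.14.x build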
 
 Yours, Lior.
 
 On 01/03/14 00:32, Darren Evenson wrote:
 I have updated my engine to 3.4 rc.

  

 I created a new cluster with 3.4 compatibility version, and then I 
 moved a host I had in maintenance mode to the new cluster.

  

 When I activate it, I get the error "Host kvmhost2 is compatible with 
 versions (3.0,3.1,3.2,3.3) and cannot join Cluster Cluster_new which 
 is set to version 3.4".

  

 My host was Fedora 20 with the latest updates:

  

 Kernel Version: 3.13.4 - 200.fc20.x86_64

 KVM Version: 1.6.1 - 3.fc20

 LIBVIRT Version: libvirt-1.1.3.3-5.fc20

 VDSM Version: vdsm-4.13.3-3.fc20

  

 So I enabled fedora-virt-preview and updated, but I still get the 
 same error, even now with libvirt 1.2.1:

  

 Kernel Version: 3.13.4 - 200.fc20.x86_64

 KVM Version: 1.7.0 - 5.fc20

 LIBVIRT Version: libvirt-1.2.1-3.fc20

 VDSM Version: vdsm-4.13.3-3.fc20

  

 What am I missing?

  

 - Darren

  



 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] hosted-engine rebooting in the middle of setup (was: [vdsm] [ANN] oVirt 3.4.0 Release Candidate is now available)

2014-03-04 Thread Darrell Budic
Whups, yes, that was it:

MainThread::INFO::2014-02-28 17:23:03,546::hosted_engine::1311::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_stop_engine_vm) Shutting down vm using `/usr/sbin/hosted-engine --vm-shutdown`
MainThread::INFO::2014-02-28 17:23:04,500::hosted_engine::1315::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_stop_engine_vm) stdout: Machine shut down

which also explains why I didn’t see anything in the engine logs; it was the 
self-hosted HA triggering the reboot when the engine shut down for the 
upgrade. And I do remember the note about putting it into global maintenance 
before upgrading. Now ;)
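For anyone else hitting this, a minimal sketch of the order of operations on a
hosted-engine setup (the --set-maintenance modes are from the hosted-engine CLI;
double-check them against your version):

# on one of the hosted-engine hosts, before touching the engine VM
hosted-engine --set-maintenance --mode=global
# ... run yum update and engine-setup inside the engine VM ...
# once the engine is back up and healthy
hosted-engine --set-maintenance --mode=none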

Don’t know if the engine is aware it’s on an HA setup; if it is, it might be a good 
thing for it to check for and maybe enable global maintenance itself during the upgrade?

Are there any other special procedures to be aware of in a self-hosted setup? I 
haven’t tried updating the VDSM hosts for these yet, for instance. Seems like I 
shouldn’t enable global maintenance there, so the engine switches hosts 
properly?

Thanks.

  -Darrell

On Mar 2, 2014, at 2:35 PM, Liviu Elama liviu.el...@gmail.com wrote:

 Sounds like your hosts were not in maintenance mode while you were upgrading 
 the engine, which explains the 2-minute reboot.
 
 This should be revealed by logs 
 
 Regards 
 Liviu
 
 
 On Sun, Mar 2, 2014 at 10:32 PM, Yedidyah Bar David d...@redhat.com wrote:
 - Original Message -
  From: Darrell Budic darrell.bu...@zenfire.com
  To: Sandro Bonazzola sbona...@redhat.com
  Cc: annou...@ovirt.org, engine-devel engine-de...@ovirt.org, arch 
  a...@ovirt.org, Users@ovirt.org, VDSM
  Project Development vdsm-de...@lists.fedorahosted.org
  Sent: Saturday, March 1, 2014 1:56:23 AM
  Subject: Re: [vdsm] [Users] [ANN] oVirt 3.4.0 Release Candidate is now  
available
 
  Started testing this on two self-hosted clusters, with mixed results. There
  were updates from 3.4.0 beta 3.
 
  On both, got informed the system was going to reboot in 2 minutes while it
  was still installing yum updates.
 
  On the faster system, the whole update process finished before the 2 minutes
  were up, the VM restarted, and all appears normal.
 
  On the other, slower cluster, the 2 minutes hit while the yum updates were
  still being installed, and the system rebooted. It continued rebooting every
  3 minutes or so, and the engine console web pages were not available because
  the engine didn’t start. It did this at least 3 times before I went ahead
  and reran engine-setup, which completed successfully. The system stopped
  restarting and the web interface was available again. A quick perusal of
  system logs and engine-setup logs didn’t reveal what requested the reboot.
 
  That was rather impolite of something to do without warning :) At least
  it was recoverable. Scheduling the reboot while the yum updates
  were still running seems like a poor idea as well.
 
 Can you please post relevant logs?
 hosts: /var/log/ovirt-hosted-engine-setup/*, 
 /var/log/ovirt-hosted-engine-ha/*,
 /var/log/vdsm/*
 engine: /var/log/ovirt-engine/setup/*, /var/log/ovirt-engine/*
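 A quick way to bundle those, assuming the default locations (the archive names
 below are just examples):

 # on each host
 tar czf /tmp/hosted-engine-logs-$(hostname).tar.gz /var/log/ovirt-hosted-engine-setup /var/log/ovirt-hosted-engine-ha /var/log/vdsm
 # on the engine VM
 tar czf /tmp/engine-logs.tar.gz /var/log/ovirt-engine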
 
 You can of course open a bug on bugzilla and attach the logs there if you want.
 
 Thanks, and thanks for the report!
 --
 Didi
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] View the Console using SPICE from Windows client

2014-03-04 Thread Jon Forrest
On Tue, Mar 4, 2014 at 2:51 AM, Udaya Kiran P ukiran...@yahoo.in wrote:
 Hi All,

 I have successfully launched a VM on one of the Host (FC19 Host). Console
 option for the VM is set to SPICE. However, I am not able to see the console
 after clicking on the console button in oVirt Engine.

 I am accessing the oVirt Engine from a Windows 7 machine. I have tried with
 Chrome, Firefox and IE.

I suggest you try the steps I described in my posting to this list on 2/20/2014.
I had the same problem and I was able to solve it.

Jon Forrest
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] Snapshots

2014-03-04 Thread Maurice James
I attempted to create a snapshot and an alert came up saying that it failed, 
but when I look at the snapshots tab for that specific VM, it says that the 
status is OK. Which should I believe?

Ver 3.3.3-2.el6
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Data Center Non Responsive / Contending

2014-03-04 Thread Liron Aravot


- Original Message -
 From: Giorgio Bersano giorgio.bers...@gmail.com
 To: Liron Aravot lara...@redhat.com
 Cc: Meital Bourvine mbour...@redhat.com, users@ovirt.org 
 Users@ovirt.org, fsimo...@redhat.com
 Sent: Tuesday, March 4, 2014 6:10:27 PM
 Subject: Re: [Users] Data Center Non Responsive / Contending
 
 2014-03-04 16:37 GMT+01:00 Liron Aravot lara...@redhat.com:
 
 
  - Original Message -
  From: Giorgio Bersano giorgio.bers...@gmail.com
  To: Liron Aravot lara...@redhat.com
  Cc: Meital Bourvine mbour...@redhat.com, users@ovirt.org
  Users@ovirt.org, fsimo...@redhat.com
  Sent: Tuesday, March 4, 2014 5:31:01 PM
  Subject: Re: [Users] Data Center Non Responsive / Contending
 
  2014-03-04 16:03 GMT+01:00 Liron Aravot lara...@redhat.com:
   Hi Giorgio,
   Apparently the issue is caused because there is no connectivity to the
   export domain and then we fail on spmStart - that's obviously a bug that
   shouldn't happen.
 
  Hi Liron,
  we are reaching the same conclusion.
 
   can you open a bug for the issue?
  Surely I will
 
   in the meanwhile, as it seems to still exist - seems to me like the way
   for
   solving it would be either to fix the connectivity issue between vdsm
   and
   the storage domain or to downgrade your vdsm version to before this
   issue
   was introduced.
 
 
  I have some problems with your suggestion(s):
  - I cannot fix the connectivity between vdsm and the storage domain
  because, as I already said, it is exposed by a VM in this very same
  DataCenter, and if the DC doesn't go up, the NFS server can't come up either.
  - I don't understand what it means to downgrade vdsm: to which
  point in time?
 
  It seems I've put myself - again - in a chicken-or-egg situation, where
  the SD depends on THIS export domain but the export domain isn't
  available if the DC isn't running.
 
  This export domain isn't that important to me. I can throw it away
  without any problem.
 
  What if we edit the DB and remove any instances related to it? Any
  adverse consequences?
 
 
  Ok, please perform a full db backup before attempting the following:
  1. right-click on the domain and choose Destroy
  2. move all hosts to maintenance
  3. log into the database and run the following sql command:
  update storage_pool set master_domain_version = master_domain_version + 1
  where id = '{your id goes here}';
  4. activate a host.
 
 Ok Liron, that did the trick!
 
 Up and running again, even the VM that is supposed to be the server acting as
 the export domain.
 
 Now I have to run away as I'm late for a meeting, but tomorrow I'll file a
 bug regarding this.
 
 Thanks to you and Meital for your assistance,
 Giorgio.

Sure, happy that everything is fine!
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] SPICE causes migration failure?

2014-03-04 Thread Ted Miller


On 3/3/2014 12:26 PM, Dafna Ron wrote:
I don't see a reason why an open monitor would fail migration - at most, if 
there is a problem I would expect the spice session to be closed on the src 
and restarted at the dst.
Can you please attach vdsm/libvirt/qemu logs from both hosts and engine 
logs so that we can see the migration failure reason?


Thanks,
Dafna



On 03/03/2014 05:16 PM, Ted Miller wrote:
I just got my Data Center running again, and am proceeding with some setup & 
testing.


I created a VM (not doing anything useful).
I clicked on the Console and had a SPICE console up (viewed in Win7).
I had it printing the time on the screen once per second (`while date; do 
sleep 1; done`).

I tried to migrate the VM to another host and got in the GUI:

Migration started (VM: web1, Source: s1, Destination: s3, User: 
admin@internal).


Migration failed due to Error: Fatal error during migration (VM: web1, 
Source: s1, Destination: s3).


As I started the migration I happened to think "I wonder how they handle 
the SPICE console", since I think that is a link from the host to my 
machine, letting me see the VM's screen.


After the failure, I tried shutting down the SPICE console, and found that 
the migration succeeded. I again opened SPICE and had a migration fail. 
Closed SPICE, and the migration succeeded again.


I can understand how migrating SPICE is a problem, but could we at least 
give the victim of this condition a meaningful error message?  I have seen 
a lot of questions about failed migrations (mostly due to attached CDs), 
but I have never seen this discussed. If I had not had that particular 
thought cross my brain at that particular time, I doubt that SPICE would 
have been where I went looking for a solution.


If this is the first time this issue has been raised, I am willing to file 
a bug.


Ted Miller
Elkhart, IN, USA

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



In finding the right one-minute slice of the logs, I saw something that makes 
me think this is due to a missing method in the glusterfs support.  Others 
who understand more of what the logs are saying can verify or correct my hunch.


Was trying to migrate from s2 to s1.

Logs on fpaste.org:
http://ur1.ca/gr48c
http://ur1.ca/gr48r
http://ur1.ca/gr493
http://ur1.ca/gr49e
http://ur1.ca/gr49i
http://ur1.ca/gr49x
http://ur1.ca/gr4a6

Ted Miller
Elkhart, IN, USA



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] oVirt 3.5 planning

2014-03-04 Thread Juan Pablo Lorier

On 02/03/14 18:59, Itamar Heim wrote:
 On 02/28/2014 06:46 PM, Juan Pablo Lorier wrote:
 Hi,

 I'm kind of out of date at this time, but I'd like to propose something
 that was meant for 3.4 and I don't know if it made it in: use any nfs
 share as either an iso or export domain so you can just copy into the share and
 then update the db in some way.

 not yet in.

 Also, make the export domain able to be shared among dcs as the iso domain is;
 that is an rfe from a long time ago and a useful one.

 true. some relief via a glance storage domain allowing that.
I know, but too much overhead in using glance.

 Attaching and detaching domains is both time-consuming and boring.
 Also using tagged and untagged networks on top of the same nic.
 Everybody does that except for ovirt.
 I'd also like to say that though I have huge enthusiasm for ovirt's fast
 evolution, I think you may need to slow down on adding new
 features until most of the rfes that are over a year old are done, because
 otherwise it's kind of disappointing to open an rfe just to see it
 sleeping for so long. Don't take this the wrong way; I've been listened to and
 helped by the team every time I needed it and I'm thankful for that.

 age is not always the best criterion for priority. we have RFEs
 open for several years that it takes us time to get to. sometimes newer RFEs
 are more important/interesting (say, leveraging cloud-init or
 neutron), sometimes old RFEs are hard to close (get rid of storage
 pool, etc.).
 it's a balancing act. but it's also open source, which means anyone can
 try and contribute to things they would like prioritized.

I'm aware of the open source nature of the project; my way of
contributing is testing, reporting bugs and proposing rfes - that much I
can do.
I understand your point, but I think that some rfes are as hard as they are
useful, so maybe there's some room for rebalancing the effort to try and
close those.
Regards,
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] View the Console using SPICE from Windows client

2014-03-04 Thread Udaya Kiran P
Hi Jon,

I tried the steps you mentioned, but I receive the following error popup:

Unable to connect to libvirt with URI [none].


Client Machine - Windows 7
Browser - Firefox (27.0)
Guest VM - CentOS 6.4

Should I set any URIs, or change any settings in the VM?

Please suggest.



Regards,
Udaya Kiran



On Tuesday, 4 March 2014 10:53 PM, Jon Forrest nob...@gmail.com wrote:
 
On Tue, Mar 4, 2014 at 2:51 AM, Udaya Kiran P ukiran...@yahoo.in wrote:

 Hi All,

 I have successfully launched a VM on one of the Host (FC19 Host). Console
 option for the VM is set to SPICE. However, I am not able to see the console
 after clicking on the console button in oVirt Engine.

 I am accessing the oVirt Engine from a Windows 7 machine. I have tried with
 Chrome, Firefox and IE.

I suggest you try the steps I described in my posting to this list on 2/20/2014.
I had the same problem and I was able to solve it.

Jon Forrest
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] Importing Glance images as oVirt templates

2014-03-04 Thread Oved Ourfalli
Hi all!

In oVirt 3.4 we extended the integration with Glance, allowing you to import 
Glance images as oVirt templates.
We also added a public Glance repository to be used by oVirt deployments.
A reference to this repository is automatically added in 3.4, so you'll see it 
in the UI by default, under the name ovirt-image-repository. This repository 
currently contains only a small set of images, but we hope to extend it soon.
The right way to use the Fedora and CentOS images that are there is to import 
them as templates, create VMs from them, and use cloud-init to configure them.
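If you just want to peek at what the public repository currently serves, a minimal
sketch using the Glance v1 API (the glance.ovirt.org:9292 endpoint is an assumption;
check the provider entry in the UI for the actual URL):

# list the images exposed by the public repository (endpoint is an assumption)
curl -s http://glance.ovirt.org:9292/v1/images | python -m json.tool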

I wrote a blog post on how to use it.
Have a look at 
http://ovedou.blogspot.co.il/2014/03/importing-glance-images-as-ovirt.html

Will be happy to hear your comments and answer your questions,
Oved
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Snapshots

2014-03-04 Thread Gadi Ickowicz
Hi,

Were you taking a live snapshot? That process is actually composed of 2 steps:
1) Taking the snapshot (creating a new volume that is part of the image)
2) Configuring the VM to use the new volume
A failure in step 2 would result in the new volume being created, but the VM 
still writing to the old volume, and that warning could be what you saw.

Could you please attach the engine logs and, if possible, the vdsm logs for the 
SPM at the time you took the snapshot?
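A quick way to pull the relevant slices (standard log locations; the grep pattern
is just a suggestion, and you should narrow it down to the snapshot timestamp):

# on the engine machine
grep -i snapshot /var/log/ovirt-engine/engine.log
# on the SPM host
grep -i snapshot /var/log/vdsm/vdsm.log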

Thanks,
Gadi Ickowicz

- Original Message -
From: Maurice James midnightst...@msn.com
To: users@ovirt.org
Sent: Tuesday, March 4, 2014 10:45:39 PM
Subject: [Users] Snapshots

I attempted to create a snapshot and an alert came up saying that it failed, 
but when I look at the snapshots tab for that specific VM, it says that the 
status is OK. Which should I believe? 

Ver 3.3.3-2.el6 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users