Re: [Openstack] [OpenStack][Nova] Live Migration Error

2012-07-10 Thread Leander Bessa Beernaert
Well, I've checked the libvirt logs on both nodes and I found these two
lines:

2012-07-09 13:58:27.179+0000: 10227: warning : qemuDomainObjTaint:1134 :
 Domain id=2 name='instance-0002'
 uuid=57aca8a6-d062-4a08-8d87-e4d11d259ac7 is tainted: high-privileges
 2012-07-09 13:58:27.736+0000: 10226: error : qemuMonitorIORead:513 :
 Unable to read from monitor: Connection reset by peer


The log is also filled with the message below; it gets repeated over and
over.

2012-07-10 09:26:02.244+0000: 10229: error : virExecWithHook:328 : Cannot
 find 'pm-is-supported' in path: No such file or directory
 2012-07-10 09:26:02.244+0000: 10229: warning : qemuCapsInit:856 : Failed
 to get host power management capabilities


On Tue, Jul 10, 2012 at 8:16 AM, Razique Mahroua
razique.mahr...@gmail.com wrote:

 Hi Leander,
 try to check the libvirtd.log files.
 Is the instance still running on the first node while you are launching the
 migration process?

 Razique
 *Nuage & Co - Razique Mahroua*
 razique.mahr...@gmail.com


 On 9 Jul 2012, at 16:09, Leander Bessa Beernaert wrote:

 Ok, so I've updated to the test packages from

 The migration still fails, but I see no errors in the logs. I'm trying to
 migrate a VM with the m1.tiny flavor from one machine to another. Their
 hardware is identical, and they have more than enough resources to support
 the m1.tiny flavor:

 cloud35 (total)      4   3867   186
 cloud35 (used_now)   0    312     5
 cloud35 (used_max)   0      0     0


 These are the logs from the origin compute node:
 http://paste.openstack.org/show/19319/ and the destination compute node:
 http://paste.openstack.org/show/19318/. The scheduler's log has no
 visible errors or stack traces.

 I'm still using nfsv4.
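 (For context, the instances directory is shared over NFS. A sketch of the
 mount on each compute node; the export path and mount point are
 assumptions, only the NFSv4 part is stated here:)

 10.0.1.1:/var/lib/nova/instances  /var/lib/nova/instances  nfs4  defaults  0  0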

 Any ideas?


 On Fri, Jul 6, 2012 at 7:57 PM, Leander Bessa Beernaert 
 leande...@gmail.com wrote:

 Thanks for the tip, it's better than nothing :)

 Regards,
 Leander

 On Fri, Jul 6, 2012 at 6:32 PM, Mandar Vaze / मंदार वझे 
 mandarv...@gmail.com wrote:

 Not sure if you are able to debug this, but a while ago there was a bug
 where instance.id was passed where instance.uuid was expected. This
 used to cause some problems.
 It looks like you are using a distribution package rather than a devstack
 installation, so it is likely that the issue is now fixed. Can you try the
 latest packages (and/or try devstack if you can)?

 I wish I could help more.

 -Mandar


 On Fri, Jul 6, 2012 at 3:26 PM, Leander Bessa Beernaert 
 leande...@gmail.com wrote:

 Hello,

 I've recently set up a system to test out the live migration feature. So
 far I've been able to launch instances with the shared NFS folder.
 However, when I run the live-migration command I encounter this error on
 the destination compute node:

 2012-07-05 09:33:48 ERROR nova.manager [-] Error during ComputeManager.update_available_resource: Domain not found: no domain with matching id 2
 2012-07-05 09:33:48 TRACE nova.manager Traceback (most recent call last):
 2012-07-05 09:33:48 TRACE nova.manager   File "/usr/lib/python2.7/dist-packages/nova/manager.py", line 155, in periodic_tasks
 2012-07-05 09:33:48 TRACE nova.manager     task(self, context)
 2012-07-05 09:33:48 TRACE nova.manager   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2409, in update_available_resource
 2012-07-05 09:33:48 TRACE nova.manager     self.driver.update_available_resource(context, self.host)
 2012-07-05 09:33:48 TRACE nova.manager   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py", line 1936, in update_available_resource
 2012-07-05 09:33:48 TRACE nova.manager     'vcpus_used': self.get_vcpu_used(),
 2012-07-05 09:33:48 TRACE nova.manager   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py", line 1743, in get_vcpu_used
 2012-07-05 09:33:48 TRACE nova.manager     dom = self._conn.lookupByID(dom_id)
 2012-07-05 09:33:48 TRACE nova.manager   File "/usr/lib/python2.7/dist-packages/libvirt.py", line 2363, in lookupByID
 2012-07-05 09:33:48 TRACE nova.manager     if ret is None: raise libvirtError('virDomainLookupByID() failed', conn=self)
 2012-07-05 09:33:48 TRACE nova.manager libvirtError: Domain not found: no domain with matching id 2
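
 (For reference, the live-migration command being run is along these lines;
 the instance ID is the UUID from the libvirt logs above, and the target
 host name is a placeholder:)

 nova live-migration 57aca8a6-d062-4a08-8d87-e4d11d259ac7 cloud36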


 Any ideas on how to solve this?

 Regards,
 Leander


Re: [Openstack] [OpenStack][Nova] Live Migration Error

2012-07-10 Thread Sébastien Han
Hi!

Usually you get:

2012-07-09 13:58:27.179+0000: 10227: warning : qemuDomainObjTaint:1134 :
 Domain id=2 name='instance-0002'
 uuid=57aca8a6-d062-4a08-8d87-e4d11d259ac7 is tainted: high-privileges


when you change the permissions in libvirt (to root, I presume), which is
not necessary.

2012-07-10 09:26:02.244+0000: 10229: error : virExecWithHook:328 : Cannot
 find 'pm-is-supported' in path: No such file or directory


This error is harmless and can be easily solved by installing the following
package:

sudo apt-get install pm-utils -y
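
A quick way to confirm the binary is then on the PATH (a sanity check, not
from the original report):

 which pm-is-supported    # should print /usr/bin/pm-is-supported once pm-utils is installed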


Do you have something in the nova-scheduler logs?
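For example, something like (the log path assumes a stock packaged install):

 grep -iE 'error|trace' /var/log/nova/nova-scheduler.log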

Cheers!


Re: [Openstack] [OpenStack][Nova] Live Migration Error

2012-07-10 Thread Sébastien Han
I forgot to ask, did you enable the vnc console?

If so, with which parameters?



Re: [Openstack] [OpenStack][Nova] Live Migration Error

2012-07-10 Thread Leander Bessa Beernaert
If I don't run libvirt as root, it can't write to the shared folder. It's
the only way I've been able to get this to work. :S
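
For what it's worth, that usually implies an NFS export without root
squashing; a sketch of the controller-side /etc/exports line such a setup
needs (the path and network range are assumptions):

/var/lib/nova/instances 10.0.1.0/24(rw,sync,no_subtree_check,no_root_squash)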

Below is the VNC configuration of one of the compute nodes; 10.0.1.1 is
the controller and 10.0.1.2 is the compute node.

novncproxy_base_url=http://10.0.1.1:6080/vnc_auto.html
 xvpvncproxy_base_url=http://10.0.1.1:6081/console
 vncserver_proxyclient_address=10.0.1.2
 vncserver_listen=10.0.1.2



Re: [Openstack] [OpenStack][Nova] Live Migration Error

2012-07-10 Thread Sébastien Han
Change vncserver_listen to 0.0.0.0 and re-try the live migration; you
should get better results :)
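
In other words, keeping the values from your previous message, the compute
node's nova.conf would become (a sketch; only vncserver_listen changes):

novncproxy_base_url=http://10.0.1.1:6080/vnc_auto.html
xvpvncproxy_base_url=http://10.0.1.1:6081/console
vncserver_proxyclient_address=10.0.1.2
vncserver_listen=0.0.0.0

Presumably the migrated domain keeps its VNC listen address, and with it
pinned to 10.0.1.2 qemu cannot bind that address on the destination host;
0.0.0.0 avoids that.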



Re: [Openstack] [OpenStack][Nova] Live Migration Error

2012-07-10 Thread Razique Mahroua
Ok, it looks like Qemu is unable to access the instance state. Could you
perform a "$ virsh list --all" from the second node and tell me what you
see? As for the second message, make sure you installed the "dbus" package.
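
For instance (illustrative output only, using the domain ID and name from
the log lines you posted):

$ virsh list --all
 Id Name                 State
----------------------------------
  2 instance-0002        running

And for the second point, the package install would just be:

sudo apt-get install dbus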
Regards,
Razique

Nuage & Co - Razique Mahroua
razique.mahr...@gmail.com


Re: [Openstack] [OpenStack][Nova] Live Migration Error

2012-07-10 Thread Leander Bessa Beernaert
That did it! Thanks :)

Do you by chance have any pointers on getting live migration to work
without running libvirt as root?


Re: [Openstack] [OpenStack][Nova] Live Migration Error

2012-07-10 Thread Sébastien Han
Great!

The last time I ran live migration, it was with GlusterFS and CephFS, and
I didn't change any permissions in libvirt. I did live migration with NFS
once, but that was in Diablo (horrible); I don't really remember my setup.
Maybe you should consider trying GlusterFS.
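
If you want to try it, a minimal sketch (the volume name, server address
and mount point are assumptions, not something tested here):

sudo apt-get install glusterfs-client
sudo mount -t glusterfs 10.0.1.1:/nova-instances /var/lib/nova/instances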



Re: [Openstack] [OpenStack][Nova] Live Migration Error

2012-07-10 Thread Leander Bessa Beernaert
Is GlusterFS more viable for a production environment?


Re: [Openstack] [OpenStack][Nova] Live Migration Error

2012-07-10 Thread Sébastien Han
It's production ready; Red Hat offers commercial support for it.
Just keep in mind that it's owned by Red Hat ;)



Re: [Openstack] [OpenStack][Nova] Live Migration Error

2012-07-10 Thread Leander Bessa Beernaert
Ok. Thx for the help :)


Re: [Openstack] [OpenStack][Nova] Live Migration Error

2012-07-10 Thread Sébastien Han
Np ;)

On Tue, Jul 10, 2012 at 12:33 PM, Leander Bessa Beernaert 
leande...@gmail.com wrote:

 Ok. Thx for the help :)


 On Tue, Jul 10, 2012 at 11:30 AM, Sébastien Han 
 han.sebast...@gmail.comwrote:

 It's production ready, RedHat offers a commercial support on it.
 Just keep in mind that it's owned by Redhat ;)



 On Tue, Jul 10, 2012 at 12:24 PM, Leander Bessa Beernaert 
 leande...@gmail.com wrote:

 Is GlusterFS be more viable for a production environment?


 On Tue, Jul 10, 2012 at 11:17 AM, Sébastien Han han.sebast...@gmail.com
  wrote:

 Great!

 The last time I ran the live-migration, it was with GlusterFS and
 CephFS and I didn't changed any permissions in libvirt. I did the
 live-migration with NFS once but it was in Diablo (horrible), I don't
 really remember my setup. Maybe you should consider to try GlusterFS.


 On Tue, Jul 10, 2012 at 12:07 PM, Leander Bessa Beernaert 
 leande...@gmail.com wrote:

 That did! Thanks :)

 Do you by change have any pointer on getting the live-migration to
 work without running libvirt under root?


 On Tue, Jul 10, 2012 at 10:55 AM, Sébastien Han 
 han.sebast...@gmail.com wrote:

 Change the vncserver_listen to 0.0.0.0 and re-try the live-migration,
 you should get better results :)



 On Tue, Jul 10, 2012 at 11:52 AM, Leander Bessa Beernaert 
 leande...@gmail.com wrote:

 If i don't run libvirt with root, it can't write to the shared
 folder. It's the only way i've been able to get this to work. :S

 Below follows the configuration of one of the compute nodes.
 10.0.1.1 is the controller and 10.0.1.2 is the compute node.

 novncproxy_base_url=http://10.0.1.1:6080/vnc_auto.html
 xvpvncproxy_base_url=http://10.0.1.1:6081/console
 vncserver_proxyclient_address=10.0.1.2
 vncserver_listen=10.0.1.2


 On Tue, Jul 10, 2012 at 10:49 AM, Sébastien Han 
 han.sebast...@gmail.com wrote:

 I forgot to ask, did you enable the vnc console?

 If so, with which parameters?


 On Tue, Jul 10, 2012 at 11:48 AM, Sébastien Han 
 han.sebast...@gmail.com wrote:

 Hi!

 Usually you get:

 2012-07-09 13:58:27.179+: 10227: warning :
 qemuDomainObjTaint:1134 : Domain id=2 name='instance-0002'
 uuid=57aca8a6-d062-4a08-8d87-e4d11d259ac7 is tainted: high-privileges


 when you change permission in libvirt (root I presumed) which is
 not necessary.

 2012-07-10 09:26:02.244+: 10229: error : virExecWithHook:328 :
 Cannot find 'pm-is-supported' in path: No such file or directory


 This error is harmless and can be easily solved by installing the
 following package:

 sudo apt-get install pm-utils -y


 Do you have something in the nova-scheduler logs?

 Cheers!

 On Tue, Jul 10, 2012 at 11:29 AM, Leander Bessa Beernaert 
 leande...@gmail.com wrote:

 Well i've checked the libvirt logs on both nodes and i found
 these two lines:

 2012-07-09 13:58:27.179+: 10227: warning :
 qemuDomainObjTaint:1134 : Domain id=2 name='instance-0002'
 uuid=57aca8a6-d062-4a08-8d87-e4d11d259ac7 is tainted: 
 high-privileges
 2012-07-09 13:58:27.736+: 10226: error :
 qemuMonitorIORead:513 : Unable to read from monitor: Connection 
 reset by
 peer


 The log is also filled with the message below; it gets repeated
 over and over.

 2012-07-10 09:26:02.244+: 10229: error : virExecWithHook:328
 : Cannot find 'pm-is-supported' in path: No such file or directory
 2012-07-10 09:26:02.244+: 10229: warning : qemuCapsInit:856
 : Failed to get host power management capabilities


 On Tue, Jul 10, 2012 at 8:16 AM, Razique Mahroua 
 razique.mahr...@gmail.com wrote:

 Hi Leander,
 try to check libvirtd.log files,
 is the instance still running on the first node while you are
 launching the migration process?

 Razique
 Nuage & Co - Razique Mahroua
 razique.mahr...@gmail.com


 On 9 Jul 2012, at 16:09, Leander Bessa Beernaert wrote:

 Ok, so i've updated to the test packages from

 The migration still fails, but i see no errors in the logs. I'm
 trying to migrate a VM with the m1.tiny flavor from one machine to
 another. Their hardware is identical and they have more than enough
 resources to support the m1.tiny flavor:

 cloud35 (total)     4  3867  186
 cloud35 (used_now)  0   312    5
 cloud35 (used_max)  0     0    0


 These are the logs from the origin compute node:
 http://paste.openstack.org/show/19319/  and  the destination
 compute node: http://paste.openstack.org/show/19318/ . The
 scheduler's log has no visible errors or stack traces.

 I'm still using nfsv4.

 Any ideas?


 On Fri, Jul 6, 2012 at 7:57 PM, Leander Bessa Beernaert 
 leande...@gmail.com wrote:

 Thanks for the tip, it's better than nothing :)

 Regards,
 Leander

 On Fri, Jul 6, 2012 at 6:32 PM, Mandar Vaze / मंदार वझे 
 mandarv...@gmail.com wrote:

 Not sure if you are able to debug this, but a while ago there
 was a bug where instance.id was passed where instance.uuid
 was expected. This used to cause 

Re: [Openstack] [OpenStack][Nova] Live Migration Error

2012-07-10 Thread Razique Mahroua
What Sébastien said; apart from that, yes, it's production-ready :-)
Nuage & Co - Razique Mahroua razique.mahr...@gmail.com

On 10 Jul 2012, at 11:52, Leander Bessa Beernaert wrote:

 If i don't run libvirt with root, it can't write to the shared folder.
 It's the only way i've been able to get this to work. :S

 Below follows the configuration of one of the compute nodes. 10.0.1.1
 is the controller and 10.0.1.2 is the compute node.

 novncproxy_base_url=http://10.0.1.1:6080/vnc_auto.html
 xvpvncproxy_base_url=http://10.0.1.1:6081/console
 vncserver_proxyclient_address=10.0.1.2
 vncserver_listen=10.0.1.2

 On Tue, Jul 10, 2012 at 10:49 AM, Sébastien Han 
 han.sebast...@gmail.com wrote:

 I forgot to ask, did you enable the vnc console?
 If so, with which parameters?

 On Tue, Jul 10, 2012 at 11:48 AM, Sébastien Han 
 han.sebast...@gmail.com wrote:

 Hi!

 Usually you get:

 2012-07-09 13:58:27.179+: 10227: warning : qemuDomainObjTaint:1134 :
 Domain id=2 name='instance-0002'
 uuid=57aca8a6-d062-4a08-8d87-e4d11d259ac7 is tainted: high-privileges

 when you change the permissions in libvirt (to root, I presume), which
 is not necessary.

 2012-07-10 09:26:02.244+: 10229: error : virExecWithHook:328 :
 Cannot find 'pm-is-supported' in path: No such file or directory

 This error is harmless and can be easily solved by installing the
 following package:

 sudo apt-get install pm-utils -y

 Do you have something in the nova-scheduler logs?

 Cheers!

 On Tue, Jul 10, 2012 at 11:29 AM, Leander Bessa Beernaert 
 leande...@gmail.com wrote:

 Well i've checked the libvirt logs on both nodes and i found these two
 lines:

 2012-07-09 13:58:27.179+: 10227: warning : qemuDomainObjTaint:1134 :
 Domain id=2 name='instance-0002'
 uuid=57aca8a6-d062-4a08-8d87-e4d11d259ac7 is tainted: high-privileges
 2012-07-09 13:58:27.736+: 10226: error : qemuMonitorIORead:513 :
 Unable to read from monitor: Connection reset by peer

 The log is also filled with the message below; it gets repeated over
 and over.

 2012-07-10 09:26:02.244+: 10229: error : virExecWithHook:328 :
 Cannot find 'pm-is-supported' in path: No such file or directory
 2012-07-10 09:26:02.244+: 10229: warning : qemuCapsInit:856 :
 Failed to get host power management capabilities

 On Tue, Jul 10, 2012 at 8:16 AM, Razique Mahroua 
 razique.mahr...@gmail.com wrote:

 Hi Leander,
 try to check libvirtd.log files,
 is the instance still running on the first node while you are
 launching the migration process?

 Razique
 Nuage & Co - Razique Mahroua
 razique.mahr...@gmail.com

 On 9 Jul 2012, at 16:09, Leander Bessa Beernaert wrote:

 Ok, so i've updated to the test packages from

 The migration still fails, but i see no errors in the logs. I'm trying
 to migrate a VM with the m1.tiny flavor from one machine to another.
 Their hardware is identical and they have more than enough resources
 to support the m1.tiny flavor:

 cloud35 (total)     4  3867  186
 cloud35 (used_now)  0   312    5
 cloud35 (used_max)  0     0    0

 These are the logs from the origin compute node:
 http://paste.openstack.org/show/19319/ and the destination compute
 node: http://paste.openstack.org/show/19318/ . The scheduler's log has
 no visible errors or stack traces.

 I'm still using nfsv4.

 Any ideas?

 On Fri, Jul 6, 2012 at 7:57 PM, Leander Bessa Beernaert 
 leande...@gmail.com wrote:

 Thanks for the tip, it's better than nothing :)

 Regards,
 Leander

 On Fri, Jul 6, 2012 at 6:32 PM, Mandar Vaze / मंदार वझे 
 mandarv...@gmail.com wrote:

 Not sure if you are able to debug this, but a while ago there was a
 bug where instance.id was passed where instance.uuid was expected.
 This used to cause problems.
 It looks like you are using a distribution package rather than a
 devstack installation, so it is likely that the issue is now fixed.
 Can you try the latest packages (and/or try devstack if you can)

 I wish I could help more.

 -Mandar

 On Fri, Jul 6, 2012 at 3:26 PM, Leander Bessa Beernaert 
 leande...@gmail.com wrote:

 Hello,

 I've recently set up a system to test out the live migration feature.
 So far i've been able to launch the instances with the shared nfs
 folder. However, when i run the live-migration command i encounter
 this error in the destination compute node:

 2012-07-05 09:33:48 ERROR nova.manager [-] Error during
 ComputeManager.update_available_resource: Domain not found: no domain
 with matching id 2
 2012-07-05 09:33:48 TRACE nova.manager Traceback (most recent call last):
 2012-07-05 09:33:48 TRACE nova.manager   File
 "/usr/lib/python2.7/dist-packages/nova/manager.py", line 155, in
 periodic_tasks
 2012-07-05 09:33:48 TRACE nova.manager task(self, context)
 2012-07-05 09:33:48 TRACE nova.manager   File
 "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2409,
 in update_available_resource
 2012-07-05 09:33:48 TRACE nova.manager
 self.driver.update_available_resource(context, self.host)
 2012-07-05 09:33:48 

Re: [Openstack] [OpenStack][Nova] Live Migration Error

2012-07-09 Thread Mandar Vaze / मंदार वझे
I see pre_live_migration in the destination compute log, so the migration
at least started.

Since there are no errors in either compute log, is it possible that the
migration is simply taking a long time? (Just a possibility)
When you say the migration fails, what error did you get?
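You can also watch whether the migration job is actually making progress
on the source host with virsh, using the instance's libvirt name:

    virsh domjobinfo instance-0002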

-Mandar

On Mon, Jul 9, 2012 at 7:39 PM, Leander Bessa Beernaert leande...@gmail.com
 wrote:

 Ok, so i've updated to the test packages from

 The migration still fails, but i see no errors in the logs. I'm trying to
 migrate a VM with the m1.tiny flavor from one machine to another. Their
 hardware is identical and they have more than enough resources to support
 the m1.tiny flavor:

 cloud35 (total)     4  3867  186
 cloud35 (used_now)  0   312    5
 cloud35 (used_max)  0     0    0


 These are the logs from the origin compute node:
 http://paste.openstack.org/show/19319/  and  the destination compute
 node: http://paste.openstack.org/show/19318/ . The scheduler's log has no
 visible errors or stack traces.

 I'm still using nfsv4.

 Any ideas?


 On Fri, Jul 6, 2012 at 7:57 PM, Leander Bessa Beernaert 
 leande...@gmail.com wrote:

 Thanks for the tip, it's better than nothing :)

 Regards,
 Leander

 On Fri, Jul 6, 2012 at 6:32 PM, Mandar Vaze / मंदार वझे 
 mandarv...@gmail.com wrote:

 Not sure if you are able to debug this, but a while ago there was a bug
 where instance.id was passed where instance.uuid was expected. This
 used to cause problems.
 It looks like you are using a distribution package rather than a devstack
 installation, so it is likely that the issue is now fixed. Can you try the
 latest packages (and/or try devstack if you can)

 I wish I could help more.

 -Mandar


 On Fri, Jul 6, 2012 at 3:26 PM, Leander Bessa Beernaert 
 leande...@gmail.com wrote:

 Hello,

 I've recently set up a system to test out the live migration feature. So
 far i've been able to launch the instances with the shared nfs folder.
 However, when i run the live-migration command i encounter this error in
 the destination compute node:

 2012-07-05 09:33:48 ERROR nova.manager [-] Error during
 ComputeManager.update_available_resource: Domain not found: no domain with
 matching id 2
 2012-07-05 09:33:48 TRACE nova.manager Traceback (most recent call
 last):
 2012-07-05 09:33:48 TRACE nova.manager   File
 "/usr/lib/python2.7/dist-packages/nova/manager.py", line 155, in
 periodic_tasks
 2012-07-05 09:33:48 TRACE nova.manager task(self, context)
 2012-07-05 09:33:48 TRACE nova.manager   File
 "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2409, in
 update_available_resource
 2012-07-05 09:33:48 TRACE nova.manager
 self.driver.update_available_resource(context, self.host)
 2012-07-05 09:33:48 TRACE nova.manager   File
 "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py", line
 1936, in update_available_resource
 2012-07-05 09:33:48 TRACE nova.manager 'vcpus_used':
 self.get_vcpu_used(),
 2012-07-05 09:33:48 TRACE nova.manager   File
 "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py", line
 1743, in get_vcpu_used
 2012-07-05 09:33:48 TRACE nova.manager dom =
 self._conn.lookupByID(dom_id)
 2012-07-05 09:33:48 TRACE nova.manager   File
 "/usr/lib/python2.7/dist-packages/libvirt.py", line 2363, in lookupByID
 2012-07-05 09:33:48 TRACE nova.manager if ret is None:raise
 libvirtError('virDomainLookupByID() failed', conn=self)
 2012-07-05 09:33:48 TRACE nova.manager libvirtError: Domain not found:
 no domain with matching id 2


 Any ideas on how to solve this?

 Regards,
 Leander



Re: [Openstack] [OpenStack][Nova] Live Migration Error

2012-07-09 Thread Leander Bessa Beernaert
There is no error; it just doesn't do anything :s.

I've left the instance alone for 3 hours now and it's still stuck on the
original compute node.

On Mon, Jul 9, 2012 at 5:55 PM, Mandar Vaze / मंदार वझे 
mandarv...@gmail.com wrote:

 I see pre_live_migration in destination compute log, so migration at
 least started.

 Since there are no errors in either compute log, is it possible that the
 migration is simply taking a long time? (Just a possibility)
 When you say the migration fails, what error did you get?

 -Mandar

 On Mon, Jul 9, 2012 at 7:39 PM, Leander Bessa Beernaert 
 leande...@gmail.com wrote:

 Ok, so i've updated to the test packages from

 The migration still fails, but i see no errors in the logs. I'm trying to
 migrate a VM with the m1.tiny flavor from one machine to another. Their
 hardware is identical and they have more than enough resources to support
 the m1.tiny flavor:

 cloud35 (total)     4  3867  186
 cloud35 (used_now)  0   312    5
 cloud35 (used_max)  0     0    0


 These are the logs from the origin compute node:
 http://paste.openstack.org/show/19319/  and  the destination compute
 node: http://paste.openstack.org/show/19318/ . The scheduler's log has
 no visible errors or stack traces.

 I'm still using nfsv4.

 Any ideas?


 On Fri, Jul 6, 2012 at 7:57 PM, Leander Bessa Beernaert 
 leande...@gmail.com wrote:

 Thanks for the tip, it's better than nothing :)

 Regards,
 Leander

 On Fri, Jul 6, 2012 at 6:32 PM, Mandar Vaze / मंदार वझे 
 mandarv...@gmail.com wrote:

 Not sure if you are able to debug this, but a while ago there was a bug
 where instance.id was passed where instance.uuid was expected. This
 used to cause problems.
 It looks like you are using a distribution package rather than a devstack
 installation, so it is likely that the issue is now fixed. Can you try the
 latest packages (and/or try devstack if you can)

 I wish I could help more.

 -Mandar


 On Fri, Jul 6, 2012 at 3:26 PM, Leander Bessa Beernaert 
 leande...@gmail.com wrote:

 Hello,

 I've recently set up a system to test out the live migration feature.
 So far i've been able to launch the instances with the shared nfs folder.
 However, when i run the live-migration command i encounter this error in
 the destination compute node:

 2012-07-05 09:33:48 ERROR nova.manager [-] Error during
 ComputeManager.update_available_resource: Domain not found: no domain
 with matching id 2
 2012-07-05 09:33:48 TRACE nova.manager Traceback (most recent call
 last):
 2012-07-05 09:33:48 TRACE nova.manager   File
 "/usr/lib/python2.7/dist-packages/nova/manager.py", line 155, in
 periodic_tasks
 2012-07-05 09:33:48 TRACE nova.manager task(self, context)
 2012-07-05 09:33:48 TRACE nova.manager   File
 "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2409, in
 update_available_resource
 2012-07-05 09:33:48 TRACE nova.manager
 self.driver.update_available_resource(context, self.host)
 2012-07-05 09:33:48 TRACE nova.manager   File
 "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py", line
 1936, in update_available_resource
 2012-07-05 09:33:48 TRACE nova.manager 'vcpus_used':
 self.get_vcpu_used(),
 2012-07-05 09:33:48 TRACE nova.manager   File
 "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py", line
 1743, in get_vcpu_used
 2012-07-05 09:33:48 TRACE nova.manager dom =
 self._conn.lookupByID(dom_id)
 2012-07-05 09:33:48 TRACE nova.manager   File
 "/usr/lib/python2.7/dist-packages/libvirt.py", line 2363, in lookupByID
 2012-07-05 09:33:48 TRACE nova.manager if ret is None:raise
 libvirtError('virDomainLookupByID() failed', conn=self)
 2012-07-05 09:33:48 TRACE nova.manager libvirtError: Domain not
 found: no domain with matching id 2


 Any ideas on how to solve this?

 Regards,
 Leander



[Openstack] [OpenStack][Nova] Live Migration Error

2012-07-06 Thread Leander Bessa Beernaert
Hello,

I've recently set up a system to test out the live migration feature. So far
i've been able to launch the instances with the shared nfs folder. However,
when i run the live-migration command i encounter this error in the
destination compute node:

2012-07-05 09:33:48 ERROR nova.manager [-] Error during
 ComputeManager.update_available_resource: Domain not found: no domain with
 matching id 2
 2012-07-05 09:33:48 TRACE nova.manager Traceback (most recent call last):
 2012-07-05 09:33:48 TRACE nova.manager   File
 "/usr/lib/python2.7/dist-packages/nova/manager.py", line 155, in
 periodic_tasks
 2012-07-05 09:33:48 TRACE nova.manager task(self, context)
 2012-07-05 09:33:48 TRACE nova.manager   File
 "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2409, in
 update_available_resource
 2012-07-05 09:33:48 TRACE nova.manager
 self.driver.update_available_resource(context, self.host)
 2012-07-05 09:33:48 TRACE nova.manager   File
 "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py", line
 1936, in update_available_resource
 2012-07-05 09:33:48 TRACE nova.manager 'vcpus_used':
 self.get_vcpu_used(),
 2012-07-05 09:33:48 TRACE nova.manager   File
 "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py", line
 1743, in get_vcpu_used
 2012-07-05 09:33:48 TRACE nova.manager dom =
 self._conn.lookupByID(dom_id)
 2012-07-05 09:33:48 TRACE nova.manager   File
 "/usr/lib/python2.7/dist-packages/libvirt.py", line 2363, in lookupByID
 2012-07-05 09:33:48 TRACE nova.manager if ret is None:raise
 libvirtError('virDomainLookupByID() failed', conn=self)
 2012-07-05 09:33:48 TRACE nova.manager libvirtError: Domain not found: no
 domain with matching id 2
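
The live-migration command referred to above takes the usual nova CLI
form (the names below are placeholders):

    nova live-migration <instance-id> <destination-host>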


Any ideas on how to solve this?

Regards,
Leander
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack][Nova] Live Migration Error

2012-07-06 Thread Mandar Vaze / मंदार वझे
Not sure if you are able to debug this, but a while ago there was a bug
where instance.id was passed where instance.uuid was expected. This used to
cause problems.
It looks like you are using a distribution package rather than a devstack
installation, so it is likely that the issue is now fixed. Can you try the
latest packages (and/or try devstack if you can)
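
For what it's worth, the traceback points at a classic race:
update_available_resource lists the running domain ids and then looks each
one up, so a domain that disappears in between (for instance, one that has
just been migrated away) makes lookupByID raise. A minimal sketch of the
defensive pattern, using the libvirt-python bindings (an illustration, not
the actual nova patch):

    import libvirt

    def get_vcpu_used(conn):
        # Count vCPUs of running domains, tolerating domains that vanish.
        total = 0
        for dom_id in conn.listDomainsID():
            try:
                dom = conn.lookupByID(dom_id)
            except libvirt.libvirtError:
                # The domain disappeared between the list and the lookup
                # (e.g. destroyed or migrated away); skip it.
                continue
            # vcpus() returns (per-vCPU info, CPU affinity maps); the
            # length of the first list is the number of vCPUs.
            total += len(dom.vcpus()[0])
        return total

    # Usage: get_vcpu_used(libvirt.open('qemu:///system'))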

I wish I could help more.

-Mandar


On Fri, Jul 6, 2012 at 3:26 PM, Leander Bessa Beernaert leande...@gmail.com
 wrote:

 Hello,

 I've recently set up a system to test out the live migration feature. So
 far i've been able to launch the instances with the shared nfs folder.
 However, when i run the live-migration command i encounter this error in
 the destination compute node:

 2012-07-05 09:33:48 ERROR nova.manager [-] Error during
 ComputeManager.update_available_resource: Domain not found: no domain with
 matching id 2
 2012-07-05 09:33:48 TRACE nova.manager Traceback (most recent call last):
 2012-07-05 09:33:48 TRACE nova.manager   File
 "/usr/lib/python2.7/dist-packages/nova/manager.py", line 155, in
 periodic_tasks
 2012-07-05 09:33:48 TRACE nova.manager task(self, context)
 2012-07-05 09:33:48 TRACE nova.manager   File
 "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2409, in
 update_available_resource
 2012-07-05 09:33:48 TRACE nova.manager
 self.driver.update_available_resource(context, self.host)
 2012-07-05 09:33:48 TRACE nova.manager   File
 "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py", line
 1936, in update_available_resource
 2012-07-05 09:33:48 TRACE nova.manager 'vcpus_used':
 self.get_vcpu_used(),
 2012-07-05 09:33:48 TRACE nova.manager   File
 "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py", line
 1743, in get_vcpu_used
 2012-07-05 09:33:48 TRACE nova.manager dom =
 self._conn.lookupByID(dom_id)
 2012-07-05 09:33:48 TRACE nova.manager   File
 "/usr/lib/python2.7/dist-packages/libvirt.py", line 2363, in lookupByID
 2012-07-05 09:33:48 TRACE nova.manager if ret is None:raise
 libvirtError('virDomainLookupByID() failed', conn=self)
 2012-07-05 09:33:48 TRACE nova.manager libvirtError: Domain not found: no
 domain with matching id 2


 Any ideas on how to solve this?

 Regards,
 Leander



Re: [Openstack] [OpenStack][Nova] Live Migration Error

2012-07-06 Thread Leander Bessa Beernaert
Thanks for the tip, it's better than nothing :)

Regards,
Leander
On Fri, Jul 6, 2012 at 6:32 PM, Mandar Vaze / मंदार वझे 
mandarv...@gmail.com wrote:

 Not sure if you are able to debug this, but a while ago there was a bug
 where instance.id was passed where instance.uuid was expected. This used
 to cause problems.
 It looks like you are using a distribution package rather than a devstack
 installation, so it is likely that the issue is now fixed. Can you try the
 latest packages (and/or try devstack if you can)

 I wish I could help more.

 -Mandar


 On Fri, Jul 6, 2012 at 3:26 PM, Leander Bessa Beernaert 
 leande...@gmail.com wrote:

 Hello,

 I've recently set up a system to test out the live migration feature. So
 far i've been able to launch the instances with the shared nfs folder.
 However, when i run the live-migration command i encounter this error in
 the destination compute node:

 2012-07-05 09:33:48 ERROR nova.manager [-] Error during
 ComputeManager.update_available_resource: Domain not found: no domain with
 matching id 2
 2012-07-05 09:33:48 TRACE nova.manager Traceback (most recent call last):
 2012-07-05 09:33:48 TRACE nova.manager   File
 "/usr/lib/python2.7/dist-packages/nova/manager.py", line 155, in
 periodic_tasks
 2012-07-05 09:33:48 TRACE nova.manager task(self, context)
 2012-07-05 09:33:48 TRACE nova.manager   File
 "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2409, in
 update_available_resource
 2012-07-05 09:33:48 TRACE nova.manager
 self.driver.update_available_resource(context, self.host)
 2012-07-05 09:33:48 TRACE nova.manager   File
 "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py", line
 1936, in update_available_resource
 2012-07-05 09:33:48 TRACE nova.manager 'vcpus_used':
 self.get_vcpu_used(),
 2012-07-05 09:33:48 TRACE nova.manager   File
 "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py", line
 1743, in get_vcpu_used
 2012-07-05 09:33:48 TRACE nova.manager dom =
 self._conn.lookupByID(dom_id)
 2012-07-05 09:33:48 TRACE nova.manager   File
 "/usr/lib/python2.7/dist-packages/libvirt.py", line 2363, in lookupByID
 2012-07-05 09:33:48 TRACE nova.manager if ret is None:raise
 libvirtError('virDomainLookupByID() failed', conn=self)
 2012-07-05 09:33:48 TRACE nova.manager libvirtError: Domain not found:
 no domain with matching id 2


 Any ideas on how to solve this?

 Regards,
 Leander
