Re: [Users] oVirt 3.1 - VM Migration Issue

2013-01-10 Thread Simon Grinberg


- Original Message -
> From: "Dan Kenigsberg" 
> To: "Simon Grinberg" 
> Cc: users@ovirt.org, "Tom Brown" 
> Sent: Thursday, January 10, 2013 2:09:28 PM
> Subject: Re: [Users] oVirt 3.1 - VM Migration Issue
> 
> On Wed, Jan 09, 2013 at 11:34:56AM -0500, Simon Grinberg wrote:
> > 
> > 
> > - Original Message -
> > > From: "Dan Kenigsberg" 
> > > To: "Simon Grinberg" 
> > > Cc: users@ovirt.org, "Tom Brown" 
> > > Sent: Wednesday, January 9, 2013 6:20:02 PM
> > > Subject: Re: [Users] oVirt 3.1 - VM Migration Issue
> > > 
> > > On Wed, Jan 09, 2013 at 09:05:37AM -0500, Simon Grinberg wrote:
> > > > 
> > > > 
> > > > - Original Message -
> > > > > From: "Dan Kenigsberg" 
> > > > > To: "Tom Brown" 
> > > > > Cc: "Simon Grinberg" , users@ovirt.org
> > > > > Sent: Wednesday, January 9, 2013 2:11:14 PM
> > > > > Subject: Re: [Users] oVirt 3.1 - VM Migration Issue
> > > > > 
> > > > > On Wed, Jan 09, 2013 at 10:06:12AM +, Tom Brown wrote:
> > > > > > 
> > > > > > 
> > > > > > >> libvirtError: internal error Process exited while
> > > > > > >> reading console log output
> > > > > > > could this be related to selinux? can you try disabling
> > > > > > > it and see if migration succeeds?
> > > > > > 
> > > > > > It was indeed the case! My src node was set to disabled
> > > > > > and my destination node was enforcing; this was due to the
> > > > > > destination being the first HV built and therefore
> > > > > > provisioned slightly differently - my kickstart server is
> > > > > > a VM in the pool.
> > > > > > 
> > > > > > It's interesting that a VM can be provisioned onto a node
> > > > > > that is set to enforcing and yet not migrated to.
> > > > > 
> > > > > I have (only a vague) memory of discussing this already...
> > > > > Shouldn't oVirt-Engine be aware of selinux enforcement? If a
> > > > > cluster has disabled hosts, an enforcing host should not be
> > > > > operational (or at least warn the admin about that).
> > > > 
> > > > 
> > > > I recall something like that, but I don't recall we ever
> > > > converged, and I can't find the thread.
> > > 
> > > What is your opinion on the subject?
> > > 
> > > I think that at the least, the scheduler must be aware of selinux
> > > enforcement when it chooses a migration destination.
> > > 
> > 
> > Either all or none in the same cluster - that is the default.
> > 
> > In a mixed environment, the non-enforcing hosts should be moved to
> > non-operational, but VMs should not be migrated off due to this; we
> > don't want them moved to protected hosts without the admin's
> > awareness.
> > 
> > As an exception to the above, have a config parameter that allows,
> > in a mixed environment, migrating VMs from an insecure host onto a
> > secure host, never the other way around. This is to support the
> > transition from a non-enabled system to an enabled one.
> 
> Please see Tom's report above:
> 
> > > > > > It was indeed the case! My src node was set to disabled
> > > > > > and my destination node was enforcing ...
> 
> We apparently cannot migrate an insecure guest into an enforcing
> system.

Well, you've asked for my opinion, not about the current implementation :)
I'm not sure anything was ever implemented for the selinux requirements - I
need to check. The error I see in this thread is a runtime failure due to an
improper setting, which is to be expected on migration from a non-labelled
zone into a labelled zone.
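
For anyone who wants to catch the mismatch up front, comparing the selinux
mode on the source and destination before migrating is enough. A minimal
sketch (a hypothetical helper, not part of vdsm; it only assumes the
standard selinuxfs interface):

    import os

    def selinux_mode():
        # selinuxfs is mounted at /sys/fs/selinux on recent kernels and
        # at /selinux on older (EL6-era) hosts; it is absent when selinux
        # is disabled.
        for mount in ("/sys/fs/selinux", "/selinux"):
            enforce = os.path.join(mount, "enforce")
            if os.path.exists(enforce):
                with open(enforce) as f:
                    mode = f.read().strip()
                return "enforcing" if mode == "1" else "permissive"
        return "disabled"

    # Run on both src and dst; a 'disabled' source with an 'enforcing'
    # destination is exactly the combination that failed in this thread.
    print(selinux_mode())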

> 
> >  
> > I think this is the closest I can get to the agreement (or at least
> > concerns) raised in that old thread I can't find.
> 
> 
> 


Re: [Users] oVirt 3.1 - VM Migration Issue

2013-01-10 Thread Dan Kenigsberg
On Wed, Jan 09, 2013 at 11:34:56AM -0500, Simon Grinberg wrote:
> 
> 
> - Original Message -
> > From: "Dan Kenigsberg" 
> > To: "Simon Grinberg" 
> > Cc: users@ovirt.org, "Tom Brown" 
> > Sent: Wednesday, January 9, 2013 6:20:02 PM
> > Subject: Re: [Users] oVirt 3.1 - VM Migration Issue
> > 
> > On Wed, Jan 09, 2013 at 09:05:37AM -0500, Simon Grinberg wrote:
> > > 
> > > 
> > > - Original Message -
> > > > From: "Dan Kenigsberg" 
> > > > To: "Tom Brown" 
> > > > Cc: "Simon Grinberg" , users@ovirt.org
> > > > Sent: Wednesday, January 9, 2013 2:11:14 PM
> > > > Subject: Re: [Users] oVirt 3.1 - VM Migration Issue
> > > > 
> > > > On Wed, Jan 09, 2013 at 10:06:12AM +, Tom Brown wrote:
> > > > > 
> > > > > 
> > > > > >> libvirtError: internal error Process exited while reading
> > > > > >> console log output
> > > > > > could this be related to selinux? can you try disabling it
> > > > > > and see if migration succeeds?
> > > > > 
> > > > > It was indeed the case! My src node was set to disabled and my
> > > > > destination node was enforcing; this was due to the destination
> > > > > being the first HV built and therefore provisioned slightly
> > > > > differently - my kickstart server is a VM in the pool.
> > > > > 
> > > > > It's interesting that a VM can be provisioned onto a node that
> > > > > is set to enforcing and yet not migrated to.
> > > > 
> > > > I have (only a vague) memory of discussing this already...
> > > > Shouldn't oVirt-Engine be aware of selinux enforcement? If a
> > > > cluster has disabled hosts, an enforcing host should not be
> > > > operational (or at least warn the admin about that).
> > > 
> > > 
> > > I recall something like that, but I don't recall we ever
> > > converged, and I can't find the thread.
> > 
> > What is your opinion on the subject?
> > 
> > I think that at the least, the scheduler must be aware of selinux
> > enforcement when it chooses a migration destination.
> > 
> 
> Either all or none in the same cluster - that is the default.
> 
> In a mixed environment, the non-enforcing hosts should be moved to
> non-operational, but VMs should not be migrated off due to this; we don't
> want them moved to protected hosts without the admin's awareness.
> 
> As an exception to the above, have a config parameter that allows, in a
> mixed environment, migrating VMs from an insecure host onto a secure host,
> never the other way around. This is to support the transition from a
> non-enabled system to an enabled one.

Please see Tom's report above:

> > > > > It was indeed the case! My src node was set to disabled and my
> > > > > destination node was enforcing ...

We apparently cannot migrate an insecure guest into an enforcing system.

>  
> I think this is the closest I can get to the agreement (or at least concerns) 
> raised in that old thread I can't find. 




Re: [Users] oVirt 3.1 - VM Migration Issue

2013-01-09 Thread Dan Kenigsberg
On Wed, Jan 09, 2013 at 09:05:37AM -0500, Simon Grinberg wrote:
> 
> 
> - Original Message -
> > From: "Dan Kenigsberg" 
> > To: "Tom Brown" 
> > Cc: "Simon Grinberg" , users@ovirt.org
> > Sent: Wednesday, January 9, 2013 2:11:14 PM
> > Subject: Re: [Users] oVirt 3.1 - VM Migration Issue
> > 
> > On Wed, Jan 09, 2013 at 10:06:12AM +, Tom Brown wrote:
> > > 
> > > 
> > > >> libvirtError: internal error Process exited while reading
> > > > > >> console log output
> > > > could this be related to selinux? can you try disabling it and
> > > > see if migration succeeds?
> > > 
> > > It was indeed the case! My src node was set to disabled and my
> > > destination node was enforcing; this was due to the destination
> > > being the first HV built and therefore provisioned slightly
> > > differently - my kickstart server is a VM in the pool.
> > > 
> > > It's interesting that a VM can be provisioned onto a node that is
> > > set to enforcing and yet not migrated to.
> > 
> > I have (only a vague) memory of discussing this already...
> > Shouldn't oVirt-Engine be aware of selinux enforcement? If a cluster
> > has disabled hosts, an enforcing host should not be operational (or
> > at least warn the admin about that).
> 
> 
> I recall something like that, but I don't recall we ever converged, and I
> can't find the thread.

What is your opinion on the subject?

I think that at the least, the scheduler must be aware of selinux
enforcement when it chooses a migration destination.
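
To make that concrete, something along these lines is what I have in mind -
a rough sketch only, not engine code; the host tuples and the filter itself
are made up for illustration:

    # Hypothetical scheduler filter. Assumes each candidate host reports
    # its selinux mode as 'enforcing', 'permissive' or 'disabled'.

    def filter_migration_destinations(src_mode, candidates):
        """Drop destinations whose selinux mode conflicts with the source."""
        allowed = []
        for host, dst_mode in candidates:
            # A guest started without selinux labels cannot be accepted
            # by an enforcing destination, so skip that combination.
            if src_mode == "disabled" and dst_mode == "enforcing":
                continue
            allowed.append(host)
        return allowed

    # The failing setup from this thread:
    print(filter_migration_destinations(
        "disabled",
        [("ovirt-node", "enforcing"), ("ovirt-node002", "disabled")]))
    # -> ['ovirt-node002']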


Re: [Users] oVirt 3.1 - VM Migration Issue

2013-01-09 Thread Simon Grinberg


- Original Message -
> From: "Dan Kenigsberg" 
> To: "Simon Grinberg" 
> Cc: users@ovirt.org, "Tom Brown" 
> Sent: Wednesday, January 9, 2013 6:20:02 PM
> Subject: Re: [Users] oVirt 3.1 - VM Migration Issue
> 
> On Wed, Jan 09, 2013 at 09:05:37AM -0500, Simon Grinberg wrote:
> > 
> > 
> > - Original Message -
> > > From: "Dan Kenigsberg" 
> > > To: "Tom Brown" 
> > > Cc: "Simon Grinberg" , users@ovirt.org
> > > Sent: Wednesday, January 9, 2013 2:11:14 PM
> > > Subject: Re: [Users] oVirt 3.1 - VM Migration Issue
> > > 
> > > On Wed, Jan 09, 2013 at 10:06:12AM +, Tom Brown wrote:
> > > > 
> > > > 
> > > > >> libvirtError: internal error Process exited while reading
> > > > >> console log output
> > > > > could this be related to selinux? can you try disabling it
> > > > > and see if migration succeeds?
> > > > 
> > > > It was indeed the case! My src node was set to disabled and my
> > > > destination node was enforcing; this was due to the destination
> > > > being the first HV built and therefore provisioned slightly
> > > > differently - my kickstart server is a VM in the pool.
> > > > 
> > > > It's interesting that a VM can be provisioned onto a node that
> > > > is set to enforcing and yet not migrated to.
> > > 
> > > I have (only a vague) memory of discussing this already...
> > > Shouldn't oVirt-Engine be aware of selinux enforcement? If a
> > > cluster has disabled hosts, an enforcing host should not be
> > > operational (or at least warn the admin about that).
> > 
> > 
> > I recall something like that, but I don't recall we ever
> > converged, and I can't find the thread.
> 
> What is your opinion on the subject?
> 
> I think that at the least, the scheduler must be aware of selinux
> enforcement when it chooses a migration destination.
> 

Either all or none in the same cluster - that is the default.

In a mixed environment, the non-enforcing hosts should be moved to
non-operational, but VMs should not be migrated off due to this; we don't
want them moved to protected hosts without the admin's awareness.

As an exception to the above, have a config parameter that allows, in a
mixed environment, migrating VMs from an insecure host onto a secure host,
never the other way around. This is to support the transition from a
non-enabled system to an enabled one.

I think this is the closest I can get to the agreement (or at least
concerns) raised in that old thread I can't find.
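
To spell the transition rule out in code - a minimal sketch; the
'AllowInsecureToSecureMigration' config key is invented for illustration,
nothing like it exists in the engine today (and, as noted elsewhere in the
thread, the insecure-to-secure direction currently fails at runtime anyway):

    # Hypothetical policy check for the transition scenario above.

    def migration_allowed(src_enforcing, dst_enforcing, config):
        if src_enforcing == dst_enforcing:
            return True                   # homogeneous cluster: the default
        if not src_enforcing and dst_enforcing:
            # insecure -> secure: only during a transition, and only if
            # the admin explicitly opted in via the (invented) config key.
            return config.get("AllowInsecureToSecureMigration", False)
        return False                      # secure -> insecure: never

    print(migration_allowed(False, True,
                            {"AllowInsecureToSecureMigration": True}))
    # -> True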


Re: [Users] oVirt 3.1 - VM Migration Issue

2013-01-09 Thread Simon Grinberg


- Original Message -
> From: "Dan Kenigsberg" 
> To: "Tom Brown" 
> Cc: "Simon Grinberg" , users@ovirt.org
> Sent: Wednesday, January 9, 2013 2:11:14 PM
> Subject: Re: [Users] oVirt 3.1 - VM Migration Issue
> 
> On Wed, Jan 09, 2013 at 10:06:12AM +, Tom Brown wrote:
> > 
> > 
> > >> libvirtError: internal error Process exited while reading
> > >> console log output
> > > could this be related to selinux? can you try disabling it and
> > > see if migration succeeds?
> > 
> > It was indeed the case! My src node was set to disabled and my
> > destination node was enforcing; this was due to the destination
> > being the first HV built and therefore provisioned slightly
> > differently - my kickstart server is a VM in the pool.
> > 
> > It's interesting that a VM can be provisioned onto a node that is
> > set to enforcing and yet not migrated to.
> 
> I have (only a vague) memory of discussing this already...
> Shouldn't oVirt-Engine be aware of selinux enforcement? If a cluster
> has disabled hosts, an enforcing host should not be operational (or
> at least warn the admin about that).


I recall something like that, but I don't recall we ever converged, and I
can't find the thread.





Re: [Users] oVirt 3.1 - VM Migration Issue

2013-01-09 Thread Dan Kenigsberg
On Wed, Jan 09, 2013 at 10:06:12AM +, Tom Brown wrote:
> 
> 
> >> libvirtError: internal error Process exited while reading console log output
> > could this be related to selinux? can you try disabling it and see if 
> > migration succeeds?
> 
> It was indeed the case! My src node was set to disabled and my destination
> node was enforcing; this was due to the destination being the first HV built
> and therefore provisioned slightly differently - my kickstart server is a VM
> in the pool.
> 
> It's interesting that a VM can be provisioned onto a node that is set to
> enforcing and yet not migrated to.

I have (only a vague) memory of discussing this already...
Shouldn't oVirt-Engine be aware of selinux enforcement? If a cluster has
disabled hosts, an enforcing host should not be operational (or at least
warn the admin about that).


Re: [Users] oVirt 3.1 - VM Migration Issue

2013-01-09 Thread Tom Brown


>> libvirtError: internal error Process exited while reading console log output
> could this be related to selinux? can you try disabling it and see if 
> migration succeeds?

It was indeed the case! My src node was set to disabled and my destination node
was enforcing; this was due to the destination being the first HV built and
therefore provisioned slightly differently - my kickstart server is a VM in the
pool.

It's interesting that a VM can be provisioned onto a node that is set to
enforcing and yet not migrated to.

thanks



Re: [Users] oVirt 3.1 - VM Migration Issue

2013-01-09 Thread Dan Kenigsberg
On Wed, Jan 09, 2013 at 03:32:42AM -0500, Haim Ateya wrote:
> both.

qemu logs (/var/log/libvirt/qemu/vmname.log) may have interesting
content, too. Tom, what are your libvirt and qemu-kvm versions (src and
dst)?
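
(Both can be pulled in one go on an rpm-based host - a trivial sketch, to
be run on the source and the destination:)

    import subprocess

    # Print the installed libvirt and qemu-kvm package versions.
    subprocess.call(["rpm", "-q", "libvirt", "qemu-kvm"])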


Re: [Users] oVirt 3.1 - VM Migration Issue

2013-01-09 Thread Haim Ateya
both.

- Original Message -
> From: "Tom Brown" 
> To: "Haim Ateya" 
> Cc: "Dan Kenigsberg" , users@ovirt.org, "Roy Golan" 
> 
> Sent: Wednesday, January 9, 2013 10:03:11 AM
> Subject: Re: [Users] oVirt 3.1 - VM Migration Issue
> 
> Source or destination?
> 
> On 9 Jan 2013, at 07:35, Haim Ateya  wrote:
> 
> > odd,
> > 
> > migration seems to be successful on the destination server, but the
> > source reports a problem:
> > 
> > Thread-1484336::DEBUG::2013-01-08
> > 10:41:07,659::BindingXMLRPC::883::vds::(wrapper) client
> > [10.192.42.207]::call vmMigrate with ({'src': '10.192.42.196',
> > 'dst': '10.192.42.165:54321', 'vmId': 'cfb17b98-1476-4fbf-9f
> > Thread-1484336::DEBUG::2013-01-08
> > 10:41:07,659::API::432::vds::(migrate) {'src': '10.192.42.196',
> > 'dst': '10.192.42.165:54321', 'vmId':
> > 'cfb17b98-1476-4fbf-9fab-7c7f48b60adf', 'method': 'online'}
> > Thread-1484337::DEBUG::2013-01-08
> > 10:41:07,660::vm::125::vm.Vm::(_setupVdsConnection)
> > vmId=`cfb17b98-1476-4fbf-9fab-7c7f48b60adf`::Destination server
> > is: 10.192.42.165:54321
> > Thread-1484336::DEBUG::2013-01-08
> > 10:41:07,660::BindingXMLRPC::890::vds::(wrapper) return vmMigrate
> > with {'status': {'message': 'Migration process starting', 'code':
> > 0}}
> > Thread-1484337::DEBUG::2013-01-08
> > 10:41:07,660::vm::127::vm.Vm::(_setupVdsConnection)
> > vmId=`cfb17b98-1476-4fbf-9fab-7c7f48b60adf`::Initiating connection
> > with destination
> > Thread-1484337::DEBUG::2013-01-08
> > 10:41:07,752::libvirtvm::278::vm.Vm::(_getDiskLatency)
> > vmId=`cfb17b98-1476-4fbf-9fab-7c7f48b60adf`::Disk vda latency not
> > available
> > Thread-1484337::DEBUG::2013-01-08
> > 10:41:07,835::vm::173::vm.Vm::(_prepareGuest)
> > vmId=`cfb17b98-1476-4fbf-9fab-7c7f48b60adf`::migration Process
> > begins
> > Thread-1484337::DEBUG::2013-01-08
> > 10:41:07,927::vm::237::vm.Vm::(run)
> > vmId=`cfb17b98-1476-4fbf-9fab-7c7f48b60adf`::migration semaphore
> > acquired
> > Thread-1484337::DEBUG::2013-01-08
> > 10:41:08,251::libvirtvm::449::vm.Vm::(_startUnderlyingMigration)
> > vmId=`cfb17b98-1476-4fbf-9fab-7c7f48b60adf`::starting migration to
> > qemu+tls://10.192.42.165/system
> > Thread-1484338::DEBUG::2013-01-08
> > 10:41:08,251::libvirtvm::335::vm.Vm::(run)
> > vmId=`cfb17b98-1476-4fbf-9fab-7c7f48b60adf`::migration downtime
> > thread started
> > Thread-1484339::DEBUG::2013-01-08
> > 10:41:08,252::libvirtvm::371::vm.Vm::(run)
> > vmId=`cfb17b98-1476-4fbf-9fab-7c7f48b60adf`::starting migration
> > monitor thread
> > Thread-1484337::DEBUG::2013-01-08
> > 10:41:09,521::libvirtvm::350::vm.Vm::(cancel)
> > vmId=`cfb17b98-1476-4fbf-9fab-7c7f48b60adf`::canceling migration
> > downtime thread
> > Thread-1484337::DEBUG::2013-01-08
> > 10:41:09,521::libvirtvm::409::vm.Vm::(stop)
> > vmId=`cfb17b98-1476-4fbf-9fab-7c7f48b60adf`::stopping migration
> > monitor thread
> > Thread-1484338::DEBUG::2013-01-08
> > 10:41:09,522::libvirtvm::347::vm.Vm::(run)
> > vmId=`cfb17b98-1476-4fbf-9fab-7c7f48b60adf`::migration downtime
> > thread exiting
> > Thread-1484337::ERROR::2013-01-08
> > 10:41:09,522::vm::179::vm.Vm::(_recover)
> > vmId=`cfb17b98-1476-4fbf-9fab-7c7f48b60adf`::internal error
> > Process exited while reading console log output:
> > Thread-1484340::DEBUG::2013-01-08
> > 10:41:09,544::task::568::TaskManager.Task::(_updateState)
> > Task=`bfebf940-d2a3-4b6c-948b-cac951a686bf`::moving from state
> > init -> state preparing
> > Thread-1484340::INFO::2013-01-08
> > 10:41:09,544::logUtils::37::dispatcher::(wrapper) Run and protect:
> > repoStats(options=None)
> > Thread-1484340::INFO::2013-01-08
> > 10:41:09,544::logUtils::39::dispatcher::(wrapper) Run and protect:
> > repoStats, Return response:
> > {'2a1939bd-9fa3-4896-b8a9-46234172aae7': {'delay':
> > '0.00229001045227', 'lastCheck': '
> > Thread-1484340::DEBUG::2013-01-08
> > 10:41:09,544::task::1151::TaskManager.Task::(prepare)
> > Task=`bfebf940-d2a3-4b6c-948b-cac951a686bf`::finished:
> > {'2a1939bd-9fa3-4896-b8a9-46234172aae7': {'delay':
> > '0.00229001045227',
> > Thread-1484340::DEBUG::2013-01-08
> > 10:41:09,544::task::568::TaskManager.Task::(_updateState)
> > Task=`bfebf940-d2a3-4b6c-948b-cac951a686

Re: [Users] oVirt 3.1 - VM Migration Issue

2013-01-09 Thread Tom Brown
59::libvirtvm::278::vm.Vm::(_getDiskLatency) 
> vmId=`7b8f725b-0a67-46d4-a3cf-db43daad0c42`::Disk vda latency not available
> Thread-1484341::DEBUG::2013-01-08 
> 10:41:09,559::libvirtvm::278::vm.Vm::(_getDiskLatency) 
> vmId=`9dc63ce4-0f76-4963-adfe-6f8eb1a44806`::Disk vda latency not available
> Thread-1484341::DEBUG::2013-01-08 
> 10:41:09,559::libvirtvm::278::vm.Vm::(_getDiskLatency) 
> vmId=`e8683e88-f3f2-4fe9-80f7-f4888d8e7a13`::Disk vda latency not available
> Thread-1484337::ERROR::2013-01-08 10:41:09,754::vm::258::vm.Vm::(run) 
> vmId=`cfb17b98-1476-4fbf-9fab-7c7f48b60adf`::Failed to migrate
> Traceback (most recent call last):
>  File "/usr/share/vdsm/vm.py", line 245, in run
>self._startUnderlyingMigration()
>  File "/usr/share/vdsm/libvirtvm.py", line 474, in _startUnderlyingMigration
>None, maxBandwidth)
>  File "/usr/share/vdsm/libvirtvm.py", line 510, in f
>ret = attr(*args, **kwargs)
>  File "/usr/lib64/python2.6/site-packages/vdsm/libvirtconnection.py", line 
> 83, in wrapper
>    ret = f(*args, **kwargs)
>  File "/usr/lib64/python2.6/site-packages/libvirt.py", line 1103, in 
> migrateToURI2
>if ret == -1: raise libvirtError ('virDomainMigrateToURI2() failed', 
> dom=self)
> libvirtError: internal error Process exited while reading console log output: 
> 
> Any chance you could attach libvirtd.log and the qemu log (/var/log/libvirt/qemu/{}.log)?
> 
> Danken - any insights?
> 
> - Original Message -
>> From: "Tom Brown" 
>> To: "Roy Golan" 
>> Cc: "Haim Ateya" , users@ovirt.org
>> Sent: Tuesday, January 8, 2013 11:50:26 AM
>> Subject: Re: [Users] oVirt 3.1 - VM Migration Issue
>> 
>> 
>>> can you attach the same snip from the src VDSM 10.192.42.196 as
>>> well?
>> 
>> The log is pretty chatty, therefore I did another migration attempt
>> and snipped the new log from both sides.
>> 
>> see attached
>> 
>> 


Re: [Users] oVirt 3.1 - VM Migration Issue

2013-01-08 Thread Roy Golan

On 01/09/2013 09:35 AM, Haim Ateya wrote:

libvirtError: internal error Process exited while reading console log output
could this be related to selinux? can you try disabling it and see if 
migration succeeds?



Re: [Users] oVirt 3.1 - VM Migration Issue

2013-01-08 Thread Haim Ateya
:2013-01-08 10:41:09,754::vm::258::vm.Vm::(run) 
vmId=`cfb17b98-1476-4fbf-9fab-7c7f48b60adf`::Failed to migrate
Traceback (most recent call last):
  File "/usr/share/vdsm/vm.py", line 245, in run
self._startUnderlyingMigration()
  File "/usr/share/vdsm/libvirtvm.py", line 474, in _startUnderlyingMigration
None, maxBandwidth)
  File "/usr/share/vdsm/libvirtvm.py", line 510, in f
ret = attr(*args, **kwargs)
  File "/usr/lib64/python2.6/site-packages/vdsm/libvirtconnection.py", line 83, 
in wrapper
ret = f(*args, **kwargs)
  File "/usr/lib64/python2.6/site-packages/libvirt.py", line 1103, in 
migrateToURI2
if ret == -1: raise libvirtError ('virDomainMigrateToURI2() failed', 
dom=self)
libvirtError: internal error Process exited while reading console log output: 

Any chance you could attach libvirtd.log and the qemu log (/var/log/libvirt/qemu/{}.log)?

Danken - any insights?

- Original Message -
> From: "Tom Brown" 
> To: "Roy Golan" 
> Cc: "Haim Ateya" , users@ovirt.org
> Sent: Tuesday, January 8, 2013 11:50:26 AM
> Subject: Re: [Users] oVirt 3.1 - VM Migration Issue
> 
> 
> > can you attach the same snip from the src VDSM 10.192.42.196 as
> > well?
> 
> The log is pretty chatty, therefore I did another migration attempt
> and snipped the new log from both sides.
> 
> see attached
> 
> 


Re: [Users] oVirt 3.1 - VM Migration Issue

2013-01-08 Thread Roy Golan

On 01/07/2013 01:18 PM, Tom Brown wrote:

VDSM is the virtualization agent. Look at /var/log/vdsm/vdsm.log

Many thanks - attached is the snip of that log when I try the migration

thanks


Can you attach the same snip from the src VDSM 10.192.42.196 as well?


Re: [Users] oVirt 3.1 - VM Migration Issue

2013-01-06 Thread Roy Golan

On 01/03/2013 05:07 PM, Tom Brown wrote:



Interesting - please search for the migrationCreate command on the destination
host and search for ERROR afterwards. What do you see?

- Original Message -

From: "Tom Brown" 
To: users@ovirt.org
Sent: Thursday, January 3, 2013 4:12:05 PM
Subject: [Users] oVirt 3.1 - VM Migration Issue


Hi

I seem to have an issue with a single VM and migration; other VMs
can migrate OK. When migrating from the GUI it appears to just hang,
but in the engine.log I see the following

2013-01-03 14:03:10,359 INFO  [org.ovirt.engine.core.bll.VdsSelector]
(ajp--0.0.0.0-8009-59) Checking for a specific VDS only -
id:a2d84a1e-3e18-11e2-8851-3cd92b4c8e89,
name:ovirt-node.domain-name, host_name(ip):10.192.42.165
2013-01-03 14:03:10,411 INFO
[org.ovirt.engine.core.bll.MigrateVmToServerCommand]
(pool-3-thread-48) [4d32917d] Running command:
MigrateVmToServerCommand internal: false. Entities affected :  ID:
9dc63ce4-0f76-4963-adfe-6f8eb1a44806 Type: VM
2013-01-03 14:03:10,413 INFO  [org.ovirt.engine.core.bll.VdsSelector]
(pool-3-thread-48) [4d32917d] Checking for a specific VDS only -
id:a2d84a1e-3e18-11e2-8851-3cd92b4c8e89,
name:ovirt-node.domain-name, host_name(ip):10.192.42.165
2013-01-03 14:03:11,028 INFO
[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand]
(pool-3-thread-48) [4d32917d] START, MigrateVDSCommand(vdsId =
1a52b722-43a1-11e2-af96-3cd92b4c8e89,
vmId=9dc63ce4-0f76-4963-adfe-6f8eb1a44806, srcHost=10.192.42.196,
dstVdsId=a2d84a1e-3e18-11e2-8851-3cd92b4c8e89,
dstHost=10.192.42.165:54321, migrationMethod=ONLINE), log id:
5011789b
2013-01-03 14:03:11,030 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
(pool-3-thread-48) [4d32917d] VdsBroker::migrate::Entered
(vm_guid=9dc63ce4-0f76-4963-adfe-6f8eb1a44806,
srcHost=10.192.42.196, dstHost=10.192.42.165:54321,  method=online
2013-01-03 14:03:11,031 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
(pool-3-thread-48) [4d32917d] START, MigrateBrokerVDSCommand(vdsId =
1a52b722-43a1-11e2-af96-3cd92b4c8e89,
vmId=9dc63ce4-0f76-4963-adfe-6f8eb1a44806, srcHost=10.192.42.196,
dstVdsId=a2d84a1e-3e18-11e2-8851-3cd92b4c8e89,
dstHost=10.192.42.165:54321, migrationMethod=ONLINE), log id:
7cd53864
2013-01-03 14:03:11,041 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
(pool-3-thread-48) [4d32917d] FINISH, MigrateBrokerVDSCommand, log
id: 7cd53864
2013-01-03 14:03:11,086 INFO
[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand]
(pool-3-thread-48) [4d32917d] FINISH, MigrateVDSCommand, return:
MigratingFrom, log id: 5011789b
2013-01-03 14:03:11,606 INFO
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(QuartzScheduler_Worker-29) vds::refreshVmList vm id
9dc63ce4-0f76-4963-adfe-6f8eb1a44806 is migrating to vds
ovirt-node.domain-name ignoring it in the refresh till migration is
done
2013-01-03 14:03:12,836 INFO
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(QuartzScheduler_Worker-36) VM test002.domain-name
9dc63ce4-0f76-4963-adfe-6f8eb1a44806 moved from MigratingFrom --> Up
2013-01-03 14:03:12,837 INFO
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(QuartzScheduler_Worker-36) adding VM
9dc63ce4-0f76-4963-adfe-6f8eb1a44806 to re-run list
2013-01-03 14:03:12,852 ERROR
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(QuartzScheduler_Worker-36) Rerun vm
9dc63ce4-0f76-4963-adfe-6f8eb1a44806. Called from vds
ovirt-node002.domain-name
2013-01-03 14:03:12,855 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
(pool-3-thread-48) START, MigrateStatusVDSCommand(vdsId =
1a52b722-43a1-11e2-af96-3cd92b4c8e89,
vmId=9dc63ce4-0f76-4963-adfe-6f8eb1a44806), log id: 4721a1f3
2013-01-03 14:03:12,864 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
(pool-3-thread-48) Failed in MigrateStatusVDS method
2013-01-03 14:03:12,865 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
(pool-3-thread-48) Error code migrateErr and error message
VDSGenericException: VDSErrorException: Failed to MigrateStatusVDS,
error = Fatal error during migration
2013-01-03 14:03:12,865 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
(pool-3-thread-48) Command
org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand
return value
Class Name:
org.ovirt.engine.core.vdsbroker.vdsbroker.StatusOnlyReturnForXmlRpc
mStatus   Class Name:
org.ovirt.engine.core.vdsbroker.vdsbroker.StatusForXmlRpc
mCode 12
mMessage  Fatal error during migration


2013-01-03 14:03:12,866 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
(pool-3-thread-48) Vds: ovirt-node002.itvonline.ads
2013-01-03 14:03:12,867 ERROR
[org.ovirt.engine.core.vdsbroker.VDSCommandBase] (pool-3-thread-48)
Command MigrateStatusVDS execution failed. Exception:
VDSErrorException: VDSGenericException: VDSErrorException: Failed to
MigrateStatusVDS, error = Fatal error during migration
2013-01-03 1

Re: [Users] oVirt 3.1 - VM Migration Issue

2013-01-03 Thread Tom Brown


> Interesting - please search for the migrationCreate command on the destination
> host and search for ERROR afterwards. What do you see?
> 
> - Original Message -
>> From: "Tom Brown" 
>> To: users@ovirt.org
>> Sent: Thursday, January 3, 2013 4:12:05 PM
>> Subject: [Users] oVirt 3.1 - VM Migration Issue
>> 
>> 
>> Hi
>> 
>> I seem to have an issue with a single VM and migration; other VMs
>> can migrate OK. When migrating from the GUI it appears to just hang,
>> but in the engine.log I see the following
>> 
>> 2013-01-03 14:03:10,359 INFO  [org.ovirt.engine.core.bll.VdsSelector]
>> (ajp--0.0.0.0-8009-59) Checking for a specific VDS only -
>> id:a2d84a1e-3e18-11e2-8851-3cd92b4c8e89,
>> name:ovirt-node.domain-name, host_name(ip):10.192.42.165
>> 2013-01-03 14:03:10,411 INFO
>> [org.ovirt.engine.core.bll.MigrateVmToServerCommand]
>> (pool-3-thread-48) [4d32917d] Running command:
>> MigrateVmToServerCommand internal: false. Entities affected :  ID:
>> 9dc63ce4-0f76-4963-adfe-6f8eb1a44806 Type: VM
>> 2013-01-03 14:03:10,413 INFO  [org.ovirt.engine.core.bll.VdsSelector]
>> (pool-3-thread-48) [4d32917d] Checking for a specific VDS only -
>> id:a2d84a1e-3e18-11e2-8851-3cd92b4c8e89,
>> name:ovirt-node.domain-name, host_name(ip):10.192.42.165
>> 2013-01-03 14:03:11,028 INFO
>> [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand]
>> (pool-3-thread-48) [4d32917d] START, MigrateVDSCommand(vdsId =
>> 1a52b722-43a1-11e2-af96-3cd92b4c8e89,
>> vmId=9dc63ce4-0f76-4963-adfe-6f8eb1a44806, srcHost=10.192.42.196,
>> dstVdsId=a2d84a1e-3e18-11e2-8851-3cd92b4c8e89,
>> dstHost=10.192.42.165:54321, migrationMethod=ONLINE), log id:
>> 5011789b
>> 2013-01-03 14:03:11,030 INFO
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
>> (pool-3-thread-48) [4d32917d] VdsBroker::migrate::Entered
>> (vm_guid=9dc63ce4-0f76-4963-adfe-6f8eb1a44806,
>> srcHost=10.192.42.196, dstHost=10.192.42.165:54321,  method=online
>> 2013-01-03 14:03:11,031 INFO
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
>> (pool-3-thread-48) [4d32917d] START, MigrateBrokerVDSCommand(vdsId =
>> 1a52b722-43a1-11e2-af96-3cd92b4c8e89,
>> vmId=9dc63ce4-0f76-4963-adfe-6f8eb1a44806, srcHost=10.192.42.196,
>> dstVdsId=a2d84a1e-3e18-11e2-8851-3cd92b4c8e89,
>> dstHost=10.192.42.165:54321, migrationMethod=ONLINE), log id:
>> 7cd53864
>> 2013-01-03 14:03:11,041 INFO
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
>> (pool-3-thread-48) [4d32917d] FINISH, MigrateBrokerVDSCommand, log
>> id: 7cd53864
>> 2013-01-03 14:03:11,086 INFO
>> [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand]
>> (pool-3-thread-48) [4d32917d] FINISH, MigrateVDSCommand, return:
>> MigratingFrom, log id: 5011789b
>> 2013-01-03 14:03:11,606 INFO
>> [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
>> (QuartzScheduler_Worker-29) vds::refreshVmList vm id
>> 9dc63ce4-0f76-4963-adfe-6f8eb1a44806 is migrating to vds
>> ovirt-node.domain-name ignoring it in the refresh till migration is
>> done
>> 2013-01-03 14:03:12,836 INFO
>> [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
>> (QuartzScheduler_Worker-36) VM test002.domain-name
>> 9dc63ce4-0f76-4963-adfe-6f8eb1a44806 moved from MigratingFrom --> Up
>> 2013-01-03 14:03:12,837 INFO
>> [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
>> (QuartzScheduler_Worker-36) adding VM
>> 9dc63ce4-0f76-4963-adfe-6f8eb1a44806 to re-run list
>> 2013-01-03 14:03:12,852 ERROR
>> [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
>> (QuartzScheduler_Worker-36) Rerun vm
>> 9dc63ce4-0f76-4963-adfe-6f8eb1a44806. Called from vds
>> ovirt-node002.domain-name
>> 2013-01-03 14:03:12,855 INFO
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
>> (pool-3-thread-48) START, MigrateStatusVDSCommand(vdsId =
>> 1a52b722-43a1-11e2-af96-3cd92b4c8e89,
>> vmId=9dc63ce4-0f76-4963-adfe-6f8eb1a44806), log id: 4721a1f3
>> 2013-01-03 14:03:12,864 ERROR
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
>> (pool-3-thread-48) Failed in MigrateStatusVDS method
>> 2013-01-03 14:03:12,865 ERROR
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
>> (pool-3-thread-48) Error code migrateErr and error message
>> VDSGenericException: VDSErrorException: Failed to MigrateStatusVDS,
>> error = Fatal error during migration
>> 2013-01-03 14:03:12,865 INFO
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
>> (pool-3-thread-48) Command
>> org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand
>> return value
>> Class Name:
>> org.ovirt.engine.core.vdsbroker.vdsbroker.StatusOnlyReturnForXmlRpc
>> mStatus   Class Name:
>> org.ovirt.engine.core.vdsbroker.vdsbroker.StatusForXmlRpc
>> mCode 12
>> mMessage  Fatal error during migration
>> 
>> 
>> 2013-01-03 14:03:12,866 INFO
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
>> (pool-3-thread-48) Vds: ovirt-node002.itvonline.ads
>> 2013-01-03 14:0

Re: [Users] oVirt 3.1 - VM Migration Issue

2013-01-03 Thread Haim Ateya
Interesting - please search for the migrationCreate command on the destination
host and search for ERROR afterwards. What do you see?
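
(A quick way to do that scan - a throwaway sketch, assuming the default
vdsm log path on the destination:)

    # Print every ERROR line that appears after the first migrationCreate
    # entry in vdsm.log on the destination host.
    found_create = False
    with open("/var/log/vdsm/vdsm.log") as log:
        for line in log:
            if "migrationCreate" in line:
                found_create = True
            if found_create and "ERROR" in line:
                print(line.rstrip())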

- Original Message -
> From: "Tom Brown" 
> To: users@ovirt.org
> Sent: Thursday, January 3, 2013 4:12:05 PM
> Subject: [Users] oVirt 3.1 - VM Migration Issue
> 
> 
> Hi
> 
> I seem to have an issue with a single VM and migration; other VMs
> can migrate OK. When migrating from the GUI it appears to just hang,
> but in the engine.log I see the following
> 
> 2013-01-03 14:03:10,359 INFO  [org.ovirt.engine.core.bll.VdsSelector]
> (ajp--0.0.0.0-8009-59) Checking for a specific VDS only -
> id:a2d84a1e-3e18-11e2-8851-3cd92b4c8e89,
> name:ovirt-node.domain-name, host_name(ip):10.192.42.165
> 2013-01-03 14:03:10,411 INFO
>  [org.ovirt.engine.core.bll.MigrateVmToServerCommand]
> (pool-3-thread-48) [4d32917d] Running command:
> MigrateVmToServerCommand internal: false. Entities affected :  ID:
> 9dc63ce4-0f76-4963-adfe-6f8eb1a44806 Type: VM
> 2013-01-03 14:03:10,413 INFO  [org.ovirt.engine.core.bll.VdsSelector]
> (pool-3-thread-48) [4d32917d] Checking for a specific VDS only -
> id:a2d84a1e-3e18-11e2-8851-3cd92b4c8e89,
> name:ovirt-node.domain-name, host_name(ip):10.192.42.165
> 2013-01-03 14:03:11,028 INFO
>  [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand]
> (pool-3-thread-48) [4d32917d] START, MigrateVDSCommand(vdsId =
> 1a52b722-43a1-11e2-af96-3cd92b4c8e89,
> vmId=9dc63ce4-0f76-4963-adfe-6f8eb1a44806, srcHost=10.192.42.196,
> dstVdsId=a2d84a1e-3e18-11e2-8851-3cd92b4c8e89,
> dstHost=10.192.42.165:54321, migrationMethod=ONLINE), log id:
> 5011789b
> 2013-01-03 14:03:11,030 INFO
>  [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
> (pool-3-thread-48) [4d32917d] VdsBroker::migrate::Entered
> (vm_guid=9dc63ce4-0f76-4963-adfe-6f8eb1a44806,
> srcHost=10.192.42.196, dstHost=10.192.42.165:54321,  method=online
> 2013-01-03 14:03:11,031 INFO
>  [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
> (pool-3-thread-48) [4d32917d] START, MigrateBrokerVDSCommand(vdsId =
> 1a52b722-43a1-11e2-af96-3cd92b4c8e89,
> vmId=9dc63ce4-0f76-4963-adfe-6f8eb1a44806, srcHost=10.192.42.196,
> dstVdsId=a2d84a1e-3e18-11e2-8851-3cd92b4c8e89,
> dstHost=10.192.42.165:54321, migrationMethod=ONLINE), log id:
> 7cd53864
> 2013-01-03 14:03:11,041 INFO
>  [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
> (pool-3-thread-48) [4d32917d] FINISH, MigrateBrokerVDSCommand, log
> id: 7cd53864
> 2013-01-03 14:03:11,086 INFO
>  [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand]
> (pool-3-thread-48) [4d32917d] FINISH, MigrateVDSCommand, return:
> MigratingFrom, log id: 5011789b
> 2013-01-03 14:03:11,606 INFO
>  [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> (QuartzScheduler_Worker-29) vds::refreshVmList vm id
> 9dc63ce4-0f76-4963-adfe-6f8eb1a44806 is migrating to vds
> ovirt-node.domain-name ignoring it in the refresh till migration is
> done
> 2013-01-03 14:03:12,836 INFO
>  [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> (QuartzScheduler_Worker-36) VM test002.domain-name
> 9dc63ce4-0f76-4963-adfe-6f8eb1a44806 moved from MigratingFrom --> Up
> 2013-01-03 14:03:12,837 INFO
>  [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> (QuartzScheduler_Worker-36) adding VM
> 9dc63ce4-0f76-4963-adfe-6f8eb1a44806 to re-run list
> 2013-01-03 14:03:12,852 ERROR
> [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> (QuartzScheduler_Worker-36) Rerun vm
> 9dc63ce4-0f76-4963-adfe-6f8eb1a44806. Called from vds
> ovirt-node002.domain-name
> 2013-01-03 14:03:12,855 INFO
>  [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
> (pool-3-thread-48) START, MigrateStatusVDSCommand(vdsId =
> 1a52b722-43a1-11e2-af96-3cd92b4c8e89,
> vmId=9dc63ce4-0f76-4963-adfe-6f8eb1a44806), log id: 4721a1f3
> 2013-01-03 14:03:12,864 ERROR
> [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
> (pool-3-thread-48) Failed in MigrateStatusVDS method
> 2013-01-03 14:03:12,865 ERROR
> [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
> (pool-3-thread-48) Error code migrateErr and error message
> VDSGenericException: VDSErrorException: Failed to MigrateStatusVDS,
> error = Fatal error during migration
> 2013-01-03 14:03:12,865 INFO
>  [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
> (pool-3-thread-48) Command
> org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand
> return value
>  Class Name:
>  org.ovirt.engine.core.vdsbroker.vdsbroker.StatusOnlyReturnForXmlRpc
> mStatus   Class Name:
> org.ovirt.engine.core.vdsbroker.vdsbroker.StatusForXmlRpc
> mCode 12
> mMessage  Fatal error during migration
> 
> 
> 2013-01-03 14:03:12,866 INFO
>  [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
> (pool-3-thread-48) Vds: ovirt-node002.itvonline.ads
> 2013-01-03 14:03:12,867 ERROR
> [org.ovirt.engine.core.vdsbroker.VDSCommandBase] (pool-3-thread-48)
> Command Migr