On Thu, Jun 18, 2015 at 02:25:08PM +0200, Jiri Denemark wrote:
> On Wed, Jun 17, 2015 at 17:31:03 +0200, Kashyap Chamarthy wrote:
> > On Tue, Jun 16, 2015 at 01:42:02AM +0300, Pavel Boldin wrote:
> ...
> > libvirtd debug log[1] from source (destination log is empty):
> > 
> > [. . .]
> > 2015-06-17 15:13:53.317+0000: 781: debug : 
> > virDomainMigratePerform3Params:5202 : dom=0x7f2118f13c40, (VM: name=cvm1, 
> > uuid=ab4c412b-6fdc-4fc4-b78c-f1d49db10d4e), 
> > dconnuri=qemu+tcp://root@devstack3/system, params=0x7f2118f12a90, 
> > nparams=1, cookiein=(nil), cookieinlen=0, cookieout=0x7f2106f38ba8, 
> > cookieoutlen=0x7f2106f38ba4, flags=3
> > 2015-06-17 15:13:53.317+0000: 781: debug : 
> > virDomainMigratePerform3Params:5203 : params["migrate_disks"]=(string)vdb
> > 2015-06-17 15:13:53.317+0000: 781: debug : qemuMigrationPerform:5238 : 
> > driver=0x7f20f416b840, conn=0x7f20dc005c30, vm=0x7f20f41e9640, 
> > xmlin=<null>, dconnuri=qemu+tcp://root@devstack3/system, uri=<null>, 
> > graphicsuri=<null>, listenAddress=<null>, nmigrate_disks=1, 
> > migrate_disks=0x7f2118f13930, cookiein=<null>, cookieinlen=0, 
> > cookieout=0x7f2106f38ba8, cookieoutlen=0x7f2106f38ba4, flags=3, 
> > dname=<null>, resource=0, v3proto=1
> > 2015-06-17 15:13:53.317+0000: 781: debug : 
> > qemuDomainObjBeginJobInternal:1397 : Starting async job: none 
> > (async=migration out vm=0x7f20f41e9640 name=cvm1)
> > 2015-06-17 15:13:53.317+0000: 781: debug : 
> > qemuDomainObjBeginJobInternal:1414 : Waiting for async job 
> > (vm=0x7f20f41e9640 name=cvm1)
> > 2015-06-17 15:13:53.821+0000: 782: debug : virThreadJobSet:96 : Thread 782 
> > (virNetServerHandleJob) is now running job remoteDispatchDomainGetJobInfo
> > 2015-06-17 15:13:53.821+0000: 782: debug : virDomainGetJobInfo:8808 : 
> > dom=0x7f20dc008c30, (VM: name=cvm1, 
> > uuid=ab4c412b-6fdc-4fc4-b78c-f1d49db10d4e), info=0x7f2106737b50
> > 2015-06-17 15:13:53.821+0000: 782: debug : virThreadJobClear:121 : Thread 
> > 782 (virNetServerHandleJob) finished job remoteDispatchDomainGetJobInfo 
> > with ret=0
> > 2015-06-17 15:13:54.325+0000: 780: debug : virThreadJobSet:96 : Thread 780 
> > (virNetServerHandleJob) is now running job remoteDispatchDomainGetJobInfo
> > 2015-06-17 15:13:54.325+0000: 780: debug : virDomainGetJobInfo:8808 : 
> > dom=0x7f20dc008c30, (VM: name=cvm1, 
> > uuid=ab4c412b-6fdc-4fc4-b78c-f1d49db10d4e), info=0x7f2107739b50
> > 2015-06-17 15:13:54.325+0000: 780: debug : virThreadJobClear:121 : Thread 
> > 780 (virNetServerHandleJob) finished job remoteDispatchDomainGetJobInfo 
> > with ret=0
> > [. . .]
> > remoteDispatchDomainMigratePerform3Params, 784 
> > remoteDispatchDomainMigratePerform3Params) for (520s, 520s)
> > 2015-06-17 15:14:23.320+0000: 781: error : 
> > qemuDomainObjBeginJobInternal:1492 : Timed out during operation: cannot 
> > acquire state change lock (held by 
> > remoteDispatchDomainMigratePerform3Params)
> > 2015-06-17 15:14:23.320+0000: 781: debug : virThreadJobClear:121 : Thread 
> > 781 (virNetServerHandleJob) finished job 
> > remoteDispatchDomainMigratePerform3Params with ret=-1
> > 2015-06-17 15:14:23.320+0000: 783: debug : virThreadJobSet:96 : Thread 783 
> > (virNetServerHandleJob) is now running job remoteDispatchConnectClose
> > 2015-06-17 15:14:23.320+0000: 783: debug : virThreadJobClear:121 : Thread 
> > 783 (virNetServerHandleJob) finished job remoteDispatchConnectClose with 
> > ret=0
> > 
> > 
> > How can I mitigate this? (I realize this is not due to these patches;
> > probably something in my test environment.)
> > 
> > Since this is non-shared storage migration, I tried to supply
> > '--copy-storage-inc' to no avail (same error as above).
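
[Interjecting on my own question, for the archives: if the stale lock
shows up again, I'd first check whether a previous migration job is
still hanging around -- an untested guess, not something I verified here:

    $ virsh domjobinfo cvm1                # is an old job still active?
    $ virsh domjobabort cvm1               # if so, try cancelling it
    $ sudo systemctl restart libvirtd      # last resort; drops stale job state
]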
> > 
> > Probably I should test by building local RPMs.
> > 
> > [1] 
> > https://kashyapc.fedorapeople.org/virt/temp/libvirtd-log-selective-blockdev-failed.log
> 
> Could you upload a complete log somewhere? It seems a previously started
> migration is waiting for a response from QEMU. Or alternatively, it
> failed to release the jobs. I'd like to see the logs from the previous
> migration attempt.

I'm afraid it's too late -- I blew that environment away and rebuilt the
libvirt RPMs, this time from Michal's branch here, which also includes
the additional diff he posted in his review:

    https://github.com/zippy2/libvirt/tree/storage_migration2
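
For anyone wanting to reproduce, building the RPMs from that branch
should go roughly along these lines (a sketch from memory; untested as
written, so paths and options may differ on your setup):

    $ git clone -b storage_migration2 https://github.com/zippy2/libvirt.git
    $ cd libvirt
    $ ./autogen.sh --system     # configure with distro-style paths
    $ make rpm                  # 'make dist' + rpmbuild -ta on the tarball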

I did a preliminary test and it seems to have worked:

On source:

    $ virsh domblklist cvm1
    Target     Source
    ------------------------------------------------
    vda        /var/lib/libvirt/images/cirros-0.3.3-x86_64-disk.img
    vdb        /export/disk2.img
    
    $ virsh migrate --verbose --p2p --copy-storage-inc \
                --migratedisks vda --live cvm1 qemu+tcp://root@devstack3/system
    Migration: [100 %]
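
As an aside: while the copy is in flight, watching the job stats on the
source should show data transferred only for vda's delta -- a rough way
to confirm vdb is being skipped:

    $ watch -n1 virsh domjobinfo cvm1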


On destination (where vdb was already present):

    $ virsh list
     Id    Name                           State
    ----------------------------------------------------
     2     cvm1                           running


    $ virsh domblklist cvm1
    Target     Source
    ------------------------------------------------
    vda        /var/lib/libvirt/images/cirros-0.3.3-x86_64-disk.img
    vdb        /export/disk2.img
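
And a quick (admittedly crude) sanity check of the images on the
destination -- vda freshly copied, vdb reused in place:

    $ qemu-img info /var/lib/libvirt/images/cirros-0.3.3-x86_64-disk.img
    $ qemu-img info /export/disk2.img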


-- 
/kashyap
