[libvirt] [PATCH] docs: fix doubled article

2013-11-25 Thread Wangyufei (A)
Delete the extra article 'the'.

Signed-off-by: Wang Yufei 
---
 src/libvirt.c |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/src/libvirt.c b/src/libvirt.c
index eff44eb..0eb4c64 100644
--- a/src/libvirt.c
+++ b/src/libvirt.c
@@ -16525,7 +16525,7 @@ error:
  * Otherwise, creates a new secret with an automatically chosen UUID, and
  * initializes its attributes from xml.
  *
- * Returns a the secret on success, NULL on failure.
+ * Returns a secret on success, NULL on failure.
  */
 virSecretPtr
 virSecretDefineXML(virConnectPtr conn, const char *xml, unsigned int flags)
-- 
1.7.3.1.msysgit.0
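For context, the corrected sentence documents the return contract: a non-NULL
virSecretPtr on success, NULL on failure. A minimal caller sketch (assuming an
open virConnectPtr conn and a <secret> XML string xml; not part of the patch):

    virSecretPtr secret = virSecretDefineXML(conn, xml, 0);
    if (!secret) {
        /* NULL return: query the last error for details */
        virErrorPtr err = virGetLastError();
        fprintf(stderr, "virSecretDefineXML failed: %s\n",
                err && err->message ? err->message : "unknown error");
    } else {
        virSecretFree(secret);   /* release our reference */
    }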

Best Regards,
-WangYufei



--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCH] docs: delete extra character

2013-11-21 Thread Wangyufei (A)
Delete the extra character 'p' from the comment of virInterfaceCreate.

Signed-off-by: Wang Yufei 
---
 src/libvirt.c |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/src/libvirt.c b/src/libvirt.c
index ae05300..9355d33 100644
--- a/src/libvirt.c
+++ b/src/libvirt.c
@@ -12801,7 +12801,7 @@ error:
  * interface was defined (that is, if virInterfaceChangeBegin() had
  * been called), the interface will be brought back down (and then
  * undefined) if virInterfaceChangeRollback() is called.
-p *
+ *
  * Returns 0 in case of success, -1 in case of error
  */
 int
-- 
1.7.3.1.msysgit.0
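The surrounding comment documents the transactional behaviour of interface
changes; a rough caller sketch of that flow (assuming an open virConnectPtr
conn and an interface XML string xml; illustrative only):

    virInterfacePtr iface = NULL;

    if (virInterfaceChangeBegin(conn, 0) < 0)   /* open a change transaction */
        return -1;
    if (!(iface = virInterfaceDefineXML(conn, xml, 0)) ||
        virInterfaceCreate(iface, 0) < 0) {
        /* per the comment above: brings the interface back down
         * (and then undefines it) */
        virInterfaceChangeRollback(conn, 0);
        if (iface)
            virInterfaceFree(iface);
        return -1;
    }
    virInterfaceChangeCommit(conn, 0);          /* keep the changes */
    virInterfaceFree(iface);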

Best Regards,
-WangYufei


--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCH] docs: fix virDomainRestoreFlags description bug

2013-11-07 Thread Wangyufei (A)
From 605afa95417a4ad086570f0fb6df4a5fe68ffc7f Mon Sep 17 00:00:00 2001
From: Wang Yufei 
Date: Thu, 7 Nov 2013 16:44:04 +0800
Subject: [PATCH] docs: fix virDomainRestoreFlags description bug

In the description of virDomainRestoreFlags, VIR_DOMAIN_SAVE_BYPASS_CACHE
risks slowing restores from NFS, not saves to NFS.

Signed-off-by: Wang Yufei 
---
 src/libvirt.c |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/src/libvirt.c b/src/libvirt.c
index 7d94cd7..90608ab 100644
--- a/src/libvirt.c
+++ b/src/libvirt.c
@@ -2925,7 +2925,7 @@ error:
  * If @flags includes VIR_DOMAIN_SAVE_BYPASS_CACHE, then libvirt will
  * attempt to bypass the file system cache while restoring the file, or
  * fail if it cannot do so for the given system; this can allow less
- * pressure on file system cache, but also risks slowing saves to NFS.
+ * pressure on file system cache, but also risks slowing restores from NFS.
  *
  * Normally, the saved state file will remember whether the domain was
  * running or paused, and restore defaults to the same state.
-- 
1.7.3.1.msysgit.0
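For illustration, the flag in question on the restore path (a sketch assuming
an open virConnectPtr conn; the image path is made up):

    /* Restore while bypassing the file system cache; as the corrected
     * comment says, this risks slowing restores from NFS. */
    if (virDomainRestoreFlags(conn, "/var/lib/libvirt/save/dom.save", NULL,
                              VIR_DOMAIN_SAVE_BYPASS_CACHE) < 0) {
        virErrorPtr err = virGetLastError();
        fprintf(stderr, "restore failed: %s\n",
                err && err->message ? err->message : "unknown error");
    }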

Best Regards,
-WangYufei



--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] When a vm's status file is left over, some persistent but inactive vms will be lost by libvirtd after libvirtd restarts.

2013-11-06 Thread Wangyufei (A)
The patch works well. Thanks for your reply.

Sorry for the late reply; I've been kind of busy with other work recently.

> -Original Message-
> From: Daniel P. Berrange [mailto:berra...@redhat.com]
> Sent: Monday, October 28, 2013 7:55 PM
> To: Wangyufei (A)
> Cc: libvir-list@redhat.com; Xuchao (H); Wangrui (K)
> Subject: Re: [libvirt] When a vm's status file is left over, some persistent
> but inactive vms will be lost by libvirtd after libvirtd restarts.
> 
> On Fri, Oct 18, 2013 at 03:00:22AM +, Wangyufei (A) wrote:
> > Hello,
> >   I found a problem:
> >   A vm's status file may be left over in the path /var/run/libvirt/qemu
> > in some situations, such as a host reboot. When a vm's status file is
> > left over, some persistent but inactive vms will be lost by libvirtd
> > after it is restarted. You can reproduce the problem as follows:
> >   1. Create a vm and start it with the commands: virsh define vm-xml and
> > virsh start vm-name.
> >   2. Stop libvirtd with the command: service libvirtd stop.
> >   3. Kill the qemu process related to the vm, so that the vm's status
> > file is left over.
> >   4. Start libvirtd.
> >   After starting the libvirtd service, we find that the vm is missing
> > from the output of "virsh list --all".
> > What we expect is that the vm is shown in shutoff state, shouldn't it be?
> >
> > The reason for the problem is:
> >   During libvirtd startup, it first loads the status files of vms under
> > the path /var/run/libvirt/qemu, creates a virDomainObj for each vm and
> > adds it to the driver->domains list.
> >   Then it creates a thread to connect to the related qemu process for
> > each virDomainObj in the domains list. Because the qemu process has been
> > killed, connecting to qemu fails. When connecting to qemu fails, the
> > connection-thread does the following:
> >   1. Check whether vm->persistent is 1.
> >   2. If vm->persistent is not 1, call qemuDomainRemoveInactive() to
> > remove the virDomainObj.
> >   3. The following calling sequence then occurs:
> > qemuDomainRemoveInactive() --> virDomainObjListRemove() -->
> > virHashRemoveEntry(). Around virHashRemoveEntry(), domlist and dom are
> > locked and unlocked in sequence.
> >   The problem with the above steps is that vm->persistent may already
> > have been set to 1 by the libvirtd main-thread by the time the
> > connection-thread calls virHashRemoveEntry() to remove the dom. That is,
> > a persistent virDomainObj is removed during libvirtd startup.
> >
> > Two ways can resolve the above problem:
> >   1. Extend the range of locking of virDomainObj and virDomainObjList:
> > lock the virDomainObj and virDomainObjList objects in the
> > connection-thread before checking vm->persistent.
> >   2. Check vm->persistent again before calling virHashRemoveEntry().
> >
> >   Do you agree that what is described above is a problem, and which of
> > the ways listed above is more suitable to resolve it, or is there a
> > better idea? Any suggestions?
> 
> The problem here is really that we should have loaded the persistent
> configs before we started the thread to reconnect. That ensures that
> the VM is marked persistent before the thread runs.
> 
> Can you test the patch I've just sent for this.
> 
> BTW, also please configure your email client to add line breaks at 80
> characters or less.
> 
> 
> Daniel
> --
> |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
> |: http://libvirt.org -o- http://virt-manager.org :|
> |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
> |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list

Re: [libvirt] [PATCH] qemu: clean up migration ports when migration cancelled

2013-11-06 Thread Wangyufei (A)
It's OK. I applied the original patch and didn't notice it either. Nobody is
perfect. Fortunately, we found and fixed it immediately. I'll be more careful
next time.

Thanks a lot.

> -Original Message-
> From: Jiri Denemark [mailto:jdene...@redhat.com]
> Sent: Wednesday, November 06, 2013 10:09 PM
> To: Wangyufei (A)
> Cc: libvir-list@redhat.com; Zengjunliang; Wangrui (K)
> Subject: Re: [libvirt] [PATCH] qemu: clean up migration ports when migration
> cancelled
> 
> On Wed, Nov 06, 2013 at 04:17:05 +, Wangyufei (A) wrote:
> > From bfb2dafa8ea183781446c7cbe0376e1a2d41a5ae Mon Sep 17 00:00:00 2001
> > From: Zeng Junliang 
> > Date: Wed, 6 Nov 2013 11:36:57 +0800
> > Subject: [PATCH] qemu: clean up migration ports when migration cancelled
> >
> > If a migration is cancelled, the port reserved in the migration port
> > bitmap should be released too.
> >
> > Signed-off-by: Zeng Junliang 
> > ---
> >  src/qemu/qemu_migration.c |7 +--
> >  1 files changed, 5 insertions(+), 2 deletions(-)
> >
> > diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
> > index 4f35a7a..594d1cd 100644
> > --- a/src/qemu/qemu_migration.c
> > +++ b/src/qemu/qemu_migration.c
> > @@ -4387,12 +4387,16 @@ qemuMigrationFinish(virQEMUDriverPtr
> driver,
> >  qemuDomainObjPrivatePtr priv = vm->privateData;
> >  virQEMUDriverConfigPtr cfg = virQEMUDriverGetConfig(driver);
> >  virCapsPtr caps = NULL;
> > +unsigned short port = 0;
> 
> No need to initialize port here.
> 
> >
> >  VIR_DEBUG("driver=%p, dconn=%p, vm=%p, cookiein=%s,
> cookieinlen=%d, "
> >"cookieout=%p, cookieoutlen=%p, flags=%lx,
> retcode=%d",
> >driver, dconn, vm, NULLSTR(cookiein), cookieinlen,
> >cookieout, cookieoutlen, flags, retcode);
> >
> > +port = priv->migrationPort;
> > +priv->migrationPort = 0;
> > +
> >  if (!(caps = virQEMUDriverGetCapabilities(driver, false)))
> >  goto cleanup;
> >
> > @@ -4439,8 +4443,6 @@ qemuMigrationFinish(virQEMUDriverPtr driver,
> >  }
> >
> >  qemuMigrationStopNBDServer(driver, vm, mig);
> > -virPortAllocatorRelease(driver->migrationPorts,
> priv->migrationPort);
> > -priv->migrationPort = 0;
> >
> >  if (flags & VIR_MIGRATE_PERSIST_DEST) {
> >  virDomainDefPtr vmdef;
> > @@ -4573,6 +4575,7 @@ endjob:
> >  }
> >
> >  cleanup:
> > +virPortAllocatorRelease(driver->migrationPorts, port);
> >  if (vm) {
> >  VIR_FREE(priv->origname);
> >  virObjectUnlock(vm);
> 
> ACK and pushed. My bad for not spotting this while reviewing the
> original patch.
> 
> Jirka

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCH] qemu: clean up migration ports when migration cancelled

2013-11-05 Thread Wangyufei (A)
From bfb2dafa8ea183781446c7cbe0376e1a2d41a5ae Mon Sep 17 00:00:00 2001
From: Zeng Junliang 
Date: Wed, 6 Nov 2013 11:36:57 +0800
Subject: [PATCH] qemu: clean up migration ports when migration cancelled

If a migration is cancelled, the port reserved in the migration port bitmap
should be released too.

Signed-off-by: Zeng Junliang 
---
 src/qemu/qemu_migration.c |7 +--
 1 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index 4f35a7a..594d1cd 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -4387,12 +4387,16 @@ qemuMigrationFinish(virQEMUDriverPtr driver,
 qemuDomainObjPrivatePtr priv = vm->privateData;
 virQEMUDriverConfigPtr cfg = virQEMUDriverGetConfig(driver);
 virCapsPtr caps = NULL;
+unsigned short port = 0;
 
 VIR_DEBUG("driver=%p, dconn=%p, vm=%p, cookiein=%s, cookieinlen=%d, "
   "cookieout=%p, cookieoutlen=%p, flags=%lx, retcode=%d",
   driver, dconn, vm, NULLSTR(cookiein), cookieinlen,
   cookieout, cookieoutlen, flags, retcode);
 
+port = priv->migrationPort;
+priv->migrationPort = 0;
+
 if (!(caps = virQEMUDriverGetCapabilities(driver, false)))
 goto cleanup;
 
@@ -4439,8 +4443,6 @@ qemuMigrationFinish(virQEMUDriverPtr driver,
 }
 
 qemuMigrationStopNBDServer(driver, vm, mig);
-virPortAllocatorRelease(driver->migrationPorts, priv->migrationPort);
-priv->migrationPort = 0;
 
 if (flags & VIR_MIGRATE_PERSIST_DEST) {
 virDomainDefPtr vmdef;
@@ -4573,6 +4575,7 @@ endjob:
 }
 
 cleanup:
+virPortAllocatorRelease(driver->migrationPorts, port);
 if (vm) {
 VIR_FREE(priv->origname);
 virObjectUnlock(vm);
-- 
1.7.3.1.msysgit.0

Best Regards,
-WangYufei



--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [PATCH v2] qemu_migration: Avoid crashing if domain dies too quickly

2013-10-18 Thread Wangyufei (A)


> -Original Message-
> From: Michal Privoznik [mailto:mpriv...@redhat.com]
> Sent: Friday, October 18, 2013 2:37 PM
> To: Wangyufei (A)
> Cc: libvir-list@redhat.com; Wangrui (K)
> Subject: Re: [libvirt] [PATCH v2] qemu_migration: Avoid crashing if domain
> dies too quickly
> 
> On 18.10.2013 08:22, Wangyufei (A) wrote:
> > I'm sorry. I didn't get what you mean.
> >
> > In virQEMUCapsInitQMP
> >
> > if (!(xmlopt = virDomainXMLOptionNew(NULL, NULL, NULL)) ||
> > !(vm = virDomainObjNew(xmlopt)))
> > goto cleanup;
> >
> > vm->pid = pid;   //Apparently vm is not NULL here.
> >
> > if (!(mon = qemuMonitorOpen(vm, &config, true, &callbacks, NULL)))
> > {  // If qemuMonitorOpen returns NULL here, it has not done mon->vm =
> > virObjectRef(vm); in qemuMonitorOpenInternal.
> > ret = 0;
> > goto cleanup;  // We go to cleanup here.
> > }
> >
> > virObjectLock(mon);
> >
> > if (virQEMUCapsInitQMPMonitor(qemuCaps, mon) < 0)
> > goto cleanup;
> >
> > ret = 0;
> >
> > cleanup:
> > if (mon)
> > virObjectUnlock(mon);
> > qemuMonitorClose(mon);
> > virCommandAbort(cmd);
> > virCommandFree(cmd);
> > VIR_FREE(monarg);
> > VIR_FREE(monpath);
> > virObjectUnref(vm);//vm is not NULL here, and we'll do
> something about vm->refs, right?
> 
> Yes. In fact we dispose @vm as we are the only one holding reference to
> it and we no longer need it. If qemuMonitorOpenInternal did
> virObjectRef(vm), then vm->refs = 2 before executing this line. After

If qemuMonitorOpenInternal did not do virObjectRef(vm) and returned NULL
before reaching it, then vm->refs = 1 before executing this line, right?
Now when we do virObjectUnref(vm), vm is disposed here, and that's what we
expected. Fine, I've got it. Thanks a lot.

> the execution, the ref count is decremented to 1 as @mon is the only one
> holding reference to @vm.
> 
> > virObjectUnref(xmlopt);


--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [PATCH v2] qemu_migration: Avoid crashing if domain dies too quickly

2013-10-17 Thread Wangyufei (A)
I'm sorry. I didn't get what you mean.

In virQEMUCapsInitQMP

    if (!(xmlopt = virDomainXMLOptionNew(NULL, NULL, NULL)) ||
        !(vm = virDomainObjNew(xmlopt)))
        goto cleanup;

    vm->pid = pid;   // Apparently vm is not NULL here.

    if (!(mon = qemuMonitorOpen(vm, &config, true, &callbacks, NULL))) {
        // If qemuMonitorOpen returns NULL here, it has not done
        // mon->vm = virObjectRef(vm); in qemuMonitorOpenInternal.
        ret = 0;
        goto cleanup;  // We go to cleanup here.
    }

    virObjectLock(mon);

    if (virQEMUCapsInitQMPMonitor(qemuCaps, mon) < 0)
        goto cleanup;

    ret = 0;

cleanup:
    if (mon)
        virObjectUnlock(mon);
    qemuMonitorClose(mon);
    virCommandAbort(cmd);
    virCommandFree(cmd);
    VIR_FREE(monarg);
    VIR_FREE(monpath);
    virObjectUnref(vm);      // vm is not NULL here, and we'll do something
                             // about vm->refs, right?
    virObjectUnref(xmlopt);
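To restate the refcounting rule discussed here as a compact sketch (simplified
from virQEMUCapsInitQMP; the annotations are illustrative, not actual code):

    vm = virDomainObjNew(xmlopt);       /* refs(vm) == 1, we own it */
    mon = qemuMonitorOpen(vm, &config, true, &callbacks, NULL);
        /* success: qemuMonitorOpenInternal did mon->vm = virObjectRef(vm),
         * so refs(vm) == 2; failure before that point: mon == NULL and
         * refs(vm) is still 1 */
    qemuMonitorClose(mon);              /* NULL-safe; on success it drops
                                         * the monitor's reference again */
    virObjectUnref(vm);                 /* drops our own reference; on the
                                         * failure path this disposes vm */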

> -Original Message-
> From: Michal Privoznik [mailto:mpriv...@redhat.com]
> Sent: Friday, October 18, 2013 1:12 PM
> To: Wangyufei (A)
> Cc: libvir-list@redhat.com; Wangrui (K)
> Subject: Re: [libvirt] [PATCH v2] qemu_migration: Avoid crashing if domain
> dies too quickly
> 
> On 18.10.2013 06:06, Wangyufei (A) wrote:
> > Thanks, first of all; this patch seems to solve my problem so far. But I
> > still have a doubt about this patch.
> >
> >> -Original Message-
> >> From: libvir-list-boun...@redhat.com
> >> [mailto:libvir-list-boun...@redhat.com] On Behalf Of Michal Privoznik
> >> Sent: Friday, October 11, 2013 8:15 PM
> >> To: libvir-list@redhat.com
> >> Subject: [libvirt] [PATCH v2] qemu_migration: Avoid crashing if domain
> dies
> >> too quickly
> 
> >> @@ -2673,6 +2677,8 @@ cleanup:
> >>  virCommandFree(cmd);
> >>  VIR_FREE(monarg);
> >>  VIR_FREE(monpath);
> >> +virObjectUnref(vm);
> >
> > Does this virObjectUnref(vm) correspond to the mon->vm = virObjectRef(vm)
> > added in qemuMonitorOpenInternal?
> > If it does, how can we confirm that virObjectRef(vm) has been done in
> > qemuMonitorOpenInternal? If any error happens in qemuMonitorOpenInternal
> > before mon->vm = virObjectRef(vm), then we still go to cleanup and do
> > virObjectUnref(vm), and the refs will be wrong. Am I right?
> >
> 
> Unfortunately, you've cut off the chunk above that allocates @mon.
> Anyway, on initialization, @mon is filled with zeros. So until somebody
> sets mon->vm [1], mon->vm is effectively NULL. And virObjectUnref() acts
> like a NOP on NULL.
> 
> > if (!cb->eofNotify) {
> > virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
> >_("EOF notify callback must be supplied"));
> > return NULL;
> > }
> > if (!cb->errorNotify) {
> > virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
> >_("Error notify callback must be supplied"));
> > return NULL;
> > }
> >
> > if (qemuMonitorInitialize() < 0)
> > return NULL;
> >
> > if (!(mon = virObjectLockableNew(qemuMonitorClass)))
> > return NULL;
> >
> > mon->fd = -1;
> > mon->logfd = -1;
> > if (virCondInit(&mon->notify) < 0) {
> > virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
> >_("cannot initialize monitor condition"));
> > goto cleanup;
> > }
> > mon->fd = fd;
> > mon->hasSendFD = hasSendFD;
> > mon->vm = virObjectRef(vm);
> 
> 1: ^^ until after this line
> >
> >> +virObjectUnref(xmlopt);
> >>
> >>  if (pid != 0) {
> >>  char ebuf[1024];
> 
> I hope it makes things a bit clearer.
> 
> Michal

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [PATCH v2] qemu_migration: Avoid crashing if domain dies too quickly

2013-10-17 Thread Wangyufei (A)
Thanks, first of all; this patch seems to solve my problem so far. But I still
have a doubt about this patch.

> -Original Message-
> From: libvir-list-boun...@redhat.com
> [mailto:libvir-list-boun...@redhat.com] On Behalf Of Michal Privoznik
> Sent: Friday, October 11, 2013 8:15 PM
> To: libvir-list@redhat.com
> Subject: [libvirt] [PATCH v2] qemu_migration: Avoid crashing if domain dies
> too quickly
> 
> I've noticed a SIGSEGV-ing libvirtd on the destination when the qemu
> died too quickly, i.e. in the Prepare phase. What is happening here is:
> 
> 1) [Thread 3493] We are in qemuMigrationPrepareAny() and calling
> qemuProcessStart() which subsequently calls qemuProcessWaitForMonitor()
> and qemuConnectMonitor(). So far so good. The qemuMonitorOpen()
> succeeds, however switching monitor to QMP mode fails as qemu died
> meanwhile. That is qemuMonitorSetCapabilities() returns -1.
> 
> 2013-10-08 15:54:10.629+: 3493: debug :
> qemuMonitorSetCapabilities:1356 : mon=0x14a53da0
> 2013-10-08 15:54:10.630+: 3493: debug :
> qemuMonitorJSONCommandWithFd:262 : Send command
> '{"execute":"qmp_capabilities","id":"libvirt-1"}' for write with FD -1
> 2013-10-08 15:54:10.630+: 3493: debug :
> virEventPollUpdateHandle:147 : EVENT_POLL_UPDATE_HANDLE: watch=17
> events=13
> ...
> 2013-10-08 15:54:10.631+: 3493: debug : qemuMonitorSend:956 :
> QEMU_MONITOR_SEND_MSG: mon=0x14a53da0
> msg={"execute":"qmp_capabilities","id":"libvirt-1"}
>  fd=-1
> 2013-10-08 15:54:10.631+: 3262: debug : virEventPollRunOnce:641 :
> Poll got 1 event(s)
> 
> 2) [Thread 3262] The event loop is trying to do the talking to monitor.
> However, qemu is dead already, remember?
> 
> 2013-10-08 15:54:13.436+: 3262: error : qemuMonitorIORead:551 :
> Unable to read from monitor: Connection reset by peer
> 2013-10-08 15:54:13.516+: 3262: debug : virFileClose:90 : Closed fd 25
> ...
> 2013-10-08 15:54:13.533+: 3493: debug : qemuMonitorSend:968 : Send
> command resulted in error internal error: early end of file from monitor:
> possible problem:
> 
> 3) [Thread 3493] qemuProcessStart() failed. No big deal. Go to the
> 'endjob' label and subsequently to the 'cleanup'. Since the domain is
> not persistent and ret is -1, the qemuDomainRemoveInactive() is called.
> This has an (unpleasant) effect of virObjectUnref()-ing the @vm object.
> Unpleasant because the event loop which is about to trigger EOF callback
> still holds a pointer to the @vm (not the reference). See the valgrind
> output below.
> 
> 4) [Thread 3262] So the event loop starts triggering EOF:
> 
> 2013-10-08 15:54:13.542+: 3262: debug : qemuMonitorIO:729 :
> Triggering EOF callback
> 2013-10-08 15:54:13.543+: 3262: debug :
> qemuProcessHandleMonitorEOF:294 : Received EOF on 0x14549110 'migt10'
> 
> And the monitor is cleaned up. This results in calling
> qemuProcessHandleMonitorEOF with the @vm pointer passed. The pointer
> is
> kept in qemuMonitor struct.
> 
> ==3262== Thread 1:
> ==3262== Invalid read of size 4
> ==3262==at 0x77ECCAA: pthread_mutex_lock (in
> /lib64/libpthread-2.15.so)
> ==3262==by 0x52FAA06: virMutexLock (virthreadpthread.c:85)
> ==3262==by 0x52E3891: virObjectLock (virobject.c:320)
> ==3262==by 0x11626743: qemuProcessHandleMonitorEOF
> (qemu_process.c:296)
> ==3262==by 0x11642593: qemuMonitorIO (qemu_monitor.c:730)
> ==3262==by 0x52BD526: virEventPollDispatchHandles
> (vireventpoll.c:501)
> ==3262==by 0x52BDD49: virEventPollRunOnce (vireventpoll.c:648)
> ==3262==by 0x52BBC68: virEventRunDefaultImpl (virevent.c:274)
> ==3262==by 0x542D3D9: virNetServerRun (virnetserver.c:1112)
> ==3262==by 0x11F368: main (libvirtd.c:1513)
> ==3262==  Address 0x14549128 is 24 bytes inside a block of size 136 free'd
> ==3262==at 0x4C2AF5C: free (in
> /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
> ==3262==by 0x529B1FF: virFree (viralloc.c:580)
> ==3262==by 0x52E3703: virObjectUnref (virobject.c:270)
> ==3262==by 0x531557E: virDomainObjListRemove
> (domain_conf.c:2355)
> ==3262==by 0x1160E899: qemuDomainRemoveInactive
> (qemu_domain.c:2061)
> ==3262==by 0x1163A0C6: qemuMigrationPrepareAny
> (qemu_migration.c:2450)
> ==3262==by 0x1163A923: qemuMigrationPrepareDirect
> (qemu_migration.c:2626)
> ==3262==by 0x11682D71: qemuDomainMigratePrepare3Params
> (qemu_driver.c:10309)
> ==3262==by 0x53B0976: virDomainMigratePrepare3Params
> (libvirt.c:7266)
> ==3262==by 0x1502D3:
> remoteDispatchDomainMigratePrepare3Params (remote.c:4797)
> ==3262==by 0x12DECA:
> remoteDispatchDomainMigratePrepare3ParamsHelper
> (remote_dispatch.h:5741)
> ==3262==by 0x54322EB: virNetServerProgramDispatchCall
> (virnetserverprogram.c:435)
> 
> The mon->vm is set in qemuMonitorOpenInternal() which is the correct
> place to increase @vm ref counter. The correct place to decrease the ref
> counter is then qemuMonitorDispose().
> 
> Signed-off-by: Michal Privoznik 
> ---
> 
> It turned out the hack 

[libvirt] When a vm's status file is left over, some persistent but inactive vms will be lost by libvirtd after libvirtd restarts.

2013-10-17 Thread Wangyufei (A)
Hello,
  I found a problem:
  A vm's status file may be left over in the path /var/run/libvirt/qemu in
some situations, such as a host reboot. When a vm's status file is left over,
some persistent but inactive vms will be lost by libvirtd after it is
restarted. You can reproduce the problem as follows:
  1. Create a vm and start it with the commands: virsh define vm-xml and virsh
start vm-name.
  2. Stop libvirtd with the command: service libvirtd stop.
  3. Kill the qemu process related to the vm, so that the vm's status file is
left over.
  4. Start libvirtd.
  After starting the libvirtd service, we find that the vm is missing from the
output of "virsh list --all".
What we expect is that the vm is shown in shutoff state, shouldn't it be?

The reason for the problem is:
  During libvirtd startup, it first loads the status files of vms under the
path /var/run/libvirt/qemu, creates a virDomainObj for each vm and adds it to
the driver->domains list.
  Then it creates a thread to connect to the related qemu process for each
virDomainObj in the domains list. Because the qemu process has been killed,
connecting to qemu fails. When connecting to qemu fails, the connection-thread
does the following:
  1. Check whether vm->persistent is 1.
  2. If vm->persistent is not 1, call qemuDomainRemoveInactive() to remove the
virDomainObj.
  3. The following calling sequence then occurs: qemuDomainRemoveInactive()
--> virDomainObjListRemove() --> virHashRemoveEntry(). Around
virHashRemoveEntry(), domlist and dom are locked and unlocked in sequence.
  The problem with the above steps is that vm->persistent may already have
been set to 1 by the libvirtd main-thread by the time the connection-thread
calls virHashRemoveEntry() to remove the dom. That is, a persistent
virDomainObj is removed during libvirtd startup.

Two ways can resolve the above problem:
  1. Extend the range of locking of virDomainObj and virDomainObjList: lock
the virDomainObj and virDomainObjList objects in the connection-thread before
checking vm->persistent.
  2. Check vm->persistent again before calling virHashRemoveEntry() (sketched
below).

  Do you agree that what is described above is a problem, and which of the
ways listed above is more suitable to resolve it, or is there a better idea?
Any suggestions?
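A sketch of option 2 (hypothetical, simplified code, not a tested patch): the
re-check would happen with both the list and the domain object locked, so the
main thread's concurrent define can no longer slip in between the first check
and the removal. Here list, vm and uuidstr are placeholder names:

    /* Hypothetical: inside the removal path, re-check persistence while
     * both locks are held. */
    virObjectLock(list);
    virObjectLock(vm);
    if (!vm->persistent)                 /* re-check under the locks */
        virHashRemoveEntry(list->objs, uuidstr);
    virObjectUnlock(vm);
    virObjectUnlock(list);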

Best Regards,
-WangYufei



--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list

Re: [libvirt] [PATCH] remote:Fix the parameter passed to remoteDispatchConnectDomainEventDeregisterAny() should be eventID

2013-10-17 Thread Wangyufei (A)
Thanks for your reply. Rewriting is fine.

> -Original Message-
> From: Eric Blake [mailto:ebl...@redhat.com]
> Sent: Thursday, October 17, 2013 10:28 PM
> To: Wangyufei (A); libvir-list@redhat.com
> Cc: zhouyimin Zhou(Yimin); mpriv...@redhat.com; Wangrui (K);
> jdene...@redhat.com
> Subject: Re: [libvirt] [PATCH] remote:Fix the parameter passed to
> remoteDispatchConnectDomainEventDeregisterAny() should be eventID
> 
> On 10/17/2013 03:37 AM, Wangyufei (A) wrote:
> > From 0832ab83685e20580c8128f5505096e71e747b8a Mon Sep 17 00:00:00 2001
> > From: zhouyimin 
> > Date: Thu, 17 Oct 2013 15:59:21 +0800
> > Subject: [PATCH] remote:Fix the parameter passed to
> > remoteDispatchConnectDomainEventDeregisterAny() should be eventID
> 
> Subject line is too long - 'git shortlog -30' will give you a hint of typical
> subjects, which we try to keep at 60 chars or less.  Better would be:
> 
> remote: fix regression in event deregistration
> 
> >
> > Introduced by 7b87a3
> 
> Ouch - present since the 0.9.9 release.
> 
> > When I quit a process which had only registered
> > VIR_DOMAIN_EVENT_ID_REBOOT, I got an error like:
> > "libvirt: XML-RPC error : internal error: domain event 0 not registered".
> > Then I added the following change, which fixed it.
> >
> > Signed-off-by: zhouyimin 
> 
> Thanks for the patch.  However, we prefer to have the Signed-off-by use a
> legal name, rather than a login alias.  Given your cc: line, is it okay if I
> rewrite this patch to use the following authorship:
> 
> From: Zhou Yimin 
> 
> Or would you prefer yet another legal spelling?  UTF-8 is fine, if you'd like
> to represent your name in native characters; some people even choose to
> represent their name natively followed by a Latin form in ().
> 
> Once we've sorted that out, I can go ahead and push this.
> 
> > ---
> >  src/remote/remote_driver.c |2 +-
> >  1 files changed, 1 insertions(+), 1 deletions(-)
> >
> > diff --git a/src/remote/remote_driver.c b/src/remote/remote_driver.c
> > index 87ef5a9..115d0bc 100644
> > --- a/src/remote/remote_driver.c
> > +++ b/src/remote/remote_driver.c
> > @@ -5137,7 +5137,7 @@ static int
> remoteConnectDomainEventDeregisterAny(virConnectPtr conn,
> >  /* If that was the last callback for this eventID, we need to disable
> >   * events on the server */
> >  if (count == 0) {
> > -args.eventID = callbackID;
> > +args.eventID = eventID;
> >
> >  if (call(conn, priv, 0,
> REMOTE_PROC_CONNECT_DOMAIN_EVENT_DEREGISTER_ANY,
> >   (xdrproc_t)
> > xdr_remote_connect_domain_event_deregister_any_args, (char *) &args,
> >
> 
> --
> Eric Blake   eblake redhat com+1-919-301-3266
> Libvirt virtualization library http://libvirt.org


--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [PATCH] remote:Fix the parameter passed to remoteDispatchConnectDomainEventDeregisterAny() should be eventID

2013-10-17 Thread Wangyufei (A)
Because there are so many patches a day, sometimes you may not notice the ones
relevant to you. So I Cc'd the people who are familiar with the issue in the
patch to get their attention (not at random). If you're not happy with that, I
will try another way to get your attention. Thanks for your suggestion.

> -Original Message-
> From: libvir-list-boun...@redhat.com
> [mailto:libvir-list-boun...@redhat.com] On Behalf Of Jiri Denemark
> Sent: Friday, October 18, 2013 12:51 AM
> To: Wangyufei (A)
> Cc: libvir-list@redhat.com; zhouyimin Zhou(Yimin); Wangrui (K)
> Subject: Re: [libvirt] [PATCH] remote:Fix the parameter passed to
> remoteDispatchConnectDomainEventDeregisterAny() should be eventID
> 
> In addition to comments from Eric, please stop adding random libvirt
> developers to Cc: unless there's a real reason for that (which is quite
> rare). We are all subscribed to this list.
> 
> Jirka
> 
> --
> libvir-list mailing list
> libvir-list@redhat.com
> https://www.redhat.com/mailman/listinfo/libvir-list

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCH] remote:Fix the parameter passed to remoteDispatchConnectDomainEventDeregisterAny() should be eventID

2013-10-17 Thread Wangyufei (A)
From 0832ab83685e20580c8128f5505096e71e747b8a Mon Sep 17 00:00:00 2001
From: zhouyimin 
Date: Thu, 17 Oct 2013 15:59:21 +0800
Subject: [PATCH] remote:Fix the parameter passed to 
remoteDispatchConnectDomainEventDeregisterAny() should be eventID

Introduced by 7b87a3
When I quit a process which had only registered VIR_DOMAIN_EVENT_ID_REBOOT, I
got an error like:
"libvirt: XML-RPC error : internal error: domain event 0 not registered".
Then I added the following change, which fixed it.

Signed-off-by: zhouyimin 
---
 src/remote/remote_driver.c |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/src/remote/remote_driver.c b/src/remote/remote_driver.c
index 87ef5a9..115d0bc 100644
--- a/src/remote/remote_driver.c
+++ b/src/remote/remote_driver.c
@@ -5137,7 +5137,7 @@ static int 
remoteConnectDomainEventDeregisterAny(virConnectPtr conn,
 /* If that was the last callback for this eventID, we need to disable
  * events on the server */
 if (count == 0) {
-args.eventID = callbackID;
+args.eventID = eventID;
 
 if (call(conn, priv, 0, 
REMOTE_PROC_CONNECT_DOMAIN_EVENT_DEREGISTER_ANY,
  (xdrproc_t) 
xdr_remote_connect_domain_event_deregister_any_args, (char *) &args,
-- 
1.7.3.1.msysgit.0
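For context, a caller-side sketch of the flow that trips over this bug
(assuming an open virConnectPtr conn; illustrative code, not part of the
patch):

    static void rebootCb(virConnectPtr conn, virDomainPtr dom, void *opaque)
    {
        /* react to the reboot event */
    }

    /* Registration returns a callbackID, which is distinct from the
     * eventID passed in: */
    int cbid = virConnectDomainEventRegisterAny(conn, NULL,
                   VIR_DOMAIN_EVENT_ID_REBOOT,
                   VIR_DOMAIN_EVENT_CALLBACK(rebootCb), NULL, NULL);

    /* The public API deregisters by callbackID; on the wire, however,
     * the remote driver must send the eventID, which is what this
     * patch restores: */
    virConnectDomainEventDeregisterAny(conn, cbid);

Without the fix, deregistering the last callback sends the callbackID (here 0)
as the eventID, and the server rejects it with exactly the reported
"domain event 0 not registered" error.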


Best Regards,
-WangYufei



--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCH] qemu_migrate: Fix assigning the same port when migrating concurrently

2013-10-14 Thread Wangyufei (A)
From f56b290eab36bbb7a9ac53778a55638d473504d1 Mon Sep 17 00:00:00 2001
From: WangYufei 
Date: Fri, 11 Oct 2013 11:27:13 +0800
Subject: [PATCH] qemu_migrate: Fix assigning the same port when migrating
concurrently

When we migrate vms concurrently, there's a chance that libvirtd on the
destination assigns the same port to different migrations, which leads to
migration failure during the prepare phase on the destination. So we use
virPortAllocator here to solve the problem.

Signed-off-by: WangYufei 
---
 src/qemu/qemu_command.h   |3 +++
 src/qemu/qemu_conf.h  |6 +++---
 src/qemu/qemu_domain.h|1 +
 src/qemu/qemu_driver.c|6 ++
 src/qemu/qemu_migration.c |   28 +---
 5 files changed, 34 insertions(+), 10 deletions(-)

diff --git a/src/qemu/qemu_command.h b/src/qemu/qemu_command.h
index 2e2acfb..3277ba4 100644
--- a/src/qemu/qemu_command.h
+++ b/src/qemu/qemu_command.h
@@ -51,6 +51,9 @@
 # define QEMU_WEBSOCKET_PORT_MIN  5700
 # define QEMU_WEBSOCKET_PORT_MAX  65535
 
+# define QEMU_MIGRATION_PORT_MIN 49152
+# define QEMU_MIGRATION_PORT_MAX 49215
+
 typedef struct _qemuBuildCommandLineCallbacks qemuBuildCommandLineCallbacks;
 typedef qemuBuildCommandLineCallbacks *qemuBuildCommandLineCallbacksPtr;
 struct _qemuBuildCommandLineCallbacks {
diff --git a/src/qemu/qemu_conf.h b/src/qemu/qemu_conf.h
index da29a2a..3176085 100644
--- a/src/qemu/qemu_conf.h
+++ b/src/qemu/qemu_conf.h
@@ -221,6 +221,9 @@ struct _virQEMUDriver {
 /* Immutable pointer, self-locking APIs */
 virPortAllocatorPtr webSocketPorts;
 
+/* Immutable pointer, self-locking APIs */
+virPortAllocatorPtr migrationPorts;
+
 /* Immutable pointer, lockless APIs*/
 virSysinfoDefPtr hostsysinfo;
 
@@ -242,9 +245,6 @@ struct _qemuDomainCmdlineDef {
 char **env_value;
 };
 
-/* Port numbers used for KVM migration. */
-# define QEMUD_MIGRATION_FIRST_PORT 49152
-# define QEMUD_MIGRATION_NUM_PORTS 64
 
 
 void qemuDomainCmdlineDefFree(qemuDomainCmdlineDefPtr def);
diff --git a/src/qemu/qemu_domain.h b/src/qemu/qemu_domain.h
index 21f116c..16c55a6 100644
--- a/src/qemu/qemu_domain.h
+++ b/src/qemu/qemu_domain.h
@@ -160,6 +160,7 @@ struct _qemuDomainObjPrivate {
 unsigned long migMaxBandwidth;
 char *origname;
 int nbdPort; /* Port used for migration with NBD */
+int migrationPort;
 
 virChrdevsPtr devs;
 
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index cfdbb9a..c08a73c 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -687,6 +687,11 @@ qemuStateInitialize(bool privileged,
  cfg->webSocketPortMax)) == NULL)
 goto error;
 
+if ((qemu_driver->migrationPorts =
+ virPortAllocatorNew(QEMU_MIGRATION_PORT_MIN,
+ QEMU_MIGRATION_PORT_MAX)) == NULL)
+goto error;
+
 if (qemuSecurityInit(qemu_driver) < 0)
 goto error;
 
@@ -993,6 +998,7 @@ qemuStateCleanup(void) {
 virObjectUnref(qemu_driver->domains);
 virObjectUnref(qemu_driver->remotePorts);
 virObjectUnref(qemu_driver->webSocketPorts);
+virObjectUnref(qemu_driver->migrationPorts);
 
 virObjectUnref(qemu_driver->xmlopt);
 
diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index 3a1aab7..93ae237 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -2147,6 +2147,9 @@ qemuMigrationPrepareCleanup(virQEMUDriverPtr driver,
   qemuDomainJobTypeToString(priv->job.active),
   qemuDomainAsyncJobTypeToString(priv->job.asyncJob));
 
+virPortAllocatorRelease(driver->migrationPorts, priv->migrationPort);
+priv->migrationPort = 0;
+
 if (!qemuMigrationJobIsActive(vm, QEMU_ASYNC_JOB_MIGRATION_IN))
 return;
 qemuDomainObjDiscardAsyncJob(driver, vm);
@@ -2297,6 +2300,7 @@ qemuMigrationPrepareAny(virQEMUDriverPtr driver,
 
 *def = NULL;
 priv = vm->privateData;
+priv->migrationPort = port;
 if (VIR_STRDUP(priv->origname, origname) < 0)
 goto cleanup;
 
@@ -2415,6 +2419,11 @@ cleanup:
 VIR_FREE(xmlout);
 VIR_FORCE_CLOSE(dataFD[0]);
 VIR_FORCE_CLOSE(dataFD[1]);
+if (ret < 0) {
+virPortAllocatorRelease(driver->migrationPorts, port);
+if (priv)
+priv->migrationPort = 0;
+}
 if (vm) {
 if (ret >= 0 || vm->persistent)
 virObjectUnlock(vm);
@@ -2493,7 +2502,6 @@ qemuMigrationPrepareDirect(virQEMUDriverPtr driver,
const char *origname,
unsigned long flags)
 {
-static int port = 0;
 int this_port;
 char *hostname = NULL;
 const char *p;
@@ -2521,8 +2529,9 @@ qemuMigrationPrepareDirect(virQEMUDriverPtr driver,
  * to be a correct hostname which refers to the target machine).
  */
 if (uri_in == NULL) {
-this_port = QEMUD_MIGRATION_FIRST_PORT + port++;
-if (port == QEMUD_MIGRATION_NUM_PORTS) port = 0;
+if (virPortAllocatorAcquire(driver->migrationPorts,
+(unsigned short *)&this_port) < 0)
+goto cleanup;
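For reference, the allocator pattern the patch switches to, as the qemu driver
already uses for remote display and websocket ports (sketch, error handling
elided):

    /* One shared, self-locking allocator, created at driver init: */
    driver->migrationPorts =
        virPortAllocatorNew(QEMU_MIGRATION_PORT_MIN,
                            QEMU_MIGRATION_PORT_MAX);

    /* Each incoming migration atomically grabs a free port ... */
    unsigned short port = 0;
    if (virPortAllocatorAcquire(driver->migrationPorts, &port) < 0)
        goto cleanup;

    /* ... and every exit path gives it back exactly once: */
    virPortAllocatorRelease(driver->migrationPorts, port);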

Re: [libvirt] [BUG] libvirtd on destination crashes frequently while migrating vms concurrently

2013-10-11 Thread Wangyufei (A)
Thanks a lot, I'll have a try.

> -Original Message-
> From: Michal Privoznik [mailto:mpriv...@redhat.com]
> Sent: Friday, October 11, 2013 8:58 PM
> To: Wangyufei (A)
> Cc: libvir-list@redhat.com; jdene...@redhat.com; Wangrui (K)
> Subject: Re: [libvirt] [BUG] libvirtd on destination crashes frequently
> while migrating vms concurrently
> 
> On 27.09.2013 09:55, Wangyufei (A) wrote:
> > Hello,
> > I found a problem: libvirtd on the destination crashes frequently while
> > migrating vms concurrently. For example, if I keep migrating 10 vms
> > concurrently, then after about 30 minutes the libvirtd on the
> > destination will crash. So I analyzed it and found two bugs in the
> > migration process.
> > First, during the migration prepare phase on the destination, libvirtd
> > assigns ports to the qemu processes to be started there. But the port
> > increment operation is not atomic, so there's a chance that multiple
> > vms get the same port, and only the first one can start successfully;
> > the others fail to start. I've written a patch to solve this bug, and I
> > tested it; it works well. If only this bug existed, libvirtd would not
> > crash. The second bug is the fatal one.
> > Second, I found that libvirtd crashes because of a segmentation fault
> > produced by accessing a vm that has been released. Apparently it's
> > caused by multi-threaded operation: thread A accesses vm data which has
> > been released by thread B. In the end I proved this theory right.
> 
> So I've just pushed the patch upstream. Please give it a try if it
> resolves your problem.
> 
> Michal


--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCH] qemu_migrate: Fix assigning the same port when migrating concurrently

2013-10-10 Thread Wangyufei (A)
From f56b290eab36bbb7a9ac53778a55638d473504d1 Mon Sep 17 00:00:00 2001
From: WangYufei 
Date: Fri, 11 Oct 2013 11:27:13 +0800
Subject: [PATCH] qemu_migrate: Fix assigning the same port when migrating
concurrently

When we migrate vms concurrently, there's a chance that libvirtd on the
destination assigns the same port to different migrations, which leads to
migration failure during the prepare phase on the destination. So we use
virPortAllocator here to solve the problem.

Signed-off-by: WangYufei 
---
 src/qemu/qemu_command.h   |3 +++
 src/qemu/qemu_conf.h  |6 +++---
 src/qemu/qemu_domain.h|1 +
 src/qemu/qemu_driver.c|6 ++
 src/qemu/qemu_migration.c |   28 +---
 5 files changed, 34 insertions(+), 10 deletions(-)

diff --git a/src/qemu/qemu_command.h b/src/qemu/qemu_command.h
index 2e2acfb..3277ba4 100644
--- a/src/qemu/qemu_command.h
+++ b/src/qemu/qemu_command.h
@@ -51,6 +51,9 @@
 # define QEMU_WEBSOCKET_PORT_MIN  5700
 # define QEMU_WEBSOCKET_PORT_MAX  65535

+# define QEMU_MIGRATION_PORT_MIN 49152
+# define QEMU_MIGRATION_PORT_MAX 49215
+
 typedef struct _qemuBuildCommandLineCallbacks qemuBuildCommandLineCallbacks;
 typedef qemuBuildCommandLineCallbacks *qemuBuildCommandLineCallbacksPtr;
 struct _qemuBuildCommandLineCallbacks {
diff --git a/src/qemu/qemu_conf.h b/src/qemu/qemu_conf.h
index da29a2a..3176085 100644
--- a/src/qemu/qemu_conf.h
+++ b/src/qemu/qemu_conf.h
@@ -221,6 +221,9 @@ struct _virQEMUDriver {
 /* Immutable pointer, self-locking APIs */
 virPortAllocatorPtr webSocketPorts;

+/* Immutable pointer, self-locking APIs */
+virPortAllocatorPtr migrationPorts;
+
 /* Immutable pointer, lockless APIs*/
 virSysinfoDefPtr hostsysinfo;

@@ -242,9 +245,6 @@ struct _qemuDomainCmdlineDef {
 char **env_value;
 };

-/* Port numbers used for KVM migration. */
-# define QEMUD_MIGRATION_FIRST_PORT 49152
-# define QEMUD_MIGRATION_NUM_PORTS 64


 void qemuDomainCmdlineDefFree(qemuDomainCmdlineDefPtr def);
diff --git a/src/qemu/qemu_domain.h b/src/qemu/qemu_domain.h
index 21f116c..16c55a6 100644
--- a/src/qemu/qemu_domain.h
+++ b/src/qemu/qemu_domain.h
@@ -160,6 +160,7 @@ struct _qemuDomainObjPrivate {
 unsigned long migMaxBandwidth;
 char *origname;
 int nbdPort; /* Port used for migration with NBD */
+int migrationPort;

 virChrdevsPtr devs;

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index cfdbb9a..c08a73c 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -687,6 +687,11 @@ qemuStateInitialize(bool privileged,
  cfg->webSocketPortMax)) == NULL)
 goto error;

+if ((qemu_driver->migrationPorts =
+ virPortAllocatorNew(QEMU_MIGRATION_PORT_MIN,
+ QEMU_MIGRATION_PORT_MAX)) == NULL)
+goto error;
+
 if (qemuSecurityInit(qemu_driver) < 0)
 goto error;

@@ -993,6 +998,7 @@ qemuStateCleanup(void) {
 virObjectUnref(qemu_driver->domains);
 virObjectUnref(qemu_driver->remotePorts);
 virObjectUnref(qemu_driver->webSocketPorts);
+virObjectUnref(qemu_driver->migrationPorts);

 virObjectUnref(qemu_driver->xmlopt);

diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index 3a1aab7..93ae237 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -2147,6 +2147,9 @@ qemuMigrationPrepareCleanup(virQEMUDriverPtr driver,
   qemuDomainJobTypeToString(priv->job.active),
   qemuDomainAsyncJobTypeToString(priv->job.asyncJob));

+virPortAllocatorRelease(driver->migrationPorts, priv->migrationPort);
+priv->migrationPort = 0;
+
 if (!qemuMigrationJobIsActive(vm, QEMU_ASYNC_JOB_MIGRATION_IN))
 return;
 qemuDomainObjDiscardAsyncJob(driver, vm);
@@ -2297,6 +2300,7 @@ qemuMigrationPrepareAny(virQEMUDriverPtr driver,

 *def = NULL;
 priv = vm->privateData;
+priv->migrationPort = port;
 if (VIR_STRDUP(priv->origname, origname) < 0)
 goto cleanup;

@@ -2415,6 +2419,11 @@ cleanup:
 VIR_FREE(xmlout);
 VIR_FORCE_CLOSE(dataFD[0]);
 VIR_FORCE_CLOSE(dataFD[1]);
+if (ret < 0) {
+virPortAllocatorRelease(driver->migrationPorts, port);
+if (priv)
+priv->migrationPort = 0;
+}
 if (vm) {
 if (ret >= 0 || vm->persistent)
 virObjectUnlock(vm);
@@ -2493,7 +2502,6 @@ qemuMigrationPrepareDirect(virQEMUDriverPtr driver,
const char *origname,
unsigned long flags)
 {
-static int port = 0;
 int this_port;
 char *hostname = NULL;
 const char *p;
@@ -2521,8 +2529,9 @@ qemuMigrationPrepareDirect(virQEMUDriverPtr driver,
  * to be a correct hostname which refers to the target machine).
  */
 if (uri_in == NULL) {
-this_port = QEMUD_MIGRATION_FIRST_PORT + port++;
-if (port == QEMUD_MIGRATION_NUM_PORTS) port = 0;
+if (virPortAllocatorAcquire(driver->migrationPorts,
+(unsigned short *)&this_port) < 0)
+goto cleanup;

Re: [libvirt] [BUG] libvirtd on destination crashes frequently while migrating vms concurrently

2013-09-29 Thread Wangyufei (A)
Hi guys,
   Is there any problem with my analysis? Am I right?

   If my analysis is right, do we have any plan to solve this kind of problem
caused by the removal of the driver lock?

   Thanks for your time and kindness in replying.

   _
   From: Wangyufei (A)
   Sent: Friday, September 27, 2013 3:56 PM
   To: libvir-list@redhat.com
   Cc: Wangrui (K); Wangyufei (A); Michal Privoznik; jdene...@redhat.com
   Subject: [BUG] libvirtd on destination crashes frequently while migrating
vms concurrently


   Hello,
   I found a problem: libvirtd on the destination crashes frequently while
migrating vms concurrently. For example, if I keep migrating 10 vms
concurrently, then after about 30 minutes the libvirtd on the destination will
crash. So I analyzed it and found two bugs in the migration process.
   First, during the migration prepare phase on the destination, libvirtd
assigns ports to the qemu processes to be started there. But the port
increment operation is not atomic, so there's a chance that multiple vms get
the same port, and only the first one can start successfully; the others fail
to start. I've written a patch to solve this bug, and I tested it; it works
well. If only this bug existed, libvirtd would not crash. The second bug is
the fatal one.
   Second, I found that libvirtd crashes because of a segmentation fault
produced by accessing a vm that has been released. Apparently it's caused by
multi-threaded operation: thread A accesses vm data which has been released by
thread B. In the end I proved this theory right.

   Step 1. Because of bug one, the port is already occupied, so qemu on the
destination fails to start and sends a HANGUP signal to libvirtd. libvirtd
then receives this VIR_EVENT_HANDLE_HANGUP event, and thread A, dealing with
events, calls qemuProcessHandleMonitorEOF as follows:

#0  qemuProcessHandleMonitorEOF (mon=0x7f4dcd9c3130, vm=0x7f4dcd9c9780)
at qemu/qemu_process.c:399
#1  0x7f4dc18d9e87 in qemuMonitorIO (watch=68, fd=27, events=8,
opaque=0x7f4dcd9c3130) at qemu/qemu_monitor.c:668
#2  0x7f4dccae6604 in virEventPollDispatchHandles (nfds=18,
fds=0x7f4db4017e70) at util/vireventpoll.c:500
#3  0x7f4dccae7ff2 in virEventPollRunOnce () at util/vireventpoll.c:646
#4  0x7f4dccae60e4 in virEventRunDefaultImpl () at util/virevent.c:273
#5  0x7f4dccc40b25 in virNetServerRun (srv=0x7f4dcd8d26b0)
at rpc/virnetserver.c:1106
#6  0x7f4dcd6164c9 in main (argc=3, argv=0x7fff8d8f9f88)
at libvirtd.c:1518


static int virEventPollDispatchHandles(int nfds, struct pollfd *fds) {
    ...
    /*
     * The deleted flag is still false now, so we pass through to
     * qemuProcessHandleMonitorEOF.
     */
    if (eventLoop.handles[i].deleted) {
        EVENT_DEBUG("Skip deleted n=%d w=%d f=%d", i,
                    eventLoop.handles[i].watch, eventLoop.handles[i].fd);
        continue;
    }


   Step 2: Thread B, dealing with migration on the destination, sets the
deleted flag in virEventPollRemoveHandle as follows:

#0  virEventPollRemoveHandle (watch=74) at util/vireventpoll.c:176
#1  0x7f4dccae5e6f in virEventRemoveHandle (watch=74)
at util/virevent.c:97
#2  0x7f4dc18d8ca8 in qemuMonitorClose (mon=0x7f4dbc030910)
at qemu/qemu_monitor.c:831
#3  0x7f4dc18bec63 in qemuProcessStop (driver=0x7f4dcd9bd400,
vm=0x7f4dbc00ed20, reason=VIR_DOMAIN_SHUTOFF_FAILED, flags=0)
at qemu/qemu_process.c:4302
#4  0x7f4dc18c1a83 in qemuProcessStart (conn=0x7f4dbc031020,
driver=0x7f4dcd9bd400, vm=0x7f4dbc00ed20,
migrateFrom=0x7f4dbc01af90 "tcp:[::]:49152", stdin_fd=-1,
stdin_path=0x0, snapshot=0x0,
vmop=VIR_NETDEV_VPORT_PROFILE_OP_MIGRATE_IN_START, flags=6)
at qemu/qemu_process.c:4145
#5  0x7f4dc18cc688 in qemuMigrationPrepareAny (driver=0x7f4dcd9bd400,

   Step 3: Thread B cleans up the vm in qemuMigrationPrepareAny after
qemuProcessStart failed.

#0  virDomainObjDispose (obj=0x7f4dcd9c9780) at conf/domain_conf.c:2009
#1  0x7f4dccb0ccd9 in virObjectUnref (anyobj=0x7f4dcd9c9780)
at util/virobject.c:266
#2  0x7f4dccb42340 in virDomainObjListRemove (doms=0x7f4dcd9bd4f0,
dom=0x7f4dcd9c9780) at conf/domain_conf.c:2342
#3  0x7f4dc189ac33 in qemuDomainRemoveInactive (driver=0x7f4dcd9bd400,
vm=0x7f4dcd9c9780) at qemu/qemu_domain.c:1993
#4  0x7f4dc18ccad5 in qemuMigrationPrepareAny (driver=0x7f4dcd9bd400,

   Step 4: Thread A accesses priv, which was released by thread B before;
then libvirtd crashes. Bomb!

static void
qemuProcessHandleMonitorEOF(qemuMonitorPtr mon ATTRIBUTE_UNUSED,
                            virDomainObjPtr vm)
{
    virQEMUDriverPtr driver = qemu_driver;
    virDomainEventPtr event = NULL;
    qemuDomainObjPrivatePtr priv;
    int eventReason = VIR_DOMAIN_EVENT_STOPPED_SHUTDOWN;
    int stopReason = VIR_DOMAIN_SHUTOFF_SHUTDOWN;
    const char *auditReason = "shutdown";

    VIR_DEBUG("Received EOF on %p '%s'", vm, vm->def->name);

[libvirt] [PATCH] qemu_migrate: Fix assigning the same port when migrating concurrently

2013-09-29 Thread Wangyufei (A)
From 6c2de34432db674072231ad66c9e8a0a600ede8a Mon Sep 17 00:00:00 2001
From: WangYufei 
Date: Mon, 30 Sep 2013 11:48:43 +0800
Subject: [PATCH] qemu_migrate: Fix assigning the same port when migrating
concurrently

When we migrate vms concurrently, there's a chance that libvirtd on the
destination assigns the same port to different migrations, which leads to
migration failure during the prepare phase on the destination. So we use
virPortAllocator here to solve the problem.

Signed-off-by: WangYufei 
---
 src/qemu/qemu_command.h   |3 +++
 src/qemu/qemu_conf.h  |6 +++---
 src/qemu/qemu_driver.c|6 ++
 src/qemu/qemu_migration.c |   17 ++---
 4 files changed, 22 insertions(+), 10 deletions(-)

diff --git a/src/qemu/qemu_command.h b/src/qemu/qemu_command.h
index 2e2acfb..3277ba4 100644
--- a/src/qemu/qemu_command.h
+++ b/src/qemu/qemu_command.h
@@ -51,6 +51,9 @@
 # define QEMU_WEBSOCKET_PORT_MIN  5700
 # define QEMU_WEBSOCKET_PORT_MAX  65535

+# define QEMU_MIGRATION_PORT_MIN 49152
+# define QEMU_MIGRATION_PORT_MAX 49215
+
 typedef struct _qemuBuildCommandLineCallbacks qemuBuildCommandLineCallbacks;
 typedef qemuBuildCommandLineCallbacks *qemuBuildCommandLineCallbacksPtr;
 struct _qemuBuildCommandLineCallbacks {
diff --git a/src/qemu/qemu_conf.h b/src/qemu/qemu_conf.h
index da29a2a..3176085 100644
--- a/src/qemu/qemu_conf.h
+++ b/src/qemu/qemu_conf.h
@@ -221,6 +221,9 @@ struct _virQEMUDriver {
 /* Immutable pointer, self-locking APIs */
 virPortAllocatorPtr webSocketPorts;

+/* Immutable pointer, self-locking APIs */
+virPortAllocatorPtr migrationPorts;
+
 /* Immutable pointer, lockless APIs*/
 virSysinfoDefPtr hostsysinfo;

@@ -242,9 +245,6 @@ struct _qemuDomainCmdlineDef {
 char **env_value;
 };

-/* Port numbers used for KVM migration. */
-# define QEMUD_MIGRATION_FIRST_PORT 49152
-# define QEMUD_MIGRATION_NUM_PORTS 64


 void qemuDomainCmdlineDefFree(qemuDomainCmdlineDefPtr def);
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index e8bc04d..9437b5a 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -688,6 +688,11 @@ qemuStateInitialize(bool privileged,
  cfg->webSocketPortMax)) == NULL)
 goto error;

+if ((qemu_driver->migrationPorts =
+virPortAllocatorNew(QEMU_MIGRATION_PORT_MIN,
+QEMU_MIGRATION_PORT_MAX)) == NULL)
+goto error;
+
 if (qemuSecurityInit(qemu_driver) < 0)
 goto error;

@@ -994,6 +999,7 @@ qemuStateCleanup(void) {
 virObjectUnref(qemu_driver->domains);
 virObjectUnref(qemu_driver->remotePorts);
 virObjectUnref(qemu_driver->webSocketPorts);
+virObjectUnref(qemu_driver->migrationPorts);

 virObjectUnref(qemu_driver->xmlopt);

diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index 3a1aab7..82d90bf 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -2493,7 +2493,6 @@ qemuMigrationPrepareDirect(virQEMUDriverPtr driver,
const char *origname,
unsigned long flags)
 {
-static int port = 0;
 int this_port;
 char *hostname = NULL;
 const char *p;
@@ -2521,8 +2520,9 @@ qemuMigrationPrepareDirect(virQEMUDriverPtr driver,
  * to be a correct hostname which refers to the target machine).
  */
 if (uri_in == NULL) {
-this_port = QEMUD_MIGRATION_FIRST_PORT + port++;
-if (port == QEMUD_MIGRATION_NUM_PORTS) port = 0;
+if (virPortAllocatorAcquire(driver->migrationPorts,
+(unsigned short *)&this_port) < 0)
+goto cleanup;

 /* Get hostname */
 if ((hostname = virGetHostname()) == NULL)
@@ -2578,9 +2578,9 @@ qemuMigrationPrepareDirect(virQEMUDriverPtr driver,

 if (uri->port == 0) {
 /* Generate a port */
-this_port = QEMUD_MIGRATION_FIRST_PORT + port++;
-if (port == QEMUD_MIGRATION_NUM_PORTS)
-port = 0;
+if (virPortAllocatorAcquire(driver->migrationPorts,
+(unsigned short *)&this_port) < 0)
+goto cleanup;

 /* Caller frees */
 if (virAsprintf(uri_out, "%s:%d", uri_in, this_port) < 0)
@@ -2600,8 +2600,11 @@ qemuMigrationPrepareDirect(virQEMUDriverPtr driver,
 cleanup:
 virURIFree(uri);
 VIR_FREE(hostname);
-if (ret != 0)
+if (ret != 0) {
 VIR_FREE(*uri_out);
+virPortAllocatorRelease(driver->migrationPorts,
+(unsigned short)this_port);
+}
 return ret;
 }

--
1.7.3.1.msysgit.0

Best Regards,
-WangYufei



--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list

Re: [libvirt] [PATCH] qemu_migrate: Fix assigning the same port when migrating concurrently

2013-09-27 Thread Wangyufei (A)
Yes, I get your point.

    this_port = QEMUD_MIGRATION_FIRST_PORT + port++;
    if (port == QEMUD_MIGRATION_NUM_PORTS)
        port = 0;

The operations above should be atomic; otherwise port may grow bigger than
QEMUD_MIGRATION_NUM_PORTS and end up out of control. Right?

Using virPortAllocator is more complicated; I need to think about it. I guess
removing the qemu driver lock has let some devils out; we'll see.

Thanks for your reply.
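To spell out the race Jiri points out below (hypothetical, simplified):

    /* Even with the increment made atomic ... */
    this_port = QEMUD_MIGRATION_FIRST_PORT + virAtomicIntInc(&port);
    /* ... this wrap check is a separate, non-atomic read-modify-write:
     * two threads can both see port == QEMUD_MIGRATION_NUM_PORTS and
     * reset it twice, or both miss the reset and walk past the range. */
    if (port == QEMUD_MIGRATION_NUM_PORTS)
        port = 0;

Folding the wrap into the atomic step, e.g.

    this_port = QEMUD_MIGRATION_FIRST_PORT +
                (virAtomicIntInc(&port) % QEMUD_MIGRATION_NUM_PORTS);

would remove that race (ignoring eventual integer overflow), but it still
cannot tell whether a port is actually free, which is why virPortAllocator is
the better fit.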

> -Original Message-
> From: jdene...@redhat.com [mailto:jdene...@redhat.com]
> Sent: Friday, September 27, 2013 5:06 PM
> To: Wangyufei (A)
> Cc: libvir-list@redhat.com; Michal Privoznik; Wangrui (K)
> Subject: Re: [libvirt] [PATCH] qemu_migrate: Fix assigning the same port
> when migrating concurrently
> 
> On Fri, Sep 27, 2013 at 06:28:50 +, Wangyufei (A) wrote:
> > From: WangYufei 
> > Date: Fri, 27 Sep 2013 11:53:57 +0800
> > Subject: [PATCH] qemu_migrate: Fix assigning the same port when migrating
> > concurrently
> >
> > When we migrate vms concurrently, there's a chance that libvirtd on the
> > destination assigns the same port to different migrations, which leads to
> > migration failure during the prepare phase on the destination. So we
> > change the port increment to an atomic operation to solve the problem.
> 
> Oops, this was apparently latent until the big qemu driver lock was
> removed.
> 
> >
> > Signed-off-by: WangYufei 
> > ---
> >  src/qemu/qemu_migration.c |5 +++--
> >  1 files changed, 3 insertions(+), 2 deletions(-)
> >
> > diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
> > index 3a1aab7..0f496f4 100644
> > --- a/src/qemu/qemu_migration.c
> > +++ b/src/qemu/qemu_migration.c
> > @@ -57,6 +57,7 @@
> >  #include "virhook.h"
> >  #include "virstring.h"
> >  #include "virtypedparam.h"
> > +#include "viratomic.h"
> >
> >  #define VIR_FROM_THIS VIR_FROM_QEMU
> >
> > @@ -2521,7 +2522,7 @@ qemuMigrationPrepareDirect(virQEMUDriverPtr
> driver,
> >   * to be a correct hostname which refers to the target machine).
> >   */
> >  if (uri_in == NULL) {
> > -this_port = QEMUD_MIGRATION_FIRST_PORT + port++;
> > +this_port = QEMUD_MIGRATION_FIRST_PORT +
> virAtomicIntInc(&port);
> >  if (port == QEMUD_MIGRATION_NUM_PORTS) port = 0;
> >
> >  /* Get hostname */
> > @@ -2578,7 +2579,7 @@ qemuMigrationPrepareDirect(virQEMUDriverPtr
> driver,
> >
> >  if (uri->port == 0) {
> >  /* Generate a port */
> > -this_port = QEMUD_MIGRATION_FIRST_PORT + port++;
> > +this_port = QEMUD_MIGRATION_FIRST_PORT +
> virAtomicIntInc(&port);
> >  if (port == QEMUD_MIGRATION_NUM_PORTS)
> >  port = 0;
> >
> 
> Unfortunately, this patch is incomplete. The increments are now atomic
> but the wrapping at QEMUD_MIGRATION_NUM_PORTS is not. I think this
> should use virPortAllocator instead.
> 
> Jirka

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [BUG] libvirtd on destination crashes frequently while migrating vms concurrently

2013-09-27 Thread Wangyufei (A)
Hello,
I found a problem: libvirtd on the destination crashes frequently while
migrating vms concurrently. For example, if I keep migrating 10 vms
concurrently, then after about 30 minutes the libvirtd on the destination will
crash. So I analyzed it and found two bugs in the migration process.
First, during the migration prepare phase on the destination, libvirtd assigns
ports to the qemu processes to be started there. But the port increment
operation is not atomic, so there's a chance that multiple vms get the same
port, and only the first one can start successfully; the others fail to start.
I've written a patch to solve this bug, and I tested it; it works well. If
only this bug existed, libvirtd would not crash. The second bug is the fatal
one.
Second, I found that libvirtd crashes because of a segmentation fault produced
by accessing a vm that has been released. Apparently it's caused by
multi-threaded operation: thread A accesses vm data which has been released by
thread B. In the end I proved this theory right.

Step 1. Because of bug one, the port is already occupied, so qemu on the
destination fails to start and sends a HANGUP signal to libvirtd. libvirtd
then receives this VIR_EVENT_HANDLE_HANGUP event, and thread A, dealing with
events, calls qemuProcessHandleMonitorEOF as follows:

#0  qemuProcessHandleMonitorEOF (mon=0x7f4dcd9c3130, vm=0x7f4dcd9c9780)
at qemu/qemu_process.c:399
#1  0x7f4dc18d9e87 in qemuMonitorIO (watch=68, fd=27, events=8,
opaque=0x7f4dcd9c3130) at qemu/qemu_monitor.c:668
#2  0x7f4dccae6604 in virEventPollDispatchHandles (nfds=18,
fds=0x7f4db4017e70) at util/vireventpoll.c:500
#3  0x7f4dccae7ff2 in virEventPollRunOnce () at util/vireventpoll.c:646
#4  0x7f4dccae60e4 in virEventRunDefaultImpl () at util/virevent.c:273
#5  0x7f4dccc40b25 in virNetServerRun (srv=0x7f4dcd8d26b0)
at rpc/virnetserver.c:1106
#6  0x7f4dcd6164c9 in main (argc=3, argv=0x7fff8d8f9f88)
at libvirtd.c:1518


static int virEventPollDispatchHandles(int nfds, struct pollfd *fds) {
    ...
    /*
     * The deleted flag is still false now, so we pass through to
     * qemuProcessHandleMonitorEOF.
     */
    if (eventLoop.handles[i].deleted) {
        EVENT_DEBUG("Skip deleted n=%d w=%d f=%d", i,
                    eventLoop.handles[i].watch, eventLoop.handles[i].fd);
        continue;
    }


Step 2: Thread B, dealing with migration on the destination, sets the deleted
flag in virEventPollRemoveHandle as follows:

#0  virEventPollRemoveHandle (watch=74) at util/vireventpoll.c:176
#1  0x7f4dccae5e6f in virEventRemoveHandle (watch=74)
at util/virevent.c:97
#2  0x7f4dc18d8ca8 in qemuMonitorClose (mon=0x7f4dbc030910)
at qemu/qemu_monitor.c:831
#3  0x7f4dc18bec63 in qemuProcessStop (driver=0x7f4dcd9bd400,
vm=0x7f4dbc00ed20, reason=VIR_DOMAIN_SHUTOFF_FAILED, flags=0)
at qemu/qemu_process.c:4302
#4  0x7f4dc18c1a83 in qemuProcessStart (conn=0x7f4dbc031020,
driver=0x7f4dcd9bd400, vm=0x7f4dbc00ed20,
migrateFrom=0x7f4dbc01af90 "tcp:[::]:49152", stdin_fd=-1,
stdin_path=0x0, snapshot=0x0,
vmop=VIR_NETDEV_VPORT_PROFILE_OP_MIGRATE_IN_START, flags=6)
at qemu/qemu_process.c:4145
#5  0x7f4dc18cc688 in qemuMigrationPrepareAny (driver=0x7f4dcd9bd400,

Step 3: Thread B cleans up the vm in qemuMigrationPrepareAny after
qemuProcessStart failed.

#0  virDomainObjDispose (obj=0x7f4dcd9c9780) at conf/domain_conf.c:2009
#1  0x7f4dccb0ccd9 in virObjectUnref (anyobj=0x7f4dcd9c9780)
at util/virobject.c:266
#2  0x7f4dccb42340 in virDomainObjListRemove (doms=0x7f4dcd9bd4f0,
dom=0x7f4dcd9c9780) at conf/domain_conf.c:2342
#3  0x7f4dc189ac33 in qemuDomainRemoveInactive (driver=0x7f4dcd9bd400,
vm=0x7f4dcd9c9780) at qemu/qemu_domain.c:1993
#4  0x7f4dc18ccad5 in qemuMigrationPrepareAny (driver=0x7f4dcd9bd400,

Step 4: Thread A accesses priv, which thread B freed earlier, and libvirtd 
crashes. Bomb!

static void
qemuProcessHandleMonitorEOF(qemuMonitorPtr mon ATTRIBUTE_UNUSED,
                            virDomainObjPtr vm)
{
    virQEMUDriverPtr driver = qemu_driver;
    virDomainEventPtr event = NULL;
    qemuDomainObjPrivatePtr priv;
    int eventReason = VIR_DOMAIN_EVENT_STOPPED_SHUTDOWN;
    int stopReason = VIR_DOMAIN_SHUTOFF_SHUTDOWN;
    const char *auditReason = "shutdown";

    VIR_DEBUG("Received EOF on %p '%s'", vm, vm->def->name);

    virObjectLock(vm);

    priv = vm->privateData;      /* gdb at this point: (gdb) p priv
                                  * $1 = (qemuDomainObjPrivatePtr) 0x0 */
    if (priv->beingDestroyed) {

In short, if anything goes wrong and qemuProcessStart fails during migration 
on the destination, we are in real trouble: freed memory gets accessed. I did 
not find any existing lock or flag that prevents this from happening. Please 
help me out, thanks a lot. One rough idea is sketched below.
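
The idea (a sketch only, not a tested fix; the names follow the libvirt 
sources of this era, and error handling is elided): make the monitor hold its 
own reference on the vm for as long as it can deliver callbacks, so the event 
thread can never see a freed object.

/* Sketch only: pin the vm while its monitor exists. */

/* when opening the monitor, e.g. in qemuConnectMonitor(): */
virObjectRef(vm);                      /* the monitor now owns a vm reference */
priv->mon = qemuMonitorOpen(vm, priv->monConfig, priv->monJSON,
                            &monitorCallbacks);
if (priv->mon == NULL)
    virObjectUnref(vm);                /* open failed, return the reference */

/* ... and in the monitor EOF/close path, once no more callbacks can fire: */
virObjectUnref(vm);                    /* drop the monitor's reference */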

Best Regards,
-WangYufei



--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list

[libvirt] [PATCH] qemu_migrate: Fix assigning the same port when migrating concurrently

2013-09-26 Thread Wangyufei (A)
From: WangYufei 
Date: Fri, 27 Sep 2013 11:53:57 +0800
Subject: [PATCH] qemu_migrate: Fix assigning the same port when migrating 
concurrently

When we migrate VMs concurrently, there is a chance that libvirtd on the 
destination assigns the same port to different migrations, which makes the 
migration fail during the prepare phase on the destination. Change the port 
increment to an atomic operation to solve the problem.

Signed-off-by: WangYufei 
---
 src/qemu/qemu_migration.c |5 +++--
 1 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index 3a1aab7..0f496f4 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -57,6 +57,7 @@
 #include "virhook.h"
 #include "virstring.h"
 #include "virtypedparam.h"
+#include "viratomic.h"

 #define VIR_FROM_THIS VIR_FROM_QEMU

@@ -2521,7 +2522,7 @@ qemuMigrationPrepareDirect(virQEMUDriverPtr driver,
      * to be a correct hostname which refers to the target machine).
      */
     if (uri_in == NULL) {
-        this_port = QEMUD_MIGRATION_FIRST_PORT + port++;
+        this_port = QEMUD_MIGRATION_FIRST_PORT + virAtomicIntInc(&port);
         if (port == QEMUD_MIGRATION_NUM_PORTS) port = 0;

         /* Get hostname */
@@ -2578,7 +2579,7 @@ qemuMigrationPrepareDirect(virQEMUDriverPtr driver,

     if (uri->port == 0) {
         /* Generate a port */
-        this_port = QEMUD_MIGRATION_FIRST_PORT + port++;
+        this_port = QEMUD_MIGRATION_FIRST_PORT + virAtomicIntInc(&port);
         if (port == QEMUD_MIGRATION_NUM_PORTS)
             port = 0;

--
1.7.3.1.msysgit.0
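
One more note on the patch: the wrap-around check after the increment is still 
a separate, non-atomic step, so two threads could in principle still race on 
resetting port to 0. A sketch of a variant that folds the wrap into the 
computed offset instead (my suggestion, not part of the posted patch):

/* Sketch: derive the offset from the atomically incremented counter so no
 * separate reset is needed; unsigned arithmetic keeps the modulo well
 * defined if the counter ever wraps. */
unsigned int n = (unsigned int)virAtomicIntInc(&port);
this_port = QEMUD_MIGRATION_FIRST_PORT + (n % QEMUD_MIGRATION_NUM_PORTS);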

Best Regards,
-WangYufei



--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list

[libvirt] USB: USB Passthrough Device Autoconnect Feature

2013-09-13 Thread Wangyufei (A)
Hello,
Qemu upstream has implemented a USB passthrough device autoconnect feature for 
the guest: when a USB device is unplugged from the host and then plugged back 
into the same physical USB port, it is reattached to the guest automatically.

the patch was:
https://lists.gnu.org/archive/html/qemu-devel/2011-05/msg02341.html

However, libvirt does not provide an interface that identifies a USB device 
for passthrough by its physical port rather than by its device number.

Do you have any ideas or plans about this problem? Thanks in advance.
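
For reference, qemu can already match by physical port on its command line via 
the usb-host device's hostbus/hostport properties, which is what the patch 
above builds on. A libvirt interface might then look like the XML below; note 
that the port attribute there is purely hypothetical, invented here for 
illustration, and not an existing libvirt schema element:

# qemu side (existing usb-host properties):
qemu-kvm ... -device usb-host,hostbus=1,hostport=2

# hypothetical libvirt hostdev source identifying the physical port:
<hostdev mode='subsystem' type='usb'>
  <source>
    <address bus='1' port='2'/>   <!-- 'port' is hypothetical -->
  </source>
</hostdev>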

Best Regards,
-WangYufei



--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list

Re: [libvirt] Is it a problem that after virEventRegisterDefaultImpl we have handlers leaked

2013-09-03 Thread Wangyufei (A)


> -Original Message-
> From: Daniel P. Berrange [mailto:berra...@redhat.com]
> Sent: Tuesday, September 03, 2013 6:30 PM
> To: Wangyufei (A)
> Cc: libvir-list@redhat.com; Wangrui (K)
> Subject: Re: [libvirt] Is it a problem that after virEventRegisterDefaultImpl 
> we
> have handlers leaked
> 
> On Tue, Sep 03, 2013 at 10:25:45AM +, Wangyufei (A) wrote:
> >
> >
> > > -Original Message-
> > > From: Daniel P. Berrange [mailto:berra...@redhat.com]
> > > Sent: Tuesday, September 03, 2013 5:14 PM
> > > To: Wangyufei (A)
> > > Cc: libvir-list@redhat.com; Wangrui (K)
> > > Subject: Re: [libvirt] Is it a problem that after 
> > > virEventRegisterDefaultImpl
> we
> > > have handlers leaked
> > >
> > > On Mon, Sep 02, 2013 at 07:44:39AM +, Wangyufei (A) wrote:
> > > > Hello,
> > > > When I ran programme event-test compiled from event-test.c, I found
> a
> > > problem that, after virEventRegisterDefaultImpl I do
> virConnectOpenAuth
> > > and virConnectClose, there will be handlers of socket and pipe opened by
> > > virConnectOpenAuth leaked after virConnectClose. So I did some
> analysis,
> > > and I found the fact following:
> > > >
> > > > In the condition that we only have one connection here
> > > >
> > > > int virNetSocketAddIOCallback(virNetSocketPtr sock,
> > > >   int events,
> > > >   virNetSocketIOFunc func,
> > > >   void *opaque,
> > > >   virFreeCallback ff)
> > > > {
> > > > int ret = -1;
> > > >
> > > > virObjectRef(sock); //Here we add sock refers once, then we will
> get
> > > refers equal 2 after
> > > > virObjectLock(sock);
> > > > if (sock->watch > 0) {
> > > > VIR_DEBUG("Watch already registered on socket %p", sock);
> > > > goto cleanup;
> > > > }
> > > >
> > > > if ((sock->watch = virEventAddHandle(sock->fd, //If we have
> called
> > > virEventRegisterDefaultImpl, then here we'll get a sock watch non
> negative
> > > and will not go to cleanup.
> > > >  events,
> > > >
> > > virNetSocketEventHandle,
> > > >  sock,
> > > >
> virNetSocketEventFree))
> > > < 0) {
> > > > VIR_DEBUG("Failed to register watch on socket %p", sock);
> > > > goto cleanup;
> > > > }
> > > > sock->func = func;
> > > > sock->opaque = opaque;
> > > > sock->ff = ff;
> > > >
> > > > ret = 0;
> > > >
> > > > cleanup:
> > > > virObjectUnlock(sock);
> > > > if (ret != 0)
> > > > virObjectUnref(sock); //If we haven't called
> > > virEventRegisterDefaultImpl, we'll be here after virEventAddHandle, and
> > > sock refers will decrease to 1
> > > > return ret;
> > > > }
> > > >
> > > > Condition with virEventRegisterDefaultImpl, we'll do unrefer action in
> two
> > > places:
> > > >
> > > > 1.   virEventRunDefaultImpl  ->virEventPollRunOnce
> > > ->virEventRunDefaultImpl  ->virEventPollRunOnce
> > > ->virEventPollCleanupHandles -> virNetSocketEventFree ->
> virObjectUnref
> > > >
> > > > 2.   virConnectClose ->virObjectUnref ->virConnectDispose
> > > ->remoteConnectClose  ->doRemoteClose  ->virNetClientClose
> > > ->virNetClientCloseInternal -> virNetClientIOEventLoopPassTheBuck ->
> > > virNetClientCloseLocked -> virObjectUnref
> > > >
> > > > When event dealing loop is alive, everything work fine, we'll get sock
> > > refers 2
> > > > after virConnectOpenAuth and unrefer twice in two places above after
> > > virConnectClose.
> > > > But If some accidents happened like we quit event dealing loop or
> > > virEventRunDefaultImpl
> > > > suspended, then sock refers will never be zero, and handlers will never
> be
> > > freed.
> > >
> > > Do not stop running the event loop. It is a requirement of the API that
> once
> > > you 

Re: [libvirt] Is it a problem that after virEventRegisterDefaultImpl we have handlers leaked

2013-09-03 Thread Wangyufei (A)


> -Original Message-
> From: Daniel P. Berrange [mailto:berra...@redhat.com]
> Sent: Tuesday, September 03, 2013 5:14 PM
> To: Wangyufei (A)
> Cc: libvir-list@redhat.com; Wangrui (K)
> Subject: Re: [libvirt] Is it a problem that after virEventRegisterDefaultImpl 
> we
> have handlers leaked
> 
> On Mon, Sep 02, 2013 at 07:44:39AM +, Wangyufei (A) wrote:
> > Hello,
> > When I ran programme event-test compiled from event-test.c, I found a
> problem that, after virEventRegisterDefaultImpl I do virConnectOpenAuth
> and virConnectClose, there will be handlers of socket and pipe opened by
> virConnectOpenAuth leaked after virConnectClose. So I did some analysis,
> and I found the fact following:
> >
> > In the condition that we only have one connection here
> >
> > int virNetSocketAddIOCallback(virNetSocketPtr sock,
> >   int events,
> >   virNetSocketIOFunc func,
> >   void *opaque,
> >   virFreeCallback ff)
> > {
> > int ret = -1;
> >
> > virObjectRef(sock); //Here we add sock refers once, then we will get
> refers equal 2 after
> > virObjectLock(sock);
> > if (sock->watch > 0) {
> > VIR_DEBUG("Watch already registered on socket %p", sock);
> > goto cleanup;
> > }
> >
> > if ((sock->watch = virEventAddHandle(sock->fd, //If we have called
> virEventRegisterDefaultImpl, then here we'll get a sock watch non negative
> and will not go to cleanup.
> >  events,
> >
> virNetSocketEventHandle,
> >  sock,
> >  virNetSocketEventFree))
> < 0) {
> > VIR_DEBUG("Failed to register watch on socket %p", sock);
> > goto cleanup;
> > }
> > sock->func = func;
> > sock->opaque = opaque;
> > sock->ff = ff;
> >
> > ret = 0;
> >
> > cleanup:
> > virObjectUnlock(sock);
> > if (ret != 0)
> > virObjectUnref(sock); //If we haven't called
> virEventRegisterDefaultImpl, we'll be here after virEventAddHandle, and
> sock refers will decrease to 1
> > return ret;
> > }
> >
> > Condition with virEventRegisterDefaultImpl, we'll do unrefer action in two
> places:
> >
> > 1.   virEventRunDefaultImpl  ->virEventPollRunOnce
> ->virEventRunDefaultImpl  ->virEventPollRunOnce
> ->virEventPollCleanupHandles -> virNetSocketEventFree -> virObjectUnref
> >
> > 2.   virConnectClose ->virObjectUnref ->virConnectDispose
> ->remoteConnectClose  ->doRemoteClose  ->virNetClientClose
> ->virNetClientCloseInternal -> virNetClientIOEventLoopPassTheBuck ->
> virNetClientCloseLocked -> virObjectUnref
> >
> > When event dealing loop is alive, everything work fine, we'll get sock
> refers 2
> > after virConnectOpenAuth and unrefer twice in two places above after
> virConnectClose.
> > But If some accidents happened like we quit event dealing loop or
> virEventRunDefaultImpl
> > suspended, then sock refers will never be zero, and handlers will never be
> freed.
> 
> Do not stop running the event loop. It is a requirement of the API that once
> you have
> called  virEventRegisterDefaultImpl, you *must* always execute the event
> loop forever
> after until your application exits.
> 
> 
> > I consider to add something like virEventDeregisterDefaultImpl to set
> addHandleImpl and his buddy NULL. Apparently it is far away from fixing it
> completely.
> 
> No, stopping or de-registering event loop impls is a non-goal.
> 
> > At last I have two questions here:
> >
> >
> > 1.   Is it a problem that after virEventRegisterDefaultImpl we have
> handlers leaked?
> 
> There are no handlers leaked if you run the event loop, which is a
> requirement
> of the API.

Well, is there any tiny chance that the event loop stops?
If that happens, the handle leak will affect the whole host OS; eventually 
there may be no file descriptors left to use.
Is that what we expect?
Can we do something to reduce the impact, even if something that should be 
impossible happens?

Thanks a lot.
> 
> 
> Daniel
> --
> |: http://berrange.com  -o-
> http://www.flickr.com/photos/dberrange/ :|
> |: http://libvirt.org  -o-
> http://virt-manager.org :|
> |: http://autobuild.org   -o-
> http://search.cpan.org/~danberr/ :|
> |: http://entangle-photo.org   -o-
> http://live.gnome.org/gtk-vnc :|

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] Is it a problem that after virEventRegisterDefaultImpl we have handlers leaked

2013-09-02 Thread Wangyufei (A)
Hello,
When I ran the event-test program compiled from event-test.c, I found a 
problem: after virEventRegisterDefaultImpl, if I call virConnectOpenAuth and 
then virConnectClose, the socket and pipe handles opened by virConnectOpenAuth 
are leaked after virConnectClose. So I did some analysis, and I found the 
following:

Assume that there is only one connection here.

int virNetSocketAddIOCallback(virNetSocketPtr sock,
                              int events,
                              virNetSocketIOFunc func,
                              void *opaque,
                              virFreeCallback ff)
{
    int ret = -1;

    virObjectRef(sock); /* take an extra reference; the refcount is now 2 */
    virObjectLock(sock);
    if (sock->watch > 0) {
        VIR_DEBUG("Watch already registered on socket %p", sock);
        goto cleanup;
    }

    /* once virEventRegisterDefaultImpl has been called, virEventAddHandle
     * returns a non-negative watch, so we do not go to cleanup */
    if ((sock->watch = virEventAddHandle(sock->fd,
                                         events,
                                         virNetSocketEventHandle,
                                         sock,
                                         virNetSocketEventFree)) < 0) {
        VIR_DEBUG("Failed to register watch on socket %p", sock);
        goto cleanup;
    }
    sock->func = func;
    sock->opaque = opaque;
    sock->ff = ff;

    ret = 0;

cleanup:
    virObjectUnlock(sock);
    if (ret != 0)
        virObjectUnref(sock); /* only reached on failure (e.g. no event impl
                               * registered); drops the refcount back to 1 */
    return ret;
}

With virEventRegisterDefaultImpl in place, the unref happens in two places:

1.   virEventRunDefaultImpl -> virEventPollRunOnce -> virEventPollCleanupHandles 
-> virNetSocketEventFree -> virObjectUnref

2.   virConnectClose -> virObjectUnref -> virConnectDispose -> remoteConnectClose 
-> doRemoteClose -> virNetClientClose -> virNetClientCloseInternal 
-> virNetClientIOEventLoopPassTheBuck -> virNetClientCloseLocked -> virObjectUnref

While the event loop is running, everything works fine: the sock refcount is 2 
after virConnectOpenAuth, and it is dropped twice, in the two places above, 
after virConnectClose. But if something unexpected happens, for example we 
quit the event loop or virEventRunDefaultImpl gets stuck, the sock refcount 
never reaches zero and the handles are never freed.
In a particular scenario with two threads, one running the event loop and the 
other calling virConnectOpenAuth and virConnectClose ceaselessly, we get into 
big trouble: the available handles are eaten up quickly.

I am considering adding something like virEventDeregisterDefaultImpl to set 
addHandleImpl and its companions to NULL. Obviously that is far from a 
complete fix.

Finally, I have two questions:


1.   Is it a problem that handles are leaked after virEventRegisterDefaultImpl?

2.   If it is a problem, does anybody have a good idea to help me out?

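For reference, a minimal sketch of the usage pattern the API requires (along 
the lines of libvirt's event-test example): register the default impl once, 
then run the loop forever in a dedicated thread, so every handle eventually 
gets cleaned up.

/* loop.c - minimal event-loop pattern sketch.
 * Build: gcc loop.c -o loop -lpthread -lvirt */
#include <pthread.h>
#include <libvirt/libvirt.h>

static void *
event_loop(void *opaque)
{
    (void)opaque;
    /* Must keep running until the process exits. */
    while (virEventRunDefaultImpl() >= 0)
        ;
    return NULL;
}

int main(void)
{
    pthread_t tid;
    virConnectPtr conn;

    if (virEventRegisterDefaultImpl() < 0)
        return 1;
    if (pthread_create(&tid, NULL, event_loop, NULL) != 0)
        return 1;

    /* Connections can now be opened and closed freely; their handles are
     * released only because the loop above keeps dispatching callbacks. */
    if ((conn = virConnectOpen("qemu:///system")))
        virConnectClose(conn);

    pthread_join(tid, NULL);   /* in a real app: run until process exit */
    return 0;
}
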

Best Regards,
-WangYufei

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list

[libvirt] [PATCH] Fix losing the pidfile string of struct _qemuDomainObjPrivate after libvirtd service restart

2013-08-24 Thread Wangyufei (A)
*src/qemu/qemu_domain.c (qemuDomainObjPrivateXMLFormat): Save the pidfile 
string into the vm's status file.
*src/qemu/qemu_domain.c (qemuDomainObjPrivateXMLParse): Load the pidfile path 
saved in the vm's status file back into the pidfile string.

Signed-off-by: Xu Chao 
---
src/qemu/qemu_domain.c | 8 
1 file changed, 8 insertions(+)

diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index 7f4d17d..3c792ab 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -294,6 +294,7 @@ qemuDomainObjPrivateXMLFormat(virBufferPtr buf, void *data)
                           virDomainChrTypeToString(priv->monConfig->type));
     }
+    virBufferEscapeString(buf, "  <pid path='%s'/>\n", priv->pidfile);
     if (priv->nvcpupids) {
         size_t i;
@@ -398,6 +399,13 @@ qemuDomainObjPrivateXMLParse(xmlXPathContextPtr ctxt, void *data)
         goto error;
     }
+    if (!(priv->pidfile =
+          virXPathString("string(./pid[1]/@path)", ctxt))) {
+        virReportError(VIR_ERR_INTERNAL_ERROR,
+                       "%s", _("no pidfile path"));
+        goto error;
+    }
+
     n = virXPathNodeSet("./vcpus/vcpu", ctxt, &nodes);
     if (n < 0)
         goto error;
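
One caveat worth noting here (my observation, not part of the posted patch): 
status files written by an older libvirtd contain no pid element, so failing 
hard in the parse path would break reconnecting to VMs started before an 
upgrade. A more tolerant sketch:

/* Sketch: treat a missing <pid> element as non-fatal, for status files
 * written by older libvirtd versions. */
priv->pidfile = virXPathString("string(./pid[1]/@path)", ctxt);
if (!priv->pidfile)
    VIR_WARN("no pidfile path in status file, continuing without it");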

Best Regards,
-WangYufei

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list

[libvirt] Is it a problem that "a vm has been shutdown, but its pidfile has not been removed"

2013-08-22 Thread Wangyufei (A)
Hello,
when I do the following steps:

1.  start a vm

2.  restart libvirtd service

3.  shutdown the vm
I found that the pidfile associated with the vm was not removed.

Analyzing the related code, I found that when the libvirtd service restarts, 
the vm's _virDomainObj structure (named dom here) is re-established from the 
vm's state file or config file, but its privateData.pidfile member is never 
reassigned. (By the way, I am using the qemu driver.)

So when the vm is shut down, its pidfile is not removed, because 
privateData.pidfile is NULL.
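
For context, priv->pidfile is normally built only once, on the domain-start 
path (roughly as below, simplified from qemuProcessStart); nothing equivalent 
runs on the reconnect path after a daemon restart, which is why the member 
stays NULL:

/* Simplified from qemuProcessStart(): the one place priv->pidfile is set. */
if (!(priv->pidfile = virPidFileBuildPath(cfg->stateDir, vm->def->name))) {
    virReportSystemError(errno,
                         "%s", _("Failed to build pidfile path."));
    goto cleanup;
}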

My questions are:

1.  Do you think it is a problem that a vm has been shut down but its pidfile 
has not been removed?

2.  Do you think it is expected that the value of privateData.pidfile is lost 
after a libvirtd restart?

If it is a bug, I will submit a patch to fix it.

For your information, here is the related code in libvirt that removes the 
vm's pidfile:
    if (priv->pidfile &&
        unlink(priv->pidfile) < 0 &&
        errno != ENOENT)
        VIR_WARN("Failed to remove PID file for %s: %s",
                 vm->def->name, virStrerror(errno, ebuf, sizeof(ebuf)));

Best Regards,
-WangYufei

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list

[libvirt] [PATCH] qemu: Delete sockets which act as UNIX domain socket servers

2013-07-18 Thread Wangyufei (A)
When I shut down a vm, I found that sockets which act as UNIX domain socket 
servers were not deleted. With the following code added, it works out.

Signed-off-by: WangYufei 
---
src/qemu/qemu_process.c |7 +++
1 files changed, 7 insertions(+), 0 deletions(-)

diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 3d5e8f6..e794f37 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -4086,6 +4086,13 @@ void qemuProcessStop(virQEMUDriverPtr driver,
         priv->monConfig = NULL;
     }
+    /* remove sockets which act as UNIX domain socket servers */
+    for (i = 0; i < vm->def->nchannels; i++) {
+        if ((vm->def->channels[i]->source.type == VIR_DOMAIN_CHR_TYPE_UNIX) &&
+            vm->def->channels[i]->source.data.nix.listen)
+            unlink(vm->def->channels[i]->source.data.nix.path);
+    }
+
     /* shut it off for sure */
     ignore_value(qemuProcessKill(vm,
                                  VIR_QEMU_PROCESS_KILL_FORCE|
--
1.7.3.1.msysgit.0
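
A note on scope (my observation, not part of the patch): the loop only covers 
channel devices, but serial, parallel and console devices can also be 
listen-mode UNIX sockets, so the same treatment would apply to them, along 
these lines:

/* Sketch: the same cleanup generalized to serial devices; the parallel
 * and console arrays would follow the same pattern. */
for (i = 0; i < vm->def->nserials; i++) {
    virDomainChrDefPtr chr = vm->def->serials[i];
    if (chr->source.type == VIR_DOMAIN_CHR_TYPE_UNIX &&
        chr->source.data.nix.listen)
        unlink(chr->source.data.nix.path);
}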


Best Regards,

-WangYufei

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list

[libvirt] Why does libvirt not unlink qemu-ga or any other socket in function qemuProcessStop()

2013-07-17 Thread Wangyufei (A)
Hello,
I have defined a vm with two UNIX domain sockets. One is for qemu-ga and the 
other is for my service. These two devices will act as UNIX domain socket 
server. They are automatically generated by qemu. But when the vm was dead, 
these two sockets were not deleted. I find that libvirt will unlink vm's 
monitor socket(/var/lib/libvirt/qemu) in function qemuProcessStop(). Why does 
libvirt not unlink qemu-ga or any other socket in function qemuProcessStop()?

The domain XML is given below.


[The domain XML markup was stripped by the list archive; only the element text 
survived. The recoverable details are: domain name kvm-1, memory and 
currentMemory 4194304 KiB, 4 vCPUs, os type hvm, lifecycle actions 
destroy / restart / restart (presumably on_poweroff / on_reboot / on_crash), 
emulator /usr/bin/qemu-kvm, and the device list including the two listen-mode 
UNIX-socket channels described above.]
Thank you!

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list

[libvirt] Bug 980449 - resource leak about event

2013-07-02 Thread Wangyufei (A)
Hi All,

I have filed the following bug for the libvirt event issue:

Bug 980449 - resource leak 
about event

Thanks,

WangYufei

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list