Re: [libvirt] vm snapshot multi-disk

2015-07-13 Thread Marcus
I'm guessing this is an issue with qemu, as the code indicates that libvirt
relies on the qemu monitor when a VM is active, via qemuMonitorDeleteSnapshot.
If I'm barking up the wrong tree, someone please let me know. Otherwise
I'll follow up with qemu-devel.

On Mon, Jul 13, 2015 at 2:42 PM, Marcus  wrote:

> Oh, I almost forgot to mention the versions:
>
> libvirt 1.2.8-16.0.1.el7_1.2.x86_64
>
> qemu 2.1.2-23.el7_1.1.x86_64
>
>
> Also, I'm unclear whether the domain snapshot feature is orchestrated by
> libvirt, or simply handed off to qemu to take care of.
> Please forgive me if this is a qemu issue.
>
> On Mon, Jul 13, 2015 at 2:35 PM, Marcus  wrote:
>
>> Hi all,
>>
>> I've recently been toying with VM snapshots, and have run into an
>> issue. Given a VM with multiple disks, it seems a snapshot-create followed
>> by a snapshot-delete will only remove the qcow2 snapshot for the first disk
>> (or perhaps just the disk that contains the memory), not all of the disk
>> snapshots it created. Is this something people are aware of?
>>
>> In searching around, I found a bug report where snapshot-creates
>> would fail due to the qcow2 snapshot IDs being inconsistent. That looks
>> like it is patched for qemu 2.4 (
>> http://lists.nongnu.org/archive/html/qemu-devel/2015-03/msg04963.html).
>> This bug is not the same, but it would trigger that one by leaving behind
>> IDs that are inconsistent between member disks.
>>
>> # virsh snapshot-create 7
>>
>> Domain snapshot 1436792720 created
>>
>>
>> # virsh snapshot-list 7
>>
>>  Name                 Creation Time             State
>> ------------------------------------------------------------
>>
>>  1436792720           2015-07-13 06:05:20 -0700 running
>>
>>
>> # virsh domblklist 7
>>
>> Target     Source
>> ------------------------------------------------
>>
>> vda
>> /mnt/2a270ef3-f389-37a4-942f-380bed9f70aa/e4d6e885-1382-40bc-890b-ad9c8b51a7a5
>>
>> vdb
>> /mnt/2a270ef3-f389-37a4-942f-380bed9f70aa/7033e4c6-5f59-4325-b7e0-ae191e12e86c
>>
>>
>> # qemu-img snapshot -l
>> /mnt/2a270ef3-f389-37a4-942f-380bed9f70aa/e4d6e885-1382-40bc-890b-ad9c8b51a7a5
>>
>> Snapshot list:
>>
>> ID        TAG                 VM SIZE                DATE       VM CLOCK
>>
>> 1         1436792720              173M 2015-07-13 06:05:20   00:01:10.938
>>
>>
>> # qemu-img snapshot -l
>> /mnt/2a270ef3-f389-37a4-942f-380bed9f70aa/7033e4c6-5f59-4325-b7e0-ae191e12e86c
>>
>> Snapshot list:
>>
>> ID        TAG                 VM SIZE                DATE       VM CLOCK
>>
>> 1         1436792720                 0 2015-07-13 06:05:20   00:01:10.938
>>
>>
>> # virsh snapshot-delete 7 1436792720
>>
>> Domain snapshot 1436792720 deleted
>>
>>
>> # qemu-img snapshot -l
>> /mnt/2a270ef3-f389-37a4-942f-380bed9f70aa/e4d6e885-1382-40bc-890b-ad9c8b51a7a5
>>
>> # qemu-img snapshot -l
>> /mnt/2a270ef3-f389-37a4-942f-380bed9f70aa/7033e4c6-5f59-4325-b7e0-ae191e12e86c
>>
>> Snapshot list:
>>
>> ID        TAG                 VM SIZE                DATE       VM CLOCK
>>
>> 1         1436792720                 0 2015-07-13 06:05:20   00:01:10.938
>>
>
>
--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list

Re: [libvirt] [PATCH 2/3] Qemu: add CMT support

2015-07-13 Thread Ren, Qiaowei
On Jul 7, 2015 15:51, Ren, Qiaowei wrote:
> 
> 
> On Jul 6, 2015 14:49, Prerna wrote:
> 
>> On Sun, Jul 5, 2015 at 5:13 PM, Qiaowei Ren wrote:
>> 
>> 
>>  One RFC in
>>  https://www.redhat.com/archives/libvir-list/2015-June/msg01509.html
>> 
>>  CMT (Cache Monitoring Technology) can be used to measure the
>>  usage of cache by a VM running on the host. This patch extends
>>  the bulk stats API (virDomainListGetStats) to add this
>>  field. Applications based on libvirt can use this API to retrieve
>>  the cache usage of a VM. Because the CMT implementation in the
>>  Linux kernel is based on the perf mechanism, this patch enables a
>>  perf event for CMT when a VM is created and disables it when the
>>  VM is destroyed.
>> 
>> 
>> 
>> 
>> Hi Ren,
>> 
>> One query wrt this implementation. I see you make a perf ioctl to
>> gather CMT stats each time the stats API is invoked.
>> 
>> If the CMT stats are exposed by a hardware counter, then this implies
>> logging on a per-cpu (or per-socket ???) basis.
>> 
>> This also implies that the value read will vary as the CPU (or socket)
>> on which it is being called changes.
>> 
>> 
>> Now, with this background, if we need real-world stats on a VM, we need
>> this perf ioctl executed on all CPUs/ sockets on which the VM ran.
>> Also, once done, we will need to aggregate results from each of these
>> sources.
>> 
>> 
>> In this implementation, I am missing this -- there seems no control
>> over which physical CPU the libvirt worker thread will run and collect
>> the perf data from. Data collected from this implementation might not
>> accurately model the system state.
>> 
>> I _think_ libvirt currently has no way of directing a worker thread to
>> collect stats from a given CPU -- if we do, I would be happy to learn
>> about it :)
>> 
> 
> Prerna, thanks for your reply. I checked the CMT implementation in the
> kernel, and noticed that the series implements a new ->count() callback
> in the pmu driver which can aggregate the results from each CPU when the
> perf type is PERF_TYPE_INTEL_CQM. The following is the link for the patch:
> https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=bfe1fcd2688f557a6b6a88f59ea7619228728bd7
> 
> So I guess this patch just needs to set the right perf type and "cpu=-1".
> Do you think this is OK?
> 

Peter, according to your feedback about my RFC, I updated our implementation 
and submitted this patch series. Could you help review them?

Thanks,
Qiaowei
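The cpu=-1 behaviour discussed above can be illustrated with a toy model (purely illustrative Python, not a real perf interface; the per-socket values are made up): reading from a single CPU/socket only sees the local occupancy, while the kernel's ->count() for PERF_TYPE_INTEL_CQM sums across sockets.

```python
# Toy model of CMT cache-occupancy readings for one VM. The per-socket
# values are invented; real values come from the RDT hardware counters.
per_socket_occupancy = {0: 6 * 1024 ** 2, 1: 2 * 1024 ** 2}

def read_local(socket_id):
    """What a naive per-CPU read sees: only the local socket's occupancy."""
    return per_socket_occupancy[socket_id]

def read_aggregated():
    """Model of the kernel's ->count() for cpu=-1: sum over all sockets."""
    return sum(per_socket_occupancy.values())
```

Depending on which socket the libvirt worker thread happens to run on, read_local() returns 6 MiB or 2 MiB, while read_aggregated() always returns the 8 MiB total — which is why setting the right perf type together with cpu=-1 matters.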


--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list

Re: [libvirt] vm snapshot multi-disk

2015-07-13 Thread Marcus
Oh, I almost forgot to mention the versions:

libvirt 1.2.8-16.0.1.el7_1.2.x86_64

qemu 2.1.2-23.el7_1.1.x86_64


Also, I'm unclear whether the domain snapshot feature is orchestrated by
libvirt, or simply handed off to qemu to take care of.
Please forgive me if this is a qemu issue.

On Mon, Jul 13, 2015 at 2:35 PM, Marcus  wrote:

> Hi all,
>
> I've recently been toying with VM snapshots, and have run into an
> issue. Given a VM with multiple disks, it seems a snapshot-create followed
> by a snapshot-delete will only remove the qcow2 snapshot for the first disk
> (or perhaps just the disk that contains the memory), not all of the disk
> snapshots it created. Is this something people are aware of?
>
> In searching around, I found a bug report where snapshot-creates would
> fail due to the qcow2 snapshot IDs being inconsistent. That looks like it
> is patched for qemu 2.4 (
> http://lists.nongnu.org/archive/html/qemu-devel/2015-03/msg04963.html).
> This bug is not the same, but it would trigger that one by leaving behind
> IDs that are inconsistent between member disks.
>
> # virsh snapshot-create 7
>
> Domain snapshot 1436792720 created
>
>
> # virsh snapshot-list 7
>
>  Name                 Creation Time             State
> ------------------------------------------------------------
>
>  1436792720           2015-07-13 06:05:20 -0700 running
>
>
> # virsh domblklist 7
>
> Target     Source
> ------------------------------------------------
>
> vda
> /mnt/2a270ef3-f389-37a4-942f-380bed9f70aa/e4d6e885-1382-40bc-890b-ad9c8b51a7a5
>
> vdb
> /mnt/2a270ef3-f389-37a4-942f-380bed9f70aa/7033e4c6-5f59-4325-b7e0-ae191e12e86c
>
>
> # qemu-img snapshot -l
> /mnt/2a270ef3-f389-37a4-942f-380bed9f70aa/e4d6e885-1382-40bc-890b-ad9c8b51a7a5
>
> Snapshot list:
>
> ID        TAG                 VM SIZE                DATE       VM CLOCK
>
> 1         1436792720              173M 2015-07-13 06:05:20   00:01:10.938
>
>
> # qemu-img snapshot -l
> /mnt/2a270ef3-f389-37a4-942f-380bed9f70aa/7033e4c6-5f59-4325-b7e0-ae191e12e86c
>
> Snapshot list:
>
> ID        TAG                 VM SIZE                DATE       VM CLOCK
>
> 1         1436792720                 0 2015-07-13 06:05:20   00:01:10.938
>
>
> # virsh snapshot-delete 7 1436792720
>
> Domain snapshot 1436792720 deleted
>
>
> # qemu-img snapshot -l
> /mnt/2a270ef3-f389-37a4-942f-380bed9f70aa/e4d6e885-1382-40bc-890b-ad9c8b51a7a5
>
> # qemu-img snapshot -l
> /mnt/2a270ef3-f389-37a4-942f-380bed9f70aa/7033e4c6-5f59-4325-b7e0-ae191e12e86c
>
> Snapshot list:
>
> ID        TAG                 VM SIZE                DATE       VM CLOCK
>
> 1         1436792720                 0 2015-07-13 06:05:20   00:01:10.938
>
--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list

[libvirt] vm snapshot multi-disk

2015-07-13 Thread Marcus
Hi all,

I've recently been toying with VM snapshots, and have run into an
issue. Given a VM with multiple disks, it seems a snapshot-create followed
by a snapshot-delete will only remove the qcow2 snapshot for the first disk
(or perhaps just the disk that contains the memory), not all of the disk
snapshots it created. Is this something people are aware of?

In searching around, I found a bug report where snapshot-creates would
fail due to the qcow2 snapshot IDs being inconsistent. That looks like it
is patched for qemu 2.4 (
http://lists.nongnu.org/archive/html/qemu-devel/2015-03/msg04963.html).
This bug is not the same, but it would trigger that one by leaving behind
IDs that are inconsistent between member disks.

# virsh snapshot-create 7

Domain snapshot 1436792720 created


# virsh snapshot-list 7

 Name                 Creation Time             State
------------------------------------------------------------

 1436792720           2015-07-13 06:05:20 -0700 running


# virsh domblklist 7

Target     Source
------------------------------------------------

vda
/mnt/2a270ef3-f389-37a4-942f-380bed9f70aa/e4d6e885-1382-40bc-890b-ad9c8b51a7a5

vdb
/mnt/2a270ef3-f389-37a4-942f-380bed9f70aa/7033e4c6-5f59-4325-b7e0-ae191e12e86c


# qemu-img snapshot -l
/mnt/2a270ef3-f389-37a4-942f-380bed9f70aa/e4d6e885-1382-40bc-890b-ad9c8b51a7a5

Snapshot list:

ID        TAG                 VM SIZE                DATE       VM CLOCK

1         1436792720              173M 2015-07-13 06:05:20   00:01:10.938


# qemu-img snapshot -l
/mnt/2a270ef3-f389-37a4-942f-380bed9f70aa/7033e4c6-5f59-4325-b7e0-ae191e12e86c

Snapshot list:

ID        TAG                 VM SIZE                DATE       VM CLOCK

1         1436792720                 0 2015-07-13 06:05:20   00:01:10.938


# virsh snapshot-delete 7 1436792720

Domain snapshot 1436792720 deleted


# qemu-img snapshot -l
/mnt/2a270ef3-f389-37a4-942f-380bed9f70aa/e4d6e885-1382-40bc-890b-ad9c8b51a7a5

# qemu-img snapshot -l
/mnt/2a270ef3-f389-37a4-942f-380bed9f70aa/7033e4c6-5f59-4325-b7e0-ae191e12e86c

Snapshot list:

ID        TAG                 VM SIZE                DATE       VM CLOCK

1         1436792720                 0 2015-07-13 06:05:20   00:01:10.938
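One way to spot the leftover internal snapshots is to compare the snapshot tags that qemu-img reports for each disk of the domain. A hedged Python sketch (hypothetical helpers; the parsing assumes the `qemu-img snapshot -l` output layout shown in the transcript):

```python
import re

def parse_snapshot_tags(qemu_img_output):
    """Extract snapshot tags from `qemu-img snapshot -l <image>` output.
    Data rows start with a numeric snapshot ID; the tag is the next field."""
    tags = set()
    for line in qemu_img_output.splitlines():
        m = re.match(r'\s*(\d+)\s+(\S+)\s', line)
        if m:
            tags.add(m.group(2))
    return tags

def leftover_snapshots(per_disk_output):
    """Given {disk: qemu-img output}, return tags not present on every disk
    -- i.e. snapshots a delete removed from some disks but not others."""
    tag_sets = [parse_snapshot_tags(out) for out in per_disk_output.values()]
    return set.union(*tag_sets) - set.intersection(*tag_sets)
```

Running this over the per-disk outputs above would flag tag 1436792720 as a leftover, since after the snapshot-delete it remains only on vdb.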
--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list

Re: [libvirt] [PATCH 0/9] Add sysfs_prefix to nodeinfo.c API's

2015-07-13 Thread John Ferlan


On 07/13/2015 12:42 PM, Andrea Bolognani wrote:
> On Fri, 2015-07-10 at 17:05 +0200, Andrea Bolognani wrote:
>>
>> Patches 1-8 look good to me. Great job splitting the changes in
>> such a nice way! I've commented on patch 7 in a separate mail.
>>
>> I'll look at patch 9 on Monday.
> 
> Patch 9 looks good as well, so ACK series with the previously
> mentioned nit fixed.
> 
> I've also just sent a patch introducing a test case meant to
> make sure patch 9 is actually fixing the issue :)
> 
> Hopefully it will make it to the list despite being biggish.
> 
> Cheers.
> 

It did - thanks for the collaboration! I've included it into this series
as patch10 and pushed with the adjustment to patch 7.

Thanks!

John

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [PATCH] tests: Add nodeinfo test for non-present CPUs

2015-07-13 Thread John Ferlan


On 07/13/2015 12:37 PM, Andrea Bolognani wrote:
> Some of the possible CPUs in a system might not be present, eg. they
> might be defective or might have been deconfigured from the ASM console
> in a Power system. Due to this fact, Linux keeps track of what CPUs are
> possible and what are present separately.
> 
> This test uses the data from a system where not all the possible CPUs
> are present to make sure libvirt handles this situation correctly.
> ---
> 
> This patch must be applied on top of John's series of nodeinfo
> refactors, especially
> 
>   [PATCH 9/9] nodeinfo: fix to parse present cpus rather than possible cpus
> 

ACK - already included as part of the other series and pushed.

John

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [PATCH v3] nodeinfo: fix to parse present cpus rather than possible cpus

2015-07-13 Thread John Ferlan


On 06/26/2015 06:27 PM, Kothapally Madhu Pavan wrote:
> Currently we are parsing all the possible cpus to get the
> nodeinfo. This fix will perform a check for present cpus
> before parsing.
> 
> Signed-off-by: Kothapally Madhu Pavan 
> 
> 
> ---
>  src/nodeinfo.c |   11 +++
>  1 file changed, 11 insertions(+)
> 

So along with a test that Andrea Bolognani generated:

http://www.redhat.com/archives/libvir-list/2015-July/msg00517.html

and the sysfs_path adjustments I made:

http://www.redhat.com/archives/libvir-list/2015-July/msg00278.html

This is now pushed.

Thanks,

John
> diff --git a/src/nodeinfo.c b/src/nodeinfo.c
> index 2fafe2d..5689c9b 100644
> --- a/src/nodeinfo.c
> +++ b/src/nodeinfo.c
> @@ -43,6 +43,7 @@
>  #include "c-ctype.h"
>  #include "viralloc.h"
>  #include "nodeinfopriv.h"
> +#include "nodeinfo.h"
>  #include "physmem.h"
>  #include "virerror.h"
>  #include "count-one-bits.h"
> @@ -418,6 +419,7 @@ virNodeParseNode(const char *node,
>  int processors = 0;
>  DIR *cpudir = NULL;
>  struct dirent *cpudirent = NULL;
> +virBitmapPtr present_cpumap = NULL;
>  int sock_max = 0;
>  cpu_set_t sock_map;
>  int sock;
> @@ -438,12 +440,17 @@ virNodeParseNode(const char *node,
>  goto cleanup;
>  }
>  
> +present_cpumap = nodeGetPresentCPUBitmap();
> +
>  /* enumerate sockets in the node */
>  CPU_ZERO(&sock_map);
>  while ((direrr = virDirRead(cpudir, &cpudirent, node)) > 0) {
>  if (sscanf(cpudirent->d_name, "cpu%u", &cpu) != 1)
>  continue;
>  
> +if (present_cpumap && !(virBitmapIsSet(present_cpumap, cpu)))
> +continue;
> +
>  if ((online = virNodeGetCpuValue(node, cpu, "online", 1)) < 0)
>  goto cleanup;
>  
> @@ -477,6 +484,9 @@ virNodeParseNode(const char *node,
>  if (sscanf(cpudirent->d_name, "cpu%u", &cpu) != 1)
>  continue;
>  
> +if (present_cpumap && !(virBitmapIsSet(present_cpumap, cpu)))
> +continue;
> +
>  if ((online = virNodeGetCpuValue(node, cpu, "online", 1)) < 0)
>  goto cleanup;
>  
> @@ -537,6 +547,7 @@ virNodeParseNode(const char *node,
>  ret = -1;
>  }
>  VIR_FREE(core_maps);
> +virBitmapFree(present_cpumap);
>  
>  return ret;
>  }
> 
> --
> libvir-list mailing list
> libvir-list@redhat.com
> https://www.redhat.com/mailman/listinfo/libvir-list
> 
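The logic the pushed patch adds — enumerating the cpuN entries from sysfs but skipping CPUs that are not set in the present bitmap — can be sketched in Python (hypothetical helper names; the real code lives in virNodeParseNode):

```python
import re

def filter_present_cpus(cpu_dir_entries, present_cpus):
    """Keep only 'cpuN' directory entries whose N is in the present set,
    mirroring the virBitmapIsSet() check the patch adds to both loops."""
    kept = []
    for name in cpu_dir_entries:
        m = re.match(r'cpu(\d+)$', name)
        if m is None:
            continue          # e.g. 'cpufreq', 'online' are not CPU dirs
        if int(m.group(1)) in present_cpus:
            kept.append(name)
    return kept
```

This is what keeps a deconfigured CPU (possible but not present) from being counted toward sockets, cores, and threads in the nodeinfo totals.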

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [PATCH 2/2] network: Add another collision check into networkCheckRouteCollision

2015-07-13 Thread John Ferlan


On 07/09/2015 10:09 AM, Martin Kletzander wrote:
> The comment above that function says: "This function can be a lot more
> exhaustive, ...", so let's be.
> 
> Check for collisions between routes in the system and static routes
> being added explicitly from the <route> element of the network XML.
> 
> Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1094205
> 
> Signed-off-by: Martin Kletzander 
> ---
> Laine suggested moving networkCheckRouteCollision() into
> networkAddRouteToBridge() and I haven't done that simply because we
> can check it where it is now.  It would also mean parsing the file
> multiple times (which we don't want to do anyway) or storing the
> results, and I don't think it's worth either the time or the space
> complexity.
> 
>  src/network/bridge_driver_linux.c | 29 +
>  1 file changed, 29 insertions(+)
> 
> diff --git a/src/network/bridge_driver_linux.c b/src/network/bridge_driver_linux.c
> index e394dafb2216..66e5902a7b6f 100644
> --- a/src/network/bridge_driver_linux.c
> +++ b/src/network/bridge_driver_linux.c
> @@ -69,6 +69,7 @@ int networkCheckRouteCollision(virNetworkDefPtr def)
>  char iface[17], dest[128], mask[128];
>  unsigned int addr_val, mask_val;
>  virNetworkIpDefPtr ipdef;
> +virNetworkRouteDefPtr routedef;
>  int num;
>  size_t i;
> 
> @@ -123,6 +124,34 @@ int networkCheckRouteCollision(virNetworkDefPtr def)
>  goto out;
>  }
>  }
> +
> +for (i = 0;
> + (routedef = virNetworkDefGetRouteByIndex(def, AF_INET, i));
> + i++) {
> +
> +virSocketAddr r_mask, r_addr;
> +virSocketAddrPtr tmp_addr = virNetworkRouteDefGetAddress(routedef);
> +int r_prefix = virNetworkRouteDefGetPrefix(routedef);
> +
> +if (!tmp_addr ||
> +virSocketAddrMaskByPrefix(tmp_addr, r_prefix, &r_addr) < 0 ||
> +virSocketAddrPrefixToNetmask(r_prefix, &r_mask, AF_INET) < 0)
> +continue;
> +
> +if ((r_addr.data.inet4.sin_addr.s_addr == addr_val) &&
> +(r_mask.data.inet4.sin_addr.s_addr == mask_val)) {
> +char *addr_str = virSocketAddrFormat(&r_addr);
> +if (!addr_str)
> +virResetLastError();
> +virReportError(VIR_ERR_INTERNAL_ERROR,
> +   _("Route address '%s' collides with one "
> + "that's in the system already"),

Could the error message be adjusted slightly... Such as "Route
address '%s' conflicts with IP address for '%s'" (where I'm assuming the
second %s is 'iface')...  I guess some way to help point out which def is
going to be causing the problem for this def.

I also assume that the error occurs from the bz regardless of order now,
right?

Given the assumptions and noting that I'm not the expert here, both
patches seem fine to me with an adjustment to the error message.

ACK,

John

> +   NULLSTR(addr_str));
> +VIR_FREE(addr_str);
> +ret = -1;
> +goto out;
> +}
> +}
>  }
> 
>   out:
> --
> 2.4.5
> 
> --
> libvir-list mailing list
> libvir-list@redhat.com
> https://www.redhat.com/mailman/listinfo/libvir-list
> 
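The check the patch performs — mask the static route's address by its prefix, convert the prefix to a netmask, and compare both against each kernel route — maps naturally onto Python's ipaddress module. A sketch under the assumption that the kernel routes have already been decoded from /proc/net/route into dotted-quad strings:

```python
import ipaddress

def route_collides(route_addr, route_prefix, kernel_routes):
    """Return True if route_addr/route_prefix equals any kernel route.

    kernel_routes: iterable of (destination, netmask) dotted-quad pairs,
    assumed already converted from the hex form in /proc/net/route.
    """
    # strict=False masks off host bits, like virSocketAddrMaskByPrefix()
    net = ipaddress.ip_network('%s/%d' % (route_addr, route_prefix),
                               strict=False)
    for dest, mask in kernel_routes:
        if net == ipaddress.ip_network('%s/%s' % (dest, mask), strict=False):
            return True
    return False
```

Equality of the two ip_network objects corresponds to the C code's comparison of the masked s_addr and netmask values.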

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] PING adding RDMA and tx-udp_tnl-segmentation NIC capabilities

2015-07-13 Thread Moshe Levi
Hi,

Can you please review the "nodedev: add RDMA and tx-udp_tnl-segmentation NIC
capabilities" patch [1]?
[1] - http://www.redhat.com/archives/libvir-list/2015-June/msg00921.html

Thanks,
Moshe Levi.
--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list

Re: [libvirt] [PATCH 1/2] configure: Move Virtuozzo checks to a specific module

2015-07-13 Thread Dmitry Guryanov

On 07/10/2015 05:32 PM, Michal Privoznik wrote:

Eventually, every driver will be moved to a special module.
But for today the winner is the Virtuozzo driver.


Thanks, Michal, ACKed and pushed.


Signed-off-by: Michal Privoznik 
---
  configure.ac | 24 ++--
  m4/virt-driver-vz.m4 | 46 ++
  2 files changed, 48 insertions(+), 22 deletions(-)
  create mode 100644 m4/virt-driver-vz.m4

diff --git a/configure.ac b/configure.ac
index 6533b88..71c3bb6 100644
--- a/configure.ac
+++ b/configure.ac
@@ -562,10 +562,6 @@ AC_ARG_WITH([hyperv],
[AS_HELP_STRING([--with-hyperv],
  [add Hyper-V support @<:@default=check@:>@])])
  m4_divert_text([DEFAULTS], [with_hyperv=check])
-AC_ARG_WITH([vz],
-  [AS_HELP_STRING([--with-vz],
-[add Virtuozzo support @<:@default=check@:>@])])
-m4_divert_text([DEFAULTS], [with_vz=check])
  AC_ARG_WITH([test],
[AS_HELP_STRING([--with-test],
  [add test driver support @<:@default=yes@:>@])])
@@ -1081,23 +1077,7 @@ dnl
  dnl Checks for the Parallels driver
  dnl
  
-

-if test "$with_vz" = "yes" ||
-   test "$with_vz" = "check"; then
-PKG_CHECK_MODULES([PARALLELS_SDK], [parallels-sdk],
-  [PARALLELS_SDK_FOUND=yes], [PARALLELS_SDK_FOUND=no])
-
-if test "$with_vz" = "yes" && test "$PARALLELS_SDK_FOUND" = "no"; then
-AC_MSG_ERROR([Parallels Virtualization SDK is needed to build the Parallels driver.])
-fi
-
-with_vz=$PARALLELS_SDK_FOUND
-if test "$with_vz" = "yes"; then
-AC_DEFINE_UNQUOTED([WITH_VZ], 1,
-   [whether vz driver is enabled])
-fi
-fi
-AM_CONDITIONAL([WITH_VZ], [test "$with_vz" = "yes"])
+LIBVIRT_DRIVER_CHECK_VZ
  
  dnl

  dnl Checks for bhyve driver
@@ -2833,7 +2813,7 @@ AC_MSG_NOTICE([  LXC: $with_lxc])
  AC_MSG_NOTICE([ PHYP: $with_phyp])
  AC_MSG_NOTICE([  ESX: $with_esx])
  AC_MSG_NOTICE([  Hyper-V: $with_hyperv])
-AC_MSG_NOTICE([   vz: $with_vz])
+LIBVIRT_DRIVER_RESULT_VZ
  LIBVIRT_DRIVER_RESULT_BHYVE
  AC_MSG_NOTICE([ Test: $with_test])
  AC_MSG_NOTICE([   Remote: $with_remote])
diff --git a/m4/virt-driver-vz.m4 b/m4/virt-driver-vz.m4
new file mode 100644
index 000..704976e
--- /dev/null
+++ b/m4/virt-driver-vz.m4
@@ -0,0 +1,46 @@
+dnl The Virtuozzo driver
+dnl
+dnl Copyright (C) 2005-2015 Red Hat, Inc.
+dnl
+dnl This library is free software; you can redistribute it and/or
+dnl modify it under the terms of the GNU Lesser General Public
+dnl License as published by the Free Software Foundation; either
+dnl version 2.1 of the License, or (at your option) any later version.
+dnl
+dnl This library is distributed in the hope that it will be useful,
+dnl but WITHOUT ANY WARRANTY; without even the implied warranty of
+dnl MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+dnl Lesser General Public License for more details.
+dnl
+dnl You should have received a copy of the GNU Lesser General Public
+dnl License along with this library.  If not, see
+dnl <http://www.gnu.org/licenses/>.
+dnl
+
+AC_DEFUN([LIBVIRT_DRIVER_CHECK_VZ],[
+AC_ARG_WITH([vz],
+  [AS_HELP_STRING([--with-vz],
+[add Virtuozzo support @<:@default=check@:>@])])
+m4_divert_text([DEFAULTS], [with_vz=check])
+
+if test "$with_vz" = "yes" ||
+   test "$with_vz" = "check"; then
+PKG_CHECK_MODULES([PARALLELS_SDK], [parallels-sdk],
+  [PARALLELS_SDK_FOUND=yes], [PARALLELS_SDK_FOUND=no])
+
+if test "$with_vz" = "yes" && test "$PARALLELS_SDK_FOUND" = "no"; then
+AC_MSG_ERROR([Parallels Virtualization SDK is needed to build the Virtuozzo driver.])
+fi
+
+with_vz=$PARALLELS_SDK_FOUND
+if test "$with_vz" = "yes"; then
+AC_DEFINE_UNQUOTED([WITH_VZ], 1,
+   [whether vz driver is enabled])
+fi
+fi
+AM_CONDITIONAL([WITH_VZ], [test "$with_vz" = "yes"])
+])
+
+AC_DEFUN([LIBVIRT_DRIVER_RESULT_VZ],[
+AC_MSG_NOTICE([   vz: $with_vz])
+])


--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [PATCH 0/9] Add sysfs_prefix to nodeinfo.c API's

2015-07-13 Thread Andrea Bolognani
On Fri, 2015-07-10 at 17:05 +0200, Andrea Bolognani wrote:
> 
> Patches 1-8 look good to me. Great job splitting the changes in
> such a nice way! I've commented on patch 7 in a separate mail.
> 
> I'll look at patch 9 on Monday.

Patch 9 looks good as well, so ACK series with the previously
mentioned nit fixed.

I've also just sent a patch introducing a test case meant to
make sure patch 9 is actually fixing the issue :)

Hopefully it will make it to the list despite being biggish.

Cheers.

-- 
Andrea Bolognani
Software Engineer - Virtualization Team

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCH] tests: Add nodeinfo test for non-present CPUs

2015-07-13 Thread Andrea Bolognani
Some of the possible CPUs in a system might not be present, eg. they
might be defective or might have been deconfigured from the ASM console
in a Power system. Due to this fact, Linux keeps track of what CPUs are
possible and what are present separately.

This test uses the data from a system where not all the possible CPUs
are present to make sure libvirt handles this situation correctly.
---

This patch must be applied on top of John's series of nodeinfo
refactors, especially

  [PATCH 9/9] nodeinfo: fix to parse present cpus rather than possible cpus

which introduces the very fix this new test case is meant to test.

 .../linux-deconfigured-cpus/cpu/cpu0/online|  1 +
 .../linux-deconfigured-cpus/cpu/cpu1/online|  1 +
 .../linux-deconfigured-cpus/cpu/cpu10/online   |  1 +
 .../linux-deconfigured-cpus/cpu/cpu100/online  |  1 +
 .../linux-deconfigured-cpus/cpu/cpu101/online  |  1 +
 .../linux-deconfigured-cpus/cpu/cpu102/online  |  1 +
 .../linux-deconfigured-cpus/cpu/cpu103/online  |  1 +
 .../linux-deconfigured-cpus/cpu/cpu104/online  |  1 +
 .../cpu/cpu104/topology/core_id|  1 +
 .../cpu/cpu104/topology/core_siblings  |  1 +
 .../cpu/cpu104/topology/core_siblings_list |  1 +
 .../cpu/cpu104/topology/physical_package_id|  1 +
 .../cpu/cpu104/topology/thread_siblings|  1 +
 .../cpu/cpu104/topology/thread_siblings_list   |  1 +
 .../linux-deconfigured-cpus/cpu/cpu105/online  |  1 +
 .../linux-deconfigured-cpus/cpu/cpu106/online  |  1 +
 .../linux-deconfigured-cpus/cpu/cpu107/online  |  1 +
 .../linux-deconfigured-cpus/cpu/cpu108/online  |  1 +
 .../linux-deconfigured-cpus/cpu/cpu109/online  |  1 +
 .../linux-deconfigured-cpus/cpu/cpu11/online   |  1 +
 .../linux-deconfigured-cpus/cpu/cpu110/online  |  1 +
 .../linux-deconfigured-cpus/cpu/cpu111/online  |  1 +
 .../linux-deconfigured-cpus/cpu/cpu112/online  |  1 +
 .../cpu/cpu112/topology/core_id|  1 +
 .../cpu/cpu112/topology/core_siblings  |  1 +
 .../cpu/cpu112/topology/core_siblings_list |  1 +
 .../cpu/cpu112/topology/physical_package_id|  1 +
 .../cpu/cpu112/topology/thread_siblings|  1 +
 .../cpu/cpu112/topology/thread_siblings_list   |  1 +
 .../linux-deconfigured-cpus/cpu/cpu113/online  |  1 +
 .../linux-deconfigured-cpus/cpu/cpu114/online  |  1 +
 .../linux-deconfigured-cpus/cpu/cpu115/online  |  1 +
 .../linux-deconfigured-cpus/cpu/cpu116/online  |  1 +
 .../linux-deconfigured-cpus/cpu/cpu117/online  |  1 +
 .../linux-deconfigured-cpus/cpu/cpu118/online  |  1 +
 .../linux-deconfigured-cpus/cpu/cpu119/online  |  1 +
 .../linux-deconfigured-cpus/cpu/cpu12/online   |  1 +
 .../linux-deconfigured-cpus/cpu/cpu120/online  |  1 +
 .../cpu/cpu120/topology/core_id|  1 +
 .../cpu/cpu120/topology/core_siblings  |  1 +
 .../cpu/cpu120/topology/core_siblings_list |  1 +
 .../cpu/cpu120/topology/physical_package_id|  1 +
 .../cpu/cpu120/topology/thread_siblings|  1 +
 .../cpu/cpu120/topology/thread_siblings_list   |  1 +
 .../linux-deconfigured-cpus/cpu/cpu121/online  |  1 +
 .../linux-deconfigured-cpus/cpu/cpu122/online  |  1 +
 .../linux-deconfigured-cpus/cpu/cpu123/online  |  1 +
 .../linux-deconfigured-cpus/cpu/cpu124/online  |  1 +
 .../linux-deconfigured-cpus/cpu/cpu125/online  |  1 +
 .../linux-deconfigured-cpus/cpu/cpu126/online  |  1 +
 .../linux-deconfigured-cpus/cpu/cpu127/online  |  1 +
 .../linux-deconfigured-cpus/cpu/cpu128/online  |  1 +
 .../cpu/cpu128/topology/core_id|  1 +
 .../cpu/cpu128/topology/core_siblings  |  1 +
 .../cpu/cpu128/topology/core_siblings_list |  1 +
 .../cpu/cpu128/topology/physical_package_id|  1 +
 .../cpu/cpu128/topology/thread_siblings|  1 +
 .../cpu/cpu128/topology/thread_siblings_list   |  1 +
 .../linux-deconfigured-cpus/cpu/cpu129/online  |  1 +
 .../linux-deconfigured-cpus/cpu/cpu13/online   |  1 +
 .../linux-deconfigured-cpus/cpu/cpu130/online  |  1 +
 .../linux-deconfigured-cpus/cpu/cpu131/online  |  1 +
 .../linux-deconfigured-cpus/cpu/cpu132/online  |  1 +
 .../linux-deconfigured-cpus/cpu/cpu133/online  |  1 +
 .../linux-deconfigured-cpus/cpu/cpu134/online  |  1 +
 .../linux-deconfigured-cpus/cpu/cpu135/online  |  1 +
 .../linux-deconfigured-cpus/cpu/cpu136/online  |  1 +
 .../cpu/cpu136/topology/core_id|  1 +
 .../cpu/cpu136/topology/core_siblings  |  1 +
 .../cpu/cpu136/topology/core_siblings_list |  1 +
 .../cpu/cpu136/topology/physical_package_id|  1 +
 .../cpu/cpu136/topology/thread_siblings|  1 +
 .../cpu/cpu136/topology/thread_siblings_list   |  1 +
 .../linux-deconfigured-cpus/cpu/cpu137/online  |  1 +
 ...

[libvirt] [PATCH] qemuDomainSetNumaParamsLive:

2015-07-13 Thread Michal Privoznik
https://bugzilla.redhat.com/show_bug.cgi?id=1232663

In one of my previous patches (bcd9a564) I tried to fix the problem
that we blindly assumed strict NUMA mode for guests. This led to
several problems, like us pinning a domain onto a nodeset via libnuma
along with CGroups. Once the nodeset was changed by the user, it did
not produce the desired effect. See the original commit for more info.
But the commit I wrote had a bug: when NUMA parameters are changed on
a running domain we require the domain to be strictly pinned onto a
nodeset. Due to a typo, the condition was mis-evaluated.

Signed-off-by: Michal Privoznik 
---
 src/qemu/qemu_driver.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index c8cbd57..8c705c4 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -9954,7 +9954,7 @@ qemuDomainSetNumaParamsLive(virDomainObjPtr vm,
 size_t i = 0;
 int ret = -1;
 
-if (virDomainNumatuneGetMode(vm->def->numa, -1, &mode) < 0 ||
+if (virDomainNumatuneGetMode(vm->def->numa, -1, &mode) == 0 &&
 mode != VIR_DOMAIN_NUMATUNE_MEM_STRICT) {
 virReportError(VIR_ERR_OPERATION_INVALID, "%s",
_("change of nodeset for running domain "
-- 
2.3.6

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [libvirt-python][PATCH] examples: Introduce nodestats example

2015-07-13 Thread Martin Kletzander

On Mon, Jun 29, 2015 at 04:53:38PM +0200, Michal Privoznik wrote:

So, this is an exercise to show libvirt capabilities. Firstly, for
each host NUMA nodes some statistics are printed out, i.e. total


s/nodes/node/


memory and free memory. Then, for each running domain that has memory
strictly bound to certain host nodes, statistics on how much
memory it takes are printed out too. For instance:

 # ./nodestats.py
 NUMA stats
 NUMA nodes: 0   1   2   3
 MemTotal:   3950    3967    3937    3943
 MemFree:434 674 149 216
 Dom 'gentoo':   1048576 1048576 1048576 1048576

We can see 4 host NUMA nodes, all of them having roughly 4GB of RAM.
Yeah, some of them have nearly all the memory consumed. Then, there's
only one running domain, called 'gentoo', and it has 1GB per each NUMA
node configured.

Signed-off-by: Michal Privoznik 
---
examples/nodestats.py | 106 ++
1 file changed, 106 insertions(+)
create mode 100755 examples/nodestats.py

diff --git a/examples/nodestats.py b/examples/nodestats.py
new file mode 100755
index 000..dbf5593
--- /dev/null
+++ b/examples/nodestats.py
@@ -0,0 +1,106 @@
+#!/usr/bin/env python
+# Print some host NUMA node statistics
+#
+# Authors:
+#   Michal Privoznik 

s/$/>/ if you want the authorship here ;)


+
+import libvirt
+import sys
+from xml.dom import minidom
+import libxml2
+
+class virBitmap:
+def __init__(self):
+self.bitmap = 0
+
+def setBit(self, offset):
+mask = 1 << offset
+self.bitmap = self.bitmap | mask
+
+def clearBit(self, offset):
+mask = ~(1 << offset)
+self.bitmap = self.bitmap & mask
+
+def isSet(self, offset):
+mask = 1 << offset
+return(self.bitmap & mask)
+
+def setRange(self, start, end):
+while (start <= end):
+self.setBit(start)
+start = start + 1
+
+def parse(self, string):
+for s in string.split(','):
+list = s.split('-', 2)
+start = int(list[0])
+if len(list) == 2:
+end = int(list[1])
+else:
+end = start
+self.setRange(start, end)
+


This last function is the only useful one.  You are storing it all in
one integer, so it's not very scalable (well, for python2 it isn't,
python3 stores that in a long that has virtually unlimited size).

Anyway, you are only storing nodeset here, for everything else you are
using arrays.  And the nodeset won't be very big.  I'd rather you
changed this to a normal array.
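The reviewer's suggestion — dropping the integer bitmap in favour of a normal collection — could look like this (an illustrative replacement, not part of the patch under review):

```python
def parse_nodeset(spec):
    """Parse a nodeset string such as '0,2-3' into a set of node IDs,
    replacing the virBitmap class with a plain Python set."""
    nodes = set()
    for chunk in spec.split(','):
        bounds = chunk.split('-', 1)
        start = int(bounds[0])
        end = int(bounds[1]) if len(bounds) == 2 else start
        nodes.update(range(start, end + 1))
    return nodes
```

Membership then becomes `node in nodeset`, with no bit arithmetic and no size limit in either Python 2 or 3.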


+def xpath_eval(ctxt, path):
+res = ctxt.xpathEval(path)
+if res is None or len(res) == 0:
+value = None
+else:
+value = res[0].content
+return value
+
+try:
+conn = libvirt.openReadOnly(None)
+except libvirt.libvirtError:
+print('Failed to connect to the hypervisor')
+sys.exit(1)
+
+try:
+capsXML = conn.getCapabilities()
+except libvirt.libvirtError:
+print('Failed to request capabilities')
+sys.exit(1)
+
+caps = minidom.parseString(capsXML)
+cells = caps.getElementsByTagName('cells')[0]
+
+nodesIDs = [ int(proc.getAttribute('id'))
+ for proc in cells.getElementsByTagName('cell') ]
+
+nodesMem = [ conn.getMemoryStats(int(proc))
+ for proc in nodesIDs]
+


You could do { proc.getAttribute('id') :
conn.getMemoryStats(int(proc.getAttribute('id'))) } and have it in a
dictionary, so it's easier to list through, but I'd prefer more
readable:

nodes = {}
for p in cells.getElementsByTagName('cell'):
   i = p.getAttribute('id')
   nodes[i] = conn.getMemoryStats(i);


+doms = conn.listAllDomains(libvirt.VIR_CONNECT_LIST_DOMAINS_ACTIVE)
+domsStrict = [ proc
+               for proc in doms
+               if proc.numaParameters()['numa_mode'] ==
+                  libvirt.VIR_DOMAIN_NUMATUNE_MEM_STRICT ]
+


This will list most of the domains even though there will be no info
for them.  You need to check whether there is any memnode with strict
memory assigned.  Although skipping everything else still feels kinda
fishy.  Maybe there should be another field for unspecified memnodes.

Also, if you run this with a domain whose <memnode> is pinned to a
nodeset spanning four host nodes, you report the allocated memory 4
times.
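To make the double-counting concrete, here is an illustrative sketch (not code from the patch) of what happens when one allocation is attributed to every node in its nodeset:

```python
def per_node_totals(memnodes, node_ids):
    """memnodes: list of (mem_kib, nodeset) pairs.

    Adds each allocation once per node in its nodeset, reproducing the
    over-counting the review points out.
    """
    totals = {n: 0 for n in node_ids}
    for mem_kib, nodeset in memnodes:
        for n in nodeset:
            totals[n] += mem_kib  # the same allocation counted once per node
    return totals

# One 1024 KiB memnode pinned to nodeset 0-3:
totals = per_node_totals([(1024, [0, 1, 2, 3])], [0, 1, 2, 3])
# sum(totals.values()) is 4096: the single allocation is reported 4 times
```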


+domsStrictCfg = {}
+
+for dom in domsStrict:
+    xmlStr = dom.XMLDesc()
+    doc = libxml2.parseDoc(xmlStr)
+    ctxt = doc.xpathNewContext()
+
+    domsStrictCfg[dom] = [ 0 for node in nodesIDs ]
+
+    for memnode in ctxt.xpathEval("/domain/numatune/memnode"):
+        ctxt.setContextNode(memnode)
+        cellid = xpath_eval(ctxt, "@cellid")
+        mode = xpath_eval(ctxt, "@mode")
+        nodeset = xpath_eval(ctxt, "@nodeset")
+
+        bitmap = virBitmap()
+        bitmap.parse(nodeset)
+        for node in nodesIDs:
+            if bitmap.isSet(int(node)):
+                mem = xpath_eval(ctxt, "/domain/cpu/numa/cell[@id='%s']/@memory" % cellid)
+                domsStrictCfg[dom][int(no

[libvirt] Serial connection between guests

2015-07-13 Thread Marcelo Ricardo Leitner
Hi,

(I'm not subscribed to the list, please keep me on Cc)

I'm attempting to get a serial link between two guests on the same
hypervisor. The only practical way I could find is to add a serial port
using a pty to one guest and then manually connect it to the serial port
(console, in my case) of the other guest using socat on the hypervisor.

That got me thinking: we could have this implemented at the libvirt
level. We could have a serial port for which we choose pty, udp, tcp,
etc., and also a serial port connected to another guest, so that libvirt
would handle socat start/stop automatically as the guests come up and
down. Maybe libvirt could even do something smarter than that; maybe it
can avoid socat somehow.
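For reference, the manual bridge described above boils down to one socat invocation; this sketch only builds the argv (the pty paths are illustrative, the real ones come from each guest's serial port definition):

```python
def socat_bridge_cmd(pty_a, pty_b):
    """Build the socat command that links two guest pty serial ports.

    raw,echo=0 keeps the tty layer from mangling or echoing the bytes.
    """
    return ["socat", "%s,raw,echo=0" % pty_a, "%s,raw,echo=0" % pty_b]

# e.g. socat_bridge_cmd("/dev/pts/3", "/dev/pts/4")
```

Running the returned argv (e.g. via subprocess) on the hypervisor gives the same manual bridge the post describes.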

What do you think? My usage is for virtualizing TAHI:
http://networktest.sourceforge.net/usage.html
I need 2 ethernet links plus a serial one, which must not break while
TAHI is running the tests.

I didn't think this through regarding multi-platform concerns and all;
I'm just sharing the idea/need.

Thanks,
Marcelo

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [PATCH 00/13] Move generic virsh data to a separate module vsh

2015-07-13 Thread Martin Kletzander

On Mon, Jun 29, 2015 at 05:37:34PM +0200, Erik Skultety wrote:

The idea behind this is that in order to introduce a virt-admin client (and
later some commands/APIs), there are lots of methods in virsh that can be
easily reused by other potential clients, like command and command-argument
passing or error reporting.

!!! IMPORTANT !!!
These patches cannot be compiled separately; the series is split more or
less logically into chunks only to be more readable for the reviewer.
I started this at least 4 times from scratch and still haven't found a way
to split virsh into several independently applicable commits, rather
than having one massive commit at the end.



Actually, I found this easier to review as one patch, with various
diff options used for various parts of it.  Some questions and
suggestions below.

Why aren't vshClientHooks and vshCmdGrp in the vshControl struct?  If
we move the client helpers into a library (I think this stuff can live
in src/util/virshell.c, for example), then it won't be thread-safe.
Moving those into the control structure would also be cleaner.

Some things are still broken up, like readline stuff.  It should be
either completely hidden or completely exposed.  For example,
vshReadline{Init,Deinit} should be moved into vsh{Init,Deinit}.

Commands from former virshCmds (except connect) should be in vsh.c so
each client can use them in their cmdGroups definition without
re-implementing them.

vsh is not doing the argument parsing, but that's fine.  I would,
at least, wrap some options in a function that can be called from
multiple clients, but that's one of the nice-to-have things that can
be done later.

vshInit() is declared with ATTRIBUTE_NONNULL(2) ATTRIBUTE_NONNULL(3);
but it handles NULLs passed in properly, so that declaration should be
fixed.

Also there are some whitespace problems (e.g. with parameters of
virshLookupDomainBy), but considering how many of them are there
inside the files already, it's nothing compared to the size of this
refactor.

The exclude_file_name_regexp--sc_avoid_strcase should only contain
^tools/vsh\.h$$, not virsh.h.

Anyway, here's a list of things that should be changed (either from
virsh to vsh or vice versa) split into categories (feel free to
disagree with any):

Totally:
- virshCommandOptTimeoutToMs
- VIRSH_MAX_XML_FILE
- virshPrettyCapacity
- vshFindDisk
- vshSnapshotListCollect

Most likely:
- vshCatchInt
- virshPrintJobProgress
- virshTreePrint(internal) with virshTreeLookup typedef
- virshPrintRaw
- virshAskReedit
- virshEdit*
- virshAllowedEscapeChar

Maybe (i.e. not needed now, but might be nice):
- virshWatchJob
- virsh-edit stuff
- vshLookupByFlags

And of course all of the below:
- vshDomainBlockJob
- vshDomainJob
- vshDomainEvent
- vshDomainEventDefined
- vshDomainEventUndefined
- vshDomainEventStarted
- vshDomainEventSuspended
- vshDomainEventResumed
- vshDomainEventStopped
- vshDomainEventShutdown
- vshDomainEventPMSuspended
- vshDomainEventCrashed
- vshDomainEventWatchdog
- vshDomainEventIOError
- vshGraphicsPhase
- vshGraphicsAddress
- vshDomainBlockJobStatus
- vshDomainEventDiskChange
- vshDomainEventTrayChange
- vshEventAgentLifecycleState
- vshEventAgentLifecycleReason
- vshNetworkEvent
- vshNetworkEventId
- vshStorageVol
- vshUpdateDiskXMLType
- vshFindDiskType
- vshUndefineVolume

Martin



Re: [libvirt] [libvirt-glib PATCHv5 7/7] gobject: Add wrapper for virNetworkGetDHCPLeases

2015-07-13 Thread Zeeshan Ali (Khattak)
On Fri, Jul 10, 2015 at 5:04 PM, Christophe Fergeau  wrote:
> On Fri, Jul 10, 2015 at 04:43:15PM +0100, Zeeshan Ali (Khattak) wrote:
>>
>> I really don't see the point of evaluating the possible but unlikely[1]
>> impact on any distro. As I said, we give distros enough time and that
>> should be more than enough upstream could do, i.e. these exact details
>> are irrelevant to me. Since they are very relevant to you, I'd suggest
>> you do the research and then let me know which solution you want, and I
>> will implement it.
>
> Patches of yours broke the build, and you have a strong opinion on the
> right way to fix it; in such situations I usually go the extra mile to
> convince others that it's the best way :) That's why I'm a bit surprised
> this has dragged on for so long with no real attempt at finding some
> common ground.

I gave you an easy way out of this dragging discussion and even
promised to implement either of the solutions you wanted. You're
still not happy, so I'll just bump the dependencies now. Feel free to
implement the ugly hack solution. I'm out of here..

-- 
Regards,

Zeeshan Ali (Khattak)

Befriend GNOME: http://www.gnome.org/friends/



Re: [libvirt] [PATCH] qemuProcessHandleMigrationStatus: Update migration status on ASYNC_JOB_SAVE too

2015-07-13 Thread Martin Kletzander

On Mon, Jul 13, 2015 at 02:20:50PM +0200, Michal Privoznik wrote:

After Jirka's migration patches libvirt is listening on migration
events from qemu instead of actively polling on the monitor. There is,
however, a little regression (introduced in 6d2edb6a42d0d41). The
problem is, the current status of migration job is updated in
qemuProcessHandleMigrationStatus if and only if migration job was
started. But we have a separate job type for saving a domain into a
file: QEMU_ASYNC_JOB_SAVE. Therefore, since this job is not strictly a
migration job, internal state was not updated and later checks failed:

 virsh # save fedora22 /tmp/fedora22_ble.save
 error: Failed to save domain fedora22 to /tmp/fedora22_ble.save
 error: operation failed: domain save job: is not active

Signed-off-by: Michal Privoznik 
---
src/qemu/qemu_process.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 2a529f7..16d39b2 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -1521,29 +1521,30 @@ static int
qemuProcessHandleMigrationStatus(qemuMonitorPtr mon ATTRIBUTE_UNUSED,
 virDomainObjPtr vm,
 int status,
 void *opaque ATTRIBUTE_UNUSED)
{
qemuDomainObjPrivatePtr priv;

virObjectLock(vm);

VIR_DEBUG("Migration of domain %p %s changed state to %s",
  vm, vm->def->name,
  qemuMonitorMigrationStatusTypeToString(status));

priv = vm->privateData;
if (priv->job.asyncJob != QEMU_ASYNC_JOB_MIGRATION_OUT &&
-priv->job.asyncJob != QEMU_ASYNC_JOB_MIGRATION_IN) {
+priv->job.asyncJob != QEMU_ASYNC_JOB_MIGRATION_IN &&
+priv->job.asyncJob != QEMU_ASYNC_JOB_SAVE) {


I think this should just be if priv->job.asyncJob !=
QEMU_ASYNC_JOB_NONE because all async jobs can ultimately be a
migration.

ACK with that changed.


VIR_DEBUG("got MIGRATION event without a migration job");
goto cleanup;
}

priv->job.current->status.status = status;
virDomainObjBroadcast(vm);

 cleanup:
virObjectUnlock(vm);
return 0;
}


--
2.3.6


[libvirt] [PATCH 5/6] vz: support misc migration options

2015-07-13 Thread nshirokovskiy
From: Nikolay Shirokovskiy 

The migration API has a lot of options. This patch's intention is to provide
support for the options that can be trivially supported, and to give an
estimation of support for the other options in this commit message.

I. Supported.

1. VIR_MIGRATE_COMPRESSED. Means 'use compression when migrating domain
memory'. It is supported, but in a quite uncommon way: vz migration demands
that this option be set. This is because vz is hardcoded to move VM memory
using compression, so anyone who wants to migrate a vz domain should set this
option, thus declaring they know it uses compression.

Why bother? Maybe just support this option and ignore it when it is not set,
or don't support it at all, as we can't change behaviour in this respect.
Well, I believe that this option is, first, inherent to the hypervisor
implementation, as we have the task of moving domain memory to a different
place, and, second, a tradeoff between cpu and network resources, so some
management layer should choose the strategy via this option. If we choose
the ignoring or unsupporting implementation, then this option has too vague
a meaning. Let's go into more detail.

First, if we ignore the situation where the option is not set, then we lead
the user to the fallacy that the vz hypervisor doesn't use compression and
thus has lower cpu usage. The second approach is to not support the option.
The main reason not to follow that way is that 'not supported and not set'
is indistinguishable from 'supported and not set', which again fools the
user.
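The 'demand the flag' behaviour argued for above amounts to a check like this sketch (the flag value is libvirt's public VIR_MIGRATE_COMPRESSED; the function name is illustrative):

```python
VIR_MIGRATE_COMPRESSED = 1 << 11  # libvirt's public flag value (2048)

def vz_check_migrate_flags(flags):
    """vz always compresses, so callers must acknowledge it via the flag."""
    if not (flags & VIR_MIGRATE_COMPRESSED):
        raise ValueError("vz migration always uses compression; "
                         "set VIR_MIGRATE_COMPRESSED to acknowledge it")
```

Rejecting an unset flag, rather than silently ignoring it, is exactly the 'supported and demanded' semantics the paragraph defends.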

2. VIR_MIGRATE_LIVE. Means 'reduce domain downtime by suspending it as late
as possible', which technically means 'migrate as much domain memory as
possible before suspending'. Supported in the same manner as
VIR_MIGRATE_COMPRESSED, as both vz VMs and CTs are always migrated via the
live scheme.

One may be fooled by the vz sdk migration api flags: PVMT_HOT_MIGRATION (aka
live) and PVMT_WARM_MIGRATION (aka normal). The current implementation
ignores these flags and always uses live migration.

3. VIR_MIGRATE_PERSIST_DEST, VIR_MIGRATE_UNDEFINE_SOURCE. These two come
together. Vz domains are always persistent, so we have to demand that
VIR_MIGRATE_PERSIST_DEST is set and VIR_MIGRATE_UNDEFINE_SOURCE is not (the
latter is achieved just by not supporting it).

4. VIR_MIGRATE_PAUSED. Means 'don't resume the domain on the destination'.
This is trivially supported, as we have a corresponding option in vz
migration.

All that said, the minimal command to migrate a vz domain looks like this:
migrate $DOMAIN $DESTINATION --live --persistent --compressed.

Not good. Say you want to just migrate a domain without further
details: you will get error messages until you add these options to the
command line. I think there is a lack of a notion of 'default' behaviour
in all these aspects. If we had it, we could just issue:

migrate $DOMAIN $DESTINATION

For vz this would give default compression, for example, while for qemu it
would give default no-compression. Then we could have flags --compressed
and --no-compressed, and for vz the latter would give an unsupported error.

II. Unsupported.

1. VIR_MIGRATE_UNSAFE. Vz disks always have 'cache=none' set (this
is not reflected in the current version of the vz driver and will be fixed
soon), so we don't need to support this option.

2. VIR_MIGRATE_CHANGE_PROTECTION. Unsupported, as we have no appropriate
support from the vz sdk. Although we have locks, they are advisory and
can't help us.

3. VIR_MIGRATE_TUNNELLED. Means 'use the libvirtd-to-libvirtd connection
to pass hypervisor migration traffic'. Unsupported, as it is not among the
vz hypervisor use cases. Moreover, this feature only has meaning for
peer2peer migration, which is not implemented in this patch set.

4. Direct migration, which is exposed via the *toURI* interfaces with the
VIR_MIGRATE_PEER2PEER flag unset. Means 'migrate without using libvirtd on
the other side'. To support it we would have to add authN means to the vz
driver, as mentioned in the 'backbone patch', which looks ugly.

5. VIR_MIGRATE_ABORT_ON_ERROR, VIR_MIGRATE_AUTO_CONVERGE,
VIR_MIGRATE_RDMA_PIN_ALL, VIR_MIGRATE_NON_SHARED_INC,
VIR_MIGRATE_PARAM_DEST_XML, VIR_MIGRATE_PARAM_BANDWIDTH,
VIR_MIGRATE_PARAM_GRAPHICS_URI, VIR_MIGRATE_PARAM_LISTEN_ADDRESS,
VIR_MIGRATE_PARAM_MIGRATE_DISKS.
Without further discussion: they are just not use cases of the vz hypervisor.

III. Unimplemented.

1. VIR_MIGRATE_OFFLINE. Means 'migrate only the XML definition of a domain'.
Actually the same vz sdk call supports offline migration, but nevertheless we
don't get it for free for vz domains, because in offline migration only the
'begin' and 'prepare' steps are performed, while we can't issue the vz
migration command earlier than the 'perform' step, as we need the authN
cookie. So extra work needs to be done, which goes into a different patchset.

2. VIR_MIGRATE_PEER2PEER. Means 'the whole migration management should
be done by the daemon on the source side'. QEMU does this, but
at the cost of heavily (by my estimate) duplicating the client-side
migration management code. We can do this for vz too, or even better,
refactor and then support it. So it goes to a differen

[libvirt] [PATCH 4/6] vz: support migration uri

2015-07-13 Thread nshirokovskiy
From: Nikolay Shirokovskiy 

Signed-off-by: Nikolay Shirokovskiy 
---
 src/vz/vz_driver.c |   52 +++-
 1 files changed, 51 insertions(+), 1 deletions(-)

diff --git a/src/vz/vz_driver.c b/src/vz/vz_driver.c
index a42597c..9fefac1 100644
--- a/src/vz/vz_driver.c
+++ b/src/vz/vz_driver.c
@@ -1356,6 +1356,7 @@ vzConnectSupportsFeature(virConnectPtr conn ATTRIBUTE_UNUSED, int feature)
 
 #define VZ_MIGRATION_PARAMETERS \
 VIR_MIGRATE_PARAM_DEST_NAME,VIR_TYPED_PARAM_STRING, \
+VIR_MIGRATE_PARAM_URI,  VIR_TYPED_PARAM_STRING, \
 NULL
 
 static char *
@@ -1509,6 +1510,45 @@ vzParseCookie2(const char *xml, unsigned char *domain_uuid)
 return ret;
 }
 
+/* validate 'in' as a vz migration URI and return a copy of it */
+static char *
+vzAdaptInUri(const char *in)
+{
+virURIPtr uri = NULL;
+char *out = NULL;
+
+if (!(uri = virURIParse(in)))
+goto cleanup;
+
+if (uri->scheme == NULL || uri->server == NULL) {
+virReportError(VIR_ERR_INVALID_ARG,
+   _("scheme and host are mandatory in vz migration URI: %s"),
+   in);
+goto cleanup;
+}
+
+if (uri->user != NULL || uri->path != NULL ||
+uri->query != NULL || uri->fragment != NULL) {
+virReportError(VIR_ERR_INVALID_ARG,
+   _("only scheme, host and port are supported in "
+ "vz migration URI: %s"), in);
+goto cleanup;
+}
+
+if (STRNEQ(uri->scheme, "tcp")) {
+virReportError(VIR_ERR_ARGUMENT_UNSUPPORTED,
+   _("unsupported scheme %s in migration URI %s"),
+   uri->scheme, in);
+goto cleanup;
+}
+
+if (VIR_STRDUP(out, in) < 0)
+goto cleanup;
+
+ cleanup:
+virURIFree(uri);
+return out;
+}
+
 static int
 vzDomainMigratePrepare3Params(virConnectPtr dconn,
   virTypedParameterPtr params ATTRIBUTE_UNUSED,
@@ -1522,6 +1562,11 @@ vzDomainMigratePrepare3Params(virConnectPtr dconn,
 {
 vzConnPtr privconn = dconn->privateData;
 int ret = -1;
+const char *uri = NULL;
+
+if (virTypedParamsGetString(params, nparams,
+VIR_MIGRATE_PARAM_URI, &uri) < 0)
+goto cleanup;
 
 *cookieout = NULL;
 *uri_out = NULL;
@@ -1530,7 +1575,12 @@ vzDomainMigratePrepare3Params(virConnectPtr dconn,
 goto cleanup;
 *cookieoutlen = strlen(*cookieout) + 1;
 
-if (!(*uri_out = vzCreateMigrateUri()))
+if (uri == NULL)
+*uri_out = vzCreateMigrateUri();
+else
+*uri_out = vzAdaptInUri(uri);
+
+if (*uri_out == NULL)
 goto cleanup;
 
 ret = 0;
-- 
1.7.1



[libvirt] [PATCH 6/6] vz: cleanup: define vz format of uuids

2015-07-13 Thread nshirokovskiy
From: Nikolay Shirokovskiy 

vz puts uuids into curly braces. Simply introduce a new constant to reflect
this and get rid of the magic +2 in the code.
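The convention is trivial but worth pinning down: a canonical 36-character UUID string gains two characters when wrapped, which is why VZ_UUID_STRING_BUFLEN is VIR_UUID_STRING_BUFLEN + 2. A sketch:

```python
def vz_uuid_format(uuid_str):
    """Wrap a canonical UUID string in curly braces, vz-SDK style."""
    return "{%s}" % uuid_str

canonical = "f81d4fae-7dec-11d0-a765-00a0c91e6bf6"  # 36 characters
wrapped = vz_uuid_format(canonical)                 # 38 characters
```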

Signed-off-by: Nikolay Shirokovskiy 
---
 src/vz/vz_sdk.c   |   12 ++--
 src/vz/vz_utils.h |2 ++
 2 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/src/vz/vz_sdk.c b/src/vz/vz_sdk.c
index 7646796..187fcec 100644
--- a/src/vz/vz_sdk.c
+++ b/src/vz/vz_sdk.c
@@ -239,7 +239,7 @@ prlsdkConnect(vzConnPtr privconn)
 PRL_HANDLE job = PRL_INVALID_HANDLE;
 PRL_HANDLE result = PRL_INVALID_HANDLE;
 PRL_HANDLE response = PRL_INVALID_HANDLE;
-char session_uuid[VIR_UUID_STRING_BUFLEN + 2];
+char session_uuid[VZ_UUID_STRING_BUFLEN];
 PRL_UINT32 buflen = ARRAY_CARDINALITY(session_uuid);
 
 pret = PrlSrv_Create(&privconn->server);
@@ -319,7 +319,7 @@ prlsdkUUIDFormat(const unsigned char *uuid, char *uuidstr)
 static PRL_HANDLE
 prlsdkSdkDomainLookupByUUID(vzConnPtr privconn, const unsigned char *uuid)
 {
-char uuidstr[VIR_UUID_STRING_BUFLEN + 2];
+char uuidstr[VZ_UUID_STRING_BUFLEN];
 PRL_HANDLE sdkdom = PRL_INVALID_HANDLE;
 
 prlsdkUUIDFormat(uuid, uuidstr);
@@ -368,7 +368,7 @@ prlsdkGetDomainIds(PRL_HANDLE sdkdom,
char **name,
unsigned char *uuid)
 {
-char uuidstr[VIR_UUID_STRING_BUFLEN + 2];
+char uuidstr[VZ_UUID_STRING_BUFLEN];
 PRL_UINT32 len;
 PRL_RESULT pret;
 
@@ -1725,7 +1725,7 @@ prlsdkEventsHandler(PRL_HANDLE prlEvent, PRL_VOID_PTR opaque)
 vzConnPtr privconn = opaque;
 PRL_RESULT pret = PRL_ERR_FAILURE;
 PRL_HANDLE_TYPE handleType;
-char uuidstr[VIR_UUID_STRING_BUFLEN + 2];
+char uuidstr[VZ_UUID_STRING_BUFLEN];
 unsigned char uuid[VIR_UUID_BUFLEN];
 PRL_UINT32 bufsize = ARRAY_CARDINALITY(uuidstr);
 PRL_EVENT_TYPE prlEventType;
@@ -3483,7 +3483,7 @@ prlsdkDoApplyConfig(virConnectPtr conn,
 {
 PRL_RESULT pret;
 size_t i;
-char uuidstr[VIR_UUID_STRING_BUFLEN + 2];
+char uuidstr[VZ_UUID_STRING_BUFLEN];
 bool needBoot = true;
 char *mask = NULL;
 
@@ -4073,7 +4073,7 @@ int prlsdkMigrate(virDomainObjPtr dom, const char* uri_str,
 vzDomObjPtr privdom = dom->privateData;
 virURIPtr uri = NULL;
 PRL_HANDLE job = PRL_INVALID_HANDLE;
-char uuidstr[VIR_UUID_STRING_BUFLEN + 2];
+char uuidstr[VZ_UUID_STRING_BUFLEN];
 PRL_UINT32 vzflags = PRLSDK_MIGRATION_FLAGS;
 
 uri = virURIParse(uri_str);
diff --git a/src/vz/vz_utils.h b/src/vz/vz_utils.h
index a779b03..98a8f77 100644
--- a/src/vz/vz_utils.h
+++ b/src/vz/vz_utils.h
@@ -55,6 +55,8 @@
 # define PARALLELS_REQUIRED_BRIDGED_NETWORK  "Bridged"
 # define PARALLELS_BRIDGED_NETWORK_TYPE  "bridged"
 
+# define VZ_UUID_STRING_BUFLEN (VIR_UUID_STRING_BUFLEN + 2)
+
 struct _vzConn {
 virMutex lock;
 
-- 
1.7.1



[libvirt] [PATCH 3/6] vz: support domain rename on migrate

2015-07-13 Thread nshirokovskiy
From: Nikolay Shirokovskiy 

---
 src/vz/vz_driver.c |   12 +---
 src/vz/vz_sdk.c|5 +++--
 src/vz/vz_sdk.h|5 -
 3 files changed, 16 insertions(+), 6 deletions(-)

diff --git a/src/vz/vz_driver.c b/src/vz/vz_driver.c
index d5cbdc6..a42597c 100644
--- a/src/vz/vz_driver.c
+++ b/src/vz/vz_driver.c
@@ -1354,7 +1354,9 @@ vzConnectSupportsFeature(virConnectPtr conn ATTRIBUTE_UNUSED, int feature)
 }
 }
 
-#define VZ_MIGRATION_PARAMETERS NULL
+#define VZ_MIGRATION_PARAMETERS \
+VIR_MIGRATE_PARAM_DEST_NAME,VIR_TYPED_PARAM_STRING, \
+NULL
 
 static char *
 vzDomainMigrateBegin3Params(virDomainPtr domain,
@@ -1558,12 +1560,16 @@ vzDomainMigratePerform3Params(virDomainPtr domain,
 virDomainObjPtr dom = NULL;
 const char *uri = NULL;
 unsigned char session_uuid[VIR_UUID_BUFLEN];
+const char *dname = NULL;
 
 *cookieout = NULL;
 
 if (virTypedParamsGetString(params, nparams,
 VIR_MIGRATE_PARAM_URI,
-&uri) < 0)
+&uri) < 0 ||
+virTypedParamsGetString(params, nparams,
+VIR_MIGRATE_PARAM_DEST_NAME,
+&dname) < 0)
 goto cleanup;
 
 if (!(dom = vzDomObjFromDomain(domain)))
@@ -1578,7 +1584,7 @@ vzDomainMigratePerform3Params(virDomainPtr domain,
 if (vzParseCookie1(cookiein, session_uuid) < 0)
 goto cleanup;
 
-if (prlsdkMigrate(dom, uri, session_uuid) < 0)
+if (prlsdkMigrate(dom, uri, session_uuid, dname) < 0)
 goto cleanup;
 
 if (!(*cookieout = vzFormatCookie2(dom->def->uuid)))
diff --git a/src/vz/vz_sdk.c b/src/vz/vz_sdk.c
index a329c68..f1fa6da 100644
--- a/src/vz/vz_sdk.c
+++ b/src/vz/vz_sdk.c
@@ -4067,7 +4067,7 @@ prlsdkGetMemoryStats(virDomainObjPtr dom,
 #define PRLSDK_MIGRATION_FLAGS (PSL_HIGH_SECURITY)
 
 int prlsdkMigrate(virDomainObjPtr dom, const char* uri_str,
-  const unsigned char *session_uuid)
+  const unsigned char *session_uuid, const char *dname)
 {
 int ret = -1;
 vzDomObjPtr privdom = dom->privateData;
@@ -4081,7 +4081,8 @@ int prlsdkMigrate(virDomainObjPtr dom, const char* uri_str,
 goto cleanup;
 
 prlsdkUUIDFormat(session_uuid, uuidstr);
-job = PrlVm_MigrateEx(privdom->sdkdom, uri->server, uri->port, uuidstr,
+job = PrlVm_MigrateWithRenameEx(privdom->sdkdom, uri->server, uri->port, uuidstr,
+  dname == NULL ? "" : dname,
  "", /* use default dir for migrated instance bundle */
   PRLSDK_MIGRATION_FLAGS,
   0, /* reserved flags */
diff --git a/src/vz/vz_sdk.h b/src/vz/vz_sdk.h
index 1a90eca..971f913 100644
--- a/src/vz/vz_sdk.h
+++ b/src/vz/vz_sdk.h
@@ -77,4 +77,7 @@ prlsdkGetVcpuStats(virDomainObjPtr dom, int idx, unsigned long long *time);
 int
prlsdkGetMemoryStats(virDomainObjPtr dom, virDomainMemoryStatPtr stats, unsigned int nr_stats);
 int
-prlsdkMigrate(virDomainObjPtr dom, const char* uri_str, const char unsigned *session_uuid);
+prlsdkMigrate(virDomainObjPtr dom,
+  const char* uri_str,
+  const unsigned char *session_uuid,
+  const char* dname);
-- 
1.7.1



[libvirt] [PATCH 1/6] vz: add migration backbone code

2015-07-13 Thread nshirokovskiy
From: Nikolay Shirokovskiy 

This patch makes basic vz migration possible. For example by virsh:
virsh -c vz:///system migrate $NAME vz+ssh://$DST/system

Vz migration is implemented through the interface for managed migrations for
drivers, although it looks like a candidate for direct migration, as all the
work is done by the vz sdk. The reason is that the vz sdk lacks the rich
remote authentication capabilities of libvirt, and if we chose to implement
direct migration we would have to reimplement libvirt's auth means. This
brings the requirement that the destination side have a running libvirt
daemon. This is not a problem, as vz is moving in the direction of tight
integration with libvirt.

Another issue with this choice is that if the managed migration fails on the
'finish' step, the driver is supposed to resume on the source. This is not
compatible with vz sdk migration, but it can be overcome without losing
consistency; see comments in the code.

Technically we have a libvirt connection to the destination in the managed
migration scheme, and we use this connection to obtain a session_uuid (which
acts as an authZ token) for vz migration. This uuid is passed from the
destination through a cookie on the 'prepare' step.

A few words on the vz migration uri. I'd probably use just 'hostname:port'
uris, as we don't have different migration schemes in vz, but the scheme part
is mandatory, so 'tcp' is used. Looks like a good name.
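Under that convention, a destination migration URI can be validated with a sketch like this (the function name is illustrative):

```python
from urllib.parse import urlparse

def check_vz_migrate_uri(uri):
    """Accept only tcp://host[:port] URIs, per the convention above."""
    parsed = urlparse(uri)
    if parsed.scheme != "tcp" or not parsed.hostname:
        raise ValueError(
            "vz migration URI must look like tcp://host[:port]: %s" % uri)
    return parsed.hostname, parsed.port
```

This mirrors the checks the series performs in C: a mandatory 'tcp' scheme, a mandatory host, and an optional port.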

Signed-off-by: Nikolay Shirokovskiy 
---
 src/vz/vz_driver.c |  250 
 src/vz/vz_sdk.c|   79 ++--
 src/vz/vz_sdk.h|2 +
 src/vz/vz_utils.h  |1 +
 4 files changed, 322 insertions(+), 10 deletions(-)

diff --git a/src/vz/vz_driver.c b/src/vz/vz_driver.c
index 9f0c52f..e003646 100644
--- a/src/vz/vz_driver.c
+++ b/src/vz/vz_driver.c
@@ -1343,6 +1343,250 @@ vzDomainMemoryStats(virDomainPtr domain,
 return ret;
 }
 
+static int
+vzConnectSupportsFeature(virConnectPtr conn ATTRIBUTE_UNUSED, int feature)
+{
+switch (feature) {
+case VIR_DRV_FEATURE_MIGRATION_PARAMS:
+return 1;
+default:
+return 0;
+}
+}
+
+#define VZ_MIGRATION_PARAMETERS NULL
+
+static char *
+vzDomainMigrateBegin3Params(virDomainPtr domain,
+virTypedParameterPtr params,
+int nparams,
+char **cookieout ATTRIBUTE_UNUSED,
+int *cookieoutlen ATTRIBUTE_UNUSED,
+unsigned int fflags ATTRIBUTE_UNUSED)
+{
+virDomainObjPtr dom = NULL;
+char *xml = NULL;
+
+if (virTypedParamsValidate(params, nparams, VZ_MIGRATION_PARAMETERS) < 0)
+goto cleanup;
+
+if (!(dom = vzDomObjFromDomain(domain)))
+goto cleanup;
+
+xml = virDomainDefFormat(dom->def, VIR_DOMAIN_DEF_FORMAT_SECURE);
+
+ cleanup:
+if (dom)
+virObjectUnlock(dom);
+
+return xml;
+}
+
+/* return 'tcp://hostname' */
+static char *
+vzCreateMigrateUri(void)
+{
+char *hostname = NULL;
+char *out = NULL;
+virURI uri = {};
+
+if ((hostname = virGetHostname()) == NULL)
+goto cleanup;
+
+if (STRPREFIX(hostname, "localhost")) {
+virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+   _("hostname on destination resolved to localhost,"
+ " but migration requires an FQDN"));
+goto cleanup;
+}
+
+/* to set const string to non-const */
+if (VIR_STRDUP(uri.scheme, "tcp") < 0)
+goto cleanup;
+uri.server = hostname;
+out = virURIFormat(&uri);
+
+ cleanup:
+VIR_FREE(hostname);
+VIR_FREE(uri.scheme);
+return out;
+}
+
+static int
+vzDomainMigratePrepare3Params(virConnectPtr dconn,
+  virTypedParameterPtr params ATTRIBUTE_UNUSED,
+  int nparams ATTRIBUTE_UNUSED,
+  const char *cookiein ATTRIBUTE_UNUSED,
+  int cookieinlen ATTRIBUTE_UNUSED,
+  char **cookieout,
+  int *cookieoutlen,
+  char **uri_out,
+  unsigned int fflags ATTRIBUTE_UNUSED)
+{
+vzConnPtr privconn = dconn->privateData;
+int ret = -1;
+char uuidstr[VIR_UUID_STRING_BUFLEN];
+
+*cookieout = NULL;
+*uri_out = NULL;
+
+virUUIDFormat(privconn->session_uuid, uuidstr);
+if (VIR_STRDUP(*cookieout, uuidstr) < 0)
+goto cleanup;
+*cookieoutlen = strlen(*cookieout) + 1;
+
+if (!(*uri_out = vzCreateMigrateUri()))
+goto cleanup;
+
+ret = 0;
+
+ cleanup:
+if (ret != 0) {
+VIR_FREE(*cookieout);
+VIR_FREE(*uri_out);
+*cookieoutlen = 0;
+}
+
+return ret;
+}
+
+static int
+vzDomainMigratePerform3Params(virDomainPtr domain,
+  const char *dconnuri ATTRIBUTE_UNUSED,
+  virTypedParameterPtr params,
+  int nparams,
+

[libvirt] [PATCH 2/6] vz: pass cookies in xml form

2015-07-13 Thread nshirokovskiy
From: Nikolay Shirokovskiy 

This way we can easily keep backward compatibility
in the future.

Use 2 distinct cookies format:
1 - between phases 'prepare' and 'perform'
2 - between phases 'perform' and 'finish'
I see no reason to use unified format like in qemu yet.
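A Python sketch of what cookie format 1 looks like on the wire (the session_uuid element name matches the parse code in the patch; the root element name is an assumption, since the archive stripped the literal tags):

```python
import xml.etree.ElementTree as ET

def format_cookie1(session_uuid):
    """Serialize the session uuid as cookie format 1 (root name assumed)."""
    return ("<vz_migration1>\n"
            "  <session_uuid>%s</session_uuid>\n"
            "</vz_migration1>\n" % session_uuid)

def parse_cookie1(xml):
    """Extract the session uuid, mirroring the XPath ./session_uuid[1]."""
    root = ET.fromstring(xml)
    node = root.find("./session_uuid")
    if node is None:
        raise ValueError("missing session_uuid element in migration data")
    return node.text
```

Format 2 is identical in shape with domain_uuid in place of session_uuid.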

Signed-off-by: Nikolay Shirokovskiy 
---
 src/vz/vz_driver.c |  111 ++--
 src/vz/vz_sdk.c|3 +
 2 files changed, 102 insertions(+), 12 deletions(-)

diff --git a/src/vz/vz_driver.c b/src/vz/vz_driver.c
index e003646..d5cbdc6 100644
--- a/src/vz/vz_driver.c
+++ b/src/vz/vz_driver.c
@@ -1412,6 +1412,101 @@ vzCreateMigrateUri(void)
 return out;
 }
 
+static char*
+vzFormatCookie1(const unsigned char *session_uuid)
+{
+char uuidstr[VIR_UUID_STRING_BUFLEN];
+virBuffer buf = VIR_BUFFER_INITIALIZER;
+
+virBufferAddLit(&buf, "<vz_migration1>\n");
+virUUIDFormat(session_uuid, uuidstr);
+virBufferAsprintf(&buf, "<session_uuid>%s</session_uuid>\n", uuidstr);
+virBufferAddLit(&buf, "</vz_migration1>\n");
+
+if (virBufferCheckError(&buf) < 0)
+return NULL;
+
+return virBufferContentAndReset(&buf);
+}
+
+static char*
+vzFormatCookie2(const unsigned char *domain_uuid)
+{
+char uuidstr[VIR_UUID_STRING_BUFLEN];
+virBuffer buf = VIR_BUFFER_INITIALIZER;
+
+virBufferAddLit(&buf, "<vz_migration2>\n");
+virUUIDFormat(domain_uuid, uuidstr);
+virBufferAsprintf(&buf, "<domain_uuid>%s</domain_uuid>\n", uuidstr);
+virBufferAddLit(&buf, "</vz_migration2>\n");
+
+if (virBufferCheckError(&buf) < 0)
+return NULL;
+
+return virBufferContentAndReset(&buf);
+}
+
+static int
+vzParseCookie1(const char *xml, unsigned char *session_uuid)
+{
+xmlDocPtr doc = NULL;
+xmlXPathContextPtr ctx = NULL;
+char *tmp = NULL;
+int ret = -1;
+
+if (!(doc = virXMLParseStringCtxt(xml, _("(_migration_cookie)"), &ctx)))
+goto cleanup;
+
+if (!(tmp = virXPathString("string(./session_uuid[1])", ctx))) {
+virReportError(VIR_ERR_INTERNAL_ERROR,
+   "%s", _("missing session_uuid element in migration data"));
+goto cleanup;
+}
+if (virUUIDParse(tmp, session_uuid) < 0) {
+virReportError(VIR_ERR_INTERNAL_ERROR,
+   "%s", _("malformed session_uuid element in migration data"));
+goto cleanup;
+}
+ret = 0;
+
+ cleanup:
+xmlXPathFreeContext(ctx);
+xmlFreeDoc(doc);
+VIR_FREE(tmp);
+
+return ret;
+}
+
+static int
+vzParseCookie2(const char *xml, unsigned char *domain_uuid)
+{
+xmlDocPtr doc = NULL;
+xmlXPathContextPtr ctx = NULL;
+char *tmp = NULL;
+int ret = -1;
+if (!(doc = virXMLParseStringCtxt(xml, _("(_migration_cookie)"), &ctx)))
+goto cleanup;
+
+if (!(tmp = virXPathString("string(./domain_uuid[1])", ctx))) {
+virReportError(VIR_ERR_INTERNAL_ERROR,
+   "%s", _("missing domain_uuid element in migration data"));
+goto cleanup;
+}
+if (virUUIDParse(tmp, domain_uuid) < 0) {
+virReportError(VIR_ERR_INTERNAL_ERROR,
+   "%s", _("malformed domain_uuid element in migration data"));
+goto cleanup;
+}
+ret = 0;
+
+ cleanup:
+xmlXPathFreeContext(ctx);
+xmlFreeDoc(doc);
+VIR_FREE(tmp);
+
+return ret;
+}
+
 static int
 vzDomainMigratePrepare3Params(virConnectPtr dconn,
   virTypedParameterPtr params ATTRIBUTE_UNUSED,
@@ -1425,13 +1520,11 @@ vzDomainMigratePrepare3Params(virConnectPtr dconn,
 {
 vzConnPtr privconn = dconn->privateData;
 int ret = -1;
-char uuidstr[VIR_UUID_STRING_BUFLEN];
 
 *cookieout = NULL;
 *uri_out = NULL;
 
-virUUIDFormat(privconn->session_uuid, uuidstr);
-if (VIR_STRDUP(*cookieout, uuidstr) < 0)
+if (!(*cookieout = vzFormatCookie1(privconn->session_uuid)))
 goto cleanup;
 *cookieoutlen = strlen(*cookieout) + 1;
 
@@ -1465,7 +1558,6 @@ vzDomainMigratePerform3Params(virDomainPtr domain,
 virDomainObjPtr dom = NULL;
 const char *uri = NULL;
 unsigned char session_uuid[VIR_UUID_BUFLEN];
-char uuidstr[VIR_UUID_STRING_BUFLEN];
 
 *cookieout = NULL;
 
@@ -1483,14 +1575,13 @@ vzDomainMigratePerform3Params(virDomainPtr domain,
 goto cleanup;
 }
 
-if (virUUIDParse(cookiein, session_uuid) < 0)
+if (vzParseCookie1(cookiein, session_uuid) < 0)
 goto cleanup;
 
 if (prlsdkMigrate(dom, uri, session_uuid) < 0)
 goto cleanup;
 
-virUUIDFormat(domain->uuid, uuidstr);
-if (VIR_STRDUP(*cookieout, uuidstr) < 0)
+if (!(*cookieout = vzFormatCookie2(dom->def->uuid)))
 goto cleanup;
 *cookieoutlen = strlen(*cookieout) + 1;
 
@@ -1539,12 +1630,8 @@ vzDomainMigrateFinish3Params(virConnectPtr dconn,
 if (cancelled)
 return NULL;
 
-if (virUUIDParse(cookiein, domain_uuid) < 0) {
-virReportError(VIR_ERR_INTERNAL_ERROR,
-   _("Could not parse UUID from string '%s'"),
-   cookiein);
+if (vzParseCookie2(cookiein, d

[libvirt] [PATCH] qemuProcessHandleMigrationStatus: Update migration status on ASYNC_JOB_SAVE too

2015-07-13 Thread Michal Privoznik
After Jirka's migration patches libvirt is listening on migration
events from qemu instead of actively polling on the monitor. There is,
however, a little regression (introduced in 6d2edb6a42d0d41). The
problem is, the current status of migration job is updated in
qemuProcessHandleMigrationStatus if and only if migration job was
started. But we have a separate job type for saving a domain into a
file: QEMU_ASYNC_JOB_SAVE. Therefore, since this job is not strictly a
migration job, internal state was not updated and later checks failed:

  virsh # save fedora22 /tmp/fedora22_ble.save
  error: Failed to save domain fedora22 to /tmp/fedora22_ble.save
  error: operation failed: domain save job: is not active

Signed-off-by: Michal Privoznik 
---
 src/qemu/qemu_process.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 2a529f7..16d39b2 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -1521,29 +1521,30 @@ static int
 qemuProcessHandleMigrationStatus(qemuMonitorPtr mon ATTRIBUTE_UNUSED,
  virDomainObjPtr vm,
  int status,
  void *opaque ATTRIBUTE_UNUSED)
 {
 qemuDomainObjPrivatePtr priv;
 
 virObjectLock(vm);
 
 VIR_DEBUG("Migration of domain %p %s changed state to %s",
   vm, vm->def->name,
   qemuMonitorMigrationStatusTypeToString(status));
 
 priv = vm->privateData;
 if (priv->job.asyncJob != QEMU_ASYNC_JOB_MIGRATION_OUT &&
-priv->job.asyncJob != QEMU_ASYNC_JOB_MIGRATION_IN) {
+priv->job.asyncJob != QEMU_ASYNC_JOB_MIGRATION_IN &&
+priv->job.asyncJob != QEMU_ASYNC_JOB_SAVE) {
 VIR_DEBUG("got MIGRATION event without a migration job");
 goto cleanup;
 }
 
 priv->job.current->status.status = status;
 virDomainObjBroadcast(vm);
 
  cleanup:
 virObjectUnlock(vm);
 return 0;
 }
 
 
-- 
2.3.6
