On Wed, Nov 23, 2016 at 12:03:22PM -0500, John Ferlan wrote:


On 11/04/2016 04:01 AM, Viktor Mihajlovski wrote:
Running VMs couldn't use newly hot plugged host CPUs even if the VMs
had no CPU pinning defined AND the cpuset controller was disabled in
the libvirt qemu configuration.

Add blank lines between paragraphs - just makes it easier to read.

This was because in this case the process affinity was set by libvirt
to all currently present host CPUs in order to avoid situations where
libvirtd was deliberately running on a CPU subset and thus the spawned
VMs would be involuntarily restricted to the CPU subset inherited from
libvirtd.
That however prevents new host CPUs from being utilized when they show
up. With this change we will NOT set the VM's affinity mask if it
matches the online host CPU mask.

There's still the chance that for some reason the deliberately chosen
libvirtd affinity matches the online host CPU mask by accident. In this
case the behavior remains as it was before (CPUs offline while setting
the affinity will not be used if they show up later on).

Signed-off-by: Viktor Mihajlovski <mihaj...@linux.vnet.ibm.com>
Tested-by: Matthew Rosato <mjros...@linux.vnet.ibm.com>
---
 src/qemu/qemu_process.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 1b67aee..76f9210 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -2202,6 +2202,7 @@ qemuProcessInitCpuAffinity(virDomainObjPtr vm)
     int ret = -1;
     virBitmapPtr cpumap = NULL;
     virBitmapPtr cpumapToSet = NULL;
+    virBitmapPtr hostcpumap = NULL;
     qemuDomainObjPrivatePtr priv = vm->privateData;

     if (!vm->pid) {
@@ -2223,6 +2224,16 @@ qemuProcessInitCpuAffinity(virDomainObjPtr vm)
              * the spawned QEMU instance to all pCPUs if no map is given in
              * its config file */
             int hostcpus;
+            hostcpumap = virHostCPUGetOnlineBitmap();
+            cpumap = virProcessGetAffinity(vm->pid);

Wouldn't this set 'cpumap' to something that would shortly be
overwritten by virBitmapNew if we don't jump to cleanup in this patch?


Sure, that would need cleaning.

Beyond that - I'll let someone with more detailed knowledge of
SetAffinity nuances decide whether avoiding the call is proper.
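One way to address the overwrite John points out (just a sketch against this patch, not tested; the final placement would depend on the surrounding virBitmapNew code) would be to drop the stale map right after the new check, before the existing code re-assigns cpumap:

```
             if (hostcpumap && cpumap && virBitmapEqual(hostcpumap, cpumap)) {
                 /* we're using all available CPUs, no reason to set
                  * mask. If libvirtd is running without explicit
                  * affinity, we can use hotplugged CPUs for this VM */
                 ret = 0;
                 goto cleanup;
             }

             /* the affinity map is no longer needed; free it so the
              * virBitmapNew() below doesn't leak it */
             virBitmapFree(cpumap);
             cpumap = NULL;
```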


Well, that's a long-standing kernel "bug" (depending on what point you
look at it from) and this is one way of fixing a lot of the issues.  We
still have no (nice) way to fix more than half of the problems without
the kernel's helping hand, but at least we can have this working.


+
+            if (hostcpumap && cpumap && virBitmapEqual(hostcpumap, cpumap)) {
+                /* we're using all available CPUs, no reason to set
+                 * mask. If libvirtd is running without explicit
+                 * affinity, we can use hotplugged CPUs for this VM */
+                ret = 0;
+                goto cleanup;
+            }


However, I think we want something else here.  Firstly, this will report
an error for guests on anything other than Linux, as
virHostCPUGetOnlineBitmap() is implemented only there; other platforms
just report an error.  Also the code, as it is now, sets the affinity to
all CPUs on the system (even those that are offline) and it seems to
work (for some definition of "work", i.e. we don't get an error from
sched_setaffinity()).  I can't seem to wrap my head around why that
wouldn't work then.

             /* setaffinity fails if you set bits for CPUs which
              * aren't present, so we have to limit ourselves */
@@ -2248,6 +2259,7 @@ qemuProcessInitCpuAffinity(virDomainObjPtr vm)

  cleanup:
     virBitmapFree(cpumap);
+    virBitmapFree(hostcpumap);
     return ret;
 }



--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list
