On 24.11.2016 14:26, Martin Kletzander wrote:
> On Wed, Nov 23, 2016 at 12:03:22PM -0500, John Ferlan wrote:
>> 
>> 
>> On 11/04/2016 04:01 AM, Viktor Mihajlovski wrote:
>>> Running VMs couldn't use newly hot-plugged host CPUs even if
>>> the VMs had no CPU pinning defined AND the cpuset controller
>>> was disabled in the libvirt qemu configuration.
>> 
>> Add blank lines between paragraphs - just makes it easier to
>> read.
>> 
>>> This was because in this case the process affinity was set by
>>> libvirt to all currently present host CPUs, in order to avoid
>>> situations where libvirtd was deliberately running on a CPU
>>> subset and thus the spawned VMs would be involuntarily
>>> restricted to the CPU subset inherited from libvirtd. That
>>> however prevents new host CPUs from being utilized when they
>>> show up. With this change we will NOT set the VM's affinity
>>> mask if it matches the online host CPU mask.
>>> 
>>> There's still the chance that for some reason the deliberately
>>> chosen libvirtd affinity matches the online host CPU mask by
>>> accident. In this case the behavior remains as it was before
>>> (CPUs offline while setting the affinity will not be used if
>>> they show up later on).
>>> 
>>> Signed-off-by: Viktor Mihajlovski <mihaj...@linux.vnet.ibm.com>
>>> Tested-by: Matthew Rosato <mjros...@linux.vnet.ibm.com>
>>> ---
>>>  src/qemu/qemu_process.c | 12 ++++++++++++
>>>  1 file changed, 12 insertions(+)
>>>
>>> diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
>>> index 1b67aee..76f9210 100644
>>> --- a/src/qemu/qemu_process.c
>>> +++ b/src/qemu/qemu_process.c
>>> @@ -2202,6 +2202,7 @@ qemuProcessInitCpuAffinity(virDomainObjPtr vm)
>>>      int ret = -1;
>>>      virBitmapPtr cpumap = NULL;
>>>      virBitmapPtr cpumapToSet = NULL;
>>> +    virBitmapPtr hostcpumap = NULL;
>>>      qemuDomainObjPrivatePtr priv = vm->privateData;
>>> 
>>>      if (!vm->pid) {
>>> @@ -2223,6 +2224,16 @@ qemuProcessInitCpuAffinity(virDomainObjPtr vm)
>>>               * the spawned QEMU instance to all pCPUs if no map is given in
>>>               * its config file */
>>>              int hostcpus;
>>> +            hostcpumap = virHostCPUGetOnlineBitmap();
>>> +            cpumap = virProcessGetAffinity(vm->pid);
>> 
>> Wouldn't this set 'cpumap' to something that would shortly be
>> overwritten by virBitmapNew() if we don't jump to cleanup in
>> this patch?
>> 
> 
> Sure, that would need cleaning.
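
Right. For illustration, one possible shape of that cleanup (just a
sketch against the hunk above, untested): free the map before the
existing code reassigns 'cpumap', so neither path leaks.

    if (hostcpumap && cpumap && virBitmapEqual(hostcpumap, cpumap)) {
        /* exit early; the cleanup label frees both maps */
        ret = 0;
        goto cleanup;
    }

    /* fall through: release the affinity map before 'cpumap' is
     * reassigned via virBitmapNew() below, so it doesn't leak */
    virBitmapFree(cpumap);
    cpumap = NULL;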
> 
>> Beyond that - I'll let someone with more detailed knowledge of 
>> SetAffinity nuances decide whether avoiding the call is proper.
>> 
> 
> Well, that's a long-standing kernel "bug" (depending on your point
> of view) and this is one way of fixing a lot of the issues.  We
> still have no (nice) way to fix more than half of the problems
> without the kernel's helping hand, but at least we can have this
> working.
> 
>> 
>>> +
>>> +            if (hostcpumap && cpumap && virBitmapEqual(hostcpumap, cpumap)) {
>>> +                /* we're using all available CPUs, no reason to set
>>> +                 * mask. If libvirtd is running without explicit
>>> +                 * affinity, we can use hotplugged CPUs for this VM */
>>> +                ret = 0;
>>> +                goto cleanup;
>>> +            }
>>> 
> 
> However, I think we want something else here.  Firstly, this will
> report an error for guests on anything other than Linux, as
> virHostCPUGetOnlineBitmap() is implemented only there; other
> platforms just report an error.  Also, the code as it is now sets
> the affinity to all CPUs on the system (even those that are
> offline) and it seems to work (for some definition of "work",
> i.e. we don't get an error from sched_setaffinity()).  I can't
> seem to wrap my head around why that wouldn't work, then.
Well, it doesn't work :-). You can try out the following (on a
4-CPU system).
---
$ sudo -i bash
$ taskset -p $$
pid 25380's current affinity mask: f
$ echo 0 > /sys/devices/system/cpu/cpu1/online
$ taskset -p $$
pid 25380's current affinity mask: d
# we can see that the affinity mask reflects the offline CPU
$ bash
$ taskset -p $$
pid 25956's current affinity mask: d
$ taskset -p -c 0-3 $$
pid 25956's current affinity list: 0,2,3
pid 25956's new affinity list: 0,2,3
# the above command attempts to set the affinity to all CPUs
# including the offline one
$ taskset -p $$
pid 25956's current affinity mask: d
# so, it seems to have worked, but
$ echo 1 > /sys/devices/system/cpu/cpu1/online
$ taskset -p $$
pid 25956's current affinity mask: d
# the process won't get CPU #1 back!
$ exit
$ taskset -p $$
pid 25380's current affinity mask: f
# the parent can use the hot-plugged CPU because it didn't change
# its affinity - and so would have the child, had it not done so
---
If you substitute the "outer" bash with libvirtd and the "inner"
bash with QEMU, both the mess and the remedy become obvious.
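
The same round trip can be reproduced without taskset. Below is a
minimal standalone sketch (just an illustration, not libvirt code)
using the glibc sched_setaffinity()/sched_getaffinity() wrappers;
offline a CPU via sysfs first, as above, and its bit silently
disappears from the mask you asked for:
---
/* affinity-demo.c: request CPUs 0-3, then read back what the kernel
 * actually kept.  Build with: cc -o affinity-demo affinity-demo.c */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    cpu_set_t mask;
    int i;

    CPU_ZERO(&mask);
    for (i = 0; i < 4; i++)      /* same as "taskset -p -c 0-3 $$" */
        CPU_SET(i, &mask);

    if (sched_setaffinity(0, sizeof(mask), &mask) < 0) {
        perror("sched_setaffinity");
        return 1;
    }

    /* the kernel silently drops the bits of offline CPUs and will
     * not restore them when the CPUs come back online */
    if (sched_getaffinity(0, sizeof(mask), &mask) < 0) {
        perror("sched_getaffinity");
        return 1;
    }

    printf("pid %ld effective affinity:", (long) getpid());
    for (i = 0; i < 4; i++)
        if (CPU_ISSET(i, &mask))
            printf(" %d", i);
    printf("\n");
    return 0;
}
---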

I'll send out a patch with an attempt at better wording...

> 
>>>              /* setaffinity fails if you set bits for CPUs which
>>>               * aren't present, so we have to limit ourselves */
>>> @@ -2248,6 +2259,7 @@ qemuProcessInitCpuAffinity(virDomainObjPtr vm)
>>>
>>>  cleanup:
>>>      virBitmapFree(cpumap);
>>> +    virBitmapFree(hostcpumap);
>>>      return ret;
>>>  }
>>> 
>>> 
>> 


-- 

Kind Regards
   Viktor Mihajlovski

IBM Deutschland Research & Development GmbH
