If the cpuset cgroup controller is disabled in /etc/libvirt/qemu.conf,
QEMU virtual machines can in principle use all host CPUs, including
hot plugged ones, as long as they have no explicit CPU affinity defined.
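For reference, the controller list is configured via the
cgroup_controllers option in /etc/libvirt/qemu.conf; leaving "cpuset"
out of the list disables the controller, e.g.:

    # "cpuset" deliberately omitted from the controller list
    cgroup_controllers = [ "cpu", "devices", "memory", "blkio", "cpuacct" ]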
However, there is libvirt code that is supposed to handle the situation
where the libvirt daemon itself is not running on all host CPUs: the
code in qemuProcessInitCpuAffinity attempts to set an affinity mask
including all defined host CPUs. Unfortunately, the kernel silently
drops offline CPUs from the requested mask, so the resulting affinity
mask for the process will not contain them. See also the
sched_setaffinity(2) man page.
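The effect can be reproduced with a small standalone program (an
illustration only, not libvirt code): it requests affinity to every
defined host CPU and reads back what the kernel actually granted.

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        cpu_set_t mask;
        long i;
        long conf = sysconf(_SC_NPROCESSORS_CONF); /* all defined CPUs */
        long onln = sysconf(_SC_NPROCESSORS_ONLN); /* currently online */

        CPU_ZERO(&mask);
        for (i = 0; i < conf && i < CPU_SETSIZE; i++)
            CPU_SET(i, &mask);            /* request every defined CPU */

        if (sched_setaffinity(0, sizeof(mask), &mask) < 0) {
            perror("sched_setaffinity");
            return 1;
        }
        if (sched_getaffinity(0, sizeof(mask), &mask) < 0) {
            perror("sched_getaffinity");
            return 1;
        }

        /* with offline CPUs, "granted" matches the online count, not
         * the defined count, and won't grow on later CPU onlining */
        printf("defined %ld, online %ld, granted %d\n",
               conf, onln, CPU_COUNT(&mask));
        return 0;
    }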
That means that even if those CPUs come online again, they won't be
used by the QEMU process anymore. The same is true for newly hot
plugged CPUs. So we are effectively preventing QEMU from using all
processors instead of enabling it to use them.
It only makes sense to set the QEMU process affinity if doing so can
actually grow the set of usable CPUs, i.e. if the current process
affinity is a proper subset of the online host CPUs.
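A minimal sketch of that decision (illustrative only; the patch below
uses libvirt's virProcessGetAffinity, virHostCPUGetOnlineBitmap and
virBitmapEqual instead, and the helper name here is made up). It
assumes sched_getaffinity(2) reports only online CPUs, so comparing
its population count with the online CPU count approximates the
bitmap equality test:

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <unistd.h>

    /* true if the current affinity already covers all online CPUs */
    static bool affinity_covers_all_online(void)
    {
        cpu_set_t mask;

        if (sched_getaffinity(0, sizeof(mask), &mask) < 0)
            return false;
        return CPU_COUNT(&mask) == sysconf(_SC_NPROCESSORS_ONLN);
    }

    int main(void)
    {
        if (affinity_covers_all_online())
            printf("leave affinity alone, hotplugged CPUs stay usable\n");
        else
            printf("affinity is a proper subset, widen it\n");
        return 0;
    }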
There is still the chance that a deliberately chosen libvirtd affinity
happens to match the online host CPU mask. In that case the behavior
remains as it was before: CPUs that were offline while the affinity was
set will not be used if they show up later on.
Signed-off-by: Viktor Mihajlovski <mihajlov(a)linux.vnet.ibm.com>
Tested-by: Matthew Rosato <mjrosato(a)linux.vnet.ibm.com>
---
src/qemu/qemu_process.c | 33 ++++++++++++++++++++++++---------
1 file changed, 24 insertions(+), 9 deletions(-)
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 4758c49..9d1bfa4 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -2202,6 +2202,7 @@ qemuProcessInitCpuAffinity(virDomainObjPtr vm)
     int ret = -1;
     virBitmapPtr cpumap = NULL;
     virBitmapPtr cpumapToSet = NULL;
+    virBitmapPtr hostcpumap = NULL;
     qemuDomainObjPrivatePtr priv = vm->privateData;
 
     if (!vm->pid) {
@@ -2223,21 +2224,34 @@ qemuProcessInitCpuAffinity(virDomainObjPtr vm)
              * the spawned QEMU instance to all pCPUs if no map is given in
              * its config file */
             int hostcpus;
+            cpumap = virProcessGetAffinity(vm->pid);
 
-            /* setaffinity fails if you set bits for CPUs which
-             * aren't present, so we have to limit ourselves */
-            if ((hostcpus = virHostCPUGetCount()) < 0)
+            if (virHostCPUHasBitmap())
+                hostcpumap = virHostCPUGetOnlineBitmap();
+
+            if (hostcpumap && cpumap && virBitmapEqual(hostcpumap, cpumap)) {
+                /* we're using all available CPUs, no reason to set
+                 * mask. If libvirtd is running without explicit
+                 * affinity, we can use hotplugged CPUs for this VM */
+                ret = 0;
                 goto cleanup;
+            } else {
+                /* setaffinity fails if you set bits for CPUs which
+                 * aren't present, so we have to limit ourselves */
+                if ((hostcpus = virHostCPUGetCount()) < 0)
+                    goto cleanup;
 
-            if (hostcpus > QEMUD_CPUMASK_LEN)
-                hostcpus = QEMUD_CPUMASK_LEN;
+                if (hostcpus > QEMUD_CPUMASK_LEN)
+                    hostcpus = QEMUD_CPUMASK_LEN;
 
-            if (!(cpumap = virBitmapNew(hostcpus)))
-                goto cleanup;
+                virBitmapFree(cpumap);
+                if (!(cpumap = virBitmapNew(hostcpus)))
+                    goto cleanup;
 
-            virBitmapSetAll(cpumap);
+                virBitmapSetAll(cpumap);
 
-            cpumapToSet = cpumap;
+                cpumapToSet = cpumap;
+            }
         }
     }
@@ -2248,6 +2262,7 @@ qemuProcessInitCpuAffinity(virDomainObjPtr vm)
 
  cleanup:
     virBitmapFree(cpumap);
+    virBitmapFree(hostcpumap);
     return ret;
 }
--
1.9.1