[libvirt] [PATCH v5 0/2] List only online cpus for vcpupin/emulatorpin when vcpu placement static

Currently when the vcpu placement is static and cpuset is not specified, CPU Affinity shows 0..CPUMAX. This patchset results in only online CPUs being displayed under CPU Affinity on Linux.

Fixes the following bug:

virsh dumpxml Fedora

<domain type='kvm' id='4'>
  <name>Fedora</name>
  <uuid>aecf3e5e-6f9a-42a3-9d6a-223a75569a66</uuid>
  <maxMemory slots='32' unit='KiB'>3145728</maxMemory>
  <memory unit='KiB'>524288</memory>
  <currentMemory unit='KiB'>524288</currentMemory>
  <vcpu placement='static' current='8'>160</vcpu>
  <resource>
    <partition>/machine</partition>
  </resource>
  .....................
  .......................
  .........................
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </memballoon>
  </devices>
  <seclabel type='dynamic' model='dac' relabel='yes'>
    <label>+0:+0</label>
    <imagelabel>+0:+0</imagelabel>
  </seclabel>
</domain>

lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                8
On-line CPU(s) list:   0-2,4-7
Off-line CPU(s) list:  3
Thread(s) per core:    1
Core(s) per socket:    7
Socket(s):             1
..........
..........
NUMA node0 CPU(s):     0-2,4-7
NUMA node1 CPU(s):

cat /sys/devices/system/cpu/online
0-2,4-7

Before Patch

virsh vcpupin Fedora

VCPU: CPU Affinity
----------------------------------
   0: 0-7
   1: 0-7
 ...
 ...
 158: 0-7
 159: 0-7

virsh emulatorpin Fedora

emulator: CPU Affinity
----------------------------------
       *: 0-7

After Patch

virsh vcpupin Fedora

VCPU: CPU Affinity
----------------------------------
   0: 0-2,4-7
   1: 0-2,4-7
 ...
 ...
 158: 0-2,4-7
 159: 0-2,4-7

virsh emulatorpin Fedora

emulator: CPU Affinity
----------------------------------
       *: 0-2,4-7

Nitesh Konkar (2):
  conf: List only online cpus for virsh vcpupin
  conf: List only online cpus for virsh emulatorpin

 src/conf/domain_conf.c | 6 ++++++
 src/qemu/qemu_driver.c | 5 +++++
 2 files changed, 11 insertions(+)

--
2.1.0
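For context, the online set behind the new behaviour is the range list read from /sys/devices/system/cpu/online ("0-2,4-7" above). The following standalone sketch is illustration only, not libvirt's virHostCPUGetOnlineBitmap() implementation; it merely shows how such a range list maps onto a per-CPU bit set:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAX_CPUS 4096

/* Parse a Linux CPU range list such as "0-2,4-7" into a bit set. */
static int
parse_cpu_list(const char *str, unsigned char *bits)   /* MAX_CPUS / 8 bytes */
{
    memset(bits, 0, MAX_CPUS / 8);

    while (*str && *str != '\n') {
        char *end;
        long start = strtol(str, &end, 10);
        long last = start;

        if (end == str || start < 0 || start >= MAX_CPUS)
            return -1;

        if (*end == '-') {                     /* a range such as "4-7" */
            str = end + 1;
            last = strtol(str, &end, 10);
            if (end == str || last < start || last >= MAX_CPUS)
                return -1;
        }

        for (long i = start; i <= last; i++)   /* mark every CPU in the range online */
            bits[i / 8] |= 1U << (i % 8);

        str = end;
        if (*str == ',')
            str++;
    }
    return 0;
}

int main(void)
{
    unsigned char bits[MAX_CPUS / 8];

    if (parse_cpu_list("0-2,4-7", bits) == 0) {
        for (int i = 0; i < 8; i++)
            printf("cpu%d: %s\n", i,
                   (bits[i / 8] & (1U << (i % 8))) ? "online" : "offline");
    }
    return 0;
}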

Currently when the vcpu placement is static and cpuset is not specified, CPU Affinity under virsh vcpupin shows 0..CPUMAX. This patch results in only online CPUs being displayed under CPU Affinity on Linux.

Signed-off-by: Nitesh Konkar <nitkon12@linux.vnet.ibm.com>
---
 src/conf/domain_conf.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index e7517d9..f25cf63 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -56,6 +56,7 @@
 #include "virstring.h"
 #include "virnetdev.h"
 #include "virhostdev.h"
+#include "virhostcpu.h"

 #define VIR_FROM_THIS VIR_FROM_DOMAIN

@@ -1549,10 +1550,15 @@ virDomainDefGetVcpuPinInfoHelper(virDomainDefPtr def,
     if (hostcpus < 0)
         return -1;

+#ifdef __linux__
+    if (!(allcpumap = virHostCPUGetOnlineBitmap()))
+        return -1;
+#else
     if (!(allcpumap = virBitmapNew(hostcpus)))
         return -1;

     virBitmapSetAll(allcpumap);
+#endif

     for (i = 0; i < maxvcpus && i < ncpumaps; i++) {
         virDomainVcpuDefPtr vcpu = virDomainDefGetVcpu(def, i);
--
2.1.0

Currently when the vcpu placement is static and cpuset is not specified, CPU Affinity under virsh emulatorpin shows 0..CPUMAX. This patch results in only online CPUs being displayed under CPU Affinity on Linux.

Signed-off-by: Nitesh Konkar <nitkon12@linux.vnet.ibm.com>
---
 src/qemu/qemu_driver.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index fdfe912..e69d92d 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -5435,9 +5435,14 @@ qemuDomainGetEmulatorPinInfo(virDomainPtr dom,
                autoCpuset) {
         cpumask = autoCpuset;
     } else {
+#ifdef __linux__
+        if (!(bitmap = virHostCPUGetOnlineBitmap()))
+            goto cleanup;
+#else
         if (!(bitmap = virBitmapNew(hostcpus)))
             goto cleanup;
         virBitmapSetAll(bitmap);
+#endif
         cpumask = bitmap;
     }
--
2.1.0

On Fri, Nov 25, 2016 at 14:53:07 +0530, Nitesh Konkar wrote:
Currently when the vcpu placement is static and cpuset is not specified, CPU Affinity under virsh emulatorpin shows 0..CPUMAX. This patch results in only online CPUs being displayed under CPU Affinity on Linux.
Signed-off-by: Nitesh Konkar <nitkon12@linux.vnet.ibm.com>
---
 src/qemu/qemu_driver.c | 5 +++++
 1 file changed, 5 insertions(+)
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index fdfe912..e69d92d 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -5435,9 +5435,14 @@ qemuDomainGetEmulatorPinInfo(virDomainPtr dom,
                autoCpuset) {
         cpumask = autoCpuset;
     } else {
+#ifdef __linux__
+        if (!(bitmap = virHostCPUGetOnlineBitmap()))
+            goto cleanup;
+#else
         if (!(bitmap = virBitmapNew(hostcpus)))
             goto cleanup;
         virBitmapSetAll(bitmap);
+#endif
Adding conditionally compiled code twice does not look good. Please extract the above into a helper function and use it in the two places rather than having two blocks of conditional code.

Also, I'm not entirely persuaded that this is correct.

When you are getting information for the offline VM the configuration may change until you start the VM and thus the data gathered here will not be valid (e.g. if you turn the cpus back on).

For running VMs the data depends on the state when the VM was started and at least for vcpu threads it is gathered back using virProcessGetAffinity and thus should be reported correctly.

Additionally, we historically treat the "all bits set" cpu map as meaning that there is no pinning configured; this change would alter that behavior. When the VM is started, the real cpu map should be returned, but for the config time I think the current state is correct and your change would be incorrect.

Peter
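A minimal sketch of the kind of helper suggested above, written against libvirt's internal virBitmap/virHostCPU APIs as used in the patch; the name virHostCPUGetAvailableCPUsMap is made up here and this is not a tested patch:

/* Assumes "virbitmap.h" and "virhostcpu.h" are already included. */
static virBitmapPtr
virHostCPUGetAvailableCPUsMap(int hostcpus)
{
    virBitmapPtr cpumap = NULL;

#ifdef __linux__
    /* Report only the CPUs that are currently online. */
    if (!(cpumap = virHostCPUGetOnlineBitmap()))
        return NULL;
#else
    /* No online map available: fall back to all host CPUs. */
    if (!(cpumap = virBitmapNew(hostcpus)))
        return NULL;
    virBitmapSetAll(cpumap);
#endif

    return cpumap;
}

Both call sites would then reduce to a single check such as

    if (!(allcpumap = virHostCPUGetAvailableCPUsMap(hostcpus)))
        return -1;

with no conditional compilation at the callers.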

On 06.12.2016 12:33, Peter Krempa wrote:
On Fri, Nov 25, 2016 at 14:53:07 +0530, Nitesh Konkar wrote:
Currently when the vcpu placement is static and cpuset is not specified, CPU Affinity under virsh emulatorpin shows 0..CPUMAX. This patch results in only online CPUs being displayed under CPU Affinity on Linux.
Signed-off-by: Nitesh Konkar <nitkon12@linux.vnet.ibm.com>
---
 src/qemu/qemu_driver.c | 5 +++++
 1 file changed, 5 insertions(+)
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index fdfe912..e69d92d 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -5435,9 +5435,14 @@ qemuDomainGetEmulatorPinInfo(virDomainPtr dom,
                autoCpuset) {
         cpumask = autoCpuset;
     } else {
+#ifdef __linux__
+        if (!(bitmap = virHostCPUGetOnlineBitmap()))
+            goto cleanup;
+#else
         if (!(bitmap = virBitmapNew(hostcpus)))
             goto cleanup;
         virBitmapSetAll(bitmap);
+#endif
Adding conditionally compiled code twice does not look good. Please extract the above into a helper function and use it in the two places rather than having two blocks of conditional code.
Also, I'm not entirely persuaded that this is correct.
When you are getting information for the offline VM the configuration may change until you start the VM and thus the data gathered here will not be valid (e.g. if you turn the cpus back on).
For running VMs the data depends on the state when the VM was started and at least for vcpu threads it is gathered back using virProcessGetAffinity and thus should be reported correctly.
Additionally, we historically treat the "all bits set" cpu map as meaning that there is no pinning configured; this change would alter that behavior. When the VM is started, the real cpu map should be returned, but for the config time I think the current state is correct and your change would be incorrect.

I have no strong opinion on whether the CPU mask should match the host online mask for the defined case, but the "all bits set" map will change with the old code as well, if new CPUs are added via hotplug. I've already posted a patch for that purpose in https://www.redhat.com/archives/libvir-list/2016-November/msg01321.html

The crux is that we don't have any way to find out there's in fact no pinning in place for a process nor a way to "unpin" it.
Peter
--
Mit freundlichen Grüßen/Kind Regards
  Viktor Mihajlovski

IBM Deutschland Research & Development GmbH
Chairman of the Supervisory Board: Martina Köderitz
Management: Dirk Wittkopp
Registered office: Böblingen
Register court: Amtsgericht Stuttgart, HRB 243294

On Tue, Dec 06, 2016 at 13:01:48 +0100, Viktor Mihajlovski wrote:
On 06.12.2016 12:33, Peter Krempa wrote:
On Fri, Nov 25, 2016 at 14:53:07 +0530, Nitesh Konkar wrote:
[...]
Additionally, we historically treat the "all bits set" cpu map as meaning that there is no pinning configured; this change would alter that behavior. When the VM is started, the real cpu map should be returned, but for the config time I think the current state is correct and your change would be incorrect.

I have no strong opinion on whether the CPU mask should match the host online mask for the defined case, but the "all bits set" map will change with the old code as well, if new CPUs are added via hotplug.
CPU hotplug on the host is very uncommon and thus not tested very well.
The crux is that we don't have any way to find out there's in fact no pinning in place for a process nor a way to "unpin" it.
That is true indeed. It's not possible via the API. It's fully possible via XML though.

For unpinning I have an unfinished series adding a flag to remove pinning, since any other method is not reliable enough.

Finally, I think that the "all ones" bitmap should cover the full map the user sends to libvirt. This would differentiate it from the "pinned to current full set of cpus" case.
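To illustrate the convention at stake, a standalone sketch (not libvirt code): with the historical "all bits set means no pinning" reading, trimming the reported map to the online CPUs erases the signal that nothing was ever pinned.

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Returns true when every bit in the map is set, which has historically
 * been read as "no pinning configured". */
static bool
all_bits_set(const unsigned char *map, size_t nbits)
{
    for (size_t i = 0; i < nbits; i++) {
        if (!(map[i / 8] & (1U << (i % 8))))
            return false;   /* at least one CPU excluded => real pinning */
    }
    return true;
}

int main(void)
{
    /* 8 host CPUs; CPU 3 offline, so the online map is 0-2,4-7. */
    unsigned char full[1]   = { 0xff };   /* 1111 1111 */
    unsigned char online[1] = { 0xf7 };   /* 1111 0111 */

    printf("full map   -> read as unpinned? %s\n", all_bits_set(full, 8) ? "yes" : "no");
    printf("online map -> read as unpinned? %s\n", all_bits_set(online, 8) ? "yes" : "no");
    return 0;
}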

On 06.12.2016 13:10, Peter Krempa wrote:
On Tue, Dec 06, 2016 at 13:01:48 +0100, Viktor Mihajlovski wrote:
On 06.12.2016 12:33, Peter Krempa wrote:
On Fri, Nov 25, 2016 at 14:53:07 +0530, Nitesh Konkar wrote:
[...]
Additionally, we historically treat the "all bits set" cpu map as meaning that there is no pinning configured; this change would alter that behavior. When the VM is started, the real cpu map should be returned, but for the config time I think the current state is correct and your change would be incorrect.

I have no strong opinion on whether the CPU mask should match the host online mask for the defined case, but the "all bits set" map will change with the old code as well, if new CPUs are added via hotplug.
CPU hotplug on the host is very uncommon and thus not tested very well.
I can assure you that it is very common on Linux running on z Systems and also widely used by customers. With the patchset in https://www.redhat.com/archives/libvir-list/2016-November/msg01320.html it can also be reliably used with libvirt.
The crux is that we don't have any way to find out there's in fact no pinning in place for a process nor a way to "unpin" it.
That is true indeed. It's not possible via the API. It's fully possible via XML though.
For unpinning I have an unfinished series adding a flag to remove pinning, since any other method is not reliable enough.
Finally I think that the "all ones" bitmap should cover the full map the user sends to libvirt. This would differentiate it from the "pinned to current full set of cpus" case.
--
Mit freundlichen Grüßen/Kind Regards
  Viktor Mihajlovski

IBM Deutschland Research & Development GmbH
Chairman of the Supervisory Board: Martina Köderitz
Management: Dirk Wittkopp
Registered office: Böblingen
Register court: Amtsgericht Stuttgart, HRB 243294

Polite ping.

Thanks,
Nitesh.

On Fri, Nov 25, 2016 at 2:53 PM, Nitesh Konkar <niteshkonkar.libvirt@gmail.com> wrote:
Currently when the vcpu placement is static and cpuset is not specified, CPU Affinity shows 0..CPUMAX. This patchset results in only online CPUs being displayed under CPU Affinity on Linux.
Fixes the following Bug:
virsh dumpxml Fedora
<domain type='kvm' id='4'>
  <name>Fedora</name>
  <uuid>aecf3e5e-6f9a-42a3-9d6a-223a75569a66</uuid>
  <maxMemory slots='32' unit='KiB'>3145728</maxMemory>
  <memory unit='KiB'>524288</memory>
  <currentMemory unit='KiB'>524288</currentMemory>
  <vcpu placement='static' current='8'>160</vcpu>
  <resource>
    <partition>/machine</partition>
  </resource>
  .....................
  .......................
  .........................
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </memballoon>
  </devices>
  <seclabel type='dynamic' model='dac' relabel='yes'>
    <label>+0:+0</label>
    <imagelabel>+0:+0</imagelabel>
  </seclabel>
</domain>
lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                8
On-line CPU(s) list:   0-2,4-7
Off-line CPU(s) list:  3
Thread(s) per core:    1
Core(s) per socket:    7
Socket(s):             1
..........
..........
NUMA node0 CPU(s):     0-2,4-7
NUMA node1 CPU(s):
cat /sys/devices/system/cpu/online
0-2,4-7
Before Patch
virsh vcpupin Fedora
VCPU: CPU Affinity
----------------------------------
   0: 0-7
   1: 0-7
 ...
 ...
 158: 0-7
 159: 0-7
virsh emulatorpin Fedora

emulator: CPU Affinity
----------------------------------
       *: 0-7
After Patch
virsh vcpupin Fedora
VCPU: CPU Affinity
----------------------------------
   0: 0-2,4-7
   1: 0-2,4-7
 ...
 ...
 158: 0-2,4-7
 159: 0-2,4-7
virsh emulatorpin Fedora

emulator: CPU Affinity
----------------------------------
       *: 0-2,4-7
Nitesh Konkar (2):
  conf: List only online cpus for virsh vcpupin
  conf: List only online cpus for virsh emulatorpin
 src/conf/domain_conf.c | 6 ++++++
 src/qemu/qemu_driver.c | 5 +++++
 2 files changed, 11 insertions(+)
--
2.1.0
Participants (3):
- Nitesh Konkar
- Peter Krempa
- Viktor Mihajlovski