[libvirt] [PATCH v2] qemu: bulk stats: add pcpu placement information

This patch adds information about the physical CPU placement of virtual
CPUs to the bulk stats output. This is the only difference in output
compared to the virDomainGetVcpus() API. Management software, like
oVirt, needs this information to properly manage NUMA configurations.

Signed-off-by: Francesco Romani <fromani@redhat.com>
---
 src/libvirt-domain.c   | 2 ++
 src/qemu/qemu_driver.c | 9 +++++++++
 2 files changed, 11 insertions(+)

diff --git a/src/libvirt-domain.c b/src/libvirt-domain.c
index cb76d8c..e84f6a8 100644
--- a/src/libvirt-domain.c
+++ b/src/libvirt-domain.c
@@ -10888,6 +10888,8 @@ virConnectGetDomainCapabilities(virConnectPtr conn,
  *                      from virVcpuState enum.
  * "vcpu.<num>.time" - virtual cpu time spent by virtual CPU <num>
  *                     as unsigned long long.
+ * "vcpu.<num>.physical" - real CPU number on which virtual CPU <num> is
+ *                         running, as int. -1 if offline.
  *
  * VIR_DOMAIN_STATS_INTERFACE: Return network interface statistics.
  * The typed parameter keys are in this format:
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 830fca7..b62cabf 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -18348,6 +18348,15 @@ qemuDomainGetStatsVcpu(virQEMUDriverPtr driver ATTRIBUTE_UNUSED,
                                         param_name,
                                         cpuinfo[i].cpuTime) < 0)
             goto cleanup;
+
+        snprintf(param_name, VIR_TYPED_PARAM_FIELD_LENGTH,
+                 "vcpu.%zu.physical", i);
+        if (virTypedParamsAddInt(&record->params,
+                                 &record->nparams,
+                                 maxparams,
+                                 param_name,
+                                 cpuinfo[i].cpu) < 0)
+            goto cleanup;
     }

     ret = 0;
-- 
1.9.3

On 12/11/14 08:43, Francesco Romani wrote:
> This patch adds information about the physical CPU placement of virtual
> CPUs to the bulk stats output. This is the only difference in output
> compared to the virDomainGetVcpus() API. Management software, like
> oVirt, needs this information to properly manage NUMA configurations.

Are you sure that you are getting what you expect? When this stats group
was first implemented I asked not to include this stat, as it only shows
the host CPU id where the guest CPU happens to be running at that precise
moment. The problem is that usual configurations don't map the CPUs in a
1:1 fashion, but rather allow a specific guest CPU to run on a subset of
host CPUs according to the scheduler's decisions. That means the stat
might oscillate within the set the guest vCPU is pinned to.

Could you please share your use case for this one? I'm curious to see
whether you have some real use for such data.

> Signed-off-by: Francesco Romani <fromani@redhat.com>
> ---
>  src/libvirt-domain.c   | 2 ++
>  src/qemu/qemu_driver.c | 9 +++++++++
>  2 files changed, 11 insertions(+)
>
> diff --git a/src/libvirt-domain.c b/src/libvirt-domain.c
> index cb76d8c..e84f6a8 100644
> --- a/src/libvirt-domain.c
> +++ b/src/libvirt-domain.c
> @@ -10888,6 +10888,8 @@ virConnectGetDomainCapabilities(virConnectPtr conn,
>   *                      from virVcpuState enum.
>   * "vcpu.<num>.time" - virtual cpu time spent by virtual CPU <num>
>   *                     as unsigned long long.
> + * "vcpu.<num>.physical" - real CPU number on which virtual CPU <num> is
> + *                         running, as int. -1 if offline.
>   *
>   * VIR_DOMAIN_STATS_INTERFACE: Return network interface statistics.
>   * The typed parameter keys are in this format:
> diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
> index 830fca7..b62cabf 100644
> --- a/src/qemu/qemu_driver.c
> +++ b/src/qemu/qemu_driver.c
> @@ -18348,6 +18348,15 @@ qemuDomainGetStatsVcpu(virQEMUDriverPtr driver ATTRIBUTE_UNUSED,
>                                          param_name,
>                                          cpuinfo[i].cpuTime) < 0)
>              goto cleanup;
> +
> +        snprintf(param_name, VIR_TYPED_PARAM_FIELD_LENGTH,
> +                 "vcpu.%zu.physical", i);
> +        if (virTypedParamsAddInt(&record->params,
> +                                 &record->nparams,
> +                                 maxparams,
> +                                 param_name,
> +                                 cpuinfo[i].cpu) < 0)
> +            goto cleanup;
>      }
>
>      ret = 0;

Patch looks good though.

Peter

----- Original Message -----
> From: "Peter Krempa" <pkrempa@redhat.com>
> To: "Francesco Romani" <fromani@redhat.com>, libvir-list@redhat.com
> Sent: Thursday, December 11, 2014 10:04:24 AM
> Subject: Re: [libvirt] [PATCH v2] qemu: bulk stats: add pcpu placement
>          information
>
> Are you sure that you are getting what you expect? When this stats group
> was first implemented I asked not to include this stat, as it only shows
> the host CPU id where the guest CPU happens to be running at that precise
> moment. The problem is that usual configurations don't map the CPUs in a
> 1:1 fashion, but rather allow a specific guest CPU to run on a subset of
> host CPUs according to the scheduler's decisions. That means the stat
> might oscillate within the set the guest vCPU is pinned to.
>
> Could you please share your use case for this one? I'm curious to see
> whether you have some real use for such data.

There is one use case in oVirt where this very data is used to build what
is claimed to be a vCPU runtime pinning map; it is used in the NUMA flow.
I'm not really familiar with that code, and after inspecting it following
your answer above, I'm not 100% convinced everything is right in oVirt.
I'll need to check more deeply, and I'll reply as soon as I have
trustworthy information.

Thanks for the insight,

-- 
Francesco Romani
RedHat Engineering Virtualization R & D
Phone: 8261328
IRC: fromani