[libvirt] [PATCHv5 0/2] support vcpu_time in qemu

These are my enhancements to Hu's series.  If it still works for his
testing, then I'm ready to push it.  I also think we should port
virDomainGetCPUStats to LXC; at least the overall statistics are
available since LXC sets up a cgroup, although I'm not quite as sure
whether vcpu statistics are possible.

Hu Tao (2):
  Add a new param 'vcpu_time' to virDomainGetCPUStats
  Adds support to param 'vcpu_time' in qemu_driver.

 include/libvirt/libvirt.h.in |   10 +++-
 src/qemu/qemu_driver.c       |  123 ++++++++++++++++++++++++++++++++++++++---
 src/util/cgroup.c            |    4 +-
 tools/virsh.c                |   14 +++-
 4 files changed, 134 insertions(+), 17 deletions(-)

--
1.7.7.6

From: Hu Tao <hutao@cn.fujitsu.com>

Currently virDomainGetCPUStats gets total cpu usage, which consists of:

  1. vcpu usage: the physical cpu time consumed by virtual cpu(s) of domain
  2. hypervisor: `total cpu usage' - `vcpu usage'

The param 'vcpu_time' is for getting vcpu usages.
---
diff from v4: minor cleanups, per review

 include/libvirt/libvirt.h.in |   10 +++++++++-
 tools/virsh.c                |   14 ++++++++------
 2 files changed, 17 insertions(+), 7 deletions(-)

diff --git a/include/libvirt/libvirt.h.in b/include/libvirt/libvirt.h.in
index ac5df95..a817db8 100644
--- a/include/libvirt/libvirt.h.in
+++ b/include/libvirt/libvirt.h.in
@@ -1339,7 +1339,8 @@ int virDomainGetState (virDomainPtr domain,
 
 /**
  * VIR_DOMAIN_CPU_STATS_CPUTIME:
- * cpu usage in nanoseconds, as a ullong
+ * cpu usage (sum of both vcpu and hypervisor usage) in nanoseconds,
+ * as a ullong
  */
 #define VIR_DOMAIN_CPU_STATS_CPUTIME "cpu_time"
 
@@ -1355,6 +1356,13 @@ int virDomainGetState (virDomainPtr domain,
  */
 #define VIR_DOMAIN_CPU_STATS_SYSTEMTIME "system_time"
 
+/**
+ * VIR_DOMAIN_CPU_STATS_VCPUTIME:
+ * vcpu usage in nanoseconds (cpu_time excluding hypervisor time),
+ * as a ullong
+ */
+#define VIR_DOMAIN_CPU_STATS_VCPUTIME "vcpu_time"
+
 int virDomainGetCPUStats(virDomainPtr domain,
                          virTypedParameterPtr params,
                          unsigned int nparams,
diff --git a/tools/virsh.c b/tools/virsh.c
index 08b3854..46239fa 100644
--- a/tools/virsh.c
+++ b/tools/virsh.c
@@ -5572,6 +5572,7 @@ cmdCPUStats(vshControl *ctl, const vshCmd *cmd)
     virTypedParameterPtr params = NULL;
     int i, j, pos, max_id, cpu = -1, show_count = -1, nparams;
     bool show_total = false, show_per_cpu = false;
+    unsigned int flags = 0;
 
     if (!vshConnectionUsability(ctl, ctl->conn))
         return false;
@@ -5599,13 +5600,13 @@ cmdCPUStats(vshControl *ctl, const vshCmd *cmd)
         cpu = 0;
 
     /* get number of cpus on the node */
-    if ((max_id = virDomainGetCPUStats(dom, NULL, 0, 0, 0, 0)) < 0)
+    if ((max_id = virDomainGetCPUStats(dom, NULL, 0, 0, 0, flags)) < 0)
         goto failed_stats;
     if (show_count < 0 || show_count > max_id)
         show_count = max_id;
 
     /* get percpu information */
-    if ((nparams = virDomainGetCPUStats(dom, NULL, 0, 0, 1, 0)) < 0)
+    if ((nparams = virDomainGetCPUStats(dom, NULL, 0, 0, 1, flags)) < 0)
         goto failed_stats;
 
     if (!nparams) {
@@ -5619,7 +5620,7 @@ cmdCPUStats(vshControl *ctl, const vshCmd *cmd)
     while (show_count) {
         int ncpus = MIN(show_count, 128);
 
-        if (virDomainGetCPUStats(dom, params, nparams, cpu, ncpus, 0) < 0)
+        if (virDomainGetCPUStats(dom, params, nparams, cpu, ncpus, flags) < 0)
             goto failed_stats;
 
         for (i = 0; i < ncpus; i++) {
@@ -5630,7 +5631,8 @@ cmdCPUStats(vshControl *ctl, const vshCmd *cmd)
             for (j = 0; j < nparams; j++) {
                 pos = i * nparams + j;
                 vshPrint(ctl, "\t%-12s ", params[pos].field);
-                if (STREQ(params[pos].field, VIR_DOMAIN_CPU_STATS_CPUTIME) &&
+                if ((STREQ(params[pos].field, VIR_DOMAIN_CPU_STATS_CPUTIME) ||
+                     STREQ(params[pos].field, VIR_DOMAIN_CPU_STATS_VCPUTIME)) &&
                     params[j].type == VIR_TYPED_PARAM_ULLONG) {
                     vshPrint(ctl, "%9lld.%09lld seconds\n",
                              params[pos].value.ul / 1000000000,
@@ -5653,7 +5655,7 @@ do_show_total:
         goto cleanup;
 
     /* get supported num of parameter for total statistics */
-    if ((nparams = virDomainGetCPUStats(dom, NULL, 0, -1, 1, 0)) < 0)
+    if ((nparams = virDomainGetCPUStats(dom, NULL, 0, -1, 1, flags)) < 0)
         goto failed_stats;
 
     if (!nparams) {
@@ -5665,7 +5667,7 @@ do_show_total:
         goto failed_params;
 
     /* passing start_cpu == -1 gives us domain's total status */
-    if ((nparams = virDomainGetCPUStats(dom, params, nparams, -1, 1, 0)) < 0)
+    if ((nparams = virDomainGetCPUStats(dom, params, nparams, -1, 1, flags)) < 0)
         goto failed_stats;
 
     vshPrint(ctl, _("Total:\n"));
--
1.7.7.6
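
For anyone who wants to consume the new statistic from a management
application, here is a minimal client-side sketch against the public API
(not part of the series itself): it queries the per-CPU statistics and
derives the hypervisor overhead as cpu_time minus vcpu_time.  The URI
"qemu:///system" and the domain name "demo" are placeholders, error
handling is reduced to the bare minimum, and it assumes a libvirt built
with these two patches so that "vcpu_time" is actually reported:

/* Hedged sketch: placeholder URI/domain, minimal error handling.  On hosts
 * with many CPUs you would fetch stats in chunks of at most 128 CPUs per
 * call, as virsh does. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <libvirt/libvirt.h>

int main(void)
{
    virConnectPtr conn = virConnectOpenReadOnly("qemu:///system");
    virDomainPtr dom = conn ? virDomainLookupByName(conn, "demo") : NULL;
    if (!dom)
        return 1;

    /* number of host CPUs, and number of stats fields reported per CPU */
    int ncpus = virDomainGetCPUStats(dom, NULL, 0, 0, 0, 0);
    int nparams = virDomainGetCPUStats(dom, NULL, 0, 0, 1, 0);
    if (ncpus < 1 || nparams < 1)
        return 1;

    virTypedParameterPtr params = calloc(ncpus * nparams, sizeof(*params));
    if (!params || virDomainGetCPUStats(dom, params, nparams, 0, ncpus, 0) < 0)
        return 1;

    for (int cpu = 0; cpu < ncpus; cpu++) {
        unsigned long long total = 0, vcpu = 0;
        for (int j = 0; j < nparams; j++) {
            virTypedParameterPtr p = &params[cpu * nparams + j];
            if (strcmp(p->field, VIR_DOMAIN_CPU_STATS_CPUTIME) == 0)
                total = p->value.ul;
            else if (strcmp(p->field, VIR_DOMAIN_CPU_STATS_VCPUTIME) == 0)
                vcpu = p->value.ul;
        }
        /* hypervisor overhead is the difference between the two counters */
        printf("CPU%d: cpu_time=%llu vcpu_time=%llu hypervisor=%llu (ns)\n",
               cpu, total, vcpu, total - vcpu);
    }

    free(params);
    virDomainFree(dom);
    virConnectClose(conn);
    return 0;
}

virsh's cpu-stats command (patched above) walks the same API, only
formatting the nanosecond counters as seconds.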

On Thu, May 17, 2012 at 03:56:47PM -0600, Eric Blake wrote:
From: Hu Tao <hutao@cn.fujitsu.com>
Currently virDomainGetCPUStats gets total cpu usage, which consists of:
1. vcpu usage: the physical cpu time consumed by virtual cpu(s) of domain
2. hypervisor: `total cpu usage' - `vcpu usage'
The param 'vcpu_time' is for getting vcpu usages. ---
diff from v4: minor cleanups, per review
ACK.

--
Thanks,
Hu Tao

From: Hu Tao <hutao@cn.fujitsu.com>

This involves setting the cpuacct cgroup to a per-vcpu granularity, as
well as summing the per-vcpu accounting into a common array.  Now that
we are reading more than one cgroup file, we double-check that cpus
weren't hot-plugged between reads to invalidate our summing.

Signed-off-by: Eric Blake <eblake@redhat.com>
---
diff from v4: rewrite qemu code to use fewer malloc calls, fix some logic bugs

 src/qemu/qemu_driver.c |  123 ++++++++++++++++++++++++++++++++++++++++++++----
 src/util/cgroup.c      |    4 +-
 2 files changed, 117 insertions(+), 10 deletions(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 0fd7de1..f6d0985 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -104,7 +104,7 @@
 #define QEMU_NB_NUMA_PARAM 2
 
 #define QEMU_NB_TOTAL_CPU_STAT_PARAM 3
-#define QEMU_NB_PER_CPU_STAT_PARAM 1
+#define QEMU_NB_PER_CPU_STAT_PARAM 2
 
 #if HAVE_LINUX_KVM_H
 # include <linux/kvm.h>
@@ -12563,8 +12563,69 @@ qemuDomainGetTotalcpuStats(virCgroupPtr group,
     return nparams;
 }
 
+/* This function gets the sums of cpu time consumed by all vcpus.
+ * For example, if there are 4 physical cpus, and 2 vcpus in a domain,
+ * then for each vcpu, the cpuacct.usage_percpu looks like this:
+ *   t0 t1 t2 t3
+ * and we have 2 groups of such data:
+ *   v\p    0     1     2     3
+ *   0      t00   t01   t02   t03
+ *   1      t10   t11   t12   t13
+ * for each pcpu, the sum is cpu time consumed by all vcpus.
+ *   s0 = t00 + t10
+ *   s1 = t01 + t11
+ *   s2 = t02 + t12
+ *   s3 = t03 + t13
+ */
+static int
+getSumVcpuPercpuStats(virCgroupPtr group,
+                      unsigned int nvcpu,
+                      unsigned long long *sum_cpu_time,
+                      unsigned int num)
+{
+    int ret = -1;
+    int i;
+    char *buf = NULL;
+    virCgroupPtr group_vcpu = NULL;
+
+    for (i = 0; i < nvcpu; i++) {
+        char *pos;
+        unsigned long long tmp;
+        int j;
+
+        if (virCgroupForVcpu(group, i, &group_vcpu, 0) < 0) {
+            qemuReportError(VIR_ERR_INTERNAL_ERROR,
+                            _("error accessing cgroup cpuacct for vcpu"));
+            goto cleanup;
+        }
+
+        if (virCgroupGetCpuacctPercpuUsage(group, &buf) < 0)
+            goto cleanup;
+
+        pos = buf;
+        for (j = 0; j < num; j++) {
+            if (virStrToLong_ull(pos, &pos, 10, &tmp) < 0) {
+                qemuReportError(VIR_ERR_INTERNAL_ERROR,
+                                _("cpuacct parse error"));
+                goto cleanup;
+            }
+            sum_cpu_time[j] += tmp;
+        }
+
+        virCgroupFree(&group_vcpu);
+        VIR_FREE(buf);
+    }
+
+    ret = 0;
+cleanup:
+    virCgroupFree(&group_vcpu);
+    VIR_FREE(buf);
+    return ret;
+}
+
 static int
 qemuDomainGetPercpuStats(virDomainPtr domain,
+                         virDomainObjPtr vm,
                          virCgroupPtr group,
                          virTypedParameterPtr params,
                          unsigned int nparams,
@@ -12572,20 +12633,24 @@ qemuDomainGetPercpuStats(virDomainPtr domain,
                          unsigned int ncpus)
 {
     char *map = NULL;
+    char *map2 = NULL;
     int rv = -1;
     int i, max_id;
     char *pos;
     char *buf = NULL;
+    unsigned long long *sum_cpu_time = NULL;
+    unsigned long long *sum_cpu_pos;
+    unsigned int n = 0;
+    qemuDomainObjPrivatePtr priv = vm->privateData;
     virTypedParameterPtr ent;
     int param_idx;
+    unsigned long long cpu_time;
 
     /* return the number of supported params */
     if (nparams == 0 && ncpus != 0)
-        return QEMU_NB_PER_CPU_STAT_PARAM; /* only cpu_time is supported */
+        return QEMU_NB_PER_CPU_STAT_PARAM;
 
-    /* return percpu cputime in index 0 */
-    param_idx = 0;
-    /* to parse account file, we need "present" cpu map */
+    /* To parse account file, we need "present" cpu map. */
     map = nodeGetCPUmap(domain->conn, &max_id, "present");
     if (!map)
         return rv;
@@ -12608,30 +12673,70 @@ qemuDomainGetPercpuStats(virDomainPtr domain,
     pos = buf;
     memset(params, 0, nparams * ncpus);
 
+    /* return percpu cputime in index 0 */
+    param_idx = 0;
+
     if (max_id - start_cpu > ncpus - 1)
         max_id = start_cpu + ncpus - 1;
 
     for (i = 0; i <= max_id; i++) {
-        unsigned long long cpu_time;
-
         if (!map[i]) {
             cpu_time = 0;
         } else if (virStrToLong_ull(pos, &pos, 10, &cpu_time) < 0) {
             qemuReportError(VIR_ERR_INTERNAL_ERROR,
                             _("cpuacct parse error"));
             goto cleanup;
+        } else {
+            n++;
         }
         if (i < start_cpu)
             continue;
-        ent = &params[ (i - start_cpu) * nparams + param_idx];
+        ent = &params[(i - start_cpu) * nparams + param_idx];
         if (virTypedParameterAssign(ent, VIR_DOMAIN_CPU_STATS_CPUTIME,
                                     VIR_TYPED_PARAM_ULLONG, cpu_time) < 0)
             goto cleanup;
     }
+
+    /* return percpu vcputime in index 1 */
+    if (++param_idx >= nparams) {
+        rv = nparams;
+        goto cleanup;
+    }
+
+    if (VIR_ALLOC_N(sum_cpu_time, n) < 0) {
+        virReportOOMError();
+        goto cleanup;
+    }
+    if (getSumVcpuPercpuStats(group, priv->nvcpupids, sum_cpu_time, n) < 0)
+        goto cleanup;
+
+    /* Check that the mapping of online cpus didn't change mid-parse. */
+    map2 = nodeGetCPUmap(domain->conn, &max_id, "present");
+    if (!map2 || memcmp(map, map2, VIR_DOMAIN_CPUMASK_LEN) != 0)
+        goto cleanup;
+
+    sum_cpu_pos = sum_cpu_time;
+    for (i = 0; i <= max_id; i++) {
+        if (!map[i])
+            cpu_time = 0;
+        else
+            cpu_time = *(sum_cpu_pos++);
+        if (i < start_cpu)
+            continue;
+        if (virTypedParameterAssign(&params[(i - start_cpu) * nparams +
+                                            param_idx],
+                                    VIR_DOMAIN_CPU_STATS_VCPUTIME,
+                                    VIR_TYPED_PARAM_ULLONG,
+                                    cpu_time) < 0)
+            goto cleanup;
+    }
+
     rv = param_idx + 1;
 cleanup:
+    VIR_FREE(sum_cpu_time);
     VIR_FREE(buf);
     VIR_FREE(map);
+    VIR_FREE(map2);
     return rv;
 }
 
@@ -12683,7 +12788,7 @@ qemuDomainGetCPUStats(virDomainPtr domain,
     if (start_cpu == -1)
         ret = qemuDomainGetTotalcpuStats(group, params, nparams);
     else
-        ret = qemuDomainGetPercpuStats(domain, group, params, nparams,
+        ret = qemuDomainGetPercpuStats(domain, vm, group, params, nparams,
                                        start_cpu, ncpus);
 cleanup:
     virCgroupFree(&group);
diff --git a/src/util/cgroup.c b/src/util/cgroup.c
index ad49bc2..5b32881 100644
--- a/src/util/cgroup.c
+++ b/src/util/cgroup.c
@@ -530,7 +530,9 @@ static int virCgroupMakeGroup(virCgroupPtr parent, virCgroupPtr group,
             continue;
 
         /* We need to control cpu bandwidth for each vcpu now */
-        if ((flags & VIR_CGROUP_VCPU) && (i != VIR_CGROUP_CONTROLLER_CPU)) {
+        if ((flags & VIR_CGROUP_VCPU) &&
+            (i != VIR_CGROUP_CONTROLLER_CPU &&
+             i != VIR_CGROUP_CONTROLLER_CPUACCT)) {
             /* treat it as unmounted and we can use virCgroupAddTask */
             VIR_FREE(group->controllers[i].mountPoint);
             continue;
--
1.7.7.6
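
As a side note on the summing logic, the sketch below replays what
getSumVcpuPercpuStats does, but as a standalone program outside libvirt:
for each vcpu it reads a per-vcpu cpuacct.usage_percpu file and adds the
columns into one array indexed by physical CPU (s0 = t00 + t10, and so on,
as in the comment above).  The cgroup mount point, directory layout, and
the vcpu/pcpu counts are assumptions for illustration only; the real paths
depend on how the host and libvirt lay out cgroups.

/* Standalone illustration; the cgroup path below is an assumed layout. */
#include <stdio.h>
#include <string.h>

static int sum_vcpu_percpu(const char *dom_cgroup, unsigned int nvcpu,
                           unsigned long long *sum, unsigned int npcpu)
{
    memset(sum, 0, npcpu * sizeof(*sum));

    for (unsigned int v = 0; v < nvcpu; v++) {
        char path[4096];
        /* e.g. <dom_cgroup>/vcpu0/cpuacct.usage_percpu (assumed layout) */
        snprintf(path, sizeof(path), "%s/vcpu%u/cpuacct.usage_percpu",
                 dom_cgroup, v);

        FILE *fp = fopen(path, "r");
        if (!fp)
            return -1;

        /* one line of space-separated nanosecond counters, one column per
         * physical CPU; add column j of this vcpu into sum[j] */
        for (unsigned int j = 0; j < npcpu; j++) {
            unsigned long long t;
            if (fscanf(fp, "%llu", &t) != 1) {
                fclose(fp);
                return -1;
            }
            sum[j] += t;
        }
        fclose(fp);
    }
    return 0;
}

int main(void)
{
    unsigned long long sum[4];
    /* placeholder path and counts: 2 vcpus, 4 physical cpus */
    if (sum_vcpu_percpu("/sys/fs/cgroup/cpuacct/libvirt/qemu/demo",
                        2, sum, 4) == 0) {
        for (int j = 0; j < 4; j++)
            printf("pcpu%d: %llu ns consumed by all vcpus\n", j, sum[j]);
    }
    return 0;
}

The double read of the "present" CPU map in the qemu driver above exists
because this summing reads several files; if a CPU were hot-plugged in
between, the columns would no longer line up, so the driver simply bails
out in that case.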

On Thu, May 17, 2012 at 03:56:48PM -0600, Eric Blake wrote:
From: Hu Tao <hutao@cn.fujitsu.com>
This involves setting the cpuacct cgroup to a per-vcpu granularity, as well as summing the per-vcpu accounting into a common array. Now that we are reading more than one cgroup file, we double-check that cpus weren't hot-plugged between reads to invalidate our summing.
Signed-off-by: Eric Blake <eblake@redhat.com> ---
diff from v4: rewrite qemu code to use fewer malloc calls, fix some logic bugs
ACK.

--
Thanks,
Hu Tao

On 05/18/2012 03:09 AM, Hu Tao wrote:
On Thu, May 17, 2012 at 03:56:48PM -0600, Eric Blake wrote:
From: Hu Tao <hutao@cn.fujitsu.com>
This involves setting the cpuacct cgroup to a per-vcpu granularity, as well as summing the per-vcpu accounting into a common array. Now that we are reading more than one cgroup file, we double-check that cpus weren't hot-plugged between reads to invalidate our summing.
Signed-off-by: Eric Blake <eblake@redhat.com> ---
diff from v4: rewrite qemu code to use fewer malloc calls, fix some logic bugs
ACK.
Thanks for testing the common case, even if you didn't cover the case of offline cpus. I've gone ahead and pushed this.

--
Eric Blake   eblake@redhat.com   +1-919-301-3266
Libvirt virtualization library http://libvirt.org

On Thu, May 17, 2012 at 03:56:46PM -0600, Eric Blake wrote:
These are my enhancements to Hu's series. If it still works for his testing, then I'm ready to push it.
Thanks Eric. v5 works here. But I don't have a cpu-hotplug/hotunplug environment, so this case is not tested.
I also think we should port virDomainGetCPUStats to LXC; at least the overall statistics are available since LXC sets up a cgroup, although I'm not quite as sure whether vcpu statistics are possible.
I'll look into this later.
Hu Tao (2):
  Add a new param 'vcpu_time' to virDomainGetCPUStats
  Adds support to param 'vcpu_time' in qemu_driver.

 include/libvirt/libvirt.h.in |   10 +++-
 src/qemu/qemu_driver.c       |  123 ++++++++++++++++++++++++++++++++++++++---
 src/util/cgroup.c            |    4 +-
 tools/virsh.c                |   14 +++-
 4 files changed, 134 insertions(+), 17 deletions(-)

--
1.7.7.6

--
Thanks,
Hu Tao