[libvirt] [PATCH 0/4] Add new public API virDomainGetPcpusUsage and pcpuinfo command in virsh

"virt-top -1" can call virDomainGetPcpusUsage() periodically and get the CPU
activity per physical CPU (this still requires a virt-top-side patch). virsh
also gains a pcpuinfo command which calls virDomainGetPcpusUsage(); it reports
information about the physical CPUs, such as per-CPU usage and the currently
attached vCPU.

# virsh pcpuinfo rhel6
CPU:            0
Curr VCPU:      -
Usage:          47.3

CPU:            1
Curr VCPU:      1
Usage:          46.8

CPU:            2
Curr VCPU:      0
Usage:          52.7

CPU:            3
Curr VCPU:      -
Usage:          44.1

Lai Jiangshan (4):
  Add new public API virDomainGetPcpusUsage
  remote: support new API virDomainGetPcpusUsage for remote driver
  qemu: implement new API virDomainGetPcpusUsage for qemu driver
  Enable the pcpuinfo command in virsh

 daemon/remote.c              |   68 ++++++++++++++++++++++++++++++
 include/libvirt/libvirt.h.in |    5 ++
 python/generator.py          |    1 +
 src/driver.h                 |    7 +++
 src/libvirt.c                |   51 +++++++++++++++++++++++
 src/libvirt_public.syms      |    5 ++
 src/qemu/qemu.conf           |    5 +-
 src/qemu/qemu_conf.c         |    3 +-
 src/qemu/qemu_driver.c       |   74 +++++++++++++++++++++++++++++++++
 src/remote/remote_driver.c   |   51 +++++++++++++++++++++++
 src/remote/remote_protocol.x |   17 +++++++-
 src/remote_protocol-structs  |   13 ++++++
 src/util/cgroup.c            |    7 +++
 src/util/cgroup.h            |    1 +
 tools/virsh.c                |   93 ++++++++++++++++++++++++++++++++++++++++++
 tools/virsh.pod              |    5 ++
 16 files changed, 402 insertions(+), 4 deletions(-)

-- 
1.7.4.4
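An editorial aside on how a client such as virt-top might consume these numbers: the qemu implementation in this series reads cpuacct.usage_percpu, which reports cumulative run time in nanoseconds per host CPU, so a monitor has to sample twice and divide the delta by the sampling interval. A minimal sketch, where pcpu_percent() is a hypothetical helper and not part of the patches:

```c
#include <assert.h>

/* Convert two samples of a cumulative per-CPU run-time counter
 * (nanoseconds, as exposed by cpuacct.usage_percpu and hence by
 * virDomainGetPcpusUsage in this series) into a utilization percentage
 * over the sampling interval. */
double pcpu_percent(unsigned long long prev_ns,
                    unsigned long long cur_ns,
                    unsigned long long interval_ns)
{
    /* guard against a zero interval or a counter reset */
    if (interval_ns == 0 || cur_ns < prev_ns)
        return 0.0;
    return 100.0 * (double)(cur_ns - prev_ns) / (double)interval_ns;
}
```

For example, 0.5 s of guest time accumulated on one physical CPU over a 1 s interval yields 50% utilization for that CPU.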

Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
---
 include/libvirt/libvirt.h.in |    5 ++++
 python/generator.py          |    1 +
 src/driver.h                 |    7 +++++
 src/libvirt.c                |   51 ++++++++++++++++++++++++++++++++++++++++++
 src/libvirt_public.syms      |    5 ++++
 5 files changed, 69 insertions(+), 0 deletions(-)

diff --git a/include/libvirt/libvirt.h.in b/include/libvirt/libvirt.h.in
index d01d1bc..2ec6b6b 100644
--- a/include/libvirt/libvirt.h.in
+++ b/include/libvirt/libvirt.h.in
@@ -3487,6 +3487,11 @@ int virConnectSetKeepAlive(virConnectPtr conn,
                            int interval,
                            unsigned int count);
 
+int virDomainGetPcpusUsage(virDomainPtr dom,
+                           unsigned long long *usages,
+                           int *nr_usages,
+                           unsigned int flags);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/python/generator.py b/python/generator.py
index 88c52b9..a39244f 100755
--- a/python/generator.py
+++ b/python/generator.py
@@ -416,6 +416,7 @@ skip_impl = (
     'virDomainBlockStatsFlags',
     'virDomainSetBlockIoTune',
     'virDomainGetBlockIoTune',
+    'virDomainGetPcpusUsage', # not implemented yet
 )
 
 qemu_skip_impl = (
diff --git a/src/driver.h b/src/driver.h
index 941ff51..3d54370 100644
--- a/src/driver.h
+++ b/src/driver.h
@@ -770,6 +770,12 @@ typedef int
                              int *nparams,
                              unsigned int flags);
 
+typedef int
+    (*virDrvDomainGetPcpusUsage)(virDomainPtr dom,
+                                 unsigned long long *usages,
+                                 int *nr_usages,
+                                 unsigned int flags);
+
 /**
  * _virDriver:
  *
@@ -934,6 +940,7 @@ struct _virDriver {
     virDrvNodeSuspendForDuration nodeSuspendForDuration;
     virDrvDomainSetBlockIoTune domainSetBlockIoTune;
     virDrvDomainGetBlockIoTune domainGetBlockIoTune;
+    virDrvDomainGetPcpusUsage domainGetPcpusUsage;
 };
 
 typedef int
diff --git a/src/libvirt.c b/src/libvirt.c
index 68074e7..06a019c 100644
--- a/src/libvirt.c
+++ b/src/libvirt.c
@@ -17599,3 +17599,54 @@ error:
     virDispatchError(dom->conn);
     return -1;
 }
+
+/**
+ * virDomainGetPcpusUsage:
+ * @dom: pointer to domain object
+ * @usages: returned physical cpu usages
+ * @nr_usages: length of @usages
+ * @flags: flags to control the operation
+ *
+ * Get the cpu usage of every physical cpu
+ *
+ * Returns 0 on success, -1 on error
+ */
+int virDomainGetPcpusUsage(virDomainPtr dom,
+                           unsigned long long *usages,
+                           int *nr_usages,
+                           unsigned int flags)
+{
+    virConnectPtr conn;
+
+    VIR_DOMAIN_DEBUG(dom, "usages=%p, nr_usages=%d, flags=%x",
+                     usages, (nr_usages) ? *nr_usages : -1, flags);
+
+    virResetLastError();
+
+    if (!VIR_IS_CONNECTED_DOMAIN (dom)) {
+        virLibDomainError(VIR_ERR_INVALID_DOMAIN, __FUNCTION__);
+        virDispatchError(NULL);
+        return -1;
+    }
+
+    /* @nr_usages must be non-NULL; @usages may only be NULL when the
+     * caller is querying the entry count with *nr_usages == 0 */
+    if (nr_usages == NULL || (*nr_usages != 0 && usages == NULL)) {
+        virLibDomainError(VIR_ERR_INVALID_ARG, __FUNCTION__);
+        goto error;
+    }
+
+    conn = dom->conn;
+
+    if (conn->driver->domainGetPcpusUsage) {
+        int ret;
+        ret = conn->driver->domainGetPcpusUsage(dom, usages, nr_usages, flags);
+        if (ret < 0)
+            goto error;
+        return ret;
+    }
+
+    virLibDomainError(VIR_ERR_NO_SUPPORT, __FUNCTION__);
+
+error:
+    virDispatchError(dom->conn);
+    return -1;
+}
diff --git a/src/libvirt_public.syms b/src/libvirt_public.syms
index 164039a..05f8d9e 100644
--- a/src/libvirt_public.syms
+++ b/src/libvirt_public.syms
@@ -508,4 +508,9 @@ LIBVIRT_0.9.8 {
     virNodeSuspendForDuration;
 } LIBVIRT_0.9.7;
 
+LIBVIRT_0.9.9 {
+    global:
+        virDomainGetPcpusUsage;
+} LIBVIRT_0.9.8;
+
 # .... define new API here using predicted next version number ....
-- 
1.7.4.4
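The new API follows libvirt's usual two-call convention (also visible in the remote driver later in this series): call once with *nr_usages == 0 to learn how many entries the driver reports, allocate a buffer, then call again. A sketch of the caller side, where mock_get_pcpus_usage() is an illustrative stand-in for a real connection and not part of the patches:

```c
#include <assert.h>
#include <stdlib.h>

/* Stand-in for virDomainGetPcpusUsage() on a 4-cpu host: with
 * *nr_usages == 0 it only reports the entry count; otherwise it fills
 * up to *nr_usages entries and sets *nr_usages to the full count. */
int mock_get_pcpus_usage(unsigned long long *usages, int *nr_usages)
{
    static const unsigned long long host[4] = { 41ULL, 42ULL, 43ULL, 44ULL };
    int i;

    if (*nr_usages == 0) {          /* query: how many physical cpus? */
        *nr_usages = 4;
        return 0;
    }
    for (i = 0; i < *nr_usages && i < 4; i++)
        usages[i] = host[i];
    *nr_usages = 4;
    return 0;
}
```

A caller would first invoke it with nr_usages = 0 (usages may be NULL), allocate nr_usages entries, and invoke it again to fetch the counters.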

Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
---
 daemon/remote.c              |   68 ++++++++++++++++++++++++++++++++++++++++++
 src/remote/remote_driver.c   |   51 +++++++++++++++++++++++++++++++
 src/remote/remote_protocol.x |   17 ++++++++++-
 src/remote_protocol-structs  |   13 ++++++++
 4 files changed, 148 insertions(+), 1 deletions(-)

diff --git a/daemon/remote.c b/daemon/remote.c
index e1d208c..c154892 100644
--- a/daemon/remote.c
+++ b/daemon/remote.c
@@ -2022,6 +2022,74 @@ cleanup:
     return rv;
 }
 
+static int
+remoteDispatchDomainGetPcpusUsage(virNetServerPtr server ATTRIBUTE_UNUSED,
+                                  virNetServerClientPtr client ATTRIBUTE_UNUSED,
+                                  virNetMessagePtr hdr ATTRIBUTE_UNUSED,
+                                  virNetMessageErrorPtr rerr,
+                                  remote_domain_get_pcpus_usage_args *args,
+                                  remote_domain_get_pcpus_usage_ret *ret)
+{
+    int i;
+    virDomainPtr dom = NULL;
+    int rv = -1;
+    unsigned long long *usages = NULL; /* so VIR_FREE is safe on early errors */
+    int nr_usages = args->nr_usages;
+    struct daemonClientPrivate *priv =
+        virNetServerClientGetPrivateData(client);
+
+    if (!priv->conn) {
+        virNetError(VIR_ERR_INTERNAL_ERROR, "%s", _("connection not open"));
+        goto cleanup;
+    }
+
+    if (nr_usages > REMOTE_DOMAIN_PCPUS_USAGE_PARAMETERS_MAX) {
+        virNetError(VIR_ERR_INTERNAL_ERROR, "%s", _("nr_usages too large"));
+        goto cleanup;
+    }
+
+    if (VIR_ALLOC_N(usages, nr_usages) < 0) {
+        virReportOOMError();
+        goto cleanup;
+    }
+
+    if (!(dom = get_nonnull_domain(priv->conn, args->dom)))
+        goto cleanup;
+
+    if (virDomainGetPcpusUsage(dom, usages, &nr_usages, args->flags) < 0)
+        goto cleanup;
+
+    ret->nr_usages = nr_usages;
+
+    /*
+     * In this case, we only need to send back the number of usage
+     * entries supported
+     */
+    if (args->nr_usages == 0) {
+        goto success;
+    }
+
+    ret->usages.usages_len = MIN(args->nr_usages, nr_usages);
+    if (VIR_ALLOC_N(ret->usages.usages_val, ret->usages.usages_len) < 0) {
+        virReportOOMError();
+        goto cleanup;
+    }
+
+    for (i = 0; i < ret->usages.usages_len; i++) {
+        ret->usages.usages_val[i] = usages[i];
+    }
+
+success:
+    rv = 0;
+
+cleanup:
+    if (rv < 0)
+        virNetMessageSaveError(rerr);
+    VIR_FREE(usages);
+    if (dom)
+        virDomainFree(dom);
+    return rv;
+}
 
 #ifdef HAVE_SASL
 /*
diff --git a/src/remote/remote_driver.c b/src/remote/remote_driver.c
index 556c90c..1a4c129 100644
--- a/src/remote/remote_driver.c
+++ b/src/remote/remote_driver.c
@@ -2255,6 +2255,56 @@ done:
     return rv;
 }
 
+static int remoteDomainGetPcpusUsage(virDomainPtr domain,
+                                     unsigned long long *usages,
+                                     int *nr_usages,
+                                     unsigned int flags)
+{
+    int i;
+    int rv = -1;
+    remote_domain_get_pcpus_usage_args args;
+    remote_domain_get_pcpus_usage_ret ret;
+    struct private_data *priv = domain->conn->privateData;
+
+    remoteDriverLock(priv);
+
+    make_nonnull_domain(&args.dom, domain);
+    args.nr_usages = *nr_usages;
+    args.flags = flags;
+
+    memset(&ret, 0, sizeof(ret));
+
+    if (call(domain->conn, priv, 0, REMOTE_PROC_DOMAIN_GET_PCPUS_USAGE,
+             (xdrproc_t) xdr_remote_domain_get_pcpus_usage_args,
+             (char *) &args,
+             (xdrproc_t) xdr_remote_domain_get_pcpus_usage_ret,
+             (char *) &ret) == -1) {
+        goto done;
+    }
+
+    /* Handle the case when the caller does not know the number of entries
+     * and is asking for the number of entries supported
+     */
+    if (*nr_usages == 0) {
+        *nr_usages = ret.nr_usages;
+        rv = 0; /* the count query succeeded; don't report failure */
+        goto cleanup;
+    }
+
+    for (i = 0; i < MIN(*nr_usages, ret.usages.usages_len); i++) {
+        usages[i] = ret.usages.usages_val[i];
+    }
+    *nr_usages = ret.nr_usages;
+
+    rv = 0;
+
+cleanup:
+    xdr_free ((xdrproc_t) xdr_remote_domain_get_pcpus_usage_ret,
+              (char *) &ret);
+done:
+    remoteDriverUnlock(priv);
+    return rv;
+}
+
 /*----------------------------------------------------------------------*/
 
 static virDrvOpenStatus ATTRIBUTE_NONNULL (1)
@@ -4677,6 +4727,7 @@ static virDriver remote_driver = {
     .nodeSuspendForDuration = remoteNodeSuspendForDuration, /* 0.9.8 */
     .domainSetBlockIoTune = remoteDomainSetBlockIoTune, /* 0.9.8 */
     .domainGetBlockIoTune = remoteDomainGetBlockIoTune, /* 0.9.8 */
+    .domainGetPcpusUsage = remoteDomainGetPcpusUsage, /* 0.9.9 */
 };
 
 static virNetworkDriver network_driver = {
diff --git a/src/remote/remote_protocol.x b/src/remote/remote_protocol.x
index 509a20b..e6b38ac 100644
--- a/src/remote/remote_protocol.x
+++ b/src/remote/remote_protocol.x
@@ -128,6 +128,9 @@ const REMOTE_DOMAIN_MEMORY_PARAMETERS_MAX = 16;
 /* Upper limit on list of blockio tuning parameters. */
 const REMOTE_DOMAIN_BLOCK_IO_TUNE_PARAMETERS_MAX = 16;
 
+/* Upper limit on list of physical cpu usage entries. */
+const REMOTE_DOMAIN_PCPUS_USAGE_PARAMETERS_MAX = 16;
+
 /* Upper limit on list of node cpu stats. */
 const REMOTE_NODE_CPU_STATS_MAX = 16;
 
@@ -1104,6 +1107,17 @@ struct remote_domain_get_block_io_tune_ret {
     int nparams;
 };
 
+struct remote_domain_get_pcpus_usage_args {
+    remote_nonnull_domain dom;
+    int nr_usages;
+    unsigned int flags;
+};
+
+struct remote_domain_get_pcpus_usage_ret {
+    uint64_t usages<REMOTE_DOMAIN_PCPUS_USAGE_PARAMETERS_MAX>;
+    int nr_usages;
+};
+
 /* Network calls: */
 
 struct remote_num_of_networks_ret {
@@ -2605,7 +2619,8 @@ enum remote_procedure {
     REMOTE_PROC_DOMAIN_BLOCK_RESIZE = 251, /* autogen autogen */
     REMOTE_PROC_DOMAIN_SET_BLOCK_IO_TUNE = 252, /* autogen autogen */
-    REMOTE_PROC_DOMAIN_GET_BLOCK_IO_TUNE = 253 /* skipgen skipgen */
+    REMOTE_PROC_DOMAIN_GET_BLOCK_IO_TUNE = 253, /* skipgen skipgen */
+    REMOTE_PROC_DOMAIN_GET_PCPUS_USAGE = 254 /* skipgen skipgen */
 
     /*
      * Notice how the entries are grouped in sets of 10 ?
diff --git a/src/remote_protocol-structs b/src/remote_protocol-structs
index a9d4296..a103266 100644
--- a/src/remote_protocol-structs
+++ b/src/remote_protocol-structs
@@ -1790,6 +1790,18 @@ struct remote_node_suspend_for_duration_args {
         uint64_t duration;
         u_int flags;
 };
+struct remote_domain_get_pcpus_usage_args {
+        remote_nonnull_domain dom;
+        int nr_usages;
+        u_int flags;
+};
+struct remote_domain_get_pcpus_usage_ret {
+        struct {
+                u_int usages_len;
+                unsigned long long *usages_val;
+        } usages;
+        int nr_usages;
+};
 enum remote_procedure {
         REMOTE_PROC_OPEN = 1,
         REMOTE_PROC_CLOSE = 2,
@@ -2044,4 +2056,5 @@ enum remote_procedure {
         REMOTE_PROC_DOMAIN_BLOCK_RESIZE = 251,
         REMOTE_PROC_DOMAIN_SET_BLOCK_IO_TUNE = 252,
         REMOTE_PROC_DOMAIN_GET_BLOCK_IO_TUNE = 253,
+        REMOTE_PROC_DOMAIN_GET_PCPUS_USAGE = 254,
 };
-- 
1.7.4.4

Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
---
 src/qemu/qemu.conf     |    5 ++-
 src/qemu/qemu_conf.c   |    3 +-
 src/qemu/qemu_driver.c |   74 ++++++++++++++++++++++++++++++++++++++++++++++++
 src/util/cgroup.c      |    7 ++++
 src/util/cgroup.h      |    1 +
 5 files changed, 87 insertions(+), 3 deletions(-)

diff --git a/src/qemu/qemu.conf b/src/qemu/qemu.conf
index c3f264f..beb0123 100644
--- a/src/qemu/qemu.conf
+++ b/src/qemu/qemu.conf
@@ -158,18 +158,19 @@
 #  - 'memory' - use for memory tunables
 #  - 'blkio' - use for block devices I/O tunables
 #  - 'cpuset' - use for CPUs and memory nodes
+#  - 'cpuacct' - use for CPU accounting
 #
 # NB, even if configured here, they won't be used unless
 # the administrator has mounted cgroups, e.g.:
 #
 #  mkdir /dev/cgroup
-#  mount -t cgroup -o devices,cpu,memory,blkio,cpuset none /dev/cgroup
+#  mount -t cgroup -o devices,cpu,memory,blkio,cpuset,cpuacct none /dev/cgroup
 #
 # They can be mounted anywhere, and different controllers
 # can be mounted in different locations. libvirt will detect
 # where they are located.
 #
-# cgroup_controllers = [ "cpu", "devices", "memory", "blkio", "cpuset" ]
+# cgroup_controllers = [ "cpu", "devices", "memory", "blkio", "cpuset", "cpuacct" ]
 
 # This is the basic set of devices allowed / required by
 # all virtual machines.
diff --git a/src/qemu/qemu_conf.c b/src/qemu/qemu_conf.c
index 3766119..bf8d77a 100644
--- a/src/qemu/qemu_conf.c
+++ b/src/qemu/qemu_conf.c
@@ -307,7 +307,8 @@ int qemudLoadDriverConfig(struct qemud_driver *driver,
                                   (1 << VIR_CGROUP_CONTROLLER_DEVICES) |
                                   (1 << VIR_CGROUP_CONTROLLER_MEMORY) |
                                   (1 << VIR_CGROUP_CONTROLLER_BLKIO) |
-                                  (1 << VIR_CGROUP_CONTROLLER_CPUSET);
+                                  (1 << VIR_CGROUP_CONTROLLER_CPUSET) |
+                                  (1 << VIR_CGROUP_CONTROLLER_CPUACCT);
     }
     for (i = 0 ; i < VIR_CGROUP_CONTROLLER_LAST ; i++) {
         if (driver->cgroupControllers & (1 << i)) {
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index ed90c66..648910e 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -10748,6 +10748,79 @@ cleanup:
     return ret;
 }
 
+static int
+qemuGetPcpusUsage(virDomainPtr dom,
+                  unsigned long long *usages,
+                  int *nr_usages,
+                  unsigned int flags)
+{
+    struct qemud_driver *driver = dom->conn->privateData;
+    virCgroupPtr group = NULL;
+    virDomainObjPtr vm = NULL;
+    char *pos, *raw;
+    unsigned long long val;
+    int nr_cpus = 0;
+    int ret = -1;
+    int rc;
+    bool isActive;
+
+    virCheckFlags(0, -1);
+
+    qemuDriverLock(driver);
+
+    vm = virDomainFindByUUID(&driver->domains, dom->uuid);
+
+    if (vm == NULL) {
+        char uuidstr[VIR_UUID_STRING_BUFLEN];
+
+        /* dom->uuid is a raw byte array; format it before printing */
+        virUUIDFormat(dom->uuid, uuidstr);
+        qemuReportError(VIR_ERR_NO_DOMAIN,
+                        _("no domain with matching uuid '%s'"), uuidstr);
+        goto cleanup;
+    }
+
+    isActive = virDomainObjIsActive(vm);
+
+    if (!isActive) {
+        qemuReportError(VIR_ERR_OPERATION_INVALID, "%s",
+                        _("domain is not running"));
+        goto cleanup;
+    }
+
+    if (!qemuCgroupControllerActive(driver, VIR_CGROUP_CONTROLLER_CPUACCT)) {
+        qemuReportError(VIR_ERR_OPERATION_INVALID,
+                        "%s", _("cgroup CPUACCT controller is not mounted"));
+        goto cleanup;
+    }
+
+    if (virCgroupForDomain(driver->cgroup, vm->def->name, &group, 0) != 0) {
+        qemuReportError(VIR_ERR_INTERNAL_ERROR,
+                        _("cannot find cgroup for domain %s"), vm->def->name);
+        goto cleanup;
+    }
+
+    rc = virCgroupGetCpuacctPcpusUsage(group, &raw);
+    if (rc != 0) {
+        virReportSystemError(-rc, "%s", _("unable to get cpu accounting"));
+        goto cleanup;
+    }
+
+    pos = raw;
+    while (virStrToLong_ull(pos, &pos, 10, &val) >= 0) {
+        if (nr_cpus < *nr_usages) {
+            usages[nr_cpus] = val;
+        }
+        nr_cpus++;
+    }
+
+    VIR_FREE(raw);
+    *nr_usages = nr_cpus;
+    ret = 0;
+
+cleanup:
+    virCgroupFree(&group);
+    if (vm)
+        virDomainObjUnlock(vm);
+    qemuDriverUnlock(driver);
+    return ret;
+}
+
 static virDomainPtr qemuDomainAttach(virConnectPtr conn,
                                      unsigned int pid,
@@ -11589,6 +11662,7 @@ static virDriver qemuDriver = {
     .nodeSuspendForDuration = nodeSuspendForDuration, /* 0.9.8 */
     .domainSetBlockIoTune = qemuDomainSetBlockIoTune, /* 0.9.8 */
     .domainGetBlockIoTune = qemuDomainGetBlockIoTune, /* 0.9.8 */
+    .domainGetPcpusUsage = qemuGetPcpusUsage, /* 0.9.9 */
 };
 
diff --git a/src/util/cgroup.c b/src/util/cgroup.c
index b4d3d8b..3d281f9 100644
--- a/src/util/cgroup.c
+++ b/src/util/cgroup.c
@@ -1522,6 +1522,13 @@ int virCgroupGetCpuacctUsage(virCgroupPtr group, unsigned long long *usage)
                                 "cpuacct.usage", usage);
 }
 
+int virCgroupGetCpuacctPcpusUsage(virCgroupPtr group, char **usage)
+{
+    return virCgroupGetValueStr(group,
+                                VIR_CGROUP_CONTROLLER_CPUACCT,
+                                "cpuacct.usage_percpu", usage);
+}
+
 int virCgroupSetFreezerState(virCgroupPtr group, const char *state)
 {
     return virCgroupSetValueStr(group,
diff --git a/src/util/cgroup.h b/src/util/cgroup.h
index 70dd392..1d62c31 100644
--- a/src/util/cgroup.h
+++ b/src/util/cgroup.h
@@ -115,6 +115,7 @@ int virCgroupSetCpuCfsQuota(virCgroupPtr group, long long cfs_quota);
 int virCgroupGetCpuCfsQuota(virCgroupPtr group, long long *cfs_quota);
 
 int virCgroupGetCpuacctUsage(virCgroupPtr group, unsigned long long *usage);
+int virCgroupGetCpuacctPcpusUsage(virCgroupPtr group, char **usage);
 
 int virCgroupSetFreezerState(virCgroupPtr group, const char *state);
 int virCgroupGetFreezerState(virCgroupPtr group, char **state);
-- 
1.7.4.4
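For reference, cpuacct.usage_percpu is a single line of space-separated cumulative nanosecond counters, one per host CPU. The parse loop in qemuGetPcpusUsage above can be reproduced with plain strtoull in place of libvirt's virStrToLong_ull; parse_usage_percpu() below is a hypothetical stand-alone helper written for illustration:

```c
#include <assert.h>
#include <stdlib.h>

/* Parse a cpuacct.usage_percpu string ("<ns> <ns> ...") the way
 * qemuGetPcpusUsage does: store up to 'max' counters into 'usages',
 * but return the total number of counters seen, so the caller can
 * detect a too-small buffer. */
int parse_usage_percpu(const char *raw, unsigned long long *usages, int max)
{
    const char *pos = raw;
    char *end;
    unsigned long long val;
    int nr_cpus = 0;

    for (;;) {
        val = strtoull(pos, &end, 10);
        if (end == pos)             /* no further number found */
            break;
        if (nr_cpus < max)
            usages[nr_cpus] = val;
        nr_cpus++;
        pos = end;
    }
    return nr_cpus;
}
```

On a 4-CPU host, parsing "47300 46800 52700 44100\n" yields four counters in order.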

This command gets information about the physical CPUs.

Example:
# virsh pcpuinfo rhel6
CPU:            0
Curr VCPU:      -
Usage:          47.3

CPU:            1
Curr VCPU:      1
Usage:          46.8

CPU:            2
Curr VCPU:      0
Usage:          52.7

CPU:            3
Curr VCPU:      -
Usage:          44.1

Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
---
 tools/virsh.c   |   93 +++++++++++++++++++++++++++++++++++++++++++++++++++
 tools/virsh.pod |    5 +++
 2 files changed, 98 insertions(+), 0 deletions(-)

diff --git a/tools/virsh.c b/tools/virsh.c
index 0fccf88..4a3833c 100644
--- a/tools/virsh.c
+++ b/tools/virsh.c
@@ -4012,6 +4012,98 @@ cmdVcpuinfo(vshControl *ctl, const vshCmd *cmd)
 }
 
 /*
+ * "pcpuinfo" command
+ */
+static const vshCmdInfo info_pcpuinfo[] = {
+    {"help", N_("detailed domain pcpu information")},
+    {"desc", N_("Returns basic information about the domain's physical CPUs.")},
+    {NULL, NULL}
+};
+
+static const vshCmdOptDef opts_pcpuinfo[] = {
+    {"domain", VSH_OT_DATA, VSH_OFLAG_REQ, N_("domain name, id or uuid")},
+    {NULL, 0, 0, NULL}
+};
+
+static bool
+cmdPcpuinfo(vshControl *ctl, const vshCmd *cmd)
+{
+    virDomainInfo info;
+    virDomainPtr dom;
+    virNodeInfo nodeinfo;
+    virVcpuInfoPtr cpuinfo;
+    unsigned char *cpumaps;
+    int ncpus, maxcpu;
+    size_t cpumaplen;
+    bool ret = true;
+    int n, m;
+
+    if (!vshConnectionUsability(ctl, ctl->conn))
+        return false;
+
+    if (!(dom = vshCommandOptDomain(ctl, cmd, NULL)))
+        return false;
+
+    if (virNodeGetInfo(ctl->conn, &nodeinfo) != 0) {
+        virDomainFree(dom);
+        return false;
+    }
+
+    if (virDomainGetInfo(dom, &info) != 0) {
+        virDomainFree(dom);
+        return false;
+    }
+
+    cpuinfo = vshMalloc(ctl, sizeof(virVcpuInfo)*info.nrVirtCpu);
+    maxcpu = VIR_NODEINFO_MAXCPUS(nodeinfo);
+    cpumaplen = VIR_CPU_MAPLEN(maxcpu);
+    cpumaps = vshMalloc(ctl, info.nrVirtCpu * cpumaplen);
+
+    if ((ncpus = virDomainGetVcpus(dom,
+                                   cpuinfo, info.nrVirtCpu,
+                                   cpumaps, cpumaplen)) >= 0) {
+        unsigned long long *usages;
+        int nr_usages = maxcpu;
+
+        if (VIR_ALLOC_N(usages, nr_usages) < 0) {
+            virReportOOMError();
+            goto fail;
+        }
+
+        if (virDomainGetPcpusUsage(dom, usages, &nr_usages, 0) < 0) {
+            VIR_FREE(usages);
+            goto fail;
+        }
+
+        for (n = 0; n < MIN(maxcpu, nr_usages); n++) {
+            vshPrint(ctl, "%-15s %d\n", _("CPU:"), n);
+            for (m = 0; m < ncpus; m++) {
+                if (cpuinfo[m].cpu == n) {
+                    vshPrint(ctl, "%-15s %d\n", _("Curr VCPU:"), m);
+                    break;
+                }
+            }
+            if (m == ncpus) {
+                vshPrint(ctl, "%-15s %s\n", _("Curr VCPU:"), _("-"));
+            }
+            vshPrint(ctl, "%-15s %.1lf\n\n", _("Usage:"),
+                     usages[n] / 1000000000.0);
+        }
+        VIR_FREE(usages);
+        goto cleanup;
+    }
+
+fail:
+    ret = false;
+
+cleanup:
+    VIR_FREE(cpumaps);
+    VIR_FREE(cpuinfo);
+    virDomainFree(dom);
+    return ret;
+}
+
+/*
  * "vcpupin" command
  */
 static const vshCmdInfo info_vcpupin[] = {
@@ -15130,6 +15222,7 @@ static const vshCmdDef domManagementCmds[] = {
      opts_migrate_setspeed, info_migrate_setspeed, 0},
     {"migrate-getspeed", cmdMigrateGetMaxSpeed,
      opts_migrate_getspeed, info_migrate_getspeed, 0},
+    {"pcpuinfo", cmdPcpuinfo, opts_pcpuinfo, info_pcpuinfo, 0},
     {"reboot", cmdReboot, opts_reboot, info_reboot, 0},
     {"reset", cmdReset, opts_reset, info_reset, 0},
     {"restore", cmdRestore, opts_restore, info_restore, 0},
diff --git a/tools/virsh.pod b/tools/virsh.pod
index 5131ade..241e0dc 100644
--- a/tools/virsh.pod
+++ b/tools/virsh.pod
@@ -1217,6 +1217,11 @@ Thus, this command always takes exactly zero or two flags.
 Returns basic information about the domain virtual CPUs, like the number of
 vCPUs, the running time, the affinity to physical processors.
 
+=item B<pcpuinfo> I<domain-id>
+
+Returns information about the physical CPUs of the domain, like the per-CPU
+usage and the currently attached vCPUs.
+
 =item B<vcpupin> I<domain-id> [I<vcpu>] [I<cpulist>]
 [[I<--live>] [I<--config>] | [I<--current>]]
-- 
1.7.4.4
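Note that the "Usage:" value printed above is usages[n] / 1e9, i.e. cumulative seconds of guest time on that physical CPU, not a percentage. The "Curr VCPU:" column comes from the inner loop over cpuinfo[m].cpu, which can be isolated as follows; vcpu_on_pcpu() is a hypothetical helper written to mirror that loop, not part of the patch:

```c
#include <assert.h>

/* Given each vCPU's current physical CPU (virVcpuInfo.cpu in the real
 * code, here a plain int array), find which vCPU, if any, currently
 * runs on physical CPU 'pcpu'.  Returns the vCPU number, or -1 (shown
 * as "-" in the virsh output) when no vCPU is on that CPU. */
int vcpu_on_pcpu(const int *vcpu_cpu, int ncpus, int pcpu)
{
    int m;

    for (m = 0; m < ncpus; m++) {
        if (vcpu_cpu[m] == pcpu)
            return m;
    }
    return -1;
}
```

With the layout from the commit message (vCPU 0 on CPU 2, vCPU 1 on CPU 1), CPUs 0 and 3 report "-" while CPUs 1 and 2 report their vCPU numbers.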

On 12/07/2011 08:40 PM, Lai Jiangshan wrote:
"virt-top -1" can call virDomainGetPcpusUsage() periodically and get the CPU activity per physical CPU (this still requires a virt-top-side patch).
virsh also gains a pcpuinfo command which calls virDomainGetPcpusUsage(); it reports information about the physical CPUs, such as per-CPU usage and the currently attached vCPU.
Meta-question - is this the time taken _by just the guest in question_ on the host's physical CPU, or is it the cumulative CPU time taken by all processes on the physical CPU, in which case this would be better named virNodeGetCpusUsage()? And if it is per-domain, then what's the difference between physical cpu usage and virtual cpu usage (that is, can we even tell the overhead of the hypervisor, and should we be exposing that overhead to the user)?

-- 
Eric Blake   eblake@redhat.com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org

On 12/08/2011 11:44 AM, Eric Blake wrote:
On 12/07/2011 08:40 PM, Lai Jiangshan wrote:
"virt-top -1" can call virDomainGetPcpusUsage() periodically and get the CPU activity per physical CPU (this still requires a virt-top-side patch).
virsh also gains a pcpuinfo command which calls virDomainGetPcpusUsage(); it reports information about the physical CPUs, such as per-CPU usage and the currently attached vCPU.
Meta-question - is this the time taken _by just the guest in question_ on the host's physical CPU, or is it the cumulative CPU time taken by
The time is that used by the guest in question on the host's physical CPUs: it is per-domain and per physical CPU.
all processes on the physical CPU, in which case this would be better named virNodeGetCpusUsage()? And if it is per-domain, then what's the difference between physical cpu usage and virtual cpu usage (that is, can we even tell the overhead of the hypervisor, and should we be exposing that overhead to the user)?
I did not catch your meaning; could you explain more?

Thanks,
Lai

Hi, Eric

On 12/08/2011 11:44 AM, Eric Blake wrote:
On 12/07/2011 08:40 PM, Lai Jiangshan wrote:
"virt-top -1" can call virDomainGetPcpusUsage() periodically and get the CPU activity per physical CPU (this still requires a virt-top-side patch).
virsh also gains a pcpuinfo command which calls virDomainGetPcpusUsage(); it reports information about the physical CPUs, such as per-CPU usage and the currently attached vCPU.
Meta-question - is this the time taken _by just the guest in question_ on the host's physical CPU, or is it the cumulative CPU time taken by all processes on the physical CPU, in which case this would be better named virNodeGetCpusUsage()?
It's per-domain usage.
And if it is per-domain, then what's the difference between physical cpu usage and virtual cpu usage
The API tells us how much CPU utilization the whole guest occupies on every physical CPU. I don't know what "virtual cpu usage" means.
(that is, can we even tell the overhead of the hypervisor, and should we be exposing that overhead to the user)?
It is not exposed to the normal user. The administrator can use "virt-top -1" or "virsh pcpuinfo" to get the per-CPU usage: to find out which physical CPU is busy, to change the CPU binding of (new) guests, to find a suitable physical CPU and offline it, or to do nothing and simply observe the state of the guests.

I would very much appreciate your reply.

Thanks,
Lai