[libvirt] [PATCH 00/10] Support hypervisor-threads-pin in vcpupin.

Hi~

Users can use the vcpupin command to bind a vcpu thread to a specific
physical cpu. But besides vcpu threads, there are also some other threads
created by qemu (known as hypervisor threads) that could not be explicitly
bound to physical cpus.

These 10 patches implement hypervisor thread binding, in two ways:
1) Use the sched_setaffinity() function;
2) In the cpuset cgroup.

A new xml element is introduced, and the vcpupin command is improved, see below.

1. Introduce new xml elements:
   <cputune>
     ......
     <hypervisorpin cpuset='1'/>
   </cputune>

2. Improve the vcpupin command to support hypervisor thread binding.

   For example, vm1 has the following configuration:
   <cputune>
     <vcpupin vcpu='1' cpuset='1'/>
     <vcpupin vcpu='0' cpuset='0'/>
     <hypervisorpin cpuset='1'/>
   </cputune>

   1) query all thread pinning

   # vcpupin vm1
   VCPU: CPU Affinity
   ----------------------------------
      0: 0
      1: 1
   Hypervisor: CPU Affinity
   ----------------------------------
      *: 1

   2) query hypervisor thread pinning only

   # vcpupin vm1 --hypervisor
   Hypervisor: CPU Affinity
   ----------------------------------
      *: 1

   3) change hypervisor thread pinning

   # vcpupin vm1 --hypervisor 0-1
   # vcpupin vm1 --hypervisor
   Hypervisor: CPU Affinity
   ----------------------------------
      *: 0-1
   # taskset -p 397
   pid 397's current affinity mask: 3

Note: If users want to pin a vcpu thread to a pcpu, the --vcpu option can
no longer be omitted.

Tang Chen (10):
  Enable cpuset cgroup and synchronize vcpupin info to cgroup.
  Support hypervisorpin xml parse.
  Add qemuSetupCgroupHypervisorPin and synchronize hypervisorpin info to cgroup.
  Add qemuProcessSetHypervisorAffinites and set hypervisor threads affinities.
  Introduce virDomainHypervisorPinAdd and virDomainHypervisorPinDel functions.
  Introduce qemudDomainPinHypervisorFlags and qemudDomainGetHypervisorPinInfo in qemu driver.
  Introduce remoteDomainPinHypervisorFlags and remoteDomainGetHypervisorPinInfo functions in remote driver.
  Introduce remoteDispatchDomainPinHypervisorFlags and remoteDispatchDomainGetHypervisorPinInfo functions.
  Introduce virDomainPinHypervisorFlags and virDomainGetHypervisorPinInfo functions.
  Improve vcpupin to support hypervisorpin dynamically.

 daemon/remote.c                                 |  103 +++++++++
 docs/schemas/domaincommon.rng                   |    7 +
 include/libvirt/libvirt.h.in                    |    9 +
 src/conf/domain_conf.c                          |  172 ++++++++++++++-
 src/conf/domain_conf.h                          |    7 +
 src/driver.h                                    |   13 +-
 src/libvirt.c                                   |  147 +++++++++++++
 src/libvirt_private.syms                        |    2 +
 src/libvirt_public.syms                         |    2 +
 src/qemu/qemu_cgroup.c                          |   88 ++++++++
 src/qemu/qemu_cgroup.h                          |    3 +
 src/qemu/qemu_driver.c                          |  266 ++++++++++++++++++++++-
 src/qemu/qemu_process.c                         |   54 +++++
 src/remote/remote_driver.c                      |  102 +++++++++
 src/remote/remote_protocol.x                    |   24 ++-
 src/remote_protocol-structs                     |   24 ++
 src/util/cgroup.c                               |   35 +++-
 src/util/cgroup.h                               |    3 +
 tests/qemuxml2argvdata/qemuxml2argv-cputune.xml |    1 +
 tests/vcpupin                                   |    6 +-
 tools/virsh.c                                   |  145 ++++++++----
 tools/virsh.pod                                 |   16 +-
 22 files changed, 1158 insertions(+), 71 deletions(-)

-- 
1.7.3.1

-- 
Best Regards,
Tang chen

[libvirt] [PATCH 01/10] Enable cpuset cgroup and synchronize vcpupin info to cgroup.

This patch enables the cpuset cgroup, and synchronizes vcpupin info set by
sched_setaffinity() to the cgroup.

Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
---
 src/qemu/qemu_cgroup.c |   47 +++++++++++++++++++++++++++++++++++++++++++++++
 src/qemu/qemu_cgroup.h |    2 ++
 src/qemu/qemu_driver.c |   43 +++++++++++++++++++++++++++++++++++--------
 src/util/cgroup.c      |   35 ++++++++++++++++++++++++++++++++++-
 src/util/cgroup.h      |    3 +++
 5 files changed, 121 insertions(+), 9 deletions(-)

diff --git a/src/qemu/qemu_cgroup.c b/src/qemu/qemu_cgroup.c
index 7c6ef33..a123a00 100644
--- a/src/qemu/qemu_cgroup.c
+++ b/src/qemu/qemu_cgroup.c
@@ -471,11 +471,50 @@ cleanup:
     return -1;
 }
 
+int qemuSetupCgroupVcpuPin(virCgroupPtr cgroup, virDomainDefPtr def,
+                           int vcpuid)
+{
+    int i, rc;
+    char *new_cpus = NULL;
+
+    if (vcpuid < 0 || vcpuid >= def->vcpus) {
+        virReportSystemError(EINVAL,
+                             _("%s: %d"), _("invalid vcpuid"), vcpuid);
+        return -1;
+    }
+
+    for (i = 0; i < def->cputune.nvcpupin; i++) {
+        if (vcpuid == def->cputune.vcpupin[i]->vcpuid) {
+            new_cpus = virDomainCpuSetFormat(def->cputune.vcpupin[i]->cpumask,
+                                             VIR_DOMAIN_CPUMASK_LEN);
+            if (!new_cpus) {
+                qemuReportError(VIR_ERR_INTERNAL_ERROR,
+                                _("failed to convert cpu mask"));
+                goto cleanup;
+            }
+            rc = virCgroupSetCpusetCpus(cgroup, new_cpus);
+            if (rc < 0) {
+                virReportSystemError(-rc,
+                                     _("%s"), _("Unable to set cpuset.cpus"));
+                goto cleanup;
+            }
+        }
+    }
+    VIR_FREE(new_cpus);
+    return 0;
+
+cleanup:
+    if (new_cpus)
+        VIR_FREE(new_cpus);
+    return -1;
+}
+
 int qemuSetupCgroupForVcpu(struct qemud_driver *driver, virDomainObjPtr vm)
 {
     virCgroupPtr cgroup = NULL;
     virCgroupPtr cgroup_vcpu = NULL;
     qemuDomainObjPrivatePtr priv = vm->privateData;
+    virDomainDefPtr def = vm->def;
     int rc;
     unsigned int i;
     unsigned long long period = vm->def->cputune.period;
@@ -553,6 +592,14 @@ int qemuSetupCgroupForVcpu(struct qemud_driver *driver, virDomainObjPtr vm)
             }
         }
 
+        /* Set vcpupin in cgroup if vcpupin xml is provided */
+        if (def->cputune.nvcpupin) {
+            if (qemuCgroupControllerActive(driver, VIR_CGROUP_CONTROLLER_CPUSET)) {
+                if (qemuSetupCgroupVcpuPin(cgroup_vcpu, def, i) < 0)
+                    goto cleanup;
+            }
+        }
+
         virCgroupFree(&cgroup_vcpu);
     }
 
diff --git a/src/qemu/qemu_cgroup.h b/src/qemu/qemu_cgroup.h
index 92eff68..dbf783a 100644
--- a/src/qemu/qemu_cgroup.h
+++ b/src/qemu/qemu_cgroup.h
@@ -52,6 +52,8 @@ int qemuSetupCgroup(struct qemud_driver *driver,
 int qemuSetupCgroupVcpuBW(virCgroupPtr cgroup,
                           unsigned long long period,
                           long long quota);
+int qemuSetupCgroupVcpuPin(virCgroupPtr cgroup, virDomainDefPtr def,
+                           int vcpuid);
 int qemuSetupCgroupForVcpu(struct qemud_driver *driver, virDomainObjPtr vm);
 int qemuSetupCgroupForHypervisor(struct qemud_driver *driver,
                                  virDomainObjPtr vm);
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 165d1c0..9333d1c 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -3511,6 +3511,8 @@ qemudDomainPinVcpuFlags(virDomainPtr dom,
     struct qemud_driver *driver = dom->conn->privateData;
     virDomainObjPtr vm;
     virDomainDefPtr persistentDef = NULL;
+    virCgroupPtr cgroup_dom = NULL;
+    virCgroupPtr cgroup_vcpu = NULL;
     int maxcpu, hostcpus;
     virNodeInfo nodeinfo;
     int ret = -1;
@@ -3565,9 +3567,37 @@ qemudDomainPinVcpuFlags(virDomainPtr dom,
     if (flags & VIR_DOMAIN_AFFECT_LIVE) {
 
         if (priv->vcpupids != NULL) {
+            /* Add config to vm->def first, because cgroup APIs need it. */
+            if (virDomainVcpuPinAdd(vm->def, cpumap, maplen, vcpu) < 0) {
+                qemuReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+                                _("failed to update or add vcpupin xml of "
+                                  "a running domain"));
+                goto cleanup;
+            }
+
+            /* Configure the corresponding cpuset cgroup before set affinity. */
+            if (qemuCgroupControllerActive(driver,
+                                           VIR_CGROUP_CONTROLLER_CPUSET)) {
+                if (virCgroupForDomain(driver->cgroup, vm->def->name,
+                                       &cgroup_dom, 0) == 0) {
+                    if (virCgroupForVcpu(cgroup_dom, vcpu, &cgroup_vcpu, 0) == 0) {
+                        if (qemuSetupCgroupVcpuPin(cgroup_vcpu, vm->def, vcpu) < 0) {
+                            qemuReportError(VIR_ERR_OPERATION_INVALID, "%s %d",
+                                            _("failed to set cpuset.cpus in cgroup"
+                                              " for vcpu"), vcpu);
+                            goto cleanup;
+                        }
+                    }
+                }
+            }
+
             if (virProcessInfoSetAffinity(priv->vcpupids[vcpu],
-                                          cpumap, maplen, maxcpu) < 0)
+                                          cpumap, maplen, maxcpu) < 0) {
+                qemuReportError(VIR_ERR_SYSTEM_ERROR, "%s %d",
+                                _("failed to set cpu affinity for vcpu"),
+                                vcpu);
                 goto cleanup;
+            }
         } else {
             qemuReportError(VIR_ERR_OPERATION_INVALID,
                             "%s", _("cpu affinity is not supported"));
@@ -3581,13 +3611,6 @@ qemudDomainPinVcpuFlags(virDomainPtr dom,
                                 "a running domain"));
                 goto cleanup;
             }
-        } else {
-            if (virDomainVcpuPinAdd(vm->def, cpumap, maplen, vcpu) < 0) {
-                qemuReportError(VIR_ERR_INTERNAL_ERROR, "%s",
-                                _("failed to update or add vcpupin xml of "
-                                  "a running domain"));
-                goto cleanup;
-            }
         }
 
         if (virDomainSaveStatus(driver->caps, driver->stateDir, vm) < 0)
@@ -3619,6 +3642,10 @@ qemudDomainPinVcpuFlags(virDomainPtr dom,
     ret = 0;
 
 cleanup:
+    if (cgroup_vcpu)
+        virCgroupFree(&cgroup_vcpu);
+    if (cgroup_dom)
+        virCgroupFree(&cgroup_dom);
     if (vm)
         virDomainObjUnlock(vm);
     return ret;
diff --git a/src/util/cgroup.c b/src/util/cgroup.c
index 724cc6e..fd2e84d 100644
--- a/src/util/cgroup.c
+++ b/src/util/cgroup.c
@@ -530,7 +530,8 @@ static int virCgroupMakeGroup(virCgroupPtr parent, virCgroupPtr group,
             continue;
 
         /* We need to control cpu bandwidth for each vcpu now */
-        if ((flags & VIR_CGROUP_VCPU) && (i != VIR_CGROUP_CONTROLLER_CPU)) {
+        if ((flags & VIR_CGROUP_VCPU) && (i != VIR_CGROUP_CONTROLLER_CPU) &&
+            (i != VIR_CGROUP_CONTROLLER_CPUSET)) {
             /* treat it as unmounted and we can use virCgroupAddTask */
             VIR_FREE(group->controllers[i].mountPoint);
             continue;
@@ -1335,6 +1336,38 @@ int virCgroupGetCpusetMems(virCgroupPtr group, char **mems)
 }
 
 /**
+ * virCgroupSetCpusetCpus:
+ *
+ * @group: The cgroup to set cpuset.cpus for
+ * @cpus: the cpus to set
+ *
+ * Returns: 0 on success
+ */
+int virCgroupSetCpusetCpus(virCgroupPtr group, const char *cpus)
+{
+    return virCgroupSetValueStr(group,
+                                VIR_CGROUP_CONTROLLER_CPUSET,
+                                "cpuset.cpus",
+                                cpus);
+}
+
+/**
+ * virCgroupGetCpusetCpus:
+ *
+ * @group: The cgroup to get cpuset.cpus for
+ * @cpus: the cpus to get
+ *
+ * Returns: 0 on success
+ */
+int virCgroupGetCpusetCpus(virCgroupPtr group, char **cpus)
+{
+    return virCgroupGetValueStr(group,
+                                VIR_CGROUP_CONTROLLER_CPUSET,
+                                "cpuset.cpus",
+                                cpus);
+}
+
+/**
  * virCgroupDenyAllDevices:
  *
  * @group: The cgroup to deny all permissions, for all devices
diff --git a/src/util/cgroup.h b/src/util/cgroup.h
index 2419ef6..beade4e 100644
--- a/src/util/cgroup.h
+++ b/src/util/cgroup.h
@@ -131,6 +131,9 @@ int virCgroupGetFreezerState(virCgroupPtr group, char **state);
 int virCgroupSetCpusetMems(virCgroupPtr group, const char *mems);
 int virCgroupGetCpusetMems(virCgroupPtr group, char **mems);
 
+int virCgroupSetCpusetCpus(virCgroupPtr group, const char *cpus);
+int virCgroupGetCpusetCpus(virCgroupPtr group, char **cpus);
+
 int virCgroupRemove(virCgroupPtr group);
 
 void virCgroupFree(virCgroupPtr *group);
-- 
1.7.3.1
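For readers who have not poked at the cpuset controller directly: virCgroupSetCpusetCpus() ultimately amounts to writing a cpu list string into the group's cpuset.cpus file. A minimal standalone sketch of that effect, in C; the cgroup paths are assumptions and depend on where the cpuset controller is mounted on a given host:

    #include <stdio.h>

    /* Write a cpu list such as "0-1" into a cgroup's cpuset.cpus file.
     * Returns 0 on success, -1 on failure. */
    static int set_cpuset_cpus(const char *group_path, const char *cpus)
    {
        char path[1024];
        FILE *fp;

        snprintf(path, sizeof(path), "%s/cpuset.cpus", group_path);
        if (!(fp = fopen(path, "w")))
            return -1;
        if (fprintf(fp, "%s\n", cpus) < 0) {
            fclose(fp);
            return -1;
        }
        return fclose(fp) == 0 ? 0 : -1;
    }

    /* Hypothetical usage: pin vcpu0 of vm1 to pcpus 0-1.
     * The group path is an example layout, not a fixed libvirt path. */
    /* set_cpuset_cpus("/cgroup/cpuset/libvirt/qemu/vm1/vcpu0", "0-1"); */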

[libvirt] [PATCH 02/10] Support hypervisorpin xml parse.

Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
---
 docs/schemas/domaincommon.rng                   |    7 ++
 src/conf/domain_conf.c                          |   96 ++++++++++++++++++++++-
 src/conf/domain_conf.h                          |    1 +
 tests/qemuxml2argvdata/qemuxml2argv-cputune.xml |    1 +
 4 files changed, 102 insertions(+), 3 deletions(-)

diff --git a/docs/schemas/domaincommon.rng b/docs/schemas/domaincommon.rng
index 6a2a99f..6d8ac68 100644
--- a/docs/schemas/domaincommon.rng
+++ b/docs/schemas/domaincommon.rng
@@ -564,6 +564,13 @@
           </attribute>
         </element>
       </zeroOrMore>
+      <optional>
+        <element name="hypervisorpin">
+          <attribute name="cpuset">
+            <ref name="cpuset"/>
+          </attribute>
+        </element>
+      </optional>
     </element>
   </optional>
diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index e9e5f17..cd3079f 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -7580,6 +7580,50 @@ error:
     goto cleanup;
 }
 
+/* Parse the XML definition for hypervisorpin */
+static virDomainVcpuPinDefPtr
+virDomainHypervisorPinDefParseXML(const xmlNodePtr node)
+{
+    virDomainVcpuPinDefPtr def = NULL;
+    char *tmp = NULL;
+
+    if (VIR_ALLOC(def) < 0) {
+        virReportOOMError();
+        return NULL;
+    }
+
+    def->vcpuid = -1;
+
+    tmp = virXMLPropString(node, "cpuset");
+
+    if (tmp) {
+        char *set = tmp;
+        int cpumasklen = VIR_DOMAIN_CPUMASK_LEN;
+
+        if (VIR_ALLOC_N(def->cpumask, cpumasklen) < 0) {
+            virReportOOMError();
+            goto error;
+        }
+
+        if (virDomainCpuSetParse(set, 0, def->cpumask,
+                                 cpumasklen) < 0)
+            goto error;
+
+        VIR_FREE(tmp);
+    } else {
+        virDomainReportError(VIR_ERR_INTERNAL_ERROR,
+                             "%s", _("missing cpuset for hypervisor pin"));
+        goto error;
+    }
+
+cleanup:
+    return def;
+
+error:
+    VIR_FREE(def);
+    goto cleanup;
+}
+
 static int
 virDomainDefMaybeAddController(virDomainDefPtr def,
                                int type, int idx)
@@ -8012,6 +8056,34 @@ static virDomainDefPtr virDomainDefParseXML(virCapsPtr caps,
     }
     VIR_FREE(nodes);
 
+    if ((n = virXPathNodeSet("./cputune/hypervisorpin", ctxt, &nodes)) < 0) {
+        virDomainReportError(VIR_ERR_INTERNAL_ERROR,
+                             "%s", _("cannot extract hypervisorpin nodes"));
+        goto error;
+    }
+
+    if (n > 1) {
+        virDomainReportError(VIR_ERR_XML_ERROR, "%s",
+                             _("only one hypervisorpin is supported"));
+        VIR_FREE(nodes);
+        goto error;
+    }
+
+    if (n && VIR_ALLOC(def->cputune.hypervisorpin) < 0) {
+        goto no_memory;
+    }
+
+    if (n) {
+        virDomainVcpuPinDefPtr hypervisorpin = NULL;
+        hypervisorpin = virDomainHypervisorPinDefParseXML(nodes[0]);
+
+        if (!hypervisorpin)
+            goto error;
+
+        def->cputune.hypervisorpin = hypervisorpin;
+    }
+    VIR_FREE(nodes);
+
     /* Extract numatune if exists. */
     if ((n = virXPathNodeSet("./numatune", ctxt, &nodes)) < 0) {
         virDomainReportError(VIR_ERR_INTERNAL_ERROR,
@@ -8960,7 +9032,7 @@ no_memory:
     virReportOOMError();
     /* fallthrough */
 
- error:
+error:
     VIR_FREE(tmp);
     VIR_FREE(nodes);
     virBitmapFree(bootMap);
@@ -12464,7 +12536,8 @@ virDomainDefFormatInternal(virDomainDefPtr def,
     if (def->cputune.shares || def->cputune.vcpupin ||
         def->cputune.period || def->cputune.quota ||
-        def->cputune.hypervisor_period || def->cputune.hypervisor_quota)
+        def->cputune.hypervisor_period || def->cputune.hypervisor_quota ||
+        def->cputune.hypervisorpin)
         virBufferAddLit(buf, "  <cputune>\n");
 
     if (def->cputune.shares)
@@ -12507,9 +12580,26 @@ virDomainDefFormatInternal(virDomainDefPtr def,
         }
     }
 
+    if (def->cputune.hypervisorpin) {
+        virBufferAsprintf(buf, "    <hypervisorpin ");
+
+        char *cpumask = NULL;
+        cpumask = virDomainCpuSetFormat(def->cputune.hypervisorpin->cpumask,
+                                        VIR_DOMAIN_CPUMASK_LEN);
+        if (cpumask == NULL) {
+            virDomainReportError(VIR_ERR_INTERNAL_ERROR,
+                                 "%s", _("failed to format cpuset for hypervisor"));
+            goto cleanup;
+        }
+
+        virBufferAsprintf(buf, "cpuset='%s'/>\n", cpumask);
+        VIR_FREE(cpumask);
+    }
+
     if (def->cputune.shares || def->cputune.vcpupin ||
         def->cputune.period || def->cputune.quota ||
-        def->cputune.hypervisor_period || def->cputune.hypervisor_quota)
+        def->cputune.hypervisor_period || def->cputune.hypervisor_quota ||
+        def->cputune.hypervisorpin)
         virBufferAddLit(buf, "  </cputune>\n");
 
     if (def->numatune.memory.nodemask) {
diff --git a/src/conf/domain_conf.h b/src/conf/domain_conf.h
index f7d52c3..3db35d4 100644
--- a/src/conf/domain_conf.h
+++ b/src/conf/domain_conf.h
@@ -1566,6 +1566,7 @@ struct _virDomainDef {
         long long hypervisor_quota;
         int nvcpupin;
         virDomainVcpuPinDefPtr *vcpupin;
+        virDomainVcpuPinDefPtr hypervisorpin;
     } cputune;
 
     virDomainNumatuneDef numatune;
diff --git a/tests/qemuxml2argvdata/qemuxml2argv-cputune.xml b/tests/qemuxml2argvdata/qemuxml2argv-cputune.xml
index 6f70571..ad0f16a 100644
--- a/tests/qemuxml2argvdata/qemuxml2argv-cputune.xml
+++ b/tests/qemuxml2argvdata/qemuxml2argv-cputune.xml
@@ -10,6 +10,7 @@
     <quota>-1</quota>
     <vcpupin vcpu='0' cpuset='0'/>
     <vcpupin vcpu='1' cpuset='1'/>
+    <hypervisorpin cpuset='1'/>
   </cputune>
   <os>
     <type arch='i686' machine='pc'>hvm</type>
-- 
1.7.3.1
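Note that the cpumask filled in by virDomainCpuSetParse() above is stored as one byte per CPU (VIR_DOMAIN_CPUMASK_LEN bytes), not as a packed bit map. A simplified stand-in for that parser, handling only the 'N', 'N-M' and '^N' terms described in virsh.pod, shows the resulting layout (a sketch with no real validation, not libvirt's actual helper):

    #include <stdlib.h>
    #include <string.h>

    /* Fill one byte per CPU from a cpuset string such as "0-3,^2".
     * Returns 0 on success, -1 on a malformed string. */
    static int parse_cpuset(const char *str, char *mask, int masklen)
    {
        memset(mask, 0, masklen);
        while (*str) {
            int neg = (*str == '^');
            char *end;
            long i, first, last;

            if (neg)
                str++;
            first = last = strtol(str, &end, 10);
            if (*end == '-')
                last = strtol(end + 1, &end, 10);
            if (end == str || first < 0 || last >= masklen || first > last)
                return -1;
            for (i = first; i <= last; i++)
                mask[i] = neg ? 0 : 1;   /* '^' terms clear earlier bits */
            if (*end == ',')
                end++;
            else if (*end)
                return -1;
            str = end;
        }
        return 0;
    }

So "0-3,^2" yields mask[0]=1, mask[1]=1, mask[2]=0, mask[3]=1, which is exactly the byte-per-cpu shape the later patches iterate over.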

[libvirt] [PATCH 03/10] Add qemuSetupCgroupHypervisorPin and synchronize hypervisorpin info to cgroup.

Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
---
 src/qemu/qemu_cgroup.c |   41 +++++++++++++++++++++++++++++++++++++++++
 src/qemu/qemu_cgroup.h |    1 +
 2 files changed, 42 insertions(+), 0 deletions(-)

diff --git a/src/qemu/qemu_cgroup.c b/src/qemu/qemu_cgroup.c
index a123a00..2d8d9ee 100644
--- a/src/qemu/qemu_cgroup.c
+++ b/src/qemu/qemu_cgroup.c
@@ -509,6 +509,39 @@ cleanup:
     return -1;
 }
 
+int qemuSetupCgroupHypervisorPin(virCgroupPtr cgroup, virDomainDefPtr def)
+{
+    int rc;
+    char *new_cpus = NULL;
+
+    if (!def->cputune.hypervisorpin)
+        return 0;
+
+    new_cpus = virDomainCpuSetFormat(def->cputune.hypervisorpin->cpumask,
+                                     VIR_DOMAIN_CPUMASK_LEN);
+    if (!new_cpus) {
+        qemuReportError(VIR_ERR_INTERNAL_ERROR,
+                        _("failed to convert cpu mask"));
+        goto cleanup;
+    }
+
+    rc = virCgroupSetCpusetCpus(cgroup, new_cpus);
+    if (rc < 0) {
+        virReportSystemError(-rc,
+                             _("%s"), _("Unable to set cpuset.cpus"));
+        goto cleanup;
+    }
+
+    VIR_FREE(new_cpus);
+
+    return 0;
+
+cleanup:
+    if (new_cpus)
+        VIR_FREE(new_cpus);
+    return -1;
+}
+
 int qemuSetupCgroupForVcpu(struct qemud_driver *driver, virDomainObjPtr vm)
 {
     virCgroupPtr cgroup = NULL;
@@ -625,6 +658,7 @@ int qemuSetupCgroupForHypervisor(struct qemud_driver *driver,
     qemuDomainObjPrivatePtr priv = vm->privateData;
     unsigned long long period = vm->def->cputune.hypervisor_period;
     long long quota = vm->def->cputune.hypervisor_quota;
+    virDomainDefPtr def = vm->def;
     int rc;
 
     if (driver->cgroup == NULL)
@@ -670,6 +704,13 @@ int qemuSetupCgroupForHypervisor(struct qemud_driver *driver,
         }
     }
 
+    if (def->cputune.hypervisorpin) {
+        if (qemuCgroupControllerActive(driver, VIR_CGROUP_CONTROLLER_CPUSET)) {
+            if (qemuSetupCgroupHypervisorPin(cgroup_hypervisor, def) < 0)
+                goto cleanup;
+        }
+    }
+
     virCgroupFree(&cgroup_hypervisor);
     virCgroupFree(&cgroup);
     return 0;
diff --git a/src/qemu/qemu_cgroup.h b/src/qemu/qemu_cgroup.h
index dbf783a..8664cea 100644
--- a/src/qemu/qemu_cgroup.h
+++ b/src/qemu/qemu_cgroup.h
@@ -54,6 +54,7 @@ int qemuSetupCgroupVcpuBW(virCgroupPtr cgroup,
                           long long quota);
 int qemuSetupCgroupVcpuPin(virCgroupPtr cgroup, virDomainDefPtr def,
                            int vcpuid);
+int qemuSetupCgroupHypervisorPin(virCgroupPtr cgroup, virDomainDefPtr def);
 int qemuSetupCgroupForVcpu(struct qemud_driver *driver, virDomainObjPtr vm);
 int qemuSetupCgroupForHypervisor(struct qemud_driver *driver,
                                  virDomainObjPtr vm);
-- 
1.7.3.1

[libvirt] [PATCH 04/10] Add qemuProcessSetHypervisorAffinites and set hypervisor threads affinities.

Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
---
 src/qemu/qemu_process.c |   54 +++++++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 54 insertions(+), 0 deletions(-)

diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index b605eee..33f65e1 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -1985,6 +1985,56 @@ cleanup:
     return ret;
 }
 
+/* Set CPU affinities for hypervisor threads if hypervisorpin xml is provided. */
+static int
+qemuProcessSetHypervisorAffinites(virConnectPtr conn,
+                                  virDomainObjPtr vm)
+{
+    virDomainDefPtr def = vm->def;
+    pid_t pid = vm->pid;
+    unsigned char *cpumask = NULL;
+    unsigned char *cpumap = NULL;
+    virNodeInfo nodeinfo;
+    int cpumaplen, hostcpus, maxcpu, i;
+    int ret = -1;
+
+    if (virNodeGetInfo(conn, &nodeinfo) != 0)
+        return -1;
+
+    if (!def->cputune.hypervisorpin)
+        return 0;
+
+    hostcpus = VIR_NODEINFO_MAXCPUS(nodeinfo);
+    cpumaplen = VIR_CPU_MAPLEN(hostcpus);
+    maxcpu = cpumaplen * 8;
+
+    if (maxcpu > hostcpus)
+        maxcpu = hostcpus;
+
+    if (VIR_ALLOC_N(cpumap, cpumaplen) < 0) {
+        virReportOOMError();
+        return -1;
+    }
+
+    cpumask = (unsigned char *)def->cputune.hypervisorpin->cpumask;
+    for (i = 0; i < VIR_DOMAIN_CPUMASK_LEN; i++) {
+        if (cpumask[i])
+            VIR_USE_CPU(cpumap, i);
+    }
+
+    if (virProcessInfoSetAffinity(pid,
+                                  cpumap,
+                                  cpumaplen,
+                                  maxcpu) < 0) {
+        goto cleanup;
+    }
+
+    ret = 0;
+cleanup:
+    VIR_FREE(cpumap);
+    return ret;
+}
+
 static int
 qemuProcessInitPasswords(virConnectPtr conn,
                          struct qemud_driver *driver,
@@ -3676,6 +3726,10 @@ int qemuProcessStart(virConnectPtr conn,
     if (qemuProcessSetVcpuAffinites(conn, vm) < 0)
         goto cleanup;
 
+    VIR_DEBUG("Setting hypervisor threads affinities");
+    if (qemuProcessSetHypervisorAffinites(conn, vm) < 0)
+        goto cleanup;
+
     VIR_DEBUG("Setting any required VM passwords");
     if (qemuProcessInitPasswords(conn, driver, vm) < 0)
         goto cleanup;
-- 
1.7.3.1
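On Linux, virProcessInfoSetAffinity() ends up in sched_setaffinity(2). A minimal sketch of the same byte-map-to-cpu_set_t conversion done above, directly against the syscall wrapper (illustrative only, not libvirt's actual helper):

    #define _GNU_SOURCE
    #include <sched.h>
    #include <sys/types.h>

    /* Set the CPU affinity of the task identified by pid to the CPUs
     * flagged in cpumask (one byte per CPU, as in the hypervisorpin
     * cpumask above). Returns 0 on success, -1 with errno on failure. */
    static int pin_task(pid_t pid, const unsigned char *cpumask, int ncpus)
    {
        cpu_set_t set;
        int i;

        CPU_ZERO(&set);
        for (i = 0; i < ncpus; i++) {
            if (cpumask[i])
                CPU_SET(i, &set);
        }
        return sched_setaffinity(pid, sizeof(set), &set);
    }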

[libvirt] [PATCH 05/10] Introduce virDomainHypervisorPinAdd and virDomainHypervisorPinDel functions.

Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
---
 src/conf/domain_conf.c   |   76 ++++++++++++++++++++++++++++++++++++++++++++++
 src/conf/domain_conf.h   |    6 ++++
 src/libvirt_private.syms |    2 +
 3 files changed, 84 insertions(+), 0 deletions(-)

diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index cd3079f..4f56010 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -10690,6 +10690,82 @@ virDomainVcpuPinDel(virDomainDefPtr def, int vcpu)
     return 0;
 }
 
+int
+virDomainHypervisorPinAdd(virDomainDefPtr def,
+                          unsigned char *cpumap,
+                          int maplen)
+{
+    virDomainVcpuPinDefPtr hypervisorpin = NULL;
+    char *cpumask = NULL;
+    int i;
+
+    if (VIR_ALLOC_N(cpumask, VIR_DOMAIN_CPUMASK_LEN) < 0) {
+        virReportOOMError();
+        goto cleanup;
+    }
+
+    /* Reset cpumask to all 0s. */
+    for (i = 0; i < VIR_DOMAIN_CPUMASK_LEN; i++)
+        cpumask[i] = 0;
+
+    /* Convert bitmap (cpumap) to cpumask, which is a byte map. */
+    for (i = 0; i < maplen; i++) {
+        int cur;
+
+        for (cur = 0; cur < 8; cur++) {
+            if (cpumap[i] & (1 << cur))
+                cpumask[i * 8 + cur] = 1;
+        }
+    }
+
+    if (!def->cputune.hypervisorpin) {
+        /* No hypervisorpin exists yet. */
+        if (VIR_ALLOC(hypervisorpin) < 0) {
+            virReportOOMError();
+            goto cleanup;
+        }
+
+        hypervisorpin->vcpuid = -1;
+        hypervisorpin->cpumask = cpumask;
+        def->cputune.hypervisorpin = hypervisorpin;
+    } else {
+        /* Since there is only 1 hypervisorpin for each vm,
+         * just replace the old one.
+         */
+        VIR_FREE(def->cputune.hypervisorpin->cpumask);
+        def->cputune.hypervisorpin->cpumask = cpumask;
+    }
+
+    return 0;
+
+cleanup:
+    if (cpumask)
+        VIR_FREE(cpumask);
+    return -1;
+}
+
+int
+virDomainHypervisorPinDel(virDomainDefPtr def)
+{
+    virDomainVcpuPinDefPtr hypervisorpin = NULL;
+
+    /* No hypervisorpin exists yet */
+    if (!def->cputune.hypervisorpin) {
+        return 0;
+    }
+
+    hypervisorpin = def->cputune.hypervisorpin;
+
+    VIR_FREE(hypervisorpin->cpumask);
+    VIR_FREE(hypervisorpin);
+    def->cputune.hypervisorpin = NULL;
+
+    if (def->cputune.hypervisorpin)
+        return -1;
+
+    return 0;
+}
+
 static int
 virDomainLifecycleDefFormat(virBufferPtr buf,
                             int type,
diff --git a/src/conf/domain_conf.h b/src/conf/domain_conf.h
index 3db35d4..172cd95 100644
--- a/src/conf/domain_conf.h
+++ b/src/conf/domain_conf.h
@@ -1944,6 +1944,12 @@ int virDomainVcpuPinAdd(virDomainDefPtr def,
 int virDomainVcpuPinDel(virDomainDefPtr def, int vcpu);
 
+int virDomainHypervisorPinAdd(virDomainDefPtr def,
+                              unsigned char *cpumap,
+                              int maplen);
+
+int virDomainHypervisorPinDel(virDomainDefPtr def);
+
 int virDomainDiskIndexByName(virDomainDefPtr def, const char *name,
                              bool allow_ambiguous);
 const char *virDomainDiskPathByName(virDomainDefPtr, const char *name);
diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms
index 8612044..9e3238e 100644
--- a/src/libvirt_private.syms
+++ b/src/libvirt_private.syms
@@ -476,6 +476,8 @@ virDomainTimerTrackTypeFromString;
 virDomainTimerTrackTypeToString;
 virDomainVcpuPinAdd;
 virDomainVcpuPinDel;
+virDomainHypervisorPinAdd;
+virDomainHypervisorPinDel;
 virDomainVcpuPinFindByVcpu;
 virDomainVcpuPinIsDuplicate;
 virDomainVideoDefFree;
-- 
1.7.3.1
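The inner loop above is the bridge between the two mask layouts used throughout this series: the API-level cpumap (one bit per CPU, little-endian within each byte) and the internal cpumask (one byte per CPU). A standalone demonstration of the conversion, using 0x05 (CPUs 0 and 2) as input:

    #include <stdio.h>

    int main(void)
    {
        /* Wire format: one bit per CPU; 0x05 flags CPUs 0 and 2. */
        unsigned char cpumap[] = { 0x05 };
        char cpumask[8] = { 0 };          /* internal: one byte per CPU */
        int i, cur;

        for (i = 0; i < (int)sizeof(cpumap); i++)
            for (cur = 0; cur < 8; cur++)
                if (cpumap[i] & (1 << cur))
                    cpumask[i * 8 + cur] = 1;

        for (i = 0; i < 8; i++)
            printf("cpu%d: %s\n", i, cpumask[i] ? "usable" : "-");
        return 0;
    }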

[libvirt] [PATCH 06/10] Introduce qemudDomainPinHypervisorFlags and qemudDomainGetHypervisorPinInfo in qemu driver.

Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
---
 src/driver.h           |   13 +++-
 src/qemu/qemu_driver.c |  223 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 235 insertions(+), 1 deletions(-)

diff --git a/src/driver.h b/src/driver.h
index 03d249b..18f7f26 100644
--- a/src/driver.h
+++ b/src/driver.h
@@ -296,7 +296,16 @@ typedef int
                                          unsigned char *cpumaps,
                                          int maplen,
                                          unsigned int flags);
-
+typedef int
+    (*virDrvDomainPinHypervisorFlags)   (virDomainPtr domain,
+                                         unsigned char *cpumap,
+                                         int maplen,
+                                         unsigned int flags);
+typedef int
+    (*virDrvDomainGetHypervisorPinInfo) (virDomainPtr domain,
+                                         unsigned char *cpumaps,
+                                         int maplen,
+                                         unsigned int flags);
 typedef int
     (*virDrvDomainGetVcpus)             (virDomainPtr domain,
                                          virVcpuInfoPtr info,
@@ -908,6 +917,8 @@ struct _virDriver {
     virDrvDomainPinVcpu domainPinVcpu;
     virDrvDomainPinVcpuFlags domainPinVcpuFlags;
     virDrvDomainGetVcpuPinInfo domainGetVcpuPinInfo;
+    virDrvDomainPinHypervisorFlags domainPinHypervisorFlags;
+    virDrvDomainGetHypervisorPinInfo domainGetHypervisorPinInfo;
     virDrvDomainGetVcpus domainGetVcpus;
     virDrvDomainGetMaxVcpus domainGetMaxVcpus;
     virDrvDomainGetSecurityLabel domainGetSecurityLabel;
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 9333d1c..edca26a 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -3748,6 +3748,227 @@ cleanup:
 }
 
 static int
+qemudDomainPinHypervisorFlags(virDomainPtr dom,
+                              unsigned char *cpumap,
+                              int maplen,
+                              unsigned int flags)
+{
+    struct qemud_driver *driver = dom->conn->privateData;
+    virDomainObjPtr vm;
+    virCgroupPtr cgroup_dom = NULL;
+    virCgroupPtr cgroup_hypervisor = NULL;
+    pid_t pid;
+    virDomainDefPtr persistentDef = NULL;
+    int maxcpu, hostcpus;
+    virNodeInfo nodeinfo;
+    int ret = -1;
+    qemuDomainObjPrivatePtr priv;
+    bool canResetting = true;
+    int pcpu;
+
+    virCheckFlags(VIR_DOMAIN_AFFECT_LIVE |
+                  VIR_DOMAIN_AFFECT_CONFIG, -1);
+
+    qemuDriverLock(driver);
+    vm = virDomainFindByUUID(&driver->domains, dom->uuid);
+    qemuDriverUnlock(driver);
+
+    if (!vm) {
+        char uuidstr[VIR_UUID_STRING_BUFLEN];
+        virUUIDFormat(dom->uuid, uuidstr);
+        qemuReportError(VIR_ERR_NO_DOMAIN,
+                        _("no domain with matching uuid '%s'"), uuidstr);
+        goto cleanup;
+    }
+
+    if (virDomainLiveConfigHelperMethod(driver->caps, vm, &flags,
+                                        &persistentDef) < 0)
+        goto cleanup;
+
+    priv = vm->privateData;
+
+    if (nodeGetInfo(dom->conn, &nodeinfo) < 0)
+        goto cleanup;
+    hostcpus = VIR_NODEINFO_MAXCPUS(nodeinfo);
+    maxcpu = maplen * 8;
+    if (maxcpu > hostcpus)
+        maxcpu = hostcpus;
+    /* pinning to all physical cpus means resetting,
+     * so check if we can reset the setting.
+     */
+    for (pcpu = 0; pcpu < hostcpus; pcpu++) {
+        if ((cpumap[pcpu/8] & (1 << (pcpu % 8))) == 0) {
+            canResetting = false;
+            break;
+        }
+    }
+
+    pid = vm->pid;
+
+    if (flags & VIR_DOMAIN_AFFECT_LIVE) {
+
+        if (priv->vcpupids != NULL) {
+            if (virDomainHypervisorPinAdd(vm->def, cpumap, maplen) < 0) {
+                qemuReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+                                _("failed to update or add hypervisorpin xml "
+                                  "of a running domain"));
+                goto cleanup;
+            }
+
+            if (qemuCgroupControllerActive(driver,
+                                           VIR_CGROUP_CONTROLLER_CPUSET)) {
+                /*
+                 * Configure the corresponding cpuset cgroup.
+                 * If no cgroup for domain or hypervisor exists, do nothing.
+                 */
+                if (virCgroupForDomain(driver->cgroup, vm->def->name,
+                                       &cgroup_dom, 0) == 0) {
+                    if (virCgroupForHypervisor(cgroup_dom, &cgroup_hypervisor, 0) == 0) {
+                        if (qemuSetupCgroupHypervisorPin(cgroup_hypervisor, vm->def) < 0) {
+                            qemuReportError(VIR_ERR_OPERATION_INVALID, "%s",
+                                            _("failed to set cpuset.cpus in cgroup"
+                                              " for hypervisor threads"));
+                            goto cleanup;
+                        }
+                    }
+                }
+            }
+
+            if (canResetting) {
+                if (virDomainHypervisorPinDel(vm->def) < 0) {
+                    qemuReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+                                    _("failed to delete hypervisorpin xml of "
+                                      "a running domain"));
+                    goto cleanup;
+                }
+            }
+
+            if (virProcessInfoSetAffinity(pid, cpumap, maplen, maxcpu) < 0) {
+                qemuReportError(VIR_ERR_SYSTEM_ERROR, "%s",
+                                _("failed to set cpu affinity for "
+                                  "hypervisor threads"));
+                goto cleanup;
+            }
+        } else {
+            qemuReportError(VIR_ERR_OPERATION_INVALID,
+                            "%s", _("cpu affinity is not supported"));
+            goto cleanup;
+        }
+
+        if (virDomainSaveStatus(driver->caps, driver->stateDir, vm) < 0)
+            goto cleanup;
+    }
+
+    if (flags & VIR_DOMAIN_AFFECT_CONFIG) {
+
+        if (canResetting) {
+            if (virDomainHypervisorPinDel(persistentDef) < 0) {
+                qemuReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+                                _("failed to delete hypervisorpin xml of "
+                                  "a persistent domain"));
+                goto cleanup;
+            }
+        } else {
+            if (virDomainHypervisorPinAdd(persistentDef, cpumap, maplen) < 0) {
+                qemuReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+                                _("failed to update or add hypervisorpin xml "
+                                  "of a persistent domain"));
+                goto cleanup;
+            }
+        }
+
+        ret = virDomainSaveConfig(driver->configDir, persistentDef);
+        goto cleanup;
+    }
+
+    ret = 0;
+
+cleanup:
+    if (cgroup_hypervisor)
+        virCgroupFree(&cgroup_hypervisor);
+    if (cgroup_dom)
+        virCgroupFree(&cgroup_dom);
+
+    if (vm)
+        virDomainObjUnlock(vm);
+    return ret;
+}
+
+static int
+qemudDomainGetHypervisorPinInfo(virDomainPtr dom,
+                                unsigned char *cpumaps,
+                                int maplen,
+                                unsigned int flags)
+{
+    struct qemud_driver *driver = dom->conn->privateData;
+    virDomainObjPtr vm = NULL;
+    virNodeInfo nodeinfo;
+    virDomainDefPtr targetDef = NULL;
+    int ret = -1;
+    int maxcpu, hostcpus, pcpu;
+    virDomainVcpuPinDefPtr hypervisorpin = NULL;
+    char *cpumask = NULL;
+
+    virCheckFlags(VIR_DOMAIN_AFFECT_LIVE |
+                  VIR_DOMAIN_AFFECT_CONFIG, -1);
+
+    qemuDriverLock(driver);
+    vm = virDomainFindByUUID(&driver->domains, dom->uuid);
+    qemuDriverUnlock(driver);
+
+    if (!vm) {
+        char uuidstr[VIR_UUID_STRING_BUFLEN];
+        virUUIDFormat(dom->uuid, uuidstr);
+        qemuReportError(VIR_ERR_NO_DOMAIN,
+                        _("no domain with matching uuid '%s'"), uuidstr);
+        goto cleanup;
+    }
+
+    if (virDomainLiveConfigHelperMethod(driver->caps, vm, &flags,
+                                        &targetDef) < 0)
+        goto cleanup;
+
+    if (flags & VIR_DOMAIN_AFFECT_LIVE)
+        targetDef = vm->def;
+
+    /* Coverity didn't realize that targetDef must be set if we got here. */
+    sa_assert(targetDef);
+
+    if (nodeGetInfo(dom->conn, &nodeinfo) < 0)
+        goto cleanup;
+    hostcpus = VIR_NODEINFO_MAXCPUS(nodeinfo);
+    maxcpu = maplen * 8;
+    if (maxcpu > hostcpus)
+        maxcpu = hostcpus;
+
+    /* initialize cpumaps */
+    memset(cpumaps, 0xff, maplen);
+    if (maxcpu % 8) {
+        cpumaps[maplen - 1] &= (1 << maxcpu % 8) - 1;
+    }
+
+    /* If no hypervisorpin, all cpus should be used */
+    hypervisorpin = targetDef->cputune.hypervisorpin;
+    if (!hypervisorpin) {
+        ret = 0;
+        goto cleanup;
+    }
+
+    cpumask = hypervisorpin->cpumask;
+    for (pcpu = 0; pcpu < maxcpu; pcpu++) {
+        if (cpumask[pcpu] == 0)
+            VIR_UNUSE_CPU(cpumaps, pcpu);
+    }
+
+    ret = 1;
+
+cleanup:
+    if (vm)
+        virDomainObjUnlock(vm);
+    return ret;
+}
+
+static int
 qemudDomainGetVcpus(virDomainPtr dom,
                     virVcpuInfoPtr info,
                     int maxinfo,
@@ -12978,6 +13199,8 @@ static virDriver qemuDriver = {
     .domainPinVcpu = qemudDomainPinVcpu, /* 0.4.4 */
     .domainPinVcpuFlags = qemudDomainPinVcpuFlags, /* 0.9.3 */
     .domainGetVcpuPinInfo = qemudDomainGetVcpuPinInfo, /* 0.9.3 */
+    .domainPinHypervisorFlags = qemudDomainPinHypervisorFlags, /* 0.9.11 */
+    .domainGetHypervisorPinInfo = qemudDomainGetHypervisorPinInfo, /* 0.9.11 */
    .domainGetVcpus = qemudDomainGetVcpus, /* 0.4.4 */
    .domainGetMaxVcpus = qemudDomainGetMaxVcpus, /* 0.4.4 */
    .domainGetSecurityLabel = qemudDomainGetSecurityLabel, /* 0.6.1 */
-- 
1.7.3.1
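The canResetting test above treats a map that covers every host CPU as a request to drop the pinning instead of storing it, which is why pinning to "0-1" on a 2-CPU host removes the hypervisorpin element. Worked through for an assumed 4-CPU host:

    #include <stdbool.h>
    #include <stdio.h>

    int main(void)
    {
        /* 4-CPU host, cpumap 0x0f: every pcpu is set, so this counts
         * as a reset and the hypervisorpin element would be deleted. */
        unsigned char cpumap[] = { 0x0f };
        int hostcpus = 4, pcpu;
        bool canResetting = true;

        for (pcpu = 0; pcpu < hostcpus; pcpu++) {
            if ((cpumap[pcpu / 8] & (1 << (pcpu % 8))) == 0) {
                canResetting = false;
                break;
            }
        }
        printf("canResetting = %s\n", canResetting ? "true" : "false");
        return 0;
    }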

[libvirt] [PATCH 07/10] Introduce remoteDomainPinHypervisorFlags and remoteDomainGetHypervisorPinInfo functions in remote driver.

Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
---
 src/remote/remote_driver.c   |  102 ++++++++++++++++++++++++++++++++++++++++++
 src/remote/remote_protocol.x |   24 +++++++++-
 src/remote_protocol-structs  |   24 ++++++++++
 3 files changed, 149 insertions(+), 1 deletions(-)

diff --git a/src/remote/remote_driver.c b/src/remote/remote_driver.c
index af46384..884d4a4 100644
--- a/src/remote/remote_driver.c
+++ b/src/remote/remote_driver.c
@@ -1743,6 +1743,106 @@ done:
 }
 
 static int
+remoteDomainPinHypervisorFlags (virDomainPtr dom,
+                                unsigned char *cpumap,
+                                int cpumaplen,
+                                unsigned int flags)
+{
+    int rv = -1;
+    struct private_data *priv = dom->conn->privateData;
+    remote_domain_pin_hypervisor_flags_args args;
+
+    remoteDriverLock(priv);
+
+    if (cpumaplen > REMOTE_CPUMAP_MAX) {
+        remoteError(VIR_ERR_RPC,
+                    _("%s length greater than maximum: %d > %d"),
+                    "cpumap", (int)cpumaplen, REMOTE_CPUMAP_MAX);
+        goto done;
+    }
+
+    make_nonnull_domain(&args.dom, dom);
+    args.vcpu = -1;
+    args.cpumap.cpumap_val = (char *)cpumap;
+    args.cpumap.cpumap_len = cpumaplen;
+    args.flags = flags;
+
+    if (call(dom->conn, priv, 0, REMOTE_PROC_DOMAIN_PIN_HYPERVISOR_FLAGS,
+             (xdrproc_t) xdr_remote_domain_pin_hypervisor_flags_args,
+             (char *) &args,
+             (xdrproc_t) xdr_void, (char *) NULL) == -1) {
+        goto done;
+    }
+
+    rv = 0;
+
+done:
+    remoteDriverUnlock(priv);
+    return rv;
+}
+
+
+static int
+remoteDomainGetHypervisorPinInfo (virDomainPtr domain,
+                                  unsigned char *cpumaps,
+                                  int maplen,
+                                  unsigned int flags)
+{
+    int rv = -1;
+    int i;
+    remote_domain_get_hypervisor_pin_info_args args;
+    remote_domain_get_hypervisor_pin_info_ret ret;
+    struct private_data *priv = domain->conn->privateData;
+
+    remoteDriverLock(priv);
+
+    /* There is only one cpumap for all hypervisor threads */
+    if (INT_MULTIPLY_OVERFLOW(1, maplen) ||
+        maplen > REMOTE_CPUMAPS_MAX) {
+        remoteError(VIR_ERR_RPC,
+                    _("vCPU map buffer length exceeds maximum: %d > %d"),
+                    maplen, REMOTE_CPUMAPS_MAX);
+        goto done;
+    }
+
+    make_nonnull_domain(&args.dom, domain);
+    args.ncpumaps = 1;
+    args.maplen = maplen;
+    args.flags = flags;
+
+    memset(&ret, 0, sizeof ret);
+
+    if (call (domain->conn, priv, 0, REMOTE_PROC_DOMAIN_GET_HYPERVISOR_PIN_INFO,
+              (xdrproc_t) xdr_remote_domain_get_hypervisor_pin_info_args,
+              (char *) &args,
+              (xdrproc_t) xdr_remote_domain_get_hypervisor_pin_info_ret,
+              (char *) &ret) == -1)
+        goto done;
+
+    if (ret.cpumaps.cpumaps_len > maplen) {
+        remoteError(VIR_ERR_RPC,
+                    _("host reports map buffer length exceeds maximum: %d > %d"),
+                    ret.cpumaps.cpumaps_len, maplen);
+        goto cleanup;
+    }
+
+    memset(cpumaps, 0, maplen);
+
+    for (i = 0; i < ret.cpumaps.cpumaps_len; ++i)
+        cpumaps[i] = ret.cpumaps.cpumaps_val[i];
+
+    rv = ret.num;
+
+cleanup:
+    xdr_free ((xdrproc_t) xdr_remote_domain_get_hypervisor_pin_info_ret,
+              (char *) &ret);
+
+done:
+    remoteDriverUnlock(priv);
+    return rv;
+}
+
+static int
 remoteDomainGetVcpus (virDomainPtr domain,
                       virVcpuInfoPtr info,
                       int maxinfo,
@@ -4998,6 +5098,8 @@ static virDriver remote_driver = {
     .domainPinVcpu = remoteDomainPinVcpu, /* 0.3.0 */
     .domainPinVcpuFlags = remoteDomainPinVcpuFlags, /* 0.9.3 */
     .domainGetVcpuPinInfo = remoteDomainGetVcpuPinInfo, /* 0.9.3 */
+    .domainPinHypervisorFlags = remoteDomainPinHypervisorFlags, /* 0.9.11 */
+    .domainGetHypervisorPinInfo = remoteDomainGetHypervisorPinInfo, /* 0.9.11 */
     .domainGetVcpus = remoteDomainGetVcpus, /* 0.3.0 */
     .domainGetMaxVcpus = remoteDomainGetMaxVcpus, /* 0.3.0 */
     .domainGetSecurityLabel = remoteDomainGetSecurityLabel, /* 0.6.1 */
diff --git a/src/remote/remote_protocol.x b/src/remote/remote_protocol.x
index 2d57247..1ad9b44 100644
--- a/src/remote/remote_protocol.x
+++ b/src/remote/remote_protocol.x
@@ -1054,6 +1054,25 @@ struct remote_domain_get_vcpu_pin_info_ret {
     int num;
 };
 
+struct remote_domain_pin_hypervisor_flags_args {
+    remote_nonnull_domain dom;
+    unsigned int vcpu;
+    opaque cpumap<REMOTE_CPUMAP_MAX>; /* (unsigned char *) */
+    unsigned int flags;
+};
+
+struct remote_domain_get_hypervisor_pin_info_args {
+    remote_nonnull_domain dom;
+    int ncpumaps;
+    int maplen;
+    unsigned int flags;
+};
+
+struct remote_domain_get_hypervisor_pin_info_ret {
+    opaque cpumaps<REMOTE_CPUMAPS_MAX>;
+    int num;
+};
+
 struct remote_domain_get_vcpus_args {
     remote_nonnull_domain dom;
     int maxinfo;
@@ -2782,7 +2801,10 @@ enum remote_procedure {
     REMOTE_PROC_DOMAIN_PM_WAKEUP = 267, /* autogen autogen */
     REMOTE_PROC_DOMAIN_EVENT_TRAY_CHANGE = 268, /* autogen autogen */
     REMOTE_PROC_DOMAIN_EVENT_PMWAKEUP = 269, /* autogen autogen */
-    REMOTE_PROC_DOMAIN_EVENT_PMSUSPEND = 270 /* autogen autogen */
+    REMOTE_PROC_DOMAIN_EVENT_PMSUSPEND = 270, /* autogen autogen */
+
+    REMOTE_PROC_DOMAIN_PIN_HYPERVISOR_FLAGS = 271, /* skipgen skipgen */
+    REMOTE_PROC_DOMAIN_GET_HYPERVISOR_PIN_INFO = 272 /* skipgen skipgen */
 
     /*
      * Notice how the entries are grouped in sets of 10 ?
diff --git a/src/remote_protocol-structs b/src/remote_protocol-structs
index 9b2414f..69a80b9 100644
--- a/src/remote_protocol-structs
+++ b/src/remote_protocol-structs
@@ -718,6 +718,28 @@ struct remote_domain_get_vcpu_pin_info_ret {
         } cpumaps;
         int num;
 };
+struct remote_domain_pin_hypervisor_flags_args {
+        remote_nonnull_domain dom;
+        u_int vcpu;
+        struct {
+                u_int cpumap_len;
+                char * cpumap_val;
+        } cpumap;
+        u_int flags;
+};
+struct remote_domain_get_hypervisor_pin_info_args {
+        remote_nonnull_domain dom;
+        int ncpumaps;
+        int maplen;
+        u_int flags;
+};
+struct remote_domain_get_hypervisor_pin_info_ret {
+        struct {
+                u_int cpumaps_len;
+                char * cpumaps_val;
+        } cpumaps;
+        int num;
+};
 struct remote_domain_get_vcpus_args {
         remote_nonnull_domain dom;
         int maxinfo;
@@ -2192,4 +2214,6 @@ enum remote_procedure {
         REMOTE_PROC_DOMAIN_EVENT_TRAY_CHANGE = 268,
         REMOTE_PROC_DOMAIN_EVENT_PMWAKEUP = 269,
         REMOTE_PROC_DOMAIN_EVENT_PMSUSPEND = 270,
+        REMOTE_PROC_DOMAIN_PIN_HYPERVISOR_FLAGS = 271,
+        REMOTE_PROC_DOMAIN_GET_HYPERVISOR_PIN_INFO = 272,
 };
-- 
1.7.3.1

[libvirt] [PATCH 08/10] Introduce remoteDispatchDomainPinHypervisorFlags and remoteDispatchDomainGetHypervisorPinInfo functions.

Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
---
 daemon/remote.c |  103 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 103 insertions(+), 0 deletions(-)

diff --git a/daemon/remote.c b/daemon/remote.c
index 16a8a05..524c4bf 100644
--- a/daemon/remote.c
+++ b/daemon/remote.c
@@ -1454,6 +1454,109 @@ no_memory:
 }
 
 static int
+remoteDispatchDomainPinHypervisorFlags(virNetServerPtr server ATTRIBUTE_UNUSED,
+                                       virNetServerClientPtr client,
+                                       virNetMessagePtr msg ATTRIBUTE_UNUSED,
+                                       virNetMessageErrorPtr rerr,
+                                       remote_domain_pin_hypervisor_flags_args *args)
+{
+    int rv = -1;
+    virDomainPtr dom = NULL;
+    struct daemonClientPrivate *priv =
+        virNetServerClientGetPrivateData(client);
+
+    if (!priv->conn) {
+        virNetError(VIR_ERR_INTERNAL_ERROR, "%s", _("connection not open"));
+        goto cleanup;
+    }
+
+    if (!(dom = get_nonnull_domain(priv->conn, args->dom)))
+        goto cleanup;
+
+    if (virDomainPinHypervisorFlags(dom,
+                                    (unsigned char *) args->cpumap.cpumap_val,
+                                    args->cpumap.cpumap_len,
+                                    args->flags) < 0)
+        goto cleanup;
+
+    rv = 0;
+
+cleanup:
+    if (rv < 0)
+        virNetMessageSaveError(rerr);
+    if (dom)
+        virDomainFree(dom);
+    return rv;
+}
+
+
+static int
+remoteDispatchDomainGetHypervisorPinInfo(virNetServerPtr server ATTRIBUTE_UNUSED,
+                                         virNetServerClientPtr client ATTRIBUTE_UNUSED,
+                                         virNetMessagePtr msg ATTRIBUTE_UNUSED,
+                                         virNetMessageErrorPtr rerr,
+                                         remote_domain_get_hypervisor_pin_info_args *args,
+                                         remote_domain_get_hypervisor_pin_info_ret *ret)
+{
+    virDomainPtr dom = NULL;
+    unsigned char *cpumaps = NULL;
+    int num;
+    int rv = -1;
+    struct daemonClientPrivate *priv =
+        virNetServerClientGetPrivateData(client);
+
+    if (!priv->conn) {
+        virNetError(VIR_ERR_INTERNAL_ERROR, "%s", _("connection not open"));
+        goto cleanup;
+    }
+
+    if (!(dom = get_nonnull_domain(priv->conn, args->dom)))
+        goto cleanup;
+
+    /* There is only one cpumap struct for all hypervisor threads */
+    if (args->ncpumaps != 1) {
+        virNetError(VIR_ERR_INTERNAL_ERROR, "%s", _("ncpumaps != 1"));
+        goto cleanup;
+    }
+
+    if (INT_MULTIPLY_OVERFLOW(args->ncpumaps, args->maplen) ||
+        args->ncpumaps * args->maplen > REMOTE_CPUMAPS_MAX) {
+        virNetError(VIR_ERR_INTERNAL_ERROR, "%s",
+                    _("maxinfo * maplen > REMOTE_CPUMAPS_MAX"));
+        goto cleanup;
+    }
+
+    /* Allocate buffers to take the results */
+    if (args->maplen > 0 &&
+        VIR_ALLOC_N(cpumaps, args->maplen) < 0)
+        goto no_memory;
+
+    if ((num = virDomainGetHypervisorPinInfo(dom,
+                                             cpumaps,
+                                             args->maplen,
+                                             args->flags)) < 0)
+        goto cleanup;
+
+    ret->num = num;
+    ret->cpumaps.cpumaps_len = args->maplen;
+    ret->cpumaps.cpumaps_val = (char *) cpumaps;
+    cpumaps = NULL;
+
+    rv = 0;
+
+cleanup:
+    if (rv < 0)
+        virNetMessageSaveError(rerr);
+    VIR_FREE(cpumaps);
+    if (dom)
+        virDomainFree(dom);
+    return rv;
+
+no_memory:
+    virReportOOMError();
+    goto cleanup;
+}
+
+static int
 remoteDispatchDomainGetVcpus(virNetServerPtr server ATTRIBUTE_UNUSED,
                              virNetServerClientPtr client ATTRIBUTE_UNUSED,
                              virNetMessagePtr msg ATTRIBUTE_UNUSED,
-- 
1.7.3.1

[libvirt] [PATCH 09/10] Introduce virDomainPinHypervisorFlags and virDomainGetHypervisorPinInfo functions.

Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
---
 include/libvirt/libvirt.h.in |    9 +++
 src/libvirt.c                |  147 ++++++++++++++++++++++++++++++++++++++++++
 src/libvirt_public.syms      |    2 +
 3 files changed, 158 insertions(+), 0 deletions(-)

diff --git a/include/libvirt/libvirt.h.in b/include/libvirt/libvirt.h.in
index 8caee0d..281a800 100644
--- a/include/libvirt/libvirt.h.in
+++ b/include/libvirt/libvirt.h.in
@@ -1835,6 +1835,15 @@ int virDomainGetVcpuPinInfo (virDomainPtr domain,
                                          unsigned char *cpumaps,
                                          int maplen,
                                          unsigned int flags);
+int virDomainPinHypervisorFlags (virDomainPtr domain,
+                                 unsigned char *cpumap,
+                                 int maplen,
+                                 unsigned int flags);
+
+int virDomainGetHypervisorPinInfo (virDomainPtr domain,
+                                   unsigned char *cpumaps,
+                                   int maplen,
+                                   unsigned int flags);
 
 /**
  * VIR_USE_CPU:
diff --git a/src/libvirt.c b/src/libvirt.c
index af42d3b..5a26166 100644
--- a/src/libvirt.c
+++ b/src/libvirt.c
@@ -8857,6 +8857,153 @@ error:
 }
 
 /**
+ * virDomainPinHypervisorFlags:
+ * @domain: pointer to domain object, or NULL for Domain0
+ * @cpumap: pointer to a bit map of real CPUs (in 8-bit bytes) (IN)
+ *      Each bit set to 1 means that corresponding CPU is usable.
+ *      Bytes are stored in little-endian order: CPU0-7, 8-15...
+ *      In each byte, lowest CPU number is least significant bit.
+ * @maplen: number of bytes in cpumap, from 1 up to size of CPU map in
+ *      underlying virtualization system (Xen...).
+ *      If maplen < size, missing bytes are set to zero.
+ *      If maplen > size, failure code is returned.
+ * @flags: bitwise-OR of virDomainModificationImpact
+ *
+ * Dynamically change the real CPUs which can be allocated to all hypervisor
+ * threads. This function may require privileged access to the hypervisor.
+ *
+ * @flags may include VIR_DOMAIN_AFFECT_LIVE or VIR_DOMAIN_AFFECT_CONFIG.
+ * Both flags may be set.
+ * If VIR_DOMAIN_AFFECT_LIVE is set, the change affects a running domain
+ * and may fail if domain is not alive.
+ * If VIR_DOMAIN_AFFECT_CONFIG is set, the change affects persistent state,
+ * and will fail for transient domains. If neither flag is specified (that is,
+ * @flags is VIR_DOMAIN_AFFECT_CURRENT), then an inactive domain modifies
+ * persistent setup, while an active domain is hypervisor-dependent on whether
+ * just live or both live and persistent state is changed.
+ * Not all hypervisors can support all flag combinations.
+ *
+ * See also virDomainGetHypervisorPinInfo for querying this information.
+ *
+ * Returns 0 in case of success, -1 in case of failure.
+ */
+int
+virDomainPinHypervisorFlags(virDomainPtr domain, unsigned char *cpumap,
+                            int maplen, unsigned int flags)
+{
+    virConnectPtr conn;
+
+    VIR_DOMAIN_DEBUG(domain, "cpumap=%p, maplen=%d, flags=%x",
+                     cpumap, maplen, flags);
+
+    virResetLastError();
+
+    if (!VIR_IS_CONNECTED_DOMAIN(domain)) {
+        virLibDomainError(VIR_ERR_INVALID_DOMAIN, __FUNCTION__);
+        virDispatchError(NULL);
+        return -1;
+    }
+
+    if (domain->conn->flags & VIR_CONNECT_RO) {
+        virLibDomainError(VIR_ERR_OPERATION_DENIED, __FUNCTION__);
+        goto error;
+    }
+
+    if ((cpumap == NULL) || (maplen < 1)) {
+        virLibDomainError(VIR_ERR_INVALID_ARG, __FUNCTION__);
+        goto error;
+    }
+
+    conn = domain->conn;
+
+    if (conn->driver->domainPinHypervisorFlags) {
+        int ret;
+        ret = conn->driver->domainPinHypervisorFlags (domain, cpumap, maplen, flags);
+        if (ret < 0)
+            goto error;
+        return ret;
+    }
+
+    virLibConnError(VIR_ERR_NO_SUPPORT, __FUNCTION__);
+
+error:
+    virDispatchError(domain->conn);
+    return -1;
+}
+
+/**
+ * virDomainGetHypervisorPinInfo:
+ * @domain: pointer to domain object, or NULL for Domain0
+ * @cpumap: pointer to a bit map of real CPUs for all hypervisor threads of
+ *     this domain (in 8-bit bytes) (OUT)
+ *     There is only one cpumap for all hypervisor threads.
+ *     Must not be NULL.
+ * @maplen: the number of bytes in one cpumap, from 1 up to size of CPU map.
+ *     Must be positive.
+ * @flags: bitwise-OR of virDomainModificationImpact
+ *     Must not be VIR_DOMAIN_AFFECT_LIVE and
+ *     VIR_DOMAIN_AFFECT_CONFIG concurrently.
+ *
+ * Query the CPU affinity setting of all hypervisor threads of domain, store
+ * it in cpumap.
+ *
+ * Returns 1 in case of success,
+ * 0 in case no hypervisor threads are pinned to pcpus,
+ * -1 in case of failure.
+ */
+int
+virDomainGetHypervisorPinInfo(virDomainPtr domain, unsigned char *cpumap,
+                              int maplen, unsigned int flags)
+{
+    virConnectPtr conn;
+
+    VIR_DOMAIN_DEBUG(domain, "cpumap=%p, maplen=%d, flags=%x",
+                     cpumap, maplen, flags);
+
+    virResetLastError();
+
+    if (!VIR_IS_CONNECTED_DOMAIN(domain)) {
+        virLibDomainError(VIR_ERR_INVALID_DOMAIN, __FUNCTION__);
+        virDispatchError(NULL);
+        return -1;
+    }
+
+    if (!cpumap || maplen <= 0) {
+        virLibDomainError(VIR_ERR_INVALID_ARG, __FUNCTION__);
+        goto error;
+    }
+    if (INT_MULTIPLY_OVERFLOW(1, maplen)) {
+        virLibDomainError(VIR_ERR_OVERFLOW, _("input too large: 1 * %d"),
+                          maplen);
+        goto error;
+    }
+
+    /* At most one of these two flags should be set. */
+    if ((flags & VIR_DOMAIN_AFFECT_LIVE) &&
+        (flags & VIR_DOMAIN_AFFECT_CONFIG)) {
+        virLibDomainError(VIR_ERR_INVALID_ARG, __FUNCTION__);
+        goto error;
+    }
+    conn = domain->conn;
+
+    if (conn->driver->domainGetHypervisorPinInfo) {
+        int ret;
+        ret = conn->driver->domainGetHypervisorPinInfo(domain, cpumap,
+                                                       maplen, flags);
+        if (ret < 0)
+            goto error;
+        return ret;
+    }
+
+    virLibConnError(VIR_ERR_NO_SUPPORT, __FUNCTION__);
+
+error:
+    virDispatchError(domain->conn);
+    return -1;
+}
+
+/**
  * virDomainGetVcpus:
 * @domain: pointer to domain object, or NULL for Domain0
 * @info: pointer to an array of virVcpuInfo structures (OUT)
diff --git a/src/libvirt_public.syms b/src/libvirt_public.syms
index 46c13fb..e958044 100644
--- a/src/libvirt_public.syms
+++ b/src/libvirt_public.syms
@@ -532,6 +532,8 @@ LIBVIRT_0.9.10 {
 LIBVIRT_0.9.11 {
     global:
         virDomainPMWakeup;
+        virDomainPinHypervisorFlags;
+        virDomainGetHypervisorPinInfo;
 } LIBVIRT_0.9.10;
 
 # .... define new API here using predicted next version number ....
-- 
1.7.3.1
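A minimal client sketch of the two new entry points, compiled against a libvirt with this series applied; the domain name "vm1" and the single-byte map (enough for a host with at most 8 pcpus) are assumptions:

    #include <stdio.h>
    #include <libvirt/libvirt.h>

    int main(void)
    {
        virConnectPtr conn = NULL;
        virDomainPtr dom = NULL;
        unsigned char map = 0x03;   /* bits 0 and 1: pcpus 0-1 */
        int rc = 1;

        if (!(conn = virConnectOpen("qemu:///system")))
            return 1;
        if (!(dom = virDomainLookupByName(conn, "vm1")))   /* example name */
            goto out;

        /* Pin the hypervisor threads of the running domain to pcpus 0-1. */
        if (virDomainPinHypervisorFlags(dom, &map, 1,
                                        VIR_DOMAIN_AFFECT_LIVE) < 0)
            goto out;

        /* Read it back: returns 1 if a pinning is in effect, 0 if not. */
        map = 0;
        if (virDomainGetHypervisorPinInfo(dom, &map, 1,
                                          VIR_DOMAIN_AFFECT_LIVE) < 0)
            goto out;
        printf("hypervisor cpumap: 0x%02x\n", map);
        rc = 0;

    out:
        if (dom)
            virDomainFree(dom);
        virConnectClose(conn);
        return rc;
    }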

[libvirt] [PATCH 10/10] Improve vcpupin to support hypervisorpin dynamically.

Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
---
 tests/vcpupin   |    6 +-
 tools/virsh.c   |  145 +++++++++++++++++++++++++++++++++++++------------------
 tools/virsh.pod |   16 ++++--
 3 files changed, 110 insertions(+), 57 deletions(-)

diff --git a/tests/vcpupin b/tests/vcpupin
index 5952862..ffd16fa 100755
--- a/tests/vcpupin
+++ b/tests/vcpupin
@@ -30,16 +30,16 @@ fi
 fail=0
 
 # Invalid syntax.
-$abs_top_builddir/tools/virsh --connect test:///default vcpupin test a 0,1 > out 2>&1
+$abs_top_builddir/tools/virsh --connect test:///default vcpupin test a --vcpu 0,1 > out 2>&1
 test $? = 1 || fail=1
 cat <<\EOF > exp || fail=1
-error: vcpupin: Invalid or missing vCPU number.
+error: vcpupin: Invalid or missing vCPU number, or missing --hypervisor option.
 EOF
 compare exp out || fail=1
 
 # An out-of-range vCPU number deserves a diagnostic, too.
-$abs_top_builddir/tools/virsh --connect test:///default vcpupin test 100 0,1 > out 2>&1
+$abs_top_builddir/tools/virsh --connect test:///default vcpupin test --vcpu 100 0,1 > out 2>&1
 test $? = 1 || fail=1
 cat <<\EOF > exp || fail=1
 error: vcpupin: Invalid vCPU number.
diff --git a/tools/virsh.c b/tools/virsh.c
index e177684..7820d8a 100644
--- a/tools/virsh.c
+++ b/tools/virsh.c
@@ -5213,14 +5213,15 @@ cmdVcpuinfo(vshControl *ctl, const vshCmd *cmd)
  * "vcpupin" command
  */
 static const vshCmdInfo info_vcpupin[] = {
-    {"help", N_("control or query domain vcpu affinity")},
-    {"desc", N_("Pin domain VCPUs to host physical CPUs.")},
+    {"help", N_("control or query domain vcpu and hypervisor threads affinities")},
+    {"desc", N_("Pin domain VCPUs or hypervisor threads to host physical CPUs.")},
     {NULL, NULL}
 };
 
 static const vshCmdOptDef opts_vcpupin[] = {
     {"domain", VSH_OT_DATA, VSH_OFLAG_REQ, N_("domain name, id or uuid")},
-    {"vcpu", VSH_OT_INT, 0, N_("vcpu number")},
+    {"vcpu", VSH_OT_INT, VSH_OFLAG_REQ_OPT, N_("vcpu number")},
+    {"hypervisor", VSH_OT_BOOL, VSH_OFLAG_REQ_OPT, N_("pin hypervisor threads")},
     {"cpulist", VSH_OT_DATA, VSH_OFLAG_EMPTY_OK,
      N_("host cpu number(s) to set, or omit option to query")},
     {"config", VSH_OT_BOOL, 0, N_("affect next boot")},
@@ -5229,6 +5230,45 @@ static const vshCmdOptDef opts_vcpupin[] = {
     {NULL, 0, 0, NULL}
 };
 
+/*
+ * Helper function to print vcpupin and hypervisorpin info.
+ */
+static bool
+printPinInfo(vshControl *ctl, unsigned char *cpumaps, size_t cpumaplen,
+             int maxcpu, int vcpuindex)
+{
+    int cpu, lastcpu;
+    bool bit, lastbit, isInvert;
+
+    if (!cpumaps || cpumaplen <= 0 || maxcpu <= 0 || vcpuindex < 0) {
+        return false;
+    }
+
+    bit = lastbit = isInvert = false;
+    lastcpu = -1;
+
+    for (cpu = 0; cpu < maxcpu; cpu++) {
+        bit = VIR_CPU_USABLE(cpumaps, cpumaplen, vcpuindex, cpu);
+
+        isInvert = (bit ^ lastbit);
+        if (bit && isInvert) {
+            if (lastcpu == -1)
+                vshPrint(ctl, "%d", cpu);
+            else
+                vshPrint(ctl, ",%d", cpu);
+            lastcpu = cpu;
+        }
+        if (!bit && isInvert && lastcpu != cpu - 1)
+            vshPrint(ctl, "-%d", cpu - 1);
+        lastbit = bit;
+    }
+    if (bit && !isInvert) {
+        vshPrint(ctl, "-%d", maxcpu - 1);
+    }
+
+    return true;
+}
+
 static bool
 cmdVcpuPin(vshControl *ctl, const vshCmd *cmd)
 {
@@ -5241,13 +5281,13 @@ cmdVcpuPin(vshControl *ctl, const vshCmd *cmd)
     unsigned char *cpumap = NULL;
     unsigned char *cpumaps = NULL;
     size_t cpumaplen;
-    bool bit, lastbit, isInvert;
-    int i, cpu, lastcpu, maxcpu, ncpus;
+    int i, cpu, lastcpu, maxcpu, ncpus, nhyper;
     bool unuse = false;
     const char *cur;
     bool config = vshCommandOptBool(cmd, "config");
     bool live = vshCommandOptBool(cmd, "live");
     bool current = vshCommandOptBool(cmd, "current");
+    bool hypervisor = vshCommandOptBool(cmd, "hypervisor");
     bool query = false; /* Query mode if no cpulist */
     unsigned int flags = 0;
@@ -5282,8 +5322,18 @@ cmdVcpuPin(vshControl *ctl, const vshCmd *cmd)
     /* In query mode, "vcpu" is optional */
     if (vshCommandOptInt(cmd, "vcpu", &vcpu) < !query) {
-        vshError(ctl, "%s",
-                 _("vcpupin: Invalid or missing vCPU number."));
+        if (!hypervisor) {
+            vshError(ctl, "%s",
+                     _("vcpupin: Invalid or missing vCPU number, "
+                       "or missing --hypervisor option."));
+            virDomainFree(dom);
+            return false;
+        }
+    }
+
+    if (hypervisor && vcpu != -1) {
+        vshError(ctl, "%s", _("vcpupin: --hypervisor cannot be specified "
+                              "together with --vcpu."));
         virDomainFree(dom);
         return false;
     }
@@ -5315,47 +5365,45 @@ cmdVcpuPin(vshControl *ctl, const vshCmd *cmd)
         if (flags == -1)
             flags = VIR_DOMAIN_AFFECT_CURRENT;
 
-        cpumaps = vshMalloc(ctl, info.nrVirtCpu * cpumaplen);
-        if ((ncpus = virDomainGetVcpuPinInfo(dom, info.nrVirtCpu,
-                                             cpumaps, cpumaplen, flags)) >= 0) {
-
-            vshPrint(ctl, "%s %s\n", _("VCPU:"), _("CPU Affinity"));
-            vshPrint(ctl, "----------------------------------\n");
-            for (i = 0; i < ncpus; i++) {
-
-                if (vcpu != -1 && i != vcpu)
-                    continue;
-
-                bit = lastbit = isInvert = false;
-                lastcpu = -1;
+        if (!hypervisor) {
+            cpumaps = vshMalloc(ctl, info.nrVirtCpu * cpumaplen);
+            if ((ncpus = virDomainGetVcpuPinInfo(dom, info.nrVirtCpu,
+                                                 cpumaps, cpumaplen, flags)) >= 0) {
+                vshPrint(ctl, "%s %s\n", _("VCPU:"), _("CPU Affinity"));
+                vshPrint(ctl, "----------------------------------\n");
+                for (i = 0; i < ncpus; i++) {
+                    if (vcpu != -1 && i != vcpu)
+                        continue;
 
-                vshPrint(ctl, "%4d: ", i);
-                for (cpu = 0; cpu < maxcpu; cpu++) {
+                    vshPrint(ctl, "%4d: ", i);
+                    ret = printPinInfo(ctl, cpumaps, cpumaplen, maxcpu, i);
+                    vshPrint(ctl, "\n");
+                    if (!ret)
+                        break;
+                }
+            } else {
+                ret = false;
+            }
+            VIR_FREE(cpumaps);
+        }
 
-                    bit = VIR_CPU_USABLE(cpumaps, cpumaplen, i, cpu);
+        if (vcpu == -1) {
+            cpumaps = vshMalloc(ctl, cpumaplen);
+            if ((nhyper = virDomainGetHypervisorPinInfo(dom, cpumaps,
+                                                        cpumaplen, flags)) >= 0) {
+                if (!hypervisor)
+                    vshPrint(ctl, "\n");
+                vshPrint(ctl, "%s %s\n", _("Hypervisor:"), _("CPU Affinity"));
+                vshPrint(ctl, "----------------------------------\n");
 
-                    isInvert = (bit ^ lastbit);
-                    if (bit && isInvert) {
-                        if (lastcpu == -1)
-                            vshPrint(ctl, "%d", cpu);
-                        else
-                            vshPrint(ctl, ",%d", cpu);
-                        lastcpu = cpu;
-                    }
-                    if (!bit && isInvert && lastcpu != cpu - 1)
-                        vshPrint(ctl, "-%d", cpu - 1);
-                    lastbit = bit;
-                }
-                if (bit && !isInvert) {
-                    vshPrint(ctl, "-%d", maxcpu - 1);
-                }
-                vshPrint(ctl, "\n");
+                vshPrint(ctl, "   *: ");
+                ret = printPinInfo(ctl, cpumaps, cpumaplen, maxcpu, 0);
+                vshPrint(ctl, "\n");
+            } else if (nhyper < 0) {
+                ret = false;
             }
-
-        } else {
-            ret = false;
+            VIR_FREE(cpumaps);
         }
-        VIR_FREE(cpumaps);
 
         goto cleanup;
     }
@@ -5433,13 +5481,14 @@ cmdVcpuPin(vshControl *ctl, const vshCmd *cmd)
     }
 
     if (flags == -1) {
-        if (virDomainPinVcpu(dom, vcpu, cpumap, cpumaplen) != 0) {
+        flags = VIR_DOMAIN_AFFECT_LIVE;
+    }
+    if (!hypervisor) {
+        if (virDomainPinVcpuFlags(dom, vcpu, cpumap, cpumaplen, flags) != 0)
             ret = false;
-        }
     } else {
-        if (virDomainPinVcpuFlags(dom, vcpu, cpumap, cpumaplen, flags) != 0) {
+        if (virDomainPinHypervisorFlags(dom, cpumap, cpumaplen, flags) != 0)
             ret = false;
-        }
     }
 
 cleanup:
diff --git a/tools/virsh.pod b/tools/virsh.pod
index ef71717..0cdabb4 100644
--- a/tools/virsh.pod
+++ b/tools/virsh.pod
@@ -1522,12 +1522,16 @@ Thus, this command always takes exactly zero or two flags.
 Returns basic information about the domain virtual CPUs, like the number of
 vCPUs, the running time, the affinity to physical processors.
 
-=item B<vcpupin> I<domain-id> [I<vcpu>] [I<cpulist>] [[I<--live>]
-[I<--config>] | [I<--current>]]
-
-Query or change the pinning of domain VCPUs to host physical CPUs. To
-pin a single I<vcpu>, specify I<cpulist>; otherwise, you can query one
-I<vcpu> or omit I<vcpu> to list all at once.
+=item B<vcpupin> I<domain-id> [I<vcpu>] [I<hypervisor>] [I<cpulist>]
+[[I<--live>] [I<--config>] | [I<--current>]]
+
+Query or change the pinning of domain VCPUs or hypervisor threads to host
+physical CPUs.
+To pin a single I<vcpu>, specify I<cpulist>; otherwise, you can query one
+I<vcpu>.
+To pin all I<hypervisor> threads, specify I<cpulist>; otherwise, you can
+query I<hypervisor>.
+You can also omit I<vcpu> or I<hypervisor> to list vcpus and hypervisor
+threads all at once.
 
 I<cpulist> is a list of physical CPU numbers. Its syntax is a comma
 separated list and a special markup using '-' and '^' (ex. '0-4', '0-3,^2') can
-- 
1.7.3.1
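The range-compression logic factored out into printPinInfo() can be exercised on its own. A simplified standalone version over a plain byte-per-cpu mask (dropping the VIR_CPU_USABLE bit addressing, so this is a sketch rather than the virsh helper itself):

    #include <stdio.h>
    #include <stdbool.h>

    /* Print a cpu mask (one byte per cpu) as a compressed list, e.g.
     * {1,1,0,1} -> "0-1,3", using the same invert-detection scheme. */
    static void print_ranges(const unsigned char *mask, int maxcpu)
    {
        bool bit = false, lastbit = false, isInvert = false;
        int cpu, lastcpu = -1;

        for (cpu = 0; cpu < maxcpu; cpu++) {
            bit = mask[cpu];
            isInvert = (bit ^ lastbit);
            if (bit && isInvert) {              /* start of a new range */
                printf(lastcpu == -1 ? "%d" : ",%d", cpu);
                lastcpu = cpu;
            }
            if (!bit && isInvert && lastcpu != cpu - 1)
                printf("-%d", cpu - 1);         /* close a multi-cpu range */
            lastbit = bit;
        }
        if (bit && !isInvert)
            printf("-%d", maxcpu - 1);          /* range runs to the end */
        printf("\n");
    }

    int main(void)
    {
        unsigned char mask[] = { 1, 1, 0, 1 };
        print_ranges(mask, 4);   /* prints "0-1,3" */
        return 0;
    }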

Hi~

Forgot to mention that this patch set is based on Wen Congyang's patch set
"support to set cpu bandwidth for hypervisor threads":
https://www.redhat.com/archives/libvir-list/2012-April/msg01326.html

Wen's patches should be applied first.

Thanks. :)