[libvirt] [PATCH 00/13] Support hypervisor-threads-pin in vcpupin.

Hi~

Users can use the vcpupin command to bind a vcpu thread to a specific
physical cpu. But besides vcpu threads, there are also some other threads
created by qemu (known as hypervisor threads) that could not be explicitly
bound to physical cpus.

The first 3 patches are from Wen Congyang, which implement cgroups for
different hypervisors. The other 10 patches implement hypervisor thread
binding, in two ways:
1) using the sched_setaffinity() function;
2) in the cpuset cgroup.

A new xml element is introduced, and the vcpupin command is improved,
see below.

1. Introduce new xml elements:
   <cputune>
     ......
     <hypervisorpin cpuset='1'/>
   </cputune>

2. Improve the vcpupin command to support hypervisor thread binding.
   For example, vm1 has the following configuration:
   <cputune>
     <vcpupin vcpu='1' cpuset='1'/>
     <vcpupin vcpu='0' cpuset='0'/>
     <hypervisorpin cpuset='1'/>
   </cputune>

   1) query all threads' pinning
   # vcpupin vm1
   VCPU: CPU Affinity
   ----------------------------------
      0: 0
      1: 1
   Hypervisor: CPU Affinity
   ----------------------------------
      *: 1

   2) query hypervisor threads' pinning only
   # vcpupin vm1 --hypervisor
   Hypervisor: CPU Affinity
   ----------------------------------
      *: 1

   3) change hypervisor threads' pinning
   # vcpupin vm1 --hypervisor 0-1
   # vcpupin vm1 --hypervisor
   Hypervisor: CPU Affinity
   ----------------------------------
      *: 0-1

   # taskset -p 397
   pid 397's current affinity mask: 3

Note: if users want to pin a vcpu thread to a pcpu, the --vcpu option can
no longer be omitted.

Tang Chen (10):
  Enable cpuset cgroup and synchronize vcpupin info to cgroup.
  Support hypervisorpin xml parse.
  Introduce qemuSetupCgroupHypervisorPin and synchronize hypervisorpin
    info to cgroup.
  Add qemuProcessSetHypervisorAffinites and set hypervisor thread
    affinities.
  Introduce virDomainHypervisorPinAdd and virDomainHypervisorPinDel
    functions.
  Introduce qemudDomainPinHypervisorFlags and
    qemudDomainGetHypervisorPinInfo in qemu driver.
  Introduce remoteDomainPinHypervisorFlags and
    remoteDomainGetHypervisorPinInfo functions in remote driver.
  Introduce remoteDispatchDomainPinHypervisorFlags and
    remoteDispatchDomainGetHypervisorPinInfo functions.
  Introduce virDomainPinHypervisorFlags and virDomainGetHypervisorPinInfo
    functions.
  Improve vcpupin to support hypervisorpin dynamically.

Wen Congyang (3):
  Introduce the function virCgroupForHypervisor
  Introduce the function virCgroupMoveTask()
  Create a new cgroup and move all hypervisor threads to the new cgroup

 daemon/remote.c                                 |  103 +++++++++
 docs/schemas/domaincommon.rng                   |    7 +
 include/libvirt/libvirt.h.in                    |    9 +
 src/conf/domain_conf.c                          |  173 +++++++++++++++-
 src/conf/domain_conf.h                          |    7 +
 src/driver.h                                    |   13 +-
 src/libvirt.c                                   |  147 +++++++++++++
 src/libvirt_private.syms                        |    6 +
 src/libvirt_public.syms                         |    6 +
 src/qemu/qemu_cgroup.c                          |  149 +++++++++++++-
 src/qemu/qemu_cgroup.h                          |    5 +
 src/qemu/qemu_driver.c                          |  266 ++++++++++++++++++++++-
 src/qemu/qemu_process.c                         |   58 +++++
 src/remote/remote_driver.c                      |  102 +++++++++
 src/remote/remote_protocol.x                    |   24 ++-
 src/remote_protocol-structs                     |   24 ++
 src/util/cgroup.c                               |  132 +++++++++++-
 src/util/cgroup.h                               |    9 +
 tests/qemuxml2argvdata/qemuxml2argv-cputune.xml |    1 +
 tests/vcpupin                                   |    6 +-
 tools/virsh.c                                   |  145 ++++++++----
 tools/virsh.pod                                 |   16 +-
 22 files changed, 1335 insertions(+), 73 deletions(-)

-- 
1.7.3.1
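(For readers less familiar with approach 1: outside of cgroups, pinning a
hypervisor thread boils down to a sched_setaffinity() call, which libvirt
wraps in virProcessInfoSetAffinity(). A minimal standalone sketch of that
mechanism follows; the cpu numbers are illustrative only:)

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <sys/types.h>

/* Sketch: pin the thread whose id is 'pid' (0 means the calling
 * thread) to physical cpus 0 and 1. */
static int pin_to_cpus_0_and_1(pid_t pid)
{
    cpu_set_t set;

    CPU_ZERO(&set);
    CPU_SET(0, &set);   /* allow cpu 0 */
    CPU_SET(1, &set);   /* allow cpu 1 */

    if (sched_setaffinity(pid, sizeof(set), &set) < 0) {
        perror("sched_setaffinity");
        return -1;
    }
    return 0;
}

int main(void)
{
    return pin_to_cpus_0_and_1(0);
}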

Introduce the function virCgroupForHypervisor() to create a sub-directory
for hypervisor threads (including I/O threads and vhost-net threads).
---
 src/libvirt_private.syms |    1 +
 src/util/cgroup.c        |   42 ++++++++++++++++++++++++++++++++++++++++++
 src/util/cgroup.h        |    4 ++++
 3 files changed, 47 insertions(+), 0 deletions(-)

diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms
index fdf2186..c581063 100644
--- a/src/libvirt_private.syms
+++ b/src/libvirt_private.syms
@@ -67,6 +67,7 @@ virCgroupDenyAllDevices;
 virCgroupDenyDevicePath;
 virCgroupForDomain;
 virCgroupForDriver;
+virCgroupForHypervisor;
 virCgroupForVcpu;
 virCgroupFree;
 virCgroupGetBlkioWeight;
diff --git a/src/util/cgroup.c b/src/util/cgroup.c
index 5b32881..66d98e3 100644
--- a/src/util/cgroup.c
+++ b/src/util/cgroup.c
@@ -946,6 +946,48 @@ int virCgroupForVcpu(virCgroupPtr driver ATTRIBUTE_UNUSED,
 #endif
 
 /**
+ * virCgroupForHypervisor:
+ *
+ * @driver: group for the domain
+ * @group: Pointer to returned virCgroupPtr
+ *
+ * Returns 0 on success
+ */
+#if defined HAVE_MNTENT_H && defined HAVE_GETMNTENT_R
+int virCgroupForHypervisor(virCgroupPtr driver,
+                           virCgroupPtr *group,
+                           int create)
+{
+    int rc;
+    char *path;
+
+    if (driver == NULL)
+        return -EINVAL;
+
+    if (virAsprintf(&path, "%s/hypervisor", driver->path) < 0)
+        return -ENOMEM;
+
+    rc = virCgroupNew(path, group);
+    VIR_FREE(path);
+
+    if (rc == 0) {
+        rc = virCgroupMakeGroup(driver, *group, create, VIR_CGROUP_VCPU);
+        if (rc != 0)
+            virCgroupFree(group);
+    }
+
+    return rc;
+}
+#else
+int virCgroupForHypervisor(virCgroupPtr driver ATTRIBUTE_UNUSED,
+                           virCgroupPtr *group ATTRIBUTE_UNUSED,
+                           int create ATTRIBUTE_UNUSED)
+{
+    return -ENXIO;
+}
+
+#endif
+/**
  * virCgroupSetBlkioWeight:
  *
  * @group: The cgroup to change io weight for
diff --git a/src/util/cgroup.h b/src/util/cgroup.h
index 05325ae..315ebd6 100644
--- a/src/util/cgroup.h
+++ b/src/util/cgroup.h
@@ -47,6 +47,10 @@ int virCgroupForVcpu(virCgroupPtr driver,
                      virCgroupPtr *group,
                      int create);
 
+int virCgroupForHypervisor(virCgroupPtr driver,
+                           virCgroupPtr *group,
+                           int create);
+
 int virCgroupPathOfController(virCgroupPtr group,
                               int controller,
                               const char *key,
-- 
1.7.3.1

On 06/05/2012 02:13 AM, tangchen wrote:
Introduce the function virCgroupForHypervisor() to create a sub-directory for hypervisor threads (including I/O threads and vhost-net threads).
According to your cover letter, this patch was written by Wen, but I don't see a From: listing or Signed-off-by or any other indication that would let 'git am' credit Wen as the author. Instead, it tries to credit you, using only your email alias 'tangchen' instead of your full name (again, by the cover letter, and looking at the current contents of AUTHORS, I assume you prefer 'Tang Chen').
---
 src/libvirt_private.syms |    1 +
 src/util/cgroup.c        |   42 ++++++++++++++++++++++++++++++++++++++++++
 src/util/cgroup.h        |    4 ++++
 3 files changed, 47 insertions(+), 0 deletions(-)
/**
+ * virCgroupForHypervisor:
+ *
+ * @driver: group for the domain
+ * @group: Pointer to returned virCgroupPtr
+ *
+ * Returns 0 on success
or -errno value on failure.

Other than that, the patch looks fine to me, but can you please rebase it
to the latest libvirt.git and resubmit it so that it gets recorded with
correct authorship?

-- 
Eric Blake   eblake@redhat.com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org

Hi~

On 07/03/2012 04:40 AM, Eric Blake wrote:
On 06/05/2012 02:13 AM, tangchen wrote:
Introduce the function virCgroupForHypervisor() to create a sub-directory for hypervisor threads (including I/O threads and vhost-net threads).
According to your cover letter, this patch was written by Wen, but I don't see a From: listing or Signed-off-by or any other indication that would let 'git am' credit Wen as the author. Instead, it tries to credit you, using only your email alias 'tangchen' instead of your full name (again, by the cover letter, and looking at the current contents of AUTHORS, I assume you prefer 'Tang Chen').
I forgot to change my git config when making the patches. Sorry about that;
I will fix it. :) As for the other comments, I am working on them. :)
---
 src/libvirt_private.syms |    1 +
 src/util/cgroup.c        |   42 ++++++++++++++++++++++++++++++++++++++++++
 src/util/cgroup.h        |    4 ++++
 3 files changed, 47 insertions(+), 0 deletions(-)
/**
+ * virCgroupForHypervisor:
+ *
+ * @driver: group for the domain
+ * @group: Pointer to returned virCgroupPtr
+ *
+ * Returns 0 on success
or -errno value on failure.
Other than that, the patch looks fine to me, but can you please rebase it to latest libvirt.git and resubmit it so that it gets recorded with correct authorship?
-- 
Best Regards,
Tang Chen

Hi~

The Signed-off-by problems have been solved, and a new rebased patch set
will be sent soon. I am testing it now, thanks. :)

On 07/03/2012 04:40 AM, Eric Blake wrote:
On 06/05/2012 02:13 AM, tangchen wrote:
Introduce the function virCgroupForHypervisor() to create a sub-directory for hypervisor threads (including I/O threads and vhost-net threads).
According to your cover letter, this patch was written by Wen, but I don't see a From: listing or Signed-off-by or any other indication that would let 'git am' credit Wen as the author. Instead, it tries to credit you, using only your email alias 'tangchen' instead of your full name (again, by the cover letter, and looking at the current contents of AUTHORS, I assume you prefer 'Tang Chen').
---
 src/libvirt_private.syms |    1 +
 src/util/cgroup.c        |   42 ++++++++++++++++++++++++++++++++++++++++++
 src/util/cgroup.h        |    4 ++++
 3 files changed, 47 insertions(+), 0 deletions(-)
/**
+ * virCgroupForHypervisor:
+ *
+ * @driver: group for the domain
+ * @group: Pointer to returned virCgroupPtr
+ *
+ * Returns 0 on success
or -errno value on failure.
Other than that, the patch looks fine to me, but can you please rebase it to latest libvirt.git and resubmit it so that it gets recorded with correct authorship?
-- 
Best Regards,
Tang Chen

Introduce a new API to move all tasks from one cgroup to another.
---
 src/libvirt_private.syms |    1 +
 src/util/cgroup.c        |   55 ++++++++++++++++++++++++++++++++++++++++++++++
 src/util/cgroup.h        |    2 +
 3 files changed, 58 insertions(+), 0 deletions(-)

diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms
index c581063..6ff1a3b 100644
--- a/src/libvirt_private.syms
+++ b/src/libvirt_private.syms
@@ -87,6 +87,7 @@ virCgroupKill;
 virCgroupKillPainfully;
 virCgroupKillRecursive;
 virCgroupMounted;
+virCgroupMoveTask;
 virCgroupPathOfController;
 virCgroupRemove;
 virCgroupSetBlkioDeviceWeight;
diff --git a/src/util/cgroup.c b/src/util/cgroup.c
index 66d98e3..c5dddc1 100644
--- a/src/util/cgroup.c
+++ b/src/util/cgroup.c
@@ -791,6 +791,61 @@ int virCgroupAddTask(virCgroupPtr group, pid_t pid)
     return rc;
 }
 
+static int virCgrouAddTaskStr(virCgroupPtr group, const char *pidstr)
+{
+    unsigned long long value;
+
+    if (virStrToLong_ull(pidstr, NULL, 10, &value) < 0)
+        return -EINVAL;
+
+    return virCgroupAddTask(group, value);
+}
+
+/**
+ * virCgroupMoveTask:
+ *
+ * @src_group: The source cgroup where all tasks are removed from
+ * @dest_group: The destination where all tasks are added to
+ *
+ * Returns: 0 on success
+ */
+int virCgroupMoveTask(virCgroupPtr src_group, virCgroupPtr dest_group)
+{
+    int rc = 0;
+    int i;
+    char *content, *value, *next;
+
+    for (i = 0 ; i < VIR_CGROUP_CONTROLLER_LAST ; i++) {
+        /* Skip over controllers not mounted */
+        if (!src_group->controllers[i].mountPoint ||
+            !dest_group->controllers[i].mountPoint)
+            continue;
+
+        rc = virCgroupGetValueStr(src_group, i, "tasks", &content);
+        if (rc != 0)
+            break;
+
+        value = content;
+        while((next = strchr(value, '\n')) != NULL) {
+            *next = '\0';
+            if ((rc = virCgrouAddTaskStr(dest_group, value) < 0))
+                goto cleanup;
+            value = next + 1;
+        }
+        if (*value != '\0') {
+            if ((rc = virCgrouAddTaskStr(dest_group, value) < 0))
+                goto cleanup;
+        }
+
+        VIR_FREE(content);
+    }
+
+    return 0;
+
+cleanup:
+    virCgroupMoveTask(dest_group, src_group);
+    return rc;
+}
 
 /**
  * virCgroupForDriver:
diff --git a/src/util/cgroup.h b/src/util/cgroup.h
index 315ebd6..308ea47 100644
--- a/src/util/cgroup.h
+++ b/src/util/cgroup.h
@@ -58,6 +58,8 @@ int virCgroupPathOfController(virCgroupPtr group,
 
 int virCgroupAddTask(virCgroupPtr group, pid_t pid);
 
+int virCgroupMoveTask(virCgroupPtr src_group, virCgroupPtr dest_group);
+
 int virCgroupSetBlkioWeight(virCgroupPtr group, unsigned int weight);
 
 int virCgroupGetBlkioWeight(virCgroupPtr group, unsigned int *weight);
-- 
1.7.3.1
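(Aside, since the parsing here drew review comments below: the core of the
move is walking a newline-separated pid list. A standalone sketch of that
tokenizing loop, with made-up buffer contents:)

#include <stdio.h>
#include <string.h>

/* Sketch of the loop used in virCgroupMoveTask(): walk a buffer of
 * newline-separated pids, NUL-terminating each entry in place. */
int main(void)
{
    char content[] = "397\n398\n399";
    char *value = content;
    char *next;

    while ((next = strchr(value, '\n')) != NULL) {
        *next = '\0';
        printf("pid: %s\n", value);   /* real code calls virCgroupAddTask() */
        value = next + 1;
    }
    if (*value != '\0')
        printf("pid: %s\n", value);   /* trailing entry without a newline */
    return 0;
}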

On 06/05/2012 02:16 AM, tangchen wrote:
Introduce a new API to move all tasks from one cgroup to another.
Again, authorship is incorrect for the purposes of 'git am'.
---
 src/libvirt_private.syms |    1 +
 src/util/cgroup.c        |   55 ++++++++++++++++++++++++++++++++++++++++++++++
 src/util/cgroup.h        |    2 +
 3 files changed, 58 insertions(+), 0 deletions(-)
@@ -791,6 +791,61 @@ int virCgroupAddTask(virCgroupPtr group, pid_t pid)
     return rc;
 }
+static int virCgrouAddTaskStr(virCgroupPtr group, const char *pidstr)
s/CgrouAdd/CgroupAdd/
+int virCgroupMoveTask(virCgroupPtr src_group, virCgroupPtr dest_group)
+{
+    int rc = 0;
+    int i;
+    char *content, *value, *next;
+
+    for (i = 0 ; i < VIR_CGROUP_CONTROLLER_LAST ; i++) {
+        /* Skip over controllers not mounted */
+        if (!src_group->controllers[i].mountPoint ||
+            !dest_group->controllers[i].mountPoint)
+            continue;
Should we insist that src_group and dest_group have the same set of mounted controllers? I'm worried that if we call this function, but the set of mounted controllers differs between the two sets, then we end up moving processes between some controllers and stranding them in the source for the remaining controllers.
+
+        rc = virCgroupGetValueStr(src_group, i, "tasks", &content);
+        if (rc != 0)
+            break;
Should we try to undo any changes if we fail partway through? This just breaks the outer 'for' loop and returns 0, instead of using 'goto cleanup'.
+
+        value = content;
+        while((next = strchr(value, '\n')) != NULL) {
Coding style: space after 'while'
+            *next = '\0';
+            if ((rc = virCgrouAddTaskStr(dest_group, value) < 0))
+                goto cleanup;
+            value = next + 1;
+        }
+        if (*value != '\0') {
+            if ((rc = virCgrouAddTaskStr(dest_group, value) < 0))
Does it make sense to parse all the strings into integers, just to format the integers back into strings? Or would it be simpler to just cat the contents of the 'tasks' file from the source into the destination without bothering to interpret the data in transit?
+                goto cleanup;
+        }
+
+        VIR_FREE(content);
+    }
+
+    return 0;
+
+cleanup:
+    virCgroupMoveTask(dest_group, src_group);
Is this cleanup always correct, or is it only correct if 'dest_group' started life empty? Should we at least log a warning message if a move was partially attempted and then reverted, particularly if the reversion attempt failed?

-- 
Eric Blake   eblake@redhat.com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org

Hi~
+int virCgroupMoveTask(virCgroupPtr src_group, virCgroupPtr dest_group)
+{
+    int rc = 0;
+    int i;
+    char *content, *value, *next;
+
+    for (i = 0 ; i < VIR_CGROUP_CONTROLLER_LAST ; i++) {
+        /* Skip over controllers not mounted */
+        if (!src_group->controllers[i].mountPoint ||
+            !dest_group->controllers[i].mountPoint)
+            continue;
Should we insist that src_group and dest_group have the same set of mounted controllers? I'm worried that if we call this function, but the set of mounted controllers differs between the two sets, then we end up moving processes between some controllers and stranding them in the source for the remaining controllers.
True. So I changed it to move tasks under one controller only, and leave all
the other controllers unmodified. You can see this in my new patch. :)
As you know, different cgroup controllers are independent of each other, so
I think operating on only one controller makes sense.
+            *next = '\0';
+            if ((rc = virCgrouAddTaskStr(dest_group, value) < 0))
+                goto cleanup;
+            value = next + 1;
+        }
+        if (*value != '\0') {
+            if ((rc = virCgrouAddTaskStr(dest_group, value) < 0))
Does it make sense to parse all the strings into integers, just to format the integers back into strings? Or would it be simpler to just cat the contents of the 'tasks' file from the source into the destination without bothering to interpret the data in transit?
I have tried this. But the tasks file separates pids with "\n", and the
kernel only accepts a single pid per write to that file, so a plain copy
won't work.
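For illustration, a tasks file looks like this (the pids and the mount
path here are made up), one pid per line:

  # cat /cgroup/cpu/libvirt/qemu/vm1/tasks
  397
  398
  399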
+                goto cleanup;
+        }
+
+        VIR_FREE(content);
+    }
+
+    return 0;
+
+cleanup:
+    virCgroupMoveTask(dest_group, src_group);
Is this cleanup always correct, or is it only correct if 'dest_group' started life empty? Should we at least log a warning message if a move was partially attempted and then reverted, particularly if the reversion attempt failed?
The cleanup path has also been changed; please refer to my new patches.
Thanks. :)

-- 
Best Regards,
Tang Chen

Create a new cgroup and move all hypervisor threads to the new cgroup.
Then we can do other things, such as:
1. limit only vcpu usage rather than the whole qemu process
2. limit the hypervisor threads (including vhost-net threads)
---
 src/qemu/qemu_cgroup.c  |   57 +++++++++++++++++++++++++++++++++++++++++++++++
 src/qemu/qemu_cgroup.h  |    2 +
 src/qemu/qemu_process.c |    4 +++
 3 files changed, 63 insertions(+), 0 deletions(-)

diff --git a/src/qemu/qemu_cgroup.c b/src/qemu/qemu_cgroup.c
index f8f375f..e69ef5b 100644
--- a/src/qemu/qemu_cgroup.c
+++ b/src/qemu/qemu_cgroup.c
@@ -573,6 +573,63 @@ cleanup:
     return -1;
 }
 
+int qemuSetupCgroupForHypervisor(struct qemud_driver *driver,
+                                 virDomainObjPtr vm)
+{
+    virCgroupPtr cgroup = NULL;
+    virCgroupPtr cgroup_hypervisor = NULL;
+    qemuDomainObjPrivatePtr priv = vm->privateData;
+    int rc;
+
+    if (driver->cgroup == NULL)
+        return 0; /* Not supported, so claim success */
+
+    rc = virCgroupForDomain(driver->cgroup, vm->def->name, &cgroup, 0);
+    if (rc != 0) {
+        virReportSystemError(-rc,
+                             _("Unable to find cgroup for %s"),
+                             vm->def->name);
+        goto cleanup;
+    }
+
+    if (priv->nvcpupids == 0 || priv->vcpupids[0] == vm->pid) {
+        /* If we does not know VCPU<->PID mapping or all vcpu runs in the same
+         * thread, we cannot control each vcpu.
+         */
+        virCgroupFree(&cgroup);
+        return 0;
+    }
+
+    rc = virCgroupForHypervisor(cgroup, &cgroup_hypervisor, 1);
+    if (rc < 0) {
+        virReportSystemError(-rc,
+                             _("Unable to create hypervisor cgroup for %s"),
+                             vm->def->name);
+        goto cleanup;
+    }
+
+    rc = virCgroupMoveTask(cgroup, cgroup_hypervisor);
+    if (rc < 0) {
+        virReportSystemError(-rc,
+                             _("Unable to move taks from domain cgroup to "
+                               "hypervisor cgroup for %s"),
+                             vm->def->name);
+        goto cleanup;
+    }
+
+    virCgroupFree(&cgroup_hypervisor);
+    virCgroupFree(&cgroup);
+    return 0;
+
+cleanup:
+    virCgroupFree(&cgroup_hypervisor);
+    if (cgroup) {
+        virCgroupRemove(cgroup);
+        virCgroupFree(&cgroup);
+    }
+
+    return -1;
+}
 
 int qemuRemoveCgroup(struct qemud_driver *driver,
                      virDomainObjPtr vm,
diff --git a/src/qemu/qemu_cgroup.h b/src/qemu/qemu_cgroup.h
index c1023b3..cf0d383 100644
--- a/src/qemu/qemu_cgroup.h
+++ b/src/qemu/qemu_cgroup.h
@@ -54,6 +54,8 @@ int qemuSetupCgroupVcpuBW(virCgroupPtr cgroup,
                           unsigned long long period,
                           long long quota);
 int qemuSetupCgroupForVcpu(struct qemud_driver *driver, virDomainObjPtr vm);
+int qemuSetupCgroupForHypervisor(struct qemud_driver *driver,
+                                 virDomainObjPtr vm);
 int qemuRemoveCgroup(struct qemud_driver *driver,
                      virDomainObjPtr vm,
                      int quiet);
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 58ba5bf..31c2c30 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -3674,6 +3674,10 @@ int qemuProcessStart(virConnectPtr conn,
     if (qemuSetupCgroupForVcpu(driver, vm) < 0)
         goto cleanup;
 
+    VIR_DEBUG("Setting cgroup for hypervisor(if required)");
+    if (qemuSetupCgroupForHypervisor(driver, vm) < 0)
+        goto cleanup;
+
     VIR_DEBUG("Setting VCPU affinities");
     if (qemuProcessSetVcpuAffinites(conn, vm) < 0)
         goto cleanup;
-- 
1.7.3.1

On 06/05/2012 02:16 AM, tangchen wrote:
Create a new cgroup and move all hypervisor threads to the new cgroup. Then we can do other things, such as: 1. limit only vcpu usage rather than the whole qemu process 2. limit the hypervisor threads (including vhost-net threads)
A really useful thing to add to this commit message would be an ascii view
of the cgroup hierarchy being created. If I understand correctly, this
creates the following levels:

cgroup mount point
  libvirt subdirectory (all libvirt management)
    driver subdirectory (all guests tied to one driver)
      hypervisor subdirectory (all processes tied to one guest)
        vcpu subdirectory (all processes tied to one VCPU of a guest)

I would almost prefer to call it a VM cgroup instead of a hypervisor cgroup
(and that reflects back to naming chosen in 2/13), as I tend to think of
'hypervisor' meaning 'qemu' - the technology that drives multiple guests,
while I think of 'VM' meaning 'single guest', a collection of possibly
multiple processes under a single qemu process umbrella for running a given
guest.
+int qemuSetupCgroupForHypervisor(struct qemud_driver *driver,
+                                 virDomainObjPtr vm)
More evidence that the naming choice is confusing - you named the parameter 'vm' instead of 'hypervisor'. That is, I think naming this qemuSetupCgroupForVM(...) makes more sense.
+
+    if (priv->nvcpupids == 0 || priv->vcpupids[0] == vm->pid) {
+        /* If we does not know VCPU<->PID mapping or all vcpu runs in the same
s/vcpu runs/vcpus run/
+         * thread, we cannot control each vcpu.
+         */
+        virCgroupFree(&cgroup);
+        return 0;
It makes sense to ignore failure to set up a vcpu sub-cgroup if the user never requested the feature, with the end result being that they lose out on the functionality. But if the user explicitly requested per-vcpu usage and we can't set it up, should this return a failure? In other words, I'm worried whether we need to detect whether it is always safe to ignore the failure (as done here) or whether there are situations where setup failure should prevent running the VM until the cgroup situation is resolved.
+
+    rc = virCgroupMoveTask(cgroup, cgroup_hypervisor);
+    if (rc < 0) {
+        virReportSystemError(-rc,
+                             _("Unable to move taks from domain cgroup to "
s/taks/task/, and listing the task id might be useful for diagnostic purposes.
+++ b/src/qemu/qemu_process.c
@@ -3674,6 +3674,10 @@ int qemuProcessStart(virConnectPtr conn,
     if (qemuSetupCgroupForVcpu(driver, vm) < 0)
         goto cleanup;
+ VIR_DEBUG("Setting cgroup for hypervisor(if required)");
s/hypervisor(if/hypervisor (if/

-- 
Eric Blake   eblake@redhat.com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org

Hi~

On 07/03/2012 07:06 AM, Eric Blake wrote:
On 06/05/2012 02:16 AM, tangchen wrote:
Create a new cgroup and move all hypervisor threads to the new cgroup. Then we can do other things, such as: 1. limit only vcpu usage rather than the whole qemu process 2. limit the hypervisor threads (including vhost-net threads)
A really useful thing to add to this commit message would be an ascii view of the cgroup hierarchy being created. If I understand correctly, this creates the following levels:
cgroup mount point
  libvirt subdirectory (all libvirt management)
    driver subdirectory (all guests tied to one driver)
      hypervisor subdirectory (all processes tied to one guest)
        vcpu subdirectory (all processes tied to one VCPU of a guest)
I would almost prefer to call it a VM cgroup instead of a hypervisor cgroup (and that reflects back to naming chosen in 2/13), as I tend to think of 'hypervisor' meaning 'qemu' - the technology that drives multiple guests, while I think of 'VM' meaning 'single guest', a collection of possibly multiple processes under a single qemu process umbrella for running a given guest.
Well, actually I see it as follows:

cgroup mount point
  libvirt subdirectory (all libvirt management)
    driver subdirectory (all guests tied to one driver)
      VM subdirectory (all processes tied to one guest)
        vcpu subdirectory (all processes tied to one VCPU of a guest)
        & hypervisor subdirectory

So I think the name is fine. What do you think? For now, I haven't changed
anything here, but if you insist, I think we can discuss it further.
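To make that concrete, the resulting on-disk layout under the cpuset
controller would be something like this (the /cgroup/cpuset mount point
and the vm1 domain name are just examples):

  /cgroup/cpuset/libvirt/qemu/vm1/
      vcpu0/
      vcpu1/
      hypervisor/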
+
+    if (priv->nvcpupids == 0 || priv->vcpupids[0] == vm->pid) {
+        /* If we does not know VCPU<->PID mapping or all vcpu runs in the same
s/vcpu runs/vcpus run/
+         * thread, we cannot control each vcpu.
+         */
+        virCgroupFree(&cgroup);
+        return 0;
It makes sense to ignore failure to set up a vcpu sub-cgroup if the user never requested the feature, with the end result being that they lose out on the functionality. But if the user explicitly requested per-vcpu usage and we can't set it up, should this return a failure? In other words, I'm worried whether we need to detect whether it is always safe to ignore the failure (as done here) or whether there are situations where setup failure should prevent running the VM until the cgroup situation is resolved.
I report an error here now; please refer to my new patches. Thanks. :)

-- 
Best Regards,
Tang Chen

This patch enables the cpuset cgroup, and synchronizes vcpupin info set by
sched_setaffinity() into the cgroup.

Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
---
 src/libvirt_private.syms |    2 +
 src/qemu/qemu_cgroup.c   |   51 ++++++++++++++++++++++++++++++++++++++++++++-
 src/qemu/qemu_cgroup.h   |    2 +
 src/qemu/qemu_driver.c   |   43 +++++++++++++++++++++++++++++++-------
 src/util/cgroup.c        |   35 ++++++++++++++++++++++++++++++-
 src/util/cgroup.h        |    3 ++
 6 files changed, 125 insertions(+), 11 deletions(-)

diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms
index 6ff1a3b..88cc37a 100644
--- a/src/libvirt_private.syms
+++ b/src/libvirt_private.syms
@@ -77,6 +77,7 @@ virCgroupGetCpuShares;
 virCgroupGetCpuacctPercpuUsage;
 virCgroupGetCpuacctStat;
 virCgroupGetCpuacctUsage;
+virCgroupGetCpusetCpus;
 virCgroupGetCpusetMems;
 virCgroupGetFreezerState;
 virCgroupGetMemSwapHardLimit;
@@ -95,6 +96,7 @@ virCgroupSetBlkioWeight;
 virCgroupSetCpuCfsPeriod;
 virCgroupSetCpuCfsQuota;
 virCgroupSetCpuShares;
+virCgroupSetCpusetCpus;
 virCgroupSetCpusetMems;
 virCgroupSetFreezerState;
 virCgroupSetMemSwapHardLimit;
diff --git a/src/qemu/qemu_cgroup.c b/src/qemu/qemu_cgroup.c
index e69ef5b..1085478 100644
--- a/src/qemu/qemu_cgroup.c
+++ b/src/qemu/qemu_cgroup.c
@@ -473,18 +473,57 @@ cleanup:
         rc = virCgroupSetCpuCfsPeriod(cgroup, old_period);
         if (rc < 0)
             virReportSystemError(-rc,
-                                 _("%s"),
-                                 "Unable to rollback cpu bandwidth period");
+                                 "%s",
+                                 _("Unable to rollback cpu bandwidth period"));
     }
 
     return -1;
 }
 
+int qemuSetupCgroupVcpuPin(virCgroupPtr cgroup, virDomainDefPtr def,
+                           int vcpuid)
+{
+    int i, rc;
+    char *new_cpus = NULL;
+
+    if (vcpuid < 0 || vcpuid >= def->vcpus) {
+        virReportSystemError(EINVAL,
+                             "%s: %d", _("invalid vcpuid"), vcpuid);
+        return -1;
+    }
+
+    for (i = 0; i < def->cputune.nvcpupin; i++) {
+        if (vcpuid == def->cputune.vcpupin[i]->vcpuid) {
+            new_cpus = virDomainCpuSetFormat(def->cputune.vcpupin[i]->cpumask,
+                                             VIR_DOMAIN_CPUMASK_LEN);
+            if (!new_cpus) {
+                qemuReportError(VIR_ERR_INTERNAL_ERROR,
+                                _("failed to convert cpu mask"));
+                goto cleanup;
+            }
+            rc = virCgroupSetCpusetCpus(cgroup, new_cpus);
+            if (rc < 0) {
+                virReportSystemError(-rc,
+                                     "%s", _("Unable to set cpuset.cpus"));
+                goto cleanup;
+            }
+        }
+    }
+    VIR_FREE(new_cpus);
+    return 0;
+
+cleanup:
+    if (new_cpus)
+        VIR_FREE(new_cpus);
+    return -1;
+}
+
 int qemuSetupCgroupForVcpu(struct qemud_driver *driver, virDomainObjPtr vm)
 {
     virCgroupPtr cgroup = NULL;
     virCgroupPtr cgroup_vcpu = NULL;
     qemuDomainObjPrivatePtr priv = vm->privateData;
+    virDomainDefPtr def = vm->def;
     int rc;
     unsigned int i;
     unsigned long long period = vm->def->cputune.period;
@@ -556,6 +595,14 @@ int qemuSetupCgroupForVcpu(struct qemud_driver *driver, virDomainObjPtr vm)
             }
         }
 
+        /* Set vcpupin in cgroup if vcpupin xml is provided */
+        if (def->cputune.nvcpupin) {
+            if (qemuCgroupControllerActive(driver, VIR_CGROUP_CONTROLLER_CPUSET)) {
+                if (qemuSetupCgroupVcpuPin(cgroup_vcpu, def, i) < 0)
+                    goto cleanup;
+            }
+        }
+
         virCgroupFree(&cgroup_vcpu);
     }
 
diff --git a/src/qemu/qemu_cgroup.h b/src/qemu/qemu_cgroup.h
index cf0d383..91d5632 100644
--- a/src/qemu/qemu_cgroup.h
+++ b/src/qemu/qemu_cgroup.h
@@ -53,6 +53,8 @@ int qemuSetupCgroup(struct qemud_driver *driver,
 int qemuSetupCgroupVcpuBW(virCgroupPtr cgroup,
                           unsigned long long period,
                           long long quota);
+int qemuSetupCgroupVcpuPin(virCgroupPtr cgroup, virDomainDefPtr def,
+                           int vcpuid);
 int qemuSetupCgroupForVcpu(struct qemud_driver *driver, virDomainObjPtr vm);
 int qemuSetupCgroupForHypervisor(struct qemud_driver *driver,
                                  virDomainObjPtr vm);
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index d3f74d2..b0eef80 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -3551,6 +3551,8 @@ qemudDomainPinVcpuFlags(virDomainPtr dom,
     struct qemud_driver *driver = dom->conn->privateData;
     virDomainObjPtr vm;
     virDomainDefPtr persistentDef = NULL;
+    virCgroupPtr cgroup_dom = NULL;
+    virCgroupPtr cgroup_vcpu = NULL;
     int maxcpu, hostcpus;
     virNodeInfo nodeinfo;
     int ret = -1;
@@ -3605,9 +3607,37 @@ qemudDomainPinVcpuFlags(virDomainPtr dom,
 
     if (flags & VIR_DOMAIN_AFFECT_LIVE) {
 
         if (priv->vcpupids != NULL) {
+            /* Add config to vm->def first, because cgroup APIs need it. */
+            if (virDomainVcpuPinAdd(vm->def, cpumap, maplen, vcpu) < 0) {
+                qemuReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+                                _("failed to update or add vcpupin xml of "
+                                  "a running domain"));
+                goto cleanup;
+            }
+
+            /* Configure the corresponding cpuset cgroup before set affinity. */
+            if (qemuCgroupControllerActive(driver,
+                                           VIR_CGROUP_CONTROLLER_CPUSET)) {
+                if (virCgroupForDomain(driver->cgroup, vm->def->name,
+                                       &cgroup_dom, 0) == 0) {
+                    if (virCgroupForVcpu(cgroup_dom, vcpu, &cgroup_vcpu, 0) == 0) {
+                        if (qemuSetupCgroupVcpuPin(cgroup_vcpu, vm->def, vcpu) < 0) {
+                            qemuReportError(VIR_ERR_OPERATION_INVALID, "%s %d",
+                                            _("failed to set cpuset.cpus in cgroup"
+                                              " for vcpu"), vcpu);
+                            goto cleanup;
+                        }
+                    }
+                }
+            }
+
             if (virProcessInfoSetAffinity(priv->vcpupids[vcpu],
-                                          cpumap, maplen, maxcpu) < 0)
+                                          cpumap, maplen, maxcpu) < 0) {
+                qemuReportError(VIR_ERR_SYSTEM_ERROR, "%s %d",
+                                _("failed to set cpu affinity for vcpu"),
+                                vcpu);
                 goto cleanup;
+            }
         } else {
             qemuReportError(VIR_ERR_OPERATION_INVALID,
                             "%s", _("cpu affinity is not supported"));
@@ -3621,13 +3651,6 @@ qemudDomainPinVcpuFlags(virDomainPtr dom,
                                 "a running domain"));
                 goto cleanup;
             }
-        } else {
-            if (virDomainVcpuPinAdd(vm->def, cpumap, maplen, vcpu) < 0) {
-                qemuReportError(VIR_ERR_INTERNAL_ERROR, "%s",
-                                _("failed to update or add vcpupin xml of "
-                                  "a running domain"));
-                goto cleanup;
-            }
         }
 
         if (virDomainSaveStatus(driver->caps, driver->stateDir, vm) < 0)
@@ -3659,6 +3682,10 @@ qemudDomainPinVcpuFlags(virDomainPtr dom,
     ret = 0;
 
 cleanup:
+    if (cgroup_vcpu)
+        virCgroupFree(&cgroup_vcpu);
+    if (cgroup_dom)
+        virCgroupFree(&cgroup_dom);
     if (vm)
         virDomainObjUnlock(vm);
     return ret;
diff --git a/src/util/cgroup.c b/src/util/cgroup.c
index c5dddc1..ba3153c 100644
--- a/src/util/cgroup.c
+++ b/src/util/cgroup.c
@@ -532,7 +532,8 @@ static int virCgroupMakeGroup(virCgroupPtr parent, virCgroupPtr group,
         /* We need to control cpu bandwidth for each vcpu now */
         if ((flags & VIR_CGROUP_VCPU) &&
             (i != VIR_CGROUP_CONTROLLER_CPU &&
-             i != VIR_CGROUP_CONTROLLER_CPUACCT)) {
+             i != VIR_CGROUP_CONTROLLER_CPUACCT &&
+             i != VIR_CGROUP_CONTROLLER_CPUSET)) {
             /* treat it as unmounted and we can use virCgroupAddTask */
             VIR_FREE(group->controllers[i].mountPoint);
             continue;
@@ -1337,6 +1338,38 @@ int virCgroupGetCpusetMems(virCgroupPtr group, char **mems)
 }
 
 /**
+ * virCgroupSetCpusetCpus:
+ *
+ * @group: The cgroup to set cpuset.cpus for
+ * @cpus: the cpus to set
+ *
+ * Returns: 0 on success
+ */
+int virCgroupSetCpusetCpus(virCgroupPtr group, const char *cpus)
+{
+    return virCgroupSetValueStr(group,
+                                VIR_CGROUP_CONTROLLER_CPUSET,
+                                "cpuset.cpus",
+                                cpus);
+}
+
+/**
+ * virCgroupGetCpusetCpus:
+ *
+ * @group: The cgroup to get cpuset.cpus for
+ * @cpus: the cpus to get
+ *
+ * Returns: 0 on success
+ */
+int virCgroupGetCpusetCpus(virCgroupPtr group, char **cpus)
+{
+    return virCgroupGetValueStr(group,
+                                VIR_CGROUP_CONTROLLER_CPUSET,
+                                "cpuset.cpus",
+                                cpus);
+}
+
+/**
  * virCgroupDenyAllDevices:
  *
  * @group: The cgroup to deny all permissions, for all devices
diff --git a/src/util/cgroup.h b/src/util/cgroup.h
index 308ea47..1e01cbd 100644
--- a/src/util/cgroup.h
+++ b/src/util/cgroup.h
@@ -133,6 +133,9 @@ int virCgroupGetFreezerState(virCgroupPtr group, char **state);
 int virCgroupSetCpusetMems(virCgroupPtr group, const char *mems);
 int virCgroupGetCpusetMems(virCgroupPtr group, char **mems);
 
+int virCgroupSetCpusetCpus(virCgroupPtr group, const char *cpus);
+int virCgroupGetCpusetCpus(virCgroupPtr group, char **cpus);
+
 int virCgroupRemove(virCgroupPtr group);
 
 void virCgroupFree(virCgroupPtr *group);
-- 
1.7.3.1
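(For reviewers unfamiliar with the cpuset controller: virCgroupSetCpusetCpus()
just writes a cpu list string into the group's cpuset.cpus file, roughly
equivalent to the following - the mount path and names are illustrative:)

  # echo 0-1 > /cgroup/cpuset/libvirt/qemu/vm1/vcpu0/cpuset.cpus
  # cat /cgroup/cpuset/libvirt/qemu/vm1/vcpu0/cpuset.cpus
  0-1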

On 06/05/2012 02:17 AM, tangchen wrote:
This patch enables the cpuset cgroup, and synchronizes vcpupin info set by sched_setaffinity() into the cgroup.
This doesn't really give many details about what you are trying to do here.
Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
---
 src/libvirt_private.syms |    2 +
 src/qemu/qemu_cgroup.c   |   51 ++++++++++++++++++++++++++++++++++++++++++++-
 src/qemu/qemu_cgroup.h   |    2 +
 src/qemu/qemu_driver.c   |   43 +++++++++++++++++++++++++++++++-------
 src/util/cgroup.c        |   35 ++++++++++++++++++++++++++++++-
 src/util/cgroup.h        |    3 ++
 6 files changed, 125 insertions(+), 11 deletions(-)
diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms
index 6ff1a3b..88cc37a 100644
--- a/src/libvirt_private.syms
+++ b/src/qemu/qemu_cgroup.c
@@ -473,18 +473,57 @@ cleanup:
         rc = virCgroupSetCpuCfsPeriod(cgroup, old_period);
         if (rc < 0)
             virReportSystemError(-rc,
-                                 _("%s"),
-                                 "Unable to rollback cpu bandwidth period");
+                                 "%s",
+                                 _("Unable to rollback cpu bandwidth period"));
This hunk is an independent bug fix, and should be pushed separately. I will take care of that shortly.
}
    return -1;
}
+int qemuSetupCgroupVcpuPin(virCgroupPtr cgroup, virDomainDefPtr def,
+                           int vcpuid)
+{
+    int i, rc;
+    char *new_cpus = NULL;
+
+    if (vcpuid < 0 || vcpuid >= def->vcpus) {
+        virReportSystemError(EINVAL,
+                             "%s: %d", _("invalid vcpuid"), vcpuid);
I would write this:

    virReportSystemError(EINVAL, _("invalid vcpuid: %d"), vcpuid);
+        return -1;
+    }
+
+    for (i = 0; i < def->cputune.nvcpupin; i++) {
+        if (vcpuid == def->cputune.vcpupin[i]->vcpuid) {
+            new_cpus = virDomainCpuSetFormat(def->cputune.vcpupin[i]->cpumask,
+                                             VIR_DOMAIN_CPUMASK_LEN);
+            if (!new_cpus) {
+                qemuReportError(VIR_ERR_INTERNAL_ERROR,
+                                _("failed to convert cpu mask"));
+                goto cleanup;
+            }
+            rc = virCgroupSetCpusetCpus(cgroup, new_cpus);
+            if (rc < 0) {
+                virReportSystemError(-rc,
+                                     "%s", _("Unable to set cpuset.cpus"));
+                goto cleanup;
+            }
+        }
+    }
+    VIR_FREE(new_cpus);
+    return 0;
+
+cleanup:
+    if (new_cpus)
+        VIR_FREE(new_cpus);
This fails 'make syntax-check':

  src/qemu/qemu_cgroup.c:
      if (new_cpus)
          VIR_FREE(new_cpus);
  maint.mk: found useless "if" before "free" above
+    return -1;
+}
And since you call VIR_FREE(new_cpus) on both the success and failure paths,
I'd consolidate things. Declare this up front:

    int ret = -1;

then the tail of the function becomes:

    ret = 0;
cleanup:
    VIR_FREE(new_cpus);
    return ret;
}
@@ -556,6 +595,14 @@ int qemuSetupCgroupForVcpu(struct qemud_driver *driver, virDomainObjPtr vm)
             }
         }
+        /* Set vcpupin in cgroup if vcpupin xml is provided */
+        if (def->cputune.nvcpupin) {
+            if (qemuCgroupControllerActive(driver, VIR_CGROUP_CONTROLLER_CPUSET)) {
+                if (qemuSetupCgroupVcpuPin(cgroup_vcpu, def, i) < 0)
+                    goto cleanup;
Rather than nesting this deeply, you could use &&, as in:

    if (def->cputune.nvcpupin &&
        qemuCgroupControllerActive(driver, VIR_CGROUP_CONTROLLER_CPUSET) &&
        qemuSetupCgroupVcpuPin(cgroup_vcpu, def, i) < 0)
        goto cleanup;
@@ -3605,9 +3607,37 @@ qemudDomainPinVcpuFlags(virDomainPtr dom,
 
     if (flags & VIR_DOMAIN_AFFECT_LIVE) {
         if (priv->vcpupids != NULL) {
+            /* Add config to vm->def first, because cgroup APIs need it. */
+            if (virDomainVcpuPinAdd(vm->def, cpumap, maplen, vcpu) < 0) {
+                qemuReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+                                _("failed to update or add vcpupin xml of "
+                                  "a running domain"));
+                goto cleanup;
+            }
+
+            /* Configure the corresponding cpuset cgroup before set affinity. */
+            if (qemuCgroupControllerActive(driver,
+                                           VIR_CGROUP_CONTROLLER_CPUSET)) {
+                if (virCgroupForDomain(driver->cgroup, vm->def->name,
+                                       &cgroup_dom, 0) == 0) {
+                    if (virCgroupForVcpu(cgroup_dom, vcpu, &cgroup_vcpu, 0) == 0) {
+                        if (qemuSetupCgroupVcpuPin(cgroup_vcpu, vm->def, vcpu) < 0) {
+                            qemuReportError(VIR_ERR_OPERATION_INVALID, "%s %d",
+                                            _("failed to set cpuset.cpus in cgroup"
+                                              " for vcpu"), vcpu);
Another place where I would embed the %d into the message, as in:

    _("failed to set cpuset.cpus in cgroup for vcpu %d"), vcpu
+                            goto cleanup;
+                        }
+                    }
+                }
+            }
+
             if (virProcessInfoSetAffinity(priv->vcpupids[vcpu],
-                                          cpumap, maplen, maxcpu) < 0)
+                                          cpumap, maplen, maxcpu) < 0) {
+                qemuReportError(VIR_ERR_SYSTEM_ERROR, "%s %d",
+                                _("failed to set cpu affinity for vcpu"),
+                                vcpu);
and again

-- 
Eric Blake   eblake@redhat.com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org

This patch adds a new xml element <hypervisorpin cpuset='1'/>, and also the
parser functions, docs, and tests.

Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
---
 docs/schemas/domaincommon.rng                   |    7 ++
 src/conf/domain_conf.c                          |   97 ++++++++++++++++++++++-
 src/conf/domain_conf.h                          |    1 +
 tests/qemuxml2argvdata/qemuxml2argv-cputune.xml |    1 +
 4 files changed, 103 insertions(+), 3 deletions(-)

diff --git a/docs/schemas/domaincommon.rng b/docs/schemas/domaincommon.rng
index 62c28c8..af46d8c 100644
--- a/docs/schemas/domaincommon.rng
+++ b/docs/schemas/domaincommon.rng
@@ -554,6 +554,13 @@
             </attribute>
           </element>
         </zeroOrMore>
+        <optional>
+          <element name="hypervisorpin">
+            <attribute name="cpuset">
+              <ref name="cpuset"/>
+            </attribute>
+          </element>
+        </optional>
       </element>
     </optional>
diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index 221e1d0..c3b3c0b 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -7789,6 +7789,51 @@ error:
     goto cleanup;
 }
 
+/* Parse the XML definition for hypervisorpin */
+static virDomainVcpuPinDefPtr
+virDomainHypervisorPinDefParseXML(const xmlNodePtr node)
+{
+    virDomainVcpuPinDefPtr def = NULL;
+    char *tmp = NULL;
+
+    if (VIR_ALLOC(def) < 0) {
+        virReportOOMError();
+        return NULL;
+    }
+
+    def->vcpuid = -1;
+
+    tmp = virXMLPropString(node, "cpuset");
+
+    if (tmp) {
+        char *set = tmp;
+        int cpumasklen = VIR_DOMAIN_CPUMASK_LEN;
+
+        if (VIR_ALLOC_N(def->cpumask, cpumasklen) < 0) {
+            virReportOOMError();
+            goto error;
+        }
+
+        if (virDomainCpuSetParse(set, 0, def->cpumask,
+                                 cpumasklen) < 0)
+            goto error;
+
+        VIR_FREE(tmp);
+    } else {
+        virDomainReportError(VIR_ERR_INTERNAL_ERROR,
+                             "%s", _("missing cpuset for hypervisor pin"));
+        goto error;
+    }
+
+cleanup:
+    return def;
+
+error:
+    VIR_FREE(tmp);
+    VIR_FREE(def);
+    goto cleanup;
+}
+
 static int
 virDomainDefMaybeAddController(virDomainDefPtr def,
                                int type, int idx)
@@ -8182,6 +8227,34 @@ static virDomainDefPtr virDomainDefParseXML(virCapsPtr caps,
     }
     VIR_FREE(nodes);
 
+    if ((n = virXPathNodeSet("./cputune/hypervisorpin", ctxt, &nodes)) < 0) {
+        virDomainReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+                             _("cannot extract hypervisorpin nodes"));
+        goto error;
+    }
+
+    if (n > 1) {
+        virDomainReportError(VIR_ERR_XML_ERROR, "%s",
+                             _("only one hypervisorpin is supported"));
+        VIR_FREE(nodes);
+        goto error;
+    }
+
+    if (n && VIR_ALLOC(def->cputune.hypervisorpin) < 0) {
+        goto no_memory;
+    }
+
+    if (n) {
+        virDomainVcpuPinDefPtr hypervisorpin = NULL;
+        hypervisorpin = virDomainHypervisorPinDefParseXML(nodes[0]);
+
+        if (!hypervisorpin)
+            goto error;
+
+        def->cputune.hypervisorpin = hypervisorpin;
+    }
+    VIR_FREE(nodes);
+
     /* Extract numatune if exists. */
     if ((n = virXPathNodeSet("./numatune", ctxt, &nodes)) < 0) {
         virDomainReportError(VIR_ERR_INTERNAL_ERROR,
@@ -9186,7 +9259,7 @@ no_memory:
     virReportOOMError();
     /* fallthrough */
 
- error:
+error:
     VIR_FREE(tmp);
     VIR_FREE(nodes);
     virBitmapFree(bootMap);
@@ -12733,7 +12806,8 @@ virDomainDefFormatInternal(virDomainDefPtr def,
         virBufferAsprintf(buf, ">%u</vcpu>\n", def->maxvcpus);
 
     if (def->cputune.shares || def->cputune.vcpupin ||
-        def->cputune.period || def->cputune.quota)
+        def->cputune.period || def->cputune.quota ||
+        def->cputune.hypervisorpin)
         virBufferAddLit(buf, "  <cputune>\n");
 
     if (def->cputune.shares)
@@ -12765,8 +12839,25 @@ virDomainDefFormatInternal(virDomainDefPtr def,
         }
     }
 
+    if (def->cputune.hypervisorpin) {
+        virBufferAsprintf(buf, "    <hypervisorpin ");
+
+        char *cpumask = NULL;
+        cpumask = virDomainCpuSetFormat(def->cputune.hypervisorpin->cpumask,
+                                        VIR_DOMAIN_CPUMASK_LEN);
+        if (cpumask == NULL) {
+            virDomainReportError(VIR_ERR_INTERNAL_ERROR,
+                                 "%s", _("failed to format cpuset for hypervisor"));
+            goto cleanup;
+        }
+
+        virBufferAsprintf(buf, "cpuset='%s'/>\n", cpumask);
+        VIR_FREE(cpumask);
+    }
+
     if (def->cputune.shares || def->cputune.vcpupin ||
-        def->cputune.period || def->cputune.quota)
+        def->cputune.period || def->cputune.quota ||
+        def->cputune.hypervisorpin)
         virBufferAddLit(buf, "  </cputune>\n");
 
     if (def->numatune.memory.nodemask ||
diff --git a/src/conf/domain_conf.h b/src/conf/domain_conf.h
index 8d5b35a..c7cd8a3 100644
--- a/src/conf/domain_conf.h
+++ b/src/conf/domain_conf.h
@@ -1598,6 +1598,7 @@ struct _virDomainDef {
         long long quota;
         int nvcpupin;
         virDomainVcpuPinDefPtr *vcpupin;
+        virDomainVcpuPinDefPtr hypervisorpin;
     } cputune;
 
     virDomainNumatuneDef numatune;
diff --git a/tests/qemuxml2argvdata/qemuxml2argv-cputune.xml b/tests/qemuxml2argvdata/qemuxml2argv-cputune.xml
index df3101d..b72af1b 100644
--- a/tests/qemuxml2argvdata/qemuxml2argv-cputune.xml
+++ b/tests/qemuxml2argvdata/qemuxml2argv-cputune.xml
@@ -10,6 +10,7 @@
     <quota>-1</quota>
     <vcpupin vcpu='0' cpuset='0'/>
     <vcpupin vcpu='1' cpuset='1'/>
+    <hypervisorpin cpuset='1'/>
   </cputune>
   <os>
     <type arch='i686' machine='pc'>hvm</type>
-- 
1.7.3.1

Introduce the qemuSetupCgroupHypervisorPin() function and synchronize
hypervisorpin info to the cgroup.

Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
---
 src/qemu/qemu_cgroup.c |   41 +++++++++++++++++++++++++++++++++++++++++
 src/qemu/qemu_cgroup.h |    1 +
 2 files changed, 42 insertions(+), 0 deletions(-)

diff --git a/src/qemu/qemu_cgroup.c b/src/qemu/qemu_cgroup.c
index 1085478..395298f 100644
--- a/src/qemu/qemu_cgroup.c
+++ b/src/qemu/qemu_cgroup.c
@@ -518,6 +518,39 @@ cleanup:
     return -1;
 }
 
+int qemuSetupCgroupHypervisorPin(virCgroupPtr cgroup, virDomainDefPtr def)
+{
+    int rc;
+    char *new_cpus = NULL;
+
+    if (!def->cputune.hypervisorpin)
+        return 0;
+
+    new_cpus = virDomainCpuSetFormat(def->cputune.hypervisorpin->cpumask,
+                                     VIR_DOMAIN_CPUMASK_LEN);
+    if (!new_cpus) {
+        qemuReportError(VIR_ERR_INTERNAL_ERROR,
+                        _("failed to convert cpu mask"));
+        goto cleanup;
+    }
+
+    rc = virCgroupSetCpusetCpus(cgroup, new_cpus);
+    if (rc < 0) {
+        virReportSystemError(-rc,
+                             _("%s"), _("Unable to set cpuset.cpus"));
+        goto cleanup;
+    }
+
+    VIR_FREE(new_cpus);
+
+    return 0;
+
+cleanup:
+    if (new_cpus)
+        VIR_FREE(new_cpus);
+    return -1;
+}
+
 int qemuSetupCgroupForVcpu(struct qemud_driver *driver, virDomainObjPtr vm)
 {
     virCgroupPtr cgroup = NULL;
@@ -626,6 +659,7 @@ int qemuSetupCgroupForHypervisor(struct qemud_driver *driver,
     virCgroupPtr cgroup = NULL;
     virCgroupPtr cgroup_hypervisor = NULL;
     qemuDomainObjPrivatePtr priv = vm->privateData;
+    virDomainDefPtr def = vm->def;
     int rc;
 
     if (driver->cgroup == NULL)
@@ -664,6 +698,13 @@ int qemuSetupCgroupForHypervisor(struct qemud_driver *driver,
         goto cleanup;
     }
 
+    if (def->cputune.hypervisorpin) {
+        if (qemuCgroupControllerActive(driver, VIR_CGROUP_CONTROLLER_CPUSET)) {
+            if (qemuSetupCgroupHypervisorPin(cgroup_hypervisor, def) < 0)
+                goto cleanup;
+        }
+    }
+
     virCgroupFree(&cgroup_hypervisor);
     virCgroupFree(&cgroup);
     return 0;
diff --git a/src/qemu/qemu_cgroup.h b/src/qemu/qemu_cgroup.h
index 91d5632..12444c3 100644
--- a/src/qemu/qemu_cgroup.h
+++ b/src/qemu/qemu_cgroup.h
@@ -55,6 +55,7 @@ int qemuSetupCgroupVcpuBW(virCgroupPtr cgroup,
                           long long quota);
 int qemuSetupCgroupVcpuPin(virCgroupPtr cgroup, virDomainDefPtr def,
                            int vcpuid);
+int qemuSetupCgroupHypervisorPin(virCgroupPtr cgroup, virDomainDefPtr def);
 int qemuSetupCgroupForVcpu(struct qemud_driver *driver, virDomainObjPtr vm);
 int qemuSetupCgroupForHypervisor(struct qemud_driver *driver,
                                  virDomainObjPtr vm);
-- 
1.7.3.1

Add qemuProcessSetHypervisorAffinites() and set hypervisor thread
affinities.

Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
---
 src/qemu/qemu_process.c |   54 +++++++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 54 insertions(+), 0 deletions(-)

diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 31c2c30..e73cc92 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -1965,6 +1965,56 @@ cleanup:
     return ret;
 }
 
+/* Set CPU affinities for hypervisor threads if hypervisorpin xml provided. */
+static int
+qemuProcessSetHypervisorAffinites(virConnectPtr conn,
+                                  virDomainObjPtr vm)
+{
+    virDomainDefPtr def = vm->def;
+    pid_t pid = vm->pid;
+    unsigned char *cpumask = NULL;
+    unsigned char *cpumap = NULL;
+    virNodeInfo nodeinfo;
+    int cpumaplen, hostcpus, maxcpu, i;
+    int ret = -1;
+
+    if (virNodeGetInfo(conn, &nodeinfo) != 0)
+        return -1;
+
+    if (!def->cputune.hypervisorpin)
+        return 0;
+
+    hostcpus = VIR_NODEINFO_MAXCPUS(nodeinfo);
+    cpumaplen = VIR_CPU_MAPLEN(hostcpus);
+    maxcpu = cpumaplen * 8;
+
+    if (maxcpu > hostcpus)
+        maxcpu = hostcpus;
+
+    if (VIR_ALLOC_N(cpumap, cpumaplen) < 0) {
+        virReportOOMError();
+        return -1;
+    }
+
+    cpumask = (unsigned char *)def->cputune.hypervisorpin->cpumask;
+    for(i = 0; i < VIR_DOMAIN_CPUMASK_LEN; i++) {
+        if (cpumask[i])
+            VIR_USE_CPU(cpumap, i);
+    }
+
+    if (virProcessInfoSetAffinity(pid,
+                                  cpumap,
+                                  cpumaplen,
+                                  maxcpu) < 0) {
+        goto cleanup;
+    }
+
+    ret = 0;
+cleanup:
+    VIR_FREE(cpumap);
+    return ret;
+}
+
 static int
 qemuProcessInitPasswords(virConnectPtr conn,
                          struct qemud_driver *driver,
@@ -3682,6 +3732,10 @@ int qemuProcessStart(virConnectPtr conn,
     if (qemuProcessSetVcpuAffinites(conn, vm) < 0)
         goto cleanup;
 
+    VIR_DEBUG("Setting hypervisor threads affinities");
+    if (qemuProcessSetHypervisorAffinites(conn, vm) < 0)
+        goto cleanup;
+
     VIR_DEBUG("Setting any required VM passwords");
     if (qemuProcessInitPasswords(conn, driver, vm) < 0)
         goto cleanup;
-- 
1.7.3.1

Introduce virDomainHypervisorPinAdd and virDomainHypervisorPinDel
functions.

Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
---
 src/conf/domain_conf.c   |   76 ++++++++++++++++++++++++++++++++++++++++++++
 src/conf/domain_conf.h   |    6 ++++
 src/libvirt_private.syms |    2 +
 3 files changed, 84 insertions(+), 0 deletions(-)

diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index c3b3c0b..ee2b676 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -10917,6 +10917,82 @@ virDomainVcpuPinDel(virDomainDefPtr def, int vcpu)
     return 0;
 }
 
+int
+virDomainHypervisorPinAdd(virDomainDefPtr def,
+                          unsigned char *cpumap,
+                          int maplen)
+{
+    virDomainVcpuPinDefPtr hypervisorpin = NULL;
+    char *cpumask = NULL;
+    int i;
+
+    if (VIR_ALLOC_N(cpumask, VIR_DOMAIN_CPUMASK_LEN) < 0) {
+        virReportOOMError();
+        goto cleanup;
+    }
+
+    /* Reset cpumask to all 0s. */
+    for (i = 0; i < VIR_DOMAIN_CPUMASK_LEN; i++)
+        cpumask[i] = 0;
+
+    /* Convert bitmap (cpumap) to cpumask, which is byte map. */
+    for (i = 0; i < maplen; i++) {
+        int cur;
+
+        for (cur = 0; cur < 8; cur++) {
+            if (cpumap[i] & (1 << cur))
+                cpumask[i * 8 + cur] = 1;
+        }
+    }
+
+    if (!def->cputune.hypervisorpin) {
+        /* No hypervisorpin exists yet. */
+        if (VIR_ALLOC(hypervisorpin) < 0) {
+            virReportOOMError();
+            goto cleanup;
+        }
+
+        hypervisorpin->vcpuid = -1;
+        hypervisorpin->cpumask = cpumask;
+        def->cputune.hypervisorpin = hypervisorpin;
+    } else {
+        /* Since there is only 1 hypervisorpin for each vm,
+         * just replace the old one.
+         */
+        VIR_FREE(def->cputune.hypervisorpin->cpumask);
+        def->cputune.hypervisorpin->cpumask = cpumask;
+    }
+
+    return 0;
+
+cleanup:
+    if (cpumask)
+        VIR_FREE(cpumask);
+    return -1;
+}
+
+int
+virDomainHypervisorPinDel(virDomainDefPtr def)
+{
+    virDomainVcpuPinDefPtr hypervisorpin = NULL;
+
+    /* No hypervisorpin exists yet */
+    if (!def->cputune.hypervisorpin) {
+        return 0;
+    }
+
+    hypervisorpin = def->cputune.hypervisorpin;
+
+    VIR_FREE(hypervisorpin->cpumask);
+    VIR_FREE(hypervisorpin);
+    def->cputune.hypervisorpin = NULL;
+
+    if (def->cputune.hypervisorpin)
+        return -1;
+
+    return 0;
+}
+
 static int
 virDomainLifecycleDefFormat(virBufferPtr buf,
                             int type,
diff --git a/src/conf/domain_conf.h b/src/conf/domain_conf.h
index c7cd8a3..32a9803 100644
--- a/src/conf/domain_conf.h
+++ b/src/conf/domain_conf.h
@@ -1977,6 +1977,12 @@ int virDomainVcpuPinAdd(virDomainDefPtr def,
 
 int virDomainVcpuPinDel(virDomainDefPtr def, int vcpu);
 
+int virDomainHypervisorPinAdd(virDomainDefPtr def,
+                              unsigned char *cpumap,
+                              int maplen);
+
+int virDomainHypervisorPinDel(virDomainDefPtr def);
+
 int virDomainDiskIndexByName(virDomainDefPtr def, const char *name,
                              bool allow_ambiguous);
 const char *virDomainDiskPathByName(virDomainDefPtr, const char *name);
diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms
index 88cc37a..f5213f4 100644
--- a/src/libvirt_private.syms
+++ b/src/libvirt_private.syms
@@ -487,6 +487,8 @@ virDomainTimerTrackTypeFromString;
 virDomainTimerTrackTypeToString;
 virDomainVcpuPinAdd;
 virDomainVcpuPinDel;
+virDomainHypervisorPinAdd;
+virDomainHypervisorPinDel;
 virDomainVcpuPinFindByVcpu;
 virDomainVcpuPinIsDuplicate;
 virDomainVideoDefFree;
-- 
1.7.3.1
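(A note on the two cpu-map representations this function converts between:
the public API's cpumap packs one bit per physical cpu, while domain_conf's
cpumask stores one byte per cpu. A standalone sketch of the expansion done
above - the names and the 8-cpu length are illustrative, not libvirt API:)

#include <stdio.h>

#define CPUMASK_LEN 8   /* libvirt uses VIR_DOMAIN_CPUMASK_LEN */

/* Expand a packed bitmap (one bit per cpu) into a byte map
 * (one byte per cpu). */
static void bitmap_to_bytemap(const unsigned char *cpumap, int maplen,
                              unsigned char *cpumask)
{
    int i, cur;

    for (i = 0; i < maplen; i++)
        for (cur = 0; cur < 8; cur++)
            cpumask[i * 8 + cur] = !!(cpumap[i] & (1 << cur));
}

int main(void)
{
    unsigned char cpumap[1] = { 0x03 };   /* cpus 0 and 1 set */
    unsigned char cpumask[CPUMASK_LEN] = { 0 };
    int i;

    bitmap_to_bytemap(cpumap, 1, cpumask);
    for (i = 0; i < CPUMASK_LEN; i++)
        printf("cpu %d: %d\n", i, cpumask[i]);   /* 1 for cpus 0 and 1 */
    return 0;
}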

Introduce qemudDomainPinHypervisorFlags and qemudDomainGetHypervisorPinInfo
in the qemu driver.

Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
---
 src/driver.h           |   13 +-
 src/qemu/qemu_driver.c |  223 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 235 insertions(+), 1 deletions(-)

diff --git a/src/driver.h b/src/driver.h
index aa7a377..dab21ca 100644
--- a/src/driver.h
+++ b/src/driver.h
@@ -296,7 +296,16 @@ typedef int
                              unsigned char *cpumaps,
                              int maplen,
                              unsigned int flags);
-
+typedef int
+    (*virDrvDomainPinHypervisorFlags) (virDomainPtr domain,
+                                       unsigned char *cpumap,
+                                       int maplen,
+                                       unsigned int flags);
+typedef int
+    (*virDrvDomainGetHypervisorPinInfo) (virDomainPtr domain,
+                                         unsigned char *cpumaps,
+                                         int maplen,
+                                         unsigned int flags);
 typedef int
     (*virDrvDomainGetVcpus) (virDomainPtr domain,
                              virVcpuInfoPtr info,
@@ -908,6 +917,8 @@ struct _virDriver {
     virDrvDomainPinVcpu domainPinVcpu;
     virDrvDomainPinVcpuFlags domainPinVcpuFlags;
     virDrvDomainGetVcpuPinInfo domainGetVcpuPinInfo;
+    virDrvDomainPinHypervisorFlags domainPinHypervisorFlags;
+    virDrvDomainGetHypervisorPinInfo domainGetHypervisorPinInfo;
     virDrvDomainGetVcpus domainGetVcpus;
     virDrvDomainGetMaxVcpus domainGetMaxVcpus;
     virDrvDomainGetSecurityLabel domainGetSecurityLabel;
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index b0eef80..312b58b 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -3788,6 +3788,227 @@ cleanup:
 }
 
 static int
+qemudDomainPinHypervisorFlags(virDomainPtr dom,
+                              unsigned char *cpumap,
+                              int maplen,
+                              unsigned int flags)
+{
+    struct qemud_driver *driver = dom->conn->privateData;
+    virDomainObjPtr vm;
+    virCgroupPtr cgroup_dom = NULL;
+    virCgroupPtr cgroup_hypervisor = NULL;
+    pid_t pid;
+    virDomainDefPtr persistentDef = NULL;
+    int maxcpu, hostcpus;
+    virNodeInfo nodeinfo;
+    int ret = -1;
+    qemuDomainObjPrivatePtr priv;
+    bool canResetting = true;
+    int pcpu;
+
+    virCheckFlags(VIR_DOMAIN_AFFECT_LIVE |
+                  VIR_DOMAIN_AFFECT_CONFIG, -1);
+
+    qemuDriverLock(driver);
+    vm = virDomainFindByUUID(&driver->domains, dom->uuid);
+    qemuDriverUnlock(driver);
+
+    if (!vm) {
+        char uuidstr[VIR_UUID_STRING_BUFLEN];
+        virUUIDFormat(dom->uuid, uuidstr);
+        qemuReportError(VIR_ERR_NO_DOMAIN,
+                        _("no domain with matching uuid '%s'"), uuidstr);
+        goto cleanup;
+    }
+
+    if (virDomainLiveConfigHelperMethod(driver->caps, vm, &flags,
+                                        &persistentDef) < 0)
+        goto cleanup;
+
+    priv = vm->privateData;
+
+    if (nodeGetInfo(dom->conn, &nodeinfo) < 0)
+        goto cleanup;
+    hostcpus = VIR_NODEINFO_MAXCPUS(nodeinfo);
+    maxcpu = maplen * 8;
+    if (maxcpu > hostcpus)
+        maxcpu = hostcpus;
+    /* pinning to all physical cpus means resetting,
+     * so check if we can reset setting.
+     */
+    for (pcpu = 0; pcpu < hostcpus; pcpu++) {
+        if ((cpumap[pcpu/8] & (1 << (pcpu % 8))) == 0) {
+            canResetting = false;
+            break;
+        }
+    }
+
+    pid = vm->pid;
+
+    if (flags & VIR_DOMAIN_AFFECT_LIVE) {
+
+        if (priv->vcpupids != NULL) {
+            if (virDomainHypervisorPinAdd(vm->def, cpumap, maplen) < 0) {
+                qemuReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+                                _("failed to update or add hypervisorpin xml "
+                                  "of a running domain"));
+                goto cleanup;
+            }
+
+            if (qemuCgroupControllerActive(driver,
+                                           VIR_CGROUP_CONTROLLER_CPUSET)) {
+                /*
+                 * Configure the corresponding cpuset cgroup.
+                 * If no cgroup for domain or hypervisor exists, do nothing.
+                 */
+                if (virCgroupForDomain(driver->cgroup, vm->def->name,
+                                       &cgroup_dom, 0) == 0) {
+                    if (virCgroupForHypervisor(cgroup_dom, &cgroup_hypervisor, 0) == 0) {
+                        if (qemuSetupCgroupHypervisorPin(cgroup_hypervisor, vm->def) < 0) {
+                            qemuReportError(VIR_ERR_OPERATION_INVALID, "%s",
+                                            _("failed to set cpuset.cpus in cgroup"
+                                              " for hypervisor threads"));
+                            goto cleanup;
+                        }
+                    }
+                }
+            }
+
+            if (canResetting) {
+                if (virDomainHypervisorPinDel(vm->def) < 0) {
+                    qemuReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+                                    _("failed to delete hypervisorpin xml of "
+                                      "a running domain"));
+                    goto cleanup;
+                }
+            }
+
+            if (virProcessInfoSetAffinity(pid, cpumap, maplen, maxcpu) < 0) {
+                qemuReportError(VIR_ERR_SYSTEM_ERROR, "%s",
+                                _("failed to set cpu affinity for "
+                                  "hypervisor threads"));
+                goto cleanup;
+            }
+        } else {
+            qemuReportError(VIR_ERR_OPERATION_INVALID,
+                            "%s", _("cpu affinity is not supported"));
+            goto cleanup;
+        }
+
+        if (virDomainSaveStatus(driver->caps, driver->stateDir, vm) < 0)
+            goto cleanup;
+    }
+
+    if (flags & VIR_DOMAIN_AFFECT_CONFIG) {
+
+        if (canResetting) {
+            if (virDomainHypervisorPinDel(persistentDef) < 0) {
+                qemuReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+                                _("failed to delete hypervisorpin xml of "
+                                  "a persistent domain"));
+                goto cleanup;
+            }
+        } else {
+            if (virDomainHypervisorPinAdd(persistentDef, cpumap, maplen) < 0) {
+                qemuReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+                                _("failed to update or add hypervisorpin xml "
+                                  "of a persistent domain"));
+                goto cleanup;
+            }
+        }
+
+        ret = virDomainSaveConfig(driver->configDir, persistentDef);
+        goto cleanup;
+    }
+
+    ret = 0;
+
+cleanup:
+    if (cgroup_hypervisor)
+        virCgroupFree(&cgroup_hypervisor);
+    if (cgroup_dom)
+        virCgroupFree(&cgroup_dom);
+
+    if (vm)
+        virDomainObjUnlock(vm);
+    return ret;
+}
+
+static int
+qemudDomainGetHypervisorPinInfo(virDomainPtr dom,
+                                unsigned char *cpumaps,
+                                int maplen,
+                                unsigned int flags)
+{
+    struct qemud_driver *driver = dom->conn->privateData;
+    virDomainObjPtr vm = NULL;
+    virNodeInfo nodeinfo;
+    virDomainDefPtr targetDef = NULL;
+    int ret = -1;
+    int maxcpu, hostcpus, pcpu;
+    virDomainVcpuPinDefPtr hypervisorpin = NULL;
+    char *cpumask = NULL;
+
+    virCheckFlags(VIR_DOMAIN_AFFECT_LIVE |
+                  VIR_DOMAIN_AFFECT_CONFIG, -1);
+
+    qemuDriverLock(driver);
+    vm = virDomainFindByUUID(&driver->domains, dom->uuid);
+    qemuDriverUnlock(driver);
+
+    if (!vm) {
+        char uuidstr[VIR_UUID_STRING_BUFLEN];
+        virUUIDFormat(dom->uuid, uuidstr);
+        qemuReportError(VIR_ERR_NO_DOMAIN,
+                        _("no domain with matching uuid '%s'"), uuidstr);
+        goto cleanup;
+    }
+
+    if (virDomainLiveConfigHelperMethod(driver->caps, vm, &flags,
+                                        &targetDef) < 0)
+        goto cleanup;
+
+    if (flags & VIR_DOMAIN_AFFECT_LIVE)
+        targetDef = vm->def;
+
+    /* Coverity didn't realize that targetDef must be set if we got here. */
+    sa_assert(targetDef);
+
+    if (nodeGetInfo(dom->conn, &nodeinfo) < 0)
+        goto cleanup;
+    hostcpus = VIR_NODEINFO_MAXCPUS(nodeinfo);
+    maxcpu = maplen * 8;
+    if (maxcpu > hostcpus)
+        maxcpu = hostcpus;
+
+    /* initialize cpumaps */
+    memset(cpumaps, 0xff, maplen);
+    if (maxcpu % 8) {
+        cpumaps[maplen - 1] &= (1 << maxcpu % 8) - 1;
+    }
+
+    /* If no hypervisorpin, all cpus should be used */
+    hypervisorpin = targetDef->cputune.hypervisorpin;
+    if (!hypervisorpin) {
+        ret = 0;
+        goto cleanup;
+    }
+
+    cpumask = hypervisorpin->cpumask;
+    for (pcpu = 0; pcpu < maxcpu; pcpu++) {
+        if (cpumask[pcpu] == 0)
+            VIR_UNUSE_CPU(cpumaps, pcpu);
+    }
+
+    ret = 1;
+
+cleanup:
+    if (vm)
+        virDomainObjUnlock(vm);
+    return ret;
+}
+
+static int
 qemudDomainGetVcpus(virDomainPtr dom,
                     virVcpuInfoPtr info,
                     int maxinfo,
@@ -13026,6 +13247,8 @@ static virDriver qemuDriver = {
     .domainPinVcpu = qemudDomainPinVcpu, /* 0.4.4 */
     .domainPinVcpuFlags = qemudDomainPinVcpuFlags, /* 0.9.3 */
     .domainGetVcpuPinInfo = qemudDomainGetVcpuPinInfo, /* 0.9.3 */
+    .domainPinHypervisorFlags = qemudDomainPinHypervisorFlags, /* 0.9.12 */
+    .domainGetHypervisorPinInfo = qemudDomainGetHypervisorPinInfo, /* 0.9.12 */
     .domainGetVcpus = qemudDomainGetVcpus, /* 0.4.4 */
     .domainGetMaxVcpus = qemudDomainGetMaxVcpus, /* 0.4.4 */
     .domainGetSecurityLabel = qemudDomainGetSecurityLabel, /* 0.6.1 */
-- 
1.7.3.1

Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
---
 src/remote/remote_driver.c   | 102 ++++++++++++++++++++++++++++++++++++++++++
 src/remote/remote_protocol.x |  24 +++++++++-
 src/remote_protocol-structs  |  24 ++++++++++
 3 files changed, 149 insertions(+), 1 deletions(-)

diff --git a/src/remote/remote_driver.c b/src/remote/remote_driver.c
index 299cd69..a945a8e 100644
--- a/src/remote/remote_driver.c
+++ b/src/remote/remote_driver.c
@@ -1743,6 +1743,106 @@ done:
 }
 
 static int
+remoteDomainPinHypervisorFlags (virDomainPtr dom,
+                                unsigned char *cpumap,
+                                int cpumaplen,
+                                unsigned int flags)
+{
+    int rv = -1;
+    struct private_data *priv = dom->conn->privateData;
+    remote_domain_pin_hypervisor_flags_args args;
+
+    remoteDriverLock(priv);
+
+    if (cpumaplen > REMOTE_CPUMAP_MAX) {
+        remoteError(VIR_ERR_RPC,
+                    _("%s length greater than maximum: %d > %d"),
+                    "cpumap", (int)cpumaplen, REMOTE_CPUMAP_MAX);
+        goto done;
+    }
+
+    make_nonnull_domain(&args.dom, dom);
+    args.vcpu = -1;
+    args.cpumap.cpumap_val = (char *)cpumap;
+    args.cpumap.cpumap_len = cpumaplen;
+    args.flags = flags;
+
+    if (call(dom->conn, priv, 0, REMOTE_PROC_DOMAIN_PIN_HYPERVISOR_FLAGS,
+             (xdrproc_t) xdr_remote_domain_pin_hypervisor_flags_args,
+             (char *) &args,
+             (xdrproc_t) xdr_void, (char *) NULL) == -1) {
+        goto done;
+    }
+
+    rv = 0;
+
+done:
+    remoteDriverUnlock(priv);
+    return rv;
+}
+
+
+static int
+remoteDomainGetHypervisorPinInfo (virDomainPtr domain,
+                                  unsigned char *cpumaps,
+                                  int maplen,
+                                  unsigned int flags)
+{
+    int rv = -1;
+    int i;
+    remote_domain_get_hypervisor_pin_info_args args;
+    remote_domain_get_hypervisor_pin_info_ret ret;
+    struct private_data *priv = domain->conn->privateData;
+
+    remoteDriverLock(priv);
+
+    /* There is only one cpumap for all hypervisor threads */
+    if (INT_MULTIPLY_OVERFLOW(1, maplen) ||
+        maplen > REMOTE_CPUMAPS_MAX) {
+        remoteError(VIR_ERR_RPC,
+                    _("vCPU map buffer length exceeds maximum: %d > %d"),
+                    maplen, REMOTE_CPUMAPS_MAX);
+        goto done;
+    }
+
+    make_nonnull_domain(&args.dom, domain);
+    args.ncpumaps = 1;
+    args.maplen = maplen;
+    args.flags = flags;
+
+    memset(&ret, 0, sizeof ret);
+
+    if (call (domain->conn, priv, 0, REMOTE_PROC_DOMAIN_GET_HYPERVISOR_PIN_INFO,
+              (xdrproc_t) xdr_remote_domain_get_hypervisor_pin_info_args,
+              (char *) &args,
+              (xdrproc_t) xdr_remote_domain_get_hypervisor_pin_info_ret,
+              (char *) &ret) == -1)
+        goto done;
+
+    if (ret.cpumaps.cpumaps_len > maplen) {
+        remoteError(VIR_ERR_RPC,
+                    _("host reports map buffer length exceeds maximum: %d > %d"),
+                    ret.cpumaps.cpumaps_len, maplen);
+        goto cleanup;
+    }
+
+    memset(cpumaps, 0, maplen);
+
+    for (i = 0; i < ret.cpumaps.cpumaps_len; ++i)
+        cpumaps[i] = ret.cpumaps.cpumaps_val[i];
+
+    rv = ret.num;
+
+cleanup:
+    xdr_free ((xdrproc_t) xdr_remote_domain_get_hypervisor_pin_info_ret,
+              (char *) &ret);
+
+done:
+    remoteDriverUnlock(priv);
+    return rv;
+}
+
+static int
 remoteDomainGetVcpus (virDomainPtr domain,
                       virVcpuInfoPtr info,
                       int maxinfo,
@@ -5003,6 +5103,8 @@ static virDriver remote_driver = {
     .domainPinVcpu = remoteDomainPinVcpu, /* 0.3.0 */
     .domainPinVcpuFlags = remoteDomainPinVcpuFlags, /* 0.9.3 */
     .domainGetVcpuPinInfo = remoteDomainGetVcpuPinInfo, /* 0.9.3 */
+    .domainPinHypervisorFlags = remoteDomainPinHypervisorFlags, /* 0.9.12 */
+    .domainGetHypervisorPinInfo = remoteDomainGetHypervisorPinInfo, /* 0.9.12 */
     .domainGetVcpus = remoteDomainGetVcpus, /* 0.3.0 */
     .domainGetMaxVcpus = remoteDomainGetMaxVcpus, /* 0.3.0 */
     .domainGetSecurityLabel = remoteDomainGetSecurityLabel, /* 0.6.1 */
diff --git a/src/remote/remote_protocol.x b/src/remote/remote_protocol.x
index 2d57247..1ad9b44 100644
--- a/src/remote/remote_protocol.x
+++ b/src/remote/remote_protocol.x
@@ -1054,6 +1054,25 @@ struct remote_domain_get_vcpu_pin_info_ret {
     int num;
 };
 
+struct remote_domain_pin_hypervisor_flags_args {
+    remote_nonnull_domain dom;
+    unsigned int vcpu;
+    opaque cpumap<REMOTE_CPUMAP_MAX>; /* (unsigned char *) */
+    unsigned int flags;
+};
+
+struct remote_domain_get_hypervisor_pin_info_args {
+    remote_nonnull_domain dom;
+    int ncpumaps;
+    int maplen;
+    unsigned int flags;
+};
+
+struct remote_domain_get_hypervisor_pin_info_ret {
+    opaque cpumaps<REMOTE_CPUMAPS_MAX>;
+    int num;
+};
+
 struct remote_domain_get_vcpus_args {
     remote_nonnull_domain dom;
     int maxinfo;
@@ -2782,7 +2801,10 @@ enum remote_procedure {
     REMOTE_PROC_DOMAIN_PM_WAKEUP = 267, /* autogen autogen */
     REMOTE_PROC_DOMAIN_EVENT_TRAY_CHANGE = 268, /* autogen autogen */
     REMOTE_PROC_DOMAIN_EVENT_PMWAKEUP = 269, /* autogen autogen */
-    REMOTE_PROC_DOMAIN_EVENT_PMSUSPEND = 270 /* autogen autogen */
+    REMOTE_PROC_DOMAIN_EVENT_PMSUSPEND = 270, /* autogen autogen */
+
+    REMOTE_PROC_DOMAIN_PIN_HYPERVISOR_FLAGS = 271, /* skipgen skipgen */
+    REMOTE_PROC_DOMAIN_GET_HYPERVISOR_PIN_INFO = 272 /* skipgen skipgen */
 
     /*
      * Notice how the entries are grouped in sets of 10 ?
diff --git a/src/remote_protocol-structs b/src/remote_protocol-structs
index 9b2414f..69a80b9 100644
--- a/src/remote_protocol-structs
+++ b/src/remote_protocol-structs
@@ -718,6 +718,28 @@ struct remote_domain_get_vcpu_pin_info_ret {
         } cpumaps;
         int num;
 };
+struct remote_domain_pin_hypervisor_flags_args {
+        remote_nonnull_domain dom;
+        u_int vcpu;
+        struct {
+                u_int cpumap_len;
+                char * cpumap_val;
+        } cpumap;
+        u_int flags;
+};
+struct remote_domain_get_hypervisor_pin_info_args {
+        remote_nonnull_domain dom;
+        int ncpumaps;
+        int maplen;
+        u_int flags;
+};
+struct remote_domain_get_hypervisor_pin_info_ret {
+        struct {
+                u_int cpumaps_len;
+                char * cpumaps_val;
+        } cpumaps;
+        int num;
+};
 struct remote_domain_get_vcpus_args {
         remote_nonnull_domain dom;
         int maxinfo;
@@ -2192,4 +2214,6 @@ enum remote_procedure {
         REMOTE_PROC_DOMAIN_EVENT_TRAY_CHANGE = 268,
         REMOTE_PROC_DOMAIN_EVENT_PMWAKEUP = 269,
         REMOTE_PROC_DOMAIN_EVENT_PMSUSPEND = 270,
+        REMOTE_PROC_DOMAIN_PIN_HYPERVISOR_FLAGS = 271,
+        REMOTE_PROC_DOMAIN_GET_HYPERVISOR_PIN_INFO = 272,
 };
--
1.7.3.1
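[Editor's note: the wire structs bound each cpumap by REMOTE_CPUMAP_MAX, while callers size the map from the host CPU count with the public VIR_CPU_MAPLEN() macro (one bit per CPU, rounded up to whole bytes). A small sanity sketch of that relationship; the CPU counts are illustrative:

    #include <assert.h>
    #include <libvirt/libvirt.h>

    int main(void)
    {
        assert(VIR_CPU_MAPLEN(8) == 1);   /* CPUs 0-7 fit in one byte */
        assert(VIR_CPU_MAPLEN(11) == 2);  /* CPUs 8-10 spill into a second byte */
        return 0;
    }

Any maplen produced this way for a real host fits comfortably within the protocol limit; the explicit check in remoteDomainPinHypervisorFlags() is there to reject corrupt or hostile input.]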

Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
---
 daemon/remote.c | 103 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 103 insertions(+), 0 deletions(-)

diff --git a/daemon/remote.c b/daemon/remote.c
index a02c09b..823338a 100644
--- a/daemon/remote.c
+++ b/daemon/remote.c
@@ -1454,6 +1454,109 @@ no_memory:
 }
 
 static int
+remoteDispatchDomainPinHypervisorFlags(virNetServerPtr server ATTRIBUTE_UNUSED,
+                                       virNetServerClientPtr client,
+                                       virNetMessagePtr msg ATTRIBUTE_UNUSED,
+                                       virNetMessageErrorPtr rerr,
+                                       remote_domain_pin_hypervisor_flags_args *args)
+{
+    int rv = -1;
+    virDomainPtr dom = NULL;
+    struct daemonClientPrivate *priv =
+        virNetServerClientGetPrivateData(client);
+
+    if (!priv->conn) {
+        virNetError(VIR_ERR_INTERNAL_ERROR, "%s", _("connection not open"));
+        goto cleanup;
+    }
+
+    if (!(dom = get_nonnull_domain(priv->conn, args->dom)))
+        goto cleanup;
+
+    if (virDomainPinHypervisorFlags(dom,
+                                    (unsigned char *) args->cpumap.cpumap_val,
+                                    args->cpumap.cpumap_len,
+                                    args->flags) < 0)
+        goto cleanup;
+
+    rv = 0;
+
+cleanup:
+    if (rv < 0)
+        virNetMessageSaveError(rerr);
+    if (dom)
+        virDomainFree(dom);
+    return rv;
+}
+
+
+static int
+remoteDispatchDomainGetHypervisorPinInfo(virNetServerPtr server ATTRIBUTE_UNUSED,
+                                         virNetServerClientPtr client ATTRIBUTE_UNUSED,
+                                         virNetMessagePtr msg ATTRIBUTE_UNUSED,
+                                         virNetMessageErrorPtr rerr,
+                                         remote_domain_get_hypervisor_pin_info_args *args,
+                                         remote_domain_get_hypervisor_pin_info_ret *ret)
+{
+    virDomainPtr dom = NULL;
+    unsigned char *cpumaps = NULL;
+    int num;
+    int rv = -1;
+    struct daemonClientPrivate *priv =
+        virNetServerClientGetPrivateData(client);
+
+    if (!priv->conn) {
+        virNetError(VIR_ERR_INTERNAL_ERROR, "%s", _("connection not open"));
+        goto cleanup;
+    }
+
+    if (!(dom = get_nonnull_domain(priv->conn, args->dom)))
+        goto cleanup;
+
+    /* There is only one cpumap struct for all hypervisor threads */
+    if (args->ncpumaps != 1) {
+        virNetError(VIR_ERR_INTERNAL_ERROR, "%s", _("ncpumaps != 1"));
+        goto cleanup;
+    }
+
+    if (INT_MULTIPLY_OVERFLOW(args->ncpumaps, args->maplen) ||
+        args->ncpumaps * args->maplen > REMOTE_CPUMAPS_MAX) {
+        virNetError(VIR_ERR_INTERNAL_ERROR, "%s",
+                    _("maxinfo * maplen > REMOTE_CPUMAPS_MAX"));
+        goto cleanup;
+    }
+
+    /* Allocate buffers to take the results */
+    if (args->maplen > 0 &&
+        VIR_ALLOC_N(cpumaps, args->maplen) < 0)
+        goto no_memory;
+
+    if ((num = virDomainGetHypervisorPinInfo(dom,
+                                             cpumaps,
+                                             args->maplen,
+                                             args->flags)) < 0)
+        goto cleanup;
+
+    ret->num = num;
+    ret->cpumaps.cpumaps_len = args->maplen;
+    ret->cpumaps.cpumaps_val = (char *) cpumaps;
+    cpumaps = NULL;
+
+    rv = 0;
+
+cleanup:
+    if (rv < 0)
+        virNetMessageSaveError(rerr);
+    VIR_FREE(cpumaps);
+    if (dom)
+        virDomainFree(dom);
+    return rv;
+
+no_memory:
+    virReportOOMError();
+    goto cleanup;
+}
+
+static int
 remoteDispatchDomainGetVcpus(virNetServerPtr server ATTRIBUTE_UNUSED,
                              virNetServerClientPtr client ATTRIBUTE_UNUSED,
                              virNetMessagePtr msg ATTRIBUTE_UNUSED,
--
1.7.3.1
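[Editor's note: the dispatcher guards the product ncpumaps * maplen with INT_MULTIPLY_OVERFLOW before comparing it to REMOTE_CPUMAPS_MAX. The same property can be checked without ever multiplying, which is handy where that gnulib-style macro is unavailable. A standalone equivalent; the helper name is hypothetical:

    #include <stdbool.h>

    /* True iff n * len is a sane request, i.e. positive and <= limit.
     * For positive ints, n * len <= limit  <=>  n <= limit / len,
     * so no multiplication (and hence no overflow) is needed. */
    static bool
    cpumaps_size_ok(int n, int len, int limit)
    {
        return n > 0 && len > 0 && n <= limit / len;
    }

The daemon would invoke it as cpumaps_size_ok(args->ncpumaps, args->maplen, REMOTE_CPUMAPS_MAX).]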

Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
---
 include/libvirt/libvirt.h.in |   9 +++
 src/libvirt.c                | 147 ++++++++++++++++++++++++++++++++++++++++++
 src/libvirt_public.syms      |   6 ++
 3 files changed, 162 insertions(+), 0 deletions(-)

diff --git a/include/libvirt/libvirt.h.in b/include/libvirt/libvirt.h.in
index da3ce29..44bdb7d 100644
--- a/include/libvirt/libvirt.h.in
+++ b/include/libvirt/libvirt.h.in
@@ -1827,6 +1827,15 @@ int virDomainGetVcpuPinInfo (virDomainPtr domain,
                              unsigned char *cpumaps,
                              int maplen,
                              unsigned int flags);
+int virDomainPinHypervisorFlags (virDomainPtr domain,
+                                 unsigned char *cpumap,
+                                 int maplen,
+                                 unsigned int flags);
+
+int virDomainGetHypervisorPinInfo (virDomainPtr domain,
+                                   unsigned char *cpumaps,
+                                   int maplen,
+                                   unsigned int flags);
 
 /**
  * VIR_USE_CPU:
diff --git a/src/libvirt.c b/src/libvirt.c
index ec8307e..2e8bf3a 100644
--- a/src/libvirt.c
+++ b/src/libvirt.c
@@ -8716,6 +8716,153 @@ error:
 }
 
 /**
+ * virDomainPinHypervisorFlags:
+ * @domain: pointer to domain object, or NULL for Domain0
+ * @cpumap: pointer to a bit map of real CPUs (in 8-bit bytes) (IN)
+ *      Each bit set to 1 means that corresponding CPU is usable.
+ *      Bytes are stored in little-endian order: CPU0-7, 8-15...
+ *      In each byte, lowest CPU number is least significant bit.
+ * @maplen: number of bytes in cpumap, from 1 up to size of CPU map in
+ *      underlying virtualization system (Xen...).
+ *      If maplen < size, missing bytes are set to zero.
+ *      If maplen > size, failure code is returned.
+ * @flags: bitwise-OR of virDomainModificationImpact
+ *
+ * Dynamically change the real CPUs which can be allocated to all hypervisor
+ * threads. This function may require privileged access to the hypervisor.
+ *
+ * @flags may include VIR_DOMAIN_AFFECT_LIVE or VIR_DOMAIN_AFFECT_CONFIG.
+ * Both flags may be set.
+ * If VIR_DOMAIN_AFFECT_LIVE is set, the change affects a running domain
+ * and may fail if domain is not alive.
+ * If VIR_DOMAIN_AFFECT_CONFIG is set, the change affects persistent state,
+ * and will fail for transient domains. If neither flag is specified (that is,
+ * @flags is VIR_DOMAIN_AFFECT_CURRENT), then an inactive domain modifies
+ * persistent setup, while an active domain is hypervisor-dependent on whether
+ * just live or both live and persistent state is changed.
+ * Not all hypervisors can support all flag combinations.
+ *
+ * See also virDomainGetHypervisorPinInfo for querying this information.
+ *
+ * Returns 0 in case of success, -1 in case of failure.
+ */
+int
+virDomainPinHypervisorFlags(virDomainPtr domain, unsigned char *cpumap,
+                            int maplen, unsigned int flags)
+{
+    virConnectPtr conn;
+
+    VIR_DOMAIN_DEBUG(domain, "cpumap=%p, maplen=%d, flags=%x",
+                     cpumap, maplen, flags);
+
+    virResetLastError();
+
+    if (!VIR_IS_CONNECTED_DOMAIN(domain)) {
+        virLibDomainError(VIR_ERR_INVALID_DOMAIN, __FUNCTION__);
+        virDispatchError(NULL);
+        return -1;
+    }
+
+    if (domain->conn->flags & VIR_CONNECT_RO) {
+        virLibDomainError(VIR_ERR_OPERATION_DENIED, __FUNCTION__);
+        goto error;
+    }
+
+    if ((cpumap == NULL) || (maplen < 1)) {
+        virLibDomainError(VIR_ERR_INVALID_ARG, __FUNCTION__);
+        goto error;
+    }
+
+    conn = domain->conn;
+
+    if (conn->driver->domainPinHypervisorFlags) {
+        int ret;
+        ret = conn->driver->domainPinHypervisorFlags (domain, cpumap, maplen, flags);
+        if (ret < 0)
+            goto error;
+        return ret;
+    }
+
+    virLibConnError(VIR_ERR_NO_SUPPORT, __FUNCTION__);
+
+error:
+    virDispatchError(domain->conn);
+    return -1;
+}
+
+/**
+ * virDomainGetHypervisorPinInfo:
+ * @domain: pointer to domain object, or NULL for Domain0
+ * @cpumap: pointer to a bit map of real CPUs for all hypervisor threads of
+ *     this domain (in 8-bit bytes) (OUT)
+ *     There is only one cpumap for all hypervisor threads.
+ *     Must not be NULL.
+ * @maplen: the number of bytes in one cpumap, from 1 up to size of CPU map.
+ *     Must be positive.
+ * @flags: bitwise-OR of virDomainModificationImpact
+ *     Must not be VIR_DOMAIN_AFFECT_LIVE and
+ *     VIR_DOMAIN_AFFECT_CONFIG concurrently.
+ *
+ * Query the CPU affinity setting of all hypervisor threads of domain, store
+ * it in cpumap.
+ *
+ * Returns 1 in case of success,
+ * 0 in case no hypervisor threads are pinned to physical CPUs,
+ * -1 in case of failure.
+ */
+int
+virDomainGetHypervisorPinInfo(virDomainPtr domain, unsigned char *cpumap,
+                              int maplen, unsigned int flags)
+{
+    virConnectPtr conn;
+
+    VIR_DOMAIN_DEBUG(domain, "cpumap=%p, maplen=%d, flags=%x",
+                     cpumap, maplen, flags);
+
+    virResetLastError();
+
+    if (!VIR_IS_CONNECTED_DOMAIN(domain)) {
+        virLibDomainError(VIR_ERR_INVALID_DOMAIN, __FUNCTION__);
+        virDispatchError(NULL);
+        return -1;
+    }
+
+    if (!cpumap || maplen <= 0) {
+        virLibDomainError(VIR_ERR_INVALID_ARG, __FUNCTION__);
+        goto error;
+    }
+    if (INT_MULTIPLY_OVERFLOW(1, maplen)) {
+        virLibDomainError(VIR_ERR_OVERFLOW, _("input too large: 1 * %d"),
+                          maplen);
+        goto error;
+    }
+
+    /* At most one of these two flags should be set. */
+    if ((flags & VIR_DOMAIN_AFFECT_LIVE) &&
+        (flags & VIR_DOMAIN_AFFECT_CONFIG)) {
+        virLibDomainError(VIR_ERR_INVALID_ARG, __FUNCTION__);
+        goto error;
+    }
+    conn = domain->conn;
+
+    if (conn->driver->domainGetHypervisorPinInfo) {
+        int ret;
+        ret = conn->driver->domainGetHypervisorPinInfo(domain, cpumap,
+                                                       maplen, flags);
+        if (ret < 0)
+            goto error;
+        return ret;
+    }
+
+    virLibConnError(VIR_ERR_NO_SUPPORT, __FUNCTION__);
+
+error:
+    virDispatchError(domain->conn);
+    return -1;
+}
+
+/**
  * virDomainGetVcpus:
  * @domain: pointer to domain object, or NULL for Domain0
  * @info: pointer to an array of virVcpuInfo structures (OUT)
diff --git a/src/libvirt_public.syms b/src/libvirt_public.syms
index 46c13fb..6576cd8 100644
--- a/src/libvirt_public.syms
+++ b/src/libvirt_public.syms
@@ -534,4 +534,10 @@ LIBVIRT_0.9.11 {
         virDomainPMWakeup;
 } LIBVIRT_0.9.10;
 
+LIBVIRT_0.9.12 {
+    global:
+        virDomainPinHypervisorFlags;
+        virDomainGetHypervisorPinInfo;
+} LIBVIRT_0.9.11;
+
 # .... define new API here using predicted next version number ....
--
1.7.3.1
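[Editor's note: putting the new public API together, a client pins the hypervisor threads by filling a cpumap with the public bitmap macros and calling the two new entry points. A minimal sketch, assuming a running domain named "vm1" on the local qemu driver; error handling trimmed for brevity:

    #include <stdio.h>
    #include <stdlib.h>
    #include <libvirt/libvirt.h>

    int main(void)
    {
        virConnectPtr conn = virConnectOpen("qemu:///system");
        virDomainPtr dom = virDomainLookupByName(conn, "vm1");
        int maplen = VIR_CPU_MAPLEN(2);              /* room for pCPUs 0-1 */
        unsigned char *cpumap = calloc(maplen, 1);

        VIR_USE_CPU(cpumap, 0);                      /* allow pCPU 0 */
        VIR_USE_CPU(cpumap, 1);                      /* allow pCPU 1 */

        if (virDomainPinHypervisorFlags(dom, cpumap, maplen,
                                        VIR_DOMAIN_AFFECT_LIVE) < 0)
            fprintf(stderr, "pinning hypervisor threads failed\n");

        /* Read the setting back; 1 = pinned, 0 = no explicit pinning. */
        if (virDomainGetHypervisorPinInfo(dom, cpumap, maplen,
                                          VIR_DOMAIN_AFFECT_LIVE) == 1)
            printf("hypervisor threads pinned, first byte: 0x%02x\n",
                   cpumap[0]);

        free(cpumap);
        virDomainFree(dom);
        virConnectClose(conn);
        return 0;
    }]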

Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
---
 tests/vcpupin   |   6 +-
 tools/virsh.c   | 145 +++++++++++++++++++++++++++++++++++++------------------
 tools/virsh.pod |  16 ++++--
 3 files changed, 110 insertions(+), 57 deletions(-)

diff --git a/tests/vcpupin b/tests/vcpupin
index 5952862..ffd16fa 100755
--- a/tests/vcpupin
+++ b/tests/vcpupin
@@ -30,16 +30,16 @@ fi
 fail=0
 
 # Invalid syntax.
-$abs_top_builddir/tools/virsh --connect test:///default vcpupin test a 0,1 > out 2>&1
+$abs_top_builddir/tools/virsh --connect test:///default vcpupin test a --vcpu 0,1 > out 2>&1
 test $? = 1 || fail=1
 cat <<\EOF > exp || fail=1
-error: vcpupin: Invalid or missing vCPU number.
+error: vcpupin: Invalid or missing vCPU number, or missing --hypervisor option.
 EOF
 compare exp out || fail=1
 
 # An out-of-range vCPU number deserves a diagnostic, too.
-$abs_top_builddir/tools/virsh --connect test:///default vcpupin test 100 0,1 > out 2>&1
+$abs_top_builddir/tools/virsh --connect test:///default vcpupin test --vcpu 100 0,1 > out 2>&1
 test $? = 1 || fail=1
 cat <<\EOF > exp || fail=1
 error: vcpupin: Invalid vCPU number.
diff --git a/tools/virsh.c b/tools/virsh.c
index a934c13..e9475bf 100644
--- a/tools/virsh.c
+++ b/tools/virsh.c
@@ -5237,14 +5237,15 @@ cmdVcpuinfo(vshControl *ctl, const vshCmd *cmd)
  * "vcpupin" command
  */
 static const vshCmdInfo info_vcpupin[] = {
-    {"help", N_("control or query domain vcpu affinity")},
-    {"desc", N_("Pin domain VCPUs to host physical CPUs.")},
+    {"help", N_("control or query domain vcpu and hypervisor threads affinities")},
+    {"desc", N_("Pin domain VCPUs or hypervisor threads to host physical CPUs.")},
     {NULL, NULL}
 };
 
 static const vshCmdOptDef opts_vcpupin[] = {
     {"domain", VSH_OT_DATA, VSH_OFLAG_REQ, N_("domain name, id or uuid")},
-    {"vcpu", VSH_OT_INT, 0, N_("vcpu number")},
+    {"vcpu", VSH_OT_INT, VSH_OFLAG_REQ_OPT, N_("vcpu number")},
+    {"hypervisor", VSH_OT_BOOL, VSH_OFLAG_REQ_OPT, N_("pin hypervisor threads")},
     {"cpulist", VSH_OT_DATA, VSH_OFLAG_EMPTY_OK,
      N_("host cpu number(s) to set, or omit option to query")},
     {"config", VSH_OT_BOOL, 0, N_("affect next boot")},
@@ -5253,6 +5254,45 @@ static const vshCmdOptDef opts_vcpupin[] = {
     {NULL, 0, 0, NULL}
 };
 
+/*
+ * Helper function to print vcpupin and hypervisorpin info.
+ */
+static bool
+printPinInfo(vshControl *ctl, unsigned char *cpumaps, size_t cpumaplen,
+             int maxcpu, int vcpuindex)
+{
+    int cpu, lastcpu;
+    bool bit, lastbit, isInvert;
+
+    if (!cpumaps || cpumaplen <= 0 || maxcpu <= 0 || vcpuindex < 0) {
+        return false;
+    }
+
+    bit = lastbit = isInvert = false;
+    lastcpu = -1;
+
+    for (cpu = 0; cpu < maxcpu; cpu++) {
+        bit = VIR_CPU_USABLE(cpumaps, cpumaplen, vcpuindex, cpu);
+
+        isInvert = (bit ^ lastbit);
+        if (bit && isInvert) {
+            if (lastcpu == -1)
+                vshPrint(ctl, "%d", cpu);
+            else
+                vshPrint(ctl, ",%d", cpu);
+            lastcpu = cpu;
+        }
+        if (!bit && isInvert && lastcpu != cpu - 1)
+            vshPrint(ctl, "-%d", cpu - 1);
+        lastbit = bit;
+    }
+    if (bit && !isInvert) {
+        vshPrint(ctl, "-%d", maxcpu - 1);
+    }
+
+    return true;
+}
+
 static bool
 cmdVcpuPin(vshControl *ctl, const vshCmd *cmd)
 {
@@ -5265,13 +5305,13 @@ cmdVcpuPin(vshControl *ctl, const vshCmd *cmd)
     unsigned char *cpumap = NULL;
     unsigned char *cpumaps = NULL;
     size_t cpumaplen;
-    bool bit, lastbit, isInvert;
-    int i, cpu, lastcpu, maxcpu, ncpus;
+    int i, cpu, lastcpu, maxcpu, ncpus, nhyper;
     bool unuse = false;
     const char *cur;
     bool config = vshCommandOptBool(cmd, "config");
     bool live = vshCommandOptBool(cmd, "live");
     bool current = vshCommandOptBool(cmd, "current");
+    bool hypervisor = vshCommandOptBool(cmd, "hypervisor");
     bool query = false; /* Query mode if no cpulist */
     unsigned int flags = 0;
 
@@ -5306,8 +5346,18 @@ cmdVcpuPin(vshControl *ctl, const vshCmd *cmd)
 
     /* In query mode, "vcpu" is optional */
     if (vshCommandOptInt(cmd, "vcpu", &vcpu) < !query) {
-        vshError(ctl, "%s",
-                 _("vcpupin: Invalid or missing vCPU number."));
+        if (!hypervisor) {
+            vshError(ctl, "%s",
+                     _("vcpupin: Invalid or missing vCPU number, "
+                       "or missing --hypervisor option."));
+            virDomainFree(dom);
+            return false;
+        }
+    }
+
+    if (hypervisor && vcpu != -1) {
+        vshError(ctl, "%s", _("vcpupin: --hypervisor and --vcpu cannot "
+                              "be used together."));
         virDomainFree(dom);
         return false;
     }
@@ -5339,47 +5389,45 @@ cmdVcpuPin(vshControl *ctl, const vshCmd *cmd)
         if (flags == -1)
             flags = VIR_DOMAIN_AFFECT_CURRENT;
 
-        cpumaps = vshMalloc(ctl, info.nrVirtCpu * cpumaplen);
-        if ((ncpus = virDomainGetVcpuPinInfo(dom, info.nrVirtCpu,
-                                             cpumaps, cpumaplen, flags)) >= 0) {
-
-            vshPrint(ctl, "%s %s\n", _("VCPU:"), _("CPU Affinity"));
-            vshPrint(ctl, "----------------------------------\n");
-            for (i = 0; i < ncpus; i++) {
-
-                if (vcpu != -1 && i != vcpu)
-                    continue;
-
-                bit = lastbit = isInvert = false;
-                lastcpu = -1;
-
-                vshPrint(ctl, "%4d: ", i);
-                for (cpu = 0; cpu < maxcpu; cpu++) {
-
-                    bit = VIR_CPU_USABLE(cpumaps, cpumaplen, i, cpu);
-
-                    isInvert = (bit ^ lastbit);
-                    if (bit && isInvert) {
-                        if (lastcpu == -1)
-                            vshPrint(ctl, "%d", cpu);
-                        else
-                            vshPrint(ctl, ",%d", cpu);
-                        lastcpu = cpu;
-                    }
-                    if (!bit && isInvert && lastcpu != cpu - 1)
-                        vshPrint(ctl, "-%d", cpu - 1);
-                    lastbit = bit;
-                }
-                if (bit && !isInvert) {
-                    vshPrint(ctl, "-%d", maxcpu - 1);
-                }
-                vshPrint(ctl, "\n");
+        if (!hypervisor) {
+            cpumaps = vshMalloc(ctl, info.nrVirtCpu * cpumaplen);
+            if ((ncpus = virDomainGetVcpuPinInfo(dom, info.nrVirtCpu,
+                                                 cpumaps, cpumaplen, flags)) >= 0) {
+                vshPrint(ctl, "%s %s\n", _("VCPU:"), _("CPU Affinity"));
+                vshPrint(ctl, "----------------------------------\n");
+                for (i = 0; i < ncpus; i++) {
+                    if (vcpu != -1 && i != vcpu)
+                        continue;
+
+                    vshPrint(ctl, "%4d: ", i);
+                    ret = printPinInfo(ctl, cpumaps, cpumaplen, maxcpu, i);
+                    vshPrint(ctl, "\n");
+                    if (!ret)
+                        break;
+                }
+            } else {
+                ret = false;
             }
-
-        } else {
-            ret = false;
+            VIR_FREE(cpumaps);
+        }
+
+        if (vcpu == -1) {
+            cpumaps = vshMalloc(ctl, cpumaplen);
+            if ((nhyper = virDomainGetHypervisorPinInfo(dom, cpumaps,
+                                                        cpumaplen, flags)) >= 0) {
+                if (!hypervisor)
+                    vshPrint(ctl, "\n");
+                vshPrint(ctl, "%s %s\n", _("Hypervisor:"), _("CPU Affinity"));
+                vshPrint(ctl, "----------------------------------\n");
+
+                vshPrint(ctl, " *: ");
+                ret = printPinInfo(ctl, cpumaps, cpumaplen, maxcpu, 0);
+                vshPrint(ctl, "\n");
+            } else if (nhyper < 0) {
+                ret = false;
+            }
+            VIR_FREE(cpumaps);
         }
-        VIR_FREE(cpumaps);
         goto cleanup;
     }
 
@@ -5457,13 +5505,14 @@ cmdVcpuPin(vshControl *ctl, const vshCmd *cmd)
     }
 
     if (flags == -1) {
-        if (virDomainPinVcpu(dom, vcpu, cpumap, cpumaplen) != 0) {
+        flags = VIR_DOMAIN_AFFECT_LIVE;
+    }
+
+    if (!hypervisor) {
+        if (virDomainPinVcpuFlags(dom, vcpu, cpumap, cpumaplen, flags) != 0)
             ret = false;
-        }
     } else {
-        if (virDomainPinVcpuFlags(dom, vcpu, cpumap, cpumaplen, flags) != 0) {
+        if (virDomainPinHypervisorFlags(dom, cpumap, cpumaplen, flags) != 0)
             ret = false;
-        }
     }
 
 cleanup:
diff --git a/tools/virsh.pod b/tools/virsh.pod
index ef71717..0cdabb4 100644
--- a/tools/virsh.pod
+++ b/tools/virsh.pod
@@ -1522,12 +1522,16 @@ Thus, this command always takes exactly zero or two flags.
 Returns basic information about the domain virtual CPUs, like the number of
 vCPUs, the running time, the affinity to physical processors.
 
-=item B<vcpupin> I<domain-id> [I<vcpu>] [I<cpulist>] [[I<--live>]
-[I<--config>] | [I<--current>]]
-
-Query or change the pinning of domain VCPUs to host physical CPUs. To
-pin a single I<vcpu>, specify I<cpulist>; otherwise, you can query one
-I<vcpu> or omit I<vcpu> to list all at once.
+=item B<vcpupin> I<domain-id> [I<vcpu>] [I<--hypervisor>] [I<cpulist>]
+[[I<--live>] [I<--config>] | [I<--current>]]
+
+Query or change the pinning of domain VCPUs or hypervisor threads to host
+physical CPUs. To pin a single I<vcpu>, specify I<cpulist>; otherwise, you
+can query one I<vcpu>. To pin all hypervisor threads, specify I<--hypervisor>
+together with I<cpulist>; otherwise, I<--hypervisor> alone queries the
+current hypervisor thread affinity. Omit both I<vcpu> and I<--hypervisor>
+to list vCPU and hypervisor thread affinities all at once.
 
 I<cpulist> is a list of physical CPU numbers. Its syntax is a comma
 separated list and a special markup using '-' and '^' (ex. '0-4', '0-3,^2')
 can
--
1.7.3.1
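[Editor's note: the printPinInfo() helper compresses a bitmap into the "N-M,K" notation shown in the cover letter's examples. The logic is easier to follow outside virsh; a standalone sketch with a hypothetical test harness:

    #include <stdio.h>
    #include <stdbool.h>

    static void
    print_ranges(const bool *usable, int maxcpu)
    {
        int cpu, lastcpu = -1;
        bool lastbit = false;

        for (cpu = 0; cpu < maxcpu; cpu++) {
            bool bit = usable[cpu];
            if (bit && !lastbit) {                    /* a new run starts */
                printf(lastcpu == -1 ? "%d" : ",%d", cpu);
                lastcpu = cpu;
            }
            if (!bit && lastbit && lastcpu != cpu - 1)
                printf("-%d", cpu - 1);               /* close a multi-CPU run */
            lastbit = bit;
        }
        if (lastbit && lastcpu != maxcpu - 1)
            printf("-%d", maxcpu - 1);                /* run reaches the last CPU */
        printf("\n");
    }

    int main(void)
    {
        bool usable[8] = { true, true, true, false, false, true, false, false };
        print_ranges(usable, 8);                      /* prints "0-2,5" */
        return 0;
    }]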

On 06/05/2012 02:08 AM, tangchen wrote:
Hi~
Users can use the vcpupin command to bind a vcpu thread to a specific physical cpu. But besides vcpu threads, there are also some other threads created by qemu (known as hypervisor threads) that could not be explicitly bound to physical cpus.
I haven't had a chance to look at this yet, but it's on my to-do list to review this in time for 0.9.13. Thanks for your patience.

--
Eric Blake   eblake@redhat.com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org

Hi~

If anybody has time to take a look at these patches, please give some comments. Thanks. :)

On 06/07/2012 03:45 AM, Eric Blake wrote:
On 06/05/2012 02:08 AM, tangchen wrote:
Hi~
Users can use the vcpupin command to bind a vcpu thread to a specific physical cpu. But besides vcpu threads, there are also some other threads created by qemu (known as hypervisor threads) that could not be explicitly bound to physical cpus.
I haven't had a chance to look at this yet, but it's on my to-do list to review this in time for 0.9.13. Thanks for your patience.
--
Best Regards,
Tang chen

Hi, Eric

On 06/07/2012 03:45 AM, Eric Blake wrote:
On 06/05/2012 02:08 AM, tangchen wrote:
Hi~
Users can use the vcpupin command to bind a vcpu thread to a specific physical cpu. But besides vcpu threads, there are also some other threads created by qemu (known as hypervisor threads) that could not be explicitly bound to physical cpus.
I haven't had a chance to look at this yet, but it's on my to-do list to review this in time for 0.9.13. Thanks for your patience.
I noticed libvirt 0.9.13 has been released, so would you please have a look at these patches? Thanks. :)
--
Best Regards,
Tang chen