[libvirt] [PATCH v4 0/4] vcpupin: configure inactive domains' CPU affinity setting

Hi all,

This patchset enables us to configure inactive domains' CPU affinity setting.

v3 -> v4:
 - Rebase

*[PATCHv4 1/4] vcpupin: introduce a new libvirt API (virDomainPinVcpuFlags)
*[PATCHv4 2/4] vcpupin: implement the code to address the new API in the qemu driver
*[PATCHv4 3/4] vcpupin: implement the remote protocol to address the new API
*[PATCHv4 4/4] vcpupin: add the new options to "virsh vcpupin" command

Best regards,
Taku Izumi

This patch introduces a new libvirt API (virDomainPinVcpuFlags).

Signed-off-by: Taku Izumi <izumi.taku@jp.fujitsu.com>
---
 include/libvirt/libvirt.h.in |    6 +++
 src/driver.h                 |    7 +++
 src/libvirt.c                |   83 +++++++++++++++++++++++++++++++++++++++++++
 src/libvirt_public.syms      |    1
 4 files changed, 97 insertions(+)

Index: libvirt/include/libvirt/libvirt.h.in
===================================================================
--- libvirt.orig/include/libvirt/libvirt.h.in
+++ libvirt/include/libvirt/libvirt.h.in
@@ -1114,6 +1114,7 @@ typedef virVcpuInfo *virVcpuInfoPtr;
 
 /* Flags for controlling virtual CPU hot-plugging.  */
 typedef enum {
+    VIR_DOMAIN_VCPU_CURRENT = 0,        /* Affect current domain state */
     /* Must choose at least one of these two bits; SetVcpus can choose both */
     VIR_DOMAIN_VCPU_LIVE    = (1 << 0), /* Affect active domain */
     VIR_DOMAIN_VCPU_CONFIG  = (1 << 1), /* Affect next boot */
@@ -1134,6 +1135,11 @@ int                     virDomainPinVcpu
                                                  unsigned int vcpu,
                                                  unsigned char *cpumap,
                                                  int maplen);
+int                     virDomainPinVcpuFlags   (virDomainPtr domain,
+                                                 unsigned int vcpu,
+                                                 unsigned char *cpumap,
+                                                 int maplen,
+                                                 unsigned int flags);
 
 /**
  * VIR_USE_CPU:

Index: libvirt/src/driver.h
===================================================================
--- libvirt.orig/src/driver.h
+++ libvirt/src/driver.h
@@ -230,6 +230,12 @@ typedef int
                                          unsigned char *cpumap,
                                          int maplen);
 typedef int
+        (*virDrvDomainPinVcpuFlags)     (virDomainPtr domain,
+                                         unsigned int vcpu,
+                                         unsigned char *cpumap,
+                                         int maplen,
+                                         unsigned int flags);
+typedef int
         (*virDrvDomainGetVcpus)         (virDomainPtr domain,
                                          virVcpuInfoPtr info,
                                          int maxinfo,
@@ -661,6 +667,7 @@ struct _virDriver {
     virDrvDomainSetVcpusFlags   domainSetVcpusFlags;
     virDrvDomainGetVcpusFlags   domainGetVcpusFlags;
     virDrvDomainPinVcpu domainPinVcpu;
+    virDrvDomainPinVcpuFlags domainPinVcpuFlags;
     virDrvDomainGetVcpus domainGetVcpus;
     virDrvDomainGetMaxVcpus domainGetMaxVcpus;
     virDrvDomainGetSecurityLabel domainGetSecurityLabel;

Index: libvirt/src/libvirt.c
===================================================================
--- libvirt.orig/src/libvirt.c
+++ libvirt/src/libvirt.c
@@ -6330,6 +6330,89 @@ error:
 }
 
 /**
+ * virDomainPinVcpuFlags:
+ * @domain: pointer to domain object, or NULL for Domain0
+ * @vcpu: virtual CPU number
+ * @cpumap: pointer to a bit map of real CPUs (in 8-bit bytes) (IN)
+ *      Each bit set to 1 means that corresponding CPU is usable.
+ *      Bytes are stored in little-endian order: CPU0-7, 8-15...
+ *      In each byte, lowest CPU number is least significant bit.
+ * @maplen: number of bytes in cpumap, from 1 up to size of CPU map in
+ *      underlying virtualization system (Xen...).
+ *      If maplen < size, missing bytes are set to zero.
+ *      If maplen > size, failure code is returned.
+ * @flags: an OR'ed subset of virDomainVcpuFlags
+ *
+ * Dynamically change the real CPUs which can be allocated to a virtual CPU.
+ * This function requires privileged access to the hypervisor.
+ *
+ * @flags may include VIR_DOMAIN_VCPU_LIVE or VIR_DOMAIN_VCPU_CONFIG.
+ * Both flags may be set, but VIR_DOMAIN_VCPU_MAXIMUM cannot be set.
+ * If VIR_DOMAIN_VCPU_LIVE is set, the change affects a running domain
+ * and may fail if domain is not alive.
+ * If VIR_DOMAIN_VCPU_CONFIG is set, the change affects persistent state,
+ * and will fail for transient domains.
+ * If neither flag is specified (that is, @flags is VIR_DOMAIN_VCPU_CURRENT),
+ * then an inactive domain modifies persistent setup, while an active domain
+ * is hypervisor-dependent on whether just live or both live and persistent
+ * state is changed.
+ * Not all hypervisors can support all flag combinations.
+ *
+ * Returns 0 in case of success, -1 in case of failure.
+ *
+ */
+int
+virDomainPinVcpuFlags(virDomainPtr domain, unsigned int vcpu,
+                      unsigned char *cpumap, int maplen, unsigned int flags)
+{
+    virConnectPtr conn;
+
+    VIR_DOMAIN_DEBUG(domain, "vcpu=%u, cpumap=%p, maplen=%d, flags=%u",
+                     vcpu, cpumap, maplen, flags);
+
+    virResetLastError();
+
+    if (flags & ~(VIR_DOMAIN_VCPU_LIVE | VIR_DOMAIN_VCPU_CONFIG)) {
+        virLibDomainError(VIR_ERR_INVALID_ARG, __FUNCTION__);
+        virDispatchError(NULL);
+        return -1;
+    }
+
+    if (!VIR_IS_CONNECTED_DOMAIN(domain)) {
+        virLibDomainError(VIR_ERR_INVALID_DOMAIN, __FUNCTION__);
+        virDispatchError(NULL);
+        return -1;
+    }
+
+    if (domain->conn->flags & VIR_CONNECT_RO) {
+        virLibDomainError(VIR_ERR_OPERATION_DENIED, __FUNCTION__);
+        goto error;
+    }
+
+    if ((vcpu > 32000) || (cpumap == NULL) || (maplen < 1)) {
+        virLibDomainError(VIR_ERR_INVALID_ARG, __FUNCTION__);
+        goto error;
+    }
+
+    conn = domain->conn;
+
+    if (conn->driver->domainPinVcpuFlags) {
+        int ret;
+        ret = conn->driver->domainPinVcpuFlags (domain, vcpu, cpumap, maplen, flags);
+        if (ret < 0)
+            goto error;
+        return ret;
+    }
+
+    virLibConnError(VIR_ERR_NO_SUPPORT, __FUNCTION__);
+
+error:
+    virDispatchError(domain->conn);
+    return -1;
+
+}
+
+/**
  * virDomainGetVcpus:
  * @domain: pointer to domain object, or NULL for Domain0
  * @info: pointer to an array of virVcpuInfo structures (OUT)

Index: libvirt/src/libvirt_public.syms
===================================================================
--- libvirt.orig/src/libvirt_public.syms
+++ libvirt/src/libvirt_public.syms
@@ -442,6 +442,7 @@ LIBVIRT_0.9.2 {
     virDomainInjectNMI;
     virDomainScreenshot;
     virDomainSetSchedulerParametersFlags;
+    virDomainPinVcpuFlags;
} LIBVIRT_0.9.0;

# .... define new API here using predicted next version number ....
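Not part of the patch, but to illustrate the cpumap layout documented above: a minimal caller-side sketch, assuming a persistent guest named "guest0" (hypothetical) and the standard VIR_CPU_MAPLEN/VIR_USE_CPU helper macros, that pins vcpu 1 of an offline domain onto host CPUs 2 and 3 through the new flag; error handling is trimmed.

#include <stdio.h>
#include <stdlib.h>
#include <libvirt/libvirt.h>

int main(void)
{
    virConnectPtr conn = virConnectOpen("qemu:///system");
    virDomainPtr dom = conn ? virDomainLookupByName(conn, "guest0") : NULL; /* hypothetical name */
    virNodeInfo nodeinfo;

    if (!dom || virNodeGetInfo(conn, &nodeinfo) < 0) {
        fprintf(stderr, "setup failed\n");
        return EXIT_FAILURE;
    }

    /* One bit per host CPU; bytes in little-endian order as documented above. */
    int maplen = VIR_CPU_MAPLEN(VIR_NODEINFO_MAXCPUS(nodeinfo));
    unsigned char *cpumap = calloc(maplen, 1);
    if (!cpumap)
        return EXIT_FAILURE;
    VIR_USE_CPU(cpumap, 2);
    VIR_USE_CPU(cpumap, 3);

    /* VIR_DOMAIN_VCPU_CONFIG touches only the persistent definition,
     * so this also works while the domain is shut off. */
    if (virDomainPinVcpuFlags(dom, 1, cpumap, maplen, VIR_DOMAIN_VCPU_CONFIG) < 0)
        fprintf(stderr, "pinning failed\n");

    free(cpumap);
    virDomainFree(dom);
    virConnectClose(conn);
    return EXIT_SUCCESS;
}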

On 05/20/2011 04:08 AM, Taku Izumi wrote:
 /**
+ * virDomainPinVcpuFlags:
+ * @domain: pointer to domain object, or NULL for Domain0
+ * @vcpu: virtual CPU number
+ * @cpumap: pointer to a bit map of real CPUs (in 8-bit bytes) (IN)
+ *      Each bit set to 1 means that corresponding CPU is usable.
+ *      Bytes are stored in little-endian order: CPU0-7, 8-15...
+ *      In each byte, lowest CPU number is least significant bit.
+ * @maplen: number of bytes in cpumap, from 1 up to size of CPU map in
+ *      underlying virtualization system (Xen...).
+ *      If maplen < size, missing bytes are set to zero.
+ *      If maplen > size, failure code is returned.
+ * @flags: an OR'ed subset of virDomainVcpuFlags
+ *
+ * Dynamically change the real CPUs which can be allocated to a virtual CPU.
+ * This function requires privileged access to the hypervisor.
+ *
+ * @flags may include VIR_DOMAIN_VCPU_LIVE or VIR_DOMAIN_VCPU_CONFIG.
+ * Both flags may be set, but VIR_DOMAIN_VCPU_MAXIMUM cannot be set.
+ * If VIR_DOMAIN_VCPU_LIVE is set, the change affects a running domain
+ * and may fail if domain is not alive.
+ * If VIR_DOMAIN_VCPU_CONFIG is set, the change affects persistent state,
+ * and will fail for transient domains.
+ * If neither flag is specified (that is, @flags is VIR_DOMAIN_VCPU_CURRENT),
+ * then an inactive domain modifies persistent setup, while an active domain
+ * is hypervisor-dependent on whether just live or both live and persistent
+ * state is changed.
+ * Not all hypervisors can support all flag combinations.
I am really confused about what the default flag (VIR_DOMAIN_VCPU_CURRENT) actually means. It doesn't seem like I can ever know what it means from an API perspective (since its behavior is arbitrary depending on the hypervisor). Wouldn't it be better to require _either_ VIR_DOMAIN_VCPU_LIVE _or_ VIR_DOMAIN_VCPU_CONFIG to be set so that the intended behavior can be specified exactly?

Also, shouldn't these flags be named VIR_DOMAIN_PIN_VCPU_FLAGS_{LIVE|CONFIG} since they are meant only for the virDomainPinVcpuFlags() API?

-- 
Adam Litke
IBM Linux Technology Center

On 05/20/2011 02:54 PM, Adam Litke wrote:
I am really confused about what the default flag (VIR_DOMAIN_VCPU_CURRENT) actually means. It doesn't seem like I can ever know what it means from an API perspective (since its behavior is arbitrary depending on the hypervisor). Wouldn't it be better to require _either_ VIR_DOMAIN_VCPU_LIVE _or_ VIR_DOMAIN_VCPU_CONFIG to be set so that the intended behavior can be specified exactly?
Anywhere we use _CURRENT, it is supposed to mean _LIVE (if the VM is running) or _CONFIG (if the VM is persistent but offline).

Yes, explicitly specifying _LIVE, _CONFIG, or the combination of both, is probably better.

And in the past, we haven't always used those conventions; that is, there are some APIs where _CURRENT is non-zero, and the default (0) was unclear in its meaning. But all new APIs should use _CURRENT as 0, so that the default has clear semantics.

However, there are some hypervisors that cannot do just _LIVE on persistent domains - they can only do _LIVE|_CONFIG (for transient domains, though, it should be obvious that just _LIVE works, since _CONFIG cannot succeed on a transient domain).

-- 
Eric Blake   eblake@redhat.com    +1-801-349-2682
Libvirt virtualization library http://libvirt.org
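To make that convention concrete, a minimal caller-side sketch of the documented flag combinations (my illustration, not part of Taku's patches; the dom/cpumap/maplen parameters and the pin_examples() helper are placeholders):

#include <libvirt/libvirt.h>

/* Illustration of the flag semantics described above. */
static int pin_examples(virDomainPtr dom, unsigned char *cpumap, int maplen)
{
    /* Hot-plug change only: fails if the domain is not running. */
    if (virDomainPinVcpuFlags(dom, 1, cpumap, maplen,
                              VIR_DOMAIN_VCPU_LIVE) < 0)
        return -1;

    /* Persistent config only: fails for transient domains. */
    if (virDomainPinVcpuFlags(dom, 1, cpumap, maplen,
                              VIR_DOMAIN_VCPU_CONFIG) < 0)
        return -1;

    /* Both at once; some hypervisors can only do _LIVE together with
     * _CONFIG on persistent domains. */
    if (virDomainPinVcpuFlags(dom, 1, cpumap, maplen,
                              VIR_DOMAIN_VCPU_LIVE |
                              VIR_DOMAIN_VCPU_CONFIG) < 0)
        return -1;

    /* 0 == VIR_DOMAIN_VCPU_CURRENT: behaves like _LIVE for a running
     * domain and like _CONFIG for a persistent but inactive one;
     * whether a running persistent domain also updates its config is
     * hypervisor-dependent. */
    return virDomainPinVcpuFlags(dom, 1, cpumap, maplen,
                                 VIR_DOMAIN_VCPU_CURRENT);
}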

On 05/20/2011 05:17 PM, Eric Blake wrote:
On 05/20/2011 02:54 PM, Adam Litke wrote:
I am really confused about what the default flag (VIR_DOMAIN_VCPU_CURRENT) actually means. It doesn't seem like I can ever know what it means from an API perspective (since its behavior is arbitrary depending on the hypervisor). Wouldn't it be better to require _either_ VIR_DOMAIN_VCPU_LIVE _or_ VIR_DOMAIN_VCPU_CONFIG to be set so that the intended behavior can be specified exactly?
Anywhere we use _CURRENT, it is supposed to mean _LIVE (if the VM is running) or _CONFIG (if the VM is persistent but offline).
Yes, explicitly specifying _LIVE, _CONFIG, or the combination of both, is probably better.
And in the past, we haven't always used those conventions; that is, there are some APIs where _CURRENT is non-zero, and the default (0) was unclear in its meaning. But all new APIs should use _CURRENT as 0, so that the default has clear semantics.
However, there are some hypervisors that cannot do just _LIVE on persistent domains - they can only do _LIVE|_CONFIG (for transient domains, though, it should be obvious that just _LIVE works, since _CONFIG cannot succeed on a transient domain).
Thanks for your explanation, Eric. It's a lot clearer to me now and the API makes sense. Cleanups to the virsh documentation as you suggest will hopefully prevent future questions like mine.

-- 
Adam Litke
IBM Linux Technology Center

Thank you for commenting.
Also, shouldn't these flags be named VIR_DOMAIN_PIN_VCPU_FLAGS_{LIVE|CONFIG} since they are meant only for the virDomainPinVcpuFlags() API?
If VIR_DOMAIN_PIN_VCPU_XXX is preferred instead of VIR_DOMAIN_VCPU_XXX, I'll do so.

-- 
Best regards,
Taku Izumi

This patch implements the code to address the new API (virDomainPinVcpuFlags) in the qemu driver.

Signed-off-by: Taku Izumi <izumi.taku@jp.fujitsu.com>
---
 src/qemu/qemu_driver.c |   99 +++++++++++++++++++++++++++++++++++++------------
 1 file changed, 76 insertions(+), 23 deletions(-)

Index: libvirt/src/qemu/qemu_driver.c
===================================================================
--- libvirt.orig/src/qemu/qemu_driver.c
+++ libvirt/src/qemu/qemu_driver.c
@@ -2859,17 +2859,24 @@ qemudDomainSetVcpus(virDomainPtr dom, un
 
 static int
-qemudDomainPinVcpu(virDomainPtr dom,
-                   unsigned int vcpu,
-                   unsigned char *cpumap,
-                   int maplen) {
+qemudDomainPinVcpuFlags(virDomainPtr dom,
+                        unsigned int vcpu,
+                        unsigned char *cpumap,
+                        int maplen,
+                        unsigned int flags) {
+
     struct qemud_driver *driver = dom->conn->privateData;
     virDomainObjPtr vm;
+    virDomainDefPtr persistentDef = NULL;
     int maxcpu, hostcpus;
     virNodeInfo nodeinfo;
     int ret = -1;
+    bool isActive;
     qemuDomainObjPrivatePtr priv;
 
+    virCheckFlags(VIR_DOMAIN_VCPU_LIVE |
+                  VIR_DOMAIN_VCPU_CONFIG, -1);
+
     qemuDriverLock(driver);
     vm = virDomainFindByUUID(&driver->domains, dom->uuid);
     qemuDriverUnlock(driver);
@@ -2882,9 +2889,18 @@ qemudDomainPinVcpu(virDomainPtr dom,
         goto cleanup;
     }
 
-    if (!virDomainObjIsActive(vm)) {
-        qemuReportError(VIR_ERR_OPERATION_INVALID,
-                        "%s",_("cannot pin vcpus on an inactive domain"));
+    isActive = virDomainObjIsActive(vm);
+    if (flags == VIR_DOMAIN_VCPU_CURRENT) {
+        if (isActive)
+            flags = VIR_DOMAIN_VCPU_LIVE;
+        else
+            flags = VIR_DOMAIN_VCPU_CONFIG;
+    }
+
+    if (!isActive && (flags & VIR_DOMAIN_VCPU_LIVE)) {
+        qemuReportError(VIR_ERR_OPERATION_INVALID, "%s",
+                        _("a domain is inactive; can change only "
+                          "persistent config"));
         goto cleanup;
     }
 
@@ -2897,27 +2913,54 @@ qemudDomainPinVcpu(virDomainPtr dom,
         goto cleanup;
     }
 
-    if (nodeGetInfo(dom->conn, &nodeinfo) < 0)
-        goto cleanup;
+    if (flags & VIR_DOMAIN_VCPU_CONFIG) {
+        if (!vm->persistent) {
+            qemuReportError(VIR_ERR_OPERATION_INVALID, "%s",
+                            _("cannot change persistent config of a transient domain"));
+            goto cleanup;
+        }
+        if (!(persistentDef = virDomainObjGetPersistentDef(driver->caps, vm)))
+            goto cleanup;
+    }
 
-    hostcpus = VIR_NODEINFO_MAXCPUS(nodeinfo);
-    maxcpu = maplen * 8;
-    if (maxcpu > hostcpus)
-        maxcpu = hostcpus;
+    if (flags & VIR_DOMAIN_VCPU_LIVE) {
 
-    if (priv->vcpupids != NULL) {
-        if (virProcessInfoSetAffinity(priv->vcpupids[vcpu],
-                                      cpumap, maplen, maxcpu) < 0)
+        if (nodeGetInfo(dom->conn, &nodeinfo) < 0)
             goto cleanup;
-    } else {
-        qemuReportError(VIR_ERR_NO_SUPPORT,
-                        "%s", _("cpu affinity is not supported"));
-        goto cleanup;
+
+        hostcpus = VIR_NODEINFO_MAXCPUS(nodeinfo);
+        maxcpu = maplen * 8;
+        if (maxcpu > hostcpus)
+            maxcpu = hostcpus;
+
+        if (priv->vcpupids != NULL) {
+            if (virProcessInfoSetAffinity(priv->vcpupids[vcpu],
+                                          cpumap, maplen, maxcpu) < 0)
+                goto cleanup;
+        } else {
+            qemuReportError(VIR_ERR_NO_SUPPORT,
+                            "%s", _("cpu affinity is not supported"));
+            goto cleanup;
+        }
+
+        if (virDomainVcpupinAdd(vm->def, cpumap, maplen, vcpu) < 0) {
+            qemuReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+                            _("failed to update or add vcpupin xml of "
+                              "a running domain"));
+            goto cleanup;
+        }
+    }
 
-    if (virDomainVcpupinAdd(vm->def, cpumap, maplen, vcpu) < 0) {
-        qemuReportError(VIR_ERR_INTERNAL_ERROR,
-                        "%s", _("failed to update or add vcpupin xml"));
+    if (flags & VIR_DOMAIN_VCPU_CONFIG) {
+
+        if (virDomainVcpupinAdd(persistentDef, cpumap, maplen, vcpu) < 0) {
+            qemuReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+                            _("failed to update or add vcpupin xml of "
+                              "a persistent domain"));
+            goto cleanup;
+        }
+
+        ret = virDomainSaveConfig(driver->configDir, persistentDef);
         goto cleanup;
     }
 
@@ -2930,6 +2973,15 @@ cleanup:
 }
 
 static int
+qemudDomainPinVcpu(virDomainPtr dom,
+                   unsigned int vcpu,
+                   unsigned char *cpumap,
+                   int maplen) {
+    return qemudDomainPinVcpuFlags(dom, vcpu, cpumap, maplen,
+                                   VIR_DOMAIN_VCPU_LIVE);
+}
+
+static int
 qemudDomainGetVcpus(virDomainPtr dom,
                     virVcpuInfoPtr info,
                     int maxinfo,
@@ -7669,6 +7721,7 @@ static virDriver qemuDriver = {
     .domainSetVcpusFlags = qemudDomainSetVcpusFlags, /* 0.8.5 */
    .domainGetVcpusFlags = qemudDomainGetVcpusFlags, /* 0.8.5 */
     .domainPinVcpu = qemudDomainPinVcpu, /* 0.4.4 */
+    .domainPinVcpuFlags = qemudDomainPinVcpuFlags, /* 0.9.2 */
     .domainGetVcpus = qemudDomainGetVcpus, /* 0.4.4 */
     .domainGetMaxVcpus = qemudDomainGetMaxVcpus, /* 0.4.4 */
     .domainGetSecurityLabel = qemudDomainGetSecurityLabel, /* 0.6.1 */

On 05/20/2011 04:09 AM, Taku Izumi wrote:
--- libvirt.orig/src/qemu/qemu_driver.c
+++ libvirt/src/qemu/qemu_driver.c
@@ -2859,17 +2859,24 @@ qemudDomainSetVcpus(virDomainPtr dom, un

 static int
-qemudDomainPinVcpu(virDomainPtr dom,
-                   unsigned int vcpu,
-                   unsigned char *cpumap,
-                   int maplen) {
+qemudDomainPinVcpuFlags(virDomainPtr dom,
+                        unsigned int vcpu,
+                        unsigned char *cpumap,
+                        int maplen,
+                        unsigned int flags) {
+
     struct qemud_driver *driver = dom->conn->privateData;
     virDomainObjPtr vm;
+    virDomainDefPtr persistentDef = NULL;
     int maxcpu, hostcpus;
     virNodeInfo nodeinfo;
     int ret = -1;
+    bool isActive;
     qemuDomainObjPrivatePtr priv;

+    virCheckFlags(VIR_DOMAIN_VCPU_LIVE |
+                  VIR_DOMAIN_VCPU_CONFIG, -1);
Is this even possible since you already checked it at the top level?

-- 
Adam Litke
IBM Linux Technology Center

This patch implements the remote protocol to address the new API (virDomainPinVcpuFlags).

Signed-off-by: Taku Izumi <izumi.taku@jp.fujitsu.com>
---
 daemon/remote.c              |   42 ++++++++++++++++++++++++++++++++++++++++++
 src/remote/remote_driver.c   |    1 +
 src/remote/remote_protocol.x |   10 +++++++++-
 src/remote_protocol-structs  |    9 +++++++++
 4 files changed, 61 insertions(+), 1 deletion(-)

Index: libvirt/src/remote/remote_protocol.x
===================================================================
--- libvirt.orig/src/remote/remote_protocol.x
+++ libvirt/src/remote/remote_protocol.x
@@ -864,6 +864,13 @@ struct remote_domain_pin_vcpu_args {
     opaque cpumap<REMOTE_CPUMAP_MAX>;
 };
 
+struct remote_domain_pin_vcpu_flags_args {
+    remote_nonnull_domain dom;
+    int vcpu;
+    opaque cpumap<REMOTE_CPUMAP_MAX>;
+    unsigned int flags;
+};
+
 struct remote_domain_get_vcpus_args {
     remote_nonnull_domain dom;
     int maxinfo;
@@ -2291,7 +2298,8 @@ enum remote_procedure {
     REMOTE_PROC_DOMAIN_MIGRATE_PERFORM3 = 216, /* skipgen skipgen */
     REMOTE_PROC_DOMAIN_MIGRATE_FINISH3 = 217, /* skipgen skipgen */
     REMOTE_PROC_DOMAIN_MIGRATE_CONFIRM3 = 218, /* skipgen skipgen */
-    REMOTE_PROC_DOMAIN_SET_SCHEDULER_PARAMETERS_FLAGS = 219 /* skipgen skipgen */
+    REMOTE_PROC_DOMAIN_SET_SCHEDULER_PARAMETERS_FLAGS = 219, /* skipgen skipgen */
+    REMOTE_PROC_DOMAIN_PIN_VCPU_FLAGS = 220 /* skipgen autogen */
 
     /*
      * Notice how the entries are grouped in sets of 10 ?

Index: libvirt/src/remote_protocol-structs
===================================================================
--- libvirt.orig/src/remote_protocol-structs
+++ libvirt/src/remote_protocol-structs
@@ -548,6 +548,15 @@ struct remote_domain_pin_vcpu_args {
         char *                     cpumap_val;
     } cpumap;
 };
+struct remote_domain_pin_vcpu_flags_args {
+    remote_nonnull_domain      dom;
+    int                        vcpu;
+    struct {
+        u_int                  cpumap_len;
+        char *                 cpumap_val;
+    } cpumap;
+    u_int                      flags;
+};
 struct remote_domain_get_vcpus_args {
     remote_nonnull_domain      dom;
     int                        maxinfo;

Index: libvirt/src/remote/remote_driver.c
===================================================================
--- libvirt.orig/src/remote/remote_driver.c
+++ libvirt/src/remote/remote_driver.c
@@ -6791,6 +6791,7 @@ static virDriver remote_driver = {
     .domainSetVcpusFlags = remoteDomainSetVcpusFlags, /* 0.8.5 */
     .domainGetVcpusFlags = remoteDomainGetVcpusFlags, /* 0.8.5 */
     .domainPinVcpu = remoteDomainPinVcpu, /* 0.3.0 */
+    .domainPinVcpuFlags = remoteDomainPinVcpuFlags, /* 0.9.2 */
     .domainGetVcpus = remoteDomainGetVcpus, /* 0.3.0 */
     .domainGetMaxVcpus = remoteDomainGetMaxVcpus, /* 0.3.0 */
     .domainGetSecurityLabel = remoteDomainGetSecurityLabel, /* 0.6.1 */

Index: libvirt/daemon/remote.c
===================================================================
--- libvirt.orig/daemon/remote.c
+++ libvirt/daemon/remote.c
@@ -1327,6 +1327,48 @@ cleanup:
 }
 
 static int
+remoteDispatchDomainPinVcpuFlags(struct qemud_server *server ATTRIBUTE_UNUSED,
+                                 struct qemud_client *client ATTRIBUTE_UNUSED,
+                                 virConnectPtr conn,
+                                 remote_message_header *hdr ATTRIBUTE_UNUSED,
+                                 remote_error *rerr,
+                                 remote_domain_pin_vcpu_flags_args *args,
+                                 void *ret ATTRIBUTE_UNUSED)
+{
+    virDomainPtr dom = NULL;
+    int rv = -1;
+
+    if (!conn) {
+        virNetError(VIR_ERR_INTERNAL_ERROR, "%s", _("connection not open"));
+        goto cleanup;
+    }
+
+    if (!(dom = get_nonnull_domain(conn, args->dom)))
+        goto cleanup;
+
+    if (args->cpumap.cpumap_len > REMOTE_CPUMAP_MAX) {
+        virNetError(VIR_ERR_INTERNAL_ERROR, "%s", _("cpumap_len > REMOTE_CPUMAP_MAX"));
+        goto cleanup;
+    }
+
+    if (virDomainPinVcpuFlags(dom,
+                              args->vcpu,
+                              (unsigned char *) args->cpumap.cpumap_val,
+                              args->cpumap.cpumap_len,
+                              args->flags) < 0)
+        goto cleanup;
+
+    rv = 0;
+
+cleanup:
+    if (rv < 0)
+        remoteDispatchError(rerr);
+    if (dom)
+        virDomainFree(dom);
+    return rv;
+}
+
+static int
 remoteDispatchDomainSetMemoryParameters(struct qemud_server *server ATTRIBUTE_UNUSED,
                                         struct qemud_client *client

This patch adds the new options (--live, --config and --current) to the "virsh vcpupin" command. These options behave the same way as they do for "virsh setmem", "virsh setvcpus", and so on. When the --config option is specified, the command affects a persistent domain, while the --live option affects a running (live) domain. The --current option cannot be used together with --config or --live; when --current is specified, it affects the "current" domain state.

Signed-off-by: Taku Izumi <izumi.taku@jp.fujitsu.com>
---
 tools/virsh.c   |   33 +++++++++++++++++++++++++++++++--
 tools/virsh.pod |    8 +++++++-
 2 files changed, 38 insertions(+), 3 deletions(-)

Index: libvirt/tools/virsh.c
===================================================================
--- libvirt.orig/tools/virsh.c
+++ libvirt/tools/virsh.c
@@ -2757,6 +2757,9 @@ static const vshCmdOptDef opts_vcpupin[]
     {"domain", VSH_OT_DATA, VSH_OFLAG_REQ, N_("domain name, id or uuid")},
     {"vcpu", VSH_OT_INT, VSH_OFLAG_REQ, N_("vcpu number")},
     {"cpulist", VSH_OT_DATA, VSH_OFLAG_REQ, N_("host cpu number(s) (comma separated)")},
+    {"config", VSH_OT_BOOL, 0, N_("affect next boot")},
+    {"live", VSH_OT_BOOL, 0, N_("affect running domain")},
+    {"current", VSH_OT_BOOL, 0, N_("affect current domain")},
     {NULL, 0, 0, NULL}
 };
 
@@ -2773,6 +2776,26 @@ cmdVcpupin(vshControl *ctl, const vshCmd
     int cpumaplen;
     int i;
     enum { expect_num, expect_num_or_comma } state;
+    int config = vshCommandOptBool(cmd, "config");
+    int live = vshCommandOptBool(cmd, "live");
+    int current = vshCommandOptBool(cmd, "current");
+    int flags = 0;
+
+    if (current) {
+        if (live || config) {
+            vshError(ctl, "%s", _("--current must be specified exclusively"));
+            return false;
+        }
+        flags = VIR_DOMAIN_VCPU_CURRENT;
+    } else {
+        if (config)
+            flags |= VIR_DOMAIN_VCPU_CONFIG;
+        if (live)
+            flags |= VIR_DOMAIN_VCPU_LIVE;
+        /* neither option is specified */
+        if (!live && !config)
+            flags = -1;
+    }
 
     if (!vshConnectionUsability(ctl, ctl->conn))
         return false;
@@ -2869,8 +2892,14 @@ cmdVcpupin(vshControl *ctl, const vshCmd
         cpulist++;
     } while (cpulist);
 
-    if (virDomainPinVcpu(dom, vcpu, cpumap, cpumaplen) != 0) {
-        ret = false;
+    if (flags == -1) {
+        if (virDomainPinVcpu(dom, vcpu, cpumap, cpumaplen) != 0) {
+            ret = false;
+        }
+    } else {
+        if (virDomainPinVcpuFlags(dom, vcpu, cpumap, cpumaplen, flags) != 0) {
+            ret = false;
+        }
     }
 
     VIR_FREE(cpumap);

Index: libvirt/tools/virsh.pod
===================================================================
--- libvirt.orig/tools/virsh.pod
+++ libvirt/tools/virsh.pod
@@ -767,10 +767,16 @@ values; these two flags cannot both be s
 Returns basic information about the domain virtual CPUs, like the number of
 vCPUs, the running time, the affinity to physical processors.
 
-=item B<vcpupin> I<domain-id> I<vcpu> I<cpulist>
+=item B<vcpupin> I<domain-id> I<vcpu> I<cpulist> optional I<--live> I<--config>
+I<--current>
 
 Pin domain VCPUs to host physical CPUs. The I<vcpu> number must be provided
 and I<cpulist> is a comma separated list of physical CPU numbers.
+If I<--live> is specified, affect a running guest.
+If I<--config> is specified, affect the next boot of a persistent guest.
+If I<--current> is specified, affect the current guest state.
+Both I<--live> and I<--config> flags may be given, but I<--current> is exclusive.
+If no flag is specified, behavior is different depending on hypervisor.
 
 =item B<vncdisplay> I<domain-id>
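For reference, the intended shell usage with these new options would look like the following (the guest name "guest0" is only an example, not from the patch):

    # change only the persistent config; also works for an inactive domain
    virsh vcpupin guest0 1 2,3 --config

    # change the running instance and the saved config at the same time
    virsh vcpupin guest0 1 2,3 --live --config

    # --current: live for a running guest, config for an offline persistent guest
    virsh vcpupin guest0 1 2,3 --current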

On 05/20/2011 04:10 AM, Taku Izumi wrote:
--- libvirt.orig/tools/virsh.c
+++ libvirt/tools/virsh.c
@@ -2757,6 +2757,9 @@ static const vshCmdOptDef opts_vcpupin[]
     {"domain", VSH_OT_DATA, VSH_OFLAG_REQ, N_("domain name, id or uuid")},
     {"vcpu", VSH_OT_INT, VSH_OFLAG_REQ, N_("vcpu number")},
     {"cpulist", VSH_OT_DATA, VSH_OFLAG_REQ, N_("host cpu number(s) (comma separated)")},
+    {"config", VSH_OT_BOOL, 0, N_("affect next boot")},
+    {"live", VSH_OT_BOOL, 0, N_("affect running domain")},
+    {"current", VSH_OT_BOOL, 0, N_("affect current domain")},
I am probably just unenlightened, but I still don't understand the difference between 'current domain' and 'running domain'.
     {NULL, 0, 0, NULL}
 };
<snip>
--- libvirt.orig/tools/virsh.pod
+++ libvirt/tools/virsh.pod
@@ -767,10 +767,16 @@ values; these two flags cannot both be s
 Returns basic information about the domain virtual CPUs, like the number of
 vCPUs, the running time, the affinity to physical processors.
-=item B<vcpupin> I<domain-id> I<vcpu> I<cpulist>
+=item B<vcpupin> I<domain-id> I<vcpu> I<cpulist> optional I<--live> I<--config>
+I<--current>
 Pin domain VCPUs to host physical CPUs. The I<vcpu> number must be provided
 and I<cpulist> is a comma separated list of physical CPU numbers.
+If I<--live> is specified, affect a running guest.
+If I<--config> is specified, affect the next boot of a persistent guest.
+If I<--current> is specified, affect the current guest state.
+Both I<--live> and I<--config> flags may be given, but I<--current> is exclusive.
+If no flag is specified, behavior is different depending on hypervisor.
According to the API documentation, --current is the same as not specifying either flag. If that's correct, then both the default (no flags) and --current have the same behavior (which is hypervisor dependent). The --current switch doesn't even seem necessary.

-- 
Adam Litke
IBM Linux Technology Center

On 05/20/2011 03:11 PM, Adam Litke wrote:
On 05/20/2011 04:10 AM, Taku Izumi wrote:
--- libvirt.orig/tools/virsh.c
+++ libvirt/tools/virsh.c
@@ -2757,6 +2757,9 @@ static const vshCmdOptDef opts_vcpupin[]
     {"domain", VSH_OT_DATA, VSH_OFLAG_REQ, N_("domain name, id or uuid")},
     {"vcpu", VSH_OT_INT, VSH_OFLAG_REQ, N_("vcpu number")},
     {"cpulist", VSH_OT_DATA, VSH_OFLAG_REQ, N_("host cpu number(s) (comma separated)")},
+    {"config", VSH_OT_BOOL, 0, N_("affect next boot")},
+    {"live", VSH_OT_BOOL, 0, N_("affect running domain")},
+    {"current", VSH_OT_BOOL, 0, N_("affect current domain")},
I am probably just unenlightened, but I still don't understand the difference between 'current domain' and 'running domain'.
We could probably clean up the 'virsh help' and virsh.pod man page wording to be more explicit: --current affects the current state of the domain (--live if domain is running, --config otherwise). That is, for a running domain (including a transient domain), it is a hot-plug action that will not affect the next boot; and for an inactive domain, it affects all future boots.
According to the API documentation, --current is the same as not specifying either flag. If that's correct, then both the default (no flags) and --current have the same behavior (which is hypervisor dependent). The --current switch doesn't even seem necessary.
I'm not sure if it matters here, but it does matter for some other APIs, such as 'virsh schedinfo' using virDomainSetSchedulerParameters vs. virDomainSetSchedulerParametersFlags - lack of all three boolean options implies using the older API (which implies hypervisor-dependent flags); using any of the three boolean options implies using the newer API (where --current has an explicit meaning).

-- 
Eric Blake   eblake@redhat.com    +1-801-349-2682
Libvirt virtualization library http://libvirt.org