[libvirt] [PATCH 0/7] qemu: vcpu coldplug and ordering fixes

A few fixes for various issues connected to vcpu coldplug and vcpu order.

Peter Krempa (7):
  qemu: process: Fix off-by-one in vcpu order duplicate error message
  qemu: process: Don't use shifted indexes for vcpu order verification
  qemu: process: Enforce 'vcpu' order range to <1,maxvcpus>
  qemu: Fix coldplug of vcpus
  qemu: vcpu: Clear vcpu order information rather than making it invalid
  lib: Introduce VIR_DOMAIN_VCPU_HOTPLUGGABLE for virDomainSetVcpusFlags
  qemu: Allow making vcpus hotpluggable with virDomainSetVcpusFlags

 docs/formatdomain.html.in        |  3 +-
 include/libvirt/libvirt-domain.h |  1 +
 src/conf/domain_conf.c           | 10 ++++++
 src/conf/domain_conf.h           |  1 +
 src/libvirt_private.syms         |  1 +
 src/qemu/qemu_driver.c           | 76 +++++++++++++++++++++++++++++++++++++---
 src/qemu/qemu_process.c          | 14 +++++---
 tools/virsh-domain.c             |  7 ++++
 tools/virsh.pod                  |  7 +++-
 9 files changed, 108 insertions(+), 12 deletions(-)

--
2.10.0

The bitmap indexes for the order duplicate check are shifted to 0 since
vcpu order 0 is not allowed. The error message doesn't need such
treatment though.

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1370360
---
 src/qemu/qemu_process.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 1f56883..bd715b1 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -4791,7 +4791,7 @@ qemuProcessValidateHotpluggableVcpus(virDomainDefPtr def)
         if (vcpu->order != 0) {
             if (virBitmapIsBitSet(ordermap, vcpu->order - 1)) {
                 virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
-                               _("duplicate vcpu order '%u'"), vcpu->order - 1);
+                               _("duplicate vcpu order '%u'"), vcpu->order);
                 goto cleanup;
             }
--
2.10.0

On Wed, Sep 21, 2016 at 01:49:53PM +0200, Peter Krempa wrote:
The bitmap indexes for the order duplicate check are shifted to 0 since vcpu order 0 is not allowed. The error message doesn't need such treatment though.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1370360
---
 src/qemu/qemu_process.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
ACK and safe for freeze. Pavel

Allocate a bitmap that is one bit larger rather than shifting the
indexes back to zero.
---
 src/qemu/qemu_process.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index bd715b1..b40daee 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -4772,7 +4772,7 @@ qemuProcessValidateHotpluggableVcpus(virDomainDefPtr def)
     virBitmapPtr ordermap = NULL;
     int ret = -1;
 
-    if (!(ordermap = virBitmapNew(maxvcpus)))
+    if (!(ordermap = virBitmapNew(maxvcpus + 1)))
         goto cleanup;
 
     /* validate:
@@ -4789,13 +4789,13 @@ qemuProcessValidateHotpluggableVcpus(virDomainDefPtr def)
             continue;
 
         if (vcpu->order != 0) {
-            if (virBitmapIsBitSet(ordermap, vcpu->order - 1)) {
+            if (virBitmapIsBitSet(ordermap, vcpu->order)) {
                 virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
                                _("duplicate vcpu order '%u'"), vcpu->order);
                 goto cleanup;
             }
 
-            ignore_value(virBitmapSetBit(ordermap, vcpu->order - 1));
+            ignore_value(virBitmapSetBit(ordermap, vcpu->order));
         }
--
2.10.0

On Wed, Sep 21, 2016 at 01:49:54PM +0200, Peter Krempa wrote:
Allocate a bitmap that is one bit larger rather than shifting the
indexes back to zero.
---
 src/qemu/qemu_process.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
ACK and safe for freeze. Pavel

The current code that validates duplicate vcpu order would not work
properly if the order exceeded def->maxvcpus. Limit the order to the
<1,maxvcpus> interval.
---
 src/qemu/qemu_process.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index b40daee..7007fd9 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -4795,10 +4795,14 @@ qemuProcessValidateHotpluggableVcpus(virDomainDefPtr def)
                 goto cleanup;
             }
 
-            ignore_value(virBitmapSetBit(ordermap, vcpu->order));
+            if (virBitmapSetBit(ordermap, vcpu->order)) {
+                virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
+                               _("vcpu order '%u' exceeds vcpu count"),
+                               vcpu->order);
+                goto cleanup;
+            }
         }
 
-
         for (j = i + 1; j < (i + vcpupriv->vcpus); j++) {
             subvcpu = virDomainDefGetVcpu(def, j);
 
             if (subvcpu->hotpluggable != vcpu->hotpluggable ||
--
2.10.0
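[Editor's illustration] The three patches above combined give the following
shape to the order validation: a bitmap of maxvcpus + 1 bits indexed directly
by the 1-based order value, with the failed virBitmapSetBit() doubling as the
range check. A minimal standalone sketch of that logic, using a plain bool
array in place of virBitmap (the names and the fixed 64-vcpu limit are
simplifications, not libvirt code):

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    /* Validate vcpu 'order' values the way the fixed
     * qemuProcessValidateHotpluggableVcpus() does: order 0 means "unset",
     * valid orders are 1..maxvcpus, duplicates are rejected. */
    static int
    validateVcpuOrder(const unsigned int *order, size_t maxvcpus)
    {
        bool ordermap[64 + 1];          /* stand-in for virBitmapNew(maxvcpus + 1) */
        size_t i;

        if (maxvcpus > 64)              /* sketch-only limit of the fixed array */
            return -1;

        memset(ordermap, 0, sizeof(ordermap));

        for (i = 0; i < maxvcpus; i++) {
            if (order[i] == 0)          /* no ordering info for this vcpu */
                continue;

            if (order[i] > maxvcpus) {  /* mimics virBitmapSetBit() failing */
                fprintf(stderr, "vcpu order '%u' exceeds vcpu count\n", order[i]);
                return -1;
            }

            if (ordermap[order[i]]) {   /* duplicate: report the unshifted value */
                fprintf(stderr, "duplicate vcpu order '%u'\n", order[i]);
                return -1;
            }

            ordermap[order[i]] = true;
        }

        return 0;
    }

    int main(void)
    {
        unsigned int good[] = { 1, 2, 3, 4 };
        unsigned int dup[]  = { 1, 2, 2, 4 };   /* rejected: duplicate order 2 */
        unsigned int high[] = { 1, 2, 3, 9 };   /* rejected: 9 > maxvcpus (4) */

        printf("good: %d\n", validateVcpuOrder(good, 4));
        printf("dup:  %d\n", validateVcpuOrder(dup, 4));
        printf("high: %d\n", validateVcpuOrder(high, 4));
        return 0;
    }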

On Wed, Sep 21, 2016 at 01:49:55PM +0200, Peter Krempa wrote:
The current code that validates duplicate vcpu order would not work
properly if the order exceeded def->maxvcpus. Limit the order to the
<1,maxvcpus> interval.
---
 src/qemu/qemu_process.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)
ACK and safe for freeze. Pavel

virDomainDefSetVcpus was not designed to handle coldplug of vcpus now
that we can set the state of vcpus individually. Introduce
qemuDomainSetVcpusConfig that properly handles state changes of vcpus
when coldplugging so that invalid configurations are not created.

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1375939
---
 src/qemu/qemu_driver.c | 55 ++++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 53 insertions(+), 2 deletions(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index e29180d..336673d 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -4898,6 +4898,58 @@ qemuDomainSetVcpusLive(virQEMUDriverPtr driver,
 }
 
 
+/**
+ * qemuDomainSetVcpusConfig:
+ * @def: config/offline definition of a domain
+ * @nvcpus: target vcpu count
+ *
+ * Properly handle cold(un)plug of vcpus:
+ * - plug in inactive vcpus/unplug active rather than rewriting state
+ * - fix hotpluggable state
+ */
+static void
+qemuDomainSetVcpusConfig(virDomainDefPtr def,
+                         unsigned int nvcpus)
+{
+    virDomainVcpuDefPtr vcpu;
+    size_t curvcpus = virDomainDefGetVcpus(def);
+    size_t maxvcpus = virDomainDefGetVcpusMax(def);
+    size_t i;
+
+
+    if (curvcpus == nvcpus)
+        return;
+
+    if (curvcpus < nvcpus) {
+        for (i = 0; i < maxvcpus; i++) {
+            vcpu = virDomainDefGetVcpu(def, i);
+
+            if (!vcpu || vcpu->online)
+                continue;
+
+            vcpu->online = true;
+            vcpu->hotpluggable = VIR_TRISTATE_BOOL_NO;
+
+            if (++curvcpus == nvcpus)
+                break;
+        }
+    } else {
+        for (i = maxvcpus; i != 0; i--) {
+            vcpu = virDomainDefGetVcpu(def, i - 1);
+
+            if (!vcpu || !vcpu->online)
+                continue;
+
+            vcpu->online = false;
+            vcpu->hotpluggable = VIR_TRISTATE_BOOL_YES;
+
+            if (--curvcpus == nvcpus)
+                break;
+        }
+    }
+}
+
+
 static int
 qemuDomainSetVcpusInternal(virQEMUDriverPtr driver,
                            virDomainObjPtr vm,
@@ -4928,8 +4980,7 @@ qemuDomainSetVcpusInternal(virQEMUDriverPtr driver,
         goto cleanup;
 
     if (persistentDef) {
-        if (virDomainDefSetVcpus(persistentDef, nvcpus) < 0)
-            goto cleanup;
+        qemuDomainSetVcpusConfig(persistentDef, nvcpus);
 
         if (virDomainSaveConfig(cfg->configDir, driver->caps, persistentDef) < 0)
             goto cleanup;
--
2.10.0
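[Editor's illustration] The core of the new function is the ordering of the
state changes: when growing, the lowest-indexed offline vcpus are enabled and
marked non-hotpluggable (boot cpus); when shrinking, the highest-indexed
online vcpus are disabled and marked hotpluggable so they can be re-added
later. A simplified, self-contained sketch of that algorithm (the struct and
helper are stand-ins, not the libvirt types):

    #include <stdbool.h>
    #include <stdio.h>

    /* Simplified stand-in for virDomainVcpuDef. */
    typedef struct {
        bool online;
        bool hotpluggable;
    } Vcpu;

    /* Mirror of the coldplug logic: grow from the bottom, shrink from the top. */
    static void
    setVcpusConfig(Vcpu *vcpus, size_t maxvcpus, size_t nvcpus)
    {
        size_t curvcpus = 0;
        size_t i;

        for (i = 0; i < maxvcpus; i++)
            if (vcpus[i].online)
                curvcpus++;

        if (curvcpus < nvcpus) {
            for (i = 0; i < maxvcpus && curvcpus < nvcpus; i++) {
                if (vcpus[i].online)
                    continue;
                vcpus[i].online = true;
                vcpus[i].hotpluggable = false;   /* behaves like a boot cpu */
                curvcpus++;
            }
        } else {
            for (i = maxvcpus; i != 0 && curvcpus > nvcpus; i--) {
                if (!vcpus[i - 1].online)
                    continue;
                vcpus[i - 1].online = false;
                vcpus[i - 1].hotpluggable = true; /* slot can be re-plugged */
                curvcpus--;
            }
        }
    }

    int main(void)
    {
        Vcpu vcpus[8] = { {true, false}, {true, false} };  /* 2 of 8 online */
        size_t i;

        setVcpusConfig(vcpus, 8, 4);   /* coldplug up to 4 vcpus */
        for (i = 0; i < 8; i++)
            printf("vcpu %zu: online=%d hotpluggable=%d\n",
                   i, vcpus[i].online, vcpus[i].hotpluggable);
        return 0;
    }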

On Wed, Sep 21, 2016 at 01:49:56PM +0200, Peter Krempa wrote:
virDomainDefSetVcpus was not designed to handle coldplug of vcpus now that we can set state of vcpus individually.
Introduce qemuDomainSetVcpusConfig that properly handles state changes of vcpus when coldplugging so that invalid configurations are not created.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1375939
---
 src/qemu/qemu_driver.c | 55 ++++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 53 insertions(+), 2 deletions(-)
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index e29180d..336673d 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -4898,6 +4898,58 @@ qemuDomainSetVcpusLive(virQEMUDriverPtr driver,
 }
+/**
+ * qemuDomainSetVcpusConfig:
+ * @def: config/offline definition of a domain
+ * @nvcpus: target vcpu count
+ *
+ * Properly handle cold(un)plug of vcpus:
+ * - plug in inactive vcpus/unplug active rather than rewriting state
+ * - fix hotpluggable state
+ */
+static void
+qemuDomainSetVcpusConfig(virDomainDefPtr def,
+                         unsigned int nvcpus)
+{
+    virDomainVcpuDefPtr vcpu;
+    size_t curvcpus = virDomainDefGetVcpus(def);
+    size_t maxvcpus = virDomainDefGetVcpusMax(def);
+    size_t i;
+
+
+    if (curvcpus == nvcpus)
+        return;
+
+    if (curvcpus < nvcpus) {
+        for (i = 0; i < maxvcpus; i++) {
+            vcpu = virDomainDefGetVcpu(def, i);
+
+            if (!vcpu || vcpu->online)
+                continue;
vcpu == NULL should not happen because virDomainDefGetVcpu() checks if (i >= maxvcpus)
+
+            vcpu->online = true;
+            vcpu->hotpluggable = VIR_TRISTATE_BOOL_NO;
+
+            if (++curvcpus == nvcpus)
+                break;
+        }
+    } else {
+        for (i = maxvcpus; i != 0; i--) {
+            vcpu = virDomainDefGetVcpu(def, i - 1);
+
+            if (!vcpu || !vcpu->online)
+                continue;
+
+            vcpu->online = false;
+            vcpu->hotpluggable = VIR_TRISTATE_BOOL_YES;
+
+            if (--curvcpus == nvcpus)
+                break;
+        }
+    }
+}
ACK and safe for freeze. Pavel

On Fri, Sep 30, 2016 at 09:52:08 +0200, Pavel Hrdina wrote:
On Wed, Sep 21, 2016 at 01:49:56PM +0200, Peter Krempa wrote:
virDomainDefSetVcpus was not designed to handle coldplug of vcpus now that we can set state of vcpus individually.
Introduce qemuDomainSetVcpusConfig that properly handles state changes of vcpus when coldplugging so that invalid configurations are not created.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1375939
---
 src/qemu/qemu_driver.c | 55 ++++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 53 insertions(+), 2 deletions(-)
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index e29180d..336673d 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -4898,6 +4898,58 @@ qemuDomainSetVcpusLive(virQEMUDriverPtr driver,
 }
+/**
+ * qemuDomainSetVcpusConfig:
+ * @def: config/offline definition of a domain
+ * @nvcpus: target vcpu count
+ *
+ * Properly handle cold(un)plug of vcpus:
+ * - plug in inactive vcpus/unplug active rather than rewriting state
+ * - fix hotpluggable state
+ */
+static void
+qemuDomainSetVcpusConfig(virDomainDefPtr def,
+                         unsigned int nvcpus)
+{
+    virDomainVcpuDefPtr vcpu;
+    size_t curvcpus = virDomainDefGetVcpus(def);
+    size_t maxvcpus = virDomainDefGetVcpusMax(def);
+    size_t i;
+
+
+    if (curvcpus == nvcpus)
+        return;
+
+    if (curvcpus < nvcpus) {
+        for (i = 0; i < maxvcpus; i++) {
+            vcpu = virDomainDefGetVcpu(def, i);
+
+            if (!vcpu || vcpu->online)
+                continue;
vcpu == NULL should not happen because virDomainDefGetVcpu() checks if (i >= maxvcpus)
Coverity and a few versions of GCC are grumpy about that though, since their
static analysis sucks. See:

05f89657eed52550050f9308bb7cb8d56dde9cd0
a6ab72a9c3ad475a544ffd53a782e46a02437006

Certain operations may make the vcpu order information invalid. Since
the order is primarily used to ensure migration compatibility and has
basically no other user benefits, clear the order prior to certain
operations and document that it may be cleared.

All the operations that would clear the order can still be properly
executed by defining a new domain configuration rather than using the
helper APIs.

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1370357
---
 docs/formatdomain.html.in |  3 ++-
 src/conf/domain_conf.c    | 10 ++++++++++
 src/conf/domain_conf.h    |  1 +
 src/libvirt_private.syms  |  1 +
 src/qemu/qemu_driver.c    |  5 +++++
 5 files changed, 19 insertions(+), 1 deletion(-)

diff --git a/docs/formatdomain.html.in b/docs/formatdomain.html.in
index f48a4d8..6b9377d 100644
--- a/docs/formatdomain.html.in
+++ b/docs/formatdomain.html.in
@@ -569,7 +569,8 @@
         the order may be duplicated across all vcpus that need to be
         enabled at once. Specifying order is not necessary, vcpus are then
         added in an arbitrary order. If order info is used, it must be used for
-        all online vcpus.
+        all online vcpus. Hypervisors may clear or update ordering information
+        during certain operations to assure valid configuration.
         Note that hypervisors may create hotpluggable vcpus differently from
         boot vcpus thus special initialization may be necessary.
diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index a70be31..478cec8 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -25504,3 +25504,13 @@ virDomainGetBlkioParametersAssignFromDef(virDomainDefPtr def,
     virBufferFreeAndReset(&buf);
     return -1;
 }
+
+
+void
+virDomainDefVcpuOrderClear(virDomainDefPtr def)
+{
+    size_t i;
+
+    for (i = 0; i < def->maxvcpus; i++)
+        def->vcpus[i]->order = 0;
+}
diff --git a/src/conf/domain_conf.h b/src/conf/domain_conf.h
index f43fcc8..869cce5 100644
--- a/src/conf/domain_conf.h
+++ b/src/conf/domain_conf.h
@@ -2474,6 +2474,7 @@ unsigned int virDomainDefGetVcpus(const virDomainDef *def);
 virBitmapPtr virDomainDefGetOnlineVcpumap(const virDomainDef *def);
 virDomainVcpuDefPtr virDomainDefGetVcpu(virDomainDefPtr def, unsigned int vcpu)
     ATTRIBUTE_RETURN_CHECK;
+void virDomainDefVcpuOrderClear(virDomainDefPtr def);
 
 virDomainObjPtr virDomainObjNew(virDomainXMLOptionPtr caps)
     ATTRIBUTE_NONNULL(1);
diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms
index 562a3d9..690319e 100644
--- a/src/libvirt_private.syms
+++ b/src/libvirt_private.syms
@@ -254,6 +254,7 @@
 virDomainDefSetMemoryTotal;
 virDomainDefSetVcpus;
 virDomainDefSetVcpusMax;
 virDomainDefValidate;
+virDomainDefVcpuOrderClear;
 virDomainDeleteConfig;
 virDomainDeviceAddressIsValid;
 virDomainDeviceAddressTypeToString;
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 336673d..357be4e 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -4742,6 +4742,9 @@ qemuDomainSetVcpusMax(virQEMUDriverPtr driver,
         }
     }
 
+    /* ordering information may become invalid, thus clear it */
+    virDomainDefVcpuOrderClear(persistentDef);
+
     if (virDomainDefSetVcpusMax(persistentDef, nvcpus, driver->xmlopt) < 0)
         goto cleanup;
 
@@ -4916,6 +4919,8 @@ qemuDomainSetVcpusConfig(virDomainDefPtr def,
     size_t maxvcpus = virDomainDefGetVcpusMax(def);
     size_t i;
 
+    /* ordering information may become invalid, thus clear it */
+    virDomainDefVcpuOrderClear(def);
 
     if (curvcpus == nvcpus)
         return;
--
2.10.0

On Wed, Sep 21, 2016 at 01:49:57PM +0200, Peter Krempa wrote:
Certain operations may make the vcpu order information invalid. Since the order is primarily used to ensure migration compatibility and has basically no other user benefits, clear the order prior to certain operations and document that it may be cleared.
All the operations that would clear the order can still be properly executed by defining a new domain configuration rather than using the helper APIs.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1370357
---
ACK and safe for freeze. Pavel

For compatibility reasons virDomainSetVcpus needs to add vcpus as non
hotpluggable which means that the users whill not be able to unplug it
after the VM has started. Add a flag that will allow to tell the API
that the unpluggable vcpus are okay.
---
 include/libvirt/libvirt-domain.h | 1 +
 tools/virsh-domain.c             | 7 +++++++
 tools/virsh.pod                  | 7 ++++++-
 3 files changed, 14 insertions(+), 1 deletion(-)

diff --git a/include/libvirt/libvirt-domain.h b/include/libvirt/libvirt-domain.h
index f1c35d9..385b846 100644
--- a/include/libvirt/libvirt-domain.h
+++ b/include/libvirt/libvirt-domain.h
@@ -1707,6 +1707,7 @@ typedef enum {
     /* Additionally, these flags may be bitwise-OR'd in. */
     VIR_DOMAIN_VCPU_MAXIMUM = (1 << 2), /* Max rather than current count */
     VIR_DOMAIN_VCPU_GUEST = (1 << 3), /* Modify state of the cpu in the guest */
+    VIR_DOMAIN_VCPU_HOTPLUGGABLE = (1 << 4), /* Make vcpus added hot(un)pluggable */
 } virDomainVcpuFlags;
 
 int                     virDomainSetVcpus       (virDomainPtr domain,
diff --git a/tools/virsh-domain.c b/tools/virsh-domain.c
index 3829b17..2ce0a06 100644
--- a/tools/virsh-domain.c
+++ b/tools/virsh-domain.c
@@ -6722,6 +6722,10 @@ static const vshCmdOptDef opts_setvcpus[] = {
      .type = VSH_OT_BOOL,
      .help = N_("modify cpu state in the guest")
     },
+    {.name = "hotpluggable",
+     .type = VSH_OT_BOOL,
+     .help = N_("make added vcpus hot(un)pluggable")
+    },
     {.name = NULL}
 };
 
@@ -6736,6 +6740,7 @@ cmdSetvcpus(vshControl *ctl, const vshCmd *cmd)
     bool live = vshCommandOptBool(cmd, "live");
     bool current = vshCommandOptBool(cmd, "current");
     bool guest = vshCommandOptBool(cmd, "guest");
+    bool hotpluggable = vshCommandOptBool(cmd, "hotpluggable");
     unsigned int flags = VIR_DOMAIN_AFFECT_CURRENT;
 
     VSH_EXCLUSIVE_OPTIONS_VAR(current, live);
@@ -6752,6 +6757,8 @@ cmdSetvcpus(vshControl *ctl, const vshCmd *cmd)
         flags |= VIR_DOMAIN_VCPU_GUEST;
     if (maximum)
         flags |= VIR_DOMAIN_VCPU_MAXIMUM;
+    if (hotpluggable)
+        flags |= VIR_DOMAIN_VCPU_HOTPLUGGABLE;
 
     if (!(dom = virshCommandOptDomain(ctl, cmd, NULL)))
         return false;
diff --git a/tools/virsh.pod b/tools/virsh.pod
index 49abda9..cbb237a 100644
--- a/tools/virsh.pod
+++ b/tools/virsh.pod
@@ -2316,7 +2316,7 @@ exclusive. If no flag is specified, behavior is different depending on
 hypervisor.
 
 =item B<setvcpus> I<domain> I<count> [I<--maximum>] [[I<--config>]
-[I<--live>] | [I<--current>]] [I<--guest>]
+[I<--live>] | [I<--current>]] [I<--guest>] [I<--hotpluggable>]
 
 Change the number of virtual CPUs active in a guest domain. By default,
 this command works on active guest domains. To change the settings for an
@@ -2348,6 +2348,11 @@ If I<--guest> is specified, then the count of cpus is modified in the guest
 instead of the hypervisor. This flag is usable only for live domains
 and may require guest agent to be configured in the guest.
 
+To allow adding vcpus to persistent definitions that can be later hotunplugged
+after the domain is booted it is necessary to specify the I<--hotpluggable>
+flag. Vcpus added to live domains supporting vcpu unplug are automatically
+marked as hotpluggable.
+
 The I<--maximum> flag controls the maximum number of virtual cpus that can
 be hot-plugged the next time the domain is booted. As such, it must only be
 used with the I<--config> flag, and not with the I<--live> or the I<--current>
--
2.10.0
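[Editor's illustration] A minimal client-side sketch of how the new flag would
be used through the public API once the qemu driver implements it in the final
patch of the series; the connection URI is the usual qemu one and "demo-guest"
is a made-up domain name:

    #include <stdio.h>
    #include <libvirt/libvirt.h>

    int main(void)
    {
        virConnectPtr conn;
        virDomainPtr dom = NULL;
        int ret = 1;

        if (!(conn = virConnectOpen("qemu:///system")))
            return 1;

        /* "demo-guest" is a placeholder domain name. */
        if (!(dom = virDomainLookupByName(conn, "demo-guest")))
            goto cleanup;

        /* Coldplug up to 4 vcpus and mark the newly added ones as
         * hotpluggable; equivalent to:
         *   virsh setvcpus demo-guest 4 --config --hotpluggable */
        if (virDomainSetVcpusFlags(dom, 4,
                                   VIR_DOMAIN_AFFECT_CONFIG |
                                   VIR_DOMAIN_VCPU_HOTPLUGGABLE) < 0)
            goto cleanup;

        ret = 0;

     cleanup:
        if (dom)
            virDomainFree(dom);
        virConnectClose(conn);
        return ret;
    }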

On Wed, Sep 21, 2016 at 01:49:58PM +0200, Peter Krempa wrote:
For compatibility reasons virDomainSetVcpus needs to add vcpus as non hotpluggable which means that the users whill not be able to unplug it
s/whill/will/
after the VM has started. Add a flag that will allow to tell the API
that the unpluggable vcpus are okay.
---
ACK Pavel

Implement support for VIR_DOMAIN_VCPU_HOTPLUGGABLE so that users can
choose to make vcpus added by the API removable.
---
 src/qemu/qemu_driver.c | 22 ++++++++++++++++------
 1 file changed, 16 insertions(+), 6 deletions(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 357be4e..8453628 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -4912,7 +4912,8 @@ qemuDomainSetVcpusLive(virQEMUDriverPtr driver,
  */
 static void
 qemuDomainSetVcpusConfig(virDomainDefPtr def,
-                         unsigned int nvcpus)
+                         unsigned int nvcpus,
+                         bool hotpluggable)
 {
     virDomainVcpuDefPtr vcpu;
     size_t curvcpus = virDomainDefGetVcpus(def);
@@ -4933,7 +4934,12 @@ qemuDomainSetVcpusConfig(virDomainDefPtr def,
                 continue;
 
             vcpu->online = true;
-            vcpu->hotpluggable = VIR_TRISTATE_BOOL_NO;
+            if (hotpluggable) {
+                vcpu->hotpluggable = VIR_TRISTATE_BOOL_YES;
+                def->individualvcpus = true;
+            } else {
+                vcpu->hotpluggable = VIR_TRISTATE_BOOL_NO;
+            }
 
             if (++curvcpus == nvcpus)
                 break;
@@ -4960,7 +4966,8 @@ qemuDomainSetVcpusInternal(virQEMUDriverPtr driver,
                            virDomainObjPtr vm,
                            virDomainDefPtr def,
                            virDomainDefPtr persistentDef,
-                           unsigned int nvcpus)
+                           unsigned int nvcpus,
+                           bool hotpluggable)
 {
     virQEMUDriverConfigPtr cfg = virQEMUDriverGetConfig(driver);
     int ret = -1;
@@ -4985,7 +4992,7 @@ qemuDomainSetVcpusInternal(virQEMUDriverPtr driver,
         goto cleanup;
 
     if (persistentDef) {
-        qemuDomainSetVcpusConfig(persistentDef, nvcpus);
+        qemuDomainSetVcpusConfig(persistentDef, nvcpus, hotpluggable);
 
         if (virDomainSaveConfig(cfg->configDir, driver->caps, persistentDef) < 0)
             goto cleanup;
@@ -5008,12 +5015,14 @@ qemuDomainSetVcpusFlags(virDomainPtr dom,
     virDomainObjPtr vm = NULL;
     virDomainDefPtr def;
     virDomainDefPtr persistentDef;
+    bool hotpluggable = !!(flags & VIR_DOMAIN_VCPU_HOTPLUGGABLE);
     int ret = -1;
 
     virCheckFlags(VIR_DOMAIN_AFFECT_LIVE |
                   VIR_DOMAIN_AFFECT_CONFIG |
                   VIR_DOMAIN_VCPU_MAXIMUM |
-                  VIR_DOMAIN_VCPU_GUEST, -1);
+                  VIR_DOMAIN_VCPU_GUEST |
+                  VIR_DOMAIN_VCPU_HOTPLUGGABLE, -1);
 
     if (!(vm = qemuDomObjFromDomain(dom)))
        goto cleanup;
@@ -5032,7 +5041,8 @@ qemuDomainSetVcpusFlags(virDomainPtr dom,
     else if (flags & VIR_DOMAIN_VCPU_MAXIMUM)
         ret = qemuDomainSetVcpusMax(driver, def, persistentDef, nvcpus);
     else
-        ret = qemuDomainSetVcpusInternal(driver, vm, def, persistentDef, nvcpus);
+        ret = qemuDomainSetVcpusInternal(driver, vm, def, persistentDef,
+                                         nvcpus, hotpluggable);
 
  endjob:
     qemuDomainObjEndJob(driver, vm);
--
2.10.0

On 09/21/2016 05:19 PM, Peter Krempa wrote:
Implement support for VIR_DOMAIN_VCPU_HOTPLUGGABLE so that users can
choose to make vcpus added by the API removable.
---
 src/qemu/qemu_driver.c | 22 ++++++++++++++++------
 1 file changed, 16 insertions(+), 6 deletions(-)
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 357be4e..8453628 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -4912,7 +4912,8 @@ qemuDomainSetVcpusLive(virQEMUDriverPtr driver,
  */
 static void
 qemuDomainSetVcpusConfig(virDomainDefPtr def,
-                         unsigned int nvcpus)
+                         unsigned int nvcpus,
+                         bool hotpluggable)
 {
     virDomainVcpuDefPtr vcpu;

Here, at the "if (curvcpus == nvcpus) return;" check, we still need to
allow the case where someone wants to switch from hotpluggable = yes to
no, or vice versa.
     size_t curvcpus = virDomainDefGetVcpus(def);
@@ -4933,7 +4934,12 @@ qemuDomainSetVcpusConfig(virDomainDefPtr def,
                 continue;
             vcpu->online = true;
-            vcpu->hotpluggable = VIR_TRISTATE_BOOL_NO;
+            if (hotpluggable) {
+                vcpu->hotpluggable = VIR_TRISTATE_BOOL_YES;
+                def->individualvcpus = true;
+            } else {
+                vcpu->hotpluggable = VIR_TRISTATE_BOOL_NO;
+            }
             if (++curvcpus == nvcpus)
                 break;
Can we add a check here that the config is valid on PPC when a topology
is given in the XML, i.e. that (curvcpus % threads_per_core == 0)?

Otherwise, with

  virsh setvcpus rhel71 13 --config --hotpluggable

for a guest with topology <topology sockets='8' cores='2' threads='4'/>
we would see:

2016-09-29 10:12:05.929+0000: 1137: error : qemuProcessValidateHotpluggableVcpus:4829 : unsupported configuration: vcpus '12' and '13' are in the same hotplug group but differ in configuration

OR

Even when doing a live setvcpus to a number not amounting to complete
cores, we have checks leading to a sensible error (error: unsupported
configuration: target vm vcpu granularity does not allow the desired
vcpu count). So maybe we can add the same check in the --config case
too, to bring consistency.

Otherwise the series looks good with the above two cases addressed. ACK

Thanks,
Shivaprasad
@@ -4960,7 +4966,8 @@ qemuDomainSetVcpusInternal(virQEMUDriverPtr driver,
                            virDomainObjPtr vm,
                            virDomainDefPtr def,
                            virDomainDefPtr persistentDef,
-                           unsigned int nvcpus)
+                           unsigned int nvcpus,
+                           bool hotpluggable)
 {
     virQEMUDriverConfigPtr cfg = virQEMUDriverGetConfig(driver);
     int ret = -1;
@@ -4985,7 +4992,7 @@ qemuDomainSetVcpusInternal(virQEMUDriverPtr driver,
         goto cleanup;
     if (persistentDef) {
-        qemuDomainSetVcpusConfig(persistentDef, nvcpus);
+        qemuDomainSetVcpusConfig(persistentDef, nvcpus, hotpluggable);
         if (virDomainSaveConfig(cfg->configDir, driver->caps, persistentDef) < 0)
             goto cleanup;
@@ -5008,12 +5015,14 @@ qemuDomainSetVcpusFlags(virDomainPtr dom,
     virDomainObjPtr vm = NULL;
     virDomainDefPtr def;
     virDomainDefPtr persistentDef;
+    bool hotpluggable = !!(flags & VIR_DOMAIN_VCPU_HOTPLUGGABLE);
     int ret = -1;
     virCheckFlags(VIR_DOMAIN_AFFECT_LIVE |
                   VIR_DOMAIN_AFFECT_CONFIG |
                   VIR_DOMAIN_VCPU_MAXIMUM |
-                  VIR_DOMAIN_VCPU_GUEST, -1);
+                  VIR_DOMAIN_VCPU_GUEST |
+                  VIR_DOMAIN_VCPU_HOTPLUGGABLE, -1);
     if (!(vm = qemuDomObjFromDomain(dom)))
         goto cleanup;
@@ -5032,7 +5041,8 @@ qemuDomainSetVcpusFlags(virDomainPtr dom,
     else if (flags & VIR_DOMAIN_VCPU_MAXIMUM)
         ret = qemuDomainSetVcpusMax(driver, def, persistentDef, nvcpus);
     else
-        ret = qemuDomainSetVcpusInternal(driver, vm, def, persistentDef, nvcpus);
+        ret = qemuDomainSetVcpusInternal(driver, vm, def, persistentDef,
+                                         nvcpus, hotpluggable);
 endjob:
     qemuDomainObjEndJob(driver, vm);

On Thu, Sep 29, 2016 at 16:29:13 +0530, Shivaprasad G Bhat wrote:
On 09/21/2016 05:19 PM, Peter Krempa wrote:
Implement support for VIR_DOMAIN_VCPU_HOTPLUGGABLE so that users can
choose to make vcpus added by the API removable.
---
 src/qemu/qemu_driver.c | 22 ++++++++++++++++------
 1 file changed, 16 insertions(+), 6 deletions(-)
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 357be4e..8453628 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -4912,7 +4912,8 @@ qemuDomainSetVcpusLive(virQEMUDriverPtr driver,
  */
 static void
 qemuDomainSetVcpusConfig(virDomainDefPtr def,
-                         unsigned int nvcpus)
+                         unsigned int nvcpus,
+                         bool hotpluggable)
 {
     virDomainVcpuDefPtr vcpu;

Here, at the "if (curvcpus == nvcpus) return;" check, we still need to
allow the case where someone wants to switch from hotpluggable = yes to
no, or vice versa.
No. As no new vcpus were added there's nothing to turn hotpluggable. The flag marks only the newly added vcpus as hotpluggable.
     size_t curvcpus = virDomainDefGetVcpus(def);
@@ -4933,7 +4934,12 @@ qemuDomainSetVcpusConfig(virDomainDefPtr def,
                 continue;
             vcpu->online = true;
-            vcpu->hotpluggable = VIR_TRISTATE_BOOL_NO;
+            if (hotpluggable) {
+                vcpu->hotpluggable = VIR_TRISTATE_BOOL_YES;
+                def->individualvcpus = true;
+            } else {
+                vcpu->hotpluggable = VIR_TRISTATE_BOOL_NO;
+            }
             if (++curvcpus == nvcpus)
                 break;
Can we add a check here that the config is valid on PPC when a topology
is given in the XML, i.e. that (curvcpus % threads_per_core == 0)?
No. For PPC and all the weird archs that don't have thread-level hotplug we can't really know what to use and what is a legitimate configuration until we start the VM and query qemu.
Otherwise, with

  virsh setvcpus rhel71 13 --config --hotpluggable

for a guest with topology <topology sockets='8' cores='2' threads='4'/>
we would see:

2016-09-29 10:12:05.929+0000: 1137: error : qemuProcessValidateHotpluggableVcpus:4829 : unsupported configuration: vcpus '12' and '13' are in the same hotplug group but differ in configuration
Yes, you can configure the same thing manually in the XML.
OR
Even when doing a live setvcpus to a number not amounting to complete
cores, we have checks leading to a sensible error (error: unsupported
configuration: target vm vcpu granularity does not allow the desired
vcpu count). So maybe we can add the same check in the --config case
too, to bring consistency.
As said above, libvirt can't reliably detect that the "weird" approach is needed.

Peter

On 09/29/2016 05:32 PM, Peter Krempa wrote:
On Thu, Sep 29, 2016 at 16:29:13 +0530, Shivaprasad G Bhat wrote:
On 09/21/2016 05:19 PM, Peter Krempa wrote:
Implement support for VIR_DOMAIN_VCPU_HOTPLUGGABLE so that users can
choose to make vcpus added by the API removable.
---
 src/qemu/qemu_driver.c | 22 ++++++++++++++++------
 1 file changed, 16 insertions(+), 6 deletions(-)
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 357be4e..8453628 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -4912,7 +4912,8 @@ qemuDomainSetVcpusLive(virQEMUDriverPtr driver,
  */
 static void
 qemuDomainSetVcpusConfig(virDomainDefPtr def,
-                         unsigned int nvcpus)
+                         unsigned int nvcpus,
+                         bool hotpluggable)
 {
     virDomainVcpuDefPtr vcpu;

Here, at the "if (curvcpus == nvcpus) return;" check, we still need to
allow the case where someone wants to switch from hotpluggable = yes to
no, or vice versa.

No. As no new vcpus were added there's nothing to turn hotpluggable.
The flag marks only the newly added vcpus as hotpluggable.
What if someone had hotpluggable = yes and wants to make it "no", with no changes to the vcpu count?
     size_t curvcpus = virDomainDefGetVcpus(def);
@@ -4933,7 +4934,12 @@ qemuDomainSetVcpusConfig(virDomainDefPtr def,
                 continue;
             vcpu->online = true;
-            vcpu->hotpluggable = VIR_TRISTATE_BOOL_NO;
+            if (hotpluggable) {
+                vcpu->hotpluggable = VIR_TRISTATE_BOOL_YES;
+                def->individualvcpus = true;
+            } else {
+                vcpu->hotpluggable = VIR_TRISTATE_BOOL_NO;
+            }
             if (++curvcpus == nvcpus)
                 break;
Can we add a check here that the config is valid on PPC when a topology
is given in the XML, i.e. that (curvcpus % threads_per_core == 0)?
No. For PPC and all the weird archs that don't have thread-level hotplug we can't really know what to use and what is a legitimate configuration until we start the VM and query qemu.
Otherwise, with

  virsh setvcpus rhel71 13 --config --hotpluggable

for a guest with topology <topology sockets='8' cores='2' threads='4'/>
we would see:

2016-09-29 10:12:05.929+0000: 1137: error : qemuProcessValidateHotpluggableVcpus:4829 : unsupported configuration: vcpus '12' and '13' are in the same hotplug group but differ in configuration

Yes, you can configure the same thing manually in the XML.
OR
Even when doing a live setvcpus to a number not amounting to complete
cores, we have checks leading to a sensible error (error: unsupported
configuration: target vm vcpu granularity does not allow the desired
vcpu count). So maybe we can add the same check in the --config case
too, to bring consistency.

As said above, libvirt can't reliably detect that the "weird" approach
is needed.
Peter

On Thu, Sep 29, 2016 at 17:37:53 +0530, Shivaprasad G Bhat wrote:
On 09/29/2016 05:32 PM, Peter Krempa wrote:
On Thu, Sep 29, 2016 at 16:29:13 +0530, Shivaprasad G Bhat wrote:
On 09/21/2016 05:19 PM, Peter Krempa wrote:
Implement support for VIR_DOMAIN_VCPU_HOTPLUGGABLE so that users can
choose to make vcpus added by the API removable.
---
 src/qemu/qemu_driver.c | 22 ++++++++++++++++------
 1 file changed, 16 insertions(+), 6 deletions(-)
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 357be4e..8453628 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -4912,7 +4912,8 @@ qemuDomainSetVcpusLive(virQEMUDriverPtr driver,
  */
 static void
 qemuDomainSetVcpusConfig(virDomainDefPtr def,
-                         unsigned int nvcpus)
+                         unsigned int nvcpus,
+                         bool hotpluggable)
 {
     virDomainVcpuDefPtr vcpu;

Here, at the "if (curvcpus == nvcpus) return;" check, we still need to
allow the case where someone wants to switch from hotpluggable = yes to
no, or vice versa.

No. As no new vcpus were added there's nothing to turn hotpluggable.
The flag marks only the newly added vcpus as hotpluggable.
What if someone had hotpluggable = yes and wants to make it "no", with no changes to the vcpu count?
You can make two calls to the API, decreasing and re-increasing the count. Or edit the XML directly, since it's a non-simple change. If the API call would change existing vcpus to either hotpluggable or not it would break configurations, e.g. PPC requires the first 8 vcpus to be non-hotpluggable vs. just one on Intel.
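[Editor's illustration] A sketch of the two-call workaround described above,
again through the public API. The helper name and the counts (4 online vcpus,
flipping the top 2 to hotpluggable) are illustrative, not from the thread:

    #include <libvirt/libvirt.h>

    /* Hypothetical helper: flip the top 2 of 4 online vcpus in the
     * persistent config to hotpluggable by coldunplugging them and
     * re-adding them with the new flag set. */
    static int
    makeTopVcpusHotpluggable(virDomainPtr dom)
    {
        /* Drop to 2 vcpus in the config; the removed ones come off the top. */
        if (virDomainSetVcpusFlags(dom, 2, VIR_DOMAIN_AFFECT_CONFIG) < 0)
            return -1;

        /* Re-add them, this time marked hotpluggable. */
        return virDomainSetVcpusFlags(dom, 4,
                                      VIR_DOMAIN_AFFECT_CONFIG |
                                      VIR_DOMAIN_VCPU_HOTPLUGGABLE);
    }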

On 09/29/2016 05:50 PM, Peter Krempa wrote:
On Thu, Sep 29, 2016 at 17:37:53 +0530, Shivaprasad G Bhat wrote:
On 09/29/2016 05:32 PM, Peter Krempa wrote:
On Thu, Sep 29, 2016 at 16:29:13 +0530, Shivaprasad G Bhat wrote:
On 09/21/2016 05:19 PM, Peter Krempa wrote:
Implement support for VIR_DOMAIN_VCPU_HOTPLUGGABLE so that users can
choose to make vcpus added by the API removable.
---
 src/qemu/qemu_driver.c | 22 ++++++++++++++++------
 1 file changed, 16 insertions(+), 6 deletions(-)
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 357be4e..8453628 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -4912,7 +4912,8 @@ qemuDomainSetVcpusLive(virQEMUDriverPtr driver,
  */
 static void
 qemuDomainSetVcpusConfig(virDomainDefPtr def,
-                         unsigned int nvcpus)
+                         unsigned int nvcpus,
+                         bool hotpluggable)
 {
     virDomainVcpuDefPtr vcpu;

Here, at the "if (curvcpus == nvcpus) return;" check, we still need to
allow the case where someone wants to switch from hotpluggable = yes to
no, or vice versa.

No. As no new vcpus were added there's nothing to turn hotpluggable.
The flag marks only the newly added vcpus as hotpluggable.

What if someone had hotpluggable = yes and wants to make it "no", with
no changes to the vcpu count?
You can make two calls to the API, decreasing and re-increasing the count. Or edit the XML directly, since it's a non-simple change. If the API call would change existing vcpus to either hotpluggable or not it would break configurations, e.g. PPC requires the first 8 vcpus to be non-hotpluggable vs. just one on Intel.
Agreed. Thanks! ACK the series. -Shivaprasad

On Wed, Sep 21, 2016 at 01:49:59PM +0200, Peter Krempa wrote:
Implement support for VIR_DOMAIN_VCPU_HOTPLUGGABLE so that users can
choose to make vcpus added by the API removable.
---
ACK Pavel

On Wed, Sep 21, 2016 at 13:49:52 +0200, Peter Krempa wrote:
A few fixes for various issues connected to vcpu coldplug and vcpu order.
Peter Krempa (7):
  qemu: process: Fix off-by-one in vcpu order duplicate error message
  qemu: process: Don't use shifted indexes for vcpu order verification
  qemu: process: Enforce 'vcpu' order range to <1,maxvcpus>
  qemu: Fix coldplug of vcpus
  qemu: vcpu: Clear vcpu order information rather than making it invalid
  lib: Introduce VIR_DOMAIN_VCPU_HOTPLUGGABLE for virDomainSetVcpusFlags
  qemu: Allow making vcpus hotpluggable with virDomainSetVcpusFlags
Ping