On Thu, Apr 16, 2026 at 21:51:23 +0530, Akash Kulhalli via Devel wrote:
Add VIR_DOMAIN_VCPU_ASYNC to the vCPU management APIs and introduce VIR_DOMAIN_EVENT_ID_VCPU_REMOVED, carrying the libvirt XML vCPU id.
For live vCPU unplug, async mode returns after successfully submitting the unplug request instead of doing the short wait used by the non-async path. The live XML is left unchanged until completion is confirmed, and the final outcome is reported through domain events: successful completion emits VCPU_REMOVED, while guest-rejected unplug requests continue to emit DEVICE_REMOVAL_FAILED.
This closes the current gap where successful vCPU hot-unplug emits no domain event for virsh event or other libvirt event consumers to observe. Thread the new event through the remote protocol, add --async to virsh setvcpus and virsh setvcpu, and teach virsh event and event-test about vcpu-removed.
Async mode is supported only for live vCPU unplug.
Signed-off-by: Akash Kulhalli <akash.kulhalli@oracle.com>
---
 examples/c/misc/event-test.c        | 12 +++++
 include/libvirt/libvirt-domain.h    | 23 +++++++++
 src/conf/domain_event.c             | 66 +++++++++++++++++++++++++
 src/conf/domain_event.h             |  6 +++
 src/libvirt-domain.c                | 40 +++++++++++++++-
 src/libvirt_private.syms            |  2 +
 src/qemu/qemu_driver.c              | 33 +++++++++++--
 src/qemu/qemu_hotplug.c             | 74 +++++++++++++++++++++++++----
 src/qemu/qemu_hotplug.h             |  8 ++--
 src/remote/remote_daemon_dispatch.c | 26 ++++++++++
 src/remote/remote_driver.c          | 29 +++++++++++
 src/remote/remote_protocol.x        | 14 +++++-
 src/remote_protocol-structs         |  6 +++
 tests/qemuhotplugtest.c             |  6 ++-
 tools/virsh-domain-event.c          | 16 +++++++
 tools/virsh-domain.c                | 34 +++++++++++++
 16 files changed, 373 insertions(+), 22 deletions(-)
As noted previously you'll need to split this patch. Split it into the following commits:

1) new event callback and corresponding boilerplate (including required virsh changes since they are enforced at compile time)
2) wire up firing of the event on existing unplug scenarios (including the corresponding change to qemuDomainRemoveVcpuAlias)
3) changes to qemu driver internals to plumb in the flags to skip waiting for the unplug; they will be dormant at this point
4) addition of the flag for 'virDomainSetVcpusFlags', including docs and impl in qemu driver; the corresponding virsh change can be included here
5) addition of the flag for 'virDomainSetVcpu'

etc... ^^^
diff --git a/include/libvirt/libvirt-domain.h b/include/libvirt/libvirt-domain.h
index 4a8e3114b35d..a326d133ee41 100644
--- a/include/libvirt/libvirt-domain.h
+++ b/include/libvirt/libvirt-domain.h
@@ -2605,6 +2605,7 @@ typedef enum {
     VIR_DOMAIN_VCPU_MAXIMUM = (1 << 2), /* Max rather than current count (Since: 0.8.5) */
     VIR_DOMAIN_VCPU_GUEST = (1 << 3), /* Modify state of the cpu in the guest (Since: 1.1.0) */
     VIR_DOMAIN_VCPU_HOTPLUGGABLE = (1 << 4), /* Make vcpus added hot(un)pluggable (Since: 2.4.0) */
+    VIR_DOMAIN_VCPU_ASYNC = (1 << 5), /* Return after firing live unplug request(s) (Since: 12.3.0) */
The description is a bit vague. I'd say "Don't wait for the guest to comply with unplug request(s)" and then explain it later in the function docs. If you decide to explain it here you'll first have to (in a separate patch) refactor the docs to switch to the prefix mode: move the existing comments for *all* fields before the enum members so that you can use proper multiline comments.

[...]
diff --git a/src/libvirt-domain.c b/src/libvirt-domain.c
index db9eea57745c..c290dc6efeca 100644
--- a/src/libvirt-domain.c
+++ b/src/libvirt-domain.c
@@ -7702,6 +7702,14 @@ virDomainSendProcessSignal(virDomainPtr domain,
  * whether it also affects persistent configuration; for more control,
  * use virDomainSetVcpusFlags().
  *
+ * When this API decreases the live vCPU count by hot-unplugging vCPUs in
+ * the hypervisor, completion may be asynchronous. Successful unplug
+ * completion is reported by VIR_DOMAIN_EVENT_ID_VCPU_REMOVED, carrying the
+ * XML ``<vcpu id='...'>`` value. Rejected unplug requests continue to be
+ * reported by VIR_DOMAIN_EVENT_ID_DEVICE_REMOVAL_FAILED. The success event
+ * may be delivered before this API call returns. A timeout is reported only
+ * through the returned error, not through a domain event.
This needs to be added when adding the event.
+ *
  * Returns 0 in case of success, -1 in case of failure.
  *
  * Since: 0.1.4
@@ -7773,8 +7781,23 @@ virDomainSetVcpus(virDomainPtr domain, unsigned int nvcpus)
  * be used with live guests and is incompatible with VIR_DOMAIN_VCPU_MAXIMUM.
  * The usage of this flag may require a guest agent configured.
  *
+ * If @flags includes VIR_DOMAIN_VCPU_ASYNC, only vCPU hot-unplug is requested
+ * asynchronously. In this mode, success means that all required unplug
+ * request(s) were successfully fired; final completion is reported by
+ * the VIR_DOMAIN_EVENT_ID_VCPU_REMOVED event, and rejection is reported by a
+ * `VIR_DOMAIN_EVENT_ID_DEVICE_REMOVAL_FAILED` event type.
This needs to be added when adding the impl of the flag
+ *
  * Not all hypervisors can support all flag combinations.
  *
+ * When this API decreases the live vCPU count by hot-unplugging vCPUs,
+ * completion may be asynchronous. Successful unplug completion is reported by
+ * VIR_DOMAIN_EVENT_ID_VCPU_REMOVED, carrying the XML ``<vcpu id='...'>``
+ * value within the event data. Rejected unplug requests continue to be
+ * reported by VIR_DOMAIN_EVENT_ID_DEVICE_REMOVAL_FAILED. The success event
+ * may be delivered before this API call returns. In non-async mode, a
+ * timeout is reported only through the returned error, not through a domain
+ * event.
This needs to be added when adding the event.
+ *
  * Returns 0 in case of success, -1 in case of failure.
  *
  * Since: 0.8.5
@@ -13125,7 +13148,8 @@ virDomainSetGuestVcpus(virDomainPtr domain,
  * @domain: pointer to domain object
  * @vcpumap: text representation of a bitmap of vcpus to set
  * @state: 0 to disable/1 to enable cpus described by @vcpumap
- * @flags: bitwise-OR of virDomainModificationImpact
+ * @flags: bitwise-OR of virDomainModificationImpact with optional
+ *         VIR_DOMAIN_VCPU_ASYNC
You can't do this. You'll need to add a new enum of flags specifically for this API. Mirror virDomainModificationImpact in the enum and add new flags separately.
@@ -13134,6 +13158,20 @@ virDomainSetGuestVcpus(virDomainPtr domain,
  *
  * Note that OSes and hypervisors may require vCPU 0 to stay online.
  *
+ * If @flags includes VIR_DOMAIN_VCPU_ASYNC, only live vCPU disable
+ * (hot-unplug) is requested asynchronously. In this mode, success means the
+ * unplug request was successfully fired; final completion is reported by
+ * VIR_DOMAIN_EVENT_ID_VCPU_REMOVED, while rejection is reported by
+ * VIR_DOMAIN_EVENT_ID_DEVICE_REMOVAL_FAILED.
This will need to be reworded once the above is done and added when adding the impl.
+ *
+ * When this API disables live vCPUs by hot-unplugging them, the operation
+ * completion may be asynchronous. Successful unplug completion is reported by
+ * VIR_DOMAIN_EVENT_ID_VCPU_REMOVED, carrying the XML ``<vcpu id='...'>``
+ * value. Rejected unplug requests continue to be reported by
+ * VIR_DOMAIN_EVENT_ID_DEVICE_REMOVAL_FAILED. The success event may be
+ * delivered before this API call returns. In non-async mode, a timeout is
+ * reported only through the returned error, not through a domain event.
This needs to be added when adding the event/
+ *
  * Returns 0 on success, -1 on error.
  *
  * Since: 3.1.0
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index d227ac58cdb4..ad4dc11c970f 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -3620,7 +3620,14 @@ processDeviceDeletedEvent(virQEMUDriver *driver,
     }

     if (STRPREFIX(devAlias, "vcpu")) {
-        qemuDomainRemoveVcpuAlias(vm, devAlias);
+        int vcpuid;
+        virObjectEvent *event;
+        if ((vcpuid = qemuDomainRemoveVcpuAlias(vm, devAlias)) == -1)
+            goto endjob;
+
+        event = virDomainEventVcpuRemovedNewFromObj(vm, vcpuid);
+        virObjectEventStateQueue(driver->domainEventState, event);
+
Extra line.
     } else {
         if (virDomainDefFindDevice(vm->def, devAlias, &dev, true) < 0)
             goto endjob;
@@ -4269,6 +4276,7 @@ qemuDomainSetVcpusFlags(virDomainPtr dom,
     virDomainObj *vm = NULL;
     virDomainDef *def;
     virDomainDef *persistentDef;
+    bool async = !!(flags & VIR_DOMAIN_VCPU_ASYNC);
     bool hotpluggable = !!(flags & VIR_DOMAIN_VCPU_HOTPLUGGABLE);
     bool useAgent = !!(flags & VIR_DOMAIN_VCPU_GUEST);
     int ret = -1;
@@ -4277,7 +4285,8 @@ qemuDomainSetVcpusFlags(virDomainPtr dom,
                   VIR_DOMAIN_AFFECT_CONFIG |
                   VIR_DOMAIN_VCPU_MAXIMUM |
                   VIR_DOMAIN_VCPU_GUEST |
-                  VIR_DOMAIN_VCPU_HOTPLUGGABLE, -1);
+                  VIR_DOMAIN_VCPU_HOTPLUGGABLE |
+                  VIR_DOMAIN_VCPU_ASYNC, -1);

     if (!(vm = qemuDomainObjFromDomain(dom)))
         goto cleanup;
@@ -4297,13 +4306,19 @@ qemuDomainSetVcpusFlags(virDomainPtr dom,
     if (virDomainObjGetDefs(vm, flags, &def, &persistentDef) < 0)
         goto endjob;

+    if (async && (useAgent || persistentDef || !def)) {
Do not reject this on a persistentDef update. It will break the compound operation and needlessly require users to do 2 calls. It does need to be rejected with useAgent, but the rest doesn't make sense; even on a persistent-only update IMO we can accept the flag, since the hypervisor will not need to be informed and thus there's nothing to do.
+        virReportError(VIR_ERR_OPERATION_INVALID, "%s",
+                       _("asynchronous mode is supported only for live vcpu unplug"));
+        goto endjob;
+    }
+
     if (useAgent)
         ret = qemuDomainSetVcpusAgent(vm, nvcpus);
     else if (flags & VIR_DOMAIN_VCPU_MAXIMUM)
         ret = qemuDomainSetVcpusMax(driver, vm, def, persistentDef, nvcpus);
     else
         ret = qemuDomainSetVcpusInternal(driver, vm, def, persistentDef,
-                                         nvcpus, hotpluggable);
+                                         nvcpus, hotpluggable, async);

 endjob:
     if (useAgent)
@@ -19169,12 +19184,14 @@ qemuDomainSetVcpu(virDomainPtr dom,
     virDomainObj *vm = NULL;
     virDomainDef *def = NULL;
     virDomainDef *persistentDef = NULL;
+    bool async = !!(flags & VIR_DOMAIN_VCPU_ASYNC);
As noted you'll have to use a separate flag here.
     g_autoptr(virBitmap) map = NULL;
     ssize_t lastvcpu;
     int ret = -1;

     virCheckFlags(VIR_DOMAIN_AFFECT_LIVE |
-                  VIR_DOMAIN_AFFECT_CONFIG, -1);
+                  VIR_DOMAIN_AFFECT_CONFIG |
+                  VIR_DOMAIN_VCPU_ASYNC, -1);

     if (state != 0 && state != 1) {
         virReportInvalidArg(state, "%s", _("unsupported state value"));
@@ -19220,7 +19237,13 @@ qemuDomainSetVcpu(virDomainPtr dom,
         }
     }

-    ret = qemuDomainSetVcpuInternal(driver, vm, def, persistentDef, map, !!state);
+    if (async && (persistentDef || !def)) {
+        virReportError(VIR_ERR_OPERATION_INVALID, "%s",
+                       _("asynchronous mode is supported only for live vcpu unplug"));
+        goto endjob;
+    }
Here this doesn't make sense at all per the above explanation.
+
+    ret = qemuDomainSetVcpuInternal(driver, vm, def, persistentDef, map, !!state, async);

 endjob:
     virDomainObjEndJob(vm);
diff --git a/src/qemu/qemu_hotplug.c b/src/qemu/qemu_hotplug.c
index b7a282b96e52..0b3a781cea19 100644
--- a/src/qemu/qemu_hotplug.c
+++ b/src/qemu/qemu_hotplug.c
@@ -5723,6 +5723,12 @@ qemuDomainRemoveDevice(virQEMUDriver *driver,
     return 0;
 }
+static bool
+qemuDomainDeviceRemoved(virDomainObj *vm)
+{
+    qemuDomainObjPrivate *priv = vm->privateData;
+    return priv->unplug.status == QEMU_DOMAIN_UNPLUGGING_DEVICE_STATUS_OK;
+}

 static void
 qemuDomainMarkDeviceAliasForRemoval(virDomainObj *vm,
@@ -6761,7 +6767,7 @@ qemuDomainRemoveVcpu(virDomainObj *vm,
 }
-void
+int
 qemuDomainRemoveVcpuAlias(virDomainObj *vm,
                           const char *alias)
 {
@@ -6774,13 +6780,16 @@ qemuDomainRemoveVcpuAlias(virDomainObj *vm,
         vcpupriv = QEMU_DOMAIN_VCPU_PRIVATE(vcpu);

         if (STREQ_NULLABLE(alias, vcpupriv->alias)) {
-            qemuDomainRemoveVcpu(vm, i);
-            return;
+            if (qemuDomainRemoveVcpu(vm, i) < 0)
+                return -1;
+
+            return i;
         }
     }

     VIR_DEBUG("vcpu '%s' not found in vcpulist of domain '%s'",
               alias, vm->def->name);
+    return -1;
 }
@@ -6788,7 +6797,8 @@ static int
 qemuDomainHotplugDelVcpu(virQEMUDriver *driver,
                          virQEMUDriverConfig *cfg,
                          virDomainObj *vm,
-                         unsigned int vcpu)
+                         unsigned int vcpu,
+                         bool async)
 {
     virDomainVcpuDef *vcpuinfo = virDomainDefGetVcpu(vm->def, vcpu);
     qemuDomainVcpuPrivate *vcpupriv = QEMU_DOMAIN_VCPU_PRIVATE(vcpuinfo);
@@ -6796,6 +6806,7 @@ qemuDomainHotplugDelVcpu(virQEMUDriver *driver,
     unsigned int nvcpus = vcpupriv->vcpus;
     int rc;
     int ret = -1;
+    virObjectEvent *event = NULL;

     if (!vcpupriv->alias) {
         virReportError(VIR_ERR_OPERATION_UNSUPPORTED,
@@ -6813,6 +6824,22 @@ qemuDomainHotplugDelVcpu(virQEMUDriver *driver,
             goto cleanup;
         }
     } else {
+        if (async) {
+            /* rc = 0 is implied in this branch */
+            if (qemuDomainDeviceRemoved(vm)) {
+                /* event has already arrived, handle it now */
+                goto success;
+            }
+            /*
+             * event has not arrived yet, but the monitor operation was
+             * successful; there will not be a waiter anymore when this thread
+             * exits. Reset removal state now so that the event handling path
+             * can be properly triggered if and when the event does arrive
+             */
+            ret = 0;
+            goto cleanup;
Why didn't you do this as with async device removal? Specifically, skip the call to 'qemuDomainMarkDeviceAliasForRemoval' in async mode and just return 0 here. IMO there's no need for qemuDomainDeviceRemoved or the 'success' label: just skip qemuDomainMarkDeviceAliasForRemoval and qemuDomainWaitForDeviceRemoval and assume success without removing the device from the definition. That way the removal will always be handled asynchronously without the need for any different/questionable code/approach.
+        }
+
         if ((rc = qemuDomainWaitForDeviceRemoval(vm)) <= 0) {
             if (rc == 0)
                 virReportError(VIR_ERR_OPERATION_TIMEOUT, "%s",
@@ -6821,6 +6848,7 @@ qemuDomainHotplugDelVcpu(virQEMUDriver *driver,
         }
     }

+ success:
     if (qemuDomainRemoveVcpu(vm, vcpu) < 0)
         goto cleanup;
@@ -6831,6 +6859,10 @@ qemuDomainHotplugDelVcpu(virQEMUDriver *driver,
ret = 0;
+    /* emit event now to close the async caller loop */
+    event = virDomainEventVcpuRemovedNewFromObj(vm, vcpu);
+    virObjectEventStateQueue(driver->domainEventState, event);
This ought to happen from qemuDomainRemoveVcpu rather than being scattered across the multiple paths leading to that function.
+
 cleanup:
     qemuDomainResetDeviceRemoval(vm);
     return ret;
[...]
@@ -7114,12 +7147,20 @@ qemuDomainSetVcpusInternal(virQEMUDriver *driver,
                            virDomainDef *def,
                            virDomainDef *persistentDef,
                            unsigned int nvcpus,
-                           bool hotpluggable)
+                           bool hotpluggable,
+                           bool async)
 {
     g_autoptr(virQEMUDriverConfig) cfg = virQEMUDriverGetConfig(driver);
     g_autoptr(virBitmap) vcpumap = NULL;
     bool enable;

+    if (async && persistentDef) {
+        virReportError(VIR_ERR_OPERATION_INVALID, "%s",
+                  _("asynchronous mode is supported only for live vcpu unplug"));
+
+        return -1;
+    }
Remove this per previous explanation. (It has also broken alignment).
+
     if (def && nvcpus > virDomainDefGetVcpusMax(def)) {
         virReportError(VIR_ERR_INVALID_ARG,
                        _("requested vcpus is greater than max allowable vcpus for the live domain: %1$u > %2$u"),
@@ -7139,7 +7180,13 @@ qemuDomainSetVcpusInternal(virQEMUDriver *driver,
                                  &enable)))
         return -1;

-    if (qemuDomainSetVcpusLive(driver, cfg, vm, vcpumap, enable) < 0)
+    if (async && enable) {
+        virReportError(VIR_ERR_OPERATION_INVALID, "%s",
+                       _("asynchronous mode is supported only for vcpu unplug"));
+        return -1;
+    }
This breaks the semantics of the API, which sets a specific cpu amount; the caller doesn't say whether cpus are being removed or added. IMO this should accept async mode also when enabling cpus, and it will just do nothing because no cpus will be removed.
+
+    if (qemuDomainSetVcpusLive(driver, cfg, vm, vcpumap, enable, async) < 0)
         return -1;
 }
@@ -7289,7 +7336,8 @@ qemuDomainSetVcpuInternal(virQEMUDriver *driver,
                           virDomainDef *def,
                           virDomainDef *persistentDef,
                           virBitmap *map,
-                          bool state)
+                          bool state,
+                          bool async)
 {
     g_autoptr(virQEMUDriverConfig) cfg = virQEMUDriverGetConfig(driver);
     g_autoptr(virBitmap) livevcpus = NULL;
@@ -7320,8 +7368,14 @@ qemuDomainSetVcpuInternal(virQEMUDriver *driver,
         return -1;
     }

+    if (async && state) {
+        virReportError(VIR_ERR_OPERATION_INVALID, "%s",
+                       _("asynchronous mode is supported only for vcpu unplug"));
+        return -1;
+    }
Here, since the caller explicitly sends the state, this could make sense, but IMO it is not needed, same as in the previous case. Just document that async mode is for unplug only. Consider also renaming the variable to async_unplug everywhere.
+
     if (livevcpus &&
-        qemuDomainSetVcpusLive(driver, cfg, vm, livevcpus, state) < 0)
+        qemuDomainSetVcpusLive(driver, cfg, vm, livevcpus, state, async) < 0)
         return -1;
if (persistentDef) {
[...]
diff --git a/tests/qemuhotplugtest.c b/tests/qemuhotplugtest.c
index ea9d3243f8b1..36bc4d913826 100644
--- a/tests/qemuhotplugtest.c
+++ b/tests/qemuhotplugtest.c
@@ -322,6 +322,7 @@ struct testQemuHotplugCpuParams {
     GHashTable *capsLatestFiles;
     GHashTable *capsCache;
     GHashTable *schemaCache;
+    bool async;
What is the point of this ...
};
@@ -420,7 +421,7 @@ testQemuHotplugCpuGroup(const void *opaque)

     rc = qemuDomainSetVcpusInternal(&driver, data->vm, data->vm->def,
                                     data->vm->newDef, params->newcpus,
-                                    true);
+                                    true, params->async);

     if (params->fail) {
         if (rc == 0)
@@ -458,7 +459,8 @@ testQemuHotplugCpuIndividual(const void *opaque)
         goto cleanup;

     rc = qemuDomainSetVcpuInternal(&driver, data->vm, data->vm->def,
-                                   data->vm->newDef, map, params->state);
+                                   data->vm->newDef, map, params->state,
+                                   params->async);
... if the test never changes to async mode?
     if (params->fail) {
         if (rc == 0)
[...]
diff --git a/tools/virsh-domain.c b/tools/virsh-domain.c
index 08a1ce395378..350a3b6cd8b2 100644
--- a/tools/virsh-domain.c
+++ b/tools/virsh-domain.c
@@ -7667,6 +7667,10 @@ static const vshCmdOptDef opts_setvcpus[] = {
      .type = VSH_OT_BOOL,
      .help = N_("make added vcpus hot(un)pluggable")
     },
+    {.name = "async",
+     .type = VSH_OT_BOOL,
+     .help = N_("return after firing live vcpu unplug request(s)")
+    },
The new flag needs to be documented in the virsh man page. See docs/manpages/virsh.rst
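For illustration, one possible shape for the man page entry (the wording and exact placement in docs/manpages/virsh.rst are suggestions only):

```rst
``--async``
   Return as soon as the live vcpu unplug request(s) have been submitted to
   the hypervisor, instead of waiting for the guest to comply. Completion is
   then reported asynchronously via the ``vcpu-removed`` domain event.
```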
     {.name = NULL}
 };

@@ -7681,6 +7685,7 @@ cmdSetvcpus(vshControl *ctl, const vshCmd *cmd)
     bool current = vshCommandOptBool(cmd, "current");
     bool guest = vshCommandOptBool(cmd, "guest");
     bool hotpluggable = vshCommandOptBool(cmd, "hotpluggable");
+    bool async = vshCommandOptBool(cmd, "async");
     unsigned int flags = VIR_DOMAIN_AFFECT_CURRENT;

     VSH_EXCLUSIVE_OPTIONS_VAR(current, live);
@@ -7699,6 +7704,8 @@ cmdSetvcpus(vshControl *ctl, const vshCmd *cmd)
         flags |= VIR_DOMAIN_VCPU_MAXIMUM;
     if (hotpluggable)
         flags |= VIR_DOMAIN_VCPU_HOTPLUGGABLE;
+    if (async)
+        flags |= VIR_DOMAIN_VCPU_ASYNC;

     if (!(dom = virshCommandOptDomain(ctl, cmd, NULL)))
         return false;
@@ -7711,6 +7718,12 @@ cmdSetvcpus(vshControl *ctl, const vshCmd *cmd)
         return false;
     }

+    if (async && (config || maximum || guest || hotpluggable)) {
+        vshError(ctl, "%s",
+                 _("--async can be used only for live hypervisor vcpu unplug"));
'hypervisor vcpu unplug' doesn't make sense; you are unplugging the cpu from the guest. In addition, this command sets the cpu count potentially in both directions, which again shows that the checks rejecting the async flag don't fit the API semantics. To fix this, apply the suggestions above.
+        return false;
+    }
+
     /* none of the options were specified */
     if (!current && flags == 0) {
         if (virDomainSetVcpus(dom, count) != 0)
@@ -7720,6 +7733,10 @@ cmdSetvcpus(vshControl *ctl, const vshCmd *cmd)
         return false;
     }

+    if (async)
+        vshPrintExtra(ctl, "%s",
+                      _("vCPU unplug requests sent successfully\n"));
+
No need for the extra blurb.
     return true;
 }
[...]
@@ -7829,6 +7846,10 @@ static const vshCmdOptDef opts_setvcpu[] = {
      .type = VSH_OT_BOOL,
      .help = N_("disable cpus specified by cpumap")
     },
+    {.name = "async",
+     .type = VSH_OT_BOOL,
+     .help = N_("return after firing live vcpu unplug request")
+    },
The new flag needs to be documented in the virsh man page. See docs/manpages/virsh.rst
     VIRSH_COMMON_OPT_DOMAIN_CONFIG,
     VIRSH_COMMON_OPT_DOMAIN_LIVE,
     VIRSH_COMMON_OPT_DOMAIN_CURRENT,
@@ -7843,6 +7864,7 @@ cmdSetvcpu(vshControl *ctl, const vshCmd *cmd)
     bool disable = vshCommandOptBool(cmd, "disable");
     bool config = vshCommandOptBool(cmd, "config");
     bool live = vshCommandOptBool(cmd, "live");
+    bool async = vshCommandOptBool(cmd, "async");
     const char *vcpulist = NULL;
     int state = 0;
     unsigned int flags = VIR_DOMAIN_AFFECT_CURRENT;
@@ -7856,12 +7878,20 @@ cmdSetvcpu(vshControl *ctl, const vshCmd *cmd)
         flags |= VIR_DOMAIN_AFFECT_CONFIG;
     if (live)
         flags |= VIR_DOMAIN_AFFECT_LIVE;
+    if (async)
+        flags |= VIR_DOMAIN_VCPU_ASYNC;

     if (!(enable || disable)) {
         vshError(ctl, "%s", _("one of --enable, --disable is required"));
         return false;
     }

+    if (async && (config || enable)) {
+        vshError(ctl, "%s",
+                 _("--async can be used only for live vcpu disable"));
+        return false;
+    }
Use VSH_EXCLUSIVE_OPTIONS("async", "enable"); as noted elsewhere, do not reject async with --config.
+
     if (vshCommandOptString(ctl, cmd, "vcpulist", &vcpulist))
         return false;

@@ -7874,6 +7904,10 @@ cmdSetvcpu(vshControl *ctl, const vshCmd *cmd)
     if (virDomainSetVcpu(dom, vcpulist, state, flags) < 0)
         return false;

+    if (async)
+        vshPrintExtra(ctl, "%s",
+                      _("vCPU unplug request sent successfully\n"));
IMO there's no need for the extra message.
+
     return true;
 }
-- 2.47.3