[libvirt] [PATCH v2 00/23] qemu: Add support for new vcpu hotplug and unplug

Version 2 attempts to address feedback on v1:
- added more documentation
- fixed various typos
- fixed bugs appearing when restarting the daemon
- fixed fallback code
- tested upgrade of libvirt

Some of the patches were already pushed and this series is rebased on top of
those changes. Patches 1-18, with the exception of 16, were more or less ACKed
already.

I've dropped the last patch adding the events from this series. I'm working on
adding a new event for vcpu lifecycle. I'm also working on reporting of the
vcpu granularity via the domcapabilities API. I'm going to post those
separately, as well as the new API that will allow modifying the vcpus
individually.

Peter Krempa (23):
  qemu: monitor: Return structures from qemuMonitorGetCPUInfo
  qemu: monitor: Return struct from qemuMonitor(Text|Json)QueryCPUs
  qemu: caps: Add capability for query-hotpluggable-cpus command
  qemu: Forbid config when topology based cpu count doesn't match the config
  qemu: capabilities: Extract availability of new cpu hotplug for machine types
  qemu: monitor: Extract QOM path from query-cpus reply
  qemu: monitor: Add support for calling query-hotpluggable-cpus
  qemu: monitor: Add algorithm for combining query-(hotpluggable-)-cpus data
  tests: Add test infrastructure for qemuMonitorGetCPUInfo
  tests: cpu-hotplug: Add data for ppc64 platform including hotplug
  tests: cpu-hotplug: Add data for ppc64 out-of-order hotplug
  tests: cpu-hotplug: Add data for ppc64 without threads enabled
  qemu: domain: Extract cpu-hotplug related data
  qemu: domain: Prepare for VCPUs vanishing while libvirt is not running
  util: Extract and rename qemuDomainDelCgroupForThread to virCgroupDelThread
  conf: Add XML for individual vCPU hotplug
  qemu: migration: Prepare for non-contiguous vcpu configurations
  qemu: command: Add helper to convert vcpu definition to JSON props
  qemu: process: Copy final vcpu order information into the vcpu definition
  qemu: command: Add support for sparse vcpu topologies
  qemu: Use modern vcpu hotplug approach if possible
  qemu: hotplug: Allow marking unplugged devices by alias
  qemu: hotplug: Add support for VCPU unplug

 docs/formatdomain.html.in | 40 +++
 docs/schemas/domaincommon.rng | 25 ++
 src/conf/domain_conf.c | 152 +++++++++-
 src/conf/domain_conf.h | 6 +
 src/libvirt_private.syms | 1 +
 src/qemu/qemu_capabilities.c | 31 +-
 src/qemu/qemu_capabilities.h | 3 +
 src/qemu/qemu_command.c | 50 +++-
 src/qemu/qemu_command.h | 3 +
 src/qemu/qemu_domain.c | 312 +++++++++++++++++----
 src/qemu/qemu_domain.h | 19 +-
 src/qemu/qemu_driver.c | 246 +++++++++-------
 src/qemu/qemu_hotplug.c | 124 +++++++-
 src/qemu/qemu_hotplug.h | 7 +
 src/qemu/qemu_migration.c | 16 +-
 src/qemu/qemu_monitor.c | 260 ++++++++++++++++-
 src/qemu/qemu_monitor.h | 58 +++-
 src/qemu/qemu_monitor_json.c | 266 +++++++++++++++---
 src/qemu/qemu_monitor_json.h | 8 +-
 src/qemu/qemu_monitor_text.c | 41 +--
 src/qemu/qemu_monitor_text.h | 3 +-
 src/qemu/qemu_process.c | 182 +++++++++++-
 src/util/vircgroup.c | 20 ++
 src/util/vircgroup.h | 4 +
 .../generic-vcpus-individual.xml | 23 ++
 tests/genericxml2xmltest.c | 2 +
 tests/qemucapabilitiesdata/caps_2.7.0.x86_64.xml | 55 ++--
 .../qemumonitorjson-cpuinfo-ppc64-basic-cpus.json | 77 +++++
 ...emumonitorjson-cpuinfo-ppc64-basic-hotplug.json | 27 ++
 .../qemumonitorjson-cpuinfo-ppc64-basic.data | 40 +++
 ...mumonitorjson-cpuinfo-ppc64-hotplug-1-cpus.json | 149 ++++++++++
 ...onitorjson-cpuinfo-ppc64-hotplug-1-hotplug.json | 28 ++
 .../qemumonitorjson-cpuinfo-ppc64-hotplug-1.data | 51 ++++
 ...mumonitorjson-cpuinfo-ppc64-hotplug-2-cpus.json | 221 +++++++++++++++
 ...onitorjson-cpuinfo-ppc64-hotplug-2-hotplug.json | 29 ++
 .../qemumonitorjson-cpuinfo-ppc64-hotplug-2.data | 62 ++++
 ...mumonitorjson-cpuinfo-ppc64-hotplug-4-cpus.json | 221 +++++++++++++++
 ...onitorjson-cpuinfo-ppc64-hotplug-4-hotplug.json | 29 ++
 .../qemumonitorjson-cpuinfo-ppc64-hotplug-4.data | 62 ++++
 ...umonitorjson-cpuinfo-ppc64-no-threads-cpus.json | 77 +++++
 ...nitorjson-cpuinfo-ppc64-no-threads-hotplug.json | 125 +++++++++
 .../qemumonitorjson-cpuinfo-ppc64-no-threads.data | 72 +++++
 ...nitorjson-cpuinfo-x86-basic-pluggable-cpus.json | 50 ++++
 ...orjson-cpuinfo-x86-basic-pluggable-hotplug.json | 82 ++++++
 ...emumonitorjson-cpuinfo-x86-basic-pluggable.data | 39 +++
 tests/qemumonitorjsontest.c | 183 +++++++++++-
 .../qemuxml2argv-cpu-hotplug-startup.args | 20 ++
 .../qemuxml2argv-cpu-hotplug-startup.xml | 29 ++
 tests/qemuxml2argvtest.c | 2 +
 tests/testutils.c | 4 +-
 50 files changed, 3360 insertions(+), 276 deletions(-)
 create mode 100644 tests/genericxml2xmlindata/generic-vcpus-individual.xml
 create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-basic-cpus.json
 create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-basic-hotplug.json
 create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-basic.data
 create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-hotplug-1-cpus.json
 create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-hotplug-1-hotplug.json
 create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-hotplug-1.data
 create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-hotplug-2-cpus.json
 create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-hotplug-2-hotplug.json
 create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-hotplug-2.data
 create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-hotplug-4-cpus.json
 create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-hotplug-4-hotplug.json
 create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-hotplug-4.data
 create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-no-threads-cpus.json
 create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-no-threads-hotplug.json
 create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-no-threads.data
 create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-x86-basic-pluggable-cpus.json
 create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-x86-basic-pluggable-hotplug.json
 create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-x86-basic-pluggable.data
 create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-cpu-hotplug-startup.args
 create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-cpu-hotplug-startup.xml
-- 2.8.2

The function will gradually add more returned data. Return a struct for every vCPU containing the data. --- Notes: v2: - already ACKed src/qemu/qemu_domain.c | 25 +++++++++------------- src/qemu/qemu_monitor.c | 57 +++++++++++++++++++++++++++++++++++++++++++------ src/qemu/qemu_monitor.h | 13 ++++++++++- 3 files changed, 72 insertions(+), 23 deletions(-) diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c index 6c0f261..4b8c878 100644 --- a/src/qemu/qemu_domain.c +++ b/src/qemu/qemu_domain.c @@ -5737,10 +5737,11 @@ qemuDomainRefreshVcpuInfo(virQEMUDriverPtr driver, int asyncJob) { virDomainVcpuDefPtr vcpu; + qemuDomainVcpuPrivatePtr vcpupriv; + qemuMonitorCPUInfoPtr info = NULL; size_t maxvcpus = virDomainDefGetVcpusMax(vm->def); - pid_t *cpupids = NULL; - int ncpupids; size_t i; + int rc; int ret = -1; /* @@ -5776,32 +5777,26 @@ qemuDomainRefreshVcpuInfo(virQEMUDriverPtr driver, if (qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob) < 0) return -1; - ncpupids = qemuMonitorGetCPUInfo(qemuDomainGetMonitor(vm), &cpupids); + + rc = qemuMonitorGetCPUInfo(qemuDomainGetMonitor(vm), &info, maxvcpus); + if (qemuDomainObjExitMonitor(driver, vm) < 0) goto cleanup; - /* failure to get the VCPU <-> PID mapping or to execute the query - * command will not be treated fatal as some versions of qemu don't - * support this command */ - if (ncpupids <= 0) { - virResetLastError(); - ret = 0; + if (rc < 0) goto cleanup; - } for (i = 0; i < maxvcpus; i++) { vcpu = virDomainDefGetVcpu(vm->def, i); + vcpupriv = QEMU_DOMAIN_VCPU_PRIVATE(vcpu); - if (i < ncpupids) - QEMU_DOMAIN_VCPU_PRIVATE(vcpu)->tid = cpupids[i]; - else - QEMU_DOMAIN_VCPU_PRIVATE(vcpu)->tid = 0; + vcpupriv->tid = info[i].tid; } ret = 0; cleanup: - VIR_FREE(cpupids); + qemuMonitorCPUInfoFree(info, maxvcpus); return ret; } diff --git a/src/qemu/qemu_monitor.c b/src/qemu/qemu_monitor.c index 83e1272..285beae 100644 --- a/src/qemu/qemu_monitor.c +++ b/src/qemu/qemu_monitor.c @@ -1656,25 +1656,68 @@ qemuMonitorSystemReset(qemuMonitorPtr mon) } +void +qemuMonitorCPUInfoFree(qemuMonitorCPUInfoPtr cpus, + size_t ncpus ATTRIBUTE_UNUSED) +{ + if (!cpus) + return; + + VIR_FREE(cpus); +} + + /** * qemuMonitorGetCPUInfo: * @mon: monitor - * @pids: returned array of thread ids corresponding to the vCPUs + * @cpus: pointer filled by array of qemuMonitorCPUInfo structures + * @maxvcpus: total possible number of vcpus + * + * Detects VCPU information. If qemu doesn't support or fails reporting + * information this function will return success as other parts of libvirt + * are able to cope with that. * - * Detects the vCPU thread ids. Returns count of detected vCPUs on success, - * 0 if qemu didn't report thread ids (does not report libvirt error), - * -1 on error (reports libvirt error). + * Returns 0 on success (including if qemu didn't report any data) and + * -1 on error (reports libvirt error). 
*/ int qemuMonitorGetCPUInfo(qemuMonitorPtr mon, - int **pids) + qemuMonitorCPUInfoPtr *vcpus, + size_t maxvcpus) { + qemuMonitorCPUInfoPtr info = NULL; + int *pids = NULL; + size_t i; + int ret = -1; + int rc; + QEMU_CHECK_MONITOR(mon); + if (VIR_ALLOC_N(info, maxvcpus) < 0) + return -1; + if (mon->json) - return qemuMonitorJSONQueryCPUs(mon, pids); + rc = qemuMonitorJSONQueryCPUs(mon, &pids); else - return qemuMonitorTextQueryCPUs(mon, pids); + rc = qemuMonitorTextQueryCPUs(mon, &pids); + + if (rc < 0) { + virResetLastError(); + VIR_STEAL_PTR(*vcpus, info); + ret = 0; + goto cleanup; + } + + for (i = 0; i < rc; i++) + info[i].tid = pids[i]; + + VIR_STEAL_PTR(*vcpus, info); + ret = 0; + + cleanup: + qemuMonitorCPUInfoFree(info, maxvcpus); + VIR_FREE(pids); + return ret; } diff --git a/src/qemu/qemu_monitor.h b/src/qemu/qemu_monitor.h index 591d3ed..3fa993f 100644 --- a/src/qemu/qemu_monitor.h +++ b/src/qemu/qemu_monitor.h @@ -390,8 +390,19 @@ int qemuMonitorGetStatus(qemuMonitorPtr mon, int qemuMonitorSystemReset(qemuMonitorPtr mon); int qemuMonitorSystemPowerdown(qemuMonitorPtr mon); + +struct _qemuMonitorCPUInfo { + pid_t tid; +}; +typedef struct _qemuMonitorCPUInfo qemuMonitorCPUInfo; +typedef qemuMonitorCPUInfo *qemuMonitorCPUInfoPtr; + +void qemuMonitorCPUInfoFree(qemuMonitorCPUInfoPtr list, + size_t nitems); int qemuMonitorGetCPUInfo(qemuMonitorPtr mon, - int **pids); + qemuMonitorCPUInfoPtr *vcpus, + size_t maxvcpus); + int qemuMonitorGetVirtType(qemuMonitorPtr mon, virDomainVirtType *virtType); int qemuMonitorGetBalloonInfo(qemuMonitorPtr mon, -- 2.8.2

At 08/19/2016 10:38 PM, Peter Krempa wrote:
The function will gradually add more returned data. Return a struct for every vCPU containing the data. /** * qemuMonitorGetCPUInfo: * @mon: monitor - * @pids: returned array of thread ids corresponding to the vCPUs + * @cpus: pointer filled by array of qemuMonitorCPUInfo structures
s/@cpus/@vcpus
+ * @maxvcpus: total possible number of vcpus + * + * Detects VCPU information. If qemu doesn't support or fails reporting + * information this function will return success as other parts of libvirt + * are able to cope with that. * - * Detects the vCPU thread ids. Returns count of detected vCPUs on success, - * 0 if qemu didn't report thread ids (does not report libvirt error), - * -1 on error (reports libvirt error). + * Returns 0 on success (including if qemu didn't report any data) and + * -1 on error (reports libvirt error). */ int qemuMonitorGetCPUInfo(qemuMonitorPtr mon, - int **pids) + qemuMonitorCPUInfoPtr *vcpus, + size_t maxvcpus) { + qemuMonitorCPUInfoPtr info = NULL; + int *pids = NULL; + size_t i; + int ret = -1; + int rc; + QEMU_CHECK_MONITOR(mon);
+ if (VIR_ALLOC_N(info, maxvcpus) < 0) + return -1; + if (mon->json) - return qemuMonitorJSONQueryCPUs(mon, pids); + rc = qemuMonitorJSONQueryCPUs(mon, &pids); else - return qemuMonitorTextQueryCPUs(mon, pids); + rc = qemuMonitorTextQueryCPUs(mon, &pids); + + if (rc < 0) { + virResetLastError(); + VIR_STEAL_PTR(*vcpus, info); + ret = 0; + goto cleanup; + } + + for (i = 0; i < rc; i++) + info[i].tid = pids[i]; + + VIR_STEAL_PTR(*vcpus, info); + ret = 0; + + cleanup: + qemuMonitorCPUInfoFree(info, maxvcpus); + VIR_FREE(pids); + return ret; }
diff --git a/src/qemu/qemu_monitor.h b/src/qemu/qemu_monitor.h index 591d3ed..3fa993f 100644 --- a/src/qemu/qemu_monitor.h +++ b/src/qemu/qemu_monitor.h @@ -390,8 +390,19 @@ int qemuMonitorGetStatus(qemuMonitorPtr mon, int qemuMonitorSystemReset(qemuMonitorPtr mon); int qemuMonitorSystemPowerdown(qemuMonitorPtr mon);
+ +struct _qemuMonitorCPUInfo { + pid_t tid; +}; +typedef struct _qemuMonitorCPUInfo qemuMonitorCPUInfo; +typedef qemuMonitorCPUInfo *qemuMonitorCPUInfoPtr; + +void qemuMonitorCPUInfoFree(qemuMonitorCPUInfoPtr list, + size_t nitems); int qemuMonitorGetCPUInfo(qemuMonitorPtr mon, - int **pids); + qemuMonitorCPUInfoPtr *vcpus, + size_t maxvcpus); + int qemuMonitorGetVirtType(qemuMonitorPtr mon, virDomainVirtType *virtType); int qemuMonitorGetBalloonInfo(qemuMonitorPtr mon,

Prepare to extract more data by returning a array of structs rather than just an array of thread ids. Additionally report fatal errors separately from qemu not being able to produce data. --- Notes: v2: - initialized ncpus in qemuMonitorTextQueryCPUs - fixed return value on alloc fail src/qemu/qemu_monitor.c | 31 ++++++++++++------ src/qemu/qemu_monitor.h | 6 ++++ src/qemu/qemu_monitor_json.c | 77 +++++++++++++++++++++++--------------------- src/qemu/qemu_monitor_json.h | 3 +- src/qemu/qemu_monitor_text.c | 41 +++++++++++------------ src/qemu/qemu_monitor_text.h | 3 +- tests/qemumonitorjsontest.c | 39 +++++++++++++++------- 7 files changed, 121 insertions(+), 79 deletions(-) diff --git a/src/qemu/qemu_monitor.c b/src/qemu/qemu_monitor.c index 285beae..f25c68d 100644 --- a/src/qemu/qemu_monitor.c +++ b/src/qemu/qemu_monitor.c @@ -1666,6 +1666,16 @@ qemuMonitorCPUInfoFree(qemuMonitorCPUInfoPtr cpus, VIR_FREE(cpus); } +void +qemuMonitorQueryCpusFree(struct qemuMonitorQueryCpusEntry *entries, + size_t nentries ATTRIBUTE_UNUSED) +{ + if (!entries) + return; + + VIR_FREE(entries); +} + /** * qemuMonitorGetCPUInfo: @@ -1686,7 +1696,8 @@ qemuMonitorGetCPUInfo(qemuMonitorPtr mon, size_t maxvcpus) { qemuMonitorCPUInfoPtr info = NULL; - int *pids = NULL; + struct qemuMonitorQueryCpusEntry *cpuentries = NULL; + size_t ncpuentries = 0; size_t i; int ret = -1; int rc; @@ -1697,26 +1708,28 @@ qemuMonitorGetCPUInfo(qemuMonitorPtr mon, return -1; if (mon->json) - rc = qemuMonitorJSONQueryCPUs(mon, &pids); + rc = qemuMonitorJSONQueryCPUs(mon, &cpuentries, &ncpuentries); else - rc = qemuMonitorTextQueryCPUs(mon, &pids); + rc = qemuMonitorTextQueryCPUs(mon, &cpuentries, &ncpuentries); if (rc < 0) { - virResetLastError(); - VIR_STEAL_PTR(*vcpus, info); - ret = 0; + if (rc == -2) { + VIR_STEAL_PTR(*vcpus, info); + ret = 0; + } + goto cleanup; } - for (i = 0; i < rc; i++) - info[i].tid = pids[i]; + for (i = 0; i < ncpuentries; i++) + info[i].tid = cpuentries[i].tid; VIR_STEAL_PTR(*vcpus, info); ret = 0; cleanup: qemuMonitorCPUInfoFree(info, maxvcpus); - VIR_FREE(pids); + qemuMonitorQueryCpusFree(cpuentries, ncpuentries); return ret; } diff --git a/src/qemu/qemu_monitor.h b/src/qemu/qemu_monitor.h index 3fa993f..826991f 100644 --- a/src/qemu/qemu_monitor.h +++ b/src/qemu/qemu_monitor.h @@ -390,6 +390,12 @@ int qemuMonitorGetStatus(qemuMonitorPtr mon, int qemuMonitorSystemReset(qemuMonitorPtr mon); int qemuMonitorSystemPowerdown(qemuMonitorPtr mon); +struct qemuMonitorQueryCpusEntry { + pid_t tid; +}; +void qemuMonitorQueryCpusFree(struct qemuMonitorQueryCpusEntry *entries, + size_t nentries); + struct _qemuMonitorCPUInfo { pid_t tid; diff --git a/src/qemu/qemu_monitor_json.c b/src/qemu/qemu_monitor_json.c index 1df1e4a..4f4fb4f 100644 --- a/src/qemu/qemu_monitor_json.c +++ b/src/qemu/qemu_monitor_json.c @@ -1323,69 +1323,69 @@ int qemuMonitorJSONSystemReset(qemuMonitorPtr mon) * { "CPU": 1, "current": false, "halted": true, "pc": 7108165 } ] */ static int -qemuMonitorJSONExtractCPUInfo(virJSONValuePtr reply, - int **pids) +qemuMonitorJSONExtractCPUInfo(virJSONValuePtr data, + struct qemuMonitorQueryCpusEntry **entries, + size_t *nentries) { - virJSONValuePtr data; + struct qemuMonitorQueryCpusEntry *cpus = NULL; int ret = -1; size_t i; - int *threads = NULL; ssize_t ncpus; - if (!(data = virJSONValueObjectGetArray(reply, "return"))) { - virReportError(VIR_ERR_INTERNAL_ERROR, "%s", - _("cpu reply was missing return data")); - goto cleanup; - } - - if ((ncpus = virJSONValueArraySize(data)) <= 0) { - 
virReportError(VIR_ERR_INTERNAL_ERROR, "%s", - _("cpu information was empty")); - goto cleanup; - } + if ((ncpus = virJSONValueArraySize(data)) <= 0) + return -2; - if (VIR_ALLOC_N(threads, ncpus) < 0) + if (VIR_ALLOC_N(cpus, ncpus) < 0) goto cleanup; for (i = 0; i < ncpus; i++) { virJSONValuePtr entry = virJSONValueArrayGet(data, i); - int thread; + int thread = 0; if (!entry) { - virReportError(VIR_ERR_INTERNAL_ERROR, "%s", - _("cpu information was missing an array element")); + ret = -2; goto cleanup; } - if (virJSONValueObjectGetNumberInt(entry, "thread_id", &thread) < 0) { - /* Some older qemu versions don't report the thread_id, - * so treat this as non-fatal, simply returning no data */ - ret = 0; - goto cleanup; - } + /* Some older qemu versions don't report the thread_id so treat this as + * non-fatal, simply returning no data */ + ignore_value(virJSONValueObjectGetNumberInt(entry, "thread_id", &thread)); - threads[i] = thread; + cpus[i].tid = thread; } - *pids = threads; - threads = NULL; - ret = ncpus; + VIR_STEAL_PTR(*entries, cpus); + *nentries = ncpus; + ret = 0; cleanup: - VIR_FREE(threads); + qemuMonitorQueryCpusFree(cpus, ncpus); return ret; } +/** + * qemuMonitorJSONQueryCPUs: + * + * @mon: monitor object + * @entries: filled with detected entries on success + * @nentries: number of entries returned + * + * Queries qemu for cpu-related information. Failure to execute the command or + * extract results does not produce an error as libvirt can continue without + * this information. + * + * Returns 0 on success success, -1 on a fatal error (oom ...) and -2 if the + * query failed gracefully. + */ int qemuMonitorJSONQueryCPUs(qemuMonitorPtr mon, - int **pids) + struct qemuMonitorQueryCpusEntry **entries, + size_t *nentries) { int ret = -1; - virJSONValuePtr cmd = qemuMonitorJSONMakeCommand("query-cpus", - NULL); + virJSONValuePtr cmd = qemuMonitorJSONMakeCommand("query-cpus", NULL); virJSONValuePtr reply = NULL; - - *pids = NULL; + virJSONValuePtr data; if (!cmd) return -1; @@ -1393,10 +1393,13 @@ qemuMonitorJSONQueryCPUs(qemuMonitorPtr mon, if (qemuMonitorJSONCommand(mon, cmd, &reply) < 0) goto cleanup; - if (qemuMonitorJSONCheckError(cmd, reply) < 0) + if (!(data = virJSONValueObjectGetArray(reply, "return"))) { + ret = -2; goto cleanup; + } + + ret = qemuMonitorJSONExtractCPUInfo(data, entries, nentries); - ret = qemuMonitorJSONExtractCPUInfo(reply, pids); cleanup: virJSONValueFree(cmd); virJSONValueFree(reply); diff --git a/src/qemu/qemu_monitor_json.h b/src/qemu/qemu_monitor_json.h index 174f0ef..90fe697 100644 --- a/src/qemu/qemu_monitor_json.h +++ b/src/qemu/qemu_monitor_json.h @@ -59,7 +59,8 @@ int qemuMonitorJSONSystemPowerdown(qemuMonitorPtr mon); int qemuMonitorJSONSystemReset(qemuMonitorPtr mon); int qemuMonitorJSONQueryCPUs(qemuMonitorPtr mon, - int **pids); + struct qemuMonitorQueryCpusEntry **entries, + size_t *nentries); int qemuMonitorJSONGetVirtType(qemuMonitorPtr mon, virDomainVirtType *virtType); int qemuMonitorJSONUpdateVideoMemorySize(qemuMonitorPtr mon, diff --git a/src/qemu/qemu_monitor_text.c b/src/qemu/qemu_monitor_text.c index ca04965..ff7cc79 100644 --- a/src/qemu/qemu_monitor_text.c +++ b/src/qemu/qemu_monitor_text.c @@ -502,12 +502,15 @@ int qemuMonitorTextSystemReset(qemuMonitorPtr mon) int qemuMonitorTextQueryCPUs(qemuMonitorPtr mon, - int **pids) + struct qemuMonitorQueryCpusEntry **entries, + size_t *nentries) { char *qemucpus = NULL; char *line; - pid_t *cpupids = NULL; - size_t ncpupids = 0; + struct qemuMonitorQueryCpusEntry *cpus = NULL; 
+ size_t ncpus = 0; + struct qemuMonitorQueryCpusEntry cpu = {0}; + int ret = -2; /* -2 denotes a non-fatal error to get the data */ if (qemuMonitorHMPCommand(mon, "info cpus", &qemucpus) < 0) return -1; @@ -529,15 +532,19 @@ qemuMonitorTextQueryCPUs(qemuMonitorPtr mon, /* Extract host Thread ID */ if ((offset = strstr(line, "thread_id=")) == NULL) - goto error; + goto cleanup; if (virStrToLong_i(offset + strlen("thread_id="), &end, 10, &tid) < 0) - goto error; + goto cleanup; if (end == NULL || !c_isspace(*end)) - goto error; + goto cleanup; - if (VIR_APPEND_ELEMENT_COPY(cpupids, ncpupids, tid) < 0) - goto error; + cpu.tid = tid; + + if (VIR_APPEND_ELEMENT_COPY(cpus, ncpus, cpu) < 0) { + ret = -1; + goto cleanup; + } VIR_DEBUG("tid=%d", tid); @@ -547,20 +554,14 @@ qemuMonitorTextQueryCPUs(qemuMonitorPtr mon, line = strchr(offset, '\n'); } while (line != NULL); - /* Validate we got data for all VCPUs we expected */ - VIR_FREE(qemucpus); - *pids = cpupids; - return ncpupids; + VIR_STEAL_PTR(*entries, cpus); + *nentries = ncpus; + ret = 0; - error: + cleanup: + qemuMonitorQueryCpusFree(cpus, ncpus); VIR_FREE(qemucpus); - VIR_FREE(cpupids); - - /* Returning 0 to indicate non-fatal failure, since - * older QEMU does not have VCPU<->PID mapping and - * we don't want to fail on that - */ - return 0; + return ret; } diff --git a/src/qemu/qemu_monitor_text.h b/src/qemu/qemu_monitor_text.h index b7f0cab..86f43e7 100644 --- a/src/qemu/qemu_monitor_text.h +++ b/src/qemu/qemu_monitor_text.h @@ -50,7 +50,8 @@ int qemuMonitorTextSystemPowerdown(qemuMonitorPtr mon); int qemuMonitorTextSystemReset(qemuMonitorPtr mon); int qemuMonitorTextQueryCPUs(qemuMonitorPtr mon, - int **pids); + struct qemuMonitorQueryCpusEntry **entries, + size_t *nentries); int qemuMonitorTextGetVirtType(qemuMonitorPtr mon, virDomainVirtType *virtType); int qemuMonitorTextGetBalloonInfo(qemuMonitorPtr mon, diff --git a/tests/qemumonitorjsontest.c b/tests/qemumonitorjsontest.c index 9988754..ad68e16 100644 --- a/tests/qemumonitorjsontest.c +++ b/tests/qemumonitorjsontest.c @@ -1201,6 +1201,16 @@ GEN_TEST_FUNC(qemuMonitorJSONNBDServerStart, "localhost", 12345) GEN_TEST_FUNC(qemuMonitorJSONNBDServerAdd, "vda", true) GEN_TEST_FUNC(qemuMonitorJSONDetachCharDev, "serial1") +static bool +testQemuMonitorJSONqemuMonitorJSONQueryCPUsEqual(struct qemuMonitorQueryCpusEntry *a, + struct qemuMonitorQueryCpusEntry *b) +{ + if (a->tid != b->tid) + return false; + + return true; +} + static int testQemuMonitorJSONqemuMonitorJSONQueryCPUs(const void *data) @@ -1208,9 +1218,14 @@ testQemuMonitorJSONqemuMonitorJSONQueryCPUs(const void *data) virDomainXMLOptionPtr xmlopt = (virDomainXMLOptionPtr)data; qemuMonitorTestPtr test = qemuMonitorTestNewSimple(true, xmlopt); int ret = -1; - pid_t *cpupids = NULL; - pid_t expected_cpupids[] = {17622, 17624, 17626, 17628}; - int ncpupids; + struct qemuMonitorQueryCpusEntry *cpudata = NULL; + struct qemuMonitorQueryCpusEntry expect[] = { + {17622}, + {17624}, + {17626}, + {17628}, + }; + size_t ncpudata; size_t i; if (!test) @@ -1252,19 +1267,21 @@ testQemuMonitorJSONqemuMonitorJSONQueryCPUs(const void *data) "}") < 0) goto cleanup; - ncpupids = qemuMonitorJSONQueryCPUs(qemuMonitorTestGetMonitor(test), &cpupids); + if (qemuMonitorJSONQueryCPUs(qemuMonitorTestGetMonitor(test), + &cpudata, &ncpudata) < 0) + goto cleanup; - if (ncpupids != 4) { + if (ncpudata != 4) { virReportError(VIR_ERR_INTERNAL_ERROR, - "Expecting ncpupids = 4 but got %d", ncpupids); + "Expecting ncpupids = 4 but got %zu", ncpudata); goto cleanup; 
} - for (i = 0; i < ncpupids; i++) { - if (cpupids[i] != expected_cpupids[i]) { + for (i = 0; i < ncpudata; i++) { + if (!testQemuMonitorJSONqemuMonitorJSONQueryCPUsEqual(cpudata + i, + expect + i)) { virReportError(VIR_ERR_INTERNAL_ERROR, - "Expecting cpupids[%zu] = %d but got %d", - i, expected_cpupids[i], cpupids[i]); + "vcpu entry %zu does not match expected data", i); goto cleanup; } } @@ -1272,7 +1289,7 @@ testQemuMonitorJSONqemuMonitorJSONQueryCPUs(const void *data) ret = 0; cleanup: - VIR_FREE(cpupids); + qemuMonitorQueryCpusFree(cpudata, ncpudata); qemuMonitorTestFree(test); return ret; } -- 2.8.2

--- Notes: v2: - already ACKed src/qemu/qemu_capabilities.c | 2 ++ src/qemu/qemu_capabilities.h | 1 + tests/qemucapabilitiesdata/caps_2.7.0.x86_64.xml | 1 + 3 files changed, 4 insertions(+) diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c index 8efb8fb..88d2c6e 100644 --- a/src/qemu/qemu_capabilities.c +++ b/src/qemu/qemu_capabilities.c @@ -341,6 +341,7 @@ VIR_ENUM_IMPL(virQEMUCaps, QEMU_CAPS_LAST, "intel-iommu", "smm", "virtio-pci-disable-legacy", + "query-hotpluggable-cpus", ); @@ -1462,6 +1463,7 @@ struct virQEMUCapsStringFlags virQEMUCapsCommands[] = { { "change-backing-file", QEMU_CAPS_CHANGE_BACKING_FILE }, { "rtc-reset-reinjection", QEMU_CAPS_RTC_RESET_REINJECTION }, { "migrate-incoming", QEMU_CAPS_INCOMING_DEFER }, + { "query-hotpluggable-cpus", QEMU_CAPS_QUERY_HOTPLUGGABLE_CPUS }, }; struct virQEMUCapsStringFlags virQEMUCapsMigration[] = { diff --git a/src/qemu/qemu_capabilities.h b/src/qemu/qemu_capabilities.h index 776a0f3..365394d 100644 --- a/src/qemu/qemu_capabilities.h +++ b/src/qemu/qemu_capabilities.h @@ -374,6 +374,7 @@ typedef enum { QEMU_CAPS_DEVICE_INTEL_IOMMU, /* -device intel-iommu */ QEMU_CAPS_MACHINE_SMM_OPT, /* -machine xxx,smm=on/off/auto */ QEMU_CAPS_VIRTIO_PCI_DISABLE_LEGACY, /* virtio-*pci.disable-legacy */ + QEMU_CAPS_QUERY_HOTPLUGGABLE_CPUS, /* qmp command query-hotpluggable-cpus */ QEMU_CAPS_LAST /* this must always be the last item */ } virQEMUCapsFlags; diff --git a/tests/qemucapabilitiesdata/caps_2.7.0.x86_64.xml b/tests/qemucapabilitiesdata/caps_2.7.0.x86_64.xml index ccb190b..daae569 100644 --- a/tests/qemucapabilitiesdata/caps_2.7.0.x86_64.xml +++ b/tests/qemucapabilitiesdata/caps_2.7.0.x86_64.xml @@ -193,6 +193,7 @@ <flag name='intel-iommu'/> <flag name='smm'/> <flag name='virtio-pci-disable-legacy'/> + <flag name='query-hotpluggable-cpus'/> <version>2006091</version> <kvmVersion>0</kvmVersion> <package> (v2.7.0-rc1-52-g42e0d60)</package> -- 2.8.2

As of qemu commit: commit a32ef3bfc12c8d0588f43f74dcc5280885bbdb30 Author: Thomas Huth <thuth@redhat.com> Date: Wed Jul 22 15:59:50 2015 +0200 vl: Add another sanity check to smp_parse() function v2.4.0-952-ga32ef3b configuration where the maximum CPU count doesn't match the topology is rejected. Prior to that only configurations where the topology would contain more cpus than the maximum count would be rejected. Use QEMU_CAPS_QUERY_HOTPLUGGABLE_CPUS as a relevant recent enough witness to avoid breaking old configs. --- Notes: v2: -fixed typo in commit message - already ACKed src/qemu/qemu_domain.c | 38 ++++++++++++++++++++++++++++++++------ 1 file changed, 32 insertions(+), 6 deletions(-) diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c index 4b8c878..c56dc75 100644 --- a/src/qemu/qemu_domain.c +++ b/src/qemu/qemu_domain.c @@ -2314,12 +2314,21 @@ qemuDomainDefPostParse(virDomainDefPtr def, static int qemuDomainDefValidate(const virDomainDef *def, virCapsPtr caps ATTRIBUTE_UNUSED, - void *opaque ATTRIBUTE_UNUSED) + void *opaque) { + virQEMUDriverPtr driver = opaque; + virQEMUCapsPtr qemuCaps = NULL; + size_t topologycpus; + int ret = -1; + + if (!(qemuCaps = virQEMUCapsCacheLookup(driver->qemuCapsCache, + def->emulator))) + goto cleanup; + if (def->mem.min_guarantee) { virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s", _("Parameter 'min_guarantee' not supported by QEMU.")); - return -1; + goto cleanup; } if (def->os.loader && @@ -2330,7 +2339,7 @@ qemuDomainDefValidate(const virDomainDef *def, if (!qemuDomainMachineIsQ35(def)) { virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s", _("Secure boot is supported with q35 machine types only")); - return -1; + goto cleanup; } /* Now, technically it is possible to have secure boot on @@ -2339,17 +2348,34 @@ qemuDomainDefValidate(const virDomainDef *def, if (def->os.arch != VIR_ARCH_X86_64) { virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s", _("Secure boot is supported for x86_64 architecture only")); - return -1; + goto cleanup; } if (def->features[VIR_DOMAIN_FEATURE_SMM] != VIR_TRISTATE_SWITCH_ON) { virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s", _("Secure boot requires SMM feature enabled")); - return -1; + goto cleanup; } } - return 0; + /* qemu as of 2.5.0 rejects SMP topologies that don't match the cpu count */ + if (def->cpu && def->cpu->sockets) { + topologycpus = def->cpu->sockets * def->cpu->cores * def->cpu->threads; + if (topologycpus != virDomainDefGetVcpusMax(def)) { + /* presence of query-hotpluggable-cpus should be a good enough witness */ + if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_QUERY_HOTPLUGGABLE_CPUS)) { + virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s", + _("CPU topology doesn't match maximum vcpu count")); + goto cleanup; + } + } + } + + ret = 0; + + cleanup: + virObjectUnref(qemuCaps); + return ret; } -- 2.8.2
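For illustration, a minimal standalone sketch of the rule the validation hunk above enforces (not libvirt code; the function name and example numbers are made up):

  static bool
  topologyMatchesMaxVcpus(unsigned int sockets,
                          unsigned int cores,
                          unsigned int threads,
                          unsigned int maxvcpus,
                          bool hasQueryHotpluggableCpus)
  {
      unsigned int topologycpus = sockets * cores * threads;

      /* Older qemu only rejected topologies describing more cpus than the
       * maximum, so the stricter check is gated on the presence of
       * query-hotpluggable-cpus to avoid breaking existing configs. */
      if (sockets && hasQueryHotpluggableCpus && topologycpus != maxvcpus)
          return false;

      return true;
  }

For example <topology sockets='2' cores='2' threads='2'/> (8 cpus) combined with <vcpu>16</vcpu> used to start fine on older qemu, but is now rejected at validation time when the witness capability is present.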

On 08/19/2016 10:38 AM, Peter Krempa wrote:
As of qemu commit: commit a32ef3bfc12c8d0588f43f74dcc5280885bbdb30 Author: Thomas Huth <thuth@redhat.com> Date: Wed Jul 22 15:59:50 2015 +0200
vl: Add another sanity check to smp_parse() function
v2.4.0-952-ga32ef3b
configuration where the maximum CPU count doesn't match the topology is rejected. Prior to that only configurations where the topology would contain more cpus than the maximum count would be rejected.
Use QEMU_CAPS_QUERY_HOTPLUGGABLE_CPUS as a relevant recent enough witness to avoid breaking old configs. ---
Notes: v2: -fixed typo in commit message - already ACKed
src/qemu/qemu_domain.c | 38 ++++++++++++++++++++++++++++++++------ 1 file changed, 32 insertions(+), 6 deletions(-)
diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c index 4b8c878..c56dc75 100644 --- a/src/qemu/qemu_domain.c +++ b/src/qemu/qemu_domain.c @@ -2314,12 +2314,21 @@ qemuDomainDefPostParse(virDomainDefPtr def, static int qemuDomainDefValidate(const virDomainDef *def, virCapsPtr caps ATTRIBUTE_UNUSED, - void *opaque ATTRIBUTE_UNUSED) + void *opaque) { + virQEMUDriverPtr driver = opaque; + virQEMUCapsPtr qemuCaps = NULL; + size_t topologycpus; + int ret = -1; + + if (!(qemuCaps = virQEMUCapsCacheLookup(driver->qemuCapsCache, + def->emulator))) + goto cleanup; + if (def->mem.min_guarantee) { virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s", _("Parameter 'min_guarantee' not supported by QEMU.")); - return -1; + goto cleanup; }
if (def->os.loader && @@ -2330,7 +2339,7 @@ qemuDomainDefValidate(const virDomainDef *def, if (!qemuDomainMachineIsQ35(def)) { virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s", _("Secure boot is supported with q35 machine types only")); - return -1; + goto cleanup; }
/* Now, technically it is possible to have secure boot on @@ -2339,17 +2348,34 @@ qemuDomainDefValidate(const virDomainDef *def, if (def->os.arch != VIR_ARCH_X86_64) { virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s", _("Secure boot is supported for x86_64 architecture only")); - return -1; + goto cleanup; }
if (def->features[VIR_DOMAIN_FEATURE_SMM] != VIR_TRISTATE_SWITCH_ON) { virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s", _("Secure boot requires SMM feature enabled")); - return -1; + goto cleanup; } }
- return 0; + /* qemu as of 2.5.0 rejects SMP topologies that don't match the cpu count */ + if (def->cpu && def->cpu->sockets) { + topologycpus = def->cpu->sockets * def->cpu->cores * def->cpu->threads; + if (topologycpus != virDomainDefGetVcpusMax(def)) { + /* presence of query-hotpluggable-cpus should be a good enough witness */ + if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_QUERY_HOTPLUGGABLE_CPUS)) { + virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s", + _("CPU topology doesn't match maximum vcpu count")); + goto cleanup; + }
Would it be a good idea to add a VIR_WARN or VIR_INFO (something) that will give a heads up that the configuration may prevent some future start? Especially since you went through all the math before detecting the capability... John
+ } + } + + ret = 0; + + cleanup: + virObjectUnref(qemuCaps); + return ret; }

On Fri, Aug 19, 2016 at 13:10:09 -0400, John Ferlan wrote:
On 08/19/2016 10:38 AM, Peter Krempa wrote:
As of qemu commit: commit a32ef3bfc12c8d0588f43f74dcc5280885bbdb30 Author: Thomas Huth <thuth@redhat.com> Date: Wed Jul 22 15:59:50 2015 +0200
vl: Add another sanity check to smp_parse() function
v2.4.0-952-ga32ef3b
configuration where the maximum CPU count doesn't match the topology is rejected. Prior to that only configurations where the topology would contain more cpus than the maximum count would be rejected.
Use QEMU_CAPS_QUERY_HOTPLUGGABLE_CPUS as a relevant recent enough witness to avoid breaking old configs. ---
Notes: v2: -fixed typo in commit message - already ACKed
src/qemu/qemu_domain.c | 38 ++++++++++++++++++++++++++++++++------ 1 file changed, 32 insertions(+), 6 deletions(-)
[...]
+ /* qemu as of 2.5.0 rejects SMP topologies that don't match the cpu count */ + if (def->cpu && def->cpu->sockets) { + topologycpus = def->cpu->sockets * def->cpu->cores * def->cpu->threads; + if (topologycpus != virDomainDefGetVcpusMax(def)) { + /* presence of query-hotpluggable-cpus should be a good enough witness */ + if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_QUERY_HOTPLUGGABLE_CPUS)) { + virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s", + _("CPU topology doesn't match maximum vcpu count")); + goto cleanup; + }
Would it be a good idea to add a VIR_WARN or VIR_INFO (something) that will give a heads up that the configuration may prevent some future start? Especially since you went through all the math before detecting the capability...
Well, adding a warning would certainly be easy, but as was discussed earlier, using VIR_WARN or VIR_INFO doesn't make much sense since it's improbable that the user will be reading the logs after defining a VM.

Peter

QEMU reports whether 'query-hotpluggable-cpus' is supported for a given machine type. Extract and cache the information using the capability cache. When copying the capabilities for a new start of qemu, mask out the presence of QEMU_CAPS_QUERY_HOTPLUGGABLE_CPUS if the machine type doesn't support hotpluggable cpus. --- Notes: v2: - improved commit message - already ACKed src/qemu/qemu_capabilities.c | 29 ++++++++++++- src/qemu/qemu_capabilities.h | 2 + src/qemu/qemu_monitor.h | 1 + src/qemu/qemu_monitor_json.c | 3 ++ tests/qemucapabilitiesdata/caps_2.7.0.x86_64.xml | 54 ++++++++++++------------ 5 files changed, 61 insertions(+), 28 deletions(-) diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c index 88d2c6e..2ca7803 100644 --- a/src/qemu/qemu_capabilities.c +++ b/src/qemu/qemu_capabilities.c @@ -349,6 +349,7 @@ struct virQEMUCapsMachineType { char *name; char *alias; unsigned int maxCpus; + bool hotplugCpus; }; /* * Update the XML parser/formatter when adding more @@ -550,6 +551,7 @@ virQEMUCapsParseMachineTypesStr(const char *output, } /* When parsing from command line we don't have information about maxCpus */ qemuCaps->machineTypes[qemuCaps->nmachineTypes-1].maxCpus = 0; + qemuCaps->machineTypes[qemuCaps->nmachineTypes-1].hotplugCpus = false; } while ((p = next)); @@ -2105,6 +2107,7 @@ virQEMUCapsPtr virQEMUCapsNewCopy(virQEMUCapsPtr qemuCaps) VIR_STRDUP(ret->machineTypes[i].alias, qemuCaps->machineTypes[i].alias) < 0) goto error; ret->machineTypes[i].maxCpus = qemuCaps->machineTypes[i].maxCpus; + ret->machineTypes[i].hotplugCpus = qemuCaps->machineTypes[i].hotplugCpus; } if (VIR_ALLOC_N(ret->gicCapabilities, qemuCaps->ngicCapabilities) < 0) @@ -2414,6 +2417,20 @@ int virQEMUCapsGetMachineMaxCpus(virQEMUCapsPtr qemuCaps, } +bool virQEMUCapsGetMachineHotplugCpus(virQEMUCapsPtr qemuCaps, + const char *name) +{ + size_t i; + + for (i = 0; i < qemuCaps->nmachineTypes; i++) { + if (STREQ_NULLABLE(qemuCaps->machineTypes[i].name, name)) + return qemuCaps->machineTypes[i].hotplugCpus; + } + + return false; +} + + /** * virQEMUCapsSetGICCapabilities: * @qemuCaps: QEMU capabilities @@ -2572,6 +2589,7 @@ virQEMUCapsProbeQMPMachineTypes(virQEMUCapsPtr qemuCaps, goto cleanup; mach->maxCpus = machines[i]->maxCpus; + mach->hotplugCpus = machines[i]->hotplugCpus; if (machines[i]->isDefault) defIdx = qemuCaps->nmachineTypes - 1; @@ -2830,7 +2848,7 @@ int virQEMUCapsProbeQMP(virQEMUCapsPtr qemuCaps, * ... * <cpu name="pentium3"/> * ... - * <machine name="pc-1.0" alias="pc" maxCpus="4"/> + * <machine name="pc-1.0" alias="pc" hotplugCpus='yes' maxCpus="4"/> * ... 
* </qemuCaps> */ @@ -2991,6 +3009,11 @@ virQEMUCapsLoadCache(virQEMUCapsPtr qemuCaps, const char *filename, goto cleanup; } VIR_FREE(str); + + str = virXMLPropString(nodes[i], "hotplugCpus"); + if (STREQ_NULLABLE(str, "yes")) + qemuCaps->machineTypes[i].hotplugCpus = true; + VIR_FREE(str); } } VIR_FREE(nodes); @@ -3124,6 +3147,8 @@ virQEMUCapsFormatCache(virQEMUCapsPtr qemuCaps, if (qemuCaps->machineTypes[i].alias) virBufferEscapeString(&buf, " alias='%s'", qemuCaps->machineTypes[i].alias); + if (qemuCaps->machineTypes[i].hotplugCpus) + virBufferAddLit(&buf, " hotplugCpus='yes'"); virBufferAsprintf(&buf, " maxCpus='%u'/>\n", qemuCaps->machineTypes[i].maxCpus); } @@ -3912,6 +3937,8 @@ virQEMUCapsFilterByMachineType(virQEMUCapsPtr qemuCaps, virQEMUCapsClear(qemuCaps, filter->flags[j]); } + if (!virQEMUCapsGetMachineHotplugCpus(qemuCaps, machineType)) + virQEMUCapsClear(qemuCaps, QEMU_CAPS_QUERY_HOTPLUGGABLE_CPUS); } diff --git a/src/qemu/qemu_capabilities.h b/src/qemu/qemu_capabilities.h index 365394d..a74d39f 100644 --- a/src/qemu/qemu_capabilities.h +++ b/src/qemu/qemu_capabilities.h @@ -428,6 +428,8 @@ const char *virQEMUCapsGetCanonicalMachine(virQEMUCapsPtr qemuCaps, const char *name); int virQEMUCapsGetMachineMaxCpus(virQEMUCapsPtr qemuCaps, const char *name); +bool virQEMUCapsGetMachineHotplugCpus(virQEMUCapsPtr qemuCaps, + const char *name); int virQEMUCapsGetMachineTypesCaps(virQEMUCapsPtr qemuCaps, size_t *nmachines, virCapsGuestMachinePtr **machines); diff --git a/src/qemu/qemu_monitor.h b/src/qemu/qemu_monitor.h index 826991f..417091c 100644 --- a/src/qemu/qemu_monitor.h +++ b/src/qemu/qemu_monitor.h @@ -854,6 +854,7 @@ struct _qemuMonitorMachineInfo { bool isDefault; char *alias; unsigned int maxCpus; + bool hotplugCpus; }; int qemuMonitorGetMachines(qemuMonitorPtr mon, diff --git a/src/qemu/qemu_monitor_json.c b/src/qemu/qemu_monitor_json.c index 4f4fb4f..ab5423e 100644 --- a/src/qemu/qemu_monitor_json.c +++ b/src/qemu/qemu_monitor_json.c @@ -4840,6 +4840,9 @@ int qemuMonitorJSONGetMachines(qemuMonitorPtr mon, _("query-machines reply has malformed 'cpu-max' data")); goto cleanup; } + + ignore_value(virJSONValueObjectGetBoolean(child, "hotpluggable-cpus", + &info->hotplugCpus)); } ret = n; diff --git a/tests/qemucapabilitiesdata/caps_2.7.0.x86_64.xml b/tests/qemucapabilitiesdata/caps_2.7.0.x86_64.xml index daae569..7a54040 100644 --- a/tests/qemucapabilitiesdata/caps_2.7.0.x86_64.xml +++ b/tests/qemucapabilitiesdata/caps_2.7.0.x86_64.xml @@ -227,31 +227,31 @@ <cpu name='core2duo'/> <cpu name='phenom'/> <cpu name='qemu64'/> - <machine name='pc-i440fx-2.7' alias='pc' maxCpus='255'/> - <machine name='pc-0.12' maxCpus='255'/> - <machine name='pc-i440fx-2.4' maxCpus='255'/> - <machine name='pc-1.3' maxCpus='255'/> - <machine name='pc-q35-2.7' alias='q35' maxCpus='255'/> - <machine name='pc-q35-2.6' maxCpus='255'/> - <machine name='pc-i440fx-1.7' maxCpus='255'/> - <machine name='pc-i440fx-1.6' maxCpus='255'/> - <machine name='pc-0.11' maxCpus='255'/> - <machine name='pc-i440fx-2.3' maxCpus='255'/> - <machine name='pc-0.10' maxCpus='255'/> - <machine name='pc-1.2' maxCpus='255'/> - <machine name='pc-i440fx-2.2' maxCpus='255'/> - <machine name='isapc' maxCpus='1'/> - <machine name='pc-q35-2.5' maxCpus='255'/> - <machine name='pc-0.15' maxCpus='255'/> - <machine name='pc-i440fx-1.5' maxCpus='255'/> - <machine name='pc-0.14' maxCpus='255'/> - <machine name='pc-i440fx-2.6' maxCpus='255'/> - <machine name='pc-i440fx-1.4' maxCpus='255'/> - <machine name='pc-i440fx-2.5' maxCpus='255'/> - 
<machine name='pc-1.1' maxCpus='255'/> - <machine name='pc-i440fx-2.1' maxCpus='255'/> - <machine name='pc-1.0' maxCpus='255'/> - <machine name='pc-i440fx-2.0' maxCpus='255'/> - <machine name='pc-q35-2.4' maxCpus='255'/> - <machine name='pc-0.13' maxCpus='255'/> + <machine name='pc-i440fx-2.7' alias='pc' hotplugCpus='yes' maxCpus='255'/> + <machine name='pc-0.12' hotplugCpus='yes' maxCpus='255'/> + <machine name='pc-i440fx-2.4' hotplugCpus='yes' maxCpus='255'/> + <machine name='pc-1.3' hotplugCpus='yes' maxCpus='255'/> + <machine name='pc-q35-2.7' alias='q35' hotplugCpus='yes' maxCpus='255'/> + <machine name='pc-q35-2.6' hotplugCpus='yes' maxCpus='255'/> + <machine name='pc-i440fx-1.7' hotplugCpus='yes' maxCpus='255'/> + <machine name='pc-i440fx-1.6' hotplugCpus='yes' maxCpus='255'/> + <machine name='pc-0.11' hotplugCpus='yes' maxCpus='255'/> + <machine name='pc-i440fx-2.3' hotplugCpus='yes' maxCpus='255'/> + <machine name='pc-0.10' hotplugCpus='yes' maxCpus='255'/> + <machine name='pc-1.2' hotplugCpus='yes' maxCpus='255'/> + <machine name='pc-i440fx-2.2' hotplugCpus='yes' maxCpus='255'/> + <machine name='isapc' hotplugCpus='yes' maxCpus='1'/> + <machine name='pc-q35-2.5' hotplugCpus='yes' maxCpus='255'/> + <machine name='pc-0.15' hotplugCpus='yes' maxCpus='255'/> + <machine name='pc-i440fx-1.5' hotplugCpus='yes' maxCpus='255'/> + <machine name='pc-0.14' hotplugCpus='yes' maxCpus='255'/> + <machine name='pc-i440fx-2.6' hotplugCpus='yes' maxCpus='255'/> + <machine name='pc-i440fx-1.4' hotplugCpus='yes' maxCpus='255'/> + <machine name='pc-i440fx-2.5' hotplugCpus='yes' maxCpus='255'/> + <machine name='pc-1.1' hotplugCpus='yes' maxCpus='255'/> + <machine name='pc-i440fx-2.1' hotplugCpus='yes' maxCpus='255'/> + <machine name='pc-1.0' hotplugCpus='yes' maxCpus='255'/> + <machine name='pc-i440fx-2.0' hotplugCpus='yes' maxCpus='255'/> + <machine name='pc-q35-2.4' hotplugCpus='yes' maxCpus='255'/> + <machine name='pc-0.13' hotplugCpus='yes' maxCpus='255'/> </qemuCaps> -- 2.8.2

To allow matching up the data returned by query-cpus to entries in the query-hotpluggable-cpus reply for CPU hotplug it's necessary to extract the QOM path as it's the only link between the two. --- Notes: v2: - fixed commit message - already ACKed src/qemu/qemu_monitor.c | 7 ++++++- src/qemu/qemu_monitor.h | 1 + src/qemu/qemu_monitor_json.c | 16 ++++++++++++++-- tests/qemumonitorjsontest.c | 14 +++++++++----- 4 files changed, 30 insertions(+), 8 deletions(-) diff --git a/src/qemu/qemu_monitor.c b/src/qemu/qemu_monitor.c index f25c68d..f87f431 100644 --- a/src/qemu/qemu_monitor.c +++ b/src/qemu/qemu_monitor.c @@ -1668,11 +1668,16 @@ qemuMonitorCPUInfoFree(qemuMonitorCPUInfoPtr cpus, void qemuMonitorQueryCpusFree(struct qemuMonitorQueryCpusEntry *entries, - size_t nentries ATTRIBUTE_UNUSED) + size_t nentries) { + size_t i; + if (!entries) return; + for (i = 0; i < nentries; i++) + VIR_FREE(entries[i].qom_path); + VIR_FREE(entries); } diff --git a/src/qemu/qemu_monitor.h b/src/qemu/qemu_monitor.h index 417091c..576b2af 100644 --- a/src/qemu/qemu_monitor.h +++ b/src/qemu/qemu_monitor.h @@ -392,6 +392,7 @@ int qemuMonitorSystemPowerdown(qemuMonitorPtr mon); struct qemuMonitorQueryCpusEntry { pid_t tid; + char *qom_path; }; void qemuMonitorQueryCpusFree(struct qemuMonitorQueryCpusEntry *entries, size_t nentries); diff --git a/src/qemu/qemu_monitor_json.c b/src/qemu/qemu_monitor_json.c index ab5423e..9e0ea87 100644 --- a/src/qemu/qemu_monitor_json.c +++ b/src/qemu/qemu_monitor_json.c @@ -1319,8 +1319,16 @@ int qemuMonitorJSONSystemReset(qemuMonitorPtr mon) /* - * [ { "CPU": 0, "current": true, "halted": false, "pc": 3227107138 }, - * { "CPU": 1, "current": false, "halted": true, "pc": 7108165 } ] + * + * [{ "arch": "x86", + * "current": true, + * "CPU": 0, + * "qom_path": "/machine/unattached/device[0]", + * "pc": -2130415978, + * "halted": true, + * "thread_id": 2631237}, + * {...} + * ] */ static int qemuMonitorJSONExtractCPUInfo(virJSONValuePtr data, @@ -1341,6 +1349,7 @@ qemuMonitorJSONExtractCPUInfo(virJSONValuePtr data, for (i = 0; i < ncpus; i++) { virJSONValuePtr entry = virJSONValueArrayGet(data, i); int thread = 0; + const char *qom_path; if (!entry) { ret = -2; goto cleanup; @@ -1349,8 +1358,11 @@ qemuMonitorJSONExtractCPUInfo(virJSONValuePtr data, /* Some older qemu versions don't report the thread_id so treat this as * non-fatal, simply returning no data */ ignore_value(virJSONValueObjectGetNumberInt(entry, "thread_id", &thread)); + qom_path = virJSONValueObjectGetString(entry, "qom_path"); cpus[i].tid = thread; + if (VIR_STRDUP(cpus[i].qom_path, qom_path) < 0) + goto cleanup; } VIR_STEAL_PTR(*entries, cpus); diff --git a/tests/qemumonitorjsontest.c b/tests/qemumonitorjsontest.c index ad68e16..4be4618 100644 --- a/tests/qemumonitorjsontest.c +++ b/tests/qemumonitorjsontest.c @@ -1205,7 +1205,8 @@ static bool testQemuMonitorJSONqemuMonitorJSONQueryCPUsEqual(struct qemuMonitorQueryCpusEntry *a, struct qemuMonitorQueryCpusEntry *b) { - if (a->tid != b->tid) + if (a->tid != b->tid || + STRNEQ_NULLABLE(a->qom_path, b->qom_path)) return false; return true; @@ -1220,10 +1221,10 @@ testQemuMonitorJSONqemuMonitorJSONQueryCPUs(const void *data) int ret = -1; struct qemuMonitorQueryCpusEntry *cpudata = NULL; struct qemuMonitorQueryCpusEntry expect[] = { - {17622}, - {17624}, - {17626}, - {17628}, + {17622, (char *) "/machine/unattached/device[0]"}, + {17624, (char *) "/machine/unattached/device[1]"}, + {17626, (char *) "/machine/unattached/device[2]"}, + {17628, NULL}, }; size_t ncpudata; 
size_t i; @@ -1237,6 +1238,7 @@ testQemuMonitorJSONqemuMonitorJSONQueryCPUs(const void *data) " {" " \"current\": true," " \"CPU\": 0," + " \"qom_path\": \"/machine/unattached/device[0]\"," " \"pc\": -2130530478," " \"halted\": true," " \"thread_id\": 17622" @@ -1244,6 +1246,7 @@ testQemuMonitorJSONqemuMonitorJSONQueryCPUs(const void *data) " {" " \"current\": false," " \"CPU\": 1," + " \"qom_path\": \"/machine/unattached/device[1]\"," " \"pc\": -2130530478," " \"halted\": true," " \"thread_id\": 17624" @@ -1251,6 +1254,7 @@ testQemuMonitorJSONqemuMonitorJSONQueryCPUs(const void *data) " {" " \"current\": false," " \"CPU\": 2," + " \"qom_path\": \"/machine/unattached/device[2]\"," " \"pc\": -2130530478," " \"halted\": true," " \"thread_id\": 17626" -- 2.8.2
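For illustration, this is how the same boot CPU shows up in the two commands once this and the following patch are in place (values taken from the doc comments in the patches; the QOM path is the only shared field):

/* query-cpus:
 *   { "CPU": 0, "current": true, "halted": true, "thread_id": 2631237,
 *     "qom_path": "/machine/unattached/device[0]", ... }
 *
 * query-hotpluggable-cpus:
 *   { "type": "qemu64-x86_64-cpu", "vcpus-count": 1,
 *     "props": { "socket-id": 0, "core-id": 0, "thread-id": 0 },
 *     "qom-path": "/machine/unattached/device[0]" }
 */

Note the differing spelling ("qom_path" vs "qom-path"); the later matching patch pairs the entries by comparing QOM path prefixes.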

Add support for retrieving information regarding hotpluggable cpu units supported by qemu. Data returned by the command carries information needed to figure out the granularity of hotplug, the necessary cpu type name and the topology information. Note that qemu doesn't specify any particular order of the entries thus it's necessary sort them by socket_id, core_id and thread_id to the order libvirt expects. --- Notes: v2: - fixed alias string in comment - already ACKed src/qemu/qemu_monitor.h | 16 ++++ src/qemu/qemu_monitor_json.c | 170 +++++++++++++++++++++++++++++++++++++++++++ src/qemu/qemu_monitor_json.h | 5 ++ 3 files changed, 191 insertions(+) diff --git a/src/qemu/qemu_monitor.h b/src/qemu/qemu_monitor.h index 576b2af..58f8327 100644 --- a/src/qemu/qemu_monitor.h +++ b/src/qemu/qemu_monitor.h @@ -398,6 +398,22 @@ void qemuMonitorQueryCpusFree(struct qemuMonitorQueryCpusEntry *entries, size_t nentries); +struct qemuMonitorQueryHotpluggableCpusEntry { + char *type; /* name of the cpu to use with device_add */ + unsigned int vcpus; /* count of virtual cpus in the guest this entry adds */ + char *qom_path; /* full device qom path only present for online cpus */ + char *alias; /* device alias, may be NULL for non-hotpluggable entities */ + + /* topology information -1 if qemu didn't report given parameter */ + int node_id; + int socket_id; + int core_id; + int thread_id; +}; +void qemuMonitorQueryHotpluggableCpusFree(struct qemuMonitorQueryHotpluggableCpusEntry *entries, + size_t nentries); + + struct _qemuMonitorCPUInfo { pid_t tid; }; diff --git a/src/qemu/qemu_monitor_json.c b/src/qemu/qemu_monitor_json.c index 9e0ea87..3d0a120 100644 --- a/src/qemu/qemu_monitor_json.c +++ b/src/qemu/qemu_monitor_json.c @@ -7070,3 +7070,173 @@ qemuMonitorJSONGetRTCTime(qemuMonitorPtr mon, virJSONValueFree(reply); return ret; } + + +void +qemuMonitorQueryHotpluggableCpusFree(struct qemuMonitorQueryHotpluggableCpusEntry *entries, + size_t nentries) +{ + struct qemuMonitorQueryHotpluggableCpusEntry *entry; + size_t i; + + if (!entries) + return; + + for (i = 0; i < nentries; i++) { + entry = entries + i; + + VIR_FREE(entry->type); + VIR_FREE(entry->qom_path); + VIR_FREE(entry->alias); + } + + VIR_FREE(entries); +} + + +/** + * [{ + * "props": { + * "core-id": 0, + * "thread-id": 0, + * "socket-id": 0 + * }, + * "vcpus-count": 1, + * "qom-path": "/machine/unattached/device[0]", + * "type": "qemu64-x86_64-cpu" + * }, + * {...} + * ] + */ +static int +qemuMonitorJSONProcessHotpluggableCpusReply(virJSONValuePtr vcpu, + struct qemuMonitorQueryHotpluggableCpusEntry *entry) +{ + virJSONValuePtr props; + const char *tmp; + + if (!(tmp = virJSONValueObjectGetString(vcpu, "type"))) { + virReportError(VIR_ERR_INTERNAL_ERROR, "%s", + _("query-hotpluggable-cpus didn't return device type")); + return -1; + } + + if (VIR_STRDUP(entry->type, tmp) < 0) + return -1; + + if (virJSONValueObjectGetNumberUint(vcpu, "vcpus-count", &entry->vcpus) < 0) { + virReportError(VIR_ERR_INTERNAL_ERROR, "%s", + _("query-hotpluggable-cpus didn't return vcpus-count")); + return -1; + } + + if (!(props = virJSONValueObjectGetObject(vcpu, "props"))) { + virReportError(VIR_ERR_INTERNAL_ERROR, "%s", + _("query-hotpluggable-cpus didn't return device props")); + return -1; + } + + entry->node_id = -1; + entry->socket_id = -1; + entry->core_id = -1; + entry->thread_id = -1; + + ignore_value(virJSONValueObjectGetNumberInt(props, "node-id", &entry->node_id)); + ignore_value(virJSONValueObjectGetNumberInt(props, "socket-id", &entry->socket_id)); + 
ignore_value(virJSONValueObjectGetNumberInt(props, "core-id", &entry->core_id)); + ignore_value(virJSONValueObjectGetNumberInt(props, "thread-id", &entry->thread_id)); + + if (entry->node_id == -1 && entry->socket_id == -1 && + entry->core_id == -1 && entry->thread_id == -1) { + virReportError(VIR_ERR_INTERNAL_ERROR, "%s", + _("query-hotpluggable-cpus entry doesn't report " + "topology information")); + return -1; + } + + /* qom path is not present unless the vCPU is online */ + if ((tmp = virJSONValueObjectGetString(vcpu, "qom-path"))) { + if (VIR_STRDUP(entry->qom_path, tmp) < 0) + return -1; + + /* alias is the part after last slash having a "vcpu" prefix */ + if ((tmp = strrchr(tmp, '/')) && STRPREFIX(tmp + 1, "vcpu")) { + if (VIR_STRDUP(entry->alias, tmp + 1) < 0) + return -1; + } + } + + return 0; +} + + +static int +qemuMonitorQueryHotpluggableCpusEntrySort(const void *p1, + const void *p2) +{ + const struct qemuMonitorQueryHotpluggableCpusEntry *a = p1; + const struct qemuMonitorQueryHotpluggableCpusEntry *b = p2; + + if (a->socket_id != b->socket_id) + return a->socket_id - b->socket_id; + + if (a->core_id != b->core_id) + return a->core_id - b->core_id; + + return a->thread_id - b->thread_id; +} + + +int +qemuMonitorJSONGetHotpluggableCPUs(qemuMonitorPtr mon, + struct qemuMonitorQueryHotpluggableCpusEntry **entries, + size_t *nentries) +{ + struct qemuMonitorQueryHotpluggableCpusEntry *info = NULL; + ssize_t ninfo = 0; + int ret = -1; + size_t i; + virJSONValuePtr data; + virJSONValuePtr cmd; + virJSONValuePtr reply = NULL; + virJSONValuePtr vcpu; + + if (!(cmd = qemuMonitorJSONMakeCommand("query-hotpluggable-cpus", NULL))) + return -1; + + if (qemuMonitorJSONCommand(mon, cmd, &reply) < 0) + goto cleanup; + + if (qemuMonitorJSONCheckError(cmd, reply) < 0) + goto cleanup; + + data = virJSONValueObjectGet(reply, "return"); + + if ((ninfo = virJSONValueArraySize(data)) < 0) { + virReportError(VIR_ERR_INTERNAL_ERROR, "%s", + _("query-hotpluggable-cpus reply is not an array")); + goto cleanup; + } + + if (VIR_ALLOC_N(info, ninfo) < 0) + goto cleanup; + + for (i = 0; i < ninfo; i++) { + vcpu = virJSONValueArrayGet(data, i); + + if (qemuMonitorJSONProcessHotpluggableCpusReply(vcpu, info + i) < 0) + goto cleanup; + } + + qsort(info, ninfo, sizeof(*info), qemuMonitorQueryHotpluggableCpusEntrySort); + + VIR_STEAL_PTR(*entries, info); + *nentries = ninfo; + ret = 0; + + cleanup: + qemuMonitorQueryHotpluggableCpusFree(info, ninfo); + virJSONValueFree(cmd); + virJSONValueFree(reply); + return ret; +} diff --git a/src/qemu/qemu_monitor_json.h b/src/qemu/qemu_monitor_json.h index 90fe697..0eab96f 100644 --- a/src/qemu/qemu_monitor_json.h +++ b/src/qemu/qemu_monitor_json.h @@ -494,4 +494,9 @@ int qemuMonitorJSONMigrateStartPostCopy(qemuMonitorPtr mon) int qemuMonitorJSONGetRTCTime(qemuMonitorPtr mon, struct tm *tm) ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2); + +int qemuMonitorJSONGetHotpluggableCPUs(qemuMonitorPtr mon, + struct qemuMonitorQueryHotpluggableCpusEntry **entries, + size_t *nentries) + ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2) ATTRIBUTE_NONNULL(3); #endif /* QEMU_MONITOR_JSON_H */ -- 2.8.2
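A small made-up example of the ordering contract established by the comparator above (only the relevant props shown):

/* query-hotpluggable-cpus may return e.g.
 *   (socket 1, core 0), (socket 0, core 1), (socket 0, core 0)
 * and after the qsort() the entries libvirt works with are
 *   (socket 0, core 0), (socket 0, core 1), (socket 1, core 0)
 * i.e. sorted by socket-id, then core-id, then thread-id.
 * Entries that are hotpluggable but not currently plugged in look the same
 * except that "qom-path" (and therefore the derived alias) is absent. */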

On 08/19/2016 10:38 AM, Peter Krempa wrote:
Add support for retrieving information regarding hotpluggable cpu units supported by qemu. Data returned by the command carries information needed to figure out the granularity of hotplug, the necessary cpu type name and the topology information.
Note that qemu doesn't specify any particular order of the entries thus it's necessary sort them by socket_id, core_id and thread_id to the order libvirt expects. ---
Notes: v2: - fixed alias string in comment - already ACKed
src/qemu/qemu_monitor.h | 16 ++++ src/qemu/qemu_monitor_json.c | 170 +++++++++++++++++++++++++++++++++++++++++++ src/qemu/qemu_monitor_json.h | 5 ++ 3 files changed, 191 insertions(+)
[...]
+ +/** + * [{ + * "props": { + * "core-id": 0, + * "thread-id": 0, + * "socket-id": 0 + * }, + * "vcpus-count": 1, + * "qom-path": "/machine/unattached/device[0]", + * "type": "qemu64-x86_64-cpu" + * }, + * {...} + * ] + */ +static int +qemuMonitorJSONProcessHotpluggableCpusReply(virJSONValuePtr vcpu, + struct qemuMonitorQueryHotpluggableCpusEntry *entry) +{ + virJSONValuePtr props; + const char *tmp; + + if (!(tmp = virJSONValueObjectGetString(vcpu, "type"))) { + virReportError(VIR_ERR_INTERNAL_ERROR, "%s", + _("query-hotpluggable-cpus didn't return device type")); + return -1; + } + + if (VIR_STRDUP(entry->type, tmp) < 0) + return -1; + + if (virJSONValueObjectGetNumberUint(vcpu, "vcpus-count", &entry->vcpus) < 0) { + virReportError(VIR_ERR_INTERNAL_ERROR, "%s", + _("query-hotpluggable-cpus didn't return vcpus-count")); + return -1; + } + + if (!(props = virJSONValueObjectGetObject(vcpu, "props"))) { + virReportError(VIR_ERR_INTERNAL_ERROR, "%s", + _("query-hotpluggable-cpus didn't return device props")); + return -1; + } + + entry->node_id = -1; + entry->socket_id = -1; + entry->core_id = -1; + entry->thread_id = -1; + + ignore_value(virJSONValueObjectGetNumberInt(props, "node-id", &entry->node_id)); + ignore_value(virJSONValueObjectGetNumberInt(props, "socket-id", &entry->socket_id)); + ignore_value(virJSONValueObjectGetNumberInt(props, "core-id", &entry->core_id)); + ignore_value(virJSONValueObjectGetNumberInt(props, "thread-id", &entry->thread_id)); + + if (entry->node_id == -1 && entry->socket_id == -1 && + entry->core_id == -1 && entry->thread_id == -1) { + virReportError(VIR_ERR_INTERNAL_ERROR, "%s", + _("query-hotpluggable-cpus entry doesn't report " + "topology information")); + return -1; + } + + /* qom path is not present unless the vCPU is online */ + if ((tmp = virJSONValueObjectGetString(vcpu, "qom-path"))) { + if (VIR_STRDUP(entry->qom_path, tmp) < 0) + return -1; + + /* alias is the part after last slash having a "vcpu" prefix */ + if ((tmp = strrchr(tmp, '/')) && STRPREFIX(tmp + 1, "vcpu")) { + if (VIR_STRDUP(entry->alias, tmp + 1) < 0) + return -1;
Hmm... Just trying to understand... The example above has:

    "qom-path": "/machine/unattached/device[0]",

So "/vcpuX" shows up when? IOW: How does an entry look that has that vcpu string in it?

Ah - I see a couple patches later in the tests: qom_path='/machine/peripheral/vcpu0'

I was also wondering why you extract this, but I see in later patches...

John
+ } + } + + return 0; +} + +
[...]
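To make the qom-path/alias relationship concrete, a short illustrative sketch (the peripheral path is the one from the test data mentioned above; it is not part of this patch):

/* Boot-time cpus sit under /machine/unattached and yield no alias:
 *     "/machine/unattached/device[0]"   -> entry->alias == NULL
 * A hotplugged vcpu device shows up under /machine/peripheral with its
 * device id as the last path component, which carries the "vcpu" prefix:
 *     "/machine/peripheral/vcpu0"       -> entry->alias == "vcpu0"
 * and per the qemu_monitor.h comment only entries with an alias can be
 * hot-unplugged later. */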

For hotplug purposes it's necessary to retrieve data using query-hotpluggable-cpus while the old query-cpus API report thread IDs and order of hotplug. This patch adds code that merges the data using a rather non-trivial algorithm and fills the data to the qemuMonitorCPUInfo structure for adding to appropriate place in the domain definition. --- Notes: v2: - fixed loop in the fallback info populator function - fixed debug message src/qemu/qemu_domain.c | 2 +- src/qemu/qemu_monitor.c | 197 ++++++++++++++++++++++++++++++++++++++++++++++-- src/qemu/qemu_monitor.h | 23 +++++- 3 files changed, 212 insertions(+), 10 deletions(-) diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c index c56dc75..45ded03 100644 --- a/src/qemu/qemu_domain.c +++ b/src/qemu/qemu_domain.c @@ -5804,7 +5804,7 @@ qemuDomainRefreshVcpuInfo(virQEMUDriverPtr driver, if (qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob) < 0) return -1; - rc = qemuMonitorGetCPUInfo(qemuDomainGetMonitor(vm), &info, maxvcpus); + rc = qemuMonitorGetCPUInfo(qemuDomainGetMonitor(vm), &info, maxvcpus, false); if (qemuDomainObjExitMonitor(driver, vm) < 0) goto cleanup; diff --git a/src/qemu/qemu_monitor.c b/src/qemu/qemu_monitor.c index f87f431..451786b 100644 --- a/src/qemu/qemu_monitor.c +++ b/src/qemu/qemu_monitor.c @@ -1656,13 +1656,36 @@ qemuMonitorSystemReset(qemuMonitorPtr mon) } +static void +qemuMonitorCPUInfoClear(qemuMonitorCPUInfoPtr cpus, + size_t ncpus) +{ + size_t i; + + for (i = 0; i < ncpus; i++) { + cpus[i].id = 0; + cpus[i].socket_id = -1; + cpus[i].core_id = -1; + cpus[i].thread_id = -1; + cpus[i].vcpus = 0; + cpus[i].tid = 0; + + VIR_FREE(cpus[i].qom_path); + VIR_FREE(cpus[i].alias); + VIR_FREE(cpus[i].type); + } +} + + void qemuMonitorCPUInfoFree(qemuMonitorCPUInfoPtr cpus, - size_t ncpus ATTRIBUTE_UNUSED) + size_t ncpus) { if (!cpus) return; + qemuMonitorCPUInfoClear(cpus, ncpus); + VIR_FREE(cpus); } @@ -1683,10 +1706,148 @@ qemuMonitorQueryCpusFree(struct qemuMonitorQueryCpusEntry *entries, /** + * Legacy approach doesn't allow out of order cpus, thus no complex matching + * algorithm is necessary */ +static void +qemuMonitorGetCPUInfoLegacy(struct qemuMonitorQueryCpusEntry *cpuentries, + size_t ncpuentries, + qemuMonitorCPUInfoPtr vcpus, + size_t maxvcpus) +{ + size_t i; + + for (i = 0; i < maxvcpus; i++) { + if (i < ncpuentries) + vcpus[i].tid = cpuentries[i].tid; + + /* for legacy hotplug to work we need to fake the vcpu count added by + * enabling a given vcpu */ + vcpus[i].vcpus = 1; + } +} + + +/** + * qemuMonitorGetCPUInfoHotplug: + * + * This function stitches together data retrieved via query-hotpluggable-cpus + * which returns entities on the hotpluggable level (which may describe more + * than one guest logical vcpu) with the output of query-cpus, having an entry + * per enabled guest logical vcpu. + * + * query-hotpluggable-cpus conveys following information: + * - topology information and number of logical vcpus this entry creates + * - device type name of the entry that needs to be used when hotplugging + * - qom path in qemu which can be used to map the entry against query-cpus + * + * query-cpus conveys following information: + * - thread id of a given guest logical vcpu + * - order in which the vcpus were inserted + * - qom path to allow mapping the two together + * + * The libvirt's internal structure has an entry for each possible (even + * disabled) guest vcpu. 
The purpose is to map the data together so that we are + * certain of the thread id mapping and the information required for vcpu + * hotplug. + * + * This function returns 0 on success and -1 on error, but does not report + * libvirt errors so that fallback approach can be used. + */ +static int +qemuMonitorGetCPUInfoHotplug(struct qemuMonitorQueryHotpluggableCpusEntry *hotplugvcpus, + size_t nhotplugvcpus, + struct qemuMonitorQueryCpusEntry *cpuentries, + size_t ncpuentries, + qemuMonitorCPUInfoPtr vcpus, + size_t maxvcpus) +{ + int order = 1; + size_t totalvcpus = 0; + size_t i; + size_t j; + + /* ensure that the total vcpu count reported by query-hotpluggable-cpus equals + * to the libvirt maximum cpu count */ + for (i = 0; i < nhotplugvcpus; i++) + totalvcpus += hotplugvcpus[i].vcpus; + + if (totalvcpus != maxvcpus) { + VIR_DEBUG("expected '%zu' total vcpus got '%zu'", maxvcpus, totalvcpus); + return -1; + } + + /* Note the order in which the hotpluggable entities are inserted by + * matching them to the query-cpus entries */ + for (i = 0; i < ncpuentries; i++) { + for (j = 0; j < nhotplugvcpus; j++) { + if (!cpuentries[i].qom_path || + !hotplugvcpus[j].qom_path || + !STRPREFIX(cpuentries[i].qom_path, hotplugvcpus[j].qom_path)) + continue; + + /* add ordering info for hotpluggable entries */ + if (hotplugvcpus[j].enable_id == 0) + hotplugvcpus[j].enable_id = order++; + + break; + } + } + + /* transfer appropriate data from the hotpluggable list to corresponding + * entries. the entries returned by qemu may in fact describe multiple + * logical vcpus in the guest */ + j = 0; + for (i = 0; i < nhotplugvcpus; i++) { + vcpus[j].socket_id = hotplugvcpus[i].socket_id; + vcpus[j].core_id = hotplugvcpus[i].core_id; + vcpus[j].thread_id = hotplugvcpus[i].thread_id; + vcpus[j].vcpus = hotplugvcpus[i].vcpus; + VIR_STEAL_PTR(vcpus[j].qom_path, hotplugvcpus[i].qom_path); + VIR_STEAL_PTR(vcpus[j].alias, hotplugvcpus[i].alias); + VIR_STEAL_PTR(vcpus[j].type, hotplugvcpus[i].type); + vcpus[j].id = hotplugvcpus[i].enable_id; + + /* skip over vcpu entries covered by this hotpluggable entry */ + j += hotplugvcpus[i].vcpus; + } + + /* match entries from query cpus to the output array taking into account + * multi-vcpu objects */ + for (j = 0; j < ncpuentries; j++) { + /* find the correct entry or beginning of group of entries */ + for (i = 0; i < maxvcpus; i++) { + if (cpuentries[j].qom_path && vcpus[i].qom_path && + STRPREFIX(cpuentries[j].qom_path, vcpus[i].qom_path)) + break; + } + + if (i == maxvcpus) { + VIR_DEBUG("too many query-cpus entries for a given " + "query-hotpluggable-cpus entry"); + return -1; + } + + if (vcpus[i].vcpus != 1) { + /* find a possibly empty vcpu thread for core granularity systems */ + for (; i < maxvcpus; i++) { + if (vcpus[i].tid == 0) + break; + } + } + + vcpus[i].tid = cpuentries[j].tid; + } + + return 0; +} + + +/** * qemuMonitorGetCPUInfo: * @mon: monitor * @cpus: pointer filled by array of qemuMonitorCPUInfo structures * @maxvcpus: total possible number of vcpus + * @hotplug: query data relevant for hotplug support * * Detects VCPU information. 
If qemu doesn't support or fails reporting * information this function will return success as other parts of libvirt @@ -1698,20 +1859,32 @@ qemuMonitorQueryCpusFree(struct qemuMonitorQueryCpusEntry *entries, int qemuMonitorGetCPUInfo(qemuMonitorPtr mon, qemuMonitorCPUInfoPtr *vcpus, - size_t maxvcpus) + size_t maxvcpus, + bool hotplug) { - qemuMonitorCPUInfoPtr info = NULL; + struct qemuMonitorQueryHotpluggableCpusEntry *hotplugcpus = NULL; + size_t nhotplugcpus = 0; struct qemuMonitorQueryCpusEntry *cpuentries = NULL; size_t ncpuentries = 0; - size_t i; int ret = -1; int rc; + qemuMonitorCPUInfoPtr info = NULL; - QEMU_CHECK_MONITOR(mon); + if (hotplug) + QEMU_CHECK_MONITOR_JSON(mon); + else + QEMU_CHECK_MONITOR(mon); if (VIR_ALLOC_N(info, maxvcpus) < 0) return -1; + /* initialize a few non-zero defaults */ + qemuMonitorCPUInfoClear(info, maxvcpus); + + if (hotplug && + (qemuMonitorJSONGetHotpluggableCPUs(mon, &hotplugcpus, &nhotplugcpus)) < 0) + goto cleanup; + if (mon->json) rc = qemuMonitorJSONQueryCPUs(mon, &cpuentries, &ncpuentries); else @@ -1726,15 +1899,23 @@ qemuMonitorGetCPUInfo(qemuMonitorPtr mon, goto cleanup; } - for (i = 0; i < ncpuentries; i++) - info[i].tid = cpuentries[i].tid; + if (!hotplugcpus || + qemuMonitorGetCPUInfoHotplug(hotplugcpus, nhotplugcpus, + cpuentries, ncpuentries, + info, maxvcpus) < 0) { + /* Fallback to the legacy algorithm. Hotplug paths will make sure that + * the apropriate data is present */ + qemuMonitorCPUInfoClear(info, maxvcpus); + qemuMonitorGetCPUInfoLegacy(cpuentries, ncpuentries, info, maxvcpus); + } VIR_STEAL_PTR(*vcpus, info); ret = 0; cleanup: - qemuMonitorCPUInfoFree(info, maxvcpus); + qemuMonitorQueryHotpluggableCpusFree(hotplugcpus, nhotplugcpus); qemuMonitorQueryCpusFree(cpuentries, ncpuentries); + qemuMonitorCPUInfoFree(info, maxvcpus); return ret; } diff --git a/src/qemu/qemu_monitor.h b/src/qemu/qemu_monitor.h index 58f8327..b838725 100644 --- a/src/qemu/qemu_monitor.h +++ b/src/qemu/qemu_monitor.h @@ -409,6 +409,9 @@ struct qemuMonitorQueryHotpluggableCpusEntry { int socket_id; int core_id; int thread_id; + + /* internal data */ + int enable_id; }; void qemuMonitorQueryHotpluggableCpusFree(struct qemuMonitorQueryHotpluggableCpusEntry *entries, size_t nentries); @@ -416,6 +419,23 @@ void qemuMonitorQueryHotpluggableCpusFree(struct qemuMonitorQueryHotpluggableCpu struct _qemuMonitorCPUInfo { pid_t tid; + int id; /* order of enabling of the given cpu */ + + /* topology info for hotplug purposes. Hotplug of given vcpu impossible if + * all entries are -1 */ + int socket_id; + int core_id; + int thread_id; + unsigned int vcpus; /* number of vcpus added if given entry is hotplugged */ + + /* name of the qemu type to add in case of hotplug */ + char *type; + + /* alias of an hotpluggable entry. Entries with alias can be hot-unplugged */ + char *alias; + + /* internal for use in the matching code */ + char *qom_path; }; typedef struct _qemuMonitorCPUInfo qemuMonitorCPUInfo; typedef qemuMonitorCPUInfo *qemuMonitorCPUInfoPtr; @@ -424,7 +444,8 @@ void qemuMonitorCPUInfoFree(qemuMonitorCPUInfoPtr list, size_t nitems); int qemuMonitorGetCPUInfo(qemuMonitorPtr mon, qemuMonitorCPUInfoPtr *vcpus, - size_t maxvcpus); + size_t maxvcpus, + bool hotplug); int qemuMonitorGetVirtType(qemuMonitorPtr mon, virDomainVirtType *virtType); -- 2.8.2

On 08/19/2016 10:38 AM, Peter Krempa wrote:
For hotplug purposes it's necessary to retrieve data using query-hotpluggable-cpus while the old query-cpus API reports thread IDs and the order of hotplug.
This patch adds code that merges the data using a rather non-trivial algorithm and fills it into the qemuMonitorCPUInfo structure so it can be added to the appropriate place in the domain definition. ---
Notes: v2: - fixed loop in the fallback info populator function - fixed debug message
src/qemu/qemu_domain.c | 2 +- src/qemu/qemu_monitor.c | 197 ++++++++++++++++++++++++++++++++++++++++++++++-- src/qemu/qemu_monitor.h | 23 +++++- 3 files changed, 212 insertions(+), 10 deletions(-)
diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c index c56dc75..45ded03 100644 --- a/src/qemu/qemu_domain.c +++ b/src/qemu/qemu_domain.c @@ -5804,7 +5804,7 @@ qemuDomainRefreshVcpuInfo(virQEMUDriverPtr driver, if (qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob) < 0) return -1;
- rc = qemuMonitorGetCPUInfo(qemuDomainGetMonitor(vm), &info, maxvcpus); + rc = qemuMonitorGetCPUInfo(qemuDomainGetMonitor(vm), &info, maxvcpus, false);
if (qemuDomainObjExitMonitor(driver, vm) < 0) goto cleanup; diff --git a/src/qemu/qemu_monitor.c b/src/qemu/qemu_monitor.c index f87f431..451786b 100644 --- a/src/qemu/qemu_monitor.c +++ b/src/qemu/qemu_monitor.c @@ -1656,13 +1656,36 @@ qemuMonitorSystemReset(qemuMonitorPtr mon) }
+static void +qemuMonitorCPUInfoClear(qemuMonitorCPUInfoPtr cpus, + size_t ncpus) +{ + size_t i; + + for (i = 0; i < ncpus; i++) { + cpus[i].id = 0; + cpus[i].socket_id = -1; + cpus[i].core_id = -1; + cpus[i].thread_id = -1; + cpus[i].vcpus = 0; + cpus[i].tid = 0; + + VIR_FREE(cpus[i].qom_path); + VIR_FREE(cpus[i].alias); + VIR_FREE(cpus[i].type); + } +} + + void qemuMonitorCPUInfoFree(qemuMonitorCPUInfoPtr cpus, - size_t ncpus ATTRIBUTE_UNUSED) + size_t ncpus) { if (!cpus) return;
+ qemuMonitorCPUInfoClear(cpus, ncpus); + VIR_FREE(cpus); }
@@ -1683,10 +1706,148 @@ qemuMonitorQueryCpusFree(struct qemuMonitorQueryCpusEntry *entries,
/** + * Legacy approach doesn't allow out of order cpus, thus no complex matching + * algorithm is necessary */ +static void +qemuMonitorGetCPUInfoLegacy(struct qemuMonitorQueryCpusEntry *cpuentries, + size_t ncpuentries, + qemuMonitorCPUInfoPtr vcpus, + size_t maxvcpus) +{ + size_t i; + + for (i = 0; i < maxvcpus; i++) { + if (i < ncpuentries) + vcpus[i].tid = cpuentries[i].tid; + + /* for legacy hotplug to work we need to fake the vcpu count added by + * enabling a given vcpu */ + vcpus[i].vcpus = 1; + } +} + + +/** + * qemuMonitorGetCPUInfoHotplug: + * + * This function stitches together data retrieved via query-hotpluggable-cpus + * which returns entities on the hotpluggable level (which may describe more + * than one guest logical vcpu) with the output of query-cpus, having an entry + * per enabled guest logical vcpu. + * + * query-hotpluggable-cpus conveys following information: + * - topology information and number of logical vcpus this entry creates + * - device type name of the entry that needs to be used when hotplugging + * - qom path in qemu which can be used to map the entry against query-cpus + * + * query-cpus conveys following information: + * - thread id of a given guest logical vcpu + * - order in which the vcpus were inserted + * - qom path to allow mapping the two together + * + * The libvirt's internal structure has an entry for each possible (even + * disabled) guest vcpu. The purpose is to map the data together so that we are + * certain of the thread id mapping and the information required for vcpu + * hotplug. + * + * This function returns 0 on success and -1 on error, but does not report + * libvirt errors so that fallback approach can be used. + */ +static int +qemuMonitorGetCPUInfoHotplug(struct qemuMonitorQueryHotpluggableCpusEntry *hotplugvcpus, + size_t nhotplugvcpus, + struct qemuMonitorQueryCpusEntry *cpuentries, + size_t ncpuentries, + qemuMonitorCPUInfoPtr vcpus, + size_t maxvcpus) +{ + int order = 1; + size_t totalvcpus = 0; + size_t i; + size_t j; + + /* ensure that the total vcpu count reported by query-hotpluggable-cpus equals + * to the libvirt maximum cpu count */ + for (i = 0; i < nhotplugvcpus; i++) + totalvcpus += hotplugvcpus[i].vcpus; + + if (totalvcpus != maxvcpus) { + VIR_DEBUG("expected '%zu' total vcpus got '%zu'", maxvcpus, totalvcpus); + return -1; + }
Still trying to come to grips with 'nhotplugvcpus', 'ncpuentries', and 'maxvcpus'
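Maybe some concrete numbers help; taking the ppc64 'basic' test data added later in this series (one 8-thread core enabled at boot, three possible cores, 24 vcpus total):

    nhotplugvcpus = 3    query-hotpluggable-cpus entries (core-id 0, 8, 16), vcpus-count 8 each
    ncpuentries   = 8    query-cpus entries, one per thread of the single enabled core
    maxvcpus      = 24   libvirt's maximum vcpu count; totalvcpus = 3 * 8 == 24, so the check above passes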
+ + /* Note the order in which the hotpluggable entities are inserted by + * matching them to the query-cpus entries */ + for (i = 0; i < ncpuentries; i++) { + for (j = 0; j < nhotplugvcpus; j++) { + if (!cpuentries[i].qom_path || + !hotplugvcpus[j].qom_path || + !STRPREFIX(cpuentries[i].qom_path, hotplugvcpus[j].qom_path)) + continue; + + /* add ordering info for hotpluggable entries */ + if (hotplugvcpus[j].enable_id == 0) + hotplugvcpus[j].enable_id = order++; + + break;
So enable_id always == 1 (or 0 I suppose) and order never gets beyond 2? Or am I missing something not obvious? I guess I thought you were trying to figure out the order of nhotplugvcpus so that you could fill in vcpus in the correct order, but that doesn't match the subsequent algorithm. John
+ } + } + + /* transfer appropriate data from the hotpluggable list to corresponding + * entries. the entries returned by qemu may in fact describe multiple + * logical vcpus in the guest */ + j = 0; + for (i = 0; i < nhotplugvcpus; i++) { + vcpus[j].socket_id = hotplugvcpus[i].socket_id; + vcpus[j].core_id = hotplugvcpus[i].core_id; + vcpus[j].thread_id = hotplugvcpus[i].thread_id; + vcpus[j].vcpus = hotplugvcpus[i].vcpus; + VIR_STEAL_PTR(vcpus[j].qom_path, hotplugvcpus[i].qom_path); + VIR_STEAL_PTR(vcpus[j].alias, hotplugvcpus[i].alias); + VIR_STEAL_PTR(vcpus[j].type, hotplugvcpus[i].type); + vcpus[j].id = hotplugvcpus[i].enable_id; + + /* skip over vcpu entries covered by this hotpluggable entry */ + j += hotplugvcpus[i].vcpus; + } + + /* match entries from query cpus to the output array taking into account + * multi-vcpu objects */ + for (j = 0; j < ncpuentries; j++) { + /* find the correct entry or beginning of group of entries */ + for (i = 0; i < maxvcpus; i++) { + if (cpuentries[j].qom_path && vcpus[i].qom_path && + STRPREFIX(cpuentries[j].qom_path, vcpus[i].qom_path)) + break; + } + + if (i == maxvcpus) { + VIR_DEBUG("too many query-cpus entries for a given " + "query-hotpluggable-cpus entry"); + return -1; + } + + if (vcpus[i].vcpus != 1) { + /* find a possibly empty vcpu thread for core granularity systems */ + for (; i < maxvcpus; i++) { + if (vcpus[i].tid == 0) + break; + } + } + + vcpus[i].tid = cpuentries[j].tid; + } + + return 0; +} + + +/** * qemuMonitorGetCPUInfo: * @mon: monitor * @cpus: pointer filled by array of qemuMonitorCPUInfo structures * @maxvcpus: total possible number of vcpus + * @hotplug: query data relevant for hotplug support * * Detects VCPU information. If qemu doesn't support or fails reporting * information this function will return success as other parts of libvirt @@ -1698,20 +1859,32 @@ qemuMonitorQueryCpusFree(struct qemuMonitorQueryCpusEntry *entries, int qemuMonitorGetCPUInfo(qemuMonitorPtr mon, qemuMonitorCPUInfoPtr *vcpus, - size_t maxvcpus) + size_t maxvcpus, + bool hotplug) { - qemuMonitorCPUInfoPtr info = NULL; + struct qemuMonitorQueryHotpluggableCpusEntry *hotplugcpus = NULL; + size_t nhotplugcpus = 0; struct qemuMonitorQueryCpusEntry *cpuentries = NULL; size_t ncpuentries = 0; - size_t i; int ret = -1; int rc; + qemuMonitorCPUInfoPtr info = NULL;
- QEMU_CHECK_MONITOR(mon); + if (hotplug) + QEMU_CHECK_MONITOR_JSON(mon); + else + QEMU_CHECK_MONITOR(mon);
if (VIR_ALLOC_N(info, maxvcpus) < 0) return -1;
+ /* initialize a few non-zero defaults */ + qemuMonitorCPUInfoClear(info, maxvcpus); + + if (hotplug && + (qemuMonitorJSONGetHotpluggableCPUs(mon, &hotplugcpus, &nhotplugcpus)) < 0) + goto cleanup; + if (mon->json) rc = qemuMonitorJSONQueryCPUs(mon, &cpuentries, &ncpuentries); else @@ -1726,15 +1899,23 @@ qemuMonitorGetCPUInfo(qemuMonitorPtr mon, goto cleanup; }
- for (i = 0; i < ncpuentries; i++) - info[i].tid = cpuentries[i].tid; + if (!hotplugcpus || + qemuMonitorGetCPUInfoHotplug(hotplugcpus, nhotplugcpus, + cpuentries, ncpuentries, + info, maxvcpus) < 0) { + /* Fallback to the legacy algorithm. Hotplug paths will make sure that + * the apropriate data is present */ + qemuMonitorCPUInfoClear(info, maxvcpus); + qemuMonitorGetCPUInfoLegacy(cpuentries, ncpuentries, info, maxvcpus); + }
VIR_STEAL_PTR(*vcpus, info); ret = 0;
cleanup: - qemuMonitorCPUInfoFree(info, maxvcpus); + qemuMonitorQueryHotpluggableCpusFree(hotplugcpus, nhotplugcpus); qemuMonitorQueryCpusFree(cpuentries, ncpuentries); + qemuMonitorCPUInfoFree(info, maxvcpus); return ret; }
diff --git a/src/qemu/qemu_monitor.h b/src/qemu/qemu_monitor.h index 58f8327..b838725 100644 --- a/src/qemu/qemu_monitor.h +++ b/src/qemu/qemu_monitor.h @@ -409,6 +409,9 @@ struct qemuMonitorQueryHotpluggableCpusEntry { int socket_id; int core_id; int thread_id; + + /* internal data */ + int enable_id; }; void qemuMonitorQueryHotpluggableCpusFree(struct qemuMonitorQueryHotpluggableCpusEntry *entries, size_t nentries); @@ -416,6 +419,23 @@ void qemuMonitorQueryHotpluggableCpusFree(struct qemuMonitorQueryHotpluggableCpu
struct _qemuMonitorCPUInfo { pid_t tid; + int id; /* order of enabling of the given cpu */ + + /* topology info for hotplug purposes. Hotplug of given vcpu impossible if + * all entries are -1 */ + int socket_id; + int core_id; + int thread_id; + unsigned int vcpus; /* number of vcpus added if given entry is hotplugged */ + + /* name of the qemu type to add in case of hotplug */ + char *type; + + /* alias of an hotpluggable entry. Entries with alias can be hot-unplugged */ + char *alias; + + /* internal for use in the matching code */ + char *qom_path; }; typedef struct _qemuMonitorCPUInfo qemuMonitorCPUInfo; typedef qemuMonitorCPUInfo *qemuMonitorCPUInfoPtr; @@ -424,7 +444,8 @@ void qemuMonitorCPUInfoFree(qemuMonitorCPUInfoPtr list, size_t nitems); int qemuMonitorGetCPUInfo(qemuMonitorPtr mon, qemuMonitorCPUInfoPtr *vcpus, - size_t maxvcpus); + size_t maxvcpus, + bool hotplug);
int qemuMonitorGetVirtType(qemuMonitorPtr mon, virDomainVirtType *virtType);

On 08/19/2016 02:33 PM, John Ferlan wrote:
[...]
+/** + * qemuMonitorGetCPUInfoHotplug: + * + * This function stitches together data retrieved via query-hotpluggable-cpus + * which returns entities on the hotpluggable level (which may describe more + * than one guest logical vcpu) with the output of query-cpus, having an entry + * per enabled guest logical vcpu. + * + * query-hotpluggable-cpus conveys following information: + * - topology information and number of logical vcpus this entry creates + * - device type name of the entry that needs to be used when hotplugging + * - qom path in qemu which can be used to map the entry against query-cpus + * + * query-cpus conveys following information: + * - thread id of a given guest logical vcpu + * - order in which the vcpus were inserted + * - qom path to allow mapping the two together + * + * The libvirt's internal structure has an entry for each possible (even + * disabled) guest vcpu. The purpose is to map the data together so that we are + * certain of the thread id mapping and the information required for vcpu + * hotplug. + * + * This function returns 0 on success and -1 on error, but does not report + * libvirt errors so that fallback approach can be used. + */ +static int +qemuMonitorGetCPUInfoHotplug(struct qemuMonitorQueryHotpluggableCpusEntry *hotplugvcpus, + size_t nhotplugvcpus, + struct qemuMonitorQueryCpusEntry *cpuentries, + size_t ncpuentries, + qemuMonitorCPUInfoPtr vcpus, + size_t maxvcpus) +{ + int order = 1; + size_t totalvcpus = 0; + size_t i; + size_t j; + + /* ensure that the total vcpu count reported by query-hotpluggable-cpus equals + * to the libvirt maximum cpu count */ + for (i = 0; i < nhotplugvcpus; i++) + totalvcpus += hotplugvcpus[i].vcpus; + + if (totalvcpus != maxvcpus) { + VIR_DEBUG("expected '%zu' total vcpus got '%zu'", maxvcpus, totalvcpus); + return -1; + }
Still trying to come to grips with 'nhotplugvcpus', 'ncpuentries', and 'maxvcpus'
+ + /* Note the order in which the hotpluggable entities are inserted by + * matching them to the query-cpus entries */ + for (i = 0; i < ncpuentries; i++) { + for (j = 0; j < nhotplugvcpus; j++) { + if (!cpuentries[i].qom_path || + !hotplugvcpus[j].qom_path || + !STRPREFIX(cpuentries[i].qom_path, hotplugvcpus[j].qom_path)) + continue; + + /* add ordering info for hotpluggable entries */ + if (hotplugvcpus[j].enable_id == 0) + hotplugvcpus[j].enable_id = order++; + + break;
So enable_id always == 1 (or 0 I suppose) and order never gets beyond 2? Or am I missing something not obvious?
UGH - face palm Friday... the break only exits the inner loop. John
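If I read the loops right, with the ppc64 'hotplug-1' data from later in this series it works out as:

    cpuentries[0..7]  (/machine/unattached/device[1]/thread[N])  first hit tags the core-id=0 entry with enable_id = 1
    cpuentries[8..15] (/machine/peripheral/vcpu0/thread[N])      first hit tags the core-id=8 entry with enable_id = 2
    core-id=16 entry  (never plugged, no qom-path)               enable_id stays 0

which matches the qemu-id='1' and qemu-id='2' values in the expected .data output.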
I guess I thought you were trying to figure out the order of nhotplugvcpus so that you could fill in vcpus in the correct order, but that doesn't match the subsequent algorithm.
John
+ } + } + [...]

On 08/19/2016 08:08 PM, Peter Krempa wrote:
For hotplug purposes it's necessary to retrieve data using query-hotpluggable-cpus while the old query-cpus API reports thread IDs and the order of hotplug.
This patch adds code that merges the data using a rather non-trivial algorithm and fills it into the qemuMonitorCPUInfo structure so it can be added to the appropriate place in the domain definition. ---
Notes: v2: - fixed loop in the fallback info populator function - fixed debug message
src/qemu/qemu_domain.c | 2 +- src/qemu/qemu_monitor.c | 197 ++++++++++++++++++++++++++++++++++++++++++++++-- src/qemu/qemu_monitor.h | 23 +++++- 3 files changed, 212 insertions(+), 10 deletions(-)
diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c index c56dc75..45ded03 100644 --- a/src/qemu/qemu_domain.c +++ b/src/qemu/qemu_domain.c @@ -5804,7 +5804,7 @@ qemuDomainRefreshVcpuInfo(virQEMUDriverPtr driver, if (qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob) < 0) return -1;
- rc = qemuMonitorGetCPUInfo(qemuDomainGetMonitor(vm), &info, maxvcpus); + rc = qemuMonitorGetCPUInfo(qemuDomainGetMonitor(vm), &info, maxvcpus, false);
if (qemuDomainObjExitMonitor(driver, vm) < 0) goto cleanup; diff --git a/src/qemu/qemu_monitor.c b/src/qemu/qemu_monitor.c index f87f431..451786b 100644 --- a/src/qemu/qemu_monitor.c +++ b/src/qemu/qemu_monitor.c @@ -1656,13 +1656,36 @@ qemuMonitorSystemReset(qemuMonitorPtr mon) }
+static void +qemuMonitorCPUInfoClear(qemuMonitorCPUInfoPtr cpus, + size_t ncpus) +{ + size_t i; + + for (i = 0; i < ncpus; i++) { + cpus[i].id = 0; + cpus[i].socket_id = -1; + cpus[i].core_id = -1; + cpus[i].thread_id = -1; + cpus[i].vcpus = 0; + cpus[i].tid = 0; + + VIR_FREE(cpus[i].qom_path); + VIR_FREE(cpus[i].alias); + VIR_FREE(cpus[i].type); + } +} + + void qemuMonitorCPUInfoFree(qemuMonitorCPUInfoPtr cpus, - size_t ncpus ATTRIBUTE_UNUSED) + size_t ncpus) { if (!cpus) return;
+ qemuMonitorCPUInfoClear(cpus, ncpus); + VIR_FREE(cpus); }
@@ -1683,10 +1706,148 @@ qemuMonitorQueryCpusFree(struct qemuMonitorQueryCpusEntry *entries,
/** + * Legacy approach doesn't allow out of order cpus, thus no complex matching + * algorithm is necessary */ +static void +qemuMonitorGetCPUInfoLegacy(struct qemuMonitorQueryCpusEntry *cpuentries, + size_t ncpuentries, + qemuMonitorCPUInfoPtr vcpus, + size_t maxvcpus) +{ + size_t i; + + for (i = 0; i < maxvcpus; i++) { + if (i < ncpuentries) + vcpus[i].tid = cpuentries[i].tid; + + /* for legacy hotplug to work we need to fake the vcpu count added by + * enabling a given vcpu */ + vcpus[i].vcpus = 1; + } +} + + +/** + * qemuMonitorGetCPUInfoHotplug: + * + * This function stitches together data retrieved via query-hotpluggable-cpus + * which returns entities on the hotpluggable level (which may describe more + * than one guest logical vcpu) with the output of query-cpus, having an entry + * per enabled guest logical vcpu. + * + * query-hotpluggable-cpus conveys following information: + * - topology information and number of logical vcpus this entry creates + * - device type name of the entry that needs to be used when hotplugging + * - qom path in qemu which can be used to map the entry against query-cpus + * + * query-cpus conveys following information: + * - thread id of a given guest logical vcpu + * - order in which the vcpus were inserted + * - qom path to allow mapping the two together + * + * The libvirt's internal structure has an entry for each possible (even + * disabled) guest vcpu. The purpose is to map the data together so that we are + * certain of the thread id mapping and the information required for vcpu + * hotplug. + * + * This function returns 0 on success and -1 on error, but does not report + * libvirt errors so that fallback approach can be used. + */ +static int +qemuMonitorGetCPUInfoHotplug(struct qemuMonitorQueryHotpluggableCpusEntry *hotplugvcpus, + size_t nhotplugvcpus, + struct qemuMonitorQueryCpusEntry *cpuentries, + size_t ncpuentries, + qemuMonitorCPUInfoPtr vcpus, + size_t maxvcpus) +{ + int order = 1; + size_t totalvcpus = 0; + size_t i; + size_t j; + + /* ensure that the total vcpu count reported by query-hotpluggable-cpus equals + * to the libvirt maximum cpu count */ + for (i = 0; i < nhotplugvcpus; i++) + totalvcpus += hotplugvcpus[i].vcpus; + + if (totalvcpus != maxvcpus) { + VIR_DEBUG("expected '%zu' total vcpus got '%zu'", maxvcpus, totalvcpus); + return -1; + } + + /* Note the order in which the hotpluggable entities are inserted by + * matching them to the query-cpus entries */ + for (i = 0; i < ncpuentries; i++) { + for (j = 0; j < nhotplugvcpus; j++) { + if (!cpuentries[i].qom_path || + !hotplugvcpus[j].qom_path || + !STRPREFIX(cpuentries[i].qom_path, hotplugvcpus[j].qom_path)) + continue; + + /* add ordering info for hotpluggable entries */ + if (hotplugvcpus[j].enable_id == 0) + hotplugvcpus[j].enable_id = order++; + + break; + } + } + + /* transfer appropriate data from the hotpluggable list to corresponding + * entries. 
the entries returned by qemu may in fact describe multiple + * logical vcpus in the guest */ + j = 0; + for (i = 0; i < nhotplugvcpus; i++) { + vcpus[j].socket_id = hotplugvcpus[i].socket_id; + vcpus[j].core_id = hotplugvcpus[i].core_id; + vcpus[j].thread_id = hotplugvcpus[i].thread_id; + vcpus[j].vcpus = hotplugvcpus[i].vcpus; + VIR_STEAL_PTR(vcpus[j].qom_path, hotplugvcpus[i].qom_path); + VIR_STEAL_PTR(vcpus[j].alias, hotplugvcpus[i].alias); + VIR_STEAL_PTR(vcpus[j].type, hotplugvcpus[i].type); + vcpus[j].id = hotplugvcpus[i].enable_id; + + /* skip over vcpu entries covered by this hotpluggable entry */ + j += hotplugvcpus[i].vcpus; + } + + /* match entries from query cpus to the output array taking into account + * multi-vcpu objects */ + for (j = 0; j < ncpuentries; j++) { + /* find the correct entry or beginning of group of entries */ + for (i = 0; i < maxvcpus; i++) { + if (cpuentries[j].qom_path && vcpus[i].qom_path && + STRPREFIX(cpuentries[j].qom_path, vcpus[i].qom_path))
Paths /machine/peripheral/vcpu10 (and so on) would all end up matching /machine/peripheral/vcpu1 (i = 1), so vcpu[10] onwards won't have a tid assigned and will eventually fail at qemuDomainValidateVcpuInfo() for vcpus beyond 10.
+ break; + } + + if (i == maxvcpus) { + VIR_DEBUG("too many query-cpus entries for a given " + "query-hotpluggable-cpus entry"); + return -1; + } + + if (vcpus[i].vcpus != 1) { + /* find a possibly empty vcpu thread for core granularity systems */ + for (; i < maxvcpus; i++) { + if (vcpus[i].tid == 0) + break; + } + } + + vcpus[i].tid = cpuentries[j].tid; + } + + return 0; +} + + +/** * qemuMonitorGetCPUInfo: * @mon: monitor * @cpus: pointer filled by array of qemuMonitorCPUInfo structures * @maxvcpus: total possible number of vcpus + * @hotplug: query data relevant for hotplug support * * Detects VCPU information. If qemu doesn't support or fails reporting * information this function will return success as other parts of libvirt @@ -1698,20 +1859,32 @@ qemuMonitorQueryCpusFree(struct qemuMonitorQueryCpusEntry *entries, int qemuMonitorGetCPUInfo(qemuMonitorPtr mon, qemuMonitorCPUInfoPtr *vcpus, - size_t maxvcpus) + size_t maxvcpus, + bool hotplug) { - qemuMonitorCPUInfoPtr info = NULL; + struct qemuMonitorQueryHotpluggableCpusEntry *hotplugcpus = NULL; + size_t nhotplugcpus = 0; struct qemuMonitorQueryCpusEntry *cpuentries = NULL; size_t ncpuentries = 0; - size_t i; int ret = -1; int rc; + qemuMonitorCPUInfoPtr info = NULL;
- QEMU_CHECK_MONITOR(mon); + if (hotplug) + QEMU_CHECK_MONITOR_JSON(mon); + else + QEMU_CHECK_MONITOR(mon);
if (VIR_ALLOC_N(info, maxvcpus) < 0) return -1;
+ /* initialize a few non-zero defaults */ + qemuMonitorCPUInfoClear(info, maxvcpus); + + if (hotplug && + (qemuMonitorJSONGetHotpluggableCPUs(mon, &hotplugcpus, &nhotplugcpus)) < 0) + goto cleanup; + if (mon->json) rc = qemuMonitorJSONQueryCPUs(mon, &cpuentries, &ncpuentries); else @@ -1726,15 +1899,23 @@ qemuMonitorGetCPUInfo(qemuMonitorPtr mon, goto cleanup; }
- for (i = 0; i < ncpuentries; i++) - info[i].tid = cpuentries[i].tid; + if (!hotplugcpus || + qemuMonitorGetCPUInfoHotplug(hotplugcpus, nhotplugcpus, + cpuentries, ncpuentries, + info, maxvcpus) < 0) { + /* Fallback to the legacy algorithm. Hotplug paths will make sure that + * the apropriate data is present */ + qemuMonitorCPUInfoClear(info, maxvcpus); + qemuMonitorGetCPUInfoLegacy(cpuentries, ncpuentries, info, maxvcpus); + }
VIR_STEAL_PTR(*vcpus, info); ret = 0;
cleanup: - qemuMonitorCPUInfoFree(info, maxvcpus); + qemuMonitorQueryHotpluggableCpusFree(hotplugcpus, nhotplugcpus); qemuMonitorQueryCpusFree(cpuentries, ncpuentries); + qemuMonitorCPUInfoFree(info, maxvcpus); return ret; }
diff --git a/src/qemu/qemu_monitor.h b/src/qemu/qemu_monitor.h index 58f8327..b838725 100644 --- a/src/qemu/qemu_monitor.h +++ b/src/qemu/qemu_monitor.h @@ -409,6 +409,9 @@ struct qemuMonitorQueryHotpluggableCpusEntry { int socket_id; int core_id; int thread_id; + + /* internal data */ + int enable_id; }; void qemuMonitorQueryHotpluggableCpusFree(struct qemuMonitorQueryHotpluggableCpusEntry *entries, size_t nentries); @@ -416,6 +419,23 @@ void qemuMonitorQueryHotpluggableCpusFree(struct qemuMonitorQueryHotpluggableCpu
struct _qemuMonitorCPUInfo { pid_t tid; + int id; /* order of enabling of the given cpu */ + + /* topology info for hotplug purposes. Hotplug of given vcpu impossible if + * all entries are -1 */ + int socket_id; + int core_id; + int thread_id; + unsigned int vcpus; /* number of vcpus added if given entry is hotplugged */ + + /* name of the qemu type to add in case of hotplug */ + char *type; + + /* alias of an hotpluggable entry. Entries with alias can be hot-unplugged */ + char *alias; + + /* internal for use in the matching code */ + char *qom_path; }; typedef struct _qemuMonitorCPUInfo qemuMonitorCPUInfo; typedef qemuMonitorCPUInfo *qemuMonitorCPUInfoPtr; @@ -424,7 +444,8 @@ void qemuMonitorCPUInfoFree(qemuMonitorCPUInfoPtr list, size_t nitems); int qemuMonitorGetCPUInfo(qemuMonitorPtr mon, qemuMonitorCPUInfoPtr *vcpus, - size_t maxvcpus); + size_t maxvcpus, + bool hotplug);
int qemuMonitorGetVirtType(qemuMonitorPtr mon, virDomainVirtType *virtType);

On Tue, Aug 23, 2016 at 17:54:51 +0530, Shivaprasad G Bhat wrote:
On 08/19/2016 08:08 PM, Peter Krempa wrote:
[...]
+ /* match entries from query cpus to the output array taking into account + * multi-vcpu objects */ + for (j = 0; j < ncpuentries; j++) { + /* find the correct entry or beginning of group of entries */ + for (i = 0; i < maxvcpus; i++) { + if (cpuentries[j].qom_path && vcpus[i].qom_path && + STRPREFIX(cpuentries[j].qom_path, vcpus[i].qom_path))
Paths /machine/peripheral/vcpu10 (and so on) would all end up matching /machine/peripheral/vcpu1 (i = 1), so vcpu[10] onwards won't have a tid assigned and will eventually fail at qemuDomainValidateVcpuInfo() for vcpus beyond 10.
Good catch. I've fixed it in v3 and added a test case. Peter
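For the archive, one way to tighten the prefix check so that ".../vcpu10" can no longer pair with the ".../vcpu1" entry is to require the prefix to be followed by the end of the string or another path separator. This is only a sketch to illustrate the failure mode, not necessarily the exact fix that went into v3:

#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* accept entryPath as a prefix of cpuPath only on a path component boundary */
static bool
qomPathMatches(const char *cpuPath, const char *entryPath)
{
    size_t len = strlen(entryPath);

    if (strncmp(cpuPath, entryPath, len) != 0)
        return false;

    return cpuPath[len] == '\0' || cpuPath[len] == '/';
}

int main(void)
{
    assert(qomPathMatches("/machine/peripheral/vcpu1/thread[0]",
                          "/machine/peripheral/vcpu1"));
    assert(!qomPathMatches("/machine/peripheral/vcpu10/thread[0]",
                           "/machine/peripheral/vcpu1"));
    return 0;
}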

As the combination algorithm is rather complex and ugly it's necessary to make sure it works properly. Add test suite infrastructure for testing it along with a basic test based on x86_64 platform. --- Notes: v2: - shortened very long line - already ACKed ...nitorjson-cpuinfo-x86-basic-pluggable-cpus.json | 50 ++++++++ ...orjson-cpuinfo-x86-basic-pluggable-hotplug.json | 82 +++++++++++++ ...emumonitorjson-cpuinfo-x86-basic-pluggable.data | 39 ++++++ tests/qemumonitorjsontest.c | 134 +++++++++++++++++++++ 4 files changed, 305 insertions(+) create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-x86-basic-pluggable-cpus.json create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-x86-basic-pluggable-hotplug.json create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-x86-basic-pluggable.data diff --git a/tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-x86-basic-pluggable-cpus.json b/tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-x86-basic-pluggable-cpus.json new file mode 100644 index 0000000..7a49731 --- /dev/null +++ b/tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-x86-basic-pluggable-cpus.json @@ -0,0 +1,50 @@ +{ + "return": [ + { + "arch": "x86", + "current": true, + "CPU": 0, + "qom_path": "/machine/unattached/device[0]", + "pc": -2130415978, + "halted": true, + "thread_id": 518291 + }, + { + "arch": "x86", + "current": false, + "CPU": 1, + "qom_path": "/machine/unattached/device[2]", + "pc": -2130415978, + "halted": true, + "thread_id": 518292 + }, + { + "arch": "x86", + "current": false, + "CPU": 2, + "qom_path": "/machine/unattached/device[3]", + "pc": -2130415978, + "halted": true, + "thread_id": 518294 + }, + { + "arch": "x86", + "current": false, + "CPU": 3, + "qom_path": "/machine/unattached/device[4]", + "pc": -2130415978, + "halted": true, + "thread_id": 518295 + }, + { + "arch": "x86", + "current": false, + "CPU": 4, + "qom_path": "/machine/unattached/device[5]", + "pc": -2130415978, + "halted": true, + "thread_id": 518296 + } + ], + "id": "libvirt-22" +} diff --git a/tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-x86-basic-pluggable-hotplug.json b/tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-x86-basic-pluggable-hotplug.json new file mode 100644 index 0000000..3f35018 --- /dev/null +++ b/tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-x86-basic-pluggable-hotplug.json @@ -0,0 +1,82 @@ +{ + "return": [ + { + "props": { + "core-id": 1, + "thread-id": 1, + "socket-id": 1 + }, + "vcpus-count": 1, + "type": "qemu64-x86_64-cpu" + }, + { + "props": { + "core-id": 1, + "thread-id": 0, + "socket-id": 1 + }, + "vcpus-count": 1, + "type": "qemu64-x86_64-cpu" + }, + { + "props": { + "core-id": 0, + "thread-id": 1, + "socket-id": 1 + }, + "vcpus-count": 1, + "type": "qemu64-x86_64-cpu" + }, + { + "props": { + "core-id": 0, + "thread-id": 0, + "socket-id": 1 + }, + "vcpus-count": 1, + "qom-path": "/machine/unattached/device[5]", + "type": "qemu64-x86_64-cpu" + }, + { + "props": { + "core-id": 1, + "thread-id": 1, + "socket-id": 0 + }, + "vcpus-count": 1, + "qom-path": "/machine/unattached/device[4]", + "type": "qemu64-x86_64-cpu" + }, + { + "props": { + "core-id": 1, + "thread-id": 0, + "socket-id": 0 + }, + "vcpus-count": 1, + "qom-path": "/machine/unattached/device[3]", + "type": "qemu64-x86_64-cpu" + }, + { + "props": { + "core-id": 0, + "thread-id": 1, + "socket-id": 0 + }, + "vcpus-count": 1, + "qom-path": "/machine/unattached/device[2]", + "type": "qemu64-x86_64-cpu" + }, + { + "props": { + "core-id": 0, + "thread-id": 0, + 
"socket-id": 0 + }, + "vcpus-count": 1, + "qom-path": "/machine/unattached/device[0]", + "type": "qemu64-x86_64-cpu" + } + ], + "id": "libvirt-23" +} diff --git a/tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-x86-basic-pluggable.data b/tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-x86-basic-pluggable.data new file mode 100644 index 0000000..a367a09 --- /dev/null +++ b/tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-x86-basic-pluggable.data @@ -0,0 +1,39 @@ +[vcpu libvirt-id='0'] + thread-id='518291' + qemu-id='1' + type='qemu64-x86_64-cpu' + qom_path='/machine/unattached/device[0]' + topology: socket='0' core='0' thread='0' vcpus='1' +[vcpu libvirt-id='1'] + thread-id='518292' + qemu-id='2' + type='qemu64-x86_64-cpu' + qom_path='/machine/unattached/device[2]' + topology: socket='0' core='0' thread='1' vcpus='1' +[vcpu libvirt-id='2'] + thread-id='518294' + qemu-id='3' + type='qemu64-x86_64-cpu' + qom_path='/machine/unattached/device[3]' + topology: socket='0' core='1' thread='0' vcpus='1' +[vcpu libvirt-id='3'] + thread-id='518295' + qemu-id='4' + type='qemu64-x86_64-cpu' + qom_path='/machine/unattached/device[4]' + topology: socket='0' core='1' thread='1' vcpus='1' +[vcpu libvirt-id='4'] + thread-id='518296' + qemu-id='5' + type='qemu64-x86_64-cpu' + qom_path='/machine/unattached/device[5]' + topology: socket='1' core='0' thread='0' vcpus='1' +[vcpu libvirt-id='5'] + type='qemu64-x86_64-cpu' + topology: socket='1' core='0' thread='1' vcpus='1' +[vcpu libvirt-id='6'] + type='qemu64-x86_64-cpu' + topology: socket='1' core='1' thread='0' vcpus='1' +[vcpu libvirt-id='7'] + type='qemu64-x86_64-cpu' + topology: socket='1' core='1' thread='1' vcpus='1' diff --git a/tests/qemumonitorjsontest.c b/tests/qemumonitorjsontest.c index 4be4618..6132f5d 100644 --- a/tests/qemumonitorjsontest.c +++ b/tests/qemumonitorjsontest.c @@ -2336,6 +2336,130 @@ testQemuMonitorJSONGetIOThreads(const void *data) return ret; } +struct testCPUInfoData { + const char *name; + size_t maxvcpus; + virDomainXMLOptionPtr xmlopt; +}; + + +static char * +testQemuMonitorCPUInfoFormat(qemuMonitorCPUInfoPtr vcpus, + size_t nvcpus) +{ + virBuffer buf = VIR_BUFFER_INITIALIZER; + qemuMonitorCPUInfoPtr vcpu; + size_t i; + + for (i = 0; i < nvcpus; i++) { + vcpu = vcpus + i; + + virBufferAsprintf(&buf, "[vcpu libvirt-id='%zu']\n", i); + virBufferAdjustIndent(&buf, 4); + + if (vcpu->tid) + virBufferAsprintf(&buf, "thread-id='%llu'\n", + (unsigned long long) vcpu->tid); + + if (vcpu->id != 0) + virBufferAsprintf(&buf, "qemu-id='%d'\n", vcpu->id); + + if (vcpu->type) + virBufferAsprintf(&buf, "type='%s'\n", vcpu->type); + + if (vcpu->alias) + virBufferAsprintf(&buf, "alias='%s'\n", vcpu->alias); + if (vcpu->qom_path) + virBufferAsprintf(&buf, "qom_path='%s'\n", vcpu->qom_path); + + if (vcpu->socket_id != -1 || vcpu->core_id != -1 || + vcpu->thread_id != -1 || vcpu->vcpus != 0) { + virBufferAddLit(&buf, "topology:"); + if (vcpu->socket_id != -1) + virBufferAsprintf(&buf, " socket='%d'", vcpu->socket_id); + if (vcpu->core_id != -1) + virBufferAsprintf(&buf, " core='%d'", vcpu->core_id); + if (vcpu->thread_id != -1) + virBufferAsprintf(&buf, " thread='%d'", vcpu->thread_id); + if (vcpu->vcpus != 0) + virBufferAsprintf(&buf, " vcpus='%u'", vcpu->vcpus); + virBufferAddLit(&buf, "\n"); + } + + virBufferAdjustIndent(&buf, -4); + } + + return virBufferContentAndReset(&buf); +} + + +static int +testQemuMonitorCPUInfo(const void *opaque) +{ + const struct testCPUInfoData *data = opaque; + qemuMonitorTestPtr test = 
qemuMonitorTestNewSimple(true, data->xmlopt); + char *queryCpusFile = NULL; + char *queryHotpluggableFile = NULL; + char *dataFile = NULL; + char *queryCpusStr = NULL; + char *queryHotpluggableStr = NULL; + char *actual = NULL; + qemuMonitorCPUInfoPtr vcpus = NULL; + int rc; + int ret = -1; + + if (!test) + return -1; + + if (virAsprintf(&queryCpusFile, + "%s/qemumonitorjsondata/qemumonitorjson-cpuinfo-%s-cpus.json", + abs_srcdir, data->name) < 0 || + virAsprintf(&queryHotpluggableFile, + "%s/qemumonitorjsondata/qemumonitorjson-cpuinfo-%s-hotplug.json", + abs_srcdir, data->name) < 0 || + virAsprintf(&dataFile, + "%s/qemumonitorjsondata/qemumonitorjson-cpuinfo-%s.data", + abs_srcdir, data->name) < 0) + goto cleanup; + + if (virTestLoadFile(queryCpusFile, &queryCpusStr) < 0) + goto cleanup; + + if (virTestLoadFile(queryHotpluggableFile, &queryHotpluggableStr) < 0) + goto cleanup; + + if (qemuMonitorTestAddItem(test, "query-hotpluggable-cpus", + queryHotpluggableStr) < 0) + goto cleanup; + + if (qemuMonitorTestAddItem(test, "query-cpus", queryCpusStr) < 0) + goto cleanup; + + rc = qemuMonitorGetCPUInfo(qemuMonitorTestGetMonitor(test), + &vcpus, data->maxvcpus, true); + + if (rc < 0) + goto cleanup; + + actual = testQemuMonitorCPUInfoFormat(vcpus, data->maxvcpus); + + if (virTestCompareToFile(actual, dataFile) < 0) + goto cleanup; + + ret = 0; + cleanup: + VIR_FREE(queryCpusFile); + VIR_FREE(queryHotpluggableFile); + VIR_FREE(dataFile); + VIR_FREE(queryCpusStr); + VIR_FREE(queryHotpluggableStr); + VIR_FREE(actual); + qemuMonitorCPUInfoFree(vcpus, data->maxvcpus); + qemuMonitorTestFree(test); + return ret; +} + + static int mymain(void) { @@ -2378,6 +2502,14 @@ mymain(void) ret = -1; \ } while (0) +#define DO_TEST_CPU_INFO(name, maxvcpus) \ + do { \ + struct testCPUInfoData data = {name, maxvcpus, driver.xmlopt}; \ + if (virTestRun("GetCPUInfo(" name ")", testQemuMonitorCPUInfo, \ + &data) < 0) \ + ret = -1; \ + } while (0) + DO_TEST(GetStatus); DO_TEST(GetVersion); DO_TEST(GetMachines); @@ -2452,6 +2584,8 @@ mymain(void) DO_TEST_CPU_DATA("full"); DO_TEST_CPU_DATA("ecx"); + DO_TEST_CPU_INFO("x86-basic-pluggable", 8); + qemuTestDriverFree(&driver); return (ret == 0) ? EXIT_SUCCESS : EXIT_FAILURE; -- 2.8.2
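For anyone adding further cases: each DO_TEST_CPU_INFO("name", maxvcpus) entry loads three files from tests/qemumonitorjsondata/ (the names come from the virAsprintf() calls above):

    qemumonitorjson-cpuinfo-<name>-cpus.json      canned reply returned for query-cpus
    qemumonitorjson-cpuinfo-<name>-hotplug.json   canned reply returned for query-hotpluggable-cpus
    qemumonitorjson-cpuinfo-<name>.data           expected output of testQemuMonitorCPUInfoFormat()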

Power 8 platform's basic hotpluggable unit is a core rather than a thread for x86_64 family. This introduces most of the complexity of the matching code and thus needs to be tested. The test data contain data captured from in-order cpu hotplug and unplug operations. --- Notes: v2: - dropped test 3 as it was identical to test 1 - did not change alias in the data - already ACKed .../qemumonitorjson-cpuinfo-ppc64-basic-cpus.json | 77 +++++++ ...emumonitorjson-cpuinfo-ppc64-basic-hotplug.json | 27 +++ .../qemumonitorjson-cpuinfo-ppc64-basic.data | 40 ++++ ...mumonitorjson-cpuinfo-ppc64-hotplug-1-cpus.json | 149 ++++++++++++++ ...onitorjson-cpuinfo-ppc64-hotplug-1-hotplug.json | 28 +++ .../qemumonitorjson-cpuinfo-ppc64-hotplug-1.data | 51 +++++ ...mumonitorjson-cpuinfo-ppc64-hotplug-2-cpus.json | 221 +++++++++++++++++++++ ...onitorjson-cpuinfo-ppc64-hotplug-2-hotplug.json | 29 +++ .../qemumonitorjson-cpuinfo-ppc64-hotplug-2.data | 62 ++++++ tests/qemumonitorjsontest.c | 4 + 10 files changed, 688 insertions(+) create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-basic-cpus.json create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-basic-hotplug.json create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-basic.data create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-hotplug-1-cpus.json create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-hotplug-1-hotplug.json create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-hotplug-1.data create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-hotplug-2-cpus.json create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-hotplug-2-hotplug.json create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-hotplug-2.data diff --git a/tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-basic-cpus.json b/tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-basic-cpus.json new file mode 100644 index 0000000..27a3d8b --- /dev/null +++ b/tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-basic-cpus.json @@ -0,0 +1,77 @@ +{ + "return": [ + { + "arch": "ppc", + "current": true, + "CPU": 0, + "nip": -4611686018426772172, + "qom_path": "/machine/unattached/device[1]/thread[0]", + "halted": false, + "thread_id": 21925 + }, + { + "arch": "ppc", + "current": false, + "CPU": 1, + "nip": -4611686018426772172, + "qom_path": "/machine/unattached/device[1]/thread[1]", + "halted": false, + "thread_id": 21926 + }, + { + "arch": "ppc", + "current": false, + "CPU": 2, + "nip": -4611686018422360608, + "qom_path": "/machine/unattached/device[1]/thread[2]", + "halted": false, + "thread_id": 21927 + }, + { + "arch": "ppc", + "current": false, + "CPU": 3, + "nip": -4611686018426772172, + "qom_path": "/machine/unattached/device[1]/thread[3]", + "halted": false, + "thread_id": 21928 + }, + { + "arch": "ppc", + "current": false, + "CPU": 4, + "nip": -4611686018426772172, + "qom_path": "/machine/unattached/device[1]/thread[4]", + "halted": false, + "thread_id": 21930 + }, + { + "arch": "ppc", + "current": false, + "CPU": 5, + "nip": -4611686018426772172, + "qom_path": "/machine/unattached/device[1]/thread[5]", + "halted": false, + "thread_id": 21931 + }, + { + "arch": "ppc", + "current": false, + "CPU": 6, + "nip": -4611686018426772172, + "qom_path": "/machine/unattached/device[1]/thread[6]", + "halted": false, + "thread_id": 21932 + }, + { + "arch": "ppc", + "current": false, + "CPU": 7, + "nip": 
-4611686018426772172, + "qom_path": "/machine/unattached/device[1]/thread[7]", + "halted": false, + "thread_id": 21933 + } + ], + "id": "libvirt-12" +} diff --git a/tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-basic-hotplug.json b/tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-basic-hotplug.json new file mode 100644 index 0000000..513317b --- /dev/null +++ b/tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-basic-hotplug.json @@ -0,0 +1,27 @@ +{ + "return": [ + { + "props": { + "core-id": 16 + }, + "vcpus-count": 8, + "type": "host-spapr-cpu-core" + }, + { + "props": { + "core-id": 8 + }, + "vcpus-count": 8, + "type": "host-spapr-cpu-core" + }, + { + "props": { + "core-id": 0 + }, + "vcpus-count": 8, + "qom-path": "/machine/unattached/device[1]", + "type": "host-spapr-cpu-core" + } + ], + "id": "libvirt-11" +} diff --git a/tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-basic.data b/tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-basic.data new file mode 100644 index 0000000..9fc8148 --- /dev/null +++ b/tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-basic.data @@ -0,0 +1,40 @@ +[vcpu libvirt-id='0'] + thread-id='21925' + qemu-id='1' + type='host-spapr-cpu-core' + qom_path='/machine/unattached/device[1]' + topology: core='0' vcpus='8' +[vcpu libvirt-id='1'] + thread-id='21926' +[vcpu libvirt-id='2'] + thread-id='21927' +[vcpu libvirt-id='3'] + thread-id='21928' +[vcpu libvirt-id='4'] + thread-id='21930' +[vcpu libvirt-id='5'] + thread-id='21931' +[vcpu libvirt-id='6'] + thread-id='21932' +[vcpu libvirt-id='7'] + thread-id='21933' +[vcpu libvirt-id='8'] + type='host-spapr-cpu-core' + topology: core='8' vcpus='8' +[vcpu libvirt-id='9'] +[vcpu libvirt-id='10'] +[vcpu libvirt-id='11'] +[vcpu libvirt-id='12'] +[vcpu libvirt-id='13'] +[vcpu libvirt-id='14'] +[vcpu libvirt-id='15'] +[vcpu libvirt-id='16'] + type='host-spapr-cpu-core' + topology: core='16' vcpus='8' +[vcpu libvirt-id='17'] +[vcpu libvirt-id='18'] +[vcpu libvirt-id='19'] +[vcpu libvirt-id='20'] +[vcpu libvirt-id='21'] +[vcpu libvirt-id='22'] +[vcpu libvirt-id='23'] diff --git a/tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-hotplug-1-cpus.json b/tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-hotplug-1-cpus.json new file mode 100644 index 0000000..7771cbc --- /dev/null +++ b/tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-hotplug-1-cpus.json @@ -0,0 +1,149 @@ +{ + "return": [ + { + "arch": "ppc", + "current": true, + "CPU": 0, + "nip": -4611686018426772172, + "qom_path": "/machine/unattached/device[1]/thread[0]", + "halted": false, + "thread_id": 21925 + }, + { + "arch": "ppc", + "current": false, + "CPU": 1, + "nip": -4611686018426772172, + "qom_path": "/machine/unattached/device[1]/thread[1]", + "halted": false, + "thread_id": 21926 + }, + { + "arch": "ppc", + "current": false, + "CPU": 2, + "nip": -4611686018426772172, + "qom_path": "/machine/unattached/device[1]/thread[2]", + "halted": false, + "thread_id": 21927 + }, + { + "arch": "ppc", + "current": false, + "CPU": 3, + "nip": -4611686018426772172, + "qom_path": "/machine/unattached/device[1]/thread[3]", + "halted": false, + "thread_id": 21928 + }, + { + "arch": "ppc", + "current": false, + "CPU": 4, + "nip": -4611686018426772172, + "qom_path": "/machine/unattached/device[1]/thread[4]", + "halted": false, + "thread_id": 21930 + }, + { + "arch": "ppc", + "current": false, + "CPU": 5, + "nip": -4611686018426772172, + "qom_path": "/machine/unattached/device[1]/thread[5]", + "halted": false, + "thread_id": 
21931 + }, + { + "arch": "ppc", + "current": false, + "CPU": 6, + "nip": -4611686018426772172, + "qom_path": "/machine/unattached/device[1]/thread[6]", + "halted": false, + "thread_id": 21932 + }, + { + "arch": "ppc", + "current": false, + "CPU": 7, + "nip": -4611686018426772172, + "qom_path": "/machine/unattached/device[1]/thread[7]", + "halted": false, + "thread_id": 21933 + }, + { + "arch": "ppc", + "current": false, + "CPU": 8, + "nip": -4611686018426772172, + "qom_path": "/machine/peripheral/vcpu0/thread[0]", + "halted": false, + "thread_id": 22131 + }, + { + "arch": "ppc", + "current": false, + "CPU": 9, + "nip": -4611686018426772172, + "qom_path": "/machine/peripheral/vcpu0/thread[1]", + "halted": false, + "thread_id": 22132 + }, + { + "arch": "ppc", + "current": false, + "CPU": 10, + "nip": -4611686018426772172, + "qom_path": "/machine/peripheral/vcpu0/thread[2]", + "halted": false, + "thread_id": 22133 + }, + { + "arch": "ppc", + "current": false, + "CPU": 11, + "nip": -4611686018426772172, + "qom_path": "/machine/peripheral/vcpu0/thread[3]", + "halted": false, + "thread_id": 22134 + }, + { + "arch": "ppc", + "current": false, + "CPU": 12, + "nip": -4611686018426772172, + "qom_path": "/machine/peripheral/vcpu0/thread[4]", + "halted": false, + "thread_id": 22135 + }, + { + "arch": "ppc", + "current": false, + "CPU": 13, + "nip": -4611686018426772172, + "qom_path": "/machine/peripheral/vcpu0/thread[5]", + "halted": false, + "thread_id": 22136 + }, + { + "arch": "ppc", + "current": false, + "CPU": 14, + "nip": -4611686018426772172, + "qom_path": "/machine/peripheral/vcpu0/thread[6]", + "halted": false, + "thread_id": 22137 + }, + { + "arch": "ppc", + "current": false, + "CPU": 15, + "nip": -4611686018426772172, + "qom_path": "/machine/peripheral/vcpu0/thread[7]", + "halted": false, + "thread_id": 22138 + } + ], + "id": "libvirt-14" +} diff --git a/tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-hotplug-1-hotplug.json b/tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-hotplug-1-hotplug.json new file mode 100644 index 0000000..e7594c3 --- /dev/null +++ b/tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-hotplug-1-hotplug.json @@ -0,0 +1,28 @@ +{ + "return": [ + { + "props": { + "core-id": 16 + }, + "vcpus-count": 8, + "type": "host-spapr-cpu-core" + }, + { + "props": { + "core-id": 8 + }, + "vcpus-count": 8, + "qom-path": "/machine/peripheral/vcpu0", + "type": "host-spapr-cpu-core" + }, + { + "props": { + "core-id": 0 + }, + "vcpus-count": 8, + "qom-path": "/machine/unattached/device[1]", + "type": "host-spapr-cpu-core" + } + ], + "id": "libvirt-15" +} diff --git a/tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-hotplug-1.data b/tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-hotplug-1.data new file mode 100644 index 0000000..b0139b5 --- /dev/null +++ b/tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-hotplug-1.data @@ -0,0 +1,51 @@ +[vcpu libvirt-id='0'] + thread-id='21925' + qemu-id='1' + type='host-spapr-cpu-core' + qom_path='/machine/unattached/device[1]' + topology: core='0' vcpus='8' +[vcpu libvirt-id='1'] + thread-id='21926' +[vcpu libvirt-id='2'] + thread-id='21927' +[vcpu libvirt-id='3'] + thread-id='21928' +[vcpu libvirt-id='4'] + thread-id='21930' +[vcpu libvirt-id='5'] + thread-id='21931' +[vcpu libvirt-id='6'] + thread-id='21932' +[vcpu libvirt-id='7'] + thread-id='21933' +[vcpu libvirt-id='8'] + thread-id='22131' + qemu-id='2' + type='host-spapr-cpu-core' + alias='vcpu0' + qom_path='/machine/peripheral/vcpu0' + topology: 
core='8' vcpus='8' +[vcpu libvirt-id='9'] + thread-id='22132' +[vcpu libvirt-id='10'] + thread-id='22133' +[vcpu libvirt-id='11'] + thread-id='22134' +[vcpu libvirt-id='12'] + thread-id='22135' +[vcpu libvirt-id='13'] + thread-id='22136' +[vcpu libvirt-id='14'] + thread-id='22137' +[vcpu libvirt-id='15'] + thread-id='22138' +[vcpu libvirt-id='16'] + type='host-spapr-cpu-core' + topology: core='16' vcpus='8' +[vcpu libvirt-id='17'] +[vcpu libvirt-id='18'] +[vcpu libvirt-id='19'] +[vcpu libvirt-id='20'] +[vcpu libvirt-id='21'] +[vcpu libvirt-id='22'] +[vcpu libvirt-id='23'] diff --git a/tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-hotplug-2-cpus.json b/tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-hotplug-2-cpus.json new file mode 100644 index 0000000..b377b6a --- /dev/null +++ b/tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-hotplug-2-cpus.json @@ -0,0 +1,221 @@ +{ + "return": [ + { + "arch": "ppc", + "current": true, + "CPU": 0, + "nip": -4611686018426772172, + "qom_path": "/machine/unattached/device[1]/thread[0]", + "halted": false, + "thread_id": 21925 + }, + { + "arch": "ppc", + "current": false, + "CPU": 1, + "nip": -4611686018426772172, + "qom_path": "/machine/unattached/device[1]/thread[1]", + "halted": false, + "thread_id": 21926 + }, + { + "arch": "ppc", + "current": false, + "CPU": 2, + "nip": -4611686018422360576, + "qom_path": "/machine/unattached/device[1]/thread[2]", + "halted": false, + "thread_id": 21927 + }, + { + "arch": "ppc", + "current": false, + "CPU": 3, + "nip": -4611686018422360596, + "qom_path": "/machine/unattached/device[1]/thread[3]", + "halted": false, + "thread_id": 21928 + }, + { + "arch": "ppc", + "current": false, + "CPU": 4, + "nip": -4611686018426772172, + "qom_path": "/machine/unattached/device[1]/thread[4]", + "halted": false, + "thread_id": 21930 + }, + { + "arch": "ppc", + "current": false, + "CPU": 5, + "nip": -4611686018426772172, + "qom_path": "/machine/unattached/device[1]/thread[5]", + "halted": false, + "thread_id": 21931 + }, + { + "arch": "ppc", + "current": false, + "CPU": 6, + "nip": -4611686018426772172, + "qom_path": "/machine/unattached/device[1]/thread[6]", + "halted": false, + "thread_id": 21932 + }, + { + "arch": "ppc", + "current": false, + "CPU": 7, + "nip": -4611686018422360596, + "qom_path": "/machine/unattached/device[1]/thread[7]", + "halted": false, + "thread_id": 21933 + }, + { + "arch": "ppc", + "current": false, + "CPU": 8, + "nip": -4611686018426772172, + "qom_path": "/machine/peripheral/vcpu0/thread[0]", + "halted": false, + "thread_id": 22131 + }, + { + "arch": "ppc", + "current": false, + "CPU": 9, + "nip": -4611686018426772172, + "qom_path": "/machine/peripheral/vcpu0/thread[1]", + "halted": false, + "thread_id": 22132 + }, + { + "arch": "ppc", + "current": false, + "CPU": 10, + "nip": -4611686018426772172, + "qom_path": "/machine/peripheral/vcpu0/thread[2]", + "halted": false, + "thread_id": 22133 + }, + { + "arch": "ppc", + "current": false, + "CPU": 11, + "nip": -4611686018426772172, + "qom_path": "/machine/peripheral/vcpu0/thread[3]", + "halted": false, + "thread_id": 22134 + }, + { + "arch": "ppc", + "current": false, + "CPU": 12, + "nip": -4611686018426772172, + "qom_path": "/machine/peripheral/vcpu0/thread[4]", + "halted": false, + "thread_id": 22135 + }, + { + "arch": "ppc", + "current": false, + "CPU": 13, + "nip": -4611686018426772172, + "qom_path": "/machine/peripheral/vcpu0/thread[5]", + "halted": false, + "thread_id": 22136 + }, + { + "arch": "ppc", + "current": false, + "CPU": 
14, + "nip": -4611686018426772172, + "qom_path": "/machine/peripheral/vcpu0/thread[6]", + "halted": false, + "thread_id": 22137 + }, + { + "arch": "ppc", + "current": false, + "CPU": 15, + "nip": -4611686018426772172, + "qom_path": "/machine/peripheral/vcpu0/thread[7]", + "halted": false, + "thread_id": 22138 + }, + { + "arch": "ppc", + "current": false, + "CPU": 16, + "nip": -4611686018426772172, + "qom_path": "/machine/peripheral/vcpu1/thread[0]", + "halted": false, + "thread_id": 22223 + }, + { + "arch": "ppc", + "current": false, + "CPU": 17, + "nip": -4611686018426772172, + "qom_path": "/machine/peripheral/vcpu1/thread[1]", + "halted": false, + "thread_id": 22224 + }, + { + "arch": "ppc", + "current": false, + "CPU": 18, + "nip": -4611686018426772172, + "qom_path": "/machine/peripheral/vcpu1/thread[2]", + "halted": false, + "thread_id": 22225 + }, + { + "arch": "ppc", + "current": false, + "CPU": 19, + "nip": -4611686018426772172, + "qom_path": "/machine/peripheral/vcpu1/thread[3]", + "halted": false, + "thread_id": 22226 + }, + { + "arch": "ppc", + "current": false, + "CPU": 20, + "nip": -4611686018426772172, + "qom_path": "/machine/peripheral/vcpu1/thread[4]", + "halted": false, + "thread_id": 22227 + }, + { + "arch": "ppc", + "current": false, + "CPU": 21, + "nip": -4611686018426772172, + "qom_path": "/machine/peripheral/vcpu1/thread[5]", + "halted": false, + "thread_id": 22228 + }, + { + "arch": "ppc", + "current": false, + "CPU": 22, + "nip": -4611686018426772172, + "qom_path": "/machine/peripheral/vcpu1/thread[6]", + "halted": false, + "thread_id": 22229 + }, + { + "arch": "ppc", + "current": false, + "CPU": 23, + "nip": -4611686018426772172, + "qom_path": "/machine/peripheral/vcpu1/thread[7]", + "halted": false, + "thread_id": 22230 + } + ], + "id": "libvirt-17" +} diff --git a/tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-hotplug-2-hotplug.json b/tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-hotplug-2-hotplug.json new file mode 100644 index 0000000..7027531 --- /dev/null +++ b/tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-hotplug-2-hotplug.json @@ -0,0 +1,29 @@ +{ + "return": [ + { + "props": { + "core-id": 16 + }, + "vcpus-count": 8, + "qom-path": "/machine/peripheral/vcpu1", + "type": "host-spapr-cpu-core" + }, + { + "props": { + "core-id": 8 + }, + "vcpus-count": 8, + "qom-path": "/machine/peripheral/vcpu0", + "type": "host-spapr-cpu-core" + }, + { + "props": { + "core-id": 0 + }, + "vcpus-count": 8, + "qom-path": "/machine/unattached/device[1]", + "type": "host-spapr-cpu-core" + } + ], + "id": "libvirt-18" +} diff --git a/tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-hotplug-2.data b/tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-hotplug-2.data new file mode 100644 index 0000000..ea4b099 --- /dev/null +++ b/tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-hotplug-2.data @@ -0,0 +1,62 @@ +[vcpu libvirt-id='0'] + thread-id='21925' + qemu-id='1' + type='host-spapr-cpu-core' + qom_path='/machine/unattached/device[1]' + topology: core='0' vcpus='8' +[vcpu libvirt-id='1'] + thread-id='21926' +[vcpu libvirt-id='2'] + thread-id='21927' +[vcpu libvirt-id='3'] + thread-id='21928' +[vcpu libvirt-id='4'] + thread-id='21930' +[vcpu libvirt-id='5'] + thread-id='21931' +[vcpu libvirt-id='6'] + thread-id='21932' +[vcpu libvirt-id='7'] + thread-id='21933' +[vcpu libvirt-id='8'] + thread-id='22131' + qemu-id='2' + type='host-spapr-cpu-core' + alias='vcpu0' + qom_path='/machine/peripheral/vcpu0' + topology: core='8' vcpus='8' +[vcpu 
libvirt-id='9'] + thread-id='22132' +[vcpu libvirt-id='10'] + thread-id='22133' +[vcpu libvirt-id='11'] + thread-id='22134' +[vcpu libvirt-id='12'] + thread-id='22135' +[vcpu libvirt-id='13'] + thread-id='22136' +[vcpu libvirt-id='14'] + thread-id='22137' +[vcpu libvirt-id='15'] + thread-id='22138' +[vcpu libvirt-id='16'] + thread-id='22223' + qemu-id='3' + type='host-spapr-cpu-core' + alias='vcpu1' + qom_path='/machine/peripheral/vcpu1' + topology: core='16' vcpus='8' +[vcpu libvirt-id='17'] + thread-id='22224' +[vcpu libvirt-id='18'] + thread-id='22225' +[vcpu libvirt-id='19'] + thread-id='22226' +[vcpu libvirt-id='20'] + thread-id='22227' +[vcpu libvirt-id='21'] + thread-id='22228' +[vcpu libvirt-id='22'] + thread-id='22229' +[vcpu libvirt-id='23'] + thread-id='22230' diff --git a/tests/qemumonitorjsontest.c b/tests/qemumonitorjsontest.c index 6132f5d..e3ee74b 100644 --- a/tests/qemumonitorjsontest.c +++ b/tests/qemumonitorjsontest.c @@ -2586,6 +2586,10 @@ mymain(void) DO_TEST_CPU_INFO("x86-basic-pluggable", 8); + DO_TEST_CPU_INFO("ppc64-basic", 24); + DO_TEST_CPU_INFO("ppc64-hotplug-1", 24); + DO_TEST_CPU_INFO("ppc64-hotplug-2", 24); + qemuTestDriverFree(&driver); return (ret == 0) ? EXIT_SUCCESS : EXIT_FAILURE; -- 2.8.2

Test the algorithm that extracts the order in which the vcpu entries were plugged in on a sample of data created by plugging in vcpus arbitrarily. --- Notes: v2: - already acked ...mumonitorjson-cpuinfo-ppc64-hotplug-4-cpus.json | 221 +++++++++++++++++++++ ...onitorjson-cpuinfo-ppc64-hotplug-4-hotplug.json | 29 +++ .../qemumonitorjson-cpuinfo-ppc64-hotplug-4.data | 62 ++++++ tests/qemumonitorjsontest.c | 1 + 4 files changed, 313 insertions(+) create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-hotplug-4-cpus.json create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-hotplug-4-hotplug.json create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-hotplug-4.data diff --git a/tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-hotplug-4-cpus.json b/tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-hotplug-4-cpus.json new file mode 100644 index 0000000..bcb6eab --- /dev/null +++ b/tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-hotplug-4-cpus.json @@ -0,0 +1,221 @@ +{ + "return": [ + { + "arch": "ppc", + "current": true, + "CPU": 0, + "nip": -4611686018426772172, + "qom_path": "/machine/unattached/device[1]/thread[0]", + "halted": false, + "thread_id": 21925 + }, + { + "arch": "ppc", + "current": false, + "CPU": 1, + "nip": -4611686018426772172, + "qom_path": "/machine/unattached/device[1]/thread[1]", + "halted": false, + "thread_id": 21926 + }, + { + "arch": "ppc", + "current": false, + "CPU": 2, + "nip": -4611686018426772172, + "qom_path": "/machine/unattached/device[1]/thread[2]", + "halted": false, + "thread_id": 21927 + }, + { + "arch": "ppc", + "current": false, + "CPU": 3, + "nip": -4611686018426772172, + "qom_path": "/machine/unattached/device[1]/thread[3]", + "halted": false, + "thread_id": 21928 + }, + { + "arch": "ppc", + "current": false, + "CPU": 4, + "nip": -4611686018426772172, + "qom_path": "/machine/unattached/device[1]/thread[4]", + "halted": false, + "thread_id": 21930 + }, + { + "arch": "ppc", + "current": false, + "CPU": 5, + "nip": -4611686018426772172, + "qom_path": "/machine/unattached/device[1]/thread[5]", + "halted": false, + "thread_id": 21931 + }, + { + "arch": "ppc", + "current": false, + "CPU": 6, + "nip": -4611686018426772172, + "qom_path": "/machine/unattached/device[1]/thread[6]", + "halted": false, + "thread_id": 21932 + }, + { + "arch": "ppc", + "current": false, + "CPU": 7, + "nip": -4611686018426772172, + "qom_path": "/machine/unattached/device[1]/thread[7]", + "halted": false, + "thread_id": 21933 + }, + { + "arch": "ppc", + "current": false, + "CPU": 8, + "nip": -4611686018426772172, + "qom_path": "/machine/peripheral/vcpu1/thread[0]", + "halted": false, + "thread_id": 22741 + }, + { + "arch": "ppc", + "current": false, + "CPU": 9, + "nip": -4611686018426772172, + "qom_path": "/machine/peripheral/vcpu1/thread[1]", + "halted": false, + "thread_id": 22742 + }, + { + "arch": "ppc", + "current": false, + "CPU": 10, + "nip": -4611686018426772172, + "qom_path": "/machine/peripheral/vcpu1/thread[2]", + "halted": false, + "thread_id": 22743 + }, + { + "arch": "ppc", + "current": false, + "CPU": 11, + "nip": -4611686018419474700, + "qom_path": "/machine/peripheral/vcpu1/thread[3]", + "halted": false, + "thread_id": 22744 + }, + { + "arch": "ppc", + "current": false, + "CPU": 12, + "nip": -4611686018426772172, + "qom_path": "/machine/peripheral/vcpu1/thread[4]", + "halted": false, + "thread_id": 22745 + }, + { + "arch": "ppc", + "current": false, + "CPU": 13, + "nip": 
-4611686018426772172, + "qom_path": "/machine/peripheral/vcpu1/thread[5]", + "halted": false, + "thread_id": 22746 + }, + { + "arch": "ppc", + "current": false, + "CPU": 14, + "nip": -4611686018426772172, + "qom_path": "/machine/peripheral/vcpu1/thread[6]", + "halted": false, + "thread_id": 22747 + }, + { + "arch": "ppc", + "current": false, + "CPU": 15, + "nip": -4611686018426772172, + "qom_path": "/machine/peripheral/vcpu1/thread[7]", + "halted": false, + "thread_id": 22748 + }, + { + "arch": "ppc", + "current": false, + "CPU": 16, + "nip": 0, + "qom_path": "/machine/peripheral/vcpu0/thread[0]", + "halted": true, + "thread_id": 23170 + }, + { + "arch": "ppc", + "current": false, + "CPU": 17, + "nip": 0, + "qom_path": "/machine/peripheral/vcpu0/thread[1]", + "halted": true, + "thread_id": 23171 + }, + { + "arch": "ppc", + "current": false, + "CPU": 18, + "nip": 0, + "qom_path": "/machine/peripheral/vcpu0/thread[2]", + "halted": true, + "thread_id": 23172 + }, + { + "arch": "ppc", + "current": false, + "CPU": 19, + "nip": 0, + "qom_path": "/machine/peripheral/vcpu0/thread[3]", + "halted": true, + "thread_id": 23173 + }, + { + "arch": "ppc", + "current": false, + "CPU": 20, + "nip": 0, + "qom_path": "/machine/peripheral/vcpu0/thread[4]", + "halted": true, + "thread_id": 23174 + }, + { + "arch": "ppc", + "current": false, + "CPU": 21, + "nip": 0, + "qom_path": "/machine/peripheral/vcpu0/thread[5]", + "halted": true, + "thread_id": 23175 + }, + { + "arch": "ppc", + "current": false, + "CPU": 22, + "nip": 0, + "qom_path": "/machine/peripheral/vcpu0/thread[6]", + "halted": true, + "thread_id": 23176 + }, + { + "arch": "ppc", + "current": false, + "CPU": 23, + "nip": 0, + "qom_path": "/machine/peripheral/vcpu0/thread[7]", + "halted": true, + "thread_id": 23177 + } + ], + "id": "libvirt-37" +} diff --git a/tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-hotplug-4-hotplug.json b/tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-hotplug-4-hotplug.json new file mode 100644 index 0000000..ded2054 --- /dev/null +++ b/tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-hotplug-4-hotplug.json @@ -0,0 +1,29 @@ +{ + "return": [ + { + "props": { + "core-id": 16 + }, + "vcpus-count": 8, + "qom-path": "/machine/peripheral/vcpu1", + "type": "host-spapr-cpu-core" + }, + { + "props": { + "core-id": 8 + }, + "vcpus-count": 8, + "qom-path": "/machine/peripheral/vcpu0", + "type": "host-spapr-cpu-core" + }, + { + "props": { + "core-id": 0 + }, + "vcpus-count": 8, + "qom-path": "/machine/unattached/device[1]", + "type": "host-spapr-cpu-core" + } + ], + "id": "libvirt-38" +} diff --git a/tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-hotplug-4.data b/tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-hotplug-4.data new file mode 100644 index 0000000..22a425d --- /dev/null +++ b/tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-hotplug-4.data @@ -0,0 +1,62 @@ +[vcpu libvirt-id='0'] + thread-id='21925' + qemu-id='1' + type='host-spapr-cpu-core' + qom_path='/machine/unattached/device[1]' + topology: core='0' vcpus='8' +[vcpu libvirt-id='1'] + thread-id='21926' +[vcpu libvirt-id='2'] + thread-id='21927' +[vcpu libvirt-id='3'] + thread-id='21928' +[vcpu libvirt-id='4'] + thread-id='21930' +[vcpu libvirt-id='5'] + thread-id='21931' +[vcpu libvirt-id='6'] + thread-id='21932' +[vcpu libvirt-id='7'] + thread-id='21933' +[vcpu libvirt-id='8'] + thread-id='23170' + qemu-id='3' + type='host-spapr-cpu-core' + alias='vcpu0' + qom_path='/machine/peripheral/vcpu0' + topology: core='8' vcpus='8' 
+[vcpu libvirt-id='9'] + thread-id='23171' +[vcpu libvirt-id='10'] + thread-id='23172' +[vcpu libvirt-id='11'] + thread-id='23173' +[vcpu libvirt-id='12'] + thread-id='23174' +[vcpu libvirt-id='13'] + thread-id='23175' +[vcpu libvirt-id='14'] + thread-id='23176' +[vcpu libvirt-id='15'] + thread-id='23177' +[vcpu libvirt-id='16'] + thread-id='22741' + qemu-id='2' + type='host-spapr-cpu-core' + alias='vcpu1' + qom_path='/machine/peripheral/vcpu1' + topology: core='16' vcpus='8' +[vcpu libvirt-id='17'] + thread-id='22742' +[vcpu libvirt-id='18'] + thread-id='22743' +[vcpu libvirt-id='19'] + thread-id='22744' +[vcpu libvirt-id='20'] + thread-id='22745' +[vcpu libvirt-id='21'] + thread-id='22746' +[vcpu libvirt-id='22'] + thread-id='22747' +[vcpu libvirt-id='23'] + thread-id='22748' diff --git a/tests/qemumonitorjsontest.c b/tests/qemumonitorjsontest.c index e3ee74b..388e43f 100644 --- a/tests/qemumonitorjsontest.c +++ b/tests/qemumonitorjsontest.c @@ -2589,6 +2589,7 @@ mymain(void) DO_TEST_CPU_INFO("ppc64-basic", 24); DO_TEST_CPU_INFO("ppc64-hotplug-1", 24); DO_TEST_CPU_INFO("ppc64-hotplug-2", 24); + DO_TEST_CPU_INFO("ppc64-hotplug-4", 24); qemuTestDriverFree(&driver); -- 2.8.2
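To make the out-of-order case explicit: in this data set the core with id 16 (QOM id 'vcpu1') was hotplugged before the core with id 8 ('vcpu0'), so its enable order (qemu-id) is lower, yet the libvirt vcpu ids still follow the topology:

    qemu-id (enable order)   qom-path                        core-id   libvirt vcpu ids
    1                        /machine/unattached/device[1]   0         0-7
    3                        /machine/peripheral/vcpu0       8         8-15
    2                        /machine/peripheral/vcpu1       16        16-23

The query-cpus threads are matched to the query-hotpluggable-cpus entities through their QOM paths (the thread paths are nested under the entity path), so the plug order only shows up in the recorded enable order, not in the libvirt numbering.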

The reported data is unusual so add it to the test suite. --- Notes: v2: - already ACKed ...umonitorjson-cpuinfo-ppc64-no-threads-cpus.json | 77 +++++++++++++ ...nitorjson-cpuinfo-ppc64-no-threads-hotplug.json | 125 +++++++++++++++++++++ .../qemumonitorjson-cpuinfo-ppc64-no-threads.data | 72 ++++++++++++ tests/qemumonitorjsontest.c | 1 + 4 files changed, 275 insertions(+) create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-no-threads-cpus.json create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-no-threads-hotplug.json create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-no-threads.data diff --git a/tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-no-threads-cpus.json b/tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-no-threads-cpus.json new file mode 100644 index 0000000..31a3905 --- /dev/null +++ b/tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-no-threads-cpus.json @@ -0,0 +1,77 @@ +{ + "return": [ + { + "arch": "ppc", + "current": true, + "CPU": 0, + "nip": -4611686018426772876, + "qom_path": "/machine/unattached/device[1]/thread[0]", + "halted": false, + "thread_id": 35232 + }, + { + "arch": "ppc", + "current": false, + "CPU": 1, + "nip": -4611686018426772876, + "qom_path": "/machine/unattached/device[2]/thread[0]", + "halted": false, + "thread_id": 35233 + }, + { + "arch": "ppc", + "current": false, + "CPU": 2, + "nip": -4611686018426772876, + "qom_path": "/machine/unattached/device[3]/thread[0]", + "halted": false, + "thread_id": 35234 + }, + { + "arch": "ppc", + "current": false, + "CPU": 3, + "nip": -4611686018426772876, + "qom_path": "/machine/unattached/device[4]/thread[0]", + "halted": false, + "thread_id": 35235 + }, + { + "arch": "ppc", + "current": false, + "CPU": 4, + "nip": -4611686018426772876, + "qom_path": "/machine/unattached/device[5]/thread[0]", + "halted": false, + "thread_id": 35236 + }, + { + "arch": "ppc", + "current": false, + "CPU": 5, + "nip": -4611686018426772876, + "qom_path": "/machine/unattached/device[6]/thread[0]", + "halted": false, + "thread_id": 35237 + }, + { + "arch": "ppc", + "current": false, + "CPU": 6, + "nip": -4611686018426772876, + "qom_path": "/machine/unattached/device[7]/thread[0]", + "halted": false, + "thread_id": 35238 + }, + { + "arch": "ppc", + "current": false, + "CPU": 7, + "nip": -4611686018426772876, + "qom_path": "/machine/unattached/device[8]/thread[0]", + "halted": false, + "thread_id": 35239 + } + ], + "id": "libvirt-11" +} diff --git a/tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-no-threads-hotplug.json b/tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-no-threads-hotplug.json new file mode 100644 index 0000000..30785a9 --- /dev/null +++ b/tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-no-threads-hotplug.json @@ -0,0 +1,125 @@ +{ + "return": [ + { + "props": { + "core-id": 120 + }, + "vcpus-count": 1, + "type": "host-spapr-cpu-core" + }, + { + "props": { + "core-id": 112 + }, + "vcpus-count": 1, + "type": "host-spapr-cpu-core" + }, + { + "props": { + "core-id": 104 + }, + "vcpus-count": 1, + "type": "host-spapr-cpu-core" + }, + { + "props": { + "core-id": 96 + }, + "vcpus-count": 1, + "type": "host-spapr-cpu-core" + }, + { + "props": { + "core-id": 88 + }, + "vcpus-count": 1, + "type": "host-spapr-cpu-core" + }, + { + "props": { + "core-id": 80 + }, + "vcpus-count": 1, + "type": "host-spapr-cpu-core" + }, + { + "props": { + "core-id": 72 + }, + "vcpus-count": 1, + "type": "host-spapr-cpu-core" + }, + { + 
"props": { + "core-id": 64 + }, + "vcpus-count": 1, + "type": "host-spapr-cpu-core" + }, + { + "props": { + "core-id": 56 + }, + "vcpus-count": 1, + "qom-path": "/machine/unattached/device[8]", + "type": "host-spapr-cpu-core" + }, + { + "props": { + "core-id": 48 + }, + "vcpus-count": 1, + "qom-path": "/machine/unattached/device[7]", + "type": "host-spapr-cpu-core" + }, + { + "props": { + "core-id": 40 + }, + "vcpus-count": 1, + "qom-path": "/machine/unattached/device[6]", + "type": "host-spapr-cpu-core" + }, + { + "props": { + "core-id": 32 + }, + "vcpus-count": 1, + "qom-path": "/machine/unattached/device[5]", + "type": "host-spapr-cpu-core" + }, + { + "props": { + "core-id": 24 + }, + "vcpus-count": 1, + "qom-path": "/machine/unattached/device[4]", + "type": "host-spapr-cpu-core" + }, + { + "props": { + "core-id": 16 + }, + "vcpus-count": 1, + "qom-path": "/machine/unattached/device[3]", + "type": "host-spapr-cpu-core" + }, + { + "props": { + "core-id": 8 + }, + "vcpus-count": 1, + "qom-path": "/machine/unattached/device[2]", + "type": "host-spapr-cpu-core" + }, + { + "props": { + "core-id": 0 + }, + "vcpus-count": 1, + "qom-path": "/machine/unattached/device[1]", + "type": "host-spapr-cpu-core" + } + ], + "id": "libvirt-12" +} diff --git a/tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-no-threads.data b/tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-no-threads.data new file mode 100644 index 0000000..d7ab77b --- /dev/null +++ b/tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-no-threads.data @@ -0,0 +1,72 @@ +[vcpu libvirt-id='0'] + thread-id='35232' + qemu-id='1' + type='host-spapr-cpu-core' + qom_path='/machine/unattached/device[1]' + topology: core='0' vcpus='1' +[vcpu libvirt-id='1'] + thread-id='35233' + qemu-id='2' + type='host-spapr-cpu-core' + qom_path='/machine/unattached/device[2]' + topology: core='8' vcpus='1' +[vcpu libvirt-id='2'] + thread-id='35234' + qemu-id='3' + type='host-spapr-cpu-core' + qom_path='/machine/unattached/device[3]' + topology: core='16' vcpus='1' +[vcpu libvirt-id='3'] + thread-id='35235' + qemu-id='4' + type='host-spapr-cpu-core' + qom_path='/machine/unattached/device[4]' + topology: core='24' vcpus='1' +[vcpu libvirt-id='4'] + thread-id='35236' + qemu-id='5' + type='host-spapr-cpu-core' + qom_path='/machine/unattached/device[5]' + topology: core='32' vcpus='1' +[vcpu libvirt-id='5'] + thread-id='35237' + qemu-id='6' + type='host-spapr-cpu-core' + qom_path='/machine/unattached/device[6]' + topology: core='40' vcpus='1' +[vcpu libvirt-id='6'] + thread-id='35238' + qemu-id='7' + type='host-spapr-cpu-core' + qom_path='/machine/unattached/device[7]' + topology: core='48' vcpus='1' +[vcpu libvirt-id='7'] + thread-id='35239' + qemu-id='8' + type='host-spapr-cpu-core' + qom_path='/machine/unattached/device[8]' + topology: core='56' vcpus='1' +[vcpu libvirt-id='8'] + type='host-spapr-cpu-core' + topology: core='64' vcpus='1' +[vcpu libvirt-id='9'] + type='host-spapr-cpu-core' + topology: core='72' vcpus='1' +[vcpu libvirt-id='10'] + type='host-spapr-cpu-core' + topology: core='80' vcpus='1' +[vcpu libvirt-id='11'] + type='host-spapr-cpu-core' + topology: core='88' vcpus='1' +[vcpu libvirt-id='12'] + type='host-spapr-cpu-core' + topology: core='96' vcpus='1' +[vcpu libvirt-id='13'] + type='host-spapr-cpu-core' + topology: core='104' vcpus='1' +[vcpu libvirt-id='14'] + type='host-spapr-cpu-core' + topology: core='112' vcpus='1' +[vcpu libvirt-id='15'] + type='host-spapr-cpu-core' + topology: core='120' vcpus='1' diff --git 
a/tests/qemumonitorjsontest.c b/tests/qemumonitorjsontest.c index 388e43f..b8ec07b 100644 --- a/tests/qemumonitorjsontest.c +++ b/tests/qemumonitorjsontest.c @@ -2590,6 +2590,7 @@ mymain(void) DO_TEST_CPU_INFO("ppc64-hotplug-1", 24); DO_TEST_CPU_INFO("ppc64-hotplug-2", 24); DO_TEST_CPU_INFO("ppc64-hotplug-4", 24); + DO_TEST_CPU_INFO("ppc64-no-threads", 16); qemuTestDriverFree(&driver); -- 2.8.2
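What makes this data set unusual is that every hotpluggable entity provides exactly one vcpu (vcpus-count is 1 even though the core-id spacing is 8), and half of the cores are reported by query-hotpluggable-cpus without a qom-path, i.e. they are not plugged in. The generated summary therefore carries runtime data only for libvirt ids 0-7, for example (abridged):

    [vcpu libvirt-id='7']
      thread-id='35239'
      qemu-id='8'
      ...
    [vcpu libvirt-id='8']
      type='host-spapr-cpu-core'
      topology: core='64' vcpus='1'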

Now that the monitor code gathers all the data we can extract it to relevant places either in the definition or the private data of a vcpu. As only thread id is broken for TCG guests we may extract the rest of the data and just skip assigning of the thread id. In case where qemu would allow cpu hotplug in TCG mode this will make it work eventually. --- src/qemu/qemu_domain.c | 90 ++++++++++++++++++++++++++++++++------------------ src/qemu/qemu_domain.h | 10 ++++++ 2 files changed, 67 insertions(+), 33 deletions(-) diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c index 45ded03..f0ae84e 100644 --- a/src/qemu/qemu_domain.c +++ b/src/qemu/qemu_domain.c @@ -854,8 +854,12 @@ qemuDomainVcpuPrivateNew(void) static void -qemuDomainVcpuPrivateDispose(void *obj ATTRIBUTE_UNUSED) +qemuDomainVcpuPrivateDispose(void *obj) { + qemuDomainVcpuPrivatePtr priv = obj; + + VIR_FREE(priv->type); + VIR_FREE(priv->alias); return; } @@ -5746,6 +5750,15 @@ qemuDomainValidateVcpuInfo(virDomainObjPtr vm) } +bool +qemuDomainSupportsNewVcpuHotplug(virDomainObjPtr vm) +{ + qemuDomainObjPrivatePtr priv = vm->privateData; + + return virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_QUERY_HOTPLUGGABLE_CPUS); +} + + /** * qemuDomainRefreshVcpuInfo: * @driver: qemu driver data @@ -5767,44 +5780,16 @@ qemuDomainRefreshVcpuInfo(virQEMUDriverPtr driver, qemuMonitorCPUInfoPtr info = NULL; size_t maxvcpus = virDomainDefGetVcpusMax(vm->def); size_t i; + bool hotplug; int rc; int ret = -1; - /* - * Current QEMU *can* report info about host threads mapped - * to vCPUs, but it is not in a manner we can correctly - * deal with. The TCG CPU emulation does have a separate vCPU - * thread, but it runs every vCPU in that same thread. So it - * is impossible to setup different affinity per thread. - * - * What's more the 'query-cpus' command returns bizarre - * data for the threads. It gives the TCG thread for the - * vCPU 0, but for vCPUs 1-> N, it actually replies with - * the main process thread ID. - * - * The result is that when we try to set affinity for - * vCPU 1, it will actually change the affinity of the - * emulator thread :-( When you try to set affinity for - * vCPUs 2, 3.... it will fail if the affinity was - * different from vCPU 1. - * - * We *could* allow vcpu pinning with TCG, if we made the - * restriction that all vCPUs had the same mask. This would - * at least let us separate emulator from vCPUs threads, as - * we do for KVM. It would need some changes to our cgroups - * CPU layout though, and error reporting for the config - * restrictions. - * - * Just disable CPU pinning with TCG until someone wants - * to try to do this hard work. - */ - if (vm->def->virtType == VIR_DOMAIN_VIRT_QEMU) - return 0; + hotplug = qemuDomainSupportsNewVcpuHotplug(vm); if (qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob) < 0) return -1; - rc = qemuMonitorGetCPUInfo(qemuDomainGetMonitor(vm), &info, maxvcpus, false); + rc = qemuMonitorGetCPUInfo(qemuDomainGetMonitor(vm), &info, maxvcpus, hotplug); if (qemuDomainObjExitMonitor(driver, vm) < 0) goto cleanup; @@ -5816,7 +5801,46 @@ qemuDomainRefreshVcpuInfo(virQEMUDriverPtr driver, vcpu = virDomainDefGetVcpu(vm->def, i); vcpupriv = QEMU_DOMAIN_VCPU_PRIVATE(vcpu); - vcpupriv->tid = info[i].tid; + /* + * Current QEMU *can* report info about host threads mapped + * to vCPUs, but it is not in a manner we can correctly + * deal with. The TCG CPU emulation does have a separate vCPU + * thread, but it runs every vCPU in that same thread. 
So it + * is impossible to setup different affinity per thread. + * + * What's more the 'query-cpus' command returns bizarre + * data for the threads. It gives the TCG thread for the + * vCPU 0, but for vCPUs 1-> N, it actually replies with + * the main process thread ID. + * + * The result is that when we try to set affinity for + * vCPU 1, it will actually change the affinity of the + * emulator thread :-( When you try to set affinity for + * vCPUs 2, 3.... it will fail if the affinity was + * different from vCPU 1. + * + * We *could* allow vcpu pinning with TCG, if we made the + * restriction that all vCPUs had the same mask. This would + * at least let us separate emulator from vCPUs threads, as + * we do for KVM. It would need some changes to our cgroups + * CPU layout though, and error reporting for the config + * restrictions. + * + * Just disable CPU pinning with TCG until someone wants + * to try to do this hard work. + */ + if (vm->def->virtType != VIR_DOMAIN_VIRT_QEMU) + vcpupriv->tid = info[i].tid; + + vcpupriv->socket_id = info[i].socket_id; + vcpupriv->core_id = info[i].core_id; + vcpupriv->thread_id = info[i].thread_id; + vcpupriv->vcpus = info[i].vcpus; + VIR_FREE(vcpupriv->type); + VIR_STEAL_PTR(vcpupriv->type, info[i].type); + VIR_FREE(vcpupriv->alias); + VIR_STEAL_PTR(vcpupriv->alias, info[i].alias); + vcpupriv->enable_id = info[i].id; } ret = 0; diff --git a/src/qemu/qemu_domain.h b/src/qemu/qemu_domain.h index 0613093..c95acdf 100644 --- a/src/qemu/qemu_domain.h +++ b/src/qemu/qemu_domain.h @@ -315,6 +315,15 @@ struct _qemuDomainVcpuPrivate { virObject parent; pid_t tid; /* vcpu thread id */ + int enable_id; /* order in which the vcpus were enabled in qemu */ + char *alias; + + /* information for hotpluggable cpus */ + char *type; + int socket_id; + int core_id; + int thread_id; + int vcpus; }; # define QEMU_DOMAIN_VCPU_PRIVATE(vcpu) \ @@ -645,6 +654,7 @@ int qemuDomainDefValidateMemoryHotplug(const virDomainDef *def, virQEMUCapsPtr qemuCaps, const virDomainMemoryDef *mem); +bool qemuDomainSupportsNewVcpuHotplug(virDomainObjPtr vm); bool qemuDomainHasVcpuPids(virDomainObjPtr vm); pid_t qemuDomainGetVcpuPid(virDomainObjPtr vm, unsigned int vcpuid); int qemuDomainValidateVcpuInfo(virDomainObjPtr vm); -- 2.8.2
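To make the new private data fields concrete: after qemuDomainRefreshVcpuInfo processes the ppc64-hotplug-2 state shown earlier, the private data of vcpu 8 (the first thread of the core exposed as 'vcpu0') holds roughly the following; the values are taken from the test data and shown as literals only for illustration:

    vcpupriv->tid       = 22131;                 /* thread_id from query-cpus */
    vcpupriv->enable_id = 2;                     /* second entity enabled in qemu */
    vcpupriv->alias     = "vcpu0";               /* device id under /machine/peripheral */
    vcpupriv->type      = "host-spapr-cpu-core";
    vcpupriv->core_id   = 8;
    vcpupriv->socket_id = -1;                    /* not reported for spapr cores */
    vcpupriv->thread_id = -1;
    vcpupriv->vcpus     = 8;                     /* threads provided by this core */

The other threads of the core only carry a thread id, as can be seen in the .data files above.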

Similarly to devices the guest may allow unplug of the VCPU if libvirt is down. To avoid problems, refresh the vcpu state on reconnect. Don't mess with the vcpu state otherwise. --- Notes: v2: already ACKed src/qemu/qemu_domain.c | 9 ++++++++- src/qemu/qemu_domain.h | 3 ++- src/qemu/qemu_driver.c | 4 ++-- src/qemu/qemu_process.c | 7 +++++-- 4 files changed, 17 insertions(+), 6 deletions(-) diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c index f0ae84e..d4f472b 100644 --- a/src/qemu/qemu_domain.c +++ b/src/qemu/qemu_domain.c @@ -5764,16 +5764,20 @@ qemuDomainSupportsNewVcpuHotplug(virDomainObjPtr vm) * @driver: qemu driver data * @vm: domain object * @asyncJob: current asynchronous job type + * @state: refresh vcpu state * * Updates vCPU information private data of @vm. Due to historical reasons this * function returns success even if some data were not reported by qemu. * + * If @state is true, the vcpu state is refreshed as reported by the monitor. + * * Returns 0 on success and -1 on fatal error. */ int qemuDomainRefreshVcpuInfo(virQEMUDriverPtr driver, virDomainObjPtr vm, - int asyncJob) + int asyncJob, + bool state) { virDomainVcpuDefPtr vcpu; qemuDomainVcpuPrivatePtr vcpupriv; @@ -5841,6 +5845,9 @@ qemuDomainRefreshVcpuInfo(virQEMUDriverPtr driver, VIR_FREE(vcpupriv->alias); VIR_STEAL_PTR(vcpupriv->alias, info[i].alias); vcpupriv->enable_id = info[i].id; + + if (hotplug && state) + vcpu->online = !!info[i].qom_path; } ret = 0; diff --git a/src/qemu/qemu_domain.h b/src/qemu/qemu_domain.h index c95acdf..b4b3dae 100644 --- a/src/qemu/qemu_domain.h +++ b/src/qemu/qemu_domain.h @@ -660,7 +660,8 @@ pid_t qemuDomainGetVcpuPid(virDomainObjPtr vm, unsigned int vcpuid); int qemuDomainValidateVcpuInfo(virDomainObjPtr vm); int qemuDomainRefreshVcpuInfo(virQEMUDriverPtr driver, virDomainObjPtr vm, - int asyncJob); + int asyncJob, + bool state); bool qemuDomainSupportsNicdev(virDomainDefPtr def, virDomainNetDefPtr net); diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index a0ac2ef..8ff95f6 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -4640,7 +4640,7 @@ qemuDomainHotplugAddVcpu(virQEMUDriverPtr driver, vcpuinfo->online = true; - if (qemuDomainRefreshVcpuInfo(driver, vm, QEMU_ASYNC_JOB_NONE) < 0) + if (qemuDomainRefreshVcpuInfo(driver, vm, QEMU_ASYNC_JOB_NONE, false) < 0) goto cleanup; if (qemuDomainValidateVcpuInfo(vm) < 0) @@ -4689,7 +4689,7 @@ qemuDomainHotplugDelVcpu(virQEMUDriverPtr driver, goto cleanup; } - if (qemuDomainRefreshVcpuInfo(driver, vm, QEMU_ASYNC_JOB_NONE) < 0) + if (qemuDomainRefreshVcpuInfo(driver, vm, QEMU_ASYNC_JOB_NONE, false) < 0) goto cleanup; if (qemuDomainValidateVcpuInfo(vm) < 0) { diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c index 907da30..3cf9d8c 100644 --- a/src/qemu/qemu_process.c +++ b/src/qemu/qemu_process.c @@ -3354,6 +3354,9 @@ qemuProcessReconnect(void *opaque) ignore_value(virSecurityManagerCheckAllLabel(driver->securityManager, obj->def)); + if (qemuDomainRefreshVcpuInfo(driver, obj, QEMU_ASYNC_JOB_NONE, true) < 0) + goto error; + if (virSecurityManagerReserveLabel(driver->securityManager, obj->def, obj->pid) < 0) goto error; @@ -5234,7 +5237,7 @@ qemuProcessLaunch(virConnectPtr conn, goto cleanup; VIR_DEBUG("Refreshing VCPU info"); - if (qemuDomainRefreshVcpuInfo(driver, vm, asyncJob) < 0) + if (qemuDomainRefreshVcpuInfo(driver, vm, asyncJob, false) < 0) goto cleanup; if (qemuDomainValidateVcpuInfo(vm) < 0) @@ -6029,7 +6032,7 @@ int qemuProcessAttach(virConnectPtr conn ATTRIBUTE_UNUSED, } 
VIR_DEBUG("Detecting VCPU PIDs"); - if (qemuDomainRefreshVcpuInfo(driver, vm, QEMU_ASYNC_JOB_NONE) < 0) + if (qemuDomainRefreshVcpuInfo(driver, vm, QEMU_ASYNC_JOB_NONE, false) < 0) goto error; if (qemuDomainValidateVcpuInfo(vm) < 0) -- 2.8.2

--- src/libvirt_private.syms | 1 + src/qemu/qemu_driver.c | 27 +++------------------------ src/util/vircgroup.c | 20 ++++++++++++++++++++ src/util/vircgroup.h | 4 ++++ 4 files changed, 28 insertions(+), 24 deletions(-) diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms index a32ce1c..77df0c0 100644 --- a/src/libvirt_private.syms +++ b/src/libvirt_private.syms @@ -1269,6 +1269,7 @@ virCgroupBindMount; virCgroupControllerAvailable; virCgroupControllerTypeFromString; virCgroupControllerTypeToString; +virCgroupDelThread; virCgroupDenyAllDevices; virCgroupDenyDevice; virCgroupDenyDevicePath; diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index 8ff95f6..d083e46 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -4590,25 +4590,6 @@ static void qemuProcessEventHandler(void *data, void *opaque) static int -qemuDomainDelCgroupForThread(virCgroupPtr cgroup, - virCgroupThreadName nameval, - int idx) -{ - virCgroupPtr new_cgroup = NULL; - - if (cgroup) { - if (virCgroupNewThread(cgroup, nameval, idx, false, &new_cgroup) < 0) - return -1; - - /* Remove the offlined cgroup */ - virCgroupRemove(new_cgroup); - virCgroupFree(&new_cgroup); - } - - return 0; -} - -static int qemuDomainHotplugAddVcpu(virQEMUDriverPtr driver, virDomainObjPtr vm, unsigned int vcpu) @@ -4701,8 +4682,7 @@ qemuDomainHotplugDelVcpu(virQEMUDriverPtr driver, virDomainAuditVcpu(vm, oldvcpus, oldvcpus - 1, "update", true); - if (qemuDomainDelCgroupForThread(priv->cgroup, - VIR_CGROUP_THREAD_VCPU, vcpu) < 0) + if (virCgroupDelThread(priv->cgroup, VIR_CGROUP_THREAD_VCPU, vcpu) < 0) goto cleanup; ret = 0; @@ -5904,9 +5884,8 @@ qemuDomainHotplugDelIOThread(virQEMUDriverPtr driver, virDomainIOThreadIDDel(vm->def, iothread_id); - if (qemuDomainDelCgroupForThread(priv->cgroup, - VIR_CGROUP_THREAD_IOTHREAD, - iothread_id) < 0) + if (virCgroupDelThread(priv->cgroup, VIR_CGROUP_THREAD_IOTHREAD, + iothread_id) < 0) goto cleanup; ret = 0; diff --git a/src/util/vircgroup.c b/src/util/vircgroup.c index 07cd7f6..f2477d5 100644 --- a/src/util/vircgroup.c +++ b/src/util/vircgroup.c @@ -4821,3 +4821,23 @@ virCgroupControllerAvailable(int controller ATTRIBUTE_UNUSED) return false; } #endif /* !VIR_CGROUP_SUPPORTED */ + + +int +virCgroupDelThread(virCgroupPtr cgroup, + virCgroupThreadName nameval, + int idx) +{ + virCgroupPtr new_cgroup = NULL; + + if (cgroup) { + if (virCgroupNewThread(cgroup, nameval, idx, false, &new_cgroup) < 0) + return -1; + + /* Remove the offlined cgroup */ + virCgroupRemove(new_cgroup); + virCgroupFree(&new_cgroup); + } + + return 0; +} diff --git a/src/util/vircgroup.h b/src/util/vircgroup.h index 2ddbb35..4b8f3ff 100644 --- a/src/util/vircgroup.h +++ b/src/util/vircgroup.h @@ -86,6 +86,10 @@ int virCgroupNewThread(virCgroupPtr domain, virCgroupPtr *group) ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(5); +int virCgroupDelThread(virCgroupPtr cgroup, + virCgroupThreadName nameval, + int idx); + int virCgroupNewDetect(pid_t pid, int controllers, virCgroupPtr *group); -- 2.8.2

Individual vCPU hotplug requires us to track the state of any vCPU. To allow this add the following XML: <domain> ... <vcpu current='1'>2</vcpu> <vcpus> <vcpu id='0' enabled='no' hotpluggable='yes'/> <vcpu id='1' enabled='yes' hotpluggable='no' order='1'/> </vcpus> ... The 'enabled' attribute allows to control the state of the vcpu. 'hotpluggable' controls whether given vcpu can be hotplugged and 'order' allows to specify the order to add the vcpus. --- Notes: v2: - added documentation - added tweak of hotplug state when reconnecting to a VM - discussed design requests in previous thread docs/formatdomain.html.in | 40 ++++++ docs/schemas/domaincommon.rng | 25 ++++ src/conf/domain_conf.c | 152 ++++++++++++++++++++- src/conf/domain_conf.h | 6 + src/qemu/qemu_domain.c | 11 +- .../generic-vcpus-individual.xml | 23 ++++ tests/genericxml2xmltest.c | 2 + tests/testutils.c | 4 +- 8 files changed, 259 insertions(+), 4 deletions(-) create mode 100644 tests/genericxml2xmlindata/generic-vcpus-individual.xml diff --git a/docs/formatdomain.html.in b/docs/formatdomain.html.in index bfbb0f2..062045b 100644 --- a/docs/formatdomain.html.in +++ b/docs/formatdomain.html.in @@ -489,6 +489,10 @@ <domain> ... <vcpu placement='static' cpuset="1-4,^3,6" current="1">2</vcpu> + <vcpus> + <vcpu id='0' enabled='yes' hotpluggable='no' order='1'/> + <vcpu id='1' enabled='no' hotpluggable='yes'/> + </vcpus> ... </domain> </pre> @@ -542,6 +546,42 @@ </dd> </dl> </dd> + <dt><code>vcpus</code></dt> + <dd> + The vcpus element allows to control state of individual vcpus. + + The <code>id</code> attribute specifies the vCPU id as used by libvirt + in other places such as vcpu pinning, scheduler information and NUMA + assignment. Note that the vcpu ID as seen in the guest may differ from + libvirt ID in certain cases. Valid IDs are from 0 to the maximum vcpu + count as set by the <code>vcpu</code> element minus 1. + + The <code>enabled</code> attribute allows to control the state of the + vcpu. Valid values are <code>yes</code> and <code>no</code>. + + <code>hotpluggable</code> controls whether given vcpu can be hotplugged + and hotunplugged in cases when the cpu is enabled at boot. Note that + all disabled vcpus must be hotpluggable. Valid values are + <code>yes</code> and <code>no</code>. + + <code>order</code> allows to specify the order to add the vcpus. For + hypervisors/platforms that require to insert multiple vcpus at once + the order may be be duplicated accross all vcpus that need to be + enabled at once. Specifying order is not necessary, vcpus are then + added in an arbitrary order. + + Note that hypervisors may create hotpluggable vcpus differently from + boot vcpus thus special initialization may be necessary. + + Hypervisors may require that vcpus enabled on boot which are not + hotpluggable are clustered at the beginning starting with ID 0. It may + be also required that vcpu 0 is always present and non-hotpluggable. + + Note that providing state for individual cpus may be necessary to enable + support of addressable vCPU hotplug and this feature may not be + supported by all hypervisors. 
+ <span class="since">Since 2.2.0 (QEMU only)</span> + </dd> </dl> <h3><a name="elementsIOThreadsAllocation">IOThreads Allocation</a></h3> diff --git a/docs/schemas/domaincommon.rng b/docs/schemas/domaincommon.rng index 052f28c..5b3d652 100644 --- a/docs/schemas/domaincommon.rng +++ b/docs/schemas/domaincommon.rng @@ -583,6 +583,31 @@ </optional> <optional> + <element name="vcpus"> + <zeroOrMore> + <element name="vcpu"> + <attribute name="id"> + <ref name="unsignedInt"/> + </attribute> + <attribute name="enabled"> + <ref name="virYesNo"/> + </attribute> + <optional> + <attribute name="hotpluggable"> + <ref name="virYesNo"/> + </attribute> + </optional> + <optional> + <attribute name="order"> + <ref name="unsignedInt"/> + </attribute> + </optional> + </element> + </zeroOrMore> + </element> + </optional> + + <optional> <element name="iothreads"> <ref name="unsignedInt"/> </element> diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c index 14d4f7d..df2e53f 100644 --- a/src/conf/domain_conf.c +++ b/src/conf/domain_conf.c @@ -4332,6 +4332,13 @@ virDomainDefPostParseCheckFeatures(virDomainDefPtr def, } } + if (UNSUPPORTED(VIR_DOMAIN_DEF_FEATURE_INDIVIDUAL_VCPUS) && + def->individualvcpus) { + virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s", + _("individual CPU state configuration is not supported")); + return -1; + } + return 0; } @@ -4406,6 +4413,43 @@ virDomainDefPostParseDeviceIterator(virDomainDefPtr def, static int +virDomainVcpuDefPostParse(virDomainDefPtr def) +{ + virDomainVcpuDefPtr vcpu; + size_t maxvcpus = virDomainDefGetVcpusMax(def); + size_t i; + + for (i = 0; i < maxvcpus; i++) { + vcpu = virDomainDefGetVcpu(def, i); + + switch (vcpu->hotpluggable) { + case VIR_TRISTATE_BOOL_ABSENT: + if (vcpu->online) + vcpu->hotpluggable = VIR_TRISTATE_BOOL_NO; + else + vcpu->hotpluggable = VIR_TRISTATE_BOOL_YES; + break; + + case VIR_TRISTATE_BOOL_NO: + if (!vcpu->online) { + virReportError(VIR_ERR_CONFIG_UNSUPPORTED, + _("vcpu '%zu' is both offline and not " + "hotpluggable"), i); + return -1; + } + break; + + case VIR_TRISTATE_BOOL_YES: + case VIR_TRISTATE_BOOL_LAST: + break; + } + } + + return 0; +} + + +static int virDomainDefPostParseInternal(virDomainDefPtr def, struct virDomainDefPostParseDeviceIteratorData *data) { @@ -4416,6 +4460,9 @@ virDomainDefPostParseInternal(virDomainDefPtr def, return -1; } + if (virDomainVcpuDefPostParse(def) < 0) + return -1; + if (virDomainDefPostParseMemory(def, data->parseFlags) < 0) return -1; @@ -15521,6 +15568,8 @@ virDomainVcpuParse(virDomainDefPtr def, virDomainXMLOptionPtr xmlopt) { int n; + xmlNodePtr *nodes = NULL; + size_t i; char *tmp = NULL; unsigned int maxvcpus; unsigned int vcpus; @@ -15549,8 +15598,6 @@ virDomainVcpuParse(virDomainDefPtr def, vcpus = maxvcpus; } - if (virDomainDefSetVcpus(def, vcpus) < 0) - goto cleanup; tmp = virXPathString("string(./vcpu[1]/@placement)", ctxt); if (tmp) { @@ -15582,9 +15629,80 @@ virDomainVcpuParse(virDomainDefPtr def, } } + if ((n = virXPathNodeSet("./vcpus/vcpu", ctxt, &nodes)) < 0) + goto cleanup; + + if (n) { + /* if individual vcpu states are provided take them as master */ + def->individualvcpus = true; + + for (i = 0; i < n; i++) { + virDomainVcpuDefPtr vcpu; + int state; + unsigned int id; + unsigned int order; + + if (!(tmp = virXMLPropString(nodes[i], "id")) || + virStrToLong_uip(tmp, NULL, 10, &id) < 0) { + virReportError(VIR_ERR_XML_ERROR, "%s", + _("missing or invalid vcpu id")); + goto cleanup; + } + + VIR_FREE(tmp); + + if (id >= def->maxvcpus) { + virReportError(VIR_ERR_XML_ERROR, 
+ _("vcpu id '%u' is out of range of maximum " + "vcpu count"), id); + goto cleanup; + } + + vcpu = virDomainDefGetVcpu(def, id); + + if (!(tmp = virXMLPropString(nodes[i], "enabled"))) { + virReportError(VIR_ERR_XML_ERROR, "%s", + _("missing vcpu enabled state")); + goto cleanup; + } + + if ((state = virTristateBoolTypeFromString(tmp)) < 0) { + virReportError(VIR_ERR_XML_ERROR, + _("invalid vcpu 'enabled' value '%s'"), tmp); + goto cleanup; + } + VIR_FREE(tmp); + + vcpu->online = state == VIR_TRISTATE_BOOL_YES; + + if ((tmp = virXMLPropString(nodes[i], "hotpluggable"))) { + if ((vcpu->hotpluggable = virTristateBoolTypeFromString(tmp)) < 0) { + virReportError(VIR_ERR_XML_ERROR, + _("invalid vcpu 'hotpluggable' value '%s'"), tmp); + goto cleanup; + } + VIR_FREE(tmp); + } + + if ((tmp = virXMLPropString(nodes[i], "order"))) { + if (virStrToLong_uip(tmp, NULL, 10, &order) < 0) { + virReportError(VIR_ERR_XML_ERROR, "%s", + _("invalid vcpu order")); + goto cleanup; + } + vcpu->order = order; + VIR_FREE(tmp); + } + } + } else { + if (virDomainDefSetVcpus(def, vcpus) < 0) + goto cleanup; + } + ret = 0; cleanup: + VIR_FREE(nodes); VIR_FREE(tmp); return ret; @@ -18642,6 +18760,13 @@ virDomainDefVcpuCheckAbiStability(virDomainDefPtr src, "destination definitions"), i); return false; } + + if (svcpu->order != dvcpu->order) { + virReportError(VIR_ERR_CONFIG_UNSUPPORTED, + _("vcpu enable order of vCPU '%zu' differs between " + "source and destination definitions"), i); + return false; + } } return true; @@ -22946,6 +23071,8 @@ static int virDomainCpuDefFormat(virBufferPtr buf, const virDomainDef *def) { + virDomainVcpuDefPtr vcpu; + size_t i; char *cpumask = NULL; int ret = -1; @@ -22962,6 +23089,27 @@ virDomainCpuDefFormat(virBufferPtr buf, virBufferAsprintf(buf, " current='%u'", virDomainDefGetVcpus(def)); virBufferAsprintf(buf, ">%u</vcpu>\n", virDomainDefGetVcpusMax(def)); + if (def->individualvcpus) { + virBufferAddLit(buf, "<vcpus>\n"); + virBufferAdjustIndent(buf, 2); + for (i = 0; i < def->maxvcpus; i++) { + vcpu = def->vcpus[i]; + + virBufferAsprintf(buf, "<vcpu id='%zu' enabled='%s'", + i, vcpu->online ? 
"yes" : "no"); + if (vcpu->hotpluggable) + virBufferAsprintf(buf, " hotpluggable='%s'", + virTristateBoolTypeToString(vcpu->hotpluggable)); + + if (vcpu->order != 0) + virBufferAsprintf(buf, " order='%d'", vcpu->order); + + virBufferAddLit(buf, "/>\n"); + } + virBufferAdjustIndent(buf, -2); + virBufferAddLit(buf, "</vcpus>\n"); + } + ret = 0; cleanup: diff --git a/src/conf/domain_conf.h b/src/conf/domain_conf.h index 8b26724..b889125 100644 --- a/src/conf/domain_conf.h +++ b/src/conf/domain_conf.h @@ -2046,6 +2046,9 @@ typedef virDomainVcpuDef *virDomainVcpuDefPtr; struct _virDomainVcpuDef { bool online; + virTristateBool hotpluggable; + unsigned int order; + virBitmapPtr cpumask; virDomainThreadSchedParam sched; @@ -2142,6 +2145,8 @@ struct _virDomainDef { virDomainVcpuDefPtr *vcpus; size_t maxvcpus; + /* set if the vcpu definition was specified individually */ + bool individualvcpus; int placement_mode; virBitmapPtr cpumask; @@ -2344,6 +2349,7 @@ typedef enum { VIR_DOMAIN_DEF_FEATURE_MEMORY_HOTPLUG = (1 << 1), VIR_DOMAIN_DEF_FEATURE_OFFLINE_VCPUPIN = (1 << 2), VIR_DOMAIN_DEF_FEATURE_NAME_SLASH = (1 << 3), + VIR_DOMAIN_DEF_FEATURE_INDIVIDUAL_VCPUS = (1 << 4), } virDomainDefFeatures; diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c index d4f472b..5f4c642 100644 --- a/src/qemu/qemu_domain.c +++ b/src/qemu/qemu_domain.c @@ -5846,8 +5846,17 @@ qemuDomainRefreshVcpuInfo(virQEMUDriverPtr driver, VIR_STEAL_PTR(vcpupriv->alias, info[i].alias); vcpupriv->enable_id = info[i].id; - if (hotplug && state) + if (hotplug && state) { vcpu->online = !!info[i].qom_path; + + /* mark cpus that don't have an alias as non-hotpluggable */ + if (vcpu->online) { + if (vcpupriv->alias) + vcpu->hotpluggable = VIR_TRISTATE_BOOL_YES; + else + vcpu->hotpluggable = VIR_TRISTATE_BOOL_NO; + } + } } ret = 0; diff --git a/tests/genericxml2xmlindata/generic-vcpus-individual.xml b/tests/genericxml2xmlindata/generic-vcpus-individual.xml new file mode 100644 index 0000000..cbcf8fd --- /dev/null +++ b/tests/genericxml2xmlindata/generic-vcpus-individual.xml @@ -0,0 +1,23 @@ +<domain type='qemu'> + <name>foobar</name> + <uuid>c7a5fdbd-edaf-9455-926a-d65c16db1809</uuid> + <memory unit='KiB'>219136</memory> + <currentMemory unit='KiB'>219136</currentMemory> + <vcpu placement='static' current='2'>4</vcpu> + <vcpus> + <vcpu id='0' enabled='no' hotpluggable='yes' order='1'/> + <vcpu id='1' enabled='yes' hotpluggable='no'/> + <vcpu id='2' enabled='no' hotpluggable='yes' order='2'/> + <vcpu id='3' enabled='yes' hotpluggable='no'/> + </vcpus> + <os> + <type arch='i686' machine='pc'>hvm</type> + <boot dev='hd'/> + </os> + <clock offset='utc'/> + <on_poweroff>destroy</on_poweroff> + <on_reboot>restart</on_reboot> + <on_crash>destroy</on_crash> + <devices> + </devices> +</domain> diff --git a/tests/genericxml2xmltest.c b/tests/genericxml2xmltest.c index a487727..2ea2396 100644 --- a/tests/genericxml2xmltest.c +++ b/tests/genericxml2xmltest.c @@ -97,6 +97,8 @@ mymain(void) DO_TEST("perf"); + DO_TEST("vcpus-individual"); + virObjectUnref(caps); virObjectUnref(xmlopt); diff --git a/tests/testutils.c b/tests/testutils.c index 8af8707..8ea6ab8 100644 --- a/tests/testutils.c +++ b/tests/testutils.c @@ -1089,7 +1089,9 @@ virCapsPtr virTestGenericCapsInit(void) return NULL; } -static virDomainDefParserConfig virTestGenericDomainDefParserConfig; +static virDomainDefParserConfig virTestGenericDomainDefParserConfig = { + .features = VIR_DOMAIN_DEF_FEATURE_INDIVIDUAL_VCPUS, +}; static virDomainXMLPrivateDataCallbacks 
virTestGenericPrivateDataCallbacks; virDomainXMLOptionPtr virTestGenericDomainXMLConfInit(void) -- 2.8.2
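One consequence of the post parse code worth spelling out: when the hotpluggable attribute is omitted it is derived from the online state, so a configuration written as

    <vcpus>
      <vcpu id='0' enabled='yes'/>
      <vcpu id='1' enabled='no'/>
    </vcpus>

is formatted back as

    <vcpus>
      <vcpu id='0' enabled='yes' hotpluggable='no'/>
      <vcpu id='1' enabled='no' hotpluggable='yes'/>
    </vcpus>

while a vcpu that is explicitly both offline and non-hotpluggable is rejected as invalid.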

On 08/19/2016 10:38 AM, Peter Krempa wrote:
Individual vCPU hotplug requires us to track the state of any vCPU. To allow this add the following XML:
<domain> ... <vcpu current='1'>2</vcpu> <vcpus> <vcpu id='0' enabled='no' hotpluggable='yes'/> <vcpu id='1' enabled='yes' hotpluggable='no' order='1'/> </vcpus> ...
Rules as of patch 20 don't allow this config to work -

    error: unsupported configuration: vcpu 0 can't be offline

John
The 'enabled' attribute allows to control the state of the vcpu. 'hotpluggable' controls whether given vcpu can be hotplugged and 'order' allows to specify the order to add the vcpus.
[...]

On Fri, Aug 19, 2016 at 17:08:50 -0400, John Ferlan wrote:
On 08/19/2016 10:38 AM, Peter Krempa wrote:
Individual vCPU hotplug requires us to track the state of any vCPU. To allow this add the following XML:
<domain> ... <vcpu current='1'>2</vcpu> <vcpus> <vcpu id='0' enabled='no' hotpluggable='yes'/> <vcpu id='1' enabled='yes' hotpluggable='no' order='1'/> </vcpus> ...
Rules as of patch 20 don't allow this config to work -
Erm, yes. This was a generic example. I'll change it to a working one.
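For reference, an example that does satisfy the constraints introduced later in the series (vcpu 0 online, non-hotpluggable and enabled first) would be:

    <vcpu current='1'>2</vcpu>
    <vcpus>
      <vcpu id='0' enabled='yes' hotpluggable='no' order='1'/>
      <vcpu id='1' enabled='no' hotpluggable='yes'/>
    </vcpus>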

Introduce a new migration cookie flag that will be used for any configurations that are not compatible with libvirt that would not support the specific vcpu hotplug approach. This will make sure that old libvirt does not fail to reproduce the configuration correctly. --- Notes: v2: - already ACKed src/qemu/qemu_domain.c | 35 +++++++++++++++++++++++++++++++++++ src/qemu/qemu_domain.h | 3 +++ src/qemu/qemu_migration.c | 16 ++++++++++++++-- 3 files changed, 52 insertions(+), 2 deletions(-) diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c index 5f4c642..69e1e38 100644 --- a/src/qemu/qemu_domain.c +++ b/src/qemu/qemu_domain.c @@ -5964,3 +5964,38 @@ qemuDomainPrepareChannel(virDomainChrDefPtr channel, return 0; } + + +/** + * qemuDomainVcpuHotplugIsInOrder: + * @def: domain definition + * + * Returns true if online vcpus were added in order (clustered behind vcpu0 + * with increasing order). + */ +bool +qemuDomainVcpuHotplugIsInOrder(virDomainDefPtr def) +{ + size_t maxvcpus = virDomainDefGetVcpusMax(def); + virDomainVcpuDefPtr vcpu; + unsigned int prevorder = 0; + size_t seenonlinevcpus = 0; + size_t i; + + for (i = 0; i < maxvcpus; i++) { + vcpu = virDomainDefGetVcpu(def, i); + + if (!vcpu->online) + break; + + if (vcpu->order < prevorder) + break; + + if (vcpu->order > prevorder) + prevorder = vcpu->order; + + seenonlinevcpus++; + } + + return seenonlinevcpus == virDomainDefGetVcpus(def); +} diff --git a/src/qemu/qemu_domain.h b/src/qemu/qemu_domain.h index b4b3dae..b873a8b 100644 --- a/src/qemu/qemu_domain.h +++ b/src/qemu/qemu_domain.h @@ -722,4 +722,7 @@ int qemuDomainPrepareChannel(virDomainChrDefPtr chr, const char *domainChannelTargetDir) ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2); +bool qemuDomainVcpuHotplugIsInOrder(virDomainDefPtr def) + ATTRIBUTE_NONNULL(1); + #endif /* __QEMU_DOMAIN_H__ */ diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c index d018add..024d248 100644 --- a/src/qemu/qemu_migration.c +++ b/src/qemu/qemu_migration.c @@ -92,6 +92,7 @@ enum qemuMigrationCookieFlags { QEMU_MIGRATION_COOKIE_FLAG_NBD, QEMU_MIGRATION_COOKIE_FLAG_STATS, QEMU_MIGRATION_COOKIE_FLAG_MEMORY_HOTPLUG, + QEMU_MIGRATION_COOKIE_FLAG_CPU_HOTPLUG, QEMU_MIGRATION_COOKIE_FLAG_LAST }; @@ -105,7 +106,8 @@ VIR_ENUM_IMPL(qemuMigrationCookieFlag, "network", "nbd", "statistics", - "memory-hotplug"); + "memory-hotplug", + "cpu-hotplug"); enum qemuMigrationCookieFeatures { QEMU_MIGRATION_COOKIE_GRAPHICS = (1 << QEMU_MIGRATION_COOKIE_FLAG_GRAPHICS), @@ -115,6 +117,7 @@ enum qemuMigrationCookieFeatures { QEMU_MIGRATION_COOKIE_NBD = (1 << QEMU_MIGRATION_COOKIE_FLAG_NBD), QEMU_MIGRATION_COOKIE_STATS = (1 << QEMU_MIGRATION_COOKIE_FLAG_STATS), QEMU_MIGRATION_COOKIE_MEMORY_HOTPLUG = (1 << QEMU_MIGRATION_COOKIE_FLAG_MEMORY_HOTPLUG), + QEMU_MIGRATION_COOKIE_CPU_HOTPLUG = (1 << QEMU_MIGRATION_COOKIE_FLAG_CPU_HOTPLUG), }; typedef struct _qemuMigrationCookieGraphics qemuMigrationCookieGraphics; @@ -1408,6 +1411,9 @@ qemuMigrationBakeCookie(qemuMigrationCookiePtr mig, if (flags & QEMU_MIGRATION_COOKIE_MEMORY_HOTPLUG) mig->flagsMandatory |= QEMU_MIGRATION_COOKIE_MEMORY_HOTPLUG; + if (flags & QEMU_MIGRATION_COOKIE_CPU_HOTPLUG) + mig->flagsMandatory |= QEMU_MIGRATION_COOKIE_CPU_HOTPLUG; + if (!(*cookieout = qemuMigrationCookieXMLFormatStr(driver, mig))) return -1; @@ -3192,6 +3198,11 @@ qemuMigrationBeginPhase(virQEMUDriverPtr driver, vm->newDef && virDomainDefHasMemoryHotplug(vm->newDef))) cookieFlags |= QEMU_MIGRATION_COOKIE_MEMORY_HOTPLUG; + if (!qemuDomainVcpuHotplugIsInOrder(vm->def) || + 
((flags & VIR_MIGRATE_PERSIST_DEST) && + vm->newDef && !qemuDomainVcpuHotplugIsInOrder(vm->newDef))) + cookieFlags |= QEMU_MIGRATION_COOKIE_CPU_HOTPLUG; + if (!(mig = qemuMigrationEatCookie(driver, vm, NULL, 0, 0))) goto cleanup; @@ -3687,7 +3698,8 @@ qemuMigrationPrepareAny(virQEMUDriverPtr driver, if (!(mig = qemuMigrationEatCookie(driver, vm, cookiein, cookieinlen, QEMU_MIGRATION_COOKIE_LOCKSTATE | QEMU_MIGRATION_COOKIE_NBD | - QEMU_MIGRATION_COOKIE_MEMORY_HOTPLUG))) + QEMU_MIGRATION_COOKIE_MEMORY_HOTPLUG | + QEMU_MIGRATION_COOKIE_CPU_HOTPLUG))) goto cleanup; if (STREQ_NULLABLE(protocol, "rdma") && -- 2.8.2

For use on the monitor we need to format certain parts of the vcpu private definition into a JSON object. Add a helper. --- Notes: v2: - already ACKed src/qemu/qemu_command.c | 30 ++++++++++++++++++++++++++++++ src/qemu/qemu_command.h | 3 +++ 2 files changed, 33 insertions(+) diff --git a/src/qemu/qemu_command.c b/src/qemu/qemu_command.c index a6dea6a..28e5a7e 100644 --- a/src/qemu/qemu_command.c +++ b/src/qemu/qemu_command.c @@ -9799,3 +9799,33 @@ qemuBuildChrDeviceStr(char **deviceStr, return ret; } + + +virJSONValuePtr +qemuBuildHotpluggableCPUProps(const virDomainVcpuDef *vcpu) +{ + qemuDomainVcpuPrivatePtr vcpupriv = QEMU_DOMAIN_VCPU_PRIVATE(vcpu); + virJSONValuePtr ret = NULL; + + if (virJSONValueObjectCreate(&ret, "s:driver", vcpupriv->type, + "s:id", vcpupriv->alias, NULL) < 0) + goto error; + + if (vcpupriv->socket_id != -1 && + virJSONValueObjectAdd(ret, "i:socket-id", vcpupriv->socket_id, NULL) < 0) + goto error; + + if (vcpupriv->core_id != -1 && + virJSONValueObjectAdd(ret, "i:core-id", vcpupriv->core_id, NULL) < 0) + goto error; + + if (vcpupriv->thread_id != -1 && + virJSONValueObjectAdd(ret, "i:thread-id", vcpupriv->thread_id, NULL) < 0) + goto error; + + return ret; + + error: + virJSONValueFree(ret); + return NULL; +} diff --git a/src/qemu/qemu_command.h b/src/qemu/qemu_command.h index dcf9ba6..9b9ccb6 100644 --- a/src/qemu/qemu_command.h +++ b/src/qemu/qemu_command.h @@ -179,4 +179,7 @@ bool qemuCheckCCWS390AddressSupport(const virDomainDef *def, virQEMUCapsPtr qemuCaps, const char *devicename); +virJSONValuePtr qemuBuildHotpluggableCPUProps(const virDomainVcpuDef *vcpu) + ATTRIBUTE_NONNULL(1); + #endif /* __QEMU_COMMAND_H__*/ -- 2.8.2
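As an example of the output, for the hotplugged ppc64 core shown earlier in the series ('vcpu0', core-id 8) the helper would produce something along these lines:

    { "driver": "host-spapr-cpu-core", "id": "vcpu0", "core-id": 8 }

The socket-id and thread-id keys are omitted because spapr cores only report a core-id, so those fields stay at -1. The resulting object is later used as the argument set of a device_add monitor command.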

The vcpu order information is extracted only for hotpluggable entities, while vcpu definitions belonging to the same hotpluggable entity need to all share the order information. We also can't overwrite it right away in the vcpu info detection code as the order is necessary to add the hotpluggable vcpus enabled on boot in the correct order. The helper will store the order information in places where we are certain that it's necessary. --- src/qemu/qemu_domain.c | 34 ++++++++++++++++++++++++++++++++++ src/qemu/qemu_domain.h | 3 +++ src/qemu/qemu_process.c | 2 ++ 3 files changed, 39 insertions(+) diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c index 69e1e38..aa93498 100644 --- a/src/qemu/qemu_domain.c +++ b/src/qemu/qemu_domain.c @@ -5999,3 +5999,37 @@ qemuDomainVcpuHotplugIsInOrder(virDomainDefPtr def) return seenonlinevcpus == virDomainDefGetVcpus(def); } + + +/** + * qemuDomainVcpuPersistOrder: + * @def: domain definition + * + * Saves the order of vcpus detected from qemu to the domain definition. + * The private data note the order only for the entry describing the + * hotpluggable entity. This function copies the order into the definition part + * of all sub entities. + */ +void +qemuDomainVcpuPersistOrder(virDomainDefPtr def) +{ + size_t maxvcpus = virDomainDefGetVcpusMax(def); + virDomainVcpuDefPtr vcpu; + qemuDomainVcpuPrivatePtr vcpupriv; + unsigned int prevorder = 0; + size_t i; + + for (i = 0; i < maxvcpus; i++) { + vcpu = virDomainDefGetVcpu(def, i); + vcpupriv = QEMU_DOMAIN_VCPU_PRIVATE(vcpu); + + if (!vcpu->online) { + vcpu->order = 0; + } else { + if (vcpupriv->enable_id != 0) + prevorder = vcpupriv->enable_id; + + vcpu->order = prevorder; + } + } +} diff --git a/src/qemu/qemu_domain.h b/src/qemu/qemu_domain.h index b873a8b..13c0372 100644 --- a/src/qemu/qemu_domain.h +++ b/src/qemu/qemu_domain.h @@ -725,4 +725,7 @@ int qemuDomainPrepareChannel(virDomainChrDefPtr chr, bool qemuDomainVcpuHotplugIsInOrder(virDomainDefPtr def) ATTRIBUTE_NONNULL(1); +void qemuDomainVcpuPersistOrder(virDomainDefPtr def) + ATTRIBUTE_NONNULL(1); + #endif /* __QEMU_DOMAIN_H__ */ diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c index 3cf9d8c..f3915a5 100644 --- a/src/qemu/qemu_process.c +++ b/src/qemu/qemu_process.c @@ -5243,6 +5243,8 @@ qemuProcessLaunch(virConnectPtr conn, if (qemuDomainValidateVcpuInfo(vm) < 0) goto cleanup; + qemuDomainVcpuPersistOrder(vm->def); + VIR_DEBUG("Detecting IOThread PIDs"); if (qemuProcessDetectIOThreadPIDs(driver, vm, asyncJob) < 0) goto cleanup; -- 2.8.2
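A short worked example using the ppc64-hotplug-2 data: the enable order is recorded only on the first vcpu of each hotpluggable entity (enable_id 1, 2 and 3 on libvirt ids 0, 8 and 16), so after the copy the definition ends up with:

    libvirt ids 0-7   -> order='1'
    libvirt ids 8-15  -> order='2'
    libvirt ids 16-23 -> order='3'

Offline vcpus have their order reset to 0 so that it is not formatted into the XML.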

Add support for using the new approach to hotplug vcpus using device_add during startup of qemu to allow sparse vcpu topologies. There are a few limitations imposed by qemu on the supported configuration: - vcpu0 needs to be always present and not hotpluggable - non-hotpluggable cpus need to be ordered at the beginning - order of the vcpus needs to be unique for every single hotpluggable entity Qemu also doesn't really allow to query the information necessary to start a VM with the vcpus directly on the commandline. Fortunately they can be hotplugged during startup. The new hotplug code uses the following approach: - non-hotpluggable vcpus are counted and put to the -smp option - qemu is started - qemu is queried for the necessary information - the configuration is checked - the hotpluggable vcpus are hotplugged - vcpus are started This patch adds a lot of checking code and enables the support to specify the individual vcpu element with qemu. --- src/qemu/qemu_command.c | 20 ++- src/qemu/qemu_domain.c | 76 ++++++++- src/qemu/qemu_process.c | 173 +++++++++++++++++++++ .../qemuxml2argv-cpu-hotplug-startup.args | 20 +++ .../qemuxml2argv-cpu-hotplug-startup.xml | 29 ++++ tests/qemuxml2argvtest.c | 2 + 6 files changed, 315 insertions(+), 5 deletions(-) create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-cpu-hotplug-startup.args create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-cpu-hotplug-startup.xml diff --git a/src/qemu/qemu_command.c b/src/qemu/qemu_command.c index 28e5a7e..c1dc390 100644 --- a/src/qemu/qemu_command.c +++ b/src/qemu/qemu_command.c @@ -7082,17 +7082,29 @@ qemuBuildMachineCommandLine(virCommandPtr cmd, static int qemuBuildSmpCommandLine(virCommandPtr cmd, - const virDomainDef *def) + virDomainDefPtr def) { char *smp; virBuffer buf = VIR_BUFFER_INITIALIZER; + unsigned int maxvcpus = virDomainDefGetVcpusMax(def); + unsigned int nvcpus = 0; + virDomainVcpuDefPtr vcpu; + size_t i; + + /* count un-hotpluggable enabled vcpus. 
Hotpluggable ones will be added + * in a different way */ + for (i = 0; i < maxvcpus; i++) { + vcpu = virDomainDefGetVcpu(def, i); + if (vcpu->online && vcpu->hotpluggable == VIR_TRISTATE_BOOL_NO) + nvcpus++; + } virCommandAddArg(cmd, "-smp"); - virBufferAsprintf(&buf, "%u", virDomainDefGetVcpus(def)); + virBufferAsprintf(&buf, "%u", nvcpus); - if (virDomainDefHasVcpusOffline(def)) - virBufferAsprintf(&buf, ",maxcpus=%u", virDomainDefGetVcpusMax(def)); + if (nvcpus != maxvcpus) + virBufferAsprintf(&buf, ",maxcpus=%u", maxvcpus); /* sockets, cores, and threads are either all zero * or all non-zero, thus checking one of them is enough */ if (def->cpu && def->cpu->sockets) { diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c index aa93498..1602824 100644 --- a/src/qemu/qemu_domain.c +++ b/src/qemu/qemu_domain.c @@ -2253,6 +2253,76 @@ qemuDomainRecheckInternalPaths(virDomainDefPtr def, static int +qemuDomainDefVcpusPostParse(virDomainDefPtr def) +{ + unsigned int maxvcpus = virDomainDefGetVcpusMax(def); + virDomainVcpuDefPtr vcpu; + virDomainVcpuDefPtr prevvcpu; + size_t i; + bool has_order = false; + + /* vcpu 0 needs to be present, first, and non-hotpluggable */ + vcpu = virDomainDefGetVcpu(def, 0); + if (!vcpu->online) { + virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s", + _("vcpu 0 can't be offline")); + return -1; + } + if (vcpu->hotpluggable == VIR_TRISTATE_BOOL_YES) { + virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s", + _("vcpu0 can't be hotpluggable")); + return -1; + } + if (vcpu->order != 0 && vcpu->order != 1) { + virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s", + _("vcpu0 must be enabled first")); + return -1; + } + + if (vcpu->order != 0) + has_order = true; + + prevvcpu = vcpu; + + /* all online vcpus or no online vcpu need to have order set */ + for (i = 1; i < maxvcpus; i++) { + vcpu = virDomainDefGetVcpu(def, i); + + if (vcpu->online && + (vcpu->order != 0) != has_order) { + virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s", + _("all vcpus must have either set or unset order")); + return -1; + } + + /* few conditions for non-hotpluggable (thus onine) vcpus */ + if (vcpu->hotpluggable == VIR_TRISTATE_BOOL_NO) { + /* they can be ordered only at the beinning */ + if (prevvcpu->hotpluggable == VIR_TRISTATE_BOOL_YES) { + virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s", + _("online non-hotpluggable vcpus need to be at " + "the beginning")); + return -1; + } + + /* they need to be in order (qemu doesn't support any order yet). 
+ * Also note that multiple vcpus may share order on some platforms */ + if (prevvcpu->order > vcpu->order) { + virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s", + _("online non-hotpluggable vcpus must be ordered " + "in ascending order")); + return -1; + } + } + + prevvcpu = vcpu; + } + + return 0; +} + + +static int qemuDomainDefPostParse(virDomainDefPtr def, virCapsPtr caps, unsigned int parseFlags, @@ -2307,6 +2377,9 @@ qemuDomainDefPostParse(virDomainDefPtr def, if (virSecurityManagerVerify(driver->securityManager, def) < 0) goto cleanup; + if (qemuDomainDefVcpusPostParse(def) < 0) + goto cleanup; + ret = 0; cleanup: virObjectUnref(qemuCaps); @@ -2709,7 +2782,8 @@ virDomainDefParserConfig virQEMUDriverDomainDefParserConfig = { .deviceValidateCallback = qemuDomainDeviceDefValidate, .features = VIR_DOMAIN_DEF_FEATURE_MEMORY_HOTPLUG | - VIR_DOMAIN_DEF_FEATURE_OFFLINE_VCPUPIN + VIR_DOMAIN_DEF_FEATURE_OFFLINE_VCPUPIN | + VIR_DOMAIN_DEF_FEATURE_INDIVIDUAL_VCPUS, }; diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c index f3915a5..c31d2ad 100644 --- a/src/qemu/qemu_process.c +++ b/src/qemu/qemu_process.c @@ -4760,6 +4760,167 @@ qemuProcessSetupIOThreads(virDomainObjPtr vm) } +static int +qemuProcessValidateHotpluggableVcpus(virDomainDefPtr def) +{ + virDomainVcpuDefPtr vcpu; + virDomainVcpuDefPtr subvcpu; + qemuDomainVcpuPrivatePtr vcpupriv; + unsigned int maxvcpus = virDomainDefGetVcpusMax(def); + size_t i = 0; + size_t j; + virBitmapPtr ordermap = NULL; + int ret = -1; + + if (!(ordermap = virBitmapNew(maxvcpus))) + goto cleanup; + + /* validate: + * - all hotpluggable entities to be hotplugged have the correct data + * - vcpus belonging to a hotpluggable entity share configuration + * - order of the hotpluggable entities is unique + */ + for (i = 0; i < maxvcpus; i++) { + vcpu = virDomainDefGetVcpu(def, i); + vcpupriv = QEMU_DOMAIN_VCPU_PRIVATE(vcpu); + + /* skip over hotpluggable entities */ + if (vcpupriv->vcpus == 0) + continue; + + if (vcpu->order != 0) { + if (virBitmapIsBitSet(ordermap, vcpu->order - 1)) { + virReportError(VIR_ERR_CONFIG_UNSUPPORTED, + _("duplicate vcpu order '%u'"), vcpu->order - 1); + goto cleanup; + } + + ignore_value(virBitmapSetBit(ordermap, vcpu->order - 1)); + } + + + for (j = i + 1; j < (i + vcpupriv->vcpus); j++) { + subvcpu = virDomainDefGetVcpu(def, j); + if (subvcpu->hotpluggable != vcpu->hotpluggable || + subvcpu->online != vcpu->online || + subvcpu->order != vcpu->order) { + virReportError(VIR_ERR_CONFIG_UNSUPPORTED, + _("vcpus '%zu' and '%zu' are in the same hotplug " + "group but differ in configuration"), i, j); + goto cleanup; + } + } + + if (vcpu->online && vcpu->hotpluggable == VIR_TRISTATE_BOOL_YES) { + if ((vcpupriv->socket_id == -1 && vcpupriv->core_id == -1 && + vcpupriv->thread_id == -1) || + !vcpupriv->type) { + virReportError(VIR_ERR_CONFIG_UNSUPPORTED, + _("vcpu '%zu' is missing hotplug data"), i); + goto cleanup; + } + } + } + + ret = 0; + cleanup: + virBitmapFree(ordermap); + return ret; +} + + +static int +qemuDomainHasHotpluggableStartupVcpus(virDomainDefPtr def) +{ + size_t maxvcpus = virDomainDefGetVcpusMax(def); + virDomainVcpuDefPtr vcpu; + size_t i; + + for (i = 0; i < maxvcpus; i++) { + vcpu = virDomainDefGetVcpu(def, i); + + if (vcpu->online && vcpu->hotpluggable == VIR_TRISTATE_BOOL_YES) + return true; + } + + return false; +} + + +static int +qemuProcessVcpusSortOrder(const void *a, + const void *b) +{ + const virDomainVcpuDef *vcpua = a; + const virDomainVcpuDef *vcpub = b; + + return vcpua->order - vcpub->order; 
+} + + +static int +qemuProcessSetupHotpluggableVcpus(virQEMUDriverPtr driver, + virDomainObjPtr vm, + qemuDomainAsyncJob asyncJob) +{ + unsigned int maxvcpus = virDomainDefGetVcpusMax(vm->def); + virDomainVcpuDefPtr vcpu; + qemuDomainVcpuPrivatePtr vcpupriv; + virJSONValuePtr vcpuprops = NULL; + size_t i; + int ret = -1; + int rc; + + virDomainVcpuDefPtr *bootHotplug = NULL; + size_t nbootHotplug = 0; + + for (i = 0; i < maxvcpus; i++) { + vcpu = virDomainDefGetVcpu(vm->def, i); + vcpupriv = QEMU_DOMAIN_VCPU_PRIVATE(vcpu); + + if (vcpu->hotpluggable == VIR_TRISTATE_BOOL_YES && vcpu->online && + vcpupriv->vcpus != 0) { + if (virAsprintf(&vcpupriv->alias, "vcpu%zu", i) < 0) + goto cleanup; + + if (VIR_APPEND_ELEMENT(bootHotplug, nbootHotplug, vcpu) < 0) + goto cleanup; + } + } + + qsort(bootHotplug, nbootHotplug, sizeof(*bootHotplug), + qemuProcessVcpusSortOrder); + + for (i = 0; i < nbootHotplug; i++) { + vcpu = bootHotplug[i]; + + if (!(vcpuprops = qemuBuildHotpluggableCPUProps(vcpu))) + goto cleanup; + + if (qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob) < 0) + goto cleanup; + + rc = qemuMonitorAddDeviceArgs(qemuDomainGetMonitor(vm), vcpuprops); + vcpuprops = NULL; + + if (qemuDomainObjExitMonitor(driver, vm) < 0) + goto cleanup; + + if (rc < 0) + goto cleanup; + + virJSONValueFree(vcpuprops); + } + + ret = 0; + + cleanup: + VIR_FREE(bootHotplug); + virJSONValueFree(vcpuprops); + return ret; +} + + /** * qemuProcessPrepareDomain * @@ -5236,6 +5397,18 @@ qemuProcessLaunch(virConnectPtr conn, if (qemuSetupCpusetMems(vm) < 0) goto cleanup; + VIR_DEBUG("setting up hotpluggable cpus"); + if (qemuDomainHasHotpluggableStartupVcpus(vm->def)) { + if (qemuDomainRefreshVcpuInfo(driver, vm, asyncJob, false) < 0) + goto cleanup; + + if (qemuProcessValidateHotpluggableVcpus(vm->def) < 0) + goto cleanup; + + if (qemuProcessSetupHotpluggableVcpus(driver, vm, asyncJob) < 0) + goto cleanup; + } + VIR_DEBUG("Refreshing VCPU info"); if (qemuDomainRefreshVcpuInfo(driver, vm, asyncJob, false) < 0) goto cleanup; diff --git a/tests/qemuxml2argvdata/qemuxml2argv-cpu-hotplug-startup.args b/tests/qemuxml2argvdata/qemuxml2argv-cpu-hotplug-startup.args new file mode 100644 index 0000000..035f250 --- /dev/null +++ b/tests/qemuxml2argvdata/qemuxml2argv-cpu-hotplug-startup.args @@ -0,0 +1,20 @@ +LC_ALL=C \ +PATH=/bin \ +HOME=/home/test \ +USER=test \ +LOGNAME=test \ +QEMU_AUDIO_DRV=none \ +/usr/bin/qemu \ +-name QEMUGuest1 \ +-S \ +-M pc \ +-m 214 \ +-smp 1,maxcpus=6,sockets=3,cores=2,threads=1 \ +-uuid c7a5fdbd-edaf-9455-926a-d65c16db1809 \ +-nographic \ +-nodefaults \ +-monitor unix:/tmp/lib/domain--1-QEMUGuest1/monitor.sock,server,nowait \ +-no-acpi \ +-boot n \ +-usb \ +-device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3 diff --git a/tests/qemuxml2argvdata/qemuxml2argv-cpu-hotplug-startup.xml b/tests/qemuxml2argvdata/qemuxml2argv-cpu-hotplug-startup.xml new file mode 100644 index 0000000..58718aa --- /dev/null +++ b/tests/qemuxml2argvdata/qemuxml2argv-cpu-hotplug-startup.xml @@ -0,0 +1,29 @@ +<domain type='qemu'> + <name>QEMUGuest1</name> + <uuid>c7a5fdbd-edaf-9455-926a-d65c16db1809</uuid> + <memory unit='KiB'>219100</memory> + <currentMemory unit='KiB'>219100</currentMemory> + <vcpu placement='static' current='3'>6</vcpu> + <vcpus> + <vcpu id='0' enabled='yes' hotpluggable='no' order='1'/> + <vcpu id='1' enabled='no' hotpluggable='yes'/> + <vcpu id='2' enabled='no' hotpluggable='yes'/> + <vcpu id='3' enabled='no' hotpluggable='yes'/> + <vcpu id='4' enabled='yes' hotpluggable='yes' order='2'/> + <vcpu 
id='5' enabled='yes' hotpluggable='yes' order='3'/> + </vcpus> + <os> + <type arch='x86_64' machine='pc'>hvm</type> + <boot dev='network'/> + </os> + <cpu> + <topology sockets="3" cores="2" threads="1"/> + </cpu> + <clock offset='utc'/> + <on_poweroff>destroy</on_poweroff> + <on_reboot>restart</on_reboot> + <on_crash>destroy</on_crash> + <devices> + <emulator>/usr/bin/qemu</emulator> + </devices> +</domain> diff --git a/tests/qemuxml2argvtest.c b/tests/qemuxml2argvtest.c index e8b8cb4..39abe72 100644 --- a/tests/qemuxml2argvtest.c +++ b/tests/qemuxml2argvtest.c @@ -2106,6 +2106,8 @@ mymain(void) DO_TEST("intel-iommu", QEMU_CAPS_DEVICE_PCI_BRIDGE, QEMU_CAPS_DEVICE_DMI_TO_PCI_BRIDGE, QEMU_CAPS_DEVICE_INTEL_IOMMU); + DO_TEST("cpu-hotplug-startup", QEMU_CAPS_QUERY_HOTPLUGGABLE_CPUS); + qemuTestDriverFree(&driver); return ret == 0 ? EXIT_SUCCESS : EXIT_FAILURE; -- 2.8.2

On 08/19/2016 10:38 AM, Peter Krempa wrote:
Add support for using the new device_add-based approach to hotplugging vcpus during startup of qemu, to allow sparse vcpu topologies.
There are a few limitations imposed by qemu on the supported configuration:
- vcpu0 needs to be always present and not hotpluggable
- non-hotpluggable cpus need to be ordered at the beginning
- order of the vcpus needs to be unique for every single hotpluggable entity
Qemu also doesn't really allow querying the information necessary to start a VM with the vcpus specified directly on the command line. Fortunately they can be hotplugged during startup.
The new hotplug code uses the following approach:
- non-hotpluggable vcpus are counted and put to the -smp option
- qemu is started
- qemu is queried for the necessary information
- the configuration is checked
- the hotpluggable vcpus are hotplugged
- vcpus are started
Should these "rules" be listed in the html somewhere? Certainly could be challenging for the first few people to try and set something up... I it seems the "order" (from the XML) only matters if online, true?
This patch adds a lot of checking code and enables support for specifying the individual vcpu elements with qemu. --- src/qemu/qemu_command.c | 20 ++- src/qemu/qemu_domain.c | 76 ++++++++- src/qemu/qemu_process.c | 173 +++++++++++++++++++++ .../qemuxml2argv-cpu-hotplug-startup.args | 20 +++ .../qemuxml2argv-cpu-hotplug-startup.xml | 29 ++++ tests/qemuxml2argvtest.c | 2 + 6 files changed, 315 insertions(+), 5 deletions(-) create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-cpu-hotplug-startup.args create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-cpu-hotplug-startup.xml
diff --git a/src/qemu/qemu_command.c b/src/qemu/qemu_command.c index 28e5a7e..c1dc390 100644 --- a/src/qemu/qemu_command.c +++ b/src/qemu/qemu_command.c @@ -7082,17 +7082,29 @@ qemuBuildMachineCommandLine(virCommandPtr cmd,
static int qemuBuildSmpCommandLine(virCommandPtr cmd, - const virDomainDef *def) + virDomainDefPtr def) { char *smp; virBuffer buf = VIR_BUFFER_INITIALIZER; + unsigned int maxvcpus = virDomainDefGetVcpusMax(def); + unsigned int nvcpus = 0; + virDomainVcpuDefPtr vcpu; + size_t i; + + /* count un-hotpluggable enabled vcpus. Hotpluggable ones will be added + * in a different way */ + for (i = 0; i < maxvcpus; i++) { + vcpu = virDomainDefGetVcpu(def, i); + if (vcpu->online && vcpu->hotpluggable == VIR_TRISTATE_BOOL_NO) + nvcpus++; + }
virCommandAddArg(cmd, "-smp");
- virBufferAsprintf(&buf, "%u", virDomainDefGetVcpus(def)); + virBufferAsprintf(&buf, "%u", nvcpus);
- if (virDomainDefHasVcpusOffline(def)) - virBufferAsprintf(&buf, ",maxcpus=%u", virDomainDefGetVcpusMax(def)); + if (nvcpus != maxvcpus) + virBufferAsprintf(&buf, ",maxcpus=%u", maxvcpus); /* sockets, cores, and threads are either all zero * or all non-zero, thus checking one of them is enough */ if (def->cpu && def->cpu->sockets) { diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c index aa93498..1602824 100644 --- a/src/qemu/qemu_domain.c +++ b/src/qemu/qemu_domain.c @@ -2253,6 +2253,76 @@ qemuDomainRecheckInternalPaths(virDomainDefPtr def,
static int +qemuDomainDefVcpusPostParse(virDomainDefPtr def) +{ + unsigned int maxvcpus = virDomainDefGetVcpusMax(def); + virDomainVcpuDefPtr vcpu; + virDomainVcpuDefPtr prevvcpu; + size_t i; + bool has_order = false; + + /* vcpu 0 needs to be present, first, and non-hotpluggable */ + vcpu = virDomainDefGetVcpu(def, 0); + if (!vcpu->online) { + virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s", + _("vcpu 0 can't be offline")); + return -1; + } + if (vcpu->hotpluggable == VIR_TRISTATE_BOOL_YES) { + virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s", + _("vcpu0 can't be hotpluggable")); + return -1; + } + if (vcpu->order != 0 && vcpu->order != 1) { + virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s", + _("vcpu0 must be enabled first")); + return -1; + } + + if (vcpu->order != 0) + has_order = true; + + prevvcpu = vcpu; + + /* all online vcpus or no online vcpu need to have order set */
s/no online/non online/ ?
+ for (i = 1; i < maxvcpus; i++) { + vcpu = virDomainDefGetVcpu(def, i); + + if (vcpu->online && + (vcpu->order != 0) != has_order) { + virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s", + _("all vcpus must have either set or unset order")); + return -1; + } + + /* few conditions for non-hotpluggable (thus onine) vcpus */
s/onine/online
+ if (vcpu->hotpluggable == VIR_TRISTATE_BOOL_NO) { + /* they can be ordered only at the beinning */
s/beinning/beginning
+ if (prevvcpu->hotpluggable == VIR_TRISTATE_BOOL_YES) { + virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s", + _("online non-hotpluggable vcpus need to be at " + "the beginning"));
"need to be ordered prior to hotplugable vcpus" ? Yours works, just trying to work "order" in there since that's what's important.
+ return -1; + } + + /* they need to be in order (qemu doesn't support any order yet). + * Also note that multiple vcpus may share order on some platforms */ + if (prevvcpu->order > vcpu->order) { + virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s", + _("online non-hotpluggable vcpus must be ordered " + "in ascending order")); + return -1; + } + } + + prevvcpu = vcpu; + } + + return 0; +} + + +static int qemuDomainDefPostParse(virDomainDefPtr def, virCapsPtr caps, unsigned int parseFlags, @@ -2307,6 +2377,9 @@ qemuDomainDefPostParse(virDomainDefPtr def, if (virSecurityManagerVerify(driver->securityManager, def) < 0) goto cleanup;
+ if (qemuDomainDefVcpusPostParse(def) < 0) + goto cleanup; + ret = 0; cleanup: virObjectUnref(qemuCaps); @@ -2709,7 +2782,8 @@ virDomainDefParserConfig virQEMUDriverDomainDefParserConfig = { .deviceValidateCallback = qemuDomainDeviceDefValidate,
.features = VIR_DOMAIN_DEF_FEATURE_MEMORY_HOTPLUG | - VIR_DOMAIN_DEF_FEATURE_OFFLINE_VCPUPIN + VIR_DOMAIN_DEF_FEATURE_OFFLINE_VCPUPIN | + VIR_DOMAIN_DEF_FEATURE_INDIVIDUAL_VCPUS, };
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c index f3915a5..c31d2ad 100644 --- a/src/qemu/qemu_process.c +++ b/src/qemu/qemu_process.c @@ -4760,6 +4760,167 @@ qemuProcessSetupIOThreads(virDomainObjPtr vm) }
+static int +qemuProcessValidateHotpluggableVcpus(virDomainDefPtr def) +{ + virDomainVcpuDefPtr vcpu; + virDomainVcpuDefPtr subvcpu; + qemuDomainVcpuPrivatePtr vcpupriv; + unsigned int maxvcpus = virDomainDefGetVcpusMax(def); + size_t i = 0; + size_t j; + virBitmapPtr ordermap = NULL; + int ret = -1; + + if (!(ordermap = virBitmapNew(maxvcpus))) + goto cleanup; + + /* validate: + * - all hotpluggable entities to be hotplugged have the correct data + * - vcpus belonging to a hotpluggable entity share configuration + * - order of the hotpluggable entities is unique + */ + for (i = 0; i < maxvcpus; i++) { + vcpu = virDomainDefGetVcpu(def, i); + vcpupriv = QEMU_DOMAIN_VCPU_PRIVATE(vcpu); + + /* skip over hotpluggable entities */ + if (vcpupriv->vcpus == 0) + continue; + + if (vcpu->order != 0) { + if (virBitmapIsBitSet(ordermap, vcpu->order - 1)) { + virReportError(VIR_ERR_CONFIG_UNSUPPORTED, + _("duplicate vcpu order '%u'"), vcpu->order - 1); + goto cleanup; + } + + ignore_value(virBitmapSetBit(ordermap, vcpu->order - 1)); + } + + + for (j = i + 1; j < (i + vcpupriv->vcpus); j++) { + subvcpu = virDomainDefGetVcpu(def, j); + if (subvcpu->hotpluggable != vcpu->hotpluggable || + subvcpu->online != vcpu->online || + subvcpu->order != vcpu->order) { + virReportError(VIR_ERR_CONFIG_UNSUPPORTED, + _("vcpus '%zu' and '%zu' are in the same hotplug " + "group but differ in configuration"), i, j); + goto cleanup; + } + } + + if (vcpu->online && vcpu->hotpluggable == VIR_TRISTATE_BOOL_YES) { + if ((vcpupriv->socket_id == -1 && vcpupriv->core_id == -1 && + vcpupriv->thread_id == -1) || + !vcpupriv->type) { + virReportError(VIR_ERR_CONFIG_UNSUPPORTED, + _("vcpu '%zu' is missing hotplug data"), i); + goto cleanup; + } + } + } + + ret = 0; + cleanup: + virBitmapFree(ordermap); + return ret; +} + + +static int +qemuDomainHasHotpluggableStartupVcpus(virDomainDefPtr def) +{ + size_t maxvcpus = virDomainDefGetVcpusMax(def); + virDomainVcpuDefPtr vcpu; + size_t i; + + for (i = 0; i < maxvcpus; i++) { + vcpu = virDomainDefGetVcpu(def, i); + + if (vcpu->online && vcpu->hotpluggable == VIR_TRISTATE_BOOL_YES) + return true; + } + + return false; +} + + +static int +qemuProcessVcpusSortOrder(const void *a, + const void *b) +{ + const virDomainVcpuDef *vcpua = a; + const virDomainVcpuDef *vcpub = b; + + return vcpua->order - vcpub->order; +} + + +static int +qemuProcessSetupHotpluggableVcpus(virQEMUDriverPtr driver, + virDomainObjPtr vm, + qemuDomainAsyncJob asyncJob) +{ + unsigned int maxvcpus = virDomainDefGetVcpusMax(vm->def); + virDomainVcpuDefPtr vcpu; + qemuDomainVcpuPrivatePtr vcpupriv; + virJSONValuePtr vcpuprops = NULL; + size_t i; + int ret = -1; + int rc; + + virDomainVcpuDefPtr *bootHotplug = NULL; + size_t nbootHotplug = 0; + + for (i = 0; i < maxvcpus; i++) { + vcpu = virDomainDefGetVcpu(vm->def, i); + vcpupriv = QEMU_DOMAIN_VCPU_PRIVATE(vcpu); + + if (vcpu->hotpluggable == VIR_TRISTATE_BOOL_YES && vcpu->online && + vcpupriv->vcpus != 0) { + if (virAsprintf(&vcpupriv->alias, "vcpu%zu", i) < 0) + goto cleanup;
Is it possible the ->alias is already filled in?? Just checking we cannot overwrite here... I don't believe so since this is called at Launch only...
+ + if (VIR_APPEND_ELEMENT(bootHotplug, nbootHotplug, vcpu) < 0) + goto cleanup; + } + }
Yet another "roll your eyes" Coverity complaint... if nbootHotplug == 0, then bootHotplug == NULL, which is bad for qsort. [it was the only new one too] John
+ + qsort(bootHotplug, nbootHotplug, sizeof(*bootHotplug), + qemuProcessVcpusSortOrder); + + for (i = 0; i < nbootHotplug; i++) { + vcpu = bootHotplug[i]; + + if (!(vcpuprops = qemuBuildHotpluggableCPUProps(vcpu))) + goto cleanup; + + if (qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob) < 0) + goto cleanup; + + rc = qemuMonitorAddDeviceArgs(qemuDomainGetMonitor(vm), vcpuprops); + vcpuprops = NULL; + + if (qemuDomainObjExitMonitor(driver, vm) < 0) + goto cleanup; + + if (rc < 0) + goto cleanup; + + virJSONValueFree(vcpuprops); + } + + ret = 0; + + cleanup: + VIR_FREE(bootHotplug); + virJSONValueFree(vcpuprops); + return ret; +} + + /** * qemuProcessPrepareDomain * @@ -5236,6 +5397,18 @@ qemuProcessLaunch(virConnectPtr conn, if (qemuSetupCpusetMems(vm) < 0) goto cleanup;
+ VIR_DEBUG("setting up hotpluggable cpus"); + if (qemuDomainHasHotpluggableStartupVcpus(vm->def)) { + if (qemuDomainRefreshVcpuInfo(driver, vm, asyncJob, false) < 0) + goto cleanup; + + if (qemuProcessValidateHotpluggableVcpus(vm->def) < 0) + goto cleanup; + + if (qemuProcessSetupHotpluggableVcpus(driver, vm, asyncJob) < 0) + goto cleanup; + } + VIR_DEBUG("Refreshing VCPU info"); if (qemuDomainRefreshVcpuInfo(driver, vm, asyncJob, false) < 0) goto cleanup; diff --git a/tests/qemuxml2argvdata/qemuxml2argv-cpu-hotplug-startup.args b/tests/qemuxml2argvdata/qemuxml2argv-cpu-hotplug-startup.args new file mode 100644 index 0000000..035f250 --- /dev/null +++ b/tests/qemuxml2argvdata/qemuxml2argv-cpu-hotplug-startup.args @@ -0,0 +1,20 @@ +LC_ALL=C \ +PATH=/bin \ +HOME=/home/test \ +USER=test \ +LOGNAME=test \ +QEMU_AUDIO_DRV=none \ +/usr/bin/qemu \ +-name QEMUGuest1 \ +-S \ +-M pc \ +-m 214 \ +-smp 1,maxcpus=6,sockets=3,cores=2,threads=1 \ +-uuid c7a5fdbd-edaf-9455-926a-d65c16db1809 \ +-nographic \ +-nodefaults \ +-monitor unix:/tmp/lib/domain--1-QEMUGuest1/monitor.sock,server,nowait \ +-no-acpi \ +-boot n \ +-usb \ +-device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3 diff --git a/tests/qemuxml2argvdata/qemuxml2argv-cpu-hotplug-startup.xml b/tests/qemuxml2argvdata/qemuxml2argv-cpu-hotplug-startup.xml new file mode 100644 index 0000000..58718aa --- /dev/null +++ b/tests/qemuxml2argvdata/qemuxml2argv-cpu-hotplug-startup.xml @@ -0,0 +1,29 @@ +<domain type='qemu'> + <name>QEMUGuest1</name> + <uuid>c7a5fdbd-edaf-9455-926a-d65c16db1809</uuid> + <memory unit='KiB'>219100</memory> + <currentMemory unit='KiB'>219100</currentMemory> + <vcpu placement='static' current='3'>6</vcpu> + <vcpus> + <vcpu id='0' enabled='yes' hotpluggable='no' order='1'/> + <vcpu id='1' enabled='no' hotpluggable='yes'/> + <vcpu id='2' enabled='no' hotpluggable='yes'/> + <vcpu id='3' enabled='no' hotpluggable='yes'/> + <vcpu id='4' enabled='yes' hotpluggable='yes' order='2'/> + <vcpu id='5' enabled='yes' hotpluggable='yes' order='3'/> + </vcpus> + <os> + <type arch='x86_64' machine='pc'>hvm</type> + <boot dev='network'/> + </os> + <cpu> + <topology sockets="3" cores="2" threads="1"/> + </cpu> + <clock offset='utc'/> + <on_poweroff>destroy</on_poweroff> + <on_reboot>restart</on_reboot> + <on_crash>destroy</on_crash> + <devices> + <emulator>/usr/bin/qemu</emulator> + </devices> +</domain> diff --git a/tests/qemuxml2argvtest.c b/tests/qemuxml2argvtest.c index e8b8cb4..39abe72 100644 --- a/tests/qemuxml2argvtest.c +++ b/tests/qemuxml2argvtest.c @@ -2106,6 +2106,8 @@ mymain(void) DO_TEST("intel-iommu", QEMU_CAPS_DEVICE_PCI_BRIDGE, QEMU_CAPS_DEVICE_DMI_TO_PCI_BRIDGE, QEMU_CAPS_DEVICE_INTEL_IOMMU);
+ DO_TEST("cpu-hotplug-startup", QEMU_CAPS_QUERY_HOTPLUGGABLE_CPUS); + qemuTestDriverFree(&driver);
return ret == 0 ? EXIT_SUCCESS : EXIT_FAILURE;

To allow unplugging the vcpus, hotplugging of vcpus on platforms which require to plug multiple logical vcpus at once or pluging them in in arbitrary order it's necessary to use the new device_add interface for vcpu hotplug. This patch adds support for the device_add interface using the old setvcpus API by implementing an algorihm to select the appropriate entities to plug in. --- src/qemu/qemu_driver.c | 155 +++++++++++++++++++++++++++++++++++++++++++------ 1 file changed, 138 insertions(+), 17 deletions(-) diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index d083e46..2f88d23 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -4594,46 +4594,66 @@ qemuDomainHotplugAddVcpu(virQEMUDriverPtr driver, virDomainObjPtr vm, unsigned int vcpu) { - qemuDomainObjPrivatePtr priv = vm->privateData; + virJSONValuePtr vcpuprops = NULL; virDomainVcpuDefPtr vcpuinfo = virDomainDefGetVcpu(vm->def, vcpu); qemuDomainVcpuPrivatePtr vcpupriv = QEMU_DOMAIN_VCPU_PRIVATE(vcpuinfo); + unsigned int nvcpus = vcpupriv->vcpus; + bool newhotplug = qemuDomainSupportsNewVcpuHotplug(vm); int ret = -1; int rc; int oldvcpus = virDomainDefGetVcpus(vm->def); + size_t i; - if (vcpuinfo->online) { - virReportError(VIR_ERR_INVALID_ARG, - _("vCPU '%u' is already online"), vcpu); - return -1; + if (newhotplug) { + if (virAsprintf(&vcpupriv->alias, "vcpu%u", vcpu) < 0) + goto cleanup; + + if (!(vcpuprops = qemuBuildHotpluggableCPUProps(vcpuinfo))) + goto cleanup; } qemuDomainObjEnterMonitor(driver, vm); - rc = qemuMonitorSetCPU(priv->mon, vcpu, true); + if (newhotplug) { + rc = qemuMonitorAddDeviceArgs(qemuDomainGetMonitor(vm), vcpuprops); + vcpuprops = NULL; + } else { + rc = qemuMonitorSetCPU(qemuDomainGetMonitor(vm), vcpu, true); + } if (qemuDomainObjExitMonitor(driver, vm) < 0) goto cleanup; - virDomainAuditVcpu(vm, oldvcpus, oldvcpus + 1, "update", rc == 0); + virDomainAuditVcpu(vm, oldvcpus, oldvcpus + nvcpus, "update", rc == 0); if (rc < 0) goto cleanup; - vcpuinfo->online = true; + /* start outputing of the new XML element to allow keeping unpluggability */ + if (newhotplug) + vm->def->individualvcpus = true; if (qemuDomainRefreshVcpuInfo(driver, vm, QEMU_ASYNC_JOB_NONE, false) < 0) goto cleanup; - if (qemuDomainValidateVcpuInfo(vm) < 0) - goto cleanup; + for (i = vcpu; i < vcpu + nvcpus; i++) { + vcpuinfo = virDomainDefGetVcpu(vm->def, i); + vcpupriv = QEMU_DOMAIN_VCPU_PRIVATE(vcpuinfo); + + vcpuinfo->online = true; - if (vcpupriv->tid > 0 && - qemuProcessSetupVcpu(vm, vcpu) < 0) + if (vcpupriv->tid > 0 && + qemuProcessSetupVcpu(vm, i) < 0) + goto cleanup; + } + + if (qemuDomainValidateVcpuInfo(vm) < 0) goto cleanup; ret = 0; cleanup: + virJSONValueFree(vcpuprops); return ret; } @@ -4771,6 +4791,95 @@ qemuDomainSetVcpusMax(virQEMUDriverPtr driver, } +/** + * qemuDomainSelectHotplugVcpuEntities: + * + * @def: domain definition + * @nvcpus: target vcpu count + * @cpumap: vcpu entity IDs filled on success + * + * Tries to find which vcpu entities need to be enabled or disabled to reach + * @nvcpus. This function works in order of the legacy hotplug but is able to + * skip over entries that are added out of order. 
+ */ +static virBitmapPtr +qemuDomainSelectHotplugVcpuEntities(virDomainDefPtr def, + unsigned int nvcpus) +{ + virBitmapPtr ret = NULL; + virDomainVcpuDefPtr vcpu; + qemuDomainVcpuPrivatePtr vcpupriv; + unsigned int maxvcpus = virDomainDefGetVcpusMax(def); + unsigned int curvcpus = virDomainDefGetVcpus(def); + ssize_t i; + + if (!(ret = virBitmapNew(maxvcpus))) + return NULL; + + if (nvcpus > curvcpus) { + for (i = 0; i < maxvcpus && curvcpus < nvcpus; i++) { + vcpu = virDomainDefGetVcpu(def, i); + vcpupriv = QEMU_DOMAIN_VCPU_PRIVATE(vcpu); + + if (vcpu->online) + continue; + + if (vcpupriv->vcpus == 0) + continue; + + curvcpus += vcpupriv->vcpus; + + if (curvcpus > nvcpus) { + virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s", + _("target vm vcpu granularity does not allow the " + "desired vcpu count")); + goto error; + } + + ignore_value(virBitmapSetBit(ret, i)); + } + } else { + for (i = maxvcpus - 1; i >= 0 && curvcpus > nvcpus; i--) { + vcpu = virDomainDefGetVcpu(def, i); + vcpupriv = QEMU_DOMAIN_VCPU_PRIVATE(vcpu); + + if (!vcpu->online) + continue; + + if (vcpupriv->vcpus == 0) + continue; + + if (!vcpupriv->alias) + continue; + + curvcpus -= vcpupriv->vcpus; + + if (curvcpus < nvcpus) { + virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s", + _("target vm vcpu granularity does not allow the " + "desired vcpu count")); + goto error; + } + + ignore_value(virBitmapSetBit(ret, i)); + } + } + + if (curvcpus != nvcpus) { + virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s", + _("failed to find appropriate hotpluggable vcpus to " + "reach the desired target vcpu count")); + goto error; + } + + return ret; + + error: + virBitmapFree(ret); + return NULL; +} + + static int qemuDomainSetVcpusLive(virQEMUDriverPtr driver, virQEMUDriverConfigPtr cfg, @@ -4784,8 +4893,14 @@ qemuDomainSetVcpusLive(virQEMUDriverPtr driver, char *all_nodes_str = NULL; virBitmapPtr all_nodes = NULL; virErrorPtr err = NULL; + virBitmapPtr vcpumap = NULL; + ssize_t nextvcpu = -1; + int rc = 0; int ret = -1; + if (!(vcpumap = qemuDomainSelectHotplugVcpuEntities(vm->def, nvcpus))) + goto cleanup; + if (virNumaIsAvailable() && virCgroupHasController(priv->cgroup, VIR_CGROUP_CONTROLLER_CPUSET)) { if (virCgroupNewThread(priv->cgroup, VIR_CGROUP_THREAD_EMULATOR, 0, @@ -4804,20 +4919,25 @@ qemuDomainSetVcpusLive(virQEMUDriverPtr driver, } if (nvcpus > virDomainDefGetVcpus(vm->def)) { - for (i = virDomainDefGetVcpus(vm->def); i < nvcpus; i++) { - if (qemuDomainHotplugAddVcpu(driver, vm, i) < 0) - goto cleanup; + while ((nextvcpu = virBitmapNextSetBit(vcpumap, nextvcpu)) != -1) { + if ((rc = qemuDomainHotplugAddVcpu(driver, vm, nextvcpu)) < 0) + break; } } else { for (i = virDomainDefGetVcpus(vm->def) - 1; i >= nvcpus; i--) { - if (qemuDomainHotplugDelVcpu(driver, vm, i) < 0) - goto cleanup; + if ((rc = qemuDomainHotplugDelVcpu(driver, vm, i)) < 0) + break; } } + qemuDomainVcpuPersistOrder(vm->def); + if (virDomainSaveStatus(driver->xmlopt, cfg->stateDir, vm, driver->caps) < 0) goto cleanup; + if (rc < 0) + goto cleanup; + ret = 0; cleanup: @@ -4832,6 +4952,7 @@ qemuDomainSetVcpusLive(virQEMUDriverPtr driver, VIR_FREE(all_nodes_str); virBitmapFree(all_nodes); virCgroupFree(&cgroup_temp); + virBitmapFree(vcpumap); return ret; } -- 2.8.2

$SUBJ: s/mondern/modern
On 08/19/2016 10:38 AM, Peter Krempa wrote:
To allow unplugging the vcpus, hotplugging of vcpus on platforms which require to plug multiple logical vcpus at once or pluging them in in
s/pluging/plugging s/in in/in an/
arbitrary order it's necessary to use the new device_add interface for vcpu hotplug.
This patch adds support for the device_add interface using the old setvcpus API by implementing an algorihm to select the appropriate
s/algorihm/algorithm
entities to plug in. --- src/qemu/qemu_driver.c | 155 +++++++++++++++++++++++++++++++++++++++++++------ 1 file changed, 138 insertions(+), 17 deletions(-)
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index d083e46..2f88d23 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -4594,46 +4594,66 @@ qemuDomainHotplugAddVcpu(virQEMUDriverPtr driver, virDomainObjPtr vm, unsigned int vcpu) { - qemuDomainObjPrivatePtr priv = vm->privateData; + virJSONValuePtr vcpuprops = NULL; virDomainVcpuDefPtr vcpuinfo = virDomainDefGetVcpu(vm->def, vcpu); qemuDomainVcpuPrivatePtr vcpupriv = QEMU_DOMAIN_VCPU_PRIVATE(vcpuinfo); + unsigned int nvcpus = vcpupriv->vcpus; + bool newhotplug = qemuDomainSupportsNewVcpuHotplug(vm); int ret = -1; int rc; int oldvcpus = virDomainDefGetVcpus(vm->def); + size_t i;
- if (vcpuinfo->online) { - virReportError(VIR_ERR_INVALID_ARG, - _("vCPU '%u' is already online"), vcpu); - return -1; + if (newhotplug) { + if (virAsprintf(&vcpupriv->alias, "vcpu%u", vcpu) < 0) + goto cleanup;
Right - need to fill this in with something since it would be empty due to qom_path not having /vcpu# when initially queried, but now I wonder whether we "overwrite" this... qemuMonitorJSONProcessHotpluggableCpusReply will generate a new ->alias and qemuMonitorGetCPUInfo will VIR_FREE before stealing... So nope... But I had to go through the process ;-)
+ + if (!(vcpuprops = qemuBuildHotpluggableCPUProps(vcpuinfo))) + goto cleanup; }
qemuDomainObjEnterMonitor(driver, vm);
- rc = qemuMonitorSetCPU(priv->mon, vcpu, true); + if (newhotplug) { + rc = qemuMonitorAddDeviceArgs(qemuDomainGetMonitor(vm), vcpuprops); + vcpuprops = NULL; + } else { + rc = qemuMonitorSetCPU(qemuDomainGetMonitor(vm), vcpu, true); + }
if (qemuDomainObjExitMonitor(driver, vm) < 0) goto cleanup;
- virDomainAuditVcpu(vm, oldvcpus, oldvcpus + 1, "update", rc == 0); + virDomainAuditVcpu(vm, oldvcpus, oldvcpus + nvcpus, "update", rc == 0);
if (rc < 0) goto cleanup;
- vcpuinfo->online = true; + /* start outputing of the new XML element to allow keeping unpluggability */
outputting
+ if (newhotplug) + vm->def->individualvcpus = true;
if (qemuDomainRefreshVcpuInfo(driver, vm, QEMU_ASYNC_JOB_NONE, false) < 0) goto cleanup;
- if (qemuDomainValidateVcpuInfo(vm) < 0) - goto cleanup;
Like qemuDomainRemoveVcpu will "eventually" have - a comment... /* validation requires us to set the expected state prior to calling it */
+ for (i = vcpu; i < vcpu + nvcpus; i++) { + vcpuinfo = virDomainDefGetVcpu(vm->def, i); + vcpupriv = QEMU_DOMAIN_VCPU_PRIVATE(vcpuinfo); + + vcpuinfo->online = true;
- if (vcpupriv->tid > 0 && - qemuProcessSetupVcpu(vm, vcpu) < 0) + if (vcpupriv->tid > 0 && + qemuProcessSetupVcpu(vm, i) < 0) + goto cleanup; + } + + if (qemuDomainValidateVcpuInfo(vm) < 0) goto cleanup;
ret = 0;
cleanup: + virJSONValueFree(vcpuprops); return ret; }
@@ -4771,6 +4791,95 @@ qemuDomainSetVcpusMax(virQEMUDriverPtr driver, }
+/** + * qemuDomainSelectHotplugVcpuEntities: + * + * @def: domain definition + * @nvcpus: target vcpu count + * @cpumap: vcpu entity IDs filled on success + * + * Tries to find which vcpu entities need to be enabled or disabled to reach + * @nvcpus. This function works in order of the legacy hotplug but is able to + * skip over entries that are added out of order. + */ +static virBitmapPtr +qemuDomainSelectHotplugVcpuEntities(virDomainDefPtr def, + unsigned int nvcpus) +{ + virBitmapPtr ret = NULL; + virDomainVcpuDefPtr vcpu; + qemuDomainVcpuPrivatePtr vcpupriv; + unsigned int maxvcpus = virDomainDefGetVcpusMax(def); + unsigned int curvcpus = virDomainDefGetVcpus(def); + ssize_t i; +
Is there an optimization if "nvcpus == curvcpus"? John
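It looks like the quoted code already ends up returning an empty bitmap in that case (neither loop body runs and the final check passes), so an explicit short-circuit would mostly just document that and skip the scan. A hypothetical, untested sketch using the locals from the quoted function, placed right after the virBitmapNew() call:

    /* Hypothetical early exit: the domain is already at the requested
     * vcpu count, so hand back the empty bitmap and let the caller
     * plug/unplug nothing. */
    if (nvcpus == curvcpus)
        return ret;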
+ if (!(ret = virBitmapNew(maxvcpus))) + return NULL; + + if (nvcpus > curvcpus) { + for (i = 0; i < maxvcpus && curvcpus < nvcpus; i++) { + vcpu = virDomainDefGetVcpu(def, i); + vcpupriv = QEMU_DOMAIN_VCPU_PRIVATE(vcpu); + + if (vcpu->online) + continue; + + if (vcpupriv->vcpus == 0) + continue; + + curvcpus += vcpupriv->vcpus; + + if (curvcpus > nvcpus) { + virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s", + _("target vm vcpu granularity does not allow the " + "desired vcpu count")); + goto error; + } + + ignore_value(virBitmapSetBit(ret, i)); + } + } else { + for (i = maxvcpus - 1; i >= 0 && curvcpus > nvcpus; i--) { + vcpu = virDomainDefGetVcpu(def, i); + vcpupriv = QEMU_DOMAIN_VCPU_PRIVATE(vcpu); + + if (!vcpu->online) + continue; + + if (vcpupriv->vcpus == 0) + continue; + + if (!vcpupriv->alias) + continue; + + curvcpus -= vcpupriv->vcpus; + + if (curvcpus < nvcpus) { + virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s", + _("target vm vcpu granularity does not allow the " + "desired vcpu count")); + goto error; + } + + ignore_value(virBitmapSetBit(ret, i)); + } + } + + if (curvcpus != nvcpus) { + virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s", + _("failed to find appropriate hotpluggable vcpus to " + "reach the desired target vcpu count")); + goto error; + } + + return ret; + + error: + virBitmapFree(ret); + return NULL; +} + + static int qemuDomainSetVcpusLive(virQEMUDriverPtr driver, virQEMUDriverConfigPtr cfg, @@ -4784,8 +4893,14 @@ qemuDomainSetVcpusLive(virQEMUDriverPtr driver, char *all_nodes_str = NULL; virBitmapPtr all_nodes = NULL; virErrorPtr err = NULL; + virBitmapPtr vcpumap = NULL; + ssize_t nextvcpu = -1; + int rc = 0; int ret = -1;
+ if (!(vcpumap = qemuDomainSelectHotplugVcpuEntities(vm->def, nvcpus))) + goto cleanup; + if (virNumaIsAvailable() && virCgroupHasController(priv->cgroup, VIR_CGROUP_CONTROLLER_CPUSET)) { if (virCgroupNewThread(priv->cgroup, VIR_CGROUP_THREAD_EMULATOR, 0, @@ -4804,20 +4919,25 @@ qemuDomainSetVcpusLive(virQEMUDriverPtr driver, }
if (nvcpus > virDomainDefGetVcpus(vm->def)) { - for (i = virDomainDefGetVcpus(vm->def); i < nvcpus; i++) { - if (qemuDomainHotplugAddVcpu(driver, vm, i) < 0) - goto cleanup; + while ((nextvcpu = virBitmapNextSetBit(vcpumap, nextvcpu)) != -1) { + if ((rc = qemuDomainHotplugAddVcpu(driver, vm, nextvcpu)) < 0) + break; } } else { for (i = virDomainDefGetVcpus(vm->def) - 1; i >= nvcpus; i--) { - if (qemuDomainHotplugDelVcpu(driver, vm, i) < 0) - goto cleanup; + if ((rc = qemuDomainHotplugDelVcpu(driver, vm, i)) < 0) + break; } }
+ qemuDomainVcpuPersistOrder(vm->def); + if (virDomainSaveStatus(driver->xmlopt, cfg->stateDir, vm, driver->caps) < 0) goto cleanup;
+ if (rc < 0) + goto cleanup; + ret = 0;
cleanup: @@ -4832,6 +4952,7 @@ qemuDomainSetVcpusLive(virQEMUDriverPtr driver, VIR_FREE(all_nodes_str); virBitmapFree(all_nodes); virCgroupFree(&cgroup_temp); + virBitmapFree(vcpumap);
return ret; }

Add an overlay function that takes the alias directly rather than extracting it from a device info. --- src/qemu/qemu_hotplug.c | 16 +++++++++++++--- 1 file changed, 13 insertions(+), 3 deletions(-) diff --git a/src/qemu/qemu_hotplug.c b/src/qemu/qemu_hotplug.c index 00e4a75..31ef22f 100644 --- a/src/qemu/qemu_hotplug.c +++ b/src/qemu/qemu_hotplug.c @@ -3529,8 +3529,8 @@ qemuDomainRemoveDevice(virQEMUDriverPtr driver, static void -qemuDomainMarkDeviceForRemoval(virDomainObjPtr vm, - virDomainDeviceInfoPtr info) +qemuDomainMarkDeviceAliasForRemoval(virDomainObjPtr vm, + const char *alias) { qemuDomainObjPrivatePtr priv = vm->privateData; @@ -3539,9 +3539,19 @@ qemuDomainMarkDeviceForRemoval(virDomainObjPtr vm, if (!virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_DEVICE_DEL_EVENT)) return; - priv->unplug.alias = info->alias; + priv->unplug.alias = alias; } + +static void +qemuDomainMarkDeviceForRemoval(virDomainObjPtr vm, + virDomainDeviceInfoPtr info) + +{ + qemuDomainMarkDeviceAliasForRemoval(vm, info->alias); +} + + static void qemuDomainResetDeviceRemoval(virDomainObjPtr vm) { -- 2.8.2

This patch removes the old vcpu unplug code completely and replaces it with the new code using device_del. The old hotplug code basically never worked with any recent qemu and thus is useless. As the new code is using device_del all the implications of using it are present. Contrary to the device deletion code, the vcpu deletion code fails if the unplug request is not executed in time. --- src/qemu/qemu_driver.c | 74 ++++++--------------------------- src/qemu/qemu_hotplug.c | 108 ++++++++++++++++++++++++++++++++++++++++++++++++ src/qemu/qemu_hotplug.h | 7 ++++ 3 files changed, 128 insertions(+), 61 deletions(-) diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index 2f88d23..b9b0502 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -4061,11 +4061,15 @@ processDeviceDeletedEvent(virQEMUDriverPtr driver, goto endjob; } - if (virDomainDefFindDevice(vm->def, devAlias, &dev, true) < 0) - goto endjob; + if (STRPREFIX(devAlias, "vcpu")) { + qemuDomainRemoveVcpuAlias(driver, vm, devAlias); + } else { + if (virDomainDefFindDevice(vm->def, devAlias, &dev, true) < 0) + goto endjob; - if (qemuDomainRemoveDevice(driver, vm, &dev) < 0) - goto endjob; + if (qemuDomainRemoveDevice(driver, vm, &dev) < 0) + goto endjob; + } if (virDomainSaveStatus(driver->xmlopt, cfg->stateDir, vm, driver->caps) < 0) VIR_WARN("unable to save domain status after removing device %s", @@ -4659,60 +4663,6 @@ qemuDomainHotplugAddVcpu(virQEMUDriverPtr driver, static int -qemuDomainHotplugDelVcpu(virQEMUDriverPtr driver, - virDomainObjPtr vm, - unsigned int vcpu) -{ - qemuDomainObjPrivatePtr priv = vm->privateData; - virDomainVcpuDefPtr vcpuinfo = virDomainDefGetVcpu(vm->def, vcpu); - int ret = -1; - int rc; - int oldvcpus = virDomainDefGetVcpus(vm->def); - - if (!vcpuinfo->online) { - virReportError(VIR_ERR_INVALID_ARG, - _("vCPU '%u' is already offline"), vcpu); - return -1; - } - - vcpuinfo->online = false; - - qemuDomainObjEnterMonitor(driver, vm); - - rc = qemuMonitorSetCPU(priv->mon, vcpu, false); - - if (qemuDomainObjExitMonitor(driver, vm) < 0) - goto cleanup; - - if (rc < 0) { - virDomainAuditVcpu(vm, oldvcpus, oldvcpus - 1, "update", false); - vcpuinfo->online = true; - goto cleanup; - } - - if (qemuDomainRefreshVcpuInfo(driver, vm, QEMU_ASYNC_JOB_NONE, false) < 0) - goto cleanup; - - if (qemuDomainValidateVcpuInfo(vm) < 0) { - /* rollback vcpu count if the setting has failed */ - virDomainAuditVcpu(vm, oldvcpus, oldvcpus - 1, "update", false); - vcpuinfo->online = true; - goto cleanup; - } - - virDomainAuditVcpu(vm, oldvcpus, oldvcpus - 1, "update", true); - - if (virCgroupDelThread(priv->cgroup, VIR_CGROUP_THREAD_VCPU, vcpu) < 0) - goto cleanup; - - ret = 0; - - cleanup: - return ret; -} - - -static int qemuDomainSetVcpusAgent(virDomainObjPtr vm, unsigned int nvcpus) { @@ -4887,7 +4837,6 @@ qemuDomainSetVcpusLive(virQEMUDriverPtr driver, unsigned int nvcpus) { qemuDomainObjPrivatePtr priv = vm->privateData; - size_t i; virCgroupPtr cgroup_temp = NULL; char *mem_mask = NULL; char *all_nodes_str = NULL; @@ -4924,8 +4873,11 @@ qemuDomainSetVcpusLive(virQEMUDriverPtr driver, break; } } else { - for (i = virDomainDefGetVcpus(vm->def) - 1; i >= nvcpus; i--) { - if ((rc = qemuDomainHotplugDelVcpu(driver, vm, i)) < 0) + for (nextvcpu = virDomainDefGetVcpusMax(vm->def) - 1; nextvcpu >= 0; nextvcpu--) { + if (!virBitmapIsBitSet(vcpumap, nextvcpu)) + continue; + + if ((rc = qemuDomainHotplugDelVcpu(driver, vm, nextvcpu)) < 0) break; } } diff --git a/src/qemu/qemu_hotplug.c b/src/qemu/qemu_hotplug.c 
index 31ef22f..1bdde5b 100644 --- a/src/qemu/qemu_hotplug.c +++ b/src/qemu/qemu_hotplug.c @@ -4419,3 +4419,111 @@ qemuDomainDetachMemoryDevice(virQEMUDriverPtr driver, qemuDomainResetDeviceRemoval(vm); return ret; } + + +static int +qemuDomainRemoveVcpu(virQEMUDriverPtr driver, + virDomainObjPtr vm, + unsigned int vcpu) +{ + qemuDomainObjPrivatePtr priv = vm->privateData; + virDomainVcpuDefPtr vcpuinfo = virDomainDefGetVcpu(vm->def, vcpu); + qemuDomainVcpuPrivatePtr vcpupriv = QEMU_DOMAIN_VCPU_PRIVATE(vcpuinfo); + int oldvcpus = virDomainDefGetVcpus(vm->def); + unsigned int nvcpus = vcpupriv->vcpus; + size_t i; + + if (qemuDomainRefreshVcpuInfo(driver, vm, QEMU_ASYNC_JOB_NONE, false) < 0) + return -1; + + /* validation requires us to set the expected state prior to calling it */ + for (i = vcpu; i < vcpu + nvcpus; i++) { + vcpuinfo = virDomainDefGetVcpu(vm->def, i); + vcpuinfo->online = false; + } + + if (qemuDomainValidateVcpuInfo(vm) < 0) { + /* rollback vcpu count if the setting has failed */ + virDomainAuditVcpu(vm, oldvcpus, oldvcpus - nvcpus, "update", false); + + for (i = vcpu; i < vcpu + nvcpus; i++) { + vcpuinfo = virDomainDefGetVcpu(vm->def, i); + vcpuinfo->online = true; + } + return -1; + } + + virDomainAuditVcpu(vm, oldvcpus, oldvcpus - nvcpus, "update", true); + + for (i = vcpu; i < vcpu + nvcpus; i++) { + vcpuinfo = virDomainDefGetVcpu(vm->def, i); + if (virCgroupDelThread(priv->cgroup, VIR_CGROUP_THREAD_VCPU, i) < 0) + return -1; + } + + return 0; +} + + +void +qemuDomainRemoveVcpuAlias(virQEMUDriverPtr driver, + virDomainObjPtr vm, + const char *alias) +{ + virDomainVcpuDefPtr vcpu; + qemuDomainVcpuPrivatePtr vcpupriv; + size_t i; + + for (i = 0; i < virDomainDefGetVcpusMax(vm->def); i++) { + vcpu = virDomainDefGetVcpu(vm->def, i); + vcpupriv = QEMU_DOMAIN_VCPU_PRIVATE(vcpu); + + if (STREQ_NULLABLE(alias, vcpupriv->alias)) { + qemuDomainRemoveVcpu(driver, vm, i); + return; + } + } +} + + +int +qemuDomainHotplugDelVcpu(virQEMUDriverPtr driver, + virDomainObjPtr vm, + unsigned int vcpu) +{ + virDomainVcpuDefPtr vcpuinfo = virDomainDefGetVcpu(vm->def, vcpu); + qemuDomainVcpuPrivatePtr vcpupriv = QEMU_DOMAIN_VCPU_PRIVATE(vcpuinfo); + int oldvcpus = virDomainDefGetVcpus(vm->def); + unsigned int nvcpus = vcpupriv->vcpus; + int rc; + + if (!vcpupriv->alias) { + virReportError(VIR_ERR_OPERATION_UNSUPPORTED, + _("vcpu '%u' can't be unplugged"), vcpu); + return -1; + } + + qemuDomainMarkDeviceAliasForRemoval(vm, vcpupriv->alias); + + qemuDomainObjEnterMonitor(driver, vm); + + rc = qemuMonitorDelDevice(qemuDomainGetMonitor(vm), vcpupriv->alias); + + if (qemuDomainObjExitMonitor(driver, vm) < 0) + return -1; + + if (rc < 0) { + virDomainAuditVcpu(vm, oldvcpus, oldvcpus - nvcpus, "update", false); + return -1; + } + + if ((rc = qemuDomainWaitForDeviceRemoval(vm)) <= 0) { + if (rc == 0) + virReportError(VIR_ERR_OPERATION_FAILED, "%s", + _("vcpu unplug request timed out")); + + return -1; + } + + return qemuDomainRemoveVcpu(driver, vm, vcpu); +} diff --git a/src/qemu/qemu_hotplug.h b/src/qemu/qemu_hotplug.h index 165d345..b048cf4 100644 --- a/src/qemu/qemu_hotplug.h +++ b/src/qemu/qemu_hotplug.h @@ -105,6 +105,13 @@ int qemuDomainDetachRNGDevice(virQEMUDriverPtr driver, virDomainObjPtr vm, virDomainRNGDefPtr rng); +int qemuDomainHotplugDelVcpu(virQEMUDriverPtr driver, + virDomainObjPtr vm, + unsigned int vcpu); +void qemuDomainRemoveVcpuAlias(virQEMUDriverPtr driver, + virDomainObjPtr vm, + const char *alias); + int qemuDomainChrInsert(virDomainDefPtr vmdef, virDomainChrDefPtr chr); -- 
2.8.2

On 08/19/2016 10:38 AM, Peter Krempa wrote:
Version 2 attempts to address feedback on v1: - added more documentation - fixed various typos - fixed bugs appearing when restarting daemon - fixed fallback code - tested upgrade of libvirt
Some of the patches were already pushed and this series is rebased on top of that changes.
Patches 1-18 with exception of 16 were more or less ACKed already.
I've dropped the last patch adding the events from this series. I'm working on adding a new event for vcpu lifecycle. I'm also working on reporting of the vcpu granularity via the domcapabilities API. I'm going to post those separately as well as the new API that will allow modifying the vcpus individually.
Peter Krempa (23): qemu: monitor: Return structures from qemuMonitorGetCPUInfo qemu: monitor: Return struct from qemuMonitor(Text|Json)QueryCPUs qemu: caps: Add capability for query-hotpluggable-cpus command qemu: Forbid config when topology based cpu count doesn't match the config qemu: capabilities: Extract availability of new cpu hotplug for machine types qemu: monitor: Extract QOM path from query-cpus reply qemu: monitor: Add support for calling query-hotpluggable-cpus qemu: monitor: Add algorithm for combining query-(hotpluggable-)-cpus data tests: Add test infrastructure for qemuMonitorGetCPUInfo tests: cpu-hotplug: Add data for ppc64 platform including hotplug tests: cpu-hotplug: Add data for ppc64 out-of-order hotplug tests: cpu-hotplug: Add data for ppc64 without threads enabled qemu: domain: Extract cpu-hotplug related data qemu: domain: Prepare for VCPUs vanishing while libvirt is not running util: Extract and rename qemuDomainDelCgroupForThread to virCgroupDelThread conf: Add XML for individual vCPU hotplug qemu: migration: Prepare for non-contiguous vcpu configurations qemu: command: Add helper to convert vcpu definition to JSON props qemu: process: Copy final vcpu order information into the vcpu definition qemu: command: Add support for sparse vcpu topologies qemu: Use mondern vcpu hotplug approach if possible qemu: hotplug: Allow marking unplugged devices by alias qemu: hotplug: Add support for VCPU unplug
docs/formatdomain.html.in | 40 +++ docs/schemas/domaincommon.rng | 25 ++ src/conf/domain_conf.c | 152 +++++++++- src/conf/domain_conf.h | 6 + src/libvirt_private.syms | 1 + src/qemu/qemu_capabilities.c | 31 +- src/qemu/qemu_capabilities.h | 3 + src/qemu/qemu_command.c | 50 +++- src/qemu/qemu_command.h | 3 + src/qemu/qemu_domain.c | 312 +++++++++++++++++---- src/qemu/qemu_domain.h | 19 +- src/qemu/qemu_driver.c | 246 +++++++++------- src/qemu/qemu_hotplug.c | 124 +++++++- src/qemu/qemu_hotplug.h | 7 + src/qemu/qemu_migration.c | 16 +- src/qemu/qemu_monitor.c | 260 ++++++++++++++++- src/qemu/qemu_monitor.h | 58 +++- src/qemu/qemu_monitor_json.c | 266 +++++++++++++++--- src/qemu/qemu_monitor_json.h | 8 +- src/qemu/qemu_monitor_text.c | 41 +-- src/qemu/qemu_monitor_text.h | 3 +- src/qemu/qemu_process.c | 182 +++++++++++- src/util/vircgroup.c | 20 ++ src/util/vircgroup.h | 4 + .../generic-vcpus-individual.xml | 23 ++ tests/genericxml2xmltest.c | 2 + tests/qemucapabilitiesdata/caps_2.7.0.x86_64.xml | 55 ++-- .../qemumonitorjson-cpuinfo-ppc64-basic-cpus.json | 77 +++++ ...emumonitorjson-cpuinfo-ppc64-basic-hotplug.json | 27 ++ .../qemumonitorjson-cpuinfo-ppc64-basic.data | 40 +++ ...mumonitorjson-cpuinfo-ppc64-hotplug-1-cpus.json | 149 ++++++++++ ...onitorjson-cpuinfo-ppc64-hotplug-1-hotplug.json | 28 ++ .../qemumonitorjson-cpuinfo-ppc64-hotplug-1.data | 51 ++++ ...mumonitorjson-cpuinfo-ppc64-hotplug-2-cpus.json | 221 +++++++++++++++ ...onitorjson-cpuinfo-ppc64-hotplug-2-hotplug.json | 29 ++ .../qemumonitorjson-cpuinfo-ppc64-hotplug-2.data | 62 ++++ ...mumonitorjson-cpuinfo-ppc64-hotplug-4-cpus.json | 221 +++++++++++++++ ...onitorjson-cpuinfo-ppc64-hotplug-4-hotplug.json | 29 ++ .../qemumonitorjson-cpuinfo-ppc64-hotplug-4.data | 62 ++++ ...umonitorjson-cpuinfo-ppc64-no-threads-cpus.json | 77 +++++ ...nitorjson-cpuinfo-ppc64-no-threads-hotplug.json | 125 +++++++++ .../qemumonitorjson-cpuinfo-ppc64-no-threads.data | 72 +++++ ...nitorjson-cpuinfo-x86-basic-pluggable-cpus.json | 50 ++++ ...orjson-cpuinfo-x86-basic-pluggable-hotplug.json | 82 ++++++ ...emumonitorjson-cpuinfo-x86-basic-pluggable.data | 39 +++ tests/qemumonitorjsontest.c | 183 +++++++++++- .../qemuxml2argv-cpu-hotplug-startup.args | 20 ++ .../qemuxml2argv-cpu-hotplug-startup.xml | 29 ++ tests/qemuxml2argvtest.c | 2 + tests/testutils.c | 4 +- 50 files changed, 3360 insertions(+), 276 deletions(-) create mode 100644 tests/genericxml2xmlindata/generic-vcpus-individual.xml create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-basic-cpus.json create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-basic-hotplug.json create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-basic.data create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-hotplug-1-cpus.json create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-hotplug-1-hotplug.json create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-hotplug-1.data create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-hotplug-2-cpus.json create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-hotplug-2-hotplug.json create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-hotplug-2.data create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-hotplug-4-cpus.json create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-hotplug-4-hotplug.json create mode 100644 
tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-hotplug-4.data create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-no-threads-cpus.json create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-no-threads-hotplug.json create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-no-threads.data create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-x86-basic-pluggable-cpus.json create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-x86-basic-pluggable-hotplug.json create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-x86-basic-pluggable.data create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-cpu-hotplug-startup.args create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-cpu-hotplug-startup.xml
Went through them all - made a few comments along the way (along with a face palm one). Nothing major seen - although I think the rules for qemu usage will certainly trip someone up! It's "too bad" some of those architectural differences couldn't have been hidden by qemu. ACK series - John

On Sat, Aug 20, 2016 at 09:00:37 -0400, John Ferlan wrote:
On 08/19/2016 10:38 AM, Peter Krempa wrote:
Version 2 attempts to address feedback on v1: - added more documentation - fixed various typos - fixed bugs appearing when restarting daemon - fixed fallback code - tested upgrade of libvirt
Some of the patches were already pushed and this series is rebased on top of that changes.
Patches 1-18 with exception of 16 were more or less ACKed already.
I've dropped the last patch adding the events from this series. I'm working on adding a new event for vcpu lifecycle. I'm also working on reporting of the vcpu granularity via the domcapabilities API. I'm going to post those separately as well as the new API that will allow modifying the vcpus individually.
[...]
Went through them all - made a few comments along the way (along with a face palm one). Nothing major seen - although I think the rules for qemu usage will certainly trip someone up! It's "too bad" some of those
I'll post a patch that describes the qemu caveats in the web documentation later (mostly as it will certainly contain typos and wrong phrasing as usual :) )
architectural differences couldn't have been hidden by qemu.
At least we can try to hide that.
ACK series -
Thanks for going through it; some of the functions are unpleasant. Peter