[libvirt] [PATCH 00/34] Prepare for specific vcpu hot(un)plug - part 1

This series is getting rather big. The target is to refactor the way libvirt
stores info about vCPUs into a single structure (okay, two structures for the
qemu driver). Part 1 is not yet completely there, well, not even halfway.

Future work will involve fully allocating priv->vcpupids to the maxcpus size
and moving around a few other bits of data in cputune and other parts to the
new structure. Yet another follow-up work is then to add new APIs for vCPU
hotplug, which will enable adding vCPUs sparsely (useful if you have NUMA).

Since this refactor will result in tracking all vcpu-related data in one
struct, the result will automagically fix a few bugs where we'd end up with
an invalid config after vcpu unplug or other operations.

The changes can also be fetched at:
git fetch git://pipo.sk/pipo/libvirt.git vcpu-refactor-part-1

Peter Krempa (34):
  hyperv: Allocate 'def' via virDomainDefNew
  openvz: Refactor extraction of vcpu count
  phyp: Refactor extraction of vcpu count
  xenapi: Refactor extraction of vcpu count
  conf: Use local copy of maxvcpus in virDomainVcpuParse
  conf: Drop useless check when parsing cpu scheduler info
  conf: Replace writes to def->maxvcpus with accessor
  conf: Extract update of vcpu count if maxvcpus is decreased
  conf: Add helper to check whether domain has offline vCPUs
  conf: Assume at least 1 maximum and current vCPU for every conf
  conf: Replace read access to def->maxvcpus with accessor
  conf: Replace writes to def->vcpus with accessor
  conf: Move vcpu count check into helper
  conf: Replace read accesses to def->vcpus with accessor
  conf: Turn def->maxvcpus into size_t
  qemu: domain: Add helper to access vm->privateData->agent
  qemu: Extract vCPU onlining/offlining via agent into a separate function
  qemu: qemuDomainSetVcpusAgent: re-check agent before calling it again
  qemu: Split up vCPU hotplug and hotunplug
  qemu: cpu hotplug: Fix error handling logic
  qemu: monitor: Explain logic of qemuMonitorGetCPUInfo
  qemu: monitor: Remove weird return values from qemuMonitorSetCPU
  qemu: cpu hotplug: Move loops to qemuDomainSetVcpusFlags
  qemu: Refactor qemuDomainHotplugVcpus
  qemu: refactor qemuDomainHotunplugVcpus
  conf: turn def->vcpus into a structure
  conf: ABI: Split up and improve vcpu info ABI checking
  conf: Add helper to get pointer to a certain vCPU definition
  qemu: cgroup: Remove now unreachable check
  qemu: Drop checking vcpu threads in emulator bandwidth getter/setter
  qemu: Replace checking for vcpu<->pid mapping availability with a helper
  qemu: Add helper to retrieve vCPU pid
  qemu: driver: Refactor qemuDomainHelperGetVcpus
  qemu: cgroup: Don't use priv->ncpupids to iterate domain vCPUs

 src/conf/domain_audit.c      |   2 +-
 src/conf/domain_conf.c       | 207 ++++++++++++++---
 src/conf/domain_conf.h       |  22 +-
 src/hyperv/hyperv_driver.c   |  12 +-
 src/libvirt_private.syms     |   6 +
 src/libxl/libxl_conf.c       |   6 +-
 src/libxl/libxl_driver.c     |  38 +--
 src/lxc/lxc_controller.c     |   2 +-
 src/lxc/lxc_driver.c         |   2 +-
 src/lxc/lxc_native.c         |   5 -
 src/openvz/openvz_conf.c     |  14 +-
 src/openvz/openvz_driver.c   |  31 +--
 src/phyp/phyp_driver.c       |  15 +-
 src/qemu/qemu_cgroup.c       |  42 ++--
 src/qemu/qemu_command.c      |  25 +-
 src/qemu/qemu_domain.c       |  47 ++++
 src/qemu/qemu_domain.h       |   4 +
 src/qemu/qemu_driver.c       | 538 ++++++++++++++++++++++---------------------
 src/qemu/qemu_monitor.c      |   9 +
 src/qemu/qemu_monitor_json.c |   4 -
 src/qemu/qemu_monitor_text.c |  22 +-
 src/qemu/qemu_process.c      |  22 +-
 src/test/test_driver.c       |  38 +--
 src/uml/uml_driver.c         |   2 +-
 src/vbox/vbox_common.c       |  19 +-
 src/vmware/vmware_driver.c   |   2 +-
 src/vmx/vmx.c                |  34 +--
 src/vz/vz_driver.c           |   2 +-
 src/vz/vz_sdk.c              |   9 +-
 src/xen/xm_internal.c        |  19 +-
 src/xenapi/xenapi_driver.c   |  11 +-
 src/xenapi/xenapi_utils.c    |   6 +-
 src/xenconfig/xen_common.c   |  16 +-
 src/xenconfig/xen_sxpr.c     |  26 ++-
 tests/openvzutilstest.c      |   2 +-
 35 files changed, 783 insertions(+), 478 deletions(-)

-- 
2.6.2

Use the helper, which is necessary to fill out some values, rather than
allocating the structure directly.
---
 src/hyperv/hyperv_driver.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/hyperv/hyperv_driver.c b/src/hyperv/hyperv_driver.c
index 1958bbe..72261df 100644
--- a/src/hyperv/hyperv_driver.c
+++ b/src/hyperv/hyperv_driver.c
@@ -774,7 +774,7 @@ hypervDomainGetXMLDesc(virDomainPtr domain, unsigned int flags)
 
     /* Flags checked by virDomainDefFormat */
 
-    if (VIR_ALLOC(def) < 0)
+    if (!(def = virDomainDefNew()))
         goto cleanup;
 
     virUUIDFormat(domain->uuid, uuid_string);
-- 
2.6.2

To simplify further refactors, change the way the vcpu count is extracted
to use a temp variable rather than juggling with def->maxvcpus.
---
 src/openvz/openvz_conf.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/src/openvz/openvz_conf.c b/src/openvz/openvz_conf.c
index db0a9a7..c0f65c9 100644
--- a/src/openvz/openvz_conf.c
+++ b/src/openvz/openvz_conf.c
@@ -522,6 +522,7 @@ int openvzLoadDomains(struct openvz_driver *driver)
     char *outbuf = NULL;
     char *line;
     virCommandPtr cmd = NULL;
+    unsigned int vcpus = 0;
 
     if (openvzAssignUUIDs() < 0)
         return -1;
@@ -575,12 +576,14 @@ int openvzLoadDomains(struct openvz_driver *driver)
                            veid);
             goto cleanup;
         } else if (ret > 0) {
-            def->maxvcpus = strtoI(temp);
+            vcpus = strtoI(temp);
         }
 
-        if (ret == 0 || def->maxvcpus == 0)
-            def->maxvcpus = openvzGetNodeCPUs();
-        def->vcpus = def->maxvcpus;
+        if (ret == 0 || vcpus == 0)
+            vcpus = openvzGetNodeCPUs();
+
+        def->maxvcpus = vcpus;
+        def->vcpus = vcpus;
 
         /* XXX load rest of VM config data .... */
-- 
2.6.2

To simplify further refactors, change the way the vcpu count is extracted
to use a temp variable rather than juggling with def.maxvcpus.
---
 src/phyp/phyp_driver.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/src/phyp/phyp_driver.c b/src/phyp/phyp_driver.c
index 2912fc4..14264c0 100644
--- a/src/phyp/phyp_driver.c
+++ b/src/phyp/phyp_driver.c
@@ -3255,6 +3255,7 @@ phypDomainGetXMLDesc(virDomainPtr dom, unsigned int flags)
     virDomainDef def;
     char *managed_system = phyp_driver->managed_system;
     unsigned long long memory;
+    unsigned int vcpus;
 
     /* Flags checked by virDomainDefFormat */
 
@@ -3289,12 +3290,14 @@ phypDomainGetXMLDesc(virDomainPtr dom, unsigned int flags)
         goto err;
     }
 
-    if ((def.maxvcpus = def.vcpus =
-         phypGetLparCPU(dom->conn, managed_system, dom->id)) == 0) {
+    if ((vcpus = phypGetLparCPU(dom->conn, managed_system, dom->id)) == 0) {
         VIR_ERROR(_("Unable to determine domain's CPU."));
         goto err;
     }
 
+    def.maxvcpus = vcpus;
+    def.vcpus = vcpus;
+
     return virDomainDefFormat(&def,
                               virDomainDefFormatConvertXMLFlags(flags));
-- 
2.6.2

To simplify further refactors, change the way the vcpu count is extracted
to use a temp variable rather than juggling with def->maxvcpus.
---
 src/xenapi/xenapi_driver.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/src/xenapi/xenapi_driver.c b/src/xenapi/xenapi_driver.c
index 3045c5a..e503974 100644
--- a/src/xenapi/xenapi_driver.c
+++ b/src/xenapi/xenapi_driver.c
@@ -1403,6 +1403,7 @@ xenapiDomainGetXMLDesc(virDomainPtr dom, unsigned int flags)
     char *val = NULL;
     struct xen_vif_set *vif_set = NULL;
     char *xml;
+    unsigned int vcpus;
 
     /* Flags checked by virDomainDefFormat */
 
@@ -1498,7 +1499,12 @@ xenapiDomainGetXMLDesc(virDomainPtr dom, unsigned int flags)
     } else {
         defPtr->mem.cur_balloon = memory;
     }
-    defPtr->maxvcpus = defPtr->vcpus = xenapiDomainGetMaxVcpus(dom);
+
+    vcpus = xenapiDomainGetMaxVcpus(dom);
+
+    defPtr->maxvcpus = vcpus;
+    defPtr->vcpus = vcpus;
+
     enum xen_on_normal_exit action;
     if (xen_vm_get_actions_after_shutdown(session, &action, vm))
         defPtr->onPoweroff = xenapiNormalExitEnum2virDomainLifecycle(action);
-- 
2.6.2

Use the local variable rather than fetching the value from the struct
every time. This will simplify further refactors.
---
 src/conf/domain_conf.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index 0ac7dbf..3c8a926 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -14664,13 +14664,13 @@ virDomainVcpuParse(virDomainDefPtr def,
             goto cleanup;
         }
 
-        def->vcpus = def->maxvcpus;
+        def->vcpus = maxvcpus;
     }
 
-    if (def->maxvcpus < def->vcpus) {
+    if (maxvcpus < def->vcpus) {
         virReportError(VIR_ERR_INTERNAL_ERROR,
                        _("maxvcpus must not be less than current vcpus "
-                         "(%u < %u)"), def->maxvcpus, def->vcpus);
+                         "(%u < %u)"), maxvcpus, def->vcpus);
         goto cleanup;
     }
-- 
2.6.2

On 11/20/2015 10:21 AM, Peter Krempa wrote:
> Use the local variable rather than getting it all the time from the
> struct. This will simplify further refactors.
> ---
>  src/conf/domain_conf.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
> index 0ac7dbf..3c8a926 100644
> --- a/src/conf/domain_conf.c
> +++ b/src/conf/domain_conf.c
> @@ -14664,13 +14664,13 @@ virDomainVcpuParse(virDomainDefPtr def,
>              goto cleanup;
>          }
> -        def->vcpus = def->maxvcpus;
> +        def->vcpus = maxvcpus;
There is no local maxvcpus (yet) and this breaks git bisect.

ACK 1-5 with this fixed,

John
>      }
> -    if (def->maxvcpus < def->vcpus) {
> +    if (maxvcpus < def->vcpus) {
>          virReportError(VIR_ERR_INTERNAL_ERROR,
>                         _("maxvcpus must not be less than current vcpus "
> -                         "(%u < %u)"), def->maxvcpus, def->vcpus);
> +                         "(%u < %u)"), maxvcpus, def->vcpus);
>          goto cleanup;
>      }

On Mon, Nov 23, 2015 at 07:04:43 -0500, John Ferlan wrote:
> On 11/20/2015 10:21 AM, Peter Krempa wrote:
>> Use the local variable rather than getting it all the time from the
>> struct. This will simplify further refactors.
>> ---
>>  src/conf/domain_conf.c | 6 +++---
>>  1 file changed, 3 insertions(+), 3 deletions(-)
>> diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
>> index 0ac7dbf..3c8a926 100644
>> --- a/src/conf/domain_conf.c
>> +++ b/src/conf/domain_conf.c
>> @@ -14664,13 +14664,13 @@ virDomainVcpuParse(virDomainDefPtr def,
>>              goto cleanup;
>>          }
>> -        def->vcpus = def->maxvcpus;
>> +        def->vcpus = maxvcpus;
> There is no local maxvcpus (yet) and this breaks git bisect.
> ACK 1-5 with this fixed
Indeed, this patch was moved a little bit too far. After putting it behind
"conf: Replace writes to def->maxvcpus with accessor" it compiles just fine.

I've pushed 1-4 and I'll push this one after the mentioned patch, once it's
fixed.

Thanks,

Peter

The checked predicate is a deduction from the following checks:
1) the maximum cpu id is checked for every parsed <vcpusched> element
2) the resulting bitmaps are checked for overlaps
3) there has to be at least one cpu per <vcpusched>
From the above checks we can indeed deduce that if we have one <vcpusched> element per CPU we will have at most 'maxvcpus' of them.
Drop the explicit check since it's redundant.
---
 src/conf/domain_conf.c | 6 ------
 1 file changed, 6 deletions(-)

diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index 3c8a926..a744412 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -15224,12 +15224,6 @@ virDomainDefParseXML(xmlDocPtr xml,
         goto error;
     }
 
     if (n) {
-        if (n > def->maxvcpus) {
-            virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
-                           _("too many vcpusched nodes in cputune"));
-            goto error;
-        }
-
         if (VIR_ALLOC_N(def->cputune.vcpusched, n) < 0)
             goto error;
 
         def->cputune.nvcpusched = n;
-- 
2.6.2

To support further refactors, replace all write access to def->maxvcpus
with an accessor function.
---
 src/conf/domain_conf.c     | 18 ++++++++++++++++--
 src/conf/domain_conf.h     |  2 ++
 src/hyperv/hyperv_driver.c |  5 ++++-
 src/libvirt_private.syms   |  1 +
 src/libxl/libxl_driver.c   |  8 ++++++--
 src/lxc/lxc_native.c       |  4 +++-
 src/openvz/openvz_conf.c   |  4 +++-
 src/openvz/openvz_driver.c |  5 ++++-
 src/phyp/phyp_driver.c     |  4 +++-
 src/qemu/qemu_command.c    |  9 +++++++--
 src/qemu/qemu_driver.c     |  4 +++-
 src/test/test_driver.c     |  4 +++-
 src/vbox/vbox_common.c     | 11 +++++++++--
 src/vmx/vmx.c              |  5 ++++-
 src/vz/vz_sdk.c            |  4 +++-
 src/xen/xm_internal.c      |  4 +++-
 src/xenapi/xenapi_driver.c |  4 +++-
 src/xenconfig/xen_common.c |  4 +++-
 src/xenconfig/xen_sxpr.c   |  3 ++-
 19 files changed, 82 insertions(+), 21 deletions(-)

diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index a744412..e0fc09c 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -1424,6 +1424,16 @@ void virDomainLeaseDefFree(virDomainLeaseDefPtr def)
 }
 
 
+int
+virDomainDefSetVCpusMax(virDomainDefPtr def,
+                        unsigned int vcpus)
+{
+    def->maxvcpus = vcpus;
+
+    return 0;
+}
+
+
 virDomainDiskDefPtr
 virDomainDiskDefNew(virDomainXMLOptionPtr xmlopt)
 {
@@ -14645,18 +14655,22 @@ virDomainVcpuParse(virDomainDefPtr def,
 {
     int n;
     char *tmp = NULL;
+    unsigned int maxvcpus;
     int ret = -1;
 
-    if ((n = virXPathUInt("string(./vcpu[1])", ctxt, &def->maxvcpus)) < 0) {
+    if ((n = virXPathUInt("string(./vcpu[1])", ctxt, &maxvcpus)) < 0) {
         if (n == -2) {
             virReportError(VIR_ERR_XML_ERROR, "%s",
                            _("maximum vcpus count must be an integer"));
             goto cleanup;
         }
 
-        def->maxvcpus = 1;
+        maxvcpus = 1;
     }
 
+    if (virDomainDefSetVCpusMax(def, maxvcpus) < 0)
+        goto cleanup;
+
     if ((n = virXPathUInt("string(./vcpu[1]/@current)", ctxt, &def->vcpus)) < 0) {
         if (n == -2) {
             virReportError(VIR_ERR_XML_ERROR, "%s",
diff --git a/src/conf/domain_conf.h b/src/conf/domain_conf.h
index 8d43ee6..498ca99 100644
--- a/src/conf/domain_conf.h
+++ b/src/conf/domain_conf.h
@@ -2325,6 +2325,8 @@ struct _virDomainDef {
     xmlNodePtr metadata;
 };
 
+int virDomainDefSetVCpusMax(virDomainDefPtr def, unsigned int vcpus);
+
 unsigned long long virDomainDefGetMemoryInitial(const virDomainDef *def);
 void virDomainDefSetMemoryTotal(virDomainDefPtr def, unsigned long long size);
 void virDomainDefSetMemoryInitial(virDomainDefPtr def, unsigned long long size);
diff --git a/src/hyperv/hyperv_driver.c b/src/hyperv/hyperv_driver.c
index 72261df..61e06b0 100644
--- a/src/hyperv/hyperv_driver.c
+++ b/src/hyperv/hyperv_driver.c
@@ -873,8 +873,11 @@ hypervDomainGetXMLDesc(virDomainPtr domain, unsigned int flags)
     virDomainDefSetMemoryTotal(def, memorySettingData->data->Limit * 1024); /* megabyte to kilobyte */
     def->mem.cur_balloon = memorySettingData->data->VirtualQuantity * 1024; /* megabyte to kilobyte */
 
+    if (virDomainDefSetVCpusMax(def,
+                                processorSettingData->data->VirtualQuantity) < 0)
+        goto cleanup;
+
     def->vcpus = processorSettingData->data->VirtualQuantity;
-    def->maxvcpus = processorSettingData->data->VirtualQuantity;
 
     def->os.type = VIR_DOMAIN_OSTYPE_HVM;
     /* FIXME: devices section is totally missing */
diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms
index 7e60d87..321f926 100644
--- a/src/libvirt_private.syms
+++ b/src/libvirt_private.syms
@@ -230,6 +230,7 @@ virDomainDefParseString;
 virDomainDefPostParse;
 virDomainDefSetMemoryInitial;
 virDomainDefSetMemoryTotal;
+virDomainDefSetVCpusMax;
 virDomainDeleteConfig;
 virDomainDeviceAddressIsValid;
 virDomainDeviceAddressTypeToString;
diff --git a/src/libxl/libxl_driver.c b/src/libxl/libxl_driver.c
index d77a0e4..5ef0784 100644
--- a/src/libxl/libxl_driver.c
+++ b/src/libxl/libxl_driver.c
@@ -552,8 +552,10 @@ libxlAddDom0(libxlDriverPrivatePtr driver)
     def = NULL;
 
     virDomainObjSetState(vm, VIR_DOMAIN_RUNNING, VIR_DOMAIN_RUNNING_BOOTED);
+    if (virDomainDefSetVCpusMax(vm->def, d_info.vcpu_max_id + 1))
+        goto cleanup;
+
     vm->def->vcpus = d_info.vcpu_online;
-    vm->def->maxvcpus = d_info.vcpu_max_id + 1;
     vm->def->mem.cur_balloon = d_info.current_memkb;
     virDomainDefSetMemoryTotal(vm->def, d_info.max_memkb);
 
@@ -2184,7 +2186,9 @@ libxlDomainSetVcpusFlags(virDomainPtr dom, unsigned int nvcpus,
 
     switch (flags) {
     case VIR_DOMAIN_VCPU_MAXIMUM | VIR_DOMAIN_VCPU_CONFIG:
-        def->maxvcpus = nvcpus;
+        if (virDomainDefSetVCpusMax(def, nvcpus) < 0)
+            goto cleanup;
+
         if (nvcpus < def->vcpus)
             def->vcpus = nvcpus;
         break;
diff --git a/src/lxc/lxc_native.c b/src/lxc/lxc_native.c
index 2f95597..d4a72c1 100644
--- a/src/lxc/lxc_native.c
+++ b/src/lxc/lxc_native.c
@@ -1019,7 +1019,9 @@ lxcParseConfigString(const char *config)
 
     /* Value not handled by the LXC driver, setting to
      * minimum required to make XML parsing pass */
-    vmdef->maxvcpus = 1;
+    if (virDomainDefSetVCpusMax(vmdef, 1) < 0)
+        goto error;
+
     vmdef->vcpus = 1;
 
     vmdef->nfss = 0;
diff --git a/src/openvz/openvz_conf.c b/src/openvz/openvz_conf.c
index c0f65c9..aabb7c4 100644
--- a/src/openvz/openvz_conf.c
+++ b/src/openvz/openvz_conf.c
@@ -582,7 +582,9 @@ int openvzLoadDomains(struct openvz_driver *driver)
         if (ret == 0 || vcpus == 0)
             vcpus = openvzGetNodeCPUs();
 
-        def->maxvcpus = vcpus;
+        if (virDomainDefSetVCpusMax(def, vcpus) < 0)
+            goto cleanup;
+
         def->vcpus = vcpus;
 
         /* XXX load rest of VM config data .... */
diff --git a/src/openvz/openvz_driver.c b/src/openvz/openvz_driver.c
index b8c0f50..60b40d5 100644
--- a/src/openvz/openvz_driver.c
+++ b/src/openvz/openvz_driver.c
@@ -1368,7 +1368,10 @@ static int openvzDomainSetVcpusInternal(virDomainObjPtr vm,
     if (virRun(prog, NULL) < 0)
         return -1;
 
-    vm->def->maxvcpus = vm->def->vcpus = nvcpus;
+    if (virDomainDefSetVCpusMax(vm->def, nvcpus) < 0)
+        return -1;
+
+    vm->def->vcpus = nvcpus;
 
     return 0;
 }
diff --git a/src/phyp/phyp_driver.c b/src/phyp/phyp_driver.c
index 14264c0..7c77e23 100644
--- a/src/phyp/phyp_driver.c
+++ b/src/phyp/phyp_driver.c
@@ -3295,7 +3295,9 @@ phypDomainGetXMLDesc(virDomainPtr dom, unsigned int flags)
         goto err;
     }
 
-    def.maxvcpus = vcpus;
+    if (virDomainDefSetVCpusMax(&def, vcpus) < 0)
+        goto err;
+
     def.vcpus = vcpus;
 
     return virDomainDefFormat(&def,
diff --git a/src/qemu/qemu_command.c b/src/qemu/qemu_command.c
index ef5ef93..af283af 100644
--- a/src/qemu/qemu_command.c
+++ b/src/qemu/qemu_command.c
@@ -12576,7 +12576,11 @@ qemuParseCommandLineSmp(virDomainDefPtr dom,
         }
     }
 
-    dom->maxvcpus = maxcpus ? maxcpus : dom->vcpus;
+    if (maxcpus == 0)
+        maxcpus = dom->vcpus;
+
+    if (virDomainDefSetVCpusMax(dom, maxcpus) < 0)
+        goto error;
 
     if (sockets && cores && threads) {
         virCPUDefPtr cpu;
@@ -12690,7 +12694,8 @@ qemuParseCommandLine(virCapsPtr qemuCaps,
     def->id = -1;
     def->mem.cur_balloon = 64 * 1024;
     virDomainDefSetMemoryTotal(def, def->mem.cur_balloon);
-    def->maxvcpus = 1;
+    if (virDomainDefSetVCpusMax(def, 1) < 0)
+        goto error;
     def->vcpus = 1;
 
     def->clock.offset = VIR_DOMAIN_CLOCK_OFFSET_UTC;
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 65ccf99..8ab3209 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -4979,7 +4979,9 @@ qemuDomainSetVcpusFlags(virDomainPtr dom, unsigned int nvcpus,
     }
 
     if (flags & VIR_DOMAIN_VCPU_MAXIMUM) {
-        persistentDef->maxvcpus = nvcpus;
+        if (virDomainDefSetVCpusMax(persistentDef, nvcpus) < 0)
+            goto endjob;
+
         if (nvcpus < persistentDef->vcpus)
             persistentDef->vcpus = nvcpus;
     } else {
diff --git a/src/test/test_driver.c b/src/test/test_driver.c
index 9ccd567..53d9338 100644
--- a/src/test/test_driver.c
+++ b/src/test/test_driver.c
@@ -2376,7 +2376,9 @@ testDomainSetVcpusFlags(virDomainPtr domain, unsigned int nrCpus,
 
     if (persistentDef) {
         if (flags & VIR_DOMAIN_VCPU_MAXIMUM) {
-            persistentDef->maxvcpus = nrCpus;
+            if (virDomainDefSetVCpusMax(persistentDef, nrCpus) < 0)
+                goto cleanup;
+
             if (nrCpus < persistentDef->vcpus)
                 persistentDef->vcpus = nrCpus;
         } else {
diff --git a/src/vbox/vbox_common.c b/src/vbox/vbox_common.c
index 3e6ed7a..20f44e9 100644
--- a/src/vbox/vbox_common.c
+++ b/src/vbox/vbox_common.c
@@ -3901,7 +3901,10 @@ static char *vboxDomainGetXMLDesc(virDomainPtr dom, unsigned int flags)
         virDomainDefSetMemoryTotal(def, memorySize * 1024);
 
         gVBoxAPI.UIMachine.GetCPUCount(machine, &CPUCount);
-        def->maxvcpus = def->vcpus = CPUCount;
+        if (virDomainDefSetVCpusMax(def, CPUCount) < 0)
+            goto cleanup;
+
+        def->vcpus = CPUCount;
 
         /* Skip cpumasklen, cpumask, onReboot, onPoweroff, onCrash */
 
@@ -6055,7 +6058,11 @@ static char *vboxDomainSnapshotGetXMLDesc(virDomainSnapshotPtr snapshot,
     def->dom->os.type = VIR_DOMAIN_OSTYPE_HVM;
     def->dom->os.arch = virArchFromHost();
     gVBoxAPI.UIMachine.GetCPUCount(machine, &CPUCount);
-    def->dom->maxvcpus = def->dom->vcpus = CPUCount;
+    if (virDomainDefSetVCpusMax(def->dom, CPUCount) < 0)
+        goto cleanup;
+
+    def->dom->vcpus = CPUCount;
+
     if (vboxSnapshotGetReadWriteDisks(def, snapshot) < 0)
         VIR_DEBUG("Could not get read write disks for snapshot");
 
diff --git a/src/vmx/vmx.c b/src/vmx/vmx.c
index 7c3c10a..41a872a 100644
--- a/src/vmx/vmx.c
+++ b/src/vmx/vmx.c
@@ -1457,7 +1457,10 @@ virVMXParseConfig(virVMXContext *ctx,
         goto cleanup;
     }
 
-    def->maxvcpus = def->vcpus = numvcpus;
+    if (virDomainDefSetVCpusMax(def, numvcpus) < 0)
+        goto cleanup;
+
+    def->vcpus = numvcpus;
 
     /* vmx:sched.cpu.affinity -> def:cpumask */
     /* NOTE: maps to VirtualMachine:config.cpuAffinity.affinitySet */
diff --git a/src/vz/vz_sdk.c b/src/vz/vz_sdk.c
index 750133d..bef5146 100644
--- a/src/vz/vz_sdk.c
+++ b/src/vz/vz_sdk.c
@@ -1150,8 +1150,10 @@ prlsdkConvertCpuInfo(PRL_HANDLE sdkdom,
     if (cpuCount > hostcpus)
         cpuCount = hostcpus;
 
+    if (virDomainDefSetVCpusMax(def, cpuCount) < 0)
+        goto cleanup;
+
     def->vcpus = cpuCount;
-    def->maxvcpus = cpuCount;
 
     pret = PrlVmCfg_GetCpuMask(sdkdom, NULL, &buflen);
     prlsdkCheckRetGoto(pret, cleanup);
diff --git a/src/xen/xm_internal.c b/src/xen/xm_internal.c
index 75f98b1..7321b9f 100644
--- a/src/xen/xm_internal.c
+++ b/src/xen/xm_internal.c
@@ -704,7 +704,9 @@ xenXMDomainSetVcpusFlags(virConnectPtr conn,
     }
 
     if (flags & VIR_DOMAIN_VCPU_MAXIMUM) {
-        entry->def->maxvcpus = vcpus;
+        if (virDomainDefSetVCpusMax(entry->def, vcpus) < 0)
+            goto cleanup;
+
         if (entry->def->vcpus > vcpus)
             entry->def->vcpus = vcpus;
     } else {
diff --git a/src/xenapi/xenapi_driver.c b/src/xenapi/xenapi_driver.c
index e503974..11cace1 100644
--- a/src/xenapi/xenapi_driver.c
+++ b/src/xenapi/xenapi_driver.c
@@ -1502,7 +1502,9 @@ xenapiDomainGetXMLDesc(virDomainPtr dom, unsigned int flags)
 
     vcpus = xenapiDomainGetMaxVcpus(dom);
 
-    defPtr->maxvcpus = vcpus;
+    if (virDomainDefSetVCpusMax(defPtr, vcpus) < 0)
+        goto error;
+
     defPtr->vcpus = vcpus;
 
     enum xen_on_normal_exit action;
diff --git a/src/xenconfig/xen_common.c b/src/xenconfig/xen_common.c
index 0890c73..05fc76c 100644
--- a/src/xenconfig/xen_common.c
+++ b/src/xenconfig/xen_common.c
@@ -502,7 +502,9 @@ xenParseCPUFeatures(virConfPtr conf, virDomainDefPtr def)
         MAX_VIRT_CPUS < count)
         return -1;
 
-    def->maxvcpus = count;
+    if (virDomainDefSetVCpusMax(def, count) < 0)
+        return -1;
+
     if (xenConfigGetULong(conf, "vcpu_avail", &count, -1) < 0)
         return -1;
 
diff --git a/src/xenconfig/xen_sxpr.c b/src/xenconfig/xen_sxpr.c
index 7fc9c9d..64a317d 100644
--- a/src/xenconfig/xen_sxpr.c
+++ b/src/xenconfig/xen_sxpr.c
@@ -1173,7 +1173,8 @@ xenParseSxpr(const struct sexpr *root,
         }
     }
 
-    def->maxvcpus = sexpr_int(root, "domain/vcpus");
+    if (virDomainDefSetVCpusMax(def, sexpr_int(root, "domain/vcpus")) < 0)
+        goto error;
     def->vcpus = count_one_bits_l(sexpr_u64(root, "domain/vcpu_avail"));
     if (!def->vcpus || def->maxvcpus < def->vcpus)
         def->vcpus = def->maxvcpus;
-- 
2.6.2

On 11/20/2015 10:21 AM, Peter Krempa wrote:
> To support further refactors replace all write access to def->maxvcpus
> with an accessor function.
> ---
>  src/conf/domain_conf.c     | 18 ++++++++++++++++--
>  src/conf/domain_conf.h     |  2 ++
>  src/hyperv/hyperv_driver.c |  5 ++++-
>  src/libvirt_private.syms   |  1 +
>  src/libxl/libxl_driver.c   |  8 ++++++--
>  src/lxc/lxc_native.c       |  4 +++-
>  src/openvz/openvz_conf.c   |  4 +++-
>  src/openvz/openvz_driver.c |  5 ++++-
>  src/phyp/phyp_driver.c     |  4 +++-
>  src/qemu/qemu_command.c    |  9 +++++++--
>  src/qemu/qemu_driver.c     |  4 +++-
>  src/test/test_driver.c     |  4 +++-
>  src/vbox/vbox_common.c     | 11 +++++++++--
>  src/vmx/vmx.c              |  5 ++++-
>  src/vz/vz_sdk.c            |  4 +++-
>  src/xen/xm_internal.c      |  4 +++-
>  src/xenapi/xenapi_driver.c |  4 +++-
>  src/xenconfig/xen_common.c |  4 +++-
>  src/xenconfig/xen_sxpr.c   |  3 ++-
>  19 files changed, 82 insertions(+), 21 deletions(-)
To be consistent with other uses (e.g. drivers, remote, libvirt-api), I think
it should be "Vcpus" rather than "VCpus". The other options are of course
"vCPUs" or "VCPUs", but they both look strange in/as API names.

The consistency matters mostly for searching for vCPU-related functionality;
since "VCpu" isn't used anywhere yet, introducing it just adds another
spelling to look up.

John

On 11/20/2015 10:21 AM, Peter Krempa wrote:
> To support further refactors replace all write access to def->maxvcpus
> with an accessor function.
> ---
>  src/conf/domain_conf.c     | 18 ++++++++++++++++--
>  src/conf/domain_conf.h     |  2 ++
>  src/hyperv/hyperv_driver.c |  5 ++++-
>  src/libvirt_private.syms   |  1 +
>  src/libxl/libxl_driver.c   |  8 ++++++--
>  src/lxc/lxc_native.c       |  4 +++-
>  src/openvz/openvz_conf.c   |  4 +++-
>  src/openvz/openvz_driver.c |  5 ++++-
>  src/phyp/phyp_driver.c     |  4 +++-
>  src/qemu/qemu_command.c    |  9 +++++++--
>  src/qemu/qemu_driver.c     |  4 +++-
>  src/test/test_driver.c     |  4 +++-
>  src/vbox/vbox_common.c     | 11 +++++++++--
>  src/vmx/vmx.c              |  5 ++++-
>  src/vz/vz_sdk.c            |  4 +++-
>  src/xen/xm_internal.c      |  4 +++-
>  src/xenapi/xenapi_driver.c |  4 +++-
>  src/xenconfig/xen_common.c |  4 +++-
>  src/xenconfig/xen_sxpr.c   |  3 ++-
>  19 files changed, 82 insertions(+), 21 deletions(-)
Now that I'm much further along...
> diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
> index a744412..e0fc09c 100644
> --- a/src/conf/domain_conf.c
> +++ b/src/conf/domain_conf.c
> @@ -1424,6 +1424,16 @@ void virDomainLeaseDefFree(virDomainLeaseDefPtr def)
>  }
> +int
> +virDomainDefSetVCpusMax(virDomainDefPtr def,
> +                        unsigned int vcpus)
Should this change to "maxvcpus"? Not all that important, but it may make for
easier reading later on, when def->maxvcpus is interspersed with def->vcpus
and there's a 'vcpus' variable that relates to the maximum rather than the
current count.

John
> +{
> +    def->maxvcpus = vcpus;
> +
> +    return 0;
> +}
> +
> +

The code can be unified into the new accessor rather than being scattered
across the drivers.
---
 src/conf/domain_conf.c   | 3 +++
 src/libxl/libxl_driver.c | 3 ---
 src/qemu/qemu_driver.c   | 3 ---
 src/test/test_driver.c   | 3 ---
 src/xen/xm_internal.c    | 3 ---
 5 files changed, 3 insertions(+), 12 deletions(-)

diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index e0fc09c..6bed826 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -1428,6 +1428,9 @@ int
 virDomainDefSetVCpusMax(virDomainDefPtr def,
                         unsigned int vcpus)
 {
+    if (vcpus < def->vcpus)
+        def->vcpus = vcpus;
+
     def->maxvcpus = vcpus;
 
     return 0;
diff --git a/src/libxl/libxl_driver.c b/src/libxl/libxl_driver.c
index 5ef0784..e85874a 100644
--- a/src/libxl/libxl_driver.c
+++ b/src/libxl/libxl_driver.c
@@ -2188,9 +2188,6 @@ libxlDomainSetVcpusFlags(virDomainPtr dom, unsigned int nvcpus,
     case VIR_DOMAIN_VCPU_MAXIMUM | VIR_DOMAIN_VCPU_CONFIG:
         if (virDomainDefSetVCpusMax(def, nvcpus) < 0)
             goto cleanup;
-
-        if (nvcpus < def->vcpus)
-            def->vcpus = nvcpus;
         break;
 
     case VIR_DOMAIN_VCPU_CONFIG:
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 8ab3209..f879060 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -4981,9 +4981,6 @@ qemuDomainSetVcpusFlags(virDomainPtr dom, unsigned int nvcpus,
     if (flags & VIR_DOMAIN_VCPU_MAXIMUM) {
         if (virDomainDefSetVCpusMax(persistentDef, nvcpus) < 0)
             goto endjob;
-
-        if (nvcpus < persistentDef->vcpus)
-            persistentDef->vcpus = nvcpus;
     } else {
         persistentDef->vcpus = nvcpus;
     }
diff --git a/src/test/test_driver.c b/src/test/test_driver.c
index 53d9338..cfd7bdc 100644
--- a/src/test/test_driver.c
+++ b/src/test/test_driver.c
@@ -2378,9 +2378,6 @@ testDomainSetVcpusFlags(virDomainPtr domain, unsigned int nrCpus,
         if (flags & VIR_DOMAIN_VCPU_MAXIMUM) {
             if (virDomainDefSetVCpusMax(persistentDef, nrCpus) < 0)
                 goto cleanup;
-
-            if (nrCpus < persistentDef->vcpus)
-                persistentDef->vcpus = nrCpus;
         } else {
             persistentDef->vcpus = nrCpus;
         }
diff --git a/src/xen/xm_internal.c b/src/xen/xm_internal.c
index 7321b9f..2838525 100644
--- a/src/xen/xm_internal.c
+++ b/src/xen/xm_internal.c
@@ -706,9 +706,6 @@ xenXMDomainSetVcpusFlags(virConnectPtr conn,
     if (flags & VIR_DOMAIN_VCPU_MAXIMUM) {
         if (virDomainDefSetVCpusMax(entry->def, vcpus) < 0)
             goto cleanup;
-
-        if (entry->def->vcpus > vcpus)
-            entry->def->vcpus = vcpus;
     } else {
         entry->def->vcpus = vcpus;
     }
-- 
2.6.2

The new helper will simplify checking whether the domain config contains
inactive vCPUs.
---
 src/conf/domain_conf.c     | 9 ++++++++-
 src/conf/domain_conf.h     | 1 +
 src/libvirt_private.syms   | 1 +
 src/openvz/openvz_driver.c | 2 +-
 src/qemu/qemu_command.c    | 4 ++--
 src/vbox/vbox_common.c     | 2 +-
 src/vmx/vmx.c              | 2 +-
 src/vz/vz_sdk.c            | 2 +-
 src/xenconfig/xen_common.c | 2 +-
 src/xenconfig/xen_sxpr.c   | 4 ++--
 10 files changed, 19 insertions(+), 10 deletions(-)

diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index 6bed826..3a1dcc7 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -1437,6 +1437,13 @@ virDomainDefSetVCpusMax(virDomainDefPtr def,
 }
 
 
+bool
+virDomainDefHasVCpusOffline(const virDomainDef *def)
+{
+    return def->vcpus < def->maxvcpus;
+}
+
+
 virDomainDiskDefPtr
 virDomainDiskDefNew(virDomainXMLOptionPtr xmlopt)
 {
@@ -21785,7 +21792,7 @@ virDomainDefFormatInternal(virDomainDefPtr def,
         virBufferAsprintf(buf, " cpuset='%s'", cpumask);
         VIR_FREE(cpumask);
     }
-    if (def->vcpus != def->maxvcpus)
+    if (virDomainDefHasVCpusOffline(def))
         virBufferAsprintf(buf, " current='%u'", def->vcpus);
     virBufferAsprintf(buf, ">%u</vcpu>\n", def->maxvcpus);
 
diff --git a/src/conf/domain_conf.h b/src/conf/domain_conf.h
index 498ca99..de7412c 100644
--- a/src/conf/domain_conf.h
+++ b/src/conf/domain_conf.h
@@ -2326,6 +2326,7 @@ struct _virDomainDef {
 };
 
 int virDomainDefSetVCpusMax(virDomainDefPtr def, unsigned int vcpus);
+bool virDomainDefHasVCpusOffline(const virDomainDef *def);
 
 unsigned long long virDomainDefGetMemoryInitial(const virDomainDef *def);
 void virDomainDefSetMemoryTotal(virDomainDefPtr def, unsigned long long size);
diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms
index 321f926..7e6ea4b 100644
--- a/src/libvirt_private.syms
+++ b/src/libvirt_private.syms
@@ -219,6 +219,7 @@ virDomainDefGetMemoryInitial;
 virDomainDefGetSecurityLabelDef;
 virDomainDefHasDeviceAddress;
 virDomainDefHasMemoryHotplug;
+virDomainDefHasVCpusOffline;
 virDomainDefMaybeAddController;
 virDomainDefMaybeAddInput;
 virDomainDefNeedsPlacementAdvice;
diff --git a/src/openvz/openvz_driver.c b/src/openvz/openvz_driver.c
index 60b40d5..1361432 100644
--- a/src/openvz/openvz_driver.c
+++ b/src/openvz/openvz_driver.c
@@ -1030,7 +1030,7 @@ openvzDomainDefineXMLFlags(virConnectPtr conn, const char *xml, unsigned int fla
     if (openvzDomainSetNetworkConfig(conn, vm->def) < 0)
         goto cleanup;
 
-    if (vm->def->vcpus != vm->def->maxvcpus) {
+    if (virDomainDefHasVCpusOffline(vm->def)) {
         virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
                        _("current vcpu count must equal maximum"));
         goto cleanup;
diff --git a/src/qemu/qemu_command.c b/src/qemu/qemu_command.c
index af283af..ef44b8e 100644
--- a/src/qemu/qemu_command.c
+++ b/src/qemu/qemu_command.c
@@ -7852,7 +7852,7 @@ qemuBuildSmpArgStr(const virDomainDef *def,
     virBufferAsprintf(&buf, "%u", def->vcpus);
 
     if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_SMP_TOPOLOGY)) {
-        if (def->vcpus != def->maxvcpus)
+        if (virDomainDefHasVCpusOffline(def))
             virBufferAsprintf(&buf, ",maxcpus=%u", def->maxvcpus);
         /* sockets, cores, and threads are either all zero
          * or all non-zero, thus checking one of them is enough */
@@ -7865,7 +7865,7 @@ qemuBuildSmpArgStr(const virDomainDef *def,
             virBufferAsprintf(&buf, ",cores=%u", 1);
             virBufferAsprintf(&buf, ",threads=%u", 1);
         }
-    } else if (def->vcpus != def->maxvcpus) {
+    } else if (virDomainDefHasVCpusOffline(def)) {
         virBufferFreeAndReset(&buf);
         /* FIXME - consider hot-unplugging cpus after boot for older qemu */
         virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
diff --git a/src/vbox/vbox_common.c b/src/vbox/vbox_common.c
index 20f44e9..4c88fa9 100644
--- a/src/vbox/vbox_common.c
+++ b/src/vbox/vbox_common.c
@@ -1891,7 +1891,7 @@ vboxDomainDefineXMLFlags(virConnectPtr conn, const char *xml, unsigned int flags
                   def->mem.cur_balloon, (unsigned)rc);
     }
 
-    if (def->vcpus != def->maxvcpus) {
+    if (virDomainDefHasVCpusOffline(def)) {
         virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
                        _("current vcpu count must equal maximum"));
     }
diff --git a/src/vmx/vmx.c b/src/vmx/vmx.c
index 41a872a..5456e3d 100644
--- a/src/vmx/vmx.c
+++ b/src/vmx/vmx.c
@@ -3175,7 +3175,7 @@ virVMXFormatConfig(virVMXContext *ctx, virDomainXMLOptionPtr xmlopt, virDomainDe
     }
 
     /* def:maxvcpus -> vmx:numvcpus */
-    if (def->vcpus != def->maxvcpus) {
+    if (virDomainDefHasVCpusOffline(def)) {
         virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
                        _("No support for domain XML entry 'vcpu' attribute "
                          "'current'"));
diff --git a/src/vz/vz_sdk.c b/src/vz/vz_sdk.c
index bef5146..d3aa3e2 100644
--- a/src/vz/vz_sdk.c
+++ b/src/vz/vz_sdk.c
@@ -1933,7 +1933,7 @@ prlsdkCheckUnsupportedParams(PRL_HANDLE sdkdom, virDomainDefPtr def)
         return -1;
     }
 
-    if (def->vcpus != def->maxvcpus) {
+    if (virDomainDefHasVCpusOffline(def)) {
         virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
                        _("current vcpus must be equal to maxvcpus"));
         return -1;
diff --git a/src/xenconfig/xen_common.c b/src/xenconfig/xen_common.c
index 05fc76c..e21576d 100644
--- a/src/xenconfig/xen_common.c
+++ b/src/xenconfig/xen_common.c
@@ -1531,7 +1531,7 @@ xenFormatCPUAllocation(virConfPtr conf, virDomainDefPtr def)
     /* Computing the vcpu_avail bitmask works because MAX_VIRT_CPUS is
        either 32, or 64 on a platform where long is big enough.  */
-    if (def->vcpus < def->maxvcpus &&
+    if (virDomainDefHasVCpusOffline(def) &&
         xenConfigSetInt(conf, "vcpu_avail", (1UL << def->vcpus) - 1) < 0)
         goto cleanup;
 
diff --git a/src/xenconfig/xen_sxpr.c b/src/xenconfig/xen_sxpr.c
index 64a317d..505ef76 100644
--- a/src/xenconfig/xen_sxpr.c
+++ b/src/xenconfig/xen_sxpr.c
@@ -2226,7 +2226,7 @@ xenFormatSxpr(virConnectPtr conn,
     virBufferAsprintf(&buf, "(vcpus %u)", def->maxvcpus);
     /* Computing the vcpu_avail bitmask works because MAX_VIRT_CPUS is
        either 32, or 64 on a platform where long is big enough.  */
-    if (def->vcpus < def->maxvcpus)
+    if (virDomainDefHasVCpusOffline(def))
         virBufferAsprintf(&buf, "(vcpu_avail %lu)", (1UL << def->vcpus) - 1);
 
     if (def->cpumask) {
@@ -2308,7 +2308,7 @@ xenFormatSxpr(virConnectPtr conn,
             virBufferEscapeSexpr(&buf, "(kernel '%s')", def->os.loader->path);
 
         virBufferAsprintf(&buf, "(vcpus %u)", def->maxvcpus);
-        if (def->vcpus < def->maxvcpus)
+        if (virDomainDefHasVCpusOffline(def))
             virBufferAsprintf(&buf, "(vcpu_avail %lu)", (1UL << def->vcpus) - 1);
-- 
2.6.2

On 11/20/2015 10:21 AM, Peter Krempa wrote:
The new helper will simplify checking whether the domain config contains inactive vCPUs. --- src/conf/domain_conf.c | 9 ++++++++- src/conf/domain_conf.h | 1 + src/libvirt_private.syms | 1 + src/openvz/openvz_driver.c | 2 +- src/qemu/qemu_command.c | 4 ++-- src/vbox/vbox_common.c | 2 +- src/vmx/vmx.c | 2 +- src/vz/vz_sdk.c | 2 +- src/xenconfig/xen_common.c | 2 +- src/xenconfig/xen_sxpr.c | 4 ++-- 10 files changed, 19 insertions(+), 10 deletions(-)
Like Patch 7 - use "Vcpus" rather than "VCpus" John

Set new domain configs to contain at least 1 vCPU add a check that maximum vCPU count isn't set to 0 and remove unnecesary checks. The openvz test suite change is necessary since the test case generates the config via virDomainDefNew but does not set the vCPU info. With the change to virDomainDefNew the expected output has changed. --- src/conf/domain_conf.c | 12 ++++++++++++ src/lxc/lxc_native.c | 7 ------- src/openvz/openvz_driver.c | 20 ++++++++------------ src/qemu/qemu_command.c | 3 --- src/vmx/vmx.c | 6 +++--- tests/openvzutilstest.c | 2 +- 6 files changed, 24 insertions(+), 26 deletions(-) diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c index 3a1dcc7..6b16430 100644 --- a/src/conf/domain_conf.c +++ b/src/conf/domain_conf.c @@ -1428,6 +1428,12 @@ int virDomainDefSetVCpusMax(virDomainDefPtr def, unsigned int vcpus) { + if (vcpus == 0) { + virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s", + _("domain config can't have 0 maximum vCPUs")); + return -1; + } + if (vcpus < def->vcpus) def->vcpus = vcpus; @@ -2697,6 +2703,12 @@ virDomainDefNew(void) if (!(ret->numa = virDomainNumaNew())) goto error; + /* assume at least 1 cpu for every config */ + if (virDomainDefSetVCpusMax(ret, 1) < 0) + goto error; + + ret->vcpus = 1; + ret->mem.hard_limit = VIR_DOMAIN_MEMORY_PARAM_UNLIMITED; ret->mem.soft_limit = VIR_DOMAIN_MEMORY_PARAM_UNLIMITED; ret->mem.swap_hard_limit = VIR_DOMAIN_MEMORY_PARAM_UNLIMITED; diff --git a/src/lxc/lxc_native.c b/src/lxc/lxc_native.c index d4a72c1..a3fea0a 100644 --- a/src/lxc/lxc_native.c +++ b/src/lxc/lxc_native.c @@ -1017,13 +1017,6 @@ lxcParseConfigString(const char *config) vmdef->onPoweroff = VIR_DOMAIN_LIFECYCLE_DESTROY; vmdef->virtType = VIR_DOMAIN_VIRT_LXC; - /* Value not handled by the LXC driver, setting to - * minimum required to make XML parsing pass */ - if (virDomainDefSetVCpusMax(vmdef, 1) < 0) - goto error; - - vmdef->vcpus = 1; - vmdef->nfss = 0; vmdef->os.type = VIR_DOMAIN_OSTYPE_EXE; diff --git 
a/src/openvz/openvz_driver.c b/src/openvz/openvz_driver.c index 1361432..53a2d57 100644 --- a/src/openvz/openvz_driver.c +++ b/src/openvz/openvz_driver.c @@ -1035,12 +1035,10 @@ openvzDomainDefineXMLFlags(virConnectPtr conn, const char *xml, unsigned int fla _("current vcpu count must equal maximum")); goto cleanup; } - if (vm->def->maxvcpus > 0) { - if (openvzDomainSetVcpusInternal(vm, vm->def->maxvcpus) < 0) { - virReportError(VIR_ERR_INTERNAL_ERROR, "%s", - _("Could not set number of vCPUs")); - goto cleanup; - } + if (openvzDomainSetVcpusInternal(vm, vm->def->maxvcpus) < 0) { + virReportError(VIR_ERR_INTERNAL_ERROR, "%s", + _("Could not set number of vCPUs")); + goto cleanup; } if (vm->def->mem.cur_balloon > 0) { @@ -1133,12 +1131,10 @@ openvzDomainCreateXML(virConnectPtr conn, const char *xml, vm->def->id = vm->pid; virDomainObjSetState(vm, VIR_DOMAIN_RUNNING, VIR_DOMAIN_RUNNING_BOOTED); - if (vm->def->maxvcpus > 0) { - if (openvzDomainSetVcpusInternal(vm, vm->def->maxvcpus) < 0) { - virReportError(VIR_ERR_INTERNAL_ERROR, "%s", - _("Could not set number of vCPUs")); - goto cleanup; - } + if (openvzDomainSetVcpusInternal(vm, vm->def->maxvcpus) < 0) { + virReportError(VIR_ERR_INTERNAL_ERROR, "%s", + _("Could not set number of vCPUs")); + goto cleanup; } dom = virGetDomain(conn, vm->def->name, vm->def->uuid); diff --git a/src/qemu/qemu_command.c b/src/qemu/qemu_command.c index ef44b8e..cc6785f 100644 --- a/src/qemu/qemu_command.c +++ b/src/qemu/qemu_command.c @@ -12694,9 +12694,6 @@ qemuParseCommandLine(virCapsPtr qemuCaps, def->id = -1; def->mem.cur_balloon = 64 * 1024; virDomainDefSetMemoryTotal(def, def->mem.cur_balloon); - if (virDomainDefSetVCpusMax(def, 1) < 0) - goto error; - def->vcpus = 1; def->clock.offset = VIR_DOMAIN_CLOCK_OFFSET_UTC; def->onReboot = VIR_DOMAIN_LIFECYCLE_RESTART; diff --git a/src/vmx/vmx.c b/src/vmx/vmx.c index 5456e3d..0223e94 100644 --- a/src/vmx/vmx.c +++ b/src/vmx/vmx.c @@ -3181,10 +3181,10 @@ virVMXFormatConfig(virVMXContext 
*ctx, virDomainXMLOptionPtr xmlopt, virDomainDe "'current'")); goto cleanup; } - if (def->maxvcpus <= 0 || (def->maxvcpus % 2 != 0 && def->maxvcpus != 1)) { + if ((def->maxvcpus % 2 != 0 && def->maxvcpus != 1)) { virReportError(VIR_ERR_INTERNAL_ERROR, - _("Expecting domain XML entry 'vcpu' to be an unsigned " - "integer (1 or a multiple of 2) but found %d"), + _("Expecting domain XML entry 'vcpu' to be 1 or a " + "multiple of 2 but found %d"), def->maxvcpus); goto cleanup; } diff --git a/tests/openvzutilstest.c b/tests/openvzutilstest.c index 1414d70..0214fe5 100644 --- a/tests/openvzutilstest.c +++ b/tests/openvzutilstest.c @@ -81,7 +81,7 @@ testReadNetworkConf(const void *data ATTRIBUTE_UNUSED) " <uuid>00000000-0000-0000-0000-000000000000</uuid>\n" " <memory unit='KiB'>0</memory>\n" " <currentMemory unit='KiB'>0</currentMemory>\n" - " <vcpu placement='static'>0</vcpu>\n" + " <vcpu placement='static'>1</vcpu>\n" " <os>\n" " <type>exe</type>\n" " <init>/sbin/init</init>\n" -- 2.6.2

On 11/20/2015 10:22 AM, Peter Krempa wrote:
Set new domain configs to contain at least 1 vCPU, add a check that the maximum vCPU count isn't set to 0, and remove unnecessary checks.
The openvz test suite change is necessary since the test case generates the config via virDomainDefNew but does not set the vCPU info. With the change to virDomainDefNew the expected output has changed. --- src/conf/domain_conf.c | 12 ++++++++++++ src/lxc/lxc_native.c | 7 ------- src/openvz/openvz_driver.c | 20 ++++++++------------ src/qemu/qemu_command.c | 3 --- src/vmx/vmx.c | 6 +++--- tests/openvzutilstest.c | 2 +- 6 files changed, 24 insertions(+), 26 deletions(-)
diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c index 3a1dcc7..6b16430 100644 --- a/src/conf/domain_conf.c +++ b/src/conf/domain_conf.c @@ -1428,6 +1428,12 @@ int virDomainDefSetVCpusMax(virDomainDefPtr def, unsigned int vcpus) { + if (vcpus == 0) { + virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s", + _("domain config can't have 0 maximum vCPUs"));
"domain configuration requires at least 1 vCPU" Shouldn't this be a post-parse check rather than a parse check? Otherwise, couldn't a domain disappear? It may also "solve" the openvz usage pattern issue...
+ return -1; + } + if (vcpus < def->vcpus) def->vcpus = vcpus;
@@ -2697,6 +2703,12 @@ virDomainDefNew(void) if (!(ret->numa = virDomainNumaNew())) goto error;
+ /* assume at least 1 cpu for every config */ + if (virDomainDefSetVCpusMax(ret, 1) < 0) + goto error; + + ret->vcpus = 1; +
[1]
From a quick read - this is what generates issues w/ openvz, which seems to allow a "0" to be interpreted as all CPUs on the host during parse.
ret->mem.hard_limit = VIR_DOMAIN_MEMORY_PARAM_UNLIMITED; ret->mem.soft_limit = VIR_DOMAIN_MEMORY_PARAM_UNLIMITED; ret->mem.swap_hard_limit = VIR_DOMAIN_MEMORY_PARAM_UNLIMITED; diff --git a/src/lxc/lxc_native.c b/src/lxc/lxc_native.c index d4a72c1..a3fea0a 100644 --- a/src/lxc/lxc_native.c +++ b/src/lxc/lxc_native.c @@ -1017,13 +1017,6 @@ lxcParseConfigString(const char *config) vmdef->onPoweroff = VIR_DOMAIN_LIFECYCLE_DESTROY; vmdef->virtType = VIR_DOMAIN_VIRT_LXC;
- /* Value not handled by the LXC driver, setting to - * minimum required to make XML parsing pass */ - if (virDomainDefSetVCpusMax(vmdef, 1) < 0) - goto error; - - vmdef->vcpus = 1; - vmdef->nfss = 0; vmdef->os.type = VIR_DOMAIN_OSTYPE_EXE;
diff --git a/src/openvz/openvz_driver.c b/src/openvz/openvz_driver.c index 1361432..53a2d57 100644 --- a/src/openvz/openvz_driver.c +++ b/src/openvz/openvz_driver.c @@ -1035,12 +1035,10 @@ openvzDomainDefineXMLFlags(virConnectPtr conn, const char *xml, unsigned int fla _("current vcpu count must equal maximum")); goto cleanup; } - if (vm->def->maxvcpus > 0) { - if (openvzDomainSetVcpusInternal(vm, vm->def->maxvcpus) < 0) { - virReportError(VIR_ERR_INTERNAL_ERROR, "%s", - _("Could not set number of vCPUs")); - goto cleanup; - } + if (openvzDomainSetVcpusInternal(vm, vm->def->maxvcpus) < 0) { + virReportError(VIR_ERR_INTERNAL_ERROR, "%s", + _("Could not set number of vCPUs")); + goto cleanup; }
if (vm->def->mem.cur_balloon > 0) { @@ -1133,12 +1131,10 @@ openvzDomainCreateXML(virConnectPtr conn, const char *xml, vm->def->id = vm->pid; virDomainObjSetState(vm, VIR_DOMAIN_RUNNING, VIR_DOMAIN_RUNNING_BOOTED);
- if (vm->def->maxvcpus > 0) { - if (openvzDomainSetVcpusInternal(vm, vm->def->maxvcpus) < 0) { - virReportError(VIR_ERR_INTERNAL_ERROR, "%s", - _("Could not set number of vCPUs")); - goto cleanup; - } + if (openvzDomainSetVcpusInternal(vm, vm->def->maxvcpus) < 0) { + virReportError(VIR_ERR_INTERNAL_ERROR, "%s", + _("Could not set number of vCPUs")); + goto cleanup; }
dom = virGetDomain(conn, vm->def->name, vm->def->uuid); diff --git a/src/qemu/qemu_command.c b/src/qemu/qemu_command.c index ef44b8e..cc6785f 100644 --- a/src/qemu/qemu_command.c +++ b/src/qemu/qemu_command.c @@ -12694,9 +12694,6 @@ qemuParseCommandLine(virCapsPtr qemuCaps, def->id = -1; def->mem.cur_balloon = 64 * 1024; virDomainDefSetMemoryTotal(def, def->mem.cur_balloon); - if (virDomainDefSetVCpusMax(def, 1) < 0) - goto error; - def->vcpus = 1; def->clock.offset = VIR_DOMAIN_CLOCK_OFFSET_UTC;
def->onReboot = VIR_DOMAIN_LIFECYCLE_RESTART; diff --git a/src/vmx/vmx.c b/src/vmx/vmx.c index 5456e3d..0223e94 100644 --- a/src/vmx/vmx.c +++ b/src/vmx/vmx.c @@ -3181,10 +3181,10 @@ virVMXFormatConfig(virVMXContext *ctx, virDomainXMLOptionPtr xmlopt, virDomainDe "'current'")); goto cleanup; } - if (def->maxvcpus <= 0 || (def->maxvcpus % 2 != 0 && def->maxvcpus != 1)) { + if ((def->maxvcpus % 2 != 0 && def->maxvcpus != 1)) { virReportError(VIR_ERR_INTERNAL_ERROR, - _("Expecting domain XML entry 'vcpu' to be an unsigned " - "integer (1 or a multiple of 2) but found %d"), + _("Expecting domain XML entry 'vcpu' to be 1 or a " + "multiple of 2 but found %d"), def->maxvcpus); goto cleanup; } diff --git a/tests/openvzutilstest.c b/tests/openvzutilstest.c index 1414d70..0214fe5 100644 --- a/tests/openvzutilstest.c +++ b/tests/openvzutilstest.c @@ -81,7 +81,7 @@ testReadNetworkConf(const void *data ATTRIBUTE_UNUSED) " <uuid>00000000-0000-0000-0000-000000000000</uuid>\n" " <memory unit='KiB'>0</memory>\n" " <currentMemory unit='KiB'>0</currentMemory>\n" - " <vcpu placement='static'>0</vcpu>\n" + " <vcpu placement='static'>1</vcpu>\n" " <os>\n" " <type>exe</type>\n" " <init>/sbin/init</init>\n"
[1] Looking through the history, I found: https://www.redhat.com/archives/libvir-list/2008-November/msg00253.html which seems to indicate that not providing a vCPU value, or providing one of zero, allows the container to use all the CPUs on the host. The original commit id 'd6caacd1' of the test also seems to indicate that a 0 is acceptable. Hopefully someone doing OpenVZ development can chime in here. It seems some code was shared w/r/t reading a configuration file, and perhaps a vcpu count of 0 in the output XML is expected for this type of network configuration. That is, is the output here then fed into something else that creates some network object and will object to finding a 1 for the vcpu count? Personally I don't have a problem with requiring something, but I'm wondering if it's "expected" in that environment. ACK 6-9 w/ the "VCpus" -> "Vcpus" change. Conditional ACK 10 depending on whether this is "right" for openvz and whether the maxvcpus non-zero check should be in post-parse... John

On Mon, Nov 23, 2015 at 08:59:06 -0500, John Ferlan wrote:
On 11/20/2015 10:22 AM, Peter Krempa wrote:
Set new domain configs to contain at least 1 vCPU, add a check that the maximum vCPU count isn't set to 0, and remove unnecessary checks.
The openvz test suite change is necessary since the test case generates the config via virDomainDefNew but does not set the vCPU info. With the change to virDomainDefNew the expected output has changed. --- src/conf/domain_conf.c | 12 ++++++++++++ src/lxc/lxc_native.c | 7 ------- src/openvz/openvz_driver.c | 20 ++++++++------------ src/qemu/qemu_command.c | 3 --- src/vmx/vmx.c | 6 +++--- tests/openvzutilstest.c | 2 +- 6 files changed, 24 insertions(+), 26 deletions(-)
[...]
diff --git a/tests/openvzutilstest.c b/tests/openvzutilstest.c index 1414d70..0214fe5 100644 --- a/tests/openvzutilstest.c +++ b/tests/openvzutilstest.c @@ -81,7 +81,7 @@ testReadNetworkConf(const void *data ATTRIBUTE_UNUSED) " <uuid>00000000-0000-0000-0000-000000000000</uuid>\n" " <memory unit='KiB'>0</memory>\n" " <currentMemory unit='KiB'>0</currentMemory>\n" - " <vcpu placement='static'>0</vcpu>\n" + " <vcpu placement='static'>1</vcpu>\n" " <os>\n" " <type>exe</type>\n" " <init>/sbin/init</init>\n"
[1] Looking through history of things, finds :
https://www.redhat.com/archives/libvir-list/2008-November/msg00253.html
which seems to indicate that not providing a vCPU value, or providing one of zero, allows the container to use all the CPUs on the host. The original commit id 'd6caacd1' of the test also seems to indicate that a
0 is acceptable. Hopefully someone doing OpenVZ development can chime in here. It seems some code was shared w/r/t reading a configuration file, and perhaps a vcpu count of 0 in the output XML is expected for this type of network configuration. That is, is the output here then fed into something else that creates some network object and will object to finding a 1 for the vcpu count?
Hmm, right. I didn't notice that. I'll probably either replace this patch with one that adds the check to post-parse, or drop it entirely. I think it's not strictly necessary in this series. Peter

Finalize the refactor by adding the 'virDomainDefGetVCpusMax' getter and reusing it accross libvirt. --- src/conf/domain_conf.c | 22 +++++++++++++++------- src/conf/domain_conf.h | 1 + src/libvirt_private.syms | 1 + src/libxl/libxl_conf.c | 4 ++-- src/libxl/libxl_driver.c | 9 ++++++--- src/openvz/openvz_driver.c | 4 ++-- src/qemu/qemu_command.c | 4 ++-- src/qemu/qemu_driver.c | 10 +++++----- src/qemu/qemu_process.c | 2 +- src/test/test_driver.c | 13 ++++++++----- src/vbox/vbox_common.c | 4 ++-- src/vmx/vmx.c | 16 +++++++++------- src/vz/vz_driver.c | 2 +- src/xen/xm_internal.c | 9 ++++++--- src/xenapi/xenapi_utils.c | 6 ++---- src/xenconfig/xen_common.c | 4 ++-- src/xenconfig/xen_sxpr.c | 8 ++++---- 17 files changed, 69 insertions(+), 50 deletions(-) diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c index 6b16430..4e5b7b6 100644 --- a/src/conf/domain_conf.c +++ b/src/conf/domain_conf.c @@ -1450,6 +1450,13 @@ virDomainDefHasVCpusOffline(const virDomainDef *def) } +unsigned int +virDomainDefGetVCpusMax(const virDomainDef *def) +{ + return def->maxvcpus; +} + + virDomainDiskDefPtr virDomainDiskDefNew(virDomainXMLOptionPtr xmlopt) { @@ -15266,7 +15273,8 @@ virDomainDefParseXML(xmlDocPtr xml, for (i = 0; i < def->cputune.nvcpusched; i++) { if (virDomainThreadSchedParse(nodes[i], - 0, def->maxvcpus - 1, + 0, + virDomainDefGetVCpusMax(def) - 1, "vcpus", &def->cputune.vcpusched[i]) < 0) goto error; @@ -15343,7 +15351,7 @@ virDomainDefParseXML(xmlDocPtr xml, goto error; if (def->cpu->sockets && - def->maxvcpus > + virDomainDefGetVCpusMax(def) > def->cpu->sockets * def->cpu->cores * def->cpu->threads) { virReportError(VIR_ERR_XML_DETAIL, "%s", _("Maximum CPUs greater than topology limit")); @@ -15355,14 +15363,14 @@ virDomainDefParseXML(xmlDocPtr xml, if (virDomainNumaDefCPUParseXML(def->numa, ctxt) < 0) goto error; - if (virDomainNumaGetCPUCountTotal(def->numa) > def->maxvcpus) { + if (virDomainNumaGetCPUCountTotal(def->numa) > virDomainDefGetVCpusMax(def)) { 
virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("Number of CPUs in <numa> exceeds the" " <vcpu> count")); goto error; } - if (virDomainNumaGetMaxCPUID(def->numa) >= def->maxvcpus) { + if (virDomainNumaGetMaxCPUID(def->numa) >= virDomainDefGetVCpusMax(def)) { virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("CPU IDs in <numa> exceed the <vcpu> count")); goto error; @@ -17843,10 +17851,10 @@ virDomainDefCheckABIStability(virDomainDefPtr src, dst->vcpus, src->vcpus); goto error; } - if (src->maxvcpus != dst->maxvcpus) { + if (virDomainDefGetVCpusMax(src) != virDomainDefGetVCpusMax(dst)) { virReportError(VIR_ERR_CONFIG_UNSUPPORTED, _("Target domain vCPU max %d does not match source %d"), - dst->maxvcpus, src->maxvcpus); + virDomainDefGetVCpusMax(dst), virDomainDefGetVCpusMax(src)); goto error; } @@ -21806,7 +21814,7 @@ virDomainDefFormatInternal(virDomainDefPtr def, } if (virDomainDefHasVCpusOffline(def)) virBufferAsprintf(buf, " current='%u'", def->vcpus); - virBufferAsprintf(buf, ">%u</vcpu>\n", def->maxvcpus); + virBufferAsprintf(buf, ">%u</vcpu>\n", virDomainDefGetVCpusMax(def)); if (def->niothreadids > 0) { virBufferAsprintf(buf, "<iothreads>%u</iothreads>\n", diff --git a/src/conf/domain_conf.h b/src/conf/domain_conf.h index de7412c..433e5c9 100644 --- a/src/conf/domain_conf.h +++ b/src/conf/domain_conf.h @@ -2327,6 +2327,7 @@ struct _virDomainDef { int virDomainDefSetVCpusMax(virDomainDefPtr def, unsigned int vcpus); bool virDomainDefHasVCpusOffline(const virDomainDef *def); +unsigned int virDomainDefGetVCpusMax(const virDomainDef *def); unsigned long long virDomainDefGetMemoryInitial(const virDomainDef *def); void virDomainDefSetMemoryTotal(virDomainDefPtr def, unsigned long long size); diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms index 7e6ea4b..d2993c1 100644 --- a/src/libvirt_private.syms +++ b/src/libvirt_private.syms @@ -217,6 +217,7 @@ virDomainDefGetDefaultEmulator; virDomainDefGetMemoryActual; virDomainDefGetMemoryInitial; 
virDomainDefGetSecurityLabelDef; +virDomainDefGetVCpusMax; virDomainDefHasDeviceAddress; virDomainDefHasMemoryHotplug; virDomainDefHasVCpusOffline; diff --git a/src/libxl/libxl_conf.c b/src/libxl/libxl_conf.c index 4eed5ca..82ccb89 100644 --- a/src/libxl/libxl_conf.c +++ b/src/libxl/libxl_conf.c @@ -643,8 +643,8 @@ libxlMakeDomBuildInfo(virDomainDefPtr def, else libxl_domain_build_info_init_type(b_info, LIBXL_DOMAIN_TYPE_PV); - b_info->max_vcpus = def->maxvcpus; - if (libxl_cpu_bitmap_alloc(ctx, &b_info->avail_vcpus, def->maxvcpus)) + b_info->max_vcpus = virDomainDefGetVCpusMax(def); + if (libxl_cpu_bitmap_alloc(ctx, &b_info->avail_vcpus, b_info->max_vcpus)) return -1; libxl_bitmap_set_none(&b_info->avail_vcpus); for (i = 0; i < def->vcpus; i++) diff --git a/src/libxl/libxl_driver.c b/src/libxl/libxl_driver.c index e85874a..8b0fd39 100644 --- a/src/libxl/libxl_driver.c +++ b/src/libxl/libxl_driver.c @@ -2159,8 +2159,8 @@ libxlDomainSetVcpusFlags(virDomainPtr dom, unsigned int nvcpus, goto endjob; } - if (!(flags & VIR_DOMAIN_VCPU_MAXIMUM) && vm->def->maxvcpus < max) - max = vm->def->maxvcpus; + if (!(flags & VIR_DOMAIN_VCPU_MAXIMUM) && virDomainDefGetVCpusMax(vm->def) < max) + max = virDomainDefGetVCpusMax(vm->def); if (nvcpus > max) { virReportError(VIR_ERR_INVALID_ARG, @@ -2297,7 +2297,10 @@ libxlDomainGetVcpusFlags(virDomainPtr dom, unsigned int flags) def = vm->newDef ? vm->newDef : vm->def; } - ret = (flags & VIR_DOMAIN_VCPU_MAXIMUM) ? 
def->maxvcpus : def->vcpus; + if (flags & VIR_DOMAIN_VCPU_MAXIMUM) + ret = virDomainDefGetVCpusMax(def); + else + ret = def->vcpus; cleanup: if (vm) diff --git a/src/openvz/openvz_driver.c b/src/openvz/openvz_driver.c index 53a2d57..90e2aad 100644 --- a/src/openvz/openvz_driver.c +++ b/src/openvz/openvz_driver.c @@ -1035,7 +1035,7 @@ openvzDomainDefineXMLFlags(virConnectPtr conn, const char *xml, unsigned int fla _("current vcpu count must equal maximum")); goto cleanup; } - if (openvzDomainSetVcpusInternal(vm, vm->def->maxvcpus) < 0) { + if (openvzDomainSetVcpusInternal(vm, virDomainDefGetVCpusMax(vm->def)) < 0) { virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("Could not set number of vCPUs")); goto cleanup; @@ -1131,7 +1131,7 @@ openvzDomainCreateXML(virConnectPtr conn, const char *xml, vm->def->id = vm->pid; virDomainObjSetState(vm, VIR_DOMAIN_RUNNING, VIR_DOMAIN_RUNNING_BOOTED); - if (openvzDomainSetVcpusInternal(vm, vm->def->maxvcpus) < 0) { + if (openvzDomainSetVcpusInternal(vm, virDomainDefGetVCpusMax(vm->def)) < 0) { virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("Could not set number of vCPUs")); goto cleanup; diff --git a/src/qemu/qemu_command.c b/src/qemu/qemu_command.c index cc6785f..b136314 100644 --- a/src/qemu/qemu_command.c +++ b/src/qemu/qemu_command.c @@ -7853,7 +7853,7 @@ qemuBuildSmpArgStr(const virDomainDef *def, if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_SMP_TOPOLOGY)) { if (virDomainDefHasVCpusOffline(def)) - virBufferAsprintf(&buf, ",maxcpus=%u", def->maxvcpus); + virBufferAsprintf(&buf, ",maxcpus=%u", virDomainDefGetVCpusMax(def)); /* sockets, cores, and threads are either all zero * or all non-zero, thus checking one of them is enough */ if (def->cpu && def->cpu->sockets) { @@ -7861,7 +7861,7 @@ qemuBuildSmpArgStr(const virDomainDef *def, virBufferAsprintf(&buf, ",cores=%u", def->cpu->cores); virBufferAsprintf(&buf, ",threads=%u", def->cpu->threads); } else { - virBufferAsprintf(&buf, ",sockets=%u", def->maxvcpus); + virBufferAsprintf(&buf, 
",sockets=%u", virDomainDefGetVCpusMax(def)); virBufferAsprintf(&buf, ",cores=%u", 1); virBufferAsprintf(&buf, ",threads=%u", 1); } diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index f879060..da66ee7 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -4911,10 +4911,10 @@ qemuDomainSetVcpusFlags(virDomainPtr dom, unsigned int nvcpus, } if (def) - maxvcpus = def->maxvcpus; + maxvcpus = virDomainDefGetVCpusMax(def); if (persistentDef) { - if (!maxvcpus || maxvcpus > persistentDef->maxvcpus) - maxvcpus = persistentDef->maxvcpus; + if (!maxvcpus || maxvcpus > virDomainDefGetVCpusMax(persistentDef)) + maxvcpus = virDomainDefGetVCpusMax(persistentDef); } if (!(flags & VIR_DOMAIN_VCPU_MAXIMUM) && nvcpus > maxvcpus) { virReportError(VIR_ERR_INVALID_ARG, @@ -5557,7 +5557,7 @@ qemuDomainGetVcpusFlags(virDomainPtr dom, unsigned int flags) } } else { if (flags & VIR_DOMAIN_VCPU_MAXIMUM) - ret = def->maxvcpus; + ret = virDomainDefGetVCpusMax(def); else ret = def->vcpus; } @@ -19078,7 +19078,7 @@ qemuDomainGetStatsVcpu(virQEMUDriverPtr driver ATTRIBUTE_UNUSED, &record->nparams, maxparams, "vcpu.maximum", - (unsigned) dom->def->maxvcpus) < 0) + virDomainDefGetVCpusMax(dom->def)) < 0) return -1; if (VIR_ALLOC_N(cpuinfo, dom->def->vcpus) < 0) diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c index 2192ad8..0706ee3 100644 --- a/src/qemu/qemu_process.c +++ b/src/qemu/qemu_process.c @@ -3921,7 +3921,7 @@ qemuValidateCpuMax(virDomainDefPtr def, virQEMUCapsPtr qemuCaps) if (!maxCpus) return true; - if (def->maxvcpus > maxCpus) { + if (virDomainDefGetVCpusMax(def) > maxCpus) { virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s", _("Maximum CPUs greater than specified machine type limit")); return false; diff --git a/src/test/test_driver.c b/src/test/test_driver.c index cfd7bdc..4043928 100644 --- a/src/test/test_driver.c +++ b/src/test/test_driver.c @@ -2312,7 +2312,10 @@ testDomainGetVcpusFlags(virDomainPtr domain, unsigned int flags) if (!(def = 
virDomainObjGetOneDef(vm, flags))) goto cleanup; - ret = (flags & VIR_DOMAIN_VCPU_MAXIMUM) ? def->maxvcpus : def->vcpus; + if (flags & VIR_DOMAIN_VCPU_MAXIMUM) + ret = virDomainDefGetVCpusMax(def); + else + ret = def->vcpus; cleanup: virDomainObjEndAPI(&vm); @@ -2355,19 +2358,19 @@ testDomainSetVcpusFlags(virDomainPtr domain, unsigned int nrCpus, if (virDomainObjGetDefs(privdom, flags, &def, &persistentDef) < 0) goto cleanup; - if (def && def->maxvcpus < nrCpus) { + if (def && virDomainDefGetVCpusMax(def) < nrCpus) { virReportError(VIR_ERR_INVALID_ARG, _("requested cpu amount exceeds maximum (%d > %d)"), - nrCpus, def->maxvcpus); + nrCpus, virDomainDefGetVCpusMax(def)); goto cleanup; } if (persistentDef && !(flags & VIR_DOMAIN_VCPU_MAXIMUM) && - persistentDef->maxvcpus < nrCpus) { + virDomainDefGetVCpusMax(persistentDef) < nrCpus) { virReportError(VIR_ERR_INVALID_ARG, _("requested cpu amount exceeds maximum (%d > %d)"), - nrCpus, persistentDef->maxvcpus); + nrCpus, virDomainDefGetVCpusMax(persistentDef)); goto cleanup; } diff --git a/src/vbox/vbox_common.c b/src/vbox/vbox_common.c index 4c88fa9..a05b438 100644 --- a/src/vbox/vbox_common.c +++ b/src/vbox/vbox_common.c @@ -1895,11 +1895,11 @@ vboxDomainDefineXMLFlags(virConnectPtr conn, const char *xml, unsigned int flags virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s", _("current vcpu count must equal maximum")); } - rc = gVBoxAPI.UIMachine.SetCPUCount(machine, def->maxvcpus); + rc = gVBoxAPI.UIMachine.SetCPUCount(machine, virDomainDefGetVCpusMax(def)); if (NS_FAILED(rc)) { virReportError(VIR_ERR_INTERNAL_ERROR, _("could not set the number of virtual CPUs to: %u, rc=%08x"), - def->maxvcpus, (unsigned)rc); + virDomainDefGetVCpusMax(def), (unsigned)rc); } rc = gVBoxAPI.UIMachine.SetCPUProperty(machine, CPUPropertyType_PAE, diff --git a/src/vmx/vmx.c b/src/vmx/vmx.c index 0223e94..44f76f2 100644 --- a/src/vmx/vmx.c +++ b/src/vmx/vmx.c @@ -3066,6 +3066,7 @@ virVMXFormatConfig(virVMXContext *ctx, virDomainXMLOptionPtr 
xmlopt, virDomainDe bool scsi_present[4] = { false, false, false, false }; int scsi_virtualDev[4] = { -1, -1, -1, -1 }; bool floppy_present[2] = { false, false }; + unsigned int maxvcpus; if (ctx->formatFileName == NULL) { virReportError(VIR_ERR_INTERNAL_ERROR, "%s", @@ -3181,15 +3182,16 @@ virVMXFormatConfig(virVMXContext *ctx, virDomainXMLOptionPtr xmlopt, virDomainDe "'current'")); goto cleanup; } - if ((def->maxvcpus % 2 != 0 && def->maxvcpus != 1)) { + maxvcpus = virDomainDefGetVCpusMax(def); + if (maxvcpus % 2 != 0 && maxvcpus != 1) { virReportError(VIR_ERR_INTERNAL_ERROR, - _("Expecting domain XML entry 'vcpu' to be 1 or a " - "multiple of 2 but found %d"), - def->maxvcpus); + _("Expecting domain XML entry 'vcpu' to be an unsigned " + "integer (1 or a multiple of 2) but found %d"), + maxvcpus); goto cleanup; } - virBufferAsprintf(&buffer, "numvcpus = \"%d\"\n", def->maxvcpus); + virBufferAsprintf(&buffer, "numvcpus = \"%d\"\n", maxvcpus); /* def:cpumask -> vmx:sched.cpu.affinity */ if (def->cpumask && virBitmapSize(def->cpumask) > 0) { @@ -3202,11 +3204,11 @@ virVMXFormatConfig(virVMXContext *ctx, virDomainXMLOptionPtr xmlopt, virDomainDe while ((bit = virBitmapNextSetBit(def->cpumask, bit)) >= 0) ++sched_cpu_affinity_length; - if (sched_cpu_affinity_length < def->maxvcpus) { + if (sched_cpu_affinity_length < maxvcpus) { virReportError(VIR_ERR_INTERNAL_ERROR, _("Expecting domain XML attribute 'cpuset' of entry " "'vcpu' to contain at least %d CPU(s)"), - def->maxvcpus); + maxvcpus); goto cleanup; } diff --git a/src/vz/vz_driver.c b/src/vz/vz_driver.c index 39f58a4..c4a46ee 100644 --- a/src/vz/vz_driver.c +++ b/src/vz/vz_driver.c @@ -1372,7 +1372,7 @@ vzDomainGetVcpusFlags(virDomainPtr dom, goto cleanup; if (flags & VIR_DOMAIN_VCPU_MAXIMUM) - ret = privdom->def->maxvcpus; + ret = virDomainDefGetVCpusMax(privdom->def); else ret = privdom->def->vcpus; diff --git a/src/xen/xm_internal.c b/src/xen/xm_internal.c index 2838525..b7b78d7 100644 --- 
a/src/xen/xm_internal.c +++ b/src/xen/xm_internal.c @@ -695,7 +695,8 @@ xenXMDomainSetVcpusFlags(virConnectPtr conn, /* Can't specify a current larger than stored maximum; but * reducing maximum can silently reduce current. */ if (!(flags & VIR_DOMAIN_VCPU_MAXIMUM)) - max = entry->def->maxvcpus; + max = virDomainDefGetVCpusMax(entry->def); + if (vcpus > max) { virReportError(VIR_ERR_INVALID_ARG, _("requested vcpus is greater than max allowable" @@ -760,8 +761,10 @@ xenXMDomainGetVcpusFlags(virConnectPtr conn, if (!(entry = virHashLookup(priv->configCache, filename))) goto cleanup; - ret = ((flags & VIR_DOMAIN_VCPU_MAXIMUM) ? entry->def->maxvcpus - : entry->def->vcpus); + if (flags & VIR_DOMAIN_VCPU_MAXIMUM) + ret = virDomainDefGetVCpusMax(entry->def); + else + ret = entry->def->vcpus; cleanup: xenUnifiedUnlock(priv); diff --git a/src/xenapi/xenapi_utils.c b/src/xenapi/xenapi_utils.c index a80e084..d40f959 100644 --- a/src/xenapi/xenapi_utils.c +++ b/src/xenapi/xenapi_utils.c @@ -504,10 +504,8 @@ createVMRecordFromXml(virConnectPtr conn, virDomainDefPtr def, else (*record)->memory_dynamic_max = (*record)->memory_static_max; - if (def->maxvcpus) { - (*record)->vcpus_max = (int64_t) def->maxvcpus; - (*record)->vcpus_at_startup = (int64_t) def->vcpus; - } + (*record)->vcpus_max = (int64_t) virDomainDefGetVCpusMax(def); + (*record)->vcpus_at_startup = (int64_t) def->vcpus; if (def->onPoweroff) (*record)->actions_after_shutdown = actionShutdownLibvirt2XenapiEnum(def->onPoweroff); if (def->onReboot) diff --git a/src/xenconfig/xen_common.c b/src/xenconfig/xen_common.c index e21576d..deea68b 100644 --- a/src/xenconfig/xen_common.c +++ b/src/xenconfig/xen_common.c @@ -508,7 +508,7 @@ xenParseCPUFeatures(virConfPtr conf, virDomainDefPtr def) if (xenConfigGetULong(conf, "vcpu_avail", &count, -1) < 0) return -1; - def->vcpus = MIN(count_one_bits_l(count), def->maxvcpus); + def->vcpus = MIN(count_one_bits_l(count), virDomainDefGetVCpusMax(def)); if (xenConfigGetString(conf, 
"cpus", &str, NULL) < 0) return -1; @@ -1526,7 +1526,7 @@ xenFormatCPUAllocation(virConfPtr conf, virDomainDefPtr def) int ret = -1; char *cpus = NULL; - if (xenConfigSetInt(conf, "vcpus", def->maxvcpus) < 0) + if (xenConfigSetInt(conf, "vcpus", virDomainDefGetVCpusMax(def)) < 0) goto cleanup; /* Computing the vcpu_avail bitmask works because MAX_VIRT_CPUS is diff --git a/src/xenconfig/xen_sxpr.c b/src/xenconfig/xen_sxpr.c index 505ef76..d984305 100644 --- a/src/xenconfig/xen_sxpr.c +++ b/src/xenconfig/xen_sxpr.c @@ -1176,8 +1176,8 @@ xenParseSxpr(const struct sexpr *root, if (virDomainDefSetVCpusMax(def, sexpr_int(root, "domain/vcpus")) < 0) goto error; def->vcpus = count_one_bits_l(sexpr_u64(root, "domain/vcpu_avail")); - if (!def->vcpus || def->maxvcpus < def->vcpus) - def->vcpus = def->maxvcpus; + if (!def->vcpus || virDomainDefGetVCpusMax(def) < def->vcpus) + def->vcpus = virDomainDefGetVCpusMax(def); tmp = sexpr_node(root, "domain/on_poweroff"); if (tmp != NULL) { @@ -2223,7 +2223,7 @@ xenFormatSxpr(virConnectPtr conn, virBufferAsprintf(&buf, "(memory %llu)(maxmem %llu)", VIR_DIV_UP(def->mem.cur_balloon, 1024), VIR_DIV_UP(virDomainDefGetMemoryActual(def), 1024)); - virBufferAsprintf(&buf, "(vcpus %u)", def->maxvcpus); + virBufferAsprintf(&buf, "(vcpus %u)", virDomainDefGetVCpusMax(def)); /* Computing the vcpu_avail bitmask works because MAX_VIRT_CPUS is either 32, or 64 on a platform where long is big enough. */ if (virDomainDefHasVCpusOffline(def)) @@ -2307,7 +2307,7 @@ xenFormatSxpr(virConnectPtr conn, else virBufferEscapeSexpr(&buf, "(kernel '%s')", def->os.loader->path); - virBufferAsprintf(&buf, "(vcpus %u)", def->maxvcpus); + virBufferAsprintf(&buf, "(vcpus %u)", virDomainDefGetVCpusMax(def)); if (virDomainDefHasVCpusOffline(def)) virBufferAsprintf(&buf, "(vcpu_avail %lu)", (1UL << def->vcpus) - 1); -- 2.6.2

On 11/20/2015 10:22 AM, Peter Krempa wrote:
Finalize the refactor by adding the 'virDomainDefGetVCpusMax' getter and reusing it across libvirt. --- src/conf/domain_conf.c | 22 +++++++++++++++------- src/conf/domain_conf.h | 1 + src/libvirt_private.syms | 1 + src/libxl/libxl_conf.c | 4 ++-- src/libxl/libxl_driver.c | 9 ++++++--- src/openvz/openvz_driver.c | 4 ++-- src/qemu/qemu_command.c | 4 ++-- src/qemu/qemu_driver.c | 10 +++++----- src/qemu/qemu_process.c | 2 +- src/test/test_driver.c | 13 ++++++++----- src/vbox/vbox_common.c | 4 ++-- src/vmx/vmx.c | 16 +++++++++------- src/vz/vz_driver.c | 2 +- src/xen/xm_internal.c | 9 ++++++--- src/xenapi/xenapi_utils.c | 6 ++---- src/xenconfig/xen_common.c | 4 ++-- src/xenconfig/xen_sxpr.c | 8 ++++---- 17 files changed, 69 insertions(+), 50 deletions(-)
Even just after this patch, a cscope search on "->maxvcpus" still returns hits in:

    libxlDomainGetPerCPUStats
    virDomainDefHasVCpusOffline

I cannot recall for sure, but could there be some sort of syntax-check for direct accesses (outside of domain_conf)? Just so no current upstream patches escape someone's review...
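For illustration only, a syntax-check along those lines would presumably live in cfg.mk next to the existing gnulib maint.mk-style rules. The rule name and regex below are a hypothetical, untested sketch, not an actual libvirt rule:

```make
# Hypothetical sketch: flag raw field access to the vcpu counters.
# Exempting src/conf/domain_conf.c itself (where the accessors live)
# would still need some file-exclusion mechanism on top of this.
sc_prohibit_raw_vcpus_access:
	@prohibit='->(vcpus|maxvcpus)\b' \
	halt='use the virDomainDefGetVcpus/SetVcpus(Max) accessors instead' \
	  $(_sc_search_regexps)
```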
diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index 6b16430..4e5b7b6 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -1450,6 +1450,13 @@ virDomainDefHasVCpusOffline(const virDomainDef *def)
This API still accesses maxvcpus directly:

    return def->vcpus < def->maxvcpus;

Although I do note that by the end of the entire patch series a number of these new APIs access ->maxvcpus directly. It just seems 'safer' if every access other than the "Set" vcpusmax function goes through the accessor. I'll try to remember to point them out when I see them (let's see how well short-term memory is working today!).
}
+unsigned int
+virDomainDefGetVCpusMax(const virDomainDef *def)
s/VCpus/Vcpus/g (or VCPUs - whatever had been chosen)
+{
+    return def->maxvcpus;
+}
+
+
[...]
diff --git a/src/vmx/vmx.c b/src/vmx/vmx.c
index 0223e94..44f76f2 100644
--- a/src/vmx/vmx.c
+++ b/src/vmx/vmx.c
@@ -3066,6 +3066,7 @@ virVMXFormatConfig(virVMXContext *ctx, virDomainXMLOptionPtr xmlopt, virDomainDe
     bool scsi_present[4] = { false, false, false, false };
     int scsi_virtualDev[4] = { -1, -1, -1, -1 };
     bool floppy_present[2] = { false, false };
+    unsigned int maxvcpus;
 
     if (ctx->formatFileName == NULL) {
         virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
@@ -3181,15 +3182,16 @@ virVMXFormatConfig(virVMXContext *ctx, virDomainXMLOptionPtr xmlopt, virDomainDe
                          "'current'"));
         goto cleanup;
     }
 
-    if ((def->maxvcpus % 2 != 0 && def->maxvcpus != 1)) {
+    maxvcpus = virDomainDefGetVCpusMax(def);
+    if (maxvcpus % 2 != 0 && maxvcpus != 1) {
         virReportError(VIR_ERR_INTERNAL_ERROR,
-                       _("Expecting domain XML entry 'vcpu' to be 1 or a "
-                         "multiple of 2 but found %d"),
-                       def->maxvcpus);
+                       _("Expecting domain XML entry 'vcpu' to be an unsigned "
+                         "integer (1 or a multiple of 2) but found %d"),
+                       maxvcpus);
Error message thrashing... patch 10 changed this message one way and now this patch changes it back. It doesn't really matter to me which way it goes, but I actually liked the patch 10 wording better than going back to this old format.
         goto cleanup;
     }
 
-    virBufferAsprintf(&buffer, "numvcpus = \"%d\"\n", def->maxvcpus);
+    virBufferAsprintf(&buffer, "numvcpus = \"%d\"\n", maxvcpus);
 
     /* def:cpumask -> vmx:sched.cpu.affinity */
     if (def->cpumask && virBitmapSize(def->cpumask) > 0) {
[...]
index a80e084..d40f959 100644
--- a/src/xenapi/xenapi_utils.c
+++ b/src/xenapi/xenapi_utils.c
@@ -504,10 +504,8 @@ createVMRecordFromXml(virConnectPtr conn, virDomainDefPtr def,
     else
         (*record)->memory_dynamic_max = (*record)->memory_static_max;
 
-    if (def->maxvcpus) {
-        (*record)->vcpus_max = (int64_t) def->maxvcpus;
-        (*record)->vcpus_at_startup = (int64_t) def->vcpus;
-    }
+    (*record)->vcpus_max = (int64_t) virDomainDefGetVCpusMax(def);
+    (*record)->vcpus_at_startup = (int64_t) def->vcpus;
Hmmm... is this yet another hypervisor that allowed maxvcpus == 0 to mean "give me the number of CPUs on the host"? If so, defaulting it to 1 will change expectations. If patch 10 was where we forced maxvcpus to be at least 1, then perhaps the "if (def->maxvcpus)" check removal needs to happen there instead - just so it's captured in the right place.

ACK - with at least the function name adjustment and the accessor for libxlDomainGetPerCPUStats. Whether virDomainDefHasVCpusOffline changes or not is less important, although for consistency it probably should.

John
     if (def->onPoweroff)
         (*record)->actions_after_shutdown = actionShutdownLibvirt2XenapiEnum(def->onPoweroff);
     if (def->onReboot)

--- src/conf/domain_conf.c | 25 +++++++++++++++++++------ src/conf/domain_conf.h | 1 + src/hyperv/hyperv_driver.c | 5 ++++- src/libvirt_private.syms | 1 + src/libxl/libxl_driver.c | 14 +++++++++----- src/openvz/openvz_conf.c | 3 ++- src/openvz/openvz_driver.c | 4 +++- src/phyp/phyp_driver.c | 3 ++- src/qemu/qemu_command.c | 9 ++++++--- src/qemu/qemu_driver.c | 8 +++++--- src/test/test_driver.c | 8 +++++--- src/vbox/vbox_common.c | 6 ++++-- src/vmx/vmx.c | 3 ++- src/vz/vz_sdk.c | 3 ++- src/xen/xm_internal.c | 3 ++- src/xenapi/xenapi_driver.c | 3 ++- src/xenconfig/xen_common.c | 5 ++++- src/xenconfig/xen_sxpr.c | 10 +++++++--- 18 files changed, 80 insertions(+), 34 deletions(-) diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c index 4e5b7b6..d8c1068 100644 --- a/src/conf/domain_conf.c +++ b/src/conf/domain_conf.c @@ -1435,7 +1435,7 @@ virDomainDefSetVCpusMax(virDomainDefPtr def, } if (vcpus < def->vcpus) - def->vcpus = vcpus; + virDomainDefSetVCpus(def, vcpus); def->maxvcpus = vcpus; @@ -1457,6 +1457,16 @@ virDomainDefGetVCpusMax(const virDomainDef *def) } +int +virDomainDefSetVCpus(virDomainDefPtr def, + unsigned int vcpus) +{ + def->vcpus = vcpus; + + return 0; +} + + virDomainDiskDefPtr virDomainDiskDefNew(virDomainXMLOptionPtr xmlopt) { @@ -2711,11 +2721,10 @@ virDomainDefNew(void) goto error; /* assume at least 1 cpu for every config */ - if (virDomainDefSetVCpusMax(ret, 1) < 0) + if (virDomainDefSetVCpusMax(ret, 1) < 0 || + virDomainDefSetVCpus(ret, 1) < 0) goto error; - ret->vcpus = 1; - ret->mem.hard_limit = VIR_DOMAIN_MEMORY_PARAM_UNLIMITED; ret->mem.soft_limit = VIR_DOMAIN_MEMORY_PARAM_UNLIMITED; ret->mem.swap_hard_limit = VIR_DOMAIN_MEMORY_PARAM_UNLIMITED; @@ -14685,6 +14694,7 @@ virDomainVcpuParse(virDomainDefPtr def, int n; char *tmp = NULL; unsigned int maxvcpus; + unsigned int vcpus; int ret = -1; if ((n = virXPathUInt("string(./vcpu[1])", ctxt, &maxvcpus)) < 0) { @@ -14700,16 +14710,19 @@ virDomainVcpuParse(virDomainDefPtr def, if 
(virDomainDefSetVCpusMax(def, maxvcpus) < 0) goto cleanup; - if ((n = virXPathUInt("string(./vcpu[1]/@current)", ctxt, &def->vcpus)) < 0) { + if ((n = virXPathUInt("string(./vcpu[1]/@current)", ctxt, &vcpus)) < 0) { if (n == -2) { virReportError(VIR_ERR_XML_ERROR, "%s", _("current vcpus count must be an integer")); goto cleanup; } - def->vcpus = maxvcpus; + vcpus = maxvcpus; } + if (virDomainDefSetVCpus(def, vcpus) < 0) + goto cleanup; + if (maxvcpus < def->vcpus) { virReportError(VIR_ERR_INTERNAL_ERROR, _("maxvcpus must not be less than current vcpus " diff --git a/src/conf/domain_conf.h b/src/conf/domain_conf.h index 433e5c9..44f707f 100644 --- a/src/conf/domain_conf.h +++ b/src/conf/domain_conf.h @@ -2328,6 +2328,7 @@ struct _virDomainDef { int virDomainDefSetVCpusMax(virDomainDefPtr def, unsigned int vcpus); bool virDomainDefHasVCpusOffline(const virDomainDef *def); unsigned int virDomainDefGetVCpusMax(const virDomainDef *def); +int virDomainDefSetVCpus(virDomainDefPtr def, unsigned int vcpus); unsigned long long virDomainDefGetMemoryInitial(const virDomainDef *def); void virDomainDefSetMemoryTotal(virDomainDefPtr def, unsigned long long size); diff --git a/src/hyperv/hyperv_driver.c b/src/hyperv/hyperv_driver.c index 61e06b0..690bee7 100644 --- a/src/hyperv/hyperv_driver.c +++ b/src/hyperv/hyperv_driver.c @@ -877,7 +877,10 @@ hypervDomainGetXMLDesc(virDomainPtr domain, unsigned int flags) processorSettingData->data->VirtualQuantity) < 0) goto cleanup; - def->vcpus = processorSettingData->data->VirtualQuantity; + if (virDomainDefSetVCpus(def, + processorSettingData->data->VirtualQuantity) < 0) + goto cleanup; + def->os.type = VIR_DOMAIN_OSTYPE_HVM; /* FIXME: devices section is totally missing */ diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms index d2993c1..b08c9c7 100644 --- a/src/libvirt_private.syms +++ b/src/libvirt_private.syms @@ -232,6 +232,7 @@ virDomainDefParseString; virDomainDefPostParse; virDomainDefSetMemoryInitial; 
virDomainDefSetMemoryTotal; +virDomainDefSetVCpus; virDomainDefSetVCpusMax; virDomainDeleteConfig; virDomainDeviceAddressIsValid; diff --git a/src/libxl/libxl_driver.c b/src/libxl/libxl_driver.c index 8b0fd39..8b225a4 100644 --- a/src/libxl/libxl_driver.c +++ b/src/libxl/libxl_driver.c @@ -555,7 +555,8 @@ libxlAddDom0(libxlDriverPrivatePtr driver) if (virDomainDefSetVCpusMax(vm->def, d_info.vcpu_max_id + 1)) goto cleanup; - vm->def->vcpus = d_info.vcpu_online; + if (virDomainDefSetVCpus(vm->def, d_info.vcpu_online) < 0) + goto cleanup; vm->def->mem.cur_balloon = d_info.current_memkb; virDomainDefSetMemoryTotal(vm->def, d_info.max_memkb); @@ -2191,7 +2192,8 @@ libxlDomainSetVcpusFlags(virDomainPtr dom, unsigned int nvcpus, break; case VIR_DOMAIN_VCPU_CONFIG: - def->vcpus = nvcpus; + if (virDomainDefSetVCpus(def, nvcpus) < 0) + goto cleanup; break; case VIR_DOMAIN_VCPU_LIVE: @@ -2201,7 +2203,8 @@ libxlDomainSetVcpusFlags(virDomainPtr dom, unsigned int nvcpus, " with libxenlight"), vm->def->id); goto endjob; } - vm->def->vcpus = nvcpus; + if (virDomainDefSetVCpus(vm->def, nvcpus) < 0) + goto endjob; break; case VIR_DOMAIN_VCPU_LIVE | VIR_DOMAIN_VCPU_CONFIG: @@ -2211,8 +2214,9 @@ libxlDomainSetVcpusFlags(virDomainPtr dom, unsigned int nvcpus, " with libxenlight"), vm->def->id); goto endjob; } - vm->def->vcpus = nvcpus; - def->vcpus = nvcpus; + if (virDomainDefSetVCpus(vm->def, nvcpus) < 0 || + virDomainDefSetVCpus(def, nvcpus) < 0) + goto endjob; break; } diff --git a/src/openvz/openvz_conf.c b/src/openvz/openvz_conf.c index aabb7c4..74f496e 100644 --- a/src/openvz/openvz_conf.c +++ b/src/openvz/openvz_conf.c @@ -585,7 +585,8 @@ int openvzLoadDomains(struct openvz_driver *driver) if (virDomainDefSetVCpusMax(def, vcpus) < 0) goto cleanup; - def->vcpus = vcpus; + if (virDomainDefSetVCpus(def, vcpus) < 0) + goto cleanup; /* XXX load rest of VM config data .... 
*/ diff --git a/src/openvz/openvz_driver.c b/src/openvz/openvz_driver.c index 90e2aad..37c00b2 100644 --- a/src/openvz/openvz_driver.c +++ b/src/openvz/openvz_driver.c @@ -1367,7 +1367,9 @@ static int openvzDomainSetVcpusInternal(virDomainObjPtr vm, if (virDomainDefSetVCpusMax(vm->def, nvcpus) < 0) return -1; - vm->def->vcpus = nvcpus; + if (virDomainDefSetVCpus(vm->def, nvcpus) < 0) + return -1; + return 0; } diff --git a/src/phyp/phyp_driver.c b/src/phyp/phyp_driver.c index 7c77e23..a60b8b2 100644 --- a/src/phyp/phyp_driver.c +++ b/src/phyp/phyp_driver.c @@ -3298,7 +3298,8 @@ phypDomainGetXMLDesc(virDomainPtr dom, unsigned int flags) if (virDomainDefSetVCpusMax(&def, vcpus) < 0) goto err; - def.vcpus = vcpus; + if (virDomainDefSetVCpus(&def, vcpus) < 0) + goto err; return virDomainDefFormat(&def, virDomainDefFormatConvertXMLFlags(flags)); diff --git a/src/qemu/qemu_command.c b/src/qemu/qemu_command.c index b136314..4a67361 100644 --- a/src/qemu/qemu_command.c +++ b/src/qemu/qemu_command.c @@ -12543,6 +12543,7 @@ qemuParseCommandLineSmp(virDomainDefPtr dom, unsigned int cores = 0; unsigned int threads = 0; unsigned int maxcpus = 0; + unsigned int vcpus = 0; size_t i; int nkws; char **kws; @@ -12557,9 +12558,8 @@ qemuParseCommandLineSmp(virDomainDefPtr dom, for (i = 0; i < nkws; i++) { if (vals[i] == NULL) { if (i > 0 || - virStrToLong_i(kws[i], &end, 10, &n) < 0 || *end != '\0') + virStrToLong_ui(kws[i], &end, 10, &vcpus) < 0 || *end != '\0') goto syntax; - dom->vcpus = n; } else { if (virStrToLong_i(vals[i], &end, 10, &n) < 0 || *end != '\0') goto syntax; @@ -12577,11 +12577,14 @@ qemuParseCommandLineSmp(virDomainDefPtr dom, } if (maxcpus == 0) - maxcpus = dom->vcpus; + maxcpus = vcpus; if (virDomainDefSetVCpusMax(dom, maxcpus) < 0) goto error; + if (virDomainDefSetVCpus(dom, vcpus) < 0) + goto error; + if (sockets && cores && threads) { virCPUDefPtr cpu; diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index da66ee7..632ffb5 100644 --- 
a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -4835,8 +4835,9 @@ qemuDomainHotplugVcpus(virQEMUDriverPtr driver, cleanup: VIR_FREE(cpupids); VIR_FREE(mem_mask); - if (virDomainObjIsActive(vm)) - vm->def->vcpus = vcpus; + if (virDomainObjIsActive(vm) && + virDomainDefSetVCpus(vm->def, vcpus) < 0) + ret = -1; virDomainAuditVcpu(vm, oldvcpus, nvcpus, "update", rc == 1); if (cgroup_vcpu) virCgroupFree(&cgroup_vcpu); @@ -4982,7 +4983,8 @@ qemuDomainSetVcpusFlags(virDomainPtr dom, unsigned int nvcpus, if (virDomainDefSetVCpusMax(persistentDef, nvcpus) < 0) goto endjob; } else { - persistentDef->vcpus = nvcpus; + if (virDomainDefSetVCpus(persistentDef, nvcpus) < 0) + goto endjob; } if (virDomainSaveConfig(cfg->configDir, persistentDef) < 0) diff --git a/src/test/test_driver.c b/src/test/test_driver.c index 4043928..00f5c1e 100644 --- a/src/test/test_driver.c +++ b/src/test/test_driver.c @@ -2374,15 +2374,17 @@ testDomainSetVcpusFlags(virDomainPtr domain, unsigned int nrCpus, goto cleanup; } - if (def) - def->vcpus = nrCpus; + if (def && + virDomainDefSetVCpus(def, nrCpus) < 0) + goto cleanup; if (persistentDef) { if (flags & VIR_DOMAIN_VCPU_MAXIMUM) { if (virDomainDefSetVCpusMax(persistentDef, nrCpus) < 0) goto cleanup; } else { - persistentDef->vcpus = nrCpus; + if (virDomainDefSetVCpus(persistentDef, nrCpus) < 0) + goto cleanup; } } diff --git a/src/vbox/vbox_common.c b/src/vbox/vbox_common.c index a05b438..b240e04 100644 --- a/src/vbox/vbox_common.c +++ b/src/vbox/vbox_common.c @@ -3904,7 +3904,8 @@ static char *vboxDomainGetXMLDesc(virDomainPtr dom, unsigned int flags) if (virDomainDefSetVCpusMax(def, CPUCount) < 0) goto cleanup; - def->vcpus = CPUCount; + if (virDomainDefSetVCpus(def, CPUCount) < 0) + goto cleanup; /* Skip cpumasklen, cpumask, onReboot, onPoweroff, onCrash */ @@ -6061,7 +6062,8 @@ static char *vboxDomainSnapshotGetXMLDesc(virDomainSnapshotPtr snapshot, if (virDomainDefSetVCpusMax(def->dom, CPUCount) < 0) goto cleanup; - def->dom->vcpus = 
CPUCount; + if (virDomainDefSetVCpus(def->dom, CPUCount) < 0) + goto cleanup; if (vboxSnapshotGetReadWriteDisks(def, snapshot) < 0) VIR_DEBUG("Could not get read write disks for snapshot"); diff --git a/src/vmx/vmx.c b/src/vmx/vmx.c index 44f76f2..62636a9 100644 --- a/src/vmx/vmx.c +++ b/src/vmx/vmx.c @@ -1460,7 +1460,8 @@ virVMXParseConfig(virVMXContext *ctx, if (virDomainDefSetVCpusMax(def, numvcpus) < 0) goto cleanup; - def->vcpus = numvcpus; + if (virDomainDefSetVCpus(def, numvcpus) < 0) + goto cleanup; /* vmx:sched.cpu.affinity -> def:cpumask */ /* NOTE: maps to VirtualMachine:config.cpuAffinity.affinitySet */ diff --git a/src/vz/vz_sdk.c b/src/vz/vz_sdk.c index d3aa3e2..68c51a8 100644 --- a/src/vz/vz_sdk.c +++ b/src/vz/vz_sdk.c @@ -1153,7 +1153,8 @@ prlsdkConvertCpuInfo(PRL_HANDLE sdkdom, if (virDomainDefSetVCpusMax(def, cpuCount) < 0) goto cleanup; - def->vcpus = cpuCount; + if (virDomainDefSetVCpus(def, cpuCount) < 0) + goto cleanup; pret = PrlVmCfg_GetCpuMask(sdkdom, NULL, &buflen); prlsdkCheckRetGoto(pret, cleanup); diff --git a/src/xen/xm_internal.c b/src/xen/xm_internal.c index b7b78d7..374cc41 100644 --- a/src/xen/xm_internal.c +++ b/src/xen/xm_internal.c @@ -708,7 +708,8 @@ xenXMDomainSetVcpusFlags(virConnectPtr conn, if (virDomainDefSetVCpusMax(entry->def, vcpus) < 0) goto cleanup; } else { - entry->def->vcpus = vcpus; + if (virDomainDefSetVCpus(entry->def, vcpus) < 0) + goto cleanup; } /* If this fails, should we try to undo our changes to the diff --git a/src/xenapi/xenapi_driver.c b/src/xenapi/xenapi_driver.c index 11cace1..df2ed1b 100644 --- a/src/xenapi/xenapi_driver.c +++ b/src/xenapi/xenapi_driver.c @@ -1505,7 +1505,8 @@ xenapiDomainGetXMLDesc(virDomainPtr dom, unsigned int flags) if (virDomainDefSetVCpusMax(defPtr, vcpus) < 0) goto error; - defPtr->vcpus = vcpus; + if (virDomainDefSetVCpus(defPtr, vcpus) < 0) + goto error; enum xen_on_normal_exit action; if (xen_vm_get_actions_after_shutdown(session, &action, vm)) diff --git 
a/src/xenconfig/xen_common.c b/src/xenconfig/xen_common.c index deea68b..d617773 100644 --- a/src/xenconfig/xen_common.c +++ b/src/xenconfig/xen_common.c @@ -508,7 +508,10 @@ xenParseCPUFeatures(virConfPtr conf, virDomainDefPtr def) if (xenConfigGetULong(conf, "vcpu_avail", &count, -1) < 0) return -1; - def->vcpus = MIN(count_one_bits_l(count), virDomainDefGetVCpusMax(def)); + if (virDomainDefSetVCpus(def, MIN(count_one_bits_l(count), + virDomainDefGetVCpusMax(def))) < 0) + return -1; + if (xenConfigGetString(conf, "cpus", &str, NULL) < 0) return -1; diff --git a/src/xenconfig/xen_sxpr.c b/src/xenconfig/xen_sxpr.c index d984305..534130e 100644 --- a/src/xenconfig/xen_sxpr.c +++ b/src/xenconfig/xen_sxpr.c @@ -1092,6 +1092,7 @@ xenParseSxpr(const struct sexpr *root, const char *tmp; virDomainDefPtr def; int hvm = 0, vmlocaltime; + unsigned int vcpus; if (!(def = virDomainDefNew())) goto error; @@ -1175,9 +1176,12 @@ xenParseSxpr(const struct sexpr *root, if (virDomainDefSetVCpusMax(def, sexpr_int(root, "domain/vcpus")) < 0) goto error; - def->vcpus = count_one_bits_l(sexpr_u64(root, "domain/vcpu_avail")); - if (!def->vcpus || virDomainDefGetVCpusMax(def) < def->vcpus) - def->vcpus = virDomainDefGetVCpusMax(def); + vcpus = count_one_bits_l(sexpr_u64(root, "domain/vcpu_avail")); + if (!vcpus || virDomainDefGetVCpusMax(def) < vcpus) + vcpus = virDomainDefGetVCpusMax(def); + + if (virDomainDefSetVCpus(def, vcpus) < 0) + goto error; tmp = sexpr_node(root, "domain/on_poweroff"); if (tmp != NULL) { -- 2.6.2

On 11/20/2015 10:22 AM, Peter Krempa wrote:
---
 src/conf/domain_conf.c     | 25 +++++++++++++++++++------
 src/conf/domain_conf.h     |  1 +
 src/hyperv/hyperv_driver.c |  5 ++++-
 src/libvirt_private.syms   |  1 +
 src/libxl/libxl_driver.c   | 14 +++++++++-----
 src/openvz/openvz_conf.c   |  3 ++-
 src/openvz/openvz_driver.c |  4 +++-
 src/phyp/phyp_driver.c     |  3 ++-
 src/qemu/qemu_command.c    |  9 ++++++---
 src/qemu/qemu_driver.c     |  8 +++++---
 src/test/test_driver.c     |  8 +++++---
 src/vbox/vbox_common.c     |  6 ++++--
 src/vmx/vmx.c              |  3 ++-
 src/vz/vz_sdk.c            |  3 ++-
 src/xen/xm_internal.c      |  3 ++-
 src/xenapi/xenapi_driver.c |  3 ++-
 src/xenconfig/xen_common.c |  5 ++++-
 src/xenconfig/xen_sxpr.c   | 10 +++++++---
 18 files changed, 80 insertions(+), 34 deletions(-)
Still prefer to see "Vcpus" (or "VCPUs") rather than "VCpus"... [...]
diff --git a/src/qemu/qemu_command.c b/src/qemu/qemu_command.c
index b136314..4a67361 100644
--- a/src/qemu/qemu_command.c
+++ b/src/qemu/qemu_command.c
@@ -12543,6 +12543,7 @@ qemuParseCommandLineSmp(virDomainDefPtr dom,
     unsigned int cores = 0;
     unsigned int threads = 0;
     unsigned int maxcpus = 0;
+    unsigned int vcpus = 0;
     size_t i;
     int nkws;
     char **kws;
@@ -12557,9 +12558,8 @@ qemuParseCommandLineSmp(virDomainDefPtr dom,
     for (i = 0; i < nkws; i++) {
         if (vals[i] == NULL) {
             if (i > 0 ||
-                virStrToLong_i(kws[i], &end, 10, &n) < 0 || *end != '\0')
+                virStrToLong_ui(kws[i], &end, 10, &vcpus) < 0 || *end != '\0')
                 goto syntax;
-            dom->vcpus = n;
         } else {
             if (virStrToLong_i(vals[i], &end, 10, &n) < 0 || *end != '\0')
                 goto syntax;
A few lines down from here:

    else if (STREQ(kws[i], "maxcpus"))
        maxcpus = n;

Unrelated to this patch, but perhaps related to an earlier one (at least w/r/t the Long_ui rather than Long_i change): since all the elements are unsigned int, they could all use _ui.

In any case, ACK w/ "VCpus" name adjustment

John
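To illustrate why the _ui variant matters for these fields, here is a minimal, self-contained sketch of a virStrToLong_ui-style helper (the name and exact semantics are illustrative; libvirt's real helpers live in src/util/virstring.c): it rejects negative input, overflow, and trailing junk when no end pointer is supplied.

```c
#include <errno.h>
#include <limits.h>
#include <stdlib.h>

/* Hypothetical sketch of a virStrToLong_ui-style helper: parse an
 * unsigned int in the given base, returning 0 on success, -1 on error. */
static int
str_to_uint(const char *s, const char **end_ptr, int base, unsigned int *result)
{
    unsigned long val;
    char *end;

    /* strtoul() silently accepts a leading '-', so reject it up front */
    if (*s == '-')
        return -1;

    errno = 0;
    val = strtoul(s, &end, base);
    if (errno != 0 || end == s || val > UINT_MAX)
        return -1;

    if (end_ptr)
        *end_ptr = end;           /* caller checks the tail itself */
    else if (*end != '\0')
        return -1;                /* no end pointer: reject trailing junk */

    *result = (unsigned int)val;
    return 0;
}
```

Feeding the signed parser into an unsigned field, as the old code did via `n`, would instead let "-1" wrap around, which is exactly what the switch to an unsigned parse avoids.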
@@ -12577,11 +12577,14 @@ qemuParseCommandLineSmp(virDomainDefPtr dom,
     }
 
     if (maxcpus == 0)
-        maxcpus = dom->vcpus;
+        maxcpus = vcpus;
 
     if (virDomainDefSetVCpusMax(dom, maxcpus) < 0)
         goto error;
 
+    if (virDomainDefSetVCpus(dom, vcpus) < 0)
+        goto error;
+
     if (sockets && cores && threads) {
         virCPUDefPtr cpu;

---
 src/conf/domain_conf.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index d8c1068..3062b3a 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -1461,6 +1461,13 @@
 int
 virDomainDefSetVCpus(virDomainDefPtr def,
                      unsigned int vcpus)
 {
+    if (vcpus > def->maxvcpus) {
+        virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
+                       _("maxvcpus must not be less than current vcpus (%u < %u)"),
+                       vcpus, def->maxvcpus);
+        return -1;
+    }
+
     def->vcpus = vcpus;
 
     return 0;
@@ -14723,13 +14730,6 @@ virDomainVcpuParse(virDomainDefPtr def,
     if (virDomainDefSetVCpus(def, vcpus) < 0)
         goto cleanup;
 
-    if (maxvcpus < def->vcpus) {
-        virReportError(VIR_ERR_INTERNAL_ERROR,
-                       _("maxvcpus must not be less than current vcpus "
-                         "(%u < %u)"), maxvcpus, def->vcpus);
-        goto cleanup;
-    }
-
     tmp = virXPathString("string(./vcpu[1]/@placement)", ctxt);
     if (tmp) {
         if ((def->placement_mode =
-- 
2.6.2

On 11/20/2015 10:22 AM, Peter Krempa wrote:
---
 src/conf/domain_conf.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index d8c1068..3062b3a 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -1461,6 +1461,13 @@
 int
 virDomainDefSetVCpus(virDomainDefPtr def,
                      unsigned int vcpus)
 {
+    if (vcpus > def->maxvcpus) {
Use accessor (virDomainDefGetVCpusMax)

FWIW: just thinking about patch 12, where the code reads counts off the command line and sets them, but qemuParseCommandLineSmp never checks that vcpus <= maxvcpus... Although I suspect that won't be a problem, since that code parses a qemu command line and I would hope we could assume (haha) that qemu itself would have failed if vcpus > maxvcpus - and of course if cores/threads/sockets didn't add up properly as well.

ACK with the accessor change.

John
+        virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
+                       _("maxvcpus must not be less than current vcpus (%u < %u)"),
+                       vcpus, def->maxvcpus);
+        return -1;
+    }
+
     def->vcpus = vcpus;
 
     return 0;
@@ -14723,13 +14730,6 @@ virDomainVcpuParse(virDomainDefPtr def,
     if (virDomainDefSetVCpus(def, vcpus) < 0)
         goto cleanup;
 
-    if (maxvcpus < def->vcpus) {
-        virReportError(VIR_ERR_INTERNAL_ERROR,
-                       _("maxvcpus must not be less than current vcpus "
-                         "(%u < %u)"), maxvcpus, def->vcpus);
-        goto cleanup;
-    }
-
     tmp = virXPathString("string(./vcpu[1]/@placement)", ctxt);
     if (tmp) {
         if ((def->placement_mode =
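As a standalone illustration of the semantics this patch establishes — the setter rejects vcpus > maxvcpus, while shrinking the maximum clamps the current count — here is a minimal sketch. The struct and functions are simplified stand-ins, not libvirt's actual virDomainDef or error-reporting code:

```c
#include <assert.h>
#include <stdio.h>

/* Simplified stand-in for virDomainDef's two vcpu counters. */
typedef struct {
    unsigned int vcpus;    /* currently online vCPUs */
    unsigned int maxvcpus; /* configured maximum */
} demo_def;

/* Mirrors virDomainDefSetVCpus: refuse counts above the maximum. */
static int
demo_set_vcpus(demo_def *def, unsigned int vcpus)
{
    if (vcpus > def->maxvcpus) {
        fprintf(stderr,
                "maxvcpus must not be less than current vcpus (%u < %u)\n",
                def->maxvcpus, vcpus);
        return -1;
    }
    def->vcpus = vcpus;
    return 0;
}

/* Mirrors virDomainDefSetVCpusMax: shrinking the maximum first clamps
 * the current count so the invariant vcpus <= maxvcpus always holds. */
static int
demo_set_vcpus_max(demo_def *def, unsigned int vcpus)
{
    if (vcpus < def->vcpus && demo_set_vcpus(def, vcpus) < 0)
        return -1;
    def->maxvcpus = vcpus;
    return 0;
}
```

Centralizing the check in the setter is what lets the duplicate validation in virDomainVcpuParse be dropped in this patch.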

On Mon, Nov 23, 2015 at 10:35:34 -0500, John Ferlan wrote:
On 11/20/2015 10:22 AM, Peter Krempa wrote:
---
 src/conf/domain_conf.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index d8c1068..3062b3a 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -1461,6 +1461,13 @@
 int
 virDomainDefSetVCpus(virDomainDefPtr def,
                      unsigned int vcpus)
 {
+    if (vcpus > def->maxvcpus) {
Use accessor (virDomainDefGetVCpusMax)
These are actually the accessors to the vcpu-related values, so using the getter here is rather counterproductive. As you've probably noticed later in the series, once def->vcpus becomes an array of structs, def->maxvcpus is the element tracking its count, so this function really is an accessor to that whole infrastructure. I'll not change it.

Peter
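A rough sketch of the direction Peter describes: once def->vcpus becomes an array of per-vCPU structs, def->maxvcpus is simply that array's length, so the accessor functions *are* the layer that may touch the fields directly, and everything else derives counts through them. All names here are illustrative, not libvirt's eventual API:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical per-vCPU record (the future "array of structs"). */
typedef struct {
    int online; /* is this vCPU currently plugged in? */
} demo_vcpu;

typedef struct {
    demo_vcpu *vcpus;      /* allocated to maxvcpus entries */
    unsigned int maxvcpus; /* length of the vcpus array */
} demo_def;

/* The accessors own the representation, so direct field access
 * inside them is the point, not a layering violation. */
static unsigned int
demo_get_vcpus_max(const demo_def *def)
{
    return def->maxvcpus;
}

static unsigned int
demo_get_vcpus(const demo_def *def)
{
    unsigned int i, n = 0;

    /* the "current" count is derived by walking the array */
    for (i = 0; i < def->maxvcpus; i++)
        n += def->vcpus[i].online ? 1 : 0;
    return n;
}
```

With this layout, sparse hotplug (the stated goal of the series) falls out naturally: onlining vCPU 3 while 2 stays offline just flips one array entry.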

--- src/conf/domain_audit.c | 2 +- src/conf/domain_conf.c | 19 +++++++++++++------ src/conf/domain_conf.h | 1 + src/libvirt_private.syms | 1 + src/libxl/libxl_conf.c | 2 +- src/libxl/libxl_driver.c | 8 ++++---- src/lxc/lxc_controller.c | 2 +- src/lxc/lxc_driver.c | 2 +- src/openvz/openvz_driver.c | 2 +- src/phyp/phyp_driver.c | 5 +++-- src/qemu/qemu_command.c | 2 +- src/qemu/qemu_driver.c | 36 ++++++++++++++++++------------------ src/qemu/qemu_process.c | 10 +++++----- src/test/test_driver.c | 14 +++++++------- src/uml/uml_driver.c | 2 +- src/vmware/vmware_driver.c | 2 +- src/vmx/vmx.c | 14 ++++++++------ src/xen/xm_internal.c | 4 ++-- src/xenapi/xenapi_utils.c | 2 +- src/xenconfig/xen_common.c | 3 ++- src/xenconfig/xen_sxpr.c | 5 +++-- 21 files changed, 76 insertions(+), 62 deletions(-) diff --git a/src/conf/domain_audit.c b/src/conf/domain_audit.c index b842495..332d975 100644 --- a/src/conf/domain_audit.c +++ b/src/conf/domain_audit.c @@ -885,7 +885,7 @@ virDomainAuditStart(virDomainObjPtr vm, const char *reason, bool success) virDomainAuditMemory(vm, 0, virDomainDefGetMemoryActual(vm->def), "start", true); - virDomainAuditVcpu(vm, 0, vm->def->vcpus, "start", true); + virDomainAuditVcpu(vm, 0, virDomainDefGetVCpus(vm->def), "start", true); if (vm->def->niothreadids) virDomainAuditIOThread(vm, 0, vm->def->niothreadids, "start", true); diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c index 3062b3a..e8a3d10 100644 --- a/src/conf/domain_conf.c +++ b/src/conf/domain_conf.c @@ -1474,6 +1474,13 @@ virDomainDefSetVCpus(virDomainDefPtr def, } +unsigned int +virDomainDefGetVCpus(const virDomainDef *def) +{ + return def->vcpus; +} + + virDomainDiskDefPtr virDomainDiskDefNew(virDomainXMLOptionPtr xmlopt) { @@ -15200,7 +15207,7 @@ virDomainDefParseXML(xmlDocPtr xml, goto error; } - if (vcpupin->id >= def->vcpus) { + if (vcpupin->id >= virDomainDefGetVCpus(def)) { /* To avoid the regression when daemon loading * domain confs, we can't simply error out if * 
<vcpupin> nodes greater than current vcpus, @@ -15218,10 +15225,10 @@ virDomainDefParseXML(xmlDocPtr xml, * the policy specified explicitly as def->cpuset. */ if (def->cpumask) { - if (VIR_REALLOC_N(def->cputune.vcpupin, def->vcpus) < 0) + if (VIR_REALLOC_N(def->cputune.vcpupin, virDomainDefGetVCpus(def)) < 0) goto error; - for (i = 0; i < def->vcpus; i++) { + for (i = 0; i < virDomainDefGetVCpus(def); i++) { if (virDomainPinIsDuplicate(def->cputune.vcpupin, def->cputune.nvcpupin, i)) @@ -17858,10 +17865,10 @@ virDomainDefCheckABIStability(virDomainDefPtr src, goto error; } - if (src->vcpus != dst->vcpus) { + if (virDomainDefGetVCpus(src) != virDomainDefGetVCpus(dst)) { virReportError(VIR_ERR_CONFIG_UNSUPPORTED, _("Target domain vCPU count %d does not match source %d"), - dst->vcpus, src->vcpus); + virDomainDefGetVCpus(dst), virDomainDefGetVCpus(src)); goto error; } if (virDomainDefGetVCpusMax(src) != virDomainDefGetVCpusMax(dst)) { @@ -21826,7 +21833,7 @@ virDomainDefFormatInternal(virDomainDefPtr def, VIR_FREE(cpumask); } if (virDomainDefHasVCpusOffline(def)) - virBufferAsprintf(buf, " current='%u'", def->vcpus); + virBufferAsprintf(buf, " current='%u'", virDomainDefGetVCpus(def)); virBufferAsprintf(buf, ">%u</vcpu>\n", virDomainDefGetVCpusMax(def)); if (def->niothreadids > 0) { diff --git a/src/conf/domain_conf.h b/src/conf/domain_conf.h index 44f707f..0845b2b 100644 --- a/src/conf/domain_conf.h +++ b/src/conf/domain_conf.h @@ -2329,6 +2329,7 @@ int virDomainDefSetVCpusMax(virDomainDefPtr def, unsigned int vcpus); bool virDomainDefHasVCpusOffline(const virDomainDef *def); unsigned int virDomainDefGetVCpusMax(const virDomainDef *def); int virDomainDefSetVCpus(virDomainDefPtr def, unsigned int vcpus); +unsigned int virDomainDefGetVCpus(const virDomainDef *def); unsigned long long virDomainDefGetMemoryInitial(const virDomainDef *def); void virDomainDefSetMemoryTotal(virDomainDefPtr def, unsigned long long size); diff --git a/src/libvirt_private.syms 
b/src/libvirt_private.syms index b08c9c7..d2c4945 100644 --- a/src/libvirt_private.syms +++ b/src/libvirt_private.syms @@ -217,6 +217,7 @@ virDomainDefGetDefaultEmulator; virDomainDefGetMemoryActual; virDomainDefGetMemoryInitial; virDomainDefGetSecurityLabelDef; +virDomainDefGetVCpus; virDomainDefGetVCpusMax; virDomainDefHasDeviceAddress; virDomainDefHasMemoryHotplug; diff --git a/src/libxl/libxl_conf.c b/src/libxl/libxl_conf.c index 82ccb89..7600b7e 100644 --- a/src/libxl/libxl_conf.c +++ b/src/libxl/libxl_conf.c @@ -647,7 +647,7 @@ libxlMakeDomBuildInfo(virDomainDefPtr def, if (libxl_cpu_bitmap_alloc(ctx, &b_info->avail_vcpus, b_info->max_vcpus)) return -1; libxl_bitmap_set_none(&b_info->avail_vcpus); - for (i = 0; i < def->vcpus; i++) + for (i = 0; i < virDomainDefGetVCpus(def); i++) libxl_bitmap_set((&b_info->avail_vcpus), i); if (def->clock.ntimers > 0 && diff --git a/src/libxl/libxl_driver.c b/src/libxl/libxl_driver.c index 8b225a4..c8b2557 100644 --- a/src/libxl/libxl_driver.c +++ b/src/libxl/libxl_driver.c @@ -1601,7 +1601,7 @@ libxlDomainGetInfo(virDomainPtr dom, virDomainInfoPtr info) } info->state = virDomainObjGetState(vm, NULL); - info->nrVirtCpu = vm->def->vcpus; + info->nrVirtCpu = virDomainDefGetVCpus(vm->def); ret = 0; cleanup: @@ -2304,7 +2304,7 @@ libxlDomainGetVcpusFlags(virDomainPtr dom, unsigned int flags) if (flags & VIR_DOMAIN_VCPU_MAXIMUM) ret = virDomainDefGetVCpusMax(def); else - ret = def->vcpus; + ret = virDomainDefGetVCpus(def); cleanup: if (vm) @@ -2441,8 +2441,8 @@ libxlDomainGetVcpuPinInfo(virDomainPtr dom, int ncpumaps, sa_assert(targetDef); /* Clamp to actual number of vcpus */ - if (ncpumaps > targetDef->vcpus) - ncpumaps = targetDef->vcpus; + if (ncpumaps > virDomainDefGetVCpus(targetDef)) + ncpumaps = virDomainDefGetVCpus(targetDef); if ((hostcpus = libxl_get_max_cpus(cfg->ctx)) < 0) goto cleanup; diff --git a/src/lxc/lxc_controller.c b/src/lxc/lxc_controller.c index 3e5d2b4..b9500f4 100644 --- a/src/lxc/lxc_controller.c +++ 
b/src/lxc/lxc_controller.c @@ -771,7 +771,7 @@ static int virLXCControllerGetNumadAdvice(virLXCControllerPtr ctrl, * either <vcpu> or <numatune> is 'auto'. */ if (virDomainDefNeedsPlacementAdvice(ctrl->def)) { - nodeset = virNumaGetAutoPlacementAdvice(ctrl->def->vcpus, + nodeset = virNumaGetAutoPlacementAdvice(virDomainDefGetVCpus(ctrl->def), ctrl->def->mem.cur_balloon); if (!nodeset) goto cleanup; diff --git a/src/lxc/lxc_driver.c b/src/lxc/lxc_driver.c index 1a9550e..fc82b4c 100644 --- a/src/lxc/lxc_driver.c +++ b/src/lxc/lxc_driver.c @@ -617,7 +617,7 @@ static int lxcDomainGetInfo(virDomainPtr dom, } info->maxMem = virDomainDefGetMemoryActual(vm->def); - info->nrVirtCpu = vm->def->vcpus; + info->nrVirtCpu = virDomainDefGetVCpus(vm->def); ret = 0; cleanup: diff --git a/src/openvz/openvz_driver.c b/src/openvz/openvz_driver.c index 37c00b2..9146b1b 100644 --- a/src/openvz/openvz_driver.c +++ b/src/openvz/openvz_driver.c @@ -465,7 +465,7 @@ static int openvzDomainGetInfo(virDomainPtr dom, info->maxMem = virDomainDefGetMemoryActual(vm->def); info->memory = vm->def->mem.cur_balloon; - info->nrVirtCpu = vm->def->vcpus; + info->nrVirtCpu = virDomainDefGetVCpus(vm->def); ret = 0; cleanup: diff --git a/src/phyp/phyp_driver.c b/src/phyp/phyp_driver.c index a60b8b2..7bdb910 100644 --- a/src/phyp/phyp_driver.c +++ b/src/phyp/phyp_driver.c @@ -3525,11 +3525,12 @@ phypBuildLpar(virConnectPtr conn, virDomainDefPtr def) if (system_type == HMC) virBufferAsprintf(&buf, " -m %s", managed_system); virBufferAsprintf(&buf, " -r lpar -p %s -i min_mem=%lld,desired_mem=%lld," - "max_mem=%lld,desired_procs=%d,virtual_scsi_adapters=%s", + "max_mem=%lld,desired_procs=%u,virtual_scsi_adapters=%s", def->name, def->mem.cur_balloon, def->mem.cur_balloon, virDomainDefGetMemoryInitial(def), - (int) def->vcpus, virDomainDiskGetSource(def->disks[0])); + virDomainDefGetVCpus(def), + virDomainDiskGetSource(def->disks[0])); ret = phypExecBuffer(session, &buf, &exit_status, conn, false); if 
(exit_status < 0) { diff --git a/src/qemu/qemu_command.c b/src/qemu/qemu_command.c index 4a67361..b4eeb1d 100644 --- a/src/qemu/qemu_command.c +++ b/src/qemu/qemu_command.c @@ -7849,7 +7849,7 @@ qemuBuildSmpArgStr(const virDomainDef *def, { virBuffer buf = VIR_BUFFER_INITIALIZER; - virBufferAsprintf(&buf, "%u", def->vcpus); + virBufferAsprintf(&buf, "%u", virDomainDefGetVCpus(def)); if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_SMP_TOPOLOGY)) { if (virDomainDefHasVCpusOffline(def)) diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index 632ffb5..95b9ede 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -2659,7 +2659,7 @@ qemuDomainGetInfo(virDomainPtr dom, } } - if (VIR_ASSIGN_IS_OVERFLOW(info->nrVirtCpu, vm->def->vcpus)) { + if (VIR_ASSIGN_IS_OVERFLOW(info->nrVirtCpu, virDomainDefGetVCpus(vm->def))) { virReportError(VIR_ERR_OVERFLOW, "%s", _("cpu count too large")); goto cleanup; } @@ -4700,7 +4700,7 @@ qemuDomainHotplugVcpus(virQEMUDriverPtr driver, size_t i; int rc = 1; int ret = -1; - int oldvcpus = vm->def->vcpus; + int oldvcpus = virDomainDefGetVCpus(vm->def); int vcpus = oldvcpus; pid_t *cpupids = NULL; int ncpupids; @@ -4929,11 +4929,11 @@ qemuDomainSetVcpusFlags(virDomainPtr dom, unsigned int nvcpus, if (!qemuDomainAgentAvailable(vm, true)) goto endjob; - if (nvcpus > vm->def->vcpus) { + if (nvcpus > virDomainDefGetVCpus(vm->def)) { virReportError(VIR_ERR_INVALID_ARG, _("requested vcpu count is greater than the count " "of enabled vcpus in the domain: %d > %d"), - nvcpus, vm->def->vcpus); + nvcpus, virDomainDefGetVCpus(vm->def)); goto endjob; } @@ -4972,8 +4972,8 @@ qemuDomainSetVcpusFlags(virDomainPtr dom, unsigned int nvcpus, if (persistentDef) { /* remove vcpupin entries for vcpus that were unplugged */ - if (nvcpus < persistentDef->vcpus) { - for (i = persistentDef->vcpus - 1; i >= nvcpus; i--) + if (nvcpus < virDomainDefGetVCpus(persistentDef)) { + for (i = virDomainDefGetVCpus(persistentDef) - 1; i >= nvcpus; i--) 
virDomainPinDel(&persistentDef->cputune.vcpupin, &persistentDef->cputune.nvcpupin, i); @@ -5067,17 +5067,17 @@ qemuDomainPinVcpuFlags(virDomainPtr dom, priv = vm->privateData; - if (def && vcpu >= def->vcpus) { + if (def && vcpu >= virDomainDefGetVCpus(def)) { virReportError(VIR_ERR_INVALID_ARG, _("vcpu %d is out of range of live cpu count %d"), - vcpu, def->vcpus); + vcpu, virDomainDefGetVCpus(def)); goto endjob; } - if (persistentDef && vcpu >= persistentDef->vcpus) { + if (persistentDef && vcpu >= virDomainDefGetVCpus(persistentDef)) { virReportError(VIR_ERR_INVALID_ARG, _("vcpu %d is out of range of persistent cpu count %d"), - vcpu, persistentDef->vcpus); + vcpu, virDomainDefGetVCpus(persistentDef)); goto endjob; } @@ -5246,8 +5246,8 @@ qemuDomainGetVcpuPinInfo(virDomainPtr dom, priv = vm->privateData; /* Clamp to actual number of vcpus */ - if (ncpumaps > def->vcpus) - ncpumaps = def->vcpus; + if (ncpumaps > virDomainDefGetVCpus(def)) + ncpumaps = virDomainDefGetVCpus(def); if (ncpumaps < 1) goto cleanup; @@ -5561,7 +5561,7 @@ qemuDomainGetVcpusFlags(virDomainPtr dom, unsigned int flags) if (flags & VIR_DOMAIN_VCPU_MAXIMUM) ret = virDomainDefGetVCpusMax(def); else - ret = def->vcpus; + ret = virDomainDefGetVCpus(def); } @@ -10587,7 +10587,7 @@ qemuGetVcpusBWLive(virDomainObjPtr vm, goto cleanup; if (*quota > 0) - *quota /= vm->def->vcpus; + *quota /= virDomainDefGetVCpus(vm->def); goto out; } @@ -19073,7 +19073,7 @@ qemuDomainGetStatsVcpu(virQEMUDriverPtr driver ATTRIBUTE_UNUSED, &record->nparams, maxparams, "vcpu.current", - (unsigned) dom->def->vcpus) < 0) + virDomainDefGetVCpus(dom->def)) < 0) return -1; if (virTypedParamsAddUInt(&record->params, @@ -19083,17 +19083,17 @@ qemuDomainGetStatsVcpu(virQEMUDriverPtr driver ATTRIBUTE_UNUSED, virDomainDefGetVCpusMax(dom->def)) < 0) return -1; - if (VIR_ALLOC_N(cpuinfo, dom->def->vcpus) < 0) + if (VIR_ALLOC_N(cpuinfo, virDomainDefGetVCpus(dom->def)) < 0) return -1; - if (qemuDomainHelperGetVcpus(dom, cpuinfo, 
dom->def->vcpus, + if (qemuDomainHelperGetVcpus(dom, cpuinfo, virDomainDefGetVCpus(dom->def), NULL, 0) < 0) { virResetLastError(); ret = 0; /* it's ok to be silent and go ahead */ goto cleanup; } - for (i = 0; i < dom->def->vcpus; i++) { + for (i = 0; i < virDomainDefGetVCpus(dom->def); i++) { snprintf(param_name, VIR_TYPED_PARAM_FIELD_LENGTH, "vcpu.%zu.state", i); if (virTypedParamsAddInt(&record->params, diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c index 0706ee3..721647f 100644 --- a/src/qemu/qemu_process.c +++ b/src/qemu/qemu_process.c @@ -2072,11 +2072,11 @@ qemuProcessDetectVcpuPIDs(virQEMUDriverPtr driver, return 0; } - if (ncpupids != vm->def->vcpus) { + if (ncpupids != virDomainDefGetVCpus(vm->def)) { virReportError(VIR_ERR_INTERNAL_ERROR, _("got wrong number of vCPU pids from QEMU monitor. " "got %d, wanted %d"), - ncpupids, vm->def->vcpus); + ncpupids, virDomainDefGetVCpus(vm->def)); VIR_FREE(cpupids); return -1; } @@ -2292,7 +2292,7 @@ qemuProcessSetVcpuAffinities(virDomainObjPtr vm) int n; int ret = -1; VIR_DEBUG("Setting affinity on CPUs nvcpupin=%zu nvcpus=%d nvcpupids=%d", - def->cputune.nvcpupin, def->vcpus, priv->nvcpupids); + def->cputune.nvcpupin, virDomainDefGetVCpus(def), priv->nvcpupids); if (!def->cputune.nvcpupin) return 0; @@ -2311,7 +2311,7 @@ qemuProcessSetVcpuAffinities(virDomainObjPtr vm) return 0; } - for (n = 0; n < def->vcpus; n++) { + for (n = 0; n < virDomainDefGetVCpus(def); n++) { /* set affinity only for existing vcpus */ if (!(pininfo = virDomainPinFind(def->cputune.vcpupin, def->cputune.nvcpupin, @@ -4678,7 +4678,7 @@ int qemuProcessStart(virConnectPtr conn, * either <vcpu> or <numatune> is 'auto'. 
*/ if (virDomainDefNeedsPlacementAdvice(vm->def)) { - nodeset = virNumaGetAutoPlacementAdvice(vm->def->vcpus, + nodeset = virNumaGetAutoPlacementAdvice(virDomainDefGetVCpus(vm->def), virDomainDefGetMemoryActual(vm->def)); if (!nodeset) goto error; diff --git a/src/test/test_driver.c b/src/test/test_driver.c index 00f5c1e..ff28fcd 100644 --- a/src/test/test_driver.c +++ b/src/test/test_driver.c @@ -1927,7 +1927,7 @@ static int testDomainGetInfo(virDomainPtr domain, info->state = virDomainObjGetState(privdom, NULL); info->memory = privdom->def->mem.cur_balloon; info->maxMem = virDomainDefGetMemoryActual(privdom->def); - info->nrVirtCpu = privdom->def->vcpus; + info->nrVirtCpu = virDomainDefGetVCpus(privdom->def); info->cpuTime = ((tv.tv_sec * 1000ll * 1000ll * 1000ll) + (tv.tv_usec * 1000ll)); ret = 0; @@ -2315,7 +2315,7 @@ testDomainGetVcpusFlags(virDomainPtr domain, unsigned int flags) if (flags & VIR_DOMAIN_VCPU_MAXIMUM) ret = virDomainDefGetVCpusMax(def); else - ret = def->vcpus; + ret = virDomainDefGetVCpus(def); cleanup: virDomainObjEndAPI(&vm); @@ -2447,8 +2447,8 @@ static int testDomainGetVcpus(virDomainPtr domain, virBitmapSetAll(allcpumap); /* Clamp to actual number of vcpus */ - if (maxinfo > privdom->def->vcpus) - maxinfo = privdom->def->vcpus; + if (maxinfo > virDomainDefGetVCpus(privdom->def)) + maxinfo = virDomainDefGetVCpus(privdom->def); memset(info, 0, sizeof(*info) * maxinfo); memset(cpumaps, 0, maxinfo * maplen); @@ -2506,7 +2506,7 @@ static int testDomainPinVcpu(virDomainPtr domain, goto cleanup; } - if (vcpu > privdom->def->vcpus) { + if (vcpu > virDomainDefGetVCpus(privdom->def)) { virReportError(VIR_ERR_INVALID_ARG, "%s", _("requested vcpu is higher than allocated vcpus")); goto cleanup; @@ -2560,8 +2560,8 @@ testDomainGetVcpuPinInfo(virDomainPtr dom, virBitmapSetAll(allcpumap); /* Clamp to actual number of vcpus */ - if (ncpumaps > def->vcpus) - ncpumaps = def->vcpus; + if (ncpumaps > virDomainDefGetVCpus(def)) + ncpumaps = 
virDomainDefGetVCpus(def); for (vcpu = 0; vcpu < ncpumaps; vcpu++) { virDomainPinDefPtr pininfo; diff --git a/src/uml/uml_driver.c b/src/uml/uml_driver.c index 14598fc..aad4745 100644 --- a/src/uml/uml_driver.c +++ b/src/uml/uml_driver.c @@ -1916,7 +1916,7 @@ static int umlDomainGetInfo(virDomainPtr dom, info->maxMem = virDomainDefGetMemoryActual(vm->def); info->memory = vm->def->mem.cur_balloon; - info->nrVirtCpu = vm->def->vcpus; + info->nrVirtCpu = virDomainDefGetVCpus(vm->def); ret = 0; cleanup: diff --git a/src/vmware/vmware_driver.c b/src/vmware/vmware_driver.c index a12b03a..f793adc 100644 --- a/src/vmware/vmware_driver.c +++ b/src/vmware/vmware_driver.c @@ -1142,7 +1142,7 @@ vmwareDomainGetInfo(virDomainPtr dom, virDomainInfoPtr info) info->cpuTime = 0; info->maxMem = virDomainDefGetMemoryActual(vm->def); info->memory = vm->def->mem.cur_balloon; - info->nrVirtCpu = vm->def->vcpus; + info->nrVirtCpu = virDomainDefGetVCpus(vm->def); ret = 0; cleanup: diff --git a/src/vmx/vmx.c b/src/vmx/vmx.c index 62636a9..654e431 100644 --- a/src/vmx/vmx.c +++ b/src/vmx/vmx.c @@ -1539,13 +1539,14 @@ virVMXParseConfig(virVMXContext *ctx, } if (sched_cpu_shares != NULL) { + unsigned int vcpus = virDomainDefGetVCpus(def); /* See http://www.vmware.com/support/developer/vc-sdk/visdk41pubs/ApiReference/vim.... 
*/ if (STRCASEEQ(sched_cpu_shares, "low")) { - def->cputune.shares = def->vcpus * 500; + def->cputune.shares = vcpus * 500; } else if (STRCASEEQ(sched_cpu_shares, "normal")) { - def->cputune.shares = def->vcpus * 1000; + def->cputune.shares = vcpus * 1000; } else if (STRCASEEQ(sched_cpu_shares, "high")) { - def->cputune.shares = def->vcpus * 2000; + def->cputune.shares = vcpus * 2000; } else if (virStrToLong_ul(sched_cpu_shares, NULL, 10, &def->cputune.shares) < 0) { virReportError(VIR_ERR_INTERNAL_ERROR, @@ -3228,12 +3229,13 @@ virVMXFormatConfig(virVMXContext *ctx, virDomainXMLOptionPtr xmlopt, virDomainDe /* def:cputune.shares -> vmx:sched.cpu.shares */ if (def->cputune.sharesSpecified) { + unsigned int vcpus = virDomainDefGetVCpus(def); /* See http://www.vmware.com/support/developer/vc-sdk/visdk41pubs/ApiReference/vim.... */ - if (def->cputune.shares == def->vcpus * 500) { + if (def->cputune.shares == vcpus * 500) { virBufferAddLit(&buffer, "sched.cpu.shares = \"low\"\n"); - } else if (def->cputune.shares == def->vcpus * 1000) { + } else if (def->cputune.shares == vcpus * 1000) { virBufferAddLit(&buffer, "sched.cpu.shares = \"normal\"\n"); - } else if (def->cputune.shares == def->vcpus * 2000) { + } else if (def->cputune.shares == vcpus * 2000) { virBufferAddLit(&buffer, "sched.cpu.shares = \"high\"\n"); } else { virBufferAsprintf(&buffer, "sched.cpu.shares = \"%lu\"\n", diff --git a/src/xen/xm_internal.c b/src/xen/xm_internal.c index 374cc41..7e227bc 100644 --- a/src/xen/xm_internal.c +++ b/src/xen/xm_internal.c @@ -483,7 +483,7 @@ xenXMDomainGetInfo(virConnectPtr conn, memset(info, 0, sizeof(virDomainInfo)); info->maxMem = virDomainDefGetMemoryActual(entry->def); info->memory = entry->def->mem.cur_balloon; - info->nrVirtCpu = entry->def->vcpus; + info->nrVirtCpu = virDomainDefGetVCpus(entry->def); info->state = VIR_DOMAIN_SHUTOFF; info->cpuTime = 0; @@ -765,7 +765,7 @@ xenXMDomainGetVcpusFlags(virConnectPtr conn, if (flags & VIR_DOMAIN_VCPU_MAXIMUM) ret = 
virDomainDefGetVCpusMax(entry->def); else - ret = entry->def->vcpus; + ret = virDomainDefGetVCpus(entry->def); cleanup: xenUnifiedUnlock(priv); diff --git a/src/xenapi/xenapi_utils.c b/src/xenapi/xenapi_utils.c index d40f959..6f33e8a 100644 --- a/src/xenapi/xenapi_utils.c +++ b/src/xenapi/xenapi_utils.c @@ -505,7 +505,7 @@ createVMRecordFromXml(virConnectPtr conn, virDomainDefPtr def, (*record)->memory_dynamic_max = (*record)->memory_static_max; (*record)->vcpus_max = (int64_t) virDomainDefGetVCpusMax(def); - (*record)->vcpus_at_startup = (int64_t) def->vcpus; + (*record)->vcpus_at_startup = (int64_t) virDomainDefGetVCpus(def); if (def->onPoweroff) (*record)->actions_after_shutdown = actionShutdownLibvirt2XenapiEnum(def->onPoweroff); if (def->onReboot) diff --git a/src/xenconfig/xen_common.c b/src/xenconfig/xen_common.c index d617773..cbde572 100644 --- a/src/xenconfig/xen_common.c +++ b/src/xenconfig/xen_common.c @@ -1535,7 +1535,8 @@ xenFormatCPUAllocation(virConfPtr conf, virDomainDefPtr def) /* Computing the vcpu_avail bitmask works because MAX_VIRT_CPUS is either 32, or 64 on a platform where long is big enough. */ if (virDomainDefHasVCpusOffline(def) && - xenConfigSetInt(conf, "vcpu_avail", (1UL << def->vcpus) - 1) < 0) + xenConfigSetInt(conf, "vcpu_avail", + (1UL << virDomainDefGetVCpus(def)) - 1) < 0) goto cleanup; if ((def->cpumask != NULL) && diff --git a/src/xenconfig/xen_sxpr.c b/src/xenconfig/xen_sxpr.c index 534130e..32c5e08 100644 --- a/src/xenconfig/xen_sxpr.c +++ b/src/xenconfig/xen_sxpr.c @@ -2231,7 +2231,8 @@ xenFormatSxpr(virConnectPtr conn, /* Computing the vcpu_avail bitmask works because MAX_VIRT_CPUS is either 32, or 64 on a platform where long is big enough. 
*/ if (virDomainDefHasVCpusOffline(def)) - virBufferAsprintf(&buf, "(vcpu_avail %lu)", (1UL << def->vcpus) - 1); + virBufferAsprintf(&buf, "(vcpu_avail %lu)", + (1UL << virDomainDefGetVCpus(def)) - 1); if (def->cpumask) { char *ranges = virBitmapFormat(def->cpumask); @@ -2314,7 +2315,7 @@ xenFormatSxpr(virConnectPtr conn, virBufferAsprintf(&buf, "(vcpus %u)", virDomainDefGetVCpusMax(def)); if (virDomainDefHasVCpusOffline(def)) virBufferAsprintf(&buf, "(vcpu_avail %lu)", - (1UL << def->vcpus) - 1); + (1UL << virDomainDefGetVCpus(def)) - 1); for (i = 0; i < def->os.nBootDevs; i++) { switch (def->os.bootDevs[i]) { -- 2.6.2

On 11/20/2015 10:22 AM, Peter Krempa wrote:
--- src/conf/domain_audit.c | 2 +- src/conf/domain_conf.c | 19 +++++++++++++------ src/conf/domain_conf.h | 1 + src/libvirt_private.syms | 1 + src/libxl/libxl_conf.c | 2 +- src/libxl/libxl_driver.c | 8 ++++---- src/lxc/lxc_controller.c | 2 +- src/lxc/lxc_driver.c | 2 +- src/openvz/openvz_driver.c | 2 +- src/phyp/phyp_driver.c | 5 +++-- src/qemu/qemu_command.c | 2 +- src/qemu/qemu_driver.c | 36 ++++++++++++++++++------------------ src/qemu/qemu_process.c | 10 +++++----- src/test/test_driver.c | 14 +++++++------- src/uml/uml_driver.c | 2 +- src/vmware/vmware_driver.c | 2 +- src/vmx/vmx.c | 14 ++++++++------ src/xen/xm_internal.c | 4 ++-- src/xenapi/xenapi_utils.c | 2 +- src/xenconfig/xen_common.c | 3 ++- src/xenconfig/xen_sxpr.c | 5 +++-- 21 files changed, 76 insertions(+), 62 deletions(-)
Again, change the name from "VCpus" to "Vcpus" (or "VCPUs").

Using cscope - after this patch the following still access ->vcpus:

virBhyveProcessBuildBhyveCmd
bhyveDomainGetInfo
virDomainDefSetVCpusMax
virDomainDefHasVCpusOffline
vzDomainGetInfo (twice)
vzDomainGetVcpusFlags
prlsdkCheckUnsupportedParams
prlsdkDoApplyConfig

Here too I'll try to remember to flag non-accessor changes if I see them in future patches (although I suspect the adjustment there could be ugly). Similar to my earlier comment about maxvcpus: if some syntax-check rule could be added to catch direct access outside of domain_conf.c, that would be great.

ACK w/ the name adjustment and use of the accessor in the listed functions.

John

Later on this will also be used to track the size of the vcpu data array. Use size_t so that we can utilize the memory allocation helpers.
---
 src/conf/domain_conf.c | 2 +-
 src/conf/domain_conf.h | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index e8a3d10..897b643 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -1463,7 +1463,7 @@ virDomainDefSetVCpus(virDomainDefPtr def,
 {
     if (vcpus > def->maxvcpus) {
         virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
-                       _("maxvcpus must not be less than current vcpus (%u < %u)"),
+                       _("maxvcpus must not be less than current vcpus (%u < %zu)"),
                        vcpus, def->maxvcpus);
         return -1;
     }
diff --git a/src/conf/domain_conf.h b/src/conf/domain_conf.h
index 0845b2b..3490f02 100644
--- a/src/conf/domain_conf.h
+++ b/src/conf/domain_conf.h
@@ -2203,7 +2203,7 @@ struct _virDomainDef {
     virDomainMemtune mem;
 
     unsigned int vcpus;
-    unsigned int maxvcpus;
+    size_t maxvcpus;
     int placement_mode;
     virBitmapPtr cpumask;
-- 
2.6.2

As in commit 88dc7e0c2fb, the helper can be used in cases where the function actually does not access anything in the private data besides the agent.
---
 src/qemu/qemu_domain.c | 12 ++++++++++++
 src/qemu/qemu_domain.h | 1 +
 2 files changed, 13 insertions(+)

diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index 0861bfd..4913a3b 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -1840,6 +1840,18 @@ qemuDomainObjEnterMonitorAsync(virQEMUDriverPtr driver,
 }
 
 
+/**
+ * qemuDomainGetAgent:
+ * @vm: domain object
+ *
+ * Returns the agent pointer of @vm;
+ */
+qemuAgentPtr
+qemuDomainGetAgent(virDomainObjPtr vm)
+{
+    return (((qemuDomainObjPrivatePtr)(vm->privateData))->agent);
+}
+
+
 /*
  * obj must be locked before calling
diff --git a/src/qemu/qemu_domain.h b/src/qemu/qemu_domain.h
index 8b6b1a3..03cf6ef 100644
--- a/src/qemu/qemu_domain.h
+++ b/src/qemu/qemu_domain.h
@@ -295,6 +295,7 @@ int qemuDomainObjEnterMonitorAsync(virQEMUDriverPtr driver,
     ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2)
     ATTRIBUTE_RETURN_CHECK;
 
+qemuAgentPtr qemuDomainGetAgent(virDomainObjPtr vm);
 void qemuDomainObjEnterAgent(virDomainObjPtr obj)
     ATTRIBUTE_NONNULL(1);
 void qemuDomainObjExitAgent(virDomainObjPtr obj)
-- 
2.6.2

Separate the code so that qemuDomainSetVcpusFlags contains only code relevant to hardware hotplug/unplug. --- src/qemu/qemu_driver.c | 137 +++++++++++++++++++++++++++---------------------- 1 file changed, 77 insertions(+), 60 deletions(-) diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index 95b9ede..ab22c65 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -4853,6 +4853,59 @@ qemuDomainHotplugVcpus(virQEMUDriverPtr driver, static int +qemuDomainSetVcpusAgent(virDomainObjPtr vm, + unsigned int nvcpus) +{ + qemuAgentCPUInfoPtr cpuinfo = NULL; + int ncpuinfo; + int ret = -1; + + if (!qemuDomainAgentAvailable(vm, true)) + goto cleanup; + + if (nvcpus > virDomainDefGetVCpus(vm->def)) { + virReportError(VIR_ERR_INVALID_ARG, + _("requested vcpu count is greater than the count " + "of enabled vcpus in the domain: %d > %d"), + nvcpus, virDomainDefGetVCpus(vm->def)); + goto cleanup; + } + + qemuDomainObjEnterAgent(vm); + ncpuinfo = qemuAgentGetVCPUs(qemuDomainGetAgent(vm), &cpuinfo); + qemuDomainObjExitAgent(vm); + + if (ncpuinfo < 0) + goto cleanup; + + if (qemuAgentUpdateCPUInfo(nvcpus, cpuinfo, ncpuinfo) < 0) + goto cleanup; + + qemuDomainObjEnterAgent(vm); + ret = qemuAgentSetVCPUs(qemuDomainGetAgent(vm), cpuinfo, ncpuinfo); + qemuDomainObjExitAgent(vm); + + if (ret < 0) + goto cleanup; + + if (ret < ncpuinfo) { + virReportError(VIR_ERR_INTERNAL_ERROR, + _("failed to set state of cpu %d via guest agent"), + cpuinfo[ret-1].id); + ret = -1; + goto cleanup; + } + + ret = 0; + + cleanup: + VIR_FREE(cpuinfo); + + return ret; +} + + +static int qemuDomainSetVcpusFlags(virDomainPtr dom, unsigned int nvcpus, unsigned int flags) { @@ -4863,8 +4916,6 @@ qemuDomainSetVcpusFlags(virDomainPtr dom, unsigned int nvcpus, int ret = -1; unsigned int maxvcpus = 0; virQEMUDriverConfigPtr cfg = NULL; - qemuAgentCPUInfoPtr cpuinfo = NULL; - int ncpuinfo; qemuDomainObjPrivatePtr priv; size_t i; virCgroupPtr cgroup_temp = NULL; @@ -4891,10 +4942,15 @@ 
qemuDomainSetVcpusFlags(virDomainPtr dom, unsigned int nvcpus, if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) goto cleanup; + if (flags & VIR_DOMAIN_VCPU_GUEST) { + ret = qemuDomainSetVcpusAgent(vm, nvcpus); + goto endjob; + } + if (virDomainObjGetDefs(vm, flags, &def, &persistentDef) < 0) goto endjob; - if (def && !(flags & VIR_DOMAIN_VCPU_GUEST) && virNumaIsAvailable() && + if (def && virNumaIsAvailable() && virCgroupHasController(priv->cgroup, VIR_CGROUP_CONTROLLER_CPUSET)) { if (virCgroupNewThread(priv->cgroup, VIR_CGROUP_THREAD_EMULATOR, 0, false, &cgroup_temp) < 0) @@ -4925,71 +4981,33 @@ qemuDomainSetVcpusFlags(virDomainPtr dom, unsigned int nvcpus, goto endjob; } - if (flags & VIR_DOMAIN_VCPU_GUEST) { - if (!qemuDomainAgentAvailable(vm, true)) - goto endjob; - - if (nvcpus > virDomainDefGetVCpus(vm->def)) { - virReportError(VIR_ERR_INVALID_ARG, - _("requested vcpu count is greater than the count " - "of enabled vcpus in the domain: %d > %d"), - nvcpus, virDomainDefGetVCpus(vm->def)); - goto endjob; - } - - qemuDomainObjEnterAgent(vm); - ncpuinfo = qemuAgentGetVCPUs(priv->agent, &cpuinfo); - qemuDomainObjExitAgent(vm); - - if (ncpuinfo < 0) - goto endjob; - - if (qemuAgentUpdateCPUInfo(nvcpus, cpuinfo, ncpuinfo) < 0) + if (def) { + if (qemuDomainHotplugVcpus(driver, vm, nvcpus) < 0) goto endjob; - qemuDomainObjEnterAgent(vm); - ret = qemuAgentSetVCPUs(priv->agent, cpuinfo, ncpuinfo); - qemuDomainObjExitAgent(vm); - - if (ret < 0) + if (virDomainSaveStatus(driver->xmlopt, cfg->stateDir, vm) < 0) goto endjob; + } - if (ret < ncpuinfo) { - virReportError(VIR_ERR_INTERNAL_ERROR, - _("failed to set state of cpu %d via guest agent"), - cpuinfo[ret-1].id); - ret = -1; - goto endjob; + if (persistentDef) { + /* remove vcpupin entries for vcpus that were unplugged */ + if (nvcpus < virDomainDefGetVCpus(persistentDef)) { + for (i = virDomainDefGetVCpus(persistentDef) - 1; i >= nvcpus; i--) + virDomainPinDel(&persistentDef->cputune.vcpupin, + 
&persistentDef->cputune.nvcpupin, + i); } - } else { - if (def) { - if (qemuDomainHotplugVcpus(driver, vm, nvcpus) < 0) - goto endjob; - if (virDomainSaveStatus(driver->xmlopt, cfg->stateDir, vm) < 0) + if (flags & VIR_DOMAIN_VCPU_MAXIMUM) { + if (virDomainDefSetVCpusMax(persistentDef, nvcpus) < 0) goto endjob; - } - - if (persistentDef) { - /* remove vcpupin entries for vcpus that were unplugged */ - if (nvcpus < virDomainDefGetVCpus(persistentDef)) { - for (i = virDomainDefGetVCpus(persistentDef) - 1; i >= nvcpus; i--) - virDomainPinDel(&persistentDef->cputune.vcpupin, - &persistentDef->cputune.nvcpupin, - i); - } - - if (flags & VIR_DOMAIN_VCPU_MAXIMUM) { - if (virDomainDefSetVCpusMax(persistentDef, nvcpus) < 0) - goto endjob; - } else { - if (virDomainDefSetVCpus(persistentDef, nvcpus) < 0) - goto endjob; - } - - if (virDomainSaveConfig(cfg->configDir, persistentDef) < 0) + } else { + if (virDomainDefSetVCpus(persistentDef, nvcpus) < 0) goto endjob; } + + if (virDomainSaveConfig(cfg->configDir, persistentDef) < 0) + goto endjob; } ret = 0; @@ -5006,7 +5024,6 @@ qemuDomainSetVcpusFlags(virDomainPtr dom, unsigned int nvcpus, cleanup: virDomainObjEndAPI(&vm); - VIR_FREE(cpuinfo); VIR_FREE(mem_mask); VIR_FREE(all_nodes_str); virBitmapFree(all_nodes); -- 2.6.2

On 11/20/2015 10:22 AM, Peter Krempa wrote:
Separate the code so that qemuDomainSetVcpusFlags contains only code relevant to hardware hotplug/unplug. --- src/qemu/qemu_driver.c | 137 +++++++++++++++++++++++++++---------------------- 1 file changed, 77 insertions(+), 60 deletions(-)
ACK 15, 16
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index 95b9ede..ab22c65 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -4853,6 +4853,59 @@ qemuDomainHotplugVcpus(virQEMUDriverPtr driver,
This function relies on qemuAgentUpdateCPUInfo to perform the maxvcpus checks that would otherwise not be made after refactoring this code here. Perhaps something worth noting in the commit message (at least that's my assumption based on reading the code). ACK - John
static int +qemuDomainSetVcpusAgent(virDomainObjPtr vm, + unsigned int nvcpus) +{
[...]

With very unfortunate timing, the agent might vanish before we do the second call while the locks were down. Re-check that the agent is available before attempting it again.
---
 src/qemu/qemu_driver.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index ab22c65..72879cf 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -4881,6 +4881,9 @@ qemuDomainSetVcpusAgent(virDomainObjPtr vm,
     if (qemuAgentUpdateCPUInfo(nvcpus, cpuinfo, ncpuinfo) < 0)
         goto cleanup;
 
+    if (!qemuDomainAgentAvailable(vm, true))
+        goto cleanup;
+
     qemuDomainObjEnterAgent(vm);
     ret = qemuAgentSetVCPUs(qemuDomainGetAgent(vm), cpuinfo, ncpuinfo);
     qemuDomainObjExitAgent(vm);
-- 
2.6.2

On 11/20/2015 10:22 AM, Peter Krempa wrote:
With very unfortunate timing, the agent might vanish before we do the second call while the locks were down. Re-check that the agent is available before attempting it again. --- src/qemu/qemu_driver.c | 3 +++ 1 file changed, 3 insertions(+)
ACK John

There's only very little common code among the two operations. Split the functions so that the internals are easier to understand and refactor later. --- src/qemu/qemu_driver.c | 210 ++++++++++++++++++++++++++++++++----------------- 1 file changed, 136 insertions(+), 74 deletions(-) diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index 72879cf..a483220 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -4710,31 +4710,15 @@ qemuDomainHotplugVcpus(virQEMUDriverPtr driver, qemuDomainObjEnterMonitor(driver, vm); - /* We need different branches here, because we want to offline - * in reverse order to onlining, so any partial fail leaves us in a - * reasonably sensible state */ - if (nvcpus > vcpus) { - for (i = vcpus; i < nvcpus; i++) { - /* Online new CPU */ - rc = qemuMonitorSetCPU(priv->mon, i, true); - if (rc == 0) - goto unsupported; - if (rc < 0) - goto exit_monitor; - - vcpus++; - } - } else { - for (i = vcpus - 1; i >= nvcpus; i--) { - /* Offline old CPU */ - rc = qemuMonitorSetCPU(priv->mon, i, false); - if (rc == 0) - goto unsupported; - if (rc < 0) - goto exit_monitor; + for (i = vcpus; i < nvcpus; i++) { + /* Online new CPU */ + rc = qemuMonitorSetCPU(priv->mon, i, true); + if (rc == 0) + goto unsupported; + if (rc < 0) + goto exit_monitor; - vcpus--; - } + vcpus++; } /* hotplug succeeded */ @@ -4755,15 +4739,6 @@ qemuDomainHotplugVcpus(virQEMUDriverPtr driver, goto cleanup; } - /* check if hotplug has failed */ - if (vcpus < oldvcpus && ncpupids == oldvcpus) { - virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s", - _("qemu didn't unplug the vCPUs properly")); - vcpus = oldvcpus; - ret = -1; - goto cleanup; - } - if (ncpupids != vcpus) { virReportError(VIR_ERR_INTERNAL_ERROR, _("got wrong number of vCPU pids from QEMU monitor. 
" @@ -4781,50 +4756,37 @@ qemuDomainHotplugVcpus(virQEMUDriverPtr driver, &mem_mask, -1) < 0) goto cleanup; - if (nvcpus > oldvcpus) { - for (i = oldvcpus; i < nvcpus; i++) { - if (priv->cgroup) { - cgroup_vcpu = - qemuDomainAddCgroupForThread(priv->cgroup, - VIR_CGROUP_THREAD_VCPU, - i, mem_mask, - cpupids[i]); - if (!cgroup_vcpu) - goto cleanup; - } + for (i = oldvcpus; i < nvcpus; i++) { + if (priv->cgroup) { + cgroup_vcpu = + qemuDomainAddCgroupForThread(priv->cgroup, + VIR_CGROUP_THREAD_VCPU, + i, mem_mask, + cpupids[i]); + if (!cgroup_vcpu) + goto cleanup; + } - /* Inherit def->cpuset */ - if (vm->def->cpumask) { - if (qemuDomainHotplugAddPin(vm->def->cpumask, i, - &vm->def->cputune.vcpupin, - &vm->def->cputune.nvcpupin) < 0) { - ret = -1; - goto cleanup; - } - if (qemuDomainHotplugPinThread(vm->def->cpumask, i, cpupids[i], - cgroup_vcpu) < 0) { - ret = -1; - goto cleanup; - } + /* Inherit def->cpuset */ + if (vm->def->cpumask) { + if (qemuDomainHotplugAddPin(vm->def->cpumask, i, + &vm->def->cputune.vcpupin, + &vm->def->cputune.nvcpupin) < 0) { + ret = -1; + goto cleanup; } - virCgroupFree(&cgroup_vcpu); - - if (qemuProcessSetSchedParams(i, cpupids[i], - vm->def->cputune.nvcpusched, - vm->def->cputune.vcpusched) < 0) + if (qemuDomainHotplugPinThread(vm->def->cpumask, i, cpupids[i], + cgroup_vcpu) < 0) { + ret = -1; goto cleanup; + } } - } else { - for (i = oldvcpus - 1; i >= nvcpus; i--) { - if (qemuDomainDelCgroupForThread(priv->cgroup, - VIR_CGROUP_THREAD_VCPU, i) < 0) - goto cleanup; + virCgroupFree(&cgroup_vcpu); - /* Free vcpupin setting */ - virDomainPinDel(&vm->def->cputune.vcpupin, - &vm->def->cputune.nvcpupin, - i); - } + if (qemuProcessSetSchedParams(i, cpupids[i], + vm->def->cputune.nvcpusched, + vm->def->cputune.vcpusched) < 0) + goto cleanup; } priv->nvcpupids = ncpupids; @@ -4853,6 +4815,101 @@ qemuDomainHotplugVcpus(virQEMUDriverPtr driver, static int +qemuDomainHotunplugVcpus(virQEMUDriverPtr driver, + virDomainObjPtr vm, + unsigned int 
nvcpus) +{ + qemuDomainObjPrivatePtr priv = vm->privateData; + size_t i; + int rc = 1; + int ret = -1; + int oldvcpus = virDomainDefGetVCpus(vm->def); + int vcpus = oldvcpus; + pid_t *cpupids = NULL; + int ncpupids; + + qemuDomainObjEnterMonitor(driver, vm); + + for (i = vcpus - 1; i >= nvcpus; i--) { + /* Offline old CPU */ + rc = qemuMonitorSetCPU(priv->mon, i, false); + if (rc == 0) + goto unsupported; + if (rc < 0) + goto exit_monitor; + + vcpus--; + } + + ret = 0; + + /* After hotplugging the CPUs we need to re-detect threads corresponding + * to the virtual CPUs. Some older versions don't provide the thread ID + * or don't have the "info cpus" command (and they don't support multiple + * CPUs anyways), so errors in the re-detection will not be treated + * fatal */ + if ((ncpupids = qemuMonitorGetCPUInfo(priv->mon, &cpupids)) <= 0) { + virResetLastError(); + goto exit_monitor; + } + if (qemuDomainObjExitMonitor(driver, vm) < 0) { + ret = -1; + goto cleanup; + } + + /* check if hotunplug has failed */ + if (ncpupids == oldvcpus) { + virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s", + _("qemu didn't unplug the vCPUs properly")); + vcpus = oldvcpus; + ret = -1; + goto cleanup; + } + + if (ncpupids != vcpus) { + virReportError(VIR_ERR_INTERNAL_ERROR, + _("got wrong number of vCPU pids from QEMU monitor. 
" + "got %d, wanted %d"), + ncpupids, vcpus); + vcpus = oldvcpus; + ret = -1; + goto cleanup; + } + + for (i = oldvcpus - 1; i >= nvcpus; i--) { + if (qemuDomainDelCgroupForThread(priv->cgroup, + VIR_CGROUP_THREAD_VCPU, i) < 0) + goto cleanup; + + /* Free vcpupin setting */ + virDomainPinDel(&vm->def->cputune.vcpupin, + &vm->def->cputune.nvcpupin, + i); + } + + priv->nvcpupids = ncpupids; + VIR_FREE(priv->vcpupids); + priv->vcpupids = cpupids; + cpupids = NULL; + + cleanup: + VIR_FREE(cpupids); + if (virDomainObjIsActive(vm) && + virDomainDefSetVCpus(vm->def, vcpus) < 0) + ret = -1; + virDomainAuditVcpu(vm, oldvcpus, nvcpus, "update", rc == 1); + return ret; + + unsupported: + virReportError(VIR_ERR_INTERNAL_ERROR, "%s", + _("cannot change vcpu count of this domain")); + exit_monitor: + ignore_value(qemuDomainObjExitMonitor(driver, vm)); + goto cleanup; +} + + +static int qemuDomainSetVcpusAgent(virDomainObjPtr vm, unsigned int nvcpus) { @@ -4985,8 +5042,13 @@ qemuDomainSetVcpusFlags(virDomainPtr dom, unsigned int nvcpus, } if (def) { - if (qemuDomainHotplugVcpus(driver, vm, nvcpus) < 0) - goto endjob; + if (nvcpus > virDomainDefGetVCpus(def)) { + if (qemuDomainHotplugVcpus(driver, vm, nvcpus) < 0) + goto endjob; + } else { + if (qemuDomainHotunplugVcpus(driver, vm, nvcpus) < 0) + goto endjob; + } if (virDomainSaveStatus(driver->xmlopt, cfg->stateDir, vm) < 0) goto endjob; -- 2.6.2

On 11/20/2015 10:22 AM, Peter Krempa wrote:
There's only very little common code among the two operations. Split the functions so that the internals are easier to understand and refactor later. --- src/qemu/qemu_driver.c | 210 ++++++++++++++++++++++++++++++++----------------- 1 file changed, 136 insertions(+), 74 deletions(-)
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index 72879cf..a483220 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -4710,31 +4710,15 @@ qemuDomainHotplugVcpus(virQEMUDriverPtr driver,
qemuDomainObjEnterMonitor(driver, vm);
- /* We need different branches here, because we want to offline - * in reverse order to onlining, so any partial fail leaves us in a - * reasonably sensible state */
I originally thought it might be nice to carry this comment over into the unplug function - just to understand why going in reverse order was chosen, but I see that eventually becomes irrelevant.
- if (nvcpus > vcpus) { - for (i = vcpus; i < nvcpus; i++) { - /* Online new CPU */ - rc = qemuMonitorSetCPU(priv->mon, i, true); - if (rc == 0) - goto unsupported; - if (rc < 0) - goto exit_monitor; - - vcpus++; - } - } else { - for (i = vcpus - 1; i >= nvcpus; i--) { - /* Offline old CPU */ - rc = qemuMonitorSetCPU(priv->mon, i, false); - if (rc == 0) - goto unsupported; - if (rc < 0) - goto exit_monitor; + for (i = vcpus; i < nvcpus; i++) { + /* Online new CPU */ + rc = qemuMonitorSetCPU(priv->mon, i, true); + if (rc == 0) + goto unsupported; + if (rc < 0) + goto exit_monitor;
- vcpus--; - } + vcpus++; }
/* hotplug succeeded */ @@ -4755,15 +4739,6 @@ qemuDomainHotplugVcpus(virQEMUDriverPtr driver, goto cleanup; }
- /* check if hotplug has failed */ - if (vcpus < oldvcpus && ncpupids == oldvcpus) { - virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s", - _("qemu didn't unplug the vCPUs properly")); - vcpus = oldvcpus; - ret = -1; - goto cleanup; - } - if (ncpupids != vcpus) { virReportError(VIR_ERR_INTERNAL_ERROR, _("got wrong number of vCPU pids from QEMU monitor. " @@ -4781,50 +4756,37 @@ qemuDomainHotplugVcpus(virQEMUDriverPtr driver, &mem_mask, -1) < 0) goto cleanup;
- if (nvcpus > oldvcpus) { - for (i = oldvcpus; i < nvcpus; i++) { - if (priv->cgroup) { - cgroup_vcpu = - qemuDomainAddCgroupForThread(priv->cgroup, - VIR_CGROUP_THREAD_VCPU, - i, mem_mask, - cpupids[i]); - if (!cgroup_vcpu) - goto cleanup; - }
Good thing I peeked ahead one patch ;-) I was going to comment on the ret = -1; logic, especially w/r/t how [n]vcpupids is handled.
+    for (i = oldvcpus; i < nvcpus; i++) {
+        if (priv->cgroup) {
+            cgroup_vcpu =
+                qemuDomainAddCgroupForThread(priv->cgroup,
+                                             VIR_CGROUP_THREAD_VCPU,
+                                             i, mem_mask,
+                                             cpupids[i]);
+            if (!cgroup_vcpu)
+                goto cleanup;
+        }

-        /* Inherit def->cpuset */
-        if (vm->def->cpumask) {
-            if (qemuDomainHotplugAddPin(vm->def->cpumask, i,
-                                        &vm->def->cputune.vcpupin,
-                                        &vm->def->cputune.nvcpupin) < 0) {
-                ret = -1;
-                goto cleanup;
-            }
-            if (qemuDomainHotplugPinThread(vm->def->cpumask, i, cpupids[i],
-                                           cgroup_vcpu) < 0) {
-                ret = -1;
-                goto cleanup;
-            }
+        /* Inherit def->cpuset */
+        if (vm->def->cpumask) {
+            if (qemuDomainHotplugAddPin(vm->def->cpumask, i,
+                                        &vm->def->cputune.vcpupin,
+                                        &vm->def->cputune.nvcpupin) < 0) {
+                ret = -1;
+                goto cleanup;
             }
-            virCgroupFree(&cgroup_vcpu);
-
-            if (qemuProcessSetSchedParams(i, cpupids[i],
-                                          vm->def->cputune.nvcpusched,
-                                          vm->def->cputune.vcpusched) < 0)
+            if (qemuDomainHotplugPinThread(vm->def->cpumask, i, cpupids[i],
+                                           cgroup_vcpu) < 0) {
+                ret = -1;
                 goto cleanup;
+            }
         }
-    } else {
-        for (i = oldvcpus - 1; i >= nvcpus; i--) {
-            if (qemuDomainDelCgroupForThread(priv->cgroup,
-                                             VIR_CGROUP_THREAD_VCPU, i) < 0)
-                goto cleanup;
+        virCgroupFree(&cgroup_vcpu);

-            /* Free vcpupin setting */
-            virDomainPinDel(&vm->def->cputune.vcpupin,
-                            &vm->def->cputune.nvcpupin,
-                            i);
-        }
+        if (qemuProcessSetSchedParams(i, cpupids[i],
+                                      vm->def->cputune.nvcpusched,
+                                      vm->def->cputune.vcpusched) < 0)
+            goto cleanup;
     }
     priv->nvcpupids = ncpupids;
@@ -4853,6 +4815,101 @@ qemuDomainHotplugVcpus(virQEMUDriverPtr driver,
 static int
+qemuDomainHotunplugVcpus(virQEMUDriverPtr driver,
+                         virDomainObjPtr vm,
+                         unsigned int nvcpus)
+{
+    qemuDomainObjPrivatePtr priv = vm->privateData;
+    size_t i;
+    int rc = 1;
+    int ret = -1;
+    int oldvcpus = virDomainDefGetVCpus(vm->def);
+    int vcpus = oldvcpus;
+    pid_t *cpupids = NULL;
+    int ncpupids;
+
+    qemuDomainObjEnterMonitor(driver, vm);
+
+    for (i = vcpus - 1; i >= nvcpus; i--) {
+        /* Offline old CPU */
+        rc = qemuMonitorSetCPU(priv->mon, i, false);
+        if (rc == 0)
+            goto unsupported;
+        if (rc < 0)
+            goto exit_monitor;
+
+        vcpus--;
+    }
+
+    ret = 0;
+
+    /* After hotplugging the CPUs we need to re-detect threads corresponding
+     * to the virtual CPUs. Some older versions don't provide the thread ID
+     * or don't have the "info cpus" command (and they don't support multiple
+     * CPUs anyways), so errors in the re-detection will not be treated
+     * fatal */
+    if ((ncpupids = qemuMonitorGetCPUInfo(priv->mon, &cpupids)) <= 0) {
+        virResetLastError();
+        goto exit_monitor;
+    }
+    if (qemuDomainObjExitMonitor(driver, vm) < 0) {
+        ret = -1;
+        goto cleanup;
+    }
+
+    /* check if hotunplug has failed */
+    if (ncpupids == oldvcpus) {
+        virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s",
+                       _("qemu didn't unplug the vCPUs properly"));
+        vcpus = oldvcpus;
+        ret = -1;
+        goto cleanup;
+    }
+
+    if (ncpupids != vcpus) {
+        virReportError(VIR_ERR_INTERNAL_ERROR,
+                       _("got wrong number of vCPU pids from QEMU monitor. "
+                         "got %d, wanted %d"),
+                       ncpupids, vcpus);
+        vcpus = oldvcpus;
+        ret = -1;
+        goto cleanup;
+    }
+
+    for (i = oldvcpus - 1; i >= nvcpus; i--) {
+        if (qemuDomainDelCgroupForThread(priv->cgroup,
+                                         VIR_CGROUP_THREAD_VCPU, i) < 0)
+            goto cleanup;
+
+        /* Free vcpupin setting */
+        virDomainPinDel(&vm->def->cputune.vcpupin,
+                        &vm->def->cputune.nvcpupin,
+                        i);
+    }
+
+    priv->nvcpupids = ncpupids;
+    VIR_FREE(priv->vcpupids);
+    priv->vcpupids = cpupids;
+    cpupids = NULL;
+
+ cleanup:
+    VIR_FREE(cpupids);
+    if (virDomainObjIsActive(vm) &&
+        virDomainDefSetVCpus(vm->def, vcpus) < 0)
+        ret = -1;
+    virDomainAuditVcpu(vm, oldvcpus, nvcpus, "update", rc == 1);
+    return ret;
+
+ unsupported:
+    virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+                   _("cannot change vcpu count of this domain"));
+ exit_monitor:
+    ignore_value(qemuDomainObjExitMonitor(driver, vm));
+    goto cleanup;
+}
+
+
+static int
 qemuDomainSetVcpusAgent(virDomainObjPtr vm,
                         unsigned int nvcpus)
 {
@@ -4985,8 +5042,13 @@ qemuDomainSetVcpusFlags(virDomainPtr dom, unsigned int nvcpus,
     }
     if (def) {
-        if (qemuDomainHotplugVcpus(driver, vm, nvcpus) < 0)
-            goto endjob;
+        if (nvcpus > virDomainDefGetVCpus(def)) {
+            if (qemuDomainHotplugVcpus(driver, vm, nvcpus) < 0)
+                goto endjob;
+        } else {
+            if (qemuDomainHotunplugVcpus(driver, vm, nvcpus) < 0)
+                goto endjob;
+        }
Could have gone with HotplugAddVcpus and HotplugDelVcpus (similar to IOThreads). Whether you change it is up to you.

ACK -

John
if (virDomainSaveStatus(driver->xmlopt, cfg->stateDir, vm) < 0) goto endjob;

On Mon, Nov 23, 2015 at 14:19:58 -0500, John Ferlan wrote:
On 11/20/2015 10:22 AM, Peter Krempa wrote:
There's only very little common code among the two operations. Split the
functions so that the internals are easier to understand and refactor
later.
---
 src/qemu/qemu_driver.c | 210 ++++++++++++++++++++++++++++++++-----------------
 1 file changed, 136 insertions(+), 74 deletions(-)
[...]
     if (def) {
-        if (qemuDomainHotplugVcpus(driver, vm, nvcpus) < 0)
-            goto endjob;
+        if (nvcpus > virDomainDefGetVCpus(def)) {
+            if (qemuDomainHotplugVcpus(driver, vm, nvcpus) < 0)
+                goto endjob;
+        } else {
+            if (qemuDomainHotunplugVcpus(driver, vm, nvcpus) < 0)
+                goto endjob;
+        }
Could have gone with HotplugAddVcpus and HotplugDelVcpus (similar to IOThreads). Whether you change it is up to you.
I decided to go with 'qemuDomainHotplugAddVcpu' without the plural form. It will be a bit misleading until one of the later commits moves the loop to the caller.
ACK -
John
Peter

The cpu hotplug helper functions used negative error handling in a part
of them, although some code that was added later didn't properly set the
error codes in some cases. This would cause improper error messages in
cases where we couldn't modify the numa cpu mask and a few other cases.

Fix the logic by converting it to the regularly used pattern.
---
 src/qemu/qemu_driver.c | 26 +++++++++-----------------
 1 file changed, 9 insertions(+), 17 deletions(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index a483220..49fdd63 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -4721,10 +4721,6 @@ qemuDomainHotplugVcpus(virQEMUDriverPtr driver,
         vcpus++;
     }

-    /* hotplug succeeded */
-
-    ret = 0;
-
     /* After hotplugging the CPUs we need to re-detect threads corresponding
      * to the virtual CPUs. Some older versions don't provide the thread ID
      * or don't have the "info cpus" command (and they don't support multiple
@@ -4732,12 +4728,12 @@ qemuDomainHotplugVcpus(virQEMUDriverPtr driver,
      * fatal */
     if ((ncpupids = qemuMonitorGetCPUInfo(priv->mon, &cpupids)) <= 0) {
         virResetLastError();
+        ret = 0;
         goto exit_monitor;
     }
-    if (qemuDomainObjExitMonitor(driver, vm) < 0) {
-        ret = -1;
+
+    if (qemuDomainObjExitMonitor(driver, vm) < 0)
         goto cleanup;
-    }

     if (ncpupids != vcpus) {
         virReportError(VIR_ERR_INTERNAL_ERROR,
@@ -4745,7 +4741,6 @@ qemuDomainHotplugVcpus(virQEMUDriverPtr driver,
                        "got %d, wanted %d"),
                        ncpupids, vcpus);
         vcpus = oldvcpus;
-        ret = -1;
         goto cleanup;
     }

@@ -4772,12 +4767,10 @@ qemuDomainHotplugVcpus(virQEMUDriverPtr driver,
             if (qemuDomainHotplugAddPin(vm->def->cpumask, i,
                                         &vm->def->cputune.vcpupin,
                                         &vm->def->cputune.nvcpupin) < 0) {
-                ret = -1;
                 goto cleanup;
             }
             if (qemuDomainHotplugPinThread(vm->def->cpumask, i, cpupids[i],
                                            cgroup_vcpu) < 0) {
-                ret = -1;
                 goto cleanup;
             }
         }
@@ -4794,6 +4787,8 @@ qemuDomainHotplugVcpus(virQEMUDriverPtr driver,
     priv->vcpupids = cpupids;
     cpupids = NULL;

+    ret = 0;
+
 cleanup:
     VIR_FREE(cpupids);
     VIR_FREE(mem_mask);
@@ -4841,8 +4836,6 @@ qemuDomainHotunplugVcpus(virQEMUDriverPtr driver,
         vcpus--;
     }

-    ret = 0;
-
     /* After hotplugging the CPUs we need to re-detect threads corresponding
      * to the virtual CPUs. Some older versions don't provide the thread ID
      * or don't have the "info cpus" command (and they don't support multiple
@@ -4850,19 +4843,17 @@ qemuDomainHotunplugVcpus(virQEMUDriverPtr driver,
      * fatal */
     if ((ncpupids = qemuMonitorGetCPUInfo(priv->mon, &cpupids)) <= 0) {
         virResetLastError();
+        ret = 0;
         goto exit_monitor;
     }
-    if (qemuDomainObjExitMonitor(driver, vm) < 0) {
-        ret = -1;
+    if (qemuDomainObjExitMonitor(driver, vm) < 0)
         goto cleanup;
-    }

     /* check if hotunplug has failed */
     if (ncpupids == oldvcpus) {
         virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s",
                        _("qemu didn't unplug the vCPUs properly"));
         vcpus = oldvcpus;
-        ret = -1;
         goto cleanup;
     }

@@ -4872,7 +4863,6 @@ qemuDomainHotunplugVcpus(virQEMUDriverPtr driver,
                        "got %d, wanted %d"),
                        ncpupids, vcpus);
         vcpus = oldvcpus;
-        ret = -1;
         goto cleanup;
     }

@@ -4892,6 +4882,8 @@ qemuDomainHotunplugVcpus(virQEMUDriverPtr driver,
     priv->vcpupids = cpupids;
     cpupids = NULL;

+    ret = 0;
+
 cleanup:
     VIR_FREE(cpupids);
     if (virDomainObjIsActive(vm) &&
--
2.6.2

On 11/20/2015 10:22 AM, Peter Krempa wrote:
The cpu hotplug helper functions used negative error handling in a part of them, although some code that was added later didn't properly set the error codes in some cases. This would cause improper error messages in cases where we couldn't modify the numa cpu mask and a few other cases.
Fix the logic by converting it to the regularly used pattern. --- src/qemu/qemu_driver.c | 26 +++++++++----------------- 1 file changed, 9 insertions(+), 17 deletions(-)
ACK (could have removed a couple of open/close {} brackets [1]). One other "could do" thing since I peeked at the next patch - qemuMonitorSetCPU could lift the comments from qemuMonitorJSONSetCPU or qemuMonitorTextSetCPU...

John
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index a483220..49fdd63 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -4721,10 +4721,6 @@ qemuDomainHotplugVcpus(virQEMUDriverPtr driver, vcpus++; }
- /* hotplug succeeded */ - - ret = 0; - /* After hotplugging the CPUs we need to re-detect threads corresponding * to the virtual CPUs. Some older versions don't provide the thread ID * or don't have the "info cpus" command (and they don't support multiple @@ -4732,12 +4728,12 @@ qemuDomainHotplugVcpus(virQEMUDriverPtr driver, * fatal */ if ((ncpupids = qemuMonitorGetCPUInfo(priv->mon, &cpupids)) <= 0) { virResetLastError(); + ret = 0; goto exit_monitor; } - if (qemuDomainObjExitMonitor(driver, vm) < 0) { - ret = -1; + + if (qemuDomainObjExitMonitor(driver, vm) < 0) goto cleanup; - }
if (ncpupids != vcpus) { virReportError(VIR_ERR_INTERNAL_ERROR, @@ -4745,7 +4741,6 @@ qemuDomainHotplugVcpus(virQEMUDriverPtr driver, "got %d, wanted %d"), ncpupids, vcpus); vcpus = oldvcpus; - ret = -1; goto cleanup; }
@@ -4772,12 +4767,10 @@ qemuDomainHotplugVcpus(virQEMUDriverPtr driver, if (qemuDomainHotplugAddPin(vm->def->cpumask, i, &vm->def->cputune.vcpupin, &vm->def->cputune.nvcpupin) < 0) { - ret = -1; goto cleanup; }
[1] {} not necessary
if (qemuDomainHotplugPinThread(vm->def->cpumask, i, cpupids[i], cgroup_vcpu) < 0) { - ret = -1; goto cleanup; }
[1] {} not necessary
} @@ -4794,6 +4787,8 @@ qemuDomainHotplugVcpus(virQEMUDriverPtr driver, priv->vcpupids = cpupids; cpupids = NULL;
+ ret = 0; + cleanup: VIR_FREE(cpupids); VIR_FREE(mem_mask); @@ -4841,8 +4836,6 @@ qemuDomainHotunplugVcpus(virQEMUDriverPtr driver, vcpus--; }
- ret = 0; - /* After hotplugging the CPUs we need to re-detect threads corresponding * to the virtual CPUs. Some older versions don't provide the thread ID * or don't have the "info cpus" command (and they don't support multiple @@ -4850,19 +4843,17 @@ qemuDomainHotunplugVcpus(virQEMUDriverPtr driver, * fatal */ if ((ncpupids = qemuMonitorGetCPUInfo(priv->mon, &cpupids)) <= 0) { virResetLastError(); + ret = 0; goto exit_monitor; } - if (qemuDomainObjExitMonitor(driver, vm) < 0) { - ret = -1; + if (qemuDomainObjExitMonitor(driver, vm) < 0) goto cleanup; - }
/* check if hotunplug has failed */ if (ncpupids == oldvcpus) { virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s", _("qemu didn't unplug the vCPUs properly")); vcpus = oldvcpus; - ret = -1; goto cleanup; }
@@ -4872,7 +4863,6 @@ qemuDomainHotunplugVcpus(virQEMUDriverPtr driver, "got %d, wanted %d"), ncpupids, vcpus); vcpus = oldvcpus; - ret = -1; goto cleanup; }
@@ -4892,6 +4882,8 @@ qemuDomainHotunplugVcpus(virQEMUDriverPtr driver, priv->vcpupids = cpupids; cpupids = NULL;
+ ret = 0; + cleanup: VIR_FREE(cpupids); if (virDomainObjIsActive(vm) &&

The return value has non-obvious semantics. Document it.
---
 src/qemu/qemu_monitor.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/src/qemu/qemu_monitor.c b/src/qemu/qemu_monitor.c
index 50c6549..cf7ecb6 100644
--- a/src/qemu/qemu_monitor.c
+++ b/src/qemu/qemu_monitor.c
@@ -1606,6 +1606,15 @@ qemuMonitorSystemReset(qemuMonitorPtr mon)
 }


+/**
+ * qemuMonitorGetCPUInfo:
+ * @mon: monitor
+ * @pids: returned array of thread ids corresponding to the vCPUs
+ *
+ * Detects the vCPU thread ids. Returns count of detected vCPUs on success,
+ * 0 if qemu didn't report thread ids (does not report libvirt error),
+ * -1 on error (reports libvirt error).
+ */
 int
 qemuMonitorGetCPUInfo(qemuMonitorPtr mon,
                       int **pids)
--
2.6.2

On 11/20/2015 10:22 AM, Peter Krempa wrote:
The return value has non-obvious semantics. Document it. --- src/qemu/qemu_monitor.c | 9 +++++++++ 1 file changed, 9 insertions(+)
ACK - wish I'd peeked two patches ahead before I sent the last one ;-)

John

Let the function report errors internally and change it to return
standard return codes.
---
 src/qemu/qemu_driver.c       | 22 ++++------------------
 src/qemu/qemu_monitor_json.c |  4 ----
 src/qemu/qemu_monitor_text.c | 22 +++++++++++-----------
 3 files changed, 15 insertions(+), 33 deletions(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 49fdd63..9011b2d 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -4698,7 +4698,6 @@ qemuDomainHotplugVcpus(virQEMUDriverPtr driver,
 {
     qemuDomainObjPrivatePtr priv = vm->privateData;
     size_t i;
-    int rc = 1;
     int ret = -1;
     int oldvcpus = virDomainDefGetVCpus(vm->def);
     int vcpus = oldvcpus;
@@ -4712,10 +4711,7 @@ qemuDomainHotplugVcpus(virQEMUDriverPtr driver,

     for (i = vcpus; i < nvcpus; i++) {
         /* Online new CPU */
-        rc = qemuMonitorSetCPU(priv->mon, i, true);
-        if (rc == 0)
-            goto unsupported;
-        if (rc < 0)
+        if (qemuMonitorSetCPU(priv->mon, i, true) < 0)
             goto exit_monitor;

         vcpus++;
@@ -4795,14 +4791,11 @@ qemuDomainHotplugVcpus(virQEMUDriverPtr driver,
     if (virDomainObjIsActive(vm) &&
         virDomainDefSetVCpus(vm->def, vcpus) < 0)
         ret = -1;
-    virDomainAuditVcpu(vm, oldvcpus, nvcpus, "update", rc == 1);
+    virDomainAuditVcpu(vm, oldvcpus, nvcpus, "update", ret == 0);
     if (cgroup_vcpu)
         virCgroupFree(&cgroup_vcpu);
     return ret;

- unsupported:
-    virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
-                   _("cannot change vcpu count of this domain"));
 exit_monitor:
     ignore_value(qemuDomainObjExitMonitor(driver, vm));
     goto cleanup;
@@ -4816,7 +4809,6 @@ qemuDomainHotunplugVcpus(virQEMUDriverPtr driver,
 {
     qemuDomainObjPrivatePtr priv = vm->privateData;
     size_t i;
-    int rc = 1;
     int ret = -1;
     int oldvcpus = virDomainDefGetVCpus(vm->def);
     int vcpus = oldvcpus;
@@ -4827,10 +4819,7 @@ qemuDomainHotunplugVcpus(virQEMUDriverPtr driver,

     for (i = vcpus - 1; i >= nvcpus; i--) {
         /* Offline old CPU */
-        rc = qemuMonitorSetCPU(priv->mon, i, false);
-        if (rc == 0)
-            goto unsupported;
-        if (rc < 0)
+        if (qemuMonitorSetCPU(priv->mon, i, false) < 0)
             goto exit_monitor;

         vcpus--;
@@ -4889,12 +4878,9 @@ qemuDomainHotunplugVcpus(virQEMUDriverPtr driver,
     if (virDomainObjIsActive(vm) &&
         virDomainDefSetVCpus(vm->def, vcpus) < 0)
         ret = -1;
-    virDomainAuditVcpu(vm, oldvcpus, nvcpus, "update", rc == 1);
+    virDomainAuditVcpu(vm, oldvcpus, nvcpus, "update", ret == 0);
     return ret;

- unsupported:
-    virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
-                   _("cannot change vcpu count of this domain"));
 exit_monitor:
     ignore_value(qemuDomainObjExitMonitor(driver, vm));
     goto cleanup;
diff --git a/src/qemu/qemu_monitor_json.c b/src/qemu/qemu_monitor_json.c
index 86b8c7b..50d6f62 100644
--- a/src/qemu/qemu_monitor_json.c
+++ b/src/qemu/qemu_monitor_json.c
@@ -2188,10 +2188,6 @@ int qemuMonitorJSONSetCPU(qemuMonitorPtr mon,
     else
         ret = qemuMonitorJSONCheckError(cmd, reply);

-    /* this function has non-standard return values, so adapt it */
-    if (ret == 0)
-        ret = 1;
-
 cleanup:
     virJSONValueFree(cmd);
     virJSONValueFree(reply);
diff --git a/src/qemu/qemu_monitor_text.c b/src/qemu/qemu_monitor_text.c
index f44da20..fd38d01 100644
--- a/src/qemu/qemu_monitor_text.c
+++ b/src/qemu/qemu_monitor_text.c
@@ -1137,8 +1137,7 @@ qemuMonitorTextSetBalloon(qemuMonitorPtr mon,

 /*
- * Returns: 0 if CPU hotplug not supported, +1 if CPU hotplug worked
- * or -1 on failure
+ * Returns: 0 if CPU modification was successful or -1 on failure
  */
 int qemuMonitorTextSetCPU(qemuMonitorPtr mon, int cpu, bool online)
 {
@@ -1149,22 +1148,23 @@ int qemuMonitorTextSetCPU(qemuMonitorPtr mon, int cpu, bool online)
     if (virAsprintf(&cmd, "cpu_set %d %s", cpu,
                     online ? "online" : "offline") < 0)
         return -1;

-    if (qemuMonitorHMPCommand(mon, cmd, &reply) < 0) {
-        VIR_FREE(cmd);
-        return -1;
-    }
-    VIR_FREE(cmd);
+    if (qemuMonitorHMPCommand(mon, cmd, &reply) < 0)
+        goto cleanup;

     /* If the command failed qemu prints: 'unknown command'
      * No message is printed on success it seems */
     if (strstr(reply, "unknown command:")) {
-        /* Don't set error - it is expected CPU onlining fails on many qemu -
-           caller will handle */
-        ret = 0;
-    } else {
-        ret = 1;
+        virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+                       _("cannot change vcpu count of this domain"));
+        goto cleanup;
     }

+    ret = 0;
+
+ cleanup:
     VIR_FREE(reply);
+    VIR_FREE(cmd);
+
     return ret;
 }
--
2.6.2

On 11/20/2015 10:22 AM, Peter Krempa wrote:
Let the function report errors internally and change it to return standard return codes. --- src/qemu/qemu_driver.c | 22 ++++------------------ src/qemu/qemu_monitor_json.c | 4 ---- src/qemu/qemu_monitor_text.c | 22 +++++++++++----------- 3 files changed, 15 insertions(+), 33 deletions(-)
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index 49fdd63..9011b2d 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -4698,7 +4698,6 @@ qemuDomainHotplugVcpus(virQEMUDriverPtr driver, { qemuDomainObjPrivatePtr priv = vm->privateData; size_t i; - int rc = 1; int ret = -1; int oldvcpus = virDomainDefGetVCpus(vm->def); int vcpus = oldvcpus; @@ -4712,10 +4711,7 @@ qemuDomainHotplugVcpus(virQEMUDriverPtr driver,
for (i = vcpus; i < nvcpus; i++) { /* Online new CPU */ - rc = qemuMonitorSetCPU(priv->mon, i, true); - if (rc == 0) - goto unsupported; - if (rc < 0) + if (qemuMonitorSetCPU(priv->mon, i, true) < 0) goto exit_monitor;
vcpus++; @@ -4795,14 +4791,11 @@ qemuDomainHotplugVcpus(virQEMUDriverPtr driver, if (virDomainObjIsActive(vm) && virDomainDefSetVCpus(vm->def, vcpus) < 0) ret = -1; - virDomainAuditVcpu(vm, oldvcpus, nvcpus, "update", rc == 1); + virDomainAuditVcpu(vm, oldvcpus, nvcpus, "update", ret == 0); if (cgroup_vcpu) virCgroupFree(&cgroup_vcpu); return ret;
- unsupported: - virReportError(VIR_ERR_INTERNAL_ERROR, "%s", - _("cannot change vcpu count of this domain")); exit_monitor: ignore_value(qemuDomainObjExitMonitor(driver, vm)); goto cleanup; @@ -4816,7 +4809,6 @@ qemuDomainHotunplugVcpus(virQEMUDriverPtr driver, { qemuDomainObjPrivatePtr priv = vm->privateData; size_t i; - int rc = 1; int ret = -1; int oldvcpus = virDomainDefGetVCpus(vm->def); int vcpus = oldvcpus; @@ -4827,10 +4819,7 @@ qemuDomainHotunplugVcpus(virQEMUDriverPtr driver,
for (i = vcpus - 1; i >= nvcpus; i--) { /* Offline old CPU */ - rc = qemuMonitorSetCPU(priv->mon, i, false); - if (rc == 0) - goto unsupported; - if (rc < 0) + if (qemuMonitorSetCPU(priv->mon, i, false) < 0) goto exit_monitor;
vcpus--; @@ -4889,12 +4878,9 @@ qemuDomainHotunplugVcpus(virQEMUDriverPtr driver, if (virDomainObjIsActive(vm) && virDomainDefSetVCpus(vm->def, vcpus) < 0) ret = -1; - virDomainAuditVcpu(vm, oldvcpus, nvcpus, "update", rc == 1); + virDomainAuditVcpu(vm, oldvcpus, nvcpus, "update", ret == 0); return ret;
- unsupported: - virReportError(VIR_ERR_INTERNAL_ERROR, "%s", - _("cannot change vcpu count of this domain")); exit_monitor: ignore_value(qemuDomainObjExitMonitor(driver, vm)); goto cleanup; diff --git a/src/qemu/qemu_monitor_json.c b/src/qemu/qemu_monitor_json.c index 86b8c7b..50d6f62 100644 --- a/src/qemu/qemu_monitor_json.c +++ b/src/qemu/qemu_monitor_json.c
Need to adjust comments here... Probably could move the comments to qemuMonitorSetCPU just so one doesn't have to chase into a second level to know what the function returns.
@@ -2188,10 +2188,6 @@ int qemuMonitorJSONSetCPU(qemuMonitorPtr mon, else ret = qemuMonitorJSONCheckError(cmd, reply);
- /* this function has non-standard return values, so adapt it */ - if (ret == 0) - ret = 1; - cleanup: virJSONValueFree(cmd); virJSONValueFree(reply); diff --git a/src/qemu/qemu_monitor_text.c b/src/qemu/qemu_monitor_text.c index f44da20..fd38d01 100644 --- a/src/qemu/qemu_monitor_text.c +++ b/src/qemu/qemu_monitor_text.c @@ -1137,8 +1137,7 @@ qemuMonitorTextSetBalloon(qemuMonitorPtr mon,
 /*
- * Returns: 0 if CPU hotplug not supported, +1 if CPU hotplug worked
- * or -1 on failure
+ * Returns: 0 if CPU modification was successful or -1 on failure
  */
Could copy/move the comment to qemuMonitorSetCPU ACK - as long as JSON function comments modified. John
int qemuMonitorTextSetCPU(qemuMonitorPtr mon, int cpu, bool online) { @@ -1149,22 +1148,23 @@ int qemuMonitorTextSetCPU(qemuMonitorPtr mon, int cpu, bool online) if (virAsprintf(&cmd, "cpu_set %d %s", cpu, online ? "online" : "offline") < 0) return -1;
- if (qemuMonitorHMPCommand(mon, cmd, &reply) < 0) { - VIR_FREE(cmd); - return -1; - } - VIR_FREE(cmd); + if (qemuMonitorHMPCommand(mon, cmd, &reply) < 0) + goto cleanup;
/* If the command failed qemu prints: 'unknown command' * No message is printed on success it seems */ if (strstr(reply, "unknown command:")) { - /* Don't set error - it is expected CPU onlining fails on many qemu - caller will handle */ - ret = 0; - } else { - ret = 1; + virReportError(VIR_ERR_INTERNAL_ERROR, "%s", + _("cannot change vcpu count of this domain")); + goto cleanup; }
+ ret = 0; + + cleanup: VIR_FREE(reply); + VIR_FREE(cmd); + return ret; }

On Mon, Nov 23, 2015 at 15:07:32 -0500, John Ferlan wrote:
On 11/20/2015 10:22 AM, Peter Krempa wrote:
Let the function report errors internally and change it to return standard return codes. --- src/qemu/qemu_driver.c | 22 ++++------------------ src/qemu/qemu_monitor_json.c | 4 ---- src/qemu/qemu_monitor_text.c | 22 +++++++++++----------- 3 files changed, 15 insertions(+), 33 deletions(-)
[...]
diff --git a/src/qemu/qemu_monitor_json.c b/src/qemu/qemu_monitor_json.c index 86b8c7b..50d6f62 100644 --- a/src/qemu/qemu_monitor_json.c +++ b/src/qemu/qemu_monitor_json.c
Need to adjust comments here... Probably could move the comments to qemuMonitorSetCPU just so one doesn't have to chase into a second level to know what the function returns.
Oh, indeed I missed the comment. I've removed it and used the one from the text monitor in the monitor dispatcher file.
@@ -2188,10 +2188,6 @@ int qemuMonitorJSONSetCPU(qemuMonitorPtr mon, else ret = qemuMonitorJSONCheckError(cmd, reply);
- /* this function has non-standard return values, so adapt it */ - if (ret == 0) - ret = 1; - cleanup: virJSONValueFree(cmd); virJSONValueFree(reply); diff --git a/src/qemu/qemu_monitor_text.c b/src/qemu/qemu_monitor_text.c index f44da20..fd38d01 100644 --- a/src/qemu/qemu_monitor_text.c +++ b/src/qemu/qemu_monitor_text.c @@ -1137,8 +1137,7 @@ qemuMonitorTextSetBalloon(qemuMonitorPtr mon,
 /*
- * Returns: 0 if CPU hotplug not supported, +1 if CPU hotplug worked
- * or -1 on failure
+ * Returns: 0 if CPU modification was successful or -1 on failure
  */
Could copy/move the comment to qemuMonitorSetCPU
Done. Thanks for the suggestion.
ACK - as long as JSON function comments modified.
John

qemuDomainHotplugVcpus/qemuDomainHotunplugVcpus are complex enough in
regards of adding one CPU. Additionally it will be desired to reuse
those functions later with specific vCPU hotplug.

Move the loops for adding vCPUs into qemuDomainSetVcpusFlags so that
the helpers can be made simpler and more straightforward.
---
 src/qemu/qemu_driver.c | 105 ++++++++++++++++++++++---------------------------
 1 file changed, 48 insertions(+), 57 deletions(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 9011b2d..9f0e3a3 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -4694,10 +4694,9 @@ qemuDomainDelCgroupForThread(virCgroupPtr cgroup,
 static int
 qemuDomainHotplugVcpus(virQEMUDriverPtr driver,
                        virDomainObjPtr vm,
-                       unsigned int nvcpus)
+                       unsigned int vcpu)
 {
     qemuDomainObjPrivatePtr priv = vm->privateData;
-    size_t i;
     int ret = -1;
     int oldvcpus = virDomainDefGetVCpus(vm->def);
     int vcpus = oldvcpus;
@@ -4709,13 +4708,10 @@ qemuDomainHotplugVcpus(virQEMUDriverPtr driver,

     qemuDomainObjEnterMonitor(driver, vm);

-    for (i = vcpus; i < nvcpus; i++) {
-        /* Online new CPU */
-        if (qemuMonitorSetCPU(priv->mon, i, true) < 0)
-            goto exit_monitor;
+    if (qemuMonitorSetCPU(priv->mon, vcpu, true) < 0)
+        goto exit_monitor;

-        vcpus++;
-    }
+    vcpus++;

     /* After hotplugging the CPUs we need to re-detect threads corresponding
      * to the virtual CPUs. Some older versions don't provide the thread ID
@@ -4747,37 +4743,34 @@ qemuDomainHotplugVcpus(virQEMUDriverPtr driver,
                                             &mem_mask, -1) < 0)
         goto cleanup;

-    for (i = oldvcpus; i < nvcpus; i++) {
-        if (priv->cgroup) {
-            cgroup_vcpu =
-                qemuDomainAddCgroupForThread(priv->cgroup,
-                                             VIR_CGROUP_THREAD_VCPU,
-                                             i, mem_mask,
-                                             cpupids[i]);
-            if (!cgroup_vcpu)
-                goto cleanup;
-        }
+    if (priv->cgroup) {
+        cgroup_vcpu =
+            qemuDomainAddCgroupForThread(priv->cgroup,
+                                         VIR_CGROUP_THREAD_VCPU,
+                                         vcpu, mem_mask,
+                                         cpupids[vcpu]);
+        if (!cgroup_vcpu)
+            goto cleanup;
+    }

-        /* Inherit def->cpuset */
-        if (vm->def->cpumask) {
-            if (qemuDomainHotplugAddPin(vm->def->cpumask, i,
-                                        &vm->def->cputune.vcpupin,
-                                        &vm->def->cputune.nvcpupin) < 0) {
-                goto cleanup;
-            }
-            if (qemuDomainHotplugPinThread(vm->def->cpumask, i, cpupids[i],
-                                           cgroup_vcpu) < 0) {
-                goto cleanup;
-            }
+    /* Inherit def->cpuset */
+    if (vm->def->cpumask) {
+        if (qemuDomainHotplugAddPin(vm->def->cpumask, vcpu,
+                                    &vm->def->cputune.vcpupin,
+                                    &vm->def->cputune.nvcpupin) < 0) {
+            goto cleanup;
         }
-        virCgroupFree(&cgroup_vcpu);
-
-        if (qemuProcessSetSchedParams(i, cpupids[i],
-                                      vm->def->cputune.nvcpusched,
-                                      vm->def->cputune.vcpusched) < 0)
+        if (qemuDomainHotplugPinThread(vm->def->cpumask, vcpu, cpupids[vcpu],
+                                       cgroup_vcpu) < 0) {
             goto cleanup;
+        }
     }

+    if (qemuProcessSetSchedParams(vcpu, cpupids[vcpu],
+                                  vm->def->cputune.nvcpusched,
+                                  vm->def->cputune.vcpusched) < 0)
+        goto cleanup;
+
     priv->nvcpupids = ncpupids;
     VIR_FREE(priv->vcpupids);
     priv->vcpupids = cpupids;
@@ -4791,7 +4784,7 @@ qemuDomainHotplugVcpus(virQEMUDriverPtr driver,
     if (virDomainObjIsActive(vm) &&
         virDomainDefSetVCpus(vm->def, vcpus) < 0)
         ret = -1;
-    virDomainAuditVcpu(vm, oldvcpus, nvcpus, "update", ret == 0);
+    virDomainAuditVcpu(vm, oldvcpus, vcpus, "update", ret == 0);
     if (cgroup_vcpu)
         virCgroupFree(&cgroup_vcpu);
     return ret;
@@ -4805,10 +4798,9 @@ qemuDomainHotplugVcpus(virQEMUDriverPtr driver,
 static int
 qemuDomainHotunplugVcpus(virQEMUDriverPtr driver,
                          virDomainObjPtr vm,
-                         unsigned int nvcpus)
+                         unsigned int vcpu)
 {
     qemuDomainObjPrivatePtr priv = vm->privateData;
-    size_t i;
     int ret = -1;
     int oldvcpus = virDomainDefGetVCpus(vm->def);
     int vcpus = oldvcpus;
@@ -4817,13 +4809,10 @@ qemuDomainHotunplugVcpus(virQEMUDriverPtr driver,

     qemuDomainObjEnterMonitor(driver, vm);

-    for (i = vcpus - 1; i >= nvcpus; i--) {
-        /* Offline old CPU */
-        if (qemuMonitorSetCPU(priv->mon, i, false) < 0)
-            goto exit_monitor;
+    if (qemuMonitorSetCPU(priv->mon, vcpu, false) < 0)
+        goto exit_monitor;

-        vcpus--;
-    }
+    vcpus--;

     /* After hotplugging the CPUs we need to re-detect threads corresponding
      * to the virtual CPUs. Some older versions don't provide the thread ID
@@ -4855,16 +4844,14 @@ qemuDomainHotunplugVcpus(virQEMUDriverPtr driver,
         goto cleanup;
     }

-    for (i = oldvcpus - 1; i >= nvcpus; i--) {
-        if (qemuDomainDelCgroupForThread(priv->cgroup,
-                                         VIR_CGROUP_THREAD_VCPU, i) < 0)
-            goto cleanup;
+    if (qemuDomainDelCgroupForThread(priv->cgroup,
+                                     VIR_CGROUP_THREAD_VCPU, vcpu) < 0)
+        goto cleanup;

-        /* Free vcpupin setting */
-        virDomainPinDel(&vm->def->cputune.vcpupin,
-                        &vm->def->cputune.nvcpupin,
-                        i);
-    }
+    /* Free vcpupin setting */
+    virDomainPinDel(&vm->def->cputune.vcpupin,
+                    &vm->def->cputune.nvcpupin,
+                    vcpu);

     priv->nvcpupids = ncpupids;
     VIR_FREE(priv->vcpupids);
@@ -4878,7 +4865,7 @@ qemuDomainHotunplugVcpus(virQEMUDriverPtr driver,
     if (virDomainObjIsActive(vm) &&
         virDomainDefSetVCpus(vm->def, vcpus) < 0)
         ret = -1;
-    virDomainAuditVcpu(vm, oldvcpus, nvcpus, "update", ret == 0);
+    virDomainAuditVcpu(vm, oldvcpus, vcpus, "update", ret == 0);
     return ret;

 exit_monitor:
@@ -5021,11 +5008,15 @@ qemuDomainSetVcpusFlags(virDomainPtr dom, unsigned int nvcpus,

     if (def) {
         if (nvcpus > virDomainDefGetVCpus(def)) {
-            if (qemuDomainHotplugVcpus(driver, vm, nvcpus) < 0)
-                goto endjob;
+            for (i = virDomainDefGetVCpus(def); i < nvcpus; i++) {
+                if (qemuDomainHotplugVcpus(driver, vm, i) < 0)
+                    goto endjob;
+            }
         } else {
-            if (qemuDomainHotunplugVcpus(driver, vm, nvcpus) < 0)
-                goto endjob;
+            for (i = virDomainDefGetVCpus(def) - 1; i >= nvcpus; i--) {
+                if (qemuDomainHotunplugVcpus(driver, vm, i) < 0)
+                    goto endjob;
+            }
         }

         if (virDomainSaveStatus(driver->xmlopt, cfg->stateDir, vm) < 0)
--
2.6.2

On 11/20/2015 10:22 AM, Peter Krempa wrote:
qemuDomainHotplugVcpus/qemuDomainHotunplugVcpus are complex enough in regards of adding one CPU. Additionally it will be desired to reuse those functions later with specific vCPU hotplug.
Move the loops for adding vCPUs into qemuDomainSetVcpusFlags so that the helpers can be made simpler and more straightforward. --- src/qemu/qemu_driver.c | 105 ++++++++++++++++++++++--------------------------- 1 file changed, 48 insertions(+), 57 deletions(-)
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index 9011b2d..9f0e3a3 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -4694,10 +4694,9 @@ qemuDomainDelCgroupForThread(virCgroupPtr cgroup, static int qemuDomainHotplugVcpus(virQEMUDriverPtr driver, virDomainObjPtr vm, - unsigned int nvcpus) + unsigned int vcpu) { qemuDomainObjPrivatePtr priv = vm->privateData; - size_t i; int ret = -1; int oldvcpus = virDomainDefGetVCpus(vm->def); int vcpus = oldvcpus;
You could set this to oldvcpus + 1;...
@@ -4709,13 +4708,10 @@ qemuDomainHotplugVcpus(virQEMUDriverPtr driver,
qemuDomainObjEnterMonitor(driver, vm);
-    for (i = vcpus; i < nvcpus; i++) {
-        /* Online new CPU */
-        if (qemuMonitorSetCPU(priv->mon, i, true) < 0)
-            goto exit_monitor;
+    if (qemuMonitorSetCPU(priv->mon, vcpu, true) < 0)
+        goto exit_monitor;
-        vcpus++;
-    }
+    vcpus++;
Thus removing the need for this... and yielding an audit message that doesn't use the same value for oldvcpus and vcpus (although it could have before these changes).
/* After hotplugging the CPUs we need to re-detect threads corresponding * to the virtual CPUs. Some older versions don't provide the thread ID @@ -4747,37 +4743,34 @@ qemuDomainHotplugVcpus(virQEMUDriverPtr driver, &mem_mask, -1) < 0) goto cleanup;
- for (i = oldvcpus; i < nvcpus; i++) { - if (priv->cgroup) { - cgroup_vcpu = - qemuDomainAddCgroupForThread(priv->cgroup, - VIR_CGROUP_THREAD_VCPU, - i, mem_mask, - cpupids[i]); - if (!cgroup_vcpu) - goto cleanup; - } + if (priv->cgroup) { + cgroup_vcpu = + qemuDomainAddCgroupForThread(priv->cgroup, + VIR_CGROUP_THREAD_VCPU, + vcpu, mem_mask, + cpupids[vcpu]); + if (!cgroup_vcpu) + goto cleanup; + }
- /* Inherit def->cpuset */ - if (vm->def->cpumask) { - if (qemuDomainHotplugAddPin(vm->def->cpumask, i, - &vm->def->cputune.vcpupin, - &vm->def->cputune.nvcpupin) < 0) { - goto cleanup; - } - if (qemuDomainHotplugPinThread(vm->def->cpumask, i, cpupids[i], - cgroup_vcpu) < 0) { - goto cleanup; - } + /* Inherit def->cpuset */ + if (vm->def->cpumask) { + if (qemuDomainHotplugAddPin(vm->def->cpumask, vcpu, + &vm->def->cputune.vcpupin, + &vm->def->cputune.nvcpupin) < 0) { + goto cleanup; } - virCgroupFree(&cgroup_vcpu);
Ahhh.. finally ;-)
- - if (qemuProcessSetSchedParams(i, cpupids[i], - vm->def->cputune.nvcpusched, - vm->def->cputune.vcpusched) < 0) + if (qemuDomainHotplugPinThread(vm->def->cpumask, vcpu, cpupids[vcpu], + cgroup_vcpu) < 0) { goto cleanup; + } }
+ if (qemuProcessSetSchedParams(vcpu, cpupids[vcpu], + vm->def->cputune.nvcpusched, + vm->def->cputune.vcpusched) < 0) + goto cleanup; + priv->nvcpupids = ncpupids; VIR_FREE(priv->vcpupids); priv->vcpupids = cpupids; @@ -4791,7 +4784,7 @@ qemuDomainHotplugVcpus(virQEMUDriverPtr driver, if (virDomainObjIsActive(vm) && virDomainDefSetVCpus(vm->def, vcpus) < 0) ret = -1; - virDomainAuditVcpu(vm, oldvcpus, nvcpus, "update", ret == 0); + virDomainAuditVcpu(vm, oldvcpus, vcpus, "update", ret == 0); if (cgroup_vcpu) virCgroupFree(&cgroup_vcpu); return ret; @@ -4805,10 +4798,9 @@ qemuDomainHotplugVcpus(virQEMUDriverPtr driver, static int qemuDomainHotunplugVcpus(virQEMUDriverPtr driver, virDomainObjPtr vm, - unsigned int nvcpus) + unsigned int vcpu) { qemuDomainObjPrivatePtr priv = vm->privateData; - size_t i; int ret = -1; int oldvcpus = virDomainDefGetVCpus(vm->def); int vcpus = oldvcpus;
Same as Hotplug, but in reverse... oldvcpus - 1;
@@ -4817,13 +4809,10 @@ qemuDomainHotunplugVcpus(virQEMUDriverPtr driver,
qemuDomainObjEnterMonitor(driver, vm);
- for (i = vcpus - 1; i >= nvcpus; i--) { - /* Offline old CPU */ - if (qemuMonitorSetCPU(priv->mon, i, false) < 0) - goto exit_monitor; + if (qemuMonitorSetCPU(priv->mon, vcpu, false) < 0) + goto exit_monitor;
- vcpus--; - } + vcpus--;
Removing this...
/* After hotplugging the CPUs we need to re-detect threads corresponding * to the virtual CPUs. Some older versions don't provide the thread ID @@ -4855,16 +4844,14 @@ qemuDomainHotunplugVcpus(virQEMUDriverPtr driver, goto cleanup; }
- for (i = oldvcpus - 1; i >= nvcpus; i--) { - if (qemuDomainDelCgroupForThread(priv->cgroup, - VIR_CGROUP_THREAD_VCPU, i) < 0) - goto cleanup; + if (qemuDomainDelCgroupForThread(priv->cgroup, + VIR_CGROUP_THREAD_VCPU, vcpu) < 0) + goto cleanup;
- /* Free vcpupin setting */ - virDomainPinDel(&vm->def->cputune.vcpupin, - &vm->def->cputune.nvcpupin, - i); - } + /* Free vcpupin setting */ + virDomainPinDel(&vm->def->cputune.vcpupin, + &vm->def->cputune.nvcpupin, + vcpu);
priv->nvcpupids = ncpupids; VIR_FREE(priv->vcpupids); @@ -4878,7 +4865,7 @@ qemuDomainHotunplugVcpus(virQEMUDriverPtr driver, if (virDomainObjIsActive(vm) && virDomainDefSetVCpus(vm->def, vcpus) < 0) ret = -1; - virDomainAuditVcpu(vm, oldvcpus, nvcpus, "update", ret == 0); + virDomainAuditVcpu(vm, oldvcpus, vcpus, "update", ret == 0); return ret;
exit_monitor: @@ -5021,11 +5008,15 @@ qemuDomainSetVcpusFlags(virDomainPtr dom, unsigned int nvcpus,
if (def) {
Could use:

    unsigned int curvcpus = virDomainDefGetVCpus(def);

Although I suppose the compiler will optimize anyway...
if (nvcpus > virDomainDefGetVCpus(def)) { - if (qemuDomainHotplugVcpus(driver, vm, nvcpus) < 0) - goto endjob; + for (i = virDomainDefGetVCpus(def); i < nvcpus; i++) { + if (qemuDomainHotplugVcpus(driver, vm, i) < 0) + goto endjob; + } } else { - if (qemuDomainHotunplugVcpus(driver, vm, nvcpus) < 0) - goto endjob;
Perhaps this is where the comment removed during patch 19 would be descriptive, e.g. adjust the following to fit here:

-    /* We need different branches here, because we want to offline
-     * in reverse order to onlining, so any partial fail leaves us in a
-     * reasonably sensible state */

ACK - none of the mentioned changes is required, merely suggestions

John
+ for (i = virDomainDefGetVCpus(def) - 1; i >= nvcpus; i--) { + if (qemuDomainHotunplugVcpus(driver, vm, i) < 0) + goto endjob; + } }
if (virDomainSaveStatus(driver->xmlopt, cfg->stateDir, vm) < 0)
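The reverse-order offlining that the removed comment described can be sketched stand-alone. The helper below is a hypothetical stand-in (not driver code) that just records the order instead of talking to qemu, to show why a partial failure leaves a sensible state: the highest-numbered vCPUs go away first.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-in: records which vCPU would be unplugged, in order. */
static unsigned int unplugged[8];
static size_t nunplugged;

static int mock_hotunplug_vcpu(unsigned int vcpu)
{
    unplugged[nunplugged++] = vcpu;
    return 0;
}

/* Offline in reverse order to onlining, so any partial failure leaves the
 * first 'nvcpus' vCPUs online and only a contiguous tail removed. */
static int unplug_down_to(unsigned int curvcpus, unsigned int nvcpus)
{
    int i;  /* signed on purpose: the loop counts down past nvcpus */

    for (i = curvcpus - 1; i >= (int)nvcpus; i--) {
        if (mock_hotunplug_vcpu(i) < 0)
            return -1;
    }
    return 0;
}
```

Going from 4 current vCPUs down to 2 unplugs #3 first, then #2.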

Refactor the code flow so that 'exit_monitor:' can be removed.

This patch also moves the auditing and setting of the new vCPU count right to the place where the hotplug happens, since it's possible that the hotplug succeeds and adds a cpu while other stuff fails.

Lastly, failures of qemuMonitorGetCPUInfo are now reported rather than ignored. The function returns 0 if it "successfully" detected 0 threads.
---
 src/qemu/qemu_driver.c | 54 +++++++++++++++++++++++---------------------------
 1 file changed, 25 insertions(+), 29 deletions(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 9f0e3a3..9e0e334 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -4698,41 +4698,46 @@ qemuDomainHotplugVcpus(virQEMUDriverPtr driver,
 {
     qemuDomainObjPrivatePtr priv = vm->privateData;
     int ret = -1;
+    int rc;
     int oldvcpus = virDomainDefGetVCpus(vm->def);
-    int vcpus = oldvcpus;
     pid_t *cpupids = NULL;
-    int ncpupids;
+    int ncpupids = 0;
     virCgroupPtr cgroup_vcpu = NULL;
     char *mem_mask = NULL;
     virDomainNumatuneMemMode mem_mode;

     qemuDomainObjEnterMonitor(driver, vm);

-    if (qemuMonitorSetCPU(priv->mon, vcpu, true) < 0)
-        goto exit_monitor;
-
-    vcpus++;
+    rc = qemuMonitorSetCPU(priv->mon, vcpu, true);

-    /* After hotplugging the CPUs we need to re-detect threads corresponding
-     * to the virtual CPUs. Some older versions don't provide the thread ID
-     * or don't have the "info cpus" command (and they don't support multiple
-     * CPUs anyways), so errors in the re-detection will not be treated
-     * fatal */
-    if ((ncpupids = qemuMonitorGetCPUInfo(priv->mon, &cpupids)) <= 0) {
-        virResetLastError();
-        ret = 0;
-        goto exit_monitor;
-    }
+    if (rc == 0)
+        ncpupids = qemuMonitorGetCPUInfo(priv->mon, &cpupids);

     if (qemuDomainObjExitMonitor(driver, vm) < 0)
         goto cleanup;

-    if (ncpupids != vcpus) {
+    virDomainAuditVcpu(vm, oldvcpus, oldvcpus + 1, "update", rc == 0);
+
+    if (rc < 0)
+        goto cleanup;
+
+    ignore_value(virDomainDefSetVCpus(vm->def, oldvcpus + 1));
+
+    if (ncpupids < 0)
+        goto cleanup;
+
+    /* failure to re-detect vCPU pids after hotplug due to lack of support was
+     * historically deemed not fatal. We need to skip the rest of the steps though. */
+    if (ncpupids == 0) {
+        ret = 0;
+        goto cleanup;
+    }
+
+    if (ncpupids != oldvcpus + 1) {
         virReportError(VIR_ERR_INTERNAL_ERROR,
                        _("got wrong number of vCPU pids from QEMU monitor. "
                          "got %d, wanted %d"),
-                       ncpupids, vcpus);
-        vcpus = oldvcpus;
+                       ncpupids, oldvcpus + 1);
         goto cleanup;
     }

@@ -4781,17 +4786,8 @@ qemuDomainHotplugVcpus(virQEMUDriverPtr driver,
 cleanup:
     VIR_FREE(cpupids);
     VIR_FREE(mem_mask);
-    if (virDomainObjIsActive(vm) &&
-        virDomainDefSetVCpus(vm->def, vcpus) < 0)
-        ret = -1;
-    virDomainAuditVcpu(vm, oldvcpus, vcpus, "update", ret == 0);
-    if (cgroup_vcpu)
-        virCgroupFree(&cgroup_vcpu);
+    virCgroupFree(&cgroup_vcpu);
     return ret;
-
- exit_monitor:
-    ignore_value(qemuDomainObjExitMonitor(driver, vm));
-    goto cleanup;
 }
--
2.6.2

On 11/20/2015 10:22 AM, Peter Krempa wrote:
Refactor the code flow so that 'exit_monitor:' can be removed.
This patch also moves the auditing and setting of the new vCPU count right to the place where the hotplug happens, since it's possible that the hotplug succeeds and adds a cpu while other stuff fails.
Lastly, failures of qemuMonitorGetCPUInfo are now reported rather than ignored. The function returns 0 if it "successfully" detected 0 threads.
---
 src/qemu/qemu_driver.c | 54 +++++++++++++++++++++++---------------------------
 1 file changed, 25 insertions(+), 29 deletions(-)
Damn - should have peeked ahead...
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index 9f0e3a3..9e0e334 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -4698,41 +4698,46 @@ qemuDomainHotplugVcpus(virQEMUDriverPtr driver, { qemuDomainObjPrivatePtr priv = vm->privateData; int ret = -1; + int rc; int oldvcpus = virDomainDefGetVCpus(vm->def); - int vcpus = oldvcpus; pid_t *cpupids = NULL; - int ncpupids; + int ncpupids = 0; virCgroupPtr cgroup_vcpu = NULL; char *mem_mask = NULL; virDomainNumatuneMemMode mem_mode;
qemuDomainObjEnterMonitor(driver, vm);
- if (qemuMonitorSetCPU(priv->mon, vcpu, true) < 0) - goto exit_monitor; - - vcpus++; + rc = qemuMonitorSetCPU(priv->mon, vcpu, true);
- /* After hotplugging the CPUs we need to re-detect threads corresponding - * to the virtual CPUs. Some older versions don't provide the thread ID - * or don't have the "info cpus" command (and they don't support multiple - * CPUs anyways), so errors in the re-detection will not be treated - * fatal */ - if ((ncpupids = qemuMonitorGetCPUInfo(priv->mon, &cpupids)) <= 0) { - virResetLastError(); - ret = 0; - goto exit_monitor; - } + if (rc == 0) + ncpupids = qemuMonitorGetCPUInfo(priv->mon, &cpupids);
if (qemuDomainObjExitMonitor(driver, vm) < 0)
This could overwrite a qemuMonitorGetCPUInfo error, but that's no different than what we currently could do (though we don't ResetLast on GetCPUInfo failure).
goto cleanup;
But if we leave here, then we know we're still active thus the adjustment from cleanup to call SetVcpus only if still active is satisfied...
- if (ncpupids != vcpus) { + virDomainAuditVcpu(vm, oldvcpus, oldvcpus + 1, "update", rc == 0);
If the ExitMonitor fails, then we won't Audit
+ + if (rc < 0) + goto cleanup; + + ignore_value(virDomainDefSetVCpus(vm->def, oldvcpus + 1));
Why not just:

    if (virDomainDefSetVCpus(vm->def, oldvcpus + 1) < 0 || ncpupids < 0)
        goto cleanup;

I would *hope* that we don't fail SetVcpus at this point - at least we can avoid the pointless ignore_value.

ACK - as long as we can audit on failure to ExitMonitor... Whether you feel the GetCPUInfo error is worth saving/sending is up to you. The last comment is purely an observation.

John
+ + if (ncpupids < 0) + goto cleanup; + + /* failure to re-detect vCPU pids after hotplug due to lack of support was + * historically deemed not fatal. We need to skip the rest of the steps though. */ + if (ncpupids == 0) { + ret = 0; + goto cleanup; + } + + if (ncpupids != oldvcpus + 1) { virReportError(VIR_ERR_INTERNAL_ERROR, _("got wrong number of vCPU pids from QEMU monitor. " "got %d, wanted %d"), - ncpupids, vcpus); - vcpus = oldvcpus; + ncpupids, oldvcpus + 1); goto cleanup; }
@@ -4781,17 +4786,8 @@ qemuDomainHotplugVcpus(virQEMUDriverPtr driver, cleanup: VIR_FREE(cpupids); VIR_FREE(mem_mask); - if (virDomainObjIsActive(vm) && - virDomainDefSetVCpus(vm->def, vcpus) < 0) - ret = -1; - virDomainAuditVcpu(vm, oldvcpus, vcpus, "update", ret == 0); - if (cgroup_vcpu) - virCgroupFree(&cgroup_vcpu); + virCgroupFree(&cgroup_vcpu); return ret; - - exit_monitor: - ignore_value(qemuDomainObjExitMonitor(driver, vm)); - goto cleanup; }
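The reviewer's suggested restructuring - folding the SetVCpus result and a negative ncpupids into one check instead of an ignore_value() followed by a separate test - can be sketched with hypothetical stand-ins (these are not the actual libvirt functions):

```c
#include <assert.h>
#include <stdbool.h>

/* Simulated result of virDomainDefSetVCpus; the real call adjusts the
 * domain definition and can in principle report an error. */
static int set_vcpus_ret;

static int mock_set_vcpus(int count)
{
    (void)count;
    return set_vcpus_ret;
}

/* The suggested shape: one combined failure check after hotplug. */
static bool hotplug_post_checks(int oldvcpus, int ncpupids)
{
    if (mock_set_vcpus(oldvcpus + 1) < 0 || ncpupids < 0)
        return false;   /* would be 'goto cleanup' in the driver */
    return true;
}
```

Either failure path (SetVCpus error or failed pid re-detection) now funnels through the same branch.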

On Mon, Nov 23, 2015 at 16:39:54 -0500, John Ferlan wrote:
On 11/20/2015 10:22 AM, Peter Krempa wrote:
Refactor the code flow so that 'exit_monitor:' can be removed.
This patch also moves the auditing and setting of the new vCPU count right to the place where the hotplug happens, since it's possible that the hotplug succeeds and adds a cpu while other stuff fails.
Lastly, failures of qemuMonitorGetCPUInfo are now reported rather than ignored. The function returns 0 if it "successfully" detected 0 threads.
---
 src/qemu/qemu_driver.c | 54 +++++++++++++++++++++++---------------------------
 1 file changed, 25 insertions(+), 29 deletions(-)
Damn - should have peeked ahead...
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index 9f0e3a3..9e0e334 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -4698,41 +4698,46 @@ qemuDomainHotplugVcpus(virQEMUDriverPtr driver, { qemuDomainObjPrivatePtr priv = vm->privateData; int ret = -1; + int rc; int oldvcpus = virDomainDefGetVCpus(vm->def); - int vcpus = oldvcpus; pid_t *cpupids = NULL; - int ncpupids; + int ncpupids = 0; virCgroupPtr cgroup_vcpu = NULL; char *mem_mask = NULL; virDomainNumatuneMemMode mem_mode;
qemuDomainObjEnterMonitor(driver, vm);
- if (qemuMonitorSetCPU(priv->mon, vcpu, true) < 0) - goto exit_monitor; - - vcpus++; + rc = qemuMonitorSetCPU(priv->mon, vcpu, true);
- /* After hotplugging the CPUs we need to re-detect threads corresponding - * to the virtual CPUs. Some older versions don't provide the thread ID - * or don't have the "info cpus" command (and they don't support multiple - * CPUs anyways), so errors in the re-detection will not be treated - * fatal */ - if ((ncpupids = qemuMonitorGetCPUInfo(priv->mon, &cpupids)) <= 0) { - virResetLastError(); - ret = 0; - goto exit_monitor; - } + if (rc == 0) + ncpupids = qemuMonitorGetCPUInfo(priv->mon, &cpupids);
if (qemuDomainObjExitMonitor(driver, vm) < 0)
This could overwrite a qemuMonitorGetCPUInfo error, but that's no
Since that would happen only if the domain managed to die while we were in the monitor, the reported error will probably be even more descriptive than the IO error from sending the message to the VM. By the way, qemuDomainObjExitMonitor() does not overwrite the error in this case, since it checks whether an error is already reported at that point, so it will actually report the error from qemuMonitorGetCPUInfo if it happened.
different than what we currently could do (though we don't ResetLast on GetCPUInfo failure).
goto cleanup;
But if we leave here, then we know we're still active thus the adjustment from cleanup to call SetVcpus only if still active is satisfied...
- if (ncpupids != vcpus) { + virDomainAuditVcpu(vm, oldvcpus, oldvcpus + 1, "update", rc == 0);
If the ExitMonitor fails, then we won't Audit
I'm not entirely sure it would make sense to audit anything at the point when the VM is dead. The audit log already contains the entry that the VM died and freed all resources so auditing that the vCPU count change failed is somewhat irrelevant IMO.
+ + if (rc < 0) + goto cleanup; + + ignore_value(virDomainDefSetVCpus(vm->def, oldvcpus + 1));
Why not just :
if (virDomainDefSetVCpus(vm->def, oldvcpus + 1) < 0 || ncpupids < 0)
    goto cleanup;
I would *hope* that we don't fail SetVcpus at this point - at least we can avoid the pointless ignore_value
Currently, and also with the planned changes, virDomainDefSetVcpus can't fail at this point, since we already checked that the requested vCPU count is within bounds.
ACK - as long as we can audit on failure to ExitMonitor... Whether you feel the GetCPUInfo error is worth saving/sending is up to you. The last comment is purely an observation.
John

Refactor the code flow so that 'exit_monitor:' can be removed.

This patch moves the auditing functions into places where it's certain that hotunplug was or was not successful and reports errors from qemuMonitorGetCPUInfo properly.
---
 src/qemu/qemu_driver.c | 50 +++++++++++++++-----------------------------------
 1 file changed, 15 insertions(+), 35 deletions(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 9e0e334..614c7f8 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -4798,48 +4798,36 @@ qemuDomainHotunplugVcpus(virQEMUDriverPtr driver,
 {
     qemuDomainObjPrivatePtr priv = vm->privateData;
     int ret = -1;
+    int rc;
     int oldvcpus = virDomainDefGetVCpus(vm->def);
-    int vcpus = oldvcpus;
     pid_t *cpupids = NULL;
-    int ncpupids;
+    int ncpupids = 0;

     qemuDomainObjEnterMonitor(driver, vm);

-    if (qemuMonitorSetCPU(priv->mon, vcpu, false) < 0)
-        goto exit_monitor;
+    rc = qemuMonitorSetCPU(priv->mon, vcpu, false);

-    vcpus--;
+    if (rc == 0)
+        ncpupids = qemuMonitorGetCPUInfo(priv->mon, &cpupids);

-    /* After hotplugging the CPUs we need to re-detect threads corresponding
-     * to the virtual CPUs. Some older versions don't provide the thread ID
-     * or don't have the "info cpus" command (and they don't support multiple
-     * CPUs anyways), so errors in the re-detection will not be treated
-     * fatal */
-    if ((ncpupids = qemuMonitorGetCPUInfo(priv->mon, &cpupids)) <= 0) {
-        virResetLastError();
-        ret = 0;
-        goto exit_monitor;
-    }
     if (qemuDomainObjExitMonitor(driver, vm) < 0)
         goto cleanup;

-    /* check if hotunplug has failed */
-    if (ncpupids == oldvcpus) {
-        virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s",
-                       _("qemu didn't unplug the vCPUs properly"));
-        vcpus = oldvcpus;
+    virDomainAuditVcpu(vm, oldvcpus, oldvcpus - 1, "update",
+                       rc == 0 && ncpupids == oldvcpus -1);
+
+    if (rc < 0 || ncpupids < 0)
         goto cleanup;
-    }

-    if (ncpupids != vcpus) {
-        virReportError(VIR_ERR_INTERNAL_ERROR,
-                       _("got wrong number of vCPU pids from QEMU monitor. "
-                         "got %d, wanted %d"),
-                       ncpupids, vcpus);
-        vcpus = oldvcpus;
+    /* check if hotunplug has failed */
+    if (ncpupids != oldvcpus - 1) {
+        virReportError(VIR_ERR_OPERATION_UNSUPPORTED,
+                       _("qemu didn't unplug vCPU '%u' properly"), vcpu);
         goto cleanup;
     }

+    ignore_value(virDomainDefSetVCpus(vm->def, oldvcpus - 1));
+
     if (qemuDomainDelCgroupForThread(priv->cgroup,
                                      VIR_CGROUP_THREAD_VCPU, vcpu) < 0)
         goto cleanup;

@@ -4858,15 +4846,7 @@ qemuDomainHotunplugVcpus(virQEMUDriverPtr driver,
 cleanup:
     VIR_FREE(cpupids);
-    if (virDomainObjIsActive(vm) &&
-        virDomainDefSetVCpus(vm->def, vcpus) < 0)
-        ret = -1;
-    virDomainAuditVcpu(vm, oldvcpus, vcpus, "update", ret == 0);
     return ret;
-
- exit_monitor:
-    ignore_value(qemuDomainObjExitMonitor(driver, vm));
-    goto cleanup;
 }
--
2.6.2

On 11/20/2015 10:22 AM, Peter Krempa wrote:
Refactor the code flow so that 'exit_monitor:' can be removed.
This patch moves the auditing functions into places where it's certain that hotunplug was or was not successful and reports errors from qemuMonitorGetCPUInfo properly.
---
 src/qemu/qemu_driver.c | 50 +++++++++++++++-----------------------------------
 1 file changed, 15 insertions(+), 35 deletions(-)
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index 9e0e334..614c7f8 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -4798,48 +4798,36 @@ qemuDomainHotunplugVcpus(virQEMUDriverPtr driver, { qemuDomainObjPrivatePtr priv = vm->privateData; int ret = -1; + int rc; int oldvcpus = virDomainDefGetVCpus(vm->def); - int vcpus = oldvcpus; pid_t *cpupids = NULL; - int ncpupids; + int ncpupids = 0;
qemuDomainObjEnterMonitor(driver, vm);
- if (qemuMonitorSetCPU(priv->mon, vcpu, false) < 0) - goto exit_monitor; + rc = qemuMonitorSetCPU(priv->mon, vcpu, false);
- vcpus--; + if (rc == 0) + ncpupids = qemuMonitorGetCPUInfo(priv->mon, &cpupids);
- /* After hotplugging the CPUs we need to re-detect threads corresponding - * to the virtual CPUs. Some older versions don't provide the thread ID - * or don't have the "info cpus" command (and they don't support multiple - * CPUs anyways), so errors in the re-detection will not be treated - * fatal */ - if ((ncpupids = qemuMonitorGetCPUInfo(priv->mon, &cpupids)) <= 0) { - virResetLastError(); - ret = 0; - goto exit_monitor; - } if (qemuDomainObjExitMonitor(driver, vm) < 0) goto cleanup;
- /* check if hotunplug has failed */ - if (ncpupids == oldvcpus) { - virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s", - _("qemu didn't unplug the vCPUs properly")); - vcpus = oldvcpus; + virDomainAuditVcpu(vm, oldvcpus, oldvcpus - 1, "update", + rc == 0 && ncpupids == oldvcpus -1); +
Similar comments to 24 w/r/t ExitMonitor failure and the lack of an Audit, and the possibility of overwriting the last message from a GetCPUInfo failure.

w/r/t "&& ncpupids == oldvcpus - 1" in the audit message - if ncpupids == 0 here, then unless we've dropped to zero vCPUs, this will always trip strangely. IOW: the ncpupids == 0 case has been lost...
+ if (rc < 0 || ncpupids < 0) goto cleanup; - }
- if (ncpupids != vcpus) { - virReportError(VIR_ERR_INTERNAL_ERROR, - _("got wrong number of vCPU pids from QEMU monitor. " - "got %d, wanted %d"), - ncpupids, vcpus); - vcpus = oldvcpus; + /* check if hotunplug has failed */ + if (ncpupids != oldvcpus - 1) { + virReportError(VIR_ERR_OPERATION_UNSUPPORTED, + _("qemu didn't unplug vCPU '%u' properly"), vcpu); goto cleanup; }
+ ignore_value(virDomainDefSetVCpus(vm->def, oldvcpus - 1)); +
Again - I would hope it wouldn't fail, and I'm not sure why ignore_value should be used... I would think we'd have, after the (rc < 0 || ncpupids < 0) check (e.g. similar to the Hotplug order):

    if (virDomainDefSetVCpus(vm->def, oldvcpus - 1) < 0)
        goto cleanup;

    /* Gratuitous comment here ... */
    if (ncpupids == 0) {
        ret = 0;
        goto cleanup;
    }

I'm sure you'll figure out a better order for an ACK...

John
if (qemuDomainDelCgroupForThread(priv->cgroup, VIR_CGROUP_THREAD_VCPU, vcpu) < 0) goto cleanup; @@ -4858,15 +4846,7 @@ qemuDomainHotunplugVcpus(virQEMUDriverPtr driver,
cleanup: VIR_FREE(cpupids); - if (virDomainObjIsActive(vm) && - virDomainDefSetVCpus(vm->def, vcpus) < 0) - ret = -1; - virDomainAuditVcpu(vm, oldvcpus, vcpus, "update", ret == 0); return ret; - - exit_monitor: - ignore_value(qemuDomainObjExitMonitor(driver, vm)); - goto cleanup; }
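The ordering debated above - error checks first, then adjusting the vCPU count, then an early successful return when no thread IDs were reported - can be sketched stand-alone. The helpers below are simplified stand-ins, not the actual driver code:

```c
#include <assert.h>

/* Hypothetical stand-in for virDomainDefSetVCpus; assumed not to fail here
 * since the requested count was already validated. */
static int mock_set_vcpus(int count)
{
    (void)count;
    return 0;
}

/* Return values mirror the driver's flow:
 *   -1: failure ('goto cleanup' with ret == -1)
 *    0: success, but skip the per-thread teardown (no thread IDs known)
 *    1: success, proceed with cgroup/pinning teardown */
static int hotunplug_post_checks(int oldvcpus, int rc, int ncpupids)
{
    if (rc < 0 || ncpupids < 0)
        return -1;                  /* monitor command or pid re-detection failed */

    if (mock_set_vcpus(oldvcpus - 1) < 0)
        return -1;

    /* Older qemu may not report thread IDs at all; historically non-fatal,
     * but the remaining per-thread steps must be skipped. */
    if (ncpupids == 0)
        return 0;

    return 1;
}
```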

To allow collecting all relevant data at one place let's make def->vcpus a structure and then we can start moving stuff into it. --- src/conf/domain_conf.c | 55 ++++++++++++++++++++++++++++++++++++++++++++------ src/conf/domain_conf.h | 10 ++++++++- 2 files changed, 58 insertions(+), 7 deletions(-) diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c index 897b643..631e1db 100644 --- a/src/conf/domain_conf.c +++ b/src/conf/domain_conf.c @@ -1424,20 +1424,38 @@ void virDomainLeaseDefFree(virDomainLeaseDefPtr def) } +static void +virDomainVCpuInfoClear(virDomainVCpuInfoPtr info) +{ + if (!info) + return; +} + + int virDomainDefSetVCpusMax(virDomainDefPtr def, unsigned int vcpus) { + size_t i; + + if (def->maxvcpus == vcpus) + return 0; + if (vcpus == 0) { virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s", _("domain config can't have 0 maximum vCPUs")); return -1; } - if (vcpus < def->vcpus) - virDomainDefSetVCpus(def, vcpus); + if (def->maxvcpus < vcpus) { + if (VIR_EXPAND_N(def->vcpus, def->maxvcpus, vcpus - def->maxvcpus) < 0) + return -1; + } else { + for (i = vcpus; i < def->maxvcpus; i++) + virDomainVCpuInfoClear(&def->vcpus[i]); - def->maxvcpus = vcpus; + VIR_SHRINK_N(def->vcpus, def->maxvcpus, def->maxvcpus - vcpus); + } return 0; } @@ -1446,7 +1464,14 @@ virDomainDefSetVCpusMax(virDomainDefPtr def, bool virDomainDefHasVCpusOffline(const virDomainDef *def) { - return def->vcpus < def->maxvcpus; + size_t i; + + for (i = 0; i < def->maxvcpus; i++) { + if (!def->vcpus[i].online) + return true; + } + + return false; } @@ -1461,6 +1486,8 @@ int virDomainDefSetVCpus(virDomainDefPtr def, unsigned int vcpus) { + size_t i; + if (vcpus > def->maxvcpus) { virReportError(VIR_ERR_CONFIG_UNSUPPORTED, _("maxvcpus must not be less than current vcpus (%u < %zu)"), @@ -1468,7 +1495,11 @@ virDomainDefSetVCpus(virDomainDefPtr def, return -1; } - def->vcpus = vcpus; + for (i = 0; i < vcpus; i++) + def->vcpus[i].online = true; + + for (i = vcpus; i < def->maxvcpus; i++) + 
def->vcpus[i].online = false; return 0; } @@ -1477,7 +1508,15 @@ virDomainDefSetVCpus(virDomainDefPtr def, unsigned int virDomainDefGetVCpus(const virDomainDef *def) { - return def->vcpus; + size_t i; + unsigned int ret = 0; + + for (i = 0; i < def->maxvcpus; i++) { + if (def->vcpus[i].online) + ret++; + } + + return ret; } @@ -2505,6 +2544,10 @@ void virDomainDefFree(virDomainDefPtr def) virDomainResourceDefFree(def->resource); + for (i = 0; i < def->maxvcpus; i++) + virDomainVCpuInfoClear(&def->vcpus[i]); + VIR_FREE(def->vcpus); + /* hostdevs must be freed before nets (or any future "intelligent * hostdevs") because the pointer to the hostdev is really * pointing into the middle of the higher level device's object, diff --git a/src/conf/domain_conf.h b/src/conf/domain_conf.h index 3490f02..68f82c6 100644 --- a/src/conf/domain_conf.h +++ b/src/conf/domain_conf.h @@ -2129,6 +2129,14 @@ struct _virDomainCputune { virDomainThreadSchedParamPtr iothreadsched; }; + +typedef struct _virDomainVCpuInfo virDomainVCpuInfo; +typedef virDomainVCpuInfo *virDomainVCpuInfoPtr; + +struct _virDomainVCpuInfo { + bool online; +}; + typedef struct _virDomainBlkiotune virDomainBlkiotune; typedef virDomainBlkiotune *virDomainBlkiotunePtr; @@ -2202,7 +2210,7 @@ struct _virDomainDef { virDomainBlkiotune blkio; virDomainMemtune mem; - unsigned int vcpus; + virDomainVCpuInfoPtr vcpus; size_t maxvcpus; int placement_mode; virBitmapPtr cpumask; -- 2.6.2
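The semantics of the reworked accessors - the current vCPU count is now derived by counting online entries, and SetVCpus marks the first N entries online - can be illustrated with a simplified stand-in struct (not the actual libvirt types):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-in mirroring the new per-vCPU bookkeeping. */
struct vcpu_info {
    bool online;
};

/* Mirrors virDomainDefSetVCpus: the first 'vcpus' entries go online,
 * the rest offline. */
static int set_vcpus(struct vcpu_info *arr, size_t maxvcpus, unsigned int vcpus)
{
    size_t i;

    if (vcpus > maxvcpus)
        return -1;      /* the real code reports VIR_ERR_CONFIG_UNSUPPORTED */

    for (i = 0; i < vcpus; i++)
        arr[i].online = true;
    for (i = vcpus; i < maxvcpus; i++)
        arr[i].online = false;
    return 0;
}

/* Mirrors virDomainDefGetVCpus: the current count is no longer stored
 * separately but derived from the online flags. */
static unsigned int get_vcpus(const struct vcpu_info *arr, size_t maxvcpus)
{
    size_t i;
    unsigned int ret = 0;

    for (i = 0; i < maxvcpus; i++) {
        if (arr[i].online)
            ret++;
    }
    return ret;
}
```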

On 11/20/2015 10:22 AM, Peter Krempa wrote:
To allow collecting all relevant data at one place let's make def->vcpus a structure and then we can start moving stuff into it. --- src/conf/domain_conf.c | 55 ++++++++++++++++++++++++++++++++++++++++++++------ src/conf/domain_conf.h | 10 ++++++++- 2 files changed, 58 insertions(+), 7 deletions(-)
Well I have to assume at this point neither of us builds w/ bhyve or vz/parallels enabled! (true for me); otherwise, the build would have gone down in a flaming mess.
diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c index 897b643..631e1db 100644 --- a/src/conf/domain_conf.c +++ b/src/conf/domain_conf.c @@ -1424,20 +1424,38 @@ void virDomainLeaseDefFree(virDomainLeaseDefPtr def) }
+static void +virDomainVCpuInfoClear(virDomainVCpuInfoPtr info)'
Use of "Vcpus" or "VCPUs" is preferred.
+{ + if (!info) + return; +} + + int virDomainDefSetVCpusMax(virDomainDefPtr def, unsigned int vcpus)
Hmmmm. check that earlier thought... maybe "newvcpus"? I dunno, my eyes are getting tired though!
{ + size_t i; + + if (def->maxvcpus == vcpus) + return 0; + if (vcpus == 0) { virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s", _("domain config can't have 0 maximum vCPUs")); return -1; }
- if (vcpus < def->vcpus) - virDomainDefSetVCpus(def, vcpus); + if (def->maxvcpus < vcpus) { + if (VIR_EXPAND_N(def->vcpus, def->maxvcpus, vcpus - def->maxvcpus) < 0) + return -1; + } else { + for (i = vcpus; i < def->maxvcpus; i++) + virDomainVCpuInfoClear(&def->vcpus[i]);
- def->maxvcpus = vcpus; + VIR_SHRINK_N(def->vcpus, def->maxvcpus, def->maxvcpus - vcpus); + }
return 0; } @@ -1446,7 +1464,14 @@ virDomainDefSetVCpusMax(virDomainDefPtr def, bool virDomainDefHasVCpusOffline(const virDomainDef *def) { - return def->vcpus < def->maxvcpus; + size_t i; + + for (i = 0; i < def->maxvcpus; i++) {
Should there be an accessor to def->maxvcpus?
+ if (!def->vcpus[i].online) + return true; + } + + return false; }
@@ -1461,6 +1486,8 @@ int virDomainDefSetVCpus(virDomainDefPtr def, unsigned int vcpus) { + size_t i; + if (vcpus > def->maxvcpus) { virReportError(VIR_ERR_CONFIG_UNSUPPORTED, _("maxvcpus must not be less than current vcpus (%u < %zu)"), @@ -1468,7 +1495,11 @@ virDomainDefSetVCpus(virDomainDefPtr def, return -1; }
- def->vcpus = vcpus; + for (i = 0; i < vcpus; i++) + def->vcpus[i].online = true; + + for (i = vcpus; i < def->maxvcpus; i++)
Should there be an accessor to def->maxvcpus? That'd be for both uses.
+ def->vcpus[i].online = false;
return 0; } @@ -1477,7 +1508,15 @@ virDomainDefSetVCpus(virDomainDefPtr def, unsigned int virDomainDefGetVCpus(const virDomainDef *def) { - return def->vcpus; + size_t i; + unsigned int ret = 0; + + for (i = 0; i < def->maxvcpus; i++) {
Should there be an accessor to "def->maxvcpus"?

ACK with some adjustments... More importantly the "VCpus" change, but less so the accessor to ->maxvcpus.

John
+ if (def->vcpus[i].online) + ret++; + } + + return ret; }
@@ -2505,6 +2544,10 @@ void virDomainDefFree(virDomainDefPtr def)
virDomainResourceDefFree(def->resource);
+ for (i = 0; i < def->maxvcpus; i++) + virDomainVCpuInfoClear(&def->vcpus[i]); + VIR_FREE(def->vcpus); + /* hostdevs must be freed before nets (or any future "intelligent * hostdevs") because the pointer to the hostdev is really * pointing into the middle of the higher level device's object, diff --git a/src/conf/domain_conf.h b/src/conf/domain_conf.h index 3490f02..68f82c6 100644 --- a/src/conf/domain_conf.h +++ b/src/conf/domain_conf.h @@ -2129,6 +2129,14 @@ struct _virDomainCputune { virDomainThreadSchedParamPtr iothreadsched; };
+ +typedef struct _virDomainVCpuInfo virDomainVCpuInfo; +typedef virDomainVCpuInfo *virDomainVCpuInfoPtr; + +struct _virDomainVCpuInfo { + bool online; +}; + typedef struct _virDomainBlkiotune virDomainBlkiotune; typedef virDomainBlkiotune *virDomainBlkiotunePtr;
@@ -2202,7 +2210,7 @@ struct _virDomainDef { virDomainBlkiotune blkio; virDomainMemtune mem;
- unsigned int vcpus; + virDomainVCpuInfoPtr vcpus; size_t maxvcpus; int placement_mode; virBitmapPtr cpumask;

On 11/23/2015 05:22 PM, John Ferlan wrote:
On 11/20/2015 10:22 AM, Peter Krempa wrote:
To allow collecting all relevant data at one place let's make def->vcpus a structure and then we can start moving stuff into it. --- src/conf/domain_conf.c | 55 ++++++++++++++++++++++++++++++++++++++++++++------ src/conf/domain_conf.h | 10 ++++++++- 2 files changed, 58 insertions(+), 7 deletions(-)
[...]
diff --git a/src/conf/domain_conf.h b/src/conf/domain_conf.h index 3490f02..68f82c6 100644 --- a/src/conf/domain_conf.h +++ b/src/conf/domain_conf.h @@ -2129,6 +2129,14 @@ struct _virDomainCputune { virDomainThreadSchedParamPtr iothreadsched; };
+ +typedef struct _virDomainVCpuInfo virDomainVCpuInfo; +typedef virDomainVCpuInfo *virDomainVCpuInfoPtr; + +struct _virDomainVCpuInfo { + bool online; +};
Missed noting these - 'VcpuInfo' or 'VCPUInfo', not 'VCpuInfo'.

John
+ typedef struct _virDomainBlkiotune virDomainBlkiotune; typedef virDomainBlkiotune *virDomainBlkiotunePtr;
@@ -2202,7 +2210,7 @@ struct _virDomainDef { virDomainBlkiotune blkio; virDomainMemtune mem;
- unsigned int vcpus; + virDomainVCpuInfoPtr vcpus; size_t maxvcpus; int placement_mode; virBitmapPtr cpumask;
-- libvir-list mailing list libvir-list@redhat.com https://www.redhat.com/mailman/listinfo/libvir-list

Extract the checking code into a separate function and prepare the infrastructure for checking the new structure type. --- src/conf/domain_conf.c | 41 ++++++++++++++++++++++++++++++----------- 1 file changed, 30 insertions(+), 11 deletions(-) diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c index 631e1db..66fc6d3 100644 --- a/src/conf/domain_conf.c +++ b/src/conf/domain_conf.c @@ -17835,6 +17835,35 @@ virDomainMemoryDefCheckABIStability(virDomainMemoryDefPtr src, } +static bool +virDomainDefVcpuCheckAbiStability(virDomainDefPtr src, + virDomainDefPtr dst) +{ + size_t i; + + if (src->maxvcpus != dst->maxvcpus) { + virReportError(VIR_ERR_CONFIG_UNSUPPORTED, + _("Target domain vCPU max %zu does not match source %zu"), + dst->maxvcpus, src->maxvcpus); + return false; + } + + for (i = 0; i < src->maxvcpus; i++) { + virDomainVCpuInfoPtr svcpu = &src->vcpus[i]; + virDomainVCpuInfoPtr dvcpu = &dst->vcpus[i]; + + if (svcpu->online != dvcpu->online) { + virReportError(VIR_ERR_CONFIG_UNSUPPORTED, + _("State of vCPU '%zu' differs between source and " + "destination definitions"), i); + return false; + } + } + + return true; +} + + /* This compares two configurations and looks for any differences * which will affect the guest ABI. 
This is primarily to allow * validation of custom XML config passed in during migration @@ -17908,18 +17937,8 @@ virDomainDefCheckABIStability(virDomainDefPtr src, goto error; } - if (virDomainDefGetVCpus(src) != virDomainDefGetVCpus(dst)) { - virReportError(VIR_ERR_CONFIG_UNSUPPORTED, - _("Target domain vCPU count %d does not match source %d"), - virDomainDefGetVCpus(dst), virDomainDefGetVCpus(src)); + if (!virDomainDefVcpuCheckAbiStability(src, dst)) goto error; - } - if (virDomainDefGetVCpusMax(src) != virDomainDefGetVCpusMax(dst)) { - virReportError(VIR_ERR_CONFIG_UNSUPPORTED, - _("Target domain vCPU max %d does not match source %d"), - virDomainDefGetVCpusMax(dst), virDomainDefGetVCpusMax(src)); - goto error; - } if (src->iothreads != dst->iothreads) { virReportError(VIR_ERR_CONFIG_UNSUPPORTED, -- 2.6.2

On 11/20/2015 10:22 AM, Peter Krempa wrote:
Extract the checking code into a separate function and prepare the infrastructure for checking the new structure type. --- src/conf/domain_conf.c | 41 ++++++++++++++++++++++++++++++----------- 1 file changed, 30 insertions(+), 11 deletions(-)
diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c index 631e1db..66fc6d3 100644 --- a/src/conf/domain_conf.c +++ b/src/conf/domain_conf.c @@ -17835,6 +17835,35 @@ virDomainMemoryDefCheckABIStability(virDomainMemoryDefPtr src, }
+static bool +virDomainDefVcpuCheckAbiStability(virDomainDefPtr src, + virDomainDefPtr dst)
I see use of "Vcpu" here...
+{ + size_t i; + + if (src->maxvcpus != dst->maxvcpus) {
Should these be accessors? Like they were in the moved code?
+ virReportError(VIR_ERR_CONFIG_UNSUPPORTED, + _("Target domain vCPU max %zu does not match source %zu"), + dst->maxvcpus, src->maxvcpus); + return false; + } + + for (i = 0; i < src->maxvcpus; i++) {
Allowing for this to be an accessor/local too.
+ virDomainVCpuInfoPtr svcpu = &src->vcpus[i]; + virDomainVCpuInfoPtr dvcpu = &dst->vcpus[i]; + + if (svcpu->online != dvcpu->online) { + virReportError(VIR_ERR_CONFIG_UNSUPPORTED, + _("State of vCPU '%zu' differs between source and " + "destination definitions"), i); + return false; + }
This changes the original code/design just slightly, from a counted number of online vCPUs to requiring the same order between source/dest.

If the current algorithm of using the first 'current' vcpus doesn't change, then I foresee perhaps an interesting issue/question. Say we start with 2 current (0,1) and 4 total (0,1,2,3). If we allow someone to start/hotplug #3, then a migration occurs. Would the "target" start "0,1,2" or "0,1,3"? If I think about the current algorithm, it gets the # of vCPUs "current" (virDomainDefGetVCpus) and then sets online in order 0, 1, 2 (virDomainDefSetVCpus). That causes a failure for this algorithm, but should it?

Again, this is only an issue if your ultimate goal is to allow the user to choose which vCPUs to place online or offline. I haven't looked that far forward yet.

Conditional ACK depending on response.

John
+    }
+
+    return true;
+}
+
+
 /* This compares two configurations and looks for any differences
  * which will affect the guest ABI. This is primarily to allow
  * validation of custom XML config passed in during migration
@@ -17908,18 +17937,8 @@ virDomainDefCheckABIStability(virDomainDefPtr src,
         goto error;
     }
-    if (virDomainDefGetVCpus(src) != virDomainDefGetVCpus(dst)) {
-        virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
-                       _("Target domain vCPU count %d does not match source %d"),
-                       virDomainDefGetVCpus(dst), virDomainDefGetVCpus(src));
+    if (!virDomainDefVcpuCheckAbiStability(src, dst))
         goto error;
-    }
-
-    if (virDomainDefGetVCpusMax(src) != virDomainDefGetVCpusMax(dst)) {
-        virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
-                       _("Target domain vCPU max %d does not match source %d"),
-                       virDomainDefGetVCpusMax(dst), virDomainDefGetVCpusMax(src));
-        goto error;
-    }
     if (src->iothreads != dst->iothreads) {
         virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
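To illustrate the behavioural difference discussed in the review, here is a minimal standalone sketch (the struct and function names are simplified stand-ins, not the real libvirt types): a count-based check accepts any source/destination pair with the same number of online vCPUs, while the per-index check of the new code also rejects pairs whose online *sets* differ, e.g. {0,1,3} vs {0,1,2}.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-ins for virDomainDef/virDomainVCpuInfo */
typedef struct { bool online; } VcpuInfo;
typedef struct { size_t maxvcpus; VcpuInfo vcpus[8]; } DomainDef;

/* Old approach: compare only the number of online vCPUs */
static bool
check_abi_by_count(const DomainDef *src, const DomainDef *dst)
{
    size_t i, s = 0, d = 0;

    if (src->maxvcpus != dst->maxvcpus)
        return false;
    for (i = 0; i < src->maxvcpus; i++) {
        s += src->vcpus[i].online;
        d += dst->vcpus[i].online;
    }
    return s == d;
}

/* New approach: every index must have the same online state */
static bool
check_abi_per_vcpu(const DomainDef *src, const DomainDef *dst)
{
    size_t i;

    if (src->maxvcpus != dst->maxvcpus)
        return false;
    for (i = 0; i < src->maxvcpus; i++) {
        if (src->vcpus[i].online != dst->vcpus[i].online)
            return false;
    }
    return true;
}
```

With vCPUs {0,1,3} online on the source and {0,1,2} online on the destination, the count-based check passes (3 == 3) while the per-index check fails at index 2, which is exactly the migration scenario John raises.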

On Mon, Nov 23, 2015 at 17:41:05 -0500, John Ferlan wrote:
On 11/20/2015 10:22 AM, Peter Krempa wrote:
Extract the checking code into a separate function and prepare the
infrastructure for checking the new structure type.
---
 src/conf/domain_conf.c | 41 ++++++++++++++++++++++++++++++-----------
 1 file changed, 30 insertions(+), 11 deletions(-)
diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index 631e1db..66fc6d3 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -17835,6 +17835,35 @@ virDomainMemoryDefCheckABIStability(virDomainMemoryDefPtr src,
 }
+static bool
+virDomainDefVcpuCheckAbiStability(virDomainDefPtr src,
+                                  virDomainDefPtr dst)
I see use of "Vcpu" here...
+{
+    size_t i;
+
+    if (src->maxvcpus != dst->maxvcpus) {
Should these be accessors? Like they were in the moved code?
Hmm, yes in this case they should be used. I reordered the patches.
+        virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
+                       _("Target domain vCPU max %zu does not match source %zu"),
+                       dst->maxvcpus, src->maxvcpus);
+        return false;
+    }
+
+    for (i = 0; i < src->maxvcpus; i++) {
Allowing for this to be an accessor/local too.
+        virDomainVCpuInfoPtr svcpu = &src->vcpus[i];
+        virDomainVCpuInfoPtr dvcpu = &dst->vcpus[i];
+
+        if (svcpu->online != dvcpu->online) {
+            virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
+                           _("State of vCPU '%zu' differs between source and "
+                             "destination definitions"), i);
+            return false;
+        }
This changes the original code/design slightly: instead of comparing a counted number of online vCPUs, it requires the same online/offline order between source and destination. If the current algorithm of onlining the first 'current' vCPUs doesn't change, then I foresee perhaps an interesting issue/question.
Say we start with 2 current (0,1) and 4 total (0,1,2,3). If we allow someone to start/hotplug #3, then a migration occurs. Would the "target" start "0,1,2" or "0,1,3"?
It should be 0,1,3. Later I'll introduce XML elements that will track individual vCPUs and allow their state to be transported across to the destination. Currently this is just preparation code for that, and since there currently is no way to create disjoint vCPU indexes, this basically does the same as the previous code, but in a less optimal fashion. I can extract just the code as-is and bump the algorithm change to the patch that will actually be adding this.
If I think about the current algorithm, it's: get the number of "current" vCPUs (virDomainDefGetVCpus) and then set them online in order 0, 1, 2 (virDomainDefSetVCpus).
That causes a failure for this algorithm, but should it? Again, this is only an issue if your ultimate goal is to allow the user to choose which vCPUs to place online or offline. I haven't looked that far forward yet.
Looking forward wouldn't really help. The patches don't exist yet ;)
Conditional ACK depending on response.
Peter

Once more stuff will be moved into the vCPU data structure it will be
necessary to get a specific one on some occasions. Add a helper that
will simplify this task.
---
 src/conf/domain_conf.c   | 15 +++++++++++++++
 src/conf/domain_conf.h   |  4 ++++
 src/libvirt_private.syms |  1 +
 3 files changed, 20 insertions(+)

diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index 66fc6d3..f4b4700 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -1520,6 +1520,21 @@ virDomainDefGetVCpus(const virDomainDef *def)
 }


+virDomainVCpuInfoPtr
+virDomainDefGetVCpu(virDomainDefPtr def,
+                    unsigned int vcpu)
+{
+    if (vcpu > def->maxvcpus) {
+        virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
+                       _("vCPU '%u' is not present in domain definition"),
+                       vcpu);
+        return NULL;
+    }
+
+    return &def->vcpus[vcpu];
+}
+
+
 virDomainDiskDefPtr
 virDomainDiskDefNew(virDomainXMLOptionPtr xmlopt)
 {
diff --git a/src/conf/domain_conf.h b/src/conf/domain_conf.h
index 68f82c6..7c9457a 100644
--- a/src/conf/domain_conf.h
+++ b/src/conf/domain_conf.h
@@ -2135,6 +2135,8 @@ typedef virDomainVCpuInfo *virDomainVCpuInfoPtr;
 struct _virDomainVCpuInfo {
     bool online;
+
+    virBitmapPtr cpumask;
 };

 typedef struct _virDomainBlkiotune virDomainBlkiotune;
@@ -2338,6 +2340,8 @@ bool virDomainDefHasVCpusOffline(const virDomainDef *def);
 unsigned int virDomainDefGetVCpusMax(const virDomainDef *def);
 int virDomainDefSetVCpus(virDomainDefPtr def, unsigned int vcpus);
 unsigned int virDomainDefGetVCpus(const virDomainDef *def);
+virDomainVCpuInfoPtr virDomainDefGetVCpu(virDomainDefPtr def, unsigned int vcpu)
+    ATTRIBUTE_RETURN_CHECK;

 unsigned long long virDomainDefGetMemoryInitial(const virDomainDef *def);
 void virDomainDefSetMemoryTotal(virDomainDefPtr def, unsigned long long size);
diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms
index d2c4945..449caf6 100644
--- a/src/libvirt_private.syms
+++ b/src/libvirt_private.syms
@@ -217,6 +217,7 @@ virDomainDefGetDefaultEmulator;
 virDomainDefGetMemoryActual;
 virDomainDefGetMemoryInitial;
 virDomainDefGetSecurityLabelDef;
+virDomainDefGetVCpu;
 virDomainDefGetVCpus;
 virDomainDefGetVCpusMax;
 virDomainDefHasDeviceAddress;
--
2.6.2
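A minimal sketch of the bounds-checked accessor pattern this patch introduces, using simplified stand-in types rather than the real libvirt ones. Note the guard uses `>=`: index `maxvcpus` itself is already out of range, so the caller only has to check for NULL.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-ins for the libvirt types */
typedef struct { bool online; } VcpuInfo;
typedef struct { size_t maxvcpus; VcpuInfo *vcpus; } DomainDef;

/* Bounds-checked accessor in the style of virDomainDefGetVCpu:
 * returns NULL for an out-of-range index instead of indexing past
 * the end of the array */
static VcpuInfo *
def_get_vcpu(DomainDef *def, unsigned int vcpu)
{
    if (vcpu >= def->maxvcpus)
        return NULL;
    return &def->vcpus[vcpu];
}
```

A caller then tests the return value before dereferencing, which is what ATTRIBUTE_RETURN_CHECK enforces at compile time in the real patch.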

On 11/20/2015 10:22 AM, Peter Krempa wrote:
Once more stuff will be moved into the vCPU data structure it will be
necessary to get a specific one on some occasions. Add a helper that
will simplify this task.
---
 src/conf/domain_conf.c   | 15 +++++++++++++++
 src/conf/domain_conf.h   |  4 ++++
 src/libvirt_private.syms |  1 +
 3 files changed, 20 insertions(+)
diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index 66fc6d3..f4b4700 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -1520,6 +1520,21 @@ virDomainDefGetVCpus(const virDomainDef *def)
 }
+virDomainVCpuInfoPtr
+virDomainDefGetVCpu(virDomainDefPtr def,
Use of "VCpu" rather than "Vcpu" or "VCPU". I think this should be "GetVcpuInfo"...
+                    unsigned int vcpu)
+{
+    if (vcpu > def->maxvcpus) {
Should be an accessor for def->maxvcpus
+        virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
+                       _("vCPU '%u' is not present in domain definition"),
+                       vcpu);
+        return NULL;
+    }
+
+    return &def->vcpus[vcpu];
yeah - thinking about my comments in 27 - I can foresee a problem with the ABI check going forward.
+}
+
+
 virDomainDiskDefPtr
 virDomainDiskDefNew(virDomainXMLOptionPtr xmlopt)
 {
diff --git a/src/conf/domain_conf.h b/src/conf/domain_conf.h
index 68f82c6..7c9457a 100644
--- a/src/conf/domain_conf.h
+++ b/src/conf/domain_conf.h
@@ -2135,6 +2135,8 @@ typedef virDomainVCpuInfo *virDomainVCpuInfoPtr;
 struct _virDomainVCpuInfo {
     bool online;
+
+    virBitmapPtr cpumask;
This should be in some future patch...

ACK with changes.

John
};
 typedef struct _virDomainBlkiotune virDomainBlkiotune;
@@ -2338,6 +2340,8 @@ bool virDomainDefHasVCpusOffline(const virDomainDef *def);
 unsigned int virDomainDefGetVCpusMax(const virDomainDef *def);
 int virDomainDefSetVCpus(virDomainDefPtr def, unsigned int vcpus);
 unsigned int virDomainDefGetVCpus(const virDomainDef *def);
+virDomainVCpuInfoPtr virDomainDefGetVCpu(virDomainDefPtr def, unsigned int vcpu)
+    ATTRIBUTE_RETURN_CHECK;
 unsigned long long virDomainDefGetMemoryInitial(const virDomainDef *def);
 void virDomainDefSetMemoryTotal(virDomainDefPtr def, unsigned long long size);
diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms
index d2c4945..449caf6 100644
--- a/src/libvirt_private.syms
+++ b/src/libvirt_private.syms
@@ -217,6 +217,7 @@ virDomainDefGetDefaultEmulator;
 virDomainDefGetMemoryActual;
 virDomainDefGetMemoryInitial;
 virDomainDefGetSecurityLabelDef;
+virDomainDefGetVCpu;
 virDomainDefGetVCpus;
 virDomainDefGetVCpusMax;
 virDomainDefHasDeviceAddress;

Since commit 0c04906fa the check for priv->cgroup doesn't make sense as
the calls to virCgroupHasController return the same information. Remove
it and move its comment partially to the new check.

The already spurious check was also later copied to the iothreads code.
---
 src/qemu/qemu_cgroup.c | 18 ++----------------
 1 file changed, 2 insertions(+), 16 deletions(-)

diff --git a/src/qemu/qemu_cgroup.c b/src/qemu/qemu_cgroup.c
index a8e0b8c..3c7694a 100644
--- a/src/qemu/qemu_cgroup.c
+++ b/src/qemu/qemu_cgroup.c
@@ -998,18 +998,13 @@ qemuSetupCgroupForVcpu(virDomainObjPtr vm)
     /*
      * If CPU cgroup controller is not initialized here, then we need
      * neither period nor quota settings.  And if CPUSET controller is
-     * not initialized either, then there's nothing to do anyway.
+     * not initialized either, then there's nothing to do anyway. CPU pinning
+     * will be set via virProcessSetAffinity.
      */
     if (!virCgroupHasController(priv->cgroup, VIR_CGROUP_CONTROLLER_CPU) &&
         !virCgroupHasController(priv->cgroup, VIR_CGROUP_CONTROLLER_CPUSET))
         return 0;

-    /* We are trying to setup cgroups for CPU pinning, which can also be done
-     * with virProcessSetAffinity, thus the lack of cgroups is not fatal here.
-     */
-    if (priv->cgroup == NULL)
-        return 0;
-
     if (priv->nvcpupids == 0 || priv->vcpupids[0] == vm->pid) {
         /* If we don't know VCPU<->PID mapping or all vcpu runs in the same
          * thread, we cannot control each vcpu.
@@ -1109,9 +1104,6 @@ qemuSetupCgroupForEmulator(virDomainObjPtr vm)
         !virCgroupHasController(priv->cgroup, VIR_CGROUP_CONTROLLER_CPUSET))
         return 0;

-    if (priv->cgroup == NULL)
-        return 0; /* Not supported, so claim success */
-
     if (virCgroupNewThread(priv->cgroup, VIR_CGROUP_THREAD_EMULATOR, 0,
                            true, &cgroup_emulator) < 0)
         goto cleanup;
@@ -1182,12 +1174,6 @@ qemuSetupCgroupForIOThreads(virDomainObjPtr vm)
         !virCgroupHasController(priv->cgroup, VIR_CGROUP_CONTROLLER_CPUSET))
         return 0;

-    /* We are trying to setup cgroups for CPU pinning, which can also be done
-     * with virProcessSetAffinity, thus the lack of cgroups is not fatal here.
-     */
-    if (priv->cgroup == NULL)
-        return 0;
-
     if (virDomainNumatuneGetMode(vm->def->numa, -1, &mem_mode) == 0 &&
         mem_mode == VIR_DOMAIN_NUMATUNE_MEM_STRICT &&
         virDomainNumatuneMaybeFormatNodeset(vm->def->numa,
--
2.6.2
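The commit message argues the NULL check is subsumed by the controller checks. A tiny standalone sketch of why (the names here are stand-ins, not the real virCgroup API): if the "has controller" predicate treats a NULL group as having no controllers, then the combined "no CPU and no CPUSET controller" guard already returns early for a NULL group, making a separate NULL check dead code.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-in for a cgroup handle with a controller bitmask */
enum { CTRL_CPU = 0, CTRL_CPUSET = 1 };
typedef struct { unsigned controllers; } Cgroup;

/* Stand-in for virCgroupHasController: a NULL group has no controllers,
 * which is the property that makes the separate NULL check redundant */
static bool
has_controller(const Cgroup *group, int controller)
{
    return group && (group->controllers & (1u << controller));
}

/* Shape of the guard in qemuSetupCgroupForVcpu: "nothing to do" both
 * when the controllers are absent and when the group itself is NULL */
static bool
nothing_to_do(const Cgroup *group)
{
    return !has_controller(group, CTRL_CPU) &&
           !has_controller(group, CTRL_CPUSET);
}
```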

On 11/20/2015 10:22 AM, Peter Krempa wrote:
Since commit 0c04906fa the check for priv->cgroup doesn't make sense as the calls to virCgroupHasController return the same information. Remove it and move its comment partially to the new check.
The already spurious check was also later copied to the iothreads code.
naturally ;-)!
---
 src/qemu/qemu_cgroup.c | 18 ++----------------
 1 file changed, 2 insertions(+), 16 deletions(-)
ACK - John (losing steam... only 5 more to go....)

The vCPU threads make sense in the counterparts that set the vCPU
bandwidth/quota, not in the emulator one. The emulator tunables are set
all the time anyway.

Drop the extra check and remove the now unneeded vm argument.
---
 src/qemu/qemu_driver.c | 33 ++++++++++-----------------------
 1 file changed, 10 insertions(+), 23 deletions(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 614c7f8..8047d36 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -10312,18 +10312,15 @@ qemuSetVcpusBWLive(virDomainObjPtr vm, virCgroupPtr cgroup,
 }

 static int
-qemuSetEmulatorBandwidthLive(virDomainObjPtr vm, virCgroupPtr cgroup,
-                             unsigned long long period, long long quota)
+qemuSetEmulatorBandwidthLive(virCgroupPtr cgroup,
+                             unsigned long long period,
+                             long long quota)
 {
-    qemuDomainObjPrivatePtr priv = vm->privateData;
     virCgroupPtr cgroup_emulator = NULL;

     if (period == 0 && quota == 0)
         return 0;

-    if (priv->nvcpupids == 0 || priv->vcpupids[0] == vm->pid)
-        return 0;
-
     if (virCgroupNewThread(cgroup, VIR_CGROUP_THREAD_EMULATOR, 0,
                            false, &cgroup_emulator) < 0)
         goto cleanup;
@@ -10500,7 +10497,7 @@ qemuDomainSetSchedulerParametersFlags(virDomainPtr dom,
                                   QEMU_SCHED_MIN_PERIOD, QEMU_SCHED_MAX_PERIOD);

             if (flags & VIR_DOMAIN_AFFECT_LIVE && value_ul) {
-                if ((rc = qemuSetEmulatorBandwidthLive(vm, priv->cgroup,
+                if ((rc = qemuSetEmulatorBandwidthLive(priv->cgroup,
                                                        value_ul, 0)))
                     goto endjob;

@@ -10521,7 +10518,7 @@ qemuDomainSetSchedulerParametersFlags(virDomainPtr dom,
                                  QEMU_SCHED_MIN_QUOTA, QEMU_SCHED_MAX_QUOTA);

             if (flags & VIR_DOMAIN_AFFECT_LIVE && value_l) {
-                if ((rc = qemuSetEmulatorBandwidthLive(vm, priv->cgroup,
+                if ((rc = qemuSetEmulatorBandwidthLive(priv->cgroup,
                                                        0, value_l)))
                     goto endjob;

@@ -10636,29 +10633,19 @@ qemuGetVcpusBWLive(virDomainObjPtr vm,
 }

 static int
-qemuGetEmulatorBandwidthLive(virDomainObjPtr vm, virCgroupPtr cgroup,
-                             unsigned long long *period, long long *quota)
+qemuGetEmulatorBandwidthLive(virCgroupPtr cgroup,
+                             unsigned long long *period,
+                             long long *quota)
 {
     virCgroupPtr cgroup_emulator = NULL;
-    qemuDomainObjPrivatePtr priv = NULL;
-    int rc;
     int ret = -1;

-    priv = vm->privateData;
-    if (priv->nvcpupids == 0 || priv->vcpupids[0] == vm->pid) {
-        /* We don't create sub dir for each vcpu */
-        *period = 0;
-        *quota = 0;
-        return 0;
-    }
-
     /* get period and quota for emulator */
     if (virCgroupNewThread(cgroup, VIR_CGROUP_THREAD_EMULATOR, 0,
                            false, &cgroup_emulator) < 0)
         goto cleanup;

-    rc = qemuGetVcpuBWLive(cgroup_emulator, period, quota);
-    if (rc < 0)
+    if (qemuGetVcpuBWLive(cgroup_emulator, period, quota) < 0)
         goto cleanup;

     ret = 0;
@@ -10748,7 +10735,7 @@ qemuDomainGetSchedulerParametersFlags(virDomainPtr dom,
     }

     if (*nparams > 3 && cpu_bw_status) {
-        rc = qemuGetEmulatorBandwidthLive(vm, priv->cgroup, &emulator_period,
+        rc = qemuGetEmulatorBandwidthLive(priv->cgroup, &emulator_period,
                                           &emulator_quota);
         if (rc != 0)
             goto cleanup;
--
2.6.2

On 11/20/2015 10:22 AM, Peter Krempa wrote:
The vCPU threads make sense in the counterparts that set the vCPU bandwidth/quota, not in the emulator one. The emulator tunables are set all the time anyway.
Drop the extra check and remove the now unneeded vm argument.
---
 src/qemu/qemu_driver.c | 33 ++++++++++-----------------------
 1 file changed, 10 insertions(+), 23 deletions(-)
Seems reasonable - ACK John

Add qemuDomainHasVCpuPids to do the checking and replace in place checks
with it.
---
 src/qemu/qemu_cgroup.c  |  7 ++-----
 src/qemu/qemu_domain.c  | 15 +++++++++++++++
 src/qemu/qemu_domain.h  |  2 ++
 src/qemu/qemu_driver.c  | 29 +++++++++++++----------------
 src/qemu/qemu_process.c |  7 ++++---
 5 files changed, 36 insertions(+), 24 deletions(-)

diff --git a/src/qemu/qemu_cgroup.c b/src/qemu/qemu_cgroup.c
index 3c7694a..56c2e90 100644
--- a/src/qemu/qemu_cgroup.c
+++ b/src/qemu/qemu_cgroup.c
@@ -1005,12 +1005,9 @@ qemuSetupCgroupForVcpu(virDomainObjPtr vm)
         !virCgroupHasController(priv->cgroup, VIR_CGROUP_CONTROLLER_CPUSET))
         return 0;

-    if (priv->nvcpupids == 0 || priv->vcpupids[0] == vm->pid) {
-        /* If we don't know VCPU<->PID mapping or all vcpu runs in the same
-         * thread, we cannot control each vcpu.
-         */
+    /* If vCPU<->pid mapping is missing we can't do vCPU pinning */
+    if (!qemuDomainHasVCpuPids(vm))
         return 0;
-    }

     if (virDomainNumatuneGetMode(vm->def->numa, -1, &mem_mode) == 0 &&
         mem_mode == VIR_DOMAIN_NUMATUNE_MEM_STRICT &&
diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index 4913a3b..8a45825 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -3956,3 +3956,18 @@ qemuDomainRequiresMlock(virDomainDefPtr def)

     return false;
 }
+
+
+/**
+ * qemuDomainHasVCpuPids:
+ * @vm: Domain object
+ *
+ * Returns true if we were able to successfully detect vCPU pids for the VM.
+ */
+bool
+qemuDomainHasVCpuPids(virDomainObjPtr vm)
+{
+    qemuDomainObjPrivatePtr priv = vm->privateData;
+
+    return priv->nvcpupids > 0;
+}
diff --git a/src/qemu/qemu_domain.h b/src/qemu/qemu_domain.h
index 03cf6ef..7f2eca1 100644
--- a/src/qemu/qemu_domain.h
+++ b/src/qemu/qemu_domain.h
@@ -491,4 +491,6 @@ int qemuDomainDefValidateMemoryHotplug(const virDomainDef *def,
                                        virQEMUCapsPtr qemuCaps,
                                        const virDomainMemoryDef *mem);

+bool qemuDomainHasVCpuPids(virDomainObjPtr vm);
+
 #endif /* __QEMU_DOMAIN_H__ */
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 8047d36..4b7452c 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -1428,7 +1428,7 @@ qemuDomainHelperGetVcpus(virDomainObjPtr vm, virVcpuInfoPtr info, int maxinfo,
     size_t i, v;
     qemuDomainObjPrivatePtr priv = vm->privateData;

-    if (priv->vcpupids == NULL) {
+    if (!qemuDomainHasVCpuPids(vm)) {
         virReportError(VIR_ERR_OPERATION_INVALID,
                        "%s", _("cpu affinity is not supported"));
         return -1;
@@ -5118,7 +5118,7 @@ qemuDomainPinVcpuFlags(virDomainPtr dom,
     }

     if (def) {
-        if (priv->vcpupids == NULL) {
+        if (!qemuDomainHasVCpuPids(vm)) {
             virReportError(VIR_ERR_OPERATION_INVALID,
                            "%s", _("cpu affinity is not supported"));
             goto endjob;
@@ -10287,21 +10287,18 @@ qemuSetVcpusBWLive(virDomainObjPtr vm, virCgroupPtr cgroup,
     if (period == 0 && quota == 0)
         return 0;

-    /* If we does not know VCPU<->PID mapping or all vcpu runs in the same
-     * thread, we cannot control each vcpu. So we only modify cpu bandwidth
-     * when each vcpu has a separated thread.
-     */
-    if (priv->nvcpupids != 0 && priv->vcpupids[0] != vm->pid) {
-        for (i = 0; i < priv->nvcpupids; i++) {
-            if (virCgroupNewThread(cgroup, VIR_CGROUP_THREAD_VCPU, i,
-                                   false, &cgroup_vcpu) < 0)
-                goto cleanup;
+    if (!qemuDomainHasVCpuPids(vm))
+        return 0;

-            if (qemuSetupCgroupVcpuBW(cgroup_vcpu, period, quota) < 0)
-                goto cleanup;
+    for (i = 0; i < priv->nvcpupids; i++) {
+        if (virCgroupNewThread(cgroup, VIR_CGROUP_THREAD_VCPU, i,
+                               false, &cgroup_vcpu) < 0)
+            goto cleanup;

-            virCgroupFree(&cgroup_vcpu);
-        }
+        if (qemuSetupCgroupVcpuBW(cgroup_vcpu, period, quota) < 0)
+            goto cleanup;
+
+        virCgroupFree(&cgroup_vcpu);
     }

     return 0;
@@ -10604,7 +10601,7 @@ qemuGetVcpusBWLive(virDomainObjPtr vm,
     int ret = -1;

     priv = vm->privateData;
-    if (priv->nvcpupids == 0 || priv->vcpupids[0] == vm->pid) {
+    if (!qemuDomainHasVCpuPids(vm)) {
         /* We do not create sub dir for each vcpu */
         rc = qemuGetVcpuBWLive(priv->cgroup, period, quota);
         if (rc < 0)
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 721647f..d7f45b3 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -2291,12 +2291,13 @@ qemuProcessSetVcpuAffinities(virDomainObjPtr vm)
     virDomainPinDefPtr pininfo;
     int n;
     int ret = -1;
-    VIR_DEBUG("Setting affinity on CPUs nvcpupin=%zu nvcpus=%d nvcpupids=%d",
-              def->cputune.nvcpupin, virDomainDefGetVCpus(def), priv->nvcpupids);
+    VIR_DEBUG("Setting affinity on CPUs nvcpupin=%zu nvcpus=%d hasVcpupids=%d",
+              def->cputune.nvcpupin, virDomainDefGetVCpus(def),
+              qemuDomainHasVCpuPids(vm));
     if (!def->cputune.nvcpupin)
         return 0;

-    if (priv->vcpupids == NULL) {
+    if (!qemuDomainHasVCpuPids(vm)) {
         /* If any CPU has custom affinity that differs from the
          * VM default affinity, we must reject it */
--
2.6.2

On 11/20/2015 10:22 AM, Peter Krempa wrote:
Add qemuDomainHasVCpuPids to do the checking and replace in place checks
with it.
---
 src/qemu/qemu_cgroup.c  |  7 ++-----
 src/qemu/qemu_domain.c  | 15 +++++++++++++++
 src/qemu/qemu_domain.h  |  2 ++
 src/qemu/qemu_driver.c  | 29 +++++++++++++----------------
 src/qemu/qemu_process.c |  7 ++++---
 5 files changed, 36 insertions(+), 24 deletions(-)
Well I got close - reached critical mass here.
diff --git a/src/qemu/qemu_cgroup.c b/src/qemu/qemu_cgroup.c
index 3c7694a..56c2e90 100644
--- a/src/qemu/qemu_cgroup.c
+++ b/src/qemu/qemu_cgroup.c
@@ -1005,12 +1005,9 @@ qemuSetupCgroupForVcpu(virDomainObjPtr vm)
         !virCgroupHasController(priv->cgroup, VIR_CGROUP_CONTROLLER_CPUSET))
         return 0;
-    if (priv->nvcpupids == 0 || priv->vcpupids[0] == vm->pid) {
-        /* If we don't know VCPU<->PID mapping or all vcpu runs in the same
-         * thread, we cannot control each vcpu.
-         */
I'm curious about the vcpupids[0] == vm->pid checks... I feel like I missed some nuance in the last patch. Perhaps a bit more explanation of what happened would help: what, up to this point (including this patch), allows the "safe" removal of that check?
+    /* If vCPU<->pid mapping is missing we can't do vCPU pinning */
+    if (!qemuDomainHasVCpuPids(vm))
         return 0;
-    }
     if (virDomainNumatuneGetMode(vm->def->numa, -1, &mem_mode) == 0 &&
         mem_mode == VIR_DOMAIN_NUMATUNE_MEM_STRICT &&
diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index 4913a3b..8a45825 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -3956,3 +3956,18 @@ qemuDomainRequiresMlock(virDomainDefPtr def)
     return false;
 }
+
+
+/**
+ * qemuDomainHasVCpuPids:
+ * @vm: Domain object
+ *
+ * Returns true if we were able to successfully detect vCPU pids for the VM.
+ */
+bool
+qemuDomainHasVCpuPids(virDomainObjPtr vm)
Use of "Vcpu" or "VCPU" rather than "VCpu"
+{
+    qemuDomainObjPrivatePtr priv = vm->privateData;
+
+    return priv->nvcpupids > 0;
+}
diff --git a/src/qemu/qemu_domain.h b/src/qemu/qemu_domain.h
index 03cf6ef..7f2eca1 100644
--- a/src/qemu/qemu_domain.h
+++ b/src/qemu/qemu_domain.h
@@ -491,4 +491,6 @@ int qemuDomainDefValidateMemoryHotplug(const virDomainDef *def,
                                        virQEMUCapsPtr qemuCaps,
                                        const virDomainMemoryDef *mem);
+bool qemuDomainHasVCpuPids(virDomainObjPtr vm);
+
 #endif /* __QEMU_DOMAIN_H__ */
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 8047d36..4b7452c 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -1428,7 +1428,7 @@ qemuDomainHelperGetVcpus(virDomainObjPtr vm, virVcpuInfoPtr info, int maxinfo,
     size_t i, v;
     qemuDomainObjPrivatePtr priv = vm->privateData;
-    if (priv->vcpupids == NULL) {
+    if (!qemuDomainHasVCpuPids(vm)) {
         virReportError(VIR_ERR_OPERATION_INVALID,
                        "%s", _("cpu affinity is not supported"));
         return -1;
@@ -5118,7 +5118,7 @@ qemuDomainPinVcpuFlags(virDomainPtr dom,
     }
     if (def) {
-        if (priv->vcpupids == NULL) {
+        if (!qemuDomainHasVCpuPids(vm)) {
             virReportError(VIR_ERR_OPERATION_INVALID,
                            "%s", _("cpu affinity is not supported"));
             goto endjob;
@@ -10287,21 +10287,18 @@ qemuSetVcpusBWLive(virDomainObjPtr vm, virCgroupPtr cgroup,
     if (period == 0 && quota == 0)
         return 0;
-    /* If we does not know VCPU<->PID mapping or all vcpu runs in the same
-     * thread, we cannot control each vcpu. So we only modify cpu bandwidth
-     * when each vcpu has a separated thread.
-     */
-    if (priv->nvcpupids != 0 && priv->vcpupids[0] != vm->pid) {
-        for (i = 0; i < priv->nvcpupids; i++) {
-            if (virCgroupNewThread(cgroup, VIR_CGROUP_THREAD_VCPU, i,
-                                   false, &cgroup_vcpu) < 0)
-                goto cleanup;
+    if (!qemuDomainHasVCpuPids(vm))
+        return 0;
Here again the vcpupids[0] thing. John
-            if (qemuSetupCgroupVcpuBW(cgroup_vcpu, period, quota) < 0)
-                goto cleanup;
+    for (i = 0; i < priv->nvcpupids; i++) {
+        if (virCgroupNewThread(cgroup, VIR_CGROUP_THREAD_VCPU, i,
+                               false, &cgroup_vcpu) < 0)
+            goto cleanup;
-            virCgroupFree(&cgroup_vcpu);
-        }
+        if (qemuSetupCgroupVcpuBW(cgroup_vcpu, period, quota) < 0)
+            goto cleanup;
+
+        virCgroupFree(&cgroup_vcpu);
     }
     return 0;
@@ -10604,7 +10601,7 @@ qemuGetVcpusBWLive(virDomainObjPtr vm,
     int ret = -1;
     priv = vm->privateData;
-    if (priv->nvcpupids == 0 || priv->vcpupids[0] == vm->pid) {
+    if (!qemuDomainHasVCpuPids(vm)) {
         /* We do not create sub dir for each vcpu */
         rc = qemuGetVcpuBWLive(priv->cgroup, period, quota);
         if (rc < 0)
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 721647f..d7f45b3 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -2291,12 +2291,13 @@ qemuProcessSetVcpuAffinities(virDomainObjPtr vm)
     virDomainPinDefPtr pininfo;
     int n;
     int ret = -1;
-    VIR_DEBUG("Setting affinity on CPUs nvcpupin=%zu nvcpus=%d nvcpupids=%d",
-              def->cputune.nvcpupin, virDomainDefGetVCpus(def), priv->nvcpupids);
+    VIR_DEBUG("Setting affinity on CPUs nvcpupin=%zu nvcpus=%d hasVcpupids=%d",
+              def->cputune.nvcpupin, virDomainDefGetVCpus(def),
+              qemuDomainHasVCpuPids(vm));
     if (!def->cputune.nvcpupin)
         return 0;
-    if (priv->vcpupids == NULL) {
+    if (!qemuDomainHasVCpuPids(vm)) {
         /* If any CPU has custom affinity that differs from the
          * VM default affinity, we must reject it */

On Mon, Nov 23, 2015 at 18:19:18 -0500, John Ferlan wrote:
On 11/20/2015 10:22 AM, Peter Krempa wrote:
Add qemuDomainHasVCpuPids to do the checking and replace in place checks
with it.
---
 src/qemu/qemu_cgroup.c  |  7 ++-----
 src/qemu/qemu_domain.c  | 15 +++++++++++++++
 src/qemu/qemu_domain.h  |  2 ++
 src/qemu/qemu_driver.c  | 29 +++++++++++++----------------
 src/qemu/qemu_process.c |  7 ++++---
 5 files changed, 36 insertions(+), 24 deletions(-)
Well I got close - reached critical mass here.
diff --git a/src/qemu/qemu_cgroup.c b/src/qemu/qemu_cgroup.c
index 3c7694a..56c2e90 100644
--- a/src/qemu/qemu_cgroup.c
+++ b/src/qemu/qemu_cgroup.c
@@ -1005,12 +1005,9 @@ qemuSetupCgroupForVcpu(virDomainObjPtr vm)
         !virCgroupHasController(priv->cgroup, VIR_CGROUP_CONTROLLER_CPUSET))
         return 0;
-    if (priv->nvcpupids == 0 || priv->vcpupids[0] == vm->pid) {
-        /* If we don't know VCPU<->PID mapping or all vcpu runs in the same
-         * thread, we cannot control each vcpu.
-         */
I'm curious about the vcpupids[0] == vm->pid checks... I feel like I missed some nuance in the last patch. Perhaps a bit more explanation of what happened would help: what, up to this point (including this patch), allows the "safe" removal of that check?
Hmmm, it indeed isn't really obvious, due to the fact that the code that allows all these checks to be removed actually wasn't touched by this series: commits b07f3d821dfb11a118ee75ea275fd6ab737d9500 and 65686e5a81d654d834d338fceeaf0229b2ca4f0d.

The implication of those commits is that if qemu doesn't support vCPU pinning, the thread array is never allocated rather than creating a fake one. This implies that all vcpu-thread-id related operations are relevant only if the array is allocated, and also that it contains correct data.

I'll try to compile the above info into the commit message. Sorry for causing misunderstandings :), it was actually obvious to me what is happening solely because I've seen the above commits.

Peter
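Peter's point can be sketched in a few lines of standalone C (simplified stand-in types and names, not the real qemu driver code): if detection leaves the thread-id array empty whenever qemu reports no usable per-vCPU threads (including the old "vcpupids[0] == vm->pid" case where all vCPUs share the main process thread), then `nvcpupids > 0` alone is a sufficient predicate.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <sys/types.h>

/* Simplified stand-in for the relevant part of qemuDomainObjPrivate */
typedef struct {
    size_t npids;   /* 0 unless real per-vCPU thread ids were detected */
    pid_t *pids;
} VcpuPids;

/* Detection in the style established by commits b07f3d82/65686e5a:
 * when qemu reports no usable per-vCPU threads, keep the array empty
 * instead of storing an entry that aliases the main process pid */
static void
detect_vcpu_pids(VcpuPids *priv, pid_t *tids, size_t ntids, pid_t vmpid)
{
    if (ntids == 0 || tids[0] == vmpid) {
        priv->npids = 0;
        priv->pids = NULL;
        return;
    }
    priv->npids = ntids;
    priv->pids = tids;   /* sketch only: real code copies the array */
}

/* Equivalent of qemuDomainHasVCpuPids under the invariant above */
static bool
has_vcpu_pids(const VcpuPids *priv)
{
    return priv->npids > 0;
}
```

Under this invariant the old two-part check and the new one-part check accept and reject exactly the same states.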

Instead of directly accessing the array add a helper to do this.
---
 src/qemu/qemu_cgroup.c  |  3 ++-
 src/qemu/qemu_domain.c  | 20 ++++++++++++++++++++
 src/qemu/qemu_domain.h  |  1 +
 src/qemu/qemu_driver.c  |  7 ++++---
 src/qemu/qemu_process.c |  5 ++---
 5 files changed, 29 insertions(+), 7 deletions(-)

diff --git a/src/qemu/qemu_cgroup.c b/src/qemu/qemu_cgroup.c
index 56c2e90..d8a2b03 100644
--- a/src/qemu/qemu_cgroup.c
+++ b/src/qemu/qemu_cgroup.c
@@ -1023,7 +1023,8 @@ qemuSetupCgroupForVcpu(virDomainObjPtr vm)
             goto cleanup;

         /* move the thread for vcpu to sub dir */
-        if (virCgroupAddTask(cgroup_vcpu, priv->vcpupids[i]) < 0)
+        if (virCgroupAddTask(cgroup_vcpu,
+                             qemuDomainGetVCpuPid(vm, i)) < 0)
             goto cleanup;

         if (period || quota) {
diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index 8a45825..be1f2b4 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -3971,3 +3971,23 @@ qemuDomainHasVCpuPids(virDomainObjPtr vm)

     return priv->nvcpupids > 0;
 }
+
+
+/**
+ * qemuDomainGetVCpuPid:
+ * @vm: domain object
+ * @vcpu: cpu id
+ *
+ * Returns the vCPU pid. If @vcpu is offline or out of range 0 is returned.
+ */
+pid_t
+qemuDomainGetVCpuPid(virDomainObjPtr vm,
+                     unsigned int vcpu)
+{
+    qemuDomainObjPrivatePtr priv = vm->privateData;
+
+    if (vcpu >= priv->nvcpupids)
+        return 0;
+
+    return priv->vcpupids[vcpu];
+}
diff --git a/src/qemu/qemu_domain.h b/src/qemu/qemu_domain.h
index 7f2eca1..c1aad61 100644
--- a/src/qemu/qemu_domain.h
+++ b/src/qemu/qemu_domain.h
@@ -492,5 +492,6 @@ int qemuDomainDefValidateMemoryHotplug(const virDomainDef *def,
                                        const virDomainMemoryDef *mem);

 bool qemuDomainHasVCpuPids(virDomainObjPtr vm);
+pid_t qemuDomainGetVCpuPid(virDomainObjPtr vm, unsigned int vcpu);

 #endif /* __QEMU_DOMAIN_H__ */
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 4b7452c..c659328 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -1449,7 +1449,7 @@ qemuDomainHelperGetVcpus(virDomainObjPtr vm, virVcpuInfoPtr info, int maxinfo,
                                       &(info[i].cpu),
                                       NULL,
                                       vm->pid,
-                                      priv->vcpupids[i]) < 0) {
+                                      qemuDomainGetVCpuPid(vm, i)) < 0) {
                 virReportSystemError(errno, "%s",
                                      _("cannot get vCPU placement & pCPU time"));
                 return -1;
@@ -1462,7 +1462,7 @@ qemuDomainHelperGetVcpus(virDomainObjPtr vm, virVcpuInfoPtr info, int maxinfo,
             unsigned char *cpumap = VIR_GET_CPUMAP(cpumaps, maplen, v);
             virBitmapPtr map = NULL;

-            if (!(map = virProcessGetAffinity(priv->vcpupids[v])))
+            if (!(map = virProcessGetAffinity(qemuDomainGetVCpuPid(vm, v))))
                 return -1;

             virBitmapToDataBuf(map, cpumap, maplen);
@@ -5156,7 +5156,8 @@ qemuDomainPinVcpuFlags(virDomainPtr dom,
                 goto endjob;
             }
         } else {
-            if (virProcessSetAffinity(priv->vcpupids[vcpu], pcpumap) < 0) {
+            if (virProcessSetAffinity(qemuDomainGetVCpuPid(vm, vcpu),
+                                      pcpumap) < 0) {
                 virReportError(VIR_ERR_SYSTEM_ERROR,
                                _("failed to set cpu affinity for vcpu %d"),
                                vcpu);
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index d7f45b3..4a2cc66 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -2286,7 +2286,6 @@ qemuProcessSetLinkStates(virQEMUDriverPtr driver,
 static int
 qemuProcessSetVcpuAffinities(virDomainObjPtr vm)
 {
-    qemuDomainObjPrivatePtr priv = vm->privateData;
     virDomainDefPtr def = vm->def;
     virDomainPinDefPtr pininfo;
     int n;
@@ -2319,7 +2318,7 @@ qemuProcessSetVcpuAffinities(virDomainObjPtr vm)
                                         n)))
             continue;

-        if (virProcessSetAffinity(priv->vcpupids[n],
+        if (virProcessSetAffinity(qemuDomainGetVCpuPid(vm, n),
                                   pininfo->cpumask) < 0) {
             goto cleanup;
         }
@@ -2407,7 +2406,7 @@ qemuProcessSetSchedulers(virDomainObjPtr vm)
     size_t i = 0;

     for (i = 0; i < priv->nvcpupids; i++) {
-        if (qemuProcessSetSchedParams(i, priv->vcpupids[i],
+        if (qemuProcessSetSchedParams(i, qemuDomainGetVCpuPid(vm, i),
                                       vm->def->cputune.nvcpusched,
                                       vm->def->cputune.vcpusched) < 0)
             return -1;
--
2.6.2

On 11/20/2015 10:22 AM, Peter Krempa wrote:
Instead of directly accessing the array add a helper to do this.
---
 src/qemu/qemu_cgroup.c  |  3 ++-
 src/qemu/qemu_domain.c  | 20 ++++++++++++++++++++
 src/qemu/qemu_domain.h  |  1 +
 src/qemu/qemu_driver.c  |  7 ++++---
 src/qemu/qemu_process.c |  5 ++---
 5 files changed, 29 insertions(+), 7 deletions(-)
I believe technically it's a "tid" and not a "pid", but since half the code calls it one way and the other half a different way, it's a coin flip on whether it... Theoretically it's different from vm->pid too (although I am still trying to recall why vcpupids[0] was being compared to vm->pid).

Couple of other places, according to cscope, that still reference vcpupids[]:

  qemuDomainObjPrivateXMLFormat
  qemuDomainObjPrivateXMLParse

What about the innocent bystanders? IOW: what gets called with a now potentially "0" pid if the passed vcpu is out of bounds? For some of the callees, 0 means the self/current process, I believe. Not that this is any worse than what could theoretically happen with an out-of-range fetch, but the question is more whether we should skip the call entirely. If the goal is to make the code better, then perhaps the "error condition" should be checked in:

  virCgroupAddTask
  qemuGetProcessInfo
  virProcessGetAffinity
  virProcessSetAffinity
  qemuProcessSetSchedParams
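John's concern about the "0" sentinel can be sketched standalone (simplified stand-in types and names, not the real qemu driver code): the accessor returns 0 for an out-of-range index, and a defensive caller refuses to pass that 0 on to an affinity-style call, where pid 0 would conventionally mean "the calling thread".

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <sys/types.h>

/* Simplified stand-in for the private vCPU thread-id array */
typedef struct { size_t npids; pid_t *pids; } VcpuPids;

/* Accessor mirroring qemuDomainGetVCpuPid: 0 for out-of-range */
static pid_t
get_vcpu_pid(const VcpuPids *priv, unsigned int vcpu)
{
    if (vcpu >= priv->npids)
        return 0;
    return priv->pids[vcpu];
}

/* Defensive caller: check for the 0 sentinel before acting on it
 * rather than handing it to a process-affinity call */
static bool
set_affinity_checked(const VcpuPids *priv, unsigned int vcpu)
{
    pid_t pid = get_vcpu_pid(priv, vcpu);

    if (pid == 0)
        return false;   /* caller reports the error */
    /* a virProcessSetAffinity(pid, ...) style call would go here */
    return true;
}
```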
diff --git a/src/qemu/qemu_cgroup.c b/src/qemu/qemu_cgroup.c
index 56c2e90..d8a2b03 100644
--- a/src/qemu/qemu_cgroup.c
+++ b/src/qemu/qemu_cgroup.c
@@ -1023,7 +1023,8 @@ qemuSetupCgroupForVcpu(virDomainObjPtr vm)
             goto cleanup;
         /* move the thread for vcpu to sub dir */
-        if (virCgroupAddTask(cgroup_vcpu, priv->vcpupids[i]) < 0)
+        if (virCgroupAddTask(cgroup_vcpu,
+                             qemuDomainGetVCpuPid(vm, i)) < 0)
             goto cleanup;
         if (period || quota) {
diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index 8a45825..be1f2b4 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -3971,3 +3971,23 @@ qemuDomainHasVCpuPids(virDomainObjPtr vm)
return priv->nvcpupids > 0; } + + +/** + * qemuDomainGetVCpuPid: + * @vm: domain object + * @vcpu: cpu id + * + * Returns the vCPU pid. If @vcpu is offline or out of range 0 is returned. + */ +pid_t +qemuDomainGetVCpuPid(virDomainObjPtr vm, + unsigned int vcpu)
Would prefer "VCPU" or "Vcpu".

An ACK would be conditional on your thoughts regarding usage of tid vs. pid and how much error checking to do. I'm assuming right now that you're setting something up for the next batch of changes.

John
+{
+    qemuDomainObjPrivatePtr priv = vm->privateData;
+
+    if (vcpu >= priv->nvcpupids)
+        return 0;
+
+    return priv->vcpupids[vcpu];
+}
diff --git a/src/qemu/qemu_domain.h b/src/qemu/qemu_domain.h
index 7f2eca1..c1aad61 100644
--- a/src/qemu/qemu_domain.h
+++ b/src/qemu/qemu_domain.h
@@ -492,5 +492,6 @@ int qemuDomainDefValidateMemoryHotplug(const virDomainDef *def,
                                        const virDomainMemoryDef *mem);

 bool qemuDomainHasVCpuPids(virDomainObjPtr vm);
+pid_t qemuDomainGetVCpuPid(virDomainObjPtr vm, unsigned int vcpu);

 #endif /* __QEMU_DOMAIN_H__ */
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 4b7452c..c659328 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -1449,7 +1449,7 @@ qemuDomainHelperGetVcpus(virDomainObjPtr vm, virVcpuInfoPtr info, int maxinfo,
                                        &(info[i].cpu),
                                        NULL,
                                        vm->pid,
-                                       priv->vcpupids[i]) < 0) {
+                                       qemuDomainGetVCpuPid(vm, i)) < 0) {
                     virReportSystemError(errno, "%s",
                                          _("cannot get vCPU placement & pCPU time"));
                     return -1;
@@ -1462,7 +1462,7 @@ qemuDomainHelperGetVcpus(virDomainObjPtr vm, virVcpuInfoPtr info, int maxinfo,
                 unsigned char *cpumap = VIR_GET_CPUMAP(cpumaps, maplen, v);
                 virBitmapPtr map = NULL;

-                if (!(map = virProcessGetAffinity(priv->vcpupids[v])))
+                if (!(map = virProcessGetAffinity(qemuDomainGetVCpuPid(vm, v))))
                     return -1;

                 virBitmapToDataBuf(map, cpumap, maplen);
@@ -5156,7 +5156,8 @@ qemuDomainPinVcpuFlags(virDomainPtr dom,
                 goto endjob;
             }
         } else {
-            if (virProcessSetAffinity(priv->vcpupids[vcpu], pcpumap) < 0) {
+            if (virProcessSetAffinity(qemuDomainGetVCpuPid(vm, vcpu),
+                                      pcpumap) < 0) {
                 virReportError(VIR_ERR_SYSTEM_ERROR,
                                _("failed to set cpu affinity for vcpu %d"),
                                vcpu);
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index d7f45b3..4a2cc66 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -2286,7 +2286,6 @@ qemuProcessSetLinkStates(virQEMUDriverPtr driver,
 static int
 qemuProcessSetVcpuAffinities(virDomainObjPtr vm)
 {
-    qemuDomainObjPrivatePtr priv = vm->privateData;
     virDomainDefPtr def = vm->def;
     virDomainPinDefPtr pininfo;
     int n;
@@ -2319,7 +2318,7 @@ qemuProcessSetVcpuAffinities(virDomainObjPtr vm)
                                         n)))
             continue;

-        if (virProcessSetAffinity(priv->vcpupids[n],
+        if (virProcessSetAffinity(qemuDomainGetVCpuPid(vm, n),
                                   pininfo->cpumask) < 0) {
             goto cleanup;
         }
@@ -2407,7 +2406,7 @@ qemuProcessSetSchedulers(virDomainObjPtr vm)
     size_t i = 0;

     for (i = 0; i < priv->nvcpupids; i++) {
-        if (qemuProcessSetSchedParams(i, priv->vcpupids[i],
+        if (qemuProcessSetSchedParams(i, qemuDomainGetVCpuPid(vm, i),
                                       vm->def->cputune.nvcpusched,
                                       vm->def->cputune.vcpusched) < 0)
             return -1;

On Tue, Nov 24, 2015 at 07:25:37 -0500, John Ferlan wrote:
On 11/20/2015 10:22 AM, Peter Krempa wrote:
Instead of directly accessing the array add a helper to do this.
---
 src/qemu/qemu_cgroup.c  |  3 ++-
 src/qemu/qemu_domain.c  | 20 ++++++++++++++++++++
 src/qemu/qemu_domain.h  |  1 +
 src/qemu/qemu_driver.c  |  7 ++++---
 src/qemu/qemu_process.c |  5 ++---
 5 files changed, 29 insertions(+), 7 deletions(-)
I believe technically it's a "tid" and not a "pid", but since half the code calls it one way and the other half the other way, it's a coin flip. It's theoretically different than vm->pid too (although I am still trying to recall why vcpupids[0] was being compared to vm->pid).
Couple of other places according to cscope that still reference vcpupids[]:
qemuDomainObjPrivateXMLFormat qemuDomainObjPrivateXMLParse
These will be refactored later. Both the formatter and parser will format also the corresponding cpu ID to the thread ID.
What about the innocent bystanders? IOW: what gets called with a now potentially "0" pid if the passed vcpu is out of bounds? A pid of 0 means self/current process for some of these, I believe. Not that this is any worse than what theoretically could happen with an out-of-range fetch, but the question is more whether we should make the call at all in that case. If the goal is to make the code better, then perhaps the error condition should be checked in:

virCgroupAddTask
qemuGetProcessInfo
virProcessGetAffinity
virProcessSetAffinity
qemuProcessSetSchedParams
None of those should ever be called if the thread ID is 0. Currently that's impossible, and later the caller will need to make sure that this stays the case. Otherwise things will break.

Peter
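The contract Peter describes — the accessor returns 0 for an offline or out-of-range vCPU, and callers must never pass that 0 down to the affinity/cgroup helpers — can be sketched as below. This is a minimal, self-contained illustration: the `demo*` names and struct are stand-ins, not libvirt's actual types, and the real code calls virProcessSetAffinity() where the comment indicates.

```c
#include <assert.h>
#include <stddef.h>
#include <sys/types.h>

/* Simplified stand-in for the private vcpupids array. */
typedef struct {
    pid_t *vcpupids;
    size_t nvcpupids;
} demoPriv;

/* Mirrors qemuDomainGetVCpuPid: 0 signals "offline or out of range". */
static pid_t
demoGetVCpuPid(const demoPriv *priv, unsigned int vcpu)
{
    if (vcpu >= priv->nvcpupids)
        return 0;
    return priv->vcpupids[vcpu];
}

/* Caller-side guard: refuse to act on a zero thread ID, since a pid of
 * 0 means "the calling process" for sched_setaffinity() and friends. */
static int
demoSetAffinity(const demoPriv *priv, unsigned int vcpu)
{
    pid_t tid = demoGetVCpuPid(priv, vcpu);

    if (tid == 0)
        return -1;  /* would otherwise target ourselves */

    /* real code would call virProcessSetAffinity(tid, ...) here */
    return 0;
}
```

With this shape, an out-of-bounds vcpu index fails cleanly at the caller instead of silently re-pinning the libvirtd/qemu process itself.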

Change some of the control structures and switch to using the new vcpu
structure.
---
 src/qemu/qemu_driver.c | 77 ++++++++++++++++++++++++++++----------------------
 1 file changed, 43 insertions(+), 34 deletions(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index c659328..b9f8e72 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -1422,11 +1422,17 @@ qemuGetProcessInfo(unsigned long long *cpuTime, int *lastCpu, long *vm_rss,

 static int
-qemuDomainHelperGetVcpus(virDomainObjPtr vm, virVcpuInfoPtr info, int maxinfo,
-                         unsigned char *cpumaps, int maplen)
+qemuDomainHelperGetVcpus(virDomainObjPtr vm,
+                         virVcpuInfoPtr info,
+                         int maxinfo,
+                         unsigned char *cpumaps,
+                         int maplen)
 {
-    size_t i, v;
-    qemuDomainObjPrivatePtr priv = vm->privateData;
+    size_t ncpuinfo = 0;
+    size_t i;
+
+    if (maxinfo == 0)
+        return 0;

     if (!qemuDomainHasVCpuPids(vm)) {
         virReportError(VIR_ERR_OPERATION_INVALID,
@@ -1434,43 +1440,46 @@ qemuDomainHelperGetVcpus(virDomainObjPtr vm, virVcpuInfoPtr info, int maxinfo,
         return -1;
     }

-    /* Clamp to actual number of vcpus */
-    if (maxinfo > priv->nvcpupids)
-        maxinfo = priv->nvcpupids;
-
-    if (maxinfo >= 1) {
-        if (info != NULL) {
-            memset(info, 0, sizeof(*info) * maxinfo);
-            for (i = 0; i < maxinfo; i++) {
-                info[i].number = i;
-                info[i].state = VIR_VCPU_RUNNING;
-
-                if (qemuGetProcessInfo(&(info[i].cpuTime),
-                                       &(info[i].cpu),
-                                       NULL,
-                                       vm->pid,
-                                       qemuDomainGetVCpuPid(vm, i)) < 0) {
-                    virReportSystemError(errno, "%s",
-                                         _("cannot get vCPU placement & pCPU time"));
-                    return -1;
-                }
+    if (info)
+        memset(info, 0, sizeof(*info) * maxinfo);
+
+    if (cpumaps)
+        memset(cpumaps, 0, sizeof(*cpumaps) * maxinfo);
+
+    for (i = 0; i < virDomainDefGetVCpusMax(vm->def) && ncpuinfo < maxinfo; i++) {
+        virDomainVCpuInfoPtr vcpu = virDomainDefGetVCpu(vm->def, i);
+        pid_t vcpupid = qemuDomainGetVCpuPid(vm, i);
+
+        if (!vcpu->online)
+            continue;
+
+        if (info) {
+            info[i].number = i;
+            info[i].state = VIR_VCPU_RUNNING;
+
+            if (qemuGetProcessInfo(&(info[i].cpuTime), &(info[i].cpu), NULL,
+                                   vm->pid, vcpupid) < 0) {
+                virReportSystemError(errno, "%s",
+                                     _("cannot get vCPU placement & pCPU time"));
+                return -1;
             }
         }

-        if (cpumaps != NULL) {
-            for (v = 0; v < maxinfo; v++) {
-                unsigned char *cpumap = VIR_GET_CPUMAP(cpumaps, maplen, v);
-                virBitmapPtr map = NULL;
+        if (cpumaps) {
+            unsigned char *cpumap = VIR_GET_CPUMAP(cpumaps, maplen, i);
+            virBitmapPtr map = NULL;

-                if (!(map = virProcessGetAffinity(qemuDomainGetVCpuPid(vm, v))))
-                    return -1;
+            if (!(map = virProcessGetAffinity(vcpupid)))
+                return -1;

-                virBitmapToDataBuf(map, cpumap, maplen);
-                virBitmapFree(map);
-            }
+            virBitmapToDataBuf(map, cpumap, maplen);
+            virBitmapFree(map);
         }
+
+        ncpuinfo++;
     }

-    return maxinfo;
+
+    return ncpuinfo;
 }
--
2.6.2

On 11/20/2015 10:22 AM, Peter Krempa wrote:
Change some of the control structures and switch to using the new vcpu
structure.
---
 src/qemu/qemu_driver.c | 77 ++++++++++++++++++++++++++++----------------------
 1 file changed, 43 insertions(+), 34 deletions(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index c659328..b9f8e72 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -1422,11 +1422,17 @@ qemuGetProcessInfo(unsigned long long *cpuTime, int *lastCpu, long *vm_rss,

 static int
-qemuDomainHelperGetVcpus(virDomainObjPtr vm, virVcpuInfoPtr info, int maxinfo,
-                         unsigned char *cpumaps, int maplen)
+qemuDomainHelperGetVcpus(virDomainObjPtr vm,
+                         virVcpuInfoPtr info,
+                         int maxinfo,
+                         unsigned char *cpumaps,
+                         int maplen)
 {
-    size_t i, v;
-    qemuDomainObjPrivatePtr priv = vm->privateData;
+    size_t ncpuinfo = 0;
+    size_t i;
+
+    if (maxinfo == 0)
+        return 0;

     if (!qemuDomainHasVCpuPids(vm)) {
         virReportError(VIR_ERR_OPERATION_INVALID,
@@ -1434,43 +1440,46 @@ qemuDomainHelperGetVcpus(virDomainObjPtr vm, virVcpuInfoPtr info, int maxinfo,
         return -1;
     }

-    /* Clamp to actual number of vcpus */
-    if (maxinfo > priv->nvcpupids)
-        maxinfo = priv->nvcpupids;
-
-    if (maxinfo >= 1) {
-        if (info != NULL) {
-            memset(info, 0, sizeof(*info) * maxinfo);
-            for (i = 0; i < maxinfo; i++) {
-                info[i].number = i;
-                info[i].state = VIR_VCPU_RUNNING;
-
-                if (qemuGetProcessInfo(&(info[i].cpuTime),
-                                       &(info[i].cpu),
-                                       NULL,
-                                       vm->pid,
-                                       qemuDomainGetVCpuPid(vm, i)) < 0) {
-                    virReportSystemError(errno, "%s",
-                                         _("cannot get vCPU placement & pCPU time"));
-                    return -1;
-                }
+    if (info)
+        memset(info, 0, sizeof(*info) * maxinfo);
+
+    if (cpumaps)
+        memset(cpumaps, 0, sizeof(*cpumaps) * maxinfo);
+
+    for (i = 0; i < virDomainDefGetVCpusMax(vm->def) && ncpuinfo < maxinfo; i++) {
This line is longer than 80 cols.
+        virDomainVCpuInfoPtr vcpu = virDomainDefGetVCpu(vm->def, i);
+        pid_t vcpupid = qemuDomainGetVCpuPid(vm, i);
+
+        if (!vcpu->online)
+            continue;
So if the goal is to eventually allow, say, vCPUs 0 and 2 of 4 to be online, then this algorithm will need a slight adjustment. Of course there's also the "what if vcpupid == 0" case that hasn't been checked here (see my comments on patch 32).

I "believe" what needs to be done is to change the [i] below to [ncpuinfo] - that way the info and cpumaps arrays would be returned with only the ONLINE/RUNNING vCPUs, and info[] won't have gaps that aren't accessible if, e.g., a "2" is returned. I think the same holds true for the VIR_GET_CPUMAP index.

ACK with some adjustments.

John
+
+        if (info) {
+            info[i].number = i;
+            info[i].state = VIR_VCPU_RUNNING;
+
+            if (qemuGetProcessInfo(&(info[i].cpuTime), &(info[i].cpu), NULL,
+                                   vm->pid, vcpupid) < 0) {
+                virReportSystemError(errno, "%s",
+                                     _("cannot get vCPU placement & pCPU time"));
+                return -1;
             }
         }

-        if (cpumaps != NULL) {
-            for (v = 0; v < maxinfo; v++) {
-                unsigned char *cpumap = VIR_GET_CPUMAP(cpumaps, maplen, v);
-                virBitmapPtr map = NULL;
+        if (cpumaps) {
+            unsigned char *cpumap = VIR_GET_CPUMAP(cpumaps, maplen, i);
+            virBitmapPtr map = NULL;

-                if (!(map = virProcessGetAffinity(qemuDomainGetVCpuPid(vm, v))))
-                    return -1;
+            if (!(map = virProcessGetAffinity(vcpupid)))
+                return -1;

-                virBitmapToDataBuf(map, cpumap, maplen);
-                virBitmapFree(map);
-            }
+            virBitmapToDataBuf(map, cpumap, maplen);
+            virBitmapFree(map);
         }
+
+        ncpuinfo++;
     }

-    return maxinfo;
+
+    return ncpuinfo;
 }
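John's suggested fix — indexing the output arrays by ncpuinfo rather than by the vCPU id, so offline vCPUs leave no inaccessible gaps — can be sketched in isolation like this. The `demo*` names and structs are illustrative stand-ins, not libvirt's actual types:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins for virDomainVCpuInfo / virVcpuInfo. */
typedef struct { int online; } demoVcpu;
typedef struct { unsigned int number; int running; } demoVcpuInfo;

/* Fill @info compactly: offline vCPUs are skipped, so entries
 * 0..return-1 are all valid even when the online set has gaps
 * (e.g. vCPUs 0 and 2 of 4 online). The output index is ncpuinfo,
 * while .number still records the real vCPU id. */
static size_t
demoFillVcpuInfo(const demoVcpu *vcpus, size_t maxvcpus,
                 demoVcpuInfo *info, size_t maxinfo)
{
    size_t ncpuinfo = 0;
    size_t i;

    for (i = 0; i < maxvcpus && ncpuinfo < maxinfo; i++) {
        if (!vcpus[i].online)
            continue;

        info[ncpuinfo].number = i;   /* keep the real vCPU id */
        info[ncpuinfo].running = 1;
        ncpuinfo++;
    }

    return ncpuinfo;
}
```

A caller that receives a return value of 2 can then safely read info[0] and info[1], even though the online vCPU ids are 0 and 2.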

On Tue, Nov 24, 2015 at 07:56:19 -0500, John Ferlan wrote:
On 11/20/2015 10:22 AM, Peter Krempa wrote:
Change some of the control structures and switch to using the new vcpu
structure.
---
 src/qemu/qemu_driver.c | 77 ++++++++++++++++++++++++++++----------------------
 1 file changed, 43 insertions(+), 34 deletions(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index c659328..b9f8e72 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
[...]
+
+    if (cpumaps)
+        memset(cpumaps, 0, sizeof(*cpumaps) * maxinfo);
+
+    for (i = 0; i < virDomainDefGetVCpusMax(vm->def) && ncpuinfo < maxinfo; i++) {
This line is longer than 80 cols.
+        virDomainVCpuInfoPtr vcpu = virDomainDefGetVCpu(vm->def, i);
+        pid_t vcpupid = qemuDomainGetVCpuPid(vm, i);
+
+        if (!vcpu->online)
+            continue;
So if the goal is to eventually allow vcpu 0 & 2 of 4 vcpu's to be online, then this algorithm will need a slight adjustment.
Yes that's the goal.
Of course there's also the what if 'vcpupid == 0' that hasn't been checked here (comments from patch 32).
I "believe" what needs to be done is change the [i] below to [ncpuinfo] - that way the info & cpumaps would be returned with only the ONLINE/RUNNING vCPU's and 'info[]' won't have gaps which won't be accessible if a "2" is returned... I think the same holds true for the VIR_GET_CPUMAP
Oh, yeah. I forgot to fix that when trying multiple approaches.

Use the proper data structures for the iteration since ncpupids will be
made private later.
---
 src/qemu/qemu_cgroup.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/src/qemu/qemu_cgroup.c b/src/qemu/qemu_cgroup.c
index d8a2b03..06c20c1 100644
--- a/src/qemu/qemu_cgroup.c
+++ b/src/qemu/qemu_cgroup.c
@@ -800,7 +800,12 @@ qemuRestoreCgroupState(virDomainObjPtr vm)
     if (virCgroupSetCpusetMems(priv->cgroup, mem_mask) < 0)
         goto error;

-    for (i = 0; i < priv->nvcpupids; i++) {
+    for (i = 0; i < virDomainDefGetVCpusMax(vm->def); i++) {
+        virDomainVCpuInfoPtr vcpu = virDomainDefGetVCpu(vm->def, i);
+
+        if (!vcpu->online)
+            continue;
+
         if (virCgroupNewThread(priv->cgroup, VIR_CGROUP_THREAD_VCPU, i,
                                false, &cgroup_temp) < 0 ||
             virCgroupSetCpusetMemoryMigrate(cgroup_temp, true) < 0 ||
@@ -1016,7 +1021,12 @@ qemuSetupCgroupForVcpu(virDomainObjPtr vm)
                                         &mem_mask, -1) < 0)
         goto cleanup;

-    for (i = 0; i < priv->nvcpupids; i++) {
+    for (i = 0; i < virDomainDefGetVCpusMax(def); i++) {
+        virDomainVCpuInfoPtr vcpu = virDomainDefGetVCpu(def, i);
+
+        if (!vcpu->online)
+            continue;
+
         virCgroupFree(&cgroup_vcpu);
         if (virCgroupNewThread(priv->cgroup, VIR_CGROUP_THREAD_VCPU, i,
                                true, &cgroup_vcpu) < 0)
--
2.6.2

On 11/20/2015 10:22 AM, Peter Krempa wrote:
Use the proper data structures for the iteration since ncpupids will be
made private later.
---
 src/qemu/qemu_cgroup.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/src/qemu/qemu_cgroup.c b/src/qemu/qemu_cgroup.c
index d8a2b03..06c20c1 100644
--- a/src/qemu/qemu_cgroup.c
+++ b/src/qemu/qemu_cgroup.c
@@ -800,7 +800,12 @@ qemuRestoreCgroupState(virDomainObjPtr vm)
     if (virCgroupSetCpusetMems(priv->cgroup, mem_mask) < 0)
         goto error;

-    for (i = 0; i < priv->nvcpupids; i++) {
+    for (i = 0; i < virDomainDefGetVCpusMax(vm->def); i++) {
+        virDomainVCpuInfoPtr vcpu = virDomainDefGetVCpu(vm->def, i);
+
What if !vcpu? Shouldn't happen, but not checked - trying to consider future too.
+        if (!vcpu->online)
+            continue;
+
         if (virCgroupNewThread(priv->cgroup, VIR_CGROUP_THREAD_VCPU, i,
                                false, &cgroup_temp) < 0 ||
             virCgroupSetCpusetMemoryMigrate(cgroup_temp, true) < 0 ||
@@ -1016,7 +1021,12 @@ qemuSetupCgroupForVcpu(virDomainObjPtr vm)
                                         &mem_mask, -1) < 0)
         goto cleanup;

-    for (i = 0; i < priv->nvcpupids; i++) {
+    for (i = 0; i < virDomainDefGetVCpusMax(def); i++) {
+        virDomainVCpuInfoPtr vcpu = virDomainDefGetVCpu(def, i);
+
Same here.

ACK w/ adjustments... That reminds me - patch 33 will have the same issue.

John
+        if (!vcpu->online)
+            continue;
+
         virCgroupFree(&cgroup_vcpu);
         if (virCgroupNewThread(priv->cgroup, VIR_CGROUP_THREAD_VCPU, i,
                                true, &cgroup_vcpu) < 0)

On Tue, Nov 24, 2015 at 08:41:53 -0500, John Ferlan wrote:
On 11/20/2015 10:22 AM, Peter Krempa wrote:
Use the proper data structures for the iteration since ncpupids will be
made private later.
---
 src/qemu/qemu_cgroup.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/src/qemu/qemu_cgroup.c b/src/qemu/qemu_cgroup.c
index d8a2b03..06c20c1 100644
--- a/src/qemu/qemu_cgroup.c
+++ b/src/qemu/qemu_cgroup.c
@@ -800,7 +800,12 @@ qemuRestoreCgroupState(virDomainObjPtr vm)
     if (virCgroupSetCpusetMems(priv->cgroup, mem_mask) < 0)
         goto error;

-    for (i = 0; i < priv->nvcpupids; i++) {
+    for (i = 0; i < virDomainDefGetVCpusMax(vm->def); i++) {
+        virDomainVCpuInfoPtr vcpu = virDomainDefGetVCpu(vm->def, i);
+
What if !vcpu? Shouldn't happen, but not checked - trying to consider future too.
Actually it really can't and should never happen. The setters always allocate the array fully, so iterating up to virDomainDefGetVCpusMax is always valid.

Peter
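The invariant Peter relies on — the setter always allocates the vCPU array fully up to the maximum, so a bounds-respecting getter never returns NULL — can be sketched as follows. The `demo*` names are illustrative stand-ins for the virDomainDef accessors, not libvirt's actual implementation:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

typedef struct { int online; } demoVcpu;
typedef struct {
    demoVcpu *vcpus;
    size_t maxvcpus;
} demoDef;

/* The setter allocates the array fully up to the new maximum, which is
 * what makes iterating up to maxvcpus safe without a NULL check. */
static int
demoDefSetVCpusMax(demoDef *def, size_t maxvcpus)
{
    demoVcpu *tmp = calloc(maxvcpus, sizeof(*tmp));

    if (!tmp)
        return -1;

    free(def->vcpus);
    def->vcpus = tmp;
    def->maxvcpus = maxvcpus;
    return 0;
}

/* The getter returns NULL only for a genuinely out-of-range index. */
static demoVcpu *
demoDefGetVCpu(demoDef *def, size_t i)
{
    if (i >= def->maxvcpus)
        return NULL;
    return &def->vcpus[i];
}
```

Under this scheme, every index below the maximum yields a valid (possibly offline) vCPU slot, so loops like the ones in the cgroup patch can dereference the result directly.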
participants (2)
-
John Ferlan
-
Peter Krempa