
On 10/13/2015 12:08 PM, Peter Krempa wrote:
On Tue, Oct 13, 2015 at 11:47:07 -0400, John Ferlan wrote:
During qemu process startup (qemuProcessStart), the call to qemuProcessDetectIOThreadPIDs will not attempt to fill in the ->thread_id values if the binary doesn't support IOThreads. However, the subsequent calls that set up the IOThread cgroups, affinity, and scheduler parameters had no such check and thus could attempt to use thread_id = 0, which would not be a good idea.
Signed-off-by: John Ferlan <jferlan@redhat.com>
---
 src/qemu/qemu_cgroup.c  |  3 +++
 src/qemu/qemu_process.c | 18 ++++++++++++------
 2 files changed, 15 insertions(+), 6 deletions(-)
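
The guard the patch repeats in each touched path boils down to the same couple of lines; a minimal sketch of the idea (the real hunks with full context follow below):

    /* Nothing to do here if the binary cannot create iothread objects;
     * qemuProcessDetectIOThreadPIDs never filled in ->thread_id, so the
     * values are still 0. */
    if (!virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_OBJECT_IOTHREAD))
        return 0;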
Do we even allow starting a VM that has IOThreads enabled when qemu does not support them? If so, that is the place to fix it, so that we have a single central point where this is checked.
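
Something along these lines (hypothetical placement and error wording, just to illustrate the idea of one central check):

    /* Hypothetical central check, e.g. somewhere early in qemuProcessStart():
     * reject the configuration once instead of guarding every later
     * consumer of thread_id. */
    if (vm->def->niothreadids > 0 &&
        !virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_OBJECT_IOTHREAD)) {
        virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
                       _("IOThreads are not supported by this QEMU binary"));
        return -1;
    }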
Upon further reflection...

During qemuBuildCommandLine there is a check for 'iothreads > 0 and the capability is present' before the 'niothreadids' iothread objects are created. Then, in qemuCheckIOThreads called from qemuBuildDriveDevStr, we will fail if there is a disk with an "iothread='#'" property. Thus, if IOThreads are merely defined (e.g. <iothreads> and <iothreadids>), we ignore them when the emulator doesn't support them; however, if they are actually used by a disk, then yes, we fail.

Naturally this is one of those cases that only one downstream platform really cares about - it just happens to be one that does a lot of QA for Red Hat. Although it is a fairly useless configuration, since it seems possible to have 'niothreadids' entries with empty fields other than 'iothread_id', I think checking for the capability is more correct in these instances than checking niothreadids. Unless, of course, you feel we should fail in qemuBuildCommandLine whenever iothreads are defined.

John
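
Roughly, the behaviour described above looks like this (a paraphrase for illustration only, not the literal source):

    /* qemuBuildCommandLine (roughly): iothread objects are only emitted
     * when they are both defined and supported; otherwise they are
     * silently ignored. */
    if (def->niothreadids > 0 &&
        virQEMUCapsGet(qemuCaps, QEMU_CAPS_OBJECT_IOTHREAD)) {
        /* emit -object iothread,id=... for each iothreadids entry */
    }

    /* qemuBuildDriveDevStr/qemuCheckIOThreads (roughly): a disk that
     * actually references an iothread is a hard failure when the
     * capability is missing. */
    if (disk->iothread &&
        !virQEMUCapsGet(qemuCaps, QEMU_CAPS_OBJECT_IOTHREAD)) {
        virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
                       _("IOThreads not supported for this QEMU"));
        goto error;
    }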
diff --git a/src/qemu/qemu_cgroup.c b/src/qemu/qemu_cgroup.c
index 570dab5..66318ea 100644
--- a/src/qemu/qemu_cgroup.c
+++ b/src/qemu/qemu_cgroup.c
@@ -1163,6 +1163,9 @@ qemuSetupCgroupForIOThreads(virDomainObjPtr vm)
     char *mem_mask = NULL;
     virDomainNumatuneMemMode mem_mode;
+    if (!virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_OBJECT_IOTHREAD))
+        return 0;
Spreading checks like this through the code usually ends up being fragile.
I think this should be:

    if (def->niothreads == 0)
        return 0;
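
i.e. keyed off the configuration rather than the capability; in qemuSetupCgroupForIOThreads that would be something like this (sketch, written against the niothreadids array the loops below iterate):

    /* Nothing to set up when the domain defines no IOThreads at all. */
    if (vm->def->niothreadids == 0)
        return 0;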
+
     if ((period || quota) &&
         !virCgroupHasController(priv->cgroup, VIR_CGROUP_CONTROLLER_CPU)) {
         virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",

diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 8aa9efc..67e7cbc 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -2514,10 +2514,14 @@ qemuProcessSetEmulatorAffinity(virDomainObjPtr vm)
 static int
 qemuProcessSetIOThreadsAffinity(virDomainObjPtr vm)
 {
+    qemuDomainObjPrivatePtr priv = vm->privateData;
     virDomainDefPtr def = vm->def;
     size_t i;
     int ret = -1;
+    if (!virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_OBJECT_IOTHREAD))
+        return 0;
+
     for (i = 0; i < def->niothreadids; i++) {
Here this function is doing the right thing.
         /* set affinity only for existing iothreads */
         if (!def->iothreadids[i]->cpumask)

@@ -2574,12 +2578,14 @@ qemuProcessSetSchedulers(virDomainObjPtr vm)
         return -1;
     }
-    for (i = 0; i < vm->def->niothreadids; i++) {
-        if (qemuProcessSetSchedParams(vm->def->iothreadids[i]->iothread_id,
-                                      vm->def->iothreadids[i]->thread_id,
-                                      vm->def->cputune.niothreadsched,
-                                      vm->def->cputune.iothreadsched) < 0)
-            return -1;
+    if (virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_OBJECT_IOTHREAD)) {
+        for (i = 0; i < vm->def->niothreadids; i++) {
As well as here.
+            if (qemuProcessSetSchedParams(vm->def->iothreadids[i]->iothread_id,
+                                          vm->def->iothreadids[i]->thread_id,
+                                          vm->def->cputune.niothreadsched,
+                                          vm->def->cputune.iothreadsched) < 0)
+                return -1;
+        }
Peter