
On 03/08/2015 07:03 AM, Ján Tomko wrote:
On Fri, Mar 06, 2015 at 01:47:06PM -0500, John Ferlan wrote:
Based on jtomko's review of the IOThread series and my foray into the libvirt-perl bindings, these patches make the following adjustments:
Patch 1 - Found while generating the virsh iothreadpin description, the
          vcpupin description had a few missing words
Patch 2 - While working on the libvirt-perl bindings, I found I was a bit
          overaggressive with my GetIOThreadsInfo interface with regard to
          checking ReadOnly unnecessarily
Patch 3 - Adjust the IOThread CPU Affinity algorithm based on jtomko's
          review comments
Patch 4 - Fallout because I ran the patches through my Coverity checker
Patch 5 - Similar to IOThread - adjust the GetVcpuInfo CPU Affinity
          algorithm for the returned cpumap
Patch 6 - Similar to IOThread - adjust the GetEmulatorInfo CPU Affinity
          algorithm for the returned cpumap
John Ferlan (6):
  Fix syntax for vcpupin description
  Remove ReadOnly check for GetIOThreadsInfo
  qemu: Change/Fix IOThread CPU affinity bitmap manipulation
  qemu: Resolve Coverity CHECKED_RETURN issue
  qemu: Change qemuDomainGetVcpuPinInfo bitmap manipulation
  qemu: Change qemuDomainGetEmulatorPinInfo bitmap manipulation
 src/libvirt-domain.c   |   1 -
 src/qemu/qemu_driver.c | 177 +++++++++++++++++++++----------------------------
 tools/virsh.pod        |   4 +-
 3 files changed, 76 insertions(+), 106 deletions(-)
ACK series, thanks for touching up VcpuInfo and EmulatorInfo as well!
There's one bug that I noticed: if the CPUs are pinned domain-wide, that is:
  <vcpu placement='static' cpuset='0-1'>4</vcpu>
  <iothreads>2</iothreads>
Both vcpu threads and iothreads will inherit this pinning.
For a shutoff domain, vcpupininfo will display 0-1 for all vcpus, but iothreadsinfo shows 0-4, even though the IOThreads will get pinned to 0-1 after domain startup.
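The 0-4 comes from the unpinned fallback: with no iothreadspin entry, qemuDomainGetIOThreadsConfig builds an all-set bitmap sized to the host and never consults targetDef->cpumask. A minimal sketch of that pre-fix path (the 5-CPU host is just an assumption to match the 0-4 output):

    /* No per-IOThread pinning: report "all host CPUs".
     * With hostcpus == 5 this yields 0-4, even though
     * targetDef->cpumask restricts the domain to 0-1. */
    if (!pininfo) {
        if (!(bitmap = virBitmapNew(hostcpus)))
            goto cleanup;
        virBitmapSetAll(bitmap);
    }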
Turns out the vcpupin info is filled in for all the vcpus when the XML is parsed, since commit 10f8a45deb0f057a70a0d49794d3a3d19d17dceb
Falling back to targetDef->cpumask in qemuDomainGetIOThreadsConfig (as qemuDomainGetEmulatorPinInfo does) would solve that too.
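In other words, the reported mask would follow a three-way precedence. A sketch of the intent, reusing the names from qemuDomainGetIOThreadsConfig (illustrative only, not the exact patch):

    if (pininfo) {
        /* 1. explicit per-IOThread pinning wins */
        cpumask = pininfo->cpumask;
    } else if (targetDef->cpumask) {
        /* 2. fall back to the domain-wide <vcpu cpuset='...'> mask */
        cpumask = targetDef->cpumask;
    } else {
        /* 3. nothing configured: report all host CPUs */
        if (!(bitmap = virBitmapNew(hostcpus)))
            goto cleanup;
        virBitmapSetAll(bitmap);
        cpumask = bitmap;  /* locally allocated, must be freed */
    }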
Squashed the following into patch 4 to address this...

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 2a9db1b..ceceafa 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -5653,6 +5653,8 @@ qemuDomainGetIOThreadsConfig(virDomainDefPtr targetDef,
                              virDomainIOThreadInfoPtr **info)
 {
     virDomainIOThreadInfoPtr *info_ret = NULL;
+    virBitmapPtr bitmap = NULL;
+    virBitmapPtr cpumask = NULL;
     int hostcpus;
     size_t i;
     int ret = -1;
@@ -5668,7 +5670,6 @@ qemuDomainGetIOThreadsConfig(virDomainDefPtr targetDef,
 
     for (i = 0; i < targetDef->iothreads; i++) {
         virDomainVcpuPinDefPtr pininfo;
-        virBitmapPtr bitmap = NULL;
 
         if (VIR_ALLOC(info_ret[i]) < 0)
             goto cleanup;
@@ -5681,20 +5682,22 @@ qemuDomainGetIOThreadsConfig(virDomainDefPtr targetDef,
                                              targetDef->cputune.niothreadspin,
                                              i + 1);
         if (!pininfo) {
-            if (!(bitmap = virBitmapNew(hostcpus)))
-                goto cleanup;
-            virBitmapSetAll(bitmap);
+            if (targetDef->cpumask) {
+                cpumask = targetDef->cpumask;
+            } else {
+                if (!(bitmap = virBitmapNew(hostcpus)))
+                    goto cleanup;
+                virBitmapSetAll(bitmap);
+                cpumask = bitmap;
+            }
         } else {
-            bitmap = pininfo->cpumask;
+            cpumask = pininfo->cpumask;
         }
-        if (virBitmapToData(bitmap, &info_ret[i]->cpumap,
-                            &info_ret[i]->cpumaplen) < 0) {
-            if (!pininfo)
-                virBitmapFree(bitmap);
+        if (virBitmapToData(cpumask, &info_ret[i]->cpumap,
+                            &info_ret[i]->cpumaplen) < 0)
             goto cleanup;
-        }
-        if (!pininfo)
-            virBitmapFree(bitmap);
+
+        virBitmapFree(bitmap);
+        bitmap = NULL;
     }
 
     *info = info_ret;
@@ -5707,6 +5710,7 @@ qemuDomainGetIOThreadsConfig(virDomainDefPtr targetDef,
             virDomainIOThreadsInfoFree(info_ret[i]);
         VIR_FREE(info_ret);
     }
+    virBitmapFree(bitmap);
 
     return ret;
 }

Replaced the lines that were too aggressively removed from patch 5... I will send a separate patch for the domain_conf.c change that addresses any need for the iothreadspin setup to use def->cpumask, and pushed.

Tks -

John