On 03/04/2015 12:45 PM, Ján Tomko wrote:
On Tue, Feb 17, 2015 at 04:03:52PM -0500, John Ferlan wrote:
> Depending on the flags passed, either attempt to return the active/live
> IOThread data for the domain or the config data.
>
> The active/live path will call into the Monitor in order to get the
> IOThread data and then correlate the thread_id's returned from the
> monitor to the currently running system/threads in order to ascertain
> the affinity for each iothread_id.
>
> The config path will map each of the configured IOThreads and return
> any configured iothreadspin data
>
> Both paths will peruse the 'targetDef' domain list looking for 'disks'
> that have been assigned to a specific IOThread. An IOThread may have
> no resources associated with it.
>
> Signed-off-by: John Ferlan <jferlan(a)redhat.com>
> ---
> src/qemu/qemu_driver.c | 281 +++++++++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 281 insertions(+)
>
Just a few nits, until the API gets final...
> diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
> index 1bbbe9b..2c9d08c 100644
> +static int
> +
> + if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_QUERY) < 0)
> + goto cleanup;
> +
> + if (!virDomainObjIsActive(vm)) {
> + virReportError(VIR_ERR_OPERATION_INVALID, "%s",
> +                       _("cannot list IOThreads for an inactive domain"));
> + goto endjob;
> + }
> +
> + priv = vm->privateData;
> + if (!virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_OBJECT_IOTHREAD)) {
> + virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
> + _("IOThreads not supported with this binary"));
> + goto endjob;
> + }
> +
> + if (qemuDomainObjEnterMonitorAsync(driver, vm, QEMU_ASYNC_JOB_NONE) < 0)
> + goto endjob;
EnterMonitorAsync with QEMU_ASYNC_JOB_NONE is essentially EnterMonitor.
> + for (i = 0; i < targetDef->iothreads; i++) {
> + if (VIR_ALLOC(info_ret[i]) < 0)
> + goto cleanup;
> +
> +        /* IOThread IDs begin counting at 1 */
> + info_ret[i]->iothread_id = i + 1;
> +
> + if (VIR_ALLOC_N(info_ret[i]->cpumap, maplen) < 0)
> + goto cleanup;
> +
> + /* Initialize the cpumap */
> + info_ret[i]->cpumaplen = maplen;
> + memset(info_ret[i]->cpumap, 0xff, maplen);
> + if (maxcpu % 8)
> + info_ret[i]->cpumap[maplen - 1] &= (1 << maxcpu % 8) - 1;
virBitmapToData could make this more readable.
This is where I'm not sure we should vary from the existing API's model
w/r/t cpumap, maplen, etc.
> + }
> +
> + /* If iothreadspin setting exists, there are unused physical cpus */
> + iothreadspin_list = targetDef->cputune.iothreadspin;
A temporary variable pointing to iothreadspin[i] is more common.
This is a straight copy of qemuDomainGetVcpuPinInfo.
> +    for (i = 0; i < targetDef->cputune.niothreadspin; i++) {
> + /* vcpuid is the iothread_id...
> + * iothread_id is the index into info_ret + 1, so we can
> + * assume that the info_ret index we want is vcpuid - 1
> + */
> + cpumap = info_ret[iothreadspin_list[i]->vcpuid - 1]->cpumap;
> + cpumask = iothreadspin_list[i]->cpumask;
> +
> + for (pcpu = 0; pcpu < maxcpu; pcpu++) {
> + if (virBitmapGetBit(cpumask, pcpu, &pinned) < 0)
> + goto cleanup;
> + if (!pinned)
> + VIR_UNUSE_CPU(cpumap, pcpu);
> + }
> + }
> +
tks -
John