On 03/10/2015 12:37 PM, Ján Tomko wrote:
> On Tue, Mar 10, 2015 at 11:20:43AM -0400, John Ferlan wrote:
>>
>>
>> On 03/10/2015 09:51 AM, Ján Tomko wrote:
>>> On Fri, Mar 06, 2015 at 09:05:44AM -0500, John Ferlan wrote:
>>
>> <...snip...>
>>
>>>> +
>>>> + /* pinning to all physical cpus means resetting,
>>>
>>> It doesn't.
>>> By default the iothreads inherit the pinning from the domain's cpumask.
>>>
>>> A completely clear bitmap would be a better value to mean resetting,
>>> since it makes no sense otherwise. But getting the cpumask in that case
>>> won't be that easy.
>>>
>>
>> So again - this is taken from qemuDomainPinVcpuFlags() - if it's invalid
>> here, then it's invalid there as well.
> Yes, that function might also not behave properly on the corner case of
> pinning a vcpu to all CPUs.
> But that one has been released for ages. This is new code that doesn't
> have to repeat its mistakes.
No issues with doing things right and obviously I wasn't aware
the current APIs are broken...
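
Just to be sure I have the inheritance rules you describe straight,
the effective affinity would resolve along these lines. This is only
an illustration in plain C using sched.h, not libvirt code, and the
helper name and arguments are invented for the example:

/* Illustration only -- not libvirt code.  A model of the lookup order
 * being described: an explicit per-IOThread pin wins, otherwise the
 * thread inherits the domain's cpuset, and only when neither exists
 * does "all host pCPUs" apply.  Names are invented for the example. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

static cpu_set_t
effective_affinity(const cpu_set_t *iothread_pin,  /* may be NULL */
                   const cpu_set_t *domain_cpuset, /* may be NULL */
                   int host_ncpus)
{
    cpu_set_t ret;

    CPU_ZERO(&ret);
    if (iothread_pin) {
        ret = *iothread_pin;        /* explicit per-thread pinning */
    } else if (domain_cpuset) {
        ret = *domain_cpuset;       /* inherited from the domain cpuset */
    } else {
        int i;
        for (i = 0; i < host_ncpus; i++)
            CPU_SET(i, &ret);       /* no pinning anywhere => all pCPUs */
    }
    return ret;
}

int main(void)
{
    int ncpus = (int) sysconf(_SC_NPROCESSORS_ONLN);
    cpu_set_t domain, eff;

    CPU_ZERO(&domain);
    CPU_SET(0, &domain);
    CPU_SET(1, &domain);            /* per-domain cpuset = 0-1 */

    /* No per-IOThread pin: the thread inherits the domain cpuset,
     * so missing pininfo is not the same thing as "all pCPUs". */
    eff = effective_affinity(NULL, &domain, ncpus);
    printf("effective affinity covers %d of %d pCPUs\n",
           CPU_COUNT(&eff), ncpus);
    return 0;
}
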
>>
>> Is your objection the comment? Should it be stricken or restated?
>>
> It's not the comment I have a problem with, it's the behavior that
> follows it.
>
> Calling this API on a domain with a per-domain cpuset to pin an iothread
> to all pCPUs will result in:
> a) for AFFECT_LIVE
>    The iothread will get pinned to all pCPUs
>    The pininfo will get deleted from the definition. But missing pininfo
>    does not mean all pCPUs if there is a per-domain cpuset.
> b) for AFFECT_CONFIG
>    The pininfo will be deleted, even though it does not match the
>    cpumask.
>
> I think the easiest 'fix' is to just drop all the 'doReset' functionality
> and do not allow deleting pininfo with this API.
>
> Alternatively, (since you already rejected
> VIR_DOMAIN_CPU_PLACEMENT_MODE_AUTO above) you can pin the iothread to
> the domain's cpuset.
>
> Also, the full bitmap is ambiguous - does it mean 'reset
> it to default' or 'pin it to all pCPUs'?
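
If a reset path were to stay in this API at all, am I right that it
would have to key off an all-clear map rather than a fully set one?
Roughly the following, sketched (untested) against the names already
in the patch (pcpumap, virDomainIOThreadsPinDel) - and not what the
diff further down does, since that drops the reset entirely:

    /* Hypothetical, untested sketch of keeping a reset path: */
    if (virBitmapIsAllClear(pcpumap)) {
        /* an all-clear map is the unambiguous "reset" request:
         * drop the per-IOThread pinning so the thread falls back
         * to the domain cpuset (or to no pinning at all) */
        virDomainIOThreadsPinDel(vm->def, iothread_id);
    } else {
        /* any other map, including one with every bit set, is
         * stored and applied literally as the new pinning */
    }
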
Well let's just say I don't know all the nuances and known issues
of cpu pinning...

Knowing that the other APIs (vcpu and emulator) are broken - are you
planning to send patches to fix them as well? Or is that something I
should do, mimicking whatever is settled upon here?
So, from how I read your comments, I have the following diff, which
removes the doReset conditions and, as a result of that, the PinDel
calls. Theoretically nothing calls the PinDel API now, and while I
suppose I could remove it, I'd need it back once the code to hot
remove an IOThread is added.

With respect to pinning to the domain's cpuset - this API takes a new
cpumap, so I'm not sure I understand under what condition it would use
the domain's cpuset instead. Maybe I read your comment too literally.
John
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index e3d0acf..74440f1 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -5773,7 +5773,6 @@ qemuDomainPinIOThread(virDomainPtr dom,
virCapsPtr caps = NULL;
virDomainDefPtr persistentDef = NULL;
virBitmapPtr pcpumap = NULL;
- bool doReset = false;
qemuDomainObjPrivatePtr priv;
virDomainVcpuPinDefPtr *newIOThreadsPin = NULL;
size_t newIOThreadsPinNum = 0;
@@ -5823,12 +5822,6 @@ qemuDomainPinIOThread(virDomainPtr dom,
goto endjob;
}
- /* pinning to all physical cpus means resetting,
- * so check if we can reset setting.
- */
- if (virBitmapIsAllSet(pcpumap))
- doReset = true;
-
if (flags & VIR_DOMAIN_AFFECT_LIVE) {
if (priv->iothreadpids == NULL) {
@@ -5891,17 +5884,13 @@ qemuDomainPinIOThread(virDomainPtr dom,
}
}
- if (doReset) {
- virDomainIOThreadsPinDel(vm->def, iothread_id);
- } else {
- if (vm->def->cputune.iothreadspin)
- virDomainVcpuPinDefArrayFree(vm->def->cputune.iothreadspin,
- vm->def->cputune.niothreadspin);
+ if (vm->def->cputune.iothreadspin)
+ virDomainVcpuPinDefArrayFree(vm->def->cputune.iothreadspin,
+ vm->def->cputune.niothreadspin);
- vm->def->cputune.iothreadspin = newIOThreadsPin;
- vm->def->cputune.niothreadspin = newIOThreadsPinNum;
- newIOThreadsPin = NULL;
- }
+ vm->def->cputune.iothreadspin = newIOThreadsPin;
+ vm->def->cputune.niothreadspin = newIOThreadsPinNum;
+ newIOThreadsPin = NULL;
if (virDomainSaveStatus(driver->xmlopt, cfg->stateDir, vm) < 0)
goto endjob;
@@ -5930,24 +5919,20 @@ qemuDomainPinIOThread(virDomainPtr dom,
goto endjob;
}
- if (doReset) {
- virDomainIOThreadsPinDel(persistentDef, iothread_id);
- } else {
- if (!persistentDef->cputune.iothreadspin) {
- if (VIR_ALLOC(persistentDef->cputune.iothreadspin) < 0)
- goto endjob;
- persistentDef->cputune.niothreadspin = 0;
- }
- if (virDomainIOThreadsPinAdd(&persistentDef->cputune.iothreadspin,
- &persistentDef->cputune.niothreadspin,
- cpumap,
- maplen,
- iothread_id) < 0) {
- virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
- _("failed to update or add iothreadspin xml "
- "of a persistent domain"));
+ if (!persistentDef->cputune.iothreadspin) {
+ if (VIR_ALLOC(persistentDef->cputune.iothreadspin) < 0)
goto endjob;
- }
+ persistentDef->cputune.niothreadspin = 0;
+ }
+ if (virDomainIOThreadsPinAdd(&persistentDef->cputune.iothreadspin,
+ &persistentDef->cputune.niothreadspin,
+ cpumap,
+ maplen,
+ iothread_id) < 0) {
+ virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+ _("failed to update or add iothreadspin xml "
+ "of a persistent domain"));
+ goto endjob;
}
ret = virDomainSaveConfig(cfg->configDir, persistentDef);