On Tue, Aug 26, 2014 at 06:03:46PM -0400, John Ferlan wrote:
On 08/26/2014 01:03 AM, Martin Kletzander wrote:
> On Mon, Aug 25, 2014 at 08:38:09PM -0400, John Ferlan wrote:
>> diff --git a/src/qemu/qemu_command.c b/src/qemu/qemu_command.c
>> index 287a3f3..740b6ec 100644
>> --- a/src/qemu/qemu_command.c
>> +++ b/src/qemu/qemu_command.c
>> @@ -3862,6 +3862,9 @@ qemuBuildDriveDevStr(virDomainDefPtr def,
>> virBufferAddLit(&opt, "virtio-blk-device");
>> } else {
>> virBufferAddLit(&opt, "virtio-blk-pci");
>> +            if (disk->iothread > 0)
>> +                virBufferAsprintf(&opt, ",iothread=libvirtIothread%u",
>> +                                  disk->iothread);
>
> You are using the def->iothread only to format it with virtio disks,
> but ... [1]
>
>> }
>> qemuBuildIoEventFdStr(&opt, disk->ioeventfd, qemuCaps);
>> if (disk->event_idx &&
>> @@ -6614,6 +6617,31 @@ qemuBuildNumaArgStr(virQEMUDriverConfigPtr cfg,
>> }
>>
>>
>> +bool
>> +qemuDomainIothreadsAvailable(virDomainDefPtr def,
>> + virQEMUCapsPtr qemuCaps,
>> + virDomainDiskDefPtr disk)
>> +{
>> + bool inuse;
>> + const char *src = virDomainDiskGetSource(disk);
>> +
>> + if (!virQEMUCapsGet(qemuCaps, QEMU_CAPS_OBJECT_IOTHREAD) ||
>> + !def->iothreadmap) {
>> + virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
>> + _("IOThreads not supported for this QEMU "
>> + "for disk '%s'"), src);
>> + return false;
>> + }
>> +    if (virBitmapGetBit(def->iothreadmap, disk->iothread, &inuse) || inuse) {
>> + virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
>> +                       _("IOThread '%u' for disk '%s' is already "
>> +                         "being used"), disk->iothread, src);
>
> Some disks may not have source set (empty cdrom drive), so it should
> probably be NULLSTR(src), but I think "IOThread '%u' used multiple
> times" is more than enough.
>
> This leads me to a code simplification idea. If you checked
> QEMU_CAPS_OBJECT_IOTHREAD when creating the iothread objects it's
> enough to make virBitmapGetBit() handle NULL bitmap carefully and then
> just virBitmapGetBit() the bit the same way. And if the message does
> not output the disk name (src), this whole function can be thrown away
> and instead of doing:
>
> if (!qemuDomainIothreadsAvailable(def, qemuCaps, disk))
> goto error;
>
> you'd do:
>
> if (virBitmapGetBit(def->iothreadmap, disk->iothread, &tmp) || tmp)
> virReportError(...);
>
I looked at the above code simplification idea and I wasn't able to see
what you were driving at. Maybe it's my current code myopia... There's
a saying about not being able to see the forest for the trees that may
apply to me. Maybe I'm missing something obvious - I dunno...
Beyond the obvious point that if any other code passes a NULL bitmap,
the existing code will fail miserably, modifying GetBit/SetBit/ClearBit
to handle a NULL iothreadmap doesn't help the case where iothread == 0,
which I think could be the more common case.
Oh, yes, I missed a case, but anyway, nothing is wrong with reporting
it when building the command line. Looking back at my ideas, it seems
like a copy-paste from an insomnia-driven programming book.
I've adjusted where the checks are made to within the BuildDriveDevStr
code. That way both the hotplug and regular startup will make the same
check.
Great, that way it's still in building a command-line, but applies for
hotplug too.
> Just an idea, though, not a requirement ;) If you keep it in a
> separate function, though, I'd suggest checking the range and
> reporting specific error, because when testing I got for example this
> error:
>
> "IOThread '3' for disk '(null)' is already being used"
>
> even though it clearly wasn't; I just had only 1 or 2 I/O threads
> created.
>
>> + return false;
>> + }
>> + return true;
>> +}
>> +
>> +
>> static int
>> qemuBuildGraphicsVNCCommandLine(virQEMUDriverConfigPtr cfg,
>> virCommandPtr cmd,
>> @@ -8055,6 +8083,13 @@ qemuBuildCommandLine(virConnectPtr conn,
>> disk->src->driverName, disk->src->path);
>> goto error;
>> }
>> +
>> + /* Check iothreads relationship */
>> + if (disk->iothread > 0) {
>> + if (!qemuDomainIothreadsAvailable(def, qemuCaps, disk))
>> + goto error;
>> +            ignore_value(virBitmapSetBit(def->iothreadmap, disk->iothread));
>> + }
>> }
>>
>
> [1] ... here you check it for all disks. And it's kept in the domain
> definition for all disks as well. Maybe removing it from unsupported
> disks and checking(+building) the iothreadmap could be done in
> qemuDomainDefPostParse().
>
I'm not sure using a PostParse will work properly, especially in the
hotplug case where the disk XML is not added to the configuration yet -
the live add is attempted first and, if it succeeds, the bitmap needs
to be updated; if the configuration is to be saved, the on-disk config
is updated as well. I assume that for hotplug the domain def passed
along is the "live" one that already has the map filled in for any
disks that were properly vetted at startup time. Without the "live"
map being updated, one could add 'n' live disks to the same IOThread,
and that seemingly has nothing to do with PostParse - although I may
have misfollowed things.
The other part of PostParse that's difficult is that DevicePostParse
would probably be the place to check whether the device itself is valid,
because while the 'virtio' bus could be set there, when does 'pci' get
set? I also note qemuDomainDefPostParse() uses virCapsPtr rather than
virQEMUCapsPtr, and it's seemingly tasked primarily with adding things
the guest will need because of "other" devices found in the guest XML,
not with walking through the ndisks to fill in a bitmap or check that
all the right configuration options for iothreads are set. It just
wasn't clear enough whether it "should be" used for this purpose. I'm
not against using it - I just wasn't completely sure.
Yes, PostParse shouldn't be doing that; it's easier to create the
iothreadmap when starting the machine. Then it's clear that live
machines have an iothreadmap and persistent definitions don't. I was
just curious whether it would make sense to error out earlier (when
defining the XML) rather than later (when starting the machine).
Unfortunately I wrapped that simple piece of information in a crusty,
hardly readable shell.
[...]
So v2 is coming to a mailbox near you soon...
Looking forward!
Martin