On Tue, 2016-02-02 at 13:59 +0100, Andrew Jones wrote:
> > > Our introspection support in QOM only allows us to say that a
> > > property is a particular type (int / enum / str / whatever). We
> > > don't have any way to expose info about what subset of possible
> > > values for a type are permitted. So I don't see any near term way
> > > to inform apps that the gic property accepts values x and y but
> > > not z.

> This actually doesn't matter for the v2 vs. v3 case. The gic-version
> property doesn't exist at all for v2-only QEMU. Although maybe the
> gic-version property should be reworked. Instead of
> gic-version=<highest-supported>, we could create one boolean property
> per version supported, i.e. gicv2, gicv3, gicv4...

Hold on, so "gic-version=3" means "support all GIC versions up to 3",
not "support GIC version 3 only"? I thought the latter.

> > 2) Just implement something in libvirt that checks what the kernel
> > supports directly via the well-defined KVM interface and chooses
> > the highest supported version per default.
[...]
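
For anyone following along: the KVM interface being referred to here
is, as far as I can tell, the KVM_CREATE_DEVICE ioctl used with the
KVM_CREATE_DEVICE_TEST flag, which asks the kernel whether it could
create a given in-kernel GIC without actually creating it. A rough
standalone sketch of that kind of probe, purely to illustrate the
interface rather than how libvirt would actually implement it, might
look like this:

  /* Hypothetical probe: ask KVM which in-kernel GIC versions it can
   * provide on this host, without actually creating any device */
  #include <fcntl.h>
  #include <stdio.h>
  #include <unistd.h>
  #include <sys/ioctl.h>
  #include <linux/kvm.h>

  static int gic_supported(int vmfd, unsigned int type)
  {
      struct kvm_create_device cd = {
          .type  = type,
          .flags = KVM_CREATE_DEVICE_TEST, /* test only, don't create */
      };

      return ioctl(vmfd, KVM_CREATE_DEVICE, &cd) == 0;
  }

  int main(void)
  {
      int kvmfd = open("/dev/kvm", O_RDWR);
      int vmfd = ioctl(kvmfd, KVM_CREATE_VM, 0);

      if (kvmfd < 0 || vmfd < 0)
          return 1;

      printf("GICv2: %s\n",
             gic_supported(vmfd, KVM_DEV_TYPE_ARM_VGIC_V2) ? "yes" : "no");
      printf("GICv3: %s\n",
             gic_supported(vmfd, KVM_DEV_TYPE_ARM_VGIC_V3) ? "yes" : "no");

      close(vmfd);
      close(kvmfd);
      return 0;
  }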

> I'm not familiar enough with libvirt, nor with the use of QMP, to
> really argue one way or another, but I find it a bit strange that
> we'd prefer libvirt to query two entities over one. And why should
> the libvirt installed on a particular host prefer gicv3 as the
> default, just because KVM supports it, even when QEMU does not? The
> default will fail every time until QEMU is upgraded, which may not be
> necessary/desired.
>
> Shouldn't the default be "host", to mean "whatever the host
> supports", rather than a specific version based either on host or
> QEMU probing? That should work for every QEMU version, right?
>
> Finally, I thought we were trying to get away from relying on QEMU's
> error messages to make any sort of decisions.

We wouldn't be looking at the error message and basing our decision on
that; we would just display it to the user if QEMU fails to run.
That already happens for a bunch of other situations, so I don't think
it's really a problem, especially because libvirt can't possibly be
expected to catch every possible QEMU failure and sugar-coat it before
reporting it to the user.

> I don't know what else libvirt queries directly from KVM, but IMO, it
> should be a long-term goal to stop doing it, not add more. Besides
> libvirt then properly selecting defaults that both KVM and QEMU
> support, it would allow /dev/kvm to have QEMU-only selinux labels
> applied.

One thing that comes to mind is the number of threads per subcore on
ppc64 hosts, and I don't think that's the kind of information QEMU
would provide via QMP.
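
For context, that information comes straight from /dev/kvm rather than
from QEMU: if I remember correctly it's the KVM_CHECK_EXTENSION ioctl
with the KVM_CAP_PPC_SMT capability, which reports the number of
threads per subcore. A minimal sketch, leaving out all the error
handling real code would obviously need:

  /* Hypothetical query: threads per subcore on ppc64 hosts */
  #include <fcntl.h>
  #include <stdio.h>
  #include <unistd.h>
  #include <sys/ioctl.h>
  #include <linux/kvm.h>

  int main(void)
  {
      int fd = open("/dev/kvm", O_RDONLY);
      int threads;

      if (fd < 0)
          return 1;

      /* Returns the threads-per-subcore count, or 0 if the capability
       * is not available on this host */
      threads = ioctl(fd, KVM_CHECK_EXTENSION, KVM_CAP_PPC_SMT);
      printf("threads per subcore: %d\n", threads);

      close(fd);
      return 0;
  }
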
In the short term we definitely need libvirt to be able to pass
"gic-version=host" to QEMU, and I'm working on a patch that enables
that feature.

As I've already voiced in this thread, my feeling is that the probing
should happen in one place only (QEMU) and libvirt should merely
query that information and report it back to the user, to avoid any
possible disagreement.
On the other hand, Dan has plenty more experience and also knowledge
that spans the whole stack, so in general I trust his opinion :)

One other way to handle this would be to simply report the GIC
versions *libvirt* supports, and let the user pick either the
default ("host", which should work anywhere) or a specific version,
which might or might not actually be accepted by QEMU. I think
there are other places in libvirt where this approach is used,
even though it's probably not the most user-friendly option...

Cheers.
--
Andrea Bolognani
Software Engineer - Virtualization Team