On 10/23/2014 11:14 AM, Daniel P. Berrange wrote:
Currently we have a bit of a crazy setup for activating
drivers for virConnectPtr. We iterate over the primary
hypervisor drivers until one matches and activates. This
bit is fine.
Then we do the same for the network, storage, interface,
etc secondary drivers. This is where the fun starts.
In the case of the stateless drivers that run outside
of libvirtd, they always want the secondary drivers to
match the primary driver. Unfortunately if they don't
register any dummy secondary driver they'll be given
the remote driver instead. The probing of secondary
drivers is really unhelpful here.
In the case of stateful drivers that run inside of
libvirtd, there is (almost) always only one driver instance
that they want to work with. So at best the probing of
secondary drivers is a waste of time. With the increasing
feature set, though, we're getting tight dependencies from
the hypervisor driver to the secondary driver, e.g. QEMU
depends on the specific network driver we have because it
calls various internal APIs that driver exposes.
I have never liked the sneakiness of those backdoor private APIs into
the network driver, but didn't see a reasonable way around it at the
time I added them in, and they worked, and nobody NACKed them... I think
they should maybe be replaced by full-fledged public APIs, but they need
to be defined in a much more complete and future-proof fashion. Of
course calling them would then require the ability to call a public
libvirt API from within another public libvirt API (actually, I guess we
already do this when we call virNetworkLookupByName() and
virNetworkGetBridgeName() from qemuNetworkIfaceConnect(), but I think
it's slightly accidental that that works).
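To make the pattern concrete, here is a minimal sketch (not the actual
qemu driver code; the helper name and error handling are invented) of
driver code reaching back into the public API via the connection it was
handed:

#include <libvirt/libvirt.h>

/* Illustrative only: while servicing one public API call, the driver
 * makes two more public API calls to find out which bridge device a
 * guest's tap device should be attached to. */
static char *
exampleLookupNetworkBridge(virConnectPtr conn, const char *netname)
{
    virNetworkPtr net;
    char *bridge;

    /* Public API call #1, made from inside driver code. */
    if (!(net = virNetworkLookupByName(conn, netname)))
        return NULL;

    /* Public API call #2: ask the network driver for the bridge name;
     * the caller is responsible for freeing the returned string. */
    bridge = virNetworkGetBridgeName(net);

    virNetworkFree(net);
    return bridge;
}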
We also have circular dependency problems during libvirtd
startup:
https://www.redhat.com/archives/libvir-list/2014-January/msg01143.html
There the storage driver needs a virConnectPtr instance in
order to talk to the secret driver. At the same time the
storage driver must run before the hypervisor driver because
the hypervisor driver needs to use it for autostart of VMs.
This is a horrible mess.
The solution I think is to remove the concept of probing for
secondary drivers entirely. Instead I suggest creating a
struct virDriver {
...
}
which contains pointers to virHypervisorDriver,
virNetworkDriver, virInterfaceDriver, etc. Now virConnectPtr
will only reference the single virDriverPtr and when we open
a hypervisor driver, we immediately get a full set of APIs
for everything. This way all the hypervisor drivers will
always know exactly what secondary driver they are working
with.
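For the sake of having it spelled out, a rough sketch of what that
struct might look like (the fields beyond the three driver tables
named above are my guesses at the "etc.", not part of the stated
proposal):

/* Sketch only - field names beyond the three mentioned are assumptions. */
struct virDriver {
    virHypervisorDriverPtr hypervisorDriver;
    virNetworkDriverPtr networkDriver;
    virInterfaceDriverPtr interfaceDriver;
    /* ... likewise storage, nodedev, secret, nwfilter drivers ... */
};
typedef struct virDriver *virDriverPtr;

so virConnectPtr would carry one virDriverPtr rather than a separate
pointer (plus privateData) per driver category.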
The one thing this will obstruct is the ability to use two different
secondary drivers at the same time; I don't know if this will be an
issue. An example of what could lead to this - we might decide to
implement a separate network driver that uses Open vSwitch as the
backend rather than the Linux host bridge, but want to allow use of both
on the same system. Or perhaps we might decide we want to have a
separate backend for nwfilter that uses the OVS version of flow
management (or whatever they call it) instead of iptables. (Well, those
are actually probably not very good examples, because (for the network
case) we already demonstrate how both could be supported by the same
driver in our parallel support of "unmanaged" networks that can point to
either a host bridge or an OVS bridge, and (for the nwfilter case) it
isn't apparent how or by whom the choice of which driver to use would be
made.)
So maybe that isn't a problem at all :-)
We'll be able to remove all the xxxPrivateData elements from
the virConnectPtr. The stateless drivers can just access the
main privateData. The stateful drivers can avoid any use of
privateData at all - just access the global static variable
holding their driverState.
Yeah, I always wondered why the *privateData was necessary, especially
since (as you say) some functions are directly accessing the global anyway.
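Just to illustrate the "global driverState instead of privateData"
point, here's a hypothetical sketch (the names are made up, not actual
libvirt code) of how a stateful driver could look:

#include <libvirt/libvirt.h>
#include <stdlib.h>

/* Hypothetical driver-wide state, owned by the daemon process. */
typedef struct _exampleDriverState exampleDriverState;
struct _exampleDriverState {
    char *configDir;   /* plus domain lists, locks, etc. */
};

/* The single instance, created once at daemon startup. */
static exampleDriverState *example_driver;

static int
exampleStateInitialize(void)
{
    /* Allocate the singleton when libvirtd starts. */
    if (!(example_driver = calloc(1, sizeof(*example_driver))))
        return -1;
    return 0;
}

static int
exampleConnectOpen(virConnectPtr conn)
{
    /* Nothing needs to be stashed in, or fetched from,
     * conn->privateData: every API entry point can just reference
     * the example_driver global directly. */
    (void)conn;
    return example_driver ? 0 : -1;
}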