On Wed, Jan 22, 2014 at 8:36 AM, Matthias Bolte <matthias.bolte(a)googlemail.com> wrote:
2014/1/22 Adam Walters <adam(a)pandorasboxen.com>:
> On Mon, Jan 20, 2014 at 11:27 AM, Daniel P. Berrange <berrange(a)redhat.com> wrote:
>>
>> On Fri, Dec 20, 2013 at 11:03:53PM -0500, Adam Walters wrote:
>> > This patchset adds a driver named 'config' that allows access to
>> > configuration data, such as secret and storage definitions. This is a
>> > prerequisite for my next patchset, which resolves the race condition on
>> > libvirtd startup and the circular dependencies between QEMU and the
>> > storage driver.
>>
>> I vaguely recall something being mentioned in the past, but not the
>> details. Can you explain the details of the circular dependency
>> problem that you're currently facing ?
>>
>> > The basic rationale behind this idea is that there exist circumstances
>> > under which a driver may need to access things such as secrets at a
>> > time when there is no active connection to a hypervisor. Without a
>> > connection, that data cannot currently be accessed. I felt this was a
>> > much simpler solution to the problem than building new APIs that do
>> > not require a connection to operate.
>>
>> We have a handful of places in our code where we call out to public
>> APIs to implement some piece of functionality. I've never been all
>> that happy about these scenarios. If we call the public API directly,
>> it causes havoc with error reporting, because we end up dispatching
>> and resetting errors in the middle of a nested API call. Calling the
>> driver methods directly via conn->driver->invokeMethod() is somewhat
>> tedious and error-prone, because we then skip various sanity tests
>> that the public APIs perform. With ACL checking now implemented, we
>> also have the slightly odd situation that a public API documented as
>> requiring permission 'xxxx' may also require permissions 'yyyy' and
>> 'zzzz' to deal with the other public APIs we invoke secretly.
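[Editor's note: the trade-off described above can be sketched in a minimal, hypothetical C model. None of these names are real libvirt internals; the struct and functions are invented purely to illustrate why direct driver-table dispatch bypasses the checks a public entry point performs.]

```c
#include <stddef.h>

/* Hypothetical, simplified model of driver dispatch: a connection holds
 * a table of function pointers, and a public-API-style wrapper performs
 * sanity checks before dispatching.  All names here are illustrative. */

typedef struct _virDriverTable {
    int (*secretLookup)(const char *uuid);
} virDriverTable;

typedef struct _virConn {
    virDriverTable *driver;
} virConn;

/* Internal driver implementation. */
static int demoSecretLookup(const char *uuid)
{
    return uuid != NULL ? 0 : -1;
}

static virDriverTable demoDriver = { demoSecretLookup };

/* Public-API-style wrapper: validates its arguments before dispatch.
 * Calling conn->driver->secretLookup() directly skips these checks,
 * which is the hazard described above. */
int virDemoSecretLookup(virConn *conn, const char *uuid)
{
    if (conn == NULL || conn->driver == NULL || uuid == NULL)
        return -1;
    return conn->driver->secretLookup(uuid);
}
```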
>>
>> I think there is a fairly strong argument that our internal
>> implementations could be decoupled from the public APIs, so that we
>> can call methods internally without having to go via the public APIs
>> at all.
>>
>> On the flip side, though, there is also a long-term desire to separate
>> the different drivers into separate daemons, e.g. so the secrets
>> driver might move into a libvirt-secretsd daemon, which might
>> explicitly require us to go via the public APIs.
>
>
> Mulling this over last night, I think there may be an alternative
> implementation that would be sustainable long-term. You mentioned a
> desire to eventually split the drivers into separate daemons... It
> might seem silly today, but what if I went ahead and implemented a
> connection URI for each of the existing drivers? This would result in
> 'network:///', 'secrets:///', 'storage:///', etc. Once complete, the
> existing code base could slowly be updated to utilize connections to
> those new URIs in preparation for splitting the code out into daemons.
> The current libvirtd process would eventually become a connection
> broker, with a compatibility shim to allow access to the API as it is
> currently defined, for backwards compatibility.
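[Editor's note: the broker idea above amounts to routing a connection URI's scheme to the daemon that services it. A minimal sketch, assuming invented daemon names ("libvirt-secretsd" etc. are hypothetical, taken only from the discussion):]

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical routing table mapping a URI scheme ("secrets" in
 * "secrets:///") to the per-subsystem daemon that would service it.
 * The daemon names are illustrative, not real libvirt components. */

typedef struct {
    const char *scheme;
    const char *daemon;
} virUriRoute;

static const virUriRoute routes[] = {
    { "secrets", "libvirt-secretsd" },
    { "storage", "libvirt-storaged" },
    { "network", "libvirt-networkd" },
};

/* Return the daemon name for a URI like "secrets:///", or NULL if the
 * scheme is not one the broker knows how to dispatch. */
const char *virRouteForUri(const char *uri)
{
    size_t i;
    for (i = 0; i < sizeof(routes) / sizeof(routes[0]); i++) {
        size_t n = strlen(routes[i].scheme);
        if (strncmp(uri, routes[i].scheme, n) == 0 &&
            strncmp(uri + n, "://", 3) == 0)
            return routes[i].daemon;
    }
    return NULL;
}
```

[As the next paragraph points out, hypervisor-bundled subdrivers (esx://, vbox://, ...) do not fit a flat table like this, which is exactly the open question.]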
But keep in mind that there is not only one network, secrets, or
storage driver. For example, the ESX hypervisor driver comes with its
own ESX-specific storage and network subdrivers. The same is true for
the VirtualBox, Hyper-V, and Parallels hypervisor drivers. Today those
are bundled under the URIs of their hypervisor drivers: esx://,
vbox://, hyperv://, and parallels://. How does this work out with a
URI per subdriver?
Honestly, I don't know how that would work out. I have not played
around with any of the hypervisor drivers other than the QEMU and LXC
drivers, which both use the generic libvirt storage and network
drivers. Mainly, my thought was to suggest a possible alternative to
the 'config:///' driver, which Daniel had pointed out may not be a
good long-term solution to the problem. The generic 'config:///'
driver fixes the immediate need, but would not really make sense if
and when libvirt gets split into multiple daemons. At that point,
there will likely be a need for some form of connection URI (possibly
internal-only) for each daemon.
Though, if the ESX driver, for example, uses its own network driver,
can you list the ESX networks through the libvirt net-list command? If
not, then a change like this shouldn't really affect those drivers, at
least in theory. If you can list them, then perhaps an architecture
where you have a few more daemons may fix it. Something like
'libvirt-esx-bridged' would handle the libvirt<->ESX bridge. You could
then move the code that handles networking out of the ESX driver and
into the network driver, with a similar theory of operation for the
other drivers. The bridge daemon would have no dependencies, and thus
could start before anything else. A setup like this would cause
libvirt to expand into a lot of different processes, though, so I
don't know whether that would work out well, either. Perhaps for
drivers like this, the 'libvirt-esxd' process could run two threads:
one as a communication bridge, the other as the hypervisor driver.
Basically, I am just thinking out loud here, but I am not coming up
with many options that fix the circular dependencies currently present
while also providing an easy migration path toward splitting libvirt
into multiple processes, without muddying the public API in the short
term. What would make this easier is a way to make the 'config:///'
URI usable only from explicitly allowed locations (in this case, only
from within the storage driver). That would allow the code I wrote to
solve the current problem without really muddying the public APIs,
since there would be some measure of control over who could utilize
the new driver. Any additional use would require a patch to libvirt to
enable it, thus prompting discussion over why the access is needed at
all.
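[Editor's note: the restriction proposed above — a compile-time allow-list of internal callers permitted to open 'config:///' — could be sketched as follows. The caller identifiers are invented for illustration; nothing like this exists in libvirt, and extending the list would, as described, require a patch and the review discussion that comes with it.]

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical allow-list of internal subsystems permitted to open a
 * 'config:///' connection.  Identifiers are illustrative only. */
static const char *configAllowedCallers[] = {
    "storage_driver",   /* the single consumer proposed in this thread */
};

/* Return 1 if 'caller' may open config:///, 0 otherwise.  Because the
 * list is baked into the binary, granting access to a new subsystem
 * means patching libvirt, which forces the "why?" discussion. */
int virConfigOpenAllowed(const char *caller)
{
    size_t i;
    for (i = 0;
         i < sizeof(configAllowedCallers) / sizeof(configAllowedCallers[0]);
         i++) {
        if (strcmp(caller, configAllowedCallers[i]) == 0)
            return 1;
    }
    return 0;
}
```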