On Wed, Jan 24, 2007 at 02:17:31PM +0000, Richard W.M. Jones wrote:
> This is a follow-on to this thread:
> https://www.redhat.com/archives/libvir-list/2007-January/thread.html#00064
> but I think it deserves a thread of its own for discussion.
>
> Background:
>
> Dan drew this diagram proposing a way to include remote access to
> systems from within libvirt:
> http://people.redhat.com/berrange/libvirt/libvirt-arch-remote-2.png
>
> libvirt would continue, as now, to provide direct hypervisor calls,
> direct access to xend and so on. But in addition, a new backend
> would be written ("remote") which could talk to a remote daemon
> ("libvirtd") using some sort of RPC mechanism.
>
> Position:
>
> I gave this architecture some thought over the weekend, and I
> like it for the following reasons (some not very technical):
>
> * Authentication and encryption are handled entirely within the
>   libvirt / libvirtd library, allowing us to use whatever RPC
>   mechanism we like on top of a selection of transports of our
>   choosing (eg. GnuTLS, ssh, unencrypted TCP sockets, ...)
Yes, having a single consistent wire encryption & user auth system across
all virt backends makes for a very nice end user / admin story.
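
For instance, a GnuTLS-based client transport could look roughly like the
sketch below - just to show how little of the RPC layer needs to care about
the encryption. Error handling and handshake retries are elided, and "fd"
is assumed to be an already-connected TCP socket:

  /* Hedged sketch: wrap an existing TCP socket in a TLS session.
   * Certificate credentials are assumed to be loaded by the caller. */
  #include <gnutls/gnutls.h>

  static gnutls_session_t
  start_tls(int fd, gnutls_certificate_credentials_t cred)
  {
      gnutls_session_t session;

      gnutls_init(&session, GNUTLS_CLIENT);
      gnutls_set_default_priority(session);
      gnutls_credentials_set(session, GNUTLS_CRD_CERTIFICATE, cred);
      gnutls_transport_set_ptr(session, (gnutls_transport_ptr_t)(long)fd);

      if (gnutls_handshake(session) < 0) {      /* auth + key exchange */
          gnutls_deinit(session);
          return NULL;
      }
      return session;   /* RPC traffic then uses gnutls_record_send/recv */
  }

The same RPC code could sit on top of an ssh tunnel or a plain socket just
by swapping this layer out.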
> * We don't need to modify xend at all, and additionally we won't
>   need to modify future flavour-of-the-month virtual machine
>   monitors.
>
>   I have a particular issue with xend (written in Python) because
>   in my own tests I've seen my Python XMLRPC/SSL server actually
>   segfault. It doesn't inspire confidence that this Python
>   solution is adding anything more than apparent security.
Did I mention XenD is slow? If we can get remote management of Xen
bypassing XenD just like we do for the local case, we'll be much
better off.
> * The architecture is very flexible: it allows virt-manager to
>   run as root or as non-root, according to customer wishes.
>   virt-manager can make direct HV calls, or everything can be
>   remoted, and it's easy to explain to the user about the
>   performance vs management trade-offs.
>
> * It's relatively easy to implement. Note that libvirtd is just
>   a thin server layer linked to its own copy of libvirt.
>
> * Another proposal was to make all libvirt calls remote
>   (http://people.redhat.com/berrange/libvirt/libvirt-arch-remote-3.png),
>   but I don't think this is workable because (1) it requires that
>   a daemon always be running, which is another installation problem
>   and another chance for sysadmins to give up, and (2) the perception
>   will be that this is slow, whether or not that is actually true.
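
On the "thin server layer" point: roughly, each libvirtd method would just
decode a request, call into its local copy of libvirt, and encode a reply.
A sketch of what that could look like - the request/reply structs and their
fields are purely illustrative, not a proposed wire format:

  #include <libvirt/libvirt.h>

  struct request { int domid; };        /* hypothetical decoded request */
  struct reply {                        /* hypothetical reply */
      int status;
      int state;
      unsigned long memory;             /* KB */
      unsigned long long cputime;       /* ns */
  };

  static void
  dispatch_get_info(virConnectPtr conn, const struct request *req,
                    struct reply *rep)
  {
      virDomainInfo info;
      virDomainPtr dom = virDomainLookupByID(conn, req->domid);

      if (dom && virDomainGetInfo(dom, &info) == 0) {
          rep->status  = 0;
          rep->state   = info.state;    /* running, blocked, paused, ... */
          rep->memory  = info.memory;
          rep->cputime = info.cpuTime;
      } else {
          rep->status = -1;
      }
      if (dom)
          virDomainFree(dom);
  }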
I'd never compared performance of direct hypercalls vs libvirt_proxy
before, so I did a little test. The most commonly called method in
virt-manager is virDomainGetInfo, which fetches the current status of a
running domain - we call it once a second per guest.

So I wrote a simple program in C which calls virDomainGetInfo 100,000
times for each of 3 active guest VMs. I ran the test under a couple of
different libvirt backends. The results were:

  1. As root, direct hypercalls                 -> 1.4 seconds
  2. As non-root, hypercalls via libvirt_proxy  -> 9 seconds
  3. As non-root, via XenD                      -> 45 minutes [1]

So although it is roughly 10x slower than direct hypercalls, the
libvirt_proxy is actually pretty damn fast - 9 seconds for 300,000 calls.
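
The test program was essentially the following (a reconstruction - the
original wasn't posted, and the domain IDs are illustrative):

  #include <stdlib.h>
  #include <libvirt/libvirt.h>

  int
  main(void)
  {
      virConnectPtr conn;
      virDomainPtr doms[3];
      virDomainInfo info;
      int i, j;

      /* NULL picks the default backend: direct hypercalls as root,
       * libvirt_proxy or XenD otherwise. */
      conn = virConnectOpenReadOnly(NULL);
      if (!conn)
          exit(1);

      /* Assume the 3 running guests have domain IDs 1..3. */
      for (i = 0; i < 3; i++)
          if (!(doms[i] = virDomainLookupByID(conn, i + 1)))
              exit(1);

      /* 100,000 iterations x 3 domains = 300,000 virDomainGetInfo calls. */
      for (j = 0; j < 100000; j++)
          for (i = 0; i < 3; i++)
              if (virDomainGetInfo(doms[i], &info) < 0)
                  exit(1);

      for (i = 0; i < 3; i++)
          virDomainFree(doms[i]);
      virConnectClose(conn);
      return 0;
  }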
There are many reasons the XenD path is slow. Each operation makes
a new HTTP request. It spawns a new thread per request. It talks to
XenStore for every request which has very high I/O overhead. It uses
the old SEXPR protocol which requests far more info than we actually
need. It is written in Python. Now I'm sure we can improve performance
somewhat by switching to the new XML-RPC API, and getting persistent
connections running, but I doubt it'll ever be as fast as libvirt_proxy,
let alone hypercalls. So as mentioned above, I'd like to take XenD
out of the loop for remote management just like we do for the local
case with libvirt_proxy, but with full authenticated read+write access.
> Now some concerns:
>
> * libvirtd will likely need to be run as root, so another root
>   daemon written in C listening on a public port. (On the other
>   hand, xend listening on a public port also isn't too desirable,
>   even with authentication.)
For Xen we have no choice but to have something running as root,
since hypercalls need to mlock() memory and access the Xen device
node. So both options are unpleasant, but we have to choose one,
and I can't say XenD is the obvious winner - particularly given the
tendency of Python SSL code to segfault. We can, however, make sure
that libvirtd is written to allow the full suite of modern protection
mechanisms to be applied - SELinux, Exec Shield, TLS, FORTIFY_SOURCE,
etc.
> * If Xen upstream in the meantime comes up with a secure remote
>   access method, then potentially this means clients could have
>   to choose between the two, or run two services (libvirtd +
>   Xen/remote).
> * There are issues with versioning the remote API. Do we allow
>   different versions of libvirt/libvirtd to talk to each other?
>   Do we provide backwards compatibility when we move to a new API?
We can simply apply the same rules as we do for the public API: no
changes to existing calls, only additions are allowed. A simple protocol
version number can allow the client & server to negotiate the mutually
supported feature set.
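
Something as simple as the following would do - the names and the hello
struct are hypothetical, just to show the idea:

  #include <stdint.h>

  #define REMOTE_PROTOCOL_VERSION 1   /* bump whenever new calls are added */

  /* First message each side sends: the highest version it speaks. */
  struct remote_hello {
      uint32_t version;
  };

  /* Both sides then proceed with the lower of the two versions;
   * calls added in later protocol versions are simply never sent. */
  static uint32_t
  negotiate(uint32_t ours, uint32_t theirs)
  {
      return ours < theirs ? ours : theirs;
  }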
> * Do we allow more than one client to talk to a libvirtd daemon
>   (none | multiple readers, one writer | multiple readers & writers)?
The latter - since we have things modifying domains via XenD, or the
HV indirectly updating domain state, we de facto have multiple writers
already. From my work on the qemu daemon, I didn't encounter any major
problems with allowing multiple writers - by using a poll() based
single-threaded event loop approach, I avoided the nasty multi-thread
problems associated with multiple connections. Provided each request
can be completed in a short amount of time, there should be no need to
go fully threaded.
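
The shape of that event loop is roughly this (simplified: fixed-size
client table, no listen-socket handling, and the body of handle_request()
- read one request, dispatch into libvirt, write the reply - left out):

  #include <poll.h>
  #include <unistd.h>

  #define MAX_CLIENTS 64

  static struct pollfd fds[MAX_CLIENTS];  /* one slot per connected client */
  static int nfds;

  static void
  handle_request(int fd)
  {
      /* decode one request, call into libvirt, encode reply (not shown) */
      (void)fd;
  }

  static void
  event_loop(void)
  {
      for (;;) {
          int i;
          if (poll(fds, nfds, -1) < 0)    /* block until a client is ready */
              continue;                   /* EINTR etc. */
          for (i = 0; i < nfds; i++) {
              if (fds[i].revents & (POLLHUP | POLLERR)) {
                  close(fds[i].fd);       /* drop a dead client */
                  fds[i] = fds[--nfds];   /* compact the table */
                  i--;
              } else if (fds[i].revents & POLLIN) {
                  handle_request(fds[i].fd);
              }
          }
      }
  }

Since every request completes quickly, one thread services all the
connections and there is no locking at all.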
> * What's the right level to make a remote API? Should we batch
>   calls up together?
We may already be constrained by the client-side API - all calls in
the libvirt public API are synchronous, so from a single client thread
there is nothing available to batch.
> I've been investigating RPC mechanisms and there seem to be two
> reasonable possibilities, SunRPC and XML-RPC. (Both would need to
> run over some sort of secure connection, so there is a layer below
> both.) My analysis of those is here:
> http://et.redhat.com/~rjones/secure_rpc/
SunRPC would handle our current APIs fine. We've talked every now & then
about providing asynchronous callbacks into the API - eg, so the client
can be notified of VM state changes without having to poll the
virDomainGetInfo API every second. The RPC wire protocol certainly
supports that, but it's not clear the C APIs do.
The XDR wire formatting rules are very nicely defined - another option is
to use XDR as the wire encoding for our existing prototype implementation
in the qemud.
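
To show the flavour of that, encoding a domain-info record with the
standard xdrmem routines is just the following (the message layout is
made up purely for illustration):

  #include <rpc/xdr.h>

  /* Hypothetical message layout. */
  struct dom_info_msg {
      int domid;
      int state;
      unsigned int memory_kb;
  };

  /* Returns the number of bytes written into buf, or -1 on overflow. */
  static int
  encode_dom_info(char *buf, unsigned int buflen, struct dom_info_msg *m)
  {
      XDR xdr;

      xdrmem_create(&xdr, buf, buflen, XDR_ENCODE);
      if (!xdr_int(&xdr, &m->domid) ||
          !xdr_int(&xdr, &m->state) ||
          !xdr_u_int(&xdr, &m->memory_kb))
          return -1;
      return (int) xdr_getpos(&xdr);   /* 12 bytes on the wire */
  }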
For XML-RPC I'd like to do a proof of concept of the virDomainGetInfo
method implementation to see how much overhead it adds. Hopefully it would
be acceptable, although I'm sure it's a fair bit more than XDR / SunRPC.
We would need persistent connections for XML-RPC to be viable, particularly
with TLS enabled. Since XML-RPC doesn't really impose any firm C API,
I imagine we could get asynchronous notifications from the server
working without much trouble.
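
For comparison with the 12-byte XDR record sketched above, the equivalent
XML-RPC request body would be something like this (method and parameter
encoding invented for illustration):

  static const char getinfo_request[] =
      "<?xml version=\"1.0\"?>\n"
      "<methodCall>\n"
      "  <methodName>virDomainGetInfo</methodName>\n"
      "  <params>\n"
      "    <param><value><int>1</int></value></param>\n"   /* domain ID */
      "  </params>\n"
      "</methodCall>\n";

That's a couple of hundred bytes before you count HTTP headers and the
reply - hence wanting persistent connections and a measurement of the real
overhead before committing to it.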
Dan.
[1] It didn't actually finish. I got bored of waiting after 45 minutes.
--
|=- Red Hat, Engineering, Emerging Technologies, Boston. +1 978 392 2496 -=|
|=- Perl modules: http://search.cpan.org/~danberr/ -=|
|=- Projects: http://freshmeat.net/~danielpb/ -=|
|=- GnuPG: 7D3B9505 F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 -=|