Hi,
On Wed, 2007-01-24 at 23:48 +0000, Daniel P. Berrange wrote:
> On Wed, Jan 24, 2007 at 02:17:31PM +0000, Richard W.M. Jones wrote:
> > * Another proposal was to make all libvirt calls remote
> >   (http://people.redhat.com/berrange/libvirt/libvirt-arch-remote-3.png)
> >   but I don't think this is a going concern because (1) it requires
> >   a daemon always be run, which is another installation problem and
> >   another chance for sysadmins to give up, and (2) the perception will
> >   be that this is slow, whether or not that is actually true.
> Note, I don't think this was just being proposed as a way to
> re-architect things for the remote access stuff. I suggested it as a
> way to ensure that, if we aggregated hypervisor types under the one
> connection, multiple guests with the same name couldn't be created.
>
> i.e. you'd need this if the daemon was more than just a proxy and
> performed any management tasks itself. If the daemon is *just* a
> proxy, I don't think it makes sense.
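
To illustrate the kind of check Dan means - a purely hypothetical
sketch, nothing like this exists in libvirt today - a daemon doing
management itself would have to probe every hypervisor connection it
aggregates before allowing a create. The name_in_use() helper below is
made up for illustration:

  #include <libvirt/libvirt.h>

  /* Hypothetical: given one open connection per aggregated hypervisor,
   * report whether a guest of the given name already exists anywhere. */
  static int name_in_use(virConnectPtr *conns, int nconns, const char *name)
  {
      int i;
      for (i = 0; i < nconns; i++) {
          virDomainPtr dom = virDomainLookupByName(conns[i], name);
          if (dom != NULL) {
              virDomainFree(dom);
              return 1;  /* clash - refuse to create the new guest */
          }
      }
      return 0;
  }
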
> I'd never compared performance of direct hypercalls vs libvirt_proxy
> before, so I did a little test. The most commonly called method in
> virt-manager is virDomainGetInfo, for fetching the current status of a
> running domain - we call that once a second per guest.
>
> So I wrote a simple program in C which calls virDomainGetInfo 100,000
> times for 3 active guest VMs. I ran the test under a couple of
> different libvirt backends. The results were:
>
>   1. As root, direct hypercalls                 -> 1.4 seconds
>   2. As non-root, hypercalls via libvirt_proxy  -> 9 seconds
>   3. As non-root, via XenD                      -> 45 minutes [1]
>
> So although it is roughly 6x slower than direct hypercalls, the
> libvirt_proxy is actually pretty damn fast - 9 seconds for 300,000
> calls.
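
For reference, the guts of such a test are tiny. Here's a minimal
sketch of roughly what I imagine Dan's program looks like (my
reconstruction, not his actual code - it assumes 3 running guests and
a local read-only connection):

  #include <stdio.h>
  #include <stdlib.h>
  #include <libvirt/libvirt.h>

  int main(void)
  {
      virConnectPtr conn;
      virDomainPtr doms[3];
      virDomainInfo info;
      int ids[3], ndomains, i, j;

      /* Read-only connection: as root the Xen driver uses direct
       * hypercalls, as non-root it goes via libvirt_proxy. */
      conn = virConnectOpenReadOnly(NULL);
      if (conn == NULL) {
          fprintf(stderr, "failed to connect to the hypervisor\n");
          return EXIT_FAILURE;
      }

      /* Look the guests up once, outside the timed loop. */
      ndomains = virConnectListDomains(conn, ids, 3);
      for (j = 0; j < ndomains; j++)
          doms[j] = virDomainLookupByID(conn, ids[j]);

      /* 100,000 virDomainGetInfo() calls per guest. */
      for (i = 0; i < 100000; i++)
          for (j = 0; j < ndomains; j++)
              if (doms[j] != NULL)
                  virDomainGetInfo(doms[j], &info);

      for (j = 0; j < ndomains; j++)
          if (doms[j] != NULL)
              virDomainFree(doms[j]);
      virConnectClose(conn);
      return EXIT_SUCCESS;
  }

Running that under time(1), once as root and once as an unprivileged
user, should reproduce cases (1) and (2) above.
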
That's interesting because it shows that the overhead of a roundtrip on
a unix domain socket plus a context switch is negligible - the proxy
adds (9s - 1.4s) / 300,000 calls, i.e. about 25us per call.

I'd caution against optimising for this benchmark, though. An
application author shouldn't write code which assumes virDomainGetInfo()
is so cheap that even the extra 9ms added by XenD would cause a problem
for their application[1].

Consider factoring a 100ms network roundtrip into each call: 300,000
calls x 100ms is 30,000 seconds, i.e. 8hr 20min just for the roundtrips.

But anyway, I'd agree with the conclusions - using a daemon for the
local case is not a problem from a performance perspective, and avoiding
XenD where we can gives a nice win.
> [1] It didn't actually finish after 45 minutes. I just got bored
> of waiting.

Oh, look at this ... I only saw this now :-)

So, XenD is *at least* 9ms per call (45 minutes / 300,000 calls) ...
Cheers,
Mark.

[1] - This reminds me of when there was an almost fanatical effort to
make in-process CORBA calls with ORBit really, really fast. It was
pointless, though, because app authors couldn't rely on a CORBA call
being fast - it could just as easily be out-of-process.