On Thu, Jan 25, 2007 at 04:56:23AM -0500, Daniel Veillard wrote:
> On Wed, Jan 24, 2007 at 11:48:47PM +0000, Daniel P. Berrange wrote:
> > There are many reasons the XenD path is slow. Each operation makes
> > a new HTTP request. It spawns a new thread per request. It talks to
> > XenStore for every request which has very high I/O overhead. It uses
> > the old SEXPR protocol which requests far more info than we actually
> > need. It is written in Python. Now I'm sure we can improve performance
> > somewhat by switching to the new XML-RPC api, and getting persistent
> > connections running, but I doubt it'll ever be as fast as libvirt_proxy
> > let alone hypercalls. So as mentioned above, I'd like to take XenD
> > out of the loop for remote management just like we do for the local
> > case with libvirt_proxy, but with full authenticated read+write access.
>   I love XML, but I doubt switching to XML-RPC will speed things up. Well,
> maybe if the parser is written in C, but parsing an XML instance still has a
> cost, and I doubt you will get anywhere close to the 300,000/s of a proxy-like
> RPC; that would mean 600,000 XML instance parses per second of pure overhead,
> and that's really not realistic.
Actually I should have clarified that - the reason I suggested switching to
XML-RPC might be faster is not that XML is fast to parse! The current
SEXPR protocol basically requires us to fetch the entire VM description each
time, even though we only want the VM status info - and building this VM
description has quite significant overhead in XenD. So switching to XML-RPC
would let us fetch only the info we actually need, which ought to remove a
significant chunk of XenD CPU time.
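
To make that concrete, here's a rough sketch of the kind of narrow query I
have in mind, written against the xmlrpc-c client library. NB the method
name "xend.domain.getState" and the endpoint URL are purely hypothetical -
the point is just that the reply carries a single status value, so XenD
never has to build the full SEXPR description of the VM:

  #include <stdio.h>
  #include <stdlib.h>
  #include <xmlrpc-c/base.h>
  #include <xmlrpc-c/client.h>

  int main(void)
  {
      xmlrpc_env env;
      xmlrpc_value *resultP;
      const char *state;

      xmlrpc_env_init(&env);
      xmlrpc_client_init2(&env, XMLRPC_CLIENT_NO_FLAGS,
                          "state-poll", "0.1", NULL, 0);

      /* Hypothetical method: ask for one domain's state and nothing
       * else, instead of pulling the whole VM description each time. */
      resultP = xmlrpc_client_call(&env, "http://localhost:8000/RPC2",
                                   "xend.domain.getState", "(s)", "demo");
      if (env.fault_occurred) {
          fprintf(stderr, "XML-RPC fault: %s\n", env.fault_string);
          return 1;
      }

      xmlrpc_read_string(&env, resultP, &state);
      printf("state: %s\n", state);

      free((char *)state);
      xmlrpc_DECREF(resultP);
      xmlrpc_client_cleanup();
      return 0;
  }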
> My only concern with an ad-hoc protocol like the proxy one is that it
> would make it harder to build a client side in, say, Java (since we don't
> have bindings, and that would be a relatively nice way to do it, as mixing C
> and Java always raises some resistance/problems). Though I really don't
> think this is a blocker.
Not as much as you might think - in my side job working on DBus I've noticed
that in the past couple of months people have provided 100% pure C# and Java
libraries speaking the raw DBus protocol without very much effort - in many
ways it actually simplified their code. So provided we /document/ any protocol
and use reasonably portable data types, I don't think Java clients would be
all that difficult.
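
For illustration, a documented protocol needn't be anything exotic - a
fixed-width header in network byte order is enough, something like the
sketch below (field names invented for this mail, not the actual
libvirt_proxy format):

  #include <stdint.h>

  /* Illustrative only. Every field is a fixed-width integer sent in
   * network byte order, so a pure Java client can read it straight off
   * a DataInputStream with no C bindings at all. */
  struct remote_msg_header {
      uint32_t magic;    /* protocol identifier */
      uint32_t version;  /* protocol revision */
      uint32_t serial;   /* matches replies to their requests */
      uint32_t proc;     /* which API call this message carries */
      uint32_t status;   /* 0 = OK, non-zero = error */
      uint32_t length;   /* size of the payload that follows */
  };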
> > > I've been investigating RPC mechanisms and there seem to be two
> > > reasonable possibilities, SunRPC and XMLRPC. (Both would need to
> > > run over some sort of secure connection, so there is a layer below
> > > both). My analysis of those is here:
> > >
> > >   http://et.redhat.com/~rjones/secure_rpc/
> >
> > SunRPC would handle our current APIs fine. We've talked every now & then
> > about providing asynchronous callbacks into the API - eg, so the client
> > can be notified of VM state changes without having to poll the
> > virDomainGetInfo api every second. The RPC wire protocol certainly
> > supports that, but it's not clear the C APIs do.
> Callbacks are hairy; somehow I would prefer to allow piggybacking
> extra payload on an RPC return rather than having the server initiate one.
> This simplifies both the client and server code, and also integration
> with the client's event loop (please, no threads!)
I would expect that if we wanted to add callbacks, we'd either have to provide
a method to get a libvirt file descriptor, which the client could then add
to their app's own event loop; or we'd let the client provide a set of functions
which libvirt could use for (un)registering its file descriptors with an event
loop as needed. I certainly wasn't thinking about threads :-) The latter is
what DBus does; the former is what SunRPC does (albeit for a different reason,
not async callbacks).
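
As a rough sketch of what that second approach could look like - all names
here are hypothetical, loosely modelled on DBus's
dbus_connection_set_watch_functions():

  /* The client registers these hooks once; libvirt then calls them
   * whenever it needs a file descriptor watched or forgotten, and the
   * client wires that into whatever event loop it already runs. */
  typedef void (*virEventCallback)(int fd, int events, void *opaque);

  typedef int  (*virEventAddHandleFunc)(int fd, int events,
                                        virEventCallback cb, void *opaque);
  typedef void (*virEventRemoveHandleFunc)(int watch);

  void virEventRegister(virEventAddHandleFunc addHandle,
                        virEventRemoveHandleFunc removeHandle);

A glib-based client would implement addHandle as a thin wrapper around
g_io_add_watch(), while a plain poll() loop would just append the fd to
its own fd set.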
Dan.
--
|=- Red Hat, Engineering, Emerging Technologies, Boston. +1 978 392 2496 -=|
|=- Perl modules: http://search.cpan.org/~danberr/ -=|
|=- Projects: http://freshmeat.net/~danielpb/ -=|
|=- GnuPG: 7D3B9505 F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 -=|