On Fri, Sep 25, 2009 at 09:58:51AM +0200, Daniel Veillard wrote:
> On Mon, Aug 24, 2009 at 09:51:03PM +0100, Daniel P. Berrange wrote:
> Okay, this is very similar in principle to HTTP pipelining,
> with IMHO the same benefits and the same potential drawbacks.
> A couple of things to check might be:
>   - the maximum number of concurrent active streams allowed;
>     for example, suppose you want to urgently migrate all the
>     domains off a failing machine: some level of serialization
>     may be better than attempting to migrate all 100 domains at
>     the same time. 10 parallel streams might be better, but we
>     need to make sure the API allows such a condition to be
>     reported.
We could certainly add a tunable in /etc/libvirt/libvirtd.conf
that limits the number of streams that are allowed per client.
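
Something like this, just to sketch the idea (the setting name is
made up, nothing is wired up yet):

  # /etc/libvirt/libvirtd.conf  (sketch only -- "max_client_streams"
  # is a hypothetical name, not an existing setting)
  #
  # Maximum number of data streams a single client may have active
  # at once; attempts to open further streams would be rejected
  # until an existing one completes.
  max_client_streams = 10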
>   - the maximum chunking size, but with 256k I think this is
>     covered.
Yes, the remote protocol itself limits each message to 256k
currently. I think this is a good enough size, since it avoids
the stream delaying RPC calls, and the encryption chunk size
is going to be smaller than this anyway, so you won't gain much
from larger chunks.
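
For illustration, here is roughly what a client feeding data into a
stream would look like, pushing chunks no bigger than the 256k
message limit. Error reporting is trimmed, and the data source (a
plain local file) is just an example:

  #include <libvirt/libvirt.h>
  #include <stdio.h>

  #define CHUNK_SIZE (256 * 1024)

  /* Push a local file down an already-created stream in chunks no
   * larger than the 256k remote message limit. */
  static int
  send_file(virStreamPtr st, const char *path)
  {
      char buf[CHUNK_SIZE];
      FILE *fp = fopen(path, "rb");
      size_t got;

      if (!fp)
          return -1;

      while ((got = fread(buf, 1, sizeof(buf), fp)) > 0) {
          size_t done = 0;
          /* virStreamSend() may accept less than requested, so loop */
          while (done < got) {
              int rv = virStreamSend(st, buf + done, got - done);
              if (rv < 0) {
                  virStreamAbort(st);
                  fclose(fp);
                  return -1;
              }
              done += rv;
          }
      }

      fclose(fp);
      return virStreamFinish(st);
  }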
>   - synchronization internally between threads to avoid deadlocks
>     or poor performance, which can be very hard to debug, so I
>     guess an effort should be made to explain how things are
>     designed internally.
Each individual virStreamPtr object is directly associated with
a single API call, so in essence each virStreamPtr should only
really be used from a single thread. That said, the virStreamPtr
internal drivers should all lock the virStreamPtr object as
needed to provide safety.
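
To illustrate the intended pattern, something along these lines,
where each worker thread owns its stream from virStreamNew()
through virStreamFree() and never shares it with another thread
(the struct and function names are made up for the example):

  #include <libvirt/libvirt.h>
  #include <unistd.h>

  struct transfer {
      virConnectPtr conn;   /* one connection, shared by the workers */
      int fd;               /* where this worker writes received data */
  };

  /* pthread start routine: the virStreamPtr is created, driven and
   * released entirely within this thread. */
  static void *
  transfer_worker(void *opaque)
  {
      struct transfer *xfer = opaque;
      virStreamPtr st = virStreamNew(xfer->conn, 0);
      char buf[64 * 1024];
      int got;

      if (!st)
          return NULL;

      /* ... hand 'st' to whichever API call produces the data ... */

      while ((got = virStreamRecv(st, buf, sizeof(buf))) > 0) {
          if (write(xfer->fd, buf, got) < 0) {
              virStreamAbort(st);
              goto cleanup;
          }
      }

      if (got < 0)
          virStreamAbort(st);
      else
          virStreamFinish(st);

   cleanup:
      virStreamFree(st);
      return NULL;
  }

Each concurrent transfer would be launched with pthread_create()
on its own struct transfer, so no two threads ever touch the same
stream object.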
Daniel
--
|: Red Hat, Engineering, London   -o-   http://people.redhat.com/berrange/ :|
|: http://libvirt.org  -o-  http://virt-manager.org  -o-  http://ovirt.org :|
|: http://autobuild.org        -o-        http://search.cpan.org/~danberr/ :|
|: GnuPG: 7D3B9505  -o-  F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 :|