On Wed, Apr 11, 2007 at 01:54:52PM +0100, Richard W.M. Jones wrote:
> Daniel P. Berrange wrote:
> >On Wed, Apr 11, 2007 at 01:01:30PM +0100, Richard W.M. Jones wrote:
> >>I don't think those patches got memory allocation / deallocation of XDR
> >>structures right. Not surprising really since it's totally
> >>undocumented! In a bid to rectify this, I have documented how to do it
> >>here:
> >>
> >>http://et.redhat.com/~rjones/xdr_tests/
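
(As an aside for the archives, the usual alloc/free pattern looks roughly
like the sketch below - the my_msg struct and xdr_my_msg filter are invented
stand-ins for rpcgen output, not anything from the patches or the page above.)

/* Sketch only: zero the struct before decoding so the XDR filter can
 * allocate pointer members (here the string), then hand the same filter
 * to xdr_free() to release them. */
#include <rpc/rpc.h>
#include <string.h>

struct my_msg {
    char *name;
    int value;
};

bool_t xdr_my_msg(XDR *xdrs, struct my_msg *objp)
{
    if (!xdr_string(xdrs, &objp->name, ~0))
        return FALSE;
    return xdr_int(xdrs, &objp->value);
}

int decode_my_msg(char *buf, unsigned int len)
{
    XDR xdrs;
    struct my_msg msg;

    memset(&msg, 0, sizeof msg);               /* NULL pointers => XDR mallocs */
    xdrmem_create(&xdrs, buf, len, XDR_DECODE);
    if (!xdr_my_msg(&xdrs, &msg)) {
        xdr_destroy(&xdrs);
        return -1;
    }
    /* ... use msg.name / msg.value ... */
    xdr_free((xdrproc_t) xdr_my_msg, (char *) &msg);  /* frees msg.name */
    xdr_destroy(&xdrs);
    return 0;
}
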
> >
> >Nice. BTW on the subject of record streams - xdrrec_create - while very
> >nice looking on the surface, it is utterly useless because the impl
> >relies on the underlying FD / socket being in blocking mode. If you use
> >non-blocking sockets, marshalling/de-marshalling will fail on the first
> >-EAGAIN the routines see, and they have no way to restart where they
> >left off. This is why I serialized to/from an xdrmem buffer, and then used
> >my own read/write code to send to the socket, where I could correctly deal
> >with non-blocking mode.
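
To illustrate what I mean, something along these lines is the general shape -
encode into an xdrmem buffer first, then let our own write loop cope with
short writes and EAGAIN. The struct/function names and the 64k limit below
are made up for the example, this is not the actual qemud code:

/* Illustrative only: serialize into a memory buffer with xdrmem, then
 * write the bytes out ourselves so a short write or EAGAIN on a
 * non-blocking socket just suspends the send until the next POLLOUT. */
#include <rpc/rpc.h>
#include <errno.h>
#include <unistd.h>

#define MSG_MAX 65536

struct outgoing {
    char buf[MSG_MAX];
    unsigned int len;        /* bytes produced by the encoder */
    unsigned int sent;       /* bytes already written to the socket */
};

/* Encode one message; returns 0 on success, -1 if encoding fails/overflows. */
int msg_encode(struct outgoing *out, xdrproc_t filter, void *data)
{
    XDR xdrs;
    xdrmem_create(&xdrs, out->buf, sizeof out->buf, XDR_ENCODE);
    if (!filter(&xdrs, data)) {
        xdr_destroy(&xdrs);
        return -1;
    }
    out->len = xdr_getpos(&xdrs);
    out->sent = 0;
    xdr_destroy(&xdrs);
    return 0;
}

/* Call whenever poll() reports the fd writable; safe to re-enter on EAGAIN. */
int msg_flush(int fd, struct outgoing *out)
{
    while (out->sent < out->len) {
        ssize_t n = write(fd, out->buf + out->sent, out->len - out->sent);
        if (n < 0) {
            if (errno == EAGAIN || errno == EINTR)
                return 0;    /* resume on the next POLLOUT */
            return -1;       /* real error */
        }
        out->sent += n;
    }
    return 1;                /* message fully on the wire */
}

The point is that all the send state lives in the struct, so the event loop
can resume exactly where it left off when the socket becomes writable again -
which is precisely what the xdrrec stream can't do.
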
> Oh right - that totally passed me by. When are you using non-blocking
> sockets? I was under the impression they were all blocking in
> qemu_internal / libvirt_qemud.
The client end is always blocking because it's only got a single connection
to worry about. The server end is completely non-blocking because it has
to deal with multiple client connections as well as I/O for the guest VM
monitor / stderr / stdout, and forking/threading would just add unnecessary
complexity.
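
ie the daemon side is roughly this shape - every fd gets O_NONBLOCK and
everything is driven from a single poll() loop. Purely an illustrative
sketch, not the actual libvirt_qemud code:

/* Sketch of the single-process event-driven server shape: the listening
 * socket, client connections and the guest monitor/stdout/stderr fds
 * all get O_NONBLOCK and are multiplexed through one poll() loop. */
#include <fcntl.h>
#include <poll.h>

int set_nonblock(int fd)
{
    int flags = fcntl(fd, F_GETFL);
    if (flags < 0)
        return -1;
    return fcntl(fd, F_SETFL, flags | O_NONBLOCK);
}

/* One turn of the event loop over whatever fds we are currently watching. */
int event_loop_once(struct pollfd *fds, nfds_t nfds)
{
    if (poll(fds, nfds, -1) < 0)
        return -1;

    for (nfds_t i = 0; i < nfds; i++) {
        if (fds[i].revents & POLLIN) {
            /* accept a new client, read a request, or drain monitor
             * output; a partial read just leaves state to resume later */
        }
        if (fds[i].revents & POLLOUT) {
            /* continue flushing a partially written reply */
        }
    }
    return 0;
}

Keeping it all in one process like this is what avoids the forking/threading
complexity mentioned above.
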
Dan.
--
|=- Red Hat, Engineering, Emerging Technologies, Boston. +1 978 392 2496 -=|
|=- Perl modules: http://search.cpan.org/~danberr/ -=|
|=- Projects: http://freshmeat.net/~danielpb/ -=|
|=- GnuPG: 7D3B9505 F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 -=|