On Wed, Apr 08, 2009 at 08:06:11PM +0100, Jamie Lokier wrote:
> Anthony Liguori wrote:
> > It doesn't. When an app enables events, we would start queuing them,
> > but if it didn't consume them in a timely manner (or at all), we would
> > start leaking memory badly.
> >
> > We want to be robust even in the face of poorly written management
> > apps/scripts so we need some expiration function too.
>
> What happens when an app stops reading the monitor channel for a
> little while, and there's enough monitor output to fill TCP buffers or
> terminal buffers? Does it block QEMU? Does QEMU drop arbitrary bytes
> from the stream, corrupting the output syntax?
One scheme would be to have a small buffer - enough to store, say, 10 events.
If the monitor is blocking for write and the buffer is full, then start to
discard all further events. When the buffer has space again, send an
explicit 'overflow' event informing the app that events have been dropped
from the queue.
In normal circumstances the app would never see this message, but if some
unexpected problem caused the app to stop processing events quickly enough,
it would at least be able to detect that qemu has discarded a lot of events,
and re-synchronize its state by running the appropriate 'info' commands.
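
For illustration, here's a minimal sketch in C of the kind of bounded queue
I have in mind. The names (mon_event_queue, mon_event_push, etc.) are made
up for the example and aren't taken from the existing monitor code; the idea
is just a fixed-size ring buffer plus an overflow flag that turns into a
synthetic 'overflow' event once the app starts draining events again:

#include <stdbool.h>
#include <stddef.h>
#include <string.h>

#define MON_EVENT_QUEUE_MAX 10

struct mon_event {
    char data[256];            /* serialized event text */
};

struct mon_event_queue {
    struct mon_event events[MON_EVENT_QUEUE_MAX];
    size_t head, count;        /* ring buffer state */
    bool overflowed;           /* true once events have been dropped */
};

/* Queue an event; if the buffer is full, drop it and remember the overflow. */
static void mon_event_push(struct mon_event_queue *q, const char *text)
{
    if (q->count == MON_EVENT_QUEUE_MAX) {
        q->overflowed = true;  /* discard; app will be told later */
        return;
    }
    size_t tail = (q->head + q->count) % MON_EVENT_QUEUE_MAX;
    strncpy(q->events[tail].data, text, sizeof(q->events[tail].data) - 1);
    q->events[tail].data[sizeof(q->events[tail].data) - 1] = '\0';
    q->count++;
}

/* Pop the next event into 'out'.  When space has freed up after an
 * overflow, emit a synthetic 'overflow' event first so the app knows
 * it must re-synchronize via 'info' commands. */
static bool mon_event_pop(struct mon_event_queue *q, char *out, size_t len)
{
    if (q->overflowed) {
        q->overflowed = false;
        strncpy(out, "event: overflow", len - 1);
        out[len - 1] = '\0';
        return true;
    }
    if (q->count == 0) {
        return false;
    }
    strncpy(out, q->events[q->head].data, len - 1);
    out[len - 1] = '\0';
    q->head = (q->head + 1) % MON_EVENT_QUEUE_MAX;
    q->count--;
    return true;
}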
Regards,
Daniel
--
|: Red Hat, Engineering, London -o- http://people.redhat.com/berrange/ :|
|: http://libvirt.org -o- http://virt-manager.org -o- http://ovirt.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: GnuPG: 7D3B9505 -o- F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 :|