Paul Brook wrote:
> It has to be some finite amount. You're right, it's arbitrary, but so
> is every other memory limitation we have in QEMU. You could make it
> user configurable but that's just punting the problem.
>
> You have to do some level of buffering. It's unavoidable. If you
> aren't buffering at the event level, you buffer at the socket level, etc.
>
No you don't. If you use event flags rather than discrete events then you
don't need to buffer at all. You just need to be able to store the state of
each type of event you're going to raise, which should be a bounded set.
This has its own set of issues - typically race conditions or "lost" events if
the client (libvirt) code isn't written carefully - and it means you can't
attach information to an event, only indicate that something happened.
However, if the correct model is used (event-driven polling rather than purely
event-driven) this shouldn't be a problem.
It's just deferring the problem. Consider the case of VNC user
authentication. You want to have events associated with whenever a user
connects and disconnects so you can keep track of who's been on a
virtual machine for security purposes.
In my model, you record the last 10 minutes' worth of events. If a user
aggressively connects/reconnects, you could consume a huge amount of
memory. You could further limit it by recording only a finite number of
events to combat that problem.
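For concreteness, the sort of bounded queue I'm describing might look
roughly like this (sizes and names are purely illustrative):

/* A ring buffer of timestamped events, bounded both by entry count and by
 * age, so an aggressively reconnecting client cannot grow it without limit. */
#include <stddef.h>
#include <time.h>

#define EVENT_QUEUE_MAX 1024            /* hard cap on queued events */
#define EVENT_MAX_AGE   (10 * 60)       /* keep at most 10 minutes of history */

typedef struct {
    time_t when;
    int    type;                        /* e.g. connect/disconnect */
    char   user[64];                    /* payload: who connected */
} Event;

static Event  event_queue[EVENT_QUEUE_MAX];
static size_t head, count;

static void queue_event(const Event *ev)
{
    time_t now = time(NULL);

    /* Drop entries that have aged out of the 10 minute window. */
    while (count && now - event_queue[head].when > EVENT_MAX_AGE) {
        head = (head + 1) % EVENT_QUEUE_MAX;
        count--;
    }
    /* The count cap bounds memory even within the window, at the cost of
     * discarding the oldest entry. */
    if (count == EVENT_QUEUE_MAX) {
        head = (head + 1) % EVENT_QUEUE_MAX;
        count--;
    }
    event_queue[(head + count) % EVENT_QUEUE_MAX] = *ev;
    count++;
}

int main(void)
{
    Event ev = { time(NULL), 1, "fred" };
    queue_event(&ev);
    return 0;
}
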
In your model, the VNC server triggers a user-connected event and a
user-disconnected event. These bits are constantly set and reset, which is
fine. When libvirt gets around to seeing the events, it now wants to know
who's been connecting/disconnecting.
Either you show only the current user, which is not terribly useful, or
you have the VNC server queue a list of the most recent user
connects/disconnects. The problem with this model is that by pushing
the queuing to the event generators, you open up the possibility
that someone is going to get something wrong. It's just more places to
do it poorly.
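And under the flag model, that history has to live inside the VNC server
itself, so each event generator ends up growing its own little queue -
something along these lines (again, purely illustrative):

/* The flag only says "something changed"; the who/when still needs a queue,
 * now duplicated per subsystem. */
#include <string.h>
#include <time.h>

#define VNC_HISTORY 32

struct vnc_conn_record {
    time_t when;
    char   user[64];
    int    connected;                   /* 1 = connect, 0 = disconnect */
};

static struct vnc_conn_record vnc_history[VNC_HISTORY];
static unsigned vnc_history_next;

static void vnc_log_connection(const char *user, int connected)
{
    struct vnc_conn_record *r = &vnc_history[vnc_history_next++ % VNC_HISTORY];

    r->when = time(NULL);
    r->connected = connected;
    strncpy(r->user, user, sizeof(r->user) - 1);
    r->user[sizeof(r->user) - 1] = '\0';
    /* ...and then set the user-connected/disconnected flag as before. */
}

int main(void)
{
    vnc_log_connection("fred", 1);
    vnc_log_connection("fred", 0);
    return 0;
}
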
Regards,
Anthony Liguori