Anthony Liguori wrote:
> It doesn't.  When an app enables events, we would start queuing them,
> but if it didn't consume them in a timely manner (or at all), we would
> start leaking memory badly.  We want to be robust even in the face of
> poorly written management apps/scripts so we need some expiration
> function too.
What happens when an app stops reading the monitor channel for a
little while, and there's enough monitor output to fill TCP buffers or
terminal buffers? Does it block QEMU? Does QEMU drop arbitrary bytes
from the stream, corrupting the output syntax?
If you send events only to the monitor which requests them, then you
could say that they are sent immediately to that monitor, and if the
app stops reading the monitor, whatever normally happens when it stops
reading happens to these events.
In other words, no need for an arbitrary expiration time.  Makes it
deterministic, at least.
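To illustrate, here's a minimal sketch of per-monitor delivery (Python,
all names hypothetical, not existing QEMU code): each monitor owns its
own subscription set and events are written straight to its channel, so
there is no global queue and nothing to expire.

    # Hypothetical sketch: no global event queue.  A slow reader gets
    # the channel's normal flow-control behaviour and affects nobody
    # else.

    class Monitor:
        def __init__(self, channel):
            self.channel = channel        # e.g. a TCP socket or pty wrapper
            self.enabled_events = set()   # filled in by the app's requests

    monitors = []                         # one entry per live connection

    def emit_event(name, args):
        # Called from the emulator side when something happens.
        line = "%s %s\n" % (name, args)
        for mon in monitors:
            if name in mon.enabled_events:
                mon.channel.write(line)   # immediate, per-monitor delivery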
>>> And then in the 2nd monitor channel, a single 'wait' command would
>>> turn off the monitor prompt and make the channel dedicated for just
>>> events, one per line
>>>
>>>  (qemu) wait
>>>  rtc-change UTC+0100
>>>  vnc-client connect 192.46.12.4:9353
>>>  vnc-client disconnect 192.46.12.4:9353
>>>  vnc-client connect 192.46.12.2:9353
>>>  vnc-client disconnect 192.46.12.2:9353
>>
>> IMHO this is more useful than having "wait" just get one event.
>> You'll need a dedicated monitor channel for events anyway, so with
>> one-event-per-wait the management app would have to issue wait in a loop.
> There are two issues with this syntax.  The first is that 'notify
> EVENT' logically acts on the global event set.  That's not necessarily
> what you want.  For instance, libvirt may attach to a monitor and
> issue a 'wait "vm-state vnc-events"' and I may have some wiz-bang app
> that wants to connect on another monitor, and issue a 'wait
> "watchdog-events"'.  My super-deluxe app may sit watching for watchdog
> events to do some sort of fancy RAS stuff or something like that.
I like this idea a lot.
Specifically I like the idea that separate monitoring apps can operate
independently, even watching the same events if they need to.
A natural way to support that is per-monitor (connection?) event sets.
To reliably track state, monitoring apps which aren't in control of
the VM themselves (just monitoring) will need to do this:
1. Request events.
2. _Then_ check the current state of things they care about.
(E.g. is the VM running)
3. _Then_ listen for new events since step 1.
Otherwise you get races similar to the classic signal/select races.
That argues for

    (qemu) notify event-type-list
    ok
    (qemu) query blah blah...
    results
    (qemu) wait

Rather than

    (qemu) wait event-type-list
As the latter form cannot accommodate a race-free monitoring pattern
unless you have a second connection which does the state query after
the first "wait" has been issued. It would be silly to force
monitoring apps to open two monitor connections just to view some
state of QEMU, when one is enough.
Also, the latter form (wait event-type-list) _must_ output something
like "ok, events follow" after it has registered for the events,
otherwise a monitoring app does not know when it's safe to query state
on a second connection to avoid races.
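Concretely, the management side would look something like this (a
sketch only: it assumes a line-based monitor over TCP, the command
names used above, and a single-line "ok" reply; prompt handling is
elided):

    import socket

    def monitor_loop(host, port):
        sock = socket.create_connection((host, port))
        f = sock.makefile("rw")

        def command(cmd):
            f.write(cmd + "\n")
            f.flush()
            return f.readline().strip()   # e.g. "ok" or a query result

        # 1. Register for events first.
        assert command("notify vm-state vnc-events") == "ok"

        # 2. Only then query current state: any change after step 1 is
        #    guaranteed to arrive later as an event, so nothing is missed.
        print("initial state:", command("query vm-state"))

        # 3. Turn the channel into a pure event stream and listen.
        f.write("wait\n")
        f.flush()
        for line in f:
            print("event:", line.strip())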
> The 'notify EVENT' model makes this difficult unless you have notify
> act only on the current monitor session.
That would be nice!
Monitor "sessions" are ill-defined
though b/c of things like tcp:// reconnects so I wouldn't want to do that.
Oh dear. Is defining it insurmountable?
Why can't each TCP (re)connection be a new monitor?
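I.e. something along these lines (a hypothetical sketch, not the real
monitor code; the port number and command parsing are made up):

    import socketserver

    class MonitorConnection(socketserver.StreamRequestHandler):
        # One monitor per TCP (re)connection: the event set is owned by
        # the connection, so a reconnecting client starts with a clean
        # slate and must re-issue its requests.
        def handle(self):
            enabled_events = set()
            for raw in self.rfile:
                words = raw.decode().split()
                if words and words[0] == "notify":
                    enabled_events.update(words[1:])
                    self.wfile.write(b"ok\n")
            # Connection closed: enabled_events simply goes away.

    if __name__ == "__main__":
        socketserver.TCPServer(("", 4444), MonitorConnection).serve_forever()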
>> BTW: "wait" is quite generic.  Maybe we should name the commands
>> notify-*, i.e. have
> Good point, I like wait_event personally.
Me too.
And request_event, rather than notify.
And a way to remove items from the event set.
-- Jamie