
On Fri, 23 May 2014 10:48:18 -0300 Marcelo Tosatti <mtosatti@redhat.com> wrote:
On Fri, May 23, 2014 at 03:35:19PM +0200, Markus Armbruster wrote:
Luiz Capitulino <lcapitulino@redhat.com> writes:
On Fri, 23 May 2014 00:50:38 -0300 Marcelo Tosatti <mtosatti@redhat.com> wrote:
Then the guest triggers an RTC update, so qemu sends an event, but the event is lost. Then libvirtd starts again, and doesn't realize the event is lost.
Yes, but that case is also true for any other asynchronous QMP event, and therefore should be handled generically, I suppose (QMP channel data should be maintained across libvirtd shutdown). Luiz?
Maintaining QMP channel data doesn't solve this problem, because all sorts of race conditions are still possible. For example, libvirt could crash after having received the event but before handling it.
The most reliable way we found to solve this problem, and what we do for other events, is to allow libvirt to query the information the event reports. An event is nothing more than a state change in QEMU, and QEMU state is persistent for the lifetime of the VM, so we allow libvirt to query the state of anything that may send an event.
In fact, this is a general rule: when libvirt tracks an event, it also needs a way to poll for the information in the event.
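
As an illustration of that rule, here is a minimal sketch (not libvirt's actual code): a client that (re)connects to QMP simply re-queries the state it tracks instead of assuming it saw every event while it was away. The socket path is made up and a real client would parse the JSON replies; qmp_capabilities and query-balloon are standard QMP commands, query-balloon being the queryable counterpart of the BALLOON_CHANGE event.

/* Sketch only: reconnect to QMP and re-poll state instead of
 * relying on every asynchronous event having been delivered. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

static void qmp_send(int fd, const char *json)
{
    if (write(fd, json, strlen(json)) < 0) {
        perror("write");
    }
}

static void qmp_read(int fd)
{
    char buf[4096];
    ssize_t n = read(fd, buf, sizeof(buf) - 1);

    if (n > 0) {
        buf[n] = '\0';
        printf("%s", buf);      /* a real client would parse the JSON */
    }
}

int main(void)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);

    strncpy(addr.sun_path, "/tmp/qmp.sock", sizeof(addr.sun_path) - 1);
    if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");
        return 1;
    }

    qmp_read(fd);                                    /* QMP greeting */
    qmp_send(fd, "{\"execute\": \"qmp_capabilities\"}");
    qmp_read(fd);

    /* Resynchronize by polling the current state instead of trusting
     * that no BALLOON_CHANGE event was lost while disconnected. */
    qmp_send(fd, "{\"execute\": \"query-balloon\"}");
    qmp_read(fd);

    close(fd);
    return 0;
}

The event then becomes an optimization: the query is the source of truth after any reconnect or crash.
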
I see.
This also seems pretty harmful wrt losing events:
/* Global, one-time initializer to configure the rate limiting
 * and initialize state */
static void monitor_protocol_event_init(void)
{
    /* Limit RTC & BALLOON events to 1 per second */
    monitor_protocol_event_throttle(QEVENT_RTC_CHANGE, 1000);
Better remove it.
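
For context, a throttle like the one quoted above behaves roughly like the following simplified sketch (an illustration only, not the actual monitor.c code): at most one event is emitted per window, and payloads that arrive inside the window are coalesced, so only the most recent one is ever delivered to the client.

/* Simplified sketch of per-event rate limiting: payloads arriving
 * inside the throttle window replace any pending payload, so
 * intermediate payloads are never delivered. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define THROTTLE_MS 1000

static int64_t last_emit_ms = -THROTTLE_MS;
static bool pending;
static int pending_offset;              /* e.g. the RTC offset */

static void emit(int64_t now_ms, int offset)
{
    printf("emit RTC_CHANGE offset=%d at %lld ms\n",
           offset, (long long)now_ms);
    last_emit_ms = now_ms;
    pending = false;
}

/* Called whenever the guest changes the RTC. */
static void event(int64_t now_ms, int offset)
{
    if (now_ms - last_emit_ms >= THROTTLE_MS) {
        emit(now_ms, offset);
    } else {
        /* Inside the window: remember only the latest payload. */
        pending = true;
        pending_offset = offset;
    }
}

/* Called when the throttle timer for the current window fires. */
static void timer_fired(int64_t now_ms)
{
    if (pending) {
        emit(now_ms, pending_offset);
    }
}

int main(void)
{
    event(0, 10);        /* emitted immediately                  */
    event(200, 20);      /* coalesced: payload 20 is replaced... */
    event(700, 30);      /* ...by payload 30                     */
    timer_fired(1000);   /* only offset 30 is ever delivered     */
    return 0;
}
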
You mean this is causing problems for the RTC_CHANGE event, or did you find a general problem? Can you give more details?