[libvirt] [RFC 0/8] Implement async QEMU event handling in libvirtd.

As noted in https://www.redhat.com/archives/libvir-list/2017-May/msg00016.html, the libvirt QEMU driver handles all async events from the main loop. Each event handler needs the per-VM lock to make forward progress. In the case where an async event is received for a VM which already has an RPC running, the main loop is held up contending for the same lock. This impacts scalability, and should be addressed on priority.

Note that libvirt does have a 2-step deferred handling for a few event categories, but (1) that is insufficient, since the blocking happens before the handler can disambiguate which events need to be posted to this other queue, and (2) there needs to be homogeneity.

The current series builds a framework for recording and handling VM events. It initializes a per-VM event queue, and a global event queue pointing to events from all the VMs. Event handling is staggered in 2 stages:

- When an event is received, it is enqueued in the per-VM queue as well as the global queue.
- The global queue is built into the QEMU driver as a threadpool (currently with a single thread).
- Enqueuing a new event triggers the global event worker thread, which then attempts to take the lock for this event's VM.
- If the lock is available, the event worker runs the function handling this event type. Once done, it dequeues this event from the global as well as per-VM queues.
- If the lock is unavailable (i.e. taken by an RPC thread), the event worker leaves this event as-is and picks up the next event.
- Once the RPC thread completes, it looks for events pertaining to the VM in the per-VM event queue. It then processes the events serially (holding the VM lock) until there are no more events remaining for this VM. At this point, the per-VM lock is relinquished.

(A condensed sketch of this dispatch flow follows the patch list below.)

Patch series status: strictly RFC only. No compilation issues, but I have not had a chance to (stress) test it after the rebase to latest master. Note that documentation and test coverage are TBD, since a few open points remain.

Known issues / caveats:
- RPC handling time will become non-deterministic.
- An event will only be "notified" to a client once the RPC for the same VM completes.
- Needs careful consideration in all cases where a QMP event is used to "signal" an RPC thread, else it will deadlock.

Will be happy to drive more discussion in the community and completely implement it.

Prerna Saxena (8):
  Introduce virObjectTrylock()
  QEMU Event handling: Introduce async event helpers in qemu_event.[ch]
  Setup global and per-VM event queues. Also initialize per-VM queues
    when libvirt reconnects to an existing VM.
  Events: Allow monitor to "enqueue" events to a queue. Also introduce a
    framework of handlers for each event type, that can be called when
    the handler is running an event.
  Events: Plumb event handling calls before a domain's APIs complete.
  Code refactor: Move helper functions of doCoreDump*, syncNicRxFilter*,
    and qemuOpenFile* to qemu_process.[ch]
  Fold back the 2-stage event implementation for a few events: Watchdog,
    Monitor EOF, Serial changed, Guest panic, Nic RX filter changed ...
    into single level.
  Initialize the per-VM event queues in context of domain init.
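To make the staggered flow above concrete, here is a condensed, illustrative sketch of the two queue consumers described by the bullets. virEventWorkerScanQueue and virDomainConsumeVMEvents are the names used later in this series, but the bodies below are simplified paraphrases of the dispatch logic, not the actual patch code:

    /* Worker thread: walk the global queue and skip any VM whose lock
     * is currently held by an RPC thread (virObjectTrylock() == 0
     * means the lock was taken). */
    void virEventWorkerScanQueue(void *dummy, void *opaque)
    {
        virQEMUDriverPtr driver = opaque;
        struct _qemuGlobalEventListElement *entry = driver->ev_list->head;

        while (entry) {
            if (entry->vm && virObjectTrylock(entry->vm) == 0) {
                /* Lock acquired: drain every queued event for this VM. */
                virDomainConsumeVMEvents(entry->vm, driver);
                virObjectUnlock(entry->vm);
                return;
            }
            entry = entry->next;   /* VM busy: leave the event for later */
        }
    }

    /* RPC thread, just before releasing the VM lock: drain whatever
     * the worker had to skip while the RPC held the lock. */
    void virDomainConsumeVMEvents(virDomainObjPtr vm, void *opaque)
    {
        virQEMUDriverPtr driver = opaque;
        qemuEventPtr ev;

        while ((ev = virDequeueVMEvent(driver->ev_list, vm))) {
            if (ev->handler)
                ev->handler(ev, opaque);   /* runs with the VM lock held */
            virObjectUnref(vm);            /* ref taken at enqueue time */
            VIR_FREE(ev);
        }
    }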
 src/Makefile.am              |    1 +
 src/conf/domain_conf.h       |    3 +
 src/libvirt_private.syms     |    1 +
 src/qemu/qemu_conf.h         |    4 +
 src/qemu/qemu_driver.c       | 1710 +++++++-----------------------
 src/qemu/qemu_event.c        |  317 +++++++
 src/qemu/qemu_event.h        |  231 +++++
 src/qemu/qemu_monitor.c      |  592 ++++++++++--
 src/qemu/qemu_monitor.h      |   80 +-
 src/qemu/qemu_monitor_json.c |  291 +++---
 src/qemu/qemu_process.c      | 2031 ++++++++++++++++++++++++++++--------
 src/qemu/qemu_process.h      |   88 ++
 src/util/virobject.c         |   26 +
 src/util/virobject.h         |    4 +
 src/util/virthread.c         |    5 +
 src/util/virthread.h         |    1 +
 tests/qemumonitortestutils.c |    2 +-
 17 files changed, 3411 insertions(+), 1976 deletions(-)
 create mode 100644 src/qemu/qemu_event.c
 create mode 100644 src/qemu/qemu_event.h

-- 
2.9.5

This is a wrapper function that:
(1) attempts to take a lock on the object, and
(2) returns gracefully if the object is already locked.

Signed-off-by: Prerna Saxena <saxenap.ltc@gmail.com>
---
 src/libvirt_private.syms |  1 +
 src/util/virobject.c     | 26 ++++++++++++++++++++++++++
 src/util/virobject.h     |  4 ++++
 src/util/virthread.c     |  5 +++++
 src/util/virthread.h     |  1 +
 5 files changed, 37 insertions(+)

diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms
index 9243c55..c0ab8b5 100644
--- a/src/libvirt_private.syms
+++ b/src/libvirt_private.syms
@@ -2362,6 +2362,7 @@ virObjectRWLockableNew;
 virObjectRWLockRead;
 virObjectRWLockWrite;
 virObjectRWUnlock;
+virObjectTrylock;
 virObjectUnlock;
 virObjectUnref;
 
diff --git a/src/util/virobject.c b/src/util/virobject.c
index cfa821c..796ea06 100644
--- a/src/util/virobject.c
+++ b/src/util/virobject.c
@@ -495,6 +495,32 @@ virObjectRWLockWrite(void *anyobj)
 
 
 /**
+ * virObjectTrylock:
+ * @anyobj: any instance of virObjectLockable or virObjectRWLockable
+ *
+ * Attempt to acquire a lock on @anyobj. The lock must be released by
+ * virObjectUnlock.
+ * Returns:
+ * 0: If the lock was successfully taken.
+ * errno: Indicates error.
+ *
+ * The caller is expected to have acquired a reference
+ * on the object before locking it (e.g. virObjectRef).
+ * The object must be unlocked before releasing this
+ * reference.
+ */
+int
+virObjectTrylock(void *anyobj)
+{
+    virObjectLockablePtr obj = virObjectGetLockableObj(anyobj);
+
+    if (!obj)
+        return -1;
+
+    return virMutexTrylock(&obj->lock);
+}
+
+/**
  * virObjectUnlock:
  * @anyobj: any instance of virObjectLockable
  *
diff --git a/src/util/virobject.h b/src/util/virobject.h
index ac6cf22..402ea32 100644
--- a/src/util/virobject.h
+++ b/src/util/virobject.h
@@ -124,6 +124,10 @@ void
 virObjectLock(void *lockableobj)
     ATTRIBUTE_NONNULL(1);
 
+int
+virObjectTrylock(void *lockableobj)
+    ATTRIBUTE_NONNULL(1);
+
 void
 virObjectRWLockRead(void *lockableobj)
     ATTRIBUTE_NONNULL(1);
 
diff --git a/src/util/virthread.c b/src/util/virthread.c
index 6c49515..07b7a3f 100644
--- a/src/util/virthread.c
+++ b/src/util/virthread.c
@@ -89,6 +89,11 @@ void virMutexLock(virMutexPtr m)
     pthread_mutex_lock(&m->lock);
 }
 
+int virMutexTrylock(virMutexPtr m)
+{
+    return pthread_mutex_trylock(&m->lock);
+}
+
 void virMutexUnlock(virMutexPtr m)
 {
     pthread_mutex_unlock(&m->lock);
diff --git a/src/util/virthread.h b/src/util/virthread.h
index e466d9b..8e3da2c 100644
--- a/src/util/virthread.h
+++ b/src/util/virthread.h
@@ -132,6 +132,7 @@ int virMutexInitRecursive(virMutexPtr m) ATTRIBUTE_RETURN_CHECK;
 void virMutexDestroy(virMutexPtr m);
 
 void virMutexLock(virMutexPtr m);
+int virMutexTrylock(virMutexPtr m);
 void virMutexUnlock(virMutexPtr m);

-- 
2.9.5
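A minimal usage sketch of the new API for reviewers (the caller below is illustrative, not part of the patch). Note the return convention: 0 when the lock is taken, an errno value (e.g. EBUSY) when it is already held, and -1 for an object that is not lockable:

    /* Illustrative only: service a VM without blocking on its lock. */
    static void
    exampleTryServiceVM(virDomainObjPtr vm)
    {
        virObjectRef(vm);                  /* hold a ref across the lock */
        if (virObjectTrylock(vm) == 0) {   /* 0 == lock acquired */
            /* ... operate on @vm ... */
            virObjectUnlock(vm);           /* unlock before the unref */
        }
        /* non-zero: another thread holds the lock; retry later */
        virObjectUnref(vm);
    }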

These contain basic functions for setting up event lists (global as well as per-VM). Also included are methods for enqueuing and dequeuing events. Per-event metadata is also encoded herewith.

Signed-off-by: Prerna Saxena <saxenap.ltc@gmail.com>
---
 src/Makefile.am       |   1 +
 src/qemu/qemu_event.c |  75 +++++++++++++++++
 src/qemu/qemu_event.h | 224 ++++++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 300 insertions(+)
 create mode 100644 src/qemu/qemu_event.c
 create mode 100644 src/qemu/qemu_event.h

diff --git a/src/Makefile.am b/src/Makefile.am
index b74856b..73a98ca 100644
--- a/src/Makefile.am
+++ b/src/Makefile.am
@@ -903,6 +903,7 @@ QEMU_DRIVER_SOURCES = \
 	qemu/qemu_domain.c qemu/qemu_domain.h \
 	qemu/qemu_domain_address.c qemu/qemu_domain_address.h \
 	qemu/qemu_cgroup.c qemu/qemu_cgroup.h \
+	qemu/qemu_event.c qemu/qemu_event.h \
 	qemu/qemu_hostdev.c qemu/qemu_hostdev.h \
 	qemu/qemu_hotplug.c qemu/qemu_hotplug.h \
 	qemu/qemu_hotplugpriv.h \
diff --git a/src/qemu/qemu_event.c b/src/qemu/qemu_event.c
new file mode 100644
index 0000000..e27ea0d
--- /dev/null
+++ b/src/qemu/qemu_event.c
@@ -0,0 +1,75 @@
+/*
+ * qemu_event.c:
+ * optimize qemu async event handling.
+ *
+ * Copyright (C) 2017-2026 Nutanix, Inc.
+ * Copyright (C) 2017 Prerna Saxena
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library. If not, see
+ * <http://www.gnu.org/licenses/>.
+ *
+ * Author: Prerna Saxena <prerna.saxena@nutanix.com>
+ */
+
+#include "config.h"
+#include "internal.h"
+#include "qemu_monitor.h"
+#include "qemu_conf.h"
+#include "qemu_event.h"
+#include "qemu_process.h"
+
+#include "virerror.h"
+#include "viralloc.h"
+#include "virlog.h"
+#include "virobject.h"
+#include "virstring.h"
+
+#define VIR_FROM_THIS VIR_FROM_QEMU
+
+VIR_LOG_INIT("qemu.qemu_event");
+
+VIR_ENUM_IMPL(qemuMonitorEvent,
+              QEMU_EVENT_LAST,
+              "ACPI Event", "Balloon Change", "Block IO Error",
+              "Block Job Event",
+              "Block Write Threshold", "Device Deleted",
+              "Device Tray Moved", "Graphics", "Guest Panicked",
+              "Migration", "Migration pass",
+              "Nic RX Filter Changed", "Powerdown", "Reset", "Resume",
+              "RTC Change", "Shutdown", "Stop",
+              "Suspend", "Suspend To Disk",
+              "Virtual Serial Port Change",
+              "Wakeup", "Watchdog");
+
+virQemuEventList* virQemuEventListInit(void)
+{
+    virQemuEventList *ev_list;
+
+    if (VIR_ALLOC(ev_list) < 0) {
+        virReportError(VIR_ERR_INTERNAL_ERROR,
+                       "Unable to allocate virQemuEventList");
+        return NULL;
+    }
+
+    if (virMutexInit(&ev_list->lock) < 0) {
+        virReportError(VIR_ERR_INTERNAL_ERROR,
+                       "%s", _("cannot initialize mutex"));
+        VIR_FREE(ev_list);
+        return NULL;
+    }
+
+    ev_list->head = NULL;
+    ev_list->last = NULL;
+
+    return ev_list;
+}
diff --git a/src/qemu/qemu_event.h b/src/qemu/qemu_event.h
new file mode 100644
index 0000000..9781795
--- /dev/null
+++ b/src/qemu/qemu_event.h
@@ -0,0 +1,224 @@
+/*
+ * qemu_event.h: interaction with QEMU JSON monitor event layer
+ * Carve out improved interactions with qemu.
+ *
+ * Copyright (C) 2017-2026 Nutanix, Inc.
+ * Copyright (C) 2017 Prerna Saxena
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library. If not, see
+ * <http://www.gnu.org/licenses/>.
+ *
+ * Author: Prerna Saxena <prerna.saxena@nutanix.com>
+ */
+
+
+#ifndef QEMU_EVENT_H
+# define QEMU_EVENT_H
+
+# include "internal.h"
+# include "virobject.h"
+
+typedef enum {
+    QEMU_EVENT_ACPI_OST,
+    QEMU_EVENT_BALLOON_CHANGE,
+    QEMU_EVENT_BLOCK_IO_ERROR,
+    QEMU_EVENT_BLOCK_JOB,
+    QEMU_EVENT_BLOCK_WRITE_THRESHOLD,
+    QEMU_EVENT_DEVICE_DELETED,
+    QEMU_EVENT_DEVICE_TRAY_MOVED,
+    QEMU_EVENT_GRAPHICS,
+    QEMU_EVENT_GUEST_PANICKED,
+    QEMU_EVENT_MIGRATION,
+    QEMU_EVENT_MIGRATION_PASS,
+    QEMU_EVENT_NIC_RX_FILTER_CHANGED,
+    QEMU_EVENT_POWERDOWN,
+    QEMU_EVENT_RESET,
+    QEMU_EVENT_RESUME,
+    QEMU_EVENT_RTC_CHANGE,
+    QEMU_EVENT_SHUTDOWN,
+    QEMU_EVENT_STOP,
+    QEMU_EVENT_SUSPEND,
+    QEMU_EVENT_SUSPEND_DISK,
+    QEMU_EVENT_SERIAL_CHANGE,
+    QEMU_EVENT_WAKEUP,
+    QEMU_EVENT_WATCHDOG,
+
+    QEMU_EVENT_LAST,
+} qemuMonitorEventType;
+
+VIR_ENUM_DECL(qemuMonitorEvent);
+
+struct _qemuEvent;
+typedef struct _qemuEvent * qemuEventPtr;
+
+struct qemuEventAcpiOstInfoData {
+    char *alias;
+    char *slotType;
+    char *slot;
+    unsigned int source;
+    unsigned int status;
+};
+
+struct qemuEventBalloonChangeData {
+    unsigned long long actual;
+};
+
+struct qemuEventIOErrorData {
+    char *device;
+    int action;
+    char *reason;
+};
+
+struct qemuEventBlockJobData {
+    int status;
+    char *device;
+    int type;
+};
+
+struct qemuEventBlockThresholdData {
+    char *nodename;
+    unsigned long long threshold;
+    unsigned long long excess;
+};
+
+struct qemuEventDeviceDeletedData {
+    char *device;
+};
+
+struct qemuEventTrayChangeData {
+    char *devAlias;
+    int reason;
+};
+
+struct qemuEventGuestPanicData {
+};
+
+struct qemuEventMigrationStatusData {
+    int status;
+};
+
+struct qemuEventMigrationPassData {
+    int pass;
+};
+
+struct qemuEventNicRxFilterChangeData {
+    char *devAlias;
+};
+
+struct qemuEventRTCChangeData {
+    long long offset;
+};
+
+struct qemuEventGraphicsData {
+    int phase;
+    int localFamilyID;
+    int remoteFamilyID;
+
+    char *localNode;
+    char *localService;
+    char *remoteNode;
+    char *remoteService;
+    char *authScheme;
+    char *x509dname;
+    char *saslUsername;
+};
+
+struct qemuEventSerialChangeData {
+    char *devAlias;
+    bool connected;
+};
+
+struct qemuEventWatchdogData {
+    int action;
+};
+
+struct _qemuEvent {
+    qemuMonitorEventType ev_type;
+    unsigned long ev_id;
+    long long seconds;
+    unsigned int micros;
+    virDomainObjPtr vm;
+    void (*handler)(qemuEventPtr ev, void *opaque);
+    union qemuEventData {
+        struct qemuEventAcpiOstInfoData ev_acpi;
+        struct qemuEventBalloonChangeData ev_balloon;
+        struct qemuEventIOErrorData ev_IOErr;
+        struct qemuEventBlockJobData ev_blockJob;
+        struct qemuEventBlockThresholdData ev_threshold;
+        struct qemuEventDeviceDeletedData ev_deviceDel;
+        struct qemuEventTrayChangeData ev_tray;
+        struct qemuEventGuestPanicData ev_panic;
+        struct qemuEventMigrationStatusData ev_migStatus;
+        struct qemuEventMigrationPassData ev_migPass;
+        struct qemuEventNicRxFilterChangeData ev_nic;
+        struct qemuEventRTCChangeData ev_rtc;
+        struct qemuEventGraphicsData ev_graphics;
+        struct qemuEventSerialChangeData ev_serial;
+        struct qemuEventWatchdogData ev_watchdog;
+    } evData;
+};
+
+
+
+// Define a Global event queue.
+// This is a doubly linked list with qemuEventPtr embedded.
+
+struct _qemuGlobalEventListElement {
+    unsigned long ev_id;
+    virDomainObjPtr vm;
+    struct _qemuGlobalEventListElement *prev;
+    struct _qemuGlobalEventListElement *next;
+};
+
+struct _qemuGlobalEventList {
+    virMutex lock;
+    struct _qemuGlobalEventListElement *head;
+    struct _qemuGlobalEventListElement *last;
+};
+
+/* Global list of event entries of all VMs */
+typedef struct _qemuGlobalEventList virQemuEventList;
+
+struct _qemuVmEventQueueElement {
+    qemuEventPtr ev;
+    struct _qemuVmEventQueueElement *next;
+};
+
+// Define a per-VM event queue.
+struct _qemuVmEventQueue {
+    struct _qemuVmEventQueueElement *head;
+    struct _qemuVmEventQueueElement *last;
+    virMutex lock;
+};
+
+typedef struct _qemuVmEventQueue virQemuVmEventQueue;
+
+
+
+virQemuEventList* virQemuEventListInit(void);
+int virQemuVmEventListInit(virDomainObjPtr vm);
+/**
+ * virEnqueueVMEvent()
+ * Adds a new event to:
+ * - the global event queue.
+ * - the event queue for this VM
+ *
+ */
+int virEnqueueVMEvent(virQemuEventList *qlist, qemuEventPtr ev);
+qemuEventPtr virDequeueVMEvent(virQemuEventList *qlist, virDomainObjPtr vm);
+void virEventWorkerScanQueue(void *dummy, void *opaque);
+void virEventRunHandler(qemuEventPtr ev, void *opaque);
+void virDomainConsumeVMEvents(virDomainObjPtr vm, void *opaque);
+#endif
-- 
2.9.5
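For reviewers, a short illustrative sketch of how these structures are meant to fit together (the caller is hypothetical, error handling is trimmed, and 'qlist' stands for the driver-owned global list that a later patch in the series adds):

    /* Illustrative only: build and enqueue a hypothetical RTC-change
     * event using the structures declared above. */
    static int
    exampleQueueRtcEvent(virQemuEventList *qlist,
                         virDomainObjPtr vm,
                         long long offset)
    {
        qemuEventPtr ev;

        if (VIR_ALLOC(ev) < 0)
            return -1;

        ev->ev_type = QEMU_EVENT_RTC_CHANGE;
        ev->vm = vm;
        ev->evData.ev_rtc.offset = offset;

        virObjectRef(vm);               /* dropped after the event runs */
        if (virEnqueueVMEvent(qlist, ev) < 0) {
            virObjectUnref(vm);
            VIR_FREE(ev);
            return -1;
        }
        return 0;                       /* ev is now owned by the queues */
    }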

Signed-off-by: Prerna Saxena <saxenap.ltc@gmail.com>
---
 src/conf/domain_conf.h  |   3 +
 src/qemu/qemu_conf.h    |   4 +
 src/qemu/qemu_driver.c  |   9 ++
 src/qemu/qemu_event.c   | 229 ++++++++++++++++++++++++++++++++++++++++++++++++
 src/qemu/qemu_event.h   |   1 -
 src/qemu/qemu_process.c |   2 +
 6 files changed, 247 insertions(+), 1 deletion(-)

diff --git a/src/conf/domain_conf.h b/src/conf/domain_conf.h
index a42efcf..7fe38e7 100644
--- a/src/conf/domain_conf.h
+++ b/src/conf/domain_conf.h
@@ -2496,6 +2496,9 @@ struct _virDomainObj {
     unsigned long long original_memlock; /* Original RLIMIT_MEMLOCK, zero if no
                                           * restore will be required later */
+
+    /* Pointer to per-VM Event Queue */
+    void *vmq;
 };
 
 typedef bool (*virDomainObjListACLFilter)(virConnectPtr conn,
diff --git a/src/qemu/qemu_conf.h b/src/qemu/qemu_conf.h
index 13b6f81..e63dc98 100644
--- a/src/qemu/qemu_conf.h
+++ b/src/qemu/qemu_conf.h
@@ -33,6 +33,7 @@
 # include "domain_conf.h"
 # include "snapshot_conf.h"
 # include "domain_event.h"
+# include "qemu_event.h"
 # include "virthread.h"
 # include "security/security_manager.h"
 # include "virpci.h"
@@ -235,6 +236,9 @@ struct _virQEMUDriver {
     /* Immutable pointer, self-locking APIs */
     virDomainObjListPtr domains;
 
+    /* Immutable pointer, contains Qemu Driver Event List */
+    virQemuEventList *ev_list;
+
     /* Immutable pointer */
     char *qemuImgBinary;
 
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 7c6f167..8a005d0 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -52,6 +52,7 @@
 #include "qemu_command.h"
 #include "qemu_parse_command.h"
 #include "qemu_cgroup.h"
+#include "qemu_event.h"
 #include "qemu_hostdev.h"
 #include "qemu_hotplug.h"
 #include "qemu_monitor.h"
@@ -650,6 +651,14 @@ qemuStateInitialize(bool privileged,
     if (!(qemu_driver->domains = virDomainObjListNew()))
         goto error;
 
+    /* Init domain Async QMP events */
+    qemu_driver->ev_list = virQemuEventListInit();
+    if (!qemu_driver->ev_list) {
+        virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+                       _("Unable to initialize QMP event queues"));
+        goto error;
+    }
+
     /* Init domain events */
     qemu_driver->domainEventState = virObjectEventStateNew();
     if (!qemu_driver->domainEventState)
diff --git a/src/qemu/qemu_event.c b/src/qemu/qemu_event.c
index e27ea0d..d52fad2 100644
--- a/src/qemu/qemu_event.c
+++ b/src/qemu/qemu_event.c
@@ -73,3 +73,232 @@ virQemuEventList* virQemuEventListInit(void)
 
     return ev_list;
 }
+
+int virQemuVmEventListInit(virDomainObjPtr vm)
+{
+    virQemuVmEventQueue *vmq;
+
+    if (!vm)
+        return -1;
+
+    if (VIR_ALLOC(vmq) < 0)
+        return -1;
+
+    vmq->last = NULL;
+    vmq->head = NULL;
+
+    if (!virMutexInit(&vmq->lock)) {
+        vm->vmq = vmq;
+        return 0;
+    }
+    return -1;
+}
+
+/**
+ * virEnqueueVMEvent()
+ * Adds a new event to:
+ * - Global event queue
+ * - the event queue for this VM
+ *
+ * Return : 0 (success)
+ *         -1 (failure)
+ */
+int virEnqueueVMEvent(virQemuEventList *qlist, qemuEventPtr ev)
+{
+    struct _qemuGlobalEventListElement *globalEntry;
+    virQemuVmEventQueue *vmq;
+    struct _qemuVmEventQueueElement *vmq_entry;
+
+    if (!qlist || !ev || !ev->vm || !ev->vm->vmq) {
+        virReportError(VIR_ERR_INTERNAL_ERROR,
+                       "No queue list instantiated. "
+                       "Dropping event %d for Vm %s",
+                       ev->ev_type, ev->vm->def->name);
+        goto error;
+    }
+
+    if (VIR_ALLOC(globalEntry) < 0) {
+        virReportError(VIR_ERR_INTERNAL_ERROR,
+                       "Allocation error. "
+                       "Dropping event %d for Vm %s",
+                       ev->ev_type, ev->vm->def->name);
+        goto error;
+    }
+
+    if (VIR_ALLOC(vmq_entry) < 0) {
+        virReportError(VIR_ERR_INTERNAL_ERROR,
+                       "Allocation error. "
+                       "Dropping event %d for Vm %s",
+                       ev->ev_type, ev->vm->def->name);
+        free(globalEntry);
+        goto error;
+    }
+
+    vmq_entry->ev = ev;
+    vmq_entry->next = NULL;
+
+    virObjectRef(ev->vm);
+    globalEntry->vm = ev->vm;
+    globalEntry->next = NULL;
+    globalEntry->prev = NULL;
+
+    /* Note that this order needs to be maintained
+     * for dequeue too else ABBA deadlocks will happen */
+
+    /* Insert into per-VM queue */
+    vmq = ev->vm->vmq;
+
+    virMutexLock(&(vmq->lock));
+    if (vmq->last) {
+        vmq->last->next = vmq_entry;
+        vmq_entry->ev->ev_id = vmq->last->ev->ev_id + 1;
+    } else {
+        vmq->head = vmq_entry;
+        vmq_entry->ev->ev_id = 1;
+    }
+    vmq->last = vmq_entry;
+    globalEntry->ev_id = vmq_entry->ev->ev_id;
+
+    /* Insert the event into the global queue */
+    virMutexLock(&(qlist->lock));
+    if (qlist->last) {
+        qlist->last->next = globalEntry;
+        globalEntry->prev = qlist->last;
+    } else {
+        qlist->head = globalEntry;
+    }
+
+    qlist->last = globalEntry;
+    virMutexUnlock(&(qlist->lock));
+    virMutexUnlock(&(vmq->lock));
+
+    return 0;
+
+error:
+    return -1;
+}
+
+/**
+ * virDequeueVMEvent: Dequeues the first event of this VM from:
+ * - the global event table;
+ * - the per-VM event table;
+ *
+ * Needs to be called with the VM lock held. Else the event is deleted forever
+ * and cannot be picked up by any other worker thread.
+ */
+qemuEventPtr virDequeueVMEvent(virQemuEventList *qlist, virDomainObjPtr vm)
+{
+    qemuEventPtr ret_ev;
+    struct _qemuVmEventQueue *cur_vmq;
+    struct _qemuVmEventQueueElement *vmq_entry;
+    struct _qemuGlobalEventListElement *iter;
+    const char *ref_uuid;
+
+    if (!qlist || !vm || !vm->vmq) {
+        virReportError(VIR_ERR_INTERNAL_ERROR,
+                       "No queue list/VM/event for this vm %s",
+                       vm ? vm->def->name : NULL);
+        goto error;
+    }
+
+    cur_vmq = vm->vmq;
+
+    /* Acquire a ref to the first event from the per-VM event queue */
+    virMutexLock(&(cur_vmq->lock));
+    vmq_entry = cur_vmq->head;
+
+    if (cur_vmq->head == NULL) {
+        virMutexUnlock(&(cur_vmq->lock));
+        goto error;
+    }
+    ref_uuid = (const char *)vmq_entry->ev->vm->def->uuid;
+
+    /* Purge the event from the global queue, and then from the local queue,
+     * so that ev_ids are always consistent. */
+    virMutexLock(&(qlist->lock));
+    iter = qlist->head;
+    while (iter) {
+        if (iter->vm != NULL &&
+            STREQ((const char *)iter->vm->def->uuid, ref_uuid) &&
+            iter->ev_id == vmq_entry->ev->ev_id) {
+            // Found the element, delete it.
+            if (iter->prev != NULL)
+                iter->prev->next = iter->next;
+            else
+                /* This was the first element */
+                qlist->head = iter->next;
+            if (iter->next != NULL)
+                iter->next->prev = iter->prev;
+            else
+                /* This was the last element */
+                qlist->last = iter->prev;
+            break;
+        } else {
+            iter = iter->next;
+        }
+    }
+
+    // Now remove this from the per-VM queue:
+    cur_vmq->head = vmq_entry->next;
+    virMutexUnlock(&(qlist->lock));
+
+    virMutexUnlock(&(cur_vmq->lock));
+
+    ret_ev = vmq_entry->ev;
+    free(vmq_entry);
+    if (iter)
+        free(iter);
+
+    return ret_ev;
+
+error:
+    return NULL;
+}
+
+void
+virEventWorkerScanQueue(void *dummy ATTRIBUTE_UNUSED, void *opaque)
+{
+    virQEMUDriverPtr driver = opaque;
+    struct _qemuGlobalEventListElement *globalEntry = driver->ev_list->head;
+    virDomainObjPtr vm = NULL;
+
+    if (!globalEntry)
+        return;
+
+    VIR_WARN("Running event driver");
+
+    while (globalEntry) {
+        vm = globalEntry->vm;
+        if (vm != NULL) {
+            if (!virObjectTrylock(vm)) {
+                break;
+            }
+        }
+        // Todo: Clear events for irrelevant VMs
+        globalEntry = globalEntry->next;
+    }
+
+    // Scanned the entire list, but no worthy event found. Exit now.
+    if (!globalEntry)
+        return;
+
+    virDomainConsumeVMEvents(vm, opaque);
+
+    virObjectUnlock(vm);
+
+    return;
+}
+
+/* Called under the VM lock */
+void virDomainConsumeVMEvents(virDomainObjPtr vm, void *opaque)
+{
+    virQEMUDriverPtr driver = opaque;
+    qemuEventPtr evt = virDequeueVMEvent(driver->ev_list, vm);
+
+    while (evt) {
+        VIR_WARN("Processing event %d vm %s", evt->ev_type, vm->def->name);
+        if (evt->handler)
+            (evt->handler)(evt, opaque);
+        free(evt);
+        virObjectUnref(vm);
+        evt = virDequeueVMEvent(driver->ev_list, vm);
+    }
+    return;
+}
diff --git a/src/qemu/qemu_event.h b/src/qemu/qemu_event.h
index 9781795..4173834 100644
--- a/src/qemu/qemu_event.h
+++ b/src/qemu/qemu_event.h
@@ -219,6 +219,5 @@ int virQemuVmEventListInit(virDomainObjPtr vm);
 int virEnqueueVMEvent(virQemuEventList *qlist, qemuEventPtr ev);
 qemuEventPtr virDequeueVMEvent(virQemuEventList *qlist, virDomainObjPtr vm);
 void virEventWorkerScanQueue(void *dummy, void *opaque);
-void virEventRunHandler(qemuEventPtr ev, void *opaque);
 void virDomainConsumeVMEvents(virDomainObjPtr vm, void *opaque);
 #endif
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 9f26dfc..8e6498e 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -6941,6 +6941,8 @@ qemuProcessReconnect(void *opaque)
         goto error;
     jobStarted = true;
 
+    if (virQemuVmEventListInit(obj) < 0)
+        goto error;
     /* XXX If we ever gonna change pid file pattern, come up with
      * some intelligence here to deal with old paths. */
     if (!(priv->pidfile = virPidFileBuildPath(cfg->stateDir, obj->def->name)))

-- 
2.9.5
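The cover letter describes the global queue being serviced by a QEMU-driver threadpool with a single worker, but that wiring is not part of this hunk. A plausible sketch using libvirt's existing virThreadPool helpers (the 'eventWorker' field is hypothetical, purely for illustration; virEventWorkerScanQueue already matches the virThreadPoolJobFunc signature):

    /* In qemuStateInitialize(), right after virQemuEventListInit():
     * one worker is enough to serialize global-queue scans. */
    if (!(qemu_driver->eventWorker =
              virThreadPoolNew(1, 1, 0,
                               virEventWorkerScanQueue,
                               qemu_driver)))
        goto error;

    /* Wherever virEnqueueVMEvent() succeeds, kick the worker so it
     * rescans the global queue: */
    ignore_value(virThreadPoolSendJob(qemu_driver->eventWorker, 0, NULL));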

Signed-off-by: Prerna Saxena <saxenap.ltc@gmail.com>
---
 src/qemu/qemu_event.c        |  11 +
 src/qemu/qemu_event.h        |   8 +
 src/qemu/qemu_monitor.c      | 592 +++++++++++++++++++++++++++-----
 src/qemu/qemu_monitor.h      |  80 +++--
 src/qemu/qemu_monitor_json.c | 291 ++++++++++------
 src/qemu/qemu_process.c      | 789 +++++++++++++++++++++++++++----------------
 src/qemu/qemu_process.h      |   2 +
 tests/qemumonitortestutils.c |   2 +-
 8 files changed, 1273 insertions(+), 502 deletions(-)

diff --git a/src/qemu/qemu_event.c b/src/qemu/qemu_event.c
index d52fad2..beb309f 100644
--- a/src/qemu/qemu_event.c
+++ b/src/qemu/qemu_event.c
@@ -50,6 +50,7 @@ VIR_ENUM_IMPL(qemuMonitorEvent,
               "RTC Change", "Shutdown", "Stop",
               "Suspend", "Suspend To Disk",
               "Virtual Serial Port Change",
+              "Spice migrated",
               "Wakeup", "Watchdog");
 
 virQemuEventList* virQemuEventListInit(void)
@@ -302,3 +303,13 @@ void virDomainConsumeVMEvents(virDomainObjPtr vm, void *opaque)
     }
     return;
 }
+
+extern void qemuProcessEmitMonitorEvent(qemuEventPtr ev, void *opaque);
+
+void virEventRunHandler(qemuEventPtr ev, void *opaque)
+{
+    if (!ev)
+        return;
+
+    return qemuProcessEmitMonitorEvent(ev, opaque);
+}
diff --git a/src/qemu/qemu_event.h b/src/qemu/qemu_event.h
index 4173834..8552fd1 100644
--- a/src/qemu/qemu_event.h
+++ b/src/qemu/qemu_event.h
@@ -51,6 +51,7 @@ typedef enum {
     QEMU_EVENT_SUSPEND,
     QEMU_EVENT_SUSPEND_DISK,
     QEMU_EVENT_SERIAL_CHANGE,
+    QEMU_EVENT_SPICE_MIGRATED,
     QEMU_EVENT_WAKEUP,
     QEMU_EVENT_WATCHDOG,
 
@@ -102,7 +103,12 @@ struct qemuEventTrayChangeData {
     int reason;
 };
 
+struct qemuEventShutdownData {
+    virTristateBool guest_initiated;
+};
+
 struct qemuEventGuestPanicData {
+    void *info;
 };
 
 struct qemuEventMigrationStatusData {
@@ -159,6 +165,7 @@ struct _qemuEvent {
         struct qemuEventBlockThresholdData ev_threshold;
         struct qemuEventDeviceDeletedData ev_deviceDel;
         struct qemuEventTrayChangeData ev_tray;
+        struct qemuEventShutdownData ev_shutdown;
         struct qemuEventGuestPanicData ev_panic;
         struct qemuEventMigrationStatusData ev_migStatus;
         struct qemuEventMigrationPassData ev_migPass;
@@ -219,5 +226,6 @@ int virQemuVmEventListInit(virDomainObjPtr vm);
 int virEnqueueVMEvent(virQemuEventList *qlist, qemuEventPtr ev);
 qemuEventPtr virDequeueVMEvent(virQemuEventList *qlist, virDomainObjPtr vm);
 void virEventWorkerScanQueue(void *dummy, void *opaque);
+void virEventRunHandler(qemuEventPtr ev, void *opaque);
 void virDomainConsumeVMEvents(virDomainObjPtr vm, void *opaque);
 #endif
diff --git a/src/qemu/qemu_monitor.c b/src/qemu/qemu_monitor.c
index 7a26785..4e45cf9 100644
--- a/src/qemu/qemu_monitor.c
+++ b/src/qemu/qemu_monitor.c
@@ -34,6 +34,7 @@
 #include "qemu_monitor_json.h"
 #include "qemu_domain.h"
 #include "qemu_process.h"
+#include "qemu_event.h"
 #include "virerror.h"
 #include "viralloc.h"
 #include "virlog.h"
@@ -1316,6 +1317,14 @@ qemuMonitorGetDiskSecret(qemuMonitorPtr mon,
     return ret;
 }
 
+static int
+qemuMonitorEnqueueEvent(qemuMonitorPtr mon, qemuEventPtr ev)
+{
+    int ret = -1;
+
+    QEMU_MONITOR_CALLBACK(mon, ret, domainEnqueueEvent, ev->vm, ev);
+
+    return ret;
+}
 
 int
 qemuMonitorEmitEvent(qemuMonitorPtr mon, const char *event,
@@ -1332,90 +1341,189 @@ qemuMonitorEmitEvent(qemuMonitorPtr mon, const char *event,
 
 
 int
-qemuMonitorEmitShutdown(qemuMonitorPtr mon, virTristateBool guest)
+qemuMonitorEmitShutdown(qemuMonitorPtr mon, virTristateBool guest,
+                        long long seconds, unsigned int micros)
 {
     int ret = -1;
-    VIR_DEBUG("mon=%p guest=%u", mon, guest);
     mon->willhangup = 1;
+    qemuEventPtr ev;
+
+    if (VIR_ALLOC(ev) < 0)
+        return ret;
+
+    ev->ev_type = QEMU_EVENT_SHUTDOWN;
+    ev->vm = mon->vm;
+    ev->seconds = seconds;
+    ev->micros = micros;
+    ev->evData.ev_shutdown.guest_initiated = guest;
 
-    QEMU_MONITOR_CALLBACK(mon, ret, domainShutdown, mon->vm, guest);
+    VIR_DEBUG("Vm %s received shutdown event initiated by %u",
+              mon->vm->def->name, guest);
+    virObjectRef(ev->vm);
+    ret = qemuMonitorEnqueueEvent(mon, ev);
 
     return ret;
 }
 
 
 int
-qemuMonitorEmitReset(qemuMonitorPtr mon)
+qemuMonitorEmitReset(qemuMonitorPtr mon, long long seconds, unsigned int micros)
 {
     int ret = -1;
-    VIR_DEBUG("mon=%p", mon);
+    qemuEventPtr ev;
+
+    if (VIR_ALLOC(ev) < 0)
+        return ret;
+
+    ev->ev_type = QEMU_EVENT_RESET;
+    ev->vm = mon->vm;
+    ev->seconds = seconds;
+    ev->micros = micros;
 
-    QEMU_MONITOR_CALLBACK(mon, ret, domainReset, mon->vm);
+    VIR_DEBUG("Vm %s received reset event", mon->vm->def->name);
+    virObjectRef(ev->vm);
+    ret = qemuMonitorEnqueueEvent(mon, ev);
 
     return ret;
 }
 
 
 int
-qemuMonitorEmitPowerdown(qemuMonitorPtr mon)
+qemuMonitorEmitPowerdown(qemuMonitorPtr mon, long long seconds, unsigned int micros)
 {
     int ret = -1;
-    VIR_DEBUG("mon=%p", mon);
+    qemuEventPtr ev;
+
+    if (VIR_ALLOC(ev) < 0)
+        return ret;
+
+    ev->ev_type = QEMU_EVENT_POWERDOWN;
+    ev->vm = mon->vm;
+    ev->seconds = seconds;
+    ev->micros = micros;
 
-    QEMU_MONITOR_CALLBACK(mon, ret, domainPowerdown, mon->vm);
+    VIR_DEBUG("Vm %s received powerdown event", mon->vm->def->name);
+    virObjectRef(ev->vm);
+    ret = qemuMonitorEnqueueEvent(mon, ev);
 
     return ret;
 }
 
 
 int
-qemuMonitorEmitStop(qemuMonitorPtr mon)
+qemuMonitorEmitStop(qemuMonitorPtr mon, long long seconds, unsigned int micros)
 {
     int ret = -1;
-    VIR_DEBUG("mon=%p", mon);
+    qemuEventPtr ev;
+
+    if (VIR_ALLOC(ev) < 0)
+        return ret;
+
+    ev->ev_type = QEMU_EVENT_STOP;
+    ev->vm = mon->vm;
+    ev->seconds = seconds;
+    ev->micros = micros;
 
-    QEMU_MONITOR_CALLBACK(mon, ret, domainStop, mon->vm);
+    VIR_DEBUG("Vm %s received stop event", mon->vm->def->name);
+    virObjectRef(ev->vm);
+    ret = qemuMonitorEnqueueEvent(mon, ev);
 
     return ret;
 }
 
 
 int
-qemuMonitorEmitResume(qemuMonitorPtr mon)
+qemuMonitorEmitResume(qemuMonitorPtr mon, long long seconds, unsigned int micros)
 {
     int ret = -1;
-    VIR_DEBUG("mon=%p", mon);
+    qemuEventPtr ev;
+
+    if (VIR_ALLOC(ev) < 0)
+        return ret;
+
+    ev->ev_type = QEMU_EVENT_RESUME;
+    ev->vm = mon->vm;
+    ev->seconds = seconds;
+    ev->micros = micros;
 
-    QEMU_MONITOR_CALLBACK(mon, ret, domainResume, mon->vm);
+    VIR_DEBUG("Vm %s received resume event", mon->vm->def->name);
+    virObjectRef(ev->vm);
+    ret = qemuMonitorEnqueueEvent(mon, ev);
 
     return ret;
 }
 
 
 int
 qemuMonitorEmitGuestPanic(qemuMonitorPtr mon,
-                          qemuMonitorEventPanicInfoPtr info)
+                          qemuMonitorEventPanicInfoPtr info,
+                          long long seconds, unsigned int micros)
 {
     int ret = -1;
-    VIR_DEBUG("mon=%p", mon);
-    QEMU_MONITOR_CALLBACK(mon, ret, domainGuestPanic, mon->vm, info);
+    qemuEventPtr ev;
+
+    if (VIR_ALLOC(ev) < 0)
+        return ret;
+
+    ev->ev_type = QEMU_EVENT_GUEST_PANICKED;
+    ev->vm = mon->vm;
+    ev->seconds = seconds;
+    ev->micros = micros;
+    ev->evData.ev_panic.info = info;
+
+    VIR_DEBUG("Vm %s received guest panic event", mon->vm->def->name);
+    virObjectRef(ev->vm);
+    ret = qemuMonitorEnqueueEvent(mon, ev);
 
     return ret;
 }
 
 
 int
-qemuMonitorEmitRTCChange(qemuMonitorPtr mon, long long offset)
+qemuMonitorEmitRTCChange(qemuMonitorPtr mon, long long offset,
+                         long long seconds, unsigned int micros)
 {
     int ret = -1;
-    VIR_DEBUG("mon=%p", mon);
+    qemuEventPtr ev;
+
+    if (VIR_ALLOC(ev) < 0)
+        return ret;
 
-    QEMU_MONITOR_CALLBACK(mon, ret, domainRTCChange, mon->vm, offset);
+    ev->ev_type = QEMU_EVENT_RTC_CHANGE;
+    ev->vm = mon->vm;
+    ev->seconds = seconds;
+    ev->micros = micros;
+    ev->evData.ev_rtc.offset = offset;
+
+    VIR_DEBUG("Vm %s received RTC change event", mon->vm->def->name);
+
+    virObjectRef(ev->vm);
+    ret = qemuMonitorEnqueueEvent(mon, ev);
 
     return ret;
 }
 
 
 int
-qemuMonitorEmitWatchdog(qemuMonitorPtr mon, int action)
+qemuMonitorEmitWatchdog(qemuMonitorPtr mon, int action,
+                        long long seconds, unsigned int micros)
 {
     int ret = -1;
-    VIR_DEBUG("mon=%p", mon);
+    qemuEventPtr ev;
+
+    if (VIR_ALLOC(ev) < 0)
+        return ret;
+
+    ev->ev_type = QEMU_EVENT_WATCHDOG;
+    ev->vm = mon->vm;
+    ev->seconds = seconds;
+    ev->micros = micros;
+    ev->evData.ev_watchdog.action = action;
 
-    QEMU_MONITOR_CALLBACK(mon, ret, domainWatchdog, mon->vm, action);
+    VIR_DEBUG("Vm %s received watchdog event", mon->vm->def->name);
+    virObjectRef(ev->vm);
+    ret = qemuMonitorEnqueueEvent(mon, ev);
 
     return ret;
 }
@@ -1424,13 +1532,39 @@ int
 qemuMonitorEmitIOError(qemuMonitorPtr mon,
                        const char *diskAlias,
                        int action,
-                       const char *reason)
+                       const char *reason,
+                       long long seconds, unsigned int micros)
 {
     int ret = -1;
-    VIR_DEBUG("mon=%p", mon);
+    qemuEventPtr ev;
+    struct qemuEventIOErrorData *d = NULL;
+
+    if (VIR_ALLOC(ev) < 0)
+        return ret;
+
+    ev->ev_type = QEMU_EVENT_BLOCK_IO_ERROR;
+    ev->vm = mon->vm;
+    ev->seconds = seconds;
+    ev->micros = micros;
+    d = &(ev->evData.ev_IOErr);
+    d->action = action;
 
-    QEMU_MONITOR_CALLBACK(mon, ret, domainIOError, mon->vm,
-                          diskAlias, action, reason);
+    if (VIR_STRDUP(d->device, diskAlias) < 0)
+        goto cleanup;
+    if (VIR_STRDUP(d->reason, reason) < 0)
+        goto cleanup;
+
+    VIR_DEBUG("Vm %s received block IO error event", mon->vm->def->name);
+    virObjectRef(ev->vm);
+    ret = qemuMonitorEnqueueEvent(mon, ev);
+    return ret;
+
+ cleanup:
+    if (d->device)
+        VIR_FREE(d->device);
+    VIR_FREE(ev);
 
     return ret;
 }
@@ -1446,15 +1580,73 @@ qemuMonitorEmitGraphics(qemuMonitorPtr mon,
                         const char *remoteService,
                         const char *authScheme,
                         const char *x509dname,
-                        const char *saslUsername)
+                        const char *saslUsername,
+                        long long seconds, unsigned int micros)
 {
     int ret = -1;
-    VIR_DEBUG("mon=%p", mon);
+    qemuEventPtr ev;
+    struct qemuEventGraphicsData *d;
+
+    if (VIR_ALLOC(ev) < 0)
+        return ret;
+
+    ev->ev_type = QEMU_EVENT_GRAPHICS;
+    ev->vm = mon->vm;
+    ev->seconds = seconds;
+    ev->micros = micros;
+    d = &(ev->evData.ev_graphics);
+
+    d->phase = phase;
+    d->localFamilyID = localFamily;
+    d->remoteFamilyID = remoteFamily;
+
+    if (VIR_STRDUP(d->localNode, localNode) < 0)
+        goto cleanup;
+
+    if (VIR_STRDUP(d->localService, localService) < 0)
+        goto cleanup;
+
+    if (VIR_STRDUP(d->remoteNode, remoteNode) < 0)
+        goto cleanup;
+
+    if (VIR_STRDUP(d->remoteService, remoteService) < 0)
+        goto cleanup;
+
+    if (VIR_STRDUP(d->authScheme, authScheme) < 0)
+        goto cleanup;
+
+    if (VIR_STRDUP(d->x509dname, x509dname) < 0)
+        goto cleanup;
 
-    QEMU_MONITOR_CALLBACK(mon, ret, domainGraphics, mon->vm, phase,
-                          localFamily, localNode, localService,
-                          remoteFamily, remoteNode, remoteService,
-                          authScheme, x509dname, saslUsername);
+    if (VIR_STRDUP(d->saslUsername, saslUsername) < 0)
+        goto cleanup;
+
+    VIR_DEBUG("Vm %s received Graphics event", mon->vm->def->name);
+    virObjectRef(ev->vm);
+    ret = qemuMonitorEnqueueEvent(mon, ev);
+    return ret;
+
+ cleanup:
+    if (d->localNode)
+        VIR_FREE(d->localNode);
+    if (d->localService)
+        VIR_FREE(d->localService);
+    if (d->remoteNode)
+        VIR_FREE(d->remoteNode);
+    if (d->remoteService)
+        VIR_FREE(d->remoteService);
+    if (d->authScheme)
+        VIR_FREE(d->authScheme);
+    if (d->x509dname)
+        VIR_FREE(d->x509dname);
+    VIR_FREE(ev);
 
     return ret;
 }
@@ -1462,50 +1654,101 @@ qemuMonitorEmitGraphics(qemuMonitorPtr mon,
 int
 qemuMonitorEmitTrayChange(qemuMonitorPtr mon,
                           const char *devAlias,
-                          int reason)
+                          int reason,
+                          long long seconds,
+                          unsigned int micros)
 {
     int ret = -1;
-    VIR_DEBUG("mon=%p", mon);
+    qemuEventPtr ev;
+    struct qemuEventTrayChangeData *d;
 
-    QEMU_MONITOR_CALLBACK(mon, ret, domainTrayChange, mon->vm,
-                          devAlias, reason);
+    if (VIR_ALLOC(ev) < 0)
+        return ret;
+
+    ev->ev_type = QEMU_EVENT_DEVICE_TRAY_MOVED;
+    ev->vm = mon->vm;
+    ev->seconds = seconds;
+    ev->micros = micros;
+    d = &(ev->evData.ev_tray);
+
+    if (VIR_STRDUP(d->devAlias, devAlias) < 0) {
+        VIR_FREE(ev);
+        return ret;
+    }
+    d->reason = reason;
+
+    VIR_DEBUG("Vm %s received tray change event", mon->vm->def->name);
+    virObjectRef(ev->vm);
+    ret = qemuMonitorEnqueueEvent(mon, ev);
 
     return ret;
 }
 
 
 int
-qemuMonitorEmitPMWakeup(qemuMonitorPtr mon)
+qemuMonitorEmitPMWakeup(qemuMonitorPtr mon, long long seconds,
+                        unsigned int micros)
 {
     int ret = -1;
-    VIR_DEBUG("mon=%p", mon);
+    qemuEventPtr ev;
+
+    if (VIR_ALLOC(ev) < 0)
+        return ret;
 
-    QEMU_MONITOR_CALLBACK(mon, ret, domainPMWakeup, mon->vm);
+    ev->ev_type = QEMU_EVENT_WAKEUP;
+    ev->vm = mon->vm;
+    ev->seconds = seconds;
+    ev->micros = micros;
 
+    VIR_DEBUG("Vm %s received PM Wakeup event", mon->vm->def->name);
+    virObjectRef(ev->vm);
+    ret = qemuMonitorEnqueueEvent(mon, ev);
 
     return ret;
 }
 
 
 int
-qemuMonitorEmitPMSuspend(qemuMonitorPtr mon)
+qemuMonitorEmitPMSuspend(qemuMonitorPtr mon, long long seconds,
+                         unsigned int micros)
 {
     int ret = -1;
-    VIR_DEBUG("mon=%p", mon);
+    qemuEventPtr ev;
 
-    QEMU_MONITOR_CALLBACK(mon, ret, domainPMSuspend, mon->vm);
+    if (VIR_ALLOC(ev) < 0)
+        return ret;
+
+    ev->ev_type = QEMU_EVENT_SUSPEND;
+    ev->vm = mon->vm;
+    ev->seconds = seconds;
+    ev->micros = micros;
+
+    VIR_DEBUG("Vm %s received PM Suspend event", mon->vm->def->name);
+    virObjectRef(ev->vm);
+    ret = qemuMonitorEnqueueEvent(mon, ev);
 
     return ret;
 }
 
 
 int
-qemuMonitorEmitPMSuspendDisk(qemuMonitorPtr mon)
+qemuMonitorEmitPMSuspendDisk(qemuMonitorPtr mon, long long seconds,
+                             unsigned int micros)
 {
     int ret = -1;
-    VIR_DEBUG("mon=%p", mon);
+    qemuEventPtr ev;
 
-    QEMU_MONITOR_CALLBACK(mon, ret, domainPMSuspendDisk, mon->vm);
+    if (VIR_ALLOC(ev) < 0)
+        return ret;
+
+    ev->ev_type = QEMU_EVENT_SUSPEND_DISK;
+    ev->vm = mon->vm;
+    ev->seconds = seconds;
+    ev->micros = micros;
 
+    VIR_DEBUG("Vm %s received PM Suspend Disk event", mon->vm->def->name);
+    virObjectRef(ev->vm);
+    ret = qemuMonitorEnqueueEvent(mon, ev);
 
     return ret;
 }
@@ -1514,51 +1757,122 @@ int
 qemuMonitorEmitBlockJob(qemuMonitorPtr mon,
                         const char *diskAlias,
                         int type,
-                        int status)
+                        int status, long long seconds,
+                        unsigned int micros)
 {
     int ret = -1;
-    VIR_DEBUG("mon=%p", mon);
+    qemuEventPtr ev;
+    struct qemuEventBlockJobData *d = NULL;
+
+    if (VIR_ALLOC(ev) < 0)
+        return ret;
+
+    ev->ev_type = QEMU_EVENT_BLOCK_JOB;
+    ev->vm = mon->vm;
+    ev->seconds = seconds;
+    ev->micros = micros;
+    d = &(ev->evData.ev_blockJob);
+
+    if (VIR_STRDUP(d->device, diskAlias) < 0) {
+        VIR_FREE(ev);
+        return ret;
+    }
 
-    QEMU_MONITOR_CALLBACK(mon, ret, domainBlockJob, mon->vm,
-                          diskAlias, type, status);
+    d->type = type;
+    d->status = status;
 
+    VIR_DEBUG("Vm %s received Block Job event", mon->vm->def->name);
+    virObjectRef(ev->vm);
+    ret = qemuMonitorEnqueueEvent(mon, ev);
 
     return ret;
 }
 
 
 int
 qemuMonitorEmitBalloonChange(qemuMonitorPtr mon,
-                             unsigned long long actual)
+                             unsigned long long actual,
+                             long long seconds,
+                             unsigned int micros)
 {
     int ret = -1;
-    VIR_DEBUG("mon=%p", mon);
+    qemuEventPtr ev;
+
+    if (VIR_ALLOC(ev) < 0)
+        return ret;
+
+    ev->ev_type = QEMU_EVENT_BALLOON_CHANGE;
+    ev->vm = mon->vm;
+    ev->seconds = seconds;
+    ev->micros = micros;
+    ev->evData.ev_balloon.actual = actual;
 
-    QEMU_MONITOR_CALLBACK(mon, ret, domainBalloonChange, mon->vm, actual);
+    VIR_DEBUG("Vm %s received balloon change event", mon->vm->def->name);
+    virObjectRef(ev->vm);
+    ret = qemuMonitorEnqueueEvent(mon, ev);
 
     return ret;
 }
 
 
 int
 qemuMonitorEmitDeviceDeleted(qemuMonitorPtr mon,
-                             const char *devAlias)
+                             const char *devAlias,
+                             long long seconds,
+                             unsigned int micros)
 {
     int ret = -1;
-    VIR_DEBUG("mon=%p", mon);
+    qemuEventPtr ev;
+    struct qemuEventDeviceDeletedData *d = NULL;
 
-    QEMU_MONITOR_CALLBACK(mon, ret, domainDeviceDeleted, mon->vm, devAlias);
+    if (VIR_ALLOC(ev) < 0)
+        return ret;
+
+    ev->ev_type = QEMU_EVENT_DEVICE_DELETED;
+    ev->vm = mon->vm;
+    ev->seconds = seconds;
+    ev->micros = micros;
+    d = &(ev->evData.ev_deviceDel);
+
+    if (VIR_STRDUP(d->device, devAlias) < 0) {
+        VIR_FREE(ev);
+        return ret;
+    }
 
+    VIR_DEBUG("Vm %s received device deleted event for %s", mon->vm->def->name,
+              devAlias);
+    virObjectRef(ev->vm);
+    ret = qemuMonitorEnqueueEvent(mon, ev);
 
     return ret;
 }
 
 
 int
 qemuMonitorEmitNicRxFilterChanged(qemuMonitorPtr mon,
-                                  const char *devAlias)
+                                  const char *devAlias,
+                                  long long seconds,
+                                  unsigned int micros)
 {
     int ret = -1;
-    VIR_DEBUG("mon=%p", mon);
+    qemuEventPtr ev;
+    struct qemuEventNicRxFilterChangeData *d = NULL;
+
+    if (VIR_ALLOC(ev) < 0)
+        return ret;
 
-    QEMU_MONITOR_CALLBACK(mon, ret, domainNicRxFilterChanged, mon->vm, devAlias);
+    ev->ev_type = QEMU_EVENT_NIC_RX_FILTER_CHANGED;
+    ev->vm = mon->vm;
+    ev->seconds = seconds;
+    ev->micros = micros;
+    d = &(ev->evData.ev_nic);
+
+    if (VIR_STRDUP(d->devAlias, devAlias) < 0) {
+        VIR_FREE(ev);
+        return ret;
+    }
 
+    VIR_DEBUG("Vm %s received nic RX filter change event for %s",
+              mon->vm->def->name, devAlias);
+    virObjectRef(ev->vm);
+    ret = qemuMonitorEnqueueEvent(mon, ev);
 
     return ret;
 }
@@ -1566,52 +1880,110 @@ qemuMonitorEmitNicRxFilterChanged(qemuMonitorPtr mon,
 int
 qemuMonitorEmitSerialChange(qemuMonitorPtr mon,
                             const char *devAlias,
-                            bool connected)
+                            bool connected,
+                            long long seconds,
+                            unsigned int micros)
 {
     int ret = -1;
-    VIR_DEBUG("mon=%p, devAlias='%s', connected=%d", mon, devAlias, connected);
+    qemuEventPtr ev;
+    struct qemuEventSerialChangeData *d = NULL;
+
+    if (VIR_ALLOC(ev) < 0)
+        return ret;
 
-    QEMU_MONITOR_CALLBACK(mon, ret, domainSerialChange, mon->vm, devAlias, connected);
+    ev->ev_type = QEMU_EVENT_SERIAL_CHANGE;
+    ev->vm = mon->vm;
+    ev->seconds = seconds;
+    ev->micros = micros;
+    d = &(ev->evData.ev_serial);
+    d->connected = connected;
+
+    if (VIR_STRDUP(d->devAlias, devAlias) < 0) {
+        VIR_FREE(ev);
+        return ret;
+    }
 
+    VIR_DEBUG("Vm %s received Serial change event for %s", mon->vm->def->name,
+              devAlias);
+    virObjectRef(ev->vm);
+    ret = qemuMonitorEnqueueEvent(mon, ev);
 
     return ret;
 }
 
 
 int
-qemuMonitorEmitSpiceMigrated(qemuMonitorPtr mon)
+qemuMonitorEmitSpiceMigrated(qemuMonitorPtr mon, long long seconds,
+                             unsigned int micros)
 {
     int ret = -1;
-    VIR_DEBUG("mon=%p", mon);
+    qemuEventPtr ev;
+
+    if (VIR_ALLOC(ev) < 0)
+        return ret;
 
-    QEMU_MONITOR_CALLBACK(mon, ret, domainSpiceMigrated, mon->vm);
+    ev->ev_type = QEMU_EVENT_SPICE_MIGRATED;
+    ev->vm = mon->vm;
+    ev->seconds = seconds;
+    ev->micros = micros;
 
+    VIR_DEBUG("Vm %s received spice migrated event", mon->vm->def->name);
+    virObjectRef(ev->vm);
+    ret = qemuMonitorEnqueueEvent(mon, ev);
 
     return ret;
 }
 
 
 int
 qemuMonitorEmitMigrationStatus(qemuMonitorPtr mon,
-                               int status)
+                               int status,
+                               long long seconds,
+                               unsigned int micros)
 {
     int ret = -1;
-    VIR_DEBUG("mon=%p, status=%s",
-              mon, NULLSTR(qemuMonitorMigrationStatusTypeToString(status)));
+    qemuEventPtr ev;
 
-    QEMU_MONITOR_CALLBACK(mon, ret, domainMigrationStatus, mon->vm, status);
+    if (VIR_ALLOC(ev) < 0)
+        return ret;
+
+    ev->ev_type = QEMU_EVENT_MIGRATION;
+    ev->vm = mon->vm;
+    ev->seconds = seconds;
+    ev->micros = micros;
+    ev->evData.ev_migStatus.status = status;
+
+    VIR_DEBUG("Vm %s received migration status %s", mon->vm->def->name,
+              NULLSTR(qemuMonitorMigrationStatusTypeToString(status)));
+    virObjectRef(ev->vm);
+    ret = qemuMonitorEnqueueEvent(mon, ev);
 
     return ret;
 }
 
 
 int
 qemuMonitorEmitMigrationPass(qemuMonitorPtr mon,
-                             int pass)
+                             int pass,
+                             long long seconds,
+                             unsigned int micros)
 {
     int ret = -1;
-    VIR_DEBUG("mon=%p, pass=%d", mon, pass);
+    qemuEventPtr ev;
+
+    if (VIR_ALLOC(ev) < 0)
+        return ret;
 
-    QEMU_MONITOR_CALLBACK(mon, ret, domainMigrationPass, mon->vm, pass);
+    ev->ev_type = QEMU_EVENT_MIGRATION_PASS;
+    ev->vm = mon->vm;
+    ev->seconds = seconds;
+    ev->micros = micros;
+    ev->evData.ev_migPass.pass = pass;
 
+    VIR_DEBUG("Vm %s received migration pass %d", mon->vm->def->name,
+              pass);
+    virObjectRef(ev->vm);
+    ret = qemuMonitorEnqueueEvent(mon, ev);
 
     return ret;
 }
@@ -1622,15 +1994,49 @@ qemuMonitorEmitAcpiOstInfo(qemuMonitorPtr mon,
                            const char *slotType,
                            const char *slot,
                            unsigned int source,
-                           unsigned int status)
+                           unsigned int status,
+                           long long seconds,
+                           unsigned int micros)
 {
     int ret = -1;
-    VIR_DEBUG("mon=%p, alias='%s', slotType='%s', slot='%s', source='%u' status=%u",
-              mon, NULLSTR(alias), slotType, slot, source, status);
+    qemuEventPtr ev;
+    struct qemuEventAcpiOstInfoData *d = NULL;
+
+    if (VIR_ALLOC(ev) < 0)
+        return ret;
+
+    ev->ev_type = QEMU_EVENT_ACPI_OST;
+    ev->vm = mon->vm;
+    ev->seconds = seconds;
+    ev->micros = micros;
+    ev->evData.ev_acpi.source = source;
+    ev->evData.ev_acpi.status = status;
+
+    d = &(ev->evData.ev_acpi);
+
+    if (VIR_STRDUP(d->alias, alias) < 0)
+        goto cleanup;
+    if (VIR_STRDUP(d->slotType, slotType) < 0)
+        goto cleanup;
+    if (VIR_STRDUP(d->slot, slot) < 0)
+        goto cleanup;
 
-    QEMU_MONITOR_CALLBACK(mon, ret, domainAcpiOstInfo, mon->vm,
-                          alias, slotType, slot, source, status);
+    VIR_DEBUG("Vm %s received ACPI OST event: alias[%s] slotType [%s] slot[%s]"
+              " status[%d]", mon->vm->def->name, alias, slotType, slot, status);
+    virObjectRef(ev->vm);
+    ret = qemuMonitorEnqueueEvent(mon, ev);
+    return ret;
+
+ cleanup:
+    if (d->alias)
+        VIR_FREE(d->alias);
+    if (d->slotType)
+        VIR_FREE(d->slotType);
+    VIR_FREE(ev);
 
     return ret;
 }
@@ -1639,16 +2045,36 @@ int
 qemuMonitorEmitBlockThreshold(qemuMonitorPtr mon,
                               const char *nodename,
                               unsigned long long threshold,
-                              unsigned long long excess)
+                              unsigned long long excess,
+                              long long seconds,
+                              unsigned int micros)
 {
     int ret = -1;
+    qemuEventPtr ev;
+    struct qemuEventBlockThresholdData *d = NULL;
 
-    VIR_DEBUG("mon=%p, node-name='%s', threshold='%llu', excess='%llu'",
-              mon, nodename, threshold, excess);
+    if (VIR_ALLOC(ev) < 0)
+        return ret;
+
+    ev->ev_type = QEMU_EVENT_BLOCK_WRITE_THRESHOLD;
+    ev->vm = mon->vm;
+    ev->seconds = seconds;
+    ev->micros = micros;
+    ev->evData.ev_threshold.threshold = threshold;
+    ev->evData.ev_threshold.excess = excess;
 
-    QEMU_MONITOR_CALLBACK(mon, ret, domainBlockThreshold, mon->vm,
-                          nodename, threshold, excess);
+    d = &(ev->evData.ev_threshold);
+    if (VIR_STRDUP(d->nodename, nodename) < 0) {
+        VIR_FREE(ev);
+        return ret;
+    }
+
+    VIR_DEBUG("Vm %s received Block Threshold event:"
+              "node-name='%s', threshold='%llu', excess='%llu'",
+              mon->vm->def->name, nodename, threshold, excess);
+    virObjectRef(ev->vm);
+    ret = qemuMonitorEnqueueEvent(mon, ev);
 
     return ret;
 }
diff --git a/src/qemu/qemu_monitor.h b/src/qemu/qemu_monitor.h
index d9c27ac..7b5a984 100644
--- a/src/qemu/qemu_monitor.h
+++ b/src/qemu/qemu_monitor.h
@@ -35,6 +35,7 @@
 # include "device_conf.h"
 # include "cpu/cpu.h"
 # include "util/virgic.h"
+# include "qemu_event.h"
 
 typedef struct _qemuMonitor qemuMonitor;
 typedef qemuMonitor *qemuMonitorPtr;
@@ -89,7 +90,7 @@ struct _qemuMonitorEventPanicInfoHyperv {
 };
 
 typedef struct _qemuMonitorEventPanicInfo qemuMonitorEventPanicInfo;
-typedef qemuMonitorEventPanicInfo *qemuMonitorEventPanicInfoPtr;
+typedef qemuMonitorEventPanicInfo * qemuMonitorEventPanicInfoPtr;
 struct _qemuMonitorEventPanicInfo {
     qemuMonitorEventPanicInfoType type;
     union {
@@ -128,6 +129,10 @@ typedef int (*qemuMonitorDomainEventCallback)(qemuMonitorPtr mon,
                                               unsigned int micros,
                                               const char *details,
                                               void *opaque);
+typedef int (*qemuMonitorDomainEnqueueEventCallback)(qemuMonitorPtr mon,
+                                                     virDomainObjPtr vm,
+                                                     qemuEventPtr ev,
+                                                     void *opaque);
 typedef int (*qemuMonitorDomainShutdownCallback)(qemuMonitorPtr mon,
                                                  virDomainObjPtr vm,
                                                  virTristateBool guest,
@@ -254,6 +259,7 @@ struct _qemuMonitorCallbacks {
     qemuMonitorErrorNotifyCallback errorNotify;
     qemuMonitorDiskSecretLookupCallback diskSecretLookup;
     qemuMonitorDomainEventCallback domainEvent;
+    qemuMonitorDomainEnqueueEventCallback domainEnqueueEvent;
     qemuMonitorDomainShutdownCallback domainShutdown;
     qemuMonitorDomainResetCallback domainReset;
     qemuMonitorDomainPowerdownCallback domainPowerdown;
@@ -345,17 +351,25 @@ int qemuMonitorGetDiskSecret(qemuMonitorPtr mon,
 int qemuMonitorEmitEvent(qemuMonitorPtr mon, const char *event,
                          long long seconds, unsigned int micros,
                          const char *details);
-int qemuMonitorEmitShutdown(qemuMonitorPtr mon, virTristateBool guest);
-int qemuMonitorEmitReset(qemuMonitorPtr mon);
-int qemuMonitorEmitPowerdown(qemuMonitorPtr mon);
-int qemuMonitorEmitStop(qemuMonitorPtr mon);
-int qemuMonitorEmitResume(qemuMonitorPtr mon);
-int qemuMonitorEmitRTCChange(qemuMonitorPtr mon, long long offset);
-int qemuMonitorEmitWatchdog(qemuMonitorPtr mon, int action);
+int qemuMonitorEmitShutdown(qemuMonitorPtr mon, virTristateBool guest,
+                            long long seconds, unsigned int micros);
+int qemuMonitorEmitReset(qemuMonitorPtr mon,
+                         long long seconds, unsigned int micros);
+int qemuMonitorEmitPowerdown(qemuMonitorPtr mon,
+                             long long seconds, unsigned int micros);
+int qemuMonitorEmitStop(qemuMonitorPtr mon,
+                        long long seconds, unsigned int micros);
+int qemuMonitorEmitResume(qemuMonitorPtr mon,
+                          long long seconds, unsigned int micros);
+int qemuMonitorEmitRTCChange(qemuMonitorPtr mon, long long offset,
+                             long long seconds, unsigned int micros);
+int qemuMonitorEmitWatchdog(qemuMonitorPtr mon, int action,
+                            long long seconds, unsigned int micros);
 int qemuMonitorEmitIOError(qemuMonitorPtr mon,
                            const char *diskAlias,
                            int action,
-                           const char *reason);
+                           const char *reason,
+                           long long seconds, unsigned int micros);
 int qemuMonitorEmitGraphics(qemuMonitorPtr mon,
                             int phase,
                             int localFamily,
@@ -366,45 +380,61 @@ int qemuMonitorEmitGraphics(qemuMonitorPtr mon,
                             const char *remoteService,
                             const char *authScheme,
                             const char *x509dname,
-                            const char *saslUsername);
+                            const char *saslUsername,
+                            long long seconds, unsigned int micros);
 int qemuMonitorEmitTrayChange(qemuMonitorPtr mon,
                               const char *devAlias,
-                              int reason);
-int qemuMonitorEmitPMWakeup(qemuMonitorPtr mon);
-int qemuMonitorEmitPMSuspend(qemuMonitorPtr mon);
+                              int reason,
+                              long long seconds, unsigned int micros);
+int qemuMonitorEmitPMWakeup(qemuMonitorPtr mon,
+                            long long seconds, unsigned int micros);
+int qemuMonitorEmitPMSuspend(qemuMonitorPtr mon,
+                             long long seconds, unsigned int micros);
 int qemuMonitorEmitBlockJob(qemuMonitorPtr mon,
                             const char *diskAlias,
                             int type,
-                            int status);
+                            int status,
+                            long long seconds, unsigned int micros);
 int qemuMonitorEmitBalloonChange(qemuMonitorPtr mon,
-                                 unsigned long long actual);
-int qemuMonitorEmitPMSuspendDisk(qemuMonitorPtr mon);
+                                 unsigned long long actual,
+                                 long long seconds, unsigned int micros);
+int qemuMonitorEmitPMSuspendDisk(qemuMonitorPtr mon,
+                                 long long seconds, unsigned int micros);
 int qemuMonitorEmitGuestPanic(qemuMonitorPtr mon,
-                              qemuMonitorEventPanicInfoPtr info);
+                              qemuMonitorEventPanicInfoPtr info,
+                              long long seconds, unsigned int micros);
 int qemuMonitorEmitDeviceDeleted(qemuMonitorPtr mon,
-                                 const char *devAlias);
+                                 const char *devAlias,
+                                 long long seconds, unsigned int micros);
 int qemuMonitorEmitNicRxFilterChanged(qemuMonitorPtr mon,
-                                      const char *devAlias);
+                                      const char *devAlias,
+                                      long long seconds, unsigned int micros);
 int qemuMonitorEmitSerialChange(qemuMonitorPtr mon,
                                 const char *devAlias,
-                                bool connected);
-int qemuMonitorEmitSpiceMigrated(qemuMonitorPtr mon);
+                                bool connected,
+                                long long seconds, unsigned int micros);
+int qemuMonitorEmitSpiceMigrated(qemuMonitorPtr mon,
+                                 long long seconds, unsigned int micros);
 int qemuMonitorEmitMigrationStatus(qemuMonitorPtr mon,
-                                   int status);
+                                   int status,
+                                   long long seconds, unsigned int micros);
 int qemuMonitorEmitMigrationPass(qemuMonitorPtr mon,
-                                 int pass);
+                                 int pass,
+                                 long long seconds, unsigned int micros);
 int qemuMonitorEmitAcpiOstInfo(qemuMonitorPtr mon,
                                const char *alias,
                                const char *slotType,
                                const char *slot,
                                unsigned int source,
-                               unsigned int status);
+                               unsigned int status,
+                               long long seconds, unsigned int micros);
 int qemuMonitorEmitBlockThreshold(qemuMonitorPtr mon,
                                   const char *nodename,
                                   unsigned long long threshold,
-                                  unsigned long long excess);
+                                  unsigned long long excess,
+                                  long long seconds, unsigned int micros);
 
 int qemuMonitorStartCPUs(qemuMonitorPtr mon,
                          virConnectPtr conn);
diff --git a/src/qemu/qemu_monitor_json.c b/src/qemu/qemu_monitor_json.c
index a9070fe..b4c7118 100644
--- a/src/qemu/qemu_monitor_json.c
+++ b/src/qemu/qemu_monitor_json.c
@@ -59,41 +59,73 @@ VIR_LOG_INIT("qemu.qemu_monitor_json");
 
 #define LINE_ENDING "\r\n"
 
-static void qemuMonitorJSONHandleShutdown(qemuMonitorPtr mon, virJSONValuePtr data);
-static void qemuMonitorJSONHandleReset(qemuMonitorPtr mon, virJSONValuePtr data);
-static void qemuMonitorJSONHandlePowerdown(qemuMonitorPtr mon, virJSONValuePtr data);
-static void qemuMonitorJSONHandleStop(qemuMonitorPtr mon, virJSONValuePtr data);
-static void qemuMonitorJSONHandleResume(qemuMonitorPtr mon, virJSONValuePtr data);
-static void qemuMonitorJSONHandleRTCChange(qemuMonitorPtr mon, virJSONValuePtr data);
-static void qemuMonitorJSONHandleWatchdog(qemuMonitorPtr mon, virJSONValuePtr data);
-static void qemuMonitorJSONHandleIOError(qemuMonitorPtr mon, virJSONValuePtr data);
-static void qemuMonitorJSONHandleVNCConnect(qemuMonitorPtr mon, virJSONValuePtr data);
-static void qemuMonitorJSONHandleVNCInitialize(qemuMonitorPtr mon, virJSONValuePtr data);
-static void qemuMonitorJSONHandleVNCDisconnect(qemuMonitorPtr mon, virJSONValuePtr data);
-static void qemuMonitorJSONHandleSPICEConnect(qemuMonitorPtr mon, virJSONValuePtr data);
-static void qemuMonitorJSONHandleSPICEInitialize(qemuMonitorPtr mon, virJSONValuePtr data);
-static void qemuMonitorJSONHandleSPICEDisconnect(qemuMonitorPtr mon, virJSONValuePtr data);
-static void qemuMonitorJSONHandleTrayChange(qemuMonitorPtr mon, virJSONValuePtr data);
-static void qemuMonitorJSONHandlePMWakeup(qemuMonitorPtr mon, virJSONValuePtr data);
-static void qemuMonitorJSONHandlePMSuspend(qemuMonitorPtr mon, virJSONValuePtr data);
-static void qemuMonitorJSONHandleBlockJobCompleted(qemuMonitorPtr mon, virJSONValuePtr data);
-static void qemuMonitorJSONHandleBlockJobCanceled(qemuMonitorPtr mon, virJSONValuePtr data);
-static void qemuMonitorJSONHandleBlockJobReady(qemuMonitorPtr mon, virJSONValuePtr data);
-static void qemuMonitorJSONHandleBalloonChange(qemuMonitorPtr mon, virJSONValuePtr data);
-static void qemuMonitorJSONHandlePMSuspendDisk(qemuMonitorPtr mon, virJSONValuePtr data);
-static void qemuMonitorJSONHandleGuestPanic(qemuMonitorPtr mon, virJSONValuePtr data);
-static void qemuMonitorJSONHandleDeviceDeleted(qemuMonitorPtr mon, virJSONValuePtr data);
-static void qemuMonitorJSONHandleNicRxFilterChanged(qemuMonitorPtr mon, virJSONValuePtr data);
-static void qemuMonitorJSONHandleSerialChange(qemuMonitorPtr mon, virJSONValuePtr data);
-static void qemuMonitorJSONHandleSpiceMigrated(qemuMonitorPtr mon, virJSONValuePtr data);
-static void qemuMonitorJSONHandleMigrationStatus(qemuMonitorPtr mon, virJSONValuePtr data);
-static void qemuMonitorJSONHandleMigrationPass(qemuMonitorPtr mon, virJSONValuePtr data);
-static void qemuMonitorJSONHandleAcpiOstInfo(qemuMonitorPtr mon, virJSONValuePtr data);
-static void qemuMonitorJSONHandleBlockThreshold(qemuMonitorPtr mon, virJSONValuePtr data);
+static void qemuMonitorJSONHandleShutdown(qemuMonitorPtr mon, virJSONValuePtr data,
+                                          long long seconds, unsigned int micros);
+static void qemuMonitorJSONHandleReset(qemuMonitorPtr mon, virJSONValuePtr data,
+                                       long long seconds, unsigned int micros);
+static void qemuMonitorJSONHandlePowerdown(qemuMonitorPtr mon, virJSONValuePtr data,
+                                           long long seconds, unsigned int micros);
+static void qemuMonitorJSONHandleStop(qemuMonitorPtr mon, virJSONValuePtr data,
+                                      long long seconds, unsigned int micros);
+static void qemuMonitorJSONHandleResume(qemuMonitorPtr mon, virJSONValuePtr data,
+                                        long long seconds, unsigned int micros);
+static void qemuMonitorJSONHandleRTCChange(qemuMonitorPtr mon, virJSONValuePtr data,
+                                           long long seconds, unsigned int micros);
+static void qemuMonitorJSONHandleWatchdog(qemuMonitorPtr mon, virJSONValuePtr data,
+                                          long long seconds, unsigned int micros);
+static void qemuMonitorJSONHandleIOError(qemuMonitorPtr mon, virJSONValuePtr data,
+                                         long long seconds, unsigned int micros);
+static void qemuMonitorJSONHandleVNCConnect(qemuMonitorPtr mon, virJSONValuePtr data,
+                                            long long seconds, unsigned int micros);
+static void qemuMonitorJSONHandleVNCInitialize(qemuMonitorPtr mon, virJSONValuePtr data,
+                                               long long seconds, unsigned int micros);
+static void qemuMonitorJSONHandleVNCDisconnect(qemuMonitorPtr mon, virJSONValuePtr data,
+                                               long long seconds, unsigned int micros);
+static void qemuMonitorJSONHandleSPICEConnect(qemuMonitorPtr mon, virJSONValuePtr data,
+                                              long long seconds, unsigned int micros);
+static void qemuMonitorJSONHandleSPICEInitialize(qemuMonitorPtr mon, virJSONValuePtr data,
+                                                 long long seconds, unsigned int micros);
+static void qemuMonitorJSONHandleSPICEDisconnect(qemuMonitorPtr mon, virJSONValuePtr data,
+                                                 long long seconds, unsigned int micros);
+static void qemuMonitorJSONHandleTrayChange(qemuMonitorPtr mon, virJSONValuePtr data,
+                                            long long seconds, unsigned int micros);
+static void qemuMonitorJSONHandlePMWakeup(qemuMonitorPtr mon, virJSONValuePtr data,
+                                          long long seconds, unsigned int micros);
+static void qemuMonitorJSONHandlePMSuspend(qemuMonitorPtr mon, virJSONValuePtr data,
+                                           long long seconds, unsigned int micros);
+static void qemuMonitorJSONHandleBlockJobCompleted(qemuMonitorPtr mon, virJSONValuePtr data,
+                                                   long long seconds, unsigned int micros);
+static void qemuMonitorJSONHandleBlockJobCanceled(qemuMonitorPtr mon, virJSONValuePtr data,
+                                                  long long seconds, unsigned int micros);
+static void qemuMonitorJSONHandleBlockJobReady(qemuMonitorPtr mon, virJSONValuePtr data,
+                                               long long seconds, unsigned int micros);
+static void qemuMonitorJSONHandleBalloonChange(qemuMonitorPtr mon, virJSONValuePtr data,
+                                               long long seconds, unsigned int micros);
+static void qemuMonitorJSONHandlePMSuspendDisk(qemuMonitorPtr mon, virJSONValuePtr data,
+                                               long long seconds, unsigned int micros);
+static void qemuMonitorJSONHandleGuestPanic(qemuMonitorPtr mon, virJSONValuePtr data,
+                                            long long seconds, unsigned int micros);
+static void qemuMonitorJSONHandleDeviceDeleted(qemuMonitorPtr mon, virJSONValuePtr data,
+                                               long long seconds, unsigned int micros);
+static void qemuMonitorJSONHandleNicRxFilterChanged(qemuMonitorPtr mon, virJSONValuePtr data,
+                                                    long long seconds, unsigned int micros);
+static void qemuMonitorJSONHandleSerialChange(qemuMonitorPtr mon, virJSONValuePtr data,
+                                              long long seconds, unsigned int micros);
+static void qemuMonitorJSONHandleSpiceMigrated(qemuMonitorPtr mon, virJSONValuePtr data,
+                                               long long seconds, unsigned int micros);
+static void qemuMonitorJSONHandleMigrationStatus(qemuMonitorPtr mon, virJSONValuePtr data,
+                                                 long long seconds, unsigned int micros);
+static void qemuMonitorJSONHandleMigrationPass(qemuMonitorPtr mon, virJSONValuePtr data,
+                                               long long seconds, unsigned int micros);
+static void qemuMonitorJSONHandleAcpiOstInfo(qemuMonitorPtr mon, virJSONValuePtr data,
+                                             long long seconds, unsigned int micros);
+static void qemuMonitorJSONHandleBlockThreshold(qemuMonitorPtr mon, virJSONValuePtr data,
+                                                long long seconds, unsigned int micros);
 
 typedef struct {
     const char *type;
-    void (*handler)(qemuMonitorPtr mon, virJSONValuePtr data);
+    void (*handler)(qemuMonitorPtr mon, virJSONValuePtr data,
+                    long long seconds, unsigned int micros);
 } qemuEventHandler;
 
 static qemuEventHandler eventHandlers[] = {
@@ -146,7 +178,6 @@ qemuMonitorJSONIOProcessEvent(qemuMonitorPtr mon,
     const char *type;
     qemuEventHandler *handler;
     virJSONValuePtr data;
-    char *details = NULL;
     virJSONValuePtr timestamp;
     long long seconds = -1;
     unsigned int micros = 0;
@@ -161,23 +192,20 @@ qemuMonitorJSONIOProcessEvent(qemuMonitorPtr mon,
     }
 
     /* Not all events have data; and event reporting is best-effort only */
-    if ((data = virJSONValueObjectGet(obj, "data")))
-        details = virJSONValueToString(data, false);
+    ignore_value(data = virJSONValueObjectGet(obj, "data"));
 
     if ((timestamp = virJSONValueObjectGet(obj, "timestamp"))) {
         ignore_value(virJSONValueObjectGetNumberLong(timestamp, "seconds",
                                                      &seconds));
         ignore_value(virJSONValueObjectGetNumberUint(timestamp, "microseconds",
                                                      &micros));
     }
 
-    qemuMonitorEmitEvent(mon, type, seconds, micros, details);
-    VIR_FREE(details);
-
     handler = bsearch(type, eventHandlers, ARRAY_CARDINALITY(eventHandlers),
                       sizeof(eventHandlers[0]), qemuMonitorEventCompare);
     if (handler) {
         VIR_DEBUG("handle %s handler=%p data=%p", type,
                   handler->handler, data);
-        (handler->handler)(mon, data);
+        (handler->handler)(mon, data, seconds, micros);
     }
     return 0;
 }
@@ -523,7 +551,8 @@ qemuMonitorJSONKeywordStringToJSON(const char *str, const char *firstkeyword)
 }
 
 
-static void qemuMonitorJSONHandleShutdown(qemuMonitorPtr mon, virJSONValuePtr data)
+static void qemuMonitorJSONHandleShutdown(qemuMonitorPtr mon, virJSONValuePtr data,
+                                          long long seconds, unsigned int micros)
 {
     bool guest = false;
     virTristateBool guest_initiated = VIR_TRISTATE_BOOL_ABSENT;
@@ -531,27 +560,35 @@ static void qemuMonitorJSONHandleShutdown(qemuMonitorPtr mon, virJSONValuePtr da
     if (data && virJSONValueObjectGetBoolean(data, "guest", &guest) == 0)
         guest_initiated = guest ? VIR_TRISTATE_BOOL_YES : VIR_TRISTATE_BOOL_NO;
 
-    qemuMonitorEmitShutdown(mon, guest_initiated);
+    qemuMonitorEmitShutdown(mon, guest_initiated, seconds, micros);
 }
 
-static void qemuMonitorJSONHandleReset(qemuMonitorPtr mon, virJSONValuePtr data ATTRIBUTE_UNUSED)
+static void qemuMonitorJSONHandleReset(qemuMonitorPtr mon,
+                                       virJSONValuePtr data ATTRIBUTE_UNUSED,
+                                       long long seconds, unsigned int micros)
 {
-    qemuMonitorEmitReset(mon);
+    qemuMonitorEmitReset(mon, seconds, micros);
 }
 
-static void qemuMonitorJSONHandlePowerdown(qemuMonitorPtr mon, virJSONValuePtr data ATTRIBUTE_UNUSED)
+static void qemuMonitorJSONHandlePowerdown(qemuMonitorPtr mon,
+                                           virJSONValuePtr data ATTRIBUTE_UNUSED,
+                                           long long seconds, unsigned int micros)
 {
-    qemuMonitorEmitPowerdown(mon);
+    qemuMonitorEmitPowerdown(mon, seconds, micros);
 }
 
-static void qemuMonitorJSONHandleStop(qemuMonitorPtr mon, virJSONValuePtr data ATTRIBUTE_UNUSED)
+static void qemuMonitorJSONHandleStop(qemuMonitorPtr mon,
+                                      virJSONValuePtr data ATTRIBUTE_UNUSED,
+                                      long long seconds, unsigned int micros)
 {
-    qemuMonitorEmitStop(mon);
+    qemuMonitorEmitStop(mon, seconds, micros);
 }
 
-static void qemuMonitorJSONHandleResume(qemuMonitorPtr mon, virJSONValuePtr data ATTRIBUTE_UNUSED)
+static void qemuMonitorJSONHandleResume(qemuMonitorPtr mon,
+                                        virJSONValuePtr data ATTRIBUTE_UNUSED,
+                                        long long seconds, unsigned int micros)
 {
-    qemuMonitorEmitResume(mon);
+    qemuMonitorEmitResume(mon, seconds, micros);
 }
 
 
@@ -599,7 +636,9 @@ qemuMonitorJSONGuestPanicExtractInfo(virJSONValuePtr data)
 
 static void
 qemuMonitorJSONHandleGuestPanic(qemuMonitorPtr mon,
-                                virJSONValuePtr data)
+                                virJSONValuePtr data,
+                                long long seconds,
+                                unsigned int micros)
 {
     virJSONValuePtr infojson = virJSONValueObjectGetObject(data, "info");
     qemuMonitorEventPanicInfoPtr info = NULL;
@@ -607,25 +646,27 @@ qemuMonitorJSONHandleGuestPanic(qemuMonitorPtr mon,
     if (infojson)
         info = qemuMonitorJSONGuestPanicExtractInfo(infojson);
 
-    qemuMonitorEmitGuestPanic(mon, info);
+    qemuMonitorEmitGuestPanic(mon, info, seconds, micros);
 }
 
 
-static void qemuMonitorJSONHandleRTCChange(qemuMonitorPtr mon, virJSONValuePtr data)
+static void qemuMonitorJSONHandleRTCChange(qemuMonitorPtr mon, virJSONValuePtr data,
+                                           long long seconds, unsigned int micros)
 {
     long long offset = 0;
     if (virJSONValueObjectGetNumberLong(data, "offset", &offset) < 0) {
         VIR_WARN("missing offset in RTC change event");
         offset = 0;
     }
-    qemuMonitorEmitRTCChange(mon, offset);
+    qemuMonitorEmitRTCChange(mon, offset, seconds, micros);
 }
 
 VIR_ENUM_DECL(qemuMonitorWatchdogAction)
 VIR_ENUM_IMPL(qemuMonitorWatchdogAction, VIR_DOMAIN_EVENT_WATCHDOG_LAST,
               "none", "pause", "reset", "poweroff", "shutdown", "debug", "inject-nmi");
 
-static void qemuMonitorJSONHandleWatchdog(qemuMonitorPtr mon, virJSONValuePtr data)
+static void qemuMonitorJSONHandleWatchdog(qemuMonitorPtr mon, virJSONValuePtr data,
+                                          long long seconds, unsigned int micros)
 {
     const char *action;
     int actionID;
@@ -639,7 +680,7 @@ static void qemuMonitorJSONHandleWatchdog(qemuMonitorPtr mon, virJSONValuePtr da
     } else {
         actionID = VIR_DOMAIN_EVENT_WATCHDOG_NONE;
     }
-    qemuMonitorEmitWatchdog(mon, actionID);
+    qemuMonitorEmitWatchdog(mon, actionID, seconds, micros);
 }
 
 VIR_ENUM_DECL(qemuMonitorIOErrorAction)
@@ -648,7 +689,8 @@ VIR_ENUM_IMPL(qemuMonitorIOErrorAction, VIR_DOMAIN_EVENT_IO_ERROR_LAST,
 
 
 static void
-qemuMonitorJSONHandleIOError(qemuMonitorPtr mon, virJSONValuePtr data)
+qemuMonitorJSONHandleIOError(qemuMonitorPtr mon, virJSONValuePtr data,
+                             long long seconds, unsigned int micros)
 {
     const char *device;
     const char *action;
@@ -676,7 +718,7 @@ qemuMonitorJSONHandleIOError(qemuMonitorPtr mon, virJSONValuePtr data)
         actionID = VIR_DOMAIN_EVENT_IO_ERROR_NONE;
     }
 
-    qemuMonitorEmitIOError(mon, device, actionID, reason);
+    qemuMonitorEmitIOError(mon, device, actionID, reason, seconds, micros);
 }
 
 
@@ -688,7 +730,8 @@ VIR_ENUM_IMPL(qemuMonitorGraphicsAddressFamily,
 static void
 qemuMonitorJSONHandleGraphicsVNC(qemuMonitorPtr mon,
                                  virJSONValuePtr data,
-                                 int phase)
+                                 int phase,
+                                 long long seconds, unsigned int micros)
 {
     const char *localNode, *localService, *localFamily;
     const char *remoteNode, *remoteService, *remoteFamily;
@@ -753,31 +796,39 @@ qemuMonitorJSONHandleGraphicsVNC(qemuMonitorPtr mon,
 
     qemuMonitorEmitGraphics(mon, phase, localFamilyID, localNode, localService,
                             remoteFamilyID, remoteNode, remoteService,
-                            authScheme, x509dname, saslUsername);
+                            authScheme, x509dname, saslUsername,
+                            seconds, micros);
 }
 
 
-static void qemuMonitorJSONHandleVNCConnect(qemuMonitorPtr mon, virJSONValuePtr data)
+static void qemuMonitorJSONHandleVNCConnect(qemuMonitorPtr mon, virJSONValuePtr data,
+                                            long long seconds, unsigned int micros)
 {
-    qemuMonitorJSONHandleGraphicsVNC(mon, data, VIR_DOMAIN_EVENT_GRAPHICS_CONNECT);
+    qemuMonitorJSONHandleGraphicsVNC(mon, data, VIR_DOMAIN_EVENT_GRAPHICS_CONNECT,
+                                     seconds, micros);
 }
 
 
-static void qemuMonitorJSONHandleVNCInitialize(qemuMonitorPtr mon, virJSONValuePtr data)
+static void qemuMonitorJSONHandleVNCInitialize(qemuMonitorPtr mon, virJSONValuePtr data,
+                                               long long seconds, unsigned int micros)
 {
-    qemuMonitorJSONHandleGraphicsVNC(mon, data, VIR_DOMAIN_EVENT_GRAPHICS_INITIALIZE);
+    qemuMonitorJSONHandleGraphicsVNC(mon, data, VIR_DOMAIN_EVENT_GRAPHICS_INITIALIZE,
+                                     seconds, micros);
 }
 
 
-static void qemuMonitorJSONHandleVNCDisconnect(qemuMonitorPtr mon, virJSONValuePtr data)
+static void qemuMonitorJSONHandleVNCDisconnect(qemuMonitorPtr mon, virJSONValuePtr data,
+                                               long long seconds, unsigned int micros)
 {
-    qemuMonitorJSONHandleGraphicsVNC(mon, data, VIR_DOMAIN_EVENT_GRAPHICS_DISCONNECT);
+    qemuMonitorJSONHandleGraphicsVNC(mon, data, VIR_DOMAIN_EVENT_GRAPHICS_DISCONNECT,
+                                     seconds, micros);
 }
 
 
 static void
 qemuMonitorJSONHandleGraphicsSPICE(qemuMonitorPtr mon,
                                    virJSONValuePtr data,
-                                   int phase)
+                                   int phase,
+                                   long long seconds, unsigned int micros)
 {
     const char *lhost, *lport, *lfamily;
     const char *rhost, *rport, *rfamily;
@@ -834,31 +885,39 @@ qemuMonitorJSONHandleGraphicsSPICE(qemuMonitorPtr mon,
     }
 
     qemuMonitorEmitGraphics(mon, phase, lfamilyID, lhost, lport, rfamilyID,
-                            rhost, rport,
auth, NULL, NULL); + rhost, rport, auth, NULL, NULL, + seconds, micros); } -static void qemuMonitorJSONHandleSPICEConnect(qemuMonitorPtr mon, virJSONValuePtr data) +static void qemuMonitorJSONHandleSPICEConnect(qemuMonitorPtr mon, virJSONValuePtr data, + long long seconds, unsigned int micros) { - qemuMonitorJSONHandleGraphicsSPICE(mon, data, VIR_DOMAIN_EVENT_GRAPHICS_CONNECT); + qemuMonitorJSONHandleGraphicsSPICE(mon, data, VIR_DOMAIN_EVENT_GRAPHICS_CONNECT, + seconds, micros); } -static void qemuMonitorJSONHandleSPICEInitialize(qemuMonitorPtr mon, virJSONValuePtr data) +static void qemuMonitorJSONHandleSPICEInitialize(qemuMonitorPtr mon, virJSONValuePtr data, + long long seconds, unsigned int micros) { - qemuMonitorJSONHandleGraphicsSPICE(mon, data, VIR_DOMAIN_EVENT_GRAPHICS_INITIALIZE); + qemuMonitorJSONHandleGraphicsSPICE(mon, data, VIR_DOMAIN_EVENT_GRAPHICS_INITIALIZE, + seconds, micros); } -static void qemuMonitorJSONHandleSPICEDisconnect(qemuMonitorPtr mon, virJSONValuePtr data) +static void qemuMonitorJSONHandleSPICEDisconnect(qemuMonitorPtr mon, virJSONValuePtr data, + long long seconds, unsigned int micros) { - qemuMonitorJSONHandleGraphicsSPICE(mon, data, VIR_DOMAIN_EVENT_GRAPHICS_DISCONNECT); + qemuMonitorJSONHandleGraphicsSPICE(mon, data, VIR_DOMAIN_EVENT_GRAPHICS_DISCONNECT, + seconds, micros); } static void qemuMonitorJSONHandleBlockJobImpl(qemuMonitorPtr mon, virJSONValuePtr data, - int event) + int event, + long long seconds, unsigned int micros) { const char *device; const char *type_str; @@ -908,12 +967,13 @@ qemuMonitorJSONHandleBlockJobImpl(qemuMonitorPtr mon, } out: - qemuMonitorEmitBlockJob(mon, device, type, event); + qemuMonitorEmitBlockJob(mon, device, type, event, seconds, micros); } static void qemuMonitorJSONHandleTrayChange(qemuMonitorPtr mon, - virJSONValuePtr data) + virJSONValuePtr data, + long long seconds, unsigned int micros) { const char *devAlias = NULL; bool trayOpened; @@ -934,50 +994,58 @@ qemuMonitorJSONHandleTrayChange(qemuMonitorPtr mon, else reason = VIR_DOMAIN_EVENT_TRAY_CHANGE_CLOSE; - qemuMonitorEmitTrayChange(mon, devAlias, reason); + qemuMonitorEmitTrayChange(mon, devAlias, reason, seconds, micros); } static void qemuMonitorJSONHandlePMWakeup(qemuMonitorPtr mon, - virJSONValuePtr data ATTRIBUTE_UNUSED) + virJSONValuePtr data ATTRIBUTE_UNUSED, + long long seconds, unsigned int micros) { - qemuMonitorEmitPMWakeup(mon); + qemuMonitorEmitPMWakeup(mon, seconds, micros); } static void qemuMonitorJSONHandlePMSuspend(qemuMonitorPtr mon, - virJSONValuePtr data ATTRIBUTE_UNUSED) + virJSONValuePtr data ATTRIBUTE_UNUSED, + long long seconds, unsigned int micros) { - qemuMonitorEmitPMSuspend(mon); + qemuMonitorEmitPMSuspend(mon, seconds, micros); } static void qemuMonitorJSONHandleBlockJobCompleted(qemuMonitorPtr mon, - virJSONValuePtr data) + virJSONValuePtr data, + long long seconds, unsigned int micros) { qemuMonitorJSONHandleBlockJobImpl(mon, data, - VIR_DOMAIN_BLOCK_JOB_COMPLETED); + VIR_DOMAIN_BLOCK_JOB_COMPLETED, + seconds, micros); } static void qemuMonitorJSONHandleBlockJobCanceled(qemuMonitorPtr mon, - virJSONValuePtr data) + virJSONValuePtr data, + long long seconds, unsigned int micros) { qemuMonitorJSONHandleBlockJobImpl(mon, data, - VIR_DOMAIN_BLOCK_JOB_CANCELED); + VIR_DOMAIN_BLOCK_JOB_CANCELED, + seconds, micros); } static void qemuMonitorJSONHandleBlockJobReady(qemuMonitorPtr mon, - virJSONValuePtr data) + virJSONValuePtr data, + long long seconds, unsigned int micros) { qemuMonitorJSONHandleBlockJobImpl(mon, data, - 
VIR_DOMAIN_BLOCK_JOB_READY); + VIR_DOMAIN_BLOCK_JOB_READY, seconds, micros); } static void qemuMonitorJSONHandleBalloonChange(qemuMonitorPtr mon, - virJSONValuePtr data) + virJSONValuePtr data, + long long seconds, unsigned int micros) { unsigned long long actual = 0; if (virJSONValueObjectGetNumberUlong(data, "actual", &actual) < 0) { @@ -985,18 +1053,20 @@ qemuMonitorJSONHandleBalloonChange(qemuMonitorPtr mon, return; } actual = VIR_DIV_UP(actual, 1024); - qemuMonitorEmitBalloonChange(mon, actual); + qemuMonitorEmitBalloonChange(mon, actual, seconds, micros); } static void qemuMonitorJSONHandlePMSuspendDisk(qemuMonitorPtr mon, - virJSONValuePtr data ATTRIBUTE_UNUSED) + virJSONValuePtr data ATTRIBUTE_UNUSED, + long long seconds, unsigned int micros) { - qemuMonitorEmitPMSuspendDisk(mon); + qemuMonitorEmitPMSuspendDisk(mon, seconds, micros); } static void -qemuMonitorJSONHandleDeviceDeleted(qemuMonitorPtr mon, virJSONValuePtr data) +qemuMonitorJSONHandleDeviceDeleted(qemuMonitorPtr mon, virJSONValuePtr data, + long long seconds, unsigned int micros) { const char *device; @@ -1005,12 +1075,13 @@ qemuMonitorJSONHandleDeviceDeleted(qemuMonitorPtr mon, virJSONValuePtr data) return; } - qemuMonitorEmitDeviceDeleted(mon, device); + qemuMonitorEmitDeviceDeleted(mon, device, seconds, micros); } static void -qemuMonitorJSONHandleNicRxFilterChanged(qemuMonitorPtr mon, virJSONValuePtr data) +qemuMonitorJSONHandleNicRxFilterChanged(qemuMonitorPtr mon, virJSONValuePtr data, + long long seconds, unsigned int micros) { const char *name; @@ -1019,13 +1090,14 @@ qemuMonitorJSONHandleNicRxFilterChanged(qemuMonitorPtr mon, virJSONValuePtr data return; } - qemuMonitorEmitNicRxFilterChanged(mon, name); + qemuMonitorEmitNicRxFilterChanged(mon, name, seconds, micros); } static void qemuMonitorJSONHandleSerialChange(qemuMonitorPtr mon, - virJSONValuePtr data) + virJSONValuePtr data, + long long seconds, unsigned int micros) { const char *name; bool connected; @@ -1040,21 +1112,23 @@ qemuMonitorJSONHandleSerialChange(qemuMonitorPtr mon, return; } - qemuMonitorEmitSerialChange(mon, name, connected); + qemuMonitorEmitSerialChange(mon, name, connected, seconds, micros); } static void qemuMonitorJSONHandleSpiceMigrated(qemuMonitorPtr mon, - virJSONValuePtr data ATTRIBUTE_UNUSED) + virJSONValuePtr data ATTRIBUTE_UNUSED, + long long seconds, unsigned int micros) { - qemuMonitorEmitSpiceMigrated(mon); + qemuMonitorEmitSpiceMigrated(mon, seconds, micros); } static void qemuMonitorJSONHandleMigrationStatus(qemuMonitorPtr mon, - virJSONValuePtr data) + virJSONValuePtr data, + long long seconds, unsigned int micros) { const char *str; int status; @@ -1069,13 +1143,14 @@ qemuMonitorJSONHandleMigrationStatus(qemuMonitorPtr mon, return; } - qemuMonitorEmitMigrationStatus(mon, status); + qemuMonitorEmitMigrationStatus(mon, status, seconds, micros); } static void qemuMonitorJSONHandleMigrationPass(qemuMonitorPtr mon, - virJSONValuePtr data) + virJSONValuePtr data, + long long seconds, unsigned int micros) { int pass; @@ -1084,12 +1159,13 @@ qemuMonitorJSONHandleMigrationPass(qemuMonitorPtr mon, return; } - qemuMonitorEmitMigrationPass(mon, pass); + qemuMonitorEmitMigrationPass(mon, pass, seconds, micros); } static void -qemuMonitorJSONHandleAcpiOstInfo(qemuMonitorPtr mon, virJSONValuePtr data) +qemuMonitorJSONHandleAcpiOstInfo(qemuMonitorPtr mon, virJSONValuePtr data, + long long seconds, unsigned int micros) { virJSONValuePtr info; const char *alias; @@ -1116,7 +1192,8 @@ qemuMonitorJSONHandleAcpiOstInfo(qemuMonitorPtr mon, 
virJSONValuePtr data) if (virJSONValueObjectGetNumberUint(info, "status", &status) < 0) goto error; - qemuMonitorEmitAcpiOstInfo(mon, alias, slotType, slot, source, status); + qemuMonitorEmitAcpiOstInfo(mon, alias, slotType, slot, source, status, + seconds, micros); return; error: @@ -1126,7 +1203,8 @@ qemuMonitorJSONHandleAcpiOstInfo(qemuMonitorPtr mon, virJSONValuePtr data) static void -qemuMonitorJSONHandleBlockThreshold(qemuMonitorPtr mon, virJSONValuePtr data) +qemuMonitorJSONHandleBlockThreshold(qemuMonitorPtr mon, virJSONValuePtr data, + long long seconds, unsigned int micros) { const char *nodename; unsigned long long threshold; @@ -1141,7 +1219,8 @@ qemuMonitorJSONHandleBlockThreshold(qemuMonitorPtr mon, virJSONValuePtr data) if (virJSONValueObjectGetNumberUlong(data, "amount-exceeded", &excess) < 0) goto error; - qemuMonitorEmitBlockThreshold(mon, nodename, threshold, excess); + qemuMonitorEmitBlockThreshold(mon, nodename, threshold, excess, + seconds, micros); return; error: diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c index 8e6498e..ee8bae5 100644 --- a/src/qemu/qemu_process.c +++ b/src/qemu/qemu_process.c @@ -82,6 +82,12 @@ VIR_LOG_INIT("qemu.qemu_process"); +typedef struct { + qemuMonitorEventType type; + void (*handler_func)(qemuEventPtr ev, void *opaque); +} qemuEventFuncTable; + + /** * qemuProcessRemoveDomainStatus * @@ -474,20 +480,24 @@ qemuProcessFindVolumeQcowPassphrase(qemuMonitorPtr mon ATTRIBUTE_UNUSED, return ret; } - -static int -qemuProcessHandleReset(qemuMonitorPtr mon ATTRIBUTE_UNUSED, - virDomainObjPtr vm, - void *opaque) +static void +qemuProcessEventHandleReset(qemuEventPtr ev, + void *opaque) { virQEMUDriverPtr driver = opaque; virObjectEventPtr event; qemuDomainObjPrivatePtr priv; + virDomainObjPtr vm; virQEMUDriverConfigPtr cfg = virQEMUDriverGetConfig(driver); - int ret = -1; - virObjectLock(vm); + if (!ev) { + virObjectUnref(cfg); + return; + } + if (!ev->vm) { + VIR_INFO("Dropping reset event for unknown VM"); + virObjectUnref(cfg); + return; + } + vm = ev->vm; event = virDomainEventRebootNewFromObj(vm); priv = vm->privateData; if (priv->agent) @@ -516,12 +526,10 @@ qemuProcessHandleReset(qemuMonitorPtr mon ATTRIBUTE_UNUSED, qemuDomainObjEndJob(driver, vm); } - ret = 0; cleanup: - virObjectUnlock(vm); qemuDomainEventQueue(driver, event); virObjectUnref(cfg); - return ret; + return; } @@ -623,60 +631,69 @@ qemuProcessShutdownOrReboot(virQEMUDriverPtr driver, } -static int -qemuProcessHandleEvent(qemuMonitorPtr mon ATTRIBUTE_UNUSED, - virDomainObjPtr vm, - const char *eventName, - long long seconds, - unsigned int micros, - const char *details, - void *opaque) + +void +qemuProcessEmitMonitorEvent(qemuEventPtr ev, + void *opaque) { virQEMUDriverPtr driver = opaque; virObjectEventPtr event = NULL; + virDomainObjPtr vm; + const char *eventName; - VIR_DEBUG("vm=%p", vm); + if (!ev || !ev->vm) + return; + + vm = ev->vm; + + eventName = qemuMonitorEventTypeToString(ev->ev_type); VIR_DEBUG("vm=%s monitor event %s", vm->def->name, eventName); - virObjectLock(vm); event = virDomainQemuMonitorEventNew(vm->def->id, vm->def->name, vm->def->uuid, eventName, - seconds, micros, details); - - virObjectUnlock(vm); - qemuDomainEventQueue(driver, event); + ev->seconds, ev->micros, NULL); + if (event) + qemuDomainEventQueue(driver, event); - return 0; + return; } -static int -qemuProcessHandleShutdown(qemuMonitorPtr mon ATTRIBUTE_UNUSED, - virDomainObjPtr vm, - virTristateBool guest_initiated, - void *opaque) +static void +qemuProcessEventHandleShutdown(qemuEventPtr ev, + void *opaque) { virQEMUDriverPtr driver 
= opaque; qemuDomainObjPrivatePtr priv; + virDomainObjPtr vm; + virTristateBool guest_initiated; virObjectEventPtr event = NULL; virQEMUDriverConfigPtr cfg = virQEMUDriverGetConfig(driver); int detail = 0; - VIR_DEBUG("vm=%p", vm); + if (!ev) + goto exit; + vm = ev->vm; - virObjectLock(vm); + if (!ev->vm) { + VIR_WARN("Unable to locate VM, dropping SHUTDOWN event"); + goto exit; + } + VIR_DEBUG("Processing SHUTDOWN event for VM %s", vm->def->name); priv = vm->privateData; if (priv->gotShutdown) { VIR_DEBUG("Ignoring repeated SHUTDOWN event from domain %s", vm->def->name); - goto unlock; + goto exit; } else if (!virDomainObjIsActive(vm)) { VIR_DEBUG("Ignoring SHUTDOWN event from inactive domain %s", vm->def->name); - goto unlock; + goto exit; } priv->gotShutdown = true; - + guest_initiated = ev->evData.ev_shutdown.guest_initiated; VIR_DEBUG("Transitioned guest %s to shutdown state", vm->def->name); virDomainObjSetState(vm, @@ -710,34 +727,39 @@ qemuProcessHandleShutdown(qemuMonitorPtr mon ATTRIBUTE_UNUSED, qemuAgentNotifyEvent(priv->agent, QEMU_AGENT_EVENT_SHUTDOWN); qemuProcessShutdownOrReboot(driver, vm); - - unlock: - virObjectUnlock(vm); qemuDomainEventQueue(driver, event); - virObjectUnref(cfg); - return 0; +exit: + virObjectUnref(cfg); + return; } - -static int -qemuProcessHandleStop(qemuMonitorPtr mon ATTRIBUTE_UNUSED, - virDomainObjPtr vm, - void *opaque) +static void +qemuProcessEventHandleStop(qemuEventPtr ev, + void *opaque) { virQEMUDriverPtr driver = opaque; virObjectEventPtr event = NULL; virDomainPausedReason reason = VIR_DOMAIN_PAUSED_UNKNOWN; virDomainEventSuspendedDetailType detail = VIR_DOMAIN_EVENT_SUSPENDED_PAUSED; virQEMUDriverConfigPtr cfg = virQEMUDriverGetConfig(driver); + virDomainObjPtr vm; + + if (!ev) + goto exit; + vm = ev->vm; + + if (!ev->vm) { + VIR_WARN("Unable to locate VM, dropping STOP event"); + goto exit; + } - virObjectLock(vm); if (virDomainObjGetState(vm, NULL) == VIR_DOMAIN_RUNNING) { qemuDomainObjPrivatePtr priv = vm->privateData; if (priv->gotShutdown) { VIR_DEBUG("Ignoring STOP event after SHUTDOWN"); - goto unlock; + goto exit; } if (priv->job.asyncJob == QEMU_ASYNC_JOB_MIGRATION_OUT) { @@ -776,31 +798,38 @@ qemuProcessHandleStop(qemuMonitorPtr mon ATTRIBUTE_UNUSED, } } - unlock: - virObjectUnlock(vm); +exit: qemuDomainEventQueue(driver, event); virObjectUnref(cfg); - return 0; + return; } -static int -qemuProcessHandleResume(qemuMonitorPtr mon ATTRIBUTE_UNUSED, - virDomainObjPtr vm, +static void +qemuProcessEventHandleResume(qemuEventPtr ev, void *opaque) { virQEMUDriverPtr driver = opaque; virObjectEventPtr event = NULL; + virDomainObjPtr vm; virQEMUDriverConfigPtr cfg = virQEMUDriverGetConfig(driver); - virObjectLock(vm); + if (!ev) + goto exit; + vm = ev->vm; + + if (!ev->vm) { + VIR_WARN("Unable to locate VM, dropping RESUME event"); + goto exit; + } + if (virDomainObjGetState(vm, NULL) == VIR_DOMAIN_PAUSED) { qemuDomainObjPrivatePtr priv = vm->privateData; if (priv->gotShutdown) { VIR_DEBUG("Ignoring RESUME event after SHUTDOWN"); - goto unlock; + goto exit; } VIR_DEBUG("Transitioned guest %s out of paused into resumed state", @@ -818,25 +847,32 @@ qemuProcessHandleResume(qemuMonitorPtr mon ATTRIBUTE_UNUSED, } } - unlock: - virObjectUnlock(vm); +exit: qemuDomainEventQueue(driver, event); virObjectUnref(cfg); - return 0; + return; } -static int -qemuProcessHandleRTCChange(qemuMonitorPtr mon ATTRIBUTE_UNUSED, - virDomainObjPtr vm, - long long offset, - void *opaque) +static void +qemuProcessEventHandleRTCChange(qemuEventPtr ev, + void *opaque) {
virQEMUDriverPtr driver = opaque; virObjectEventPtr event = NULL; + virDomainObjPtr vm; virQEMUDriverConfigPtr cfg = virQEMUDriverGetConfig(driver); + long long offset; - virObjectLock(vm); + if (!ev) { + virObjectUnref(cfg); + return; + } + vm = ev->vm; + + if (!ev->vm) { + VIR_WARN("Unable to locate VM, dropping RTC event"); + virObjectUnref(cfg); + return; + } + offset = ev->evData.ev_rtc.offset; if (vm->def->clock.offset == VIR_DOMAIN_CLOCK_OFFSET_VARIABLE) { /* when a basedate is manually given on the qemu commandline * rather than simply "-rtc base=utc", the offset sent by qemu @@ -862,26 +898,33 @@ qemuProcessHandleRTCChange(qemuMonitorPtr mon ATTRIBUTE_UNUSED, event = virDomainEventRTCChangeNewFromObj(vm, offset); - virObjectUnlock(vm); - - qemuDomainEventQueue(driver, event); + if (event) + qemuDomainEventQueue(driver, event); virObjectUnref(cfg); - return 0; + return; } - -static int -qemuProcessHandleWatchdog(qemuMonitorPtr mon ATTRIBUTE_UNUSED, - virDomainObjPtr vm, - int action, - void *opaque) +static void +qemuProcessEventHandleWatchdog1(qemuEventPtr ev, + void *opaque) { virQEMUDriverPtr driver = opaque; virObjectEventPtr watchdogEvent = NULL; virObjectEventPtr lifecycleEvent = NULL; virQEMUDriverConfigPtr cfg = virQEMUDriverGetConfig(driver); + virDomainObjPtr vm; + int action; - virObjectLock(vm); + if (!ev) { + virObjectUnref(cfg); + return; + } + vm = ev->vm; + + if (!ev->vm) { + VIR_WARN("Unable to locate VM, dropping Watchdog event"); + virObjectUnref(cfg); + return; + } + action = ev->evData.ev_watchdog.action; watchdogEvent = virDomainEventWatchdogNewFromObj(vm, action); if (action == VIR_DOMAIN_EVENT_WATCHDOG_PAUSE && @@ -923,22 +966,16 @@ qemuProcessHandleWatchdog(qemuMonitorPtr mon ATTRIBUTE_UNUSED, } } - if (vm) - virObjectUnlock(vm); qemuDomainEventQueue(driver, watchdogEvent); qemuDomainEventQueue(driver, lifecycleEvent); virObjectUnref(cfg); - return 0; + return; } - -static int -qemuProcessHandleIOError(qemuMonitorPtr mon ATTRIBUTE_UNUSED, - virDomainObjPtr vm, - const char *diskAlias, - int action, - const char *reason, +static void +qemuProcessEventHandleIOError(qemuEventPtr ev, void *opaque) { virQEMUDriverPtr driver = opaque; @@ -949,8 +986,24 @@ qemuProcessHandleIOError(qemuMonitorPtr mon ATTRIBUTE_UNUSED, const char *devAlias; virDomainDiskDefPtr disk; virQEMUDriverConfigPtr cfg = virQEMUDriverGetConfig(driver); + virDomainObjPtr vm; + const char *diskAlias; + int action; + const char *reason; + + if (!ev) { + virObjectUnref(cfg); + return; + } + vm = ev->vm; + + if (!ev->vm) { + VIR_WARN("Unable to locate VM, dropping IO Error event"); + goto exit; + } + + diskAlias = ev->evData.ev_IOErr.device; + action = ev->evData.ev_IOErr.action; + reason = ev->evData.ev_IOErr.reason; - virObjectLock(vm); disk = qemuProcessFindDomainDiskByAlias(vm, diskAlias); if (disk) { @@ -975,7 +1028,7 @@ qemuProcessHandleIOError(qemuMonitorPtr mon ATTRIBUTE_UNUSED, virDomainObjSetState(vm, VIR_DOMAIN_PAUSED, VIR_DOMAIN_PAUSED_IOERROR); lifecycleEvent = virDomainEventLifecycleNewFromObj(vm, VIR_DOMAIN_EVENT_SUSPENDED, VIR_DOMAIN_EVENT_SUSPENDED_IOERROR); VIR_FREE(priv->lockState); if (virDomainLockProcessPause(driver->lockManager, vm, &priv->lockState) < 0) @@ -985,32 +1038,45 @@ qemuProcessHandleIOError(qemuMonitorPtr mon ATTRIBUTE_UNUSED, if (virDomainSaveStatus(driver->xmlopt, cfg->stateDir, vm, driver->caps) < 0) VIR_WARN("Unable to save status on vm %s after IO error", vm->def->name); } - virObjectUnlock(vm); qemuDomainEventQueue(driver, ioErrorEvent); qemuDomainEventQueue(driver, ioErrorEvent2); qemuDomainEventQueue(driver, lifecycleEvent) 
+ +exit: virObjectUnref(cfg); - return 0; + VIR_FREE(ev->evData.ev_IOErr.device); + VIR_FREE(ev->evData.ev_IOErr.reason); + return; } -static int -qemuProcessHandleBlockJob(qemuMonitorPtr mon ATTRIBUTE_UNUSED, - virDomainObjPtr vm, - const char *diskAlias, - int type, - int status, - void *opaque) +static void +qemuProcessEventHandleBlockJob(qemuEventPtr ev, + void *opaque) { virQEMUDriverPtr driver = opaque; struct qemuProcessEvent *processEvent = NULL; virDomainDiskDefPtr disk; qemuDomainDiskPrivatePtr diskPriv; char *data = NULL; + virDomainObjPtr vm; + const char *diskAlias; + int type, status; - virObjectLock(vm); + if (!ev) + return; + vm = ev->vm; - VIR_DEBUG("Block job for device %s (domain: %p,%s) type %d status %d", + if (!ev->vm) { + VIR_WARN("Unable to locate VM, dropping Block Job event"); + goto cleanup; + } + + diskAlias = ev->evData.ev_blockJob.device; + type = ev->evData.ev_blockJob.type; + status = ev->evData.ev_blockJob.status; + + VIR_INFO("Block job for device %s (domain: %p,%s) type %d status %d", diskAlias, vm, vm->def->name, type, status); if (!(disk = qemuProcessFindDomainDiskByAlias(vm, diskAlias))) @@ -1043,8 +1109,7 @@ qemuProcessHandleBlockJob(qemuMonitorPtr mon ATTRIBUTE_UNUSED, } cleanup: - virObjectUnlock(vm); - return 0; + return; error: if (processEvent) VIR_FREE(processEvent->data); @@ -1052,21 +1117,9 @@ qemuProcessHandleBlockJob(qemuMonitorPtr mon ATTRIBUTE_UNUSED, goto cleanup; } - -static int -qemuProcessHandleGraphics(qemuMonitorPtr mon ATTRIBUTE_UNUSED, - virDomainObjPtr vm, - int phase, - int localFamily, - const char *localNode, - const char *localService, - int remoteFamily, - const char *remoteNode, - const char *remoteService, - const char *authScheme, - const char *x509dname, - const char *saslUsername, - void *opaque) +static void +qemuProcessEventHandleGraphics(qemuEventPtr ev, + void *opaque) { virQEMUDriverPtr driver = opaque; virObjectEventPtr event; @@ -1074,49 +1127,65 @@ qemuProcessHandleGraphics(qemuMonitorPtr mon ATTRIBUTE_UNUSED, virDomainEventGraphicsAddressPtr remoteAddr = NULL; virDomainEventGraphicsSubjectPtr subject = NULL; size_t i; + virDomainObjPtr vm; + struct qemuEventGraphicsData *data; + + if (!ev) + return; + vm = ev->vm; + + if (!ev->vm) { + VIR_WARN("Unable to locate VM, dropping Graphics event"); + goto exit; + } + data = &(ev->evData.ev_graphics); if (VIR_ALLOC(localAddr) < 0) goto error; - localAddr->family = localFamily; - if (VIR_STRDUP(localAddr->service, localService) < 0 || - VIR_STRDUP(localAddr->node, localNode) < 0) + + localAddr->family = data->localFamilyID; + if (VIR_STRDUP(localAddr->service, data->localService) < 0 || + VIR_STRDUP(localAddr->node, data->localNode) < 0) goto error; if (VIR_ALLOC(remoteAddr) < 0) goto error; - remoteAddr->family = remoteFamily; - if (VIR_STRDUP(remoteAddr->service, remoteService) < 0 || - VIR_STRDUP(remoteAddr->node, remoteNode) < 0) + remoteAddr->family = data->remoteFamilyID; + if (VIR_STRDUP(remoteAddr->service, data->remoteService) < 0 || + VIR_STRDUP(remoteAddr->node, data->remoteNode) < 0) goto error; if (VIR_ALLOC(subject) < 0) goto error; - if (x509dname) { + if (data->x509dname) { if (VIR_REALLOC_N(subject->identities, subject->nidentity+1) < 0) goto error; subject->nidentity++; if (VIR_STRDUP(subject->identities[subject->nidentity-1].type, "x509dname") < 0 || - VIR_STRDUP(subject->identities[subject->nidentity-1].name, x509dname) < 0) + VIR_STRDUP(subject->identities[subject->nidentity-1].name, + data->x509dname) < 0) goto error; } - if (saslUsername) { + if 
(data->saslUsername) { if (VIR_REALLOC_N(subject->identities, subject->nidentity+1) < 0) goto error; subject->nidentity++; if (VIR_STRDUP(subject->identities[subject->nidentity-1].type, "saslUsername") < 0 || - VIR_STRDUP(subject->identities[subject->nidentity-1].name, saslUsername) < 0) + VIR_STRDUP(subject->identities[subject->nidentity-1].name, + data->saslUsername) < 0) goto error; } - virObjectLock(vm); - event = virDomainEventGraphicsNewFromObj(vm, phase, localAddr, remoteAddr, authScheme, subject); - virObjectUnlock(vm); + event = virDomainEventGraphicsNewFromObj(vm, data->phase, + localAddr, remoteAddr, + data->authScheme, + subject); qemuDomainEventQueue(driver, event); - return 0; + goto exit; - error: +error: if (localAddr) { VIR_FREE(localAddr->service); VIR_FREE(localAddr->node); @@ -1136,22 +1205,41 @@ qemuProcessHandleGraphics(qemuMonitorPtr mon ATTRIBUTE_UNUSED, VIR_FREE(subject); } - return -1; +exit: + VIR_FREE(ev->evData.ev_graphics.localNode); + VIR_FREE(ev->evData.ev_graphics.localService); + VIR_FREE(ev->evData.ev_graphics.remoteNode); + VIR_FREE(ev->evData.ev_graphics.remoteService); + VIR_FREE(ev->evData.ev_graphics.x509dname); + VIR_FREE(ev->evData.ev_graphics.saslUsername); + VIR_FREE(ev->evData.ev_graphics.authScheme); + return; } -static int -qemuProcessHandleTrayChange(qemuMonitorPtr mon ATTRIBUTE_UNUSED, - virDomainObjPtr vm, - const char *devAlias, - int reason, - void *opaque) +static void +qemuProcessEventHandleTrayChange(qemuEventPtr ev, + void *opaque) { virQEMUDriverPtr driver = opaque; virObjectEventPtr event = NULL; virDomainDiskDefPtr disk; virQEMUDriverConfigPtr cfg = virQEMUDriverGetConfig(driver); + virDomainObjPtr vm; + const char *devAlias; + int reason; + + if (!ev) { + virObjectUnref(cfg); + return; + } + vm = ev->vm; + + if (!ev->vm) { + VIR_WARN("Unable to locate VM, dropping tray change event"); + virObjectUnref(cfg); + goto exit; + } + + devAlias = ev->evData.ev_tray.devAlias; + reason = ev->evData.ev_tray.reason; - virObjectLock(vm); disk = qemuProcessFindDomainDiskByAlias(vm, devAlias); if (disk) { @@ -1168,27 +1256,36 @@ qemuProcessHandleTrayChange(qemuMonitorPtr mon ATTRIBUTE_UNUSED, VIR_WARN("Unable to save status on vm %s after tray moved event", vm->def->name); } - - virDomainObjBroadcast(vm); + /* XXX why was the broadcast needed here? */ + /* virDomainObjBroadcast(vm); */ } - virObjectUnlock(vm); qemuDomainEventQueue(driver, event); virObjectUnref(cfg); - return 0; +exit: + VIR_FREE(ev->evData.ev_tray.devAlias); + return; } -static int -qemuProcessHandlePMWakeup(qemuMonitorPtr mon ATTRIBUTE_UNUSED, - virDomainObjPtr vm, - void *opaque) +static void +qemuProcessEventHandlePMWakeup(qemuEventPtr ev, + void *opaque) { virQEMUDriverPtr driver = opaque; virObjectEventPtr event = NULL; virObjectEventPtr lifecycleEvent = NULL; virQEMUDriverConfigPtr cfg = virQEMUDriverGetConfig(driver); + virDomainObjPtr vm; + + if (!ev) { + virObjectUnref(cfg); + return; + } + vm = ev->vm; + + if (!ev->vm) { + VIR_WARN("Unable to locate VM, dropping PM Wakeup event"); + virObjectUnref(cfg); + return; + } - virObjectLock(vm); event = virDomainEventPMWakeupNewFromObj(vm); /* Don't set domain status back to running if it wasn't paused * from guest side, maybe it was paused by virDomainPMSuspendForDuration */ @@ -1210,24 +1307,32 @@ qemuProcessHandlePMWakeup(qemuMonitorPtr mon ATTRIBUTE_UNUSED, } } - virObjectUnlock(vm); qemuDomainEventQueue(driver, event); qemuDomainEventQueue(driver, lifecycleEvent); virObjectUnref(cfg); - return 0; + return; } -static int -qemuProcessHandlePMSuspend(qemuMonitorPtr mon ATTRIBUTE_UNUSED, - virDomainObjPtr vm, - void *opaque) +static void +qemuProcessEventHandlePMSuspend(qemuEventPtr ev, + void *opaque) { virQEMUDriverPtr driver = opaque; virObjectEventPtr event = NULL; virObjectEventPtr lifecycleEvent = NULL; virQEMUDriverConfigPtr cfg = virQEMUDriverGetConfig(driver); + virDomainObjPtr vm; + + if (!ev) { + virObjectUnref(cfg); + return; + } + vm = ev->vm; + + if (!ev->vm) { + VIR_WARN("Unable to locate VM, dropping PM Suspend event"); + virObjectUnref(cfg); + return; + } - virObjectLock(vm); event = virDomainEventPMSuspendNewFromObj(vm); if (virDomainObjGetState(vm, NULL) == VIR_DOMAIN_RUNNING) { @@ -1251,25 +1356,33 @@ qemuProcessHandlePMSuspend(qemuMonitorPtr mon ATTRIBUTE_UNUSED, qemuAgentNotifyEvent(priv->agent, QEMU_AGENT_EVENT_SUSPEND); } - virObjectUnlock(vm); - qemuDomainEventQueue(driver, event); qemuDomainEventQueue(driver, lifecycleEvent); virObjectUnref(cfg); - return 0; + return; } -static int -qemuProcessHandleBalloonChange(qemuMonitorPtr mon ATTRIBUTE_UNUSED, - virDomainObjPtr vm, - unsigned long long actual, - void *opaque) +static void +qemuProcessEventHandleBalloonChange(qemuEventPtr ev, + void *opaque) { virQEMUDriverPtr driver = opaque; virObjectEventPtr event = NULL; virQEMUDriverConfigPtr cfg = virQEMUDriverGetConfig(driver); + virDomainObjPtr vm; + unsigned long long actual; - virObjectLock(vm); + if (!ev) { + virObjectUnref(cfg); + return; + } + vm = ev->vm; + + if (!ev->vm) { + VIR_WARN("Unable to locate VM, dropping Balloon event"); + virObjectUnref(cfg); + return; + } + + actual = ev->evData.ev_balloon.actual; event = virDomainEventBalloonChangeNewFromObj(vm, actual); VIR_DEBUG("Updating balloon from %lld to %lld kb", @@ -1279,24 +1392,30 @@ qemuProcessHandleBalloonChange(qemuMonitorPtr mon ATTRIBUTE_UNUSED, if (virDomainSaveStatus(driver->xmlopt, cfg->stateDir, vm, driver->caps) < 0) VIR_WARN("unable to save domain status with balloon change"); - virObjectUnlock(vm); - qemuDomainEventQueue(driver, event); virObjectUnref(cfg); - return 0; + return; } -static int -qemuProcessHandlePMSuspendDisk(qemuMonitorPtr mon ATTRIBUTE_UNUSED, - virDomainObjPtr vm, - void *opaque) +static void +qemuProcessEventHandlePMSuspendDisk(qemuEventPtr ev, + void *opaque) { virQEMUDriverPtr driver = opaque; virObjectEventPtr event = NULL; virObjectEventPtr lifecycleEvent = NULL; virQEMUDriverConfigPtr cfg = virQEMUDriverGetConfig(driver); + virDomainObjPtr vm; - virObjectLock(vm); + if (!ev) { + virObjectUnref(cfg); + return; + } + vm = ev->vm; + + if (!ev->vm) { + VIR_WARN("Unable to locate VM, dropping PM Suspend disk event"); + virObjectUnref(cfg); + return; + } event = virDomainEventPMSuspendDiskNewFromObj(vm); if (virDomainObjGetState(vm, NULL) == VIR_DOMAIN_RUNNING) { @@ -1320,28 +1439,35 @@ qemuProcessHandlePMSuspendDisk(qemuMonitorPtr mon ATTRIBUTE_UNUSED, qemuAgentNotifyEvent(priv->agent, QEMU_AGENT_EVENT_SUSPEND); } - virObjectUnlock(vm); - qemuDomainEventQueue(driver, event); qemuDomainEventQueue(driver, lifecycleEvent); virObjectUnref(cfg); - - return 0; + return; } -static int -qemuProcessHandleGuestPanic(qemuMonitorPtr mon ATTRIBUTE_UNUSED, - virDomainObjPtr vm, - qemuMonitorEventPanicInfoPtr info, - void *opaque) +static void +qemuProcessEventHandleGuestPanic(qemuEventPtr ev, + void *opaque) { virQEMUDriverPtr driver = opaque; struct qemuProcessEvent *processEvent; + virDomainObjPtr vm; + qemuMonitorEventPanicInfoPtr info; - virObjectLock(vm); + if (!ev) + return; + vm = ev->vm; + + if (!ev->vm) { + VIR_WARN("Unable to locate VM, dropping Guest Panic event"); + goto exit; + } + + info = ev->evData.ev_panic.info; if (VIR_ALLOC(processEvent) < 0) - goto cleanup; + goto exit; processEvent->eventType = QEMU_PROCESS_EVENT_GUESTPANIC; processEvent->action = vm->def->onCrash; @@ -1357,26 +1483,31 @@ qemuProcessHandleGuestPanic(qemuMonitorPtr mon ATTRIBUTE_UNUSED, VIR_FREE(processEvent); } - cleanup: - if (vm) - virObjectUnlock(vm); - - return 0; +exit: + return; } -int -qemuProcessHandleDeviceDeleted(qemuMonitorPtr mon ATTRIBUTE_UNUSED, - virDomainObjPtr vm, - const char *devAlias, - void *opaque) +static void +qemuProcessEventHandleDeviceDeleted(qemuEventPtr ev, + void *opaque) { virQEMUDriverPtr driver = opaque; struct qemuProcessEvent *processEvent = NULL; char *data; + virDomainObjPtr vm; + const char *devAlias; - virObjectLock(vm); + if (!ev) + return; + vm = ev->vm; + + if (!ev->vm) { + VIR_WARN("Unable to locate VM, dropping Device deleted event"); + goto cleanup; + } + devAlias = ev->evData.ev_deviceDel.device; VIR_DEBUG("Device %s removed from domain %p %s", devAlias, vm, vm->def->name); @@ -1400,8 +1531,8 @@ qemuProcessHandleDeviceDeleted(qemuMonitorPtr mon ATTRIBUTE_UNUSED, } cleanup: - virObjectUnlock(vm); - return 0; + VIR_FREE(ev->evData.ev_deviceDel.device); + return; error: if (processEvent) VIR_FREE(processEvent->data); @@ -1444,20 +1575,34 @@ qemuProcessHandleDeviceDeleted(qemuMonitorPtr mon ATTRIBUTE_UNUSED, * Note that qemu does not emit the event for all the documented sources or * devices. */ - +static void +qemuProcessEventHandleAcpiOstInfo(qemuEventPtr ev, + void *opaque) { virQEMUDriverPtr driver = opaque; virObjectEventPtr event = NULL; + virDomainObjPtr vm; + const char *alias; + const char *slotType; + const char *slot; + unsigned int source; + unsigned int status; - virObjectLock(vm); + if (!ev) + return; + vm = ev->vm; + + if (!ev->vm) { + VIR_WARN("Unable to locate VM, dropping ACPI event"); + goto exit; + } + + alias = ev->evData.ev_acpi.alias; + slotType = ev->evData.ev_acpi.slotType; + slot = ev->evData.ev_acpi.slot; + source = ev->evData.ev_acpi.source; + status = ev->evData.ev_acpi.status; VIR_DEBUG("ACPI OST info for device %s domain %p %s. 
" "slotType='%s' slot='%s' source=%u status=%u", @@ -1471,20 +1616,19 @@ qemuProcessHandleAcpiOstInfo(qemuMonitorPtr mon ATTRIBUTE_UNUSED, event = virDomainEventDeviceRemovalFailedNewFromObj(vm, alias); } - virObjectUnlock(vm); qemuDomainEventQueue(driver, event); - return 0; +exit: + VIR_FREE(ev->evData.ev_acpi.alias); + VIR_FREE(ev->evData.ev_acpi.slotType); + VIR_FREE(ev->evData.ev_acpi.slot); + return; } -static int -qemuProcessHandleBlockThreshold(qemuMonitorPtr mon ATTRIBUTE_UNUSED, - virDomainObjPtr vm, - const char *nodename, - unsigned long long threshold, - unsigned long long excess, - void *opaque) +static void +qemuProcessEventHandleBlockThreshold(qemuEventPtr ev, + void *opaque) { virQEMUDriverPtr driver = opaque; virObjectEventPtr event = NULL; @@ -1493,8 +1637,23 @@ qemuProcessHandleBlockThreshold(qemuMonitorPtr mon ATTRIBUTE_UNUSED, unsigned int idx; char *dev = NULL; const char *path = NULL; + virDomainObjPtr vm; + const char *nodename; + unsigned long long threshold; + unsigned long long excess; - virObjectLock(vm); + if (!ev) + return; + vm = ev->vm; + + if (!ev->vm) { + VIR_WARN("Unable to locate VM, dropping Block Threshold event"); + goto exit; + } + + nodename = ev->evData.ev_threshold.nodename; + threshold = ev->evData.ev_threshold.threshold; + excess = ev->evData.ev_threshold.excess; VIR_DEBUG("BLOCK_WRITE_THRESHOLD event for block node '%s' in domain %p %s:" "threshold '%llu' exceeded by '%llu'", @@ -1511,25 +1670,33 @@ qemuProcessHandleBlockThreshold(qemuMonitorPtr mon ATTRIBUTE_UNUSED, } } - virObjectUnlock(vm); qemuDomainEventQueue(driver, event); - - return 0; +exit: + VIR_FREE(ev->evData.ev_threshold.nodename); + return; } -static int -qemuProcessHandleNicRxFilterChanged(qemuMonitorPtr mon ATTRIBUTE_UNUSED, - virDomainObjPtr vm, - const char *devAlias, - void *opaque) +static void +qemuProcessEventHandleNicRxFilterChanged(qemuEventPtr ev, + void *opaque) { virQEMUDriverPtr driver = opaque; struct qemuProcessEvent *processEvent = NULL; char *data; + virDomainObjPtr vm; + char *devAlias; - virObjectLock(vm); + if (!ev) + return; + vm = ev->vm; + + if (!ev->vm) { + VIR_WARN("Unable to locate VM, dropping Nic Filter Change event"); + goto exit; + } + devAlias = ev->evData.ev_nic.devAlias; VIR_DEBUG("Device %s RX Filter changed in domain %p %s", devAlias, vm, vm->def->name); @@ -1548,30 +1715,39 @@ qemuProcessHandleNicRxFilterChanged(qemuMonitorPtr mon ATTRIBUTE_UNUSED, goto error; } - cleanup: - virObjectUnlock(vm); - return 0; +exit: + VIR_FREE(ev->evData.ev_nic.devAlias); + return; error: if (processEvent) VIR_FREE(processEvent->data); VIR_FREE(processEvent); - goto cleanup; + goto exit; } -static int -qemuProcessHandleSerialChanged(qemuMonitorPtr mon ATTRIBUTE_UNUSED, - virDomainObjPtr vm, - const char *devAlias, - bool connected, - void *opaque) +static void +qemuProcessEventHandleSerialChanged(qemuEventPtr ev, + void *opaque) { virQEMUDriverPtr driver = opaque; struct qemuProcessEvent *processEvent = NULL; char *data; + virDomainObjPtr vm; + char *devAlias; + bool connected; - virObjectLock(vm); + if (!ev) + return; + vm = ev->vm; + if (!ev->vm) { + VIR_WARN("Unable to locate VM, dropping Serial Change event"); + goto cleanup; + } + + devAlias = ev->evData.ev_serial.devAlias; + connected = ev->evData.ev_serial.connected; VIR_DEBUG("Serial port %s state changed to '%d' in domain %p %s", devAlias, connected, vm, vm->def->name); @@ -1592,8 +1768,8 @@ qemuProcessHandleSerialChanged(qemuMonitorPtr mon ATTRIBUTE_UNUSED, } cleanup: - virObjectUnlock(vm); - return 
0; + VIR_FREE(ev->evData.ev_serial.devAlias); + return; error: if (processEvent) VIR_FREE(processEvent->data); @@ -1602,14 +1778,21 @@ qemuProcessHandleSerialChanged(qemuMonitorPtr mon ATTRIBUTE_UNUSED, } -static int -qemuProcessHandleSpiceMigrated(qemuMonitorPtr mon ATTRIBUTE_UNUSED, - virDomainObjPtr vm, - void *opaque ATTRIBUTE_UNUSED) +static void +qemuProcessEventHandleSpiceMigrated(qemuEventPtr ev, + void *opaque ATTRIBUTE_UNUSED) { qemuDomainObjPrivatePtr priv; + virDomainObjPtr vm; - virObjectLock(vm); + if (!ev) + return; + vm = ev->vm; + + if (!ev->vm) { + VIR_WARN("Unable to locate VM, dropping Spice Migrated event"); + goto cleanup; + } VIR_DEBUG("Spice migration completed for domain %p %s", vm, vm->def->name); @@ -1621,28 +1804,37 @@ qemuProcessHandleSpiceMigrated(qemuMonitorPtr mon ATTRIBUTE_UNUSED, } priv->job.spiceMigrated = true; - virDomainObjBroadcast(vm); + /* virDomainObjBroadcast(vm); */ cleanup: - virObjectUnlock(vm); - return 0; + return; } -static int -qemuProcessHandleMigrationStatus(qemuMonitorPtr mon ATTRIBUTE_UNUSED, - virDomainObjPtr vm, - int status, - void *opaque ATTRIBUTE_UNUSED) +static void +qemuProcessEventHandleMigrationStatus(qemuEventPtr ev, + void *opaque ATTRIBUTE_UNUSED) { qemuDomainObjPrivatePtr priv; + virDomainObjPtr vm; + int status; - virObjectLock(vm); + if (!ev) + return; + vm = ev->vm; + + if (!ev->vm) { + VIR_WARN("Unable to locate VM, dropping Migration Status event"); + goto cleanup; + } + + status = ev->evData.ev_migStatus.status; VIR_DEBUG("Migration of domain %p %s changed state to %s", vm, vm->def->name, qemuMonitorMigrationStatusTypeToString(status)); + priv = vm->privateData; if (priv->job.asyncJob == QEMU_ASYNC_JOB_NONE) { VIR_DEBUG("got MIGRATION event without a migration job"); @@ -1650,25 +1842,32 @@ qemuProcessHandleMigrationStatus(qemuMonitorPtr mon ATTRIBUTE_UNUSED, } priv->job.current->stats.status = status; - virDomainObjBroadcast(vm); + /* virDomainObjBroadcast(vm); */ cleanup: - virObjectUnlock(vm); - return 0; + return; } -static int -qemuProcessHandleMigrationPass(qemuMonitorPtr mon ATTRIBUTE_UNUSED, - virDomainObjPtr vm, - int pass, - void *opaque) +static void +qemuProcessEventHandleMigrationPass(qemuEventPtr ev, + void *opaque) { virQEMUDriverPtr driver = opaque; qemuDomainObjPrivatePtr priv; + virDomainObjPtr vm; + int pass; - virObjectLock(vm); + if (!ev) + return; + vm = ev->vm; + if (!ev->vm) { + VIR_WARN("Unable to locate VM, dropping Migration Pass event"); + goto cleanup; + } + + pass = ev->evData.ev_migPass.pass; VIR_DEBUG("Migrating domain %p %s, iteration %d", vm, vm->def->name, pass); @@ -1681,42 +1880,58 @@ qemuProcessHandleMigrationPass(qemuMonitorPtr mon ATTRIBUTE_UNUSED, qemuDomainEventQueue(driver, virDomainEventMigrationIterationNewFromObj(vm, pass)); - cleanup: - virObjectUnlock(vm); - return 0; +cleanup: + return; } +static qemuEventFuncTable qemuEventFunctions[] = { + { QEMU_EVENT_ACPI_OST, qemuProcessEventHandleAcpiOstInfo, }, + { QEMU_EVENT_BALLOON_CHANGE, qemuProcessEventHandleBalloonChange, }, + { QEMU_EVENT_BLOCK_IO_ERROR, qemuProcessEventHandleIOError, }, + { QEMU_EVENT_BLOCK_JOB, qemuProcessEventHandleBlockJob, }, + { QEMU_EVENT_BLOCK_WRITE_THRESHOLD, qemuProcessEventHandleBlockThreshold, }, + { QEMU_EVENT_DEVICE_DELETED, qemuProcessEventHandleDeviceDeleted, }, + { QEMU_EVENT_DEVICE_TRAY_MOVED, qemuProcessEventHandleTrayChange, }, + { QEMU_EVENT_GRAPHICS, qemuProcessEventHandleGraphics,}, + { QEMU_EVENT_GUEST_PANICKED, qemuProcessEventHandleGuestPanic,}, + { QEMU_EVENT_MIGRATION, 
qemuProcessEventHandleMigrationStatus,}, + { QEMU_EVENT_MIGRATION_PASS, qemuProcessEventHandleMigrationPass,}, + { QEMU_EVENT_NIC_RX_FILTER_CHANGED, qemuProcessEventHandleNicRxFilterChanged, }, + { QEMU_EVENT_POWERDOWN, NULL, }, + { QEMU_EVENT_RESET, qemuProcessEventHandleReset, }, + { QEMU_EVENT_RESUME, qemuProcessEventHandleResume, }, + { QEMU_EVENT_RTC_CHANGE, qemuProcessEventHandleRTCChange, }, + { QEMU_EVENT_SHUTDOWN, qemuProcessEventHandleShutdown,}, + { QEMU_EVENT_SPICE_MIGRATED, qemuProcessEventHandleSpiceMigrated, }, + { QEMU_EVENT_STOP, qemuProcessEventHandleStop, }, + { QEMU_EVENT_SUSPEND, qemuProcessEventHandlePMSuspend,}, + { QEMU_EVENT_SUSPEND_DISK, qemuProcessEventHandlePMSuspendDisk,}, + { QEMU_EVENT_SERIAL_CHANGE, qemuProcessEventHandleSerialChanged,}, + { QEMU_EVENT_WAKEUP, qemuProcessEventHandlePMWakeup,}, + { QEMU_EVENT_WATCHDOG, qemuProcessEventHandleWatchdog1,}, +}; + +static int +qemuProcessEnqueueEvent(qemuMonitorPtr mon ATTRIBUTE_UNUSED, + virDomainObjPtr vm ATTRIBUTE_UNUSED, + qemuEventPtr ev, + void *opaque) +{ + virQEMUDriverPtr driver = opaque; + /* Bad code alert: Fix this lookup to scan table for correct index. + * Works for now since event table is sorted */ + ev->handler = qemuEventFunctions[ev->ev_type].handler_func; + return virEnqueueVMEvent(driver->ev_list, ev); +} static qemuMonitorCallbacks monitorCallbacks = { .eofNotify = qemuProcessHandleMonitorEOF, .errorNotify = qemuProcessHandleMonitorError, .diskSecretLookup = qemuProcessFindVolumeQcowPassphrase, - .domainEvent = qemuProcessHandleEvent, - .domainShutdown = qemuProcessHandleShutdown, - .domainStop = qemuProcessHandleStop, - .domainResume = qemuProcessHandleResume, - .domainReset = qemuProcessHandleReset, - .domainRTCChange = qemuProcessHandleRTCChange, - .domainWatchdog = qemuProcessHandleWatchdog, - .domainIOError = qemuProcessHandleIOError, - .domainGraphics = qemuProcessHandleGraphics, - .domainBlockJob = qemuProcessHandleBlockJob, - .domainTrayChange = qemuProcessHandleTrayChange, - .domainPMWakeup = qemuProcessHandlePMWakeup, - .domainPMSuspend = qemuProcessHandlePMSuspend, - .domainBalloonChange = qemuProcessHandleBalloonChange, - .domainPMSuspendDisk = qemuProcessHandlePMSuspendDisk, - .domainGuestPanic = qemuProcessHandleGuestPanic, - .domainDeviceDeleted = qemuProcessHandleDeviceDeleted, - .domainNicRxFilterChanged = qemuProcessHandleNicRxFilterChanged, - .domainSerialChange = qemuProcessHandleSerialChanged, - .domainSpiceMigrated = qemuProcessHandleSpiceMigrated, - .domainMigrationStatus = qemuProcessHandleMigrationStatus, - .domainMigrationPass = qemuProcessHandleMigrationPass, - .domainAcpiOstInfo = qemuProcessHandleAcpiOstInfo, - .domainBlockThreshold = qemuProcessHandleBlockThreshold, + .domainEnqueueEvent = qemuProcessEnqueueEvent, }; + static void qemuProcessMonitorReportLogError(qemuMonitorPtr mon, const char *msg, diff --git a/src/qemu/qemu_process.h b/src/qemu/qemu_process.h index 814b86d..a2bbc4f 100644 --- a/src/qemu/qemu_process.h +++ b/src/qemu/qemu_process.h @@ -129,6 +129,8 @@ int qemuProcessFinishStartup(virConnectPtr conn, bool startCPUs, virDomainPausedReason pausedReason); +void qemuProcessEmitMonitorEvent(qemuEventPtr ev, + void *opaque); typedef enum { VIR_QEMU_PROCESS_STOP_MIGRATED = 1 << 0, VIR_QEMU_PROCESS_STOP_NO_RELABEL = 1 << 1, diff --git a/tests/qemumonitortestutils.c b/tests/qemumonitortestutils.c index 5e30fb0..94375a3 100644 --- a/tests/qemumonitortestutils.c +++ b/tests/qemumonitortestutils.c @@ -1013,7 +1013,7 @@ 
qemuMonitorTestErrorNotify(qemuMonitorPtr mon ATTRIBUTE_UNUSED, static qemuMonitorCallbacks qemuMonitorTestCallbacks = { .eofNotify = qemuMonitorTestEOFNotify, .errorNotify = qemuMonitorTestErrorNotify, - .domainDeviceDeleted = qemuProcessHandleDeviceDeleted, + .domainDeviceDeleted = NULL, /* was qemuProcessHandleDeviceDeleted */ }; -- 2.9.5
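A note on the handler lookup in qemuProcessEnqueueEvent() above: the patch indexes qemuEventFunctions[] directly with ev->ev_type and flags that itself as "Bad code alert", since it only stays correct while the table and the qemuMonitorEventType enum remain in lockstep. Because the table is already kept sorted by event type, the lookup could reuse the same bsearch() pattern that qemu_monitor_json.c uses for eventHandlers[]. The following is only an illustrative sketch; the typedefs are stand-ins for the real definitions in qemu_event.h, which this posting does not quote:

    #include <stdlib.h>

    /* Stand-ins (assumed layout) for the definitions in qemu_event.h. */
    typedef enum { QEMU_EVENT_ACPI_OST, QEMU_EVENT_BALLOON_CHANGE,
                   QEMU_EVENT_WATCHDOG, QEMU_EVENT_LAST } qemuMonitorEventType;
    typedef struct _qemuEvent *qemuEventPtr;
    typedef void (*qemuEventHandlerFunc)(qemuEventPtr ev, void *opaque);

    typedef struct {
        qemuMonitorEventType type;
        qemuEventHandlerFunc handler_func;
    } qemuEventFuncTable;

    static int
    qemuEventFuncCompare(const void *key, const void *elem)
    {
        const qemuMonitorEventType *type = key;
        const qemuEventFuncTable *entry = elem;

        if (*type < entry->type)
            return -1;
        if (*type > entry->type)
            return 1;
        return 0;
    }

    /* Return the handler registered for @type, or NULL when the table
     * has no entry, instead of trusting enum/table index alignment. */
    static qemuEventHandlerFunc
    qemuEventFindHandler(const qemuEventFuncTable *table, size_t ntable,
                         qemuMonitorEventType type)
    {
        const qemuEventFuncTable *found = bsearch(&type, table, ntable,
                                                  sizeof(table[0]),
                                                  qemuEventFuncCompare);
        return found ? found->handler_func : NULL;
    }

With a lookup like this, qemuProcessEnqueueEvent() would only enqueue events for which a non-NULL handler was found, which also covers entries such as QEMU_EVENT_POWERDOWN that currently carry a NULL handler_func.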

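For readers going straight into the qemu_driver.c changes that follow: qemuProcessEnqueueEvent() above fills the global and per-VM queues, and the per-API virDomainConsumeVMEvents() calls below drain the per-VM side, but the worker thread that sits between the two lives in qemu_event.c and is not quoted in this excerpt. Purely as an illustration of the skip-on-contention behaviour, one worker pass might look like the sketch below; the qemuEvent fields and the 'handled' flag are invented stand-ins, and virObjectTrylock() (patch 1) is assumed to return 0 when the lock was acquired:

    #include <stdbool.h>

    typedef struct _qemuEvent qemuEvent;
    typedef qemuEvent *qemuEventPtr;
    struct _qemuEvent {
        void *vm;                                  /* virDomainObjPtr */
        bool handled;
        void (*handler)(qemuEventPtr ev, void *opaque);
        qemuEventPtr next;
    };

    extern int virObjectTrylock(void *obj);        /* patch 1 */
    extern void virObjectUnlock(void *obj);

    /* One pass over the global list: handle whatever we can lock,
     * leave the rest for the RPC thread that owns the VM lock. */
    static void
    qemuEventWorkerScan(qemuEventPtr head, void *driver)
    {
        qemuEventPtr ev;

        for (ev = head; ev; ev = ev->next) {
            if (ev->handled || virObjectTrylock(ev->vm) < 0)
                continue;       /* busy VM: picked up at RPC completion */

            if (ev->handler)
                ev->handler(ev, driver);
            ev->handled = true; /* the real code dequeues from both lists */
            virObjectUnlock(ev->vm);
        }
    }
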
Signed-off-by: Prerna Saxena <saxenap.ltc@gmail.com> --- src/qemu/qemu_driver.c | 131 +++++++++++++++++++++++++++++++++++++++++++++++-- 1 file changed, 128 insertions(+), 3 deletions(-) diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index 8a005d0..b249347 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -1900,6 +1900,7 @@ static int qemuDomainSuspend(virDomainPtr dom) qemuDomainObjEndJob(driver, vm); cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); qemuDomainEventQueue(driver, event); @@ -1967,6 +1968,7 @@ static int qemuDomainResume(virDomainPtr dom) qemuDomainObjEndJob(driver, vm); cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); qemuDomainEventQueue(driver, event); virObjectUnref(cfg); @@ -2057,6 +2059,7 @@ static int qemuDomainShutdownFlags(virDomainPtr dom, unsigned int flags) qemuDomainObjEndJob(driver, vm); cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); return ret; } @@ -2159,6 +2162,7 @@ qemuDomainReboot(virDomainPtr dom, unsigned int flags) qemuDomainObjEndJob(driver, vm); cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); return ret; } @@ -2206,6 +2210,7 @@ qemuDomainReset(virDomainPtr dom, unsigned int flags) qemuDomainObjEndJob(driver, vm); cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); return ret; } @@ -2294,6 +2299,7 @@ qemuDomainDestroyFlags(virDomainPtr dom, qemuDomainObjEndJob(driver, vm); cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); qemuDomainEventQueue(driver, event); return ret; @@ -2307,6 +2313,7 @@ qemuDomainDestroy(virDomainPtr dom) static char *qemuDomainGetOSType(virDomainPtr dom) { virDomainObjPtr vm; + virQEMUDriverPtr driver = dom->conn->privateData; char *type = NULL; if (!(vm = qemuDomObjFromDomain(dom))) @@ -2318,6 +2325,7 @@ static char *qemuDomainGetOSType(virDomainPtr dom) { ignore_value(VIR_STRDUP(type, virDomainOSTypeToString(vm->def->os.type))); cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); return type; } @@ -2328,6 +2336,7 @@ qemuDomainGetMaxMemory(virDomainPtr dom) { virDomainObjPtr vm; unsigned long long ret = 0; + virQEMUDriverPtr driver = dom->conn->privateData; if (!(vm = qemuDomObjFromDomain(dom))) goto cleanup; @@ -2338,6 +2347,7 @@ qemuDomainGetMaxMemory(virDomainPtr dom) ret = virDomainDefGetMemoryTotal(vm->def); cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); return ret; } @@ -2455,6 +2465,7 @@ static int qemuDomainSetMemoryFlags(virDomainPtr dom, unsigned long newmem, qemuDomainObjEndJob(driver, vm); cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); virObjectUnref(cfg); return ret; @@ -2543,6 +2554,7 @@ static int qemuDomainSetMemoryStatsPeriod(virDomainPtr dom, int period, qemuDomainObjEndJob(driver, vm); cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); virObjectUnref(cfg); return ret; @@ -2583,6 +2595,7 @@ static int qemuDomainInjectNMI(virDomainPtr domain, unsigned int flags) qemuDomainObjEndJob(driver, vm); cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); return ret; } @@ -2646,6 +2659,7 @@ static int qemuDomainSendKey(virDomainPtr domain, qemuDomainObjEndJob(driver, vm); cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); return ret; } @@ -2702,6 +2716,7 @@ qemuDomainGetInfo(virDomainPtr dom, ret = 0; cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); return ret; } @@ -2739,6 
+2754,7 @@ qemuDomainGetControlInfo(virDomainPtr dom, virDomainObjPtr vm; qemuDomainObjPrivatePtr priv; int ret = -1; + virQEMUDriverPtr driver = dom->conn->privateData; virCheckFlags(0, -1); @@ -2788,6 +2804,7 @@ qemuDomainGetControlInfo(virDomainPtr dom, ret = 0; cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); return ret; } @@ -3577,6 +3594,7 @@ qemuDomainSaveFlags(virDomainPtr dom, const char *path, const char *dxml, compressedpath, dxml, flags); cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); VIR_FREE(compressedpath); virObjectUnref(cfg); @@ -3653,6 +3671,7 @@ qemuDomainManagedSave(virDomainPtr dom, unsigned int flags) vm->hasManagedSave = true; cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); VIR_FREE(name); VIR_FREE(compressedpath); @@ -3689,7 +3708,7 @@ qemuDomainHasManagedSaveImage(virDomainPtr dom, unsigned int flags) { virDomainObjPtr vm = NULL; int ret = -1; - + virQEMUDriverPtr driver = dom->conn->privateData; virCheckFlags(0, -1); if (!(vm = qemuDomObjFromDomain(dom))) @@ -3701,6 +3720,7 @@ qemuDomainHasManagedSaveImage(virDomainPtr dom, unsigned int flags) ret = vm->hasManagedSave; cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); return ret; } @@ -3736,6 +3756,7 @@ qemuDomainManagedSaveRemove(virDomainPtr dom, unsigned int flags) cleanup: VIR_FREE(name); + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); return ret; } @@ -3976,6 +3997,7 @@ qemuDomainCoreDumpWithFormat(virDomainPtr dom, qemuDomainRemoveInactiveJob(driver, vm); cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); qemuDomainEventQueue(driver, event); return ret; @@ -4078,6 +4100,7 @@ qemuDomainScreenshot(virDomainPtr dom, qemuDomainObjEndJob(driver, vm); cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); virObjectUnref(cfg); return ret; @@ -4832,6 +4855,7 @@ static void qemuProcessEventHandler(void *data, void *opaque) break; } + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); VIR_FREE(processEvent); } @@ -4973,6 +4997,7 @@ qemuDomainSetVcpusFlags(virDomainPtr dom, qemuDomainObjEndJob(driver, vm); cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); return ret; } @@ -5144,6 +5169,7 @@ qemuDomainPinVcpuFlags(virDomainPtr dom, qemuDomainObjEndJob(driver, vm); cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); virBitmapFree(pcpumap); virObjectUnref(cfg); @@ -5172,6 +5198,7 @@ qemuDomainGetVcpuPinInfo(virDomainPtr dom, bool live; int ret = -1; virBitmapPtr autoCpuset = NULL; + virQEMUDriverPtr driver = dom->conn->privateData; virCheckFlags(VIR_DOMAIN_AFFECT_LIVE | VIR_DOMAIN_AFFECT_CONFIG, -1); @@ -5191,6 +5218,7 @@ qemuDomainGetVcpuPinInfo(virDomainPtr dom, ret = virDomainDefGetVcpuPinInfoHelper(def, maplen, ncpumaps, cpumaps, virHostCPUGetCount(), autoCpuset); cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); return ret; } @@ -5302,6 +5330,7 @@ qemuDomainPinEmulator(virDomainPtr dom, qemuDomainEventQueue(driver, event); VIR_FREE(str); virBitmapFree(pcpumap); + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); virObjectUnref(cfg); return ret; @@ -5321,6 +5350,7 @@ qemuDomainGetEmulatorPinInfo(virDomainPtr dom, virBitmapPtr cpumask = NULL; virBitmapPtr bitmap = NULL; virBitmapPtr autoCpuset = NULL; + virQEMUDriverPtr driver = dom->conn->privateData; virCheckFlags(VIR_DOMAIN_AFFECT_LIVE | VIR_DOMAIN_AFFECT_CONFIG, -1); @@ -5359,6 +5389,7 @@ 
qemuDomainGetEmulatorPinInfo(virDomainPtr dom, ret = 1; cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); virBitmapFree(bitmap); return ret; @@ -5373,6 +5404,7 @@ qemuDomainGetVcpus(virDomainPtr dom, { virDomainObjPtr vm; int ret = -1; + virQEMUDriverPtr driver = dom->conn->privateData; if (!(vm = qemuDomObjFromDomain(dom))) goto cleanup; @@ -5390,6 +5422,7 @@ qemuDomainGetVcpus(virDomainPtr dom, NULL); cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); return ret; } @@ -5465,6 +5498,7 @@ qemuDomainGetVcpusFlags(virDomainPtr dom, unsigned int flags) cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); VIR_FREE(cpuinfo); return ret; @@ -5649,6 +5683,7 @@ qemuDomainGetIOThreadInfo(virDomainPtr dom, ret = qemuDomainGetIOThreadsConfig(targetDef, info); cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); return ret; } @@ -5785,6 +5820,7 @@ qemuDomainPinIOThread(virDomainPtr dom, qemuDomainEventQueue(driver, event); VIR_FREE(str); virBitmapFree(pcpumap); + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); virObjectUnref(cfg); return ret; @@ -6098,6 +6134,7 @@ qemuDomainAddIOThread(virDomainPtr dom, ret = qemuDomainChgIOThread(driver, vm, iothread_id, true, flags); cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); return ret; } @@ -6130,6 +6167,7 @@ qemuDomainDelIOThread(virDomainPtr dom, ret = qemuDomainChgIOThread(driver, vm, iothread_id, false, flags); cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); return ret; } @@ -6178,6 +6216,7 @@ static int qemuDomainGetSecurityLabel(virDomainPtr dom, virSecurityLabelPtr secl ret = 0; cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); return ret; } @@ -6240,6 +6279,7 @@ static int qemuDomainGetSecurityLabelList(virDomainPtr dom, } cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); return ret; } @@ -6738,6 +6778,7 @@ qemuDomainRestoreFlags(virConnectPtr conn, virFileWrapperFdFree(wrapperFd); if (vm && ret < 0) qemuDomainRemoveInactiveJob(driver, vm); + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); virNWFilterUnlockFilterUpdates(); return ret; @@ -6893,6 +6934,7 @@ qemuDomainManagedSaveGetXMLDesc(virDomainPtr dom, unsigned int flags) virQEMUSaveDataFree(data); virDomainDefFree(def); VIR_FORCE_CLOSE(fd); + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); VIR_FREE(path); return ret; @@ -7045,6 +7087,8 @@ static char ret = qemuDomainFormatXML(driver, vm, flags); cleanup: + if (vm) + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); return ret; } @@ -7327,6 +7371,7 @@ qemuDomainCreateWithFlags(virDomainPtr dom, unsigned int flags) qemuProcessEndJob(driver, vm); cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); virNWFilterUnlockFilterUpdates(); return ret; @@ -7421,6 +7466,7 @@ qemuDomainDefineXMLFlags(virConnectPtr conn, cleanup: virDomainDefFree(oldDef); virDomainDefFree(def); + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); qemuDomainEventQueue(driver, event); virObjectUnref(caps); @@ -7548,6 +7594,7 @@ qemuDomainUndefineFlags(virDomainPtr dom, cleanup: VIR_FREE(name); + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); qemuDomainEventQueue(driver, event); virObjectUnref(cfg); @@ -8471,6 +8518,7 @@ qemuDomainAttachDeviceFlags(virDomainPtr dom, qemuDomainObjEndJob(driver, vm); cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); 
virNWFilterUnlockFilterUpdates(); return ret; @@ -8586,6 +8634,7 @@ static int qemuDomainUpdateDeviceFlags(virDomainPtr dom, if (dev != dev_copy) virDomainDeviceDefFree(dev_copy); virDomainDeviceDefFree(dev); + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); virObjectUnref(caps); virObjectUnref(cfg); @@ -8710,6 +8759,7 @@ qemuDomainDetachDeviceFlags(virDomainPtr dom, qemuDomainObjEndJob(driver, vm); cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); return ret; } @@ -8725,7 +8775,7 @@ static int qemuDomainGetAutostart(virDomainPtr dom, { virDomainObjPtr vm; int ret = -1; - + virQEMUDriverPtr driver = dom->conn->privateData; if (!(vm = qemuDomObjFromDomain(dom))) goto cleanup; @@ -8736,6 +8786,7 @@ static int qemuDomainGetAutostart(virDomainPtr dom, ret = 0; cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); return ret; } @@ -8811,6 +8862,7 @@ static int qemuDomainSetAutostart(virDomainPtr dom, cleanup: VIR_FREE(configFile); VIR_FREE(autostartLink); + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); virObjectUnref(cfg); return ret; @@ -8863,6 +8915,7 @@ static char *qemuDomainGetSchedulerType(virDomainPtr dom, ignore_value(VIR_STRDUP(ret, "posix")); cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); return ret; } @@ -9258,6 +9311,7 @@ qemuDomainSetBlkioParameters(virDomainPtr dom, qemuDomainObjEndJob(driver, vm); cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); virObjectUnref(cfg); return ret; @@ -9351,6 +9405,7 @@ qemuDomainGetBlkioParameters(virDomainPtr dom, ret = 0; cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); return ret; } @@ -9495,6 +9550,7 @@ qemuDomainSetMemoryParameters(virDomainPtr dom, qemuDomainObjEndJob(driver, vm); cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); virObjectUnref(cfg); return ret; @@ -9582,6 +9638,7 @@ qemuDomainGetMemoryParameters(virDomainPtr dom, ret = 0; cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); return ret; } @@ -9772,6 +9829,7 @@ qemuDomainSetNumaParameters(virDomainPtr dom, cleanup: virBitmapFree(nodeset); + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); virObjectUnref(cfg); return ret; @@ -9792,6 +9850,7 @@ qemuDomainGetNumaParameters(virDomainPtr dom, virDomainDefPtr def = NULL; bool live = false; virBitmapPtr autoNodeset = NULL; + virQEMUDriverPtr driver = dom->conn->privateData; virCheckFlags(VIR_DOMAIN_AFFECT_LIVE | VIR_DOMAIN_AFFECT_CONFIG | @@ -9852,6 +9911,7 @@ qemuDomainGetNumaParameters(virDomainPtr dom, cleanup: VIR_FREE(nodeset); + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); return ret; } @@ -9969,6 +10029,7 @@ qemuDomainSetPerfEvents(virDomainPtr dom, qemuDomainObjEndJob(driver, vm); cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); virObjectUnref(cfg); return ret; @@ -10035,6 +10096,7 @@ qemuDomainGetPerfEvents(virDomainPtr dom, qemuDomainObjEndJob(driver, vm); cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); virTypedParamsFree(par, npar); return ret; @@ -10449,6 +10511,7 @@ qemuDomainSetSchedulerParametersFlags(virDomainPtr dom, cleanup: virDomainDefFree(persistentDefCopy); + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); if (eventNparams) virTypedParamsFree(eventParams, eventNparams); @@ -10694,6 +10757,7 @@ qemuDomainGetSchedulerParametersFlags(virDomainPtr dom, ret = 0; cleanup: + virDomainConsumeVMEvents(vm, driver); 
virDomainObjEndAPI(&vm); return ret; } @@ -10793,6 +10857,7 @@ qemuDomainBlockResize(virDomainPtr dom, cleanup: VIR_FREE(device); + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); return ret; } @@ -10938,6 +11003,7 @@ qemuDomainBlockStats(virDomainPtr dom, qemuDomainObjEndJob(driver, vm); cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); VIR_FREE(blockstats); return ret; @@ -11024,6 +11090,7 @@ qemuDomainBlockStatsFlags(virDomainPtr dom, cleanup: VIR_FREE(blockstats); + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); return ret; } @@ -11033,6 +11100,7 @@ qemuDomainInterfaceStats(virDomainPtr dom, const char *path, virDomainInterfaceStatsPtr stats) { + virQEMUDriverPtr driver = dom->conn->privateData; virDomainObjPtr vm; virDomainNetDefPtr net = NULL; int ret = -1; @@ -11066,6 +11134,7 @@ qemuDomainInterfaceStats(virDomainPtr dom, ret = 0; cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); return ret; } @@ -11263,6 +11332,7 @@ qemuDomainSetInterfaceParameters(virDomainPtr dom, cleanup: virNetDevBandwidthFree(bandwidth); virNetDevBandwidthFree(newBandwidth); + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); virObjectUnref(cfg); return ret; @@ -11279,6 +11349,7 @@ qemuDomainGetInterfaceParameters(virDomainPtr dom, virDomainObjPtr vm = NULL; virDomainDefPtr def = NULL; virDomainNetDefPtr net = NULL; + virQEMUDriverPtr driver = dom->conn->privateData; int ret = -1; virCheckFlags(VIR_DOMAIN_AFFECT_LIVE | @@ -11377,6 +11448,7 @@ qemuDomainGetInterfaceParameters(virDomainPtr dom, ret = 0; cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); return ret; } @@ -11450,6 +11522,7 @@ qemuDomainMemoryStats(virDomainPtr dom, qemuDomainObjEndJob(driver, vm); cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); return ret; } @@ -11598,6 +11671,7 @@ qemuDomainMemoryPeek(virDomainPtr dom, if (tmp) unlink(tmp); VIR_FREE(tmp); + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); virObjectUnref(cfg); return ret; @@ -11898,6 +11972,7 @@ qemuDomainGetBlockInfo(virDomainPtr dom, cleanup: VIR_FREE(alias); virHashFree(stats); + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); virObjectUnref(cfg); return ret; @@ -13187,6 +13262,7 @@ qemuDomainGetJobInfo(virDomainPtr dom, ret = qemuDomainJobInfoToInfo(&jobInfo, info); cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); return ret; } @@ -13232,6 +13308,7 @@ qemuDomainGetJobStats(virDomainPtr dom, VIR_FREE(priv->job.completed); cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); return ret; } @@ -13295,6 +13372,7 @@ static int qemuDomainAbortJob(virDomainPtr dom) qemuDomainObjEndJob(driver, vm); cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); return ret; } @@ -13339,6 +13417,7 @@ qemuDomainMigrateSetMaxDowntime(virDomainPtr dom, qemuDomainObjEndJob(driver, vm); cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); return ret; } @@ -13393,6 +13472,7 @@ qemuDomainMigrateGetMaxDowntime(virDomainPtr dom, qemuDomainObjEndJob(driver, vm); cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); return ret; } @@ -13448,6 +13528,7 @@ qemuDomainMigrateGetCompressionCache(virDomainPtr dom, qemuDomainObjEndJob(driver, vm); cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); return ret; } @@ -13503,6 +13584,7 @@ qemuDomainMigrateSetCompressionCache(virDomainPtr dom, qemuDomainObjEndJob(driver, vm); 
cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); return ret; } @@ -13561,6 +13643,7 @@ qemuDomainMigrateSetMaxSpeed(virDomainPtr dom, } cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); return ret; } @@ -13645,6 +13728,7 @@ qemuDomainMigrateStartPostCopy(virDomainPtr dom, qemuDomainObjEndJob(driver, vm); cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); return ret; } @@ -15142,6 +15226,7 @@ qemuDomainSnapshotCreateXML(virDomainPtr domain, qemuDomainObjEndAsyncJob(driver, vm); cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); virDomainSnapshotDefFree(def); VIR_FREE(xml); @@ -15209,6 +15294,7 @@ qemuDomainListAllSnapshots(virDomainPtr domain, { virDomainObjPtr vm = NULL; int n = -1; + virQEMUDriverPtr driver = domain->conn->privateData; virCheckFlags(VIR_DOMAIN_SNAPSHOT_LIST_ROOTS | VIR_DOMAIN_SNAPSHOT_FILTERS_ALL, -1); @@ -15222,6 +15308,7 @@ qemuDomainListAllSnapshots(virDomainPtr domain, n = virDomainListSnapshots(vm->snapshots, NULL, domain, snaps, flags); cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); return n; } @@ -15407,7 +15494,7 @@ qemuDomainSnapshotCurrent(virDomainPtr domain, { virDomainObjPtr vm; virDomainSnapshotPtr snapshot = NULL; - + virQEMUDriverPtr driver = domain->conn->privateData; virCheckFlags(0, NULL); if (!(vm = qemuDomObjFromDomain(domain))) @@ -15425,6 +15512,7 @@ qemuDomainSnapshotCurrent(virDomainPtr domain, snapshot = virGetDomainSnapshot(domain, vm->current_snapshot->def->name); cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); return snapshot; } @@ -15459,6 +15547,7 @@ qemuDomainSnapshotGetXMLDesc(virDomainSnapshotPtr snapshot, 0); cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); return xml; } @@ -15926,6 +16015,7 @@ qemuDomainRevertToSnapshot(virDomainSnapshotPtr snapshot, qemuDomainEventQueue(driver, event); qemuDomainEventQueue(driver, event2); } + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); virObjectUnref(caps); virObjectUnref(cfg); @@ -16087,6 +16177,7 @@ qemuDomainSnapshotDelete(virDomainSnapshotPtr snapshot, qemuDomainObjEndJob(driver, vm); cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); virObjectUnref(cfg); return ret; @@ -16133,6 +16224,7 @@ static int qemuDomainQemuMonitorCommand(virDomainPtr domain, const char *cmd, qemuDomainObjEndJob(driver, vm); cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); return ret; } @@ -16241,6 +16333,7 @@ qemuDomainOpenConsole(virDomainPtr dom, size_t i; virDomainChrDefPtr chr = NULL; qemuDomainObjPrivatePtr priv; + virQEMUDriverPtr driver = dom->conn->privateData; virCheckFlags(VIR_DOMAIN_CONSOLE_SAFE | VIR_DOMAIN_CONSOLE_FORCE, -1); @@ -16307,6 +16400,7 @@ qemuDomainOpenConsole(virDomainPtr dom, } cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); return ret; } @@ -16322,6 +16416,7 @@ qemuDomainOpenChannel(virDomainPtr dom, size_t i; virDomainChrDefPtr chr = NULL; qemuDomainObjPrivatePtr priv; + virQEMUDriverPtr driver = dom->conn->privateData; virCheckFlags(VIR_DOMAIN_CHANNEL_FORCE, -1); @@ -16381,6 +16476,7 @@ qemuDomainOpenChannel(virDomainPtr dom, } cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); return ret; } @@ -16606,6 +16702,7 @@ qemuDomainBlockPullCommon(virQEMUDriverPtr driver, VIR_FREE(basePath); VIR_FREE(backingPath); VIR_FREE(device); + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); return ret; } @@ 
-16836,6 +16933,7 @@ qemuDomainGetBlockJobInfo(virDomainPtr dom, qemuDomainObjEndJob(driver, vm); cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); return ret; } @@ -16902,6 +17000,7 @@ qemuDomainBlockJobSetSpeed(virDomainPtr dom, qemuDomainObjEndJob(driver, vm); cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); return ret; @@ -17219,6 +17318,7 @@ qemuDomainBlockRebase(virDomainPtr dom, const char *path, const char *base, dest = NULL; cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); virStorageSourceFree(dest); return ret; @@ -17625,6 +17725,7 @@ qemuDomainOpenGraphics(virDomainPtr dom, qemuDomainObjEndJob(driver, vm); cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); return ret; } @@ -17702,6 +17803,7 @@ qemuDomainOpenGraphicsFD(virDomainPtr dom, cleanup: VIR_FORCE_CLOSE(pair[0]); VIR_FORCE_CLOSE(pair[1]); + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); return ret; } @@ -18121,6 +18223,7 @@ qemuDomainSetBlockIoTune(virDomainPtr dom, cleanup: VIR_FREE(info.group_name); VIR_FREE(device); + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); if (eventNparams) virTypedParamsFree(eventParams, eventNparams); @@ -18282,6 +18385,7 @@ qemuDomainGetBlockIoTune(virDomainPtr dom, cleanup: VIR_FREE(reply.group_name); VIR_FREE(device); + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); return ret; } @@ -18353,6 +18457,7 @@ qemuDomainGetDiskErrors(virDomainPtr dom, qemuDomainObjEndJob(driver, vm); cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); virHashFree(table); if (ret < 0) { @@ -18406,6 +18511,7 @@ qemuDomainSetMetadata(virDomainPtr dom, qemuDomainObjEndJob(driver, vm); cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); virObjectUnref(caps); virObjectUnref(cfg); @@ -18420,6 +18526,7 @@ qemuDomainGetMetadata(virDomainPtr dom, { virDomainObjPtr vm; char *ret = NULL; + virQEMUDriverPtr driver = dom->conn->privateData; if (!(vm = qemuDomObjFromDomain(dom))) return NULL; @@ -18430,6 +18537,7 @@ qemuDomainGetMetadata(virDomainPtr dom, ret = virDomainObjGetMetadata(vm, type, uri, flags); cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); return ret; } @@ -18447,6 +18555,7 @@ qemuDomainGetCPUStats(virDomainPtr domain, int ret = -1; qemuDomainObjPrivatePtr priv; virBitmapPtr guestvcpus = NULL; + virQEMUDriverPtr driver = domain->conn->privateData; virCheckFlags(VIR_TYPED_PARAM_STRING_OKAY, -1); @@ -18482,6 +18591,7 @@ qemuDomainGetCPUStats(virDomainPtr domain, start_cpu, ncpus, guestvcpus); cleanup: virBitmapFree(guestvcpus); + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); return ret; } @@ -18569,6 +18679,7 @@ qemuDomainPMSuspendForDuration(virDomainPtr dom, qemuDomainObjEndJob(driver, vm); cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); return ret; } @@ -18617,6 +18728,7 @@ qemuDomainPMWakeup(virDomainPtr dom, qemuDomainObjEndJob(driver, vm); cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); return ret; } @@ -18683,6 +18795,7 @@ qemuDomainQemuAgentCommand(virDomainPtr domain, qemuDomainObjEndJob(driver, vm); cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); return result; } @@ -18972,6 +19085,7 @@ qemuDomainGetTime(virDomainPtr dom, qemuDomainObjEndJob(driver, vm); cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); return ret; } @@ -19054,6 +19168,7 @@ qemuDomainSetTime(virDomainPtr dom, 
qemuDomainObjEndJob(driver, vm); cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); return ret; } @@ -19092,6 +19207,7 @@ qemuDomainFSFreeze(virDomainPtr dom, qemuDomainObjEndJob(driver, vm); cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); return ret; } @@ -19136,6 +19252,7 @@ qemuDomainFSThaw(virDomainPtr dom, qemuDomainObjEndJob(driver, vm); cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); return ret; } @@ -20235,6 +20352,7 @@ qemuDomainGetFSInfo(virDomainPtr dom, qemuDomainObjEndJob(driver, vm); cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); virDomainDefFree(def); virObjectUnref(caps); @@ -20295,6 +20413,7 @@ qemuDomainInterfaceAddresses(virDomainPtr dom, } cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); return ret; } @@ -20443,6 +20562,7 @@ qemuDomainSetUserPassword(virDomainPtr dom, qemuDomainObjEndJob(driver, vm); cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); return ret; } @@ -20585,6 +20705,7 @@ static int qemuDomainRename(virDomainPtr dom, qemuDomainObjEndJob(driver, vm); cleanup: + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); return ret; } @@ -20698,6 +20819,7 @@ qemuDomainGetGuestVcpus(virDomainPtr dom, cleanup: VIR_FREE(info); + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); return ret; } @@ -20785,6 +20907,7 @@ qemuDomainSetGuestVcpus(virDomainPtr dom, cleanup: VIR_FREE(info); virBitmapFree(map); + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); return ret; } @@ -20858,6 +20981,7 @@ qemuDomainSetVcpu(virDomainPtr dom, cleanup: virBitmapFree(map); + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); return ret; } @@ -20931,6 +21055,7 @@ qemuDomainSetBlockThreshold(virDomainPtr dom, cleanup: VIR_FREE(nodename); + virDomainConsumeVMEvents(vm, driver); virDomainObjEndAPI(&vm); return ret; } -- 2.9.5
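Note for reviewers: every hunk in the patch above follows one pattern -- a call to virDomainConsumeVMEvents(vm, driver) is inserted on the cleanup path of a driver API, immediately before virDomainObjEndAPI(&vm) drops the per-VM lock and reference. The helper itself lives in qemu_event.[ch], introduced earlier in the series, and its body is not shown in these hunks. The sketch below is purely illustrative (all names and types are hypothetical, not the series' actual implementation): it models the drain loop such a helper is expected to run -- with the VM lock still held by the RPC thread, pop and handle pending events until the per-VM queue is empty, so the lock is only relinquished once no events remain.

/* Hypothetical sketch only; stands in for the real helper in
 * qemu_event.c. Caller (the RPC thread) still holds the VM lock. */
#include <stddef.h>
#include <stdlib.h>

typedef struct _qemuEvent qemuEvent;
struct _qemuEvent {
    qemuEvent *next;
    /* per-event-type handler, registered when the event was enqueued */
    void (*handle)(void *vm, void *driver, qemuEvent *ev);
};

typedef struct {
    qemuEvent *head;            /* per-VM FIFO of pending QMP events */
} qemuVMEventQueue;

/* Pop the oldest pending event, or NULL once the queue is drained. */
static qemuEvent *
qemuVMEventQueuePop(qemuVMEventQueue *q)
{
    qemuEvent *ev = q->head;
    if (ev)
        q->head = ev->next;
    return ev;
}

void
virDomainConsumeVMEventsSketch(void *vm, void *driver, qemuVMEventQueue *q)
{
    qemuEvent *ev;

    while ((ev = qemuVMEventQueuePop(q))) {
        ev->handle(vm, driver, ev); /* run the handler for this event type */
        free(ev);                   /* the real series also dequeues the
                                     * event from the global queue here */
    }
}

The invariant this buys is that any event which arrived while the RPC thread held the lock is handled by that same thread before the lock is released, so the event worker never has to spin on a busy VM.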

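The qemu_process.h hunk of the next patch (+86 lines per the diffstat) is not reproduced in this excerpt. Based on the definitions moved into qemu_process.c below -- where the functions lose their "static" qualifier so qemu_driver.c can keep calling them -- the newly exported declarations presumably look roughly like the following. Treat it as a sketch inferred from the .c hunks, not the literal header:

/* Sketch of the qemu_process.h additions; prototypes inferred from the
 * moved definitions, includes assumed from libvirt's existing headers. */
# include "qemu_conf.h"
# include "virfile.h"

/* Moved from qemu_driver.c; values are part of the on-disk save format,
 * so add new members only at the end. */
typedef enum {
    QEMU_SAVE_FORMAT_RAW = 0,
    QEMU_SAVE_FORMAT_GZIP = 1,
    QEMU_SAVE_FORMAT_BZIP2 = 2,
    QEMU_SAVE_FORMAT_XZ = 3,
    QEMU_SAVE_FORMAT_LZOP = 4,
    QEMU_SAVE_FORMAT_LAST
} virQEMUSaveFormat;

VIR_ENUM_DECL(qemuSaveCompression)
VIR_ENUM_DECL(qemuDumpFormat)

int qemuOpenFile(virQEMUDriverPtr driver,
                 virDomainObjPtr vm,
                 const char *path,
                 int oflags,
                 bool *needUnlink,
                 bool *bypassSecurityDriver);

int qemuGetCompressionProgram(const char *imageFormat,
                              char **compresspath,
                              const char *styleFormat,
                              bool use_raw_on_fail)
    ATTRIBUTE_NONNULL(2);

int doCoreDump(virQEMUDriverPtr driver,
               virDomainObjPtr vm,
               const char *path,
               unsigned int dump_flags,
               unsigned int dumpformat);

int qemuFileWrapperFDClose(virDomainObjPtr vm,
                           virFileWrapperFdPtr fd);

void processWatchdogEvent(virQEMUDriverPtr driver,
                          virDomainObjPtr vm,
                          int action);

Since the function bodies are moved unchanged, the patch is intended as pure code motion: diffing the removed and added copies side by side should show no behavioral difference beyond the visibility change.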
Signed-off-by: Prerna Saxena <saxenap.ltc@gmail.com> --- src/qemu/qemu_driver.c | 1161 ----------------------------------------------- src/qemu/qemu_process.c | 1133 +++++++++++++++++++++++++++++++++++++++++++++ src/qemu/qemu_process.h | 86 ++++ 3 files changed, 1219 insertions(+), 1161 deletions(-) diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index b249347..9d495fb 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -151,11 +151,6 @@ static int qemuDomainObjStart(virConnectPtr conn, static int qemuDomainManagedSaveLoad(virDomainObjPtr vm, void *opaque); -static int qemuOpenFileAs(uid_t fallback_uid, gid_t fallback_gid, - bool dynamicOwnership, - const char *path, int oflags, - bool *needUnlink, bool *bypassSecurityDriver); - static int qemuGetDHCPInterfaces(virDomainPtr dom, virDomainObjPtr vm, virDomainInterfacePtr **ifaces); @@ -2819,38 +2814,6 @@ qemuDomainGetControlInfo(virDomainPtr dom, verify(sizeof(QEMU_SAVE_MAGIC) == sizeof(QEMU_SAVE_PARTIAL)); -typedef enum { - QEMU_SAVE_FORMAT_RAW = 0, - QEMU_SAVE_FORMAT_GZIP = 1, - QEMU_SAVE_FORMAT_BZIP2 = 2, - /* - * Deprecated by xz and never used as part of a release - * QEMU_SAVE_FORMAT_LZMA - */ - QEMU_SAVE_FORMAT_XZ = 3, - QEMU_SAVE_FORMAT_LZOP = 4, - /* Note: add new members only at the end. - These values are used in the on-disk format. - Do not change or re-use numbers. */ - - QEMU_SAVE_FORMAT_LAST -} virQEMUSaveFormat; - -VIR_ENUM_DECL(qemuSaveCompression) -VIR_ENUM_IMPL(qemuSaveCompression, QEMU_SAVE_FORMAT_LAST, - "raw", - "gzip", - "bzip2", - "xz", - "lzop") - -VIR_ENUM_DECL(qemuDumpFormat) -VIR_ENUM_IMPL(qemuDumpFormat, VIR_DOMAIN_CORE_DUMP_FORMAT_LAST, - "elf", - "kdump-zlib", - "kdump-lzo", - "kdump-snappy") - typedef struct _virQEMUSaveHeader virQEMUSaveHeader; typedef virQEMUSaveHeader *virQEMUSaveHeaderPtr; struct _virQEMUSaveHeader { @@ -3062,214 +3025,6 @@ qemuCompressGetCommand(virQEMUSaveFormat compression) return ret; } -/** - * qemuOpenFile: - * @driver: driver object - * @vm: domain object - * @path: path to file to open - * @oflags: flags for opening/creation of the file - * @needUnlink: set to true if file was created by this function - * @bypassSecurityDriver: optional pointer to a boolean that will be set to true - * if security driver operations are pointless (due to - * NFS mount) - * - * Internal function to properly create or open existing files, with - * ownership affected by qemu driver setup and domain DAC label. - * - * Returns the file descriptor on success and negative errno on failure. - * - * This function should not be used on storage sources. Use - * qemuDomainStorageFileInit and storage driver APIs if possible. - **/ -static int -qemuOpenFile(virQEMUDriverPtr driver, - virDomainObjPtr vm, - const char *path, - int oflags, - bool *needUnlink, - bool *bypassSecurityDriver) -{ - int ret = -1; - virQEMUDriverConfigPtr cfg = virQEMUDriverGetConfig(driver); - uid_t user = cfg->user; - gid_t group = cfg->group; - bool dynamicOwnership = cfg->dynamicOwnership; - virSecurityLabelDefPtr seclabel; - - virObjectUnref(cfg); - - /* TODO: Take imagelabel into account? 
*/ - if (vm && - (seclabel = virDomainDefGetSecurityLabelDef(vm->def, "dac")) != NULL && - seclabel->label != NULL && - (virParseOwnershipIds(seclabel->label, &user, &group) < 0)) - goto cleanup; - - ret = qemuOpenFileAs(user, group, dynamicOwnership, - path, oflags, needUnlink, bypassSecurityDriver); - - cleanup: - return ret; -} - -static int -qemuOpenFileAs(uid_t fallback_uid, gid_t fallback_gid, - bool dynamicOwnership, - const char *path, int oflags, - bool *needUnlink, bool *bypassSecurityDriver) -{ - struct stat sb; - bool is_reg = true; - bool need_unlink = false; - bool bypass_security = false; - unsigned int vfoflags = 0; - int fd = -1; - int path_shared = virFileIsSharedFS(path); - uid_t uid = geteuid(); - gid_t gid = getegid(); - - /* path might be a pre-existing block dev, in which case - * we need to skip the create step, and also avoid unlink - * in the failure case */ - if (oflags & O_CREAT) { - need_unlink = true; - - /* Don't force chown on network-shared FS - * as it is likely to fail. */ - if (path_shared <= 0 || dynamicOwnership) - vfoflags |= VIR_FILE_OPEN_FORCE_OWNER; - - if (stat(path, &sb) == 0) { - /* It already exists, we don't want to delete it on error */ - need_unlink = false; - - is_reg = !!S_ISREG(sb.st_mode); - /* If the path is regular file which exists - * already and dynamic_ownership is off, we don't - * want to change its ownership, just open it as-is */ - if (is_reg && !dynamicOwnership) { - uid = sb.st_uid; - gid = sb.st_gid; - } - } - } - - /* First try creating the file as root */ - if (!is_reg) { - if ((fd = open(path, oflags & ~O_CREAT)) < 0) { - fd = -errno; - goto error; - } - } else { - if ((fd = virFileOpenAs(path, oflags, S_IRUSR | S_IWUSR, uid, gid, - vfoflags | VIR_FILE_OPEN_NOFORK)) < 0) { - /* If we failed as root, and the error was permission-denied - (EACCES or EPERM), assume it's on a network-connected share - where root access is restricted (eg, root-squashed NFS). If the - qemu user is non-root, just set a flag to - bypass security driver shenanigans, and retry the operation - after doing setuid to qemu user */ - if ((fd != -EACCES && fd != -EPERM) || fallback_uid == geteuid()) - goto error; - - /* On Linux we can also verify the FS-type of the directory. */ - switch (path_shared) { - case 1: - /* it was on a network share, so we'll continue - * as outlined above - */ - break; - - case -1: - virReportSystemError(-fd, oflags & O_CREAT - ? _("Failed to create file " - "'%s': couldn't determine fs type") - : _("Failed to open file " - "'%s': couldn't determine fs type"), - path); - goto cleanup; - - case 0: - default: - /* local file - log the error returned by virFileOpenAs */ - goto error; - } - - /* If we created the file above, then we need to remove it; - * otherwise, the next attempt to create will fail. If the - * file had already existed before we got here, then we also - * don't want to delete it and allow the following to succeed - * or fail based on existing protections - */ - if (need_unlink) - unlink(path); - - /* Retry creating the file as qemu user */ - - /* Since we're passing different modes... */ - vfoflags |= VIR_FILE_OPEN_FORCE_MODE; - - if ((fd = virFileOpenAs(path, oflags, - S_IRUSR|S_IWUSR|S_IRGRP|S_IWGRP, - fallback_uid, fallback_gid, - vfoflags | VIR_FILE_OPEN_FORK)) < 0) { - virReportSystemError(-fd, oflags & O_CREAT - ? 
_("Error from child process creating '%s'") - : _("Error from child process opening '%s'"), - path); - goto cleanup; - } - - /* Since we had to setuid to create the file, and the fstype - is NFS, we assume it's a root-squashing NFS share, and that - the security driver stuff would have failed anyway */ - - bypass_security = true; - } - } - cleanup: - if (needUnlink) - *needUnlink = need_unlink; - if (bypassSecurityDriver) - *bypassSecurityDriver = bypass_security; - return fd; - - error: - virReportSystemError(-fd, oflags & O_CREAT - ? _("Failed to create file '%s'") - : _("Failed to open file '%s'"), - path); - goto cleanup; -} - - -static int -qemuFileWrapperFDClose(virDomainObjPtr vm, - virFileWrapperFdPtr fd) -{ - int ret; - - /* virFileWrapperFd uses iohelper to write data onto disk. - * However, iohelper calls fdatasync() which may take ages to - * finish. Therefore, we shouldn't be waiting with the domain - * object locked. */ - - /* XXX Currently, this function is intended for *Save() only - * as restore needs some reworking before it's ready for - * this. */ - - virObjectUnlock(vm); - ret = virFileWrapperFdClose(fd); - virObjectLock(vm); - if (!virDomainObjIsActive(vm)) { - if (!virGetLastError()) - virReportError(VIR_ERR_OPERATION_FAILED, "%s", - _("domain is no longer running")); - ret = -1; - } - return ret; -} - /* Helper function to execute a migration to file with a correct save header * the caller needs to make sure that the processors are stopped and do all other @@ -3481,82 +3236,6 @@ qemuDomainSaveInternal(virQEMUDriverPtr driver, virDomainPtr dom, return ret; } - -/* qemuGetCompressionProgram: - * @imageFormat: String representation from qemu.conf for the compression - * image format being used (dump, save, or snapshot). - * @compresspath: Pointer to a character string to store the fully qualified - * path from virFindFileInPath. - * @styleFormat: String representing the style of format (dump, save, snapshot) - * @use_raw_on_fail: Boolean indicating how to handle the error path. For - * callers that are OK with invalid data or inability to - * find the compression program, just return a raw format - * and let the path remain as NULL. - * - * Returns: - * virQEMUSaveFormat - Integer representation of the compression - * program to be used for particular style - * (e.g. dump, save, or snapshot). - * QEMU_SAVE_FORMAT_RAW - If there is no qemu.conf imageFormat value or - * no there was an error, then just return RAW - * indicating none. 
- */ -static int ATTRIBUTE_NONNULL(2) -qemuGetCompressionProgram(const char *imageFormat, - char **compresspath, - const char *styleFormat, - bool use_raw_on_fail) -{ - int ret; - - *compresspath = NULL; - - if (!imageFormat) - return QEMU_SAVE_FORMAT_RAW; - - if ((ret = qemuSaveCompressionTypeFromString(imageFormat)) < 0) - goto error; - - if (ret == QEMU_SAVE_FORMAT_RAW) - return QEMU_SAVE_FORMAT_RAW; - - if (!(*compresspath = virFindFileInPath(imageFormat))) - goto error; - - return ret; - - error: - if (ret < 0) { - if (use_raw_on_fail) - VIR_WARN("Invalid %s image format specified in " - "configuration file, using raw", - styleFormat); - else - virReportError(VIR_ERR_OPERATION_FAILED, - _("Invalid %s image format specified " - "in configuration file"), - styleFormat); - } else { - if (use_raw_on_fail) - VIR_WARN("Compression program for %s image format in " - "configuration file isn't available, using raw", - styleFormat); - else - virReportError(VIR_ERR_OPERATION_FAILED, - _("Compression program for %s image format " - "in configuration file isn't available"), - styleFormat); - } - - /* Use "raw" as the format if the specified format is not valid, - * or the compress program is not available. */ - if (use_raw_on_fail) - return QEMU_SAVE_FORMAT_RAW; - - return -1; -} - - static int qemuDomainSaveFlags(virDomainPtr dom, const char *path, const char *dxml, unsigned int flags) @@ -3761,147 +3440,6 @@ qemuDomainManagedSaveRemove(virDomainPtr dom, unsigned int flags) return ret; } -static int qemuDumpToFd(virQEMUDriverPtr driver, virDomainObjPtr vm, - int fd, qemuDomainAsyncJob asyncJob, - const char *dumpformat) -{ - qemuDomainObjPrivatePtr priv = vm->privateData; - int ret = -1; - - if (!virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_DUMP_GUEST_MEMORY)) { - virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s", - _("dump-guest-memory is not supported")); - return -1; - } - - if (qemuSecuritySetImageFDLabel(driver->securityManager, vm->def, fd) < 0) - return -1; - - VIR_FREE(priv->job.current); - priv->job.dump_memory_only = true; - - if (qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob) < 0) - return -1; - - if (dumpformat) { - ret = qemuMonitorGetDumpGuestMemoryCapability(priv->mon, dumpformat); - - if (ret <= 0) { - virReportError(VIR_ERR_INVALID_ARG, - _("unsupported dumpformat '%s' " - "for this QEMU binary"), - dumpformat); - ret = -1; - goto cleanup; - } - } - - ret = qemuMonitorDumpToFd(priv->mon, fd, dumpformat); - - cleanup: - ignore_value(qemuDomainObjExitMonitor(driver, vm)); - - return ret; -} - -static int -doCoreDump(virQEMUDriverPtr driver, - virDomainObjPtr vm, - const char *path, - unsigned int dump_flags, - unsigned int dumpformat) -{ - int fd = -1; - int ret = -1; - virFileWrapperFdPtr wrapperFd = NULL; - int directFlag = 0; - unsigned int flags = VIR_FILE_WRAPPER_NON_BLOCKING; - const char *memory_dump_format = NULL; - virQEMUDriverConfigPtr cfg = virQEMUDriverGetConfig(driver); - char *compressedpath = NULL; - - /* We reuse "save" flag for "dump" here. Then, we can support the same - * format in "save" and "dump". This path doesn't need the compression - * program to exist and can ignore the return value - it only cares to - * get the compressedpath */ - ignore_value(qemuGetCompressionProgram(cfg->dumpImageFormat, - &compressedpath, - "dump", true)); - - /* Create an empty file with appropriate ownership. 
*/ - if (dump_flags & VIR_DUMP_BYPASS_CACHE) { - flags |= VIR_FILE_WRAPPER_BYPASS_CACHE; - directFlag = virFileDirectFdFlag(); - if (directFlag < 0) { - virReportError(VIR_ERR_OPERATION_FAILED, "%s", - _("bypass cache unsupported by this system")); - goto cleanup; - } - } - /* Core dumps usually imply last-ditch analysis efforts are - * desired, so we intentionally do not unlink even if a file was - * created. */ - if ((fd = qemuOpenFile(driver, vm, path, - O_CREAT | O_TRUNC | O_WRONLY | directFlag, - NULL, NULL)) < 0) - goto cleanup; - - if (!(wrapperFd = virFileWrapperFdNew(&fd, path, flags))) - goto cleanup; - - if (dump_flags & VIR_DUMP_MEMORY_ONLY) { - if (!(memory_dump_format = qemuDumpFormatTypeToString(dumpformat))) { - virReportError(VIR_ERR_INVALID_ARG, - _("unknown dumpformat '%d'"), dumpformat); - goto cleanup; - } - - /* qemu dumps in "elf" without dumpformat set */ - if (STREQ(memory_dump_format, "elf")) - memory_dump_format = NULL; - - ret = qemuDumpToFd(driver, vm, fd, QEMU_ASYNC_JOB_DUMP, - memory_dump_format); - } else { - if (dumpformat != VIR_DOMAIN_CORE_DUMP_FORMAT_RAW) { - virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s", - _("kdump-compressed format is only supported with " - "memory-only dump")); - goto cleanup; - } - - if (!qemuMigrationIsAllowed(driver, vm, false, 0)) - goto cleanup; - - ret = qemuMigrationToFile(driver, vm, fd, compressedpath, - QEMU_ASYNC_JOB_DUMP); - } - - if (ret < 0) - goto cleanup; - - if (VIR_CLOSE(fd) < 0) { - virReportSystemError(errno, - _("unable to close file %s"), - path); - goto cleanup; - } - if (qemuFileWrapperFDClose(vm, wrapperFd) < 0) - goto cleanup; - - ret = 0; - - cleanup: - VIR_FORCE_CLOSE(fd); - if (ret != 0) - unlink(path); - virFileWrapperFdFree(wrapperFd); - VIR_FREE(compressedpath); - virObjectUnref(cfg); - return ret; -} - - static int qemuDomainCoreDumpWithFormat(virDomainPtr dom, const char *path, @@ -4106,712 +3644,13 @@ qemuDomainScreenshot(virDomainPtr dom, return ret; } -static char * -getAutoDumpPath(virQEMUDriverPtr driver, - virDomainObjPtr vm) -{ - char *dumpfile = NULL; - char *domname = virDomainObjGetShortName(vm->def); - char timestr[100]; - struct tm time_info; - time_t curtime = time(NULL); - virQEMUDriverConfigPtr cfg = NULL; - - if (!domname) - return NULL; - - cfg = virQEMUDriverGetConfig(driver); - - localtime_r(&curtime, &time_info); - strftime(timestr, sizeof(timestr), "%Y-%m-%d-%H:%M:%S", &time_info); - - ignore_value(virAsprintf(&dumpfile, "%s/%s-%s", - cfg->autoDumpPath, - domname, - timestr)); - - virObjectUnref(cfg); - VIR_FREE(domname); - return dumpfile; -} - -static void -processWatchdogEvent(virQEMUDriverPtr driver, - virDomainObjPtr vm, - int action) -{ - int ret; - virQEMUDriverConfigPtr cfg = virQEMUDriverGetConfig(driver); - char *dumpfile = getAutoDumpPath(driver, vm); - unsigned int flags = VIR_DUMP_MEMORY_ONLY; - - if (!dumpfile) - goto cleanup; - - switch (action) { - case VIR_DOMAIN_WATCHDOG_ACTION_DUMP: - if (qemuDomainObjBeginAsyncJob(driver, vm, - QEMU_ASYNC_JOB_DUMP, - VIR_DOMAIN_JOB_OPERATION_DUMP) < 0) { - goto cleanup; - } - - if (!virDomainObjIsActive(vm)) { - virReportError(VIR_ERR_OPERATION_INVALID, - "%s", _("domain is not running")); - goto endjob; - } - - flags |= cfg->autoDumpBypassCache ? 
VIR_DUMP_BYPASS_CACHE: 0; - if ((ret = doCoreDump(driver, vm, dumpfile, flags, - VIR_DOMAIN_CORE_DUMP_FORMAT_RAW)) < 0) - virReportError(VIR_ERR_OPERATION_FAILED, - "%s", _("Dump failed")); - - ret = qemuProcessStartCPUs(driver, vm, NULL, - VIR_DOMAIN_RUNNING_UNPAUSED, - QEMU_ASYNC_JOB_DUMP); - - if (ret < 0) - virReportError(VIR_ERR_OPERATION_FAILED, - "%s", _("Resuming after dump failed")); - break; - default: - goto cleanup; - } - - endjob: - qemuDomainObjEndAsyncJob(driver, vm); - - cleanup: - VIR_FREE(dumpfile); - virObjectUnref(cfg); -} -static int -doCoreDumpToAutoDumpPath(virQEMUDriverPtr driver, - virDomainObjPtr vm, - unsigned int flags) -{ - int ret = -1; - virQEMUDriverConfigPtr cfg = virQEMUDriverGetConfig(driver); - char *dumpfile = getAutoDumpPath(driver, vm); - if (!dumpfile) - goto cleanup; - flags |= cfg->autoDumpBypassCache ? VIR_DUMP_BYPASS_CACHE: 0; - if ((ret = doCoreDump(driver, vm, dumpfile, flags, - VIR_DOMAIN_CORE_DUMP_FORMAT_RAW)) < 0) - virReportError(VIR_ERR_OPERATION_FAILED, - "%s", _("Dump failed")); - cleanup: - VIR_FREE(dumpfile); - virObjectUnref(cfg); - return ret; -} -static void -qemuProcessGuestPanicEventInfo(virQEMUDriverPtr driver, - virDomainObjPtr vm, - qemuMonitorEventPanicInfoPtr info) -{ - char *msg = qemuMonitorGuestPanicEventInfoFormatMsg(info); - char *timestamp = virTimeStringNow(); - if (msg && timestamp) - qemuDomainLogAppendMessage(driver, vm, "%s: panic %s\n", timestamp, msg); - VIR_FREE(timestamp); - VIR_FREE(msg); -} - - -static void -processGuestPanicEvent(virQEMUDriverPtr driver, - virDomainObjPtr vm, - int action, - qemuMonitorEventPanicInfoPtr info) -{ - qemuDomainObjPrivatePtr priv = vm->privateData; - virObjectEventPtr event = NULL; - virQEMUDriverConfigPtr cfg = virQEMUDriverGetConfig(driver); - bool removeInactive = false; - - if (qemuDomainObjBeginAsyncJob(driver, vm, QEMU_ASYNC_JOB_DUMP, - VIR_DOMAIN_JOB_OPERATION_DUMP) < 0) - goto cleanup; - - if (!virDomainObjIsActive(vm)) { - VIR_DEBUG("Ignoring GUEST_PANICKED event from inactive domain %s", - vm->def->name); - goto endjob; - } - - if (info) - qemuProcessGuestPanicEventInfo(driver, vm, info); - - virDomainObjSetState(vm, VIR_DOMAIN_CRASHED, VIR_DOMAIN_CRASHED_PANICKED); - - event = virDomainEventLifecycleNewFromObj(vm, - VIR_DOMAIN_EVENT_CRASHED, - VIR_DOMAIN_EVENT_CRASHED_PANICKED); - - qemuDomainEventQueue(driver, event); - - if (virDomainSaveStatus(driver->xmlopt, cfg->stateDir, vm, driver->caps) < 0) { - VIR_WARN("Unable to save status on vm %s after state change", - vm->def->name); - } - - if (virDomainLockProcessPause(driver->lockManager, vm, &priv->lockState) < 0) - VIR_WARN("Unable to release lease on %s", vm->def->name); - VIR_DEBUG("Preserving lock state '%s'", NULLSTR(priv->lockState)); - - switch (action) { - case VIR_DOMAIN_LIFECYCLE_CRASH_COREDUMP_DESTROY: - if (doCoreDumpToAutoDumpPath(driver, vm, VIR_DUMP_MEMORY_ONLY) < 0) - goto endjob; - ATTRIBUTE_FALLTHROUGH; - - case VIR_DOMAIN_LIFECYCLE_CRASH_DESTROY: - qemuProcessStop(driver, vm, VIR_DOMAIN_SHUTOFF_CRASHED, - QEMU_ASYNC_JOB_DUMP, 0); - event = virDomainEventLifecycleNewFromObj(vm, - VIR_DOMAIN_EVENT_STOPPED, - VIR_DOMAIN_EVENT_STOPPED_CRASHED); - - qemuDomainEventQueue(driver, event); - virDomainAuditStop(vm, "destroyed"); - removeInactive = true; - break; - - case VIR_DOMAIN_LIFECYCLE_CRASH_COREDUMP_RESTART: - if (doCoreDumpToAutoDumpPath(driver, vm, VIR_DUMP_MEMORY_ONLY) < 0) - goto endjob; - ATTRIBUTE_FALLTHROUGH; - - case VIR_DOMAIN_LIFECYCLE_CRASH_RESTART: - qemuDomainSetFakeReboot(driver, vm, 
true); - qemuProcessShutdownOrReboot(driver, vm); - break; - - case VIR_DOMAIN_LIFECYCLE_CRASH_PRESERVE: - break; - - default: - break; - } - - endjob: - qemuDomainObjEndAsyncJob(driver, vm); - if (removeInactive) - qemuDomainRemoveInactiveJob(driver, vm); - - cleanup: - virObjectUnref(cfg); -} - - -static void -processDeviceDeletedEvent(virQEMUDriverPtr driver, - virDomainObjPtr vm, - char *devAlias) -{ - virQEMUDriverConfigPtr cfg = virQEMUDriverGetConfig(driver); - virDomainDeviceDef dev; - - VIR_DEBUG("Removing device %s from domain %p %s", - devAlias, vm, vm->def->name); - - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) - goto cleanup; - - if (!virDomainObjIsActive(vm)) { - VIR_DEBUG("Domain is not running"); - goto endjob; - } - - if (STRPREFIX(devAlias, "vcpu")) { - qemuDomainRemoveVcpuAlias(driver, vm, devAlias); - } else { - if (virDomainDefFindDevice(vm->def, devAlias, &dev, true) < 0) - goto endjob; - - if (qemuDomainRemoveDevice(driver, vm, &dev) < 0) - goto endjob; - } - - if (virDomainSaveStatus(driver->xmlopt, cfg->stateDir, vm, driver->caps) < 0) - VIR_WARN("unable to save domain status after removing device %s", - devAlias); - - endjob: - qemuDomainObjEndJob(driver, vm); - - cleanup: - VIR_FREE(devAlias); - virObjectUnref(cfg); -} - - -static void -syncNicRxFilterMacAddr(char *ifname, virNetDevRxFilterPtr guestFilter, - virNetDevRxFilterPtr hostFilter) -{ - char newMacStr[VIR_MAC_STRING_BUFLEN]; - - if (virMacAddrCmp(&hostFilter->mac, &guestFilter->mac)) { - virMacAddrFormat(&guestFilter->mac, newMacStr); - - /* set new MAC address from guest to associated macvtap device */ - if (virNetDevSetMAC(ifname, &guestFilter->mac) < 0) { - VIR_WARN("Couldn't set new MAC address %s to device %s " - "while responding to NIC_RX_FILTER_CHANGED", - newMacStr, ifname); - } else { - VIR_DEBUG("device %s MAC address set to %s", ifname, newMacStr); - } - } -} - - -static void -syncNicRxFilterGuestMulticast(char *ifname, virNetDevRxFilterPtr guestFilter, - virNetDevRxFilterPtr hostFilter) -{ - size_t i, j; - bool found; - char macstr[VIR_MAC_STRING_BUFLEN]; - - for (i = 0; i < guestFilter->multicast.nTable; i++) { - found = false; - - for (j = 0; j < hostFilter->multicast.nTable; j++) { - if (virMacAddrCmp(&guestFilter->multicast.table[i], - &hostFilter->multicast.table[j]) == 0) { - found = true; - break; - } - } - - if (!found) { - virMacAddrFormat(&guestFilter->multicast.table[i], macstr); - - if (virNetDevAddMulti(ifname, &guestFilter->multicast.table[i]) < 0) { - VIR_WARN("Couldn't add new multicast MAC address %s to " - "device %s while responding to NIC_RX_FILTER_CHANGED", - macstr, ifname); - } else { - VIR_DEBUG("Added multicast MAC %s to %s interface", - macstr, ifname); - } - } - } -} - - -static void -syncNicRxFilterHostMulticast(char *ifname, virNetDevRxFilterPtr guestFilter, - virNetDevRxFilterPtr hostFilter) -{ - size_t i, j; - bool found; - char macstr[VIR_MAC_STRING_BUFLEN]; - - for (i = 0; i < hostFilter->multicast.nTable; i++) { - found = false; - - for (j = 0; j < guestFilter->multicast.nTable; j++) { - if (virMacAddrCmp(&hostFilter->multicast.table[i], - &guestFilter->multicast.table[j]) == 0) { - found = true; - break; - } - } - - if (!found) { - virMacAddrFormat(&hostFilter->multicast.table[i], macstr); - - if (virNetDevDelMulti(ifname, &hostFilter->multicast.table[i]) < 0) { - VIR_WARN("Couldn't delete multicast MAC address %s from " - "device %s while responding to NIC_RX_FILTER_CHANGED", - macstr, ifname); - } else { - VIR_DEBUG("Deleted multicast MAC 
%s from %s interface", - macstr, ifname); - } - } - } -} - - -static void -syncNicRxFilterPromiscMode(char *ifname, - virNetDevRxFilterPtr guestFilter, - virNetDevRxFilterPtr hostFilter) -{ - bool promisc; - bool setpromisc = false; - - /* Set macvtap promisc mode to true if the guest has vlans defined */ - /* or synchronize the macvtap promisc mode if different from guest */ - if (guestFilter->vlan.nTable > 0) { - if (!hostFilter->promiscuous) { - setpromisc = true; - promisc = true; - } - } else if (hostFilter->promiscuous != guestFilter->promiscuous) { - setpromisc = true; - promisc = guestFilter->promiscuous; - } - - if (setpromisc) { - if (virNetDevSetPromiscuous(ifname, promisc) < 0) { - VIR_WARN("Couldn't set PROMISC flag to %s for device %s " - "while responding to NIC_RX_FILTER_CHANGED", - promisc ? "true" : "false", ifname); - } - } -} - - -static void -syncNicRxFilterMultiMode(char *ifname, virNetDevRxFilterPtr guestFilter, - virNetDevRxFilterPtr hostFilter) -{ - if (hostFilter->multicast.mode != guestFilter->multicast.mode) { - switch (guestFilter->multicast.mode) { - case VIR_NETDEV_RX_FILTER_MODE_ALL: - if (virNetDevSetRcvAllMulti(ifname, true)) { - - VIR_WARN("Couldn't set allmulticast flag to 'on' for " - "device %s while responding to " - "NIC_RX_FILTER_CHANGED", ifname); - } - break; - - case VIR_NETDEV_RX_FILTER_MODE_NORMAL: - if (virNetDevSetRcvMulti(ifname, true)) { - - VIR_WARN("Couldn't set multicast flag to 'on' for " - "device %s while responding to " - "NIC_RX_FILTER_CHANGED", ifname); - } - - if (virNetDevSetRcvAllMulti(ifname, false)) { - VIR_WARN("Couldn't set allmulticast flag to 'off' for " - "device %s while responding to " - "NIC_RX_FILTER_CHANGED", ifname); - } - break; - - case VIR_NETDEV_RX_FILTER_MODE_NONE: - if (virNetDevSetRcvAllMulti(ifname, false)) { - VIR_WARN("Couldn't set allmulticast flag to 'off' for " - "device %s while responding to " - "NIC_RX_FILTER_CHANGED", ifname); - } - - if (virNetDevSetRcvMulti(ifname, false)) { - VIR_WARN("Couldn't set multicast flag to 'off' for " - "device %s while responding to " - "NIC_RX_FILTER_CHANGED", - ifname); - } - break; - } - } -} - - -static void -syncNicRxFilterDeviceOptions(char *ifname, virNetDevRxFilterPtr guestFilter, - virNetDevRxFilterPtr hostFilter) -{ - syncNicRxFilterPromiscMode(ifname, guestFilter, hostFilter); - syncNicRxFilterMultiMode(ifname, guestFilter, hostFilter); -} - - -static void -syncNicRxFilterMulticast(char *ifname, - virNetDevRxFilterPtr guestFilter, - virNetDevRxFilterPtr hostFilter) -{ - syncNicRxFilterGuestMulticast(ifname, guestFilter, hostFilter); - syncNicRxFilterHostMulticast(ifname, guestFilter, hostFilter); -} - -static void -processNicRxFilterChangedEvent(virQEMUDriverPtr driver, - virDomainObjPtr vm, - char *devAlias) -{ - virQEMUDriverConfigPtr cfg = virQEMUDriverGetConfig(driver); - qemuDomainObjPrivatePtr priv = vm->privateData; - virDomainDeviceDef dev; - virDomainNetDefPtr def; - virNetDevRxFilterPtr guestFilter = NULL; - virNetDevRxFilterPtr hostFilter = NULL; - int ret; - - VIR_DEBUG("Received NIC_RX_FILTER_CHANGED event for device %s " - "from domain %p %s", - devAlias, vm, vm->def->name); - - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) - goto cleanup; - - if (!virDomainObjIsActive(vm)) { - VIR_DEBUG("Domain is not running"); - goto endjob; - } - - if (virDomainDefFindDevice(vm->def, devAlias, &dev, true) < 0) { - VIR_WARN("NIC_RX_FILTER_CHANGED event received for " - "non-existent device %s in domain %s", - devAlias, vm->def->name); - goto 
endjob; - } - if (dev.type != VIR_DOMAIN_DEVICE_NET) { - VIR_WARN("NIC_RX_FILTER_CHANGED event received for " - "non-network device %s in domain %s", - devAlias, vm->def->name); - goto endjob; - } - def = dev.data.net; - - if (!virDomainNetGetActualTrustGuestRxFilters(def)) { - VIR_DEBUG("ignore NIC_RX_FILTER_CHANGED event for network " - "device %s in domain %s", - def->info.alias, vm->def->name); - /* not sending "query-rx-filter" will also suppress any - * further NIC_RX_FILTER_CHANGED events for this device - */ - goto endjob; - } - - /* handle the event - send query-rx-filter and respond to it. */ - - VIR_DEBUG("process NIC_RX_FILTER_CHANGED event for network " - "device %s in domain %s", def->info.alias, vm->def->name); - - qemuDomainObjEnterMonitor(driver, vm); - ret = qemuMonitorQueryRxFilter(priv->mon, devAlias, &guestFilter); - if (qemuDomainObjExitMonitor(driver, vm) < 0) - ret = -1; - if (ret < 0) - goto endjob; - - if (virDomainNetGetActualType(def) == VIR_DOMAIN_NET_TYPE_DIRECT) { - - if (virNetDevGetRxFilter(def->ifname, &hostFilter)) { - VIR_WARN("Couldn't get current RX filter for device %s " - "while responding to NIC_RX_FILTER_CHANGED", - def->ifname); - goto endjob; - } - - /* For macvtap connections, set the following macvtap network device - * attributes to match those of the guest network device: - * - MAC address - * - Multicast MAC address table - * - Device options: - * - PROMISC - * - MULTICAST - * - ALLMULTI - */ - syncNicRxFilterMacAddr(def->ifname, guestFilter, hostFilter); - syncNicRxFilterMulticast(def->ifname, guestFilter, hostFilter); - syncNicRxFilterDeviceOptions(def->ifname, guestFilter, hostFilter); - } - - if (virDomainNetGetActualType(def) == VIR_DOMAIN_NET_TYPE_NETWORK) { - const char *brname = virDomainNetGetActualBridgeName(def); - - /* For libivrt network connections, set the following TUN/TAP network - * device attributes to match those of the guest network device: - * - QoS filters (which are based on MAC address) - */ - if (virDomainNetGetActualBandwidth(def) && - def->data.network.actual && - virNetDevBandwidthUpdateFilter(brname, &guestFilter->mac, - def->data.network.actual->class_id) < 0) - goto endjob; - } - - endjob: - qemuDomainObjEndJob(driver, vm); - - cleanup: - virNetDevRxFilterFree(hostFilter); - virNetDevRxFilterFree(guestFilter); - VIR_FREE(devAlias); - virObjectUnref(cfg); -} - - -static void -processSerialChangedEvent(virQEMUDriverPtr driver, - virDomainObjPtr vm, - char *devAlias, - bool connected) -{ - virQEMUDriverConfigPtr cfg = virQEMUDriverGetConfig(driver); - virDomainChrDeviceState newstate; - virObjectEventPtr event = NULL; - virDomainDeviceDef dev; - qemuDomainObjPrivatePtr priv = vm->privateData; - - if (connected) - newstate = VIR_DOMAIN_CHR_DEVICE_STATE_CONNECTED; - else - newstate = VIR_DOMAIN_CHR_DEVICE_STATE_DISCONNECTED; - - VIR_DEBUG("Changing serial port state %s in domain %p %s", - devAlias, vm, vm->def->name); - - if (newstate == VIR_DOMAIN_CHR_DEVICE_STATE_DISCONNECTED && - virDomainObjIsActive(vm) && priv->agent) { - /* peek into the domain definition to find the channel */ - if (virDomainDefFindDevice(vm->def, devAlias, &dev, true) == 0 && - dev.type == VIR_DOMAIN_DEVICE_CHR && - dev.data.chr->deviceType == VIR_DOMAIN_CHR_DEVICE_TYPE_CHANNEL && - dev.data.chr->targetType == VIR_DOMAIN_CHR_CHANNEL_TARGET_TYPE_VIRTIO && - STREQ_NULLABLE(dev.data.chr->target.name, "org.qemu.guest_agent.0")) - /* Close agent monitor early, so that other threads - * waiting for the agent to reply can finish and our - * job we 
acquire below can succeed. */ - qemuAgentNotifyClose(priv->agent); - - /* now discard the data, since it may possibly change once we unlock - * while entering the job */ - memset(&dev, 0, sizeof(dev)); - } - - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) - goto cleanup; - - if (!virDomainObjIsActive(vm)) { - VIR_DEBUG("Domain is not running"); - goto endjob; - } - - if (virDomainDefFindDevice(vm->def, devAlias, &dev, true) < 0) - goto endjob; - - /* we care only about certain devices */ - if (dev.type != VIR_DOMAIN_DEVICE_CHR || - dev.data.chr->deviceType != VIR_DOMAIN_CHR_DEVICE_TYPE_CHANNEL || - dev.data.chr->targetType != VIR_DOMAIN_CHR_CHANNEL_TARGET_TYPE_VIRTIO) - goto endjob; - - dev.data.chr->state = newstate; - - if (virDomainSaveStatus(driver->xmlopt, cfg->stateDir, vm, driver->caps) < 0) - VIR_WARN("unable to save status of domain %s after updating state of " - "channel %s", vm->def->name, devAlias); - - if (STREQ_NULLABLE(dev.data.chr->target.name, "org.qemu.guest_agent.0")) { - if (newstate == VIR_DOMAIN_CHR_DEVICE_STATE_CONNECTED) { - if (qemuConnectAgent(driver, vm) < 0) - goto endjob; - } else { - if (priv->agent) { - qemuAgentClose(priv->agent); - priv->agent = NULL; - } - priv->agentError = false; - } - - event = virDomainEventAgentLifecycleNewFromObj(vm, newstate, - VIR_CONNECT_DOMAIN_EVENT_AGENT_LIFECYCLE_REASON_CHANNEL); - qemuDomainEventQueue(driver, event); - } - - endjob: - qemuDomainObjEndJob(driver, vm); - - cleanup: - VIR_FREE(devAlias); - virObjectUnref(cfg); - -} - - -static void -processBlockJobEvent(virQEMUDriverPtr driver, - virDomainObjPtr vm, - char *diskAlias, - int type, - int status) -{ - virDomainDiskDefPtr disk; - - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) - goto cleanup; - - if (!virDomainObjIsActive(vm)) { - VIR_DEBUG("Domain is not running"); - goto endjob; - } - - if ((disk = qemuProcessFindDomainDiskByAlias(vm, diskAlias))) - qemuBlockJobEventProcess(driver, vm, disk, QEMU_ASYNC_JOB_NONE, type, status); - - endjob: - qemuDomainObjEndJob(driver, vm); - cleanup: - VIR_FREE(diskAlias); -} - - -static void -processMonitorEOFEvent(virQEMUDriverPtr driver, - virDomainObjPtr vm) -{ - qemuDomainObjPrivatePtr priv = vm->privateData; - int eventReason = VIR_DOMAIN_EVENT_STOPPED_SHUTDOWN; - int stopReason = VIR_DOMAIN_SHUTOFF_SHUTDOWN; - const char *auditReason = "shutdown"; - unsigned int stopFlags = 0; - virObjectEventPtr event = NULL; - - if (qemuProcessBeginStopJob(driver, vm, QEMU_JOB_DESTROY, true) < 0) - return; - - if (!virDomainObjIsActive(vm)) { - VIR_DEBUG("Domain %p '%s' is not active, ignoring EOF", - vm, vm->def->name); - goto endjob; - } - - if (priv->monJSON && !priv->gotShutdown) { - VIR_DEBUG("Monitor connection to '%s' closed without SHUTDOWN event; " - "assuming the domain crashed", vm->def->name); - eventReason = VIR_DOMAIN_EVENT_STOPPED_FAILED; - stopReason = VIR_DOMAIN_SHUTOFF_CRASHED; - auditReason = "failed"; - } - - if (priv->job.asyncJob == QEMU_ASYNC_JOB_MIGRATION_IN) { - stopFlags |= VIR_QEMU_PROCESS_STOP_MIGRATED; - qemuMigrationErrorSave(driver, vm->def->name, - qemuMonitorLastError(priv->mon)); - } - - event = virDomainEventLifecycleNewFromObj(vm, VIR_DOMAIN_EVENT_STOPPED, - eventReason); - qemuProcessStop(driver, vm, stopReason, QEMU_ASYNC_JOB_NONE, stopFlags); - virDomainAuditStop(vm, auditReason); - qemuDomainEventQueue(driver, event); - - endjob: - qemuDomainRemoveInactive(driver, vm); - qemuDomainObjEndJob(driver, vm); -} static void qemuProcessEventHandler(void *data, void *opaque) diff 
--git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c index ee8bae5..d2b5fe8 100644 --- a/src/qemu/qemu_process.c +++ b/src/qemu/qemu_process.c @@ -36,6 +36,7 @@ #include "qemu_processpriv.h" #include "qemu_alias.h" #include "qemu_block.h" +#include "qemu_blockjob.h" #include "qemu_domain.h" #include "qemu_domain_address.h" #include "qemu_cgroup.h" @@ -87,6 +88,18 @@ typedef struct { void (*handler_func)(qemuEventPtr ev, void *opaque); } qemuEventFuncTable; +VIR_ENUM_IMPL(qemuSaveCompression, QEMU_SAVE_FORMAT_LAST, + "raw", + "gzip", + "bzip2", + "xz", + "lzop") + +VIR_ENUM_IMPL(qemuDumpFormat, VIR_DOMAIN_CORE_DUMP_FORMAT_LAST, + "elf", + "kdump-zlib", + "kdump-lzo", + "kdump-snappy") /** * qemuProcessRemoveDomainStatus @@ -7458,3 +7471,1123 @@ qemuProcessReconnectAll(virConnectPtr conn, virQEMUDriverPtr driver) struct qemuProcessReconnectData data = {.conn = conn, .driver = driver}; virDomainObjListForEach(driver->domains, qemuProcessReconnectHelper, &data); } + +static int +qemuOpenFileAs(uid_t fallback_uid, gid_t fallback_gid, + bool dynamicOwnership, + const char *path, int oflags, + bool *needUnlink, bool *bypassSecurityDriver) +{ + struct stat sb; + bool is_reg = true; + bool need_unlink = false; + bool bypass_security = false; + unsigned int vfoflags = 0; + int fd = -1; + int path_shared = virFileIsSharedFS(path); + uid_t uid = geteuid(); + gid_t gid = getegid(); + + /* path might be a pre-existing block dev, in which case + * we need to skip the create step, and also avoid unlink + * in the failure case */ + if (oflags & O_CREAT) { + need_unlink = true; + + /* Don't force chown on network-shared FS + * as it is likely to fail. */ + if (path_shared <= 0 || dynamicOwnership) + vfoflags |= VIR_FILE_OPEN_FORCE_OWNER; + + if (stat(path, &sb) == 0) { + /* It already exists, we don't want to delete it on error */ + need_unlink = false; + + is_reg = !!S_ISREG(sb.st_mode); + /* If the path is regular file which exists + * already and dynamic_ownership is off, we don't + * want to change its ownership, just open it as-is */ + if (is_reg && !dynamicOwnership) { + uid = sb.st_uid; + gid = sb.st_gid; + } + } + } + + /* First try creating the file as root */ + if (!is_reg) { + if ((fd = open(path, oflags & ~O_CREAT)) < 0) { + fd = -errno; + goto error; + } + } else { + if ((fd = virFileOpenAs(path, oflags, S_IRUSR | S_IWUSR, uid, gid, + vfoflags | VIR_FILE_OPEN_NOFORK)) < 0) { + /* If we failed as root, and the error was permission-denied + (EACCES or EPERM), assume it's on a network-connected share + where root access is restricted (eg, root-squashed NFS). If the + qemu user is non-root, just set a flag to + bypass security driver shenanigans, and retry the operation + after doing setuid to qemu user */ + if ((fd != -EACCES && fd != -EPERM) || fallback_uid == geteuid()) + goto error; + + /* On Linux we can also verify the FS-type of the directory. */ + switch (path_shared) { + case 1: + /* it was on a network share, so we'll continue + * as outlined above + */ + break; + + case -1: + virReportSystemError(-fd, oflags & O_CREAT + ? _("Failed to create file " + "'%s': couldn't determine fs type") + : _("Failed to open file " + "'%s': couldn't determine fs type"), + path); + goto cleanup; + + case 0: + default: + /* local file - log the error returned by virFileOpenAs */ + goto error; + } + + /* If we created the file above, then we need to remove it; + * otherwise, the next attempt to create will fail. 
If the + * file had already existed before we got here, then we also + * don't want to delete it and allow the following to succeed + * or fail based on existing protections + */ + if (need_unlink) + unlink(path); + + /* Retry creating the file as qemu user */ + + /* Since we're passing different modes... */ + vfoflags |= VIR_FILE_OPEN_FORCE_MODE; + + if ((fd = virFileOpenAs(path, oflags, + S_IRUSR|S_IWUSR|S_IRGRP|S_IWGRP, + fallback_uid, fallback_gid, + vfoflags | VIR_FILE_OPEN_FORK)) < 0) { + virReportSystemError(-fd, oflags & O_CREAT + ? _("Error from child process creating '%s'") + : _("Error from child process opening '%s'"), + path); + goto cleanup; + } + + /* Since we had to setuid to create the file, and the fstype + is NFS, we assume it's a root-squashing NFS share, and that + the security driver stuff would have failed anyway */ + + bypass_security = true; + } + } + cleanup: + if (needUnlink) + *needUnlink = need_unlink; + if (bypassSecurityDriver) + *bypassSecurityDriver = bypass_security; + return fd; + + error: + virReportSystemError(-fd, oflags & O_CREAT + ? _("Failed to create file '%s'") + : _("Failed to open file '%s'"), + path); + goto cleanup; +} + +/** + * qemuOpenFile: + * @driver: driver object + * @vm: domain object + * @path: path to file to open + * @oflags: flags for opening/creation of the file + * @needUnlink: set to true if file was created by this function + * @bypassSecurityDriver: optional pointer to a boolean that will be set to true + * if security driver operations are pointless (due to + * NFS mount) + * + * Internal function to properly create or open existing files, with + * ownership affected by qemu driver setup and domain DAC label. + * + * Returns the file descriptor on success and negative errno on failure. + * + * This function should not be used on storage sources. Use + * qemuDomainStorageFileInit and storage driver APIs if possible. + **/ +int +qemuOpenFile(virQEMUDriverPtr driver, + virDomainObjPtr vm, + const char *path, + int oflags, + bool *needUnlink, + bool *bypassSecurityDriver) +{ + int ret = -1; + virQEMUDriverConfigPtr cfg = virQEMUDriverGetConfig(driver); + uid_t user = cfg->user; + gid_t group = cfg->group; + bool dynamicOwnership = cfg->dynamicOwnership; + virSecurityLabelDefPtr seclabel; + + virObjectUnref(cfg); + + /* TODO: Take imagelabel into account? */ + if (vm && + (seclabel = virDomainDefGetSecurityLabelDef(vm->def, "dac")) != NULL && + seclabel->label != NULL && + (virParseOwnershipIds(seclabel->label, &user, &group) < 0)) + goto cleanup; + + ret = qemuOpenFileAs(user, group, dynamicOwnership, + path, oflags, needUnlink, bypassSecurityDriver); + + cleanup: + return ret; +} + +/* qemuGetCompressionProgram: + * @imageFormat: String representation from qemu.conf for the compression + * image format being used (dump, save, or snapshot). + * @compresspath: Pointer to a character string to store the fully qualified + * path from virFindFileInPath. + * @styleFormat: String representing the style of format (dump, save, snapshot) + * @use_raw_on_fail: Boolean indicating how to handle the error path. For + * callers that are OK with invalid data or inability to + * find the compression program, just return a raw format + * and let the path remain as NULL. + * + * Returns: + * virQEMUSaveFormat - Integer representation of the compression + * program to be used for particular style + * (e.g. dump, save, or snapshot). 
+ * QEMU_SAVE_FORMAT_RAW - If there is no qemu.conf imageFormat value or + * no there was an error, then just return RAW + * indicating none. + **/ +int ATTRIBUTE_NONNULL(2) +qemuGetCompressionProgram(const char *imageFormat, + char **compresspath, + const char *styleFormat, + bool use_raw_on_fail) +{ + int ret; + + *compresspath = NULL; + + if (!imageFormat) + return QEMU_SAVE_FORMAT_RAW; + + if ((ret = qemuSaveCompressionTypeFromString(imageFormat)) < 0) + goto error; + + if (ret == QEMU_SAVE_FORMAT_RAW) + return QEMU_SAVE_FORMAT_RAW; + + if (!(*compresspath = virFindFileInPath(imageFormat))) + goto error; + + return ret; + + error: + if (ret < 0) { + if (use_raw_on_fail) + VIR_WARN("Invalid %s image format specified in " + "configuration file, using raw", + styleFormat); + else + virReportError(VIR_ERR_OPERATION_FAILED, + _("Invalid %s image format specified " + "in configuration file"), + styleFormat); + } else { + if (use_raw_on_fail) + VIR_WARN("Compression program for %s image format in " + "configuration file isn't available, using raw", + styleFormat); + else + virReportError(VIR_ERR_OPERATION_FAILED, + _("Compression program for %s image format " + "in configuration file isn't available"), + styleFormat); + } + + /* Use "raw" as the format if the specified format is not valid, + * or the compress program is not available. */ + if (use_raw_on_fail) + return QEMU_SAVE_FORMAT_RAW; + + return -1; +} + + +static int qemuDumpToFd(virQEMUDriverPtr driver, virDomainObjPtr vm, + int fd, qemuDomainAsyncJob asyncJob, + const char *dumpformat) +{ + qemuDomainObjPrivatePtr priv = vm->privateData; + int ret = -1; + + if (!virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_DUMP_GUEST_MEMORY)) { + virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s", + _("dump-guest-memory is not supported")); + return -1; + } + + if (qemuSecuritySetImageFDLabel(driver->securityManager, vm->def, fd) < 0) + return -1; + + VIR_FREE(priv->job.current); + priv->job.dump_memory_only = true; + + if (qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob) < 0) + return -1; + + if (dumpformat) { + ret = qemuMonitorGetDumpGuestMemoryCapability(priv->mon, dumpformat); + + if (ret <= 0) { + virReportError(VIR_ERR_INVALID_ARG, + _("unsupported dumpformat '%s' " + "for this QEMU binary"), + dumpformat); + ret = -1; + goto cleanup; + } + } + + ret = qemuMonitorDumpToFd(priv->mon, fd, dumpformat); + + cleanup: + ignore_value(qemuDomainObjExitMonitor(driver, vm)); + + return ret; +} + +int +doCoreDump(virQEMUDriverPtr driver, + virDomainObjPtr vm, + const char *path, + unsigned int dump_flags, + unsigned int dumpformat) +{ + int fd = -1; + int ret = -1; + virFileWrapperFdPtr wrapperFd = NULL; + int directFlag = 0; + unsigned int flags = VIR_FILE_WRAPPER_NON_BLOCKING; + const char *memory_dump_format = NULL; + virQEMUDriverConfigPtr cfg = virQEMUDriverGetConfig(driver); + char *compressedpath = NULL; + + /* We reuse "save" flag for "dump" here. Then, we can support the same + * format in "save" and "dump". This path doesn't need the compression + * program to exist and can ignore the return value - it only cares to + * get the compressedpath */ + ignore_value(qemuGetCompressionProgram(cfg->dumpImageFormat, + &compressedpath, + "dump", true)); + + /* Create an empty file with appropriate ownership. 
*/ + if (dump_flags & VIR_DUMP_BYPASS_CACHE) { + flags |= VIR_FILE_WRAPPER_BYPASS_CACHE; + directFlag = virFileDirectFdFlag(); + if (directFlag < 0) { + virReportError(VIR_ERR_OPERATION_FAILED, "%s", + _("bypass cache unsupported by this system")); + goto cleanup; + } + } + /* Core dumps usually imply last-ditch analysis efforts are + * desired, so we intentionally do not unlink even if a file was + * created. */ + if ((fd = qemuOpenFile(driver, vm, path, + O_CREAT | O_TRUNC | O_WRONLY | directFlag, + NULL, NULL)) < 0) + goto cleanup; + + if (!(wrapperFd = virFileWrapperFdNew(&fd, path, flags))) + goto cleanup; + + if (dump_flags & VIR_DUMP_MEMORY_ONLY) { + if (!(memory_dump_format = qemuDumpFormatTypeToString(dumpformat))) { + virReportError(VIR_ERR_INVALID_ARG, + _("unknown dumpformat '%d'"), dumpformat); + goto cleanup; + } + + /* qemu dumps in "elf" without dumpformat set */ + if (STREQ(memory_dump_format, "elf")) + memory_dump_format = NULL; + + ret = qemuDumpToFd(driver, vm, fd, QEMU_ASYNC_JOB_DUMP, + memory_dump_format); + } else { + if (dumpformat != VIR_DOMAIN_CORE_DUMP_FORMAT_RAW) { + virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s", + _("kdump-compressed format is only supported with " + "memory-only dump")); + goto cleanup; + } + + if (!qemuMigrationIsAllowed(driver, vm, false, 0)) + goto cleanup; + + ret = qemuMigrationToFile(driver, vm, fd, compressedpath, + QEMU_ASYNC_JOB_DUMP); + } + + if (ret < 0) + goto cleanup; + + if (VIR_CLOSE(fd) < 0) { + virReportSystemError(errno, + _("unable to close file %s"), + path); + goto cleanup; + } + if (qemuFileWrapperFDClose(vm, wrapperFd) < 0) + goto cleanup; + + ret = 0; + + cleanup: + VIR_FORCE_CLOSE(fd); + if (ret != 0) + unlink(path); + virFileWrapperFdFree(wrapperFd); + VIR_FREE(compressedpath); + virObjectUnref(cfg); + return ret; +} + +int +qemuFileWrapperFDClose(virDomainObjPtr vm, + virFileWrapperFdPtr fd) +{ + int ret; + + /* virFileWrapperFd uses iohelper to write data onto disk. + * However, iohelper calls fdatasync() which may take ages to + * finish. Therefore, we shouldn't be waiting with the domain + * object locked. */ + + /* XXX Currently, this function is intended for *Save() only + * as restore needs some reworking before it's ready for + * this. 
*/ + + virObjectUnlock(vm); + ret = virFileWrapperFdClose(fd); + virObjectLock(vm); + if (!virDomainObjIsActive(vm)) { + if (!virGetLastError()) + virReportError(VIR_ERR_OPERATION_FAILED, "%s", + _("domain is no longer running")); + ret = -1; + } + return ret; +} + +static char * +getAutoDumpPath(virQEMUDriverPtr driver, + virDomainObjPtr vm) +{ + char *dumpfile = NULL; + char *domname = virDomainObjGetShortName(vm->def); + char timestr[100]; + struct tm time_info; + time_t curtime = time(NULL); + virQEMUDriverConfigPtr cfg = NULL; + + if (!domname) + return NULL; + + cfg = virQEMUDriverGetConfig(driver); + + localtime_r(&curtime, &time_info); + strftime(timestr, sizeof(timestr), "%Y-%m-%d-%H:%M:%S", &time_info); + + ignore_value(virAsprintf(&dumpfile, "%s/%s-%s", + cfg->autoDumpPath, + domname, + timestr)); + + virObjectUnref(cfg); + VIR_FREE(domname); + return dumpfile; +} +void +processWatchdogEvent(virQEMUDriverPtr driver, + virDomainObjPtr vm, + int action) +{ + int ret; + virQEMUDriverConfigPtr cfg = virQEMUDriverGetConfig(driver); + char *dumpfile = getAutoDumpPath(driver, vm); + unsigned int flags = VIR_DUMP_MEMORY_ONLY; + + if (!dumpfile) + goto cleanup; + + switch (action) { + case VIR_DOMAIN_WATCHDOG_ACTION_DUMP: + if (qemuDomainObjBeginAsyncJob(driver, vm, + QEMU_ASYNC_JOB_DUMP, + VIR_DOMAIN_JOB_OPERATION_DUMP) < 0) { + goto cleanup; + } + + if (!virDomainObjIsActive(vm)) { + virReportError(VIR_ERR_OPERATION_INVALID, + "%s", _("domain is not running")); + goto endjob; + } + + flags |= cfg->autoDumpBypassCache ? VIR_DUMP_BYPASS_CACHE: 0; + if ((ret = doCoreDump(driver, vm, dumpfile, flags, + VIR_DOMAIN_CORE_DUMP_FORMAT_RAW)) < 0) + virReportError(VIR_ERR_OPERATION_FAILED, + "%s", _("Dump failed")); + + ret = qemuProcessStartCPUs(driver, vm, NULL, + VIR_DOMAIN_RUNNING_UNPAUSED, + QEMU_ASYNC_JOB_DUMP); + + if (ret < 0) + virReportError(VIR_ERR_OPERATION_FAILED, + "%s", _("Resuming after dump failed")); + break; + default: + goto cleanup; + } + + endjob: + qemuDomainObjEndAsyncJob(driver, vm); + + cleanup: + VIR_FREE(dumpfile); + virObjectUnref(cfg); +} + +static int +doCoreDumpToAutoDumpPath(virQEMUDriverPtr driver, + virDomainObjPtr vm, + unsigned int flags) +{ + int ret = -1; + virQEMUDriverConfigPtr cfg = virQEMUDriverGetConfig(driver); + char *dumpfile = getAutoDumpPath(driver, vm); + + if (!dumpfile) + goto cleanup; + + flags |= cfg->autoDumpBypassCache ? 
VIR_DUMP_BYPASS_CACHE: 0; + if ((ret = doCoreDump(driver, vm, dumpfile, flags, + VIR_DOMAIN_CORE_DUMP_FORMAT_RAW)) < 0) + virReportError(VIR_ERR_OPERATION_FAILED, + "%s", _("Dump failed")); + cleanup: + VIR_FREE(dumpfile); + virObjectUnref(cfg); + return ret; +} + +static void +qemuProcessGuestPanicEventInfo(virQEMUDriverPtr driver, + virDomainObjPtr vm, + qemuMonitorEventPanicInfoPtr info) +{ + char *msg = qemuMonitorGuestPanicEventInfoFormatMsg(info); + char *timestamp = virTimeStringNow(); + + if (msg && timestamp) + qemuDomainLogAppendMessage(driver, vm, "%s: panic %s\n", timestamp, msg); + + VIR_FREE(timestamp); + VIR_FREE(msg); +} + +void +processGuestPanicEvent(virQEMUDriverPtr driver, + virDomainObjPtr vm, + int action, + qemuMonitorEventPanicInfoPtr info) +{ + qemuDomainObjPrivatePtr priv = vm->privateData; + virObjectEventPtr event = NULL; + virQEMUDriverConfigPtr cfg = virQEMUDriverGetConfig(driver); + bool removeInactive = false; + + if (qemuDomainObjBeginAsyncJob(driver, vm, QEMU_ASYNC_JOB_DUMP, + VIR_DOMAIN_JOB_OPERATION_DUMP) < 0) + goto cleanup; + + if (!virDomainObjIsActive(vm)) { + VIR_DEBUG("Ignoring GUEST_PANICKED event from inactive domain %s", + vm->def->name); + goto endjob; + } + + if (info) + qemuProcessGuestPanicEventInfo(driver, vm, info); + + virDomainObjSetState(vm, VIR_DOMAIN_CRASHED, VIR_DOMAIN_CRASHED_PANICKED); + + event = virDomainEventLifecycleNewFromObj(vm, + VIR_DOMAIN_EVENT_CRASHED, + VIR_DOMAIN_EVENT_CRASHED_PANICKED); + + qemuDomainEventQueue(driver, event); + + if (virDomainSaveStatus(driver->xmlopt, cfg->stateDir, vm, driver->caps) < 0) { + VIR_WARN("Unable to save status on vm %s after state change", + vm->def->name); + } + + if (virDomainLockProcessPause(driver->lockManager, vm, &priv->lockState) < 0) + VIR_WARN("Unable to release lease on %s", vm->def->name); + VIR_DEBUG("Preserving lock state '%s'", NULLSTR(priv->lockState)); + + switch (action) { + case VIR_DOMAIN_LIFECYCLE_CRASH_COREDUMP_DESTROY: + if (doCoreDumpToAutoDumpPath(driver, vm, VIR_DUMP_MEMORY_ONLY) < 0) + goto endjob; + ATTRIBUTE_FALLTHROUGH; + + case VIR_DOMAIN_LIFECYCLE_CRASH_DESTROY: + qemuProcessStop(driver, vm, VIR_DOMAIN_SHUTOFF_CRASHED, + QEMU_ASYNC_JOB_DUMP, 0); + event = virDomainEventLifecycleNewFromObj(vm, + VIR_DOMAIN_EVENT_STOPPED, + VIR_DOMAIN_EVENT_STOPPED_CRASHED); + + qemuDomainEventQueue(driver, event); + virDomainAuditStop(vm, "destroyed"); + removeInactive = true; + break; + + case VIR_DOMAIN_LIFECYCLE_CRASH_COREDUMP_RESTART: + if (doCoreDumpToAutoDumpPath(driver, vm, VIR_DUMP_MEMORY_ONLY) < 0) + goto endjob; + ATTRIBUTE_FALLTHROUGH; + + case VIR_DOMAIN_LIFECYCLE_CRASH_RESTART: + qemuDomainSetFakeReboot(driver, vm, true); + qemuProcessShutdownOrReboot(driver, vm); + break; + + case VIR_DOMAIN_LIFECYCLE_CRASH_PRESERVE: + break; + + default: + break; + } + + endjob: + qemuDomainObjEndAsyncJob(driver, vm); + if (removeInactive) + qemuDomainRemoveInactiveJob(driver, vm); + + cleanup: + virObjectUnref(cfg); +} + +void +processDeviceDeletedEvent(virQEMUDriverPtr driver, + virDomainObjPtr vm, + char *devAlias) +{ + virQEMUDriverConfigPtr cfg = virQEMUDriverGetConfig(driver); + virDomainDeviceDef dev; + + VIR_DEBUG("Removing device %s from domain %p %s", + devAlias, vm, vm->def->name); + + if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) + goto cleanup; + + if (!virDomainObjIsActive(vm)) { + VIR_DEBUG("Domain is not running"); + goto endjob; + } + + if (STRPREFIX(devAlias, "vcpu")) { + qemuDomainRemoveVcpuAlias(driver, vm, devAlias); + } else { + if 
(virDomainDefFindDevice(vm->def, devAlias, &dev, true) < 0) + goto endjob; + + if (qemuDomainRemoveDevice(driver, vm, &dev) < 0) + goto endjob; + } + + if (virDomainSaveStatus(driver->xmlopt, cfg->stateDir, vm, driver->caps) < 0) + VIR_WARN("unable to save domain status after removing device %s", + devAlias); + + endjob: + qemuDomainObjEndJob(driver, vm); + + cleanup: + VIR_FREE(devAlias); + virObjectUnref(cfg); +} + + + + +void +syncNicRxFilterMacAddr(char *ifname, virNetDevRxFilterPtr guestFilter, + virNetDevRxFilterPtr hostFilter) +{ + char newMacStr[VIR_MAC_STRING_BUFLEN]; + + if (virMacAddrCmp(&hostFilter->mac, &guestFilter->mac)) { + virMacAddrFormat(&guestFilter->mac, newMacStr); + + /* set new MAC address from guest to associated macvtap device */ + if (virNetDevSetMAC(ifname, &guestFilter->mac) < 0) { + VIR_WARN("Couldn't set new MAC address %s to device %s " + "while responding to NIC_RX_FILTER_CHANGED", + newMacStr, ifname); + } else { + VIR_DEBUG("device %s MAC address set to %s", ifname, newMacStr); + } + } +} + +static void +syncNicRxFilterGuestMulticast(char *ifname, virNetDevRxFilterPtr guestFilter, + virNetDevRxFilterPtr hostFilter) +{ + size_t i, j; + bool found; + char macstr[VIR_MAC_STRING_BUFLEN]; + + for (i = 0; i < guestFilter->multicast.nTable; i++) { + found = false; + + for (j = 0; j < hostFilter->multicast.nTable; j++) { + if (virMacAddrCmp(&guestFilter->multicast.table[i], + &hostFilter->multicast.table[j]) == 0) { + found = true; + break; + } + } + + if (!found) { + virMacAddrFormat(&guestFilter->multicast.table[i], macstr); + + if (virNetDevAddMulti(ifname, &guestFilter->multicast.table[i]) < 0) { + VIR_WARN("Couldn't add new multicast MAC address %s to " + "device %s while responding to NIC_RX_FILTER_CHANGED", + macstr, ifname); + } else { + VIR_DEBUG("Added multicast MAC %s to %s interface", + macstr, ifname); + } + } + } +} + +static void +syncNicRxFilterHostMulticast(char *ifname, virNetDevRxFilterPtr guestFilter, + virNetDevRxFilterPtr hostFilter) +{ + size_t i, j; + bool found; + char macstr[VIR_MAC_STRING_BUFLEN]; + + for (i = 0; i < hostFilter->multicast.nTable; i++) { + found = false; + + for (j = 0; j < guestFilter->multicast.nTable; j++) { + if (virMacAddrCmp(&hostFilter->multicast.table[i], + &guestFilter->multicast.table[j]) == 0) { + found = true; + break; + } + } + + if (!found) { + virMacAddrFormat(&hostFilter->multicast.table[i], macstr); + + if (virNetDevDelMulti(ifname, &hostFilter->multicast.table[i]) < 0) { + VIR_WARN("Couldn't delete multicast MAC address %s from " + "device %s while responding to NIC_RX_FILTER_CHANGED", + macstr, ifname); + } else { + VIR_DEBUG("Deleted multicast MAC %s from %s interface", + macstr, ifname); + } + } + } +} + + +static void +syncNicRxFilterPromiscMode(char *ifname, + virNetDevRxFilterPtr guestFilter, + virNetDevRxFilterPtr hostFilter) +{ + bool promisc; + bool setpromisc = false; + + /* Set macvtap promisc mode to true if the guest has vlans defined */ + /* or synchronize the macvtap promisc mode if different from guest */ + if (guestFilter->vlan.nTable > 0) { + if (!hostFilter->promiscuous) { + setpromisc = true; + promisc = true; + } + } else if (hostFilter->promiscuous != guestFilter->promiscuous) { + setpromisc = true; + promisc = guestFilter->promiscuous; + } + + if (setpromisc) { + if (virNetDevSetPromiscuous(ifname, promisc) < 0) { + VIR_WARN("Couldn't set PROMISC flag to %s for device %s " + "while responding to NIC_RX_FILTER_CHANGED", + promisc ? 
"true" : "false", ifname); + } + } +} + +static void +syncNicRxFilterMultiMode(char *ifname, virNetDevRxFilterPtr guestFilter, + virNetDevRxFilterPtr hostFilter) +{ + if (hostFilter->multicast.mode != guestFilter->multicast.mode) { + switch (guestFilter->multicast.mode) { + case VIR_NETDEV_RX_FILTER_MODE_ALL: + if (virNetDevSetRcvAllMulti(ifname, true)) { + + VIR_WARN("Couldn't set allmulticast flag to 'on' for " + "device %s while responding to " + "NIC_RX_FILTER_CHANGED", ifname); + } + break; + + case VIR_NETDEV_RX_FILTER_MODE_NORMAL: + if (virNetDevSetRcvMulti(ifname, true)) { + + VIR_WARN("Couldn't set multicast flag to 'on' for " + "device %s while responding to " + "NIC_RX_FILTER_CHANGED", ifname); + } + + if (virNetDevSetRcvAllMulti(ifname, false)) { + VIR_WARN("Couldn't set allmulticast flag to 'off' for " + "device %s while responding to " + "NIC_RX_FILTER_CHANGED", ifname); + } + break; + + case VIR_NETDEV_RX_FILTER_MODE_NONE: + if (virNetDevSetRcvAllMulti(ifname, false)) { + VIR_WARN("Couldn't set allmulticast flag to 'off' for " + "device %s while responding to " + "NIC_RX_FILTER_CHANGED", ifname); + } + + if (virNetDevSetRcvMulti(ifname, false)) { + VIR_WARN("Couldn't set multicast flag to 'off' for " + "device %s while responding to " + "NIC_RX_FILTER_CHANGED", + ifname); + } + break; + } + } +} + +void +syncNicRxFilterDeviceOptions(char *ifname, virNetDevRxFilterPtr guestFilter, + virNetDevRxFilterPtr hostFilter) +{ + syncNicRxFilterPromiscMode(ifname, guestFilter, hostFilter); + syncNicRxFilterMultiMode(ifname, guestFilter, hostFilter); +} + + +void +syncNicRxFilterMulticast(char *ifname, + virNetDevRxFilterPtr guestFilter, + virNetDevRxFilterPtr hostFilter) +{ + syncNicRxFilterGuestMulticast(ifname, guestFilter, hostFilter); + syncNicRxFilterHostMulticast(ifname, guestFilter, hostFilter); +} + +void +processNicRxFilterChangedEvent(virQEMUDriverPtr driver, + virDomainObjPtr vm, + char *devAlias) +{ + virQEMUDriverConfigPtr cfg = virQEMUDriverGetConfig(driver); + qemuDomainObjPrivatePtr priv = vm->privateData; + virDomainDeviceDef dev; + virDomainNetDefPtr def; + virNetDevRxFilterPtr guestFilter = NULL; + virNetDevRxFilterPtr hostFilter = NULL; + int ret; + + VIR_DEBUG("Received NIC_RX_FILTER_CHANGED event for device %s " + "from domain %p %s", + devAlias, vm, vm->def->name); + + if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) + goto cleanup; + + if (!virDomainObjIsActive(vm)) { + VIR_DEBUG("Domain is not running"); + goto endjob; + } + + if (virDomainDefFindDevice(vm->def, devAlias, &dev, true) < 0) { + VIR_WARN("NIC_RX_FILTER_CHANGED event received for " + "non-existent device %s in domain %s", + devAlias, vm->def->name); + goto endjob; + } + if (dev.type != VIR_DOMAIN_DEVICE_NET) { + VIR_WARN("NIC_RX_FILTER_CHANGED event received for " + "non-network device %s in domain %s", + devAlias, vm->def->name); + goto endjob; + } + def = dev.data.net; + + if (!virDomainNetGetActualTrustGuestRxFilters(def)) { + VIR_DEBUG("ignore NIC_RX_FILTER_CHANGED event for network " + "device %s in domain %s", + def->info.alias, vm->def->name); + /* not sending "query-rx-filter" will also suppress any + * further NIC_RX_FILTER_CHANGED events for this device + */ + goto endjob; + } + + /* handle the event - send query-rx-filter and respond to it. 
*/ + + VIR_DEBUG("process NIC_RX_FILTER_CHANGED event for network " + "device %s in domain %s", def->info.alias, vm->def->name); + + qemuDomainObjEnterMonitor(driver, vm); + ret = qemuMonitorQueryRxFilter(priv->mon, devAlias, &guestFilter); + if (qemuDomainObjExitMonitor(driver, vm) < 0) + ret = -1; + if (ret < 0) + goto endjob; + + if (virDomainNetGetActualType(def) == VIR_DOMAIN_NET_TYPE_DIRECT) { + + if (virNetDevGetRxFilter(def->ifname, &hostFilter)) { + VIR_WARN("Couldn't get current RX filter for device %s " + "while responding to NIC_RX_FILTER_CHANGED", + def->ifname); + goto endjob; + } + + /* For macvtap connections, set the following macvtap network device + * attributes to match those of the guest network device: + * - MAC address + * - Multicast MAC address table + * - Device options: + * - PROMISC + * - MULTICAST + * - ALLMULTI + */ + syncNicRxFilterMacAddr(def->ifname, guestFilter, hostFilter); + syncNicRxFilterMulticast(def->ifname, guestFilter, hostFilter); + syncNicRxFilterDeviceOptions(def->ifname, guestFilter, hostFilter); + } + + if (virDomainNetGetActualType(def) == VIR_DOMAIN_NET_TYPE_NETWORK) { + const char *brname = virDomainNetGetActualBridgeName(def); + + /* For libivrt network connections, set the following TUN/TAP network + * device attributes to match those of the guest network device: + * - QoS filters (which are based on MAC address) + */ + if (virDomainNetGetActualBandwidth(def) && + def->data.network.actual && + virNetDevBandwidthUpdateFilter(brname, &guestFilter->mac, + def->data.network.actual->class_id) < 0) + goto endjob; + } + + endjob: + qemuDomainObjEndJob(driver, vm); + + cleanup: + virNetDevRxFilterFree(hostFilter); + virNetDevRxFilterFree(guestFilter); + VIR_FREE(devAlias); + virObjectUnref(cfg); +} + +void +processSerialChangedEvent(virQEMUDriverPtr driver, + virDomainObjPtr vm, + char *devAlias, + bool connected) +{ + virQEMUDriverConfigPtr cfg = virQEMUDriverGetConfig(driver); + virDomainChrDeviceState newstate; + virObjectEventPtr event = NULL; + virDomainDeviceDef dev; + qemuDomainObjPrivatePtr priv = vm->privateData; + + if (connected) + newstate = VIR_DOMAIN_CHR_DEVICE_STATE_CONNECTED; + else + newstate = VIR_DOMAIN_CHR_DEVICE_STATE_DISCONNECTED; + + VIR_DEBUG("Changing serial port state %s in domain %p %s", + devAlias, vm, vm->def->name); + + if (newstate == VIR_DOMAIN_CHR_DEVICE_STATE_DISCONNECTED && + virDomainObjIsActive(vm) && priv->agent) { + /* peek into the domain definition to find the channel */ + if (virDomainDefFindDevice(vm->def, devAlias, &dev, true) == 0 && + dev.type == VIR_DOMAIN_DEVICE_CHR && + dev.data.chr->deviceType == VIR_DOMAIN_CHR_DEVICE_TYPE_CHANNEL && + dev.data.chr->targetType == VIR_DOMAIN_CHR_CHANNEL_TARGET_TYPE_VIRTIO && + STREQ_NULLABLE(dev.data.chr->target.name, "org.qemu.guest_agent.0")) + /* Close agent monitor early, so that other threads + * waiting for the agent to reply can finish and our + * job we acquire below can succeed. 
*/ + qemuAgentNotifyClose(priv->agent); + + /* now discard the data, since it may possibly change once we unlock + * while entering the job */ + memset(&dev, 0, sizeof(dev)); + } + + if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) + goto cleanup; + + if (!virDomainObjIsActive(vm)) { + VIR_DEBUG("Domain is not running"); + goto endjob; + } + + if (virDomainDefFindDevice(vm->def, devAlias, &dev, true) < 0) + goto endjob; + + /* we care only about certain devices */ + if (dev.type != VIR_DOMAIN_DEVICE_CHR || + dev.data.chr->deviceType != VIR_DOMAIN_CHR_DEVICE_TYPE_CHANNEL || + dev.data.chr->targetType != VIR_DOMAIN_CHR_CHANNEL_TARGET_TYPE_VIRTIO) + goto endjob; + + dev.data.chr->state = newstate; + + if (virDomainSaveStatus(driver->xmlopt, cfg->stateDir, vm, driver->caps) < 0) + VIR_WARN("unable to save status of domain %s after updating state of " + "channel %s", vm->def->name, devAlias); + + if (STREQ_NULLABLE(dev.data.chr->target.name, "org.qemu.guest_agent.0")) { + if (newstate == VIR_DOMAIN_CHR_DEVICE_STATE_CONNECTED) { + if (qemuConnectAgent(driver, vm) < 0) + goto endjob; + } else { + if (priv->agent) { + qemuAgentClose(priv->agent); + priv->agent = NULL; + } + priv->agentError = false; + } + + event = virDomainEventAgentLifecycleNewFromObj(vm, newstate, + VIR_CONNECT_DOMAIN_EVENT_AGENT_LIFECYCLE_REASON_CHANNEL); + qemuDomainEventQueue(driver, event); + } + + endjob: + qemuDomainObjEndJob(driver, vm); + + cleanup: + VIR_FREE(devAlias); + virObjectUnref(cfg); + +} + +void +processBlockJobEvent(virQEMUDriverPtr driver, + virDomainObjPtr vm, + char *diskAlias, + int type, + int status) +{ + virDomainDiskDefPtr disk; + + if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) + goto cleanup; + + if (!virDomainObjIsActive(vm)) { + VIR_DEBUG("Domain is not running"); + goto endjob; + } + + if ((disk = qemuProcessFindDomainDiskByAlias(vm, diskAlias))) + qemuBlockJobEventProcess(driver, vm, disk, QEMU_ASYNC_JOB_NONE, type, status); + + endjob: + qemuDomainObjEndJob(driver, vm); + cleanup: + VIR_FREE(diskAlias); +} + +void +processMonitorEOFEvent(virQEMUDriverPtr driver, + virDomainObjPtr vm) +{ + qemuDomainObjPrivatePtr priv = vm->privateData; + int eventReason = VIR_DOMAIN_EVENT_STOPPED_SHUTDOWN; + int stopReason = VIR_DOMAIN_SHUTOFF_SHUTDOWN; + const char *auditReason = "shutdown"; + unsigned int stopFlags = 0; + virObjectEventPtr event = NULL; + + if (qemuProcessBeginStopJob(driver, vm, QEMU_JOB_DESTROY, true) < 0) + return; + + if (!virDomainObjIsActive(vm)) { + VIR_DEBUG("Domain %p '%s' is not active, ignoring EOF", + vm, vm->def->name); + goto endjob; + } + + if (priv->monJSON && !priv->gotShutdown) { + VIR_DEBUG("Monitor connection to '%s' closed without SHUTDOWN event; " + "assuming the domain crashed", vm->def->name); + eventReason = VIR_DOMAIN_EVENT_STOPPED_FAILED; + stopReason = VIR_DOMAIN_SHUTOFF_CRASHED; + auditReason = "failed"; + } + + if (priv->job.asyncJob == QEMU_ASYNC_JOB_MIGRATION_IN) { + stopFlags |= VIR_QEMU_PROCESS_STOP_MIGRATED; + qemuMigrationErrorSave(driver, vm->def->name, + qemuMonitorLastError(priv->mon)); + } + + event = virDomainEventLifecycleNewFromObj(vm, VIR_DOMAIN_EVENT_STOPPED, + eventReason); + qemuProcessStop(driver, vm, stopReason, QEMU_ASYNC_JOB_NONE, stopFlags); + virDomainAuditStop(vm, auditReason); + qemuDomainEventQueue(driver, event); + + endjob: + qemuDomainRemoveInactive(driver, vm); + qemuDomainObjEndJob(driver, vm); +} diff --git a/src/qemu/qemu_process.h b/src/qemu/qemu_process.h index a2bbc4f..95007b5 100644 --- 
a/src/qemu/qemu_process.h +++ b/src/qemu/qemu_process.h @@ -25,6 +25,92 @@ # include "qemu_conf.h" # include "qemu_domain.h" +typedef enum { + QEMU_SAVE_FORMAT_RAW = 0, + QEMU_SAVE_FORMAT_GZIP = 1, + QEMU_SAVE_FORMAT_BZIP2 = 2, + /* + * Deprecated by xz and never used as part of a release + * QEMU_SAVE_FORMAT_LZMA + */ + QEMU_SAVE_FORMAT_XZ = 3, + QEMU_SAVE_FORMAT_LZOP = 4, + /* Note: add new members only at the end. + These values are used in the on-disk format. + Do not change or re-use numbers. */ + + QEMU_SAVE_FORMAT_LAST +} virQEMUSaveFormat; + +VIR_ENUM_DECL(qemuSaveCompression) +VIR_ENUM_DECL(qemuDumpFormat) + +int +qemuFileWrapperFDClose(virDomainObjPtr vm, + virFileWrapperFdPtr fd); + +int +qemuGetCompressionProgram(const char *imageFormat, + char **compresspath, + const char *styleFormat, + bool use_raw_on_fail); +int +qemuOpenFile(virQEMUDriverPtr driver, + virDomainObjPtr vm, + const char *path, + int oflags, + bool *needUnlink, + bool *bypassSecurityDriver); +int +doCoreDump(virQEMUDriverPtr driver, + virDomainObjPtr vm, + const char *path, + unsigned int dump_flags, + unsigned int dumpformat); + +void +syncNicRxFilterMacAddr(char *ifname, virNetDevRxFilterPtr guestFilter, + virNetDevRxFilterPtr hostFilter); + +void +syncNicRxFilterDeviceOptions(char *ifname, virNetDevRxFilterPtr guestFilter, + virNetDevRxFilterPtr hostFilter); + +void +syncNicRxFilterMulticast(char *ifname, + virNetDevRxFilterPtr guestFilter, + virNetDevRxFilterPtr hostFilter); + +void +processWatchdogEvent(virQEMUDriverPtr driver, + virDomainObjPtr vm, + int action); +void +processGuestPanicEvent(virQEMUDriverPtr driver, + virDomainObjPtr vm, + int action, + qemuMonitorEventPanicInfoPtr info); +void +processNicRxFilterChangedEvent(virQEMUDriverPtr driver, + virDomainObjPtr vm, + char *devAlias); +void +processSerialChangedEvent(virQEMUDriverPtr driver, + virDomainObjPtr vm, + char *devAlias, + bool connected); + +void +processBlockJobEvent(virQEMUDriverPtr driver, + virDomainObjPtr vm, + char *diskAlias, + int type, + int status); +void processMonitorEOFEvent(virQEMUDriverPtr driver, + virDomainObjPtr vm); +void processDeviceDeletedEvent(virQEMUDriverPtr driver, + virDomainObjPtr vm, + char *devAlias); int qemuProcessPrepareMonitorChr(virDomainChrSourceDefPtr monConfig, const char *domainDir); -- 2.9.5
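For orientation, a sketch of how the newly exported helpers fit together at a save-style call site (a hypothetical caller assembled from the signatures above; the real call sites remain in qemu_driver.c):

    /* Hypothetical caller, modelled on the save path; not part of this diff. */
    virQEMUDriverConfigPtr cfg = virQEMUDriverGetConfig(driver);
    char *compressedpath = NULL;
    bool needUnlink = false;
    int format;
    int fd = -1;

    /* Refuse to continue if the configured compressor is unusable... */
    if ((format = qemuGetCompressionProgram(cfg->saveImageFormat,
                                            &compressedpath,
                                            "save", false)) < 0)
        goto cleanup;

    /* ...then create the destination file with qemu-appropriate ownership. */
    if ((fd = qemuOpenFile(driver, vm, path,
                           O_WRONLY | O_TRUNC | O_CREAT,
                           &needUnlink, NULL)) < 0)
        goto cleanup;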

Also, the enqueuing of a new event now triggers virEventWorkerScanQueue() Signed-off-by: Prerna Saxena <saxenap.ltc@gmail.com> --- src/qemu/qemu_driver.c | 61 ++---------------------- src/qemu/qemu_process.c | 121 +++++++++++------------------------------------- 2 files changed, 29 insertions(+), 153 deletions(-) diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index 9d495fb..881f253 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -138,8 +138,6 @@ VIR_LOG_INIT("qemu.qemu_driver"); #define QEMU_NB_BANDWIDTH_PARAM 7 -static void qemuProcessEventHandler(void *data, void *opaque); - static int qemuStateCleanup(void); static int qemuDomainObjStart(virConnectPtr conn, @@ -936,7 +934,9 @@ qemuStateInitialize(bool privileged, qemuProcessReconnectAll(conn, qemu_driver); - qemu_driver->workerPool = virThreadPoolNew(0, 1, 0, qemuProcessEventHandler, qemu_driver); + qemu_driver->workerPool = virThreadPoolNew(0, 1, 0, virEventWorkerScanQueue, + qemu_driver); + if (!qemu_driver->workerPool) goto error; @@ -3645,61 +3645,6 @@ qemuDomainScreenshot(virDomainPtr dom, } - - - - - - - - -static void qemuProcessEventHandler(void *data, void *opaque) -{ - struct qemuProcessEvent *processEvent = data; - virDomainObjPtr vm = processEvent->vm; - virQEMUDriverPtr driver = opaque; - - VIR_DEBUG("vm=%p, event=%d", vm, processEvent->eventType); - - virObjectLock(vm); - - switch (processEvent->eventType) { - case QEMU_PROCESS_EVENT_WATCHDOG: - processWatchdogEvent(driver, vm, processEvent->action); - break; - case QEMU_PROCESS_EVENT_GUESTPANIC: - processGuestPanicEvent(driver, vm, processEvent->action, - processEvent->data); - break; - case QEMU_PROCESS_EVENT_DEVICE_DELETED: - processDeviceDeletedEvent(driver, vm, processEvent->data); - break; - case QEMU_PROCESS_EVENT_NIC_RX_FILTER_CHANGED: - processNicRxFilterChangedEvent(driver, vm, processEvent->data); - break; - case QEMU_PROCESS_EVENT_SERIAL_CHANGED: - processSerialChangedEvent(driver, vm, processEvent->data, - processEvent->action); - break; - case QEMU_PROCESS_EVENT_BLOCK_JOB: - processBlockJobEvent(driver, vm, - processEvent->data, - processEvent->action, - processEvent->status); - break; - case QEMU_PROCESS_EVENT_MONITOR_EOF: - processMonitorEOFEvent(driver, vm); - break; - case QEMU_PROCESS_EVENT_LAST: - break; - } - - virDomainConsumeVMEvents(vm, driver); - virDomainObjEndAPI(&vm); - VIR_FREE(processEvent); -} - - static int qemuDomainSetVcpusAgent(virDomainObjPtr vm, unsigned int nvcpus) diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c index d2b5fe8..f9270e0 100644 --- a/src/qemu/qemu_process.c +++ b/src/qemu/qemu_process.c @@ -962,21 +962,11 @@ qemuProcessEventHandleWatchdog1(qemuEventPtr ev, } if (vm->def->watchdog->action == VIR_DOMAIN_WATCHDOG_ACTION_DUMP) { - struct qemuProcessEvent *processEvent; - if (VIR_ALLOC(processEvent) == 0) { - processEvent->eventType = QEMU_PROCESS_EVENT_WATCHDOG; - processEvent->action = VIR_DOMAIN_WATCHDOG_ACTION_DUMP; - processEvent->vm = vm; - /* Hold an extra reference because we can't allow 'vm' to be - * deleted before handling watchdog event is finished. 
- */ - virObjectRef(vm); - if (virThreadPoolSendJob(driver->workerPool, 0, processEvent) < 0) { - if (!virObjectUnref(vm)) - vm = NULL; - VIR_FREE(processEvent); - } - } + /* Hold an extra reference because we can't allow 'vm' to be + * deleted before handling watchdog event is finished.*/ + virObjectRef(vm); + processWatchdogEvent(driver, vm, VIR_DOMAIN_WATCHDOG_ACTION_DUMP); + virObjectUnref(vm); } qemuDomainEventQueue(driver, watchdogEvent); @@ -1068,12 +1058,10 @@ qemuProcessEventHandleBlockJob(qemuEventPtr ev, void *opaque) { virQEMUDriverPtr driver = opaque; - struct qemuProcessEvent *processEvent = NULL; virDomainDiskDefPtr disk; qemuDomainDiskPrivatePtr diskPriv; - char *data = NULL; virDomainObjPtr vm; - const char *diskAlias; + char *diskAlias = NULL; int type, status; if (!ev) @@ -1082,7 +1070,7 @@ qemuProcessEventHandleBlockJob(qemuEventPtr ev, if (!ev->vm) { VIR_WARN("Unable to locate VM, dropping Block Job event"); - goto cleanup; + goto error; } diskAlias = ev->evData.ev_blockJob.device; @@ -1103,31 +1091,16 @@ qemuProcessEventHandleBlockJob(qemuEventPtr ev, virDomainObjBroadcast(vm); } else { /* there is no waiting SYNC API, dispatch the update to a thread */ - if (VIR_ALLOC(processEvent) < 0) - goto error; - - processEvent->eventType = QEMU_PROCESS_EVENT_BLOCK_JOB; - if (VIR_STRDUP(data, diskAlias) < 0) - goto error; - processEvent->data = data; - processEvent->vm = vm; - processEvent->action = type; - processEvent->status = status; virObjectRef(vm); - if (virThreadPoolSendJob(driver->workerPool, 0, processEvent) < 0) { - ignore_value(virObjectUnref(vm)); - goto error; - } + processBlockJobEvent(driver, vm, diskAlias, type, status); + virObjectUnref(vm); } - cleanup: +error: + if (diskAlias) + VIR_FREE(diskAlias); return; - error: - if (processEvent) - VIR_FREE(processEvent->data); - VIR_FREE(processEvent); - goto cleanup; } static void @@ -1465,7 +1438,6 @@ qemuProcessEventHandleGuestPanic(qemuEventPtr ev, void *opaque) { virQEMUDriverPtr driver = opaque; - struct qemuProcessEvent *processEvent; virDomainObjPtr vm; qemuMonitorEventPanicInfoPtr info; @@ -1479,22 +1451,12 @@ qemuProcessEventHandleGuestPanic(qemuEventPtr ev, } info = ev->evData.ev_panic.info; - if (VIR_ALLOC(processEvent) < 0) - goto exit; - - processEvent->eventType = QEMU_PROCESS_EVENT_GUESTPANIC; - processEvent->action = vm->def->onCrash; - processEvent->vm = vm; - processEvent->data = info; /* Hold an extra reference because we can't allow 'vm' to be * deleted before handling guest panic event is finished. 
*/ virObjectRef(vm); - if (virThreadPoolSendJob(driver->workerPool, 0, processEvent) < 0) { - if (!virObjectUnref(vm)) - vm = NULL; - VIR_FREE(processEvent); - } + processGuestPanicEvent(driver, vm, vm->def->onCrash, info); + virObjectUnref(vm); exit: return; @@ -1506,10 +1468,8 @@ qemuProcessEventHandleDeviceDeleted(qemuEventPtr ev, void *opaque) { virQEMUDriverPtr driver = opaque; - struct qemuProcessEvent *processEvent = NULL; - char *data; virDomainObjPtr vm; - const char *devAlias; + char *devAlias = NULL; if (!ev) return; @@ -1528,29 +1488,14 @@ qemuProcessEventHandleDeviceDeleted(qemuEventPtr ev, QEMU_DOMAIN_UNPLUGGING_DEVICE_STATUS_OK)) goto cleanup; - if (VIR_ALLOC(processEvent) < 0) - goto error; - - processEvent->eventType = QEMU_PROCESS_EVENT_DEVICE_DELETED; - if (VIR_STRDUP(data, devAlias) < 0) - goto error; - processEvent->data = data; - processEvent->vm = vm; - virObjectRef(vm); - if (virThreadPoolSendJob(driver->workerPool, 0, processEvent) < 0) { - ignore_value(virObjectUnref(vm)); - goto error; - } + processDeviceDeletedEvent(driver, vm, devAlias); + virObjectUnref(vm); cleanup: - VIR_FREE(ev->evData.ev_deviceDel.device); + if (ev->evData.ev_deviceDel.device) + VIR_FREE(ev->evData.ev_deviceDel.device); return; - error: - if (processEvent) - VIR_FREE(processEvent->data); - VIR_FREE(processEvent); - goto cleanup; } @@ -1744,8 +1689,6 @@ qemuProcessEventHandleSerialChanged(qemuEventPtr ev, void *opaque) { virQEMUDriverPtr driver = opaque; - struct qemuProcessEvent *processEvent = NULL; - char *data; virDomainObjPtr vm; char *devAlias; bool connected; @@ -1764,30 +1707,14 @@ qemuProcessEventHandleSerialChanged(qemuEventPtr ev, VIR_DEBUG("Serial port %s state changed to '%d' in domain %p %s", devAlias, connected, vm, vm->def->name); - if (VIR_ALLOC(processEvent) < 0) - goto error; - - processEvent->eventType = QEMU_PROCESS_EVENT_SERIAL_CHANGED; - if (VIR_STRDUP(data, devAlias) < 0) - goto error; - processEvent->data = data; - processEvent->action = connected; - processEvent->vm = vm; virObjectRef(vm); - if (virThreadPoolSendJob(driver->workerPool, 0, processEvent) < 0) { - ignore_value(virObjectUnref(vm)); - goto error; - } + processSerialChangedEvent(driver, vm, devAlias, connected); + virObjectUnref(vm); cleanup: VIR_FREE(ev->evData.ev_serial.devAlias); return; - error: - if (processEvent) - VIR_FREE(processEvent->data); - VIR_FREE(processEvent); - goto cleanup; } @@ -1934,7 +1861,11 @@ qemuProcessEnqueueEvent(qemuMonitorPtr mon ATTRIBUTE_UNUSED, /* Bad code alert: Fix this lookup to scan table for correct index. * Works for now since event table is sorted */ ev->handler = qemuEventFunctions[ev->ev_type].handler_func; - return virEnqueueVMEvent(driver->ev_list, ev); + if (!virEnqueueVMEvent(driver->ev_list, ev)) { + /* Bad code alert #2: Use a better notification mechanism */ + return virThreadPoolSendJob(driver->workerPool, 0, NULL); + } + return -1; } static qemuMonitorCallbacks monitorCallbacks = { -- 2.9.5
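The hunk above wires the thread pool to virEventWorkerScanQueue() without showing its body. From the design described in the cover letter it would scan the global queue and try-lock each event's VM, roughly as follows (the list-walk helpers and the trylock return convention are my assumptions, not the posted code):

    static void
    virEventWorkerScanQueue(void *data ATTRIBUTE_UNUSED, void *opaque)
    {
        virQEMUDriverPtr driver = opaque;
        qemuEventPtr ev = virEventListHead(driver->ev_list);  /* assumed accessor */

        while (ev) {
            qemuEventPtr next = ev->global_next;              /* assumed field */

            /* Assuming virObjectTrylock() returns 0 when the lock was taken. */
            if (virObjectTrylock(ev->vm) == 0) {
                ev->handler(ev, driver);
                virDequeueVMEvent(driver->ev_list, ev);       /* assumed helper */
                virObjectUnlock(ev->vm);
            }
            /* Otherwise an RPC thread owns the VM; that thread drains the
             * per-VM queue itself when its job completes. */
            ev = next;
        }
    }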

Signed-off-by: Prerna Saxena <saxenap.ltc@gmail.com> --- src/qemu/qemu_driver.c | 20 ++++++++++++++++++++ src/qemu/qemu_event.c | 2 ++ 2 files changed, 22 insertions(+) diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index 881f253..e4b6d06 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -1781,6 +1781,11 @@ static virDomainPtr qemuDomainCreateXML(virConnectPtr conn, VIR_DOMAIN_OBJ_LIST_ADD_CHECK_LIVE, NULL))) goto cleanup; + + if (virQemuVmEventListInit(vm) < 0) { + virDomainObjListRemove(driver->domains, vm); + goto cleanup; + } virObjectRef(vm); def = NULL; @@ -5531,6 +5536,11 @@ qemuDomainRestoreFlags(virConnectPtr conn, VIR_DOMAIN_OBJ_LIST_ADD_CHECK_LIVE, NULL))) goto cleanup; + + if (virQemuVmEventListInit(vm) < 0) { + virDomainObjListRemove(driver->domains, vm); + goto cleanup; + } virObjectRef(vm); def = NULL; @@ -6208,6 +6218,11 @@ qemuDomainDefineXMLFlags(virConnectPtr conn, 0, &oldDef))) goto cleanup; + if (virQemuVmEventListInit(vm) < 0) { + virDomainObjListRemove(driver->domains, vm); + goto cleanup; + } + virObjectRef(vm); def = NULL; if (qemuDomainHasBlockjob(vm, true)) { @@ -15073,6 +15088,11 @@ static virDomainPtr qemuDomainQemuAttach(virConnectPtr conn, NULL))) goto cleanup; + if (virQemuVmEventListInit(vm) < 0) { + virDomainObjListRemove(driver->domains, vm); + goto cleanup; + } + virObjectRef(vm); def = NULL; diff --git a/src/qemu/qemu_event.c b/src/qemu/qemu_event.c index beb309f..00a06ee 100644 --- a/src/qemu/qemu_event.c +++ b/src/qemu/qemu_event.c @@ -145,6 +145,8 @@ int virEnqueueVMEvent(virQemuEventList *qlist, qemuEventPtr ev) /* Insert into per-Vm queue */ vmq = ev->vm->vmq; + if (!vmq) + goto error; virMutexLock(&(vmq->lock)); if (vmq->last) { -- 2.9.5
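This patch shows the callers of virQemuVmEventListInit() but not its body; based on the vmq->lock and vmq->last usage visible in the qemu_event.c hunk, a plausible shape is (struct and field names partly assumed):

    int
    virQemuVmEventListInit(virDomainObjPtr vm)
    {
        virQemuVmEventQueuePtr vmq;       /* type name assumed */

        if (VIR_ALLOC(vmq) < 0)
            return -1;

        if (virMutexInit(&vmq->lock) < 0) {
            VIR_FREE(vmq);
            return -1;
        }

        vmq->head = NULL;                 /* field name assumed */
        vmq->last = NULL;
        vm->vmq = vmq;
        return 0;
    }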

On Tue, Oct 24, 2017 at 10:34:53 -0700, Prerna Saxena wrote:
As noted in https://www.redhat.com/archives/libvir-list/2017-May/msg00016.html libvirt-QEMU driver handles all async events from the main loop. Each event handling needs the per-VM lock to make forward progress. In the case where an async event is received for the same VM which has an RPC running, the main loop is held up contending for the same lock.
This impacts scalability, and should be addressed on priority.
Note that libvirt does have a 2-step deferred handling for a few event categories, but (1) That is insufficient since blocking happens before the handler could disambiguate which one needs to be posted to this other queue. (2) There needs to be homogeneity.
The current series builds a framework for recording and handling VM events. It initializes per-VM event queue, and a global event queue pointing to events from all the VMs. Event handling is staggered in 2 stages: - When an event is received, it is enqueued in the per-VM queue as well as the global queues. - The global queue is built into the QEMU Driver as a threadpool (currently with a single thread). - Enqueuing of a new event triggers the global event worker thread, which then attempts to take a lock for this event's VM. - If the lock is available, the event worker runs the function handling this event type. Once done, it dequeues this event from the global as well as per-VM queues. - If the lock is unavailable(ie taken by RPC thread), the event worker thread leaves this as-is and picks up the next event.
If I get it right, the event is either processed immediately when its VM object is unlocked or it has to wait until the current job running on the VM object finishes even though the lock may be released before that. Correct? If so, this needs to be addressed.
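To make that window concrete: the usual job pattern in the QEMU driver drops the VM object lock while the monitor is in use, yet the posted series drains the per-VM queue only when the job ends. A simplified sketch using the existing qemu_domain.c helpers (the command call is a placeholder):

    static int
    someDomainAPI(virQEMUDriverPtr driver, virDomainObjPtr vm)
    {
        qemuDomainObjPrivatePtr priv = vm->privateData;
        int ret = -1;

        if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0)
            return -1;                            /* vm object lock held */

        qemuDomainObjEnterMonitor(driver, vm);    /* vm object lock dropped */
        ret = qemuMonitorSomeCommand(priv->mon);  /* placeholder command */
        if (qemuDomainObjExitMonitor(driver, vm) < 0)  /* lock re-taken */
            ret = -1;

        qemuDomainObjEndJob(driver, vm);  /* per-VM queue drained only here */
        return ret;
    }

The event worker's trylock can therefore succeed in the EnterMonitor window, but if it loses that race it gets no further chance until EndJob.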
- Once the RPC thread completes, it looks for events pertaining to the VM in the per-VM event queue. It then processes the events serially (holding the VM lock) until there are no more events remaining for this VM. At this point, the per-VM lock is relinquished.
Patch Series status: Strictly RFC only. No compilation issues. I have not had a chance to (stress) test it after rebase to latest master. Note that documentation and test coverage is TBD, since a few open points remain.
Known issues/ caveats: - RPC handling time will become non-deterministic. - An event will only be "notified" to a client once the RPC for same VM completes. - Needs careful consideration in all cases where a QMP event is used to "signal" an RPC thread, else will deadlock.
This last issue is actually a show stopper here. We need to make sure QMP events are processed while a job is still active on the same domain. Otherwise things like block jobs and migration, which are long-running jobs driven by events, will break. Jirka

On Wed, Oct 25, 2017 at 4:12 PM, Jiri Denemark <jdenemar@redhat.com> wrote:
On Tue, Oct 24, 2017 at 10:34:53 -0700, Prerna Saxena wrote:
As noted in https://www.redhat.com/archives/libvir-list/2017-May/msg00016.html libvirt-QEMU driver handles all async events from the main loop. Each event handling needs the per-VM lock to make forward progress. In the case where an async event is received for the same VM which has an RPC running, the main loop is held up contending for the same lock.
This impacts scalability, and should be addressed on priority.
Note that libvirt does have a 2-step deferred handling for a few event categories, but (1) That is insufficient since blocking happens before the handler could disambiguate which one needs to be posted to this other queue. (2) There needs to be homogeneity.
The current series builds a framework for recording and handling VM events. It initializes per-VM event queue, and a global event queue pointing to events from all the VMs. Event handling is staggered in 2 stages: - When an event is received, it is enqueued in the per-VM queue as well as the global queues. - The global queue is built into the QEMU Driver as a threadpool (currently with a single thread). - Enqueuing of a new event triggers the global event worker thread, which then attempts to take a lock for this event's VM. - If the lock is available, the event worker runs the function handling this event type. Once done, it dequeues this event from the global as well as per-VM queues. - If the lock is unavailable (ie taken by RPC thread), the event worker thread leaves this as-is and picks up the next event.
If I get it right, the event is either processed immediately when its VM object is unlocked or it has to wait until the current job running on the VM object finishes even though the lock may be released before that. Correct? If so, this needs to be addressed.
In most cases, the lock is released just before we end the API. However, it is a small change that can be made.
- Once the RPC thread completes, it looks for events pertaining to the VM in the per-VM event queue. It then processes the events serially (holding the VM lock) until there are no more events remaining for this VM. At this point, the per-VM lock is relinquished.
Patch Series status: Strictly RFC only. No compilation issues. I have not had a chance to (stress) test it after rebase to latest master. Note that documentation and test coverage is TBD, since a few open points remain.
Known issues/ caveats: - RPC handling time will become non-deterministic. - An event will only be "notified" to a client once the RPC for same VM completes. - Needs careful consideration in all cases where a QMP event is used to "signal" an RPC thread, else will deadlock.
This last issue is actually a show stopper here. We need to make sure QMP events are processed while a job is still active on the same domain. Otherwise things like block jobs and migration, which are long-running jobs driven by events, will break.
Jirka
Completely agree, which is why I have explicitly mentioned this. However, I do not completely follow why it needs to be this way. Can the block job APIs between QEMU <--> libvirt be fixed so that such behaviour is avoided? Regards, Prerna

On Thu, Oct 26, 2017 at 10:21:17 +0530, Prerna wrote:
On Wed, Oct 25, 2017 at 4:12 PM, Jiri Denemark <jdenemar@redhat.com> wrote:
On Tue, Oct 24, 2017 at 10:34:53 -0700, Prerna Saxena wrote:
[...]
Patch Series status: Strictly RFC only. No compilation issues. I have not had a chance to (stress) test it after rebase to latest master. Note that documentation and test coverage is TBD, since a few open points remain.
Known issues/ caveats: - RPC handling time will become non-deterministic. - An event will only be "notified" to a client once the RPC for same VM completes. - Needs careful consideration in all cases where a QMP event is used to "signal" an RPC thread, else will deadlock.
This last issue is actually a show stopper here. We need to make sure QMP events are processed while a job is still active on the same domain. Otherwise things like block jobs and migration, which are long-running jobs driven by events, will break.
Jirka
Completely agree, which is why I have explicitly mentioned this. However, I do not completely follow why it needs to be this way. Can the block job APIs between QEMU <--> libvirt be fixed so that such behaviour is avoided?
Not really. Events from qemu are a big improvement over the times when we were polling for the state of jobs. Additionally, migration with storage in libvirt uses blockjobs, which are asynchronous in qemu but which libvirt needs to wait for synchronously. The fact that blockjobs in qemu need to be asynchronous (they take a long time and libvirt needs to be able to use the monitor meanwhile) requires us to handle incoming events while a libvirt API is active.
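Concretely, the synchronous wait looks roughly like this (modelled on the blockJobSync path visible in the qemu_process.c hunk earlier in this thread; the status field and the wait helper are era-specific, so treat the names as illustrative). It only makes progress when the event handler runs and calls virDomainObjBroadcast():

    /* In the API thread, with the vm lock held and a job active: */
    while (diskPriv->blockJobStatus == -1) {   /* field name illustrative */
        /* Releases the vm lock, sleeps on the domain condition, and
         * re-takes the lock; older code open-codes virCondWait() where
         * virDomainObjWait() is not available. */
        if (virDomainObjWait(vm) < 0)
            return -1;
    }

    /* In the event handler (see the qemu_process.c hunk posted above): */
    diskPriv->blockJobStatus = status;
    virDomainObjBroadcast(vm);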

On Tue, Oct 24, 2017 at 10:34:53AM -0700, Prerna Saxena wrote:
As noted in https://www.redhat.com/archives/libvir-list/2017-May/msg00016.html libvirt-QEMU driver handles all async events from the main loop. Each event handling needs the per-VM lock to make forward progress. In the case where an async event is received for the same VM which has an RPC running, the main loop is held up contending for the same lock.
This impacts scalability, and should be addressed on priority.
Note that libvirt does have a 2-step deferred handling for a few event categories, but (1) That is insufficient since blocking happens before the handler could disambiguate which one needs to be posted to this other queue. (2) There needs to be homogeneity.
The current series builds a framework for recording and handling VM events. It initializes per-VM event queue, and a global event queue pointing to events from all the VMs. Event handling is staggered in 2 stages: - When an event is received, it is enqueued in the per-VM queue as well as the global queues. - The global queue is built into the QEMU Driver as a threadpool (currently with a single thread). - Enqueuing of a new event triggers the global event worker thread, which then attempts to take a lock for this event's VM. - If the lock is available, the event worker runs the function handling this event type. Once done, it dequeues this event from the global as well as per-VM queues. - If the lock is unavailable(ie taken by RPC thread), the event worker thread leaves this as-is and picks up the next event. - Once the RPC thread completes, it looks for events pertaining to the VM in the per-VM event queue. It then processes the events serially (holding the VM lock) until there are no more events remaining for this VM. At this point, the per-VM lock is relinquished.
One of the nice aspects of processing the QEMU events in the main event loop is that handling of them is self-throttling. ie if one QEMU process goes mental and issues lots of events, we'll spend a lot of time processing them all, but our memory usage is still bounded. If we take events from the VM and put them on a queue that is processed asynchronously, and the event processing thread gets stuck for some reason, then libvirt will end up queuing an unbounded number of events. This could cause considerable memory usage in libvirt. This could be exploited by a malicious VM to harm libvirt. eg a malicious QEMU could stop responding to monitor RPC calls, which would tie up the RPC threads and handling of RPC calls. This would in turn prevent events being processed due to being unable to acquire the state lock. Now the VM can flood libvirtd with events which will all be read off the wire and queued in memory, potentially forever. So libvirt memory usage will balloon to an arbitrary level. Now, the current code isn't great when faced with malicious QEMU because with the same approach, QEMU can cause libvirtd main event loop to stall as you found. This feels less bad than unbounded memory usage though - if libvirt uses lots of memory, this will push other apps on the host into swap, and/or trigger the OOM killer. So I agree that we need to make event processing asynchronous from the main loop. When doing that though, I think we need to put an upper limit on the number of events we're willing to queue from a VM. When we hit that limit, we should update the monitor event loop watch so that we stop reading further events, until existing events have been processed. The other attractive thing about the way events currently work is that it also automatically avoids lock acquisition priority problems wrt incoming RPC calls. ie, we are guaranteed that once thread workers have finished all currently queued RPC calls, we will be able to acquire the VM lock to process the event. All further RPC calls on the wire won't be read off the wire until events have been processed. With events being processed asynchronously from RPC calls, there is a risk of starvation where the event loop thread constantly loses the race to acquire the VM lock vs other incoming RPC calls. I guess this is one reason why you chose to process pending events at the end of the RPC call processing.
Known issues/ caveats: - RPC handling time will become non-deterministic. - An event will only be "notified" to a client once the RPC for same VM completes.
Yeah, these two scenarios are not very nice IMHO. I'm also pretty wary of having 2 completely different places in which events are processed. ie some events are processed by the dedicated event thread, while other events are processed immediately after an RPC call. When you have these kinds of distinct code paths it tends to lead to bugs because one code path will be taken most of the time, and the other code path only taken in unusual circumstances (and is thus rarely tested and liable to have bugs). I tend to think that we would have to do - A dedicated pool of threads for processing events from all VMs - Stop reading events from QMP if VM's event queue depth is > N - Maintain an ordered queue of waiters for the VM job lock and explicitly wake up the first waiter The last point means we would be guaranteed to process all VM events before processing new RPC calls for that VM. I did wonder if we could perhaps re-use the priority RPC thread pool for processing events, but in retrospect I think that is unwise. We want the priority pool to always be able to handle a VM destroy event if we're stuck processing events. Regards, Daniel -- |: https://berrange.com -o- https://www.flickr.com/photos/dberrange :| |: https://libvirt.org -o- https://fstop138.berrange.com :| |: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
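A minimal sketch of the queue-depth throttle from the second bullet (virEventUpdateHandle() and VIR_EVENT_HANDLE_READABLE are the existing event-loop APIs; everything else here is a hypothetical name):

    #define QEMU_VM_EVENT_QUEUE_MAX 64    /* hypothetical limit */

    static int
    qemuEventEnqueueThrottled(virQEMUDriverPtr driver, qemuEventPtr ev)
    {
        if (virEnqueueVMEvent(driver->ev_list, ev) < 0)
            return -1;

        /* 'depth' and 'monWatch' are hypothetical fields. */
        if (++ev->vm->vmq->depth > QEMU_VM_EVENT_QUEUE_MAX)
            virEventUpdateHandle(ev->vm->vmq->monWatch, 0);  /* mask reads */

        return 0;
    }

    /* ...and wherever a VM's queue drains back under the threshold: */
    virEventUpdateHandle(vmq->monWatch, VIR_EVENT_HANDLE_READABLE);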

Thanks for your reply, Daniel. I am still on vacation all of this week so have not been able to respond. A few questions inline:
On Tue, Oct 24, 2017 at 10:34:53AM -0700, Prerna Saxena wrote:
As noted in https://www.redhat.com/archives/libvir-list/2017-May/msg00016.html libvirt-QEMU driver handles all async events from the main loop. Each event handling needs the per-VM lock to make forward progress. In the case where an async event is received for the same VM which has an RPC running, the main loop is held up contending for the same lock.
This impacts scalability, and should be addressed on priority.
Note that libvirt does have a 2-step deferred handling for a few event categories, but (1) That is insufficient since blocking happens before the handler could disambiguate which one needs to be posted to this other queue. (2) There needs to be homogeneity.
The current series builds a framework for recording and handling VM events. It initializes per-VM event queue, and a global event queue pointing to events from all the VMs. Event handling is staggered in 2 stages: - When an event is received, it is enqueued in the per-VM queue as well as the global queues. - The global queue is built into the QEMU Driver as a threadpool (currently with a single thread). - Enqueuing of a new event triggers the global event worker thread, which then attempts to take a lock for this event's VM. - If the lock is available, the event worker runs the function handling this event type. Once done, it dequeues this event from the global as well as per-VM queues. - If the lock is unavailable (ie taken by RPC thread), the event worker thread leaves this as-is and picks up the next event. - Once the RPC thread completes, it looks for events pertaining to the VM in the per-VM event queue. It then processes the events serially (holding the VM lock) until there are no more events remaining for this VM. At this point, the per-VM lock is relinquished.
One of the nice aspects of processing the QEMU events in the main event loop is that handling of them is self-throttling. ie if one QEMU process goes mental and issues lots of events, we'll spend a lot of time processing them all, but our memory usage is still bounded.
If we take events from the VM and put them on a queue that is processed asynchronously, and the event processing thread gets stuck for some reason, then libvirt will end up queuing an unbounded number of events. This could cause considerable memory usage in libvirt. This could be exploited by a malicious VM to harm libvirt. eg a malicious QEMU could stop responding to monitor RPC calls, which would tie up the RPC threads and handling of RPC calls. This would in turn prevent events being processed due to being unable to acquire the state lock. Now the VM can flood libvirtd with events which will all be read off the wire and queued in memory, potentially forever. So libvirt memory usage will balloon to an arbitrary level.
Now, the current code isn't great when faced with malicious QEMU because with the same approach, QEMU can cause libvirtd main event loop to stall as you found. This feels less bad than unbounded memory usage though - if libvirt uses lots of memory, this will push other apps on the host into swap, and/or trigger the OOM killer.
So I agree that we need to make event processing asynchronous from the main loop. When doing that though, I think we need to put an upper limit on the number of events we're willing to queue from a VM. When we hit that limit, we should update the monitor event loop watch so that we stop reading further events, until existing events have been processed.
We can update the watch at the monitor socket level. Ie, if we have hit threshold limits reading events off the monitor socket, we ignore this socket FD going forward. Now, this also means that we will miss any replies coming off the same socket. It will mean that legitimate RPC replies coming off the same socket will get ignored too. And in this case, we deadlock, since event notifications will not be processed until the ongoing RPC completes.
The other attractive thing about the way events currently work is that it also automatically avoids lock acquisition priority problems wrt incoming RPC calls. ie, we are guaranteed that once thread workers have finished all currently queued RPC calls, we will be able to acquire the VM lock to process the event. All further RPC calls on the wire won't be read off the wire until events have been processed.
With events being processed asynchronously from RPC calls, there is a risk of starvation where the event loop thread constantly loses the race to acquire the VM lock vs other incoming RPC calls. I guess this is one reason why you chose to process pending events at the end of the RPC call processing.
Known issues/ caveats: - RPC handling time will become non-deterministic. - An event will only be "notified" to a client once the RPC for same VM completes.
Yeah, these two scenarios are not very nice IMHO. I'm also pretty wary of having 2 completely different places in which events are processed. ie some events are processed by the dedicated event thread, while other events are processed immediately after an RPC call. When you have these kinds of distinct code paths it tends to lead to bugs because one code path will be taken most of the time, and the other code path only taken in unusual circumstances (and is thus rarely tested and liable to have bugs).
I tend to think that we would have to do
- A dedicated pool of threads for processing events from all VMs - Stop reading events from QMP if VM's event queue depth is > N - Maintain an ordered queue of waiters for the VM job lock and explicitly wake up the first waiter
How does this guarantee that the event list would be processed before the next RPC starts, even if the RPC had arrived before the event did?
The last point means we would be guaranteed to process all VM events before processing new RPC calls for that VM.
I did wonder if we could perhaps re-use the priority RPC thread pool for processing events, but in retrospect I think that is unwise. We want the priority pool to always be able to handle a VM destroy event if we're stuck processing events.
Also, would you have some suggestions as to how the block job API should be fixed? That appears to be the biggest immediate impediment. Regards, Prerna

On Mon, Nov 06, 2017 at 06:43:12AM +0100, Prerna wrote:
Thanks for your reply, Daniel. I am still on vacation all of this week so have not been able to respond. A few questions inline:
On Thu, Oct 26, 2017 at 2:43 PM, Daniel P. Berrange <berrange@redhat.com> wrote:
On Tue, Oct 24, 2017 at 10:34:53AM -0700, Prerna Saxena wrote:
As noted in https://www.redhat.com/archives/libvir-list/2017-May/msg00016.html libvirt-QEMU driver handles all async events from the main loop. Each event handling needs the per-VM lock to make forward progress. In the case where an async event is received for the same VM which has an RPC running, the main loop is held up contending for the same lock.
This impacts scalability, and should be addressed on priority.
Note that libvirt does have a 2-step deferred handling for a few event categories, but (1) That is insufficient since blocking happens before the handler could disambiguate which one needs to be posted to this other queue. (2) There needs to be homogeneity.
The current series builds a framework for recording and handling VM events. It initializes per-VM event queue, and a global event queue pointing to events from all the VMs. Event handling is staggered in 2 stages: - When an event is received, it is enqueued in the per-VM queue as well as the global queues. - The global queue is built into the QEMU Driver as a threadpool (currently with a single thread). - Enqueuing of a new event triggers the global event worker thread, which then attempts to take a lock for this event's VM. - If the lock is available, the event worker runs the function handling this event type. Once done, it dequeues this event from the global as well as per-VM queues. - If the lock is unavailable (ie taken by RPC thread), the event worker thread leaves this as-is and picks up the next event. - Once the RPC thread completes, it looks for events pertaining to the VM in the per-VM event queue. It then processes the events serially (holding the VM lock) until there are no more events remaining for this VM. At this point, the per-VM lock is relinquished.
One of the nice aspects of processing the QEMU events in the main event loop is that handling of them is self-throttling. ie if one QEMU process goes mental and issues lots of events, we'll spend a lot of time processing them all, but our memory usage is still bounded.
If we take events from the VM and put them on a queue that is processed asynchronously, and the event processing thread gets stuck for some reason, then libvirt will end up queuing an unbounded number of events. This could cause considerable memory usage in libvirt, and could be exploited by a malicious VM to harm libvirt: e.g. a malicious QEMU could stop responding to monitor RPC calls, which would tie up the RPC threads and handling of RPC calls. This would in turn prevent events being processed, due to being unable to acquire the state lock. Now the VM can flood libvirtd with events, which will all be read off the wire and queued in memory, potentially forever. So libvirt memory usage will balloon to an arbitrary level.
Now, the current code isn't great when faced with a malicious QEMU, because with the same approach QEMU can cause the libvirtd main event loop to stall, as you found. This feels less bad than unbounded memory usage though - if libvirt uses lots of memory, this will push other apps on the host into swap, and/or trigger the OOM killer.
So I agree that we need to make event processing asynchronous from the main loop. When doing that though, I think we need to put an upper limit on the number of events we're willing to queue from a VM. When we hit that limit, we should update the monitor event loop watch so that we stop reading further events until existing events have been processed.
We can update the watch at the monitor socket level, i.e. if we have hit the threshold limit reading events off the monitor socket, we ignore this socket FD going forward. However, this also means that legitimate RPC replies coming off the same socket will be ignored too. And in that case we deadlock, since event notifications will not be processed until the ongoing RPC completes.
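As an aside, the throttling mechanics suggested here map naturally onto libvirt's existing event API. A rough sketch, where QEMU_EVENT_QUEUE_MAX and the eventQueueLen field are invented for illustration (the real qemu_monitor.c manages its watch differently):

    /* Sketch: stop polling the monitor FD for reads once the VM's
     * event queue is full; QEMU then blocks when its socket buffer
     * fills, bounding event production. */
    #define QEMU_EVENT_QUEUE_MAX 128

    static void
    qemuMonitorUpdateThrottle(qemuMonitorPtr mon)
    {
        int events = 0;

        if (mon->eventQueueLen < QEMU_EVENT_QUEUE_MAX)
            events = VIR_EVENT_HANDLE_READABLE |
                     VIR_EVENT_HANDLE_ERROR |
                     VIR_EVENT_HANDLE_HANGUP;

        virEventUpdateHandle(mon->watch, events);
    }

The catch raised above still applies: with the watch disabled, replies to in-flight commands stop flowing too, so this would have to be paired with something like the drain-while-waiting idea discussed further down.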
An RPC thread waiting for a reply should have released the VM mutex, while still holding the job change lock. So the risk would be if processing an event required obtaining the job change lock. IIUC, the current code should already suffer from that risk, because events processed directly in the main loop thread would also need to acquire the job change lock. One approach would be to require that whichever thread holds the job change lock be responsible for processing any events that arrive while it is waiting for its reply. It would also have to validate that the VM is in the expected state before processing its reply.
The other attractive thing about the way events currently work is that it automatically avoids lock acquisition priority problems wrt incoming RPC calls, i.e. we are guaranteed that once worker threads have finished all currently queued RPC calls, we will be able to acquire the VM lock to process the event. Further RPC calls won't be read off the wire until events have been processed.
With events being processed asynchronously from RPC calls, there is a risk of starvation, where the event loop thread constantly loses the race to acquire the VM lock vs other incoming RPC calls. I guess this is one reason why you chose to process pending events at the end of RPC call processing.
Known issues/ caveats: - RPC handling time will become non-deterministic. - An event will only be "notified" to a client once the RPC for same VM completes.
Yeah, these two scenarios are not very nice IMHO. I'm also pretty wary of having 2 completely different places in which events are processed, i.e. some events processed by the dedicated event thread, while others are processed immediately after an RPC call. When you have these kinds of distinct code paths it tends to lead to bugs, because one code path is taken most of the time, and the other only in unusual circumstances (and is thus rarely tested and liable to have bugs).
I tend to think that we would have to do:
- A dedicated pool of threads for processing events from all VMs
- Stop reading events from QMP if a VM's event queue depth is > N
- Maintain an ordered queue of waiters for the VM job lock and explicitly wake up the first waiter (a sketch of this follows below)
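The third bullet amounts to a ticket lock. A generic pthreads sketch, not libvirt code, of FIFO-ordered waiters with explicit wakeup:

    #include <pthread.h>

    typedef struct {
        pthread_mutex_t lock;        /* PTHREAD_MUTEX_INITIALIZER */
        pthread_cond_t cond;         /* PTHREAD_COND_INITIALIZER */
        unsigned long nextTicket;    /* next ticket to hand out */
        unsigned long nowServing;    /* ticket allowed to proceed */
    } fairJobLock;

    static void
    fairJobLockAcquire(fairJobLock *l)
    {
        pthread_mutex_lock(&l->lock);
        unsigned long mine = l->nextTicket++;
        while (l->nowServing != mine)
            pthread_cond_wait(&l->cond, &l->lock);
        pthread_mutex_unlock(&l->lock);
    }

    static void
    fairJobLockRelease(fairJobLock *l)
    {
        pthread_mutex_lock(&l->lock);
        l->nowServing++;
        /* Wake everyone; only the holder of the next ticket proceeds. */
        pthread_cond_broadcast(&l->cond);
        pthread_mutex_unlock(&l->lock);
    }

Since waiters are served strictly in arrival order, an event worker cannot be starved indefinitely by a stream of RPC calls, which addresses the starvation risk noted earlier.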
How does this guarantee that the event list would be processed before the next RPC starts, even if the RPC had arrived before the event did?
It doesn't need to guarantee that.
The last point means we would be guaranteed to process all VM events before processing new RPC calls for that VM.
I did wonder if we could perhaps re-use the priority RPC thread pool for processing events, but in retrospect I think that is unwise. We want the priority pool to always be able to handle a VM destroy event if we're stuck processing events.
Also, would you have some suggestions as to how the block job API should be fixed? That appears to be the biggest immediate impediment.
I'm not seeing what's broken with the block job API.

Regards, Daniel
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|

I spent a while trying to work through this proposal; here are a few points which need more thought:

On Wed, Nov 8, 2017 at 7:22 PM, Daniel P. Berrange <berrange@redhat.com> wrote:
On Mon, Nov 06, 2017 at 06:43:12AM +0100, Prerna wrote:
Thanks for your reply, Daniel. I am still on vacation all of this week so have not been able to respond. A few questions inline:
On Thu, Oct 26, 2017 at 2:43 PM, Daniel P. Berrange <berrange@redhat.com> wrote:
On Tue, Oct 24, 2017 at 10:34:53AM -0700, Prerna Saxena wrote:
[cover letter quoted again; trimmed]
[earlier exchange quoted again: Daniel's points on self-throttling, unbounded event queueing, and capping the per-VM event queue, plus Prerna's note that ignoring the monitor FD also blocks RPC replies; trimmed, see above]
An RPC thread waiting for a reply should have released the VM mutex, while still holding the job change lock. So the risk would be if processing an event required obtaining the job change lock.
IIUC, the current code should already suffer from that risk though, because events processed directly in the main loop thread would also need to acquire the job change lock.
(1) Not every change to a VM is attributed to a running job, so the locking enforced by the per-VM mutex is the only thing providing serialization. As an example, see qemuProcessHandleShutdown(), the handler that runs in response to a "SHUTDOWN" event: it effectively kills the VM but does not start a job for it (I think that is a bug; not sure why it is this way). The handling of IO errors is another instance: qemuProcessHandleIOError() can pause a VM if the domain is configured that way, but all of this is done without starting a job.
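To make the asymmetry concrete, the two patterns look roughly like this (paraphrased, with error handling omitted; not the actual handler code):

    /* Event-handler pattern: only the per-VM mutex guards the change. */
    virObjectLock(vm);
    /* ... mutate VM state directly, e.g. mark it shut down or paused ... */
    virObjectUnlock(vm);

    /* API pattern: a job is begun, and the mutex is dropped while the
     * job waits on the monitor. */
    virObjectLock(vm);
    if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) == 0) {
        /* ... VM mutex released while a monitor command is in flight ... */
        qemuDomainObjEndJob(driver, vm);
    }
    virObjectUnlock(vm);

Because the first pattern never takes the job lock, serializing on the job lock alone cannot stop such handlers from racing with an API call that has temporarily dropped the VM mutex.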
One approach would be to require that whichever thread holds the job change lock be responsible for processing any events that arrive while it is waiting for its reply. It would also have to validate the VM is in expected state before processing its reply.
How do I interrupt the context of the current RPC pthread, which is waiting for a reply, so that it can run an event handler? I think this would need a coroutine-based implementation to switch execution contexts. Is that what you were referring to?
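One reading of Daniel's suggestion that needs neither interrupting the thread nor coroutines: the RPC thread already sits in a condition-wait loop for its monitor reply, and that same loop could drain the VM's event queue. A sketch, loosely mirroring the monitor's reply-wait machinery (msg->finished and mon->notify exist in qemu_monitor.c, but the locking here is simplified, and the queue helpers are the hypothetical ones from the earlier sketch):

    /* The job-holding thread processes queued events while waiting. */
    while (!msg->finished) {
        struct qemuVMEvent *ev;

        /* Woken both when a reply arrives and when an event is queued. */
        if (virCondWait(&mon->notify, &mon->lock) < 0)
            break;

        while ((ev = qemuPerVMQueuePop(vm)))
            qemuEventRun(ev);   /* runs under the job this thread holds */
    }
    /* Per the caveat above: re-validate the VM state before trusting
     * the reply, since an event such as SHUTDOWN may have changed it. */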
[Daniel's points on lock acquisition priority, starvation, the two-code-paths concern, and his three-point proposal quoted again; trimmed, see above]
How does this guarantee that the event list would be processed before the next RPC starts, even if the RPC had arrived before the event did?
It doesn't need to guarantee that.
Don't you think that guarantee is necessary, and central to event handling? Surely we cannot start the next RPC until all changes to this domain effected by the current stream of events have been fully processed in libvirtd?
Also, would you have some suggestions as to how the block job API should be fixed? That appears to be the biggest immediate impediment.
I'm not seeing what's broken with the block job API
In qemuProcessHandleBlockJob(), the handling of the same event differs depending on whether or not a synchronous API call was waiting for it. In my proposed scheme, where event handling definitively happens only after all API requests complete, this might lead to a deadlock. Can you please let me know if this is not a possibility?
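For reference, the divergence in question has roughly this shape (a paraphrase of the handler's structure with invented helper names, not the actual code):

    /* Paraphrased shape of qemuProcessHandleBlockJob(): */
    if (blockJobHasSyncWaiter(disk)) {
        /* An API thread inside libvirtd is blocked waiting for this
         * very event: record the status and signal its condition. */
        blockJobRecordStatus(disk, status);
        blockJobSignalWaiter(disk);
    } else {
        /* Nobody is waiting: hand off for asynchronous processing. */
        blockJobScheduleAsync(vm, disk, status);
    }

If events are only delivered once the VM's API calls have completed, the first branch can never fire while its waiter is still blocked: the API thread is waiting on an event that will only be dispatched after that same thread returns, which is precisely the deadlock being described.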
Regards, Daniel
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|

On Tue, Oct 24, 2017 at 07:34 PM +0200, Prerna Saxena <saxenap.ltc@gmail.com> wrote:
As noted in https://www.redhat.com/archives/libvir-list/2017-May/msg00016.html libvirt-QEMU driver handles all async events from the main loop. Each event handling needs the per-VM lock to make forward progress. In the case where an async event is received for the same VM which has an RPC running, the main loop is held up contending for the same lock.
What about the remaining qemuMonitorCallbacks? The main event loop can still be 'blocked' by e.g. qemuProcessHandleMonitorEOF if the VM is already locked by a worker thread. In fact, we currently have a problem with D-Bus which causes a D-Bus call (of a worker thread) to run into the timeout of 30 seconds. During these 30 seconds the main event loop is stuck.

I tried the patch series and got a segmentation fault:

Thread 1 "libvirtd" received signal SIGSEGV, Segmentation fault.
0x000003ff98faa452 in virEnqueueVMEvent (qlist=0x3ff908ce760, ev=<optimized out>) at ../../src/qemu/qemu_event.c:153
153         vmq_entry->ev->ev_id = vmq->last->ev->ev_id + 1;
(gdb) bt
#0  0x000003ff98faa452 in virEnqueueVMEvent (qlist=0x3ff908ce760, ev=<optimized out>) at ../../src/qemu/qemu_event.c:153
#1  0x000003ff98fc3564 in qemuProcessEnqueueEvent (mon=<optimized out>, vm=<optimized out>, ev=<optimized out>, opaque=0x3ff90548ec0) at ../../src/qemu/qemu_process.c:1864
#2  0x000003ff98fe4804 in qemuMonitorEnqueueEvent (mon=mon@entry=0x3ff4c007440, ev=0x2aa1e0104c0) at ../../src/qemu/qemu_monitor.c:1325
#3  0x000003ff98fe7102 in qemuMonitorEmitShutdown (mon=mon@entry=0x3ff4c007440, guest=<optimized out>, seconds=seconds@entry=1510683878, micros=micros@entry=703956) at ../../src/qemu/qemu_monitor.c:1365
#4  0x000003ff98ffc19a in qemuMonitorJSONHandleShutdown (mon=0x3ff4c007440, data=<optimized out>, seconds=1510683878, micros=<optimized out>) at ../../src/qemu/qemu_monitor_json.c:552
#5  0x000003ff98ffbb8a in qemuMonitorJSONIOProcessEvent (mon=mon@entry=0x3ff4c007440, obj=obj@entry=0x2aa1e012030) at ../../src/qemu/qemu_monitor_json.c:208
#6  0x000003ff99002138 in qemuMonitorJSONIOProcessLine (mon=mon@entry=0x3ff4c007440, line=0x2aa1e010460 "{\"timestamp\": {\"seconds\": 1510683878, \"microseconds\": 703956}, \"event\": \"SHUTDOWN\"}", msg=msg@entry=0x0) at ../../src/qemu/qemu_monitor_json.c:237
#7  0x000003ff990022b4 in qemuMonitorJSONIOProcess (mon=mon@entry=0x3ff4c007440, data=0x2aa1e014bc0 "{\"timestamp\": {\"seconds\": 1510683878, \"microseconds\": 703956}, \"event\": \"SHUTDOWN\"}\r\n", len=85, msg=msg@entry=0x0) at ../../src/qemu/qemu_monitor_json.c:279
#8  0x000003ff98fe4b44 in qemuMonitorIOProcess (mon=mon@entry=0x3ff4c007440) at ../../src/qemu/qemu_monitor.c:443
#9  0x000003ff98fe5d00 in qemuMonitorIO (watch=<optimized out>, fd=<optimized out>, events=0, opaque=0x3ff4c007440) at ../../src/qemu/qemu_monitor.c:697
#10 0x000003ffa68d6442 in virEventPollDispatchHandles (nfds=<optimized out>, fds=0x2aa1e013990) at ../../src/util/vireventpoll.c:508
#11 0x000003ffa68d66c8 in virEventPollRunOnce () at ../../src/util/vireventpoll.c:657
#12 0x000003ffa68d44e4 in virEventRunDefaultImpl () at ../../src/util/virevent.c:327
#13 0x000003ffa6a83c5e in virNetDaemonRun (dmn=0x2aa1dfe3eb0) at ../../src/rpc/virnetdaemon.c:838
#14 0x000002aa1df29cc4 in main (argc=<optimized out>, argv=<optimized out>) at ../../daemon/libvirtd.c:1494
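Judging purely from the faulting line, the likely culprit is vmq->last being NULL when the first event is enqueued on an empty queue (or left dangling after a dequeue). A guard along these lines would be the obvious candidate fix, assuming ev_id only needs to increase monotonically per queue; only the series author can confirm the intended invariant:

    /* Hypothetical guard for virEnqueueVMEvent() in qemu_event.c: */
    if (vmq->last)
        vmq_entry->ev->ev_id = vmq->last->ev->ev_id + 1;
    else
        vmq_entry->ev->ev_id = 0;   /* first event on an empty queue */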
[remainder of the quoted cover letter and diffstat trimmed]
Beste Grüße / Kind regards
Marc Hartmayer

On Wed, Nov 15, 2017 at 12:07 AM, Marc Hartmayer <mhartmay@linux.vnet.ibm.com> wrote:
What about the remaining qemuMonitorCallbacks? The main event loop can still be 'blocked' by e.g. qemuProcessHandleMonitorEOF if the VM is already locked by a worker thread. In fact, we currently have a problem with D-Bus which causes a D-Bus call (of a worker thread) to run into the timeout of 30 seconds. During these 30 seconds the main event loop is stuck.
In the current series, EOF handling is just another queued event, and hence is processed only once worker threads complete. It needs to go into a priority thread pool instead, and should wake up RPC workers.
I tried the patch series and got a segmentation fault:
Thread 1 "libvirtd" received signal SIGSEGV, Segmentation fault. 0x000003ff98faa452 in virEnqueueVMEvent (qlist=0x3ff908ce760, ev=<optimized out>) at ../../src/qemu/qemu_event.c:153 153 vmq_entry->ev->ev_id = vmq->last->ev->ev_id + 1; (gdb) bt #0 0x000003ff98faa452 in virEnqueueVMEvent (qlist=0x3ff908ce760, ev=<optimized out>) at ../../src/qemu/qemu_event.c:153 #1 0x000003ff98fc3564 in qemuProcessEnqueueEvent (mon=<optimized out>, vm=<optimized out>, ev=<optimized out>, opaque=0x3ff90548ec0) at ../../src/qemu/qemu_process.c:1864 #2 0x000003ff98fe4804 in qemuMonitorEnqueueEvent (mon=mon@entry=0x3ff4c007440, ev=0x2aa1e0104c0) at ../../src/qemu/qemu_monitor.c:1325 #3 0x000003ff98fe7102 in qemuMonitorEmitShutdown (mon=mon@entry=0x3ff4c007440, guest=<optimized out>, seconds=seconds@entry=1510683878, micros=micros@entry=703956) at ../../src/qemu/qemu_monitor.c:1365 #4 0x000003ff98ffc19a in qemuMonitorJSONHandleShutdown (mon=0x3ff4c007440, data=<optimized out>, seconds=1510683878, micros=<optimized out>) at ../../src/qemu/qemu_monitor_json.c:552 #5 0x000003ff98ffbb8a in qemuMonitorJSONIOProcessEvent (mon=mon@entry=0x3ff4c007440, obj=obj@entry=0x2aa1e012030) at ../../src/qemu/qemu_monitor_json.c:208 #6 0x000003ff99002138 in qemuMonitorJSONIOProcessLine (mon=mon@entry=0x3ff4c007440, line=0x2aa1e010460 "{\"timestamp\": {\"seconds\": 1510683878, \"microseconds\": 703956}, \"event\": \"SHUTDOWN\"}", msg=msg@entry=0x0) at ../../src/qemu/qemu_monitor_json.c:237 #7 0x000003ff990022b4 in qemuMonitorJSONIOProcess (mon=mon@entry=0x3ff4c007440, data=0x2aa1e014bc0 "{\"timestamp\": {\"seconds\": 1510683878, \"microseconds\": 703956}, \"event\": \"SHUTDOWN\"}\r\n", len=85, msg=msg@entry=0x0) at ../../src/qemu/qemu_monitor_json.c:279 #8 0x000003ff98fe4b44 in qemuMonitorIOProcess (mon=mon@entry=0x3ff4c007440) at ../../src/qemu/qemu_monitor.c:443 #9 0x000003ff98fe5d00 in qemuMonitorIO (watch=<optimized out>, fd=<optimized out>, events=0, opaque=0x3ff4c007440) at ../../src/qemu/qemu_monitor.c:697 #10 0x000003ffa68d6442 in virEventPollDispatchHandles (nfds=<optimized out>, fds=0x2aa1e013990) at ../../src/util/vireventpoll.c:508 #11 0x000003ffa68d66c8 in virEventPollRunOnce () at ../../src/util/vireventpoll.c:657 #12 0x000003ffa68d44e4 in virEventRunDefaultImpl () at ../../src/util/virevent.c:327 #13 0x000003ffa6a83c5e in virNetDaemonRun (dmn=0x2aa1dfe3eb0) at ../../src/rpc/virnetdaemon.c:838 #14 0x000002aa1df29cc4 in main (argc=<optimized out>, argv=<optimized out>) at ../../daemon/libvirtd.c:1494
Thanks for trying it out. Let me look into this.
[remainder of the quoted cover letter, diffstat, and signatures trimmed]

On Tue, Oct 24, 2017 at 07:34 PM +0200, Prerna Saxena <saxenap.ltc@gmail.com> wrote:
[cover letter quoted again; trimmed]
Hey Prerna, any updates so far? :) I'm just curious about the status of this series, as it would fix a performance problem we have. Thank you.

Beste Grüße / Kind regards
Marc Hartmayer

Hi Marc,
Currently the block job handling needs to be sorted out before events can assume independent handling from RPC contexts. Sorry, I have not been able to revisit this in the past 2 months, but this is something I would very much like to fix. I will try looking up the exact dependency on the block layer so that this series can make some progress.
regards,
Prerna

On Wed, Mar 21, 2018 at 1:24 PM, Marc Hartmayer <mhartmay@linux.vnet.ibm.com> wrote:
[quoted thread trimmed]
Participants (6): Daniel P. Berrange, Jiri Denemark, Marc Hartmayer, Peter Krempa, Prerna, Prerna Saxena