QEMU monitor commands may sleep for a prolonged period of time.
If the virDomainObjPtr or qemu driver lock is held this will
needlessly block execution of many other API calls. It also
prevents asynchronous monitor events from being dispatched
while a monitor command is executing, because deadlock would
ensue.
To resolve this, it is necessary to release all locks while
executing a monitor command. This change introduces a flag
indicating that a monitor job is active, and a condition
variable to synchronize access to this flag. This ensures that
only a single thread can be making a state change or executing
a monitor command at a time, while still allowing other API
calls to be completed without blocking.
* src/qemu/qemu_driver.c: Release driver and domain lock when
running monitor commands
* src/qemu/THREADS.txt: Document threading rules
---
src/qemu/THREADS.txt | 283 ++++++++++++++++++++++
src/qemu/qemu_driver.c | 616 +++++++++++++++++++++++++++++++++++------------
2 files changed, 741 insertions(+), 158 deletions(-)
create mode 100644 src/qemu/THREADS.txt
diff --git a/src/qemu/THREADS.txt b/src/qemu/THREADS.txt
new file mode 100644
index 0000000..1af1b83
--- /dev/null
+++ b/src/qemu/THREADS.txt
@@ -0,0 +1,283 @@
+ QEMU Driver Threading: The Rules
+ =================================
+
+This document describes how thread safety is ensured throughout
+the QEMU driver. The criteria for this model are:
+
+ - Objects must never be exclusively locked for any prolonged time
+ - Code which sleeps must be able to time out after a suitable period
+ - Must be safe against dispatch of asynchronous events from the monitor
+
+
+Basic locking primitives
+------------------------
+
+There are a number of locks on various objects
+
+ * struct qemud_driver: RWLock
+
+ This is the top level lock on the entire driver. Every API call in
+ the QEMU driver is blocked while this is held, though some internal
+ callbacks may still run asynchronously. This lock must never be held
+ for anything which sleeps/waits (i.e. monitor commands).
+
+ When obtaining the driver lock, under *NO* circumstances must
+ any lock be held on a virDomainObjPtr. This *WILL* result in
+ deadlock.
+
+
+
+ * virDomainObjPtr: Mutex
+
+ Will be locked after calling any of the virDomainFindBy{ID,Name,UUID}
+ methods.
+
+ Lock must be held when changing/reading any variable in the virDomainObjPtr
+
+ Once the lock is held, you must *NOT* try to lock the driver. You must
+ release all virDomainObjPtr locks before locking the driver, or deadlock
+ *WILL* occur.
+
+ If the lock needs to be dropped & then re-acquired for a short period of
+ time, the reference count must be incremented first using virDomainObjRef().
+ If the reference count is incremented in this way, it is not necessary
+ to have the driver locked when re-acquiring the dropped lock, since the
+ reference count prevents it being freed by another thread.
+
+ This lock must not be held for anything which sleeps/waits (i.e. monitor
+ commands).
+
+
+
+ * qemuMonitorPrivatePtr: Job condition
+
+ Since virDomainObjPtr lock must not be held during sleeps, the job condition
+ provides additional protection for code making updates.
+
+ Immediately after acquiring the virDomainObjPtr lock, any method which intends
+ to update state must acquire the job condition. The virDomainObjPtr lock
+ is released while blocking on this condition variable. Once the job condition
+ is acquired, a method can safely release the virDomainObjPtr lock whenever it
+ hits a piece of code which may sleep/wait, and re-acquire it after the
+ sleep/wait.
+
+
+ * qemuMonitorPtr: Mutex
+
+ Lock to be used when invoking any monitor command to ensure safety
+ wrt any asynchronous events that may be dispatched from the monitor.
+ It should be acquired before running a command.
+
+ The job condition *MUST* be held before acquiring the monitor lock
+
+ The virDomainObjPtr lock *MUST* be held before acquiring the monitor
+ lock.
+
+ The virDomainObjPtr lock *MUST* then be released when invoking the
+ monitor command.
+
+ The driver lock *MUST* be released when invoking the monitor command.
+
+ This ensures that the virDomainObjPtr & driver are both unlocked while
+ sleeping/waiting for the monitor response.
+
+
+
+Helper methods
+--------------
+
+To lock the driver
+
+ qemuDriverLock()
+ - Acquires the driver lock
+
+ qemuDriverUnlock()
+ - Releases the driver lock
+
+
+
+To lock the virDomainObjPtr
+
+ virDomainObjLock()
+ - Acquires the virDomainObjPtr lock
+
+ virDomainObjUnlock()
+ - Releases the virDomainObjPtr lock
+
+
+
+To acquire the job mutex
+
+ qemuDomainObjBeginJob() (if driver is unlocked)
+ - Increments ref count on virDomainObjPtr
+ - Waits on the qemuDomainObjPrivate condition while 'jobActive != 0',
+ using the virDomainObjPtr mutex
+ - Sets jobActive to 1
+
+ qemuDomainObjBeginJobWithDriver() (if driver needs to be locked)
+ - Unlocks driver
+ - Increments ref count on virDomainObjPtr
+ - Waits on the qemuDomainObjPrivate condition while 'jobActive != 0',
+ using the virDomainObjPtr mutex
+ - Sets jobActive to 1
+ - Unlocks virDomainObjPtr
+ - Locks driver
+ - Locks virDomainObjPtr
+
+ NB: this variant is required in order to comply with lock ordering rules
+ for virDomainObjPtr vs driver
+
+
+ qemuDomainObjEndJob()
+ - Set jobActive to 0
+ - Signal on qemuDomainObjPrivate condition
+ - Decrements ref count on virDomainObjPtr
+
+
+
+To acquire the QEMU monitor lock
+
+ qemuDomainObjEnterMonitor()
+ - Acquires the qemuMonitorObjPtr lock
+ - Releases the virDomainObjPtr lock
+
+ qemuDomainObjExitMonitor()
+ - Acquires the virDomainObjPtr lock
+ - Releases the qemuMonitorObjPtr lock
+
+ NB: caller must take care to drop the driver lock if necessary
+
+
+To acquire the QEMU monitor lock with the driver lock held
+
+ qemuDomainObjEnterMonitorWithDriver()
+ - Acquires the qemuMonitorObjPtr lock
+ - Releases the virDomainObjPtr lock
+ - Releases the driver lock
+
+ qemuDomainObjExitMonitorWithDriver()
+ - Acquires the driver lock
+ - Acquires the virDomainObjPtr lock
+ - Releases the qemuMonitorObjPtr lock
+
+ NB: the driver lock is dropped by the Enter method and re-acquired
+ by the Exit method
+
+
+Design patterns
+---------------
+
+
+ * Accessing or updating something with just the driver
+
+ qemuDriverLock(driver);
+
+ ...do work...
+
+ qemuDriverUnlock(driver);
+
+
+
+ * Accessing something directly to do with a virDomainObjPtr
+
+ virDomainObjPtr obj;
+
+ qemuDriverLock(driver);
+ obj = virDomainFindByUUID(driver->domains, dom->uuid);
+ qemuDriverUnlock(driver);
+
+ ...do work...
+
+ virDomainObjUnlock(obj);
+
+
+
+ * Accessing something directly to do with a virDomainObjPtr and driver
+
+ virDomainObjPtr obj;
+
+ qemuDriverLock(driver);
+ obj = virDomainFindByUUID(driver->domains, dom->uuid);
+
+ ...do work...
+
+ virDomainObjUnlock(obj);
+ qemuDriverUnlock(driver);
+
+
+
+ * Updating something directly to do with a virDomainObjPtr
+
+ virDomainObjPtr obj;
+
+ qemuDriverLockRO(driver);
+ obj = virDomainFindByUUID(driver->domains, dom->uuid);
+ qemuDriverUnlock(driver);
+
+ qemuDomainObjBeginJob(obj);
+
+ ...do work...
+
+ qemuDomainObjEndJob(obj);
+
+ virDomainObjUnlock(obj);
+
+
+
+
+ * Invoking a monitor command on a virDomainObjPtr
+
+
+ virDomainObjPtr obj;
+ qemuDomainObjPrivatePtr priv;
+
+ qemuDriverLockRO(driver);
+ obj = virDomainFindByUUID(driver->domains, dom->uuid);
+ qemuDriverUnlock(driver);
+
+ qemuDomainObjBeginJob(obj);
+
+ ...do prep work...
+
+ qemuDomainObjEnterMonitor(obj);
+ qemuMonitorXXXX(priv->mon);
+ qemuDomainObjExitMonitor(obj);
+
+ ...do final work...
+
+ qemuDomainObjEndJob(obj);
+ virDomainObjUnlock(obj);
+
+
+
+
+ * Invoking a monitor command on a virDomainObjPtr with driver locked too
+
+
+ virDomainObjPtr obj;
+ qemuDomainObjPrivatePtr priv;
+
+ qemuDriverLock(driver);
+ obj = virDomainFindByUUID(driver->domains, dom->uuid);
+
+ qemuDomainObjBeginJobWithDriver(obj);
+
+ ...do prep work...
+
+ qemuDomainObjEnterMonitorWithDriver(driver, obj);
+ qemuMonitorXXXX(priv->mon);
+ qemuDomainObjExitMonitorWithDriver(driver, obj);
+
+ ...do final work...
+
+ qemuDomainObjEndJob(obj);
+ virDomainObjUnlock(obj);
+ qemuDriverUnlock(driver);
+
+
+
+Summary
+-------
+
+ * Respect lock ordering rules: never lock driver if anything else is
+ already locked
+
+ * Don't hold locks in code which sleeps: unlock driver & virDomainObjPtr
+ when using monitor
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index fea439b..b7cde56 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -25,6 +25,7 @@
#include <sys/types.h>
#include <sys/poll.h>
+#include <sys/time.h>
#include <dirent.h>
#include <limits.h>
#include <string.h>
@@ -79,6 +80,11 @@
typedef struct _qemuDomainObjPrivate qemuDomainObjPrivate;
typedef qemuDomainObjPrivate *qemuDomainObjPrivatePtr;
struct _qemuDomainObjPrivate {
+ virCond jobCond; /* Use in conjunction with main virDomainObjPtr lock */
+ int jobActive; /* Non-zero if a job is active. Only 1 job is allowed at any time
+ * A job includes *all* monitor commands, even those just querying
+ * information, not merely actions */
+
qemuMonitorPtr mon;
};
@@ -141,19 +147,145 @@ static void qemuDomainObjPrivateFree(void *data)
}
+/*
+ * obj must be locked before calling, qemud_driver must NOT be locked
+ *
+ * This must be called by anything that will change the VM state
+ * in any way, or anything that will use the QEMU monitor.
+ *
+ * Upon successful return, the object will have its ref count increased,
+ * successful calls must be followed by EndJob eventually
+ */
+static int qemuDomainObjBeginJob(virDomainObjPtr obj) ATTRIBUTE_RETURN_CHECK;
+static int qemuDomainObjBeginJob(virDomainObjPtr obj)
+{
+ qemuDomainObjPrivatePtr priv = obj->privateData;
+
+ virDomainObjRef(obj);
+
+ while (priv->jobActive) {
+ if (virCondWait(&priv->jobCond, &obj->lock) < 0) {
+ virDomainObjUnref(obj);
+ virReportSystemError(NULL, errno,
+ "%s", _("cannot acquire job mutex"));
+ return -1;
+ }
+ }
+ priv->jobActive = 1;
+
+ return 0;
+}
+
+/*
+ * obj must be locked before calling, qemud_driver must be locked
+ *
+ * This must be called by anything that will change the VM state
+ * in any way, or anything that will use the QEMU monitor.
+ */
+static int qemuDomainObjBeginJobWithDriver(struct qemud_driver *driver,
+ virDomainObjPtr obj) ATTRIBUTE_RETURN_CHECK;
+static int qemuDomainObjBeginJobWithDriver(struct qemud_driver *driver,
+ virDomainObjPtr obj)
+{
+ qemuDomainObjPrivatePtr priv = obj->privateData;
+
+ virDomainObjRef(obj);
+ qemuDriverUnlock(driver);
+
+ while (priv->jobActive) {
+ if (virCondWait(&priv->jobCond, &obj->lock) < 0) {
+ virDomainObjUnref(obj);
+ virReportSystemError(NULL, errno,
+ "%s", _("cannot acquire job mutex"));
+ return -1;
+ }
+ }
+ priv->jobActive = 1;
+
+ virDomainObjUnlock(obj);
+ qemuDriverLock(driver);
+ virDomainObjLock(obj);
+
+ return 0;
+}
+
+/*
+ * obj must be locked before calling, qemud_driver does not matter
+ *
+ * To be called after completing the work associated with the
+ * earlier qemuDomainObjBeginJob() call
+ */
+static void qemuDomainObjEndJob(virDomainObjPtr obj)
+{
+ qemuDomainObjPrivatePtr priv = obj->privateData;
+
+ priv->jobActive = 0;
+ virCondSignal(&priv->jobCond);
+
+ virDomainObjUnref(obj);
+}
+
+
+/*
+ * obj must be locked before calling, qemud_driver must be unlocked
+ *
+ * To be called immediately before any QEMU monitor API call
+ * Must have already called qemuDomainObjBeginJob().
+ *
+ * To be followed with qemuDomainObjExitMonitor() once complete
+ */
static void qemuDomainObjEnterMonitor(virDomainObjPtr obj)
{
qemuDomainObjPrivatePtr priv = obj->privateData;
qemuMonitorLock(priv->mon);
+ virDomainObjUnlock(obj);
}
+/* obj must NOT be locked before calling, qemud_driver must be unlocked
+ *
+ * Should be paired with an earlier qemuDomainObjEnterMonitor() call
+ */
static void qemuDomainObjExitMonitor(virDomainObjPtr obj)
{
qemuDomainObjPrivatePtr priv = obj->privateData;
qemuMonitorUnlock(priv->mon);
+ virDomainObjLock(obj);
+}
+
+
+/*
+ * obj must be locked before calling, qemud_driver must be locked
+ *
+ * To be called immediately before any QEMU monitor API call
+ * Must have already called qemuDomainObjBeginJob().
+ *
+ * To be followed with qemuDomainObjExitMonitorWithDriver() once complete
+ */
+static void qemuDomainObjEnterMonitorWithDriver(struct qemud_driver *driver,
+                                                virDomainObjPtr obj)
+{
+ qemuDomainObjPrivatePtr priv = obj->privateData;
+
+ qemuMonitorLock(priv->mon);
+ virDomainObjUnlock(obj);
+ qemuDriverUnlock(driver);
+}
+
+
+/* obj must NOT be locked before calling, qemud_driver must be unlocked,
+ * and will be locked after returning
+ *
+ * Should be paired with an earlier qemuDomainObjEnterMonitorWithDriver() call
+ */
+static void qemuDomainObjExitMonitorWithDriver(struct qemud_driver *driver,
+                                               virDomainObjPtr obj)
+{
+ qemuDomainObjPrivatePtr priv = obj->privateData;
+
+ qemuMonitorUnlock(priv->mon);
+ qemuDriverLock(driver);
+ virDomainObjLock(obj);
}
@@ -2677,11 +2809,15 @@ static virDomainPtr qemudDomainCreate(virConnectPtr conn, const char *xml,
def = NULL;
+ if (qemuDomainObjBeginJobWithDriver(driver, vm) < 0)
+ goto cleanup; /* XXXX free the 'vm' we created ? */
+
if (qemudStartVMDaemon(conn, driver, vm, NULL, -1) < 0) {
+ qemuDomainObjEndJob(vm);
virDomainRemoveInactive(&driver->domains,
vm);
vm = NULL;
- goto cleanup;
+ goto endjob;
}
event = virDomainEventNewFromObj(vm,
@@ -2691,6 +2827,10 @@ static virDomainPtr qemudDomainCreate(virConnectPtr conn, const char *xml,
dom = virGetDomain(conn, vm->def->name, vm->def->uuid);
if (dom) dom->id = vm->def->id;
+endjob:
+ if (vm)
+ qemuDomainObjEndJob(vm);
+
cleanup:
virDomainDefFree(def);
if (vm)
@@ -2718,28 +2858,34 @@ static int qemudDomainSuspend(virDomainPtr dom) {
_("no domain with matching uuid '%s'"),
uuidstr);
goto cleanup;
}
+ if (qemuDomainObjBeginJobWithDriver(driver, vm) < 0)
+ goto cleanup;
+
if (!virDomainObjIsActive(vm)) {
qemudReportError(dom->conn, dom, NULL, VIR_ERR_OPERATION_INVALID,
"%s", _("domain is not running"));
- goto cleanup;
+ goto endjob;
}
if (vm->state != VIR_DOMAIN_PAUSED) {
qemuDomainObjPrivatePtr priv = vm->privateData;
- qemuDomainObjEnterMonitor(vm);
+ qemuDomainObjEnterMonitorWithDriver(driver, vm);
if (qemuMonitorStopCPUs(priv->mon) < 0) {
- qemuDomainObjExitMonitor(vm);
- goto cleanup;
+ qemuDomainObjExitMonitorWithDriver(driver, vm);
+ goto endjob;
}
- qemuDomainObjExitMonitor(vm);
+ qemuDomainObjExitMonitorWithDriver(driver, vm);
vm->state = VIR_DOMAIN_PAUSED;
event = virDomainEventNewFromObj(vm,
VIR_DOMAIN_EVENT_SUSPENDED,
VIR_DOMAIN_EVENT_SUSPENDED_PAUSED);
}
if (virDomainSaveStatus(dom->conn, driver->stateDir, vm) < 0)
- goto cleanup;
+ goto endjob;
ret = 0;
+endjob:
+ qemuDomainObjEndJob(vm);
+
cleanup:
if (vm)
virDomainObjUnlock(vm);
@@ -2767,31 +2913,38 @@ static int qemudDomainResume(virDomainPtr dom) {
_("no domain with matching uuid '%s'"),
uuidstr);
goto cleanup;
}
+
+ if (qemuDomainObjBeginJobWithDriver(driver, vm) < 0)
+ goto cleanup;
+
if (!virDomainObjIsActive(vm)) {
qemudReportError(dom->conn, dom, NULL, VIR_ERR_OPERATION_INVALID,
"%s", _("domain is not running"));
- goto cleanup;
+ goto endjob;
}
if (vm->state == VIR_DOMAIN_PAUSED) {
qemuDomainObjPrivatePtr priv = vm->privateData;
- qemuDomainObjEnterMonitor(vm);
+ qemuDomainObjEnterMonitorWithDriver(driver, vm);
if (qemuMonitorStartCPUs(priv->mon, dom->conn) < 0) {
- qemuDomainObjExitMonitor(vm);
+ qemuDomainObjExitMonitorWithDriver(driver, vm);
if (virGetLastError() == NULL)
qemudReportError(dom->conn, dom, NULL, VIR_ERR_OPERATION_FAILED,
"%s", _("resume operation failed"));
- goto cleanup;
+ goto endjob;
}
- qemuDomainObjExitMonitor(vm);
+ qemuDomainObjExitMonitorWithDriver(driver, vm);
vm->state = VIR_DOMAIN_RUNNING;
event = virDomainEventNewFromObj(vm,
VIR_DOMAIN_EVENT_RESUMED,
VIR_DOMAIN_EVENT_RESUMED_UNPAUSED);
}
if (virDomainSaveStatus(dom->conn, driver->stateDir, vm) < 0)
- goto cleanup;
+ goto endjob;
ret = 0;
+endjob:
+ qemuDomainObjEndJob(vm);
+
cleanup:
if (vm)
virDomainObjUnlock(vm);
@@ -2819,10 +2972,13 @@ static int qemudDomainShutdown(virDomainPtr dom) {
goto cleanup;
}
+ if (qemuDomainObjBeginJob(vm) < 0)
+ goto cleanup;
+
if (!virDomainObjIsActive(vm)) {
qemudReportError(dom->conn, dom, NULL, VIR_ERR_OPERATION_INVALID,
"%s", _("domain is not running"));
- goto cleanup;
+ goto endjob;
}
qemuDomainObjPrivatePtr priv = vm->privateData;
@@ -2830,6 +2986,9 @@ static int qemudDomainShutdown(virDomainPtr dom) {
ret = qemuMonitorSystemPowerdown(priv->mon);
qemuDomainObjExitMonitor(vm);
+endjob:
+ qemuDomainObjEndJob(vm);
+
cleanup:
if (vm)
virDomainObjUnlock(vm);
@@ -2852,10 +3011,14 @@ static int qemudDomainDestroy(virDomainPtr dom) {
_("no domain with matching uuid '%s'"),
uuidstr);
goto cleanup;
}
+
+ if (qemuDomainObjBeginJobWithDriver(driver, vm) < 0)
+ goto cleanup;
+
if (!virDomainObjIsActive(vm)) {
qemudReportError(dom->conn, dom, NULL, VIR_ERR_OPERATION_INVALID,
"%s", _("domain is not running"));
- goto cleanup;
+ goto endjob;
}
qemudShutdownVMDaemon(dom->conn, driver, vm);
@@ -2863,12 +3026,17 @@ static int qemudDomainDestroy(virDomainPtr dom) {
VIR_DOMAIN_EVENT_STOPPED,
VIR_DOMAIN_EVENT_STOPPED_DESTROYED);
if (!vm->persistent) {
+ qemuDomainObjEndJob(vm);
virDomainRemoveInactive(&driver->domains,
vm);
vm = NULL;
}
ret = 0;
+endjob:
+ if (vm)
+ qemuDomainObjEndJob(vm);
+
cleanup:
if (vm)
virDomainObjUnlock(vm);
@@ -2985,25 +3153,31 @@ static int qemudDomainSetMemory(virDomainPtr dom, unsigned long newmem) {
goto cleanup;
}
+ if (qemuDomainObjBeginJob(vm) < 0)
+ goto cleanup;
+
if (virDomainObjIsActive(vm)) {
qemuDomainObjPrivatePtr priv = vm->privateData;
qemuDomainObjEnterMonitor(vm);
int r = qemuMonitorSetBalloon(priv->mon, newmem);
qemuDomainObjExitMonitor(vm);
if (r < 0)
- goto cleanup;
+ goto endjob;
/* Lack of balloon support is a fatal error */
if (r == 0) {
qemudReportError(dom->conn, dom, NULL, VIR_ERR_NO_SUPPORT,
"%s", _("cannot set memory of an active domain"));
- goto cleanup;
+ goto endjob;
}
} else {
vm->def->memory = newmem;
}
ret = 0;
+endjob:
+ qemuDomainObjEndJob(vm);
+
cleanup:
if (vm)
virDomainObjUnlock(vm);
@@ -3044,17 +3218,28 @@ static int qemudDomainGetInfo(virDomainPtr dom,
if (virDomainObjIsActive(vm)) {
qemuDomainObjPrivatePtr priv = vm->privateData;
- qemuDomainObjEnterMonitor(vm);
- err = qemuMonitorGetBalloonInfo(priv->mon, &balloon);
- qemuDomainObjExitMonitor(vm);
- if (err < 0)
- goto cleanup;
+ if (!priv->jobActive) {
+ if (qemuDomainObjBeginJob(vm) < 0)
+ goto cleanup;
+
+ qemuDomainObjEnterMonitor(vm);
+ err = qemuMonitorGetBalloonInfo(priv->mon, &balloon);
+ qemuDomainObjExitMonitor(vm);
+ if (err < 0) {
+ qemuDomainObjEndJob(vm);
+ goto cleanup;
+ }
+
+ if (err == 0)
+ /* Balloon not supported, so maxmem is always the allocation */
+ info->memory = vm->def->maxmem;
+ else
+ info->memory = balloon;
- if (err == 0)
- /* Balloon not supported, so maxmem is always the allocation */
- info->memory = vm->def->maxmem;
- else
- info->memory = balloon;
+ qemuDomainObjEndJob(vm);
+ } else {
+ info->memory = vm->def->memory;
+ }
} else {
info->memory = vm->def->memory;
}
@@ -3145,22 +3330,25 @@ static int qemudDomainSave(virDomainPtr dom,
goto cleanup;
}
+ if (qemuDomainObjBeginJobWithDriver(driver, vm) < 0)
+ goto cleanup;
+
if (!virDomainObjIsActive(vm)) {
qemudReportError(dom->conn, dom, NULL, VIR_ERR_OPERATION_INVALID,
"%s", _("domain is not running"));
- goto cleanup;
+ goto endjob;
}
/* Pause */
if (vm->state == VIR_DOMAIN_RUNNING) {
qemuDomainObjPrivatePtr priv = vm->privateData;
header.was_running = 1;
- qemuDomainObjEnterMonitor(vm);
+ qemuDomainObjEnterMonitorWithDriver(driver, vm);
if (qemuMonitorStopCPUs(priv->mon) < 0) {
- qemuDomainObjExitMonitor(vm);
- goto cleanup;
+ qemuDomainObjExitMonitorWithDriver(driver, vm);
+ goto endjob;
}
- qemuDomainObjExitMonitor(vm);
+ qemuDomainObjExitMonitorWithDriver(driver, vm);
vm->state = VIR_DOMAIN_PAUSED;
}
@@ -3169,7 +3357,7 @@ static int qemudDomainSave(virDomainPtr dom,
if (!xml) {
qemudReportError(dom->conn, dom, NULL, VIR_ERR_OPERATION_FAILED,
"%s", _("failed to get domain xml"));
- goto cleanup;
+ goto endjob;
}
header.xml_len = strlen(xml) + 1;
@@ -3177,26 +3365,26 @@ static int qemudDomainSave(virDomainPtr dom,
if ((fd = open(path, O_CREAT|O_TRUNC|O_WRONLY, S_IRUSR|S_IWUSR)) < 0) {
qemudReportError(dom->conn, dom, NULL, VIR_ERR_OPERATION_FAILED,
_("failed to create '%s'"), path);
- goto cleanup;
+ goto endjob;
}
if (safewrite(fd, &header, sizeof(header)) != sizeof(header)) {
qemudReportError(dom->conn, dom, NULL, VIR_ERR_OPERATION_FAILED,
"%s", _("failed to write save header"));
- goto cleanup;
+ goto endjob;
}
if (safewrite(fd, xml, header.xml_len) != header.xml_len) {
qemudReportError(dom->conn, dom, NULL, VIR_ERR_OPERATION_FAILED,
"%s", _("failed to write xml"));
- goto cleanup;
+ goto endjob;
}
if (close(fd) < 0) {
virReportSystemError(dom->conn, errno,
_("unable to save file %s"),
path);
- goto cleanup;
+ goto endjob;
}
fd = -1;
@@ -3220,7 +3408,7 @@ static int qemudDomainSave(virDomainPtr dom,
}
if (ret < 0)
- goto cleanup;
+ goto endjob;
/* Shut it down */
qemudShutdownVMDaemon(dom->conn, driver, vm);
@@ -3228,11 +3416,16 @@ static int qemudDomainSave(virDomainPtr dom,
VIR_DOMAIN_EVENT_STOPPED,
VIR_DOMAIN_EVENT_STOPPED_SAVED);
if (!vm->persistent) {
+ qemuDomainObjEndJob(vm);
virDomainRemoveInactive(&driver->domains,
vm);
vm = NULL;
}
+endjob:
+ if (vm)
+ qemuDomainObjEndJob(vm);
+
cleanup:
if (fd != -1)
close(fd);
@@ -3272,10 +3465,13 @@ static int qemudDomainCoreDump(virDomainPtr dom,
goto cleanup;
}
+ if (qemuDomainObjBeginJob(vm) < 0)
+ goto cleanup;
+
if (!virDomainObjIsActive(vm)) {
qemudReportError(dom->conn, dom, NULL, VIR_ERR_OPERATION_INVALID,
"%s", _("domain is not running"));
- goto cleanup;
+ goto endjob;
}
/* Migrate will always stop the VM, so once we support live dumping
@@ -3290,7 +3486,7 @@ static int qemudDomainCoreDump(virDomainPtr dom,
qemuDomainObjEnterMonitor(vm);
if (qemuMonitorStopCPUs(priv->mon) < 0) {
qemuDomainObjExitMonitor(vm);
- goto cleanup;
+ goto endjob;
}
qemuDomainObjExitMonitor(vm);
paused = 1;
@@ -3300,7 +3496,6 @@ static int qemudDomainCoreDump(virDomainPtr dom,
ret = qemuMonitorMigrateToCommand(priv->mon, 0, args, path);
qemuDomainObjExitMonitor(vm);
paused = 1;
-cleanup:
/* Since the monitor is always attached to a pty for libvirt, it
will support synchronous operations so we always get here after
@@ -3314,6 +3509,11 @@ cleanup:
}
qemuDomainObjExitMonitor(vm);
}
+
+endjob:
+ qemuDomainObjEndJob(vm);
+
+cleanup:
if (vm)
virDomainObjUnlock(vm);
return ret;
@@ -3339,35 +3539,41 @@ static int qemudDomainSetVcpus(virDomainPtr dom, unsigned int nvcpus) {
goto cleanup;
}
+ if (qemuDomainObjBeginJob(vm) < 0)
+ goto cleanup;
+
if (virDomainObjIsActive(vm)) {
qemudReportError(dom->conn, dom, NULL, VIR_ERR_OPERATION_INVALID,
"%s",
_("cannot change vcpu count of an active domain"));
- goto cleanup;
+ goto endjob;
}
if (!(type = virDomainVirtTypeToString(vm->def->virtType))) {
qemudReportError(dom->conn, dom, NULL, VIR_ERR_INTERNAL_ERROR,
_("unknown virt type in domain definition '%d'"),
vm->def->virtType);
- goto cleanup;
+ goto endjob;
}
if ((max = qemudGetMaxVCPUs(dom->conn, type)) < 0) {
qemudReportError(dom->conn, dom, NULL, VIR_ERR_INTERNAL_ERROR,
"%s",
_("could not determine max vcpus for the domain"));
- goto cleanup;
+ goto endjob;
}
if (nvcpus > max) {
qemudReportError(dom->conn, dom, NULL, VIR_ERR_INVALID_ARG,
_("requested vcpus is greater than max allowable"
" vcpus for the domain: %d > %d"), nvcpus, max);
- goto cleanup;
+ goto endjob;
}
vm->def->vcpus = nvcpus;
ret = 0;
+endjob:
+ qemuDomainObjEndJob(vm);
+
cleanup:
if (vm)
virDomainObjUnlock(vm);
@@ -3776,6 +3982,9 @@ static int qemudDomainRestore(virConnectPtr conn,
}
def = NULL;
+ if (qemuDomainObjBeginJobWithDriver(driver, vm) < 0)
+ goto cleanup;
+
if (header.version == 2) {
const char *intermediate_argv[3] = { NULL, "-dc", NULL };
const char *prog = qemudSaveCompressionTypeToString(header.compressed);
@@ -3783,7 +3992,7 @@ static int qemudDomainRestore(virConnectPtr conn,
qemudReportError(conn, NULL, NULL, VIR_ERR_OPERATION_FAILED,
_("Invalid compressed save format %d"),
header.compressed);
- goto cleanup;
+ goto endjob;
}
if (header.compressed != QEMUD_SAVE_FORMAT_RAW) {
@@ -3795,7 +4004,7 @@ static int qemudDomainRestore(virConnectPtr conn,
qemudReportError(conn, NULL, NULL, VIR_ERR_INTERNAL_ERROR,
_("Failed to start decompression binary %s"),
intermediate_argv[0]);
- goto cleanup;
+ goto endjob;
}
}
}
@@ -3812,11 +4021,12 @@ static int qemudDomainRestore(virConnectPtr conn,
fd = -1;
if (ret < 0) {
if (!vm->persistent) {
+ qemuDomainObjEndJob(vm);
virDomainRemoveInactive(&driver->domains,
vm);
vm = NULL;
}
- goto cleanup;
+ goto endjob;
}
event = virDomainEventNewFromObj(vm,
@@ -3826,20 +4036,24 @@ static int qemudDomainRestore(virConnectPtr conn,
/* If it was running before, resume it now. */
if (header.was_running) {
qemuDomainObjPrivatePtr priv = vm->privateData;
- qemuDomainObjEnterMonitor(vm);
+ qemuDomainObjEnterMonitorWithDriver(driver, vm);
if (qemuMonitorStartCPUs(priv->mon, conn) < 0) {
if (virGetLastError() == NULL)
qemudReportError(conn, NULL, NULL, VIR_ERR_OPERATION_FAILED,
"%s", _("failed to resume domain"));
- qemuDomainObjExitMonitor(vm);
- goto cleanup;
+ qemuDomainObjExitMonitorWithDriver(driver, vm);
+ goto endjob;
}
- qemuDomainObjExitMonitor(vm);
+ qemuDomainObjExitMonitorWithDriver(driver, vm);
vm->state = VIR_DOMAIN_RUNNING;
virDomainSaveStatus(conn, driver->stateDir, vm);
}
ret = 0;
+endjob:
+ if (vm)
+ qemuDomainObjEndJob(vm);
+
cleanup:
virDomainDefFree(def);
VIR_FREE(xml);
@@ -3877,14 +4091,22 @@ static char *qemudDomainDumpXML(virDomainPtr dom,
/* Refresh current memory based on balloon info */
if (virDomainObjIsActive(vm)) {
qemuDomainObjPrivatePtr priv = vm->privateData;
- qemuDomainObjEnterMonitor(vm);
- err = qemuMonitorGetBalloonInfo(priv->mon, &balloon);
- qemuDomainObjExitMonitor(vm);
- if (err < 0)
- goto cleanup;
- if (err > 0)
- vm->def->memory = balloon;
- /* err == 0 indicates no balloon support, so ignore it */
+ /* Don't delay if someone's using the monitor; just use
+ * the most recent existing data instead */
+ if (!priv->jobActive) {
+ if (qemuDomainObjBeginJob(vm) < 0)
+ goto cleanup;
+
+ qemuDomainObjEnterMonitor(vm);
+ err = qemuMonitorGetBalloonInfo(priv->mon, &balloon);
+ qemuDomainObjExitMonitor(vm);
+ qemuDomainObjEndJob(vm);
+ if (err < 0)
+ goto cleanup;
+ if (err > 0)
+ vm->def->memory = balloon;
+ /* err == 0 indicates no balloon support, so ignore it */
+ }
}
ret = virDomainDefFormat(dom->conn,
@@ -4095,12 +4317,24 @@ static int qemudDomainStart(virDomainPtr dom) {
goto cleanup;
}
+ if (qemuDomainObjBeginJobWithDriver(driver, vm) < 0)
+ goto cleanup;
+
+ if (virDomainObjIsActive(vm)) {
+ qemudReportError(dom->conn, dom, NULL, VIR_ERR_OPERATION_INVALID,
+ "%s", _("domain is already running"));
+ goto endjob;
+ }
+
ret = qemudStartVMDaemon(dom->conn, driver, vm, NULL, -1);
if (ret != -1)
event = virDomainEventNewFromObj(vm,
VIR_DOMAIN_EVENT_STARTED,
VIR_DOMAIN_EVENT_STARTED_BOOTED);
+endjob:
+ qemuDomainObjEndJob(vm);
+
cleanup:
if (vm)
virDomainObjUnlock(vm);
@@ -4397,6 +4631,7 @@ static char *qemudDiskDeviceName(const virConnectPtr conn,
}
static int qemudDomainChangeEjectableMedia(virConnectPtr conn,
+ struct qemud_driver *driver,
virDomainObjPtr vm,
virDomainDeviceDefPtr dev,
unsigned int qemuCmdFlags)
@@ -4450,13 +4685,13 @@ static int qemudDomainChangeEjectableMedia(virConnectPtr conn,
}
qemuDomainObjPrivatePtr priv = vm->privateData;
- qemuDomainObjEnterMonitor(vm);
+ qemuDomainObjEnterMonitorWithDriver(driver, vm);
if (newdisk->src) {
ret = qemuMonitorChangeMedia(priv->mon, devname, newdisk->src);
} else {
ret = qemuMonitorEjectMedia(priv->mon, devname);
}
- qemuDomainObjExitMonitor(vm);
+ qemuDomainObjExitMonitorWithDriver(driver, vm);
if (ret == 0) {
VIR_FREE(origdisk->src);
@@ -4470,6 +4705,7 @@ static int qemudDomainChangeEjectableMedia(virConnectPtr conn,
static int qemudDomainAttachPciDiskDevice(virConnectPtr conn,
+ struct qemud_driver *driver,
virDomainObjPtr vm,
virDomainDeviceDefPtr dev)
{
@@ -4490,14 +4726,14 @@ static int qemudDomainAttachPciDiskDevice(virConnectPtr conn,
return -1;
}
- qemuDomainObjEnterMonitor(vm);
+ qemuDomainObjEnterMonitorWithDriver(driver, vm);
ret = qemuMonitorAddPCIDisk(priv->mon,
dev->data.disk->src,
type,
&dev->data.disk->pci_addr.domain,
&dev->data.disk->pci_addr.bus,
&dev->data.disk->pci_addr.slot);
- qemuDomainObjExitMonitor(vm);
+ qemuDomainObjExitMonitorWithDriver(driver, vm);
if (ret == 0)
virDomainDiskInsertPreAlloced(vm->def, dev->data.disk);
@@ -4506,6 +4742,7 @@ static int qemudDomainAttachPciDiskDevice(virConnectPtr conn,
}
static int qemudDomainAttachUsbMassstorageDevice(virConnectPtr conn,
+ struct qemud_driver *driver,
virDomainObjPtr vm,
virDomainDeviceDefPtr dev)
{
@@ -4531,9 +4768,9 @@ static int qemudDomainAttachUsbMassstorageDevice(virConnectPtr conn,
return -1;
}
- qemuDomainObjEnterMonitor(vm);
+ qemuDomainObjEnterMonitorWithDriver(driver, vm);
ret = qemuMonitorAddUSBDisk(priv->mon, dev->data.disk->src);
- qemuDomainObjExitMonitor(vm);
+ qemuDomainObjExitMonitorWithDriver(driver, vm);
if (ret == 0)
virDomainDiskInsertPreAlloced(vm->def, dev->data.disk);
@@ -4594,24 +4831,24 @@ static int qemudDomainAttachNetDevice(virConnectPtr conn,
if (virAsprintf(&tapfd_name, "fd-%s", net->hostnet_name) < 0)
goto no_memory;
- qemuDomainObjEnterMonitor(vm);
+ qemuDomainObjEnterMonitorWithDriver(driver, vm);
if (qemuMonitorSendFileHandle(priv->mon, tapfd_name, tapfd) < 0) {
- qemuDomainObjExitMonitor(vm);
+ qemuDomainObjExitMonitorWithDriver(driver, vm);
goto cleanup;
}
- qemuDomainObjExitMonitor(vm);
+ qemuDomainObjExitMonitorWithDriver(driver, vm);
}
if (qemuBuildHostNetStr(conn, net, ' ',
net->vlan, tapfd_name, &netstr) < 0)
goto try_tapfd_close;
- qemuDomainObjEnterMonitor(vm);
+ qemuDomainObjEnterMonitorWithDriver(driver, vm);
if (qemuMonitorAddHostNetwork(priv->mon, netstr) < 0) {
- qemuDomainObjExitMonitor(vm);
+ qemuDomainObjExitMonitorWithDriver(driver, vm);
goto try_tapfd_close;
}
- qemuDomainObjExitMonitor(vm);
+ qemuDomainObjExitMonitorWithDriver(driver, vm);
if (tapfd != -1)
close(tapfd);
@@ -4620,15 +4857,15 @@ static int qemudDomainAttachNetDevice(virConnectPtr conn,
if (qemuBuildNicStr(conn, net, NULL, net->vlan, &nicstr) < 0)
goto try_remove;
- qemuDomainObjEnterMonitor(vm);
+ qemuDomainObjEnterMonitorWithDriver(driver, vm);
if (qemuMonitorAddPCINetwork(priv->mon, nicstr,
&net->pci_addr.domain,
&net->pci_addr.bus,
&net->pci_addr.slot) < 0) {
- qemuDomainObjExitMonitor(vm);
+ qemuDomainObjExitMonitorWithDriver(driver, vm);
goto try_remove;
}
- qemuDomainObjExitMonitor(vm);
+ qemuDomainObjExitMonitorWithDriver(driver, vm);
ret = 0;
@@ -4647,20 +4884,20 @@ try_remove:
if (!net->hostnet_name || net->vlan == 0)
VIR_WARN0(_("Unable to remove network backend\n"));
else {
- qemuDomainObjEnterMonitor(vm);
+ qemuDomainObjEnterMonitorWithDriver(driver, vm);
if (qemuMonitorRemoveHostNetwork(priv->mon, net->vlan,
net->hostnet_name) < 0)
VIR_WARN(_("Failed to remove network backend for vlan %d, net %s"),
net->vlan, net->hostnet_name);
- qemuDomainObjExitMonitor(vm);
+ qemuDomainObjExitMonitorWithDriver(driver, vm);
}
goto cleanup;
try_tapfd_close:
if (tapfd_name) {
- qemuDomainObjEnterMonitor(vm);
+ qemuDomainObjEnterMonitorWithDriver(driver, vm);
if (qemuMonitorCloseFileHandle(priv->mon, tapfd_name) < 0)
VIR_WARN(_("Failed to close tapfd with '%s'\n"),
tapfd_name);
- qemuDomainObjExitMonitor(vm);
+ qemuDomainObjExitMonitorWithDriver(driver, vm);
}
goto cleanup;
@@ -4704,7 +4941,7 @@ static int qemudDomainAttachHostPciDevice(virConnectPtr conn,
return -1;
}
- qemuDomainObjEnterMonitor(vm);
+ qemuDomainObjEnterMonitorWithDriver(driver, vm);
ret = qemuMonitorAddPCIHostDevice(priv->mon,
hostdev->source.subsys.u.pci.domain,
hostdev->source.subsys.u.pci.bus,
@@ -4713,7 +4950,7 @@ static int qemudDomainAttachHostPciDevice(virConnectPtr conn,
&hostdev->source.subsys.u.pci.guest_addr.domain,
&hostdev->source.subsys.u.pci.guest_addr.bus,
&hostdev->source.subsys.u.pci.guest_addr.slot);
- qemuDomainObjExitMonitor(vm);
+ qemuDomainObjExitMonitorWithDriver(driver, vm);
if (ret < 0)
goto error;
@@ -4728,6 +4965,7 @@ error:
}
static int qemudDomainAttachHostUsbDevice(virConnectPtr conn,
+ struct qemud_driver *driver,
virDomainObjPtr vm,
virDomainDeviceDefPtr dev)
{
@@ -4739,7 +4977,7 @@ static int qemudDomainAttachHostUsbDevice(virConnectPtr conn,
return -1;
}
- qemuDomainObjEnterMonitor(vm);
+ qemuDomainObjEnterMonitorWithDriver(driver, vm);
if (dev->data.hostdev->source.subsys.u.usb.vendor) {
ret = qemuMonitorAddUSBDeviceMatch(priv->mon,
dev->data.hostdev->source.subsys.u.usb.vendor,
@@ -4749,7 +4987,7 @@ static int qemudDomainAttachHostUsbDevice(virConnectPtr conn,
dev->data.hostdev->source.subsys.u.usb.bus,
dev->data.hostdev->source.subsys.u.usb.device);
}
- qemuDomainObjExitMonitor(vm);
+ qemuDomainObjExitMonitorWithDriver(driver, vm);
if (ret != -1)
vm->def->hostdevs[vm->def->nhostdevs++] = dev->data.hostdev;
@@ -4781,7 +5019,7 @@ static int qemudDomainAttachHostDevice(virConnectPtr conn,
case VIR_DOMAIN_HOSTDEV_SUBSYS_TYPE_PCI:
return qemudDomainAttachHostPciDevice(conn, driver, vm, dev);
case VIR_DOMAIN_HOSTDEV_SUBSYS_TYPE_USB:
- return qemudDomainAttachHostUsbDevice(conn, vm, dev);
+ return qemudDomainAttachHostUsbDevice(conn, driver, vm, dev);
default:
qemudReportError(conn, dom, NULL, VIR_ERR_NO_SUPPORT,
_("hostdev subsys type '%s' not supported"),
@@ -4809,21 +5047,24 @@ static int qemudDomainAttachDevice(virDomainPtr dom,
goto cleanup;
}
+ if (qemuDomainObjBeginJobWithDriver(driver, vm) < 0)
+ goto cleanup;
+
if (!virDomainObjIsActive(vm)) {
qemudReportError(dom->conn, dom, NULL, VIR_ERR_OPERATION_INVALID,
"%s", _("cannot attach device on inactive domain"));
- goto cleanup;
+ goto endjob;
}
dev = virDomainDeviceDefParse(dom->conn, driver->caps, vm->def, xml,
VIR_DOMAIN_XML_INACTIVE);
if (dev == NULL)
- goto cleanup;
+ goto endjob;
if (qemudExtractVersionInfo(vm->def->emulator,
NULL,
&qemuCmdFlags) < 0)
- goto cleanup;
+ goto endjob;
if (dev->type == VIR_DOMAIN_DEVICE_DISK) {
if (qemuCgroupControllerActive(driver, VIR_CGROUP_CONTROLLER_DEVICES)) {
@@ -4831,7 +5072,7 @@ static int qemudDomainAttachDevice(virDomainPtr dom,
qemudReportError(dom->conn, NULL, NULL, VIR_ERR_INTERNAL_ERROR,
_("Unable to find cgroup for %s\n"),
vm->def->name);
- goto cleanup;
+ goto endjob;
}
if (dev->data.disk->src != NULL &&
dev->data.disk->type == VIR_DOMAIN_DISK_TYPE_BLOCK &&
@@ -4840,7 +5081,7 @@ static int qemudDomainAttachDevice(virDomainPtr dom,
qemudReportError(dom->conn, NULL, NULL, VIR_ERR_INTERNAL_ERROR,
_("unable to allow device %s"),
dev->data.disk->src);
- goto cleanup;
+ goto endjob;
}
}
@@ -4851,9 +5092,9 @@ static int qemudDomainAttachDevice(virDomainPtr dom,
driver->securityDriver->domainSetSecurityImageLabel(dom->conn,
vm, dev->data.disk);
if (qemuDomainSetDeviceOwnership(dom->conn, driver, dev, 0) < 0)
- goto cleanup;
+ goto endjob;
- ret = qemudDomainChangeEjectableMedia(dom->conn, vm, dev, qemuCmdFlags);
+ ret = qemudDomainChangeEjectableMedia(dom->conn, driver, vm, dev, qemuCmdFlags);
break;
case VIR_DOMAIN_DISK_DEVICE_DISK:
@@ -4861,13 +5102,13 @@ static int qemudDomainAttachDevice(virDomainPtr dom,
driver->securityDriver->domainSetSecurityImageLabel(dom->conn,
vm, dev->data.disk);
if (qemuDomainSetDeviceOwnership(dom->conn, driver, dev, 0) < 0)
- goto cleanup;
+ goto endjob;
if (dev->data.disk->bus == VIR_DOMAIN_DISK_BUS_USB) {
- ret = qemudDomainAttachUsbMassstorageDevice(dom->conn, vm, dev);
+ ret = qemudDomainAttachUsbMassstorageDevice(dom->conn, driver, vm, dev);
} else if (dev->data.disk->bus == VIR_DOMAIN_DISK_BUS_SCSI ||
dev->data.disk->bus == VIR_DOMAIN_DISK_BUS_VIRTIO) {
- ret = qemudDomainAttachPciDiskDevice(dom->conn, vm, dev);
+ ret = qemudDomainAttachPciDiskDevice(dom->conn, driver, vm, dev);
} else {
qemudReportError(dom->conn, dom, NULL, VIR_ERR_NO_SUPPORT,
_("disk bus '%s' cannot be hotplugged."),
@@ -4894,12 +5135,15 @@ static int qemudDomainAttachDevice(virDomainPtr dom,
qemudReportError(dom->conn, dom, NULL, VIR_ERR_NO_SUPPORT,
_("device type '%s' cannot be attached"),
virDomainDeviceTypeToString(dev->type));
- goto cleanup;
+ goto endjob;
}
if (!ret && virDomainSaveStatus(dom->conn, driver->stateDir, vm) < 0)
ret = -1;
+endjob:
+ qemuDomainObjEndJob(vm);
+
cleanup:
if (cgroup)
virCgroupFree(&cgroup);
@@ -4916,7 +5160,9 @@ cleanup:
}
static int qemudDomainDetachPciDiskDevice(virConnectPtr conn,
- virDomainObjPtr vm, virDomainDeviceDefPtr dev)
+ struct qemud_driver *driver,
+ virDomainObjPtr vm,
+ virDomainDeviceDefPtr dev)
{
int i, ret = -1;
virDomainDiskDefPtr detach = NULL;
@@ -4942,7 +5188,7 @@ static int qemudDomainDetachPciDiskDevice(virConnectPtr conn,
goto cleanup;
}
- qemuDomainObjEnterMonitor(vm);
+ qemuDomainObjEnterMonitorWithDriver(driver, vm);
if (qemuMonitorRemovePCIDevice(priv->mon,
detach->pci_addr.domain,
detach->pci_addr.bus,
@@ -4950,7 +5196,7 @@ static int qemudDomainDetachPciDiskDevice(virConnectPtr conn,
qemuDomainObjExitMonitor(vm);
goto cleanup;
}
- qemuDomainObjExitMonitor(vm);
+ qemuDomainObjExitMonitorWithDriver(driver, vm);
if (vm->def->ndisks > 1) {
memmove(vm->def->disks + i,
@@ -4975,6 +5221,7 @@ cleanup:
static int
qemudDomainDetachNetDevice(virConnectPtr conn,
+ struct qemud_driver *driver,
virDomainObjPtr vm,
virDomainDeviceDefPtr dev)
{
@@ -5006,20 +5253,20 @@ qemudDomainDetachNetDevice(virConnectPtr conn,
goto cleanup;
}
- qemuDomainObjEnterMonitor(vm);
+ qemuDomainObjEnterMonitorWithDriver(driver, vm);
if (qemuMonitorRemovePCIDevice(priv->mon,
detach->pci_addr.domain,
detach->pci_addr.bus,
detach->pci_addr.slot) < 0) {
- qemuDomainObjExitMonitor(vm);
+ qemuDomainObjExitMonitorWithDriver(driver, vm);
goto cleanup;
}
if (qemuMonitorRemoveHostNetwork(priv->mon, detach->vlan,
detach->hostnet_name) < 0) {
- qemuDomainObjExitMonitor(vm);
+ qemuDomainObjExitMonitorWithDriver(driver, vm);
goto cleanup;
}
- qemuDomainObjExitMonitor(vm);
+ qemuDomainObjExitMonitorWithDriver(driver, vm);
if (vm->def->nnets > 1) {
memmove(vm->def->nets + i,
@@ -5083,15 +5330,15 @@ static int qemudDomainDetachHostPciDevice(virConnectPtr conn,
return -1;
}
- qemuDomainObjEnterMonitor(vm);
+ qemuDomainObjEnterMonitorWithDriver(driver, vm);
if (qemuMonitorRemovePCIDevice(priv->mon,
detach->source.subsys.u.pci.guest_addr.domain,
detach->source.subsys.u.pci.guest_addr.bus,
detach->source.subsys.u.pci.guest_addr.slot) < 0) {
- qemuDomainObjExitMonitor(vm);
+ qemuDomainObjExitMonitorWithDriver(driver, vm);
return -1;
}
- qemuDomainObjExitMonitor(vm);
+ qemuDomainObjExitMonitorWithDriver(driver, vm);
ret = 0;
@@ -5182,29 +5429,32 @@ static int qemudDomainDetachDevice(virDomainPtr dom,
goto cleanup;
}
+ if (qemuDomainObjBeginJobWithDriver(driver, vm) < 0)
+ goto cleanup;
+
if (!virDomainObjIsActive(vm)) {
qemudReportError(dom->conn, dom, NULL, VIR_ERR_OPERATION_INVALID,
"%s", _("cannot detach device on inactive domain"));
- goto cleanup;
+ goto endjob;
}
dev = virDomainDeviceDefParse(dom->conn, driver->caps, vm->def, xml,
VIR_DOMAIN_XML_INACTIVE);
if (dev == NULL)
- goto cleanup;
+ goto endjob;
if (dev->type == VIR_DOMAIN_DEVICE_DISK &&
dev->data.disk->device == VIR_DOMAIN_DISK_DEVICE_DISK &&
(dev->data.disk->bus == VIR_DOMAIN_DISK_BUS_SCSI ||
dev->data.disk->bus == VIR_DOMAIN_DISK_BUS_VIRTIO)) {
- ret = qemudDomainDetachPciDiskDevice(dom->conn, vm, dev);
+ ret = qemudDomainDetachPciDiskDevice(dom->conn, driver, vm, dev);
if (driver->securityDriver)
driver->securityDriver->domainRestoreSecurityImageLabel(dom->conn,
vm, dev->data.disk);
if (qemuDomainSetDeviceOwnership(dom->conn, driver, dev, 1) < 0)
VIR_WARN0("Fail to restore disk device ownership");
} else if (dev->type == VIR_DOMAIN_DEVICE_NET) {
- ret = qemudDomainDetachNetDevice(dom->conn, vm, dev);
+ ret = qemudDomainDetachNetDevice(dom->conn, driver, vm, dev);
} else if (dev->type == VIR_DOMAIN_DEVICE_HOSTDEV) {
ret = qemudDomainDetachHostDevice(dom->conn, driver, vm, dev);
} else
@@ -5214,6 +5464,9 @@ static int qemudDomainDetachDevice(virDomainPtr dom,
if (!ret && virDomainSaveStatus(dom->conn, driver->stateDir, vm) < 0)
ret = -1;
+endjob:
+ qemuDomainObjEndJob(vm);
+
cleanup:
virDomainDeviceDefFree(dev);
if (vm)
@@ -5497,10 +5750,14 @@ qemudDomainBlockStats (virDomainPtr dom,
_("no domain with matching uuid '%s'"),
uuidstr);
goto cleanup;
}
+
+ if (qemuDomainObjBeginJob(vm) < 0)
+ goto cleanup;
+
if (!virDomainObjIsActive (vm)) {
qemudReportError (dom->conn, dom, NULL, VIR_ERR_OPERATION_INVALID,
"%s", _("domain is not running"));
- goto cleanup;
+ goto endjob;
}
for (i = 0 ; i < vm->def->ndisks ; i++) {
@@ -5513,12 +5770,12 @@ qemudDomainBlockStats (virDomainPtr dom,
if (!disk) {
qemudReportError (dom->conn, dom, NULL, VIR_ERR_INVALID_ARG,
_("invalid path: %s"), path);
- goto cleanup;
+ goto endjob;
}
qemu_dev_name = qemudDiskDeviceName(dom->conn, disk);
if (!qemu_dev_name)
- goto cleanup;
+ goto endjob;
qemuDomainObjPrivatePtr priv = vm->privateData;
qemuDomainObjEnterMonitor(vm);
@@ -5531,6 +5788,9 @@ qemudDomainBlockStats (virDomainPtr dom,
&stats->errs);
qemuDomainObjExitMonitor(vm);
+endjob:
+ qemuDomainObjEndJob(vm);
+
cleanup:
VIR_FREE(qemu_dev_name);
if (vm)
@@ -5700,22 +5960,25 @@ qemudDomainMemoryPeek (virDomainPtr dom,
goto cleanup;
}
+ if (qemuDomainObjBeginJob(vm) < 0)
+ goto cleanup;
+
if (!virDomainObjIsActive(vm)) {
qemudReportError(dom->conn, dom, NULL, VIR_ERR_OPERATION_INVALID,
"%s", _("domain is not running"));
- goto cleanup;
+ goto endjob;
}
if (virAsprintf(&tmp, "%s/qemu.mem.XXXXXX", driver->cacheDir) < 0) {
virReportOOMError(dom->conn);
- goto cleanup;
+ goto endjob;
}
/* Create a temporary filename. */
if ((fd = mkstemp (tmp)) == -1) {
virReportSystemError (dom->conn, errno,
_("mkstemp(\"%s\") failed"), tmp);
- goto cleanup;
+ goto endjob;
}
qemuDomainObjPrivatePtr priv = vm->privateData;
@@ -5723,12 +5986,12 @@ qemudDomainMemoryPeek (virDomainPtr dom,
if (flags == VIR_MEMORY_VIRTUAL) {
if (qemuMonitorSaveVirtualMemory(priv->mon, offset, size, tmp) < 0) {
qemuDomainObjExitMonitor(vm);
- goto cleanup;
+ goto endjob;
}
} else {
if (qemuMonitorSavePhysicalMemory(priv->mon, offset, size, tmp) < 0) {
qemuDomainObjExitMonitor(vm);
- goto cleanup;
+ goto endjob;
}
}
qemuDomainObjExitMonitor(vm);
@@ -5738,11 +6001,14 @@ qemudDomainMemoryPeek (virDomainPtr dom,
virReportSystemError (dom->conn, errno,
_("failed to read temporary file "
"created with template %s"), tmp);
- goto cleanup;
+ goto endjob;
}
ret = 0;
+endjob:
+ qemuDomainObjEndJob(vm);
+
cleanup:
VIR_FREE(tmp);
if (fd >= 0) close (fd);
@@ -6191,13 +6457,16 @@ qemudDomainMigratePrepareTunnel(virConnectPtr dconn,
}
def = NULL;
+ if (qemuDomainObjBeginJobWithDriver(driver, vm) < 0)
+ goto cleanup;
+
/* Domain starts inactive, even if the domain XML had an id field. */
vm->def->id = -1;
if (virAsprintf(&unixfile, "%s/qemu.tunnelmigrate.dest.%s",
driver->stateDir, vm->def->name) < 0) {
virReportOOMError (dconn);
- goto cleanup;
+ goto endjob;
}
unlink(unixfile);
@@ -6206,7 +6475,7 @@ qemudDomainMigratePrepareTunnel(virConnectPtr dconn,
qemudReportError(dconn, NULL, NULL, VIR_ERR_INTERNAL_ERROR,
_("Cannot determine QEMU argv syntax %s"),
vm->def->emulator);
- goto cleanup;
+ goto endjob;
}
if (qemuCmdFlags & QEMUD_CMD_FLAG_MIGRATE_QEMU_UNIX)
internalret = virAsprintf(&migrateFrom, "unix:%s", unixfile);
@@ -6215,11 +6484,11 @@ qemudDomainMigratePrepareTunnel(virConnectPtr dconn,
else {
qemudReportError(dconn, NULL, NULL, VIR_ERR_OPERATION_FAILED,
"%s", _("Destination qemu is too old to support tunnelled migration"));
- goto cleanup;
+ goto endjob;
}
if (internalret < 0) {
virReportOOMError(dconn);
- goto cleanup;
+ goto endjob;
}
/* Start the QEMU daemon, with the same command-line arguments plus
* -incoming unix:/path/to/file or exec:nc -U /path/to/file
@@ -6234,20 +6503,21 @@ qemudDomainMigratePrepareTunnel(virConnectPtr dconn,
virDomainRemoveInactive(&driver->domains, vm);
vm = NULL;
}
- goto cleanup;
+ goto endjob;
}
qemust = qemuStreamMigOpen(st, unixfile);
if (qemust == NULL) {
qemudShutdownVMDaemon(NULL, driver, vm);
if (!vm->persistent) {
+ qemuDomainObjEndJob(vm);
virDomainRemoveInactive(&driver->domains, vm);
vm = NULL;
}
virReportSystemError(dconn, errno,
_("cannot open unix socket '%s' for tunnelled migration"),
unixfile);
- goto cleanup;
+ goto endjob;
}
st->driver = &qemuStreamMigDrv;
@@ -6258,6 +6528,10 @@ qemudDomainMigratePrepareTunnel(virConnectPtr dconn,
VIR_DOMAIN_EVENT_STARTED_MIGRATED);
ret = 0;
+endjob:
+ if (vm)
+ qemuDomainObjEndJob(vm);
+
cleanup:
virDomainDefFree(def);
unlink(unixfile);
@@ -6434,6 +6708,9 @@ qemudDomainMigratePrepare2 (virConnectPtr dconn,
}
def = NULL;
+ if (qemuDomainObjBeginJobWithDriver(driver, vm) < 0)
+ goto cleanup;
+
/* Domain starts inactive, even if the domain XML had an id field. */
vm->def->id = -1;
@@ -6446,10 +6723,11 @@ qemudDomainMigratePrepare2 (virConnectPtr dconn,
* should have already done that.
*/
if (!vm->persistent) {
+ qemuDomainObjEndJob(vm);
virDomainRemoveInactive(&driver->domains, vm);
vm = NULL;
}
- goto cleanup;
+ goto endjob;
}
event = virDomainEventNewFromObj(vm,
@@ -6457,6 +6735,10 @@ qemudDomainMigratePrepare2 (virConnectPtr dconn,
VIR_DOMAIN_EVENT_STARTED_MIGRATED);
ret = 0;
+endjob:
+ if (vm)
+ qemuDomainObjEndJob(vm);
+
cleanup:
virDomainDefFree(def);
if (ret != 0) {
@@ -6476,6 +6758,7 @@ cleanup:
* not encrypted obviously
*/
static int doNativeMigrate(virDomainPtr dom,
+ struct qemud_driver *driver,
virDomainObjPtr vm,
const char *uri,
unsigned long flags ATTRIBUTE_UNUSED,
@@ -6507,15 +6790,15 @@ static int doNativeMigrate(virDomainPtr dom,
goto cleanup;
}
- qemuDomainObjEnterMonitor(vm);
+ qemuDomainObjEnterMonitorWithDriver(driver, vm);
if (resource > 0 &&
qemuMonitorSetMigrationSpeed(priv->mon, resource) < 0) {
- qemuDomainObjExitMonitor(vm);
+ qemuDomainObjExitMonitorWithDriver(driver, vm);
goto cleanup;
}
if (qemuMonitorMigrateToHost(priv->mon, 0, uribits->server, uribits->port) < 0) {
- qemuDomainObjExitMonitor(vm);
+ qemuDomainObjExitMonitorWithDriver(driver, vm);
goto cleanup;
}
@@ -6527,10 +6810,10 @@ static int doNativeMigrate(virDomainPtr dom,
&transferred,
&remaining,
&total) < 0) {
- qemuDomainObjExitMonitor(vm);
+ qemuDomainObjExitMonitorWithDriver(driver, vm);
goto cleanup;
}
- qemuDomainObjExitMonitor(vm);
+ qemuDomainObjExitMonitorWithDriver(driver, vm);
if (status != QEMU_MONITOR_MIGRATION_STATUS_COMPLETED) {
qemudReportError (dom->conn, dom, NULL, VIR_ERR_OPERATION_FAILED,
@@ -6581,6 +6864,7 @@ static int doTunnelSendAll(virDomainPtr dom,
}
static int doTunnelMigrate(virDomainPtr dom,
+ struct qemud_driver *driver,
virConnectPtr dconn,
virDomainObjPtr vm,
const char *dom_xml,
@@ -6589,7 +6873,6 @@ static int doTunnelMigrate(virDomainPtr dom,
const char *dname,
unsigned long resource)
{
- struct qemud_driver *driver = dom->conn->privateData;
qemuDomainObjPrivatePtr priv = vm->privateData;
int client_sock = -1;
int qemu_sock = -1;
@@ -6687,7 +6970,7 @@ static int doTunnelMigrate(virDomainPtr dom,
goto cleanup;
/* 3. start migration on source */
- qemuDomainObjEnterMonitor(vm);
+ qemuDomainObjEnterMonitorWithDriver(driver, vm);
if (qemuCmdFlags & QEMUD_CMD_FLAG_MIGRATE_QEMU_UNIX)
internalret = qemuMonitorMigrateToUnix(priv->mon, 1, unixfile);
else if (qemuCmdFlags & QEMUD_CMD_FLAG_MIGRATE_QEMU_EXEC) {
@@ -6696,7 +6979,7 @@ static int doTunnelMigrate(virDomainPtr dom,
} else {
internalret = -1;
}
- qemuDomainObjExitMonitor(vm);
+ qemuDomainObjExitMonitorWithDriver(driver, vm);
if (internalret < 0) {
qemudReportError(dom->conn, dom, NULL, VIR_ERR_OPERATION_FAILED,
"%s", _("tunnelled migration monitor command failed"));
@@ -6709,16 +6992,16 @@ static int doTunnelMigrate(virDomainPtr dom,
/* it is also possible that the migrate didn't fail initially, but
* rather failed later on. Check the output of "info migrate"
*/
- qemuDomainObjEnterMonitor(vm);
+ qemuDomainObjEnterMonitorWithDriver(driver, vm);
if (qemuMonitorGetMigrationStatus(priv->mon,
&status,
&transferred,
&remaining,
&total) < 0) {
- qemuDomainObjExitMonitor(vm);
+ qemuDomainObjExitMonitorWithDriver(driver, vm);
goto cancel;
}
- qemuDomainObjExitMonitor(vm);
+ qemuDomainObjExitMonitorWithDriver(driver, vm);
if (status == QEMU_MONITOR_MIGRATION_STATUS_ERROR) {
qemudReportError(dom->conn, dom, NULL, VIR_ERR_OPERATION_FAILED,
@@ -6739,9 +7022,9 @@ static int doTunnelMigrate(virDomainPtr dom,
cancel:
if (retval != 0) {
- qemuDomainObjEnterMonitor(vm);
+ qemuDomainObjEnterMonitorWithDriver(driver, vm);
qemuMonitorMigrateCancel(priv->mon);
- qemuDomainObjExitMonitor(vm);
+ qemuDomainObjExitMonitorWithDriver(driver, vm);
}
finish:
@@ -6774,6 +7057,7 @@ cleanup:
* virDomainMigrateVersion2 from libvirt.c, but running in source
* libvirtd context, instead of client app context */
static int doNonTunnelMigrate(virDomainPtr dom,
+ struct qemud_driver *driver,
virConnectPtr dconn,
virDomainObjPtr vm,
const char *dom_xml,
@@ -6803,7 +7087,7 @@ static int doNonTunnelMigrate(virDomainPtr dom,
_("domainMigratePrepare2 did not set uri"));
}
- if (doNativeMigrate(dom, vm, uri_out, flags, dname, resource) < 0)
+ if (doNativeMigrate(dom, driver, vm, uri_out, flags, dname, resource) < 0)
goto finish;
retval = 0;
@@ -6822,6 +7106,7 @@ cleanup:
static int doPeer2PeerMigrate(virDomainPtr dom,
+ struct qemud_driver *driver,
virDomainObjPtr vm,
const char *uri,
unsigned long flags,
@@ -6857,9 +7142,9 @@ static int doPeer2PeerMigrate(virDomainPtr dom,
}
if (flags & VIR_MIGRATE_TUNNELLED)
- ret = doTunnelMigrate(dom, dconn, vm, dom_xml, uri, flags, dname, resource);
+ ret = doTunnelMigrate(dom, driver, dconn, vm, dom_xml, uri, flags, dname, resource);
else
- ret = doNonTunnelMigrate(dom, dconn, vm, dom_xml, uri, flags, dname, resource);
+ ret = doNonTunnelMigrate(dom, driver, dconn, vm, dom_xml, uri, flags, dname, resource);
cleanup:
VIR_FREE(dom_xml);
@@ -6896,21 +7181,24 @@ qemudDomainMigratePerform (virDomainPtr dom,
goto cleanup;
}
+ if (qemuDomainObjBeginJobWithDriver(driver, vm) < 0)
+ goto cleanup;
+
if (!virDomainObjIsActive(vm)) {
qemudReportError (dom->conn, dom, NULL, VIR_ERR_OPERATION_INVALID,
"%s", _("domain is not running"));
- goto cleanup;
+ goto endjob;
}
if (!(flags & VIR_MIGRATE_LIVE)) {
qemuDomainObjPrivatePtr priv = vm->privateData;
/* Pause domain for non-live migration */
- qemuDomainObjEnterMonitor(vm);
+ qemuDomainObjEnterMonitorWithDriver(driver, vm);
if (qemuMonitorStopCPUs(priv->mon) < 0) {
- qemuDomainObjExitMonitor(vm);
- goto cleanup;
+ qemuDomainObjExitMonitorWithDriver(driver, vm);
+ goto endjob;
}
- qemuDomainObjExitMonitor(vm);
+ qemuDomainObjExitMonitorWithDriver(driver, vm);
paused = 1;
event = virDomainEventNewFromObj(vm,
@@ -6922,12 +7210,12 @@ qemudDomainMigratePerform (virDomainPtr dom,
}
if ((flags & VIR_MIGRATE_TUNNELLED)) {
- if (doPeer2PeerMigrate(dom, vm, uri, flags, dname, resource) < 0)
+ if (doPeer2PeerMigrate(dom, driver, vm, uri, flags, dname, resource) < 0)
/* doTunnelMigrate already set the error, so just get out */
- goto cleanup;
+ goto endjob;
} else {
- if (doNativeMigrate(dom, vm, uri, flags, dname, resource) < 0)
- goto cleanup;
+ if (doNativeMigrate(dom, driver, vm, uri, flags, dname, resource) < 0)
+ goto endjob;
}
/* Clean up the source domain. */
@@ -6939,16 +7227,17 @@ qemudDomainMigratePerform (virDomainPtr dom,
VIR_DOMAIN_EVENT_STOPPED_MIGRATED);
if (!vm->persistent || (flags & VIR_MIGRATE_UNDEFINE_SOURCE)) {
virDomainDeleteConfig(dom->conn, driver->configDir,
driver->autostartDir, vm);
+ qemuDomainObjEndJob(vm);
virDomainRemoveInactive(&driver->domains, vm);
vm = NULL;
}
ret = 0;
-cleanup:
+endjob:
if (paused) {
qemuDomainObjPrivatePtr priv = vm->privateData;
/* we got here through some sort of failure; start the domain again */
- qemuDomainObjEnterMonitor(vm);
+ qemuDomainObjEnterMonitorWithDriver(driver, vm);
if (qemuMonitorStartCPUs(priv->mon, dom->conn) < 0) {
/* Hm, we already know we are in error here. We don't want to
* overwrite the previous error, though, so we just throw something
@@ -6957,13 +7246,16 @@ cleanup:
VIR_ERROR(_("Failed to resume guest %s after failure\n"),
vm->def->name);
}
- qemuDomainObjExitMonitor(vm);
+ qemuDomainObjExitMonitorWithDriver(driver, vm);
event = virDomainEventNewFromObj(vm,
VIR_DOMAIN_EVENT_RESUMED,
VIR_DOMAIN_EVENT_RESUMED_MIGRATED);
}
+ if (vm)
+ qemuDomainObjEndJob(vm);
+cleanup:
if (vm)
virDomainObjUnlock(vm);
if (event)
@@ -6996,6 +7288,9 @@ qemudDomainMigrateFinish2 (virConnectPtr dconn,
goto cleanup;
}
+ if (qemuDomainObjBeginJobWithDriver(driver, vm) < 0)
+ goto cleanup;
+
/* Did the migration go as planned? If yes, return the domain
* object, but if no, clean up the empty qemu process.
*/
@@ -7016,7 +7311,7 @@ qemudDomainMigrateFinish2 (virConnectPtr dconn,
* situation and management tools are smart.
*/
vm = NULL;
- goto cleanup;
+ goto endjob;
}
event = virDomainEventNewFromObj(vm,
@@ -7035,15 +7330,15 @@ qemudDomainMigrateFinish2 (virConnectPtr dconn,
* >= 0.10.6 to work properly. This isn't strictly necessary on
* older qemu's, but it also doesn't hurt anything there
*/
- qemuDomainObjEnterMonitor(vm);
+ qemuDomainObjEnterMonitorWithDriver(driver, vm);
if (qemuMonitorStartCPUs(priv->mon, dconn) < 0) {
if (virGetLastError() == NULL)
qemudReportError(dconn, NULL, NULL, VIR_ERR_INTERNAL_ERROR,
"%s", _("resume operation failed"));
- qemuDomainObjExitMonitor(vm);
- goto cleanup;
+ qemuDomainObjExitMonitorWithDriver(driver, vm);
+ goto endjob;
}
- qemuDomainObjExitMonitor(vm);
+ qemuDomainObjExitMonitorWithDriver(driver, vm);
vm->state = VIR_DOMAIN_RUNNING;
event = virDomainEventNewFromObj(vm,
@@ -7056,11 +7351,16 @@ qemudDomainMigrateFinish2 (virConnectPtr dconn,
VIR_DOMAIN_EVENT_STOPPED,
VIR_DOMAIN_EVENT_STOPPED_FAILED);
if (!vm->persistent) {
+ qemuDomainObjEndJob(vm);
virDomainRemoveInactive(&driver->domains, vm);
vm = NULL;
}
}
+endjob:
+ if (vm)
+ qemuDomainObjEndJob(vm);
+
cleanup:
if (vm)
virDomainObjUnlock(vm);
--
1.6.2.5