Currently, we attempt to run a sync job and an async job at the same
time, which means that the monitor commands for the two jobs can be
interleaved in any order.
In the function qemuDomainObjEnterMonitorInternal():

    if (priv->job.active == QEMU_JOB_NONE && priv->job.asyncJob) {
        if (qemuDomainObjBeginNestedJob(driver, obj) < 0)
We check whether the caller is an async job by looking at
priv->job.active and priv->job.asyncJob. But when an async job is
running and a sync job is also running at the time of the check,
priv->job.active is not QEMU_JOB_NONE. So we cannot reliably detect
an async caller inside qemuDomainObjEnterMonitorInternal(), and must
instead put the burden on the caller to tell us when an async command
wants to do a nested job.
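To spell out the race with a concrete (hypothetical) interleaving:
suppose a migration async job is in flight while a query sync job has
just set priv->job.active. The async worker then evaluates

    priv->job.active == QEMU_JOB_NONE && priv->job.asyncJob

as false, skips qemuDomainObjBeginNestedJob(), and enters the monitor
while the sync job is still using it.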
Once the burden is on the caller, only async monitor enters need to
worry about whether the VM is still running; for sync monitor enters,
the internal return is always 0, so lots of ignore_value calls can be
dropped.
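The resulting calling convention, roughly (a sketch using the names
introduced by this patch, with qemuMonitorXXXX standing in for an
arbitrary monitor command):

    /* sync job: enter is now void, nothing to check */
    qemuDomainObjEnterMonitor(driver, vm);
    qemuMonitorXXXX(priv->mon);
    qemuDomainObjExitMonitor(driver, vm);

    /* async job: name the owning async job and check for -1 */
    if (qemuDomainObjEnterMonitorAsync(driver, vm,
                                       QEMU_ASYNC_JOB_MIGRATION_OUT) < 0)
        goto cleanup; /* VM died while starting the nested job */
    qemuMonitorXXXX(priv->mon);
    qemuDomainObjExitMonitorWithDriver(driver, vm);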
* src/qemu/THREADS.txt: Reflect new rules.
* src/qemu/qemu_domain.h (qemuDomainObjEnterMonitorAsync): New
prototype.
* src/qemu/qemu_process.h (qemuProcessStartCPUs)
(qemuProcessStopCPUs): Add parameter.
* src/qemu/qemu_migration.h (qemuMigrationToFile): Likewise.
(qemuMigrationWaitForCompletion): Make static.
* src/qemu/qemu_domain.c (qemuDomainObjEnterMonitorInternal): Add
parameter.
(qemuDomainObjEnterMonitorAsync): New function.
(qemuDomainObjEnterMonitor, qemuDomainObjEnterMonitorWithDriver):
Update callers.
* src/qemu/qemu_driver.c (qemuDomainSaveInternal)
(qemudDomainCoreDump, doCoreDump, processWatchdogEvent)
(qemudDomainSuspend, qemudDomainResume, qemuDomainSaveImageStartVM)
(qemuDomainSnapshotCreateActive, qemuDomainRevertToSnapshot):
Likewise.
* src/qemu/qemu_process.c (qemuProcessStopCPUs)
(qemuProcessFakeReboot, qemuProcessRecoverMigration)
(qemuProcessRecoverJob, qemuProcessStart): Likewise.
* src/qemu/qemu_migration.c (qemuMigrationToFile)
(qemuMigrationWaitForCompletion, qemuMigrationUpdateJobStatus)
(qemuMigrationJobStart, qemuDomainMigrateGraphicsRelocate)
(doNativeMigrate, doTunnelMigrate, qemuMigrationPerformJob)
(qemuMigrationPerformPhase, qemuMigrationFinish)
(qemuMigrationConfirm): Likewise.
* src/qemu/qemu_hotplug.c: Drop unneeded ignore_value.
---
v3: incorporate Wen's feedback - in particular, qemuProcessStartCPUs
now has its return value checked, restarting libvirtd does not use an
async job, and I no longer hit the deadlock in the scenarios where I
saw it while testing v2.
I still need to do migration testing before I'm convinced that this
is right, but it's doing a lot better.
src/qemu/THREADS.txt | 47 ++++++++++++++++++--------
src/qemu/qemu_domain.c | 55 +++++++++++++++++++++++-------
src/qemu/qemu_domain.h | 28 ++++++++++-----
src/qemu/qemu_driver.c | 81 ++++++++++++++++++++++++++-------------------
src/qemu/qemu_hotplug.c | 38 ++++++++++----------
src/qemu/qemu_migration.c | 75 ++++++++++++++++++++++++------------------
src/qemu/qemu_migration.h | 5 +--
src/qemu/qemu_process.c | 64 ++++++++++++++++++++++-------------
src/qemu/qemu_process.h | 7 +++-
9 files changed, 248 insertions(+), 152 deletions(-)
diff --git a/src/qemu/THREADS.txt b/src/qemu/THREADS.txt
index e73076c..38c4b9c 100644
--- a/src/qemu/THREADS.txt
+++ b/src/qemu/THREADS.txt
@@ -217,10 +217,7 @@ To acquire the QEMU monitor lock
NB: caller must take care to drop the driver lock if necessary
- These functions automatically begin/end nested job if called inside an
- asynchronous job. The caller must then check the return value of
- qemuDomainObjEnterMonitor to detect if domain died while waiting on
- the nested job.
+ These functions must not be used by an asynchronous job.
To acquire the QEMU monitor lock with the driver lock held
@@ -237,10 +234,30 @@ To acquire the QEMU monitor lock with the driver lock held
NB: caller must take care to drop the driver lock if necessary
- These functions automatically begin/end nested job if called inside an
- asynchronous job. The caller must then check the return value of
- qemuDomainObjEnterMonitorWithDriver to detect if domain died while
- waiting on the nested job.
+ These functions must not be used inside an asynchronous job.
+
+
+To acquire the QEMU monitor lock with the driver lock held and as part
+of an asynchronous job
+
+ qemuDomainObjEnterMonitorAsync()
+ - Validates that the right async job is still running
+ - Acquires the qemuMonitorObjPtr lock
+ - Releases the virDomainObjPtr lock
+ - Releases the driver lock
+ - Validates that the VM is still active
+
+ qemuDomainObjExitMonitorWithDriver()
+ - Releases the qemuMonitorObjPtr lock
+ - Acquires the driver lock
+ - Acquires the virDomainObjPtr lock
+
+ NB: caller must take care to drop the driver lock if necessary
+
+ These functions are for use inside an asynchronous job; the caller
+ must check for a return of -1 (VM not running, so nothing to exit).
+ Helper functions may also call this with QEMU_ASYNC_JOB_NONE when
+ used from a sync job (such as when first starting a domain).
To keep a domain alive while waiting on a remote command, starting
@@ -333,8 +350,7 @@ Design patterns
...do prep work...
if (virDomainObjIsActive(vm)) {
- /* using ignore_value is safe since vm is active */
- ignore_value(qemuDomainObjEnterMonitor(obj));
+ qemuDomainObjEnterMonitor(obj);
qemuMonitorXXXX(priv->mon);
qemuDomainObjExitMonitor(obj);
}
@@ -361,8 +377,7 @@ Design patterns
...do prep work...
if (virDomainObjIsActive(vm)) {
- /* using ignore_value is safe since vm is active */
- ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, obj));
+ qemuDomainObjEnterMonitorWithDriver(driver, obj);
qemuMonitorXXXX(priv->mon);
qemuDomainObjExitMonitorWithDriver(driver, obj);
}
@@ -374,7 +389,7 @@ Design patterns
qemuDriverUnlock(driver);
- * Running asynchronous job
+ * Running asynchronous job with driver lock held
virDomainObjPtr obj;
qemuDomainObjPrivatePtr priv;
@@ -387,7 +402,8 @@ Design patterns
...do prep work...
- if (qemuDomainObjEnterMonitorWithDriver(driver, obj) < 0) {
+ if (qemuDomainObjEnterMonitorAsync(driver, obj,
+ QEMU_ASYNC_JOB_TYPE) < 0) {
/* domain died in the meantime */
goto error;
}
@@ -395,7 +411,8 @@ Design patterns
qemuDomainObjExitMonitorWithDriver(driver, obj);
while (!finished) {
- if (qemuDomainObjEnterMonitorWithDriver(driver, obj) < 0) {
+ if (qemuDomainObjEnterMonitorAsync(driver, obj,
+ QEMU_ASYNC_JOB_TYPE) < 0) {
/* domain died in the meantime */
goto error;
}
diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index 2eaaf3a..9904ece 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -863,14 +863,20 @@ qemuDomainObjEndAsyncJob(struct qemud_driver *driver, virDomainObjPtr obj)
return virDomainObjUnref(obj);
}
-static int ATTRIBUTE_NONNULL(1)
+static int
qemuDomainObjEnterMonitorInternal(struct qemud_driver *driver,
bool driver_locked,
- virDomainObjPtr obj)
+ virDomainObjPtr obj,
+ enum qemuDomainAsyncJob asyncJob)
{
qemuDomainObjPrivatePtr priv = obj->privateData;
- if (priv->job.active == QEMU_JOB_NONE && priv->job.asyncJob) {
+ if (asyncJob != QEMU_ASYNC_JOB_NONE) {
+ if (asyncJob != priv->job.asyncJob) {
+ qemuReportError(VIR_ERR_INTERNAL_ERROR,
+ _("unepxected async job %d"), asyncJob);
+ return -1;
+ }
if (qemuDomainObjBeginJobInternal(driver, driver_locked, obj,
QEMU_JOB_ASYNC_NESTED,
QEMU_ASYNC_JOB_NONE) < 0)
@@ -878,6 +884,8 @@ qemuDomainObjEnterMonitorInternal(struct qemud_driver *driver,
if (!virDomainObjIsActive(obj)) {
qemuReportError(VIR_ERR_OPERATION_FAILED, "%s",
_("domain is no longer running"));
+ /* Still referenced by the containing async job. */
+ ignore_value(qemuDomainObjEndJob(driver, obj));
return -1;
}
}
@@ -930,15 +938,15 @@ qemuDomainObjExitMonitorInternal(struct qemud_driver *driver,
*
* To be called immediately before any QEMU monitor API call
* Must have already either called qemuDomainObjBeginJob() and checked
- * that the VM is still active or called qemuDomainObjBeginAsyncJob, in which
- * case this will start a nested job.
+ * that the VM is still active; may not be used for nested async jobs.
*
* To be followed with qemuDomainObjExitMonitor() once complete
*/
-int qemuDomainObjEnterMonitor(struct qemud_driver *driver,
- virDomainObjPtr obj)
+void qemuDomainObjEnterMonitor(struct qemud_driver *driver,
+ virDomainObjPtr obj)
{
- return qemuDomainObjEnterMonitorInternal(driver, false, obj);
+ ignore_value(qemuDomainObjEnterMonitorInternal(driver, false, obj,
+ QEMU_ASYNC_JOB_NONE));
}
/* obj must NOT be locked before calling, qemud_driver must be unlocked
@@ -956,15 +964,36 @@ void qemuDomainObjExitMonitor(struct qemud_driver *driver,
*
* To be called immediately before any QEMU monitor API call
* Must have already either called qemuDomainObjBeginJobWithDriver() and
- * checked that the VM is still active or called qemuDomainObjBeginAsyncJob,
- * in which case this will start a nested job.
+ * checked that the VM is still active; may not be used for nested async jobs.
*
* To be followed with qemuDomainObjExitMonitorWithDriver() once complete
*/
-int qemuDomainObjEnterMonitorWithDriver(struct qemud_driver *driver,
- virDomainObjPtr obj)
+void qemuDomainObjEnterMonitorWithDriver(struct qemud_driver *driver,
+ virDomainObjPtr obj)
+{
+ ignore_value(qemuDomainObjEnterMonitorInternal(driver, true, obj,
+ QEMU_ASYNC_JOB_NONE));
+}
+
+/*
+ * obj and qemud_driver must be locked before calling
+ *
+ * To be called immediately before any QEMU monitor API call.
+ * Must have already either called qemuDomainObjBeginJobWithDriver()
+ * and checked that the VM is still active, with asyncJob of
+ * QEMU_ASYNC_JOB_NONE; or already called qemuDomainObjBeginAsyncJob,
+ * with the same asyncJob.
+ *
+ * Returns 0 if job was started, in which case this must be followed with
+ * qemuDomainObjExitMonitorWithDriver(); or -1 if the job could not be
+ * started (probably because the vm exited in the meantime).
+ */
+int
+qemuDomainObjEnterMonitorAsync(struct qemud_driver *driver,
+ virDomainObjPtr obj,
+ enum qemuDomainAsyncJob asyncJob)
{
- return qemuDomainObjEnterMonitorInternal(driver, true, obj);
+ return qemuDomainObjEnterMonitorInternal(driver, true, obj, asyncJob);
}
/* obj must NOT be locked before calling, qemud_driver must be unlocked,
diff --git a/src/qemu/qemu_domain.h b/src/qemu/qemu_domain.h
index 8bff8b0..e12ca8e 100644
--- a/src/qemu/qemu_domain.h
+++ b/src/qemu/qemu_domain.h
@@ -168,20 +168,28 @@ void qemuDomainObjRestoreJob(virDomainObjPtr obj,
void qemuDomainObjDiscardAsyncJob(struct qemud_driver *driver,
virDomainObjPtr obj);
-int qemuDomainObjEnterMonitor(struct qemud_driver *driver,
- virDomainObjPtr obj)
- ATTRIBUTE_RETURN_CHECK;
+void qemuDomainObjEnterMonitor(struct qemud_driver *driver,
+ virDomainObjPtr obj)
+ ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2);
void qemuDomainObjExitMonitor(struct qemud_driver *driver,
- virDomainObjPtr obj);
-int qemuDomainObjEnterMonitorWithDriver(struct qemud_driver *driver,
- virDomainObjPtr obj)
- ATTRIBUTE_RETURN_CHECK;
+ virDomainObjPtr obj)
+ ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2);
+void qemuDomainObjEnterMonitorWithDriver(struct qemud_driver *driver,
+ virDomainObjPtr obj)
+ ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2);
+int qemuDomainObjEnterMonitorAsync(struct qemud_driver *driver,
+ virDomainObjPtr obj,
+ enum qemuDomainAsyncJob asyncJob)
+ ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2) ATTRIBUTE_RETURN_CHECK;
void qemuDomainObjExitMonitorWithDriver(struct qemud_driver *driver,
- virDomainObjPtr obj);
+ virDomainObjPtr obj)
+ ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2);
void qemuDomainObjEnterRemoteWithDriver(struct qemud_driver *driver,
- virDomainObjPtr obj);
+ virDomainObjPtr obj)
+ ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2);
void qemuDomainObjExitRemoteWithDriver(struct qemud_driver *driver,
- virDomainObjPtr obj);
+ virDomainObjPtr obj)
+ ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2);
char *qemuDomainDefFormatXML(struct qemud_driver *driver,
virDomainDefPtr vm,
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index b673fd5..e9aaac1 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -1375,7 +1375,7 @@ static int qemudDomainSuspend(virDomainPtr dom) {
goto endjob;
}
if (virDomainObjGetState(vm, NULL) != VIR_DOMAIN_PAUSED) {
- if (qemuProcessStopCPUs(driver, vm, reason) < 0) {
+ if (qemuProcessStopCPUs(driver, vm, reason, QEMU_ASYNC_JOB_NONE) < 0) {
goto endjob;
}
event = virDomainEventNewFromObj(vm,
@@ -1428,7 +1428,8 @@ static int qemudDomainResume(virDomainPtr dom) {
}
if (virDomainObjGetState(vm, NULL) == VIR_DOMAIN_PAUSED) {
if (qemuProcessStartCPUs(driver, vm, dom->conn,
- VIR_DOMAIN_RUNNING_UNPAUSED) < 0) {
+ VIR_DOMAIN_RUNNING_UNPAUSED,
+ QEMU_ASYNC_JOB_NONE) < 0) {
if (virGetLastError() == NULL)
qemuReportError(VIR_ERR_OPERATION_FAILED,
"%s", _("resume operation failed"));
@@ -1484,7 +1485,7 @@ static int qemuDomainShutdown(virDomainPtr dom) {
}
priv = vm->privateData;
- ignore_value(qemuDomainObjEnterMonitor(driver, vm));
+ qemuDomainObjEnterMonitor(driver, vm);
ret = qemuMonitorSystemPowerdown(priv->mon);
qemuDomainObjExitMonitor(driver, vm);
@@ -1536,7 +1537,7 @@ static int qemuDomainReboot(virDomainPtr dom, unsigned int flags) {
goto endjob;
}
- ignore_value(qemuDomainObjEnterMonitor(driver, vm));
+ qemuDomainObjEnterMonitor(driver, vm);
ret = qemuMonitorSystemPowerdown(priv->mon);
qemuDomainObjExitMonitor(driver, vm);
@@ -1775,7 +1776,7 @@ static int qemudDomainSetMemoryFlags(virDomainPtr dom, unsigned long newmem,
if (flags & VIR_DOMAIN_AFFECT_LIVE) {
priv = vm->privateData;
- ignore_value(qemuDomainObjEnterMonitor(driver, vm));
+ qemuDomainObjEnterMonitor(driver, vm);
r = qemuMonitorSetBalloon(priv->mon, newmem);
qemuDomainObjExitMonitor(driver, vm);
virDomainAuditMemory(vm, vm->def->mem.cur_balloon, newmem,
"update",
@@ -1849,7 +1850,7 @@ static int qemuDomainInjectNMI(virDomainPtr domain, unsigned int flags)
if (qemuDomainObjBeginJobWithDriver(driver, vm, QEMU_JOB_MODIFY) < 0)
goto cleanup;
- ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
+ qemuDomainObjEnterMonitorWithDriver(driver, vm);
ret = qemuMonitorInjectNMI(priv->mon);
qemuDomainObjExitMonitorWithDriver(driver, vm);
if (qemuDomainObjEndJob(driver, vm) == 0) {
@@ -1918,7 +1919,7 @@ static int qemuDomainSendKey(virDomainPtr domain,
goto cleanup;
}
- ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
+ qemuDomainObjEnterMonitorWithDriver(driver, vm);
ret = qemuMonitorSendKey(priv->mon, holdtime, keycodes, nkeycodes);
qemuDomainObjExitMonitorWithDriver(driver, vm);
if (qemuDomainObjEndJob(driver, vm) == 0) {
@@ -1979,7 +1980,7 @@ static int qemudDomainGetInfo(virDomainPtr dom,
if (!virDomainObjIsActive(vm))
err = 0;
else {
- ignore_value(qemuDomainObjEnterMonitor(driver, vm));
+ qemuDomainObjEnterMonitor(driver, vm);
err = qemuMonitorGetBalloonInfo(priv->mon, &balloon);
qemuDomainObjExitMonitor(driver, vm);
}
@@ -2232,7 +2233,8 @@ qemuDomainSaveInternal(struct qemud_driver *driver, virDomainPtr dom,
/* Pause */
if (virDomainObjGetState(vm, NULL) == VIR_DOMAIN_RUNNING) {
header.was_running = 1;
- if (qemuProcessStopCPUs(driver, vm, VIR_DOMAIN_PAUSED_SAVE) < 0)
+ if (qemuProcessStopCPUs(driver, vm, VIR_DOMAIN_PAUSED_SAVE,
+ QEMU_ASYNC_JOB_SAVE) < 0)
goto endjob;
if (!virDomainObjIsActive(vm)) {
@@ -2404,7 +2406,8 @@ qemuDomainSaveInternal(struct qemud_driver *driver, virDomainPtr dom,
/* Perform the migration */
if (qemuMigrationToFile(driver, vm, fd, offset, path,
qemuCompressProgramName(compressed),
- is_reg, bypassSecurityDriver) < 0)
+ is_reg, bypassSecurityDriver,
+ QEMU_ASYNC_JOB_SAVE) < 0)
goto endjob;
if (VIR_CLOSE(fd) < 0) {
virReportSystemError(errno, _("unable to close %s"), path);
@@ -2433,7 +2436,8 @@ endjob:
if (ret != 0) {
if (header.was_running && virDomainObjIsActive(vm)) {
rc = qemuProcessStartCPUs(driver, vm, dom->conn,
- VIR_DOMAIN_RUNNING_SAVE_CANCELED);
+ VIR_DOMAIN_RUNNING_SAVE_CANCELED,
+ QEMU_ASYNC_JOB_SAVE);
if (rc < 0)
VIR_WARN("Unable to resume guest CPUs after save
failure");
}
@@ -2696,7 +2700,8 @@ doCoreDump(struct qemud_driver *driver,
goto cleanup;
if (qemuMigrationToFile(driver, vm, fd, 0, path,
- qemuCompressProgramName(compress), true, false) < 0)
+ qemuCompressProgramName(compress), true, false,
+ QEMU_ASYNC_JOB_DUMP) < 0)
goto cleanup;
if (VIR_CLOSE(fd) < 0) {
@@ -2787,7 +2792,8 @@ static int qemudDomainCoreDump(virDomainPtr dom,
/* Pause domain for non-live dump */
if (!(flags & VIR_DUMP_LIVE) &&
virDomainObjGetState(vm, NULL) == VIR_DOMAIN_RUNNING) {
- if (qemuProcessStopCPUs(driver, vm, VIR_DOMAIN_PAUSED_DUMP) < 0)
+ if (qemuProcessStopCPUs(driver, vm, VIR_DOMAIN_PAUSED_DUMP,
+ QEMU_ASYNC_JOB_DUMP) < 0)
goto endjob;
paused = 1;
@@ -2819,7 +2825,8 @@ endjob:
the migration is complete. */
else if (resume && paused && virDomainObjIsActive(vm)) {
if (qemuProcessStartCPUs(driver, vm, dom->conn,
- VIR_DOMAIN_RUNNING_UNPAUSED) < 0) {
+ VIR_DOMAIN_RUNNING_UNPAUSED,
+ QEMU_ASYNC_JOB_DUMP) < 0) {
if (virGetLastError() == NULL)
qemuReportError(VIR_ERR_OPERATION_FAILED,
"%s", _("resuming after dump
failed"));
@@ -2902,7 +2909,7 @@ qemuDomainScreenshot(virDomainPtr dom,
virSecurityManagerSetSavedStateLabel(qemu_driver->securityManager, vm, tmp);
- ignore_value(qemuDomainObjEnterMonitor(driver, vm));
+ qemuDomainObjEnterMonitor(driver, vm);
if (qemuMonitorScreendump(priv->mon, tmp) < 0) {
qemuDomainObjExitMonitor(driver, vm);
goto endjob;
@@ -2978,7 +2985,8 @@ static void processWatchdogEvent(void *data, void *opaque)
"%s", _("Dump failed"));
ret = qemuProcessStartCPUs(driver, wdEvent->vm, NULL,
- VIR_DOMAIN_RUNNING_UNPAUSED);
+ VIR_DOMAIN_RUNNING_UNPAUSED,
+ QEMU_ASYNC_JOB_DUMP);
if (ret < 0)
qemuReportError(VIR_ERR_OPERATION_FAILED,
@@ -3014,7 +3022,7 @@ static int qemudDomainHotplugVcpus(struct qemud_driver *driver,
int oldvcpus = vm->def->vcpus;
int vcpus = oldvcpus;
- ignore_value(qemuDomainObjEnterMonitor(driver, vm));
+ qemuDomainObjEnterMonitor(driver, vm);
/* We need different branches here, because we want to offline
* in reverse order to onlining, so any partial fail leaves us in a
@@ -3934,7 +3942,8 @@ qemuDomainSaveImageStartVM(virConnectPtr conn,
/* If it was running before, resume it now. */
if (header->was_running) {
if (qemuProcessStartCPUs(driver, vm, conn,
- VIR_DOMAIN_RUNNING_RESTORED) < 0) {
+ VIR_DOMAIN_RUNNING_RESTORED,
+ QEMU_ASYNC_JOB_NONE) < 0) {
if (virGetLastError() == NULL)
qemuReportError(VIR_ERR_OPERATION_FAILED,
"%s", _("failed to resume domain"));
@@ -4195,7 +4204,7 @@ static char *qemuDomainGetXMLDesc(virDomainPtr dom,
if (qemuDomainObjBeginJobWithDriver(driver, vm, QEMU_JOB_QUERY) < 0)
goto cleanup;
- ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
+ qemuDomainObjEnterMonitorWithDriver(driver, vm);
err = qemuMonitorGetBalloonInfo(priv->mon, &balloon);
qemuDomainObjExitMonitorWithDriver(driver, vm);
if (qemuDomainObjEndJob(driver, vm) == 0) {
@@ -6774,7 +6783,7 @@ qemudDomainBlockStats (virDomainPtr dom,
goto endjob;
}
- ignore_value(qemuDomainObjEnterMonitor(driver, vm));
+ qemuDomainObjEnterMonitor(driver, vm);
ret = qemuMonitorGetBlockStatsInfo(priv->mon,
disk->info.alias,
&stats->rd_req,
@@ -6884,7 +6893,7 @@ qemudDomainMemoryStats (virDomainPtr dom,
if (virDomainObjIsActive(vm)) {
qemuDomainObjPrivatePtr priv = vm->privateData;
- ignore_value(qemuDomainObjEnterMonitor(driver, vm));
+ qemuDomainObjEnterMonitor(driver, vm);
ret = qemuMonitorGetMemoryStats(priv->mon, stats, nr_stats);
qemuDomainObjExitMonitor(driver, vm);
} else {
@@ -7031,7 +7040,7 @@ qemudDomainMemoryPeek (virDomainPtr dom,
virSecurityManagerSetSavedStateLabel(qemu_driver->securityManager, vm, tmp);
priv = vm->privateData;
- ignore_value(qemuDomainObjEnterMonitor(driver, vm));
+ qemuDomainObjEnterMonitor(driver, vm);
if (flags == VIR_MEMORY_VIRTUAL) {
if (qemuMonitorSaveVirtualMemory(priv->mon, offset, size, tmp) < 0) {
qemuDomainObjExitMonitor(driver, vm);
@@ -7210,7 +7219,7 @@ static int qemuDomainGetBlockInfo(virDomainPtr dom,
goto cleanup;
if (virDomainObjIsActive(vm)) {
- ignore_value(qemuDomainObjEnterMonitor(driver, vm));
+ qemuDomainObjEnterMonitor(driver, vm);
ret = qemuMonitorGetBlockExtent(priv->mon,
disk->info.alias,
&info->allocation);
@@ -8085,7 +8094,7 @@ static int qemuDomainAbortJob(virDomainPtr dom) {
}
VIR_DEBUG("Cancelling job at client request");
- ignore_value(qemuDomainObjEnterMonitor(driver, vm));
+ qemuDomainObjEnterMonitor(driver, vm);
ret = qemuMonitorMigrateCancel(priv->mon);
qemuDomainObjExitMonitor(driver, vm);
@@ -8142,7 +8151,7 @@ qemuDomainMigrateSetMaxDowntime(virDomainPtr dom,
}
VIR_DEBUG("Setting migration downtime to %llums", downtime);
- ignore_value(qemuDomainObjEnterMonitor(driver, vm));
+ qemuDomainObjEnterMonitor(driver, vm);
ret = qemuMonitorSetMigrationDowntime(priv->mon, downtime);
qemuDomainObjExitMonitor(driver, vm);
@@ -8198,7 +8207,7 @@ qemuDomainMigrateSetMaxSpeed(virDomainPtr dom,
}
VIR_DEBUG("Setting migration bandwidth to %luMbs", bandwidth);
- ignore_value(qemuDomainObjEnterMonitor(driver, vm));
+ qemuDomainObjEnterMonitor(driver, vm);
ret = qemuMonitorSetMigrationSpeed(priv->mon, bandwidth);
qemuDomainObjExitMonitor(driver, vm);
@@ -8395,7 +8404,8 @@ qemuDomainSnapshotCreateActive(virConnectPtr conn,
* confuses libvirt since it's not notified when qemu resumes the
* domain. Thus we stop and start CPUs ourselves.
*/
- if (qemuProcessStopCPUs(driver, vm, VIR_DOMAIN_PAUSED_SAVE) < 0)
+ if (qemuProcessStopCPUs(driver, vm, VIR_DOMAIN_PAUSED_SAVE,
+ QEMU_ASYNC_JOB_NONE) < 0)
goto cleanup;
resume = true;
@@ -8406,14 +8416,15 @@ qemuDomainSnapshotCreateActive(virConnectPtr conn,
}
}
- ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
+ qemuDomainObjEnterMonitorWithDriver(driver, vm);
ret = qemuMonitorCreateSnapshot(priv->mon, snap->def->name);
qemuDomainObjExitMonitorWithDriver(driver, vm);
cleanup:
if (resume && virDomainObjIsActive(vm) &&
qemuProcessStartCPUs(driver, vm, conn,
- VIR_DOMAIN_RUNNING_UNPAUSED) < 0 &&
+ VIR_DOMAIN_RUNNING_UNPAUSED,
+ QEMU_ASYNC_JOB_NONE) < 0 &&
virGetLastError() == NULL) {
qemuReportError(VIR_ERR_OPERATION_FAILED, "%s",
_("resuming after snapshot failed"));
@@ -8740,7 +8751,7 @@ static int qemuDomainRevertToSnapshot(virDomainSnapshotPtr snapshot,
if (virDomainObjIsActive(vm)) {
priv = vm->privateData;
- ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
+ qemuDomainObjEnterMonitorWithDriver(driver, vm);
rc = qemuMonitorLoadSnapshot(priv->mon, snap->def->name);
qemuDomainObjExitMonitorWithDriver(driver, vm);
if (rc < 0)
@@ -8762,9 +8773,11 @@ static int qemuDomainRevertToSnapshot(virDomainSnapshotPtr snapshot,
if (snap->def->state == VIR_DOMAIN_PAUSED) {
/* qemu unconditionally starts the domain running again after
* loadvm, so let's pause it to keep consistency
+ * XXX we should have used qemuProcessStart's start_paused instead
*/
rc = qemuProcessStopCPUs(driver, vm,
- VIR_DOMAIN_PAUSED_FROM_SNAPSHOT);
+ VIR_DOMAIN_PAUSED_FROM_SNAPSHOT,
+ QEMU_ASYNC_JOB_NONE);
if (rc < 0)
goto endjob;
} else {
@@ -8864,7 +8877,7 @@ static int qemuDomainSnapshotDiscard(struct qemud_driver *driver,
}
else {
priv = vm->privateData;
- ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
+ qemuDomainObjEnterMonitorWithDriver(driver, vm);
/* we continue on even in the face of error */
qemuMonitorDeleteSnapshot(priv->mon, snap->def->name);
qemuDomainObjExitMonitorWithDriver(driver, vm);
@@ -9074,7 +9087,7 @@ static int qemuDomainMonitorCommand(virDomainPtr domain, const char *cmd,
if (qemuDomainObjBeginJobWithDriver(driver, vm, QEMU_JOB_MODIFY) < 0)
goto cleanup;
- ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
+ qemuDomainObjEnterMonitorWithDriver(driver, vm);
ret = qemuMonitorArbitraryCommand(priv->mon, cmd, result, hmp);
qemuDomainObjExitMonitorWithDriver(driver, vm);
if (qemuDomainObjEndJob(driver, vm) == 0) {
@@ -9313,7 +9326,7 @@ qemuDomainBlockJobImpl(virDomainPtr dom, const char *path,
if (qemuDomainObjBeginJobWithDriver(driver, vm, QEMU_JOB_MODIFY) < 0)
goto cleanup;
- ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
+ qemuDomainObjEnterMonitorWithDriver(driver, vm);
priv = vm->privateData;
ret = qemuMonitorBlockJob(priv->mon, device, bandwidth, info, mode);
qemuDomainObjExitMonitorWithDriver(driver, vm);
diff --git a/src/qemu/qemu_hotplug.c b/src/qemu/qemu_hotplug.c
index 7dfbfcd..5f449fb 100644
--- a/src/qemu/qemu_hotplug.c
+++ b/src/qemu/qemu_hotplug.c
@@ -97,7 +97,7 @@ int qemuDomainChangeEjectableMedia(struct qemud_driver *driver,
if (!(driveAlias = qemuDeviceDriveHostAlias(origdisk, priv->qemuCaps)))
goto error;
- ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
+ qemuDomainObjEnterMonitorWithDriver(driver, vm);
if (disk->src) {
const char *format = NULL;
if (disk->type != VIR_DOMAIN_DISK_TYPE_DIR) {
@@ -199,7 +199,7 @@ int qemuDomainAttachPciDiskDevice(struct qemud_driver *driver,
goto error;
}
- ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
+ qemuDomainObjEnterMonitorWithDriver(driver, vm);
if (qemuCapsGet(priv->qemuCaps, QEMU_CAPS_DEVICE)) {
ret = qemuMonitorAddDrive(priv->mon, drivestr);
if (ret == 0) {
@@ -296,7 +296,7 @@ int qemuDomainAttachPciControllerDevice(struct qemud_driver *driver,
goto cleanup;
}
- ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
+ qemuDomainObjEnterMonitorWithDriver(driver, vm);
if (qemuCapsGet(priv->qemuCaps, QEMU_CAPS_DEVICE)) {
ret = qemuMonitorAddDevice(priv->mon, devstr);
} else {
@@ -441,7 +441,7 @@ int qemuDomainAttachSCSIDisk(struct qemud_driver *driver,
goto error;
}
- ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
+ qemuDomainObjEnterMonitorWithDriver(driver, vm);
if (qemuCapsGet(priv->qemuCaps, QEMU_CAPS_DEVICE)) {
ret = qemuMonitorAddDrive(priv->mon, drivestr);
if (ret == 0) {
@@ -543,7 +543,7 @@ int qemuDomainAttachUsbMassstorageDevice(struct qemud_driver *driver,
goto error;
}
- ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
+ qemuDomainObjEnterMonitorWithDriver(driver, vm);
if (qemuCapsGet(priv->qemuCaps, QEMU_CAPS_DEVICE)) {
ret = qemuMonitorAddDrive(priv->mon, drivestr);
if (ret == 0) {
@@ -688,7 +688,7 @@ int qemuDomainAttachNetDevice(virConnectPtr conn,
goto cleanup;
}
- ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
+ qemuDomainObjEnterMonitorWithDriver(driver, vm);
if (qemuCapsGet(priv->qemuCaps, QEMU_CAPS_NETDEV) &&
qemuCapsGet(priv->qemuCaps, QEMU_CAPS_DEVICE)) {
if (qemuMonitorAddNetdev(priv->mon, netstr, tapfd, tapfd_name,
@@ -724,7 +724,7 @@ int qemuDomainAttachNetDevice(virConnectPtr conn,
goto try_remove;
}
- ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
+ qemuDomainObjEnterMonitorWithDriver(driver, vm);
if (qemuCapsGet(priv->qemuCaps, QEMU_CAPS_DEVICE)) {
if (qemuMonitorAddDevice(priv->mon, nicstr) < 0) {
qemuDomainObjExitMonitorWithDriver(driver, vm);
@@ -783,7 +783,7 @@ try_remove:
char *netdev_name;
if (virAsprintf(&netdev_name, "host%s", net->info.alias)
< 0)
goto no_memory;
- ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
+ qemuDomainObjEnterMonitorWithDriver(driver, vm);
if (qemuMonitorRemoveNetdev(priv->mon, netdev_name) < 0)
VIR_WARN("Failed to remove network backend for netdev %s",
netdev_name);
@@ -796,7 +796,7 @@ try_remove:
char *hostnet_name;
if (virAsprintf(&hostnet_name, "host%s", net->info.alias) <
0)
goto no_memory;
- ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
+ qemuDomainObjEnterMonitorWithDriver(driver, vm);
if (qemuMonitorRemoveHostNetwork(priv->mon, vlan, hostnet_name) < 0)
VIR_WARN("Failed to remove network backend for vlan %d, net %s",
vlan, hostnet_name);
@@ -857,14 +857,14 @@ int qemuDomainAttachHostPciDevice(struct qemud_driver *driver,
priv->qemuCaps)))
goto error;
- ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
+ qemuDomainObjEnterMonitorWithDriver(driver, vm);
ret = qemuMonitorAddDeviceWithFd(priv->mon, devstr,
configfd, configfd_name);
qemuDomainObjExitMonitorWithDriver(driver, vm);
} else {
virDomainDevicePCIAddress guestAddr;
- ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
+ qemuDomainObjEnterMonitorWithDriver(driver, vm);
ret = qemuMonitorAddPCIHostDevice(priv->mon,
&hostdev->source.subsys.u.pci,
&guestAddr);
@@ -945,7 +945,7 @@ int qemuDomainAttachHostUsbDevice(struct qemud_driver *driver,
goto error;
}
- ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
+ qemuDomainObjEnterMonitorWithDriver(driver, vm);
if (qemuCapsGet(priv->qemuCaps, QEMU_CAPS_DEVICE))
ret = qemuMonitorAddDevice(priv->mon, devstr);
else
@@ -1273,7 +1273,7 @@ int qemuDomainDetachPciDiskDevice(struct qemud_driver *driver,
goto cleanup;
}
- ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
+ qemuDomainObjEnterMonitorWithDriver(driver, vm);
if (qemuCapsGet(priv->qemuCaps, QEMU_CAPS_DEVICE)) {
if (qemuMonitorDelDevice(priv->mon, detach->info.alias) < 0) {
qemuDomainObjExitMonitorWithDriver(driver, vm);
@@ -1369,7 +1369,7 @@ int qemuDomainDetachDiskDevice(struct qemud_driver *driver,
goto cleanup;
}
- ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
+ qemuDomainObjEnterMonitorWithDriver(driver, vm);
if (qemuMonitorDelDevice(priv->mon, detach->info.alias) < 0) {
qemuDomainObjExitMonitorWithDriver(driver, vm);
virDomainAuditDisk(vm, detach, NULL, "detach", false);
@@ -1507,7 +1507,7 @@ int qemuDomainDetachPciControllerDevice(struct qemud_driver *driver,
goto cleanup;
}
- ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
+ qemuDomainObjEnterMonitorWithDriver(driver, vm);
if (qemuCapsGet(priv->qemuCaps, QEMU_CAPS_DEVICE)) {
if (qemuMonitorDelDevice(priv->mon, detach->info.alias)) {
qemuDomainObjExitMonitorWithDriver(driver, vm);
@@ -1602,7 +1602,7 @@ int qemuDomainDetachNetDevice(struct qemud_driver *driver,
goto cleanup;
}
- ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
+ qemuDomainObjEnterMonitorWithDriver(driver, vm);
if (qemuCapsGet(priv->qemuCaps, QEMU_CAPS_DEVICE)) {
if (qemuMonitorDelDevice(priv->mon, detach->info.alias) < 0) {
qemuDomainObjExitMonitorWithDriver(driver, vm);
@@ -1739,7 +1739,7 @@ int qemuDomainDetachHostPciDevice(struct qemud_driver *driver,
return -1;
}
- ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
+ qemuDomainObjEnterMonitorWithDriver(driver, vm);
if (qemuCapsGet(priv->qemuCaps, QEMU_CAPS_DEVICE)) {
ret = qemuMonitorDelDevice(priv->mon, detach->info.alias);
} else {
@@ -1842,7 +1842,7 @@ int qemuDomainDetachHostUsbDevice(struct qemud_driver *driver,
return -1;
}
- ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
+ qemuDomainObjEnterMonitorWithDriver(driver, vm);
ret = qemuMonitorDelDevice(priv->mon, detach->info.alias);
qemuDomainObjExitMonitorWithDriver(driver, vm);
virDomainAuditHostdev(vm, detach, "detach", ret == 0);
@@ -1921,7 +1921,7 @@ qemuDomainChangeGraphicsPasswords(struct qemud_driver *driver,
if (auth->connected)
connected = virDomainGraphicsAuthConnectedTypeToString(auth->connected);
- ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
+ qemuDomainObjEnterMonitorWithDriver(driver, vm);
ret = qemuMonitorSetPassword(priv->mon,
type,
auth->passwd ? auth->passwd : defaultPasswd,
diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index 2843189..8bdbcaf 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -727,7 +727,8 @@ qemuMigrationSetOffline(struct qemud_driver *driver,
{
int ret;
VIR_DEBUG("driver=%p vm=%p", driver, vm);
- ret = qemuProcessStopCPUs(driver, vm, VIR_DOMAIN_PAUSED_MIGRATION);
+ ret = qemuProcessStopCPUs(driver, vm, VIR_DOMAIN_PAUSED_MIGRATION,
+ QEMU_ASYNC_JOB_MIGRATION_OUT);
if (ret == 0) {
virDomainEventPtr event;
@@ -745,7 +746,8 @@ qemuMigrationSetOffline(struct qemud_driver *driver,
static int
qemuMigrationUpdateJobStatus(struct qemud_driver *driver,
virDomainObjPtr vm,
- const char *job)
+ const char *job,
+ enum qemuDomainAsyncJob asyncJob)
{
qemuDomainObjPrivatePtr priv = vm->privateData;
int ret = -1;
@@ -754,21 +756,17 @@ qemuMigrationUpdateJobStatus(struct qemud_driver *driver,
unsigned long long memRemaining;
unsigned long long memTotal;
- if (!virDomainObjIsActive(vm)) {
- qemuReportError(VIR_ERR_INTERNAL_ERROR, _("%s: %s"),
- job, _("guest unexpectedly quit"));
+ ret = qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob);
+ if (ret < 0) {
+ /* Guest already exited; nothing further to update. */
return -1;
}
-
- ret = qemuDomainObjEnterMonitorWithDriver(driver, vm);
- if (ret == 0) {
- ret = qemuMonitorGetMigrationStatus(priv->mon,
- &status,
- &memProcessed,
- &memRemaining,
- &memTotal);
- qemuDomainObjExitMonitorWithDriver(driver, vm);
- }
+ ret = qemuMonitorGetMigrationStatus(priv->mon,
+ &status,
+ &memProcessed,
+ &memRemaining,
+ &memTotal);
+ qemuDomainObjExitMonitorWithDriver(driver, vm);
if (ret < 0 || virTimeMs(&priv->job.info.timeElapsed) < 0) {
priv->job.info.type = VIR_DOMAIN_JOB_FAILED;
@@ -817,8 +815,9 @@ qemuMigrationUpdateJobStatus(struct qemud_driver *driver,
}
-int
-qemuMigrationWaitForCompletion(struct qemud_driver *driver, virDomainObjPtr vm)
+static int
+qemuMigrationWaitForCompletion(struct qemud_driver *driver, virDomainObjPtr vm,
+ enum qemuDomainAsyncJob asyncJob)
{
qemuDomainObjPrivatePtr priv = vm->privateData;
const char *job;
@@ -843,7 +842,7 @@ qemuMigrationWaitForCompletion(struct qemud_driver *driver, virDomainObjPtr vm)
/* Poll every 50ms for progress & to allow cancellation */
struct timespec ts = { .tv_sec = 0, .tv_nsec = 50 * 1000 * 1000ull };
- if (qemuMigrationUpdateJobStatus(driver, vm, job) < 0)
+ if (qemuMigrationUpdateJobStatus(driver, vm, job, asyncJob) < 0)
goto cleanup;
virDomainObjUnlock(vm);
@@ -883,7 +882,8 @@ qemuDomainMigrateGraphicsRelocate(struct qemud_driver *driver,
if (cookie->graphics->type != VIR_DOMAIN_GRAPHICS_TYPE_SPICE)
return 0;
- ret = qemuDomainObjEnterMonitorWithDriver(driver, vm);
+ ret = qemuDomainObjEnterMonitorAsync(driver, vm,
+ QEMU_ASYNC_JOB_MIGRATION_OUT);
if (ret == 0) {
ret = qemuMonitorGraphicsRelocate(priv->mon,
cookie->graphics->type,
@@ -1330,7 +1330,8 @@ static int doNativeMigrate(struct qemud_driver *driver,
goto cleanup;
}
- if (qemuDomainObjEnterMonitorWithDriver(driver, vm) < 0)
+ if (qemuDomainObjEnterMonitorAsync(driver, vm,
+ QEMU_ASYNC_JOB_MIGRATION_OUT) < 0)
goto cleanup;
if (resource > 0 &&
@@ -1352,7 +1353,8 @@ static int doNativeMigrate(struct qemud_driver *driver,
}
qemuDomainObjExitMonitorWithDriver(driver, vm);
- if (qemuMigrationWaitForCompletion(driver, vm) < 0)
+ if (qemuMigrationWaitForCompletion(driver, vm,
+ QEMU_ASYNC_JOB_MIGRATION_OUT) < 0)
goto cleanup;
/* When migration completed, QEMU will have paused the
@@ -1591,7 +1593,8 @@ static int doTunnelMigrate(struct qemud_driver *driver,
goto cleanup;
}
- if (qemuDomainObjEnterMonitorWithDriver(driver, vm) < 0)
+ if (qemuDomainObjEnterMonitorAsync(driver, vm,
+ QEMU_ASYNC_JOB_MIGRATION_OUT) < 0)
goto cleanup;
if (resource > 0 &&
@@ -1634,7 +1637,8 @@ static int doTunnelMigrate(struct qemud_driver *driver,
/* it is also possible that the migrate didn't fail initially, but
* rather failed later on. Check the output of "info migrate"
*/
- if (qemuDomainObjEnterMonitorWithDriver(driver, vm) < 0)
+ if (qemuDomainObjEnterMonitorAsync(driver, vm,
+ QEMU_ASYNC_JOB_MIGRATION_OUT) < 0)
goto cancel;
if (qemuMonitorGetMigrationStatus(priv->mon,
&status,
@@ -1664,7 +1668,8 @@ static int doTunnelMigrate(struct qemud_driver *driver,
if (!(iothread = qemuMigrationStartTunnel(st, client_sock)))
goto cancel;
- ret = qemuMigrationWaitForCompletion(driver, vm);
+ ret = qemuMigrationWaitForCompletion(driver, vm,
+ QEMU_ASYNC_JOB_MIGRATION_OUT);
/* When migration completed, QEMU will have paused the
* CPUs for us, but unless we're using the JSON monitor
@@ -1693,7 +1698,8 @@ cancel:
if (ret != 0 && virDomainObjIsActive(vm)) {
VIR_FORCE_CLOSE(client_sock);
VIR_FORCE_CLOSE(qemu_sock);
- if (qemuDomainObjEnterMonitorWithDriver(driver, vm) == 0) {
+ if (qemuDomainObjEnterMonitorAsync(driver, vm,
+ QEMU_ASYNC_JOB_MIGRATION_OUT) == 0) {
qemuMonitorMigrateCancel(priv->mon);
qemuDomainObjExitMonitorWithDriver(driver, vm);
}
@@ -2201,7 +2207,8 @@ endjob:
if (resume && virDomainObjGetState(vm, NULL) == VIR_DOMAIN_PAUSED) {
/* we got here through some sort of failure; start the domain again */
if (qemuProcessStartCPUs(driver, vm, conn,
- VIR_DOMAIN_RUNNING_MIGRATION_CANCELED) < 0) {
+ VIR_DOMAIN_RUNNING_MIGRATION_CANCELED,
+ QEMU_ASYNC_JOB_MIGRATION_OUT) < 0) {
/* Hm, we already know we are in error here. We don't want to
* overwrite the previous error, though, so we just throw something
* to the logs and hope for the best
@@ -2274,7 +2281,8 @@ qemuMigrationPerformPhase(struct qemud_driver *driver,
virDomainObjGetState(vm, NULL) == VIR_DOMAIN_PAUSED) {
/* we got here through some sort of failure; start the domain again */
if (qemuProcessStartCPUs(driver, vm, conn,
- VIR_DOMAIN_RUNNING_MIGRATION_CANCELED) < 0) {
+ VIR_DOMAIN_RUNNING_MIGRATION_CANCELED,
+ QEMU_ASYNC_JOB_MIGRATION_OUT) < 0) {
/* Hm, we already know we are in error here. We don't want to
* overwrite the previous error, though, so we just throw something
* to the logs and hope for the best
@@ -2500,7 +2508,8 @@ qemuMigrationFinish(struct qemud_driver *driver,
* older qemu's, but it also doesn't hurt anything there
*/
if (qemuProcessStartCPUs(driver, vm, dconn,
- VIR_DOMAIN_RUNNING_MIGRATED) < 0) {
+ VIR_DOMAIN_RUNNING_MIGRATED,
+ QEMU_ASYNC_JOB_MIGRATION_IN) < 0) {
if (virGetLastError() == NULL)
qemuReportError(VIR_ERR_INTERNAL_ERROR,
"%s", _("resume operation
failed"));
@@ -2626,7 +2635,8 @@ int qemuMigrationConfirm(struct qemud_driver *driver,
* older qemu's, but it also doesn't hurt anything there
*/
if (qemuProcessStartCPUs(driver, vm, conn,
- VIR_DOMAIN_RUNNING_MIGRATED) < 0) {
+ VIR_DOMAIN_RUNNING_MIGRATED,
+ QEMU_ASYNC_JOB_MIGRATION_OUT) < 0) {
if (virGetLastError() == NULL)
qemuReportError(VIR_ERR_INTERNAL_ERROR,
"%s", _("resume operation failed"));
@@ -2657,7 +2667,8 @@ int
qemuMigrationToFile(struct qemud_driver *driver, virDomainObjPtr vm,
int fd, off_t offset, const char *path,
const char *compressor,
- bool is_reg, bool bypassSecurityDriver)
+ bool is_reg, bool bypassSecurityDriver,
+ enum qemuDomainAsyncJob asyncJob)
{
qemuDomainObjPrivatePtr priv = vm->privateData;
virCgroupPtr cgroup = NULL;
@@ -2709,7 +2720,7 @@ qemuMigrationToFile(struct qemud_driver *driver, virDomainObjPtr vm,
restoreLabel = true;
}
- if (qemuDomainObjEnterMonitorWithDriver(driver, vm) < 0)
+ if (qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob) < 0)
goto cleanup;
if (!compressor) {
@@ -2763,7 +2774,7 @@ qemuMigrationToFile(struct qemud_driver *driver, virDomainObjPtr vm,
if (rc < 0)
goto cleanup;
- rc = qemuMigrationWaitForCompletion(driver, vm);
+ rc = qemuMigrationWaitForCompletion(driver, vm, asyncJob);
if (rc < 0)
goto cleanup;
diff --git a/src/qemu/qemu_migration.h b/src/qemu/qemu_migration.h
index 9e88271..5c6921d 100644
--- a/src/qemu/qemu_migration.h
+++ b/src/qemu/qemu_migration.h
@@ -78,8 +78,6 @@ bool qemuMigrationIsAllowed(virDomainDefPtr def)
int qemuMigrationSetOffline(struct qemud_driver *driver,
virDomainObjPtr vm);
-int qemuMigrationWaitForCompletion(struct qemud_driver *driver, virDomainObjPtr vm);
-
char *qemuMigrationBegin(struct qemud_driver *driver,
virDomainObjPtr vm,
const char *xmlin,
@@ -145,7 +143,8 @@ int qemuMigrationConfirm(struct qemud_driver *driver,
int qemuMigrationToFile(struct qemud_driver *driver, virDomainObjPtr vm,
int fd, off_t offset, const char *path,
const char *compressor,
- bool is_reg, bool bypassSecurityDriver)
+ bool is_reg, bool bypassSecurityDriver,
+ enum qemuDomainAsyncJob asyncJob)
ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2) ATTRIBUTE_NONNULL(5)
ATTRIBUTE_RETURN_CHECK;
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index d160642..88cefd5 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -388,7 +388,7 @@ qemuProcessFakeReboot(void *opaque)
goto endjob;
}
- ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
+ qemuDomainObjEnterMonitorWithDriver(driver, vm);
if (qemuMonitorSystemReset(priv->mon) < 0) {
qemuDomainObjExitMonitorWithDriver(driver, vm);
goto endjob;
@@ -402,7 +402,8 @@ qemuProcessFakeReboot(void *opaque)
}
if (qemuProcessStartCPUs(driver, vm, NULL,
- VIR_DOMAIN_RUNNING_BOOTED) < 0) {
+ VIR_DOMAIN_RUNNING_BOOTED,
+ QEMU_ASYNC_JOB_NONE) < 0) {
if (virGetLastError() == NULL)
qemuReportError(VIR_ERR_INTERNAL_ERROR,
"%s", _("resume operation failed"));
@@ -850,7 +851,7 @@ qemuConnectMonitor(struct qemud_driver *driver, virDomainObjPtr vm)
}
- ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
+ qemuDomainObjEnterMonitorWithDriver(driver, vm);
ret = qemuMonitorSetCapabilities(priv->mon);
qemuDomainObjExitMonitorWithDriver(driver, vm);
@@ -1202,7 +1203,7 @@ qemuProcessWaitForMonitor(struct qemud_driver* driver,
goto cleanup;
priv = vm->privateData;
- ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
+ qemuDomainObjEnterMonitorWithDriver(driver, vm);
ret = qemuMonitorGetPtyPaths(priv->mon, paths);
qemuDomainObjExitMonitorWithDriver(driver, vm);
@@ -1257,7 +1258,7 @@ qemuProcessDetectVcpuPIDs(struct qemud_driver *driver,
/* What follows is now all KVM specific */
- ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
+ qemuDomainObjEnterMonitorWithDriver(driver, vm);
if ((ncpupids = qemuMonitorGetCPUInfo(priv->mon, &cpupids)) < 0) {
qemuDomainObjExitMonitorWithDriver(driver, vm);
return -1;
@@ -1551,7 +1552,7 @@ qemuProcessInitPasswords(virConnectPtr conn,
goto cleanup;
alias = vm->def->disks[i]->info.alias;
- ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
+ qemuDomainObjEnterMonitorWithDriver(driver, vm);
ret = qemuMonitorSetDrivePassphrase(priv->mon, alias, secret);
VIR_FREE(secret);
qemuDomainObjExitMonitorWithDriver(driver, vm);
@@ -1942,7 +1943,7 @@ qemuProcessInitPCIAddresses(struct qemud_driver *driver,
int ret;
qemuMonitorPCIAddress *addrs = NULL;
- ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
+ qemuDomainObjEnterMonitorWithDriver(driver, vm);
naddrs = qemuMonitorGetAllPCIAddresses(priv->mon,
&addrs);
qemuDomainObjExitMonitorWithDriver(driver, vm);
@@ -2148,7 +2149,8 @@ qemuProcessPrepareMonitorChr(struct qemud_driver *driver,
*/
int
qemuProcessStartCPUs(struct qemud_driver *driver, virDomainObjPtr vm,
- virConnectPtr conn, virDomainRunningReason reason)
+ virConnectPtr conn, virDomainRunningReason reason,
+ enum qemuDomainAsyncJob asyncJob)
{
int ret;
qemuDomainObjPrivatePtr priv = vm->privateData;
@@ -2163,9 +2165,11 @@ qemuProcessStartCPUs(struct qemud_driver *driver, virDomainObjPtr vm,
}
VIR_FREE(priv->lockState);
- ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
- ret = qemuMonitorStartCPUs(priv->mon, conn);
- qemuDomainObjExitMonitorWithDriver(driver, vm);
+ ret = qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob);
+ if (ret == 0) {
+ ret = qemuMonitorStartCPUs(priv->mon, conn);
+ qemuDomainObjExitMonitorWithDriver(driver, vm);
+ }
if (ret == 0) {
virDomainObjSetState(vm, VIR_DOMAIN_RUNNING, reason);
@@ -2180,7 +2184,8 @@ qemuProcessStartCPUs(struct qemud_driver *driver, virDomainObjPtr vm,
int qemuProcessStopCPUs(struct qemud_driver *driver, virDomainObjPtr vm,
- virDomainPausedReason reason)
+ virDomainPausedReason reason,
+ enum qemuDomainAsyncJob asyncJob)
{
int ret;
int oldState;
@@ -2191,7 +2196,7 @@ int qemuProcessStopCPUs(struct qemud_driver *driver, virDomainObjPtr vm,
oldState = virDomainObjGetState(vm, &oldReason);
virDomainObjSetState(vm, VIR_DOMAIN_PAUSED, reason);
- ret = qemuDomainObjEnterMonitorWithDriver(driver, vm);
+ ret = qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob);
if (ret == 0) {
ret = qemuMonitorStopCPUs(priv->mon);
qemuDomainObjExitMonitorWithDriver(driver, vm);
@@ -2254,7 +2259,7 @@ qemuProcessUpdateState(struct qemud_driver *driver, virDomainObjPtr vm)
bool running;
int ret;
- ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
+ qemuDomainObjEnterMonitorWithDriver(driver, vm);
ret = qemuMonitorGetStatus(priv->mon, &running);
qemuDomainObjExitMonitorWithDriver(driver, vm);
@@ -2315,7 +2320,8 @@ qemuProcessRecoverMigration(struct qemud_driver *driver,
VIR_DEBUG("Incoming migration finished, resuming domain %s",
vm->def->name);
if (qemuProcessStartCPUs(driver, vm, conn,
- VIR_DOMAIN_RUNNING_UNPAUSED) < 0) {
+ VIR_DOMAIN_RUNNING_UNPAUSED,
+ QEMU_ASYNC_JOB_NONE) < 0) {
VIR_WARN("Could not resume domain %s", vm->def->name);
}
break;
@@ -2346,7 +2352,7 @@ qemuProcessRecoverMigration(struct qemud_driver *driver,
* domain */
VIR_DEBUG("Canceling unfinished outgoing migration of domain %s",
vm->def->name);
- ignore_value(qemuDomainObjEnterMonitor(driver, vm));
+ qemuDomainObjEnterMonitor(driver, vm);
ignore_value(qemuMonitorMigrateCancel(priv->mon));
qemuDomainObjExitMonitor(driver, vm);
/* resume the domain but only if it was paused as a result of
@@ -2355,7 +2361,8 @@ qemuProcessRecoverMigration(struct qemud_driver *driver,
(reason == VIR_DOMAIN_PAUSED_MIGRATION ||
reason == VIR_DOMAIN_PAUSED_UNKNOWN)) {
if (qemuProcessStartCPUs(driver, vm, conn,
- VIR_DOMAIN_RUNNING_UNPAUSED) < 0) {
+ VIR_DOMAIN_RUNNING_UNPAUSED,
+ QEMU_ASYNC_JOB_NONE) < 0) {
VIR_WARN("Could not resume domain %s",
vm->def->name);
}
}
@@ -2375,7 +2382,8 @@ qemuProcessRecoverMigration(struct qemud_driver *driver,
(reason == VIR_DOMAIN_PAUSED_MIGRATION ||
reason == VIR_DOMAIN_PAUSED_UNKNOWN)) {
if (qemuProcessStartCPUs(driver, vm, conn,
- VIR_DOMAIN_RUNNING_UNPAUSED) < 0) {
+ VIR_DOMAIN_RUNNING_UNPAUSED,
+ QEMU_ASYNC_JOB_NONE) < 0) {
VIR_WARN("Could not resume domain %s",
vm->def->name);
}
}
@@ -2412,11 +2420,13 @@ qemuProcessRecoverJob(struct qemud_driver *driver,
case QEMU_ASYNC_JOB_SAVE:
case QEMU_ASYNC_JOB_DUMP:
- ignore_value(qemuDomainObjEnterMonitor(driver, vm));
+ qemuDomainObjEnterMonitor(driver, vm);
ignore_value(qemuMonitorMigrateCancel(priv->mon));
qemuDomainObjExitMonitor(driver, vm);
/* resume the domain but only if it was paused as a result of
- * running save/dump operation */
+ * running save/dump operation. Although we are recovering an
+ * async job, this function is run at startup and must resume
+ * things using sync monitor connections. */
if (state == VIR_DOMAIN_PAUSED &&
((job->asyncJob == QEMU_ASYNC_JOB_DUMP &&
reason == VIR_DOMAIN_PAUSED_DUMP) ||
@@ -2424,7 +2434,8 @@ qemuProcessRecoverJob(struct qemud_driver *driver,
reason == VIR_DOMAIN_PAUSED_SAVE) ||
reason == VIR_DOMAIN_PAUSED_UNKNOWN)) {
if (qemuProcessStartCPUs(driver, vm, conn,
- VIR_DOMAIN_RUNNING_UNPAUSED) < 0) {
+ VIR_DOMAIN_RUNNING_UNPAUSED,
+ QEMU_ASYNC_JOB_NONE) < 0) {
VIR_WARN("Could not resume domain %s after",
vm->def->name);
}
}
@@ -2974,9 +2985,13 @@ int qemuProcessStart(virConnectPtr conn,
goto cleanup;
}
+ /* Technically, qemuProcessStart can be called from inside
+ * QEMU_ASYNC_JOB_MIGRATION_IN, but we are okay treating this like
+ * a sync job since no other job can call into the domain until
+ * migration completes. */
VIR_DEBUG("Setting initial memory amount");
cur_balloon = vm->def->mem.cur_balloon;
- ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
+ qemuDomainObjEnterMonitorWithDriver(driver, vm);
if (qemuMonitorSetBalloon(priv->mon, cur_balloon) < 0) {
qemuDomainObjExitMonitorWithDriver(driver, vm);
goto cleanup;
@@ -2987,7 +3002,8 @@ int qemuProcessStart(virConnectPtr conn,
VIR_DEBUG("Starting domain CPUs");
/* Allow the CPUS to start executing */
if (qemuProcessStartCPUs(driver, vm, conn,
- VIR_DOMAIN_RUNNING_BOOTED) < 0) {
+ VIR_DOMAIN_RUNNING_BOOTED,
+ QEMU_ASYNC_JOB_NONE) < 0) {
if (virGetLastError() == NULL)
qemuReportError(VIR_ERR_INTERNAL_ERROR,
"%s", _("resume operation failed"));
@@ -3396,7 +3412,7 @@ int qemuProcessAttach(virConnectPtr conn ATTRIBUTE_UNUSED,
}
VIR_DEBUG("Getting initial memory amount");
- ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
+ qemuDomainObjEnterMonitorWithDriver(driver, vm);
if (qemuMonitorGetBalloonInfo(priv->mon, &vm->def->mem.cur_balloon) < 0) {
qemuDomainObjExitMonitorWithDriver(driver, vm);
goto cleanup;
diff --git a/src/qemu/qemu_process.h b/src/qemu/qemu_process.h
index 449d7f1..e9b910d 100644
--- a/src/qemu/qemu_process.h
+++ b/src/qemu/qemu_process.h
@@ -23,6 +23,7 @@
# define __QEMU_PROCESS_H__
# include "qemu_conf.h"
+# include "qemu_domain.h"
int qemuProcessPrepareMonitorChr(struct qemud_driver *driver,
virDomainChrSourceDefPtr monConfig,
@@ -31,10 +32,12 @@ int qemuProcessPrepareMonitorChr(struct qemud_driver *driver,
int qemuProcessStartCPUs(struct qemud_driver *driver,
virDomainObjPtr vm,
virConnectPtr conn,
- virDomainRunningReason reason);
+ virDomainRunningReason reason,
+ enum qemuDomainAsyncJob asyncJob);
int qemuProcessStopCPUs(struct qemud_driver *driver,
virDomainObjPtr vm,
- virDomainPausedReason reason);
+ virDomainPausedReason reason,
+ enum qemuDomainAsyncJob asyncJob);
void qemuProcessAutostartAll(struct qemud_driver *driver);
void qemuProcessReconnectAll(virConnectPtr conn, struct qemud_driver *driver);
--
1.7.4.4