[libvirt] [PATCH v2 0/2] Rework reference locking; in QEMU, for starters

v2: - qemuDomObjEndAPI now resets the parameter to NULL
    - Fixes from Peter incorporated
    - rebased on current master

v1: https://www.redhat.com/archives/libvir-list/2014-December/msg00198.html

Martin Kletzander (2):
  conf: Rework virDomainObjListFindByUUID to allow more concurrent APIs
  qemu: completely rework reference counting

 src/conf/domain_conf.c    |  27 +-
 src/conf/domain_conf.h    |   2 +
 src/libvirt_private.syms  |   1 +
 src/qemu/THREADS.txt      |  40 ++-
 src/qemu/qemu_domain.c    |  29 +-
 src/qemu/qemu_domain.h    |  12 +-
 src/qemu/qemu_driver.c    | 708 ++++++++++++++++------------------------------
 src/qemu/qemu_migration.c | 108 +++----
 src/qemu/qemu_migration.h |  10 +-
 src/qemu/qemu_process.c   |  87 +++---
 10 files changed, 386 insertions(+), 638 deletions(-)

-- 
2.2.0

Currently, when one API blocks while holding a domain lock and a second API waits in virDomainObjListFindByUUID() for that domain lock (with the domain list locked), no other API can be executed on any domain of the whole hypervisor, because every one of them would wait for the domain list lock. This patch adds a new, optional approach: the domain is only ref'd (its reference counter is incremented) instead of being locked, and is locked only *after* the list itself is unlocked. We might consider only ref'ing the domain in the future and leaving locking to particular APIs, but that's no tonight's fairy tale.

Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
---
 src/conf/domain_conf.c   | 27 ++++++++++++++++++++++++---
 src/conf/domain_conf.h   |  2 ++
 src/libvirt_private.syms |  1 +
 3 files changed, 27 insertions(+), 3 deletions(-)

diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index 2d81c37..0bb216d 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -1068,8 +1068,10 @@ virDomainObjPtr virDomainObjListFindByID(virDomainObjListPtr doms,
 }
 
-virDomainObjPtr virDomainObjListFindByUUID(virDomainObjListPtr doms,
-                                           const unsigned char *uuid)
+static virDomainObjPtr
+virDomainObjListFindByUUIDInternal(virDomainObjListPtr doms,
+                                   const unsigned char *uuid,
+                                   bool ref)
 {
     char uuidstr[VIR_UUID_STRING_BUFLEN];
     virDomainObjPtr obj;
@@ -1078,12 +1080,31 @@ virDomainObjPtr virDomainObjListFindByUUID(virDomainObjListPtr doms,
     virUUIDFormat(uuid, uuidstr);
 
     obj = virHashLookup(doms->objs, uuidstr);
+    if (ref) {
+        virObjectRef(obj);
+        virObjectUnlock(doms);
+    }
     if (obj)
         virObjectLock(obj);
-    virObjectUnlock(doms);
+    if (!ref)
+        virObjectUnlock(doms);
     return obj;
 }
 
+virDomainObjPtr
+virDomainObjListFindByUUID(virDomainObjListPtr doms,
+                           const unsigned char *uuid)
+{
+    return virDomainObjListFindByUUIDInternal(doms, uuid, false);
+}
+
+virDomainObjPtr
+virDomainObjListFindByUUIDRef(virDomainObjListPtr doms,
+                              const unsigned char *uuid)
+{
+    return virDomainObjListFindByUUIDInternal(doms, uuid, true);
+}
+
 static int virDomainObjListSearchName(const void *payload,
                                       const void *name ATTRIBUTE_UNUSED,
                                       const void *data)

diff --git a/src/conf/domain_conf.h b/src/conf/domain_conf.h
index 439f3c0..3812a16 100644
--- a/src/conf/domain_conf.h
+++ b/src/conf/domain_conf.h
@@ -2285,6 +2285,8 @@ virDomainObjPtr virDomainObjListFindByID(virDomainObjListPtr doms,
                                          int id);
 virDomainObjPtr virDomainObjListFindByUUID(virDomainObjListPtr doms,
                                            const unsigned char *uuid);
+virDomainObjPtr virDomainObjListFindByUUIDRef(virDomainObjListPtr doms,
+                                              const unsigned char *uuid);
 virDomainObjPtr virDomainObjListFindByName(virDomainObjListPtr doms,
                                            const char *name);

diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms
index 1853a9c..2351b91 100644
--- a/src/libvirt_private.syms
+++ b/src/libvirt_private.syms
@@ -352,6 +352,7 @@ virDomainObjListExport;
 virDomainObjListFindByID;
 virDomainObjListFindByName;
 virDomainObjListFindByUUID;
+virDomainObjListFindByUUIDRef;
 virDomainObjListForEach;
 virDomainObjListGetActiveIDs;
 virDomainObjListGetInactiveNames;
-- 
2.2.0
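The locking change in this patch can be modeled in isolation. The following is a hedged, compilable sketch: `DomObj`, `DomList`, and `findByUUIDInternal` are toy stand-ins for virDomainObjPtr, virDomainObjListPtr, and virDomainObjListFindByUUIDInternal() (the fixed-size array replaces the real hash table); only the take-the-ref-before-dropping-the-list-lock ordering mirrors the actual patch.

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>
#include <string.h>

/* Toy stand-ins for the libvirt objects; only the lock/ref ordering
 * below corresponds to the patch. */
typedef struct {
    pthread_mutex_t lock;
    int refs;
    const char *uuid;
} DomObj;

typedef struct {
    pthread_mutex_t lock;
    DomObj *objs[8];
    size_t nobjs;
} DomList;

static DomObj *
findByUUIDInternal(DomList *doms, const char *uuid, int ref)
{
    DomObj *obj = NULL;
    size_t i;

    pthread_mutex_lock(&doms->lock);
    for (i = 0; i < doms->nobjs; i++) {
        if (strcmp(doms->objs[i]->uuid, uuid) == 0) {
            obj = doms->objs[i];
            break;
        }
    }

    if (ref) {
        /* Take the reference while the list lock is still held, then
         * drop the list lock *before* waiting on the object lock, so a
         * long-held object lock no longer blocks every other lookup. */
        if (obj)
            obj->refs++;            /* stands in for virObjectRef() */
        pthread_mutex_unlock(&doms->lock);
    }
    if (obj)
        pthread_mutex_lock(&obj->lock);
    if (!ref)
        pthread_mutex_unlock(&doms->lock);
    return obj;
}
```

In the non-ref path the list stays locked until the object lock is acquired, reproducing the old serialization; in the ref path other lookups can proceed while this caller waits on the object lock.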

On 12/05/14 15:02, Martin Kletzander wrote:
Currently, when there is an API that's blocking with locked domain and second API that's waiting in virDomainObjListFindByUUID() for the domain lock (with the domain list locked) no other API can be executed on any domain on the whole hypervisor because all would wait for the domain list to be locked. This patch adds new optional approach to this in which the domain is only ref'd (reference counter is incremented) instead of being locked and is locked *after* the list itself is unlocked. We might consider only ref'ing the domain in the future and leaving locking on particular APIs, but that's no tonight's fairy tale.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com> --- src/conf/domain_conf.c | 27 ++++++++++++++++++++++++--- src/conf/domain_conf.h | 2 ++ src/libvirt_private.syms | 1 + 3 files changed, 27 insertions(+), 3 deletions(-)
ACK, although given that the devel freeze is tomorrow and the second patch of this series is pretty invasive, I think giving this change a full dev cycle is the better way to go.

Peter

On Mon, Dec 08, 2014 at 04:07:39PM +0100, Peter Krempa wrote:
ACK, although given that the devel freeze is tomorrow and second patch of this series is pretty invasive, I think that giving this change a full dev cycle is a better way to go.
Agreed, particularly as this will be a longer dev cycle than usual due to Christmas - don't want to risk destabilising this release.

Regards,
Daniel
-- 
|: http://berrange.com       -o-  http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org        -o-             http://virt-manager.org :|
|: http://autobuild.org      -o-         http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org -o-       http://live.gnome.org/gtk-vnc :|

On Mon, Dec 08, 2014 at 03:12:11PM +0000, Daniel P. Berrange wrote:
On Mon, Dec 08, 2014 at 04:07:39PM +0100, Peter Krempa wrote:
ACK, although given that the devel freeze is tomorrow and second patch of this series is pretty invasive, I think that giving this change a full dev cycle is a better way to go.
Agreed, particularly as this will be a longer dev cycle than usual due to christmas - don't want to risk destablising this release.
Definitely, I wouldn't feel very safe this close to release :) I'll push it after 1.2.11 then. Thank you for the review,

There is one problem that causes various errors in the daemon. When a domain is waiting for a job, it is unlocked while waiting on the job condition. However, if that domain is, for example, transient and being removed in another API (e.g. when cancelling incoming migration), it gets unref'd. If the first call, the one that was waiting, then fails to get the job, it unrefs the domain object, and because that was the last reference, the whole domain object is disposed of. However, when finishing the call, the domain must be unlocked, but there is no way for the API to know whether it was cleaned up or not (unless there is some ugly temporary variable, but let's scratch that).

The root cause is that our APIs don't ref the objects they are using; they all rely on the implicit reference the object holds while it is in the domain list. That reference can go away while the API is waiting for a job. And because the APIs don't do their own ref'ing, we end up with the ugly checking of the return value of virObjectUnref() that we have everywhere.

This patch changes qemuDomObjFromDomain() to ref the domain (using virDomainObjListFindByUUIDRef()) and adds qemuDomObjEndAPI(), which should be the only function in which the return value of virObjectUnref() is checked. This makes all reference counting deterministic and makes the code a bit clearer.
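The cleanup helper this commit message describes can be sketched with simplified types. `Obj` and `objEndAPI` below are hypothetical stand-ins, and the plain integer refcount only works single-threaded (the real virObjectUnref() decrements atomically and runs the object's dispose callback):

```c
#include <assert.h>
#include <pthread.h>
#include <stdlib.h>

/* Miniature of the qemuDomObjEndAPI() idea: one helper that unlocks,
 * drops the caller's reference, and clears the pointer so no later code
 * path can touch a possibly-disposed object. */
typedef struct {
    pthread_mutex_t lock;
    int refs;
} Obj;

static void
objEndAPI(Obj **obj)
{
    if (!*obj)
        return;
    pthread_mutex_unlock(&(*obj)->lock);
    /* Plain int decrement; the real virObjectUnref() is atomic. */
    if (--(*obj)->refs == 0)
        free(*obj);
    *obj = NULL;   /* the v2 change: reset the caller's pointer to NULL */
}
```

Because the pointer is reset, a caller cannot accidentally unlock or unref the object twice, which is exactly the ambiguity ("was it cleaned or not?") the commit message complains about.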
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
---
 src/qemu/THREADS.txt      |  40 ++-
 src/qemu/qemu_domain.c    |  29 +-
 src/qemu/qemu_domain.h    |  12 +-
 src/qemu/qemu_driver.c    | 708 ++++++++++++++++------------------------------
 src/qemu/qemu_migration.c | 108 +++----
 src/qemu/qemu_migration.h |  10 +-
 src/qemu/qemu_process.c   |  87 +++---
 7 files changed, 359 insertions(+), 635 deletions(-)

diff --git a/src/qemu/THREADS.txt b/src/qemu/THREADS.txt
index 50a0cf9..53a3415 100644
--- a/src/qemu/THREADS.txt
+++ b/src/qemu/THREADS.txt
@@ -26,12 +26,20 @@ There are a number of locks on various objects
 * virDomainObjPtr
 
   Will be locked after calling any of the virDomainFindBy{ID,Name,UUID}
-  methods.
+  methods. However, the preferred method is qemuDomObjFromDomain(), which
+  uses virDomainFindByUUIDRef(); that also increases the reference counter
+  and finds the domain in the domain list without blocking all other
+  lookups. When the domain is locked and the reference increased, the
+  preferred way of decrementing the reference counter and unlocking the
+  domain is the qemuDomObjEndAPI() function.
 
   Lock must be held when changing/reading any variable in the virDomainObjPtr
 
   If the lock needs to be dropped & then re-acquired for a short period of
   time, the reference count must be incremented first using virDomainObjRef().
+  There is no need to increase the reference count if qemuDomObjFromDomain()
+  was used for looking up the domain. In this case there is one reference
+  already added by that function.
 
   This lock must not be held for anything which sleeps/waits (i.e. monitor
   commands).
@@ -109,7 +117,6 @@ To lock the virDomainObjPtr To acquire the normal job condition qemuDomainObjBeginJob() - - Increments ref count on virDomainObjPtr - Waits until the job is compatible with current async job or no async job is running - Waits for job.cond condition 'job.active != 0' using virDomainObjPtr @@ -122,14 +129,12 @@ To acquire the normal job condition qemuDomainObjEndJob() - Sets job.active to 0 - Signals on job.cond condition - - Decrements ref count on virDomainObjPtr To acquire the asynchronous job condition qemuDomainObjBeginAsyncJob() - - Increments ref count on virDomainObjPtr - Waits until no async job is running - Waits for job.cond condition 'job.active != 0' using virDomainObjPtr mutex @@ -141,7 +146,6 @@ To acquire the asynchronous job condition qemuDomainObjEndAsyncJob() - Sets job.asyncJob to 0 - Broadcasts on job.asyncCond condition - - Decrements ref count on virDomainObjPtr @@ -179,12 +183,10 @@ To acquire the QEMU monitor lock as part of an asynchronous job To keep a domain alive while waiting on a remote command qemuDomainObjEnterRemote() - - Increments ref count on virDomainObjPtr - Releases the virDomainObjPtr lock qemuDomainObjExitRemote() - Acquires the virDomainObjPtr lock - - Decrements ref count on virDomainObjPtr Design patterns @@ -195,18 +197,18 @@ Design patterns virDomainObjPtr obj; - obj = virDomainFindByUUID(driver->domains, dom->uuid); + obj = qemuDomObjFromDomain(dom); ...do work... 
- virDomainObjUnlock(obj); + qemuDomObjEndAPI(obj); * Updating something directly to do with a virDomainObjPtr virDomainObjPtr obj; - obj = virDomainFindByUUID(driver->domains, dom->uuid); + obj = qemuDomObjFromDomain(dom); qemuDomainObjBeginJob(obj, QEMU_JOB_TYPE); @@ -214,18 +216,15 @@ Design patterns qemuDomainObjEndJob(obj); - virDomainObjUnlock(obj); - - + qemuDomObjEndAPI(obj); * Invoking a monitor command on a virDomainObjPtr - virDomainObjPtr obj; qemuDomainObjPrivatePtr priv; - obj = virDomainFindByUUID(driver->domains, dom->uuid); + obj = qemuDomObjFromDomain(dom); qemuDomainObjBeginJob(obj, QEMU_JOB_TYPE); @@ -240,8 +239,7 @@ Design patterns ...do final work... qemuDomainObjEndJob(obj); - virDomainObjUnlock(obj); - + qemuDomObjEndAPI(obj); * Running asynchronous job @@ -249,7 +247,7 @@ Design patterns virDomainObjPtr obj; qemuDomainObjPrivatePtr priv; - obj = virDomainFindByUUID(driver->domains, dom->uuid); + obj = qemuDomObjFromDomain(dom); qemuDomainObjBeginAsyncJob(obj, QEMU_ASYNC_JOB_TYPE); qemuDomainObjSetAsyncJobMask(obj, allowedJobs); @@ -281,7 +279,7 @@ Design patterns ...do final work... qemuDomainObjEndAsyncJob(obj); - virDomainObjUnlock(obj); + qemuDomObjEndAPI(obj); * Coordinating with a remote server for migration @@ -289,7 +287,7 @@ Design patterns virDomainObjPtr obj; qemuDomainObjPrivatePtr priv; - obj = virDomainFindByUUID(driver->domains, dom->uuid); + obj = qemuDomObjFromDomain(dom); qemuDomainObjBeginAsyncJob(obj, QEMU_ASYNC_JOB_TYPE); @@ -309,4 +307,4 @@ Design patterns ...do final work... 
qemuDomainObjEndAsyncJob(obj); - virDomainObjUnlock(obj); + qemuDomObjEndAPI(obj); diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c index 220304f..d4a6021 100644 --- a/src/qemu/qemu_domain.c +++ b/src/qemu/qemu_domain.c @@ -1318,8 +1318,6 @@ qemuDomainObjBeginJobInternal(virQEMUDriverPtr driver, priv->jobs_queued++; then = now + QEMU_JOB_WAIT_TIME; - virObjectRef(obj); - retry: if (cfg->maxQueuedJobs && priv->jobs_queued > cfg->maxQueuedJobs) { @@ -1398,7 +1396,6 @@ qemuDomainObjBeginJobInternal(virQEMUDriverPtr driver, cleanup: priv->jobs_queued--; - virObjectUnref(obj); virObjectUnref(cfg); return ret; } @@ -1467,7 +1464,8 @@ qemuDomainObjBeginNestedJob(virQEMUDriverPtr driver, * Returns true if @obj was still referenced, false if it was * disposed of. */ -bool qemuDomainObjEndJob(virQEMUDriverPtr driver, virDomainObjPtr obj) +void +qemuDomainObjEndJob(virQEMUDriverPtr driver, virDomainObjPtr obj) { qemuDomainObjPrivatePtr priv = obj->privateData; qemuDomainJob job = priv->job.active; @@ -1483,11 +1481,9 @@ bool qemuDomainObjEndJob(virQEMUDriverPtr driver, virDomainObjPtr obj) if (qemuDomainTrackJob(job)) qemuDomainObjSaveJob(driver, obj); virCondSignal(&priv->job.cond); - - return virObjectUnref(obj); } -bool +void qemuDomainObjEndAsyncJob(virQEMUDriverPtr driver, virDomainObjPtr obj) { qemuDomainObjPrivatePtr priv = obj->privateData; @@ -1501,8 +1497,6 @@ qemuDomainObjEndAsyncJob(virQEMUDriverPtr driver, virDomainObjPtr obj) qemuDomainObjResetAsyncJob(priv); qemuDomainObjSaveJob(driver, obj); virCondBroadcast(&priv->job.asyncCond); - - return virObjectUnref(obj); } void @@ -1541,7 +1535,7 @@ qemuDomainObjEnterMonitorInternal(virQEMUDriverPtr driver, virReportError(VIR_ERR_OPERATION_FAILED, "%s", _("domain is no longer running")); /* Still referenced by the containing async job. 
*/ - ignore_value(qemuDomainObjEndJob(driver, obj)); + qemuDomainObjEndJob(driver, obj); return -1; } } else if (priv->job.asyncOwner == virThreadSelfID()) { @@ -1680,7 +1674,6 @@ void qemuDomainObjEnterRemote(virDomainObjPtr obj) { VIR_DEBUG("Entering remote (vm=%p name=%s)", obj, obj->def->name); - virObjectRef(obj); virObjectUnlock(obj); } @@ -1689,7 +1682,6 @@ void qemuDomainObjExitRemote(virDomainObjPtr obj) virObjectLock(obj); VIR_DEBUG("Exited remote (vm=%p name=%s)", obj, obj->def->name); - virObjectUnref(obj); } @@ -2390,8 +2382,7 @@ qemuDomainSnapshotDiscardAllMetadata(virQEMUDriverPtr driver, } /* - * The caller must hold a lock the vm and there must - * be no remaining references to vm. + * The caller must hold a lock the vm. */ void qemuDomainRemoveInactive(virQEMUDriverPtr driver, @@ -2422,7 +2413,7 @@ qemuDomainRemoveInactive(virQEMUDriverPtr driver, virObjectUnref(cfg); if (haveJob) - ignore_value(qemuDomainObjEndJob(driver, vm)); + qemuDomainObjEndJob(driver, vm); } void @@ -2795,3 +2786,11 @@ qemuDomainAgentAvailable(qemuDomainObjPrivatePtr priv, } return true; } + +void +qemuDomObjEndAPI(virDomainObjPtr *vm) +{ + virObjectUnlock(*vm); + virObjectUnref(*vm); + *vm = NULL; +} diff --git a/src/qemu/qemu_domain.h b/src/qemu/qemu_domain.h index efabd82..8b1a1b4 100644 --- a/src/qemu/qemu_domain.h +++ b/src/qemu/qemu_domain.h @@ -225,12 +225,10 @@ int qemuDomainObjBeginAsyncJob(virQEMUDriverPtr driver, qemuDomainAsyncJob asyncJob) ATTRIBUTE_RETURN_CHECK; -bool qemuDomainObjEndJob(virQEMUDriverPtr driver, - virDomainObjPtr obj) - ATTRIBUTE_RETURN_CHECK; -bool qemuDomainObjEndAsyncJob(virQEMUDriverPtr driver, - virDomainObjPtr obj) - ATTRIBUTE_RETURN_CHECK; +void qemuDomainObjEndJob(virQEMUDriverPtr driver, + virDomainObjPtr obj); +void qemuDomainObjEndAsyncJob(virQEMUDriverPtr driver, + virDomainObjPtr obj); void qemuDomainObjAbortAsyncJob(virDomainObjPtr obj); void qemuDomainObjSetJobPhase(virQEMUDriverPtr driver, virDomainObjPtr obj, @@ -411,4 +409,6 
@@ int qemuDomainJobInfoToParams(qemuDomainJobInfoPtr jobInfo, ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2) ATTRIBUTE_NONNULL(3) ATTRIBUTE_NONNULL(4); +void qemuDomObjEndAPI(virDomainObjPtr *vm); + #endif /* __QEMU_DOMAIN_H__ */ diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index 9152cf5..2e0b7fc 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -205,11 +205,11 @@ struct qemuAutostartData { * qemuDomObjFromDomain: * @domain: Domain pointer that has to be looked up * - * This function looks up @domain and returns the appropriate - * virDomainObjPtr. + * This function looks up @domain and returns the appropriate virDomainObjPtr + * that has to be cleaned by calling qemuDomObjEndAPI(). * - * Returns the domain object which is locked on success, NULL - * otherwise. + * Returns the domain object with incremented reference counter which is locked + * on success, NULL otherwise. */ static virDomainObjPtr qemuDomObjFromDomain(virDomainPtr domain) @@ -218,7 +218,7 @@ qemuDomObjFromDomain(virDomainPtr domain) virQEMUDriverPtr driver = domain->conn->privateData; char uuidstr[VIR_UUID_STRING_BUFLEN]; - vm = virDomainObjListFindByUUID(driver->domains, domain->uuid); + vm = virDomainObjListFindByUUIDRef(driver->domains, domain->uuid); if (!vm) { virUUIDFormat(domain->uuid, uuidstr); virReportError(VIR_ERR_NO_DOMAIN, @@ -230,9 +230,9 @@ qemuDomObjFromDomain(virDomainPtr domain) return vm; } -/* Looks up the domain object from snapshot and unlocks the driver. The - * returned domain object is locked and the caller is responsible for - * unlocking it */ +/* Looks up the domain object from snapshot and unlocks the + * driver. The returned domain object is locked and ref'd and the + * caller must call qemuDomObjEndAPI() on it. 
*/ static virDomainObjPtr qemuDomObjFromSnapshot(virDomainSnapshotPtr snapshot) { @@ -278,6 +278,7 @@ qemuAutostartDomain(virDomainObjPtr vm, flags |= VIR_DOMAIN_START_BYPASS_CACHE; virObjectLock(vm); + virObjectRef(vm); virResetLastError(); if (vm->autostart && !virDomainObjIsActive(vm)) { @@ -297,14 +298,12 @@ qemuAutostartDomain(virDomainObjPtr vm, err ? err->message : _("unknown error")); } - if (!qemuDomainObjEndJob(data->driver, vm)) - vm = NULL; + qemuDomainObjEndJob(data->driver, vm); } ret = 0; cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); virObjectUnref(cfg); return ret; } @@ -1446,7 +1445,7 @@ static virDomainPtr qemuDomainLookupByID(virConnectPtr conn, virDomainObjPtr vm; virDomainPtr dom = NULL; - vm = virDomainObjListFindByID(driver->domains, id); + vm = virDomainObjListFindByID(driver->domains, id); if (!vm) { virReportError(VIR_ERR_NO_DOMAIN, @@ -1461,8 +1460,7 @@ static virDomainPtr qemuDomainLookupByID(virConnectPtr conn, if (dom) dom->id = vm->def->id; cleanup: - if (vm) - virObjectUnlock(vm); + virObjectUnlock(vm); return dom; } @@ -1473,7 +1471,7 @@ static virDomainPtr qemuDomainLookupByUUID(virConnectPtr conn, virDomainObjPtr vm; virDomainPtr dom = NULL; - vm = virDomainObjListFindByUUID(driver->domains, uuid); + vm = virDomainObjListFindByUUIDRef(driver->domains, uuid); if (!vm) { char uuidstr[VIR_UUID_STRING_BUFLEN]; @@ -1490,8 +1488,7 @@ static virDomainPtr qemuDomainLookupByUUID(virConnectPtr conn, if (dom) dom->id = vm->def->id; cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return dom; } @@ -1517,8 +1514,7 @@ static virDomainPtr qemuDomainLookupByName(virConnectPtr conn, if (dom) dom->id = vm->def->id; cleanup: - if (vm) - virObjectUnlock(vm); + virObjectUnlock(vm); return dom; } @@ -1537,8 +1533,7 @@ static int qemuDomainIsActive(virDomainPtr dom) ret = virDomainObjIsActive(obj); cleanup: - if (obj) - virObjectUnlock(obj); + qemuDomObjEndAPI(&obj); return ret; } @@ -1556,8 +1551,7 @@ static int 
qemuDomainIsPersistent(virDomainPtr dom) ret = obj->persistent; cleanup: - if (obj) - virObjectUnlock(obj); + qemuDomObjEndAPI(&obj); return ret; } @@ -1575,8 +1569,7 @@ static int qemuDomainIsUpdated(virDomainPtr dom) ret = obj->updated; cleanup: - if (obj) - virObjectUnlock(obj); + qemuDomObjEndAPI(&obj); return ret; } @@ -1717,12 +1710,11 @@ static virDomainPtr qemuDomainCreateXML(virConnectPtr conn, VIR_DOMAIN_OBJ_LIST_ADD_CHECK_LIVE, NULL))) goto cleanup; - + virObjectRef(vm); def = NULL; if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) { qemuDomainRemoveInactive(driver, vm); - vm = NULL; goto cleanup; } @@ -1731,9 +1723,8 @@ static virDomainPtr qemuDomainCreateXML(virConnectPtr conn, VIR_NETDEV_VPORT_PROFILE_OP_CREATE, start_flags) < 0) { virDomainAuditStart(vm, "booted", false); - if (qemuDomainObjEndJob(driver, vm)) - qemuDomainRemoveInactive(driver, vm); - vm = NULL; + qemuDomainObjEndJob(driver, vm); + qemuDomainRemoveInactive(driver, vm); goto cleanup; } @@ -1756,13 +1747,11 @@ static virDomainPtr qemuDomainCreateXML(virConnectPtr conn, if (dom) dom->id = vm->def->id; - if (!qemuDomainObjEndJob(driver, vm)) - vm = NULL; + qemuDomainObjEndJob(driver, vm); cleanup: virDomainDefFree(def); - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); if (event) { qemuDomainEventQueue(driver, event); if (event2) @@ -1842,12 +1831,10 @@ static int qemuDomainSuspend(virDomainPtr dom) ret = 0; endjob: - if (!qemuDomainObjEndJob(driver, vm)) - vm = NULL; + qemuDomainObjEndJob(driver, vm); cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); if (event) qemuDomainEventQueue(driver, event); @@ -1908,12 +1895,10 @@ static int qemuDomainResume(virDomainPtr dom) ret = 0; endjob: - if (!qemuDomainObjEndJob(driver, vm)) - vm = NULL; + qemuDomainObjEndJob(driver, vm); cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); if (event) qemuDomainEventQueue(driver, event); virObjectUnref(caps); @@ -1998,12 +1983,10 @@ static int 
qemuDomainShutdownFlags(virDomainPtr dom, unsigned int flags) } endjob: - if (!qemuDomainObjEndJob(driver, vm)) - vm = NULL; + qemuDomainObjEndJob(driver, vm); cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return ret; } @@ -2098,12 +2081,10 @@ qemuDomainReboot(virDomainPtr dom, unsigned int flags) } endjob: - if (!qemuDomainObjEndJob(driver, vm)) - vm = NULL; + qemuDomainObjEndJob(driver, vm); cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return ret; } @@ -2141,12 +2122,10 @@ qemuDomainReset(virDomainPtr dom, unsigned int flags) priv->fakeReboot = false; endjob: - if (!qemuDomainObjEndJob(driver, vm)) - vm = NULL; + qemuDomainObjEndJob(driver, vm); cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return ret; } @@ -2226,20 +2205,14 @@ qemuDomainDestroyFlags(virDomainPtr dom, VIR_DOMAIN_EVENT_STOPPED_DESTROYED); virDomainAuditStop(vm, "destroyed"); - if (!vm->persistent) { - if (qemuDomainObjEndJob(driver, vm)) - qemuDomainRemoveInactive(driver, vm); - vm = NULL; - } ret = 0; - endjob: - if (vm && !qemuDomainObjEndJob(driver, vm)) - vm = NULL; + qemuDomainObjEndJob(driver, vm); + if (ret == 0 && !vm->persistent) + qemuDomainRemoveInactive(driver, vm); cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); if (event) qemuDomainEventQueue(driver, event); return ret; @@ -2264,8 +2237,7 @@ static char *qemuDomainGetOSType(virDomainPtr dom) { ignore_value(VIR_STRDUP(type, vm->def->os.type)); cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return type; } @@ -2285,8 +2257,7 @@ qemuDomainGetMaxMemory(virDomainPtr dom) ret = vm->def->mem.max_balloon; cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return ret; } @@ -2388,12 +2359,10 @@ static int qemuDomainSetMemoryFlags(virDomainPtr dom, unsigned long newmem, ret = 0; endjob: - if (!qemuDomainObjEndJob(driver, vm)) - vm = NULL; + qemuDomainObjEndJob(driver, vm); cleanup: - if (vm) - virObjectUnlock(vm); + 
qemuDomObjEndAPI(&vm); virObjectUnref(caps); virObjectUnref(cfg); return ret; @@ -2468,12 +2437,10 @@ static int qemuDomainSetMemoryStatsPeriod(virDomainPtr dom, int period, ret = 0; endjob: - if (!qemuDomainObjEndJob(driver, vm)) - vm = NULL; + qemuDomainObjEndJob(driver, vm); cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); virObjectUnref(caps); virObjectUnref(cfg); return ret; @@ -2516,12 +2483,10 @@ static int qemuDomainInjectNMI(virDomainPtr domain, unsigned int flags) qemuDomainObjExitMonitor(driver, vm); endjob: - if (!qemuDomainObjEndJob(driver, vm)) - vm = NULL; + qemuDomainObjEndJob(driver, vm); cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return ret; } @@ -2580,12 +2545,10 @@ static int qemuDomainSendKey(virDomainPtr domain, qemuDomainObjExitMonitor(driver, vm); endjob: - if (!qemuDomainObjEndJob(driver, vm)) - vm = NULL; + qemuDomainObjEndJob(driver, vm); cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return ret; } @@ -2636,10 +2599,7 @@ static int qemuDomainGetInfo(virDomainPtr dom, err = qemuMonitorGetBalloonInfo(priv->mon, &balloon); qemuDomainObjExitMonitor(driver, vm); } - if (!qemuDomainObjEndJob(driver, vm)) { - vm = NULL; - goto cleanup; - } + qemuDomainObjEndJob(driver, vm); if (err < 0) { /* We couldn't get current memory allocation but that's not @@ -2664,8 +2624,7 @@ static int qemuDomainGetInfo(virDomainPtr dom, ret = 0; cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return ret; } @@ -2690,8 +2649,7 @@ qemuDomainGetState(virDomainPtr dom, ret = 0; cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return ret; } @@ -2741,8 +2699,7 @@ qemuDomainGetControlInfo(virDomainPtr dom, ret = 0; cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return ret; } @@ -3237,37 +3194,29 @@ qemuDomainSaveInternal(virQEMUDriverPtr driver, virDomainPtr dom, event = virDomainEventLifecycleNewFromObj(vm, VIR_DOMAIN_EVENT_STOPPED, 
VIR_DOMAIN_EVENT_STOPPED_SAVED); - if (!vm->persistent) { - if (qemuDomainObjEndAsyncJob(driver, vm) > 0) - qemuDomainRemoveInactive(driver, vm); - vm = NULL; - } - endjob: - if (vm) { - if (ret != 0) { - if (was_running && virDomainObjIsActive(vm)) { - rc = qemuProcessStartCPUs(driver, vm, dom->conn, - VIR_DOMAIN_RUNNING_SAVE_CANCELED, - QEMU_ASYNC_JOB_SAVE); - if (rc < 0) { - VIR_WARN("Unable to resume guest CPUs after save failure"); - event = virDomainEventLifecycleNewFromObj(vm, - VIR_DOMAIN_EVENT_SUSPENDED, - VIR_DOMAIN_EVENT_SUSPENDED_API_ERROR); - } + qemuDomainObjEndAsyncJob(driver, vm); + if (ret != 0) { + if (was_running && virDomainObjIsActive(vm)) { + rc = qemuProcessStartCPUs(driver, vm, dom->conn, + VIR_DOMAIN_RUNNING_SAVE_CANCELED, + QEMU_ASYNC_JOB_SAVE); + if (rc < 0) { + VIR_WARN("Unable to resume guest CPUs after save failure"); + event = virDomainEventLifecycleNewFromObj(vm, + VIR_DOMAIN_EVENT_SUSPENDED, + VIR_DOMAIN_EVENT_SUSPENDED_API_ERROR); } } - if (qemuDomainObjEndAsyncJob(driver, vm) == 0) - vm = NULL; + } else if (!vm->persistent) { + qemuDomainRemoveInactive(driver, vm); } cleanup: VIR_FREE(xml); if (event) qemuDomainEventQueue(driver, event); - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); virObjectUnref(caps); return ret; } @@ -3333,11 +3282,9 @@ qemuDomainSaveFlags(virDomainPtr dom, const char *path, const char *dxml, ret = qemuDomainSaveInternal(driver, dom, vm, path, compressed, dxml, flags); - vm = NULL; cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); virObjectUnref(cfg); return ret; } @@ -3416,15 +3363,13 @@ qemuDomainManagedSave(virDomainPtr dom, unsigned int flags) VIR_INFO("Saving state of domain '%s' to '%s'", vm->def->name, name); - if ((ret = qemuDomainSaveInternal(driver, dom, vm, name, compressed, - NULL, flags)) == 0) + ret = qemuDomainSaveInternal(driver, dom, vm, name, + compressed, NULL, flags); + if (ret == 0) vm->hasManagedSave = true; - vm = NULL; - cleanup: - if (vm) - 
virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); VIR_FREE(name); virObjectUnref(cfg); @@ -3471,7 +3416,7 @@ qemuDomainHasManagedSaveImage(virDomainPtr dom, unsigned int flags) ret = vm->hasManagedSave; cleanup: - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return ret; } @@ -3506,7 +3451,7 @@ qemuDomainManagedSaveRemove(virDomainPtr dom, unsigned int flags) cleanup: VIR_FREE(name); - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return ret; } @@ -3765,16 +3710,12 @@ qemuDomainCoreDumpWithFormat(virDomainPtr dom, } } - if (qemuDomainObjEndAsyncJob(driver, vm) == 0) { - vm = NULL; - } else if ((ret == 0) && (flags & VIR_DUMP_CRASH) && !vm->persistent) { + qemuDomainObjEndAsyncJob(driver, vm); + if (ret == 0 && flags & VIR_DUMP_CRASH && !vm->persistent) qemuDomainRemoveInactive(driver, vm); - vm = NULL; - } cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); if (event) qemuDomainEventQueue(driver, event); return ret; @@ -3873,12 +3814,10 @@ qemuDomainScreenshot(virDomainPtr dom, unlink(tmp); VIR_FREE(tmp); - if (!qemuDomainObjEndJob(driver, vm)) - vm = NULL; + qemuDomainObjEndJob(driver, vm); cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); virObjectUnref(cfg); return ret; } @@ -3937,10 +3876,7 @@ static void processWatchdogEvent(virQEMUDriverPtr driver, virDomainObjPtr vm, in } endjob: - /* Safe to ignore value since ref count was incremented in - * qemuProcessHandleWatchdog(). - */ - ignore_value(qemuDomainObjEndAsyncJob(driver, vm)); + qemuDomainObjEndAsyncJob(driver, vm); cleanup: virObjectUnref(cfg); @@ -3987,10 +3923,7 @@ doCoreDumpToAutoDumpPath(virQEMUDriverPtr driver, "%s", _("Dump failed")); endjob: - /* Safe to ignore value since ref count was incremented in - * qemuProcessHandleGuestPanic(). 
- */ - ignore_value(qemuDomainObjEndAsyncJob(driver, vm)); + qemuDomainObjEndAsyncJob(driver, vm); cleanup: VIR_FREE(dumpfile); @@ -4120,10 +4053,7 @@ processDeviceDeletedEvent(virQEMUDriverPtr driver, devAlias); endjob: - /* We had an extra reference to vm before starting a new job so ending the - * job is guaranteed not to remove the last reference. - */ - ignore_value(qemuDomainObjEndJob(driver, vm)); + qemuDomainObjEndJob(driver, vm); cleanup: VIR_FREE(devAlias); @@ -4310,10 +4240,7 @@ processNicRxFilterChangedEvent(virQEMUDriverPtr driver, } endjob: - /* We had an extra reference to vm before starting a new job so ending the - * job is guaranteed not to remove the last reference. - */ - ignore_value(qemuDomainObjEndJob(driver, vm)); + qemuDomainObjEndJob(driver, vm); cleanup: virNetDevRxFilterFree(hostFilter); @@ -4371,10 +4298,7 @@ processSerialChangedEvent(virQEMUDriverPtr driver, "channel %s", vm->def->name, devAlias); endjob: - /* We had an extra reference to vm before starting a new job so ending the - * job is guaranteed not to remove the last reference. 
- */ - ignore_value(qemuDomainObjEndJob(driver, vm)); + qemuDomainObjEndJob(driver, vm); cleanup: VIR_FREE(devAlias); @@ -4413,8 +4337,7 @@ static void qemuProcessEventHandler(void *data, void *opaque) break; } - if (virObjectUnref(vm)) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); VIR_FREE(processEvent); } @@ -4748,12 +4671,10 @@ qemuDomainSetVcpusFlags(virDomainPtr dom, unsigned int nvcpus, ret = 0; endjob: - if (!qemuDomainObjEndJob(driver, vm)) - vm = NULL; + qemuDomainObjEndJob(driver, vm); cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); virObjectUnref(caps); VIR_FREE(cpuinfo); virObjectUnref(cfg); @@ -4945,8 +4866,7 @@ qemuDomainPinVcpuFlags(virDomainPtr dom, cleanup: if (cgroup_vcpu) virCgroupFree(&cgroup_vcpu); - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); if (event) qemuDomainEventQueue(driver, event); VIR_FREE(str); @@ -5047,8 +4967,7 @@ qemuDomainGetVcpuPinInfo(virDomainPtr dom, ret = ncpumaps; cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); virObjectUnref(caps); return ret; } @@ -5226,8 +5145,7 @@ qemuDomainPinEmulator(virDomainPtr dom, VIR_FREE(str); virBitmapFree(pcpumap); virObjectUnref(caps); - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); virObjectUnref(cfg); return ret; } @@ -5300,8 +5218,7 @@ qemuDomainGetEmulatorPinInfo(virDomainPtr dom, ret = 1; cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); virObjectUnref(caps); return ret; } @@ -5332,8 +5249,7 @@ qemuDomainGetVcpus(virDomainPtr dom, ret = qemuDomainHelperGetVcpus(vm, info, maxinfo, cpumaps, maplen); cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return ret; } @@ -5399,8 +5315,7 @@ qemuDomainGetVcpusFlags(virDomainPtr dom, unsigned int flags) qemuDomainObjExitAgent(vm); endjob: - if (!qemuDomainObjEndJob(driver, vm)) - vm = NULL; + qemuDomainObjEndJob(driver, vm); if (ncpuinfo < 0) goto cleanup; @@ -5425,8 +5340,7 @@ qemuDomainGetVcpusFlags(virDomainPtr dom, unsigned int flags) cleanup: - if 
(vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); virObjectUnref(caps); VIR_FREE(cpuinfo); return ret; @@ -5486,8 +5400,7 @@ static int qemuDomainGetSecurityLabel(virDomainPtr dom, virSecurityLabelPtr secl ret = 0; cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return ret; } @@ -5552,8 +5465,7 @@ static int qemuDomainGetSecurityLabelList(virDomainPtr dom, } cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return ret; } @@ -5990,6 +5902,7 @@ qemuDomainRestoreFlags(virConnectPtr conn, VIR_DOMAIN_OBJ_LIST_ADD_CHECK_LIVE, NULL))) goto cleanup; + virObjectRef(vm); def = NULL; if (flags & VIR_DOMAIN_SAVE_RUNNING) @@ -6010,12 +5923,9 @@ qemuDomainRestoreFlags(virConnectPtr conn, if (virFileWrapperFdClose(wrapperFd) < 0) VIR_WARN("Failed to close %s", path); - if (!qemuDomainObjEndJob(driver, vm)) { - vm = NULL; - } else if (ret < 0 && !vm->persistent) { + qemuDomainObjEndJob(driver, vm); + if (ret < 0 && !vm->persistent) qemuDomainRemoveInactive(driver, vm); - vm = NULL; - } cleanup: virDomainDefFree(def); @@ -6023,8 +5933,7 @@ qemuDomainRestoreFlags(virConnectPtr conn, VIR_FREE(xml); VIR_FREE(xmlout); virFileWrapperFdFree(wrapperFd); - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); virNWFilterUnlockFilterUpdates(); return ret; } @@ -6273,10 +6182,7 @@ static char *qemuDomainGetXMLDesc(virDomainPtr dom, qemuDomainObjExitMonitor(driver, vm); endjob: - if (!qemuDomainObjEndJob(driver, vm)) { - vm = NULL; - goto cleanup; - } + qemuDomainObjEndJob(driver, vm); if (err < 0) goto cleanup; if (err > 0) @@ -6291,8 +6197,7 @@ static char *qemuDomainGetXMLDesc(virDomainPtr dom, ret = qemuDomainFormatXML(driver, vm, flags); cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return ret; } @@ -6687,12 +6592,10 @@ qemuDomainCreateWithFlags(virDomainPtr dom, unsigned int flags) ret = 0; endjob: - if (!qemuDomainObjEndJob(driver, vm)) - vm = NULL; + qemuDomainObjEndJob(driver, vm); cleanup: - if (vm) - virObjectUnlock(vm); 
+ qemuDomObjEndAPI(&vm); virNWFilterUnlockFilterUpdates(); return ret; } @@ -6744,7 +6647,7 @@ static virDomainPtr qemuDomainDefineXML(virConnectPtr conn, const char *xml) driver->xmlopt, 0, &oldDef))) goto cleanup; - + virObjectRef(vm); def = NULL; if (virDomainHasDiskMirror(vm)) { virReportError(VIR_ERR_BLOCK_COPY_ACTIVE, "%s", @@ -6769,7 +6672,6 @@ static virDomainPtr qemuDomainDefineXML(virConnectPtr conn, const char *xml) /* Brand new domain. Remove it */ VIR_INFO("Deleting domain '%s'", vm->def->name); qemuDomainRemoveInactive(driver, vm); - vm = NULL; } goto cleanup; } @@ -6787,8 +6689,7 @@ static virDomainPtr qemuDomainDefineXML(virConnectPtr conn, const char *xml) cleanup: virDomainDefFree(oldDef); virDomainDefFree(def); - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); if (event) qemuDomainEventQueue(driver, event); virObjectUnref(qemuCaps); @@ -6894,15 +6795,13 @@ qemuDomainUndefineFlags(virDomainPtr dom, vm->persistent = 0; } else { qemuDomainRemoveInactive(driver, vm); - vm = NULL; } ret = 0; cleanup: VIR_FREE(name); - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); if (event) qemuDomainEventQueue(driver, event); virObjectUnref(cfg); @@ -7664,8 +7563,7 @@ static int qemuDomainAttachDeviceFlags(virDomainPtr dom, const char *xml, } endjob: - if (!qemuDomainObjEndJob(driver, vm)) - vm = NULL; + qemuDomainObjEndJob(driver, vm); cleanup: virObjectUnref(qemuCaps); @@ -7673,8 +7571,7 @@ static int qemuDomainAttachDeviceFlags(virDomainPtr dom, const char *xml, if (dev != dev_copy) virDomainDeviceDefFree(dev_copy); virDomainDeviceDefFree(dev); - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); virObjectUnref(caps); virObjectUnref(cfg); virNWFilterUnlockFilterUpdates(); @@ -7813,8 +7710,7 @@ static int qemuDomainUpdateDeviceFlags(virDomainPtr dom, } endjob: - if (!qemuDomainObjEndJob(driver, vm)) - vm = NULL; + qemuDomainObjEndJob(driver, vm); cleanup: virObjectUnref(qemuCaps); @@ -7822,8 +7718,7 @@ static int 
qemuDomainUpdateDeviceFlags(virDomainPtr dom, if (dev != dev_copy) virDomainDeviceDefFree(dev_copy); virDomainDeviceDefFree(dev); - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); virObjectUnref(caps); virObjectUnref(cfg); virNWFilterUnlockFilterUpdates(); @@ -7955,8 +7850,7 @@ static int qemuDomainDetachDeviceFlags(virDomainPtr dom, const char *xml, } endjob: - if (!qemuDomainObjEndJob(driver, vm)) - vm = NULL; + qemuDomainObjEndJob(driver, vm); cleanup: virObjectUnref(qemuCaps); @@ -7964,8 +7858,7 @@ static int qemuDomainDetachDeviceFlags(virDomainPtr dom, const char *xml, if (dev != dev_copy) virDomainDeviceDefFree(dev_copy); virDomainDeviceDefFree(dev); - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); virObjectUnref(caps); virObjectUnref(cfg); return ret; @@ -7993,8 +7886,7 @@ static int qemuDomainGetAutostart(virDomainPtr dom, ret = 0; cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return ret; } @@ -8059,8 +7951,7 @@ static int qemuDomainSetAutostart(virDomainPtr dom, cleanup: VIR_FREE(configFile); VIR_FREE(autostartLink); - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); virObjectUnref(cfg); return ret; } @@ -8114,8 +8005,7 @@ static char *qemuDomainGetSchedulerType(virDomainPtr dom, ignore_value(VIR_STRDUP(ret, "posix")); cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); virObjectUnref(cfg); return ret; } @@ -8499,12 +8389,10 @@ qemuDomainSetBlkioParameters(virDomainPtr dom, } endjob: - if (!qemuDomainObjEndJob(driver, vm)) - vm = NULL; + qemuDomainObjEndJob(driver, vm); cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); virObjectUnref(caps); virObjectUnref(cfg); return ret; @@ -8917,8 +8805,7 @@ qemuDomainGetBlkioParameters(virDomainPtr dom, ret = 0; cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); virObjectUnref(caps); virObjectUnref(cfg); return ret; @@ -9062,7 +8949,7 @@ qemuDomainSetMemoryParameters(virDomainPtr dom, ret = 0; cleanup: - virObjectUnlock(vm); + 
qemuDomObjEndAPI(&vm); virObjectUnref(caps); virObjectUnref(cfg); return ret; @@ -9215,8 +9102,7 @@ qemuDomainGetMemoryParameters(virDomainPtr dom, ret = 0; cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); virObjectUnref(caps); virObjectUnref(cfg); return ret; @@ -9428,13 +9314,11 @@ qemuDomainSetNumaParameters(virDomainPtr dom, ret = 0; endjob: - if (!qemuDomainObjEndJob(driver, vm)) - vm = NULL; + qemuDomainObjEndJob(driver, vm); cleanup: virBitmapFree(nodeset); - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); virObjectUnref(caps); virObjectUnref(cfg); return ret; @@ -9538,8 +9422,7 @@ qemuDomainGetNumaParameters(virDomainPtr dom, cleanup: VIR_FREE(nodeset); - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); virObjectUnref(caps); return ret; } @@ -9826,8 +9709,7 @@ qemuDomainSetSchedulerParametersFlags(virDomainPtr dom, cleanup: virDomainDefFree(vmdef); - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); if (eventNparams) virTypedParamsFree(eventParams, eventNparams); virObjectUnref(caps); @@ -10064,8 +9946,7 @@ qemuDomainGetSchedulerParametersFlags(virDomainPtr dom, ret = 0; cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); virObjectUnref(caps); virObjectUnref(cfg); return ret; @@ -10163,13 +10044,11 @@ qemuDomainBlockResize(virDomainPtr dom, ret = 0; endjob: - if (!qemuDomainObjEndJob(driver, vm)) - vm = NULL; + qemuDomainObjEndJob(driver, vm); cleanup: VIR_FREE(device); - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return ret; } @@ -10240,12 +10119,10 @@ qemuDomainBlockStats(virDomainPtr dom, qemuDomainObjExitMonitor(driver, vm); endjob: - if (!qemuDomainObjEndJob(driver, vm)) - vm = NULL; + qemuDomainObjEndJob(driver, vm); cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return ret; } @@ -10415,12 +10292,10 @@ qemuDomainBlockStatsFlags(virDomainPtr dom, *nparams = tmp; endjob: - if (!qemuDomainObjEndJob(driver, vm)) - vm = NULL; + qemuDomainObjEndJob(driver, vm); cleanup: - 
if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return ret; } @@ -10461,8 +10336,7 @@ qemuDomainInterfaceStats(virDomainPtr dom, _("invalid path, '%s' is not a known interface"), path); cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return ret; } @@ -10646,14 +10520,12 @@ qemuDomainSetInterfaceParameters(virDomainPtr dom, ret = 0; endjob: - if (!qemuDomainObjEndJob(driver, vm)) - vm = NULL; + qemuDomainObjEndJob(driver, vm); cleanup: virNetDevBandwidthFree(bandwidth); virNetDevBandwidthFree(newBandwidth); - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); virObjectUnref(caps); virObjectUnref(cfg); return ret; @@ -10773,8 +10645,7 @@ qemuDomainGetInterfaceParameters(virDomainPtr dom, ret = 0; cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); virObjectUnref(caps); return ret; } @@ -10823,12 +10694,10 @@ qemuDomainMemoryStats(virDomainPtr dom, } } - if (!qemuDomainObjEndJob(driver, vm)) - vm = NULL; + qemuDomainObjEndJob(driver, vm); cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return ret; } @@ -10885,8 +10754,7 @@ qemuDomainBlockPeek(virDomainPtr dom, cleanup: VIR_FORCE_CLOSE(fd); - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return ret; } @@ -10966,16 +10834,14 @@ qemuDomainMemoryPeek(virDomainPtr dom, ret = 0; endjob: - if (!qemuDomainObjEndJob(driver, vm)) - vm = NULL; + qemuDomainObjEndJob(driver, vm); cleanup: VIR_FORCE_CLOSE(fd); if (tmp) unlink(tmp); VIR_FREE(tmp); - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); virObjectUnref(cfg); return ret; } @@ -11153,8 +11019,7 @@ qemuDomainGetBlockInfo(virDomainPtr dom, ret = 0; } - if (!qemuDomainObjEndJob(driver, vm)) - vm = NULL; + qemuDomainObjEndJob(driver, vm); } else { ret = 0; } @@ -11175,7 +11040,7 @@ qemuDomainGetBlockInfo(virDomainPtr dom, _("domain is not running")); ret = -1; } - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); virObjectUnref(cfg); return ret; } @@ -11500,7 +11365,7 @@ 
qemuDomainMigrateBegin3(virDomainPtr domain, return NULL; if (virDomainMigrateBegin3EnsureACL(domain->conn, vm->def) < 0) { - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return NULL; } @@ -11536,7 +11401,7 @@ qemuDomainMigrateBegin3Params(virDomainPtr domain, return NULL; if (virDomainMigrateBegin3ParamsEnsureACL(domain->conn, vm->def) < 0) { - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return NULL; } @@ -11780,7 +11645,7 @@ qemuDomainMigratePerform3(virDomainPtr dom, return -1; if (virDomainMigratePerform3EnsureACL(dom->conn, vm->def) < 0) { - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return -1; } @@ -11945,7 +11810,7 @@ qemuDomainMigrateConfirm3(virDomainPtr domain, return -1; if (virDomainMigrateConfirm3EnsureACL(domain->conn, vm->def) < 0) { - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return -1; } @@ -11973,7 +11838,7 @@ qemuDomainMigrateConfirm3Params(virDomainPtr domain, return -1; if (virDomainMigrateConfirm3ParamsEnsureACL(domain->conn, vm->def) < 0) { - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return -1; } @@ -12276,8 +12141,7 @@ static int qemuDomainGetJobInfo(virDomainPtr dom, ret = 0; cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return ret; } @@ -12343,8 +12207,7 @@ qemuDomainGetJobStats(virDomainPtr dom, ret = 0; cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return ret; } @@ -12391,12 +12254,10 @@ static int qemuDomainAbortJob(virDomainPtr dom) qemuDomainObjExitMonitor(driver, vm); endjob: - if (!qemuDomainObjEndJob(driver, vm)) - vm = NULL; + qemuDomainObjEndJob(driver, vm); cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return ret; } @@ -12436,12 +12297,10 @@ qemuDomainMigrateSetMaxDowntime(virDomainPtr dom, qemuDomainObjExitMonitor(driver, vm); endjob: - if (!qemuDomainObjEndJob(driver, vm)) - vm = NULL; + qemuDomainObjEndJob(driver, vm); cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return ret; } @@ -12491,12 +12350,10 @@ 
qemuDomainMigrateGetCompressionCache(virDomainPtr dom, qemuDomainObjExitMonitor(driver, vm); endjob: - if (!qemuDomainObjEndJob(driver, vm)) - vm = NULL; + qemuDomainObjEndJob(driver, vm); cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return ret; } @@ -12547,12 +12404,10 @@ qemuDomainMigrateSetCompressionCache(virDomainPtr dom, qemuDomainObjExitMonitor(driver, vm); endjob: - if (!qemuDomainObjEndJob(driver, vm)) - vm = NULL; + qemuDomainObjEndJob(driver, vm); cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return ret; } @@ -12602,16 +12457,14 @@ qemuDomainMigrateSetMaxSpeed(virDomainPtr dom, priv->migMaxBandwidth = bandwidth; endjob: - if (!qemuDomainObjEndJob(driver, vm)) - vm = NULL; + qemuDomainObjEndJob(driver, vm); } else { priv->migMaxBandwidth = bandwidth; ret = 0; } cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return ret; } @@ -12638,8 +12491,7 @@ qemuDomainMigrateGetMaxSpeed(virDomainPtr dom, ret = 0; cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return ret; } @@ -14102,12 +13954,10 @@ qemuDomainSnapshotCreateXML(virDomainPtr domain, virDomainSnapshotObjListRemove(vm->snapshots, snap); } - if (!qemuDomainObjEndAsyncJob(driver, vm)) - vm = NULL; + qemuDomainObjEndAsyncJob(driver, vm); cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); virDomainSnapshotDefFree(def); VIR_FREE(xml); virObjectUnref(caps); @@ -14138,7 +13988,7 @@ qemuDomainSnapshotListNames(virDomainPtr domain, flags); cleanup: - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return n; } @@ -14162,7 +14012,7 @@ qemuDomainSnapshotNum(virDomainPtr domain, n = virDomainSnapshotObjListNum(vm->snapshots, NULL, flags); cleanup: - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return n; } @@ -14187,7 +14037,7 @@ qemuDomainListAllSnapshots(virDomainPtr domain, n = virDomainListSnapshots(vm->snapshots, NULL, domain, snaps, flags); cleanup: - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return n; } @@ 
-14246,7 +14096,7 @@ qemuDomainSnapshotNumChildren(virDomainSnapshotPtr snapshot, n = virDomainSnapshotObjListNum(vm->snapshots, snap, flags); cleanup: - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return n; } @@ -14276,7 +14126,7 @@ qemuDomainSnapshotListAllChildren(virDomainSnapshotPtr snapshot, flags); cleanup: - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return n; } @@ -14304,7 +14154,7 @@ qemuDomainSnapshotLookupByName(virDomainPtr domain, snapshot = virGetDomainSnapshot(domain, snap->def->name); cleanup: - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return snapshot; } @@ -14327,7 +14177,7 @@ qemuDomainHasCurrentSnapshot(virDomainPtr domain, ret = (vm->current_snapshot != NULL); cleanup: - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return ret; } @@ -14361,7 +14211,7 @@ qemuDomainSnapshotGetParent(virDomainSnapshotPtr snapshot, parent = virGetDomainSnapshot(snapshot->domain, snap->def->parent); cleanup: - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return parent; } @@ -14390,7 +14240,7 @@ qemuDomainSnapshotCurrent(virDomainPtr domain, snapshot = virGetDomainSnapshot(domain, vm->current_snapshot->def->name); cleanup: - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return snapshot; } @@ -14420,7 +14270,7 @@ qemuDomainSnapshotGetXMLDesc(virDomainSnapshotPtr snapshot, xml = virDomainSnapshotDefFormat(uuidstr, snap->def, flags, 0); cleanup: - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return xml; } @@ -14448,7 +14298,7 @@ qemuDomainSnapshotIsCurrent(virDomainSnapshotPtr snapshot, STREQ(snapshot->name, vm->current_snapshot->def->name)); cleanup: - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return ret; } @@ -14478,7 +14328,7 @@ qemuDomainSnapshotHasMetadata(virDomainSnapshotPtr snapshot, ret = 1; cleanup: - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return ret; } @@ -14767,9 +14617,8 @@ qemuDomainRevertToSnapshot(virDomainSnapshotPtr snapshot, if (qemuDomainSnapshotRevertInactive(driver, vm, snap) < 0) { if (!vm->persistent) { - if 
(qemuDomainObjEndJob(driver, vm)) - qemuDomainRemoveInactive(driver, vm); - vm = NULL; + qemuDomainObjEndJob(driver, vm); + qemuDomainRemoveInactive(driver, vm); goto cleanup; } goto endjob; @@ -14794,9 +14643,8 @@ qemuDomainRevertToSnapshot(virDomainSnapshotPtr snapshot, virDomainAuditStart(vm, "from-snapshot", rc >= 0); if (rc < 0) { if (!vm->persistent) { - if (qemuDomainObjEndJob(driver, vm)) - qemuDomainRemoveInactive(driver, vm); - vm = NULL; + qemuDomainObjEndJob(driver, vm); + qemuDomainRemoveInactive(driver, vm); goto cleanup; } goto endjob; @@ -14833,11 +14681,10 @@ qemuDomainRevertToSnapshot(virDomainSnapshotPtr snapshot, ret = 0; endjob: - if (vm && !qemuDomainObjEndJob(driver, vm)) - vm = NULL; + qemuDomainObjEndJob(driver, vm); cleanup: - if (vm && ret == 0) { + if (ret == 0) { if (qemuDomainSnapshotWriteMetadata(vm, snap, cfg->snapshotDir) < 0) ret = -1; @@ -14851,8 +14698,7 @@ qemuDomainRevertToSnapshot(virDomainSnapshotPtr snapshot, if (event2) qemuDomainEventQueue(driver, event2); } - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); virObjectUnref(caps); virObjectUnref(cfg); virNWFilterUnlockFilterUpdates(); @@ -15002,12 +14848,10 @@ qemuDomainSnapshotDelete(virDomainSnapshotPtr snapshot, } endjob: - if (!qemuDomainObjEndJob(driver, vm)) - vm = NULL; + qemuDomainObjEndJob(driver, vm); cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); virObjectUnref(cfg); return ret; } @@ -15055,12 +14899,10 @@ static int qemuDomainQemuMonitorCommand(virDomainPtr domain, const char *cmd, qemuDomainObjExitMonitor(driver, vm); endjob: - if (!qemuDomainObjEndJob(driver, vm)) - vm = NULL; + qemuDomainObjEndJob(driver, vm); cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return ret; } @@ -15128,21 +14970,18 @@ static virDomainPtr qemuDomainQemuAttach(virConnectPtr conn, VIR_DOMAIN_OBJ_LIST_ADD_CHECK_LIVE, NULL))) goto cleanup; - + virObjectRef(vm); def = NULL; if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) { 
qemuDomainRemoveInactive(driver, vm); - vm = NULL; goto cleanup; } if (qemuProcessAttach(conn, driver, vm, pid, pidfile, monConfig, monJSON) < 0) { - if (qemuDomainObjEndJob(driver, vm)) - qemuDomainRemoveInactive(driver, vm); - vm = NULL; - monConfig = NULL; + qemuDomainObjEndJob(driver, vm); + qemuDomainRemoveInactive(driver, vm); goto cleanup; } @@ -15152,14 +14991,12 @@ static virDomainPtr qemuDomainQemuAttach(virConnectPtr conn, if (dom) dom->id = vm->def->id; - if (!qemuDomainObjEndJob(driver, vm)) - vm = NULL; + qemuDomainObjEndJob(driver, vm); cleanup: virDomainDefFree(def); virDomainChrSourceDefFree(monConfig); - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); VIR_FREE(pidfile); virObjectUnref(caps); virObjectUnref(qemuCaps); @@ -15244,8 +15081,7 @@ qemuDomainOpenConsole(virDomainPtr dom, } cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return ret; } @@ -15319,8 +15155,7 @@ qemuDomainOpenChannel(virDomainPtr dom, } cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return ret; } @@ -15513,8 +15348,7 @@ qemuDomainBlockPivot(virConnectPtr conn, } -/* bandwidth in MiB/s per public API. Caller must lock vm beforehand, - * and not access it afterwards. */ +/* bandwidth in MiB/s per public API. Caller must lock vm beforehand. 
*/ static int qemuDomainBlockJobImpl(virDomainObjPtr vm, virConnectPtr conn, @@ -15735,18 +15569,14 @@ qemuDomainBlockJobImpl(virDomainObjPtr vm, } endjob: - if (!qemuDomainObjEndJob(driver, vm)) { - vm = NULL; - goto cleanup; - } + qemuDomainObjEndJob(driver, vm); cleanup: virObjectUnref(cfg); VIR_FREE(basePath); VIR_FREE(backingPath); VIR_FREE(device); - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); if (event) qemuDomainEventQueue(driver, event); if (event2) @@ -15766,7 +15596,7 @@ qemuDomainBlockJobAbort(virDomainPtr dom, const char *path, unsigned int flags) return -1; if (virDomainBlockJobAbortEnsureACL(dom->conn, vm->def) < 0) { - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return -1; } @@ -15776,7 +15606,7 @@ qemuDomainBlockJobAbort(virDomainPtr dom, const char *path, unsigned int flags) static int qemuDomainGetBlockJobInfo(virDomainPtr dom, const char *path, - virDomainBlockJobInfoPtr info, unsigned int flags) + virDomainBlockJobInfoPtr info, unsigned int flags) { virQEMUDriverPtr driver = dom->conn->privateData; qemuDomainObjPrivatePtr priv; @@ -15793,7 +15623,7 @@ qemuDomainGetBlockJobInfo(virDomainPtr dom, const char *path, return -1; if (virDomainGetBlockJobInfoEnsureACL(dom->conn, vm->def) < 0) { - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return -1; } @@ -15854,12 +15684,10 @@ qemuDomainGetBlockJobInfo(virDomainPtr dom, const char *path, virObjectUnref(cfg); } endjob: - if (!qemuDomainObjEndJob(driver, vm)) - vm = NULL; + qemuDomainObjEndJob(driver, vm); cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return ret; } @@ -15874,7 +15702,7 @@ qemuDomainBlockJobSetSpeed(virDomainPtr dom, const char *path, return -1; if (virDomainBlockJobSetSpeedEnsureACL(dom->conn, vm->def) < 0) { - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return -1; } @@ -15884,8 +15712,7 @@ qemuDomainBlockJobSetSpeed(virDomainPtr dom, const char *path, /* bandwidth in bytes/s. 
Caller must lock vm beforehand, and not - * access it afterwards; likewise, caller must not access mirror - * afterwards. */ + * access mirror afterwards. */ static int qemuDomainBlockCopyCommon(virDomainObjPtr vm, virConnectPtr conn, @@ -16063,13 +15890,10 @@ qemuDomainBlockCopyCommon(virDomainObjPtr vm, if (need_unlink && unlink(mirror->path)) VIR_WARN("unable to unlink just-created %s", mirror->path); virStorageSourceFree(mirror); - if (!qemuDomainObjEndJob(driver, vm)) - vm = NULL; + qemuDomainObjEndJob(driver, vm); cleanup: VIR_FREE(device); - if (vm) - virObjectUnlock(vm); virObjectUnref(cfg); return ret; } @@ -16140,12 +15964,10 @@ qemuDomainBlockRebase(virDomainPtr dom, const char *path, const char *base, VIR_DOMAIN_BLOCK_REBASE_REUSE_EXT); ret = qemuDomainBlockCopyCommon(vm, dom->conn, path, dest, bandwidth, 0, 0, flags, true); - vm = NULL; dest = NULL; cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); virStorageSourceFree(dest); return ret; } @@ -16214,11 +16036,9 @@ qemuDomainBlockCopy(virDomainPtr dom, const char *disk, const char *destxml, ret = qemuDomainBlockCopyCommon(vm, dom->conn, disk, dest, bandwidth, granularity, buf_size, flags, false); - vm = NULL; cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return ret; } @@ -16234,7 +16054,7 @@ qemuDomainBlockPull(virDomainPtr dom, const char *path, unsigned long bandwidth, return -1; if (virDomainBlockPullEnsureACL(dom->conn, vm->def) < 0) { - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return -1; } @@ -16473,16 +16293,14 @@ qemuDomainBlockCommit(virDomainPtr dom, VIR_DISK_CHAIN_READ_ONLY); } virStorageSourceFree(mirror); - if (!qemuDomainObjEndJob(driver, vm)) - vm = NULL; + qemuDomainObjEndJob(driver, vm); cleanup: VIR_FREE(topPath); VIR_FREE(basePath); VIR_FREE(backingPath); VIR_FREE(device); - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return ret; } @@ -16543,12 +16361,10 @@ qemuDomainOpenGraphics(virDomainPtr dom, ret = 
qemuMonitorOpenGraphics(priv->mon, protocol, fd, "graphicsfd", (flags & VIR_DOMAIN_OPEN_GRAPHICS_SKIPAUTH) != 0); qemuDomainObjExitMonitor(driver, vm); - if (!qemuDomainObjEndJob(driver, vm)) - vm = NULL; + qemuDomainObjEndJob(driver, vm); cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return ret; } @@ -16614,8 +16430,7 @@ qemuDomainOpenGraphicsFD(virDomainPtr dom, ret = qemuMonitorOpenGraphics(priv->mon, protocol, pair[1], "graphicsfd", (flags & VIR_DOMAIN_OPEN_GRAPHICS_SKIPAUTH)); qemuDomainObjExitMonitor(driver, vm); - if (!qemuDomainObjEndJob(driver, vm)) - vm = NULL; + qemuDomainObjEndJob(driver, vm); if (ret < 0) goto cleanup; @@ -16625,8 +16440,7 @@ qemuDomainOpenGraphicsFD(virDomainPtr dom, cleanup: VIR_FORCE_CLOSE(pair[0]); VIR_FORCE_CLOSE(pair[1]); - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return ret; } @@ -16980,14 +16794,11 @@ qemuDomainSetBlockIoTune(virDomainPtr dom, } endjob: - - if (!qemuDomainObjEndJob(driver, vm)) - vm = NULL; + qemuDomainObjEndJob(driver, vm); cleanup: VIR_FREE(device); - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); if (eventNparams) virTypedParamsFree(eventParams, eventNparams); virObjectUnref(caps); @@ -17183,13 +16994,11 @@ qemuDomainGetBlockIoTune(virDomainPtr dom, ret = 0; endjob: - if (!qemuDomainObjEndJob(driver, vm)) - vm = NULL; + qemuDomainObjEndJob(driver, vm); cleanup: VIR_FREE(device); - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); virObjectUnref(caps); return ret; } @@ -17257,12 +17066,10 @@ qemuDomainGetDiskErrors(virDomainPtr dom, ret = n; endjob: - if (!qemuDomainObjEndJob(driver, vm)) - vm = NULL; + qemuDomainObjEndJob(driver, vm); cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); virHashFree(table); if (ret < 0) { for (i = 0; i < n; i++) @@ -17304,7 +17111,7 @@ qemuDomainSetMetadata(virDomainPtr dom, cfg->configDir, flags); cleanup: - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); virObjectUnref(caps); virObjectUnref(cfg); return ret; @@ 
-17333,7 +17140,7 @@ qemuDomainGetMetadata(virDomainPtr dom, ret = virDomainObjGetMetadata(vm, type, uri, caps, driver->xmlopt, flags); cleanup: - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); virObjectUnref(caps); return ret; } @@ -17382,8 +17189,7 @@ qemuDomainGetCPUStats(virDomainPtr domain, ret = virCgroupGetPercpuStats(priv->cgroup, params, nparams, start_cpu, ncpus, priv->nvcpupids); cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return ret; } @@ -17472,12 +17278,10 @@ qemuDomainPMSuspendForDuration(virDomainPtr dom, qemuDomainObjExitAgent(vm); endjob: - if (!qemuDomainObjEndJob(driver, vm)) - vm = NULL; + qemuDomainObjEndJob(driver, vm); cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return ret; } @@ -17521,12 +17325,10 @@ qemuDomainPMWakeup(virDomainPtr dom, qemuDomainObjExitMonitor(driver, vm); endjob: - if (!qemuDomainObjEndJob(driver, vm)) - vm = NULL; + qemuDomainObjEndJob(driver, vm); cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return ret; } @@ -17597,12 +17399,10 @@ qemuDomainQemuAgentCommand(virDomainPtr domain, VIR_FREE(result); endjob: - if (!qemuDomainObjEndJob(driver, vm)) - vm = NULL; + qemuDomainObjEndJob(driver, vm); cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return result; } @@ -17706,12 +17506,10 @@ qemuDomainFSTrim(virDomainPtr dom, qemuDomainObjExitAgent(vm); endjob: - if (!qemuDomainObjEndJob(driver, vm)) - vm = NULL; + qemuDomainObjEndJob(driver, vm); cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return ret; } @@ -17891,12 +17689,10 @@ qemuDomainGetTime(virDomainPtr dom, ret = 0; endjob: - if (!qemuDomainObjEndJob(driver, vm)) - vm = NULL; + qemuDomainObjEndJob(driver, vm); cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return ret; } @@ -17965,12 +17761,10 @@ qemuDomainSetTime(virDomainPtr dom, ret = 0; endjob: - if (!qemuDomainObjEndJob(driver, vm)) - vm = NULL; + qemuDomainObjEndJob(driver, vm); cleanup: - if 
(vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return ret; } @@ -18005,12 +17799,10 @@ qemuDomainFSFreeze(virDomainPtr dom, ret = qemuDomainSnapshotFSFreeze(driver, vm, mountpoints, nmountpoints); endjob: - if (!qemuDomainObjEndJob(driver, vm)) - vm = NULL; + qemuDomainObjEndJob(driver, vm); cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return ret; } @@ -18051,12 +17843,10 @@ qemuDomainFSThaw(virDomainPtr dom, ret = qemuDomainSnapshotFSThaw(driver, vm, true); endjob: - if (!qemuDomainObjEndJob(driver, vm)) - vm = NULL; + qemuDomainObjEndJob(driver, vm); cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return ret; } @@ -18733,13 +18523,15 @@ qemuConnectGetAllDomainStats(virConnectPtr conn, continue; if (doms != domlist && - !virConnectGetAllDomainStatsCheckACL(conn, dom->def)) + !virConnectGetAllDomainStatsCheckACL(conn, dom->def)) { + virObjectUnlock(dom); continue; + } - if (HAVE_JOB(domflags) && - qemuDomainObjBeginJob(driver, dom, QEMU_JOB_QUERY) < 0) + if (HAVE_JOB(privflags) && + qemuDomainObjBeginJob(driver, dom, QEMU_JOB_QUERY) >= 0) /* As it was never requested. Gather as much as possible anyway. 
*/ - domflags &= ~QEMU_DOMAIN_STATS_HAVE_JOB; + domflags |= QEMU_DOMAIN_STATS_HAVE_JOB; if (qemuDomainGetStats(conn, dom, stats, &tmp, domflags) < 0) goto endjob; @@ -18747,12 +18539,10 @@ qemuConnectGetAllDomainStats(virConnectPtr conn, if (tmp) tmpstats[nstats++] = tmp; - if (HAVE_JOB(domflags) && !qemuDomainObjEndJob(driver, dom)) { - dom = NULL; - continue; - } + if (HAVE_JOB(domflags)) + qemuDomainObjEndJob(driver, dom); - virObjectUnlock(dom); + qemuDomObjEndAPI(&dom); dom = NULL; } @@ -18763,8 +18553,7 @@ qemuConnectGetAllDomainStats(virConnectPtr conn, endjob: if (HAVE_JOB(domflags) && dom) - if (!qemuDomainObjEndJob(driver, dom)) - dom = NULL; + qemuDomainObjEndJob(driver, dom); cleanup: if (dom) @@ -18835,8 +18624,7 @@ qemuDomainGetFSInfo(virDomainPtr dom, qemuDomainObjExitAgent(vm); endjob: - if (!qemuDomainObjEndJob(driver, vm)) - vm = NULL; + qemuDomainObjEndJob(driver, vm); cleanup: if (vm) diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c index 0acbb57..e4c8cf8 100644 --- a/src/qemu/qemu_migration.c +++ b/src/qemu/qemu_migration.c @@ -2736,31 +2736,20 @@ qemuMigrationBegin(virConnectPtr conn, if (virCloseCallbacksSet(driver->closeCallbacks, vm, conn, qemuMigrationCleanup) < 0) goto endjob; - if (qemuMigrationJobContinue(vm) == 0) { - vm = NULL; - virReportError(VIR_ERR_OPERATION_FAILED, - "%s", _("domain disappeared")); - VIR_FREE(xml); - if (cookieout) - VIR_FREE(*cookieout); - } + qemuMigrationJobContinue(vm); } else { goto endjob; } cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return xml; endjob: - if ((flags & VIR_MIGRATE_CHANGE_PROTECTION)) { - if (qemuMigrationJobFinish(driver, vm) == 0) - vm = NULL; - } else { - if (!qemuDomainObjEndJob(driver, vm)) - vm = NULL; - } + if (flags & VIR_MIGRATE_CHANGE_PROTECTION) + qemuMigrationJobFinish(driver, vm); + else + qemuDomainObjEndJob(driver, vm); goto cleanup; } @@ -3099,12 +3088,7 @@ qemuMigrationPrepareAny(virQEMUDriverPtr driver, * This prevents any other APIs 
being invoked while incoming * migration is taking place. */ - if (!qemuMigrationJobContinue(vm)) { - vm = NULL; - virReportError(VIR_ERR_OPERATION_FAILED, - "%s", _("domain disappeared")); - goto cleanup; - } + qemuMigrationJobContinue(vm); if (autoPort) priv->migrationPort = port; @@ -3115,16 +3099,12 @@ qemuMigrationPrepareAny(virQEMUDriverPtr driver, VIR_FREE(xmlout); VIR_FORCE_CLOSE(dataFD[0]); VIR_FORCE_CLOSE(dataFD[1]); - if (vm) { - if (ret < 0) { - virPortAllocatorRelease(driver->migrationPorts, priv->nbdPort); - priv->nbdPort = 0; - } - if (ret >= 0 || vm->persistent) - virObjectUnlock(vm); - else - qemuDomainRemoveInactive(driver, vm); + if (ret < 0) { + virPortAllocatorRelease(driver->migrationPorts, priv->nbdPort); + priv->nbdPort = 0; + qemuDomainRemoveInactive(driver, vm); } + qemuDomObjEndAPI(&vm); if (event) qemuDomainEventQueue(driver, event); qemuMigrationCookieFree(mig); @@ -3137,8 +3117,7 @@ qemuMigrationPrepareAny(virQEMUDriverPtr driver, qemuProcessStop(driver, vm, VIR_DOMAIN_SHUTOFF_FAILED, 0); endjob: - if (!qemuMigrationJobFinish(driver, vm)) - vm = NULL; + qemuMigrationJobFinish(driver, vm); goto cleanup; } @@ -3507,19 +3486,16 @@ qemuMigrationConfirm(virConnectPtr conn, cookiein, cookieinlen, flags, cancelled); - if (qemuMigrationJobFinish(driver, vm) == 0) { - vm = NULL; - } else if (!virDomainObjIsActive(vm) && - (!vm->persistent || (flags & VIR_MIGRATE_UNDEFINE_SOURCE))) { + qemuMigrationJobFinish(driver, vm); + if (!virDomainObjIsActive(vm) && + (!vm->persistent || (flags & VIR_MIGRATE_UNDEFINE_SOURCE))) { if (flags & VIR_MIGRATE_UNDEFINE_SOURCE) virDomainDeleteConfig(cfg->configDir, cfg->autostartDir, vm); qemuDomainRemoveInactive(driver, vm); - vm = NULL; } cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); virObjectUnref(cfg); return ret; } @@ -4877,15 +4853,13 @@ qemuMigrationPerformJob(virQEMUDriverPtr driver, VIR_DOMAIN_EVENT_RESUMED_MIGRATED); } - if (!qemuMigrationJobFinish(driver, vm)) { - vm = NULL; - } else 
if (!virDomainObjIsActive(vm) && - (!vm->persistent || - (ret == 0 && (flags & VIR_MIGRATE_UNDEFINE_SOURCE)))) { + qemuMigrationJobFinish(driver, vm); + if (!virDomainObjIsActive(vm) && + (!vm->persistent || + (ret == 0 && (flags & VIR_MIGRATE_UNDEFINE_SOURCE)))) { if (flags & VIR_MIGRATE_UNDEFINE_SOURCE) virDomainDeleteConfig(cfg->configDir, cfg->autostartDir, vm); qemuDomainRemoveInactive(driver, vm); - vm = NULL; } if (orig_err) { @@ -4894,8 +4868,7 @@ qemuMigrationPerformJob(virQEMUDriverPtr driver, } cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); if (event) qemuDomainEventQueue(driver, event); virObjectUnref(cfg); @@ -4920,7 +4893,6 @@ qemuMigrationPerformPhase(virQEMUDriverPtr driver, { virObjectEventPtr event = NULL; int ret = -1; - bool hasrefs; /* If we didn't start the job in the begin phase, start it now. */ if (!(flags & VIR_MIGRATE_CHANGE_PROTECTION)) { @@ -4955,19 +4927,14 @@ qemuMigrationPerformPhase(virQEMUDriverPtr driver, endjob: if (ret < 0) - hasrefs = qemuMigrationJobFinish(driver, vm); + qemuMigrationJobFinish(driver, vm); else - hasrefs = qemuMigrationJobContinue(vm); - if (!hasrefs) { - vm = NULL; - } else if (!virDomainObjIsActive(vm) && !vm->persistent) { + qemuMigrationJobContinue(vm); + if (!virDomainObjIsActive(vm) && !vm->persistent) qemuDomainRemoveInactive(driver, vm); - vm = NULL; - } cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); if (event) qemuDomainEventQueue(driver, event); return ret; @@ -5301,21 +5268,16 @@ qemuMigrationFinish(virQEMUDriverPtr driver, VIR_WARN("Unable to encode migration cookie"); endjob: - if (qemuMigrationJobFinish(driver, vm) == 0) { - vm = NULL; - } else if (!vm->persistent && !virDomainObjIsActive(vm)) { + qemuMigrationJobFinish(driver, vm); + if (!vm->persistent && !virDomainObjIsActive(vm)) qemuDomainRemoveInactive(driver, vm); - vm = NULL; - } cleanup: virPortAllocatorRelease(driver->migrationPorts, port); - if (vm) { - if (priv->mon) - 
qemuMonitorSetDomainLog(priv->mon, -1); - VIR_FREE(priv->origname); - virObjectUnlock(vm); - } + if (priv->mon) + qemuMonitorSetDomainLog(priv->mon, -1); + VIR_FREE(priv->origname); + qemuDomObjEndAPI(&vm); if (event) qemuDomainEventQueue(driver, event); qemuMigrationCookieFree(mig); @@ -5563,15 +5525,13 @@ qemuMigrationJobStartPhase(virQEMUDriverPtr driver, virDomainObjPtr vm, qemuMigrationJobPhase phase) { - virObjectRef(vm); qemuMigrationJobSetPhase(driver, vm, phase); } -bool +void qemuMigrationJobContinue(virDomainObjPtr vm) { qemuDomainObjReleaseAsyncJob(vm); - return virObjectUnref(vm); } bool @@ -5594,7 +5554,7 @@ qemuMigrationJobIsActive(virDomainObjPtr vm, return true; } -bool +void qemuMigrationJobFinish(virQEMUDriverPtr driver, virDomainObjPtr vm) { return qemuDomainObjEndAsyncJob(driver, vm); diff --git a/src/qemu/qemu_migration.h b/src/qemu/qemu_migration.h index e7a90c3..1726455 100644 --- a/src/qemu/qemu_migration.h +++ b/src/qemu/qemu_migration.h @@ -1,7 +1,7 @@ /* * qemu_migration.h: QEMU migration handling * - * Copyright (C) 2006-2011 Red Hat, Inc. + * Copyright (C) 2006-2011, 2014 Red Hat, Inc. 
* * This library is free software; you can redistribute it and/or * modify it under the terms of the GNU Lesser General Public @@ -82,13 +82,13 @@ void qemuMigrationJobStartPhase(virQEMUDriverPtr driver, virDomainObjPtr vm, qemuMigrationJobPhase phase) ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2); -bool qemuMigrationJobContinue(virDomainObjPtr obj) - ATTRIBUTE_NONNULL(1) ATTRIBUTE_RETURN_CHECK; +void qemuMigrationJobContinue(virDomainObjPtr obj) + ATTRIBUTE_NONNULL(1); bool qemuMigrationJobIsActive(virDomainObjPtr vm, qemuDomainAsyncJob job) ATTRIBUTE_NONNULL(1); -bool qemuMigrationJobFinish(virQEMUDriverPtr driver, virDomainObjPtr obj) - ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2) ATTRIBUTE_RETURN_CHECK; +void qemuMigrationJobFinish(virQEMUDriverPtr driver, virDomainObjPtr obj) + ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2); int qemuMigrationSetOffline(virQEMUDriverPtr driver, virDomainObjPtr vm); diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c index a14b6f7..4776ce8 100644 --- a/src/qemu/qemu_process.c +++ b/src/qemu/qemu_process.c @@ -571,6 +571,7 @@ qemuProcessFakeReboot(void *opaque) virQEMUDriverConfigPtr cfg = virQEMUDriverGetConfig(driver); virDomainRunningReason reason = VIR_DOMAIN_RUNNING_BOOTED; int ret = -1; + VIR_DEBUG("vm=%p", vm); virObjectLock(vm); if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) @@ -619,16 +620,12 @@ qemuProcessFakeReboot(void *opaque) ret = 0; endjob: - if (!qemuDomainObjEndJob(driver, vm)) - vm = NULL; + qemuDomainObjEndJob(driver, vm); cleanup: - if (vm) { - if (ret == -1) - ignore_value(qemuProcessKill(vm, VIR_QEMU_PROCESS_KILL_FORCE)); - if (virObjectUnref(vm)) - virObjectUnlock(vm); - } + if (ret == -1) + ignore_value(qemuProcessKill(vm, VIR_QEMU_PROCESS_KILL_FORCE)); + qemuDomObjEndAPI(&vm); if (event) qemuDomainEventQueue(driver, event); virObjectUnref(cfg); @@ -935,15 +932,13 @@ qemuProcessHandleWatchdog(qemuMonitorPtr mon ATTRIBUTE_UNUSED, */ virObjectRef(vm); if (virThreadPoolSendJob(driver->workerPool, 
0, processEvent) < 0) { - if (!virObjectUnref(vm)) - vm = NULL; + virObjectUnref(vm); VIR_FREE(processEvent); } } } - if (vm) - virObjectUnlock(vm); + virObjectUnlock(vm); if (watchdogEvent) qemuDomainEventQueue(driver, watchdogEvent); if (lifecycleEvent) @@ -1439,14 +1434,12 @@ qemuProcessHandleGuestPanic(qemuMonitorPtr mon ATTRIBUTE_UNUSED, */ virObjectRef(vm); if (virThreadPoolSendJob(driver->workerPool, 0, processEvent) < 0) { - if (!virObjectUnref(vm)) - vm = NULL; + virObjectUnref(vm); VIR_FREE(processEvent); } cleanup: - if (vm) - virObjectUnlock(vm); + virObjectUnlock(vm); return 0; } @@ -3534,7 +3527,7 @@ struct qemuProcessReconnectData { * this thread function has increased the reference counter to it * so that we now have to close it. * - * This function also inherits a locked domain object. + * This function also inherits a locked and ref'd domain object. * * This function needs to: * 1. Enter job @@ -3567,10 +3560,6 @@ qemuProcessReconnect(void *opaque) qemuDomainObjRestoreJob(obj, &oldjob); - /* Hold an extra reference because we can't allow 'vm' to be - * deleted if qemuConnectMonitor() failed */ - virObjectRef(obj); - cfg = virQEMUDriverGetConfig(driver); priv = obj->privateData; @@ -3659,7 +3648,8 @@ qemuProcessReconnect(void *opaque) VIR_DEBUG("Finishing shutdown sequence for domain %s", obj->def->name); qemuProcessShutdownOrReboot(driver, obj); - goto endjob; + qemuDomainObjEndJob(driver, obj); + goto cleanup; } if (virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_DEVICE)) @@ -3711,23 +3701,11 @@ qemuProcessReconnect(void *opaque) if (virAtomicIntInc(&driver->nactive) == 1 && driver->inhibitCallback) driver->inhibitCallback(true, driver->inhibitOpaque); - endjob: - /* we hold an extra reference, so this will never fail */ - ignore_value(qemuDomainObjEndJob(driver, obj)); - - if (virObjectUnref(obj)) - virObjectUnlock(obj); - - virObjectUnref(conn); - virObjectUnref(cfg); - virNWFilterUnlockFilterUpdates(); - - return; + qemuDomainObjEndJob(driver, obj); 
+ goto cleanup; error: - /* we hold an extra reference, so this will never fail */ - ignore_value(qemuDomainObjEndJob(driver, obj)); - + qemuDomainObjEndJob(driver, obj); killvm: if (virDomainObjIsActive(obj)) { /* We can't get the monitor back, so must kill the VM @@ -3747,13 +3725,11 @@ qemuProcessReconnect(void *opaque) qemuProcessStop(driver, obj, state, 0); } - if (virObjectUnref(obj)) { - if (!obj->persistent) - qemuDomainRemoveInactive(driver, obj); - else - virObjectUnlock(obj); - } + if (!obj->persistent) + qemuDomainRemoveInactive(driver, obj); + cleanup: + qemuDomObjEndAPI(&obj); virObjectUnref(conn); virObjectUnref(cfg); virNWFilterUnlockFilterUpdates(); @@ -3777,9 +3753,10 @@ qemuProcessReconnectHelper(virDomainObjPtr obj, memcpy(data, src, sizeof(*data)); data->obj = obj; - /* this lock will be eventually transferred to the thread that handles the - * reconnect */ + /* this lock and reference will be eventually transferred to the thread + * that handles the reconnect */ virObjectLock(obj); + virObjectRef(obj); /* Since we close the connection later on, we have to make sure that the * threads we start see a valid connection throughout their lifetime. 
We @@ -3795,9 +3772,8 @@ qemuProcessReconnectHelper(virDomainObjPtr obj, qemuProcessStop(src->driver, obj, VIR_DOMAIN_SHUTOFF_FAILED, 0); if (!obj->persistent) qemuDomainRemoveInactive(src->driver, obj); - else - virObjectUnlock(obj); + qemuDomObjEndAPI(&obj); virObjectUnref(data->conn); VIR_FREE(data); return -1; @@ -5465,12 +5441,11 @@ qemuProcessAutoDestroy(virDomainObjPtr dom, VIR_DOMAIN_EVENT_STOPPED, VIR_DOMAIN_EVENT_STOPPED_DESTROYED); - if (!qemuDomainObjEndJob(driver, dom)) - dom = NULL; - if (dom && !dom->persistent) { + qemuDomainObjEndJob(driver, dom); + + if (!dom->persistent) qemuDomainRemoveInactive(driver, dom); - dom = NULL; - } + if (event) qemuDomainEventQueue(driver, event); @@ -5483,6 +5458,7 @@ int qemuProcessAutoDestroyAdd(virQEMUDriverPtr driver, virConnectPtr conn) { VIR_DEBUG("vm=%s, conn=%p", vm->def->name, conn); + virObjectRef(vm); return virCloseCallbacksSet(driver->closeCallbacks, vm, conn, qemuProcessAutoDestroy); } @@ -5490,9 +5466,12 @@ int qemuProcessAutoDestroyAdd(virQEMUDriverPtr driver, int qemuProcessAutoDestroyRemove(virQEMUDriverPtr driver, virDomainObjPtr vm) { + int ret; VIR_DEBUG("vm=%s", vm->def->name); - return virCloseCallbacksUnset(driver->closeCallbacks, vm, - qemuProcessAutoDestroy); + ret = virCloseCallbacksUnset(driver->closeCallbacks, vm, + qemuProcessAutoDestroy); + virObjectUnref(vm); + return ret; } bool qemuProcessAutoDestroyActive(virQEMUDriverPtr driver, -- 2.2.0

On 12/05/14 15:03, Martin Kletzander wrote:
There is one problem that causes various errors in the daemon. When a domain is waiting for a job, it is unlocked while waiting on the condition. However, if that domain is, for example, transient and being removed in another API (e.g. cancelling an incoming migration), it gets unref'd. If the first call, which was waiting, fails to get the job, it unrefs the domain object, and because that was the last reference, the whole domain object is disposed of. However, when finishing the call, the domain must be unlocked, but there is no way for the API to know whether it was already freed or not (unless there is some ugly temporary variable, but let's scratch that).
The root cause is that our APIs don't ref the objects they are using; they all rely on the implicit reference that the object holds while it is in the domain list. That reference can be removed while the API is waiting for a job. And because each API doesn't do its own ref'ing, we end up with the ugly checking of the return value of virObjectUnref() that we have everywhere.
This patch changes qemuDomObjFromDomain() to ref the domain (using virDomainObjListFindByUUIDRef()) and adds qemuDomObjEndAPI() which should be the only function in which the return value of virObjectUnref() is checked. This makes all reference counting deterministic and makes the code a bit clearer.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com> --- src/qemu/THREADS.txt | 40 ++- src/qemu/qemu_domain.c | 29 +- src/qemu/qemu_domain.h | 12 +- src/qemu/qemu_driver.c | 708 ++++++++++++++++------------------------------ src/qemu/qemu_migration.c | 108 +++---- src/qemu/qemu_migration.h | 10 +- src/qemu/qemu_process.c | 87 +++--- 7 files changed, 359 insertions(+), 635 deletions(-)
diff --git a/src/qemu/THREADS.txt b/src/qemu/THREADS.txt index 50a0cf9..53a3415 100644 --- a/src/qemu/THREADS.txt +++ b/src/qemu/THREADS.txt
...
@@ -109,7 +117,6 @@ To lock the virDomainObjPtr To acquire the normal job condition
qemuDomainObjBeginJob()
*
- - Increments ref count on virDomainObjPtr - Waits until the job is compatible with current async job or no async job is running - Waits for job.cond condition 'job.active != 0' using virDomainObjPtr @@ -122,14 +129,12 @@ To acquire the normal job condition qemuDomainObjEndJob() - Sets job.active to 0 - Signals on job.cond condition - - Decrements ref count on virDomainObjPtr
To acquire the asynchronous job condition
qemuDomainObjBeginAsyncJob() - - Increments ref count on virDomainObjPtr
Should we mention that having a ref already is a prerequisite for calling Begin*Job?
- Waits until no async job is running - Waits for job.cond condition 'job.active != 0' using virDomainObjPtr mutex
...
@@ -179,12 +183,10 @@ To acquire the QEMU monitor lock as part of an asynchronous job To keep a domain alive while waiting on a remote command
qemuDomainObjEnterRemote()
Same here ?
- - Increments ref count on virDomainObjPtr - Releases the virDomainObjPtr lock
qemuDomainObjExitRemote() - Acquires the virDomainObjPtr lock - - Decrements ref count on virDomainObjPtr
Design patterns
@@ -195,18 +197,18 @@ Design patterns
virDomainObjPtr obj;
- obj = virDomainFindByUUID(driver->domains, dom->uuid); + obj = qemuDomObjFromDomain(dom);
...do work...
- virDomainObjUnlock(obj); + qemuDomObjEndAPI(obj);
&obj to match the code.
* Updating something directly to do with a virDomainObjPtr
virDomainObjPtr obj;
- obj = virDomainFindByUUID(driver->domains, dom->uuid); + obj = qemuDomObjFromDomain(dom);
qemuDomainObjBeginJob(obj, QEMU_JOB_TYPE);
@@ -214,18 +216,15 @@ Design patterns
qemuDomainObjEndJob(obj);
- virDomainObjUnlock(obj); - - + qemuDomObjEndAPI(obj);
&obj ...
* Invoking a monitor command on a virDomainObjPtr
- virDomainObjPtr obj; qemuDomainObjPrivatePtr priv;
- obj = virDomainFindByUUID(driver->domains, dom->uuid); + obj = qemuDomObjFromDomain(dom);
qemuDomainObjBeginJob(obj, QEMU_JOB_TYPE);
@@ -240,8 +239,7 @@ Design patterns ...do final work...
qemuDomainObjEndJob(obj); - virDomainObjUnlock(obj); - + qemuDomObjEndAPI(obj);
&obj ...
* Running asynchronous job @@ -249,7 +247,7 @@ Design patterns virDomainObjPtr obj; qemuDomainObjPrivatePtr priv;
- obj = virDomainFindByUUID(driver->domains, dom->uuid); + obj = qemuDomObjFromDomain(dom);
qemuDomainObjBeginAsyncJob(obj, QEMU_ASYNC_JOB_TYPE); qemuDomainObjSetAsyncJobMask(obj, allowedJobs); @@ -281,7 +279,7 @@ Design patterns ...do final work...
qemuDomainObjEndAsyncJob(obj); - virDomainObjUnlock(obj); + qemuDomObjEndAPI(obj);
&obj ...
* Coordinating with a remote server for migration @@ -289,7 +287,7 @@ Design patterns virDomainObjPtr obj; qemuDomainObjPrivatePtr priv;
- obj = virDomainFindByUUID(driver->domains, dom->uuid); + obj = qemuDomObjFromDomain(dom);
qemuDomainObjBeginAsyncJob(obj, QEMU_ASYNC_JOB_TYPE);
@@ -309,4 +307,4 @@ Design patterns ...do final work...
qemuDomainObjEndAsyncJob(obj); - virDomainObjUnlock(obj); + qemuDomObjEndAPI(obj);
&obj ...
diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c index 220304f..d4a6021 100644 --- a/src/qemu/qemu_domain.c +++ b/src/qemu/qemu_domain.c @@ -1318,8 +1318,6 @@ qemuDomainObjBeginJobInternal(virQEMUDriverPtr driver, priv->jobs_queued++; then = now + QEMU_JOB_WAIT_TIME;
- virObjectRef(obj); - retry: if (cfg->maxQueuedJobs && priv->jobs_queued > cfg->maxQueuedJobs) { @@ -1398,7 +1396,6 @@ qemuDomainObjBeginJobInternal(virQEMUDriverPtr driver,
cleanup: priv->jobs_queued--; - virObjectUnref(obj); virObjectUnref(cfg); return ret; } @@ -1467,7 +1464,8 @@ qemuDomainObjBeginNestedJob(virQEMUDriverPtr driver, * Returns true if @obj was still referenced, false if it was * disposed of.
Comment is no longer valid; also, mentioning that the caller should hold a reference would be helpful.
*/ -bool qemuDomainObjEndJob(virQEMUDriverPtr driver, virDomainObjPtr obj) +void +qemuDomainObjEndJob(virQEMUDriverPtr driver, virDomainObjPtr obj) { qemuDomainObjPrivatePtr priv = obj->privateData; qemuDomainJob job = priv->job.active;
...
@@ -1541,7 +1535,7 @@ qemuDomainObjEnterMonitorInternal(virQEMUDriverPtr driver, virReportError(VIR_ERR_OPERATION_FAILED, "%s", _("domain is no longer running")); /* Still referenced by the containing async job. */
This comment is no longer valid.
- ignore_value(qemuDomainObjEndJob(driver, obj)); + qemuDomainObjEndJob(driver, obj); return -1; } } else if (priv->job.asyncOwner == virThreadSelfID()) {
...
@@ -2795,3 +2786,11 @@ qemuDomainAgentAvailable(qemuDomainObjPrivatePtr priv, } return true; } +
Adding some docs what this exactly does would be cool.
+void +qemuDomObjEndAPI(virDomainObjPtr *vm) +{ + virObjectUnlock(*vm); + virObjectUnref(*vm); + *vm = NULL; +}
[...]
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index 9152cf5..2e0b7fc 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -205,11 +205,11 @@ struct qemuAutostartData { * qemuDomObjFromDomain: * @domain: Domain pointer that has to be looked up * - * This function looks up @domain and returns the appropriate - * virDomainObjPtr. + * This function looks up @domain and returns the appropriate virDomainObjPtr + * that has to be cleaned by calling qemuDomObjEndAPI().
s/cleaned/released/ ?
* - * Returns the domain object which is locked on success, NULL - * otherwise. + * Returns the domain object with incremented reference counter which is locked + * on success, NULL otherwise. */ static virDomainObjPtr qemuDomObjFromDomain(virDomainPtr domain)
[...]
@@ -1446,7 +1445,7 @@ static virDomainPtr qemuDomainLookupByID(virConnectPtr conn, virDomainObjPtr vm; virDomainPtr dom = NULL;
- vm = virDomainObjListFindByID(driver->domains, id); + vm = virDomainObjListFindByID(driver->domains, id);
Unrelated whitespace change ..
if (!vm) { virReportError(VIR_ERR_NO_DOMAIN, @@ -1461,8 +1460,7 @@ static virDomainPtr qemuDomainLookupByID(virConnectPtr conn, if (dom) dom->id = vm->def->id;
cleanup: - if (vm) - virObjectUnlock(vm);
and possible crash. If virDomainObjListFindByID fails, the code jumps to the cleanup label with "vm == NULL". Both changes also don't do anything with the reference count and thus should be dropped from this patch as a separate cleanup. Additional changes would be required in case you want to make the API more robust by adding a similar "ref then lock" scheme also for the ID lookup func. Without that change, this leaves a vector that allows locking up the domains hash in case a VM is unresponsive.
+ virObjectUnlock(vm); return dom; }
@@ -1473,7 +1471,7 @@ static virDomainPtr qemuDomainLookupByUUID(virConnectPtr conn, virDomainObjPtr vm; virDomainPtr dom = NULL;
- vm = virDomainObjListFindByUUID(driver->domains, uuid); + vm = virDomainObjListFindByUUIDRef(driver->domains, uuid);
if (!vm) { char uuidstr[VIR_UUID_STRING_BUFLEN]; @@ -1490,8 +1488,7 @@ static virDomainPtr qemuDomainLookupByUUID(virConnectPtr conn, if (dom) dom->id = vm->def->id;
cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm);
Control flow can reach this part with "vm == NULL", which would cause a crash.
return dom; }
@@ -1517,8 +1514,7 @@ static virDomainPtr qemuDomainLookupByName(virConnectPtr conn, if (dom) dom->id = vm->def->id;
cleanup: - if (vm) - virObjectUnlock(vm); + virObjectUnlock(vm);
Aaand the same as above ;)
return dom; }
@@ -1537,8 +1533,7 @@ static int qemuDomainIsActive(virDomainPtr dom) ret = virDomainObjIsActive(obj);
cleanup: - if (obj) - virObjectUnlock(obj); + qemuDomObjEndAPI(&obj);
Same as above: if obj is NULL we jump here.
return ret; }
@@ -1556,8 +1551,7 @@ static int qemuDomainIsPersistent(virDomainPtr dom) ret = obj->persistent;
cleanup: - if (obj) - virObjectUnlock(obj); + qemuDomObjEndAPI(&obj);
Ditto.
return ret; }
@@ -1575,8 +1569,7 @@ static int qemuDomainIsUpdated(virDomainPtr dom) ret = obj->updated;
cleanup: - if (obj) - virObjectUnlock(obj); + qemuDomObjEndAPI(&obj);
Ditto.
return ret; }
@@ -1717,12 +1710,11 @@ static virDomainPtr qemuDomainCreateXML(virConnectPtr conn, VIR_DOMAIN_OBJ_LIST_ADD_CHECK_LIVE, NULL))) goto cleanup; - + virObjectRef(vm); def = NULL;
if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) { qemuDomainRemoveInactive(driver, vm); - vm = NULL; goto cleanup; }
@@ -1731,9 +1723,8 @@ static virDomainPtr qemuDomainCreateXML(virConnectPtr conn, VIR_NETDEV_VPORT_PROFILE_OP_CREATE, start_flags) < 0) { virDomainAuditStart(vm, "booted", false); - if (qemuDomainObjEndJob(driver, vm)) - qemuDomainRemoveInactive(driver, vm); - vm = NULL; + qemuDomainObjEndJob(driver, vm); + qemuDomainRemoveInactive(driver, vm); goto cleanup; }
@@ -1756,13 +1747,11 @@ static virDomainPtr qemuDomainCreateXML(virConnectPtr conn, if (dom) dom->id = vm->def->id;
- if (!qemuDomainObjEndJob(driver, vm)) - vm = NULL; + qemuDomainObjEndJob(driver, vm);
cleanup: virDomainDefFree(def); - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); if (event) { qemuDomainEventQueue(driver, event); if (event2) @@ -1842,12 +1831,10 @@ static int qemuDomainSuspend(virDomainPtr dom) ret = 0;
endjob: - if (!qemuDomainObjEndJob(driver, vm)) - vm = NULL; + qemuDomainObjEndJob(driver, vm);
cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm);
Aaand same here. I'm starting to think that

    void
    qemuDomObjEndAPI(virDomainObjPtr *vm)
    {
        virObjectUnlock(*vm);
        virObjectUnref(*vm);
        *vm = NULL;
    }

should look more like:

    void
    qemuDomObjEndAPI(virDomainObjPtr *vm)
    {
        if (!*vm)
            return;

        virObjectUnlock(*vm);
        virObjectUnref(*vm);
        *vm = NULL;
    }

I'll stop reporting this issue, assuming the tweak above.
if (event) qemuDomainEventQueue(driver, event);
[...]
@@ -3333,11 +3282,9 @@ qemuDomainSaveFlags(virDomainPtr dom, const char *path, const char *dxml,
ret = qemuDomainSaveInternal(driver, dom, vm, path, compressed, dxml, flags);
This function consumes @vm with its reference and lock, thus ...
- vm = NULL;
... you cannot remove this line.
cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); virObjectUnref(cfg); return ret; } @@ -3416,15 +3363,13 @@ qemuDomainManagedSave(virDomainPtr dom, unsigned int flags)
VIR_INFO("Saving state of domain '%s' to '%s'", vm->def->name, name);
- if ((ret = qemuDomainSaveInternal(driver, dom, vm, name, compressed, - NULL, flags)) == 0) + ret = qemuDomainSaveInternal(driver, dom, vm, name, + compressed, NULL, flags);
Same as above. qemuDomainSaveInternal consumes VM ...
+ if (ret == 0) vm->hasManagedSave = true;
- vm = NULL;
... so this needs to stay in the code.
- cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); VIR_FREE(name); virObjectUnref(cfg);
[...]
@@ -6744,7 +6647,7 @@ static virDomainPtr qemuDomainDefineXML(virConnectPtr conn, const char *xml) driver->xmlopt, 0, &oldDef))) goto cleanup; -
Don't remove the empty line for clarity.
+ virObjectRef(vm); def = NULL; if (virDomainHasDiskMirror(vm)) { virReportError(VIR_ERR_BLOCK_COPY_ACTIVE, "%s",
[...]

    --------------
    | Migrations |
    --------------

qemuDomainMigratePerform uses qemuDomObjFromDomain but doesn't unlock the object if the ACL check fails. I'll post a patch for that. You'll need to rebase on top of that. qemuDomainMigratePerform3Params does unlock on the ACL check, but you forgot to tweak the check.
@@ -12638,8 +12491,7 @@ qemuDomainMigrateGetMaxSpeed(virDomainPtr dom, ret = 0;
cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return ret; }
Uff, I'm not able to finish this review, so I'm sending this as part 1 and will continue tomorrow.

Peter

On Mon, Dec 08, 2014 at 07:57:50PM +0100, Peter Krempa wrote:
On 12/05/14 15:03, Martin Kletzander wrote:
There is one problem that causes various errors in the daemon. When a domain is waiting for a job, it is unlocked while waiting on the condition. However, if that domain is, for example, transient and being removed in another API (e.g. cancelling an incoming migration), it gets unref'd. If the first call, which was waiting, fails to get the job, it unrefs the domain object, and because that was the last reference, the whole domain object is disposed of. However, when finishing the call, the domain must be unlocked, but there is no way for the API to know whether it was already freed or not (unless there is some ugly temporary variable, but let's scratch that).
The root cause is that our APIs don't ref the objects they are using; they all rely on the implicit reference that the object holds while it is in the domain list. That reference can be removed while the API is waiting for a job. And because each API doesn't do its own ref'ing, we end up with the ugly checking of the return value of virObjectUnref() that we have everywhere.
This patch changes qemuDomObjFromDomain() to ref the domain (using virDomainObjListFindByUUIDRef()) and adds qemuDomObjEndAPI() which should be the only function in which the return value of virObjectUnref() is checked. This makes all reference counting deterministic and makes the code a bit clearer.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com> --- src/qemu/THREADS.txt | 40 ++- src/qemu/qemu_domain.c | 29 +- src/qemu/qemu_domain.h | 12 +- src/qemu/qemu_driver.c | 708 ++++++++++++++++------------------------------ src/qemu/qemu_migration.c | 108 +++---- src/qemu/qemu_migration.h | 10 +- src/qemu/qemu_process.c | 87 +++--- 7 files changed, 359 insertions(+), 635 deletions(-)
diff --git a/src/qemu/THREADS.txt b/src/qemu/THREADS.txt index 50a0cf9..53a3415 100644 --- a/src/qemu/THREADS.txt +++ b/src/qemu/THREADS.txt
...
@@ -109,7 +117,6 @@ To lock the virDomainObjPtr To acquire the normal job condition
qemuDomainObjBeginJob()
*
- - Increments ref count on virDomainObjPtr - Waits until the job is compatible with current async job or no async job is running - Waits for job.cond condition 'job.active != 0' using virDomainObjPtr @@ -122,14 +129,12 @@ To acquire the normal job condition qemuDomainObjEndJob() - Sets job.active to 0 - Signals on job.cond condition - - Decrements ref count on virDomainObjPtr
To acquire the asynchronous job condition
qemuDomainObjBeginAsyncJob() - - Increments ref count on virDomainObjPtr
Should we mention that having a ref already is a prerequisite for calling Begin*Job?
I take it that, from this patch onwards, having a ref is a prerequisite for doing anything.
- Waits until no async job is running - Waits for job.cond condition 'job.active != 0' using virDomainObjPtr mutex
...
@@ -179,12 +183,10 @@ To acquire the QEMU monitor lock as part of an asynchronous job To keep a domain alive while waiting on a remote command
qemuDomainObjEnterRemote()
Same here ?
Same here. Maybe it would be nice to document that the only way to obtain a virDomainPtr is with its own reference. Would that help?
- - Increments ref count on virDomainObjPtr - Releases the virDomainObjPtr lock
qemuDomainObjExitRemote() - Acquires the virDomainObjPtr lock - - Decrements ref count on virDomainObjPtr
Design patterns
@@ -195,18 +197,18 @@ Design patterns
virDomainObjPtr obj;
- obj = virDomainFindByUUID(driver->domains, dom->uuid); + obj = qemuDomObjFromDomain(dom);
...do work...
- virDomainObjUnlock(obj); + qemuDomObjEndAPI(obj);
&obj to match the code.
I guess I missed some 'git add' after one of the 'sed -s' :)
diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c index 220304f..d4a6021 100644 --- a/src/qemu/qemu_domain.c +++ b/src/qemu/qemu_domain.c @@ -1318,8 +1318,6 @@ qemuDomainObjBeginJobInternal(virQEMUDriverPtr driver, priv->jobs_queued++; then = now + QEMU_JOB_WAIT_TIME;
- virObjectRef(obj); - retry: if (cfg->maxQueuedJobs && priv->jobs_queued > cfg->maxQueuedJobs) { @@ -1398,7 +1396,6 @@ qemuDomainObjBeginJobInternal(virQEMUDriverPtr driver,
cleanup: priv->jobs_queued--; - virObjectUnref(obj); virObjectUnref(cfg); return ret; } @@ -1467,7 +1464,8 @@ qemuDomainObjBeginNestedJob(virQEMUDriverPtr driver, * Returns true if @obj was still referenced, false if it was * disposed of.
Comment is no longer valid; also, mentioning that the caller should hold a reference would be helpful.
Yes, this should be removed. About the reference, see above.
*/ -bool qemuDomainObjEndJob(virQEMUDriverPtr driver, virDomainObjPtr obj) +void +qemuDomainObjEndJob(virQEMUDriverPtr driver, virDomainObjPtr obj) { qemuDomainObjPrivatePtr priv = obj->privateData; qemuDomainJob job = priv->job.active;
...
@@ -1541,7 +1535,7 @@ qemuDomainObjEnterMonitorInternal(virQEMUDriverPtr driver, virReportError(VIR_ERR_OPERATION_FAILED, "%s", _("domain is no longer running")); /* Still referenced by the containing async job. */
This comment is no longer valid.
Yes, thanks (even though it is referenced by an API that entered the monitor).
- ignore_value(qemuDomainObjEndJob(driver, obj)); + qemuDomainObjEndJob(driver, obj); return -1; } } else if (priv->job.asyncOwner == virThreadSelfID()) {
...
@@ -2795,3 +2786,11 @@ qemuDomainAgentAvailable(qemuDomainObjPrivatePtr priv, } return true; } +
Adding some docs what this exactly does would be cool.
Would some short paragraph like the following one, for example, suffice?

    /*
     * Stop working with a vm: unlock the domain, decrement its
     * reference counter and set the parameter to NULL to make sure
     * there is no subsequent access to the domain structure.
     */
+void +qemuDomObjEndAPI(virDomainObjPtr *vm) +{ + virObjectUnlock(*vm); + virObjectUnref(*vm); + *vm = NULL; +}
[...]
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index 9152cf5..2e0b7fc 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -205,11 +205,11 @@ struct qemuAutostartData { * qemuDomObjFromDomain: * @domain: Domain pointer that has to be looked up * - * This function looks up @domain and returns the appropriate - * virDomainObjPtr. + * This function looks up @domain and returns the appropriate virDomainObjPtr + * that has to be cleaned by calling qemuDomObjEndAPI().
s/cleaned/released/ ?
I've got no problem changing it :)
* - * Returns the domain object which is locked on success, NULL - * otherwise. + * Returns the domain object with incremented reference counter which is locked + * on success, NULL otherwise. */ static virDomainObjPtr qemuDomObjFromDomain(virDomainPtr domain)
[...]
@@ -1446,7 +1445,7 @@ static virDomainPtr qemuDomainLookupByID(virConnectPtr conn, virDomainObjPtr vm; virDomainPtr dom = NULL;
- vm = virDomainObjListFindByID(driver->domains, id); + vm = virDomainObjListFindByID(driver->domains, id);
Unrelated whitespace change ..
I didn't feel like this deserved its own commit.
if (!vm) { virReportError(VIR_ERR_NO_DOMAIN, @@ -1461,8 +1460,7 @@ static virDomainPtr qemuDomainLookupByID(virConnectPtr conn, if (dom) dom->id = vm->def->id;
cleanup: - if (vm) - virObjectUnlock(vm);
and a possible crash: if virDomainObjListFindByID fails, the code jumps to the cleanup label with "vm == NULL".
Both changes also don't do anything with the reference count and thus should be dropped from this patch as a separate cleanup.
Additional changes would be required in case you want to make the API more robust by adding a similar "ref then lock" scheme for the ID lookup func too. Without that change, this leaves a vector that allows locking up the domains hash in case a VM is unresponsive.
For ByID and ByName it is not that easy to change the code to do "ref, unlock list, lock", because the domain itself is compared against the name/ID, and it has to be locked for that (at least for the ID it does). qemuDomObjEndAPI() needs to just return if the parameter points to NULL. That wasn't a problem with the old "if (unref) unlock" logic.
+ virObjectUnlock(vm); return dom; }
@@ -1473,7 +1471,7 @@ static virDomainPtr qemuDomainLookupByUUID(virConnectPtr conn, virDomainObjPtr vm; virDomainPtr dom = NULL;
- vm = virDomainObjListFindByUUID(driver->domains, uuid); + vm = virDomainObjListFindByUUIDRef(driver->domains, uuid);
if (!vm) { char uuidstr[VIR_UUID_STRING_BUFLEN]; @@ -1490,8 +1488,7 @@ static virDomainPtr qemuDomainLookupByUUID(virConnectPtr conn, if (dom) dom->id = vm->def->id;
cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm);
Control flow allows reaching this part with "vm == NULL", which would cause a crash.
return dom; }
@@ -1517,8 +1514,7 @@ static virDomainPtr qemuDomainLookupByName(virConnectPtr conn, if (dom) dom->id = vm->def->id;
cleanup: - if (vm) - virObjectUnlock(vm); + virObjectUnlock(vm);
Aaand the same as above ;)
return dom; }
@@ -1537,8 +1533,7 @@ static int qemuDomainIsActive(virDomainPtr dom) ret = virDomainObjIsActive(obj);
cleanup: - if (obj) - virObjectUnlock(obj); + qemuDomObjEndAPI(&obj);
Same as above: if obj is NULL, we jump here.
return ret; }
@@ -1556,8 +1551,7 @@ static int qemuDomainIsPersistent(virDomainPtr dom) ret = obj->persistent;
cleanup: - if (obj) - virObjectUnlock(obj); + qemuDomObjEndAPI(&obj);
Ditto.
return ret; }
@@ -1575,8 +1569,7 @@ static int qemuDomainIsUpdated(virDomainPtr dom) ret = obj->updated;
cleanup: - if (obj) - virObjectUnlock(obj); + qemuDomObjEndAPI(&obj);
Ditto.
return ret; }
@@ -1717,12 +1710,11 @@ static virDomainPtr qemuDomainCreateXML(virConnectPtr conn, VIR_DOMAIN_OBJ_LIST_ADD_CHECK_LIVE, NULL))) goto cleanup; - + virObjectRef(vm); def = NULL;
if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) { qemuDomainRemoveInactive(driver, vm); - vm = NULL; goto cleanup; }
@@ -1731,9 +1723,8 @@ static virDomainPtr qemuDomainCreateXML(virConnectPtr conn, VIR_NETDEV_VPORT_PROFILE_OP_CREATE, start_flags) < 0) { virDomainAuditStart(vm, "booted", false); - if (qemuDomainObjEndJob(driver, vm)) - qemuDomainRemoveInactive(driver, vm); - vm = NULL; + qemuDomainObjEndJob(driver, vm); + qemuDomainRemoveInactive(driver, vm); goto cleanup; }
@@ -1756,13 +1747,11 @@ static virDomainPtr qemuDomainCreateXML(virConnectPtr conn, if (dom) dom->id = vm->def->id;
- if (!qemuDomainObjEndJob(driver, vm)) - vm = NULL; + qemuDomainObjEndJob(driver, vm);
cleanup: virDomainDefFree(def); - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); if (event) { qemuDomainEventQueue(driver, event); if (event2) @@ -1842,12 +1831,10 @@ static int qemuDomainSuspend(virDomainPtr dom) ret = 0;
endjob: - if (!qemuDomainObjEndJob(driver, vm)) - vm = NULL; + qemuDomainObjEndJob(driver, vm);
cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm);
Aaand same here. I'm starting to think that
void
qemuDomObjEndAPI(virDomainObjPtr *vm)
{
    virObjectUnlock(*vm);
    virObjectUnref(*vm);
    *vm = NULL;
}
should look more like:
void
qemuDomObjEndAPI(virDomainObjPtr *vm)
{
    if (!*vm)
        return;

    virObjectUnlock(*vm);
    virObjectUnref(*vm);
    *vm = NULL;
}
Exactly what I thought above :)
I'll stop reporting this issue, assuming the tweak above.
if (event) qemuDomainEventQueue(driver, event);
[...]
@@ -3333,11 +3282,9 @@ qemuDomainSaveFlags(virDomainPtr dom, const char *path, const char *dxml,
ret = qemuDomainSaveInternal(driver, dom, vm, path, compressed, dxml, flags);
This function consumes @vm along with its reference and lock, thus ...
- vm = NULL;
... you cannot remove this line.
I'd rather remove the endAPI call from SaveInternal.
cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); virObjectUnref(cfg); return ret; } @@ -3416,15 +3363,13 @@ qemuDomainManagedSave(virDomainPtr dom, unsigned int flags)
VIR_INFO("Saving state of domain '%s' to '%s'", vm->def->name, name);
- if ((ret = qemuDomainSaveInternal(driver, dom, vm, name, compressed, - NULL, flags)) == 0) + ret = qemuDomainSaveInternal(driver, dom, vm, name, + compressed, NULL, flags);
Same as above. qemuDomainSaveInternal consumes VM ...
+ if (ret == 0) vm->hasManagedSave = true;
- vm = NULL;
... so this needs to stay in the code.
- cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); VIR_FREE(name); virObjectUnref(cfg);
[...]
@@ -6744,7 +6647,7 @@ static virDomainPtr qemuDomainDefineXML(virConnectPtr conn, const char *xml) driver->xmlopt, 0, &oldDef))) goto cleanup; -
Don't remove the empty line; it helps clarity.
I have no idea why I did that :D
+ virObjectRef(vm); def = NULL; if (virDomainHasDiskMirror(vm)) { virReportError(VIR_ERR_BLOCK_COPY_ACTIVE, "%s",
[...]
| Migrations | --------------
qemuDomainMigratePerform uses qemuDomObjFromDomain but doesn't unlock the object if ACL check fails. I'll post a patch for that. You'll need to rebase on top of that.
qemuDomainMigratePerform3Params does unlock on the ACL check, but you forgot to tweak the check.
Thanks for catching that, I'll gladly rebase that :)
@@ -12638,8 +12491,7 @@ qemuDomainMigrateGetMaxSpeed(virDomainPtr dom, ret = 0;
cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(&vm); return ret; }
Uff, I'm not able to finish this review, so I'm sending this as part 1 and will continue tomorrow.
Definitely take a rest. This is a terrible patch to review (as it was to write), thanks for doing that. Martin

On 12/05/14 15:03, Martin Kletzander wrote:
There is one problem that causes various errors in the daemon. When a domain is waiting for a job, it is unlocked while waiting on the condition. However, if that domain is, for example, transient and being removed in another API (e.g. cancelling an incoming migration), it gets unref'd. If the first call, the one that was waiting, fails to get the job, it unrefs the domain object, and because that was the last reference, the whole domain object is freed. However, when finishing the call, the domain must be unlocked, but there is no way for the API to know whether it was already freed or not (unless there is some ugly temporary variable, but let's scratch that).
The root cause is that our APIs don't ref the objects they are using and all rely on the implicit reference that the object has while it is in the domain list. That reference can be removed while the API is waiting for a job. And because the APIs don't do their own ref'ing, the result is the ugly checking of the return value of virObjectUnref() that we have everywhere.
This patch changes qemuDomObjFromDomain() to ref the domain (using virDomainObjListFindByUUIDRef()) and adds qemuDomObjEndAPI() which should be the only function in which the return value of virObjectUnref() is checked. This makes all reference counting deterministic and makes the code a bit clearer.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com> --- src/qemu/THREADS.txt | 40 ++- src/qemu/qemu_domain.c | 29 +- src/qemu/qemu_domain.h | 12 +- src/qemu/qemu_driver.c | 708 ++++++++++++++++------------------------------ src/qemu/qemu_migration.c | 108 +++---- src/qemu/qemu_migration.h | 10 +- src/qemu/qemu_process.c | 87 +++--- 7 files changed, 359 insertions(+), 635 deletions(-)
Continuing where I finished yesterday:
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index 9152cf5..2e0b7fc 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c
[...]
@@ -15128,21 +14970,18 @@ static virDomainPtr qemuDomainQemuAttach(virConnectPtr conn, VIR_DOMAIN_OBJ_LIST_ADD_CHECK_LIVE, NULL))) goto cleanup; -
Keep the empty line for visual separation.
+ virObjectRef(vm); def = NULL;
if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) { qemuDomainRemoveInactive(driver, vm); - vm = NULL; goto cleanup; }
if (qemuProcessAttach(conn, driver, vm, pid, pidfile, monConfig, monJSON) < 0) { - if (qemuDomainObjEndJob(driver, vm)) - qemuDomainRemoveInactive(driver, vm); - vm = NULL; - monConfig = NULL;
Ownership of monConfig is transferred to qemuProcessAttach(), thus you need to clear the pointer to avoid a double free.
+ qemuDomainObjEndJob(driver, vm); + qemuDomainRemoveInactive(driver, vm); goto cleanup; }
[...]
@@ -15513,8 +15348,7 @@ qemuDomainBlockPivot(virConnectPtr conn, }
-/* bandwidth in MiB/s per public API. Caller must lock vm beforehand, - * and not access it afterwards. */ +/* bandwidth in MiB/s per public API. Caller must lock vm beforehand. */
The semantics didn't change. qemuDomainBlockJobImpl() consumes @vm and calls qemuDomObjEndAPI() on it without adding any reference, thus callers still must not touch @vm after passing it to this function.
static int qemuDomainBlockJobImpl(virDomainObjPtr vm, virConnectPtr conn,
[...]
@@ -15776,7 +15606,7 @@ qemuDomainBlockJobAbort(virDomainPtr dom, const char *path, unsigned int flags)
static int qemuDomainGetBlockJobInfo(virDomainPtr dom, const char *path, - virDomainBlockJobInfoPtr info, unsigned int flags) + virDomainBlockJobInfoPtr info, unsigned int flags)
Unrelated whitespace fix?
{ virQEMUDriverPtr driver = dom->conn->privateData; qemuDomainObjPrivatePtr priv;
[...]
@@ -18733,13 +18523,15 @@ qemuConnectGetAllDomainStats(virConnectPtr conn, continue;
if (doms != domlist && - !virConnectGetAllDomainStatsCheckACL(conn, dom->def)) + !virConnectGetAllDomainStatsCheckACL(conn, dom->def)) { + virObjectUnlock(dom);
Hmm, this should be a separate fix as I've done with the migration APIs. Or perhaps this is a rebase artifact from an unpublished patch?
continue; + }
- if (HAVE_JOB(domflags) && - qemuDomainObjBeginJob(driver, dom, QEMU_JOB_QUERY) < 0) + if (HAVE_JOB(privflags) && + qemuDomainObjBeginJob(driver, dom, QEMU_JOB_QUERY) >= 0) /* As it was never requested. Gather as much as possible anyway. */ - domflags &= ~QEMU_DOMAIN_STATS_HAVE_JOB; + domflags |= QEMU_DOMAIN_STATS_HAVE_JOB;
if (qemuDomainGetStats(conn, dom, stats, &tmp, domflags) < 0) goto endjob;
[...]
@@ -18763,8 +18553,7 @@ qemuConnectGetAllDomainStats(virConnectPtr conn,
endjob: if (HAVE_JOB(domflags) && dom) - if (!qemuDomainObjEndJob(driver, dom)) - dom = NULL; + qemuDomainObjEndJob(driver, dom);
cleanup: if (dom) @@ -18835,8 +18624,7 @@ qemuDomainGetFSInfo(virDomainPtr dom, qemuDomainObjExitAgent(vm);
endjob: - if (!qemuDomainObjEndJob(driver, vm)) - vm = NULL; + qemuDomainObjEndJob(driver, vm);
cleanup: if (vm)
Here's a virObjectUnlock() that you forgot to change to qemuDomObjEndAPI().
diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c index 0acbb57..e4c8cf8 100644 --- a/src/qemu/qemu_migration.c +++ b/src/qemu/qemu_migration.c
[...]
+ qemuMigrationJobContinue(vm);
if (autoPort) priv->migrationPort = port; @@ -3115,16 +3099,12 @@ qemuMigrationPrepareAny(virQEMUDriverPtr driver, VIR_FREE(xmlout); VIR_FORCE_CLOSE(dataFD[0]); VIR_FORCE_CLOSE(dataFD[1]);
(not in context) This function creates the @vm object with virDomainObjListAdd() but you don't ref it afterwards as you did in other places that call that func ...
- if (vm) { - if (ret < 0) { - virPortAllocatorRelease(driver->migrationPorts, priv->nbdPort); - priv->nbdPort = 0; - } - if (ret >= 0 || vm->persistent) - virObjectUnlock(vm); - else - qemuDomainRemoveInactive(driver, vm); + if (ret < 0) { + virPortAllocatorRelease(driver->migrationPorts, priv->nbdPort); + priv->nbdPort = 0; + qemuDomainRemoveInactive(driver, vm); } + qemuDomObjEndAPI(&vm);
... and now you'd kill your last reference.
if (event) qemuDomainEventQueue(driver, event); qemuMigrationCookieFree(mig);
[...]
@@ -5301,21 +5268,16 @@ qemuMigrationFinish(virQEMUDriverPtr driver, VIR_WARN("Unable to encode migration cookie");
endjob: - if (qemuMigrationJobFinish(driver, vm) == 0) { - vm = NULL; - } else if (!vm->persistent && !virDomainObjIsActive(vm)) { + qemuMigrationJobFinish(driver, vm); + if (!vm->persistent && !virDomainObjIsActive(vm)) qemuDomainRemoveInactive(driver, vm); - vm = NULL; - }
cleanup: virPortAllocatorRelease(driver->migrationPorts, port); - if (vm) { - if (priv->mon) - qemuMonitorSetDomainLog(priv->mon, -1); - VIR_FREE(priv->origname); - virObjectUnlock(vm); - } + if (priv->mon) + qemuMonitorSetDomainLog(priv->mon, -1); + VIR_FREE(priv->origname); + qemuDomObjEndAPI(&vm);
The finish phase of the migration obtains the domain object via virDomainObjListFindByName() and thus it isn't ref'd. You should tweak the callers to ref the object, otherwise it might vanish while we still think we hold it.
if (event) qemuDomainEventQueue(driver, event); qemuMigrationCookieFree(mig);
[...]
@@ -5594,7 +5554,7 @@ qemuMigrationJobIsActive(virDomainObjPtr vm, return true; }
-bool +void qemuMigrationJobFinish(virQEMUDriverPtr driver, virDomainObjPtr vm) { return qemuDomainObjEndAsyncJob(driver, vm);
returning the result of a void function from a void function? :) [...]
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c index a14b6f7..4776ce8 100644 --- a/src/qemu/qemu_process.c +++ b/src/qemu/qemu_process.c @@ -571,6 +571,7 @@ qemuProcessFakeReboot(void *opaque) virQEMUDriverConfigPtr cfg = virQEMUDriverGetConfig(driver); virDomainRunningReason reason = VIR_DOMAIN_RUNNING_BOOTED; int ret = -1; +
Spurious whitespace change.
VIR_DEBUG("vm=%p", vm); virObjectLock(vm); if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0)
[...]
@@ -935,15 +932,13 @@ qemuProcessHandleWatchdog(qemuMonitorPtr mon ATTRIBUTE_UNUSED, */ virObjectRef(vm); if (virThreadPoolSendJob(driver->workerPool, 0, processEvent) < 0) { - if (!virObjectUnref(vm)) - vm = NULL; + virObjectUnref(vm); VIR_FREE(processEvent); } } }
- if (vm) - virObjectUnlock(vm); + virObjectUnlock(vm);
This can be considered a separate cleanup as it doesn't change any semantics, due to the fact that we acquired a reference to pass to the worker thread and are guaranteed that we are getting rid of that same one.
if (watchdogEvent) qemuDomainEventQueue(driver, watchdogEvent); if (lifecycleEvent) @@ -1439,14 +1434,12 @@ qemuProcessHandleGuestPanic(qemuMonitorPtr mon ATTRIBUTE_UNUSED, */ virObjectRef(vm); if (virThreadPoolSendJob(driver->workerPool, 0, processEvent) < 0) { - if (!virObjectUnref(vm)) - vm = NULL; + virObjectUnref(vm); VIR_FREE(processEvent); }
cleanup: - if (vm) - virObjectUnlock(vm); + virObjectUnlock(vm);
ditto for this hunk.
return 0; }
[...] Overall looks good, but many of the comments warrant for another version. Peter