[libvirt] [PATCH 0/2] Rework reference locking; in QEMU, for starters

Our reference locking for objects is a great and powerful thing.  The
problem is that it was either not followed through completely or the
design was not complete, but it doesn't matter now.  There are a few
bugs in the code due to the reference counting and the daemon is
lacking some performance in specific scenarios.  This "series" tries
to fix that using the following idea:

 - Each API working with a domain object that has to get it from the
   list will have its *own* reference, not a borrowed one from the
   list.

 - When adding a domain into the list, the reference counter is
   increased (this is the reference that is there just for being in
   the list) and when being removed, it is decreased.  No
   special-casing of "if this was the last reference" and other funny
   stuff.

 - When a job is created, there is no need to increase the reference
   counter as there are at least two references for the domain:

   1) The API that created the job has one, so if it's not async it
      will be kept until the API ends and at that point the job won't
      exist any more.

   2) The domain list has one and even though I said nobody needs to
      rely on that, async APIs probably will do that, but there's an
      excuse for that.  In order to remove the domain from the list,
      you need a job and that won't succeed unless the async one
      ended.  So we're good in this case as well.

After searching through the code for all things that needed to be
removed and fixing everything I could possibly think of, I tried a
few things on my setup and it looks like it works.  However, I
haven't tried *every single API*, but I hope that's understandable.
On the other hand, I asked Pavel to try running virt-test with these
patches applied; hopefully we'll get an idea about how reliable this
"series" is.

I used "series" (with quotes) on purpose, because the first patch
just adds two new wrappers for a slightly modified function and the
second one changes the whole qemu driver at once.  Unfortunately, the
second patch couldn't be broken up into more parts due to the nature
of the fix.

Enough chit-chat, let's look at the codes!

Martin Kletzander (2):
  conf: Rework virDomainObjListFindByUUID to allow more concurrent APIs
  qemu: completely rework reference counting

 src/conf/domain_conf.c    |  27 +-
 src/conf/domain_conf.h    |   2 +
 src/libvirt_private.syms  |   1 +
 src/qemu/THREADS.txt      |  40 ++-
 src/qemu/qemu_domain.c    |  28 +-
 src/qemu/qemu_domain.h    |  12 +-
 src/qemu/qemu_driver.c    | 708 ++++++++++++++++------------------------
 src/qemu/qemu_migration.c | 108 +++----
 src/qemu/qemu_migration.h |  10 +-
 src/qemu/qemu_process.c   | 126 ++++-----
 10 files changed, 406 insertions(+), 656 deletions(-)

-- 
2.1.3
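
[Editor's illustration: a toy model of the ownership rules the cover
letter describes -- the list owns exactly one reference, every API
takes and drops its own.  All names here (toyDomain etc.) are
invented for illustration; this is not libvirt code, and the real
implementation uses virObjectRef()/virObjectUnref() with locking.]

    #include <stdio.h>
    #include <stdlib.h>

    /* Toy object with manual reference counting. */
    typedef struct toyDomain {
        int refs;
        char name[32];
    } toyDomain;

    static toyDomain *toyDomainNew(const char *name)
    {
        toyDomain *dom = calloc(1, sizeof(*dom));
        snprintf(dom->name, sizeof(dom->name), "%s", name);
        dom->refs = 1;          /* reference owned by the creator */
        return dom;
    }

    static void toyDomainRef(toyDomain *dom)
    {
        dom->refs++;
    }

    /* Returns nonzero if the object is still alive after the unref. */
    static int toyDomainUnref(toyDomain *dom)
    {
        if (--dom->refs > 0)
            return 1;
        free(dom);
        return 0;
    }

    int main(void)
    {
        toyDomain *dom = toyDomainNew("vm1");

        toyDomainRef(dom);      /* added to the list: the list's own ref */

        /* An API takes its *own* reference from the lookup, so it never
         * depends on the list's reference surviving while it works. */
        toyDomainRef(dom);
        printf("working on %s (refs=%d)\n", dom->name, dom->refs);
        toyDomainUnref(dom);    /* API done: drop the API's reference */

        toyDomainUnref(dom);    /* removed from the list: drop its ref */
        toyDomainUnref(dom);    /* creator's reference; object is freed */
        return 0;
    }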

Currently, when there is an API that's blocking with a locked domain
and a second API that's waiting in virDomainObjListFindByUUID() for
the domain lock (with the domain list locked), no other API can be
executed on any domain on the whole hypervisor because all of them
would wait for the domain list to be unlocked.  This patch adds a new
optional approach to this in which the domain is only ref'd (its
reference counter is incremented) instead of being locked and is
locked *after* the list itself is unlocked.  We might consider only
ref'ing the domain in the future and leaving locking to particular
APIs, but that's no tonight's fairy tale.

Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
---
 src/conf/domain_conf.c   | 27 ++++++++++++++++++++++++---
 src/conf/domain_conf.h   |  2 ++
 src/libvirt_private.syms |  1 +
 3 files changed, 27 insertions(+), 3 deletions(-)

diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index 2d81c37..e5bb572 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -1068,8 +1068,10 @@ virDomainObjPtr virDomainObjListFindByID(virDomainObjListPtr doms,
 }
 
-virDomainObjPtr virDomainObjListFindByUUID(virDomainObjListPtr doms,
-                                           const unsigned char *uuid)
+static virDomainObjPtr
+virDomainObjListFindByUUIDInternal(virDomainObjListPtr doms,
+                                   const unsigned char *uuid,
+                                   bool ref)
 {
     char uuidstr[VIR_UUID_STRING_BUFLEN];
     virDomainObjPtr obj;
@@ -1078,12 +1080,31 @@ virDomainObjPtr virDomainObjListFindByUUID(virDomainObjListPtr doms,
     virUUIDFormat(uuid, uuidstr);
 
     obj = virHashLookup(doms->objs, uuidstr);
+    if (ref) {
+        virObjectRef(obj);
+        virObjectUnlock(doms);
+    }
     if (obj)
         virObjectLock(obj);
-    virObjectUnlock(doms);
+    if (!ref)
+        virObjectUnlock(doms);
     return obj;
 }
 
+virDomainObjPtr
+virDomainObjListFindByUUID(virDomainObjListPtr doms,
+                           const unsigned char *uuid)
+{
+    return virDomainObjListFindByUUIDInternal(doms, uuid, false);
+}
+
+virDomainObjPtr
+virDomainObjListFindByUUIDRef(virDomainObjListPtr doms,
+                              const unsigned char *uuid)
+{
+    return virDomainObjListFindByUUIDInternal(doms, uuid, false);
+}
+
 static int virDomainObjListSearchName(const void *payload,
                                       const void *name ATTRIBUTE_UNUSED,
                                       const void *data)
diff --git a/src/conf/domain_conf.h b/src/conf/domain_conf.h
index 439f3c0..3812a16 100644
--- a/src/conf/domain_conf.h
+++ b/src/conf/domain_conf.h
@@ -2285,6 +2285,8 @@ virDomainObjPtr virDomainObjListFindByID(virDomainObjListPtr doms,
                                          int id);
 virDomainObjPtr virDomainObjListFindByUUID(virDomainObjListPtr doms,
                                            const unsigned char *uuid);
+virDomainObjPtr virDomainObjListFindByUUIDRef(virDomainObjListPtr doms,
+                                              const unsigned char *uuid);
 virDomainObjPtr virDomainObjListFindByName(virDomainObjListPtr doms,
                                            const char *name);
 
diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms
index 1853a9c..2351b91 100644
--- a/src/libvirt_private.syms
+++ b/src/libvirt_private.syms
@@ -352,6 +352,7 @@ virDomainObjListExport;
 virDomainObjListFindByID;
 virDomainObjListFindByName;
 virDomainObjListFindByUUID;
+virDomainObjListFindByUUIDRef;
 virDomainObjListForEach;
 virDomainObjListGetActiveIDs;
 virDomainObjListGetInactiveNames;
-- 
2.1.3
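
[Editor's illustration: the key point of the Ref variant is the lock
ordering -- take a reference while the list lock is held, drop the
list lock, and only then block on the per-domain lock.  A minimal
sketch of that ordering with pthreads; the types and names are
invented for illustration, and the real code uses virObjectLock()/
virObjectRef(), where the ref is an atomic increment.]

    #include <pthread.h>

    typedef struct { pthread_mutex_t lock; /* ... domain hash ... */ } toyList;
    typedef struct { pthread_mutex_t lock; int refs; } toyDom;

    /* FindByUUID-style lookup: the list stays locked while we block on
     * the domain lock, so every other lookup on the host stalls too. */
    toyDom *findLocked(toyList *list, toyDom *dom)
    {
        pthread_mutex_lock(&list->lock);
        pthread_mutex_lock(&dom->lock);   /* may block for a long time */
        pthread_mutex_unlock(&list->lock);
        return dom;
    }

    /* FindByUUIDRef-style lookup: grab a reference under the list lock,
     * release the list, and only then wait for the domain itself. */
    toyDom *findRef(toyList *list, toyDom *dom)
    {
        pthread_mutex_lock(&list->lock);
        dom->refs++;                      /* protected by list lock here */
        pthread_mutex_unlock(&list->lock);
        pthread_mutex_lock(&dom->lock);   /* other lookups proceed now */
        return dom;
    }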

On 12/03/14 06:49, Martin Kletzander wrote:
Currently, when there is an API that's blocking with a locked domain
and a second API that's waiting in virDomainObjListFindByUUID() for
the domain lock (with the domain list locked), no other API can be
executed on any domain on the whole hypervisor because all of them
would wait for the domain list to be unlocked.  This patch adds a new
optional approach to this in which the domain is only ref'd (its
reference counter is incremented) instead of being locked and is
locked *after* the list itself is unlocked.  We might consider only
ref'ing the domain in the future and leaving locking to particular
APIs, but that's no tonight's fairy tale.

Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
---
 src/conf/domain_conf.c   | 27 ++++++++++++++++++++++++---
 src/conf/domain_conf.h   |  2 ++
 src/libvirt_private.syms |  1 +
 3 files changed, 27 insertions(+), 3 deletions(-)
diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index 2d81c37..e5bb572 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -1068,8 +1068,10 @@ virDomainObjPtr virDomainObjListFindByID(virDomainObjListPtr doms,
 }
 
-virDomainObjPtr virDomainObjListFindByUUID(virDomainObjListPtr doms,
-                                           const unsigned char *uuid)
+static virDomainObjPtr
+virDomainObjListFindByUUIDInternal(virDomainObjListPtr doms,
+                                   const unsigned char *uuid,
+                                   bool ref)
 {
     char uuidstr[VIR_UUID_STRING_BUFLEN];
     virDomainObjPtr obj;
@@ -1078,12 +1080,31 @@ virDomainObjPtr virDomainObjListFindByUUID(virDomainObjListPtr doms,
     virUUIDFormat(uuid, uuidstr);
 
     obj = virHashLookup(doms->objs, uuidstr);
+    if (ref) {
+        virObjectRef(obj);
+        virObjectUnlock(doms);
+    }
     if (obj)
         virObjectLock(obj);
-    virObjectUnlock(doms);
+    if (!ref)
+        virObjectUnlock(doms);
     return obj;
 }
 
+virDomainObjPtr
+virDomainObjListFindByUUID(virDomainObjListPtr doms,
+                           const unsigned char *uuid)
+{
+    return virDomainObjListFindByUUIDInternal(doms, uuid, false);
+}
+
+virDomainObjPtr
+virDomainObjListFindByUUIDRef(virDomainObjListPtr doms,
+                              const unsigned char *uuid)
+{
+    return virDomainObjListFindByUUIDInternal(doms, uuid, false);
Umm both call the helper with @ref being false? That probably isn't right.
+}
+
 static int virDomainObjListSearchName(const void *payload,
                                       const void *name ATTRIBUTE_UNUSED,
                                       const void *data)
Peter

On Wed, Dec 03, 2014 at 10:00:52AM +0100, Peter Krempa wrote:
On 12/03/14 06:49, Martin Kletzander wrote:
Currently, when there is an API that's blocking with a locked domain
and a second API that's waiting in virDomainObjListFindByUUID() for
the domain lock (with the domain list locked), no other API can be
executed on any domain on the whole hypervisor because all of them
would wait for the domain list to be unlocked.  This patch adds a new
optional approach to this in which the domain is only ref'd (its
reference counter is incremented) instead of being locked and is
locked *after* the list itself is unlocked.  We might consider only
ref'ing the domain in the future and leaving locking to particular
APIs, but that's no tonight's fairy tale.

Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
---
 src/conf/domain_conf.c   | 27 ++++++++++++++++++++++++---
 src/conf/domain_conf.h   |  2 ++
 src/libvirt_private.syms |  1 +
 3 files changed, 27 insertions(+), 3 deletions(-)
diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index 2d81c37..e5bb572 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -1068,8 +1068,10 @@ virDomainObjPtr virDomainObjListFindByID(virDomainObjListPtr doms,
 }
 
-virDomainObjPtr virDomainObjListFindByUUID(virDomainObjListPtr doms,
-                                           const unsigned char *uuid)
+static virDomainObjPtr
+virDomainObjListFindByUUIDInternal(virDomainObjListPtr doms,
+                                   const unsigned char *uuid,
+                                   bool ref)
 {
     char uuidstr[VIR_UUID_STRING_BUFLEN];
     virDomainObjPtr obj;
@@ -1078,12 +1080,31 @@ virDomainObjPtr virDomainObjListFindByUUID(virDomainObjListPtr doms,
     virUUIDFormat(uuid, uuidstr);
 
     obj = virHashLookup(doms->objs, uuidstr);
+    if (ref) {
+        virObjectRef(obj);
+        virObjectUnlock(doms);
+    }
     if (obj)
         virObjectLock(obj);
-    virObjectUnlock(doms);
+    if (!ref)
+        virObjectUnlock(doms);
     return obj;
 }
 
+virDomainObjPtr
+virDomainObjListFindByUUID(virDomainObjListPtr doms,
+                           const unsigned char *uuid)
+{
+    return virDomainObjListFindByUUIDInternal(doms, uuid, false);
+}
+
+virDomainObjPtr
+virDomainObjListFindByUUIDRef(virDomainObjListPtr doms,
+                              const unsigned char *uuid)
+{
+    return virDomainObjListFindByUUIDInternal(doms, uuid, false);
Umm both call the helper with @ref being false? That probably isn't right.
DUH! Dumb me, thanks for catching that...

Martin
+}
+
 static int virDomainObjListSearchName(const void *payload,
                                       const void *name ATTRIBUTE_UNUSED,
                                       const void *data)
Peter
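
[Editor's illustration: the fix Peter is pointing at is presumably
the one-word change below -- the Ref wrapper passing true to the
internal helper.  The corrected repost is not part of this thread,
so this is an inference from the review, not the actual follow-up.]

    virDomainObjPtr
    virDomainObjListFindByUUIDRef(virDomainObjListPtr doms,
                                  const unsigned char *uuid)
    {
        /* Presumed intent: 'true' makes the internal helper take a
         * reference and unlock the list before locking the domain. */
        return virDomainObjListFindByUUIDInternal(doms, uuid, true);
    }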

There is one problem that causes various errors in the daemon.  When
a domain is waiting for a job, it is unlocked while waiting on the
condition.  However, if that domain is for example transient and
being removed in another API (e.g. cancelling incoming migration), it
gets unref'd.  If the first call, the one that was waiting, fails to
get the job, it unref's the domain object, and because it was the
last reference, it causes clearing of the whole domain object.
However, when finishing the call, the domain must be unlocked, but
there is no way for the API to know whether it was cleaned or not
(unless there is some ugly temporary variable, but let's scratch
that).

The root cause is that our APIs don't ref the objects they are using
and all use the implicit reference that the object has when it is in
the domain list.  That reference can be removed when the API is
waiting for a job.  And because the APIs don't do their own ref'ing,
it results in the ugly checking of the return value of
virObjectUnref() that we have everywhere.

This patch changes qemuDomObjFromDomain() to ref the domain (using
virDomainObjListFindByUUIDRef()) and adds qemuDomObjEndAPI() which
should be the only function in which the return value of
virObjectUnref() is checked.  This makes all reference counting
deterministic and makes the code a bit clearer.

Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
---
 src/qemu/THREADS.txt      |  40 ++-
 src/qemu/qemu_domain.c    |  28 +-
 src/qemu/qemu_domain.h    |  12 +-
 src/qemu/qemu_driver.c    | 708 ++++++++++++++++------------------------
 src/qemu/qemu_migration.c | 108 +++----
 src/qemu/qemu_migration.h |  10 +-
 src/qemu/qemu_process.c   | 126 ++++-----
 7 files changed, 379 insertions(+), 653 deletions(-)

diff --git a/src/qemu/THREADS.txt b/src/qemu/THREADS.txt
index 50a0cf9..53a3415 100644
--- a/src/qemu/THREADS.txt
+++ b/src/qemu/THREADS.txt
@@ -26,12 +26,20 @@ There are a number of locks on various objects
 * virDomainObjPtr
 
     Will be locked after calling any of the virDomainFindBy{ID,Name,UUID}
-    methods.
+    methods.  However, preferred method is qemuDomObjFromDomain() that uses
+    virDomainFindByUUIDRef() which also increases the reference counter and
+    finds the domain in the domain list without blocking all other lookups.
+    When the domain is locked and the reference increased, the prefered way
+    of decrementing the reference counter and unlocking the domain is using
+    the qemuDomObjEndAPI() function.
 
     Lock must be held when changing/reading any variable in the virDomainObjPtr
 
     If the lock needs to be dropped & then re-acquired for a short period of
     time, the reference count must be incremented first using virDomainObjRef().
+    There is no need to increase the reference count if qemuDomObjFromDomain()
+    was used for looking up the domain.  In this case there is one reference
+    already added by that function.
 
     This lock must not be held for anything which sleeps/waits (i.e. monitor
     commands).
 
@@ -109,7 +117,6 @@ To lock the virDomainObjPtr
 To acquire the normal job condition
 
   qemuDomainObjBeginJob()
-    - Increments ref count on virDomainObjPtr
     - Waits until the job is compatible with current async job or no
       async job is running
     - Waits for job.cond condition 'job.active != 0' using virDomainObjPtr
@@ -122,14 +129,12 @@ To acquire the normal job condition
   qemuDomainObjEndJob()
     - Sets job.active to 0
     - Signals on job.cond condition
-    - Decrements ref count on virDomainObjPtr
 
 
 To acquire the asynchronous job condition
 
   qemuDomainObjBeginAsyncJob()
-    - Increments ref count on virDomainObjPtr
     - Waits until no async job is running
     - Waits for job.cond condition 'job.active != 0' using virDomainObjPtr
       mutex
@@ -141,7 +146,6 @@ To acquire the asynchronous job condition
   qemuDomainObjEndAsyncJob()
     - Sets job.asyncJob to 0
     - Broadcasts on job.asyncCond condition
-    - Decrements ref count on virDomainObjPtr
 
 
 
@@ -179,12 +183,10 @@ To acquire the QEMU monitor lock as part of an asynchronous job
 To keep a domain alive while waiting on a remote command
 
   qemuDomainObjEnterRemote()
-    - Increments ref count on virDomainObjPtr
     - Releases the virDomainObjPtr lock
 
   qemuDomainObjExitRemote()
     - Acquires the virDomainObjPtr lock
-    - Decrements ref count on virDomainObjPtr
 
 
 Design patterns
@@ -195,18 +197,18 @@ Design patterns
 
      virDomainObjPtr obj;
 
-     obj = virDomainFindByUUID(driver->domains, dom->uuid);
+     obj = qemuDomObjFromDomain(dom);
 
      ...do work...
 
-     virDomainObjUnlock(obj);
+     qemuDomObjEndAPI(obj);
 
 
  * Updating something directly to do with a virDomainObjPtr
 
      virDomainObjPtr obj;
 
-     obj = virDomainFindByUUID(driver->domains, dom->uuid);
+     obj = qemuDomObjFromDomain(dom);
 
      qemuDomainObjBeginJob(obj, QEMU_JOB_TYPE);
 
@@ -214,18 +216,15 @@ Design patterns
 
      qemuDomainObjEndJob(obj);
 
-     virDomainObjUnlock(obj);
-
-
+     qemuDomObjEndAPI(obj);
 
  * Invoking a monitor command on a virDomainObjPtr
 
-
      virDomainObjPtr obj;
      qemuDomainObjPrivatePtr priv;
 
-     obj = virDomainFindByUUID(driver->domains, dom->uuid);
+     obj = qemuDomObjFromDomain(dom);
 
      qemuDomainObjBeginJob(obj, QEMU_JOB_TYPE);
 
@@ -240,8 +239,7 @@ Design patterns
      ...do final work...
 
      qemuDomainObjEndJob(obj);
-     virDomainObjUnlock(obj);
-
+     qemuDomObjEndAPI(obj);
 
 
  * Running asynchronous job
@@ -249,7 +247,7 @@ Design patterns
      virDomainObjPtr obj;
      qemuDomainObjPrivatePtr priv;
 
-     obj = virDomainFindByUUID(driver->domains, dom->uuid);
+     obj = qemuDomObjFromDomain(dom);
 
      qemuDomainObjBeginAsyncJob(obj, QEMU_ASYNC_JOB_TYPE);
      qemuDomainObjSetAsyncJobMask(obj, allowedJobs);
@@ -281,7 +279,7 @@ Design patterns
      ...do final work...
 
      qemuDomainObjEndAsyncJob(obj);
-     virDomainObjUnlock(obj);
+     qemuDomObjEndAPI(obj);
 
 
  * Coordinating with a remote server for migration
@@ -289,7 +287,7 @@ Design patterns
      virDomainObjPtr obj;
      qemuDomainObjPrivatePtr priv;
 
-     obj = virDomainFindByUUID(driver->domains, dom->uuid);
+     obj = qemuDomObjFromDomain(dom);
 
      qemuDomainObjBeginAsyncJob(obj, QEMU_ASYNC_JOB_TYPE);
 
@@ -309,4 +307,4 @@ Design patterns
      ...do final work...
      qemuDomainObjEndAsyncJob(obj);
-     virDomainObjUnlock(obj);
+     qemuDomObjEndAPI(obj);
diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index 4061ca1..f6e3080 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -1328,8 +1328,6 @@ qemuDomainObjBeginJobInternal(virQEMUDriverPtr driver,
     priv->jobs_queued++;
     then = now + QEMU_JOB_WAIT_TIME;
 
-    virObjectRef(obj);
-
  retry:
     if (cfg->maxQueuedJobs &&
         priv->jobs_queued > cfg->maxQueuedJobs) {
@@ -1408,7 +1406,6 @@ qemuDomainObjBeginJobInternal(virQEMUDriverPtr driver,
 
  cleanup:
     priv->jobs_queued--;
-    virObjectUnref(obj);
     virObjectUnref(cfg);
     return ret;
 }
@@ -1477,7 +1474,8 @@ qemuDomainObjBeginNestedJob(virQEMUDriverPtr driver,
  * Returns true if @obj was still referenced, false if it was
  * disposed of.
  */
-bool qemuDomainObjEndJob(virQEMUDriverPtr driver, virDomainObjPtr obj)
+void
+qemuDomainObjEndJob(virQEMUDriverPtr driver, virDomainObjPtr obj)
 {
     qemuDomainObjPrivatePtr priv = obj->privateData;
     qemuDomainJob job = priv->job.active;
@@ -1493,11 +1491,9 @@ bool qemuDomainObjEndJob(virQEMUDriverPtr driver, virDomainObjPtr obj)
     if (qemuDomainTrackJob(job))
         qemuDomainObjSaveJob(driver, obj);
     virCondSignal(&priv->job.cond);
-
-    return virObjectUnref(obj);
 }
 
-bool
+void
 qemuDomainObjEndAsyncJob(virQEMUDriverPtr driver, virDomainObjPtr obj)
 {
     qemuDomainObjPrivatePtr priv = obj->privateData;
@@ -1511,8 +1507,6 @@ qemuDomainObjEndAsyncJob(virQEMUDriverPtr driver, virDomainObjPtr obj)
     qemuDomainObjResetAsyncJob(priv);
     qemuDomainObjSaveJob(driver, obj);
     virCondBroadcast(&priv->job.asyncCond);
-
-    return virObjectUnref(obj);
 }
 
 void
@@ -1551,7 +1545,7 @@ qemuDomainObjEnterMonitorInternal(virQEMUDriverPtr driver,
             virReportError(VIR_ERR_OPERATION_FAILED, "%s",
                            _("domain is no longer running"));
             /* Still referenced by the containing async job. */
-            ignore_value(qemuDomainObjEndJob(driver, obj));
+            qemuDomainObjEndJob(driver, obj);
             return -1;
         }
     } else if (priv->job.asyncOwner == virThreadSelfID()) {
@@ -1690,7 +1684,6 @@ void qemuDomainObjEnterRemote(virDomainObjPtr obj)
 {
     VIR_DEBUG("Entering remote (vm=%p name=%s)",
               obj, obj->def->name);
-    virObjectRef(obj);
     virObjectUnlock(obj);
 }
 
@@ -1699,7 +1692,6 @@ void qemuDomainObjExitRemote(virDomainObjPtr obj)
     virObjectLock(obj);
     VIR_DEBUG("Exited remote (vm=%p name=%s)",
               obj, obj->def->name);
-    virObjectUnref(obj);
 }
 
 
@@ -2400,8 +2392,7 @@ qemuDomainSnapshotDiscardAllMetadata(virQEMUDriverPtr driver,
 }
 
 /*
- * The caller must hold a lock the vm and there must
- * be no remaining references to vm.
+ * The caller must hold a lock the vm.
  */
 void
 qemuDomainRemoveInactive(virQEMUDriverPtr driver,
@@ -2432,7 +2423,7 @@ qemuDomainRemoveInactive(virQEMUDriverPtr driver,
     virObjectUnref(cfg);
 
     if (haveJob)
-        ignore_value(qemuDomainObjEndJob(driver, vm));
+        qemuDomainObjEndJob(driver, vm);
 }
 
 void
@@ -2805,3 +2796,10 @@ qemuDomainAgentAvailable(qemuDomainObjPrivatePtr priv,
     }
     return true;
 }
+
+void
+qemuDomObjEndAPI(virDomainObjPtr vm)
+{
+    if (virObjectUnref(vm))
+        virObjectUnlock(vm);
+}
diff --git a/src/qemu/qemu_domain.h b/src/qemu/qemu_domain.h
index ebb282a..cc3b4ac 100644
--- a/src/qemu/qemu_domain.h
+++ b/src/qemu/qemu_domain.h
@@ -225,12 +225,10 @@ int qemuDomainObjBeginAsyncJob(virQEMUDriverPtr driver,
                                qemuDomainAsyncJob asyncJob)
     ATTRIBUTE_RETURN_CHECK;
-bool qemuDomainObjEndJob(virQEMUDriverPtr driver,
-                         virDomainObjPtr obj)
-    ATTRIBUTE_RETURN_CHECK;
-bool qemuDomainObjEndAsyncJob(virQEMUDriverPtr driver,
-                              virDomainObjPtr obj)
-    ATTRIBUTE_RETURN_CHECK;
+void qemuDomainObjEndJob(virQEMUDriverPtr driver,
+                         virDomainObjPtr obj);
+void qemuDomainObjEndAsyncJob(virQEMUDriverPtr driver,
+                              virDomainObjPtr obj);
 void qemuDomainObjAbortAsyncJob(virDomainObjPtr obj);
 void qemuDomainObjSetJobPhase(virQEMUDriverPtr driver,
                               virDomainObjPtr obj,
@@ -412,4 +410,6 @@ int qemuDomainJobInfoToParams(qemuDomainJobInfoPtr jobInfo,
     ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2) ATTRIBUTE_NONNULL(3)
     ATTRIBUTE_NONNULL(4);
 
+void qemuDomObjEndAPI(virDomainObjPtr vm);
+
 #endif /* __QEMU_DOMAIN_H__ */
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 5dc62b0..fc2f675 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -205,11 +205,11 @@ struct qemuAutostartData {
  * qemuDomObjFromDomain:
  * @domain: Domain pointer that has to be looked up
  *
- * This function looks up @domain and returns the appropriate
- * virDomainObjPtr.
+ * This function looks up @domain and returns the appropriate virDomainObjPtr
+ * that has to be cleaned by calling qemuDomObjEndAPI().
  *
- * Returns the domain object which is locked on success, NULL
- * otherwise.
+ * Returns the domain object with incremented reference counter which is locked
+ * on success, NULL otherwise.
  */
 static virDomainObjPtr
 qemuDomObjFromDomain(virDomainPtr domain)
@@ -218,7 +218,7 @@ qemuDomObjFromDomain(virDomainPtr domain)
     virQEMUDriverPtr driver = domain->conn->privateData;
     char uuidstr[VIR_UUID_STRING_BUFLEN];
 
-    vm = virDomainObjListFindByUUID(driver->domains, domain->uuid);
+    vm = virDomainObjListFindByUUIDRef(driver->domains, domain->uuid);
     if (!vm) {
         virUUIDFormat(domain->uuid, uuidstr);
         virReportError(VIR_ERR_NO_DOMAIN,
@@ -230,9 +230,9 @@ qemuDomObjFromDomain(virDomainPtr domain)
     return vm;
 }
 
-/* Looks up the domain object from snapshot and unlocks the driver. The
- * returned domain object is locked and the caller is responsible for
- * unlocking it */
+/* Looks up the domain object from snapshot and unlocks the
+ * driver. The returned domain object is locked and ref'd and the
+ * caller must call qemuDomObjEndAPI() on it. */
 static virDomainObjPtr
 qemuDomObjFromSnapshot(virDomainSnapshotPtr snapshot)
 {
@@ -278,6 +278,7 @@ qemuAutostartDomain(virDomainObjPtr vm,
         flags |= VIR_DOMAIN_START_BYPASS_CACHE;
 
     virObjectLock(vm);
+    virObjectRef(vm);
     virResetLastError();
     if (vm->autostart &&
         !virDomainObjIsActive(vm)) {
@@ -297,14 +298,12 @@ qemuAutostartDomain(virDomainObjPtr vm,
                       err ? err->message : _("unknown error"));
         }
 
-        if (!qemuDomainObjEndJob(data->driver, vm))
-            vm = NULL;
+        qemuDomainObjEndJob(data->driver, vm);
     }
 
     ret = 0;
 cleanup:
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     virObjectUnref(cfg);
     return ret;
 }
@@ -1446,7 +1445,7 @@ static virDomainPtr qemuDomainLookupByID(virConnectPtr conn,
     virDomainObjPtr vm;
     virDomainPtr dom = NULL;
 
-    vm  = virDomainObjListFindByID(driver->domains, id);
+    vm = virDomainObjListFindByID(driver->domains, id);
 
     if (!vm) {
         virReportError(VIR_ERR_NO_DOMAIN,
@@ -1461,8 +1460,7 @@ static virDomainPtr qemuDomainLookupByID(virConnectPtr conn,
     if (dom)
         dom->id = vm->def->id;
 
 cleanup:
-    if (vm)
-        virObjectUnlock(vm);
+    virObjectUnlock(vm);
     return dom;
 }
@@ -1473,7 +1471,7 @@ static virDomainPtr qemuDomainLookupByUUID(virConnectPtr conn,
     virDomainObjPtr vm;
     virDomainPtr dom = NULL;
 
-    vm = virDomainObjListFindByUUID(driver->domains, uuid);
+    vm = virDomainObjListFindByUUIDRef(driver->domains, uuid);
 
     if (!vm) {
         char uuidstr[VIR_UUID_STRING_BUFLEN];
@@ -1490,8 +1488,7 @@ static virDomainPtr qemuDomainLookupByUUID(virConnectPtr conn,
     if (dom)
         dom->id = vm->def->id;
 
 cleanup:
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     return dom;
 }
@@ -1517,8 +1514,7 @@ static virDomainPtr qemuDomainLookupByName(virConnectPtr conn,
     if (dom)
         dom->id = vm->def->id;
 
 cleanup:
-    if (vm)
-        virObjectUnlock(vm);
+    virObjectUnlock(vm);
     return dom;
 }
@@ -1537,8 +1533,7 @@ static int qemuDomainIsActive(virDomainPtr dom)
     ret = virDomainObjIsActive(obj);
 
 cleanup:
-    if (obj)
-        virObjectUnlock(obj);
+    qemuDomObjEndAPI(obj);
     return ret;
 }
@@ -1556,8 +1551,7 @@ static int qemuDomainIsPersistent(virDomainPtr dom)
     ret = obj->persistent;
 
 cleanup:
-    if (obj)
-        virObjectUnlock(obj);
+    qemuDomObjEndAPI(obj);
     return ret;
 }
@@ -1575,8 +1569,7 @@ static int qemuDomainIsUpdated(virDomainPtr dom)
     ret = obj->updated;
 
 cleanup:
-    if (obj)
-        virObjectUnlock(obj);
+    qemuDomObjEndAPI(obj);
     return ret;
 }
@@ -1717,12 +1710,11 @@ static virDomainPtr qemuDomainCreateXML(virConnectPtr conn,
                                    VIR_DOMAIN_OBJ_LIST_ADD_CHECK_LIVE,
                                    NULL)))
         goto cleanup;
-
+    virObjectRef(vm);
     def = NULL;
 
     if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) {
         qemuDomainRemoveInactive(driver, vm);
-        vm = NULL;
         goto cleanup;
     }
@@ -1731,9 +1723,8 @@ static virDomainPtr qemuDomainCreateXML(virConnectPtr conn,
                          VIR_NETDEV_VPORT_PROFILE_OP_CREATE,
                          start_flags) < 0) {
         virDomainAuditStart(vm, "booted", false);
-        if (qemuDomainObjEndJob(driver, vm))
-            qemuDomainRemoveInactive(driver, vm);
-        vm = NULL;
+        qemuDomainObjEndJob(driver, vm);
+        qemuDomainRemoveInactive(driver, vm);
         goto cleanup;
     }
@@ -1756,13 +1747,11 @@ static virDomainPtr qemuDomainCreateXML(virConnectPtr conn,
     if (dom)
         dom->id = vm->def->id;
 
-    if (!qemuDomainObjEndJob(driver, vm))
-        vm = NULL;
+    qemuDomainObjEndJob(driver, vm);
 
 cleanup:
     virDomainDefFree(def);
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     if (event) {
         qemuDomainEventQueue(driver, event);
         if (event2)
@@ -1842,12 +1831,10 @@ static int qemuDomainSuspend(virDomainPtr dom)
     ret = 0;
 
 endjob:
-    if (!qemuDomainObjEndJob(driver, vm))
-        vm = NULL;
+    qemuDomainObjEndJob(driver, vm);
 
 cleanup:
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
 
     if (event)
         qemuDomainEventQueue(driver, event);
@@ -1908,12 +1895,10 @@ static int qemuDomainResume(virDomainPtr dom)
     ret = 0;
 
 endjob:
-    if (!qemuDomainObjEndJob(driver, vm))
-        vm = NULL;
+    qemuDomainObjEndJob(driver, vm);
 
 cleanup:
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     if (event)
         qemuDomainEventQueue(driver, event);
     virObjectUnref(caps);
@@ -1998,12 +1983,10 @@ static int qemuDomainShutdownFlags(virDomainPtr dom, unsigned int flags)
     }
 
 endjob:
-    if (!qemuDomainObjEndJob(driver, vm))
-        vm = NULL;
+    qemuDomainObjEndJob(driver, vm);
 
 cleanup:
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     return ret;
 }
@@ -2098,12 +2081,10 @@ qemuDomainReboot(virDomainPtr dom, unsigned int flags)
     }
 
 endjob:
-    if (!qemuDomainObjEndJob(driver, vm))
-        vm = NULL;
+    qemuDomainObjEndJob(driver, vm);
 
 cleanup:
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     return ret;
 }
@@ -2141,12 +2122,10 @@ qemuDomainReset(virDomainPtr dom, unsigned int flags)
     priv->fakeReboot = false;
 
 endjob:
-    if (!qemuDomainObjEndJob(driver, vm))
-        vm = NULL;
+    qemuDomainObjEndJob(driver, vm);
 
 cleanup:
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     return ret;
 }
@@ -2226,20 +2205,14 @@ qemuDomainDestroyFlags(virDomainPtr dom,
                                      VIR_DOMAIN_EVENT_STOPPED_DESTROYED);
     virDomainAuditStop(vm, "destroyed");
 
-    if (!vm->persistent) {
-        if (qemuDomainObjEndJob(driver, vm))
-            qemuDomainRemoveInactive(driver, vm);
-        vm = NULL;
-    }
     ret = 0;
-
 endjob:
-    if (vm && !qemuDomainObjEndJob(driver, vm))
-        vm = NULL;
+    qemuDomainObjEndJob(driver, vm);
+    if (ret == 0 && !vm->persistent)
+        qemuDomainRemoveInactive(driver, vm);
 
 cleanup:
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     if (event)
         qemuDomainEventQueue(driver, event);
     return ret;
@@ -2264,8 +2237,7 @@ static char *qemuDomainGetOSType(virDomainPtr dom) {
     ignore_value(VIR_STRDUP(type, vm->def->os.type));
 
 cleanup:
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     return type;
 }
@@ -2285,8 +2257,7 @@ qemuDomainGetMaxMemory(virDomainPtr dom)
     ret = vm->def->mem.max_balloon;
 
 cleanup:
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     return ret;
 }
@@ -2388,12 +2359,10 @@ static int qemuDomainSetMemoryFlags(virDomainPtr dom, unsigned long newmem,
     ret = 0;
 
 endjob:
-    if (!qemuDomainObjEndJob(driver, vm))
-        vm = NULL;
+    qemuDomainObjEndJob(driver, vm);
 
 cleanup:
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     virObjectUnref(caps);
     virObjectUnref(cfg);
     return ret;
@@ -2468,12 +2437,10 @@ static int qemuDomainSetMemoryStatsPeriod(virDomainPtr dom, int period,
     ret = 0;
 
 endjob:
-    if (!qemuDomainObjEndJob(driver, vm))
-        vm = NULL;
+    qemuDomainObjEndJob(driver, vm);
 
 cleanup:
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     virObjectUnref(caps);
     virObjectUnref(cfg);
     return ret;
@@ -2516,12 +2483,10 @@ static int qemuDomainInjectNMI(virDomainPtr domain, unsigned int flags)
     qemuDomainObjExitMonitor(driver, vm);
 
 endjob:
-    if (!qemuDomainObjEndJob(driver, vm))
-        vm = NULL;
+    qemuDomainObjEndJob(driver, vm);
 
 cleanup:
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     return ret;
 }
@@ -2580,12 +2545,10 @@ static int qemuDomainSendKey(virDomainPtr domain,
     qemuDomainObjExitMonitor(driver, vm);
 
 endjob:
-    if (!qemuDomainObjEndJob(driver, vm))
-        vm = NULL;
+    qemuDomainObjEndJob(driver, vm);
 
 cleanup:
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     return ret;
 }
@@ -2636,10 +2599,7 @@ static int qemuDomainGetInfo(virDomainPtr dom,
             err = qemuMonitorGetBalloonInfo(priv->mon, &balloon);
             qemuDomainObjExitMonitor(driver, vm);
         }
-        if (!qemuDomainObjEndJob(driver, vm)) {
-            vm = NULL;
-            goto cleanup;
-        }
+        qemuDomainObjEndJob(driver, vm);
 
         if (err < 0) {
             /* We couldn't get current memory allocation but that's not
@@ -2664,8 +2624,7 @@ static int qemuDomainGetInfo(virDomainPtr dom,
     ret = 0;
 
 cleanup:
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     return ret;
 }
@@ -2690,8 +2649,7 @@ qemuDomainGetState(virDomainPtr dom,
     ret = 0;
 
 cleanup:
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     return ret;
 }
@@ -2741,8 +2699,7 @@ qemuDomainGetControlInfo(virDomainPtr dom,
     ret = 0;
 
 cleanup:
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     return ret;
 }
@@ -3237,37 +3194,29 @@ qemuDomainSaveInternal(virQEMUDriverPtr driver, virDomainPtr dom,
     event = virDomainEventLifecycleNewFromObj(vm, VIR_DOMAIN_EVENT_STOPPED,
                                      VIR_DOMAIN_EVENT_STOPPED_SAVED);
-    if (!vm->persistent) {
-        if (qemuDomainObjEndAsyncJob(driver, vm) > 0)
-            qemuDomainRemoveInactive(driver, vm);
-        vm = NULL;
-    }
-
 endjob:
-    if (vm) {
-        if (ret != 0) {
-            if (was_running && virDomainObjIsActive(vm)) {
-                rc = qemuProcessStartCPUs(driver, vm, dom->conn,
-                                          VIR_DOMAIN_RUNNING_SAVE_CANCELED,
-                                          QEMU_ASYNC_JOB_SAVE);
-                if (rc < 0) {
-                    VIR_WARN("Unable to resume guest CPUs after save failure");
-                    event = virDomainEventLifecycleNewFromObj(vm,
-                                          VIR_DOMAIN_EVENT_SUSPENDED,
-                                          VIR_DOMAIN_EVENT_SUSPENDED_API_ERROR);
-                }
+    qemuDomainObjEndAsyncJob(driver, vm);
+    if (ret != 0) {
+        if (was_running && virDomainObjIsActive(vm)) {
+            rc = qemuProcessStartCPUs(driver, vm, dom->conn,
+                                      VIR_DOMAIN_RUNNING_SAVE_CANCELED,
+                                      QEMU_ASYNC_JOB_SAVE);
+            if (rc < 0) {
+                VIR_WARN("Unable to resume guest CPUs after save failure");
+                event = virDomainEventLifecycleNewFromObj(vm,
+                                      VIR_DOMAIN_EVENT_SUSPENDED,
+                                      VIR_DOMAIN_EVENT_SUSPENDED_API_ERROR);
             }
         }
-        if (qemuDomainObjEndAsyncJob(driver, vm) == 0)
-            vm = NULL;
+    } else if (!vm->persistent) {
+        qemuDomainRemoveInactive(driver, vm);
     }
 
 cleanup:
     VIR_FREE(xml);
     if (event)
         qemuDomainEventQueue(driver, event);
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     virObjectUnref(caps);
     return ret;
 }
@@ -3333,11 +3282,9 @@ qemuDomainSaveFlags(virDomainPtr dom, const char *path, const char *dxml,
     ret = qemuDomainSaveInternal(driver, dom, vm, path, compressed,
                                  dxml, flags);
-    vm = NULL;
 
 cleanup:
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     virObjectUnref(cfg);
     return ret;
 }
@@ -3416,15 +3363,13 @@ qemuDomainManagedSave(virDomainPtr dom, unsigned int flags)
     VIR_INFO("Saving state of domain '%s' to '%s'", vm->def->name, name);
 
-    if ((ret = qemuDomainSaveInternal(driver, dom, vm, name, compressed,
-                                      NULL, flags)) == 0)
+    ret = qemuDomainSaveInternal(driver, dom, vm, name,
+                                 compressed, NULL, flags);
+    if (ret == 0)
         vm->hasManagedSave = true;
 
-    vm = NULL;
-
 cleanup:
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     VIR_FREE(name);
     virObjectUnref(cfg);
@@ -3471,7 +3416,7 @@ qemuDomainHasManagedSaveImage(virDomainPtr dom, unsigned int flags)
     ret = vm->hasManagedSave;
 
 cleanup:
-    virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     return ret;
 }
@@ -3506,7 +3451,7 @@ qemuDomainManagedSaveRemove(virDomainPtr dom, unsigned int flags)
 
 cleanup:
     VIR_FREE(name);
-    virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     return ret;
 }
@@ -3765,16 +3710,12 @@ qemuDomainCoreDumpWithFormat(virDomainPtr dom,
         }
     }
 
-    if (qemuDomainObjEndAsyncJob(driver, vm) == 0) {
-        vm = NULL;
-    } else if ((ret == 0) && (flags & VIR_DUMP_CRASH) && !vm->persistent) {
+    qemuDomainObjEndAsyncJob(driver, vm);
+    if (ret == 0 && flags & VIR_DUMP_CRASH && !vm->persistent)
         qemuDomainRemoveInactive(driver, vm);
-        vm = NULL;
-    }
 
 cleanup:
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     if (event)
         qemuDomainEventQueue(driver, event);
     return ret;
@@ -3873,12 +3814,10 @@ qemuDomainScreenshot(virDomainPtr dom,
         unlink(tmp);
     VIR_FREE(tmp);
 
-    if (!qemuDomainObjEndJob(driver, vm))
-        vm = NULL;
+    qemuDomainObjEndJob(driver, vm);
 
 cleanup:
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     virObjectUnref(cfg);
     return ret;
 }
@@ -3937,10 +3876,7 @@ static void processWatchdogEvent(virQEMUDriverPtr driver, virDomainObjPtr vm, in
     }
 
 endjob:
-    /* Safe to ignore value since ref count was incremented in
-     * qemuProcessHandleWatchdog().
-     */
-    ignore_value(qemuDomainObjEndAsyncJob(driver, vm));
+    qemuDomainObjEndAsyncJob(driver, vm);
 
 cleanup:
     virObjectUnref(cfg);
@@ -3987,10 +3923,7 @@ doCoreDumpToAutoDumpPath(virQEMUDriverPtr driver,
                        "%s", _("Dump failed"));
 
 endjob:
-    /* Safe to ignore value since ref count was incremented in
-     * qemuProcessHandleGuestPanic().
-     */
-    ignore_value(qemuDomainObjEndAsyncJob(driver, vm));
+    qemuDomainObjEndAsyncJob(driver, vm);
 
 cleanup:
     VIR_FREE(dumpfile);
@@ -4120,10 +4053,7 @@ processDeviceDeletedEvent(virQEMUDriverPtr driver,
               devAlias);
 
 endjob:
-    /* We had an extra reference to vm before starting a new job so ending the
-     * job is guaranteed not to remove the last reference.
-     */
-    ignore_value(qemuDomainObjEndJob(driver, vm));
+    qemuDomainObjEndJob(driver, vm);
 
 cleanup:
     VIR_FREE(devAlias);
@@ -4310,10 +4240,7 @@ processNicRxFilterChangedEvent(virQEMUDriverPtr driver,
     }
 
 endjob:
-    /* We had an extra reference to vm before starting a new job so ending the
-     * job is guaranteed not to remove the last reference.
-     */
-    ignore_value(qemuDomainObjEndJob(driver, vm));
+    qemuDomainObjEndJob(driver, vm);
 
 cleanup:
     virNetDevRxFilterFree(hostFilter);
@@ -4371,10 +4298,7 @@ processSerialChangedEvent(virQEMUDriverPtr driver,
               "channel %s", vm->def->name, devAlias);
 
 endjob:
-    /* We had an extra reference to vm before starting a new job so ending the
-     * job is guaranteed not to remove the last reference.
-     */
-    ignore_value(qemuDomainObjEndJob(driver, vm));
+    qemuDomainObjEndJob(driver, vm);
 
 cleanup:
     VIR_FREE(devAlias);
@@ -4413,8 +4337,7 @@ static void qemuProcessEventHandler(void *data, void *opaque)
         break;
     }
 
-    if (virObjectUnref(vm))
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     VIR_FREE(processEvent);
 }
@@ -4748,12 +4671,10 @@ qemuDomainSetVcpusFlags(virDomainPtr dom, unsigned int nvcpus,
     ret = 0;
 
 endjob:
-    if (!qemuDomainObjEndJob(driver, vm))
-        vm = NULL;
+    qemuDomainObjEndJob(driver, vm);
 
 cleanup:
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     virObjectUnref(caps);
     VIR_FREE(cpuinfo);
     virObjectUnref(cfg);
@@ -4945,8 +4866,7 @@ qemuDomainPinVcpuFlags(virDomainPtr dom,
 cleanup:
     if (cgroup_vcpu)
         virCgroupFree(&cgroup_vcpu);
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     if (event)
         qemuDomainEventQueue(driver, event);
     VIR_FREE(str);
@@ -5047,8 +4967,7 @@ qemuDomainGetVcpuPinInfo(virDomainPtr dom,
     ret = ncpumaps;
 
 cleanup:
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     virObjectUnref(caps);
     return ret;
 }
@@ -5226,8 +5145,7 @@ qemuDomainPinEmulator(virDomainPtr dom,
     VIR_FREE(str);
     virBitmapFree(pcpumap);
     virObjectUnref(caps);
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     virObjectUnref(cfg);
     return ret;
 }
@@ -5300,8 +5218,7 @@ qemuDomainGetEmulatorPinInfo(virDomainPtr dom,
     ret = 1;
 
 cleanup:
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     virObjectUnref(caps);
     return ret;
 }
@@ -5332,8 +5249,7 @@ qemuDomainGetVcpus(virDomainPtr dom,
     ret = qemuDomainHelperGetVcpus(vm, info, maxinfo, cpumaps, maplen);
 
 cleanup:
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     return ret;
 }
@@ -5399,8 +5315,7 @@ qemuDomainGetVcpusFlags(virDomainPtr dom, unsigned int flags)
     qemuDomainObjExitAgent(vm);
 
 endjob:
-    if (!qemuDomainObjEndJob(driver, vm))
-        vm = NULL;
+    qemuDomainObjEndJob(driver, vm);
 
     if (ncpuinfo < 0)
         goto cleanup;
@@ -5425,8 +5340,7 @@ qemuDomainGetVcpusFlags(virDomainPtr dom, unsigned int flags)
 
 cleanup:
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     virObjectUnref(caps);
     VIR_FREE(cpuinfo);
     return ret;
@@ -5486,8 +5400,7 @@ static int qemuDomainGetSecurityLabel(virDomainPtr dom, virSecurityLabelPtr secl
     ret = 0;
 
 cleanup:
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     return ret;
 }
@@ -5552,8 +5465,7 @@ static int qemuDomainGetSecurityLabelList(virDomainPtr dom,
     }
 
 cleanup:
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     return ret;
 }
@@ -5990,6 +5902,7 @@ qemuDomainRestoreFlags(virConnectPtr conn,
                                    VIR_DOMAIN_OBJ_LIST_ADD_CHECK_LIVE,
                                    NULL)))
         goto cleanup;
+    virObjectRef(vm);
     def = NULL;
 
     if (flags & VIR_DOMAIN_SAVE_RUNNING)
@@ -6010,12 +5923,9 @@ qemuDomainRestoreFlags(virConnectPtr conn,
     if (virFileWrapperFdClose(wrapperFd) < 0)
         VIR_WARN("Failed to close %s", path);
 
-    if (!qemuDomainObjEndJob(driver, vm)) {
-        vm = NULL;
-    } else if (ret < 0 && !vm->persistent) {
+    qemuDomainObjEndJob(driver, vm);
+    if (ret < 0 && !vm->persistent)
         qemuDomainRemoveInactive(driver, vm);
-        vm = NULL;
-    }
 
 cleanup:
     virDomainDefFree(def);
@@ -6023,8 +5933,7 @@ qemuDomainRestoreFlags(virConnectPtr conn,
     VIR_FREE(xml);
     VIR_FREE(xmlout);
     virFileWrapperFdFree(wrapperFd);
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     virNWFilterUnlockFilterUpdates();
     return ret;
 }
@@ -6273,10 +6182,7 @@ static char *qemuDomainGetXMLDesc(virDomainPtr dom,
         qemuDomainObjExitMonitor(driver, vm);
 
 endjob:
-        if (!qemuDomainObjEndJob(driver, vm)) {
-            vm = NULL;
-            goto cleanup;
-        }
+        qemuDomainObjEndJob(driver, vm);
         if (err < 0)
             goto cleanup;
         if (err > 0)
@@ -6291,8 +6197,7 @@ static char *qemuDomainGetXMLDesc(virDomainPtr dom,
     ret = qemuDomainFormatXML(driver, vm, flags);
 
 cleanup:
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     return ret;
 }
@@ -6687,12 +6592,10 @@ qemuDomainCreateWithFlags(virDomainPtr dom, unsigned int flags)
     ret = 0;
 
 endjob:
-    if (!qemuDomainObjEndJob(driver, vm))
-        vm = NULL;
+    qemuDomainObjEndJob(driver, vm);
 
 cleanup:
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     virNWFilterUnlockFilterUpdates();
     return ret;
 }
@@ -6744,7 +6647,7 @@ static virDomainPtr qemuDomainDefineXML(virConnectPtr conn, const char *xml)
                                    driver->xmlopt,
                                    0, &oldDef)))
         goto cleanup;
-
+    virObjectRef(vm);
     def = NULL;
     if (virDomainHasDiskMirror(vm)) {
         virReportError(VIR_ERR_BLOCK_COPY_ACTIVE, "%s",
@@ -6769,7 +6672,6 @@ static virDomainPtr qemuDomainDefineXML(virConnectPtr conn, const char *xml)
             /* Brand new domain. Remove it */
             VIR_INFO("Deleting domain '%s'", vm->def->name);
             qemuDomainRemoveInactive(driver, vm);
-            vm = NULL;
         }
         goto cleanup;
     }
@@ -6787,8 +6689,7 @@ static virDomainPtr qemuDomainDefineXML(virConnectPtr conn, const char *xml)
 cleanup:
     virDomainDefFree(oldDef);
     virDomainDefFree(def);
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     if (event)
         qemuDomainEventQueue(driver, event);
     virObjectUnref(qemuCaps);
@@ -6894,15 +6795,13 @@ qemuDomainUndefineFlags(virDomainPtr dom,
         vm->persistent = 0;
     } else {
         qemuDomainRemoveInactive(driver, vm);
-        vm = NULL;
     }
 
     ret = 0;
 
 cleanup:
     VIR_FREE(name);
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     if (event)
         qemuDomainEventQueue(driver, event);
     virObjectUnref(cfg);
@@ -7664,8 +7563,7 @@ static int qemuDomainAttachDeviceFlags(virDomainPtr dom, const char *xml,
     }
 
 endjob:
-    if (!qemuDomainObjEndJob(driver, vm))
-        vm = NULL;
+    qemuDomainObjEndJob(driver, vm);
 
 cleanup:
     virObjectUnref(qemuCaps);
@@ -7673,8 +7571,7 @@ static int qemuDomainAttachDeviceFlags(virDomainPtr dom, const char *xml,
     if (dev != dev_copy)
         virDomainDeviceDefFree(dev_copy);
     virDomainDeviceDefFree(dev);
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     virObjectUnref(caps);
     virObjectUnref(cfg);
     virNWFilterUnlockFilterUpdates();
@@ -7813,8 +7710,7 @@ static int qemuDomainUpdateDeviceFlags(virDomainPtr dom,
     }
 
 endjob:
-    if (!qemuDomainObjEndJob(driver, vm))
-        vm = NULL;
+    qemuDomainObjEndJob(driver, vm);
 
 cleanup:
     virObjectUnref(qemuCaps);
@@ -7822,8 +7718,7 @@ static int qemuDomainUpdateDeviceFlags(virDomainPtr dom,
     if (dev != dev_copy)
         virDomainDeviceDefFree(dev_copy);
     virDomainDeviceDefFree(dev);
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     virObjectUnref(caps);
     virObjectUnref(cfg);
     virNWFilterUnlockFilterUpdates();
@@ -7955,8 +7850,7 @@ static int qemuDomainDetachDeviceFlags(virDomainPtr dom, const char *xml,
     }
 
 endjob:
-    if (!qemuDomainObjEndJob(driver, vm))
-        vm = NULL;
+    qemuDomainObjEndJob(driver, vm);
 
 cleanup:
     virObjectUnref(qemuCaps);
@@ -7964,8 +7858,7 @@ static int qemuDomainDetachDeviceFlags(virDomainPtr dom, const char *xml,
     if (dev != dev_copy)
         virDomainDeviceDefFree(dev_copy);
     virDomainDeviceDefFree(dev);
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     virObjectUnref(caps);
     virObjectUnref(cfg);
     return ret;
@@ -7993,8 +7886,7 @@ static int qemuDomainGetAutostart(virDomainPtr dom,
     ret = 0;
 
 cleanup:
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     return ret;
 }
@@ -8059,8 +7951,7 @@ static int qemuDomainSetAutostart(virDomainPtr dom,
 cleanup:
     VIR_FREE(configFile);
     VIR_FREE(autostartLink);
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     virObjectUnref(cfg);
     return ret;
 }
@@ -8114,8 +8005,7 @@ static char *qemuDomainGetSchedulerType(virDomainPtr dom,
     ignore_value(VIR_STRDUP(ret, "posix"));
 
 cleanup:
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     virObjectUnref(cfg);
     return ret;
 }
@@ -8499,12 +8389,10 @@ qemuDomainSetBlkioParameters(virDomainPtr dom,
     }
 
 endjob:
-    if (!qemuDomainObjEndJob(driver, vm))
-        vm = NULL;
+    qemuDomainObjEndJob(driver, vm);
 
 cleanup:
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     virObjectUnref(caps);
     virObjectUnref(cfg);
     return ret;
@@ -8917,8 +8805,7 @@ qemuDomainGetBlkioParameters(virDomainPtr dom,
     ret = 0;
 
 cleanup:
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     virObjectUnref(caps);
     virObjectUnref(cfg);
     return ret;
@@ -9062,7 +8949,7 @@ qemuDomainSetMemoryParameters(virDomainPtr dom,
     ret = 0;
 
 cleanup:
-    virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     virObjectUnref(caps);
     virObjectUnref(cfg);
     return ret;
@@ -9215,8 +9102,7 @@ qemuDomainGetMemoryParameters(virDomainPtr dom,
     ret = 0;
 
 cleanup:
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     virObjectUnref(caps);
     virObjectUnref(cfg);
     return ret;
@@ -9428,13 +9314,11 @@ qemuDomainSetNumaParameters(virDomainPtr dom,
     ret = 0;
 
 endjob:
-    if (!qemuDomainObjEndJob(driver, vm))
-        vm = NULL;
+    qemuDomainObjEndJob(driver, vm);
 
 cleanup:
     virBitmapFree(nodeset);
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     virObjectUnref(caps);
     virObjectUnref(cfg);
     return ret;
@@ -9538,8 +9422,7 @@ qemuDomainGetNumaParameters(virDomainPtr dom,
 
 cleanup:
     VIR_FREE(nodeset);
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     virObjectUnref(caps);
     return ret;
 }
@@ -9826,8 +9709,7 @@ qemuDomainSetSchedulerParametersFlags(virDomainPtr dom,
 
 cleanup:
     virDomainDefFree(vmdef);
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     if (eventNparams)
         virTypedParamsFree(eventParams, eventNparams);
     virObjectUnref(caps);
@@ -10064,8 +9946,7 @@ qemuDomainGetSchedulerParametersFlags(virDomainPtr dom,
     ret = 0;
 
 cleanup:
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     virObjectUnref(caps);
     virObjectUnref(cfg);
     return ret;
@@ -10163,13 +10044,11 @@ qemuDomainBlockResize(virDomainPtr dom,
     ret = 0;
 
 endjob:
-    if (!qemuDomainObjEndJob(driver, vm))
-        vm = NULL;
+    qemuDomainObjEndJob(driver, vm);
 
 cleanup:
     VIR_FREE(device);
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     return ret;
 }
@@ -10240,12 +10119,10 @@ qemuDomainBlockStats(virDomainPtr dom,
     qemuDomainObjExitMonitor(driver, vm);
 
 endjob:
-    if (!qemuDomainObjEndJob(driver, vm))
-        vm = NULL;
+    qemuDomainObjEndJob(driver, vm);
 
 cleanup:
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     return ret;
 }
@@ -10415,12 +10292,10 @@ qemuDomainBlockStatsFlags(virDomainPtr dom,
     *nparams = tmp;
 
 endjob:
-    if (!qemuDomainObjEndJob(driver, vm))
-        vm = NULL;
+    qemuDomainObjEndJob(driver, vm);
 
 cleanup:
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     return ret;
 }
@@ -10461,8 +10336,7 @@ qemuDomainInterfaceStats(virDomainPtr dom,
                        _("invalid path, '%s' is not a known interface"), path);
 
 cleanup:
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     return ret;
 }
@@ -10646,14 +10520,12 @@ qemuDomainSetInterfaceParameters(virDomainPtr dom,
     ret = 0;
 
 endjob:
-    if (!qemuDomainObjEndJob(driver, vm))
-        vm = NULL;
+    qemuDomainObjEndJob(driver, vm);
 
 cleanup:
     virNetDevBandwidthFree(bandwidth);
     virNetDevBandwidthFree(newBandwidth);
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     virObjectUnref(caps);
     virObjectUnref(cfg);
     return ret;
@@ -10773,8 +10645,7 @@ qemuDomainGetInterfaceParameters(virDomainPtr dom,
     ret = 0;
 
 cleanup:
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     virObjectUnref(caps);
     return ret;
 }
@@ -10823,12 +10694,10 @@ qemuDomainMemoryStats(virDomainPtr dom,
         }
     }
 
-    if (!qemuDomainObjEndJob(driver, vm))
-        vm = NULL;
+    qemuDomainObjEndJob(driver, vm);
 
 cleanup:
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     return ret;
 }
@@ -10885,8 +10754,7 @@ qemuDomainBlockPeek(virDomainPtr dom,
 
 cleanup:
     VIR_FORCE_CLOSE(fd);
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     return ret;
 }
@@ -10966,16 +10834,14 @@ qemuDomainMemoryPeek(virDomainPtr dom,
     ret = 0;
 
 endjob:
-    if (!qemuDomainObjEndJob(driver, vm))
-        vm = NULL;
+    qemuDomainObjEndJob(driver, vm);
 
 cleanup:
     VIR_FORCE_CLOSE(fd);
     if (tmp)
         unlink(tmp);
     VIR_FREE(tmp);
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     virObjectUnref(cfg);
     return ret;
 }
@@ -11153,8 +11019,7 @@ qemuDomainGetBlockInfo(virDomainPtr dom,
             ret = 0;
         }
 
-        if (!qemuDomainObjEndJob(driver, vm))
-            vm = NULL;
+        qemuDomainObjEndJob(driver, vm);
     } else {
         ret = 0;
     }
@@ -11175,7 +11040,7 @@ qemuDomainGetBlockInfo(virDomainPtr dom,
                        _("domain is not running"));
         ret = -1;
     }
-    virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     virObjectUnref(cfg);
     return ret;
 }
@@ -11500,7 +11365,7 @@ qemuDomainMigrateBegin3(virDomainPtr domain,
         return NULL;
 
     if (virDomainMigrateBegin3EnsureACL(domain->conn, vm->def) < 0) {
-        virObjectUnlock(vm);
+        qemuDomObjEndAPI(vm);
         return NULL;
     }
@@ -11536,7 +11401,7 @@ qemuDomainMigrateBegin3Params(virDomainPtr domain,
         return NULL;
 
     if (virDomainMigrateBegin3ParamsEnsureACL(domain->conn, vm->def) < 0) {
-        virObjectUnlock(vm);
+        qemuDomObjEndAPI(vm);
         return NULL;
     }
@@ -11780,7 +11645,7 @@ qemuDomainMigratePerform3(virDomainPtr dom,
         return -1;
 
     if (virDomainMigratePerform3EnsureACL(dom->conn, vm->def) < 0) {
-        virObjectUnlock(vm);
+        qemuDomObjEndAPI(vm);
         return -1;
     }
@@ -11945,7 +11810,7 @@ qemuDomainMigrateConfirm3(virDomainPtr domain,
         return -1;
 
     if (virDomainMigrateConfirm3EnsureACL(domain->conn, vm->def) < 0) {
-        virObjectUnlock(vm);
+        qemuDomObjEndAPI(vm);
         return -1;
     }
@@ -11973,7 +11838,7 @@ qemuDomainMigrateConfirm3Params(virDomainPtr domain,
         return -1;
 
     if (virDomainMigrateConfirm3ParamsEnsureACL(domain->conn, vm->def) < 0) {
-        virObjectUnlock(vm);
+        qemuDomObjEndAPI(vm);
         return -1;
     }
@@ -12276,8 +12141,7 @@ static int qemuDomainGetJobInfo(virDomainPtr dom,
     ret = 0;
 
 cleanup:
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     return ret;
 }
@@ -12343,8 +12207,7 @@ qemuDomainGetJobStats(virDomainPtr dom,
     ret = 0;
 
 cleanup:
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     return ret;
 }
@@ -12391,12 +12254,10 @@ static int qemuDomainAbortJob(virDomainPtr dom)
     qemuDomainObjExitMonitor(driver, vm);
 
 endjob:
-    if (!qemuDomainObjEndJob(driver, vm))
-        vm = NULL;
+    qemuDomainObjEndJob(driver, vm);
 
 cleanup:
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     return ret;
 }
@@ -12436,12 +12297,10 @@ qemuDomainMigrateSetMaxDowntime(virDomainPtr dom,
     qemuDomainObjExitMonitor(driver, vm);
 
 endjob:
-    if (!qemuDomainObjEndJob(driver, vm))
-        vm = NULL;
+    qemuDomainObjEndJob(driver, vm);
 
 cleanup:
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     return ret;
 }
@@ -12491,12 +12350,10 @@ qemuDomainMigrateGetCompressionCache(virDomainPtr dom,
     qemuDomainObjExitMonitor(driver, vm);
 
 endjob:
-    if (!qemuDomainObjEndJob(driver, vm))
-        vm = NULL;
+    qemuDomainObjEndJob(driver, vm);
 
 cleanup:
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     return ret;
 }
@@ -12547,12 +12404,10 @@ qemuDomainMigrateSetCompressionCache(virDomainPtr dom,
     qemuDomainObjExitMonitor(driver, vm);
 
 endjob:
-    if (!qemuDomainObjEndJob(driver, vm))
-        vm = NULL;
+    qemuDomainObjEndJob(driver, vm);
 
 cleanup:
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     return ret;
 }
@@ -12602,16 +12457,14 @@ qemuDomainMigrateSetMaxSpeed(virDomainPtr dom,
         priv->migMaxBandwidth = bandwidth;
 
 endjob:
-        if (!qemuDomainObjEndJob(driver, vm))
-            vm = NULL;
+        qemuDomainObjEndJob(driver, vm);
     } else {
         priv->migMaxBandwidth = bandwidth;
         ret = 0;
     }
 
 cleanup:
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     return ret;
 }
@@ -12638,8 +12491,7 @@ qemuDomainMigrateGetMaxSpeed(virDomainPtr dom,
     ret = 0;
 
 cleanup:
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     return ret;
 }
@@ -14099,12 +13951,10 @@ qemuDomainSnapshotCreateXML(virDomainPtr domain,
             virDomainSnapshotObjListRemove(vm->snapshots, snap);
     }
 
-    if (!qemuDomainObjEndAsyncJob(driver, vm))
-        vm = NULL;
+    qemuDomainObjEndAsyncJob(driver, vm);
 
 cleanup:
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     virDomainSnapshotDefFree(def);
     VIR_FREE(xml);
     virObjectUnref(caps);
@@ -14135,7 +13985,7 @@ qemuDomainSnapshotListNames(virDomainPtr domain,
                                          flags);
 
 cleanup:
-    virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     return n;
 }
@@ -14159,7 +14009,7 @@ qemuDomainSnapshotNum(virDomainPtr domain,
     n = virDomainSnapshotObjListNum(vm->snapshots, NULL, flags);
 
 cleanup:
-    virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     return n;
 }
@@ -14184,7 +14034,7 @@ qemuDomainListAllSnapshots(virDomainPtr domain,
     n = virDomainListSnapshots(vm->snapshots, NULL, domain, snaps, flags);
 
 cleanup:
-    virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     return n;
 }
@@ -14243,7 +14093,7 @@ qemuDomainSnapshotNumChildren(virDomainSnapshotPtr snapshot,
     n = virDomainSnapshotObjListNum(vm->snapshots, snap, flags);
 
 cleanup:
-    virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     return n;
 }
@@ -14273,7 +14123,7 @@ qemuDomainSnapshotListAllChildren(virDomainSnapshotPtr snapshot,
                                    flags);
 
 cleanup:
-    virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     return n;
 }
@@ -14301,7 +14151,7 @@ qemuDomainSnapshotLookupByName(virDomainPtr domain,
     snapshot = virGetDomainSnapshot(domain, snap->def->name);
 
 cleanup:
-    virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     return snapshot;
 }
@@ -14324,7 +14174,7 @@ qemuDomainHasCurrentSnapshot(virDomainPtr domain,
     ret = (vm->current_snapshot != NULL);
 
 cleanup:
-    virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     return ret;
 }
@@ -14358,7 +14208,7 @@ qemuDomainSnapshotGetParent(virDomainSnapshotPtr snapshot,
     parent = virGetDomainSnapshot(snapshot->domain, snap->def->parent);
 
 cleanup:
-    virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     return parent;
 }
@@ -14387,7 +14237,7 @@ qemuDomainSnapshotCurrent(virDomainPtr domain,
     snapshot = virGetDomainSnapshot(domain, vm->current_snapshot->def->name);
 
 cleanup:
-    virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     return snapshot;
 }
@@ -14417,7 +14267,7 @@ qemuDomainSnapshotGetXMLDesc(virDomainSnapshotPtr snapshot,
     xml = virDomainSnapshotDefFormat(uuidstr, snap->def, flags, 0);
 
 cleanup:
-    virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     return xml;
 }
@@ -14445,7 +14295,7 @@ qemuDomainSnapshotIsCurrent(virDomainSnapshotPtr snapshot,
            STREQ(snapshot->name, vm->current_snapshot->def->name));
 
 cleanup:
-    virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     return ret;
 }
@@ -14475,7 +14325,7 @@ qemuDomainSnapshotHasMetadata(virDomainSnapshotPtr snapshot,
     ret = 1;
 
 cleanup:
-    virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     return ret;
 }
@@ -14764,9 +14614,8 @@ qemuDomainRevertToSnapshot(virDomainSnapshotPtr snapshot,
         if (qemuDomainSnapshotRevertInactive(driver, vm, snap) < 0) {
             if (!vm->persistent) {
-                if (qemuDomainObjEndJob(driver, vm))
-                    qemuDomainRemoveInactive(driver, vm);
-                vm = NULL;
+                qemuDomainObjEndJob(driver, vm);
+                qemuDomainRemoveInactive(driver, vm);
                 goto cleanup;
             }
             goto endjob;
@@ -14791,9 +14640,8 @@ qemuDomainRevertToSnapshot(virDomainSnapshotPtr snapshot,
         virDomainAuditStart(vm, "from-snapshot", rc >= 0);
         if (rc < 0) {
             if (!vm->persistent) {
-                if (qemuDomainObjEndJob(driver, vm))
-                    qemuDomainRemoveInactive(driver, vm);
-                vm = NULL;
+                qemuDomainObjEndJob(driver, vm);
+                qemuDomainRemoveInactive(driver, vm);
                 goto cleanup;
             }
             goto endjob;
@@ -14830,11 +14678,10 @@ qemuDomainRevertToSnapshot(virDomainSnapshotPtr snapshot,
     ret = 0;
 
 endjob:
-    if (vm && !qemuDomainObjEndJob(driver, vm))
-        vm = NULL;
+    qemuDomainObjEndJob(driver, vm);
 
 cleanup:
-    if (vm && ret == 0) {
+    if (ret == 0) {
         if (qemuDomainSnapshotWriteMetadata(vm, snap,
                                             cfg->snapshotDir) < 0)
             ret = -1;
@@ -14848,8 +14695,7 @@ qemuDomainRevertToSnapshot(virDomainSnapshotPtr snapshot,
         if (event2)
             qemuDomainEventQueue(driver, event2);
     }
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     virObjectUnref(caps);
     virObjectUnref(cfg);
     virNWFilterUnlockFilterUpdates();
@@ -14999,12 +14845,10 @@ qemuDomainSnapshotDelete(virDomainSnapshotPtr snapshot,
     }
 
 endjob:
-    if (!qemuDomainObjEndJob(driver, vm))
-        vm = NULL;
+    qemuDomainObjEndJob(driver, vm);
 
 cleanup:
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     virObjectUnref(cfg);
     return ret;
 }
@@ -15052,12 +14896,10 @@ static int qemuDomainQemuMonitorCommand(virDomainPtr domain, const char *cmd,
     qemuDomainObjExitMonitor(driver, vm);
 
 endjob:
-    if (!qemuDomainObjEndJob(driver, vm))
-        vm = NULL;
+    qemuDomainObjEndJob(driver, vm);
 
 cleanup:
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     return ret;
 }
@@ -15125,21 +14967,18 @@ static virDomainPtr qemuDomainQemuAttach(virConnectPtr conn,
                                    VIR_DOMAIN_OBJ_LIST_ADD_CHECK_LIVE,
                                    NULL)))
         goto cleanup;
-
+    virObjectRef(vm);
     def = NULL;
 
     if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) {
         qemuDomainRemoveInactive(driver, vm);
-        vm = NULL;
         goto cleanup;
     }
 
     if (qemuProcessAttach(conn, driver, vm, pid,
                           pidfile, monConfig, monJSON) < 0) {
-        if (qemuDomainObjEndJob(driver, vm))
-            qemuDomainRemoveInactive(driver, vm);
-        vm = NULL;
-        monConfig = NULL;
+        qemuDomainObjEndJob(driver, vm);
+        qemuDomainRemoveInactive(driver, vm);
        goto cleanup;
     }
@@ -15149,14 +14988,12 @@ static virDomainPtr qemuDomainQemuAttach(virConnectPtr conn,
     if (dom)
         dom->id = vm->def->id;
 
-    if (!qemuDomainObjEndJob(driver, vm))
-        vm = NULL;
+    qemuDomainObjEndJob(driver, vm);
 
 cleanup:
     virDomainDefFree(def);
     virDomainChrSourceDefFree(monConfig);
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     VIR_FREE(pidfile);
     virObjectUnref(caps);
     virObjectUnref(qemuCaps);
@@ -15241,8 +15078,7 @@ qemuDomainOpenConsole(virDomainPtr dom,
     }
 
 cleanup:
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     return ret;
 }
@@ -15316,8 +15152,7 @@ qemuDomainOpenChannel(virDomainPtr dom,
     }
 
 cleanup:
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     return ret;
 }
@@ -15510,8 +15345,7 @@ qemuDomainBlockPivot(virConnectPtr conn,
 }
 
-/* bandwidth in MiB/s per public API. Caller must lock vm beforehand,
- * and not access it afterwards. */
+/* bandwidth in MiB/s per public API. Caller must lock vm beforehand. */
 static int
 qemuDomainBlockJobImpl(virDomainObjPtr vm,
                        virConnectPtr conn,
@@ -15730,18 +15564,14 @@ qemuDomainBlockJobImpl(virDomainObjPtr vm,
     }
 
 endjob:
-    if (!qemuDomainObjEndJob(driver, vm)) {
-        vm = NULL;
-        goto cleanup;
-    }
+    qemuDomainObjEndJob(driver, vm);
 
 cleanup:
     virObjectUnref(cfg);
     VIR_FREE(basePath);
     VIR_FREE(backingPath);
     VIR_FREE(device);
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     if (event)
         qemuDomainEventQueue(driver, event);
     if (event2)
@@ -15761,7 +15591,7 @@ qemuDomainBlockJobAbort(virDomainPtr dom, const char *path, unsigned int flags)
         return -1;
 
     if (virDomainBlockJobAbortEnsureACL(dom->conn, vm->def) < 0) {
-        virObjectUnlock(vm);
+        qemuDomObjEndAPI(vm);
         return -1;
     }
@@ -15771,7 +15601,7 @@ qemuDomainBlockJobAbort(virDomainPtr dom, const char *path, unsigned int flags)
 static int
 qemuDomainGetBlockJobInfo(virDomainPtr dom, const char *path,
-                           virDomainBlockJobInfoPtr info, unsigned int flags)
+                          virDomainBlockJobInfoPtr info, unsigned int flags)
 {
     virQEMUDriverPtr driver = dom->conn->privateData;
     qemuDomainObjPrivatePtr priv;
@@ -15788,7 +15618,7 @@ qemuDomainGetBlockJobInfo(virDomainPtr dom, const char *path,
         return -1;
 
     if (virDomainGetBlockJobInfoEnsureACL(dom->conn, vm->def) < 0) {
-        virObjectUnlock(vm);
+        qemuDomObjEndAPI(vm);
         return -1;
     }
@@ -15849,12 +15679,10 @@ qemuDomainGetBlockJobInfo(virDomainPtr dom, const char *path,
         virObjectUnref(cfg);
     }
 
 endjob:
-    if (!qemuDomainObjEndJob(driver, vm))
-        vm = NULL;
+    qemuDomainObjEndJob(driver, vm);
 
 cleanup:
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     return ret;
 }
@@ -15869,7 +15697,7 @@ qemuDomainBlockJobSetSpeed(virDomainPtr dom, const char *path,
         return -1;
 
     if (virDomainBlockJobSetSpeedEnsureACL(dom->conn, vm->def) < 0) {
-        virObjectUnlock(vm);
+        qemuDomObjEndAPI(vm);
         return -1;
     }
@@ -15879,8 +15707,7 @@ qemuDomainBlockJobSetSpeed(virDomainPtr dom, const char *path,
 
 /* bandwidth in bytes/s. Caller must lock vm beforehand, and not
- * access it afterwards; likewise, caller must not access mirror
- * afterwards. */
+ * access mirror afterwards. */
 static int
 qemuDomainBlockCopyCommon(virDomainObjPtr vm,
                           virConnectPtr conn,
@@ -16058,13 +15885,10 @@ qemuDomainBlockCopyCommon(virDomainObjPtr vm,
     if (need_unlink && unlink(mirror->path))
         VIR_WARN("unable to unlink just-created %s", mirror->path);
     virStorageSourceFree(mirror);
-    if (!qemuDomainObjEndJob(driver, vm))
-        vm = NULL;
+    qemuDomainObjEndJob(driver, vm);
 
 cleanup:
     VIR_FREE(device);
-    if (vm)
-        virObjectUnlock(vm);
     virObjectUnref(cfg);
     return ret;
 }
@@ -16135,12 +15959,10 @@ qemuDomainBlockRebase(virDomainPtr dom, const char *path, const char *base,
                       VIR_DOMAIN_BLOCK_REBASE_REUSE_EXT);
     ret = qemuDomainBlockCopyCommon(vm, dom->conn, path, dest, bandwidth,
                                     0, 0, flags, true);
-    vm = NULL;
     dest = NULL;
 
 cleanup:
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     virStorageSourceFree(dest);
     return ret;
 }
@@ -16209,11 +16031,9 @@ qemuDomainBlockCopy(virDomainPtr dom, const char *disk, const char *destxml,
     ret = qemuDomainBlockCopyCommon(vm, dom->conn, disk, dest, bandwidth,
                                     granularity, buf_size, flags, false);
-    vm = NULL;
 
 cleanup:
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     return ret;
 }
@@ -16229,7 +16049,7 @@ qemuDomainBlockPull(virDomainPtr dom, const char *path, unsigned long bandwidth,
         return -1;
 
     if (virDomainBlockPullEnsureACL(dom->conn, vm->def) < 0) {
-        virObjectUnlock(vm);
+        qemuDomObjEndAPI(vm);
         return -1;
     }
@@ -16468,16 +16288,14 @@ qemuDomainBlockCommit(virDomainPtr dom,
                              VIR_DISK_CHAIN_READ_ONLY);
     }
     virStorageSourceFree(mirror);
-    if (!qemuDomainObjEndJob(driver, vm))
-        vm = NULL;
+    qemuDomainObjEndJob(driver, vm);
 
 cleanup:
     VIR_FREE(topPath);
     VIR_FREE(basePath);
     VIR_FREE(backingPath);
     VIR_FREE(device);
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     return ret;
 }
@@ -16538,12 +16356,10 @@ qemuDomainOpenGraphics(virDomainPtr dom,
     ret = qemuMonitorOpenGraphics(priv->mon, protocol, fd, "graphicsfd",
                                   (flags & VIR_DOMAIN_OPEN_GRAPHICS_SKIPAUTH) != 0);
     qemuDomainObjExitMonitor(driver, vm);
-    if (!qemuDomainObjEndJob(driver, vm))
-        vm = NULL;
+    qemuDomainObjEndJob(driver, vm);
 
 cleanup:
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     return ret;
 }
@@ -16609,8 +16425,7 @@ qemuDomainOpenGraphicsFD(virDomainPtr dom,
     ret = qemuMonitorOpenGraphics(priv->mon, protocol, pair[1], "graphicsfd",
                                   (flags & VIR_DOMAIN_OPEN_GRAPHICS_SKIPAUTH));
     qemuDomainObjExitMonitor(driver, vm);
-    if (!qemuDomainObjEndJob(driver, vm))
-        vm = NULL;
+    qemuDomainObjEndJob(driver, vm);
     if (ret < 0)
         goto cleanup;
@@ -16620,8 +16435,7 @@ qemuDomainOpenGraphicsFD(virDomainPtr dom,
 cleanup:
     VIR_FORCE_CLOSE(pair[0]);
     VIR_FORCE_CLOSE(pair[1]);
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     return ret;
 }
@@ -16975,14 +16789,11 @@ qemuDomainSetBlockIoTune(virDomainPtr dom,
     }
 
 endjob:
-
-    if (!qemuDomainObjEndJob(driver, vm))
-        vm = NULL;
+    qemuDomainObjEndJob(driver, vm);
 
 cleanup:
     VIR_FREE(device);
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     if (eventNparams)
         virTypedParamsFree(eventParams, eventNparams);
     virObjectUnref(caps);
@@ -17178,13 +16989,11 @@ qemuDomainGetBlockIoTune(virDomainPtr dom,
     ret = 0;
 
 endjob:
-    if (!qemuDomainObjEndJob(driver, vm))
-        vm = NULL;
+    qemuDomainObjEndJob(driver, vm);
 
 cleanup:
     VIR_FREE(device);
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     virObjectUnref(caps);
     return ret;
 }
@@ -17252,12 +17061,10 @@ qemuDomainGetDiskErrors(virDomainPtr dom,
     ret = n;
 
 endjob:
-    if (!qemuDomainObjEndJob(driver, vm))
-        vm = NULL;
+    qemuDomainObjEndJob(driver, vm);
 
 cleanup:
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     virHashFree(table);
     if (ret < 0) {
         for (i = 0; i < n; i++)
@@ -17299,7 +17106,7 @@ qemuDomainSetMetadata(virDomainPtr dom,
                                     cfg->configDir, flags);
 
 cleanup:
-    virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     virObjectUnref(caps);
     virObjectUnref(cfg);
     return ret;
@@ -17328,7 +17135,7 @@ qemuDomainGetMetadata(virDomainPtr dom,
     ret = virDomainObjGetMetadata(vm, type, uri, caps, driver->xmlopt, flags);
 
 cleanup:
-    virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     virObjectUnref(caps);
     return ret;
 }
@@ -17377,8 +17184,7 @@ qemuDomainGetCPUStats(virDomainPtr domain,
         ret = virCgroupGetPercpuStats(priv->cgroup, params, nparams,
                                       start_cpu, ncpus, priv->nvcpupids);
 
 cleanup:
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     return ret;
 }
@@ -17467,12 +17273,10 @@ qemuDomainPMSuspendForDuration(virDomainPtr dom,
     qemuDomainObjExitAgent(vm);
 
 endjob:
-    if (!qemuDomainObjEndJob(driver, vm))
-        vm = NULL;
+    qemuDomainObjEndJob(driver, vm);
 
 cleanup:
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     return ret;
 }
@@ -17516,12 +17320,10 @@ qemuDomainPMWakeup(virDomainPtr dom,
     qemuDomainObjExitMonitor(driver, vm);
 
 endjob:
-    if (!qemuDomainObjEndJob(driver, vm))
-        vm = NULL;
+    qemuDomainObjEndJob(driver, vm);
 
 cleanup:
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     return ret;
 }
@@ -17592,12 +17394,10 @@ qemuDomainQemuAgentCommand(virDomainPtr domain,
         VIR_FREE(result);
 
 endjob:
-    if (!qemuDomainObjEndJob(driver, vm))
-        vm = NULL;
+    qemuDomainObjEndJob(driver, vm);
 
 cleanup:
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     return result;
 }
@@ -17701,12 +17501,10 @@ qemuDomainFSTrim(virDomainPtr dom,
     qemuDomainObjExitAgent(vm);
 
 endjob:
-    if (!qemuDomainObjEndJob(driver, vm))
-        vm = NULL;
+    qemuDomainObjEndJob(driver, vm);
 
 cleanup:
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     return ret;
 }
@@ -17886,12 +17684,10 @@ qemuDomainGetTime(virDomainPtr dom,
     ret = 0;
 
 endjob:
-    if (!qemuDomainObjEndJob(driver, vm))
-        vm = NULL;
+    qemuDomainObjEndJob(driver, vm);
 
 cleanup:
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     return ret;
 }
@@ -17960,12 +17756,10 @@ qemuDomainSetTime(virDomainPtr dom,
     ret = 0;
 
 endjob:
-    if (!qemuDomainObjEndJob(driver, vm))
-        vm = NULL;
+    qemuDomainObjEndJob(driver, vm);
 
 cleanup:
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     return ret;
 }
@@ -18000,12 +17794,10 @@ qemuDomainFSFreeze(virDomainPtr dom,
     ret = qemuDomainSnapshotFSFreeze(driver, vm, mountpoints, nmountpoints);
 
 endjob:
-    if (!qemuDomainObjEndJob(driver, vm))
-        vm = NULL;
+    qemuDomainObjEndJob(driver, vm);
 
 cleanup:
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     return ret;
 }
@@ -18046,12 +17838,10 @@ qemuDomainFSThaw(virDomainPtr dom,
     ret = qemuDomainSnapshotFSThaw(driver, vm, true);
 
 endjob:
-    if (!qemuDomainObjEndJob(driver, vm))
-        vm = NULL;
+    qemuDomainObjEndJob(driver, vm);
 
 cleanup:
-    if (vm)
-        virObjectUnlock(vm);
+    qemuDomObjEndAPI(vm);
     return ret;
 }
@@ -18728,13 +18518,15 @@ qemuConnectGetAllDomainStats(virConnectPtr conn,
             continue;
 
         if (doms != domlist &&
-            !virConnectGetAllDomainStatsCheckACL(conn, dom->def))
+            !virConnectGetAllDomainStatsCheckACL(conn, dom->def)) {
+            virObjectUnlock(dom);
             continue;
+        }
 
-        if (HAVE_JOB(domflags) &&
-            qemuDomainObjBeginJob(driver, dom, QEMU_JOB_QUERY) < 0)
+        if (HAVE_JOB(privflags) &&
+            qemuDomainObjBeginJob(driver, dom, QEMU_JOB_QUERY) >= 0)
             /* As it was never requested. Gather as much as possible anyway.
*/ - domflags &= ~QEMU_DOMAIN_STATS_HAVE_JOB; + domflags |= QEMU_DOMAIN_STATS_HAVE_JOB; if (qemuDomainGetStats(conn, dom, stats, &tmp, domflags) < 0) goto endjob; @@ -18742,12 +18534,10 @@ qemuConnectGetAllDomainStats(virConnectPtr conn, if (tmp) tmpstats[nstats++] = tmp; - if (HAVE_JOB(domflags) && !qemuDomainObjEndJob(driver, dom)) { - dom = NULL; - continue; - } + if (HAVE_JOB(domflags)) + qemuDomainObjEndJob(driver, dom); - virObjectUnlock(dom); + qemuDomObjEndAPI(dom); dom = NULL; } @@ -18758,8 +18548,7 @@ qemuConnectGetAllDomainStats(virConnectPtr conn, endjob: if (HAVE_JOB(domflags) && dom) - if (!qemuDomainObjEndJob(driver, dom)) - dom = NULL; + qemuDomainObjEndJob(driver, dom); cleanup: if (dom) @@ -18830,8 +18619,7 @@ qemuDomainGetFSInfo(virDomainPtr dom, qemuDomainObjExitAgent(vm); endjob: - if (!qemuDomainObjEndJob(driver, vm)) - vm = NULL; + qemuDomainObjEndJob(driver, vm); cleanup: if (vm) diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c index 0acbb57..6329d50 100644 --- a/src/qemu/qemu_migration.c +++ b/src/qemu/qemu_migration.c @@ -2736,31 +2736,20 @@ qemuMigrationBegin(virConnectPtr conn, if (virCloseCallbacksSet(driver->closeCallbacks, vm, conn, qemuMigrationCleanup) < 0) goto endjob; - if (qemuMigrationJobContinue(vm) == 0) { - vm = NULL; - virReportError(VIR_ERR_OPERATION_FAILED, - "%s", _("domain disappeared")); - VIR_FREE(xml); - if (cookieout) - VIR_FREE(*cookieout); - } + qemuMigrationJobContinue(vm); } else { goto endjob; } cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(vm); return xml; endjob: - if ((flags & VIR_MIGRATE_CHANGE_PROTECTION)) { - if (qemuMigrationJobFinish(driver, vm) == 0) - vm = NULL; - } else { - if (!qemuDomainObjEndJob(driver, vm)) - vm = NULL; - } + if (flags & VIR_MIGRATE_CHANGE_PROTECTION) + qemuMigrationJobFinish(driver, vm); + else + qemuDomainObjEndJob(driver, vm); goto cleanup; } @@ -3099,12 +3088,7 @@ qemuMigrationPrepareAny(virQEMUDriverPtr driver, * This prevents any other APIs being invoked while incoming * migration is taking place. 
*/ - if (!qemuMigrationJobContinue(vm)) { - vm = NULL; - virReportError(VIR_ERR_OPERATION_FAILED, - "%s", _("domain disappeared")); - goto cleanup; - } + qemuMigrationJobContinue(vm); if (autoPort) priv->migrationPort = port; @@ -3115,16 +3099,12 @@ qemuMigrationPrepareAny(virQEMUDriverPtr driver, VIR_FREE(xmlout); VIR_FORCE_CLOSE(dataFD[0]); VIR_FORCE_CLOSE(dataFD[1]); - if (vm) { - if (ret < 0) { - virPortAllocatorRelease(driver->migrationPorts, priv->nbdPort); - priv->nbdPort = 0; - } - if (ret >= 0 || vm->persistent) - virObjectUnlock(vm); - else - qemuDomainRemoveInactive(driver, vm); + if (ret < 0) { + virPortAllocatorRelease(driver->migrationPorts, priv->nbdPort); + priv->nbdPort = 0; + qemuDomainRemoveInactive(driver, vm); } + qemuDomObjEndAPI(vm); if (event) qemuDomainEventQueue(driver, event); qemuMigrationCookieFree(mig); @@ -3137,8 +3117,7 @@ qemuMigrationPrepareAny(virQEMUDriverPtr driver, qemuProcessStop(driver, vm, VIR_DOMAIN_SHUTOFF_FAILED, 0); endjob: - if (!qemuMigrationJobFinish(driver, vm)) - vm = NULL; + qemuMigrationJobFinish(driver, vm); goto cleanup; } @@ -3507,19 +3486,16 @@ qemuMigrationConfirm(virConnectPtr conn, cookiein, cookieinlen, flags, cancelled); - if (qemuMigrationJobFinish(driver, vm) == 0) { - vm = NULL; - } else if (!virDomainObjIsActive(vm) && - (!vm->persistent || (flags & VIR_MIGRATE_UNDEFINE_SOURCE))) { + qemuMigrationJobFinish(driver, vm); + if (!virDomainObjIsActive(vm) && + (!vm->persistent || (flags & VIR_MIGRATE_UNDEFINE_SOURCE))) { if (flags & VIR_MIGRATE_UNDEFINE_SOURCE) virDomainDeleteConfig(cfg->configDir, cfg->autostartDir, vm); qemuDomainRemoveInactive(driver, vm); - vm = NULL; } cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(vm); virObjectUnref(cfg); return ret; } @@ -4877,15 +4853,13 @@ qemuMigrationPerformJob(virQEMUDriverPtr driver, VIR_DOMAIN_EVENT_RESUMED_MIGRATED); } - if (!qemuMigrationJobFinish(driver, vm)) { - vm = NULL; - } else if (!virDomainObjIsActive(vm) && - (!vm->persistent || - (ret == 0 && (flags & VIR_MIGRATE_UNDEFINE_SOURCE)))) { + qemuMigrationJobFinish(driver, vm); + if (!virDomainObjIsActive(vm) && + (!vm->persistent || + (ret == 0 && (flags & VIR_MIGRATE_UNDEFINE_SOURCE)))) { if (flags & VIR_MIGRATE_UNDEFINE_SOURCE) virDomainDeleteConfig(cfg->configDir, cfg->autostartDir, vm); qemuDomainRemoveInactive(driver, vm); - vm = NULL; } if (orig_err) { @@ -4894,8 +4868,7 @@ qemuMigrationPerformJob(virQEMUDriverPtr driver, } cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(vm); if (event) qemuDomainEventQueue(driver, event); virObjectUnref(cfg); @@ -4920,7 +4893,6 @@ qemuMigrationPerformPhase(virQEMUDriverPtr driver, { virObjectEventPtr event = NULL; int ret = -1; - bool hasrefs; /* If we didn't start the job in the begin phase, start it now. 
*/ if (!(flags & VIR_MIGRATE_CHANGE_PROTECTION)) { @@ -4955,19 +4927,14 @@ qemuMigrationPerformPhase(virQEMUDriverPtr driver, endjob: if (ret < 0) - hasrefs = qemuMigrationJobFinish(driver, vm); + qemuMigrationJobFinish(driver, vm); else - hasrefs = qemuMigrationJobContinue(vm); - if (!hasrefs) { - vm = NULL; - } else if (!virDomainObjIsActive(vm) && !vm->persistent) { + qemuMigrationJobContinue(vm); + if (!virDomainObjIsActive(vm) && !vm->persistent) qemuDomainRemoveInactive(driver, vm); - vm = NULL; - } cleanup: - if (vm) - virObjectUnlock(vm); + qemuDomObjEndAPI(vm); if (event) qemuDomainEventQueue(driver, event); return ret; @@ -5301,21 +5268,16 @@ qemuMigrationFinish(virQEMUDriverPtr driver, VIR_WARN("Unable to encode migration cookie"); endjob: - if (qemuMigrationJobFinish(driver, vm) == 0) { - vm = NULL; - } else if (!vm->persistent && !virDomainObjIsActive(vm)) { + qemuMigrationJobFinish(driver, vm); + if (!vm->persistent && !virDomainObjIsActive(vm)) qemuDomainRemoveInactive(driver, vm); - vm = NULL; - } cleanup: virPortAllocatorRelease(driver->migrationPorts, port); - if (vm) { - if (priv->mon) - qemuMonitorSetDomainLog(priv->mon, -1); - VIR_FREE(priv->origname); - virObjectUnlock(vm); - } + if (priv->mon) + qemuMonitorSetDomainLog(priv->mon, -1); + VIR_FREE(priv->origname); + qemuDomObjEndAPI(vm); if (event) qemuDomainEventQueue(driver, event); qemuMigrationCookieFree(mig); @@ -5563,15 +5525,13 @@ qemuMigrationJobStartPhase(virQEMUDriverPtr driver, virDomainObjPtr vm, qemuMigrationJobPhase phase) { - virObjectRef(vm); qemuMigrationJobSetPhase(driver, vm, phase); } -bool +void qemuMigrationJobContinue(virDomainObjPtr vm) { qemuDomainObjReleaseAsyncJob(vm); - return virObjectUnref(vm); } bool @@ -5594,7 +5554,7 @@ qemuMigrationJobIsActive(virDomainObjPtr vm, return true; } -bool +void qemuMigrationJobFinish(virQEMUDriverPtr driver, virDomainObjPtr vm) { return qemuDomainObjEndAsyncJob(driver, vm); diff --git a/src/qemu/qemu_migration.h b/src/qemu/qemu_migration.h index e7a90c3..1726455 100644 --- a/src/qemu/qemu_migration.h +++ b/src/qemu/qemu_migration.h @@ -1,7 +1,7 @@ /* * qemu_migration.h: QEMU migration handling * - * Copyright (C) 2006-2011 Red Hat, Inc. + * Copyright (C) 2006-2011, 2014 Red Hat, Inc. 
* * This library is free software; you can redistribute it and/or * modify it under the terms of the GNU Lesser General Public @@ -82,13 +82,13 @@ void qemuMigrationJobStartPhase(virQEMUDriverPtr driver, virDomainObjPtr vm, qemuMigrationJobPhase phase) ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2); -bool qemuMigrationJobContinue(virDomainObjPtr obj) - ATTRIBUTE_NONNULL(1) ATTRIBUTE_RETURN_CHECK; +void qemuMigrationJobContinue(virDomainObjPtr obj) + ATTRIBUTE_NONNULL(1); bool qemuMigrationJobIsActive(virDomainObjPtr vm, qemuDomainAsyncJob job) ATTRIBUTE_NONNULL(1); -bool qemuMigrationJobFinish(virQEMUDriverPtr driver, virDomainObjPtr obj) - ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2) ATTRIBUTE_RETURN_CHECK; +void qemuMigrationJobFinish(virQEMUDriverPtr driver, virDomainObjPtr obj) + ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2); int qemuMigrationSetOffline(virQEMUDriverPtr driver, virDomainObjPtr vm); diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c index 08d6b7c..a0582d3 100644 --- a/src/qemu/qemu_process.c +++ b/src/qemu/qemu_process.c @@ -571,6 +571,7 @@ qemuProcessFakeReboot(void *opaque) virQEMUDriverConfigPtr cfg = virQEMUDriverGetConfig(driver); virDomainRunningReason reason = VIR_DOMAIN_RUNNING_BOOTED; int ret = -1; + VIR_DEBUG("vm=%p", vm); virObjectLock(vm); if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) @@ -619,16 +620,12 @@ qemuProcessFakeReboot(void *opaque) ret = 0; endjob: - if (!qemuDomainObjEndJob(driver, vm)) - vm = NULL; + qemuDomainObjEndJob(driver, vm); cleanup: - if (vm) { - if (ret == -1) - ignore_value(qemuProcessKill(vm, VIR_QEMU_PROCESS_KILL_FORCE)); - if (virObjectUnref(vm)) - virObjectUnlock(vm); - } + if (ret == -1) + ignore_value(qemuProcessKill(vm, VIR_QEMU_PROCESS_KILL_FORCE)); + qemuDomObjEndAPI(vm); if (event) qemuDomainEventQueue(driver, event); virObjectUnref(cfg); @@ -935,15 +932,13 @@ qemuProcessHandleWatchdog(qemuMonitorPtr mon ATTRIBUTE_UNUSED, */ virObjectRef(vm); if (virThreadPoolSendJob(driver->workerPool, 0, processEvent) < 0) { - if (!virObjectUnref(vm)) - vm = NULL; + virObjectUnref(vm); VIR_FREE(processEvent); } } } - if (vm) - virObjectUnlock(vm); + virObjectUnlock(vm); if (watchdogEvent) qemuDomainEventQueue(driver, watchdogEvent); if (lifecycleEvent) @@ -1439,14 +1434,12 @@ qemuProcessHandleGuestPanic(qemuMonitorPtr mon ATTRIBUTE_UNUSED, */ virObjectRef(vm); if (virThreadPoolSendJob(driver->workerPool, 0, processEvent) < 0) { - if (!virObjectUnref(vm)) - vm = NULL; + virObjectUnref(vm); VIR_FREE(processEvent); } cleanup: - if (vm) - virObjectUnlock(vm); + virObjectUnlock(vm); return 0; } @@ -3566,10 +3559,6 @@ qemuProcessReconnect(void *opaque) /* Job was started by the caller for us */ qemuDomainObjTransferJob(obj); - /* Hold an extra reference because we can't allow 'vm' to be - * deleted if qemuConnectMonitor() failed */ - virObjectRef(obj); - /* XXX check PID liveliness & EXE path */ if (qemuConnectMonitor(driver, obj, QEMU_ASYNC_JOB_NONE, -1) < 0) goto error; @@ -3701,11 +3690,8 @@ qemuProcessReconnect(void *opaque) driver->inhibitCallback(true, driver->inhibitOpaque); endjob: - if (!qemuDomainObjEndJob(driver, obj)) - obj = NULL; - - if (obj && virObjectUnref(obj)) - virObjectUnlock(obj); + qemuDomainObjEndJob(driver, obj); + qemuDomObjEndAPI(obj); virObjectUnref(conn); virObjectUnref(cfg); @@ -3714,35 +3700,29 @@ qemuProcessReconnect(void *opaque) return; error: - if (!qemuDomainObjEndJob(driver, obj)) - obj = NULL; - - if (obj) { - if (!virDomainObjIsActive(obj)) { - if (virObjectUnref(obj)) - virObjectUnlock(obj); - 
} else if (virObjectUnref(obj)) { - /* We can't get the monitor back, so must kill the VM - * to remove danger of it ending up running twice if - * user tries to start it again later - */ - if (virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_NO_SHUTDOWN)) { - /* If we couldn't get the monitor and qemu supports - * no-shutdown, we can safely say that the domain - * crashed ... */ - state = VIR_DOMAIN_SHUTOFF_CRASHED; - } else { - /* ... but if it doesn't we can't say what the state - * really is and FAILED means "failed to start" */ - state = VIR_DOMAIN_SHUTOFF_UNKNOWN; - } - qemuProcessStop(driver, obj, state, 0); - if (!obj->persistent) - qemuDomainRemoveInactive(driver, obj); - else - virObjectUnlock(obj); + qemuDomainObjEndJob(driver, obj); + + if (virDomainObjIsActive(obj)) { + /* We can't get the monitor back, so must kill the VM + * to remove danger of it ending up running twice if + * user tries to start it again later + */ + if (virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_NO_SHUTDOWN)) { + /* If we couldn't get the monitor and qemu supports + * no-shutdown, we can safely say that the domain + * crashed ... */ + state = VIR_DOMAIN_SHUTOFF_CRASHED; + } else { + /* ... but if it doesn't we can't say what the state + * really is and FAILED means "failed to start" */ + state = VIR_DOMAIN_SHUTOFF_UNKNOWN; } + qemuProcessStop(driver, obj, state, 0); + if (!obj->persistent) + qemuDomainRemoveInactive(driver, obj); } + + qemuDomObjEndAPI(obj); virObjectUnref(conn); virObjectUnref(cfg); virNWFilterUnlockFilterUpdates(); @@ -3756,6 +3736,9 @@ qemuProcessReconnectHelper(virDomainObjPtr obj, struct qemuProcessReconnectData *src = opaque; struct qemuProcessReconnectData *data; + virObjectLock(obj); + virObjectRef(obj); + if (!obj->pid) return 0; @@ -3780,8 +3763,6 @@ qemuProcessReconnectHelper(virDomainObjPtr obj, * this early phase. */ - virObjectLock(obj); - qemuDomainObjRestoreJob(obj, &data->oldjob); if (qemuDomainObjBeginJob(src->driver, obj, QEMU_JOB_MODIFY) < 0) @@ -3800,17 +3781,14 @@ qemuProcessReconnectHelper(virDomainObjPtr obj, virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("Could not create thread. QEMU initialization " "might be incomplete")); - if (!qemuDomainObjEndJob(src->driver, obj)) { - obj = NULL; - } else if (virObjectUnref(obj)) { - /* We can't spawn a thread and thus connect to monitor. - * Kill qemu */ - qemuProcessStop(src->driver, obj, VIR_DOMAIN_SHUTOFF_FAILED, 0); - if (!obj->persistent) - qemuDomainRemoveInactive(src->driver, obj); - else - virObjectUnlock(obj); - } + qemuDomainObjEndJob(src->driver, obj); + + /* We can't spawn a thread and thus connect to monitor. 
+ * Kill qemu */ + qemuProcessStop(src->driver, obj, VIR_DOMAIN_SHUTOFF_FAILED, 0); + if (!obj->persistent) + qemuDomainRemoveInactive(src->driver, obj); + goto error; } @@ -3819,6 +3797,7 @@ qemuProcessReconnectHelper(virDomainObjPtr obj, return 0; error: + qemuDomObjEndAPI(obj); VIR_FREE(data); return -1; } @@ -5482,12 +5461,11 @@ qemuProcessAutoDestroy(virDomainObjPtr dom, VIR_DOMAIN_EVENT_STOPPED, VIR_DOMAIN_EVENT_STOPPED_DESTROYED); - if (!qemuDomainObjEndJob(driver, dom)) - dom = NULL; - if (dom && !dom->persistent) { + qemuDomainObjEndJob(driver, dom); + + if (!dom->persistent) qemuDomainRemoveInactive(driver, dom); - dom = NULL; - } + if (event) qemuDomainEventQueue(driver, event); @@ -5500,6 +5478,7 @@ int qemuProcessAutoDestroyAdd(virQEMUDriverPtr driver, virConnectPtr conn) { VIR_DEBUG("vm=%s, conn=%p", vm->def->name, conn); + virObjectRef(vm); return virCloseCallbacksSet(driver->closeCallbacks, vm, conn, qemuProcessAutoDestroy); } @@ -5507,9 +5486,12 @@ int qemuProcessAutoDestroyAdd(virQEMUDriverPtr driver, int qemuProcessAutoDestroyRemove(virQEMUDriverPtr driver, virDomainObjPtr vm) { + int ret; VIR_DEBUG("vm=%s", vm->def->name); - return virCloseCallbacksUnset(driver->closeCallbacks, vm, - qemuProcessAutoDestroy); + ret = virCloseCallbacksUnset(driver->closeCallbacks, vm, + qemuProcessAutoDestroy); + virObjectUnref(vm); + return ret; } bool qemuProcessAutoDestroyActive(virQEMUDriverPtr driver, -- 2.1.3

On 12/03/14 06:49, Martin Kletzander wrote:
There is one problem that causes various errors in the daemon. When a domain is waiting for a job, it is unlocked while waiting on the condition. However, if that domain is, for example, transient and being removed in another API (e.g. cancelling incoming migration), it gets unref'd. If the first call, the one that was waiting, then fails to get the job, it unrefs the domain object, and because that was the last reference, the whole domain object is cleared. However, when finishing the call, the domain must be unlocked, but there is no way for the API to know whether it was cleaned up or not (unless there is some ugly temporary variable, but let's scratch that).
The root cause is that our APIs don't ref the objects they are using and all use the implicit reference that the object has when it is in the domain list. That reference can be removed when the API is waiting for a job. And because each API doesn't do its own ref'ing, it results in the ugly checking of the return value of virObjectUnref() that we have everywhere.
This patch changes qemuDomObjFromDomain() to ref the domain (using virDomainObjListFindByUUIDRef()) and adds qemuDomObjEndAPI(), which should be the only function in which the return value of virObjectUnref() is checked. This makes all reference counting deterministic and makes the code a bit clearer.
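To illustrate the cleanup pattern this changes (a minimal sketch only; compare the hunks in the actual diff, where the exact code varies per API):

    /* Before: the API borrowed the list's reference, so ending a job
     * had to report whether the last reference was just dropped. */
    if (!qemuDomainObjEndJob(driver, vm))
        vm = NULL;
    if (vm)
        virObjectUnlock(vm);

    /* After: the API holds its own reference for its whole lifetime,
     * so the cleanup is unconditional. */
    qemuDomainObjEndJob(driver, vm);
    qemuDomObjEndAPI(vm);  /* unrefs, and unlocks if still alive */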
Signed-off-by: Martin Kletzander <mkletzan@redhat.com> --- src/qemu/THREADS.txt | 40 ++- src/qemu/qemu_domain.c | 28 +- src/qemu/qemu_domain.h | 12 +- src/qemu/qemu_driver.c | 708 ++++++++++++++++------------------------------ src/qemu/qemu_migration.c | 108 +++---- src/qemu/qemu_migration.h | 10 +- src/qemu/qemu_process.c | 126 ++++----- 7 files changed, 379 insertions(+), 653 deletions(-)
...
index 08d6b7c..a0582d3 100644 --- a/src/qemu/qemu_process.c +++ b/src/qemu/qemu_process.c
...
@@ -3756,6 +3736,9 @@ qemuProcessReconnectHelper(virDomainObjPtr obj, struct qemuProcessReconnectData *src = opaque; struct qemuProcessReconnectData *data;
+ virObjectLock(obj); + virObjectRef(obj); + if (!obj->pid) return 0;
This hunk causes a deadlock of the daemon on startup if you have at least one inactive VM. You lock the object, find out that the pid is 0 and return without unlocking. The daemon then locks up when trying to reload the snapshot list for the domain, as the domain is still left locked:

Thread 2 (Thread 0x7fd6165f1700 (LWP 847791)):
#0  0x00007fd62c16faf4 in __lll_lock_wait () from /lib64/libpthread.so.0
#1  0x00007fd62c16b459 in _L_lock_535 () from /lib64/libpthread.so.0
#2  0x00007fd62c16b280 in pthread_mutex_lock () from /lib64/libpthread.so.0
#3  0x00007fd62f99b453 in virMutexLock (m=0x7fd61c0e4810) at util/virthread.c:88
#4  0x00007fd62f97cdc6 in virObjectLock (anyobj=0x7fd61c0e4800) at util/virobject.c:323
#5  0x00007fd617672af5 in qemuDomainSnapshotLoad (vm=0x7fd61c0e4800, data=0x7fd61c0023f0) at qemu/qemu_driver.c:474
#6  0x00007fd62f9f20b8 in virDomainObjListHelper (payload=0x7fd61c0e4800, name=0x7fd61c0f0de0, opaque=0x7fd6165f0810) at conf/domain_conf.c:20454
#7  0x00007fd62f9559be in virHashForEach (table=0x7fd61c008050, iter=0x7fd62f9f2072 <virDomainObjListHelper>, data=0x7fd6165f0810) at util/virhash.c:521
#8  0x00007fd62f9f213e in virDomainObjListForEach (doms=0x7fd61c03bdd0, callback=0x7fd617672a55 <qemuDomainSnapshotLoad>, opaque=0x7fd61c0023f0) at conf/domain_conf.c:20467
#9  0x00007fd61767445c in qemuStateInitialize (privileged=true, callback=0x7fd630584ab2 <daemonInhibitCallback>, opaque=0x7fd63258dba0) at qemu/qemu_driver.c:877
#10 0x00007fd62fa5f05a in virStateInitialize (privileged=true, callback=0x7fd630584ab2 <daemonInhibitCallback>, opaque=0x7fd63258dba0) at libvirt.c:742
#11 0x00007fd630584b7a in daemonRunStateInit (opaque=0x7fd63258dba0) at libvirtd.c:918
#12 0x00007fd62f99b8b4 in virThreadHelper (data=0x7fd6325b4480) at util/virthread.c:197
#13 0x00007fd62c169023 in start_thread () from /lib64/libpthread.so.0
#14 0x00007fd62bea470d in clone () from /lib64/libc.so.6
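One possible fix, sketched here only for illustration (unlock and drop the freshly taken reference before bailing out; not a tested patch, and not necessarily the eventual solution):

    if (!obj->pid) {
        virObjectUnlock(obj);
        virObjectUnref(obj);
        return 0;
    }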
@@ -3780,8 +3763,6 @@ qemuProcessReconnectHelper(virDomainObjPtr obj, * this early phase. */
- virObjectLock(obj); - qemuDomainObjRestoreJob(obj, &data->oldjob);
if (qemuDomainObjBeginJob(src->driver, obj, QEMU_JOB_MODIFY) < 0)
Rest of the review will follow once I find a way to solve that issue. Peter

On 12/03/14 10:54, Peter Krempa wrote:
On 12/03/14 06:49, Martin Kletzander wrote:
There is one problem that causes various errors in the daemon. When a domain is waiting for a job, it is unlocked while waiting on the condition. However, if that domain is, for example, transient and being removed in another API (e.g. cancelling incoming migration), it gets unref'd. If the first call, the one that was waiting, then fails to get the job, it unrefs the domain object, and because that was the last reference, the whole domain object is cleared. However, when finishing the call, the domain must be unlocked, but there is no way for the API to know whether it was cleaned up or not (unless there is some ugly temporary variable, but let's scratch that).
The root cause is that our APIs don't ref the objects they are using and all use the implicit reference that the object has when it is in the domain list. That reference can be removed when the API is waiting for a job. And because each API doesn't do its own ref'ing, it results in the ugly checking of the return value of virObjectUnref() that we have everywhere.
This patch changes qemuDomObjFromDomain() to ref the domain (using virDomainObjListFindByUUIDRef()) and adds qemuDomObjEndAPI(), which should be the only function in which the return value of virObjectUnref() is checked. This makes all reference counting deterministic and makes the code a bit clearer.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com> --- src/qemu/THREADS.txt | 40 ++- src/qemu/qemu_domain.c | 28 +- src/qemu/qemu_domain.h | 12 +- src/qemu/qemu_driver.c | 708 ++++++++++++++++------------------------------ src/qemu/qemu_migration.c | 108 +++---- src/qemu/qemu_migration.h | 10 +- src/qemu/qemu_process.c | 126 ++++----- 7 files changed, 379 insertions(+), 653 deletions(-)
...
index 08d6b7c..a0582d3 100644 --- a/src/qemu/qemu_process.c +++ b/src/qemu/qemu_process.c
...
@@ -3756,6 +3736,9 @@ qemuProcessReconnectHelper(virDomainObjPtr obj, struct qemuProcessReconnectData *src = opaque; struct qemuProcessReconnectData *data;
+ virObjectLock(obj); + virObjectRef(obj); + if (!obj->pid) return 0;
This hunk causes a deadlock of the daemon on startup if you have at least one inactive VM. You lock the object, find out that the pid is 0 and return without unlocking.
The daemon then locks up when trying to reload the snapshot list for the domain, as the domain is still left locked:
Thread 2 (Thread 0x7fd6165f1700 (LWP 847791)):
#0  0x00007fd62c16faf4 in __lll_lock_wait () from /lib64/libpthread.so.0
#1  0x00007fd62c16b459 in _L_lock_535 () from /lib64/libpthread.so.0
#2  0x00007fd62c16b280 in pthread_mutex_lock () from /lib64/libpthread.so.0
#3  0x00007fd62f99b453 in virMutexLock (m=0x7fd61c0e4810) at util/virthread.c:88
#4  0x00007fd62f97cdc6 in virObjectLock (anyobj=0x7fd61c0e4800) at util/virobject.c:323
#5  0x00007fd617672af5 in qemuDomainSnapshotLoad (vm=0x7fd61c0e4800, data=0x7fd61c0023f0) at qemu/qemu_driver.c:474
#6  0x00007fd62f9f20b8 in virDomainObjListHelper (payload=0x7fd61c0e4800, name=0x7fd61c0f0de0, opaque=0x7fd6165f0810) at conf/domain_conf.c:20454
#7  0x00007fd62f9559be in virHashForEach (table=0x7fd61c008050, iter=0x7fd62f9f2072 <virDomainObjListHelper>, data=0x7fd6165f0810) at util/virhash.c:521
#8  0x00007fd62f9f213e in virDomainObjListForEach (doms=0x7fd61c03bdd0, callback=0x7fd617672a55 <qemuDomainSnapshotLoad>, opaque=0x7fd61c0023f0) at conf/domain_conf.c:20467
#9  0x00007fd61767445c in qemuStateInitialize (privileged=true, callback=0x7fd630584ab2 <daemonInhibitCallback>, opaque=0x7fd63258dba0) at qemu/qemu_driver.c:877
#10 0x00007fd62fa5f05a in virStateInitialize (privileged=true, callback=0x7fd630584ab2 <daemonInhibitCallback>, opaque=0x7fd63258dba0) at libvirt.c:742
#11 0x00007fd630584b7a in daemonRunStateInit (opaque=0x7fd63258dba0) at libvirtd.c:918
#12 0x00007fd62f99b8b4 in virThreadHelper (data=0x7fd6325b4480) at util/virthread.c:197
#13 0x00007fd62c169023 in start_thread () from /lib64/libpthread.so.0
#14 0x00007fd62bea470d in clone () from /lib64/libc.so.6
@@ -3780,8 +3763,6 @@ qemuProcessReconnectHelper(virDomainObjPtr obj, * this early phase. */
- virObjectLock(obj); - qemuDomainObjRestoreJob(obj, &data->oldjob);
if (qemuDomainObjBeginJob(src->driver, obj, QEMU_JOB_MODIFY) < 0)
Rest of the review will follow once I find a way to solve that issue.
I think the best solution will be to refactor the reconnect function before this change so that it makes more sense. I'll post patches later today hopefully. Peter

On Wed, Dec 03, 2014 at 06:49:04AM +0100, Martin Kletzander wrote:
@@ -109,7 +117,6 @@ To lock the virDomainObjPtr To acquire the normal job condition
qemuDomainObjBeginJob() - - Increments ref count on virDomainObjPtr - Waits until the job is compatible with current async job or no async job is running - Waits for job.cond condition 'job.active != 0' using virDomainObjPtr @@ -122,14 +129,12 @@ To acquire the normal job condition qemuDomainObjEndJob() - Sets job.active to 0 - Signals on job.cond condition - - Decrements ref count on virDomainObjPtr
I think this is really the key improvement here. Currently we have this error-prone code where we have to check for NULLs after leaving the job:

    if (qemuDomainObjEndJob() < 0)
        vm = NULL;

    if (vm)
        virObjectUnlock(vm)

With the methods now fully owning a reference, the 'vm' object can never be disposed of by another thread. So it will let us make the code simpler and less error-prone.

+void
+qemuDomObjEndAPI(virDomainObjPtr vm)
+{
+    if (virObjectUnref(vm))
+        virObjectUnlock(vm);
+}

I've not thought about it too deeply, so I could be missing something, but I wonder if it is not sufficient to do:

    virObjectUnlock(vm);
    virObjectUnref(vm);

As there is no requirement for the object to be locked now that we're using atomic ints for reference counting. This would avoid the need to hide the unlock/unref behind this qemuDomObjEndAPI method. Just have this inline so it is obvious to the casual reader that we're unrefing + unlocking the object.

Regards, Daniel -- |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|

On Wed, Dec 03, 2014 at 10:48:49AM +0000, Daniel P. Berrange wrote:
On Wed, Dec 03, 2014 at 06:49:04AM +0100, Martin Kletzander wrote:
@@ -109,7 +117,6 @@ To lock the virDomainObjPtr To acquire the normal job condition
qemuDomainObjBeginJob() - - Increments ref count on virDomainObjPtr - Waits until the job is compatible with current async job or no async job is running - Waits for job.cond condition 'job.active != 0' using virDomainObjPtr @@ -122,14 +129,12 @@ To acquire the normal job condition qemuDomainObjEndJob() - Sets job.active to 0 - Signals on job.cond condition - - Decrements ref count on virDomainObjPtr
I think this is really the key improvement here. Currently we have this error-prone code where we have to check for NULLs after leaving the job:

    if (qemuDomainObjEndJob() < 0)
        vm = NULL;

    if (vm)
        virObjectUnlock(vm)

With the methods now fully owning a reference, the 'vm' object can never be disposed of by another thread. So it will let us make the code simpler and less error-prone.

+void
+qemuDomObjEndAPI(virDomainObjPtr vm)
+{
+    if (virObjectUnref(vm))
+        virObjectUnlock(vm);
+}

I've not thought about it too deeply, so I could be missing something, but I wonder if it is not sufficient to do:

    virObjectUnlock(vm);
    virObjectUnref(vm);

As there is no requirement for the object to be locked now that we're using atomic ints for reference counting. This would avoid the need to hide the unlock/unref behind this qemuDomObjEndAPI method. Just have this inline so it is obvious to the casual reader that we're unrefing + unlocking the object.
I had an idea of having some global PRE-call method that would be called for every API function and would do the dance that all APIs do (checking flags, checking ACLs, getting all the objects, eventually creating a job), and there would be one at the end, doing all the reverse stuff. That's, of course, way more difficult, maybe impossible or too complex (I haven't explored the idea any further), but that's where the qemuDomObjEndAPI() came from. It also makes it easy to add any code for each "API". But of course it can be changed to unlock'n'unref as you said. And the reason for unlocking conditionally (only when the reference being dropped is not the last one) was that it is more usable with NULL pointers and also in APIs where the domain gets removed from the list.

Martin
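To spell out that conditional, here is the helper from the patch again, with comments added purely for this discussion:

    void
    qemuDomObjEndAPI(virDomainObjPtr vm)
    {
        /* virObjectUnref() is NULL-safe and returns false when it just
         * dropped the last reference; in that case the object, including
         * its mutex, is already gone, so unlocking it would touch freed
         * memory.  Hence the unlock only happens if the object survived. */
        if (virObjectUnref(vm))
            virObjectUnlock(vm);
    }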
Regards, Daniel -- |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|

On Wed, Dec 03, 2014 at 02:54:11PM +0100, Martin Kletzander wrote:
On Wed, Dec 03, 2014 at 10:48:49AM +0000, Daniel P. Berrange wrote:
On Wed, Dec 03, 2014 at 06:49:04AM +0100, Martin Kletzander wrote:
@@ -109,7 +117,6 @@ To lock the virDomainObjPtr To acquire the normal job condition
qemuDomainObjBeginJob() - - Increments ref count on virDomainObjPtr - Waits until the job is compatible with current async job or no async job is running - Waits for job.cond condition 'job.active != 0' using virDomainObjPtr @@ -122,14 +129,12 @@ To acquire the normal job condition qemuDomainObjEndJob() - Sets job.active to 0 - Signals on job.cond condition - - Decrements ref count on virDomainObjPtr
I think this is really the key improvement here. Currently we have this error-prone code where we have to check for NULLs after leaving the job:

    if (qemuDomainObjEndJob() < 0)
        vm = NULL;

    if (vm)
        virObjectUnlock(vm)

With the methods now fully owning a reference, the 'vm' object can never be disposed of by another thread. So it will let us make the code simpler and less error-prone.

+void
+qemuDomObjEndAPI(virDomainObjPtr vm)
+{
+    if (virObjectUnref(vm))
+        virObjectUnlock(vm);
+}

I've not thought about it too deeply, so I could be missing something, but I wonder if it is not sufficient to do:

    virObjectUnlock(vm);
    virObjectUnref(vm);

As there is no requirement for the object to be locked now that we're using atomic ints for reference counting. This would avoid the need to hide the unlock/unref behind this qemuDomObjEndAPI method. Just have this inline so it is obvious to the casual reader that we're unrefing + unlocking the object.

I had an idea of having some global PRE-call method that would be called for every API function and would do the dance that all APIs do (checking flags, checking ACLs, getting all the objects, eventually creating a job), and there would be one at the end, doing all the reverse stuff. That's, of course, way more difficult, maybe impossible or too complex (I haven't explored the idea any further), but that's where the qemuDomObjEndAPI() came from. It also makes it easy to add any code for each "API". But of course it can be changed to unlock'n'unref as you said. And the reason for unlocking conditionally (only when the reference being dropped is not the last one) was that it is more usable with NULL pointers and also in APIs where the domain gets removed from the list.
The APIs where we remove the dom from the object list should no longer be a special case after this change, right? The list owns a reference and the executing API call owns a reference, instead of borrowing the list's reference. So we can always assume that 'dom' is non-NULL for these APIs now, even after being removed from the list.

Regards, Daniel -- |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|
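As a sketch of that, based on the qemuDomainQemuAttach hunk from the patch (heavily elided here):

    /* The API's own reference keeps 'vm' valid even after the list's
     * reference is dropped by removing the domain from the list. */
    if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0) {
        qemuDomainRemoveInactive(driver, vm);  /* drops the list's ref */
        goto cleanup;                          /* 'vm' is still valid */
    }
    ...
 cleanup:
    qemuDomObjEndAPI(vm);                      /* drops the API's ref */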

On Wed, Dec 03, 2014 at 02:01:10PM +0000, Daniel P. Berrange wrote:
On Wed, Dec 03, 2014 at 02:54:11PM +0100, Martin Kletzander wrote:
On Wed, Dec 03, 2014 at 10:48:49AM +0000, Daniel P. Berrange wrote:
On Wed, Dec 03, 2014 at 06:49:04AM +0100, Martin Kletzander wrote:
@@ -109,7 +117,6 @@ To lock the virDomainObjPtr To acquire the normal job condition
qemuDomainObjBeginJob() - - Increments ref count on virDomainObjPtr - Waits until the job is compatible with current async job or no async job is running - Waits for job.cond condition 'job.active != 0' using virDomainObjPtr @@ -122,14 +129,12 @@ To acquire the normal job condition qemuDomainObjEndJob() - Sets job.active to 0 - Signals on job.cond condition - - Decrements ref count on virDomainObjPtr
I think this is really the key improvement here. Currently we have this error-prone code where we have to check for NULLs after leaving the job:

    if (qemuDomainObjEndJob() < 0)
        vm = NULL;

    if (vm)
        virObjectUnlock(vm)

With the methods now fully owning a reference, the 'vm' object can never be disposed of by another thread. So it will let us make the code simpler and less error-prone.

+void
+qemuDomObjEndAPI(virDomainObjPtr vm)
+{
+    if (virObjectUnref(vm))
+        virObjectUnlock(vm);
+}

I've not thought about it too deeply, so I could be missing something, but I wonder if it is not sufficient to do:

    virObjectUnlock(vm);
    virObjectUnref(vm);

As there is no requirement for the object to be locked now that we're using atomic ints for reference counting. This would avoid the need to hide the unlock/unref behind this qemuDomObjEndAPI method. Just have this inline so it is obvious to the casual reader that we're unrefing + unlocking the object.

I had an idea of having some global PRE-call method that would be called for every API function and would do the dance that all APIs do (checking flags, checking ACLs, getting all the objects, eventually creating a job), and there would be one at the end, doing all the reverse stuff. That's, of course, way more difficult, maybe impossible or too complex (I haven't explored the idea any further), but that's where the qemuDomObjEndAPI() came from. It also makes it easy to add any code for each "API". But of course it can be changed to unlock'n'unref as you said. And the reason for unlocking conditionally (only when the reference being dropped is not the last one) was that it is more usable with NULL pointers and also in APIs where the domain gets removed from the list.

The APIs where we remove the dom from the object list should no longer be a special case after this change, right? The list owns a reference and the executing API call owns a reference, instead of borrowing the list's reference. So we can always assume that 'dom' is non-NULL for these APIs now, even after being removed from the list.

Yes, you're right, I explained myself improperly, sorry. I saw the functions, thought they were in a different order and didn't think about what really was the reason behind that. Yes, unlock && unref makes sense as well, but we'd lose the function common to all those calls.
Regards, Daniel -- |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|

On Wed, Dec 03, 2014 at 03:57:08PM +0100, Martin Kletzander wrote:
On Wed, Dec 03, 2014 at 02:01:10PM +0000, Daniel P. Berrange wrote:
On Wed, Dec 03, 2014 at 02:54:11PM +0100, Martin Kletzander wrote:
On Wed, Dec 03, 2014 at 10:48:49AM +0000, Daniel P. Berrange wrote:
On Wed, Dec 03, 2014 at 06:49:04AM +0100, Martin Kletzander wrote:
@@ -109,7 +117,6 @@ To lock the virDomainObjPtr To acquire the normal job condition
qemuDomainObjBeginJob() - - Increments ref count on virDomainObjPtr - Waits until the job is compatible with current async job or no async job is running - Waits for job.cond condition 'job.active != 0' using virDomainObjPtr @@ -122,14 +129,12 @@ To acquire the normal job condition qemuDomainObjEndJob() - Sets job.active to 0 - Signals on job.cond condition - - Decrements ref count on virDomainObjPtr
I think this is really the key improvement here. Currently we have this error-prone code where we have to check for NULLs after leaving the job:

    if (qemuDomainObjEndJob() < 0)
        vm = NULL;

    if (vm)
        virObjectUnlock(vm)

With the methods now fully owning a reference, the 'vm' object can never be disposed of by another thread. So it will let us make the code simpler and less error-prone.

+void
+qemuDomObjEndAPI(virDomainObjPtr vm)
+{
+    if (virObjectUnref(vm))
+        virObjectUnlock(vm);
+}

I've not thought about it too deeply, so I could be missing something, but I wonder if it is not sufficient to do:

    virObjectUnlock(vm);
    virObjectUnref(vm);

As there is no requirement for the object to be locked now that we're using atomic ints for reference counting. This would avoid the need to hide the unlock/unref behind this qemuDomObjEndAPI method. Just have this inline so it is obvious to the casual reader that we're unrefing + unlocking the object.

I had an idea of having some global PRE-call method that would be called for every API function and would do the dance that all APIs do (checking flags, checking ACLs, getting all the objects, eventually creating a job), and there would be one at the end, doing all the reverse stuff. That's, of course, way more difficult, maybe impossible or too complex (I haven't explored the idea any further), but that's where the qemuDomObjEndAPI() came from. It also makes it easy to add any code for each "API". But of course it can be changed to unlock'n'unref as you said. And the reason for unlocking conditionally (only when the reference being dropped is not the last one) was that it is more usable with NULL pointers and also in APIs where the domain gets removed from the list.

The APIs where we remove the dom from the object list should no longer be a special case after this change, right? The list owns a reference and the executing API call owns a reference, instead of borrowing the list's reference. So we can always assume that 'dom' is non-NULL for these APIs now, even after being removed from the list.

Yes, you're right, I explained myself improperly, sorry. I saw the functions, thought they were in a different order and didn't think about what really was the reason behind that. Yes, unlock && unref makes sense as well, but we'd lose the function common to all those calls.

I think there's a slight benefit to having the top-level driver methods do an explicit

    virObjectUnlock(vm);
    virObjectUnref(vm);

As opposed to

    qemuDomainEndAPI(vm);

because, to me at least, the latter doesn't imply that the 'vm' object is now potentially invalid, whereas seeing an ObjectUnref call makes it obvious that 'vm' should no longer be used past that point.

Regards, Daniel -- |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|

On Wed, Dec 03, 2014 at 03:02:52PM +0000, Daniel P. Berrange wrote:
On Wed, Dec 03, 2014 at 03:57:08PM +0100, Martin Kletzander wrote:
On Wed, Dec 03, 2014 at 02:01:10PM +0000, Daniel P. Berrange wrote:
On Wed, Dec 03, 2014 at 02:54:11PM +0100, Martin Kletzander wrote:
On Wed, Dec 03, 2014 at 10:48:49AM +0000, Daniel P. Berrange wrote:
On Wed, Dec 03, 2014 at 06:49:04AM +0100, Martin Kletzander wrote:
@@ -109,7 +117,6 @@ To lock the virDomainObjPtr To acquire the normal job condition
qemuDomainObjBeginJob() - - Increments ref count on virDomainObjPtr - Waits until the job is compatible with current async job or no async job is running - Waits for job.cond condition 'job.active != 0' using virDomainObjPtr @@ -122,14 +129,12 @@ To acquire the normal job condition qemuDomainObjEndJob() - Sets job.active to 0 - Signals on job.cond condition - - Decrements ref count on virDomainObjPtr
I think this is really the key improvement here. Currently we have this error-prone code where we have to check for NULLs after leaving the job:

    if (qemuDomainObjEndJob() < 0)
        vm = NULL;

    if (vm)
        virObjectUnlock(vm)

With the methods now fully owning a reference, the 'vm' object can never be disposed of by another thread. So it will let us make the code simpler and less error-prone.

+void
+qemuDomObjEndAPI(virDomainObjPtr vm)
+{
+    if (virObjectUnref(vm))
+        virObjectUnlock(vm);
+}

I've not thought about it too deeply, so I could be missing something, but I wonder if it is not sufficient to do:

    virObjectUnlock(vm);
    virObjectUnref(vm);

As there is no requirement for the object to be locked now that we're using atomic ints for reference counting. This would avoid the need to hide the unlock/unref behind this qemuDomObjEndAPI method. Just have this inline so it is obvious to the casual reader that we're unrefing + unlocking the object.

I had an idea of having some global PRE-call method that would be called for every API function and would do the dance that all APIs do (checking flags, checking ACLs, getting all the objects, eventually creating a job), and there would be one at the end, doing all the reverse stuff. That's, of course, way more difficult, maybe impossible or too complex (I haven't explored the idea any further), but that's where the qemuDomObjEndAPI() came from. It also makes it easy to add any code for each "API". But of course it can be changed to unlock'n'unref as you said. And the reason for unlocking conditionally (only when the reference being dropped is not the last one) was that it is more usable with NULL pointers and also in APIs where the domain gets removed from the list.

The APIs where we remove the dom from the object list should no longer be a special case after this change, right? The list owns a reference and the executing API call owns a reference, instead of borrowing the list's reference. So we can always assume that 'dom' is non-NULL for these APIs now, even after being removed from the list.

Yes, you're right, I explained myself improperly, sorry. I saw the functions, thought they were in a different order and didn't think about what really was the reason behind that. Yes, unlock && unref makes sense as well, but we'd lose the function common to all those calls.

I think there's a slight benefit to having the top-level driver methods do an explicit

    virObjectUnlock(vm);
    virObjectUnref(vm);
As opposed to
qemuDomainEndAPI(vm);
because, to me at least, the latter doesn't imply that the 'vm' object is now potentially invalid, whereas seeing an ObjectUnref call makes it obvious that 'vm' should no longer be used past that point.
What if qemuDomainEndAPI(vm) sets the vm to NULL to make sure it is not used? That would make it clear the first time someone tries that :)
Regards, Daniel -- |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|

On Wed, Dec 03, 2014 at 04:28:46PM +0100, Martin Kletzander wrote:
On Wed, Dec 03, 2014 at 03:02:52PM +0000, Daniel P. Berrange wrote:
On Wed, Dec 03, 2014 at 03:57:08PM +0100, Martin Kletzander wrote:
On Wed, Dec 03, 2014 at 02:01:10PM +0000, Daniel P. Berrange wrote:
On Wed, Dec 03, 2014 at 02:54:11PM +0100, Martin Kletzander wrote:
On Wed, Dec 03, 2014 at 10:48:49AM +0000, Daniel P. Berrange wrote:
On Wed, Dec 03, 2014 at 06:49:04AM +0100, Martin Kletzander wrote:
>@@ -109,7 +117,6 @@ To lock the virDomainObjPtr > To acquire the normal job condition > > qemuDomainObjBeginJob() >- - Increments ref count on virDomainObjPtr > - Waits until the job is compatible with current async job or no > async job is running > - Waits for job.cond condition 'job.active != 0' using virDomainObjPtr >@@ -122,14 +129,12 @@ To acquire the normal job condition > qemuDomainObjEndJob() > - Sets job.active to 0 > - Signals on job.cond condition >- - Decrements ref count on virDomainObjPtr
I think this is really the key improvement here. Currently we have this error-prone code where we have to check for NULLs after leaving the job:

    if (qemuDomainObjEndJob() < 0)
        vm = NULL;

    if (vm)
        virObjectUnlock(vm)

With the methods now fully owning a reference, the 'vm' object can never be disposed of by another thread. So it will let us make the code simpler and less error-prone.

>+void
>+qemuDomObjEndAPI(virDomainObjPtr vm)
>+{
>+    if (virObjectUnref(vm))
>+        virObjectUnlock(vm);
>+}

I've not thought about it too deeply, so I could be missing something, but I wonder if it is not sufficient to do:

    virObjectUnlock(vm);
    virObjectUnref(vm);

As there is no requirement for the object to be locked now that we're using atomic ints for reference counting. This would avoid the need to hide the unlock/unref behind this qemuDomObjEndAPI method. Just have this inline so it is obvious to the casual reader that we're unrefing + unlocking the object.

I had an idea of having some global PRE-call method that would be called for every API function and would do the dance that all APIs do (checking flags, checking ACLs, getting all the objects, eventually creating a job), and there would be one at the end, doing all the reverse stuff. That's, of course, way more difficult, maybe impossible or too complex (I haven't explored the idea any further), but that's where the qemuDomObjEndAPI() came from. It also makes it easy to add any code for each "API". But of course it can be changed to unlock'n'unref as you said. And the reason for unlocking conditionally (only when the reference being dropped is not the last one) was that it is more usable with NULL pointers and also in APIs where the domain gets removed from the list.

The APIs where we remove the dom from the object list should no longer be a special case after this change, right? The list owns a reference and the executing API call owns a reference, instead of borrowing the list's reference. So we can always assume that 'dom' is non-NULL for these APIs now, even after being removed from the list.

Yes, you're right, I explained myself improperly, sorry. I saw the functions, thought they were in a different order and didn't think about what really was the reason behind that. Yes, unlock && unref makes sense as well, but we'd lose the function common to all those calls.

I think there's a slight benefit to having the top-level driver methods do an explicit

    virObjectUnlock(vm);
    virObjectUnref(vm);
As opposed to
qemuDomainEndAPI(vm);
because, to me at least, the latter doesn't imply that the 'vm' object is now potentially invalid, whereas seeing an ObjectUnref call makes it obvious that 'vm' should no longer be used past that point.
What if qemuDomainEndAPI(vm) sets the vm to NULL to make sure it is not used? That would make it clear the first time someone tries that :)
Yep, that would work - pass in a 'virDomainObjPtr *' instead, I guess.

Regards, Daniel -- |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|
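A sketch of that variant (a hypothetical signature following the suggestion above, not code from the posted series):

    void
    qemuDomObjEndAPI(virDomainObjPtr *vm)
    {
        if (!*vm)
            return;
        virObjectUnlock(*vm);
        virObjectUnref(*vm);
        *vm = NULL;  /* any later use is an obvious NULL dereference,
                      * not a use-after-free */
    }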