[libvirt] PATCH: Don't stop storage pools on daemon shutdown

Tearing a guest's storage out from under its feet on libvirtd shutdown is just as bad as tearing out its network :-) This patch removes the code which shuts down storage pools when the daemon shuts down. So NFS mounts stay around, LVM VGs remain active, and iSCSI connections remain logged in. When we then start up again, we happily detect that these resources are already running, and mark the pool as such.

Daniel

diff --git a/src/storage_backend_iscsi.c b/src/storage_backend_iscsi.c
--- a/src/storage_backend_iscsi.c
+++ b/src/storage_backend_iscsi.c
@@ -573,6 +573,7 @@ virStorageBackendISCSIStartPool(virConne
                                 virStoragePoolObjPtr pool)
 {
     char *portal = NULL;
+    char *session;
 
     if (pool->def->source.host.name == NULL) {
         virStorageReportError(conn, VIR_ERR_INTERNAL_ERROR,
@@ -587,13 +588,17 @@ virStorageBackendISCSIStartPool(virConne
         return -1;
     }
 
-    if ((portal = virStorageBackendISCSIPortal(conn, pool)) == NULL)
-        return -1;
-    if (virStorageBackendISCSILogin(conn, pool, portal) < 0) {
+    if ((session = virStorageBackendISCSISession(conn, pool)) == NULL) {
+        if ((portal = virStorageBackendISCSIPortal(conn, pool)) == NULL)
+            return -1;
+        if (virStorageBackendISCSILogin(conn, pool, portal) < 0) {
+            VIR_FREE(portal);
+            return -1;
+        }
         VIR_FREE(portal);
-        return -1;
+    } else {
+        VIR_FREE(session);
     }
-    VIR_FREE(portal);
 
     return 0;
 }
diff --git a/src/storage_driver.c b/src/storage_driver.c
--- a/src/storage_driver.c
+++ b/src/storage_driver.c
@@ -241,32 +241,10 @@ storageDriverActive(void) {
  */
 static int
 storageDriverShutdown(void) {
-    unsigned int i;
-
     if (!driverState)
         return -1;
 
     storageDriverLock(driverState);
-    /* shutdown active pools */
-    for (i = 0 ; i < driverState->pools.count ; i++) {
-        virStoragePoolObjPtr pool = driverState->pools.objs[i];
-
-        if (virStoragePoolObjIsActive(pool)) {
-            virStorageBackendPtr backend;
-            if ((backend = virStorageBackendForType(pool->def->type)) == NULL) {
-                storageLog("Missing backend");
-                continue;
-            }
-
-            if (backend->stopPool &&
-                backend->stopPool(NULL, pool) < 0) {
-                virErrorPtr err = virGetLastError();
-                storageLog("Failed to stop storage pool '%s': %s",
-                           pool->def->name, err ? err->message : NULL);
-            }
-            virStoragePoolObjClearVols(pool);
-        }
-    }
 
     /* free inactive pools */
     virStoragePoolObjListFree(&driverState->pools);

--
|: Red Hat, Engineering, London -o- http://people.redhat.com/berrange/ :|
|: http://libvirt.org -o- http://virt-manager.org -o- http://ovirt.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: GnuPG: 7D3B9505 -o- F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 :|
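The iSCSI hunk above boils down to a "check before acquire" start path: only log in if no session already exists, otherwise reuse it. Here is a minimal standalone sketch of that pattern; the resourceIsActive()/resourceAcquire() helpers are hypothetical stand-ins for the backend-specific lookup and login, not libvirt API:

/* Sketch only: the idempotent start-up pattern from the patch.
 * resourceIsActive() and resourceAcquire() are hypothetical stand-ins
 * for backend-specific checks (e.g. an existing iSCSI session) and
 * acquisition steps (e.g. an iscsiadm login). */
static int
poolStartIdempotent(virConnectPtr conn, virStoragePoolObjPtr pool)
{
    if (resourceIsActive(pool)) {
        /* Resource survived a previous daemon run: just mark the pool
         * active again rather than failing or logging in twice. */
        return 0;
    }

    if (resourceAcquire(conn, pool) < 0)
        return -1;

    return 0;
}

The shutdown side then becomes a no-op for active pools: the resources outlive the daemon, and the next start-up rediscovers them.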

"Daniel P. Berrange" <berrange@redhat.com> wrote:
Tearing a guest's storage out from under its feet on libvirtd shutdown is just as bad as tearing out its network :-) This patch removes the code which shuts down storage pools when the daemon shuts down. So NFS mounts stay around, LVM VGs remain active, and iSCSI connections remain logged in. When we then start up again, we happily detect that these resources are already running, and mark the pool as such.
I've applied this and your [Allow virtual networks to survive...] patch, and rebased to accommodate the adjustment of #23 [Add domain events to test driver]. So the git branch is up to date:

http://git.et.redhat.com/?p=libvirt.git;a=shortlog;h=refs/heads/danpb-thread...

Since I had to rebase to adjust #23, that required a so-called non-fast-forward change. As a result, a regular "pull" won't automatically clobber your local "danpb-threads" branch; you have to add the "+" below to do that. To update the existing branch, you'll probably want to do this:

git checkout master
git fetch git://et.redhat.com/libvirt +danpb-threads:danpb-threads

Daniel P. Berrange wrote:
Tearing a guest's storage out from under its feet on libvirtd shutdown is just as bad as tearing out its network :-) This patch removes the code which shuts down storage pools when the daemon shuts down. So NFS mounts stay around, LVM VGs remain active, and iSCSI connections remain logged in. When we then start up again, we happily detect that these resources are already running, and mark the pool as such.
Yes, this will fix a couple of bugs oVirt is having with iSCSI as well, so this is a good change. In some basic testing here, it works like a charm, which is great. I do have a question, though...
Daniel
diff --git a/src/storage_backend_iscsi.c b/src/storage_backend_iscsi.c
--- a/src/storage_backend_iscsi.c
+++ b/src/storage_backend_iscsi.c
@@ -573,6 +573,7 @@ virStorageBackendISCSIStartPool(virConne
                                 virStoragePoolObjPtr pool)
 {
     char *portal = NULL;
+    char *session;
 
     if (pool->def->source.host.name == NULL) {
         virStorageReportError(conn, VIR_ERR_INTERNAL_ERROR,
@@ -587,13 +588,17 @@ virStorageBackendISCSIStartPool(virConne
         return -1;
     }
 
-    if ((portal = virStorageBackendISCSIPortal(conn, pool)) == NULL)
-        return -1;
-    if (virStorageBackendISCSILogin(conn, pool, portal) < 0) {
+    if ((session = virStorageBackendISCSISession(conn, pool)) == NULL) {
OK, so here we look up the session, and if it is NULL this is a new pool that iscsid doesn't know about, so we need to start it. If the lookup returns non-NULL, this is an existing session. That logic is fine, but my question is about virStorageBackendISCSISession() itself. If it detects a NULL session, it does a virStorageReportError(INTERNAL_ERROR). Now, the user never sees that, because we don't propagate an error code, but we have now set the internal last error to it. Is that going to be a problem? Should we just remove the virStorageReportError in that case?

--
Chris Lalancette
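One way to avoid setting that stray error would be to let callers do a quiet lookup. A minimal sketch, assuming a hypothetical "probe" flag and an invented virStorageBackendISCSIFindSession() helper; this is not what the posted patch or the existing function does:

/* Hypothetical variant of the session lookup: when 'probe' is set, a
 * missing session is an expected answer, so nothing is reported.
 * virStorageBackendISCSIFindSession() is an invented helper standing in
 * for the existing iscsiadm-based lookup logic. */
static char *
virStorageBackendISCSISessionQuiet(virConnectPtr conn,
                                   virStoragePoolObjPtr pool,
                                   int probe)
{
    char *session = virStorageBackendISCSIFindSession(pool);

    if (session == NULL && !probe)
        virStorageReportError(conn, VIR_ERR_INTERNAL_ERROR,
                              _("cannot find session for pool '%s'"),
                              pool->def->name);

    return session;
}

The simpler alternative is the one suggested above: drop the virStorageReportError() call from the NULL-session path entirely, since a missing session is a normal outcome when virStorageBackendISCSIStartPool() probes for one.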
participants (3)
- Chris Lalancette
- Daniel P. Berrange
- Jim Meyering