[libvirt] [PATCH 0/3] Enable automatic kill of running guests

I am building an application which uses KVM to run specific tasks, rather than as a general-purpose guest OS. I want to ensure that when the app exits, the guest goes away too. To enable this, this series introduces the concept of 'autokill', whereby a guest is forcibly destroyed when the virConnectPtr that launched it closes. This also lets us fix a long-standing problem with migration leaving an unkillable guest.
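The core idea is a table mapping guest UUID strings to the connection that launched them, swept when a connection closes. As a rough sketch of that pattern (a toy stand-in, not the virHash-based code in the patches below; the fixed-size table and the `autokill_*` names are invented for illustration):

```c
/* Toy sketch of the 'autokill' registry idea: map guest UUID strings
 * to the connection that started them, and destroy every matching
 * guest when that connection closes. */
#include <stdio.h>
#include <string.h>
#include <assert.h>

#define MAX_GUESTS 8

struct autokill_entry {
    char uuid[37];   /* hypothetical fixed-size UUID string buffer */
    void *conn;      /* owning connection; NULL means the slot is free */
};

static struct autokill_entry table[MAX_GUESTS];

/* Register a guest for automatic destruction; returns 0 on success. */
int autokill_add(const char *uuid, void *conn)
{
    for (int i = 0; i < MAX_GUESTS; i++) {
        if (!table[i].conn) {
            snprintf(table[i].uuid, sizeof(table[i].uuid), "%s", uuid);
            table[i].conn = conn;
            return 0;
        }
    }
    return -1;   /* table full */
}

/* Drop a registration, e.g. when the guest stops for another reason. */
void autokill_remove(const char *uuid)
{
    for (int i = 0; i < MAX_GUESTS; i++)
        if (table[i].conn && strcmp(table[i].uuid, uuid) == 0)
            table[i].conn = NULL;
}

/* Connection-close handler: destroy all guests the connection owned.
 * Returns how many guests were reaped. */
int autokill_run(void *closing_conn)
{
    int killed = 0;
    for (int i = 0; i < MAX_GUESTS; i++) {
        if (table[i].conn == closing_conn) {
            /* the real code calls qemuProcessStop() here */
            table[i].conn = NULL;
            killed++;
        }
    }
    return killed;
}
```

The real series stores the mapping in a virHashTable on the qemu driver and reaps guests from the driver's connection-close callback.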

Sometimes it is useful to be able to automatically kill a guest when a connection is closed. For example, kill an incoming migration if the client managing the migration dies. This introduces a map between guest 'uuid' strings and virConnectPtr objects. When a connection is closed, any associated guests are killed off.

* src/qemu/qemu_conf.h: Add autokill hash table to qemu driver
* src/qemu/qemu_process.c, src/qemu/qemu_process.h: Add APIs for
  performing autokill of guests associated with a connection
* src/qemu/qemu_driver.c: Initialize autokill map
---
 src/qemu/qemu_conf.h      |    5 ++
 src/qemu/qemu_driver.c    |   17 +++++--
 src/qemu/qemu_migration.c |    7 ++-
 src/qemu/qemu_process.c   |  118 +++++++++++++++++++++++++++++++++++++++++++++
 src/qemu/qemu_process.h   |   12 +++++
 5 files changed, 151 insertions(+), 8 deletions(-)

diff --git a/src/qemu/qemu_conf.h b/src/qemu/qemu_conf.h
index bf6dcf4..1f5027d 100644
--- a/src/qemu/qemu_conf.h
+++ b/src/qemu/qemu_conf.h
@@ -127,6 +127,11 @@ struct qemud_driver {
     virSysinfoDefPtr hostsysinfo;
 
     virLockManagerPluginPtr lockManager;
+
+    /* Mapping of 'char *uuidstr' -> virConnectPtr
+     * of guests which will be automatically killed
+     * when the virConnectPtr is closed*/
+    virHashTablePtr autokill;
 };
 
 typedef struct _qemuDomainCmdlineDef qemuDomainCmdlineDef;
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 01587e8..44e1fba 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -603,6 +603,9 @@ qemudStartup(int privileged) {
         qemu_driver->hugepage_path = mempath;
     }
 
+    if (qemuProcessAutokillInit(qemu_driver) < 0)
+        goto error;
+
     /* Get all the running persistent or transient configs first */
     if (virDomainLoadAllConfigs(qemu_driver->caps,
                                 &qemu_driver->domains,
@@ -736,6 +739,8 @@ qemudShutdown(void) {
 
     virSysinfoDefFree(qemu_driver->hostsysinfo);
 
+    qemuProcessAutokillShutdown(qemu_driver);
+
     VIR_FREE(qemu_driver->configDir);
     VIR_FREE(qemu_driver->autostartDir);
     VIR_FREE(qemu_driver->logDir);
@@ -860,6 +865,7 @@ static int qemudClose(virConnectPtr conn) {
     qemuDriverLock(driver);
     virDomainEventCallbackListRemoveConn(conn, driver->domainEventState->callbacks);
+    qemuProcessAutokillRun(driver, conn);
     qemuDriverUnlock(driver);
 
     conn->privateData = NULL;
@@ -1271,6 +1277,7 @@ static virDomainPtr qemudDomainCreate(virConnectPtr conn, const char *xml,
     if (qemuProcessStart(conn, driver, vm, NULL,
                          (flags & VIR_DOMAIN_START_PAUSED) != 0,
+                         false,
                          -1, NULL, VIR_VM_OP_CREATE) < 0) {
         qemuAuditDomainStart(vm, "booted", false);
         if (qemuDomainObjEndJob(vm) > 0)
@@ -3465,8 +3472,8 @@ qemuDomainSaveImageStartVM(virConnectPtr conn,
     }
 
     /* Set the migration source and start it up. */
-    ret = qemuProcessStart(conn, driver, vm, "stdio", true, *fd, path,
-                           VIR_VM_OP_RESTORE);
+    ret = qemuProcessStart(conn, driver, vm, "stdio", true,
+                           false, *fd, path, VIR_VM_OP_RESTORE);
 
     if (intermediatefd != -1) {
         if (ret < 0) {
@@ -3835,8 +3842,8 @@ static int qemudDomainObjStart(virConnectPtr conn,
         goto cleanup;
     }
 
-    ret = qemuProcessStart(conn, driver, vm, NULL, start_paused, -1, NULL,
-                           VIR_VM_OP_CREATE);
+    ret = qemuProcessStart(conn, driver, vm, NULL, start_paused,
+                           false, -1, NULL, VIR_VM_OP_CREATE);
     qemuAuditDomainStart(vm, "booted", ret >= 0);
     if (ret >= 0) {
         virDomainEventPtr event =
@@ -7791,7 +7798,7 @@ static int qemuDomainRevertToSnapshot(virDomainSnapshotPtr snapshot,
             goto endjob;
 
         rc = qemuProcessStart(snapshot->domain->conn, driver, vm, NULL,
-                              false, -1, NULL, VIR_VM_OP_CREATE);
+                              false, false, -1, NULL, VIR_VM_OP_CREATE);
         qemuAuditDomainStart(vm, "from-snapshot", rc >= 0);
         if (qemuDomainSnapshotSetCurrentInactive(vm, driver->snapshotDir) < 0)
             goto endjob;
diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index 87e0417..fbee653 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -1119,8 +1119,9 @@ qemuMigrationPrepareTunnel(struct qemud_driver *driver,
     /* Start the QEMU daemon, with the same command-line arguments plus
      * -incoming stdio (which qemu_command might convert to exec:cat or fd:n)
      */
-    internalret = qemuProcessStart(dconn, driver, vm, "stdio", true, dataFD[0],
-                                   NULL, VIR_VM_OP_MIGRATE_IN_START);
+    internalret = qemuProcessStart(dconn, driver, vm, "stdio", true,
+                                   false, dataFD[0], NULL,
+                                   VIR_VM_OP_MIGRATE_IN_START);
     if (internalret < 0) {
         qemuAuditDomainStart(vm, "migrated", false);
         /* Note that we don't set an error here because qemuProcessStart
@@ -1347,7 +1348,7 @@ qemuMigrationPrepareDirect(struct qemud_driver *driver,
      * -incoming tcp:0.0.0.0:port
      */
     snprintf (migrateFrom, sizeof (migrateFrom), "tcp:0.0.0.0:%d", this_port);
-    if (qemuProcessStart(dconn, driver, vm, migrateFrom, true,
+    if (qemuProcessStart(dconn, driver, vm, migrateFrom, true, false,
                          -1, NULL, VIR_VM_OP_MIGRATE_IN_START) < 0) {
         qemuAuditDomainStart(vm, "migrated", false);
         /* Note that we don't set an error here because qemuProcessStart
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index b441137..f820ad2 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -55,6 +55,7 @@
 #include "processinfo.h"
 #include "domain_nwfilter.h"
 #include "locking/domain_lock.h"
+#include "uuid.h"
 
 #define VIR_FROM_THIS VIR_FROM_QEMU
 
@@ -2313,6 +2314,7 @@ int qemuProcessStart(virConnectPtr conn,
                      virDomainObjPtr vm,
                      const char *migrateFrom,
                      bool start_paused,
+                     bool autokill,
                      int stdin_fd,
                      const char *stdin_path,
                      enum virVMOperationType vmop)
@@ -2699,6 +2701,10 @@ int qemuProcessStart(virConnectPtr conn,
                                  VIR_DOMAIN_PAUSED_USER);
     }
 
+    if (autokill &&
+        qemuProcessAutokillAdd(driver, vm, conn) < 0)
+        goto cleanup;
+
     VIR_DEBUG("Writing domain status to disk");
     if (virDomainSaveStatus(driver->caps, driver->stateDir, vm) < 0)
         goto cleanup;
@@ -2839,6 +2845,9 @@ void qemuProcessStop(struct qemud_driver *driver,
     /* shut it off for sure */
     qemuProcessKill(vm);
 
+    /* Stop autokill in case guest is restarted */
+    qemuProcessAutokillRemove(driver, vm);
+
     /* now that we know it's stopped call the hook if present */
     if (virHookPresent(VIR_HOOK_DRIVER_QEMU)) {
         char *xml = virDomainDefFormat(vm->def, 0);
@@ -2941,3 +2950,112 @@ retry:
         virFreeError(orig_err);
     }
 }
+
+
+int qemuProcessAutokillInit(struct qemud_driver *driver)
+{
+    if (!(driver->autokill = virHashCreate(5, NULL)))
+        return -1;
+
+    return 0;
+}
+
+struct qemuProcessAutokillData {
+    struct qemud_driver *driver;
+    virConnectPtr conn;
+};
+
+static void qemuProcessAutokillDom(void *payload, const void *name, void *opaque)
+{
+    struct qemuProcessAutokillData *data = opaque;
+    virConnectPtr conn = payload;
+    const char *uuidstr = name;
+    unsigned char uuid[VIR_UUID_BUFLEN];
+    virDomainObjPtr dom;
+    qemuDomainObjPrivatePtr priv;
+    virDomainEventPtr event = NULL;
+
+    VIR_DEBUG("conn=%p uuidstr=%s thisconn=%p", conn, uuidstr, data->conn);
+
+    if (data->conn != conn)
+        return;
+
+    if (virUUIDParse(uuidstr, uuid) < 0) {
+        VIR_WARN("Failed to parse %s", uuidstr);
+        return;
+    }
+
+    if (!(dom = virDomainFindByUUID(&data->driver->domains,
+                                    uuid))) {
+        VIR_DEBUG("No domain object to kill");
+        return;
+    }
+
+    priv = dom->privateData;
+    if (priv->jobActive == QEMU_JOB_MIGRATION_IN) {
+        VIR_DEBUG("vm=%s has incoming migration active, cancelling",
+                  dom->def->name);
+        priv->jobActive = QEMU_JOB_NONE;
+        memset(&priv->jobInfo, 0, sizeof(priv->jobInfo));
+    }
+
+    if (qemuDomainObjBeginJobWithDriver(data->driver, dom) < 0)
+        goto cleanup;
+
+    VIR_DEBUG("Killing domain");
+    qemuProcessStop(data->driver, dom, 1, VIR_DOMAIN_SHUTOFF_DESTROYED);
+    qemuAuditDomainStop(dom, "destroyed");
+    event = virDomainEventNewFromObj(dom,
+                                     VIR_DOMAIN_EVENT_STOPPED,
+                                     VIR_DOMAIN_EVENT_STOPPED_DESTROYED);
+    if (qemuDomainObjEndJob(dom) == 0)
+        dom = NULL;
+    if (dom && !dom->persistent)
+        virDomainRemoveInactive(&data->driver->domains, dom);
+
+cleanup:
+    if (dom)
+        virDomainObjUnlock(dom);
+    if (event)
+        qemuDomainEventQueue(data->driver, event);
+    virHashRemoveEntry(data->driver->autokill, uuidstr);
+}
+
+/*
+ * Precondition: driver is locked
+ */
+void qemuProcessAutokillRun(struct qemud_driver *driver, virConnectPtr conn)
+{
+    struct qemuProcessAutokillData data = {
+        driver, conn
+    };
+    VIR_DEBUG("conn=%p", conn);
+    virHashForEach(driver->autokill, qemuProcessAutokillDom, &data);
+}
+
+void qemuProcessAutokillShutdown(struct qemud_driver *driver)
+{
+    virHashFree(driver->autokill);
+}
+
+int qemuProcessAutokillAdd(struct qemud_driver *driver,
+                           virDomainObjPtr vm,
+                           virConnectPtr conn)
+{
+    char uuidstr[VIR_UUID_STRING_BUFLEN];
+    virUUIDFormat(vm->def->uuid, uuidstr);
+    VIR_DEBUG("vm=%s uuid=%s conn=%p", vm->def->name, uuidstr, conn);
+    if (virHashAddEntry(driver->autokill, uuidstr, conn) < 0)
+        return -1;
+    return 0;
+}
+
+int qemuProcessAutokillRemove(struct qemud_driver *driver,
+                              virDomainObjPtr vm)
+{
+    char uuidstr[VIR_UUID_STRING_BUFLEN];
+    virUUIDFormat(vm->def->uuid, uuidstr);
+    VIR_DEBUG("vm=%s uuid=%s", vm->def->name, uuidstr);
+    if (virHashRemoveEntry(driver->autokill, uuidstr) < 0)
+        return -1;
+    return 0;
+}
diff --git a/src/qemu/qemu_process.h b/src/qemu/qemu_process.h
index 7ec9d7d..a5478f1 100644
--- a/src/qemu/qemu_process.h
+++ b/src/qemu/qemu_process.h
@@ -46,6 +46,7 @@ int qemuProcessStart(virConnectPtr conn,
                      virDomainObjPtr vm,
                      const char *migrateFrom,
                      bool start_paused,
+                     bool autokill,
                      int stdin_fd,
                      const char *stdin_path,
                      enum virVMOperationType vmop);
@@ -57,4 +58,15 @@ void qemuProcessStop(struct qemud_driver *driver,
 
 void qemuProcessKill(virDomainObjPtr vm);
 
+int qemuProcessAutokillInit(struct qemud_driver *driver);
+void qemuProcessAutokillRun(struct qemud_driver *driver,
+                            virConnectPtr conn);
+void qemuProcessAutokillShutdown(struct qemud_driver *driver);
+int qemuProcessAutokillAdd(struct qemud_driver *driver,
+                           virDomainObjPtr vm,
+                           virConnectPtr conn);
+int qemuProcessAutokillRemove(struct qemud_driver *driver,
+                              virDomainObjPtr vm);
+
+
 #endif /* __QEMU_PROCESS_H__ */
-- 
1.7.4.4

On 06/23/2011 04:58 AM, Daniel P. Berrange wrote:
Sometimes it is useful to be able to automatically kill a guest when a connection is closed. For example, kill an incoming migration if the client managing the migration dies. This introduces a map between guest 'uuid' strings and virConnectPtr objects. When a connection is closed, any associated guests are killed off
* src/qemu/qemu_conf.h: Add autokill hash table to qemu driver
* src/qemu/qemu_process.c, src/qemu/qemu_process.h: Add APIs for
  performing autokill of guests associated with a connection
* src/qemu/qemu_driver.c: Initialize autokill map
---
 src/qemu/qemu_conf.h      |    5 ++
 src/qemu/qemu_driver.c    |   17 +++++--
 src/qemu/qemu_migration.c |    7 ++-
 src/qemu/qemu_process.c   |  118 +++++++++++++++++++++++++++++++++++++++++++++
 src/qemu/qemu_process.h   |   12 +++++
 5 files changed, 151 insertions(+), 8 deletions(-)
ACK.

-- 
Eric Blake   eblake@redhat.com   +1-801-349-2682
Libvirt virtualization library http://libvirt.org

Migration is a multi-step process:

1. Begin(src)
2. Prepare(dst)
3. Perform(src)
4. Finish(dst)
5. Confirm(src)

At step 2, a QEMU process is launched on the destination to accept the incoming migration. Occasionally the process that is controlling the migration workflow aborts and fails to call step 4, Finish. This leaves a QEMU process running on the target (albeit with paused CPUs). Unfortunately, because step 2 activates a job on the QEMU process, it is unkillable by normal means.

By registering the VM for autokill against the src virConnectPtr in step 2, we can ensure that the guest is forcefully killed off if the connection is closed without step 4 being invoked.

* src/qemu/qemu_migration.c: Register autokill in PrepareDirect and
  PrepareTunnel. Unregister autokill on successful run of Finish
* src/qemu/qemu_process.c: Unregister autokill when stopping a process
---
 src/qemu/qemu_migration.c |    7 +++++--
 1 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index fbee653..42091a0 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -1120,7 +1120,7 @@ qemuMigrationPrepareTunnel(struct qemud_driver *driver,
      * -incoming stdio (which qemu_command might convert to exec:cat or fd:n)
      */
     internalret = qemuProcessStart(dconn, driver, vm, "stdio", true,
-                                   false, dataFD[0], NULL,
+                                   true, dataFD[0], NULL,
                                    VIR_VM_OP_MIGRATE_IN_START);
     if (internalret < 0) {
         qemuAuditDomainStart(vm, "migrated", false);
@@ -1348,7 +1348,7 @@ qemuMigrationPrepareDirect(struct qemud_driver *driver,
      * -incoming tcp:0.0.0.0:port
      */
     snprintf (migrateFrom, sizeof (migrateFrom), "tcp:0.0.0.0:%d", this_port);
-    if (qemuProcessStart(dconn, driver, vm, migrateFrom, true, false,
+    if (qemuProcessStart(dconn, driver, vm, migrateFrom, true, true,
                          -1, NULL, VIR_VM_OP_MIGRATE_IN_START) < 0) {
         qemuAuditDomainStart(vm, "migrated", false);
         /* Note that we don't set an error here because qemuProcessStart
@@ -2549,6 +2549,9 @@ qemuMigrationFinish(struct qemud_driver *driver,
             VIR_WARN("Failed to save status on vm %s", vm->def->name);
             goto endjob;
         }
+
+        /* Guest is sucessfully running, so cancel previous autokill */
+        qemuProcessAutokillRemove(driver, vm);
     } else {
         qemuProcessStop(driver, vm, 1, VIR_DOMAIN_SHUTOFF_FAILED);
         qemuAuditDomainStop(vm, "failed");
-- 
1.7.4.4
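The lifecycle this patch establishes can be modeled in a few lines (a hedged sketch, not libvirt's real API: the struct and function names are invented). Prepare on the destination arms autodestroy against the client's connection; only a successful Finish disarms it, so a client that dies between steps 2 and 4 reaps the half-migrated guest instead of leaving it stuck with an active job:

```c
#include <stdbool.h>
#include <assert.h>

struct incoming_guest {
    bool running;
    bool autodestroy;   /* armed at Prepare, cleared at Finish */
};

/* Step 2: destination launches QEMU with -incoming and arms autodestroy. */
void migration_prepare(struct incoming_guest *g)
{
    g->running = true;
    g->autodestroy = true;
}

/* Step 4: on success the guest is now legitimately running, so the
 * previous autodestroy registration is cancelled; on failure the
 * guest is stopped (qemuProcessStop() in the real code). */
void migration_finish(struct incoming_guest *g, bool success)
{
    if (success)
        g->autodestroy = false;
    else
        g->running = false;
}

/* Connection-close handler: kill the guest only if still armed. */
void connection_closed(struct incoming_guest *g)
{
    if (g->autodestroy)
        g->running = false;
}
```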

On 06/23/2011 04:58 AM, Daniel P. Berrange wrote:
Migration is a multi-step process
1. Begin(src)
2. Prepare(dst)
3. Perform(src)
4. Finish(dst)
5. Confirm(src)
At step 2, a QEMU process is launched on the destination to accept the incoming migration. Occasionally the process that is controlling the migration workflow aborts and fails to call step 4, Finish. This leaves a QEMU process running on the target (albeit with paused CPUs). Unfortunately, because step 2 activates a job on the QEMU process, it is unkillable by normal means.
By registering the VM for autokill against the src virConnectPtr in step 2, we can ensure that the guest is forcefully killed off if the connection is closed without step 4 being invoked
* src/qemu/qemu_migration.c: Register autokill in PrepareDirect and
  PrepareTunnel. Unregister autokill on successful run of Finish
* src/qemu/qemu_process.c: Unregister autokill when stopping a process
---
 src/qemu/qemu_migration.c |    7 +++++--
 1 files changed, 5 insertions(+), 2 deletions(-)
See comments on 0/3 about whether this patch is complete or whether we need more restrictions on migration when autokill is active. But for what you have here: ACK with nits.
@@ -2549,6 +2549,9 @@ qemuMigrationFinish(struct qemud_driver *driver,
             VIR_WARN("Failed to save status on vm %s", vm->def->name);
             goto endjob;
         }
+
+        /* Guest is sucessfully running, so cancel previous autokill */
s/sucessfully/successfully/
+        qemuProcessAutokillRemove(driver, vm);
     } else {
         qemuProcessStop(driver, vm, 1, VIR_DOMAIN_SHUTOFF_FAILED);
         qemuAuditDomainStop(vm, "failed");
-- Eric Blake eblake@redhat.com +1-801-349-2682 Libvirt virtualization library http://libvirt.org

If an application is using libvirt + KVM as a piece of its internal infrastructure to perform a specific task, it can be desirable to guarantee that the VM dies when the virConnectPtr disconnects from libvirtd. This ensures the app can't leak any VMs it was using. Adding VIR_DOMAIN_START_AUTOKILL as a flag when starting guests enables this to be done.

* include/libvirt/libvirt.h.in: Add VIR_DOMAIN_START_AUTOKILL
* src/qemu/qemu_driver.c: Support automatic killing of guests upon
  connection close
* tools/virsh.c: Add --autokill flag to 'start' and 'create' commands
---
 include/libvirt/libvirt.h.in |    1 +
 src/qemu/qemu_driver.c       |   27 ++++++++++++++++++---------
 tools/virsh.c                |    6 ++++++
 3 files changed, 25 insertions(+), 9 deletions(-)

diff --git a/include/libvirt/libvirt.h.in b/include/libvirt/libvirt.h.in
index 3f634e6..a827697 100644
--- a/include/libvirt/libvirt.h.in
+++ b/include/libvirt/libvirt.h.in
@@ -234,6 +234,7 @@ typedef virDomainInfo *virDomainInfoPtr;
 typedef enum {
     VIR_DOMAIN_NONE = 0, /* Default behavior */
     VIR_DOMAIN_START_PAUSED = 1 << 0, /* Launch guest in paused state */
+    VIR_DOMAIN_START_AUTOKILL = 1 << 1, /* Automatically kill guest when virConnectPtr is closed */
 } virDomainCreateFlags;
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 44e1fba..18701a7 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -119,7 +119,8 @@ static int qemudShutdown(void);
 static int qemudDomainObjStart(virConnectPtr conn,
                                struct qemud_driver *driver,
                                virDomainObjPtr vm,
-                               bool start_paused);
+                               bool start_paused,
+                               bool autokill);
 
 static int qemudDomainGetMaxVcpus(virDomainPtr dom);
 
@@ -148,7 +149,7 @@ qemuAutostartDomain(void *payload, const void *name ATTRIBUTE_UNUSED, void *opaq)
     } else {
         if (vm->autostart &&
             !virDomainObjIsActive(vm) &&
-            qemudDomainObjStart(data->conn, data->driver, vm, false) < 0) {
+            qemudDomainObjStart(data->conn, data->driver, vm, false, false) < 0) {
             err = virGetLastError();
             VIR_ERROR(_("Failed to autostart VM '%s': %s"),
                       vm->def->name,
@@ -1246,7 +1247,8 @@ static virDomainPtr qemudDomainCreate(virConnectPtr conn, const char *xml,
     virDomainPtr dom = NULL;
     virDomainEventPtr event = NULL;
 
-    virCheckFlags(VIR_DOMAIN_START_PAUSED, NULL);
+    virCheckFlags(VIR_DOMAIN_START_PAUSED |
+                  VIR_DOMAIN_START_AUTOKILL, NULL);
 
     qemuDriverLock(driver);
     if (!(def = virDomainDefParseString(driver->caps, xml,
@@ -1277,7 +1279,7 @@ static virDomainPtr qemudDomainCreate(virConnectPtr conn, const char *xml,
     if (qemuProcessStart(conn, driver, vm, NULL,
                          (flags & VIR_DOMAIN_START_PAUSED) != 0,
-                         false,
+                         (flags & VIR_DOMAIN_START_AUTOKILL) != 0,
                          -1, NULL, VIR_VM_OP_CREATE) < 0) {
         qemuAuditDomainStart(vm, "booted", false);
         if (qemuDomainObjEndJob(vm) > 0)
@@ -1295,6 +1297,7 @@ static virDomainPtr qemudDomainCreate(virConnectPtr conn, const char *xml,
     dom = virGetDomain(conn, vm->def->name, vm->def->uuid);
     if (dom) dom->id = vm->def->id;
 
+endjob:
     if (vm &&
         qemuDomainObjEndJob(vm) == 0)
         vm = NULL;
@@ -3819,7 +3822,8 @@ static int qemudNumDefinedDomains(virConnectPtr conn) {
 static int qemudDomainObjStart(virConnectPtr conn,
                                struct qemud_driver *driver,
                                virDomainObjPtr vm,
-                               bool start_paused)
+                               bool start_paused,
+                               bool autokill)
 {
     int ret = -1;
     char *managed_save;
@@ -3843,7 +3847,7 @@ static int qemudDomainObjStart(virConnectPtr conn,
     }
 
     ret = qemuProcessStart(conn, driver, vm, NULL, start_paused,
-                           false, -1, NULL, VIR_VM_OP_CREATE);
+                           autokill, -1, NULL, VIR_VM_OP_CREATE);
     qemuAuditDomainStart(vm, "booted", ret >= 0);
     if (ret >= 0) {
         virDomainEventPtr event =
@@ -3866,7 +3870,8 @@ qemudDomainStartWithFlags(virDomainPtr dom, unsigned int flags)
     virDomainObjPtr vm;
     int ret = -1;
 
-    virCheckFlags(VIR_DOMAIN_START_PAUSED, -1);
+    virCheckFlags(VIR_DOMAIN_START_PAUSED |
+                  VIR_DOMAIN_START_AUTOKILL, -1);
 
     qemuDriverLock(driver);
     vm = virDomainFindByUUID(&driver->domains, dom->uuid);
@@ -3888,8 +3893,12 @@ qemudDomainStartWithFlags(virDomainPtr dom, unsigned int flags)
         goto endjob;
     }
 
-    ret = qemudDomainObjStart(dom->conn, driver, vm,
-                              (flags & VIR_DOMAIN_START_PAUSED) != 0);
+    if (qemudDomainObjStart(dom->conn, driver, vm,
+                            (flags & VIR_DOMAIN_START_PAUSED) != 0,
+                            (flags & VIR_DOMAIN_START_AUTOKILL) != 0) < 0)
+        goto endjob;
+
+    ret = 0;
 
 endjob:
     if (qemuDomainObjEndJob(vm) == 0)
diff --git a/tools/virsh.c b/tools/virsh.c
index fcd254d..dedf883 100644
--- a/tools/virsh.c
+++ b/tools/virsh.c
@@ -1305,6 +1305,7 @@ static const vshCmdOptDef opts_create[] = {
     {"console", VSH_OT_BOOL, 0, N_("attach to console after creation")},
 #endif
     {"paused", VSH_OT_BOOL, 0, N_("leave the guest paused after creation")},
+    {"autokill", VSH_OT_BOOL, 0, N_("automatically kill the guest when virsh disconnects")},
     {NULL, 0, 0, NULL}
 };
 
@@ -1331,6 +1332,8 @@ cmdCreate(vshControl *ctl, const vshCmd *cmd)
 
     if (vshCommandOptBool(cmd, "paused"))
         flags |= VIR_DOMAIN_START_PAUSED;
+    if (vshCommandOptBool(cmd, "autokill"))
+        flags |= VIR_DOMAIN_START_AUTOKILL;
 
     dom = virDomainCreateXML(ctl->conn, buffer, flags);
     VIR_FREE(buffer);
@@ -1466,6 +1469,7 @@ static const vshCmdOptDef opts_start[] = {
     {"console", VSH_OT_BOOL, 0, N_("attach to console after creation")},
 #endif
     {"paused", VSH_OT_BOOL, 0, N_("leave the guest paused after creation")},
+    {"autokill", VSH_OT_BOOL, 0, N_("automatically kill the guest when virsh disconnects")},
     {NULL, 0, 0, NULL}
 };
 
@@ -1494,6 +1498,8 @@ cmdStart(vshControl *ctl, const vshCmd *cmd)
 
     if (vshCommandOptBool(cmd, "paused"))
         flags |= VIR_DOMAIN_START_PAUSED;
+    if (vshCommandOptBool(cmd, "autokill"))
+        flags |= VIR_DOMAIN_START_AUTOKILL;
 
     /* Prefer older API unless we have to pass a flag. */
     if ((flags ? virDomainCreateWithFlags(dom, flags)
-- 
1.7.4.4

On 06/23/2011 04:58 AM, Daniel P. Berrange wrote:
If an application is using libvirt + KVM as a piece of its internal infrastructure to perform a specific task, it can be desirable to guarantee that the VM dies when the virConnectPtr disconnects from libvirtd. This ensures the app can't leak any VMs it was using. Adding VIR_DOMAIN_START_AUTOKILL as a flag when starting guests enables this to be done.
* include/libvirt/libvirt.h.in: Add VIR_DOMAIN_START_AUTOKILL
* src/qemu/qemu_driver.c: Support automatic killing of guests upon
  connection close
* tools/virsh.c: Add --autokill flag to 'start' and 'create' commands
---
 include/libvirt/libvirt.h.in |    1 +
 src/qemu/qemu_driver.c       |   27 ++++++++++++++++++---------
 tools/virsh.c                |    6 ++++++
Incomplete. Missing tools/virsh.pod.
3 files changed, 25 insertions(+), 9 deletions(-)
diff --git a/include/libvirt/libvirt.h.in b/include/libvirt/libvirt.h.in
index 3f634e6..a827697 100644
--- a/include/libvirt/libvirt.h.in
+++ b/include/libvirt/libvirt.h.in
@@ -234,6 +234,7 @@ typedef virDomainInfo *virDomainInfoPtr;
 typedef enum {
     VIR_DOMAIN_NONE = 0, /* Default behavior */
     VIR_DOMAIN_START_PAUSED = 1 << 0, /* Launch guest in paused state */
+    VIR_DOMAIN_START_AUTOKILL = 1 << 1, /* Automatically kill guest when virConnectPtr is closed */
 } virDomainCreateFlags;
The more uses we find for flags, the more I'm glad we've been making all new APIs support them :)
+++ b/tools/virsh.c
@@ -1305,6 +1305,7 @@ static const vshCmdOptDef opts_create[] = {
     {"console", VSH_OT_BOOL, 0, N_("attach to console after creation")},
 #endif
     {"paused", VSH_OT_BOOL, 0, N_("leave the guest paused after creation")},
+    {"autokill", VSH_OT_BOOL, 0, N_("automatically kill the guest when virsh disconnects")},
Probably more useful when using a long-running 'virsh', as opposed to a shorter 'virsh start --autokill' :)

Conditional ACK, once you also document the new virsh flags.

-- 
Eric Blake   eblake@redhat.com   +1-801-349-2682
Libvirt virtualization library http://libvirt.org

On Thu, Jun 23, 2011 at 11:58:18AM +0100, Daniel P. Berrange wrote:
If an application is using libvirt + KVM as a piece of its internal infrastructure to perform a specific task, it can be desirable to guarantee that the VM dies when the virConnectPtr disconnects from libvirtd. This ensures the app can't leak any VMs it was using. Adding VIR_DOMAIN_START_AUTOKILL as a flag when starting guests enables this to be done.
* include/libvirt/libvirt.h.in: Add VIR_DOMAIN_START_AUTOKILL
* src/qemu/qemu_driver.c: Support automatic killing of guests upon
  connection close
* tools/virsh.c: Add --autokill flag to 'start' and 'create' commands
---
 include/libvirt/libvirt.h.in |    1 +
 src/qemu/qemu_driver.c       |   27 ++++++++++++++++++---------
 tools/virsh.c                |    6 ++++++
 3 files changed, 25 insertions(+), 9 deletions(-)
diff --git a/include/libvirt/libvirt.h.in b/include/libvirt/libvirt.h.in
index 3f634e6..a827697 100644
--- a/include/libvirt/libvirt.h.in
+++ b/include/libvirt/libvirt.h.in
@@ -234,6 +234,7 @@ typedef virDomainInfo *virDomainInfoPtr;
 typedef enum {
     VIR_DOMAIN_NONE = 0, /* Default behavior */
     VIR_DOMAIN_START_PAUSED = 1 << 0, /* Launch guest in paused state */
+    VIR_DOMAIN_START_AUTOKILL = 1 << 1, /* Automatically kill guest when virConnectPtr is closed */
 } virDomainCreateFlags;
I'm thinking perhaps I should rename that to AUTODESTROY instead, since that matches the terminology we use here. 'kill' is specific to UNIX processes, and if we want to extend the autodestroy concept to storage pools or virtual networks, AUTODESTROY makes more sense.
diff --git a/tools/virsh.c b/tools/virsh.c
index fcd254d..dedf883 100644
--- a/tools/virsh.c
+++ b/tools/virsh.c
@@ -1305,6 +1305,7 @@ static const vshCmdOptDef opts_create[] = {
     {"console", VSH_OT_BOOL, 0, N_("attach to console after creation")},
 #endif
     {"paused", VSH_OT_BOOL, 0, N_("leave the guest paused after creation")},
+    {"autokill", VSH_OT_BOOL, 0, N_("automatically kill the guest when virsh disconnects")},
     {NULL, 0, 0, NULL}
 };
And '--autodestroy' here instead.

Daniel

-- 
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org -o- http://virt-manager.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|
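The flag plumbing under discussion follows a common pattern: the entry point accepts a bitmask, rejects unknown bits (the virCheckFlags idiom), and translates each bit into a boolean for the lower layer. A hedged sketch, with the reviewed rename applied; the `domain_create` helper and its signature are invented for illustration, not libvirt's API:

```c
#include <assert.h>

typedef enum {
    START_NONE        = 0,
    START_PAUSED      = 1 << 0,
    START_AUTODESTROY = 1 << 1,   /* renamed from AUTOKILL per review */
} start_flags;

/* Returns -1 for unsupported flag bits, mimicking virCheckFlags();
 * otherwise decodes the known bits into booleans and returns 0. */
int domain_create(unsigned int flags, int *paused, int *autodestroy)
{
    if (flags & ~(unsigned int)(START_PAUSED | START_AUTODESTROY))
        return -1;
    *paused = (flags & START_PAUSED) != 0;
    *autodestroy = (flags & START_AUTODESTROY) != 0;
    return 0;
}
```

Rejecting unknown bits up front is what lets new flags be added later without old clients silently getting surprising behavior.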

On 06/23/2011 07:35 AM, Daniel P. Berrange wrote:
@@ -234,6 +234,7 @@ typedef virDomainInfo *virDomainInfoPtr;
 typedef enum {
     VIR_DOMAIN_NONE = 0, /* Default behavior */
     VIR_DOMAIN_START_PAUSED = 1 << 0, /* Launch guest in paused state */
+    VIR_DOMAIN_START_AUTOKILL = 1 << 1, /* Automatically kill guest when virConnectPtr is closed */
 } virDomainCreateFlags;
I'm thinking perhaps I should rename that to AUTODESTROY instead, since that matches the terminology we use here. 'kill' is specific to UNIX processes, and if we want to extend the autodestroy concept to storage pools or virtual networks, AUTODESTROY makes more sense.
Indeed, that does sound better. -- Eric Blake eblake@redhat.com +1-801-349-2682 Libvirt virtualization library http://libvirt.org

On Thu, Jun 23, 2011 at 07:43:16AM -0600, Eric Blake wrote:
On 06/23/2011 07:35 AM, Daniel P. Berrange wrote:
@@ -234,6 +234,7 @@ typedef virDomainInfo *virDomainInfoPtr;
 typedef enum {
     VIR_DOMAIN_NONE = 0, /* Default behavior */
     VIR_DOMAIN_START_PAUSED = 1 << 0, /* Launch guest in paused state */
+    VIR_DOMAIN_START_AUTOKILL = 1 << 1, /* Automatically kill guest when virConnectPtr is closed */
 } virDomainCreateFlags;
I'm thinking perhaps I should rename that to AUTODESTROY instead, since that matches the terminology we use here. 'kill' is specific to UNIX processes, and if we want to extend the autodestroy concept to storage pools or virtual networks, AUTODESTROY makes more sense.
Indeed, that does sound better.
Agreed :-) Daniel -- Daniel Veillard | libxml Gnome XML XSLT toolkit http://xmlsoft.org/ daniel@veillard.com | Rpmfind RPM search engine http://rpmfind.net/ http://veillard.com/ | virtualization library http://libvirt.org/

On 06/23/2011 04:58 AM, Daniel P. Berrange wrote:
I am building an application which uses KVM to run specific tasks, rather than as a general-purpose guest OS. I want to ensure that when the app exits, the guest goes away too. To enable this, this series introduces the concept of 'autokill', whereby a guest is forcibly destroyed when the virConnectPtr that launched it closes. This also lets us fix a long-standing problem with migration leaving an unkillable guest.
Cool! How does this interact with migration? If a domain is currently marked autokill on the source, should that mean that attempts to migrate it are forbidden (since the connection to the source would end up being useless after the migration, at which point the connection is gone and autokill should kick in)? -- Eric Blake eblake@redhat.com +1-801-349-2682 Libvirt virtualization library http://libvirt.org

On Thu, Jun 23, 2011 at 06:52:47AM -0600, Eric Blake wrote:
On 06/23/2011 04:58 AM, Daniel P. Berrange wrote:
I am building an application which uses KVM to run specific tasks, rather than as a general-purpose guest OS. I want to ensure that when the app exits, the guest goes away too. To enable this, this series introduces the concept of 'autokill', whereby a guest is forcibly destroyed when the virConnectPtr that launched it closes. This also lets us fix a long-standing problem with migration leaving an unkillable guest.
Cool!
How does this interact with migration? If a domain is currently marked autokill on the source, should that mean that attempts to migrate it are forbidden (since the connection to the source would end up being useless after the migration, at which point the connection is gone and autokill should kick in)?
That's a good point. I reckon we should forbid migration and save/restore for such guests. Daniel -- |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|

On 06/23/2011 07:03 AM, Daniel P. Berrange wrote:
On Thu, Jun 23, 2011 at 06:52:47AM -0600, Eric Blake wrote:
On 06/23/2011 04:58 AM, Daniel P. Berrange wrote:
I am building an application which uses KVM to run specific tasks, rather than as a general-purpose guest OS. I want to ensure that when the app exits, the guest goes away too. To enable this, this series introduces the concept of 'autokill', whereby a guest is forcibly destroyed when the virConnectPtr that launched it closes. This also lets us fix a long-standing problem with migration leaving an unkillable guest.
Cool!
How does this interact with migration? If a domain is currently marked autokill on the source, should that mean that attempts to migrate it are forbidden (since the connection to the source would end up being useless after the migration, at which point the connection is gone and autokill should kick in)?
That's a good point. I reckon we should forbid migration and save/restore for such guests.
save/restore might work, if you keep the connection alive in the meantime; but seeing as how save is basically a form of migration (to a file rather than to another host), it's probably easier to just forbid that as well, and state that an autokill guest is one-shot.

And another thought - it might be nice to expose the qemu -snapshot option, which creates a throwaway run of a guest (where all the disks are snapshotted prior to running any guest code, then when qemu exits the state is rolled back to that snapshot). Seems like a new flag could easily expose this feature, and that it would either imply autokill or often be used with autokill.

-- 
Eric Blake   eblake@redhat.com   +1-801-349-2682
Libvirt virtualization library http://libvirt.org

On 06/23/2011 07:11 AM, Eric Blake wrote:
On 06/23/2011 07:03 AM, Daniel P. Berrange wrote:
On Thu, Jun 23, 2011 at 06:52:47AM -0600, Eric Blake wrote:
On 06/23/2011 04:58 AM, Daniel P. Berrange wrote:
I am building an application which uses KVM to run specific tasks, rather than as a general-purpose guest OS. I want to ensure that when the app exits, the guest goes away too. To enable this, this series introduces the concept of 'autokill', whereby a guest is forcibly destroyed when the virConnectPtr that launched it closes. This also lets us fix a long-standing problem with migration leaving an unkillable guest.
Cool!
How does this interact with migration? If a domain is currently marked autokill on the source, should that mean that attempts to migrate it are forbidden (since the connection to the source would end up being useless after the migration, at which point the connection is gone and autokill should kick in)?
That's a good point. I reckon we should forbid migration and save/restore for such guests.
save/restore might work, if you keep the connection alive in the meantime; but seeing as how save is basically a form of migration (to a file rather than to another host), it's probably easier to just forbid that as well, and state that an autokill guest is one-shot.
And another thought - it might be nice to expose the qemu -snapshot option, which creates a throwaway run of a guest (where all the disks are snapshotted prior to running any guest code, then when qemu exits the state rolled back to that snapshot). Seems like a new flag could easily expose this feature, and that it would either imply autokill or often be used with autokill.
Oops. As implemented, the documentation states that autodestroy domains cannot be saved, but the qemu code allows that, while instead enforcing that autodestroy domains cannot be suspended (paused). I think we botched that.

And while fixing it, I'm going to also make autodestroy domains reject use with snapshots (another use of the one-shot principle - save/restore is a special case of snapshot/revert).

(Can you tell that I revisited this email because it mentions qemu -snapshot, and then started thinking about the ramifications with my recent snapshot work? :)

-- 
Eric Blake   eblake@redhat.com   +1-801-349-2682
Libvirt virtualization library http://libvirt.org
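The "one-shot" rule being converged on here, that operations letting a guest outlive its connection (save, migrate, snapshot) must be refused for autodestroy domains, can be sketched as a simple policy check (purely illustrative; in libvirt the checks live in the individual qemu driver entry points, and these enum and function names are invented):

```c
#include <stdbool.h>
#include <assert.h>

enum domain_op { OP_SAVE, OP_MIGRATE, OP_SNAPSHOT, OP_DESTROY };

/* Returns true if the operation is permitted on this domain.
 * Autodestroy domains are one-shot: anything that could let the
 * guest's state survive past its owning connection is refused. */
bool op_allowed(bool autodestroy, enum domain_op op)
{
    if (!autodestroy)
        return true;
    switch (op) {
    case OP_SAVE:
    case OP_MIGRATE:
    case OP_SNAPSHOT:
        return false;   /* would let the guest escape its connection */
    default:
        return true;    /* destroying it early is always fine */
    }
}
```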
participants (3):
- Daniel P. Berrange
- Daniel Veillard
- Eric Blake