[libvirt] [PATCHv2 0/2] qemu: new GRACEFUL flag for virDomainDestroy

This is the 2nd attempt at solving the problem of virDomainDestroy unceremoniously sending SIGKILL to the qemu process before it's had a chance to flush its disk buffers. In v1, I altered the default behavior of virDomainDestroy(), but that was rejected as changing established API behavior. In v2, the default behavior is maintained, and a new flag VIR_DOMAIN_DESTROY_GRACEFUL is added to the virDomainDestroyFlags API. If that flag is included in the call, virDomainDestroyFlags will only send SIGTERM to the qemu process, and if that is unsuccessful, it will return an error rather than sending SIGKILL.

I've also included a 2nd patch which lengthens the timeout prior to sending SIGKILL. Although there has been some sentiment against doing this, my opinion is that it's the prudent thing to do because:

1) The original timeout of 1.6 seconds was arbitrarily selected, and obviously isn't always adequate.

2) In the cases where the current timeout *is* adequate, there will be exactly 0 change in behavior anyway. The only time behavior will change (and that will just be in the form of a longer delay before the API call returns) will be in those cases where we would otherwise have caused guest disk corruption. I know which of those two options I would choose if it were my guest.

3) Just adding a new flag to the libvirt API does not do anything to help those who are stuck (if only temporarily) with an existing version of an application that doesn't support the new API flag. Since libvirt is the cause of the problem, we should try to mitigate the problem as much as possible without requiring updates of other packages.

When libvirt's virDomainDestroy API is shutting down the qemu process, it first sends SIGTERM, then waits for 1.6 seconds and, if it sees the process still there, sends a SIGKILL. There have been reports that this behavior can lead to data loss because the guest running in qemu doesn't have time to flush its disk cache buffers before it's unceremoniously whacked. This patch maintains that default behavior, but provides a new flag VIR_DOMAIN_DESTROY_GRACEFUL to alter the behavior. If this flag is set in the call to virDomainDestroyFlags, SIGKILL will never be sent to the qemu process; instead, if the timeout is reached and the qemu process still exists, virDomainDestroy will return an error. Once this patch is in, the recommended method for applications to call virDomainDestroyFlags will be with VIR_DOMAIN_DESTROY_GRACEFUL included. If that fails, then the application can decide if and when to call virDomainDestroyFlags again without VIR_DOMAIN_DESTROY_GRACEFUL (to force the issue with SIGKILL). (Note that this does not address the issue of existing applications that have not yet been modified to use VIR_DOMAIN_DESTROY_GRACEFUL. That is a separate patch.) 
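The recommended calling sequence can be sketched as follows. This is a Python illustration only: `destroy_flags` is a hypothetical stand-in for virDomainDestroyFlags (assumed to return 0 on success and -1 on failure, like the C API), and the flag values mirror the enum added by this patch.

```python
# Sketch of the application-side pattern: try a graceful destroy first,
# and only escalate to a forced destroy if the graceful one times out.
# `destroy_flags` stands in for virDomainDestroyFlags; it is assumed to
# return 0 on success and -1 on failure, like the C API.

VIR_DOMAIN_DESTROY_DEFAULT = 0        # default: SIGTERM, then SIGKILL
VIR_DOMAIN_DESTROY_GRACEFUL = 1 << 0  # SIGTERM only, error on timeout

def destroy_domain(destroy_flags, force_on_failure=True):
    """Try a graceful destroy; optionally fall back to a forced one."""
    if destroy_flags(VIR_DOMAIN_DESTROY_GRACEFUL) == 0:
        return "graceful"
    # Graceful destroy timed out; the application decides whether to
    # force the issue (which may lose unflushed guest data).
    if force_on_failure and destroy_flags(VIR_DOMAIN_DESTROY_DEFAULT) == 0:
        return "forced"
    return "failed"
```

With a stand-in that fails the graceful attempt, the helper falls back to the forced call, which is exactly the decision point the commit message leaves to the management application.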
---
 include/libvirt/libvirt.h.in |   12 ++++----
 src/libvirt.c                |   20 ++++++++++--
 src/qemu/qemu_driver.c       |   12 ++++++-
 src/qemu/qemu_process.c      |   68 +++++++++++++++++++++++++++---------------
 src/qemu/qemu_process.h      |    9 ++++-
 5 files changed, 83 insertions(+), 38 deletions(-)

diff --git a/include/libvirt/libvirt.h.in b/include/libvirt/libvirt.h.in
index d9b9b95..9440751 100644
--- a/include/libvirt/libvirt.h.in
+++ b/include/libvirt/libvirt.h.in
@@ -1186,12 +1186,6 @@ virConnectPtr virDomainGetConnect (virDomainPtr domain);
  * Domain creation and destruction
  */
-
-/*
- * typedef enum {
- * } virDomainDestroyFlagsValues;
- */
-
 virDomainPtr virDomainCreateXML (virConnectPtr conn,
                                  const char *xmlDesc,
                                  unsigned int flags);
@@ -1226,6 +1220,12 @@ int virDomainReset (virDomainPtr domain,
                                          unsigned int flags);
 
 int virDomainDestroy (virDomainPtr domain);
+
+typedef enum {
+    VIR_DOMAIN_DESTROY_DEFAULT  = 0,      /* Default behavior - could lead to data loss!! */
+    VIR_DOMAIN_DESTROY_GRACEFUL = 1 << 0, /* only SIGTERM, no SIGKILL */
+} virDomainDestroyFlagsValues;
+
 int virDomainDestroyFlags (virDomainPtr domain,
                            unsigned int flags);
 int virDomainRef (virDomainPtr domain);
diff --git a/src/libvirt.c b/src/libvirt.c
index c609202..5833d8d 100644
--- a/src/libvirt.c
+++ b/src/libvirt.c
@@ -2233,10 +2233,22 @@ error:
 * This does not free the associated virDomainPtr object.
 * This function may require privileged access.
 *
- * Calling this function with no @flags set (equal to zero)
- * is equivalent to calling virDomainDestroy. Using virDomainShutdown()
- * may produce cleaner results for the guest's disks, but depends on guest
- * support.
+ * Calling this function with no @flags set (equal to zero) is
+ * equivalent to calling virDomainDestroy, and after a reasonable
+ * timeout will forcefully terminate the guest (e.g. SIGKILL) if
+ * necessary (which may produce undesirable results, for example
+ * unflushed disk cache in the guest). Including
+ * VIR_DOMAIN_DESTROY_GRACEFUL in the flags will prevent the forceful
+ * termination of the guest, and virDomainDestroyFlags will instead
+ * return an error if the guest doesn't terminate by the end of the
+ * timeout; at that time, the management application can decide if
+ * calling again without VIR_DOMAIN_DESTROY_GRACEFUL is appropriate.
+ *
+ * Another alternative which may produce cleaner results for the
+ * guest's disks is to use virDomainShutdown() instead, but that
+ * depends on guest support (some hypervisor/guest combinations may
+ * ignore the shutdown request).
+ *
 *
 * Returns 0 in case of success and -1 in case of failure.
 */
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 1b147a9..ad81e64 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -1754,7 +1754,7 @@ qemuDomainDestroyFlags(virDomainPtr dom,
     virDomainEventPtr event = NULL;
     qemuDomainObjPrivatePtr priv;
 
-    virCheckFlags(0, -1);
+    virCheckFlags(VIR_DOMAIN_DESTROY_GRACEFUL, -1);
 
     qemuDriverLock(driver);
     vm = virDomainFindByUUID(&driver->domains, dom->uuid);
@@ -1775,7 +1775,15 @@ qemuDomainDestroyFlags(virDomainPtr dom,
      * can kill the process even if a job is active. Killing
      * it now means the job will be released
      */
-    qemuProcessKill(vm, false);
+    if (flags & VIR_DOMAIN_DESTROY_GRACEFUL) {
+        if (qemuProcessKill(vm, 0) < 0) {
+            qemuReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+                            _("failed to kill qemu process with SIGTERM"));
+            goto cleanup;
+        }
+    } else {
+        ignore_value(qemuProcessKill(vm, VIR_QEMU_PROCESS_KILL_FORCE));
+    }
 
     /* We need to prevent monitor EOF callback from doing our work (and sending
      * misleading events) while the vm is unlocked inside BeginJob API
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 116a828..1bbb55c 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -1,7 +1,7 @@
 /*
  * qemu_process.h: QEMU process management
  *
- * Copyright (C) 2006-2011 Red Hat, Inc.
+ * Copyright (C) 2006-2012 Red Hat, Inc.
 *
 * This library is free software; you can redistribute it and/or
 * modify it under the terms of the GNU Lesser General Public
@@ -587,7 +587,7 @@ endjob:
 cleanup:
     if (vm) {
         if (ret == -1)
-            qemuProcessKill(vm, false);
+            ignore_value(qemuProcessKill(vm, VIR_QEMU_PROCESS_KILL_FORCE));
         if (virDomainObjUnref(vm) > 0)
             virDomainObjUnlock(vm);
     }
@@ -612,12 +612,12 @@ qemuProcessShutdownOrReboot(struct qemud_driver *driver,
                                 qemuProcessFakeReboot,
                                 vm) < 0) {
             VIR_ERROR(_("Failed to create reboot thread, killing domain"));
-            qemuProcessKill(vm, true);
+            ignore_value(qemuProcessKill(vm, VIR_QEMU_PROCESS_KILL_NOWAIT));
             /* Safe to ignore value since ref count was incremented above */
             ignore_value(virDomainObjUnref(vm));
         }
     } else {
-        qemuProcessKill(vm, true);
+        ignore_value(qemuProcessKill(vm, VIR_QEMU_PROCESS_KILL_NOWAIT));
     }
 }
@@ -3532,45 +3532,65 @@ cleanup:
 }
 
-void qemuProcessKill(virDomainObjPtr vm, bool gracefully)
+int qemuProcessKill(virDomainObjPtr vm, unsigned int flags)
 {
     int i;
-    VIR_DEBUG("vm=%s pid=%d gracefully=%d",
-              vm->def->name, vm->pid, gracefully);
+    const char *signame = "TERM";
+
+    VIR_DEBUG("vm=%s pid=%d flags=%x",
+              vm->def->name, vm->pid, flags);
 
     if (!virDomainObjIsActive(vm)) {
         VIR_DEBUG("VM '%s' not active", vm->def->name);
-        return;
+        return 0;
     }
 
-    /* This loop sends SIGTERM, then waits a few iterations
-     * (1.6 seconds) to see if it dies. If still alive then
-     * it does SIGKILL, and waits a few more iterations (1.6
-     * seconds more) to confirm that it has really gone.
+    /* This loop sends SIGTERM (or SIGKILL if flags has
+     * VIR_QEMU_PROCESS_KILL_FORCE and VIR_QEMU_PROCESS_KILL_NOWAIT),
+     * then waits a few iterations (3 seconds) to see if it
+     * dies. Halfway through this wait, if the qemu process still
+     * hasn't exited, and VIR_QEMU_PROCESS_KILL_FORCE is requested, a
+     * SIGKILL will be sent. Note that the FORCE mode could result
+     * in lost data in the guest, so it should only be used if the
+     * guest is hung and can't be destroyed in any other manner.
      */
-    for (i = 0 ; i < 15 ; i++) {
+    for (i = 0 ; i < 15; i++) {
         int signum;
-        if (i == 0)
-            signum = SIGTERM;
-        else if (i == 8)
-            signum = SIGKILL;
-        else
+        if (i == 0) {
+            if ((flags & VIR_QEMU_PROCESS_KILL_FORCE) &&
+                (flags & VIR_QEMU_PROCESS_KILL_NOWAIT)) {
+                signum = SIGKILL; /* nuke it immediately */
+                signame = "KILL";
+            } else {
+                signum = SIGTERM; /* kindly suggest it should exit */
+            }
+        } else if ((i == 8) && (flags & VIR_QEMU_PROCESS_KILL_FORCE)) {
+            VIR_WARN("Timed out waiting after SIG%s to process %d, "
+                     "sending SIGKILL", signame, vm->pid);
+            signum = SIGKILL; /* nuke it after a grace period */
+            signame = "KILL";
+        } else {
             signum = 0; /* Just check for existence */
+        }
 
         if (virKillProcess(vm->pid, signum) < 0) {
             if (errno != ESRCH) {
                 char ebuf[1024];
-                VIR_WARN("Failed to kill process %d %s",
-                         vm->pid, virStrerror(errno, ebuf, sizeof ebuf));
+                VIR_WARN("Failed to terminate process %d with SIG%s: %s",
+                         vm->pid, signame,
+                         virStrerror(errno, ebuf, sizeof ebuf));
+                return -1;
             }
-            break;
+            return 0; /* process is dead */
         }
 
-        if (i == 0 && gracefully)
-            break;
+        if (i == 0 && (flags & VIR_QEMU_PROCESS_KILL_NOWAIT))
+            return 0;
 
         usleep(200 * 1000);
     }
+    VIR_WARN("Timed out waiting after SIG%s to process %d", signame, vm->pid);
+    return -1;
 }
@@ -3659,7 +3679,7 @@ void qemuProcessStop(struct qemud_driver *driver,
     }
 
     /* shut it off for sure */
-    qemuProcessKill(vm, false);
+    ignore_value(qemuProcessKill(vm, VIR_QEMU_PROCESS_KILL_FORCE));
 
     /* Stop autodestroy in case guest is restarted */
     qemuProcessAutoDestroyRemove(driver, vm);
diff --git a/src/qemu/qemu_process.h b/src/qemu/qemu_process.h
index 062d79e..4ae6b4b 100644
--- a/src/qemu/qemu_process.h
+++ b/src/qemu/qemu_process.h
@@ -1,7 +1,7 @@
 /*
  * qemu_process.c: QEMU process management
  *
- * Copyright (C) 2006-2011 Red Hat, Inc.
+ * Copyright (C) 2006-2012 Red Hat, Inc.
 *
 * This library is free software; you can redistribute it and/or
 * modify it under the terms of the GNU Lesser General Public
@@ -68,7 +68,12 @@ int qemuProcessAttach(virConnectPtr conn,
                       virDomainChrSourceDefPtr monConfig,
                       bool monJSON);
 
-void qemuProcessKill(virDomainObjPtr vm, bool gracefully);
+typedef enum {
+    VIR_QEMU_PROCESS_KILL_FORCE  = 1 << 0,
+    VIR_QEMU_PROCESS_KILL_NOWAIT = 1 << 1,
+} virQemuProcessKillMode;
+
+int qemuProcessKill(virDomainObjPtr vm, unsigned int flags);
 
 int qemuProcessAutoDestroyInit(struct qemud_driver *driver);
 void qemuProcessAutoDestroyRun(struct qemud_driver *driver,
-- 
1.7.7.5
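For readers who don't want to trace the diff, the escalation logic of the new qemuProcessKill can be sketched in Python. This is a simplified stand-in, not libvirt code: the flag constants and the signal-0 existence probe mirror the patch, and a reaper thread plays the role of libvirt's child-process reaping so the probe actually sees ESRCH once the child dies.

```python
import errno
import os
import signal
import subprocess
import threading
import time

KILL_FORCE = 1 << 0   # stands in for VIR_QEMU_PROCESS_KILL_FORCE
KILL_NOWAIT = 1 << 1  # stands in for VIR_QEMU_PROCESS_KILL_NOWAIT

def process_kill(pid, flags, interval=0.2, iterations=15, force_at=8):
    """Escalating kill loop: SIGTERM first (or SIGKILL right away if both
    FORCE and NOWAIT are set), then SIGKILL at iteration `force_at` when
    FORCE is set. Returns 0 once the process is gone, -1 on timeout."""
    for i in range(iterations):
        if i == 0:
            if flags & KILL_FORCE and flags & KILL_NOWAIT:
                signum = signal.SIGKILL  # nuke it immediately
            else:
                signum = signal.SIGTERM  # kindly suggest it should exit
        elif i == force_at and flags & KILL_FORCE:
            signum = signal.SIGKILL      # nuke it after a grace period
        else:
            signum = 0                   # just probe for existence
        try:
            os.kill(pid, signum)
        except OSError as e:
            if e.errno == errno.ESRCH:
                return 0  # process is dead
            return -1     # some other failure
        if i == 0 and flags & KILL_NOWAIT:
            return 0      # fire and forget
        time.sleep(interval)
    return -1  # still alive after the full timeout

# Demonstrate against a real child; the reaper thread stands in for
# libvirt's SIGCHLD handling so the existence probe raises ESRCH.
child = subprocess.Popen(["sleep", "60"])
threading.Thread(target=child.wait, daemon=True).start()
result = process_kill(child.pid, KILL_FORCE)
```

Since `sleep` exits on SIGTERM, the loop returns well before the SIGKILL escalation at iteration 8, which is the "no behavior change in the success case" point the cover letter makes.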

On Thu, Feb 02, 2012 at 12:54:28PM -0500, Laine Stump wrote:
When libvirt's virDomainDestroy API is shutting down the qemu process, it first sends SIGTERM, then waits for 1.6 seconds and, if it sees the process still there, sends a SIGKILL.
There have been reports that this behavior can lead to data loss because the guest running in qemu doesn't have time to flush its disk cache buffers before it's unceremoniously whacked.
This patch maintains that default behavior, but provides a new flag VIR_DOMAIN_DESTROY_GRACEFUL to alter the behavior. If this flag is set in the call to virDomainDestroyFlags, SIGKILL will never be sent to the qemu process; instead, if the timeout is reached and the qemu process still exists, virDomainDestroy will return an error.
Once this patch is in, the recommended method for applications to call virDomainDestroyFlags will be with VIR_DOMAIN_DESTROY_GRACEFUL included. If that fails, then the application can decide if and when to call virDomainDestroyFlags again without VIR_DOMAIN_DESTROY_GRACEFUL (to force the issue with SIGKILL).
(Note that this does not address the issue of existing applications that have not yet been modified to use VIR_DOMAIN_DESTROY_GRACEFUL. That is a separate patch.) --- include/libvirt/libvirt.h.in | 12 ++++---- src/libvirt.c | 20 ++++++++++-- src/qemu/qemu_driver.c | 12 ++++++- src/qemu/qemu_process.c | 68 +++++++++++++++++++++++++++--------------- src/qemu/qemu_process.h | 9 ++++- 5 files changed, 83 insertions(+), 38 deletions(-)
diff --git a/include/libvirt/libvirt.h.in b/include/libvirt/libvirt.h.in index d9b9b95..9440751 100644 --- a/include/libvirt/libvirt.h.in +++ b/include/libvirt/libvirt.h.in @@ -1186,12 +1186,6 @@ virConnectPtr virDomainGetConnect (virDomainPtr domain); * Domain creation and destruction */
- -/* - * typedef enum { - * } virDomainDestroyFlagsValues; - */ - virDomainPtr virDomainCreateXML (virConnectPtr conn, const char *xmlDesc, unsigned int flags); @@ -1226,6 +1220,12 @@ int virDomainReset (virDomainPtr domain, unsigned int flags);
int virDomainDestroy (virDomainPtr domain);
Please add here a standard description comment for the datatype:

/**
 * virDomainDestroyFlagsValues:
 *
 * Flags used to provide specific behaviour to the
 * virDomainDestroyFlags() function
 */

for automatic extraction :-)
+ +typedef enum { + VIR_DOMAIN_DESTROY_DEFAULT = 0, /* Default behavior - could lead to data loss!! */ + VIR_DOMAIN_DESTROY_GRACEFUL = 1 << 0, /* only SIGTERM, no SIGKILL */ +} virDomainDestroyFlagsValues; + int virDomainDestroyFlags (virDomainPtr domain, unsigned int flags); int virDomainRef (virDomainPtr domain); diff --git a/src/libvirt.c b/src/libvirt.c index c609202..5833d8d 100644 --- a/src/libvirt.c +++ b/src/libvirt.c @@ -2233,10 +2233,22 @@ error: * This does not free the associated virDomainPtr object. * This function may require privileged access. * - * Calling this function with no @flags set (equal to zero) - * is equivalent to calling virDomainDestroy. Using virDomainShutdown() - * may produce cleaner results for the guest's disks, but depends on guest - * support. + * Calling this function with no @flags set (equal to zero) is + * equivalent to calling virDomainDestroy, and after a reasonable + * timeout will forcefully terminate the guest (e.g. SIGKILL) if + * necessary (which may produce undesirable results, for example + * unflushed disk cache in the guest). Including + * VIR_DOMAIN_DESTROY_GRACEFUL in the flags will prevent the forceful + * termination of the guest, and virDomainDestroyFlags will instead + * return an error if the guest doesn't terminate by the end of the + * timeout; at that time, the management application can decide if + * calling again without VIR_DOMAIN_DESTROY_GRACEFUL is appropriate. + * + * Another alternative which may produce cleaner results for the + * guest's disks is to use virDomainShutdown() instead, but that + * depends on guest support (some hypervisor/guest combinations may + * ignore the shutdown request). + * * * Returns 0 in case of success and -1 in case of failure. 
*/ diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index 1b147a9..ad81e64 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -1754,7 +1754,7 @@ qemuDomainDestroyFlags(virDomainPtr dom, virDomainEventPtr event = NULL; qemuDomainObjPrivatePtr priv;
-    virCheckFlags(0, -1);
+    virCheckFlags(VIR_DOMAIN_DESTROY_GRACEFUL, -1);
qemuDriverLock(driver); vm = virDomainFindByUUID(&driver->domains, dom->uuid); @@ -1775,7 +1775,15 @@ qemuDomainDestroyFlags(virDomainPtr dom, * can kill the process even if a job is active. Killing * it now means the job will be released */ - qemuProcessKill(vm, false); + if (flags & VIR_DOMAIN_DESTROY_GRACEFUL) { + if (qemuProcessKill(vm, 0) < 0) { + qemuReportError(VIR_ERR_INTERNAL_ERROR, "%s", + _("failed to kill qemu process with SIGTERM")); + goto cleanup; + } + } else { + ignore_value(qemuProcessKill(vm, VIR_QEMU_PROCESS_KILL_FORCE)); + }
/* We need to prevent monitor EOF callback from doing our work (and sending * misleading events) while the vm is unlocked inside BeginJob API diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c index 116a828..1bbb55c 100644 --- a/src/qemu/qemu_process.c +++ b/src/qemu/qemu_process.c @@ -1,7 +1,7 @@ /* * qemu_process.h: QEMU process management * - * Copyright (C) 2006-2011 Red Hat, Inc. + * Copyright (C) 2006-2012 Red Hat, Inc. * * This library is free software; you can redistribute it and/or * modify it under the terms of the GNU Lesser General Public @@ -587,7 +587,7 @@ endjob: cleanup: if (vm) { if (ret == -1) - qemuProcessKill(vm, false); + ignore_value(qemuProcessKill(vm, VIR_QEMU_PROCESS_KILL_FORCE)); if (virDomainObjUnref(vm) > 0) virDomainObjUnlock(vm); } @@ -612,12 +612,12 @@ qemuProcessShutdownOrReboot(struct qemud_driver *driver, qemuProcessFakeReboot, vm) < 0) { VIR_ERROR(_("Failed to create reboot thread, killing domain")); - qemuProcessKill(vm, true); + ignore_value(qemuProcessKill(vm, VIR_QEMU_PROCESS_KILL_NOWAIT)); /* Safe to ignore value since ref count was incremented above */ ignore_value(virDomainObjUnref(vm)); } } else { - qemuProcessKill(vm, true); + ignore_value(qemuProcessKill(vm, VIR_QEMU_PROCESS_KILL_NOWAIT)); } }
@@ -3532,45 +3532,65 @@ cleanup:
 }
-void qemuProcessKill(virDomainObjPtr vm, bool gracefully)
+int qemuProcessKill(virDomainObjPtr vm, unsigned int flags)
 {
     int i;
-    VIR_DEBUG("vm=%s pid=%d gracefully=%d",
-              vm->def->name, vm->pid, gracefully);
+    const char *signame = "TERM";
+
+    VIR_DEBUG("vm=%s pid=%d flags=%x",
+              vm->def->name, vm->pid, flags);
     if (!virDomainObjIsActive(vm)) {
         VIR_DEBUG("VM '%s' not active", vm->def->name);
-        return;
+        return 0;
     }
-    /* This loop sends SIGTERM, then waits a few iterations
-     * (1.6 seconds) to see if it dies. If still alive then
-     * it does SIGKILL, and waits a few more iterations (1.6
-     * seconds more) to confirm that it has really gone.
+    /* This loop sends SIGTERM (or SIGKILL if flags has
+     * VIR_QEMU_PROCESS_KILL_FORCE and VIR_QEMU_PROCESS_KILL_NOWAIT),
+     * then waits a few iterations (3 seconds) to see if it
+     * dies. Halfway through this wait, if the qemu process still
+     * hasn't exited, and VIR_QEMU_PROCESS_KILL_FORCE is requested, a
+     * SIGKILL will be sent. Note that the FORCE mode could result
+     * in lost data in the guest, so it should only be used if the
+     * guest is hung and can't be destroyed in any other manner.
      */
-    for (i = 0 ; i < 15 ; i++) {
+    for (i = 0 ; i < 15; i++) {
         int signum;
-        if (i == 0)
-            signum = SIGTERM;
-        else if (i == 8)
-            signum = SIGKILL;
-        else
+        if (i == 0) {
+            if ((flags & VIR_QEMU_PROCESS_KILL_FORCE) &&
+                (flags & VIR_QEMU_PROCESS_KILL_NOWAIT)) {
+                signum = SIGKILL; /* nuke it immediately */
s/nuke/kill/ :-)
+                signame = "KILL";
+            } else {
+                signum = SIGTERM; /* kindly suggest it should exit */
+            }
+        } else if ((i == 8) && (flags & VIR_QEMU_PROCESS_KILL_FORCE)) {
+            VIR_WARN("Timed out waiting after SIG%s to process %d, "
+                     "sending SIGKILL", signame, vm->pid);
+            signum = SIGKILL; /* nuke it after a grace period */
idem :-)
+            signame = "KILL";
+        } else {
             signum = 0; /* Just check for existence */
+        }
         if (virKillProcess(vm->pid, signum) < 0) {
             if (errno != ESRCH) {
                 char ebuf[1024];
-                VIR_WARN("Failed to kill process %d %s",
-                         vm->pid, virStrerror(errno, ebuf, sizeof ebuf));
+                VIR_WARN("Failed to terminate process %d with SIG%s: %s",
+                         vm->pid, signame,
+                         virStrerror(errno, ebuf, sizeof ebuf));
+                return -1;
             }
-            break;
+            return 0; /* process is dead */
         }
-        if (i == 0 && gracefully)
-            break;
+        if (i == 0 && (flags & VIR_QEMU_PROCESS_KILL_NOWAIT))
+            return 0;
         usleep(200 * 1000);
     }
+    VIR_WARN("Timed out waiting after SIG%s to process %d", signame, vm->pid);
+    return -1;
 }
@@ -3659,7 +3679,7 @@ void qemuProcessStop(struct qemud_driver *driver, }
     /* shut it off for sure */
-    qemuProcessKill(vm, false);
+    ignore_value(qemuProcessKill(vm, VIR_QEMU_PROCESS_KILL_FORCE));
/* Stop autodestroy in case guest is restarted */ qemuProcessAutoDestroyRemove(driver, vm); diff --git a/src/qemu/qemu_process.h b/src/qemu/qemu_process.h index 062d79e..4ae6b4b 100644 --- a/src/qemu/qemu_process.h +++ b/src/qemu/qemu_process.h @@ -1,7 +1,7 @@ /* * qemu_process.c: QEMU process management * - * Copyright (C) 2006-2011 Red Hat, Inc. + * Copyright (C) 2006-2012 Red Hat, Inc. * * This library is free software; you can redistribute it and/or * modify it under the terms of the GNU Lesser General Public @@ -68,7 +68,12 @@ int qemuProcessAttach(virConnectPtr conn, virDomainChrSourceDefPtr monConfig, bool monJSON);
-void qemuProcessKill(virDomainObjPtr vm, bool gracefully);
+typedef enum {
+    VIR_QEMU_PROCESS_KILL_FORCE  = 1 << 0,
+    VIR_QEMU_PROCESS_KILL_NOWAIT = 1 << 1,
+} virQemuProcessKillMode;
+
+int qemuProcessKill(virDomainObjPtr vm, unsigned int flags);
int qemuProcessAutoDestroyInit(struct qemud_driver *driver); void qemuProcessAutoDestroyRun(struct qemud_driver *driver, -- 1.7.7.5
Except for the 2 nits, this looks fine to me.

ACK

Daniel

-- 
Daniel Veillard      | libxml Gnome XML XSLT toolkit  http://xmlsoft.org/
daniel@veillard.com  | Rpmfind RPM search engine  http://rpmfind.net/
http://veillard.com/ | virtualization library  http://libvirt.org/

On 02/03/2012 01:12 AM, Daniel Veillard wrote:
On Thu, Feb 02, 2012 at 12:54:28PM -0500, Laine Stump wrote:
When libvirt's virDomainDestroy API is shutting down the qemu process, it first sends SIGTERM, then waits for 1.6 seconds and, if it sees the process still there, sends a SIGKILL.
@@ -2233,10 +2233,22 @@ error: * This does not free the associated virDomainPtr object. * This function may require privileged access. * - * Calling this function with no @flags set (equal to zero) - * is equivalent to calling virDomainDestroy. Using virDomainShutdown() - * may produce cleaner results for the guest's disks, but depends on guest - * support. + * Calling this function with no @flags set (equal to zero) is + * equivalent to calling virDomainDestroy, and after a reasonable + * timeout will forcefully terminate the guest (e.g. SIGKILL) if + * necessary (which may produce undesirable results, for example + * unflushed disk cache in the guest). Including + * VIR_DOMAIN_DESTROY_GRACEFUL in the flags will prevent the forceful + * termination of the guest, and virDomainDestroyFlags will instead + * return an error if the guest doesn't terminate by the end of the + * timeout; at that time, the management application can decide if + * calling again without VIR_DOMAIN_DESTROY_GRACEFUL is appropriate.
Should we also be mentioning the possibility of data loss in the virDomainDestroy() documentation, as part of pointing to virDomainDestroyFlags() as the solution?
Except for the 2 nits that looks fine to me
ACK
This one can go in now - it is a strict enhancement. That's true whether or not we wait for more consensus on 2/2, where we are deciding whether changing the heuristics/timeout value is appropriate or backwards-incompatible.

-- 
Eric Blake   eblake@redhat.com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org

The current default method of terminating the qemu process is to send a SIGTERM, wait for up to 1.6 seconds for it to cleanly shut down, then send a SIGKILL and wait for up to 1.4 seconds more for the process to terminate. This is problematic because occasionally 1.6 seconds is not long enough for the qemu process to flush its disk buffers, so the guest's disk ends up in an inconsistent state.

Although a previous patch has provided a new flag to allow applications to alter this behavior, it will take time for applications to be updated to use this new flag, and since the fault for this inconsistent state lies solidly with libvirt, libvirt should be proactive about preventing the situation even before the applications can be updated.

Since this only occasionally happens when the timeout prior to SIGKILL is 1.6 seconds, this patch increases that timeout to 10 seconds. At the very least, this should reduce the occurrence from "occasionally" to "extremely rarely". (Once SIGKILL is sent, it waits another 5 seconds for the process to die before returning.)

Note that in the cases where it takes less than this for qemu to shut down cleanly, libvirt will *not* wait for any longer than it would without this patch - qemuProcessKill polls the process and returns as soon as it is gone.
---
 src/qemu/qemu_process.c |   18 ++++++++++--------
 1 files changed, 10 insertions(+), 8 deletions(-)

diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 1bbb55c..5044d76 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -3547,14 +3547,16 @@ int qemuProcessKill(virDomainObjPtr vm, unsigned int flags)
 
     /* This loop sends SIGTERM (or SIGKILL if flags has
      * VIR_QEMU_PROCESS_KILL_FORCE and VIR_QEMU_PROCESS_KILL_NOWAIT),
-     * then waits a few iterations (3 seconds) to see if it
-     * dies. Halfway through this wait, if the qemu process still
-     * hasn't exited, and VIR_QEMU_PROCESS_KILL_FORCE is requested, a
-     * SIGKILL will be sent. Note that the FORCE mode could result
-     * in lost data in the guest, so it should only be used if the
-     * guest is hung and can't be destroyed in any other manner.
+     * then waits a few iterations (10 seconds) to see if it dies. If
+     * the qemu process still hasn't exited, and
+     * VIR_QEMU_PROCESS_KILL_FORCE is requested, a SIGKILL will then
+     * be sent, and qemuProcessKill will wait up to 5 seconds more for
+     * the process to exit before returning. Note that the FORCE mode
+     * could result in lost data in the guest, so it should only be
+     * used if the guest is hung and can't be destroyed in any other
+     * manner.
      */
-    for (i = 0 ; i < 15; i++) {
+    for (i = 0 ; i < 75; i++) {
         int signum;
         if (i == 0) {
             if ((flags & VIR_QEMU_PROCESS_KILL_FORCE) &&
@@ -3564,7 +3566,7 @@ int qemuProcessKill(virDomainObjPtr vm, unsigned int flags)
             } else {
                 signum = SIGTERM; /* kindly suggest it should exit */
             }
-        } else if ((i == 8) && (flags & VIR_QEMU_PROCESS_KILL_FORCE)) {
+        } else if ((i == 50) && (flags & VIR_QEMU_PROCESS_KILL_FORCE)) {
             VIR_WARN("Timed out waiting after SIG%s to process %d, "
                      "sending SIGKILL", signame, vm->pid);
             signum = SIGKILL; /* nuke it after a grace period */
-- 
1.7.7.5
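For readers checking the numbers in the commit message: the loop sleeps 200 ms per iteration, SIGKILL is now sent at iteration 50 (was 8), and the loop runs 75 iterations in total (was 15). The arithmetic works out like this:

```python
# Arithmetic behind the new timeouts in qemuProcessKill.
INTERVAL_S = 0.2        # usleep(200 * 1000) per iteration
SIGKILL_ITERATION = 50  # was 8 before this patch
TOTAL_ITERATIONS = 75   # was 15 before this patch

# Grace period between SIGTERM and SIGKILL:
sigterm_grace = SIGKILL_ITERATION * INTERVAL_S
# Additional wait after SIGKILL before giving up:
sigkill_grace = (TOTAL_ITERATIONS - SIGKILL_ITERATION) * INTERVAL_S
# The old pre-SIGKILL grace period, for comparison:
old_grace = 8 * INTERVAL_S
```

That is, 10 seconds of SIGTERM grace (up from 1.6) plus 5 more after SIGKILL, matching the prose above.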

On Thu, Feb 02, 2012 at 12:54:29PM -0500, Laine Stump wrote:
The current default method of terminating the qemu process is to send a SIGTERM, wait for up to 1.6 seconds for it to cleanly shut down, then send a SIGKILL and wait for up to 1.4 seconds more for the process to terminate. This is problematic because occasionally 1.6 seconds is not long enough for the qemu process to flush its disk buffers, so the guest's disk ends up in an inconsistent state.
Although a previous patch has provided a new flag to allow applications to alter this behavior, it will take time for applications to be updated to use this new flag, and since the fault for this inconsistent state lies solidly with libvirt, libvirt should be proactive about preventing the situation even before the applications can be updated.
Since this only occasionally happens when the timeout prior to SIGKILL is 1.6 seconds, this patch increases that timeout to 10 seconds. At the very least, this should reduce the occurrence from "occasionally" to "extremely rarely". (Once SIGKILL is sent, it waits another 5 seconds for the process to die before returning).
Note that in the cases where it takes less than this for qemu to shut down cleanly, libvirt will *not* wait for any longer than it would without this patch - qemuProcessKill polls the process and returns as soon as it is gone.
---
 src/qemu/qemu_process.c |   18 ++++++++++--------
 1 files changed, 10 insertions(+), 8 deletions(-)
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c index 1bbb55c..5044d76 100644 --- a/src/qemu/qemu_process.c +++ b/src/qemu/qemu_process.c @@ -3547,14 +3547,16 @@ int qemuProcessKill(virDomainObjPtr vm, unsigned int flags)
/* This loop sends SIGTERM (or SIGKILL if flags has * VIR_QEMU_PROCESS_KILL_FORCE and VIR_QEMU_PROCESS_KILL_NOWAIT), - * then waits a few iterations (3 seconds) to see if it - * dies. Halfway through this wait, if the qemu process still - * hasn't exited, and VIR_QEMU_PROCESS_KILL_FORCE is requested, a - * SIGKILL will be sent. Note that the FORCE mode could result - * in lost data in the guest, so it should only be used if the - * guest is hung and can't be destroyed in any other manner. + * then waits a few iterations (10 seconds) to see if it dies. If + * the qemu process still hasn't exited, and + * VIR_QEMU_PROCESS_KILL_FORCE is requested, a SIGKILL will then + * be sent, and qemuProcessKill will wait up to 5 seconds more for + * the process to exit before returning. Note that the FORCE mode + * could result in lost data in the guest, so it should only be + * used if the guest is hung and can't be destroyed in any other + * manner. */ - for (i = 0 ; i < 15; i++) { + for (i = 0 ; i < 75; i++) { int signum; if (i == 0) { if ((flags & VIR_QEMU_PROCESS_KILL_FORCE) && @@ -3564,7 +3566,7 @@ int qemuProcessKill(virDomainObjPtr vm, unsigned int flags) } else { signum = SIGTERM; /* kindly suggest it should exit */ } - } else if ((i == 8) & (flags & VIR_QEMU_PROCESS_KILL_FORCE)) { + } else if ((i == 50) & (flags & VIR_QEMU_PROCESS_KILL_FORCE)) { VIR_WARN("Timed out waiting after SIG%s to process %d, " "sending SIGKILL", signame, vm->pid); signum = SIGKILL; /* nuke it after a grace period */
On the semantics of the patch, it does what it suggests, so ACK to this.

But that's unfortunately a pure heuristic: when the domain stops gracefully, there is no problem and this doesn't change anything. If the domain is doing intensive I/O, flushing buffers for example, the extra grace period may help, but there is absolutely no guarantee. On Linux we could try to be a bit smart and detect completely stuck guests by looking at rchar and wchar in /proc/$pid/io; if those don't move at all across the iterations we can probably consider it dead, and if they do move, we can be pretty sure that SIGKILL will lose data :-\

ACK to this heuristic attempt, but maybe a smarter algorithm is in order; I'm sure others will comment :-)

Daniel

-- 
Daniel Veillard      | libxml Gnome XML XSLT toolkit  http://xmlsoft.org/
daniel@veillard.com  | Rpmfind RPM search engine  http://rpmfind.net/
http://veillard.com/ | virtualization library  http://libvirt.org/
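The /proc-based idea floated here could be prototyped along these lines. This is a hedged, Linux-only sketch of the heuristic being described, not code from any posted patch; the `seems_stuck` helper and its parameters are illustrative, while the rchar/wchar field names come from /proc/<pid>/io.

```python
import time

def parse_proc_io(text):
    """Parse the 'key: value' lines of /proc/<pid>/io into a dict of ints."""
    stats = {}
    for line in text.splitlines():
        key, _, value = line.partition(":")
        if value.strip().isdigit():
            stats[key.strip()] = int(value.strip())
    return stats

def io_counters(pid):
    """Read rchar/wchar for a process (Linux only; may need privileges)."""
    with open("/proc/%d/io" % pid) as f:
        stats = parse_proc_io(f.read())
    return stats.get("rchar", 0), stats.get("wchar", 0)

def seems_stuck(pid, interval=1.0, samples=3):
    """Heuristic: if rchar/wchar never move across the samples, the
    process is probably hung and SIGKILL is unlikely to lose more data;
    if they do move, SIGKILL would likely discard unflushed writes."""
    first = io_counters(pid)
    for _ in range(samples):
        time.sleep(interval)
        if io_counters(pid) != first:
            return False  # still doing I/O
    return True
```

As noted in the review, this remains a heuristic: an idle-but-healthy guest also shows flat counters, so it can only justify, never prove, that a SIGKILL is safe.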

On 02/03/2012 01:24 AM, Daniel Veillard wrote:
On Thu, Feb 02, 2012 at 12:54:29PM -0500, Laine Stump wrote:
The current default method of terminating the qemu process is to send a SIGTERM, wait for up to 1.6 seconds for it to cleanly shut down, then send a SIGKILL and wait for up to 1.4 seconds more for the process to terminate. This is problematic because occasionally 1.6 seconds is not long enough for the qemu process to flush its disk buffers, so the guest's disk ends up in an inconsistent state.
On the semantic of the patch, it does what it suggest ACK to this
Agreed.
But that's unfortunately a pure heuristic: when the domain stops gracefully, there is no problem and this doesn't change anything. If the domain is doing intensive I/O, flushing buffers for example, the extra grace period may help, but there is absolutely no guarantee. On Linux we could try to be a bit smart and detect completely stuck guests by looking at rchar and wchar in /proc/$pid/io; if those don't move at all across the iterations we can probably consider it dead, and if they do move, we can be pretty sure that SIGKILL will lose data :-\
That would be a Linux-specific heuristic. It might even be worth adding more flags as we come up with more heuristics, to allow the user to control which heuristic to attempt (and fail if a particular heuristic is not supported for a particular host), but I think that those could be later patches.
ACK to this heuristic attempt, but maybe a smarter algorithm is in order; I'm sure others will comment :-)
I'm in favor of this patch going in now; as you argued, it is a no-op change in the common success case, and a reliability fix (even if slower) in the case where it would previously have given up too early, all to benefit applications that haven't yet been adjusted to take advantage of the new flags.

Speaking of applications that should take advantage of new flags, where's patch 3/2 to teach virsh destroy how to use the new flag?

--
Eric Blake   eblake@redhat.com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org

On 02/03/2012 10:06 AM, Eric Blake wrote:
On 02/03/2012 01:24 AM, Daniel Veillard wrote:
On Thu, Feb 02, 2012 at 12:54:29PM -0500, Laine Stump wrote:
The current default method of terminating the qemu process is to send a SIGTERM, wait for up to 1.6 seconds for it to shut down cleanly, then send a SIGKILL and wait for up to 1.4 seconds more for the process to terminate. This is problematic because occasionally 1.6 seconds is not long enough for the qemu process to flush its disk buffers, so the guest's disk ends up in an inconsistent state.
On the semantics of the patch, it does what it suggests, so ACK to this.
Agreed.
ACK to this heuristic attempt, but maybe a smarter algorithm is in order; I'm sure others will comment :-)
I'm in favor of this patch going in now; as you argued, it is a no-op change in the common success case, and a reliability fix (even if slower) in the case where it would have been giving up too early previously, all to benefit applications that haven't yet been adjusted to take advantage of the new flags.
Hmm, I've just had a second thought, looking back at this thread, which never got applied because Dan had some review comments that I did not have time to implement:
https://www.redhat.com/archives/libvir-list/2011-November/msg00243.html

Right now, we guarantee that we will time out in 3 seconds, but during those three seconds we hold the driver lock, which means that no other application can issue any command on any other VM managed by the same connection. If we are going to lengthen the timeout, then we also need to start thinking about dropping the driver lock for the duration of the wait - that is, operations on the VM being destroyed will be blocked (except for parallel attempts to destroy the same domain), but operations on other domains should not be blocked by the longer timeout.

Even if we don't resolve Dan's concern about subdividing the driver lock into more manageable pieces, we should at least resolve the problem of holding the driver lock for the entire virDomainDestroy operation before we lengthen the timeouts involved.

--
Eric Blake   eblake@redhat.com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org

This allows virsh to use the new VIR_DOMAIN_DESTROY_GRACEFUL flag for virDomainDestroyFlags.
---
 tools/virsh.c   |  7 ++++++-
 tools/virsh.pod |  6 +++++-
 2 files changed, 11 insertions(+), 2 deletions(-)

diff --git a/tools/virsh.c b/tools/virsh.c
index 42985a9..3478185 100644
--- a/tools/virsh.c
+++ b/tools/virsh.c
@@ -4264,6 +4264,7 @@ static const vshCmdInfo info_destroy[] = {

 static const vshCmdOptDef opts_destroy[] = {
     {"domain", VSH_OT_DATA, VSH_OFLAG_REQ, N_("domain name, id or uuid")},
+    {"graceful", VSH_OT_BOOL, VSH_OFLAG_NONE, N_("needs rawio capability")},
     {NULL, 0, 0, NULL}
 };

@@ -4273,6 +4274,7 @@ cmdDestroy(vshControl *ctl, const vshCmd *cmd)
     virDomainPtr dom;
     bool ret = true;
     const char *name;
+    unsigned int flags = 0;

     if (!vshConnectionUsability(ctl, ctl->conn))
         return false;
@@ -4280,7 +4282,10 @@ cmdDestroy(vshControl *ctl, const vshCmd *cmd)
     if (!(dom = vshCommandOptDomain(ctl, cmd, &name)))
         return false;

-    if (virDomainDestroy(dom) == 0) {
+    if (vshCommandOptBool(cmd, "graceful"))
+        flags |= VIR_DOMAIN_DESTROY_GRACEFUL;
+
+    if (virDomainDestroyFlags(dom, flags) == 0) {
         vshPrint(ctl, _("Domain %s destroyed\n"), name);
     } else {
         vshError(ctl, _("Failed to destroy domain %s"), name);
diff --git a/tools/virsh.pod b/tools/virsh.pod
index 10b51ff..a85da13 100644
--- a/tools/virsh.pod
+++ b/tools/virsh.pod
@@ -458,7 +458,7 @@ Flag I<--title> selects operation on the title field instead of description.
 If neither of I<--edit> and I<--new_desc> are specified the note or description
 is displayed instead of being modified.

-=item B<destroy> I<domain-id>
+=item B<destroy> I<domain-id> [I<--graceful>]

 Immediately terminate the domain domain-id.  This doesn't give the domain
 OS any chance to react, and it's the equivalent of ripping the power
@@ -472,6 +472,10 @@ be lost once the guest stops running, but the snapshot contents still
 exist, and a new domain with the same name and UUID can restore the
 snapshot metadata with B<snapshot-create>.

+If I<--graceful> is specified, don't resort to extreme measures
+(e.g. SIGKILL) when the guest doesn't stop after a reasonable timeout;
+return an error instead.
+
 =item B<domblkstat> I<domain> I<block-device> [I<--human>]

 Get device block stats for a running domain.  A I<block-device> corresponds
--
1.7.7.5

On 03.02.2012 20:16, Laine Stump wrote:
This allows virsh to use the new VIR_DOMAIN_DESTROY_GRACEFUL flag for virDomainDestroyFlags.
---
 tools/virsh.c   |  7 ++++++-
 tools/virsh.pod |  6 +++++-
 2 files changed, 11 insertions(+), 2 deletions(-)

diff --git a/tools/virsh.c b/tools/virsh.c
index 42985a9..3478185 100644
--- a/tools/virsh.c
+++ b/tools/virsh.c
@@ -4264,6 +4264,7 @@ static const vshCmdInfo info_destroy[] = {

 static const vshCmdOptDef opts_destroy[] = {
     {"domain", VSH_OT_DATA, VSH_OFLAG_REQ, N_("domain name, id or uuid")},
+    {"graceful", VSH_OT_BOOL, VSH_OFLAG_NONE, N_("needs rawio capability")},
     {NULL, 0, 0, NULL}
 };

@@ -4273,6 +4274,7 @@ cmdDestroy(vshControl *ctl, const vshCmd *cmd)
     virDomainPtr dom;
     bool ret = true;
     const char *name;
+    unsigned int flags = 0;

     if (!vshConnectionUsability(ctl, ctl->conn))
         return false;
@@ -4280,7 +4282,10 @@ cmdDestroy(vshControl *ctl, const vshCmd *cmd)
     if (!(dom = vshCommandOptDomain(ctl, cmd, &name)))
         return false;

-    if (virDomainDestroy(dom) == 0) {
+    if (vshCommandOptBool(cmd, "graceful"))
+        flags |= VIR_DOMAIN_DESTROY_GRACEFUL;
+
+    if (virDomainDestroyFlags(dom, flags) == 0) {
NACK. This would break communication with daemons that don't know virDomainDestroyFlags(). In fact, we need this:

    if (flags)
        virDomainDestroyFlags();
    else
        virDomainDestroy();

This allows virsh to use the new VIR_DOMAIN_DESTROY_GRACEFUL flag for virDomainDestroyFlags.
---
Okay, how about this one :-)

 tools/virsh.c   |  9 ++++++++-
 tools/virsh.pod |  6 +++++-
 2 files changed, 13 insertions(+), 2 deletions(-)

diff --git a/tools/virsh.c b/tools/virsh.c
index 42985a9..6a151e6 100644
--- a/tools/virsh.c
+++ b/tools/virsh.c
@@ -4264,6 +4264,7 @@ static const vshCmdInfo info_destroy[] = {

 static const vshCmdOptDef opts_destroy[] = {
     {"domain", VSH_OT_DATA, VSH_OFLAG_REQ, N_("domain name, id or uuid")},
+    {"graceful", VSH_OT_BOOL, VSH_OFLAG_NONE, N_("needs rawio capability")},
     {NULL, 0, 0, NULL}
 };

@@ -4273,6 +4274,7 @@ cmdDestroy(vshControl *ctl, const vshCmd *cmd)
     virDomainPtr dom;
     bool ret = true;
     const char *name;
+    int result;

     if (!vshConnectionUsability(ctl, ctl->conn))
         return false;
@@ -4280,7 +4282,12 @@ cmdDestroy(vshControl *ctl, const vshCmd *cmd)
     if (!(dom = vshCommandOptDomain(ctl, cmd, &name)))
         return false;

-    if (virDomainDestroy(dom) == 0) {
+    if (vshCommandOptBool(cmd, "graceful"))
+        result = virDomainDestroyFlags(dom, VIR_DOMAIN_DESTROY_GRACEFUL);
+    else
+        result = virDomainDestroy(dom);
+
+    if (result == 0) {
         vshPrint(ctl, _("Domain %s destroyed\n"), name);
     } else {
         vshError(ctl, _("Failed to destroy domain %s"), name);
diff --git a/tools/virsh.pod b/tools/virsh.pod
index 10b51ff..a85da13 100644
--- a/tools/virsh.pod
+++ b/tools/virsh.pod
@@ -458,7 +458,7 @@ Flag I<--title> selects operation on the title field instead of description.
 If neither of I<--edit> and I<--new_desc> are specified the note or description
 is displayed instead of being modified.

-=item B<destroy> I<domain-id>
+=item B<destroy> I<domain-id> [I<--graceful>]

 Immediately terminate the domain domain-id.  This doesn't give the domain
 OS any chance to react, and it's the equivalent of ripping the power
@@ -472,6 +472,10 @@ be lost once the guest stops running, but the snapshot contents still
 exist, and a new domain with the same name and UUID can restore the
 snapshot metadata with B<snapshot-create>.

+If I<--graceful> is specified, don't resort to extreme measures
+(e.g. SIGKILL) when the guest doesn't stop after a reasonable timeout;
+return an error instead.
+
 =item B<domblkstat> I<domain> I<block-device> [I<--human>]

 Get device block stats for a running domain.  A I<block-device> corresponds
--
1.7.7.5

On 03.02.2012 20:34, Laine Stump wrote:
This allows virsh to use the new VIR_DOMAIN_DESTROY_GRACEFUL flag for virDomainDestroyFlags.
---
Okay, how about this one :-)

 tools/virsh.c   |  9 ++++++++-
 tools/virsh.pod |  6 +++++-
 2 files changed, 13 insertions(+), 2 deletions(-)

diff --git a/tools/virsh.c b/tools/virsh.c
index 42985a9..6a151e6 100644
--- a/tools/virsh.c
+++ b/tools/virsh.c
@@ -4264,6 +4264,7 @@ static const vshCmdInfo info_destroy[] = {

 static const vshCmdOptDef opts_destroy[] = {
     {"domain", VSH_OT_DATA, VSH_OFLAG_REQ, N_("domain name, id or uuid")},
+    {"graceful", VSH_OT_BOOL, VSH_OFLAG_NONE, N_("needs rawio capability")},
     {NULL, 0, 0, NULL}
 };

@@ -4273,6 +4274,7 @@ cmdDestroy(vshControl *ctl, const vshCmd *cmd)
     virDomainPtr dom;
     bool ret = true;
     const char *name;
+    int result;
I tend to define "unsigned int flags = 0;" here, even when it will be used for only one flag right now, since it simplifies patches that add a new one.
     if (!vshConnectionUsability(ctl, ctl->conn))
         return false;
@@ -4280,7 +4282,12 @@ cmdDestroy(vshControl *ctl, const vshCmd *cmd)
     if (!(dom = vshCommandOptDomain(ctl, cmd, &name)))
         return false;

-    if (virDomainDestroy(dom) == 0) {
+    if (vshCommandOptBool(cmd, "graceful"))
+        result = virDomainDestroyFlags(dom, VIR_DOMAIN_DESTROY_GRACEFUL);
+    else
+        result = virDomainDestroy(dom);
+
+    if (result == 0) {
So this chunk will look like:

    if (vshCommandOptBool(cmd, "graceful"))
        flags |= VIR_DOMAIN_DESTROY_GRACEFUL;

    if (flags)
        result = virDomainDestroyFlags(dom, flags);
    else
        result = virDomainDestroy(dom);

    if (result == 0) {

But this is not a show stopper for me.
         vshPrint(ctl, _("Domain %s destroyed\n"), name);
     } else {
         vshError(ctl, _("Failed to destroy domain %s"), name);
diff --git a/tools/virsh.pod b/tools/virsh.pod
index 10b51ff..a85da13 100644
--- a/tools/virsh.pod
+++ b/tools/virsh.pod
@@ -458,7 +458,7 @@ Flag I<--title> selects operation on the title field instead of description.
 If neither of I<--edit> and I<--new_desc> are specified the note or description
 is displayed instead of being modified.

-=item B<destroy> I<domain-id>
+=item B<destroy> I<domain-id> [I<--graceful>]

 Immediately terminate the domain domain-id.  This doesn't give the domain
 OS any chance to react, and it's the equivalent of ripping the power
@@ -472,6 +472,10 @@ be lost once the guest stops running, but the snapshot contents still
 exist, and a new domain with the same name and UUID can restore the
 snapshot metadata with B<snapshot-create>.

+If I<--graceful> is specified, don't resort to extreme measures
+(e.g. SIGKILL) when the guest doesn't stop after a reasonable timeout;
+return an error instead.
+
 =item B<domblkstat> I<domain> I<block-device> [I<--human>]

 Get device block stats for a running domain.  A I<block-device> corresponds
Nice, you haven't forgotten the documentation. Overall, ACK, regardless of whether you change that nit or not.

Michal

On 02/03/2012 02:44 PM, Michal Privoznik wrote:
On 03.02.2012 20:34, Laine Stump wrote:
This allows virsh to use the new VIR_DOMAIN_DESTROY_GRACEFUL flag for virDomainDestroyFlags.
---
Okay, how about this one :-)
tools/virsh.c | 9 ++++++++- tools/virsh.pod | 6 +++++- 2 files changed, 13 insertions(+), 2 deletions(-)
diff --git a/tools/virsh.c b/tools/virsh.c index 42985a9..6a151e6 100644 --- a/tools/virsh.c +++ b/tools/virsh.c @@ -4264,6 +4264,7 @@ static const vshCmdInfo info_destroy[] = {
static const vshCmdOptDef opts_destroy[] = { {"domain", VSH_OT_DATA, VSH_OFLAG_REQ, N_("domain name, id or uuid")}, + {"graceful", VSH_OT_BOOL, VSH_OFLAG_NONE, N_("needs rawio capability")}, {NULL, 0, 0, NULL} };
@@ -4273,6 +4274,7 @@ cmdDestroy(vshControl *ctl, const vshCmd *cmd)
     virDomainPtr dom;
     bool ret = true;
     const char *name;
+    int result;

I tend to define "unsigned int flags = 0;" here, even when it will be used for only one flag right now, since it simplifies patches that add a new one.
     if (!vshConnectionUsability(ctl, ctl->conn))
         return false;
@@ -4280,7 +4282,12 @@ cmdDestroy(vshControl *ctl, const vshCmd *cmd)
     if (!(dom = vshCommandOptDomain(ctl, cmd, &name)))
         return false;

-    if (virDomainDestroy(dom) == 0) {
+    if (vshCommandOptBool(cmd, "graceful"))
+        result = virDomainDestroyFlags(dom, VIR_DOMAIN_DESTROY_GRACEFUL);
+    else
+        result = virDomainDestroy(dom);
+
+    if (result == 0) {

So this chunk will look like:
if (vshCommandOptBool(cmd, "graceful")) flags |= VIR_DOMAIN_DESTROY_GRACEFUL;
if (flags) result = virDomainDestroyFlags(dom, flags); else result = virDomainDestroy(dom);
if (result == 0) {
But this is not a show stopper for me.
Yeah, I agree, especially since we're likely to add another flag to virDomainDestroyFlags soon. I switched it back to having a flags variable, made sure it still worked, and pushed. Thanks!

On 02/03/2012 12:34 PM, Laine Stump wrote:
This allows virsh to use the new VIR_DOMAIN_DESTROY_GRACEFUL flag for virDomainDestroyFlags.
---
Okay, how about this one :-)
tools/virsh.c | 9 ++++++++- tools/virsh.pod | 6 +++++- 2 files changed, 13 insertions(+), 2 deletions(-)
diff --git a/tools/virsh.c b/tools/virsh.c index 42985a9..6a151e6 100644 --- a/tools/virsh.c +++ b/tools/virsh.c @@ -4264,6 +4264,7 @@ static const vshCmdInfo info_destroy[] = {
static const vshCmdOptDef opts_destroy[] = { {"domain", VSH_OT_DATA, VSH_OFLAG_REQ, N_("domain name, id or uuid")}, + {"graceful", VSH_OT_BOOL, VSH_OFLAG_NONE, N_("needs rawio capability")},
In addition to Michal's review, fix your extra copy-and-paste.

--
Eric Blake   eblake@redhat.com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org

On 02/03/2012 02:52 PM, Eric Blake wrote:
On 02/03/2012 12:34 PM, Laine Stump wrote:
+ {"graceful", VSH_OT_BOOL, VSH_OFLAG_NONE, N_("needs rawio capability")}, In addition to Michal's review, fix your extra copy-and-paste.
Yep, did that too. Thanks!