[libvirt] [PATCH] snapshot: forbid snapshot on autodestroy domain

There is no reason to forbid pausing an autodestroy domain (not to mention that 'virsh start --paused --autodestroy' succeeds in creating a paused autodestroy domain).

Meanwhile, qemu was failing to enforce the API documentation that autodestroy domains cannot be saved. And while the original documentation only mentioned save/restore, snapshots are another form of saving that are close enough in semantics as to make no sense on one-shot domains.

* src/qemu/qemu_driver.c (qemudDomainSuspend): Drop bogus check.
(qemuDomainSaveInternal, qemuDomainSnapshotCreateXML): Forbid
saves of autodestroy domains.
* src/libvirt.c (virDomainCreateWithFlags, virDomainCreateXML):
Document snapshot interaction.
---

Sending this one separately for v1, but I'll insert it into the front of my snapshot series (before 1/43) when I post v4 of that.

 src/libvirt.c          |    4 ++--
 src/qemu/qemu_driver.c |   18 ++++++++++++------
 2 files changed, 14 insertions(+), 8 deletions(-)

diff --git a/src/libvirt.c b/src/libvirt.c
index 65a099b..711580e 100644
--- a/src/libvirt.c
+++ b/src/libvirt.c
@@ -1822,7 +1822,7 @@ virDomainGetConnect (virDomainPtr dom)
  * object is finally released. This will also happen if the
  * client application crashes / loses its connection to the
  * libvirtd daemon. Any domains marked for auto destroy will
- * block attempts at migration or save-to-file
+ * block attempts at migration, save-to-file, or snapshots.
  *
  * Returns a new domain object or NULL in case of failure
  */
@@ -7073,7 +7073,7 @@ error:
  * object is finally released. This will also happen if the
  * client application crashes / loses its connection to the
  * libvirtd daemon. Any domains marked for auto destroy will
- * block attempts at migration or save-to-file
+ * block attempts at migration, save-to-file, or snapshots.
  *
  * If the VIR_DOMAIN_START_BYPASS_CACHE flag is set, and there is a
  * managed save file for this domain (created by virDomainManagedSave()),
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index f21122d..737eec8 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -1360,12 +1360,6 @@ static int qemudDomainSuspend(virDomainPtr dom) {
         goto cleanup;
     }
 
-    if (qemuProcessAutoDestroyActive(driver, vm)) {
-        qemuReportError(VIR_ERR_OPERATION_INVALID,
-                        "%s", _("domain is marked for auto destroy"));
-        goto cleanup;
-    }
-
     priv = vm->privateData;
 
     if (priv->job.asyncJob == QEMU_ASYNC_JOB_MIGRATION_OUT) {
@@ -2225,6 +2219,12 @@ qemuDomainSaveInternal(struct qemud_driver *driver, virDomainPtr dom,
     int directFlag = 0;
     virFileDirectFdPtr directFd = NULL;
 
+    if (qemuProcessAutoDestroyActive(driver, vm)) {
+        qemuReportError(VIR_ERR_OPERATION_INVALID,
+                        "%s", _("domain is marked for auto destroy"));
+        return -1;
+    }
+
     memset(&header, 0, sizeof(header));
     memcpy(header.magic, QEMUD_SAVE_MAGIC, sizeof(header.magic));
     header.version = QEMUD_SAVE_VERSION;
@@ -8471,6 +8471,12 @@ static virDomainSnapshotPtr qemuDomainSnapshotCreateXML(virDomainPtr domain,
         goto cleanup;
     }
 
+    if (qemuProcessAutoDestroyActive(driver, vm)) {
+        qemuReportError(VIR_ERR_OPERATION_INVALID,
+                        "%s", _("domain is marked for auto destroy"));
+        goto cleanup;
+    }
+
     /* in a perfect world, we would allow qemu to tell us this. The problem
      * is that qemu only does this check device-by-device; so if you had a
      * domain that booted from a large qcow2 device, but had a secondary raw
--
1.7.4.4
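For context, here is a minimal client-side sketch of how the new behavior is expected to surface through the public libvirt C API. It is not part of the patch: the connection URI and the domain XML are placeholders, error handling is trimmed, and it assumes a libvirt build that contains this change. After the patch, starting a guest with both VIR_DOMAIN_START_PAUSED and VIR_DOMAIN_START_AUTODESTROY still works, resuming it still works, but virDomainSnapshotCreateXML() should fail with VIR_ERR_OPERATION_INVALID, just like save-to-file and migration.

#include <stdio.h>
#include <libvirt/libvirt.h>
#include <libvirt/virterror.h>

int main(void)
{
    /* Placeholders; substitute a real URI and guest definition. */
    const char *uri = "qemu:///session";
    const char *domain_xml = "...";

    virConnectPtr conn = virConnectOpen(uri);
    if (!conn)
        return 1;

    /* One-shot guest: paused, and tied to the lifetime of this connection. */
    virDomainPtr dom = virDomainCreateXML(conn, domain_xml,
                                          VIR_DOMAIN_START_PAUSED |
                                          VIR_DOMAIN_START_AUTODESTROY);
    if (!dom) {
        virConnectClose(conn);
        return 1;
    }

    /* Resuming (and later re-suspending) an autodestroy domain is allowed. */
    if (virDomainResume(dom) < 0)
        fprintf(stderr, "resume unexpectedly failed\n");

    /* Snapshots, like save-to-file and migration, are rejected. */
    virDomainSnapshotPtr snap =
        virDomainSnapshotCreateXML(dom, "<domainsnapshot/>", 0);
    if (!snap) {
        virErrorPtr err = virGetLastError();
        if (err && err->code == VIR_ERR_OPERATION_INVALID)
            fprintf(stderr, "snapshot refused: %s\n", err->message);
    } else {
        virDomainSnapshotFree(snap);  /* not expected to be reached */
    }

    virDomainFree(dom);      /* frees the handle only ... */
    virConnectClose(conn);   /* ... the guest is auto-destroyed when the
                              * registering connection goes away */
    return 0;
}

With a libvirt that predates this patch, the snapshot call could succeed even though the guest would later be auto-destroyed, which is exactly the inconsistency the patch closes.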

On Fri, Aug 26, 2011 at 05:35:34PM -0600, Eric Blake wrote:
There is no reason to forbid pausing an autodestroy domain (not to mention that 'virsh start --paused --autodestroy' succeeds in creating a paused autodestroy domain).
Meanwhile, qemu was failing to enforce the API documentation that autodestroy domains cannot be saved. And while the original documentation only mentioned save/restore, snapshots are another form of saving that are close enough in semantics as to make no sense on one-shot domains.
* src/qemu/qemu_driver.c (qemudDomainSuspend): Drop bogus check.
(qemuDomainSaveInternal, qemuDomainSnapshotCreateXML): Forbid
saves of autodestroy domains.
* src/libvirt.c (virDomainCreateWithFlags, virDomainCreateXML):
Document snapshot interaction.
---
Sending this one separately for v1, but I'll insert it into the front of my snapshot series (before 1/43) when I post v4 of that.
ACK,

Daniel

--
Daniel Veillard      | libxml Gnome XML XSLT toolkit  http://xmlsoft.org/
daniel@veillard.com  | Rpmfind RPM search engine http://rpmfind.net/
http://veillard.com/ | virtualization library  http://libvirt.org/

On 08/30/2011 04:46 AM, Daniel Veillard wrote:
On Fri, Aug 26, 2011 at 05:35:34PM -0600, Eric Blake wrote:
Sending this one separately for v1, but I'll insert it into the front of my snapshot series (before 1/43) when I post v4 of that.
Or not - I'm still working on v4 of my series, but will push this one now.
ACK,
Thanks; pushed.

--
Eric Blake   eblake@redhat.com    +1-801-349-2682
Libvirt virtualization library http://libvirt.org