[libvirt] [PATCH v2 00/25] Incremental backup support for qemu

Next version which includes feedback from V1:

https://www.redhat.com/archives/libvir-list/2019-November/msg01315.html

and also a few features and bug fixes based on offline requests:

- The flag VIR_DOMAIN_BACKUP_BEGIN_REUSE_EXTERNAL was added to facilitate
  users who wish to provide their own files.
- The schema was fixed as many legitimate uses were not described:
  - format for the scratch file was not supported
  - security labels for the scratch file were not supported
  - tests were insufficient
- Backup XML2XML testing was added.
- Scratch files created by libvirt are now removed after the job finishes.
- A domain capability feature entry was added.
- The code for determining bitmaps for incremental backup was slightly
  optimized.
- The documentation now describes our behaviour towards the scratch file
  and the relationship to the new flag.

I might have forgotten to apply some Reviewed-by tags; I'm sorry for that.
Most patches changed (including the API patches which add the flag), so a
review is welcome even there.

You can fetch the new version at:

  git fetch https://gitlab.com/pipo.sk/libvirt.git blockdev-backup-v2

Note that the branch also contains a commit to enable block commit for
easier testing. With the posted code the following approach can be used to
enable it (new qemu required):

<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
...
  <qemu:capabilities>
    <qemu:add capability='incremental-backup'/>
  </qemu:capabilities>
</domain>

Eric Blake (5):
  backup: Document new XML for backups
  backup: Introduce virDomainBackup APIs
  backup: Implement backup APIs for remote driver
  backup: Parse and output backup XML
  backup: Implement virsh support for backup

Peter Krempa (20):
  qemu: domain: Export qemuDomainGetImageIds
  API: Introduce field for reporting temporary disk space usage of a domain job
  virsh: Implement VIR_DOMAIN_JOB_DISK_TEMP_(USED|TOTAL) in cmdDomjobinfo
  API: Add domain job operation for backups
  tests: genericxml2xml: Add testing of backup XML files
  qemu: Add infrastructure for statistics of a backup job
  qemu: domain: Introduce QEMU_ASYNC_JOB_BACKUP async job type
  Add 'backup' block job type
  qemu: monitor: Add support for blockdev-backup via 'transaction'
  qemu: domain: Track backup job data in the status XML
  qemu: blockjob: Track internal data for 'backup' blockjob
  tests: qemustatusxml2xml: Add test for 'pull' type backup job
  conf: backup: Add fields for tracking stats of completed sub-jobs
  doc: Document quirk of getting block job info for a 'backup' blockjob
  qemu: Implement backup job APIs and qemu handling
  qemu: backup: Implement stats gathering while the job is running
  qemu: driver: Allow cancellation of the backup job
  qemu: blockjob: Implement concluded blockjob handler for backup blockjobs
  conf: domaincaps: Add 'backup' feature flag
  qemu: Add support for VIR_DOMAIN_CAPS_FEATURE_BACKUP

 docs/docs.html.in                 |    3 +-
 docs/format.html.in               |    1 +
 docs/formatbackup.html.in         |  175 +++
 docs/formatcheckpoint.html.in     |   12 +-
 docs/formatdomaincaps.html.in     |    8 +
 docs/index.html.in                |    3 +-
 docs/schemas/domainbackup.rng     |  223 ++++
 docs/schemas/domaincaps.rng       |    9 +
 examples/c/misc/event-test.c      |    3 +
 include/libvirt/libvirt-domain.h  |   37 +-
 libvirt.spec.in                   |    1 +
 mingw-libvirt.spec.in             |    2 +
 po/POTFILES.in                    |    3 +
 src/conf/Makefile.inc.am          |    2 +
 src/conf/backup_conf.c            |  499 ++++++++
 src/conf/backup_conf.h            |  108 ++
 src/conf/domain_capabilities.c    |    1 +
 src/conf/domain_capabilities.h    |    1 +
 src/conf/domain_conf.c            |    2 +-
 src/conf/virconftypes.h           |    3 +
 src/driver-hypervisor.h           |   12 +
 src/libvirt-domain-checkpoint.c   |    7 +-
 src/libvirt-domain.c              |  147 +++
 src/libvirt_private.syms          |    8 +
 src/libvirt_public.syms           |    6 +
 src/qemu/Makefile.inc.am          |    2 +
 src/qemu/qemu_backup.c            | 1039 +++++++++++++++++
 src/qemu/qemu_backup.h            |   46 +
 src/qemu/qemu_blockjob.c          |  111 +-
 src/qemu/qemu_blockjob.h          |   19 +
 src/qemu/qemu_capabilities.c      |    1 +
 src/qemu/qemu_domain.c            |  150 ++-
 src/qemu/qemu_domain.h            |   21 +
 src/qemu/qemu_driver.c            |   68 +-
 src/qemu/qemu_migration.c         |    2 +
 src/qemu/qemu_monitor.c           |   13 +
 src/qemu/qemu_monitor.h           |   15 +
 src/qemu/qemu_monitor_json.c      |   33 +
 src/qemu/qemu_monitor_json.h      |    8 +
 src/qemu/qemu_process.c           |   25 +
 src/remote/remote_driver.c        |    2 +
 src/remote/remote_protocol.x      |   33 +-
 src/remote_protocol-structs       |   15 +
 tests/Makefile.am                 |    2 +
 .../backup-pull-seclabel.xml      |   18 +
 tests/domainbackupxml2xmlin/backup-pull.xml   |   10 +
 .../backup-push-seclabel.xml      |   17 +
 tests/domainbackupxml2xmlin/backup-push.xml   |   10 +
 tests/domainbackupxml2xmlin/empty.xml         |    1 +
 .../backup-pull-seclabel.xml      |   18 +
 tests/domainbackupxml2xmlout/backup-pull.xml  |   10 +
 .../backup-push-seclabel.xml      |   17 +
 tests/domainbackupxml2xmlout/backup-push.xml  |   10 +
 tests/domainbackupxml2xmlout/empty.xml        |    1 +
 .../domaincapsdata/qemu_1.5.3-q35.x86_64.xml  |    1 +
 .../domaincapsdata/qemu_1.5.3-tcg.x86_64.xml  |    1 +
 tests/domaincapsdata/qemu_1.5.3.x86_64.xml    |    1 +
 .../domaincapsdata/qemu_1.6.0-q35.x86_64.xml  |    1 +
 .../domaincapsdata/qemu_1.6.0-tcg.x86_64.xml  |    1 +
 tests/domaincapsdata/qemu_1.6.0.x86_64.xml    |    1 +
 .../domaincapsdata/qemu_1.7.0-q35.x86_64.xml  |    1 +
 .../domaincapsdata/qemu_1.7.0-tcg.x86_64.xml  |    1 +
 tests/domaincapsdata/qemu_1.7.0.x86_64.xml    |    1 +
 .../domaincapsdata/qemu_2.1.1-q35.x86_64.xml  |    1 +
 .../domaincapsdata/qemu_2.1.1-tcg.x86_64.xml  |    1 +
 tests/domaincapsdata/qemu_2.1.1.x86_64.xml    |    1 +
 .../domaincapsdata/qemu_2.10.0-q35.x86_64.xml |    1 +
 .../domaincapsdata/qemu_2.10.0-tcg.x86_64.xml |    1 +
 .../qemu_2.10.0-virt.aarch64.xml              |    1 +
 tests/domaincapsdata/qemu_2.10.0.aarch64.xml  |    1 +
 tests/domaincapsdata/qemu_2.10.0.ppc64.xml    |    1 +
 tests/domaincapsdata/qemu_2.10.0.s390x.xml    |    1 +
 tests/domaincapsdata/qemu_2.10.0.x86_64.xml   |    1 +
 .../domaincapsdata/qemu_2.11.0-q35.x86_64.xml |    1 +
 .../domaincapsdata/qemu_2.11.0-tcg.x86_64.xml |    1 +
 tests/domaincapsdata/qemu_2.11.0.s390x.xml    |    1 +
 tests/domaincapsdata/qemu_2.11.0.x86_64.xml   |    1 +
 .../domaincapsdata/qemu_2.12.0-q35.x86_64.xml |    1 +
 .../domaincapsdata/qemu_2.12.0-tcg.x86_64.xml |    1 +
 .../qemu_2.12.0-virt.aarch64.xml              |    1 +
 tests/domaincapsdata/qemu_2.12.0.aarch64.xml  |    1 +
 tests/domaincapsdata/qemu_2.12.0.ppc64.xml    |    1 +
 tests/domaincapsdata/qemu_2.12.0.s390x.xml    |    1 +
 tests/domaincapsdata/qemu_2.12.0.x86_64.xml   |    1 +
 .../domaincapsdata/qemu_2.4.0-q35.x86_64.xml  |    1 +
 .../domaincapsdata/qemu_2.4.0-tcg.x86_64.xml  |    1 +
 tests/domaincapsdata/qemu_2.4.0.x86_64.xml    |    1 +
 .../domaincapsdata/qemu_2.5.0-q35.x86_64.xml  |    1 +
 .../domaincapsdata/qemu_2.5.0-tcg.x86_64.xml  |    1 +
 tests/domaincapsdata/qemu_2.5.0.x86_64.xml    |    1 +
 .../domaincapsdata/qemu_2.6.0-q35.x86_64.xml  |    1 +
 .../domaincapsdata/qemu_2.6.0-tcg.x86_64.xml  |    1 +
 .../qemu_2.6.0-virt.aarch64.xml               |    1 +
 tests/domaincapsdata/qemu_2.6.0.aarch64.xml   |    1 +
 tests/domaincapsdata/qemu_2.6.0.ppc64.xml     |    1 +
 tests/domaincapsdata/qemu_2.6.0.x86_64.xml    |    1 +
 .../domaincapsdata/qemu_2.7.0-q35.x86_64.xml  |    1 +
 .../domaincapsdata/qemu_2.7.0-tcg.x86_64.xml  |    1 +
 tests/domaincapsdata/qemu_2.7.0.s390x.xml     |    1 +
 tests/domaincapsdata/qemu_2.7.0.x86_64.xml    |    1 +
 .../domaincapsdata/qemu_2.8.0-q35.x86_64.xml  |    1 +
 .../domaincapsdata/qemu_2.8.0-tcg.x86_64.xml  |    1 +
 tests/domaincapsdata/qemu_2.8.0.s390x.xml     |    1 +
 tests/domaincapsdata/qemu_2.8.0.x86_64.xml    |    1 +
 .../domaincapsdata/qemu_2.9.0-q35.x86_64.xml  |    1 +
 .../domaincapsdata/qemu_2.9.0-tcg.x86_64.xml  |    1 +
 tests/domaincapsdata/qemu_2.9.0.ppc64.xml     |    1 +
 tests/domaincapsdata/qemu_2.9.0.s390x.xml     |    1 +
 tests/domaincapsdata/qemu_2.9.0.x86_64.xml    |    1 +
 .../domaincapsdata/qemu_3.0.0-q35.x86_64.xml  |    1 +
 .../domaincapsdata/qemu_3.0.0-tcg.x86_64.xml  |    1 +
 tests/domaincapsdata/qemu_3.0.0.ppc64.xml     |    1 +
 tests/domaincapsdata/qemu_3.0.0.s390x.xml     |    1 +
 tests/domaincapsdata/qemu_3.0.0.x86_64.xml    |    1 +
 .../domaincapsdata/qemu_3.1.0-q35.x86_64.xml  |    1 +
 .../domaincapsdata/qemu_3.1.0-tcg.x86_64.xml  |    1 +
 tests/domaincapsdata/qemu_3.1.0.ppc64.xml     |    1 +
 tests/domaincapsdata/qemu_3.1.0.x86_64.xml    |    1 +
 .../domaincapsdata/qemu_4.0.0-q35.x86_64.xml  |    1 +
 .../domaincapsdata/qemu_4.0.0-tcg.x86_64.xml  |    1 +
 .../qemu_4.0.0-virt.aarch64.xml               |    1 +
 tests/domaincapsdata/qemu_4.0.0.aarch64.xml   |    1 +
 tests/domaincapsdata/qemu_4.0.0.ppc64.xml     |    1 +
 tests/domaincapsdata/qemu_4.0.0.s390x.xml     |    1 +
 tests/domaincapsdata/qemu_4.0.0.x86_64.xml    |    1 +
 .../domaincapsdata/qemu_4.1.0-q35.x86_64.xml  |    1 +
 .../domaincapsdata/qemu_4.1.0-tcg.x86_64.xml  |    1 +
 tests/domaincapsdata/qemu_4.1.0.x86_64.xml    |    1 +
 .../domaincapsdata/qemu_4.2.0-q35.x86_64.xml  |    1 +
 .../domaincapsdata/qemu_4.2.0-tcg.x86_64.xml  |    1 +
 .../qemu_4.2.0-virt.aarch64.xml               |    1 +
 tests/domaincapsdata/qemu_4.2.0.aarch64.xml   |    1 +
 tests/domaincapsdata/qemu_4.2.0.ppc64.xml     |    1 +
 tests/domaincapsdata/qemu_4.2.0.s390x.xml     |    1 +
 tests/domaincapsdata/qemu_4.2.0.x86_64.xml    |    1 +
 tests/genericxml2xmltest.c                    |   46 +
 tests/qemumonitorjsontest.c                   |    8 +-
 .../qemustatusxml2xmldata/backup-pull-in.xml  |  608 ++++++++++
 .../qemustatusxml2xmldata/backup-pull-out.xml |    1 +
 tests/qemuxml2xmltest.c                       |    2 +
 tests/virschematest.c                         |    2 +
 tools/Makefile.am                             |    1 +
 tools/virsh-backup.c                          |  151 +++
 tools/virsh-backup.h                          |   21 +
 tools/virsh-domain.c                          |   26 +-
 tools/virsh.c                                 |    2 +
 tools/virsh.h                                 |    1 +
 tools/virsh.pod                               |   35 +
 148 files changed, 3952 insertions(+), 26 deletions(-)
 create mode 100644 docs/formatbackup.html.in
 create mode 100644 docs/schemas/domainbackup.rng
 create mode 100644 src/conf/backup_conf.c
 create mode 100644 src/conf/backup_conf.h
 create mode 100644 src/qemu/qemu_backup.c
 create mode 100644 src/qemu/qemu_backup.h
 create mode 100644 tests/domainbackupxml2xmlin/backup-pull-seclabel.xml
 create mode 100644 tests/domainbackupxml2xmlin/backup-pull.xml
 create mode 100644 tests/domainbackupxml2xmlin/backup-push-seclabel.xml
 create mode 100644 tests/domainbackupxml2xmlin/backup-push.xml
 create mode 100644 tests/domainbackupxml2xmlin/empty.xml
 create mode 100644 tests/domainbackupxml2xmlout/backup-pull-seclabel.xml
 create mode 100644 tests/domainbackupxml2xmlout/backup-pull.xml
 create mode 100644 tests/domainbackupxml2xmlout/backup-push-seclabel.xml
 create mode 100644 tests/domainbackupxml2xmlout/backup-push.xml
 create mode 100644 tests/domainbackupxml2xmlout/empty.xml
 create mode 100644 tests/qemustatusxml2xmldata/backup-pull-in.xml
 create mode 120000 tests/qemustatusxml2xmldata/backup-pull-out.xml
 create mode 100644 tools/virsh-backup.c
 create mode 100644 tools/virsh-backup.h

--
2.23.0

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
---
 src/qemu/qemu_domain.c | 2 +-
 src/qemu/qemu_domain.h | 7 +++++++
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index 470d342afc..8d2923300d 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -10240,7 +10240,7 @@ qemuDomainCleanupRun(virQEMUDriverPtr driver,
     priv->ncleanupCallbacks_max = 0;
 }

-static void
+void
 qemuDomainGetImageIds(virQEMUDriverConfigPtr cfg,
                       virDomainObjPtr vm,
                       virStorageSourcePtr src,
diff --git a/src/qemu/qemu_domain.h b/src/qemu/qemu_domain.h
index f626d3a54c..608546a27c 100644
--- a/src/qemu/qemu_domain.h
+++ b/src/qemu/qemu_domain.h
@@ -840,6 +840,13 @@ bool qemuDomainDiskChangeSupported(virDomainDiskDefPtr disk,
 const char *qemuDomainDiskNodeFormatLookup(virDomainObjPtr vm,
                                            const char *disk);

+void qemuDomainGetImageIds(virQEMUDriverConfigPtr cfg,
+                           virDomainObjPtr vm,
+                           virStorageSourcePtr src,
+                           virStorageSourcePtr parentSrc,
+                           uid_t *uid,
+                           gid_t *gid);
+
 int qemuDomainStorageFileInit(virQEMUDriverPtr driver,
                               virDomainObjPtr vm,
                               virStorageSourcePtr src,
--
2.23.0

On 12/3/19 11:17 AM, Peter Krempa wrote:
> Signed-off-by: Peter Krempa <pkrempa@redhat.com>
> ---
>  src/qemu/qemu_domain.c | 2 +-
>  src/qemu/qemu_domain.h | 7 +++++++
>  2 files changed, 8 insertions(+), 1 deletion(-)
Reviewed-by: Eric Blake <eblake@redhat.com>

--
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3226
Virtualization:  qemu.org | libvirt.org

On Tue, Dec 03, 2019 at 06:17:23PM +0100, Peter Krempa wrote:
> Signed-off-by: Peter Krempa <pkrempa@redhat.com>
> ---
>  src/qemu/qemu_domain.c | 2 +-
>  src/qemu/qemu_domain.h | 7 +++++++
>  2 files changed, 8 insertions(+), 1 deletion(-)
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>

Regards,
Daniel
--
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|

On Tue, Dec 03, 2019 at 06:17:23PM +0100, Peter Krempa wrote:
> Signed-off-by: Peter Krempa <pkrempa@redhat.com>
> ---
>  src/qemu/qemu_domain.c | 2 +-
>  src/qemu/qemu_domain.h | 7 +++++++
>  2 files changed, 8 insertions(+), 1 deletion(-)
Reviewed-by: Ján Tomko <jtomko@redhat.com>

Jano

A pull mode backup job uses temporary disk images to hold the changed
parts of the disk while the client is copying the changes. Since usage of
the temporary space can be monitored but doesn't really fit any of the
existing stats fields, introduce new fields for reporting this data.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
---
 include/libvirt/libvirt-domain.h | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/include/libvirt/libvirt-domain.h b/include/libvirt/libvirt-domain.h
index 40c71091ec..b9908fe7b0 100644
--- a/include/libvirt/libvirt-domain.h
+++ b/include/libvirt/libvirt-domain.h
@@ -3586,6 +3586,20 @@ typedef enum {
  */
 # define VIR_DOMAIN_JOB_SUCCESS "success"

+/**
+ * VIR_DOMAIN_JOB_DISK_TEMP_USED:
+ * virDomainGetJobStats field: current usage of temporary disk space for the
+ * job in bytes as VIR_TYPED_PARAM_ULLONG.
+ */
+# define VIR_DOMAIN_JOB_DISK_TEMP_USED "disk_temp_used"
+
+/**
+ * VIR_DOMAIN_JOB_DISK_TEMP_TOTAL:
+ * virDomainGetJobStats field: possible total temporary disk space for the
+ * job in bytes as VIR_TYPED_PARAM_ULLONG.
+ */
+# define VIR_DOMAIN_JOB_DISK_TEMP_TOTAL "disk_temp_total"
+
 /**
  * virConnectDomainEventGenericCallback:
  * @conn: the connection pointer
--
2.23.0
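[Illustrative sketch, not part of the patch: both fields are optional typed
parameters, so a monitoring client has to treat them as possibly absent. A
hedged Python sketch, assuming a binding that returns job stats as a plain
dict keyed by the new constant strings ("disk_temp_used"/"disk_temp_total");
the helper names are hypothetical.]

```python
def temp_disk_stats(stats):
    """Extract the optional temporary-disk-space fields (bytes) from a
    job-stats dict. Either value may be None, because the fields are
    only reported by jobs that use scratch storage (pull-mode backup)."""
    return stats.get("disk_temp_used"), stats.get("disk_temp_total")


def temp_disk_utilization(stats):
    """Fraction of the scratch space currently in use, or None when the
    job did not report both fields (or total is zero)."""
    used, total = temp_disk_stats(stats)
    if used is None or not total:
        return None
    return used / total
```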

On Tue, Dec 03, 2019 at 06:17:24PM +0100, Peter Krempa wrote:
> A pull mode backup job uses temporary disk images to hold the changed
> parts of the disk while the client is copying the changes. Since usage
> of the temporary space can be monitored but doesn't really fit any of
> the existing stats fields introduce new fields for reporting this data.
>
> Signed-off-by: Peter Krempa <pkrempa@redhat.com>
> Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
> ---
>  include/libvirt/libvirt-domain.h | 14 ++++++++++++++
>  1 file changed, 14 insertions(+)
Reviewed-by: Ján Tomko <jtomko@redhat.com>

Jano

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
---
 tools/virsh-domain.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/tools/virsh-domain.c b/tools/virsh-domain.c
index 21ea1a69ea..bb942267f0 100644
--- a/tools/virsh-domain.c
+++ b/tools/virsh-domain.c
@@ -6388,6 +6388,24 @@ cmdDomjobinfo(vshControl *ctl, const vshCmd *cmd)
         vshPrint(ctl, "%-17s %-13d\n", _("Auto converge throttle:"), ivalue);
     }

+    if ((rc = virTypedParamsGetULLong(params, nparams,
+                                      VIR_DOMAIN_JOB_DISK_TEMP_USED,
+                                      &value)) < 0) {
+        goto save_error;
+    } else if (rc) {
+        val = vshPrettyCapacity(value, &unit);
+        vshPrint(ctl, "%-17s %-.3lf %s\n", _("Temporary disk space use:"), val, unit);
+    }
+
+    if ((rc = virTypedParamsGetULLong(params, nparams,
+                                      VIR_DOMAIN_JOB_DISK_TEMP_TOTAL,
+                                      &value)) < 0) {
+        goto save_error;
+    } else if (rc) {
+        val = vshPrettyCapacity(value, &unit);
+        vshPrint(ctl, "%-17s %-.3lf %s\n", _("Temporary disk space total:"), val, unit);
+    }
+
     ret = true;

 cleanup:
--
2.23.0
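[Illustrative sketch, not part of the patch: the hunk above scales the raw
byte counts through vshPrettyCapacity() before printing. The scaling it
performs can be approximated in Python as follows; this is a hedged
re-sketch of the idea, not the actual virsh implementation.]

```python
def pretty_capacity(nbytes):
    """Scale a byte count into a human-readable value plus unit,
    approximating what virsh's vshPrettyCapacity() does (divide by
    1024 until the value drops below one unit step)."""
    units = ["B", "KiB", "MiB", "GiB", "TiB", "PiB", "EiB"]
    val = float(nbytes)
    i = 0
    while val >= 1024 and i < len(units) - 1:
        val /= 1024.0
        i += 1
    return val, units[i]


def format_job_field(label, nbytes):
    """Render one line roughly the way cmdDomjobinfo prints it:
    label, value with three decimals, then the unit."""
    val, unit = pretty_capacity(nbytes)
    return "%-17s %.3f %s" % (label, val, unit)
```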

On Tue, Dec 03, 2019 at 06:17:25PM +0100, Peter Krempa wrote:
> Signed-off-by: Peter Krempa <pkrempa@redhat.com>
> Reviewed-by: Eric Blake <eblake@redhat.com>
> Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
> ---
>  tools/virsh-domain.c | 18 ++++++++++++++++++
>  1 file changed, 18 insertions(+)
Reviewed-by: Ján Tomko <jtomko@redhat.com>

Jano

From: Eric Blake <eblake@redhat.com>

Prepare for new backup APIs by describing the XML that will represent a
backup. The XML resembles snapshots and checkpoints in being able to
select actions for a set of disks, but has other differences. It can
support both push model (the hypervisor does the backup directly into the
destination file) and pull model (the hypervisor exposes an access port
for a third party to grab what is necessary). Add testsuite coverage for
some minimal uses of the XML.

The <disk> element within <domainbackup> tries to model the same elements
as a <disk> under <domain>, but sharing the RNG grammar proved to be
hairy. That is in part because while <domain> uses <source> to describe a
host resource in use by the guest, a backup job is using a host resource
that is not visible to the guest: a push backup action is instead
describing a <target> (which ultimately could be a remote network
resource, but for simplicity the RNG just validates a local file for
now), and a pull backup action is instead describing a temporary local
file <scratch> (which probably should not be a remote resource). A future
refactoring may thus introduce some way to parameterize RNG to accept
<disk type='FOO'>...</disk> so that the name of the subelement can be
<source> for domain, or <target> or <scratch> as needed for backups.
Future patches may improve this area of code.
Signed-off-by: Eric Blake <eblake@redhat.com> Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- docs/docs.html.in | 3 +- docs/format.html.in | 1 + docs/formatbackup.html.in | 175 ++++++++++++++ docs/formatcheckpoint.html.in | 12 +- docs/index.html.in | 3 +- docs/schemas/domainbackup.rng | 223 ++++++++++++++++++ libvirt.spec.in | 1 + mingw-libvirt.spec.in | 2 + tests/Makefile.am | 1 + .../backup-pull-seclabel.xml | 18 ++ tests/domainbackupxml2xmlin/backup-pull.xml | 10 + .../backup-push-seclabel.xml | 17 ++ tests/domainbackupxml2xmlin/backup-push.xml | 10 + tests/domainbackupxml2xmlin/empty.xml | 1 + tests/virschematest.c | 1 + 15 files changed, 470 insertions(+), 8 deletions(-) create mode 100644 docs/formatbackup.html.in create mode 100644 docs/schemas/domainbackup.rng create mode 100644 tests/domainbackupxml2xmlin/backup-pull-seclabel.xml create mode 100644 tests/domainbackupxml2xmlin/backup-pull.xml create mode 100644 tests/domainbackupxml2xmlin/backup-push-seclabel.xml create mode 100644 tests/domainbackupxml2xmlin/backup-push.xml create mode 100644 tests/domainbackupxml2xmlin/empty.xml diff --git a/docs/docs.html.in b/docs/docs.html.in index 268c16f3b3..f8a949bb53 100644 --- a/docs/docs.html.in +++ b/docs/docs.html.in @@ -82,7 +82,8 @@ <a href="formatnode.html">node devices</a>, <a href="formatsecret.html">secrets</a>, <a href="formatsnapshot.html">snapshots</a>, - <a href="formatcheckpoint.html">checkpoints</a></dd> + <a href="formatcheckpoint.html">checkpoints</a>, + <a href="formatbackup.html">backup jobs</a></dd> <dt><a href="uri.html">URI format</a></dt> <dd>The URI formats used for connecting to libvirt</dd> diff --git a/docs/format.html.in b/docs/format.html.in index 3be2237663..d013528fe0 100644 --- a/docs/format.html.in +++ b/docs/format.html.in @@ -27,6 +27,7 @@ <li><a href="formatsecret.html">Secrets</a></li> <li><a href="formatsnapshot.html">Snapshots</a></li> <li><a href="formatcheckpoint.html">Checkpoints</a></li> + <li><a 
href="formatbackup.html">Backup jobs</a></li> </ul> <h2>Command line validation</h2> diff --git a/docs/formatbackup.html.in b/docs/formatbackup.html.in new file mode 100644 index 0000000000..d2e4609c1c --- /dev/null +++ b/docs/formatbackup.html.in @@ -0,0 +1,175 @@ +<?xml version="1.0" encoding="UTF-8"?> +<!DOCTYPE html> +<html xmlns="http://www.w3.org/1999/xhtml"> + <body> + <h1>Backup XML format</h1> + + <ul id="toc"></ul> + + <h2><a id="BackupAttributes">Backup XML</a></h2> + + <p> + Creating a backup, whether full or incremental, is done + via <code>virDomainBackupBegin()</code>, which takes an XML + description of the actions to perform, as well as an optional + second XML document <a href="formatcheckpoint.html">describing a + checkpoint</a> to create at the same point in time. See + also <a href="domainstatecapture.html">a comparison</a> between + the various state capture APIs. + </p> + <p> + There are two general modes for backups: a push mode (where the + hypervisor writes out the data to the destination file, which + may be local or remote), and a pull mode (where the hypervisor + creates an NBD server that a third-party client can then read as + needed, and which requires the use of temporary storage, + typically local, until the backup is complete). + </p> + <p> + The instructions for beginning a backup job are provided as + attributes and elements of the + top-level <code>domainbackup</code> element. This element + includes an optional attribute <code>mode</code> which can be + either "push" or "pull" (default + push). <code>virDomainBackupGetXMLDesc()</code> can be used to + see the actual values selected for elements omitted during + creation (for example, learning which port the NBD server is + using in the pull model or what file names libvirt generated + when none were supplied). 
The following child elements and attributes + are supported: + </p> + <dl> + <dt><code>incremental</code></dt> + <dd>An optional element giving the name of an existing + checkpoint of the domain, which will be used to make this + backup an incremental one. In the push model, only changes + since the named checkpoint are written to the destination. In + the pull model, the NBD server uses the + NBD_OPT_SET_META_CONTEXT extension to advertise to the client + which portions of the export contain changes since the named + checkpoint. If omitted, a full backup is performed. + </dd> + <dt><code>server</code></dt> + <dd>Present only for a pull mode backup. Contains the same + attributes as + the <a href="formatdomain.html#elementsDisks"><code>protocol</code> + element of a disk</a> attached via NBD in the domain (such as + transport, socket, name, port, or tls), necessary to set up an + NBD server that exposes the content of each disk at the time + the backup is started. + </dd> + <dt><code>disks</code></dt> + <dd>An optional listing of instructions for disks participating + in the backup (if omitted, all disks participate and libvirt + attempts to generate filenames by appending the current + timestamp as a suffix). If the entire element was omitted on + input, then all disks participate in the backup, otherwise, + only the disks explicitly listed which do not also + use <code>backup='no'</code> will participate. On output, this + is the state of each of the domain's disk in relation to the + backup operation. 
+ <dl> + <dt><code>disk</code></dt> + <dd>This sub-element describes the backup properties of a + specific disk, with the following attributes and child + elements: + <dl> + <dt><code>name</code></dt> + <dd>A mandatory attribute which must match + the <code><target dev='name'/></code> + of one of + the <a href="formatdomain.html#elementsDisks">disk + devices</a> specified for the domain at the time of + the checkpoint.</dd> + <dt><code>backup</code></dt> + <dd>Setting this attribute to <code>yes</code>(default) specifies + that the disk should take part in the backup and using + <code>no</code> excludes the disk from the backup.</dd> + <dt><code>type</code></dt> + <dd>A mandatory attribute to describe the type of the + disk, except when <code>backup='no'</code> is + used. Valid values include <code>file</code>, + <code>block</code>, or <code>network</code>. + Similar to a disk declaration for a domain, the choice of type + controls what additional sub-elements are needed to describe + the destination (such as <code>protocol</code> for a + network destination).</dd> + <dt><code>target</code></dt> + <dd>Valid only for push mode backups, this is the + primary sub-element that describes the file name of + the backup destination, similar to + the <code>source</code> sub-element of a domain + disk. An optional sub-element <code>driver</code> can + also be used, with an attribute <code>type</code> to + specify a destination format different from + qcow2. </dd> + <dt><code>scratch</code></dt> + <dd>Valid only for pull mode backups, this is the + primary sub-element that describes the file name of + the local scratch file to be used in facilitating the + backup, and is similar to the <code>source</code> + sub-element of a domain disk. Currently only <code>file</code> + and <code>block</code> scratch storage is supported. The + <code>file</code> scratch file is created and deleted by + libvirt in the given location. 
A <code>block</code> scratch + device must exist prior to starting the backup and is formatted. + The block device must have enough space for the corresponding + disk data including format overhead. + + If <code>VIR_DOMAIN_BACKUP_BEGIN_REUSE_EXTERNAL</code> flag is + used the file for a scratch of <code>file</code> type must + exist with the correct format and size to hold the copy and is + used without modification. The file is not deleted after the + backup but the contents of the file don't make sense outside + of the backup. The same applies for the block device which + must be formatted appropriately.</dd> + </dl> + </dd> + </dl> + </dd> + </dl> + + <h2><a id="example">Examples</a></h2> + + <p>Use <code>virDomainBackupBegin()</code> to perform a full + backup using push mode. The example lets libvirt pick the + destination and format for 'vda', fully specifies that we want a + raw backup of 'vdb', and omits 'vdc' from the operation. + </p> + <pre> +<domainbackup> + <disks/> + <disk name='vda' backup='yes'/> + <disk name='vdb' type='file'> + <target file='/path/to/vdb.backup'/> + <driver type='raw'/> + </disk> + <disk name='vdc' backup='no'/> + </disks/> +</domainbackup> + </pre> + + <p>If the previous full backup also passed a parameter describing + <a href="formatcheckpoint.html">checkpoint XML</a> that resulted + in a checkpoint named <code>1525889631</code>, we can make + another call to <code>virDomainBackupBegin()</code> to perform + an incremental backup of just the data changed since that + checkpoint, this time using the following XML to start a pull + model export of the 'vda' and 'vdb' disks, where a third-party + NBD client connecting to '/path/to/server' completes the backup + (omitting 'vdc' from the explicit list has the same effect as + the backup='no' from the previous example): + </p> + <pre> +<domainbackup mode="pull"> + <incremental>1525889631</incremental> + <server transport="unix" socket="/path/to/server"/> + <disks/> + <disk name='vda' 
backup='yes' type='file'> + <scratch file='/path/to/file1.scratch'/> + </disk> + </disks/> +</domainbackup> + </pre> + </body> +</html> diff --git a/docs/formatcheckpoint.html.in b/docs/formatcheckpoint.html.in index 044bbfe4b0..ee56194523 100644 --- a/docs/formatcheckpoint.html.in +++ b/docs/formatcheckpoint.html.in @@ -28,12 +28,12 @@ first checkpoint and the second backup operation), it is possible to do an offline reconstruction of the state of the disk at the time of the second backup without having to copy as - much data as a second full backup would require. Future API - additions will make it possible to create checkpoints in - conjunction with a backup - via <code>virDomainBackupBegin()</code> or with an external - snapshot via <code>virDomainSnapshotCreateXML2</code>; but for - now, libvirt exposes enough support to create disk checkpoints + much data as a second full backup would require. Most disk + checkpoints are created in conjunction with a backup + via <code>virDomainBackupBegin()</code>, although a future API + addition of <code>virDomainSnapshotCreateXML2()</code> will also + make this possible when creating external snapshots; however, + libvirt also exposes enough support to create disk checkpoints independently from a backup operation via <code>virDomainCheckpointCreateXML()</code> <span class="since">since 5.6.0</span>. 
Likewise, the creation of checkpoints when diff --git a/docs/index.html.in b/docs/index.html.in index 7d0ab650e3..26e8406917 100644 --- a/docs/index.html.in +++ b/docs/index.html.in @@ -59,7 +59,8 @@ <a href="formatnode.html">node devices</a>, <a href="formatsecret.html">secrets</a>, <a href="formatsnapshot.html">snapshots</a>, - <a href="formatcheckpoint.html">checkpoints</a></dd> + <a href="formatcheckpoint.html">checkpoints</a>, + <a href="formatbackup.html">backup jobs</a></dd> <dt><a href="http://wiki.libvirt.org">Wiki</a></dt> <dd>Read further community contributed content</dd> </dl> diff --git a/docs/schemas/domainbackup.rng b/docs/schemas/domainbackup.rng new file mode 100644 index 0000000000..7286acb18c --- /dev/null +++ b/docs/schemas/domainbackup.rng @@ -0,0 +1,223 @@ +<?xml version="1.0"?> +<!-- A Relax NG schema for the libvirt domain backup properties XML format --> +<grammar xmlns="http://relaxng.org/ns/structure/1.0"> + <start> + <ref name='domainbackup'/> + </start> + + <include href='domaincommon.rng'/> + + <define name='domainbackup'> + <element name='domainbackup'> + <interleave> + <optional> + <element name='incremental'> + <text/> + </element> + </optional> + <choice> + <group> + <optional> + <attribute name='mode'> + <value>push</value> + </attribute> + </optional> + <ref name='backupDisksPush'/> + </group> + <group> + <attribute name='mode'> + <value>pull</value> + </attribute> + <interleave> + <element name='server'> + <choice> + <group> + <optional> + <attribute name='transport'> + <value>tcp</value> + </attribute> + </optional> + <attribute name='name'> + <choice> + <ref name='dnsName'/> + <ref name='ipAddr'/> + </choice> + </attribute> + <optional> + <attribute name='port'> + <ref name='unsignedInt'/> + </attribute> + </optional> + <!-- add tls? 
--> + </group> + <group> + <attribute name='transport'> + <value>unix</value> + </attribute> + <attribute name='socket'> + <ref name='absFilePath'/> + </attribute> + </group> + </choice> + </element> + <ref name='backupDisksPull'/> + </interleave> + </group> + </choice> + </interleave> + </element> + </define> + + <define name='backupPushDriver'> + <optional> + <element name='driver'> + <attribute name='type'> + <ref name='storageFormat'/> + </attribute> + </element> + </optional> + </define> + + <define name='backupPullDriver'> + <optional> + <element name='driver'> + <attribute name='type'> + <value>qcow2</value> + </attribute> + </element> + </optional> + </define> + + <define name='backupAttr'> + <optional> + <attribute name='backup'> + <choice> + <value>yes</value> + </choice> + </attribute> + </optional> + </define> + + <define name='backupDisksPush'> + <optional> + <element name='disks'> + <oneOrMore> + <element name='disk'> + <attribute name='name'> + <choice> + <ref name='diskTarget'/> + </choice> + </attribute> + <choice> + <group> + <attribute name='backup'> + <value>no</value> + </attribute> + </group> + <group> + <ref name='backupAttr'/> + <attribute name='type'> + <value>file</value> + </attribute> + <interleave> + <optional> + <element name='target'> + <attribute name='file'> + <ref name='absFilePath'/> + </attribute> + <zeroOrMore> + <ref name='devSeclabel'/> + </zeroOrMore> + </element> + </optional> + <ref name='backupPushDriver'/> + </interleave> + </group> + <group> + <ref name='backupAttr'/> + <attribute name='type'> + <value>block</value> + </attribute> + <interleave> + <optional> + <element name='target'> + <attribute name='dev'> + <ref name='absFilePath'/> + </attribute> + <zeroOrMore> + <ref name='devSeclabel'/> + </zeroOrMore> + </element> + </optional> + <ref name='backupPushDriver'/> + </interleave> + </group> + </choice> + </element> + </oneOrMore> + </element> + </optional> + </define> + + <define name='backupDisksPull'> + <optional> + 
<element name='disks'> + <oneOrMore> + <element name='disk'> + <attribute name='name'> + <choice> + <ref name='diskTarget'/> + </choice> + </attribute> + <choice> + <group> + <attribute name='backup'> + <value>no</value> + </attribute> + </group> + <group> + <optional> + <ref name='backupAttr'/> + <attribute name='type'> + <value>file</value> + </attribute> + </optional> + <optional> + <interleave> + <element name='scratch'> + <attribute name='file'> + <ref name='absFilePath'/> + </attribute> + <zeroOrMore> + <ref name='devSeclabel'/> + </zeroOrMore> + </element> + <ref name='backupPullDriver'/> + </interleave> + </optional> + </group> + <group> + <ref name='backupAttr'/> + <attribute name='type'> + <value>block</value> + </attribute> + <interleave> + <element name='scratch'> + <attribute name='dev'> + <ref name='absFilePath'/> + </attribute> + <zeroOrMore> + <ref name='devSeclabel'/> + </zeroOrMore> + </element> + <ref name='backupPullDriver'/> + </interleave> + </group> + </choice> + </element> + </oneOrMore> + </element> + </optional> + </define> + +</grammar> diff --git a/libvirt.spec.in b/libvirt.spec.in index 4c6161a26f..feda2e9faa 100644 --- a/libvirt.spec.in +++ b/libvirt.spec.in @@ -1892,6 +1892,7 @@ exit 0 %{_datadir}/libvirt/schemas/capability.rng %{_datadir}/libvirt/schemas/cputypes.rng %{_datadir}/libvirt/schemas/domain.rng +%{_datadir}/libvirt/schemas/domainbackup.rng %{_datadir}/libvirt/schemas/domaincaps.rng %{_datadir}/libvirt/schemas/domaincheckpoint.rng %{_datadir}/libvirt/schemas/domaincommon.rng diff --git a/mingw-libvirt.spec.in b/mingw-libvirt.spec.in index c29f3eeed2..453570fc3c 100644 --- a/mingw-libvirt.spec.in +++ b/mingw-libvirt.spec.in @@ -233,6 +233,7 @@ rm -rf $RPM_BUILD_ROOT%{mingw64_libexecdir}/libvirt-guests.sh %{mingw32_datadir}/libvirt/schemas/capability.rng %{mingw32_datadir}/libvirt/schemas/cputypes.rng %{mingw32_datadir}/libvirt/schemas/domain.rng +%{mingw32_datadir}/libvirt/schemas/domainbackup.rng 
%{mingw32_datadir}/libvirt/schemas/domaincaps.rng %{mingw32_datadir}/libvirt/schemas/domaincheckpoint.rng %{mingw32_datadir}/libvirt/schemas/domaincommon.rng @@ -324,6 +325,7 @@ rm -rf $RPM_BUILD_ROOT%{mingw64_libexecdir}/libvirt-guests.sh %{mingw64_datadir}/libvirt/schemas/capability.rng %{mingw64_datadir}/libvirt/schemas/cputypes.rng %{mingw64_datadir}/libvirt/schemas/domain.rng +%{mingw64_datadir}/libvirt/schemas/domainbackup.rng %{mingw64_datadir}/libvirt/schemas/domaincaps.rng %{mingw64_datadir}/libvirt/schemas/domaincheckpoint.rng %{mingw64_datadir}/libvirt/schemas/domaincommon.rng diff --git a/tests/Makefile.am b/tests/Makefile.am index e009de830c..ea9e2b2ad0 100644 --- a/tests/Makefile.am +++ b/tests/Makefile.am @@ -91,6 +91,7 @@ EXTRA_DIST = \ commanddata \ cputestdata \ domaincapsdata \ + domainbackupxml2xmlin \ domainconfdata \ domainschemadata \ fchostdata \ diff --git a/tests/domainbackupxml2xmlin/backup-pull-seclabel.xml b/tests/domainbackupxml2xmlin/backup-pull-seclabel.xml new file mode 100644 index 0000000000..a00d8758bb --- /dev/null +++ b/tests/domainbackupxml2xmlin/backup-pull-seclabel.xml @@ -0,0 +1,18 @@ +<domainbackup mode="pull"> + <incremental>1525889631</incremental> + <server transport='tcp' name='localhost' port='10809'/> + <disks> + <disk name='vda' type='file'> + <driver type='qcow2'/> + <scratch file='/path/to/file'> + <seclabel model='dac' relabel='no'/> + </scratch> + </disk> + <disk name='vdb' type='block'> + <driver type='qcow2'/> + <scratch dev='/dev/block'> + <seclabel model='dac' relabel='no'/> + </scratch> + </disk> + </disks> +</domainbackup> diff --git a/tests/domainbackupxml2xmlin/backup-pull.xml b/tests/domainbackupxml2xmlin/backup-pull.xml new file mode 100644 index 0000000000..c0bea4771d --- /dev/null +++ b/tests/domainbackupxml2xmlin/backup-pull.xml @@ -0,0 +1,10 @@ +<domainbackup mode="pull"> + <incremental>1525889631</incremental> + <server transport='tcp' name='localhost' port='10809'/> + <disks> + <disk name='vda' 
type='file'> + <scratch file='/path/to/file'/> + </disk> + <disk name='hda' backup='no'/> + </disks> +</domainbackup> diff --git a/tests/domainbackupxml2xmlin/backup-push-seclabel.xml b/tests/domainbackupxml2xmlin/backup-push-seclabel.xml new file mode 100644 index 0000000000..dbaf7f8e7c --- /dev/null +++ b/tests/domainbackupxml2xmlin/backup-push-seclabel.xml @@ -0,0 +1,17 @@ +<domainbackup mode="push"> + <incremental>1525889631</incremental> + <disks> + <disk name='vda' type='file'> + <driver type='raw'/> + <target file='/path/to/file'> + <seclabel model='dac' relabel='no'/> + </target> + </disk> + <disk name='vdb' type='block'> + <driver type='qcow2'/> + <target dev='/dev/block'> + <seclabel model='dac' relabel='no'/> + </target> + </disk> + </disks> +</domainbackup> diff --git a/tests/domainbackupxml2xmlin/backup-push.xml b/tests/domainbackupxml2xmlin/backup-push.xml new file mode 100644 index 0000000000..0bfec9b270 --- /dev/null +++ b/tests/domainbackupxml2xmlin/backup-push.xml @@ -0,0 +1,10 @@ +<domainbackup mode="push"> + <incremental>1525889631</incremental> + <disks> + <disk name='vda' type='file'> + <driver type='raw'/> + <target file='/path/to/file'/> + </disk> + <disk name='hda' backup='no'/> + </disks> +</domainbackup> diff --git a/tests/domainbackupxml2xmlin/empty.xml b/tests/domainbackupxml2xmlin/empty.xml new file mode 100644 index 0000000000..7ed511f97b --- /dev/null +++ b/tests/domainbackupxml2xmlin/empty.xml @@ -0,0 +1 @@ +<domainbackup/> diff --git a/tests/virschematest.c b/tests/virschematest.c index df50ef1717..5ae2d207d1 100644 --- a/tests/virschematest.c +++ b/tests/virschematest.c @@ -205,6 +205,7 @@ mymain(void) "genericxml2xmloutdata", "xlconfigdata", "libxlxml2domconfigdata", "qemuhotplugtestdomains"); DO_TEST_DIR("domaincaps.rng", "domaincapsdata"); + DO_TEST_DIR("domainbackup.rng", "domainbackupxml2xmlin"); DO_TEST_DIR("domaincheckpoint.rng", "qemudomaincheckpointxml2xmlin", "qemudomaincheckpointxml2xmlout"); 
DO_TEST_DIR("domainsnapshot.rng", "qemudomainsnapshotxml2xmlin", -- 2.23.0
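[Editorial illustration, not part of the patch:] The pull-mode test file added above (tests/domainbackupxml2xmlin/backup-pull.xml) is the minimal document shape the new schema accepts. A small Python sketch, using only the standard library, that parses that exact document and extracts the fields the driver will consume:

```python
import xml.etree.ElementTree as ET

# Verbatim copy of tests/domainbackupxml2xmlin/backup-pull.xml from the patch.
BACKUP_PULL = """\
<domainbackup mode="pull">
  <incremental>1525889631</incremental>
  <server transport='tcp' name='localhost' port='10809'/>
  <disks>
    <disk name='vda' type='file'>
      <scratch file='/path/to/file'/>
    </disk>
    <disk name='hda' backup='no'/>
  </disks>
</domainbackup>
"""

root = ET.fromstring(BACKUP_PULL)
assert root.tag == 'domainbackup' and root.get('mode') == 'pull'

# <incremental> names the checkpoint this backup is relative to.
checkpoint = root.findtext('incremental')

# 'vda' participates and carries a scratch file; 'hda' is excluded.
disks = {d.get('name'): d for d in root.findall('./disks/disk')}
scratch = disks['vda'].find('scratch').get('file')
excluded = disks['hda'].get('backup')

print(checkpoint, scratch, excluded)  # → 1525889631 /path/to/file no
```

Note how omitting per-disk details ('vda' has no driver element) leaves the choice to libvirt, while backup='no' opts a disk out entirely.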

On Tue, Dec 03, 2019 at 06:17:26PM +0100, Peter Krempa wrote:
From: Eric Blake <eblake@redhat.com>
Prepare for new backup APIs by describing the XML that will represent a backup. The XML resembles snapshots and checkpoints in being able to select actions for a set of disks, but has other differences. It can support both push model (the hypervisor does the backup directly into the destination file) and pull model (the hypervisor exposes an access port for a third party to grab what is necessary). Add testsuite coverage for some minimal uses of the XML.
The <disk> element within <domainbackup> tries to model the same elements as a <disk> under <domain>, but sharing the RNG grammar proved to be hairy. That is in part because while <domain> uses <source> to describe a host resource in use by the guest, a backup job is using a host resource that is not visible to the guest: a push backup action is instead describing a <target> (which ultimately could be a remote network resource, but for simplicity the RNG just validates a local file for now), and a pull backup action is instead describing a temporary local file <scratch> (which probably should not be a remote resource). A future refactoring may thus introduce some way to parameterize the RNG to accept <disk type='FOO'>...</disk> so that the name of the subelement can be <source> for domain, or <target> or <scratch> as needed for backups. Future patches may improve this area of code.
Signed-off-by: Eric Blake <eblake@redhat.com> Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- docs/docs.html.in | 3 +- docs/format.html.in | 1 + docs/formatbackup.html.in | 175 ++++++++++++++ docs/formatcheckpoint.html.in | 12 +- docs/index.html.in | 3 +- docs/schemas/domainbackup.rng | 223 ++++++++++++++++++ libvirt.spec.in | 1 + mingw-libvirt.spec.in | 2 + tests/Makefile.am | 1 + .../backup-pull-seclabel.xml | 18 ++ tests/domainbackupxml2xmlin/backup-pull.xml | 10 + .../backup-push-seclabel.xml | 17 ++ tests/domainbackupxml2xmlin/backup-push.xml | 10 + tests/domainbackupxml2xmlin/empty.xml | 1 + tests/virschematest.c | 1 + 15 files changed, 470 insertions(+), 8 deletions(-) create mode 100644 docs/formatbackup.html.in create mode 100644 docs/schemas/domainbackup.rng create mode 100644 tests/domainbackupxml2xmlin/backup-pull-seclabel.xml create mode 100644 tests/domainbackupxml2xmlin/backup-pull.xml create mode 100644 tests/domainbackupxml2xmlin/backup-push-seclabel.xml create mode 100644 tests/domainbackupxml2xmlin/backup-push.xml create mode 100644 tests/domainbackupxml2xmlin/empty.xml
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com> Regards, Daniel -- |: https://berrange.com -o- https://www.flickr.com/photos/dberrange :| |: https://libvirt.org -o- https://fstop138.berrange.com :| |: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|

On Tue, Dec 03, 2019 at 06:17:26PM +0100, Peter Krempa wrote:
From: Eric Blake <eblake@redhat.com>
Prepare for new backup APIs by describing the XML that will represent a backup. The XML resembles snapshots and checkpoints in being able to select actions for a set of disks, but has other differences. It can support both push model (the hypervisor does the backup directly into the destination file) and pull model (the hypervisor exposes an access port for a third party to grab what is necessary). Add testsuite coverage for some minimal uses of the XML.
The <disk> element within <domainbackup> tries to model the same elements as a <disk> under <domain>, but sharing the RNG grammar proved to be hairy. That is in part because while <domain> uses <source> to describe a host resource in use by the guest, a backup job is using a host resource that is not visible to the guest: a push backup action is instead describing a <target> (which ultimately could be a remote network resource, but for simplicity the RNG just validates a local file for now), and a pull backup action is instead describing a temporary local file <scratch> (which probably should not be a remote resource). A future refactoring may thus introduce some way to parameterize the RNG to accept <disk type='FOO'>...</disk> so that the name of the subelement can be <source> for domain, or <target> or <scratch> as needed for backups. Future patches may improve this area of code.
Signed-off-by: Eric Blake <eblake@redhat.com> Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- docs/docs.html.in | 3 +- docs/format.html.in | 1 + docs/formatbackup.html.in | 175 ++++++++++++++ docs/formatcheckpoint.html.in | 12 +- docs/index.html.in | 3 +- docs/schemas/domainbackup.rng | 223 ++++++++++++++++++ libvirt.spec.in | 1 + mingw-libvirt.spec.in | 2 + tests/Makefile.am | 1 + .../backup-pull-seclabel.xml | 18 ++ tests/domainbackupxml2xmlin/backup-pull.xml | 10 + .../backup-push-seclabel.xml | 17 ++ tests/domainbackupxml2xmlin/backup-push.xml | 10 + tests/domainbackupxml2xmlin/empty.xml | 1 + tests/virschematest.c | 1 + 15 files changed, 470 insertions(+), 8 deletions(-) create mode 100644 docs/formatbackup.html.in create mode 100644 docs/schemas/domainbackup.rng create mode 100644 tests/domainbackupxml2xmlin/backup-pull-seclabel.xml create mode 100644 tests/domainbackupxml2xmlin/backup-pull.xml create mode 100644 tests/domainbackupxml2xmlin/backup-push-seclabel.xml create mode 100644 tests/domainbackupxml2xmlin/backup-push.xml create mode 100644 tests/domainbackupxml2xmlin/empty.xml
diff --git a/docs/formatbackup.html.in b/docs/formatbackup.html.in new file mode 100644 index 0000000000..d2e4609c1c --- /dev/null +++ b/docs/formatbackup.html.in @@ -0,0 +1,175 @@
[...]
+ <h2><a id="example">Examples</a></h2> + + <p>Use <code>virDomainBackupBegin()</code> to perform a full + backup using push mode. The example lets libvirt pick the + destination and format for 'vda', fully specifies that we want a + raw backup of 'vdb', and omits 'vdc' from the operation. + </p> + <pre> +<domainbackup> + <disks/> extra slash -^
+ <disk name='vda' backup='yes'/> + <disk name='vdb' type='file'> + <target file='/path/to/vdb.backup'/> + <driver type='raw'/> + </disk> + <disk name='vdc' backup='no'/> + </disks/> here too -----^
+</domainbackup> + </pre> + + <p>If the previous full backup also passed a parameter describing + <a href="formatcheckpoint.html">checkpoint XML</a> that resulted + in a checkpoint named <code>1525889631</code>, we can make + another call to <code>virDomainBackupBegin()</code> to perform + an incremental backup of just the data changed since that + checkpoint, this time using the following XML to start a pull + model export of the 'vda' and 'vdb' disks, where a third-party + NBD client connecting to '/path/to/server' completes the backup + (omitting 'vdc' from the explicit list has the same effect as + the backup='no' from the previous example): + </p> + <pre> +<domainbackup mode="pull"> + <incremental>1525889631</incremental> + <server transport="unix" socket="/path/to/server"/> + <disks/> here too ----^
+ <disk name='vda' backup='yes' type='file'> + <scratch file='/path/to/file1.scratch'/> + </disk> + </disks/> here too -----^ +</domainbackup> + </pre> + </body> +</html>
With that fixed: Reviewed-by: Ján Tomko <jtomko@redhat.com> Jano
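[Editorial illustration, not part of the patch:] The push-mode example above (once the stray closing slashes flagged in the review are fixed) can also be built programmatically. A minimal Python sketch using only the standard library:

```python
import xml.etree.ElementTree as ET

# Build the corrected push-mode example: libvirt picks destination and
# format for 'vda', 'vdb' gets an explicit raw backup target, and 'vdc'
# is omitted from the operation.
backup = ET.Element('domainbackup')
disks = ET.SubElement(backup, 'disks')

ET.SubElement(disks, 'disk', name='vda', backup='yes')

vdb = ET.SubElement(disks, 'disk', name='vdb', type='file')
ET.SubElement(vdb, 'target', file='/path/to/vdb.backup')
ET.SubElement(vdb, 'driver', type='raw')

ET.SubElement(disks, 'disk', name='vdc', backup='no')

xml = ET.tostring(backup, encoding='unicode')
print(xml)
```

The serialized result matches the documented example modulo whitespace, with proper paired <disks>...</disks> tags.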

From: Eric Blake <eblake@redhat.com> Introduce a few new public APIs related to incremental backups. This builds on the previous notion of a checkpoint (without an existing checkpoint, the new API is a full backup, differing from virDomainBlockCopy in the point of time chosen and in operation on multiple disks at once); and also allows creation of a new checkpoint at the same time as starting the backup (after all, an incremental backup is only useful if it covers the state since the previous backup). A backup job also affects filtering a listing of domains, as well as adding event reporting for signaling when a push model backup completes (where the hypervisor creates the backup); note that the pull model does not have an event (starting the backup lets a third party access the data, and only the third party knows when it is finished). The full list of new APIs: virDomainBackupBegin; virDomainBackupGetXMLDesc; Signed-off-by: Eric Blake <eblake@redhat.com> Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- include/libvirt/libvirt-domain.h | 19 +++- src/driver-hypervisor.h | 12 +++ src/libvirt-domain-checkpoint.c | 7 +- src/libvirt-domain.c | 143 +++++++++++++++++++++++++++++++ src/libvirt_public.syms | 6 ++ 5 files changed, 183 insertions(+), 4 deletions(-) diff --git a/include/libvirt/libvirt-domain.h b/include/libvirt/libvirt-domain.h index b9908fe7b0..84c5492a7b 100644 --- a/include/libvirt/libvirt-domain.h +++ b/include/libvirt/libvirt-domain.h @@ -4129,8 +4129,10 @@ typedef void (*virConnectDomainEventMigrationIterationCallback)(virConnectPtr co * @nparams: size of the params array * @opaque: application specific data * - * This callback occurs when a job (such as migration) running on the domain - * is completed. The params array will contain statistics of the just completed + * This callback occurs when a job (such as migration or backup) running on + * the domain is completed. 
+ * + * The params array will contain statistics of the just completed * job as virDomainGetJobStats would return. The callback must not free @params * (the array will be freed once the callback finishes). * @@ -4949,4 +4951,17 @@ int virDomainAgentSetResponseTimeout(virDomainPtr domain, int timeout, unsigned int flags); +typedef enum { + VIR_DOMAIN_BACKUP_BEGIN_REUSE_EXTERNAL = (1 << 0), /* reuse separately + provided images */ +} virDomainBackupBeginFlags; + +int virDomainBackupBegin(virDomainPtr domain, + const char *backupXML, + const char *checkpointXML, + unsigned int flags); + +char *virDomainBackupGetXMLDesc(virDomainPtr domain, + unsigned int flags); + #endif /* LIBVIRT_DOMAIN_H */ diff --git a/src/driver-hypervisor.h b/src/driver-hypervisor.h index 4afd8f6ec5..bce023017d 100644 --- a/src/driver-hypervisor.h +++ b/src/driver-hypervisor.h @@ -1377,6 +1377,16 @@ typedef int int timeout, unsigned int flags); +typedef int +(*virDrvDomainBackupBegin)(virDomainPtr domain, + const char *backupXML, + const char *checkpointXML, + unsigned int flags); + +typedef char * +(*virDrvDomainBackupGetXMLDesc)(virDomainPtr domain, + unsigned int flags); + typedef struct _virHypervisorDriver virHypervisorDriver; typedef virHypervisorDriver *virHypervisorDriverPtr; @@ -1638,4 +1648,6 @@ struct _virHypervisorDriver { virDrvDomainCheckpointDelete domainCheckpointDelete; virDrvDomainGetGuestInfo domainGetGuestInfo; virDrvDomainAgentSetResponseTimeout domainAgentSetResponseTimeout; + virDrvDomainBackupBegin domainBackupBegin; + virDrvDomainBackupGetXMLDesc domainBackupGetXMLDesc; }; diff --git a/src/libvirt-domain-checkpoint.c b/src/libvirt-domain-checkpoint.c index fa391f8a06..432c2d5a52 100644 --- a/src/libvirt-domain-checkpoint.c +++ b/src/libvirt-domain-checkpoint.c @@ -102,8 +102,11 @@ virDomainCheckpointGetConnect(virDomainCheckpointPtr checkpoint) * @flags: bitwise-OR of supported virDomainCheckpointCreateFlags * * Create a new checkpoint using @xmlDesc, with a top-level - 
* <domaincheckpoint> element, on a running @domain. Note that @xmlDesc - * must validate against the <domaincheckpoint> XML schema. + * <domaincheckpoint> element, on a running @domain. Note that + * @xmlDesc must validate against the <domaincheckpoint> XML schema. + * Typically, it is more common to create a new checkpoint as part of + * kicking off a backup job with virDomainBackupBegin(); however, it + * is also possible to start a checkpoint without a backup. * * See <a href="formatcheckpoint.html#CheckpointAttributes">Checkpoint XML</a> * for more details on @xmlDesc. In particular, some hypervisors may require diff --git a/src/libvirt-domain.c b/src/libvirt-domain.c index b9345804ea..f873246ace 100644 --- a/src/libvirt-domain.c +++ b/src/libvirt-domain.c @@ -12541,3 +12541,146 @@ virDomainAgentSetResponseTimeout(virDomainPtr domain, virDispatchError(conn); return -1; } + + +/** + * virDomainBackupBegin: + * @domain: a domain object + * @backupXML: description of the requested backup + * @checkpointXML: description of a checkpoint to create or NULL + * @flags: bitwise-OR of virDomainBackupBeginFlags + * + * Start a point-in-time backup job for the specified disks of a + * running domain. + * + * A backup job is a domain job and thus mutually exclusive with any other + * domain job such as migration. + * + * For now, backup jobs are also mutually exclusive with any + * other block job on the same device, although this restriction may + * be lifted in a future release. Progress of the backup job can be + * tracked via virDomainGetJobStats(). Completion of the job is also announced + * asynchronously via the VIR_DOMAIN_EVENT_ID_JOB_COMPLETED event. + * + * There are two fundamental backup approaches. The first, called a + * push model, instructs the hypervisor to copy the state of the guest + * disk to the designated storage destination (which may be on the + * local file system or a network device).
In this mode, the + * hypervisor writes the content of the guest disk to the destination, + * then emits VIR_DOMAIN_EVENT_ID_JOB_COMPLETED when the backup is + * either complete or failed (the backup image is invalid if the job + * fails or virDomainAbortJob() is used prior to the event being + * emitted). This kind of job finishes automatically. Users can + * determine success by using virDomainGetJobStats() with the + * VIR_DOMAIN_JOB_STATS_COMPLETED flag. + * + * The second, called a pull model, instructs the hypervisor to expose + * the state of the guest disk over an NBD export. A third-party + * client can then connect to this export and read whichever portions + * of the disk it desires. In this mode libvirt has to be informed via + * virDomainAbortJob() when the third-party NBD client is done and the backup + * resources can be released. + * + * The @backupXML parameter contains details about the backup in the top-level + * element <domainbackup>, including which backup mode to use, whether the + * backup is incremental from a previous checkpoint, which disks + * participate in the backup, the destination for a push model backup, + * and the temporary storage and NBD server details for a pull model + * backup. + * + * virDomainBackupGetXMLDesc() can be called to learn the actual + * values selected. For more information, see + * formatcheckpoint.html#BackupAttributes. + * + * The @checkpointXML parameter is optional; if non-NULL, then libvirt + * behaves as if virDomainCheckpointCreateXML() were called to create + * a checkpoint atomically covering the same point in time as the + * backup. + * + * The VIR_DOMAIN_BACKUP_BEGIN_REUSE_EXTERNAL flag specifies that the output or + * temporary files described by the @backupXML document were created by the + * caller with the correct format and size to hold the backup or temporary data. + * + * The creation of a new checkpoint allows for future incremental backups.
+ * Note that some hypervisors may require a particular disk format, such as + * qcow2, in order to take advantage of checkpoints, while allowing arbitrary + * formats if checkpoints are not involved. + * + * Returns 0 on success or -1 on failure. + */ +int +virDomainBackupBegin(virDomainPtr domain, + const char *backupXML, + const char *checkpointXML, + unsigned int flags) +{ + virConnectPtr conn; + + VIR_DOMAIN_DEBUG(domain, "backupXML=%s, checkpointXML=%s, flags=0x%x", + NULLSTR(backupXML), NULLSTR(checkpointXML), flags); + + virResetLastError(); + + virCheckDomainReturn(domain, -1); + conn = domain->conn; + + virCheckNonNullArgGoto(backupXML, error); + virCheckReadOnlyGoto(conn->flags, error); + + if (conn->driver->domainBackupBegin) { + int ret; + ret = conn->driver->domainBackupBegin(domain, backupXML, checkpointXML, + flags); + if (ret < 0) + goto error; + return ret; + } + + virReportUnsupportedError(); + error: + virDispatchError(conn); + return -1; +} + + +/** + * virDomainBackupGetXMLDesc: + * @domain: a domain object + * @flags: extra flags; not used yet, so callers should always pass 0 + * + * Queries the configuration of the active backup job. + * + * In some cases, a user can start a backup job without supplying all + * details and rely on libvirt to fill in the rest (for example, + * selecting the port used for an NBD export). This API can then be + * used to learn what default values were chosen. + * + * Returns a NUL-terminated UTF-8 encoded XML instance or NULL in + * case of error. The caller must free() the returned value. 
+ */ +char * +virDomainBackupGetXMLDesc(virDomainPtr domain, + unsigned int flags) +{ + virConnectPtr conn; + + VIR_DOMAIN_DEBUG(domain, "flags=0x%x", flags); + + virResetLastError(); + + virCheckDomainReturn(domain, NULL); + conn = domain->conn; + + if (conn->driver->domainBackupGetXMLDesc) { + char *ret; + ret = conn->driver->domainBackupGetXMLDesc(domain, flags); + if (!ret) + goto error; + return ret; + } + + virReportUnsupportedError(); + error: + virDispatchError(conn); + return NULL; +} diff --git a/src/libvirt_public.syms b/src/libvirt_public.syms index c92f083758..539d2e3943 100644 --- a/src/libvirt_public.syms +++ b/src/libvirt_public.syms @@ -867,4 +867,10 @@ LIBVIRT_5.10.0 { virDomainAgentSetResponseTimeout; } LIBVIRT_5.8.0; +LIBVIRT_6.0.0 { + global: + virDomainBackupBegin; + virDomainBackupGetXMLDesc; +} LIBVIRT_5.10.0; + # .... define new API here using predicted next version number .... -- 2.23.0
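[Editorial illustration, not part of the patch:] The documentation above notes that virDomainBackupGetXMLDesc() exists so a client can learn values libvirt filled in, such as an automatically selected NBD port. The sketch below is hypothetical: the XML string mimics the pull-mode format but is invented for illustration, not captured from libvirt, and only Python's standard library is used:

```python
import xml.etree.ElementTree as ET

# Hypothetical output of virDomainBackupGetXMLDesc() for a pull-mode job
# where libvirt auto-selected the NBD server port; this document is made
# up for illustration.
ACTIVE_JOB_XML = """\
<domainbackup mode='pull'>
  <server transport='tcp' name='localhost' port='10809'/>
  <disks>
    <disk name='vda' backup='yes' type='file'>
      <scratch file='/var/lib/libvirt/images/vda.scratch'/>
    </disk>
  </disks>
</domainbackup>
"""

root = ET.fromstring(ACTIVE_JOB_XML)
server = root.find('server')

# This is the endpoint a third-party NBD client would connect to in
# order to pull the backup data before virDomainAbortJob() ends the job.
endpoint = (server.get('name'), int(server.get('port')))
print(endpoint)  # → ('localhost', 10809)
```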

On Tue, Dec 03, 2019 at 06:17:27PM +0100, Peter Krempa wrote:
From: Eric Blake <eblake@redhat.com>
Introduce a few new public APIs related to incremental backups. This builds on the previous notion of a checkpoint (without an existing checkpoint, the new API is a full backup, differing from virDomainBlockCopy in the point of time chosen and in operation on multiple disks at once); and also allows creation of a new checkpoint at the same time as starting the backup (after all, an incremental backup is only useful if it covers the state since the previous backup).
A backup job also affects filtering a listing of domains, as well as adding event reporting for signaling when a push model backup completes (where the hypervisor creates the backup); note that the pull model does not have an event (starting the backup lets a third party access the data, and only the third party knows when it is finished).
The full list of new APIs: virDomainBackupBegin; virDomainBackupGetXMLDesc;
Signed-off-by: Eric Blake <eblake@redhat.com> Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- include/libvirt/libvirt-domain.h | 19 +++- src/driver-hypervisor.h | 12 +++ src/libvirt-domain-checkpoint.c | 7 +- src/libvirt-domain.c | 143 +++++++++++++++++++++++++++++++ src/libvirt_public.syms | 6 ++ 5 files changed, 183 insertions(+), 4 deletions(-)
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com> Regards, Daniel -- |: https://berrange.com -o- https://www.flickr.com/photos/dberrange :| |: https://libvirt.org -o- https://fstop138.berrange.com :| |: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|

On Tue, Dec 03, 2019 at 06:17:27PM +0100, Peter Krempa wrote:
From: Eric Blake <eblake@redhat.com>
Introduce a few new public APIs related to incremental backups. This builds on the previous notion of a checkpoint (without an existing checkpoint, the new API is a full backup, differing from virDomainBlockCopy in the point of time chosen and in operation on multiple disks at once); and also allows creation of a new checkpoint at the same time as starting the backup (after all, an incremental backup is only useful if it covers the state since the previous backup).
A backup job also affects filtering a listing of domains, as well as adding event reporting for signaling when a push model backup completes (where the hypervisor creates the backup); note that the pull model does not have an event (starting the backup lets a third party access the data, and only the third party knows when it is finished).
The full list of new APIs: virDomainBackupBegin; virDomainBackupGetXMLDesc;
Signed-off-by: Eric Blake <eblake@redhat.com> Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- include/libvirt/libvirt-domain.h | 19 +++- src/driver-hypervisor.h | 12 +++ src/libvirt-domain-checkpoint.c | 7 +- src/libvirt-domain.c | 143 +++++++++++++++++++++++++++++++ src/libvirt_public.syms | 6 ++ 5 files changed, 183 insertions(+), 4 deletions(-)
Reviewed-by: Ján Tomko <jtomko@redhat.com> Jano

Introduce VIR_DOMAIN_JOB_OPERATION_BACKUP into virDomainJobOperation. Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- include/libvirt/libvirt-domain.h | 1 + tools/virsh-domain.c | 4 +++- 2 files changed, 4 insertions(+), 1 deletion(-) diff --git a/include/libvirt/libvirt-domain.h b/include/libvirt/libvirt-domain.h index 84c5492a7b..6d1c7f1a3b 100644 --- a/include/libvirt/libvirt-domain.h +++ b/include/libvirt/libvirt-domain.h @@ -3269,6 +3269,7 @@ typedef enum { VIR_DOMAIN_JOB_OPERATION_SNAPSHOT = 6, VIR_DOMAIN_JOB_OPERATION_SNAPSHOT_REVERT = 7, VIR_DOMAIN_JOB_OPERATION_DUMP = 8, + VIR_DOMAIN_JOB_OPERATION_BACKUP = 9, # ifdef VIR_ENUM_SENTINELS VIR_DOMAIN_JOB_OPERATION_LAST diff --git a/tools/virsh-domain.c b/tools/virsh-domain.c index bb942267f0..5c313279d7 100644 --- a/tools/virsh-domain.c +++ b/tools/virsh-domain.c @@ -6068,7 +6068,9 @@ VIR_ENUM_IMPL(virshDomainJobOperation, N_("Outgoing migration"), N_("Snapshot"), N_("Snapshot revert"), - N_("Dump")); + N_("Dump"), + N_("Backup"), +); static const char * virshDomainJobOperationToString(int op) -- 2.23.0
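[Editorial illustration, not part of the patch:] The lookup virsh performs in virshDomainJobOperationToString() can be sketched in Python. The labels for operations 0 to 5 are assumptions reconstructed from the wider libvirt sources rather than quoted from this diff; 6 through 9 appear in the hunk above, with 9 being the new entry:

```python
# Mirror of virsh's VIR_ENUM_IMPL(virshDomainJobOperation, ...) table;
# indices follow the virDomainJobOperation enum in libvirt-domain.h.
# Entries 0..5 are assumptions, 6..9 come from the patch above.
JOB_OPERATION_LABELS = [
    'Unknown',             # VIR_DOMAIN_JOB_OPERATION_UNKNOWN
    'Start',               # ..._START (assumed)
    'Save',                # ..._SAVE (assumed)
    'Restore',             # ..._RESTORE (assumed)
    'Incoming migration',  # ..._MIGRATION_IN (assumed)
    'Outgoing migration',  # ..._MIGRATION_OUT
    'Snapshot',            # ..._SNAPSHOT
    'Snapshot revert',     # ..._SNAPSHOT_REVERT
    'Dump',                # ..._DUMP
    'Backup',              # ..._BACKUP (added by this patch)
]

def job_operation_to_string(op: int) -> str:
    # Fall back to 'Unknown' for out-of-range values, as virsh does for
    # operations it does not recognize.
    if 0 <= op < len(JOB_OPERATION_LABELS):
        return JOB_OPERATION_LABELS[op]
    return 'Unknown'

print(job_operation_to_string(9))  # → Backup
```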

On 12/3/19 11:17 AM, Peter Krempa wrote:
Introduce VIR_DOMAIN_JOB_OPERATION_BACKUP into virDomainJobOperation.
Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- include/libvirt/libvirt-domain.h | 1 + tools/virsh-domain.c | 4 +++- 2 files changed, 4 insertions(+), 1 deletion(-)
Reviewed-by: Eric Blake <eblake@redhat.com>
diff --git a/include/libvirt/libvirt-domain.h b/include/libvirt/libvirt-domain.h index 84c5492a7b..6d1c7f1a3b 100644 --- a/include/libvirt/libvirt-domain.h +++ b/include/libvirt/libvirt-domain.h @@ -3269,6 +3269,7 @@ typedef enum { VIR_DOMAIN_JOB_OPERATION_SNAPSHOT = 6, VIR_DOMAIN_JOB_OPERATION_SNAPSHOT_REVERT = 7, VIR_DOMAIN_JOB_OPERATION_DUMP = 8, + VIR_DOMAIN_JOB_OPERATION_BACKUP = 9,
# ifdef VIR_ENUM_SENTINELS VIR_DOMAIN_JOB_OPERATION_LAST diff --git a/tools/virsh-domain.c b/tools/virsh-domain.c index bb942267f0..5c313279d7 100644 --- a/tools/virsh-domain.c +++ b/tools/virsh-domain.c @@ -6068,7 +6068,9 @@ VIR_ENUM_IMPL(virshDomainJobOperation, N_("Outgoing migration"), N_("Snapshot"), N_("Snapshot revert"), - N_("Dump")); + N_("Dump"), + N_("Backup"), +);
static const char * virshDomainJobOperationToString(int op)
-- Eric Blake, Principal Software Engineer Red Hat, Inc. +1-919-301-3226 Virtualization: qemu.org | libvirt.org

On Tue, Dec 03, 2019 at 06:17:28PM +0100, Peter Krempa wrote:
Introduce VIR_DOMAIN_JOB_OPERATION_BACKUP into virDomainJobOperation.
Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- include/libvirt/libvirt-domain.h | 1 + tools/virsh-domain.c | 4 +++- 2 files changed, 4 insertions(+), 1 deletion(-)
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com> Regards, Daniel -- |: https://berrange.com -o- https://www.flickr.com/photos/dberrange :| |: https://libvirt.org -o- https://fstop138.berrange.com :| |: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|

On Tue, Dec 03, 2019 at 06:17:28PM +0100, Peter Krempa wrote:
Introduce VIR_DOMAIN_JOB_OPERATION_BACKUP into virDomainJobOperation.
Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- include/libvirt/libvirt-domain.h | 1 + tools/virsh-domain.c | 4 +++- 2 files changed, 4 insertions(+), 1 deletion(-)
Reviewed-by: Ján Tomko <jtomko@redhat.com> Jano

From: Eric Blake <eblake@redhat.com> This one is fairly straightforward - the generator already does what we need. Signed-off-by: Eric Blake <eblake@redhat.com> Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- src/remote/remote_driver.c | 2 ++ src/remote/remote_protocol.x | 33 ++++++++++++++++++++++++++++++++- src/remote_protocol-structs | 15 +++++++++++++++ 3 files changed, 49 insertions(+), 1 deletion(-) diff --git a/src/remote/remote_driver.c b/src/remote/remote_driver.c index a1384fc655..ddb95914a6 100644 --- a/src/remote/remote_driver.c +++ b/src/remote/remote_driver.c @@ -8702,6 +8702,8 @@ static virHypervisorDriver hypervisor_driver = { .domainCheckpointDelete = remoteDomainCheckpointDelete, /* 5.6.0 */ .domainGetGuestInfo = remoteDomainGetGuestInfo, /* 5.7.0 */ .domainAgentSetResponseTimeout = remoteDomainAgentSetResponseTimeout, /* 5.10.0 */ + .domainBackupBegin = remoteDomainBackupBegin, /* 6.0.0 */ + .domainBackupGetXMLDesc = remoteDomainBackupGetXMLDesc, /* 6.0.0 */ }; static virNetworkDriver network_driver = { diff --git a/src/remote/remote_protocol.x b/src/remote/remote_protocol.x index 23e42d17b1..c79cb98ae8 100644 --- a/src/remote/remote_protocol.x +++ b/src/remote/remote_protocol.x @@ -3754,6 +3754,23 @@ struct remote_domain_agent_set_response_timeout_ret { int result; }; + +struct remote_domain_backup_begin_args { + remote_nonnull_domain dom; + remote_string backup_xml; + remote_string checkpoint_xml; + unsigned int flags; +}; + +struct remote_domain_backup_get_xml_desc_args { + remote_nonnull_domain dom; + unsigned int flags; +}; + +struct remote_domain_backup_get_xml_desc_ret { + remote_nonnull_string xml; +}; + /*----- Protocol. -----*/ /* Define the program number, protocol version and procedure numbers here. 
*/ @@ -6633,5 +6650,19 @@ enum remote_procedure { * @generate: both * @acl: domain:write */ - REMOTE_PROC_DOMAIN_AGENT_SET_RESPONSE_TIMEOUT = 420 + REMOTE_PROC_DOMAIN_AGENT_SET_RESPONSE_TIMEOUT = 420, + + /** + * @generate: both + * @acl: domain:checkpoint + * @acl: domain:block_write + */ + REMOTE_PROC_DOMAIN_BACKUP_BEGIN = 421, + + /** + * @generate: both + * @priority: high + * @acl: domain:read + */ + REMOTE_PROC_DOMAIN_BACKUP_GET_XML_DESC = 422 }; diff --git a/src/remote_protocol-structs b/src/remote_protocol-structs index 9ad7a857e0..abc5c5fd2c 100644 --- a/src/remote_protocol-structs +++ b/src/remote_protocol-structs @@ -3122,6 +3122,19 @@ struct remote_domain_agent_set_response_timeout_args { struct remote_domain_agent_set_response_timeout_ret { int result; }; +struct remote_domain_backup_begin_args { + remote_nonnull_domain dom; + remote_string backup_xml; + remote_string checkpoint_xml; + u_int flags; +}; +struct remote_domain_backup_get_xml_desc_args { + remote_nonnull_domain dom; + u_int flags; +}; +struct remote_domain_backup_get_xml_desc_ret { + remote_nonnull_string xml; +}; enum remote_procedure { REMOTE_PROC_CONNECT_OPEN = 1, REMOTE_PROC_CONNECT_CLOSE = 2, @@ -3543,4 +3556,6 @@ enum remote_procedure { REMOTE_PROC_DOMAIN_GET_GUEST_INFO = 418, REMOTE_PROC_CONNECT_SET_IDENTITY = 419, REMOTE_PROC_DOMAIN_AGENT_SET_RESPONSE_TIMEOUT = 420, + REMOTE_PROC_DOMAIN_BACKUP_BEGIN = 421, + REMOTE_PROC_DOMAIN_BACKUP_GET_XML_DESC = 422, }; -- 2.23.0

On Tue, Dec 03, 2019 at 06:17:29PM +0100, Peter Krempa wrote:
From: Eric Blake <eblake@redhat.com>
This one is fairly straightforward - the generator already does what we need.
Signed-off-by: Eric Blake <eblake@redhat.com> Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- src/remote/remote_driver.c | 2 ++ src/remote/remote_protocol.x | 33 ++++++++++++++++++++++++++++++++- src/remote_protocol-structs | 15 +++++++++++++++ 3 files changed, 49 insertions(+), 1 deletion(-)
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com> Regards, Daniel -- |: https://berrange.com -o- https://www.flickr.com/photos/dberrange :| |: https://libvirt.org -o- https://fstop138.berrange.com :| |: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|

On Tue, Dec 03, 2019 at 06:17:29PM +0100, Peter Krempa wrote:
From: Eric Blake <eblake@redhat.com>
This one is fairly straightforward - the generator already does what we need.
Signed-off-by: Eric Blake <eblake@redhat.com> Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- src/remote/remote_driver.c | 2 ++ src/remote/remote_protocol.x | 33 ++++++++++++++++++++++++++++++++- src/remote_protocol-structs | 15 +++++++++++++++ 3 files changed, 49 insertions(+), 1 deletion(-)
diff --git a/src/remote/remote_driver.c b/src/remote/remote_driver.c index a1384fc655..ddb95914a6 100644 --- a/src/remote/remote_driver.c +++ b/src/remote/remote_driver.c @@ -8702,6 +8702,8 @@ static virHypervisorDriver hypervisor_driver = { .domainCheckpointDelete = remoteDomainCheckpointDelete, /* 5.6.0 */ .domainGetGuestInfo = remoteDomainGetGuestInfo, /* 5.7.0 */ .domainAgentSetResponseTimeout = remoteDomainAgentSetResponseTimeout, /* 5.10.0 */ + .domainBackupBegin = remoteDomainBackupBegin, /* 6.0.0 */ + .domainBackupGetXMLDesc = remoteDomainBackupGetXMLDesc, /* 6.0.0 */ };
static virNetworkDriver network_driver = { diff --git a/src/remote/remote_protocol.x b/src/remote/remote_protocol.x index 23e42d17b1..c79cb98ae8 100644 --- a/src/remote/remote_protocol.x +++ b/src/remote/remote_protocol.x @@ -3754,6 +3754,23 @@ struct remote_domain_agent_set_response_timeout_ret { int result; };
+ +struct remote_domain_backup_begin_args { + remote_nonnull_domain dom; + remote_string backup_xml;
Should this be remote_nonnull_string?
+ remote_string checkpoint_xml; + unsigned int flags; +}; +
Reviewed-by: Ján Tomko <jtomko@redhat.com> Jano
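For readers following the review question above: in libvirt's remote_protocol.x, `remote_string` is the optional (nullable) variant of `remote_nonnull_string`, so keeping `backup_xml` as `remote_string` lets the client omit it and have the driver apply defaults. A minimal C sketch of the same optional-field pattern (names are illustrative, not libvirt API):

```c
#include <stddef.h>

/* Illustrative stand-in for the args struct in the patch: optional
 * fields are plain pointers that the caller may leave NULL, mirroring
 * the XDR remote_string semantics (absent on the wire when NULL). */
typedef struct {
    const char *backup_xml;      /* optional */
    const char *checkpoint_xml;  /* optional */
    unsigned int flags;
} backup_begin_args;

/* Pick the XML to use, falling back to a server-side default when the
 * optional field was omitted by the caller. */
static const char *
backup_xml_or_default(const backup_begin_args *args)
{
    return args->backup_xml ? args->backup_xml : "<domainbackup/>";
}
```

A `remote_nonnull_string` would instead force every client to pass some XML, even when the defaults are wanted.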

From: Eric Blake <eblake@redhat.com> Accept XML describing a generic block job, and output it again as needed. This may still need a few tweaks to match the documented XML and RNG schema. Signed-off-by: Eric Blake <eblake@redhat.com> --- po/POTFILES.in | 1 + src/conf/Makefile.inc.am | 2 + src/conf/backup_conf.c | 499 +++++++++++++++++++++++++++++++++++++++ src/conf/backup_conf.h | 101 ++++++++ src/conf/virconftypes.h | 3 + src/libvirt_private.syms | 8 + 6 files changed, 614 insertions(+) create mode 100644 src/conf/backup_conf.c create mode 100644 src/conf/backup_conf.h diff --git a/po/POTFILES.in b/po/POTFILES.in index debb51cd70..0ff3beeb7e 100644 --- a/po/POTFILES.in +++ b/po/POTFILES.in @@ -19,6 +19,7 @@ @SRCDIR@/src/bhyve/bhyve_monitor.c @SRCDIR@/src/bhyve/bhyve_parse_command.c @SRCDIR@/src/bhyve/bhyve_process.c +@SRCDIR@/src/conf/backup_conf.c @SRCDIR@/src/conf/capabilities.c @SRCDIR@/src/conf/checkpoint_conf.c @SRCDIR@/src/conf/cpu_conf.c diff --git a/src/conf/Makefile.inc.am b/src/conf/Makefile.inc.am index 5035b9b524..debc6f4eef 100644 --- a/src/conf/Makefile.inc.am +++ b/src/conf/Makefile.inc.am @@ -12,6 +12,8 @@ NETDEV_CONF_SOURCES = \ $(NULL) DOMAIN_CONF_SOURCES = \ + conf/backup_conf.c \ + conf/backup_conf.h \ conf/capabilities.c \ conf/capabilities.h \ conf/checkpoint_conf.c \ diff --git a/src/conf/backup_conf.c b/src/conf/backup_conf.c new file mode 100644 index 0000000000..aaafdf12a2 --- /dev/null +++ b/src/conf/backup_conf.c @@ -0,0 +1,499 @@ +/* + * backup_conf.c: domain backup XML processing + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2.1 of the License, or (at your option) any later version. 
+ * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. + * + * You should have received a copy of the GNU Lesser General Public + * License along with this library. If not, see + * <http://www.gnu.org/licenses/>. + */ + +#include <config.h> + +#include "configmake.h" +#include "internal.h" +#include "virbuffer.h" +#include "datatypes.h" +#include "domain_conf.h" +#include "virlog.h" +#include "viralloc.h" +#include "backup_conf.h" +#include "virstoragefile.h" +#include "virfile.h" +#include "virerror.h" +#include "virxml.h" +#include "virstring.h" +#include "virhash.h" +#include "virenum.h" + +#define VIR_FROM_THIS VIR_FROM_DOMAIN + +VIR_LOG_INIT("conf.backup_conf"); + +VIR_ENUM_DECL(virDomainBackup); +VIR_ENUM_IMPL(virDomainBackup, + VIR_DOMAIN_BACKUP_TYPE_LAST, + "default", + "push", + "pull"); + +/* following values appear in the status XML */ +VIR_ENUM_DECL(virDomainBackupDiskState); +VIR_ENUM_IMPL(virDomainBackupDiskState, + VIR_DOMAIN_BACKUP_DISK_STATE_LAST, + "", + "running", + "complete", + "failed", + "cancelling", + "cancelled"); + +void +virDomainBackupDefFree(virDomainBackupDefPtr def) +{ + size_t i; + + if (!def) + return; + + g_free(def->incremental); + virStorageNetHostDefFree(1, def->server); + + for (i = 0; i < def->ndisks; i++) { + virDomainBackupDiskDefPtr disk = def->disks + i; + + g_free(disk->name); + virObjectUnref(disk->store); + } + + g_free(def->disks); + g_free(def); +} + + +static int +virDomainBackupDiskDefParseXML(xmlNodePtr node, + xmlXPathContextPtr ctxt, + virDomainBackupDiskDefPtr def, + bool push, + unsigned int flags, + virDomainXMLOptionPtr xmlopt) +{ + VIR_XPATH_NODE_AUTORESTORE(ctxt); + g_autofree char *type = NULL; + g_autofree char *driver = NULL; + g_autofree char *backup = NULL; + g_autofree char *state = NULL; + int tmp; 
+ xmlNodePtr srcNode; + unsigned int storageSourceParseFlags = 0; + bool internal = flags & VIR_DOMAIN_BACKUP_PARSE_INTERNAL; + + if (internal) + storageSourceParseFlags = VIR_DOMAIN_DEF_PARSE_STATUS; + + ctxt->node = node; + + if (!(def->name = virXMLPropString(node, "name"))) { + virReportError(VIR_ERR_XML_ERROR, "%s", + _("missing name from disk backup element")); + return -1; + } + + def->backup = VIR_TRISTATE_BOOL_YES; + + if ((backup = virXMLPropString(node, "backup"))) { + if ((tmp = virTristateBoolTypeFromString(backup)) <= 0) { + virReportError(VIR_ERR_XML_ERROR, + _("invalid disk 'backup' state '%s'"), backup); + return -1; + } + + def->backup = tmp; + } + + /* don't parse anything else if backup is disabled */ + if (def->backup == VIR_TRISTATE_BOOL_NO) + return 0; + + if (internal) { + tmp = 0; + if (!(state = virXMLPropString(node, "state")) || + (tmp = virDomainBackupDiskStateTypeFromString(state)) < 0) { + virReportError(VIR_ERR_INTERNAL_ERROR, + _("disk '%s' backup state wrong or missing'"), def->name); + return -1; + } + + def->state = tmp; + } + + if (!(def->store = virStorageSourceNew())) + return -1; + + if ((type = virXMLPropString(node, "type"))) { + if ((def->store->type = virStorageTypeFromString(type)) <= 0 || + def->store->type == VIR_STORAGE_TYPE_VOLUME || + def->store->type == VIR_STORAGE_TYPE_DIR) { + virReportError(VIR_ERR_XML_ERROR, + _("unknown disk backup type '%s'"), type); + return -1; + } + } else { + def->store->type = VIR_STORAGE_TYPE_FILE; + } + + if (push) + srcNode = virXPathNode("./target", ctxt); + else + srcNode = virXPathNode("./scratch", ctxt); + + if (srcNode && + virDomainStorageSourceParse(srcNode, ctxt, def->store, + storageSourceParseFlags, xmlopt) < 0) + return -1; + + if ((driver = virXPathString("string(./driver/@type)", ctxt))) { + def->store->format = virStorageFileFormatTypeFromString(driver); + if (def->store->format <= 0) { + virReportError(VIR_ERR_CONFIG_UNSUPPORTED, + _("unknown disk backup driver '%s'"), 
driver); + return -1; + } else if (!push && def->store->format != VIR_STORAGE_FILE_QCOW2) { + virReportError(VIR_ERR_CONFIG_UNSUPPORTED, + _("pull mode requires qcow2 driver, not '%s'"), + driver); + return -1; + } + } + + return 0; +} + + +static virDomainBackupDefPtr +virDomainBackupDefParse(xmlXPathContextPtr ctxt, + virDomainXMLOptionPtr xmlopt, + unsigned int flags) +{ + g_autoptr(virDomainBackupDef) def = NULL; + g_autofree xmlNodePtr *nodes = NULL; + xmlNodePtr node = NULL; + g_autofree char *mode = NULL; + bool push; + size_t i; + int n; + + def = g_new0(virDomainBackupDef, 1); + + def->type = VIR_DOMAIN_BACKUP_TYPE_PUSH; + + if ((mode = virXMLPropString(ctxt->node, "mode"))) { + if ((def->type = virDomainBackupTypeFromString(mode)) <= 0) { + virReportError(VIR_ERR_CONFIG_UNSUPPORTED, + _("unknown backup mode '%s'"), mode); + return NULL; + } + } + + push = def->type == VIR_DOMAIN_BACKUP_TYPE_PUSH; + + def->incremental = virXPathString("string(./incremental)", ctxt); + + if ((node = virXPathNode("./server", ctxt))) { + if (def->type != VIR_DOMAIN_BACKUP_TYPE_PULL) { + virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s", + _("use of <server> requires pull mode backup")); + return NULL; + } + + def->server = g_new0(virStorageNetHostDef, 1); + + if (virDomainStorageNetworkParseHost(node, def->server) < 0) + return NULL; + + if (def->server->transport == VIR_STORAGE_NET_HOST_TRANS_RDMA) { + virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s", + _("transport rdma is not supported for <server>")); + return NULL; + } + + if (def->server->transport == VIR_STORAGE_NET_HOST_TRANS_UNIX && + def->server->socket[0] != '/') { + virReportError(VIR_ERR_XML_ERROR, + _("backup socket path '%s' must be absolute"), + def->server->socket); + return NULL; + } + } + + if ((n = virXPathNodeSet("./disks/*", ctxt, &nodes)) < 0) + return NULL; + + def->disks = g_new0(virDomainBackupDiskDef, n); + + def->ndisks = n; + for (i = 0; i < def->ndisks; i++) { + if 
(virDomainBackupDiskDefParseXML(nodes[i], ctxt, + &def->disks[i], push, + flags, xmlopt) < 0) + return NULL; + } + + return g_steal_pointer(&def); +} + + +virDomainBackupDefPtr +virDomainBackupDefParseString(const char *xmlStr, + virDomainXMLOptionPtr xmlopt, + unsigned int flags) +{ + virDomainBackupDefPtr ret = NULL; + g_autoptr(xmlDoc) xml = NULL; + int keepBlanksDefault = xmlKeepBlanksDefault(0); + + if ((xml = virXMLParse(NULL, xmlStr, _("(domain_backup)")))) { + xmlKeepBlanksDefault(keepBlanksDefault); + ret = virDomainBackupDefParseNode(xml, xmlDocGetRootElement(xml), + xmlopt, flags); + } + xmlKeepBlanksDefault(keepBlanksDefault); + + return ret; +} + + +virDomainBackupDefPtr +virDomainBackupDefParseNode(xmlDocPtr xml, + xmlNodePtr root, + virDomainXMLOptionPtr xmlopt, + unsigned int flags) +{ + g_autoptr(xmlXPathContext) ctxt = NULL; + g_autofree char *schema = NULL; + + if (!virXMLNodeNameEqual(root, "domainbackup")) { + virReportError(VIR_ERR_XML_ERROR, "%s", _("domainbackup")); + return NULL; + } + + if (!(flags & VIR_DOMAIN_BACKUP_PARSE_INTERNAL)) { + if (!(schema = virFileFindResource("domainbackup.rng", + abs_top_srcdir "/docs/schemas", + PKGDATADIR "/schemas"))) + return NULL; + + if (virXMLValidateAgainstSchema(schema, xml) < 0) + return NULL; + } + + if (!(ctxt = virXMLXPathContextNew(xml))) + return NULL; + + ctxt->node = root; + return virDomainBackupDefParse(ctxt, xmlopt, flags); +} + + +static int +virDomainBackupDiskDefFormat(virBufferPtr buf, + virDomainBackupDiskDefPtr disk, + bool push, + bool internal) +{ + g_auto(virBuffer) attrBuf = VIR_BUFFER_INITIALIZER; + g_auto(virBuffer) childBuf = VIR_BUFFER_INIT_CHILD(buf); + const char *sourcename = "scratch"; + unsigned int storageSourceFormatFlags = 0; + + if (push) + sourcename = "target"; + + if (internal) + storageSourceFormatFlags |= VIR_DOMAIN_DEF_FORMAT_STATUS; + + virBufferEscapeString(&attrBuf, " name='%s'", disk->name); + virBufferAsprintf(&attrBuf, " backup='%s'", 
virTristateBoolTypeToString(disk->backup)); + if (internal) + virBufferAsprintf(&attrBuf, " state='%s'", virDomainBackupDiskStateTypeToString(disk->state)); + + if (disk->backup == VIR_TRISTATE_BOOL_YES) { + virBufferAsprintf(&attrBuf, " type='%s'", virStorageTypeToString(disk->store->type)); + + if (disk->store->format > 0) + virBufferEscapeString(&childBuf, "<driver type='%s'/>\n", + virStorageFileFormatTypeToString(disk->store->format)); + + if (virDomainDiskSourceFormat(&childBuf, disk->store, sourcename, + 0, false, storageSourceFormatFlags, NULL) < 0) + return -1; + } + + virXMLFormatElement(buf, "disk", &attrBuf, &childBuf); + return 0; +} + + +int +virDomainBackupDefFormat(virBufferPtr buf, + virDomainBackupDefPtr def, + bool internal) +{ + g_auto(virBuffer) attrBuf = VIR_BUFFER_INITIALIZER; + g_auto(virBuffer) childBuf = VIR_BUFFER_INIT_CHILD(buf); + g_auto(virBuffer) serverAttrBuf = VIR_BUFFER_INITIALIZER; + g_auto(virBuffer) disksChildBuf = VIR_BUFFER_INIT_CHILD(&childBuf); + size_t i; + + virBufferAsprintf(&attrBuf, " mode='%s'", virDomainBackupTypeToString(def->type)); + + virBufferEscapeString(&childBuf, "<incremental>%s</incremental>\n", def->incremental); + + if (def->server) { + virBufferAsprintf(&serverAttrBuf, " transport='%s'", + virStorageNetHostTransportTypeToString(def->server->transport)); + virBufferEscapeString(&serverAttrBuf, " name='%s'", def->server->name); + if (def->server->port) + virBufferAsprintf(&serverAttrBuf, " port='%u'", def->server->port); + virBufferEscapeString(&serverAttrBuf, " socket='%s'", def->server->socket); + } + + virXMLFormatElement(&childBuf, "server", &serverAttrBuf, NULL); + + for (i = 0; i < def->ndisks; i++) { + if (virDomainBackupDiskDefFormat(&disksChildBuf, &def->disks[i], + def->type == VIR_DOMAIN_BACKUP_TYPE_PUSH, + internal) < 0) + return -1; + } + + virXMLFormatElement(&childBuf, "disks", NULL, &disksChildBuf); + virXMLFormatElement(buf, "domainbackup", &attrBuf, &childBuf); + + return 0; +} + + +static 
int +virDomainBackupDefAssignStore(virDomainBackupDiskDefPtr disk, + virStorageSourcePtr src, + const char *suffix) +{ + if (virStorageSourceIsEmpty(src)) { + if (disk->store) { + virReportError(VIR_ERR_CONFIG_UNSUPPORTED, + _("disk '%s' has no media"), disk->name); + return -1; + } + } else if (src->readonly) { + if (disk->store) { + virReportError(VIR_ERR_CONFIG_UNSUPPORTED, + _("backup of readonly disk '%s' makes no sense"), + disk->name); + return -1; + } + } else if (!disk->store) { + if (virStorageSourceGetActualType(src) == VIR_STORAGE_TYPE_FILE) { + if (!(disk->store = virStorageSourceNew())) + return -1; + + disk->store->type = VIR_STORAGE_TYPE_FILE; + disk->store->path = g_strdup_printf("%s.%s", src->path, suffix); + } else { + virReportError(VIR_ERR_CONFIG_UNSUPPORTED, + _("refusing to generate file name for disk '%s'"), + disk->name); + return -1; + } + } + + return 0; +} + + +int +virDomainBackupAlignDisks(virDomainBackupDefPtr def, + virDomainDefPtr dom, + const char *suffix) +{ + g_autoptr(virHashTable) disks = NULL; + size_t i; + int ndisks; + bool backup_all = false; + + + if (!(disks = virHashNew(NULL))) + return -1; + + /* Unlikely to have a guest without disks but technically possible. */ + if (!dom->ndisks) { + virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s", + _("domain must have at least one disk to perform backup")); + return -1; + } + + /* Double check requested disks. 
*/ + for (i = 0; i < def->ndisks; i++) { + virDomainBackupDiskDefPtr backupdisk = &def->disks[i]; + virDomainDiskDefPtr domdisk; + + if (!(domdisk = virDomainDiskByTarget(dom, backupdisk->name))) { + virReportError(VIR_ERR_CONFIG_UNSUPPORTED, + _("no disk named '%s'"), backupdisk->name); + return -1; + } + + if (virHashAddEntry(disks, backupdisk->name, NULL) < 0) { + virReportError(VIR_ERR_CONFIG_UNSUPPORTED, + _("disk '%s' specified twice"), + backupdisk->name); + return -1; + } + + if (backupdisk->backup == VIR_TRISTATE_BOOL_YES && + virDomainBackupDefAssignStore(backupdisk, domdisk->src, suffix) < 0) + return -1; + } + + if (def->ndisks == 0) + backup_all = true; + + ndisks = def->ndisks; + if (VIR_EXPAND_N(def->disks, def->ndisks, dom->ndisks - def->ndisks) < 0) + return -1; + + for (i = 0; i < dom->ndisks; i++) { + virDomainBackupDiskDefPtr backupdisk = NULL; + virDomainDiskDefPtr domdisk = dom->disks[i]; + + if (virHashHasEntry(disks, domdisk->dst)) + continue; + + backupdisk = &def->disks[ndisks++]; + + if (VIR_STRDUP(backupdisk->name, domdisk->dst) < 0) + return -1; + + if (backup_all && + !virStorageSourceIsEmpty(domdisk->src) && + !domdisk->src->readonly) { + backupdisk->backup = VIR_TRISTATE_BOOL_YES; + + if (virDomainBackupDefAssignStore(backupdisk, domdisk->src, suffix) < 0) + return -1; + } else { + backupdisk->backup = VIR_TRISTATE_BOOL_NO; + } + } + + return 0; +} diff --git a/src/conf/backup_conf.h b/src/conf/backup_conf.h new file mode 100644 index 0000000000..c970e01920 --- /dev/null +++ b/src/conf/backup_conf.h @@ -0,0 +1,101 @@ +/* + * backup_conf.h: domain backup XML processing + * (based on domain_conf.h) + * + * Copyright (C) 2006-2019 Red Hat, Inc. + * Copyright (C) 2006-2008 Daniel P. 
Berrange + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2.1 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. + * + * You should have received a copy of the GNU Lesser General Public + * License along with this library. If not, see + * <http://www.gnu.org/licenses/>. + */ + +#pragma once + +#include "internal.h" +#include "virconftypes.h" + +/* Items related to incremental backup state */ + +typedef enum { + VIR_DOMAIN_BACKUP_TYPE_DEFAULT = 0, + VIR_DOMAIN_BACKUP_TYPE_PUSH, + VIR_DOMAIN_BACKUP_TYPE_PULL, + + VIR_DOMAIN_BACKUP_TYPE_LAST +} virDomainBackupType; + +typedef enum { + VIR_DOMAIN_BACKUP_DISK_STATE_NONE = 0, + VIR_DOMAIN_BACKUP_DISK_STATE_RUNNING, + VIR_DOMAIN_BACKUP_DISK_STATE_COMPLETE, + VIR_DOMAIN_BACKUP_DISK_STATE_FAILED, + VIR_DOMAIN_BACKUP_DISK_STATE_CANCELLING, + VIR_DOMAIN_BACKUP_DISK_STATE_CANCELLED, + VIR_DOMAIN_BACKUP_DISK_STATE_LAST +} virDomainBackupDiskState; + +/* Stores disk-backup information */ +typedef struct _virDomainBackupDiskDef virDomainBackupDiskDef; +typedef virDomainBackupDiskDef *virDomainBackupDiskDefPtr; +struct _virDomainBackupDiskDef { + char *name; /* name matching the <target dev='...' 
of the domain */ + virTristateBool backup; /* whether backup is requested */ + + /* details of target for push-mode, or of the scratch file for pull-mode */ + virStorageSourcePtr store; + + /* internal data */ + virDomainBackupDiskState state; +}; + +/* Stores the complete backup metadata */ +typedef struct _virDomainBackupDef virDomainBackupDef; +typedef virDomainBackupDef *virDomainBackupDefPtr; +struct _virDomainBackupDef { + /* Public XML. */ + int type; /* virDomainBackupType */ + char *incremental; + virStorageNetHostDefPtr server; /* only when type == PULL */ + + size_t ndisks; /* should not exceed dom->ndisks */ + virDomainBackupDiskDef *disks; +}; + +typedef enum { + VIR_DOMAIN_BACKUP_PARSE_INTERNAL = 1 << 0, +} virDomainBackupParseFlags; + +virDomainBackupDefPtr +virDomainBackupDefParseString(const char *xmlStr, + virDomainXMLOptionPtr xmlopt, + unsigned int flags); + +virDomainBackupDefPtr +virDomainBackupDefParseNode(xmlDocPtr xml, + xmlNodePtr root, + virDomainXMLOptionPtr xmlopt, + unsigned int flags); +void +virDomainBackupDefFree(virDomainBackupDefPtr def); + +G_DEFINE_AUTOPTR_CLEANUP_FUNC(virDomainBackupDef, virDomainBackupDefFree); + +int +virDomainBackupDefFormat(virBufferPtr buf, + virDomainBackupDefPtr def, + bool internal); +int +virDomainBackupAlignDisks(virDomainBackupDefPtr backup, + virDomainDefPtr dom, + const char *suffix); diff --git a/src/conf/virconftypes.h b/src/conf/virconftypes.h index 462842f324..9ed9b68b65 100644 --- a/src/conf/virconftypes.h +++ b/src/conf/virconftypes.h @@ -93,6 +93,9 @@ typedef virDomainABIStability *virDomainABIStabilityPtr; typedef struct _virDomainActualNetDef virDomainActualNetDef; typedef virDomainActualNetDef *virDomainActualNetDefPtr; +typedef struct _virDomainBackupDef virDomainBackupDef; +typedef virDomainBackupDef *virDomainBackupDefPtr; + typedef struct _virDomainBIOSDef virDomainBIOSDef; typedef virDomainBIOSDef *virDomainBIOSDefPtr; diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms 
index 8fe0bf9365..c7f4c3fb44 100644 --- a/src/libvirt_private.syms +++ b/src/libvirt_private.syms @@ -42,6 +42,14 @@ virAccessPermStorageVolTypeFromString; virAccessPermStorageVolTypeToString; +# conf/backup_conf.h +virDomainBackupAlignDisks; +virDomainBackupDefFormat; +virDomainBackupDefFree; +virDomainBackupDefParseNode; +virDomainBackupDefParseString; + + # conf/capabilities.h virCapabilitiesAddGuest; virCapabilitiesAddGuestDomain; -- 2.23.0
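One detail of the parser worth calling out: when a disk has no explicit target/scratch store, virDomainBackupDefAssignStore derives a path as `g_strdup_printf("%s.%s", src->path, suffix)`, i.e. the disk image path plus a job-specific suffix. A plain-C sketch of that naming rule (hypothetical helper, simplified error handling):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Build "<disk image path>.<suffix>", e.g.
 * "/images/vm.qcow2" + "1525889631" -> "/images/vm.qcow2.1525889631".
 * The caller frees the result. */
static char *
default_store_path(const char *disk_path, const char *suffix)
{
    size_t n = strlen(disk_path) + strlen(suffix) + 2; /* '.' + NUL */
    char *res = malloc(n);

    if (res)
        snprintf(res, n, "%s.%s", disk_path, suffix);
    return res;
}
```

Note that the real code only auto-generates names for VIR_STORAGE_TYPE_FILE disks and refuses to guess for anything else.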

On Tue, Dec 03, 2019 at 06:17:30PM +0100, Peter Krempa wrote:
From: Eric Blake <eblake@redhat.com>
Accept XML describing a generic block job, and output it again as needed. This may still need a few tweaks to match the documented XML and RNG schema.
Signed-off-by: Eric Blake <eblake@redhat.com> --- po/POTFILES.in | 1 + src/conf/Makefile.inc.am | 2 + src/conf/backup_conf.c | 499 +++++++++++++++++++++++++++++++++++++++ src/conf/backup_conf.h | 101 ++++++++ src/conf/virconftypes.h | 3 + src/libvirt_private.syms | 8 + 6 files changed, 614 insertions(+) create mode 100644 src/conf/backup_conf.c create mode 100644 src/conf/backup_conf.h
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com> Regards, Daniel

On Tue, Dec 03, 2019 at 06:17:30PM +0100, Peter Krempa wrote:
From: Eric Blake <eblake@redhat.com>
Accept XML describing a generic block job, and output it again as needed.
This may still need a few tweaks to match the documented XML and RNG schema.
A leftover from earlier versions? This essentially says: there might be bugs.
Signed-off-by: Eric Blake <eblake@redhat.com> --- po/POTFILES.in | 1 + src/conf/Makefile.inc.am | 2 + src/conf/backup_conf.c | 499 +++++++++++++++++++++++++++++++++++++++ src/conf/backup_conf.h | 101 ++++++++ src/conf/virconftypes.h | 3 + src/libvirt_private.syms | 8 + 6 files changed, 614 insertions(+) create mode 100644 src/conf/backup_conf.c create mode 100644 src/conf/backup_conf.h
[...]
+static int +virDomainBackupDiskDefParseXML(xmlNodePtr node, + xmlXPathContextPtr ctxt, + virDomainBackupDiskDefPtr def, + bool push, + unsigned int flags, + virDomainXMLOptionPtr xmlopt) +{ + VIR_XPATH_NODE_AUTORESTORE(ctxt); + g_autofree char *type = NULL; + g_autofree char *driver = NULL; + g_autofree char *backup = NULL; + g_autofree char *state = NULL; + int tmp; + xmlNodePtr srcNode; + unsigned int storageSourceParseFlags = 0; + bool internal = flags & VIR_DOMAIN_BACKUP_PARSE_INTERNAL; + + if (internal) + storageSourceParseFlags = VIR_DOMAIN_DEF_PARSE_STATUS; + + ctxt->node = node; + + if (!(def->name = virXMLPropString(node, "name"))) { + virReportError(VIR_ERR_XML_ERROR, "%s", + _("missing name from disk backup element")); + return -1; + } + + def->backup = VIR_TRISTATE_BOOL_YES; + + if ((backup = virXMLPropString(node, "backup"))) { + if ((tmp = virTristateBoolTypeFromString(backup)) <= 0) { + virReportError(VIR_ERR_XML_ERROR, + _("invalid disk 'backup' state '%s'"), backup); + return -1; + } + + def->backup = tmp; + } + + /* don't parse anything else if backup is disabled */ + if (def->backup == VIR_TRISTATE_BOOL_NO) + return 0; + + if (internal) { + tmp = 0;
This should not be necessary - either the condition below returns, or tmp gets overwritten.
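The redundancy the reviewer points out follows from short-circuit evaluation: the only control path that reaches the later use of `tmp` is one where the assignment inside the condition already ran. A minimal sketch demonstrating the same pattern (hypothetical parser, not the libvirt function):

```c
#include <string.h>

/* If `state` is NULL the function returns before `tmp` is ever read;
 * otherwise the `(tmp = ...)` assignment runs first. Either way a
 * `tmp = 0` pre-initialization would be dead code. */
static int
parse_disk_state(const char *state, int *out)
{
    int tmp; /* intentionally not pre-initialized */

    if (state == NULL || (tmp = (int)strlen(state)) <= 0)
        return -1; /* tmp is never read on this path */

    *out = tmp; /* reached only after tmp was assigned */
    return 0;
}
```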
+ if (!(state = virXMLPropString(node, "state")) || + (tmp = virDomainBackupDiskStateTypeFromString(state)) < 0) { + virReportError(VIR_ERR_INTERNAL_ERROR, + _("disk '%s' backup state wrong or missing'"), def->name); + return -1; + } + + def->state = tmp; + } + + if (!(def->store = virStorageSourceNew())) + return -1; + + if ((type = virXMLPropString(node, "type"))) { + if ((def->store->type = virStorageTypeFromString(type)) <= 0 || + def->store->type == VIR_STORAGE_TYPE_VOLUME || + def->store->type == VIR_STORAGE_TYPE_DIR) {
The schema whitelists file and block, while a blacklist is used here.
+ virReportError(VIR_ERR_XML_ERROR, + _("unknown disk backup type '%s'"), type);
Also, technically they are known, just unsupported.
+ return -1; + } + } else { + def->store->type = VIR_STORAGE_TYPE_FILE; + } + + if (push) + srcNode = virXPathNode("./target", ctxt); + else + srcNode = virXPathNode("./scratch", ctxt); + + if (srcNode && + virDomainStorageSourceParse(srcNode, ctxt, def->store, + storageSourceParseFlags, xmlopt) < 0) + return -1; +
If you make sure that this does not accept unwanted types: Reviewed-by: Ján Tomko <jtomko@redhat.com> Jano
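The allow-list approach the reviewer asks for can be sketched like this (enum values are illustrative; the real code uses the virStorageType enum):

```c
/* Accept only the storage types the backup RNG schema permits,
 * rather than rejecting a few known-bad ones and silently letting
 * any future enum additions through. */
enum store_type {
    STORE_FILE = 1,
    STORE_BLOCK,
    STORE_DIR,
    STORE_VOLUME,
    STORE_NETWORK
};

static int
store_type_allowed_for_backup(enum store_type t)
{
    switch (t) {
    case STORE_FILE:
    case STORE_BLOCK:
        return 1;  /* matches the schema's allow-list */
    default:
        return 0;  /* known but unsupported, or unknown */
    }
}
```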

Now that the parser and formatter are in place we can excercise it on the test files. Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- tests/Makefile.am | 1 + .../backup-pull-seclabel.xml | 18 ++++++++ tests/domainbackupxml2xmlout/backup-pull.xml | 10 ++++ .../backup-push-seclabel.xml | 17 +++++++ tests/domainbackupxml2xmlout/backup-push.xml | 10 ++++ tests/domainbackupxml2xmlout/empty.xml | 1 + tests/genericxml2xmltest.c | 46 +++++++++++++++++++ tests/virschematest.c | 3 +- 8 files changed, 105 insertions(+), 1 deletion(-) create mode 100644 tests/domainbackupxml2xmlout/backup-pull-seclabel.xml create mode 100644 tests/domainbackupxml2xmlout/backup-pull.xml create mode 100644 tests/domainbackupxml2xmlout/backup-push-seclabel.xml create mode 100644 tests/domainbackupxml2xmlout/backup-push.xml create mode 100644 tests/domainbackupxml2xmlout/empty.xml diff --git a/tests/Makefile.am b/tests/Makefile.am index ea9e2b2ad0..75eee0006c 100644 --- a/tests/Makefile.am +++ b/tests/Makefile.am @@ -92,6 +92,7 @@ EXTRA_DIST = \ cputestdata \ domaincapsdata \ domainbackupxml2xmlin \ + domainbackupxml2xmlout \ domainconfdata \ domainschemadata \ fchostdata \ diff --git a/tests/domainbackupxml2xmlout/backup-pull-seclabel.xml b/tests/domainbackupxml2xmlout/backup-pull-seclabel.xml new file mode 100644 index 0000000000..c631c9b979 --- /dev/null +++ b/tests/domainbackupxml2xmlout/backup-pull-seclabel.xml @@ -0,0 +1,18 @@ +<domainbackup mode='pull'> + <incremental>1525889631</incremental> + <server transport='tcp' name='localhost' port='10809'/> + <disks> + <disk name='vda' backup='yes' type='file'> + <driver type='qcow2'/> + <scratch file='/path/to/file'> + <seclabel model='dac' relabel='no'/> + </scratch> + </disk> + <disk name='vdb' backup='yes' type='block'> + <driver type='qcow2'/> + <scratch dev='/dev/block'> + <seclabel model='dac' relabel='no'/> + </scratch> + </disk> + </disks> +</domainbackup> diff --git a/tests/domainbackupxml2xmlout/backup-pull.xml 
b/tests/domainbackupxml2xmlout/backup-pull.xml new file mode 100644 index 0000000000..24fce9c0e7 --- /dev/null +++ b/tests/domainbackupxml2xmlout/backup-pull.xml @@ -0,0 +1,10 @@ +<domainbackup mode='pull'> + <incremental>1525889631</incremental> + <server transport='tcp' name='localhost' port='10809'/> + <disks> + <disk name='vda' backup='yes' type='file'> + <scratch file='/path/to/file'/> + </disk> + <disk name='hda' backup='no'/> + </disks> +</domainbackup> diff --git a/tests/domainbackupxml2xmlout/backup-push-seclabel.xml b/tests/domainbackupxml2xmlout/backup-push-seclabel.xml new file mode 100644 index 0000000000..9986889ba3 --- /dev/null +++ b/tests/domainbackupxml2xmlout/backup-push-seclabel.xml @@ -0,0 +1,17 @@ +<domainbackup mode='push'> + <incremental>1525889631</incremental> + <disks> + <disk name='vda' backup='yes' type='file'> + <driver type='raw'/> + <target file='/path/to/file'> + <seclabel model='dac' relabel='no'/> + </target> + </disk> + <disk name='vdb' backup='yes' type='block'> + <driver type='qcow2'/> + <target dev='/dev/block'> + <seclabel model='dac' relabel='no'/> + </target> + </disk> + </disks> +</domainbackup> diff --git a/tests/domainbackupxml2xmlout/backup-push.xml b/tests/domainbackupxml2xmlout/backup-push.xml new file mode 100644 index 0000000000..1997c814ae --- /dev/null +++ b/tests/domainbackupxml2xmlout/backup-push.xml @@ -0,0 +1,10 @@ +<domainbackup mode='push'> + <incremental>1525889631</incremental> + <disks> + <disk name='vda' backup='yes' type='file'> + <driver type='raw'/> + <target file='/path/to/file'/> + </disk> + <disk name='hda' backup='no'/> + </disks> +</domainbackup> diff --git a/tests/domainbackupxml2xmlout/empty.xml b/tests/domainbackupxml2xmlout/empty.xml new file mode 100644 index 0000000000..b1ba4953be --- /dev/null +++ b/tests/domainbackupxml2xmlout/empty.xml @@ -0,0 +1 @@ +<domainbackup mode='push'/> diff --git a/tests/genericxml2xmltest.c b/tests/genericxml2xmltest.c index 0d04413712..1376221ef8 100644 --- 
a/tests/genericxml2xmltest.c +++ b/tests/genericxml2xmltest.c @@ -8,6 +8,7 @@ #include "testutils.h" #include "internal.h" #include "virstring.h" +#include "conf/backup_conf.h" #define VIR_FROM_THIS VIR_FROM_NONE @@ -44,6 +45,41 @@ testCompareXMLToXMLHelper(const void *data) } +static int +testCompareBackupXML(const void *data) +{ + const char *testname = data; + g_autofree char *xml_in = NULL; + g_autofree char *file_in = NULL; + g_autofree char *file_out = NULL; + g_autoptr(virDomainBackupDef) backup = NULL; + g_auto(virBuffer) buf = VIR_BUFFER_INITIALIZER; + g_autofree char *actual = NULL; + + file_in = g_strdup_printf("%s/domainbackupxml2xmlin/%s.xml", + abs_srcdir, testname); + file_out = g_strdup_printf("%s/domainbackupxml2xmlout/%s.xml", + abs_srcdir, testname); + + if (virFileReadAll(file_in, 1024 * 64, &xml_in) < 0) + return -1; + + if (!(backup = virDomainBackupDefParseString(xml_in, xmlopt, 0))) { + VIR_TEST_VERBOSE("failed to parse backup def '%s'", file_in); + return -1; + } + + if (virDomainBackupDefFormat(&buf, backup, false) < 0) { + VIR_TEST_VERBOSE("failed to format backup def '%s'", file_in); + return -1; + } + + actual = virBufferContentAndReset(&buf); + + return virTestCompareToFile(actual, file_out); +} + + static int mymain(void) { @@ -149,6 +185,16 @@ mymain(void) DO_TEST_DIFFERENT("cputune"); +#define DO_TEST_BACKUP(name) \ + if (virTestRun("QEMU BACKUP XML-2-XML " name, testCompareBackupXML, name) < 0) \ + ret = -1; + + DO_TEST_BACKUP("empty"); + DO_TEST_BACKUP("backup-pull"); + DO_TEST_BACKUP("backup-pull-seclabel"); + DO_TEST_BACKUP("backup-push"); + DO_TEST_BACKUP("backup-push-seclabel"); + virObjectUnref(caps); virObjectUnref(xmlopt); diff --git a/tests/virschematest.c b/tests/virschematest.c index 5ae2d207d1..e4a440afb0 100644 --- a/tests/virschematest.c +++ b/tests/virschematest.c @@ -205,7 +205,8 @@ mymain(void) "genericxml2xmloutdata", "xlconfigdata", "libxlxml2domconfigdata", "qemuhotplugtestdomains"); 
DO_TEST_DIR("domaincaps.rng", "domaincapsdata"); - DO_TEST_DIR("domainbackup.rng", "domainbackupxml2xmlin"); + DO_TEST_DIR("domainbackup.rng", "domainbackupxml2xmlin", + "domainbackupxml2xmlout"); DO_TEST_DIR("domaincheckpoint.rng", "qemudomaincheckpointxml2xmlin", "qemudomaincheckpointxml2xmlout"); DO_TEST_DIR("domainsnapshot.rng", "qemudomainsnapshotxml2xmlin", -- 2.23.0
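The test helper added in this patch follows the usual parse-and-reformat round-trip pattern: read the input XML, parse it, format it back, and byte-compare against the expected output file. A toy stand-in for that normalization step (the real test uses virDomainBackupDefParseString and virDomainBackupDefFormat; the empty-input canonical form is taken from the empty.xml expected output above):

```c
#include <stddef.h>
#include <string.h>

/* Toy "parse + reformat": recognize only the inputs relevant to the
 * empty.xml case and emit the canonical form with the default mode
 * filled in, mimicking how the formatter always prints mode=. */
static const char *
toy_roundtrip(const char *xml_in)
{
    if (strcmp(xml_in, "<domainbackup/>") == 0 ||
        strcmp(xml_in, "<domainbackup mode='push'/>") == 0)
        return "<domainbackup mode='push'/>";
    return NULL; /* not handled by this toy */
}
```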

On Tue, Dec 03, 2019 at 06:17:31PM +0100, Peter Krempa wrote:
Now that the parser and formatter are in place we can excercise it on the test files.
Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- tests/Makefile.am | 1 + .../backup-pull-seclabel.xml | 18 ++++++++ tests/domainbackupxml2xmlout/backup-pull.xml | 10 ++++ .../backup-push-seclabel.xml | 17 +++++++ tests/domainbackupxml2xmlout/backup-push.xml | 10 ++++ tests/domainbackupxml2xmlout/empty.xml | 1 + tests/genericxml2xmltest.c | 46 +++++++++++++++++++ tests/virschematest.c | 3 +- 8 files changed, 105 insertions(+), 1 deletion(-) create mode 100644 tests/domainbackupxml2xmlout/backup-pull-seclabel.xml create mode 100644 tests/domainbackupxml2xmlout/backup-pull.xml create mode 100644 tests/domainbackupxml2xmlout/backup-push-seclabel.xml create mode 100644 tests/domainbackupxml2xmlout/backup-push.xml create mode 100644 tests/domainbackupxml2xmlout/empty.xml
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com> Regards, Daniel

On 12/3/19 11:17 AM, Peter Krempa wrote:
Now that the parser and formatter are in place we can excercise it on
exercise
the test files.
Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- Reviewed-by: Eric Blake <eblake@redhat.com>
-- Eric Blake, Principal Software Engineer Red Hat, Inc. +1-919-301-3226 Virtualization: qemu.org | libvirt.org

On Tue, Dec 03, 2019 at 06:17:31PM +0100, Peter Krempa wrote:
Now that the parser and formatter are in place we can exercise it on the test files.
Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- tests/Makefile.am | 1 + .../backup-pull-seclabel.xml | 18 ++++++++ tests/domainbackupxml2xmlout/backup-pull.xml | 10 ++++ .../backup-push-seclabel.xml | 17 +++++++ tests/domainbackupxml2xmlout/backup-push.xml | 10 ++++ tests/domainbackupxml2xmlout/empty.xml | 1 + tests/genericxml2xmltest.c | 46 +++++++++++++++++++ tests/virschematest.c | 3 +- 8 files changed, 105 insertions(+), 1 deletion(-) create mode 100644 tests/domainbackupxml2xmlout/backup-pull-seclabel.xml create mode 100644 tests/domainbackupxml2xmlout/backup-pull.xml create mode 100644 tests/domainbackupxml2xmlout/backup-push-seclabel.xml create mode 100644 tests/domainbackupxml2xmlout/backup-push.xml create mode 100644 tests/domainbackupxml2xmlout/empty.xml
Reviewed-by: Ján Tomko <jtomko@redhat.com>

Jano

From: Eric Blake <eblake@redhat.com> Introduce virsh commands for performing backup jobs. Signed-off-by: Eric Blake <eblake@redhat.com> Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- po/POTFILES.in | 1 + tools/Makefile.am | 1 + tools/virsh-backup.c | 151 +++++++++++++++++++++++++++++++++++++++++++ tools/virsh-backup.h | 21 ++++++ tools/virsh.c | 2 + tools/virsh.h | 1 + tools/virsh.pod | 31 +++++++++ 7 files changed, 208 insertions(+) create mode 100644 tools/virsh-backup.c create mode 100644 tools/virsh-backup.h diff --git a/po/POTFILES.in b/po/POTFILES.in index 0ff3beeb7e..48f3f431ec 100644 --- a/po/POTFILES.in +++ b/po/POTFILES.in @@ -329,6 +329,7 @@ @SRCDIR@/src/vz/vz_utils.h @SRCDIR@/tests/virpolkittest.c @SRCDIR@/tools/libvirt-guests.sh.in +@SRCDIR@/tools/virsh-backup.c @SRCDIR@/tools/virsh-checkpoint.c @SRCDIR@/tools/virsh-completer-host.c @SRCDIR@/tools/virsh-console.c diff --git a/tools/Makefile.am b/tools/Makefile.am index 1a541a3984..b9d31838df 100644 --- a/tools/Makefile.am +++ b/tools/Makefile.am @@ -232,6 +232,7 @@ virt_login_shell_helper_CFLAGS = \ virsh_SOURCES = \ virsh.c virsh.h \ + virsh-backup.c virsh-backup.h\ virsh-checkpoint.c virsh-checkpoint.h \ virsh-completer.c virsh-completer.h \ virsh-completer-domain.c virsh-completer-domain.h \ diff --git a/tools/virsh-backup.c b/tools/virsh-backup.c new file mode 100644 index 0000000000..04464c6bff --- /dev/null +++ b/tools/virsh-backup.c @@ -0,0 +1,151 @@ +/* + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2.1 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. 
+ * + * You should have received a copy of the GNU Lesser General Public + * License along with this library. If not, see + * <http://www.gnu.org/licenses/>. + */ + +#include <config.h> +#include "virsh-backup.h" +#include "virsh-util.h" + +#include "internal.h" +#include "virfile.h" + +/* + * "backup-begin" command + */ +static const vshCmdInfo info_backup_begin[] = { + {.name = "help", + .data = N_("Start a disk backup of a live domain") + }, + {.name = "desc", + .data = N_("Use XML to start a full or incremental disk backup of a live " + "domain, optionally creating a checkpoint") + }, + {.name = NULL} +}; + +static const vshCmdOptDef opts_backup_begin[] = { + VIRSH_COMMON_OPT_DOMAIN_FULL(0), + {.name = "backupxml", + .type = VSH_OT_STRING, + .help = N_("domain backup XML"), + }, + {.name = "checkpointxml", + .type = VSH_OT_STRING, + .help = N_("domain checkpoint XML"), + }, + {.name = "reuse-external", + .type = VSH_OT_BOOL, + .help = N_("reuse files provided by caller"), + }, + {.name = NULL} +}; + +static bool +cmdBackupBegin(vshControl *ctl, + const vshCmd *cmd) +{ + g_autoptr(virshDomain) dom = NULL; + const char *backup_from = NULL; + g_autofree char *backup_buffer = NULL; + const char *check_from = NULL; + g_autofree char *check_buffer = NULL; + unsigned int flags = 0; + + if (vshCommandOptBool(cmd, "reuse-external")) + flags |= VIR_DOMAIN_BACKUP_BEGIN_REUSE_EXTERNAL; + + if (!(dom = virshCommandOptDomain(ctl, cmd, NULL))) + return false; + + if (vshCommandOptStringReq(ctl, cmd, "backupxml", &backup_from) < 0) + return false; + + if (!backup_from) { + backup_buffer = g_strdup("<domainbackup/>"); + } else { + if (virFileReadAll(backup_from, VSH_MAX_XML_FILE, &backup_buffer) < 0) { + vshSaveLibvirtError(); + return false; + } + } + + if (vshCommandOptStringReq(ctl, cmd, "checkpointxml", &check_from) < 0) + return false; + if (check_from) { + if (virFileReadAll(check_from, VSH_MAX_XML_FILE, &check_buffer) < 0) { + vshSaveLibvirtError(); + return false; + } + 
} + + if (virDomainBackupBegin(dom, backup_buffer, check_buffer, flags) < 0) + return false; + + vshPrint(ctl, _("Backup started\n")); + return true; +} + + +/* + * "backup-dumpxml" command + */ +static const vshCmdInfo info_backup_dumpxml[] = { + {.name = "help", + .data = N_("Dump XML for an ongoing domain block backup job") + }, + {.name = "desc", + .data = N_("Backup Dump XML") + }, + {.name = NULL} +}; + +static const vshCmdOptDef opts_backup_dumpxml[] = { + VIRSH_COMMON_OPT_DOMAIN_FULL(0), + {.name = NULL} +}; + +static bool +cmdBackupDumpXML(vshControl *ctl, + const vshCmd *cmd) +{ + g_autoptr(virshDomain) dom = NULL; + g_autofree char *xml = NULL; + + if (!(dom = virshCommandOptDomain(ctl, cmd, NULL))) + return false; + + if (!(xml = virDomainBackupGetXMLDesc(dom, 0))) + return false; + + vshPrint(ctl, "%s", xml); + return true; +} + + +const vshCmdDef backupCmds[] = { + {.name = "backup-begin", + .handler = cmdBackupBegin, + .opts = opts_backup_begin, + .info = info_backup_begin, + .flags = 0 + }, + {.name = "backup-dumpxml", + .handler = cmdBackupDumpXML, + .opts = opts_backup_dumpxml, + .info = info_backup_dumpxml, + .flags = 0 + }, + {.name = NULL} +}; diff --git a/tools/virsh-backup.h b/tools/virsh-backup.h new file mode 100644 index 0000000000..95c2f5a424 --- /dev/null +++ b/tools/virsh-backup.h @@ -0,0 +1,21 @@ +/* + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2.1 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. + * + * You should have received a copy of the GNU Lesser General Public + * License along with this library. 
If not, see + * <http://www.gnu.org/licenses/>. + */ + +#pragma once + +#include "virsh.h" + +extern const vshCmdDef backupCmds[]; diff --git a/tools/virsh.c b/tools/virsh.c index 8c0e9d960d..e70711f5d2 100644 --- a/tools/virsh.c +++ b/tools/virsh.c @@ -47,6 +47,7 @@ #include "virstring.h" #include "virgettext.h" +#include "virsh-backup.h" #include "virsh-checkpoint.h" #include "virsh-console.h" #include "virsh-domain.h" @@ -831,6 +832,7 @@ static const vshCmdGrp cmdGroups[] = { {VIRSH_CMD_GRP_NODEDEV, "nodedev", nodedevCmds}, {VIRSH_CMD_GRP_SECRET, "secret", secretCmds}, {VIRSH_CMD_GRP_SNAPSHOT, "snapshot", snapshotCmds}, + {VIRSH_CMD_GRP_BACKUP, "backup", backupCmds}, {VIRSH_CMD_GRP_STORAGE_POOL, "pool", storagePoolCmds}, {VIRSH_CMD_GRP_STORAGE_VOL, "volume", storageVolCmds}, {VIRSH_CMD_GRP_VIRSH, "virsh", virshCmds}, diff --git a/tools/virsh.h b/tools/virsh.h index b4e610b2a4..d84659124a 100644 --- a/tools/virsh.h +++ b/tools/virsh.h @@ -51,6 +51,7 @@ #define VIRSH_CMD_GRP_NWFILTER "Network Filter" #define VIRSH_CMD_GRP_SECRET "Secret" #define VIRSH_CMD_GRP_SNAPSHOT "Snapshot" +#define VIRSH_CMD_GRP_BACKUP "Backup" #define VIRSH_CMD_GRP_HOST_AND_HV "Host and Hypervisor" #define VIRSH_CMD_GRP_VIRSH "Virsh itself" diff --git a/tools/virsh.pod b/tools/virsh.pod index a8331154e1..b04f7c0fdc 100644 --- a/tools/virsh.pod +++ b/tools/virsh.pod @@ -1327,6 +1327,37 @@ addresses, currently 'lease' to read DHCP leases, 'agent' to query the guest OS via an agent, or 'arp' to get IP from host's arp tables. If unspecified, 'lease' is the default. +=item B<backup-begin> I<domain> [I<backupxml>] [I<checkpointxml>] +[I<--reuse-external>] + +Begin a new backup job. 
If I<backupxml> is omitted, this defaults to a full +backup using a push model to filenames generated by libvirt; supplying XML +allows fine-tuning such as requesting an incremental backup relative to an +earlier checkpoint, controlling which disks participate or which +filenames are involved, or requesting the use of a pull model backup. +The B<backup-dumpxml> command shows any resulting values assigned by +libvirt. For more information on backup XML, see: +L<https://libvirt.org/formatbackup.html>. + +If I<--reuse-external> is used it instructs libvirt to reuse temporary +and output files provided by the user in I<backupxml>. + +If I<checkpointxml> is specified, a second file with a top-level +element of <domaincheckpoint> is used to create a simultaneous +checkpoint, for doing a later incremental backup relative to the time +the backup was created. See B<checkpoint-create> for more details on +checkpoints. + +This command returns as soon as possible, and the backup job runs in +the background; the progress of a push model backup can be checked +with B<domjobinfo> or by waiting for an event with B<event> (the +progress of a pull model backup is under the control of whatever third +party connects to the NBD export). The job is ended with B<domjobabort>. + +=item B<backup-dumpxml> I<domain> + +Output XML describing the current backup job. + =item B<domiflist> I<domain> [I<--inactive>] Print a table showing the brief information of all virtual interfaces -- 2.23.0
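For illustration, a pull-mode I<backupxml> file matching the schema documented at L<https://libvirt.org/formatbackup.html> could look like the sketch below. All concrete values (disk name, port, scratch path, checkpoint name) are hypothetical examples, not defaults chosen by libvirt:

```xml
<domainbackup mode="pull">
  <!-- optional: do an incremental backup relative to an existing checkpoint -->
  <incremental>example-checkpoint-1</incremental>
  <!-- NBD server that third-party clients connect to for pull-mode backups -->
  <server transport="tcp" name="localhost" port="10809"/>
  <disks>
    <disk name="vda" backup="yes" type="file">
      <!-- temporary file used while the export is active; removed by
           libvirt when the job finishes unless --reuse-external is used -->
      <scratch file="/var/lib/libvirt/images/vda.backup.scratch"/>
    </disk>
    <disk name="vdb" backup="no"/>
  </disks>
</domainbackup>
```

Such a file would be passed as C<virsh backup-begin $dom backup.xml>.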

On Tue, Dec 03, 2019 at 06:17:32PM +0100, Peter Krempa wrote:
From: Eric Blake <eblake@redhat.com>
Introduce virsh commands for performing backup jobs.
Signed-off-by: Eric Blake <eblake@redhat.com> Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- po/POTFILES.in | 1 + tools/Makefile.am | 1 + tools/virsh-backup.c | 151 +++++++++++++++++++++++++++++++++++++++++++ tools/virsh-backup.h | 21 ++++++ tools/virsh.c | 2 + tools/virsh.h | 1 + tools/virsh.pod | 31 +++++++++ 7 files changed, 208 insertions(+) create mode 100644 tools/virsh-backup.c create mode 100644 tools/virsh-backup.h
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>

Regards,
Daniel

On Tue, Dec 03, 2019 at 06:17:32PM +0100, Peter Krempa wrote:
From: Eric Blake <eblake@redhat.com>
Introduce virsh commands for performing backup jobs.
Signed-off-by: Eric Blake <eblake@redhat.com> Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- po/POTFILES.in | 1 + tools/Makefile.am | 1 + tools/virsh-backup.c | 151 +++++++++++++++++++++++++++++++++++++++++++ tools/virsh-backup.h | 21 ++++++ tools/virsh.c | 2 + tools/virsh.h | 1 + tools/virsh.pod | 31 +++++++++ 7 files changed, 208 insertions(+) create mode 100644 tools/virsh-backup.c create mode 100644 tools/virsh-backup.h
diff --git a/po/POTFILES.in b/po/POTFILES.in index 0ff3beeb7e..48f3f431ec 100644 --- a/po/POTFILES.in +++ b/po/POTFILES.in @@ -329,6 +329,7 @@ @SRCDIR@/src/vz/vz_utils.h @SRCDIR@/tests/virpolkittest.c @SRCDIR@/tools/libvirt-guests.sh.in +@SRCDIR@/tools/virsh-backup.c @SRCDIR@/tools/virsh-checkpoint.c @SRCDIR@/tools/virsh-completer-host.c @SRCDIR@/tools/virsh-console.c diff --git a/tools/Makefile.am b/tools/Makefile.am index 1a541a3984..b9d31838df 100644 --- a/tools/Makefile.am +++ b/tools/Makefile.am @@ -232,6 +232,7 @@ virt_login_shell_helper_CFLAGS = \
virsh_SOURCES = \ virsh.c virsh.h \ + virsh-backup.c virsh-backup.h\
Missing space.
virsh-checkpoint.c virsh-checkpoint.h \ virsh-completer.c virsh-completer.h \ virsh-completer-domain.c virsh-completer-domain.h \
Reviewed-by: Ján Tomko <jtomko@redhat.com>

Jano

Introduce QEMU_DOMAIN_JOB_STATS_TYPE_BACKUP and the convertors and other plumbing to be able to report statistics for the backup job. Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- src/qemu/qemu_domain.c | 62 ++++++++++++++++++++++++++++++++++++++++++ src/qemu/qemu_domain.h | 10 +++++++ src/qemu/qemu_driver.c | 4 +++ 3 files changed, 76 insertions(+) diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c index 8d2923300d..c1b0f81c81 100644 --- a/src/qemu/qemu_domain.c +++ b/src/qemu/qemu_domain.c @@ -519,6 +519,12 @@ qemuDomainJobInfoToInfo(qemuDomainJobInfoPtr jobInfo, info->memRemaining = info->memTotal - info->memProcessed; break; + case QEMU_DOMAIN_JOB_STATS_TYPE_BACKUP: + info->fileTotal = jobInfo->stats.backup.total; + info->fileProcessed = jobInfo->stats.backup.transferred; + info->fileRemaining = info->fileTotal - info->fileProcessed; + break; + case QEMU_DOMAIN_JOB_STATS_TYPE_NONE: break; } @@ -751,6 +757,59 @@ qemuDomainDumpJobInfoToParams(qemuDomainJobInfoPtr jobInfo, } +static int +qemuDomainBackupJobInfoToParams(qemuDomainJobInfoPtr jobInfo, + int *type, + virTypedParameterPtr *params, + int *nparams) +{ + qemuDomainBackupStats *stats = &jobInfo->stats.backup; + g_autoptr(virTypedParamList) par = g_new0(virTypedParamList, 1); + + if (virTypedParamListAddInt(par, jobInfo->operation, + VIR_DOMAIN_JOB_OPERATION) < 0) + return -1; + + if (virTypedParamListAddULLong(par, jobInfo->timeElapsed, + VIR_DOMAIN_JOB_TIME_ELAPSED) < 0) + return -1; + + if (stats->transferred > 0 || stats->total > 0) { + if (virTypedParamListAddULLong(par, stats->total, + VIR_DOMAIN_JOB_DISK_TOTAL) < 0) + return -1; + + if (virTypedParamListAddULLong(par, stats->transferred, + VIR_DOMAIN_JOB_DISK_PROCESSED) < 0) + return -1; + + if (virTypedParamListAddULLong(par, stats->total - stats->transferred, + VIR_DOMAIN_JOB_DISK_REMAINING) < 0) + return -1; + } + + if (stats->tmp_used > 0 || stats->tmp_total > 0) { + if (virTypedParamListAddULLong(par, stats->tmp_used, + 
VIR_DOMAIN_JOB_DISK_TEMP_USED) < 0) + return -1; + + if (virTypedParamListAddULLong(par, stats->tmp_total, + VIR_DOMAIN_JOB_DISK_TEMP_TOTAL) < 0) + return -1; + } + + if (jobInfo->status != QEMU_DOMAIN_JOB_STATUS_ACTIVE && + virTypedParamListAddBoolean(par, + jobInfo->status == QEMU_DOMAIN_JOB_STATUS_COMPLETED, + VIR_DOMAIN_JOB_SUCCESS) < 0) + return -1; + + *nparams = virTypedParamListStealParams(par, params); + *type = qemuDomainJobStatusToType(jobInfo->status); + return 0; +} + + int qemuDomainJobInfoToParams(qemuDomainJobInfoPtr jobInfo, int *type, @@ -765,6 +824,9 @@ qemuDomainJobInfoToParams(qemuDomainJobInfoPtr jobInfo, case QEMU_DOMAIN_JOB_STATS_TYPE_MEMDUMP: return qemuDomainDumpJobInfoToParams(jobInfo, type, params, nparams); + case QEMU_DOMAIN_JOB_STATS_TYPE_BACKUP: + return qemuDomainBackupJobInfoToParams(jobInfo, type, params, nparams); + case QEMU_DOMAIN_JOB_STATS_TYPE_NONE: virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("invalid job statistics type")); diff --git a/src/qemu/qemu_domain.h b/src/qemu/qemu_domain.h index 608546a27c..a552af6180 100644 --- a/src/qemu/qemu_domain.h +++ b/src/qemu/qemu_domain.h @@ -126,6 +126,7 @@ typedef enum { QEMU_DOMAIN_JOB_STATS_TYPE_MIGRATION, QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP, QEMU_DOMAIN_JOB_STATS_TYPE_MEMDUMP, + QEMU_DOMAIN_JOB_STATS_TYPE_BACKUP, } qemuDomainJobStatsType; @@ -136,6 +137,14 @@ struct _qemuDomainMirrorStats { unsigned long long total; }; +typedef struct _qemuDomainBackupStats qemuDomainBackupStats; +struct _qemuDomainBackupStats { + unsigned long long transferred; + unsigned long long total; + unsigned long long tmp_used; + unsigned long long tmp_total; +}; + typedef struct _qemuDomainJobInfo qemuDomainJobInfo; typedef qemuDomainJobInfo *qemuDomainJobInfoPtr; struct _qemuDomainJobInfo { @@ -160,6 +169,7 @@ struct _qemuDomainJobInfo { union { qemuMonitorMigrationStats mig; qemuMonitorDumpStats dump; + qemuDomainBackupStats backup; } stats; qemuDomainMirrorStats mirrorStats; }; diff --git 
a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index 1911073f3e..3ebc902d4f 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -13905,6 +13905,10 @@ qemuDomainGetJobStatsInternal(virQEMUDriverPtr driver, goto cleanup; break; + case QEMU_DOMAIN_JOB_STATS_TYPE_BACKUP: + /* TODO implement for backup job */ + break; + case QEMU_DOMAIN_JOB_STATS_TYPE_NONE: break; } -- 2.23.0

On Tue, Dec 03, 2019 at 06:17:33PM +0100, Peter Krempa wrote:
Introduce QEMU_DOMAIN_JOB_STATS_TYPE_BACKUP and the convertors and other plumbing to be able to report statistics for the backup job.
Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- src/qemu/qemu_domain.c | 62 ++++++++++++++++++++++++++++++++++++++++++ src/qemu/qemu_domain.h | 10 +++++++ src/qemu/qemu_driver.c | 4 +++ 3 files changed, 76 insertions(+)
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>

Regards,
Daniel

On 12/3/19 11:17 AM, Peter Krempa wrote:
Introduce QEMU_DOMAIN_JOB_STATS_TYPE_BACKUP and the convertors and other plumbing to be able to report statistics for the backup job.
Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- src/qemu/qemu_domain.c | 62 ++++++++++++++++++++++++++++++++++++++++++ src/qemu/qemu_domain.h | 10 +++++++ src/qemu/qemu_driver.c | 4 +++ 3 files changed, 76 insertions(+)
+++ b/src/qemu/qemu_driver.c @@ -13905,6 +13905,10 @@ qemuDomainGetJobStatsInternal(virQEMUDriverPtr driver, goto cleanup; break;
+ case QEMU_DOMAIN_JOB_STATS_TYPE_BACKUP: + /* TODO implement for backup job */ + break;
The TODO goes away later in the series.

Reviewed-by: Eric Blake <eblake@redhat.com>

On Tue, Dec 03, 2019 at 06:17:33PM +0100, Peter Krempa wrote:
Introduce QEMU_DOMAIN_JOB_STATS_TYPE_BACKUP and the convertors and other plumbing to be able to report statistics for the backup job.
Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- src/qemu/qemu_domain.c | 62 ++++++++++++++++++++++++++++++++++++++++++ src/qemu/qemu_domain.h | 10 +++++++ src/qemu/qemu_driver.c | 4 +++ 3 files changed, 76 insertions(+)
Reviewed-by: Ján Tomko <jtomko@redhat.com>

Jano

We will want to use the async job infrastructure along with all the APIs and event for the backup job so add the backup job as a new async job type. Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- src/qemu/qemu_domain.c | 3 +++ src/qemu/qemu_domain.h | 1 + src/qemu/qemu_migration.c | 2 ++ src/qemu/qemu_process.c | 25 +++++++++++++++++++++++++ 4 files changed, 31 insertions(+) diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c index c1b0f81c81..4c8ffd60b0 100644 --- a/src/qemu/qemu_domain.c +++ b/src/qemu/qemu_domain.c @@ -114,6 +114,7 @@ VIR_ENUM_IMPL(qemuDomainAsyncJob, "dump", "snapshot", "start", + "backup", ); VIR_ENUM_IMPL(qemuDomainNamespace, @@ -210,6 +211,7 @@ qemuDomainAsyncJobPhaseToString(qemuDomainAsyncJob job, case QEMU_ASYNC_JOB_SNAPSHOT: case QEMU_ASYNC_JOB_START: case QEMU_ASYNC_JOB_NONE: + case QEMU_ASYNC_JOB_BACKUP: G_GNUC_FALLTHROUGH; case QEMU_ASYNC_JOB_LAST: break; @@ -235,6 +237,7 @@ qemuDomainAsyncJobPhaseFromString(qemuDomainAsyncJob job, case QEMU_ASYNC_JOB_SNAPSHOT: case QEMU_ASYNC_JOB_START: case QEMU_ASYNC_JOB_NONE: + case QEMU_ASYNC_JOB_BACKUP: G_GNUC_FALLTHROUGH; case QEMU_ASYNC_JOB_LAST: break; diff --git a/src/qemu/qemu_domain.h b/src/qemu/qemu_domain.h index a552af6180..4cd7eec4ce 100644 --- a/src/qemu/qemu_domain.h +++ b/src/qemu/qemu_domain.h @@ -104,6 +104,7 @@ typedef enum { QEMU_ASYNC_JOB_DUMP, QEMU_ASYNC_JOB_SNAPSHOT, QEMU_ASYNC_JOB_START, + QEMU_ASYNC_JOB_BACKUP, QEMU_ASYNC_JOB_LAST } qemuDomainAsyncJob; diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c index dabdda2715..d0e2b65d01 100644 --- a/src/qemu/qemu_migration.c +++ b/src/qemu/qemu_migration.c @@ -1415,6 +1415,8 @@ qemuMigrationJobName(virDomainObjPtr vm) return _("snapshot job"); case QEMU_ASYNC_JOB_START: return _("start job"); + case QEMU_ASYNC_JOB_BACKUP: + return _("backup job"); case QEMU_ASYNC_JOB_LAST: default: return _("job"); diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c index 75ee3893c6..cca280992f 100644 --- 
a/src/qemu/qemu_process.c +++ b/src/qemu/qemu_process.c @@ -89,6 +89,7 @@ #include "virresctrl.h" #include "virvsock.h" #include "viridentity.h" +#include "virthreadjob.h" #define VIR_FROM_THIS VIR_FROM_QEMU @@ -3583,6 +3584,7 @@ qemuProcessRecoverJob(virQEMUDriverPtr driver, qemuDomainObjPrivatePtr priv = vm->privateData; virDomainState state; int reason; + unsigned long long now; state = virDomainObjGetState(vm, &reason); @@ -3632,6 +3634,29 @@ qemuProcessRecoverJob(virQEMUDriverPtr driver, /* Already handled in VIR_DOMAIN_PAUSED_STARTING_UP check. */ break; + case QEMU_ASYNC_JOB_BACKUP: + ignore_value(virTimeMillisNow(&now)); + + /* Restore the config of the async job which is not persisted */ + priv->jobs_queued++; + priv->job.asyncJob = QEMU_ASYNC_JOB_BACKUP; + priv->job.asyncOwnerAPI = virThreadJobGet(); + priv->job.asyncStarted = now; + + qemuDomainObjSetAsyncJobMask(vm, (QEMU_JOB_DEFAULT_MASK | + JOB_MASK(QEMU_JOB_SUSPEND) | + JOB_MASK(QEMU_JOB_MODIFY))); + + /* We reset the job parameters for backup so that the job will look + * active. This is possible because we are able to recover the state + * of blockjobs and also the backup job allows all sub-job types */ + priv->job.current = g_new0(qemuDomainJobInfo, 1); + priv->job.current->operation = VIR_DOMAIN_JOB_OPERATION_BACKUP; + priv->job.current->statsType = QEMU_DOMAIN_JOB_STATS_TYPE_BACKUP; + priv->job.current->status = QEMU_DOMAIN_JOB_STATUS_ACTIVE; + priv->job.current->started = now; + break; + case QEMU_ASYNC_JOB_NONE: case QEMU_ASYNC_JOB_LAST: break; -- 2.23.0

On Tue, Dec 03, 2019 at 06:17:34PM +0100, Peter Krempa wrote:
We will want to use the async job infrastructure along with all the APIs and event for the backup job so add the backup job as a new async job type.
Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- src/qemu/qemu_domain.c | 3 +++ src/qemu/qemu_domain.h | 1 + src/qemu/qemu_migration.c | 2 ++ src/qemu/qemu_process.c | 25 +++++++++++++++++++++++++ 4 files changed, 31 insertions(+)
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>

Regards,
Daniel

On 12/3/19 11:17 AM, Peter Krempa wrote:
We will want to use the async job infrastructure along with all the APIs and event for the backup job so add the backup job as a new async job type.
Signed-off-by: Peter Krempa <pkrempa@redhat.com> ---
Reviewed-by: Eric Blake <eblake@redhat.com>

On Tue, Dec 03, 2019 at 06:17:34PM +0100, Peter Krempa wrote:
We will want to use the async job infrastructure along with all the APIs and event for the backup job so add the backup job as a new async job type.
Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- src/qemu/qemu_domain.c | 3 +++ src/qemu/qemu_domain.h | 1 + src/qemu/qemu_migration.c | 2 ++ src/qemu/qemu_process.c | 25 +++++++++++++++++++++++++ 4 files changed, 31 insertions(+)
Reviewed-by: Ján Tomko <jtomko@redhat.com>

Jano

A backup job may consist of many backup sub-blockjobs. Add the new blockjob type and add all type converter strings. Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- examples/c/misc/event-test.c | 3 +++ include/libvirt/libvirt-domain.h | 3 +++ src/conf/domain_conf.c | 2 +- src/qemu/qemu_blockjob.c | 3 +++ src/qemu/qemu_blockjob.h | 1 + src/qemu/qemu_domain.c | 4 ++++ src/qemu/qemu_driver.c | 1 + src/qemu/qemu_monitor_json.c | 4 ++++ tools/virsh-domain.c | 4 +++- 9 files changed, 23 insertions(+), 2 deletions(-) diff --git a/examples/c/misc/event-test.c b/examples/c/misc/event-test.c index 5db572175d..ae282a5027 100644 --- a/examples/c/misc/event-test.c +++ b/examples/c/misc/event-test.c @@ -891,6 +891,9 @@ blockJobTypeToStr(int type) case VIR_DOMAIN_BLOCK_JOB_TYPE_ACTIVE_COMMIT: return "active layer block commit"; + + case VIR_DOMAIN_BLOCK_JOB_TYPE_BACKUP: + return "backup"; } return "unknown"; diff --git a/include/libvirt/libvirt-domain.h b/include/libvirt/libvirt-domain.h index 6d1c7f1a3b..f40096af88 100644 --- a/include/libvirt/libvirt-domain.h +++ b/include/libvirt/libvirt-domain.h @@ -2446,6 +2446,9 @@ typedef enum { * exists as long as sync is active */ VIR_DOMAIN_BLOCK_JOB_TYPE_ACTIVE_COMMIT = 4, + /* Backup (virDomainBackupBegin) */ + VIR_DOMAIN_BLOCK_JOB_TYPE_BACKUP = 5, + # ifdef VIR_ENUM_SENTINELS VIR_DOMAIN_BLOCK_JOB_TYPE_LAST # endif diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c index 9580884747..78436a89a2 100644 --- a/src/conf/domain_conf.c +++ b/src/conf/domain_conf.c @@ -1227,7 +1227,7 @@ VIR_ENUM_IMPL(virDomainOsDefFirmware, VIR_ENUM_DECL(virDomainBlockJob); VIR_ENUM_IMPL(virDomainBlockJob, VIR_DOMAIN_BLOCK_JOB_TYPE_LAST, - "", "", "copy", "", "active-commit", + "", "", "copy", "", "active-commit", "", ); VIR_ENUM_IMPL(virDomainMemoryModel, diff --git a/src/qemu/qemu_blockjob.c b/src/qemu/qemu_blockjob.c index baa79ea80c..5455eaba65 100644 --- a/src/qemu/qemu_blockjob.c +++ b/src/qemu/qemu_blockjob.c @@ -65,6 +65,7 @@ 
VIR_ENUM_IMPL(qemuBlockjob, "copy", "commit", "active-commit", + "backup", "", "create", "broken"); @@ -1276,6 +1277,8 @@ qemuBlockJobEventProcessConcludedTransition(qemuBlockJobDataPtr job, qemuBlockJobProcessEventConcludedCopyAbort(driver, vm, job, asyncJob); break; + case QEMU_BLOCKJOB_TYPE_BACKUP: + break; case QEMU_BLOCKJOB_TYPE_BROKEN: case QEMU_BLOCKJOB_TYPE_NONE: diff --git a/src/qemu/qemu_blockjob.h b/src/qemu/qemu_blockjob.h index fdfe2c57ec..4734984c99 100644 --- a/src/qemu/qemu_blockjob.h +++ b/src/qemu/qemu_blockjob.h @@ -60,6 +60,7 @@ typedef enum { QEMU_BLOCKJOB_TYPE_COPY = VIR_DOMAIN_BLOCK_JOB_TYPE_COPY, QEMU_BLOCKJOB_TYPE_COMMIT = VIR_DOMAIN_BLOCK_JOB_TYPE_COMMIT, QEMU_BLOCKJOB_TYPE_ACTIVE_COMMIT = VIR_DOMAIN_BLOCK_JOB_TYPE_ACTIVE_COMMIT, + QEMU_BLOCKJOB_TYPE_BACKUP = VIR_DOMAIN_BLOCK_JOB_TYPE_BACKUP, /* Additional enum values local to qemu */ QEMU_BLOCKJOB_TYPE_INTERNAL, QEMU_BLOCKJOB_TYPE_CREATE, diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c index 4c8ffd60b0..980287d2a0 100644 --- a/src/qemu/qemu_domain.c +++ b/src/qemu/qemu_domain.c @@ -2603,6 +2603,8 @@ qemuDomainObjPrivateXMLFormatBlockjobIterator(void *payload, virBufferAddLit(&attrBuf, " shallownew='yes'"); break; + case QEMU_BLOCKJOB_TYPE_BACKUP: + break; case QEMU_BLOCKJOB_TYPE_BROKEN: case QEMU_BLOCKJOB_TYPE_NONE: @@ -3169,6 +3171,8 @@ qemuDomainObjPrivateXMLParseBlockjobDataSpecific(qemuBlockJobDataPtr job, } break; + case QEMU_BLOCKJOB_TYPE_BACKUP: + break; case QEMU_BLOCKJOB_TYPE_BROKEN: case QEMU_BLOCKJOB_TYPE_NONE: diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index 3ebc902d4f..913ab18812 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -17427,6 +17427,7 @@ qemuDomainBlockPivot(virQEMUDriverPtr driver, case QEMU_BLOCKJOB_TYPE_PULL: case QEMU_BLOCKJOB_TYPE_COMMIT: + case QEMU_BLOCKJOB_TYPE_BACKUP: case QEMU_BLOCKJOB_TYPE_INTERNAL: case QEMU_BLOCKJOB_TYPE_CREATE: case QEMU_BLOCKJOB_TYPE_BROKEN: diff --git a/src/qemu/qemu_monitor_json.c 
b/src/qemu/qemu_monitor_json.c index 9f3783ab70..391f39668a 100644 --- a/src/qemu/qemu_monitor_json.c +++ b/src/qemu/qemu_monitor_json.c @@ -1147,6 +1147,8 @@ qemuMonitorJSONHandleBlockJobImpl(qemuMonitorPtr mon, type = VIR_DOMAIN_BLOCK_JOB_TYPE_COMMIT; else if (STREQ(type_str, "mirror")) type = VIR_DOMAIN_BLOCK_JOB_TYPE_COPY; + else if (STREQ(type_str, "backup")) + type = VIR_DOMAIN_BLOCK_JOB_TYPE_BACKUP; switch ((virConnectDomainEventBlockJobStatus) event) { case VIR_DOMAIN_BLOCK_JOB_COMPLETED: @@ -4844,6 +4846,8 @@ qemuMonitorJSONParseBlockJobInfo(virHashTablePtr blockJobs, info->type = VIR_DOMAIN_BLOCK_JOB_TYPE_COMMIT; else if (STREQ(type, "mirror")) info->type = VIR_DOMAIN_BLOCK_JOB_TYPE_COPY; + else if (STREQ(type, "backup")) + info->type = VIR_DOMAIN_BLOCK_JOB_TYPE_BACKUP; else info->type = VIR_DOMAIN_BLOCK_JOB_TYPE_UNKNOWN; diff --git a/tools/virsh-domain.c b/tools/virsh-domain.c index 5c313279d7..b009e4bfcc 100644 --- a/tools/virsh-domain.c +++ b/tools/virsh-domain.c @@ -2550,7 +2550,9 @@ VIR_ENUM_IMPL(virshDomainBlockJob, N_("Block Pull"), N_("Block Copy"), N_("Block Commit"), - N_("Active Block Commit")); + N_("Active Block Commit"), + N_("Backup"), +); static const char * virshDomainBlockJobToString(int type) -- 2.23.0

On Tue, Dec 03, 2019 at 06:17:35PM +0100, Peter Krempa wrote:
A backup job may consist of many backup sub-blockjobs. Add the new blockjob type and add all type converter strings.
Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- examples/c/misc/event-test.c | 3 +++ include/libvirt/libvirt-domain.h | 3 +++ src/conf/domain_conf.c | 2 +- src/qemu/qemu_blockjob.c | 3 +++ src/qemu/qemu_blockjob.h | 1 + src/qemu/qemu_domain.c | 4 ++++ src/qemu/qemu_driver.c | 1 + src/qemu/qemu_monitor_json.c | 4 ++++ tools/virsh-domain.c | 4 +++- 9 files changed, 23 insertions(+), 2 deletions(-)
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>

Regards,
Daniel

On 12/3/19 11:17 AM, Peter Krempa wrote:
A backup job may consist of many backup sub-blockjobs. Add the new blockjob type and add all type converter strings.
Signed-off-by: Peter Krempa <pkrempa@redhat.com> ---
Reviewed-by: Eric Blake <eblake@redhat.com>

On Tue, Dec 03, 2019 at 06:17:35PM +0100, Peter Krempa wrote:
A backup job may consist of many backup sub-blockjobs. Add the new blockjob type and add all type converter strings.
Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- examples/c/misc/event-test.c | 3 +++ include/libvirt/libvirt-domain.h | 3 +++ src/conf/domain_conf.c | 2 +- src/qemu/qemu_blockjob.c | 3 +++ src/qemu/qemu_blockjob.h | 1 + src/qemu/qemu_domain.c | 4 ++++ src/qemu/qemu_driver.c | 1 + src/qemu/qemu_monitor_json.c | 4 ++++ tools/virsh-domain.c | 4 +++- 9 files changed, 23 insertions(+), 2 deletions(-)
Reviewed-by: Ján Tomko <jtomko@redhat.com>

Jano

Implement the transaction actions generator for blockdev-backup.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
---
 src/qemu/qemu_monitor.c      | 13 +++++++++++++
 src/qemu/qemu_monitor.h      | 15 +++++++++++++++
 src/qemu/qemu_monitor_json.c | 29 +++++++++++++++++++++++++++++
 src/qemu/qemu_monitor_json.h |  8 ++++++++
 tests/qemumonitorjsontest.c  |  8 +++++++-
 5 files changed, 72 insertions(+), 1 deletion(-)

diff --git a/src/qemu/qemu_monitor.c b/src/qemu/qemu_monitor.c
index a48305b046..6e6678eb9b 100644
--- a/src/qemu/qemu_monitor.c
+++ b/src/qemu/qemu_monitor.c
@@ -4615,3 +4615,16 @@ qemuMonitorTransactionSnapshotBlockdev(virJSONValuePtr actions,
 {
     return qemuMonitorJSONTransactionSnapshotBlockdev(actions, node, overlay);
 }
+
+
+int
+qemuMonitorTransactionBackup(virJSONValuePtr actions,
+                             const char *device,
+                             const char *jobname,
+                             const char *target,
+                             const char *bitmap,
+                             qemuMonitorTransactionBackupSyncMode syncmode)
+{
+    return qemuMonitorJSONTransactionBackup(actions, device, jobname, target,
+                                            bitmap, syncmode);
+}
diff --git a/src/qemu/qemu_monitor.h b/src/qemu/qemu_monitor.h
index e2bfc420bb..79e078fca4 100644
--- a/src/qemu/qemu_monitor.h
+++ b/src/qemu/qemu_monitor.h
@@ -1392,3 +1392,18 @@ int
 qemuMonitorTransactionSnapshotBlockdev(virJSONValuePtr actions,
                                        const char *node,
                                        const char *overlay);
+
+typedef enum {
+    QEMU_MONITOR_TRANSACTION_BACKUP_SYNC_MODE_NONE = 0,
+    QEMU_MONITOR_TRANSACTION_BACKUP_SYNC_MODE_INCREMENTAL,
+    QEMU_MONITOR_TRANSACTION_BACKUP_SYNC_MODE_FULL,
+    QEMU_MONITOR_TRANSACTION_BACKUP_SYNC_MODE_LAST,
+} qemuMonitorTransactionBackupSyncMode;
+
+int
+qemuMonitorTransactionBackup(virJSONValuePtr actions,
+                             const char *device,
+                             const char *jobname,
+                             const char *target,
+                             const char *bitmap,
+                             qemuMonitorTransactionBackupSyncMode syncmode);
diff --git a/src/qemu/qemu_monitor_json.c b/src/qemu/qemu_monitor_json.c
index 391f39668a..00e1d3ce15 100644
--- a/src/qemu/qemu_monitor_json.c
+++ b/src/qemu/qemu_monitor_json.c
@@ -9198,6 +9198,35 @@ qemuMonitorJSONTransactionSnapshotBlockdev(virJSONValuePtr actions,
                                      NULL);
 }
 
+VIR_ENUM_DECL(qemuMonitorTransactionBackupSyncMode);
+VIR_ENUM_IMPL(qemuMonitorTransactionBackupSyncMode,
+              QEMU_MONITOR_TRANSACTION_BACKUP_SYNC_MODE_LAST,
+              "none",
+              "incremental",
+              "full");
+
+int
+qemuMonitorJSONTransactionBackup(virJSONValuePtr actions,
+                                 const char *device,
+                                 const char *jobname,
+                                 const char *target,
+                                 const char *bitmap,
+                                 qemuMonitorTransactionBackupSyncMode syncmode)
+{
+    const char *syncmodestr = qemuMonitorTransactionBackupSyncModeTypeToString(syncmode);
+
+    return qemuMonitorJSONTransactionAdd(actions,
+                                         "blockdev-backup",
+                                         "s:device", device,
+                                         "s:job-id", jobname,
+                                         "s:target", target,
+                                         "s:sync", syncmodestr,
+                                         "S:bitmap", bitmap,
+                                         "T:auto-finalize", VIR_TRISTATE_BOOL_YES,
+                                         "T:auto-dismiss", VIR_TRISTATE_BOOL_NO,
+                                         NULL);
+}
+
 
 static qemuMonitorJobInfoPtr
 qemuMonitorJSONGetJobInfoOne(virJSONValuePtr data)
diff --git a/src/qemu/qemu_monitor_json.h b/src/qemu/qemu_monitor_json.h
index 25b568d6b0..5d05772fa2 100644
--- a/src/qemu/qemu_monitor_json.h
+++ b/src/qemu/qemu_monitor_json.h
@@ -667,3 +667,11 @@ int
 qemuMonitorJSONTransactionSnapshotBlockdev(virJSONValuePtr actions,
                                            const char *node,
                                            const char *overlay);
+
+int
+qemuMonitorJSONTransactionBackup(virJSONValuePtr actions,
+                                 const char *device,
+                                 const char *jobname,
+                                 const char *target,
+                                 const char *bitmap,
+                                 qemuMonitorTransactionBackupSyncMode syncmode);
diff --git a/tests/qemumonitorjsontest.c b/tests/qemumonitorjsontest.c
index 21f17f42af..4f3bfad1d7 100644
--- a/tests/qemumonitorjsontest.c
+++ b/tests/qemumonitorjsontest.c
@@ -2962,7 +2962,13 @@ testQemuMonitorJSONTransaction(const void *opaque)
         qemuMonitorTransactionBitmapDisable(actions, "node4", "bitmap4") < 0 ||
         qemuMonitorTransactionBitmapMerge(actions, "node5", "bitmap5", &mergebitmaps) < 0 ||
         qemuMonitorTransactionSnapshotLegacy(actions, "dev6", "path", "qcow2", true) < 0 ||
-        qemuMonitorTransactionSnapshotBlockdev(actions, "node7", "overlay7") < 0)
+        qemuMonitorTransactionSnapshotBlockdev(actions, "node7", "overlay7") < 0 ||
+        qemuMonitorTransactionBackup(actions, "dev8", "job8", "target8", "bitmap8",
+                                     QEMU_MONITOR_TRANSACTION_BACKUP_SYNC_MODE_NONE) < 0 ||
+        qemuMonitorTransactionBackup(actions, "dev9", "job9", "target9", "bitmap9",
+                                     QEMU_MONITOR_TRANSACTION_BACKUP_SYNC_MODE_INCREMENTAL) < 0 ||
+        qemuMonitorTransactionBackup(actions, "devA", "jobA", "targetA", "bitmapA",
+                                     QEMU_MONITOR_TRANSACTION_BACKUP_SYNC_MODE_FULL) < 0)
         return -1;
 
     if (qemuMonitorTestAddItem(test, "transaction", "{\"return\":{}}") < 0)
-- 
2.23.0

On Tue, Dec 03, 2019 at 06:17:36PM +0100, Peter Krempa wrote:
Implement the transaction actions generator for blockdev-backup.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
---
 src/qemu/qemu_monitor.c      | 13 +++++++++++++
 src/qemu/qemu_monitor.h      | 15 +++++++++++++++
 src/qemu/qemu_monitor_json.c | 29 +++++++++++++++++++++++++++++
 src/qemu/qemu_monitor_json.h |  8 ++++++++
 tests/qemumonitorjsontest.c  |  8 +++++++-
 5 files changed, 72 insertions(+), 1 deletion(-)
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>

Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|

On 12/3/19 11:17 AM, Peter Krempa wrote:
Implement the transaction actions generator for blockdev-backup.
Signed-off-by: Peter Krempa <pkrempa@redhat.com> ---
Reviewed-by: Eric Blake <eblake@redhat.com>

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3226
Virtualization:  qemu.org | libvirt.org

On Tue, Dec 03, 2019 at 06:17:36PM +0100, Peter Krempa wrote:
Implement the transaction actions generator for blockdev-backup.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
---
 src/qemu/qemu_monitor.c      | 13 +++++++++++++
 src/qemu/qemu_monitor.h      | 15 +++++++++++++++
 src/qemu/qemu_monitor_json.c | 29 +++++++++++++++++++++++++++++
 src/qemu/qemu_monitor_json.h |  8 ++++++++
 tests/qemumonitorjsontest.c  |  8 +++++++-
 5 files changed, 72 insertions(+), 1 deletion(-)
Reviewed-by: Ján Tomko <jtomko@redhat.com>

Jano

Store the data of a backup job along with the index counter for new
backup jobs in the status XML. Currently we will support only one backup
job and thus there's no necessity to add arrays of jobs.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
---
 src/qemu/qemu_domain.c | 58 ++++++++++++++++++++++++++++++++++++++++++
 src/qemu/qemu_domain.h |  3 +++
 2 files changed, 61 insertions(+)

diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index 980287d2a0..98d0dad861 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -62,6 +62,7 @@
 #include "locking/domain_lock.h"
 #include "virdomainsnapshotobjlist.h"
 #include "virdomaincheckpointobjlist.h"
+#include "backup_conf.h"
 
 #ifdef MAJOR_IN_MKDEV
 # include <sys/mkdev.h>
@@ -2236,6 +2237,9 @@ qemuDomainObjPrivateDataClear(qemuDomainObjPrivatePtr priv)
     priv->pflash0 = NULL;
     virObjectUnref(priv->pflash1);
     priv->pflash1 = NULL;
+
+    virDomainBackupDefFree(priv->backup);
+    priv->backup = NULL;
 }
 
 
@@ -2643,6 +2647,26 @@ qemuDomainObjPrivateXMLFormatBlockjobs(virBufferPtr buf,
 }
 
 
+static int
+qemuDomainObjPrivateXMLFormatBackups(virBufferPtr buf,
+                                     virDomainObjPtr vm)
+{
+    qemuDomainObjPrivatePtr priv = vm->privateData;
+    g_auto(virBuffer) attrBuf = VIR_BUFFER_INITIALIZER;
+    g_auto(virBuffer) childBuf = VIR_BUFFER_INIT_CHILD(buf);
+
+    if (!virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_INCREMENTAL_BACKUP))
+        return 0;
+
+    if (priv->backup &&
+        virDomainBackupDefFormat(&childBuf, priv->backup, true) < 0)
+        return -1;
+
+    virXMLFormatElement(buf, "backups", &attrBuf, &childBuf);
+    return 0;
+}
+
+
 void
 qemuDomainObjPrivateXMLFormatAllowReboot(virBufferPtr buf,
                                          virTristateBool allowReboot)
@@ -2938,6 +2962,9 @@ qemuDomainObjPrivateXMLFormat(virBufferPtr buf,
         virBufferAsprintf(buf, "<agentTimeout>%i</agentTimeout>\n",
                           priv->agentTimeout);
 
+    if (qemuDomainObjPrivateXMLFormatBackups(buf, vm) < 0)
+        return -1;
+
     return 0;
 }
 
@@ -3311,6 +3338,34 @@ qemuDomainObjPrivateXMLParseBlockjobs(virDomainObjPtr vm,
 }
 
 
+static int
+qemuDomainObjPrivateXMLParseBackups(qemuDomainObjPrivatePtr priv,
+                                    xmlXPathContextPtr ctxt)
+{
+    g_autofree xmlNodePtr *nodes = NULL;
+    ssize_t nnodes = 0;
+
+    if ((nnodes = virXPathNodeSet("./backups/domainbackup", ctxt, &nodes)) < 0)
+        return -1;
+
+    if (nnodes > 1) {
+        virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+                       _("only one backup job is supported"));
+        return -1;
+    }
+
+    if (nnodes == 0)
+        return 0;
+
+    if (!(priv->backup = virDomainBackupDefParseNode(ctxt->doc, nodes[0],
+                                                     priv->driver->xmlopt,
+                                                     VIR_DOMAIN_BACKUP_PARSE_INTERNAL)))
+        return -1;
+
+    return 0;
+}
+
+
 int
 qemuDomainObjPrivateXMLParseAllowReboot(xmlXPathContextPtr ctxt,
                                         virTristateBool *allowReboot)
@@ -3740,6 +3795,9 @@ qemuDomainObjPrivateXMLParse(xmlXPathContextPtr ctxt,
     if (qemuDomainObjPrivateXMLParseBlockjobs(vm, priv, ctxt) < 0)
         goto error;
 
+    if (qemuDomainObjPrivateXMLParseBackups(priv, ctxt) < 0)
+        goto error;
+
     qemuDomainStorageIdReset(priv);
 
     if (virXPathULongLong("string(./nodename/@index)", ctxt,
                           &priv->nodenameindex) == -2) {
diff --git a/src/qemu/qemu_domain.h b/src/qemu/qemu_domain.h
index 4cd7eec4ce..e07c8aa58f 100644
--- a/src/qemu/qemu_domain.h
+++ b/src/qemu/qemu_domain.h
@@ -414,6 +414,9 @@ struct _qemuDomainObjPrivate {
      * commandline for pflash drives. */
     virStorageSourcePtr pflash0;
     virStorageSourcePtr pflash1;
+
+    /* running backup job */
+    virDomainBackupDefPtr backup;
 };
 
 #define QEMU_DOMAIN_PRIVATE(vm) \
-- 
2.23.0
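The formatter and parser above round-trip the running backup definition through a <backups> wrapper element in the domain status XML. A minimal sketch of the resulting element for a pull-mode backup (the checkpoint name, port and scratch path are hypothetical values, not taken from this patch):

```xml
<backups>
  <domainbackup mode='pull'>
    <incremental>12345</incremental>
    <server transport='tcp' name='localhost' port='10809'/>
    <disks>
      <disk name='vda' backup='yes' state='running' type='file'>
        <scratch file='/path/to/scratch.qcow2'/>
      </disk>
    </disks>
  </domainbackup>
</backups>
```

Only a single <domainbackup> child is accepted by the parser, matching the one-job limitation stated in the commit message.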

On Tue, Dec 03, 2019 at 06:17:37PM +0100, Peter Krempa wrote:
Store the data of a backup job along with the index counter for new backup jobs in the status XML. Currently we will support only one backup job and thus there's no necessity to add arrays of jobs.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
---
 src/qemu/qemu_domain.c | 58 ++++++++++++++++++++++++++++++++++++++++++
 src/qemu/qemu_domain.h |  3 +++
 2 files changed, 61 insertions(+)
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>

Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|

On 12/3/19 11:17 AM, Peter Krempa wrote:
Store the data of a backup job along with the index counter for new backup jobs in the status XML. Currently we will support only one backup job and thus there's no necessity to add arrays of jobs.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
---
 src/qemu/qemu_domain.c | 58 ++++++++++++++++++++++++++++++++++++++++++
 src/qemu/qemu_domain.h |  3 +++
 2 files changed, 61 insertions(+)
Reviewed-by: Eric Blake <eblake@redhat.com>

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3226
Virtualization:  qemu.org | libvirt.org

On Tue, Dec 03, 2019 at 06:17:37PM +0100, Peter Krempa wrote:
Store the data of a backup job along with the index counter for new backup jobs in the status XML. Currently we will support only one backup job and thus there's no necessity to add arrays of jobs.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
---
 src/qemu/qemu_domain.c | 58 ++++++++++++++++++++++++++++++++++++++++++
 src/qemu/qemu_domain.h |  3 +++
 2 files changed, 61 insertions(+)
Reviewed-by: Ján Tomko <jtomko@redhat.com>

Jano

A backup blockjob needs to be able to notify the parent backup job as
well as track all data to be able to clean up the bitmap and blockdev
used for the backup.

Add the data structure, job allocation function and status XML formatter
and parser.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
---
 src/qemu/qemu_blockjob.c | 34 +++++++++++++++++++++++++++++++++-
 src/qemu/qemu_blockjob.h | 18 ++++++++++++++++++
 src/qemu/qemu_domain.c   | 21 +++++++++++++++++++++
 3 files changed, 72 insertions(+), 1 deletion(-)

diff --git a/src/qemu/qemu_blockjob.c b/src/qemu/qemu_blockjob.c
index 5455eaba65..d434b8bddd 100644
--- a/src/qemu/qemu_blockjob.c
+++ b/src/qemu/qemu_blockjob.c
@@ -78,8 +78,12 @@ qemuBlockJobDataDisposeJobdata(qemuBlockJobDataPtr job)
 {
     if (job->type == QEMU_BLOCKJOB_TYPE_CREATE)
         virObjectUnref(job->data.create.src);
-}
+    if (job->type == QEMU_BLOCKJOB_TYPE_BACKUP) {
+        virObjectUnref(job->data.backup.store);
+        g_free(job->data.backup.bitmap);
+    }
+}
 
 
 static void
 qemuBlockJobDataDispose(void *obj)
@@ -370,6 +374,34 @@ qemuBlockJobDiskNewCopy(virDomainObjPtr vm,
 }
 
 
+qemuBlockJobDataPtr
+qemuBlockJobDiskNewBackup(virDomainObjPtr vm,
+                          virDomainDiskDefPtr disk,
+                          virStorageSourcePtr store,
+                          bool deleteStore,
+                          const char *bitmap)
+{
+    g_autoptr(qemuBlockJobData) job = NULL;
+    g_autofree char *jobname = NULL;
+
+    jobname = g_strdup_printf("backup-%s-%s", disk->dst, disk->src->nodeformat);
+
+    if (!(job = qemuBlockJobDataNew(QEMU_BLOCKJOB_TYPE_BACKUP, jobname)))
+        return NULL;
+
+    job->data.backup.bitmap = g_strdup(bitmap);
+    job->data.backup.store = virObjectRef(store);
+    job->data.backup.deleteStore = deleteStore;
+
+    /* backup jobs are usually started in bulk by transaction so the caller
+     * shall save the status XML */
+    if (qemuBlockJobRegister(job, vm, disk, false) < 0)
+        return NULL;
+
+    return g_steal_pointer(&job);
+}
+
+
 /**
  * qemuBlockJobDiskGetJob:
  * @disk: disk definition
diff --git a/src/qemu/qemu_blockjob.h b/src/qemu/qemu_blockjob.h
index 4734984c99..52b03aaf9e 100644
--- a/src/qemu/qemu_blockjob.h
+++ b/src/qemu/qemu_blockjob.h
@@ -107,6 +107,16 @@ struct _qemuBlockJobCopyData {
 };
 
 
+typedef struct _qemuBlockJobBackupData qemuBlockJobBackupData;
+typedef qemuBlockJobBackupData *qemuBlockJobDataBackupPtr;
+
+struct _qemuBlockJobBackupData {
+    virStorageSourcePtr store;
+    bool deleteStore;
+    char *bitmap;
+};
+
+
 typedef struct _qemuBlockJobData qemuBlockJobData;
 typedef qemuBlockJobData *qemuBlockJobDataPtr;
 
@@ -124,6 +134,7 @@ struct _qemuBlockJobData {
         qemuBlockJobCommitData commit;
         qemuBlockJobCreateData create;
         qemuBlockJobCopyData copy;
+        qemuBlockJobBackupData backup;
     } data;
 
     int type; /* qemuBlockJobType */
@@ -184,6 +195,13 @@ qemuBlockJobDiskNewCopy(virDomainObjPtr vm,
                         bool shallow,
                         bool reuse);
 
+qemuBlockJobDataPtr
+qemuBlockJobDiskNewBackup(virDomainObjPtr vm,
+                          virDomainDiskDefPtr disk,
+                          virStorageSourcePtr store,
+                          bool deleteStore,
+                          const char *bitmap);
+
 qemuBlockJobDataPtr
 qemuBlockJobDiskGetJob(virDomainDiskDefPtr disk)
     ATTRIBUTE_NONNULL(1);
diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index 98d0dad861..f4f841526e 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -2608,6 +2608,18 @@ qemuDomainObjPrivateXMLFormatBlockjobIterator(void *payload,
         break;
 
     case QEMU_BLOCKJOB_TYPE_BACKUP:
+        virBufferEscapeString(&childBuf, "<bitmap name='%s'/>\n", job->data.backup.bitmap);
+        if (job->data.backup.store) {
+            if (qemuDomainObjPrivateXMLFormatBlockjobFormatSource(&childBuf,
+                                                                  "store",
+                                                                  job->data.backup.store,
+                                                                  data->xmlopt,
+                                                                  false) < 0)
+                return -1;
+
+            if (job->data.backup.deleteStore)
+                virBufferAddLit(&childBuf, "<deleteStore/>\n");
+        }
         break;
 
     case QEMU_BLOCKJOB_TYPE_BROKEN:
@@ -3199,6 +3211,15 @@ qemuDomainObjPrivateXMLParseBlockjobDataSpecific(qemuBlockJobDataPtr job,
         break;
 
     case QEMU_BLOCKJOB_TYPE_BACKUP:
+        job->data.backup.bitmap = virXPathString("string(./bitmap/@name)", ctxt);
+
+        if (!(tmp = virXPathNode("./store", ctxt)) ||
+            !(job->data.backup.store = qemuDomainObjPrivateXMLParseBlockjobChain(tmp, ctxt, xmlopt)))
+            goto broken;
+
+        if (virXPathNode("./deleteStore", ctxt))
+            job->data.backup.deleteStore = true;
+
         break;
 
     case QEMU_BLOCKJOB_TYPE_BROKEN:
-- 
2.23.0
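The formatter above produces one <blockjob> entry per disk inside the status XML's <blockjobs> element. A simplified sketch of such an entry (the job name, bitmap name and scratch path are illustrative placeholders, and the <source> private data with node names is omitted here):

```xml
<blockjob name='backup-vda-libvirt-1-format' type='backup' state='running'>
  <disk dst='vda'/>
  <bitmap name='bitmap0'/>
  <store type='file' format='qcow2'>
    <source file='/path/to/scratch.qcow2' index='1'/>
  </store>
  <deleteStore/>
</blockjob>
```

The <deleteStore/> flag is what lets libvirt know on reconnect that it created the scratch file itself and should remove it once the job finishes.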

On Tue, Dec 03, 2019 at 06:17:38PM +0100, Peter Krempa wrote:
A backup blockjob needs to be able to notify the parent backup job as well as track all data to be able to clean up the bitmap and blockdev used for the backup.
Add the data structure, job allocation function and status XML formatter and parser.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
---
 src/qemu/qemu_blockjob.c | 34 +++++++++++++++++++++++++++++++++-
 src/qemu/qemu_blockjob.h | 18 ++++++++++++++++++
 src/qemu/qemu_domain.c   | 21 +++++++++++++++++++++
 3 files changed, 72 insertions(+), 1 deletion(-)
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>

Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|

On 12/3/19 11:17 AM, Peter Krempa wrote:
A backup blockjob needs to be able to notify the parent backup job as well as track all data to be able to clean up the bitmap and blockdev used for the backup.
Add the data structure, job allocation function and status XML formatter and parser.
Signed-off-by: Peter Krempa <pkrempa@redhat.com> ---
Reviewed-by: Eric Blake <eblake@redhat.com>

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3226
Virtualization:  qemu.org | libvirt.org

On Tue, Dec 03, 2019 at 06:17:38PM +0100, Peter Krempa wrote:
A backup blockjob needs to be able to notify the parent backup job as well as track all data to be able to clean up the bitmap and blockdev used for the backup.
Add the data structure, job allocation function and status XML formatter and parser.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
---
 src/qemu/qemu_blockjob.c | 34 +++++++++++++++++++++++++++++++++-
 src/qemu/qemu_blockjob.h | 18 ++++++++++++++++++
 src/qemu/qemu_domain.c   | 21 +++++++++++++++++++++
 3 files changed, 72 insertions(+), 1 deletion(-)
diff --git a/src/qemu/qemu_blockjob.c b/src/qemu/qemu_blockjob.c
index 5455eaba65..d434b8bddd 100644
--- a/src/qemu/qemu_blockjob.c
+++ b/src/qemu/qemu_blockjob.c
@@ -78,8 +78,12 @@ qemuBlockJobDataDisposeJobdata(qemuBlockJobDataPtr job)
 {
     if (job->type == QEMU_BLOCKJOB_TYPE_CREATE)
         virObjectUnref(job->data.create.src);
-}
This odd diff is caused by your disruption to the whitespace serenity.
+    if (job->type == QEMU_BLOCKJOB_TYPE_BACKUP) {
+        virObjectUnref(job->data.backup.store);
+        g_free(job->data.backup.bitmap);
+    }
+}
 static void
 qemuBlockJobDataDispose(void *obj)
Reviewed-by: Ján Tomko <jtomko@redhat.com>

Jano

Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- .../qemustatusxml2xmldata/backup-pull-in.xml | 608 ++++++++++++++++++ .../qemustatusxml2xmldata/backup-pull-out.xml | 1 + tests/qemuxml2xmltest.c | 2 + 3 files changed, 611 insertions(+) create mode 100644 tests/qemustatusxml2xmldata/backup-pull-in.xml create mode 120000 tests/qemustatusxml2xmldata/backup-pull-out.xml diff --git a/tests/qemustatusxml2xmldata/backup-pull-in.xml b/tests/qemustatusxml2xmldata/backup-pull-in.xml new file mode 100644 index 0000000000..6ef4965bed --- /dev/null +++ b/tests/qemustatusxml2xmldata/backup-pull-in.xml @@ -0,0 +1,608 @@ +<domstatus state='running' reason='booted' pid='7690'> + <taint flag='high-privileges'/> + <monitor path='/var/lib/libvirt/qemu/domain-4-copy/monitor.sock' type='unix'/> + <namespaces> + <mount/> + </namespaces> + <vcpus> + <vcpu id='0' pid='7696'/> + </vcpus> + <qemuCaps> + <flag name='kvm'/> + <flag name='no-hpet'/> + <flag name='spice'/> + <flag name='hda-duplex'/> + <flag name='ccid-emulated'/> + <flag name='ccid-passthru'/> + <flag name='virtio-tx-alg'/> + <flag name='virtio-blk-pci.ioeventfd'/> + <flag name='sga'/> + <flag name='virtio-blk-pci.event_idx'/> + <flag name='virtio-net-pci.event_idx'/> + <flag name='piix3-usb-uhci'/> + <flag name='piix4-usb-uhci'/> + <flag name='usb-ehci'/> + <flag name='ich9-usb-ehci1'/> + <flag name='vt82c686b-usb-uhci'/> + <flag name='pci-ohci'/> + <flag name='usb-redir'/> + <flag name='usb-hub'/> + <flag name='ich9-ahci'/> + <flag name='no-acpi'/> + <flag name='virtio-blk-pci.scsi'/> + <flag name='scsi-disk.channel'/> + <flag name='scsi-block'/> + <flag name='transaction'/> + <flag name='block-job-async'/> + <flag name='scsi-cd'/> + <flag name='ide-cd'/> + <flag name='hda-micro'/> + <flag name='dump-guest-memory'/> + <flag name='nec-usb-xhci'/> + <flag name='balloon-event'/> + <flag name='lsi'/> + <flag name='virtio-scsi-pci'/> + <flag name='blockio'/> + <flag name='disable-s3'/> + <flag name='disable-s4'/> + <flag 
name='usb-redir.filter'/> + <flag name='ide-drive.wwn'/> + <flag name='scsi-disk.wwn'/> + <flag name='seccomp-sandbox'/> + <flag name='reboot-timeout'/> + <flag name='seamless-migration'/> + <flag name='block-commit'/> + <flag name='vnc'/> + <flag name='drive-mirror'/> + <flag name='blockdev-snapshot-sync'/> + <flag name='qxl'/> + <flag name='VGA'/> + <flag name='cirrus-vga'/> + <flag name='vmware-svga'/> + <flag name='device-video-primary'/> + <flag name='usb-serial'/> + <flag name='nbd-server'/> + <flag name='virtio-rng'/> + <flag name='rng-random'/> + <flag name='rng-egd'/> + <flag name='megasas'/> + <flag name='tpm-passthrough'/> + <flag name='tpm-tis'/> + <flag name='pci-bridge'/> + <flag name='vfio-pci'/> + <flag name='mem-merge'/> + <flag name='drive-discard'/> + <flag name='mlock'/> + <flag name='device-del-event'/> + <flag name='dmi-to-pci-bridge'/> + <flag name='i440fx-pci-hole64-size'/> + <flag name='q35-pci-hole64-size'/> + <flag name='usb-storage'/> + <flag name='usb-storage.removable'/> + <flag name='ich9-intel-hda'/> + <flag name='kvm-pit-lost-tick-policy'/> + <flag name='boot-strict'/> + <flag name='pvpanic'/> + <flag name='spice-file-xfer-disable'/> + <flag name='usb-kbd'/> + <flag name='msg-timestamp'/> + <flag name='active-commit'/> + <flag name='change-backing-file'/> + <flag name='memory-backend-ram'/> + <flag name='numa'/> + <flag name='memory-backend-file'/> + <flag name='usb-audio'/> + <flag name='rtc-reset-reinjection'/> + <flag name='splash-timeout'/> + <flag name='iothread'/> + <flag name='migrate-rdma'/> + <flag name='ivshmem'/> + <flag name='drive-iotune-max'/> + <flag name='VGA.vgamem_mb'/> + <flag name='vmware-svga.vgamem_mb'/> + <flag name='qxl.vgamem_mb'/> + <flag name='pc-dimm'/> + <flag name='machine-vmport-opt'/> + <flag name='aes-key-wrap'/> + <flag name='dea-key-wrap'/> + <flag name='pci-serial'/> + <flag name='vhost-user-multiqueue'/> + <flag name='migration-event'/> + <flag name='ioh3420'/> + <flag name='x3130-upstream'/> + 
<flag name='xio3130-downstream'/> + <flag name='rtl8139'/> + <flag name='e1000'/> + <flag name='virtio-net'/> + <flag name='gic-version'/> + <flag name='incoming-defer'/> + <flag name='virtio-gpu'/> + <flag name='virtio-gpu.virgl'/> + <flag name='virtio-keyboard'/> + <flag name='virtio-mouse'/> + <flag name='virtio-tablet'/> + <flag name='virtio-input-host'/> + <flag name='chardev-file-append'/> + <flag name='ich9-disable-s3'/> + <flag name='ich9-disable-s4'/> + <flag name='vserport-change-event'/> + <flag name='virtio-balloon-pci.deflate-on-oom'/> + <flag name='mptsas1068'/> + <flag name='spice-gl'/> + <flag name='qxl.vram64_size_mb'/> + <flag name='chardev-logfile'/> + <flag name='debug-threads'/> + <flag name='secret'/> + <flag name='pxb'/> + <flag name='pxb-pcie'/> + <flag name='device-tray-moved-event'/> + <flag name='nec-usb-xhci-ports'/> + <flag name='virtio-scsi-pci.iothread'/> + <flag name='name-guest'/> + <flag name='qxl.max_outputs'/> + <flag name='spice-unix'/> + <flag name='drive-detect-zeroes'/> + <flag name='tls-creds-x509'/> + <flag name='intel-iommu'/> + <flag name='smm'/> + <flag name='virtio-pci-disable-legacy'/> + <flag name='query-hotpluggable-cpus'/> + <flag name='virtio-net.rx_queue_size'/> + <flag name='virtio-vga'/> + <flag name='drive-iotune-max-length'/> + <flag name='ivshmem-plain'/> + <flag name='ivshmem-doorbell'/> + <flag name='query-qmp-schema'/> + <flag name='gluster.debug_level'/> + <flag name='vhost-scsi'/> + <flag name='drive-iotune-group'/> + <flag name='query-cpu-model-expansion'/> + <flag name='virtio-net.host_mtu'/> + <flag name='spice-rendernode'/> + <flag name='nvdimm'/> + <flag name='pcie-root-port'/> + <flag name='query-cpu-definitions'/> + <flag name='block-write-threshold'/> + <flag name='query-named-block-nodes'/> + <flag name='cpu-cache'/> + <flag name='qemu-xhci'/> + <flag name='kernel-irqchip'/> + <flag name='kernel-irqchip.split'/> + <flag name='intel-iommu.intremap'/> + <flag name='intel-iommu.caching-mode'/> + 
<flag name='intel-iommu.eim'/> + <flag name='intel-iommu.device-iotlb'/> + <flag name='virtio.iommu_platform'/> + <flag name='virtio.ats'/> + <flag name='loadparm'/> + <flag name='vnc-multi-servers'/> + <flag name='virtio-net.tx_queue_size'/> + <flag name='chardev-reconnect'/> + <flag name='virtio-gpu.max_outputs'/> + <flag name='vxhs'/> + <flag name='virtio-blk.num-queues'/> + <flag name='vmcoreinfo'/> + <flag name='numa.dist'/> + <flag name='disk-share-rw'/> + <flag name='iscsi.password-secret'/> + <flag name='isa-serial'/> + <flag name='dump-completed'/> + <flag name='qcow2-luks'/> + <flag name='pcie-pci-bridge'/> + <flag name='seccomp-blacklist'/> + <flag name='query-cpus-fast'/> + <flag name='disk-write-cache'/> + <flag name='nbd-tls'/> + <flag name='tpm-crb'/> + <flag name='pr-manager-helper'/> + <flag name='qom-list-properties'/> + <flag name='memory-backend-file.discard-data'/> + <flag name='sdl-gl'/> + <flag name='screendump_device'/> + <flag name='hda-output'/> + <flag name='blockdev-del'/> + <flag name='vmgenid'/> + <flag name='vhost-vsock'/> + <flag name='chardev-fd-pass'/> + <flag name='tpm-emulator'/> + <flag name='mch'/> + <flag name='mch.extended-tseg-mbytes'/> + <flag name='usb-storage.werror'/> + <flag name='egl-headless'/> + <flag name='vfio-pci.display'/> + <flag name='blockdev'/> + <flag name='memory-backend-memfd'/> + <flag name='memory-backend-memfd.hugetlb'/> + <flag name='iothread.poll-max-ns'/> + <flag name='egl-headless.rendernode'/> + <flag name='incremental-backup'/> + </qemuCaps> + <devices> + <device alias='rng0'/> + <device alias='sound0-codec0'/> + <device alias='virtio-disk0'/> + <device alias='virtio-serial0'/> + <device alias='video0'/> + <device alias='serial0'/> + <device alias='sound0'/> + <device alias='channel1'/> + <device alias='channel0'/> + <device alias='usb'/> + </devices> + <libDir path='/var/lib/libvirt/qemu/domain-4-copy'/> + <channelTargetDir path='/var/lib/libvirt/qemu/channel/target/domain-4-copy'/> + 
<chardevStdioLogd/> + <allowReboot value='yes'/> + <nodename index='0'/> + <blockjobs active='yes'> + <blockjob name='backup-vda-libvirt-3-format' type='backup' state='running'> + <disk dst='vda'/> + <bitmap name='bitmapname'/> + <store type='file' format='qcow2'> + <source file='/path/to/file' index='1337'> + <privateData> + <nodenames> + <nodename type='storage' name='libvirt-1337-storage'/> + <nodename type='format' name='libvirt-1337-format'/> + </nodenames> + </privateData> + </source> + </store> + <deleteStore/> + </blockjob> + </blockjobs> + <agentTimeout>-2</agentTimeout> + <backups> + <domainbackup mode='pull'> + <incremental>12345</incremental> + <server transport='tcp' name='localhost' port='10809'/> + <disks> + <disk name='vda' backup='yes' state='running' type='file'> + <scratch file='/path/to/file/'/> + </disk> + </disks> + </domainbackup> + </backups> + <domain type='kvm' id='4'> + <name>copy</name> + <uuid>0439a4a8-db56-4933-9183-d8681d7b0746</uuid> + <memory unit='KiB'>1024000</memory> + <currentMemory unit='KiB'>1024000</currentMemory> + <vcpu placement='static'>1</vcpu> + <resource> + <partition>/machine</partition> + </resource> + <os> + <type arch='x86_64' machine='pc-i440fx-2.9'>hvm</type> + <boot dev='hd'/> + <bootmenu enable='yes'/> + </os> + <features> + <acpi/> + <apic/> + <vmport state='off'/> + </features> + <clock offset='utc'> + <timer name='rtc' tickpolicy='catchup'/> + <timer name='pit' tickpolicy='delay'/> + <timer name='hpet' present='no'/> + </clock> + <on_poweroff>destroy</on_poweroff> + <on_reboot>restart</on_reboot> + <on_crash>restart</on_crash> + <pm> + <suspend-to-mem enabled='no'/> + <suspend-to-disk enabled='no'/> + </pm> + <devices> + <emulator>/usr/bin/qemu-system-x86_64</emulator> + <disk type='file' device='disk'> + <driver name='qemu' type='qcow2'/> + <source file='/tmp/pull4.qcow2' index='3'> + <privateData> + <nodenames> + <nodename type='storage' name='libvirt-3-storage'/> + <nodename type='format' 
name='libvirt-3-format'/> + </nodenames> + </privateData> + </source> + <backingStore type='file' index='13'> + <format type='qcow2'/> + <source file='/tmp/pull3.qcow2'> + <privateData> + <nodenames> + <nodename type='storage' name='libvirt-13-storage'/> + <nodename type='format' name='libvirt-13-format'/> + </nodenames> + <relPath>pull3.qcow2</relPath> + </privateData> + </source> + <backingStore type='file' index='14'> + <format type='qcow2'/> + <source file='/tmp/pull2.qcow2'> + <privateData> + <nodenames> + <nodename type='storage' name='libvirt-14-storage'/> + <nodename type='format' name='libvirt-14-format'/> + </nodenames> + <relPath>pull2.qcow2</relPath> + </privateData> + </source> + <backingStore type='file' index='15'> + <format type='qcow2'/> + <source file='/tmp/pull1.qcow2'> + <privateData> + <nodenames> + <nodename type='storage' name='libvirt-15-storage'/> + <nodename type='format' name='libvirt-15-format'/> + </nodenames> + <relPath>pull1.qcow2</relPath> + </privateData> + </source> + <backingStore type='file' index='16'> + <format type='qcow2'/> + <source file='/tmp/pull0.qcow2'> + <privateData> + <nodenames> + <nodename type='storage' name='libvirt-16-storage'/> + <nodename type='format' name='libvirt-16-format'/> + </nodenames> + <relPath>pull0.qcow2</relPath> + </privateData> + </source> + <backingStore/> + </backingStore> + </backingStore> + </backingStore> + </backingStore> + <target dev='vda' bus='virtio'/> + <alias name='virtio-disk1'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x0b' function='0x0'/> + <privateData> + <qom name='/machine/peripheral/virtio-disk1/virtio-backend'/> + </privateData> + </disk> + <disk type='file' device='disk'> + <driver name='qemu' type='qcow2'/> + <source file='/tmp/commit4.qcow2' index='2'> + <privateData> + <nodenames> + <nodename type='storage' name='libvirt-2-storage'/> + <nodename type='format' name='libvirt-2-format'/> + </nodenames> + </privateData> + </source> + <backingStore type='file' 
index='9'> + <format type='qcow2'/> + <source file='/tmp/commit3.qcow2'> + <privateData> + <nodenames> + <nodename type='storage' name='libvirt-9-storage'/> + <nodename type='format' name='libvirt-9-format'/> + </nodenames> + <relPath>commit3.qcow2</relPath> + </privateData> + </source> + <backingStore type='file' index='10'> + <format type='qcow2'/> + <source file='/tmp/commit2.qcow2'> + <privateData> + <nodenames> + <nodename type='storage' name='libvirt-10-storage'/> + <nodename type='format' name='libvirt-10-format'/> + </nodenames> + <relPath>commit2.qcow2</relPath> + </privateData> + </source> + <backingStore type='file' index='11'> + <format type='qcow2'/> + <source file='/tmp/commit1.qcow2'> + <privateData> + <nodenames> + <nodename type='storage' name='libvirt-11-storage'/> + <nodename type='format' name='libvirt-11-format'/> + </nodenames> + <relPath>commit1.qcow2</relPath> + </privateData> + </source> + <backingStore type='file' index='12'> + <format type='qcow2'/> + <source file='/tmp/commit0.qcow2'> + <privateData> + <nodenames> + <nodename type='storage' name='libvirt-12-storage'/> + <nodename type='format' name='libvirt-12-format'/> + </nodenames> + <relPath>commit0.qcow2</relPath> + </privateData> + </source> + <backingStore/> + </backingStore> + </backingStore> + </backingStore> + </backingStore> + <target dev='vdc' bus='virtio'/> + <alias name='virtio-disk2'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x0c' function='0x0'/> + <privateData> + <qom name='/machine/peripheral/virtio-disk2/virtio-backend'/> + </privateData> + </disk> + <disk type='file' device='disk'> + <driver name='qemu' type='qcow2'/> + <source file='/tmp/copy4.qcow2' index='1'> + <privateData> + <nodenames> + <nodename type='storage' name='libvirt-1-storage'/> + <nodename type='format' name='libvirt-1-format'/> + </nodenames> + </privateData> + </source> + <backingStore type='file' index='5'> + <format type='qcow2'/> + <source file='/tmp/copy3.qcow2'> + <privateData> + 
<nodenames> + <nodename type='storage' name='libvirt-5-storage'/> + <nodename type='format' name='libvirt-5-format'/> + </nodenames> + <relPath>copy3.qcow2</relPath> + </privateData> + </source> + <backingStore type='file' index='6'> + <format type='qcow2'/> + <source file='/tmp/copy2.qcow2'> + <privateData> + <nodenames> + <nodename type='storage' name='libvirt-6-storage'/> + <nodename type='format' name='libvirt-6-format'/> + </nodenames> + <relPath>copy2.qcow2</relPath> + </privateData> + </source> + <backingStore type='file' index='7'> + <format type='qcow2'/> + <source file='/tmp/copy1.qcow2'> + <privateData> + <nodenames> + <nodename type='storage' name='libvirt-7-storage'/> + <nodename type='format' name='libvirt-7-format'/> + </nodenames> + <relPath>copy1.qcow2</relPath> + </privateData> + </source> + <backingStore type='file' index='8'> + <format type='qcow2'/> + <source file='/tmp/copy0.qcow2'> + <privateData> + <nodenames> + <nodename type='storage' name='libvirt-8-storage'/> + <nodename type='format' name='libvirt-8-format'/> + </nodenames> + <relPath>copy0.qcow2</relPath> + </privateData> + </source> + <backingStore/> + </backingStore> + </backingStore> + </backingStore> + </backingStore> + <target dev='vdd' bus='virtio'/> + <alias name='virtio-disk3'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x0d' function='0x0'/> + <privateData> + <qom name='/machine/peripheral/virtio-disk3/virtio-backend'/> + </privateData> + </disk> + <disk type='file' device='disk'> + <driver name='qemu' type='qcow2'/> + <source file='/tmp/activecommit4.qcow2' index='17'> + <privateData> + <nodenames> + <nodename type='storage' name='libvirt-17-storage'/> + <nodename type='format' name='libvirt-17-format'/> + </nodenames> + </privateData> + </source> + <backingStore type='file' index='18'> + <format type='qcow2'/> + <source file='/tmp/activecommit3.qcow2'> + <privateData> + <nodenames> + <nodename type='storage' name='libvirt-18-storage'/> + <nodename type='format' 
name='libvirt-18-format'/> + </nodenames> + <relPath>activecommit3.qcow2</relPath> + </privateData> + </source> + <backingStore type='file' index='19'> + <format type='qcow2'/> + <source file='/tmp/activecommit2.qcow2'> + <privateData> + <nodenames> + <nodename type='storage' name='libvirt-19-storage'/> + <nodename type='format' name='libvirt-19-format'/> + </nodenames> + <relPath>activecommit2.qcow2</relPath> + </privateData> + </source> + <backingStore type='file' index='20'> + <format type='qcow2'/> + <source file='/tmp/activecommit1.qcow2'> + <privateData> + <nodenames> + <nodename type='storage' name='libvirt-20-storage'/> + <nodename type='format' name='libvirt-20-format'/> + </nodenames> + <relPath>activecommit1.qcow2</relPath> + </privateData> + </source> + <backingStore type='file' index='21'> + <format type='qcow2'/> + <source file='/tmp/activecommit0.qcow2'> + <privateData> + <nodenames> + <nodename type='storage' name='libvirt-21-storage'/> + <nodename type='format' name='libvirt-21-format'/> + </nodenames> + <relPath>activecommit0.qcow2</relPath> + </privateData> + </source> + <backingStore/> + </backingStore> + </backingStore> + </backingStore> + </backingStore> + <target dev='vde' bus='virtio'/> + <alias name='virtio-disk3'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x0e' function='0x0'/> + <privateData> + <qom name='/machine/peripheral/virtio-disk3/virtio-backend'/> + </privateData> + </disk> + <controller type='usb' index='0' model='piix3-uhci'> + <alias name='usb'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/> + </controller> + <controller type='pci' index='0' model='pci-root'> + <alias name='pci.0'/> + </controller> + <serial type='pty'> + <source path='/dev/pts/34'/> + <target type='isa-serial' port='0'> + <model name='isa-serial'/> + </target> + <alias name='serial0'/> + </serial> + <console type='pty' tty='/dev/pts/34'> + <source path='/dev/pts/34'/> + <target type='serial' port='0'/> + <alias 
name='serial0'/> + </console> + <input type='mouse' bus='ps2'> + <alias name='input0'/> + </input> + <input type='keyboard' bus='ps2'> + <alias name='input1'/> + </input> + <graphics type='spice' port='5900' autoport='yes' listen='127.0.0.1'> + <listen type='address' address='127.0.0.1' fromConfig='1' autoGenerated='no'/> + <image compression='off'/> + </graphics> + <video> + <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/> + <alias name='video0'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/> + </video> + <memballoon model='none'/> + </devices> + <seclabel type='dynamic' model='selinux' relabel='yes'> + <label>unconfined_u:unconfined_r:svirt_t:s0:c550,c786</label> + <imagelabel>unconfined_u:object_r:svirt_image_t:s0:c550,c786</imagelabel> + </seclabel> + <seclabel type='dynamic' model='dac' relabel='yes'> + <label>+0:+0</label> + <imagelabel>+0:+0</imagelabel> + </seclabel> + </domain> +</domstatus> diff --git a/tests/qemustatusxml2xmldata/backup-pull-out.xml b/tests/qemustatusxml2xmldata/backup-pull-out.xml new file mode 120000 index 0000000000..b706ee2924 --- /dev/null +++ b/tests/qemustatusxml2xmldata/backup-pull-out.xml @@ -0,0 +1 @@ +backup-pull-in.xml \ No newline at end of file diff --git a/tests/qemuxml2xmltest.c b/tests/qemuxml2xmltest.c index 8b43f35f06..e1758a106f 100644 --- a/tests/qemuxml2xmltest.c +++ b/tests/qemuxml2xmltest.c @@ -1315,6 +1315,8 @@ mymain(void) DO_TEST_STATUS("blockjob-blockdev"); + DO_TEST_STATUS("backup-pull"); + DO_TEST("vhost-vsock", QEMU_CAPS_DEVICE_VHOST_VSOCK); DO_TEST("vhost-vsock-auto", QEMU_CAPS_DEVICE_VHOST_VSOCK); DO_TEST("vhost-vsock-ccw", QEMU_CAPS_DEVICE_VHOST_VSOCK, -- 2.23.0

On Tue, Dec 03, 2019 at 06:17:39PM +0100, Peter Krempa wrote:
Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- .../qemustatusxml2xmldata/backup-pull-in.xml | 608 ++++++++++++++++++ .../qemustatusxml2xmldata/backup-pull-out.xml | 1 + tests/qemuxml2xmltest.c | 2 + 3 files changed, 611 insertions(+) create mode 100644 tests/qemustatusxml2xmldata/backup-pull-in.xml create mode 120000 tests/qemustatusxml2xmldata/backup-pull-out.xml
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com> Regards, Daniel -- |: https://berrange.com -o- https://www.flickr.com/photos/dberrange :| |: https://libvirt.org -o- https://fstop138.berrange.com :| |: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|

On 12/3/19 11:17 AM, Peter Krempa wrote:
Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- .../qemustatusxml2xmldata/backup-pull-in.xml | 608 ++++++++++++++++++ .../qemustatusxml2xmldata/backup-pull-out.xml | 1 + tests/qemuxml2xmltest.c | 2 + 3 files changed, 611 insertions(+) create mode 100644 tests/qemustatusxml2xmldata/backup-pull-in.xml create mode 120000 tests/qemustatusxml2xmldata/backup-pull-out.xml
Reviewed-by: Eric Blake <eblake@redhat.com>
+++ b/tests/qemustatusxml2xmldata/backup-pull-out.xml @@ -0,0 +1 @@ +backup-pull-in.xml \ No newline at end of file
Odd that diff warns about this (empty files are an exception to the rule that a text file must end in a newline). But harmless to the patch itself. -- Eric Blake, Principal Software Engineer Red Hat, Inc. +1-919-301-3226 Virtualization: qemu.org | libvirt.org

On Tue, Dec 03, 2019 at 06:17:39PM +0100, Peter Krempa wrote:
Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- .../qemustatusxml2xmldata/backup-pull-in.xml | 608 ++++++++++++++++++ .../qemustatusxml2xmldata/backup-pull-out.xml | 1 + tests/qemuxml2xmltest.c | 2 + 3 files changed, 611 insertions(+) create mode 100644 tests/qemustatusxml2xmldata/backup-pull-in.xml create mode 120000 tests/qemustatusxml2xmldata/backup-pull-out.xml
Reviewed-by: Ján Tomko <jtomko@redhat.com> Jano

We need a place to store stats of completed sub-jobs so that we can later report accurate stats. Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- src/conf/backup_conf.h | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/src/conf/backup_conf.h b/src/conf/backup_conf.h index c970e01920..5dfc42e297 100644 --- a/src/conf/backup_conf.h +++ b/src/conf/backup_conf.h @@ -70,6 +70,13 @@ struct _virDomainBackupDef { size_t ndisks; /* should not exceed dom->ndisks */ virDomainBackupDiskDef *disks; + + /* internal data */ + /* statistic totals for completed diks */ + unsigned long long push_transferred; + unsigned long long push_total; + unsigned long long pull_tmp_used; + unsigned long long pull_tmp_total; }; typedef enum { -- 2.23.0
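The four counters added above can be pictured with a tiny standalone sketch (a hypothetical miniature with made-up names, not the actual libvirt types or API): when a per-disk sub-job concludes, its final cur/end figures are folded into backup-wide totals so the completed-job statistics stay accurate even after the per-disk job data is discarded.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical miniature of the totals kept in virDomainBackupDef:
 * push-mode backups accumulate transferred/total bytes, pull-mode
 * backups accumulate scratch-file usage instead. */
struct backup_totals {
    bool pull;                          /* pull-mode vs. push-mode backup */
    unsigned long long push_transferred;
    unsigned long long push_total;
    unsigned long long pull_tmp_used;
    unsigned long long pull_tmp_total;
};

/* Fold the final cur/end numbers of one finished per-disk blockjob
 * into the backup-wide totals. */
static void
backup_totals_add(struct backup_totals *t,
                  unsigned long long cur,
                  unsigned long long end)
{
    if (t->pull) {
        t->pull_tmp_used += cur;        /* scratch space used */
        t->pull_tmp_total += end;
    } else {
        t->push_transferred += cur;     /* bytes written to target */
        t->push_total += end;
    }
}
```

Later in the series, qemuBackupNotifyBlockjobEnd performs exactly this kind of accumulation as each disk's blockjob finishes.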

On Tue, Dec 03, 2019 at 06:17:40PM +0100, Peter Krempa wrote:
We need a place to store stats of completed sub-jobs so that we can later report accurate stats.
It's kind of gross using the public struct for internal-only data, but I guess we can fix that any time we want later.
Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- src/conf/backup_conf.h | 7 +++++++ 1 file changed, 7 insertions(+)
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com> Regards, Daniel -- |: https://berrange.com -o- https://www.flickr.com/photos/dberrange :| |: https://libvirt.org -o- https://fstop138.berrange.com :| |: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|

On 12/3/19 11:17 AM, Peter Krempa wrote:
We need a place to store stats of completed sub-jobs so that we can later report accurate stats.
Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- src/conf/backup_conf.h | 7 +++++++ 1 file changed, 7 insertions(+)
diff --git a/src/conf/backup_conf.h b/src/conf/backup_conf.h index c970e01920..5dfc42e297 100644 --- a/src/conf/backup_conf.h +++ b/src/conf/backup_conf.h @@ -70,6 +70,13 @@ struct _virDomainBackupDef {
size_t ndisks; /* should not exceed dom->ndisks */ virDomainBackupDiskDef *disks; + + /* internal data */ + /* statistic totals for completed diks */
s/diks/disks/ With typo fix, Reviewed-by: Eric Blake <eblake@redhat.com> -- Eric Blake, Principal Software Engineer Red Hat, Inc. +1-919-301-3226 Virtualization: qemu.org | libvirt.org

On Tue, Dec 03, 2019 at 06:17:40PM +0100, Peter Krempa wrote:
We need a place to store stats of completed sub-jobs so that we can later report accurate stats.
Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- src/conf/backup_conf.h | 7 +++++++ 1 file changed, 7 insertions(+)
Reviewed-by: Ján Tomko <jtomko@redhat.com> Jano

The stats reported for a blockjob which is a member of a domain pull backup refer to the utilization of the scratch file rather than the progress of the backup, as the progress of the backup depends on the client. Note this quirk in the docs. Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- src/libvirt-domain.c | 4 ++++ tools/virsh.pod | 4 ++++ 2 files changed, 8 insertions(+) diff --git a/src/libvirt-domain.c b/src/libvirt-domain.c index f873246ace..793eceb39f 100644 --- a/src/libvirt-domain.c +++ b/src/libvirt-domain.c @@ -9949,6 +9949,10 @@ virDomainBlockJobAbort(virDomainPtr dom, const char *disk, * and was no-op. In this case libvirt reports cur = 1 and end = 1. * Since 2.3.0. * + * Note that the progress reported for blockjobs corresponding to a pull-mode + * backup doesn't reflect the progress of the backup but rather the usage of + * temporary space required for the backup. + * * Returns -1 in case of failure, 0 when nothing found, 1 when info was found. */ int diff --git a/tools/virsh.pod b/tools/virsh.pod index b04f7c0fdc..244895ceb4 100644 --- a/tools/virsh.pod +++ b/tools/virsh.pod @@ -994,6 +994,10 @@ I<--bytes> with a scaled value permits a finer granularity to be selected. A scaled value used without I<--bytes> will be rounded down to MiB/s. Note that the I<--bytes> may be unsupported by the hypervisor. +Note that the progress reported for blockjobs corresponding to a pull-mode +backup doesn't reflect the progress of the backup but rather the usage of +temporary space required for the backup. + =item B<blockpull> I<domain> I<path> [I<bandwidth>] [I<--bytes>] [I<base>] [I<--wait> [I<--verbose>] [I<--timeout> B<seconds>] [I<--async>]] [I<--keep-relative>] -- 2.23.0
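A consumer polling block job info therefore has to know which meaning applies. The arithmetic itself is trivial, as this standalone sketch shows (the helper name is made up for illustration; cur/end mirror the fields returned by virDomainGetBlockJobInfo). For a blockjob that belongs to a pull-mode backup the resulting percentage describes scratch-file utilization, not backup completion.

```c
#include <assert.h>

/* Turn the cur/end pair of a block job into a percentage. For a
 * blockjob belonging to a pull-mode backup this is scratch-file
 * utilization; only for other job types is it job progress. */
static unsigned int
blockjob_percent(unsigned long long cur, unsigned long long end)
{
    if (end == 0)
        return 0;           /* nothing reported yet */
    if (cur > end)
        cur = end;          /* clamp defensively */
    return (unsigned int)(cur * 100 / end);
}
```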

On Tue, Dec 03, 2019 at 06:17:41PM +0100, Peter Krempa wrote:
The stats reported for a blockjob which is a member of a domain pull backup refer to the utilization of the scratch file rather than the progress of the backup, as the progress of the backup depends on the client. Note this quirk in the docs.
Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- src/libvirt-domain.c | 4 ++++ tools/virsh.pod | 4 ++++ 2 files changed, 8 insertions(+)
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com> Regards, Daniel -- |: https://berrange.com -o- https://www.flickr.com/photos/dberrange :| |: https://libvirt.org -o- https://fstop138.berrange.com :| |: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|

On 12/3/19 11:17 AM, Peter Krempa wrote:
The stats reported for a blockjob which is a member of a domain pull backup refer to the utilization of the scratch file rather than the progress of the backup, as the progress of the backup depends on the client. Note this quirk in the docs.
Signed-off-by: Peter Krempa <pkrempa@redhat.com> ---
Reviewed-by: Eric Blake <eblake@redhat.com> -- Eric Blake, Principal Software Engineer Red Hat, Inc. +1-919-301-3226 Virtualization: qemu.org | libvirt.org

On Tue, Dec 03, 2019 at 06:17:41PM +0100, Peter Krempa wrote:
The stats reported for a blockjob which is a member of a domain pull backup refer to the utilization of the scratch file rather than the progress of the backup, as the progress of the backup depends on the client. Note this quirk in the docs.
Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- src/libvirt-domain.c | 4 ++++ tools/virsh.pod | 4 ++++ 2 files changed, 8 insertions(+)
Reviewed-by: Ján Tomko <jtomko@redhat.com> Jano

This allows starting and managing the backup job. Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- po/POTFILES.in | 1 + src/qemu/Makefile.inc.am | 2 + src/qemu/qemu_backup.c | 941 +++++++++++++++++++++++++++++++++++++++ src/qemu/qemu_backup.h | 41 ++ src/qemu/qemu_driver.c | 47 ++ 5 files changed, 1032 insertions(+) create mode 100644 src/qemu/qemu_backup.c create mode 100644 src/qemu/qemu_backup.h diff --git a/po/POTFILES.in b/po/POTFILES.in index 48f3f431ec..5afecf21ba 100644 --- a/po/POTFILES.in +++ b/po/POTFILES.in @@ -140,6 +140,7 @@ @SRCDIR@/src/phyp/phyp_driver.c @SRCDIR@/src/qemu/qemu_agent.c @SRCDIR@/src/qemu/qemu_alias.c +@SRCDIR@/src/qemu/qemu_backup.c @SRCDIR@/src/qemu/qemu_block.c @SRCDIR@/src/qemu/qemu_blockjob.c @SRCDIR@/src/qemu/qemu_capabilities.c diff --git a/src/qemu/Makefile.inc.am b/src/qemu/Makefile.inc.am index bf30f8a3c5..839b1cacb8 100644 --- a/src/qemu/Makefile.inc.am +++ b/src/qemu/Makefile.inc.am @@ -69,6 +69,8 @@ QEMU_DRIVER_SOURCES = \ qemu/qemu_vhost_user_gpu.h \ qemu/qemu_checkpoint.c \ qemu/qemu_checkpoint.h \ + qemu/qemu_backup.c \ + qemu/qemu_backup.h \ $(NULL) diff --git a/src/qemu/qemu_backup.c b/src/qemu/qemu_backup.c new file mode 100644 index 0000000000..8307a42e1c --- /dev/null +++ b/src/qemu/qemu_backup.c @@ -0,0 +1,941 @@ +/* + * qemu_backup.c: Implementation and handling of the backup jobs + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2.1 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. + * + * You should have received a copy of the GNU Lesser General Public + * License along with this library.
If not, see + * <http://www.gnu.org/licenses/>. + */ + +#include <config.h> + +#include "qemu_block.h" +#include "qemu_conf.h" +#include "qemu_capabilities.h" +#include "qemu_monitor.h" +#include "qemu_process.h" +#include "qemu_backup.h" +#include "qemu_monitor_json.h" +#include "qemu_checkpoint.h" +#include "qemu_command.h" + +#include "virerror.h" +#include "virlog.h" +#include "virbuffer.h" +#include "viralloc.h" +#include "virxml.h" +#include "virstoragefile.h" +#include "virstring.h" +#include "backup_conf.h" +#include "virdomaincheckpointobjlist.h" + +#define VIR_FROM_THIS VIR_FROM_QEMU + +VIR_LOG_INIT("qemu.qemu_backup"); + + +static virDomainBackupDefPtr +qemuDomainGetBackup(virDomainObjPtr vm) +{ + qemuDomainObjPrivatePtr priv = vm->privateData; + + if (!priv->backup) { + virReportError(VIR_ERR_NO_DOMAIN_BACKUP, "%s", + _("no domain backup job present")); + return NULL; + } + + return priv->backup; +} + + +static int +qemuBackupPrepare(virDomainBackupDefPtr def) +{ + + if (def->type == VIR_DOMAIN_BACKUP_TYPE_PULL) { + if (!def->server) { + def->server = g_new(virStorageNetHostDef, 1); + + def->server->transport = VIR_STORAGE_NET_HOST_TRANS_TCP; + def->server->name = g_strdup("localhost"); + } + + switch ((virStorageNetHostTransport) def->server->transport) { + case VIR_STORAGE_NET_HOST_TRANS_TCP: + /* TODO: Update qemu.conf to provide a port range, + * probably starting at 10809, for obtaining automatic + * port via virPortAllocatorAcquire, as well as store + * somewhere if we need to call virPortAllocatorRelease + * during BackupEnd. Until then, user must provide port */ + if (!def->server->port) { + virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s", + _("<domainbackup> must specify TCP port for now")); + return -1; + } + break; + + case VIR_STORAGE_NET_HOST_TRANS_UNIX: + /* TODO: Do we need to mess with selinux? 
*/ + break; + + case VIR_STORAGE_NET_HOST_TRANS_RDMA: + case VIR_STORAGE_NET_HOST_TRANS_LAST: + virReportError(VIR_ERR_INTERNAL_ERROR, "%s", + _("unexpected transport in <domainbackup>")); + return -1; + } + } + + return 0; +} + + +struct qemuBackupDiskData { + virDomainBackupDiskDefPtr backupdisk; + virDomainDiskDefPtr domdisk; + qemuBlockJobDataPtr blockjob; + virStorageSourcePtr store; + char *incrementalBitmap; + qemuBlockStorageSourceChainDataPtr crdata; + bool labelled; + bool initialized; + bool created; + bool added; + bool started; + bool done; +}; + + +static void +qemuBackupDiskDataCleanupOne(virDomainObjPtr vm, + struct qemuBackupDiskData *dd) +{ + qemuDomainObjPrivatePtr priv = vm->privateData; + + if (dd->started) + return; + + if (dd->added) { + qemuDomainObjEnterMonitor(priv->driver, vm); + qemuBlockStorageSourceAttachRollback(priv->mon, dd->crdata->srcdata[0]); + ignore_value(qemuDomainObjExitMonitor(priv->driver, vm)); + } + + if (dd->created) { + if (virStorageFileUnlink(dd->store) < 0) + VIR_WARN("Unable to remove just-created %s", NULLSTR(dd->store->path)); + } + + if (dd->initialized) + virStorageFileDeinit(dd->store); + + if (dd->labelled) + qemuDomainStorageSourceAccessRevoke(priv->driver, vm, dd->store); + + if (dd->blockjob) + qemuBlockJobStartupFinalize(vm, dd->blockjob); + + qemuBlockStorageSourceChainDataFree(dd->crdata); +} + + +static void +qemuBackupDiskDataCleanup(virDomainObjPtr vm, + struct qemuBackupDiskData *dd, + size_t ndd) +{ + virErrorPtr orig_err; + size_t i; + + if (!dd) + return; + + virErrorPreserveLast(&orig_err); + + for (i = 0; i < ndd; i++) + qemuBackupDiskDataCleanupOne(vm, dd + i); + + g_free(dd); + virErrorRestore(&orig_err); +} + + + +static int +qemuBackupDiskPrepareOneBitmaps(struct qemuBackupDiskData *dd, + virJSONValuePtr actions, + virDomainMomentObjPtr *incremental) +{ + g_autoptr(virJSONValue) mergebitmaps = NULL; + g_autoptr(virJSONValue) mergebitmapsstore = NULL; + + if (!(mergebitmaps = 
virJSONValueNewArray())) + return -1; + + /* TODO: this code works only if the bitmaps are present on a single node. + * The algorithm needs to be changed so that it looks into the backing chain + * so that we can combine all relevant bitmaps for a given backing chain */ + while (*incremental) { + if (qemuMonitorTransactionBitmapMergeSourceAddBitmap(mergebitmaps, + dd->domdisk->src->nodeformat, + (*incremental)->def->name) < 0) + return -1; + + incremental++; + } + + if (!(mergebitmapsstore = virJSONValueCopy(mergebitmaps))) + return -1; + + if (qemuMonitorTransactionBitmapAdd(actions, + dd->domdisk->src->nodeformat, + dd->incrementalBitmap, + false, + true) < 0) + return -1; + + if (qemuMonitorTransactionBitmapMerge(actions, + dd->domdisk->src->nodeformat, + dd->incrementalBitmap, + &mergebitmaps) < 0) + return -1; + + if (qemuMonitorTransactionBitmapAdd(actions, + dd->store->nodeformat, + dd->incrementalBitmap, + false, + true) < 0) + return -1; + + if (qemuMonitorTransactionBitmapMerge(actions, + dd->store->nodeformat, + dd->incrementalBitmap, + &mergebitmapsstore) < 0) + return -1; + + + return 0; +} + + +static int +qemuBackupDiskPrepareDataOne(virDomainObjPtr vm, + virDomainBackupDiskDefPtr backupdisk, + struct qemuBackupDiskData *dd, + virJSONValuePtr actions, + virDomainMomentObjPtr *incremental, + virQEMUDriverConfigPtr cfg, + bool removeStore) +{ + qemuDomainObjPrivatePtr priv = vm->privateData; + + /* set data structure */ + dd->backupdisk = backupdisk; + dd->store = dd->backupdisk->store; + + if (!(dd->domdisk = virDomainDiskByTarget(vm->def, dd->backupdisk->name))) { + virReportError(VIR_ERR_INVALID_ARG, + _("no disk named '%s'"), dd->backupdisk->name); + return -1; + } + + if (!dd->store->format) + dd->store->format = VIR_STORAGE_FILE_QCOW2; + + if (qemuDomainStorageFileInit(priv->driver, vm, dd->store, dd->domdisk->src) < 0) + return -1; + + if (qemuDomainPrepareStorageSourceBlockdev(NULL, dd->store, priv, cfg) < 0) + return -1; + + if (incremental) 
{ + dd->incrementalBitmap = g_strdup_printf("backup-%s", dd->domdisk->dst); + + if (qemuBackupDiskPrepareOneBitmaps(dd, actions, incremental) < 0) + return -1; + } + + if (!(dd->blockjob = qemuBlockJobDiskNewBackup(vm, dd->domdisk, dd->store, + removeStore, + dd->incrementalBitmap))) + return -1; + + if (!(dd->crdata = qemuBuildStorageSourceChainAttachPrepareBlockdevTop(dd->store, + NULL, + priv->qemuCaps))) + return -1; + + return 0; +} + + +static int +qemuBackupDiskPrepareDataOnePush(virJSONValuePtr actions, + struct qemuBackupDiskData *dd) +{ + qemuMonitorTransactionBackupSyncMode syncmode = QEMU_MONITOR_TRANSACTION_BACKUP_SYNC_MODE_FULL; + + if (dd->incrementalBitmap) + syncmode = QEMU_MONITOR_TRANSACTION_BACKUP_SYNC_MODE_INCREMENTAL; + + if (qemuMonitorTransactionBackup(actions, + dd->domdisk->src->nodeformat, + dd->blockjob->name, + dd->store->nodeformat, + dd->incrementalBitmap, + syncmode) < 0) + return -1; + + return 0; +} + + +static int +qemuBackupDiskPrepareDataOnePull(virJSONValuePtr actions, + struct qemuBackupDiskData *dd) +{ + if (qemuMonitorTransactionBackup(actions, + dd->domdisk->src->nodeformat, + dd->blockjob->name, + dd->store->nodeformat, + NULL, + QEMU_MONITOR_TRANSACTION_BACKUP_SYNC_MODE_NONE) < 0) + return -1; + + return 0; +} + + +static ssize_t +qemuBackupDiskPrepareData(virDomainObjPtr vm, + virDomainBackupDefPtr def, + virDomainMomentObjPtr *incremental, + virJSONValuePtr actions, + virQEMUDriverConfigPtr cfg, + struct qemuBackupDiskData **rdd, + bool reuse_external) +{ + struct qemuBackupDiskData *disks = NULL; + ssize_t ndisks = 0; + size_t i; + bool removeStore = !reuse_external && (def->type == VIR_DOMAIN_BACKUP_TYPE_PULL); + + disks = g_new0(struct qemuBackupDiskData, def->ndisks); + + for (i = 0; i < def->ndisks; i++) { + virDomainBackupDiskDef *backupdisk = &def->disks[i]; + struct qemuBackupDiskData *dd = disks + ndisks; + + if (!backupdisk->store) + continue; + + ndisks++; + + if (qemuBackupDiskPrepareDataOne(vm, backupdisk, 
dd, actions, + incremental, cfg, removeStore) < 0) + goto error; + + if (def->type == VIR_DOMAIN_BACKUP_TYPE_PULL) { + if (qemuBackupDiskPrepareDataOnePull(actions, dd) < 0) + goto error; + } else { + if (qemuBackupDiskPrepareDataOnePush(actions, dd) < 0) + goto error; + } + } + + *rdd = g_steal_pointer(&disks); + + return ndisks; + + error: + qemuBackupDiskDataCleanup(vm, disks, ndisks); + return -1; +} + + +static int +qemuBackupDiskPrepareOneStorage(virDomainObjPtr vm, + virHashTablePtr blockNamedNodeData, + struct qemuBackupDiskData *dd, + bool reuse_external) +{ + qemuDomainObjPrivatePtr priv = vm->privateData; + int rc; + + if (!reuse_external && + dd->store->type == VIR_STORAGE_TYPE_FILE && + virStorageFileSupportsCreate(dd->store)) { + + if (virFileExists(dd->store->path)) { + virReportError(VIR_ERR_INVALID_ARG, + _("store '%s' for backup of '%s' exists"), + dd->store->path, dd->domdisk->dst); + return -1; + } + + if (qemuDomainStorageFileInit(priv->driver, vm, dd->store, NULL) < 0) + return -1; + + dd->initialized = true; + + if (virStorageFileCreate(dd->store) < 0) { + virReportSystemError(errno, + _("failed to create image file '%s'"), + NULLSTR(dd->store->path)); + return -1; + } + + dd->created = true; + } + + if (qemuDomainStorageSourceAccessAllow(priv->driver, vm, dd->store, false, + true) < 0) + return -1; + + dd->labelled = true; + + if (!reuse_external) { + if (qemuBlockStorageSourceCreateDetectSize(blockNamedNodeData, + dd->store, dd->domdisk->src) < 0) + return -1; + + if (qemuBlockStorageSourceCreate(vm, dd->store, NULL, NULL, + dd->crdata->srcdata[0], + QEMU_ASYNC_JOB_BACKUP) < 0) + return -1; + } else { + if (qemuDomainObjEnterMonitorAsync(priv->driver, vm, QEMU_ASYNC_JOB_BACKUP) < 0) + return -1; + + rc = qemuBlockStorageSourceAttachApply(priv->mon, dd->crdata->srcdata[0]); + + if (qemuDomainObjExitMonitor(priv->driver, vm) < 0 || rc < 0) + return -1; + } + + dd->added = true; + + return 0; +} + + +static int
+qemuBackupDiskPrepareStorage(virDomainObjPtr vm, + struct qemuBackupDiskData *disks, + size_t ndisks, + virHashTablePtr blockNamedNodeData, + bool reuse_external) +{ + size_t i; + + for (i = 0; i < ndisks; i++) { + if (qemuBackupDiskPrepareOneStorage(vm, blockNamedNodeData, disks + i, + reuse_external) < 0) + return -1; + } + + return 0; +} + + +static void +qemuBackupDiskStarted(virDomainObjPtr vm, + struct qemuBackupDiskData *dd, + size_t ndd) +{ + size_t i; + + for (i = 0; i < ndd; i++) { + dd[i].started = true; + dd[i].backupdisk->state = VIR_DOMAIN_BACKUP_DISK_STATE_RUNNING; + qemuBlockJobStarted(dd->blockjob, vm); + } +} + + +/** + * qemuBackupBeginPullExportDisks: + * @vm: domain object + * @disks: backup disk data list + * @ndisks: number of valid disks in @disks + * + * Exports all disks from @dd when doing a pull backup in the NBD server. This + * function must be called while in the monitor context. + */ +static int +qemuBackupBeginPullExportDisks(virDomainObjPtr vm, + struct qemuBackupDiskData *disks, + size_t ndisks) +{ + qemuDomainObjPrivatePtr priv = vm->privateData; + size_t i; + + for (i = 0; i < ndisks; i++) { + struct qemuBackupDiskData *dd = disks + i; + + if (qemuMonitorNBDServerAdd(priv->mon, + dd->store->nodeformat, + dd->domdisk->dst, + false, + dd->incrementalBitmap) < 0) + return -1; + } + + return 0; +} + + +/** + * qemuBackupBeginCollectIncrementalCheckpoints: + * @vm: domain object + * @incrFrom: name of checkpoint representing starting point of incremental backup + * + * Returns a NULL terminated list of pointers to checkpoints in chronological + * order starting from the 'current' checkpoint until reaching @incrFrom. 
+ */ +static virDomainMomentObjPtr * +qemuBackupBeginCollectIncrementalCheckpoints(virDomainObjPtr vm, + const char *incrFrom) +{ + virDomainMomentObjPtr n = virDomainCheckpointGetCurrent(vm->checkpoints); + g_autofree virDomainMomentObjPtr *incr = NULL; + size_t nincr = 0; + + while (n) { + if (VIR_APPEND_ELEMENT_COPY(incr, nincr, n) < 0) + return NULL; + + if (STREQ(n->def->name, incrFrom)) { + virDomainMomentObjPtr terminator = NULL; + if (VIR_APPEND_ELEMENT_COPY(incr, nincr, terminator) < 0) + return NULL; + + return g_steal_pointer(&incr); + } + + if (!n->def->parent_name) + break; + + n = virDomainCheckpointFindByName(vm->checkpoints, n->def->parent_name); + } + + virReportError(VIR_ERR_OPERATION_INVALID, + _("could not locate checkpoint '%s' for incremental backup"), + incrFrom); + return NULL; +} + + +static void +qemuBackupJobTerminate(virDomainObjPtr vm, + qemuDomainJobStatus jobstatus) + +{ + qemuDomainObjPrivatePtr priv = vm->privateData; + + qemuDomainJobInfoUpdateTime(priv->job.current); + + g_free(priv->job.completed); + priv->job.completed = g_new0(qemuDomainJobInfo, 1); + *priv->job.completed = *priv->job.current; + + priv->job.completed->stats.backup.total = priv->backup->push_total; + priv->job.completed->stats.backup.transferred = priv->backup->push_transferred; + priv->job.completed->stats.backup.tmp_used = priv->backup->pull_tmp_used; + priv->job.completed->stats.backup.tmp_total = priv->backup->pull_tmp_total; + + priv->job.completed->status = jobstatus; + + qemuDomainEventEmitJobCompleted(priv->driver, vm); + + virDomainBackupDefFree(priv->backup); + priv->backup = NULL; + qemuDomainObjEndAsyncJob(priv->driver, vm); +} + + +/** + * qemuBackupJobCancelBlockjobs: + * @vm: domain object + * @backup: backup definition + * @terminatebackup: flag whether to terminate and unregister the backup + * + * Sends all active blockjobs which are part of @backup of @vm a signal to + * cancel. 
If @terminatebackup is true qemuBackupJobTerminate is also called + * if there are no outstanding active blockjobs. + */ +void +qemuBackupJobCancelBlockjobs(virDomainObjPtr vm, + virDomainBackupDefPtr backup, + bool terminatebackup) +{ + qemuDomainObjPrivatePtr priv = vm->privateData; + size_t i; + int rc = 0; + bool has_active = false; + + if (!backup) + return; + + for (i = 0; i < backup->ndisks; i++) { + virDomainBackupDiskDefPtr backupdisk = backup->disks + i; + virDomainDiskDefPtr disk; + g_autoptr(qemuBlockJobData) job = NULL; + + if (!backupdisk->store) + continue; + + /* Look up corresponding disk as backupdisk->idx is no longer reliable */ + if (!(disk = virDomainDiskByTarget(vm->def, backupdisk->name))) + continue; + + if (!(job = qemuBlockJobDiskGetJob(disk))) + continue; + + if (backupdisk->state != VIR_DOMAIN_BACKUP_DISK_STATE_RUNNING && + backupdisk->state != VIR_DOMAIN_BACKUP_DISK_STATE_CANCELLING) + continue; + + has_active = true; + + if (backupdisk->state != VIR_DOMAIN_BACKUP_DISK_STATE_RUNNING) + continue; + + qemuDomainObjEnterMonitor(priv->driver, vm); + + rc = qemuMonitorJobCancel(priv->mon, job->name, false); + + if (qemuDomainObjExitMonitor(priv->driver, vm) < 0) + return; + + if (rc == 0) { + backupdisk->state = VIR_DOMAIN_BACKUP_DISK_STATE_CANCELLING; + job->state = QEMU_BLOCKJOB_STATE_ABORTING; + } + } + + if (terminatebackup && !has_active) + qemuBackupJobTerminate(vm, QEMU_DOMAIN_JOB_STATUS_CANCELED); +} + + +int +qemuBackupBegin(virDomainObjPtr vm, + const char *backupXML, + const char *checkpointXML, + unsigned int flags) +{ + qemuDomainObjPrivatePtr priv = vm->privateData; + g_autoptr(virQEMUDriverConfig) cfg = virQEMUDriverGetConfig(priv->driver); + g_autoptr(virDomainBackupDef) def = NULL; + g_autoptr(virCaps) caps = NULL; + g_autofree char *suffix = NULL; + struct timeval tv; + bool pull = false; + virDomainMomentObjPtr chk = NULL; + g_autoptr(virDomainCheckpointDef) chkdef = NULL; + g_autofree virDomainMomentObjPtr *incremental = 
NULL; + g_autoptr(virJSONValue) actions = NULL; + struct qemuBackupDiskData *dd = NULL; + ssize_t ndd = 0; + g_autoptr(virHashTable) blockNamedNodeData = NULL; + bool job_started = false; + bool nbd_running = false; + bool reuse = (flags & VIR_DOMAIN_BACKUP_BEGIN_REUSE_EXTERNAL); + int rc = 0; + int ret = -1; + + virCheckFlags(VIR_DOMAIN_BACKUP_BEGIN_REUSE_EXTERNAL, -1); + + if (!virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_INCREMENTAL_BACKUP)) { + virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s", + _("incremental backup is not supported yet")); + return -1; + } + + if (!(caps = virQEMUDriverGetCapabilities(priv->driver, false))) + return -1; + + if (!(def = virDomainBackupDefParseString(backupXML, priv->driver->xmlopt, 0))) + return -1; + + if (checkpointXML) { + if (!(chkdef = virDomainCheckpointDefParseString(checkpointXML, caps, + priv->driver->xmlopt, + priv->qemuCaps, 0))) + return -1; + + suffix = g_strdup(chkdef->parent.name); + } else { + gettimeofday(&tv, NULL); + suffix = g_strdup_printf("%lld", (long long)tv.tv_sec); + } + + if (def->type == VIR_DOMAIN_BACKUP_TYPE_PULL) + pull = true; + + /* we'll treat this kind of backup job as an asyncjob as it uses some of the + * infrastructure for async jobs. 
We'll allow standard modify-type jobs + * as the interlocking of conflicting operations is handled on the block + * job level */ + if (qemuDomainObjBeginAsyncJob(priv->driver, vm, QEMU_ASYNC_JOB_BACKUP, + VIR_DOMAIN_JOB_OPERATION_BACKUP, flags) < 0) + return -1; + + qemuDomainObjSetAsyncJobMask(vm, (QEMU_JOB_DEFAULT_MASK | + JOB_MASK(QEMU_JOB_SUSPEND) | + JOB_MASK(QEMU_JOB_MODIFY))); + priv->job.current->statsType = QEMU_DOMAIN_JOB_STATS_TYPE_BACKUP; + + + if (!virDomainObjIsActive(vm)) { + virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s", + _("cannot perform disk backup for inactive domain")); + goto endjob; + } + + if (priv->backup) { + virReportError(VIR_ERR_OPERATION_INVALID, "%s", + _("another backup job is already running")); + goto endjob; + } + + if (qemuBackupPrepare(def) < 0) + goto endjob; + + if (virDomainBackupAlignDisks(def, vm->def, suffix) < 0) + goto endjob; + + if (def->incremental && + !(incremental = qemuBackupBeginCollectIncrementalCheckpoints(vm, def->incremental))) + goto endjob; + + if (!(actions = virJSONValueNewArray())) + goto endjob; + + if (chkdef) { + if (qemuCheckpointCreateCommon(priv->driver, vm, caps, &chkdef, + &actions, &chk) < 0) + goto endjob; + } + + if ((ndd = qemuBackupDiskPrepareData(vm, def, incremental, actions, cfg, &dd, + reuse)) <= 0) { + if (ndd == 0) { + virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s", + _("no disks selected for backup")); + } + + goto endjob; + } + + if (qemuDomainObjEnterMonitorAsync(priv->driver, vm, QEMU_ASYNC_JOB_BACKUP) < 0) + goto endjob; + blockNamedNodeData = qemuMonitorBlockGetNamedNodeData(priv->mon); + if (qemuDomainObjExitMonitor(priv->driver, vm) < 0 || !blockNamedNodeData) + goto endjob; + + if (qemuBackupDiskPrepareStorage(vm, dd, ndd, blockNamedNodeData, reuse) < 0) + goto endjob; + + priv->backup = g_steal_pointer(&def); + + if (qemuDomainObjEnterMonitorAsync(priv->driver, vm, QEMU_ASYNC_JOB_BACKUP) < 0) + goto endjob; + + /* TODO: TLS is a must-have for the modern age */ + if 
(pull) { + if ((rc = qemuMonitorNBDServerStart(priv->mon, priv->backup->server, NULL)) == 0) + nbd_running = true; + } + + if (rc == 0) + rc = qemuMonitorTransaction(priv->mon, &actions); + + if (qemuDomainObjExitMonitor(priv->driver, vm) < 0 || rc < 0) + goto endjob; + + job_started = true; + qemuBackupDiskStarted(vm, dd, ndd); + + if (chk && + qemuCheckpointCreateFinalize(priv->driver, vm, cfg, chk, true) < 0) + goto endjob; + + if (pull) { + if (qemuDomainObjEnterMonitorAsync(priv->driver, vm, QEMU_ASYNC_JOB_BACKUP) < 0) + goto endjob; + /* note that if the export fails we've already created the checkpoint + * and we will not delete it */ + rc = qemuBackupBeginPullExportDisks(vm, dd, ndd); + if (qemuDomainObjExitMonitor(priv->driver, vm) < 0) + goto endjob; + + if (rc < 0) { + qemuBackupJobCancelBlockjobs(vm, priv->backup, false); + goto endjob; + } + } + + ret = 0; + + endjob: + qemuBackupDiskDataCleanup(vm, dd, ndd); + if (!job_started && nbd_running && + qemuDomainObjEnterMonitorAsync(priv->driver, vm, QEMU_ASYNC_JOB_BACKUP) < 0) { + ignore_value(qemuMonitorNBDServerStop(priv->mon)); + ignore_value(qemuDomainObjExitMonitor(priv->driver, vm)); + } + + if (ret < 0 && !job_started) + def = g_steal_pointer(&priv->backup); + + if (ret == 0) + qemuDomainObjReleaseAsyncJob(vm); + else + qemuDomainObjEndAsyncJob(priv->driver, vm); + + return ret; +} + + +char * +qemuBackupGetXMLDesc(virDomainObjPtr vm, + unsigned int flags) +{ + g_auto(virBuffer) buf = VIR_BUFFER_INITIALIZER; + virDomainBackupDefPtr backup; + + virCheckFlags(0, NULL); + + if (!(backup = qemuDomainGetBackup(vm))) + return NULL; + + if (virDomainBackupDefFormat(&buf, backup, false) < 0) + return NULL; + + return virBufferContentAndReset(&buf); +} + + +void +qemuBackupNotifyBlockjobEnd(virDomainObjPtr vm, + virDomainDiskDefPtr disk, + qemuBlockjobState state, + unsigned long long cur, + unsigned long long end) +{ + qemuDomainObjPrivatePtr priv = vm->privateData; + bool has_running = false; + bool 
has_cancelling = false; + bool has_cancelled = false; + bool has_failed = false; + qemuDomainJobStatus jobstatus = QEMU_DOMAIN_JOB_STATUS_COMPLETED; + virDomainBackupDefPtr backup = priv->backup; + size_t i; + + VIR_DEBUG("vm: '%s', disk:'%s', state:'%d'", + vm->def->name, disk->dst, state); + + if (!backup) + return; + + if (backup->type == VIR_DOMAIN_BACKUP_TYPE_PULL) { + qemuDomainObjEnterMonitor(priv->driver, vm); + ignore_value(qemuMonitorNBDServerStop(priv->mon)); + if (qemuDomainObjExitMonitor(priv->driver, vm) < 0) + return; + + /* update the final statistics with the current job's data */ + backup->pull_tmp_used += cur; + backup->pull_tmp_total += end; + } else { + backup->push_transferred += cur; + backup->push_total += end; + } + + for (i = 0; i < backup->ndisks; i++) { + virDomainBackupDiskDefPtr backupdisk = backup->disks + i; + + if (!backupdisk->store) + continue; + + if (STREQ(disk->dst, backupdisk->name)) { + switch (state) { + case QEMU_BLOCKJOB_STATE_COMPLETED: + backupdisk->state = VIR_DOMAIN_BACKUP_DISK_STATE_COMPLETE; + break; + + case QEMU_BLOCKJOB_STATE_CONCLUDED: + case QEMU_BLOCKJOB_STATE_FAILED: + backupdisk->state = VIR_DOMAIN_BACKUP_DISK_STATE_FAILED; + break; + + case QEMU_BLOCKJOB_STATE_CANCELLED: + backupdisk->state = VIR_DOMAIN_BACKUP_DISK_STATE_CANCELLED; + break; + + case QEMU_BLOCKJOB_STATE_READY: + case QEMU_BLOCKJOB_STATE_NEW: + case QEMU_BLOCKJOB_STATE_RUNNING: + case QEMU_BLOCKJOB_STATE_ABORTING: + case QEMU_BLOCKJOB_STATE_PIVOTING: + case QEMU_BLOCKJOB_STATE_LAST: + default: + break; + } + } + + switch (backupdisk->state) { + case VIR_DOMAIN_BACKUP_DISK_STATE_COMPLETE: + break; + + case VIR_DOMAIN_BACKUP_DISK_STATE_RUNNING: + has_running = true; + break; + + case VIR_DOMAIN_BACKUP_DISK_STATE_CANCELLING: + has_cancelling = true; + break; + + case VIR_DOMAIN_BACKUP_DISK_STATE_FAILED: + has_failed = true; + break; + + case VIR_DOMAIN_BACKUP_DISK_STATE_CANCELLED: + has_cancelled = true; + break; + + case 
VIR_DOMAIN_BACKUP_DISK_STATE_NONE: + case VIR_DOMAIN_BACKUP_DISK_STATE_LAST: + break; + } + } + + if (has_running && (has_failed || has_cancelled)) { + /* cancel the rest of the jobs */ + qemuBackupJobCancelBlockjobs(vm, backup, false); + } else if (!has_running && !has_cancelling) { + /* all sub-jobs have stopped */ + + if (has_failed) + jobstatus = QEMU_DOMAIN_JOB_STATUS_FAILED; + else if (has_cancelled && backup->type == VIR_DOMAIN_BACKUP_TYPE_PUSH) + jobstatus = QEMU_DOMAIN_JOB_STATUS_CANCELED; + + qemuBackupJobTerminate(vm, jobstatus); + } + + /* otherwise we must wait for the jobs to end */ +} diff --git a/src/qemu/qemu_backup.h b/src/qemu/qemu_backup.h new file mode 100644 index 0000000000..96297fc9e4 --- /dev/null +++ b/src/qemu/qemu_backup.h @@ -0,0 +1,41 @@ +/* + * qemu_backup.h: Implementation and handling of the backup jobs + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2.1 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. + * + * You should have received a copy of the GNU Lesser General Public + * License along with this library. If not, see + * <http://www.gnu.org/licenses/>. 
+ */ + +#pragma once + +int +qemuBackupBegin(virDomainObjPtr vm, + const char *backupXML, + const char *checkpointXML, + unsigned int flags); + +char * +qemuBackupGetXMLDesc(virDomainObjPtr vm, + unsigned int flags); + +void +qemuBackupJobCancelBlockjobs(virDomainObjPtr vm, + virDomainBackupDefPtr backup, + bool terminatebackup); + +void +qemuBackupNotifyBlockjobEnd(virDomainObjPtr vm, + virDomainDiskDefPtr disk, + qemuBlockjobState state, + unsigned long long cur, + unsigned long long end); diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index 913ab18812..00359d37e9 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -52,6 +52,7 @@ #include "qemu_blockjob.h" #include "qemu_security.h" #include "qemu_checkpoint.h" +#include "qemu_backup.h" #include "virerror.h" #include "virlog.h" @@ -17212,6 +17213,50 @@ qemuDomainCheckpointDelete(virDomainCheckpointPtr checkpoint, } +static int +qemuDomainBackupBegin(virDomainPtr domain, + const char *backupXML, + const char *checkpointXML, + unsigned int flags) +{ + virDomainObjPtr vm = NULL; + int ret = -1; + + if (!(vm = qemuDomainObjFromDomain(domain))) + goto cleanup; + + if (virDomainBackupBeginEnsureACL(domain->conn, vm->def) < 0) + goto cleanup; + + ret = qemuBackupBegin(vm, backupXML, checkpointXML, flags); + + cleanup: + virDomainObjEndAPI(&vm); + return ret; +} + + +static char * +qemuDomainBackupGetXMLDesc(virDomainPtr domain, + unsigned int flags) +{ + virDomainObjPtr vm = NULL; + char *ret = NULL; + + if (!(vm = qemuDomainObjFromDomain(domain))) + return NULL; + + if (virDomainBackupGetXMLDescEnsureACL(domain->conn, vm->def) < 0) + goto cleanup; + + ret = qemuBackupGetXMLDesc(vm, flags); + + cleanup: + virDomainObjEndAPI(&vm); + return ret; +} + + static int qemuDomainQemuMonitorCommand(virDomainPtr domain, const char *cmd, char **result, unsigned int flags) { @@ -22953,6 +22998,8 @@ static virHypervisorDriver qemuHypervisorDriver = { .domainCheckpointDelete = qemuDomainCheckpointDelete, 
/* 5.6.0 */ .domainGetGuestInfo = qemuDomainGetGuestInfo, /* 5.7.0 */ .domainAgentSetResponseTimeout = qemuDomainAgentSetResponseTimeout, /* 5.10.0 */ + .domainBackupBegin = qemuDomainBackupBegin, /* 5.10.0 */ + .domainBackupGetXMLDesc = qemuDomainBackupGetXMLDesc, /* 5.10.0 */ }; -- 2.23.0
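[Editorial note: the sub-job bookkeeping at the end of qemuBackupNotifyBlockjobEnd above can be sketched standalone. The enums and function below are simplified stand-ins, not the actual libvirt types; the sketch only models the final status decision and omits the side effect of cancelling the remaining blockjobs when one sub-job fails while others are still running.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-ins for VIR_DOMAIN_BACKUP_DISK_STATE_* and the
 * QEMU_DOMAIN_JOB_STATUS_* values used by the patch. */
typedef enum {
    DISK_NONE, DISK_RUNNING, DISK_COMPLETE,
    DISK_FAILED, DISK_CANCELLING, DISK_CANCELLED,
} disk_state;

typedef enum {
    JOB_STILL_RUNNING, JOB_COMPLETED, JOB_FAILED, JOB_CANCELED,
} job_status;

/* Mirror of the aggregation logic: the backup job only concludes once no
 * sub-job is running or cancelling; failure wins over cancellation, and a
 * cancellation maps to JOB_CANCELED only for push-mode backups (pull-mode
 * cancellation of remaining exports still reports completion). */
static job_status
aggregate(const disk_state *states, size_t n, bool push)
{
    bool running = false, cancelling = false;
    bool failed = false, cancelled = false;
    size_t i;

    for (i = 0; i < n; i++) {
        switch (states[i]) {
        case DISK_RUNNING:    running = true; break;
        case DISK_CANCELLING: cancelling = true; break;
        case DISK_FAILED:     failed = true; break;
        case DISK_CANCELLED:  cancelled = true; break;
        default: break;
        }
    }

    if (running || cancelling)
        return JOB_STILL_RUNNING; /* must wait for remaining sub-jobs */
    if (failed)
        return JOB_FAILED;
    if (cancelled && push)
        return JOB_CANCELED;
    return JOB_COMPLETED;
}
```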

On 12/3/19 11:17 AM, Peter Krempa wrote:
This allows starting and managing the backup job.
Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- po/POTFILES.in | 1 + src/qemu/Makefile.inc.am | 2 + src/qemu/qemu_backup.c | 941 +++++++++++++++++++++++++++++++++++++++
Large patch, but I'm not sure how it could be subdivided.
src/qemu/qemu_backup.h | 41 ++ src/qemu/qemu_driver.c | 47 ++ 5 files changed, 1032 insertions(+) create mode 100644 src/qemu/qemu_backup.c create mode 100644 src/qemu/qemu_backup.h
+++ b/src/qemu/qemu_backup.c
+static int +qemuBackupPrepare(virDomainBackupDefPtr def) +{ + + if (def->type == VIR_DOMAIN_BACKUP_TYPE_PULL) { + if (!def->server) { + def->server = g_new(virStorageNetHostDef, 1); + + def->server->transport = VIR_STORAGE_NET_HOST_TRANS_TCP; + def->server->name = g_strdup("localhost"); + } + + switch ((virStorageNetHostTransport) def->server->transport) { + case VIR_STORAGE_NET_HOST_TRANS_TCP: + /* TODO: Update qemu.conf to provide a port range, + * probably starting at 10809, for obtaining automatic + * port via virPortAllocatorAcquire, as well as store + * somewhere if we need to call virPortAllocatorRelease + * during BackupEnd. Until then, user must provide port */
This TODO survives from my initial code, and does not seem to be addressed later in the series. Not a show-stopper for the initial implementation, but something to remember for followup patches.
+ if (!def->server->port) { + virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s", + _("<domainbackup> must specify TCP port for now")); + return -1; + } + break; + + case VIR_STORAGE_NET_HOST_TRANS_UNIX: + /* TODO: Do we need to mess with selinux? */
This should be addressed as well (or deleted, if it works out of the box).
+static int +qemuBackupDiskPrepareOneBitmaps(struct qemuBackupDiskData *dd, + virJSONValuePtr actions, + virDomainMomentObjPtr *incremental) +{ + g_autoptr(virJSONValue) mergebitmaps = NULL; + g_autoptr(virJSONValue) mergebitmapsstore = NULL; + + if (!(mergebitmaps = virJSONValueNewArray())) + return -1; + + /* TODO: this code works only if the bitmaps are present on a single node. + * The algorithm needs to be changed so that it looks into the backing chain + * so that we can combine all relevant bitmaps for a given backing chain */
Correct - but mixing incremental backup with external snapshots is something that we know is future work. It's okay for the initial implementation that we only support a single node.
+ while (*incremental) { + if (qemuMonitorTransactionBitmapMergeSourceAddBitmap(mergebitmaps, + dd->domdisk->src->nodeformat, + (*incremental)->def->name) < 0) + return -1; + + incremental++; + } + + if (!(mergebitmapsstore = virJSONValueCopy(mergebitmaps))) + return -1; + + if (qemuMonitorTransactionBitmapAdd(actions, + dd->domdisk->src->nodeformat, + dd->incrementalBitmap, + false, + true) < 0) + return -1; + + if (qemuMonitorTransactionBitmapMerge(actions, + dd->domdisk->src->nodeformat, + dd->incrementalBitmap, + &mergebitmaps) < 0) + return -1; + + if (qemuMonitorTransactionBitmapAdd(actions, + dd->store->nodeformat, + dd->incrementalBitmap, + false, + true) < 0) + return -1;
Why do we need two of these calls? /me reads on
+ + if (qemuMonitorTransactionBitmapMerge(actions, + dd->store->nodeformat, + dd->incrementalBitmap, + &mergebitmapsstore) < 0) + return -1;
okay, it looks like you are trying to merge the same bitmap into two different merge commands, which will all be part of the same transaction. I guess it would be helpful to see a trace of the transaction in action to see if we can simplify it (using 2 instead of 4 qemuMonitor* commands).
+ + + return 0; +} + + +static int +qemuBackupDiskPrepareDataOne(virDomainObjPtr vm,
and this is as far as my review got today. I'll resume again as soon as I can. Other than one obvious thing I saw in passing:
@@ -22953,6 +22998,8 @@ static virHypervisorDriver qemuHypervisorDriver = { .domainCheckpointDelete = qemuDomainCheckpointDelete, /* 5.6.0 */ .domainGetGuestInfo = qemuDomainGetGuestInfo, /* 5.7.0 */ .domainAgentSetResponseTimeout = qemuDomainAgentSetResponseTimeout, /* 5.10.0 */ + .domainBackupBegin = qemuDomainBackupBegin, /* 5.10.0 */ + .domainBackupGetXMLDesc = qemuDomainBackupGetXMLDesc, /* 5.10.0 */
These should be 6.0.0 -- Eric Blake, Principal Software Engineer Red Hat, Inc. +1-919-301-3226 Virtualization: qemu.org | libvirt.org

On Wed, Dec 04, 2019 at 17:12:14 -0600, Eric Blake wrote:
On 12/3/19 11:17 AM, Peter Krempa wrote:
This allows starting and managing the backup job.
Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- po/POTFILES.in | 1 + src/qemu/Makefile.inc.am | 2 + src/qemu/qemu_backup.c | 941 +++++++++++++++++++++++++++++++++++++++
Large patch, but I'm not sure how it could be subdivided.
src/qemu/qemu_backup.h | 41 ++ src/qemu/qemu_driver.c | 47 ++ 5 files changed, 1032 insertions(+) create mode 100644 src/qemu/qemu_backup.c create mode 100644 src/qemu/qemu_backup.h
+++ b/src/qemu/qemu_backup.c
+static int +qemuBackupPrepare(virDomainBackupDefPtr def) +{ + + if (def->type == VIR_DOMAIN_BACKUP_TYPE_PULL) { + if (!def->server) { + def->server = g_new(virStorageNetHostDef, 1); + + def->server->transport = VIR_STORAGE_NET_HOST_TRANS_TCP; + def->server->name = g_strdup("localhost"); + } + + switch ((virStorageNetHostTransport) def->server->transport) { + case VIR_STORAGE_NET_HOST_TRANS_TCP: + /* TODO: Update qemu.conf to provide a port range, + * probably starting at 10809, for obtaining automatic + * port via virPortAllocatorAcquire, as well as store + * somewhere if we need to call virPortAllocatorRelease + * during BackupEnd. Until then, user must provide port */
This TODO survives from my initial code, and does not seem to be addressed later in the series. Not a show-stopper for the initial implementation, but something to remember for followup patches.
+ if (!def->server->port) { + virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s", + _("<domainbackup> must specify TCP port for now")); + return -1; + } + break; + + case VIR_STORAGE_NET_HOST_TRANS_UNIX: + /* TODO: Do we need to mess with selinux? */
This should be addressed as well (or deleted, if it works out of the box).
+static int +qemuBackupDiskPrepareOneBitmaps(struct qemuBackupDiskData *dd, + virJSONValuePtr actions, + virDomainMomentObjPtr *incremental) +{ + g_autoptr(virJSONValue) mergebitmaps = NULL; + g_autoptr(virJSONValue) mergebitmapsstore = NULL; + + if (!(mergebitmaps = virJSONValueNewArray())) + return -1; + + /* TODO: this code works only if the bitmaps are present on a single node. + * The algorithm needs to be changed so that it looks into the backing chain + * so that we can combine all relevant bitmaps for a given backing chain */
Correct - but mixing incremental backup with external snapshots is something that we know is future work. It's okay for the initial implementation that we only support a single node.
+ while (*incremental) { + if (qemuMonitorTransactionBitmapMergeSourceAddBitmap(mergebitmaps, + dd->domdisk->src->nodeformat, + (*incremental)->def->name) < 0) + return -1; + + incremental++; + } + + if (!(mergebitmapsstore = virJSONValueCopy(mergebitmaps))) + return -1; + + if (qemuMonitorTransactionBitmapAdd(actions, + dd->domdisk->src->nodeformat, + dd->incrementalBitmap, + false, + true) < 0) + return -1; + + if (qemuMonitorTransactionBitmapMerge(actions, + dd->domdisk->src->nodeformat, + dd->incrementalBitmap, + &mergebitmaps) < 0) + return -1; + + if (qemuMonitorTransactionBitmapAdd(actions, + dd->store->nodeformat, + dd->incrementalBitmap, + false, + true) < 0) + return -1;
Why do we need two of these calls? /me reads on
+ + if (qemuMonitorTransactionBitmapMerge(actions, + dd->store->nodeformat, + dd->incrementalBitmap, + &mergebitmapsstore) < 0) + return -1;
okay, it looks like you are trying to merge the same bitmap into two different merge commands, which will all be part of the same transaction. I guess it would be helpful to see a trace of the transaction in action to see if we can simplify it (using 2 instead of 4 qemuMonitor* commands).
This is required because the backup blockjob requires the bitmap to be present on the source ('device') image of the backup. The same also applies for the image exported by NBD. The catch is that we expose the scratch file via NBD which is actually the target image of the backup. This means that in case of a pull backup we need two instances of the bitmap so both the block job and the NBD server can use them. Arguably there's a possible simplification here for push-mode backups where the target bitmap doesn't make sense.
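[Editorial note: concretely, the four qemuMonitor* calls discussed above should expand into a single QMP `transaction` of roughly the following shape. The node and bitmap names are illustrative, and the `persistent`/`disabled` flags follow the `(..., false, true)` arguments in the patch:]

```json
{ "execute": "transaction",
  "arguments": { "actions": [
    { "type": "block-dirty-bitmap-add",
      "data": { "node": "source-fmt-node", "name": "backup-tmp",
                "persistent": false, "disabled": true } },
    { "type": "block-dirty-bitmap-merge",
      "data": { "node": "source-fmt-node", "target": "backup-tmp",
                "bitmaps": [ { "node": "source-fmt-node", "name": "chk-a" },
                             { "node": "source-fmt-node", "name": "chk-b" } ] } },
    { "type": "block-dirty-bitmap-add",
      "data": { "node": "scratch-fmt-node", "name": "backup-tmp",
                "persistent": false, "disabled": true } },
    { "type": "block-dirty-bitmap-merge",
      "data": { "node": "scratch-fmt-node", "target": "backup-tmp",
                "bitmaps": [ { "node": "source-fmt-node", "name": "chk-a" },
                             { "node": "source-fmt-node", "name": "chk-b" } ] } }
] } }
```

The first add/merge pair materializes the combined bitmap on the source format node for the backup blockjob; the second pair duplicates it onto the scratch node so the NBD export can use it.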
+ + + return 0; +} + + +static int +qemuBackupDiskPrepareDataOne(virDomainObjPtr vm,
and this is as far as my review got today. I'll resume again as soon as I can.
Other than one obvious thing I saw in passing:
@@ -22953,6 +22998,8 @@ static virHypervisorDriver qemuHypervisorDriver = { .domainCheckpointDelete = qemuDomainCheckpointDelete, /* 5.6.0 */ .domainGetGuestInfo = qemuDomainGetGuestInfo, /* 5.7.0 */ .domainAgentSetResponseTimeout = qemuDomainAgentSetResponseTimeout, /* 5.10.0 */ + .domainBackupBegin = qemuDomainBackupBegin, /* 5.10.0 */ + .domainBackupGetXMLDesc = qemuDomainBackupGetXMLDesc, /* 5.10.0 */
These should be 6.0.0
-- Eric Blake, Principal Software Engineer Red Hat, Inc. +1-919-301-3226 Virtualization: qemu.org | libvirt.org
-- libvir-list mailing list libvir-list@redhat.com https://www.redhat.com/mailman/listinfo/libvir-list

[adding some qemu visibility] On 12/5/19 7:34 AM, Peter Krempa wrote:
+ if (!(mergebitmapsstore = virJSONValueCopy(mergebitmaps))) + return -1; + + if (qemuMonitorTransactionBitmapAdd(actions, + dd->domdisk->src->nodeformat, + dd->incrementalBitmap, + false, + true) < 0) + return -1; + + if (qemuMonitorTransactionBitmapMerge(actions, + dd->domdisk->src->nodeformat, + dd->incrementalBitmap, + &mergebitmaps) < 0) + return -1; + + if (qemuMonitorTransactionBitmapAdd(actions, + dd->store->nodeformat, + dd->incrementalBitmap, + false, + true) < 0) + return -1;
Why do we need two of these calls? /me reads on
+ + if (qemuMonitorTransactionBitmapMerge(actions, + dd->store->nodeformat, + dd->incrementalBitmap, + &mergebitmapsstore) < 0) + return -1;
okay, it looks like you are trying to merge the same bitmap into two different merge commands, which will all be part of the same transaction. I guess it would be helpful to see a trace of the transaction in action to see if we can simplify it (using 2 instead of 4 qemuMonitor* commands).
This is required because the backup blockjob requires the bitmap to be present on the source ('device') image of the backup. The same also applies for the image exported by NBD. The catch is that we expose the scratch file via NBD which is actually the target image of the backup. This means that in case of a pull backup we need two instances of the bitmap so both the block job and the NBD server can use them. Arguably there's a possible simplification here for push-mode backups where the target bitmap doesn't make sense.
The backup job requires the bitmap to be on the source, but the qemu NBD server only requires the bitmap to be discoverable from anywhere on the chain, and since the temporary target of the block job has the source image as its backing file, that should be the case. That is: base <- active <- temp | bitmap0 looking up [active, bitmap0] or [temp, bitmap0] should both succeed; we need the former for the blockjob, and the latter for the NBD export. If the NBD export _can't_ find bitmap0 through the backing chain, that may be a symptom of the problem that Max has been trying to fix (his upcoming v7 "deal with filters" is hinted at here, but will not be in 4.2): https://lists.gnu.org/archive/html/qemu-devel/2019-11/msg04520.html In my original implementation, I did not need a duplicated bitmap in the temp file. But that was pre-blockdev. Are we really hitting filter limitations in qemu where the use of blockdev is preventing [temp, bitmap0] from finding the bitmap across the backing chain? If so, we should fix that in qemu - but we're so late for 4.2, that I guess libvirt will have to work around it. -- Eric Blake, Principal Software Engineer Red Hat, Inc. +1-919-301-3226 Virtualization: qemu.org | libvirt.org

05.12.2019 21:13, Eric Blake wrote:
[adding some qemu visibility]
On 12/5/19 7:34 AM, Peter Krempa wrote:
+ if (!(mergebitmapsstore = virJSONValueCopy(mergebitmaps))) + return -1; + + if (qemuMonitorTransactionBitmapAdd(actions, + dd->domdisk->src->nodeformat, + dd->incrementalBitmap, + false, + true) < 0) + return -1; + + if (qemuMonitorTransactionBitmapMerge(actions, + dd->domdisk->src->nodeformat, + dd->incrementalBitmap, + &mergebitmaps) < 0) + return -1; + + if (qemuMonitorTransactionBitmapAdd(actions, + dd->store->nodeformat, + dd->incrementalBitmap, + false, + true) < 0) + return -1;
Why do we need two of these calls? /me reads on
+ + if (qemuMonitorTransactionBitmapMerge(actions, + dd->store->nodeformat, + dd->incrementalBitmap, + &mergebitmapsstore) < 0) + return -1;
okay, it looks like you are trying to merge the same bitmap into two different merge commands, which will all be part of the same transaction. I guess it would be helpful to see a trace of the transaction in action to see if we can simplify it (using 2 instead of 4 qemuMonitor* commands).
This is required because the backup blockjob requires the bitmap to be present on the source ('device') image of the backup. The same also applies for the image exported by NBD. The catch is that we expose the scratch file via NBD which is actually the target image of the backup. This means that in case of a pull backup we need two instances of the bitmap so both the block job and the NBD server can use them. Arguably there's a possible simplification here for push-mode backups where the target bitmap doesn't make sense.
The backup job requires the bitmap to be on the source, but the qemu NBD server only requires the bitmap to be discoverable from anywhere on the chain, and since the temporary target of the block job has the source image as its backing file, that should be the case. That is:
base <- active <- temp | bitmap0
looking up [active, bitmap0] or [temp, bitmap0] should both succeed; we need the former for the blockjob, and the latter for the NBD export.
If the NBD export _can't_ find bitmap0 through the backing chain, that may be a symptom of the problem that Max has been trying to fix (his upcoming v7 "deal with filters" is hinted at here, but will not be in 4.2): https://lists.gnu.org/archive/html/qemu-devel/2019-11/msg04520.html
These problems will hit if some filters are in use, like throttling or copy-on-read: they use a 'file' child, which breaks backing chains. But normal backing chains should work well.
In my original implementation, I did not need a duplicated bitmap in the temp file. But that was pre-blockdev. Are we really hitting filter limitations in qemu where the use of blockdev is preventing [temp, bitmap0] from finding the bitmap across the backing chain? If so, we should fix that in qemu - but we're so late for 4.2, that I guess libvirt will have to work around it.
-- Best regards, Vladimir

On 12/3/19 12:17 PM, Peter Krempa wrote:
This allows starting and managing the backup job.
Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- po/POTFILES.in | 1 + src/qemu/Makefile.inc.am | 2 + src/qemu/qemu_backup.c | 941 +++++++++++++++++++++++++++++++++++++++ src/qemu/qemu_backup.h | 41 ++ src/qemu/qemu_driver.c | 47 ++ 5 files changed, 1032 insertions(+) create mode 100644 src/qemu/qemu_backup.c create mode 100644 src/qemu/qemu_backup.h ... + +static int +qemuBackupDiskPrepareOneStorage(virDomainObjPtr vm, + virHashTablePtr blockNamedNodeData, + struct qemuBackupDiskData *dd, + bool reuse_external) +{ + qemuDomainObjPrivatePtr priv = vm->privateData; + int rc; + + if (!reuse_external && + dd->store->type == VIR_STORAGE_TYPE_FILE && + virStorageFileSupportsCreate(dd->store)) { + + if (virFileExists(dd->store->path)) { + virReportError(VIR_ERR_INVALID_ARG, + _("store '%s' for backup of '%s' existst"),
Noticed this in testing, s/existst/exists/ - Cole

On Tue, Dec 03, 2019 at 18:17:42 +0100, Peter Krempa wrote:
This allows starting and managing the backup job.
Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- po/POTFILES.in | 1 + src/qemu/Makefile.inc.am | 2 + src/qemu/qemu_backup.c | 941 +++++++++++++++++++++++++++++++++++++++ src/qemu/qemu_backup.h | 41 ++ src/qemu/qemu_driver.c | 47 ++ 5 files changed, 1032 insertions(+) create mode 100644 src/qemu/qemu_backup.c create mode 100644 src/qemu/qemu_backup.h
Following diff is required when rebasing this on top of current master due to changes in passing of virCaps: diff --git a/src/qemu/qemu_backup.c b/src/qemu/qemu_backup.c index 3780eb1ea4..5954f90d5c 100644 --- a/src/qemu/qemu_backup.c +++ b/src/qemu/qemu_backup.c @@ -638,7 +638,6 @@ qemuBackupBegin(virDomainObjPtr vm, qemuDomainObjPrivatePtr priv = vm->privateData; g_autoptr(virQEMUDriverConfig) cfg = virQEMUDriverGetConfig(priv->driver); g_autoptr(virDomainBackupDef) def = NULL; - g_autoptr(virCaps) caps = NULL; g_autofree char *suffix = NULL; struct timeval tv; bool pull = false; @@ -663,14 +662,11 @@ qemuBackupBegin(virDomainObjPtr vm, return -1; } - if (!(caps = virQEMUDriverGetCapabilities(priv->driver, false))) - return -1; - if (!(def = virDomainBackupDefParseString(backupXML, priv->driver->xmlopt, 0))) return -1; if (checkpointXML) { - if (!(chkdef = virDomainCheckpointDefParseString(checkpointXML, caps, + if (!(chkdef = virDomainCheckpointDefParseString(checkpointXML, priv->driver->xmlopt, priv->qemuCaps, 0))) return -1; @@ -724,7 +720,7 @@ qemuBackupBegin(virDomainObjPtr vm, goto endjob; if (chkdef) { - if (qemuCheckpointCreateCommon(priv->driver, vm, caps, &chkdef, + if (qemuCheckpointCreateCommon(priv->driver, vm, &chkdef, &actions, &chk) < 0) goto endjob; }

On Tue, Dec 03, 2019 at 06:17:42PM +0100, Peter Krempa wrote:
This allows starting and managing the backup job.
Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- po/POTFILES.in | 1 + src/qemu/Makefile.inc.am | 2 + src/qemu/qemu_backup.c | 941 +++++++++++++++++++++++++++++++++++++++ src/qemu/qemu_backup.h | 41 ++ src/qemu/qemu_driver.c | 47 ++ 5 files changed, 1032 insertions(+) create mode 100644 src/qemu/qemu_backup.c create mode 100644 src/qemu/qemu_backup.h
Reviewed-by: Ján Tomko <jtomko@redhat.com> Jano

On 12/3/19 11:17 AM, Peter Krempa wrote:
This allows starting and managing the backup job.
Signed-off-by: Peter Krempa <pkrempa@redhat.com> ---
resuming where I left off last time
+ +static int +qemuBackupDiskPrepareDataOne(virDomainObjPtr vm,
+ +static int +qemuBackupDiskPrepareDataOnePush(virJSONValuePtr actions, + struct qemuBackupDiskData *dd) +{ + qemuMonitorTransactionBackupSyncMode syncmode = QEMU_MONITOR_TRANSACTION_BACKUP_SYNC_MODE_FULL; + + if (dd->incrementalBitmap) + syncmode = QEMU_MONITOR_TRANSACTION_BACKUP_SYNC_MODE_INCREMENTAL;
Looks correct for both forms of push mode backup.
+ +static int +qemuBackupDiskPrepareDataOnePull(virJSONValuePtr actions, + struct qemuBackupDiskData *dd) +{ + if (qemuMonitorTransactionBackup(actions, + dd->domdisk->src->nodeformat, + dd->blockjob->name, + dd->store->nodeformat, + NULL, + QEMU_MONITOR_TRANSACTION_BACKUP_SYNC_MODE_NONE) < 0)
and this is the correct mode for the blockjob used in managing pull mode backup.
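[Editorial note: side by side, the push and pull prepare functions emit the same `blockdev-backup` transaction action and differ only in the `sync` mode (plus the `bitmap` for incremental push). A rough sketch with illustrative node and job names — the first action is push-incremental, the second is pull:]

```json
[ { "type": "blockdev-backup",
    "data": { "device": "source-fmt-node", "target": "target-fmt-node",
              "job-id": "backup-vda", "sync": "incremental",
              "bitmap": "backup-tmp" } },

  { "type": "blockdev-backup",
    "data": { "device": "source-fmt-node", "target": "scratch-fmt-node",
              "job-id": "backup-vda", "sync": "none" } } ]
```

With `"sync": "none"` the job copies nothing on its own; it only preserves the old contents of blocks into the scratch node as the guest overwrites them, which is what makes the point-in-time pull export via NBD work.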
+ +static int +qemuBackupDiskPrepareOneStorage(virDomainObjPtr vm, + virHashTablePtr blockNamedNodeData, + struct qemuBackupDiskData *dd, + bool reuse_external) +{ + qemuDomainObjPrivatePtr priv = vm->privateData; + int rc; + + if (!reuse_external && + dd->store->type == VIR_STORAGE_TYPE_FILE && + virStorageFileSupportsCreate(dd->store)) { + + if (virFileExists(dd->store->path)) { + virReportError(VIR_ERR_INVALID_ARG, + _("store '%s' for backup of '%s' existst"),
exists
+ dd->store->path, dd->domdisk->dst); + return -1; + } + + if (qemuDomainStorageFileInit(priv->driver, vm, dd->store, NULL) < 0) + return -1; + + dd->initialized = true; + + if (virStorageFileCreate(dd->store) < 0) { + virReportSystemError(errno, + _("failed to create image file '%s'"), + NULLSTR(dd->store->path)); + return -1; + } + + dd->created = true; + } + + if (qemuDomainStorageSourceAccessAllow(priv->driver, vm, dd->store, false, + true) < 0) + return -1; + + dd->labelled = true; + + if (!reuse_external) { + if (qemuBlockStorageSourceCreateDetectSize(blockNamedNodeData, + dd->store, dd->domdisk->src) < 0) + return -1; + + if (qemuBlockStorageSourceCreate(vm, dd->store, NULL, NULL, + dd->crdata->srcdata[0], + QEMU_ASYNC_JOB_BACKUP) < 0) + return -1; + } else { + if (qemuDomainObjEnterMonitorAsync(priv->driver, vm, QEMU_ASYNC_JOB_BACKUP) < 0) + return -1; + + rc = qemuBlockStorageSourceAttachApply(priv->mon, dd->crdata->srcdata[0]); + + if (qemuDomainObjExitMonitor(priv->driver, vm) < 0 || rc < 0) + return -1;
Offlist, we were wondering if this should blindly trust whatever is already in the external file, or blindly pass backing:null. It may depend on whether it is push mode (probably trust the image) vs. pull mode (we will be hooking up the backing file ourselves when we create the sync:none job, so if the scratch disk already has a backing file that gets in the way, which argues we want a blind backing:null in that case).
+/** + * qemuBackupBeginPullExportDisks: + * @vm: domain object + * @disks: backup disk data list + * @ndisks: number of valid disks in @disks + * + * Exports all disks from @dd when doing a pull backup in the NBD server. This + * function must be called while in the monitor context. + */ +static int +qemuBackupBeginPullExportDisks(virDomainObjPtr vm, + struct qemuBackupDiskData *disks, + size_t ndisks) +{ + qemuDomainObjPrivatePtr priv = vm->privateData; + size_t i; + + for (i = 0; i < ndisks; i++) { + struct qemuBackupDiskData *dd = disks + i; + + if (qemuMonitorNBDServerAdd(priv->mon, + dd->store->nodeformat, + dd->domdisk->dst, + false, + dd->incrementalBitmap) < 0) + return -1; + }
When there's more than one disk, this process is non-atomic (a lucky observer on the NBD port could see one but not all of the disks exported yet). Or even with just one disk, a lucky observer can see the NBD port created but no disk exported yet. At any rate, until the libvirt backup API returns, the user shouldn't be trying to probe the NBD port anyway, so this doesn't bother me.
+ + return 0; +} + + +/** + * qemuBackupBeginCollectIncrementalCheckpoints: + * @vm: domain object + * @incrFrom: name of checkpoint representing starting point of incremental backup + * + * Returns a NULL terminated list of pointers to checkpoints in chronological + * order starting from the 'current' checkpoint until reaching @incrFrom. + */ +static virDomainMomentObjPtr * +qemuBackupBeginCollectIncrementalCheckpoints(virDomainObjPtr vm, + const char *incrFrom) +{ + virDomainMomentObjPtr n = virDomainCheckpointGetCurrent(vm->checkpoints); + g_autofree virDomainMomentObjPtr *incr = NULL; + size_t nincr = 0; + + while (n) { + if (VIR_APPEND_ELEMENT_COPY(incr, nincr, n) < 0)
Are there better glib functions for doing this string management?
+ return NULL; + + if (STREQ(n->def->name, incrFrom)) { + virDomainMomentObjPtr terminator = NULL; + if (VIR_APPEND_ELEMENT_COPY(incr, nincr, terminator) < 0) + return NULL; + + return g_steal_pointer(&incr); + } + + if (!n->def->parent_name) + break; + + n = virDomainCheckpointFindByName(vm->checkpoints, n->def->parent_name); + } + + virReportError(VIR_ERR_OPERATION_INVALID, + _("could not locate checkpoint '%s' for incremental backup"), + incrFrom); + return NULL; +} + +
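[Editorial note: the chain walk in qemuBackupBeginCollectIncrementalCheckpoints above can be sketched in isolation. The `moment` struct and `collect_incremental` below are hypothetical stand-ins for virDomainMomentObj and the lookup helpers; the real code builds a NULL-terminated array, whereas this sketch returns a count for easier testing.]

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Minimal stand-in for libvirt's virDomainMomentObj. */
typedef struct moment {
    const char *name;
    const char *parent_name; /* NULL for the root checkpoint */
} moment;

/* Walk from the current checkpoint towards the root, collecting every
 * checkpoint until (and including) 'incr_from', in chronological order
 * starting from 'current'.  Returns the number of entries written to
 * 'out', or -1 if 'incr_from' is not an ancestor of 'current'. */
static int
collect_incremental(const moment *current, const moment *all, size_t nall,
                    const char *incr_from, const moment **out, size_t outmax)
{
    const moment *cur = current;
    size_t n = 0;

    while (cur && n < outmax) {
        out[n++] = cur;

        if (strcmp(cur->name, incr_from) == 0)
            return (int)n;

        if (!cur->parent_name)
            break;

        /* find the parent checkpoint by name */
        const moment *parent = NULL;
        for (size_t i = 0; i < nall; i++) {
            if (strcmp(all[i].name, cur->parent_name) == 0)
                parent = &all[i];
        }
        cur = parent;
    }

    return -1; /* could not locate 'incr_from' in the chain */
}
```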
+/** + * qemuBackupJobCancelBlockjobs: + * @vm: domain object + * @backup: backup definition + * @terminatebackup: flag whether to terminate and unregister the backup + * + * Sends all active blockjobs which are part of @backup of @vm a signal to + * cancel. If @terminatebackup is true qemuBackupJobTerminate is also called + * if there are no outstanding active blockjobs. + */ +void +qemuBackupJobCancelBlockjobs(virDomainObjPtr vm, + virDomainBackupDefPtr backup, + bool terminatebackup) +{
+ +int +qemuBackupBegin(virDomainObjPtr vm, + const char *backupXML, + const char *checkpointXML, + unsigned int flags) +{ + qemuDomainObjPrivatePtr priv = vm->privateData; + g_autoptr(virQEMUDriverConfig) cfg = virQEMUDriverGetConfig(priv->driver); + g_autoptr(virDomainBackupDef) def = NULL; + g_autoptr(virCaps) caps = NULL; + g_autofree char *suffix = NULL; + struct timeval tv; + bool pull = false; + virDomainMomentObjPtr chk = NULL; + g_autoptr(virDomainCheckpointDef) chkdef = NULL; + g_autofree virDomainMomentObjPtr *incremental = NULL; + g_autoptr(virJSONValue) actions = NULL; + struct qemuBackupDiskData *dd = NULL; + ssize_t ndd = 0; + g_autoptr(virHashTable) blockNamedNodeData = NULL; + bool job_started = false; + bool nbd_running = false; + bool reuse = (flags & VIR_DOMAIN_BACKUP_BEGIN_REUSE_EXTERNAL); + int rc = 0; + int ret = -1; + + virCheckFlags(VIR_DOMAIN_BACKUP_BEGIN_REUSE_EXTERNAL, -1);
Adding support for a QUIESCE flag would also be nice (using the guest agent to auto-freeze filesystems prior to kicking off the checkpoint/backup). But that can be followup.
+
+    if (!virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_INCREMENTAL_BACKUP)) {
+        virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s",
+                       _("incremental backup is not supported yet"));
+        return -1;
+    }
Is it safe to be reporting errors like this prior to ACL checks? Can it be used as an information leak? /me reads further Actually, isn't this missing an ACL check? /me reads even further Aha - this is a helper function, and the ACL check is performed in a different file, before this point. Okay, not a problem after all.
+ + if (!(caps = virQEMUDriverGetCapabilities(priv->driver, false))) + return -1; + + if (!(def = virDomainBackupDefParseString(backupXML, priv->driver->xmlopt, 0))) + return -1; + + if (checkpointXML) { + if (!(chkdef = virDomainCheckpointDefParseString(checkpointXML, caps, + priv->driver->xmlopt, + priv->qemuCaps, 0))) + return -1; + + suffix = g_strdup(chkdef->parent.name); + } else { + gettimeofday(&tv, NULL); + suffix = g_strdup_printf("%lld", (long long)tv.tv_sec); + } + + if (def->type == VIR_DOMAIN_BACKUP_TYPE_PULL) + pull = true; + + /* we'll treat this kind of backup job as an asyncjob as it uses some of the + * infrastructure for async jobs. We'll allow standard modify-type jobs + * as the interlocking of conflicting operations is handled on the block + * job level */ + if (qemuDomainObjBeginAsyncJob(priv->driver, vm, QEMU_ASYNC_JOB_BACKUP, + VIR_DOMAIN_JOB_OPERATION_BACKUP, flags) < 0) + return -1; + + qemuDomainObjSetAsyncJobMask(vm, (QEMU_JOB_DEFAULT_MASK | + JOB_MASK(QEMU_JOB_SUSPEND) | + JOB_MASK(QEMU_JOB_MODIFY))); + priv->job.current->statsType = QEMU_DOMAIN_JOB_STATS_TYPE_BACKUP; + + + if (!virDomainObjIsActive(vm)) {
Is the double blank line intentional?
+ virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s", + _("cannot perform disk backup for inactive domain")); + goto endjob; + } + + if (priv->backup) { + virReportError(VIR_ERR_OPERATION_INVALID, "%s", + _("another backup job is already running")); + goto endjob; + } + + if (qemuBackupPrepare(def) < 0) + goto endjob; + + if (virDomainBackupAlignDisks(def, vm->def, suffix) < 0) + goto endjob; + + if (def->incremental && + !(incremental = qemuBackupBeginCollectIncrementalCheckpoints(vm, def->incremental))) + goto endjob; + + if (!(actions = virJSONValueNewArray())) + goto endjob; + + if (chkdef) { + if (qemuCheckpointCreateCommon(priv->driver, vm, caps, &chkdef, + &actions, &chk) < 0) + goto endjob; + } + + if ((ndd = qemuBackupDiskPrepareData(vm, def, incremental, actions, cfg, &dd, + reuse)) <= 0) { + if (ndd == 0) { + virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s", + _("no disks selected for backup")); + } + + goto endjob; + }
Lots of prep work before kicking off the backup :) But it looks good so far.
+ + if (qemuDomainObjEnterMonitorAsync(priv->driver, vm, QEMU_ASYNC_JOB_BACKUP) < 0) + goto endjob; + blockNamedNodeData = qemuMonitorBlockGetNamedNodeData(priv->mon); + if (qemuDomainObjExitMonitor(priv->driver, vm) < 0 || !blockNamedNodeData) + goto endjob;
Do you still need to ask the monitor for named node data, or should you already have access to all that information since backups require -blockdev support?
+ + if (qemuBackupDiskPrepareStorage(vm, dd, ndd, blockNamedNodeData, reuse) < 0) + goto endjob; + + priv->backup = g_steal_pointer(&def); + + if (qemuDomainObjEnterMonitorAsync(priv->driver, vm, QEMU_ASYNC_JOB_BACKUP) < 0) + goto endjob; + + /* TODO: TLS is a must-have for the modern age */ + if (pull) { + if ((rc = qemuMonitorNBDServerStart(priv->mon, priv->backup->server, NULL)) == 0) + nbd_running = true; + }
Yep, adding TLS in a followup patch will be necessary. But one step at a time.
+ + if (rc == 0) + rc = qemuMonitorTransaction(priv->mon, &actions); + + if (qemuDomainObjExitMonitor(priv->driver, vm) < 0 || rc < 0) + goto endjob; + + job_started = true; + qemuBackupDiskStarted(vm, dd, ndd); + + if (chk && + qemuCheckpointCreateFinalize(priv->driver, vm, cfg, chk, true) < 0) + goto endjob; + + if (pull) { + if (qemuDomainObjEnterMonitorAsync(priv->driver, vm, QEMU_ASYNC_JOB_BACKUP) < 0) + goto endjob; + /* note that if the export fails we've already created the checkpoint + * and we will not delete it */
Why not?
+ rc = qemuBackupBeginPullExportDisks(vm, dd, ndd); + if (qemuDomainObjExitMonitor(priv->driver, vm) < 0) + goto endjob; + + if (rc < 0) { + qemuBackupJobCancelBlockjobs(vm, priv->backup, false); + goto endjob; + } + } + + ret = 0; + + endjob: + qemuBackupDiskDataCleanup(vm, dd, ndd); + if (!job_started && nbd_running && + qemuDomainObjEnterMonitorAsync(priv->driver, vm, QEMU_ASYNC_JOB_BACKUP) < 0) { + ignore_value(qemuMonitorNBDServerStop(priv->mon)); + ignore_value(qemuDomainObjExitMonitor(priv->driver, vm)); + } + + if (ret < 0 && !job_started) + def = g_steal_pointer(&priv->backup); + + if (ret == 0) + qemuDomainObjReleaseAsyncJob(vm); + else + qemuDomainObjEndAsyncJob(priv->driver, vm); + + return ret; +} + + +char * +qemuBackupGetXMLDesc(virDomainObjPtr vm, + unsigned int flags) +{ + g_auto(virBuffer) buf = VIR_BUFFER_INITIALIZER; + virDomainBackupDefPtr backup; + + virCheckFlags(0, NULL); +
Missing ACL check? /me repeats exercise of reading further No, this is a helper function.
+ if (!(backup = qemuDomainGetBackup(vm))) + return NULL; + + if (virDomainBackupDefFormat(&buf, backup, false) < 0) + return NULL; + + return virBufferContentAndReset(&buf); +} + + +void +qemuBackupNotifyBlockjobEnd(virDomainObjPtr vm, + virDomainDiskDefPtr disk, + qemuBlockjobState state, + unsigned long long cur, + unsigned long long end) +{ + qemuDomainObjPrivatePtr priv = vm->privateData; + bool has_running = false; + bool has_cancelling = false; + bool has_cancelled = false; + bool has_failed = false; + qemuDomainJobStatus jobstatus = QEMU_DOMAIN_JOB_STATUS_COMPLETED; + virDomainBackupDefPtr backup = priv->backup; + size_t i; + + VIR_DEBUG("vm: '%s', disk:'%s', state:'%d'", + vm->def->name, disk->dst, state); + + if (!backup) + return; + + if (backup->type == VIR_DOMAIN_BACKUP_TYPE_PULL) { + qemuDomainObjEnterMonitor(priv->driver, vm); + ignore_value(qemuMonitorNBDServerStop(priv->mon)); + if (qemuDomainObjExitMonitor(priv->driver, vm) < 0) + return; + + /* update the final statistics with the current job's data */ + backup->pull_tmp_used += cur; + backup->pull_tmp_total += end; + } else { + backup->push_transferred += cur; + backup->push_total += end; + } +
If I understand, this is no longer an API entry point, but now a helper function called by the existing virDomainAbortJob API, so this one does not need an ACL check (as that would already have been performed).
+ for (i = 0; i < backup->ndisks; i++) { + virDomainBackupDiskDefPtr backupdisk = backup->disks + i; + + if (!backupdisk->store) + continue; + + if (STREQ(disk->dst, backupdisk->name)) { + switch (state) { + case QEMU_BLOCKJOB_STATE_COMPLETED: + backupdisk->state = VIR_DOMAIN_BACKUP_DISK_STATE_COMPLETE; + break; + + case QEMU_BLOCKJOB_STATE_CONCLUDED: + case QEMU_BLOCKJOB_STATE_FAILED: + backupdisk->state = VIR_DOMAIN_BACKUP_DISK_STATE_FAILED; + break; + + case QEMU_BLOCKJOB_STATE_CANCELLED: + backupdisk->state = VIR_DOMAIN_BACKUP_DISK_STATE_CANCELLED; + break; + + case QEMU_BLOCKJOB_STATE_READY: + case QEMU_BLOCKJOB_STATE_NEW: + case QEMU_BLOCKJOB_STATE_RUNNING: + case QEMU_BLOCKJOB_STATE_ABORTING: + case QEMU_BLOCKJOB_STATE_PIVOTING: + case QEMU_BLOCKJOB_STATE_LAST: + default: + break; + } + } + + switch (backupdisk->state) { + case VIR_DOMAIN_BACKUP_DISK_STATE_COMPLETE: + break; + + case VIR_DOMAIN_BACKUP_DISK_STATE_RUNNING: + has_running = true; + break; + + case VIR_DOMAIN_BACKUP_DISK_STATE_CANCELLING: + has_cancelling = true; + break; + + case VIR_DOMAIN_BACKUP_DISK_STATE_FAILED: + has_failed = true; + break; + + case VIR_DOMAIN_BACKUP_DISK_STATE_CANCELLED: + has_cancelled = true; + break; + + case VIR_DOMAIN_BACKUP_DISK_STATE_NONE: + case VIR_DOMAIN_BACKUP_DISK_STATE_LAST: + break; + } + } + + if (has_running && (has_failed || has_cancelled)) { + /* cancel the rest of the jobs */ + qemuBackupJobCancelBlockjobs(vm, backup, false); + } else if (!has_running && !has_cancelling) { + /* all sub-jobs have stopped */ + + if (has_failed) + jobstatus = QEMU_DOMAIN_JOB_STATUS_FAILED; + else if (has_cancelled && backup->type == VIR_DOMAIN_BACKUP_TYPE_PUSH) + jobstatus = QEMU_DOMAIN_JOB_STATUS_CANCELED; + + qemuBackupJobTerminate(vm, jobstatus); + } + + /* otherwise we must wait for the jobs to end */ +} diff --git a/src/qemu/qemu_backup.h b/src/qemu/qemu_backup.h
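The per-disk state aggregation at the end of qemuBackupNotifyBlockjobEnd can be condensed into a small decision function. This is a hedged sketch: the enums below mirror (but do not reproduce) the VIR_DOMAIN_BACKUP_DISK_STATE_* values and the resulting job status, and `backup_job_action` is a hypothetical helper, not libvirt code:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical per-disk states, mirroring VIR_DOMAIN_BACKUP_DISK_STATE_*. */
typedef enum { DISK_RUNNING, DISK_COMPLETE, DISK_FAILED,
               DISK_CANCELLING, DISK_CANCELLED } DiskState;

/* Overall decision for the backup job. */
typedef enum { JOB_WAIT,        /* some sub-jobs are still running     */
               JOB_CANCEL_REST, /* a sub-job failed while others run   */
               JOB_COMPLETED, JOB_FAILED, JOB_CANCELED } JobAction;

static JobAction
backup_job_action(const DiskState *disks, size_t ndisks, bool push)
{
    bool running = false, cancelling = false, failed = false, cancelled = false;

    for (size_t i = 0; i < ndisks; i++) {
        switch (disks[i]) {
        case DISK_RUNNING:    running = true;    break;
        case DISK_CANCELLING: cancelling = true; break;
        case DISK_FAILED:     failed = true;     break;
        case DISK_CANCELLED:  cancelled = true;  break;
        case DISK_COMPLETE:   break;
        }
    }

    if (running && (failed || cancelled))
        return JOB_CANCEL_REST;     /* cancel the rest of the sub-jobs */
    if (running || cancelling)
        return JOB_WAIT;            /* must wait for the jobs to end */
    if (failed)
        return JOB_FAILED;
    if (cancelled && push)          /* pull jobs still produced the export */
        return JOB_CANCELED;
    return JOB_COMPLETED;
}
```

Note the asymmetry mirrored from the code above: a cancelled sub-job only marks the whole job as cancelled for push-type backups.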
+++ b/src/qemu/qemu_driver.c
@@ -52,6 +52,7 @@
 #include "qemu_blockjob.h"
 #include "qemu_security.h"
 #include "qemu_checkpoint.h"
+#include "qemu_backup.h"
#include "virerror.h" #include "virlog.h" @@ -17212,6 +17213,50 @@ qemuDomainCheckpointDelete(virDomainCheckpointPtr checkpoint, }
+static int +qemuDomainBackupBegin(virDomainPtr domain, + const char *backupXML, + const char *checkpointXML, + unsigned int flags) +{ + virDomainObjPtr vm = NULL; + int ret = -1; + + if (!(vm = qemuDomainObjFromDomain(domain))) + goto cleanup; + + if (virDomainBackupBeginEnsureACL(domain->conn, vm->def) < 0) + goto cleanup; + + ret = qemuBackupBegin(vm, backupXML, checkpointXML, flags);
And here's what I was missing in my initial read of one file when asking about ACL checks above.
@@ -22953,6 +22998,8 @@ static virHypervisorDriver qemuHypervisorDriver = {
     .domainCheckpointDelete = qemuDomainCheckpointDelete, /* 5.6.0 */
     .domainGetGuestInfo = qemuDomainGetGuestInfo, /* 5.7.0 */
     .domainAgentSetResponseTimeout = qemuDomainAgentSetResponseTimeout, /* 5.10.0 */
+    .domainBackupBegin = qemuDomainBackupBegin, /* 5.10.0 */
+    .domainBackupGetXMLDesc = qemuDomainBackupGetXMLDesc, /* 5.10.0 */
6.0. There, I've reviewed the whole file. There's probably a few tweaks based on what I've pointed out, but we're very close to having this be ready for inclusion. -- Eric Blake, Principal Software Engineer Red Hat, Inc. +1-919-301-3226 Virtualization: qemu.org | libvirt.org

On Mon, Dec 09, 2019 at 11:52:33 -0600, Eric Blake wrote:
On 12/3/19 11:17 AM, Peter Krempa wrote:
This allows starting and managing the backup job.
Signed-off-by: Peter Krempa <pkrempa@redhat.com> ---
resuming where I left off last time
[...]
+ if (!reuse_external) { + if (qemuBlockStorageSourceCreateDetectSize(blockNamedNodeData, + dd->store, dd->domdisk->src) < 0) + return -1; + + if (qemuBlockStorageSourceCreate(vm, dd->store, NULL, NULL, + dd->crdata->srcdata[0], + QEMU_ASYNC_JOB_BACKUP) < 0) + return -1; + } else { + if (qemuDomainObjEnterMonitorAsync(priv->driver, vm, QEMU_ASYNC_JOB_BACKUP) < 0) + return -1; + + rc = qemuBlockStorageSourceAttachApply(priv->mon, dd->crdata->srcdata[0]); + + if (qemuDomainObjExitMonitor(priv->driver, vm) < 0 || rc < 0) + return -1;
Offlist, we were wondering if this should blindly trust whatever is already in the external file, or blindly pass backing:null. It may depend on whether it is push mode (probably trust the image) vs. pull mode (we will be hooking up the backing file ourselves when we create the sync:none job, so if the scratch disk already has a backing file that gets in the way, which argues we want a blind backing:null in that case).
I'll add the following diff:

diff --git a/src/qemu/qemu_backup.c b/src/qemu/qemu_backup.c
index caef12cf94..1ca9f8dfe0 100644
--- a/src/qemu/qemu_backup.c
+++ b/src/qemu/qemu_backup.c
@@ -283,6 +283,7 @@ qemuBackupDiskPrepareDataOne(virDomainObjPtr vm,
                              bool removeStore)
 {
     qemuDomainObjPrivatePtr priv = vm->privateData;
+    g_autoptr(virStorageSource) terminator = NULL;

     /* set data structure */
     dd->backupdisk = backupdisk;
@@ -311,13 +312,17 @@ qemuBackupDiskPrepareDataOne(virDomainObjPtr vm,
             return -1;
     }

+    /* install terminator to prevent qemu from opening backing images */
+    if (!(terminator = virStorageSourceNew()))
+        return -1;
+
     if (!(dd->blockjob = qemuBlockJobDiskNewBackup(vm, dd->domdisk, dd->store,
                                                    removeStore,
                                                    dd->incrementalBitmap)))
         return -1;

     if (!(dd->crdata = qemuBuildStorageSourceChainAttachPrepareBlockdevTop(dd->store,
-                                                                           NULL,
+                                                                           terminator,
                                                                            priv->qemuCaps)))
         return -1;
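The idea behind this diff — distinguishing "backing file unspecified" (NULL, which qemu would probe from the image header) from "backing explicitly absent" (an empty terminator source, i.e. backing:null) — can be illustrated with a minimal sketch. `Source` and `chain_is_terminated` are hypothetical stand-ins, not libvirt's virStorageSource types:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-in for one element of a storage source chain. */
typedef struct Source {
    const char *path;       /* NULL marks an empty terminator element */
    struct Source *backing; /* NULL == backing file left unspecified  */
} Source;

/* Returns 1 when the chain ends in an explicit empty terminator
 * ("no backing file, do not probe"), 0 when the backing file is
 * simply unspecified and would be probed from the image metadata. */
static int
chain_is_terminated(const Source *s)
{
    for (; s; s = s->backing) {
        if (!s->path)
            return 1;
    }
    return 0;
}
```

For the pull-mode scratch image, installing the terminator guarantees libvirt hooks up the backing chain itself when creating the sync:none job, instead of trusting whatever the pre-existing file advertises.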
+/** + * qemuBackupBeginCollectIncrementalCheckpoints: + * @vm: domain object + * @incrFrom: name of checkpoint representing starting point of incremental backup + * + * Returns a NULL terminated list of pointers to checkpoints in chronological + * order starting from the 'current' checkpoint until reaching @incrFrom. + */ +static virDomainMomentObjPtr * +qemuBackupBeginCollectIncrementalCheckpoints(virDomainObjPtr vm, + const char *incrFrom) +{ + virDomainMomentObjPtr n = virDomainCheckpointGetCurrent(vm->checkpoints); + g_autofree virDomainMomentObjPtr *incr = NULL; + size_t nincr = 0; + + while (n) { + if (VIR_APPEND_ELEMENT_COPY(incr, nincr, n) < 0)
Are there better glib functions for doing this string management?
Probably yes. We didn't adopt them yet though and I don't feel brave enough to do it as a part of this series.
+ return NULL; + + if (STREQ(n->def->name, incrFrom)) { + virDomainMomentObjPtr terminator = NULL; + if (VIR_APPEND_ELEMENT_COPY(incr, nincr, terminator) < 0)
[...]
+ + if (qemuDomainObjEnterMonitorAsync(priv->driver, vm, QEMU_ASYNC_JOB_BACKUP) < 0) + goto endjob; + blockNamedNodeData = qemuMonitorBlockGetNamedNodeData(priv->mon); + if (qemuDomainObjExitMonitor(priv->driver, vm) < 0 || !blockNamedNodeData) + goto endjob;
Do you still need to ask the monitor for named node data, or should you already have access to all that information since backups require -blockdev support?
Yes. This is for detecting the state of bitmaps and the sizing of the images. When we are pre-creating the images we need to know the current size.
+ + if (qemuBackupDiskPrepareStorage(vm, dd, ndd, blockNamedNodeData, reuse) < 0) + goto endjob; + + priv->backup = g_steal_pointer(&def);
[...]
+ if (chk && + qemuCheckpointCreateFinalize(priv->driver, vm, cfg, chk, true) < 0) + goto endjob; + + if (pull) { + if (qemuDomainObjEnterMonitorAsync(priv->driver, vm, QEMU_ASYNC_JOB_BACKUP) < 0) + goto endjob; + /* note that if the export fails we've already created the checkpoint + * and we will not delete it */
Why not?
It would require merging of bitmaps. That operation is too complex and error-prone to attempt on a cleanup path.
+ rc = qemuBackupBeginPullExportDisks(vm, dd, ndd); + if (qemuDomainObjExitMonitor(priv->driver, vm) < 0) + goto endjob; + + if (rc < 0) { + qemuBackupJobCancelBlockjobs(vm, priv->backup, false); + goto endjob; + } + }
[...]
+void +qemuBackupNotifyBlockjobEnd(virDomainObjPtr vm, + virDomainDiskDefPtr disk, + qemuBlockjobState state, + unsigned long long cur, + unsigned long long end) +{ + qemuDomainObjPrivatePtr priv = vm->privateData; + bool has_running = false; + bool has_cancelling = false; + bool has_cancelled = false; + bool has_failed = false; + qemuDomainJobStatus jobstatus = QEMU_DOMAIN_JOB_STATUS_COMPLETED; + virDomainBackupDefPtr backup = priv->backup; + size_t i; + + VIR_DEBUG("vm: '%s', disk:'%s', state:'%d'", + vm->def->name, disk->dst, state); + + if (!backup) + return; + + if (backup->type == VIR_DOMAIN_BACKUP_TYPE_PULL) { + qemuDomainObjEnterMonitor(priv->driver, vm); + ignore_value(qemuMonitorNBDServerStop(priv->mon)); + if (qemuDomainObjExitMonitor(priv->driver, vm) < 0) + return; + + /* update the final statistics with the current job's data */ + backup->pull_tmp_used += cur; + backup->pull_tmp_total += end; + } else { + backup->push_transferred += cur; + backup->push_total += end; + } +
If I understand, this is no longer an API entry point, but now a helper function called by the existing virDomainAbortJob API, so this one does not need an ACL check (as that would already have been performed).
This is (later) called from the block job event handler. I've included it in this patch because IMO it's semantically closer to this code than to the blockjob code.
+ for (i = 0; i < backup->ndisks; i++) { + virDomainBackupDiskDefPtr backupdisk = backup->disks + i; + + if (!backupdisk->store) + continue;

We can use the output of 'query-jobs' to figure out some useful information about a backup job. That is progress in case of a push job and scratch file use in case of a pull job. Add a worker which will total up the data and call it from qemuDomainGetJobStatsInternal. Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- src/qemu/qemu_backup.c | 98 ++++++++++++++++++++++++++++++++++++++++++ src/qemu/qemu_backup.h | 5 +++ src/qemu/qemu_driver.c | 3 +- 3 files changed, 105 insertions(+), 1 deletion(-) diff --git a/src/qemu/qemu_backup.c b/src/qemu/qemu_backup.c index 8307a42e1c..452f1b0a4d 100644 --- a/src/qemu/qemu_backup.c +++ b/src/qemu/qemu_backup.c @@ -939,3 +939,101 @@ qemuBackupNotifyBlockjobEnd(virDomainObjPtr vm, /* otherwise we must wait for the jobs to end */ } + + +static void +qemuBackupGetJobInfoStatsUpdateOne(virDomainObjPtr vm, + bool push, + const char *diskdst, + qemuDomainBackupStats *stats, + qemuMonitorJobInfoPtr *blockjobs, + size_t nblockjobs) +{ + virDomainDiskDefPtr domdisk; + qemuMonitorJobInfoPtr monblockjob = NULL; + g_autoptr(qemuBlockJobData) diskblockjob = NULL; + size_t i; + + /* it's just statistics so let's not worry so much about errors */ + if (!(domdisk = virDomainDiskByTarget(vm->def, diskdst))) + return; + + if (!(diskblockjob = qemuBlockJobDiskGetJob(domdisk))) + return; + + for (i = 0; i < nblockjobs; i++) { + if (STREQ_NULLABLE(blockjobs[i]->id, diskblockjob->name)) { + monblockjob = blockjobs[i]; + break; + } + } + if (!monblockjob) + return; + + if (push) { + stats->total += monblockjob->progressTotal; + stats->transferred += monblockjob->progressCurrent; + } else { + stats->tmp_used += monblockjob->progressCurrent; + stats->tmp_total += monblockjob->progressTotal; + } +} + + +int +qemuBackupGetJobInfoStats(virQEMUDriverPtr driver, + virDomainObjPtr vm, + qemuDomainJobInfoPtr jobInfo) +{ + qemuDomainBackupStats *stats = &jobInfo->stats.backup; + qemuDomainObjPrivatePtr priv = vm->privateData; + qemuMonitorJobInfoPtr 
*blockjobs = NULL; + size_t nblockjobs = 0; + size_t i; + int rc; + int ret = -1; + + if (!priv->backup) { + virReportError(VIR_ERR_INTERNAL_ERROR, "%s", + _("backup job data missing")); + return -1; + } + + if (qemuDomainJobInfoUpdateTime(jobInfo) < 0) + return -1; + + jobInfo->status = QEMU_DOMAIN_JOB_STATUS_ACTIVE; + + qemuDomainObjEnterMonitor(driver, vm); + + rc = qemuMonitorGetJobInfo(priv->mon, &blockjobs, &nblockjobs); + + if (qemuDomainObjExitMonitor(driver, vm) < 0 || rc < 0) + goto cleanup; + + /* count in completed jobs */ + stats->total = priv->backup->push_total; + stats->transferred = priv->backup->push_transferred; + stats->tmp_used = priv->backup->pull_tmp_used; + stats->tmp_total = priv->backup->pull_tmp_total; + + for (i = 0; i < priv->backup->ndisks; i++) { + if (priv->backup->disks[i].state != VIR_DOMAIN_BACKUP_DISK_STATE_RUNNING) + continue; + + qemuBackupGetJobInfoStatsUpdateOne(vm, + priv->backup->type == VIR_DOMAIN_BACKUP_TYPE_PUSH, + priv->backup->disks[i].name, + stats, + blockjobs, + nblockjobs); + } + + ret = 0; + + cleanup: + for (i = 0; i < nblockjobs; i++) + qemuMonitorJobInfoFree(blockjobs[i]); + g_free(blockjobs); + return ret; +} diff --git a/src/qemu/qemu_backup.h b/src/qemu/qemu_backup.h index 96297fc9e4..0f76abe067 100644 --- a/src/qemu/qemu_backup.h +++ b/src/qemu/qemu_backup.h @@ -39,3 +39,8 @@ qemuBackupNotifyBlockjobEnd(virDomainObjPtr vm, qemuBlockjobState state, unsigned long long cur, unsigned long long end); + +int +qemuBackupGetJobInfoStats(virQEMUDriverPtr driver, + virDomainObjPtr vm, + qemuDomainJobInfoPtr jobInfo); diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index 00359d37e9..95882d9d14 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -13907,7 +13907,8 @@ qemuDomainGetJobStatsInternal(virQEMUDriverPtr driver, break; case QEMU_DOMAIN_JOB_STATS_TYPE_BACKUP: - /* TODO implement for backup job */ + if (qemuBackupGetJobInfoStats(driver, vm, jobInfo) < 0) + goto cleanup; break; case 
QEMU_DOMAIN_JOB_STATS_TYPE_NONE: -- 2.23.0
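The totalling logic in qemuBackupGetJobInfoStats — completed sub-jobs are folded into stored counters as they finish, and only still-running jobs are added on top from live monitor data — can be sketched as follows. `BackupStats` and `SubJob` are hypothetical reductions of the real structures:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical reduced stats, mirroring the push-mode counters. */
typedef struct {
    unsigned long long transferred; /* bytes copied so far   */
    unsigned long long total;       /* total bytes to copy   */
} BackupStats;

/* Hypothetical per-blockjob progress as reported by the monitor. */
typedef struct {
    unsigned long long current;
    unsigned long long total;
    int running; /* non-zero while the block job is still active */
} SubJob;

/* @completed already contains the counters of finished sub-jobs;
 * only the still-running jobs are summed in on top of it. */
static BackupStats
backup_stats_totals(BackupStats completed, const SubJob *jobs, size_t njobs)
{
    for (size_t i = 0; i < njobs; i++) {
        if (!jobs[i].running)
            continue; /* already accounted for in @completed */
        completed.transferred += jobs[i].current;
        completed.total += jobs[i].total;
    }
    return completed;
}
```

Skipping non-running jobs avoids double-counting, since their final numbers were added to the stored totals by the blockjob-end notification.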

On 12/3/19 11:17 AM, Peter Krempa wrote:
We can use the output of 'query-jobs' to figure out some useful information about a backup job: progress in the case of a push job, and scratch file usage in the case of a pull job.
[I still need to finish my review on 20, but this one's easier to knock out in the short term]
Add a worker which will total up the data and call it from qemuDomainGetJobStatsInternal.
Signed-off-by: Peter Krempa <pkrempa@redhat.com> ---
+static void +qemuBackupGetJobInfoStatsUpdateOne(virDomainObjPtr vm,
+ + if (push) { + stats->total += monblockjob->progressTotal; + stats->transferred += monblockjob->progressCurrent; + } else { + stats->tmp_used += monblockjob->progressCurrent; + stats->tmp_total += monblockjob->progressTotal; + }
I don't know what qemu reports for the job used/total on the temp image (I guess the total is the guest-visible disk size?) but this reporting is reasonable enough for now. This patch looks fine, so: Reviewed-by: Eric Blake <eblake@redhat.com> Still, seeing it made me wonder - when I first wrote the bitmap code, I added flag VIR_DOMAIN_CHECKPOINT_XML_SIZE for grabbing the current size of a bitmap, but disabled the qemu implementation of it at the last minute when committing the Checkpoint API because the monitor actions I used to get at it before -blockdev was not sane. But I don't see it supported anywhere in this series. The progress stats of a backup job are similar, but at some point, we do need to get a followup patch that gets size estimation from a bitmap prior to starting the backup back to viability. -- Eric Blake, Principal Software Engineer Red Hat, Inc. +1-919-301-3226 Virtualization: qemu.org | libvirt.org

On Thu, Dec 05, 2019 at 13:54:33 -0600, Eric Blake wrote:
On 12/3/19 11:17 AM, Peter Krempa wrote:
We can use the output of 'query-jobs' to figure out some useful information about a backup job: progress in the case of a push job, and scratch file usage in the case of a pull job.
[I still need to finish my review on 20, but this one's easier to knock out in the short term]
Add a worker which will total up the data and call it from qemuDomainGetJobStatsInternal.
Signed-off-by: Peter Krempa <pkrempa@redhat.com> ---
+static void +qemuBackupGetJobInfoStatsUpdateOne(virDomainObjPtr vm,
+ + if (push) { + stats->total += monblockjob->progressTotal; + stats->transferred += monblockjob->progressCurrent; + } else { + stats->tmp_used += monblockjob->progressCurrent; + stats->tmp_total += monblockjob->progressTotal; + }
I don't know what qemu reports for the job used/total on the temp image (I guess the total is the guest-visible disk size?) but this reporting is reasonable enough for now.
Yes, it's the guest-visible size, which is basically the maximum amount that can be written into the scratch image in the current implementation. If you add the flag you suggested earlier, which will make the blocks not covered by the backup inaccessible and thus optimize the size of the scratch image, the job stats reported by qemu should reflect that.
This patch looks fine, so: Reviewed-by: Eric Blake <eblake@redhat.com>
Still, seeing it made me wonder - when I first wrote the bitmap code, I added flag VIR_DOMAIN_CHECKPOINT_XML_SIZE for grabbing the current size of a bitmap, but disabled the qemu implementation of it at the last minute when committing the Checkpoint API because the monitor actions I used to get at it before -blockdev was not sane. But I don't see it supported anywhere in this series. The progress stats of a backup job are similar, but at some point, we do need to get a followup patch that gets size estimation from a bitmap prior to starting the backup back to viability.
I'm holding that patch until the integration with snapshots is done as it requires merging of the bitmaps to figure out the size of the backup.

On Tue, Dec 03, 2019 at 06:17:43PM +0100, Peter Krempa wrote:
We can use the output of 'query-jobs' to figure out some useful information about a backup job: progress in the case of a push job, and scratch file usage in the case of a pull job.
Add a worker which will total up the data and call it from qemuDomainGetJobStatsInternal.
Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- src/qemu/qemu_backup.c | 98 ++++++++++++++++++++++++++++++++++++++++++ src/qemu/qemu_backup.h | 5 +++ src/qemu/qemu_driver.c | 3 +- 3 files changed, 105 insertions(+), 1 deletion(-)
Reviewed-by: Ján Tomko <jtomko@redhat.com> Jano

Use the helper which cancels all blockjobs to perform the backup job cancellation in qemuDomainAbortJob. Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- src/qemu/qemu_driver.c | 15 ++++++++++----- 1 file changed, 10 insertions(+), 5 deletions(-) diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index 95882d9d14..2408b08106 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -14054,11 +14054,16 @@ static int qemuDomainAbortJob(virDomainPtr dom) } VIR_DEBUG("Cancelling job at client request"); - qemuDomainObjAbortAsyncJob(vm); - qemuDomainObjEnterMonitor(driver, vm); - ret = qemuMonitorMigrateCancel(priv->mon); - if (qemuDomainObjExitMonitor(driver, vm) < 0) - ret = -1; + if (priv->job.asyncJob == QEMU_ASYNC_JOB_BACKUP) { + qemuBackupJobCancelBlockjobs(vm, priv->backup, true); + ret = 0; + } else { + qemuDomainObjAbortAsyncJob(vm); + qemuDomainObjEnterMonitor(driver, vm); + ret = qemuMonitorMigrateCancel(priv->mon); + if (qemuDomainObjExitMonitor(driver, vm) < 0) + ret = -1; + } endjob: qemuDomainObjEndJob(driver, vm); -- 2.23.0

On Tue, Dec 03, 2019 at 06:17:44PM +0100, Peter Krempa wrote:
Use the helper which cancels all blockjobs to perform the backup job cancellation in qemuDomainAbortJob.
Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- src/qemu/qemu_driver.c | 15 ++++++++++----- 1 file changed, 10 insertions(+), 5 deletions(-)
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index 95882d9d14..2408b08106 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -14054,11 +14054,16 @@ static int qemuDomainAbortJob(virDomainPtr dom) }
VIR_DEBUG("Cancelling job at client request"); - qemuDomainObjAbortAsyncJob(vm); - qemuDomainObjEnterMonitor(driver, vm); - ret = qemuMonitorMigrateCancel(priv->mon); - if (qemuDomainObjExitMonitor(driver, vm) < 0) - ret = -1; + if (priv->job.asyncJob == QEMU_ASYNC_JOB_BACKUP) { + qemuBackupJobCancelBlockjobs(vm, priv->backup, true); + ret = 0; + } else { + qemuDomainObjAbortAsyncJob(vm); + qemuDomainObjEnterMonitor(driver, vm); + ret = qemuMonitorMigrateCancel(priv->mon); + if (qemuDomainObjExitMonitor(driver, vm) < 0) + ret = -1; + }
Hmm, this makes me think we should have had some better error checking in here already. IIUC, we have other types of async job that are not related to either migration or backups, so should we do

switch (priv->job.asyncJob) {
case QEMU_ASYNC_JOB_BACKUP:
    ...
case QEMU_ASYNC_JOB_MIGRATE:
    ...
case QEMU_ASYNC_JOB....
default:
    report error
}

Regards, Daniel -- |: https://berrange.com -o- https://www.flickr.com/photos/dberrange :| |: https://libvirt.org -o- https://fstop138.berrange.com :| |: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
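The suggestion amounts to dispatching on the async job type with an exhaustive switch, rather than an if/else that falls back to migrate-cancel for everything non-backup. A minimal sketch with hypothetical enum values (mirroring, not reproducing, QEMU_ASYNC_JOB_*):

```c
#include <assert.h>

/* Hypothetical async job kinds and abort outcomes. */
typedef enum { ASYNC_NONE, ASYNC_MIGRATION_OUT, ASYNC_BACKUP,
               ASYNC_DUMP } AsyncJob;
typedef enum { ABORT_MIGRATE_CANCEL, ABORT_BACKUP_CANCEL,
               ABORT_ERROR } AbortAction;

/* Dispatch on the running async job type; unknown/absent jobs are
 * reported as an error instead of blindly issuing migrate-cancel. */
static AbortAction
abort_job_dispatch(AsyncJob job)
{
    switch (job) {
    case ASYNC_BACKUP:
        return ABORT_BACKUP_CANCEL;  /* cancel all backup blockjobs */
    case ASYNC_MIGRATION_OUT:
    case ASYNC_DUMP:
        return ABORT_MIGRATE_CANCEL; /* uses migration infrastructure */
    case ASYNC_NONE:
    default:
        return ABORT_ERROR;          /* no abortable job is running */
    }
}
```

An exhaustive switch also lets the compiler flag newly added job types that the abort path forgot to handle, which is how the upstream rework (quoted below in the thread) ended up structuring qemuDomainAbortJob.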

On Wed, Dec 04, 2019 at 11:20:55 +0000, Daniel Berrange wrote:
On Tue, Dec 03, 2019 at 06:17:44PM +0100, Peter Krempa wrote:
Use the helper which cancels all blockjobs to perform the backup job cancellation in qemuDomainAbortJob.
Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- src/qemu/qemu_driver.c | 15 ++++++++++----- 1 file changed, 10 insertions(+), 5 deletions(-)
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index 95882d9d14..2408b08106 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -14054,11 +14054,16 @@ static int qemuDomainAbortJob(virDomainPtr dom) }
VIR_DEBUG("Cancelling job at client request"); - qemuDomainObjAbortAsyncJob(vm); - qemuDomainObjEnterMonitor(driver, vm); - ret = qemuMonitorMigrateCancel(priv->mon); - if (qemuDomainObjExitMonitor(driver, vm) < 0) - ret = -1; + if (priv->job.asyncJob == QEMU_ASYNC_JOB_BACKUP) { + qemuBackupJobCancelBlockjobs(vm, priv->backup, true); + ret = 0; + } else { + qemuDomainObjAbortAsyncJob(vm); + qemuDomainObjEnterMonitor(driver, vm); + ret = qemuMonitorMigrateCancel(priv->mon); + if (qemuDomainObjExitMonitor(driver, vm) < 0) + ret = -1; + }
Hmm, this makes me think we should have had some better error checking in here already. IIUC, we have other types of async job that are not related to either migration or backups, so should we do
switch (priv->job.asyncJob) { case QEMU_ASYNC_JOB_BACKUP: ... case QEMU_ASYNC_JOB_MIGRATE: ... case QEMU_ASYNC_JOB.... default: report error
This is now done upstream. The new version of the patch after merging with upstream: diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index 93b6107f6c..72694dc8d0 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -14091,7 +14091,8 @@ static int qemuDomainAbortJob(virDomainPtr dom) break; case QEMU_ASYNC_JOB_BACKUP: - /* TODO: to be implemented later */ + qemuBackupJobCancelBlockjobs(vm, priv->backup, true); + ret = 0; break;

On Thu, Dec 05, 2019 at 02:18:38PM +0100, Peter Krempa wrote:
On Wed, Dec 04, 2019 at 11:20:55 +0000, Daniel Berrange wrote:
On Tue, Dec 03, 2019 at 06:17:44PM +0100, Peter Krempa wrote:
Use the helper which cancels all blockjobs to perform the backup job cancellation in qemuDomainAbortJob.
Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- src/qemu/qemu_driver.c | 15 ++++++++++----- 1 file changed, 10 insertions(+), 5 deletions(-)
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index 95882d9d14..2408b08106 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -14054,11 +14054,16 @@ static int qemuDomainAbortJob(virDomainPtr dom) }
VIR_DEBUG("Cancelling job at client request"); - qemuDomainObjAbortAsyncJob(vm); - qemuDomainObjEnterMonitor(driver, vm); - ret = qemuMonitorMigrateCancel(priv->mon); - if (qemuDomainObjExitMonitor(driver, vm) < 0) - ret = -1; + if (priv->job.asyncJob == QEMU_ASYNC_JOB_BACKUP) { + qemuBackupJobCancelBlockjobs(vm, priv->backup, true); + ret = 0; + } else { + qemuDomainObjAbortAsyncJob(vm); + qemuDomainObjEnterMonitor(driver, vm); + ret = qemuMonitorMigrateCancel(priv->mon); + if (qemuDomainObjExitMonitor(driver, vm) < 0) + ret = -1; + }
Hmm, this makes me think we should have had some better error checking in here already. IIUC, we have other types of async job that are not related to either migration or backups, so should we do
switch (priv->job.asyncJob) { case QEMU_ASYNC_JOB_BACKUP: ... case QEMU_ASYNC_JOB_MIGRATE: ... case QEMU_ASYNC_JOB.... default: report error
This is now done upstream. The new version of the patch after merging with upstream:
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 93b6107f6c..72694dc8d0 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -14091,7 +14091,8 @@ static int qemuDomainAbortJob(virDomainPtr dom)
         break;

     case QEMU_ASYNC_JOB_BACKUP:
-        /* TODO: to be implemented later */
+        qemuBackupJobCancelBlockjobs(vm, priv->backup, true);
+        ret = 0;
         break;
Reviewed-by: Ján Tomko <jtomko@redhat.com> Jano

On 12/3/19 11:17 AM, Peter Krempa wrote:
Use the helper which cancels all blockjobs to perform the backup job cancellation in qemuDomainAbortJob.
Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- src/qemu/qemu_driver.c | 15 ++++++++++----- 1 file changed, 10 insertions(+), 5 deletions(-)
Reviewed-by: Eric Blake <eblake@redhat.com> -- Eric Blake, Principal Software Engineer Red Hat, Inc. +1-919-301-3226 Virtualization: qemu.org | libvirt.org

After the individual sub-blockjobs of a backup libvirt job finish we must detect it and notify the parent job, so that it can be properly terminated. Since we update job information to determine success of an blockjob we can directly report back also statistics of the blockjob. Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- src/qemu/qemu_blockjob.c | 74 ++++++++++++++++++++++++++++++++++++++-- 1 file changed, 72 insertions(+), 2 deletions(-) diff --git a/src/qemu/qemu_blockjob.c b/src/qemu/qemu_blockjob.c index d434b8bddd..a6b0af182c 100644 --- a/src/qemu/qemu_blockjob.c +++ b/src/qemu/qemu_blockjob.c @@ -27,6 +27,7 @@ #include "qemu_block.h" #include "qemu_domain.h" #include "qemu_alias.h" +#include "qemu_backup.h" #include "conf/domain_conf.h" #include "conf/domain_event.h" @@ -1272,11 +1273,71 @@ qemuBlockJobProcessEventConcludedCreate(virQEMUDriverPtr driver, } +static void +qemuBlockJobProcessEventConcludedBackup(virQEMUDriverPtr driver, + virDomainObjPtr vm, + qemuBlockJobDataPtr job, + qemuDomainAsyncJob asyncJob, + qemuBlockjobState newstate, + unsigned long long progressCurrent, + unsigned long long progressTotal) +{ + g_autoptr(virQEMUDriverConfig) cfg = virQEMUDriverGetConfig(driver); + uid_t uid; + gid_t gid; + g_autoptr(qemuBlockStorageSourceAttachData) backend = NULL; + g_autoptr(virJSONValue) actions = NULL; + + qemuBackupNotifyBlockjobEnd(vm, job->disk, newstate, progressCurrent, progressTotal); + + if (job->data.backup.store && + !(backend = qemuBlockStorageSourceDetachPrepare(job->data.backup.store, NULL))) + return; + + if (job->data.backup.bitmap) { + if (!(actions = virJSONValueNewArray())) + return; + + if (qemuMonitorTransactionBitmapRemove(actions, + job->disk->src->nodeformat, + job->data.backup.bitmap) < 0) + return; + } + + if (qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob) < 0) + return; + + if (backend) + qemuBlockStorageSourceAttachRollback(qemuDomainGetMonitor(vm), backend); + + if (actions) + 
qemuMonitorTransaction(qemuDomainGetMonitor(vm), &actions); + + if (qemuDomainObjExitMonitor(driver, vm) < 0) + return; + + if (job->data.backup.store) { + qemuDomainStorageSourceAccessRevoke(driver, vm, job->data.backup.store); + + if (job->data.backup.deleteStore && + job->data.backup.store->type == VIR_STORAGE_TYPE_FILE) { + qemuDomainGetImageIds(cfg, vm, job->data.backup.store, NULL, &uid, &gid); + + if (virFileRemove(job->data.backup.store->path, uid, gid) < 0) + VIR_WARN("failed to remove scratch file '%s'", + job->data.backup.store->path); + } + } +} + + static void qemuBlockJobEventProcessConcludedTransition(qemuBlockJobDataPtr job, virQEMUDriverPtr driver, virDomainObjPtr vm, - qemuDomainAsyncJob asyncJob) + qemuDomainAsyncJob asyncJob, + unsigned long long progressCurrent, + unsigned long long progressTotal) { bool success = job->newstate == QEMU_BLOCKJOB_STATE_COMPLETED; @@ -1310,6 +1371,9 @@ qemuBlockJobEventProcessConcludedTransition(qemuBlockJobDataPtr job, break; case QEMU_BLOCKJOB_TYPE_BACKUP: + qemuBlockJobProcessEventConcludedBackup(driver, vm, job, asyncJob, + job->newstate, progressCurrent, + progressTotal); break; case QEMU_BLOCKJOB_TYPE_BROKEN: @@ -1336,6 +1400,8 @@ qemuBlockJobEventProcessConcluded(qemuBlockJobDataPtr job, size_t njobinfo = 0; size_t i; bool refreshed = false; + unsigned long long progressCurrent = 0; + unsigned long long progressTotal = 0; if (qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob) < 0) goto cleanup; @@ -1348,6 +1414,9 @@ qemuBlockJobEventProcessConcluded(qemuBlockJobDataPtr job, if (STRNEQ_NULLABLE(job->name, jobinfo[i]->id)) continue; + progressCurrent = jobinfo[i]->progressCurrent; + progressTotal = jobinfo[i]->progressTotal; + job->errmsg = g_strdup(jobinfo[i]->error); if (job->errmsg) @@ -1380,7 +1449,8 @@ qemuBlockJobEventProcessConcluded(qemuBlockJobDataPtr job, VIR_DEBUG("handling job '%s' state '%d' newstate '%d'", job->name, job->state, job->newstate); - 
qemuBlockJobEventProcessConcludedTransition(job, driver, vm, asyncJob); + qemuBlockJobEventProcessConcludedTransition(job, driver, vm, asyncJob, + progressCurrent, progressTotal); /* unplug the backing chains in case the job inherited them */ if (!job->disk) { -- 2.23.0

On Tue, Dec 03, 2019 at 06:17:45PM +0100, Peter Krempa wrote:
After the individual sub-blockjobs of a backup libvirt job finish we must detect it and notify the parent job, so that it can be properly terminated.
Since we update job information to determine success of an blockjob we can directly report back also statistics of the blockjob.
Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- src/qemu/qemu_blockjob.c | 74 ++++++++++++++++++++++++++++++++++++++-- 1 file changed, 72 insertions(+), 2 deletions(-)
Reviewed-by: Ján Tomko <jtomko@redhat.com> Jano

On 12/3/19 11:17 AM, Peter Krempa wrote:
After the individual sub-blockjobs of a backup libvirt job finish we must detect it and notify the parent job, so that it can be properly terminated.
Since we update job information to determine success of an blockjob we
s/an/a/
can directly report back also statistics of the blockjob.
Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- src/qemu/qemu_blockjob.c | 74 ++++++++++++++++++++++++++++++++++++++-- 1 file changed, 72 insertions(+), 2 deletions(-)
+static void +qemuBlockJobProcessEventConcludedBackup(virQEMUDriverPtr driver, + virDomainObjPtr vm, + qemuBlockJobDataPtr job, + qemuDomainAsyncJob asyncJob, + qemuBlockjobState newstate, + unsigned long long progressCurrent, + unsigned long long progressTotal) +{ + g_autoptr(virQEMUDriverConfig) cfg = virQEMUDriverGetConfig(driver); + uid_t uid; + gid_t gid; + g_autoptr(qemuBlockStorageSourceAttachData) backend = NULL; + g_autoptr(virJSONValue) actions = NULL; + + qemuBackupNotifyBlockjobEnd(vm, job->disk, newstate, progressCurrent, progressTotal); + + if (job->data.backup.store && + !(backend = qemuBlockStorageSourceDetachPrepare(job->data.backup.store, NULL))) + return; + + if (job->data.backup.bitmap) { + if (!(actions = virJSONValueNewArray())) + return; + + if (qemuMonitorTransactionBitmapRemove(actions, + job->disk->src->nodeformat, + job->data.backup.bitmap) < 0) + return; + }
The late-breaking qemu fix for deleting persistent bitmaps in a transaction should be in 4.2-rc5. Phew.
+ + if (qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob) < 0) + return; + + if (backend) + qemuBlockStorageSourceAttachRollback(qemuDomainGetMonitor(vm), backend); + + if (actions) + qemuMonitorTransaction(qemuDomainGetMonitor(vm), &actions); + + if (qemuDomainObjExitMonitor(driver, vm) < 0) + return; + + if (job->data.backup.store) { + qemuDomainStorageSourceAccessRevoke(driver, vm, job->data.backup.store); + + if (job->data.backup.deleteStore && + job->data.backup.store->type == VIR_STORAGE_TYPE_FILE) { + qemuDomainGetImageIds(cfg, vm, job->data.backup.store, NULL, &uid, &gid); + + if (virFileRemove(job->data.backup.store->path, uid, gid) < 0) + VIR_WARN("failed to remove scratch file '%s'", + job->data.backup.store->path); + } + } +}
Reviewed-by: Eric Blake <eblake@redhat.com> -- Eric Blake, Principal Software Engineer Red Hat, Inc. +1-919-301-3226 Virtualization: qemu.org | libvirt.org

This flag will allow figuring out whether the hypervisor supports the incremental backup and checkpoint features. Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- docs/formatdomaincaps.html.in | 8 ++++++++ docs/schemas/domaincaps.rng | 9 +++++++++ src/conf/domain_capabilities.c | 1 + src/conf/domain_capabilities.h | 1 + 4 files changed, 19 insertions(+) diff --git a/docs/formatdomaincaps.html.in b/docs/formatdomaincaps.html.in index 0bafb67705..85226328a8 100644 --- a/docs/formatdomaincaps.html.in +++ b/docs/formatdomaincaps.html.in @@ -517,6 +517,7 @@ <vmcoreinfo supported='yes'/> <genid supported='yes'/> <backingStoreInput supported='yes'/> + <backup supported='yes'/> <sev> <cbitpos>47</cbitpos> <reduced-phys-bits>1</reduced-phys-bits> @@ -560,6 +561,13 @@ the disk to a running guest, or similar. </p> + <h4><a id="featureBackup">backup</a></h4> + + <p>Reports whether the hypervisor supports the backup,checkpoint and related + features. (<code>virDomainBackupBegin</code>, + <code>virDomainCheckpointCreateXML</code> etc). 
+ </p> + <h4><a id="elementsSEV">SEV capabilities</a></h4> <p>AMD Secure Encrypted Virtualization (SEV) capabilities are exposed under diff --git a/docs/schemas/domaincaps.rng b/docs/schemas/domaincaps.rng index 88b545ec2a..682cc82177 100644 --- a/docs/schemas/domaincaps.rng +++ b/docs/schemas/domaincaps.rng @@ -210,6 +210,9 @@ <optional> <ref name='backingStoreInput'/> </optional> + <optional> + <ref name='backup'/> + </optional> <optional> <ref name='sev'/> </optional> @@ -241,6 +244,12 @@ </element> </define> + <define name='backup'> + <element name='backup'> + <ref name='supported'/> + </element> + </define> + <define name='sev'> <element name='sev'> <ref name='supported'/> diff --git a/src/conf/domain_capabilities.c b/src/conf/domain_capabilities.c index ca208f2340..921d795630 100644 --- a/src/conf/domain_capabilities.c +++ b/src/conf/domain_capabilities.c @@ -41,6 +41,7 @@ VIR_ENUM_IMPL(virDomainCapsFeature, "vmcoreinfo", "genid", "backingStoreInput", + "backup", ); static virClassPtr virDomainCapsClass; diff --git a/src/conf/domain_capabilities.h b/src/conf/domain_capabilities.h index 4ec9fe006c..9f4a23d015 100644 --- a/src/conf/domain_capabilities.h +++ b/src/conf/domain_capabilities.h @@ -163,6 +163,7 @@ typedef enum { VIR_DOMAIN_CAPS_FEATURE_VMCOREINFO, VIR_DOMAIN_CAPS_FEATURE_GENID, VIR_DOMAIN_CAPS_FEATURE_BACKING_STORE_INPUT, + VIR_DOMAIN_CAPS_FEATURE_BACKUP, VIR_DOMAIN_CAPS_FEATURE_LAST } virDomainCapsFeature; -- 2.23.0

On Tue, Dec 03, 2019 at 06:17:46PM +0100, Peter Krempa wrote:
This flag will allow figuring out whether the hypervisor supports the incremental backup and checkpoint features.
Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- docs/formatdomaincaps.html.in | 8 ++++++++ docs/schemas/domaincaps.rng | 9 +++++++++ src/conf/domain_capabilities.c | 1 + src/conf/domain_capabilities.h | 1 + 4 files changed, 19 insertions(+)
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com> Regards, Daniel -- |: https://berrange.com -o- https://www.flickr.com/photos/dberrange :| |: https://libvirt.org -o- https://fstop138.berrange.com :| |: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|

On 12/3/19 11:17 AM, Peter Krempa wrote:
This flag will allow figuring out whether the hypervisor supports the incremental backup and checkpoint features.
Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- docs/formatdomaincaps.html.in | 8 ++++++++ docs/schemas/domaincaps.rng | 9 +++++++++ src/conf/domain_capabilities.c | 1 + src/conf/domain_capabilities.h | 1 + 4 files changed, 19 insertions(+)
diff --git a/docs/formatdomaincaps.html.in b/docs/formatdomaincaps.html.in index 0bafb67705..85226328a8 100644 --- a/docs/formatdomaincaps.html.in +++ b/docs/formatdomaincaps.html.in @@ -517,6 +517,7 @@ <vmcoreinfo supported='yes'/> <genid supported='yes'/> <backingStoreInput supported='yes'/> + <backup supported='yes'/> <sev> <cbitpos>47</cbitpos> <reduced-phys-bits>1</reduced-phys-bits> @@ -560,6 +561,13 @@ the disk to a running guest, or similar. </p>
+ <h4><a id="featureBackup">backup</a></h4> + + <p>Reports whether the hypervisor supports the backup,checkpoint and related
space after comma. Reviewed-by: Eric Blake <eblake@redhat.com> Hmm - as of this series, the test driver supports checkpoints but not (yet) backups. Are there plans to get rudimentary backup support into the test driver as well, so that we can declare the feature there as well as in qemu? -- Eric Blake, Principal Software Engineer Red Hat, Inc. +1-919-301-3226 Virtualization: qemu.org | libvirt.org

On Mon, Dec 09, 2019 at 12:57:52 -0600, Eric Blake wrote:
On 12/3/19 11:17 AM, Peter Krempa wrote:
This flag will allow figuring out whether the hypervisor supports the incremental backup and checkpoint features.
Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- docs/formatdomaincaps.html.in | 8 ++++++++ docs/schemas/domaincaps.rng | 9 +++++++++ src/conf/domain_capabilities.c | 1 + src/conf/domain_capabilities.h | 1 + 4 files changed, 19 insertions(+)
diff --git a/docs/formatdomaincaps.html.in b/docs/formatdomaincaps.html.in index 0bafb67705..85226328a8 100644 --- a/docs/formatdomaincaps.html.in +++ b/docs/formatdomaincaps.html.in @@ -517,6 +517,7 @@ <vmcoreinfo supported='yes'/> <genid supported='yes'/> <backingStoreInput supported='yes'/> + <backup supported='yes'/> <sev> <cbitpos>47</cbitpos> <reduced-phys-bits>1</reduced-phys-bits> @@ -560,6 +561,13 @@ the disk to a running guest, or similar. </p>
+ <h4><a id="featureBackup">backup</a></h4> + + <p>Reports whether the hypervisor supports the backup,checkpoint and related
space after comma.
Reviewed-by: Eric Blake <eblake@redhat.com>
Hmm - as of this series, the test driver supports checkpoints but not (yet) backups. Are there plans to get rudimentary backup support into the test driver as well, so that we can declare the feature there as well as in qemu?
I think I'll spend my time on adding snapshot and blockjob support with checkpoints rather than test driver support.

Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- src/qemu/qemu_capabilities.c | 1 + tests/domaincapsdata/qemu_1.5.3-q35.x86_64.xml | 1 + tests/domaincapsdata/qemu_1.5.3-tcg.x86_64.xml | 1 + tests/domaincapsdata/qemu_1.5.3.x86_64.xml | 1 + tests/domaincapsdata/qemu_1.6.0-q35.x86_64.xml | 1 + tests/domaincapsdata/qemu_1.6.0-tcg.x86_64.xml | 1 + tests/domaincapsdata/qemu_1.6.0.x86_64.xml | 1 + tests/domaincapsdata/qemu_1.7.0-q35.x86_64.xml | 1 + tests/domaincapsdata/qemu_1.7.0-tcg.x86_64.xml | 1 + tests/domaincapsdata/qemu_1.7.0.x86_64.xml | 1 + tests/domaincapsdata/qemu_2.1.1-q35.x86_64.xml | 1 + tests/domaincapsdata/qemu_2.1.1-tcg.x86_64.xml | 1 + tests/domaincapsdata/qemu_2.1.1.x86_64.xml | 1 + tests/domaincapsdata/qemu_2.10.0-q35.x86_64.xml | 1 + tests/domaincapsdata/qemu_2.10.0-tcg.x86_64.xml | 1 + tests/domaincapsdata/qemu_2.10.0-virt.aarch64.xml | 1 + tests/domaincapsdata/qemu_2.10.0.aarch64.xml | 1 + tests/domaincapsdata/qemu_2.10.0.ppc64.xml | 1 + tests/domaincapsdata/qemu_2.10.0.s390x.xml | 1 + tests/domaincapsdata/qemu_2.10.0.x86_64.xml | 1 + tests/domaincapsdata/qemu_2.11.0-q35.x86_64.xml | 1 + tests/domaincapsdata/qemu_2.11.0-tcg.x86_64.xml | 1 + tests/domaincapsdata/qemu_2.11.0.s390x.xml | 1 + tests/domaincapsdata/qemu_2.11.0.x86_64.xml | 1 + tests/domaincapsdata/qemu_2.12.0-q35.x86_64.xml | 1 + tests/domaincapsdata/qemu_2.12.0-tcg.x86_64.xml | 1 + tests/domaincapsdata/qemu_2.12.0-virt.aarch64.xml | 1 + tests/domaincapsdata/qemu_2.12.0.aarch64.xml | 1 + tests/domaincapsdata/qemu_2.12.0.ppc64.xml | 1 + tests/domaincapsdata/qemu_2.12.0.s390x.xml | 1 + tests/domaincapsdata/qemu_2.12.0.x86_64.xml | 1 + tests/domaincapsdata/qemu_2.4.0-q35.x86_64.xml | 1 + tests/domaincapsdata/qemu_2.4.0-tcg.x86_64.xml | 1 + tests/domaincapsdata/qemu_2.4.0.x86_64.xml | 1 + tests/domaincapsdata/qemu_2.5.0-q35.x86_64.xml | 1 + tests/domaincapsdata/qemu_2.5.0-tcg.x86_64.xml | 1 + tests/domaincapsdata/qemu_2.5.0.x86_64.xml | 1 + tests/domaincapsdata/qemu_2.6.0-q35.x86_64.xml | 
1 + tests/domaincapsdata/qemu_2.6.0-tcg.x86_64.xml | 1 + tests/domaincapsdata/qemu_2.6.0-virt.aarch64.xml | 1 + tests/domaincapsdata/qemu_2.6.0.aarch64.xml | 1 + tests/domaincapsdata/qemu_2.6.0.ppc64.xml | 1 + tests/domaincapsdata/qemu_2.6.0.x86_64.xml | 1 + tests/domaincapsdata/qemu_2.7.0-q35.x86_64.xml | 1 + tests/domaincapsdata/qemu_2.7.0-tcg.x86_64.xml | 1 + tests/domaincapsdata/qemu_2.7.0.s390x.xml | 1 + tests/domaincapsdata/qemu_2.7.0.x86_64.xml | 1 + tests/domaincapsdata/qemu_2.8.0-q35.x86_64.xml | 1 + tests/domaincapsdata/qemu_2.8.0-tcg.x86_64.xml | 1 + tests/domaincapsdata/qemu_2.8.0.s390x.xml | 1 + tests/domaincapsdata/qemu_2.8.0.x86_64.xml | 1 + tests/domaincapsdata/qemu_2.9.0-q35.x86_64.xml | 1 + tests/domaincapsdata/qemu_2.9.0-tcg.x86_64.xml | 1 + tests/domaincapsdata/qemu_2.9.0.ppc64.xml | 1 + tests/domaincapsdata/qemu_2.9.0.s390x.xml | 1 + tests/domaincapsdata/qemu_2.9.0.x86_64.xml | 1 + tests/domaincapsdata/qemu_3.0.0-q35.x86_64.xml | 1 + tests/domaincapsdata/qemu_3.0.0-tcg.x86_64.xml | 1 + tests/domaincapsdata/qemu_3.0.0.ppc64.xml | 1 + tests/domaincapsdata/qemu_3.0.0.s390x.xml | 1 + tests/domaincapsdata/qemu_3.0.0.x86_64.xml | 1 + tests/domaincapsdata/qemu_3.1.0-q35.x86_64.xml | 1 + tests/domaincapsdata/qemu_3.1.0-tcg.x86_64.xml | 1 + tests/domaincapsdata/qemu_3.1.0.ppc64.xml | 1 + tests/domaincapsdata/qemu_3.1.0.x86_64.xml | 1 + tests/domaincapsdata/qemu_4.0.0-q35.x86_64.xml | 1 + tests/domaincapsdata/qemu_4.0.0-tcg.x86_64.xml | 1 + tests/domaincapsdata/qemu_4.0.0-virt.aarch64.xml | 1 + tests/domaincapsdata/qemu_4.0.0.aarch64.xml | 1 + tests/domaincapsdata/qemu_4.0.0.ppc64.xml | 1 + tests/domaincapsdata/qemu_4.0.0.s390x.xml | 1 + tests/domaincapsdata/qemu_4.0.0.x86_64.xml | 1 + tests/domaincapsdata/qemu_4.1.0-q35.x86_64.xml | 1 + tests/domaincapsdata/qemu_4.1.0-tcg.x86_64.xml | 1 + tests/domaincapsdata/qemu_4.1.0.x86_64.xml | 1 + tests/domaincapsdata/qemu_4.2.0-q35.x86_64.xml | 1 + tests/domaincapsdata/qemu_4.2.0-tcg.x86_64.xml | 1 + 
tests/domaincapsdata/qemu_4.2.0-virt.aarch64.xml | 1 + tests/domaincapsdata/qemu_4.2.0.aarch64.xml | 1 + tests/domaincapsdata/qemu_4.2.0.ppc64.xml | 1 + tests/domaincapsdata/qemu_4.2.0.s390x.xml | 1 + tests/domaincapsdata/qemu_4.2.0.x86_64.xml | 1 + 82 files changed, 82 insertions(+) diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c index edb128c881..3b9e4561fa 100644 --- a/src/qemu/qemu_capabilities.c +++ b/src/qemu/qemu_capabilities.c @@ -5458,6 +5458,7 @@ static const struct virQEMUCapsDomainFeatureCapabilityTuple domCapsTuples[] = { { VIR_DOMAIN_CAPS_FEATURE_VMCOREINFO, QEMU_CAPS_DEVICE_VMCOREINFO }, { VIR_DOMAIN_CAPS_FEATURE_GENID, QEMU_CAPS_DEVICE_VMGENID }, { VIR_DOMAIN_CAPS_FEATURE_BACKING_STORE_INPUT, QEMU_CAPS_BLOCKDEV }, + { VIR_DOMAIN_CAPS_FEATURE_BACKUP, QEMU_CAPS_INCREMENTAL_BACKUP }, }; diff --git a/tests/domaincapsdata/qemu_1.5.3-q35.x86_64.xml b/tests/domaincapsdata/qemu_1.5.3-q35.x86_64.xml index 87cb8eb07e..b9b9dd0538 100644 --- a/tests/domaincapsdata/qemu_1.5.3-q35.x86_64.xml +++ b/tests/domaincapsdata/qemu_1.5.3-q35.x86_64.xml @@ -131,6 +131,7 @@ <vmcoreinfo supported='no'/> <genid supported='no'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_1.5.3-tcg.x86_64.xml b/tests/domaincapsdata/qemu_1.5.3-tcg.x86_64.xml index 5588765182..3b3d89a643 100644 --- a/tests/domaincapsdata/qemu_1.5.3-tcg.x86_64.xml +++ b/tests/domaincapsdata/qemu_1.5.3-tcg.x86_64.xml @@ -131,6 +131,7 @@ <vmcoreinfo supported='no'/> <genid supported='no'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_1.5.3.x86_64.xml b/tests/domaincapsdata/qemu_1.5.3.x86_64.xml index 6bfe903f9a..c36295b3e5 100644 --- a/tests/domaincapsdata/qemu_1.5.3.x86_64.xml +++ b/tests/domaincapsdata/qemu_1.5.3.x86_64.xml @@ -131,6 +131,7 @@ <vmcoreinfo 
supported='no'/> <genid supported='no'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_1.6.0-q35.x86_64.xml b/tests/domaincapsdata/qemu_1.6.0-q35.x86_64.xml index f924bf7fad..56ad885feb 100644 --- a/tests/domaincapsdata/qemu_1.6.0-q35.x86_64.xml +++ b/tests/domaincapsdata/qemu_1.6.0-q35.x86_64.xml @@ -131,6 +131,7 @@ <vmcoreinfo supported='no'/> <genid supported='no'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_1.6.0-tcg.x86_64.xml b/tests/domaincapsdata/qemu_1.6.0-tcg.x86_64.xml index be8921cfa9..6bff19bad5 100644 --- a/tests/domaincapsdata/qemu_1.6.0-tcg.x86_64.xml +++ b/tests/domaincapsdata/qemu_1.6.0-tcg.x86_64.xml @@ -131,6 +131,7 @@ <vmcoreinfo supported='no'/> <genid supported='no'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_1.6.0.x86_64.xml b/tests/domaincapsdata/qemu_1.6.0.x86_64.xml index 04f532cb3e..3d4f7f1cee 100644 --- a/tests/domaincapsdata/qemu_1.6.0.x86_64.xml +++ b/tests/domaincapsdata/qemu_1.6.0.x86_64.xml @@ -131,6 +131,7 @@ <vmcoreinfo supported='no'/> <genid supported='no'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_1.7.0-q35.x86_64.xml b/tests/domaincapsdata/qemu_1.7.0-q35.x86_64.xml index 294cceff2f..7a8b58bffe 100644 --- a/tests/domaincapsdata/qemu_1.7.0-q35.x86_64.xml +++ b/tests/domaincapsdata/qemu_1.7.0-q35.x86_64.xml @@ -131,6 +131,7 @@ <vmcoreinfo supported='no'/> <genid supported='no'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_1.7.0-tcg.x86_64.xml 
b/tests/domaincapsdata/qemu_1.7.0-tcg.x86_64.xml index 04d7c26bd5..97e71bffff 100644 --- a/tests/domaincapsdata/qemu_1.7.0-tcg.x86_64.xml +++ b/tests/domaincapsdata/qemu_1.7.0-tcg.x86_64.xml @@ -131,6 +131,7 @@ <vmcoreinfo supported='no'/> <genid supported='no'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_1.7.0.x86_64.xml b/tests/domaincapsdata/qemu_1.7.0.x86_64.xml index c00e492784..c9dfb2e123 100644 --- a/tests/domaincapsdata/qemu_1.7.0.x86_64.xml +++ b/tests/domaincapsdata/qemu_1.7.0.x86_64.xml @@ -131,6 +131,7 @@ <vmcoreinfo supported='no'/> <genid supported='no'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_2.1.1-q35.x86_64.xml b/tests/domaincapsdata/qemu_2.1.1-q35.x86_64.xml index 7190a0ec9a..0bf035ce58 100644 --- a/tests/domaincapsdata/qemu_2.1.1-q35.x86_64.xml +++ b/tests/domaincapsdata/qemu_2.1.1-q35.x86_64.xml @@ -132,6 +132,7 @@ <vmcoreinfo supported='no'/> <genid supported='no'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_2.1.1-tcg.x86_64.xml b/tests/domaincapsdata/qemu_2.1.1-tcg.x86_64.xml index 8251017d40..192a505d77 100644 --- a/tests/domaincapsdata/qemu_2.1.1-tcg.x86_64.xml +++ b/tests/domaincapsdata/qemu_2.1.1-tcg.x86_64.xml @@ -132,6 +132,7 @@ <vmcoreinfo supported='no'/> <genid supported='no'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_2.1.1.x86_64.xml b/tests/domaincapsdata/qemu_2.1.1.x86_64.xml index 2dcb90c66e..23a8509698 100644 --- a/tests/domaincapsdata/qemu_2.1.1.x86_64.xml +++ b/tests/domaincapsdata/qemu_2.1.1.x86_64.xml @@ -132,6 +132,7 @@ <vmcoreinfo supported='no'/> <genid 
supported='no'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_2.10.0-q35.x86_64.xml b/tests/domaincapsdata/qemu_2.10.0-q35.x86_64.xml index ec044791bd..7033264719 100644 --- a/tests/domaincapsdata/qemu_2.10.0-q35.x86_64.xml +++ b/tests/domaincapsdata/qemu_2.10.0-q35.x86_64.xml @@ -155,6 +155,7 @@ <vmcoreinfo supported='no'/> <genid supported='yes'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_2.10.0-tcg.x86_64.xml b/tests/domaincapsdata/qemu_2.10.0-tcg.x86_64.xml index e024d9c571..1193f49bd6 100644 --- a/tests/domaincapsdata/qemu_2.10.0-tcg.x86_64.xml +++ b/tests/domaincapsdata/qemu_2.10.0-tcg.x86_64.xml @@ -174,6 +174,7 @@ <vmcoreinfo supported='no'/> <genid supported='yes'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_2.10.0-virt.aarch64.xml b/tests/domaincapsdata/qemu_2.10.0-virt.aarch64.xml index 490a1d4a5b..c55ed9bea8 100644 --- a/tests/domaincapsdata/qemu_2.10.0-virt.aarch64.xml +++ b/tests/domaincapsdata/qemu_2.10.0-virt.aarch64.xml @@ -139,6 +139,7 @@ <vmcoreinfo supported='no'/> <genid supported='no'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_2.10.0.aarch64.xml b/tests/domaincapsdata/qemu_2.10.0.aarch64.xml index 00d8cc8625..0f710b001d 100644 --- a/tests/domaincapsdata/qemu_2.10.0.aarch64.xml +++ b/tests/domaincapsdata/qemu_2.10.0.aarch64.xml @@ -133,6 +133,7 @@ <vmcoreinfo supported='no'/> <genid supported='no'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_2.10.0.ppc64.xml 
b/tests/domaincapsdata/qemu_2.10.0.ppc64.xml index 9a0ba5d6dd..08e63d6410 100644 --- a/tests/domaincapsdata/qemu_2.10.0.ppc64.xml +++ b/tests/domaincapsdata/qemu_2.10.0.ppc64.xml @@ -105,6 +105,7 @@ <vmcoreinfo supported='no'/> <genid supported='no'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_2.10.0.s390x.xml b/tests/domaincapsdata/qemu_2.10.0.s390x.xml index e551ed03c5..bf3f13887f 100644 --- a/tests/domaincapsdata/qemu_2.10.0.s390x.xml +++ b/tests/domaincapsdata/qemu_2.10.0.s390x.xml @@ -195,6 +195,7 @@ <vmcoreinfo supported='no'/> <genid supported='no'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_2.10.0.x86_64.xml b/tests/domaincapsdata/qemu_2.10.0.x86_64.xml index 872ea80869..d67badb4c2 100644 --- a/tests/domaincapsdata/qemu_2.10.0.x86_64.xml +++ b/tests/domaincapsdata/qemu_2.10.0.x86_64.xml @@ -155,6 +155,7 @@ <vmcoreinfo supported='no'/> <genid supported='yes'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_2.11.0-q35.x86_64.xml b/tests/domaincapsdata/qemu_2.11.0-q35.x86_64.xml index 21cafab70e..ae37e6a462 100644 --- a/tests/domaincapsdata/qemu_2.11.0-q35.x86_64.xml +++ b/tests/domaincapsdata/qemu_2.11.0-q35.x86_64.xml @@ -153,6 +153,7 @@ <vmcoreinfo supported='yes'/> <genid supported='yes'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_2.11.0-tcg.x86_64.xml b/tests/domaincapsdata/qemu_2.11.0-tcg.x86_64.xml index 98a7e4bfbe..7f66cf7b7e 100644 --- a/tests/domaincapsdata/qemu_2.11.0-tcg.x86_64.xml +++ b/tests/domaincapsdata/qemu_2.11.0-tcg.x86_64.xml @@ -169,6 +169,7 @@ <vmcoreinfo supported='yes'/> <genid 
supported='yes'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_2.11.0.s390x.xml b/tests/domaincapsdata/qemu_2.11.0.s390x.xml index e93cf3ffcc..9b3b18d320 100644 --- a/tests/domaincapsdata/qemu_2.11.0.s390x.xml +++ b/tests/domaincapsdata/qemu_2.11.0.s390x.xml @@ -194,6 +194,7 @@ <vmcoreinfo supported='no'/> <genid supported='no'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_2.11.0.x86_64.xml b/tests/domaincapsdata/qemu_2.11.0.x86_64.xml index 0a6f417306..fa9c6487b0 100644 --- a/tests/domaincapsdata/qemu_2.11.0.x86_64.xml +++ b/tests/domaincapsdata/qemu_2.11.0.x86_64.xml @@ -153,6 +153,7 @@ <vmcoreinfo supported='yes'/> <genid supported='yes'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_2.12.0-q35.x86_64.xml b/tests/domaincapsdata/qemu_2.12.0-q35.x86_64.xml index 451c69200d..164427683e 100644 --- a/tests/domaincapsdata/qemu_2.12.0-q35.x86_64.xml +++ b/tests/domaincapsdata/qemu_2.12.0-q35.x86_64.xml @@ -166,6 +166,7 @@ <vmcoreinfo supported='yes'/> <genid supported='yes'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='yes'> <cbitpos>47</cbitpos> <reducedPhysBits>1</reducedPhysBits> diff --git a/tests/domaincapsdata/qemu_2.12.0-tcg.x86_64.xml b/tests/domaincapsdata/qemu_2.12.0-tcg.x86_64.xml index 3eb821966b..9a89587115 100644 --- a/tests/domaincapsdata/qemu_2.12.0-tcg.x86_64.xml +++ b/tests/domaincapsdata/qemu_2.12.0-tcg.x86_64.xml @@ -180,6 +180,7 @@ <vmcoreinfo supported='yes'/> <genid supported='yes'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='yes'> <cbitpos>47</cbitpos> <reducedPhysBits>1</reducedPhysBits> diff --git 
a/tests/domaincapsdata/qemu_2.12.0-virt.aarch64.xml b/tests/domaincapsdata/qemu_2.12.0-virt.aarch64.xml index ba23d2e357..6504b74fa6 100644 --- a/tests/domaincapsdata/qemu_2.12.0-virt.aarch64.xml +++ b/tests/domaincapsdata/qemu_2.12.0-virt.aarch64.xml @@ -141,6 +141,7 @@ <vmcoreinfo supported='yes'/> <genid supported='no'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_2.12.0.aarch64.xml b/tests/domaincapsdata/qemu_2.12.0.aarch64.xml index 06348366e1..2c429fdcca 100644 --- a/tests/domaincapsdata/qemu_2.12.0.aarch64.xml +++ b/tests/domaincapsdata/qemu_2.12.0.aarch64.xml @@ -135,6 +135,7 @@ <vmcoreinfo supported='yes'/> <genid supported='no'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_2.12.0.ppc64.xml b/tests/domaincapsdata/qemu_2.12.0.ppc64.xml index 8c02295d57..fb3f4b7c80 100644 --- a/tests/domaincapsdata/qemu_2.12.0.ppc64.xml +++ b/tests/domaincapsdata/qemu_2.12.0.ppc64.xml @@ -105,6 +105,7 @@ <vmcoreinfo supported='no'/> <genid supported='no'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_2.12.0.s390x.xml b/tests/domaincapsdata/qemu_2.12.0.s390x.xml index d25b458608..af7a04fb01 100644 --- a/tests/domaincapsdata/qemu_2.12.0.s390x.xml +++ b/tests/domaincapsdata/qemu_2.12.0.s390x.xml @@ -193,6 +193,7 @@ <vmcoreinfo supported='no'/> <genid supported='no'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_2.12.0.x86_64.xml b/tests/domaincapsdata/qemu_2.12.0.x86_64.xml index 5fe2c0637b..2a0dffaf0f 100644 --- a/tests/domaincapsdata/qemu_2.12.0.x86_64.xml +++ b/tests/domaincapsdata/qemu_2.12.0.x86_64.xml @@ -166,6 +166,7 @@ 
<vmcoreinfo supported='yes'/> <genid supported='yes'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='yes'> <cbitpos>47</cbitpos> <reducedPhysBits>1</reducedPhysBits> diff --git a/tests/domaincapsdata/qemu_2.4.0-q35.x86_64.xml b/tests/domaincapsdata/qemu_2.4.0-q35.x86_64.xml index 84adbef31a..110bfcbdbd 100644 --- a/tests/domaincapsdata/qemu_2.4.0-q35.x86_64.xml +++ b/tests/domaincapsdata/qemu_2.4.0-q35.x86_64.xml @@ -140,6 +140,7 @@ <vmcoreinfo supported='no'/> <genid supported='no'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_2.4.0-tcg.x86_64.xml b/tests/domaincapsdata/qemu_2.4.0-tcg.x86_64.xml index 8f3d11aa65..2a6296739c 100644 --- a/tests/domaincapsdata/qemu_2.4.0-tcg.x86_64.xml +++ b/tests/domaincapsdata/qemu_2.4.0-tcg.x86_64.xml @@ -140,6 +140,7 @@ <vmcoreinfo supported='no'/> <genid supported='no'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_2.4.0.x86_64.xml b/tests/domaincapsdata/qemu_2.4.0.x86_64.xml index 69e27d4474..5a6fa78201 100644 --- a/tests/domaincapsdata/qemu_2.4.0.x86_64.xml +++ b/tests/domaincapsdata/qemu_2.4.0.x86_64.xml @@ -140,6 +140,7 @@ <vmcoreinfo supported='no'/> <genid supported='no'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_2.5.0-q35.x86_64.xml b/tests/domaincapsdata/qemu_2.5.0-q35.x86_64.xml index 6ec0f26a67..399ac43dc3 100644 --- a/tests/domaincapsdata/qemu_2.5.0-q35.x86_64.xml +++ b/tests/domaincapsdata/qemu_2.5.0-q35.x86_64.xml @@ -140,6 +140,7 @@ <vmcoreinfo supported='no'/> <genid supported='no'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git 
a/tests/domaincapsdata/qemu_2.5.0-tcg.x86_64.xml b/tests/domaincapsdata/qemu_2.5.0-tcg.x86_64.xml index 5f731ba6a5..8b022e9bd7 100644 --- a/tests/domaincapsdata/qemu_2.5.0-tcg.x86_64.xml +++ b/tests/domaincapsdata/qemu_2.5.0-tcg.x86_64.xml @@ -140,6 +140,7 @@ <vmcoreinfo supported='no'/> <genid supported='no'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_2.5.0.x86_64.xml b/tests/domaincapsdata/qemu_2.5.0.x86_64.xml index 8442a70c8e..23a83311c6 100644 --- a/tests/domaincapsdata/qemu_2.5.0.x86_64.xml +++ b/tests/domaincapsdata/qemu_2.5.0.x86_64.xml @@ -140,6 +140,7 @@ <vmcoreinfo supported='no'/> <genid supported='no'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_2.6.0-q35.x86_64.xml b/tests/domaincapsdata/qemu_2.6.0-q35.x86_64.xml index ab67d42be5..a0b27929b0 100644 --- a/tests/domaincapsdata/qemu_2.6.0-q35.x86_64.xml +++ b/tests/domaincapsdata/qemu_2.6.0-q35.x86_64.xml @@ -140,6 +140,7 @@ <vmcoreinfo supported='no'/> <genid supported='no'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_2.6.0-tcg.x86_64.xml b/tests/domaincapsdata/qemu_2.6.0-tcg.x86_64.xml index a279fdec76..7937fad971 100644 --- a/tests/domaincapsdata/qemu_2.6.0-tcg.x86_64.xml +++ b/tests/domaincapsdata/qemu_2.6.0-tcg.x86_64.xml @@ -140,6 +140,7 @@ <vmcoreinfo supported='no'/> <genid supported='no'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_2.6.0-virt.aarch64.xml b/tests/domaincapsdata/qemu_2.6.0-virt.aarch64.xml index 90e38a0836..5a93c0c153 100644 --- a/tests/domaincapsdata/qemu_2.6.0-virt.aarch64.xml +++ 
b/tests/domaincapsdata/qemu_2.6.0-virt.aarch64.xml @@ -138,6 +138,7 @@ <vmcoreinfo supported='no'/> <genid supported='no'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_2.6.0.aarch64.xml b/tests/domaincapsdata/qemu_2.6.0.aarch64.xml index 724202dabc..fe3c5db30e 100644 --- a/tests/domaincapsdata/qemu_2.6.0.aarch64.xml +++ b/tests/domaincapsdata/qemu_2.6.0.aarch64.xml @@ -132,6 +132,7 @@ <vmcoreinfo supported='no'/> <genid supported='no'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_2.6.0.ppc64.xml b/tests/domaincapsdata/qemu_2.6.0.ppc64.xml index 107102efbe..c69b4dcae6 100644 --- a/tests/domaincapsdata/qemu_2.6.0.ppc64.xml +++ b/tests/domaincapsdata/qemu_2.6.0.ppc64.xml @@ -105,6 +105,7 @@ <vmcoreinfo supported='no'/> <genid supported='no'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_2.6.0.x86_64.xml b/tests/domaincapsdata/qemu_2.6.0.x86_64.xml index fd3160c4ea..889b935ac8 100644 --- a/tests/domaincapsdata/qemu_2.6.0.x86_64.xml +++ b/tests/domaincapsdata/qemu_2.6.0.x86_64.xml @@ -140,6 +140,7 @@ <vmcoreinfo supported='no'/> <genid supported='no'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_2.7.0-q35.x86_64.xml b/tests/domaincapsdata/qemu_2.7.0-q35.x86_64.xml index a00a49b1b4..fd6762f28c 100644 --- a/tests/domaincapsdata/qemu_2.7.0-q35.x86_64.xml +++ b/tests/domaincapsdata/qemu_2.7.0-q35.x86_64.xml @@ -141,6 +141,7 @@ <vmcoreinfo supported='no'/> <genid supported='no'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git 
a/tests/domaincapsdata/qemu_2.7.0-tcg.x86_64.xml b/tests/domaincapsdata/qemu_2.7.0-tcg.x86_64.xml index e7a7941294..8b7c2ce8e6 100644 --- a/tests/domaincapsdata/qemu_2.7.0-tcg.x86_64.xml +++ b/tests/domaincapsdata/qemu_2.7.0-tcg.x86_64.xml @@ -141,6 +141,7 @@ <vmcoreinfo supported='no'/> <genid supported='no'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_2.7.0.s390x.xml b/tests/domaincapsdata/qemu_2.7.0.s390x.xml index ad48c732b4..258000dbaf 100644 --- a/tests/domaincapsdata/qemu_2.7.0.s390x.xml +++ b/tests/domaincapsdata/qemu_2.7.0.s390x.xml @@ -98,6 +98,7 @@ <vmcoreinfo supported='no'/> <genid supported='no'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_2.7.0.x86_64.xml b/tests/domaincapsdata/qemu_2.7.0.x86_64.xml index f816468139..0625244885 100644 --- a/tests/domaincapsdata/qemu_2.7.0.x86_64.xml +++ b/tests/domaincapsdata/qemu_2.7.0.x86_64.xml @@ -141,6 +141,7 @@ <vmcoreinfo supported='no'/> <genid supported='no'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_2.8.0-q35.x86_64.xml b/tests/domaincapsdata/qemu_2.8.0-q35.x86_64.xml index c75dd6736e..0bf92eeb39 100644 --- a/tests/domaincapsdata/qemu_2.8.0-q35.x86_64.xml +++ b/tests/domaincapsdata/qemu_2.8.0-q35.x86_64.xml @@ -141,6 +141,7 @@ <vmcoreinfo supported='no'/> <genid supported='no'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_2.8.0-tcg.x86_64.xml b/tests/domaincapsdata/qemu_2.8.0-tcg.x86_64.xml index 20964973b6..100e8e059c 100644 --- a/tests/domaincapsdata/qemu_2.8.0-tcg.x86_64.xml +++ b/tests/domaincapsdata/qemu_2.8.0-tcg.x86_64.xml @@ -141,6 +141,7 @@ 
<vmcoreinfo supported='no'/> <genid supported='no'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_2.8.0.s390x.xml b/tests/domaincapsdata/qemu_2.8.0.s390x.xml index 103e1f7980..cc858f538c 100644 --- a/tests/domaincapsdata/qemu_2.8.0.s390x.xml +++ b/tests/domaincapsdata/qemu_2.8.0.s390x.xml @@ -179,6 +179,7 @@ <vmcoreinfo supported='no'/> <genid supported='no'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_2.8.0.x86_64.xml b/tests/domaincapsdata/qemu_2.8.0.x86_64.xml index 935e0e9afe..3a9dc24c3d 100644 --- a/tests/domaincapsdata/qemu_2.8.0.x86_64.xml +++ b/tests/domaincapsdata/qemu_2.8.0.x86_64.xml @@ -141,6 +141,7 @@ <vmcoreinfo supported='no'/> <genid supported='no'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_2.9.0-q35.x86_64.xml b/tests/domaincapsdata/qemu_2.9.0-q35.x86_64.xml index 4d0e145976..60a0b76cf1 100644 --- a/tests/domaincapsdata/qemu_2.9.0-q35.x86_64.xml +++ b/tests/domaincapsdata/qemu_2.9.0-q35.x86_64.xml @@ -150,6 +150,7 @@ <vmcoreinfo supported='no'/> <genid supported='yes'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_2.9.0-tcg.x86_64.xml b/tests/domaincapsdata/qemu_2.9.0-tcg.x86_64.xml index bf83709d89..08bb5fbad7 100644 --- a/tests/domaincapsdata/qemu_2.9.0-tcg.x86_64.xml +++ b/tests/domaincapsdata/qemu_2.9.0-tcg.x86_64.xml @@ -173,6 +173,7 @@ <vmcoreinfo supported='no'/> <genid supported='yes'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_2.9.0.ppc64.xml 
b/tests/domaincapsdata/qemu_2.9.0.ppc64.xml index 1e85f0bdfd..4b86abbb8f 100644 --- a/tests/domaincapsdata/qemu_2.9.0.ppc64.xml +++ b/tests/domaincapsdata/qemu_2.9.0.ppc64.xml @@ -105,6 +105,7 @@ <vmcoreinfo supported='no'/> <genid supported='no'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_2.9.0.s390x.xml b/tests/domaincapsdata/qemu_2.9.0.s390x.xml index 1477ca9487..fe2c023956 100644 --- a/tests/domaincapsdata/qemu_2.9.0.s390x.xml +++ b/tests/domaincapsdata/qemu_2.9.0.s390x.xml @@ -180,6 +180,7 @@ <vmcoreinfo supported='no'/> <genid supported='no'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_2.9.0.x86_64.xml b/tests/domaincapsdata/qemu_2.9.0.x86_64.xml index c044b46c21..c98cf1045b 100644 --- a/tests/domaincapsdata/qemu_2.9.0.x86_64.xml +++ b/tests/domaincapsdata/qemu_2.9.0.x86_64.xml @@ -150,6 +150,7 @@ <vmcoreinfo supported='no'/> <genid supported='yes'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_3.0.0-q35.x86_64.xml b/tests/domaincapsdata/qemu_3.0.0-q35.x86_64.xml index 124a460f41..acff9a1310 100644 --- a/tests/domaincapsdata/qemu_3.0.0-q35.x86_64.xml +++ b/tests/domaincapsdata/qemu_3.0.0-q35.x86_64.xml @@ -167,6 +167,7 @@ <vmcoreinfo supported='yes'/> <genid supported='yes'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_3.0.0-tcg.x86_64.xml b/tests/domaincapsdata/qemu_3.0.0-tcg.x86_64.xml index 0dfbb3471b..d369fa827a 100644 --- a/tests/domaincapsdata/qemu_3.0.0-tcg.x86_64.xml +++ b/tests/domaincapsdata/qemu_3.0.0-tcg.x86_64.xml @@ -182,6 +182,7 @@ <vmcoreinfo supported='yes'/> <genid supported='yes'/> 
<backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_3.0.0.ppc64.xml b/tests/domaincapsdata/qemu_3.0.0.ppc64.xml index e3acde93d4..3fdc6e0f1e 100644 --- a/tests/domaincapsdata/qemu_3.0.0.ppc64.xml +++ b/tests/domaincapsdata/qemu_3.0.0.ppc64.xml @@ -107,6 +107,7 @@ <vmcoreinfo supported='no'/> <genid supported='no'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_3.0.0.s390x.xml b/tests/domaincapsdata/qemu_3.0.0.s390x.xml index 850acf905c..68bcafd62f 100644 --- a/tests/domaincapsdata/qemu_3.0.0.s390x.xml +++ b/tests/domaincapsdata/qemu_3.0.0.s390x.xml @@ -200,6 +200,7 @@ <vmcoreinfo supported='no'/> <genid supported='no'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_3.0.0.x86_64.xml b/tests/domaincapsdata/qemu_3.0.0.x86_64.xml index 18212faad8..a51eb46d15 100644 --- a/tests/domaincapsdata/qemu_3.0.0.x86_64.xml +++ b/tests/domaincapsdata/qemu_3.0.0.x86_64.xml @@ -167,6 +167,7 @@ <vmcoreinfo supported='yes'/> <genid supported='yes'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_3.1.0-q35.x86_64.xml b/tests/domaincapsdata/qemu_3.1.0-q35.x86_64.xml index db00c67571..6d11d0303f 100644 --- a/tests/domaincapsdata/qemu_3.1.0-q35.x86_64.xml +++ b/tests/domaincapsdata/qemu_3.1.0-q35.x86_64.xml @@ -170,6 +170,7 @@ <vmcoreinfo supported='yes'/> <genid supported='yes'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_3.1.0-tcg.x86_64.xml b/tests/domaincapsdata/qemu_3.1.0-tcg.x86_64.xml index b3ef9e6c7e..444d90504e 100644 --- 
a/tests/domaincapsdata/qemu_3.1.0-tcg.x86_64.xml +++ b/tests/domaincapsdata/qemu_3.1.0-tcg.x86_64.xml @@ -185,6 +185,7 @@ <vmcoreinfo supported='yes'/> <genid supported='yes'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_3.1.0.ppc64.xml b/tests/domaincapsdata/qemu_3.1.0.ppc64.xml index 6f1aef4e12..3136a00662 100644 --- a/tests/domaincapsdata/qemu_3.1.0.ppc64.xml +++ b/tests/domaincapsdata/qemu_3.1.0.ppc64.xml @@ -107,6 +107,7 @@ <vmcoreinfo supported='no'/> <genid supported='no'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_3.1.0.x86_64.xml b/tests/domaincapsdata/qemu_3.1.0.x86_64.xml index a9dde532e7..9fe7605272 100644 --- a/tests/domaincapsdata/qemu_3.1.0.x86_64.xml +++ b/tests/domaincapsdata/qemu_3.1.0.x86_64.xml @@ -170,6 +170,7 @@ <vmcoreinfo supported='yes'/> <genid supported='yes'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_4.0.0-q35.x86_64.xml b/tests/domaincapsdata/qemu_4.0.0-q35.x86_64.xml index 57eb49362c..8e991f672b 100644 --- a/tests/domaincapsdata/qemu_4.0.0-q35.x86_64.xml +++ b/tests/domaincapsdata/qemu_4.0.0-q35.x86_64.xml @@ -170,6 +170,7 @@ <vmcoreinfo supported='yes'/> <genid supported='yes'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_4.0.0-tcg.x86_64.xml b/tests/domaincapsdata/qemu_4.0.0-tcg.x86_64.xml index 5884defc41..463f0db390 100644 --- a/tests/domaincapsdata/qemu_4.0.0-tcg.x86_64.xml +++ b/tests/domaincapsdata/qemu_4.0.0-tcg.x86_64.xml @@ -185,6 +185,7 @@ <vmcoreinfo supported='yes'/> <genid supported='yes'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev 
supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_4.0.0-virt.aarch64.xml b/tests/domaincapsdata/qemu_4.0.0-virt.aarch64.xml index c2d77a9dc0..0821b8ef9f 100644 --- a/tests/domaincapsdata/qemu_4.0.0-virt.aarch64.xml +++ b/tests/domaincapsdata/qemu_4.0.0-virt.aarch64.xml @@ -148,6 +148,7 @@ <vmcoreinfo supported='yes'/> <genid supported='no'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_4.0.0.aarch64.xml b/tests/domaincapsdata/qemu_4.0.0.aarch64.xml index 218b9d7c0e..5d5af1b87c 100644 --- a/tests/domaincapsdata/qemu_4.0.0.aarch64.xml +++ b/tests/domaincapsdata/qemu_4.0.0.aarch64.xml @@ -142,6 +142,7 @@ <vmcoreinfo supported='yes'/> <genid supported='no'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_4.0.0.ppc64.xml b/tests/domaincapsdata/qemu_4.0.0.ppc64.xml index 8de62bc781..b56d5fb6dc 100644 --- a/tests/domaincapsdata/qemu_4.0.0.ppc64.xml +++ b/tests/domaincapsdata/qemu_4.0.0.ppc64.xml @@ -108,6 +108,7 @@ <vmcoreinfo supported='no'/> <genid supported='no'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_4.0.0.s390x.xml b/tests/domaincapsdata/qemu_4.0.0.s390x.xml index 09c5286919..4298b148fd 100644 --- a/tests/domaincapsdata/qemu_4.0.0.s390x.xml +++ b/tests/domaincapsdata/qemu_4.0.0.s390x.xml @@ -205,6 +205,7 @@ <vmcoreinfo supported='no'/> <genid supported='no'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_4.0.0.x86_64.xml b/tests/domaincapsdata/qemu_4.0.0.x86_64.xml index cfa58caa4f..27d8023d38 100644 --- a/tests/domaincapsdata/qemu_4.0.0.x86_64.xml +++ 
b/tests/domaincapsdata/qemu_4.0.0.x86_64.xml @@ -170,6 +170,7 @@ <vmcoreinfo supported='yes'/> <genid supported='yes'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_4.1.0-q35.x86_64.xml b/tests/domaincapsdata/qemu_4.1.0-q35.x86_64.xml index 463db0c72d..6363aa4a3f 100644 --- a/tests/domaincapsdata/qemu_4.1.0-q35.x86_64.xml +++ b/tests/domaincapsdata/qemu_4.1.0-q35.x86_64.xml @@ -174,6 +174,7 @@ <vmcoreinfo supported='yes'/> <genid supported='yes'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_4.1.0-tcg.x86_64.xml b/tests/domaincapsdata/qemu_4.1.0-tcg.x86_64.xml index 611c67a2a3..b6168e684d 100644 --- a/tests/domaincapsdata/qemu_4.1.0-tcg.x86_64.xml +++ b/tests/domaincapsdata/qemu_4.1.0-tcg.x86_64.xml @@ -185,6 +185,7 @@ <vmcoreinfo supported='yes'/> <genid supported='yes'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_4.1.0.x86_64.xml b/tests/domaincapsdata/qemu_4.1.0.x86_64.xml index 629d47a0d5..54cb76e77b 100644 --- a/tests/domaincapsdata/qemu_4.1.0.x86_64.xml +++ b/tests/domaincapsdata/qemu_4.1.0.x86_64.xml @@ -174,6 +174,7 @@ <vmcoreinfo supported='yes'/> <genid supported='yes'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_4.2.0-q35.x86_64.xml b/tests/domaincapsdata/qemu_4.2.0-q35.x86_64.xml index db0bf87e20..0842a75523 100644 --- a/tests/domaincapsdata/qemu_4.2.0-q35.x86_64.xml +++ b/tests/domaincapsdata/qemu_4.2.0-q35.x86_64.xml @@ -174,6 +174,7 @@ <vmcoreinfo supported='yes'/> <genid supported='yes'/> <backingStoreInput supported='yes'/> + <backup supported='no'/> <sev supported='no'/> </features> 
</domainCapabilities> diff --git a/tests/domaincapsdata/qemu_4.2.0-tcg.x86_64.xml b/tests/domaincapsdata/qemu_4.2.0-tcg.x86_64.xml index ddea9c52ea..c415535a91 100644 --- a/tests/domaincapsdata/qemu_4.2.0-tcg.x86_64.xml +++ b/tests/domaincapsdata/qemu_4.2.0-tcg.x86_64.xml @@ -185,6 +185,7 @@ <vmcoreinfo supported='yes'/> <genid supported='yes'/> <backingStoreInput supported='yes'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_4.2.0-virt.aarch64.xml b/tests/domaincapsdata/qemu_4.2.0-virt.aarch64.xml index d101914b06..e5954717cc 100644 --- a/tests/domaincapsdata/qemu_4.2.0-virt.aarch64.xml +++ b/tests/domaincapsdata/qemu_4.2.0-virt.aarch64.xml @@ -148,6 +148,7 @@ <vmcoreinfo supported='yes'/> <genid supported='no'/> <backingStoreInput supported='yes'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_4.2.0.aarch64.xml b/tests/domaincapsdata/qemu_4.2.0.aarch64.xml index 65a842c1b1..bb02b1d850 100644 --- a/tests/domaincapsdata/qemu_4.2.0.aarch64.xml +++ b/tests/domaincapsdata/qemu_4.2.0.aarch64.xml @@ -142,6 +142,7 @@ <vmcoreinfo supported='yes'/> <genid supported='no'/> <backingStoreInput supported='yes'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_4.2.0.ppc64.xml b/tests/domaincapsdata/qemu_4.2.0.ppc64.xml index d77e88ab24..6d3ada3735 100644 --- a/tests/domaincapsdata/qemu_4.2.0.ppc64.xml +++ b/tests/domaincapsdata/qemu_4.2.0.ppc64.xml @@ -108,6 +108,7 @@ <vmcoreinfo supported='no'/> <genid supported='no'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_4.2.0.s390x.xml b/tests/domaincapsdata/qemu_4.2.0.s390x.xml index e787df4d70..c6d92542c3 100644 --- a/tests/domaincapsdata/qemu_4.2.0.s390x.xml +++ 
b/tests/domaincapsdata/qemu_4.2.0.s390x.xml @@ -199,6 +199,7 @@ <vmcoreinfo supported='no'/> <genid supported='no'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> diff --git a/tests/domaincapsdata/qemu_4.2.0.x86_64.xml b/tests/domaincapsdata/qemu_4.2.0.x86_64.xml index c6b528c5f5..212e0a5666 100644 --- a/tests/domaincapsdata/qemu_4.2.0.x86_64.xml +++ b/tests/domaincapsdata/qemu_4.2.0.x86_64.xml @@ -174,6 +174,7 @@ <vmcoreinfo supported='yes'/> <genid supported='yes'/> <backingStoreInput supported='yes'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities> -- 2.23.0
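The patch above adds a `<backup supported='no'/>` element under `<features>` in every domain-capabilities test file. As an illustration only (this is not libvirt code, and the helper name is invented), a client could check the new flag by parsing the `virsh domcapabilities` output like this:

```python
import xml.etree.ElementTree as ET

# Truncated sample of the domain capabilities XML shape used in the test
# files above; a real document contains many more sections.
DOMCAPS_XML = """
<domainCapabilities>
  <features>
    <vmcoreinfo supported='yes'/>
    <genid supported='yes'/>
    <backingStoreInput supported='yes'/>
    <backup supported='no'/>
    <sev supported='no'/>
  </features>
</domainCapabilities>
"""

def feature_supported(domcaps_xml: str, feature: str) -> bool:
    """Return True if the named <features> child reports supported='yes'."""
    root = ET.fromstring(domcaps_xml)
    node = root.find(f"./features/{feature}")
    return node is not None and node.get("supported") == "yes"

print(feature_supported(DOMCAPS_XML, "backup"))             # False
print(feature_supported(DOMCAPS_XML, "backingStoreInput"))  # True
```

Since no QEMU in the test data advertises the incremental-backup capability yet, every file gains the element with `supported='no'`.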

On Tue, Dec 03, 2019 at 06:17:47PM +0100, Peter Krempa wrote:
Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- src/qemu/qemu_capabilities.c | 1 + tests/domaincapsdata/qemu_1.5.3-q35.x86_64.xml | 1 + tests/domaincapsdata/qemu_1.5.3-tcg.x86_64.xml | 1 + tests/domaincapsdata/qemu_1.5.3.x86_64.xml | 1 + tests/domaincapsdata/qemu_1.6.0-q35.x86_64.xml | 1 + tests/domaincapsdata/qemu_1.6.0-tcg.x86_64.xml | 1 + tests/domaincapsdata/qemu_1.6.0.x86_64.xml | 1 + tests/domaincapsdata/qemu_1.7.0-q35.x86_64.xml | 1 + tests/domaincapsdata/qemu_1.7.0-tcg.x86_64.xml | 1 + tests/domaincapsdata/qemu_1.7.0.x86_64.xml | 1 + tests/domaincapsdata/qemu_2.1.1-q35.x86_64.xml | 1 + tests/domaincapsdata/qemu_2.1.1-tcg.x86_64.xml | 1 + tests/domaincapsdata/qemu_2.1.1.x86_64.xml | 1 + tests/domaincapsdata/qemu_2.10.0-q35.x86_64.xml | 1 + tests/domaincapsdata/qemu_2.10.0-tcg.x86_64.xml | 1 + tests/domaincapsdata/qemu_2.10.0-virt.aarch64.xml | 1 + tests/domaincapsdata/qemu_2.10.0.aarch64.xml | 1 + tests/domaincapsdata/qemu_2.10.0.ppc64.xml | 1 + tests/domaincapsdata/qemu_2.10.0.s390x.xml | 1 + tests/domaincapsdata/qemu_2.10.0.x86_64.xml | 1 + tests/domaincapsdata/qemu_2.11.0-q35.x86_64.xml | 1 + tests/domaincapsdata/qemu_2.11.0-tcg.x86_64.xml | 1 + tests/domaincapsdata/qemu_2.11.0.s390x.xml | 1 + tests/domaincapsdata/qemu_2.11.0.x86_64.xml | 1 + tests/domaincapsdata/qemu_2.12.0-q35.x86_64.xml | 1 + tests/domaincapsdata/qemu_2.12.0-tcg.x86_64.xml | 1 + tests/domaincapsdata/qemu_2.12.0-virt.aarch64.xml | 1 + tests/domaincapsdata/qemu_2.12.0.aarch64.xml | 1 + tests/domaincapsdata/qemu_2.12.0.ppc64.xml | 1 + tests/domaincapsdata/qemu_2.12.0.s390x.xml | 1 + tests/domaincapsdata/qemu_2.12.0.x86_64.xml | 1 + tests/domaincapsdata/qemu_2.4.0-q35.x86_64.xml | 1 + tests/domaincapsdata/qemu_2.4.0-tcg.x86_64.xml | 1 + tests/domaincapsdata/qemu_2.4.0.x86_64.xml | 1 + tests/domaincapsdata/qemu_2.5.0-q35.x86_64.xml | 1 + tests/domaincapsdata/qemu_2.5.0-tcg.x86_64.xml | 1 + tests/domaincapsdata/qemu_2.5.0.x86_64.xml | 1 + tests/domaincapsdata/qemu_2.6.0-q35.x86_64.xml | 
1 + tests/domaincapsdata/qemu_2.6.0-tcg.x86_64.xml | 1 + tests/domaincapsdata/qemu_2.6.0-virt.aarch64.xml | 1 + tests/domaincapsdata/qemu_2.6.0.aarch64.xml | 1 + tests/domaincapsdata/qemu_2.6.0.ppc64.xml | 1 + tests/domaincapsdata/qemu_2.6.0.x86_64.xml | 1 + tests/domaincapsdata/qemu_2.7.0-q35.x86_64.xml | 1 + tests/domaincapsdata/qemu_2.7.0-tcg.x86_64.xml | 1 + tests/domaincapsdata/qemu_2.7.0.s390x.xml | 1 + tests/domaincapsdata/qemu_2.7.0.x86_64.xml | 1 + tests/domaincapsdata/qemu_2.8.0-q35.x86_64.xml | 1 + tests/domaincapsdata/qemu_2.8.0-tcg.x86_64.xml | 1 + tests/domaincapsdata/qemu_2.8.0.s390x.xml | 1 + tests/domaincapsdata/qemu_2.8.0.x86_64.xml | 1 + tests/domaincapsdata/qemu_2.9.0-q35.x86_64.xml | 1 + tests/domaincapsdata/qemu_2.9.0-tcg.x86_64.xml | 1 + tests/domaincapsdata/qemu_2.9.0.ppc64.xml | 1 + tests/domaincapsdata/qemu_2.9.0.s390x.xml | 1 + tests/domaincapsdata/qemu_2.9.0.x86_64.xml | 1 + tests/domaincapsdata/qemu_3.0.0-q35.x86_64.xml | 1 + tests/domaincapsdata/qemu_3.0.0-tcg.x86_64.xml | 1 + tests/domaincapsdata/qemu_3.0.0.ppc64.xml | 1 + tests/domaincapsdata/qemu_3.0.0.s390x.xml | 1 + tests/domaincapsdata/qemu_3.0.0.x86_64.xml | 1 + tests/domaincapsdata/qemu_3.1.0-q35.x86_64.xml | 1 + tests/domaincapsdata/qemu_3.1.0-tcg.x86_64.xml | 1 + tests/domaincapsdata/qemu_3.1.0.ppc64.xml | 1 + tests/domaincapsdata/qemu_3.1.0.x86_64.xml | 1 + tests/domaincapsdata/qemu_4.0.0-q35.x86_64.xml | 1 + tests/domaincapsdata/qemu_4.0.0-tcg.x86_64.xml | 1 + tests/domaincapsdata/qemu_4.0.0-virt.aarch64.xml | 1 + tests/domaincapsdata/qemu_4.0.0.aarch64.xml | 1 + tests/domaincapsdata/qemu_4.0.0.ppc64.xml | 1 + tests/domaincapsdata/qemu_4.0.0.s390x.xml | 1 + tests/domaincapsdata/qemu_4.0.0.x86_64.xml | 1 + tests/domaincapsdata/qemu_4.1.0-q35.x86_64.xml | 1 + tests/domaincapsdata/qemu_4.1.0-tcg.x86_64.xml | 1 + tests/domaincapsdata/qemu_4.1.0.x86_64.xml | 1 + tests/domaincapsdata/qemu_4.2.0-q35.x86_64.xml | 1 + tests/domaincapsdata/qemu_4.2.0-tcg.x86_64.xml | 1 + 
tests/domaincapsdata/qemu_4.2.0-virt.aarch64.xml | 1 + tests/domaincapsdata/qemu_4.2.0.aarch64.xml | 1 + tests/domaincapsdata/qemu_4.2.0.ppc64.xml | 1 + tests/domaincapsdata/qemu_4.2.0.s390x.xml | 1 + tests/domaincapsdata/qemu_4.2.0.x86_64.xml | 1 + 82 files changed, 82 insertions(+)
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>

Regards,
Daniel

--
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|

On Tue, Dec 03, 2019 at 06:17:47PM +0100, Peter Krempa wrote:
Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- src/qemu/qemu_capabilities.c | 1 + tests/domaincapsdata/qemu_1.5.3-q35.x86_64.xml | 1 +
[...]
tests/domaincapsdata/qemu_4.2.0.x86_64.xml | 1 + 82 files changed, 82 insertions(+)
Reviewed-by: Ján Tomko <jtomko@redhat.com>

Jano

On 12/3/19 11:17 AM, Peter Krempa wrote:
Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- src/qemu/qemu_capabilities.c | 1 + tests/domaincapsdata/qemu_1.5.3-q35.x86_64.xml | 1 +
tests/domaincapsdata/qemu_4.2.0.x86_64.xml | 1 + 82 files changed, 82 insertions(+)
Mostly mechanical :)
diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c index edb128c881..3b9e4561fa 100644 --- a/src/qemu/qemu_capabilities.c +++ b/src/qemu/qemu_capabilities.c @@ -5458,6 +5458,7 @@ static const struct virQEMUCapsDomainFeatureCapabilityTuple domCapsTuples[] = { { VIR_DOMAIN_CAPS_FEATURE_VMCOREINFO, QEMU_CAPS_DEVICE_VMCOREINFO }, { VIR_DOMAIN_CAPS_FEATURE_GENID, QEMU_CAPS_DEVICE_VMGENID }, { VIR_DOMAIN_CAPS_FEATURE_BACKING_STORE_INPUT, QEMU_CAPS_BLOCKDEV }, + { VIR_DOMAIN_CAPS_FEATURE_BACKUP, QEMU_CAPS_INCREMENTAL_BACKUP }, };
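The quoted hunk extends the `domCapsTuples` table, which pairs each domain-capabilities feature with the QEMU capability that must be present for it to be reported as supported. A hedged sketch of that lookup, in Python rather than libvirt's C (the tuple names mirror the enum constants, but the data structures here are invented for illustration):

```python
# Each entry pairs a domcaps feature name with the QEMU capability flag
# that gates it, mimicking domCapsTuples in qemu_capabilities.c.
DOM_CAPS_TUPLES = [
    ("vmcoreinfo",        "QEMU_CAPS_DEVICE_VMCOREINFO"),
    ("genid",             "QEMU_CAPS_DEVICE_VMGENID"),
    ("backingStoreInput", "QEMU_CAPS_BLOCKDEV"),
    ("backup",            "QEMU_CAPS_INCREMENTAL_BACKUP"),  # added by this patch
]

def fill_features(qemu_caps: set) -> dict:
    """Mimic the loop that fills each domcaps feature from the tuple table."""
    return {feature: ("yes" if cap in qemu_caps else "no")
            for feature, cap in DOM_CAPS_TUPLES}

# QEMU_CAPS_INCREMENTAL_BACKUP is absent from all the tested QEMU versions,
# so every regenerated test file reports <backup supported='no'/>.
caps = {"QEMU_CAPS_DEVICE_VMCOREINFO", "QEMU_CAPS_BLOCKDEV"}
print(fill_features(caps)["backup"])  # no
```

This is why the rest of the patch is a mechanical one-line change across 81 test files.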
diff --git a/tests/domaincapsdata/qemu_1.5.3-q35.x86_64.xml b/tests/domaincapsdata/qemu_1.5.3-q35.x86_64.xml index 87cb8eb07e..b9b9dd0538 100644 --- a/tests/domaincapsdata/qemu_1.5.3-q35.x86_64.xml +++ b/tests/domaincapsdata/qemu_1.5.3-q35.x86_64.xml @@ -131,6 +131,7 @@ <vmcoreinfo supported='no'/> <genid supported='no'/> <backingStoreInput supported='no'/> + <backup supported='no'/> <sev supported='no'/> </features> </domainCapabilities>
Reviewed-by: Eric Blake <eblake@redhat.com>

--
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3226
Virtualization:  qemu.org | libvirt.org
participants (6)
- Cole Robinson
- Daniel P. Berrangé
- Eric Blake
- Ján Tomko
- Peter Krempa
- Vladimir Sementsov-Ogievskiy