[libvirt] [PATCH 00/11] qemu: backup: Fixes and improvements

I've got a report of pull-mode backup not working properly. While
investigating I found a few other problems and one regression caused by a
recent patch. All of those are fixed below.

Additionally, the last 3 patches are RFC and were created based on the
discussion on qemu-block. I don't really intend to push them right away,
at least until the discussion there is over:

https://lists.gnu.org/archive/html/qemu-block/2019-12/msg00416.html

Peter Krempa (11):
  qemu: block: Use proper asyncJob when waiting for completion of
    blockdev-create
  qemu: Reset the node-name allocator in qemuDomainObjPrivateDataClear
  qemu: backup: Configure backup store image with backing file
  qemu: backup: Move deletion of backup images to job termination
  qemu: blockjob: Remove infrastructure for remembering to delete image
  qemu: backup: Properly propagate async job type when cancelling the job
  qemu: process: Terminate backup job on VM destroy
  schemas: backup: Remove pointless <choice> for 'name' of backup disk
  RFC: conf: backup: Allow configuration of names exported via NBD
  qemu: backup: Implement support for backup disk export name configuration
  qemu: backup: Implement support for backup disk bitmap name configuration

 docs/formatbackup.html.in                      |  9 +++
 docs/schemas/domainbackup.rng                  | 16 ++--
 src/conf/backup_conf.c                         | 11 +++
 src/conf/backup_conf.h                         |  2 +
 src/qemu/qemu_backup.c                         | 77 +++++++++++++------
 src/qemu/qemu_backup.h                         | 10 ++-
 src/qemu/qemu_block.c                          |  4 +-
 src/qemu/qemu_blockjob.c                       | 20 +----
 src/qemu/qemu_blockjob.h                       |  2 -
 src/qemu/qemu_domain.c                         | 10 +--
 src/qemu/qemu_driver.c                         |  2 +-
 src/qemu/qemu_process.c                        |  8 +-
 .../backup-pull-seclabel.xml                   |  2 +-
 .../backup-pull-seclabel.xml                   |  2 +-
 tests/qemustatusxml2xmldata/backup-pull-in.xml |  1 -
 15 files changed, 110 insertions(+), 66 deletions(-)

--
2.23.0

The waiting loop used QEMU_ASYNC_JOB_NONE rather than 'asyncJob' passed
from the caller.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
---
 src/qemu/qemu_block.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/src/qemu/qemu_block.c b/src/qemu/qemu_block.c
index 95a2702f9d..eab21bc107 100644
--- a/src/qemu/qemu_block.c
+++ b/src/qemu/qemu_block.c
@@ -2406,11 +2406,11 @@ qemuBlockStorageSourceCreateGeneric(virDomainObjPtr vm,

     qemuBlockJobStarted(job, vm);

-    qemuBlockJobUpdate(vm, job, QEMU_ASYNC_JOB_NONE);
+    qemuBlockJobUpdate(vm, job, asyncJob);
     while (qemuBlockJobIsRunning(job)) {
         if (virDomainObjWait(vm) < 0)
             goto cleanup;
-        qemuBlockJobUpdate(vm, job, QEMU_ASYNC_JOB_NONE);
+        qemuBlockJobUpdate(vm, job, asyncJob);
     }

     if (job->state == QEMU_BLOCKJOB_STATE_FAILED ||
--
2.23.0

On Fri, Dec 20, 2019 at 02:25:19PM +0100, Peter Krempa wrote:
The waiting loop used QEMU_ASYNC_JOB_NONE rather than 'asyncJob' passed from the caller.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
---
 src/qemu/qemu_block.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>

Regards,
Daniel
--
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|

qemuDomainObjPrivateDataClear clears state which becomes invalid after the
VM stops running, and the node name allocator belongs there.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
---
 src/qemu/qemu_domain.c  | 3 +++
 src/qemu/qemu_process.c | 3 ---
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index ff87720fd1..0e073c25e1 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -2238,6 +2238,9 @@ qemuDomainObjPrivateDataClear(qemuDomainObjPrivatePtr priv)

     virDomainBackupDefFree(priv->backup);
     priv->backup = NULL;
+
+    /* reset node name allocator */
+    qemuDomainStorageIdReset(priv);
 }

diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 7e1db50e8f..e6c6c0bee2 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -7672,9 +7672,6 @@ void qemuProcessStop(virQEMUDriverPtr driver,
     /* clear all private data entries which are no longer needed */
     qemuDomainObjPrivateDataClear(priv);

-    /* reset node name allocator */
-    qemuDomainStorageIdReset(priv);
-
     /* The "release" hook cleans up additional resources */
     if (virHookPresent(VIR_HOOK_DRIVER_QEMU)) {
         char *xml = qemuDomainDefFormatXML(driver, NULL, vm->def, 0);
--
2.23.0

On Fri, Dec 20, 2019 at 02:25:20PM +0100, Peter Krempa wrote:
qemuDomainObjPrivateDataClear clears state which become invalid after VM stopped running and the node name allocator belongs there.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
---
 src/qemu/qemu_domain.c  | 3 +++
 src/qemu/qemu_process.c | 3 ---
 2 files changed, 3 insertions(+), 3 deletions(-)
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>

In contrast to snapshots, the backup job does not complain when the backup
job's store file has backing pre-configured. It's actually required so
that the NBD server exposes all the data properly.

Remove our fake termination and use the existing disk source as backing.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
---
 src/qemu/qemu_backup.c | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/src/qemu/qemu_backup.c b/src/qemu/qemu_backup.c
index 294d5999a0..de4730441b 100644
--- a/src/qemu/qemu_backup.c
+++ b/src/qemu/qemu_backup.c
@@ -302,7 +302,6 @@ qemuBackupDiskPrepareDataOne(virDomainObjPtr vm,
                              bool removeStore)
 {
     qemuDomainObjPrivatePtr priv = vm->privateData;
-    g_autoptr(virStorageSource) terminator = NULL;

     /* set data structure */
     dd->backupdisk = backupdisk;
@@ -331,17 +330,14 @@ qemuBackupDiskPrepareDataOne(virDomainObjPtr vm,
         return -1;
     }

-    /* install terminator to prevent qemu form opening backing images */
-    if (!(terminator = virStorageSourceNew()))
-        return -1;
-
     if (!(dd->blockjob = qemuBlockJobDiskNewBackup(vm, dd->domdisk, dd->store,
                                                    removeStore,
                                                    dd->incrementalBitmap)))
         return -1;

+    /* use original disk as backing to prevent opening the backing chain */
     if (!(dd->crdata = qemuBuildStorageSourceChainAttachPrepareBlockdevTop(dd->store,
-                                                                           terminator,
+                                                                           dd->domdisk->src,
                                                                            priv->qemuCaps)))
         return -1;
--
2.23.0
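For context, the "store" being configured here is the scratch image of a pull-mode backup, which the user requests via the backup XML. A minimal hypothetical example of such a definition (paths and values are illustrative, modeled on the test files in this series):

```xml
<domainbackup mode='pull'>
  <server transport='tcp' name='localhost' port='10809'/>
  <disks>
    <!-- the scratch file below is the backup "store"; with this patch
         qemu opens it with the original disk source as its backing
         image instead of a fake backing-chain terminator -->
    <disk name='vda' type='file'>
      <driver type='qcow2'/>
      <scratch file='/path/to/scratch.qcow2'/>
    </disk>
  </disks>
</domainbackup>
```

With the original disk as backing, the NBD export of the scratch image can serve unallocated regions from the live disk, which is why the server then "exposes all the data properly".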

On Fri, Dec 20, 2019 at 02:25:21PM +0100, Peter Krempa wrote:
In contrast to snapshots the backup job does not complain when the backup job's store file has backing pre-configured. It's actually required so that the NBD server exposes all the data properly.
Remove our fake termination and use the existing disk source as backing.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
---
 src/qemu/qemu_backup.c | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>

On 12/20/19 7:25 AM, Peter Krempa wrote:
In contrast to snapshots the backup job does not complain when the backup job's store file has backing pre-configured. It's actually required so that the NBD server exposes all the data properly.
Remove our fake termination and use the existing disk source as backing.
Yep, this one is essential :)
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
---
Reviewed-by: Eric Blake <eblake@redhat.com>

--
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3226
Virtualization:  qemu.org | libvirt.org

While qemu is running both locations are identical in semantics, but the
move will allow us to fix the scenario when the VM is destroyed or
crashes, where we'd leak the images.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
---
 src/qemu/qemu_backup.c   | 24 ++++++++++++++++++++++++
 src/qemu/qemu_blockjob.c | 15 +--------------
 2 files changed, 25 insertions(+), 14 deletions(-)

diff --git a/src/qemu/qemu_backup.c b/src/qemu/qemu_backup.c
index de4730441b..4ab9a2b17e 100644
--- a/src/qemu/qemu_backup.c
+++ b/src/qemu/qemu_backup.c
@@ -614,6 +614,7 @@ qemuBackupJobTerminate(virDomainObjPtr vm,
 {
     qemuDomainObjPrivatePtr priv = vm->privateData;
+    size_t i;

     qemuDomainJobInfoUpdateTime(priv->job.current);
@@ -630,6 +631,29 @@ qemuBackupJobTerminate(virDomainObjPtr vm,

     qemuDomainEventEmitJobCompleted(priv->driver, vm);

+    if (!(priv->job.apiFlags & VIR_DOMAIN_BACKUP_BEGIN_REUSE_EXTERNAL) &&
+        (priv->backup->type == VIR_DOMAIN_BACKUP_TYPE_PULL ||
+         (priv->backup->type == VIR_DOMAIN_BACKUP_TYPE_PUSH &&
+          jobstatus != QEMU_DOMAIN_JOB_STATUS_COMPLETED))) {
+        g_autoptr(virQEMUDriverConfig) cfg = virQEMUDriverGetConfig(priv->driver);
+
+        for (i = 0; i < priv->backup->ndisks; i++) {
+            virDomainBackupDiskDefPtr backupdisk = priv->backup->disks + i;
+            uid_t uid;
+            gid_t gid;
+
+            if (!backupdisk->store ||
+                backupdisk->store->type != VIR_STORAGE_TYPE_FILE)
+                continue;
+
+            qemuDomainGetImageIds(cfg, vm, backupdisk->store, NULL, &uid, &gid);
+            if (virFileRemove(backupdisk->store->path, uid, gid) < 0)
+                VIR_WARN("failed to remove scratch file '%s'",
+                         backupdisk->store->path);
+        }
+    }
+
     virDomainBackupDefFree(priv->backup);
     priv->backup = NULL;
     qemuDomainObjEndAsyncJob(priv->driver, vm);

diff --git a/src/qemu/qemu_blockjob.c b/src/qemu/qemu_blockjob.c
index 498e2a716f..131b53d88d 100644
--- a/src/qemu/qemu_blockjob.c
+++ b/src/qemu/qemu_blockjob.c
@@ -1335,9 +1335,6 @@ qemuBlockJobProcessEventConcludedBackup(virQEMUDriverPtr driver,
                                         unsigned long long progressCurrent,
                                         unsigned long long progressTotal)
 {
-    g_autoptr(virQEMUDriverConfig) cfg = virQEMUDriverGetConfig(driver);
-    uid_t uid;
-    gid_t gid;
     g_autoptr(qemuBlockStorageSourceAttachData) backend = NULL;
     g_autoptr(virJSONValue) actions = NULL;

@@ -1369,18 +1366,8 @@ qemuBlockJobProcessEventConcludedBackup(virQEMUDriverPtr driver,
     if (qemuDomainObjExitMonitor(driver, vm) < 0)
         return;

-    if (job->data.backup.store) {
+    if (job->data.backup.store)
         qemuDomainStorageSourceAccessRevoke(driver, vm, job->data.backup.store);
-
-        if (job->data.backup.deleteStore &&
-            job->data.backup.store->type == VIR_STORAGE_TYPE_FILE) {
-            qemuDomainGetImageIds(cfg, vm, job->data.backup.store, NULL, &uid, &gid);
-
-            if (virFileRemove(job->data.backup.store->path, uid, gid) < 0)
-                VIR_WARN("failed to remove scratch file '%s'",
-                         job->data.backup.store->path);
-        }
-    }
 }
--
2.23.0

On Fri, Dec 20, 2019 at 02:25:22PM +0100, Peter Krempa wrote:
While qemu is running both locations are identical in semantics, but the move will allow us to fix the scenario when the VM is destroyed or crashes where we'd leak the images.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
---
 src/qemu/qemu_backup.c   | 24 ++++++++++++++++++++++++
 src/qemu/qemu_blockjob.c | 15 +--------------
 2 files changed, 25 insertions(+), 14 deletions(-)
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>

Now that we delete the images elsewhere it's not required. Additionally,
it's safe to do as we never released an upstream version which required
this being in place.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
---
 src/qemu/qemu_backup.c                         | 12 ++++--------
 src/qemu/qemu_blockjob.c                       |  2 --
 src/qemu/qemu_blockjob.h                       |  2 --
 src/qemu/qemu_domain.c                         |  7 -------
 tests/qemustatusxml2xmldata/backup-pull-in.xml |  1 -
 5 files changed, 4 insertions(+), 20 deletions(-)

diff --git a/src/qemu/qemu_backup.c b/src/qemu/qemu_backup.c
index 4ab9a2b17e..fa9588ef28 100644
--- a/src/qemu/qemu_backup.c
+++ b/src/qemu/qemu_backup.c
@@ -298,8 +298,7 @@ qemuBackupDiskPrepareDataOne(virDomainObjPtr vm,
                              virJSONValuePtr actions,
                              virDomainMomentDefPtr *incremental,
                              virHashTablePtr blockNamedNodeData,
-                             virQEMUDriverConfigPtr cfg,
-                             bool removeStore)
+                             virQEMUDriverConfigPtr cfg)
 {
     qemuDomainObjPrivatePtr priv = vm->privateData;
@@ -331,7 +330,6 @@ qemuBackupDiskPrepareDataOne(virDomainObjPtr vm,
     }

     if (!(dd->blockjob = qemuBlockJobDiskNewBackup(vm, dd->domdisk, dd->store,
-                                                   removeStore,
                                                    dd->incrementalBitmap)))
         return -1;
@@ -389,13 +387,11 @@ qemuBackupDiskPrepareData(virDomainObjPtr vm,
                           virHashTablePtr blockNamedNodeData,
                           virJSONValuePtr actions,
                           virQEMUDriverConfigPtr cfg,
-                          struct qemuBackupDiskData **rdd,
-                          bool reuse_external)
+                          struct qemuBackupDiskData **rdd)
 {
     struct qemuBackupDiskData *disks = NULL;
     ssize_t ndisks = 0;
     size_t i;
-    bool removeStore = !reuse_external && (def->type == VIR_DOMAIN_BACKUP_TYPE_PULL);

     disks = g_new0(struct qemuBackupDiskData, def->ndisks);
@@ -410,7 +406,7 @@ qemuBackupDiskPrepareData(virDomainObjPtr vm,
         if (qemuBackupDiskPrepareDataOne(vm, backupdisk, dd, actions,
                                          incremental, blockNamedNodeData,
-                                         cfg, removeStore) < 0)
+                                         cfg) < 0)
             goto error;

         if (def->type == VIR_DOMAIN_BACKUP_TYPE_PULL) {
@@ -827,7 +823,7 @@ qemuBackupBegin(virDomainObjPtr vm,
         goto endjob;

     if ((ndd = qemuBackupDiskPrepareData(vm, def, incremental, blockNamedNodeData,
-                                         actions, cfg, &dd, reuse)) <= 0) {
+                                         actions, cfg, &dd)) <= 0) {
         if (ndd == 0) {
             virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
                            _("no disks selected for backup"));

diff --git a/src/qemu/qemu_blockjob.c b/src/qemu/qemu_blockjob.c
index 131b53d88d..c536c19bb6 100644
--- a/src/qemu/qemu_blockjob.c
+++ b/src/qemu/qemu_blockjob.c
@@ -382,7 +382,6 @@ qemuBlockJobDataPtr
 qemuBlockJobDiskNewBackup(virDomainObjPtr vm,
                           virDomainDiskDefPtr disk,
                           virStorageSourcePtr store,
-                          bool deleteStore,
                           const char *bitmap)
 {
     g_autoptr(qemuBlockJobData) job = NULL;
@@ -395,7 +394,6 @@ qemuBlockJobDiskNewBackup(virDomainObjPtr vm,
     job->data.backup.bitmap = g_strdup(bitmap);
     job->data.backup.store = virObjectRef(store);
-    job->data.backup.deleteStore = deleteStore;

     /* backup jobs are usually started in bulk by transaction so the caller
      * shall save the status XML */

diff --git a/src/qemu/qemu_blockjob.h b/src/qemu/qemu_blockjob.h
index 42b973fe96..7d584a2980 100644
--- a/src/qemu/qemu_blockjob.h
+++ b/src/qemu/qemu_blockjob.h
@@ -113,7 +113,6 @@ typedef qemuBlockJobBackupData *qemuBlockJobDataBackupPtr;
 struct _qemuBlockJobBackupData {
     virStorageSourcePtr store;
-    bool deleteStore;
     char *bitmap;
 };
@@ -201,7 +200,6 @@ qemuBlockJobDataPtr
 qemuBlockJobDiskNewBackup(virDomainObjPtr vm,
                           virDomainDiskDefPtr disk,
                           virStorageSourcePtr store,
-                          bool deleteStore,
                           const char *bitmap);

 qemuBlockJobDataPtr

diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index 0e073c25e1..d76dfe3a87 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -2619,9 +2619,6 @@ qemuDomainObjPrivateXMLFormatBlockjobIterator(void *payload,
                                                       data->xmlopt, false) < 0)
                 return -1;
-
-            if (job->data.backup.deleteStore)
-                virBufferAddLit(&childBuf, "<deleteStore/>\n");
         }
         break;
@@ -3224,10 +3221,6 @@ qemuDomainObjPrivateXMLParseBlockjobDataSpecific(qemuBlockJobDataPtr job,
         if (!(tmp = virXPathNode("./store", ctxt)) ||
             !(job->data.backup.store = qemuDomainObjPrivateXMLParseBlockjobChain(tmp, ctxt, xmlopt)))
             goto broken;
-
-        if (virXPathNode("./deleteStore", ctxt))
-            job->data.backup.deleteStore = true;
-
         break;

     case QEMU_BLOCKJOB_TYPE_BROKEN:

diff --git a/tests/qemustatusxml2xmldata/backup-pull-in.xml b/tests/qemustatusxml2xmldata/backup-pull-in.xml
index 6ef4965bed..3c69c41840 100644
--- a/tests/qemustatusxml2xmldata/backup-pull-in.xml
+++ b/tests/qemustatusxml2xmldata/backup-pull-in.xml
@@ -248,7 +248,6 @@
             </privateData>
           </source>
         </store>
-        <deleteStore/>
       </blockjob>
     </blockjobs>
   <agentTimeout>-2</agentTimeout>
--
2.23.0

On Fri, Dec 20, 2019 at 02:25:23PM +0100, Peter Krempa wrote:
Now that we delete the images elsewhere it's not required. Additionally it's safe to do as we never released an upstream version which required this being in place.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
---
 src/qemu/qemu_backup.c                         | 12 ++++--------
 src/qemu/qemu_blockjob.c                       |  2 --
 src/qemu/qemu_blockjob.h                       |  2 --
 src/qemu/qemu_domain.c                         |  7 -------
 tests/qemustatusxml2xmldata/backup-pull-in.xml |  1 -
 5 files changed, 4 insertions(+), 20 deletions(-)
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>

When cancelling the blockjobs as part of failed backup job startup
recovery we didn't pass in the correct async job type. Luckily the block
job handler and cancellation code paths use no block job at all currently,
so those were correct.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
---
 src/qemu/qemu_backup.c   | 17 +++++++++++------
 src/qemu/qemu_backup.h   |  7 +++++--
 src/qemu/qemu_blockjob.c |  3 ++-
 src/qemu/qemu_driver.c   |  2 +-
 4 files changed, 19 insertions(+), 10 deletions(-)

diff --git a/src/qemu/qemu_backup.c b/src/qemu/qemu_backup.c
index fa9588ef28..1381505d9f 100644
--- a/src/qemu/qemu_backup.c
+++ b/src/qemu/qemu_backup.c
@@ -661,6 +661,7 @@ qemuBackupJobTerminate(virDomainObjPtr vm,
  * @vm: domain object
  * @backup: backup definition
  * @terminatebackup: flag whether to terminate and unregister the backup
+ * @asyncJob: currently used qemu asynchronous job type
  *
  * Sends all active blockjobs which are part of @backup of @vm a signal to
  * cancel. If @terminatebackup is true qemuBackupJobTerminate is also called
@@ -669,7 +670,8 @@ qemuBackupJobTerminate(virDomainObjPtr vm,
 void
 qemuBackupJobCancelBlockjobs(virDomainObjPtr vm,
                              virDomainBackupDefPtr backup,
-                             bool terminatebackup)
+                             bool terminatebackup,
+                             int asyncJob)
 {
     qemuDomainObjPrivatePtr priv = vm->privateData;
     size_t i;
@@ -703,7 +705,8 @@ qemuBackupJobCancelBlockjobs(virDomainObjPtr vm,
         if (backupdisk->state != VIR_DOMAIN_BACKUP_DISK_STATE_RUNNING)
             continue;

-        qemuDomainObjEnterMonitor(priv->driver, vm);
+        if (qemuDomainObjEnterMonitorAsync(priv->driver, vm, asyncJob) < 0)
+            return;

         rc = qemuMonitorJobCancel(priv->mon, job->name, false);
@@ -869,7 +872,7 @@ qemuBackupBegin(virDomainObjPtr vm,
             goto endjob;

         if (rc < 0) {
-            qemuBackupJobCancelBlockjobs(vm, priv->backup, false);
+            qemuBackupJobCancelBlockjobs(vm, priv->backup, false, QEMU_ASYNC_JOB_BACKUP);
             goto endjob;
         }
     }
@@ -920,7 +923,8 @@ qemuBackupNotifyBlockjobEnd(virDomainObjPtr vm,
                             virDomainDiskDefPtr disk,
                             qemuBlockjobState state,
                             unsigned long long cur,
-                            unsigned long long end)
+                            unsigned long long end,
+                            int asyncJob)
 {
     qemuDomainObjPrivatePtr priv = vm->privateData;
     bool has_running = false;
@@ -938,7 +942,8 @@ qemuBackupNotifyBlockjobEnd(virDomainObjPtr vm,
         return;

     if (backup->type == VIR_DOMAIN_BACKUP_TYPE_PULL) {
-        qemuDomainObjEnterMonitor(priv->driver, vm);
+        if (qemuDomainObjEnterMonitorAsync(priv->driver, vm, asyncJob) < 0)
+            return;
         ignore_value(qemuMonitorNBDServerStop(priv->mon));
         if (qemuDomainObjExitMonitor(priv->driver, vm) < 0)
             return;
@@ -1011,7 +1016,7 @@ qemuBackupNotifyBlockjobEnd(virDomainObjPtr vm,

     if (has_running && (has_failed || has_cancelled)) {
         /* cancel the rest of the jobs */
-        qemuBackupJobCancelBlockjobs(vm, backup, false);
+        qemuBackupJobCancelBlockjobs(vm, backup, false, asyncJob);
     } else if (!has_running && !has_cancelling) {
         /* all sub-jobs have stopped */

diff --git a/src/qemu/qemu_backup.h b/src/qemu/qemu_backup.h
index df67b849be..1b8a03612c 100644
--- a/src/qemu/qemu_backup.h
+++ b/src/qemu/qemu_backup.h
@@ -31,14 +31,17 @@ qemuBackupGetXMLDesc(virDomainObjPtr vm,
 void
 qemuBackupJobCancelBlockjobs(virDomainObjPtr vm,
                              virDomainBackupDefPtr backup,
-                             bool terminatebackup);
+                             bool terminatebackup,
+                             int asyncJob);

 void
 qemuBackupNotifyBlockjobEnd(virDomainObjPtr vm,
                             virDomainDiskDefPtr disk,
                             qemuBlockjobState state,
                             unsigned long long cur,
-                            unsigned long long end);
+                            unsigned long long end,
+                            int asyncJob);
+

 int
 qemuBackupGetJobInfoStats(virQEMUDriverPtr driver,

diff --git a/src/qemu/qemu_blockjob.c b/src/qemu/qemu_blockjob.c
index c536c19bb6..e04fcf69d1 100644
--- a/src/qemu/qemu_blockjob.c
+++ b/src/qemu/qemu_blockjob.c
@@ -1336,7 +1336,8 @@ qemuBlockJobProcessEventConcludedBackup(virQEMUDriverPtr driver,
     g_autoptr(qemuBlockStorageSourceAttachData) backend = NULL;
     g_autoptr(virJSONValue) actions = NULL;

-    qemuBackupNotifyBlockjobEnd(vm, job->disk, newstate, progressCurrent, progressTotal);
+    qemuBackupNotifyBlockjobEnd(vm, job->disk, newstate,
+                                progressCurrent, progressTotal, asyncJob);

     if (job->data.backup.store &&
         !(backend = qemuBlockStorageSourceDetachPrepare(job->data.backup.store, NULL)))

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index ec8faf384c..e493755fb8 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -14124,7 +14124,7 @@ static int qemuDomainAbortJob(virDomainPtr dom)
         break;

     case QEMU_ASYNC_JOB_BACKUP:
-        qemuBackupJobCancelBlockjobs(vm, priv->backup, true);
+        qemuBackupJobCancelBlockjobs(vm, priv->backup, true, QEMU_ASYNC_JOB_NONE);
         ret = 0;
         break;
--
2.23.0

On Fri, Dec 20, 2019 at 02:25:24PM +0100, Peter Krempa wrote:
When cancelling the blockjobs as part of failed backup job startup recover we didn't pass in the correct async job type. Luckily the block job handler and cancellation code paths use no block job at all currently so those were correct.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
---
 src/qemu/qemu_backup.c   | 17 +++++++++++------
 src/qemu/qemu_backup.h   |  7 +++++--
 src/qemu/qemu_blockjob.c |  3 ++-
 src/qemu/qemu_driver.c   |  2 +-
 4 files changed, 19 insertions(+), 10 deletions(-)
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>

Commit d75f865fb989b3e6330c78c28e1c3bf7fa28e6a5 caused a job deadlock when
a VM running the backup job is destroyed, as it removed the cleanup of the
async job type and there was nothing left to clean up the backup job.

Add an explicit cleanup of the backup job when destroying a VM.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
---
 src/qemu/qemu_backup.c  | 2 +-
 src/qemu/qemu_backup.h  | 3 +++
 src/qemu/qemu_process.c | 5 +++++
 3 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/src/qemu/qemu_backup.c b/src/qemu/qemu_backup.c
index 1381505d9f..3bac6b353c 100644
--- a/src/qemu/qemu_backup.c
+++ b/src/qemu/qemu_backup.c
@@ -604,7 +604,7 @@ qemuBackupBeginCollectIncrementalCheckpoints(virDomainObjPtr vm,
 }

-static void
+void
 qemuBackupJobTerminate(virDomainObjPtr vm,
                        qemuDomainJobStatus jobstatus)

diff --git a/src/qemu/qemu_backup.h b/src/qemu/qemu_backup.h
index 1b8a03612c..3321ba0b6f 100644
--- a/src/qemu/qemu_backup.h
+++ b/src/qemu/qemu_backup.h
@@ -42,6 +42,9 @@ qemuBackupNotifyBlockjobEnd(virDomainObjPtr vm,
                             unsigned long long end,
                             int asyncJob);

+void
+qemuBackupJobTerminate(virDomainObjPtr vm,
+                       qemuDomainJobStatus jobstatus);

 int
 qemuBackupGetJobInfoStats(virQEMUDriverPtr driver,

diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index e6c6c0bee2..3b036ac4bc 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -57,6 +57,7 @@
 #include "qemu_security.h"
 #include "qemu_extdevice.h"
 #include "qemu_firmware.h"
+#include "qemu_backup.h"

 #include "cpu/cpu.h"
 #include "cpu/cpu_x86.h"
@@ -7609,6 +7610,10 @@ void qemuProcessStop(virQEMUDriverPtr driver,
             virResctrlAllocRemove(vm->def->resctrls[i]->alloc);
     }

+    /* clean up a possible backup job */
+    if (priv->backup)
+        qemuBackupJobTerminate(vm, QEMU_DOMAIN_JOB_STATUS_CANCELED);
+
     qemuProcessRemoveDomainStatus(driver, vm);

     /* Remove VNC and Spice ports from port reservation bitmap, but only if
--
2.23.0

On Fri, Dec 20, 2019 at 02:25:25PM +0100, Peter Krempa wrote:
Commit d75f865fb989b3e6330c78c28e1c3bf7fa28e6a5 caused a job-deadlock if a VM is running the backup job and being destroyed as it removed the cleanup of the async job type and there was nothing to clean up the backup job.
Add an explicit cleanup of the backup job when destroying a VM.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
---
 src/qemu/qemu_backup.c  | 2 +-
 src/qemu/qemu_backup.h  | 3 +++
 src/qemu/qemu_process.c | 5 +++++
 3 files changed, 9 insertions(+), 1 deletion(-)
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>

One of the first versions thought of using the disk path as the second
option, but this was dropped as being a legacy interface. Remove the
leftover pointless <choice> wrapper for the disk name as there's just one
option now.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
---
 docs/schemas/domainbackup.rng | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/docs/schemas/domainbackup.rng b/docs/schemas/domainbackup.rng
index 7286acb18c..c1e4d80302 100644
--- a/docs/schemas/domainbackup.rng
+++ b/docs/schemas/domainbackup.rng
@@ -104,9 +104,7 @@
       <oneOrMore>
         <element name='disk'>
           <attribute name='name'>
-            <choice>
-              <ref name='diskTarget'/>
-            </choice>
+            <ref name='diskTarget'/>
           </attribute>
           <choice>
             <group>
@@ -165,9 +163,7 @@
       <oneOrMore>
         <element name='disk'>
           <attribute name='name'>
-            <choice>
-              <ref name='diskTarget'/>
-            </choice>
+            <ref name='diskTarget'/>
           </attribute>
           <choice>
             <group>
--
2.23.0
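After this change the schema accepts exactly one form for the name attribute; a disk element that would validate against the simplified grammar looks like the following sketch (values illustrative, matching the test files in this series):

```xml
<!-- 'name' must now be a disk target such as vda; a disk path like
     /dev/vda is no longer a schema alternative -->
<disk name='vda' backup='yes' type='file'>
  <scratch file='/path/to/scratch.qcow2'/>
</disk>
```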

On Fri, Dec 20, 2019 at 02:25:26PM +0100, Peter Krempa wrote:
One of the first versions thought of using disk path as the second option but this was dropped as being a legacy interface. Remove the leftover pointless <choice> wrapper for the disk name as there's just one option now.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
---
 docs/schemas/domainbackup.rng | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>

If users wish to use a different name for exported disks or bitmaps, the
new fields allow them to do so. Additionally they also document the
current settings.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
---
 docs/formatbackup.html.in                             |  9 +++++++++
 docs/schemas/domainbackup.rng                         |  8 ++++++++
 src/conf/backup_conf.c                                | 11 +++++++++++
 src/conf/backup_conf.h                                |  2 ++
 tests/domainbackupxml2xmlin/backup-pull-seclabel.xml  |  2 +-
 tests/domainbackupxml2xmlout/backup-pull-seclabel.xml |  2 +-
 6 files changed, 32 insertions(+), 2 deletions(-)

diff --git a/docs/formatbackup.html.in b/docs/formatbackup.html.in
index 1c690901c7..7d2c6f1599 100644
--- a/docs/formatbackup.html.in
+++ b/docs/formatbackup.html.in
@@ -85,6 +85,15 @@
             <dd>Setting this attribute to <code>yes</code>(default) specifies
               that the disk should take part in the backup and using
               <code>no</code> excludes the disk from the backup.</dd>
+            <dt><code>exportname</code></dt>
+            <dd>Allows to modify the NBD export name for the given disk.
+              By default equal to disk target.
+              Valid only for pull mode backups. </dd>
+            <dt><code>exportbitmap</code></dt>
+            <dd>Allows to modify the name of the bitmap describing dirty
+              blocks for an incremental backup exported via NBD export name
+              for the given disk.
+              Valid only for pull mode backups. </dd>
             <dt><code>type</code></dt>
             <dd>A mandatory attribute to describe the type of the disk,
               except when <code>backup='no'</code> is

diff --git a/docs/schemas/domainbackup.rng b/docs/schemas/domainbackup.rng
index c1e4d80302..395ea841f9 100644
--- a/docs/schemas/domainbackup.rng
+++ b/docs/schemas/domainbackup.rng
@@ -165,6 +165,14 @@
           <attribute name='name'>
             <ref name='diskTarget'/>
           </attribute>
+          <optional>
+            <attribute name='exportname'>
+              <text/>
+            </attribute>
+            <attribute name='exportbitmap'>
+              <text/>
+            </attribute>
+          </optional>
           <choice>
             <group>
               <attribute name='backup'>

diff --git a/src/conf/backup_conf.c b/src/conf/backup_conf.c
index aa11967d2a..a4b87baa55 100644
--- a/src/conf/backup_conf.c
+++ b/src/conf/backup_conf.c
@@ -71,6 +71,8 @@ virDomainBackupDefFree(virDomainBackupDefPtr def)
         virDomainBackupDiskDefPtr disk = def->disks + i;

         g_free(disk->name);
+        g_free(disk->exportname);
+        g_free(disk->exportbitmap);
         virObjectUnref(disk->store);
     }
@@ -124,6 +126,11 @@ virDomainBackupDiskDefParseXML(xmlNodePtr node,
     if (def->backup == VIR_TRISTATE_BOOL_NO)
         return 0;

+    if (!push) {
+        def->exportname = virXMLPropString(node, "exportname");
+        def->exportbitmap = virXMLPropString(node, "exportbitmap");
+    }
+
     if (internal) {
         if (!(state = virXMLPropString(node, "state")) ||
             (tmp = virDomainBackupDiskStateTypeFromString(state)) < 0) {
@@ -165,6 +172,7 @@ virDomainBackupDiskDefParseXML(xmlNodePtr node,
                                      storageSourceParseFlags, xmlopt) < 0)
         return -1;

+
     if ((driver = virXPathString("string(./driver/@type)", ctxt))) {
         def->store->format = virStorageFileFormatTypeFromString(driver);
         if (def->store->format <= 0) {
@@ -333,6 +341,9 @@ virDomainBackupDiskDefFormat(virBufferPtr buf,
     if (disk->backup == VIR_TRISTATE_BOOL_YES) {
         virBufferAsprintf(&attrBuf, " type='%s'", virStorageTypeToString(disk->store->type));

+        virBufferEscapeString(&attrBuf, " exportname='%s'", disk->exportname);
+        virBufferEscapeString(&attrBuf, " exportbitmap='%s'", disk->exportbitmap);
+
         if (disk->store->format > 0)
             virBufferEscapeString(&childBuf, "<driver type='%s'/>\n",
                                   virStorageFileFormatTypeToString(disk->store->format));

diff --git a/src/conf/backup_conf.h b/src/conf/backup_conf.h
index 7cf44245d4..672fd52ee7 100644
--- a/src/conf/backup_conf.h
+++ b/src/conf/backup_conf.h
@@ -51,6 +51,8 @@ typedef virDomainBackupDiskDef *virDomainBackupDiskDefPtr;
 struct _virDomainBackupDiskDef {
     char *name; /* name matching the <target dev='...' of the domain */
     virTristateBool backup; /* whether backup is requested */
+    char *exportname; /* name of the NBD export for pull mode backup */
+    char *exportbitmap; /* name of the bitmap exposed in NBD for pull mode backup */

     /* details of target for push-mode, or of the scratch file for pull-mode */
     virStorageSourcePtr store;

diff --git a/tests/domainbackupxml2xmlin/backup-pull-seclabel.xml b/tests/domainbackupxml2xmlin/backup-pull-seclabel.xml
index a00d8758bb..4e6e602c19 100644
--- a/tests/domainbackupxml2xmlin/backup-pull-seclabel.xml
+++ b/tests/domainbackupxml2xmlin/backup-pull-seclabel.xml
@@ -2,7 +2,7 @@
   <incremental>1525889631</incremental>
   <server transport='tcp' name='localhost' port='10809'/>
   <disks>
-    <disk name='vda' type='file'>
+    <disk name='vda' type='file' exportname='test-vda' exportbitmap='blah'>
       <driver type='qcow2'/>
       <scratch file='/path/to/file'>
         <seclabel model='dac' relabel='no'/>

diff --git a/tests/domainbackupxml2xmlout/backup-pull-seclabel.xml b/tests/domainbackupxml2xmlout/backup-pull-seclabel.xml
index c631c9b979..450f007d3a 100644
--- a/tests/domainbackupxml2xmlout/backup-pull-seclabel.xml
+++ b/tests/domainbackupxml2xmlout/backup-pull-seclabel.xml
@@ -2,7 +2,7 @@
   <incremental>1525889631</incremental>
   <server transport='tcp' name='localhost' port='10809'/>
   <disks>
-    <disk name='vda' backup='yes' type='file'>
+    <disk name='vda' backup='yes' type='file' exportname='test-vda' exportbitmap='blah'>
       <driver type='qcow2'/>
       <scratch file='/path/to/file'>
         <seclabel model='dac' relabel='no'/>
--
2.23.0
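Putting the new attributes from the test files above into the context of a full pull-mode backup definition, a client-visible sketch could look like this (server address, scratch path, and checkpoint value are illustrative):

```xml
<domainbackup mode='pull'>
  <incremental>1525889631</incremental>
  <server transport='tcp' name='localhost' port='10809'/>
  <disks>
    <!-- without exportname the NBD export would be named after the
         disk target, i.e. 'vda'; here the client must request the
         export 'test-vda' and the dirty-block bitmap 'blah' -->
    <disk name='vda' type='file' exportname='test-vda' exportbitmap='blah'>
      <driver type='qcow2'/>
      <scratch file='/path/to/file'/>
    </disk>
  </disks>
</domainbackup>
```

An NBD client would then address the export by the configured name (for qemu tools, a URI of the form nbd://localhost:10809/test-vda) rather than the default disk-target name.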

On Fri, Dec 20, 2019 at 02:25:27PM +0100, Peter Krempa wrote:
If users wish to use different name for exported disks or bitmaps the new fields allow to do so. Additionally they also document the current settings.
Is there a reason why users would want to change the names?

On 12/20/19 8:00 AM, Daniel P. Berrangé wrote:
On Fri, Dec 20, 2019 at 02:25:27PM +0100, Peter Krempa wrote:
If users wish to use different name for exported disks or bitmaps the new fields allow to do so. Additionally they also document the current settings.
Is there a reason why users would want to change the names ?
Right now, with only a single job possible at a time, libvirt picks a predictable name based on the guest disk being exported. But in the future, when we permit multiple jobs in parallel, it may become important to know which name is related to which job, or even to pick names based on what the client expects. I'm not sure we have an essential use case yet, but I'm also not rejecting this. -- Eric Blake, Principal Software Engineer Red Hat, Inc. +1-919-301-3226 Virtualization: qemu.org | libvirt.org

On Fri, Dec 20, 2019 at 02:49:07PM -0600, Eric Blake wrote:
On 12/20/19 8:00 AM, Daniel P. Berrangé wrote:
On Fri, Dec 20, 2019 at 02:25:27PM +0100, Peter Krempa wrote:
If users wish to use different name for exported disks or bitmaps the new fields allow to do so. Additionally they also document the current settings.
Is there a reason why users would want to change the names ?
Right now, with only a single job possible at a time, libvirt picks a predictable name based on the guest disk being exported. But in the future, when we permit multiple jobs in parallel, it may become important to know which name is related to which job, or even to pick names based on what the client expects. I'm not sure we have an essential use case yet, but I'm also not rejecting this.
If we have multiple parallel jobs, would we not also have a separate NBD server listening for each job ? Regards, Daniel

On Fri, Dec 20, 2019 at 4:01 PM Daniel P. Berrangé <berrange@redhat.com> wrote:
On Fri, Dec 20, 2019 at 02:25:27PM +0100, Peter Krempa wrote:
If users wish to use different name for exported disks or bitmaps the new fields allow to do so. Additionally they also document the current settings.
Is there a reason why users would want to change the names ?
I don't see a reason for changing the exportname, but the default dirty bitmap name exposes libvirt internal implementation details (e.g. "libvirt-1-format"). We would choose to use something like <backup-uuid>-<drive-name>. Displaying the names in the xml is important now; being able to modify them is nice to have for oVirt. Nir
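Nir's suggested naming scheme can be sketched as a tiny client-side helper. This is hypothetical oVirt-style code, not part of libvirt, and the function name is made up for illustration:

```python
import uuid

def backup_bitmap_name(backup_uuid, drive_name):
    # Hypothetical helper: combine the backup job's UUID with the drive
    # name so the bitmap name cannot clash with another backup's bitmap,
    # e.g. '7ce9b65d-b832-4ff2-b100-5170c9d02e2a-sda'.
    return "%s-%s" % (backup_uuid, drive_name)

# A fresh UUID per backup keeps names unique per (backup, disk) pair.
name = backup_bitmap_name(uuid.uuid4(), "sda")
```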

On 12/20/19 7:25 AM, Peter Krempa wrote:
If users wish to use different name for exported disks or bitmaps
different names
the new fields allow to do so. Additionally they also document the
allow this possibility.
current settings.
Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- docs/formatbackup.html.in | 9 +++++++++ docs/schemas/domainbackup.rng | 8 ++++++++ src/conf/backup_conf.c | 11 +++++++++++ src/conf/backup_conf.h | 2 ++ tests/domainbackupxml2xmlin/backup-pull-seclabel.xml | 2 +- tests/domainbackupxml2xmlout/backup-pull-seclabel.xml | 2 +- 6 files changed, 32 insertions(+), 2 deletions(-)
diff --git a/docs/formatbackup.html.in b/docs/formatbackup.html.in index 1c690901c7..7d2c6f1599 100644 --- a/docs/formatbackup.html.in +++ b/docs/formatbackup.html.in @@ -85,6 +85,15 @@ <dd>Setting this attribute to <code>yes</code>(default) specifies that the disk should take part in the backup and using <code>no</code> excludes the disk from the backup.</dd> + <dt><code>exportname</code></dt> + <dd>Allows to modify the NBD export name for the given disk.
Allows modification of the NBD export name...
+ By default equal to disk target. + Valid only for pull mode backups. </dd>
Why a space after '.'?
+ <dt><code>exportbitmap</code></dt> + <dd>Allows to modify the name of the bitmap describing dirty
Allows modification of the name of the bitmap
+ blocks for an incremental backup exported via NBD export name + for the given disk. + Valid only for pull mode backups. </dd>
and another space
<dt><code>type</code></dt> <dd>A mandatory attribute to describe the type of the disk, except when <code>backup='no'</code> is diff --git a/docs/schemas/domainbackup.rng b/docs/schemas/domainbackup.rng index c1e4d80302..395ea841f9 100644 --- a/docs/schemas/domainbackup.rng +++ b/docs/schemas/domainbackup.rng @@ -165,6 +165,14 @@ <attribute name='name'> <ref name='diskTarget'/> </attribute> + <optional> + <attribute name='exportname'> + <text/> + </attribute> + <attribute name='exportbitmap'> + <text/> + </attribute> + </optional>
Do we need any limits (for example, qcow2 rejects bitmap names longer than 1k)? -- Eric Blake
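A client could pre-check proposed names before submitting the backup XML. The sketch below assumes the roughly 1k qcow2 bitmap-name limit Eric mentions; the exact constant and helper are assumptions for illustration, not anything libvirt enforces today:

```python
QCOW2_MAX_BITMAP_NAME = 1023  # assumed upper bound, per the "longer than 1k" remark

def validate_bitmap_name(name):
    # Reject names the qcow2 driver would presumably refuse to persist.
    if not name:
        raise ValueError("bitmap name must be non-empty")
    if len(name.encode("utf-8")) > QCOW2_MAX_BITMAP_NAME:
        raise ValueError("bitmap name exceeds %d bytes" % QCOW2_MAX_BITMAP_NAME)
    return name
```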

On Fri, Dec 20, 2019 at 3:27 PM Peter Krempa <pkrempa@redhat.com> wrote:
If users wish to use different name for exported disks or bitmaps the new fields allow to do so. Additionally they also document the current settings.
Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- docs/formatbackup.html.in | 9 +++++++++ docs/schemas/domainbackup.rng | 8 ++++++++ src/conf/backup_conf.c | 11 +++++++++++ src/conf/backup_conf.h | 2 ++ tests/domainbackupxml2xmlin/backup-pull-seclabel.xml | 2 +- tests/domainbackupxml2xmlout/backup-pull-seclabel.xml | 2 +- 6 files changed, 32 insertions(+), 2 deletions(-)
diff --git a/docs/formatbackup.html.in b/docs/formatbackup.html.in index 1c690901c7..7d2c6f1599 100644 --- a/docs/formatbackup.html.in +++ b/docs/formatbackup.html.in @@ -85,6 +85,15 @@ <dd>Setting this attribute to <code>yes</code>(default) specifies that the disk should take part in the backup and using <code>no</code> excludes the disk from the backup.</dd> + <dt><code>exportname</code></dt> + <dd>Allows to modify the NBD export name for the given disk. + By default equal to disk target. + Valid only for pull mode backups. </dd>
Makes sense. We assumed that the name is the same as the drive name (e.g. "sda"), but being able to set this name or get it from the backup xml returned by backupGetXMLDesc() is more robust. Currently oVirt generates the NBD URL after starting a backup, assuming the drive name. I think with this we would call backupGetXMLDesc(), parse it, and extract the exportname.
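The flow Nir describes — fetch the backup XML and read the export name rather than assume it — could look like this sketch. The snippet mimics the <domainbackup> document from this series' test files; element and attribute names follow the patches, while the helper itself is hypothetical:

```python
import xml.etree.ElementTree as ET

BACKUP_XML = """
<domainbackup mode='pull'>
  <server transport='tcp' name='localhost' port='10809'/>
  <disks>
    <disk name='vda' backup='yes' type='file' exportname='test-vda' exportbitmap='blah'/>
  </disks>
</domainbackup>
"""

def export_name(backup_xml, disk):
    # Find the <disk> entry for the given target and return its NBD
    # export name, falling back to the disk target (the default the
    # series documents) when the attribute is absent.
    root = ET.fromstring(backup_xml)
    for d in root.iter("disk"):
        if d.get("name") == disk:
            return d.get("exportname", disk)
    raise KeyError(disk)
```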
+ <dt><code>exportbitmap</code></dt> + <dd>Allows to modify the name of the bitmap describing dirty + blocks for an incremental backup exported via NBD export name + for the given disk. + Valid only for pull mode backups. </dd>
Currently we discover this name using NBD_OPT_LIST_META_CONTEXT but being able to set or get it from the xml is better. I think the text should mention that this name must be unique, so you cannot use the same name in case you backup multiple disks. We would probably use the drive name as well for this, so it will be available as "qemu:dirty-bitmap:sda". Or, to ensure it can never clash with another bitmap, <backup-uuid>-<drive-name> (7ce9b65d-b832-4ff2-b100-5170c9d02e2a-sda). It would be nice to show these parameters in the examples at the bottom of the page, in particular in an example output of backupGetXMLDesc(), where this value will always appear. Are there any limits on the content and size of these strings?
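Combined, the two attributes give a pull-mode client everything it needs to build the NBD endpoint. A sketch (qemu:dirty-bitmap: is QEMU's NBD metadata-context namespace; the helper name is made up for illustration):

```python
def nbd_endpoints(host, port, exportname, exportbitmap):
    # URL of the export plus the metadata context a pull-mode client
    # would request (e.g. via NBD_OPT_SET_META_CONTEXT) to read dirty
    # blocks for an incremental backup.
    url = "nbd://%s:%d/%s" % (host, port, exportname)
    context = "qemu:dirty-bitmap:%s" % exportbitmap
    return url, context

url, ctx = nbd_endpoints("localhost", 10809, "test-vda", "blah")
# url == "nbd://localhost:10809/test-vda", ctx == "qemu:dirty-bitmap:blah"
```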
<dt><code>type</code></dt> <dd>A mandatory attribute to describe the type of the disk, except when <code>backup='no'</code> is diff --git a/docs/schemas/domainbackup.rng b/docs/schemas/domainbackup.rng index c1e4d80302..395ea841f9 100644 --- a/docs/schemas/domainbackup.rng +++ b/docs/schemas/domainbackup.rng @@ -165,6 +165,14 @@ <attribute name='name'> <ref name='diskTarget'/> </attribute> + <optional> + <attribute name='exportname'> + <text/> + </attribute> + <attribute name='exportbitmap'> + <text/> + </attribute> + </optional> <choice> <group> <attribute name='backup'> diff --git a/src/conf/backup_conf.c b/src/conf/backup_conf.c index aa11967d2a..a4b87baa55 100644 --- a/src/conf/backup_conf.c +++ b/src/conf/backup_conf.c @@ -71,6 +71,8 @@ virDomainBackupDefFree(virDomainBackupDefPtr def) virDomainBackupDiskDefPtr disk = def->disks + i;
g_free(disk->name); + g_free(disk->exportname); + g_free(disk->exportbitmap); virObjectUnref(disk->store); }
@@ -124,6 +126,11 @@ virDomainBackupDiskDefParseXML(xmlNodePtr node, if (def->backup == VIR_TRISTATE_BOOL_NO) return 0;
+ if (!push) { + def->exportname = virXMLPropString(node, "exportname"); + def->exportbitmap = virXMLPropString(node, "exportbitmap"); + } + if (internal) { if (!(state = virXMLPropString(node, "state")) || (tmp = virDomainBackupDiskStateTypeFromString(state)) < 0) { @@ -165,6 +172,7 @@ virDomainBackupDiskDefParseXML(xmlNodePtr node, storageSourceParseFlags, xmlopt) < 0) return -1;
+ if ((driver = virXPathString("string(./driver/@type)", ctxt))) { def->store->format = virStorageFileFormatTypeFromString(driver); if (def->store->format <= 0) { @@ -333,6 +341,9 @@ virDomainBackupDiskDefFormat(virBufferPtr buf, if (disk->backup == VIR_TRISTATE_BOOL_YES) { virBufferAsprintf(&attrBuf, " type='%s'", virStorageTypeToString(disk->store->type));
+ virBufferEscapeString(&attrBuf, " exportname='%s'", disk->exportname); + virBufferEscapeString(&attrBuf, " exportbitmap='%s'", disk->exportbitmap); + if (disk->store->format > 0) virBufferEscapeString(&childBuf, "<driver type='%s'/>\n", virStorageFileFormatTypeToString(disk->store->format)); diff --git a/src/conf/backup_conf.h b/src/conf/backup_conf.h index 7cf44245d4..672fd52ee7 100644 --- a/src/conf/backup_conf.h +++ b/src/conf/backup_conf.h @@ -51,6 +51,8 @@ typedef virDomainBackupDiskDef *virDomainBackupDiskDefPtr; struct _virDomainBackupDiskDef { char *name; /* name matching the <target dev='...' of the domain */ virTristateBool backup; /* whether backup is requested */ + char *exportname; /* name of the NBD export for pull mode backup */ + char *exportbitmap; /* name of the bitmap exposed in NBD for pull mode backup */
/* details of target for push-mode, or of the scratch file for pull-mode */ virStorageSourcePtr store; diff --git a/tests/domainbackupxml2xmlin/backup-pull-seclabel.xml b/tests/domainbackupxml2xmlin/backup-pull-seclabel.xml index a00d8758bb..4e6e602c19 100644 --- a/tests/domainbackupxml2xmlin/backup-pull-seclabel.xml +++ b/tests/domainbackupxml2xmlin/backup-pull-seclabel.xml @@ -2,7 +2,7 @@ <incremental>1525889631</incremental> <server transport='tcp' name='localhost' port='10809'/> <disks> - <disk name='vda' type='file'> + <disk name='vda' type='file' exportname='test-vda' exportbitmap='blah'> <driver type='qcow2'/> <scratch file='/path/to/file'> <seclabel model='dac' relabel='no'/> diff --git a/tests/domainbackupxml2xmlout/backup-pull-seclabel.xml b/tests/domainbackupxml2xmlout/backup-pull-seclabel.xml index c631c9b979..450f007d3a 100644 --- a/tests/domainbackupxml2xmlout/backup-pull-seclabel.xml +++ b/tests/domainbackupxml2xmlout/backup-pull-seclabel.xml @@ -2,7 +2,7 @@ <incremental>1525889631</incremental> <server transport='tcp' name='localhost' port='10809'/> <disks> - <disk name='vda' backup='yes' type='file'> + <disk name='vda' backup='yes' type='file' exportname='test-vda' exportbitmap='blah'> <driver type='qcow2'/> <scratch file='/path/to/file'> <seclabel model='dac' relabel='no'/> -- 2.23.0
-- libvir-list mailing list libvir-list@redhat.com https://www.redhat.com/mailman/listinfo/libvir-list
Nir

Pass the exportname as configured when exporting the image via NBD and fill it with the default if it's not configured. Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- src/qemu/qemu_backup.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/src/qemu/qemu_backup.c b/src/qemu/qemu_backup.c index 3bac6b353c..54e726ca4a 100644 --- a/src/qemu/qemu_backup.c +++ b/src/qemu/qemu_backup.c @@ -548,9 +548,12 @@ qemuBackupBeginPullExportDisks(virDomainObjPtr vm, for (i = 0; i < ndisks; i++) { struct qemuBackupDiskData *dd = disks + i; + if (!dd->backupdisk->exportname) + dd->backupdisk->exportname = g_strdup(dd->domdisk->dst); + if (qemuMonitorNBDServerAdd(priv->mon, dd->store->nodeformat, - dd->domdisk->dst, + dd->backupdisk->exportname, false, dd->incrementalBitmap) < 0) return -1; -- 2.23.0

On Fri, Dec 20, 2019 at 3:27 PM Peter Krempa <pkrempa@redhat.com> wrote:
Pass the exportname as configured when exporting the image via NBD and fill it with the default if it's not configured.
Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- src/qemu/qemu_backup.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/src/qemu/qemu_backup.c b/src/qemu/qemu_backup.c index 3bac6b353c..54e726ca4a 100644 --- a/src/qemu/qemu_backup.c +++ b/src/qemu/qemu_backup.c @@ -548,9 +548,12 @@ qemuBackupBeginPullExportDisks(virDomainObjPtr vm, for (i = 0; i < ndisks; i++) { struct qemuBackupDiskData *dd = disks + i;
+ if (!dd->backupdisk->exportname) + dd->backupdisk->exportname = g_strdup(dd->domdisk->dst); + if (qemuMonitorNBDServerAdd(priv->mon, dd->store->nodeformat, - dd->domdisk->dst, + dd->backupdisk->exportname, false, dd->incrementalBitmap) < 0)
It is a little strange that we set dd->backupdisk->exportname here, just before we call qemu. I think it would be cleaner if we configured dd->backupdisk->exportname after parsing the xml, or never, sending the value from dd->domdisk->dst. Otherwise looks fine.
return -1; -- 2.23.0

Use the user-configured name of the bitmap when merging the appropriate bitmaps for an excremental backup so that the user can see it as configured. Additionally expose the default bitmap name if nothing is configured. Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- src/qemu/qemu_backup.c | 9 ++++++++- 1 file changed, 8 insertions(+), 1 deletion(-) diff --git a/src/qemu/qemu_backup.c b/src/qemu/qemu_backup.c index 54e726ca4a..795026d24b 100644 --- a/src/qemu/qemu_backup.c +++ b/src/qemu/qemu_backup.c @@ -322,7 +322,10 @@ qemuBackupDiskPrepareDataOne(virDomainObjPtr vm, return -1; if (incremental) { - dd->incrementalBitmap = g_strdup_printf("backup-%s", dd->domdisk->dst); + if (dd->backupdisk->exportbitmap) + dd->incrementalBitmap = g_strdup(dd->backupdisk->exportbitmap); + else + dd->incrementalBitmap = g_strdup_printf("backup-%s", dd->domdisk->dst); if (qemuBackupDiskPrepareOneBitmaps(dd, actions, incremental, blockNamedNodeData) < 0) @@ -368,6 +371,10 @@ static int qemuBackupDiskPrepareDataOnePull(virJSONValuePtr actions, struct qemuBackupDiskData *dd) { + if (!dd->backupdisk->exportbitmap && + dd->incrementalBitmap) + dd->backupdisk->exportbitmap = g_strdup(dd->incrementalBitmap); + if (qemuMonitorTransactionBackup(actions, dd->domdisk->src->nodeformat, dd->blockjob->name, -- 2.23.0

On 12/20/19 7:25 AM, Peter Krempa wrote:
Use the user-configured name of the bitmap when merging the appropriate bitmaps for an excremental backup so that the user can see it as
Eww. I know you meant incremental, but your choice of wording left quite a smelly impression 💩
configured. Additionally expose the default bitmap name if nothing is configured.
Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- src/qemu/qemu_backup.c | 9 ++++++++- 1 file changed, 8 insertions(+), 1 deletion(-)
-- Eric Blake

On Fri, Dec 20, 2019 at 3:27 PM Peter Krempa <pkrempa@redhat.com> wrote:
Use the user-configured name of the bitmap when merging the appropriate bitmaps for an excremental backup so that the user can see it as configured. Additionally expose the default bitmap name if nothing is configured.
Signed-off-by: Peter Krempa <pkrempa@redhat.com> --- src/qemu/qemu_backup.c | 9 ++++++++- 1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/src/qemu/qemu_backup.c b/src/qemu/qemu_backup.c index 54e726ca4a..795026d24b 100644 --- a/src/qemu/qemu_backup.c +++ b/src/qemu/qemu_backup.c @@ -322,7 +322,10 @@ qemuBackupDiskPrepareDataOne(virDomainObjPtr vm, return -1;
if (incremental) { - dd->incrementalBitmap = g_strdup_printf("backup-%s", dd->domdisk->dst); + if (dd->backupdisk->exportbitmap) + dd->incrementalBitmap = g_strdup(dd->backupdisk->exportbitmap); + else + dd->incrementalBitmap = g_strdup_printf("backup-%s", dd->domdisk->dst);
if (qemuBackupDiskPrepareOneBitmaps(dd, actions, incremental, blockNamedNodeData) < 0) @@ -368,6 +371,10 @@ static int qemuBackupDiskPrepareDataOnePull(virJSONValuePtr actions, struct qemuBackupDiskData *dd) { + if (!dd->backupdisk->exportbitmap && + dd->incrementalBitmap) + dd->backupdisk->exportbitmap = g_strdup(dd->incrementalBitmap);
I wonder why we have 2 copies of the same value.
+ if (qemuMonitorTransactionBackup(actions, dd->domdisk->src->nodeformat, dd->blockjob->name, -- 2.23.0
participants (4): Daniel P. Berrangé, Eric Blake, Nir Soffer, Peter Krempa