[PATCH 0/7] qemu: Node reactivation and 'manual' disk snapshot enhancements

Peter Krempa (7):
  qemu: monitor: Track inactive state of block nodes in 'qemuBlockNamedNodeData'
  qemu: block: Introduce helper function to ensure that block nodes are active
  qemu: Re-activate block nodes before storage operations
  qemu: migration: Don't reactivate block nodes after migration failure any more
  qemu: snapshot: Deactivate block nodes on manually snapshotted disks
  qemu: snapshot: Allow snapshot consisting only of 'manual'-y handled disks
  docs: snapshot: Add a note that blockjobs ought to be avoided with 'manual' snapshots

 docs/formatsnapshot.rst      |  7 ++-
 src/qemu/qemu_backup.c       |  3 ++
 src/qemu/qemu_block.c        | 52 +++++++++++++++++++++
 src/qemu/qemu_block.h        |  4 ++
 src/qemu/qemu_checkpoint.c   | 12 +++++
 src/qemu/qemu_driver.c       |  9 ++++
 src/qemu/qemu_migration.c    | 53 ++-------------------
 src/qemu/qemu_monitor.h      |  4 ++
 src/qemu/qemu_monitor_json.c |  5 ++
 src/qemu/qemu_snapshot.c     | 91 +++++++++++++++++++++++++++++++++++-
 10 files changed, 188 insertions(+), 52 deletions(-)

-- 
2.51.0

From: Peter Krempa <pkrempa@redhat.com>

New qemus report whether a given block node is active. We'll be using this
data to decide whether we need to reactivate the nodes prior to blockjobs.

Extract the data as 'inactive' as it's simpler to track and we care only
about inactive nodes.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
---
 src/qemu/qemu_monitor.h      | 4 ++++
 src/qemu/qemu_monitor_json.c | 5 +++++
 2 files changed, 9 insertions(+)

diff --git a/src/qemu/qemu_monitor.h b/src/qemu/qemu_monitor.h
index 8ef85ceb0a..b257c19c89 100644
--- a/src/qemu/qemu_monitor.h
+++ b/src/qemu/qemu_monitor.h
@@ -751,6 +751,10 @@ struct _qemuBlockNamedNodeData {
 
     /* qcow2 data file 'raw' feature is enabled */
     bool qcow2dataFileRaw;
+
+    /* node is deactivated in qemu (reported as 'active' but may be missing,
+     * thus the flag is asserted only when we know it's inactive) */
+    bool inactive;
 };
 
 GHashTable *
diff --git a/src/qemu/qemu_monitor_json.c b/src/qemu/qemu_monitor_json.c
index 9caade7bc9..d44f5d94ed 100644
--- a/src/qemu/qemu_monitor_json.c
+++ b/src/qemu/qemu_monitor_json.c
@@ -2714,6 +2714,7 @@ qemuMonitorJSONBlockGetNamedNodeDataWorker(size_t pos G_GNUC_UNUSED,
     virJSONValue *bitmaps;
     virJSONValue *snapshots;
     virJSONValue *format_specific;
+    bool active;
     const char *nodename;
     g_autoptr(qemuBlockNamedNodeData) ent = NULL;
 
@@ -2736,6 +2737,10 @@ qemuMonitorJSONBlockGetNamedNodeDataWorker(size_t pos G_GNUC_UNUSED,
     if ((bitmaps = virJSONValueObjectGetArray(val, "dirty-bitmaps")))
         qemuMonitorJSONBlockGetNamedNodeDataBitmaps(bitmaps, ent);
 
+    /* stored as negative as the value may be missing from some qemus */
+    if (virJSONValueObjectGetBoolean(val, "active", &active) == 0)
+        ent->inactive = !active;
+
     if ((snapshots = virJSONValueObjectGetArray(img, "snapshots"))) {
         size_t nsnapshots = virJSONValueArraySize(snapshots);
         size_t i;
-- 
2.51.0
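
For context, a trimmed, purely illustrative sketch of the 'query-named-block-nodes' reply that the parser above consumes on a qemu new enough to report the field (the node name and the other keys shown are made up for the example; only the 'active' key matters here):

  {"execute": "query-named-block-nodes"}
  {"return": [
      {"node-name": "libvirt-1-format",
       "drv": "qcow2",
       "active": false,
       "dirty-bitmaps": [],
       "image": { ... }},
      ...
  ]}

When the 'active' key is absent (older qemu) the 'inactive' flag simply stays false, which is why the state is tracked in the negative.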

From: Peter Krempa <pkrempa@redhat.com>

Upcoming changes to snapshot code will break the assumption that block
nodes are always active (if the function is able to acquire a modify job).

Introduce qemuBlockNodesEnsureActive, which checks whether the block graph
in qemu contains any inactive nodes and, if so, reactivates everything.

The function will be used on code paths such as blockjobs which require
the nodes to be active.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
---
 src/qemu/qemu_block.c | 52 +++++++++++++++++++++++++++++++++++++++++++
 src/qemu/qemu_block.h |  4 ++++
 2 files changed, 56 insertions(+)

diff --git a/src/qemu/qemu_block.c b/src/qemu/qemu_block.c
index 194f8407e3..a7062d3e96 100644
--- a/src/qemu/qemu_block.c
+++ b/src/qemu/qemu_block.c
@@ -4023,3 +4023,55 @@ qemuBlockFinalize(virDomainObj *vm,
 
     return ret;
 }
+
+
+/**
+ * qemuBlockNodesEnsureActive:
+ * @vm: domain object
+ * @asyncJob: asynchronous job ID
+ *
+ * Checks if any block nodes are inactive and reactivates them. This is necessary
+ * to do before any blockjob as the block nodes could have been deactivated
+ * either by an aborted migration (before the VM switched to running mode) or
+ * after a snapshot with 'manual' disks (which deactivates them).
+ *
+ * Block nodes need to be reactivated prior to fetching the data
+ * via 'qemuBlockGetNamedNodeData' as qemu doesn't guarantee that the data
+ * fetched while nodes are inactive is accurate.
+ */
+int
+qemuBlockNodesEnsureActive(virDomainObj *vm,
+                           virDomainAsyncJob asyncJob)
+{
+    qemuDomainObjPrivate *priv = vm->privateData;
+    GHashTableIter htitr;
+    g_autoptr(GHashTable) blockNamedNodeData = NULL;
+    qemuBlockNamedNodeData *node;
+    bool has_inactive = false;
+    int rc = 0;
+
+    if (!virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_BLOCKDEV_SET_ACTIVE))
+        return 0;
+
+    if (!(blockNamedNodeData = qemuBlockGetNamedNodeData(vm, asyncJob)))
+        return -1;
+
+    g_hash_table_iter_init(&htitr, blockNamedNodeData);
+    while (g_hash_table_iter_next(&htitr, NULL, (void *) &node)) {
+        if (node->inactive) {
+            has_inactive = true;
+            break;
+        }
+    }
+
+    if (!has_inactive)
+        return 0;
+
+    if (qemuDomainObjEnterMonitorAsync(vm, asyncJob) < 0)
+        return -1;
+
+    rc = qemuMonitorBlockdevSetActive(priv->mon, NULL, true);
+    qemuDomainObjExitMonitor(vm);
+
+    return rc;
+}
diff --git a/src/qemu/qemu_block.h b/src/qemu/qemu_block.h
index b9e950e494..ba7e9bbbda 100644
--- a/src/qemu/qemu_block.h
+++ b/src/qemu/qemu_block.h
@@ -376,3 +376,7 @@ int
 qemuBlockFinalize(virDomainObj *vm,
                   qemuBlockJobData *job,
                   virDomainAsyncJob asyncJob);
+
+int
+qemuBlockNodesEnsureActive(virDomainObj *vm,
+                           virDomainAsyncJob asyncJob);
-- 
2.51.0
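
For reference, when inactive nodes are found the helper boils down to a single monitor call: issuing 'blockdev-set-active' without a node name applies to the whole block graph, matching the qemuMonitorBlockdevSetActive(priv->mon, NULL, true) call above. An illustrative sketch of the QMP exchange (the reply is the usual empty success object):

  {"execute": "blockdev-set-active", "arguments": {"active": true}}
  {"return": {}}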

From: Peter Krempa <pkrempa@redhat.com>

Upcoming patches will modify how we treat inactive block nodes so that we
can properly deactivate nodes for the 'manual' disk snapshot mode.

Re-activate the nodes before operations requiring them. This also includes
query operations where we e.g. probe bitmaps.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
---
 src/qemu/qemu_backup.c     |  3 +++
 src/qemu/qemu_checkpoint.c | 12 ++++++++++++
 src/qemu/qemu_driver.c     |  9 +++++++++
 src/qemu/qemu_migration.c  |  3 +++
 src/qemu/qemu_snapshot.c   |  6 ++++++
 5 files changed, 33 insertions(+)

diff --git a/src/qemu/qemu_backup.c b/src/qemu/qemu_backup.c
index 1f43479b5e..3b4fe54854 100644
--- a/src/qemu/qemu_backup.c
+++ b/src/qemu/qemu_backup.c
@@ -823,6 +823,9 @@ qemuBackupBegin(virDomainObj *vm,
     if (qemuBackupBeginPrepareTLS(vm, cfg, def, &tlsProps, &tlsSecretProps) < 0)
         goto endjob;
 
+    if (qemuBlockNodesEnsureActive(vm, VIR_ASYNC_JOB_BACKUP) < 0)
+        goto endjob;
+
     actions = virJSONValueNewArray();
 
     /* The 'chk' checkpoint must be rolled back if the transaction command
diff --git a/src/qemu/qemu_checkpoint.c b/src/qemu/qemu_checkpoint.c
index af847cf1f2..193cf9a06a 100644
--- a/src/qemu/qemu_checkpoint.c
+++ b/src/qemu/qemu_checkpoint.c
@@ -189,6 +189,9 @@ qemuCheckpointDiscardBitmaps(virDomainObj *vm,
 
     actions = virJSONValueNewArray();
 
+    if (qemuBlockNodesEnsureActive(vm, VIR_ASYNC_JOB_NONE) < 0)
+        return -1;
+
     if (!(blockNamedNodeData = qemuBlockGetNamedNodeData(vm, VIR_ASYNC_JOB_NONE)))
         return -1;
 
@@ -411,6 +414,9 @@ qemuCheckpointRedefineValidateBitmaps(virDomainObj *vm,
     if (virDomainObjCheckActive(vm) < 0)
         return -1;
 
+    if (qemuBlockNodesEnsureActive(vm, VIR_ASYNC_JOB_NONE) < 0)
+        return -1;
+
     if (!(blockNamedNodeData = qemuBlockGetNamedNodeData(vm, VIR_ASYNC_JOB_NONE)))
         return -1;
 
@@ -516,6 +522,9 @@ qemuCheckpointCreate(virQEMUDriver *driver,
     if (qemuCheckpointCreateCommon(driver, vm, def, &actions, &chk) < 0)
         return NULL;
 
+    if (qemuBlockNodesEnsureActive(vm, VIR_ASYNC_JOB_NONE) < 0)
+        return NULL;
+
     qemuDomainObjEnterMonitor(vm);
     rc = qemuMonitorTransaction(qemuDomainGetMonitor(vm), &actions);
     qemuDomainObjExitMonitor(vm);
@@ -651,6 +660,9 @@ qemuCheckpointGetXMLDescUpdateSize(virDomainObj *vm,
     if (virDomainObjCheckActive(vm) < 0)
         goto endjob;
 
+    if (qemuBlockNodesEnsureActive(vm, VIR_ASYNC_JOB_NONE) < 0)
+        goto endjob;
+
     if (!(nodedataMerge = qemuBlockGetNamedNodeData(vm, VIR_ASYNC_JOB_NONE)))
         goto endjob;
 
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index ac72ea5cb0..3954857512 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -13820,6 +13820,9 @@ qemuDomainBlockPullCommon(virDomainObj *vm,
         speed <<= 20;
     }
 
+    if (qemuBlockNodesEnsureActive(vm, VIR_ASYNC_JOB_NONE) < 0)
+        goto endjob;
+
     if (!(job = qemuBlockJobDiskNewPull(vm, disk, baseSource, flags)))
         goto endjob;
 
@@ -14390,6 +14393,9 @@ qemuDomainBlockCopyCommon(virDomainObj *vm,
         goto endjob;
     }
 
+    if (qemuBlockNodesEnsureActive(vm, VIR_ASYNC_JOB_NONE) < 0)
+        goto endjob;
+
     /* pre-create the image file. This is required so that libvirt can properly
      * label the image for access by qemu */
     if (!existing) {
@@ -14796,6 +14802,9 @@ qemuDomainBlockCommit(virDomainPtr dom,
                                              base, disk->dst, NULL)))
         goto endjob;
 
+    if (qemuBlockNodesEnsureActive(vm, VIR_ASYNC_JOB_NONE) < 0)
+        goto endjob;
+
     job = qemuBlockCommit(vm, disk, baseSource, topSource, top_parent,
                           speed, VIR_ASYNC_JOB_NONE, VIR_TRISTATE_BOOL_YES,
                           flags);
diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index 7d87b3073b..a11d1d8452 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -2919,6 +2919,9 @@ qemuMigrationSrcBeginPhase(virQEMUDriver *driver,
          vm->newDef && !qemuDomainVcpuHotplugIsInOrder(vm->newDef)))
         cookieFlags |= QEMU_MIGRATION_COOKIE_CPU_HOTPLUG;
 
+    if (qemuBlockNodesEnsureActive(vm, vm->job->asyncJob) < 0)
+        return NULL;
+
     return qemuMigrationSrcBeginXML(vm, xmlin,
                                     cookieout, cookieoutlen, cookieFlags,
                                     migrate_disks, flags);
diff --git a/src/qemu/qemu_snapshot.c b/src/qemu/qemu_snapshot.c
index 764aafda4d..c988de37ca 100644
--- a/src/qemu/qemu_snapshot.c
+++ b/src/qemu/qemu_snapshot.c
@@ -2066,6 +2066,9 @@ qemuSnapshotCreate(virDomainObj *vm,
 
     /* actually do the snapshot */
     if (virDomainObjIsActive(vm)) {
+        if (qemuBlockNodesEnsureActive(vm, VIR_ASYNC_JOB_SNAPSHOT) < 0)
+            goto error;
+
         if (flags & VIR_DOMAIN_SNAPSHOT_CREATE_DISK_ONLY ||
             virDomainSnapshotObjGetDef(snap)->memory == VIR_DOMAIN_SNAPSHOT_LOCATION_EXTERNAL) {
             /* external full system or disk snapshot */
@@ -4094,6 +4097,9 @@ qemuSnapshotDiscardImpl(virDomainObj *vm,
                 return -1;
             }
         } else {
+            if (qemuBlockNodesEnsureActive(vm, VIR_ASYNC_JOB_SNAPSHOT) < 0)
+                return -1;
+
             if (virDomainSnapshotIsExternal(snap)) {
                 if (qemuSnapshotDiscardExternal(vm, snap, externalData) < 0)
                     return -1;
-- 
2.51.0

From: Peter Krempa <pkrempa@redhat.com>

The other code paths which do want to issue block jobs can reactivate the
nodes when necessary, so we don't need to do that unconditionally after a
failed/cancelled migration.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
---
 src/qemu/qemu_migration.c | 50 ++-------------------------------------
 1 file changed, 2 insertions(+), 48 deletions(-)

diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index a11d1d8452..9109c4526d 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -220,43 +220,6 @@ qemuMigrationSrcStoreDomainState(virDomainObj *vm)
 }
 
 
-/**
- * qemuMigrationBlockNodesReactivate:
- *
- * In case when we're keeping the VM paused qemu will not re-activate the block
- * device backend tree so blockjobs would fail. In case when qemu supports the
- * 'blockdev-set-active' command this function will re-activate the block nodes.
- */
-static void
-qemuMigrationBlockNodesReactivate(virDomainObj *vm,
-                                  virDomainAsyncJob asyncJob)
-{
-    virErrorPtr orig_err;
-    qemuDomainObjPrivate *priv = vm->privateData;
-    int rc;
-
-    if (!virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_BLOCKDEV_SET_ACTIVE))
-        return;
-
-    VIR_DEBUG("re-activating block nodes");
-
-    virErrorPreserveLast(&orig_err);
-
-    if (qemuDomainObjEnterMonitorAsync(vm, asyncJob) < 0)
-        goto cleanup;
-
-    rc = qemuMonitorBlockdevSetActive(priv->mon, NULL, true);
-
-    qemuDomainObjExitMonitor(vm);
-
-    if (rc < 0)
-        VIR_WARN("failed to re-activate block nodes after migration of VM '%s'", vm->def->name);
-
- cleanup:
-    virErrorRestore(&orig_err);
-}
-
-
 static void
 qemuMigrationSrcRestoreDomainState(virQEMUDriver *driver, virDomainObj *vm)
 {
@@ -279,11 +242,11 @@ qemuMigrationSrcRestoreDomainState(virQEMUDriver *driver, virDomainObj *vm)
 
     if (preMigrationState != VIR_DOMAIN_RUNNING ||
         state != VIR_DOMAIN_PAUSED)
-        goto reactivate;
+        return;
 
     if (reason == VIR_DOMAIN_PAUSED_IOERROR) {
         VIR_DEBUG("Domain is paused due to I/O error, skipping resume");
-        goto reactivate;
+        return;
     }
 
     VIR_DEBUG("Restoring pre-migration state due to migration error");
@@ -306,14 +269,7 @@ qemuMigrationSrcRestoreDomainState(virQEMUDriver *driver, virDomainObj *vm)
                                                       VIR_DOMAIN_EVENT_SUSPENDED_API_ERROR);
             virObjectEventStateQueue(driver->domainEventState, event);
         }
-
-        goto reactivate;
     }
-
-    return;
-
- reactivate:
-    qemuMigrationBlockNodesReactivate(vm, VIR_ASYNC_JOB_MIGRATION_OUT);
 }
 
 
@@ -6891,8 +6847,6 @@ qemuMigrationDstFinishFresh(virQEMUDriver *driver,
 
         if (*inPostCopy)
             *doKill = false;
-    } else {
-        qemuMigrationBlockNodesReactivate(vm, VIR_ASYNC_JOB_MIGRATION_IN);
     }
 
     if (mig->jobData) {
-- 
2.51.0

From: Peter Krempa <pkrempa@redhat.com>

If the user wants to manually preserve the state of a disk we need to,
apart from pausing the machine to quiesce all writes, also deactivate the
block nodes of the device. This ensures that qemu writes out metadata
(e.g. block dirty bitmaps) which are normally stored only in memory, thus
allowing a consistent snapshot including the metadata.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
---
 src/qemu/qemu_snapshot.c | 81 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 81 insertions(+)

diff --git a/src/qemu/qemu_snapshot.c b/src/qemu/qemu_snapshot.c
index c988de37ca..5c12dca892 100644
--- a/src/qemu/qemu_snapshot.c
+++ b/src/qemu/qemu_snapshot.c
@@ -1552,6 +1552,83 @@ qemuSnapshotCreateActiveExternalDisks(virDomainObj *vm,
 }
 
 
+static int
+qemuSnapshotCreateActiveExternalDisksManual(virDomainObj *vm,
+                                            virDomainMomentObj *snap,
+                                            virDomainAsyncJob asyncJob)
+{
+    qemuDomainObjPrivate *priv = vm->privateData;
+    virDomainSnapshotDef *snapdef = virDomainSnapshotObjGetDef(snap);
+    g_autoptr(GPtrArray) nodenames = g_ptr_array_new();
+    int ret = 0;
+    size_t i;
+
+    if (!virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_BLOCKDEV_SET_ACTIVE))
+        return 0;
+
+    for (i = 0; i < snapdef->ndisks; i++) {
+        virDomainDiskDef *domdisk = vm->def->disks[i];
+        qemuDomainDiskPrivate *domdiskPriv = QEMU_DOMAIN_DISK_PRIVATE(domdisk);
+        virStorageSource *n;
+
+        if (snapdef->disks[i].snapshot != VIR_DOMAIN_SNAPSHOT_LOCATION_MANUAL)
+            continue;
+
+        if (domdiskPriv->nodeCopyOnRead)
+            g_ptr_array_add(nodenames, domdiskPriv->nodeCopyOnRead);
+
+        if (domdisk->nthrottlefilters > 0) {
+            size_t j;
+
+            for (j = 0; j < domdisk->nthrottlefilters; j++) {
+                g_ptr_array_add(nodenames, (void *) qemuBlockThrottleFilterGetNodename(domdisk->throttlefilters[j]));
+            }
+        }
+
+        for (n = domdisk->src; virStorageSourceIsBacking(n); n = n->backingStore) {
+            const char *tmp;
+
+            if ((tmp = qemuBlockStorageSourceGetFormatNodename(n)))
+                g_ptr_array_add(nodenames, (void *) tmp);
+
+            if ((tmp = qemuBlockStorageSourceGetSliceNodename(n)))
+                g_ptr_array_add(nodenames, (void *) tmp);
+
+            g_ptr_array_add(nodenames, (void *) qemuBlockStorageSourceGetStorageNodename(n));
+
+            if (n->dataFileStore) {
+                if ((tmp = qemuBlockStorageSourceGetFormatNodename(n->dataFileStore)))
+                    g_ptr_array_add(nodenames, (void *) tmp);
+
+                if ((tmp = qemuBlockStorageSourceGetSliceNodename(n->dataFileStore)))
+                    g_ptr_array_add(nodenames, (void *) tmp);
+
+                g_ptr_array_add(nodenames, (void *) qemuBlockStorageSourceGetStorageNodename(n->dataFileStore));
+            }
+        }
+    }
+
+    if (nodenames->len == 0)
+        return 0;
+
+    if (qemuDomainObjEnterMonitorAsync(vm, asyncJob) < 0)
+        return -1;
+
+    for (i = 0; i < nodenames->len; i++) {
+        const char *nodename = g_ptr_array_index(nodenames, i);
+
+        if (qemuMonitorBlockdevSetActive(priv->mon, nodename, false) < 0) {
+            ret = -1;
+            break;
+        }
+    }
+
+    qemuDomainObjExitMonitor(vm);
+
+    return ret;
+}
+
+
 static int
 qemuSnapshotCreateActiveExternal(virQEMUDriver *driver,
                                  virDomainObj *vm,
@@ -1627,6 +1704,10 @@ qemuSnapshotCreateActiveExternal(virQEMUDriver *driver,
         }
     }
 
+    if (has_manual &&
+        qemuSnapshotCreateActiveExternalDisksManual(vm, snap, VIR_ASYNC_JOB_SNAPSHOT) < 0)
+        goto cleanup;
+
     /* We need to collect reply from 'query-named-block-nodes' prior to the
      * migration step as qemu deactivates bitmaps after migration so the result
      * would be wrong */
-- 
2.51.0
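
To illustrate the effect, a sketch of the monitor traffic for a single 'manual' qcow2-backed disk; the node names follow libvirt's usual naming but are invented for the example:

  {"execute": "blockdev-set-active",
   "arguments": {"node-name": "libvirt-1-format", "active": false}}
  {"return": {}}
  {"execute": "blockdev-set-active",
   "arguments": {"node-name": "libvirt-1-storage", "active": false}}
  {"return": {}}

Every node belonging to the disk gets deactivated (copy-on-read and throttle filter nodes, plus the format/slice/storage nodes of each backing chain member and its data file), so qemu flushes in-memory metadata such as dirty bitmaps before the storage is snapshotted externally.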

From: Peter Krempa <pkrempa@redhat.com>

The 'manual' snapshot mode is meant for disks where the user wants to take
a snapshot via means outside of libvirt, e.g. on a SAN network.

Allow creating a snapshot which consists entirely of 'manual' disks. For
now this effectively means that the VM will be paused, but in the future
more logic can be added to ensure consistency.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
---
 src/qemu/qemu_snapshot.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/src/qemu/qemu_snapshot.c b/src/qemu/qemu_snapshot.c
index 5c12dca892..d4994dd54e 100644
--- a/src/qemu/qemu_snapshot.c
+++ b/src/qemu/qemu_snapshot.c
@@ -996,7 +996,7 @@ qemuSnapshotPrepare(virDomainObj *vm,
         }
     }
 
-    if (!found_internal && !external &&
+    if (!found_internal && !external && !*has_manual &&
         def->memory == VIR_DOMAIN_SNAPSHOT_LOCATION_NO) {
         virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
                        _("nothing selected for snapshot"));
@@ -1013,7 +1013,7 @@ qemuSnapshotPrepare(virDomainObj *vm,
     }
 
     /* disk snapshot requires at least one disk */
-    if (def->state == VIR_DOMAIN_SNAPSHOT_DISK_SNAPSHOT && !external) {
+    if (def->state == VIR_DOMAIN_SNAPSHOT_DISK_SNAPSHOT && !external && !*has_manual) {
         virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
                        _("disk-only snapshots require at least one disk to be selected for snapshot"));
         return -1;
-- 
2.51.0
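
As an illustration of what this newly permits (a hedged example; the disk targets and snapshot name are made up), a snapshot definition where every disk is handled outside of libvirt is no longer rejected:

  <domainsnapshot>
    <name>san-backup</name>
    <disks>
      <disk name='vda' snapshot='manual'/>
      <disk name='vdb' snapshot='manual'/>
    </disks>
  </domainsnapshot>

Creating it pauses the VM (and, with the previous patch, deactivates the block nodes of the 'manual' disks); snapshotting the storage itself is left to the caller.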

From: Peter Krempa <pkrempa@redhat.com>

Using a blockjob will reactivate the block nodes in qemu and thus qcow2
metadata such as bitmaps may become marked as dirty. Users of 'manual'
snapshots ought to avoid those.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
---
 docs/formatsnapshot.rst | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/docs/formatsnapshot.rst b/docs/formatsnapshot.rst
index b85194b6bb..98e3adc235 100644
--- a/docs/formatsnapshot.rst
+++ b/docs/formatsnapshot.rst
@@ -127,9 +127,12 @@ The top-level ``domainsnapshot`` element may contain the following elements:
 
       :since:`Since 8.2.0` the ``snapshot`` attribute supports the ``manual``
       value which instructs the hypervisor to create the snapshot and keep a
-      synchronized state by pausing the VM which allows to snapshot disk
+      synchronized state by pausing the VM (and where supported deactivating
+      the storage backends of the hypervisor), which allows to snapshot disk
       storage from outside of the hypervisor if the storage provider supports
-      it. The caller is responsible for resuming a VM paused by requesting a
+      it. VM operations requiring the storage (e.g. blockjobs, migration) should
+      be avoided to ensure that the storage backends can stay deactivated.
+      The caller is responsible for resuming a VM paused by requesting a
       ``manual`` snapshot. When reverting such snapshot, the expectation is
       that the storage is configured in a way where the hypervisor will see
       the correct image state.
-- 
2.51.0
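
To make the documented workflow concrete, a rough sketch of how a caller might drive a 'manual' snapshot from virsh (the domain and snapshot names are placeholders, 'vda' is assumed to be the domain's only disk, and the storage-side step depends entirely on the SAN/storage tooling in use):

  # take the snapshot; the guest stays paused afterwards
  virsh snapshot-create-as demo-vm san-snap --disk-only \
      --diskspec vda,snapshot=manual

  # ... snapshot the disk's volume on the storage array here ...

  # the caller is responsible for resuming the guest
  virsh resume demo-vm

Per the note added above, blockjobs and migration should be avoided while the storage backends are meant to stay deactivated.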

On 10/13/25 15:40, Peter Krempa via Devel wrote:
> Peter Krempa (7):
>   qemu: monitor: Track inactive state of block nodes in 'qemuBlockNamedNodeData'
>   qemu: block: Introduce helper function to ensure that block nodes are active
>   qemu: Re-activate block nodes before storage operations
>   qemu: migration: Don't reactivate block nodes after migration failure any more
>   qemu: snapshot: Deactivate block nodes on manually snapshotted disks
>   qemu: snapshot: Allow snapshot consisting only of 'manual'-y handled disks
>   docs: snapshot: Add a note that blockjobs ought to be avoided with 'manual' snapshots
>
>  docs/formatsnapshot.rst      |  7 ++-
>  src/qemu/qemu_backup.c       |  3 ++
>  src/qemu/qemu_block.c        | 52 +++++++++++++++++++++
>  src/qemu/qemu_block.h        |  4 ++
>  src/qemu/qemu_checkpoint.c   | 12 +++++
>  src/qemu/qemu_driver.c       |  9 ++++
>  src/qemu/qemu_migration.c    | 53 ++-------------------
>  src/qemu/qemu_monitor.h      |  4 ++
>  src/qemu/qemu_monitor_json.c |  5 ++
>  src/qemu/qemu_snapshot.c     | 91 +++++++++++++++++++++++++++++++++++-
>  10 files changed, 188 insertions(+), 52 deletions(-)

Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

Michal