[libvirt PATCH 0/4] qemu: Drop QEMU_CAPS_MIGRATION_EVENT

All QEMU versions we care about already support migration events.

Jiri Denemark (4):
  qemu: Drop QEMU_CAPS_MIGRATION_EVENT
  qemu: Refactor qemuDomainGetJobInfoMigrationStats
  qemu: Make migration events mandatory
  qemu: Enable migration events only for fresh QEMU process

 src/qemu/qemu_capabilities.c                  |  3 +-
 src/qemu/qemu_capabilities.h                  |  2 +-
 src/qemu/qemu_driver.c                        | 36 +++++++++---------
 src/qemu/qemu_migration.c                     | 37 ++++---------------
 src/qemu/qemu_migration_params.c              | 15 +++-----
 src/qemu/qemu_migration_params.h              |  3 +-
 src/qemu/qemu_process.c                       | 14 ++++---
 .../qemucapabilitiesdata/caps_3.1.0.ppc64.xml |  1 -
 .../caps_3.1.0.x86_64.xml                     |  1 -
 .../caps_4.0.0.aarch64.xml                    |  1 -
 .../qemucapabilitiesdata/caps_4.0.0.ppc64.xml |  1 -
 .../caps_4.0.0.riscv32.xml                    |  1 -
 .../caps_4.0.0.riscv64.xml                    |  1 -
 .../qemucapabilitiesdata/caps_4.0.0.s390x.xml |  1 -
 .../caps_4.0.0.x86_64.xml                     |  1 -
 .../caps_4.1.0.x86_64.xml                     |  1 -
 .../caps_4.2.0.aarch64.xml                    |  1 -
 .../qemucapabilitiesdata/caps_4.2.0.ppc64.xml |  1 -
 .../qemucapabilitiesdata/caps_4.2.0.s390x.xml |  1 -
 .../caps_4.2.0.x86_64.xml                     |  1 -
 .../caps_5.0.0.aarch64.xml                    |  1 -
 .../qemucapabilitiesdata/caps_5.0.0.ppc64.xml |  1 -
 .../caps_5.0.0.riscv64.xml                    |  1 -
 .../caps_5.0.0.x86_64.xml                     |  1 -
 .../qemucapabilitiesdata/caps_5.1.0.sparc.xml |  1 -
 .../caps_5.1.0.x86_64.xml                     |  1 -
 .../caps_5.2.0.aarch64.xml                    |  1 -
 .../qemucapabilitiesdata/caps_5.2.0.ppc64.xml |  1 -
 .../caps_5.2.0.riscv64.xml                    |  1 -
 .../qemucapabilitiesdata/caps_5.2.0.s390x.xml |  1 -
 .../caps_5.2.0.x86_64.xml                     |  1 -
 .../caps_6.0.0.aarch64.xml                    |  1 -
 .../qemucapabilitiesdata/caps_6.0.0.s390x.xml |  1 -
 .../caps_6.0.0.x86_64.xml                     |  1 -
 .../caps_6.1.0.x86_64.xml                     |  1 -
 .../caps_6.2.0.aarch64.xml                    |  1 -
 .../qemucapabilitiesdata/caps_6.2.0.ppc64.xml |  1 -
 .../caps_6.2.0.x86_64.xml                     |  1 -
 .../caps_7.0.0.aarch64.xml                    |  1 -
 .../qemucapabilitiesdata/caps_7.0.0.ppc64.xml |  1 -
 .../caps_7.0.0.x86_64.xml                     |  1 -
 .../caps_7.1.0.x86_64.xml                     |  1 -
 42 files changed, 46 insertions(+), 99 deletions(-)

--
2.35.1
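[Editorial context: with the "events" migration capability enabled, QEMU reports migration state changes asynchronously via the QMP MIGRATION event instead of libvirt having to poll query-migrate. An illustrative event (the timestamp values here are made up, not taken from the patches) looks roughly like this:

  {"timestamp": {"seconds": 1652345678, "microseconds": 123456},
   "event": "MIGRATION",
   "data": {"status": "completed"}}

The series assumes every supported QEMU (3.1.0 and newer) can emit these events.]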

All QEMU versions we care about already support migration events.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
---
 src/qemu/qemu_capabilities.c                  |  3 +-
 src/qemu/qemu_capabilities.h                  |  2 +-
 src/qemu/qemu_driver.c                        |  6 +--
 src/qemu/qemu_migration.c                     | 37 ++++---------------
 src/qemu/qemu_migration_params.c              | 25 ++++++-------
 .../qemucapabilitiesdata/caps_3.1.0.ppc64.xml |  1 -
 .../caps_3.1.0.x86_64.xml                     |  1 -
 .../caps_4.0.0.aarch64.xml                    |  1 -
 .../qemucapabilitiesdata/caps_4.0.0.ppc64.xml |  1 -
 .../caps_4.0.0.riscv32.xml                    |  1 -
 .../caps_4.0.0.riscv64.xml                    |  1 -
 .../qemucapabilitiesdata/caps_4.0.0.s390x.xml |  1 -
 .../caps_4.0.0.x86_64.xml                     |  1 -
 .../caps_4.1.0.x86_64.xml                     |  1 -
 .../caps_4.2.0.aarch64.xml                    |  1 -
 .../qemucapabilitiesdata/caps_4.2.0.ppc64.xml |  1 -
 .../qemucapabilitiesdata/caps_4.2.0.s390x.xml |  1 -
 .../caps_4.2.0.x86_64.xml                     |  1 -
 .../caps_5.0.0.aarch64.xml                    |  1 -
 .../qemucapabilitiesdata/caps_5.0.0.ppc64.xml |  1 -
 .../caps_5.0.0.riscv64.xml                    |  1 -
 .../caps_5.0.0.x86_64.xml                     |  1 -
 .../qemucapabilitiesdata/caps_5.1.0.sparc.xml |  1 -
 .../caps_5.1.0.x86_64.xml                     |  1 -
 .../caps_5.2.0.aarch64.xml                    |  1 -
 .../qemucapabilitiesdata/caps_5.2.0.ppc64.xml |  1 -
 .../caps_5.2.0.riscv64.xml                    |  1 -
 .../qemucapabilitiesdata/caps_5.2.0.s390x.xml |  1 -
 .../caps_5.2.0.x86_64.xml                     |  1 -
 .../caps_6.0.0.aarch64.xml                    |  1 -
 .../qemucapabilitiesdata/caps_6.0.0.s390x.xml |  1 -
 .../caps_6.0.0.x86_64.xml                     |  1 -
 .../caps_6.1.0.x86_64.xml                     |  1 -
 .../caps_6.2.0.aarch64.xml                    |  1 -
 .../qemucapabilitiesdata/caps_6.2.0.ppc64.xml |  1 -
 .../caps_6.2.0.x86_64.xml                     |  1 -
 .../caps_7.0.0.aarch64.xml                    |  1 -
 .../qemucapabilitiesdata/caps_7.0.0.ppc64.xml |  1 -
 .../caps_7.0.0.x86_64.xml                     |  1 -
 .../caps_7.1.0.x86_64.xml                     |  1 -
 40 files changed, 22 insertions(+), 86 deletions(-)

diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c
index 4a03f4794b..a59d839d85 100644
--- a/src/qemu/qemu_capabilities.c
+++ b/src/qemu/qemu_capabilities.c
@@ -335,7 +335,7 @@ VIR_ENUM_IMPL(virQEMUCaps,
               "pci-serial", /* QEMU_CAPS_DEVICE_PCI_SERIAL */
               "aarch64-off", /* QEMU_CAPS_CPU_AARCH64_OFF */
               "vhost-user-multiqueue", /* X_QEMU_CAPS_VHOSTUSER_MULTIQUEUE */
-              "migration-event", /* QEMU_CAPS_MIGRATION_EVENT */
+              "migration-event", /* X_QEMU_CAPS_MIGRATION_EVENT */

               /* 190 */
               "gpex-pcihost", /* QEMU_CAPS_OBJECT_GPEX */
@@ -1237,7 +1237,6 @@ struct virQEMUCapsStringFlags virQEMUCapsMigration[] = {

 /* Use virQEMUCapsQMPSchemaQueries for querying parameters of events */
 struct virQEMUCapsStringFlags virQEMUCapsEvents[] = {
-    { "MIGRATION", QEMU_CAPS_MIGRATION_EVENT },
     { "VSERPORT_CHANGE", QEMU_CAPS_VSERPORT_CHANGE },
     { "BLOCK_WRITE_THRESHOLD", QEMU_CAPS_BLOCK_WRITE_THRESHOLD },
     { "DUMP_COMPLETED", QEMU_CAPS_DUMP_COMPLETED },
diff --git a/src/qemu/qemu_capabilities.h b/src/qemu/qemu_capabilities.h
index baddb059b2..59c09903f3 100644
--- a/src/qemu/qemu_capabilities.h
+++ b/src/qemu/qemu_capabilities.h
@@ -309,7 +309,7 @@ typedef enum { /* virQEMUCapsFlags grouping marker for syntax-check */
     QEMU_CAPS_DEVICE_PCI_SERIAL, /* -device pci-serial */
     QEMU_CAPS_CPU_AARCH64_OFF, /* -cpu ...,aarch64=off */
     X_QEMU_CAPS_VHOSTUSER_MULTIQUEUE, /* vhost-user with -netdev queues= */
-    QEMU_CAPS_MIGRATION_EVENT, /* MIGRATION event */
+    X_QEMU_CAPS_MIGRATION_EVENT, /* MIGRATION event */

     /* 190 */
     QEMU_CAPS_OBJECT_GPEX, /* have generic PCI host controller */
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index e3582f62a7..d75606f78c 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -12503,17 +12503,13 @@
 qemuDomainGetJobInfoMigrationStats(virQEMUDriver *driver,
                                    virDomainObj *vm,
                                    virDomainJobData *jobData)
 {
-    qemuDomainObjPrivate *priv = vm->privateData;
     qemuDomainJobDataPrivate *privStats = jobData->privateData;
-    bool events = virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_MIGRATION_EVENT);
-
     if (jobData->status == VIR_DOMAIN_JOB_STATUS_ACTIVE ||
         jobData->status == VIR_DOMAIN_JOB_STATUS_MIGRATING ||
         jobData->status == VIR_DOMAIN_JOB_STATUS_HYPERVISOR_COMPLETED ||
         jobData->status == VIR_DOMAIN_JOB_STATUS_POSTCOPY) {
-        if (events &&
-            jobData->status != VIR_DOMAIN_JOB_STATUS_ACTIVE &&
+        if (jobData->status != VIR_DOMAIN_JOB_STATUS_ACTIVE &&
             qemuMigrationAnyFetchStats(driver, vm, VIR_ASYNC_JOB_NONE,
                                        jobData, NULL) < 0)
             return -1;
diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index b735bdb391..25af291dc6 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -1735,10 +1735,8 @@ qemuMigrationJobCheckStatus(virQEMUDriver *driver,
     virDomainJobData *jobData = priv->job.current;
     qemuDomainJobDataPrivate *privJob = jobData->privateData;
     g_autofree char *error = NULL;
-    bool events = virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_MIGRATION_EVENT);

-    if (!events ||
-        privJob->stats.mig.status == QEMU_MONITOR_MIGRATION_STATUS_ERROR) {
+    if (privJob->stats.mig.status == QEMU_MONITOR_MIGRATION_STATUS_ERROR) {
         if (qemuMigrationAnyFetchStats(driver, vm, asyncJob, jobData, &error) < 0)
             return -1;
     }
@@ -1890,7 +1888,6 @@ qemuMigrationSrcWaitForCompletion(virQEMUDriver *driver,
 {
     qemuDomainObjPrivate *priv = vm->privateData;
     virDomainJobData *jobData = priv->job.current;
-    bool events = virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_MIGRATION_EVENT);
     int rv;

     jobData->status = VIR_DOMAIN_JOB_STATUS_MIGRATING;
@@ -1900,24 +1897,14 @@ qemuMigrationSrcWaitForCompletion(virQEMUDriver *driver,
         if (rv < 0)
             return rv;

-        if (events) {
-            if (virDomainObjWait(vm) < 0) {
-                if (virDomainObjIsActive(vm))
-                    jobData->status = VIR_DOMAIN_JOB_STATUS_FAILED;
-                return -2;
-            }
-        } else {
-            /* Poll every 50ms for progress & to allow cancellation */
-            struct timespec ts = { .tv_sec = 0, .tv_nsec = 50 * 1000 * 1000ull };
-
-            virObjectUnlock(vm);
-            nanosleep(&ts, NULL);
-            virObjectLock(vm);
+        if (virDomainObjWait(vm) < 0) {
+            if (virDomainObjIsActive(vm))
+                jobData->status = VIR_DOMAIN_JOB_STATUS_FAILED;
+            return -2;
         }
     }

-    if (events)
-        ignore_value(qemuMigrationAnyFetchStats(driver, vm, asyncJob, jobData, NULL));
+    ignore_value(qemuMigrationAnyFetchStats(driver, vm, asyncJob, jobData, NULL));

     qemuDomainJobDataUpdateTime(jobData);
     qemuDomainJobDataUpdateDowntime(jobData);
@@ -1939,13 +1926,9 @@ qemuMigrationDstWaitForCompletion(virQEMUDriver *driver,
                                   virDomainAsyncJob asyncJob,
                                   bool postcopy)
 {
-    qemuDomainObjPrivate *priv = vm->privateData;
     unsigned int flags = 0;
     int rv;

-    if (!virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_MIGRATION_EVENT))
-        return 0;
-
     VIR_DEBUG("Waiting for incoming migration to complete");

     if (postcopy)
@@ -4028,7 +4011,6 @@ qemuMigrationSrcRun(virQEMUDriver *driver,
     virErrorPtr orig_err = NULL;
     unsigned int cookieFlags = 0;
     bool abort_on_error = !!(flags & VIR_MIGRATE_ABORT_ON_ERROR);
-    bool events = virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_MIGRATION_EVENT);
     bool bwParam = virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_MIGRATION_PARAM_BANDWIDTH);
     bool storageMigration = flags & (VIR_MIGRATE_NON_SHARED_DISK | VIR_MIGRATE_NON_SHARED_INC);
     bool cancel = false;
@@ -4071,8 +4053,7 @@ qemuMigrationSrcRun(virQEMUDriver *driver,
         return -1;
     }

-    if (events)
-        priv->signalIOError = abort_on_error;
+    priv->signalIOError = abort_on_error;

     if (flags & VIR_MIGRATE_PERSIST_DEST) {
         if (persist_xml) {
@@ -4364,9 +4345,7 @@ qemuMigrationSrcRun(virQEMUDriver *driver,
     ret = 0;

 cleanup:
-    if (events)
-        priv->signalIOError = false;
-
+    priv->signalIOError = false;
     priv->migMaxBandwidth = restore_max_bandwidth;

     virErrorRestore(&orig_err);
diff --git a/src/qemu/qemu_migration_params.c b/src/qemu/qemu_migration_params.c
index df2384b213..ec288b9fa1 100644
--- a/src/qemu/qemu_migration_params.c
+++ b/src/qemu/qemu_migration_params.c
@@ -1419,26 +1419,23 @@ qemuMigrationCapsCheck(virQEMUDriver *driver,
         }
     }

-    if (virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_MIGRATION_EVENT)) {
-        migEvent = virBitmapNew(QEMU_MIGRATION_CAP_LAST);
+    migEvent = virBitmapNew(QEMU_MIGRATION_CAP_LAST);

-        ignore_value(virBitmapSetBit(migEvent, QEMU_MIGRATION_CAP_EVENTS));
+    ignore_value(virBitmapSetBit(migEvent, QEMU_MIGRATION_CAP_EVENTS));

-        if (!(json = qemuMigrationCapsToJSON(migEvent, migEvent)))
-            return -1;
+    if (!(json = qemuMigrationCapsToJSON(migEvent, migEvent)))
+        return -1;

-        if (qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob) < 0)
-            return -1;
+    if (qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob) < 0)
+        return -1;

-        rc = qemuMonitorSetMigrationCapabilities(priv->mon, &json);
+    rc = qemuMonitorSetMigrationCapabilities(priv->mon, &json);

-        qemuDomainObjExitMonitor(vm);
+    qemuDomainObjExitMonitor(vm);

-        if (rc < 0) {
-            virResetLastError();
-            VIR_DEBUG("Cannot enable migration events; clearing capability");
-            virQEMUCapsClear(priv->qemuCaps, QEMU_CAPS_MIGRATION_EVENT);
-        }
+    if (rc < 0) {
+        virResetLastError();
+        VIR_DEBUG("Cannot enable migration events");
     }

     /* Migration events capability must always be enabled, clearing it from
diff --git a/tests/qemucapabilitiesdata/caps_3.1.0.ppc64.xml b/tests/qemucapabilitiesdata/caps_3.1.0.ppc64.xml
index 23116d61f7..cc6e75884b 100644
--- a/tests/qemucapabilitiesdata/caps_3.1.0.ppc64.xml
+++ b/tests/qemucapabilitiesdata/caps_3.1.0.ppc64.xml
@@ -57,7 +57,6 @@
   <flag name='VGA.vgamem_mb'/>
   <flag name='pc-dimm'/>
   <flag name='pci-serial'/>
-  <flag name='migration-event'/>
   <flag name='rtl8139'/>
   <flag name='e1000'/>
   <flag name='virtio-net'/>
diff --git a/tests/qemucapabilitiesdata/caps_3.1.0.x86_64.xml b/tests/qemucapabilitiesdata/caps_3.1.0.x86_64.xml
index 8ca3ac48fe..27be36f0ab 100644
--- a/tests/qemucapabilitiesdata/caps_3.1.0.x86_64.xml
+++ b/tests/qemucapabilitiesdata/caps_3.1.0.x86_64.xml
@@ -78,7 +78,6 @@
   <flag name='pc-dimm'/>
   <flag name='machine-vmport-opt'/>
   <flag name='pci-serial'/>
-  <flag name='migration-event'/>
   <flag name='ioh3420'/>
   <flag name='x3130-upstream'/>
   <flag name='xio3130-downstream'/>
diff --git a/tests/qemucapabilitiesdata/caps_4.0.0.aarch64.xml b/tests/qemucapabilitiesdata/caps_4.0.0.aarch64.xml
index 5adf904fc4..7e0b8fbddf 100644
--- a/tests/qemucapabilitiesdata/caps_4.0.0.aarch64.xml
+++ b/tests/qemucapabilitiesdata/caps_4.0.0.aarch64.xml
@@ -60,7 +60,6 @@
   <flag name='vmware-svga.vgamem_mb'/>
   <flag name='pci-serial'/>
   <flag name='aarch64-off'/>
-  <flag name='migration-event'/>
   <flag name='gpex-pcihost'/>
   <flag name='ioh3420'/>
   <flag name='x3130-upstream'/>
diff --git a/tests/qemucapabilitiesdata/caps_4.0.0.ppc64.xml b/tests/qemucapabilitiesdata/caps_4.0.0.ppc64.xml
index a84adc2610..19bbbd1de3 100644
--- a/tests/qemucapabilitiesdata/caps_4.0.0.ppc64.xml
+++ b/tests/qemucapabilitiesdata/caps_4.0.0.ppc64.xml
@@ -59,7 +59,6 @@
   <flag name='vmware-svga.vgamem_mb'/>
   <flag name='pc-dimm'/>
   <flag name='pci-serial'/>
-  <flag name='migration-event'/>
   <flag name='ioh3420'/>
   <flag name='x3130-upstream'/>
   <flag name='xio3130-downstream'/>
diff --git a/tests/qemucapabilitiesdata/caps_4.0.0.riscv32.xml b/tests/qemucapabilitiesdata/caps_4.0.0.riscv32.xml
index c494254c4d..ef6f04f54b 100644
--- a/tests/qemucapabilitiesdata/caps_4.0.0.riscv32.xml
+++ b/tests/qemucapabilitiesdata/caps_4.0.0.riscv32.xml
@@ -62,7 +62,6 @@
   <flag name='VGA.vgamem_mb'/>
   <flag name='vmware-svga.vgamem_mb'/>
   <flag name='pci-serial'/>
-  <flag name='migration-event'/>
   <flag name='gpex-pcihost'/>
   <flag name='ioh3420'/>
   <flag name='x3130-upstream'/>
diff --git a/tests/qemucapabilitiesdata/caps_4.0.0.riscv64.xml b/tests/qemucapabilitiesdata/caps_4.0.0.riscv64.xml
index d2582fa297..7c65aff290 100644
--- a/tests/qemucapabilitiesdata/caps_4.0.0.riscv64.xml
+++ b/tests/qemucapabilitiesdata/caps_4.0.0.riscv64.xml
@@ -62,7 +62,6 @@
   <flag name='VGA.vgamem_mb'/>
   <flag name='vmware-svga.vgamem_mb'/>
   <flag name='pci-serial'/>
-  <flag name='migration-event'/>
   <flag name='gpex-pcihost'/>
   <flag name='ioh3420'/>
   <flag name='x3130-upstream'/>
diff --git a/tests/qemucapabilitiesdata/caps_4.0.0.s390x.xml b/tests/qemucapabilitiesdata/caps_4.0.0.s390x.xml
index 4f36186044..9b5ed96ba3 100644
--- a/tests/qemucapabilitiesdata/caps_4.0.0.s390x.xml
+++ b/tests/qemucapabilitiesdata/caps_4.0.0.s390x.xml
@@ -36,7 +36,6 @@
   <flag name='migrate-rdma'/>
   <flag name='aes-key-wrap'/>
   <flag name='dea-key-wrap'/>
-  <flag name='migration-event'/>
   <flag name='virtio-net'/>
   <flag name='virtio-gpu'/>
   <flag name='virtio-gpu.virgl'/>
diff --git a/tests/qemucapabilitiesdata/caps_4.0.0.x86_64.xml b/tests/qemucapabilitiesdata/caps_4.0.0.x86_64.xml
index 18e5ebd4f4..3cf6a66389 100644
--- a/tests/qemucapabilitiesdata/caps_4.0.0.x86_64.xml
+++ b/tests/qemucapabilitiesdata/caps_4.0.0.x86_64.xml
@@ -77,7 +77,6 @@
   <flag name='pc-dimm'/>
   <flag name='machine-vmport-opt'/>
   <flag name='pci-serial'/>
-  <flag name='migration-event'/>
   <flag name='ioh3420'/>
   <flag name='x3130-upstream'/>
   <flag name='xio3130-downstream'/>
diff --git a/tests/qemucapabilitiesdata/caps_4.1.0.x86_64.xml b/tests/qemucapabilitiesdata/caps_4.1.0.x86_64.xml
index 12c5ebe6f3..5daa7bda75 100644
--- a/tests/qemucapabilitiesdata/caps_4.1.0.x86_64.xml
+++ b/tests/qemucapabilitiesdata/caps_4.1.0.x86_64.xml
@@ -77,7 +77,6 @@
   <flag name='pc-dimm'/>
   <flag name='machine-vmport-opt'/>
   <flag name='pci-serial'/>
-  <flag name='migration-event'/>
   <flag name='ioh3420'/>
   <flag name='x3130-upstream'/>
   <flag name='xio3130-downstream'/>
diff --git a/tests/qemucapabilitiesdata/caps_4.2.0.aarch64.xml b/tests/qemucapabilitiesdata/caps_4.2.0.aarch64.xml
index ee536b7b63..833bf955f9 100644
--- a/tests/qemucapabilitiesdata/caps_4.2.0.aarch64.xml
+++ b/tests/qemucapabilitiesdata/caps_4.2.0.aarch64.xml
@@ -61,7 +61,6 @@
   <flag name='pc-dimm'/>
   <flag name='pci-serial'/>
   <flag name='aarch64-off'/>
-  <flag name='migration-event'/>
   <flag name='gpex-pcihost'/>
   <flag name='ioh3420'/>
   <flag name='x3130-upstream'/>
diff --git a/tests/qemucapabilitiesdata/caps_4.2.0.ppc64.xml b/tests/qemucapabilitiesdata/caps_4.2.0.ppc64.xml
index 10f5a9e2c5..1586f28ca5 100644
--- a/tests/qemucapabilitiesdata/caps_4.2.0.ppc64.xml
+++ b/tests/qemucapabilitiesdata/caps_4.2.0.ppc64.xml
@@ -59,7 +59,6 @@
   <flag name='vmware-svga.vgamem_mb'/>
   <flag name='pc-dimm'/>
   <flag name='pci-serial'/>
-  <flag name='migration-event'/>
   <flag name='ioh3420'/>
   <flag name='x3130-upstream'/>
   <flag name='xio3130-downstream'/>
diff --git a/tests/qemucapabilitiesdata/caps_4.2.0.s390x.xml b/tests/qemucapabilitiesdata/caps_4.2.0.s390x.xml
index 069777a49b..ce5b782afd 100644
--- a/tests/qemucapabilitiesdata/caps_4.2.0.s390x.xml
+++ b/tests/qemucapabilitiesdata/caps_4.2.0.s390x.xml
@@ -36,7 +36,6 @@
   <flag name='migrate-rdma'/>
   <flag name='aes-key-wrap'/>
   <flag name='dea-key-wrap'/>
-  <flag name='migration-event'/>
   <flag name='virtio-net'/>
   <flag name='virtio-gpu'/>
   <flag name='virtio-gpu.virgl'/>
diff --git a/tests/qemucapabilitiesdata/caps_4.2.0.x86_64.xml b/tests/qemucapabilitiesdata/caps_4.2.0.x86_64.xml
index 6b61214a0b..9ee4c0534d 100644
--- a/tests/qemucapabilitiesdata/caps_4.2.0.x86_64.xml
+++ b/tests/qemucapabilitiesdata/caps_4.2.0.x86_64.xml
@@ -78,7 +78,6 @@
   <flag name='pc-dimm'/>
   <flag name='machine-vmport-opt'/>
   <flag name='pci-serial'/>
-  <flag name='migration-event'/>
   <flag name='ioh3420'/>
   <flag name='x3130-upstream'/>
   <flag name='xio3130-downstream'/>
diff --git a/tests/qemucapabilitiesdata/caps_5.0.0.aarch64.xml b/tests/qemucapabilitiesdata/caps_5.0.0.aarch64.xml
index 4fd02e786d..29ee31473f 100644
--- a/tests/qemucapabilitiesdata/caps_5.0.0.aarch64.xml
+++ b/tests/qemucapabilitiesdata/caps_5.0.0.aarch64.xml
@@ -63,7 +63,6 @@
   <flag name='pc-dimm'/>
   <flag name='pci-serial'/>
   <flag name='aarch64-off'/>
-  <flag name='migration-event'/>
   <flag name='gpex-pcihost'/>
   <flag name='ioh3420'/>
   <flag name='x3130-upstream'/>
diff --git a/tests/qemucapabilitiesdata/caps_5.0.0.ppc64.xml b/tests/qemucapabilitiesdata/caps_5.0.0.ppc64.xml
index f2f3558fdc..868b3b0d0a 100644
--- a/tests/qemucapabilitiesdata/caps_5.0.0.ppc64.xml
+++ b/tests/qemucapabilitiesdata/caps_5.0.0.ppc64.xml
@@ -60,7 +60,6 @@
   <flag name='vmware-svga.vgamem_mb'/>
   <flag name='pc-dimm'/>
   <flag name='pci-serial'/>
-  <flag name='migration-event'/>
   <flag name='ioh3420'/>
   <flag name='x3130-upstream'/>
   <flag name='xio3130-downstream'/>
diff --git a/tests/qemucapabilitiesdata/caps_5.0.0.riscv64.xml b/tests/qemucapabilitiesdata/caps_5.0.0.riscv64.xml
index 557949d6d6..3b58e7fece 100644
--- a/tests/qemucapabilitiesdata/caps_5.0.0.riscv64.xml
+++ b/tests/qemucapabilitiesdata/caps_5.0.0.riscv64.xml
@@ -62,7 +62,6 @@
   <flag name='VGA.vgamem_mb'/>
   <flag name='vmware-svga.vgamem_mb'/>
   <flag name='pci-serial'/>
-  <flag name='migration-event'/>
   <flag name='gpex-pcihost'/>
   <flag name='ioh3420'/>
   <flag name='x3130-upstream'/>
diff --git a/tests/qemucapabilitiesdata/caps_5.0.0.x86_64.xml b/tests/qemucapabilitiesdata/caps_5.0.0.x86_64.xml
index f301d8a926..bee5a84cf9 100644
--- a/tests/qemucapabilitiesdata/caps_5.0.0.x86_64.xml
+++ b/tests/qemucapabilitiesdata/caps_5.0.0.x86_64.xml
@@ -78,7 +78,6 @@
   <flag name='pc-dimm'/>
   <flag name='machine-vmport-opt'/>
   <flag name='pci-serial'/>
-  <flag name='migration-event'/>
   <flag name='ioh3420'/>
   <flag name='x3130-upstream'/>
   <flag name='xio3130-downstream'/>
diff --git a/tests/qemucapabilitiesdata/caps_5.1.0.sparc.xml b/tests/qemucapabilitiesdata/caps_5.1.0.sparc.xml
index 3a330ebdc0..e6a3ed5ec0 100644
--- a/tests/qemucapabilitiesdata/caps_5.1.0.sparc.xml
+++ b/tests/qemucapabilitiesdata/caps_5.1.0.sparc.xml
@@ -25,7 +25,6 @@
   <flag name='memory-backend-file'/>
   <flag name='iothread'/>
   <flag name='migrate-rdma'/>
-  <flag name='migration-event'/>
   <flag name='chardev-file-append'/>
   <flag name='vserport-change-event'/>
   <flag name='spice-gl'/>
diff --git a/tests/qemucapabilitiesdata/caps_5.1.0.x86_64.xml b/tests/qemucapabilitiesdata/caps_5.1.0.x86_64.xml
index 53fcbf3417..070f64cb1c 100644
--- a/tests/qemucapabilitiesdata/caps_5.1.0.x86_64.xml
+++ b/tests/qemucapabilitiesdata/caps_5.1.0.x86_64.xml
@@ -78,7 +78,6 @@
   <flag name='pc-dimm'/>
   <flag name='machine-vmport-opt'/>
   <flag name='pci-serial'/>
-  <flag name='migration-event'/>
   <flag name='ioh3420'/>
   <flag name='x3130-upstream'/>
   <flag name='xio3130-downstream'/>
diff --git a/tests/qemucapabilitiesdata/caps_5.2.0.aarch64.xml b/tests/qemucapabilitiesdata/caps_5.2.0.aarch64.xml
index 824224302c..8e17532f3a 100644
--- a/tests/qemucapabilitiesdata/caps_5.2.0.aarch64.xml
+++ b/tests/qemucapabilitiesdata/caps_5.2.0.aarch64.xml
@@ -62,7 +62,6 @@
   <flag name='pc-dimm'/>
   <flag name='pci-serial'/>
   <flag name='aarch64-off'/>
-  <flag name='migration-event'/>
   <flag name='gpex-pcihost'/>
   <flag name='ioh3420'/>
   <flag name='x3130-upstream'/>
diff --git a/tests/qemucapabilitiesdata/caps_5.2.0.ppc64.xml b/tests/qemucapabilitiesdata/caps_5.2.0.ppc64.xml
index b949f88b5a..b0b5fe3271 100644
--- a/tests/qemucapabilitiesdata/caps_5.2.0.ppc64.xml
+++ b/tests/qemucapabilitiesdata/caps_5.2.0.ppc64.xml
@@ -59,7 +59,6 @@
   <flag name='vmware-svga.vgamem_mb'/>
   <flag name='pc-dimm'/>
   <flag name='pci-serial'/>
-  <flag name='migration-event'/>
   <flag name='ioh3420'/>
   <flag name='x3130-upstream'/>
   <flag name='xio3130-downstream'/>
diff --git a/tests/qemucapabilitiesdata/caps_5.2.0.riscv64.xml b/tests/qemucapabilitiesdata/caps_5.2.0.riscv64.xml
index 873923992d..7cb4383693 100644
--- a/tests/qemucapabilitiesdata/caps_5.2.0.riscv64.xml
+++ b/tests/qemucapabilitiesdata/caps_5.2.0.riscv64.xml
@@ -61,7 +61,6 @@
   <flag name='VGA.vgamem_mb'/>
   <flag name='vmware-svga.vgamem_mb'/>
   <flag name='pci-serial'/>
-  <flag name='migration-event'/>
   <flag name='gpex-pcihost'/>
   <flag name='ioh3420'/>
   <flag name='x3130-upstream'/>
diff --git a/tests/qemucapabilitiesdata/caps_5.2.0.s390x.xml b/tests/qemucapabilitiesdata/caps_5.2.0.s390x.xml
index 5e9560d7b7..30d10236e9 100644
--- a/tests/qemucapabilitiesdata/caps_5.2.0.s390x.xml
+++ b/tests/qemucapabilitiesdata/caps_5.2.0.s390x.xml
@@ -35,7 +35,6 @@
   <flag name='migrate-rdma'/>
   <flag name='aes-key-wrap'/>
   <flag name='dea-key-wrap'/>
-  <flag name='migration-event'/>
   <flag name='virtio-net'/>
   <flag name='virtio-gpu'/>
   <flag name='virtio-gpu.virgl'/>
diff --git a/tests/qemucapabilitiesdata/caps_5.2.0.x86_64.xml b/tests/qemucapabilitiesdata/caps_5.2.0.x86_64.xml
index 3998da9253..cad4ed40e6 100644
--- a/tests/qemucapabilitiesdata/caps_5.2.0.x86_64.xml
+++ b/tests/qemucapabilitiesdata/caps_5.2.0.x86_64.xml
@@ -77,7 +77,6 @@
   <flag name='pc-dimm'/>
   <flag name='machine-vmport-opt'/>
   <flag name='pci-serial'/>
-  <flag name='migration-event'/>
   <flag name='gpex-pcihost'/>
   <flag name='ioh3420'/>
   <flag name='x3130-upstream'/>
diff --git a/tests/qemucapabilitiesdata/caps_6.0.0.aarch64.xml b/tests/qemucapabilitiesdata/caps_6.0.0.aarch64.xml
index 51d3628eeb..4b4cc2d3aa 100644
--- a/tests/qemucapabilitiesdata/caps_6.0.0.aarch64.xml
+++ b/tests/qemucapabilitiesdata/caps_6.0.0.aarch64.xml
@@ -61,7 +61,6 @@
   <flag name='pc-dimm'/>
   <flag name='pci-serial'/>
   <flag name='aarch64-off'/>
-  <flag name='migration-event'/>
   <flag name='gpex-pcihost'/>
   <flag name='ioh3420'/>
   <flag name='x3130-upstream'/>
diff --git a/tests/qemucapabilitiesdata/caps_6.0.0.s390x.xml b/tests/qemucapabilitiesdata/caps_6.0.0.s390x.xml
index 2e5d0f197a..06543071aa 100644
--- a/tests/qemucapabilitiesdata/caps_6.0.0.s390x.xml
+++ b/tests/qemucapabilitiesdata/caps_6.0.0.s390x.xml
@@ -35,7 +35,6 @@
   <flag name='migrate-rdma'/>
   <flag name='aes-key-wrap'/>
   <flag name='dea-key-wrap'/>
-  <flag name='migration-event'/>
   <flag name='virtio-net'/>
   <flag name='virtio-gpu'/>
   <flag name='virtio-gpu.virgl'/>
diff --git a/tests/qemucapabilitiesdata/caps_6.0.0.x86_64.xml b/tests/qemucapabilitiesdata/caps_6.0.0.x86_64.xml
index 3498d6255b..8c61bf8a84 100644
--- a/tests/qemucapabilitiesdata/caps_6.0.0.x86_64.xml
+++ b/tests/qemucapabilitiesdata/caps_6.0.0.x86_64.xml
@@ -76,7 +76,6 @@
   <flag name='pc-dimm'/>
   <flag name='machine-vmport-opt'/>
   <flag name='pci-serial'/>
-  <flag name='migration-event'/>
   <flag name='gpex-pcihost'/>
   <flag name='ioh3420'/>
   <flag name='x3130-upstream'/>
diff --git a/tests/qemucapabilitiesdata/caps_6.1.0.x86_64.xml b/tests/qemucapabilitiesdata/caps_6.1.0.x86_64.xml
index 7127e5d18a..afd8f606eb 100644
--- a/tests/qemucapabilitiesdata/caps_6.1.0.x86_64.xml
+++ b/tests/qemucapabilitiesdata/caps_6.1.0.x86_64.xml
@@ -76,7 +76,6 @@
   <flag name='pc-dimm'/>
   <flag name='machine-vmport-opt'/>
   <flag name='pci-serial'/>
-  <flag name='migration-event'/>
   <flag name='gpex-pcihost'/>
   <flag name='ioh3420'/>
   <flag name='x3130-upstream'/>
diff --git a/tests/qemucapabilitiesdata/caps_6.2.0.aarch64.xml b/tests/qemucapabilitiesdata/caps_6.2.0.aarch64.xml
index e45682a648..86fc46918f 100644
--- a/tests/qemucapabilitiesdata/caps_6.2.0.aarch64.xml
+++ b/tests/qemucapabilitiesdata/caps_6.2.0.aarch64.xml
@@ -66,7 +66,6 @@
   <flag name='pc-dimm'/>
   <flag name='pci-serial'/>
   <flag name='aarch64-off'/>
-  <flag name='migration-event'/>
   <flag name='gpex-pcihost'/>
   <flag name='ioh3420'/>
   <flag name='x3130-upstream'/>
diff --git a/tests/qemucapabilitiesdata/caps_6.2.0.ppc64.xml b/tests/qemucapabilitiesdata/caps_6.2.0.ppc64.xml
index 9c9d9aa08e..d5a1663c15 100644
--- a/tests/qemucapabilitiesdata/caps_6.2.0.ppc64.xml
+++ b/tests/qemucapabilitiesdata/caps_6.2.0.ppc64.xml
@@ -59,7 +59,6 @@
   <flag name='vmware-svga.vgamem_mb'/>
   <flag name='pc-dimm'/>
   <flag name='pci-serial'/>
-  <flag name='migration-event'/>
   <flag name='ioh3420'/>
   <flag name='x3130-upstream'/>
   <flag name='xio3130-downstream'/>
diff --git a/tests/qemucapabilitiesdata/caps_6.2.0.x86_64.xml b/tests/qemucapabilitiesdata/caps_6.2.0.x86_64.xml
index 5eae115b28..19605d93ae 100644
--- a/tests/qemucapabilitiesdata/caps_6.2.0.x86_64.xml
+++ b/tests/qemucapabilitiesdata/caps_6.2.0.x86_64.xml
@@ -76,7 +76,6 @@
   <flag name='pc-dimm'/>
   <flag name='machine-vmport-opt'/>
   <flag name='pci-serial'/>
-  <flag name='migration-event'/>
   <flag name='gpex-pcihost'/>
   <flag name='ioh3420'/>
   <flag name='x3130-upstream'/>
diff --git a/tests/qemucapabilitiesdata/caps_7.0.0.aarch64.xml b/tests/qemucapabilitiesdata/caps_7.0.0.aarch64.xml
index c20b360e01..e24e2235fb 100644
--- a/tests/qemucapabilitiesdata/caps_7.0.0.aarch64.xml
+++ b/tests/qemucapabilitiesdata/caps_7.0.0.aarch64.xml
@@ -66,7 +66,6 @@
   <flag name='pc-dimm'/>
   <flag name='pci-serial'/>
   <flag name='aarch64-off'/>
-  <flag name='migration-event'/>
   <flag name='gpex-pcihost'/>
   <flag name='ioh3420'/>
   <flag name='x3130-upstream'/>
diff --git a/tests/qemucapabilitiesdata/caps_7.0.0.ppc64.xml b/tests/qemucapabilitiesdata/caps_7.0.0.ppc64.xml
index 945227b7f4..6c51e27f46 100644
--- a/tests/qemucapabilitiesdata/caps_7.0.0.ppc64.xml
+++ b/tests/qemucapabilitiesdata/caps_7.0.0.ppc64.xml
@@ -65,7 +65,6 @@
   <flag name='vmware-svga.vgamem_mb'/>
   <flag name='pc-dimm'/>
   <flag name='pci-serial'/>
-  <flag name='migration-event'/>
   <flag name='ioh3420'/>
   <flag name='x3130-upstream'/>
   <flag name='xio3130-downstream'/>
diff --git a/tests/qemucapabilitiesdata/caps_7.0.0.x86_64.xml b/tests/qemucapabilitiesdata/caps_7.0.0.x86_64.xml
index 48fef44c9a..7523b92e6b 100644
--- a/tests/qemucapabilitiesdata/caps_7.0.0.x86_64.xml
+++ b/tests/qemucapabilitiesdata/caps_7.0.0.x86_64.xml
@@ -76,7 +76,6 @@
   <flag name='pc-dimm'/>
   <flag name='machine-vmport-opt'/>
   <flag name='pci-serial'/>
-  <flag name='migration-event'/>
   <flag name='gpex-pcihost'/>
   <flag name='ioh3420'/>
   <flag name='x3130-upstream'/>
diff --git a/tests/qemucapabilitiesdata/caps_7.1.0.x86_64.xml b/tests/qemucapabilitiesdata/caps_7.1.0.x86_64.xml
index 332d9a1ac3..f598950cc3 100644
--- a/tests/qemucapabilitiesdata/caps_7.1.0.x86_64.xml
+++ b/tests/qemucapabilitiesdata/caps_7.1.0.x86_64.xml
@@ -76,7 +76,6 @@
   <flag name='pc-dimm'/>
   <flag name='machine-vmport-opt'/>
   <flag name='pci-serial'/>
-  <flag name='migration-event'/>
   <flag name='gpex-pcihost'/>
   <flag name='ioh3420'/>
   <flag name='x3130-upstream'/>
--
2.35.1

On Thu, May 12, 2022 at 10:55:44 +0200, Jiri Denemark wrote:
All QEMU versions we care about already support migration events.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
---
Reviewed-by: Peter Krempa <pkrempa@redhat.com>

The code was a bit too complicated, especially after removing the check
for QEMU_CAPS_MIGRATION_EVENT.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
---
 src/qemu/qemu_driver.c | 32 +++++++++++++++++++-------------
 1 file changed, 19 insertions(+), 13 deletions(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index d75606f78c..b8af1d6817 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -12505,26 +12505,32 @@ qemuDomainGetJobInfoMigrationStats(virQEMUDriver *driver,
 {
     qemuDomainJobDataPrivate *privStats = jobData->privateData;

-    if (jobData->status == VIR_DOMAIN_JOB_STATUS_ACTIVE ||
-        jobData->status == VIR_DOMAIN_JOB_STATUS_MIGRATING ||
-        jobData->status == VIR_DOMAIN_JOB_STATUS_HYPERVISOR_COMPLETED ||
-        jobData->status == VIR_DOMAIN_JOB_STATUS_POSTCOPY) {
-        if (jobData->status != VIR_DOMAIN_JOB_STATUS_ACTIVE &&
-            qemuMigrationAnyFetchStats(driver, vm, VIR_ASYNC_JOB_NONE,
-                                       jobData, NULL) < 0)
-            return -1;
-
-        if (jobData->status == VIR_DOMAIN_JOB_STATUS_ACTIVE &&
-            privStats->statsType == QEMU_DOMAIN_JOB_STATS_TYPE_MIGRATION &&
+    switch (jobData->status) {
+    case VIR_DOMAIN_JOB_STATUS_ACTIVE:
+        if (privStats->statsType == QEMU_DOMAIN_JOB_STATS_TYPE_MIGRATION &&
             qemuMigrationSrcFetchMirrorStats(driver, vm, VIR_ASYNC_JOB_NONE,
                                              jobData) < 0)
             return -1;
+        break;

-        if (qemuDomainJobDataUpdateTime(jobData) < 0)
+    case VIR_DOMAIN_JOB_STATUS_MIGRATING:
+    case VIR_DOMAIN_JOB_STATUS_HYPERVISOR_COMPLETED:
+    case VIR_DOMAIN_JOB_STATUS_POSTCOPY:
+        if (qemuMigrationAnyFetchStats(driver, vm, VIR_ASYNC_JOB_NONE,
+                                       jobData, NULL) < 0)
             return -1;
+        break;
+
+    case VIR_DOMAIN_JOB_STATUS_NONE:
+    case VIR_DOMAIN_JOB_STATUS_PAUSED:
+    case VIR_DOMAIN_JOB_STATUS_COMPLETED:
+    case VIR_DOMAIN_JOB_STATUS_FAILED:
+    case VIR_DOMAIN_JOB_STATUS_CANCELED:
+    default:
+        return 0;
     }

-    return 0;
+    return qemuDomainJobDataUpdateTime(jobData);
 }
--
2.35.1

On Thu, May 12, 2022 at 10:55:45 +0200, Jiri Denemark wrote:
The code was a bit too complicated, especially after removing the check for QEMU_CAPS_MIGRATION_EVENT.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
---
 src/qemu/qemu_driver.c | 32 +++++++++++++++++++-------------
 1 file changed, 19 insertions(+), 13 deletions(-)
Reviewed-by: Peter Krempa <pkrempa@redhat.com>

All QEMU versions we care about support migration events and we should
be able to enable the associated capability when connecting to the
monitor. Failure to do so is thus considered fatal now.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
---
 src/qemu/qemu_migration_params.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/src/qemu/qemu_migration_params.c b/src/qemu/qemu_migration_params.c
index ec288b9fa1..26754f03f8 100644
--- a/src/qemu/qemu_migration_params.c
+++ b/src/qemu/qemu_migration_params.c
@@ -1433,10 +1433,8 @@ qemuMigrationCapsCheck(virQEMUDriver *driver,

     qemuDomainObjExitMonitor(vm);

-    if (rc < 0) {
-        virResetLastError();
-        VIR_DEBUG("Cannot enable migration events");
-    }
+    if (rc < 0)
+        return -1;

     /* Migration events capability must always be enabled, clearing it from
      * migration capabilities bitmap makes sure it won't be touched anywhere
--
2.35.1
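[Editorial context: the JSON built by qemuMigrationCapsToJSON() and sent via qemuMonitorSetMigrationCapabilities() corresponds roughly to this QMP command (an illustrative sketch of the wire format, not a quote from the patch):

  {"execute": "migrate-set-capabilities",
   "arguments": {"capabilities": [{"capability": "events", "state": true}]}}

With this patch, a failure of that command makes qemuMigrationCapsCheck() return -1 instead of just logging a debug message.]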

On Thu, May 12, 2022 at 10:55:46 +0200, Jiri Denemark wrote:
All QEMU versions we care about support migration events and we should be able to enable the associated capability when connecting to the monitor. Failure to do so is thus considered fatal now.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
---
 src/qemu/qemu_migration_params.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)
Reviewed-by: Peter Krempa <pkrempa@redhat.com>

Every running QEMU process we are willing to reconnect to (i.e., at
least 3.1.0) supports migration events and we can assume the capability
is already enabled since the last time the libvirt daemon connected to
its monitor.

Well, it's not guaranteed though. If libvirt 1.2.17 or older was used to
start QEMU 3.1.0 or newer, migration events would not be enabled. And if
the user decides to upgrade libvirt from 1.2.17 to 8.4.0 while the QEMU
process is still running, they would not be able to migrate the domain
because of disabled migration events. I think we do not really need to
worry about this scenario as libvirt 1.2.17 is 7 years old while QEMU
3.1.0 was released only 3.5 years ago. Thus the chance someone would be
running such a configuration should be fairly small, and the combination
of upgrading 1.2.17 to 8.4.0 (or newer) with running domains should get
it pretty much to zero. The issue would disappear if the ancient libvirt
is first upgraded to something older than 8.4.0 and then to the current
libvirt.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
---
Notes:
    I was hoping we could do this without any downside, but it appeared
    this was not possible. In case someone still thinks this would be an
    issue, I can take an alternative solution and check whether migration
    events are enabled when reconnecting to the QEMU monitor.

 src/qemu/qemu_migration_params.c | 26 ++++++++++++++------------
 src/qemu/qemu_migration_params.h |  3 ++-
 src/qemu/qemu_process.c          | 14 +++++++++-----
 3 files changed, 25 insertions(+), 18 deletions(-)

diff --git a/src/qemu/qemu_migration_params.c b/src/qemu/qemu_migration_params.c
index 26754f03f8..95fd773645 100644
--- a/src/qemu/qemu_migration_params.c
+++ b/src/qemu/qemu_migration_params.c
@@ -1385,10 +1385,10 @@ qemuMigrationParamsParse(xmlXPathContextPtr ctxt,
 int
 qemuMigrationCapsCheck(virQEMUDriver *driver,
                        virDomainObj *vm,
-                       int asyncJob)
+                       int asyncJob,
+                       bool reconnect)
 {
     qemuDomainObjPrivate *priv = vm->privateData;
-    g_autoptr(virBitmap) migEvent = NULL;
     g_autoptr(virJSONValue) json = NULL;
     g_auto(GStrv) caps = NULL;
     char **capStr;
@@ -1419,22 +1419,24 @@ qemuMigrationCapsCheck(virQEMUDriver *driver,
         }
     }

-    migEvent = virBitmapNew(QEMU_MIGRATION_CAP_LAST);
+    if (!reconnect) {
+        g_autoptr(virBitmap) migEvent = virBitmapNew(QEMU_MIGRATION_CAP_LAST);

-    ignore_value(virBitmapSetBit(migEvent, QEMU_MIGRATION_CAP_EVENTS));
+        ignore_value(virBitmapSetBit(migEvent, QEMU_MIGRATION_CAP_EVENTS));

-    if (!(json = qemuMigrationCapsToJSON(migEvent, migEvent)))
-        return -1;
+        if (!(json = qemuMigrationCapsToJSON(migEvent, migEvent)))
+            return -1;

-    if (qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob) < 0)
-        return -1;
+        if (qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob) < 0)
+            return -1;

-    rc = qemuMonitorSetMigrationCapabilities(priv->mon, &json);
+        rc = qemuMonitorSetMigrationCapabilities(priv->mon, &json);

-    qemuDomainObjExitMonitor(vm);
+        qemuDomainObjExitMonitor(vm);

-    if (rc < 0)
-        return -1;
+        if (rc < 0)
+            return -1;
+    }

     /* Migration events capability must always be enabled, clearing it from
      * migration capabilities bitmap makes sure it won't be touched anywhere
diff --git a/src/qemu/qemu_migration_params.h b/src/qemu/qemu_migration_params.h
index 4a8815e776..a0909b9f3d 100644
--- a/src/qemu/qemu_migration_params.h
+++ b/src/qemu/qemu_migration_params.h
@@ -162,7 +162,8 @@ qemuMigrationParamsParse(xmlXPathContextPtr ctxt,
 int
 qemuMigrationCapsCheck(virQEMUDriver *driver,
                        virDomainObj *vm,
-                       int asyncJob);
+                       int asyncJob,
+                       bool reconnect);

 bool
 qemuMigrationCapsGet(virDomainObj *vm,
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 2b16298942..fd4db43a42 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -1914,8 +1914,12 @@ qemuProcessInitMonitor(virQEMUDriver *driver,


 static int
-qemuConnectMonitor(virQEMUDriver *driver, virDomainObj *vm, int asyncJob,
-                   bool retry, qemuDomainLogContext *logCtxt)
+qemuConnectMonitor(virQEMUDriver *driver,
+                   virDomainObj *vm,
+                   int asyncJob,
+                   bool retry,
+                   qemuDomainLogContext *logCtxt,
+                   bool reconnect)
 {
     qemuDomainObjPrivate *priv = vm->privateData;
     qemuMonitor *mon = NULL;
@@ -1968,7 +1972,7 @@ qemuConnectMonitor(virQEMUDriver *driver, virDomainObj *vm, int asyncJob,
     if (qemuProcessInitMonitor(driver, vm, asyncJob) < 0)
         return -1;

-    if (qemuMigrationCapsCheck(driver, vm, asyncJob) < 0)
+    if (qemuMigrationCapsCheck(driver, vm, asyncJob, reconnect) < 0)
         return -1;

     return 0;
@@ -2353,7 +2357,7 @@ qemuProcessWaitForMonitor(virQEMUDriver *driver,
     VIR_DEBUG("Connect monitor to vm=%p name='%s' retry=%d",
               vm, vm->def->name, retry);

-    if (qemuConnectMonitor(driver, vm, asyncJob, retry, logCtxt) < 0)
+    if (qemuConnectMonitor(driver, vm, asyncJob, retry, logCtxt, false) < 0)
         goto cleanup;

     /* Try to get the pty path mappings again via the monitor. This is much more
@@ -8773,7 +8777,7 @@ qemuProcessReconnect(void *opaque)
     tryMonReconn = true;

     /* XXX check PID liveliness & EXE path */
-    if (qemuConnectMonitor(driver, obj, VIR_ASYNC_JOB_NONE, retry, NULL) < 0)
+    if (qemuConnectMonitor(driver, obj, VIR_ASYNC_JOB_NONE, retry, NULL, true) < 0)
         goto error;

     priv->machineName = qemuDomainGetMachineName(obj);
--
2.35.1

On Thu, May 12, 2022 at 10:55:47 +0200, Jiri Denemark wrote:
Every running QEMU process we are willing to reconnect to (i.e., at least 3.1.0) supports migration events and we can assume the capability is already enabled since the last time the libvirt daemon connected to its monitor.
Well, it's not guaranteed though. If libvirt 1.2.17 or older was used to start QEMU 3.1.0 or newer, migration events would not be enabled. And if the user decides to upgrade libvirt from 1.2.17 to 8.4.0 while the QEMU process is still running, they would not be able to migrate the domain
But doesn't the function below actually enable the events? Or is there something else that needs to be done? Such that we simply could enable the events and be done with it?
because of disabled migration events. I think we do not really need to worry about this scenario as libvirt 1.2.17 is 7 years old while QEMU 3.1.0 was released only 3.5 years ago. Thus the chance someone would be running such a configuration should be fairly small, and the combination of upgrading 1.2.17 to 8.4.0 (or newer) with running domains should get it pretty much to zero. The issue would disappear if the ancient libvirt is first upgraded to something older than 8.4.0 and then to the current libvirt.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
---
Notes: I was hoping we could do this without any downside, but it appeared this was not possible. In case someone still thinks this would be an issue, I can take an alternative solution and check whether migration events are enabled when reconnecting to the QEMU monitor.
 src/qemu/qemu_migration_params.c | 26 ++++++++++++++------------
 src/qemu/qemu_migration_params.h |  3 ++-
 src/qemu/qemu_process.c          | 14 +++++++++-----
 3 files changed, 25 insertions(+), 18 deletions(-)
diff --git a/src/qemu/qemu_migration_params.c b/src/qemu/qemu_migration_params.c
index 26754f03f8..95fd773645 100644
--- a/src/qemu/qemu_migration_params.c
+++ b/src/qemu/qemu_migration_params.c
@@ -1385,10 +1385,10 @@ qemuMigrationParamsParse(xmlXPathContextPtr ctxt,
 int
 qemuMigrationCapsCheck(virQEMUDriver *driver,
                        virDomainObj *vm,
-                       int asyncJob)
+                       int asyncJob,
+                       bool reconnect)
 {
     qemuDomainObjPrivate *priv = vm->privateData;
-    g_autoptr(virBitmap) migEvent = NULL;
     g_autoptr(virJSONValue) json = NULL;
     g_auto(GStrv) caps = NULL;
     char **capStr;
@@ -1419,22 +1419,24 @@ qemuMigrationCapsCheck(virQEMUDriver *driver,
         }
     }
-    migEvent = virBitmapNew(QEMU_MIGRATION_CAP_LAST);
+    if (!reconnect) {
+        g_autoptr(virBitmap) migEvent = virBitmapNew(QEMU_MIGRATION_CAP_LAST);
-    ignore_value(virBitmapSetBit(migEvent, QEMU_MIGRATION_CAP_EVENTS));
+        ignore_value(virBitmapSetBit(migEvent, QEMU_MIGRATION_CAP_EVENTS));
Here you assert the flag ...
-    if (!(json = qemuMigrationCapsToJSON(migEvent, migEvent)))
-        return -1;
+        if (!(json = qemuMigrationCapsToJSON(migEvent, migEvent)))
+            return -1;
-    if (qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob) < 0)
-        return -1;
+        if (qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob) < 0)
+            return -1;
-    rc = qemuMonitorSetMigrationCapabilities(priv->mon, &json);
+        rc = qemuMonitorSetMigrationCapabilities(priv->mon, &json);
... and ask the monitor to enable it. To me that looks like we're good even with older qemus, right?
-    qemuDomainObjExitMonitor(vm);
+        qemuDomainObjExitMonitor(vm);
-    if (rc < 0)
-        return -1;
+        if (rc < 0)
+            return -1;
+    }

On Thu, May 12, 2022 at 12:40:54 +0200, Peter Krempa wrote:
On Thu, May 12, 2022 at 10:55:47 +0200, Jiri Denemark wrote:
Every running QEMU process we are willing to reconnect to (i.e., at least 3.1.0) supports migration events and we can assume the capability is already enabled since the last time the libvirt daemon connected to its monitor.
Well, it's not guaranteed though. If libvirt 1.2.17 or older was used to start QEMU 3.1.0 or newer, migration events would not be enabled. And if the user decides to upgrade libvirt from 1.2.17 to 8.4.0 while the QEMU process is still running, they would not be able to migrate the domain
But doesn't the function below actually enable the events? Or is there something else that needs to be done?
Such that we simply could enable the events and be done with it?
because of disabled migration events. I think we do not really need to worry about this scenario as libvirt 1.2.17 is 7 years old while QEMU 3.1.0 was released only 3.5 years ago. Thus the chance someone would be running such a configuration should be fairly small, and the combination of upgrading 1.2.17 to 8.4.0 (or newer) with running domains should get it pretty much to zero. The issue would disappear if the ancient libvirt is first upgraded to something older than 8.4.0 and then to the current libvirt.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
---
Notes: I was hoping we could do this without any downside, but it appeared this was not possible. In case someone still thinks this would be an issue, I can take an alternative solution and check whether migration events are enabled when reconnecting to the QEMU monitor.
Aaah, never mind. You want to avoid setting it all the time. Well, I think I would be okay with that; does our code handle the absence of events properly?

On Thu, May 12, 2022 at 12:41:57 +0200, Peter Krempa wrote:
On Thu, May 12, 2022 at 12:40:54 +0200, Peter Krempa wrote:
On Thu, May 12, 2022 at 10:55:47 +0200, Jiri Denemark wrote:
Every running QEMU process we are willing to reconnect to (i.e., at least 3.1.0) supports migration events and we can assume the capability is already enabled since the last time the libvirt daemon connected to its monitor.
Well, it's not guaranteed though. If libvirt 1.2.17 or older was used to start QEMU 3.1.0 or newer, migration events would not be enabled. And if the user decides to upgrade libvirt from 1.2.17 to 8.4.0 while the QEMU process is still running, they would not be able to migrate the domain
But doesn't the function below actually enable the events? Or is there something else that needs to be done?
Such that we simply could enable the events and be done with it?
because of disabled migration events. I think we do not really need to worry about this scenario as libvirt 1.2.17 is 7 years old while QEMU 3.1.0 was released only 3.5 years ago. Thus the chance someone would be running such a configuration should be fairly small, and the combination of upgrading 1.2.17 to 8.4.0 (or newer) with running domains should get it pretty much to zero. The issue would disappear if the ancient libvirt is first upgraded to something older than 8.4.0 and then to the current libvirt.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
---
Notes: I was hoping we could do this without any downside, but it appeared this was not possible. In case someone still thinks this would be an issue, I can take an alternative solution and check whether migration events are enabled when reconnecting to the QEMU monitor.
Aaah, never mind. You want to avoid setting it all the time. Well, I think I would be okay with that; does our code handle the absence of events properly?
I guess the migration would just hang waiting for an event that never comes. But I guess it should be fairly easy to check whether events are enabled before starting a migration and report an error instead of hanging.

Jirka
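[Editorial context: such a pre-migration check could presumably reuse the state QEMU already reports. An illustrative QMP exchange (abridged; not part of this series):

  {"execute": "query-migrate-capabilities"}
  {"return": [{"capability": "events", "state": true},
              {"capability": "postcopy-ram", "state": false}, ...]}

libvirt already issues this capability query when connecting to the monitor, so checking the reported "events" state on reconnect would fit in the same code path.]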

On Thu, May 12, 2022 at 13:30:57 +0200, Jiri Denemark wrote:
On Thu, May 12, 2022 at 12:41:57 +0200, Peter Krempa wrote:
On Thu, May 12, 2022 at 12:40:54 +0200, Peter Krempa wrote:
On Thu, May 12, 2022 at 10:55:47 +0200, Jiri Denemark wrote:
Every running QEMU process we are willing to reconnect to (i.e., at least 3.1.0) supports migration events and we can assume the capability is already enabled since the last time the libvirt daemon connected to its monitor.
Well, it's not guaranteed though. If libvirt 1.2.17 or older was used to start QEMU 3.1.0 or newer, migration events would not be enabled. And if the user decides to upgrade libvirt from 1.2.17 to 8.4.0 while the QEMU process is still running, they would not be able to migrate the domain
But doesn't the function below actually enable the events? Or is there something else that needs to be done?
Such that we simply could enable the events and be done with it?
because of disabled migration events. I think we do not really need to worry about this scenario as libvirt 1.2.17 is 7 years old while QEMU 3.1.0 was released only 3.5 years ago. Thus the chance someone would be running such a configuration should be fairly small, and the combination of upgrading 1.2.17 to 8.4.0 (or newer) with running domains should get it pretty much to zero. The issue would disappear if the ancient libvirt is first upgraded to something older than 8.4.0 and then to the current libvirt.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
---
Notes: I was hoping we could do this without any downside, but it appeared this was not possible. In case someone still thinks this would be an issue, I can take an alternative solution and check whether migration events are enabled when reconnecting to the QEMU monitor.
Aaah, never mind. You want to avoid setting it all the time. Well, I think I would be okay with that; does our code handle the absence of events properly?
I guess the migration would just hang waiting for an event that never comes. But I guess it should be fairly easy to check whether events are enabled before starting a migration and report an error instead of hanging.
Leaving the VM in a hanging migration would be bad, but if we can just refuse it I'd be okay with that.

On Thu, May 12, 2022 at 13:32:38 +0200, Peter Krempa wrote:
On Thu, May 12, 2022 at 13:30:57 +0200, Jiri Denemark wrote:
On Thu, May 12, 2022 at 12:41:57 +0200, Peter Krempa wrote:
On Thu, May 12, 2022 at 12:40:54 +0200, Peter Krempa wrote:
On Thu, May 12, 2022 at 10:55:47 +0200, Jiri Denemark wrote:
Every running QEMU process we are willing to reconnect to (i.e., at least 3.1.0) supports migration events and we can assume the capability is already enabled since the last time the libvirt daemon connected to its monitor.
Well, it's not guaranteed though. If libvirt 1.2.17 or older was used to start QEMU 3.1.0 or newer, migration events would not be enabled. And if the user decides to upgrade libvirt from 1.2.17 to 8.4.0 while the QEMU process is still running, they would not be able to migrate the domain
But doesn't the function below actually enable the events? Or is there something else that needs to be done?
Such that we simply could enable the events and be done with it?
because of disabled migration events. I think we do not really need to worry about this scenario as libvirt 1.2.17 is 7 years old while QEMU 3.1.0 was released only 3.5 years ago. Thus the chance someone would be running such a configuration should be fairly small, and the combination of upgrading 1.2.17 to 8.4.0 (or newer) with running domains should get it pretty much to zero. The issue would disappear if the ancient libvirt is first upgraded to something older than 8.4.0 and then to the current libvirt.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
---
Notes: I was hoping we could do this without any downside, but it appeared this was not possible. In case someone still thinks this would be an issue, I can take an alternative solution and check whether migration events are enabled when reconnecting to the QEMU monitor.
Aaah, never mind. You want to avoid setting it all the time. Well, I think I would be okay with that; does our code handle the absence of events properly?
I guess the migration would just hang waiting for an event that never comes. But I guess it should be fairly easy to check whether events are enabled before starting a migration and report an error instead of hanging.
Leaving the VM in a hanging migration would be bad, but if we can just refuse it I'd be okay with that.
Actually, doing so would not make any sense. We could just as easily enable the capability on reconnect if it's not enabled. Which is the alternative solution I described above in the Notes section :-)

Jirka

On Thu, May 12, 2022 at 14:07:10 +0200, Jiri Denemark wrote:
On Thu, May 12, 2022 at 13:32:38 +0200, Peter Krempa wrote:
On Thu, May 12, 2022 at 13:30:57 +0200, Jiri Denemark wrote:
[....]
I guess the migration would just hang waiting for an event that never comes. But I guess it should be fairly easy to check whether events are enabled before starting a migration and report an error instead of hanging.
Leaving the VM in a hanging migration would be bad, but if we can just refuse it I'd be okay with that.
Actually, doing so would not make any sense. We could just as easily enable the capability on reconnect if it's not enabled. Which is the alternative solution I described above in the Notes section :-)
I think I'm also starting to lean towards "this can't happen" and simply ignoring it ;) ... I'm not sure if I already gave it somewhere else in this thread:

Reviewed-by: Peter Krempa <pkrempa@redhat.com>

On Thu, May 12, 2022 at 12:40:54 +0200, Peter Krempa wrote:
On Thu, May 12, 2022 at 10:55:47 +0200, Jiri Denemark wrote:
Every running QEMU process we are willing to reconnect to (i.e., at least 3.1.0) supports migration events and we can assume the capability is already enabled since the last time the libvirt daemon connected to its monitor.
Well, it's not guaranteed though. If libvirt 1.2.17 or older was used to start QEMU 3.1.0 or newer, migration events would not be enabled. And if the user decides to upgrade libvirt from 1.2.17 to 8.4.0 while the QEMU process is still running, they would not be able to migrate the domain
But doesn't the function below actually enable the events? Or is there something else that needs to be done?
Such that we simply could enable the events and be done with it?
We can't blindly enable the events all the time as it is a migration capability and as such can only be set when there's no active migration.
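[Editorial context: assuming QEMU's usual behaviour, trying to flip the capability while a migration is running fails with an error along these lines (an illustrative exchange, not a quote from the thread), which is why it has to be set up front:

  {"execute": "migrate-set-capabilities",
   "arguments": {"capabilities": [{"capability": "events", "state": true}]}}
  {"error": {"class": "GenericError", "desc": "There's a migration process in progress"}}]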
because of disabled migration events. I think we do not really need to worry about this scenario as libvirt 1.2.17 is 7 years old while QEMU 3.1.0 was released only 3.5 years ago. Thus the chance someone would be running such a configuration should be fairly small, and the combination of upgrading 1.2.17 to 8.4.0 (or newer) with running domains should get it pretty much to zero. The issue would disappear if the ancient libvirt is first upgraded to something older than 8.4.0 and then to the current libvirt.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
---
Notes: I was hoping we could do this without any downside, but it appeared this was not possible. In case someone still thinks this would be an issue, I can take an alternative solution and check whether migration events are enabled when reconnecting to the QEMU monitor.
 src/qemu/qemu_migration_params.c | 26 ++++++++++++++------------
 src/qemu/qemu_migration_params.h |  3 ++-
 src/qemu/qemu_process.c          | 14 +++++++++-----
 3 files changed, 25 insertions(+), 18 deletions(-)
diff --git a/src/qemu/qemu_migration_params.c b/src/qemu/qemu_migration_params.c
index 26754f03f8..95fd773645 100644
--- a/src/qemu/qemu_migration_params.c
+++ b/src/qemu/qemu_migration_params.c
@@ -1385,10 +1385,10 @@ qemuMigrationParamsParse(xmlXPathContextPtr ctxt,
 int
 qemuMigrationCapsCheck(virQEMUDriver *driver,
                        virDomainObj *vm,
-                       int asyncJob)
+                       int asyncJob,
+                       bool reconnect)
 {
     qemuDomainObjPrivate *priv = vm->privateData;
-    g_autoptr(virBitmap) migEvent = NULL;
     g_autoptr(virJSONValue) json = NULL;
     g_auto(GStrv) caps = NULL;
     char **capStr;
@@ -1419,22 +1419,24 @@ qemuMigrationCapsCheck(virQEMUDriver *driver,
         }
     }
-    migEvent = virBitmapNew(QEMU_MIGRATION_CAP_LAST);
+    if (!reconnect) {
+        g_autoptr(virBitmap) migEvent = virBitmapNew(QEMU_MIGRATION_CAP_LAST);
-    ignore_value(virBitmapSetBit(migEvent, QEMU_MIGRATION_CAP_EVENTS));
+        ignore_value(virBitmapSetBit(migEvent, QEMU_MIGRATION_CAP_EVENTS));
Here you assert the flag ...
-    if (!(json = qemuMigrationCapsToJSON(migEvent, migEvent)))
-        return -1;
+        if (!(json = qemuMigrationCapsToJSON(migEvent, migEvent)))
+            return -1;
-    if (qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob) < 0)
-        return -1;
+        if (qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob) < 0)
+            return -1;
-    rc = qemuMonitorSetMigrationCapabilities(priv->mon, &json);
+        rc = qemuMonitorSetMigrationCapabilities(priv->mon, &json);
... and ask the monitor to enable it. To me that looks like we're good even with older qemus, right?
This all happens only when !reconnect, that is when we're starting a fresh QEMU. When reconnecting to an already running QEMU, we just expect the capability was enabled when it was first started. Which is true for any libvirt > 1.2.17 starting QEMU 3.1.0 and newer (and we won't even reconnect to an older one).

Jirka

On Thu, May 12, 2022 at 10:55:47AM +0200, Jiri Denemark wrote:
Every running QEMU process we are willing to reconnect to (i.e., at least 3.1.0) supports migration events and we can assume the capability is already enabled since the last time the libvirt daemon connected to its monitor.
Well, it's not guaranteed though. If libvirt 1.2.17 or older was used to start QEMU 3.1.0 or newer, migration events would not be enabled. And if the user decides to upgrade libvirt from 1.2.17 to 8.4.0 while the QEMU process is still running, they would not be able to migrate the domain because of disabled migration events. I think we do not really need to worry about this scenario as libvirt 1.2.17 is 7 years old while QEMU 3.1.0 was released only 3.5 years ago. Thus the chance someone would be running such a configuration should be fairly small, and the combination of upgrading 1.2.17 to 8.4.0 (or newer) with running domains should get it pretty much to zero. The issue would disappear if the ancient libvirt is first upgraded to something older than 8.4.0 and then to the current libvirt.
libvirt 1.2.17 was released in Jul 2015.
QEMU 3.1.0 was released in Dec 2018.

New QEMU releases have periodically introduced CLI changes that made them incompatible with older libvirt. Given that 3.5 year gap, it feels like there is a pretty decent chance that it was impossible to ever start QEMU 3.1.0 using libvirt 1.2.17. If so, we don't have anything to worry about from an upgrade POV.

With regards,
Daniel

--
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|