[libvirt] [PATCH v4 0/4] Implement migrate-getmaxdowntime command

Currently, the maximum tolerable downtime for a domain being migrated is write-only. This patch series implements a way to query that value nondestructively.

I've tried to incorporate the feedback from v3; however, I've left alone a couple of places where the feedback would have diverged from parallelism with the existing migrate-setdowntime help/info. Also, since the main use of this new command is likely to be (as in virt-manager) in combination with migrate-setmaxdowntime, I don't think it would really help to add an option to report the value in some unit other than milliseconds.

To-do: revisit setmaxdowntime to use the new preferred qemu interface.

Changes from v3:
* Incorporated suggestions, including adding an error message
* Added a patch to the news.xml file

[v1] https://www.redhat.com/archives/libvir-list/2017-July/msg00908.html
[v2] https://www.redhat.com/archives/libvir-list/2017-July/msg01010.html
[v3] https://www.redhat.com/archives/libvir-list/2017-July/msg01301.html

Scott Garfinkle (4):
  Add virDomainMigrateGetMaxDowntime public API
  qemu: Implement virDomainMigrateGetMaxDowntime
  virsh: Add support for virDomainMigrateGetMaxDowntime
  docs: document migrate-getmaxdowntime support

---
 docs/news.xml                    | 10 +++++++
 include/libvirt/libvirt-domain.h |  4 +++
 src/driver-hypervisor.h          |  6 +++++
 src/libvirt-domain.c             | 41 ++++++++++++++++++++++++++++
 src/libvirt_public.syms          |  4 +++
 src/qemu/qemu_driver.c           | 58 ++++++++++++++++++++++++++++++++++++++++
 src/qemu/qemu_monitor.h          |  3 +++
 src/qemu/qemu_monitor_json.c     |  4 +++
 src/remote/remote_driver.c       |  1 +
 src/remote/remote_protocol.x     | 16 ++++++++++-
 src/remote_protocol-structs      |  8 ++++++
 tools/virsh-domain.c             | 42 +++++++++++++++++++++++++++++
 tools/virsh.pod                  |  6 +++++
 13 files changed, 202 insertions(+), 1 deletion(-)

Add virDomainMigrateGetMaxDowntime to support querying maximum allowable
downtime during live migration.
---
 include/libvirt/libvirt-domain.h |  4 ++++
 src/driver-hypervisor.h          |  6 ++++++
 src/libvirt-domain.c             | 41 ++++++++++++++++++++++++++++++++++++++++
 src/libvirt_public.syms          |  4 ++++
 4 files changed, 55 insertions(+)

diff --git a/include/libvirt/libvirt-domain.h b/include/libvirt/libvirt-domain.h
index a3bb9cb..fae24ac 100644
--- a/include/libvirt/libvirt-domain.h
+++ b/include/libvirt/libvirt-domain.h
@@ -1039,6 +1039,10 @@ int virDomainMigrateToURI3(virDomainPtr domain,
                            unsigned int nparams,
                            unsigned int flags);
 
+int virDomainMigrateGetMaxDowntime(virDomainPtr domain,
+                                   unsigned long long *downtime,
+                                   unsigned int flags);
+
 int virDomainMigrateSetMaxDowntime (virDomainPtr domain,
                                     unsigned long long downtime,
                                     unsigned int flags);
diff --git a/src/driver-hypervisor.h b/src/driver-hypervisor.h
index 3053d7a..7b35e9e 100644
--- a/src/driver-hypervisor.h
+++ b/src/driver-hypervisor.h
@@ -697,6 +697,11 @@ typedef int
 (*virDrvDomainAbortJob)(virDomainPtr domain);
 
 typedef int
+(*virDrvDomainMigrateGetMaxDowntime)(virDomainPtr domain,
+                                     unsigned long long *downtime,
+                                     unsigned int flags);
+
+typedef int
 (*virDrvDomainMigrateSetMaxDowntime)(virDomainPtr domain,
                                      unsigned long long downtime,
                                      unsigned int flags);
@@ -1412,6 +1417,7 @@ struct _virHypervisorDriver {
     virDrvDomainGetJobInfo domainGetJobInfo;
     virDrvDomainGetJobStats domainGetJobStats;
     virDrvDomainAbortJob domainAbortJob;
+    virDrvDomainMigrateGetMaxDowntime domainMigrateGetMaxDowntime;
     virDrvDomainMigrateSetMaxDowntime domainMigrateSetMaxDowntime;
     virDrvDomainMigrateGetCompressionCache domainMigrateGetCompressionCache;
     virDrvDomainMigrateSetCompressionCache domainMigrateSetCompressionCache;
diff --git a/src/libvirt-domain.c b/src/libvirt-domain.c
index 87fca29..4d0ac30 100644
--- a/src/libvirt-domain.c
+++ b/src/libvirt-domain.c
@@ -8777,6 +8777,47 @@ virDomainMigrateSetMaxDowntime(virDomainPtr domain,
 
 
 /**
+ * virDomainMigrateGetMaxDowntime:
+ * @domain: a domain object
+ * @downtime: return value of the maximum tolerable downtime for live
+ *            migration, in milliseconds
+ * @flags: extra flags; not used yet, so callers should always pass 0
+ *
+ * Gets current maximum tolerable time for which the domain may be paused
+ * at the end of live migration.
+ *
+ * Returns 0 in case of success, -1 otherwise.
+ */
+int
+virDomainMigrateGetMaxDowntime(virDomainPtr domain,
+                               unsigned long long *downtime,
+                               unsigned int flags)
+{
+    virConnectPtr conn;
+
+    VIR_DOMAIN_DEBUG(domain, "downtime = %p, flags=%x", downtime, flags);
+
+    virResetLastError();
+
+    virCheckDomainReturn(domain, -1);
+    conn = domain->conn;
+
+    virCheckNonNullArgGoto(downtime, error);
+
+    if (conn->driver->domainMigrateGetMaxDowntime) {
+        if (conn->driver->domainMigrateGetMaxDowntime(domain, downtime, flags) < 0)
+            goto error;
+        return 0;
+    }
+
+    virReportUnsupportedError();
+ error:
+    virDispatchError(conn);
+    return -1;
+}
+
+
+/**
  * virDomainMigrateGetCompressionCache:
  * @domain: a domain object
  * @cacheSize: return value of current size of the cache (in bytes)
diff --git a/src/libvirt_public.syms b/src/libvirt_public.syms
index fac77fb..b55ca4b 100644
--- a/src/libvirt_public.syms
+++ b/src/libvirt_public.syms
@@ -768,4 +768,8 @@ LIBVIRT_3.4.0 {
         virStreamSparseSendAll;
 } LIBVIRT_3.1.0;
 
+LIBVIRT_3.7.0 {
+    global:
+        virDomainMigrateGetMaxDowntime;
+} LIBVIRT_3.4.0;
 
 # .... define new API here using predicted next version number ....
-- 
1.8.3.1

On 08/17/2017 06:17 PM, Scott Garfinkle wrote:
Add virDomainMigrateGetMaxDowntime to support querying maximum allowable
downtime during live migration.
---
 include/libvirt/libvirt-domain.h |  4 ++++
 src/driver-hypervisor.h          |  6 ++++++
 src/libvirt-domain.c             | 41 ++++++++++++++++++++++++++++++++++++++++
 src/libvirt_public.syms          |  4 ++++
 4 files changed, 55 insertions(+)
Reviewed-by: John Ferlan <jferlan@redhat.com>

John

Add code to support querying maximum allowable downtime during live
migration.
---
 src/qemu/qemu_driver.c       | 58 ++++++++++++++++++++++++++++++++++++++++
 src/qemu/qemu_monitor.h      |  3 +++
 src/qemu/qemu_monitor_json.c |  4 +++
 src/remote/remote_driver.c   |  1 +
 src/remote/remote_protocol.x | 16 +++++++++++-
 src/remote_protocol-structs  |  8 ++++++
 6 files changed, 89 insertions(+), 1 deletion(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index e9f07c6..ca7f531 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -13150,6 +13150,63 @@ qemuDomainMigrateSetMaxDowntime(virDomainPtr dom,
     return ret;
 }
 
+
+static int
+qemuDomainMigrateGetMaxDowntime(virDomainPtr dom,
+                                unsigned long long *downtime,
+                                unsigned int flags)
+{
+    virQEMUDriverPtr driver = dom->conn->privateData;
+    virDomainObjPtr vm;
+    qemuDomainObjPrivatePtr priv;
+    qemuMonitorMigrationParams migparams;
+    int ret = -1;
+
+    virCheckFlags(0, -1);
+
+    if (!(vm = qemuDomObjFromDomain(dom)))
+        goto cleanup;
+
+    if (virDomainMigrateGetMaxDowntimeEnsureACL(dom->conn, vm->def) < 0)
+        goto cleanup;
+
+    if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_QUERY) < 0)
+        goto cleanup;
+
+    if (!virDomainObjIsActive(vm)) {
+        virReportError(VIR_ERR_OPERATION_INVALID,
+                       "%s", _("domain is not running"));
+        goto endjob;
+    }
+
+    priv = vm->privateData;
+    qemuDomainObjEnterMonitor(driver, vm);
+
+    if (qemuMonitorGetMigrationParams(priv->mon, &migparams) >= 0 &&
+        migparams.downtimeLimit_set)
+    {
+        *downtime = migparams.downtimeLimit;
+        ret = 0;
+    }
+
+    if (ret < 0) {
+        virReportError(VIR_ERR_OPERATION_INVALID,
+            "%s", _("Querying migration downtime is not supported by "
+            "QEMU binary"));
+    }
+
+    if (qemuDomainObjExitMonitor(driver, vm) < 0)
+        ret = -1;
+
+ endjob:
+    qemuDomainObjEndJob(driver, vm);
+
+ cleanup:
+    virDomainObjEndAPI(&vm);
+    return ret;
+}
+
+
 static int
 qemuDomainMigrateGetCompressionCache(virDomainPtr dom,
                                      unsigned long long *cacheSize,
@@ -20829,6 +20886,7 @@ static virHypervisorDriver qemuHypervisorDriver = {
     .domainGetJobInfo = qemuDomainGetJobInfo, /* 0.7.7 */
     .domainGetJobStats = qemuDomainGetJobStats, /* 1.0.3 */
     .domainAbortJob = qemuDomainAbortJob, /* 0.7.7 */
+    .domainMigrateGetMaxDowntime = qemuDomainMigrateGetMaxDowntime, /* 3.7.0 */
     .domainMigrateSetMaxDowntime = qemuDomainMigrateSetMaxDowntime, /* 0.8.0 */
     .domainMigrateGetCompressionCache = qemuDomainMigrateGetCompressionCache, /* 1.0.3 */
     .domainMigrateSetCompressionCache = qemuDomainMigrateSetCompressionCache, /* 1.0.3 */
diff --git a/src/qemu/qemu_monitor.h b/src/qemu/qemu_monitor.h
index 31f7e97..9805a33 100644
--- a/src/qemu/qemu_monitor.h
+++ b/src/qemu/qemu_monitor.h
@@ -627,6 +627,9 @@ struct _qemuMonitorMigrationParams {
      * whereas, some string value indicates we can support setting/clearing */
     char *migrateTLSAlias;
     char *migrateTLSHostname;
+
+    bool downtimeLimit_set;
+    unsigned long long downtimeLimit;
 };
 
 int qemuMonitorGetMigrationParams(qemuMonitorPtr mon,
diff --git a/src/qemu/qemu_monitor_json.c b/src/qemu/qemu_monitor_json.c
index b8a6815..df5fb7c 100644
--- a/src/qemu/qemu_monitor_json.c
+++ b/src/qemu/qemu_monitor_json.c
@@ -2705,6 +2705,10 @@ qemuMonitorJSONGetMigrationParams(qemuMonitorPtr mon,
 
 #undef PARSE
 
+    if (virJSONValueObjectGetNumberUlong(result, "downtime-limit",
+                                         &params->downtimeLimit) == 0)
+        params->downtimeLimit_set = true;
+
     if ((tlsStr = virJSONValueObjectGetString(result, "tls-creds"))) {
         if (VIR_STRDUP(params->migrateTLSAlias, tlsStr) < 0)
             goto cleanup;
diff --git a/src/remote/remote_driver.c b/src/remote/remote_driver.c
index a57d25f..027b073 100644
--- a/src/remote/remote_driver.c
+++ b/src/remote/remote_driver.c
@@ -8400,6 +8400,7 @@ static virHypervisorDriver hypervisor_driver = {
     .domainGetJobInfo = remoteDomainGetJobInfo, /* 0.7.7 */
     .domainGetJobStats = remoteDomainGetJobStats, /* 1.0.3 */
     .domainAbortJob = remoteDomainAbortJob, /* 0.7.7 */
+    .domainMigrateGetMaxDowntime = remoteDomainMigrateGetMaxDowntime, /* 3.7.0 */
     .domainMigrateSetMaxDowntime = remoteDomainMigrateSetMaxDowntime, /* 0.8.0 */
     .domainMigrateGetCompressionCache = remoteDomainMigrateGetCompressionCache, /* 1.0.3 */
     .domainMigrateSetCompressionCache = remoteDomainMigrateSetCompressionCache, /* 1.0.3 */
diff --git a/src/remote/remote_protocol.x b/src/remote/remote_protocol.x
index 0943208..2d49ceb 100644
--- a/src/remote/remote_protocol.x
+++ b/src/remote/remote_protocol.x
@@ -2326,6 +2326,15 @@ struct remote_domain_abort_job_args {
 };
 
+struct remote_domain_migrate_get_max_downtime_args {
+    remote_nonnull_domain dom;
+    unsigned int flags;
+};
+
+struct remote_domain_migrate_get_max_downtime_ret {
+     unsigned hyper downtime; /* insert@1 */
+};
+
 struct remote_domain_migrate_set_max_downtime_args {
     remote_nonnull_domain dom;
     unsigned hyper downtime;
@@ -6064,7 +6073,12 @@ enum remote_procedure {
      * @generate: both
      * @acl: domain:write
      */
-    REMOTE_PROC_DOMAIN_SET_BLOCK_THRESHOLD = 386
+    REMOTE_PROC_DOMAIN_SET_BLOCK_THRESHOLD = 386,
+
+    /**
+     * @generate: both
+     * @acl: domain:migrate
+     */
+    REMOTE_PROC_DOMAIN_MIGRATE_GET_MAX_DOWNTIME = 387
 };
diff --git a/src/remote_protocol-structs b/src/remote_protocol-structs
index a46fe37..cea7664 100644
--- a/src/remote_protocol-structs
+++ b/src/remote_protocol-structs
@@ -1773,6 +1773,13 @@ struct remote_domain_get_job_stats_ret {
 struct remote_domain_abort_job_args {
         remote_nonnull_domain      dom;
 };
+struct remote_domain_migrate_get_max_downtime_args {
+        remote_nonnull_domain      dom;
+        u_int                      flags;
+};
+struct remote_domain_migrate_get_max_downtime_ret {
+        uint64_t                   downtime;
+};
 struct remote_domain_migrate_set_max_downtime_args {
         remote_nonnull_domain      dom;
         uint64_t                   downtime;
@@ -3233,4 +3240,5 @@ enum remote_procedure {
         REMOTE_PROC_DOMAIN_SET_VCPU = 384,
         REMOTE_PROC_DOMAIN_EVENT_BLOCK_THRESHOLD = 385,
         REMOTE_PROC_DOMAIN_SET_BLOCK_THRESHOLD = 386,
+        REMOTE_PROC_DOMAIN_GET_MAX_DOWNTIME = 387,
 };
-- 
1.8.3.1
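[Editorial note on the monitor change above: qemuMonitorJSONGetMigrationParams treats a missing "downtime-limit" key as "not set" rather than as an error, via the downtimeLimit_set flag. A minimal Python sketch of that parsing rule; the sample reply shape is an illustration of what QEMU's query-migrate-parameters returns, not an exhaustive one:]

```python
import json

def parse_downtime_limit(qmp_reply):
    """Return (is_set, value) for the 'downtime-limit' migration parameter.

    Mirrors the downtimeLimit_set/downtimeLimit pair added to
    _qemuMonitorMigrationParams: absence of the key is not a failure,
    it simply leaves the flag false so callers can distinguish
    'unsupported by this QEMU' from a real monitor error.
    """
    result = qmp_reply.get("return", {})
    if "downtime-limit" in result:
        return True, int(result["downtime-limit"])
    return False, None

# Illustrative reply; field names other than "downtime-limit" are assumptions.
reply = json.loads('{"return": {"downtime-limit": 300, "max-bandwidth": 33554432}}')
print(parse_downtime_limit(reply))           # (True, 300)
print(parse_downtime_limit({"return": {}}))  # (False, None)
```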

On 08/17/2017 06:17 PM, Scott Garfinkle wrote:
Add code to support querying maximum allowable downtime during live
migration.
---
 src/qemu/qemu_driver.c       | 58 ++++++++++++++++++++++++++++++++++++++++
 src/qemu/qemu_monitor.h      |  3 +++
 src/qemu/qemu_monitor_json.c |  4 +++
 src/remote/remote_driver.c   |  1 +
 src/remote/remote_protocol.x | 16 +++++++++++-
 src/remote_protocol-structs  |  8 ++++++
 6 files changed, 89 insertions(+), 1 deletion(-)
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index e9f07c6..ca7f531 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -13150,6 +13150,63 @@ qemuDomainMigrateSetMaxDowntime(virDomainPtr dom,
     return ret;
 }
 
+
+static int
+qemuDomainMigrateGetMaxDowntime(virDomainPtr dom,
+                                unsigned long long *downtime,
+                                unsigned int flags)
+{
+    virQEMUDriverPtr driver = dom->conn->privateData;
+    virDomainObjPtr vm;
+    qemuDomainObjPrivatePtr priv;
+    qemuMonitorMigrationParams migparams = { 0 };
+    int ret = -1;
+
+    virCheckFlags(0, -1);
+
+    if (!(vm = qemuDomObjFromDomain(dom)))
+        goto cleanup;
+
+    if (virDomainMigrateGetMaxDowntimeEnsureACL(dom->conn, vm->def) < 0)
+        goto cleanup;
+
+    if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_QUERY) < 0)
+        goto cleanup;
+
+    if (!virDomainObjIsActive(vm)) {
+        virReportError(VIR_ERR_OPERATION_INVALID,
+                       "%s", _("domain is not running"));
+        goto endjob;
+    }
+
+    priv = vm->privateData;
+    qemuDomainObjEnterMonitor(driver, vm);
+
+    if (qemuMonitorGetMigrationParams(priv->mon, &migparams) >= 0 &&
+        migparams.downtimeLimit_set)
+    {
+        *downtime = migparams.downtimeLimit;
+        ret = 0;
+    }
The { should have been on the previous line, and the indent should have been 4 spaces. The checks are also only almost right: as written, this will emit the following message and overwrite whatever error caused qemuMonitorGetMigrationParams to fail for some other reason. The only case in which the following message should be generated is a 0 return with downtimeLimit_set false.
+
+    if (ret < 0) {
+        virReportError(VIR_ERR_OPERATION_INVALID,
+            "%s", _("Querying migration downtime is not supported by "
+            "QEMU binary"));
Indents for lines 2 and 3 were off on this as well. I altered the sequence to be as I suggested in the v3 review:

    if (qemuMonitorGetMigrationParams(priv->mon, &migparams) == 0) {
        if (migparams.downtimeLimit_set) {
            *downtime = migparams.downtimeLimit;
            ret = 0;
        } else {
            virReportError(VIR_ERR_OPERATION_INVALID, "%s",
                           _("Querying migration downtime is not supported by "
                             "QEMU binary"));
        }
    }

[pardon the email client line wrap on the string]
+    }
+
+    if (qemuDomainObjExitMonitor(driver, vm) < 0)
+        ret = -1;
+
+ endjob:
+    qemuDomainObjEndJob(driver, vm);
+
+ cleanup:
+    virDomainObjEndAPI(&vm);
+    return ret;
+}
+
+
 static int
 qemuDomainMigrateGetCompressionCache(virDomainPtr dom,
                                      unsigned long long *cacheSize,
@@ -20829,6 +20886,7 @@ static virHypervisorDriver qemuHypervisorDriver = {
     .domainGetJobInfo = qemuDomainGetJobInfo, /* 0.7.7 */
     .domainGetJobStats = qemuDomainGetJobStats, /* 1.0.3 */
     .domainAbortJob = qemuDomainAbortJob, /* 0.7.7 */
+    .domainMigrateGetMaxDowntime = qemuDomainMigrateGetMaxDowntime, /* 3.7.0 */
     .domainMigrateSetMaxDowntime = qemuDomainMigrateSetMaxDowntime, /* 0.8.0 */
     .domainMigrateGetCompressionCache = qemuDomainMigrateGetCompressionCache, /* 1.0.3 */
     .domainMigrateSetCompressionCache = qemuDomainMigrateSetCompressionCache, /* 1.0.3 */
diff --git a/src/qemu/qemu_monitor.h b/src/qemu/qemu_monitor.h
index 31f7e97..9805a33 100644
--- a/src/qemu/qemu_monitor.h
+++ b/src/qemu/qemu_monitor.h
@@ -627,6 +627,9 @@ struct _qemuMonitorMigrationParams {
      * whereas, some string value indicates we can support setting/clearing */
     char *migrateTLSAlias;
     char *migrateTLSHostname;
+
+    bool downtimeLimit_set;
+    unsigned long long downtimeLimit;
 };
 
 int qemuMonitorGetMigrationParams(qemuMonitorPtr mon,
diff --git a/src/qemu/qemu_monitor_json.c b/src/qemu/qemu_monitor_json.c
index b8a6815..df5fb7c 100644
--- a/src/qemu/qemu_monitor_json.c
+++ b/src/qemu/qemu_monitor_json.c
@@ -2705,6 +2705,10 @@ qemuMonitorJSONGetMigrationParams(qemuMonitorPtr mon,
 
 #undef PARSE
 
+    if (virJSONValueObjectGetNumberUlong(result, "downtime-limit",
+                                         &params->downtimeLimit) == 0)
+        params->downtimeLimit_set = true;
+
     if ((tlsStr = virJSONValueObjectGetString(result, "tls-creds"))) {
         if (VIR_STRDUP(params->migrateTLSAlias, tlsStr) < 0)
             goto cleanup;
diff --git a/src/remote/remote_driver.c b/src/remote/remote_driver.c
index a57d25f..027b073 100644
--- a/src/remote/remote_driver.c
+++ b/src/remote/remote_driver.c
@@ -8400,6 +8400,7 @@ static virHypervisorDriver hypervisor_driver = {
     .domainGetJobInfo = remoteDomainGetJobInfo, /* 0.7.7 */
     .domainGetJobStats = remoteDomainGetJobStats, /* 1.0.3 */
     .domainAbortJob = remoteDomainAbortJob, /* 0.7.7 */
+    .domainMigrateGetMaxDowntime = remoteDomainMigrateGetMaxDowntime, /* 3.7.0 */
     .domainMigrateSetMaxDowntime = remoteDomainMigrateSetMaxDowntime, /* 0.8.0 */
     .domainMigrateGetCompressionCache = remoteDomainMigrateGetCompressionCache, /* 1.0.3 */
     .domainMigrateSetCompressionCache = remoteDomainMigrateSetCompressionCache, /* 1.0.3 */
diff --git a/src/remote/remote_protocol.x b/src/remote/remote_protocol.x
index 0943208..2d49ceb 100644
--- a/src/remote/remote_protocol.x
+++ b/src/remote/remote_protocol.x
@@ -2326,6 +2326,15 @@ struct remote_domain_abort_job_args {
 };
 
+struct remote_domain_migrate_get_max_downtime_args {
+    remote_nonnull_domain dom;
+    unsigned int flags;
+};
+
+struct remote_domain_migrate_get_max_downtime_ret {
+     unsigned hyper downtime; /* insert@1 */
+};
+
 struct remote_domain_migrate_set_max_downtime_args {
     remote_nonnull_domain dom;
     unsigned hyper downtime;
@@ -6064,7 +6073,12 @@ enum remote_procedure {
      * @generate: both
      * @acl: domain:write
      */
-    REMOTE_PROC_DOMAIN_SET_BLOCK_THRESHOLD = 386
+    REMOTE_PROC_DOMAIN_SET_BLOCK_THRESHOLD = 386,
+
+    /**
+     * @generate: both
+     * @acl: domain:migrate
+     */
+    REMOTE_PROC_DOMAIN_MIGRATE_GET_MAX_DOWNTIME = 387
 };
diff --git a/src/remote_protocol-structs b/src/remote_protocol-structs
index a46fe37..cea7664 100644
--- a/src/remote_protocol-structs
+++ b/src/remote_protocol-structs
@@ -1773,6 +1773,13 @@ struct remote_domain_get_job_stats_ret {
 struct remote_domain_abort_job_args {
         remote_nonnull_domain      dom;
 };
+struct remote_domain_migrate_get_max_downtime_args {
+        remote_nonnull_domain      dom;
+        u_int                      flags;
+};
+struct remote_domain_migrate_get_max_downtime_ret {
+        uint64_t                   downtime;
+};
 struct remote_domain_migrate_set_max_downtime_args {
         remote_nonnull_domain      dom;
         uint64_t                   downtime;
@@ -3233,4 +3240,5 @@ enum remote_procedure {
         REMOTE_PROC_DOMAIN_SET_VCPU = 384,
         REMOTE_PROC_DOMAIN_EVENT_BLOCK_THRESHOLD = 385,
         REMOTE_PROC_DOMAIN_SET_BLOCK_THRESHOLD = 386,
+        REMOTE_PROC_DOMAIN_GET_MAX_DOWNTIME = 387,
 };
Again, the symbol should have been:

    REMOTE_PROC_DOMAIN_MIGRATE_GET_MAX_DOWNTIME = 387,

I'll fix the above before pushing...

Reviewed-by: John Ferlan <jferlan@redhat.com>

John
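[Editorial note: the error-ordering point in this review generalizes beyond C. A failure of the query itself must be propagated untouched; the "not supported" report belongs only to the success-but-field-missing case. A language-neutral Python sketch, where `query_params` is a hypothetical callable standing in for qemuMonitorGetMigrationParams:]

```python
class MonitorError(Exception):
    """Stands in for an error already reported by the monitor layer."""

def get_max_downtime(query_params):
    """Sketch of the suggested control flow.

    query_params() returns a dict of migration parameters or raises
    MonitorError. Only a *successful* query that lacks 'downtime-limit'
    means this QEMU binary does not support the feature; any other
    failure keeps its original error rather than being overwritten.
    """
    params = query_params()  # a MonitorError here propagates as-is
    if "downtime-limit" not in params:
        raise MonitorError(
            "Querying migration downtime is not supported by QEMU binary")
    return params["downtime-limit"]

print(get_max_downtime(lambda: {"downtime-limit": 500}))  # 500
```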

Implement a migrate-getmaxdowntime command to complement
migrate-setmaxdowntime.
---
 tools/virsh-domain.c | 42 ++++++++++++++++++++++++++++++++++++++++++
 tools/virsh.pod      |  6 ++++++
 2 files changed, 48 insertions(+)

diff --git a/tools/virsh-domain.c b/tools/virsh-domain.c
index 40f1673..012c96e 100644
--- a/tools/virsh-domain.c
+++ b/tools/virsh-domain.c
@@ -10719,6 +10719,42 @@ cmdMigrateSetMaxDowntime(vshControl *ctl, const vshCmd *cmd)
     return ret;
 }
 
+ /*
+ * "migrate-getmaxdowntime" command
+ */
+static const vshCmdInfo info_migrate_getmaxdowntime[] = {
+    {.name = "help",
+     .data = N_("get maximum tolerable downtime")
+    },
+    {.name = "desc",
+     .data = N_("Get maximum tolerable downtime of a domain which is being live-migrated to another host.")
+    },
+    {.name = NULL}
+};
+
+static const vshCmdOptDef opts_migrate_getmaxdowntime[] = {
+    VIRSH_COMMON_OPT_DOMAIN_FULL,
+    {.name = NULL}
+};
+
+static bool
+cmdMigrateGetMaxDowntime(vshControl *ctl, const vshCmd *cmd)
+{
+    virDomainPtr dom;
+    unsigned long long downtime;
+    bool ret = false;
+
+    if ((dom = virshCommandOptDomain(ctl, cmd, NULL)) != NULL) {
+        if (virDomainMigrateGetMaxDowntime(dom, &downtime, 0) >= 0) {
+            vshPrint(ctl, "%llu\n", downtime);
+            ret = true;
+        }
+        virshDomainFree(dom);
+    }
+
+    return ret;
+}
+
 /*
  * "migrate-compcache" command
  */
@@ -13875,6 +13911,12 @@ const vshCmdDef domManagementCmds[] = {
      .info = info_migrate_setmaxdowntime,
      .flags = 0
     },
+    {.name = "migrate-getmaxdowntime",
+     .handler = cmdMigrateGetMaxDowntime,
+     .opts = opts_migrate_getmaxdowntime,
+     .info = info_migrate_getmaxdowntime,
+     .flags = 0
+    },
     {.name = "migrate-compcache",
      .handler = cmdMigrateCompCache,
      .opts = opts_migrate_compcache,
diff --git a/tools/virsh.pod b/tools/virsh.pod
index 43d6f0c..15a4143 100644
--- a/tools/virsh.pod
+++ b/tools/virsh.pod
@@ -1857,6 +1857,12 @@ Set maximum tolerable downtime for a domain which is being live-migrated to
 another host. The I<downtime> is a number of milliseconds the guest is allowed
 to be down at the end of live migration.
 
+=item B<migrate-getmaxdowntime> I<domain>
+
+Get the maximum tolerable downtime for a domain which is being live-migrated to
+another host. This is the number of milliseconds the guest is allowed
+to be down at the end of live migration.
+
 =item B<migrate-compcache> I<domain> [I<--size> B<bytes>]
 
 Sets and/or gets size of the cache (in bytes) used for compressing repeatedly
-- 
1.8.3.1
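[Editorial note: the cover letter says the getter is mostly useful paired with migrate-setmaxdowntime, as virt-manager does. A hypothetical sketch of that read-modify-write pattern; `get_downtime` and `set_downtime` are injected stand-ins for the virDomainMigrateGetMaxDowntime/SetMaxDowntime calls, and the scaling policy is purely illustrative:]

```python
def tighten_downtime(get_downtime, set_downtime, factor=0.5, floor_ms=10):
    """Read the current max-downtime limit (milliseconds), scale it down,
    and write the new value back. Returns the value that was set.

    Both parameters are callables so the pattern can be shown without a
    live libvirt connection; in real code they would wrap the two API
    calls on the same virDomainPtr.
    """
    current = get_downtime()
    new = max(int(current * factor), floor_ms)  # never drop below the floor
    set_downtime(new)
    return new
```

The point of the pairing: because the limit is finally readable, a tool can halve (or otherwise adjust) the existing value instead of blindly overwriting whatever was configured before.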

On 08/17/2017 06:17 PM, Scott Garfinkle wrote:
Implement a migrate-getmaxdowntime command to complement
migrate-setmaxdowntime.
---
 tools/virsh-domain.c | 42 ++++++++++++++++++++++++++++++++++++++++++
 tools/virsh.pod      |  6 ++++++
 2 files changed, 48 insertions(+)
Please don't forget to post a patch to add the migrate-compcache that you rightly removed from this particular patch since it was "missing" from some other original patch.
diff --git a/tools/virsh-domain.c b/tools/virsh-domain.c
index 40f1673..012c96e 100644
--- a/tools/virsh-domain.c
+++ b/tools/virsh-domain.c
@@ -10719,6 +10719,42 @@ cmdMigrateSetMaxDowntime(vshControl *ctl, const vshCmd *cmd)
     return ret;
 }
 
+ /*
Bad indent here.
+ * "migrate-getmaxdowntime" command
+ */
+static const vshCmdInfo info_migrate_getmaxdowntime[] = {
+    {.name = "help",
+     .data = N_("get maximum tolerable downtime")
+    },
+    {.name = "desc",
+     .data = N_("Get maximum tolerable downtime of a domain which is being live-migrated to another host.")
+    },
+    {.name = NULL}
+};
+
+static const vshCmdOptDef opts_migrate_getmaxdowntime[] = {
+    VIRSH_COMMON_OPT_DOMAIN_FULL,
+    {.name = NULL}
+};
+
+static bool
+cmdMigrateGetMaxDowntime(vshControl *ctl, const vshCmd *cmd)
+{
+    virDomainPtr dom;
+    unsigned long long downtime;
+    bool ret = false;
+
+    if ((dom = virshCommandOptDomain(ctl, cmd, NULL)) != NULL) {
+        if (virDomainMigrateGetMaxDowntime(dom, &downtime, 0) >= 0) {
+            vshPrint(ctl, "%llu\n", downtime);
+            ret = true;
+        }
+        virshDomainFree(dom);
+    }
+
+    return ret;
Although technically correct, keeping the prevailing style of function logic is preferred, thus I'll adjust to:

    if (!(dom = virshCommandOptDomain(ctl, cmd, NULL)))
        return false;

    if (virDomainMigrateGetMaxDowntime(dom, &downtime, 0) < 0)
        goto done;

    vshPrint(ctl, "%llu\n", downtime);
    ret = true;

 done:
    virshDomainFree(dom);
    return ret;

I'll adjust the code before pushing.

Reviewed-by: John Ferlan <jferlan@redhat.com>

John
+}
+
 /*
  * "migrate-compcache" command
  */
@@ -13875,6 +13911,12 @@ const vshCmdDef domManagementCmds[] = {
      .info = info_migrate_setmaxdowntime,
      .flags = 0
     },
+    {.name = "migrate-getmaxdowntime",
+     .handler = cmdMigrateGetMaxDowntime,
+     .opts = opts_migrate_getmaxdowntime,
+     .info = info_migrate_getmaxdowntime,
+     .flags = 0
+    },
     {.name = "migrate-compcache",
      .handler = cmdMigrateCompCache,
      .opts = opts_migrate_compcache,
diff --git a/tools/virsh.pod b/tools/virsh.pod
index 43d6f0c..15a4143 100644
--- a/tools/virsh.pod
+++ b/tools/virsh.pod
@@ -1857,6 +1857,12 @@ Set maximum tolerable downtime for a domain which is being live-migrated to
 another host.
+=item B<migrate-getmaxdowntime> I<domain>
+
+Get the maximum tolerable downtime for a domain which is being live-migrated to
+another host. This is the number of milliseconds the guest is allowed
+to be down at the end of live migration.
+
 =item B<migrate-compcache> I<domain> [I<--size> B<bytes>]
Sets and/or gets size of the cache (in bytes) used for compressing repeatedly

---
 docs/news.xml | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/docs/news.xml b/docs/news.xml
index 26bd9bd..640e5f5 100644
--- a/docs/news.xml
+++ b/docs/news.xml
@@ -35,6 +35,16 @@
 <libvirt>
   <release version="v3.7.0" date="unreleased">
     <section title="New features">
+      <change>
+        <summary>
+          qemu: Add migrate-getmaxdowntime command
+        </summary>
+        <description>
+          Currently, the maximum tolerable downtime for a domain being migrated
+          is write-only from libvirt, via migrate-setmaxdowntime. This
+          implements a complementary migrate-getmaxdowntime command.
+        </description>
+      </change>
       <change>
         <summary>
           bhyve: Support autoport for VNC ports
-- 
1.8.3.1

On 08/17/2017 06:17 PM, Scott Garfinkle wrote:
---
 docs/news.xml | 10 ++++++++++
 1 file changed, 10 insertions(+)
Reviewed-by: John Ferlan <jferlan@redhat.com>

John

I'll push these a bit later - just to be sure you're OK with the adjustments and to ensure someone else doesn't pick up on something I've perhaps missed...

On 08/17/2017 04:17 PM, Scott Garfinkle wrote:
Currently, the maximum tolerable downtime for a domain being migrated is write-only. This patch implements a way to query that value nondestructively.
I'd like to register my support for the concept in general. It seems odd to have something you can write but not read. For what it's worth, I took a look at the patches and didn't see anything horribly wrong, but I only dabble in libvirt so that's not worth much. :)

Chris

On 08/17/2017 06:17 PM, Scott Garfinkle wrote:
Currently, the maximum tolerable downtime for a domain being migrated is write-only. This patch implements a way to query that value nondestructively.
I've tried to incorporate the feedback from v3; however, I've left alone a couple of places where the feedback would have diverged from parallelism with the existing migrate-setdowntime help/info. Also, since the main use of this new command is likely to be (as in virt-manager) in combination with migrate-setmaxdowntime, I don't think it would really help to add an option to report the value in some unit other than milliseconds.
To-do: Ought to revisit setmaxdowntime to use the new preferred qemu interface.
Changes from [v3]:
* Incorporated suggestions, including adding an error message
* Added a patch to the news.xml file
[v1] https://www.redhat.com/archives/libvir-list/2017-July/msg00908.html
[v2] https://www.redhat.com/archives/libvir-list/2017-July/msg01010.html
[v3] https://www.redhat.com/archives/libvir-list/2017-July/msg01301.html
Scott Garfinkle (4):
  Add virDomainMigrateGetMaxDowntime public API
  qemu: Implement virDomainMigrateGetMaxDowntime
  virsh: Add support for virDomainMigrateGetMaxDowntime
  docs: document migrate-getmaxdowntime support
---
 docs/news.xml                    | 10 +++++++
 include/libvirt/libvirt-domain.h |  4 +++
 src/driver-hypervisor.h          |  6 +++++
 src/libvirt-domain.c             | 41 ++++++++++++++++++++++++++++
 src/libvirt_public.syms          |  4 +++
 src/qemu/qemu_driver.c           | 58 ++++++++++++++++++++++++++++++++++++++++
 src/qemu/qemu_monitor.h          |  3 +++
 src/qemu/qemu_monitor_json.c     |  4 +++
 src/remote/remote_driver.c       |  1 +
 src/remote/remote_protocol.x     | 16 ++++++++++-
 src/remote_protocol-structs      |  8 ++++++
 tools/virsh-domain.c             | 42 +++++++++++++++++++++++++++++
 tools/virsh.pod                  |  6 +++++
 13 files changed, 202 insertions(+), 1 deletion(-)
--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list
This is now pushed - I also generated/sent python and perl binding patches. Tks - John
participants (3):
- Chris Friesen
- John Ferlan
- Scott Garfinkle