[libvirt] [PATCH 0/8] Add virDomainMigrateGetMaxSpeed API

As per previous request [1], this patch series introduces the
virDomainMigrateGetMaxSpeed API.

Patches 1-5 contain the usual new API stuff. In patch 6,
qemuDomainMigrateSetMaxSpeed is made to work with inactive domains and
set the new value in domain conf. Patch 7 changes migration speed to
unlimited when target is a file *and* no user-defined migration speed
is set, and reverts to the previous value after migration completes.
Patch 8 considers user-defined migration speed in qemuMigrationRun()
when explicit bandwidth is not provided in the 'resource' parameter.

Note that a patch to fix the remote protocol generator [2] is required
for this patch series.

[1] https://www.redhat.com/archives/libvir-list/2011-August/msg00224.html
[2] https://www.redhat.com/archives/libvir-list/2011-August/msg01367.html

Jim Fehlig (8):
  Add on_migrate element to domainXML
  Add public API for getting migration speed
  Add max migration bandwidth to domain_conf
  Impl virDomainMigrateGetMaxSpeed in qemu driver
  virsh: Expose virDomainMigrateGetMaxSpeed API
  Save migration speed to domain conf in qemuDomainMigrateSetMaxSpeed
  Set qemu migration speed unlimited when migrating to file
  Use max speed specified in domain conf when migrating

 docs/apibuild.py                            |    3 +-
 docs/formatdomain.html.in                   |   21 +++++++
 docs/schemas/domain.rng                     |   13 +++++
 include/libvirt/libvirt.h.in                |    4 ++
 python/generator.py                         |    1 +
 python/libvirt-override-api.xml             |    6 ++
 python/libvirt-override.c                   |   24 ++++++++
 src/conf/domain_conf.c                      |   11 ++++
 src/conf/domain_conf.h                      |    2 +
 src/driver.h                                |    6 ++
 src/libvirt.c                               |   51 ++++++++++++++++++
 src/libvirt_public.syms                     |    5 ++
 src/qemu/qemu_driver.c                      |   76 ++++++++++++++++-------
 src/qemu/qemu_migration.c                   |   28 +++++++++-
 src/qemu/qemu_migration.h                   |    3 +
 src/remote/remote_driver.c                  |    1 +
 src/remote/remote_protocol.x                |   13 ++++-
 src/remote_protocol-structs                 |    9 +++
 src/rpc/gendispatch.pl                      |    1 +
 tests/domainschemadata/migration-params.xml |   34 ++++++++++++
 tools/virsh.c                               |   41 ++++++++++++++
 tools/virsh.pod                             |    4 ++
 22 files changed, 333 insertions(+), 24 deletions(-)
 create mode 100644 tests/domainschemadata/migration-params.xml

-- 
1.7.5.4
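As an editorial aside for readers following the series: the bandwidth-selection behavior described for patches 7-8 can be sketched as the following precedence rule. This is a hypothetical Python sketch, not libvirt code; the function name and parameters are invented for illustration.

```python
INT64_MAX = 2**63 - 1
UNLIMITED_MIB_PER_S = INT64_MAX // (1024 * 1024)  # effectively no cap

def effective_migration_speed(resource, configured, to_file=False,
                              hypervisor_default=32):
    """Pick the migration speed in MiB/s, mirroring the precedence the
    series describes: an explicit 'resource' parameter wins, then any
    user-configured maximum, then the hypervisor default; migration to
    a file is uncapped unless the user set a speed (patch 7)."""
    if resource:        # explicit bandwidth passed at migrate time
        return resource
    if configured:      # user previously set a max speed
        return configured
    if to_file:         # save-to-file runs unthrottled by default
        return UNLIMITED_MIB_PER_S
    return hypervisor_default  # qemu's default of 32 MiB/s
```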

on_migrate can be used to modify default parameters used when migrating
a domain.
---
 docs/formatdomain.html.in                   |   21 ++++++++++++++++
 docs/schemas/domain.rng                     |   13 ++++++++++
 tests/domainschemadata/migration-params.xml |   34 +++++++++++++++++++++++++++
 3 files changed, 68 insertions(+), 0 deletions(-)
 create mode 100644 tests/domainschemadata/migration-params.xml

diff --git a/docs/formatdomain.html.in b/docs/formatdomain.html.in
index f46771d..84dd590 100644
--- a/docs/formatdomain.html.in
+++ b/docs/formatdomain.html.in
@@ -692,6 +692,27 @@
         domain will be restarted with the same configuration</dd>
     </dl>
 
+    <p>
+      Default migration parameters, such as maximum or peak bandwidth
+      used during domain migration, can be specified with the
+      on_migrate element. Note that some hypervisors may not support
+      changing migration parameters.
+    </p>
+
+<pre>
+  ...
+  <on_migrate>
+    <bandwidth peak="1024">
+  </on_migrate>
+  ...</pre>
+
+    <dl>
+      <dt><code>bandwidth</code></dt>
+      <dd>Maximum or peak bandwidth consumed during the migration
+        operation can be specified with the <code>peak</code>
+        attribute. Units are MB/s.</dd>
+    </dl>
+
     <h3><a name="elementsFeatures">Hypervisor features</a></h3>
 
     <p>
diff --git a/docs/schemas/domain.rng b/docs/schemas/domain.rng
index dd8c41a..b729aa7 100644
--- a/docs/schemas/domain.rng
+++ b/docs/schemas/domain.rng
@@ -1623,6 +1623,19 @@
           <ref name="crashOptions"/>
         </element>
       </optional>
+      <optional>
+        <element name="on_migrate">
+          <optional>
+            <element name="bandwidth">
+              <optional>
+                <attribute name="peak">
+                  <ref name="positiveInteger"/>
+                </attribute>
+              </optional>
+            </element>
+          </optional>
+        </element>
+      </optional>
     </interleave>
   </define>
   <!--
diff --git a/tests/domainschemadata/migration-params.xml b/tests/domainschemadata/migration-params.xml
new file mode 100644
index 0000000..d01229b
--- /dev/null
+++ b/tests/domainschemadata/migration-params.xml
@@ -0,0 +1,34 @@
+<domain type='qemu'>
+  <name>test-domain</name>
+  <uuid>c3d496e6-fb22-7d89-1e73-9d3e231ba59f</uuid>
+  <memory>524288</memory>
+  <currentMemory>524288</currentMemory>
+  <vcpu>1</vcpu>
+  <os>
+    <type arch='x86_64' machine='pc'>hvm</type>
+    <boot dev='hd'/>
+  </os>
+  <features>
+    <acpi/>
+    <apic/>
+    <pae/>
+  </features>
+  <clock offset='utc'/>
+  <on_poweroff>destroy</on_poweroff>
+  <on_reboot>restart</on_reboot>
+  <on_crash>destroy</on_crash>
+  <on_migrate>
+    <bandwidth peak='1024' />
+  </on_migrate>
+  <devices>
+    <emulator>/usr/bin/qemu</emulator>
+    <disk type='file' device='disk'>
+      <driver name='qemu' type='raw'/>
+      <source file='/path/to/disk.img'/>
+      <target dev='hda' bus='ide'/>
+      <address type='drive' controller='0' bus='0' unit='0'/>
+    </disk>
+    <controller type='ide' index='0'/>
+    <memballoon model='virtio'/>
+  </devices>
+</domain>
-- 
1.7.5.4

On Fri, Aug 26, 2011 at 12:10:20PM -0600, Jim Fehlig wrote:
on_migrate can be used to modify default parameters used when migrating
a domain.

[...]
I'm not entirely convinced by the idea of storing extra parameters, to
be used by future API calls, in the guest XML. When the guest XML is
being defined, apps likely don't know when/where the guest will be
migrated in the future, so I'm not sure reasonable guesses about
migration bandwidth can be made at that point. IMHO the guest XML is
really about configuration, and this doesn't feel like a configuration
parameter; rather, it is an operational parameter.

Daniel

-- 
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org -o- http://virt-manager.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|

Daniel P. Berrange wrote:
On Fri, Aug 26, 2011 at 12:10:20PM -0600, Jim Fehlig wrote:
on_migrate can be used to modify default parameters used when migrating
a domain.

[...]
I'm not entirely convinced by the idea of storing extra parameters to be used by future API calls, in the guest XML. When the guest XML is being defined, apps likely don't know when/where the guest will be migrated in the future, so I'm not sure reasonable guesses about migration bandwidth can be made then. IMHO the guest XML is really about configuration, and this doesn't feel like a configuration parameter, rather it is an operational parameter.
Thanks for the comments. I like your suggestion of putting max
bandwidth in the qemuDomainObjPrivate struct. I'll push patches 2 and
5, drop 1 and 3, and provide a v2 of 4, 6-8 using your suggestion.

BTW, what do folks consider a sane libvirt default for migration
bandwidth? Patch 7 sets bandwidth to unlimited when the destination is
a file, so default here means network bandwidth. Is qemu's 32MiB/s
reasonable?

Regards,
Jim

Jim Fehlig wrote: [...]
I'll push patches 2 and 5, drop 1 and 3, and provide a v2 of 4, 6-8 using your suggestion.
Err, patch 3 was updated to store max migration bandwidth in the
qemuDomainObjPrivate structure instead of domain conf. V2 of 3-4, 6-8
sent...
BTW, what do folks consider a sane libvirt default for migration bandwidth? Patch 7 sets bandwidth to unlimited when destination is a file, so default here means network bandwidth. Is qemu's 32MiB/s reasonable?
I made the libvirt default 32MiB/s as well.

Regards,
Jim
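As an editorial aside on the default discussed above, a quick conversion shows what 32 MiB/s amounts to in network terms (roughly a quarter of a gigabit link). This is a standalone Python sketch, not libvirt code.

```python
# What a 32 MiB/s migration cap means on the wire.
mib_per_s = 32
bytes_per_s = mib_per_s * 1024 * 1024        # bytes per second
mbit_per_s = bytes_per_s * 8 / 1_000_000     # megabits per second

print(bytes_per_s, round(mbit_per_s))
```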

Includes impl of python binding since the generator was not able to
cope.

Note: Requires gendispatch.pl patch from Matthias Bolte
https://www.redhat.com/archives/libvir-list/2011-August/msg01367.html
---
 docs/apibuild.py                |    3 +-
 include/libvirt/libvirt.h.in    |    4 +++
 python/generator.py             |    1 +
 python/libvirt-override-api.xml |    6 ++++
 python/libvirt-override.c       |   24 ++++++++++++++++++
 src/driver.h                    |    6 ++++
 src/libvirt.c                   |   51 +++++++++++++++++++++++++++++++++++++++
 src/libvirt_public.syms         |    5 ++++
 src/remote/remote_driver.c      |    1 +
 src/remote/remote_protocol.x    |   13 +++++++++-
 src/remote_protocol-structs     |    9 +++++++
 src/rpc/gendispatch.pl          |    1 +
 12 files changed, 122 insertions(+), 2 deletions(-)

diff --git a/docs/apibuild.py b/docs/apibuild.py
index 53b3421..3563d94 100755
--- a/docs/apibuild.py
+++ b/docs/apibuild.py
@@ -1643,7 +1643,8 @@ class CParser:
            "virDomainSetMemory" : (False, ("memory")),
            "virDomainSetMemoryFlags" : (False, ("memory")),
            "virDomainBlockJobSetSpeed" : (False, ("bandwidth")),
-           "virDomainBlockPull" : (False, ("bandwidth")) }
+           "virDomainBlockPull" : (False, ("bandwidth")),
+           "virDomainMigrateGetMaxSpeed" : (False, ("bandwidth")) }
 
     def checkLongLegacyFunction(self, name, return_type, signature):
         if "long" in return_type and "long long" not in return_type:
diff --git a/include/libvirt/libvirt.h.in b/include/libvirt/libvirt.h.in
index 53a2f7d..bd6c607 100644
--- a/include/libvirt/libvirt.h.in
+++ b/include/libvirt/libvirt.h.in
@@ -711,6 +711,10 @@ int virDomainMigrateSetMaxSpeed(virDomainPtr domain,
                                 unsigned long bandwidth,
                                 unsigned int flags);
 
+int virDomainMigrateGetMaxSpeed(virDomainPtr domain,
+                                unsigned long *bandwidth,
+                                unsigned int flags);
+
 /**
  * VIR_NODEINFO_MAXCPUS:
  * @nodeinfo: virNodeInfo instance
diff --git a/python/generator.py b/python/generator.py
index 97434ed..cc253cf 100755
--- a/python/generator.py
+++ b/python/generator.py
@@ -372,6 +372,7 @@ skip_impl = (
     'virNodeGetCPUStats',
     'virNodeGetMemoryStats',
     'virDomainGetBlockJobInfo',
+    'virDomainMigrateGetMaxSpeed',
 )
 
diff --git a/python/libvirt-override-api.xml b/python/libvirt-override-api.xml
index 2fa5eed..1cf115c 100644
--- a/python/libvirt-override-api.xml
+++ b/python/libvirt-override-api.xml
@@ -356,5 +356,11 @@
       <arg name='flags' type='unsigned int' info='fine-tuning flags, currently unused, pass 0.'/>
       <return type='virDomainBlockJobInfo' info='A dictionary containing job information.' />
     </function>
+    <function name='virDomainMigrateGetMaxSpeed' file='python'>
+      <info>Get currently configured maximum migration speed for a domain</info>
+      <arg name='domain' type='virDomainPtr' info='a domain object'/>
+      <arg name='flags' type='unsigned int' info='flags, currently unused, pass 0.'/>
+      <return type='unsigned long' info='current max migration speed, or None in case of error'/>
+    </function>
   </symbols>
 </api>
diff --git a/python/libvirt-override.c b/python/libvirt-override.c
index b5650e2..b020342 100644
--- a/python/libvirt-override.c
+++ b/python/libvirt-override.c
@@ -4543,6 +4543,29 @@ libvirt_virDomainSendKey(PyObject *self ATTRIBUTE_UNUSED,
     return py_retval;
 }
 
+static PyObject *
+libvirt_virDomainMigrateGetMaxSpeed(PyObject *self ATTRIBUTE_UNUSED, PyObject *args) {
+    PyObject *py_retval;
+    int c_retval;
+    unsigned long bandwidth;
+    virDomainPtr domain;
+    PyObject *pyobj_domain;
+
+    if (!PyArg_ParseTuple(args, (char *)"O:virDomainMigrateGetMaxSpeed", &pyobj_domain))
+        return(NULL);
+
+    domain = (virDomainPtr) PyvirDomain_Get(pyobj_domain);
+
+    LIBVIRT_BEGIN_ALLOW_THREADS;
+    c_retval = virDomainMigrateGetMaxSpeed(domain, &bandwidth, 0);
+    LIBVIRT_END_ALLOW_THREADS;
+
+    if (c_retval < 0)
+        return VIR_PY_INT_FAIL;
+    py_retval = libvirt_ulongWrap(bandwidth);
+    return(py_retval);
+}
+
 /************************************************************************
  *                                                                      *
  *                      The registration stuff                          *
@@ -4632,6 +4655,7 @@ static PyMethodDef libvirtMethods[] = {
     {(char *) "virDomainRevertToSnapshot", libvirt_virDomainRevertToSnapshot, METH_VARARGS, NULL},
     {(char *) "virDomainGetBlockJobInfo", libvirt_virDomainGetBlockJobInfo, METH_VARARGS, NULL},
     {(char *) "virDomainSendKey", libvirt_virDomainSendKey, METH_VARARGS, NULL},
+    {(char *) "virDomainMigrateGetMaxSpeed", libvirt_virDomainMigrateGetMaxSpeed, METH_VARARGS, NULL},
     {NULL, NULL, 0, NULL}
 };
 
diff --git a/src/driver.h b/src/driver.h
index 80d6628..21b2bd3 100644
--- a/src/driver.h
+++ b/src/driver.h
@@ -532,6 +532,11 @@ typedef int
                            unsigned int flags);
 
 typedef int
+    (*virDrvDomainMigrateGetMaxSpeed)(virDomainPtr domain,
+                                      unsigned long *bandwidth,
+                                      unsigned int flags);
+
+typedef int
     (*virDrvDomainEventRegisterAny)(virConnectPtr conn,
                                     virDomainPtr dom,
                                     int eventID,
@@ -828,6 +833,7 @@ struct _virDriver {
     virDrvDomainGetJobInfo domainGetJobInfo;
     virDrvDomainAbortJob domainAbortJob;
     virDrvDomainMigrateSetMaxDowntime domainMigrateSetMaxDowntime;
+    virDrvDomainMigrateGetMaxSpeed domainMigrateGetMaxSpeed;
    virDrvDomainMigrateSetMaxSpeed domainMigrateSetMaxSpeed;
     virDrvDomainEventRegisterAny domainEventRegisterAny;
     virDrvDomainEventDeregisterAny domainEventDeregisterAny;
diff --git a/src/libvirt.c b/src/libvirt.c
index b8fe1b1..683b8e1 100644
--- a/src/libvirt.c
+++ b/src/libvirt.c
@@ -15195,6 +15195,57 @@ error:
 }
 
 /**
+ * virDomainMigrateGetMaxSpeed:
+ * @domain: a domain object
+ * @bandwidth: return value of current migration bandwidth limit in Mbps
+ * @flags: fine-tuning flags, currently unused, use 0
+ *
+ * Get the current maximum bandwidth (in Mbps) that will be used if the
+ * domain is migrated. Not all hypervisors will support a bandwidth limit.
+ *
+ * Returns 0 in case of success, -1 otherwise.
+ */
+int
+virDomainMigrateGetMaxSpeed(virDomainPtr domain,
+                            unsigned long *bandwidth,
+                            unsigned int flags)
+{
+    virConnectPtr conn;
+
+    VIR_DOMAIN_DEBUG(domain, "bandwidth = %p, flags=%x", bandwidth, flags);
+
+    virResetLastError();
+
+    if (!VIR_IS_CONNECTED_DOMAIN(domain)) {
+        virLibDomainError(VIR_ERR_INVALID_DOMAIN, __FUNCTION__);
+        virDispatchError(NULL);
+        return -1;
+    }
+
+    if (!bandwidth) {
+        virLibDomainError(VIR_ERR_INVALID_ARG, __FUNCTION__);
+        goto error;
+    }
+
+    conn = domain->conn;
+    if (conn->flags & VIR_CONNECT_RO) {
+        virLibDomainError(VIR_ERR_OPERATION_DENIED, __FUNCTION__);
+        goto error;
+    }
+
+    if (conn->driver->domainMigrateGetMaxSpeed) {
+        if (conn->driver->domainMigrateGetMaxSpeed(domain, bandwidth, flags) < 0)
+            goto error;
+        return 0;
+    }
+
+    virLibConnError(VIR_ERR_NO_SUPPORT, __FUNCTION__);
+error:
+    virDispatchError(conn);
+    return -1;
+}
+
+/**
  * virConnectDomainEventRegisterAny:
  * @conn: pointer to the connection
  * @dom: pointer to the domain
diff --git a/src/libvirt_public.syms b/src/libvirt_public.syms
index c2b6666..169c3ee 100644
--- a/src/libvirt_public.syms
+++ b/src/libvirt_public.syms
@@ -480,4 +480,9 @@ LIBVIRT_0.9.4 {
     virDomainBlockPull;
 } LIBVIRT_0.9.3;
 
+LIBVIRT_0.9.5 {
+    global:
+        virDomainMigrateGetMaxSpeed;
+} LIBVIRT_0.9.4;
+
 # .... define new API here using predicted next version number ....
diff --git a/src/remote/remote_driver.c b/src/remote/remote_driver.c
index 603d589..81930a9 100644
--- a/src/remote/remote_driver.c
+++ b/src/remote/remote_driver.c
@@ -4335,6 +4335,7 @@ static virDriver remote_driver = {
     .domainAbortJob = remoteDomainAbortJob, /* 0.7.7 */
     .domainMigrateSetMaxDowntime = remoteDomainMigrateSetMaxDowntime, /* 0.8.0 */
     .domainMigrateSetMaxSpeed = remoteDomainMigrateSetMaxSpeed, /* 0.9.0 */
+    .domainMigrateGetMaxSpeed = remoteDomainMigrateGetMaxSpeed, /* 0.9.5 */
     .domainEventRegisterAny = remoteDomainEventRegisterAny, /* 0.8.0 */
     .domainEventDeregisterAny = remoteDomainEventDeregisterAny, /* 0.8.0 */
     .domainManagedSave = remoteDomainManagedSave, /* 0.8.0 */
diff --git a/src/remote/remote_protocol.x b/src/remote/remote_protocol.x
index 8f68808..676570e 100644
--- a/src/remote/remote_protocol.x
+++ b/src/remote/remote_protocol.x
@@ -1913,6 +1913,16 @@ struct remote_domain_migrate_set_max_speed_args {
     unsigned int flags;
 };
 
+struct remote_domain_migrate_get_max_speed_args {
+    remote_nonnull_domain dom;
+    unsigned int flags;
+};
+
+struct remote_domain_migrate_get_max_speed_ret {
+     unsigned hyper bandwidth; /* insert@1 */
+};
+
+
 struct remote_domain_events_register_any_args {
     int eventID;
 };
@@ -2475,7 +2485,8 @@ enum remote_procedure {
     REMOTE_PROC_DOMAIN_BLOCK_JOB_SET_SPEED = 239, /* autogen autogen */
     REMOTE_PROC_DOMAIN_BLOCK_PULL = 240, /* autogen autogen */
 
-    REMOTE_PROC_DOMAIN_EVENT_BLOCK_JOB = 241 /* skipgen skipgen */
+    REMOTE_PROC_DOMAIN_EVENT_BLOCK_JOB = 241, /* skipgen skipgen */
+    REMOTE_PROC_DOMAIN_MIGRATE_GET_MAX_SPEED = 242 /* autogen autogen */
 
     /*
      * Notice how the entries are grouped in sets of 10 ?
diff --git a/src/remote_protocol-structs b/src/remote_protocol-structs
index 91b3ca5..0498bd1 100644
--- a/src/remote_protocol-structs
+++ b/src/remote_protocol-structs
@@ -1429,6 +1429,14 @@ struct remote_domain_migrate_set_max_speed_args {
         uint64_t                   bandwidth;
         u_int                      flags;
 };
+struct remote_domain_migrate_get_max_speed_args {
+        remote_nonnull_domain      dom;
+        u_int                      flags;
+};
+struct remote_domain_migrate_get_max_speed_ret {
+        uint64_t                   bandwidth;
+};
+
 struct remote_domain_events_register_any_args {
         int                        eventID;
 };
@@ -1936,4 +1944,5 @@ enum remote_procedure {
         REMOTE_PROC_DOMAIN_BLOCK_JOB_SET_SPEED = 239,
         REMOTE_PROC_DOMAIN_BLOCK_PULL = 240,
         REMOTE_PROC_DOMAIN_EVENT_BLOCK_JOB = 241,
+        REMOTE_PROC_DOMAIN_MIGRATE_GET_MAX_SPEED = 242,
 };
diff --git a/src/rpc/gendispatch.pl b/src/rpc/gendispatch.pl
index 0d344e8..c203324 100755
--- a/src/rpc/gendispatch.pl
+++ b/src/rpc/gendispatch.pl
@@ -222,6 +222,7 @@ my $long_legacy = {
     NodeGetInfo          => { ret => { memory => 1 } },
     DomainBlockPull      => { arg => { bandwidth => 1 } },
     DomainBlockJobSetSpeed => { arg => { bandwidth => 1 } },
+    DomainMigrateGetMaxSpeed => { ret => { bandwidth => 1 } },
 };
 
 sub hyper_to_long
-- 
1.7.5.4
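Editorial note on the gendispatch.pl `$long_legacy` entry in the patch above: the wire format carries a 64-bit `unsigned hyper`, while the public API uses `unsigned long`, which may be only 32 bits on some platforms, so the generated dispatch code must range-check the value. A hypothetical Python sketch of that check (names invented; not libvirt code):

```python
def hyper_to_long(value, ulong_bits=32):
    """Reject a 64-bit wire value that does not fit the caller's
    unsigned long, mirroring the overflow check the generated RPC
    code needs for legacy 'long' parameters."""
    limit = 2**ulong_bits - 1
    if value > limit:
        raise OverflowError("bandwidth %d exceeds unsigned long range" % value)
    return value
```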

On Fri, Aug 26, 2011 at 12:10:21PM -0600, Jim Fehlig wrote:
Includes impl of python binding since the generator was not able to cope.
Note: Requires gendispatch.pl patch from Matthias Bolte
https://www.redhat.com/archives/libvir-list/2011-August/msg01367.html

[...]
ACK Daniel -- |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|

---
 src/conf/domain_conf.c |   11 +++++++++++
 src/conf/domain_conf.h |    2 ++
 2 files changed, 13 insertions(+), 0 deletions(-)

diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index 44212cf..4bf32e9 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -6263,6 +6263,10 @@ static virDomainDefPtr virDomainDefParseXML(virCapsPtr caps,
                                     virDomainLifecycleCrashTypeFromString) < 0)
         goto error;

+    if (virXPathULong("number(./on_migrate/bandwidth/@peak)", ctxt,
+                      &def->migration_max_bandwidth) < 0)
+        def->migration_max_bandwidth = 0;
+
     tmp = virXPathString("string(./clock/@offset)", ctxt);
     if (tmp) {
         if ((def->clock.offset = virDomainClockOffsetTypeFromString(tmp)) < 0) {
@@ -10205,6 +10209,13 @@ virDomainDefFormatInternal(virDomainDefPtr def,
                                   virDomainLifecycleCrashTypeToString) < 0)
         goto cleanup;

+    if (def->migration_max_bandwidth > 0) {
+        virBufferAddLit(&buf, "  <on_migrate>\n");
+        virBufferAsprintf(&buf, "    <bandwidth peak='%lu'/>\n",
+                          def->migration_max_bandwidth);
+        virBufferAddLit(&buf, "  </on_migrate>\n");
+    }
+
     virBufferAddLit(&buf, "  <devices>\n");

     if (def->emulator)
diff --git a/src/conf/domain_conf.h b/src/conf/domain_conf.h
index 8382d28..99e5bec 100644
--- a/src/conf/domain_conf.h
+++ b/src/conf/domain_conf.h
@@ -1222,6 +1222,8 @@ struct _virDomainDef {
     int onPoweroff;
     int onCrash;

+    unsigned long migration_max_bandwidth;
+
     virDomainOSDef os;
     char *emulator;
     int features;
--
1.7.5.4
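For reference, this is roughly how the new element would appear in a domain's XML, inferred from the XPath expression and the formatting code in the patch above. The placement next to the lifecycle elements and the peak value are illustrative; the units follow the Mbps convention of the migration API:

```xml
<domain type='qemu'>
  ...
  <on_crash>destroy</on_crash>
  <on_migrate>
    <bandwidth peak='32'/>
  </on_migrate>
  <devices>
    ...
  </devices>
</domain>
```

Note that later revisions of the series move the runtime value into the qemuDomainObjPrivate structure instead, per review feedback further down the thread.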

From: Jim Fehlig <jfehlig@novell.com>

The maximum bandwidth that can be consumed when migrating a domain is
better classified as an operational rather than a configuration
parameter of the domain. As such, store this parameter in the
qemuDomainObjPrivate structure.
---
 src/qemu/qemu_domain.c |    2 ++
 src/qemu/qemu_domain.h |    4 ++++
 2 files changed, 6 insertions(+), 0 deletions(-)

diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index 675c6df..f4110c7 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -215,6 +215,8 @@ static void *qemuDomainObjPrivateAlloc(void)
     if (qemuDomainObjInitJob(priv) < 0)
         VIR_FREE(priv);

+    priv->migMaxBandwidth = QEMU_DOMAIN_DEFAULT_MIG_BANDWIDTH_MAX;
+
     return priv;
 }

diff --git a/src/qemu/qemu_domain.h b/src/qemu/qemu_domain.h
index e12ca8e..2aeed43 100644
--- a/src/qemu/qemu_domain.h
+++ b/src/qemu/qemu_domain.h
@@ -36,6 +36,9 @@
      (1 << VIR_DOMAIN_VIRT_KVM) | \
      (1 << VIR_DOMAIN_VIRT_XEN))

+# define QEMU_DOMAIN_DEFAULT_MIG_BANDWIDTH_MAX (32 << 20)
+# define QEMU_DOMAIN_FILE_MIG_BANDWIDTH_MAX    (INT64_MAX / (1024 * 1024))
+
 # define JOB_MASK(job)                  (1 << (job - 1))
 # define DEFAULT_JOB_MASK               \
     (JOB_MASK(QEMU_JOB_QUERY) |         \
@@ -113,6 +116,7 @@ struct _qemuDomainObjPrivate {
     char *lockState;

     bool fakeReboot;
+    unsigned long migMaxBandwidth;
 };

 struct qemuDomainWatchdogEvent
--
1.7.5.4

From: Jim Fehlig <jfehlig@novell.com> --- src/qemu/qemu_driver.c | 33 +++++++++++++++++++++++++++++++++ 1 files changed, 33 insertions(+), 0 deletions(-) diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index a150b08..c5fa106 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -8266,6 +8266,38 @@ cleanup: return ret; } +static int +qemuDomainMigrateGetMaxSpeed(virDomainPtr dom, + unsigned long *bandwidth, + unsigned int flags) +{ + struct qemud_driver *driver = dom->conn->privateData; + virDomainObjPtr vm; + int ret = -1; + + virCheckFlags(0, -1); + + qemuDriverLock(driver); + vm = virDomainFindByUUID(&driver->domains, dom->uuid); + qemuDriverUnlock(driver); + + if (!vm) { + char uuidstr[VIR_UUID_STRING_BUFLEN]; + virUUIDFormat(dom->uuid, uuidstr); + qemuReportError(VIR_ERR_NO_DOMAIN, + _("no domain with matching uuid '%s'"), uuidstr); + goto cleanup; + } + + *bandwidth = vm->privateData->migMaxBandwidth; + ret = 0; + +cleanup: + if (vm) + virDomainObjUnlock(vm); + return ret; +} + static char *qemuFindQemuImgBinary(void) { char *ret; @@ -9529,6 +9561,7 @@ static virDriver qemuDriver = { .domainAbortJob = qemuDomainAbortJob, /* 0.7.7 */ .domainMigrateSetMaxDowntime = qemuDomainMigrateSetMaxDowntime, /* 0.8.0 */ .domainMigrateSetMaxSpeed = qemuDomainMigrateSetMaxSpeed, /* 0.9.0 */ + .domainMigrateGetMaxSpeed = qemuDomainMigrateGetMaxSpeed, /* 0.9.5 */ .domainEventRegisterAny = qemuDomainEventRegisterAny, /* 0.8.0 */ .domainEventDeregisterAny = qemuDomainEventDeregisterAny, /* 0.8.0 */ .domainManagedSave = qemuDomainManagedSave, /* 0.8.0 */ -- 1.7.5.4

On Thu, Sep 01, 2011 at 02:42:54PM -0600, Jim Fehlig wrote:
From: Jim Fehlig <jfehlig@novell.com>
--- src/qemu/qemu_driver.c | 33 +++++++++++++++++++++++++++++++++ 1 files changed, 33 insertions(+), 0 deletions(-)
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index a150b08..c5fa106 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -8266,6 +8266,38 @@ cleanup: return ret; }
+static int +qemuDomainMigrateGetMaxSpeed(virDomainPtr dom, + unsigned long *bandwidth, + unsigned int flags) +{ + struct qemud_driver *driver = dom->conn->privateData; + virDomainObjPtr vm; + int ret = -1; + + virCheckFlags(0, -1); + + qemuDriverLock(driver); + vm = virDomainFindByUUID(&driver->domains, dom->uuid); + qemuDriverUnlock(driver); + + if (!vm) { + char uuidstr[VIR_UUID_STRING_BUFLEN]; + virUUIDFormat(dom->uuid, uuidstr); + qemuReportError(VIR_ERR_NO_DOMAIN, + _("no domain with matching uuid '%s'"), uuidstr); + goto cleanup; + } + + *bandwidth = vm->privateData->migMaxBandwidth; + ret = 0; + +cleanup: + if (vm) + virDomainObjUnlock(vm); + return ret; +} + static char *qemuFindQemuImgBinary(void) { char *ret; @@ -9529,6 +9561,7 @@ static virDriver qemuDriver = { .domainAbortJob = qemuDomainAbortJob, /* 0.7.7 */ .domainMigrateSetMaxDowntime = qemuDomainMigrateSetMaxDowntime, /* 0.8.0 */ .domainMigrateSetMaxSpeed = qemuDomainMigrateSetMaxSpeed, /* 0.9.0 */ + .domainMigrateGetMaxSpeed = qemuDomainMigrateGetMaxSpeed, /* 0.9.5 */ .domainEventRegisterAny = qemuDomainEventRegisterAny, /* 0.8.0 */ .domainEventDeregisterAny = qemuDomainEventDeregisterAny, /* 0.8.0 */ .domainManagedSave = qemuDomainManagedSave, /* 0.8.0 */
ACK, Daniel -- Daniel Veillard | libxml Gnome XML XSLT toolkit http://xmlsoft.org/ daniel@veillard.com | Rpmfind RPM search engine http://rpmfind.net/ http://veillard.com/ | virtualization library http://libvirt.org/

From: Jim Fehlig <jfehlig@novell.com> Now that migration speed is stored in qemuDomainObjPrivate structure, save the new value when invoking qemuDomainMigrateSetMaxSpeed(). Allow setting migration speed on inactive domain too. --- src/qemu/qemu_driver.c | 36 +++++++++++++++--------------------- 1 files changed, 15 insertions(+), 21 deletions(-) diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index c5fa106..59b9a91 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -8234,31 +8234,25 @@ qemuDomainMigrateSetMaxSpeed(virDomainPtr dom, return -1; } - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MIGRATION_OP) < 0) - goto cleanup; - - if (!virDomainObjIsActive(vm)) { - qemuReportError(VIR_ERR_OPERATION_INVALID, - "%s", _("domain is not running")); - goto endjob; - } - priv = vm->privateData; + if (virDomainObjIsActive(vm)) { + if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MIGRATION_OP) < 0) + goto cleanup; - if (priv->job.asyncJob != QEMU_ASYNC_JOB_MIGRATION_OUT) { - qemuReportError(VIR_ERR_OPERATION_INVALID, - "%s", _("domain is not being migrated")); - goto endjob; - } + VIR_DEBUG("Setting migration bandwidth to %luMbs", bandwidth); + qemuDomainObjEnterMonitor(driver, vm); + ret = qemuMonitorSetMigrationSpeed(priv->mon, bandwidth); + qemuDomainObjExitMonitor(driver, vm); - VIR_DEBUG("Setting migration bandwidth to %luMbs", bandwidth); - qemuDomainObjEnterMonitor(driver, vm); - ret = qemuMonitorSetMigrationSpeed(priv->mon, bandwidth); - qemuDomainObjExitMonitor(driver, vm); + if (ret == 0) + priv->migMaxBandwidth = bandwidth; -endjob: - if (qemuDomainObjEndJob(driver, vm) == 0) - vm = NULL; + if (qemuDomainObjEndJob(driver, vm) == 0) + vm = NULL; + } else { + priv->migMaxBandwidth = bandwidth; + ret = 0; + } cleanup: if (vm) -- 1.7.5.4

On Thu, Sep 01, 2011 at 02:42:55PM -0600, Jim Fehlig wrote:
From: Jim Fehlig <jfehlig@novell.com>
Now that migration speed is stored in qemuDomainObjPrivate structure, save the new value when invoking qemuDomainMigrateSetMaxSpeed().
Allow setting migration speed on inactive domain too. --- src/qemu/qemu_driver.c | 36 +++++++++++++++--------------------- 1 files changed, 15 insertions(+), 21 deletions(-)
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index c5fa106..59b9a91 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -8234,31 +8234,25 @@ qemuDomainMigrateSetMaxSpeed(virDomainPtr dom, return -1; }
- if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MIGRATION_OP) < 0) - goto cleanup; - - if (!virDomainObjIsActive(vm)) { - qemuReportError(VIR_ERR_OPERATION_INVALID, - "%s", _("domain is not running")); - goto endjob; - } - priv = vm->privateData; + if (virDomainObjIsActive(vm)) { + if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MIGRATION_OP) < 0) + goto cleanup;
- if (priv->job.asyncJob != QEMU_ASYNC_JOB_MIGRATION_OUT) { - qemuReportError(VIR_ERR_OPERATION_INVALID, - "%s", _("domain is not being migrated")); - goto endjob; - } + VIR_DEBUG("Setting migration bandwidth to %luMbs", bandwidth); + qemuDomainObjEnterMonitor(driver, vm); + ret = qemuMonitorSetMigrationSpeed(priv->mon, bandwidth); + qemuDomainObjExitMonitor(driver, vm);
- VIR_DEBUG("Setting migration bandwidth to %luMbs", bandwidth); - qemuDomainObjEnterMonitor(driver, vm); - ret = qemuMonitorSetMigrationSpeed(priv->mon, bandwidth); - qemuDomainObjExitMonitor(driver, vm); + if (ret == 0) + priv->migMaxBandwidth = bandwidth;
-endjob: - if (qemuDomainObjEndJob(driver, vm) == 0) - vm = NULL; + if (qemuDomainObjEndJob(driver, vm) == 0) + vm = NULL; + } else { + priv->migMaxBandwidth = bandwidth; + ret = 0; + }
cleanup: if (vm)
ACK,

Daniel

From: Jim Fehlig <jfehlig@novell.com>

The qemu migration speed default is 32MiB/s as defined in migration.c

/* Migration speed throttling */
static int64_t max_throttle = (32 << 20);

There's no need to throttle migration when targeting a file, so set
migration speed to unlimited prior to migration, and restore the
libvirt default value after migration.

The default unit for the migrate_set_speed monitor command is MiB, so
(INT64_MAX / (1024 * 1024)) is used for unlimited migration speed.

Tested with both json and text monitors.
---
 src/qemu/qemu_migration.c |   17 +++++++++++++++++
 1 files changed, 17 insertions(+), 0 deletions(-)

diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index 38b05a9..ab38579 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -2700,6 +2700,16 @@ qemuMigrationToFile(struct qemud_driver *driver, virDomainObjPtr vm,
     bool restoreLabel = false;
     virCommandPtr cmd = NULL;
     int pipeFD[2] = { -1, -1 };
+    unsigned long saveMigBandwidth = priv->migMaxBandwidth;
+
+    /* Increase migration bandwidth to unlimited since target is a file.
+     * Failure to change migration speed is not fatal. */
+    if (qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob) == 0) {
+        qemuMonitorSetMigrationSpeed(priv->mon,
+                                     QEMU_DOMAIN_FILE_MIG_BANDWIDTH_MAX);
+        priv->migMaxBandwidth = QEMU_DOMAIN_FILE_MIG_BANDWIDTH_MAX;
+        qemuDomainObjExitMonitorWithDriver(driver, vm);
+    }

     if (qemuCapsGet(priv->qemuCaps, QEMU_CAPS_MIGRATE_QEMU_FD) &&
         (!compressor || pipe(pipeFD) == 0)) {
@@ -2808,6 +2818,13 @@ qemuMigrationToFile(struct qemud_driver *driver, virDomainObjPtr vm,
     ret = 0;

 cleanup:
+    /* Restore max migration bandwidth */
+    if (qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob) == 0) {
+        qemuMonitorSetMigrationSpeed(priv->mon, saveMigBandwidth);
+        priv->migMaxBandwidth = saveMigBandwidth;
+        qemuDomainObjExitMonitorWithDriver(driver, vm);
+    }
+
     VIR_FORCE_CLOSE(pipeFD[0]);
     VIR_FORCE_CLOSE(pipeFD[1]);
     virCommandFree(cmd);
--
1.7.5.4

On Thu, Sep 01, 2011 at 02:42:56PM -0600, Jim Fehlig wrote:
From: Jim Fehlig <jfehlig@novell.com>
The qemu migration speed default is 32MiB/s as defined in migration.c
/* Migration speed throttling */ static int64_t max_throttle = (32 << 20);
There's no need to throttle migration when targeting a file, so set migration speed to unlimited prior to migration, and restore to libvirt default value after migration.
The default unit for the migrate_set_speed monitor command is MiB, so (INT64_MAX / (1024 * 1024)) is used for unlimited migration speed.
Tested with both json and text monitors. --- src/qemu/qemu_migration.c | 17 +++++++++++++++++ 1 files changed, 17 insertions(+), 0 deletions(-)
diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c index 38b05a9..ab38579 100644 --- a/src/qemu/qemu_migration.c +++ b/src/qemu/qemu_migration.c @@ -2700,6 +2700,16 @@ qemuMigrationToFile(struct qemud_driver *driver, virDomainObjPtr vm, bool restoreLabel = false; virCommandPtr cmd = NULL; int pipeFD[2] = { -1, -1 }; + unsigned long saveMigBandwidth = priv->migMaxBandwidth; + + /* Increase migration bandwidth to unlimited since target is a file. + * Failure to change migration speed is not fatal. */ + if (qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob) == 0) { + qemuMonitorSetMigrationSpeed(priv->mon, + QEMU_DOMAIN_FILE_MIG_BANDWIDTH_MAX); + priv->migMaxBandwidth = QEMU_DOMAIN_FILE_MIG_BANDWIDTH_MAX; + qemuDomainObjExitMonitorWithDriver(driver, vm); + }
if (qemuCapsGet(priv->qemuCaps, QEMU_CAPS_MIGRATE_QEMU_FD) && (!compressor || pipe(pipeFD) == 0)) { @@ -2808,6 +2818,13 @@ qemuMigrationToFile(struct qemud_driver *driver, virDomainObjPtr vm, ret = 0;
cleanup: + /* Restore max migration bandwidth */ + if (qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob) == 0) { + qemuMonitorSetMigrationSpeed(priv->mon, saveMigBandwidth); + priv->migMaxBandwidth = saveMigBandwidth; + qemuDomainObjExitMonitorWithDriver(driver, vm); + } + VIR_FORCE_CLOSE(pipeFD[0]); VIR_FORCE_CLOSE(pipeFD[1]); virCommandFree(cmd);
ACK, makes sense. My only worry would be the handling of errors: if for some reason qemuMonitorSetMigrationSpeed() generates a failure, we ignore it, which may make sense for some kinds of errors (like failure to understand the command), but I'm worried about more complex scenarios. Since I don't have one handy, I'm giving the ACK :-)

Daniel

From: Jim Fehlig <jfehlig@novell.com> Adjust qemuMigrationRun() to use migMaxBandwidth in qemuDomainObjPrivate structure when setting qemu migration speed. Caller-specified 'resource' parameter overrides migMaxBandwidth. --- src/qemu/qemu_migration.c | 4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c index ab38579..503b844 100644 --- a/src/qemu/qemu_migration.c +++ b/src/qemu/qemu_migration.c @@ -1416,6 +1416,7 @@ qemuMigrationRun(struct qemud_driver *driver, qemuMigrationCookiePtr mig = NULL; qemuMigrationIOThreadPtr iothread = NULL; int fd = -1; + unsigned long migrate_speed = resource ? resource : priv->migMaxBandwidth; VIR_DEBUG("driver=%p, vm=%p, cookiein=%s, cookieinlen=%d, " "cookieout=%p, cookieoutlen=%p, flags=%lx, resource=%lu, " @@ -1451,8 +1452,7 @@ qemuMigrationRun(struct qemud_driver *driver, QEMU_ASYNC_JOB_MIGRATION_OUT) < 0) goto cleanup; - if (resource > 0 && - qemuMonitorSetMigrationSpeed(priv->mon, resource) < 0) { + if (qemuMonitorSetMigrationSpeed(priv->mon, migrate_speed) < 0) { qemuDomainObjExitMonitorWithDriver(driver, vm); goto cleanup; } -- 1.7.5.4

On Thu, Sep 01, 2011 at 02:42:57PM -0600, Jim Fehlig wrote:
From: Jim Fehlig <jfehlig@novell.com>
Adjust qemuMigrationRun() to use migMaxBandwidth in qemuDomainObjPrivate structure when setting qemu migration speed. Caller-specified 'resource' parameter overrides migMaxBandwidth. --- src/qemu/qemu_migration.c | 4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c index ab38579..503b844 100644 --- a/src/qemu/qemu_migration.c +++ b/src/qemu/qemu_migration.c @@ -1416,6 +1416,7 @@ qemuMigrationRun(struct qemud_driver *driver, qemuMigrationCookiePtr mig = NULL; qemuMigrationIOThreadPtr iothread = NULL; int fd = -1; + unsigned long migrate_speed = resource ? resource : priv->migMaxBandwidth;
VIR_DEBUG("driver=%p, vm=%p, cookiein=%s, cookieinlen=%d, " "cookieout=%p, cookieoutlen=%p, flags=%lx, resource=%lu, " @@ -1451,8 +1452,7 @@ qemuMigrationRun(struct qemud_driver *driver, QEMU_ASYNC_JOB_MIGRATION_OUT) < 0) goto cleanup;
- if (resource > 0 && - qemuMonitorSetMigrationSpeed(priv->mon, resource) < 0) { + if (qemuMonitorSetMigrationSpeed(priv->mon, migrate_speed) < 0) { qemuDomainObjExitMonitorWithDriver(driver, vm); goto cleanup; }
ACK, please push, I will try to make an rc3 thereafter,

Daniel

On Thu, Sep 01, 2011 at 02:42:53PM -0600, Jim Fehlig wrote:
From: Jim Fehlig <jfehlig@novell.com>
The maximum bandwidth that can be consumed when migrating a domain is better classified as an operational rather than a configuration parameter of the domain. As such, store this parameter in the qemuDomainObjPrivate structure.
---
 src/qemu/qemu_domain.c |    2 ++
 src/qemu/qemu_domain.h |    4 ++++
 2 files changed, 6 insertions(+), 0 deletions(-)
diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c index 675c6df..f4110c7 100644 --- a/src/qemu/qemu_domain.c +++ b/src/qemu/qemu_domain.c @@ -215,6 +215,8 @@ static void *qemuDomainObjPrivateAlloc(void) if (qemuDomainObjInitJob(priv) < 0) VIR_FREE(priv);
+ priv->migMaxBandwidth = QEMU_DOMAIN_DEFAULT_MIG_BANDWIDTH_MAX; + return priv; }
diff --git a/src/qemu/qemu_domain.h b/src/qemu/qemu_domain.h index e12ca8e..2aeed43 100644 --- a/src/qemu/qemu_domain.h +++ b/src/qemu/qemu_domain.h @@ -36,6 +36,9 @@ (1 << VIR_DOMAIN_VIRT_KVM) | \ (1 << VIR_DOMAIN_VIRT_XEN))
+# define QEMU_DOMAIN_DEFAULT_MIG_BANDWIDTH_MAX (32 << 20) +# define QEMU_DOMAIN_FILE_MIG_BANDWIDTH_MAX (INT64_MAX / (1024 * 1024)) + # define JOB_MASK(job) (1 << (job - 1)) # define DEFAULT_JOB_MASK \ (JOB_MASK(QEMU_JOB_QUERY) | \ @@ -113,6 +116,7 @@ struct _qemuDomainObjPrivate { char *lockState;
bool fakeReboot; + unsigned long migMaxBandwidth; };
struct qemuDomainWatchdogEvent
V2 following Dan's suggestion. Yes, looks right to me, ACK.

Daniel

--- src/qemu/qemu_driver.c | 33 +++++++++++++++++++++++++++++++++ 1 files changed, 33 insertions(+), 0 deletions(-) diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index f21122d..b932e67 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -8256,6 +8256,38 @@ cleanup: return ret; } +static int +qemuDomainMigrateGetMaxSpeed(virDomainPtr dom, + unsigned long *bandwidth, + unsigned int flags) +{ + struct qemud_driver *driver = dom->conn->privateData; + virDomainObjPtr vm; + int ret = -1; + + virCheckFlags(0, -1); + + qemuDriverLock(driver); + vm = virDomainFindByUUID(&driver->domains, dom->uuid); + qemuDriverUnlock(driver); + + if (!vm) { + char uuidstr[VIR_UUID_STRING_BUFLEN]; + virUUIDFormat(dom->uuid, uuidstr); + qemuReportError(VIR_ERR_NO_DOMAIN, + _("no domain with matching uuid '%s'"), uuidstr); + goto cleanup; + } + + *bandwidth = vm->def->migration_max_bandwidth; + ret = 0; + +cleanup: + if (vm) + virDomainObjUnlock(vm); + return ret; +} + static char *qemuFindQemuImgBinary(void) { char *ret; @@ -9513,6 +9545,7 @@ static virDriver qemuDriver = { .domainAbortJob = qemuDomainAbortJob, /* 0.7.7 */ .domainMigrateSetMaxDowntime = qemuDomainMigrateSetMaxDowntime, /* 0.8.0 */ .domainMigrateSetMaxSpeed = qemuDomainMigrateSetMaxSpeed, /* 0.9.0 */ + .domainMigrateGetMaxSpeed = qemuDomainMigrateGetMaxSpeed, /* 0.9.5 */ .domainEventRegisterAny = qemuDomainEventRegisterAny, /* 0.8.0 */ .domainEventDeregisterAny = qemuDomainEventDeregisterAny, /* 0.8.0 */ .domainManagedSave = qemuDomainManagedSave, /* 0.8.0 */ -- 1.7.5.4

On Fri, Aug 26, 2011 at 12:10:23PM -0600, Jim Fehlig wrote:
--- src/qemu/qemu_driver.c | 33 +++++++++++++++++++++++++++++++++ 1 files changed, 33 insertions(+), 0 deletions(-)
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index f21122d..b932e67 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -8256,6 +8256,38 @@ cleanup: return ret; }
+static int +qemuDomainMigrateGetMaxSpeed(virDomainPtr dom, + unsigned long *bandwidth, + unsigned int flags) +{ + struct qemud_driver *driver = dom->conn->privateData; + virDomainObjPtr vm; + int ret = -1; + + virCheckFlags(0, -1); + + qemuDriverLock(driver); + vm = virDomainFindByUUID(&driver->domains, dom->uuid); + qemuDriverUnlock(driver); + + if (!vm) { + char uuidstr[VIR_UUID_STRING_BUFLEN]; + virUUIDFormat(dom->uuid, uuidstr); + qemuReportError(VIR_ERR_NO_DOMAIN, + _("no domain with matching uuid '%s'"), uuidstr); + goto cleanup; + } + + *bandwidth = vm->def->migration_max_bandwidth; + ret = 0; + +cleanup: + if (vm) + virDomainObjUnlock(vm); + return ret; +} + static char *qemuFindQemuImgBinary(void) { char *ret; @@ -9513,6 +9545,7 @@ static virDriver qemuDriver = { .domainAbortJob = qemuDomainAbortJob, /* 0.7.7 */ .domainMigrateSetMaxDowntime = qemuDomainMigrateSetMaxDowntime, /* 0.8.0 */ .domainMigrateSetMaxSpeed = qemuDomainMigrateSetMaxSpeed, /* 0.9.0 */ + .domainMigrateGetMaxSpeed = qemuDomainMigrateGetMaxSpeed, /* 0.9.5 */ .domainEventRegisterAny = qemuDomainEventRegisterAny, /* 0.8.0 */ .domainEventDeregisterAny = qemuDomainEventDeregisterAny, /* 0.8.0 */ .domainManagedSave = qemuDomainManagedSave, /* 0.8.0 */
ACK to the patch in general, but obviously it might require changes wrt where the migration max bandwidth data is kept. I'd be inclined to just put it in the qemuDomainPrivatePtr struct and initialize it to some default value we choose.

Regards,
Daniel

--- tools/virsh.c | 41 +++++++++++++++++++++++++++++++++++++++++ tools/virsh.pod | 4 ++++ 2 files changed, 45 insertions(+), 0 deletions(-) diff --git a/tools/virsh.c b/tools/virsh.c index 15b9bdd..f6d65c7 100644 --- a/tools/virsh.c +++ b/tools/virsh.c @@ -5194,6 +5194,45 @@ done: return ret; } +/* + * "migrate-getspeed" command + */ +static const vshCmdInfo info_migrate_getspeed[] = { + {"help", N_("Get the maximum migration bandwidth")}, + {"desc", N_("Get the maximum migration bandwidth (in Mbps) for a domain.")}, + {NULL, NULL} +}; + +static const vshCmdOptDef opts_migrate_getspeed[] = { + {"domain", VSH_OT_DATA, VSH_OFLAG_REQ, N_("domain name, id or uuid")}, + {NULL, 0, 0, NULL} +}; + +static bool +cmdMigrateGetMaxSpeed(vshControl *ctl, const vshCmd *cmd) +{ + virDomainPtr dom = NULL; + unsigned long bandwidth; + bool ret = false; + + if (!vshConnectionUsability(ctl, ctl->conn)) + return false; + + if (!(dom = vshCommandOptDomain(ctl, cmd, NULL))) + return false; + + if (virDomainMigrateGetMaxSpeed(dom, &bandwidth, 0) < 0) + goto done; + + vshPrint(ctl, "%lu\n", bandwidth); + + ret = true; + +done: + virDomainFree(dom); + return ret; +} + typedef enum { VSH_CMD_BLOCK_JOB_ABORT = 0, VSH_CMD_BLOCK_JOB_INFO = 1, @@ -12571,6 +12610,8 @@ static const vshCmdDef domManagementCmds[] = { opts_migrate_setmaxdowntime, info_migrate_setmaxdowntime, 0}, {"migrate-setspeed", cmdMigrateSetMaxSpeed, opts_migrate_setspeed, info_migrate_setspeed, 0}, + {"migrate-getspeed", cmdMigrateGetMaxSpeed, + opts_migrate_getspeed, info_migrate_getspeed, 0}, {"reboot", cmdReboot, opts_reboot, info_reboot, 0}, {"restore", cmdRestore, opts_restore, info_restore, 0}, {"resume", cmdResume, opts_resume, info_resume, 0}, diff --git a/tools/virsh.pod b/tools/virsh.pod index 81d7a1e..9c4ae19 100644 --- a/tools/virsh.pod +++ b/tools/virsh.pod @@ -624,6 +624,10 @@ to be down at the end of live migration. 
Set the maximum migration bandwidth (in Mbps) for a domain which is being migrated to another host. +=item B<migrate-getspeed> I<domain-id> + +Get the maximum migration bandwidth (in Mbps) for a domain. + =item B<reboot> I<domain-id> Reboot a domain. This acts just as if the domain had the B<reboot> -- 1.7.5.4
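For illustration, a possible virsh session exercising the new command alongside the existing setter. The domain name and the bandwidth value are made up; the getter simply echoes back whatever maximum is currently recorded for the domain:

```
# virsh migrate-setspeed example-domain 100
# virsh migrate-getspeed example-domain
100
```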

On Fri, Aug 26, 2011 at 12:10:24PM -0600, Jim Fehlig wrote:
--- tools/virsh.c | 41 +++++++++++++++++++++++++++++++++++++++++ tools/virsh.pod | 4 ++++ 2 files changed, 45 insertions(+), 0 deletions(-)
diff --git a/tools/virsh.c b/tools/virsh.c index 15b9bdd..f6d65c7 100644 --- a/tools/virsh.c +++ b/tools/virsh.c @@ -5194,6 +5194,45 @@ done: return ret; }
+/* + * "migrate-getspeed" command + */ +static const vshCmdInfo info_migrate_getspeed[] = { + {"help", N_("Get the maximum migration bandwidth")}, + {"desc", N_("Get the maximum migration bandwidth (in Mbps) for a domain.")}, + {NULL, NULL} +}; + +static const vshCmdOptDef opts_migrate_getspeed[] = { + {"domain", VSH_OT_DATA, VSH_OFLAG_REQ, N_("domain name, id or uuid")}, + {NULL, 0, 0, NULL} +}; + +static bool +cmdMigrateGetMaxSpeed(vshControl *ctl, const vshCmd *cmd) +{ + virDomainPtr dom = NULL; + unsigned long bandwidth; + bool ret = false; + + if (!vshConnectionUsability(ctl, ctl->conn)) + return false; + + if (!(dom = vshCommandOptDomain(ctl, cmd, NULL))) + return false; + + if (virDomainMigrateGetMaxSpeed(dom, &bandwidth, 0) < 0) + goto done; + + vshPrint(ctl, "%lu\n", bandwidth); + + ret = true; + +done: + virDomainFree(dom); + return ret; +} + typedef enum { VSH_CMD_BLOCK_JOB_ABORT = 0, VSH_CMD_BLOCK_JOB_INFO = 1, @@ -12571,6 +12610,8 @@ static const vshCmdDef domManagementCmds[] = { opts_migrate_setmaxdowntime, info_migrate_setmaxdowntime, 0}, {"migrate-setspeed", cmdMigrateSetMaxSpeed, opts_migrate_setspeed, info_migrate_setspeed, 0}, + {"migrate-getspeed", cmdMigrateGetMaxSpeed, + opts_migrate_getspeed, info_migrate_getspeed, 0}, {"reboot", cmdReboot, opts_reboot, info_reboot, 0}, {"restore", cmdRestore, opts_restore, info_restore, 0}, {"resume", cmdResume, opts_resume, info_resume, 0}, diff --git a/tools/virsh.pod b/tools/virsh.pod index 81d7a1e..9c4ae19 100644 --- a/tools/virsh.pod +++ b/tools/virsh.pod @@ -624,6 +624,10 @@ to be down at the end of live migration. Set the maximum migration bandwidth (in Mbps) for a domain which is being migrated to another host.
+=item B<migrate-getspeed> I<domain-id> + +Get the maximum migration bandwidth (in Mbps) for a domain. + =item B<reboot> I<domain-id>
Reboot a domain. This acts just as if the domain had the B<reboot>
ACK

Daniel

Now that migration speed is represented in XML, save the new value to domain conf when invoking qemuDomainMigrateSetMaxSpeed(). Allow setting migration speed on inactive domain too. --- src/qemu/qemu_driver.c | 45 ++++++++++++++++++++++++--------------------- 1 files changed, 24 insertions(+), 21 deletions(-) diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index b932e67..9c91e49 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -8208,7 +8208,8 @@ qemuDomainMigrateSetMaxSpeed(virDomainPtr dom, struct qemud_driver *driver = dom->conn->privateData; virDomainObjPtr vm; qemuDomainObjPrivatePtr priv; - int ret = -1; + virDomainDefPtr persistentDef = NULL; + int ret = 0; virCheckFlags(0, -1); @@ -8224,31 +8225,33 @@ qemuDomainMigrateSetMaxSpeed(virDomainPtr dom, return -1; } - if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MIGRATION_OP) < 0) - goto cleanup; + if (vm->persistent) + persistentDef = virDomainObjGetPersistentDef(driver->caps, vm); - if (!virDomainObjIsActive(vm)) { - qemuReportError(VIR_ERR_OPERATION_INVALID, - "%s", _("domain is not running")); - goto endjob; - } + if (virDomainObjIsActive(vm)) { + if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MIGRATION_OP) < 0) { + ret = -1; + goto cleanup; + } - priv = vm->privateData; + priv = vm->privateData; - if (priv->job.asyncJob != QEMU_ASYNC_JOB_MIGRATION_OUT) { - qemuReportError(VIR_ERR_OPERATION_INVALID, - "%s", _("domain is not being migrated")); - goto endjob; - } + VIR_DEBUG("Setting migration bandwidth to %luMbs", bandwidth); + qemuDomainObjEnterMonitor(driver, vm); + ret = qemuMonitorSetMigrationSpeed(priv->mon, bandwidth); + qemuDomainObjExitMonitor(driver, vm); - VIR_DEBUG("Setting migration bandwidth to %luMbs", bandwidth); - qemuDomainObjEnterMonitor(driver, vm); - ret = qemuMonitorSetMigrationSpeed(priv->mon, bandwidth); - qemuDomainObjExitMonitor(driver, vm); + if (ret == 0) + vm->def->migration_max_bandwidth = bandwidth; -endjob: - if (qemuDomainObjEndJob(driver, vm) == 
0) - vm = NULL; + if (qemuDomainObjEndJob(driver, vm) == 0) + vm = NULL; + } + + if (ret == 0 && persistentDef) { + persistentDef->migration_max_bandwidth = bandwidth; + ret = virDomainSaveConfig(driver->configDir, persistentDef); + } cleanup: if (vm) -- 1.7.5.4

The qemu migration speed default is 32MiB/s as defined in migration.c /* Migration speed throttling */ static int64_t max_throttle = (32 << 20); The only reason to throttle migration when targeting a file is user request. If user has not changed the qemu default, set migration speed to unlimited prior to migration, and restore to qemu default value after migration. Default units is MB for migrate_set_speed monitor command, so (INT64_MAX / (1024 * 1024)) is used for unlimited migration speed. Tested with both json and text monitors. --- src/qemu/qemu_migration.c | 22 ++++++++++++++++++++++ src/qemu/qemu_migration.h | 3 +++ 2 files changed, 25 insertions(+), 0 deletions(-) diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c index a2dc97c..910cd8d 100644 --- a/src/qemu/qemu_migration.c +++ b/src/qemu/qemu_migration.c @@ -2699,6 +2699,18 @@ qemuMigrationToFile(struct qemud_driver *driver, virDomainObjPtr vm, bool restoreLabel = false; virCommandPtr cmd = NULL; int pipeFD[2] = { -1, -1 }; + unsigned long initMigBandwidth = vm->def->migration_max_bandwidth; + + /* If no user-defined migration speed is set, increase qemu default + * (32MiB/s) to unlimited since target is a file. + * Failure to change migration speed is not fatal. */ + if (initMigBandwidth == 0) { + if (qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob) == 0) { + qemuMonitorSetMigrationSpeed(priv->mon, QEMU_FILE_MIGRATION_SPEED_MAX); + vm->def->migration_max_bandwidth = QEMU_FILE_MIGRATION_SPEED_MAX; + qemuDomainObjExitMonitorWithDriver(driver, vm); + } + } if (qemuCapsGet(priv->qemuCaps, QEMU_CAPS_MIGRATE_QEMU_FD) && (!compressor || pipe(pipeFD) == 0)) { @@ -2807,6 +2819,16 @@ qemuMigrationToFile(struct qemud_driver *driver, virDomainObjPtr vm, ret = 0; cleanup: + /* If migration speed was changed from default, restore it. 
*/ + if (initMigBandwidth == 0) { + if (qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob) == 0) { + qemuMonitorSetMigrationSpeed(priv->mon, + QEMU_DEFAULT_MIGRATION_SPEED_MAX); + vm->def->migration_max_bandwidth = 0; + qemuDomainObjExitMonitorWithDriver(driver, vm); + } + } + VIR_FORCE_CLOSE(pipeFD[0]); VIR_FORCE_CLOSE(pipeFD[1]); virCommandFree(cmd); diff --git a/src/qemu/qemu_migration.h b/src/qemu/qemu_migration.h index 5c6921d..e49505e 100644 --- a/src/qemu/qemu_migration.h +++ b/src/qemu/qemu_migration.h @@ -53,6 +53,9 @@ enum qemuMigrationJobPhase { }; VIR_ENUM_DECL(qemuMigrationJobPhase) +# define QEMU_DEFAULT_MIGRATION_SPEED_MAX (32 << 20) +# define QEMU_FILE_MIGRATION_SPEED_MAX (INT64_MAX / (1024 * 1024)) + int qemuMigrationJobStart(struct qemud_driver *driver, virDomainObjPtr vm, enum qemuDomainAsyncJob job) -- 1.7.5.4

On Fri, Aug 26, 2011 at 12:10:26PM -0600, Jim Fehlig wrote:
The qemu migration speed default is 32MiB/s as defined in migration.c
/* Migration speed throttling */
static int64_t max_throttle = (32 << 20);
The only reason to throttle migration when targeting a file is user request. If user has not changed the qemu default, set migration speed to unlimited prior to migration, and restore to qemu default value after migration.
Default units is MB for migrate_set_speed monitor command, so (INT64_MAX / (1024 * 1024)) is used for unlimited migration speed.
Tested with both json and text monitors.
---
 src/qemu/qemu_migration.c |   22 ++++++++++++++++++++++
 src/qemu/qemu_migration.h |    3 +++
 2 files changed, 25 insertions(+), 0 deletions(-)

diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index a2dc97c..910cd8d 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -2699,6 +2699,18 @@ qemuMigrationToFile(struct qemud_driver *driver, virDomainObjPtr vm,
     bool restoreLabel = false;
     virCommandPtr cmd = NULL;
     int pipeFD[2] = { -1, -1 };
+    unsigned long initMigBandwidth = vm->def->migration_max_bandwidth;
+
+    /* If no user-defined migration speed is set, increase qemu default
+     * (32MiB/s) to unlimited since target is a file.
+     * Failure to change migration speed is not fatal. */
+    if (initMigBandwidth == 0) {
+        if (qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob) == 0) {
+            qemuMonitorSetMigrationSpeed(priv->mon, QEMU_FILE_MIGRATION_SPEED_MAX);
+            vm->def->migration_max_bandwidth = QEMU_FILE_MIGRATION_SPEED_MAX;
+            qemuDomainObjExitMonitorWithDriver(driver, vm);
+        }
+    }

     if (qemuCapsGet(priv->qemuCaps, QEMU_CAPS_MIGRATE_QEMU_FD) &&
         (!compressor || pipe(pipeFD) == 0)) {
@@ -2807,6 +2819,16 @@ qemuMigrationToFile(struct qemud_driver *driver, virDomainObjPtr vm,
     ret = 0;

 cleanup:
+    /* If migration speed was changed from default, restore it. */
+    if (initMigBandwidth == 0) {
+        if (qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob) == 0) {
+            qemuMonitorSetMigrationSpeed(priv->mon,
+                                         QEMU_DEFAULT_MIGRATION_SPEED_MAX);
+            vm->def->migration_max_bandwidth = 0;
+            qemuDomainObjExitMonitorWithDriver(driver, vm);
+        }
+    }
IMHO, we should just unconditionally set & reset the migration bandwidth in QEMU here, and when doing a real migration, also unconditionally set the bandwidth there, so we remove QEMU's internal default setting from the equation & get predictable behaviour with all QEMUs.
+
     VIR_FORCE_CLOSE(pipeFD[0]);
     VIR_FORCE_CLOSE(pipeFD[1]);
     virCommandFree(cmd);
diff --git a/src/qemu/qemu_migration.h b/src/qemu/qemu_migration.h
index 5c6921d..e49505e 100644
--- a/src/qemu/qemu_migration.h
+++ b/src/qemu/qemu_migration.h
@@ -53,6 +53,9 @@ enum qemuMigrationJobPhase {
 };
 VIR_ENUM_DECL(qemuMigrationJobPhase)

+# define QEMU_DEFAULT_MIGRATION_SPEED_MAX (32 << 20)
+# define QEMU_FILE_MIGRATION_SPEED_MAX (INT64_MAX / (1024 * 1024))
+
 int qemuMigrationJobStart(struct qemud_driver *driver,
                           virDomainObjPtr vm,
                           enum qemuDomainAsyncJob job)
--
Daniel
--
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|

Prefer qemuMigrationRun() 'resource' parameter, but consider value set
in domain conf if 'resource' is 0.
---
 src/qemu/qemu_migration.c |    6 ++++--
 1 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index 910cd8d..878d163 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -1415,6 +1415,8 @@ qemuMigrationRun(struct qemud_driver *driver,
     qemuMigrationCookiePtr mig = NULL;
     qemuMigrationIOThreadPtr iothread = NULL;
     int fd = -1;
+    unsigned long migrate_speed = resource ? resource :
+                                  vm->def->migration_max_bandwidth;

     VIR_DEBUG("driver=%p, vm=%p, cookiein=%s, cookieinlen=%d, "
               "cookieout=%p, cookieoutlen=%p, flags=%lx, resource=%lu, "
@@ -1450,8 +1452,8 @@ qemuMigrationRun(struct qemud_driver *driver,
                                    QEMU_ASYNC_JOB_MIGRATION_OUT) < 0)
         goto cleanup;

-    if (resource > 0 &&
-        qemuMonitorSetMigrationSpeed(priv->mon, resource) < 0) {
+    if (migrate_speed > 0 &&
+        qemuMonitorSetMigrationSpeed(priv->mon, migrate_speed) < 0) {
         qemuDomainObjExitMonitorWithDriver(driver, vm);
         goto cleanup;
     }
--
1.7.5.4

On Fri, Aug 26, 2011 at 12:10:27PM -0600, Jim Fehlig wrote:
Prefer qemuMigrationRun() 'resource' parameter, but consider value set
in domain conf if 'resource' is 0.
---
 src/qemu/qemu_migration.c |    6 ++++--
 1 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index 910cd8d..878d163 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -1415,6 +1415,8 @@ qemuMigrationRun(struct qemud_driver *driver,
     qemuMigrationCookiePtr mig = NULL;
     qemuMigrationIOThreadPtr iothread = NULL;
     int fd = -1;
+    unsigned long migrate_speed = resource ? resource :
+                                  vm->def->migration_max_bandwidth;

     VIR_DEBUG("driver=%p, vm=%p, cookiein=%s, cookieinlen=%d, "
               "cookieout=%p, cookieoutlen=%p, flags=%lx, resource=%lu, "
@@ -1450,8 +1452,8 @@ qemuMigrationRun(struct qemud_driver *driver,
                                    QEMU_ASYNC_JOB_MIGRATION_OUT) < 0)
         goto cleanup;

-    if (resource > 0 &&
-        qemuMonitorSetMigrationSpeed(priv->mon, resource) < 0) {
+    if (migrate_speed > 0 &&
+        qemuMonitorSetMigrationSpeed(priv->mon, migrate_speed) < 0) {
         qemuDomainObjExitMonitorWithDriver(driver, vm);
         goto cleanup;
     }
Again, if we make sure we always have an internal default migration bandwidth, we can make this unconditional, and avoid reliance on unpredictable QEMU defaults.

Daniel
--
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|
participants (4):
- Daniel P. Berrange
- Daniel Veillard
- Jim Fehlig
- Jim Fehlig