[Libvir] [PATCH 0/2] virDomainMigrate (for discussion only!)

This is a patch which adds virDomainMigrate. It is incomplete (only supports Xen, no remote support, no qemu support), but I hope you'll look at the proposed interface and discuss the parameters. More in the following two emails.

Rich.

To cut to the chase, this is the proposed additional libvirt call to support migration. Please also read my explanation below.

/**
 * virDomainMigrate:
 * @domain: a domain object
 * @dconn: destination host (a connection object)
 * @flags: flags
 * @dname: (optional) rename domain to this at destination
 * @hostname: (optional) remote hostname as seen from the source host
 * @params: linked list of hypervisor-specific parameters
 *
 * Migrate the domain object from its current host to the destination
 * host given by dconn (a connection to the destination host).
 *
 * Flags may be one or more of the following:
 *   VIR_MIGRATE_LIVE   Attempt a live migration.
 *
 * If a hypervisor supports renaming domains during migration,
 * then you may set the dname parameter to the new name (otherwise
 * it keeps the same name). If this is not supported by the
 * hypervisor, dname must be NULL or else you will get an error.
 *
 * Since typically the two hypervisors connect directly to each
 * other in order to perform the migration, you may need to specify
 * a hostname, which is the hostname or IP address of the destination
 * host as seen from the source host. If in doubt, leave this as
 * NULL and libvirt will attempt to work out the correct hostname.
 *
 * Params is a linked list of hypervisor-specific parameters. Each
 * element is a virMigrateParamPtr containing the following fields:
 *   name         Parameter name being set.
 *   value        A union expressing the value.
 *   value.strv   A string value.
 *   value.longv  A long value.
 *   next         Next in linked list (or NULL for end of list).
 *
 * Parameter names for Xen are:
 *   VIR_MIGRATE_XEN_PORT      (long) Override the default port number.
 *   VIR_MIGRATE_XEN_RESOURCE  (long) Set maximum resource usage (Mbps).
 *
 * Set params to NULL if you do not want to pass any hypervisor-specific
 * parameters.
 *
 * Returns the new domain object if the migration was successful,
 * or NULL in case of error.
 */

As discussed previously on this list, you need to have libvirt connections to both the source and destination hosts.

I've tried to separate out what I believe will be common features of migration across hypervisors, and what is currently supported by Xen. What I think will be common features are:

 * live migration
 * direct host<->host connections
 * renaming domains during migration

These are supported with explicit parameters. Drivers should check the parameters, and any which are not supported should be rejected (eg. Xen cannot rename a domain when it is migrating, although this seems like it ought to be a common thing to want to do -- to prevent name clashes on the destination host which would otherwise make it impossible to migrate a domain). The explicit parameters include a general "flags" parameter, which we can extend with other boolean flags later.

For host<->host connections you'll want some way to specify the hostname / IP address of the destination host as seen at the source. In the remote management case it's not always so easy to work this out. We can try using virConnectGetHostname, but we also allow the caller to override.

On the other hand, there will be some hypervisor-specific features, and these are enabled through a linked list of parameters. For Xen these include setting port and resource usage. I guess other hypervisors will have their own parameters -- eg. security settings. In the current (Xen) implementation, any parameters which it doesn't understand are rejected with VIR_ERR_NO_SUPPORT.

Rich.
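[Editor's note: for illustration only, here is a minimal caller sketch of the proposed interface. Nothing in it exists in libvirt yet: virDomainMigrate, VIR_MIGRATE_LIVE, VIR_MIGRATE_XEN_PORT and the virMigrateParam struct tag are taken from (or assumed from) the proposal above.]

    #include <stdio.h>
    #include <libvirt/libvirt.h>

    int
    migrate_example (virDomainPtr dom, virConnectPtr dconn)
    {
        /* One hypervisor-specific parameter: override the Xen migration port.
           The struct tag and field layout are assumed from the doc comment. */
        virMigrateParam port_param = {
            .name  = VIR_MIGRATE_XEN_PORT,
            .value = { .longv = 8002 },
            .next  = NULL,
        };

        /* Live-migrate, keep the same name, let libvirt work out the hostname. */
        virDomainPtr ddom = virDomainMigrate (dom, dconn, VIR_MIGRATE_LIVE,
                                              NULL /* dname */,
                                              NULL /* hostname */,
                                              &port_param);
        if (!ddom) {
            fprintf (stderr, "migration failed\n");
            return -1;
        }
        virDomainFree (ddom);
        return 0;
    }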

RJ> * Params is a linked list of hypervisor-specific parameters. Each
RJ> * element is a virMigrateParamPtr containing the following fields:
RJ> *   name         Parameter name being set.
RJ> *   value        A union expressing the value.
RJ> *   value.strv   A string value.
RJ> *   value.longv  A long value.
RJ> *   next         Next in linked list (or NULL for end of list).

This should allow us to pass URIs to qemu, as well. I like it :)

RJ> +    /* Try to migrate. */
RJ> +    ddomain = conn->driver->domainMigrate (domain, dconn, flags,
RJ> +                                           dname,
RJ> +                                           hostname ? hostname : nchostname,
RJ> +                                           params);

As was previously mentioned, don't we need a call to the remote side to let it know that the domain is coming before we start the actual migration? Are you expecting the driver implementation to do this?

-- Dan Smith, IBM Linux Technology Center, Open Hypervisor Team, danms@us.ibm.com

Dan Smith wrote:
RJ> * Params is a linked list of hypervisor-specific parameters. Each
RJ> * element is a virMigrateParamPtr containing the following fields:
RJ> *   name         Parameter name being set.
RJ> *   value        A union expressing the value.
RJ> *   value.strv   A string value.
RJ> *   value.longv  A long value.
RJ> *   next         Next in linked list (or NULL for end of list).
This should allow us to pass URIs to qemu, as well. I like it :)
Can you give us some idea of how QEMU migration works?

KVM added a "migrate" function to the qemu console ("migrate <URI>"). For example: "migrate tcp://hostname:4444" and "migrate ssh://hostname". I'm not sure if that is in qemu upstream, or whether qemu upstream is doing something else.

I think that we shouldn't pass URIs, but instead we should construct the URI from the hostname and port number, and something like an optional "VIR_KVM_TRANSPORT" virMigrateParamPtr. (This implies that port number, like hostname, becomes a main argument to virDomainMigrate).

Incidentally, KVM also supports cancelling migrations (this interface doesn't), getting the status of migrations (this interface assumes the migration is synchronous and is supposed to only return when the migration is done), and setting resource limits. The latter implies that resource limits should be a non-Xen-specific parameter.

[Source: http://kvm.qumranet.com/kvmwiki/Migration]
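[Editor's note: to make the construct-the-URI suggestion above concrete, a hedged sketch of how a driver might assemble the qemu console's migrate argument from separate pieces. The "tcp" default, buffer handling and function name are all illustrative, not existing libvirt code.]

    #include <stdio.h>

    /* Build "<transport>://<hostname>[:<port>]" for the hypervisor driver. */
    static int
    build_migrate_uri (char *buf, size_t buflen,
                       const char *transport,  /* e.g. "tcp" or "ssh"; NULL = default */
                       const char *hostname,   /* destination as seen from the source */
                       int port)               /* 0 = let the hypervisor choose */
    {
        if (!transport)
            transport = "tcp";
        if (port > 0)
            return snprintf (buf, buflen, "%s://%s:%d", transport, hostname, port);
        return snprintf (buf, buflen, "%s://%s", transport, hostname);
    }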
RJ> +    /* Try to migrate. */
RJ> +    ddomain = conn->driver->domainMigrate (domain, dconn, flags,
RJ> +                                           dname,
RJ> +                                           hostname ? hostname : nchostname,
RJ> +                                           params);
As was previously mentioned, don't we need a call to the remote side to let it know that the domain is coming before we start the actual migration? Are you expecting the driver implementation to do this?
Yes, I'm expecting the driver to do it. Xen doesn't require anything at the moment. It's quite happy to "engage" any caller :-)

Rich.

Dan Berrange wrote:
I don't see the point in this. Libvirt already knows both hostnames of the source & destination.
It's really very hard for libvirt to accurately determine the hostname of the destination as seen from the source. Consider the case where you have a multi-homed host with a generic hostname (eg. "localhost.localdomain" which for some reason is the default on all my F7 installs). If you have a specific suggestion for how to solve this, I'd like to hear it.
libvirt should either know this, or be able to ask the underlying HV what port to use. I don't think we need to, or should put that burden on the app using the API.
The port would only be there to override the default, which would indeed be supplied by the driver / hypervisor.
Probably need one of the 'flags' to indicate whether to do live vs offline migration.
I didn't really understand this. Isn't live vs. offline mutually exclusive?

Dan Smith wrote:
In that case, haven't we already failed with virDomainCreate() since it takes hypervisor-specific XML? Doesn't the presence of VIR_DEVICE_RW_FORCE imply knowledge of Xen-specific behavior?
The virConnectGetCapabilities call is supposed to allow you to generate the XML in a hypervisor-independent way. (It doesn't quite supply enough information yet, but that is the aim).
How would you handle someone wanting to use tcp:// or ssh:// with qemu?
I would add a transport parameter; see below.

I said, and Anthony Liguori replied:
* renaming domains during migration
Absolutely not! The issue only exists if you allow the guests to exist on both systems during the migration. In KVM, this shouldn't be the case. The domain on the target should only ever be visible after the migration has successfully completed.
I may be missing something here, but the domain name isn't really unique. So it would be possible to have two different domains called (say) "database" running on two hosts. The two domains are different databases with different purposes. Now for some reason the administrator wants to migrate one "database" on to the other host, perhaps to get better load balancing or to take a host out of service. But this is impossible simply because of the name clash.

- * * * - * * * -

Updated proposal:

/**
 * virDomainMigrate:
 * @domain: a domain object
 * @dconn: destination host (a connection object)
 * @flags: flags
 * @dname: (optional) rename domain to this at destination
 * @hostname: (optional) dest hostname as seen from the source host
 * @port: (optional) override default port number
 * @transport: (optional) specify a transport
 * @resource: (optional) specify resource limit in Mbps
 *
 * Migrate the domain object from its current host to the destination
 * host given by dconn (a connection to the destination host).
 *
 * Flags may be one or more of the following:
 *   VIR_MIGRATE_LIVE   Attempt a live migration.
 *
 * If a hypervisor supports renaming domains during migration,
 * then you may set the dname parameter to the new name (otherwise
 * it keeps the same name). If this is not supported by the
 * hypervisor, dname must be NULL or else you will get an error.
 *
 * Since typically the two hypervisors connect directly to each
 * other in order to perform the migration, you may need to specify
 * a hostname, which is the hostname or IP address of the destination
 * host as seen from the source host. If in doubt, leave this as
 * NULL and libvirt will attempt to work out the correct hostname.
 *
 * Specify a port number to override the default migration port.
 * If set to 0, libvirt will try to choose the right port.
 *
 * Specify a transport to override the default transport (for
 * example: "ssh"). If set to NULL, libvirt will try to choose the
 * best transport. If non-NULL but the transport is not supported
 * by the hypervisor, then you will get an error.
 *
 * The maximum bandwidth (in Mbps) that will be used to do migration
 * can be specified with the resource parameter. If set to 0,
 * libvirt will choose a suitable default. Some hypervisors do
 * not support this feature and will return an error if resource
 * is not 0.
 *
 * Returns the new domain object if the migration was successful,
 * or NULL in case of error.
 */

Rich.
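[Editor's note: a usage sketch only; the prototype is taken straight from the doc comment above and does not exist in libvirt yet. It shows a live migration over SSH with the default port and a 100 Mbps cap.]

    virDomainPtr
    migrate_live_over_ssh (virDomainPtr dom, virConnectPtr dconn)
    {
        return virDomainMigrate (dom, dconn,
                                 VIR_MIGRATE_LIVE,
                                 NULL,    /* dname: keep the current name */
                                 NULL,    /* hostname: let libvirt work it out */
                                 0,       /* port: use the hypervisor default */
                                 "ssh",   /* transport: error if unsupported */
                                 100);    /* resource cap in Mbps (0 = default) */
    }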

On Mon, Jul 16, 2007 at 04:56:53PM +0100, Richard W.M. Jones wrote:
* @resource: (optional) specify resource limit in Mbps
*
* The maximum bandwidth (in Mbps) that will be used to do migration
* can be specified with the resource parameter. If set to 0,
* libvirt will choose a suitable default. Some hypervisors do
* not support this feature and will return an error if resource
* is not 0.
Will there be another way to find this out? It would be unfortunate if the only way was to actually attempt it. Also, are there any backends that /do/ support it? (Xen's option has always been ignored) regards john

John Levon wrote:
On Mon, Jul 16, 2007 at 04:56:53PM +0100, Richard W.M. Jones wrote:
* @resource: (optional) specify resource limit in Mbps
*
* The maximum bandwidth (in Mbps) that will be used to do migration
* can be specified with the resource parameter. If set to 0,
* libvirt will choose a suitable default. Some hypervisors do
* not support this feature and will return an error if resource
* is not 0.
Will there be another way to find this out? It would be unfortunate if the only way was to actually attempt it.
virConnectGetCapabilities should return something to say whether this is supported. I'll add that to my implementation.
Also, are there any backends that /do/ support it? (Xen's option has always been ignored)
That is a very good point actually. xend does ignore the resource parameter :-( I was rather hoping that KVM would be doing the right thing.

Rich.

On Mon, Jul 16, 2007 at 05:09:28PM +0100, Richard W.M. Jones wrote:
Will there be another way to find this out? It would be unfortunate if the only way was to actually attempt it.
virConnectGetCapabilities should return something to say whether this is supported. I'll add that to my implementation.
Great, thanks.
Also, are there any backends that /do/ support it? (Xen's option has always been ignored)
That is a very good point actually. xend does ignore the resource parameter :-(
Of course, it may not do so forever, so it's still sensible to have it in the API :) regards, john

John Levon wrote:
On Mon, Jul 16, 2007 at 04:56:53PM +0100, Richard W.M. Jones wrote:
* @resource: (optional) specify resource limit in Mbps
*
* The maximum bandwidth (in Mbps) that will be used to do migration
* can be specified with the resource parameter. If set to 0,
* libvirt will choose a suitable default. Some hypervisors do
* not support this feature and will return an error if resource
* is not 0.
Will there be another way to find this out? It would be unfortunate if the only way was to actually attempt it.
Also, are there any backends that /do/ support it? (Xen's option has always been ignored)
KVM supports it and does respect it. You really have to be able to adjust the bandwidth depending on your transport and network capabilities or you'll burn too much CPU and not converge. Regards, Anthony Liguori

On Mon, Jul 16, 2007 at 04:56:53PM +0100, Richard W.M. Jones wrote:
Dan Berrange wrote:
I don't see the point in this. Libvirt already knows both hostnames of the source & destination.
It's really very hard for libvirt to accurately determine the hostname of the destination as seen from the source. Consider the case where you have a multi-homed host with a generic hostname (eg. "localhost.localdomain" which for some reason is the default on all my F7 installs). If you have a specific suggestion for how to solve this, I'd like to hear it.
Nope, I don't have any magic solution offhand. I know it's very hard, but punting this problem off to the end user isn't too nice either. Since we've already asked them for 2 hostnames when connecting to the source and destination nodes, they would not unreasonably expect to be able to migrate without entering yet more hostnames.
Probably need one of the 'flags' to indicate whether to do live vs offline migration.
I didn't really understand this. Isn't live vs. offline mutually exclusive?
Yes, they're exclusive - you either live migrate, or you offline migrate. I just meant we need some way to express this in the API - or do we just go for always live migrating. I wouldn't have a problem with only doing live migrate, unless someone knows of a compelling reason to require offline migration too.

Dan.

Daniel P. Berrange wrote:
On Mon, Jul 16, 2007 at 04:56:53PM +0100, Richard W.M. Jones wrote:
Dan Berrange wrote:
I don't see the point in this. Libvirt already knows both hostnames of the source & destination.

It's really very hard for libvirt to accurately determine the hostname of the destination as seen from the source. Consider the case where you have a multi-homed host with a generic hostname (eg. "localhost.localdomain" which for some reason is the default on all my F7 installs). If you have a specific suggestion for how to solve this, I'd like to hear it.
Nope don't have any magic solution offhand. I know its very hard, but punting this problem off to the end user isn't too nice either. Since we've already asked them for 2 hostnames when connecting to the source and destination nodes they would not unreasonably expect to be able to migrate without entering yet more hostnames.
In the first implementation, passing hostname = NULL causes libvirt to internally do a call to virConnectGetHostname. In src/libvirt.c:

    /* Synthesize a hostname if one is not given. */
    if (!hostname) {
        nchostname = virConnectGetHostname (dconn);
        if (!nchostname) return NULL;
    }

    /* Try to migrate. */
    ddomain = conn->driver->domainMigrate (domain, dconn, flags,
                                           dname,
                                           hostname ? hostname : nchostname,
                                           params);

Most of the drivers implement virConnectGetHostname by calling gethostname(2) which is not a very reliable way to get a hostname, unless the system is being properly administered. (Since dconn is almost certainly going to be a remote connection, virConnectGetHostname in effect does gethostname(2) on the remote destination host).

My latest thinking is that this should be a URI rather than a hostname, although a naked hostname or hostname:port should be acceptable. If URI is passed as NULL, libvirt should make a best-effort attempt to determine the destination hostname, although in practice this will still be by calling virConnectGetHostname, unless you can think of something better to do.
Probably need one of the 'flags' to indicate whether to do live vs offline migration.

I didn't really understand this. Isn't live vs. offline mutually exclusive?
Yes, they're exclusive - you either live migrate, or your offline migrate. I just meant we need some way to express this in the API - or do we just go for always live migrating. I wouldn't have a problem with only doing live migrate, unless someone knows of a compelling reason to require offline migration too.
Right, so absence of the VIR_MIGRATE_LIVE flag was meant to mean "offline". There isn't another form of migration, is there?

Rich.

On Mon, Jul 16, 2007 at 06:52:15PM +0100, Richard W.M. Jones wrote:
Daniel P. Berrange wrote:
On Mon, Jul 16, 2007 at 04:56:53PM +0100, Richard W.M. Jones wrote:
Dan Berrange wrote:
I don't see the point in this. Libvirt already knows both hostnames of the source & destination.

It's really very hard for libvirt to accurately determine the hostname of the destination as seen from the source. Consider the case where you have a multi-homed host with a generic hostname (eg. "localhost.localdomain" which for some reason is the default on all my F7 installs). If you have a specific suggestion for how to solve this, I'd like to hear it.
Nope don't have any magic solution offhand. I know its very hard, but punting this problem off to the end user isn't too nice either. Since we've already asked them for 2 hostnames when connecting to the source and destination nodes they would not unreasonably expect to be able to migrate without entering yet more hostnames.
In the first implementation, passing hostname = NULL causes libvirt to internally do a call to virConnectGetHostname. In src/libvirt.c:
    /* Synthesize a hostname if one is not given. */
    if (!hostname) {
        nchostname = virConnectGetHostname (dconn);
        if (!nchostname) return NULL;
    }

    /* Try to migrate. */
    ddomain = conn->driver->domainMigrate (domain, dconn, flags,
                                           dname,
                                           hostname ? hostname : nchostname,
                                           params);
Most of the drivers implement virConnectGetHostname by calling gethostname(2) which is not a very reliable way to get a hostname, unless the system is being properly administered. (Since dconn is almost certainly going to be a remote connection, virConnectGetHostname in effect does gethostname(2) on the remote destination host).
IMHO it suffices to get the IP address to implement the call, unless I'm mistaken. That is, if we get something like localhost, internally try to get the IP and transmit that back to the invocation point. I really think we should try to get by without many added parameters, except maybe the bandwidth limit.
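[Editor's note: a sketch of that "resolve it to an IP internally" idea, walking the host's interfaces and returning the first non-loopback IPv4 address as a string. Illustrative only; as the next message points out, a multi-homed host has several candidates, so the choice is inherently ambiguous.]

    #include <arpa/inet.h>
    #include <ifaddrs.h>
    #include <net/if.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    static int
    first_public_ipv4 (char *buf, socklen_t buflen)
    {
        struct ifaddrs *ifaddr, *ifa;
        int found = -1;

        if (getifaddrs (&ifaddr) == -1)
            return -1;

        for (ifa = ifaddr; ifa; ifa = ifa->ifa_next) {
            /* Skip entries with no address, non-IPv4 entries and loopback. */
            if (!ifa->ifa_addr ||
                ifa->ifa_addr->sa_family != AF_INET ||
                (ifa->ifa_flags & IFF_LOOPBACK))
                continue;
            struct sockaddr_in *sin = (struct sockaddr_in *) ifa->ifa_addr;
            if (inet_ntop (AF_INET, &sin->sin_addr, buf, buflen)) {
                found = 0;
                break;
            }
        }
        freeifaddrs (ifaddr);
        return found;
    }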
My latest thinking is that this should be a URI rather than a hostname, although a naked hostname or hostname:port should be acceptable. If URI is passed as NULL, libvirt should make a best-effort attempt to determine the destination hostname, although in practice this will still be by calling virConnectGetHostname, unless you can think of something better to do.
I would prefer to do a couple of extra RPC roundtrips and avoid extra arguments. And if getting the IP is one of them, so be it.
migrate, unless someone knows of a compelling reason to require offline migration too.
Right, so absence of the VIR_MIGRATE_LIVE flag was meant to mean "offline". There isn't another form of migration is there?
I assume that Save/transmit/Restore combination doesn't count :-)

Daniel

On Mon, Jul 16, 2007 at 03:57:19PM -0400, Daniel Veillard wrote:
On Mon, Jul 16, 2007 at 06:52:15PM +0100, Richard W.M. Jones wrote:
Daniel P. Berrange wrote:
On Mon, Jul 16, 2007 at 04:56:53PM +0100, Richard W.M. Jones wrote:
Dan Berrange wrote:
I don't see the point in this. Libvirt already knows both hostnames of the source & destination.

It's really very hard for libvirt to accurately determine the hostname of the destination as seen from the source. Consider the case where you have a multi-homed host with a generic hostname (eg. "localhost.localdomain" which for some reason is the default on all my F7 installs). If you have a specific suggestion for how to solve this, I'd like to hear it.
Nope don't have any magic solution offhand. I know its very hard, but punting this problem off to the end user isn't too nice either. Since we've already asked them for 2 hostnames when connecting to the source and destination nodes they would not unreasonably expect to be able to migrate without entering yet more hostnames.
In the first implementation, passing hostname = NULL causes libvirt to internally do a call to virConnectGetHostname. In src/libvirt.c:
    /* Synthesize a hostname if one is not given. */
    if (!hostname) {
        nchostname = virConnectGetHostname (dconn);
        if (!nchostname) return NULL;
    }

    /* Try to migrate. */
    ddomain = conn->driver->domainMigrate (domain, dconn, flags,
                                           dname,
                                           hostname ? hostname : nchostname,
                                           params);
Most of the drivers implement virConnectGetHostname by calling gethostname(2) which is not a very reliable way to get a hostname, unless the system is being properly administered. (Since dconn is almost certainly going to be a remote connection, virConnectGetHostname in effect does gethostname(2) on the remote destination host).
IMHO it suffices to get the IP address to implement the call, unless I'm mistaken. That is, if we get something like localhost, internally try to get the IP and transmit that back to the invocation point. I really think we should try to get by without many added parameters, except maybe the bandwidth limit.
Which IP address exactly ....

    # ip addr show | grep inet
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
    inet 10.13.7.47/22 brd 10.13.7.255 scope global eth0
    inet6 fec0::3:216:76ff:fed6:c945/64 scope site dynamic
    inet6 fe80::216:76ff:fed6:c945/64 scope link
    inet6 fe80::fcff:ffff:feff:ffff/64 scope link
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
    inet6 fe80::200:ff:fe00:0/64 scope link
    inet6 fe80::fcff:ffff:feff:ffff/64 scope link
    inet 10.13.5.195/22 brd 10.13.7.255 scope global eth1
    inet6 fec0::3:20e:cff:feb3:550/64 scope site dynamic
    inet6 fe80::20e:cff:feb3:550/64 scope link
    inet6 fec0::3:604b:fbff:fe53:86f8/64 scope site dynamic
    inet6 fec0::3:9887:56ff:fef7:1119/64 scope site dynamic
    inet6 fec0::3:5450:f8ff:fec8:934c/64 scope site dynamic
    inet6 fec0::3:acb1:d4ff:fe88:2df1/64 scope site dynamic
    inet6 fec0::3:7495:17ff:fe8a:edb/64 scope site dynamic
    inet6 fec0::3:8c50:5ff:fef5:1ad4/64 scope site dynamic
    inet6 fec0::3:fcff:ffff:feff:ffff/64 scope site dynamic
    inet6 fe80::200:ff:fe00:0/64 scope link
    inet6 fe80::604b:fbff:fe53:86f8/64 scope link

I think the best we can do is use 'gethostname()' and say that the admin must have set this up to reflect the public facing hostname. If they don't do that, then they'll have to explicitly provide a URI for the destination instead of (or as well as) the virConnectPtr of the target.
migrate, unless someone knows of a compelling reason to require offline migration too.
Right, so absence of the VIR_MIGRATE_LIVE flag was meant to mean "offline". There isn't another form of migration is there?
I assume that Save/transmit/Restore combination doesn't count :-)
That's basically the manual way of doing offline migration.

Dan

On Mon, Jul 16, 2007 at 03:43:48PM +0100, Richard W.M. Jones wrote:
Dan Smith wrote:
RJ> * Params is a linked list of hypervisor-specific parameters. Each
RJ> * element is a virMigrateParamPtr containing the following fields:
RJ> *   name         Parameter name being set.
RJ> *   value        A union expressing the value.
RJ> *   value.strv   A string value.
RJ> *   value.longv  A long value.
RJ> *   next         Next in linked list (or NULL for end of list).
This should allow us to pass URIs to qemu, as well. I like it :)
Can you give us some idea of how QEMU migration works?
KVM added a "migrate" function to the qemu console ("migrate <URI>"). For example: "migrate tcp://hostname:4444" and "migrate ssh://hostname". I'm not sure if that is in qemu upstream, or whether qemu upstream is doing something else.
I think that we shouldn't pass URIs, but instead we should construct the URI from the hostname and port number, and something like an optional "VIR_KVM_TRANSPORT" virMigrateParamPtr.
(This implies that port number, like hostname, becomes a main argument to virDomainMigrate).
Incidentally, KVM also supports cancelling migrations (this interface doesn't), getting the status of migrations (this interface assumes the migration is synchronous and is supposed to only return when the migration is done), and setting resource limits. The latter implies that resource limits should be a non-Xen-specific parameter.
This is an interesting point. This gets onto the more general question of being able to provide incremental feedback / async notifications / querying progress of ongoing ops.

One could make use of the flags param by allowing the app to specify 'VIR_MIGRATE_ASYNC' so it returned immediately. Apps would then either need to poll to find out when an operation had completed or failed, or register a callback to be invoked upon completion / failure. The latter would obviously entail making the event loop stuff public instead of driver internal.

One can say the same of the existing save / restore methods too - it would be desirable to be able to run those in the background, and/or cancel them. The way virt-manager deals with this now is to just spawn a thread to let us run them in the BG without blocking the UI. This doesn't deal with cancellation though.

Dan.
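[Editor's note: to make the polling variant concrete, a purely hypothetical sketch. VIR_MIGRATE_ASYNC, virDomainMigrateStatus and its return convention are invented here just to illustrate the model; the call signature is the one from the original proposal.]

    #include <unistd.h>
    #include <libvirt/libvirt.h>

    int
    migrate_and_poll (virDomainPtr dom, virConnectPtr dconn)
    {
        /* Hypothetical: start the migration and return immediately. */
        if (!virDomainMigrate (dom, dconn,
                               VIR_MIGRATE_LIVE | VIR_MIGRATE_ASYNC,
                               NULL, NULL, NULL))
            return -1;

        /* Hypothetical status call: 0 = still running, 1 = done, < 0 = failed. */
        for (;;) {
            int st = virDomainMigrateStatus (dom);
            if (st < 0)
                return -1;    /* failed or cancelled */
            if (st > 0)
                return 0;     /* completed */
            sleep (1);        /* a real app would use an event loop instead */
        }
    }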

On Mon, Jul 16, 2007 at 07:22:27AM -0700, Dan Smith wrote:
RJ> * Params is a linked list of hypervisor-specific parameters. Each
RJ> * element is a virMigrateParamPtr containing the following fields:
RJ> *   name         Parameter name being set.
RJ> *   value        A union expressing the value.
RJ> *   value.strv   A string value.
RJ> *   value.longv  A long value.
RJ> *   next         Next in linked list (or NULL for end of list).
This should allow us to pass URIs to qemu, as well. I like it :)
I don't. The API should be hypervisor agnostic. Needing to pass HV-specific attributes to make it work shows we have failed.
RJ> +    /* Try to migrate. */
RJ> +    ddomain = conn->driver->domainMigrate (domain, dconn, flags,
RJ> +                                           dname,
RJ> +                                           hostname ? hostname : nchostname,
RJ> +                                           params);
As was previously mentioned, don't we need a call to the remote side to let it know that the domain is coming before we start the actual migration? Are you expecting the driver implementation to do this?
Yep, I suspect for the internal driver API we will need to decompose the single virDomainMigrate call into several internal calls. We will need to speak to the destination ahead of time in the QEMU case to tell it to get ready to accept the new domain. When Xen gets a sane (well, any) security model for migration we'll probably need to speak to the destination end ahead of time too - if only to retrieve some kind of 'auth token'.

Dan.

DB> I don't. The API should be hypervisor agnostic. Needing to pass
DB> HV specific attributes to make it works shows we have failed.

In that case, haven't we already failed with virDomainCreate() since it takes hypervisor-specific XML? Doesn't the presence of VIR_DEVICE_RW_FORCE imply knowledge of Xen-specific behavior?

How would you handle someone wanting to use tcp:// or ssh:// with qemu?

-- Dan Smith, IBM Linux Technology Center, Open Hypervisor Team, danms@us.ibm.com

On Mon, Jul 16, 2007 at 08:20:54AM -0700, Dan Smith wrote:
DB> I don't. The API should be hypervisor agnostic. Needing to pass DB> HV specific attributes to make it works shows we have failed.
In that case, haven't we already failed with virDomainCreate() since it takes hypervisor-specific XML? Doesn't the presence of VIR_DEVICE_RW_FORCE imply knowledge of Xen-specific behavior?
The XML is *not* hypervisor specific. There is a subtle difference between hypervisor-specific concepts and generic concepts which may only be relevant to a sub-set of hypervisors.
How would you handle someone wanting to use tcp:// or ssh:// with qemu?
If we need to express some choice of data channel - TCP vs SSH vs SSL/TLS - then figure out a way to expose that in the API in a hypervisor-agnostic way. Exposing raw QEMU migration URIs is *not* hypervisor agnostic. Exposing a flag VIR_CHANNEL_CLEAR, VIR_CHANNEL_SSH, VIR_CHANNEL_TLS is agnostic because it allows the same syntax to be used regardless of driver. Now some drivers may only support a subset of channel types, but that's OK.

Dan.
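[Editor's note: a sketch of what those flags might look like in a header. The three names come from the paragraph above; the values are made up.]

    typedef enum {
        VIR_CHANNEL_CLEAR = (1 << 0),   /* plain TCP, no encryption */
        VIR_CHANNEL_SSH   = (1 << 1),   /* tunnelled over SSH */
        VIR_CHANNEL_TLS   = (1 << 2),   /* SSL/TLS protected channel */
    } virMigrateChannel;

A driver that only supports a subset of these would simply reject the others, reporting VIR_ERR_NO_SUPPORT, as the paragraph above suggests.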

Daniel P. Berrange wrote:
On Mon, Jul 16, 2007 at 08:20:54AM -0700, Dan Smith wrote:
DB> I don't. The API should be hypervisor agnostic. Needing to pass DB> HV specific attributes to make it works shows we have failed.
In that case, haven't we already failed with virDomainCreate() since it takes hypervisor-specific XML? Doesn't the presence of VIR_DEVICE_RW_FORCE imply knowledge of Xen-specific behavior?
The XML is *not* hypervisor specific. There is a subtle difference between hypervisor-specific concepts and generic concepts which may only be relevant to a sub-set of hypervisors.
How would you handle someone wanting to use tcp:// or ssh:// with qemu?
If we need to express some choice of data channel, TCP, vs SSH, vs SSL/TLS then figure out a way to expose that in the API with an hypervisor agnostic way. Exposing raw QEMU migration URIs is *not* hypervisor agnostic.
Why? If nothing but QEMU supports ssh:// then exposing an API to do SSH migration isn't really hypervisor agnostic. It's an API for QEMU.

If you don't expose raw URIs, then you can never support pluggable migration transports, which I will implement in the not too distant future.

I think there's a balance between being hypervisor agnostic and only supporting the least common denominator. Whenever there's a possibility to be common, I think libvirt should strive to provide a common interface, but I still think it's important to expose unique features of the particular virtualization solution.

Regards, Anthony Liguori

On Mon, Jul 16, 2007 at 10:47:59AM -0500, Anthony Liguori wrote:
If we need to express some choice of data channel, TCP, vs SSH, vs SSL/TLS then figure out a way to expose that in the API with an hypervisor agnostic way. Exposing raw QEMU migration URIs is *not* hypervisor agnostic.
Why? If nothing but QEMU support ssh:// then exposing an API to do SSH migration isn't really hypervisor agnostic. It's an API for QEMU.
You're presuming things never change (and that new backends never get added to libvirt!) regards john

John Levon wrote:
On Mon, Jul 16, 2007 at 10:47:59AM -0500, Anthony Liguori wrote:
If we need to express some choice of data channel, TCP, vs SSH, vs SSL/TLS then figure out a way to expose that in the API with an hypervisor agnostic way. Exposing raw QEMU migration URIs is *not* hypervisor agnostic.
Why? If nothing but QEMU support ssh:// then exposing an API to do SSH migration isn't really hypervisor agnostic. It's an API for QEMU.
You're presuming things never change (and that new backends never get added to libvirt!)
There's a big difference between taking two implementations of SSH migration, finding the commonality, and building an abstraction versus just modeling the KVM ssh:// URI. The chances that a second implementation can be exposed nicely through the latter is small.

What I'd rather see is something that exposed the bits of both KVM and Xen and then a second "agnostic" interface. For instance, for KVM you may have:

virDomainMigrateKVM(..., URI);

For Xen you'd have:

virDomainMigrateXen(..., hostname, port);

But then you'd also have:

virDomainMigrate(dom, connPtr);

So I'm not necessarily arguing that there shouldn't be an agnostic interface, but rather that the lower bits should be exposed too.

Regards, Anthony Liguori

On Mon, Jul 16, 2007 at 10:58:18AM -0500, Anthony Liguori wrote:
John Levon wrote:
On Mon, Jul 16, 2007 at 10:47:59AM -0500, Anthony Liguori wrote:
If we need to express some choice of data channel, TCP, vs SSH, vs SSL/TLS then figure out a way to expose that in the API with an hypervisor agnostic way. Exposing raw QEMU migration URIs is *not* hypervisor agnostic.
Why? If nothing but QEMU support ssh:// then exposing an API to do SSH migration isn't really hypervisor agnostic. It's an API for QEMU.
You're presuming things never change (and that new backends never get added to libvirt!)
There's a big difference between taking two implementations of SSH migration, finding the commonality, and building an abstraction versus just modeling the KVM ssh:// URI. The chances that a second implementation can be exposed nicely through the latter is small.
What I'd rather see is something that exposed the bits of both KVM and Xen and then a second "agnostic" interface. For instance, for KVM you may have:
virDomainMigrateKVM(..., URI);
For Xen you'd have:
virDomainMigrateXen(..., hostname, port);
I don't see the point in having separate methods for those when Xen's hostname+port can be formatted as a URI too.
But then you'd also have:
virDomainMigrate(dom, connPtr);
So perhaps we should think about 2 possible APIs:

 - One based on a URI string
 - One based on a pre-existing virConnectPtr

Or, have 1 API, and have the URI string optional and the virConnectPtr be compulsory?

Dan.

On Mon, Jul 16, 2007 at 10:47:59AM -0500, Anthony Liguori wrote:
Daniel P. Berrange wrote:
On Mon, Jul 16, 2007 at 08:20:54AM -0700, Dan Smith wrote:
DB> I don't. The API should be hypervisor agnostic. Needing to pass DB> HV specific attributes to make it works shows we have failed.
In that case, haven't we already failed with virDomainCreate() since it takes hypervisor-specific XML? Doesn't the presence of VIR_DEVICE_RW_FORCE imply knowledge of Xen-specific behavior?
The XML is *not* hypervisor specific. There is a subtle difference is between hypervisor specific concepts, and generic concepts which may only only be relevant to a sub-set of hypervisors.
How would you handle someone wanting to use tcp:// or ssh:// with qemu?
If we need to express some choice of data channel, TCP, vs SSH, vs SSL/TLS then figure out a way to expose that in the API with an hypervisor agnostic way. Exposing raw QEMU migration URIs is *not* hypervisor agnostic.
Why? If nothing but QEMU support ssh:// then exposing an API to do SSH migration isn't really hypervisor agnostic. It's an API for QEMU.
If you don't expose raw URIs, then you can never support pluggable migration transports which I will implement in the not to distant future.
It depends where you need to expose it. For any single deployment, do you need to be able to use all possible transports? I think that some people will choose SSH, others will choose SSL, others something else again, but they aren't all necessarily used all the time. So it may be sufficient to specify which migration scheme to use per host. So libvirt can make use of all possible transports, without having to expose this to every single application using the API.
I think there's a balance between being hypervisor agnostic and only supporting the least common denominator. Whenever there's a possibility to be common, I think libvirt should strive to provide a common interface, but I still think it's important to expose unique features of the particular virtualization solution.
Sure, but making use of all available capabilities in libvirt doesn't mean we have to expose them all in the API - some can be used 'behind the scenes' without apps needing to care about them.

Dan.

Daniel P. Berrange wrote:
On Mon, Jul 16, 2007 at 10:47:59AM -0500, Anthony Liguori wrote:
It depends where you need to expose it. For any single deployment, do you need to be able to use all possible transports? I think that some people will choose SSH, others will choose SSL, others something else again, but they aren't all necessarily used all the time. So it may be sufficient to specify which migration scheme to use per host. So libvirt can make use of all possible transports, without having to expose this to every single application using the API.
Right, the problem is that an admin can define a new transport for QEMU (or at least, they will be able to soon) by adding an external URI handler.

For instance, let's say at a university they use an ldap directory to authenticate users and they decide to implement a migration handler that uses that for authentication. They may name this "uni://" and it'll just work. How would they get at this in libvirt without exposing URIs directly?

Regards, Anthony Liguori

Anthony Liguori wrote:
For instance, let's say at a university they use an ldap directory to authenticate users and they decide to implement a migration handler that uses that for authentication. They may name this "uni://" and it'll just work. How would they get at this in libvirt without exposing URIs directly?
My latest proposal[1] has a transport parameter (a string) which covers this, in as much as it would allow you to construct URIs which are:

<transport>://<hostname>:<port>

Anything more complex than this simple pattern would not work.

I think there will always be a conflict within libvirt between allowing the full features of every hypervisor to be expressed, and allowing simple programs to be written which are hypervisor independent.

Rich.

[1] https://www.redhat.com/archives/libvir-list/2007-July/msg00227.html

Richard W.M. Jones wrote:
Anthony Liguori wrote:
For instance, let's say at a university they use an ldap directory to authenticate users and they decide to implement a migration handler that uses that for authentication. They may name this "uni://" and it'll just work. How would they get at this in libvirt without exposing URIs directly?
My latest proposal[1] has a transport parameter (a string) which covers this, in as much as it would allow you to construct URIs which are:
<transport>://<hostname>:<port>
SSH requires:

ssh://[user@]hostname[:port]

So that wouldn't work :-(

Regards, Anthony Liguori

On Mon, Jul 16, 2007 at 11:30:33AM -0500, Anthony Liguori wrote:
Richard W.M. Jones wrote:
Anthony Liguori wrote:
For instance, let's say at a university they use an ldap directory to authenticate users and they decide to implement a migration handler that uses that for authentication. They may name this "uni://" and it'll just work. How would they get at this in libvirt without exposing URIs directly?
My latest proposal[1] has a transport parameter (a string) which covers this, in as much as it would allow you to construct URIs which are:
<transport>://<hostname>:<port>
SSH requires:
ssh://[user@]hostname[:port]
So that wouldn't work :-(
Sure it would - Rich was just showing a simplified syntax - the URI rules/spec allow for a username, and we already use this syntax with a username in the remote driver URIs, e.g.

$ virsh --connect qemu+ssh://root@celery.virt.boston.redhat.com/system list --all

Regards, Dan.

Daniel P. Berrange wrote:
On Mon, Jul 16, 2007 at 11:30:33AM -0500, Anthony Liguori wrote:
Richard W.M. Jones wrote:
Anthony Liguori wrote:
For instance, let's say at a university they use an ldap directory to authenticate users and they decide to implement a migration handler that uses that for authentication. They may name this "uni://" and it'll just work. How would they get at this in libvirt without exposing URIs directly?

My latest proposal[1] has a transport parameter (a string) which covers this, in as much as it would allow you to construct URIs which are:

<transport>://<hostname>:<port>

SSH requires:
ssh://[user@]hostname[:port]
So that wouldn't work :-(
Sure it would - rich was just showing simplified syntax - the URI rules/spec allow for a username and we already use this syntax with a username in the remote driver URIs. eg
$ virsh --connect qemu+ssh://root@celery.virt.boston.redhat.com/system list --all
Anthony is right that my revised proposal limits the migration to just three parameters: transport, hostname and port.

https://www.redhat.com/archives/libvir-list/2007-July/msg00227.html

Perhaps instead we should replace hostname with a URI parameter, understood as either a simple hostname, IP address, a "hostname:port" string [IPv6?], or a full URI. However I feel inevitably this is going to cause hypervisor dependencies to come into libvirt code, which should be avoidable.

Another choice might be to go back to the list of parameters again, and have configurable VIR_MIGRATE_TRANSPORT, VIR_MIGRATE_USERNAME and so on...

Rich.
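[Editor's note: a sketch of the "understood as either" idea above - accept a bare hostname, a "hostname:port" pair, or a full URI and normalise it before handing it to the driver. The names, defaults and the naive colon test (which would mis-handle bare IPv6 literals, as the [IPv6?] aside hints) are all illustrative.]

    #include <stdio.h>
    #include <string.h>

    static void
    normalise_destination (const char *dest, char *uri, size_t urilen,
                           const char *default_scheme, int default_port)
    {
        if (strstr (dest, "://")) {
            /* Already a full URI: pass it through untouched. */
            snprintf (uri, urilen, "%s", dest);
        } else if (strchr (dest, ':')) {
            /* "hostname:port": just prepend the default scheme. */
            snprintf (uri, urilen, "%s://%s", default_scheme, dest);
        } else {
            /* Bare hostname or IP: add both scheme and default port. */
            snprintf (uri, urilen, "%s://%s:%d", default_scheme, dest, default_port);
        }
    }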

Richard W.M. Jones wrote:
Daniel P. Berrange wrote:
On Mon, Jul 16, 2007 at 11:30:33AM -0500, Anthony Liguori wrote:
Richard W.M. Jones wrote:
Anthony Liguori wrote:
For instance, let's say at a university they use an ldap directory to authenticate users and they decide to implement a migration handler that uses that for authentication. They may name this "uni://" and it'll just work. How would they get at this in libvirt without exposing URIs directly?

My latest proposal[1] has a transport parameter (a string) which covers this, in as much as it would allow you to construct URIs which are:

<transport>://<hostname>:<port>

SSH requires:
ssh://[user@]hostname[:port]
So that wouldn't work :-(
Sure it would - rich was just showing simplified syntax - the URI rules/spec allow for a username and we already use this syntax with a username in the remote driver URIs. eg
$ virsh --connect qemu+ssh://root@celery.virt.boston.redhat.com/system list --all
Anthony is right that my revised proposal limits the migration to just three parameters: transport, hostname and port.
https://www.redhat.com/archives/libvir-list/2007-July/msg00227.html
Perhaps instead we should replace hostname with a URI parameter, understood as either a simple hostname, IP address, a "hostname:port" string [IPv6?], or a full URI. However I feel inevitably this is going to cause hypervisor dependencies to come into libvirt code, which should be avoidable.
I don't think it's really a bad thing. I think if someone is writing pretty straightforward code that does something like migrate a VM from one host to another, then that code ought to be portable between hypervisors with no effort. However, if they are doing something more sophisticated like using a custom migration transport and interacting with KVM through libvirt, then yes, it's hypervisor dependent. I don't really see this as a problem though.

Regards, Anthony Liguori

On Mon, Jul 16, 2007 at 06:34:57PM +0100, Richard W.M. Jones wrote:
Daniel P. Berrange wrote:
On Mon, Jul 16, 2007 at 11:30:33AM -0500, Anthony Liguori wrote:
Richard W.M. Jones wrote:
Anthony Liguori wrote:
For instance, let's say at a university they use an ldap directory to authenticate users and they decide to implement a migration handler that uses that for authentication. They may name this "uni://" and it'll just work. How would they get at this in libvirt without exposing URIs directly?

My latest proposal[1] has a transport parameter (a string) which covers this, in as much as it would allow you to construct URIs which are:

<transport>://<hostname>:<port>

SSH requires:
ssh://[user@]hostname[:port]
So that wouldn't work :-(
Sure it would - rich was just showing simplified syntax - the URI rules/spec allow for a username and we already use this syntax with a username in the remote driver URIs. eg
$ virsh --connect qemu+ssh://root@celery.virt.boston.redhat.com/system list --all
Anthony is right that my revised proposal limits the migration to just three parameters: transport, hostname and port.
https://www.redhat.com/archives/libvir-list/2007-July/msg00227.html
Perhaps instead we should replace hostname with a URI parameter, understood as either a simple hostname, IP address, a "hostname:port" string [IPv6?], or a full URI. However I feel inevitably this is going to cause hypervisor dependencies to come into libvirt code, which should be avoidable.
I think we can expose URIs without directly making the libvirt API hypervisor specific. Even though Anthony is talking with respect to QEMU/KVM there, the concept is reasonably applicable to Xen too - there's no reason XenD could not be enhanced to support migration over a user-defined transport.

So, when thinking about URIs for migration we could consider that there are 2 classes of URI:

 - Pre-defined 'standard' URIs - TCP, TCP with SSL/TLS, and SSH being the most obvious - we can easily define clear & portable semantics for these URIs

 - User-defined 'custom' URIs - these are really site/deployment specific, rather than hypervisor specific. ie, if someone implemented a way to deal with foo://bar/, they could provide impls for both Xen & QEMU

We should be able to guarantee that 'standard' URIs work forever, while for custom URIs we can allow them to be passed through, and not provide any guarantees about their behaviour/usage - in particular make no guarantees that a future libvirt won't define more 'standard' URI schemes which could potentially clash with user-defined custom schemes.

Dan.

Daniel P. Berrange wrote:
On Mon, Jul 16, 2007 at 06:34:57PM +0100, Richard W.M. Jones wrote:
Daniel P. Berrange wrote:
On Mon, Jul 16, 2007 at 11:30:33AM -0500, Anthony Liguori wrote:
Richard W.M. Jones wrote:
Anthony Liguori wrote:
For instance, let's say at a university they use an ldap directory to authenticate users and they decide to implement a migration handler that uses that for authentication. They may name this "uni://" and it'll just work. How would they get at this in libvirt without exposing URIs directly?
My latest proposal[1] has a transport parameter (a string) which covers this, in as much as it would allow you to construct URIs which are:
<transport>://<hostname>:<port>
SSH requires:
ssh://[user@]hostname[:port]
So that wouldn't work :-(
Sure it would - rich was just showing simplified syntax - the URI rules/spec allow for a username and we already use this syntax with a username in the remote driver URIs. eg
$ virsh --connect qemu+ssh://root@celery.virt.boston.redhat.com/system list --all
Anthony is right that my revised proposal limits the migration to just three parameters: transport, hostname and port.
https://www.redhat.com/archives/libvir-list/2007-July/msg00227.html
Perhaps instead we should replace hostname with a URI parameter, understood as either a simple hostname, IP address, a "hostname:port" string [IPv6?], or a full URI. However I feel inevitably this is going to cause hypervisor dependencies to come into libvirt code, which should be avoidable.
I think we can expose URIs without directly making the libvirt API hypervisor specific. Even though Anthony is talking with respect to QEMU/KVM there, the concepts is reasonably applicable to Xen too - there's no reason XenD could not be enhanced to support migration over a user-defined transport.
So, when thinking about URIs for migration we could consider that there are 2 classes of URI
- Pre-defined 'standard' URIs - TCP, TCP with SSL/TLS, and SSH being the most obvious - we can easily define clear & portable semantics for these URIs
- User-define 'custom' URIs - these are really site/deployment specific, rather than hypervisor specific. ie, if someone implemented a way to deal with foo://bar/, they could provide impls for both Xen & QEMU
How would a user define a custom URI? Regards, Anthony Liguori

On Mon, Jul 16, 2007 at 02:23:32PM -0500, Anthony Liguori wrote:
Daniel P. Berrange wrote:
On Mon, Jul 16, 2007 at 06:34:57PM +0100, Richard W.M. Jones wrote:
Daniel P. Berrange wrote:
On Mon, Jul 16, 2007 at 11:30:33AM -0500, Anthony Liguori wrote:
Richard W.M. Jones wrote:
Anthony Liguori wrote:
For instance, let's say at a university they use an ldap directory to authenticate users and they decide to implement a migration handler that uses that for authentication. They may name this "uni://" and it'll just work. How would they get at this in libvirt without exposing URIs directly?

My latest proposal[1] has a transport parameter (a string) which covers this, in as much as it would allow you to construct URIs which are:
<transport>://<hostname>:<port>
SSH requires:
ssh://[user@]hostname[:port]
So that wouldn't work :-(
Sure it would - rich was just showing simplified syntax - the URI rules/spec allow for a username and we already use this syntax with a username in the remote driver URIs. eg
$ virsh --connect qemu+ssh://root@celery.virt.boston.redhat.com/system list --all
Anthony is right that my revised proposal limits the migration to just three parameters: transport, hostname and port.
https://www.redhat.com/archives/libvir-list/2007-July/msg00227.html
Perhaps instead we should replace hostname with a URI parameter, understood as either a simple hostname, IP address, a "hostname:port" string [IPv6?], or a full URI. However I feel inevitably this is going to cause hypervisor dependencies to come into libvirt code, which should be avoidable.
I think we can expose URIs without directly making the libvirt API hypervisor specific. Even though Anthony is talking with respect to QEMU/KVM there, the concepts is reasonably applicable to Xen too - there's no reason XenD could not be enhanced to support migration over a user-defined transport.
So, when thinking about URIs for migration we could consider that there are 2 classes of URI
- Pre-defined 'standard' URIs - TCP, TCP with SSL/TLS, and SSH being the most obvious - we can easily define clear & portable semantics for these URIs
- User-define 'custom' URIs - these are really site/deployment specific, rather than hypervisor specific. ie, if someone implemented a way to deal with foo://bar/, they could provide impls for both Xen & QEMU
How would a user define a custom URI?
A good question, to which I don't have any answer :-) We could just say that any unrecognised URI is passed down to the underlying driver without libvirt applying any interpretation of its own.

Dan
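[Editor's note: a sketch of how that pass-through could look - libvirt gives defined semantics to a small set of standard schemes and hands anything else to the hypervisor driver without interpretation. The scheme list and the function name are illustrative only.]

    #include <string.h>

    static int
    is_standard_scheme (const char *uri)
    {
        static const char *standard[] = { "tcp://", "tls://", "ssh://" };
        size_t i;

        for (i = 0; i < sizeof standard / sizeof standard[0]; i++)
            if (strncmp (uri, standard[i], strlen (standard[i])) == 0)
                return 1;
        return 0;   /* custom scheme, e.g. "uni://": pass through untouched */
    }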

Daniel P. Berrange wrote:
On Mon, Jul 16, 2007 at 02:23:32PM -0500, Anthony Liguori wrote:
Daniel P. Berrange wrote:
On Mon, Jul 16, 2007 at 06:34:57PM +0100, Richard W.M. Jones wrote:
Daniel P. Berrange wrote:
On Mon, Jul 16, 2007 at 11:30:33AM -0500, Anthony Liguori wrote:
Richard W.M. Jones wrote:
> Anthony Liguori wrote:
>> For instance, let's say at a university they use an ldap directory to
>> authenticate users and they decide to implement a migration handler
>> that uses that for authentication. They may name this "uni://" and
>> it'll just work. How would they get at this in libvirt without
>> exposing URIs directly?
>
> My latest proposal[1] has a transport parameter (a string) which
> covers this, in as much as it would allow you to construct URIs which
> are:
>
> <transport>://<hostname>:<port>
>
> SSH requires:
ssh://[user@]hostname[:port]
So that wouldn't work :-(
Sure it would - Rich was just showing simplified syntax - the URI rules/spec allow for a username and we already use this syntax with a username in the remote driver URIs. e.g.
$ virsh --connect qemu+ssh://root@celery.virt.boston.redhat.com/system list --all
Anthony is right that my revised proposal limits the migration to just three parameters: transport, hostname and port.
https://www.redhat.com/archives/libvir-list/2007-July/msg00227.html
Perhaps instead we should replace hostname with a URI parameter, understood as either a simple hostname, an IP address, a "hostname:port" string [IPv6?], or a full URI. However, I feel this will inevitably cause hypervisor dependencies to creep into libvirt code, which should be avoidable.
I think we can expose URIs without directly making the libvirt API hypervisor specific. Even though Anthony is talking with respect to QEMU/KVM there, the concept is reasonably applicable to Xen too - there's no reason XenD could not be enhanced to support migration over a user-defined transport.
So, when thinking about URIs for migration we could consider that there are 2 classes of URI
- Pre-defined 'standard' URIs - TCP, TCP with SSL/TLS, and SSH being the most obvious - we can easily define clear & portable semantics for these URIs
- User-defined 'custom' URIs - these are really site/deployment specific, rather than hypervisor specific. i.e. if someone implemented a way to deal with foo://bar/, they could provide impls for both Xen & QEMU
How would a user define a custom URI?
A good question, to which I don't have any answer :-) Could just say that any unrecognised URI is passed down to the underlying driver without libvirt applying any interpretation of its own.
I would like that :-) Regards, Anthony Liguori
Dan
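One way to picture the "pass unrecognised URIs down to the driver" idea is a small scheme check on the libvirt side; the helper below is purely hypothetical and exists only to illustrate the proposed policy:

  #include <string.h>

  /* Hypothetical helper, not part of libvirt: classify a migration URI. */
  enum uri_class { URI_STANDARD, URI_CUSTOM };

  static enum uri_class
  classify_migration_uri (const char *uri)
  {
      /* 'Standard' schemes: libvirt would define portable semantics for
       * these and guarantee that they keep working. */
      if (strncmp (uri, "tcp://", 6) == 0 ||
          strncmp (uri, "tls://", 6) == 0 ||
          strncmp (uri, "ssh://", 6) == 0)
          return URI_STANDARD;

      /* Anything else ("uni://", "foo://", ...) is site/deployment specific
       * and would be passed to the underlying driver uninterpreted. */
      return URI_CUSTOM;
  }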

On Mon, Jul 16, 2007 at 08:20:54AM -0700, Dan Smith wrote:
DB> I don't. The API should be hypervisor agnostic. Needing to pass
DB> HV specific attributes to make it work shows we have failed.
In that case, haven't we already failed with virDomainCreate() since it takes hypervisor-specific XML?
The goal is still to try to coerce all common behaviour into APIs that are as generic as possible. My initial suggestion carried just an extra flags int to hold options (like live vs. non-live migration). Maybe this won't be sufficient - Rich seems to think so - but I hope we can avoid morphing APIs; we did it once (and with XML). The real goal of a unified API is that an app like virt-manager doesn't need custom code to support a new hypervisor. Right, domain creation is unfortunately one of the parts where one needs knowledge of the underlying engine in the app, but let's try to limit that as much as possible (as long as the resulting APIs still make sense and are usable).
Doesn't the presence of VIR_DEVICE_RW_FORCE imply knowledge of Xen-specific behavior?
Hum, no. I think when using NFS (or any other kind of stateless network protocol) it may be important to indicate to the virtualization layer that this can be shared, because the virtualization system may not be able to guess it.
How would you handle someone wanting to use tcp:// or ssh:// with qemu?
don't we have qemu+ssh://host/system vs. qemu://host/system kind of connections ? Or maybe I'm missing something... DV -- Red Hat Virtualization group http://redhat.com/virtualization/ Daniel Veillard | virtualization library http://libvirt.org/ veillard@redhat.com | libxml GNOME XML XSLT toolkit http://xmlsoft.org/ http://veillard.com/ | Rpmfind RPM search engine http://rpmfind.net/

On Mon, Jul 16, 2007 at 11:40:30AM -0400, Daniel Veillard wrote:
On Mon, Jul 16, 2007 at 08:20:54AM -0700, Dan Smith wrote:
DB> I don't. The API should be hypervisor agnostic. Needing to pass
DB> HV specific attributes to make it work shows we have failed.
In that case, haven't we already failed with virDomainCreate() since it takes hypervisor-specific XML?
The goal is still to try to coerce all common behaviour into APIs that are as generic as possible. My initial suggestion carried just an extra flags int to hold options (like live vs. non-live migration). Maybe this won't be sufficient - Rich seems to think so - but I hope we can avoid morphing APIs; we did it once (and with XML). The real goal of a unified API is that an app like virt-manager doesn't need custom code to support a new hypervisor. Right, domain creation is unfortunately one of the parts where one needs knowledge of the underlying engine in the app, but let's try to limit that as much as possible (as long as the resulting APIs still make sense and are usable).
The 'capabilities' XML provides a way for virt-manager to figure out various bits of metadata for domain creation in a hypervisor agnostic manner. We just haven't updated virt-manager to use it yet. And again the difference is between hypervisor specific data representation (we avoid that), and hypervisor agnostic representation, but with varying sets of allowed data. The capabilities API allows you to determine the allowed data per driver when creating a guest. Dan -- |=- Red Hat, Engineering, Emerging Technologies, Boston. +1 978 392 2496 -=| |=- Perl modules: http://search.cpan.org/~danberr/ -=| |=- Projects: http://freshmeat.net/~danielpb/ -=| |=- GnuPG: 7D3B9505 F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 -=|

On Mon, Jul 16, 2007 at 01:47:38PM +0100, Richard W.M. Jones wrote:
To cut to the chase, this is the proposed additional libvirt call to support migration. Please also read my explanation below.
/** * virDomainMigrate: * @domain: a domain object * @dconn: destination host (a connection object) * @flags: flags * @dname: (optional) rename domain to this at destination * @hostname: (optional) remote hostname as seen from the source host * @params: linked list of hypervisor-specific parameters * * Migrate the domain object from its current host to the destination * host given by dconn (a connection to the destination host). * * Flags may be one of more of the following: * VIR_MIGRATE_LIVE Attempt a live migration. * * If a hypervisor supports renaming domains during migration, * then you may set the dname parameter to the new name (otherwise * it keeps the same name). If this is not supported by the * hypervisor, dname must be NULL or else you will get an error. * * Since typically the two hypervisors connect directly to each * other in order to perform the migration, you may need to specify * a hostname, which is the hostname or IP address of the destination * host as seen from the source host. If in doubt, leave this as * NULL and libvirt will attempt to work out the correct hostname.
I don't see the point in this. Libvirt already knows both hostnames of the source & destination.
* Params is a linked list of hypervisor-specific parameters. Each * element is a virMigrateParamPtr containing the following fields: * name Parameter name being set. * value A union expressing the value. * value.strv A string value. * value.longv A long value. * next Next in linked list (or NULL for end of list). * * Parameter names for Xen are: * VIR_MIGRATE_XEN_PORT (long) Override the default port number.
libvirt should either know this, or be able to ask the underlying HV what port to use. I don't think we need to, or should put that burden on the app using the API.
* VIR_MIGRATE_XEN_RESOURCE (long) Set maximum resource usage (Mbps).
This doesn't have to be Xen specific - any app implementing migration can have the ability to throttle bandwidth.
What I think will be common features are: * live migration
Probably need one of the 'flags' to indicate whether to do live vs offline migration.
* direct host<->host connections * renaming domains during migration
Should we return a new 'virDomainPtr' object for the newly migrated domain, associated with the destination virConnectPtr object ?
The explicit parameters include a general "flags" parameter, which we can extend with other boolean flags later. For host<->host connections you'll want some way to specify the hostname / IP address of the destination host as seen at the source. In the remote management case it's not always so easy to work this out. We can try using virConnectGetHostname, but we also allow the caller to override.
I really don't like the idea of a hostname - if libvirt is unable to work it out under some circumstances, how does the app know what those circumstances are ? i.e. how does it know whether it needs to specify the hostname or not ? I'd rather do without this & make it 'just work', even if we need more hard work in the underlying driver impls.
On the other hand, there will be some hypervisor-specific features, and these are enabled through a linked list of parameters. For Xen these include setting port and resource usage. I guess other hypervisors will have their own parameters -- eg. security settings.
I don't like exposing hypervisor specific requirements here - rather defeats the purpose of having a hypervisor agnostic API. Dan -- |=- Red Hat, Engineering, Emerging Technologies, Boston. +1 978 392 2496 -=| |=- Perl modules: http://search.cpan.org/~danberr/ -=| |=- Projects: http://freshmeat.net/~danielpb/ -=| |=- GnuPG: 7D3B9505 F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 -=|

Richard W.M. Jones wrote:
To cut to the chase, this is the proposed additional libvirt call to support migration. Please also read my explanation below.
/** * virDomainMigrate: * @domain: a domain object * @dconn: destination host (a connection object) * @flags: flags * @dname: (optional) rename domain to this at destination * @hostname: (optional) remote hostname as seen from the source host * @params: linked list of hypervisor-specific parameters * * Migrate the domain object from its current host to the destination * host given by dconn (a connection to the destination host). * * Flags may be one of more of the following: * VIR_MIGRATE_LIVE Attempt a live migration. * * If a hypervisor supports renaming domains during migration, * then you may set the dname parameter to the new name (otherwise * it keeps the same name). If this is not supported by the * hypervisor, dname must be NULL or else you will get an error. * * Since typically the two hypervisors connect directly to each * other in order to perform the migration, you may need to specify * a hostname, which is the hostname or IP address of the destination * host as seen from the source host. If in doubt, leave this as * NULL and libvirt will attempt to work out the correct hostname. * * Params is a linked list of hypervisor-specific parameters. Each * element is a virMigrateParamPtr containing the following fields: * name Parameter name being set. * value A union expressing the value. * value.strv A string value. * value.longv A long value. * next Next in linked list (or NULL for end of list). * * Parameter names for Xen are: * VIR_MIGRATE_XEN_PORT (long) Override the default port number. * VIR_MIGRATE_XEN_RESOURCE (long) Set maximum resource usage (Mbps). * * Set params to NULL if you do not want to pass any hypervisor-specific * parameters. * * Returns the new domain object if the migration was successful, * or NULL in case of error. */
As discussed previously on this list, you need to have libvirt connections to both the source and destination hosts.
I've tried to separate out what I believe will be common features of migration across hypervisors, and what is currently supported by Xen.
What I think will be common features are: * live migration * direct host<->host connections * renaming domains during migration
Absolutely not! The issue only exists if you allow the guest to exist on both systems during the migration. In KVM, this shouldn't be the case. The domain on the target should only ever be visible after the migration has successfully completed. Localhost migration works fine today in KVM without domain renaming. Regards, Anthony Liguori
These are supported with explicit parameters. Drivers should check the parameters and any which are not supported should be rejected (eg. Xen cannot rename a domain when it is migrating, although this seems like it ought to be a common thing to want to do -- to prevent name clashes on the destination host which would otherwise make it impossible to migrate a domain).
The explicit parameters include a general "flags" parameter, which we can extend with other boolean flags later. For host<->host connections you'll want some way to specify the hostname / IP address of the destination host as seen at the source. In the remote management case it's not always so easy to work this out. We can try using virConnectGetHostname, but we also allow the caller to override.
On the other hand, there will be some hypervisor-specific features, and these are enabled through a linked list of parameters. For Xen these include setting port and resource usage. I guess other hypervisors will have their own parameters -- eg. security settings.
In the current (Xen) implementation, any parameters which it doesn't understand are rejected with VIR_ERR_NO_SUPPORT.
Rich.
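For illustration, a caller using this proposal would build the hypervisor-specific parameter list roughly as follows; this is only a sketch based on the virMigrateParam struct defined in the patch below, and the port and bandwidth values are made up:

  struct virMigrateParam port, resource;
  virDomainPtr ddom;

  resource.name        = VIR_MIGRATE_XEN_RESOURCE;
  resource.value.longv = 100;        /* cap the migration at 100 Mbps */
  resource.next        = NULL;

  port.name        = VIR_MIGRATE_XEN_PORT;
  port.value.longv = 8002;           /* override the default port */
  port.next        = &resource;

  ddom = virDomainMigrate (domain, dconn, VIR_MIGRATE_LIVE,
                           NULL,     /* keep the same name */
                           NULL,     /* let libvirt work out the hostname */
                           &port);   /* head of the parameter list */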
------------------------------------------------------------------------
Index: include/libvirt/libvirt.h =================================================================== RCS file: /data/cvs/libvirt/include/libvirt/libvirt.h,v retrieving revision 1.49 diff -u -p -r1.49 libvirt.h --- include/libvirt/libvirt.h 9 Jul 2007 12:41:30 -0000 1.49 +++ include/libvirt/libvirt.h 16 Jul 2007 12:31:44 -0000 @@ -209,6 +209,30 @@ int virDomainSetSchedulerParameters (vir virSchedParameterPtr params, int nparams);
+/* Hypervisor-specific migration parameters. */
+enum virMigrateParamName {
+    VIR_MIGRATE_XEN_PORT = 1,
+    VIR_MIGRATE_XEN_RESOURCE = 2
+};
+
+struct virMigrateParam {
+    struct virMigrateParam *next;
+    enum virMigrateParamName name;
+    union {
+        const char *strv;
+        long longv;
+    } value;
+};
+typedef struct virMigrateParam *virMigrateParamPtr;
+
+/* Domain migration flags. */
+#define VIR_MIGRATE_LIVE 1
+
+/* Domain migration. */
+virDomainPtr virDomainMigrate (virDomainPtr domain, virConnectPtr dconn,
+                               unsigned long flags, const char *dname,
+                               const char *hostname, virMigrateParamPtr params);
+
 /**
  * VIR_NODEINFO_MAXCPUS:
  * @nodeinfo: virNodeInfo instance
Index: include/libvirt/libvirt.h.in
===================================================================
RCS file: /data/cvs/libvirt/include/libvirt/libvirt.h.in,v
retrieving revision 1.30
diff -u -p -r1.30 libvirt.h.in
--- include/libvirt/libvirt.h.in	26 Jun 2007 11:42:46 -0000	1.30
+++ include/libvirt/libvirt.h.in	16 Jul 2007 12:31:45 -0000
@@ -209,6 +209,30 @@ int virDomainSetSchedulerParameters (vir
                                          virSchedParameterPtr params,
                                          int nparams);
+/* Hypervisor-specific migration parameters. */ +enum virMigrateParamName { + VIR_MIGRATE_XEN_PORT = 1, + VIR_MIGRATE_XEN_RESOURCE = 2 +}; + +struct virMigrateParam { + struct virMigrateParam *next; + enum virMigrateParamName name; + union { + const char *strv; + long longv; + } value; +}; +typedef struct virMigrateParam *virMigrateParamPtr; + + /* Domain migration flags. */ +#define VIR_MIGRATE_LIVE 1 + +/* Domain migration. */ +virDomainPtr virDomainMigrate (virDomainPtr domain, virConnectPtr dconn, + unsigned long flags, const char *dname, + const char *hostname, virMigrateParamPtr params); + /** * VIR_NODEINFO_MAXCPUS: * @nodeinfo: virNodeInfo instance Index: src/driver.h =================================================================== RCS file: /data/cvs/libvirt/src/driver.h,v retrieving revision 1.30 diff -u -p -r1.30 driver.h --- src/driver.h 26 Jun 2007 22:56:14 -0000 1.30 +++ src/driver.h 16 Jul 2007 12:31:45 -0000 @@ -180,6 +180,15 @@ typedef int virSchedParameterPtr params, int nparams);
+typedef virDomainPtr + (*virDrvDomainMigrate) + (virDomainPtr domain, + virConnectPtr dconn, + unsigned long flags, + const char *dname, + const char *hostname, + virMigrateParamPtr params); + typedef struct _virDriver virDriver; typedef virDriver *virDriverPtr;
@@ -244,6 +253,7 @@ struct _virDriver { virDrvDomainGetSchedulerType domainGetSchedulerType; virDrvDomainGetSchedulerParameters domainGetSchedulerParameters; virDrvDomainSetSchedulerParameters domainSetSchedulerParameters; + virDrvDomainMigrate domainMigrate; };
typedef int Index: src/libvirt.c =================================================================== RCS file: /data/cvs/libvirt/src/libvirt.c,v retrieving revision 1.88 diff -u -p -r1.88 libvirt.c --- src/libvirt.c 12 Jul 2007 08:34:51 -0000 1.88 +++ src/libvirt.c 16 Jul 2007 12:31:47 -0000 @@ -1662,6 +1662,96 @@ virDomainGetXMLDesc(virDomainPtr domain, }
/** + * virDomainMigrate: + * @domain: a domain object + * @dconn: destination host (a connection object) + * @flags: flags + * @dname: (optional) rename domain to this at destination + * @hostname: (optional) remote hostname as seen from the source host + * @params: linked list of hypervisor-specific parameters + * + * Migrate the domain object from its current host to the destination + * host given by dconn (a connection to the destination host). + * + * Flags may be one of more of the following: + * VIR_MIGRATE_LIVE Attempt a live migration. + * + * If a hypervisor supports renaming domains during migration, + * then you may set the dname parameter to the new name (otherwise + * it keeps the same name). If this is not supported by the + * hypervisor, dname must be NULL or else you will get an error. + * + * Since typically the two hypervisors connect directly to each + * other in order to perform the migration, you may need to specify + * a hostname, which is the hostname or IP address of the destination + * host as seen from the source host. If in doubt, leave this as + * NULL and libvirt will attempt to work out the correct hostname. + * + * Params is a linked list of hypervisor-specific parameters. Each + * element is a virMigrateParamPtr containing the following fields: + * name Parameter name being set. + * value A union expressing the value. + * value.strv A string value. + * value.longv A long value. + * next Next in linked list (or NULL for end of list). + * + * Parameter names for Xen are: + * VIR_MIGRATE_XEN_PORT (long) Override the default port number. + * VIR_MIGRATE_XEN_RESOURCE (long) Set maximum resource usage (Mbps). + * + * Set params to NULL if you do not want to pass any hypervisor-specific + * parameters. + * + * Returns the new domain object if the migration was successful, + * or NULL in case of error. + */ +virDomainPtr +virDomainMigrate (virDomainPtr domain, + virConnectPtr dconn, + unsigned long flags, + const char *dname, + const char *hostname, + virMigrateParamPtr params) +{ + virConnectPtr conn; + virDomainPtr ddomain; + char *nchostname = NULL; + DEBUG("domain=%p, dconn=%p, flags=%lu, dname=%s, hostname=%s, params=%p", + domain, dconn, flags, dname, hostname, params); + + if (!VIR_IS_DOMAIN (domain)) { + virLibDomainError(domain, VIR_ERR_INVALID_DOMAIN, __FUNCTION__); + return NULL; + } + conn = domain->conn; /* Source connection. */ + if (!VIR_IS_CONNECT (dconn)) { + virLibConnError (conn, VIR_ERR_INVALID_CONN, __FUNCTION__); + return NULL; + } + + /* Check that migration is supported. */ + if (!conn->driver->domainMigrate) { + virLibConnError (conn, VIR_ERR_NO_SUPPORT, __FUNCTION__); + return NULL; + } + + /* Synthesize a hostname if one is not given. */ + if (!hostname) { + nchostname = virConnectGetHostname (dconn); + if (!nchostname) return NULL; + } + + /* Try to migrate. */ + ddomain = conn->driver->domainMigrate (domain, dconn, flags, + dname, + hostname ? 
hostname : nchostname, + params); + + if (nchostname) free (nchostname); + return ddomain; +} + +/** * virNodeGetInfo: * @conn: pointer to the hypervisor connection * @info: pointer to a virNodeInfo structure allocated by the user Index: src/libvirt_sym.version =================================================================== RCS file: /data/cvs/libvirt/src/libvirt_sym.version,v retrieving revision 1.25 diff -u -p -r1.25 libvirt_sym.version --- src/libvirt_sym.version 26 Jun 2007 22:56:14 -0000 1.25 +++ src/libvirt_sym.version 16 Jul 2007 12:31:47 -0000 @@ -69,6 +69,8 @@ virDomainAttachDevice; virDomainDetachDevice;
+ virDomainMigrate; + virNetworkGetConnect; virConnectNumOfNetworks; virConnectListNetworks; Index: src/qemu_driver.c =================================================================== RCS file: /data/cvs/libvirt/src/qemu_driver.c,v retrieving revision 1.8 diff -u -p -r1.8 qemu_driver.c --- src/qemu_driver.c 12 Jul 2007 15:09:01 -0000 1.8 +++ src/qemu_driver.c 16 Jul 2007 12:31:49 -0000 @@ -2507,6 +2507,7 @@ static virDriver qemuDriver = { NULL, /* domainGetSchedulerType */ NULL, /* domainGetSchedulerParameters */ NULL, /* domainSetSchedulerParameters */ + NULL, /* domainMigrate */ };
static virNetworkDriver qemuNetworkDriver = { Index: src/test.c =================================================================== RCS file: /data/cvs/libvirt/src/test.c,v retrieving revision 1.41 diff -u -p -r1.41 test.c --- src/test.c 6 Jul 2007 15:02:09 -0000 1.41 +++ src/test.c 16 Jul 2007 12:31:50 -0000 @@ -144,6 +144,7 @@ static virDriver testDriver = { NULL, /* domainGetSchedulerType */ NULL, /* domainGetSchedulerParameters */ NULL, /* domainSetSchedulerParameters */ + NULL, /* domainMigrate */ };
/* Per-connection private data. */ Index: src/xen_unified.c =================================================================== RCS file: /data/cvs/libvirt/src/xen_unified.c,v retrieving revision 1.17 diff -u -p -r1.17 xen_unified.c --- src/xen_unified.c 12 Jul 2007 08:34:51 -0000 1.17 +++ src/xen_unified.c 16 Jul 2007 12:31:50 -0000 @@ -791,6 +791,24 @@ xenUnifiedDomainDumpXML (virDomainPtr do return NULL; }
+static virDomainPtr +xenUnifiedDomainMigrate (virDomainPtr dom, + virConnectPtr dconn, + unsigned long flags, + const char *dname, + const char *hostname, + virMigrateParamPtr params) +{ + GET_PRIVATE(dom->conn); + + if (priv->opened[XEN_UNIFIED_XEND_OFFSET]) + return xenDaemonDomainMigrate (dom, dconn, flags, + dname, hostname, params); + + xenUnifiedError (dom->conn, VIR_ERR_NO_SUPPORT, __FUNCTION__); + return NULL; +} + static int xenUnifiedListDefinedDomains (virConnectPtr conn, char **const names, int maxnames) @@ -1002,6 +1020,7 @@ static virDriver xenUnifiedDriver = { .domainGetSchedulerType = xenUnifiedDomainGetSchedulerType, .domainGetSchedulerParameters = xenUnifiedDomainGetSchedulerParameters, .domainSetSchedulerParameters = xenUnifiedDomainSetSchedulerParameters, + .domainMigrate = xenUnifiedDomainMigrate, };
/** Index: src/xend_internal.c =================================================================== RCS file: /data/cvs/libvirt/src/xend_internal.c,v retrieving revision 1.129 diff -u -p -r1.129 xend_internal.c --- src/xend_internal.c 9 Jul 2007 11:24:52 -0000 1.129 +++ src/xend_internal.c 16 Jul 2007 12:31:52 -0000 @@ -3126,6 +3126,77 @@ xenDaemonDetachDevice(virDomainPtr domai }
+virDomainPtr
+xenDaemonDomainMigrate (virDomainPtr domain,
+                        virConnectPtr dconn,
+                        unsigned long flags,
+                        const char *dname,
+                        const char *hostname,
+                        virMigrateParamPtr params)
+{
+    /* Upper layers have already checked domain, dconn, etc. */
+    virConnectPtr conn = domain->conn;
+    /* NB: Passing port=0, resource=0 to xend means it ignores
+     * those parameters.  However this is somewhat specific to
+     * the internals of the xend Python code. (XXX).
+     */
+    char port[16] = "0";
+    char resource[16] = "0";
+    char live[16] = "0";
+    int ret;
+
+    /* Xen doesn't support renaming domains during migration. */
+    if (dname) {
+        virXendError (conn, VIR_ERR_NO_SUPPORT,
+                      "xenDaemonDomainMigrate: Xen does not support renaming domains during migration");
+        return NULL;
+    }
+
+    /* Check the parameters and set variables as necessary. */
+    for (; params; params = params->next) {
+        switch (params->name) {
+        case VIR_MIGRATE_XEN_PORT:
+            snprintf (port, sizeof port, "%ld", params->value.longv);
+            break;
+        case VIR_MIGRATE_XEN_RESOURCE:
+            snprintf (resource, sizeof resource, "%ld", params->value.longv);
+            break;
+        default:
+            virXendError (conn, VIR_ERR_NO_SUPPORT,
+                          "xenDaemonDomainMigrate: unsupported parameter");
+            return NULL;
+        }
+    }
+
+    /* Check the flags. */
+    if ((flags & VIR_MIGRATE_LIVE)) {
+        strcpy (live, "1");
+        flags &= ~VIR_MIGRATE_LIVE;
+    }
+    if (flags != 0) {
+        virXendError (conn, VIR_ERR_NO_SUPPORT,
+                      "xenDaemonDomainMigrate: unsupported flag");
+        return NULL;
+    }
+
+    /* Make the call. */
+    ret = xend_op (domain->conn, domain->name,
+                   "op", "migrate",
+                   "destination", hostname,
+                   "live", live,
+                   "resource", resource,
+                   "port", port,
+                   NULL);
+    if (ret == -1)
+        return NULL;
+
+    printf ("migration op returned\n");
+
+    /* Look for the new domain on the destination host. */
+    return xenDaemonLookupByName (dconn, domain->name);
+}
+
 virDomainPtr xenDaemonDomainDefineXML(virConnectPtr conn, const char *xmlDesc) {
     int ret;
     char *sexpr;
Index: src/xend_internal.h
===================================================================
RCS file: /data/cvs/libvirt/src/xend_internal.h,v
retrieving revision 1.31
diff -u -p -r1.31 xend_internal.h
--- src/xend_internal.h	6 Jul 2007 15:11:22 -0000	1.31
+++ src/xend_internal.h	16 Jul 2007 12:31:53 -0000
@@ -219,6 +219,7 @@ int xenDaemonInit (void);
 virDomainPtr xenDaemonLookupByID(virConnectPtr conn, int id);
 virDomainPtr xenDaemonLookupByUUID(virConnectPtr conn, const unsigned char *uuid);
 virDomainPtr xenDaemonLookupByName(virConnectPtr conn, const char *domname);
+virDomainPtr xenDaemonDomainMigrate (virDomainPtr domain, virConnectPtr dconn, unsigned long flags, const char *dname, const char *hostname, virMigrateParamPtr params);
#ifdef __cplusplus }

This is a test program. You can fiddle with the various strings to control what domain it migrates. (If you leave the program as-is, then it will do a Xen migration from localhost to localhost, which screws up xend). Rich. -- Emerging Technologies, Red Hat - http://et.redhat.com/~rjones/ Registered Address: Red Hat UK Ltd, Amberley Place, 107-111 Peascod Street, Windsor, Berkshire, SL4 1TE, United Kingdom. Registered in England and Wales under Company Registration No. 03798903
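The test program attachment itself is not reproduced in this archive; a minimal sketch of what such a test might look like against the API in the patch above (the connection URIs and domain name are placeholders, not the original program) is:

  #include <stdio.h>
  #include <stdlib.h>
  #include <libvirt/libvirt.h>

  int
  main (void)
  {
      virConnectPtr src = virConnectOpen ("xen:///");
      virConnectPtr dst = virConnectOpen ("xen://desthost/");  /* placeholder destination */
      virDomainPtr dom, ddom;

      if (!src || !dst) {
          fprintf (stderr, "failed to open connections\n");
          exit (1);
      }

      dom = virDomainLookupByName (src, "testdomain");
      if (!dom) {
          fprintf (stderr, "domain not found on source host\n");
          exit (1);
      }

      /* No rename, no explicit hostname, no hypervisor-specific params. */
      ddom = virDomainMigrate (dom, dst, VIR_MIGRATE_LIVE, NULL, NULL, NULL);
      if (!ddom) {
          fprintf (stderr, "migration failed\n");
          exit (1);
      }

      printf ("migrated: new domain id = %u\n", virDomainGetID (ddom));
      return 0;
  }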

I think the conversation is heading towards a consensus around an API looking like the one below. Let me know overnight if there are problems, otherwise I'll produce an implementation for consideration tomorrow morning.

A hypervisor-agnostic call would look like this:

  ddom = virDomainMigrate (dom, dconn, VIR_MIGRATE_LIVE, NULL, NULL, 0);

A hypervisor-specific call would look like this:

  ddom = virDomainMigrate (dom, dconn, VIR_MIGRATE_LIVE, NULL,
                           "ssh://root@dest/", 10);

/**
 * virDomainMigrate:
 * @domain: a domain object
 * @dconn: destination host (a connection object)
 * @flags: flags
 * @dname: (optional) rename domain to this at destination
 * @uri: (optional) dest hostname/URI as seen from the source host
 * @resource: (optional) specify resource limit in Mbps
 *
 * Migrate the domain object from its current host to the destination
 * host given by dconn (a connection to the destination host).
 *
 * Flags may be one or more of the following:
 *   VIR_MIGRATE_LIVE   Attempt a live migration.
 *
 * If a hypervisor supports renaming domains during migration,
 * then you may set the dname parameter to the new name (otherwise
 * it keeps the same name). If this is not supported by the
 * hypervisor, dname must be NULL or else you will get an error.
 *
 * Since typically the two hypervisors connect directly to each
 * other in order to perform the migration, you may need to specify
 * a path from the source to the destination. This is the purpose
 * of the uri parameter. If uri is NULL, then libvirt will try to
 * find the best method. Uri may specify the hostname or IP address
 * of the destination host as seen from the source. Or uri may be
 * a URI giving transport, hostname, user, port, etc. in the usual
 * form. Refer to driver documentation for the particular URIs
 * supported.
 *
 * The maximum bandwidth (in Mbps) that will be used to do migration
 * can be specified with the resource parameter. If set to 0,
 * libvirt will choose a suitable default. Some hypervisors do
 * not support this feature and will return an error if resource
 * is not 0.
 *
 * Returns the new domain object if the migration was successful,
 * or NULL in case of error.
 */

virConnectGetCapabilities[1] will be extended to return information about supported values for flags, domain renaming, URI formats and whether the hypervisor supports the resource parameter. My suggested extension would be:

  <capabilities>
    <host>
      <migration_features>
        <live/>              <!-- live migration supported -->
        <resource/>          <!-- resource limits supported -->
        <domain_rename/>     <!-- can rename domains -->
        <uri_transports>
          <uri_transport>ssh</uri_transport>
          <uri_transport>tcp</uri_transport>
          (etc)
        </uri_transports>
      </migration_features>
    </host>
  </capabilities>

(I think that's enough for now.)

Rich.

[1] http://libvirt.org/format.html#Capa1

-- Emerging Technologies, Red Hat - http://et.redhat.com/~rjones/ Registered Address: Red Hat UK Ltd, Amberley Place, 107-111 Peascod Street, Windsor, Berkshire, SL4 1TE, United Kingdom. Registered in England and Wales under Company Registration No. 03798903

On Mon, Jul 16, 2007 at 10:29:36PM +0100, Richard W.M. Jones wrote:
I think the conversation is heading towards a consensus around an API looking like that below. Let me know overnight if there are problems, otherwise I'll produce an implementation for consideration tomorrow morning.
A hypervisor-agnostic call would look like this:
ddom = virDomainMigrate (dom, dconn, VIR_MIGRATE_LIVE, NULL, NULL, 0);
A hypervisor-specific call would look like this:
ddom = virDomainMigrate (dom, dconn, VIR_MIGRATE_LIVE, NULL, "ssh://root@dest/", 10);
In terms of the set of data we need for a basic impl, I think these are reasonable. That said, I've been thinking about this in relation to the earlier points in this thread about cancellation, progress info, etc. I'm wondering if we would be well served by introducing a new object to co-ordinate the whole thing.

  /* Prepare for migration to dconn */
  virDomainMigratePtr mig = virDomainMigratePrepare(mig ? dom : dom, dconn);

  /* Optionally specify a custom transport */
  virDomainMigrateTransport(mig, "ssh://root@dest/");

  /* Optionally throttle */
  virDomainMigrateBandwidth(mig, 10);

  /* Perform the migration */
  virDomainMigrateRun(mig, flags);

  /* Release resources */
  virDomainMigrateFree(mig);

This would make it easier for us to extend the capabilities in the future, e.g. adding more properties, adding APIs to run async, or getting progress info, etc. For example, if flags request ASYNC, then one could imagine cancellation via

  virDomainMigrateAbort(mig);

or polling for completion with

  virDomainMigrateStatus(mig);

Finally, we could have a convenience API

  virDomainMigrate(dom, dconn);

for apps which don't care about custom transports, etc.
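As a usage sketch of the object-based shape proposed above (none of these calls exist yet; MIGRATE_IN_PROGRESS, VIR_MIGRATE_ASYNC and user_cancelled() are invented here purely for illustration):

  virDomainMigratePtr mig = virDomainMigratePrepare (dom, dconn);

  virDomainMigrateTransport (mig, "ssh://root@dest/");   /* optional */
  virDomainMigrateBandwidth (mig, 10);                   /* optional, Mbps */

  if (virDomainMigrateRun (mig, VIR_MIGRATE_LIVE | VIR_MIGRATE_ASYNC) == 0) {
      /* Poll for completion, cancelling if the user asks for it. */
      while (virDomainMigrateStatus (mig) == MIGRATE_IN_PROGRESS) {
          if (user_cancelled ())
              virDomainMigrateAbort (mig);
          sleep (1);
      }
  }

  virDomainMigrateFree (mig);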
virConnectGetCapabilities[1] will be extended to return information about supported values for flags, domain renaming, URI formats and whether the hypervisor supports the resource parameter. My suggested extension would be:
  <capabilities>
    <host>
      <migration_features>
        <live/>              <!-- live migration supported -->
        <resource/>          <!-- resource limits supported -->
        <domain_rename/>     <!-- can rename domains -->
        <uri_transports>
          <uri_transport>ssh</uri_transport>
          <uri_transport>tcp</uri_transport>
          (etc)
        </uri_transports>
      </migration_features>
    </host>
  </capabilities>
Seems like a reasonable suggestion to add this to the capabilities XML to allow detection of host / HV support. Dan. -- |=- Red Hat, Engineering, Emerging Technologies, Boston. +1 978 392 2496 -=| |=- Perl modules: http://search.cpan.org/~danberr/ -=| |=- Projects: http://freshmeat.net/~danielpb/ -=| |=- GnuPG: 7D3B9505 F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 -=|
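If the capabilities XML does grow a <migration_features> block along these lines, a client could probe it before attempting a live migration. A rough sketch follows; a real client would parse the XML properly rather than use strstr, and the element names are still only a proposal:

  #include <stdlib.h>
  #include <string.h>
  #include <libvirt/libvirt.h>

  static int
  host_supports_live_migration (virConnectPtr conn)
  {
      char *caps = virConnectGetCapabilities (conn);
      int supported;

      if (!caps)
          return 0;
      supported = strstr (caps, "<migration_features>") != NULL &&
                  strstr (caps, "<live/>") != NULL;
      free (caps);
      return supported;
  }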

Daniel P. Berrange wrote:
In terms of the set of data we need for a basic impl, I think these are reasonable. That said, I've been thinking about this in relation to the earlier points in this thread about cancellation, and progress info, etc. I'm wondering if we would be well served by introducing a new object to co-ordinate the whole thing.
/* Prepare for migration to dconn */ virDomainMigratePtr mig = virDomainMigratePrepare(dom, dconn)
/* Optionally specify a custom transport */ virDomainMigrateTransport(mig, "ssh://root@dest/");
/* Optionally throttle */ virDomainMigrateBandwidth(mig, 10);
/* Perform the migration */ virDomainMigrateRun(mig, flags);
/* Release resources */ virDomainMigrateFree(mig);
This would make it easier for us to extend the capabilities in the future, e.g. adding more properties, adding APIs to run async, or getting progress info, etc.
eg, if flags request ASYNC, then one could imagine cancellation via
virDomainMigrateAbort(mig);
Or to poll for completion...
virDomainMigrateStatus(mig);
Finally we could have a convenience API
virDomainMigrate(dom, dconn);
For apps which don't care about custom transports, etc, etc
Totally off on a tangent here, but a trick from functional programming is to return a "suspension". The call still looks like this:

  ddom = Domain.migrate dom dconn;

The trick is that the call returns immediately, and 'ddom' isn't necessarily a domain object, at least not until you try to use it. For example, if the next statement was:

  printf "new domain id = %d\n" (Domain.id ddom);

then the call to Domain.id ddom would (in the jargon) "force the suspension" -- basically cause the program to wait until the domain has migrated before returning the ID.

With suspensions you can examine their state _without_ forcing them. For example:

  while Domain.is_migrating ddom; do
    printf "Domain still migrating ... %d percent done.\n"
      (Domain.migration_percent ddom);
    sleep 1;
  done;
  printf "Domain migrated, ID = %d\n" (Domain.id ddom)

(Of course error handling is omitted here, but in a functional language it would just use exceptions. The C equivalent is more involved because you have to explicitly check for errors at every call.)

The advantage of suspensions is that in the simple case where you don't care about fancy progress bars, the code looks exactly the same as normal.

Rich.
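For comparison, a very rough C analogue of the suspension idea might look like the sketch below; nothing here is a real or proposed libvirt API, it only illustrates the difference between inspecting and forcing the suspended result:

  #include <unistd.h>
  #include <libvirt/libvirt.h>

  /* Hypothetical handle standing in for the 'suspension'. */
  typedef struct migration_suspension {
      virDomainPtr ddom;      /* valid only once the migration completes */
      int          done;      /* has the migration finished? */
      int          percent;   /* progress, 0-100 */
  } migration_suspension;

  /* Inspect the state without forcing the suspension. */
  static int
  migration_percent (const migration_suspension *s)
  {
      return s->percent;
  }

  /* Force the suspension: block until the migration has finished and only
   * then hand back the destination domain object. */
  static virDomainPtr
  migration_force (migration_suspension *s)
  {
      while (!s->done)
          sleep (1);          /* a real implementation would wait on the driver */
      return s->ddom;
  }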

Here's a diagram which explains how the API would work in the remote case (a little more involved than I anticipated). I've kept the complexity hidden inside libvirt so that we can change it later, but if you can think of a simpler way to do it - ideas please ... Rich.