
On Wed, May 13, 2015 at 10:30:32AM -0400, Laine Stump wrote:
On 05/13/2015 05:57 AM, Daniel P. Berrange wrote:
On Wed, May 13, 2015 at 11:36:30AM +0800, Chen Fan wrote:
add migration support for ephemeral host devices: introduce 'detach' and 'restore' functions to unplug host devices before migration and plug them back in afterwards.
Signed-off-by: Chen Fan <chen.fan.fnst@cn.fujitsu.com>
---
 src/qemu/qemu_migration.c | 171 ++++++++++++++++++++++++++++++++++++++++++++--
 src/qemu/qemu_migration.h |   9 +++
 src/qemu/qemu_process.c   |  11 +++
 3 files changed, 187 insertions(+), 4 deletions(-)
diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index 56112f9..d5a698f 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
+void
+qemuMigrationRestoreEphemeralDevices(virQEMUDriverPtr driver,
+                                     virConnectPtr conn,
+                                     virDomainObjPtr vm,
+                                     bool live)
+{
+    qemuDomainObjPrivatePtr priv = vm->privateData;
+    virDomainDeviceDefPtr dev;
+    int ret = -1;
+    size_t i;
+
+    VIR_DEBUG("Run domain restore ephemeral devices");
+
+    for (i = 0; i < priv->nEphemeralDevices; i++) {
+        dev = priv->ephemeralDevices[i];
+
+        switch ((virDomainDeviceType) dev->type) {
+        case VIR_DOMAIN_DEVICE_NET:
+            if (live) {
+                ret = qemuDomainAttachNetDevice(conn, driver, vm,
+                                                dev->data.net);
+            } else {
+                ret = virDomainNetInsert(vm->def, dev->data.net);
+            }
+
+            if (!ret)
+                dev->data.net = NULL;
+            break;
+        case VIR_DOMAIN_DEVICE_HOSTDEV:
+            if (live) {
+                ret = qemuDomainAttachHostDevice(conn, driver, vm,
+                                                 dev->data.hostdev);
+            } else {
+                ret = virDomainHostdevInsert(vm->def, dev->data.hostdev);
+            }
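
A side note on the ownership handoff above: after a successful insert the
saved dev->data.net pointer is NULLed, presumably so that whatever later
frees the ephemeralDevices list does not also free a def the domain now
owns again. A self-contained sketch of that save/restore pattern, with
hypothetical names rather than the libvirt API:

  /* Sketch of the ephemeral-device bookkeeping pattern: detached
   * device defs are saved in a list, then walked on restore.
   * Ownership moves back to the domain on success (the slot is
   * NULLed), so cleanup only frees the defs that failed to
   * re-attach. All names here are hypothetical. */
  #include <stdio.h>
  #include <stdlib.h>

  typedef struct { const char *model; } DeviceDef;

  typedef struct {
      DeviceDef **saved;   /* defs detached before migration */
      size_t nsaved;
  } EphemeralList;

  /* Stand-in for hotplug; always succeeds in this sketch. */
  static int attach_device(DeviceDef *dev)
  {
      printf("re-attached %s\n", dev->model);
      return 0;
  }

  static void restore_ephemeral_devices(EphemeralList *list)
  {
      size_t i;
      for (i = 0; i < list->nsaved; i++) {
          DeviceDef *dev = list->saved[i];
          if (!dev)
              continue;
          if (attach_device(dev) == 0)
              list->saved[i] = NULL;  /* ownership transferred back */
          /* on failure the def stays in the list and is freed below */
      }
      for (i = 0; i < list->nsaved; i++)
          free(list->saved[i]);       /* only defs that never re-attached */
      free(list->saved);
      list->saved = NULL;
      list->nsaved = 0;
  }

  int main(void)
  {
      EphemeralList list = { NULL, 0 };
      list.saved = malloc(2 * sizeof(*list.saved));
      list.saved[0] = malloc(sizeof(DeviceDef));
      list.saved[0]->model = "virtio-net VF";
      list.saved[1] = malloc(sizeof(DeviceDef));
      list.saved[1]->model = "pci hostdev";
      list.nsaved = 2;
      restore_ephemeral_devices(&list);
      return 0;
  }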
This re-attach step is where we actually have far far far worse problems than with detach. This is blindly assuming that the guest on the target host can use the same hostdev that it was using on the source host.
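
For example, a plain <hostdev> pins a fixed host PCI address, and nothing
guarantees that address exists on the destination, let alone that it
refers to an equivalent, unused device. Illustrative XML (the address is
made up):

  <hostdev mode='subsystem' type='pci' managed='yes'>
    <source>
      <address domain='0x0000' bus='0x03' slot='0x10' function='0x2'/>
    </source>
  </hostdev>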
(kind of pointless to comment on, since pkrempa has changed my opinion by forcing me to think about the "failure to reattach" condition, but could be useful info for others)
For a <hostdev>, yes, but not for <interface type='network'> (which would point to a libvirt network pool of VFs).
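
(Such a pool is defined roughly as below; the XML is illustrative and the
PF device name is made up. The guest's <interface type='network'> then
just names the pool, and libvirt picks a free VF on whichever host the
guest is running on.)

  <network>
    <name>vf-pool</name>
    <forward mode='hostdev' managed='yes'>
      <pf dev='eth3'/>
    </forward>
  </network>

  <interface type='network'>
    <source network='vf-pool'/>
  </interface>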
I should note that in OpenStack at least we don't ever use the libvirt
<interface type='network'> feature. This is because the OpenStack
scheduler needs to have better control over exactly which VFs are
allocated to which guest. This code runs on a separate host, and takes
into account stuff such as the NUMA affinity of the guest, the
utilization of the VFs by other guests, and more besides. So even in the
<interface> case this proposal is pretty limited in usefulness.

Regards,
Daniel

-- 
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org -o- http://virt-manager.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|