[libvirt] [PATCH v2.1 RESEND 00/11] Rework storage migration

This patch set re-implements migration with storage for sufficiently new qemu.

Currently, you can migrate a domain to a host without the need for shared
storage. This is done by setting the 'blk' or 'inc' attribute (representing
the VIR_MIGRATE_NON_SHARED_DISK and VIR_MIGRATE_NON_SHARED_INC flags,
respectively) of the 'migrate' monitor command. However, the qemu
implementation is buggy and applications are advised to switch to the new
implementation, which moreover offers some nice features, like migrating
only explicitly specified disks.

The new functionality is controlled via the 'nbd-server-*' and 'drive-mirror'
commands. The flow is meant to look like this:

1) User invokes libvirt's migrate functionality.
2) libvirt checks that no block jobs are active on the source.
3) libvirt starts the destination QEMU and sets up the NBD server using the
   nbd-server-start and nbd-server-add commands.
4) libvirt starts drive-mirror with a destination pointing to the remote NBD
   server, for example nbd:host:port:exportname=diskname (where diskname is
   the -drive id specified on the destination).
5) once all mirroring jobs reach steady state, libvirt invokes the migrate
   command.
6) once migration completed, libvirt invokes the nbd-server-stop command on
   the destination QEMU.

If we just skip step 2 and there is an active block job, qemu will fail in
step 4. No big deal.

Since we try NOT to break migration and to keep things compatible, this
feature is enabled iff both sides support it. Since there's an obvious need
for some data transfer between src and dst, I've put it into
qemuMigrationCookie:

1) src -> dst: (QEMU_MIGRATION_PHASE_BEGIN3 -> QEMU_MIGRATION_PHASE_PREPARE)

   <nbd>
     <disk size='17179869184'/>
   </nbd>

   Hey destination, I know how to use this cool new feature. Moreover, these
   are the disks I'll send you; each one of them is X bytes big. It's one of
   the prerequisites: the file (disk->src) on the destination must exist and
   be at least as big as the one on the source.

2) dst -> src: (QEMU_MIGRATION_PHASE_PREPARE -> QEMU_MIGRATION_PHASE_PERFORM3)

   <nbd port='X'/>

   Okay, I (the destination) support this feature as well. I've created all
   the files as you (the source) told me to, and you can start rolling data.
   I am listening on port X.

3) src -> dst: (QEMU_MIGRATION_PHASE_PERFORM3 -> QEMU_MIGRATION_PHASE_FINISH3)

   <nbd port='-1'/>

   Migration completed; destination, you may shut the NBD server down.

If either src or dst doesn't support NBD, it is not used and the whole
process falls back to the old implementation.

diff to v1:
-Eric's and Daniel's suggestions worked in. To point out the bigger ones:
 don't do NBD style when TUNNELLED is requested, added 'b:writable' to
 'nbd-server-add'
-drop the '/qemu-migration/nbd/disk/@src' attribute from the migration
 cookie. As pointed out by Jirka, disk->src can be changed during migration
 (e.g. by a migration hook or by the passed xml). So I've tried (as suggested
 on the list) passing the disk alias. However, since qemu hasn't been started
 on the destination yet, the aliases haven't been generated yet. So we have
 to rely on ordering completely.

diff to v2:
-rebase to reflect changes made by the offline migration patch
-send the initial nbd cookie only if needed

diff to previous v2.1:
-rebase to current HEAD

Patches 1, 3 and 5 have been ACKed already. Maybe I'll need to drop patch
7/11 completely, once Dan's patches [1] are in.
1: https://www.redhat.com/archives/libvir-list/2013-January/msg00742.html

Michal Privoznik (11):
  qemu: Introduce NBD_SERVER capability
  Introduce NBD migration cookie
  qemu: Introduce nbd-server-start command
  qemu: Introduce nbd-server-add command
  qemu: Introduce nbd-server-stop command
  qemu_migration: Introduce qemuMigrationStartNBDServer
  qemu_migration: Move port allocation to a separate func
  qemu_migration: Implement qemuMigrationStartNBDServer()
  qemu_migration: Implement qemuMigrationDriveMirror
  qemu_migration: Check size prerequisites
  qemu_migration: Stop NBD server at Finish phase

 src/qemu/qemu_capabilities.c |   4 +-
 src/qemu/qemu_capabilities.h |   1 +
 src/qemu/qemu_migration.c    | 624 ++++++++++++++++++++++++++++++++++++++++---
 src/qemu/qemu_monitor.c      |  63 +++++
 src/qemu/qemu_monitor.h      |   7 +
 src/qemu/qemu_monitor_json.c |  95 +++++++
 src/qemu/qemu_monitor_json.h |   7 +
 7 files changed, 768 insertions(+), 33 deletions(-)

--
1.8.0.2
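For context, this is what the application-facing side of the feature looks
like; a minimal sketch, where the connection URI, destination host and domain
name are placeholders, and the NBD path itself is negotiated by libvirt
transparently when both ends support it:

/* Minimal sketch: migrate "mydomain" and copy its disk images, assuming a
 * destination libvirtd reachable at "desthost".  All names are examples. */
#include <stdio.h>
#include <libvirt/libvirt.h>

int main(void)
{
    virConnectPtr conn = virConnectOpen("qemu:///system");
    virDomainPtr dom = conn ? virDomainLookupByName(conn, "mydomain") : NULL;
    unsigned long flags = VIR_MIGRATE_LIVE | VIR_MIGRATE_PEER2PEER |
                          VIR_MIGRATE_NON_SHARED_DISK;

    if (!dom ||
        virDomainMigrateToURI(dom, "qemu+ssh://desthost/system",
                              flags, NULL, 0) < 0)
        fprintf(stderr, "migration failed\n");

    if (dom)
        virDomainFree(dom);
    if (conn)
        virConnectClose(conn);
    return 0;
}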

This just keeps track of whether qemu knows the nbd-server-* commands so we
can use them during migration or not.
---
 src/qemu/qemu_capabilities.c | 4 +++-
 src/qemu/qemu_capabilities.h | 1 +
 2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c
index b166dd6..f891893 100644
--- a/src/qemu/qemu_capabilities.c
+++ b/src/qemu/qemu_capabilities.c
@@ -203,7 +203,7 @@ VIR_ENUM_IMPL(qemuCaps, QEMU_CAPS_LAST,
 
               "usb-serial", /* 125 */
               "usb-net",
-
+              "nbd-server-start",
     );
 
 struct _qemuCaps {
@@ -1960,6 +1960,8 @@ qemuCapsProbeQMPCommands(qemuCapsPtr caps,
             qemuCapsSet(caps, QEMU_CAPS_DRIVE_MIRROR);
         else if (STREQ(name, "blockdev-snapshot-sync"))
             qemuCapsSet(caps, QEMU_CAPS_DISK_SNAPSHOT);
+        else if (STREQ(name, "nbd-server-start"))
+            qemuCapsSet(caps, QEMU_CAPS_NBD_SERVER);
         VIR_FREE(name);
     }
     VIR_FREE(commands);
diff --git a/src/qemu/qemu_capabilities.h b/src/qemu/qemu_capabilities.h
index 089fa30..246e686 100644
--- a/src/qemu/qemu_capabilities.h
+++ b/src/qemu/qemu_capabilities.h
@@ -165,6 +165,7 @@ enum qemuCapsFlags {
     QEMU_CAPS_SCLP_S390 = 124, /* -device sclp* */
     QEMU_CAPS_DEVICE_USB_SERIAL = 125, /* -device usb-serial */
     QEMU_CAPS_DEVICE_USB_NET = 126, /* -device usb-net */
+    QEMU_CAPS_NBD_SERVER = 127, /* nbd-server-start QMP command */
 
     QEMU_CAPS_LAST, /* this must always be the last item */
 };
--
1.8.0.2

On Fri, Jan 11, 2013 at 17:52:13 +0100, Michal Privoznik wrote:
This just keeps track whether qemu knows nbd-server-* commands so we can use it during migration or not. --- src/qemu/qemu_capabilities.c | 4 +++- src/qemu/qemu_capabilities.h | 1 + 2 files changed, 4 insertions(+), 1 deletion(-)
ACK Jirka

This migration cookie is meant for two purposes. The first is to be sent in begin phase from source to destination to let it know we support new implementation of VIR_MIGRATE_NON_SHARED_{DISK,INC} so destination can start NBD server. Then, the second purpose is, destination can let us know, on which port is NBD server running. --- src/qemu/qemu_migration.c | 145 +++++++++++++++++++++++++++++++++++++++------- 1 file changed, 124 insertions(+), 21 deletions(-) diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c index e235677..6fa4ad6 100644 --- a/src/qemu/qemu_migration.c +++ b/src/qemu/qemu_migration.c @@ -74,6 +74,7 @@ enum qemuMigrationCookieFlags { QEMU_MIGRATION_COOKIE_FLAG_LOCKSTATE, QEMU_MIGRATION_COOKIE_FLAG_PERSISTENT, QEMU_MIGRATION_COOKIE_FLAG_NETWORK, + QEMU_MIGRATION_COOKIE_FLAG_NBD, QEMU_MIGRATION_COOKIE_FLAG_LAST }; @@ -81,13 +82,18 @@ enum qemuMigrationCookieFlags { VIR_ENUM_DECL(qemuMigrationCookieFlag); VIR_ENUM_IMPL(qemuMigrationCookieFlag, QEMU_MIGRATION_COOKIE_FLAG_LAST, - "graphics", "lockstate", "persistent", "network"); + "graphics", + "lockstate", + "persistent", + "network", + "nbd"); enum qemuMigrationCookieFeatures { QEMU_MIGRATION_COOKIE_GRAPHICS = (1 << QEMU_MIGRATION_COOKIE_FLAG_GRAPHICS), QEMU_MIGRATION_COOKIE_LOCKSTATE = (1 << QEMU_MIGRATION_COOKIE_FLAG_LOCKSTATE), QEMU_MIGRATION_COOKIE_PERSISTENT = (1 << QEMU_MIGRATION_COOKIE_FLAG_PERSISTENT), QEMU_MIGRATION_COOKIE_NETWORK = (1 << QEMU_MIGRATION_COOKIE_FLAG_NETWORK), + QEMU_MIGRATION_COOKIE_NBD = (1 << QEMU_MIGRATION_COOKIE_FLAG_NBD), }; typedef struct _qemuMigrationCookieGraphics qemuMigrationCookieGraphics; @@ -121,6 +127,17 @@ struct _qemuMigrationCookieNetwork { qemuMigrationCookieNetDataPtr net; }; +typedef struct _qemuMigrationCookieNBD qemuMigrationCookieNBD; +typedef qemuMigrationCookieNBD *qemuMigrationCookieNBDPtr; +struct _qemuMigrationCookieNBD { + int port; /* on which port does NBD server listen for incoming data. + Zero value has special meaning - it is there just to let + destination know we (the source) do support NBD. 
+ Negative one is meant to be sent when translating from + perform to finish phase to let destination know it's + safe to stop NBD server.*/ +}; + typedef struct _qemuMigrationCookie qemuMigrationCookie; typedef qemuMigrationCookie *qemuMigrationCookiePtr; struct _qemuMigrationCookie { @@ -149,6 +166,9 @@ struct _qemuMigrationCookie { /* If (flags & QEMU_MIGRATION_COOKIE_NETWORK) */ qemuMigrationCookieNetworkPtr network; + + /* If (flags & QEMU_MIGRATION_COOKIE_NBD) */ + qemuMigrationCookieNBDPtr nbd; }; static void qemuMigrationCookieGraphicsFree(qemuMigrationCookieGraphicsPtr grap) @@ -194,6 +214,7 @@ static void qemuMigrationCookieFree(qemuMigrationCookiePtr mig) VIR_FREE(mig->name); VIR_FREE(mig->lockState); VIR_FREE(mig->lockDriver); + VIR_FREE(mig->nbd); VIR_FREE(mig); } @@ -501,6 +522,24 @@ qemuMigrationCookieAddNetwork(qemuMigrationCookiePtr mig, } +static int +qemuMigrationCookieAddNBD(qemuMigrationCookiePtr mig, + int nbdPort) +{ + /* It is not a bug if there already is a NBD data */ + if (!mig->nbd && + VIR_ALLOC(mig->nbd) < 0) { + virReportOOMError(); + return -1; + } + + mig->nbd->port = nbdPort; + mig->flags |= QEMU_MIGRATION_COOKIE_NBD; + + return 0; +} + + static void qemuMigrationCookieGraphicsXMLFormat(virBufferPtr buf, qemuMigrationCookieGraphicsPtr grap) { @@ -603,6 +642,13 @@ qemuMigrationCookieXMLFormat(virQEMUDriverPtr driver, if ((mig->flags & QEMU_MIGRATION_COOKIE_NETWORK) && mig->network) qemuMigrationCookieNetworkXMLFormat(buf, mig->network); + if ((mig->flags & QEMU_MIGRATION_COOKIE_NBD) && mig->nbd) { + virBufferAddLit(buf, " <nbd"); + if (mig->nbd->port) + virBufferAsprintf(buf, " port='%d'", mig->nbd->port); + virBufferAddLit(buf, "/>\n"); + } + virBufferAddLit(buf, "</qemu-migration>\n"); return 0; } @@ -830,6 +876,12 @@ qemuMigrationCookieXMLParse(qemuMigrationCookiePtr mig, goto error; } + /* nbd is optional */ + if (val == QEMU_MIGRATION_COOKIE_FLAG_NBD) { + VIR_FREE(str); + continue; + } + if ((flags & (1 << val)) == 0) { virReportError(VIR_ERR_INTERNAL_ERROR, _("Unsupported migration cookie feature %s"), @@ -882,6 +934,25 @@ qemuMigrationCookieXMLParse(qemuMigrationCookiePtr mig, (!(mig->network = qemuMigrationCookieNetworkXMLParse(ctxt)))) goto error; + if (flags & QEMU_MIGRATION_COOKIE_NBD && + virXPathBoolean("count(./nbd) > 0", ctxt)) { + char *port; + + if (VIR_ALLOC(mig->nbd) < 0) { + virReportOOMError(); + goto error; + } + + port = virXPathString("string(./nbd/@port)", ctxt); + if (port && + virStrToLong_i(port, NULL, 10, &mig->nbd->port) < 0) { + virReportError(VIR_ERR_INTERNAL_ERROR, + _("Malformed nbd port '%s'"), + port); + goto error; + } + } + return 0; error: @@ -920,6 +991,7 @@ static int qemuMigrationBakeCookie(qemuMigrationCookiePtr mig, virQEMUDriverPtr driver, virDomainObjPtr dom, + int nbdPort, char **cookieout, int *cookieoutlen, unsigned int flags) @@ -946,6 +1018,10 @@ qemuMigrationBakeCookie(qemuMigrationCookiePtr mig, return -1; } + if (flags & QEMU_MIGRATION_COOKIE_NBD && + qemuMigrationCookieAddNBD(mig, nbdPort) < 0) + return -1; + if (!(*cookieout = qemuMigrationCookieXMLFormatStr(driver, mig))) return -1; @@ -1431,6 +1507,7 @@ char *qemuMigrationBegin(virQEMUDriverPtr driver, qemuMigrationCookiePtr mig = NULL; virDomainDefPtr def = NULL; qemuDomainObjPrivatePtr priv = vm->privateData; + unsigned int cookieFlags = QEMU_MIGRATION_COOKIE_LOCKSTATE; VIR_DEBUG("driver=%p, vm=%p, xmlin=%s, dname=%s," " cookieout=%p, cookieoutlen=%p, flags=%lx", @@ -1450,12 +1527,21 @@ char *qemuMigrationBegin(virQEMUDriverPtr driver, if (!(flags 
& VIR_MIGRATE_UNSAFE) && !qemuMigrationIsSafe(vm->def)) goto cleanup; + if (flags & (VIR_MIGRATE_NON_SHARED_DISK | VIR_MIGRATE_NON_SHARED_INC) && + qemuCapsGet(priv->caps, QEMU_CAPS_NBD_SERVER)) { + /* TODO support NBD for TUNNELLED migration */ + if (flags & VIR_MIGRATE_TUNNELLED) + VIR_DEBUG("NBD in tunnelled migration is currently not supported"); + else + cookieFlags |= QEMU_MIGRATION_COOKIE_NBD; + } + if (!(mig = qemuMigrationEatCookie(driver, vm, NULL, 0, 0))) goto cleanup; - if (qemuMigrationBakeCookie(mig, driver, vm, + if (qemuMigrationBakeCookie(mig, driver, vm, 0, cookieout, cookieoutlen, - QEMU_MIGRATION_COOKIE_LOCKSTATE) < 0) + cookieFlags) < 0) goto cleanup; if (flags & VIR_MIGRATE_OFFLINE) { @@ -1552,6 +1638,7 @@ qemuMigrationPrepareAny(virQEMUDriverPtr driver, char *origname = NULL; char *xmlout = NULL; unsigned int cookieFlags; + int nbdPort = 0; if (virTimeMillisNow(&now) < 0) return -1; @@ -1651,7 +1738,8 @@ qemuMigrationPrepareAny(virQEMUDriverPtr driver, origname = NULL; if (!(mig = qemuMigrationEatCookie(driver, vm, cookiein, cookieinlen, - QEMU_MIGRATION_COOKIE_LOCKSTATE))) + QEMU_MIGRATION_COOKIE_LOCKSTATE | + QEMU_MIGRATION_COOKIE_NBD))) goto cleanup; if (qemuMigrationJobStart(driver, vm, QEMU_ASYNC_JOB_MIGRATION_IN) < 0) @@ -1711,8 +1799,12 @@ done: else cookieFlags = QEMU_MIGRATION_COOKIE_GRAPHICS; - if (qemuMigrationBakeCookie(mig, driver, vm, cookieout, cookieoutlen, - cookieFlags) < 0) { + /* dummy place holder for real work */ + nbdPort = 0; + cookieFlags |= QEMU_MIGRATION_COOKIE_NBD; + + if (qemuMigrationBakeCookie(mig, driver, vm, nbdPort, + cookieout, cookieoutlen, cookieFlags) < 0) { /* We could tear down the whole guest here, but * cookie data is (so far) non-critical, so that * seems a little harsh. We'll just warn for now. @@ -2207,6 +2299,7 @@ qemuMigrationRun(virQEMUDriverPtr driver, int fd = -1; unsigned long migrate_speed = resource ? 
resource : priv->migMaxBandwidth; virErrorPtr orig_err = NULL; + unsigned int cookieFlags = QEMU_MIGRATION_COOKIE_GRAPHICS; VIR_DEBUG("driver=%p, vm=%p, cookiein=%s, cookieinlen=%d, " "cookieout=%p, cookieoutlen=%p, flags=%lx, resource=%lu, " @@ -2215,6 +2308,16 @@ qemuMigrationRun(virQEMUDriverPtr driver, cookieout, cookieoutlen, flags, resource, spec, spec->destType, spec->fwdType); + if (flags & VIR_MIGRATE_NON_SHARED_DISK) { + migrate_flags |= QEMU_MONITOR_MIGRATE_NON_SHARED_DISK; + cookieFlags |= QEMU_MIGRATION_COOKIE_NBD; + } + + if (flags & VIR_MIGRATE_NON_SHARED_INC) { + migrate_flags |= QEMU_MONITOR_MIGRATE_NON_SHARED_INC; + cookieFlags |= QEMU_MIGRATION_COOKIE_NBD; + } + if (virLockManagerPluginUsesState(driver->lockManager) && !cookieout) { virReportError(VIR_ERR_INTERNAL_ERROR, @@ -2224,9 +2327,12 @@ qemuMigrationRun(virQEMUDriverPtr driver, return -1; } - if (!(mig = qemuMigrationEatCookie(driver, vm, cookiein, cookieinlen, - QEMU_MIGRATION_COOKIE_GRAPHICS))) + if (!(mig = qemuMigrationEatCookie(driver, vm, cookiein, + cookieinlen, cookieFlags))) { + cookieFlags &= ~QEMU_MIGRATION_COOKIE_GRAPHICS; goto cleanup; + } + cookieFlags &= ~QEMU_MIGRATION_COOKIE_GRAPHICS; if (qemuDomainMigrateGraphicsRelocate(driver, vm, mig) < 0) VIR_WARN("unable to provide data for graphics client relocation"); @@ -2258,11 +2364,6 @@ qemuMigrationRun(virQEMUDriverPtr driver, goto cleanup; } - if (flags & VIR_MIGRATE_NON_SHARED_DISK) - migrate_flags |= QEMU_MONITOR_MIGRATE_NON_SHARED_DISK; - - if (flags & VIR_MIGRATE_NON_SHARED_INC) - migrate_flags |= QEMU_MONITOR_MIGRATE_NON_SHARED_INC; /* connect to the destination qemu if needed */ if (spec->destType == MIGRATION_DEST_CONNECT_HOST && @@ -2370,10 +2471,11 @@ cleanup: VIR_FORCE_CLOSE(fd); } + cookieFlags |= (QEMU_MIGRATION_COOKIE_PERSISTENT | + QEMU_MIGRATION_COOKIE_NETWORK); if (ret == 0 && - qemuMigrationBakeCookie(mig, driver, vm, cookieout, cookieoutlen, - QEMU_MIGRATION_COOKIE_PERSISTENT | - QEMU_MIGRATION_COOKIE_NETWORK) < 0) { + qemuMigrationBakeCookie(mig, driver, vm, -1, cookieout, + cookieoutlen, cookieFlags) < 0) { VIR_WARN("Unable to encode migration cookie"); } @@ -3310,7 +3412,7 @@ qemuMigrationFinish(virQEMUDriverPtr driver, int newVM = 1; qemuMigrationCookiePtr mig = NULL; virErrorPtr orig_err = NULL; - int cookie_flags = 0; + int cookieFlags = 0; qemuDomainObjPrivatePtr priv = vm->privateData; VIR_DEBUG("driver=%p, dconn=%p, vm=%p, cookiein=%s, cookieinlen=%d, " @@ -3327,12 +3429,12 @@ qemuMigrationFinish(virQEMUDriverPtr driver, qemuDomainCleanupRemove(vm, qemuMigrationPrepareCleanup); - cookie_flags = QEMU_MIGRATION_COOKIE_NETWORK; + cookieFlags = QEMU_MIGRATION_COOKIE_NETWORK | QEMU_MIGRATION_COOKIE_NBD; if (flags & VIR_MIGRATE_PERSIST_DEST) - cookie_flags |= QEMU_MIGRATION_COOKIE_PERSISTENT; + cookieFlags |= QEMU_MIGRATION_COOKIE_PERSISTENT; if (!(mig = qemuMigrationEatCookie(driver, vm, cookiein, - cookieinlen, cookie_flags))) + cookieinlen, cookieFlags))) goto endjob; /* Did the migration go as planned? If yes, return the domain @@ -3479,7 +3581,8 @@ qemuMigrationFinish(virQEMUDriverPtr driver, VIR_DOMAIN_EVENT_STOPPED_FAILED); } - if (qemuMigrationBakeCookie(mig, driver, vm, cookieout, cookieoutlen, 0) < 0) + if (qemuMigrationBakeCookie(mig, driver, vm, 0, + cookieout, cookieoutlen, 0) < 0) VIR_WARN("Unable to encode migration cookie"); endjob: -- 1.8.0.2

On Fri, Jan 11, 2013 at 17:52:14 +0100, Michal Privoznik wrote:
This migration cookie is meant for two purposes. The first is to be sent in begin phase from source to destination to let it know we support new implementation of VIR_MIGRATE_NON_SHARED_{DISK,INC} so destination can start NBD server. Then, the second purpose is, destination can let us know, on which port is NBD server running. --- src/qemu/qemu_migration.c | 145 +++++++++++++++++++++++++++++++++++++++------- 1 file changed, 124 insertions(+), 21 deletions(-)
diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c index e235677..6fa4ad6 100644 --- a/src/qemu/qemu_migration.c +++ b/src/qemu/qemu_migration.c ... @@ -121,6 +127,17 @@ struct _qemuMigrationCookieNetwork { qemuMigrationCookieNetDataPtr net; };
+typedef struct _qemuMigrationCookieNBD qemuMigrationCookieNBD;
+typedef qemuMigrationCookieNBD *qemuMigrationCookieNBDPtr;
+struct _qemuMigrationCookieNBD {
+    int port; /* on which port does NBD server listen for incoming data.
+                 Zero value has special meaning - it is there just to let
+                 destination know we (the source) do support NBD.
+                 Negative one is meant to be sent when translating from
+                 perform to finish phase to let destination know it's
+                 safe to stop NBD server.*/
The negative special value is redundant. The destination already knows it is safe to stop the NBD server when it gets to the Finish phase.
+}; + typedef struct _qemuMigrationCookie qemuMigrationCookie; typedef qemuMigrationCookie *qemuMigrationCookiePtr; struct _qemuMigrationCookie { ... @@ -882,6 +934,25 @@ qemuMigrationCookieXMLParse(qemuMigrationCookiePtr mig, (!(mig->network = qemuMigrationCookieNetworkXMLParse(ctxt)))) goto error;
+    if (flags & QEMU_MIGRATION_COOKIE_NBD &&
+        virXPathBoolean("count(./nbd) > 0", ctxt)) {
It would be enough to just check for "boolean(./nbd)" XPath expression but this would work too.
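A sketch of that simplified check, using the same variables as the patch:

    /* boolean(./nbd) is enough: we only care whether an <nbd/> element
     * is present at all, not how many there are */
    if (flags & QEMU_MIGRATION_COOKIE_NBD &&
        virXPathBoolean("boolean(./nbd)", ctxt) > 0) {
        /* ... allocate mig->nbd and parse the optional port as before ... */
    }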
+ char *port; + + if (VIR_ALLOC(mig->nbd) < 0) { + virReportOOMError(); + goto error; + } + + port = virXPathString("string(./nbd/@port)", ctxt); + if (port && + virStrToLong_i(port, NULL, 10, &mig->nbd->port) < 0) { + virReportError(VIR_ERR_INTERNAL_ERROR, + _("Malformed nbd port '%s'"), + port); + goto error; + } + } + return 0;
error: ... @@ -1651,7 +1738,8 @@ qemuMigrationPrepareAny(virQEMUDriverPtr driver, origname = NULL;
if (!(mig = qemuMigrationEatCookie(driver, vm, cookiein, cookieinlen, - QEMU_MIGRATION_COOKIE_LOCKSTATE))) + QEMU_MIGRATION_COOKIE_LOCKSTATE | + QEMU_MIGRATION_COOKIE_NBD))) goto cleanup;
if (qemuMigrationJobStart(driver, vm, QEMU_ASYNC_JOB_MIGRATION_IN) < 0) @@ -1711,8 +1799,12 @@ done: else cookieFlags = QEMU_MIGRATION_COOKIE_GRAPHICS;
-    if (qemuMigrationBakeCookie(mig, driver, vm, cookieout, cookieoutlen,
-                                cookieFlags) < 0) {
+    /* dummy place holder for real work */
+    nbdPort = 0;
+    cookieFlags |= QEMU_MIGRATION_COOKIE_NBD;
We should guard QEMU_MIGRATION_COOKIE_NBD with the check for non-tunnelled migration in the same way we did before, to make sure we deny it when a newer libvirtd which already supports NBD for tunnelled migration talks to an older one. It's possible the source would deny it anyway, because the destination would not provide some required data, but we don't know that yet, and better safe than sorry.
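A rough sketch of such a guard at this point in Prepare; 'tunnelled' below is a
placeholder for however the Prepare code distinguishes the tunnelled case
(e.g. the presence of the tunnel stream), not an existing variable:

    /* sketch: only advertise the NBD cookie for non-tunnelled migration,
     * mirroring the check already done in the Begin phase */
    if (mig->nbd && !tunnelled) {
        /* dummy place holder for real work */
        nbdPort = 0;
        cookieFlags |= QEMU_MIGRATION_COOKIE_NBD;
    }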
+ + if (qemuMigrationBakeCookie(mig, driver, vm, nbdPort, + cookieout, cookieoutlen, cookieFlags) < 0) { /* We could tear down the whole guest here, but * cookie data is (so far) non-critical, so that * seems a little harsh. We'll just warn for now. ... @@ -2224,9 +2327,12 @@ qemuMigrationRun(virQEMUDriverPtr driver, return -1; }
-    if (!(mig = qemuMigrationEatCookie(driver, vm, cookiein, cookieinlen,
-                                       QEMU_MIGRATION_COOKIE_GRAPHICS)))
+    if (!(mig = qemuMigrationEatCookie(driver, vm, cookiein,
+                                       cookieinlen, cookieFlags))) {
+        cookieFlags &= ~QEMU_MIGRATION_COOKIE_GRAPHICS;
         goto cleanup;
+    }
+    cookieFlags &= ~QEMU_MIGRATION_COOKIE_GRAPHICS;
Hmm, this looks pretty strange. Why don't you initialize cookieFlags to zero and pass cookieFlags | QEMU_MIGRATION_COOKIE_GRAPHICS to qemuMigrationEatCookie? Moreover, I think we should check whether the source wanted to use NBD but the destination did not send the NBD cookie back, and log that situation.
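A sketch of what that restructuring might look like in qemuMigrationRun
(untested; the debug message for the missing cookie is illustrative only):

    /* cookieFlags starts at 0 instead of QEMU_MIGRATION_COOKIE_GRAPHICS */
    if (flags & (VIR_MIGRATE_NON_SHARED_DISK | VIR_MIGRATE_NON_SHARED_INC))
        cookieFlags |= QEMU_MIGRATION_COOKIE_NBD;

    if (!(mig = qemuMigrationEatCookie(driver, vm, cookiein, cookieinlen,
                                       cookieFlags |
                                       QEMU_MIGRATION_COOKIE_GRAPHICS)))
        goto cleanup;

    /* source asked for NBD but destination did not send an NBD element */
    if (cookieFlags & QEMU_MIGRATION_COOKIE_NBD && !mig->nbd)
        VIR_DEBUG("Destination did not send the NBD cookie element back");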
if (qemuDomainMigrateGraphicsRelocate(driver, vm, mig) < 0) VIR_WARN("unable to provide data for graphics client relocation"); @@ -2258,11 +2364,6 @@ qemuMigrationRun(virQEMUDriverPtr driver, goto cleanup; }
- if (flags & VIR_MIGRATE_NON_SHARED_DISK) - migrate_flags |= QEMU_MONITOR_MIGRATE_NON_SHARED_DISK; - - if (flags & VIR_MIGRATE_NON_SHARED_INC) - migrate_flags |= QEMU_MONITOR_MIGRATE_NON_SHARED_INC;
/* connect to the destination qemu if needed */ if (spec->destType == MIGRATION_DEST_CONNECT_HOST && @@ -2370,10 +2471,11 @@ cleanup: VIR_FORCE_CLOSE(fd); }
+    cookieFlags |= (QEMU_MIGRATION_COOKIE_PERSISTENT |
+                    QEMU_MIGRATION_COOKIE_NETWORK);
     if (ret == 0 &&
-        qemuMigrationBakeCookie(mig, driver, vm, cookieout, cookieoutlen,
-                                QEMU_MIGRATION_COOKIE_PERSISTENT |
-                                QEMU_MIGRATION_COOKIE_NETWORK) < 0) {
+        qemuMigrationBakeCookie(mig, driver, vm, -1, cookieout,
+                                cookieoutlen, cookieFlags) < 0) {
         VIR_WARN("Unable to encode migration cookie");
     }
This change should not be necessary; we don't need to send any NBD-related data to the destination host once migration is finished. ...
@@ -3327,12 +3429,12 @@ qemuMigrationFinish(virQEMUDriverPtr driver,
qemuDomainCleanupRemove(vm, qemuMigrationPrepareCleanup);
-    cookie_flags = QEMU_MIGRATION_COOKIE_NETWORK;
+    cookieFlags = QEMU_MIGRATION_COOKIE_NETWORK | QEMU_MIGRATION_COOKIE_NBD;
     if (flags & VIR_MIGRATE_PERSIST_DEST)
-        cookie_flags |= QEMU_MIGRATION_COOKIE_PERSISTENT;
+        cookieFlags |= QEMU_MIGRATION_COOKIE_PERSISTENT;
if (!(mig = qemuMigrationEatCookie(driver, vm, cookiein, - cookieinlen, cookie_flags))) + cookieinlen, cookieFlags))) goto endjob;
/* Did the migration go as planned? If yes, return the domain
No need to parse NBD cookie here.
@@ -3479,7 +3581,8 @@ qemuMigrationFinish(virQEMUDriverPtr driver, VIR_DOMAIN_EVENT_STOPPED_FAILED); }
- if (qemuMigrationBakeCookie(mig, driver, vm, cookieout, cookieoutlen, 0) < 0) + if (qemuMigrationBakeCookie(mig, driver, vm, 0, + cookieout, cookieoutlen, 0) < 0) VIR_WARN("Unable to encode migration cookie");
endjob:
Generally, the code looks good once you remove the redundant NBD data from migration cookie passed from Perform to Finish. Jirka

This will be used with new migration scheme. This patch creates basically just monitor stub functions. Wiring them into something useful is done in later patches. --- src/qemu/qemu_monitor.c | 22 ++++++++++++++++++++ src/qemu/qemu_monitor.h | 3 +++ src/qemu/qemu_monitor_json.c | 49 ++++++++++++++++++++++++++++++++++++++++++++ src/qemu/qemu_monitor_json.h | 3 +++ 4 files changed, 77 insertions(+) diff --git a/src/qemu/qemu_monitor.c b/src/qemu/qemu_monitor.c index cb5a3e2..c38e3f9 100644 --- a/src/qemu/qemu_monitor.c +++ b/src/qemu/qemu_monitor.c @@ -3330,3 +3330,25 @@ char *qemuMonitorGetTargetArch(qemuMonitorPtr mon) return qemuMonitorJSONGetTargetArch(mon); } + +int qemuMonitorNBDServerStart(qemuMonitorPtr mon, + const char *host, + unsigned int port) +{ + VIR_DEBUG("mon=%p host=%s port=%u", + mon, host, port); + + if (!mon) { + virReportError(VIR_ERR_INVALID_ARG, "%s", + _("monitor must not be NULL")); + return -1; + } + + if (!mon->json) { + virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s", + _("JSON monitor is required")); + return -1; + } + + return qemuMonitorJSONNBDServerStart(mon, host, port); +} diff --git a/src/qemu/qemu_monitor.h b/src/qemu/qemu_monitor.h index 00ce813..9f0d5af 100644 --- a/src/qemu/qemu_monitor.h +++ b/src/qemu/qemu_monitor.h @@ -634,6 +634,9 @@ int qemuMonitorGetObjectProps(qemuMonitorPtr mon, char ***props); char *qemuMonitorGetTargetArch(qemuMonitorPtr mon); +int qemuMonitorNBDServerStart(qemuMonitorPtr mon, + const char *host, + unsigned int port); /** * When running two dd process and using <> redirection, we need a * shell that will not truncate files. These two strings serve that diff --git a/src/qemu/qemu_monitor_json.c b/src/qemu/qemu_monitor_json.c index de5f115..d5285bc 100644 --- a/src/qemu/qemu_monitor_json.c +++ b/src/qemu/qemu_monitor_json.c @@ -4280,3 +4280,52 @@ cleanup: virJSONValueFree(reply); return ret; } + +int +qemuMonitorJSONNBDServerStart(qemuMonitorPtr mon, + const char *host, + unsigned int port) +{ + int ret = -1; + virJSONValuePtr cmd = NULL; + virJSONValuePtr reply = NULL; + virJSONValuePtr data = NULL; + virJSONValuePtr addr = NULL; + char *port_str = NULL; + + if (!(data = virJSONValueNewObject()) || + !(addr = virJSONValueNewObject()) || + virAsprintf(&port_str, "%d", port) < 0) { + virReportOOMError(); + goto cleanup; + } + + if (virJSONValueObjectAppendString(data, "host", host) < 0 || + virJSONValueObjectAppendString(data, "port", port_str) < 0 || + virJSONValueObjectAppendString(addr, "type", "inet") < 0 || + virJSONValueObjectAppend(addr, "data", data) < 0) { + virReportOOMError(); + goto cleanup; + } + + if (!(cmd = qemuMonitorJSONMakeCommand("nbd-server-start", + "a:addr", addr, + NULL))) + goto cleanup; + + if (qemuMonitorJSONCommand(mon, cmd, &reply) < 0) + goto cleanup; + + if (qemuMonitorJSONCheckError(cmd, reply) < 0) + goto cleanup; + + ret = 0; + +cleanup: + VIR_FREE(port_str); + virJSONValueFree(reply); + virJSONValueFree(cmd); + virJSONValueFree(addr); + virJSONValueFree(data); + return ret; +} diff --git a/src/qemu/qemu_monitor_json.h b/src/qemu/qemu_monitor_json.h index 2b09a8f..655ef34 100644 --- a/src/qemu/qemu_monitor_json.h +++ b/src/qemu/qemu_monitor_json.h @@ -324,4 +324,7 @@ int qemuMonitorJSONGetObjectProps(qemuMonitorPtr mon, ATTRIBUTE_NONNULL(2) ATTRIBUTE_NONNULL(3); char *qemuMonitorJSONGetTargetArch(qemuMonitorPtr mon); +int qemuMonitorJSONNBDServerStart(qemuMonitorPtr mon, + const char *host, + unsigned int port); #endif /* QEMU_MONITOR_JSON_H */ -- 1.8.0.2

On Fri, Jan 11, 2013 at 17:52:15 +0100, Michal Privoznik wrote:
This will be used with new migration scheme. This patch creates basically just monitor stub functions. Wiring them into something useful is done in later patches. --- src/qemu/qemu_monitor.c | 22 ++++++++++++++++++++ src/qemu/qemu_monitor.h | 3 +++ src/qemu/qemu_monitor_json.c | 49 ++++++++++++++++++++++++++++++++++++++++++++ src/qemu/qemu_monitor_json.h | 3 +++ 4 files changed, 77 insertions(+)
diff --git a/src/qemu/qemu_monitor.c b/src/qemu/qemu_monitor.c index cb5a3e2..c38e3f9 100644 --- a/src/qemu/qemu_monitor.c +++ b/src/qemu/qemu_monitor.c @@ -3330,3 +3330,25 @@ char *qemuMonitorGetTargetArch(qemuMonitorPtr mon)
return qemuMonitorJSONGetTargetArch(mon); } + +int qemuMonitorNBDServerStart(qemuMonitorPtr mon, + const char *host, + unsigned int port) +{ + VIR_DEBUG("mon=%p host=%s port=%u", + mon, host, port); + + if (!mon) { + virReportError(VIR_ERR_INVALID_ARG, "%s", + _("monitor must not be NULL")); + return -1; + } + + if (!mon->json) { + virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
VIR_ERR_OPERATION_UNSUPPORTED is the right error code used elsewhere in qemu_monitor.c.
+ _("JSON monitor is required")); + return -1; + } + + return qemuMonitorJSONNBDServerStart(mon, host, port); +} ... diff --git a/src/qemu/qemu_monitor_json.c b/src/qemu/qemu_monitor_json.c index de5f115..d5285bc 100644 --- a/src/qemu/qemu_monitor_json.c +++ b/src/qemu/qemu_monitor_json.c @@ -4280,3 +4280,52 @@ cleanup: virJSONValueFree(reply); return ret; } + +int +qemuMonitorJSONNBDServerStart(qemuMonitorPtr mon, + const char *host, + unsigned int port) +{ + int ret = -1; + virJSONValuePtr cmd = NULL; + virJSONValuePtr reply = NULL; + virJSONValuePtr data = NULL; + virJSONValuePtr addr = NULL; + char *port_str = NULL; + + if (!(data = virJSONValueNewObject()) || + !(addr = virJSONValueNewObject()) || + virAsprintf(&port_str, "%d", port) < 0) { + virReportOOMError(); + goto cleanup; + } + + if (virJSONValueObjectAppendString(data, "host", host) < 0 || + virJSONValueObjectAppendString(data, "port", port_str) < 0 ||
Hm, why not virJSONValueObjectAppendNumberInt rather than virAsprintf followed by virJSONValueObjectAppendString?
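That would look roughly as follows; note this is a sketch and assumes qemu
accepts a JSON number for the port here, otherwise the string form used in
the patch is required:

    if (virJSONValueObjectAppendString(data, "host", host) < 0 ||
        virJSONValueObjectAppendNumberInt(data, "port", port) < 0 ||
        virJSONValueObjectAppendString(addr, "type", "inet") < 0 ||
        virJSONValueObjectAppend(addr, "data", data) < 0) {
        virReportOOMError();
        goto cleanup;
    }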
+ virJSONValueObjectAppendString(addr, "type", "inet") < 0 || + virJSONValueObjectAppend(addr, "data", data) < 0) { + virReportOOMError(); + goto cleanup; + } + + if (!(cmd = qemuMonitorJSONMakeCommand("nbd-server-start", + "a:addr", addr, + NULL))) + goto cleanup; + + if (qemuMonitorJSONCommand(mon, cmd, &reply) < 0) + goto cleanup; + + if (qemuMonitorJSONCheckError(cmd, reply) < 0) + goto cleanup; + + ret = 0; + +cleanup: + VIR_FREE(port_str); + virJSONValueFree(reply); + virJSONValueFree(cmd); + virJSONValueFree(addr); + virJSONValueFree(data);
AFAIK virJSONValueFree is recursive and freeing addr and data once they are already assigned to cmd would free the same memory twice. Or am I missing something?
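One way to make the ownership unambiguous, whatever virJSONValueFree does, is
to clear the local pointer as soon as another value takes it over; a sketch,
assuming the command constructor owns @addr once it succeeds:

    if (virJSONValueObjectAppend(addr, "data", data) < 0) {
        virReportOOMError();
        goto cleanup;
    }
    data = NULL;  /* from now on, data is part of addr */

    if (!(cmd = qemuMonitorJSONMakeCommand("nbd-server-start",
                                           "a:addr", addr,
                                           NULL)))
        goto cleanup;
    addr = NULL;  /* from now on, addr is part of cmd (assumption) */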
+ return ret; +} ...
Jirka

This will be used with new migration scheme. This patch creates basically just monitor stub functions. Wiring them into something useful is done in later patches. --- src/qemu/qemu_monitor.c | 22 ++++++++++++++++++++++ src/qemu/qemu_monitor.h | 3 +++ src/qemu/qemu_monitor_json.c | 25 +++++++++++++++++++++++++ src/qemu/qemu_monitor_json.h | 3 +++ 4 files changed, 53 insertions(+) diff --git a/src/qemu/qemu_monitor.c b/src/qemu/qemu_monitor.c index c38e3f9..1bc6434 100644 --- a/src/qemu/qemu_monitor.c +++ b/src/qemu/qemu_monitor.c @@ -3352,3 +3352,25 @@ int qemuMonitorNBDServerStart(qemuMonitorPtr mon, return qemuMonitorJSONNBDServerStart(mon, host, port); } + +int qemuMonitorNBDServerAdd(qemuMonitorPtr mon, + const char *deviceID, + bool writable) +{ + VIR_DEBUG("mon=%p deviceID=%s", + mon, deviceID); + + if (!mon) { + virReportError(VIR_ERR_INVALID_ARG, "%s", + _("monitor must not be NULL")); + return -1; + } + + if (!mon->json) { + virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s", + _("JSON monitor is required")); + return -1; + } + + return qemuMonitorJSONNBDServerAdd(mon, deviceID, writable); +} diff --git a/src/qemu/qemu_monitor.h b/src/qemu/qemu_monitor.h index 9f0d5af..bb22877 100644 --- a/src/qemu/qemu_monitor.h +++ b/src/qemu/qemu_monitor.h @@ -637,6 +637,9 @@ char *qemuMonitorGetTargetArch(qemuMonitorPtr mon); int qemuMonitorNBDServerStart(qemuMonitorPtr mon, const char *host, unsigned int port); +int qemuMonitorNBDServerAdd(qemuMonitorPtr mon, + const char *deviceID, + bool writable); /** * When running two dd process and using <> redirection, we need a * shell that will not truncate files. These two strings serve that diff --git a/src/qemu/qemu_monitor_json.c b/src/qemu/qemu_monitor_json.c index d5285bc..ffd126a 100644 --- a/src/qemu/qemu_monitor_json.c +++ b/src/qemu/qemu_monitor_json.c @@ -4329,3 +4329,28 @@ cleanup: virJSONValueFree(data); return ret; } + +int +qemuMonitorJSONNBDServerAdd(qemuMonitorPtr mon, + const char *deviceID, + bool writable) +{ + int ret = -1; + virJSONValuePtr cmd; + virJSONValuePtr reply = NULL; + + if (!(cmd = qemuMonitorJSONMakeCommand("nbd-server-add", + "s:device", deviceID, + "b:writable", writable, + NULL))) + return ret; + + ret = qemuMonitorJSONCommand(mon, cmd, &reply); + + if (ret == 0) + ret = qemuMonitorJSONCheckError(cmd, reply); + + virJSONValueFree(cmd); + virJSONValueFree(reply); + return ret; +} diff --git a/src/qemu/qemu_monitor_json.h b/src/qemu/qemu_monitor_json.h index 655ef34..fd3dc9c 100644 --- a/src/qemu/qemu_monitor_json.h +++ b/src/qemu/qemu_monitor_json.h @@ -327,4 +327,7 @@ char *qemuMonitorJSONGetTargetArch(qemuMonitorPtr mon); int qemuMonitorJSONNBDServerStart(qemuMonitorPtr mon, const char *host, unsigned int port); +int qemuMonitorJSONNBDServerAdd(qemuMonitorPtr mon, + const char *deviceID, + bool writable); #endif /* QEMU_MONITOR_JSON_H */ -- 1.8.0.2

On Fri, Jan 11, 2013 at 17:52:16 +0100, Michal Privoznik wrote:
This will be used with new migration scheme. This patch creates basically just monitor stub functions. Wiring them into something useful is done in later patches. --- src/qemu/qemu_monitor.c | 22 ++++++++++++++++++++++ src/qemu/qemu_monitor.h | 3 +++ src/qemu/qemu_monitor_json.c | 25 +++++++++++++++++++++++++ src/qemu/qemu_monitor_json.h | 3 +++ 4 files changed, 53 insertions(+)
diff --git a/src/qemu/qemu_monitor.c b/src/qemu/qemu_monitor.c index c38e3f9..1bc6434 100644 --- a/src/qemu/qemu_monitor.c +++ b/src/qemu/qemu_monitor.c @@ -3352,3 +3352,25 @@ int qemuMonitorNBDServerStart(qemuMonitorPtr mon,
return qemuMonitorJSONNBDServerStart(mon, host, port); } + +int qemuMonitorNBDServerAdd(qemuMonitorPtr mon, + const char *deviceID, + bool writable) +{ + VIR_DEBUG("mon=%p deviceID=%s", + mon, deviceID); + + if (!mon) { + virReportError(VIR_ERR_INVALID_ARG, "%s", + _("monitor must not be NULL")); + return -1; + } + + if (!mon->json) { + virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
VIR_ERR_OPERATION_UNSUPPORTED again
+ _("JSON monitor is required")); + return -1; + } + + return qemuMonitorJSONNBDServerAdd(mon, deviceID, writable); +} ...
ACK with the error code fixed. Jirka

This will be used after all migration work is done to stop the NBD server
running on the destination. It doesn't take any arguments; it just issues
a command.
---
 src/qemu/qemu_monitor.c      | 19 +++++++++++++++++++
 src/qemu/qemu_monitor.h      |  1 +
 src/qemu/qemu_monitor_json.c | 21 +++++++++++++++++++++
 src/qemu/qemu_monitor_json.h |  1 +
 4 files changed, 42 insertions(+)

diff --git a/src/qemu/qemu_monitor.c b/src/qemu/qemu_monitor.c
index 1bc6434..04cb6a1 100644
--- a/src/qemu/qemu_monitor.c
+++ b/src/qemu/qemu_monitor.c
@@ -3374,3 +3374,22 @@ int qemuMonitorNBDServerAdd(qemuMonitorPtr mon,
 
     return qemuMonitorJSONNBDServerAdd(mon, deviceID, writable);
 }
+
+int qemuMonitorNBDServerStop(qemuMonitorPtr mon)
+{
+    VIR_DEBUG("mon=%p", mon);
+
+    if (!mon) {
+        virReportError(VIR_ERR_INVALID_ARG, "%s",
+                       _("monitor must not be NULL"));
+        return -1;
+    }
+
+    if (!mon->json) {
+        virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
+                       _("JSON monitor is required"));
+        return -1;
+    }
+
+    return qemuMonitorJSONNBDServerStop(mon);
+}
diff --git a/src/qemu/qemu_monitor.h b/src/qemu/qemu_monitor.h
index bb22877..4c4c30a 100644
--- a/src/qemu/qemu_monitor.h
+++ b/src/qemu/qemu_monitor.h
@@ -640,6 +640,7 @@ int qemuMonitorNBDServerStart(qemuMonitorPtr mon,
 int qemuMonitorNBDServerAdd(qemuMonitorPtr mon,
                             const char *deviceID,
                             bool writable);
+int qemuMonitorNBDServerStop(qemuMonitorPtr);
 /**
  * When running two dd process and using <> redirection, we need a
  * shell that will not truncate files. These two strings serve that
diff --git a/src/qemu/qemu_monitor_json.c b/src/qemu/qemu_monitor_json.c
index ffd126a..228c9d7 100644
--- a/src/qemu/qemu_monitor_json.c
+++ b/src/qemu/qemu_monitor_json.c
@@ -4354,3 +4354,24 @@ qemuMonitorJSONNBDServerAdd(qemuMonitorPtr mon,
     virJSONValueFree(reply);
     return ret;
 }
+
+int
+qemuMonitorJSONNBDServerStop(qemuMonitorPtr mon)
+{
+    int ret = -1;
+    virJSONValuePtr cmd;
+    virJSONValuePtr reply = NULL;
+
+    if (!(cmd = qemuMonitorJSONMakeCommand("nbd-server-stop",
+                                           NULL)))
+        return ret;
+
+    ret = qemuMonitorJSONCommand(mon, cmd, &reply);
+
+    if (ret == 0)
+        ret = qemuMonitorJSONCheckError(cmd, reply);
+
+    virJSONValueFree(cmd);
+    virJSONValueFree(reply);
+    return ret;
+}
diff --git a/src/qemu/qemu_monitor_json.h b/src/qemu/qemu_monitor_json.h
index fd3dc9c..b4d9433 100644
--- a/src/qemu/qemu_monitor_json.h
+++ b/src/qemu/qemu_monitor_json.h
@@ -330,4 +330,5 @@ int qemuMonitorJSONNBDServerStart(qemuMonitorPtr mon,
 int qemuMonitorJSONNBDServerAdd(qemuMonitorPtr mon,
                                 const char *deviceID,
                                 bool writable);
+int qemuMonitorJSONNBDServerStop(qemuMonitorPtr mon);
 #endif /* QEMU_MONITOR_JSON_H */
--
1.8.0.2

On Fri, Jan 11, 2013 at 17:52:17 +0100, Michal Privoznik wrote:
This will be used after all migration work is done to stop NBD server running on destination. It doesn't take any arguments, just issues a command. --- src/qemu/qemu_monitor.c | 19 +++++++++++++++++++ src/qemu/qemu_monitor.h | 1 + src/qemu/qemu_monitor_json.c | 21 +++++++++++++++++++++ src/qemu/qemu_monitor_json.h | 1 + 4 files changed, 42 insertions(+)
diff --git a/src/qemu/qemu_monitor.c b/src/qemu/qemu_monitor.c index 1bc6434..04cb6a1 100644 --- a/src/qemu/qemu_monitor.c +++ b/src/qemu/qemu_monitor.c @@ -3374,3 +3374,22 @@ int qemuMonitorNBDServerAdd(qemuMonitorPtr mon,
return qemuMonitorJSONNBDServerAdd(mon, deviceID, writable); } + +int qemuMonitorNBDServerStop(qemuMonitorPtr mon) +{ + VIR_DEBUG("mon=%p", mon); + + if (!mon) { + virReportError(VIR_ERR_INVALID_ARG, "%s", + _("monitor must not be NULL")); + return -1; + } + + if (!mon->json) { + virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
VIR_ERR_OPERATION_UNSUPPORTED
+ _("JSON monitor is required")); + return -1; + } + + return qemuMonitorJSONNBDServerStop(mon); +} ...
ACK with the right error code. Jirka

This is a stub internal API just for now. Its purpose in life is to start NBD server and feed it with all domain disks. When adding a disk to NBD server, it is addressed via its alias (id= param on qemu command line). --- src/qemu/qemu_migration.c | 46 +++++++++++++++++++++++++++++++++++++++------- 1 file changed, 39 insertions(+), 7 deletions(-) diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c index 6fa4ad6..ccf223e 100644 --- a/src/qemu/qemu_migration.c +++ b/src/qemu/qemu_migration.c @@ -1087,6 +1087,29 @@ error: return NULL; } +/** + * qemuMigrationStartNBDServer: + * @driver: qemu driver + * @vm: domain + * @nbdPort: which port is NBD server listening to + * + * Starts NBD server. This is a newer method to copy + * storage during migration than using 'blk' and 'inc' + * arguments in 'migrate' monitor command. + * Error is reported here. + * + * Returns 0 on success, -1 otherwise. + */ +static int +qemuMigrationStartNBDServer(virQEMUDriverPtr driver ATTRIBUTE_UNUSED, + virDomainObjPtr vm ATTRIBUTE_UNUSED, + int *nbdPort ATTRIBUTE_UNUSED) +{ + /* do nothing for now */ + return 0; +} + + /* Validate whether the domain is safe to migrate. If vm is NULL, * then this is being run in the v2 Prepare stage on the destination * (where we only have the target xml); if vm is provided, then this @@ -1799,9 +1822,17 @@ done: else cookieFlags = QEMU_MIGRATION_COOKIE_GRAPHICS; - /* dummy place holder for real work */ - nbdPort = 0; - cookieFlags |= QEMU_MIGRATION_COOKIE_NBD; + if ((flags & VIR_MIGRATE_NON_SHARED_INC || + flags & VIR_MIGRATE_NON_SHARED_DISK) && + mig->nbd && qemuCapsGet(priv->caps, QEMU_CAPS_NBD_SERVER)) { + /* both source and destination qemus support nbd-server-* + * commands and user requested disk copy. Use the new ones */ + if (qemuMigrationStartNBDServer(driver, vm, &nbdPort) < 0) { + /* error already reported */ + goto endjob; + } + cookieFlags |= QEMU_MIGRATION_COOKIE_NBD; + } if (qemuMigrationBakeCookie(mig, driver, vm, nbdPort, cookieout, cookieoutlen, cookieFlags) < 0) { @@ -1879,9 +1910,10 @@ qemuMigrationPrepareTunnel(virQEMUDriverPtr driver, int ret; VIR_DEBUG("driver=%p, dconn=%p, cookiein=%s, cookieinlen=%d, " - "cookieout=%p, cookieoutlen=%p, st=%p, dname=%s, dom_xml=%s", + "cookieout=%p, cookieoutlen=%p, st=%p, dname=%s, dom_xml=%s " + "flags=%lx", driver, dconn, NULLSTR(cookiein), cookieinlen, - cookieout, cookieoutlen, st, NULLSTR(dname), dom_xml); + cookieout, cookieoutlen, st, NULLSTR(dname), dom_xml, flags); /* QEMU will be started with -incoming stdio (which qemu_command might * convert to exec:cat or fd:n) @@ -1915,10 +1947,10 @@ qemuMigrationPrepareDirect(virQEMUDriverPtr driver, VIR_DEBUG("driver=%p, dconn=%p, cookiein=%s, cookieinlen=%d, " "cookieout=%p, cookieoutlen=%p, uri_in=%s, uri_out=%p, " - "dname=%s, dom_xml=%s", + "dname=%s, dom_xml=%s flags=%lx", driver, dconn, NULLSTR(cookiein), cookieinlen, cookieout, cookieoutlen, NULLSTR(uri_in), uri_out, - NULLSTR(dname), dom_xml); + NULLSTR(dname), dom_xml, flags); /* The URI passed in may be NULL or a string "tcp://somehostname:port". * -- 1.8.0.2

On Fri, Jan 11, 2013 at 17:52:18 +0100, Michal Privoznik wrote:
This is a stub internal API just for now. Its purpose in life is to start NBD server and feed it with all domain disks. When adding a disk to NBD server, it is addressed via its alias (id= param on qemu command line). --- src/qemu/qemu_migration.c | 46 +++++++++++++++++++++++++++++++++++++++------- 1 file changed, 39 insertions(+), 7 deletions(-)
diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c index 6fa4ad6..ccf223e 100644 --- a/src/qemu/qemu_migration.c +++ b/src/qemu/qemu_migration.c @@ -1087,6 +1087,29 @@ error: return NULL; }
+/** + * qemuMigrationStartNBDServer: + * @driver: qemu driver + * @vm: domain + * @nbdPort: which port is NBD server listening to + * + * Starts NBD server. This is a newer method to copy + * storage during migration than using 'blk' and 'inc' + * arguments in 'migrate' monitor command. + * Error is reported here. + * + * Returns 0 on success, -1 otherwise. + */ +static int +qemuMigrationStartNBDServer(virQEMUDriverPtr driver ATTRIBUTE_UNUSED, + virDomainObjPtr vm ATTRIBUTE_UNUSED, + int *nbdPort ATTRIBUTE_UNUSED) +{ + /* do nothing for now */ + return 0;
Just squash 8/11 into this patch.
+} + + /* Validate whether the domain is safe to migrate. If vm is NULL, * then this is being run in the v2 Prepare stage on the destination * (where we only have the target xml); if vm is provided, then this @@ -1799,9 +1822,17 @@ done: else cookieFlags = QEMU_MIGRATION_COOKIE_GRAPHICS;
-    /* dummy place holder for real work */
-    nbdPort = 0;
-    cookieFlags |= QEMU_MIGRATION_COOKIE_NBD;
+    if ((flags & VIR_MIGRATE_NON_SHARED_INC ||
+         flags & VIR_MIGRATE_NON_SHARED_DISK) &&
+        mig->nbd && qemuCapsGet(priv->caps, QEMU_CAPS_NBD_SERVER)) {
+        /* both source and destination qemus support nbd-server-*
+         * commands and user requested disk copy. Use the new ones */
+        if (qemuMigrationStartNBDServer(driver, vm, &nbdPort) < 0) {
+            /* error already reported */
+            goto endjob;
+        }
+        cookieFlags |= QEMU_MIGRATION_COOKIE_NBD;
+    }
As I said earlier, this code should refuse to use NBD for tunnelled migration.
if (qemuMigrationBakeCookie(mig, driver, vm, nbdPort, cookieout, cookieoutlen, cookieFlags) < 0) { @@ -1879,9 +1910,10 @@ qemuMigrationPrepareTunnel(virQEMUDriverPtr driver, int ret;
VIR_DEBUG("driver=%p, dconn=%p, cookiein=%s, cookieinlen=%d, " - "cookieout=%p, cookieoutlen=%p, st=%p, dname=%s, dom_xml=%s", + "cookieout=%p, cookieoutlen=%p, st=%p, dname=%s, dom_xml=%s " + "flags=%lx", driver, dconn, NULLSTR(cookiein), cookieinlen, - cookieout, cookieoutlen, st, NULLSTR(dname), dom_xml); + cookieout, cookieoutlen, st, NULLSTR(dname), dom_xml, flags);
/* QEMU will be started with -incoming stdio (which qemu_command might * convert to exec:cat or fd:n) @@ -1915,10 +1947,10 @@ qemuMigrationPrepareDirect(virQEMUDriverPtr driver,
VIR_DEBUG("driver=%p, dconn=%p, cookiein=%s, cookieinlen=%d, " "cookieout=%p, cookieoutlen=%p, uri_in=%s, uri_out=%p, " - "dname=%s, dom_xml=%s", + "dname=%s, dom_xml=%s flags=%lx", driver, dconn, NULLSTR(cookiein), cookieinlen, cookieout, cookieoutlen, NULLSTR(uri_in), uri_out, - NULLSTR(dname), dom_xml); + NULLSTR(dname), dom_xml, flags);
/* The URI passed in may be NULL or a string "tcp://somehostname:port". *
These two hunks are unrelated and may be pushed separately, ACK to them. Jirka

There's a code snippet which allocates a port for incoming migration. We
will need this for allocating a port for incoming disks as well. Hence,
move this snippet into a separate function called qemuMigrationNextPort().
---
 src/qemu/qemu_migration.c | 19 +++++++++++++------
 1 file changed, 13 insertions(+), 6 deletions(-)

diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index ccf223e..367747c 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -1087,6 +1087,17 @@ error:
     return NULL;
 }
 
+static int
+qemuMigrationNextPort(void) {
+    static int port = 0;
+    int ret = QEMUD_MIGRATION_FIRST_PORT + port++;
+
+    if (port == QEMUD_MIGRATION_NUM_PORTS)
+        port = 0;
+
+    return ret;
+}
+
 /**
  * qemuMigrationStartNBDServer:
  * @driver: qemu driver
@@ -1938,7 +1949,6 @@ qemuMigrationPrepareDirect(virQEMUDriverPtr driver,
                            const char *dom_xml,
                            unsigned long flags)
 {
-    static int port = 0;
     int this_port;
     char *hostname = NULL;
     char migrateFrom [64];
@@ -1963,8 +1973,7 @@ qemuMigrationPrepareDirect(virQEMUDriverPtr driver,
      * to be a correct hostname which refers to the target machine).
      */
    if (uri_in == NULL) {
-        this_port = QEMUD_MIGRATION_FIRST_PORT + port++;
-        if (port == QEMUD_MIGRATION_NUM_PORTS) port = 0;
+        this_port = qemuMigrationNextPort();
 
         /* Get hostname */
         if ((hostname = virGetHostname(NULL)) == NULL)
@@ -2003,9 +2012,7 @@ qemuMigrationPrepareDirect(virQEMUDriverPtr driver,
         p = strrchr(uri_in, ':');
         if (p == strchr(uri_in, ':')) {
             /* Generate a port */
-            this_port = QEMUD_MIGRATION_FIRST_PORT + port++;
-            if (port == QEMUD_MIGRATION_NUM_PORTS)
-                port = 0;
+            this_port = qemuMigrationNextPort();
 
             /* Caller frees */
             if (virAsprintf(uri_out, "%s:%d", uri_in, this_port) < 0) {
--
1.8.0.2

On Fri, Jan 11, 2013 at 17:52:19 +0100, Michal Privoznik wrote:
There's a code snippet which allocates a port for incoming migration. We will need this for allocating a port for incoming disks as well. Hence, move this snippet into a separate function called qemuMigrationNextPort().
Should we use virPortAllocator instead? Anyway, this patch should be moved before 6/11 Introduce qemuMigrationStartNBDServer. Jirka
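For reference, a rough outline of the virPortAllocator variant; the function
signatures are recalled from memory and driver->migrationPorts is a
hypothetical field, so treat this purely as a sketch:

    /* once, at driver initialization time */
    if (!(driver->migrationPorts =
              virPortAllocatorNew(QEMUD_MIGRATION_FIRST_PORT,
                                  QEMUD_MIGRATION_FIRST_PORT +
                                  QEMUD_MIGRATION_NUM_PORTS - 1)))
        return -1;

    /* whenever a migration (or NBD) port is needed */
    unsigned short port = 0;
    if (virPortAllocatorAcquire(driver->migrationPorts, &port) < 0)
        return -1;
    /* ... and virPortAllocatorRelease(driver->migrationPorts, port)
     *     once the port is no longer used */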

We need to start NBD server and feed it with all non-<shared/> disks. However, after qemuDomainObjEnterMonitorAsync the domain object is unlocked so we cannot touch its disk definitions. Therefore, we must prepare the list of disk IDs prior entering monitor. --- src/qemu/qemu_migration.c | 59 +++++++++++++++++++++++++++++++++++++++++++++-- 1 file changed, 57 insertions(+), 2 deletions(-) diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c index 367747c..fc410c0 100644 --- a/src/qemu/qemu_migration.c +++ b/src/qemu/qemu_migration.c @@ -35,6 +35,7 @@ #include "qemu_domain.h" #include "qemu_process.h" #include "qemu_capabilities.h" +#include "qemu_command.h" #include "qemu_cgroup.h" #include "domain_audit.h" @@ -1116,8 +1117,62 @@ qemuMigrationStartNBDServer(virQEMUDriverPtr driver ATTRIBUTE_UNUSED, virDomainObjPtr vm ATTRIBUTE_UNUSED, int *nbdPort ATTRIBUTE_UNUSED) { - /* do nothing for now */ - return 0; + int ret = -1; + qemuDomainObjPrivatePtr priv = vm->privateData; + int port = qemuMigrationNextPort(); + const char *listen = "0.0.0.0"; + char **disks = NULL; + size_t i, ndisks = 0; + + for (i = 0; i < vm->def->ndisks; i++) { + virDomainDiskDefPtr disk = vm->def->disks[i]; + + /* skip shared disks */ + if (disk->shared) + continue; + + if (VIR_REALLOC_N(disks, ndisks + 1) < 0) { + virReportOOMError(); + goto cleanup; + } + + if (virAsprintf(&disks[ndisks++], "%s%s", + QEMU_DRIVE_HOST_PREFIX, disk->info.alias) < 0) { + virReportOOMError(); + goto cleanup; + } + } + + if (!ndisks) { + /* Hooray! Nothing to care about */ + ret = 0; + goto cleanup; + } + + if (qemuDomainObjEnterMonitorAsync(driver, vm, + QEMU_ASYNC_JOB_MIGRATION_IN) < 0) + goto cleanup; + + if (qemuMonitorNBDServerStart(priv->mon, listen, port) < 0) + goto endjob; + + for (i = 0; i < ndisks; i++) { + if (qemuMonitorNBDServerAdd(priv->mon, disks[i], true) < 0) { + VIR_WARN("Unable to add '%s' to NDB server", disks[i]); + goto endjob; + } + } + + *nbdPort = port; + ret = 0; + +endjob: + qemuDomainObjExitMonitorWithDriver(driver, vm); +cleanup: + for (i = 0; i < ndisks; i++) + VIR_FREE(disks[i]); + VIR_FREE(disks); + return ret; } -- 1.8.0.2

On Fri, Jan 11, 2013 at 17:52:20 +0100, Michal Privoznik wrote:
We need to start NBD server and feed it with all non-<shared/> disks. However, after qemuDomainObjEnterMonitorAsync the domain object is unlocked so we cannot touch its disk definitions. Therefore, we must prepare the list of disk IDs prior entering monitor. --- src/qemu/qemu_migration.c | 59 +++++++++++++++++++++++++++++++++++++++++++++-- 1 file changed, 57 insertions(+), 2 deletions(-)
diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c index 367747c..fc410c0 100644 --- a/src/qemu/qemu_migration.c +++ b/src/qemu/qemu_migration.c @@ -35,6 +35,7 @@ #include "qemu_domain.h" #include "qemu_process.h" #include "qemu_capabilities.h" +#include "qemu_command.h" #include "qemu_cgroup.h"
#include "domain_audit.h" @@ -1116,8 +1117,62 @@ qemuMigrationStartNBDServer(virQEMUDriverPtr driver ATTRIBUTE_UNUSED, virDomainObjPtr vm ATTRIBUTE_UNUSED, int *nbdPort ATTRIBUTE_UNUSED) { - /* do nothing for now */ - return 0; + int ret = -1; + qemuDomainObjPrivatePtr priv = vm->privateData; + int port = qemuMigrationNextPort(); + const char *listen = "0.0.0.0";
Does nbd-server-start support IPv6?
+    char **disks = NULL;
+    size_t i, ndisks = 0;
+
+    for (i = 0; i < vm->def->ndisks; i++) {
+        virDomainDiskDefPtr disk = vm->def->disks[i];
+
+        /* skip shared disks */
+        if (disk->shared)
+            continue;
+
+        if (VIR_REALLOC_N(disks, ndisks + 1) < 0) {
+            virReportOOMError();
+            goto cleanup;
+        }
Personally, I'd just preallocate the array for vm->def->ndisks elements rather than doing several allocations in a loop.
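A sketch of that preallocation, reusing the patch's variables:

    /* the list can never have more than vm->def->ndisks entries,
     * so allocate it in one go and only fill in the non-shared ones */
    if (VIR_ALLOC_N(disks, vm->def->ndisks) < 0) {
        virReportOOMError();
        goto cleanup;
    }

    for (i = 0; i < vm->def->ndisks; i++) {
        virDomainDiskDefPtr disk = vm->def->disks[i];

        /* skip shared disks */
        if (disk->shared)
            continue;

        if (virAsprintf(&disks[ndisks++], "%s%s",
                        QEMU_DRIVE_HOST_PREFIX, disk->info.alias) < 0) {
            virReportOOMError();
            goto cleanup;
        }
    }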
+ + if (virAsprintf(&disks[ndisks++], "%s%s", + QEMU_DRIVE_HOST_PREFIX, disk->info.alias) < 0) { + virReportOOMError(); + goto cleanup; + } + } + + if (!ndisks) { + /* Hooray! Nothing to care about */ + ret = 0; + goto cleanup; + } + + if (qemuDomainObjEnterMonitorAsync(driver, vm, + QEMU_ASYNC_JOB_MIGRATION_IN) < 0) + goto cleanup; + + if (qemuMonitorNBDServerStart(priv->mon, listen, port) < 0) + goto endjob; + + for (i = 0; i < ndisks; i++) { + if (qemuMonitorNBDServerAdd(priv->mon, disks[i], true) < 0) { + VIR_WARN("Unable to add '%s' to NDB server", disks[i]); + goto endjob; + } + } + + *nbdPort = port; + ret = 0; + +endjob: + qemuDomainObjExitMonitorWithDriver(driver, vm); +cleanup: + for (i = 0; i < ndisks; i++) + VIR_FREE(disks[i]); + VIR_FREE(disks); + return ret; }
And definitely squash this patch into 6/11. Jirka

This function does the source part of NBD magic. It invokes drive-mirror on each non shared disk and wait till the mirroring process completes. When it does we can proceed with migration. Currently, an active waiting is done: every 50ms libvirt asks qemu if block-job is finished or not. However, once the job finishes, qemu doesn't report its progress so we can only assume if the job finished successfully or not. The better solution would be to listen to the event which is sent as soon as the job finishes. The event does contain the result of job. --- src/qemu/qemu_migration.c | 179 +++++++++++++++++++++++++++++++++++++++++++++- 1 file changed, 178 insertions(+), 1 deletion(-) diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c index fc410c0..1f9a880 100644 --- a/src/qemu/qemu_migration.c +++ b/src/qemu/qemu_migration.c @@ -1175,6 +1175,177 @@ cleanup: return ret; } +/** + * qemuMigrationDiskMirror: + * @driver: qemu driver + * @vm: domain + * @mig: migration cookie + * @migrate_flags: migrate monitor command flags + * + * Run drive-mirror to feed NBD server running on dst and + * wait till the process completes. On success, update + * @migrate_flags so we don't tell 'migrate' command to + * do the very same operation. + * + * Returns 0 on success (@migrate_flags updated), + * -1 otherwise. + */ +static int +qemuMigrationDriveMirror(virQEMUDriverPtr driver, + virDomainObjPtr vm, + qemuMigrationCookiePtr mig, + const char *host, + unsigned long speed, + unsigned int *migrate_flags) +{ + int ret = -1; + int mon_ret; + qemuDomainObjPrivatePtr priv = vm->privateData; + size_t ndisks = 0, i; + char **disks = NULL; + unsigned int mirror_flags = VIR_DOMAIN_BLOCK_REBASE_REUSE_EXT; + char *nbd_dest = NULL; + + if (*migrate_flags & QEMU_MONITOR_MIGRATE_NON_SHARED_DISK) { + /* dummy */ + } else if (*migrate_flags & QEMU_MONITOR_MIGRATE_NON_SHARED_INC) { + mirror_flags |= VIR_DOMAIN_BLOCK_REBASE_SHALLOW; + } else { + /* Nothing to be done here. Claim success */ + return 0; + } + + for (i = 0; i < vm->def->ndisks; i++) { + virDomainDiskDefPtr disk = vm->def->disks[i]; + + /* skip shared disks */ + if (disk->shared) + continue; + + if (VIR_REALLOC_N(disks, ndisks + 1) < 0) { + virReportOOMError(); + goto cleanup; + } + + if (virAsprintf(&disks[ndisks++], "%s%s", + QEMU_DRIVE_HOST_PREFIX, disk->info.alias) < 0) { + virReportOOMError(); + goto cleanup; + } + } + + if (!ndisks) { + /* Hooray! Nothing to care about */ + ret = 0; + goto cleanup; + } + + if (!mig->nbd) { + /* Destination doesn't support NBD server. + * Fall back to previous implementation. + * XXX Or should we report an error here? 
*/ + VIR_DEBUG("Destination doesn't support NBD server " + "Falling back to previous implementation."); + ret = 0; + goto cleanup; + } + + for (i = 0; i < ndisks; i++) { + virDomainBlockJobInfo info; + VIR_FREE(nbd_dest); + if (virAsprintf(&nbd_dest, "nbd:%s:%u:exportname=%s", + host, mig->nbd->port, disks[i]) < 0) { + virReportOOMError(); + goto error; + } + + if (qemuDomainObjEnterMonitorAsync(driver, vm, + QEMU_ASYNC_JOB_MIGRATION_OUT) < 0) + goto error; + mon_ret = qemuMonitorDriveMirror(priv->mon, disks[i], nbd_dest, + NULL, speed, mirror_flags); + qemuDomainObjExitMonitorWithDriver(driver, vm); + + if (mon_ret < 0) + goto error; + + /* wait for completion */ + while (true) { + /* Poll every 50ms for progress & to allow cancellation */ + struct timespec ts = { .tv_sec = 0, .tv_nsec = 50 * 1000 * 1000ull }; + if (qemuDomainObjEnterMonitorAsync(driver, vm, + QEMU_ASYNC_JOB_MIGRATION_OUT) < 0) + goto error; + if (priv->job.asyncAbort) { + /* explicitly do this *after* we entered the monitor, + * as this is a critical section so we are guaranteed + * priv->job.asyncAbort will not change */ + qemuDomainObjExitMonitorWithDriver(driver, vm); + virReportError(VIR_ERR_OPERATION_ABORTED, _("%s: %s"), + qemuDomainAsyncJobTypeToString(priv->job.asyncJob), + _("canceled by client")); + goto cleanup; + } + mon_ret = qemuMonitorBlockJob(priv->mon, disks[i], NULL, 0, + &info, BLOCK_JOB_INFO, true); + qemuDomainObjExitMonitorWithDriver(driver, vm); + + if (mon_ret < 0) { + /* qemu doesn't report finished jobs */ + VIR_WARN("Unable to query drive-mirror job status. " + "Stop polling on '%s' cur:%llu end:%llu", + disks[i], info.cur, info.end); + break; + } + + if (info.cur == info.end) { + VIR_DEBUG("Drive mirroring of '%s' completed", disks[i]); + break; + } + + /* XXX Frankly speaking, we should listen to the events, + * instead of doing this. But this works for now and we + * are doing something similar in migration itself anyway */ + + virDomainObjUnlock(vm); + qemuDriverUnlock(driver); + + nanosleep(&ts, NULL); + + qemuDriverLock(driver); + virDomainObjLock(vm); + + } + } + + /* okay, copied. modify migrate_flags */ + *migrate_flags &= ~(QEMU_MONITOR_MIGRATE_NON_SHARED_DISK | + QEMU_MONITOR_MIGRATE_NON_SHARED_INC); + ret = 0; + +cleanup: + for (i = 0; i < ndisks; i++) + VIR_FREE(disks[i]); + VIR_FREE(disks); + VIR_FREE(nbd_dest); + return ret; + +error: + /* cancel any outstanding jobs */ + if (qemuDomainObjEnterMonitorAsync(driver, vm, + QEMU_ASYNC_JOB_MIGRATION_OUT) == 0) { + while (i) { + if (qemuMonitorBlockJob(priv->mon, disks[i], NULL, 0, + NULL, BLOCK_JOB_ABORT, true) < 0) + VIR_WARN("Unable to cancel block-job on '%s'", disks[i]); + i--; + } + qemuDomainObjExitMonitorWithDriver(driver, vm); + } else { + VIR_WARN("Unable to enter monitor. No block job cancelled"); + } + goto cleanup; +} /* Validate whether the domain is safe to migrate. 
If vm is NULL, * then this is being run in the v2 Prepare stage on the destination @@ -2431,6 +2602,13 @@ qemuMigrationRun(virQEMUDriverPtr driver, if (qemuDomainMigrateGraphicsRelocate(driver, vm, mig) < 0) VIR_WARN("unable to provide data for graphics client relocation"); + /* this will update migrate_flags on success */ + if (qemuMigrationDriveMirror(driver, vm, mig, spec->dest.host.name, + migrate_speed, &migrate_flags) < 0) { + /* error reported by helper func */ + goto cleanup; + } + /* Before EnterMonitor, since qemuMigrationSetOffline already does that */ if (!(flags & VIR_MIGRATE_LIVE) && virDomainObjGetState(vm, NULL) == VIR_DOMAIN_RUNNING) { @@ -2458,7 +2636,6 @@ qemuMigrationRun(virQEMUDriverPtr driver, goto cleanup; } - /* connect to the destination qemu if needed */ if (spec->destType == MIGRATION_DEST_CONNECT_HOST && qemuMigrationConnect(driver, vm, spec) < 0) { -- 1.8.0.2

On Fri, Jan 11, 2013 at 17:52:21 +0100, Michal Privoznik wrote:
This function does the source part of the NBD magic. It invokes drive-mirror on each non-shared disk and waits until the mirroring process completes. When it does, we can proceed with migration.
Currently, active polling is used: every 50ms libvirt asks qemu whether the block job has finished. However, once the job finishes, qemu no longer reports its progress, so we can only assume whether the job finished successfully or not. The better solution would be to listen for the event that is emitted as soon as the job finishes; the event does contain the result of the job.
---
 src/qemu/qemu_migration.c | 179 +++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 178 insertions(+), 1 deletion(-)
diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index fc410c0..1f9a880 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -1175,6 +1175,177 @@ cleanup:
     return ret;
 }
+/** + * qemuMigrationDiskMirror: + * @driver: qemu driver + * @vm: domain + * @mig: migration cookie + * @migrate_flags: migrate monitor command flags + * + * Run drive-mirror to feed NBD server running on dst and + * wait till the process completes. On success, update + * @migrate_flags so we don't tell 'migrate' command to + * do the very same operation. + * + * Returns 0 on success (@migrate_flags updated), + * -1 otherwise. + */ +static int +qemuMigrationDriveMirror(virQEMUDriverPtr driver, + virDomainObjPtr vm, + qemuMigrationCookiePtr mig, + const char *host, + unsigned long speed, + unsigned int *migrate_flags) +{ + int ret = -1; + int mon_ret; + qemuDomainObjPrivatePtr priv = vm->privateData; + size_t ndisks = 0, i; + char **disks = NULL; + unsigned int mirror_flags = VIR_DOMAIN_BLOCK_REBASE_REUSE_EXT; + char *nbd_dest = NULL; + + if (*migrate_flags & QEMU_MONITOR_MIGRATE_NON_SHARED_DISK) { + /* dummy */ + } else if (*migrate_flags & QEMU_MONITOR_MIGRATE_NON_SHARED_INC) { + mirror_flags |= VIR_DOMAIN_BLOCK_REBASE_SHALLOW; + } else { + /* Nothing to be done here. Claim success */ + return 0; + }
I find this hard to read, I'd rather rewrite this as:

    if (*migrate_flags & QEMU_MONITOR_MIGRATE_NON_SHARED_INC)
        mirror_flags |= VIR_DOMAIN_BLOCK_REBASE_SHALLOW;
    else if (!(*migrate_flags & QEMU_MONITOR_MIGRATE_NON_SHARED_DISK))
        return 0;

or (even easier but redundant):

    if (!(*migrate_flags & (QEMU_MONITOR_MIGRATE_NON_SHARED_DISK |
                            QEMU_MONITOR_MIGRATE_NON_SHARED_INC)))
        return 0;

    if (*migrate_flags & QEMU_MONITOR_MIGRATE_NON_SHARED_INC)
        mirror_flags |= VIR_DOMAIN_BLOCK_REBASE_SHALLOW;
+ + for (i = 0; i < vm->def->ndisks; i++) { + virDomainDiskDefPtr disk = vm->def->disks[i]; + + /* skip shared disks */ + if (disk->shared) + continue; + + if (VIR_REALLOC_N(disks, ndisks + 1) < 0) { + virReportOOMError(); + goto cleanup; + } + + if (virAsprintf(&disks[ndisks++], "%s%s", + QEMU_DRIVE_HOST_PREFIX, disk->info.alias) < 0) { + virReportOOMError(); + goto cleanup; + } + } + + if (!ndisks) { + /* Hooray! Nothing to care about */ + ret = 0; + goto cleanup; + }
This is just code duplicated from qemuMigrationStartNBDServer. Should we move it to a separate function?
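For illustration, a rough sketch of what such a shared helper might look like. The name qemuMigrationCollectDiskAliases is invented for the sketch; it only reuses identifiers that already appear in the hunk quoted above:

    static int
    qemuMigrationCollectDiskAliases(virDomainDefPtr def,
                                    char ***disksRet,
                                    size_t *ndisksRet)
    {
        char **disks = NULL;
        size_t ndisks = 0, i;

        for (i = 0; i < def->ndisks; i++) {
            virDomainDiskDefPtr disk = def->disks[i];

            /* skip shared disks, as both callers do */
            if (disk->shared)
                continue;

            if (VIR_REALLOC_N(disks, ndisks + 1) < 0)
                goto no_memory;

            if (virAsprintf(&disks[ndisks++], "%s%s",
                            QEMU_DRIVE_HOST_PREFIX, disk->info.alias) < 0)
                goto no_memory;
        }

        *disksRet = disks;
        *ndisksRet = ndisks;
        return 0;

    no_memory:
        virReportOOMError();
        while (ndisks)
            VIR_FREE(disks[--ndisks]);
        VIR_FREE(disks);
        return -1;
    }

Both qemuMigrationStartNBDServer and qemuMigrationDriveMirror could then call it and free the returned list in their cleanup paths.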
+ + if (!mig->nbd) { + /* Destination doesn't support NBD server. + * Fall back to previous implementation. + * XXX Or should we report an error here? */ + VIR_DEBUG("Destination doesn't support NBD server " + "Falling back to previous implementation."); + ret = 0; + goto cleanup; + }
Ah, this is the check I was looking for in one of the previous patches.
+ + for (i = 0; i < ndisks; i++) { + virDomainBlockJobInfo info; + VIR_FREE(nbd_dest); + if (virAsprintf(&nbd_dest, "nbd:%s:%u:exportname=%s", + host, mig->nbd->port, disks[i]) < 0) { + virReportOOMError(); + goto error; + }
Hmm, it looks like this is not IPv6-ready at all; or does it allow "[fec0::1]"-style IPv6 addresses to be passed as the host?
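If IPv6 hosts were to be supported here, one conceivable tweak (a sketch only; whether qemu's nbd: URI syntax accepts bracketed addresses would still need checking) is to bracket any host containing a colon:

    /* bracket IPv6 literals so the port separator stays unambiguous */
    const char *fmt = strchr(host, ':') ? "nbd:[%s]:%u:exportname=%s"
                                        : "nbd:%s:%u:exportname=%s";
    if (virAsprintf(&nbd_dest, fmt, host, mig->nbd->port, disks[i]) < 0) {
        virReportOOMError();
        goto error;
    }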
+ + if (qemuDomainObjEnterMonitorAsync(driver, vm, + QEMU_ASYNC_JOB_MIGRATION_OUT) < 0) + goto error; + mon_ret = qemuMonitorDriveMirror(priv->mon, disks[i], nbd_dest, + NULL, speed, mirror_flags); + qemuDomainObjExitMonitorWithDriver(driver, vm); + + if (mon_ret < 0) + goto error; + + /* wait for completion */ + while (true) { + /* Poll every 50ms for progress & to allow cancellation */ + struct timespec ts = { .tv_sec = 0, .tv_nsec = 50 * 1000 * 1000ull }; + if (qemuDomainObjEnterMonitorAsync(driver, vm, + QEMU_ASYNC_JOB_MIGRATION_OUT) < 0) + goto error; + if (priv->job.asyncAbort) { + /* explicitly do this *after* we entered the monitor, + * as this is a critical section so we are guaranteed + * priv->job.asyncAbort will not change */ + qemuDomainObjExitMonitorWithDriver(driver, vm); + virReportError(VIR_ERR_OPERATION_ABORTED, _("%s: %s"), + qemuDomainAsyncJobTypeToString(priv->job.asyncJob), + _("canceled by client")); + goto cleanup;
I believe this should jump to the error label and cancel all started block jobs.
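In other words, a minimal sketch of the suggested change, with everything else exactly as in the hunk above:

    if (priv->job.asyncAbort) {
        /* still explicitly after entering the monitor, as before */
        qemuDomainObjExitMonitorWithDriver(driver, vm);
        virReportError(VIR_ERR_OPERATION_ABORTED, _("%s: %s"),
                       qemuDomainAsyncJobTypeToString(priv->job.asyncJob),
                       _("canceled by client"));
        goto error;    /* cancel the mirror jobs started so far */
    }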
+ } + mon_ret = qemuMonitorBlockJob(priv->mon, disks[i], NULL, 0, + &info, BLOCK_JOB_INFO, true); + qemuDomainObjExitMonitorWithDriver(driver, vm); + + if (mon_ret < 0) { + /* qemu doesn't report finished jobs */ + VIR_WARN("Unable to query drive-mirror job status. " + "Stop polling on '%s' cur:%llu end:%llu", + disks[i], info.cur, info.end); + break;
Isn't this fatal? If not, I think the comment above is too short.
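If it is indeed meant to be non-fatal, the comment could spell that out; a possible wording (editor's sketch, based only on the "qemu doesn't report finished jobs" remark in the patch itself):

    /* qemu stops reporting a block job once it has completed, so a
     * failed query at this point most likely means the mirror has
     * already finished; stop polling this disk and carry on rather
     * than failing the whole migration. */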
+ } + + if (info.cur == info.end) { + VIR_DEBUG("Drive mirroring of '%s' completed", disks[i]); + break; + } + + /* XXX Frankly speaking, we should listen to the events, + * instead of doing this. But this works for now and we + * are doing something similar in migration itself anyway */
Well, migration itself does not send any events. But I agree we can do polling here and change to events at the same time we do it for migration (if we ever get that event from qemu).
+ + virDomainObjUnlock(vm); + qemuDriverUnlock(driver); + + nanosleep(&ts, NULL); + + qemuDriverLock(driver); + virDomainObjLock(vm); + + } + } + + /* okay, copied. modify migrate_flags */ + *migrate_flags &= ~(QEMU_MONITOR_MIGRATE_NON_SHARED_DISK | + QEMU_MONITOR_MIGRATE_NON_SHARED_INC); + ret = 0; + +cleanup: + for (i = 0; i < ndisks; i++) + VIR_FREE(disks[i]); + VIR_FREE(disks); + VIR_FREE(nbd_dest); + return ret; + +error: + /* cancel any outstanding jobs */ + if (qemuDomainObjEnterMonitorAsync(driver, vm, + QEMU_ASYNC_JOB_MIGRATION_OUT) == 0) { + while (i) { + if (qemuMonitorBlockJob(priv->mon, disks[i], NULL, 0, + NULL, BLOCK_JOB_ABORT, true) < 0) + VIR_WARN("Unable to cancel block-job on '%s'", disks[i]); + i--; + } + qemuDomainObjExitMonitorWithDriver(driver, vm); + } else { + VIR_WARN("Unable to enter monitor. No block job cancelled"); + } + goto cleanup; +}
/* Validate whether the domain is safe to migrate. If vm is NULL, * then this is being run in the v2 Prepare stage on the destination ...
Jirka

With new NBD storage migration approach there are several requirements that need to be meet for successful use of the feature. One of them is - the file representing a disk, needs to have at least same size as on the source. Hence, we must transfer a list of pairs [disk source, size] and check on destination that this requirement is met and/or take actions to meet it. --- src/qemu/qemu_migration.c | 164 +++++++++++++++++++++++++++++++++++++++++++++- 1 file changed, 161 insertions(+), 3 deletions(-) diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c index 1f9a880..86eb4c8 100644 --- a/src/qemu/qemu_migration.c +++ b/src/qemu/qemu_migration.c @@ -29,6 +29,9 @@ #endif #include <fcntl.h> #include <poll.h> +#include <unistd.h> +#include <sys/types.h> +#include <sys/stat.h> #include "qemu_migration.h" #include "qemu_monitor.h" @@ -137,6 +140,18 @@ struct _qemuMigrationCookieNBD { Negative one is meant to be sent when translating from perform to finish phase to let destination know it's safe to stop NBD server.*/ + + /* The list of pairs [disk-size] (in Bytes). This is needed + * because the same disk size is one of prerequisites for NBD + * storage migration. Unfortunately, we can't rely on + * anything but disk order, since 'src' can be overwritten by + * migration hook script, device aliases are not assigned on + * dst yet (as the source files need to be created before + * qemuProcessStart). */ + size_t ndisks; + struct { + size_t bytes; + } *disk; }; typedef struct _qemuMigrationCookie qemuMigrationCookie; @@ -199,6 +214,17 @@ qemuMigrationCookieNetworkFree(qemuMigrationCookieNetworkPtr network) } +static void +qemuMigrationCookieNBDFree(qemuMigrationCookieNBDPtr nbd) +{ + if (!nbd) + return; + + VIR_FREE(nbd->disk); + VIR_FREE(nbd); +} + + static void qemuMigrationCookieFree(qemuMigrationCookiePtr mig) { if (!mig) @@ -210,12 +236,13 @@ static void qemuMigrationCookieFree(qemuMigrationCookiePtr mig) if (mig->flags & QEMU_MIGRATION_COOKIE_NETWORK) qemuMigrationCookieNetworkFree(mig->network); + qemuMigrationCookieNBDFree(mig->nbd); + VIR_FREE(mig->localHostname); VIR_FREE(mig->remoteHostname); VIR_FREE(mig->name); VIR_FREE(mig->lockState); VIR_FREE(mig->lockDriver); - VIR_FREE(mig->nbd); VIR_FREE(mig); } @@ -525,8 +552,12 @@ qemuMigrationCookieAddNetwork(qemuMigrationCookiePtr mig, static int qemuMigrationCookieAddNBD(qemuMigrationCookiePtr mig, + virDomainObjPtr vm, int nbdPort) { + qemuDomainObjPrivatePtr priv = vm->privateData; + size_t i; + /* It is not a bug if there already is a NBD data */ if (!mig->nbd && VIR_ALLOC(mig->nbd) < 0) { @@ -534,6 +565,33 @@ qemuMigrationCookieAddNBD(qemuMigrationCookiePtr mig, return -1; } + /* in Begin phase add info about disks */ + if (priv->job.phase == QEMU_MIGRATION_PHASE_BEGIN3 && + vm->def->ndisks) { + if (VIR_ALLOC_N(mig->nbd->disk, vm->def->ndisks) < 0) { + virReportOOMError(); + return -1; + } + + for (i = 0; i < vm->def->ndisks; i++) { + virDomainDiskDefPtr disk = vm->def->disks[i]; + struct stat sb; + + /* Add only non-shared disks with source */ + if (!disk->src || disk->shared) + continue; + + if (stat(disk->src, &sb) < 0) { + virReportSystemError(errno, + _("Unable to stat '%s'"), + disk->src); + return -1; + } + + mig->nbd->disk[mig->nbd->ndisks++].bytes = sb.st_size; + } + } + mig->nbd->port = nbdPort; mig->flags |= QEMU_MIGRATION_COOKIE_NBD; @@ -647,7 +705,15 @@ qemuMigrationCookieXMLFormat(virQEMUDriverPtr driver, virBufferAddLit(buf, " <nbd"); if (mig->nbd->port) virBufferAsprintf(buf, " port='%d'", mig->nbd->port); - 
virBufferAddLit(buf, "/>\n"); + if (mig->nbd->ndisks) { + virBufferAddLit(buf, ">\n"); + for (i = 0; i < mig->nbd->ndisks; i++) + virBufferAsprintf(buf, " <disk size='%zu'/>\n", + mig->nbd->disk[i].bytes); + virBufferAddLit(buf, " </nbd>\n"); + } else { + virBufferAddLit(buf, "/>\n"); + } } virBufferAddLit(buf, "</qemu-migration>\n"); @@ -952,6 +1018,32 @@ qemuMigrationCookieXMLParse(qemuMigrationCookiePtr mig, port); goto error; } + + if ((n = virXPathNodeSet("./nbd/disk", ctxt, &nodes)) > 0) { + xmlNodePtr oldNode = ctxt->node; + if (VIR_ALLOC_N(mig->nbd->disk, n) < 0) { + virReportOOMError(); + goto error; + } + mig->nbd->ndisks = n; + + for (i = 0; i < n; i++) { + ctxt->node = nodes[i]; + + tmp = virXPathString("string(./@size)", ctxt); + if (virStrToLong_ull(tmp, NULL, 10, (unsigned long long *) + &mig->nbd->disk[i].bytes) < 0) { + virReportError(VIR_ERR_INTERNAL_ERROR, + _("Malformed size attribute '%s'"), + tmp); + VIR_FREE(tmp); + goto error; + } + VIR_FREE(tmp); + } + VIR_FREE(nodes); + ctxt->node = oldNode; + } } return 0; @@ -1020,7 +1112,7 @@ qemuMigrationBakeCookie(qemuMigrationCookiePtr mig, } if (flags & QEMU_MIGRATION_COOKIE_NBD && - qemuMigrationCookieAddNBD(mig, nbdPort) < 0) + qemuMigrationCookieAddNBD(mig, dom, nbdPort) < 0) return -1; if (!(*cookieout = qemuMigrationCookieXMLFormatStr(driver, mig))) @@ -1347,6 +1439,68 @@ error: goto cleanup; } +static int +qemuMigrationPreCreateStorage(virDomainObjPtr vm, + qemuMigrationCookiePtr mig) +{ + int ret = -1; + size_t i, mig_i = 0; + struct stat sb; + int fd = -1; + + if (!mig->nbd || !mig->nbd->ndisks) { + /* nothing to do here */ + return 0; + } + + for (i = 0; i < vm->def->ndisks; i++) { + virDomainDiskDefPtr disk = vm->def->disks[i]; + size_t bytes = mig->nbd->disk[mig_i].bytes; + + /* skip shared and source-free disks */ + if (!disk->src || disk->shared) + continue; + + mig_i++; + if (mig_i > mig->nbd->ndisks) { + virReportError(VIR_ERR_INTERNAL_ERROR, "%s", + _("disk count doesn't match")); + goto cleanup; + } + + VIR_DEBUG("Checking '%s' for its size (requested %zuB)", disk->src, bytes); + + if ((fd = virFileOpenAs(disk->src, O_RDWR | O_CREAT, 0660, + -1, -1, VIR_FILE_OPEN_NOFORK)) < 0) { + virReportSystemError(errno, _("Unable to create '%s'"), disk->src); + goto cleanup; + } + + if (fstat(fd, &sb) < 0) { + virReportSystemError(errno, _("Unable to stat '%s'"), disk->src); + goto cleanup; + } + + VIR_DEBUG("File '%s' is %zuB big", disk->src, sb.st_size); + if (sb.st_size < bytes && + ftruncate(fd, bytes) < 0) { + virReportSystemError(errno, _("Unable to ftruncate '%s'"), disk->src); + goto cleanup; + } + + VIR_FORCE_CLOSE(fd); + } + + ret = 0; +cleanup: + VIR_FORCE_CLOSE(fd); + /* free from migration data to prevent + * infinite sending from src to dst and back */ + VIR_FREE(mig->nbd->disk); + mig->nbd->ndisks = 0; + return ret; +} + /* Validate whether the domain is safe to migrate. If vm is NULL, * then this is being run in the v2 Prepare stage on the destination * (where we only have the target xml); if vm is provided, then this @@ -2002,6 +2156,10 @@ qemuMigrationPrepareAny(virQEMUDriverPtr driver, QEMU_MIGRATION_COOKIE_NBD))) goto cleanup; + /* pre-create all storage */ + if (qemuMigrationPreCreateStorage(vm, mig) < 0) + goto cleanup; + if (qemuMigrationJobStart(driver, vm, QEMU_ASYNC_JOB_MIGRATION_IN) < 0) goto cleanup; qemuMigrationJobSetPhase(driver, vm, QEMU_MIGRATION_PHASE_PREPARE); -- 1.8.0.2

On Fri, Jan 11, 2013 at 17:52:22 +0100, Michal Privoznik wrote:
With the new NBD storage migration approach there are several requirements that need to be met for successful use of the feature. One of them is that the file representing a disk needs to have at least the same size as on the source. Hence, we must transfer a list of [disk source, size] pairs and check on the destination that this requirement is met and/or take actions to meet it.
The commit message does not correspond to what the code is doing. You're not sending (disk source, size) pairs.
--- src/qemu/qemu_migration.c | 164 +++++++++++++++++++++++++++++++++++++++++++++- 1 file changed, 161 insertions(+), 3 deletions(-)
diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c index 1f9a880..86eb4c8 100644 --- a/src/qemu/qemu_migration.c +++ b/src/qemu/qemu_migration.c @@ -29,6 +29,9 @@ #endif #include <fcntl.h> #include <poll.h> +#include <unistd.h> +#include <sys/types.h> +#include <sys/stat.h>
#include "qemu_migration.h" #include "qemu_monitor.h" @@ -137,6 +140,18 @@ struct _qemuMigrationCookieNBD { Negative one is meant to be sent when translating from perform to finish phase to let destination know it's safe to stop NBD server.*/ + + /* The list of pairs [disk-size] (in Bytes). This is needed
What a strange pair that contains just the disk size :-P
+ * because the same disk size is one of prerequisites for NBD + * storage migration. Unfortunately, we can't rely on + * anything but disk order, since 'src' can be overwritten by + * migration hook script, device aliases are not assigned on + * dst yet (as the source files need to be created before + * qemuProcessStart). */
Hmm, can't we use disk targets (hda, vdb, ...)?
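To make that concrete, a hypothetical variant of the cookie data keyed by the stable target name; none of this is what the patch actually implements:

    size_t ndisks;
    struct {
        char *target;               /* disk->dst, e.g. "vda", stable across hosts */
        unsigned long long bytes;   /* image size */
    } *disk;

    /* ...and when formatting the cookie: */
    virBufferAsprintf(buf, "      <disk target='%s' size='%llu'/>\n",
                      mig->nbd->disk[i].target, mig->nbd->disk[i].bytes);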
+ size_t ndisks; + struct { + size_t bytes; + } *disk; };
typedef struct _qemuMigrationCookie qemuMigrationCookie; ... @@ -534,6 +565,33 @@ qemuMigrationCookieAddNBD(qemuMigrationCookiePtr mig, return -1; }
+ /* in Begin phase add info about disks */ + if (priv->job.phase == QEMU_MIGRATION_PHASE_BEGIN3 &&
I think this check is better replaced with nbdPort == 0.
+ vm->def->ndisks) { + if (VIR_ALLOC_N(mig->nbd->disk, vm->def->ndisks) < 0) { + virReportOOMError(); + return -1; + }
Yay, this is what I suggested you do in qemuMigration{Start,Stop}NBDServer :-)
+ + for (i = 0; i < vm->def->ndisks; i++) { + virDomainDiskDefPtr disk = vm->def->disks[i]; + struct stat sb; + + /* Add only non-shared disks with source */ + if (!disk->src || disk->shared) + continue; + + if (stat(disk->src, &sb) < 0) { + virReportSystemError(errno, + _("Unable to stat '%s'"), + disk->src); + return -1; + } + + mig->nbd->disk[mig->nbd->ndisks++].bytes = sb.st_size;
Unfortunately, this is wrong. As confirmed with Paolo on IRC today, the disk image on the destination needs to be fully allocated. That is, we need to transfer the logical size of the image rather than its physical size. For a 10GB QCOW image that occupies 1GB of disk space, we need to create a 10GB file (plus the size of the QCOW metadata) on the destination.
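For the record, the logical size is what qemu-img calls the virtual size; for qcow2 it lives in the image header rather than in st_size. A throwaway sketch of reading it directly (needs <unistd.h> and <string.h>; real code should use libvirt's storage file helpers instead of open-coding the header layout):

    static int
    qcow2LogicalSize(int fd, unsigned long long *bytes)
    {
        unsigned char header[32];
        size_t i;

        /* qcow2 header: magic "QFI\xfb", then version; the guest-visible
         * size is a big-endian 64-bit value at byte offset 24 */
        if (pread(fd, header, sizeof(header), 0) != (ssize_t) sizeof(header))
            return -1;

        if (memcmp(header, "QFI\xfb", 4))
            return -1;

        *bytes = 0;
        for (i = 24; i < 32; i++)
            *bytes = (*bytes << 8) | header[i];
        return 0;
    }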
+ } + } + mig->nbd->port = nbdPort; mig->flags |= QEMU_MIGRATION_COOKIE_NBD;
...
@@ -1347,6 +1439,68 @@ error: goto cleanup; }
+static int +qemuMigrationPreCreateStorage(virDomainObjPtr vm, + qemuMigrationCookiePtr mig) +{ + int ret = -1; + size_t i, mig_i = 0; + struct stat sb; + int fd = -1; + + if (!mig->nbd || !mig->nbd->ndisks) { + /* nothing to do here */ + return 0; + } + + for (i = 0; i < vm->def->ndisks; i++) { + virDomainDiskDefPtr disk = vm->def->disks[i]; + size_t bytes = mig->nbd->disk[mig_i].bytes; + + /* skip shared and source-free disks */ + if (!disk->src || disk->shared) + continue; + + mig_i++; + if (mig_i > mig->nbd->ndisks) { + virReportError(VIR_ERR_INTERNAL_ERROR, "%s", + _("disk count doesn't match")); + goto cleanup; + } + + VIR_DEBUG("Checking '%s' for its size (requested %zuB)", disk->src, bytes); + + if ((fd = virFileOpenAs(disk->src, O_RDWR | O_CREAT, 0660, + -1, -1, VIR_FILE_OPEN_NOFORK)) < 0) { + virReportSystemError(errno, _("Unable to create '%s'"), disk->src); + goto cleanup; + } + + if (fstat(fd, &sb) < 0) { + virReportSystemError(errno, _("Unable to stat '%s'"), disk->src); + goto cleanup; + } + + VIR_DEBUG("File '%s' is %zuB big", disk->src, sb.st_size); + if (sb.st_size < bytes && + ftruncate(fd, bytes) < 0) {
posix_fallocate would be a better choice; or just use the existing functions from the storage driver.
+ virReportSystemError(errno, _("Unable to ftruncate '%s'"), disk->src); + goto cleanup; + } + + VIR_FORCE_CLOSE(fd); + } + + ret = 0; +cleanup: + VIR_FORCE_CLOSE(fd); + /* free from migration data to prevent + * infinite sending from src to dst and back */ + VIR_FREE(mig->nbd->disk); + mig->nbd->ndisks = 0; + return ret; +} + /* Validate whether the domain is safe to migrate. If vm is NULL, * then this is being run in the v2 Prepare stage on the destination * (where we only have the target xml); if vm is provided, then this ...
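A minimal sketch of the posix_fallocate() variant suggested a couple of comments above; note that posix_fallocate() returns the error number directly instead of setting errno:

    if (sb.st_size < bytes) {
        int rc = posix_fallocate(fd, 0, bytes);
        if (rc != 0) {
            virReportSystemError(rc, _("Unable to allocate '%s'"),
                                 disk->src);
            goto cleanup;
        }
    }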
Jirka

At the end of migration, it is important to stop NBD server and thus release all allocated resources.
---
 src/qemu/qemu_migration.c | 28 ++++++++++++++++++++++++++++
 1 file changed, 28 insertions(+)

diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index 86eb4c8..a1372ed 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -1501,6 +1501,32 @@ cleanup:
     return ret;
 }
 
+static void
+qemuMigrationStopNBDServer(virQEMUDriverPtr driver,
+                           virDomainObjPtr vm,
+                           qemuMigrationCookiePtr mig)
+{
+    qemuDomainObjPrivatePtr priv = vm->privateData;
+
+    if (!mig->nbd)
+        return;
+
+    if (mig->nbd->port != -1)
+        VIR_WARN("This is strange. NBD port was not -1 "
+                 "when shutting NDB server down");
+
+    if (qemuDomainObjEnterMonitorAsync(driver, vm,
+                                       QEMU_ASYNC_JOB_MIGRATION_IN) < 0) {
+        VIR_WARN("Unable to enter monitor");
+        return;
+    }
+
+    if (qemuMonitorNBDServerStop(priv->mon) < 0)
+        VIR_WARN("Unable to stop NBD server");
+
+    qemuDomainObjExitMonitorWithDriver(driver, vm);
+}
+
 /* Validate whether the domain is safe to migrate. If vm is NULL,
  * then this is being run in the v2 Prepare stage on the destination
  * (where we only have the target xml); if vm is provided, then this
@@ -3891,6 +3917,8 @@ qemuMigrationFinish(virQEMUDriverPtr driver,
             VIR_WARN("unable to provide network data for relocation");
     }
 
+    qemuMigrationStopNBDServer(driver, vm, mig);
+
     if (flags & VIR_MIGRATE_PERSIST_DEST) {
         virDomainDefPtr vmdef;
         if (vm->persistent)
-- 
1.8.0.2

On Fri, Jan 11, 2013 at 17:52:23 +0100, Michal Privoznik wrote:
At the end of migration, it is important to stop NBD server and thus release all allocated resources. --- src/qemu/qemu_migration.c | 28 ++++++++++++++++++++++++++++ 1 file changed, 28 insertions(+)
diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c index 86eb4c8..a1372ed 100644 --- a/src/qemu/qemu_migration.c +++ b/src/qemu/qemu_migration.c @@ -1501,6 +1501,32 @@ cleanup: return ret; }
+static void +qemuMigrationStopNBDServer(virQEMUDriverPtr driver, + virDomainObjPtr vm, + qemuMigrationCookiePtr mig) +{ + qemuDomainObjPrivatePtr priv = vm->privateData; + + if (!mig->nbd) + return; + + if (mig->nbd->port != -1) + VIR_WARN("This is strange. NBD port was not -1 " + "when shutting NDB server down");
It's so strange that we should not even pass the -1 port to the Finish phase; it serves no purpose. However, we should remember that the NBD server was running and stop it in that case; relying on the source daemon to tell us that the NBD server was running looks quite fragile.
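One possible shape for that, sketched only to illustrate the idea; the nbdServerStarted member is invented here and does not exist in qemuDomainObjPrivate:

    static void
    qemuMigrationStopNBDServer(virQEMUDriverPtr driver,
                               virDomainObjPtr vm)
    {
        qemuDomainObjPrivatePtr priv = vm->privateData;

        /* set right after nbd-server-start succeeded in the Prepare phase */
        if (!priv->nbdServerStarted)
            return;

        if (qemuDomainObjEnterMonitorAsync(driver, vm,
                                           QEMU_ASYNC_JOB_MIGRATION_IN) < 0)
            return;

        if (qemuMonitorNBDServerStop(priv->mon) < 0)
            VIR_WARN("Unable to stop NBD server");

        qemuDomainObjExitMonitorWithDriver(driver, vm);

        priv->nbdServerStarted = false;
    }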
+ + if (qemuDomainObjEnterMonitorAsync(driver, vm, + QEMU_ASYNC_JOB_MIGRATION_IN) < 0) { + VIR_WARN("Unable to enter monitor");
The function already reports an error or logs a warning itself, so this extra warning is useless.
+ return; + } + + if (qemuMonitorNBDServerStop(priv->mon) < 0) + VIR_WARN("Unable to stop NBD server");
Shouldn't this be fatal to the migration process?
+ + qemuDomainObjExitMonitorWithDriver(driver, vm); +} + /* Validate whether the domain is safe to migrate. If vm is NULL, * then this is being run in the v2 Prepare stage on the destination * (where we only have the target xml); if vm is provided, then this @@ -3891,6 +3917,8 @@ qemuMigrationFinish(virQEMUDriverPtr driver, VIR_WARN("unable to provide network data for relocation"); }
+ qemuMigrationStopNBDServer(driver, vm, mig); + if (flags & VIR_MIGRATE_PERSIST_DEST) { virDomainDefPtr vmdef; if (vm->persistent)
BTW, is the NBD server running as a thread within qemu? That is, when we kill the qemu process in case of a failed or cancelled migration, we don't need to explicitly stop the NBD server, do we?

Jirka

On Fri, Jan 11, 2013 at 17:52:12 +0100, Michal Privoznik wrote: ...
Michal Privoznik (11): qemu: Introduce NBD_SERVER capability Introduce NBD migration cookie qemu: Introduce nbd-server-start command qemu: Introduce nbd-server-add command qemu: Introduce nbd-server-stop command qemu_migration: Introduce qemuMigrationStartNBDServer qemu_migration: Move port allocation to a separate func qemu_migration: Implement qemuMigrationStartNBDServer() qemu_migration: Implement qemuMigrationDriveMirror qemu_migration: Check size prerequisites qemu_migration: Stop NBD server at Finish phase
Overall I find this series quite hard to follow. Personally, I'd prefer to see all changes to each migration phase in one place, perhaps with the "Check size prerequisites" patch added on top.

Jirka