On 28.11.2012 04:03, li guang wrote:
On Tue, 2012-11-27 at 19:50 +0100, Michal Privoznik wrote:
> This is a stub internal API just for now. Its purpose
> in life is to start NBD server and feed it with all
> domain disks. When adding a disk to NBD server, it
> is addressed via its alias (id= param on qemu command line).
> ---
> src/qemu/qemu_driver.c | 8 +++---
> src/qemu/qemu_migration.c | 59 +++++++++++++++++++++++++++++++++++---------
> src/qemu/qemu_migration.h | 6 +++-
> 3 files changed, 55 insertions(+), 18 deletions(-)
>
> diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
> index cd59eda..7e86c33 100644
> --- a/src/qemu/qemu_migration.c
> +++ b/src/qemu/qemu_migration.c
> @@ -1074,6 +1074,29 @@ error:
> return NULL;
> }
>
> +/**
> + * qemuMigrationStartNBDServer:
> + * @driver: qemu driver
> + * @vm: domain
> + * @nbdPort: which port is NBD server listening to
> + *
> + * Starts NBD server. This is a newer method to copy
> + * storage during migration than using 'blk' and 'inc'
> + * arguments in 'migrate' monitor command.
> + * Error is reported here.
> + *
> + * Returns 0 on success, -1 otherwise.
> + */
> +static int
> +qemuMigrationStartNBDServer(struct qemud_driver *driver ATTRIBUTE_UNUSED,
> + virDomainObjPtr vm ATTRIBUTE_UNUSED,
> + int *nbdPort ATTRIBUTE_UNUSED)
> +{
> + /* do nothing for now */
> + return 0;
> +}
> +
> +
> /* Validate whether the domain is safe to migrate. If vm is NULL,
> * then this is being run in the v2 Prepare stage on the destination
> * (where we only have the target xml); if vm is provided, then this
> @@ -1575,7 +1598,8 @@ qemuMigrationPrepareAny(struct qemud_driver *driver,
> const char *dname,
> const char *dom_xml,
> const char *migrateFrom,
> - virStreamPtr st)
> + virStreamPtr st,
> + unsigned long flags)
> {
> virDomainDefPtr def = NULL;
> virDomainObjPtr vm = NULL;
> @@ -1719,9 +1743,17 @@ qemuMigrationPrepareAny(struct qemud_driver *driver,
> VIR_DEBUG("Received no lockstate");
> }
>
> - /* dummy place holder for real work */
> - nbdPort = 0;
> - cookie_flags |= QEMU_MIGRATION_COOKIE_NBD;
> + if ((flags & VIR_MIGRATE_NON_SHARED_INC ||
> + flags & VIR_MIGRATE_NON_SHARED_DISK) &&
> + mig->nbd && qemuCapsGet(priv->caps, QEMU_CAPS_NBD_SERVER)) {
> + /* both source and destination qemus support nbd-server-*
> + * commands and user requested disk copy. Use the new ones */
> + if (qemuMigrationStartNBDServer(driver, vm, &nbdPort) < 0) {
> So nbdPort is generated by qemuMigrationNextPort() (patch 08/11), not
> taken from the cookie element 'nbd/port' (patch 02/11); as a result,
> the earlier cookie baking seems rather needless.
Yes, that's correct: nbdPort is generated by that function. However, in
this patch I wanted to point out that it is qemuMigrationStartNBDServer
which decides which port the NBD server will listen on.
The flow is exactly the opposite: qemuMigrationStartNBDServer starts
the server on a port it decides. The port is returned to the caller,
which stores it in the cookie and sends it back to the source. The
source needs to know which port the data should be sent to, but it has
no information about which ports are free on the destination.
Long story short, this patch is mainly here to show the flow: an
NBDServerStart function is called (under certain circumstances). This
function allocates a port, which is then injected into the migration
cookie. The cookie is then sent from dst to src. How the function works
internally is not important for this patch.
Michal