On 25.08.2015 18:54, Dmitry Guryanov wrote:
On 08/25/2015 12:04 PM, nshirokovskiy(a)virtuozzo.com wrote:
> From: Nikolay Shirokovskiy <nshirokovskiy(a)virtuozzo.com>
>
> Migration API has a lot of options. The intention of this patch is to provide
> support for those options that can be trivially supported and to give an
> estimation of the support for the other options in this commit message.
>
> I. Supported.
>
> 1. VIR_MIGRATE_COMPRESSED. Means 'use compression when migrating domain
> memory'. It is supported, but in a quite uncommon way: vz migration demands that
> this option be set. This is because vz is hardcoded to move VM memory using
> compression. So anyone who wants to migrate a vz domain has to set this option,
> thus declaring awareness that compression is used.
>
> Why bother? Maybe just support this option and ignore it when it is not set, or
> don't support it at all, as we can't change the behaviour in this respect anyway.
> Well, I believe that this option is, first, inherent to the hypervisor
> implementation, as we have the task of moving domain memory to a different
> place, and, second, that it expresses a tradeoff between CPU and network
> resources, so some management application should choose the strategy via this
> option. If we choose the ignoring or unsupporting implementation, then the
> option acquires too vague a meaning. Let's go into more detail.
>
> First, if we ignore the situation where the option is not set, then we lead the
> user into the fallacy that the vz hypervisor doesn't use compression and thus
> has lower CPU usage. The second approach is to not support the option at all.
> The main reason not to follow this way is that 'not supported and not set' is
> indistinguishable from 'supported and not set', which again fools the user.
>
> 2. VIR_MIGRATE_LIVE. Means 'reduce domain downtime by suspending it as late
> as possible', which technically means 'migrate as much domain memory as possible
> before suspending'. Supported in the same manner as VIR_MIGRATE_COMPRESSED, as
> both vz VMs and CTs are always migrated via the live scheme.
>
> One may be fooled by the vz SDK flags of the migration API: PVMT_HOT_MIGRATION
> (aka live) and PVMT_WARM_MIGRATION (aka normal). The current implementation
> ignores these flags and always uses live migration.
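>
> For illustration only, honouring the scheme choice would look roughly like the
> sketch below. This is not what the patch does, and the exact semantics of the
> PVMT_* flags here are my assumption:
>
>     /* hypothetical sketch, not part of the patch: pick the vz SDK
>        migration scheme from the libvirt flag instead of hardcoding
>        the live one */
>     PRL_UINT32 vzflags = (flags & VIR_MIGRATE_LIVE) ?
>                          PVMT_HOT_MIGRATION : PVMT_WARM_MIGRATION;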
>
> 3. VIR_MIGRATE_PERSIST_DEST, VIR_MIGRATE_UNDEFINE_SOURCE. These two come
> together. Vz domains are always persistent, so we have to demand that
> VIR_MIGRATE_PERSIST_DEST is set and VIR_MIGRATE_UNDEFINE_SOURCE is not (the
> latter is achieved just by not supporting it).
>
> 4. VIR_MIGRATE_PAUSED. Means 'don't resume domain on destination'. This is
> trivially supported, as we have a corresponding option in vz migration.
>
> 5. VIR_MIGRATE_OFFLINE. Means 'migrate only the XML definition of a domain'. It
> is a forcing option, that is, it is ignored if the domain is running and must be
> set to migrate a stopped domain. The vz implementation follows this informal
> definition with one exception: non-shared disks will be migrated too. This
> decision is on par with the VIR_MIGRATE_NON_SHARED_DISK considerations (see the
> last part of these notes).
>
> All that said, the minimal command to migrate a vz domain looks as follows:
>
> migrate --direct $DOMAIN $STUB --migrateuri $DESTINATION --live --persistent --compressed
>
> Not good. Say you want to just migrate a domain without further
> details: you will get error messages until you add these options to the
> command line. I think there is a lack of a notion of 'default' behaviour
> in all these aspects. If we had it, we could just issue:
>
> migrate $DOMAIN $DESTINATION
>
> For vz this would give compression by default, for example, and for qemu no
> compression by default. Then we could have the flags --compressed and
> --no-compressed, and for vz the latter would give an unsupported error (see
> the sketch right below).
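>
> A hypothetical check for such a scheme could look like the following sketch.
> VIR_MIGRATE_NO_COMPRESSED does not exist in libvirt; the name is made up here
> purely to illustrate the idea:
>
>     /* hypothetical sketch: reject an explicit 'no compression'
>        request, since vz always compresses migration traffic */
>     static int
>     vzCheckCompression(unsigned int flags)
>     {
>         if (flags & VIR_MIGRATE_NO_COMPRESSED) {
>             virReportError(VIR_ERR_ARGUMENT_UNSUPPORTED, "%s",
>                            _("vz always uses compression for migration"));
>             return -1;
>         }
>
>         /* neither flag set: fall back to the hypervisor default,
>            which for vz means compression is on */
>         return 0;
>     }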
>
> II. Unsupported.
>
> 1. VIR_MIGRATE_UNSAFE. Vz disks always have 'cache=none' set (this
> is not reflected in the current version of the vz driver and will be fixed
> soon), so we don't need to support this option.
>
> 2. VIR_MIGRATE_CHANGE_PROTECTION. Unsupported, as we have no appropriate
> support from the vz SDK. Although we have locks, they are advisory and
> can't help us.
>
> 3. VIR_MIGRATE_TUNNELLED. Means 'use a libvirtd-to-libvirtd connection
> to pass hypervisor migration traffic'. Unsupported, as it is not among
> the vz hypervisor use cases.
>
> 4. p2p migration, which is exposed via the *toURI* interfaces with the
> VIR_MIGRATE_PEER2PEER flag set. It doesn't make sense
> for vz migration, as it is a variant of managed migration, which
> is qemu-specific.
>
> 5. VIR_MIGRATE_ABORT_ON_ERROR, VIR_MIGRATE_AUTO_CONVERGE,
> VIR_MIGRATE_RDMA_PIN_ALL, VIR_MIGRATE_NON_SHARED_INC,
> VIR_MIGRATE_PARAM_DEST_XML, VIR_MIGRATE_PARAM_BANDWIDTH,
> VIR_MIGRATE_PARAM_GRAPHICS_URI, VIR_MIGRATE_PARAM_LISTEN_ADDRESS,
> VIR_MIGRATE_PARAM_MIGRATE_DISKS.
> Without further discussion: they are just not use cases of the vz hypervisor.
>
> III. Undecided and thus unsupported.
>
> 6. VIR_MIGRATE_NON_SHARED_DISK. The meaning of this option is not clear to me.
> Look, if a qemu domain has a non-shared disk, then it will refuse to migrate.
> But after you specify this option, it will refuse too. You need to create an
> image file for the disk on the destination side; only after that can you
> migrate. Unexpectedly, the existence of this file is enough to migrate without
> the option too. In that case you get a domain on the destination with a disk
> image unrelated to the source one, and this in the case of live migration!
> Looks like a bug. OK, imagine this is fixed so that migration of a non-shared
> disk is only possible with actual copying of the disk to the destination. What
> do we get from this option? We get that you have to specify it if you want to
> migrate a domain with a non-shared disk, like some forcing option. Maybe that
> is a good approach, but it is incompatible with vz. Vz doesn't demand any user
> awareness of the migration of non-shared disks. And in this case the
> incompatibility cannot be resolved as easily as for the 'compressed' option,
> because this option depends on classifying disks as shared/non-shared, which is
> done inside vz.
> vz: implement misc migration options
>
> Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy(a)virtuozzo.com>
> ---
> src/vz/vz_driver.c | 128 +++++++++++++++++++++++++++++++++++++++++++++++++++-
> src/vz/vz_sdk.c | 8 +++-
> src/vz/vz_sdk.h | 3 +-
> 3 files changed, 134 insertions(+), 5 deletions(-)
>
> diff --git a/src/vz/vz_driver.c b/src/vz/vz_driver.c
> index dc26b09..b12592b 100644
> --- a/src/vz/vz_driver.c
> +++ b/src/vz/vz_driver.c
> @@ -1465,7 +1465,120 @@ vzMakeVzUri(const char *connuri_str)
> return vzuri;
> }
> -#define VZ_MIGRATION_FLAGS (0)
> +virURIPtr
> +vzParseVzURI(const char *uri_str)
> +{
> + virURIPtr uri = NULL;
> + int ret = -1;
> +
> + if (!(uri = virURIParse(uri_str)))
> + goto cleanup;
> +
> + if (uri->scheme == NULL || uri->server == NULL) {
> + virReportError(VIR_ERR_INVALID_ARG,
> +                       _("scheme and host are mandatory in vz migration URI: %s"),
> + uri_str);
> + goto cleanup;
> + }
> +
> + if (uri->user != NULL || uri->path != NULL ||
> + uri->query != NULL || uri->fragment != NULL) {
> + virReportError(VIR_ERR_INVALID_ARG,
> + _("only scheme, host and port are supported in "
> + "vz migration URI: %s"), uri_str);
> + goto cleanup;
> + }
> +
> + if (STRNEQ(uri->scheme, "tcp")) {
> + virReportError(VIR_ERR_ARGUMENT_UNSUPPORTED,
> + _("unsupported scheme %s in migration URI %s"),
> + uri->scheme, uri_str);
> + goto cleanup;
> + }
> +
Try URI vz+ssh://hostname:22/system, there is some bug, prlsdk returns an error after
migration:

2015-08-25 15:36:18.923+0000: 92650: error : virDomainGetJobInfo:8889 : this function is
not supported by the connection driver: virDomainGetJobInfo
2015-08-25 15:36:19.169+0000: 92649: error : prlsdkMigrate:4089 : internal error: Unable
to connect to the server "10.27.255.18". An invalid response has been received
from the server. Make sure that Parallels Server is running on the server
"10.27.255.18" and is up to date and try again.

It is OK. The URI you pass with --migrateuri is the hypervisor connection URI, and
the port should be the appropriate one, not the ssh port.
> + ret = 0;
> +
> + cleanup:
> + if (ret < 0) {
> + virURIFree(uri);
> + uri = NULL;
> + }
> +
> + return uri;
> +}
> +
> +#define VZ_MIGRATION_FLAGS \
> + (VIR_MIGRATE_OFFLINE | \
> + VIR_MIGRATE_LIVE | \
> + VIR_MIGRATE_COMPRESSED | \
> + VIR_MIGRATE_PERSIST_DEST | \
> + VIR_MIGRATE_PAUSED)
> +
> +/* TODO: this code should be in a common place, as these
> +   rules follow from the options' (informal) definitions.
> +   Qemu makes some of these checks in the begin phase, but not all. */
> +int vzCheckOfflineFlags(int flags)
> +{
> + if (!(flags & VIR_MIGRATE_OFFLINE)) {
> + virReportError(VIR_ERR_OPERATION_INVALID,
> + "%s", _("domain is not running"));
> + return -1;
> + }
> +
> + if (flags & VIR_MIGRATE_LIVE) {
> + virReportError(VIR_ERR_OPERATION_INVALID, "%s",
> + _("live offline migration does not "
> + "make sense"));
> + return -1;
> + }
> +
> + if (flags & VIR_MIGRATE_COMPRESSED) {
> + virReportError(VIR_ERR_OPERATION_INVALID, "%s",
> + _("compressed offline migration does not "
> + "make sense"));
> + return -1;
> + }
> +
> + if (flags & VIR_MIGRATE_PAUSED) {
> + virReportError(VIR_ERR_OPERATION_INVALID, "%s",
> + _("paused offline migration does not "
> + "make sense"));
> + return -1;
> + }
> +
> + return 0;
> +}
> +
> +int vzCheckCommonFlags(int flags)
> +{
> + if (!(flags & VIR_MIGRATE_PERSIST_DEST)) {
> + virReportError(VIR_ERR_INVALID_ARG, "%s",
> +                       _("flag VIR_MIGRATE_PERSIST_DEST must be set"
> + " for vz migration"));
> + return -1;
> + }
> +
> + return 0;
> +}
> +
> +int vzCheckOnlineFlags(int flags)
> +{
> + if (!(flags & VIR_MIGRATE_LIVE)) {
> + virReportError(VIR_ERR_INVALID_ARG, "%s",
> +                       _("flag VIR_MIGRATE_LIVE must be set"
> + " for online vz migration"));
> + return -1;
> + }
> +
> + if (!(flags & VIR_MIGRATE_COMPRESSED)) {
> + virReportError(VIR_ERR_INVALID_ARG, "%s",
> +                       _("flag VIR_MIGRATE_COMPRESSED must be set"
> + " for online vz migration"));
> + return -1;
> + }
> +
> + return 0;
> +}
> static int
> vzDomainMigratePerform3(virDomainPtr domain,
> @@ -1491,12 +1604,23 @@ vzDomainMigratePerform3(virDomainPtr domain,
> virCheckFlags(VZ_MIGRATION_FLAGS, -1);
> + if (vzCheckCommonFlags(flags))
> + goto cleanup;
> +
> if (!(vzuri = vzMakeVzUri(uri)))
> goto cleanup;
> if (!(dom = vzDomObjFromDomain(domain)))
> goto cleanup;
> + if (virDomainObjIsActive(dom))
> + ret = vzCheckOnlineFlags(flags);
> + else
> + ret = vzCheckOfflineFlags(flags);
> +
> + if (ret < 0)
> + goto cleanup;
> +
> dconn = virConnectOpen(uri);
> if (dconn == NULL) {
> virReportError(VIR_ERR_OPERATION_FAILED,
> @@ -1513,7 +1637,7 @@ vzDomainMigratePerform3(virDomainPtr domain,
> if (vzParseCookie(cookie, session_uuid) < 0)
> goto cleanup;
> - if (prlsdkMigrate(dom, vzuri, session_uuid, dname) < 0)
> + if (prlsdkMigrate(dom, vzuri, session_uuid, dname, flags) < 0)
> goto cleanup;
> virDomainObjListRemove(privconn->domains, dom);
> diff --git a/src/vz/vz_sdk.c b/src/vz/vz_sdk.c
> index 89a2429..9a2b5df 100644
> --- a/src/vz/vz_sdk.c
> +++ b/src/vz/vz_sdk.c
> @@ -4065,18 +4065,22 @@ prlsdkGetMemoryStats(virDomainObjPtr dom,
> int prlsdkMigrate(virDomainObjPtr dom, virURIPtr uri,
> const unsigned char *session_uuid,
> - const char *dname)
> + const char *dname, unsigned int flags)
> {
> int ret = -1;
> vzDomObjPtr privdom = dom->privateData;
> PRL_HANDLE job = PRL_INVALID_HANDLE;
> char uuidstr[VIR_UUID_STRING_BUFLEN + 2];
> + PRL_UINT32 vzflags = PRLSDK_MIGRATION_FLAGS;
> +
> + if (flags & VIR_MIGRATE_PAUSED)
> + vzflags |= PVMT_DONT_RESUME_VM;
> prlsdkUUIDFormat(session_uuid, uuidstr);
> job = PrlVm_MigrateWithRenameEx(privdom->sdkdom, uri->server, uri->port, uuidstr,
> dname == NULL ? "" : dname,
> "", /* use default dir for migrated
instance bundle */
> - PRLSDK_MIGRATION_FLAGS,
> + vzflags,
> 0, /* reserved flags */
> PRL_TRUE /* don't ask for confirmations */
> );
> diff --git a/src/vz/vz_sdk.h b/src/vz/vz_sdk.h
> index 0aa70b3..5b26b70 100644
> --- a/src/vz/vz_sdk.h
> +++ b/src/vz/vz_sdk.h
> @@ -80,4 +80,5 @@ int
> prlsdkMigrate(virDomainObjPtr dom,
> virURIPtr uri,
> const char unsigned *session_uuid,
> - const char *dname);
> + const char *dname,
> + unsigned int flags);