[libvirt] [PATCH v3 0/5] vz: add migration support

NOTE that the minimal command to migrate a vz domain looks like this:

virsh -c vz:///system migrate --direct 200 shiny0 --migrateuri vz+ssh://shiny0/system --live --persistent --compressed

== Difference from v1:

1. The patch is quite different. The first patchset implemented migration through the managed migration scheme; this one goes through the p2p scheme, which I believe is a better approach. Vz migration is done via the vz sdk, and the first patchset used the 5-phased migration only to get a token from the destination in the prepare phase, which is something of a misuse. This patch just adds a vz-specific function to the driver interface to achieve the same goal.

2. Offline migration is supported, as there is no longer a dependency on the current flow of the managed migration scheme.

== Difference from v2:

1. Implemented through direct migration instead of p2p. p2p is just a managed 5-staged migration where the management is done on the daemon side. The vz migration stages are all hidden in the vz sdk, so it is more consistent to use the direct scheme.

2. Use an existing driver function for the prepare migration phase to pass the session uuid from destination to source, instead of introducing a new one. As vz migration is direct, we will not use the prepare phase function in a straightforward manner anyway.

 src/libvirt-domain.c |    3 +-
 src/vz/vz_driver.c   |  315 ++++++++++++++++++++++++++++++++++++++++++++++++++
 src/vz/vz_sdk.c      |   86 ++++++++++++---
 src/vz/vz_sdk.h      |    6 +
 src/vz/vz_utils.h    |    4 +-
 5 files changed, 398 insertions(+), 16 deletions(-)

From: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>

This session uuid acts as an authN token for various multihost vz operations, one of which is migration. Unfortunately, we can't get it from the server at an arbitrary time, so we need to save it at login.

Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
---
 src/vz/vz_sdk.c   |   39 +++++++++++++++++++++++++++++----------
 src/vz/vz_utils.h |    2 +-
 2 files changed, 30 insertions(+), 11 deletions(-)

diff --git a/src/vz/vz_sdk.c b/src/vz/vz_sdk.c
index 744b58a..f7253de 100644
--- a/src/vz/vz_sdk.c
+++ b/src/vz/vz_sdk.c
@@ -37,6 +37,9 @@
 #define VIR_FROM_THIS VIR_FROM_PARALLELS
 #define JOB_INFINIT_WAIT_TIMEOUT UINT_MAX
 
+static int
+prlsdkUUIDParse(const char *uuidstr, unsigned char *uuid);
+
 VIR_LOG_INIT("parallels.sdk");
 
 /*
@@ -228,24 +231,40 @@ prlsdkDeinit(void)
 int
 prlsdkConnect(vzConnPtr privconn)
 {
-    PRL_RESULT ret;
+    int ret = -1;
+    PRL_RESULT pret;
     PRL_HANDLE job = PRL_INVALID_HANDLE;
+    PRL_HANDLE result = PRL_INVALID_HANDLE;
+    PRL_HANDLE response = PRL_INVALID_HANDLE;
+    char session_uuid[VIR_UUID_STRING_BUFLEN + 2];
+    PRL_UINT32 buflen = ARRAY_CARDINALITY(session_uuid);
 
-    ret = PrlSrv_Create(&privconn->server);
-    if (PRL_FAILED(ret)) {
-        logPrlError(ret);
-        return -1;
-    }
+    pret = PrlSrv_Create(&privconn->server);
+    prlsdkCheckRetGoto(pret, cleanup);
 
     job = PrlSrv_LoginLocalEx(privconn->server, NULL, 0,
                               PSL_HIGH_SECURITY, PACF_NON_INTERACTIVE_MODE);
+    if (PRL_FAILED(getJobResult(job, &result)))
+        goto cleanup;
 
-    if (waitJob(job)) {
+    pret = PrlResult_GetParam(result, &response);
+    prlsdkCheckRetGoto(pret, cleanup);
+
+    pret = PrlLoginResponse_GetSessionUuid(response, session_uuid, &buflen);
+    prlsdkCheckRetGoto(pret, cleanup);
+
+    if (prlsdkUUIDParse(session_uuid, privconn->session_uuid) < 0)
+        goto cleanup;
+
+    ret = 0;
+
+ cleanup:
+    if (ret < 0)
         PrlHandle_Free(privconn->server);
-        return -1;
-    }
+    PrlHandle_Free(result);
+    PrlHandle_Free(response);
 
-    return 0;
+    return ret;
 }
 
 void

diff --git a/src/vz/vz_utils.h b/src/vz/vz_utils.h
index db09647..fe54b25 100644
--- a/src/vz/vz_utils.h
+++ b/src/vz/vz_utils.h
@@ -60,7 +60,7 @@ struct _vzConn {
 
     /* Immutable pointer, self-locking APIs */
     virDomainObjListPtr domains;
-
+    unsigned char session_uuid[VIR_UUID_BUFLEN];
     PRL_HANDLE server;
     virStoragePoolObjList pools;
     virNetworkObjListPtr networks;
-- 
1.7.1

On 08/25/2015 12:04 PM, nshirokovskiy@virtuozzo.com wrote:
From: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
This session uuid acts as an authN token for various multihost vz operations, one of which is migration. Unfortunately, we can't get it from the server at an arbitrary time, so we need to save it at login.
ACK
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com> --- src/vz/vz_sdk.c | 39 +++++++++++++++++++++++++++++---------- src/vz/vz_utils.h | 2 +- 2 files changed, 30 insertions(+), 11 deletions(-)
diff --git a/src/vz/vz_sdk.c b/src/vz/vz_sdk.c index 744b58a..f7253de 100644 --- a/src/vz/vz_sdk.c +++ b/src/vz/vz_sdk.c @@ -37,6 +37,9 @@ #define VIR_FROM_THIS VIR_FROM_PARALLELS #define JOB_INFINIT_WAIT_TIMEOUT UINT_MAX
+static int +prlsdkUUIDParse(const char *uuidstr, unsigned char *uuid); + VIR_LOG_INIT("parallels.sdk");
/* @@ -228,24 +231,40 @@ prlsdkDeinit(void) int prlsdkConnect(vzConnPtr privconn) { - PRL_RESULT ret; + int ret = -1; + PRL_RESULT pret; PRL_HANDLE job = PRL_INVALID_HANDLE; + PRL_HANDLE result = PRL_INVALID_HANDLE; + PRL_HANDLE response = PRL_INVALID_HANDLE; + char session_uuid[VIR_UUID_STRING_BUFLEN + 2]; + PRL_UINT32 buflen = ARRAY_CARDINALITY(session_uuid);
- ret = PrlSrv_Create(&privconn->server); - if (PRL_FAILED(ret)) { - logPrlError(ret); - return -1; - } + pret = PrlSrv_Create(&privconn->server); + prlsdkCheckRetGoto(pret, cleanup);
job = PrlSrv_LoginLocalEx(privconn->server, NULL, 0, PSL_HIGH_SECURITY, PACF_NON_INTERACTIVE_MODE); + if (PRL_FAILED(getJobResult(job, &result))) + goto cleanup;
- if (waitJob(job)) { + pret = PrlResult_GetParam(result, &response); + prlsdkCheckRetGoto(pret, cleanup); + + pret = PrlLoginResponse_GetSessionUuid(response, session_uuid, &buflen); + prlsdkCheckRetGoto(pret, cleanup); + + if (prlsdkUUIDParse(session_uuid, privconn->session_uuid) < 0) + goto cleanup; + + ret = 0; + + cleanup: + if (ret < 0) PrlHandle_Free(privconn->server); - return -1; - } + PrlHandle_Free(result); + PrlHandle_Free(response);
- return 0; + return ret; }
void diff --git a/src/vz/vz_utils.h b/src/vz/vz_utils.h index db09647..fe54b25 100644 --- a/src/vz/vz_utils.h +++ b/src/vz/vz_utils.h @@ -60,7 +60,7 @@ struct _vzConn {
/* Immutable pointer, self-locking APIs */ virDomainObjListPtr domains; - + unsigned char session_uuid[VIR_UUID_BUFLEN]; PRL_HANDLE server; virStoragePoolObjList pools; virNetworkObjListPtr networks;

From: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>

This patch makes basic vz migration possible. For example, via virsh:

virsh -c vz:///system migrate --direct $NAME $STUB vz+ssh://$DST/system

$STUB can be anything: it is a required virsh argument, but it is not used in direct migration.

Vz migration is implemented as direct migration. The reason is that the vz sdk does all the job. The prepare phase function is used to pass the session uuid from destination to source so we don't introduce a new rpc call.

Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
---
 src/libvirt-domain.c |    3 +-
 src/vz/vz_driver.c   |  193 ++++++++++++++++++++++++++++++++++++++++++++++++++
 src/vz/vz_sdk.c      |   33 +++++++++
 src/vz/vz_sdk.h      |    2 +
 4 files changed, 230 insertions(+), 1 deletions(-)

diff --git a/src/libvirt-domain.c b/src/libvirt-domain.c
index cbf08fc..8577edd 100644
--- a/src/libvirt-domain.c
+++ b/src/libvirt-domain.c
@@ -3425,7 +3425,8 @@ virDomainMigrateDirect(virDomainPtr domain,
                      NULLSTR(xmlin), flags,
                      NULLSTR(dname), NULLSTR(uri), bandwidth);
 
-    if (!domain->conn->driver->domainMigratePerform) {
+    if (!domain->conn->driver->domainMigratePerform &&
+        !domain->conn->driver->domainMigratePerform3) {
         virReportUnsupportedError();
         return -1;
     }

diff --git a/src/vz/vz_driver.c b/src/vz/vz_driver.c
index 8fa7957..f82fff8 100644
--- a/src/vz/vz_driver.c
+++ b/src/vz/vz_driver.c
@@ -1343,6 +1343,196 @@ vzDomainMemoryStats(virDomainPtr domain,
     return ret;
 }
 
+static char*
+vzFormatCookie(const unsigned char *session_uuid)
+{
+    char uuidstr[VIR_UUID_STRING_BUFLEN];
+    virBuffer buf = VIR_BUFFER_INITIALIZER;
+
+    virBufferAddLit(&buf, "<vz-migration1>\n");
+    virUUIDFormat(session_uuid, uuidstr);
+    virBufferAsprintf(&buf, "<session_uuid>%s</session_uuid>\n", uuidstr);
+    virBufferAddLit(&buf, "</vz-migration1>\n");
+
+    if (virBufferCheckError(&buf) < 0)
+        return NULL;
+
+    return virBufferContentAndReset(&buf);
+}
+
+static int
+vzParseCookie(const char *xml, unsigned char *session_uuid)
+{
+    xmlDocPtr doc = NULL;
+    xmlXPathContextPtr ctx = NULL;
+    char *tmp = NULL;
+    int ret = -1;
+
+    if (!(doc = virXMLParseStringCtxt(xml, _("(_migration_cookie)"), &ctx)))
+        goto cleanup;
+
+    if (!(tmp = virXPathString("string(./session_uuid[1])", ctx))) {
+        virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+                       _("missing session_uuid element in migration data"));
+        goto cleanup;
+    }
+    if (virUUIDParse(tmp, session_uuid) < 0) {
+        virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+                       _("malformed session_uuid element in migration data"));
+        goto cleanup;
+    }
+    ret = 0;
+
+ cleanup:
+    xmlXPathFreeContext(ctx);
+    xmlFreeDoc(doc);
+    VIR_FREE(tmp);
+
+    return ret;
+}
+
+static int
+vzDomainMigratePrepare3(virConnectPtr conn,
+                        const char *cookiein ATTRIBUTE_UNUSED,
+                        int cookieinlen ATTRIBUTE_UNUSED,
+                        char **cookieout,
+                        int *cookieoutlen,
+                        const char *uri_in ATTRIBUTE_UNUSED,
+                        char **uri_out ATTRIBUTE_UNUSED,
+                        unsigned long flags,
+                        const char *dname ATTRIBUTE_UNUSED,
+                        unsigned long resource ATTRIBUTE_UNUSED,
+                        const char *dom_xml ATTRIBUTE_UNUSED)
+{
+    vzConnPtr privconn = conn->privateData;
+    int ret = -1;
+
+    virCheckFlags(0, -1);
+
+    if (!(*cookieout = vzFormatCookie(privconn->session_uuid)))
+        goto cleanup;
+    *cookieoutlen = strlen(*cookieout) + 1;
+    ret = 0;
+
+ cleanup:
+    if (ret != 0) {
+        VIR_FREE(*cookieout);
+        *cookieoutlen = 0;
+    }
+
+    return ret;
+}
+
+static int
+vzConnectSupportsFeature(virConnectPtr conn ATTRIBUTE_UNUSED, int feature)
+{
+    switch (feature) {
+    case VIR_DRV_FEATURE_MIGRATION_V3:
+    case VIR_DRV_FEATURE_MIGRATION_DIRECT:
+        return 1;
+    default:
+        return 0;
+    }
+}
+
+static virURIPtr
+vzMakeVzUri(const char *connuri_str)
+{
+    virURIPtr connuri = NULL;
+    virURIPtr vzuri = NULL;
+    int ret = -1;
+
+    if (!(connuri = virURIParse(connuri_str)))
+        goto cleanup;
+
+    if (VIR_ALLOC(vzuri) < 0)
+        goto cleanup;
+    memset(vzuri, 0, sizeof(*vzuri));
+
+    if (VIR_STRDUP(vzuri->server, connuri->server) < 0)
+        goto cleanup;
+    vzuri->port = connuri->port;
+    ret = 0;
+
+ cleanup:
+
+    virURIFree(connuri);
+    if (ret < 0) {
+        virURIFree(vzuri);
+        vzuri = NULL;
+    }
+
+    return vzuri;
+}
+
+#define VZ_MIGRATION_FLAGS (0)
+
+#define VZ_MIGRATION_PARAMETERS (NULL)
+
+static int
+vzDomainMigratePerform3(virDomainPtr domain,
+                        const char *xmlin ATTRIBUTE_UNUSED,
+                        const char *cookiein ATTRIBUTE_UNUSED,
+                        int cookieinlen ATTRIBUTE_UNUSED,
+                        char **cookieout ATTRIBUTE_UNUSED,
+                        int *cookieoutlen ATTRIBUTE_UNUSED,
+                        const char *dconnuri ATTRIBUTE_UNUSED,
+                        const char *uri,
+                        unsigned long flags,
+                        const char *dname ATTRIBUTE_UNUSED,
+                        unsigned long bandwidth ATTRIBUTE_UNUSED)
+{
+    int ret = -1;
+    virDomainObjPtr dom = NULL;
+    virConnectPtr dconn = NULL;
+    virURIPtr vzuri = NULL;
+    unsigned char session_uuid[VIR_UUID_BUFLEN];
+    vzConnPtr privconn = domain->conn->privateData;
+    char *cookie = NULL;
+    int cookielen = 0;
+
+    virCheckFlags(VZ_MIGRATION_FLAGS, -1);
+
+    if (!(vzuri = vzMakeVzUri(uri)))
+        goto cleanup;
+
+    if (!(dom = vzDomObjFromDomain(domain)))
+        goto cleanup;
+
+    dconn = virConnectOpen(uri);
+    if (dconn == NULL) {
+        virReportError(VIR_ERR_OPERATION_FAILED,
+                       _("Failed to connect to remote libvirt URI %s: %s"),
+                       uri, virGetLastErrorMessage());
+        goto cleanup;
+    }
+
+    /* NULL and zero elements are unused */
+    /* domxml is passed as "" or otherwise we will fail at rpc call */
+    if (virDomainMigratePrepare3(dconn, NULL, 0, &cookie, &cookielen,
+                                 NULL, NULL, 0, NULL, 0, "") < 0)
+        goto cleanup;
+
+    if (vzParseCookie(cookie, session_uuid) < 0)
+        goto cleanup;
+
+    if (prlsdkMigrate(dom, vzuri, session_uuid) < 0)
+        goto cleanup;
+
+    virDomainObjListRemove(privconn->domains, dom);
+    dom = NULL;
+
+    ret = 0;
+
+ cleanup:
+    if (dom)
+        virObjectUnlock(dom);
+    virObjectUnref(dconn);
+    virURIFree(vzuri);
+    VIR_FREE(cookie);
+
+    return ret;
+}
+
 static virHypervisorDriver vzDriver = {
     .name = "vz",
     .connectOpen = vzConnectOpen, /* 0.10.0 */
@@ -1396,6 +1586,9 @@ static virHypervisorDriver vzDriver = {
     .domainBlockStatsFlags = vzDomainBlockStatsFlags, /* 1.2.17 */
     .domainInterfaceStats = vzDomainInterfaceStats, /* 1.2.17 */
     .domainMemoryStats = vzDomainMemoryStats, /* 1.2.17 */
+    .connectSupportsFeature = vzConnectSupportsFeature, /* 1.2.19 */
+    .domainMigratePrepare3 = vzDomainMigratePrepare3, /* 1.2.19 */
+    .domainMigratePerform3 = vzDomainMigratePerform3, /* 1.2.19 */
 };
 
 static virConnectDriver vzConnectDriver = {

diff --git a/src/vz/vz_sdk.c b/src/vz/vz_sdk.c
index f7253de..783438d 100644
--- a/src/vz/vz_sdk.c
+++ b/src/vz/vz_sdk.c
@@ -4054,3 +4054,36 @@ prlsdkGetMemoryStats(virDomainObjPtr dom,
 
     return ret;
 }
+
+/* high security is default choice for 2 reasons:
+   1. as this is the highest set security we can't get
+      reject from server with high security settings
+   2. this is on par with security level of driver
+      connection to dispatcher */
+
+#define PRLSDK_MIGRATION_FLAGS (PSL_HIGH_SECURITY)
+
+int prlsdkMigrate(virDomainObjPtr dom, virURIPtr uri,
+                  const unsigned char *session_uuid)
+{
+    int ret = -1;
+    vzDomObjPtr privdom = dom->privateData;
+    PRL_HANDLE job = PRL_INVALID_HANDLE;
+    char uuidstr[VIR_UUID_STRING_BUFLEN + 2];
+
+    prlsdkUUIDFormat(session_uuid, uuidstr);
+    job = PrlVm_MigrateEx(privdom->sdkdom, uri->server, uri->port, uuidstr,
+                          "", /* use default dir for migrated instance bundle */
+                          PRLSDK_MIGRATION_FLAGS,
+                          0, /* reserved flags */
+                          PRL_TRUE /* don't ask for confirmations */
+                          );
+
+    if (PRL_FAILED(waitJob(job)))
+        goto cleanup;
+
+    ret = 0;
+
+ cleanup:
+    return ret;
+}

diff --git a/src/vz/vz_sdk.h b/src/vz/vz_sdk.h
index ebe4591..d3f0caf 100644
--- a/src/vz/vz_sdk.h
+++ b/src/vz/vz_sdk.h
@@ -76,3 +76,5 @@ int prlsdkGetVcpuStats(virDomainObjPtr dom, int idx, unsigned long long *time);
 int prlsdkGetMemoryStats(virDomainObjPtr dom,
                          virDomainMemoryStatPtr stats,
                          unsigned int nr_stats);
+int
+prlsdkMigrate(virDomainObjPtr dom, virURIPtr uri, const char unsigned *session_uuid);
-- 
1.7.1

On 08/25/2015 12:04 PM, nshirokovskiy@virtuozzo.com wrote:
From: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
This patch makes basic vz migration possible. For example, via virsh:

virsh -c vz:///system migrate --direct $NAME $STUB vz+ssh://$DST/system

$STUB can be anything: it is a required virsh argument, but it is not used in direct migration.

Vz migration is implemented as direct migration. The reason is that the vz sdk does all the job. The prepare phase function is used to pass the session uuid from destination to source so we don't introduce a new rpc call.
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com> --- src/libvirt-domain.c | 3 +- src/vz/vz_driver.c | 193 ++++++++++++++++++++++++++++++++++++++++++++++++++ src/vz/vz_sdk.c | 33 +++++++++ src/vz/vz_sdk.h | 2 + 4 files changed, 230 insertions(+), 1 deletions(-)
diff --git a/src/libvirt-domain.c b/src/libvirt-domain.c index cbf08fc..8577edd 100644 --- a/src/libvirt-domain.c +++ b/src/libvirt-domain.c @@ -3425,7 +3425,8 @@ virDomainMigrateDirect(virDomainPtr domain, NULLSTR(xmlin), flags, NULLSTR(dname), NULLSTR(uri), bandwidth);
- if (!domain->conn->driver->domainMigratePerform) { + if (!domain->conn->driver->domainMigratePerform && + !domain->conn->driver->domainMigratePerform3) { virReportUnsupportedError(); return -1; }
Could you please send this change as a separate patch? It is a sort of bugfix to common libvirt code and is not related to our migration.
diff --git a/src/vz/vz_driver.c b/src/vz/vz_driver.c index 8fa7957..f82fff8 100644 --- a/src/vz/vz_driver.c +++ b/src/vz/vz_driver.c @@ -1343,6 +1343,196 @@ vzDomainMemoryStats(virDomainPtr domain, return ret; }
+static char* +vzFormatCookie(const unsigned char *session_uuid) +{ + char uuidstr[VIR_UUID_STRING_BUFLEN]; + virBuffer buf = VIR_BUFFER_INITIALIZER; + + virBufferAddLit(&buf, "<vz-migration1>\n"); + virUUIDFormat(session_uuid, uuidstr); + virBufferAsprintf(&buf, "<session_uuid>%s</session_uuid>\n", uuidstr); + virBufferAddLit(&buf, "</vz-migration1>\n");
.....
+ +static virURIPtr +vzMakeVzUri(const char *connuri_str) +{ + virURIPtr connuri = NULL; + virURIPtr vzuri = NULL; + int ret = -1; + + if (!(connuri = virURIParse(connuri_str))) + goto cleanup; + + if (VIR_ALLOC(vzuri) < 0) + goto cleanup; + memset(vzuri, 0, sizeof(*vzuri)); + + if (VIR_STRDUP(vzuri->server, connuri->server) < 0) + goto cleanup; + vzuri->port = connuri->port; + ret = 0; +
This is generally not correct, since you are passing the port of libvirt's connection to the vz connection. It only works if you don't specify a port in the URI: the port is 0 in that case, and libprlsdk accepts it.
+ cleanup:
+
+    virURIFree(connuri);
+    if (ret < 0) {
+        virURIFree(vzuri);
+        vzuri = NULL;
+    }
+
+    return vzuri;
+}
+
+#define VZ_MIGRATION_FLAGS (0)
+
+#define VZ_MIGRATION_PARAMETERS (NULL)
+
+static int
+vzDomainMigratePerform3(virDomainPtr domain,
+                        const char *xmlin ATTRIBUTE_UNUSED,
+                        const char *cookiein ATTRIBUTE_UNUSED,
+                        int cookieinlen ATTRIBUTE_UNUSED,
+                        char **cookieout ATTRIBUTE_UNUSED,
+                        int *cookieoutlen ATTRIBUTE_UNUSED,
+                        const char *dconnuri ATTRIBUTE_UNUSED,
+                        const char *uri,
+                        unsigned long flags,
+                        const char *dname ATTRIBUTE_UNUSED,
+                        unsigned long bandwidth ATTRIBUTE_UNUSED)
+{
+    int ret = -1;
+    virDomainObjPtr dom = NULL;
+    virConnectPtr dconn = NULL;
+    virURIPtr vzuri = NULL;
+    unsigned char session_uuid[VIR_UUID_BUFLEN];
+    vzConnPtr privconn = domain->conn->privateData;
+    char *cookie = NULL;
+    int cookielen = 0;
+
+    virCheckFlags(VZ_MIGRATION_FLAGS, -1);
+
+    if (!(vzuri = vzMakeVzUri(uri)))
+        goto cleanup;
+
+    if (!(dom = vzDomObjFromDomain(domain)))
+        goto cleanup;
+
+    dconn = virConnectOpen(uri);
+    if (dconn == NULL) {
+        virReportError(VIR_ERR_OPERATION_FAILED,
+                       _("Failed to connect to remote libvirt URI %s: %s"),
+                       uri, virGetLastErrorMessage());
+        goto cleanup;
+    }
+
+    /* NULL and zero elements are unused */
+    /* domxml is passed as "" or otherwise we will fail at rpc call */
+    if (virDomainMigratePrepare3(dconn, NULL, 0, &cookie, &cookielen, NULL, NULL, 0, NULL, 0, "") < 0)

Could you split this line?
+ goto cleanup; + + if (vzParseCookie(cookie, session_uuid) < 0) + goto cleanup; + + if (prlsdkMigrate(dom, vzuri, session_uuid) < 0) + goto cleanup; + + virDomainObjListRemove(privconn->domains, dom); + dom = NULL; + + ret = 0; + + cleanup: + if (dom) + virObjectUnlock(dom); + virObjectUnref(dconn); + virURIFree(vzuri); + VIR_FREE(cookie); + + return ret; +} + static virHypervisorDriver vzDriver = { .name = "vz", .connectOpen = vzConnectOpen, /* 0.10.0 */ @@ -1396,6 +1586,9 @@ static virHypervisorDriver vzDriver = { .domainBlockStatsFlags = vzDomainBlockStatsFlags, /* 1.2.17 */ .domainInterfaceStats = vzDomainInterfaceStats, /* 1.2.17 */ .domainMemoryStats = vzDomainMemoryStats, /* 1.2.17 */ + .connectSupportsFeature = vzConnectSupportsFeature, /* 1.2.19 */ + .domainMigratePrepare3 = vzDomainMigratePrepare3, /* 1.2.19 */ + .domainMigratePerform3 = vzDomainMigratePerform3, /* 1.2.19 */ };
static virConnectDriver vzConnectDriver = { diff --git a/src/vz/vz_sdk.c b/src/vz/vz_sdk.c index f7253de..783438d 100644 --- a/src/vz/vz_sdk.c +++ b/src/vz/vz_sdk.c @@ -4054,3 +4054,36 @@ prlsdkGetMemoryStats(virDomainObjPtr dom,
return ret; } + +/* high security is default choice for 2 reasons: + 1. as this is the highest set security we can't get + reject from server with high security settings + 2. this is on par with security level of driver + connection to dispatcher */ + +#define PRLSDK_MIGRATION_FLAGS (PSL_HIGH_SECURITY) + +int prlsdkMigrate(virDomainObjPtr dom, virURIPtr uri, + const unsigned char *session_uuid) +{ + int ret = -1; + vzDomObjPtr privdom = dom->privateData; + PRL_HANDLE job = PRL_INVALID_HANDLE; + char uuidstr[VIR_UUID_STRING_BUFLEN + 2]; + + prlsdkUUIDFormat(session_uuid, uuidstr); + job = PrlVm_MigrateEx(privdom->sdkdom, uri->server, uri->port, uuidstr, + "", /* use default dir for migrated instance bundle */ + PRLSDK_MIGRATION_FLAGS, + 0, /* reserved flags */ + PRL_TRUE /* don't ask for confirmations */ + ); + + if (PRL_FAILED(waitJob(job))) + goto cleanup; + + ret = 0; + + cleanup: + return ret; +} diff --git a/src/vz/vz_sdk.h b/src/vz/vz_sdk.h index ebe4591..d3f0caf 100644 --- a/src/vz/vz_sdk.h +++ b/src/vz/vz_sdk.h @@ -76,3 +76,5 @@ int prlsdkGetVcpuStats(virDomainObjPtr dom, int idx, unsigned long long *time); int prlsdkGetMemoryStats(virDomainObjPtr dom, virDomainMemoryStatPtr stats, unsigned int nr_stats); +int +prlsdkMigrate(virDomainObjPtr dom, virURIPtr uri, const char unsigned *session_uuid);

On 25.08.2015 18:42, Dmitry Guryanov wrote:
On 08/25/2015 12:04 PM, nshirokovskiy@virtuozzo.com wrote:
From: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
This patch makes basic vz migration possible. For example, via virsh:

virsh -c vz:///system migrate --direct $NAME $STUB vz+ssh://$DST/system

$STUB can be anything: it is a required virsh argument, but it is not used in direct migration.

Vz migration is implemented as direct migration. The reason is that the vz sdk does all the job. The prepare phase function is used to pass the session uuid from destination to source so we don't introduce a new rpc call.
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com> --- src/libvirt-domain.c | 3 +- src/vz/vz_driver.c | 193 ++++++++++++++++++++++++++++++++++++++++++++++++++ src/vz/vz_sdk.c | 33 +++++++++ src/vz/vz_sdk.h | 2 + 4 files changed, 230 insertions(+), 1 deletions(-)
diff --git a/src/libvirt-domain.c b/src/libvirt-domain.c index cbf08fc..8577edd 100644 --- a/src/libvirt-domain.c +++ b/src/libvirt-domain.c @@ -3425,7 +3425,8 @@ virDomainMigrateDirect(virDomainPtr domain, NULLSTR(xmlin), flags, NULLSTR(dname), NULLSTR(uri), bandwidth); - if (!domain->conn->driver->domainMigratePerform) { + if (!domain->conn->driver->domainMigratePerform && + !domain->conn->driver->domainMigratePerform3) { virReportUnsupportedError(); return -1; }
Could you please send this change as a separate patch? It is a sort of bugfix to common libvirt code and is not related to our migration.

ok
diff --git a/src/vz/vz_driver.c b/src/vz/vz_driver.c index 8fa7957..f82fff8 100644 --- a/src/vz/vz_driver.c +++ b/src/vz/vz_driver.c @@ -1343,6 +1343,196 @@ vzDomainMemoryStats(virDomainPtr domain, return ret; } +static char* +vzFormatCookie(const unsigned char *session_uuid) +{ + char uuidstr[VIR_UUID_STRING_BUFLEN]; + virBuffer buf = VIR_BUFFER_INITIALIZER; + + virBufferAddLit(&buf, "<vz-migration1>\n"); + virUUIDFormat(session_uuid, uuidstr); + virBufferAsprintf(&buf, "<session_uuid>%s</session_uuid>\n", uuidstr); + virBufferAddLit(&buf, "</vz-migration1>\n");
.....
+ +static virURIPtr +vzMakeVzUri(const char *connuri_str) +{ + virURIPtr connuri = NULL; + virURIPtr vzuri = NULL; + int ret = -1; + + if (!(connuri = virURIParse(connuri_str))) + goto cleanup; + + if (VIR_ALLOC(vzuri) < 0) + goto cleanup; + memset(vzuri, 0, sizeof(*vzuri)); + + if (VIR_STRDUP(vzuri->server, connuri->server) < 0) + goto cleanup; + vzuri->port = connuri->port; + ret = 0; +
This is generally not correct, since you are passing the port of libvirt's connection to the vz connection. It only works if you don't specify a port in the URI: the port is 0 in that case, and libprlsdk accepts it.

I don't agree. This function is called with the hypervisor-specific URI, where the port is the hypervisor's port, not libvirtd's.
+ cleanup:
+
+    virURIFree(connuri);
+    if (ret < 0) {
+        virURIFree(vzuri);
+        vzuri = NULL;
+    }
+
+    return vzuri;
+}
+
+#define VZ_MIGRATION_FLAGS (0)
+
+#define VZ_MIGRATION_PARAMETERS (NULL)
+
+static int
+vzDomainMigratePerform3(virDomainPtr domain,
+                        const char *xmlin ATTRIBUTE_UNUSED,
+                        const char *cookiein ATTRIBUTE_UNUSED,
+                        int cookieinlen ATTRIBUTE_UNUSED,
+                        char **cookieout ATTRIBUTE_UNUSED,
+                        int *cookieoutlen ATTRIBUTE_UNUSED,
+                        const char *dconnuri ATTRIBUTE_UNUSED,
+                        const char *uri,
+                        unsigned long flags,
+                        const char *dname ATTRIBUTE_UNUSED,
+                        unsigned long bandwidth ATTRIBUTE_UNUSED)
+{
+    int ret = -1;
+    virDomainObjPtr dom = NULL;
+    virConnectPtr dconn = NULL;
+    virURIPtr vzuri = NULL;
+    unsigned char session_uuid[VIR_UUID_BUFLEN];
+    vzConnPtr privconn = domain->conn->privateData;
+    char *cookie = NULL;
+    int cookielen = 0;
+
+    virCheckFlags(VZ_MIGRATION_FLAGS, -1);
+
+    if (!(vzuri = vzMakeVzUri(uri)))
+        goto cleanup;
+
+    if (!(dom = vzDomObjFromDomain(domain)))
+        goto cleanup;
+
+    dconn = virConnectOpen(uri);
+    if (dconn == NULL) {
+        virReportError(VIR_ERR_OPERATION_FAILED,
+                       _("Failed to connect to remote libvirt URI %s: %s"),
+                       uri, virGetLastErrorMessage());
+        goto cleanup;
+    }
+
+    /* NULL and zero elements are unused */
+    /* domxml is passed as "" or otherwise we will fail at rpc call */
+    if (virDomainMigratePrepare3(dconn, NULL, 0, &cookie, &cookielen, NULL, NULL, 0, NULL, 0, "") < 0)

Could you split this line?

ok
+ goto cleanup; + + if (vzParseCookie(cookie, session_uuid) < 0) + goto cleanup; + + if (prlsdkMigrate(dom, vzuri, session_uuid) < 0) + goto cleanup; + + virDomainObjListRemove(privconn->domains, dom); + dom = NULL; + + ret = 0; + + cleanup: + if (dom) + virObjectUnlock(dom); + virObjectUnref(dconn); + virURIFree(vzuri); + VIR_FREE(cookie); + + return ret; +} + static virHypervisorDriver vzDriver = { .name = "vz", .connectOpen = vzConnectOpen, /* 0.10.0 */ @@ -1396,6 +1586,9 @@ static virHypervisorDriver vzDriver = { .domainBlockStatsFlags = vzDomainBlockStatsFlags, /* 1.2.17 */ .domainInterfaceStats = vzDomainInterfaceStats, /* 1.2.17 */ .domainMemoryStats = vzDomainMemoryStats, /* 1.2.17 */ + .connectSupportsFeature = vzConnectSupportsFeature, /* 1.2.19 */ + .domainMigratePrepare3 = vzDomainMigratePrepare3, /* 1.2.19 */ + .domainMigratePerform3 = vzDomainMigratePerform3, /* 1.2.19 */ }; static virConnectDriver vzConnectDriver = { diff --git a/src/vz/vz_sdk.c b/src/vz/vz_sdk.c index f7253de..783438d 100644 --- a/src/vz/vz_sdk.c +++ b/src/vz/vz_sdk.c @@ -4054,3 +4054,36 @@ prlsdkGetMemoryStats(virDomainObjPtr dom, return ret; } + +/* high security is default choice for 2 reasons: + 1. as this is the highest set security we can't get + reject from server with high security settings + 2. 
this is on par with security level of driver + connection to dispatcher */ + +#define PRLSDK_MIGRATION_FLAGS (PSL_HIGH_SECURITY) + +int prlsdkMigrate(virDomainObjPtr dom, virURIPtr uri, + const unsigned char *session_uuid) +{ + int ret = -1; + vzDomObjPtr privdom = dom->privateData; + PRL_HANDLE job = PRL_INVALID_HANDLE; + char uuidstr[VIR_UUID_STRING_BUFLEN + 2]; + + prlsdkUUIDFormat(session_uuid, uuidstr); + job = PrlVm_MigrateEx(privdom->sdkdom, uri->server, uri->port, uuidstr, + "", /* use default dir for migrated instance bundle */ + PRLSDK_MIGRATION_FLAGS, + 0, /* reserved flags */ + PRL_TRUE /* don't ask for confirmations */ + ); + + if (PRL_FAILED(waitJob(job))) + goto cleanup; + + ret = 0; + + cleanup: + return ret; +} diff --git a/src/vz/vz_sdk.h b/src/vz/vz_sdk.h index ebe4591..d3f0caf 100644 --- a/src/vz/vz_sdk.h +++ b/src/vz/vz_sdk.h @@ -76,3 +76,5 @@ int prlsdkGetVcpuStats(virDomainObjPtr dom, int idx, unsigned long long *time); int prlsdkGetMemoryStats(virDomainObjPtr dom, virDomainMemoryStatPtr stats, unsigned int nr_stats); +int +prlsdkMigrate(virDomainObjPtr dom, virURIPtr uri, const char unsigned *session_uuid);

On Tue, Aug 25, 2015 at 12:04:14PM +0300, nshirokovskiy@virtuozzo.com wrote:
From: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
This patch makes basic vz migration possible. For example, via virsh:

virsh -c vz:///system migrate --direct $NAME $STUB vz+ssh://$DST/system

$STUB can be anything: it is a required virsh argument, but it is not used in direct migration.

Vz migration is implemented as direct migration. The reason is that the vz sdk does all the job. The prepare phase function is used to pass the session uuid from destination to source so we don't introduce a new rpc call.
I'm trying to understand why you need $STUB as dummy data. IIRC, when doing direct migration, you should be providing a valid URI for the first parameter, and not need the second uri.
+ +static int +vzDomainMigratePrepare3(virConnectPtr conn, + const char *cookiein ATTRIBUTE_UNUSED, + int cookieinlen ATTRIBUTE_UNUSED, + char **cookieout, + int *cookieoutlen, + const char *uri_in ATTRIBUTE_UNUSED, + char **uri_out ATTRIBUTE_UNUSED, + unsigned long flags, + const char *dname ATTRIBUTE_UNUSED, + unsigned long resource ATTRIBUTE_UNUSED, + const char *dom_xml ATTRIBUTE_UNUSED) +{ + vzConnPtr privconn = conn->privateData; + int ret = -1; + + virCheckFlags(0, -1); + + if (!(*cookieout = vzFormatCookie(privconn->session_uuid))) + goto cleanup; + *cookieoutlen = strlen(*cookieout) + 1; + ret = 0; + + cleanup: + if (ret != 0) { + VIR_FREE(*cookieout); + *cookieoutlen = 0; + } + + return ret; +} + +static int +vzConnectSupportsFeature(virConnectPtr conn ATTRIBUTE_UNUSED, int feature) +{ + switch (feature) { + case VIR_DRV_FEATURE_MIGRATION_V3: + case VIR_DRV_FEATURE_MIGRATION_DIRECT: + return 1; + default: + return 0; + } +} + +static virURIPtr +vzMakeVzUri(const char *connuri_str) +{ + virURIPtr connuri = NULL; + virURIPtr vzuri = NULL; + int ret = -1; + + if (!(connuri = virURIParse(connuri_str))) + goto cleanup; + + if (VIR_ALLOC(vzuri) < 0) + goto cleanup; + memset(vzuri, 0, sizeof(*vzuri)); + + if (VIR_STRDUP(vzuri->server, connuri->server) < 0) + goto cleanup; + vzuri->port = connuri->port; + ret = 0; + + cleanup: + + virURIFree(connuri); + if (ret < 0) { + virURIFree(vzuri); + vzuri = NULL; + } + + return vzuri; +} + +#define VZ_MIGRATION_FLAGS (0) + +#define VZ_MIGRATION_PARAMETERS (NULL) + +static int +vzDomainMigratePerform3(virDomainPtr domain, + const char *xmlin ATTRIBUTE_UNUSED, + const char *cookiein ATTRIBUTE_UNUSED, + int cookieinlen ATTRIBUTE_UNUSED, + char **cookieout ATTRIBUTE_UNUSED, + int *cookieoutlen ATTRIBUTE_UNUSED, + const char *dconnuri ATTRIBUTE_UNUSED, + const char *uri,
I think you should be using 'dconnuri' rather than 'uri', so then you would not need the $STUB dummy parameter.

Regards, Daniel

-- 
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org -o- http://virt-manager.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|

On 08/25/2015 07:18 PM, Daniel P. Berrange wrote:
On Tue, Aug 25, 2015 at 12:04:14PM +0300, nshirokovskiy@virtuozzo.com wrote:
From: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
This patch makes basic vz migration possible. For example, via virsh:

virsh -c vz:///system migrate --direct $NAME $STUB vz+ssh://$DST/system

$STUB can be anything: it is a required virsh argument, but it is not used in direct migration.

Vz migration is implemented as direct migration. The reason is that the vz sdk does all the job. The prepare phase function is used to pass the session uuid from destination to source so we don't introduce a new rpc call.

I'm trying to understand why you need $STUB as dummy data.
IIRC, when doing direct migration, you should be providing a valid URI for the first parameter, and not need the second uri.
virsh uses the virDomainMigrateToURI3 function, which for some reason treats the destination URI differently for direct and peer-to-peer migrations: for p2p it uses the dconnuri argument, while for direct migration it takes VIR_MIGRATE_PARAM_URI <http://libvirt.org/html/libvirt-libvirt-domain.html#VIR_MIGRATE_PARAM_URI> from the typed params list.
+ +static int +vzDomainMigratePrepare3(virConnectPtr conn, + const char *cookiein ATTRIBUTE_UNUSED, + int cookieinlen ATTRIBUTE_UNUSED, + char **cookieout, + int *cookieoutlen, + const char *uri_in ATTRIBUTE_UNUSED, + char **uri_out ATTRIBUTE_UNUSED, + unsigned long flags, + const char *dname ATTRIBUTE_UNUSED, + unsigned long resource ATTRIBUTE_UNUSED, + const char *dom_xml ATTRIBUTE_UNUSED) +{ + vzConnPtr privconn = conn->privateData; + int ret = -1; + + virCheckFlags(0, -1); + + if (!(*cookieout = vzFormatCookie(privconn->session_uuid))) + goto cleanup; + *cookieoutlen = strlen(*cookieout) + 1; + ret = 0; + + cleanup: + if (ret != 0) { + VIR_FREE(*cookieout); + *cookieoutlen = 0; + } + + return ret; +} + +static int +vzConnectSupportsFeature(virConnectPtr conn ATTRIBUTE_UNUSED, int feature) +{ + switch (feature) { + case VIR_DRV_FEATURE_MIGRATION_V3: + case VIR_DRV_FEATURE_MIGRATION_DIRECT: + return 1; + default: + return 0; + } +} + +static virURIPtr +vzMakeVzUri(const char *connuri_str) +{ + virURIPtr connuri = NULL; + virURIPtr vzuri = NULL; + int ret = -1; + + if (!(connuri = virURIParse(connuri_str))) + goto cleanup; + + if (VIR_ALLOC(vzuri) < 0) + goto cleanup; + memset(vzuri, 0, sizeof(*vzuri)); + + if (VIR_STRDUP(vzuri->server, connuri->server) < 0) + goto cleanup; + vzuri->port = connuri->port; + ret = 0; + + cleanup: + + virURIFree(connuri); + if (ret < 0) { + virURIFree(vzuri); + vzuri = NULL; + } + + return vzuri; +} + +#define VZ_MIGRATION_FLAGS (0) + +#define VZ_MIGRATION_PARAMETERS (NULL) + +static int +vzDomainMigratePerform3(virDomainPtr domain, + const char *xmlin ATTRIBUTE_UNUSED, + const char *cookiein ATTRIBUTE_UNUSED, + int cookieinlen ATTRIBUTE_UNUSED, + char **cookieout ATTRIBUTE_UNUSED, + int *cookieoutlen ATTRIBUTE_UNUSED, + const char *dconnuri ATTRIBUTE_UNUSED, + const char *uri, I think you should be using 'dconnuri' rather than 'uri', so then you would not need the $STUB dummy parameter.
Regards, Daniel

On 25.08.2015 20:28, Dmitry Guryanov wrote:
On 08/25/2015 07:18 PM, Daniel P. Berrange wrote:
On Tue, Aug 25, 2015 at 12:04:14PM +0300, nshirokovskiy@virtuozzo.com wrote:
From: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
This patch makes basic vz migration possible. For example by virsh: virsh -c vz:///system migrate --direct $NAME $STUB vz+ssh://$DST/system
$STUB could be anything as it is a required virsh argument, but it is not used in direct migration.
Vz migration is implemented as direct migration. The reason is that the vz sdk does all the job. The prepare phase function is used to pass the session uuid from destination to source so we don't introduce a new rpc call. I'm trying to understand why you need $STUB as dummy data.
IIRC, when doing direct migration, you should be providing a valid URI for the first parameter, and not need the second uri.
virsh uses the virDomainMigrateToURI3 function, and for some reason it distinguishes destination URIs for direct and peer-to-peer migrations. For p2p it uses the dconnuri argument, and for direct it uses VIR_MIGRATE_PARAM_URI <http://libvirt.org/html/libvirt-libvirt-domain.html#VIR_MIGRATE_PARAM_URI> from the typed params list.
AFAIU, in the virDomainMigrateToURI3 function the dconnuri argument is treated as the URI of the destination libvirt daemon. As direct migration doesn't need this kind of connection, this argument is ignored. However, as the suggested vz migration implementation uses a connection to the destination libvirtd daemon to get the destination hypervisor session uuid, I should probably use dconnuri for that purpose, not the hypervisor URI. This is related to what Dmitry mentioned in previous letters, where there was confusion between the hypervisor and remote daemon access ports. This would break an invariant mentioned in the documentation that direct migration does not use dconnuri. Probably it is not too bad to get away from this restriction.
+
+static int
+vzDomainMigratePrepare3(virConnectPtr conn,
+                        const char *cookiein ATTRIBUTE_UNUSED,
+                        int cookieinlen ATTRIBUTE_UNUSED,
+                        char **cookieout,
+                        int *cookieoutlen,
+                        const char *uri_in ATTRIBUTE_UNUSED,
+                        char **uri_out ATTRIBUTE_UNUSED,
+                        unsigned long flags,
+                        const char *dname ATTRIBUTE_UNUSED,
+                        unsigned long resource ATTRIBUTE_UNUSED,
+                        const char *dom_xml ATTRIBUTE_UNUSED)
+{
+    vzConnPtr privconn = conn->privateData;
+    int ret = -1;
+
+    virCheckFlags(0, -1);
+
+    if (!(*cookieout = vzFormatCookie(privconn->session_uuid)))
+        goto cleanup;
+    *cookieoutlen = strlen(*cookieout) + 1;
+    ret = 0;
+
+ cleanup:
+    if (ret != 0) {
+        VIR_FREE(*cookieout);
+        *cookieoutlen = 0;
+    }
+
+    return ret;
+}
+
+static int
+vzConnectSupportsFeature(virConnectPtr conn ATTRIBUTE_UNUSED, int feature)
+{
+    switch (feature) {
+    case VIR_DRV_FEATURE_MIGRATION_V3:
+    case VIR_DRV_FEATURE_MIGRATION_DIRECT:
+        return 1;
+    default:
+        return 0;
+    }
+}
+
+static virURIPtr
+vzMakeVzUri(const char *connuri_str)
+{
+    virURIPtr connuri = NULL;
+    virURIPtr vzuri = NULL;
+    int ret = -1;
+
+    if (!(connuri = virURIParse(connuri_str)))
+        goto cleanup;
+
+    if (VIR_ALLOC(vzuri) < 0)
+        goto cleanup;
+    memset(vzuri, 0, sizeof(*vzuri));
+
+    if (VIR_STRDUP(vzuri->server, connuri->server) < 0)
+        goto cleanup;
+    vzuri->port = connuri->port;
+    ret = 0;
+
+ cleanup:
+
+    virURIFree(connuri);
+    if (ret < 0) {
+        virURIFree(vzuri);
+        vzuri = NULL;
+    }
+
+    return vzuri;
+}
+
+#define VZ_MIGRATION_FLAGS (0)
+
+#define VZ_MIGRATION_PARAMETERS (NULL)
+
+static int
+vzDomainMigratePerform3(virDomainPtr domain,
+                        const char *xmlin ATTRIBUTE_UNUSED,
+                        const char *cookiein ATTRIBUTE_UNUSED,
+                        int cookieinlen ATTRIBUTE_UNUSED,
+                        char **cookieout ATTRIBUTE_UNUSED,
+                        int *cookieoutlen ATTRIBUTE_UNUSED,
+                        const char *dconnuri ATTRIBUTE_UNUSED,
+                        const char *uri,

I think you should be using 'dconnuri' rather than 'uri', so then you would not need the $STUB dummy parameter.
Regards, Daniel

On Wed, Aug 26, 2015 at 10:19:43AM +0300, Nikolay Shirokovskiy wrote:
On 25.08.2015 20:28, Dmitry Guryanov wrote:
On 08/25/2015 07:18 PM, Daniel P. Berrange wrote:
On Tue, Aug 25, 2015 at 12:04:14PM +0300, nshirokovskiy@virtuozzo.com wrote:
From: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
This patch makes basic vz migration possible. For example by virsh: virsh -c vz:///system migrate --direct $NAME $STUB vz+ssh://$DST/system
$STUB could be anything as it is a required virsh argument, but it is not used in direct migration.
Vz migration is implemented as direct migration. The reason is that the vz sdk does all the job. The prepare phase function is used to pass the session uuid from destination to source so we don't introduce a new rpc call. I'm trying to understand why you need $STUB as dummy data.
IIRC, when doing direct migration, you should be providing a valid URI for the first parameter, and not need the second uri.
virsh uses the virDomainMigrateToURI3 function, and for some reason it distinguishes destination URIs for direct and peer-to-peer migrations. For p2p it uses the dconnuri argument, and for direct it uses VIR_MIGRATE_PARAM_URI <http://libvirt.org/html/libvirt-libvirt-domain.html#VIR_MIGRATE_PARAM_URI> from the typed params list.
AFAIU, in the virDomainMigrateToURI3 function the dconnuri argument is treated as the URI of the destination libvirt daemon. As direct migration doesn't need this kind of connection, this argument is ignored.
However, as the suggested vz migration implementation uses a connection to the destination libvirtd daemon to get the destination hypervisor session uuid, I should probably use dconnuri for that purpose, not the hypervisor URI. This is related to what Dmitry mentioned in previous letters, where there was confusion between the hypervisor and remote daemon access ports.
This would break an invariant mentioned in the documentation that direct migration does not use dconnuri. Probably it is not too bad to get away from this restriction.
Before you change the code yet again, I'll take a closer look at the migration code and try to figure out a clear correct answer. This bit of libvirt is so confusing it is easy to get wrong.... Regards, Daniel -- |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|

On Tue, Aug 25, 2015 at 12:04:14PM +0300, nshirokovskiy@virtuozzo.com wrote:
From: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
This patch makes basic vz migration possible. For example by virsh: virsh -c vz:///system migrate --direct $NAME $STUB vz+ssh://$DST/system
$STUB could be anything as it is a required virsh argument, but it is not used in direct migration.
Vz migration is implemented as direct migration. The reason is that the vz sdk does all the job. The prepare phase function is used to pass the session uuid from destination to source so we don't introduce a new rpc call.
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
---
 src/libvirt-domain.c |    3 +-
 src/vz/vz_driver.c   |  193 ++++++++++++++++++++++++++++++++++++++++++++++++++
 src/vz/vz_sdk.c      |   33 +++++++++
 src/vz/vz_sdk.h      |    2 +
 4 files changed, 230 insertions(+), 1 deletions(-)
diff --git a/src/libvirt-domain.c b/src/libvirt-domain.c
index cbf08fc..8577edd 100644
--- a/src/libvirt-domain.c
+++ b/src/libvirt-domain.c
@@ -3425,7 +3425,8 @@ virDomainMigrateDirect(virDomainPtr domain,
                      NULLSTR(xmlin), flags,
                      NULLSTR(dname), NULLSTR(uri), bandwidth);
-    if (!domain->conn->driver->domainMigratePerform) {
+    if (!domain->conn->driver->domainMigratePerform &&
+        !domain->conn->driver->domainMigratePerform3) {
         virReportUnsupportedError();
         return -1;
     }
Can you send just this bit of code as a separate patch, as it is a trivial unrelated fix we can merge now. Regards, Daniel

On Tue, Aug 25, 2015 at 12:04:14PM +0300, nshirokovskiy@virtuozzo.com wrote:
From: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
This patch makes basic vz migration possible. For example by virsh: virsh -c vz:///system migrate --direct $NAME $STUB vz+ssh://$DST/system
$STUB could be anything as it is a required virsh argument, but it is not used in direct migration.
Vz migration is implemented as direct migration. The reason is that the vz sdk does all the job. The prepare phase function is used to pass the session uuid from destination to source so we don't introduce a new rpc call.
Looking more closely at migration again, the scenario you have is pretty much identical to the Xen scenario, in that the hypervisor actually manages the migration, but you still need a connection to dest libvirtd to fetch some initialization data. You have claimed you are implementing, what we describe as "direct, unmanaged" migration on this page: http://libvirt.org/migration.html But based on the fact that you need to talk to dest libvirtd, you should in fact implement 'direct, managed' migration - this name is slightly misleading as the VZ SDK is still actually managing it. Since you don't need to have the begin/confirm phases, you also don't need to implement the V3 migration protocol - it is sufficient to just use V1. This doesn't need many changes in your patch fortunately.
diff --git a/src/vz/vz_driver.c b/src/vz/vz_driver.c
index 8fa7957..f82fff8 100644
--- a/src/vz/vz_driver.c
+++ b/src/vz/vz_driver.c
@@ -1343,6 +1343,196 @@ vzDomainMemoryStats(virDomainPtr domain,
     return ret;
 }
+static char*
+vzFormatCookie(const unsigned char *session_uuid)
+{
+    char uuidstr[VIR_UUID_STRING_BUFLEN];
+    virBuffer buf = VIR_BUFFER_INITIALIZER;
+
+    virBufferAddLit(&buf, "<vz-migration1>\n");
+    virUUIDFormat(session_uuid, uuidstr);
+    virBufferAsprintf(&buf, "<session_uuid>%s</session_uuid>\n", uuidstr);
+    virBufferAddLit(&buf, "</vz-migration1>\n");
+
+    if (virBufferCheckError(&buf) < 0)
+        return NULL;
+
+    return virBufferContentAndReset(&buf);
+}
+
+static int
+vzParseCookie(const char *xml, unsigned char *session_uuid)
+{
+    xmlDocPtr doc = NULL;
+    xmlXPathContextPtr ctx = NULL;
+    char *tmp = NULL;
+    int ret = -1;
+
+    if (!(doc = virXMLParseStringCtxt(xml, _("(_migration_cookie)"), &ctx)))
+        goto cleanup;
+
+    if (!(tmp = virXPathString("string(./session_uuid[1])", ctx))) {
+        virReportError(VIR_ERR_INTERNAL_ERROR,
+                       "%s", _("missing session_uuid element in migration data"));
+        goto cleanup;
+    }
+    if (virUUIDParse(tmp, session_uuid) < 0) {
+        virReportError(VIR_ERR_INTERNAL_ERROR,
+                       "%s", _("malformed session_uuid element in migration data"));
+        goto cleanup;
+    }
+    ret = 0;
+
+ cleanup:
+    xmlXPathFreeContext(ctx);
+    xmlFreeDoc(doc);
+    VIR_FREE(tmp);
+
+    return ret;
+}
+
+static int
+vzDomainMigratePrepare3(virConnectPtr conn,
+                        const char *cookiein ATTRIBUTE_UNUSED,
+                        int cookieinlen ATTRIBUTE_UNUSED,
+                        char **cookieout,
+                        int *cookieoutlen,
+                        const char *uri_in ATTRIBUTE_UNUSED,
+                        char **uri_out ATTRIBUTE_UNUSED,
+                        unsigned long flags,
+                        const char *dname ATTRIBUTE_UNUSED,
+                        unsigned long resource ATTRIBUTE_UNUSED,
+                        const char *dom_xml ATTRIBUTE_UNUSED)
Switch to implement domainMigratePrepare, instead of the v3 method, since you don't need anything except the cookieout parameters
+{
+    vzConnPtr privconn = conn->privateData;
+    int ret = -1;
+
+    virCheckFlags(0, -1);
+
+    if (!(*cookieout = vzFormatCookie(privconn->session_uuid)))
+        goto cleanup;
+    *cookieoutlen = strlen(*cookieout) + 1;
+    ret = 0;
Also fill in the uri_out parameter, so that the user does not need to provide a URI explicitly.
+
+ cleanup:
+    if (ret != 0) {
+        VIR_FREE(*cookieout);
+        *cookieoutlen = 0;
+    }
+
+    return ret;
+}
+
+static int
+vzConnectSupportsFeature(virConnectPtr conn ATTRIBUTE_UNUSED, int feature)
+{
+    switch (feature) {
+    case VIR_DRV_FEATURE_MIGRATION_V3:
+    case VIR_DRV_FEATURE_MIGRATION_DIRECT:
+        return 1;
+    default:
+        return 0;
+    }
+}
You can drop this since you don't need V3 or DIRECT features.
+static virURIPtr
+vzMakeVzUri(const char *connuri_str)
+{
+    virURIPtr connuri = NULL;
+    virURIPtr vzuri = NULL;
+    int ret = -1;
+
+    if (!(connuri = virURIParse(connuri_str)))
+        goto cleanup;
+
+    if (VIR_ALLOC(vzuri) < 0)
+        goto cleanup;
+    memset(vzuri, 0, sizeof(*vzuri));
+
+    if (VIR_STRDUP(vzuri->server, connuri->server) < 0)
+        goto cleanup;
+    vzuri->port = connuri->port;
+    ret = 0;
+
+ cleanup:
+
+    virURIFree(connuri);
+    if (ret < 0) {
+        virURIFree(vzuri);
+        vzuri = NULL;
+    }
+
+    return vzuri;
+}
+
+#define VZ_MIGRATION_FLAGS (0)
+
+#define VZ_MIGRATION_PARAMETERS (NULL)
+
+static int
+vzDomainMigratePerform3(virDomainPtr domain,
+                        const char *xmlin ATTRIBUTE_UNUSED,
+                        const char *cookiein ATTRIBUTE_UNUSED,
+                        int cookieinlen ATTRIBUTE_UNUSED,
+                        char **cookieout ATTRIBUTE_UNUSED,
+                        int *cookieoutlen ATTRIBUTE_UNUSED,
+                        const char *dconnuri ATTRIBUTE_UNUSED,
+                        const char *uri,
+                        unsigned long flags,
+                        const char *dname ATTRIBUTE_UNUSED,
+                        unsigned long bandwidth ATTRIBUTE_UNUSED)
Switch to implement domainMigratePerform instead of Perform3, and remove the ATTRIBUTE_UNUSED on the cookiein + cookieinlen parameters.
+{
+    int ret = -1;
+    virDomainObjPtr dom = NULL;
+    virConnectPtr dconn = NULL;
+    virURIPtr vzuri = NULL;
+    unsigned char session_uuid[VIR_UUID_BUFLEN];
+    vzConnPtr privconn = domain->conn->privateData;
+    char *cookie = NULL;
+    int cookielen = 0;
+
+    virCheckFlags(VZ_MIGRATION_FLAGS, -1);
+
+    if (!(vzuri = vzMakeVzUri(uri)))
+        goto cleanup;
Remove the MakeVzUri call and just use the 'uri' parameter directly - the Prepare method should populate uri_out in the required format for the prlsdkMigrate() API call.
+    if (!(dom = vzDomObjFromDomain(domain)))
+        goto cleanup;
+
+    dconn = virConnectOpen(uri);
+    if (dconn == NULL) {
+        virReportError(VIR_ERR_OPERATION_FAILED,
+                       _("Failed to connect to remote libvirt URI %s: %s"),
+                       uri, virGetLastErrorMessage());
+        goto cleanup;
+    }
+
+    /* NULL and zero elements are unused */
+    /* domxml is passed as "" or otherwise we will fail at rpc call */
+    if (virDomainMigratePrepare3(dconn, NULL, 0, &cookie, &cookielen,
+                                 NULL, NULL, 0, NULL, 0, "") < 0)
+        goto cleanup;
Instead of opening the connection and calling Prepare3, simply use the 'cookiein' parameter passed to Perform()
+
+    if (vzParseCookie(cookie, session_uuid) < 0)
+        goto cleanup;
+
+    if (prlsdkMigrate(dom, vzuri, session_uuid) < 0)
+        goto cleanup;
+
+    virDomainObjListRemove(privconn->domains, dom);
+    dom = NULL;
+
+    ret = 0;
+
+ cleanup:
+    if (dom)
+        virObjectUnlock(dom);
+    virObjectUnref(dconn);
+    virURIFree(vzuri);
+    VIR_FREE(cookie);
+
+    return ret;
+}
+
 static virHypervisorDriver vzDriver = {
     .name = "vz",
     .connectOpen = vzConnectOpen, /* 0.10.0 */
@@ -1396,6 +1586,9 @@ static virHypervisorDriver vzDriver = {
     .domainBlockStatsFlags = vzDomainBlockStatsFlags, /* 1.2.17 */
     .domainInterfaceStats = vzDomainInterfaceStats, /* 1.2.17 */
     .domainMemoryStats = vzDomainMemoryStats, /* 1.2.17 */
+    .connectSupportsFeature = vzConnectSupportsFeature, /* 1.2.19 */
+    .domainMigratePrepare3 = vzDomainMigratePrepare3, /* 1.2.19 */
+    .domainMigratePerform3 = vzDomainMigratePerform3, /* 1.2.19 */
Drop the '3' from these. Regards, Daniel

On 27.08.2015 13:34, Daniel P. Berrange wrote:
On Tue, Aug 25, 2015 at 12:04:14PM +0300, nshirokovskiy@virtuozzo.com wrote:
From: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
This patch makes basic vz migration possible. For example by virsh: virsh -c vz:///system migrate --direct $NAME $STUB vz+ssh://$DST/system
$STUB could be anything as it is a required virsh argument, but it is not used in direct migration.
Vz migration is implemented as direct migration. The reason is that the vz sdk does all the job. The prepare phase function is used to pass the session uuid from destination to source so we don't introduce a new rpc call.
Looking more closely at migration again, the scenario you have is pretty much identical to the Xen scenario, in that the hypervisor actually manages the migration, but you still need a connection to dest libvirtd to fetch some initialization data.
You have claimed you are implementing, what we describe as "direct, unmanaged" migration on this page:
http://libvirt.org/migration.html
But based on the fact that you need to talk to dest libvirtd, you should in fact implement 'direct, managed' migration - this name is slightly misleading as the VZ SDK is still actually managing it.
Since you don't need to have the begin/confirm phases, you also don't need to implement the V3 migration protocol - it is sufficient to just use V1.
This doesn't need many changes in your patch fortunately.
I've been looking at the common migration code for a rather long time and think that using the direct managed scheme for vz migration could lead to problems. Let me share my concerns.

1. The version1 migration protocol differs from version3 not only in the number of stages. Version3 supports extended parameters like VIR_MIGRATE_PARAM_GRAPHICS_URI which are meaningful for vz migration too. Thus in the future we could move to implementing version3 as well.

2. The direct managed stages don't simply mean 'do something on the source, then on the destination' and so on. They are interconnected, and this interconnection is encoded in the migration algorithm. For version3 (virDomainMigrateVersion3Full) it is more noticeable: if the finish3 phase fails, we cancel the migration in the confirm3 phase. That is, we treat these phases specifically - in perform3 we think we move data, in finish we think we start the domain on the destination, in confirm we think we stop the domain on the source. That is how qemu migration works and that is how we think of the phases when we implement the direct managed algorithm. So the phases have contracts. If we implement vz migration through this scheme we cannot keep these contracts, as the perform3 phase not only moves data but also kills the source domain and starts the destination one. The worst the user could get is erroneous warnings in logs and a migration reported as failed despite actually succeeding, in case of side-effect failures like rpc errors or OOM. What makes it worse is that the phase implementation contracts are vague and have to be kept in mind.

As the version1 scheme is quite simple and its phase contracts are looser than for version3, we could go this way, but I see potential problems (at least for the developer). Thus I suggest keeping the phase contracts of all versions of direct managed migration clear and hiding all the differences by implementing the p2p or direct scheme.

The question arises how these two differ. The documentation states that p2p is when the libvirt daemon manages the migration and direct is when all management is done by the hypervisor.
As vz migration needs some help from the destination daemon it looks like a candidate for p2p. But as this help is as little as helping to authenticate, I suggest thinking of it as direct. From the implementation point of view there is no difference; from the user point of view the difference is only in flags. Another argument is that if we take qemu we see that p2p just goes through the same steps as direct managed; most of the difference is that the management moves from the client to the daemon. That is, p2p and direct managed are coupled in a sense: if there is p2p then direct managed should be possible too, and this is not the case for vz migration.

On Fri, Aug 28, 2015 at 12:18:30PM +0300, Nikolay Shirokovskiy wrote:
On 27.08.2015 13:34, Daniel P. Berrange wrote:
On Tue, Aug 25, 2015 at 12:04:14PM +0300, nshirokovskiy@virtuozzo.com wrote:
From: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
This patch makes basic vz migration possible. For example by virsh: virsh -c vz:///system migrate --direct $NAME $STUB vz+ssh://$DST/system
$STUB could be anything as it is a required virsh argument, but it is not used in direct migration.
Vz migration is implemented as direct migration. The reason is that the vz sdk does all the job. The prepare phase function is used to pass the session uuid from destination to source so we don't introduce a new rpc call.
Looking more closely at migration again, the scenario you have is pretty much identical to the Xen scenario, in that the hypervisor actually manages the migration, but you still need a connection to dest libvirtd to fetch some initialization data.
You have claimed you are implementing, what we describe as "direct, unmanaged" migration on this page:
http://libvirt.org/migration.html
But based on the fact that you need to talk to dest libvirtd, you should in fact implement 'direct, managed' migration - this name is slightly misleading as the VZ SDK is still actually managing it.
Since you don't need to have the begin/confirm phases, you also don't need to implement the V3 migration protocol - it is sufficient to just use V1.
This doesn't need many changes in your patch fortunately.
I've been looking at the common migration code for a rather long time and think that using the direct managed scheme for vz migration could lead to problems. Let me share my concerns.
1. The version1 migration protocol differs from version3 not only in the number of stages. Version3 supports extended parameters like VIR_MIGRATE_PARAM_GRAPHICS_URI which are meaningful for vz migration too. Thus in the future we could move to implementing version3 as well.
Ah, that is indeed true. From that POV it certainly makes sense to want to start with V3 straight away.
2. The direct managed stages don't simply mean 'do something on the source, then on the destination' and so on. They are interconnected, and this interconnection is encoded in the migration algorithm. For version3 (virDomainMigrateVersion3Full) it is more noticeable: if the finish3 phase fails, we cancel the migration in the confirm3 phase. That is, we treat these phases specifically - in perform3 we think we move data, in finish we think we start the domain on the destination, in confirm we think we stop the domain on the source. That is how qemu migration works and that is how we think of the phases when we implement the direct managed algorithm. So the phases have contracts. If we implement vz migration through this scheme we cannot keep these contracts, as the perform3 phase not only moves data but also kills the source domain and starts the destination one. The worst the user could get is erroneous warnings in logs and a migration reported as failed despite actually succeeding, in case of side-effect failures like rpc errors or OOM. What makes it worse is that the phase implementation contracts are vague and have to be kept in mind.
It isn't the end of the world if the Perform3 stage kills the source domain. That is the same behaviour as with Xen. Particularly since the VZ SDK itself does the switchover, there's no functional downside to letting Perform3 kill the source.
As the version1 scheme is quite simple and its phase contracts are looser than for version3, we could go this way, but I see potential problems (at least for the developer). Thus I suggest keeping the phase contracts of all versions of direct managed migration clear and hiding all the differences by implementing the p2p or direct scheme.
The question arises how these two differ. The documentation states that p2p is when the libvirt daemon manages the migration and direct is when all management is done by the hypervisor. As vz migration needs some help from the destination daemon it looks like a candidate for p2p. But as this help is as little as helping to authenticate, I suggest thinking of it as direct. From the implementation point of view there is no difference; from the user point of view the difference is only in flags. Another argument is that if we take qemu we see that p2p just goes through the same steps as direct managed; most of the difference is that the management moves from the client to the daemon. That is, p2p and direct managed are coupled in a sense: if there is p2p then direct managed should be possible too, and this is not the case for vz migration.
The p2p migration mode is only different from the default mode, in that instead of the client app talking to the dest libvirtd, it is the source libvirtd talking. With the VZ driver though, the driver runs directly in the client app, not libvirtd. As such, there is no benefit to implementing p2p mode in VZ - it will just end up working in the exact same way as the default mode in terms of what part communicates with the dest. As you do need to talk to dest libvirtd, IMHO, this rules out use of the direct mode, as that is intended for the case where you don't ever use libvirtd on the target. This is why you ended up having the weird situation with passing a dummy URI to virsh, and then passing a libvirt URI as the second parameter. This leads to a rather confusing setup for apps IMHO. So, overall I think you should still do what I suggested, but instead of implementing v1, stay with v3. This will mean you have to provide the full set of Begin, Prepare, Perform, Finish, Confirm callbacks, but you can just have Begin, Finish, Confirm be a no-op for your initial impl. Regards, Daniel

On 28.08.2015 19:37, Daniel P. Berrange wrote:
On Fri, Aug 28, 2015 at 12:18:30PM +0300, Nikolay Shirokovskiy wrote:
On 27.08.2015 13:34, Daniel P. Berrange wrote:
On Tue, Aug 25, 2015 at 12:04:14PM +0300, nshirokovskiy@virtuozzo.com wrote:
From: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
This patch makes basic vz migration possible. For example by virsh: virsh -c vz:///system migrate --direct $NAME $STUB vz+ssh://$DST/system
$STUB could be anything as it is a required virsh argument, but it is not used in direct migration.
Vz migration is implemented as direct migration. The reason is that the vz sdk does all the job. The prepare phase function is used to pass the session uuid from destination to source so we don't introduce a new rpc call.
Looking more closely at migration again, the scenario you have is pretty much identical to the Xen scenario, in that the hypervisor actually manages the migration, but you still need a connection to dest libvirtd to fetch some initialization data.
You have claimed you are implementing, what we describe as "direct, unmanaged" migration on this page:
http://libvirt.org/migration.html
But based on the fact that you need to talk to dest libvirtd, you should in fact implement 'direct, managed' migration - this name is slightly misleading as the VZ SDK is still actually managing it.
Since you don't need to have the begin/confirm phases, you also don't need to implement the V3 migration protocol - it is sufficient to just use V1.
This doesn't need many changes in your patch fortunately.
I've been looking at the common migration code for a rather long time and think that using the direct managed scheme for vz migration could lead to problems. Let me share my concerns.
1. The version1 migration protocol differs from version3 not only in the number of stages. Version3 supports extended parameters like VIR_MIGRATE_PARAM_GRAPHICS_URI which are meaningful for vz migration too. Thus in the future we could move to implementing version3 as well.
Ah, that is indeed true. From that POV it certainly makes sense to want to start with V3 straight away.
2. The direct managed stages don't simply mean 'do something on the source, then on the destination' and so on. They are interconnected, and this interconnection is encoded in the migration algorithm. For version3 (virDomainMigrateVersion3Full) it is more noticeable: if the finish3 phase fails, we cancel the migration in the confirm3 phase. That is, we treat these phases specifically - in perform3 we think we move data, in finish we think we start the domain on the destination, in confirm we think we stop the domain on the source. That is how qemu migration works and that is how we think of the phases when we implement the direct managed algorithm. So the phases have contracts. If we implement vz migration through this scheme we cannot keep these contracts, as the perform3 phase not only moves data but also kills the source domain and starts the destination one. The worst the user could get is erroneous warnings in logs and a migration reported as failed despite actually succeeding, in case of side-effect failures like rpc errors or OOM. What makes it worse is that the phase implementation contracts are vague and have to be kept in mind.
It isn't the end of the world if the Perform3 stage kills the source domain. That is the same behaviour as with Xen. Particularly since the VZ SDK itself does the switchover, there's no functional downside to letting Perform3 kill the source.
I cannot quite agree with you. Yes, luckily, vz migration could be implemented via the existing 5-phase interface and the existing managing virDomainMigrateVersion3Full algorithm, but this is fragile. As the phases have different meanings for qemu and vz, if virDomainMigrateVersion3Full is somehow changed in the future this could lead to improper functioning of vz migration, since the change will be made with the qemu meaning of the phases in mind.
As the version1 scheme is quite simple and its phase contracts are looser than for version3, we could go this way, but I see potential problems (at least for the developer). Thus I suggest keeping the phase contracts of all versions of direct managed migration clear and hiding all the differences by implementing the p2p or direct scheme.
The question arises how these two differ. The documentation states that p2p is when the libvirt daemon manages the migration and direct is when all management is done by the hypervisor. As vz migration needs some help from the destination daemon it looks like a candidate for p2p. But as this help is as little as helping to authenticate, I suggest thinking of it as direct. From the implementation point of view there is no difference; from the user point of view the difference is only in flags. Another argument is that if we take qemu we see that p2p just goes through the same steps as direct managed; most of the difference is that the management moves from the client to the daemon. That is, p2p and direct managed are coupled in a sense: if there is p2p then direct managed should be possible too, and this is not the case for vz migration.
The p2p migration mode is only different from the default mode, in that instead of the client app talking to the dest libvirtd, it is the source libvirtd talking.
With the VZ driver though, the driver runs directly in the client app, not libvirtd. As such, there is no benefit to implementing p2p mode in VZ - it will just end up working in the exact same way as the default mode in terms of what part communicates with the dest.
Again I cannot quite agree. Vz could run in libvirtd too, and someone could want all the management to be done in libvirtd in that case. Thus the user expects either direct or p2p migration to exist. Another reason is that it would be simpler to support vz migration in openstack nova. It uses toURI2 to migrate, and I would rather run the vz driver in libvirtd and use p2p or direct migration than introduce a branch for vz to use the MigrateN API with a client-side driver.
As you do need to talk to dest libvirtd, IMHO, this rules out use of the direct mode, as that is intended for the case where you don't ever use libvirtd on the target. This is why you ended up having the weird situation with passing a dummy URI to virsh, and then passing a libvirt URI as the second parameter. This leads to a rather confusing setup for apps IMHO.
Actually the dummy URI is not caused by improper use on my side. Anyone who wants to use the existing direct migration ends up passing URIs in this manner. The cause is that on one side 'dconnuri' is a required parameter for virsh, and on the other side it is ignored in direct migrations.
So, overall I think you should still do what I suggested, but instead of implementing v1, stay with v3. This will mean you have to provide the full set of Begin, Prepare, Perform, Finish, Confirm callbacks, but you can just have Begin,Finish,Confirm be a no-op for your initial impl.
Regards, Daniel

On Mon, Aug 31, 2015 at 11:40:55AM +0300, Nikolay Shirokovskiy wrote:
On 28.08.2015 19:37, Daniel P. Berrange wrote:
On Fri, Aug 28, 2015 at 12:18:30PM +0300, Nikolay Shirokovskiy wrote:
On 27.08.2015 13:34, Daniel P. Berrange wrote:
On Tue, Aug 25, 2015 at 12:04:14PM +0300, nshirokovskiy@virtuozzo.com wrote:
From: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
This patch makes basic vz migration possible. For example by virsh: virsh -c vz:///system migrate --direct $NAME $STUB vz+ssh://$DST/system
$STUB could be anything, as it is a required virsh argument but it is not used in direct migration.
Vz migration is implemented as direct migration. The reason is that the vz sdk does all the work. The prepare phase function is used to pass the session uuid from destination to source so we don't introduce a new rpc call.
Looking more closely at migration again, the scenario you have is pretty much identical to the Xen scenario, in that the hypervisor actually manages the migration, but you still need a connection to dest libvirtd to fetch some initialization data.
You have claimed you are implementing, what we describe as "direct, unmanaged" migration on this page:
http://libvirt.org/migration.html
But based on the fact that you need to talk to dest libvirtd, you should in fact implement 'direct, managed' migration - this name is slightly misleading as the VZ SDK is still actually managing it.
Since you don't need to have the begin/confirm phases, you also don't need to implement the V3 migration protocol - it is sufficient to just use V1.
This doesn't need many changes in your patch fortunately.
I've been looking at the common migration code for a rather long time and I think that using the direct managed scheme for vz migration could lead to problems. Let me share my concerns.
1. Migration protocol version 1 differs from version 3 not only in the number of stages. Version 3 supports extended parameters like VIR_MIGRATE_PARAM_GRAPHICS_URI which are meaningful for vz migration too. Thus in the future we could move to implementing version 3 as well.
Ah, that is indeed true. From that POV it certainly makes sense to want to start with V3 straight away.
2. The direct managed stages don't simply mean 'do something on the source, then on the destination, and so on'. They are interconnected, and this interconnection is encoded in the migration algorithm. For version 3 (virDomainMigrateVersion3Full) it is more noticeable: if the finish3 phase fails, we cancel the migration in the confirm3 phase. See, we treat these phases specifically - on perform3 we think we move data, on finish3 we think we start the domain on the destination, on confirm3 we think we stop the domain on the source. That is how qemu migration works and that is how we think of the phases when we implement the direct managed algorithm. So the phases have contracts. If we implement vz migration through this scheme we can not keep these contracts, as the perform3 phase not only moves data but also kills the source domain and starts the destination one. The worst the user could get is erroneous warnings in the logs, and migration reported as failed when it actually succeeded, in case of side-effect failures like rpc errors or OOM. Worse still, you have to keep in mind that the phase implementation contracts are vague.
It isn't the end of the world if the Perform3 stage kills the source domain. That is the same behaviour as with Xen. Particularly since the VZ SDK itself does the switchover, there's no functional downside to letting Perform3 kill the source.
I can not quite agree with you. Yes, luckily, vz migration can be implemented via the existing 5-phase interface and the existing virDomainMigrateVersion3Full managing algorithm, but this is fragile. As the phases have different meanings for qemu and vz, if virDomainMigrateVersion3Full is ever changed in the future this could break vz migration, since the change would be made with the qemu meaning of the phases in mind.
On the contrary the migration stages are explicitly intended to allow arbitrary hypervisor specific operations to be performed. In V3, we essentially have 2 initialization steps (begin on src, prepare on dst), and 2 cleanup steps (finish on dst, confirm on src), and 1 action step. The only really required semantics are that the perform step starts the migration. What a hypervisor does in the 2 initialization and 2 cleanup steps is entirely arbitrary. You can rely on the fact that in V3, we will always call the 5 steps in the same sequence as they are defined now. We will never change the way they are called in the future. If there was ever a need to make some incompatible change to suit something the QEMU driver needs, we would have to introduce a V4 protocol, so we would avoid any risk of breaking VZ in that manner.
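The fixed V3 sequence described above (begin and prepare as initialization, perform as the single action step, finish and confirm as cleanup, with confirm cancelling when finish fails) can be modelled in a few lines. This is a toy simulation of that ordering, not libvirt's actual virDomainMigrateVersion3Full:

```c
#include <string.h>

/* Toy model of the V3 orchestration: record which phases ran ('B', 'P',
 * 'X' for perform, 'F', 'C') and what "cancelled" value Confirm3 saw. */
typedef struct {
    int  fail_finish;     /* inject a Finish3 failure */
    char trace[8];        /* must be zero-initialized by the caller */
    int  cancelled_seen;  /* value passed to Confirm3 */
} v3_sim;

static void trace(v3_sim *s, char c)
{
    size_t n = strlen(s->trace);
    s->trace[n] = c;
    s->trace[n + 1] = '\0';
}

static int begin3(v3_sim *s)   { trace(s, 'B'); return 0; }
static int prepare3(v3_sim *s) { trace(s, 'P'); return 0; }
static int perform3(v3_sim *s) { trace(s, 'X'); return 0; }
static int finish3(v3_sim *s)  { trace(s, 'F'); return s->fail_finish ? -1 : 0; }
static int confirm3(v3_sim *s, int cancelled)
{
    trace(s, 'C');
    s->cancelled_seen = cancelled;
    return 0;
}

/* The invocation order the V3 protocol guarantees: the five steps always
 * run in this sequence, and a failed finish leads to a cancelling confirm. */
static int run_v3(v3_sim *s)
{
    if (begin3(s) < 0) return -1;
    if (prepare3(s) < 0) return -1;
    if (perform3(s) < 0) { confirm3(s, 1); return -1; }
    if (finish3(s) < 0)  { confirm3(s, 1); return -1; }
    return confirm3(s, 0);
}
```

The point of the model: whatever a driver does inside each phase is its own business, but the sequence itself is stable.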
So, as the version 1 scheme is quite simple and its phase contracts are looser than version 3's, we could go that way, but I see potential problems (at least for the developer). Thus I suggest keeping the phase contracts of all versions of direct managed migration clear and hiding all the differences by implementing the p2p or direct scheme.
The question arises how these two differ. The documentation states that p2p is when the libvirt daemon manages the migration and direct is when all management is done by the hypervisor. As vz migration needs some help from the destination daemon it looks like a candidate for p2p. But since this help is as little as helping to authenticate, I suggest thinking of it as direct. From the implementation point of view there is no difference; from the user point of view the difference is only in flags. Another argument: if we take qemu we see that p2p just goes through the same steps as direct managed; most of the difference is that management moves from the client to the daemon. That is, p2p and direct managed are coupled: if p2p exists then direct managed should be possible too, and that is not the case for vz migration.
The p2p migration mode is only different from the default mode in that instead of the client app talking to the dest libvirtd, it is the source libvirtd talking.
With the VZ driver though, the driver runs directly in the client app, not libvirtd. As such, there is no benefit to implementing p2p mode in VZ - it will just end up working in the exact same way as the default mode in terms of what part communicates with the dest.
Again, I can not quite agree. Vz could run on libvirtd too, and in that case someone could want the whole management to be done on libvirtd. Thus the user expects that either direct or p2p migration exists.
Yes, it would be valid to implement the P2P migration protocol in VZ, as well as the default protocol (managed direct). Implementing the unmanaged direct protocol (aka --direct virsh flag) is not appropriate though, as that's for the case where dest libvirtd is not involved in any way, which is not the case here.
Another reason is that it would be simpler to support vz migration in OpenStack Nova. It uses toURI2 to migrate, and I would rather run the vz driver on libvirtd and use p2p or direct migration than introduce a branch for vz to use the MigrateN API with a client-side driver.
If we implement the P2P protocol in VZ too, then OpenStack should not need any changes at all in how it invokes migration.
As you do need to talk to the dest libvirtd, IMHO this rules out use of the direct mode, as that is intended for the case where you don't ever use libvirtd on the target. This is why you ended up with the weird situation of passing a dummy URI to virsh and then passing a libvirt URI as the second parameter. This leads to a rather confusing setup for apps IMHO.
Actually the dummy URI is not caused by any improper use on my side. Anyone who wants to use the existing direct migration ends up passing URIs in this manner. The cause is that on one side 'dconnuri' is a required parameter for virsh, and on the other side it is ignored in direct migrations.
Well, we have 2 URIs in the migration APIs: one is intended to be a libvirt URI and one is intended to be a hypervisor URI. The way you implemented this initial patch requires a libvirt URI to be provided where we have documented that a hypervisor URI should be provided. This is what I consider to be wrong about the current impl. Regards, Daniel -- |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|
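The session uuid discussed in this thread travels from destination to source as the prepare-phase cookie. The `VIR_UUID_STRING_BUFLEN + 2` buffers in the patches suggest the vz sdk wraps the canonical UUID string in braces; here is a hedged sketch of such wrap/unwrap helpers. The brace format and the helper names are assumptions for illustration, not the real prlsdkUUIDFormat/prlsdkUUIDParse code:

```c
#include <stdio.h>
#include <string.h>

#define VIR_UUID_STRING_BUFLEN 37  /* 36 chars + NUL, as in libvirt */

/* Wrap a canonical UUID string in braces, the form the vz sdk appears to
 * use for session ids (hence the VIR_UUID_STRING_BUFLEN + 2 buffers in
 * the patches).  Assumption: the extra 2 bytes are '{' and '}'. */
static int
vz_uuid_wrap(const char *uuid, char *out, size_t outlen)
{
    if (strlen(uuid) != VIR_UUID_STRING_BUFLEN - 1)
        return -1;                              /* not a canonical UUID */
    if (outlen < VIR_UUID_STRING_BUFLEN + 2)
        return -1;                              /* caller buffer too small */
    snprintf(out, outlen, "{%s}", uuid);
    return 0;
}

/* Strip the braces back off to recover the canonical UUID string. */
static int
vz_uuid_unwrap(const char *wrapped, char *out, size_t outlen)
{
    size_t n = strlen(wrapped);

    if (n != VIR_UUID_STRING_BUFLEN + 1 ||
        wrapped[0] != '{' || wrapped[n - 1] != '}')
        return -1;
    if (outlen < VIR_UUID_STRING_BUFLEN)
        return -1;
    memcpy(out, wrapped + 1, n - 2);
    out[n - 2] = '\0';
    return 0;
}
```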

On 01.09.2015 17:21, Daniel P. Berrange wrote:
Ok, then I'll return to the p2p implementation as in the v2 version of the patch and resend. Thank you for the clarifications.
Regards, Daniel

From: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com> Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com> --- src/vz/vz_driver.c | 6 ++---- src/vz/vz_sdk.c | 16 +++++++++------- src/vz/vz_sdk.h | 5 ++++- 3 files changed, 15 insertions(+), 12 deletions(-) diff --git a/src/vz/vz_driver.c b/src/vz/vz_driver.c index f82fff8..dc26b09 100644 --- a/src/vz/vz_driver.c +++ b/src/vz/vz_driver.c @@ -1467,8 +1467,6 @@ vzMakeVzUri(const char *connuri_str) #define VZ_MIGRATION_FLAGS (0) -#define VZ_MIGRATION_PARAMETERS (NULL) - static int vzDomainMigratePerform3(virDomainPtr domain, const char *xmlin ATTRIBUTE_UNUSED, @@ -1479,7 +1477,7 @@ vzDomainMigratePerform3(virDomainPtr domain, const char *dconnuri ATTRIBUTE_UNUSED, const char *uri, unsigned long flags, - const char *dname ATTRIBUTE_UNUSED, + const char *dname, unsigned long bandwidth ATTRIBUTE_UNUSED) { int ret = -1; @@ -1515,7 +1513,7 @@ vzDomainMigratePerform3(virDomainPtr domain, if (vzParseCookie(cookie, session_uuid) < 0) goto cleanup; - if (prlsdkMigrate(dom, vzuri, session_uuid) < 0) + if (prlsdkMigrate(dom, vzuri, session_uuid, dname) < 0) goto cleanup; virDomainObjListRemove(privconn->domains, dom); diff --git a/src/vz/vz_sdk.c b/src/vz/vz_sdk.c index 783438d..89a2429 100644 --- a/src/vz/vz_sdk.c +++ b/src/vz/vz_sdk.c @@ -4064,7 +4064,8 @@ prlsdkGetMemoryStats(virDomainObjPtr dom, #define PRLSDK_MIGRATION_FLAGS (PSL_HIGH_SECURITY) int prlsdkMigrate(virDomainObjPtr dom, virURIPtr uri, - const unsigned char *session_uuid) + const unsigned char *session_uuid, + const char *dname) { int ret = -1; vzDomObjPtr privdom = dom->privateData; @@ -4072,12 +4073,13 @@ int prlsdkMigrate(virDomainObjPtr dom, virURIPtr uri, char uuidstr[VIR_UUID_STRING_BUFLEN + 2]; prlsdkUUIDFormat(session_uuid, uuidstr); - job = PrlVm_MigrateEx(privdom->sdkdom, uri->server, uri->port, uuidstr, - "", /* use default dir for migrated instance bundle */ - PRLSDK_MIGRATION_FLAGS, - 0, /* reserved flags */ - PRL_TRUE /* don't ask for 
confirmations */ - ); + job = PrlVm_MigrateWithRenameEx(privdom->sdkdom, uri->server, uri->port, uuidstr, + dname == NULL ? "" : dname, + "", /* use default dir for migrated instance bundle */ + PRLSDK_MIGRATION_FLAGS, + 0, /* reserved flags */ + PRL_TRUE /* don't ask for confirmations */ + ); if (PRL_FAILED(waitJob(job))) goto cleanup; diff --git a/src/vz/vz_sdk.h b/src/vz/vz_sdk.h index d3f0caf..0aa70b3 100644 --- a/src/vz/vz_sdk.h +++ b/src/vz/vz_sdk.h @@ -77,4 +77,7 @@ prlsdkGetVcpuStats(virDomainObjPtr dom, int idx, unsigned long long *time); int prlsdkGetMemoryStats(virDomainObjPtr dom, virDomainMemoryStatPtr stats, unsigned int nr_stats); int -prlsdkMigrate(virDomainObjPtr dom, virURIPtr uri, const char unsigned *session_uuid); +prlsdkMigrate(virDomainObjPtr dom, + virURIPtr uri, + const char unsigned *session_uuid, + const char *dname); -- 1.7.1

On 08/25/2015 12:04 PM, nshirokovskiy@virtuozzo.com wrote:
From: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
ACK

From: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com> The migration API has a lot of options. This patch's intention is to provide support for those options that can be trivially supported and to give an estimation for the support of the other options in this commit message.

I. Supported.

1. VIR_MIGRATE_COMPRESSED. Means 'use compression when migrating domain memory'. It is supported, but in a quite uncommon way: vz migration demands that this option be set. This is because vz is hardcoded to move VM memory using compression, so anyone who wants to migrate a vz domain should set this option, thus declaring awareness that compression is used. Why bother? We could just support this option and ignore it when it is not set, or not support it at all, as we can't change the behaviour in this aspect anyway. Well, I believe that this option is, first, inherent to the hypervisor implementation, as we have the task of moving domain memory to a different place, and, second, we have a tradeoff here between cpu and network resources, and some management should choose the strategy via this option. If we choose the ignoring or unsupporting implementation then this option has too vague a meaning. Let's go into more detail. First, if we ignore the situation where the option is not set, then we lead the user into the fallacy that the vz hypervisor doesn't use compression and thus has lower cpu usage. The second approach is to not support the option. The main reason not to follow this way is that 'not supported and not set' is indistinguishable from 'supported and not set', and thus again fools the user.

2. VIR_MIGRATE_LIVE. Means 'reduce domain downtime by suspending it as late as possible', which technically means 'migrate as much domain memory as possible before suspending'. Supported in the same manner as VIR_MIGRATE_COMPRESSED, as both vz VMs and CTs are always migrated via the live scheme. One may be fooled by the vz sdk migration api flags PVMT_HOT_MIGRATION (aka live) and PVMT_WARM_MIGRATION (aka normal). The current implementation ignores these flags and always uses live migration.

3. VIR_MIGRATE_PERSIST_DEST, VIR_MIGRATE_UNDEFINE_SOURCE. These two come together. Vz domains are always persistent, so we have to demand that the option VIR_MIGRATE_PERSIST_DEST is set and that VIR_MIGRATE_UNDEFINE_SOURCE is not (and the latter is done just by not supporting it).

4. VIR_MIGRATE_PAUSED. Means 'don't resume the domain on the destination'. This is trivially supported as we have a corresponding option in vz migration.

5. VIR_MIGRATE_OFFLINE. Means 'migrate only the XML definition of a domain'. It is a forcing option, that is, it is ignored if the domain is running and must be set to migrate a stopped domain. The vz implementation follows this informal definition with one exception: non-shared disks will be migrated too. This decision is on par with the VIR_MIGRATE_NON_SHARED_DISK consideration (see the last part of these notes).

All that said, the minimal command to migrate a vz domain looks like this: migrate --direct $DOMAIN $STUB --migrateuri $DESTINATION --live --persistent --compressed. Not good. If you want to just migrate a domain without further details you will get error messages until you add these options to the command line. I think there is a lack of a notion of 'default' behaviour in all these aspects. If we had it we could just issue: migrate $DOMAIN $DESTINATION. For vz this would give default compression, for example; for qemu - default no-compression. Then we could have flags --compressed and --no-compressed, and for vz the latter would give an 'unsupported' error.

II. Unsupported.

1. VIR_MIGRATE_UNSAFE. Vz disks always have 'cache=none' set (this is not reflected in the current version of the vz driver and will be fixed soon), so we do not need to support this option.

2. VIR_MIGRATE_CHANGE_PROTECTION. Unsupported as we have no appropriate support from the vz sdk. Although we have locks, they are advisory and can't help us.

3. VIR_MIGRATE_TUNNELLED. Means 'use the libvirtd-to-libvirtd connection to pass hypervisor migration traffic'. Unsupported as it is not among the vz hypervisor use cases.

4. p2p migration, which is exposed via the *toURI* interfaces with the VIR_MIGRATE_PEER2PEER flag set. It doesn't make sense for vz migration as it is a variant of managed migration, which is qemu specific.

5. VIR_MIGRATE_ABORT_ON_ERROR, VIR_MIGRATE_AUTO_CONVERGE, VIR_MIGRATE_RDMA_PIN_ALL, VIR_MIGRATE_NON_SHARED_INC, VIR_MIGRATE_PARAM_DEST_XML, VIR_MIGRATE_PARAM_BANDWIDTH, VIR_MIGRATE_PARAM_GRAPHICS_URI, VIR_MIGRATE_PARAM_LISTEN_ADDRESS, VIR_MIGRATE_PARAM_MIGRATE_DISKS. Without further discussion: they are just not use cases of the vz hypervisor.

III. Undecided and thus unsupported.

6. VIR_MIGRATE_NON_SHARED_DISK. The meaning of this option is not clear to me. Look: if a qemu domain has a non-shared disk, it will refuse to migrate; but if you specify this option it will refuse too. You need to create the image file for the disk on the destination side; only after that can you migrate. Unexpectedly, the existence of this file is enough to migrate without the option too. In that case you get a domain on the destination with a disk image unrelated to the source one, and this in the case of live migration! Looks like a bug. Ok, imagine this is fixed so that migration of a non-shared disk is only possible with actual copying of the disk to the destination. What do we get from this option? We get that you have to specify this option if you want to migrate a domain with a non-shared disk, like some forcing option. Maybe it is a good approach, but it is incompatible with vz. Vz doesn't demand any user awareness of the migration of non-shared disks. And in this case the incompatibility can not be resolved as easily as for the 'compressed' option, as this option depends on the classification of disks as shared/non-shared, which is done inside vz.
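The flag rules laid out above (PERSIST_DEST always required; LIVE and COMPRESSED required for a running domain; OFFLINE required, and LIVE/COMPRESSED/PAUSED rejected, for a stopped one) can be restated in a self-contained form. The flag bit values below are copied from libvirt's virDomainMigrateFlags of that era, and the function names mirror the patch, but this is an illustration rather than the driver code itself:

```c
/* Bit values as in libvirt's virDomainMigrateFlags at the time of these
 * patches (include/libvirt/libvirt-domain.h). */
#define VIR_MIGRATE_LIVE         (1u << 0)
#define VIR_MIGRATE_PERSIST_DEST (1u << 3)
#define VIR_MIGRATE_PAUSED       (1u << 5)
#define VIR_MIGRATE_OFFLINE      (1u << 10)
#define VIR_MIGRATE_COMPRESSED   (1u << 11)

/* vz domains are always persistent on the destination, so the caller
 * must acknowledge that with VIR_MIGRATE_PERSIST_DEST. */
static int vz_check_common_flags(unsigned int flags)
{
    return (flags & VIR_MIGRATE_PERSIST_DEST) ? 0 : -1;
}

/* vz always migrates running domains live and with compression, so the
 * caller must declare both. */
static int vz_check_online_flags(unsigned int flags)
{
    if (!(flags & VIR_MIGRATE_LIVE))
        return -1;
    if (!(flags & VIR_MIGRATE_COMPRESSED))
        return -1;
    return 0;
}

/* A stopped domain can only be migrated offline; live/compressed/paused
 * make no sense for a definition-only migration. */
static int vz_check_offline_flags(unsigned int flags)
{
    if (!(flags & VIR_MIGRATE_OFFLINE))
        return -1;
    if (flags & (VIR_MIGRATE_LIVE | VIR_MIGRATE_COMPRESSED | VIR_MIGRATE_PAUSED))
        return -1;
    return 0;
}
```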
vz: implement misc migration options Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com> --- src/vz/vz_driver.c | 128 +++++++++++++++++++++++++++++++++++++++++++++++++++- src/vz/vz_sdk.c | 8 +++- src/vz/vz_sdk.h | 3 +- 3 files changed, 134 insertions(+), 5 deletions(-) diff --git a/src/vz/vz_driver.c b/src/vz/vz_driver.c index dc26b09..b12592b 100644 --- a/src/vz/vz_driver.c +++ b/src/vz/vz_driver.c @@ -1465,7 +1465,120 @@ vzMakeVzUri(const char *connuri_str) return vzuri; } -#define VZ_MIGRATION_FLAGS (0) +virURIPtr +vzParseVzURI(const char *uri_str) +{ + virURIPtr uri = NULL; + int ret = -1; + + if (!(uri = virURIParse(uri_str))) + goto cleanup; + + if (uri->scheme == NULL || uri->server == NULL) { + virReportError(VIR_ERR_INVALID_ARG, + _("scheme and host are mandatory vz migration URI: %s"), + uri_str); + goto cleanup; + } + + if (uri->user != NULL || uri->path != NULL || + uri->query != NULL || uri->fragment != NULL) { + virReportError(VIR_ERR_INVALID_ARG, + _("only scheme, host and port are supported in " + "vz migration URI: %s"), uri_str); + goto cleanup; + } + + if (STRNEQ(uri->scheme, "tcp")) { + virReportError(VIR_ERR_ARGUMENT_UNSUPPORTED, + _("unsupported scheme %s in migration URI %s"), + uri->scheme, uri_str); + goto cleanup; + } + + ret = 0; + + cleanup: + if (ret < 0) { + virURIFree(uri); + uri = NULL; + } + + return uri; +} + +#define VZ_MIGRATION_FLAGS \ + (VIR_MIGRATE_OFFLINE | \ + VIR_MIGRATE_LIVE | \ + VIR_MIGRATE_COMPRESSED | \ + VIR_MIGRATE_PERSIST_DEST | \ + VIR_MIGRATE_PAUSED) + +/* TODO this code should be in common place as these + rules follows from options (informal) definitions. + Qemu makes some of these checks on begin phase but not all. 
*/ +int vzCheckOfflineFlags(int flags) +{ + if (!(flags & VIR_MIGRATE_OFFLINE)) { + virReportError(VIR_ERR_OPERATION_INVALID, + "%s", _("domain is not running")); + return -1; + } + + if (flags & VIR_MIGRATE_LIVE) { + virReportError(VIR_ERR_OPERATION_INVALID, "%s", + _("live offline migration does not " + "make sense")); + return -1; + } + + if (flags & VIR_MIGRATE_COMPRESSED) { + virReportError(VIR_ERR_OPERATION_INVALID, "%s", + _("compressed offline migration does not " + "make sense")); + return -1; + } + + if (flags & VIR_MIGRATE_PAUSED) { + virReportError(VIR_ERR_OPERATION_INVALID, "%s", + _("paused offline migration does not " + "make sense")); + return -1; + } + + return 0; +} + +int vzCheckCommonFlags(int flags) +{ + if (!(flags & VIR_MIGRATE_PERSIST_DEST)) { + virReportError(VIR_ERR_INVALID_ARG, "%s", + _("flags VIR_MIGRATE_PERSIST_DEST must be set" + " for vz migration")); + return -1; + } + + return 0; +} + +int vzCheckOnlineFlags(int flags) +{ + if (!(flags & VIR_MIGRATE_LIVE)) { + virReportError(VIR_ERR_INVALID_ARG, "%s", + _("flags VIR_MIGRATE_LIVE must be set" + " for online vz migration")); + return -1; + } + + if (!(flags & VIR_MIGRATE_COMPRESSED)) { + virReportError(VIR_ERR_INVALID_ARG, "%s", + _("flags VIR_MIGRATE_COMPRESSED must be set" + " for online vz migration")); + return -1; + } + + return 0; +} static int vzDomainMigratePerform3(virDomainPtr domain, @@ -1491,12 +1604,23 @@ vzDomainMigratePerform3(virDomainPtr domain, virCheckFlags(VZ_MIGRATION_FLAGS, -1); + if (vzCheckCommonFlags(flags)) + goto cleanup; + if (!(vzuri = vzMakeVzUri(uri))) goto cleanup; if (!(dom = vzDomObjFromDomain(domain))) goto cleanup; + if (virDomainObjIsActive(dom)) + ret = vzCheckOnlineFlags(flags); + else + ret = vzCheckOfflineFlags(flags); + + if (ret < 0) + goto cleanup; + dconn = virConnectOpen(uri); if (dconn == NULL) { virReportError(VIR_ERR_OPERATION_FAILED, @@ -1513,7 +1637,7 @@ vzDomainMigratePerform3(virDomainPtr domain, if (vzParseCookie(cookie, 
session_uuid) < 0) goto cleanup; - if (prlsdkMigrate(dom, vzuri, session_uuid, dname) < 0) + if (prlsdkMigrate(dom, vzuri, session_uuid, dname, flags) < 0) goto cleanup; virDomainObjListRemove(privconn->domains, dom); diff --git a/src/vz/vz_sdk.c b/src/vz/vz_sdk.c index 89a2429..9a2b5df 100644 --- a/src/vz/vz_sdk.c +++ b/src/vz/vz_sdk.c @@ -4065,18 +4065,22 @@ prlsdkGetMemoryStats(virDomainObjPtr dom, int prlsdkMigrate(virDomainObjPtr dom, virURIPtr uri, const unsigned char *session_uuid, - const char *dname) + const char *dname, unsigned int flags) { int ret = -1; vzDomObjPtr privdom = dom->privateData; PRL_HANDLE job = PRL_INVALID_HANDLE; char uuidstr[VIR_UUID_STRING_BUFLEN + 2]; + PRL_UINT32 vzflags = PRLSDK_MIGRATION_FLAGS; + + if (flags & VIR_MIGRATE_PAUSED) + vzflags |= PVMT_DONT_RESUME_VM; prlsdkUUIDFormat(session_uuid, uuidstr); job = PrlVm_MigrateWithRenameEx(privdom->sdkdom, uri->server, uri->port, uuidstr, dname == NULL ? "" : dname, "", /* use default dir for migrated instance bundle */ - PRLSDK_MIGRATION_FLAGS, + vzflags, 0, /* reserved flags */ PRL_TRUE /* don't ask for confirmations */ ); diff --git a/src/vz/vz_sdk.h b/src/vz/vz_sdk.h index 0aa70b3..5b26b70 100644 --- a/src/vz/vz_sdk.h +++ b/src/vz/vz_sdk.h @@ -80,4 +80,5 @@ int prlsdkMigrate(virDomainObjPtr dom, virURIPtr uri, const char unsigned *session_uuid, - const char *dname); + const char *dname, + unsigned int flags); -- 1.7.1

On 08/25/2015 12:04 PM, nshirokovskiy@virtuozzo.com wrote:
From: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
Migration API has a lot of options. The intention of this patch is to provide support for those options that can be trivially supported and to give an estimation of support for the other options in this commit message.
I. Supported.
1. VIR_MIGRATE_COMPRESSED. Means 'use compression when migrating domain memory'. It is supported, but in a quite uncommon way: vz migration demands that this option be set. This is because vz is hardcoded to move VM memory using compression, so anyone who wants to migrate a vz domain should set this option, thus declaring awareness that compression is used.
Why bother? Maybe just support this option and ignore it if it is not set, or not support it at all, as we can't change behaviour in this aspect. Well, I believe that this option is, first, inherent to the hypervisor implementation, as we have the task of moving domain memory to a different place, and, second, that we have a tradeoff here between cpu and network resources, and some management should choose the strategy via this option. If we choose the ignoring or unsupporting implementation then this option has too vague a meaning. Let's go into more detail.
First, if we ignore the situation where the option is not set, then we lead the user into the fallacy that the vz hypervisor doesn't use compression and thus has lower cpu usage. The second approach is to not support the option. The main reason not to follow this way is that 'not supported and not set' is indistinguishable from 'supported and not set', which again fools the user.
2. VIR_MIGRATE_LIVE. Means 'reduce domain downtime by suspending it as late as possible', which technically means 'migrate as much domain memory as possible before suspending'. Supported in the same manner as VIR_MIGRATE_COMPRESSED, as both vz VMs and CTs are always migrated via the live scheme.
One may be fooled by the vz sdk migration api flags PVMT_HOT_MIGRATION (aka live) and PVMT_WARM_MIGRATION (aka normal). The current implementation ignores these flags and always uses live migration.
3. VIR_MIGRATE_PERSIST_DEST, VIR_MIGRATE_UNDEFINE_SOURCE. These two come together. Vz domains are always persistent, so we have to demand that VIR_MIGRATE_PERSIST_DEST is set and that VIR_MIGRATE_UNDEFINE_SOURCE is not (the latter simply by not supporting it).
4. VIR_MIGRATE_PAUSED. Means 'don't resume domain on destination'. This is trivially supported as we have a corresponding option in vz migration.
5. VIR_MIGRATE_OFFLINE. Means 'migrate only the XML definition of a domain'. It is a forcing option, that is, it is ignored if the domain is running and must be set to migrate a stopped domain. The vz implementation follows this informal definition with one exception: non-shared disks will be migrated too. This decision is on par with the VIR_MIGRATE_NON_SHARED_DISK considerations (see the last part of these notes).
All that said, the minimal command to migrate a vz domain looks like this:
migrate --direct $DOMAIN $STUB --migrateuri $DESTINATION --live --persistent --compressed.
Not good. Say you just want to migrate a domain without further details: you will get error messages until you add these options to the command line. I think there is a lack of a notion of 'default' behaviour in all these aspects. If we had it we could just issue:
migrate $DOMAIN $DESTINATION
For vz this would give compression by default, for example, and for qemu no compression by default. Then we could have flags --compressed and --no-compressed, and for vz the latter would give an unsupported error.
II. Unsupported.
1. VIR_MIGRATE_UNSAFE. Vz disks always have 'cache=none' set (this is not reflected in the current version of the vz driver and will be fixed soon). So we do not need to support this option.
2. VIR_MIGRATE_CHANGE_PROTECTION. Unsupported as we have no appropriate support from the vz sdk. Although we have locks, they are advisory and can't help us.
3. VIR_MIGRATE_TUNNELLED. Means 'use a libvirtd-to-libvirtd connection to pass hypervisor migration traffic'. Unsupported as it is not among vz hypervisor use cases.
4. p2p migration, which is exposed via the *toURI* interfaces with the VIR_MIGRATE_PEER2PEER flag set. It doesn't make sense for vz migration as it is a variant of managed migration, which is qemu specific.
5. VIR_MIGRATE_ABORT_ON_ERROR, VIR_MIGRATE_AUTO_CONVERGE, VIR_MIGRATE_RDMA_PIN_ALL, VIR_MIGRATE_NON_SHARED_INC, VIR_MIGRATE_PARAM_DEST_XML, VIR_MIGRATE_PARAM_BANDWIDTH, VIR_MIGRATE_PARAM_GRAPHICS_URI, VIR_MIGRATE_PARAM_LISTEN_ADDRESS, VIR_MIGRATE_PARAM_MIGRATE_DISKS. Without further discussion: they are just not use cases of the vz hypervisor.
III. Undecided and thus unsupported.
6. VIR_MIGRATE_NON_SHARED_DISK. The meaning of this option is not clear to me. Look: if a qemu domain has a non-shared disk then it will refuse to migrate. But after you specify this option it will refuse too. You need to create an image file for the disk on the destination side; only after that can you migrate. Unexpectedly, the existence of this file is enough to migrate without the option too. In that case you will get a domain on the destination with a disk image unrelated to the source one, and this in the case of live migration! Looks like a bug. Ok, imagine this is fixed so that migration of a non-shared disk is only possible with actually copying the disk to the destination. What do we get from this option? We get that you have to specify this option if you want to migrate a domain with a non-shared disk, like some forcing option. Maybe it is a good approach, but it is incompatible with vz. Vz doesn't demand any user awareness of migration of non-shared disks. And in this case the incompatibility cannot be easily resolved as it was for the 'compressed' option, because it depends on the shared/non-shared classification of disks, which is done inside vz.

vz: implement misc migration options
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
---
 src/vz/vz_driver.c | 128 +++++++++++++++++++++++++++++++++++++++++++++++++++-
 src/vz/vz_sdk.c    |   8 +++-
 src/vz/vz_sdk.h    |   3 +-
 3 files changed, 134 insertions(+), 5 deletions(-)

diff --git a/src/vz/vz_driver.c b/src/vz/vz_driver.c
index dc26b09..b12592b 100644
--- a/src/vz/vz_driver.c
+++ b/src/vz/vz_driver.c
@@ -1465,7 +1465,120 @@ vzMakeVzUri(const char *connuri_str)
     return vzuri;
 }

-#define VZ_MIGRATION_FLAGS (0)
+virURIPtr
+vzParseVzURI(const char *uri_str)
+{
+    virURIPtr uri = NULL;
+    int ret = -1;
+
+    if (!(uri = virURIParse(uri_str)))
+        goto cleanup;
+
+    if (uri->scheme == NULL || uri->server == NULL) {
+        virReportError(VIR_ERR_INVALID_ARG,
+                       _("scheme and host are mandatory vz migration URI: %s"),
+                       uri_str);
+        goto cleanup;
+    }
+
+    if (uri->user != NULL || uri->path != NULL ||
+        uri->query != NULL || uri->fragment != NULL) {
+        virReportError(VIR_ERR_INVALID_ARG,
+                       _("only scheme, host and port are supported in "
+                         "vz migration URI: %s"), uri_str);
+        goto cleanup;
+    }
+
+    if (STRNEQ(uri->scheme, "tcp")) {
+        virReportError(VIR_ERR_ARGUMENT_UNSUPPORTED,
+                       _("unsupported scheme %s in migration URI %s"),
+                       uri->scheme, uri_str);
+        goto cleanup;
+    }
+
Try URI vz+ssh://hostname:22/system, there is some bug, prlsdk returns an error after migration:

2015-08-25 15:36:18.923+0000: 92650: error : virDomainGetJobInfo:8889 : this function is not supported by the connection driver: virDomainGetJobInfo
2015-08-25 15:36:19.169+0000: 92649: error : prlsdkMigrate:4089 : internal error: Unable to connect to the server "10.27.255.18". An invalid response has been received from the server. Make sure that Parallels Server is running on the server "10.27.255.18" and is up to date and try again.
+    ret = 0;
+
+ cleanup:
+    if (ret < 0) {
+        virURIFree(uri);
+        uri = NULL;
+    }
+
+    return uri;
+}
+
+#define VZ_MIGRATION_FLAGS          \
+    (VIR_MIGRATE_OFFLINE |          \
+     VIR_MIGRATE_LIVE |             \
+     VIR_MIGRATE_COMPRESSED |       \
+     VIR_MIGRATE_PERSIST_DEST |     \
+     VIR_MIGRATE_PAUSED)
+
+/* TODO this code should be in common place as these
+   rules follows from options (informal) definitions.
+   Qemu makes some of these checks on begin phase but not all. */
+int vzCheckOfflineFlags(int flags)
+{
+    if (!(flags & VIR_MIGRATE_OFFLINE)) {
+        virReportError(VIR_ERR_OPERATION_INVALID,
+                       "%s", _("domain is not running"));
+        return -1;
+    }
+
+    if (flags & VIR_MIGRATE_LIVE) {
+        virReportError(VIR_ERR_OPERATION_INVALID, "%s",
+                       _("live offline migration does not "
+                         "make sense"));
+        return -1;
+    }
+
+    if (flags & VIR_MIGRATE_COMPRESSED) {
+        virReportError(VIR_ERR_OPERATION_INVALID, "%s",
+                       _("compressed offline migration does not "
+                         "make sense"));
+        return -1;
+    }
+
+    if (flags & VIR_MIGRATE_PAUSED) {
+        virReportError(VIR_ERR_OPERATION_INVALID, "%s",
+                       _("paused offline migration does not "
+                         "make sense"));
+        return -1;
+    }
+
+    return 0;
+}
+
+int vzCheckCommonFlags(int flags)
+{
+    if (!(flags & VIR_MIGRATE_PERSIST_DEST)) {
+        virReportError(VIR_ERR_INVALID_ARG, "%s",
+                       _("flags VIR_MIGRATE_PERSIST_DEST must be set"
+                         " for vz migration"));
+        return -1;
+    }
+
+    return 0;
+}
+
+int vzCheckOnlineFlags(int flags)
+{
+    if (!(flags & VIR_MIGRATE_LIVE)) {
+        virReportError(VIR_ERR_INVALID_ARG, "%s",
+                       _("flags VIR_MIGRATE_LIVE must be set"
+                         " for online vz migration"));
+        return -1;
+    }
+
+    if (!(flags & VIR_MIGRATE_COMPRESSED)) {
+        virReportError(VIR_ERR_INVALID_ARG, "%s",
+                       _("flags VIR_MIGRATE_COMPRESSED must be set"
+                         " for online vz migration"));
+        return -1;
+    }
+
+    return 0;
+}

 static int
 vzDomainMigratePerform3(virDomainPtr domain,
@@ -1491,12 +1604,23 @@ vzDomainMigratePerform3(virDomainPtr domain,

     virCheckFlags(VZ_MIGRATION_FLAGS, -1);

+    if (vzCheckCommonFlags(flags))
+        goto cleanup;
+
     if (!(vzuri = vzMakeVzUri(uri)))
         goto cleanup;

     if (!(dom = vzDomObjFromDomain(domain)))
         goto cleanup;

+    if (virDomainObjIsActive(dom))
+        ret = vzCheckOnlineFlags(flags);
+    else
+        ret = vzCheckOfflineFlags(flags);
+
+    if (ret < 0)
+        goto cleanup;
+
     dconn = virConnectOpen(uri);
     if (dconn == NULL) {
         virReportError(VIR_ERR_OPERATION_FAILED,
@@ -1513,7 +1637,7 @@ vzDomainMigratePerform3(virDomainPtr domain,
     if (vzParseCookie(cookie, session_uuid) < 0)
         goto cleanup;

-    if (prlsdkMigrate(dom, vzuri, session_uuid, dname) < 0)
+    if (prlsdkMigrate(dom, vzuri, session_uuid, dname, flags) < 0)
         goto cleanup;

     virDomainObjListRemove(privconn->domains, dom);
diff --git a/src/vz/vz_sdk.c b/src/vz/vz_sdk.c
index 89a2429..9a2b5df 100644
--- a/src/vz/vz_sdk.c
+++ b/src/vz/vz_sdk.c
@@ -4065,18 +4065,22 @@ prlsdkGetMemoryStats(virDomainObjPtr dom,

 int prlsdkMigrate(virDomainObjPtr dom, virURIPtr uri,
                   const unsigned char *session_uuid,
-                  const char *dname)
+                  const char *dname, unsigned int flags)
 {
     int ret = -1;
     vzDomObjPtr privdom = dom->privateData;
     PRL_HANDLE job = PRL_INVALID_HANDLE;
     char uuidstr[VIR_UUID_STRING_BUFLEN + 2];
+    PRL_UINT32 vzflags = PRLSDK_MIGRATION_FLAGS;
+
+    if (flags & VIR_MIGRATE_PAUSED)
+        vzflags |= PVMT_DONT_RESUME_VM;

     prlsdkUUIDFormat(session_uuid, uuidstr);
     job = PrlVm_MigrateWithRenameEx(privdom->sdkdom, uri->server, uri->port, uuidstr,
                                     dname == NULL ? "" : dname,
                                     "", /* use default dir for migrated instance bundle */
-                                    PRLSDK_MIGRATION_FLAGS,
+                                    vzflags,
                                     0, /* reserved flags */
                                     PRL_TRUE /* don't ask for confirmations */
                                     );
diff --git a/src/vz/vz_sdk.h b/src/vz/vz_sdk.h
index 0aa70b3..5b26b70 100644
--- a/src/vz/vz_sdk.h
+++ b/src/vz/vz_sdk.h
@@ -80,4 +80,5 @@
 int prlsdkMigrate(virDomainObjPtr dom, virURIPtr uri,
                   const char unsigned *session_uuid,
-                  const char *dname);
+                  const char *dname,
+                  unsigned int flags);

On 25.08.2015 18:54, Dmitry Guryanov wrote:
Try URI vz+ssh://hostname:22/system, there is some bug, prlsdk returns error after migration:

It is ok. The uri you pass with --migrateuri is a hypervisor connection uri and the port should be the appropriate one, not the ssh one.

From: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>

vz puts uuids into curly braces. Simply introduce a new constant to reflect this and get rid of the magic +2 in the code.

Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
---
 src/vz/vz_sdk.c   | 12 ++++++------
 src/vz/vz_utils.h |  2 ++
 2 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/src/vz/vz_sdk.c b/src/vz/vz_sdk.c
index 9a2b5df..bafc6e4 100644
--- a/src/vz/vz_sdk.c
+++ b/src/vz/vz_sdk.c
@@ -236,7 +236,7 @@ prlsdkConnect(vzConnPtr privconn)
     PRL_HANDLE job = PRL_INVALID_HANDLE;
     PRL_HANDLE result = PRL_INVALID_HANDLE;
     PRL_HANDLE response = PRL_INVALID_HANDLE;
-    char session_uuid[VIR_UUID_STRING_BUFLEN + 2];
+    char session_uuid[VZ_UUID_STRING_BUFLEN];
     PRL_UINT32 buflen = ARRAY_CARDINALITY(session_uuid);

     pret = PrlSrv_Create(&privconn->server);
@@ -316,7 +316,7 @@ prlsdkUUIDFormat(const unsigned char *uuid, char *uuidstr)
 static PRL_HANDLE
 prlsdkSdkDomainLookupByUUID(vzConnPtr privconn, const unsigned char *uuid)
 {
-    char uuidstr[VIR_UUID_STRING_BUFLEN + 2];
+    char uuidstr[VZ_UUID_STRING_BUFLEN];
     PRL_HANDLE sdkdom = PRL_INVALID_HANDLE;

     prlsdkUUIDFormat(uuid, uuidstr);
@@ -365,7 +365,7 @@ prlsdkGetDomainIds(PRL_HANDLE sdkdom,
                    char **name,
                    unsigned char *uuid)
 {
-    char uuidstr[VIR_UUID_STRING_BUFLEN + 2];
+    char uuidstr[VZ_UUID_STRING_BUFLEN];
     PRL_UINT32 len;
     PRL_RESULT pret;

@@ -1722,7 +1722,7 @@ prlsdkEventsHandler(PRL_HANDLE prlEvent, PRL_VOID_PTR opaque)
     vzConnPtr privconn = opaque;
     PRL_RESULT pret = PRL_ERR_FAILURE;
     PRL_HANDLE_TYPE handleType;
-    char uuidstr[VIR_UUID_STRING_BUFLEN + 2];
+    char uuidstr[VZ_UUID_STRING_BUFLEN];
     unsigned char uuid[VIR_UUID_BUFLEN];
     PRL_UINT32 bufsize = ARRAY_CARDINALITY(uuidstr);
     PRL_EVENT_TYPE prlEventType;
@@ -3480,7 +3480,7 @@ prlsdkDoApplyConfig(virConnectPtr conn,
 {
     PRL_RESULT pret;
     size_t i;
-    char uuidstr[VIR_UUID_STRING_BUFLEN + 2];
+    char uuidstr[VZ_UUID_STRING_BUFLEN];
     bool needBoot = true;
     char *mask = NULL;

@@ -4070,7 +4070,7 @@ int
 prlsdkMigrate(virDomainObjPtr dom, virURIPtr uri,
     int ret = -1;
     vzDomObjPtr privdom = dom->privateData;
     PRL_HANDLE job = PRL_INVALID_HANDLE;
-    char uuidstr[VIR_UUID_STRING_BUFLEN + 2];
+    char uuidstr[VZ_UUID_STRING_BUFLEN];
     PRL_UINT32 vzflags = PRLSDK_MIGRATION_FLAGS;

     if (flags & VIR_MIGRATE_PAUSED)
diff --git a/src/vz/vz_utils.h b/src/vz/vz_utils.h
index fe54b25..2a59426 100644
--- a/src/vz/vz_utils.h
+++ b/src/vz/vz_utils.h
@@ -55,6 +55,8 @@
 # define PARALLELS_REQUIRED_BRIDGED_NETWORK "Bridged"
 # define PARALLELS_BRIDGED_NETWORK_TYPE "bridged"

+# define VZ_UUID_STRING_BUFLEN (VIR_UUID_STRING_BUFLEN + 2)
+
 struct _vzConn {
     virMutex lock;
--
1.7.1

On 08/25/2015 12:04 PM, nshirokovskiy@virtuozzo.com wrote:
From: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
ACK

On 08/25/2015 12:04 PM, nshirokovskiy@virtuozzo.com wrote:
NOTE that minimal command to migrate vz domain is like next:
virsh -c vz:///system migrate --direct 200 shiny0 --migrateuri vz+ssh://shiny0/system --live --persistent --compressed
It seems there is something wrong with the virsh migrate command; in my environment only this command worked:

virsh -c vz+unix:///system migrate --direct 1001 --migrateuri vz+ssh://10.27.255.18/system --desturi vz+ssh://10.27.255.18/system --live --persistent --compressed

On 08/25/2015 04:56 PM, Dmitry Guryanov wrote:
On 08/25/2015 12:04 PM, nshirokovskiy@virtuozzo.com wrote:
NOTE that minimal command to migrate vz domain is like next:
virsh -c vz:///system migrate --direct 200 shiny0 --migrateuri vz+ssh://shiny0/system --live --persistent --compressed
It seems there is something wrong with virsh migrate command, in my environment only this command worked:
virsh -c vz+unix:///system migrate --direct 1001 --migrateuri vz+ssh://10.27.255.18/system --desturi vz+ssh://10.27.255.18/system --live --persistent --compressed
Sorry, you're right, your command works.
participants (4):
- Daniel P. Berrange
- Dmitry Guryanov
- Nikolay Shirokovskiy
- nshirokovskiy@virtuozzo.com