[PATCH v2 0/9] qemu: Allow migration over UNIX sockets

v2:
  - Allow TLS and parallel migration as well
  - Use simpler unix socket URIs

KubeVirt would like to use this feature.  For more information see
individual commits and changes in manpages and documentation.

Resolves: https://bugzilla.redhat.com/1638889

Martin Kletzander (9):
  qemu: Use g_autofree in qemuMigrationSrcConnect
  qemu: Rework qemuMigrationSrcConnect
  virsh: Reuse existing variable when parsing migrate --disks-port
  qemu: Rework starting NBD server for migration
  tests: Add simple test for virDomainMigrateCheckNotLocal
  qemu: Allow NBD migration over UNIX socket
  peer2peer migration: allow connecting to local sockets
  qemu: Allow migration over UNIX socket
  news: qemu: Allow migration over UNIX sockets

 NEWS.rst                         |   6 +
 docs/manpages/virsh.rst          |  33 ++-
 docs/migration.html.in           |  33 +++
 include/libvirt/libvirt-domain.h |  13 ++
 scripts/apibuild.py              |   1 +
 src/libvirt-domain.c             |  11 +-
 src/libvirt_internal.h           |   2 +
 src/libvirt_private.syms         |   1 +
 src/qemu/qemu_driver.c           |  33 ++-
 src/qemu/qemu_migration.c        | 354 +++++++++++++++++++++++--------
 src/qemu/qemu_migration.h        |   3 +
 src/qemu/qemu_migration_cookie.c |   3 +-
 src/qemu/qemu_migration_params.c |   9 +
 src/qemu/qemu_migration_params.h |   3 +
 src/qemu/qemu_monitor.c          |  15 ++
 src/qemu/qemu_monitor.h          |   4 +
 src/remote/remote_driver.c       |   8 +-
 src/util/viruri.c                |  30 +++
 src/util/viruri.h                |   2 +
 tests/meson.build                |   1 +
 tests/virmigtest.c               |  91 ++++++++
 tools/virsh-domain.c             |  19 +-
 22 files changed, 570 insertions(+), 105 deletions(-)
 create mode 100644 tests/virmigtest.c

-- 
2.28.0
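For context, a sketch of the invocation this series aims to enable, with a management application (e.g. KubeVirt) forwarding UNIX sockets between the hosts. The option names come from the patches in this series; the socket paths and domain name are purely illustrative:

```shell
# Illustrative sketch, not a tested command line: all three channels
# (libvirt connection, main migration stream, NBD disk stream) run over
# UNIX sockets that the management layer tunnels to the destination host.
virsh migrate --live --p2p --copy-storage-all \
    --desturi 'qemu+unix:///system?socket=/tmp/remote-libvirt.sock' \
    --migrateuri 'unix:///tmp/migrate.sock' \
    --disks-uri 'unix:///tmp/migrate-disks.sock' \
    mydomain
```

The management application is responsible for making each socket path on the source actually connect through to the destination.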

Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
---
 src/qemu/qemu_migration.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index 142faa2cf997..60ddfde65d46 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -3369,7 +3369,7 @@ qemuMigrationSrcConnect(virQEMUDriverPtr driver,
 {
     virNetSocketPtr sock;
     const char *host;
-    char *port = NULL;
+    g_autofree char *port = NULL;
     int ret = -1;
 
     host = spec->dest.host.name;
@@ -3400,7 +3400,6 @@ qemuMigrationSrcConnect(virQEMUDriverPtr driver,
     ret = 0;
 
  cleanup:
-    VIR_FREE(port);
     if (ret < 0)
         VIR_FORCE_CLOSE(spec->dest.fd.qemu);
     return ret;
-- 
2.28.0

On Tue, Sep 01, 2020 at 16:36:52 +0200, Martin Kletzander wrote:
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>

Instead of saving some data from a union up front and changing an overlaid
struct before using said data, let's just set the new values after they are
decided.  This will increase the readability of future commit(s).

Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
---
 src/qemu/qemu_migration.c | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index 60ddfde65d46..1a3cdb71f424 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -3368,24 +3368,24 @@ qemuMigrationSrcConnect(virQEMUDriverPtr driver,
                         qemuMigrationSpecPtr spec)
 {
     virNetSocketPtr sock;
-    const char *host;
     g_autofree char *port = NULL;
+    int fd_qemu = -1;
     int ret = -1;
 
-    host = spec->dest.host.name;
-    port = g_strdup_printf("%d", spec->dest.host.port);
-
-    spec->destType = MIGRATION_DEST_FD;
-    spec->dest.fd.qemu = -1;
-
     if (qemuSecuritySetSocketLabel(driver->securityManager, vm->def) < 0)
         goto cleanup;
 
-    if (virNetSocketNewConnectTCP(host, port,
+    port = g_strdup_printf("%d", spec->dest.host.port);
+    if (virNetSocketNewConnectTCP(spec->dest.host.name,
+                                  port,
                                   AF_UNSPEC,
                                   &sock) == 0) {
-        spec->dest.fd.qemu = virNetSocketDupFD(sock, true);
+        fd_qemu = virNetSocketDupFD(sock, true);
         virObjectUnref(sock);
     }
+
+    spec->destType = MIGRATION_DEST_FD;
+    spec->dest.fd.qemu = fd_qemu;
+
     if (qemuSecurityClearSocketLabel(driver->securityManager, vm->def) < 0 ||
         spec->dest.fd.qemu == -1)
         goto cleanup;
-- 
2.28.0
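The hazard this rework removes can be shown with a toy sketch (hypothetical types, not libvirt's real qemuMigrationSpec): once the overlapping union member is written, the previously stored host data is gone, so the old ordering had to copy it out up front, while the new ordering decides the fd first and flips the union last.

```c
#include <assert.h>

/* Hypothetical, trimmed-down analogue of the migration spec: the "host"
 * and "fd" destinations share storage through a union. */
typedef struct {
    int dest_type; /* 0 = host, 1 = fd */
    union {
        struct { const char *name; int port; } host;
        struct { int qemu; } fd;
    } dest;
} mig_spec;

/* Old ordering: host data must be copied out of the union *before* the
 * union is switched to the fd member, because writing fd.qemu clobbers
 * the bytes that host.name and host.port occupy. */
static int connect_old_order(mig_spec *spec)
{
    const char *host = spec->dest.host.name; /* save before overwrite */
    int port = spec->dest.host.port;

    spec->dest_type = 1;
    spec->dest.fd.qemu = -1; /* host.name is no longer readable here */

    spec->dest.fd.qemu = (host && port > 0) ? 42 : -1; /* stand-in for connect() */
    return spec->dest.fd.qemu;
}

/* New ordering (what the patch does): decide the fd while the union
 * still holds the host data, then flip the union in one place. */
static int connect_new_order(mig_spec *spec)
{
    int fd_qemu = -1;

    if (spec->dest.host.name && spec->dest.host.port > 0)
        fd_qemu = 42; /* stand-in for connect() */

    spec->dest_type = 1;
    spec->dest.fd.qemu = fd_qemu;
    return spec->dest.fd.qemu;
}
```

Both orderings produce the same result; the point is that the second needs no saved copies, which is what makes the later UNIX-socket changes easier to read.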

On Tue, Sep 01, 2020 at 16:36:53 +0200, Martin Kletzander wrote:
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>

Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
---
 tools/virsh-domain.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/tools/virsh-domain.c b/tools/virsh-domain.c
index 36581d2c31d2..8a4a0d172dc9 100644
--- a/tools/virsh-domain.c
+++ b/tools/virsh-domain.c
@@ -10708,7 +10708,6 @@ doMigrate(void *opaque)
     virDomainPtr dom = NULL;
     const char *desturi = NULL;
     const char *opt = NULL;
-    int disksPort = 0;
     unsigned int flags = 0;
     virshCtrlData *data = opaque;
     vshControl *ctl = data->ctl;
@@ -10756,11 +10755,11 @@ doMigrate(void *opaque)
                                     VIR_MIGRATE_PARAM_LISTEN_ADDRESS, opt) < 0)
         goto save_error;
 
-    if (vshCommandOptInt(ctl, cmd, "disks-port", &disksPort) < 0)
+    if (vshCommandOptInt(ctl, cmd, "disks-port", &intOpt) < 0)
        goto out;
-    if (disksPort &&
+    if (intOpt &&
         virTypedParamsAddInt(&params, &nparams, &maxparams,
-                             VIR_MIGRATE_PARAM_DISKS_PORT, disksPort) < 0)
+                             VIR_MIGRATE_PARAM_DISKS_PORT, intOpt) < 0)
         goto save_error;
 
     if (vshCommandOptStringReq(ctl, cmd, "dname", &opt) < 0)
-- 
2.28.0

On Tue, Sep 01, 2020 at 16:36:54 +0200, Martin Kletzander wrote:
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>

Clean up the semantics by using one extra self-describing variable.  This
also fixes the port allocation when the port is specified.

Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
---
 src/qemu/qemu_migration.c | 37 +++++++++++++++++++++++--------------
 1 file changed, 23 insertions(+), 14 deletions(-)

diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index 1a3cdb71f424..b887185d012d 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -383,12 +383,13 @@ qemuMigrationDstStartNBDServer(virQEMUDriverPtr driver,
 {
     int ret = -1;
     qemuDomainObjPrivatePtr priv = vm->privateData;
-    unsigned short port = 0;
     size_t i;
     virStorageNetHostDef server = {
         .name = (char *)listenAddr, /* cast away const */
         .transport = VIR_STORAGE_NET_HOST_TRANS_TCP,
+        .port = nbdPort,
     };
+    bool server_started = false;
 
     if (nbdPort < 0 || nbdPort > USHRT_MAX) {
         virReportError(VIR_ERR_INVALID_ARG, "%s",
@@ -424,20 +425,27 @@ qemuMigrationDstStartNBDServer(virQEMUDriverPtr driver,
             devicename = diskAlias;
         }
 
+        if (!server_started) {
+            if (server.port) {
+                if (virPortAllocatorSetUsed(server.port) < 0)
+                    goto cleanup;
+            } else {
+                unsigned short port = 0;
+
+                if (virPortAllocatorAcquire(driver->migrationPorts, &port) < 0)
+                    goto cleanup;
+
+                server.port = port;
+            }
+        }
+
         if (qemuDomainObjEnterMonitorAsync(driver, vm,
                                            QEMU_ASYNC_JOB_MIGRATION_IN) < 0)
             goto cleanup;
 
-        if (port == 0) {
-            if (nbdPort)
-                port = nbdPort;
-            else if (virPortAllocatorAcquire(driver->migrationPorts, &port) < 0)
-                goto exit_monitor;
-
-            server.port = port;
-            if (qemuMonitorNBDServerStart(priv->mon, &server, tls_alias) < 0)
-                goto exit_monitor;
-        }
+        if (!server_started &&
+            qemuMonitorNBDServerStart(priv->mon, &server, tls_alias) < 0)
+            goto exit_monitor;
 
         if (qemuMonitorNBDServerAdd(priv->mon, devicename, exportname, true, NULL) < 0)
             goto exit_monitor;
@@ -445,12 +453,13 @@ qemuMigrationDstStartNBDServer(virQEMUDriverPtr driver,
             goto cleanup;
     }
 
-    priv->nbdPort = port;
+    priv->nbdPort = server.port;
+
     ret = 0;
 
  cleanup:
-    if (ret < 0 && nbdPort == 0)
-        virPortAllocatorRelease(port);
+    if (ret < 0)
+        virPortAllocatorRelease(server.port);
     return ret;
 
  exit_monitor:
-- 
2.28.0
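The fix mentioned in the commit message is that a caller-specified port is now registered with the allocator via virPortAllocatorSetUsed and, like an auto-allocated one, released on failure. A toy model of that contract, with a hypothetical bitmap allocator standing in for libvirt's virPortAllocator:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-in for virPortAllocator: a tiny used-port bitmap
 * over a small range, just to show the acquire/set-used/release contract
 * the reworked qemuMigrationDstStartNBDServer relies on. */
enum { PORT_BASE = 49152, PORT_COUNT = 4 };
static bool used[PORT_COUNT];

static int port_set_used(unsigned short port) /* caller picked the port */
{
    if (port < PORT_BASE || port >= PORT_BASE + PORT_COUNT ||
        used[port - PORT_BASE])
        return -1;
    used[port - PORT_BASE] = true;
    return 0;
}

static int port_acquire(unsigned short *port) /* allocator picks the port */
{
    for (int i = 0; i < PORT_COUNT; i++) {
        if (!used[i]) {
            used[i] = true;
            *port = PORT_BASE + i;
            return 0;
        }
    }
    return -1;
}

static void port_release(unsigned short port)
{
    if (port >= PORT_BASE && port < PORT_BASE + PORT_COUNT)
        used[port - PORT_BASE] = false;
}

/* Mirrors the reworked flow: either mark a requested port as used or
 * acquire a fresh one, and release it on failure in both cases. */
static int start_server(unsigned short requested, bool fail, unsigned short *out)
{
    unsigned short port = requested;

    if (port) {
        if (port_set_used(port) < 0)
            return -1;
    } else if (port_acquire(&port) < 0) {
        return -1;
    }

    if (fail) { /* e.g. the monitor command failed */
        port_release(port);
        return -1;
    }
    *out = port;
    return 0;
}
```

Before the patch, a user-specified port was never marked in the allocator and never released; afterwards both paths are symmetric.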

On Tue, Sep 01, 2020 at 16:36:55 +0200, Martin Kletzander wrote:
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>

For this we need to make the function accessible (at least privately).  The
behaviour will change in following patches and the test helps explaining the
change.

Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
---
 scripts/apibuild.py      |  1 +
 src/libvirt-domain.c     |  3 +-
 src/libvirt_internal.h   |  2 +
 src/libvirt_private.syms |  1 +
 tests/meson.build        |  1 +
 tests/virmigtest.c       | 91 ++++++++++++++++++++++++++++++++++++++++
 6 files changed, 97 insertions(+), 2 deletions(-)
 create mode 100644 tests/virmigtest.c

diff --git a/scripts/apibuild.py b/scripts/apibuild.py
index 58ae76d29cfc..b94c0f6c09dd 100755
--- a/scripts/apibuild.py
+++ b/scripts/apibuild.py
@@ -81,6 +81,7 @@ ignored_words = {
 
 ignored_functions = {
     "virConnectSupportsFeature": "private function for remote access",
+    "virDomainMigrateCheckNotLocal": "private function for migration",
     "virDomainMigrateFinish": "private function for migration",
     "virDomainMigrateFinish2": "private function for migration",
     "virDomainMigratePerform": "private function for migration",
diff --git a/src/libvirt-domain.c b/src/libvirt-domain.c
index ad60a92da879..da5a21e4c4d4 100644
--- a/src/libvirt-domain.c
+++ b/src/libvirt-domain.c
@@ -3269,8 +3269,7 @@ virDomainMigrateVersion3Params(virDomainPtr domain,
                                           params, nparams, true, flags);
 }
 
-
-static int
+int
 virDomainMigrateCheckNotLocal(const char *dconnuri)
 {
     g_autoptr(virURI) tempuri = NULL;
diff --git a/src/libvirt_internal.h b/src/libvirt_internal.h
index 72c61274a74e..cb239b3c1c44 100644
--- a/src/libvirt_internal.h
+++ b/src/libvirt_internal.h
@@ -129,6 +129,8 @@ typedef enum {
 int virConnectSupportsFeature(virConnectPtr conn,
                               int feature);
 
+int virDomainMigrateCheckNotLocal(const char *dconnuri);
+
 int virDomainMigratePrepare (virConnectPtr dconn,
                              char **cookie,
                              int *cookielen,
diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms
index f950a681792c..2f6d67cebe07 100644
--- a/src/libvirt_private.syms
+++ b/src/libvirt_private.syms
@@ -1481,6 +1481,7 @@ virHostdevUpdateActiveUSBDevices;
 virConnectSupportsFeature;
 virDomainMigrateBegin3;
 virDomainMigrateBegin3Params;
+virDomainMigrateCheckNotLocal;
 virDomainMigrateConfirm3;
 virDomainMigrateConfirm3Params;
 virDomainMigrateFinish;
diff --git a/tests/meson.build b/tests/meson.build
index b5f6e2267aaf..49bcf5eb8a8d 100644
--- a/tests/meson.build
+++ b/tests/meson.build
@@ -331,6 +331,7 @@ tests += [
   { 'name': 'virtypedparamtest' },
   { 'name': 'viruritest' },
   { 'name': 'vshtabletest', 'link_with': [ libvirt_shell_lib ] },
+  { 'name': 'virmigtest' },
 ]
 
 if host_machine.system() == 'linux'
diff --git a/tests/virmigtest.c b/tests/virmigtest.c
new file mode 100644
index 000000000000..df8d51bab94b
--- /dev/null
+++ b/tests/virmigtest.c
@@ -0,0 +1,91 @@
+/*
+ * Copyright (C) 2020 Red Hat, Inc.
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library.  If not, see
+ * <http://www.gnu.org/licenses/>.
+ */
+
+#include <config.h>
+
+#include "libvirt_internal.h"
+
+#include "testutils.h"
+#include "virlog.h"
+#include "viruri.h"
+
+#define VIR_FROM_THIS VIR_FROM_RPC
+
+VIR_LOG_INIT("tests.migtest");
+
+struct migLocalData {
+    const char *uri;
+    bool fail;
+};
+
+
+static int
+testMigNotLocal(const void *args)
+{
+    int ret = -1;
+    const struct migLocalData *data = args;
+
+    ret = virDomainMigrateCheckNotLocal(data->uri);
+
+    if (ret == -1) {
+        if (data->fail) {
+            virResetLastError();
+            return 0;
+        }
+        return -1;
+    }
+
+    if (data->fail) {
+        VIR_TEST_DEBUG("passed instead of expected failure");
+        return -1;
+    }
+
+    return ret;
+}
+
+
+static int
+mymain(void)
+{
+    int ret = 0;
+
+#define TEST_FULL(uri, fail) \
+    do { \
+        const struct migLocalData data = { \
+            uri, fail \
+        }; \
+        if (virTestRun("Test URI " # uri, testMigNotLocal, &data) < 0) \
+            ret = -1; \
+    } while (0)
+
+#define TEST(uri) TEST_FULL(uri, false)
+#define TEST_FAIL(uri) TEST_FULL(uri, true)
+
+    TEST_FAIL("qemu:///system");
+
+    TEST_FAIL("//localhost");
+    TEST_FAIL("tcp://localhost.localdomain/");
+
+    TEST("scheme://some.cryptorandom.fqdn.tld");
+
+    TEST_FAIL("hehe+unix:///?socket=/path/to/some-sock");
+
+    return ret == 0 ? EXIT_SUCCESS : EXIT_FAILURE;
+}
+
+VIR_TEST_MAIN(mymain)
-- 
2.28.0
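The expectations encoded in virmigtest.c can be approximated outside libvirt. The helper below is a deliberately simplified, hypothetical stand-in for virDomainMigrateCheckNotLocal (the real code uses virURIParse); it mirrors why each TEST/TEST_FAIL case above passes or fails at this point in the series:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Simplified approximation: extract the authority part of a URI and
 * refuse empty or localhost hosts, which would make a "remote"
 * migration loop back to the source host. */
static int check_not_local(const char *uri)
{
    const char *auth = strstr(uri, "//");
    char host[256];
    size_t len;

    if (!auth)
        return -1;              /* no authority component at all */
    auth += 2;
    len = strcspn(auth, "/:?"); /* authority ends at path, port or query */
    if (len == 0 || len >= sizeof(host))
        return -1;              /* empty host, e.g. qemu:///system */
    memcpy(host, auth, len);
    host[len] = '\0';

    if (strcmp(host, "localhost") == 0 ||
        strncmp(host, "localhost.", strlen("localhost.")) == 0)
        return -1;              /* explicitly local */
    return 0;
}
```

Note that "hehe+unix:///?socket=..." still fails here (empty host); a later patch in the series is what relaxes that case to allow UNIX-socket peer-to-peer connections.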

On Tue, Sep 01, 2020 at 16:36:56 +0200, Martin Kletzander wrote:
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>

Adds new typed param for migration and uses this as a UNIX socket path that
should be used for the NBD part of migration.  And also adds virsh support.

Partially resolves: https://bugzilla.redhat.com/1638889

Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
---
 docs/manpages/virsh.rst          |  11 ++
 include/libvirt/libvirt-domain.h |  13 +++
 src/qemu/qemu_driver.c           |  33 +++++-
 src/qemu/qemu_migration.c        | 170 ++++++++++++++++++++++++-------
 src/qemu/qemu_migration.h        |   3 +
 src/qemu/qemu_migration_cookie.c |   3 +-
 tools/virsh-domain.c             |  12 +++
 7 files changed, 205 insertions(+), 40 deletions(-)

diff --git a/docs/manpages/virsh.rst b/docs/manpages/virsh.rst
index 8e2fb7039046..12357ea4ee86 100644
--- a/docs/manpages/virsh.rst
+++ b/docs/manpages/virsh.rst
@@ -3113,6 +3113,7 @@ migrate
       [--postcopy-bandwidth bandwidth]
       [--parallel [--parallel-connections connections]]
       [--bandwidth bandwidth] [--tls-destination hostname]
+      [--disks-socket socket-path]
 
 Migrate domain to another host.  Add *--live* for live migration; <--p2p>
 for peer-2-peer migration; *--direct* for direct migration; or *--tunnelled*
@@ -3292,6 +3293,16 @@ error if this parameter is used.
 Optional *disks-port* sets the port that hypervisor on destination side should
 bind to for incoming disks traffic. Currently it is supported only by QEMU.
 
+Optional *disks-uri* can also be specified (mutually exclusive with
+*disks-port*) to specify what the remote hypervisor should bind/connect to when
+migrating disks.  This can be *tcp://address:port* to specify a listen address
+(which overrides *--listen-address* for the disk migration) and a port or
+*unix:///path/to/socket* in case you need the disk migration to happen over a
+UNIX socket with that specified path.  In this case you need to make sure the
+same socket path is accessible to both source and destination hypervisors and
+connecting to the socket on the source (after hypervisor creates it on the
+destination) will actually connect to the destination.
+
 migrate-compcache
 -----------------

diff --git a/include/libvirt/libvirt-domain.h b/include/libvirt/libvirt-domain.h
index 8b9d9c110c19..77f9116675af 100644
--- a/include/libvirt/libvirt-domain.h
+++ b/include/libvirt/libvirt-domain.h
@@ -981,6 +981,19 @@ typedef enum {
  */
 # define VIR_MIGRATE_PARAM_DISKS_PORT    "disks_port"
 
+/**
+ * VIR_MIGRATE_PARAM_DISKS_URI:
+ *
+ * virDomainMigrate* params field: URI used for incoming disks migration. Type
+ * is VIR_TYPED_PARAM_STRING. Only schemes "tcp" and "unix" are accepted. TCP
+ * URI can currently only provide a server and port to listen on (and connect
+ * to), UNIX URI may only provide a path component for a UNIX socket. This is
+ * currently only supported by the QEMU driver. UNIX URI is only usable if the
+ * management application makes sure that socket created with this name on the
+ * destination will be reachable from the source under the same exact path.
+ */
+# define VIR_MIGRATE_PARAM_DISKS_URI    "disks_uri"
+
 /**
  * VIR_MIGRATE_PARAM_COMPRESSION:
  *
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 3636716ceea1..08b6b853de47 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -11233,7 +11233,7 @@ qemuDomainMigratePrepare2(virConnectPtr dconn,
     ret = qemuMigrationDstPrepareDirect(driver, dconn,
                                         NULL, 0, NULL, NULL, /* No cookies */
                                         uri_in, uri_out,
-                                        &def, origname, NULL, 0, NULL, 0,
+                                        &def, origname, NULL, 0, NULL, 0, NULL,
                                         migParams, flags);
 
  cleanup:
@@ -11289,6 +11289,7 @@ qemuDomainMigratePerform(virDomainPtr dom,
      */
     ret = qemuMigrationSrcPerform(driver, dom->conn, vm, NULL,
                                   NULL, dconnuri, uri, NULL, NULL, 0, NULL, 0,
+                                  NULL,
                                   migParams, cookie, cookielen,
                                   NULL, NULL, /* No output cookies in v2 */
                                   flags, dname, resource, false);
@@ -11459,7 +11460,7 @@ qemuDomainMigratePrepare3(virConnectPtr dconn,
                                          cookieout, cookieoutlen,
                                          uri_in, uri_out,
                                          &def, origname, NULL, 0, NULL, 0,
-                                         migParams, flags);
+                                         NULL, migParams, flags);
 }
 
 static int
@@ -11485,6 +11486,7 @@ qemuDomainMigratePrepare3Params(virConnectPtr dconn,
     g_autofree const char **migrate_disks = NULL;
     g_autofree char *origname = NULL;
     g_autoptr(qemuMigrationParams) migParams = NULL;
+    const char *nbdSocketPath = NULL;
 
     virCheckFlags(QEMU_MIGRATION_FLAGS, -1);
     if (virTypedParamsValidate(params, nparams, QEMU_MIGRATION_PARAMETERS) < 0)
@@ -11502,6 +11504,9 @@ qemuDomainMigratePrepare3Params(virConnectPtr dconn,
         virTypedParamsGetString(params, nparams,
                                 VIR_MIGRATE_PARAM_LISTEN_ADDRESS,
                                 &listenAddress) < 0 ||
+        virTypedParamsGetString(params, nparams,
+                                VIR_MIGRATE_PARAM_DISKS_URI,
+                                &nbdSocketPath) < 0 ||
         virTypedParamsGetInt(params, nparams,
                              VIR_MIGRATE_PARAM_DISKS_PORT,
                              &nbdPort) < 0)
@@ -11518,6 +11523,13 @@ qemuDomainMigratePrepare3Params(virConnectPtr dconn,
                                  QEMU_MIGRATION_DESTINATION)))
         return -1;
 
+    if (nbdSocketPath && nbdPort) {
+        virReportError(VIR_ERR_INVALID_ARG, "%s",
+                       _("Both port and socket requested for disk migration "
+                         "while being mutually exclusive"));
+        return -1;
+    }
+
     if (flags & VIR_MIGRATE_TUNNELLED) {
         /* this is a logical error; we never should have gotten here with
          * VIR_MIGRATE_TUNNELLED set
@@ -11540,7 +11552,7 @@ qemuDomainMigratePrepare3Params(virConnectPtr dconn,
                                          uri_in, uri_out, &def, origname,
                                          listenAddress, nmigrate_disks,
                                          migrate_disks, nbdPort,
-                                         migParams, flags);
+                                         nbdSocketPath, migParams, flags);
 }
 
 
@@ -11682,7 +11694,7 @@ qemuDomainMigratePerform3(virDomainPtr dom,
     ret = qemuMigrationSrcPerform(driver, dom->conn, vm, xmlin, NULL,
                                   dconnuri, uri, NULL, NULL, 0, NULL, 0,
-                                  migParams,
+                                  NULL, migParams,
                                   cookiein, cookieinlen,
                                   cookieout, cookieoutlen,
                                   flags, dname, resource, true);
@@ -11716,6 +11728,7 @@ qemuDomainMigratePerform3Params(virDomainPtr dom,
     unsigned long long bandwidth = 0;
     int nbdPort = 0;
     g_autoptr(qemuMigrationParams) migParams = NULL;
+    const char *nbdSocketPath = NULL;
     int ret = -1;
 
     virCheckFlags(QEMU_MIGRATION_FLAGS, -1);
@@ -11743,11 +11756,21 @@ qemuDomainMigratePerform3Params(virDomainPtr dom,
         virTypedParamsGetInt(params, nparams,
                              VIR_MIGRATE_PARAM_DISKS_PORT,
                              &nbdPort) < 0 ||
+        virTypedParamsGetString(params, nparams,
+                                VIR_MIGRATE_PARAM_DISKS_URI,
+                                &nbdSocketPath) < 0 ||
         virTypedParamsGetString(params, nparams,
                                 VIR_MIGRATE_PARAM_PERSIST_XML,
                                 &persist_xml) < 0)
         goto cleanup;
 
+    if (nbdSocketPath && nbdPort) {
+        virReportError(VIR_ERR_INVALID_ARG, "%s",
+                       _("Both port and socket requested for disk migration "
+                         "while being mutually exclusive"));
+        goto cleanup;
+    }
+
     nmigrate_disks = virTypedParamsGetStringList(params, nparams,
                                                  VIR_MIGRATE_PARAM_MIGRATE_DISKS,
                                                  &migrate_disks);
@@ -11768,7 +11791,7 @@ qemuDomainMigratePerform3Params(virDomainPtr dom,
     ret = qemuMigrationSrcPerform(driver, dom->conn, vm, dom_xml, persist_xml,
                                   dconnuri, uri, graphicsuri, listenAddress,
                                   nmigrate_disks, migrate_disks, nbdPort,
-                                  migParams,
+                                  nbdSocketPath, migParams,
                                   cookiein, cookieinlen, cookieout, cookieoutlen,
                                   flags, dname, bandwidth, true);
 cleanup:
diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index b887185d012d..7277f2f458a2 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -379,6 +379,7 @@ qemuMigrationDstStartNBDServer(virQEMUDriverPtr driver,
                                size_t nmigrate_disks,
                                const char **migrate_disks,
                                int nbdPort,
+                               const char *nbdURI,
                                const char *tls_alias)
 {
     int ret = -1;
@@ -390,8 +391,44 @@ qemuMigrationDstStartNBDServer(virQEMUDriverPtr driver,
         .port = nbdPort,
     };
     bool server_started = false;
+    g_autoptr(virURI) uri = NULL;
+
+    /* Prefer nbdURI */
+    if (nbdURI) {
+        uri = virURIParse(nbdURI);
 
-    if (nbdPort < 0 || nbdPort > USHRT_MAX) {
+        if (!uri)
+            return -1;
+
+        if (STREQ(uri->scheme, "tcp")) {
+            server.transport = VIR_STORAGE_NET_HOST_TRANS_TCP;
+            if (!uri->server || STREQ(uri->server, "")) {
+                /* Since tcp://:<port>/ is parsed as server = NULL and port = 0
+                 * we should rather error out instead of auto-allocating a port
+                 * as that would be the exact opposite of what was requested. */
+                virReportError(VIR_ERR_INVALID_ARG,
+                               _("URI with tcp scheme did not provide a server part: %s"),
+                               nbdURI);
+                return -1;
+            }
+            server.name = (char *)uri->server;
+            if (uri->port)
+                server.port = uri->port;
+        } else if (STREQ(uri->scheme, "unix")) {
+            if (!uri->path) {
+                virReportError(VIR_ERR_INVALID_ARG, "%s",
+                               _("UNIX disks URI does not include path"));
+                return -1;
+            }
+            server.transport = VIR_STORAGE_NET_HOST_TRANS_UNIX;
+            server.socket = (char *)uri->path;
+        } else {
+            virReportError(VIR_ERR_INVALID_ARG,
+                           _("Unsupported scheme in disks URI: %s"),
+                           uri->scheme);
+            return -1;
+        }
+    } else if (nbdPort < 0 || nbdPort > USHRT_MAX) {
         virReportError(VIR_ERR_INVALID_ARG, "%s",
                        _("nbd port must be in range 0-65535"));
         return -1;
@@ -425,7 +462,8 @@ qemuMigrationDstStartNBDServer(virQEMUDriverPtr driver,
             devicename = diskAlias;
         }
 
-        if (!server_started) {
+        if (!server_started &&
+            server.transport == VIR_STORAGE_NET_HOST_TRANS_TCP) {
             if (server.port) {
                 if (virPortAllocatorSetUsed(server.port) < 0)
                     goto cleanup;
@@ -453,7 +491,8 @@ qemuMigrationDstStartNBDServer(virQEMUDriverPtr driver,
             goto cleanup;
     }
 
-    priv->nbdPort = server.port;
+    if (server.transport == VIR_STORAGE_NET_HOST_TRANS_TCP)
+        priv->nbdPort = server.port;
 
     ret = 0;
 
@@ -793,6 +832,7 @@ static virStorageSourcePtr
 qemuMigrationSrcNBDStorageCopyBlockdevPrepareSource(virDomainDiskDefPtr disk,
                                                     const char *host,
                                                     int port,
+                                                    const char *socket,
                                                     const char *tlsAlias)
 {
     g_autoptr(virStorageSource) copysrc = NULL;
@@ -813,9 +853,14 @@ qemuMigrationSrcNBDStorageCopyBlockdevPrepareSource(virDomainDiskDefPtr disk,
     copysrc->hosts = g_new0(virStorageNetHostDef, 1);
     copysrc->nhosts = 1;
 
-    copysrc->hosts->transport = VIR_STORAGE_NET_HOST_TRANS_TCP;
-    copysrc->hosts->port = port;
-    copysrc->hosts->name = g_strdup(host);
+    if (socket) {
+        copysrc->hosts->transport = VIR_STORAGE_NET_HOST_TRANS_UNIX;
+        copysrc->hosts->socket = g_strdup(socket);
+    } else {
+        copysrc->hosts->transport = VIR_STORAGE_NET_HOST_TRANS_TCP;
+        copysrc->hosts->port = port;
+        copysrc->hosts->name = g_strdup(host);
+    }
 
     copysrc->tlsAlias = g_strdup(tlsAlias);
 
@@ -835,6 +880,7 @@ qemuMigrationSrcNBDStorageCopyBlockdev(virQEMUDriverPtr driver,
                                        bool persistjob,
                                        const char *host,
                                        int port,
+                                       const char *socket,
                                        unsigned long long mirror_speed,
                                        unsigned int mirror_shallow,
                                        const char *tlsAlias)
@@ -846,7 +892,7 @@ qemuMigrationSrcNBDStorageCopyBlockdev(virQEMUDriverPtr driver,
 
     VIR_DEBUG("starting blockdev mirror for disk=%s to host=%s", disk->dst, host);
 
-    if (!(copysrc = qemuMigrationSrcNBDStorageCopyBlockdevPrepareSource(disk, host, port, tlsAlias)))
+    if (!(copysrc = qemuMigrationSrcNBDStorageCopyBlockdevPrepareSource(disk, host, port, socket, tlsAlias)))
         return -1;
 
     /* Migration via blockdev-mirror was supported sooner than the auto-read-only
@@ -885,13 +931,17 @@ qemuMigrationSrcNBDStorageCopyDriveMirror(virQEMUDriverPtr driver,
                                           const char *diskAlias,
                                           const char *host,
                                           int port,
+                                          const char *socket,
                                           unsigned long long mirror_speed,
                                           bool mirror_shallow)
 {
     g_autofree char *nbd_dest = NULL;
     int mon_ret;
 
-    if (strchr(host, ':')) {
+    if (socket) {
+        nbd_dest = g_strdup_printf("nbd+unix:///%s?socket=%s",
+                                   diskAlias, socket);
+    } else if (strchr(host, ':')) {
         nbd_dest = g_strdup_printf("nbd:[%s]:%d:exportname=%s",
                                    host, port, diskAlias);
     } else {
@@ -920,6 +970,7 @@ qemuMigrationSrcNBDStorageCopyOne(virQEMUDriverPtr driver,
                                   virDomainDiskDefPtr disk,
                                   const char *host,
                                   int port,
+                                  const char *socket,
                                   unsigned long long mirror_speed,
                                   bool mirror_shallow,
                                   const char *tlsAlias,
@@ -958,13 +1009,13 @@ qemuMigrationSrcNBDStorageCopyOne(virQEMUDriverPtr driver,
         rc = qemuMigrationSrcNBDStorageCopyBlockdev(driver, vm,
                                                     disk, jobname,
                                                     sourcename, persistjob,
-                                                    host, port,
+                                                    host, port, socket,
                                                     mirror_speed,
                                                     mirror_shallow,
                                                     tlsAlias);
     } else {
         rc = qemuMigrationSrcNBDStorageCopyDriveMirror(driver, vm, diskAlias,
-                                                       host, port,
+                                                       host, port, socket,
                                                        mirror_speed,
                                                        mirror_shallow);
     }
@@ -1014,6 +1065,7 @@ qemuMigrationSrcNBDStorageCopy(virQEMUDriverPtr driver,
                                const char **migrate_disks,
                                virConnectPtr dconn,
                                const char *tlsAlias,
+                               const char *nbdURI,
                                unsigned int flags)
 {
     qemuDomainObjPrivatePtr priv = vm->privateData;
@@ -1023,6 +1075,8 @@ qemuMigrationSrcNBDStorageCopy(virQEMUDriverPtr driver,
     bool mirror_shallow = *migrate_flags & QEMU_MONITOR_MIGRATE_NON_SHARED_INC;
     int rv;
     g_autoptr(virQEMUDriverConfig) cfg = virQEMUDriverGetConfig(driver);
+    g_autoptr(virURI) uri = NULL;
+    const char *socket = NULL;
 
     VIR_DEBUG("Starting drive mirrors for domain %s", vm->def->name);
 
@@ -1038,6 +1092,33 @@ qemuMigrationSrcNBDStorageCopy(virQEMUDriverPtr driver,
     port = mig->nbd->port;
     mig->nbd->port = 0;
 
+    if (nbdURI) {
+        uri = virURIParse(nbdURI);
+        if (!uri)
+            return -1;
+
+        if (STREQ(uri->scheme, "tcp")) {
+            if (uri->server && STRNEQ(uri->server, ""))
+                host = (char *)uri->server;
+            if (uri->port)
+                port = uri->port;
+        } else if (STREQ(uri->scheme, "unix")) {
+            if (!uri->path) {
+                virReportError(VIR_ERR_INVALID_ARG, "%s",
+                               _("UNIX disks URI does not include path"));
+                return -1;
+            }
+            socket = uri->path;
+
+            if (qemuSecurityDomainSetPathLabel(driver, vm, socket, false) < 0)
+                return -1;
+        } else {
+            virReportError(VIR_ERR_INVALID_ARG,
+                           _("Unsupported scheme in disks URI: %s"),
+                           uri->scheme);
+        }
+    }
+
     for (i = 0; i < vm->def->ndisks; i++) {
         virDomainDiskDefPtr disk = vm->def->disks[i];
 
@@ -1046,6 +1127,7 @@ qemuMigrationSrcNBDStorageCopy(virQEMUDriverPtr driver,
             continue;
 
         if (qemuMigrationSrcNBDStorageCopyOne(driver, vm, disk, host, port,
+                                              socket,
                                               mirror_speed, mirror_shallow,
                                               tlsAlias, flags) < 0)
             return -1;
@@ -2397,6 +2479,7 @@ qemuMigrationDstPrepareAny(virQEMUDriverPtr driver,
                            size_t nmigrate_disks,
                            const char **migrate_disks,
                            int nbdPort,
+                           const char *nbdURI,
                            qemuMigrationParamsPtr migParams,
                            unsigned long flags)
 {
@@ -2650,7 +2733,8 @@ qemuMigrationDstPrepareAny(virQEMUDriverPtr driver,
         if (qemuMigrationDstStartNBDServer(driver, vm, incoming->address,
                                            nmigrate_disks, migrate_disks,
-                                           nbdPort, nbdTLSAlias) < 0) {
+                                           nbdPort, nbdURI,
+                                           nbdTLSAlias) < 0) {
             goto stopjob;
         }
         cookieFlags |= QEMU_MIGRATION_COOKIE_NBD;
@@ -2785,7 +2869,7 @@ qemuMigrationDstPrepareTunnel(virQEMUDriverPtr driver,
     return qemuMigrationDstPrepareAny(driver, dconn, cookiein, cookieinlen,
                                       cookieout, cookieoutlen, def, origname,
                                       st, NULL, 0, false, NULL, 0, NULL, 0,
-                                      migParams, flags);
+                                      NULL, migParams, flags);
 }
 
 
@@ -2826,6 +2910,7 @@ qemuMigrationDstPrepareDirect(virQEMUDriverPtr driver,
                               size_t nmigrate_disks,
                               const char **migrate_disks,
                               int nbdPort,
+                              const char *nbdURI,
                               qemuMigrationParamsPtr migParams,
                               unsigned long flags)
 {
@@ -2840,11 +2925,13 @@ qemuMigrationDstPrepareDirect(virQEMUDriverPtr driver,
     VIR_DEBUG("driver=%p, dconn=%p, cookiein=%s, cookieinlen=%d, "
               "cookieout=%p, cookieoutlen=%p, uri_in=%s, uri_out=%p, "
               "def=%p, origname=%s, listenAddress=%s, "
-              "nmigrate_disks=%zu, migrate_disks=%p, nbdPort=%d, flags=0x%lx",
+              "nmigrate_disks=%zu, migrate_disks=%p, nbdPort=%d, "
+              "nbdURI=%s, flags=0x%lx",
               driver, dconn, NULLSTR(cookiein), cookieinlen,
               cookieout, cookieoutlen, NULLSTR(uri_in), uri_out,
               *def, origname, NULLSTR(listenAddress),
-              nmigrate_disks, migrate_disks, nbdPort, flags);
+              nmigrate_disks, migrate_disks, nbdPort, NULLSTR(nbdURI),
+              flags);
 
     *uri_out = NULL;
 
@@ -2947,7 +3034,7 @@ qemuMigrationDstPrepareDirect(virQEMUDriverPtr driver,
                                      NULL, uri ? uri->scheme : "tcp",
                                      port, autoPort, listenAddress,
                                      nmigrate_disks, migrate_disks, nbdPort,
-                                     migParams, flags);
+                                     nbdURI, migParams, flags);
  cleanup:
     if (ret != 0) {
         VIR_FREE(*uri_out);
@@ -3482,7 +3569,8 @@ qemuMigrationSrcRun(virQEMUDriverPtr driver,
                     const char *graphicsuri,
                     size_t nmigrate_disks,
                     const char **migrate_disks,
-                    qemuMigrationParamsPtr migParams)
+                    qemuMigrationParamsPtr migParams,
+                    const char *nbdURI)
 {
     int ret = -1;
     unsigned int migrate_flags = QEMU_MONITOR_MIGRATE_BACKGROUND;
@@ -3614,7 +3702,8 @@ qemuMigrationSrcRun(virQEMUDriverPtr driver,
                                                &migrate_flags,
                                                nmigrate_disks,
                                                migrate_disks,
-                                               dconn, tlsAlias, flags) < 0) {
+                                               dconn, tlsAlias,
+                                               nbdURI, flags) < 0) {
                 goto error;
             }
         } else {
@@ -3854,7 +3943,8 @@ qemuMigrationSrcPerformNative(virQEMUDriverPtr driver,
                               const char *graphicsuri,
                               size_t nmigrate_disks,
                               const char **migrate_disks,
-                              qemuMigrationParamsPtr migParams)
+                              qemuMigrationParamsPtr migParams,
+                              const char *nbdURI)
 {
     qemuDomainObjPrivatePtr priv = vm->privateData;
     g_autoptr(virURI) uribits = NULL;
@@ -3908,7 +3998,7 @@ qemuMigrationSrcPerformNative(virQEMUDriverPtr driver,
     ret = qemuMigrationSrcRun(driver, vm, persist_xml, cookiein, cookieinlen,
                               cookieout, cookieoutlen, flags, resource,
                               &spec, dconn, graphicsuri, nmigrate_disks, migrate_disks,
-                              migParams);
+                              migParams, nbdURI);
 
     if (spec.destType == MIGRATION_DEST_FD)
         VIR_FORCE_CLOSE(spec.dest.fd.qemu);
@@ -3971,7 +4061,7 @@ qemuMigrationSrcPerformTunnel(virQEMUDriverPtr driver,
     ret = qemuMigrationSrcRun(driver, vm, persist_xml, cookiein, cookieinlen,
                               cookieout, cookieoutlen, flags, resource,
                               &spec, dconn, graphicsuri, nmigrate_disks, migrate_disks,
-                              migParams);
+                              migParams, NULL);
 
  cleanup:
     VIR_FORCE_CLOSE(spec.dest.fd.qemu);
@@ -4078,7 +4168,7 @@ qemuMigrationSrcPerformPeer2Peer2(virQEMUDriverPtr driver,
                                         cookie, cookielen,
                                         NULL, NULL, /* No out cookie with v2 migration */
                                         flags, resource, dconn, NULL, 0, NULL,
-                                        migParams);
+                                        migParams, NULL);
 
     /* Perform failed. Make sure Finish doesn't overwrite the error */
     if (ret < 0)
@@ -4142,6 +4232,7 @@ qemuMigrationSrcPerformPeer2Peer3(virQEMUDriverPtr driver,
                                   size_t nmigrate_disks,
                                   const char **migrate_disks,
                                   int nbdPort,
+                                  const char *nbdURI,
                                   qemuMigrationParamsPtr migParams,
                                   unsigned long long bandwidth,
                                   bool useParams,
@@ -4226,6 +4317,11 @@ qemuMigrationSrcPerformPeer2Peer3(virQEMUDriverPtr driver,
                                     VIR_MIGRATE_PARAM_DISKS_PORT,
                                     nbdPort) < 0)
             goto cleanup;
+        if (nbdURI &&
+            virTypedParamsAddString(&params, &nparams, &maxparams,
+                                    VIR_MIGRATE_PARAM_DISKS_URI,
+                                    nbdURI) < 0)
+            goto cleanup;
 
         if (qemuMigrationParamsDump(migParams, &params, &nparams,
                                     &maxparams, &flags) < 0)
@@ -4323,7 +4419,7 @@ qemuMigrationSrcPerformPeer2Peer3(virQEMUDriverPtr driver,
                                                 &cookieout, &cookieoutlen,
                                                 flags, bandwidth, dconn, graphicsuri,
                                                 nmigrate_disks, migrate_disks,
-                                                migParams);
+                                                migParams, nbdURI);
         }
 
         /* Perform failed. Make sure Finish doesn't overwrite the error */
@@ -4498,6 +4594,7 @@ qemuMigrationSrcPerformPeer2Peer(virQEMUDriverPtr driver,
                                  size_t nmigrate_disks,
                                  const char **migrate_disks,
                                  int nbdPort,
+                                 const char *nbdURI,
                                  qemuMigrationParamsPtr migParams,
                                  unsigned long flags,
                                  const char *dname,
@@ -4515,12 +4612,12 @@ qemuMigrationSrcPerformPeer2Peer(virQEMUDriverPtr driver,
 
     VIR_DEBUG("driver=%p, sconn=%p, vm=%p, xmlin=%s, dconnuri=%s, uri=%s, "
               "graphicsuri=%s, listenAddress=%s, nmigrate_disks=%zu, "
-              "migrate_disks=%p, nbdPort=%d, flags=0x%lx, dname=%s, "
-              "resource=%lu",
+              "migrate_disks=%p, nbdPort=%d, nbdURI=%s, flags=0x%lx, "
+              "dname=%s, resource=%lu",
               driver, sconn, vm, NULLSTR(xmlin), NULLSTR(dconnuri),
               NULLSTR(uri), NULLSTR(graphicsuri), NULLSTR(listenAddress),
-              nmigrate_disks, migrate_disks, nbdPort, flags, NULLSTR(dname),
-              resource);
+              nmigrate_disks, migrate_disks, nbdPort, NULLSTR(nbdURI),
+              flags, NULLSTR(dname), resource);
 
     if (flags & VIR_MIGRATE_TUNNELLED && uri) {
         virReportError(VIR_ERR_ARGUMENT_UNSUPPORTED, "%s",
@@ -4618,7 +4715,7 @@ qemuMigrationSrcPerformPeer2Peer(virQEMUDriverPtr driver,
         ret = qemuMigrationSrcPerformPeer2Peer3(driver, sconn, dconn, dconnuri, vm,
                                                 xmlin, persist_xml, dname, uri, graphicsuri,
                                                 listenAddress, nmigrate_disks, migrate_disks,
-                                                nbdPort, migParams, resource,
+                                                nbdPort, nbdURI, migParams, resource,
                                                 useParams, flags);
     } else {
         ret = qemuMigrationSrcPerformPeer2Peer2(driver, sconn, dconn, vm,
@@ -4654,6 +4751,7 @@ qemuMigrationSrcPerformJob(virQEMUDriverPtr driver,
                            size_t nmigrate_disks,
                            const char **migrate_disks,
                            int nbdPort,
+                           const char *nbdURI,
                            qemuMigrationParamsPtr migParams,
                            const char *cookiein,
                            int cookieinlen,
@@ -4692,6 +4790,7 @@ qemuMigrationSrcPerformJob(virQEMUDriverPtr driver,
         ret = qemuMigrationSrcPerformPeer2Peer(driver, conn, vm, xmlin, persist_xml,
                                                dconnuri, uri, graphicsuri, listenAddress,
                                                nmigrate_disks, migrate_disks, nbdPort,
+                                               nbdURI,
                                                migParams, flags, dname, resource,
                                                &v3proto);
     } else {
@@ -4699,7 +4798,7 @@ qemuMigrationSrcPerformJob(virQEMUDriverPtr driver,
         ret = qemuMigrationSrcPerformNative(driver, vm, persist_xml, uri, cookiein, cookieinlen,
                                             cookieout, cookieoutlen,
                                             flags, resource, NULL, NULL, 0, NULL,
-                                            migParams);
+                                            migParams, nbdURI);
     }
     if (ret < 0)
         goto endjob;
@@ -4765,7 +4864,8 @@ qemuMigrationSrcPerformPhase(virQEMUDriverPtr driver,
                              char **cookieout,
                              int *cookieoutlen,
                              unsigned long flags,
-                             unsigned long resource)
+                             unsigned long resource,
+                             const char *nbdURI)
 {
     qemuDomainObjPrivatePtr priv = vm->privateData;
     qemuDomainJobPrivatePtr jobPriv = priv->job.privateData;
@@ -4787,7 +4887,7 @@ qemuMigrationSrcPerformPhase(virQEMUDriverPtr driver,
     ret = qemuMigrationSrcPerformNative(driver, vm, persist_xml, uri, cookiein, cookieinlen,
                                         cookieout, cookieoutlen,
                                         flags, resource, NULL, graphicsuri,
-                                        nmigrate_disks, migrate_disks, migParams);
+                                        nmigrate_disks, migrate_disks, migParams, nbdURI);
 
     if (ret < 0) {
         qemuMigrationSrcRestoreDomainState(driver, vm);
@@ -4828,6 +4928,7 @@ qemuMigrationSrcPerform(virQEMUDriverPtr driver,
                         size_t nmigrate_disks,
                         const char **migrate_disks,
                         int nbdPort,
+                        const char *nbdURI,
                         qemuMigrationParamsPtr migParams,
                         const char *cookiein,
                         int cookieinlen,
@@ -4841,11 +4942,12 @@ qemuMigrationSrcPerform(virQEMUDriverPtr driver,
     VIR_DEBUG("driver=%p, conn=%p, vm=%p, xmlin=%s, dconnuri=%s, "
               "uri=%s, graphicsuri=%s, listenAddress=%s, "
               "nmigrate_disks=%zu, migrate_disks=%p, nbdPort=%d, "
+              "nbdURI=%s, "
               "cookiein=%s, cookieinlen=%d, cookieout=%p, cookieoutlen=%p, "
               "flags=0x%lx, dname=%s, resource=%lu, v3proto=%d",
               driver, conn, vm, NULLSTR(xmlin), NULLSTR(dconnuri),
               NULLSTR(uri), NULLSTR(graphicsuri), NULLSTR(listenAddress),
-              nmigrate_disks, migrate_disks, nbdPort,
+              nmigrate_disks, migrate_disks, nbdPort, NULLSTR(nbdURI),
               NULLSTR(cookiein), cookieinlen, cookieout, cookieoutlen,
               flags, NULLSTR(dname), resource, v3proto);
 
@@ -4859,7 +4961,7 @@ qemuMigrationSrcPerform(virQEMUDriverPtr driver,
         return qemuMigrationSrcPerformJob(driver, conn, vm, xmlin, persist_xml, dconnuri, uri,
                                           graphicsuri, listenAddress,
                                           nmigrate_disks, migrate_disks, nbdPort,
-                                          migParams,
+                                          nbdURI, migParams,
                                           cookiein, cookieinlen,
                                           cookieout, cookieoutlen,
                                           flags, dname, resource, v3proto);
@@ -4877,12 +4979,12 @@ qemuMigrationSrcPerform(virQEMUDriverPtr driver,
                                             migParams,
                                             cookiein, cookieinlen,
                                             cookieout, cookieoutlen,
-                                            flags, resource);
+                                            flags, resource, nbdURI);
     } else {
         return qemuMigrationSrcPerformJob(driver, conn, vm, xmlin, persist_xml, NULL,
                                           uri, graphicsuri, listenAddress,
                                           nmigrate_disks, migrate_disks, nbdPort,
-                                          migParams,
+                                          nbdURI, migParams,
                                           cookiein, cookieinlen,
                                           cookieout, cookieoutlen, flags,
                                           dname, resource, v3proto);
diff --git a/src/qemu/qemu_migration.h b/src/qemu/qemu_migration.h
index b6f88d3fd9ea..fd9eb7cab0cd 100644
--- a/src/qemu/qemu_migration.h
+++ b/src/qemu/qemu_migration.h
@@ -84,6 +84,7 @@
     VIR_MIGRATE_PARAM_BANDWIDTH_POSTCOPY,   VIR_TYPED_PARAM_ULLONG, \
     VIR_MIGRATE_PARAM_PARALLEL_CONNECTIONS, VIR_TYPED_PARAM_INT, \
     VIR_MIGRATE_PARAM_TLS_DESTINATION,
VIR_TYPED_PARAM_STRING, \ + VIR_MIGRATE_PARAM_DISKS_URI, VIR_TYPED_PARAM_STRING, \ NULL @@ -149,6 +150,7 @@ qemuMigrationDstPrepareDirect(virQEMUDriverPtr driver, size_t nmigrate_disks, const char **migrate_disks, int nbdPort, + const char *nbdURI, qemuMigrationParamsPtr migParams, unsigned long flags); @@ -165,6 +167,7 @@ qemuMigrationSrcPerform(virQEMUDriverPtr driver, size_t nmigrate_disks, const char **migrate_disks, int nbdPort, + const char *nbdURI, qemuMigrationParamsPtr migParams, const char *cookiein, int cookieinlen, diff --git a/src/qemu/qemu_migration_cookie.c b/src/qemu/qemu_migration_cookie.c index cef255598854..c1295b32fc27 100644 --- a/src/qemu/qemu_migration_cookie.c +++ b/src/qemu/qemu_migration_cookie.c @@ -91,6 +91,7 @@ qemuMigrationCookieNBDFree(qemuMigrationCookieNBDPtr nbd) while (nbd->ndisks) VIR_FREE(nbd->disks[--nbd->ndisks].target); VIR_FREE(nbd->disks); + VIR_FREE(nbd); } @@ -992,7 +993,7 @@ qemuMigrationCookieNBDXMLParse(xmlXPathContextPtr ctxt) if (port && virStrToLong_i(port, NULL, 10, &ret->port) < 0) { virReportError(VIR_ERR_INTERNAL_ERROR, _("Malformed nbd port '%s'"), - port); + port); goto error; } diff --git a/tools/virsh-domain.c b/tools/virsh-domain.c index 8a4a0d172dc9..31597ee46972 100644 --- a/tools/virsh-domain.c +++ b/tools/virsh-domain.c @@ -10643,6 +10643,10 @@ static const vshCmdOptDef opts_migrate[] = { .type = VSH_OT_INT, .help = N_("port to use by target server for incoming disks migration") }, + {.name = "disks-uri", + .type = VSH_OT_STRING, + .help = N_("URI to use for disks migration (overrides --disks-port)") + }, {.name = "comp-methods", .type = VSH_OT_STRING, .help = N_("comma separated list of compression methods to be used") @@ -10762,6 +10766,14 @@ doMigrate(void *opaque) VIR_MIGRATE_PARAM_DISKS_PORT, intOpt) < 0) goto save_error; + if (vshCommandOptStringReq(ctl, cmd, "disks-uri", &opt) < 0) + goto out; + if (opt && + virTypedParamsAddString(¶ms, &nparams, &maxparams, + VIR_MIGRATE_PARAM_DISKS_URI, + opt) 
< 0) + goto save_error; + if (vshCommandOptStringReq(ctl, cmd, "dname", &opt) < 0) goto out; if (opt && -- 2.28.0

On Tue, Sep 01, 2020 at 16:36:57 +0200, Martin Kletzander wrote:
Adds a new typed parameter for migration and uses it as the path of a UNIX socket over which the NBD part of the migration should run. Also adds virsh support.
Partially resolves: https://bugzilla.redhat.com/1638889
Signed-off-by: Martin Kletzander <mkletzan@redhat.com> --- docs/manpages/virsh.rst | 11 ++ include/libvirt/libvirt-domain.h | 13 +++ src/qemu/qemu_driver.c | 33 +++++- src/qemu/qemu_migration.c | 170 ++++++++++++++++++++++++------- src/qemu/qemu_migration.h | 3 + src/qemu/qemu_migration_cookie.c | 3 +- tools/virsh-domain.c | 12 +++ 7 files changed, 205 insertions(+), 40 deletions(-)
diff --git a/docs/manpages/virsh.rst b/docs/manpages/virsh.rst index 8e2fb7039046..12357ea4ee86 100644 --- a/docs/manpages/virsh.rst +++ b/docs/manpages/virsh.rst @@ -3113,6 +3113,7 @@ migrate [--postcopy-bandwidth bandwidth] [--parallel [--parallel-connections connections]] [--bandwidth bandwidth] [--tls-destination hostname] + [--disks-socket socket-path]
--disks-uri URI
Migrate domain to another host. Add *--live* for live migration; <--p2p> for peer-2-peer migration; *--direct* for direct migration; or *--tunnelled* @@ -3292,6 +3293,16 @@ error if this parameter is used. Optional *disks-port* sets the port that hypervisor on destination side should bind to for incoming disks traffic. Currently it is supported only by QEMU.
+Optional *disks-uri* can also be given (mutually exclusive with +*disks-port*) to specify what the remote hypervisor should bind/connect to when +migrating disks. This can be *tcp://address:port* to specify a listen address +(which overrides *--listen-address* for the disk migration) and a port, or +*unix:///path/to/socket* in case you need the disk migration to happen over a +UNIX socket with that specified path. In the latter case you need to make sure +the same socket path is accessible to both source and destination hypervisors +and that connecting to the socket on the source (after the hypervisor creates +it on the destination) actually connects to the destination. +
migrate-compcache -----------------
...
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index 3636716ceea1..08b6b853de47 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c ... @@ -11485,6 +11486,7 @@ qemuDomainMigratePrepare3Params(virConnectPtr dconn, g_autofree const char **migrate_disks = NULL; g_autofree char *origname = NULL; g_autoptr(qemuMigrationParams) migParams = NULL; + const char *nbdSocketPath = NULL;
nbdURI
virCheckFlags(QEMU_MIGRATION_FLAGS, -1); if (virTypedParamsValidate(params, nparams, QEMU_MIGRATION_PARAMETERS) < 0) @@ -11502,6 +11504,9 @@ qemuDomainMigratePrepare3Params(virConnectPtr dconn, virTypedParamsGetString(params, nparams, VIR_MIGRATE_PARAM_LISTEN_ADDRESS, &listenAddress) < 0 || + virTypedParamsGetString(params, nparams, + VIR_MIGRATE_PARAM_DISKS_URI, + &nbdSocketPath) < 0 ||
nbdURI
virTypedParamsGetInt(params, nparams, VIR_MIGRATE_PARAM_DISKS_PORT, &nbdPort) < 0) @@ -11518,6 +11523,13 @@ qemuDomainMigratePrepare3Params(virConnectPtr dconn, QEMU_MIGRATION_DESTINATION))) return -1;
+ if (nbdSocketPath && nbdPort) {
nbdURI
+ virReportError(VIR_ERR_INVALID_ARG, "%s", + _("Both port and socket requested for disk migration "
s/socket/URI/
+ "while being mutually exclusive")); + return -1; + } + if (flags & VIR_MIGRATE_TUNNELLED) { /* this is a logical error; we never should have gotten here with * VIR_MIGRATE_TUNNELLED set @@ -11540,7 +11552,7 @@ qemuDomainMigratePrepare3Params(virConnectPtr dconn, uri_in, uri_out, &def, origname, listenAddress, nmigrate_disks, migrate_disks, nbdPort, - migParams, flags); + nbdSocketPath, migParams, flags);
nbdURI
}
@@ -11682,7 +11694,7 @@ qemuDomainMigratePerform3(virDomainPtr dom,
ret = qemuMigrationSrcPerform(driver, dom->conn, vm, xmlin, NULL, dconnuri, uri, NULL, NULL, 0, NULL, 0, - migParams, + NULL, migParams, cookiein, cookieinlen, cookieout, cookieoutlen, flags, dname, resource, true); @@ -11716,6 +11728,7 @@ qemuDomainMigratePerform3Params(virDomainPtr dom, unsigned long long bandwidth = 0; int nbdPort = 0; g_autoptr(qemuMigrationParams) migParams = NULL; + const char *nbdSocketPath = NULL;
nbdURI
int ret = -1;
virCheckFlags(QEMU_MIGRATION_FLAGS, -1); @@ -11743,11 +11756,21 @@ qemuDomainMigratePerform3Params(virDomainPtr dom, virTypedParamsGetInt(params, nparams, VIR_MIGRATE_PARAM_DISKS_PORT, &nbdPort) < 0 || + virTypedParamsGetString(params, nparams, + VIR_MIGRATE_PARAM_DISKS_URI, + &nbdSocketPath) < 0 ||
nbdURI
virTypedParamsGetString(params, nparams, VIR_MIGRATE_PARAM_PERSIST_XML, &persist_xml) < 0) goto cleanup;
+ if (nbdSocketPath && nbdPort) {
nbdURI
+ virReportError(VIR_ERR_INVALID_ARG, "%s", + _("Both port and socket requested for disk migration "
s/socket/URI/
+ "while being mutually exclusive")); + goto cleanup; + } + nmigrate_disks = virTypedParamsGetStringList(params, nparams, VIR_MIGRATE_PARAM_MIGRATE_DISKS, &migrate_disks); @@ -11768,7 +11791,7 @@ qemuDomainMigratePerform3Params(virDomainPtr dom, ret = qemuMigrationSrcPerform(driver, dom->conn, vm, dom_xml, persist_xml, dconnuri, uri, graphicsuri, listenAddress, nmigrate_disks, migrate_disks, nbdPort, - migParams, + nbdSocketPath, migParams,
nbdURI
cookiein, cookieinlen, cookieout, cookieoutlen, flags, dname, bandwidth, true); cleanup: diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c index b887185d012d..7277f2f458a2 100644 --- a/src/qemu/qemu_migration.c +++ b/src/qemu/qemu_migration.c
...
@@ -390,8 +391,44 @@ qemuMigrationDstStartNBDServer(virQEMUDriverPtr driver, .port = nbdPort, }; bool server_started = false; + g_autoptr(virURI) uri = NULL; + + /* Prefer nbdURI */ + if (nbdURI) { + uri = virURIParse(nbdURI);
- if (nbdPort < 0 || nbdPort > USHRT_MAX) { + if (!uri) + return -1; + + if (STREQ(uri->scheme, "tcp")) { + server.transport = VIR_STORAGE_NET_HOST_TRANS_TCP; + if (!uri->server || STREQ(uri->server, "")) { + /* Since tcp://:<port>/ is parsed as server = NULL and port = 0 + * we should rather error out instead of auto-allocating a port + * as that would be the exact opposite of what was requested. */ + virReportError(VIR_ERR_INVALID_ARG, + _("URI with tcp scheme did not provide a server part: %s"), + nbdURI); + return -1; + } + server.name = (char *)uri->server;
Indentation is a bit off here.
+ if (uri->port) + server.port = uri->port; + } else if (STREQ(uri->scheme, "unix")) { + if (!uri->path) { + virReportError(VIR_ERR_INVALID_ARG, "%s", + _("UNIX disks URI does not include path")); + return -1; + } + server.transport = VIR_STORAGE_NET_HOST_TRANS_UNIX; + server.socket = (char *)uri->path; + } else { + virReportError(VIR_ERR_INVALID_ARG, + _("Unsupported scheme in disks URI: %s"), + uri->scheme); + return -1; + } + } else if (nbdPort < 0 || nbdPort > USHRT_MAX) { virReportError(VIR_ERR_INVALID_ARG, "%s", _("nbd port must be in range 0-65535")); return -1; ... diff --git a/src/qemu/qemu_migration_cookie.c b/src/qemu/qemu_migration_cookie.c index cef255598854..c1295b32fc27 100644 --- a/src/qemu/qemu_migration_cookie.c +++ b/src/qemu/qemu_migration_cookie.c @@ -91,6 +91,7 @@ qemuMigrationCookieNBDFree(qemuMigrationCookieNBDPtr nbd) while (nbd->ndisks) VIR_FREE(nbd->disks[--nbd->ndisks].target); VIR_FREE(nbd->disks); + VIR_FREE(nbd); }
@@ -992,7 +993,7 @@ qemuMigrationCookieNBDXMLParse(xmlXPathContextPtr ctxt) if (port && virStrToLong_i(port, NULL, 10, &ret->port) < 0) { virReportError(VIR_ERR_INTERNAL_ERROR, _("Malformed nbd port '%s'"), - port); + port); goto error; }
I don't think you wanted to touch qemu_migration_cookie.c at all. You can just drop both hunks. ... With the small fixups Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
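The validation rules the series adds for the disks URI (a `tcp` URI must carry an explicit server part, a `unix` URI must carry a path, anything else is rejected) can be sketched outside of libvirt. The following is an illustrative Python translation of the C scheme dispatch in `qemuMigrationDstStartNBDServer`, not code from the patch; the error strings are copied from the diff for readability:

```python
from urllib.parse import urlsplit

def parse_disks_uri(nbd_uri):
    """Sketch of the scheme dispatch added to qemuMigrationDstStartNBDServer."""
    uri = urlsplit(nbd_uri)
    if uri.scheme == "tcp":
        # tcp://:<port> parses to an empty server part; auto-allocating a
        # port there would be the opposite of what was requested, so reject.
        if not uri.hostname:
            raise ValueError("URI with tcp scheme did not provide a server part")
        # port 0 stands in for "auto-allocate" as in the C code
        return ("tcp", uri.hostname, uri.port or 0)
    if uri.scheme == "unix":
        if not uri.path:
            raise ValueError("UNIX disks URI does not include path")
        return ("unix", uri.path)
    raise ValueError("Unsupported scheme in disks URI: %s" % uri.scheme)
```

For example, `unix:///tmp/migdir/test-sock-nbd` yields a transport/path pair, while `tcp://:1234` is rejected instead of silently auto-allocating a port.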

Local socket connections were outright disabled because there was no "server" part in the URI. However, given how requirements and usage scenarios are evolving, some management apps might need the source libvirt daemon to connect to the destination daemon over a UNIX socket for peer2peer migration. Since we cannot know where the socket leads (whether the same daemon or not) let's decide that based on whether the socket path is non-standard, or rather explicitly specified in the URI. Checking non-standard path would require to ask the daemon for configuration and the only misuse that it would prevent would be a pretty weird one. And that's not worth it. The assumption is that whenever someone uses explicit UNIX socket paths in the URI for migration they better know what they are doing. Partially resolves: https://bugzilla.redhat.com/1638889 Signed-off-by: Martin Kletzander <mkletzan@redhat.com> --- docs/manpages/virsh.rst | 9 +++++++++ src/libvirt-domain.c | 8 +++++++- src/remote/remote_driver.c | 8 ++++++-- src/util/viruri.c | 30 ++++++++++++++++++++++++++++++ src/util/viruri.h | 2 ++ tests/virmigtest.c | 2 +- 6 files changed, 55 insertions(+), 4 deletions(-) diff --git a/docs/manpages/virsh.rst b/docs/manpages/virsh.rst index 12357ea4ee86..4d66019e750b 100644 --- a/docs/manpages/virsh.rst +++ b/docs/manpages/virsh.rst @@ -3235,6 +3235,15 @@ has different semantics: * peer2peer migration: the *desturi* is an address of the target host as seen from the source machine. +In a special circumstance where you require a complete control of the connection +and/or libvirt does not have network access to the remote side you can use a +UNIX transport in the URI and specify a socket path in the query, for example +with the qemu driver you could use this: + +.. code-block:: + + qemu+unix://?socket=/path/to/socket + When *migrateuri* is not specified, libvirt will automatically determine the hypervisor specific URI. 
Some hypervisors, including QEMU, have an optional "migration_host" configuration parameter (useful when the host has multiple diff --git a/src/libvirt-domain.c b/src/libvirt-domain.c index da5a21e4c4d4..cde86c77e892 100644 --- a/src/libvirt-domain.c +++ b/src/libvirt-domain.c @@ -3276,7 +3276,13 @@ virDomainMigrateCheckNotLocal(const char *dconnuri) if (!(tempuri = virURIParse(dconnuri))) return -1; - if (!tempuri->server || STRPREFIX(tempuri->server, "localhost")) { + + /* + * If someone migrates explicitly to a unix socket, then they have to know + * what they are doing and it most probably was not a mistake. + */ + if ((tempuri->server && STRPREFIX(tempuri->server, "localhost")) || + (!tempuri->server && !virURICheckUnixSocket(tempuri))) { virReportInvalidArg(dconnuri, "%s", _("Attempt to migrate guest to the same host")); return -1; diff --git a/src/remote/remote_driver.c b/src/remote/remote_driver.c index 8654046b8ddb..7df995446a3c 100644 --- a/src/remote/remote_driver.c +++ b/src/remote/remote_driver.c @@ -1429,9 +1429,13 @@ remoteConnectOpen(virConnectPtr conn, /* If there's a driver registered we must defer to that. * If there isn't a driver, we must connect in "direct" - * mode - see doRemoteOpen */ + * mode - see doRemoteOpen. + * One exception is if we are trying to connect to an + * unknown socket path as that might be proxied to remote + * host */ if (!conn->uri->server && - virHasDriverForURIScheme(driver)) { + virHasDriverForURIScheme(driver) && + !virURICheckUnixSocket(conn->uri)) { ret = VIR_DRV_OPEN_DECLINED; goto cleanup; } diff --git a/src/util/viruri.c b/src/util/viruri.c index 0112186fdbc4..dd7559662bd2 100644 --- a/src/util/viruri.c +++ b/src/util/viruri.c @@ -393,3 +393,33 @@ virURIGetParam(virURIPtr uri, const char *name) _("Missing URI parameter '%s'"), name); return NULL; } + + +/** + * virURICheckUnixSocket: + * @uri: URI to check + * + * Check if the URI looks like it refers to a non-standard socket path. 
In such + * scenario the socket might be proxied to a remote server even though the URI + * looks like it is only local. + * + * Returns: true if the URI might be proxied to a remote server + */ +bool +virURICheckUnixSocket(virURIPtr uri) +{ + size_t i = 0; + + if (!uri->scheme) + return false; + + if (STRNEQ_NULLABLE(strchr(uri->scheme, '+'), "+unix")) + return false; + + for (i = 0; i < uri->paramsCount; i++) { + if (STREQ(uri->params[i].name, "socket")) + return true; + } + + return false; +} diff --git a/src/util/viruri.h b/src/util/viruri.h index e607ecc109e7..dc4907b55059 100644 --- a/src/util/viruri.h +++ b/src/util/viruri.h @@ -62,4 +62,6 @@ int virURIResolveAlias(virConfPtr conf, const char *alias, char **uri); const char *virURIGetParam(virURIPtr uri, const char *name); +bool virURICheckUnixSocket(virURIPtr uri); + #define VIR_URI_SERVER(uri) ((uri) && (uri)->server ? (uri)->server : "localhost") diff --git a/tests/virmigtest.c b/tests/virmigtest.c index df8d51bab94b..af6643397e90 100644 --- a/tests/virmigtest.c +++ b/tests/virmigtest.c @@ -83,7 +83,7 @@ mymain(void) TEST("scheme://some.cryptorandom.fqdn.tld"); - TEST_FAIL("hehe+unix:///?socket=/path/to/some-sock"); + TEST("hehe+unix:///?socket=/path/to/some-sock"); return ret == 0 ? EXIT_SUCCESS : EXIT_FAILURE; } -- 2.28.0

On Tue, Sep 01, 2020 at 16:36:58 +0200, Martin Kletzander wrote:
Local socket connections were outright disabled because there was no "server" part in the URI. However, given how requirements and usage scenarios are evolving, some management apps might need the source libvirt daemon to connect to the destination daemon over a UNIX socket for peer2peer migration. Since we cannot know where the socket leads (whether to the same daemon or not), let's decide based on whether the socket path is explicitly specified in the URI rather than on whether it is non-standard. Checking for a non-standard path would require asking the daemon for its configuration, and the only misuse that would prevent is a pretty obscure one, so it is not worth it. The assumption is that whoever specifies an explicit UNIX socket path in a migration URI knows what they are doing.
Partially resolves: https://bugzilla.redhat.com/1638889
Signed-off-by: Martin Kletzander <mkletzan@redhat.com> --- docs/manpages/virsh.rst | 9 +++++++++ src/libvirt-domain.c | 8 +++++++- src/remote/remote_driver.c | 8 ++++++-- src/util/viruri.c | 30 ++++++++++++++++++++++++++++++ src/util/viruri.h | 2 ++ tests/virmigtest.c | 2 +- 6 files changed, 55 insertions(+), 4 deletions(-)
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>

This allows: a) migration without access to network b) complete control of the migration stream c) easy migration between containerised libvirt daemons on the same host Resolves: https://bugzilla.redhat.com/1638889 Signed-off-by: Martin Kletzander <mkletzan@redhat.com> --- docs/manpages/virsh.rst | 15 +++- docs/migration.html.in | 33 ++++++++ src/qemu/qemu_migration.c | 138 +++++++++++++++++++++++-------- src/qemu/qemu_migration_params.c | 9 ++ src/qemu/qemu_migration_params.h | 3 + src/qemu/qemu_monitor.c | 15 ++++ src/qemu/qemu_monitor.h | 4 + 7 files changed, 179 insertions(+), 38 deletions(-) diff --git a/docs/manpages/virsh.rst b/docs/manpages/virsh.rst index 4d66019e750b..e3aeaa5c44ea 100644 --- a/docs/manpages/virsh.rst +++ b/docs/manpages/virsh.rst @@ -3242,7 +3242,7 @@ with the qemu driver you could use this: .. code-block:: - qemu+unix://?socket=/path/to/socket + qemu+unix:///system?socket=/path/to/socket When *migrateuri* is not specified, libvirt will automatically determine the hypervisor specific URI. Some hypervisors, including QEMU, have an optional @@ -3270,6 +3270,14 @@ There are a few scenarios where specifying *migrateuri* may help: might be specified to choose a specific port number outside the default range in order to comply with local firewall policies. +* The *desturi* uses UNIX transport method. In this advanced case libvirt + should not guess a *migrateuri* and it should be specified using + UNIX socket path URI: + +.. code-block:: + + unix:///path/to/socket + See `https://libvirt.org/migration.html#uris <https://libvirt.org/migration.html#uris>`_ for more details on migration URIs. @@ -3296,7 +3304,10 @@ specific parameters separated by '&'. Currently recognized parameters are Optional *listen-address* sets the listen address that hypervisor on the destination side should bind to for incoming migration. Both IPv4 and IPv6 addresses are accepted as well as hostnames (the resolving is done on -destination). 
Some hypervisors do not support this feature and will return an +destination). In niche scenarios you can also use UNIX socket to make the +hypervisor connection over UNIX socket in which case you must make sure the +source can connect to the destination using the socket path provided by you. +Some hypervisors do not support specifying the listen address and will return an error if this parameter is used. Optional *disks-port* sets the port that hypervisor on destination side should diff --git a/docs/migration.html.in b/docs/migration.html.in index e95ee9de6f1b..8585dcab6863 100644 --- a/docs/migration.html.in +++ b/docs/migration.html.in @@ -201,6 +201,9 @@ numbers. In the latter case the management application may wish to choose a specific port number outside the default range in order to comply with local firewall policies.</li> + <li>The second URI uses UNIX transport method. In this advanced case + libvirt should not guess a *migrateuri* and it should be specified using + UNIX socket path URI: <code>unix:///path/to/socket</code>.</li> </ol> <h2><a id="config">Configuration file handling</a></h2> @@ -628,5 +631,35 @@ virsh migrate --p2p --tunnelled web1 qemu+ssh://desthost/system qemu+ssh://10.0. Supported by QEMU driver </p> + + <h3><a id="scenariounixsocket">Migration using only UNIX sockets</a></h3> + + <p> + In niche scenarios where libvirt daemon does not have access to the + network (e.g. running in a restricted container on a host that has + accessible network), when a management application wants to have complete + control over the transfer or when migrating between two containers on the + same host all the communication can be done using UNIX sockets. This + includes connecting to non-standard socket path for the destination + daemon, using UNIX sockets for hypervisor's communication or for the NBD + data transfer. All of that can be used with both peer2peer and direct + migration options. 
+ </p> + + <p> + Example using <code>/tmp/migdir</code> as a directory representing the + same path visible from both libvirt daemons. That can be achieved by + bind-mounting the same directory to different containers running separate + daemons or forwarding connections to these sockets manually + (using <code>socat</code>, <code>netcat</code> or a custom piece of + software): + <pre> +virsh migrate web1 [--p2p] --copy-storage-all 'qemu+unix:///system?socket=/tmp/migdir/test-sock-driver' 'unix:///tmp/migdir/test-sock-qemu' [--listen-address /tmp/migdir/test-sock-qemu] --disks-uri unix:///tmp/migdir/test-sock-nbd + </pre> + + <p> + Supported by QEMU driver + </p> + </body> </html> diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c index 7277f2f458a2..665b6da90c56 100644 --- a/src/qemu/qemu_migration.c +++ b/src/qemu/qemu_migration.c @@ -2411,6 +2411,8 @@ qemuMigrationDstPrepare(virDomainObjPtr vm, if (tunnel) { migrateFrom = g_strdup("stdio"); + } else if (g_strcmp0(protocol, "unix") == 0) { + migrateFrom = g_strdup_printf("%s:%s", protocol, listenAddress); } else { bool encloseAddress = false; bool hostIPv6Capable = false; @@ -2995,34 +2997,40 @@ qemuMigrationDstPrepareDirect(virQEMUDriverPtr driver, } if (STRNEQ(uri->scheme, "tcp") && - STRNEQ(uri->scheme, "rdma")) { + STRNEQ(uri->scheme, "rdma") && + STRNEQ(uri->scheme, "unix")) { virReportError(VIR_ERR_ARGUMENT_UNSUPPORTED, _("unsupported scheme %s in migration URI %s"), uri->scheme, uri_in); goto cleanup; } - if (uri->server == NULL) { - virReportError(VIR_ERR_INVALID_ARG, _("missing host in migration" - " URI: %s"), uri_in); - goto cleanup; - } - - if (uri->port == 0) { - if (virPortAllocatorAcquire(driver->migrationPorts, &port) < 0) + if (STREQ(uri->scheme, "unix")) { + autoPort = false; + listenAddress = uri->path; + } else { + if (uri->server == NULL) { + virReportError(VIR_ERR_INVALID_ARG, _("missing host in migration" + " URI: %s"), uri_in); goto cleanup; + } - /* Send well-formed URI 
only if uri_in was well-formed */ - if (well_formed_uri) { - uri->port = port; - if (!(*uri_out = virURIFormat(uri))) + if (uri->port == 0) { + if (virPortAllocatorAcquire(driver->migrationPorts, &port) < 0) goto cleanup; + + /* Send well-formed URI only if uri_in was well-formed */ + if (well_formed_uri) { + uri->port = port; + if (!(*uri_out = virURIFormat(uri))) + goto cleanup; + } else { + *uri_out = g_strdup_printf("%s:%d", uri_in, port); + } } else { - *uri_out = g_strdup_printf("%s:%d", uri_in, port); + port = uri->port; + autoPort = false; } - } else { - port = uri->port; - autoPort = false; } } @@ -3237,6 +3245,8 @@ qemuMigrationSrcConfirm(virQEMUDriverPtr driver, enum qemuMigrationDestinationType { MIGRATION_DEST_HOST, MIGRATION_DEST_CONNECT_HOST, + MIGRATION_DEST_SOCKET, + MIGRATION_DEST_CONNECT_SOCKET, MIGRATION_DEST_FD, }; @@ -3256,6 +3266,10 @@ struct _qemuMigrationSpec { int port; } host; + struct { + const char *path; + } socket; + struct { int qemu; int local; @@ -3470,13 +3484,30 @@ qemuMigrationSrcConnect(virQEMUDriverPtr driver, if (qemuSecuritySetSocketLabel(driver->securityManager, vm->def) < 0) goto cleanup; - port = g_strdup_printf("%d", spec->dest.host.port); - if (virNetSocketNewConnectTCP(spec->dest.host.name, - port, - AF_UNSPEC, - &sock) == 0) { - fd_qemu = virNetSocketDupFD(sock, true); - virObjectUnref(sock); + + switch (spec->destType) { + case MIGRATION_DEST_CONNECT_HOST: + port = g_strdup_printf("%d", spec->dest.host.port); + if (virNetSocketNewConnectTCP(spec->dest.host.name, + port, + AF_UNSPEC, + &sock) == 0) { + fd_qemu = virNetSocketDupFD(sock, true); + virObjectUnref(sock); + } + break; + case MIGRATION_DEST_CONNECT_SOCKET: + if (virNetSocketNewConnectUNIX(spec->dest.socket.path, + false, NULL, + &sock) == 0) { + fd_qemu = virNetSocketDupFD(sock, true); + virObjectUnref(sock); + } + break; + case MIGRATION_DEST_HOST: + case MIGRATION_DEST_SOCKET: + case MIGRATION_DEST_FD: + break; } spec->destType = MIGRATION_DEST_FD; @@ 
-3684,6 +3715,14 @@ qemuMigrationSrcRun(virQEMUDriverPtr driver, if (migrate_flags & (QEMU_MONITOR_MIGRATE_NON_SHARED_DISK | QEMU_MONITOR_MIGRATE_NON_SHARED_INC)) { if (mig->nbd) { + const char *host = ""; + + if (spec->destType == MIGRATION_DEST_HOST || + spec->destType == MIGRATION_DEST_CONNECT_HOST) { + host = spec->dest.host.name; + } + + /* Currently libvirt does not support setting up of the NBD * non-shared storage migration with TLS. As we need to honour the * VIR_MIGRATE_TLS flag, we need to reject such migration until @@ -3697,7 +3736,7 @@ qemuMigrationSrcRun(virQEMUDriverPtr driver, /* This will update migrate_flags on success */ if (qemuMigrationSrcNBDStorageCopy(driver, vm, mig, - spec->dest.host.name, + host, migrate_speed, &migrate_flags, nmigrate_disks, @@ -3745,7 +3784,8 @@ qemuMigrationSrcRun(virQEMUDriverPtr driver, goto exit_monitor; /* connect to the destination qemu if needed */ - if (spec->destType == MIGRATION_DEST_CONNECT_HOST && + if ((spec->destType == MIGRATION_DEST_CONNECT_HOST || + spec->destType == MIGRATION_DEST_CONNECT_SOCKET) && qemuMigrationSrcConnect(driver, vm, spec) < 0) { goto exit_monitor; } @@ -3767,7 +3807,14 @@ qemuMigrationSrcRun(virQEMUDriverPtr driver, spec->dest.host.port); break; + case MIGRATION_DEST_SOCKET: + qemuSecurityDomainSetPathLabel(driver, vm, spec->dest.socket.path, false); + rc = qemuMonitorMigrateToSocket(priv->mon, migrate_flags, + spec->dest.socket.path); + break; + case MIGRATION_DEST_CONNECT_HOST: + case MIGRATION_DEST_CONNECT_SOCKET: /* handled above and transformed into MIGRATION_DEST_FD */ break; @@ -3983,16 +4030,35 @@ qemuMigrationSrcPerformNative(virQEMUDriverPtr driver, } } - /* RDMA and multi-fd migration requires QEMU to connect to the destination - * itself. 
- */ - if (STREQ(uribits->scheme, "rdma") || (flags & VIR_MIGRATE_PARALLEL)) - spec.destType = MIGRATION_DEST_HOST; - else - spec.destType = MIGRATION_DEST_CONNECT_HOST; - spec.dest.host.protocol = uribits->scheme; - spec.dest.host.name = uribits->server; - spec.dest.host.port = uribits->port; + if (STREQ(uribits->scheme, "unix")) { + if ((flags & VIR_MIGRATE_TLS) && + !qemuMigrationParamsTLSHostnameIsSet(migParams)) { + virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s", + _("Explicit destination hostname is required " + "for TLS migration over UNIX socket")); + return -1; + } + + if (flags & VIR_MIGRATE_PARALLEL) + spec.destType = MIGRATION_DEST_SOCKET; + else + spec.destType = MIGRATION_DEST_CONNECT_SOCKET; + + spec.dest.socket.path = uribits->path; + } else { + /* RDMA and multi-fd migration requires QEMU to connect to the destination + * itself. + */ + if (STREQ(uribits->scheme, "rdma") || (flags & VIR_MIGRATE_PARALLEL)) + spec.destType = MIGRATION_DEST_HOST; + else + spec.destType = MIGRATION_DEST_CONNECT_HOST; + + spec.dest.host.protocol = uribits->scheme; + spec.dest.host.name = uribits->server; + spec.dest.host.port = uribits->port; + } + spec.fwdType = MIGRATION_FWD_DIRECT; ret = qemuMigrationSrcRun(driver, vm, persist_xml, cookiein, cookieinlen, cookieout, diff --git a/src/qemu/qemu_migration_params.c b/src/qemu/qemu_migration_params.c index 231a8a2ee83e..5fde915963ec 100644 --- a/src/qemu/qemu_migration_params.c +++ b/src/qemu/qemu_migration_params.c @@ -1008,6 +1008,15 @@ qemuMigrationParamsDisableTLS(virDomainObjPtr vm, } +bool +qemuMigrationParamsTLSHostnameIsSet(qemuMigrationParamsPtr migParams) +{ + int param = QEMU_MIGRATION_PARAM_TLS_HOSTNAME; + return (migParams->params[param].set && + STRNEQ(migParams->params[param].value.s, "")); +} + + /* qemuMigrationParamsResetTLS * @driver: pointer to qemu driver * @vm: domain object diff --git a/src/qemu/qemu_migration_params.h b/src/qemu/qemu_migration_params.h index 9aea24725f0a..9876101bfc4a 100644 --- 
a/src/qemu/qemu_migration_params.h +++ b/src/qemu/qemu_migration_params.h @@ -113,6 +113,9 @@ int qemuMigrationParamsDisableTLS(virDomainObjPtr vm, qemuMigrationParamsPtr migParams); +bool +qemuMigrationParamsTLSHostnameIsSet(qemuMigrationParamsPtr migParams); + int qemuMigrationParamsFetch(virQEMUDriverPtr driver, virDomainObjPtr vm, diff --git a/src/qemu/qemu_monitor.c b/src/qemu/qemu_monitor.c index 718ac50c0898..ab3bcc761e9a 100644 --- a/src/qemu/qemu_monitor.c +++ b/src/qemu/qemu_monitor.c @@ -2555,6 +2555,21 @@ qemuMonitorMigrateToHost(qemuMonitorPtr mon, } +int +qemuMonitorMigrateToSocket(qemuMonitorPtr mon, + unsigned int flags, + const char *socketPath) +{ + g_autofree char *uri = g_strdup_printf("unix:%s", socketPath); + + VIR_DEBUG("socketPath=%s flags=0x%x", socketPath, flags); + + QEMU_CHECK_MONITOR(mon); + + return qemuMonitorJSONMigrate(mon, flags, uri); +} + + int qemuMonitorMigrateCancel(qemuMonitorPtr mon) { diff --git a/src/qemu/qemu_monitor.h b/src/qemu/qemu_monitor.h index d20a15c202db..3e4ef7e821a6 100644 --- a/src/qemu/qemu_monitor.h +++ b/src/qemu/qemu_monitor.h @@ -853,6 +853,10 @@ int qemuMonitorMigrateToHost(qemuMonitorPtr mon, const char *hostname, int port); +int qemuMonitorMigrateToSocket(qemuMonitorPtr mon, + unsigned int flags, + const char *socketPath); + int qemuMonitorMigrateCancel(qemuMonitorPtr mon); int qemuMonitorGetDumpGuestMemoryCapability(qemuMonitorPtr mon, -- 2.28.0
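The destination-type selection that this patch adds to `qemuMigrationSrcPerformNative` can be summarized as a small decision table. The sketch below is an illustrative Python rendering of that C switch, with strings standing in for the `MIGRATION_DEST_*` enum values; it is not libvirt code:

```python
def pick_dest_type(scheme, parallel=False, tls=False, tls_hostname_set=False):
    """Sketch of the destination-type selection in
    qemuMigrationSrcPerformNative after this patch."""
    if scheme == "unix":
        # QEMU cannot verify a TLS server certificate against a socket
        # path, so an explicit tls-destination hostname is required
        if tls and not tls_hostname_set:
            raise ValueError("Explicit destination hostname is required "
                             "for TLS migration over UNIX socket")
        # multi-fd (parallel) migration requires QEMU to connect to the
        # destination itself instead of libvirt passing in an FD
        if parallel:
            return "MIGRATION_DEST_SOCKET"
        return "MIGRATION_DEST_CONNECT_SOCKET"
    # same rule for the host case: RDMA and multi-fd need QEMU to connect
    if scheme == "rdma" or parallel:
        return "MIGRATION_DEST_HOST"
    return "MIGRATION_DEST_CONNECT_HOST"
```

In the `CONNECT_*` cases libvirt opens the connection itself in `qemuMigrationSrcConnect` and hands QEMU a file descriptor, which is what the new `MIGRATION_DEST_CONNECT_SOCKET` branch does over `virNetSocketNewConnectUNIX`.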

On Tue, Sep 01, 2020 at 04:36:59PM +0200, Martin Kletzander wrote:
This allows:
a) migration without access to network
b) complete control of the migration stream
c) easy migration between containerised libvirt daemons on the same host
Resolves: https://bugzilla.redhat.com/1638889
Signed-off-by: Martin Kletzander <mkletzan@redhat.com> --- docs/manpages/virsh.rst | 15 +++- docs/migration.html.in | 33 ++++++++ src/qemu/qemu_migration.c | 138 +++++++++++++++++++++++-------- src/qemu/qemu_migration_params.c | 9 ++ src/qemu/qemu_migration_params.h | 3 + src/qemu/qemu_monitor.c | 15 ++++ src/qemu/qemu_monitor.h | 4 + 7 files changed, 179 insertions(+), 38 deletions(-)
diff --git a/docs/manpages/virsh.rst b/docs/manpages/virsh.rst index 4d66019e750b..e3aeaa5c44ea 100644 --- a/docs/manpages/virsh.rst +++ b/docs/manpages/virsh.rst @@ -3242,7 +3242,7 @@ with the qemu driver you could use this:
.. code-block::
-   qemu+unix://?socket=/path/to/socket
+   qemu+unix:///system?socket=/path/to/socket
When *migrateuri* is not specified, libvirt will automatically determine the hypervisor specific URI. Some hypervisors, including QEMU, have an optional @@ -3270,6 +3270,14 @@ There are a few scenarios where specifying *migrateuri* may help: might be specified to choose a specific port number outside the default range in order to comply with local firewall policies.
+* The *desturi* uses UNIX transport method. In this advanced case libvirt + should not guess a *migrateuri* and it should be specified using + UNIX socket path URI: + +.. code-block:: + + unix:///path/to/socket + See `https://libvirt.org/migration.html#uris <https://libvirt.org/migration.html#uris>`_ for more details on migration URIs.
@@ -3296,7 +3304,10 @@ specific parameters separated by '&'. Currently recognized parameters are Optional *listen-address* sets the listen address that hypervisor on the destination side should bind to for incoming migration. Both IPv4 and IPv6 addresses are accepted as well as hostnames (the resolving is done on -destination). Some hypervisors do not support this feature and will return an +destination). In niche scenarios you can also use UNIX socket to make the +hypervisor connection over UNIX socket in which case you must make sure the +source can connect to the destination using the socket path provided by you. +Some hypervisors do not support specifying the listen address and will return an error if this parameter is used.
Optional *disks-port* sets the port that hypervisor on destination side should diff --git a/docs/migration.html.in b/docs/migration.html.in index e95ee9de6f1b..8585dcab6863 100644 --- a/docs/migration.html.in +++ b/docs/migration.html.in @@ -201,6 +201,9 @@ numbers. In the latter case the management application may wish to choose a specific port number outside the default range in order to comply with local firewall policies.</li> + <li>The second URI uses UNIX transport method. In this advanced case + libvirt should not guess a *migrateuri* and it should be specified using + UNIX socket path URI: <code>unix:///path/to/socket</code>.</li> </ol>
<h2><a id="config">Configuration file handling</a></h2> @@ -628,5 +631,35 @@ virsh migrate --p2p --tunnelled web1 qemu+ssh://desthost/system qemu+ssh://10.0. Supported by QEMU driver </p>
+ + <h3><a id="scenariounixsocket">Migration using only UNIX sockets</a></h3> + + <p> + In niche scenarios where libvirt daemon does not have access to the + network (e.g. running in a restricted container on a host that has + accessible network), when a management application wants to have complete + control over the transfer or when migrating between two containers on the + same host all the communication can be done using UNIX sockets. This + includes connecting to non-standard socket path for the destination + daemon, using UNIX sockets for hypervisor's communication or for the NBD + data transfer. All of that can be used with both peer2peer and direct + migration options. + </p> + + <p> + Example using <code>/tmp/migdir</code> as a directory representing the + same path visible from both libvirt daemons. That can be achieved by + bind-mounting the same directory to different containers running separate + daemons or forwarding connections to these sockets manually + (using <code>socat</code>, <code>netcat</code> or a custom piece of + software): + <pre> +virsh migrate web1 [--p2p] --copy-storage-all 'qemu+unix:///system?socket=/tmp/migdir/test-sock-driver' 'unix:///tmp/migdir/test-sock-qemu' [--listen-address /tmp/migdir/test-sock-qemu] --disks-uri unix:///tmp/migdir/test-sock-nbd
Shouldn't the --listen-address arg be the same URI as the previous arg?

Did we need a separate listen address parameter for disks too?

Regards,
Daniel

--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
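The docs hunk under discussion suggests forwarding the sockets manually with socat, netcat or custom software. That amounts to relaying bytes between two UNIX socket paths. A minimal Python sketch of such a one-shot relay (paths and function name are illustrative, not part of libvirt):

```python
# Stand-in for the "socat/netcat or a custom piece of software" forwarding
# mentioned in the docs: accept one connection on listen_path and relay it
# bidirectionally to the UNIX socket at connect_path.
import socket
import threading

def forward_once(listen_path, connect_path):
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(listen_path)
    srv.listen(1)
    conn, _ = srv.accept()
    out = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    out.connect(connect_path)

    def pump(src, dst):
        # Copy until EOF, then propagate the half-close to the other side.
        while (data := src.recv(65536)):
            dst.sendall(data)
        dst.shutdown(socket.SHUT_WR)

    t = threading.Thread(target=pump, args=(conn, out))
    t.start()
    pump(out, conn)      # relay the reverse direction in this thread
    t.join()
    conn.close()
    out.close()
    srv.close()
```

A real deployment would use `socat UNIX-LISTEN:...,fork UNIX-CONNECT:...` instead; the sketch only shows what that forwarding does.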

On Wed, Sep 02, 2020 at 09:12:36AM +0100, Daniel P. Berrangé wrote:
On Tue, Sep 01, 2020 at 04:36:59PM +0200, Martin Kletzander wrote:
This allows:
a) migration without access to network
b) complete control of the migration stream
c) easy migration between containerised libvirt daemons on the same host
Resolves: https://bugzilla.redhat.com/1638889
Signed-off-by: Martin Kletzander <mkletzan@redhat.com> --- docs/manpages/virsh.rst | 15 +++- docs/migration.html.in | 33 ++++++++ src/qemu/qemu_migration.c | 138 +++++++++++++++++++++++-------- src/qemu/qemu_migration_params.c | 9 ++ src/qemu/qemu_migration_params.h | 3 + src/qemu/qemu_monitor.c | 15 ++++ src/qemu/qemu_monitor.h | 4 + 7 files changed, 179 insertions(+), 38 deletions(-)
diff --git a/docs/manpages/virsh.rst b/docs/manpages/virsh.rst index 4d66019e750b..e3aeaa5c44ea 100644 --- a/docs/manpages/virsh.rst +++ b/docs/manpages/virsh.rst @@ -3242,7 +3242,7 @@ with the qemu driver you could use this:
.. code-block::
-   qemu+unix://?socket=/path/to/socket
+   qemu+unix:///system?socket=/path/to/socket
When *migrateuri* is not specified, libvirt will automatically determine the hypervisor specific URI. Some hypervisors, including QEMU, have an optional @@ -3270,6 +3270,14 @@ There are a few scenarios where specifying *migrateuri* may help: might be specified to choose a specific port number outside the default range in order to comply with local firewall policies.
+* The *desturi* uses UNIX transport method. In this advanced case libvirt + should not guess a *migrateuri* and it should be specified using + UNIX socket path URI: + +.. code-block:: + + unix:///path/to/socket + See `https://libvirt.org/migration.html#uris <https://libvirt.org/migration.html#uris>`_ for more details on migration URIs.
@@ -3296,7 +3304,10 @@ specific parameters separated by '&'. Currently recognized parameters are Optional *listen-address* sets the listen address that hypervisor on the destination side should bind to for incoming migration. Both IPv4 and IPv6 addresses are accepted as well as hostnames (the resolving is done on -destination). Some hypervisors do not support this feature and will return an +destination). In niche scenarios you can also use UNIX socket to make the +hypervisor connection over UNIX socket in which case you must make sure the +source can connect to the destination using the socket path provided by you. +Some hypervisors do not support specifying the listen address and will return an error if this parameter is used.
This hunk could be dropped.
Optional *disks-port* sets the port that hypervisor on destination side should diff --git a/docs/migration.html.in b/docs/migration.html.in index e95ee9de6f1b..8585dcab6863 100644 --- a/docs/migration.html.in +++ b/docs/migration.html.in @@ -201,6 +201,9 @@ numbers. In the latter case the management application may wish to choose a specific port number outside the default range in order to comply with local firewall policies.</li> + <li>The second URI uses UNIX transport method. In this advanced case + libvirt should not guess a *migrateuri* and it should be specified using + UNIX socket path URI: <code>unix:///path/to/socket</code>.</li> </ol>
<h2><a id="config">Configuration file handling</a></h2> @@ -628,5 +631,35 @@ virsh migrate --p2p --tunnelled web1 qemu+ssh://desthost/system qemu+ssh://10.0. Supported by QEMU driver </p>
+ + <h3><a id="scenariounixsocket">Migration using only UNIX sockets</a></h3> + + <p> + In niche scenarios where libvirt daemon does not have access to the + network (e.g. running in a restricted container on a host that has + accessible network), when a management application wants to have complete + control over the transfer or when migrating between two containers on the + same host all the communication can be done using UNIX sockets. This + includes connecting to non-standard socket path for the destination + daemon, using UNIX sockets for hypervisor's communication or for the NBD + data transfer. All of that can be used with both peer2peer and direct + migration options. + </p> + + <p> + Example using <code>/tmp/migdir</code> as a directory representing the + same path visible from both libvirt daemons. That can be achieved by + bind-mounting the same directory to different containers running separate + daemons or forwarding connections to these sockets manually + (using <code>socat</code>, <code>netcat</code> or a custom piece of + software): + <pre> +virsh migrate web1 [--p2p] --copy-storage-all 'qemu+unix:///system?socket=/tmp/migdir/test-sock-driver' 'unix:///tmp/migdir/test-sock-qemu' [--listen-address /tmp/migdir/test-sock-qemu] --disks-uri unix:///tmp/migdir/test-sock-nbd
Shouldn't the --listen-address arg be the same URI as the previous arg?
No, it is a leftover from a previous version, but we did not need it at all. I think I used it for different socket paths on the source and destination, but that is stupid. I'll remove the listen-address references.
Did we need a separate listen address parameter for disks too ?
No, although after this series it is possible for disks to be migrated over a different network path using `--disks-uri tcp://<ip_address>[:<port>]/`.
Regards,
Daniel
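The `--disks-uri` value mentioned above selects where the destination's NBD server listens. A sketch of how such a URI might be decomposed into NBD listen parameters (this is not libvirt's actual parser, just an illustration of the two accepted shapes):

```python
# Illustrative decomposition of a --disks-uri value into NBD server
# listen parameters: either tcp://<ip>[:<port>]/ or unix:///path.
from urllib.parse import urlparse

def parse_disks_uri(uri):
    u = urlparse(uri)
    if u.scheme == "unix":
        if not u.path:
            raise ValueError("unix disks-uri needs a socket path")
        return {"transport": "unix", "socket": u.path}
    if u.scheme == "tcp":
        if not u.hostname:
            raise ValueError("tcp disks-uri needs a host")
        # Port is optional; a port of None would mean "pick one from the
        # configured migration port range".
        return {"transport": "tcp", "host": u.hostname, "port": u.port}
    raise ValueError("unsupported disks-uri scheme: %s" % u.scheme)
```

So `unix:///tmp/migdir/test-sock-nbd` from the example maps to a UNIX listen socket, while `tcp://192.168.0.5:49153/` pins both the address and the port.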

On Tue, Sep 01, 2020 at 16:36:59 +0200, Martin Kletzander wrote:
This allows:
a) migration without access to network
b) complete control of the migration stream
c) easy migration between containerised libvirt daemons on the same host
Resolves: https://bugzilla.redhat.com/1638889
Signed-off-by: Martin Kletzander <mkletzan@redhat.com> --- docs/manpages/virsh.rst | 15 +++- docs/migration.html.in | 33 ++++++++ src/qemu/qemu_migration.c | 138 +++++++++++++++++++++++-------- src/qemu/qemu_migration_params.c | 9 ++ src/qemu/qemu_migration_params.h | 3 + src/qemu/qemu_monitor.c | 15 ++++ src/qemu/qemu_monitor.h | 4 + 7 files changed, 179 insertions(+), 38 deletions(-)
diff --git a/docs/manpages/virsh.rst b/docs/manpages/virsh.rst index 4d66019e750b..e3aeaa5c44ea 100644 --- a/docs/manpages/virsh.rst +++ b/docs/manpages/virsh.rst @@ -3242,7 +3242,7 @@ with the qemu driver you could use this:
.. code-block::
-   qemu+unix://?socket=/path/to/socket
+   qemu+unix:///system?socket=/path/to/socket
When *migrateuri* is not specified, libvirt will automatically determine the hypervisor specific URI. Some hypervisors, including QEMU, have an optional
Ah, again I didn't notice the incorrect URI until I saw this patch... This hunk should go into the previous patch.
@@ -3270,6 +3270,14 @@ There are a few scenarios where specifying *migrateuri* may help: might be specified to choose a specific port number outside the default range in order to comply with local firewall policies.
+* The *desturi* uses UNIX transport method. In this advanced case libvirt + should not guess a *migrateuri* and it should be specified using + UNIX socket path URI: + +.. code-block:: + + unix:///path/to/socket + See `https://libvirt.org/migration.html#uris <https://libvirt.org/migration.html#uris>`_ for more details on migration URIs.
@@ -3296,7 +3304,10 @@ specific parameters separated by '&'. Currently recognized parameters are Optional *listen-address* sets the listen address that hypervisor on the destination side should bind to for incoming migration. Both IPv4 and IPv6 addresses are accepted as well as hostnames (the resolving is done on -destination). Some hypervisors do not support this feature and will return an +destination). In niche scenarios you can also use UNIX socket to make the +hypervisor connection over UNIX socket in which case you must make sure the +source can connect to the destination using the socket path provided by you. +Some hypervisors do not support specifying the listen address and will return an error if this parameter is used.
Optional *disks-port* sets the port that hypervisor on destination side should diff --git a/docs/migration.html.in b/docs/migration.html.in index e95ee9de6f1b..8585dcab6863 100644 --- a/docs/migration.html.in +++ b/docs/migration.html.in @@ -201,6 +201,9 @@ numbers. In the latter case the management application may wish to choose a specific port number outside the default range in order to comply with local firewall policies.</li> + <li>The second URI uses UNIX transport method. In this advanced case + libvirt should not guess a *migrateuri* and it should be specified using + UNIX socket path URI: <code>unix:///path/to/socket</code>.</li> </ol>
<h2><a id="config">Configuration file handling</a></h2> @@ -628,5 +631,35 @@ virsh migrate --p2p --tunnelled web1 qemu+ssh://desthost/system qemu+ssh://10.0. Supported by QEMU driver </p>
+ + <h3><a id="scenariounixsocket">Migration using only UNIX sockets</a></h3> + + <p> + In niche scenarios where libvirt daemon does not have access to the + network (e.g. running in a restricted container on a host that has + accessible network), when a management application wants to have complete + control over the transfer or when migrating between two containers on the + same host all the communication can be done using UNIX sockets. This + includes connecting to non-standard socket path for the destination + daemon, using UNIX sockets for hypervisor's communication or for the NBD + data transfer. All of that can be used with both peer2peer and direct + migration options. + </p> + + <p> + Example using <code>/tmp/migdir</code> as a directory representing the + same path visible from both libvirt daemons. That can be achieved by + bind-mounting the same directory to different containers running separate + daemons or forwarding connections to these sockets manually + (using <code>socat</code>, <code>netcat</code> or a custom piece of + software): + <pre> +virsh migrate web1 [--p2p] --copy-storage-all 'qemu+unix:///system?socket=/tmp/migdir/test-sock-driver' 'unix:///tmp/migdir/test-sock-qemu' [--listen-address /tmp/migdir/test-sock-qemu] --disks-uri unix:///tmp/migdir/test-sock-nbd
The listen address is a bit strange. With TCP this is the IP address QEMU on the destination host will be listening on for incoming migration. But do we need this at all for UNIX sockets? I think coming up with a socket path which would be available on both hosts shouldn't be too hard. The ability to use different sockets on each side would provide greater flexibility, but I'm not sure it's worth the effort.

Since normally the listen address is just an IP address, using a socket path here seems OK as it is the address of a UNIX socket, but it looks confusing. I don't think forcing unix:///socket/path as the listen address would make things significantly better. And creating a new listen URI parameter doesn't sound like a good idea either.

That said, I would either drop support for UNIX sockets in the listen address or keep it the way you implemented it. If you decide to keep it, proper documentation would be appreciated. Mentioning it only as an optional (and redundant in this case) argument in the example doesn't seem optimal at all.

And after reading the rest of the patch, I can see we gained support for socket paths in the listen address for free. Forbidding it would require an explicit check for a valid listen IP address. So I think we could just enhance the documentation of the listen address to mention UNIX sockets and keep it supported.
+ </pre> + + <p> + Supported by QEMU driver + </p> + </body> </html> ...
The rest of the patch is OK.

Jirka

On Wed, Sep 02, 2020 at 11:53:37 +0200, Jiri Denemark wrote:
On Tue, Sep 01, 2020 at 16:36:59 +0200, Martin Kletzander wrote:
This allows:
a) migration without access to network
b) complete control of the migration stream
c) easy migration between containerised libvirt daemons on the same host
Resolves: https://bugzilla.redhat.com/1638889
Signed-off-by: Martin Kletzander <mkletzan@redhat.com> --- docs/manpages/virsh.rst | 15 +++- docs/migration.html.in | 33 ++++++++ src/qemu/qemu_migration.c | 138 +++++++++++++++++++++++-------- src/qemu/qemu_migration_params.c | 9 ++ src/qemu/qemu_migration_params.h | 3 + src/qemu/qemu_monitor.c | 15 ++++ src/qemu/qemu_monitor.h | 4 + 7 files changed, 179 insertions(+), 38 deletions(-)
diff --git a/docs/manpages/virsh.rst b/docs/manpages/virsh.rst index 4d66019e750b..e3aeaa5c44ea 100644 --- a/docs/manpages/virsh.rst +++ b/docs/manpages/virsh.rst @@ -3242,7 +3242,7 @@ with the qemu driver you could use this:
.. code-block::
-   qemu+unix://?socket=/path/to/socket
+   qemu+unix:///system?socket=/path/to/socket
When *migrateuri* is not specified, libvirt will automatically determine the hypervisor specific URI. Some hypervisors, including QEMU, have an optional
Ah, again I didn't notice the incorrect URI until I saw this patch... This hunk should go into the previous patch.
@@ -3270,6 +3270,14 @@ There are a few scenarios where specifying *migrateuri* may help: might be specified to choose a specific port number outside the default range in order to comply with local firewall policies.
+* The *desturi* uses UNIX transport method. In this advanced case libvirt + should not guess a *migrateuri* and it should be specified using + UNIX socket path URI: + +.. code-block:: + + unix:///path/to/socket + See `https://libvirt.org/migration.html#uris <https://libvirt.org/migration.html#uris>`_ for more details on migration URIs.
@@ -3296,7 +3304,10 @@ specific parameters separated by '&'. Currently recognized parameters are Optional *listen-address* sets the listen address that hypervisor on the destination side should bind to for incoming migration. Both IPv4 and IPv6 addresses are accepted as well as hostnames (the resolving is done on -destination). Some hypervisors do not support this feature and will return an +destination). In niche scenarios you can also use UNIX socket to make the +hypervisor connection over UNIX socket in which case you must make sure the +source can connect to the destination using the socket path provided by you. +Some hypervisors do not support specifying the listen address and will return an error if this parameter is used.
Optional *disks-port* sets the port that hypervisor on destination side should diff --git a/docs/migration.html.in b/docs/migration.html.in index e95ee9de6f1b..8585dcab6863 100644 --- a/docs/migration.html.in +++ b/docs/migration.html.in @@ -201,6 +201,9 @@ numbers. In the latter case the management application may wish to choose a specific port number outside the default range in order to comply with local firewall policies.</li> + <li>The second URI uses UNIX transport method. In this advanced case + libvirt should not guess a *migrateuri* and it should be specified using + UNIX socket path URI: <code>unix:///path/to/socket</code>.</li> </ol>
<h2><a id="config">Configuration file handling</a></h2> @@ -628,5 +631,35 @@ virsh migrate --p2p --tunnelled web1 qemu+ssh://desthost/system qemu+ssh://10.0. Supported by QEMU driver </p>
+ + <h3><a id="scenariounixsocket">Migration using only UNIX sockets</a></h3> + + <p> + In niche scenarios where libvirt daemon does not have access to the + network (e.g. running in a restricted container on a host that has + accessible network), when a management application wants to have complete + control over the transfer or when migrating between two containers on the + same host all the communication can be done using UNIX sockets. This + includes connecting to non-standard socket path for the destination + daemon, using UNIX sockets for hypervisor's communication or for the NBD + data transfer. All of that can be used with both peer2peer and direct + migration options. + </p> + + <p> + Example using <code>/tmp/migdir</code> as a directory representing the + same path visible from both libvirt daemons. That can be achieved by + bind-mounting the same directory to different containers running separate + daemons or forwarding connections to these sockets manually + (using <code>socat</code>, <code>netcat</code> or a custom piece of + software): + <pre> +virsh migrate web1 [--p2p] --copy-storage-all 'qemu+unix:///system?socket=/tmp/migdir/test-sock-driver' 'unix:///tmp/migdir/test-sock-qemu' [--listen-address /tmp/migdir/test-sock-qemu] --disks-uri unix:///tmp/migdir/test-sock-nbd
The listen address is a bit strange. With TCP this is the IP address QEMU on the destination host will be listening for incoming migration. But do we need this at all for UNIX sockets? I think coming up with a socket path which would be available on both hosts shouldn't be too hard. Ability to use different sockets on each side would provide greater flexibility, but I'm not sure it's worth the effort. Since normally listen address is just an IP address, using socket path here seems to be OK as it is the address of a UNIX socket, but it looks confusing. I don't think forcing unix:///socket/path as the listen address would make things significantly better. And creating a new listen URI parameter doesn't sound like a good idea either.
That said, I would either drop support for UNIX sockets in listen address or keep it the way you implemented it. If you decide to keep it, proper documentation would be appreciated. Mentioning it only as an optional (and redundant in this case) argument in the example doesn't seem optimal at all.
And after reading the rest of the patch, I can see we gained the support for socket path in listen address for free. Forbidding it would require an explicit check for a valid listen IP address. So I think we could just enhance the documentation of the listen address to mention UNIX sockets and keep it supported.
So let me correct myself. I missed the line which overrides any listenAddress passed via a parameter with the path from the unix:// URI. I guess you can report an error at that point. And, of course, drop the --listen-address usage from the example.

Jirka

On Wed, Sep 02, 2020 at 11:53:37AM +0200, Jiri Denemark wrote:
On Tue, Sep 01, 2020 at 16:36:59 +0200, Martin Kletzander wrote:
This allows:
a) migration without access to network
b) complete control of the migration stream
c) easy migration between containerised libvirt daemons on the same host
Resolves: https://bugzilla.redhat.com/1638889
Signed-off-by: Martin Kletzander <mkletzan@redhat.com> --- docs/manpages/virsh.rst | 15 +++- docs/migration.html.in | 33 ++++++++ src/qemu/qemu_migration.c | 138 +++++++++++++++++++++++-------- src/qemu/qemu_migration_params.c | 9 ++ src/qemu/qemu_migration_params.h | 3 + src/qemu/qemu_monitor.c | 15 ++++ src/qemu/qemu_monitor.h | 4 + 7 files changed, 179 insertions(+), 38 deletions(-)
diff --git a/docs/manpages/virsh.rst b/docs/manpages/virsh.rst index 4d66019e750b..e3aeaa5c44ea 100644 --- a/docs/manpages/virsh.rst +++ b/docs/manpages/virsh.rst @@ -3242,7 +3242,7 @@ with the qemu driver you could use this:
.. code-block::
-   qemu+unix://?socket=/path/to/socket
+   qemu+unix:///system?socket=/path/to/socket
When *migrateuri* is not specified, libvirt will automatically determine the hypervisor specific URI. Some hypervisors, including QEMU, have an optional
Ah, again I didn't notice the incorrect URI until I saw this patch... This hunk should go into the previous patch.
My bad, I thought I did that.
@@ -3270,6 +3270,14 @@ There are a few scenarios where specifying *migrateuri* may help: might be specified to choose a specific port number outside the default range in order to comply with local firewall policies.
+* The *desturi* uses UNIX transport method. In this advanced case libvirt + should not guess a *migrateuri* and it should be specified using + UNIX socket path URI: + +.. code-block:: + + unix:///path/to/socket + See `https://libvirt.org/migration.html#uris <https://libvirt.org/migration.html#uris>`_ for more details on migration URIs.
@@ -3296,7 +3304,10 @@ specific parameters separated by '&'. Currently recognized parameters are Optional *listen-address* sets the listen address that hypervisor on the destination side should bind to for incoming migration. Both IPv4 and IPv6 addresses are accepted as well as hostnames (the resolving is done on -destination). Some hypervisors do not support this feature and will return an +destination). In niche scenarios you can also use UNIX socket to make the +hypervisor connection over UNIX socket in which case you must make sure the +source can connect to the destination using the socket path provided by you. +Some hypervisors do not support specifying the listen address and will return an error if this parameter is used.
Optional *disks-port* sets the port that hypervisor on destination side should diff --git a/docs/migration.html.in b/docs/migration.html.in index e95ee9de6f1b..8585dcab6863 100644 --- a/docs/migration.html.in +++ b/docs/migration.html.in @@ -201,6 +201,9 @@ numbers. In the latter case the management application may wish to choose a specific port number outside the default range in order to comply with local firewall policies.</li> + <li>The second URI uses UNIX transport method. In this advanced case + libvirt should not guess a *migrateuri* and it should be specified using + UNIX socket path URI: <code>unix:///path/to/socket</code>.</li> </ol>
<h2><a id="config">Configuration file handling</a></h2> @@ -628,5 +631,35 @@ virsh migrate --p2p --tunnelled web1 qemu+ssh://desthost/system qemu+ssh://10.0. Supported by QEMU driver </p>
+ + <h3><a id="scenariounixsocket">Migration using only UNIX sockets</a></h3> + + <p> + In niche scenarios where libvirt daemon does not have access to the + network (e.g. running in a restricted container on a host that has + accessible network), when a management application wants to have complete + control over the transfer or when migrating between two containers on the + same host all the communication can be done using UNIX sockets. This + includes connecting to non-standard socket path for the destination + daemon, using UNIX sockets for hypervisor's communication or for the NBD + data transfer. All of that can be used with both peer2peer and direct + migration options. + </p> + + <p> + Example using <code>/tmp/migdir</code> as a directory representing the + same path visible from both libvirt daemons. That can be achieved by + bind-mounting the same directory to different containers running separate + daemons or forwarding connections to these sockets manually + (using <code>socat</code>, <code>netcat</code> or a custom piece of + software): + <pre> +virsh migrate web1 [--p2p] --copy-storage-all 'qemu+unix:///system?socket=/tmp/migdir/test-sock-driver' 'unix:///tmp/migdir/test-sock-qemu' [--listen-address /tmp/migdir/test-sock-qemu] --disks-uri unix:///tmp/migdir/test-sock-nbd
The listen address is a bit strange. With TCP this is the IP address QEMU on the destination host will be listening for incoming migration. But do we need this at all for UNIX sockets? I think coming up with a socket path which would be available on both hosts shouldn't be too hard. Ability to use different sockets on each side would provide greater flexibility, but I'm not sure it's worth the effort. Since normally listen address is just an IP address, using socket path here seems to be OK as it is the address of a UNIX socket, but it looks confusing. I don't think forcing unix:///socket/path as the listen address would make things significantly better. And creating a new listen URI parameter doesn't sound like a good idea either.
That said, I would either drop support for UNIX sockets in listen address or keep it the way you implemented it. If you decide to keep it, proper documentation would be appreciated. Mentioning it only as an optional (and redundant in this case) argument in the example doesn't seem optimal at all.
And after reading the rest of the patch, I can see we gained the support for socket path in listen address for free. Forbidding it would require an explicit check for a valid listen IP address. So I think we could just enhance the documentation of the listen address to mention UNIX sockets and keep it supported.
Nope, it gets replaced by the URI path if STREQ(uri->scheme, "unix"), so it gets silently ignored. I dropped the hunk adding docs about it and removed the mention from `docs/migration.html.in`.

Now I can either document that it is ignored for a desturi with the unix transport, or document that the two are mutually exclusive and explicitly forbid the combination. I already did the former, but due to future corner cases I should probably do the latter. Let me know which one you would rather see in v3 (of this and the next patch only; the previous ones are correct, so I'll push them as is).

Thanks for the review.
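The stricter of the two options discussed here (mutual exclusion) can be sketched as a small validation step; the function and its return value are invented for illustration, not libvirt code:

```python
# Sketch of the "explicitly forbid" option: reject a listen-address when
# the migration URI uses the UNIX transport instead of silently replacing
# it with the socket path from the URI. Names are invented.
from urllib.parse import urlparse

def check_listen_address(migrate_uri, listen_address):
    """Return the effective listen address/path for the destination."""
    u = urlparse(migrate_uri)
    if u.scheme == "unix":
        if listen_address is not None:
            raise ValueError("listen-address is not supported with "
                             "UNIX socket migration URIs")
        # The socket path from the URI is the only sensible "address".
        return u.path
    return listen_address
```

With the other option (documented ignoring), the `raise` would simply be dropped and the URI path would always win for unix:// URIs, which is the behaviour the patch currently has.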
+ </pre> + + <p> + Supported by QEMU driver + </p> + </body> </html> ...
The rest of the patch is OK.
Jirka

Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
---
 NEWS.rst | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/NEWS.rst b/NEWS.rst
index 302b7f477904..52781b7e1677 100644
--- a/NEWS.rst
+++ b/NEWS.rst
@@ -71,6 +71,12 @@ v6.7.0 (2020-09-01)
     forbidden and no size auto-alignment will be made. Instead, libvirt will
     suggest an aligned round up size for the user.

+  * qemu: Allow migration over UNIX sockets
+
+    QEMU migration can now be performed completely over UNIX sockets. This is
+    useful for containerised scenarios and can be used in both peer2peer and
+    direct migrations.
+
 * **Bug fixes**

   * virdevmapper: Deal with kernels without DM support
--
2.28.0

On Tue, Sep 01, 2020 at 16:37:00 +0200, Martin Kletzander wrote:
Signed-off-by: Martin Kletzander <mkletzan@redhat.com> --- NEWS.rst | 6 ++++++ 1 file changed, 6 insertions(+)
diff --git a/NEWS.rst b/NEWS.rst index 302b7f477904..52781b7e1677 100644 --- a/NEWS.rst +++ b/NEWS.rst @@ -71,6 +71,12 @@ v6.7.0 (2020-09-01) forbidden and no size auto-alignment will be made. Instead, libvirt will suggest an aligned round up size for the user.
+ * qemu: Allow migration over UNIX sockets + + QEMU migration can now be performed completely over UNIX sockets. This is + useful for containerised scenarios and can be used in both peer2peer and + direct migrations. + * **Bug fixes**
* virdevmapper: Deal with kernels without DM support
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>

On a Tuesday in 2020, Martin Kletzander wrote:
Signed-off-by: Martin Kletzander <mkletzan@redhat.com> --- NEWS.rst | 6 ++++++ 1 file changed, 6 insertions(+)
diff --git a/NEWS.rst b/NEWS.rst index 302b7f477904..52781b7e1677 100644 --- a/NEWS.rst +++ b/NEWS.rst @@ -71,6 +71,12 @@ v6.7.0 (2020-09-01)
This is not in the correct section - 6.7.0 was released already.

Jano
forbidden and no size auto-alignment will be made. Instead, libvirt will suggest an aligned round up size for the user.
+ * qemu: Allow migration over UNIX sockets + + QEMU migration can now be performed completely over UNIX sockets. This is + useful for containerised scenarios and can be used in both peer2peer and + direct migrations. + * **Bug fixes**
* virdevmapper: Deal with kernels without DM support -- 2.28.0

On Tue, Sep 01, 2020 at 21:34:30 +0200, Ján Tomko wrote:
On a Tuesday in 2020, Martin Kletzander wrote:
Signed-off-by: Martin Kletzander <mkletzan@redhat.com> --- NEWS.rst | 6 ++++++ 1 file changed, 6 insertions(+)
diff --git a/NEWS.rst b/NEWS.rst index 302b7f477904..52781b7e1677 100644 --- a/NEWS.rst +++ b/NEWS.rst @@ -71,6 +71,12 @@ v6.7.0 (2020-09-01)
This is not in the correct section - 6.7.0 was released already.
Eh, me-- for not looking at the context of this change at all :( My reviewed-by still holds if you move the news item to the 6.8.0 section. Jirka
participants (4):
- Daniel P. Berrangé
- Jiri Denemark
- Ján Tomko
- Martin Kletzander