Ping.
May I have your attention, guys?
Pavel
On Wed, Nov 18, 2015 at 8:12 PM, Pavel Boldin <pboldin(a)mirantis.com> wrote:
The provided patchset implements NBD disk migration over a tunnelled
connection provided by libvirt.

The migration source instructs QEMU to NBD-mirror its drives into a provided
UNIX socket. These connections and all the data are then tunnelled to the
destination using a newly introduced RPC call. The migration destination
implements a driver method that connects the tunnelled stream to QEMU's NBD
destination.
The detailed scheme is as follows:
PREPARE
1. The migration destination starts QEMU's NBD server listening on a UNIX
   socket using the `nbd-server-start` monitor command and tells it to
   export the listed disks via `nbd-server-add`. This is done by code added
   to qemuMigrationStartNBDServer, which calls the introduced
   qemuMonitorNBDServerStartUnix monitor function (example commands below).
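
   For illustration, the QMP wire-format commands look roughly like this
   (the socket path and drive alias here are invented for the example):

       {"execute": "nbd-server-start",
        "arguments": {"addr": {"type": "unix",
                               "data": {"path": "/var/lib/libvirt/qemu/nbd-mig.sock"}}}}
       {"execute": "nbd-server-add",
        "arguments": {"device": "drive-virtio-disk0", "writable": true}}
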
PERFORM
2. The migration source creates a UNIX socket that is later used as the NBD
   destination in the `drive-mirror` monitor command.
   This is implemented as a call to virNetSocketNewListenUnix from
   doTunnelMigrate.
3. The source starts an IOThread that polls on the UNIX socket, accepting
   every incoming QEMU connection.
   This is done by adding a new pollfd to the poll(2) call in
   qemuMigrationIOFunc, which calls the introduced
   qemuNBDTunnelAcceptAndPipe function.
4. The qemuNBDTunnelAcceptAndPipe function accepts the connection and
   creates two virStreams. One is a `local` stream that is associated with
   the just accepted connection using virFDStreamOpen. The second is a
   `remote` stream that is tunnelled to the remote destination stream.

   The `local` stream is converted to a virFDStreamDrv stream using the
   virFDStreamOpen call on the fd returned by accept(2).

   The `remote` stream is associated with a stream on the destination in a
   way similar to that used by the PrepareTunnel3* functions. That is, the
   virDomainMigrateOpenTunnel function is called on the destination
   connection object. virDomainMigrateOpenTunnel calls the remote driver's
   handler remoteDomainMigrateOpenTunnel, which makes a
   DOMAIN_MIGRATE_OPEN_TUNNEL call to the destination host. The code in
   remoteDomainMigrateOpenTunnel ties the passed virStream object to a
   virStream on the destination host via the remoteStreamDrv driver. The
   remote driver handles the stream's IO by tunnelling the data through the
   RPC connection.

   Finally, qemuNBDTunnelAcceptAndPipe assigns both streams the same event
   callback, qemuMigrationPipeEvent. Its job is to track the status of the
   streams, doing IO whenever necessary (a sketch of this logic follows the
   scheme below).
5. The source starts the drive mirroring using the qemuMigrationDriveMirror
   function. The function instructs QEMU to mirror the drives to the UNIX
   socket that the thread listens on (see the example `drive-mirror`
   command below).
   Since the mirroring must reach the 'synchronized' state, where writes go
   to both destinations simultaneously, before the VM migration continues,
   the thread serving the connections must be started earlier.
6. When a connection to the UNIX socket on the migration source is made,
   the DOMAIN_MIGRATE_OPEN_TUNNEL proc is called on the migration
   destination.
   The handler of this proc calls virDomainMigrateOpenTunnel, which calls
   qemuMigrationOpenNBDTunnel by means of qemuDomainMigrateOpenTunnel.
   qemuMigrationOpenNBDTunnel connects the stream linked to the source's
   stream to the NBD UNIX socket on the migration destination side
   (sketched below).
7. The rest of the disk migration occurs semi-magically: the virStream*
   APIs tunnel the data in both directions. This is done by the
   qemuMigrationPipeEvent event callback set for both streams.
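
To make the source side (steps 3, 4 and 7) concrete, here is a minimal
sketch of the accept-and-pipe logic. The helper names, exact signatures and
the flags argument are assumptions for illustration, not the code of the
patches themselves; unixSock is the fd of the socket created in step 2 by
virNetSocketNewListenUnix:

    #include <sys/socket.h>
    #include <libvirt/libvirt.h>
    #include "fdstream.h"          /* internal: virFDStreamOpen */

    /* Sketch of the event callback from step 7: whenever one stream
     * becomes readable, shuttle the available bytes to the other stream.
     * Partial writes and error handling are elided. */
    static void
    pipeEvent(virStreamPtr st, int events, void *opaque)
    {
        virStreamPtr other = opaque;
        char buf[64 * 1024];
        int n;

        if (events & VIR_STREAM_EVENT_READABLE) {
            if ((n = virStreamRecv(st, buf, sizeof(buf))) > 0)
                virStreamSend(other, buf, n);
        }
    }

    /* Sketch of steps 3-4: accept a QEMU connection on the NBD UNIX
     * socket and wire it to a stream tunnelled to the destination. */
    static void
    nbdTunnelAcceptAndPipe(virConnectPtr sconn, virConnectPtr dconn,
                           int unixSock)
    {
        int fd = accept(unixSock, NULL, NULL);

        virStreamPtr local = virStreamNew(sconn, VIR_STREAM_NONBLOCK);
        virFDStreamOpen(local, fd);    /* local <-> accepted fd */

        virStreamPtr remote = virStreamNew(dconn, VIR_STREAM_NONBLOCK);
        /* Assumed shape of the new internal API: associates 'remote' with
         * a destination-side stream via the DOMAIN_MIGRATE_OPEN_TUNNEL
         * RPC. */
        virDomainMigrateOpenTunnel(dconn, remote, 0 /* flags */);

        virStreamEventAddCallback(local, VIR_STREAM_EVENT_READABLE,
                                  pipeEvent, remote, NULL);
        virStreamEventAddCallback(remote, VIR_STREAM_EVENT_READABLE,
                                  pipeEvent, local, NULL);
    }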
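
The `drive-mirror` command from step 5 then points QEMU at that socket
using QEMU's NBD URI syntax for UNIX sockets. Roughly (drive alias, path
and export name again invented):

    {"execute": "drive-mirror",
     "arguments": {"device": "drive-virtio-disk0",
                   "target": "nbd:unix:/var/lib/libvirt/qemu/nbd-mig.sock:exportname=drive-virtio-disk0",
                   "format": "raw",
                   "mode": "existing",
                   "sync": "full"}}

QEMU signals the 'synchronized' state of such a job with the
BLOCK_JOB_READY event.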
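
On the destination, the step 6 plumbing conceptually boils down to a
connect(2) on the NBD server's UNIX socket plus virFDStreamOpen; a sketch,
with the function name and error handling simplified:

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <libvirt/libvirt.h>
    #include "fdstream.h"          /* internal: virFDStreamOpen */

    static int
    openNBDTunnel(virStreamPtr st, const char *nbdSockPath)
    {
        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);

        if (fd < 0)
            return -1;
        strncpy(addr.sun_path, nbdSockPath, sizeof(addr.sun_path) - 1);
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
            return -1;

        /* Tie the stream coming from the source to the NBD server socket;
         * from here the remote driver's RPC tunnelling moves the bytes. */
        return virFDStreamOpen(st, fd);
    }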
The order of the patches is roughly the following:
* First, the RPC machinery and the remote driver's
  virDrvDomainMigrateOpenTunnel implementation are added.
* Then, the source side of the protocol is implemented: code listening on
  a UNIX socket is added, DriveMirror is enhanced to instruct QEMU to
  `drive-mirror` there, and the IOThread driving the tunnelling is started
  sooner.
* After that, the destination side of the protocol is implemented:
  qemuMonitorNBDServerStartUnix is added and qemuMigrationStartNBDServer is
  enhanced to call it. qemuDomainMigrateOpenTunnel is implemented along
  with qemuMigrationOpenNBDTunnel, which does the real job.
* Finally, the code blocking NBD migration for tunnelled migration is
  removed.
Pavel Boldin (21):
rpc: add DOMAIN_MIGRATE_OPEN_TUNNEL proc
driver: add virDrvDomainMigrateOpenTunnel
remote_driver: introduce virRemoteClientNew
remote_driver: add remoteDomainMigrateOpenTunnel
domain: add virDomainMigrateOpenTunnel
domain: add virDomainMigrateTunnelFlags
remote: impl remoteDispatchDomainMigrateOpenTunnel
qemu: migration: src: add nbd tunnel socket data
qemu: migration: src: nbdtunnel unix socket
qemu: migration: src: qemu `drive-mirror` to UNIX
qemu: migration: src: qemuSock for running thread
qemu: migration: src: add NBD unixSock to iothread
qemu: migration: src: qemuNBDTunnelAcceptAndPipe
qemu: migration: src: stream piping
qemu: monitor: add qemuMonitorNBDServerStartUnix
qemu: migration: dest: nbd-server to UNIX sock
qemu: migration: dest: qemuMigrationOpenTunnel
qemu: driver: add qemuDomainMigrateOpenTunnel
qemu: migration: dest: qemuMigrationOpenNBDTunnel
qemu: migration: allow NBD tunneling migration
apparmor: fix tunnelmigrate permissions
daemon/remote.c | 50 ++++
docs/apibuild.py | 1 +
docs/hvsupport.pl | 1 +
include/libvirt/libvirt-domain.h | 3 +
src/driver-hypervisor.h | 8 +
src/libvirt-domain.c | 43 ++++
src/libvirt_internal.h | 6 +
src/libvirt_private.syms | 1 +
src/qemu/qemu_driver.c | 24 ++
src/qemu/qemu_migration.c | 495 +++++++++++++++++++++++++++++++++------
src/qemu/qemu_migration.h | 6 +
src/qemu/qemu_monitor.c | 12 +
src/qemu/qemu_monitor.h | 2 +
src/qemu/qemu_monitor_json.c | 35 +++
src/qemu/qemu_monitor_json.h | 2 +
src/remote/remote_driver.c | 91 +++++--
src/remote/remote_protocol.x | 19 +-
src/remote_protocol-structs | 8 +
src/security/virt-aa-helper.c | 4 +-
19 files changed, 719 insertions(+), 92 deletions(-)
--
1.9.1