On 10/24/2011 07:50 AM, Daniel P. Berrange wrote:
From: "Daniel P. Berrange"<berrange(a)redhat.com>
This adds a page documenting many aspects of migration:
- The types of migration (managed direct, p2p, unmanaged direct)
- Data transports (native, tunnelled)
- Migration URIs
- Config file handling
- Example scenarios
Sounds very useful. The graphics generally look reasonable.
diff --git a/docs/migration.html.in b/docs/migration.html.in
index ee72b00..4a16162 100644
--- a/docs/migration.html.in
+++ b/docs/migration.html.in
I don't have this file in libvirt.git. Did you forget a pre-requisite
patch, or fail to squash two patches into one? It's making it hard to
review. I created a dummy version, if only to test all the subsidiary
pages.
+ <em>Native</em> data transports may, or may not support encryption depending
s/may,/may/
s/encryption/encryption,/
+ <em>Tunnelled</em> data transports will always be capable of strong encryption
+ since they are able to leverage the capabilities builtin to the libvirt RPC
+ protocol.
s/builtin/built in/
- One potentially significant downside of a tunnelled transport is that there will be
- extra data copies involved on both the source and destinations hosts as the data is
- moved between libvirtd & the hypervisor. On the deployment side, tunnelled transports
- do not require any extra network configuration over and above what's already required
- for general libvirtd <a href="remote.html">remote access</a>.
+ The downside of a tunnelled transport, however, is that there will be extra data copies
+ involved on both the source and destinations hosts as the data is moved between libvirtd
+ & the hypervisor. This is likely to be a more significant problem for guests with
For documentation, I prefer s/&/and/ (using & as a conjunction in
written prose reminds me too much of l33t-speak, or people who
abbreviate "you" as "u").
+ very large RAM sizes, which dirty memory pages quickly. On the deployment side, tunnelled
+ transports do not require any extra network configuration over and above what's already
+ required for general libvirtd <a href="remote.html">remote access</a>, and there is only
+ need for a single port to be open on the firewall.
Do we want to specifically mention that the single open libvirtd port
supports multiple migrations in parallel?
Also, should we mention "migration to file" (aka virsh save), or is that
pushing the boundaries of what this page was intended to cover?
+<p>
+ Migration of virtual machines requires close co-ordination of the two
+ hosts involved, as well as the application invoking the migration
+ operation which might be on a third host.
Should we reword this last phrase a bit, to state:
as well as the application invoking the migration, which may be on the
source, the destination, or a third host. The rest of this
documentation shows a third host, for maximum clarity.
+</p>
+
+<h3><a id="flowmanageddirect">Managed direct
migration</a></h3>
+
+<p>
+ With <em>managed, direct</em> migration, the libvirt client process
s/managed, direct/managed direct/
+ controls the various phases of migration. The client application must
+ be able to connect & authenticate with the libvirtd daemons on both
s/&/and/
+ the source and destination hosts. There is no need for the two libvirtd
+ daemons to communicate with each other. If the client application
+ crashes, or otherwise looses its connection to libvirtd during the
s/looses/loses/
+ migration process, an attempt will be made to abort the migrastion and
s/migrastion/migration/
+ restart the guest CPUs on the source host. There may be scenarios
+ where this cannot be safely done, in which cases the guest will be
+ left paused on one or both of the hosts.
+</p>
+
+<p>
+<img class="diagram" src="migration-managed-direct.png"
alt="Migration direct, managed">
+</p>
+
+
+<h3><a id="flowpeer2peer">Managed peer to peer
migration</a></h3>
+
+<p>
+ With <em>peer to peer</em> migration, the libvirt client process only
+ talks to the libvirtd daemon on the source host. The libvirtd daemon
+ will then connect to the destination host libvirtd and controls the
s/host libvirtd and/host libvirtd, which/
+ entire migration process itself. If the client application crashes,
+ or otherwise looses its connection to libvirtd, the migration process
s/looses/loses/
+ will continue uninterrupted until completion.
+</p>
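The contrast between the two managed flows can be summarized in a small table. This is an illustrative sketch, not libvirt code; the dictionary keys are names I have made up for this summary:

```python
# Sketch contrasting the two managed control flows described above.
# Illustrative only; key names are invented for this summary.

FLOWS = {
    "managed direct": {
        # Client must connect and authenticate to libvirtd on both hosts.
        "client_connects_to": ("source", "destination"),
        "daemons_talk_directly": False,
        # If the client dies, the migration is aborted and the guest
        # CPUs are restarted on the source (when safe to do so).
        "survives_client_crash": False,
    },
    "peer to peer": {
        # Client only talks to the source libvirtd.
        "client_connects_to": ("source",),
        # The source libvirtd connects to the destination libvirtd itself.
        "daemons_talk_directly": True,
        # Migration continues to completion even if the client disconnects.
        "survives_client_crash": True,
    },
}
```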
+
+<p>
+<img class="diagram" src="migration-managed-p2p.png"
alt="Migration peer-to-peer">
This image is wrong. It shows the client talking to the destination,
when in reality, the client only talks to the source, and the source
talks to the destination. That is, your diagram looks identical to
migration-managed-direct.png, although I don't think you intended it to
be that way.
+</p>
+
+
+<h3><a id="flowunmanageddirect">Unmanaged direct
migration</a></h3>
+
+<p>
+ With <em>unmanaged, direct</em> migration, neither the libvirt client
s/unmanaged, direct/unmanaged direct/
+<h2><a id="uris">Migration
URIs</a></h2>
+
+<p>
+ Initiating a guest migration requires the client application to
+ specify upto three URIs, depending on the choice of control
s/upto/up to/
+ flow and/or APIs used. The first URI is that of the libvirt
+ connection to the source host, where the virtual guest is
+ currently running. The second URI is that of the libvirt
+ connection to the destination host, where the virtual guest
+ will be moved to. The third URI is a hypervisor specific
+ URI used to control how the guest will be migrated. With
+ any managed migration flow, the first & second URIs are
s/&/and/
+ compulsory, while the third URI is optional. With the
+ unmanaged direct migration mode, the first & third URIs are
s/&/and/
+ compulsory and the second URI is not used.
+</p>
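The URI rules quoted above can be captured in a tiny helper. This is a sketch of the rules as stated in the patch, not a libvirt API; the function name and mode strings are invented:

```python
# Which of the three migration URIs are compulsory in each control flow,
# per the rules described in the patch text above. Illustrative only.

def required_uris(mode):
    """Return requirements for (source URI, destination URI, hypervisor URI).

    Each entry is 'compulsory', 'optional', or 'unused'.
    """
    if mode in ("managed-direct", "peer-to-peer"):
        # Managed flows: both libvirt connection URIs are compulsory;
        # the hypervisor-specific URI is optional.
        return ("compulsory", "compulsory", "optional")
    if mode == "unmanaged-direct":
        # Unmanaged direct: the destination libvirt URI is not used.
        return ("compulsory", "unused", "compulsory")
    raise ValueError("unknown migration mode: %s" % mode)
```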
+
+<p>
+ Ordinarily management applications only need to care about the
+ first and second URIs, which are both in the normal libvirt
+ connection URI format. Libvirt will then automatically determine
+ the hypervisor specific URI, by looking up the target host's
+ configured hostname. There are a few scenarios where the management
+ application may wish to have direct control over the third URI.
</p>
-<h2>Migration scenarios</h2>
+<ol>
+<li>The configured hostname is incorrect, or DNS is broken. If a
+ host has a hostname, which will not resolve to match one of its
s/hostname, which/hostname which/
+ public IP addresses, then libvirt will generate an incorrect
+ URI. In this case the management application should specify the
+ hypervisor specific URI explicitly, using an IP address, or a
+ correct hostname.</li>
+<li>The host has multiple network interaces. If a host has multiple
+ network interfaces, it might be desirable for the migration data
+ stream to be sent over a specific interface for either security
+ or performance reasons. In this case the management application
+ should specify the hypervisor specific URI, using an IP address
+ associated with the network to be used</li>
Check for trailing '.' at the end of sentences (I didn't check closely,
but noticed one here).
+<h2><a id="config">Configuration file
handling</a></h2>
+<p>
+ There are two types of virtual machine known to libvirt. A <em>transient</em>
+ guest only exists while it is running, and has no configuration file stored
+ on disk. A <em>persistent</em> guest maintains a configuration file on disk
+ even when it is not running.
+</p>
+
+<p>
+ By default, a migration operation will not attempt to change any configuration
+ files that may be stored on either the source or destination host.
Is that still true, in light of recent patches that pass persistent xml
definition as one of the cookies in migration v3? I guess so, since you
have to use --persist to cause the config to be transferred, which is
not default.
+ It is the
+ administrator, or management application's, responsibility to manage distribution
+ of configuration files (if desired). It is important to note that the <code>/etc/libvirt</code>
+ directory <strong>MUST NEVER BE SHARED BETWEEN HOSTS</strong>. There are some
+ typical scenarios that might be applicable:
+
+<ul>
+<li>Centralized configuration files outside libvirt, in shared storage. A cluster
+ aware management application may maintain all the master guest configuration
+ files in a cluster filesystem. When attempting to start a guest, the config
+ will be read from the cluster FS and used to deploy a persistent guest.
+ For migration the configuration wwill need to be copied to the destination
s/wwill/will/
+ host and removed on the original.
+</li>
+<li>Centralized configuration files outside libvirt, in a database. A data center
+ management application may not storage configuration files at all. Instead it
+ may generate libvirt XML on the fly when a guest is booted. It will typically
+ use transient guests, and thus not have to consider configuration files during
+ migration.
+</li>
+<li>Distributed configuration inside libvirt. The configuration file for each
+ guest is copied to every host where the guest is able to run. Upon migration
+ the existing config merely needs to be updated with any changes
+</li>
+<li>Ad-hoc configuration management inside libvirt. Each guest is tied to a
+ specific host and rarely migrated. When migration is required, the config
+ is moved from one host to the other.
+</li>
+</ul>
+
+<p>
+ As mentioned above, libvirt will not touch configuration files during
+ migration by default. The <code>virsh</code> command has two flags to
+ influence this behaviour. The <code>--undefine-source</code> flag
+ will cause the configuration file to be removed on the source host
+ after a successful migration. The <code>--persist</code> flag will
+ cause a configuration file to be created on the destination host
+ after a successful migration. The following table summarizes the
+ configuration file handling in all possible state & flag
s/&/and/
+ combinations.
+</p>
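The effect of the two flags, as described in that paragraph, boils down to a simple rule. A minimal sketch (not libvirt code; the function name is invented):

```python
# Effect of the virsh --undefine-source and --persist flags on config
# files after a *successful* migration, per the patch text above.
# Illustrative sketch only.

def config_files_after_migration(persistent, undefine_source, persist):
    """Return (source_has_config, dest_has_config) after migration.

    persistent: whether the guest had a config file on the source host.
    undefine_source / persist: whether the respective virsh flag was given.
    """
    # --undefine-source removes the source config (if one existed).
    source_has_config = persistent and not undefine_source
    # --persist creates a config file on the destination.
    dest_has_config = persist
    return (source_has_config, dest_has_config)
```

For example, a persistent guest migrated with both flags ends up with no config on the source and a new config on the destination, which is the "move the config along with the guest" case.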
-<h3>Native migration, client to two libvirtd servers</h3>
+<table class="data">
Rows 5 and 6 are wrong...
+<!-- src:N, dst:Y -->
+<tr>
+<td>Transient</td>
+<td class="n">N</td>
+<td class="y">Y</td>
+<td class="n">N</td>
+<td class="n">N</td>
+<td>Persistent</td>
This one should be transient.
+<td class="n">N</td>
+<td class="n">N</td>
+</tr>
+<tr>
+<td>Transient</td>
+<td class="n">N</td>
+<td class="y">Y</td>
+<td class="y">Y</td>
+<td class="n">N</td>
+<td>Persistent</td>
as should this.
+<td class="n">N</td>
+<td class="n">N</td>
+</tr>
+<h3><a id="scenarionativedirect">Native
migration, client to two libvirtd servers</a></h3>
<p>
At an API level this requires use of virDomainMigrate, without the
@@ -66,7 +453,7 @@
Supported by Xen, QEMU, VMWare and VirtualBox drivers
I really wish I knew what this text said (back to my earlier comment
that you missed posting a patch).
--
Eric Blake eblake(a)redhat.com +1-801-349-2682
Libvirt virtualization library
http://libvirt.org