and the QEMU backend implementation using virtio failover.
Signed-off-by: Laine Stump <laine(a)redhat.com>
---
docs/formatdomain.html.in | 100 ++++++++++++++++++++++++++++++++++++++
docs/news.xml | 28 +++++++++++
2 files changed, 128 insertions(+)
diff --git a/docs/formatdomain.html.in b/docs/formatdomain.html.in
index 4db9c292b7..a1c2a1e392 100644
--- a/docs/formatdomain.html.in
+++ b/docs/formatdomain.html.in
@@ -5873,6 +5873,106 @@
</devices>
...</pre>
+ <h5><a id="elementsTeaming">Teaming a virtio/hostdev NIC pair</a></h5>
+
+ <p>
+ <span class="since">Since 6.1.0 (QEMU and KVM only, requires
+ QEMU 4.2.0 or newer and a guest virtio-net driver supporting
+ the "failover" feature)
+ </span>
+ The <code><teaming></code> element of two interfaces can
+ be used to connect them as a team/bond device in the guest
+ (assuming proper support in the hypervisor and the guest
+ network driver).
+ </p>
+
+<pre>
+...
+<devices>
+ <interface type='network'>
+ <source network='mybridge'/>
+ <mac address='00:11:22:33:44:55'/>
+ <model type='virtio'/>
+ <teaming type='persistent'/>
+ <alias name='ua-backup0'/>
+ </interface>
+ <interface type='network'>
+ <source network='hostdev-pool'/>
+ <mac address='00:11:22:33:44:55'/>
+ <model type='virtio'/>
+ <teaming type='transient' persistent='ua-backup0'/>
+ </interface>
+</devices>
+...</pre>
+
+ <p>
+ The <code>&lt;teaming&gt;</code> element's required
+ attribute <code>type</code> will be set to
+ either <code>"persistent"</code> to indicate a device that
+ should always be present in the domain,
+ or <code>"transient"</code> to indicate a device that may
+ periodically be removed, then later re-added to the domain. When
+ type="transient", <code>&lt;teaming&gt;</code> should have a
+ second attribute, <code>"persistent"</code>, set to the alias
+ name of the other device in the pair (the one that
+ has <code>&lt;teaming type="persistent"/&gt;</code>).
+ </p>
+ <p>
+ In the particular case of QEMU,
+ libvirt's <code>&lt;teaming&gt;</code> element is used to set up
+ a virtio-net "failover" device pair. For this setup, the
+ persistent device must be an interface with <code><model
+ type="virtio"/></code>, and the transient device must
+ be <code><interface type='hostdev'/></code>
+ (or <code>&lt;interface type='network'/&gt;</code> where the
+ referenced network defines a pool of SRIOV VFs). The guest will
+ then have a simple network team/bond device made of the virtio
+ NIC + hostdev NIC pair. In this configuration, the
+ higher-performing hostdev NIC will normally be preferred for all
+ network traffic, but when the domain is migrated, QEMU will
+ automatically unplug the VF from the guest, and then hotplug a
+ similar device once migration is completed; while migration is
+ taking place, network traffic will use the virtio NIC. (Of
+ course the emulated virtio NIC and the hostdev NIC must be
+ connected to the same subnet for bonding to work properly.)
+ </p>
+ <p>
+ NB1: Since you must know the alias name of the virtio NIC when
+ configuring the hostdev NIC, it will need to be manually set in
+ the virtio NIC's configuration (as with all other manually set
+ alias names, this means it must start with "ua-").
+ </p>
+ <p>
+ NB2: Currently the only implementation of the guest OS
+ virtio-net driver supporting virtio-net failover requires that
+ the MAC addresses of the virtio and hostdev NICs
+ match. Since that may not always be a requirement in the future,
+ libvirt doesn't enforce this limitation - it is up to the
+ person/management application that is creating the configuration
+ to ensure that the MAC addresses of the two devices match.
+ </p>
+ <p>
+ NB3: Since the PCI addresses of the SRIOV VFs on the hosts that
+ are the source and destination of the migration will almost
+ certainly be different, either higher level management software
+ will need to modify the <code><source></code> of the
+ hostdev NIC (<code>&lt;interface type='hostdev'&gt;</code>) at
+ the start of migration, or (a simpler solution) the
+ configuration will need to use a libvirt "hostdev" virtual
+ network that maintains a pool of such devices, as is implied in
+ the example's use of the libvirt network named "hostdev-pool".
+ As long as the hostdev network pools on both hosts have the same
+ name, libvirt itself will take care of allocating an appropriate
+ device on both ends of the migration. Similarly, the XML for the
+ virtio interface must also either work correctly unmodified on
+ both the source and destination of the migration (e.g. by
+ connecting to the same bridge device on both hosts, or by using
+ the same virtual network), or the management software must
+ properly modify the interface XML during migration so that the
+ virtio device remains connected to the same network segment
+ before and after migration.
+ </p>
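+ <p>
+ For example, a "hostdev" network maintaining such a pool of VFs
+ might be defined as follows (a sketch: the PF device name "eth3"
+ is only an example and will differ between hosts):
+ </p>
+
+<pre>
+&lt;network&gt;
+  &lt;name&gt;hostdev-pool&lt;/name&gt;
+  &lt;forward mode='hostdev' managed='yes'&gt;
+    &lt;pf dev='eth3'/&gt;
+  &lt;/forward&gt;
+&lt;/network&gt;
+</pre>
+
+ <p>
+ With this network defined identically on both hosts, an
+ interface using <code>&lt;source network='hostdev-pool'/&gt;</code>
+ needs no modification during migration.
+ </p>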
    <h5><a id="elementsNICSMulticast">Multicast tunnel</a></h5>
diff --git a/docs/news.xml b/docs/news.xml
index 056c7ef026..7dc9cc18cb 100644
--- a/docs/news.xml
+++ b/docs/news.xml
@@ -44,6 +44,34 @@
<libvirt>
<release version="v6.1.0" date="unreleased">
<section title="New features">
+ <change>
+ <summary>
support for virtio+hostdev NIC &lt;teaming&gt;
+ </summary>
+ <description>
+ QEMU 4.2.0 and later, combined with a sufficiently recent
+ guest virtio-net driver, supports setting up a simple
+ network bond device comprised of one virtio emulated NIC and
+ one hostdev NIC (which must be an SRIOV VF). (In QEMU, this
+ is known as the "virtio failover" feature.) The allure of
+ this setup is that the bond will always favor the hostdev
+ device, providing better performance, until the guest is
+ migrated - at that time QEMU will automatically unplug the
+ hostdev NIC and the bond will send all traffic via the
+ virtio NIC until migration is completed; then QEMU on the
+ destination side will hotplug a new hostdev NIC and the bond
+ will switch back to using the hostdev for network
+ traffic. The result is that guests desiring the extra
+ performance of a hostdev NIC are now migratable without
+ network downtime (performance is merely degraded during
+ migration), without requiring a complicated bonding
+ configuration in the guest OS network config, and without
+ complicated unplug/replug logic in the management application
+ on the host - it can instead all be accomplished in libvirt
+ with the "type" and "persistent" attributes of the
+ interface &lt;teaming&gt; subelement.
+ </description>
+ </change>
</section>
<section title="Improvements">
</section>
--
2.24.1