Signed-off-by: Peter Krempa <pkrempa@redhat.com>
---
docs/formatdomain-devices-interface.rst | 1258 ++++++++++++++++++++++
docs/formatdomain-devices.rst | 1260 +----------------------
docs/meson.build | 1 +
3 files changed, 1260 insertions(+), 1259 deletions(-)
create mode 100644 docs/formatdomain-devices-interface.rst
diff --git a/docs/formatdomain-devices-interface.rst
b/docs/formatdomain-devices-interface.rst
new file mode 100644
index 0000000000..c828e71df1
--- /dev/null
+++ b/docs/formatdomain-devices-interface.rst
@@ -0,0 +1,1258 @@
+:anchor:`<a id="elementsNICS"/>`
+
+Network interfaces
+~~~~~~~~~~~~~~~~~~
+
+::
+
+ ...
+ <devices>
+ <interface type='direct' trustGuestRxFilters='yes'>
+ <source dev='eth0'/>
+ <mac address='52:54:00:5d:c7:9e'/>
+ <boot order='1'/>
+ <rom bar='off'/>
+ </interface>
+ </devices>
+ ...
+
+There are several possibilities for specifying a network interface visible to
+the guest. Each subsection below provides more details about common setup
+options.
+
+:since:`Since 1.2.10` , the ``interface`` element property
+``trustGuestRxFilters`` provides the capability for the host to detect and trust
+reports from the guest regarding changes to the interface mac address and
+receive filters by setting the attribute to ``yes``. The default setting for the
+attribute is ``no`` for security reasons and support depends on the guest
+network device model as well as the type of connection on the host - currently
+it is only supported for the virtio device model and for macvtap connections on
+the host.
+
+Each ``<interface>`` element has an optional ``<address>`` sub-element that can
+tie the interface to a particular pci slot, with attribute ``type='pci'`` as
+`documented above <#elementsAddress>`__.
+
+:since:`Since 6.6.0` , one can force libvirt to keep the provided MAC address
+when it's in the reserved VMware range by adding a ``type="static"`` attribute
+to the ``<mac/>`` element. Note that this attribute is useless if the provided
+MAC address is outside of the reserved VMware ranges.
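+
+For example, to keep a MAC address from the reserved VMware range (the address
+and bridge name below are purely illustrative), one might use:
+
+::
+
+   ...
+   <devices>
+     <interface type='bridge'>
+       <source bridge='br0'/>
+       <mac address='00:50:56:00:a1:b2' type='static'/>
+     </interface>
+   </devices>
+   ...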
+
+:anchor:`<a id="elementsNICSVirtual"/>`
+
+Virtual network
+^^^^^^^^^^^^^^^
+
+**This is the recommended config for general guest connectivity on hosts with
+dynamic / wireless networking configs (or multi-host environments where the
+host hardware details are described separately in a ``<network>`` definition
+:since:`Since 0.9.4` ).**
+
+Provides a connection whose details are described by the named network
+definition. Depending on the virtual network's "forward mode" configuration,
+the network may be totally isolated (no ``<forward>`` element given), NAT'ing
+to an explicit network device or to the default route
+(``<forward mode='nat'>``), routed with no NAT (``<forward mode='route'/>``),
+or connected directly to one of the host's network interfaces (via macvtap) or
+bridge devices (``<forward mode='bridge|private|vepa|passthrough'/>``
+:since:`Since 0.9.4` ).
+
+For networks with a forward mode of bridge, private, vepa, and passthrough, it
+is assumed that the host has any necessary DNS and DHCP services already setup
+outside the scope of libvirt. In the case of isolated, nat, and routed networks,
+DHCP and DNS are provided on the virtual network by libvirt, and the IP range
+can be determined by examining the virtual network config with
+'``virsh net-dumpxml [networkname]``'. There is one virtual network called
+'default' setup out of the box which does NAT'ing to the default route and has
+an IP range of ``192.168.122.0/255.255.255.0``. Each guest will have an
+associated tun device created with a name of vnetN, which can also be overridden
+with the <target> element (see `overriding the target
+element <#elementsNICSTargetOverride>`__).
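+
+For example, the IP configuration of the 'default' network can be inspected
+like this (output abbreviated and illustrative):
+
+::
+
+   $ virsh net-dumpxml default
+   <network>
+     <name>default</name>
+     <forward mode='nat'/>
+     <ip address='192.168.122.1' netmask='255.255.255.0'>
+       ...
+     </ip>
+   </network>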
+
+When the source of an interface is a network, a ``portgroup`` can be specified
+along with the name of the network; one network may have multiple portgroups
+defined, with each portgroup containing slightly different configuration
+information for different classes of network connections. :since:`Since 0.9.4` .
+
+When a guest is running, an interface of type ``network`` may include a
+``portid`` attribute. This provides the UUID of an associated virNetworkPortPtr
+object that records the association between the domain interface and the
+network. This attribute is read-only since port objects are created and deleted
+automatically during startup and shutdown. :since:`Since 5.1.0`
+
+Also, similar to ``direct`` network connections (described below), a connection
+of type ``network`` may specify a ``virtualport`` element, with configuration
+data to be forwarded to a vepa (802.1Qbg) or 802.1Qbh compliant switch (
+:since:`Since 0.8.2` ), or to an Open vSwitch virtual switch ( :since:`Since
+0.9.11` ).
+
+Since the actual type of switch may vary depending on the configuration in the
+``<network>`` on the host, it is acceptable to omit the virtualport ``type``
+attribute, and specify attributes from multiple different virtualport types (and
+also to leave out certain attributes); at domain startup time, a complete
+``<virtualport>`` element will be constructed by merging together the type and
+attributes defined in the network and the portgroup referenced by the interface.
+The newly-constructed virtualport is a combination of all of them, with
+attributes from a lower-priority virtualport unable to override those defined
+in a higher-priority one; the interface has the highest priority and the
+portgroup the lowest. (
+:since:`Since 0.10.0` ). For example, in order to work properly with both an
+802.1Qbh switch and an Open vSwitch switch, you may choose to specify no type,
+but both a ``profileid`` (in case the switch is 802.1Qbh) and an ``interfaceid``
+(in case the switch is Open vSwitch) (you may also omit the other attributes,
+such as managerid, typeid, or profileid, to be filled in from the network's
+``<virtualport>``). If you want to limit a guest to connecting only to certain
+types of switches, you can specify the virtualport type, but still omit some/all
+of the parameters - in this case if the host's network has a different type of
+virtualport, connection of the interface will fail.
+
+::
+
+ ...
+ <devices>
+ <interface type='network'>
+ <source network='default'/>
+ </interface>
+ ...
+ <interface type='network'>
+ <source network='default' portgroup='engineering'/>
+ <target dev='vnet7'/>
+ <mac address="00:11:22:33:44:55"/>
+ <virtualport>
+ <parameters instanceid='09b11c53-8b5c-4eeb-8f00-d84eaa0aaa4f'/>
+ </virtualport>
+ </interface>
+ </devices>
+ ...
+
+:anchor:`<a id="elementsNICSBridge"/>`
+
+Bridge to LAN
+^^^^^^^^^^^^^
+
+**This is the recommended config for general guest connectivity on hosts with
+static wired networking configs.**
+
+Provides a bridge from the VM directly to the LAN. This assumes there is a
+bridge device on the host which has one or more of the hosts physical NICs
+attached. The guest VM will have an associated tun device created with a name of
+vnetN, which can also be overridden with the <target> element (see `overriding
+the target element <#elementsNICSTargetOverride>`__). The tun device will be
+attached to the bridge. The IP range / network configuration is whatever is used
+on the LAN. This provides the guest VM full incoming & outgoing net access just
+like a physical machine.
+
+On Linux systems, the bridge device is normally a standard Linux host bridge. On
+hosts that support Open vSwitch, it is also possible to connect to an Open
+vSwitch bridge device by adding a ``<virtualport type='openvswitch'/>`` to the
+interface definition. ( :since:`Since 0.9.11` ). The Open vSwitch type
+virtualport accepts two parameters in its ``<parameters>`` element - an
+``interfaceid`` which is a standard uuid used to uniquely identify this
+particular interface to Open vSwitch (if you do not specify one, a random
+interfaceid will be generated for you when you first define the interface), and
+an optional ``profileid`` which is sent to Open vSwitch as the interface's
+"port-profile".
+
+::
+
+ ...
+ <devices>
+ ...
+ <interface type='bridge'>
+ <source bridge='br0'/>
+ </interface>
+ <interface type='bridge'>
+ <source bridge='br1'/>
+ <target dev='vnet7'/>
+ <mac address="00:11:22:33:44:55"/>
+ </interface>
+ <interface type='bridge'>
+ <source bridge='ovsbr'/>
+ <virtualport type='openvswitch'>
+        <parameters profileid='menial' interfaceid='09b11c53-8b5c-4eeb-8f00-d84eaa0aaa4f'/>
+ </virtualport>
+ </interface>
+ ...
+ </devices>
+ ...
+
+On hosts that support Open vSwitch on the kernel side and have the Midonet Host
+Agent configured, it is also possible to connect to the 'midonet' bridge device
+by adding a ``<virtualport type='midonet'/>`` to the interface definition. (
+:since:`Since 1.2.13` ). The Midonet virtualport type requires an
+``interfaceid`` attribute in its ``<parameters>`` element. This interface id is
+the UUID that specifies which port in the virtual network topology will be bound
+to the interface.
+
+::
+
+ ...
+ <devices>
+ ...
+ <interface type='bridge'>
+ <source bridge='br0'/>
+ </interface>
+ <interface type='bridge'>
+ <source bridge='br1'/>
+ <target dev='vnet7'/>
+ <mac address="00:11:22:33:44:55"/>
+ </interface>
+ <interface type='bridge'>
+ <source bridge='midonet'/>
+ <virtualport type='midonet'>
+ <parameters interfaceid='0b2d64da-3d0e-431e-afdd-804415d6ebbb'/>
+ </virtualport>
+ </interface>
+ ...
+ </devices>
+ ...
+
+:anchor:`<a id="elementsNICSSlirp"/>`
+
+Userspace SLIRP stack
+^^^^^^^^^^^^^^^^^^^^^
+
+Provides a virtual LAN with NAT to the outside world. The virtual network has
+DHCP & DNS services and will give the guest VM addresses starting from
+``10.0.2.15``. The default router will be ``10.0.2.2`` and the DNS server will
+be ``10.0.2.3``. This networking is the only option for unprivileged users who
+need their VMs to have outgoing access. :since:`Since 3.8.0` it is possible to
+override the default network address by including an ``ip`` element specifying
+an IPv4 address in its one mandatory attribute, ``address``. Optionally, a
+second ``ip`` element with a ``family`` attribute set to "ipv6" can be
+specified to add an IPv6 address to the interface. Optionally, an address
+``prefix`` can be specified as well.
+
+::
+
+ ...
+ <devices>
+ <interface type='user'/>
+ ...
+ <interface type='user'>
+ <mac address="00:11:22:33:44:55"/>
+      <ip family='ipv4' address='172.17.2.0' prefix='24'/>
+      <ip family='ipv6' address='2001:db8:ac10:fd01::' prefix='64'/>
+ </interface>
+ </devices>
+ ...
+
+:anchor:`<a id="elementsNICSEthernet"/>`
+
+Generic ethernet connection
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Provides a means to use a new or existing tap device (or veth device pair,
+depending on the needs of the hypervisor driver) that is partially or wholly
+setup external to libvirt (either prior to the guest starting, or while the
+guest is being started via an optional script specified in the config).
+
+The name of the tap device can optionally be specified with the ``dev``
+attribute of the ``<target>`` element. If no target dev is specified, libvirt
+will create a new standard tap device with a name of the pattern "vnetN", where
+"N" is replaced with a number. If a target dev is specified and that device
+doesn't exist, then a new standard tap device will be created with the exact dev
+name given. If the specified target dev does exist, then that existing device
+will be used. Usually some basic setup of the device is done by libvirt,
+including setting a MAC address, and the IFF_UP flag, but if the ``dev`` is a
+pre-existing device, and the ``managed`` attribute of the ``target`` element is
+also set to "no" (the default value is "yes"), even this basic setup will not be
+performed - libvirt will simply pass the device on to the hypervisor with no
+setup at all. :since:`Since 5.7.0` Using managed='no' with a pre-created tap
+device is useful because it permits a virtual machine managed by an unprivileged
+libvirtd to have emulated network devices based on tap devices.
+
+After creating/opening the tap device, an optional shell script (given in the
+``path`` attribute of the ``<script>`` element) will be run. :since:`Since
+0.2.1` Also, after detaching/closing the tap device, an optional shell script
+(given in the ``path`` attribute of the ``<downscript>`` element) will be run.
+:since:`Since 5.1.0` These can be used to do whatever extra host network
+integration is required.
+
+::
+
+ ...
+ <devices>
+ <interface type='ethernet'>
+ <script path='/etc/qemu-ifup-mynet'/>
+ <downscript path='/etc/qemu-ifdown-mynet'/>
+ </interface>
+ ...
+ <interface type='ethernet'>
+ <target dev='mytap1' managed='no'/>
+ <model type='virtio'/>
+ </interface>
+ </devices>
+ ...
+
+:anchor:`<a id="elementsNICSDirect"/>`
+
+Direct attachment to physical interface
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+| Provides direct attachment of the virtual machine's NIC to the given physical
+ interface of the host. :since:`Since 0.7.7 (QEMU and KVM only)`
+| This setup requires the Linux macvtap driver to be available. :since:`(Since
+  Linux 2.6.34.)` One of the modes 'vepa' ( `'Virtual Ethernet Port
+  Aggregator' <http://www.ieee802.org/1/files/public/docs2009/new-evb-congdon-vepa-modul...>`__),
+  'bridge' or 'private' can be chosen for the operation mode of the macvtap
+  device, 'vepa' being the default mode. The individual modes cause the
+  delivery of packets to behave as follows:
+
+If the model type is set to ``virtio`` and interface's ``trustGuestRxFilters``
+attribute is set to ``yes``, changes made to the interface mac address,
+unicast/multicast receive filters, and vlan settings in the guest will be
+monitored and propagated to the associated macvtap device on the host (
+:since:`Since 1.2.10` ). If ``trustGuestRxFilters`` is not set, or is not
+supported for the device model in use, an attempted change to the mac address
+originating from the guest side will result in a non-working network connection.
+
+``vepa``
+ All VMs' packets are sent to the external bridge. Packets whose destination
+ is a VM on the same host as where the packet originates from are sent back to
+ the host by the VEPA capable bridge (today's bridges are typically not VEPA
+ capable).
+``bridge``
+ Packets whose destination is on the same host as where they originate from
+ are directly delivered to the target macvtap device. Both origin and
+ destination devices need to be in bridge mode for direct delivery. If either
+ one of them is in ``vepa`` mode, a VEPA capable bridge is required.
+``private``
+ All packets are sent to the external bridge and will only be delivered to a
+ target VM on the same host if they are sent through an external router or
+ gateway and that device sends them back to the host. This procedure is
+ followed if either the source or destination device is in ``private`` mode.
+``passthrough``
+ This feature attaches a virtual function of a SRIOV capable NIC directly to a
+ VM without losing the migration capability. All packets are sent to the VF/IF
+ of the configured network device. Depending on the capabilities of the device
+ additional prerequisites or limitations may apply; for example, on Linux this
+ requires kernel 2.6.38 or newer. :since:`Since 0.9.2`
+
+::
+
+ ...
+ <devices>
+ ...
+ <interface type='direct' trustGuestRxFilters='no'>
+ <source dev='eth0' mode='vepa'/>
+ </interface>
+ </devices>
+ ...
+
+The network access of direct attached virtual machines can be managed by the
+hardware switch to which the physical interface of the host machine is
+connected.
+
+The interface can have additional parameters as shown below, if the switch is
+conforming to the IEEE 802.1Qbg standard. The parameters of the virtualport
+element are documented in more detail in the IEEE 802.1Qbg standard. The values
+are network specific and should be provided by the network administrator. In
+802.1Qbg terms, the Virtual Station Interface (VSI) represents the virtual
+interface of a virtual machine. :since:`Since 0.8.2`
+
+Please note that IEEE 802.1Qbg requires a non-zero value for the VLAN ID.
+
+``managerid``
+ The VSI Manager ID identifies the database containing the VSI type and
+ instance definitions. This is an integer value and the value 0 is reserved.
+``typeid``
+ The VSI Type ID identifies a VSI type characterizing the network access. VSI
+ types are typically managed by network administrator. This is an integer
+ value.
+``typeidversion``
+ The VSI Type Version allows multiple versions of a VSI Type. This is an
+ integer value.
+``instanceid``
+ The VSI Instance ID Identifier is generated when a VSI instance (i.e. a
+ virtual interface of a virtual machine) is created. This is a globally unique
+ identifier.
+
+::
+
+ ...
+ <devices>
+ ...
+ <interface type='direct'>
+ <source dev='eth0.2' mode='vepa'/>
+ <virtualport type="802.1Qbg">
+        <parameters managerid="11" typeid="1193047" typeidversion="2" instanceid="09b11c53-8b5c-4eeb-8f00-d84eaa0aaa4f"/>
+ </virtualport>
+ </interface>
+ </devices>
+ ...
+
+The interface can have additional parameters as shown below if the switch is
+conforming to the IEEE 802.1Qbh standard. The values are network specific and
+should be provided by the network administrator. :since:`Since 0.8.2`
+
+``profileid``
+ The profile ID contains the name of the port profile that is to be applied to
+ this interface. This name is resolved by the port profile database into the
+ network parameters from the port profile, and those network parameters will
+ be applied to this interface.
+
+::
+
+ ...
+ <devices>
+ ...
+ <interface type='direct'>
+ <source dev='eth0' mode='private'/>
+ <virtualport type='802.1Qbh'>
+ <parameters profileid='finance'/>
+ </virtualport>
+ </interface>
+ </devices>
+ ...
+
+:anchor:`<a id="elementsNICSHostdev"/>`
+
+PCI Passthrough
+^^^^^^^^^^^^^^^
+
+A PCI network device (specified by the <source> element) is directly assigned to
+the guest using generic device passthrough, after first optionally setting the
+device's MAC address to the configured value, and associating the device with an
+802.1Qbh capable switch using an optionally specified <virtualport> element (see
+the examples of virtualport given above for type='direct' network devices). Note
+that - due to limitations in standard single-port PCI ethernet card driver
+design - only SR-IOV (Single Root I/O Virtualization) virtual function (VF)
+devices can be assigned in this manner; to assign a standard single-port PCI or
+PCIe ethernet card to a guest, use the traditional <hostdev> device
+definition. :since:`Since 0.9.11`
+
+To use VFIO device assignment rather than traditional/legacy KVM device
+assignment (VFIO is a new method of device assignment that is compatible with
+UEFI Secure Boot), a type='hostdev' interface can have an optional ``driver``
+sub-element with a ``name`` attribute set to "vfio". To use legacy KVM device
+assignment you can set ``name`` to "kvm" (or simply omit the ``<driver>``
+element, since "kvm" is currently the default). :since:`Since 1.0.5 (QEMU and
+KVM only, requires kernel 3.6 or newer)`
+
+Note that this "intelligent passthrough" of network devices is very similar to
+the functionality of a standard <hostdev> device, the difference being that this
+method allows specifying a MAC address and <virtualport> for the passed-through
+device. If these capabilities are not required, if you have a standard
+single-port PCI, PCIe, or USB network card that doesn't support SR-IOV (and
+hence would anyway lose the configured MAC address during reset after being
+assigned to the guest domain), or if you are using a version of libvirt older
+than 0.9.11, you should use standard <hostdev> to assign the device to the guest
+instead of <interface type='hostdev'/>.
+
+Similar to the functionality of a standard <hostdev> device, when ``managed`` is
+"yes", it is detached from the host before being passed on to the guest, and
+reattached to the host after the guest exits. If ``managed`` is omitted or "no",
+the user is responsible for calling ``virNodeDeviceDettach`` (or
+``virsh nodedev-detach``) before starting the guest or hot-plugging the device,
+and ``virNodeDeviceReAttach`` (or ``virsh nodedev-reattach``) after hot-unplug
+or stopping the guest.
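+
+For example, with ``managed`` omitted or set to "no", the detach/reattach cycle
+might look like this (the PCI device name and domain name below are
+hypothetical):
+
+::
+
+   $ virsh nodedev-detach pci_0000_00_07_0
+   $ virsh start mydomain
+   ...
+   $ virsh shutdown mydomain
+   $ virsh nodedev-reattach pci_0000_00_07_0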
+
+::
+
+ ...
+ <devices>
+ <interface type='hostdev' managed='yes'>
+ <driver name='vfio'/>
+ <source>
+        <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
+ </source>
+ <mac address='52:54:00:6d:90:02'/>
+ <virtualport type='802.1Qbh'>
+ <parameters profileid='finance'/>
+ </virtualport>
+ </interface>
+ </devices>
+ ...
+
+:anchor:`<a id="elementsTeaming"/>`
+
+Teaming a virtio/hostdev NIC pair
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+:since:`Since 6.1.0 (QEMU and KVM only, requires QEMU 4.2.0 or newer and a guest
+virtio-net driver supporting the "failover" feature, such as the one included in
+Linux kernel 4.18 and newer)` The ``<teaming>`` element of two interfaces can
+be used to connect them as a team/bond device in the guest (assuming proper
+support in the hypervisor and the guest network driver).
+
+::
+
+ ...
+ <devices>
+ <interface type='network'>
+ <source network='mybridge'/>
+ <mac address='00:11:22:33:44:55'/>
+ <model type='virtio'/>
+ <teaming type='persistent'/>
+ <alias name='ua-backup0'/>
+ </interface>
+ <interface type='network'>
+ <source network='hostdev-pool'/>
+ <mac address='00:11:22:33:44:55'/>
+ <model type='virtio'/>
+ <teaming type='transient' persistent='ua-backup0'/>
+ </interface>
+ </devices>
+ ...
+
+The ``<teaming>`` element's required attribute ``type`` will be set to either
+``"persistent"`` to indicate a device that should always be present in the
+domain, or ``"transient"`` to indicate a device that may periodically be
+removed, then later re-added to the domain. When type="transient", there should
+be a second attribute to ``<teaming>`` called ``"persistent"`` - this attribute
+should be set to the alias name of the other device in the pair (the one that
+has ``<teaming type='persistent'/>``).
+
+In the particular case of QEMU, libvirt's ``<teaming>`` element is used to set
+up a virtio-net "failover" device pair. For this setup, the persistent device
+must be an interface with ``<model type="virtio"/>``, and the transient device
+must be ``<interface type='hostdev'/>`` (or ``<interface type='network'/>``
+where the referenced network defines a pool of SRIOV VFs). The guest will then
+have a simple network team/bond device made of the virtio NIC + hostdev NIC
+pair. In this configuration, the higher-performing hostdev NIC will normally be
+preferred for all network traffic, but when the domain is migrated, QEMU will
+automatically unplug the VF from the guest, and then hotplug a similar device
+once migration is completed; while migration is taking place, network traffic
+will use the virtio NIC. (Of course the emulated virtio NIC and the hostdev NIC
+must be connected to the same subnet for bonding to work properly).
+
+NB1: Since you must know the alias name of the virtio NIC when configuring the
+hostdev NIC, it will need to be manually set in the virtio NIC's configuration
+(as with all other manually set alias names, this means it must start with
+"ua-").
+
+NB2: Currently the only implementation of the guest OS virtio-net driver
+supporting virtio-net failover requires that the MAC addresses of the virtio and
+hostdev NIC must match. Since that may not always be a requirement in the
+future, libvirt doesn't enforce this limitation - it is up to the
+person/management application that is creating the configuration to assure the
+MAC addresses of the two devices match.
+
+NB3: Since the PCI addresses of the SRIOV VFs on the hosts that are the source
+and destination of the migration will almost certainly be different, either
+higher level management software will need to modify the ``<source>`` of the
+hostdev NIC (``<interface type='hostdev'>``) at the start of migration, or (a
+simpler solution) the configuration will need to use a libvirt "hostdev"
+virtual network that maintains a pool of such devices, as is implied in the
+example's use of the libvirt network named "hostdev-pool" - as long as the
+hostdev network
+pools on both hosts have the same name, libvirt itself will take care of
+allocating an appropriate device on both ends of the migration. Similarly the
+XML for the virtio interface must also either work correctly unmodified on both
+the source and destination of the migration (e.g. by connecting to the same
+bridge device on both hosts, or by using the same virtual network), or the
+management software must properly modify the interface XML during migration so
+that the virtio device remains connected to the same network segment before and
+after migration.
+
+:anchor:`<a id="elementsNICSMulticast"/>`
+
+Multicast tunnel
+^^^^^^^^^^^^^^^^
+
+A multicast group is setup to represent a virtual network. Any VMs whose network
+devices are in the same multicast group can talk to each other even across
+hosts. This mode is also available to unprivileged users. There is no default
+DNS or DHCP support and no outgoing network access. To provide outgoing network
+access, one of the VMs should have a 2nd NIC which is connected to one of the
+first 4 network types and do the appropriate routing. The multicast protocol is
+compatible with that used by user mode linux guests too. The source address used
+must be from the multicast address block.
+
+::
+
+ ...
+ <devices>
+ <interface type='mcast'>
+ <mac address='52:54:00:6d:90:01'/>
+ <source address='230.0.0.1' port='5558'/>
+ </interface>
+ </devices>
+ ...
+
+:anchor:`<a id="elementsNICSTCP"/>`
+
+TCP tunnel
+^^^^^^^^^^
+
+A TCP client/server architecture provides a virtual network. One VM provides the
+server end of the network, all other VMs are configured as clients. All network
+traffic is routed between the VMs via the server. This mode is also available to
+unprivileged users. There is no default DNS or DHCP support and no outgoing
+network access. To provide outgoing network access, one of the VMs should have a
+2nd NIC which is connected to one of the first 4 network types and do the
+appropriate routing.
+
+::
+
+ ...
+ <devices>
+ <interface type='server'>
+ <mac address='52:54:00:22:c9:42'/>
+ <source address='192.168.0.1' port='5558'/>
+ </interface>
+ ...
+ <interface type='client'>
+ <mac address='52:54:00:8b:c9:51'/>
+ <source address='192.168.0.1' port='5558'/>
+ </interface>
+ </devices>
+ ...
+
+:anchor:`<a id="elementsNICSUDP"/>`
+
+UDP unicast tunnel
+^^^^^^^^^^^^^^^^^^
+
+A UDP unicast architecture provides a virtual network which enables connections
+between QEMU instances using QEMU's UDP infrastructure. The xml "source"
+address is the endpoint address to which the UDP socket packets will be sent
+from the host running QEMU. The xml "local" address is the address of the
+interface on the QEMU host from which the UDP socket packets will originate.
+:since:`Since
+1.2.20`
+
+::
+
+ ...
+ <devices>
+ <interface type='udp'>
+ <mac address='52:54:00:22:c9:42'/>
+ <source address='127.0.0.1' port='11115'>
+ <local address='127.0.0.1' port='11116'/>
+ </source>
+ </interface>
+ </devices>
+ ...
+
+:anchor:`<a id="elementsNICSModel"/>`
+
+Setting the NIC model
+^^^^^^^^^^^^^^^^^^^^^
+
+::
+
+ ...
+ <devices>
+ <interface type='network'>
+ <source network='default'/>
+ <target dev='vnet1'/>
+ <model type='ne2k_pci'/>
+ </interface>
+ </devices>
+ ...
+
+For hypervisors which support this, you can set the model of the emulated
+network interface card.
+
+The values for ``type`` aren't defined specifically by libvirt, but by what the
+underlying hypervisor supports (if any). For QEMU and KVM you can get a list of
+supported models with these commands:
+
+::
+
+ qemu -net nic,model=? /dev/null
+ qemu-kvm -net nic,model=? /dev/null
+
+Typical values for QEMU and KVM include: ne2k_isa i82551 i82557b i82559er
+ne2k_pci pcnet rtl8139 e1000 virtio. :since:`Since 5.2.0` ,
+``virtio-transitional`` and ``virtio-non-transitional`` values are supported.
+See `Virtio transitional devices <#elementsVirtioTransitional>`__ for more
+details.
+
+:anchor:`<a id="elementsDriverBackendOptions"/>`
+
+Setting NIC driver-specific options
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+::
+
+ ...
+ <devices>
+ <interface type='network'>
+ <source network='default'/>
+ <target dev='vnet1'/>
+ <model type='virtio'/>
+      <driver name='vhost' txmode='iothread' ioeventfd='on' event_idx='off' queues='5' rx_queue_size='256' tx_queue_size='256'>
+        <host csum='off' gso='off' tso4='off' tso6='off' ecn='off' ufo='off' mrg_rxbuf='off'/>
+        <guest csum='off' tso4='off' tso6='off' ecn='off' ufo='off'/>
+ </driver>
+ </interface>
+ </devices>
+ ...
+
+Some NICs may have tunable driver-specific options. These are set as attributes
+of the ``driver`` sub-element of the interface definition. Currently the
+following attributes are available for the ``"virtio"`` NIC driver:
+
+``name``
+ The optional ``name`` attribute forces which type of backend driver to use.
+  The value can be either 'qemu' (a user-space backend) or 'vhost' (a kernel
+ backend, which requires the vhost module to be provided by the kernel); an
+ attempt to require the vhost driver without kernel support will be rejected.
+ If this attribute is not present, then the domain defaults to 'vhost' if
+ present, but silently falls back to 'qemu' without error. :since:`Since 0.8.8
+ (QEMU and KVM only)`
+ For interfaces of type='hostdev' (PCI passthrough devices) the ``name``
+  attribute can optionally be set to "vfio" or "kvm". "vfio" tells libvirt to
+ use VFIO device assignment rather than traditional KVM device assignment
+ (VFIO is a new method of device assignment that is compatible with UEFI
+ Secure Boot), and "kvm" tells libvirt to use the legacy device assignment
+  performed directly by the kvm kernel module (the default is currently "kvm",
+ but is subject to change). :since:`Since 1.0.5 (QEMU and KVM only, requires
+ kernel 3.6 or newer)`
+ For interfaces of type='vhostuser', the ``name`` attribute is ignored. The
+ backend driver used is always vhost-user.
+``txmode``
+ The ``txmode`` attribute specifies how to handle transmission of packets when
+  the transmit buffer is full. The value can be either 'iothread' or 'timer'.
+ :since:`Since 0.8.8 (QEMU and KVM only)`
+ If set to 'iothread', packet tx is all done in an iothread in the bottom half
+ of the driver (this option translates into adding "tx=bh" to the qemu
+ commandline -device virtio-net-pci option).
+ If set to 'timer', tx work is done in qemu, and if there is more tx data than
+ can be sent at the present time, a timer is set before qemu moves on to do
+ other things; when the timer fires, another attempt is made to send more
+ data.
+ The resulting difference, according to the qemu developer who added the
+ option is: "bh makes tx more asynchronous and reduces latency, but
+ potentially causes more processor bandwidth contention since the CPU doing
+ the tx isn't necessarily the CPU where the guest generated the packets."
+ **In general you should leave this option alone, unless you are very certain
+ you know what you are doing.**
+``ioeventfd``
+ This optional attribute allows users to set `domain I/O asynchronous
+  handling <https://patchwork.kernel.org/patch/43390/>`__ for the interface
+  device.
+ The default is left to the discretion of the hypervisor. Accepted values are
+  "on" and "off". Enabling this allows qemu to execute the VM while a separate
+ thread handles I/O. Typically guests experiencing high system CPU utilization
+  during I/O will benefit from this. On the other hand, on an overloaded host it
+ could increase guest I/O latency. :since:`Since 0.9.3 (QEMU and KVM only)`
+ **In general you should leave this option alone, unless you are very certain
+ you know what you are doing.**
+``event_idx``
+ The ``event_idx`` attribute controls some aspects of device event processing.
+  The value can be either 'on' or 'off' - if it is on, it will reduce the
+ number of interrupts and exits for the guest. The default is determined by
+ QEMU; usually if the feature is supported, default is on. In case there is a
+ situation where this behavior is suboptimal, this attribute provides a way to
+ force the feature off. :since:`Since 0.9.5 (QEMU and KVM only)`
+ **In general you should leave this option alone, unless you are very certain
+ you know what you are doing.**
+``queues``
+ The optional ``queues`` attribute controls the number of queues to be used
+ for either `Multiqueue virtio-net
+ <https://www.linux-kvm.org/page/Multiqueue>`__ or
+ `vhost-user <#elementVhostuser>`__ network interfaces. Use of multiple packet
+ processing queues requires the interface to have the
+ ``<model type='virtio'/>`` element. Each queue will potentially be handled by
+ a different processor, resulting in much higher throughput.
+ :since:`virtio-net since 1.0.6 (QEMU and KVM only)` :since:`vhost-user since
+ 1.2.17 (QEMU and KVM only)`
+``rx_queue_size``
+ The optional ``rx_queue_size`` attribute controls the size of virtio ring for
+ each queue as described above. The default value is hypervisor dependent and
+ may change across its releases. Moreover, some hypervisors may pose
+ restrictions on the actual value. For instance, latest QEMU (as of 2016-09-01)
+ requires the value to be a power of two from the [256, 1024] range. :since:`Since
+ 2.3.0 (QEMU and KVM only)`
+ **In general you should leave this option alone, unless you are very certain
+ you know what you are doing.**
+``tx_queue_size``
+ The optional ``tx_queue_size`` attribute controls the size of virtio ring for
+ each queue as described above. The default value is hypervisor dependent and
+ may change across its releases. Moreover, some hypervisors may pose
+ restrictions on the actual value. For instance, QEMU v2.9 requires the value
+ to be a power of two from the [256, 1024] range. In addition, this may work
+ only for a subset of interface types, e.g. the aforementioned QEMU enables this
+ option only for the ``vhostuser`` type. :since:`Since 3.7.0 (QEMU and KVM only)`
+ **In general you should leave this option alone, unless you are very certain
+ you know what you are doing.**
+virtio options
+ For virtio interfaces, `Virtio-specific options <#elementsVirtio>`__ can also
+ be set. ( :since:`Since 3.5.0` )
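+
+As an illustrative sketch (the attribute values are examples, not
+recommendations), several of the ``<driver>`` attributes described above can be
+combined on a single element:
+
+::
+
+ <driver name='vhost' queues='4' rx_queue_size='512' tx_queue_size='512'/>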
+
+Offloading options for the host and guest can be configured using the following
+sub-elements:
+
+``host``
+ The ``csum``, ``gso``, ``tso4``, ``tso6``, ``ecn`` and ``ufo`` attributes
+ with possible values ``on`` and ``off`` can be used to turn off host
+ offloading options. By default, the supported offloads are enabled by QEMU.
+ :since:`Since 1.2.9 (QEMU only)` The ``mrg_rxbuf`` attribute can be used to
+ control mergeable rx buffers on the host side. Possible values are ``on``
+ (default) and ``off``. :since:`Since 1.2.13 (QEMU only)`
+``guest``
+ The ``csum``, ``tso4``, ``tso6``, ``ecn`` and ``ufo`` attributes with
+ possible values ``on`` and ``off`` can be used to turn off guest offloading
+ options. By default, the supported offloads are enabled by QEMU.
+ :since:`Since 1.2.9 (QEMU only)`
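+
+As an illustrative sketch (attribute values are examples only), the ``host`` and
+``guest`` offloading sub-elements are placed inside the ``<driver>`` element:
+
+::
+
+ <driver name='vhost'>
+   <host csum='off' gso='off' tso4='off'/>
+   <guest csum='off' tso4='off'/>
+ </driver>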
+
+:anchor:`<a id="elementsBackendOptions"/>`
+
+Setting network backend-specific options
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+::
+
+ ...
+ <devices>
+ <interface type='network'>
+ <source network='default'/>
+ <target dev='vnet1'/>
+ <model type='virtio'/>
+ <backend tap='/dev/net/tun' vhost='/dev/vhost-net'/>
+ <driver name='vhost' txmode='iothread' ioeventfd='on' event_idx='off' queues='5'/>
+ <tune>
+ <sndbuf>1600</sndbuf>
+ </tune>
+ </interface>
+ </devices>
+ ...
+
+For tuning the backend of the network, the ``backend`` element can be used. The
+``vhost`` attribute can override the default vhost device path
+(``/dev/vhost-net``) for devices with ``virtio`` model. The ``tap`` attribute
+overrides the tun/tap device path (default: ``/dev/net/tun``) for network and
+bridge interfaces. This does not work in session mode. :since:`Since 1.2.9`
+
+For tap devices there is also the ``sndbuf`` element, which can adjust the size
+of the send buffer in the host. :since:`Since 0.8.8`
+
+:anchor:`<a id="elementsNICSTargetOverride"/>`
+
+Overriding the target element
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+::
+
+ ...
+ <devices>
+ <interface type='network'>
+ <source network='default'/>
+ <target dev='vnet1'/>
+ </interface>
+ </devices>
+ ...
+
+If no target is specified, certain hypervisors will automatically generate a
+name for the created tun device. This name can be manually specified, however
+the name *should not start with 'vnet', 'vif', 'macvtap', or 'macvlan'*,
+which are prefixes reserved by libvirt and certain hypervisors. Manually
+specified targets using these prefixes may be ignored.
+
+Note that for LXC containers, this defines the name of the interface on the host
+side. :since:`Since 1.2.7` , to define the name of the device on the guest side,
+the ``guest`` element should be used, as in the following snippet:
+
+::
+
+ ...
+ <devices>
+ <interface type='network'>
+ <source network='default'/>
+ <guest dev='myeth'/>
+ </interface>
+ </devices>
+ ...
+
+:anchor:`<a id="elementsNICSBoot"/>`
+
+Specifying boot order
+^^^^^^^^^^^^^^^^^^^^^
+
+::
+
+ ...
+ <devices>
+ <interface type='network'>
+ <source network='default'/>
+ <target dev='vnet1'/>
+ <boot order='1'/>
+ </interface>
+ </devices>
+ ...
+
+For hypervisors which support this, you can set a specific NIC to be used for
+network boot. The ``order`` attribute determines the order in which devices will
+be tried during boot sequence. The per-device ``boot`` elements cannot be used
+together with general boot elements in `BIOS bootloader <#elementsOSBIOS>`__
+section. :since:`Since 0.8.8`
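+
+For example, a sketch (the source names are illustrative) of two NICs tried in
+a fixed order during network boot:
+
+::
+
+ <interface type='network'>
+   <source network='default'/>
+   <boot order='1'/>
+ </interface>
+ <interface type='bridge'>
+   <source bridge='br0'/>
+   <boot order='2'/>
+ </interface>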
+
+:anchor:`<a id="elementsNICSROM"/>`
+
+Interface ROM BIOS configuration
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+::
+
+ ...
+ <devices>
+ <interface type='network'>
+ <source network='default'/>
+ <target dev='vnet1'/>
+ <rom bar='on' file='/etc/fake/boot.bin'/>
+ </interface>
+ </devices>
+ ...
+
+For hypervisors which support this, you can change how a PCI Network device's
+ROM is presented to the guest. The ``bar`` attribute can be set to "on" or
+"off", and determines whether or not the device's ROM will be visible in the
+guest's memory map. (In PCI documentation, the "rombar" setting controls the
+presence of the Base Address Register for the ROM). If no rom bar is specified,
+the qemu default will be used (older versions of qemu used a default of "off",
+while newer ones have a default of "on"). The optional ``file`` attribute is
+used to point to a binary file to be presented to the guest as the device's ROM
+BIOS. This can be useful to provide an alternative boot ROM for a network
+device. :since:`Since 0.9.10 (QEMU and KVM only)` .
+
+:anchor:`<a id="elementDomain"/>`
+
+Setting up a network backend in a driver domain
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+::
+
+ ...
+ <devices>
+ ...
+ <interface type='bridge'>
+ <source bridge='br0'/>
+ <backenddomain name='netvm'/>
+ </interface>
+ ...
+ </devices>
+ ...
+
+The optional ``backenddomain`` element allows specifying a backend domain (aka
+driver domain) for the interface. Use the ``name`` attribute to specify the
+backend domain name. You can use it to create a direct network link between
+domains (so data will not go through the host system). Use with type 'ethernet'
+to create a plain network link, or with type 'bridge' to connect to a bridge
+inside the backend domain. :since:`Since 1.2.13 (Xen only)`
+
+:anchor:`<a id="elementQoS"/>`
+
+Quality of service
+^^^^^^^^^^^^^^^^^^
+
+::
+
+ ...
+ <devices>
+ <interface type='network'>
+ <source network='default'/>
+ <target dev='vnet0'/>
+ <bandwidth>
+ <inbound average='1000' peak='5000' floor='200' burst='1024'/>
+ <outbound average='128' peak='256' burst='256'/>
+ </bandwidth>
+ </interface>
+ </devices>
+ ...
+
+This part of the interface XML allows setting quality of service. Incoming and
+outgoing traffic can be shaped independently. The ``bandwidth`` element and its
+child elements are described in the `QoS <formatnetwork.html#elementQoS>`__
+section of the Network XML.
+
+:anchor:`<a id="elementVlanTag"/>`
+
+Setting VLAN tag (on supported network types only)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+::
+
+ ...
+ <devices>
+ <interface type='bridge'>
+ <vlan>
+ <tag id='42'/>
+ </vlan>
+ <source bridge='ovsbr0'/>
+ <virtualport type='openvswitch'>
+ <parameters interfaceid='09b11c53-8b5c-4eeb-8f00-d84eaa0aaa4f'/>
+ </virtualport>
+ </interface>
+ <interface type='bridge'>
+ <vlan trunk='yes'>
+ <tag id='42'/>
+ <tag id='123' nativeMode='untagged'/>
+ </vlan>
+ ...
+ </interface>
+ </devices>
+ ...
+
+If (and only if) the network connection used by the guest supports VLAN tagging
+transparent to the guest, an optional ``<vlan>`` element can specify one or more
+VLAN tags to apply to the guest's network traffic :since:`Since 0.10.0` .
+Network connections that support guest-transparent VLAN tagging include 1)
+type='bridge' interfaces connected to an Open vSwitch bridge :since:`Since
+0.10.0` , 2) SRIOV Virtual Functions (VF) used via type='hostdev' (direct device
+assignment) :since:`Since 0.10.0` , and 3) SRIOV VFs used via type='direct' with
+mode='passthrough' (macvtap "passthru" mode) :since:`Since 1.3.5` . All other
+connection types, including standard Linux bridges and libvirt's own virtual
+networks, **do not** support it. 802.1Qbh (vn-link) and 802.1Qbg (VEPA) switches
+provide their own way (outside of libvirt) to tag guest traffic onto a specific
+VLAN. Each tag is given in a separate ``<tag>`` subelement of ``<vlan>`` (for
+example: ``<tag id='42'/>``). For VLAN trunking of multiple tags (which is
+supported only on Open vSwitch connections), multiple ``<tag>`` subelements can
+be specified, which implies that the user wants to do VLAN trunking on the
+interface for all the specified tags. In the case that VLAN trunking of a single
+tag is desired, the optional attribute ``trunk='yes'`` can be added to the
+toplevel ``<vlan>`` element to differentiate trunking of a single tag from
+normal tagging.
+
+For network connections using Open vSwitch it is also possible to configure
+'native-tagged' and 'native-untagged' VLAN modes :since:`Since 1.1.0.` This is
+done with the optional ``nativeMode`` attribute on the ``<tag>`` subelement:
+``nativeMode`` may be set to 'tagged' or 'untagged'. The ``id`` attribute of the
+``<tag>`` subelement containing ``nativeMode`` sets which VLAN is considered to
+be the "native" VLAN for this interface, and the ``nativeMode`` attribute
+determines whether or not traffic for that VLAN will be tagged.
+
+:anchor:`<a id="elementPort"/>`
+
+Isolating guests' network traffic from each other
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+::
+
+ ...
+ <devices>
+ <interface type='network'>
+ <source network='default'/>
+ <port isolated='yes'/>
+ </interface>
+ </devices>
+ ...
+
+:since:`Since 6.1.0.` The ``port`` element property ``isolated``, when set to
+``yes`` (default setting is ``no``), is used to isolate this interface's network
+traffic from that of other guest interfaces connected to the same network that
+also have ``<port isolated='yes'/>``. This setting is only supported for
+emulated interface devices that use a standard tap device to connect to the
+network via a Linux host bridge. This property can be inherited from a libvirt
+network, so if all guests that will be connected to the network should be
+isolated, it is better to put the setting in the network configuration. (NB:
+this only prevents guests that have ``isolated='yes'`` from communicating with
+each other; if there is a guest on the same bridge that doesn't have
+``isolated='yes'``, even the isolated guests will be able to communicate with
+it.)
+
+:anchor:`<a id="elementLink"/>`
+
+Modifying virtual link state
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+::
+
+ ...
+ <devices>
+ <interface type='network'>
+ <source network='default'/>
+ <target dev='vnet0'/>
+ <link state='down'/>
+ </interface>
+ </devices>
+ ...
+
+This element provides a means of setting the state of the virtual network link.
+Possible values for attribute ``state`` are ``up`` and ``down``. If ``down`` is
+specified as the value, the interface behaves as if it had the network cable
+disconnected. The default behavior if this element is unspecified is to have the
+link state ``up``. :since:`Since 0.9.5`
+
+:anchor:`<a id="mtu"/>`
+
+MTU configuration
+^^^^^^^^^^^^^^^^^
+
+::
+
+ ...
+ <devices>
+ <interface type='network'>
+ <source network='default'/>
+ <target dev='vnet0'/>
+ <mtu size='1500'/>
+ </interface>
+ </devices>
+ ...
+
+This element provides a means of setting the MTU of the virtual network link.
+Currently there is just one attribute, ``size``, which accepts a non-negative
+integer that specifies the MTU size for the interface. :since:`Since 3.1.0`
+
+:anchor:`<a id="coalesce"/>`
+
+Coalesce settings
+^^^^^^^^^^^^^^^^^
+
+::
+
+ ...
+ <devices>
+ <interface type='network'>
+ <source network='default'/>
+ <target dev='vnet0'/>
+ <coalesce>
+ <rx>
+ <frames max='7'/>
+ </rx>
+ </coalesce>
+ </interface>
+ </devices>
+ ...
+
+This element provides a means of setting coalesce settings for some interface
+devices (currently only interfaces of type ``network`` and ``bridge``). Currently
+there is just one tweakable attribute, ``max``, in the ``frames`` element of the
+``rx`` group, which accepts a non-negative integer that specifies the maximum
+number of packets that will be received before an interrupt. :since:`Since 3.3.0`
+
+:anchor:`<a id="ipconfig"/>`
+
+IP configuration
+^^^^^^^^^^^^^^^^
+
+::
+
+ ...
+ <devices>
+ <interface type='network'>
+ <source network='default'/>
+ <target dev='vnet0'/>
+ <ip address='192.168.122.5' prefix='24'/>
+ <ip address='192.168.122.5' prefix='24' peer='10.0.0.10'/>
+ <route family='ipv4' address='192.168.122.0' prefix='24' gateway='192.168.122.1'/>
+ <route family='ipv4' address='192.168.122.8' gateway='192.168.122.1'/>
+ </interface>
+ ...
+ <hostdev mode='capabilities' type='net'>
+ <source>
+ <interface>eth0</interface>
+ </source>
+ <ip address='192.168.122.6' prefix='24'/>
+ <route family='ipv4' address='192.168.122.0' prefix='24' gateway='192.168.122.1'/>
+ <route family='ipv4' address='192.168.122.8' gateway='192.168.122.1'/>
+ </hostdev>
+ ...
+ </devices>
+ ...
+
+:since:`Since 1.2.12` network devices and hostdev devices with network
+capabilities can optionally be provided one or more IP addresses to set on the
+network device in the guest. Note that some hypervisors or network device types
+will simply ignore them or only use the first one. The ``family`` attribute can
+be set to either ``ipv4`` or ``ipv6``, and the ``address`` attribute contains
+the IP address. The optional ``prefix`` is the number of 1 bits in the netmask,
+and will be automatically set if not specified - for IPv4 the default prefix is
+determined according to the network "class" (A, B, or C - see RFC870), and for
+IPv6 the default prefix is 64. The optional ``peer`` attribute holds the IP
+address of the other end of a point-to-point network device :since:`(since
+2.1.0)` .
+
+:since:`Since 1.2.12` route elements can also be added to define IP routes to
+add in the guest. The attributes of this element are described in the
+documentation for the ``route`` element in `network
+definitions <formatnetwork.html#elementsStaticroute>`__. This is used by the LXC
+driver.
+
+::
+
+ ...
+ <devices>
+ <interface type='ethernet'>
+   <source>
+     <ip address='192.168.123.1' prefix='24'/>
+     <ip address='10.0.0.10' prefix='24' peer='192.168.122.5'/>
+     <route family='ipv4' address='192.168.42.0' prefix='24' gateway='192.168.123.4'/>
+   </source>
+   ...
+ </interface>
+ ...
+ </devices>
+ ...
+
+:since:`Since 2.1.0` network devices of type "ethernet" can optionally be
+provided one or more IP addresses and one or more routes to set on the **host**
+side of the network device. These are configured as subelements of the
+``<source>`` element of the interface, and have the same attributes as the
+similarly named elements used to configure the guest side of the interface
+(described above).
+
+:anchor:`<a id="elementVhostuser"/>`
+
+vhost-user interface
+^^^^^^^^^^^^^^^^^^^^
+
+:since:`Since 1.2.7` vhost-user enables communication between a QEMU virtual
+machine and another userspace process using the virtio transport protocol. A
+char dev (e.g. a Unix socket) is used for the control plane, while the data
+plane is based on shared memory.
+
+::
+
+ ...
+ <devices>
+ <interface type='vhostuser'>
+ <mac address='52:54:00:3b:83:1a'/>
+ <source type='unix' path='/tmp/vhost1.sock' mode='server'/>
+ <model type='virtio'/>
+ </interface>
+ <interface type='vhostuser'>
+ <mac address='52:54:00:3b:83:1b'/>
+ <source type='unix' path='/tmp/vhost2.sock' mode='client'>
+ <reconnect enabled='yes' timeout='10'/>
+ </source>
+ <model type='virtio'/>
+ <driver queues='5'/>
+ </interface>
+ </devices>
+ ...
+
+The ``<source>`` element has to be specified along with the type of char device.
+Currently, only type='unix' is supported, where the path (the directory path of
+the socket) and mode attributes are required. Both ``mode='server'`` and
+``mode='client'`` are supported. vhost-user requires the virtio model type, thus
+the ``<model>`` element is mandatory. :since:`Since 4.1.0` the element has an
+optional child element ``reconnect`` which configures the reconnect timeout if
+the connection is lost. It has two attributes: ``enabled`` (which accepts ``yes``
+and ``no``) and ``timeout``, which specifies the number of seconds after which
+the hypervisor tries to reconnect.
+
+:anchor:`<a id="elementNwfilter"/>`
+
+Traffic filtering with NWFilter
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+:since:`Since 0.8.0` an ``nwfilter`` profile can be assigned to a domain
+interface, which allows configuring traffic filter rules for the virtual
+machine. See the `nwfilter <formatnwfilter.html>`__ documentation for more
+complete details.
+
+::
+
+ ...
+ <devices>
+ <interface ...>
+ ...
+ <filterref filter='clean-traffic'/>
+ </interface>
+ <interface ...>
+ ...
+ <filterref filter='myfilter'>
+ <parameter name='IP' value='104.207.129.11'/>
+ <parameter name='IP6_ADDR' value='2001:19f0:300:2102::'/>
+ <parameter name='IP6_MASK' value='64'/>
+ ...
+ </filterref>
+ </interface>
+ </devices>
+ ...
+
+The ``filter`` attribute specifies the name of the nwfilter to use. Optional
+``<parameter>`` elements may be specified for passing additional info to the
+nwfilter via the ``name`` and ``value`` attributes. See the
+`nwfilter <formatnwfilter.html#nwfconceptsvars>`__ docs for info on parameters.
diff --git a/docs/formatdomain-devices.rst b/docs/formatdomain-devices.rst
index 4b5391f77b..4334feb428 100644
--- a/docs/formatdomain-devices.rst
+++ b/docs/formatdomain-devices.rst
@@ -48,1265 +48,7 @@ following characters: ``[a-zA-Z0-9_-]``. :since:`Since 3.9.0`
.. include:: formatdomain-devices-hostdev.rst
.. include:: formatdomain-devices-redirdev.rst
.. include:: formatdomain-devices-smartcard.rst
-
-:anchor:`<a id="elementsNICS"/>`
-
-Network interfaces
-~~~~~~~~~~~~~~~~~~
-
-::
-
- ...
- <devices>
- <interface type='direct' trustGuestRxFilters='yes'>
- <source dev='eth0'/>
- <mac address='52:54:00:5d:c7:9e'/>
- <boot order='1'/>
- <rom bar='off'/>
- </interface>
- </devices>
- ...
-
-There are several possibilities for specifying a network interface visible to
-the guest. Each subsection below provides more details about common setup
-options.
-
-:since:`Since 1.2.10` ), the ``interface`` element property
-``trustGuestRxFilters`` provides the capability for the host to detect and trust
-reports from the guest regarding changes to the interface mac address and
-receive filters by setting the attribute to ``yes``. The default setting for the
-attribute is ``no`` for security reasons and support depends on the guest
-network device model as well as the type of connection on the host - currently
-it is only supported for the virtio device model and for macvtap connections on
-the host.
-
-Each ``<interface>`` element has an optional ``<address>`` sub-element that can
-tie the interface to a particular pci slot, with attribute ``type='pci'`` as
-`documented above <#elementsAddress>`__.
-
-:since:`Since 6.6.0` , one can force libvirt to keep the provided MAC address
-when it's in the reserved VMware range by adding a ``type="static"`` attribute
-to the ``<mac/>`` element. Note that this attribute is useless if the provided
-MAC address is outside of the reserved VMWare ranges.
-
-:anchor:`<a id="elementsNICSVirtual"/>`
-
-Virtual network
-^^^^^^^^^^^^^^^
-
-**This is the recommended config for general guest connectivity on hosts with
-dynamic / wireless networking configs (or multi-host environments where the host
-hardware details are described separately in a ``<network>`` definition
-:since:`Since 0.9.4` ).**
-
-Provides a connection whose details are described by the named network
-definition. Depending on the virtual network's "forward mode" configuration, the
-network may be totally isolated (no ``<forward>`` element given), NAT'ing to an
-explicit network device or to the default route (``<forward mode='nat'>``),
-routed with no NAT (``<forward mode='route'/>``), or connected directly to one
-of the host's network interfaces (via macvtap) or bridge devices
-((``<forward mode='bridge|private|vepa|passthrough'/>`` :since:`Since
-0.9.4` )
-
-For networks with a forward mode of bridge, private, vepa, and passthrough, it
-is assumed that the host has any necessary DNS and DHCP services already setup
-outside the scope of libvirt. In the case of isolated, nat, and routed networks,
-DHCP and DNS are provided on the virtual network by libvirt, and the IP range
-can be determined by examining the virtual network config with
-'``virsh net-dumpxml [networkname]``'. There is one virtual network called
-'default' setup out of the box which does NAT'ing to the default route and has
-an IP range of ``192.168.122.0/255.255.255.0``. Each guest will have an
-associated tun device created with a name of vnetN, which can also be overridden
-with the <target> element (see `overriding the target
-element <#elementsNICSTargetOverride>`__).
-
-When the source of an interface is a network, a ``portgroup`` can be specified
-along with the name of the network; one network may have multiple portgroups
-defined, with each portgroup containing slightly different configuration
-information for different classes of network connections. :since:`Since 0.9.4` .
-
-When a guest is running an interface of type ``network`` may include a
-``portid`` attribute. This provides the UUID of an associated virNetworkPortPtr
-object that records the association between the domain interface and the
-network. This attribute is read-only since port objects are create and deleted
-automatically during startup and shutdown. :since:`Since 5.1.0`
-
-Also, similar to ``direct`` network connections (described below), a connection
-of type ``network`` may specify a ``virtualport`` element, with configuration
-data to be forwarded to a vepa (802.1Qbg) or 802.1Qbh compliant switch (
-:since:`Since 0.8.2` ), or to an Open vSwitch virtual switch ( :since:`Since
-0.9.11` ).
-
-Since the actual type of switch may vary depending on the configuration in the
-``<network>`` on the host, it is acceptable to omit the virtualport ``type``
-attribute, and specify attributes from multiple different virtualport types (and
-also to leave out certain attributes); at domain startup time, a complete
-``<virtualport>`` element will be constructed by merging together the type and
-attributes defined in the network and the portgroup referenced by the interface.
-The newly-constructed virtualport is a combination of them. The attributes from
-lower virtualport can't make change on the ones defined in higher virtualport.
-Interface takes the highest priority, portgroup is lowest priority. (
-:since:`Since 0.10.0` ). For example, in order to work properly with both an
-802.1Qbh switch and an Open vSwitch switch, you may choose to specify no type,
-but both a ``profileid`` (in case the switch is 802.1Qbh) and an ``interfaceid``
-(in case the switch is Open vSwitch) (you may also omit the other attributes,
-such as managerid, typeid, or profileid, to be filled in from the network's
-``<virtualport>``). If you want to limit a guest to connecting only to certain
-types of switches, you can specify the virtualport type, but still omit some/all
-of the parameters - in this case if the host's network has a different type of
-virtualport, connection of the interface will fail.
-
-::
-
- ...
- <devices>
- <interface type='network'>
- <source network='default'/>
- </interface>
- ...
- <interface type='network'>
- <source network='default' portgroup='engineering'/>
- <target dev='vnet7'/>
- <mac address="00:11:22:33:44:55"/>
- <virtualport>
- <parameters instanceid='09b11c53-8b5c-4eeb-8f00-d84eaa0aaa4f'/>
- </virtualport>
- </interface>
- </devices>
- ...
-
-:anchor:`<a id="elementsNICSBridge"/>`
-
-Bridge to LAN
-^^^^^^^^^^^^^
-
-**This is the recommended config for general guest connectivity on hosts with
-static wired networking configs.**
-
-Provides a bridge from the VM directly to the LAN. This assumes there is a
-bridge device on the host which has one or more of the hosts physical NICs
-attached. The guest VM will have an associated tun device created with a name of
-vnetN, which can also be overridden with the <target> element (see `overriding
-the target element <#elementsNICSTargetOverride>`__). The tun device will be
-attached to the bridge. The IP range / network configuration is whatever is used
-on the LAN. This provides the guest VM full incoming & outgoing net access just
-like a physical machine.
-
-On Linux systems, the bridge device is normally a standard Linux host bridge. On
-hosts that support Open vSwitch, it is also possible to connect to an Open
-vSwitch bridge device by adding a ``<virtualport type='openvswitch'/>`` to the
-interface definition. ( :since:`Since 0.9.11` ). The Open vSwitch type
-virtualport accepts two parameters in its ``<parameters>`` element - an
-``interfaceid`` which is a standard uuid used to uniquely identify this
-particular interface to Open vSwitch (if you do not specify one, a random
-interfaceid will be generated for you when you first define the interface), and
-an optional ``profileid`` which is sent to Open vSwitch as the interfaces
-"port-profile".
-
-::
-
- ...
- <devices>
- ...
- <interface type='bridge'>
- <source bridge='br0'/>
- </interface>
- <interface type='bridge'>
- <source bridge='br1'/>
- <target dev='vnet7'/>
- <mac address="00:11:22:33:44:55"/>
- </interface>
- <interface type='bridge'>
- <source bridge='ovsbr'/>
- <virtualport type='openvswitch'>
- <parameters profileid='menial' interfaceid='09b11c53-8b5c-4eeb-8f00-d84eaa0aaa4f'/>
- </virtualport>
- </interface>
- ...
- </devices>
- ...
-
-On hosts that support Open vSwitch on the kernel side and have the Midonet Host
-Agent configured, it is also possible to connect to the 'midonet' bridge device
-by adding a ``<virtualport type='midonet'/>`` to the interface definition. (
-:since:`Since 1.2.13` ). The Midonet virtualport type requires an
-``interfaceid`` attribute in its ``<parameters>`` element. This interface id is
-the UUID that specifies which port in the virtual network topology will be bound
-to the interface.
-
-::
-
- ...
- <devices>
- ...
- <interface type='bridge'>
- <source bridge='br0'/>
- </interface>
- <interface type='bridge'>
- <source bridge='br1'/>
- <target dev='vnet7'/>
- <mac address="00:11:22:33:44:55"/>
- </interface>
- <interface type='bridge'>
- <source bridge='midonet'/>
- <virtualport type='midonet'>
- <parameters interfaceid='0b2d64da-3d0e-431e-afdd-804415d6ebbb'/>
- </virtualport>
- </interface>
- ...
- </devices>
- ...
-
-:anchor:`<a id="elementsNICSSlirp"/>`
-
-Userspace SLIRP stack
-^^^^^^^^^^^^^^^^^^^^^
-
-Provides a virtual LAN with NAT to the outside world. The virtual network has
-DHCP & DNS services and will give the guest VM addresses starting from
-``10.0.2.15``. The default router will be ``10.0.2.2`` and the DNS server will
-be ``10.0.2.3``. This networking is the only option for unprivileged users who
-need their VMs to have outgoing access. :since:`Since 3.8.0` it is possible to
-override the default network address by including an ``ip`` element specifying
-an IPv4 address in its one mandatory attribute, ``address``. Optionally, a
-second ``ip`` element with a ``family`` attribute set to "ipv6" can be specified
-to add an IPv6 address to the interface. ``address``. Optionally, address
-``prefix`` can be specified.
-
-::
-
- ...
- <devices>
- <interface type='user'/>
- ...
- <interface type='user'>
- <mac address="00:11:22:33:44:55"/>
- <ip family='ipv4' address='172.17.2.0' prefix='24'/>
- <ip family='ipv6' address='2001:db8:ac10:fd01::' prefix='64'/>
- </interface>
- </devices>
- ...
-
-:anchor:`<a id="elementsNICSEthernet"/>`
-
-Generic ethernet connection
-^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-Provides a means to use a new or existing tap device (or veth device pair,
-depening on the needs of the hypervisor driver) that is partially or wholly
-setup external to libvirt (either prior to the guest starting, or while the
-guest is being started via an optional script specified in the config).
-
-The name of the tap device can optionally be specified with the ``dev``
-attribute of the ``<target>`` element. If no target dev is specified, libvirt
-will create a new standard tap device with a name of the pattern "vnetN", where
-"N" is replaced with a number. If a target dev is specified and that device
-doesn't exist, then a new standard tap device will be created with the exact dev
-name given. If the specified target dev does exist, then that existing device
-will be used. Usually some basic setup of the device is done by libvirt,
-including setting a MAC address, and the IFF_UP flag, but if the ``dev`` is a
-pre-existing device, and the ``managed`` attribute of the ``target`` element is
-also set to "no" (the default value is "yes"), even this basic setup will not be
-performed - libvirt will simply pass the device on to the hypervisor with no
-setup at all. :since:`Since 5.7.0` Using managed='no' with a pre-created tap
-device is useful because it permits a virtual machine managed by an unprivileged
-libvirtd to have emulated network devices based on tap devices.
-
-After creating/opening the tap device, an optional shell script (given in the
-``path`` attribute of the ``<script>`` element) will be run. :since:`Since
-0.2.1` Also, after detaching/closing the tap device, an optional shell script
-(given in the ``path`` attribute of the ``<downscript>`` element) will be run.
-:since:`Since 5.1.0` These can be used to do whatever extra host network
-integration is required.
-
-::
-
- ...
- <devices>
- <interface type='ethernet'>
- <script path='/etc/qemu-ifup-mynet'/>
- <downscript path='/etc/qemu-ifdown-mynet'/>
- </interface>
- ...
- <interface type='ethernet'>
- <target dev='mytap1' managed='no'/>
- <model type='virtio'/>
- </interface>
- </devices>
- ...
-
-:anchor:`<a id="elementsNICSDirect"/>`
-
-Direct attachment to physical interface
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-| Provides direct attachment of the virtual machine's NIC to the given physical
- interface of the host. :since:`Since 0.7.7 (QEMU and KVM only)`
-| This setup requires the Linux macvtap driver to be available. :since:`(Since
- Linux 2.6.34.)` One of the modes 'vepa' ( `'Virtual Ethernet Port
- Aggregator' <http://www.ieee802.org/1/files/public/docs2009/new-evb-congdon-vepa-modul...),
- 'bridge' or 'private' can be chosen for the operation mode of the macvtap
- device, 'vepa' being the default mode. The individual modes cause the delivery
- of packets to behave as follows:
-
-If the model type is set to ``virtio`` and interface's ``trustGuestRxFilters``
-attribute is set to ``yes``, changes made to the interface mac address,
-unicast/multicast receive filters, and vlan settings in the guest will be
-monitored and propagated to the associated macvtap device on the host (
-:since:`Since 1.2.10` ). If ``trustGuestRxFilters`` is not set, or is not
-supported for the device model in use, an attempted change to the mac address
-originating from the guest side will result in a non-working network connection.
-
-``vepa``
- All VMs' packets are sent to the external bridge. Packets whose destination
- is a VM on the same host as where the packet originates from are sent back to
- the host by the VEPA capable bridge (today's bridges are typically not VEPA
- capable).
-``bridge``
- Packets whose destination is on the same host as where they originate from
- are directly delivered to the target macvtap device. Both origin and
- destination devices need to be in bridge mode for direct delivery. If either
- one of them is in ``vepa`` mode, a VEPA capable bridge is required.
-``private``
- All packets are sent to the external bridge and will only be delivered to a
- target VM on the same host if they are sent through an external router or
- gateway and that device sends them back to the host. This procedure is
- followed if either the source or destination device is in ``private`` mode.
-``passthrough``
- This feature attaches a virtual function of a SRIOV capable NIC directly to a
- VM without losing the migration capability. All packets are sent to the VF/IF
- of the configured network device. Depending on the capabilities of the device
- additional prerequisites or limitations may apply; for example, on Linux this
- requires kernel 2.6.38 or newer. :since:`Since 0.9.2`
-
-::
-
- ...
- <devices>
- ...
- <interface type='direct' trustGuestRxFilters='no'>
- <source dev='eth0' mode='vepa'/>
- </interface>
- </devices>
- ...
-
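-To instead trust guest-initiated changes to the mac address and rx filters on
-such a macvtap connection, a configuration along the following lines could be
-used (a sketch reusing the device name from the example above; as noted
-earlier, this requires the virtio device model):
-
-::
-
- ...
- <interface type='direct' trustGuestRxFilters='yes'>
- <source dev='eth0' mode='vepa'/>
- <model type='virtio'/>
- </interface>
- ...
-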
-The network access of direct attached virtual machines can be managed by the
-hardware switch to which the physical interface of the host machine is
-connected.
-
-The interface can have additional parameters as shown below, if the switch is
-conforming to the IEEE 802.1Qbg standard. The parameters of the virtualport
-element are documented in more detail in the IEEE 802.1Qbg standard. The values
-are network specific and should be provided by the network administrator. In
-802.1Qbg terms, the Virtual Station Interface (VSI) represents the virtual
-interface of a virtual machine. :since:`Since 0.8.2`
-
-Please note that IEEE 802.1Qbg requires a non-zero value for the VLAN ID.
-
-``managerid``
- The VSI Manager ID identifies the database containing the VSI type and
- instance definitions. This is an integer value and the value 0 is reserved.
-``typeid``
- The VSI Type ID identifies a VSI type characterizing the network access. VSI
- types are typically managed by the network administrator. This is an integer
- value.
-``typeidversion``
- The VSI Type Version allows multiple versions of a VSI Type. This is an
- integer value.
-``instanceid``
- The VSI Instance ID is generated when a VSI instance (i.e. a virtual
- interface of a virtual machine) is created. This is a globally unique
- identifier.
-
-::
-
- ...
- <devices>
- ...
- <interface type='direct'>
- <source dev='eth0.2' mode='vepa'/>
- <virtualport type="802.1Qbg">
- <parameters managerid="11" typeid="1193047" typeidversion="2" instanceid="09b11c53-8b5c-4eeb-8f00-d84eaa0aaa4f"/>
- </virtualport>
- </interface>
- </devices>
- ...
-
-The interface can have additional parameters as shown below if the switch is
-conforming to the IEEE 802.1Qbh standard. The values are network specific and
-should be provided by the network administrator. :since:`Since 0.8.2`
-
-``profileid``
- The profile ID contains the name of the port profile that is to be applied to
- this interface. This name is resolved by the port profile database into the
- network parameters from the port profile, and those network parameters will
- be applied to this interface.
-
-::
-
- ...
- <devices>
- ...
- <interface type='direct'>
- <source dev='eth0' mode='private'/>
- <virtualport type='802.1Qbh'>
- <parameters profileid='finance'/>
- </virtualport>
- </interface>
- </devices>
- ...
-
-:anchor:`<a id="elementsNICSHostdev"/>`
-
-PCI Passthrough
-^^^^^^^^^^^^^^^
-
-A PCI network device (specified by the <source> element) is directly assigned to
-the guest using generic device passthrough, after first optionally setting the
-device's MAC address to the configured value, and associating the device with an
-802.1Qbh capable switch using an optionally specified <virtualport> element (see
-the examples of virtualport given above for type='direct' network devices). Note
-that - due to limitations in standard single-port PCI ethernet card driver
-design - only SR-IOV (Single Root I/O Virtualization) virtual function (VF)
-devices can be assigned in this manner; to assign a standard single-port PCI or
-PCIe ethernet card to a guest, use the traditional <hostdev> device definition
-instead. :since:`Since 0.9.11`
-
-To use VFIO device assignment rather than traditional/legacy KVM device
-assignment (VFIO is a new method of device assignment that is compatible with
-UEFI Secure Boot), a type='hostdev' interface can have an optional ``driver``
-sub-element with a ``name`` attribute set to "vfio". To use legacy KVM device
-assignment you can set ``name`` to "kvm" (or simply omit the ``<driver>``
-element, since "kvm" is currently the default). :since:`Since 1.0.5 (QEMU and
-KVM only, requires kernel 3.6 or newer)`
-
-Note that this "intelligent passthrough" of network devices is very similar to
-the functionality of a standard <hostdev> device, the difference being that this
-method allows specifying a MAC address and <virtualport> for the passed-through
-device. If these capabilities are not required, if you have a standard
-single-port PCI, PCIe, or USB network card that doesn't support SR-IOV (and
-hence would anyway lose the configured MAC address during reset after being
-assigned to the guest domain), or if you are using a version of libvirt older
-than 0.9.11, you should use standard <hostdev> to assign the device to the guest
-instead of <interface type='hostdev'/>.
-
-Similar to the functionality of a standard <hostdev> device, when ``managed`` is
-"yes", it is detached from the host before being passed on to the guest, and
-reattached to the host after the guest exits. If ``managed`` is omitted or
-"no", the user is responsible for calling ``virNodeDeviceDettach`` (or
-``virsh nodedev-detach``) before starting the guest or hot-plugging the device,
-and ``virNodeDeviceReAttach`` (or ``virsh nodedev-reattach``) after hot-unplug
-or stopping the guest.
-
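-For example, assuming an SRIOV VF at the (hypothetical) PCI address
-0000:00:07.0, an unmanaged device would be prepared and released along these
-lines:
-
-::
-
- virsh nodedev-detach pci_0000_00_07_0
- # start the guest or hot-plug the device; then after hot-unplug or shutdown:
- virsh nodedev-reattach pci_0000_00_07_0
-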
-::
-
- ...
- <devices>
- <interface type='hostdev' managed='yes'>
- <driver name='vfio'/>
- <source>
- <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
- </source>
- <mac address='52:54:00:6d:90:02'/>
- <virtualport type='802.1Qbh'>
- <parameters profileid='finance'/>
- </virtualport>
- </interface>
- </devices>
- ...
-
-:anchor:`<a id="elementsTeaming"/>`
-
-Teaming a virtio/hostdev NIC pair
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-:since:`Since 6.1.0 (QEMU and KVM only, requires QEMU 4.2.0 or newer and a guest
-virtio-net driver supporting the "failover" feature, such as the one included
-in Linux kernel 4.18 and newer)` The ``<teaming>`` element of two interfaces can
-be used to connect them as a team/bond device in the guest (assuming proper
-support in the hypervisor and the guest network driver).
-
-::
-
- ...
- <devices>
- <interface type='network'>
- <source network='mybridge'/>
- <mac address='00:11:22:33:44:55'/>
- <model type='virtio'/>
- <teaming type='persistent'/>
- <alias name='ua-backup0'/>
- </interface>
- <interface type='network'>
- <source network='hostdev-pool'/>
- <mac address='00:11:22:33:44:55'/>
- <model type='virtio'/>
- <teaming type='transient' persistent='ua-backup0'/>
- </interface>
- </devices>
- ...
-
-The ``<teaming>`` element's required attribute ``type`` will be set to either
-``"persistent"`` to indicate a device that should always be present in the
-domain, or ``"transient"`` to indicate a device that may periodically be
-removed, then later re-added to the domain. When type="transient", there should
-be a second attribute to ``<teaming>`` called ``"persistent"`` - this attribute
-should be set to the alias name of the other device in the pair (the one that
-has ``<teaming type="persistent"/>``).
-
-In the particular case of QEMU, libvirt's ``<teaming>`` element is used to set
-up a virtio-net "failover" device pair. For this setup, the persistent device
-must be an interface with ``<model type="virtio"/>``, and the transient device
-must be ``<interface type='hostdev'/>`` (or ``<interface type='network'/>``
-where the referenced network defines a pool of SRIOV VFs). The guest will then
-have a simple network team/bond device made of the virtio NIC + hostdev NIC
-pair. In this configuration, the higher-performing hostdev NIC will normally be
-preferred for all network traffic, but when the domain is migrated, QEMU will
-automatically unplug the VF from the guest, and then hotplug a similar device
-once migration is completed; while migration is taking place, network traffic
-will use the virtio NIC. (Of course the emulated virtio NIC and the hostdev NIC
-must be connected to the same subnet for bonding to work properly).
-
-NB1: Since you must know the alias name of the virtio NIC when configuring the
-hostdev NIC, it will need to be manually set in the virtio NIC's configuration
-(as with all other manually set alias names, this means it must start with
-"ua-").
-
-NB2: Currently the only implementation of the guest OS virtio-net driver
-supporting virtio-net failover requires that the MAC addresses of the virtio and
-hostdev NIC must match. Since that may not always be a requirement in the
-future, libvirt doesn't enforce this limitation - it is up to the
-person/management application that is creating the configuration to assure the
-MAC addresses of the two devices match.
-
-NB3: Since the PCI addresses of the SRIOV VFs on the hosts that are the source
-and destination of the migration will almost certainly be different, either
-higher level management software will need to modify the ``<source>`` of the
-hostdev NIC (``<interface type='hostdev'>``) at the start of migration, or (a
-simpler solution) the configuration will need to use a libvirt "hostdev"
-virtual network that maintains a pool of such devices, as is implied in the
-example's use of the libvirt network named "hostdev-pool" - as long as the
-hostdev network pools on both hosts have the same name, libvirt itself will
-take care of allocating an appropriate device on both ends of the migration.
-Similarly the XML for the virtio interface must also either work correctly
-unmodified on both the source and destination of the migration (e.g. by
-connecting to the same bridge device on both hosts, or by using the same
-virtual network), or the management software must properly modify the interface
-XML during migration so that the virtio device remains connected to the same
-network segment before and after migration.
-
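-As an illustration of the "hostdev" network approach, the ``hostdev-pool``
-network referenced in the example above could be defined identically on both
-hosts along these lines (the PF name ``eth3`` is a placeholder for the actual
-SRIOV-capable physical function):
-
-::
-
- <network>
- <name>hostdev-pool</name>
- <forward mode='hostdev' managed='yes'>
- <pf dev='eth3'/>
- </forward>
- </network>
-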
-:anchor:`<a id="elementsNICSMulticast"/>`
-
-Multicast tunnel
-^^^^^^^^^^^^^^^^
-
-A multicast group is set up to represent a virtual network. Any VMs whose
-network devices are in the same multicast group can talk to each other, even
-across hosts. This mode is also available to unprivileged users. There is no
-default DNS or DHCP support and no outgoing network access. To provide outgoing
-network access, one of the VMs should have a second NIC connected to one of the
-first four network types, and do the appropriate routing. The multicast
-protocol is compatible with that used by User Mode Linux guests too. The source
-address used must be from the multicast address block.
-
-::
-
- ...
- <devices>
- <interface type='mcast'>
- <mac address='52:54:00:6d:90:01'/>
- <source address='230.0.0.1' port='5558'/>
- </interface>
- </devices>
- ...
-
-:anchor:`<a id="elementsNICSTCP"/>`
-
-TCP tunnel
-^^^^^^^^^^
-
-A TCP client/server architecture provides a virtual network. One VM provides
-the server end of the network, and all other VMs are configured as clients. All
-network traffic is routed between the VMs via the server. This mode is also
-available to unprivileged users. There is no default DNS or DHCP support and no
-outgoing network access. To provide outgoing network access, one of the VMs
-should have a second NIC connected to one of the first four network types, and
-do the appropriate routing.
-
-::
-
- ...
- <devices>
- <interface type='server'>
- <mac address='52:54:00:22:c9:42'/>
- <source address='192.168.0.1' port='5558'/>
- </interface>
- ...
- <interface type='client'>
- <mac address='52:54:00:8b:c9:51'/>
- <source address='192.168.0.1' port='5558'/>
- </interface>
- </devices>
- ...
-
-:anchor:`<a id="elementsNICSUDP"/>`
-
-UDP unicast tunnel
-^^^^^^^^^^^^^^^^^^
-
-A UDP unicast architecture provides a virtual network which enables connections
-between QEMU instances using QEMU's UDP infrastructure. The XML "source"
-address is the endpoint address to which the UDP socket packets will be sent
-from the host running QEMU. The XML "local" address is the address of the
-interface from which the UDP socket packets will originate from the QEMU host.
-:since:`Since 1.2.20`
-
-::
-
- ...
- <devices>
- <interface type='udp'>
- <mac address='52:54:00:22:c9:42'/>
- <source address='127.0.0.1' port='11115'>
- <local address='127.0.0.1' port='11116'/>
- </source>
- </interface>
- </devices>
- ...
-
-:anchor:`<a id="elementsNICSModel"/>`
-
-Setting the NIC model
-^^^^^^^^^^^^^^^^^^^^^
-
-::
-
- ...
- <devices>
- <interface type='network'>
- <source network='default'/>
- <target dev='vnet1'/>
- <model type='ne2k_pci'/>
- </interface>
- </devices>
- ...
-
-For hypervisors which support this, you can set the model of the emulated
-network interface card.
-
-The values for ``type`` aren't defined specifically by libvirt, but by what the
-underlying hypervisor supports (if any). For QEMU and KVM you can get a list of
-supported models with these commands:
-
-::
-
- qemu -net nic,model=? /dev/null
- qemu-kvm -net nic,model=? /dev/null
-
-Typical values for QEMU and KVM include: ne2k_isa i82551 i82557b i82559er
-ne2k_pci pcnet rtl8139 e1000 virtio. :since:`Since 5.2.0` ,
-``virtio-transitional`` and ``virtio-non-transitional`` values are supported.
-See `Virtio transitional devices <#elementsVirtioTransitional>`__ for more
-details.
-
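-Note that newer QEMU releases may no longer accept the legacy ``-net`` query
-shown above; on such builds the list of NIC models can typically be obtained
-with:
-
-::
-
- qemu-system-x86_64 -nic model=help
-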
-:anchor:`<a id="elementsDriverBackendOptions"/>`
-
-Setting NIC driver-specific options
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-::
-
- ...
- <devices>
- <interface type='network'>
- <source network='default'/>
- <target dev='vnet1'/>
- <model type='virtio'/>
- <driver name='vhost' txmode='iothread' ioeventfd='on' event_idx='off' queues='5' rx_queue_size='256' tx_queue_size='256'>
- <host csum='off' gso='off' tso4='off' tso6='off' ecn='off' ufo='off' mrg_rxbuf='off'/>
- <guest csum='off' tso4='off' tso6='off' ecn='off' ufo='off'/>
- </driver>
- </interface>
- </devices>
- ...
-
-Some NICs may have tunable driver-specific options. These are set as attributes
-of the ``driver`` sub-element of the interface definition. Currently the
-following attributes are available for the ``"virtio"`` NIC driver:
-
-``name``
- The optional ``name`` attribute forces which type of backend driver to use.
- The value can be either 'qemu' (a user-space backend) or 'vhost' (a kernel
- backend, which requires the vhost module to be provided by the kernel); an
- attempt to require the vhost driver without kernel support will be rejected.
- If this attribute is not present, then the domain defaults to 'vhost' if
- present, but silently falls back to 'qemu' without error. :since:`Since 0.8.8
- (QEMU and KVM only)`
- For interfaces of type='hostdev' (PCI passthrough devices) the ``name``
- attribute can optionally be set to "vfio" or "kvm". "vfio" tells libvirt to
- use VFIO device assignment rather than traditional KVM device assignment
- (VFIO is a new method of device assignment that is compatible with UEFI
- Secure Boot), and "kvm" tells libvirt to use the legacy device assignment
- performed directly by the kvm kernel module (the default is currently "kvm",
- but is subject to change). :since:`Since 1.0.5 (QEMU and KVM only, requires
- kernel 3.6 or newer)`
- For interfaces of type='vhostuser', the ``name`` attribute is ignored. The
- backend driver used is always vhost-user.
-``txmode``
- The ``txmode`` attribute specifies how to handle transmission of packets when
- the transmit buffer is full. The value can be either 'iothread' or 'timer'.
- :since:`Since 0.8.8 (QEMU and KVM only)`
- If set to 'iothread', packet tx is all done in an iothread in the bottom half
- of the driver (this option translates into adding "tx=bh" to the qemu
- commandline -device virtio-net-pci option).
- If set to 'timer', tx work is done in qemu, and if there is more tx data than
- can be sent at the present time, a timer is set before qemu moves on to do
- other things; when the timer fires, another attempt is made to send more
- data.
- The resulting difference, according to the qemu developer who added the
- option is: "bh makes tx more asynchronous and reduces latency, but
- potentially causes more processor bandwidth contention since the CPU doing
- the tx isn't necessarily the CPU where the guest generated the packets."
- **In general you should leave this option alone, unless you are very certain
- you know what you are doing.**
-``ioeventfd``
- This optional attribute allows users to set `domain I/O asynchronous
- handling <https://patchwork.kernel.org/patch/43390/>`__ for the interface
- device. The default is left to the discretion of the hypervisor. Accepted
- values are "on" and "off". Enabling this allows qemu to execute the VM while a
- separate thread handles I/O. Typically guests experiencing high system CPU
- utilization during I/O will benefit from this. On the other hand, on an
- overloaded host it could increase guest I/O latency. :since:`Since 0.9.3 (QEMU
- and KVM only)`
- **In general you should leave this option alone, unless you are very certain
- you know what you are doing.**
-``event_idx``
- The ``event_idx`` attribute controls some aspects of device event processing.
- The value can be either 'on' or 'off' - if it is on, it will reduce
the
- number of interrupts and exits for the guest. The default is determined by
- QEMU; usually if the feature is supported, default is on. In case there is a
- situation where this behavior is suboptimal, this attribute provides a way to
- force the feature off. :since:`Since 0.9.5 (QEMU and KVM only)`
- **In general you should leave this option alone, unless you are very certain
- you know what you are doing.**
-``queues``
- The optional ``queues`` attribute controls the number of queues to be used
- for either `Multiqueue virtio-net <https://www.linux-kvm.org/page/Multiqueue>`__
- or `vhost-user <#elementVhostuser>`__ network interfaces. Use of multiple
- packet processing queues requires the interface having the
- ``<model type='virtio'/>`` element. Each queue will potentially be handled by
- a different processor, resulting in much higher throughput.
- :since:`virtio-net since 1.0.6 (QEMU and KVM only)` :since:`vhost-user since
- 1.2.17 (QEMU and KVM only)`
-``rx_queue_size``
- The optional ``rx_queue_size`` attribute controls the size of the virtio ring
- for each queue as described above. The default value is hypervisor dependent
- and may change across its releases. Moreover, some hypervisors may pose
- restrictions on the actual value. For instance, latest QEMU (as of 2016-09-01)
- requires the value to be a power of two from the [256, 1024] range.
- :since:`Since 2.3.0 (QEMU and KVM only)`
- **In general you should leave this option alone, unless you are very certain
- you know what you are doing.**
-``tx_queue_size``
- The optional ``tx_queue_size`` attribute controls the size of the virtio ring
- for each queue as described above. The default value is hypervisor dependent
- and may change across its releases. Moreover, some hypervisors may pose
- restrictions on the actual value. For instance, QEMU v2.9 requires the value
- to be a power of two from the [256, 1024] range. In addition, this may work
- only for a subset of interface types, e.g. the aforementioned QEMU enables
- this option only for the ``vhostuser`` type. :since:`Since 3.7.0 (QEMU and KVM
- only)`
- **In general you should leave this option alone, unless you are very certain
- you know what you are doing.**
-virtio options
- For virtio interfaces, `Virtio-specific options <#elementsVirtio>`__ can also
- be set. ( :since:`Since 3.5.0` )
-
-Offloading options for the host and guest can be configured using the following
-sub-elements:
-
-``host``
- The ``csum``, ``gso``, ``tso4``, ``tso6``, ``ecn`` and ``ufo`` attributes
- with possible values ``on`` and ``off`` can be used to turn off host
- offloading options. By default, the supported offloads are enabled by QEMU.
- :since:`Since 1.2.9 (QEMU only)` The ``mrg_rxbuf`` attribute can be used to
- control mergeable rx buffers on the host side. Possible values are ``on``
- (default) and ``off``. :since:`Since 1.2.13 (QEMU only)`
-``guest``
- The ``csum``, ``tso4``, ``tso6``, ``ecn`` and ``ufo`` attributes with
- possible values ``on`` and ``off`` can be used to turn off guest offloading
- options. By default, the supported offloads are enabled by QEMU.
- :since:`Since 1.2.9 (QEMU only)`
-
-:anchor:`<a id="elementsBackendOptions"/>`
-
-Setting network backend-specific options
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-::
-
- ...
- <devices>
- <interface type='network'>
- <source network='default'/>
- <target dev='vnet1'/>
- <model type='virtio'/>
- <backend tap='/dev/net/tun' vhost='/dev/vhost-net'/>
- <driver name='vhost' txmode='iothread' ioeventfd='on' event_idx='off' queues='5'/>
- <tune>
- <sndbuf>1600</sndbuf>
- </tune>
- </interface>
- </devices>
- ...
-
-For tuning the backend of the network, the ``backend`` element can be used. The
-``vhost`` attribute can override the default vhost device path
-(``/dev/vhost-net``) for devices with ``virtio`` model. The ``tap`` attribute
-overrides the tun/tap device path (default: ``/dev/net/tun``) for network and
-bridge interfaces. This does not work in session mode. :since:`Since 1.2.9`
-
-For tap devices there is also a ``sndbuf`` element which can adjust the size of
-the send buffer in the host. :since:`Since 0.8.8`
-
-:anchor:`<a id="elementsNICSTargetOverride"/>`
-
-Overriding the target element
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-::
-
- ...
- <devices>
- <interface type='network'>
- <source network='default'/>
- <target dev='vnet1'/>
- </interface>
- </devices>
- ...
-
-If no target is specified, certain hypervisors will automatically generate a
-name for the created tun device. This name can be manually specified; however,
-the name *should not start with either 'vnet', 'vif', 'macvtap', or 'macvlan'*,
-which are prefixes reserved by libvirt and certain hypervisors. Manually
-specified targets using these prefixes may be ignored.
-
-Note that for LXC containers, this defines the name of the interface on the host
-side. :since:`Since 1.2.7` , to define the name of the device on the guest side,
-the ``guest`` element should be used, as in the following snippet:
-
-::
-
- ...
- <devices>
- <interface type='network'>
- <source network='default'/>
- <guest dev='myeth'/>
- </interface>
- </devices>
- ...
-
-:anchor:`<a id="elementsNICSBoot"/>`
-
-Specifying boot order
-^^^^^^^^^^^^^^^^^^^^^
-
-::
-
- ...
- <devices>
- <interface type='network'>
- <source network='default'/>
- <target dev='vnet1'/>
- <boot order='1'/>
- </interface>
- </devices>
- ...
-
-For hypervisors which support this, you can set a specific NIC to be used for
-network boot. The ``order`` attribute determines the order in which devices will
-be tried during boot sequence. The per-device ``boot`` elements cannot be used
-together with general boot elements in `BIOS bootloader <#elementsOSBIOS>`__
-section. :since:`Since 0.8.8`
-
-:anchor:`<a id="elementsNICSROM"/>`
-
-Interface ROM BIOS configuration
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-::
-
- ...
- <devices>
- <interface type='network'>
- <source network='default'/>
- <target dev='vnet1'/>
- <rom bar='on' file='/etc/fake/boot.bin'/>
- </interface>
- </devices>
- ...
-
-For hypervisors which support this, you can change how a PCI Network device's
-ROM is presented to the guest. The ``bar`` attribute can be set to "on" or
-"off", and determines whether or not the device's ROM will be visible in the
-guest's memory map. (In PCI documentation, the "rombar" setting controls the
-presence of the Base Address Register for the ROM). If no rom bar is specified,
-the qemu default will be used (older versions of qemu used a default of "off",
-while newer qemus have a default of "on"). The optional ``file`` attribute is
-used to point to a binary file to be presented to the guest as the device's ROM
-BIOS. This can be useful to provide an alternative boot ROM for a network
-device. :since:`Since 0.9.10 (QEMU and KVM only)`
-
-:anchor:`<a id="elementDomain"/>`
-
-Setting up a network backend in a driver domain
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-::
-
- ...
- <devices>
- ...
- <interface type='bridge'>
- <source bridge='br0'/>
- <backenddomain name='netvm'/>
- </interface>
- ...
- </devices>
- ...
-
-The optional ``backenddomain`` element allows specifying a backend domain (aka
-driver domain) for the interface. Use the ``name`` attribute to specify the
-backend domain name. You can use it to create a direct network link between
-domains (so data will not go through the host system). Use with type 'ethernet'
-to create a plain network link, or with type 'bridge' to connect to a bridge
-inside the backend domain. :since:`Since 1.2.13 (Xen only)`
-
-:anchor:`<a id="elementQoS"/>`
-
-Quality of service
-^^^^^^^^^^^^^^^^^^
-
-::
-
- ...
- <devices>
- <interface type='network'>
- <source network='default'/>
- <target dev='vnet0'/>
- <bandwidth>
- <inbound average='1000' peak='5000' floor='200' burst='1024'/>
- <outbound average='128' peak='256' burst='256'/>
- </bandwidth>
- </interface>
- </devices>
- ...
-
-This part of the interface XML provides setting of quality of service. Incoming
-and outgoing traffic can be shaped independently. The ``bandwidth`` element and
-its child elements are described in the `QoS <formatnetwork.html#elementQoS>`__
-section of the Network XML.
-
-:anchor:`<a id="elementVlanTag"/>`
-
-Setting VLAN tag (on supported network types only)
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-::
-
- ...
- <devices>
- <interface type='bridge'>
- <vlan>
- <tag id='42'/>
- </vlan>
- <source bridge='ovsbr0'/>
- <virtualport type='openvswitch'>
- <parameters interfaceid='09b11c53-8b5c-4eeb-8f00-d84eaa0aaa4f'/>
- </virtualport>
- </interface>
- <interface type='bridge'>
- <vlan trunk='yes'>
- <tag id='42'/>
- <tag id='123' nativeMode='untagged'/>
- </vlan>
- ...
- </interface>
- </devices>
- ...
-
-If (and only if) the network connection used by the guest supports VLAN tagging
-transparent to the guest, an optional ``<vlan>`` element can specify one or more
-VLAN tags to apply to the guest's network traffic :since:`Since 0.10.0` .
-Network connections that support guest-transparent VLAN tagging include 1)
-type='bridge' interfaces connected to an Open vSwitch bridge :since:`Since
-0.10.0` , 2) SRIOV Virtual Functions (VF) used via type='hostdev' (direct device
-assignment) :since:`Since 0.10.0` , and 3) SRIOV VFs used via type='direct' with
-mode='passthrough' (macvtap "passthru" mode) :since:`Since 1.3.5` . All other
-connection types, including standard linux bridges and libvirt's own virtual
-networks, **do not** support it. 802.1Qbh (vn-link) and 802.1Qbg (VEPA) switches
-provide their own way (outside of libvirt) to tag guest traffic onto a specific
-VLAN. Each tag is given in a separate ``<tag>`` subelement of ``<vlan>`` (for
-example: ``<tag id='42'/>``). For VLAN trunking of multiple tags (which is
-supported only on Open vSwitch connections), multiple ``<tag>`` subelements can
-be specified, which implies that the user wants to do VLAN trunking on the
-interface for all the specified tags. In the case that VLAN trunking of a single
-tag is desired, the optional attribute ``trunk='yes'`` can be added to the
-toplevel ``<vlan>`` element to differentiate trunking of a single tag from
-normal tagging.
-
-For network connections using Open vSwitch it is also possible to configure
-'native-tagged' and 'native-untagged' VLAN modes :since:`Since 1.1.0.` This is
-done with the optional ``nativeMode`` attribute on the ``<tag>`` subelement:
-``nativeMode`` may be set to 'tagged' or 'untagged'. The ``id`` attribute of
-the ``<tag>`` subelement containing ``nativeMode`` sets which VLAN is
-considered to be the "native" VLAN for this interface, and the ``nativeMode``
-attribute determines whether or not traffic for that VLAN will be tagged.
-
-:anchor:`<a id="elementPort"/>`
-
-Isolating guests' network traffic from each other
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-::
-
- ...
- <devices>
- <interface type='network'>
- <source network='default'/>
- <port isolated='yes'/>
- </interface>
- </devices>
- ...
-
-:since:`Since 6.1.0.` The ``port`` element property ``isolated``, when set to
-``yes`` (default setting is ``no``) is used to isolate this interface's network
-traffic from that of other guest interfaces connected to the same network that
-also have ``<port isolated='yes'/>``. This setting is only supported for
-emulated interface devices that use a standard tap device to connect to the
-network via a Linux host bridge. This property can be inherited from a libvirt
-network, so if all guests that will be connected to the network should be
-isolated, it is better to put the setting in the network configuration. (NB:
-this only prevents guests that have ``isolated='yes'`` from communicating with
-each other; if there is a guest on the same bridge that doesn't have
-``isolated='yes'``, even the isolated guests will be able to communicate with
-it.)
-
-:anchor:`<a id="elementLink"/>`
-
-Modifying virtual link state
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-::
-
- ...
- <devices>
- <interface type='network'>
- <source network='default'/>
- <target dev='vnet0'/>
- <link state='down'/>
- </interface>
- </devices>
- ...
-
-This element provides means of setting the state of the virtual network link.
-Possible values for attribute ``state`` are ``up`` and ``down``. If ``down`` is
-specified as the value, the interface behaves as if it had the network cable
-disconnected. The default behavior if this element is unspecified is to have
-the link state ``up``. :since:`Since 0.9.5`
-
-:anchor:`<a id="mtu"/>`
-
-MTU configuration
-^^^^^^^^^^^^^^^^^
-
-::
-
- ...
- <devices>
- <interface type='network'>
- <source network='default'/>
- <target dev='vnet0'/>
- <mtu size='1500'/>
- </interface>
- </devices>
- ...
-
-This element provides means of setting the MTU of the virtual network link.
-Currently there is just one attribute, ``size``, which accepts a non-negative
-integer that specifies the MTU size for the interface. :since:`Since 3.1.0`
-
-:anchor:`<a id="coalesce"/>`
-
-Coalesce settings
-^^^^^^^^^^^^^^^^^
-
-::
-
- ...
- <devices>
- <interface type='network'>
- <source network='default'/>
- <target dev='vnet0'/>
- <coalesce>
- <rx>
- <frames max='7'/>
- </rx>
- </coalesce>
- </interface>
- </devices>
- ...
-
-This element provides means of setting coalesce settings for some interface
-devices (currently only types ``network`` and ``bridge``). Currently there is
-just one attribute, ``max``, to tweak, in element ``frames`` for the ``rx``
-group, which accepts a non-negative integer that specifies the maximum number
-of packets that will be received before an interrupt. :since:`Since 3.3.0`
-
-:anchor:`<a id="ipconfig"/>`
-
-IP configuration
-^^^^^^^^^^^^^^^^
-
-::
-
- ...
- <devices>
- <interface type='network'>
- <source network='default'/>
- <target dev='vnet0'/>
- <ip address='192.168.122.5' prefix='24'/>
-       <ip address='192.168.122.5' prefix='24' peer='10.0.0.10'/>
-       <route family='ipv4' address='192.168.122.0' prefix='24' gateway='192.168.122.1'/>
-       <route family='ipv4' address='192.168.122.8' gateway='192.168.122.1'/>
- </interface>
- ...
- <hostdev mode='capabilities' type='net'>
- <source>
- <interface>eth0</interface>
- </source>
- <ip address='192.168.122.6' prefix='24'/>
-       <route family='ipv4' address='192.168.122.0' prefix='24' gateway='192.168.122.1'/>
-       <route family='ipv4' address='192.168.122.8' gateway='192.168.122.1'/>
- </hostdev>
- ...
- </devices>
- ...
-
-:since:`Since 1.2.12` network devices and hostdev devices with network
-capabilities can optionally be provided one or more IP addresses to set on the
-network device in the guest. Note that some hypervisors or network device types
-will simply ignore them or only use the first one. The ``family`` attribute can
-be set to either ``ipv4`` or ``ipv6``, and the ``address`` attribute contains
-the IP address. The optional ``prefix`` is the number of 1 bits in the netmask,
-and will be automatically set if not specified - for IPv4 the default prefix is
-determined according to the network "class" (A, B, or C - see RFC870), and for
-IPv6 the default prefix is 64. The optional ``peer`` attribute holds the IP
-address of the other end of a point-to-point network device :since:`(since
-2.1.0)` .
-
-:since:`Since 1.2.12` route elements can also be added to define IP routes to
-add in the guest. The attributes of this element are described in the
-documentation for the ``route`` element in `network
-definitions <formatnetwork.html#elementsStaticroute>`__. This is used by the LXC
-driver.
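The ``<ip>`` and ``<route>`` attributes described above can be sketched with a
hypothetical helper (not a libvirt API) that builds the guest-side elements
with Python's stdlib:

```python
# Illustrative sketch: emit <ip> and <route> sub-elements matching the
# schema described above (address required; prefix and peer optional).
import xml.etree.ElementTree as ET

def add_ip(iface, address, prefix=None, peer=None):
    """Append an <ip> sub-element; prefix/peer are optional attributes."""
    ip = ET.SubElement(iface, 'ip', address=address)
    if prefix is not None:
        ip.set('prefix', str(prefix))
    if peer is not None:
        ip.set('peer', peer)
    return ip

iface = ET.Element('interface', type='network')
add_ip(iface, '192.168.122.5', prefix=24)
add_ip(iface, '192.168.122.5', prefix=24, peer='10.0.0.10')
ET.SubElement(iface, 'route', family='ipv4', address='192.168.122.0',
              prefix='24', gateway='192.168.122.1')
xml_text = ET.tostring(iface, encoding='unicode')
```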
-
-::
-
- ...
- <devices>
-     <interface type='ethernet'>
-       <source>
-         <ip address='192.168.123.1' prefix='24'/>
-         <ip address='10.0.0.10' prefix='24' peer='192.168.122.5'/>
-         <route family='ipv4' address='192.168.42.0' prefix='24' gateway='192.168.123.4'/>
-       </source>
-       ...
-     </interface>
- ...
- </devices>
- ...
-
-:since:`Since 2.1.0` network devices of type "ethernet" can optionally be
-provided one or more IP addresses and one or more routes to set on the **host**
-side of the network device. These are configured as subelements of the
-``<source>`` element of the interface, and have the same attributes as the
-similarly named elements used to configure the guest side of the interface
-(described above).
-
-:anchor:`<a id="elementVhostuser"/>`
-
-vhost-user interface
-^^^^^^^^^^^^^^^^^^^^
-
-:since:`Since 1.2.7` the vhost-user interface enables communication between a
-QEMU virtual machine and another userspace process using the virtio transport
-protocol. A character device (e.g. a Unix socket) is used for the control
-plane, while the data plane is based on shared memory.
-
-::
-
- ...
- <devices>
- <interface type='vhostuser'>
- <mac address='52:54:00:3b:83:1a'/>
-       <source type='unix' path='/tmp/vhost1.sock' mode='server'/>
- <model type='virtio'/>
- </interface>
- <interface type='vhostuser'>
- <mac address='52:54:00:3b:83:1b'/>
-       <source type='unix' path='/tmp/vhost2.sock' mode='client'>
- <reconnect enabled='yes' timeout='10'/>
- </source>
- <model type='virtio'/>
- <driver queues='5'/>
- </interface>
- </devices>
- ...
-
-The ``<source>`` element has to be specified along with the type of char device.
-Currently, only type='unix' is supported, where the path (the directory path of
-the socket) and mode attributes are required. Both ``mode='server'`` and
-``mode='client'`` are supported. vhost-user requires the virtio model type, thus
-the ``<model>`` element is mandatory. :since:`Since 4.1.0` the element has an
-optional child element ``reconnect`` which configures reconnect timeout if the
-connection is lost. It has two attributes ``enabled`` (which accepts ``yes`` and
-``no``) and ``timeout``, which specifies the number of seconds after which the
-hypervisor tries to reconnect.
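The required shape of a vhost-user interface (unix-socket ``<source>`` with an
optional ``<reconnect>`` child, plus the mandatory virtio ``<model>``) can be
sketched as follows; this uses only Python's stdlib and is illustrative, not a
libvirt API:

```python
# Sketch: build a client-mode vhost-user <interface> with a reconnect
# timeout, mirroring the second example above.
import xml.etree.ElementTree as ET

iface = ET.Element('interface', type='vhostuser')
ET.SubElement(iface, 'mac', address='52:54:00:3b:83:1b')
src = ET.SubElement(iface, 'source', type='unix',
                    path='/tmp/vhost2.sock', mode='client')
ET.SubElement(src, 'reconnect', enabled='yes', timeout='10')
ET.SubElement(iface, 'model', type='virtio')  # virtio model is mandatory
vhost_xml = ET.tostring(iface, encoding='unicode')
```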
-
-:anchor:`<a id="elementNwfilter"/>`
-
-Traffic filtering with NWFilter
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-:since:`Since 0.8.0` an ``nwfilter`` profile can be assigned to a domain
-interface, which allows configuring traffic filter rules for the virtual
-machine. See the `nwfilter <formatnwfilter.html>`__ documentation for more
-complete details.
-
-::
-
- ...
- <devices>
- <interface ...>
- ...
- <filterref filter='clean-traffic'/>
- </interface>
- <interface ...>
- ...
- <filterref filter='myfilter'>
- <parameter name='IP' value='104.207.129.11'/>
- <parameter name='IP6_ADDR' value='2001:19f0:300:2102::'/>
- <parameter name='IP6_MASK' value='64'/>
- ...
- </filterref>
- </interface>
- </devices>
- ...
-
-The ``filter`` attribute specifies the name of the nwfilter to use. Optional
-``<parameter>`` elements may be specified for passing additional info to the
-nwfilter via the ``name`` and ``value`` attributes. See the
-`nwfilter <formatnwfilter.html#nwfconceptsvars>`__ docs for info on parameters.
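Attaching a ``<filterref>`` with ``<parameter>`` children can likewise be
sketched with Python's stdlib (illustrative only; parameter names come from
the nwfilter in use):

```python
# Sketch: reference the 'myfilter' nwfilter from an interface and pass
# one parameter to it via name/value attributes.
import xml.etree.ElementTree as ET

iface = ET.Element('interface', type='network')
fref = ET.SubElement(iface, 'filterref', filter='myfilter')
ET.SubElement(fref, 'parameter', name='IP', value='104.207.129.11')
filter_xml = ET.tostring(iface, encoding='unicode')
```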
+.. include:: formatdomain-devices-interface.rst
:anchor:`<a id="elementsInput"/>`
diff --git a/docs/meson.build b/docs/meson.build
index c5600ba4d1..9846b3e7df 100644
--- a/docs/meson.build
+++ b/docs/meson.build
@@ -133,6 +133,7 @@ docs_rst_files = [
'formatdomain-devices-hostdev.rst',
'formatdomain-devices-redirdev.rst',
'formatdomain-devices-smartcard.rst',
+ 'formatdomain-devices-interface.rst',
]
},
{ 'name': 'hacking' },
--
2.26.2