On 07/25/2017 06:42 AM, Andrea Bolognani wrote:
For all machine types except i440fx, making a guest hotplug
capable requires some sort of planning. Add some information
to help users make educated choices when defining the PCI
topology of guests.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
---
docs/formatdomain.html.in | 4 +-
docs/pci-hotplug.html.in | 164 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 167 insertions(+), 1 deletion(-)
create mode 100644 docs/pci-hotplug.html.in
diff --git a/docs/formatdomain.html.in b/docs/formatdomain.html.in
index bceddd2..7c4450c 100644
--- a/docs/formatdomain.html.in
+++ b/docs/formatdomain.html.in
@@ -3505,7 +3505,9 @@
appear more than once, with a group of virtual devices tied to a
virtual controller. Normally, libvirt can automatically infer such
controllers without requiring explicit XML markup, but sometimes
- it is necessary to provide an explicit controller element.
+ it is necessary to provide an explicit controller element, notably
+ it is necessary to provide an explicit controller element, notably
+ when planning the <a href="pci-hotplug.html">PCI topology</a>
+ for guests where device hotplug is expected.
</p>
<pre>
diff --git a/docs/pci-hotplug.html.in b/docs/pci-hotplug.html.in
new file mode 100644
index 0000000..f3d1610
--- /dev/null
+++ b/docs/pci-hotplug.html.in
@@ -0,0 +1,164 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
+<html xmlns="http://www.w3.org/1999/xhtml">
+ <body>
+ <h1>PCI topology and hotplug</h1>
+
+ <ul id="toc"></ul>
+
+ <p>
+ Perhaps surprisingly, most libvirt guests support only limited PCI
+ device hotplug out of the box, or even none at all.
+ </p>
+ <p>
+ The reason for this apparent limitation is that each hotplugged
+ PCI device might require additional PCI controllers to be added
+ to the guest, and libvirt has no way of knowing in advance how
+ many devices will be hotplugged during the guest's lifetime.
+ This makes it impossible to automatically provide the right
+ number of PCI controllers: any arbitrary number would end up
+ being too big for some users, and too small for others.
Of course we all know this, but you haven't said here that PCI
controllers in general cannot themselves be hotplugged (although the new
pcie-pci-bridge *will* be hotpluggable, as long as the OS supports that).
+ </p>
+ <p>
+ Ultimately, the user is the only one who knows how much the guest
+ will need to grow dynamically, so the responsability of planning
s/responsability/responsibility/
+ a suitabile PCI topology in advance falls on them.
s/suitabile/suitable/
+ </p>
+ <p>
+ This document aims to provide all the information needed to
+ successfully plan the PCI topology of a guest. Note that the
+ details can vary a lot between architectures and even machine
+ types, which is why the information below is organized by
+ architecture and machine type.
+ </p>
+
+ <h2><a name="x86_64">x86_64 architecture</a></h2>
+
+ <h3><a name="x86_64-q35">q35 machine
type</a></h3>
+
+ <p>
+ This is a PCI Express native machine type. The default PCI topology
+ looks like
+ </p>
+
+<pre>
+<controller type='pci' index='0' model='pcie-root'/>
+<controller type='pci' index='1' model='pcie-root-port'>
+  <model name='pcie-root-port'/>
+  <target chassis='1' port='0x10'/>
+  <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
+</controller></pre>
+
+ <p>
+ and supports hotplugging a single PCI Express device, either
+ emulated or assigned from the host.
+ </p>
Didn't we include some trick in there (requested for libguestfs
appliances) that allows creating a config that has no pcie-root-ports?
+ <p>
+ Slots on the <code>pcie-root</code> controller do not support
+ hotplug, so the device will be hotplugged into the
+ <code>pcie-root-port</code> controller. If you plan to hotplug
+ more than a single PCI Express device, you should add a suitable
+ number of <code>pcie-root-port</code> controllers when defining
+ the guest: for example, add
+ </p>
+
+<pre>
+<controller type='pci' model='pcie-root-port'/>
+<controller type='pci' model='pcie-root-port'/>
+<controller type='pci' model='pcie-root-port'/></pre>
+
+ <p>
+ if you expect to hotplug up to three PCI Express devices,
+ either emulated or assigned from the host. That's all the
+ information you need to provide: libvirt will fill in the
+ remaining details automatically.
+ </p>
Maybe a note here pointing out that if you add root-ports and new
endpoint devices at the same time, the endpoint devices will
automatically be attached to the manually added root-ports, so if you're
trying to end up with spares, you'll need to manually add enough for the
endpoints, plus the number of spares you want (you won't need to address
any of the controllers or endpoints though).
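To make that concrete, here's a rough sketch (not part of the patch; the
network name and device models are just examples): a guest defined with
two virtio NICs that should still have three spare hotplug slots needs
five pcie-root-port controllers, since libvirt will consume two of them
for the cold-plugged NICs. No <address> elements are needed on either
the controllers or the endpoints:

  <controller type='pci' model='pcie-root-port'/>
  <controller type='pci' model='pcie-root-port'/>
  <controller type='pci' model='pcie-root-port'/>
  <controller type='pci' model='pcie-root-port'/>
  <controller type='pci' model='pcie-root-port'/>
  <interface type='network'>
    <source network='default'/>
    <model type='virtio'/>
  </interface>
  <interface type='network'>
    <source network='default'/>
    <model type='virtio'/>
  </interface>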
+ <p>
+ If you expect to hotplug legacy PCI devices, then you will need
+ specialized controllers, since all those mentioned above are
+ intended for PCI Express devices only: add
+ </p>
+
+<pre>
+<controller type='pci' model='dmi-to-pci-bridge'/>
+<controller type='pci' model='pci-bridge'/></pre>
+
+ <p>
+ and you'll be able to hotplug up to 31 legacy PCI devices,
+ either emulated or assigned from the host.
+ </p>
Maybe mention that it's slot 1 - 31 because slot 0 is reserved.
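It might also be worth showing what a hotplug payload could look like
once those two controllers are in place. For example (purely
hypothetical, assuming the 'default' network), a legacy-only rtl8139
NIC passed to virsh attach-device:

  <interface type='network'>
    <source network='default'/>
    <model type='rtl8139'/>
  </interface>

Since rtl8139 is a conventional PCI device, libvirt should pick a free
slot on the pci-bridge for it automatically.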
+
+ <h3><a name="x86_64-i440fx">i440fx (pc) machine
type</a></h3>
+
+ <p>
+ This is a legacy PCI native machine type. The default PCI
+ topology looks like
+ </p>
+
+<pre>
+<controller type='pci' index='0' model='pci-root'/></pre>
+
+ <p>
+ where each of the 31 slots on the <code>pci-root</code>
+ controller is hotplug capable and can accept a legacy PCI
+ device, either emulated or assigned from the host.
+ </p>
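Perhaps also worth a sentence noting that, if 31 slots might not be
enough over the guest's lifetime, pci-bridge controllers can be added
when defining the guest, each one providing 31 further hotplug capable
slots. A sketch (libvirt fills in index and address):

  <controller type='pci' model='pci-bridge'/>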
+
+ <h2><a name="ppc64">ppc64 architecture</a></h2>
+
+ <h3><a name="ppc64-pseries">pseries machine
type</a></h3>
+
+ <p>
+ The default PCI topology for the <code>pseries</code> machine
+ type looks like
+ </p>
+
+<pre>
+<controller type='pci' index='0' model='pcie-root'>
You mean pci-root, right? (I always get these mixed up for PPC, since I
don't actually use it).
+ <model name='spapr-pci-host-bridge'/>
+ <target index='0'/>
+</controller></pre>
+
+ <p>
+ The 31 slots on a <code>pci-root</code> controller are all
+ hotplug capable and, despite the name suggesting otherwise,
+ starting with QEMU 2.9 all of them can accept PCI Express
+ devices in addition to legacy PCI devices; however,
+ libvirt will only place emulated devices on the default
+ <code>pci-root</code> controller.
+ </p>
+ <p>
+ In order to take advantage of improved error reporting and
+ recovery capabilities, PCI devices assigned from the
+ host need to be isolated by placing each on a separate
+ <code>pci-root</code> controller, which has to be prepared
+ in advance for hotplug to work: for example, add
+ </p>
+
+<pre>
+<controller type='pci' model='pci-root'/>
+<controller type='pci' model='pci-root'/>
+<controller type='pci' model='pci-root'/></pre>
+
+ <p>
+ if you expect to hotplug up to three PCI devices assigned
+ from the host.
+ </p>
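Maybe follow that with an example of the hotplug payload itself, e.g.
something along these lines passed to virsh attach-device (the host
address below is made up); libvirt should then place the device on one
of the spare pci-root controllers prepared above:

  <hostdev mode='subsystem' type='pci' managed='yes'>
    <source>
      <address domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
    </source>
  </hostdev>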
+
+ <h2><a name="aarch64">aarch64
architecture</a></h2>
+
+ <h3><a name="aarch64-virt">mach-virt (virt) machine
type</a></h3>
+
+ <p>
+ This machine type mostly behaves the same as the
+ <a href="#x86_64-q35">q35 machine type</a>, so you can just
+ refer to that section for information.
+ </p>
+ <p>
+ The only difference worth mentioning is that using legacy
+ PCI for <code>mach-virt</code> guests is extremely uncommon,
+ so you'll probably never need to add controllers other than
+ <code>pcie-root-port</code>.
+ </p>
+
+ </body>
+</html>
There's of course always more that can be written, but this is a good
and useful start, so ACK (with the few typos fixed).