On Fri, 2016-10-07 at 10:17 -0400, Laine Stump wrote:
So here's a rewording of your description (with a couple
additional conditions) to see if I understand everything
correctly:
1) During initial domain definition:
A) If there are *no pci controllers at all* (not even a pci-root or
pcie-root) *and there are any unaddressed devices that need a PCI slot*
then auto-add enough controllers for the requested devices, *and* make
sure there are enough empty slots for "N" (do we stick with 4? or make
it 3?) devices to be added later without needing more controllers. (So,
if the domain has no PCI devices, we don't add anything extra, and
likewise if it only has PCI devices that already have addresses).
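To make A) concrete (device details invented here, and the exact
number of spare ports is whatever we settle on for "N"): a q35
guest defined with no <controller> elements and a single
unaddressed device, say

  <interface type='network'>
    <source network='default'/>
    <model type='virtio'/>
  </interface>

would end up with something like

  <controller type='pci' index='0' model='pcie-root'/>
  <controller type='pci' index='1' model='pcie-root-port'/>
  <controller type='pci' index='2' model='pcie-root-port'/>
  <controller type='pci' index='3' model='pcie-root-port'/>
  <controller type='pci' index='4' model='pcie-root-port'/>
  <controller type='pci' index='5' model='pcie-root-port'/>

i.e. one root port for the interface itself plus, assuming N=4,
four empty ones for devices added later.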
B) If there is at least one pci controller specified in the XML, and
there are any unused slots in the pci controllers in the provided XML,
then use them for the unaddressed devices. If there are more devices
that need an address at this time, also add controllers for them, but no
*extra* controllers.
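Sketch of B), again with an invented device: given

  <controller type='pci' model='pcie-root-port'/>
  <interface type='network'>
    <source network='default'/>
    <model type='virtio'/>
  </interface>

the interface would simply be assigned the slot on the provided
root port, and no further controllers would be added.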
(Note to Rich: libguestfs could avoid the extra controllers either by
adding a pci-root/pcie-root to the XML, or by manually addressing the
devices. The latter would actually be better, since it would avoid the
need for any pcie-root-ports).
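For the second option, something along these lines (address
picked by hand, device invented) should keep any extra
controllers from appearing:

  <interface type='network'>
    <source network='default'/>
    <model type='virtio'/>
    <address type='pci' domain='0x0000' bus='0x00'
             slot='0x02' function='0x0'/>
  </interface>

with the device sitting directly on pcie-root as an integrated
endpoint.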
2) When adding a device to the persistent config (i.e. offline): if
there is an empty slot on a controller, use it. If not, add a controller
for that device *but no extra controllers*.
3) When adding a device to the guest machine (i.e. hotplug / online), if
there is an empty slot on a controller, use it. If not, then fail.
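In virsh terms (guest and file names hypothetical), 2) and 3)
are the difference between

  virsh attach-device mydomain newdev.xml --config

which is allowed to grow the controller list in the persistent
XML, and

  virsh attach-device mydomain newdev.xml --live

which would simply fail if no suitable empty slot is left in
the running guest.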
The differences I see from what (I think) you suggested are:
* if there aren't any unaddressed pci devices (even if there are no
controllers in the config), then we also don't add any extra controllers
(although we will of course add the pci-root or pcie-root, to
make its presence explicit).
* if another controller is needed for adding a device offline, it's okay
to add it.
So instead of guaranteeing that there will always be an empty
slot available for hotplug during a single start/destroy
cycle of the guest, we would be guaranteeing that there will
be 3 or 4 empty slots available for either hotplug or coldplug
throughout the entire life of the guest.
Sounds like a pretty good compromise to me.
The only problem I can think of is that there might be
management applications that add e.g. a pcie-root in the XML
when first defining a guest, and after the change such guests
would get zero hotpluggable ports. Then again it's probably
okay to expect such management applications to add the
necessary number of pcie-root-ports themselves.
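To illustrate (device invented): a guest initially defined with
just

  <controller type='pci' model='pcie-root'/>
  <interface type='network'>
    <source network='default'/>
    <model type='virtio'/>
  </interface>

would fall under B) rather than A), so the interface would get
its own root port but no spare ports would be added.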
Maybe we could relax the wording on A) and ignore any
pci{,e}-root? Even though there is really no reason for
either a user or a management application to add them
explicitly when defining a guest, I feel like they might be
special enough to deserve an exception.
--
Andrea Bolognani / Red Hat / Virtualization