
On 10/07/2016 11:16 AM, Andrea Bolognani wrote:
So here's a rewording of your description (with a couple additional conditions) to see if I understand everything correctly:
1) During initial domain definition:
A) If there are *no pci controllers at all* (not even a pci-root or pcie-root) *and there are any unaddressed devices that need a PCI slot*, then auto-add enough controllers for the requested devices, *and* make sure there are enough empty slots for "N" (do we stick with 4? or make it 3?) devices to be added later without needing more controllers. (So if the domain has no PCI devices, we don't add anything extra; likewise, if it only has PCI devices that already have addresses, we don't add anything extra.)
B) If there is at least one pci controller specified in the XML, and there are any unused slots on the pci controllers in the provided XML, then use them for the unaddressed devices. If there are more devices that need an address at this time, also add controllers for them, but no *extra* controllers.
(Note to Rich: libguestfs could avoid the extra controllers either by adding a pci-root/pcie-root to the XML, or by manually addressing the devices. The latter would actually be better, since it would avoid the need for any pcie-root-ports).
2) When adding a device to the persistent config (i.e. offline): if there is an empty slot on a controller, use it. If not, add a controller for that device, *but no extra controllers*.
3) When adding a device to the guest machine (i.e. hotplug / online): if there is an empty slot on a controller, use it. If not, then fail.
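To make the hotplug rule in 3) concrete: on a pcie-root (q35) machine, a device can only be hotplugged if an empty, hotpluggable slot already exists, and each pcie-root-port provides exactly one such slot. A hedged sketch of what pre-provisioned spare ports might look like (controller indexes and the example device are illustrative, not taken from this thread):

```xml
<devices>
  <controller type='pci' index='0' model='pcie-root'/>
  <!-- occupied root-port: holds the device defined below -->
  <controller type='pci' index='1' model='pcie-root-port'/>
  <!-- spare root-ports: each one is an empty slot that a later
       hotplug can use; with none of these, hotplug would fail -->
  <controller type='pci' index='2' model='pcie-root-port'/>
  <controller type='pci' index='3' model='pcie-root-port'/>
  <interface type='network'>
    <source network='default'/>
    <model type='virtio'/>
    <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </interface>
</devices>
```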
The differences I see from what (I think) you suggested are:
* if there aren't any unaddressed pci devices (even if there are no controllers in the config), then we also don't add any extra controllers (although we will of course add the pci-root or pcie-root, to acknowledge it is there).
* if another controller is needed for adding a device offline, it's okay to add it. So instead of guaranteeing that there will always be an empty slot available for hotplug during a single start/destroy cycle of the guest, we would be guaranteeing that there will be 3 or 4 empty slots available for either hotplug or coldplug throughout the entire life of the guest.

On Fri, 2016-10-07 at 10:17 -0400, Laine Stump wrote:
A better way to put it is that we guarantee there will be "N" (3 or 4 or whatever) slots available when the domain is originally defined. Once any change has been made, all bets are off.
Sounds like a pretty good compromise to me.
The only problem I can think of is that there might be management applications that add e.g. a pcie-root in the XML when first defining a guest, and after the change such guests would get zero hotpluggable ports. Then again, it's probably okay to expect such management applications to add the necessary number of pcie-root-ports themselves.
Yeah, if they know enough to be adding a root-port, then they know enough to add extras.
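For context, the case Andrea describes would be a definition like the following hedged sketch (indexes illustrative): a management application that supplies a bare pcie-root would, under rule B), get no spare ports, so if it wants hotplug headroom it has to add the root-ports itself:

```xml
<!-- supplying this alone would suppress libvirt's extra spare ports -->
<controller type='pci' index='0' model='pcie-root'/>
<!-- so an app wanting hotplug headroom must add its own root-ports,
     one per slot it expects to hotplug into later -->
<controller type='pci' index='1' model='pcie-root-port'/>
<controller type='pci' index='2' model='pcie-root-port'/>
```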
Maybe we could relax the wording on A) and ignore any pci{,e}-root? Even though there is really no reason for either a user or a management application to add them explicitly when defining a guest, I feel like they might be special enough to deserve an exception.
I thought about that; I'm not sure. In the case of libguestfs, even if Rich added a pcie-root, I guess he would still be manually addressing his pci devices, so that would be clue enough that he knew what he was doing and didn't want any extra.
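A hedged sketch of the manual-addressing alternative mentioned above (the device and its address are made up for illustration): by giving every PCI device an explicit <address>, an application like libguestfs signals that it needs no auto-added controllers beyond what those addresses require:

```xml
<devices>
  <!-- explicitly addressed directly onto the root complex (bus 0)
       as an integrated endpoint, so no pcie-root-port is needed -->
  <disk type='file' device='disk'>
    <source file='/var/tmp/guest.img'/>
    <target dev='vda' bus='virtio'/>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
  </disk>
</devices>
```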