Re: [libvirt] [Qemu-devel] PCI(e): Documentation "io-reserve" and related properties?

On Thu, 2019-06-06 at 14:20 -0400, Michael S. Tsirkin wrote:
On Thu, Jun 06, 2019 at 06:19:43PM +0200, Kashyap Chamarthy wrote:
Hi folks,
Today I learnt about some obscure PCIe-related properties, in the context of adding PCIe root ports to a guest, namely:
io-reserve, mem-reserve, bus-reserve, pref32-reserve, pref64-reserve
Unfortunately, the commit[*] that added them provided no documentation whatsoever.
In my scenario, I was specifically wondering what "io-reserve" means and in what context to use it. (But documentation about the other properties is also welcome.)
Anyone more well-versed in this area care to shed some light?
[*] 6755e618d0 (hw/pci: add PCI resource reserve capability to legacy PCI bridge, 2018-08-21)
So normally the BIOS would reserve just enough IO space to satisfy all devices behind a bridge. What if you intend to hotplug more devices? These properties allow you to ask the BIOS to reserve extra space.
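A minimal sketch of what such a command line could look like. The IDs and reservation values here are illustrative assumptions, not taken from the thread; the properties themselves are the ones named above:

```sh
# Hypothetical example: ask the firmware to reserve extra resources behind a
# root port so that devices hotplugged later can still be assigned windows.
qemu-system-x86_64 -M q35 \
    -device pcie-root-port,id=pci.1,chassis=1,bus-reserve=4,mem-reserve=64M,pref64-reserve=128M,io-reserve=8K
```

The *-reserve sizes take the usual QEMU size suffixes; bus-reserve is a count of subordinate bus numbers rather than a size.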
Is it fair to say that setting io-reserve=0 for a pcie-root-port would be a way to implement the requirements set forth in https://bugzilla.redhat.com/show_bug.cgi?id=1408810 ? I tested this on aarch64 and it seems to work as expected, but then again without documentation it's hard to tell.

More specifically, I created an aarch64/virt guest with several pcie-root-ports and it couldn't boot much further than GRUB when the number of ports exceeded 24, but as soon as I added the io-reserve=0 option I could get the same guest to boot fine with 32 or even 64 pcie-root-ports. I'm attaching the boot log for reference: there are a bunch of messages about the topic, but they would appear to be benign.

Hotplug seemed to work too: I tried with a single virtio-net-pci and I could access the network. My understanding is that PCIe devices are required to work without IO space, so this behavior matches my expectations.

I wonder, though, what would happen if I had something like

  -device pcie-root-port,io-reserve=0,id=pci.1 -device pcie-pci-bridge,bus=pci.1

Would I be able to hotplug conventional PCI devices into the pcie-pci-bridge, or would the lack of IO space reservation for the pcie-root-port cause issues with that?

-- 
Andrea Bolognani / Red Hat / Virtualization
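For reference, a configuration along the lines of the experiment described above could be sketched like this. This is not the exact command line from the report (which isn't included in the thread); IDs, chassis numbers and the disk image name are assumptions:

```sh
# Hypothetical sketch: an aarch64/virt guest with 32 hotpluggable root ports,
# each told to reserve no IO space, as in the io-reserve=0 experiment above.
qemu-system-aarch64 -M virt -cpu max -m 2G \
    $(for i in $(seq 1 32); do
        printf -- '-device pcie-root-port,id=pci.%d,chassis=%d,io-reserve=0 ' "$i" "$i"
      done) \
    -drive file=guest.qcow2,if=virtio
```

Each root port needs a unique chassis number, which is why it is derived from the loop counter here.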

On 6/7/19 2:43 PM, Andrea Bolognani wrote:
I wonder, though, what would happen if I had something like
-device pcie-root-port,io-reserve=0,id=pci.1 -device pcie-pci-bridge,bus=pci.1
Would I be able to hotplug conventional PCI devices into the pcie-pci-bridge, or would the lack of IO space reservation for the pcie-root-port cause issues with that?
You would not have any IO space for a PCI device, or for a PCIe device that for some reason requires IO space (even though it shouldn't), and the hotplug operation would fail. On the other hand, if the pcie-pci-bridge device itself doesn't require any IO space, it will work. It's worth trying.

Thanks,
Marcel
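A hedged sketch of the experiment being discussed: start the guest with an IO-less root port plus a pcie-pci-bridge, then try hotplugging a conventional PCI device from the monitor. Device names and IDs here are illustrative assumptions; e1000 is chosen because it exposes an IO BAR, making it the interesting case:

```sh
qemu-system-x86_64 -M q35 -monitor stdio \
    -device pcie-root-port,id=pci.1,chassis=1,io-reserve=0 \
    -device pcie-pci-bridge,id=pci.2,bus=pci.1

# then, at the (qemu) monitor prompt:
#   (qemu) device_add e1000,bus=pci.2,id=testnic
# Per the reply above, this would be expected to fail for devices that
# need IO space, and to succeed for devices that can live without it.
```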