On Tue, 2015-08-11 at 19:26 -0400, Laine Stump wrote:
(Alex - I cc'ed you because I addressed a question or two your way
down towards the bottom).
On 08/11/2015 02:52 AM, Pavel Fedin wrote:
> Hello!
>
>> The original patches to support pcie-root severely restricted what could
>> plug into what because in real hardware you can't plug a PCI device into
>> a PCIe slot (physically it doesn't work)
> But how do you know whether the device is PCI or PCIe? I don't see
> anything like this in the code, I see that for example "all network
> cards are PCI", which is, BTW, not true in the real world.
Two years ago when I first added support for q35-based machinetypes and
the first pcie controllers, I had less information than I do now. When
I looked at the output of "qemu-kvm -device ?", I saw that each device
listed the type of bus it connected to (PCI or ISA), and assumed that,
even though qemu didn't differentiate between PCI and PCIe there at the
time, it eventually would, since the two *are* different in the real
world. I wanted the libvirt code to be prepared for that eventuality.
Of course every example device (except the PCIe controllers themselves)
ends up with the flag saying that it can connect to a PCI bus, not
PCIe.
Later I was told that, unlike the real world where, if nothing else, the
physical slots themselves limit you, any normal PCI device in qemu could
be plugged into either a PCI or a PCIe slot. There are still several
restrictions, though, and they turned out to be more complicated than
the naive PCI vs. PCIe split I originally imagined - just look at the
restrictions on the different PCIe controllers:
("pcie-sw-up-port" == "pcie-switch-upstream-port",
"pcie-sw-dn-port" ==
"pcie-switch-downstream-port")
name               upstream                downstream
-----------------  ----------------------  ------------------------
pcie-root          none                    any endpoint
                                           pcie-root-port
                                           dmi-to-pci-bridge
                                           pci-bridge
                                           31 ports, NO hotplug

dmi-to-pci-bridge  pcie-root               any endpoint device
                   pcie-root-port          pcie-sw-up-port
                   pcie-sw-dn-port
                   NO hotplug              32 ports, NO hotplug
Hmm, pcie-sw-up-port on the downstream is a stretch here. pci-bridge
should be allowed downstream though.
pcie-root-port     pcie-root only          any endpoint
                   NO hotplug              dmi-to-pci-bridge
                                           pcie-sw-up-port
                                           1 port, hotpluggable

pcie-sw-up-port    pcie-root-port          pcie-sw-dn-port
                   pcie-sw-dn-port         32 ports, "kind of" hotpluggable
                   "kind of" hotpluggable

pcie-sw-dn-port    pcie-sw-up-port         any endpoint
                   "kind of" hotplug       pcie-sw-up-port
                                           1 port, hotpluggable

pci-bridge         pci-root                any endpoint
                   pcie-root               pci-bridge
                   dmi-to-pci-bridge       32 ports, hotpluggable
                   pcie-root-port
                   pcie-sw-dn-port
                   NO hotplug (for now)
So the original restrictions I placed on what could plug in where were
*too* restrictive for endpoint devices, but other restrictions were
useful, and the framework came in handy as I learned the restrictions
of each new PCI controller model.
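To make those rules concrete, here's a rough sketch (illustrative only,
not taken from any real config) of how a root-port/switch hierarchy
from the table might be written in the domain XML - each controller
plugs into the bus created by the controller one level up:

  <controller type='pci' index='0' model='pcie-root'/>
  <controller type='pci' index='1' model='pcie-root-port'>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
  </controller>
  <controller type='pci' index='2' model='pcie-switch-upstream-port'>
    <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </controller>
  <controller type='pci' index='3' model='pcie-switch-downstream-port'>
    <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
  </controller>

An endpoint device would then use bus='0x03' to land on the downstream
port.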
System software ends up being pretty amenable as well, since PCIe is
software compatible with conventional PCI. If we have a guest-based
IOMMU though, things could start to get interesting because the
difference isn't so transparent. The kernel starts to care about
whether a device is express and expects certain compatible upstream
devices as it walks the topology. Thankfully though real hardware gets
plenty wrong too, so we only have to be not substantially worse than
real hardware ;)
>> The behavior now is that if libvirt is auto-assigning a slot for a
>> device, it will put it into a hotpluggable true PCI slot, but if you
>> manually assign it to a slot that is non-hotpluggable and/or PCIe, it
>> will be allowed.
> But when I tried to manually assign virtio-PCI to PCIe I simply got
> "Device requires standard PCI slot" and that was all. I had to make
> patch N4 in order to overcome this.
I'm glad you pointed out this patch (I had skipped over it), because
1) that patch is completely unnecessary ever since commits 1e15be1 and
9a12b6c were pushed upstream, and
2) that patch will cause problems with auto-assignment of addresses for
virtio-net devices on Q35 machines (they will be put on pcie-root
instead of the pci-bridge).
I have verified that (1) is true - I removed your patch, built and
installed new libvirt, and tried adding a new virtio-net device to
Cole's aarch64 example domain with a manually set pci address on both
bus 0 (pcie-root) and bus 1 (dmi-to-pci-bridge), and both were successful.
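For reference, a manually addressed virtio-net device looks something
like this in the domain XML (the slot number and network name here are
only for illustration; bus='0x00' is pcie-root and bus='0x01' is the
dmi-to-pci-bridge in that domain):

  <interface type='network'>
    <source network='default'/>
    <model type='virtio'/>
    <address type='pci' domain='0x0000' bus='0x01' slot='0x05' function='0x0'/>
  </interface>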
I just sent a patch that reverts your patch 4. Please test it to verify
my claims and let me know so I can push it.
https://www.redhat.com/archives/libvir-list/2015-August/msg00488.html
>> BTW, I'm still wondering if the arm machinetype really does support the
>> supposedly Intel-x86-specific i82801b11-bridge device
> Yes, it works fine. Just devices behind it cannot get MSI-X enabled.
I'm not expert enough to know for sure, but that sounds like a bug. Alex?
i82801b11-bridge is the "DMI-to-PCI" bridge and ARM certainly doesn't
support DMI, but it's really just a PCIe-to-PCI bridge, so I don't know
why MSI wouldn't work behind it. I can't say I've done much with it,
though.
I do recall that a very long time ago when I first tried out
dmi-to-pci-bridge and Q35, I found a qemu bug that made it impossible to
use network devices (and they were probably virtio-net, as that's what I
usually use) attached to a pci-bridge on a Q35 machine. Maybe the same
bug? I don't remember what the exact problem was (but again, I think
Alex will remember the problem).
Drawing a blank...
I can say that currently (and for almost the last two years) there is
no problem using a virtio-net adapter that is connected to a q35
machine in this way:
pcie-root --> dmi-to-pci-bridge --> pci-bridge --> virtio-net
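In domain XML that chain would look something like this (roughly the
layout libvirt generates by default for a q35 guest; the slot numbers
are illustrative):

  <controller type='pci' index='0' model='pcie-root'/>
  <controller type='pci' index='1' model='dmi-to-pci-bridge'>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x1e' function='0x0'/>
  </controller>
  <controller type='pci' index='2' model='pci-bridge'>
    <address type='pci' domain='0x0000' bus='0x01' slot='0x01' function='0x0'/>
  </controller>
  <interface type='network'>
    <source network='default'/>
    <model type='virtio'/>
    <address type='pci' domain='0x0000' bus='0x02' slot='0x01' function='0x0'/>
  </interface>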
> By the way, you have been using virtio-pci with PC guests for a
> while; does it also suffer from this restriction there?
See above :-)
>
>> (and the new controller devices - ioh3420 (pcie-root-port),
>> x3130-upstream (pcie-switch-upstream-port), and xio3130-downstream
>> (pcie-switch-downstream-port).
> Didn't try that, but don't see why they would not work. PCI is just
> PCI after all, everything behind the controller is pretty much
> standard and arch-independent.
Alex (yeah, I know, I need to stop being such an Alex-worshipper on PCI
topics :-P) has actually expressed concern that we are using all of
these Intel-chipset-specific PCI controllers, and thinks that we should
instead create some generic devices that have different PCI device IDs
than those. Among other problems, those Intel-specific devices only
exist in real hardware at specific addresses, and he's concerned that
our putting them at a different address might confuse some OSes; better
to have a device that functions similarly but identifies itself
differently, so the OS won't make any potentially incorrect
assumptions (that type of
problem is likely a non-issue for your use of these devices though,
since you don't really have any "legacy OS" you have to deal with).
Yep, I'd love for us to replace all these specific devices with generic
ones: pci-pci-bridge, pci-pcie-bridge, pcie-pci-bridge, pcie-root-port,
pcie-upstream-switch-port, pcie-downstream-switch-port, etc. Unless
there's some specific need for emulating a specific device, I expect
we're only making things more complicated. Thanks,
Alex