On 6/16/20 10:32 AM, Paulo de Rezende Pinatti wrote:
> No default model should be added to the interface
> entry at post parse when its actual network type is hostdev
> as doing so might cause a mismatch between the interface
> definition and its actual device type.
Have you encountered a real problem from this? (I have, and have been
thinking about the issue for a while, but only late at night when I'm not
near my keyboard to do something about it. I'm just wondering what
problem you've had :-))
The reason it's been on my mind is this: The default model is rtl8139.
When libvirt is auto-assigning PCI addresses to devices at parse time,
it decides whether to assign a network device to a conventional PCI or
PCI Express slot based on the model, and rtl8139 is conventional PCI. So
if you have <interface type='network'> where the network is a pool of
hostdevs, and if you don't assign a "fake" model like "virtio" or
"e1000e", then the hostdev device (which is 100% certainly a PCIe
device) will be assigned to a conventional PCI slot. That works, but
is.... "sub-optimal" :-)
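For context, the slot-type decision currently looks something like this (a
paraphrased sketch from memory, not the exact code; the real function
handles more models and more device types):

    case VIR_DOMAIN_DEVICE_NET:
        /* paraphrased sketch: slot type is chosen from the model */
        if (virDomainNetIsVirtioModel(net))
            return virtioFlags;  /* Express when the machine has pcie ports */
        if (net->model == VIR_DOMAIN_NET_MODEL_E1000E)
            return pcieFlags;    /* e1000e emulates an Express NIC */
        return pciFlags;         /* everything else, including the rtl8139
                                  * default, lands in a conventional PCI slot */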
I think we will still need to add a bit to
qemuDomainDeviceCalculatePCIConnectFlags() in order to get the right
type of slot set, but this is a good start.
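Roughly what I have in mind there, as an untested sketch (the hostdev check
below is my guess at the fix, not code that exists today):

    case VIR_DOMAIN_DEVICE_NET:
        /* untested guess: an interface whose actual type resolves to
         * hostdev will be backed by an SR-IOV VF, which is essentially
         * always a PCIe device, so request an Express slot regardless
         * of the (possibly absent) model.
         */
        if (virDomainNetResolveActualType(dev->data.net) ==
            VIR_DOMAIN_NET_TYPE_HOSTDEV)
            return pcieFlags;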
Reviewed-by: Laine Stump <laine@redhat.com>
Thanks, and congratulations on your first libvirt patch!
> ---
>  src/qemu/qemu_domain.c | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
> index 2dad823a86..33ce0ad992 100644
> --- a/src/qemu/qemu_domain.c
> +++ b/src/qemu/qemu_domain.c
> @@ -5831,6 +5831,7 @@ qemuDomainDeviceNetDefPostParse(virDomainNetDefPtr net,
>                                  virQEMUCapsPtr qemuCaps)
>  {
>      if (net->type != VIR_DOMAIN_NET_TYPE_HOSTDEV &&
> +        virDomainNetResolveActualType(net) != VIR_DOMAIN_NET_TYPE_HOSTDEV &&
>          !virDomainNetGetModelString(net))
>          net->model = qemuDomainDefaultNetModel(def, qemuCaps);