vepa-mode directly attached interface

Hi all, I just realized that the libvirt default for directly attached interfaces is vepa mode. I discovered it only now because virt-manager automatically enables bridge mode, while cockpit-machines gets the default vepa mode. This is unfortunate because, VEPA-capable switches being very rare, it means that using cockpit to configure directly attached interfaces prevents guests from talking to each other.

I have some questions:

- can the libvirt default be changed/configured somehow (i.e. to automatically create bridge-mode directly attached interfaces when no mode is specified)?
- how can I use virsh to discover machines with vepa-mode interfaces (virsh domiflist <domain> does not return the interface mode)?
- can I change the interface mode at runtime (virt-xml <domain> --edit --network type=direct,source.mode=bridge works for inactive domains only)?

Thanks.

--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti@assyoma.it - info@assyoma.it
GPG public key ID: FF5F32A8

On 6/11/22 8:54 AM, Gionatan Danti wrote:
> Hi all, I just realized that the libvirt default for directly attached interfaces is vepa mode.
> I discovered it only now because virt-manager automatically enables bridge mode, while cockpit-machines gets the default vepa mode. This is unfortunate because, VEPA-capable switches being very rare, it means that using cockpit to configure directly attached interfaces prevents guests from talking to each other.
> I have some questions:
> - can the libvirt default be changed/configured somehow (i.e. to automatically create bridge-mode directly attached interfaces when no mode is specified)?
libvirt's defaults are defaults, and can't be changed with config. libvirt also will never change the hardcoded default, because that could break existing installations. I'm not sure why vepa was chosen as the default when macvtap support was added to libvirt, but my best guess is that it was simply the bias of the person doing the work, who assumed their usage would be the most common (IIRC it was done by someone who specifically wanted to support connections to VEPA-capable switches, and since it was a new feature nobody else had experience with, or opinions about, which mode would be most common, so the reviewers just accepted this default).
> - how can I use virsh to discover machines with vepa-mode interfaces (virsh domiflist <domain> does not return the interface mode)?
I guess you'll need to do a "virsh dumpxml --inactive" for each guest and parse it out of there.
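For example, a minimal sketch of that loop using the libvirt Python bindings (the connection URI qemu:///system is an assumption; the check relies on the fact that an omitted source.mode means the default, vepa):

    # Scan the inactive XML of every defined guest and report those with a
    # direct-type interface left in (implicit or explicit) vepa mode.
    import libvirt
    import xml.etree.ElementTree as ET

    conn = libvirt.open('qemu:///system')
    for dom in conn.listAllDomains():
        root = ET.fromstring(dom.XMLDesc(libvirt.VIR_DOMAIN_XML_INACTIVE))
        for iface in root.findall("./devices/interface[@type='direct']"):
            source = iface.find('source')
            # when the mode attribute is omitted, libvirt defaults to vepa
            if source is not None and source.get('mode', 'vepa') == 'vepa':
                print(dom.name())
                break
    conn.close()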
> - can I change the interface mode at runtime (virt-xml <domain> --edit --network type=direct,source.mode=bridge works for inactive domains only)?
No, I think the mode of the macvtap interface is set when the interface is created, and can't be changed later (i.e. it's a limitation of macvtap, not libvirt)

On 2022-06-11 19:32, Laine Stump wrote:
> libvirt's defaults are defaults, and can't be changed with config. libvirt also will never change the hardcoded default, because that could break existing installations.
Thanks for the direct confirmation that the default cannot be changed via config. I opened a cockpit-machines bug to ask for a change there: https://bugzilla.redhat.com/show_bug.cgi?id=2095967
> I'm not sure why vepa was chosen as the default when macvtap support was added to libvirt, but my best guess is that it was simply the bias of the person doing the work, who assumed their usage would be the most common (IIRC it was done by someone who specifically wanted to support connections to VEPA-capable switches, and since it was a new feature nobody else had experience with, or opinions about, which mode would be most common, so the reviewers just accepted this default).
Sounds reasonable. Do you know whether directly attached bridge-mode interfaces are common and tested in production workloads, or whether "pure" bridge mode is the way to go? Full disclosure: I have always used "pure" bridges, but I like the idea of denying any guest->host traffic by design - hence my interest in bridge-mode macvtap interfaces. Moreover, by virtue of not requiring a dedicated bridge interface, they can simplify the host network configuration.
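For reference, a minimal sketch of the kind of interface I mean, attached via the libvirt Python bindings (the domain name 'myguest' and the host NIC 'eth0' are placeholders, and qemu:///system is assumed):

    # Attach a bridge-mode macvtap interface to a guest's persistent config.
    import libvirt

    IFACE_XML = """
    <interface type='direct'>
      <source dev='eth0' mode='bridge'/>
      <model type='virtio'/>
    </interface>
    """

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('myguest')
    # VIR_DOMAIN_AFFECT_CONFIG changes the inactive definition; add
    # VIR_DOMAIN_AFFECT_LIVE to also hot-plug it into a running guest.
    dom.attachDeviceFlags(IFACE_XML, libvirt.VIR_DOMAIN_AFFECT_CONFIG)
    conn.close()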
> I guess you'll need to do a "virsh dumpxml --inactive" for each guest and parse it out of there.
It matches my results. However, virsh dumpxml is an expensive operation - polkitd can easily burn >30% of a single core with a 1s polling interval based on virsh dumpxml. I settled (for testing) on grepping for "mode..vepa" in /etc/libvirt/qemu, and it seems to work well (1s polling doesn't even register in top). Does libvirt support calling some external script when a new virtual machine is defined?
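In case it is useful, a sketch of that grep-style check in Python (it assumes the persistent definitions live under /etc/libvirt/qemu, the default location on my systems):

    # Scan the on-disk domain definitions for vepa-mode interfaces without
    # opening a libvirt connection (and therefore without touching polkit).
    # Note: this relies on the mode attribute being written out explicitly.
    import glob
    import re

    for path in glob.glob('/etc/libvirt/qemu/*.xml'):
        with open(path) as f:
            if re.search(r"mode=.vepa.", f.read()):
                print(path)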
> No, I think the mode of the macvtap interface is set when the interface is created, and can't be changed later (i.e. it's a limitation of macvtap, not libvirt)
Again, it matches my results (ip link shows "operation not supported" when trying to change the mode at runtime). I had hoped I was missing something, but that does not seem to be the case...

Thanks.

--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti@assyoma.it - info@assyoma.it
GPG public key ID: FF5F32A8

On Sun, Jun 12, 2022 at 00:16:20 +0200, Gionatan Danti wrote:
> On 2022-06-11 19:32, Laine Stump wrote:
> [...]
>> I guess you'll need to do a "virsh dumpxml --inactive" for each guest and parse it out of there.
> It matches my results. However, virsh dumpxml is an expensive operation - polkitd can easily burn >30% of a single core with a 1s polling interval based on virsh dumpxml. I settled (for testing) on grepping for "mode..vepa" in /etc/libvirt/qemu, and it seems to work well (1s polling doesn't even register in top).
The unfortunate thing about using `virsh dumpxml $VM` as written is that it opens a connection (which uses polkit for auth), gets the XML, and then closes the connection. If you want to optimize it, you can e.g. write a script using the Python bindings for libvirt and simply keep the connection open ...
> Does libvirt support calling some external script when a new virtual machine is defined?
... which additionally allows you to register 'domain lifecycle events' [1], which give you a trigger when a new VM is defined.

[1] https://www.libvirt.org/html/libvirt-libvirt-domain.html#virConnectDomainEve...
    https://www.libvirt.org/html/libvirt-libvirt-domain.html#virDomainEventID
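A minimal event-driven sketch along those lines (qemu:///system assumed; the callback body is just a stub to be replaced with the actual vepa check):

    # Keep one connection open and react to lifecycle events instead of polling.
    import libvirt

    def lifecycle_cb(conn, dom, event, detail, opaque):
        if event == libvirt.VIR_DOMAIN_EVENT_DEFINED:
            # a new (or updated) definition appeared: check this one domain
            print('defined:', dom.name())

    libvirt.virEventRegisterDefaultImpl()
    conn = libvirt.open('qemu:///system')
    conn.domainEventRegisterAny(None, libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE,
                                lifecycle_cb, None)
    while True:
        libvirt.virEventRunDefaultImpl()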

On 2022-06-13 09:47, Peter Krempa wrote:
> If you want to optimize it, you can e.g. write a script using the Python bindings for libvirt and simply keep the connection open ...
> ... which additionally allows you to register 'domain lifecycle events' [1], which give you a trigger when a new VM is defined.
This is a good idea, indeed. Thank you for suggesting it!

Regards.

--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti@assyoma.it - info@assyoma.it
GPG public key ID: FF5F32A8