On Fri, Apr 03, 2026 at 08:24:42PM +0200, Dion Bosschieter via Devel wrote:
On 4/2/26 06:29, Laine Stump wrote:
On 4/1/26 3:34 AM, Dion Bosschieter wrote:
Change the nwfilter driver loading mechanism to read from nwfilter.conf. By default it will use the nftables driver, mirroring the firewall_backend configuration logic of the bridge/network driver.
I think it should initially default to the iptables driver, just so nobody gets a surprise on upgrade if there is any incompatibility at all. This is what we did when the nftables backend was added to the virtual network driver, and some distros still keep their default set to iptables (due to interoperability problems with e.g. docker using iptables rules with a default action of "reject" (or was it "deny")).
Should I do that by setting the aug file to the static value "iptables"? I ask because I think the meson firewall_backend_priority (which gets reused from the network driver) defaults to nftables when:
%if 0%{?rhel} >= 10 || 0%{?fedora}
    %define prefer_nftables 1
    %define firewall_backend_priority nftables,iptables
I'll check and update the nwfilter.conf reading, defaulting to ebiptables if no driver has been set.
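As a sketch, such a default in nwfilter.conf might look like the following. The setting name "firewall_backend" is an assumption borrowed from the network driver's naming; the actual patch may use a different key:

```
# Hypothetical nwfilter.conf fragment -- the setting name is borrowed
# from the network driver's firewall_backend option and is not
# confirmed by this patch series.
#
# Which firewall backend the nwfilter driver should use. If unset,
# fall back to the ebiptables driver for upgrade compatibility.
#firewall_backend = "iptables"
```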
BTW, this reminds me of the topic of "what happens when someone restarts nwfilterd after switching the setting of firewall_backend?" I could just build the patches, install, and find out for myself, but it's after midnight so I'll do the lazy thing and ask :-). It's important to keep track somewhere of whether the *previous* run of the daemon had loaded iptables or nftables rules. (for the network driver I did it in the status XML of each network).
Currently it doesn't do a cleanup when a user switches to a different nwfilter driver. When that happens there will be leftover firewall rules in either nftables or eb/iptables.
Can you share some examples of where that gets set and loaded?
I could then also run dropAllRules on the "old" driver so that when a user chooses the nftables driver, the old ebiptables rules get cleaned up.
It will require initializing both drivers; when the driver recorded in the status != the current driver, we can do a cleanup via prev_driver.allTeardown(ifname).
I wonder if there is a place to store such state/status. As I don't think there is a filter object per domain, it could be part of the filter itself or be stored globally.
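To make the idea concrete, here is a rough sketch of the reconciliation step on daemon startup: compare the previously recorded driver with the configured one and, if they differ, run the old driver's teardown for each interface. All names here (reconcile_driver, all_teardown, the driver dict) are hypothetical illustrations, not libvirt APIs:

```python
# Illustrative flow only -- names are hypothetical, not libvirt code.
# On daemon startup, compare the previously recorded backend with the
# configured one; if the admin switched backends, tear down the stale
# rules using the *old* driver before the new one installs anything.
def reconcile_driver(recorded, configured, drivers, ifnames):
    """If the backend changed, run the old driver's teardown per NIC."""
    if recorded is not None and recorded != configured:
        old = drivers[recorded]
        for ifname in ifnames:
            old.all_teardown(ifname)
    return configured
```

The key design point is that teardown must be performed by the driver that originally created the rules, since each backend only knows how to remove its own artifacts.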
IIUC, Laine is suggesting that we need to store state in /var/run/libvirt/nwfilter/. We have a "nwfilter binding" object in the API which associates VM NICs and network filters. At the simplest level we could just record which backend was used for each binding, so we know which teardown codepath to invoke. At the hardest level we would record every single rule or table for the binding, and specifically tear those down, instead of relying on current code which may not match historical code. IMHO it is probably sufficient to record just the table names we created. We can drop all the per-NIC tables without having to delete each individual rule in them, and dropping the table should purge any jumps targeting that table IIUC?
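The simplest variant of this could be sketched as a small per-binding state file under the runtime directory. The file layout, naming, and paths below are illustrative assumptions, not the actual libvirt on-disk format:

```python
# Sketch of recording which firewall backend created the rules for each
# nwfilter binding, so a later daemon restart knows which teardown path
# to invoke. File layout and paths are illustrative assumptions only.
import json
from pathlib import Path

def record_binding_backend(state_dir, ifname, backend, tables):
    """Persist the backend (and the table names it created) for one NIC."""
    state_dir = Path(state_dir)
    state_dir.mkdir(parents=True, exist_ok=True)
    state = {"backend": backend, "tables": tables}
    (state_dir / f"{ifname}.json").write_text(json.dumps(state))

def load_binding_backend(state_dir, ifname):
    """Return the recorded state for a binding, or None if none was saved."""
    path = Path(state_dir) / f"{ifname}.json"
    if not path.exists():
        return None
    return json.loads(path.read_text())
```

Recording only the table names, as suggested, keeps the state small while still allowing a wholesale "drop the table" cleanup rather than rule-by-rule deletion.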
Or otherwise it could also be detected by checking the output of "ebtables-save", "iptables-save" and "nft list ruleset".
With regards,
Daniel
--
|: https://berrange.com      ~~ https://hachyderm.io/@berrange :|
|: https://libvirt.org       ~~ https://entangle-photo.org :|
|: https://pixelfed.art/berrange ~~ https://fstop138.berrange.com :|