On 1/24/21 1:59 AM, Neal Gompa wrote:
> On Sun, Jan 24, 2021 at 1:45 AM Laine Stump <laine@redhat.com>
> wrote:
>> V1 here:
>> https://www.redhat.com/archives/libvir-list/2021-January/msg00922.html
>>
>> A short version of the cover letter from V1: this is a followup to my
>> proposal to stop using netcf for the interface driver backend in
>> Fedora/RHEL/CentOS builds and use the udev backend instead (it turns
>> out Ubuntu already disabled netcf in 2018).
>>
>> Changes in V2:
>>
>> * removed the patch that made the default netcf=disabled even when
>>   netcf-devel was found on the host. If someone has netcf-devel
>>   installed and still wants to build without netcf, they can add
>>   "-Dnetcf=disabled" on the meson commandline (see the example below).
>>
>> * Made the specfile changes more intelligent:
>>
>>   * instead of hardcoding -Dnetcf=disabled, we now have a variable
>>     %{with_netcf} that is set to 1 for current Fedora (< 34) and
>>     current RHEL (< 9) but will be set to 0 for future Fedora/RHEL.
>>     This way the behavior on current OS releases will remain the same
>>     even for future libvirt.
>>
>>   * it is possible to turn netcf support off even in current/older OS
>>     releases by adding "--without netcf" to the rpmbuild commandline
>>     (see the example below).
>>
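>> For the record, disabling netcf at build time looks something like
>> this (the "build" directory name here is arbitrary):
>>
>>   $ meson setup build -Dnetcf=disabled
>>   $ ninja -C build
>>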
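>> And the spec conditional is roughly of this shape (a sketch of the
>> idea, not the literal patch - see the actual diff for details):
>>
>>   %if 0%{?fedora} >= 34 || 0%{?rhel} >= 9
>>   %define with_netcf 0
>>   %else
>>   %define with_netcf 0%{!?_without_netcf:1}
>>   %endif
>>
>> so that e.g. "rpmbuild --without netcf -ba libvirt.spec" forces it
>> off on the older releases too.
>>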
>> I think at this point I would be comfortable pushing these patches,
>> unless someone has misgivings about it...
>>
>> Laine Stump (2):
>>   build: support explicitly disabling netcf
>>   rpm: disable netcf for the interface driver in rpm build on new
>>     targets
>>
>>  libvirt.spec.in | 22 +++++++++++++++++-----
>>  meson.build     | 10 ++++++----
>>  2 files changed, 23 insertions(+), 9 deletions(-)
>>
>> --
>> 2.29.2
>>
> This looks fine to me, but I'm wondering why libvirt doesn't
> communicate with NetworkManager for this information? That's a
> cross-distribution method of handling complex network configuration
> that we basically know will exist and that can handle parsing and
> configuring networks effectively.

Wanna write a backend driver?
Seriously though, we've talked about that in the past, but if it were
done, it would be separate from netcf - an alternate backend like the
udev backend. But in the end, I don't really think there's much demand
for libvirt to configure host interfaces anyway, so it would be a lot of
effort for no real gain.

The only reason we ever added the virInterface API in the first place
was that at the time (late 2008) there was *no* standard API for
configuring a new bridge or bond device and attaching ethernets to them
(unless you count "directly modify the files in
/etc/sysconfig/network-scripts" as an API :-P), and that was a common
need for someone provisioning a host to be a compute node for
virtualization. Even NetworkManager (which at that time was generally
only used on desktop systems, almost never on servers) had exactly 0
support for bridge or bond devices. (I won't claim to know how much of a
programmatic API they had at the time, but it would have been a moot
point, since they didn't support bridges and bonds at all.)

At the same time there was a virtualization management application
(ovirt) that needed such an API: they had to be able to provision a new
compute node that started out with a bare ethernet, and they wanted a
bridge (or a bond attached to a bridge). Since they already had a
connection into the node via libvirt, they saw that as an opportune
place to add such an API.

But within a couple of years after that, NetworkManager started
supporting bridge and bond devices, still using ifcfg files just like
initscripts (and libvirt/netcf) did. The only problem was that they made
subtle changes in behavior (one I remember is that they changed the
necessary ordering of starting bridge devices and attached ethernet
devices, and then later *again* changed behavior by automatically
starting attached ethernets - or something like that; I just remember it
was frustrating to deal with). But at the same time as they were making
behavioral changes that repeatedly broke libvirt's attempts at
configuring/managing host interfaces, they also produced a usable API.
And the people who were the reason we added netcf/virInterface decided
not to use virInterface, but instead to do their own thing (via vdsm,
which is a part of ovirt). So we were left with a bunch of code that
nobody used, and that was constantly breaking due to behavior changes by
the ever-more-popular NetworkManager. This of course did nothing to
encourage me (or anyone else) to work on it - why toil away to get
something to work properly if the fix for the current version of NM was
just going to break it on previous versions (and probably prime it for
breakage on future versions), especially if there were no users?

Anyway, I've digressed quite a ways for no good reason (other than that
it's the middle of the night and I'm sleepless again) and am probably
becoming incoherent and repetitive, so I'll pull it back in. In the old
days there was no good way to configure host interfaces via an API, so
we made something. But now there *are* good ways to do that, so there's
not much upside to just putting libvirt in as the middleman. If someone
wants to try doing it, a NetworkManager backend to the interface driver
might be interesting, but I'm not going to spend any time doing it
unless someone in a position of authority over me says that I need to,
and that hasn't happened :-P
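
(To illustrate what I mean by "good ways": these days NetworkManager
can do the whole bridge-plus-ethernet provisioning described above with
a couple of nmcli commands - the device/connection names here are just
examples:

  $ nmcli connection add type bridge ifname br0 con-name br0
  $ nmcli connection add type ethernet ifname eth0 master br0
  $ nmcli connection up br0

and there's a D-Bus API underneath for anyone who wants to do the same
thing programmatically.)
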
> BTW, do you ever sleep???
>
> Otherwise...
>
> Reviewed-by: Neal Gompa <ngompa13@gmail.com>

Thanks. I'm going to let it sit and fester a few days to see if anyone
else has anything to say about it, but I'll definitely remember to add
your R-b when/if I push :-)