
On Tue, Feb 20, 2024 at 11:10:22AM -0800, Andrea Bolognani wrote:
On Tue, Feb 20, 2024 at 02:04:11PM -0500, Chuck Lever wrote:
On Tue, Feb 20, 2024 at 10:58:46AM -0800, Andrea Bolognani wrote:
On Tue, Feb 20, 2024 at 10:17:43AM -0500, Chuck Lever wrote:
On Mon, Feb 19, 2024 at 07:18:06PM -0500, Laine Stump wrote:
On 2/19/24 10:21 AM, Chuck Lever wrote:
Hello-
I'm somewhat new to the libvirt world, and I've encountered a problem that needs better troubleshooting skills than I have. I've searched Google/Ecosia and stackoverflow without finding a solution.
I set up libvirt on an x86_64 system without a problem, but on my new aarch64 / Fedora 39 system, virsh doesn't seem to want to start virbr0 when run from my own user account:
cel@boudin:~/kdevops$ virsh net-start default
error: Failed to start network default
error: error creating bridge interface virbr0: Operation not permitted
If you run virsh as a normal user, it will auto-create an unprivileged ("session mode") libvirt instance, and connect to that rather than the single privileged (i.e. run as root) libvirt instance that is managed by systemd. Because this libvirt is running as a normal user with no elevated privileges, it is unable to create a virtual network.
What you probably wanted to do was connect to the system-wide privileged libvirt. You can do this either by running virsh as root (or with sudo), or by using
# virsh -c qemu:///system
rather than straight "virsh". Whichever method you choose, you'll want to do that for all of your virsh commands, both for creating/managing networks and guests.
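As a sketch of what the scripts would need to do (the `qemu:///system` URI is the standard one for the privileged daemon; the `net-start default` invocation is just the failing command from the report above, and the `echo` stands in for actually running it, which would require root or polkit access):

```shell
# Pick the privileged system daemon explicitly rather than relying on
# virsh's per-user default, which is qemu:///session for non-root users.
LIBVIRT_URI="qemu:///system"

# Print the command a wrapper script would run; on a real host, drop
# the echo and run it via sudo (or as a user the polkit rules allow).
echo "virsh -c ${LIBVIRT_URI} net-start default"
```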
These are wrapped up in scripts and ansible playbooks, so I'll have to dig through that to figure out which connection is being used. Strange that this all works on my x86_64 system, but not on aarch64.
This makes me very suspicious. There are a few things that differ between x86_64 and aarch64, but this shouldn't be one of them.
Are you 100% sure that the two environments are identical, modulo the architecture? Honestly, what seems a lot more likely is that either the Ansible playbooks execute some tasks conditionally based on the architecture, or some changes were made to the x86_64 machine outside of the scope of the playbooks.
It's impossible to say that the two environments are identical. The two possibilities you mention are the first things I plan to investigate.
One major difference that escaped me before is that the x86_64 system is using vagrant, but the aarch64 system is using libguestfs. The libguestfs stuff is new and there are likely some untested bits there.
Possible leads:
* contents of ~/.config/libvirt;
On x86_64 / vagrant, .config/libvirt has a channel/ directory, but no networks/ directory. On aarch64 / libguestfs, .config/libvirt has no channel/ directory, but the networks/ directory contains the definition of the "default" network.
* libvirt-related variables in the user's environment;
I don't see any remarkable differences there.
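One variable worth double-checking is LIBVIRT_DEFAULT_URI, which virsh honours when no -c option is given; if the vagrant tooling exports it on the x86_64 box, that alone would explain the difference. A sketch:

```shell
# virsh uses LIBVIRT_DEFAULT_URI when no -c is given; exporting it once
# makes every later virsh call in the session target the system daemon.
export LIBVIRT_DEFAULT_URI="qemu:///system"
echo "$LIBVIRT_DEFAULT_URI"
```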
* groups the user is part of.
x86_64:

[cel@renoir target]$ id
uid=1046(cel) gid=100(users) groups=100(users),10(wheel),36(kvm),107(qemu),986(libvirt)
[cel@renoir target]$

I see that, though the SELinux policy is "enforcing", the kernel is booted with "selinux=0".

aarch64:

cel@boudin:~/.config/libvirt/qemu$ id
uid=1046(cel) gid=100(users) groups=100(users),10(wheel),36(kvm),107(qemu),981(libvirt) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
cel@boudin:~/.config/libvirt/qemu$

--
Chuck Lever