How can I control iptables/nftables rule addition on a libvirtd host on Debian 12?
by oza.4h07@gmail.com
Hello,
When I install libvirt-daemon on a Debian 12 host, I can see the iptables rules below being added:
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:LIBVIRT_FWI - [0:0]
:LIBVIRT_FWO - [0:0]
:LIBVIRT_FWX - [0:0]
:LIBVIRT_INP - [0:0]
:LIBVIRT_OUT - [0:0]
-A INPUT -j LIBVIRT_INP
-A FORWARD -j LIBVIRT_FWX
-A FORWARD -j LIBVIRT_FWI
-A FORWARD -j LIBVIRT_FWO
-A OUTPUT -j LIBVIRT_OUT
-A LIBVIRT_FWI -d 192.168.122.0/24 -o virbr0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A LIBVIRT_FWI -o virbr0 -j REJECT --reject-with icmp-port-unreachable
-A LIBVIRT_FWO -s 192.168.122.0/24 -i virbr0 -j ACCEPT
-A LIBVIRT_FWO -i virbr0 -j REJECT --reject-with icmp-port-unreachable
-A LIBVIRT_FWX -i virbr0 -o virbr0 -j ACCEPT
-A LIBVIRT_INP -i virbr0 -p udp -m udp --dport 53 -j ACCEPT
-A LIBVIRT_INP -i virbr0 -p tcp -m tcp --dport 53 -j ACCEPT
-A LIBVIRT_INP -i virbr0 -p udp -m udp --dport 67 -j ACCEPT
-A LIBVIRT_INP -i virbr0 -p tcp -m tcp --dport 67 -j ACCEPT
-A LIBVIRT_OUT -o virbr0 -p udp -m udp --dport 53 -j ACCEPT
-A LIBVIRT_OUT -o virbr0 -p tcp -m tcp --dport 53 -j ACCEPT
-A LIBVIRT_OUT -o virbr0 -p udp -m udp --dport 68 -j ACCEPT
-A LIBVIRT_OUT -o virbr0 -p tcp -m tcp --dport 68 -j ACCEPT
COMMIT
For some reason, I need to add a couple of other rules on top of these.
How can I do that?
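What I was considering is a network hook script along these lines (a sketch only; the extra rule and the network name check are just placeholders for whatever I actually need):

#!/bin/sh
# /etc/libvirt/hooks/network (must be executable)
# libvirt calls this with: $1 = network name, $2 = operation, $3 = sub-operation
if [ "$1" = "default" ] && [ "$2" = "started" ]; then
    # example only: allow an extra TCP service from guests on virbr0
    iptables -I LIBVIRT_INP -i virbr0 -p tcp --dport 8080 -j ACCEPT
fi

I'm not sure whether a hook script like this is the intended mechanism, though.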
Best regards
4 days, 18 hours
Network denied access
by Rodrigo Prieto
Hello,
I am configuring Polkit using an example I found on the web. It correctly
displays the assigned domain for a given user, but when I try to start the
VM, I get the following error:
error: Failed to start domain 'debian12'
error: access denied: 'network' denied access
Here is my configuration:
polkit.addRule(function(action, subject) {
    if (action.id == "org.libvirt.unix.manage" &&
        subject.user == "lolo") {
        return polkit.Result.YES;
    }
});
polkit.addRule(function(action, subject) {
    if (action.id.indexOf("org.libvirt.api.domain.") == 0 &&
        subject.user == "lolo") {
        if (action.lookup("connect_driver") == 'QEMU' &&
            action.lookup("domain_name") == 'debian12') {
            return polkit.Result.YES;
        } else {
            return polkit.Result.NO;
        }
    }
});
To grant network access, I have to configure the following:
polkit.addRule(function(action, subject) {
    if (action.id.indexOf("org.libvirt.api.network") == 0 &&
        subject.user == "lolo") {
        return polkit.Result.YES;
    }
});
The problem with the previous rule is that it allows full access to networks, so I additionally need the following configuration:
polkit.addRule(function(action, subject) {
    if ((action.id == "org.libvirt.api.network.stop" ||
         action.id == "org.libvirt.api.network.delete" ||
         action.id == "org.libvirt.api.network.write") &&
        subject.user == "lolo") {
        return polkit.Result.NO;
    }
});
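In other words, the workaround amounts to combining the allow rule and the deny rule; merged into a single rule (same action IDs as above), it would look roughly like this:

polkit.addRule(function(action, subject) {
    if (action.id.indexOf("org.libvirt.api.network.") == 0 &&
        subject.user == "lolo") {
        // deny the destructive operations, allow everything else
        if (action.id == "org.libvirt.api.network.stop" ||
            action.id == "org.libvirt.api.network.delete" ||
            action.id == "org.libvirt.api.network.write") {
            return polkit.Result.NO;
        }
        return polkit.Result.YES;
    }
});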
By default, shouldn't network access behave like domains or pools, which
cannot be deleted?
I tested this on libvirt 9.0.0 and 10.0.0.
If you can help me, I would really appreciate it.
4 days, 21 hours
Detect OS
by rodrigoprieto2019@gmail.com
Hello, I would like to know if this behavior is normal. When I create a VM locally using the virt-install command with the argument --osinfo detect=on, it works perfectly and detects the operating system version. However, when I try to do it remotely using virt-install --connect=qemu+tcp://x.x.x.x/system, the following error appears:
`--os-variant/--osinfo OS name is required, but no value was set or detected.
This is now a fatal error. Specifying an OS name is required for modern, performant, and secure virtual machine defaults.
If you expected virt-install to detect an OS name from the install media, you can set a fallback OS name with:
--osinfo detect=on,name=OSNAME
You can see a full list of possible OS name values with:
virt-install --osinfo list
If your Linux distro is not listed, try one of generic values such as: linux2024, linux2022, linux2020, linux2018, linux2016
If you just need to get the old behavior back, you can use:
--osinfo detect=on,require=off
Or export VIRTINSTALL_OSINFO_DISABLE_REQUIRE=1`
Both the host machine and the client PC have the osinfo database updated to the latest version as of today. If I run the command with --os-variant, it works correctly even when done remotely.
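For completeness, the fallback forms suggested in the error message would presumably look like this in my case (the OS name is just an example):

# remote install with a fallback OS name in case detection fails
virt-install --connect qemu+tcp://x.x.x.x/system \
    --osinfo detect=on,name=debian12 \
    ...   # remaining options unchanged

# or, to restore the old non-fatal behaviour
virt-install --connect qemu+tcp://x.x.x.x/system \
    --osinfo detect=on,require=off \
    ...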
Is there a way to make it detect the operating system automatically when connected remotely?
I’m using libvirtd (libvirt) 10.10.0 on Debian 12.
Best regards, and thank you.
5 days
The right way to revert to external disk snapshots
by Alex Serban
Hello everyone, I'm seeking guidance on *best practices* for virtual machine recovery using external disk snapshots, particularly in a storage environment with ZFS. My current snapshot and recovery *workflow* involves:
- keeping VM disks and state on a ZFS volume;
- creating external KVM/libvirt disk-only snapshots, resulting in deltas kept on the volume, next to the disk images;
- capturing the entire VM state through ZFS snapshots;
- recovering VMs through ZFS snapshot clones.
I am particularly interested in obtaining an app-consistent recovery, in which I need to revert to the VM's KVM snapshot to get the cleanest possible state offered by a quiesced snapshot. Reading other posts from the archive and forums, it is clear to me that I cannot simply revert to the VM's snapshot if it is a disk-only one, and that I have to manage such snapshots manually. Thus, my question is: *what is the best practice for recovering the VM to the external disk snapshot that we have*?
*What I have tried*, which worked but which I'm not sure is best practice: on a VM with only one snapshot, I changed the disk source files (which were pointing to deltas) to the ones pointed to by their backingStore source files, effectively making the VM use the disk state from the time of the snapshot. This only works for shut-off VMs, as live VMs cannot have their disk sources changed, of course. So, for powered-on VMs in the single-snapshot case, I chose to use `virDomainBlockPull` to have the app-consistent state pulled into the current disk (which was, and still is, pointing to the delta).
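For illustration, the virsh-level equivalent of that block pull step was roughly the following (domain and disk target are placeholders):

# pull the backing (snapshot-time) data into the active image
virsh blockpull mydomain vda --wait --verbose
# afterwards, the disk's backingStore in the live XML should be empty
virsh dumpxml mydomain | grep -A 3 '<disk'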
*My concerns* with this approach are mostly about scalability and the safety of the whole process:
- I am not sure how I could revert again to the current snapshot after the operations I did: for powered-off VMs, the disk images will change once we start using the VM, and for powered-on VMs, the block pull alters the deltas the disks were pointing to;
- I don't see how I could apply this method in a scalable way if the VM had more than one snapshot, at least for powered-on VMs.
So I thought I should seek some advice from you and see if there's another, smarter way I can do this. Thanks a lot for your time,
Alex Serban
6 days, 15 hours
Cannot start network interfaces
by Iain M Conochie
Good day all,
I am having an issue starting virtual networks defined within libvirt:
virsh # net-start cmdb
error: Failed to start network cmdb
error: internal error: Child process (VIR_BRIDGE_NAME=virbr3
/usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/cmdb.conf --
leasefile-ro --dhcp-script=/usr/lib/libvirt/libvirt_leaseshelper)
unexpected exit status 2:
dnsmasq: failed to create listening socket for 172.26.80.1: Address
already in use
The definition of the network is pretty simple:
virsh # net-dumpxml cmdb
<network>
<name>cmdb</name>
<uuid>42ffd3c4-624d-46ad-be21-bb3c61c67e41</uuid>
<bridge name='virbr3' stp='on' delay='0'/>
<mac address='52:54:00:5b:e9:6c'/>
<domain name='shihad.org'/>
<ip address='172.26.80.1' netmask='255.255.255.0'>
</ip>
</network>
At the OS level, this network interface should be created by libvirt, so I am a bit mystified as to why dnsmasq is having trouble starting, since nothing is using this address yet. In fact, virbr3 does not exist until libvirt brings up this interface:
ifconfig -a | grep virbr
virbr0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
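If it helps, I can also check for an existing listener on that address or on the DHCP/DNS ports with something like:

# look for anything already bound to 172.26.80.1 or to ports 53/67
ss -lnptu | grep -E '172\.26\.80\.1|:53 |:67 '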
I am running libvirt 9.0.0 on Debian 12.
Any help or pointers greatly appreciated.
Thanks!
Iain
1 week
RBD pool not starting "An error occurred, but the cause is unknown"
by Stuart Longland VK4MSL
Hi all,
I have an issue getting an RBD pool going on a newly deployed compute node.
The storage back-end is a Ceph storage cluster running Ceph 14
(Nautilus… yes I know this is old, an update to 18 is planned soon). I
have an existing node, running Debian 10 (again, updating this is
planned, but I'd like to deploy new nodes to migrate the instances to
whilst this node is updated), which runs about a dozen VMs with disks on
this back-end.
I've loaded a new machine (an MSI Cubi 5 mini PC) with Alpine Linux 3.21. The boot disk is a 240GB SATA SSD, and there's a 1TB NVMe drive for local VM storage. My intent is to allow VMs to mount RBDs for back-up purposes. The machine has two Ethernet interfaces (a 2.5Gbps and a 1Gbps link): one will be the "front-end" used by the VMs, the other will be a "back-end" link to talk to Ceph and administer the host.
- OpenVSwitch 2.17.11 is deployed with two bridges
- libvirtd 10.9.0 installed
- an LVM pool called 'data' has been created on the NVMe drive
- Ceph 19.2.0 is installed (libvirtd is linked to this version of librbd)
- /etc/ceph has been cloned from my existing working compute node
I have two RBD pools; 'one' and 'ha'. 'one' has most of my virtual
machine images in it (it is from a former OpenNebula install), 'ha' has
core router root disk images in it ('ha' for high availability; it has
stronger replication settings than 'one' to guarantee better reliability).
I've created a `libvirt` user in Ceph, and on the intended node, this works:
> ~ # rbd --id libvirt ls -p one | head
> mastodon-vda
> mastodon-vdb
> mastodon-vdd
> mastodon-vde
> one-14
> one-15
> one-19
> one-20
> one-22
> one-23
> ~ # rbd --id libvirt ls -p ha | head
> core-router-obsd75-vda
> core-router-obsd76-vda
I can also access RBD images just fine:
> ~ # rbd --id libvirt map one/shares-vda
> /dev/rbd0
> ~ # fdisk -l /dev/rbd0
> Disk /dev/rbd0: 20 GB, 21474836480 bytes, 41943040 sectors
> 2610 cylinders, 255 heads, 63 sectors/track
> Units: sectors of 1 * 512 = 512 bytes
>
> Device Boot StartCHS EndCHS StartLBA EndLBA Sectors Size Id Type
> /dev/rbd0p1 * 2,0,33 611,8,56 2048 616447 614400 300M 83 Linux
> /dev/rbd0p2 611,8,57 1023,15,63 616448 2584575 1968128 961M 82 Linux swap
> /dev/rbd0p3 1023,15,63 1023,15,63 2584576 41943039 39358464 18.7G 83 Linux
> ~ # rbd unmap one/shares-vda
This is registered in libvirtd:
> ~ # virsh secret-list
> UUID Usage
> --------------------------------------------------------------------
> c14a16b5-bba5-473a-ae9b-53a9a6b0a4e3 ceph client.libvirt secret
> ~ # virsh secret-dumpxml c14a16b5-bba5-473a-ae9b-53a9a6b0a4e3
> <secret ephemeral='no' private='no'>
> <uuid>c14a16b5-bba5-473a-ae9b-53a9a6b0a4e3</uuid>
> <usage type='ceph'>
> <name>client.libvirt secret</name>
> </usage>
> </secret>
I have defined four pools, 'temp', 'local', 'ha-images' and
'opennebula-images':
> ~ # virsh pool-list --all
> Name State Autostart
> -------------------------------------------
> default active yes
> ha-images active yes
> local active yes
> opennebula-images inactive yes
> temp active yes
'ha-images' works just fine; this is its config:
> ~ # virsh pool-dumpxml ha-images
> <pool type='rbd'>
> <name>ha-images</name>
> <uuid>6beab982-52b3-495b-a4a7-ab7ebb522ef5</uuid>
> <capacity unit='bytes'>20003977953280</capacity>
> <allocation unit='bytes'>159339114496</allocation>
> <available unit='bytes'>13142248669184</available>
> <source>
> <host name='172.31.252.1' port='6789'/>
> <host name='172.31.252.2' port='6789'/>
> <host name='172.31.252.5' port='6789'/>
> <host name='172.31.252.6' port='6789'/>
> <host name='172.31.252.7' port='6789'/>
> <host name='172.31.252.8' port='6789'/>
> <host name='172.31.252.9' port='6789'/>
> <host name='172.31.252.10' port='6789'/>
> <name>ha</name>
> <auth type='ceph' username='libvirt'>
> <secret uuid='c14a16b5-bba5-473a-ae9b-53a9a6b0a4e3'/>
> </auth>
> </source>
> </pool>
'opennebula-images' does not; this is its config:
> ~ # virsh pool-dumpxml opennebula-images
> <pool type='rbd'>
> <name>opennebula-images</name>
> <uuid>fcaa2fa8-f0d2-4919-9168-756a9f4ad7ee</uuid>
> <capacity unit='bytes'>20003977953280</capacity>
> <allocation unit='bytes'>5454371495936</allocation>
> <available unit='bytes'>13142254759936</available>
> <source>
> <host name='172.31.252.1' port='6789'/>
> <host name='172.31.252.2' port='6789'/>
> <host name='172.31.252.5' port='6789'/>
> <host name='172.31.252.6' port='6789'/>
> <host name='172.31.252.7' port='6789'/>
> <host name='172.31.252.8' port='6789'/>
> <host name='172.31.252.9' port='6789'/>
> <host name='172.31.252.10' port='6789'/>
> <name>one</name>
> <auth type='ceph' username='libvirt'>
> <secret uuid='c14a16b5-bba5-473a-ae9b-53a9a6b0a4e3'/>
> </auth>
> </source>
> </pool>
It's not obvious what the differences are. `name`, `uuid`, `allocation`, `available` and `source/name` are expected to be different; everything else matches 100%. I've tried removing and zeroing out the `capacity`, `allocation` and `available` tags to no effect.
> ~ # virsh pool-dumpxml ha-images > /tmp/ha-images.xml
> ~ # virsh pool-dumpxml opennebula-images > /tmp/opennebula-images.xml
> ~ # diff -u /tmp/ha-images.xml /tmp/opennebula-images.xml
> --- /tmp/ha-images.xml
> +++ /tmp/opennebula-images.xml
> @@ -1,9 +1,9 @@
> <pool type='rbd'>
> - <name>ha-images</name>
> - <uuid>6beab982-52b3-495b-a4a7-ab7ebb522ef5</uuid>
> + <name>opennebula-images</name>
> + <uuid>fcaa2fa8-f0d2-4919-9168-756a9f4ad7ee</uuid>
> <capacity unit='bytes'>20003977953280</capacity>
> - <allocation unit='bytes'>159339114496</allocation>
> - <available unit='bytes'>13142248669184</available>
> + <allocation unit='bytes'>5454371495936</allocation>
> + <available unit='bytes'>13142254759936</available>
> <source>
> <host name='172.31.252.1' port='6789'/>
> <host name='172.31.252.2' port='6789'/>
> @@ -13,7 +13,7 @@
> <host name='172.31.252.8' port='6789'/>
> <host name='172.31.252.9' port='6789'/>
> <host name='172.31.252.10' port='6789'/>
> - <name>ha</name>
> + <name>one</name>
> <auth type='ceph' username='libvirt'>
> <secret uuid='c14a16b5-bba5-473a-ae9b-53a9a6b0a4e3'/>
> </auth>
> ~ # diff -y /tmp/ha-images.xml /tmp/opennebula-images.xml
When I start this errant pool, I get this:
> ~ # virsh pool-start opennebula-images
> error: Failed to start pool opennebula-images
> error: An error occurred, but the cause is unknown
If I crank debugging up in `libvirtd` (via the not-recommended
`log_level` and directing all output to a file), I see it successfully
connects to the pool for about 15 seconds, lists the sizes of about a
dozen disk images, then seemingly gives up and disconnects.
> 2025-01-19 05:16:55.176+0000: 3609: info : vir_object_finalize:319 : OBJECT_DISPOSE: obj=0x7f975fc816a0
> 2025-01-19 05:16:55.177+0000: 3609: info : virObjectUnref:378 : OBJECT_UNREF: obj=0x7f975fc816a0
> 2025-01-19 05:16:55.183+0000: 3609: debug : virStorageBackendRBDRefreshPool:693 : Utilization of RBD pool one: (kb: 19535134720 kb_
> avail: 12800438616 num_bytes: 5489355030528)
> 2025-01-19 05:16:55.988+0000: 3609: debug : volStorageBackendRBDRefreshVolInfo:569 : Refreshed RBD image one/mastodon-vda (capacity
> : 21474836480 allocation: 21474836480 obj_size: 4194304 num_objs: 5120)
> 2025-01-19 05:16:55.993+0000: 3609: info : virObjectNew:256 : OBJECT_NEW: obj=0x7f975fa6bba0 classname=virStorageVolObj
> 2025-01-19 05:16:55.993+0000: 3609: info : virObjectRef:400 : OBJECT_REF: obj=0x7f975fa6bba0
> 2025-01-19 05:16:55.993+0000: 3609: info : virObjectRef:400 : OBJECT_REF: obj=0x7f975fa6bba0
> 2025-01-19 05:16:55.993+0000: 3609: info : virObjectRef:400 : OBJECT_REF: obj=0x7f975fa6bba0
> 2025-01-19 05:16:55.993+0000: 3609: info : virObjectUnref:378 : OBJECT_UNREF: obj=0x7f975fa6bba0
> 2025-01-19 05:16:56.011+0000: 3609: debug : volStorageBackendRBDRefreshVolInfo:569 : Refreshed RBD image one/mastodon-vdb (capacity
> : 536870912000 allocation: 536870912000 obj_size: 4194304 num_objs: 128000)
…snip…
> 2025-01-19 05:17:03.756+0000: 3609: debug : volStorageBackendRBDRefreshVolInfo:569 : Refreshed RBD image one/wsmail-vdb (capacity:
> 21474836480 allocation: 21474836480 obj_size: 4194304 num_objs: 5120)
> 2025-01-19 05:17:03.758+0000: 3609: info : virObjectNew:256 : OBJECT_NEW: obj=0x7f975f9cf250 classname=virStorageVolObj
> 2025-01-19 05:17:03.758+0000: 3609: info : virObjectRef:400 : OBJECT_REF: obj=0x7f975f9cf250
> 2025-01-19 05:17:03.758+0000: 3609: info : virObjectRef:400 : OBJECT_REF: obj=0x7f975f9cf250
> 2025-01-19 05:17:03.758+0000: 3609: info : virObjectRef:400 : OBJECT_REF: obj=0x7f975f9cf250
> 2025-01-19 05:17:03.758+0000: 3609: info : virObjectUnref:378 : OBJECT_UNREF: obj=0x7f975f9cf250
> 2025-01-19 05:17:03.777+0000: 3609: debug : volStorageBackendRBDRefreshVolInfo:569 : Refreshed RBD image one/sjl-router-obsd76-vda
> (capacity: 34359738368 allocation: 34359738368 obj_size: 4194304 num_objs: 8192)
> 2025-01-19 05:17:03.778+0000: 3609: debug : virStorageBackendRBDCloseRADOSConn:369 : Closing RADOS IoCTX
> 2025-01-19 05:17:03.778+0000: 3609: debug : virStorageBackendRBDCloseRADOSConn:374 : Closing RADOS connection
> 2025-01-19 05:17:03.783+0000: 3609: debug : virStorageBackendRBDCloseRADOSConn:378 : RADOS connection existed for 15 seconds
> 2025-01-19 05:17:03.783+0000: 3609: info : virObjectUnref:378 : OBJECT_UNREF: obj=0x7f975f9cf2b0
> 2025-01-19 05:17:03.783+0000: 3609: info : virObjectUnref:378 : OBJECT_UNREF: obj=0x7f975ef90ac0
> 2025-01-19 05:17:03.783+0000: 3609: info : virObjectUnref:378 : OBJECT_UNREF: obj=0x7f975fa6bd20
> 2025-01-19 05:17:03.783+0000: 3609: info : virObjectUnref:378 : OBJECT_UNREF: obj=0x7f975fa6ecc0
…snip…
> 2025-01-19 05:17:03.785+0000: 3609: info : vir_object_finalize:319 : OBJECT_DISPOSE: obj=0x7f975f9cee90
> 2025-01-19 05:17:03.785+0000: 3609: info : virObjectUnref:378 : OBJECT_UNREF: obj=0x7f975f9cee90
> 2025-01-19 05:17:03.785+0000: 3609: info : vir_object_finalize:319 : OBJECT_DISPOSE: obj=0x7f975fa6e960
> 2025-01-19 05:17:03.785+0000: 3609: info : virObjectUnref:378 : OBJECT_UNREF: obj=0x7f975fa6e960
> 2025-01-19 05:17:03.785+0000: 3609: error : storageDriverAutostartCallback:213 : internal error: Failed to autostart storage pool 'opennebula-images': no error
> 2025-01-19 05:17:03.785+0000: 3609: info : virObjectUnref:378 : OBJECT_UNREF: obj=0x7f975fcd0490
> 2025-01-19 05:17:03.785+0000: 3609: info : virObjectRef:400 : OBJECT_REF: obj=0x7f975fcd27c0
> 2025-01-19 05:17:03.785+0000: 3609: info : virObjectUnref:378 : OBJECT_UNREF: obj=0x7f975fcd27c0
> 2025-01-19 05:17:03.786+0000: 3609: info : virObjectRef:400 : OBJECT_REF: obj=0x7f975fcd06d0
> 2025-01-19 05:17:03.786+0000: 3609: info : virObjectUnref:378 : OBJECT_UNREF: obj=0x7f975fcd06d0
> 2025-01-19 05:17:03.786+0000: 3609: info : virObjectRef:400 : OBJECT_REF: obj=0x7f975fcd0130
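For reference, the settings I used in `libvirtd.conf` to capture the log above were roughly these (the output file path is simply where I chose to send it):

# /etc/libvirt/libvirtd.conf -- temporary debug settings
log_level = 1
log_outputs = "1:file:/var/log/libvirt/libvirtd-debug.log"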
If there's no cause for the error, it should not fail. If it fails, there should be a cause listed; there's no excuse for it being "unknown" -- just because Microsoft's OSes make up error codes that their own help system can't explain is no excuse for the open-source world to follow their example.
I'd happily provide more information, if someone can provide guidance on
how to locate it.
--
Stuart Longland (aka Redhatter, VK4MSL)
I haven't lost my mind...
...it's backed up on a tape somewhere.
1 week
Authoritative info on backup-begin versus snapshots/other state capture
by camccuk@yahoo.com
Hello all
Apologies for the basic nature of the question, but having recently started working with libvirt - and virtualisation in general - I find there is a lot of out-of-date and sometimes contradictory material out there across blogs, articles, Stack Overflow, the usual sources... I thought I might be able to get definitive answers here. For the record, I assume libvirt.org is authoritative, but while there is a lot of material there, its structure is not always clear to me. Also, the lack of dates on any of the pages leaves some room for doubt.
I am wondering if there is a recent, reliable summary of the various approaches and current best practices for backing up VMs that covers snapshots both internal and external, approaches that use backup-begin and third-party approaches which simply stop the VM and copy off files.
If there is no such summary, can anyone confirm my reading of https://libvirt.org/kbase/domainstatecapture.html that a simple backup-begin <domain-name> will:
- pause the VM and quiesce the disks (in which case, is the QEMU guest agent a requirement on the guest?)
- generate a date-suffixed, disk-only copy of a VM's disks alongside the originals, wherever that storage is
- not generate any backing image chains or metadata that needs to be retained
Furthermore, is it then possible to restore to that point by stopping a VM and associating that backup file with the VM, either by virsh-editing its XML or by overwriting the original file with the backup file?
This seems to be my experience in testing this, but there are very few references to this tool compared to the many lengthy discussions about snapshots and other approaches, which is a bit puzzling. It would be great to have this understanding confirmed or refined!
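For clarity, my test amounted to little more than this (the domain name is a placeholder):

# full, disk-only push backup with default settings
virsh backup-begin mydomain
# check whether the backup job has finished
virsh domjobinfo mydomain --completed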
Many thanks for any pointers
2 weeks, 4 days
libvirt-guests on Red Hat
by Joe Muller
Hello,
Per libvirt and Red Hat (up to RHEL 7) documentation, the libvirt-guests service should be enabled to ensure that guest VMs are properly shut down or suspended when the KVM host is shut down or rebooted. I understand that, in addition to other changes, Red Hat moved from the 'monolithic' libvirtd service model to the 'modular' multiple-service model. Out of the box, the systemd libvirt-guests.service is disabled, and testing shows that the associated qemu-kvm processes for virtual machines are simply killed as the various virt* services are stopped as part of the systemd shutdown target.
Did Red Hat just decide to fork libvirt and do their own thing, or is there some equivalent way to get the same clean shutdown behavior that libvirt-guests used to provide?
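For context, what I had expected to carry over from the monolithic setup is roughly the following (paths as on RHEL, values as I understand them):

# enable the service that saves/stops guests on host shutdown
systemctl enable --now libvirt-guests.service

# /etc/sysconfig/libvirt-guests (excerpt)
ON_BOOT=ignore
ON_SHUTDOWN=shutdown
SHUTDOWN_TIMEOUT=300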
Relevant documentation:
- https://libvirt.org/daemons.html
- https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/...
--
Joe Muller
System Administrator, Sonic
4 weeks, 1 day
Best practice to manage network
by linux@hklb.ch
Hi,
First, sorry if this topic has already been discussed recently (the only thread I found related to my problem was created in 2010...).
I have a hypervisor with KVM and LXC installed on Debian 12, and I'm using libvirt to create my VMs. All my networks are defined in /etc/network/interfaces.d/* (I'm using Open vSwitch with specific options, such as port mirroring, patch ports, ...), and I'm configuring the network in my VM XML definitions like this:
<interface type='bridge'>
  <mac address='52:54:00:ab:c3:d3'/>
  <source bridge='prod'/>
  <vlan>
    <tag id='55'/>
  </vlan>
  <virtualport type='openvswitch'>
    <parameters interfaceid='331d973c-0c5b-4d3c-b2ad-590f908e680d'/>
  </virtualport>
  <target dev='vnet180'/>
  <model type='virtio'/>
  <mtu size='9216'/>
  <alias name='net1'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</interface>
It works perfectly until I restart the network (using ifreload, ifup, systemctl network restart, ...): all my VMs become unreachable. To make them reachable again, I also need to restart libvirtd.
Is this behavior still expected? What would be a better way to configure the network?
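For instance, would it be preferable to describe the OVS bridge as a libvirt network and point the guests at that instead? Something along these lines (a sketch using my bridge name):

<network>
  <name>prod</name>
  <forward mode='bridge'/>
  <bridge name='prod'/>
  <virtualport type='openvswitch'/>
</network>

(with the guest interface then using type='network' and <source network='prod'/>)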
Thanks in advance
Lucas
1 month