libvirt KVM virtual NIC as ens160 for running a virtual appliance designed for VMware
by Sascha Frey
Hi,
I want to run a virtual appliance which was designed for VMware on Linux/KVM with libvirt.
Unfortunately, the kernel NIC name ens160 is hardcoded inside that image.
I can create a libvirt XML which produces a NIC recognised as ensXX, but not ens160, because libvirt doesn't allow PCI slot numbers above 0x1f.
Is there any way to create a virtual NIC named ens160 without modifying the image?
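(For context on why the slot limit matters: the "ens<N>" names come from systemd's slot-based naming, where N is the PCI hotplug/physical slot number that udev sees, not libvirt's slot attribute as such. A quick way to check how a given guest NIC's name is derived — a diagnostic sketch, run inside a test guest, with the interface name being only an example:)

# show which properties systemd/udev uses for the predictable name;
# ID_NET_NAME_SLOT is what produces the ens<N> form
udevadm test-builtin net_id /sys/class/net/enp1s0   # replace enp1s0 with the NIC's current name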
Thanks.
1 day, 8 hours
Re: Capture pcap for each VM
by Laine Stump
(added users(a)lists.libvirt.org back into the Cc so that anyone else
asking the same question in the future will benefit from the answer)
On 3/28/25 8:12 AM, James Liu wrote:
> Hi Laine,
>
> I tried your first solution but somehow it didn't work (the network is
> NAT). I edited the XML in the virt-manager GUI. Whenever I added the
> following options and clicked "Apply", virt-manager reverted (removed) the
> "alias" tag; consequently, the qemu command line couldn't find the
> device name ('foo' in this example).
Sorry, I had meant to point this out but somehow forgot - user-specified
aliases must begin with "ua-", otherwise they will be ignored. So
instead of using "<alias name='foo'/>", use "<alias name='ua-foo'/>"
(and then in the <qemu:arg value> use "netdev=hostua-foo").
Also note that your guest should be shut down while you're making these
changes.
>
> Appreciate your suggestions.
>
> <interface type="network">
>   <mac address="52:54:00:3a:bf:52"/>
>   <source network="default"/>
>   <model type="e1000e"/>
>   <alias name="foo"/>   <!-- virt-manager will always remove this tag -->
>   <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
> </interface>
>
> <qemu:commandline>
>   <qemu:arg value="-object"/>
>   <qemu:arg value="filter-dump,id=f1,netdev=foo,file=/tmp/xyzzy.pcap"/>
> </qemu:commandline>
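Putting the corrections together, the relevant pieces would look roughly like this (a sketch; note that <qemu:commandline> is only accepted when the qemu XML namespace is declared on the <domain> element):

<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  ...
  <interface type='network'>
    <mac address='52:54:00:3a:bf:52'/>
    <source network='default'/>
    <model type='e1000e'/>
    <alias name='ua-foo'/>
  </interface>
  ...
  <qemu:commandline>
    <qemu:arg value='-object'/>
    <qemu:arg value='filter-dump,id=f1,netdev=hostua-foo,file=/tmp/xyzzy.pcap'/>
  </qemu:commandline>
</domain>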
1 day, 15 hours
Clarifying isolated network rules in nftables
by Alexey Kashavkin
Hi,
I’m trying to understand how the firewall filtering works for an isolated network in libvirt v11.1.0. When I start the network, I see the following rules in nftables:
table ip libvirt_network {
        chain forward {
                type filter hook forward priority filter; policy accept;
                counter packets 0 bytes 0 jump guest_cross
                counter packets 0 bytes 0 jump guest_input
                counter packets 0 bytes 0 jump guest_output
        }

        chain guest_output {
                iif "virbr3" counter packets 0 bytes 0 reject
        }

        chain guest_input {
                oif "virbr3" counter packets 0 bytes 0 reject
        }

        chain guest_cross {
                iif "virbr3" oif "virbr3" counter packets 0 bytes 0 accept
        }

        chain guest_nat {
                type nat hook postrouting priority srcnat; policy accept;
        }
}
But when I ping from one VM to another on the same isolated network, I don't see the counters increase in any of these chains.
In the libvirt code, I found a comment in src/network/network_nftables.c:
/**
* nftablesAddForwardAllowCross:
*
* Add a rule to @fw to allow traffic to go across @iface (the virtual
* network's bridge) from one port to another. This allows all traffic
* between guests on the same virtual network.
*/
But it seems that these rules never match and are not needed: if I delete the table or some of its chains, nothing changes, and the VMs still have connectivity with each other on this network.
What are these rules for?
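(One thing that may be worth checking, as an assumption rather than a confirmed diagnosis: traffic between two ports of the same Linux bridge is switched at layer 2 and only traverses the IP-family forward hook when br_netfilter is loaded and the bridge-nf-call sysctls are enabled, which would explain counters that never move. A quick check on the host:)

# is bridged traffic being handed to the IP netfilter hooks at all?
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables

# watch the counters while pinging between the two guests
nft list table ip libvirt_network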
5 days, 8 hours
Capture pcap for each VM
by icefrog1950@gmail.com
Hi,
Is it possible to capture pcaps for each VM individually?
QEMU supports the command-line option '-object filter-dump,file=test.pcap'. I'm not sure whether libvirt supports this feature, or whether there are better ways to solve this.
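(For what it's worth, a host-side alternative that needs no qemu options — a sketch, where the domain and tap device names are just examples; the actual vnetX name comes from virsh domiflist:)

# find the tap device backing the VM's interface
virsh domiflist mydomain

# capture only that VM's traffic on the host
tcpdump -i vnet3 -w /tmp/mydomain.pcap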
Many thanks.
5 days, 20 hours
Running DHCP server inside of the Guest VM instead of the host.
by gameplayer2019pl@tutamail.com
Hello,
I've recently tried to run a Kea DHCP server inside a Debian VM, with the following host network configuration attached to that VM:
```
<network>
  <name>netasn-dhcpv6</name>
  <bridge name="netasn-dhcpv6" stp="on" delay="0"/>
  <mtu size="1500"/>
  <mac address="XX:XX:XX:XX:XX:XX"/>
  <dns enable="no"/>
  <ip family="ipv6" address="2a14:7581:feb::1" prefix="64">
  </ip>
</network>
```
But when I run dhclient on another Debian VM attached to the same network, I can't obtain an IPv6 lease from Kea.
The host is also running Debian 12.
Is there any way to make DHCP work from a guest VM instead of using libvirt's built-in DHCP?
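(Two things I would check first — a sketch rather than a confirmed diagnosis; the guest interface name is only an example:)

# on the host: is any DHCPv6 traffic actually crossing the bridge?
tcpdump -ni netasn-dhcpv6 'udp port 546 or udp port 547'

# in the client VM: dhclient only does DHCPv6 when invoked with -6
dhclient -6 -v enp1s0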
6 days
Integrating Open Virtual Network with Libvirt Provisioned VMs
by odukoyaonline@gmail.com
I'm a little confused trying to integrate OVN-defined networks with the VMs I have provisioned with libvirt (I tried with plain OVS and it worked fine). Are there any resources on this? If not, I'd appreciate input from anyone with prior experience, however little.
6 days, 11 hours
General use of zstd instead of zlib compression
by Michael Niehren
Hi,
currently I use qcow2 images with zlib compression on all VMs. When I do a backup, the backup image is
compressed with zstd level 3 to shrink it for transfer over not-so-fast internet connections.
So why not use zstd compression on the images directly? Are there any reasons not to do that?
As I always use virt-manager for administration, I patched qemu (v9.2.2) to create zstd-compressed images
by default (only one change, in line 3525). Newly created images do have compression type zstd, which works (qemu-img info).
I see one unusual thing: if I do a qemu-img convert with compression_type=zstd, the converted image
is much smaller than the original file, while qemu-img info shows compression type zstd for both. Do they use
different compression levels, maybe?
If I now do a virsh backup-begin <domain>, the backup image also has a bigger size than the original, while
showing zstd as compression type (qemu-img info). If I convert it with a similar command as above, both converted
images have nearly the same size. Even if I copy the smaller converted image over the original and boot the VM
from the smaller image, the backup image (after backup-begin) is bigger.
So I am confused. Is there an explanation for the different image sizes, or what's going on here?
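(For reference, a sketch of the comparison described above; the filenames are placeholders, and the -c flag is an assumption — without it, qemu-img convert writes uncompressed clusters regardless of the image's compression_type, which may be part of the size difference:)

# convert with explicit zstd compression
qemu-img convert -O qcow2 -c -o compression_type=zstd vm.qcow2 vm-converted.qcow2

# both report compression type zstd, yet the on-disk sizes differ
qemu-img info vm.qcow2
qemu-img info vm-converted.qcow2
du -h vm.qcow2 vm-converted.qcow2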
best regards
Michael
2 weeks, 1 day
live migration of SR-IOV vm
by Paul B. Henson
I have a VM using an SR-IOV NIC that I'm testing live migration on (Debian
12, OS packages).
Per the documentation, I have the SR-IOV link set as transient, with a
pointer to the persistent virtio link:
<interface type='network'>
  <mac address='52:54:00:a1:e0:38'/>
  <source network='sr-iov-intel-10G-1'/>
  <vlan>
    <tag id='400'/>
  </vlan>
  <model type='virtio'/>
  <teaming type='transient' persistent='ua-sr-iov-backup'/>
  <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
</interface>
and a persistent virtio link that is down by default:
<interface type='direct'>
  <mac address='52:54:00:a1:e0:38'/>
  <source dev='eno5np0.400' mode='bridge'/>
  <model type='virtio'/>
  <teaming type='persistent'/>
  <link state='down'/>
  <alias name='ua-sr-iov-backup'/>
  <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
</interface>
The failover driver finds this in the VM:
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
3: eth1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel master eth0 state DOWN mode DEFAULT group default qlen 1000
10: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000
and the network works fine. However, during migration, the SR-IOV
interface is removed, but the link on the virtio interface is *not*
brought up:
2: eth0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default qlen 1000
3: eth1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel master eth0 state DOWN mode DEFAULT group default qlen 1000
resulting in no network for part of the migration.
Once the box finishes migrating, the replacement SR-IOV link is plugged
back in, and all is well once again:
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
3: eth1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel master eth0 state DOWN mode DEFAULT group default qlen 1000
11: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000
My understanding was that virsh migrate is supposed to automatically
bring up the virtio link when the SR-IOV link is removed? Or do I need
to bring it up explicitly myself before the migration and take it down
again afterwards?
If I bring up the link manually before the migration, there are a few
packets lost at that time, but then none are lost during the migration
when the SR-IOV link is pulled, or after the migration when I shut that
link down again. Ideally no packets would be lost :), but I realize
that's unlikely in practice...
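(For reference, "bring up the link manually" here means something along these lines — a sketch, assuming virsh update-device can match the persistent interface unambiguously via its MAC plus PCI address, and using copies of the persistent <interface> XML above with the link state flipped; the domain and host names are placeholders:)

# before migrating: bring the virtio standby link up
virsh update-device guest1 sr-iov-backup-up.xml --live

virsh migrate --live guest1 qemu+ssh://otherhost/system

# afterwards: take it down again
virsh update-device guest1 sr-iov-backup-down.xml --live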
Thanks...
2 weeks, 1 day
system cache dropped
by matus valentin
Hi,
I have a setup with multiple virtual machines (VMs), each with a saved
state. All VMs share the same parent (backing) image, which is located on a
shared drive. Whenever I restore any VM using virsh restore, the parent is
dropped from the system cache, causing all of its data to be downloaded from
the shared drive again. This results in unnecessary network traffic, even
though the parent changes very rarely. However, if I create a child from the
parent and call virsh create to create a new VM, the parent is never dropped
from the system cache.
Is this expected behavior? Should the parent be retained in the system
cache during a virsh restore operation? Are there any configurations or
settings that can prevent the parent from being dropped from the cache?
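(One way to observe this, as a sketch — the path is a placeholder, and fincore is the util-linux tool; vmtouch would do the same job:)

# report how much of the backing image is resident in the page cache,
# before and after the virsh restore
fincore /shared/images/parent.qcow2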
thanks
2 weeks, 4 days
best backup strategy for full backups
by Michael Niehren
Hi all,
currently I only do full backups of my virtual machines.
For the backups I use the "old" strategy:
- virsh snapshot-create-as $vmname overlay --disk-only --atomic --no-metadata --quiesce
- copy the qcow2 image file
- virsh blockcommit $vmname $device --active --wait --pivot
- the guest agent in the VM sees a freeze/thaw interval of about 2 seconds
Now I want to switch to the new strategy with "backup-begin":
- virsh backup-begin $vmname
- the guest agent does not get a freeze/thaw signal
As the guest agent gets no signal, is a backup taken via "backup-begin" still consistent?
Or, to be consistent, do I have to send a virsh domfsfreeze $vmname before starting the backup and a
virsh domfsthaw $vmname when it is finished?
If so, the freeze/thaw interval on a huge disk would be much longer than 2 seconds.
So is the old method currently still the better way when doing only full backups?
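(For illustration, the explicit bracketing would look roughly like this — a sketch; it assumes that thawing immediately after backup-begin returns is enough because the backup job copies the disk state as of the moment the job starts, which is exactly the point I'd like to have confirmed:)

virsh domfsfreeze $vmname
virsh backup-begin $vmname    # job starts here; copying continues in the background
virsh domfsthaw $vmname       # assumption: the point-in-time is fixed at job start

# wait for the backup job to finish before using the result
virsh domjobinfo $vmname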
best regards,
Michael
3 weeks, 1 day