trustGuestRxFilters broken after upgrade to Debian 12
by Paul B. Henson
We've been running Debian 11 for a while, using SR-IOV:
<network>
  <name>sr-iov-intel-10G-1</name>
  <uuid>6bdaa4c8-e720-4ea0-9a50-91cb7f2c83b1</uuid>
  <forward mode='hostdev' managed='yes'>
    <pf dev='eth2'/>
  </forward>
</network>
and allocating VFs from the pool:
<interface type='network' trustGuestRxFilters='yes'>
  <mac address='52:54:00:08:da:5b'/>
  <source network='sr-iov-intel-10G-1'/>
  <vlan>
    <tag id='50'/>
  </vlan>
  <model type='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
After upgrading to Debian 12, when I try to start any VM that uses the
trustGuestRxFilters option, it fails to start with the message:
error: internal error: unable to execute QEMU command 'query-rx-filter':
invalid net client name: hostdev0
If I remove the option, the VM starts fine (but is of course broken
functionality-wise, as the option wasn't there just for fun :) ).
Any thoughts on what's going on here? The Debian 12 versions are:
libvirt-daemon/stable,now 9.0.0-4
qemu-system-x86/stable,now 1:7.2+dfsg-7+deb12u3
I see Debian 12 backports has version 8.1.2+ds-1~bpo12+1 of qemu, but no
newer versions of libvirt. I haven't tried the backports version to
see if that resolves the problem.
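In case it's useful while debugging: as far as I understand it,
trustGuestRxFilters on a hostdev VF relates to the VF "trust" setting, so a
possible interim workaround (untested, and assuming the PF is eth2 as above)
might be to turn on trust for the guest's VF directly on the host with
iproute2:

  # find the VF index whose MAC matches the guest (52:54:00:08:da:5b)
  ip link show eth2
  # then, assuming it is VF 0, let the guest change its own RX filters
  ip link set dev eth2 vf 0 trust on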
Thanks much...
3 weeks
per user vm isolation with shared network
by daggs
Greetings,
I have two VMs which I want to isolate per user; if I'm not mistaken, I can do that with a per-session URI.
But I also want to set up a virtual bridge so they are connected to each other.
It looks like a network defined as system is not visible in the session.
Is there a way to do that? If I define the same network in both sessions, will it work?
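(A hedged sketch of one way this is commonly done, in case it helps: keep the bridge itself on the host/system side and let each per-user session guest attach to it through qemu-bridge-helper, which session QEMU can use for bridges whitelisted in /etc/qemu/bridge.conf. The bridge name br-shared is just an example; the helper usually needs to be setuid root.)

  # on the host, as root: create the shared bridge and allow it for the helper
  ip link add br-shared type bridge
  ip link set br-shared up
  echo 'allow br-shared' >> /etc/qemu/bridge.conf

  <!-- in each session guest's XML -->
  <interface type='bridge'>
    <source bridge='br-shared'/>
    <model type='virtio'/>
  </interface>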
Thanks,
Dagg
4 months, 3 weeks
Re: dm-crypt performance regression due to workqueue changes
by Mikulas Patocka
On Sun, 30 Jun 2024, Tejun Heo wrote:
> Hello,
>
> On Sat, Jun 29, 2024 at 08:15:56PM +0200, Mikulas Patocka wrote:
>
> > With 6.5, we get 3600MiB/s; with 6.6 we get 1400MiB/s.
> >
> > The reason is that virt-manager by default sets up a topology where we
> > have 16 sockets, 1 core per socket, 1 thread per core. And that workqueue
> > patch avoids moving work items across sockets, so it processes all
> > encryption work only on one virtual CPU.
> >
> > The performance degradation may be fixed with "echo 'system'
> > >/sys/module/workqueue/parameters/default_affinity_scope" - but it is a
> > regression anyway, as many users don't know about this option.
> >
> > How should we fix it? There are several options:
> > 1. revert back to 'numa' affinity
> > 2. revert to 'numa' affinity only if we are in a virtual machine
> > 3. hack dm-crypt to set the 'numa' affinity for the affected workqueues
> > 4. any other solution?
>
> Do you happen to know why libvirt is doing that? There are many other
> implications to configuring the system that way and I don't think we want to
> design kernel behaviors to suit topology information fed to VMs which can be
> arbitrary.
>
> Thanks.
I don't know why. I added users@lists.libvirt.org to the CC.
How should libvirt properly advertise "we have 16 threads that are
dynamically scheduled by the host kernel, so the latencies between them
are changing and unpredictable"?
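(For reference, the topology in question is set in the guest XML; a minimal
sketch with this report's 16 vCPUs, showing the virt-manager default versus a
single-socket layout. The cpu mode shown is just an example.)

  <!-- virt-manager default: 16 sockets, 1 core each -->
  <vcpu>16</vcpu>
  <cpu mode='host-model'>
    <topology sockets='16' dies='1' cores='1' threads='1'/>
  </cpu>

  <!-- alternative: one socket with 16 cores -->
  <cpu mode='host-model'>
    <topology sockets='1' dies='1' cores='16' threads='1'/>
  </cpu>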
Mikulas
4 months, 3 weeks
Command line equivalent for cpu pinning
by Gianluca Cecchi
Hello,
on my Fedora 39 system with
libvirt 9.7.0-3.fc39
qemu-kvm 8.1.3-5.fc39
kernel 6.8.11-200.fc39.x86_64
I'm testing CPU pinning.
The hardware is a NUC with a
13th Gen Intel(R) Core(TM) i7-1360P
If I go from this in my guest XML:
<vcpu placement='static'>4</vcpu>
<cpu mode='host-passthrough' check='none' migratable='on'>
  <topology sockets='1' dies='1' cores='4' threads='1'/>
</cpu>
to this:
<vcpu placement='static'>4</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='0'/>
  <vcpupin vcpu='1' cpuset='2'/>
  <vcpupin vcpu='2' cpuset='4'/>
  <vcpupin vcpu='3' cpuset='6'/>
</cputune>
<cpu mode='host-passthrough' check='none' migratable='on'>
  <topology sockets='1' dies='1' cores='4' threads='1'/>
</cpu>
It seems to me that the generated command line of the qemu-system-x86_64
process doesn't change, as if the cputune options were not considered.
What should I see as different?
Actually it seems it is indeed honored: if I run stress-ng in the
VM in the second scenario and top on the host, I see only pCPUs
0, 2, 4 and 6 going up with the load, whereas in the first scenario
several different CPUs alternate in carrying the load.
The real question is: if I want to reproduce the cputune options
from the command line, how can I do it?
Is it only a cpuset wrapper around the qemu-system-x86_64 process,
placing it in a cpuset control group?
I see for the pid of the process
$ sudo cat /proc/340215/cgroup
0::/machine.slice/machine-qemu\x2d8\x2dc7anstgt.scope/libvirt/emulator
and
$ sudo systemd-cgls /machine.slice
CGroup /machine.slice:
└─machine-qemu\x2d8\x2dc7anstgt.scope …
└─libvirt
├─340215 /usr/bin/qemu-system-x86_64....
├─vcpu1
├─vcpu2
├─vcpu0
├─emulator
└─vcpu3
What could be an easy command to replicate from the command line what virsh
does?
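(A hedged sketch of a manual equivalent, assuming the pinning above: libvirt
does not just wrap the process in a cpuset; it pins each vCPU thread
individually. The vCPU thread IDs can be read over the monitor and pinned
with taskset; the placeholder TIDs below need to be filled in from the query
output.)

  # thread-id of each vCPU, via the QEMU monitor of the running domain
  virsh qemu-monitor-command --pretty c7anstgt '{"execute":"query-cpus-fast"}'

  # pin each vCPU thread to its host CPU, mirroring the <vcpupin> entries
  taskset -p -c 0 <tid-of-vcpu0>
  taskset -p -c 2 <tid-of-vcpu1>
  taskset -p -c 4 <tid-of-vcpu2>
  taskset -p -c 6 <tid-of-vcpu3>

  # whereas a plain wrapper would only confine the whole process
  taskset -c 0,2,4,6 qemu-system-x86_64 ...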
Thanks in advance
Gianluca
4 months, 3 weeks
cpu vmx migration issue
by d tbsky
Hi:
I updated our RHEL 9 systems to RHEL 9.4, which brings libvirt 10.0.
I tried to calculate the CPU baseline for our two-node cluster with
"virsh domcapabilities" followed by "virsh hypervisor-cpu-baseline
--migratable". The result has many CPU features beginning with "vmx".
The test cluster has an Intel E3-1280 V3 and an Intel i3-9100F.
When I try to live migrate a VM, it fails with "guest CPU doesn't
match specification: missing features:
vmx-apicv-register,vmx-apicv-vid,vmx-posted-intr".
On another cluster, with an Intel i5-2520M and an Intel i7-9750H,
migration works fine with the calculated CPU result, although there
are still many "vmx" CPU features in it.
If I delete all these vmx features, migration works fine on both
clusters, like in the old days.
I wonder what the benefit is of exposing these vmx features to a
guest if I don't do any nested virtualization.
Is it OK to drop all these vmx CPU features for a guest?
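(If dropping them turns out to be acceptable, a hedged sketch of doing it
declaratively in the guest XML with a custom-mode CPU: disabling the
top-level vmx feature, which also turns off nested virtualization. The model
name is only an example; whether libvirt then stops requiring the vmx-*
sub-features on the destination is worth verifying.)

  <cpu mode='custom' match='exact'>
    <model fallback='forbid'>Haswell-noTSX-IBRS</model>
    <feature policy='disable' name='vmx'/>
  </cpu>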
thanks a lot for help!
5 months
Running libvirt without dnsmasq
by procmem@riseup.net
Hi, we are trying to document a way for our users to run libvirt without dnsmasq, to reduce attack surface on the host. We are aware that the default network uses it, but we plan to disable that and use our own custom-configured networks instead. However, uninstalling dnsmasq causes libvirt to refuse to start, even when the default network is no longer running. Is this possible, or is it something that needs code changes upstream?
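(For what it's worth, a hedged sketch of a network definition that should not need dnsmasq at all, assuming an existing host bridge br0: a bridge-mode forward network has no <ip> element, so libvirt has no DHCP/DNS service to run for it. Whether libvirtd still insists on the dnsmasq binary being present at startup is exactly the question here.)

  <network>
    <name>no-dnsmasq</name>
    <forward mode='bridge'/>
    <bridge name='br0'/>
  </network>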
5 months
unable to start a nested vm due to iommu_group issue
by daggs
Greetings,
I'm working on a new OS for my server, which runs two VMs. I'm using nested VMs to work on it so I won't take the server down.
The new OS is Alpine 3.20; QEMU is 9.0.1 and libvirt is 10.3.0.
I have a device I want to pass to one of the guests, here is the iommu group layout:
IOMMU Group 18 00:1f.0 ISA bridge [0601]: Intel Corporation 82801IB (ICH9) LPC Interface Controller [8086:2918] (rev 02)
IOMMU Group 18 00:1f.2 SATA controller [0106]: Intel Corporation 82801IR/IO/IH (ICH9R/DO/DH) 6 port SATA Controller [AHCI mode] [8086:2922] (rev 02)
IOMMU Group 18 00:1f.3 SMBus [0c05]: Intel Corporation 82801I (ICH9 Family) SMBus Controller [8086:2930] (rev 02)
IOMMU Group 18 00:1f.4 Audio device [0403]: Intel Corporation 82801I (ICH9 Family) HD Audio Controller [8086:293e] (rev 03)
The device in question is 00:1f.4:
utils-server:/home/igor# lspci -s 00:1f.4 -nnv
00:1f.4 Audio device [0403]: Intel Corporation 82801I (ICH9 Family) HD Audio Controller [8086:293e] (rev 03)
Subsystem: Red Hat, Inc. QEMU Virtual Machine [1af4:1100]
Flags: bus master, fast devsel, latency 0, IRQ 10, IOMMU group 18
Memory at e1080000 (32-bit, non-prefetchable) [size=16K]
Capabilities: [60] MSI: Enable- Count=1/1 Maskable- 64bit+
Kernel driver in use: vfio-pci
To achieve that, I compiled the kernel with the ACS override patch and enabled it on the kernel command line:
BOOT_IMAGE=/boot/vmlinuz-lts root=UUID=44299ace-27c6-4047-8cbd-bbffcc0a65f0 ro modules=sd-mod,usb-storage,ext4 quiet rootfstype=ext4 iommu=pt intel_iommu=on pcie_acs_override=id:8086:293e,8086:10c9
In dmesg I can see that the ACS override is enabled:
# dmesg | grep "ACS overrides"
[ 0.000000] Warning: PCIe ACS overrides enabled; This may allow non-IOMMU protected peer-to-peer DMA
The XML file can be found at https://bpa.st/FARQ.
When I try to start the VM, I get this:
# virsh start streamer
error: Failed to start domain 'streamer'
error: internal error: QEMU unexpectedly closed the monitor (vm='streamer'): 2024-06-21T08:39:17.959476Z qemu-system-x86_64: -device {"driver":"vfio-pci","host":"0000:00:1f.4","id":"hostdev0","bus":"pcie.0","addr":"0x1f.0x5"}: vfio 0000:00:1f.4: group 18 is not viable
Please ensure all devices within the iommu_group are bound to their vfio bus driver
Any ideas what I am missing? Is it possible this simply cannot work inside a VM?
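(One hedged observation: the listing above still shows four functions in group 18, so the ACS override does not appear to have actually split the group. In that case QEMU's complaint is literal: every device in the group must be bound to vfio-pci. A sketch using standard sysfs knobs follows, though note that handing the ISA bridge and SATA controller over to vfio-pci means the host loses them, which may not be workable if the root disk hangs off that controller.)

  # bind every device in IOMMU group 18 to vfio-pci
  for dev in /sys/kernel/iommu_groups/18/devices/*; do
      d=$(basename "$dev")
      echo vfio-pci > "$dev/driver_override"
      [ -e "$dev/driver" ] && echo "$d" > "$dev/driver/unbind"
      echo "$d" > /sys/bus/pci/drivers_probe
  done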
Thanks,
Dagg
5 months
librbd encryption and guest XML
by melanie witt
Hi,
I have been trying to use the librbd engine to run a guest from an
encrypted RBD image and am running into some problems.
What I would like to do is:
1. Start from an unencrypted raw image with an OS
2. Make an encrypted clone of that image
3. Boot a guest from the encrypted clone image
What I have tried so far (simplified):
1. Make a clone of the unencrypted image
rbd clone images/unencrypted@snap images/encryptedclone
2. Format the clone image with encryption
rbd encryption format images/encryptedclone luks1 passphrase.bin
3. Create guest XML with the encrypted clone
[...]
<disk type="network" device="disk">
  <driver type="raw" cache="writeback"/>
  <source protocol="rbd" name="images/encryptedclone">
    <host name="127.0.0.1" port="6789"/>
    <encryption format="luks" engine="librbd">
      <secret type="passphrase" uuid="secretuuid"/>
    </encryption>
  </source>
  <auth username="cinder">
    <secret type="ceph" uuid="othersecretuuid"/>
  </auth>
  <target dev="vda" bus="virtio"/>
</disk>
[...]
and then call virDomainCreateWithFlags() with that XML.
I don't get any errors from libvirt (no errors about loading encryption),
but this configuration does not seem to work: the guest won't boot.
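(A hedged debugging step, assuming a reasonably recent Ceph: rbd-nbd can map
the image with the same encryption parameters, which separates "the
clone/format/passphrase are fine" from "the libvirt XML is wrong". Also note
the clone's parent is unencrypted, so reads that fall through to the parent
are plaintext; whether a single luks encryption spec in the XML copes with
that layering is worth checking.)

  # map the encrypted clone on the host and see if a filesystem shows up
  rbd device map -t nbd images/encryptedclone \
      --encryption-format luks1 \
      --encryption-passphrase-file passphrase.bin
  lsblk /dev/nbd0
  rbd device unmap -t nbd images/encryptedclone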
If anyone can give me a hint what I'm doing wrong, I would appreciate it.
Cheers,
-melwitt
5 months, 1 week
brew services start libvirt failed to start libvirt on MacOS Sonoma
by absiplant@gmail.com
Hello, I am hoping someone can point me in the right direction regarding issues I have starting up libvirt services on macOS Sonoma.
I am running:
macOS Sonoma 14.5
libvirt: stable 10.4.0 (bottled)
qemu: stable 9.0.1 (bottled)
I have the following entries in libvirt.conf:
listen_tls = 0
listen_tcp = 1
listen_addr = "0.0.0.0"
unix_sock_group = "libvirt"
unix_sock_ro_perms = "0777"
unix_sock_rw_perms = "0770"
unix_sock_admin_perms = "0700"
unix_sock_dir = "/opt/homebrew/var/run/libvirt"
auth_unix_ro = "none"
auth_unix_rw = "none"
auth_tls = "none"
tls_no_sanity_certificate = 1
tls_no_verify_certificate = 1
log_filters="1:qemu 1:libvirt 4:object 4:json 2:event 1:util"
log_outputs="1:file:/opt/homebrew/var/log/libvirt/libvirtd.log"
I added ```firewalld = 0``` and ```firewall_backend = "none"```. These made no difference; I have since removed both.
Snippets of the log in debug mode indicate that a usable firewall backend cannot be found. Is there a way to resolve this on macOS?
8617315328: debug : virSystemdNotifyStartup:648 : Skipping systemd notify, not requested
6105182208: debug : virThreadJobSet:97 : Thread 6105182208 is now running job daemon-init
6105182208: warning : virProcessGetStartTime:1205 : Process start time of pid 10814 not available on this platform
6105182208: debug : virGDBusIsServiceEnabled:386 : Service org.freedesktop.login1 is unavailable
6105182208: debug : virStateInitialize:663 : Running global init for Remote state driver
6105182208: debug : virStateInitialize:670 : State init result 1 (mandatory=0)
6105182208: debug : virStateInitialize:663 : Running global init for bridge state driver
6105182208: error : virNetworkLoadDriverConfig:147 : internal error: could not find a usable firewall backend
6105182208: debug : virStateInitialize:670 : State init result -1 (mandatory=0)
6105182208: error : virStateInitialize:674 : Initialisation of bridge state driver failed: internal error: could not find a usable firewall backend
6105182208: error : daemonRunStateInit:617 : Driver state initialisation failed
6105182208: debug : virThreadJobClear:122 : Thread 6105182208 finished job daemon-init with ret=0
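(A hedged reading of the log: the component that fails is the bridge/virtual-network driver, which on Linux fronts iptables or nftables; neither exists on macOS, so no usable firewall backend can be found. Also, if memory serves, firewall_backend is read from network.conf rather than libvirt.conf, which would explain why setting it made no difference. If building from source is an option, the network driver can be compiled out and guests given user-mode networking instead; both snippets below are standard but untested on this setup.)

  # build libvirt without the virtual network driver
  meson setup build -Ddriver_network=disabled
  ninja -C build

  <!-- per-guest user-mode networking instead of a libvirt network -->
  <interface type='user'>
    <model type='virtio'/>
  </interface>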
Thanks.
5 months, 1 week
vfio usage in a vm
by daggs
Greetings,
I wanted to know if it is possible to attach a virtual NIC (it shows up as 02:01.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8100/8101L/8139 PCI Fast Ethernet Adapter (rev 20)) to the vfio module. When I modprobe it, I get this error:
[ 854.624668] vfio-pci: probe of 0000:02:01.0 failed with error -22
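(A hedged guess: -22 (EINVAL) from vfio-pci inside a guest is what you would see when the VM has no usable IOMMU, since vfio-pci normally requires the device to sit in an IOMMU group. Two things that may be worth trying: give the guest a virtual IOMMU in its libvirt XML, or, strictly for testing, vfio's unsafe no-IOMMU mode. The device ID below is the emulated rtl8139's.)

  <!-- in the guest's libvirt XML (q35 machine type) -->
  <devices>
    <iommu model='intel'/>
  </devices>

  # or, inside the guest, testing only:
  echo Y > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
  echo "10ec 8139" > /sys/bus/pci/drivers/vfio-pci/new_id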
Thanks,
Dagg
5 months, 1 week