trustGuestRxFilters broken after upgrade to Debian 12
by Paul B. Henson
We've been running Debian 11 for a while, using SR-IOV:
<network>
  <name>sr-iov-intel-10G-1</name>
  <uuid>6bdaa4c8-e720-4ea0-9a50-91cb7f2c83b1</uuid>
  <forward mode='hostdev' managed='yes'>
    <pf dev='eth2'/>
  </forward>
</network>
and allocating VFs from the pool:
<interface type='network' trustGuestRxFilters='yes'>
  <mac address='52:54:00:08:da:5b'/>
  <source network='sr-iov-intel-10G-1'/>
  <vlan>
    <tag id='50'/>
  </vlan>
  <model type='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
After upgrading to Debian 12, when I try to start any vm which uses the
trustGuestRxFilters option, it fails to start with the message:
error: internal error: unable to execute QEMU command 'query-rx-filter':
invalid net client name: hostdev0
If I remove the option, it starts fine (but of course it's functionally
broken, as the option wasn't there just for fun :) ).
Any thoughts on what's going on here? The Debian 12 versions are:
libvirt-daemon/stable,now 9.0.0-4
qemu-system-x86/stable,now 1:7.2+dfsg-7+deb12u3
I see Debian 12 backports has version 8.1.2+ds-1~bpo12+1 of qemu, but no
newer versions of libvirt. I haven't tried the backports version to
see if that resolves the problem.
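For anyone wanting to try that, a minimal sketch of pulling QEMU from bookworm-backports (an assumption on my part, not something the original mail tested; the sources line is only needed if backports isn't already enabled):
# Enable bookworm-backports, then install the newer QEMU from it.
echo 'deb http://deb.debian.org/debian bookworm-backports main' | \
    sudo tee /etc/apt/sources.list.d/backports.list
sudo apt update
sudo apt install -t bookworm-backports qemu-system-x86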
Thanks much...
3 weeks
Error Cannot acquire state change lock from
remoteDispatchDomainMigratePrepare3Params during live migration of domains
by Christian Rohmann
Hello libvirt-users!
We observe lock-ups / timeouts in prometheus-libvirt-exporter
(https://github.com/inovex/prometheus-libvirt-exporter) when
libvirt is live-migrating domains:
> Timed out during operation: cannot acquire state change lock (held by
> monitor=remoteDispatchDomainMigratePrepare3Params)
All of the source code can be found at:
https://github.com/inovex/prometheus-libvirt-exporter/blob/master/pkg/exp....
Basically the error happens when DomainMemoryStats or other operational
domain info is queried via the libvirt socket.
1) We are actually using the read-only socket at
'/var/run/libvirt/libvirt-sock-ro', so there should not be any locking
required.
Is there any way to avoid this lock contention, e.g. by running a request
with some "nolock" indication (see the sketch at the end of this mail)?
2) Since this is reported as a timeout waiting for the lock, what is the
timeout, and would waiting a bit longer help?
Or is the lock active during the whole time a domain live migration is
running?
3) Is this in any way related to the type of migration? Tunneled vs.
native (https://libvirt.org/migration.html)?
4) Is there any indication that we could use to skip those domains (or
certain queries)?
The same issue was actually previously reported for another
implementation of a Prometheus exporter
(https://github.com/kumina/libvirt_exporter/issues/33).
Currently the exporter locks up or throws the mentioned timeout errors
during the migration of 200 domains, 5 at a time.
It would be awesome to find a way to make this work as smoothly as
possible, even during live migrations!
I am thankful for any insights into how the libvirt socket, the various
calls, the locking mechanisms or live migration modes work!
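Regarding question 1, one avenue that might be worth testing (a sketch only, not verified against this exporter): newer libvirt has a "don't wait for the job lock" mode for bulk domain stats, exposed both as a virsh flag and as the VIR_CONNECT_GET_ALL_DOMAINS_STATS_NOWAIT flag of virConnectGetAllDomainStats(), which an exporter could pass:
# Requires libvirt >= 4.5.0; skips any domain whose stats cannot be fetched
# without blocking on its per-domain job lock (e.g. one being live-migrated).
virsh domstats --nowait --state --balloon --block --interface
That would at least turn a hard timeout into missing samples for the domains currently under migration.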
Regards
Christian
7 months
all domains paused, maybe logging might be helpful
by Lennart Fricke
Hello,
I just hit the situation that all domains on a host were paused due to
missing disk space. It took me some time to figure out that there was no
space left for the images on the host. I learned that 'virsh domstate
--reason $GUEST' and 'virsh domblkerror $GUEST' could have helped me, but
the logs are silent about the problem.
Would it be possible to show these problems in the logs, or is there
documentation other than the reference that explains how to troubleshoot
such issues?
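For reference, a minimal sketch of the checks mentioned above (the expected output in the comments is an assumption for an out-of-space pause, not taken from this host):
GUEST=myguest                         # placeholder domain name
virsh domstate --reason "$GUEST"      # expected: "paused (I/O error)" for an ENOSPC pause
virsh domblkerror "$GUEST"            # expected: something like "vda: no space"
virsh event --all --loop              # watch I/O error / lifecycle events as they happen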
Thank you
Lennart
8 months
Re: PCI Hotplug to a VM does not work in Alma
by Michal Prívozník
On 3/19/24 22:51, Chanda Mendon (cmendon) via Users wrote:
> Hi
>
> We have a chassis with a peripheral PCI device installed. We have a
> hypervisor running on the chassis where we have deployed a VM which can
> use the PCI device once it is attached.
>
> When the PCI device is powered on or off we need to hotplug it in/out
> using virsh commands. Even though the virsh hotplug commands execute
> successfully, the VM only sees the PCI device for one or two seconds.
> What do you think is the issue?
> [package versions, virsh hotplug transcript and kernel log snipped; they are quoted in full in the original post below]
The fact that the device resets so often might suggest a problem with
the device itself. But since QEMU is seeing the device (even if only for
a brief period), I think libvirt's out of the picture. Perhaps QEMU
folks might have a better answer.
Michal
8 months
PCI Hotplug to a VM does not work in Alma
by Chanda Mendon (cmendon)
Hi
We have a chassis with a peripheral PCI device installed. We have a hypervisor running on the chassis where we have deployed a VM which can use the PCI device once it is attached.
When the PCI device is powered on or off we need to hotplug it in/out using virsh commands. Even though the virsh hotplug commands execute successfully, the VM only sees the PCI device for one or two seconds. What do you think is the issue?
[root ~]# uname -a
Linux 4.18.0-372.9.1.el8.x86_64 #1 SMP Fri Mar 15 05:32:38 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
[root ~]# virsh --version
8.0.0
[root ~]# rpm -qa | grep libvirt
libvirt-daemon-driver-nwfilter-8.0.0-5.el8.x86_64
python3-libvirt-7.8.0-1.el8.x86_64
libvirt-daemon-driver-storage-logical-8.0.0-5.el8.x86_64
libvirt-libs-8.0.0-5.el8.x86_64
libvirt-daemon-config-nwfilter-8.0.0-5.el8.x86_64
libvirt-daemon-driver-storage-gluster-8.0.0-5.el8.x86_64
libvirt-8.0.0-5.el8.x86_64
libvirt-daemon-8.0.0-5.el8.x86_64
libvirt-daemon-driver-nodedev-8.0.0-5.el8.x86_64
libvirt-daemon-config-network-8.0.0-5.el8.x86_64
libvirt-daemon-driver-storage-iscsi-8.0.0-5.el8.x86_64
libvirt-daemon-driver-storage-rbd-8.0.0-5.el8.x86_64
libvirt-daemon-driver-network-8.0.0-5.el8.x86_64
libvirt-daemon-driver-secret-8.0.0-5.el8.x86_64
python2-libvirt-python-5.10.0-1.el8.x86_64
libvirt-daemon-driver-qemu-8.0.0-5.el8.x86_64
libvirt-daemon-driver-storage-core-8.0.0-5.el8.x86_64
libvirt-daemon-driver-storage-iscsi-direct-8.0.0-5.el8.x86_64
libvirt-daemon-driver-storage-scsi-8.0.0-5.el8.x86_64
libvirt-client-8.0.0-5.el8.x86_64
libvirt-daemon-driver-storage-disk-8.0.0-5.el8.x86_64
libvirt-daemon-driver-storage-8.0.0-5.el8.x86_64
libvirt-daemon-driver-interface-8.0.0-5.el8.x86_64
libvirt-daemon-driver-storage-mpath-8.0.0-5.el8.x86_64
libvirt-daemon-kvm-8.0.0-5.el8.x86_64
[root ~]# rpm -qa | grep qemu
qemu-kvm-ui-opengl-6.2.0-11.el8.x86_64
qemu-kvm-6.2.0-11.el8.x86_64
qemu-img-6.2.0-11.el8.x86_64
qemu-kvm-block-iscsi-6.2.0-11.el8.x86_64
ipxe-roms-qemu-20200823-7.git4bd064de.el8.noarch
qemu-kvm-block-gluster-6.2.0-11.el8.x86_64
qemu-kvm-block-rbd-6.2.0-11.el8.x86_64
qemu-kvm-block-curl-6.2.0-11.el8.x86_64
qemu-kvm-core-6.2.0-11.el8.x86_64
qemu-kvm-hw-usbredir-6.2.0-11.el8.x86_64
libvirt-daemon-driver-qemu-8.0.0-5.el8.x86_64
qemu-kvm-ui-spice-6.2.0-11.el8.x86_64
qemu-kvm-docs-6.2.0-11.el8.x86_64
qemu-kvm-block-ssh-6.2.0-11.el8.x86_64
qemu-kvm-common-6.2.0-11.el8.x86_64
[root ~]# virsh nodedev-dettach pci_0000_04_00_0
[12013.987821] pci_probe_reset_slot: call pci_slot_reset with probe=1
[12014.063669] pci_slot_reset (printk info): reset hotplug slot.
[12014.134164] pci_reset_hotplug_slot(printk INFO): calling reset_slot (probe = 1)
[Mar19 21:09] pci_probe_reset_slot: call pci_slot_reset with probe=1
[ +0.075848] pci_slot_reset (printk info): reset hotplug slot.
[ +0.070495] pci_reset_hotplug_slot(printk INFO): calling reset_slot (probe = 1)
Device pci_0000_04_00_0 detached
[root ~]# virsh attach-device ROUTER8 /opt/us/bin/mrvl.xml
[12024.217540] pci_probe_reset_slot: call pci_slot_reset with probe=1
[ +10.083376] pci_probe_reset_slot: call pci_slot_reset with probe=1
[12024.293548] pci_slot_reset (printk info): reset hotplug slot.
[12024.434828] pci_reset_hotplug_slot(printk INFO): calling reset_slot (probe = 1)
[ +0.076008] pci_slot_reset (printk info): reset hotplug slot.
[ +0.141280] pci_reset_hotplug_slot(printk INFO): calling reset_slot (probe = 1)
4.526807] pci_probe_reset_slot: call pci_slot_reset with probe=1
[12024.749367] pci_slot_reset (printk info): reset hotplug slot.
[12024.821327] pci_reset_hotplug_slot(printk INFO): calling reset_slot (probe = 1)
[ +0.091979] pci_probe_reset_slot: call pci_slot_reset with probe=1
[12024.911065] pci_probe_reset_slot: call pci_slot_reset with probe=1
[ +0.222560] pci_slot_reset (printk info): reset hotplug slot.
[ +0.071960] pci_reset_hotplug_slot(printk INFO): calling reset_slot (probe = 1)
5.057584] pci_slot_reset (printk info): reset hotplug slot.
[ +0.089738] pci_probe_reset_slot: call pci_slot_reset with probe=1
[12025.278585] pci_reset_hotplug_slot(printk INFO): calling reset_slot (probe = 1)
[ +0.146519] pci_slot_reset (printk info): reset hotplug slot.
[ +0.221001] pci_reset_hotplug_slot(printk INFO): calling reset_slot (probe = 1)
5.441574] pci_probe_reset_slot: call pci_slot_reset with probe=1
[ +0.162989] pci_probe_reset_slot: call pci_slot_reset with probe=1
[12025.665968] pci_slot_reset (printk info): reset hotplug slot.
[12025.806751] pci_reset_hotplug_slot(printk INFO): calling reset_slot (probe = 1)
[12025.898060] __pci_reset_slot (printk info): Reset slot (not hotplug), probe = 1.
[ +0.224394] pci_slot_reset (printk info): reset hotplug slot.
[ +0.140783] pci_reset_hotplug_slot(printk INFO): calling reset_slot (probe = 1)
5.988477] pci_slot_reset (printk info): reset hotplug slot.
[ +0.091309] __pci_reset_slot (printk info): Reset slot (not hotplug), probe = 1.
12026.205654] pci_reset_hotplug_slot(printk INFO): calling reset_slot (probe = 1)
[12026.383424] __pci_reset_slot (printk info): pci_slot_trylock is non-zero, so reset hotplug slot.
[ +0.090417] pci_slot_reset (printk info): reset hotplug slot.
[ +0.217177] pci_reset_hotplug_slot(printk INFO): calling reset_slot (probe = 1)
6.490406] pci_reset_hotplug_slot(printk INFO): calling reset_slot (probe = 0)
[ +0.177770] __pci_reset_slot (printk info): pci_slot_trylock is non-zero, so reset hotplug slot.
2026.726579] pciehp_reset_slot: SLOTCTRL 58 write cmd 0
[ +0.106982] pci_reset_hotplug_slot(printk INFO): calling reset_slot (probe = 0)
[ +0.236167] pcieport 0000:00:0e.0: pciehp: pciehp_reset_slot: SLOTCTRL 58 write cmd 0
[ +0.000006] pciehp_reset_slot: SLOTCTRL 58 write cmd 0
[12027.744480] pending interrupts 0x0108 from Slot Status
[ +1.017890] pcieport 0000:00:0e.0: pciehp: pending interrupts 0x0108 from Slot Status
[ +0.000011] pending interrupts 0x0108 from Slot Status
[12027.936469] pciehp_reset_slot: SLOTCTRL 58 write cmd 1008
[ +0.191983] pcieport 0000:00:0e.0: pciehp: pciehp_reset_slot: SLOTCTRL 58 write cmd 1008
12028.027182] pcieport 0000:00:0e.0: pciehp: Slot(4): Link Down
[12028.190357] Slot(4): Link Down
[ +0.000006] pciehp_reset_slot: SLOTCTRL 58 write cmd 1008
[12028.230605] pcieport 0000:00:0e.0: pciehp: Slot(4): Card not present
[12028.369819] Slot(4): Card not present
Device attached successfully
[12028.415302] pciehp_unconfigure_device: domain:bus:dev = 0000:04:00
[ +0.090713] pcieport 0000:00:0e.0: pciehp: Slot(4): Link Down
[12028.524356] vfio-pci 0000:04:00.0: Relaying device request to user (#0)
[ +0.163175] Slot(4): Link Down
[ +0.040248] pcieport 0000:00:0e.0: pciehp: Slot(4): Card not present
[ +0.139214] Slot(4): Card not present
[ +0.045479] pcieport 0000:00:0e.0: pciehp: pciehp_unconfigure_device: domain:bus:dev = 0000:04:00
[ +0.000004] pciehp_unconfigure_device: domain:bus:dev = 0000:04:00
[ +0.109054] vfio-pci 0000:04:00.0: Relaying device request to user (#0)
9119] pci_probe_reset_slot: call pci_slot_reset with probe=1
[root@nfvis ~]# [12029.151422] pci_slot_reset (printk info): reset hotplug slot.
[12029.237806] pci_reset_hotplug_slot(printk INFO): calling reset_slot (probe = 1)
[ +0.334763] pci_probe_reset_slot: call pci_slot_reset with probe=1
[12029.327572] pci_probe_reset_slot: call pci_slot_reset with probe=1
[ +0.292303] pci_slot_reset (printk info): reset hotplug slot.
[ +0.086384] pci_reset_hotplug_slot(printk INFO): calling reset_slot (probe = 1)
12029.474447] pci_slot_reset (printk info): reset hotplug slot.
[12029.632533] pci_reset_hotplug_slot(printk INFO): calling reset_slot (probe = 1)
[ +0.089766] pci_probe_reset_slot: call pci_slot_reset with probe=1
[12029.788423] __pci_reset_slot (printk info): Reset slot (not hotplug), probe = 1.
[ +0.146875] pci_slot_reset (printk info): reset hotplug slot.
[12029.950782] pci_slot_reset (printk info): reset hotplug slot.
[ +0.158086] pci_reset_hotplug_slot(printk INFO): calling reset_slot (probe = 1)
[ +0.155890] __pci_reset_slot (printk info): Reset slot (not hotplug), probe = 1.
.089787] pci_reset_hotplug_slot(printk INFO): calling reset_slot (probe = 1)
[12030.346708] __pci_reset_slot (printk info): pci_slot_trylock = 0, so don't reset hotplug slot.
[ +0.162359] pci_slot_reset (printk info): reset hotplug slot.
[12030.451555] vfio-pci 0000:04:00.0: can't change power state from D0 to D3hot (config space inaccessible)
[ +0.139005] pci_reset_hotplug_slot(printk INFO): calling reset_slot (probe = 1)
12030.632974] pci 0000:04:00.0: Removing from iommu group 58
[ +0.256921] __pci_reset_slot (printk info): pci_slot_trylock = 0, so don't reset hotplug slot.
[ +0.104847] vfio-pci 0000:04:00.0: can't change power state from D0 to D3hot (config space inaccessible)
5279] pciehp_check_link_active: lnk_status = 7011
[ +0.181419] pci 0000:04:00.0: Removing from iommu group 58
[12031.056747] pcieport 0000:00:0e.0: pciehp: Slot(4): Card present
[12031.194871] Slot(4): Card present
[ +0.152301] pcieport 0000:00:0e.0: pciehp: pciehp_check_link_active: lnk_status = 7011
12031.236173] pcieport 0000:00:0e.0: pciehp: Slot(4): Link Up
[12031.395181] Slot(4): Link Up
[ +0.000004] pciehp_check_link_active: lnk_status = 7011
[ +0.271468] pcieport 0000:00:0e.0: pciehp: Slot(4): Card present
[ +0.138124] Slot(4): Card present
[ +0.041302] pcieport 0000:00:0e.0: pciehp: Slot(4): Link Up
[ +0.159008] Slot(4): Link Up
31.560448] read_dev_vendor_id: read config_dword worked. bus = 4, dev_vendor id = 0xE61E11AB
[12031.803909] read_dev_vendor_id: device found, return true. bus = 4, dev_vendor id = 0xE61E11AB
[12031.909784] pciehp_check_link_status: lnk_status = 7011
[ +0.165267] read_dev_vendor_id: read config_dword worked. bus = 4, dev_vendor id = 0xE61E11AB
2031.974018] read_dev_vendor_id: read config_dword worked. bus = 4, dev_vendor id = 0xE61E11AB
[12032.176762] read_dev_vendor_id: device found, return true. bus = 4, dev_vendor id = 0xE61E11AB
[ +0.243461] read_dev_vendor_id: device found, return true. bus = 4, dev_vendor id = 0xE61E11AB
[ +0.105871] pcieport 0000:00:0e.0: pciehp: pciehp_check_link_status: lnk_status = 7011
282681] pci 0000:04:00.0: [11ab:e61e] type 00 class 0x020000
[ +0.000004] pciehp_check_link_status: lnk_status = 7011
[12032.545840] pci 0000:04:00.0: reg 0x10: [mem 0x7b44000000-0x7b440fffff 64bit pref]
[12032.701716] pci 0000:04:00.0: reg 0x18: [mem 0x7b40000000-0x7b43ffffff 64bit pref]
[ +0.064234] read_dev_vendor_id: read config_dword worked. bus = 4, dev_vendor id = 0xE61E11AB
2032.794062] pci 0000:04:00.0: reg 0x20: [mem 0x7b30000000-0x7b3fffffff 64bit pref]
[12032.984416] pci 0000:04:00.0: supports D1 D2
[ +0.202744] read_dev_vendor_id: device found, return true. bus = 4, dev_vendor id = 0xE61E11AB
2033.039350] pci 0000:04:00.0: Adding to iommu group 58
[12033.201376] pci 0000:04:00.0: BAR 4: assigned [mem 0x7b30000000-0x7b3fffffff 64bit pref]
[12033.299989] pci 0000:04:00.0: BAR 2: assigned [mem 0x7b40000000-0x7b43ffffff 64bit pref]
[ +0.105919] pci 0000:04:00.0: [11ab:e61e] type 00 class 0x020000
[12033.400664] pci 0000:04:00.0: BAR 0: assigned [mem 0x7b44000000-0x7b440fffff 64bit pref]
[ +0.263159] pci 0000:04:00.0: reg 0x10: [mem 0x7b44000000-0x7b440fffff 64bit pref]
12033.568019] pcieport 0000:00:0e.0: PCI bridge to [bus 04-06]
[12033.725967] pcieport 0000:00:0e.0: bridge window [io 0xd000-0xdfff]
[ +0.155876] pci 0000:04:00.0: reg 0x18: [mem 0x7b40000000-0x7b43ffffff 64bit pref]
12033.805813] pcieport 0000:00:0e.0: bridge window [mem 0xdc000000-0xdcffffff]
[12033.982533] pcieport 0000:00:0e.0: bridge window [mem 0x7b30000000-0x7b6fffffff 64bit pref]
[ +0.092346] pci 0000:04:00.0: reg 0x20: [mem 0x7b30000000-0x7b3fffffff 64bit pref]
12034.086495] pci-stub 0000:04:00.0: claimed by stub
[ +0.190354] pci 0000:04:00.0: supports D1 D2
[ +0.054934] pci 0000:04:00.0: Adding to iommu group 58
[ +0.162026] pci 0000:04:00.0: BAR 4: assigned [mem 0x7b30000000-0x7b3fffffff 64bit pref]
[ +0.098613] pci 0000:04:00.0: BAR 2: assigned [mem 0x7b40000000-0x7b43ffffff 64bit pref]
[ +0.100675] pci 0000:04:00.0: BAR 0: assigned [mem 0x7b44000000-0x7b440fffff 64bit pref]
[ +0.167355] pcieport 0000:00:0e.0: PCI bridge to [bus 04-06]
[ +0.157948] pcieport 0000:00:0e.0: bridge window [io 0xd000-0xdfff]
[ +0.079846] pcieport 0000:00:0e.0: bridge window [mem 0xdc000000-0xdcffffff]
[ +0.176720] pcieport 0000:00:0e.0: bridge window [mem 0x7b30000000-0x7b6fffffff 64bit pref]
[ +0.103962] pci-stub 0000:04:00.0: claimed by stub
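For anyone following up, a hedged sketch of checks that would narrow down where the device disappears (the domain and device names are taken from the transcript above; none of this output is in the original mail):
virsh dumpxml ROUTER8 | grep -B1 -A8 '<hostdev'   # is the hostdev still present in the live XML after the link flap?
virsh nodedev-dumpxml pci_0000_04_00_0            # does the host still see the function and its driver binding?
journalctl -u libvirtd --since '-10 min'          # any detach / device-deleted events around the time the guest loses it?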
Thank you for replying in advance,
-Chanda
8 months, 1 week
Automate VM migration
by aheath1992@gmail.com
Are there any tools that can automatically migrate VMs from one host to another, like in VMware, Proxmox, and oVirt?
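For context, the primitive any such tool would wrap is libvirt's own live migration; a minimal sketch (host and guest names are placeholders, not from the original mail):
# Live-migrate a running guest to another libvirt host over SSH,
# keeping it defined on the destination and removing it from the source.
virsh migrate --live --persistent --undefinesource \
    myguest qemu+ssh://destination-host/system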
8 months, 1 week
Info regarding AMX support and libvirt implications
by Gianluca Cecchi
Hello,
I'm trying to use AMX in my virtual machines.
More info on AMX:
https://www.intel.com/content/www/us/en/products/docs/accelerator-engines...
My system under test is currently SLES 15 SP5.
I'm also verifying in parallel with SUSE (especially regarding the
backported features in their 5.14-based kernel), but in the meantime I
would like to understand the implications, if any, for libvirt in the
certification loop I have to analyse.
From what I see, upstream has:
- support in the KVM kernel module since 5.17
- support for the SapphireRapids CPU model, the first to offer AMX as an ISA
extension, in QEMU since 7.0
Is there any dependency to check on libvirt too?
When I run
virsh cpu-models x86_64
is the libvirt software stack querying QEMU directly? Or the KVM kernel
module? Or some internal "database" file?
From the man page it is not clear to me what "known" means:
"
cpu-models
Syntax:
cpu-models arch
Print the list of CPU models known by libvirt for the specified
architecture. Whether a specific hypervisor is able to create a domain
which uses any of the printed CPU models is a separate question which can
be answered by looking at the domain capabilities XML returned by
domcapabilities command. Moreover, for some architectures libvirt does not
know any CPU models and the usable CPU models are only limited by the
hypervisor. This command will print that all CPU models are accepted for
these architectures and the actual list of supported CPU models can be
checked in the domain capabilities XML.
"
In SLES 15 SP5 with:
qemu-7.1.0-150500.49.9.2.x86_64
kernel-default-5.14.21-150500.55.49.1.x86_64
libvirtd-*-9.0.0-150500.6.11.1.x86_64
I get
# virsh cpu-models x86_64
...
Cascadelake-Server
Cascadelake-Server-noTSX
Icelake-Client
Icelake-Client-noTSX
Icelake-Server
Icelake-Server-noTSX
Cooperlake
Snowridge
athlon
phenom
Opteron_G1
Opteron_G2
...
# virsh domcapabilities | grep -i sapphirerapid
#
In Fedora 39 with
qemu-8.1.3-4.fc39.x86_64
kernel-6.7.5-200.fc39.x86_64
libvirt-*-9.7.0-2.fc39.x86_64
I get
# virsh cpu-models x86_64
...
Cascadelake-Server
Cascadelake-Server-noTSX
Icelake-Client
Icelake-Client-noTSX
Icelake-Server
Icelake-Server-noTSX
Cooperlake
Snowridge
SapphireRapids
athlon
phenom
Opteron_G1
Opteron_G2
...
# virsh domcapabilities | grep -i sapphirerapids
<model usable='no' vendor='Intel'>SapphireRapids</model>
#
because I'm running on a client system without AMX support
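For what it's worth, a quick sketch of host-side checks (the flag names are the standard amx_* cpuinfo flags; nothing below is taken from the systems described above):
# On a host whose kernel and QEMU both expose AMX, one would expect:
grep -o 'amx[a-z0-9_]*' /proc/cpuinfo | sort -u   # amx_bf16, amx_int8, amx_tile
virsh domcapabilities | grep -i sapphirerapids    # usable='yes' once the whole stack supports it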
Thanks in advance,
Gianluca
8 months, 2 weeks
virsh undefine --nvram
by Chuck Lever
Hello -
The virsh(1) man page says:
> --nvram and --keep-nvram specify accordingly to delete or keep nvram (/domain/os/nvram/) file.
However, on my Fedora 39 system with libvirt 9.7.0, using the "virsh
undefine --nvram" option appears to leave the NVRAM backing files behind:
cel@boudin:~$ ls ~/.config/libvirt/qemu/nvram
guestfs-02a4oblo9vg7888d_VARS.qcow2 guestfs-7hmiolqcqwz5b0qs_VARS.qcow2
guestfs-el7lqymnwgzpnr38_VARS.qcow2 guestfs-jp0tfneh205xjrh8_VARS.qcow2
guestfs-mhqbp4gwmqsuu9w3_VARS.qcow2 guestfs-sat7x2kgpg9opjds_VARS.qcow2
guestfs-vhhi476rsq7vh83z_VARS.qcow2 guestfs-177mrq9xrb5589wl_VARS.qcow2
guestfs-88p8heyq1j0sy8bk_VARS.qcow2 guestfs-gxul3j9e1h3po19o_VARS.qcow2
guestfs-k0jf8msw4kh3yuff_VARS.qcow2 guestfs-oakcwpbozezvids8_VARS.qcow2
guestfs-sinmkv97r4k4dzcd_VARS.qcow2 guestfs-vj5cu9hgwcv0f18u_VARS.qcow2
guestfs-6jnim67ceme3h369_VARS.qcow2 guestfs-89g707uzdttxb9hv_VARS.qcow2
guestfs-h6vzt9zqxhoxx0hj_VARS.qcow2 guestfs-k4hw4gkcq8u0eebj_VARS.qcow2
guestfs-pljksgeblxgyf0pw_VARS.qcow2 guestfs-t8bnntykpzm3qvh2_VARS.qcow2
guestfs-ynbxqtj01sn4ovwy_VARS.qcow2 guestfs-6sqhenv6ditrq410_VARS.qcow2
guestfs-at8lndx32y69bjmg_VARS.qcow2 guestfs-ht0uycdn3vwdozad_VARS.qcow2
guestfs-lfy19ftph83jo4v0_VARS.qcow2 guestfs-qj0c2u9w2k4ors6h_VARS.qcow2
guestfs-tmq0h36spng4vtwk_VARS.qcow2 guestfs-7erqks7ks5c8rj38_VARS.qcow2
guestfs-ba2kzqxb1pna12l5_VARS.qcow2 guestfs-is7avsxjtnig5kgb_VARS.qcow2
guestfs-m3nrhs3x58rlget9_VARS.qcow2 guestfs-rilsled4ff9wvk6v_VARS.qcow2
guestfs-veebmlno2okc7c8h_VARS.qcow2
cel@boudin:~$
Or perhaps I do not understand what these files are.
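A minimal sketch of how one might verify the behaviour (the domain name is hypothetical; it assumes the domain definition actually carries an <nvram> element pointing at one of the files above):
virsh dumpxml SOMEDOMAIN | grep -i nvram     # confirm the definition references a _VARS.qcow2 file
virsh undefine --nvram SOMEDOMAIN            # should delete that referenced file
ls ~/.config/libvirt/qemu/nvram              # check whether the file is actually gone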
--
Chuck Lever
8 months, 2 weeks
Shandong Province Women's Prison using inmates
by Bai Guoliang
*Hello!*
A supplement to the facts about Ward 11 of Shandong Province Women's Prison using inmates to persecute practitioners.
Prison guards conniving at inmates beating people is itself against the law. An inmate using violence to beat others in prison is in itself a crime committed inside the prison; as prison police they not only failed to stop it but condoned the beatings, concealed the facts, and fabricated evidence to frame the victim, and as the incidents kept increasing Song Chunmei became ever more brazen. This is crime upon crime. When an inmate commits a crime inside the prison, internal prison regulations require solitary confinement under high security, yet the prison not only did nothing but also concealed the facts and had people fabricate evidence to turn on the victim. Such behaviour is utterly vile and constitutes a crime.
*👉 See the attachment for details.*
Thank you for reading this article!
---
*Falun Dafa is the Buddha Law, the righteous Law. Sincerely reciting the nine-character true words brings blessings.*
*Falun Dafa is good; Truthfulness, Compassion, Forbearance is good*
For decades, under the CCP's brainwashing and indoctrination, huge numbers of Chinese people were pressured into joining the Communist Party and its affiliated organizations (the Communist Youth League and the Young Pioneers). In the global "quit the Party" movement, the Chinese people are awakening from the CCP's lies. They publicly declare their withdrawal from the CCP and distance themselves from evil.
At this moment, we want the world to hear our voice:
Down with the CCP demon! End the CCP!
As of February 12, 2024,
426,111,490
Chinese citizens have quit the Communist Party and its affiliated organizations.
8 months, 3 weeks