[libvirt-users] [virtual interface] detach interface during boot succeeds with no changes
by Yalan Zhang
Hi guys,
When I detach an interface from a VM during boot (before the guest has finished booting), the command reports success but the interface is never actually removed. I'm not sure whether there is an existing bug for this. I have confirmed with someone that disks show similar behavior; is this also considered acceptable?
# virsh destroy rhel7.2; virsh start rhel7.2; sleep 2; virsh detach-interface rhel7.2 network 52:54:00:98:c4:a0; sleep 2; virsh dumpxml rhel7.2 | grep /interface -B9
Domain rhel7.2 destroyed
Domain rhel7.2 started
Interface detached successfully
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </controller>
    <interface type='network'>
      <mac address='52:54:00:98:c4:a0'/>
      <source network='default' bridge='virbr0'/>
      <target dev='vnet0'/>
      <model type='rtl8139'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
When I detach after the VM has finished booting (increasing the sleep to 10 seconds), it succeeds:
# virsh destroy rhel7.2; virsh start rhel7.2; sleep 10; virsh detach-interface rhel7.2 network 52:54:00:98:c4:a0; sleep 2; virsh dumpxml rhel7.2 | grep /interface -B9
Domain rhel7.2 destroyed
Domain rhel7.2 started
Interface detached successfully
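For now, a rough workaround sketch (same domain and MAC as above) is to retry the detach and poll dumpxml instead of relying on a fixed sleep:

for i in $(seq 1 30); do
    virsh detach-interface rhel7.2 network 52:54:00:98:c4:a0
    sleep 2
    # stop as soon as the interface is really gone from the live XML
    virsh dumpxml rhel7.2 | grep -q '52:54:00:98:c4:a0' || break
done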
-------
Best Regards,
Yalan Zhang
IRC: yalzhang
Internal phone: 8389413
2 years, 3 months
libvirtd file descriptors leaking
by Дмитрий
Hello,
## SETUP
CentOS 8 Stream
libguestfs-tools.noarch 1:1.40.2-24.el8.plesk
libvirt.x86_64 7.6.0-6.el8s
## ISSUE REPRODUCTION
lsof -p $(cat /run/libvirtd.pid) | wc -l  # here we have N open file descriptors held by libvirtd
guestfish --ro -d domain1
run     # now we have N+1 open file descriptors held by libvirtd
Ctrl+D  # and we still have N+1 open file descriptors held by libvirtd
virt-df -d domain1  # after it completes we are at N+2 open file descriptors held by libvirtd
To fix this issue I am forced to restart libvirtd.
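A small loop like this (a sketch; counting via /proc so lsof itself adds no overhead, pid file path as above) makes the growth easy to watch while reproducing:

PID=$(cat /run/libvirtd.pid)
while sleep 5; do
    echo "$(date +%T): $(ls /proc/$PID/fd | wc -l) open fds"
done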
2 years, 10 months
QEMU args generation order with libvirt
by M, Shivakumar
Hello,
We are trying to bring up an Android VM with libvirt. As part of this exercise it is advised to keep the Replay Protected Memory Block (RPMB) device as the first virtio device in the QEMU argument list, because the secure storage daemon on the Android side communicates with /dev/vport0p1 for RPMB usage.
Is there any way to make libvirt generate the QEMU arguments in a particular order via the domain XML?
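For reference, this is roughly the kind of device that has to end up on /dev/vport0p1 (a sketch only; the socket path and target name are just examples), in case the virtio-serial address can be used to pin it:

<channel type='unix'>
  <source mode='bind' path='/run/rpmb0.sock'/>
  <target type='virtio' name='rpmb0'/>
  <address type='virtio-serial' controller='0' bus='0' port='1'/>
</channel>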
Thanks,
Shivakumar M
2 years, 10 months
networking question
by Natxo Asenjo
hi,
I have an issue with one host at a customer's site. I think this cannot
work, but I would like to ask you just in case I am confused.
host:
eno1: 172.20.10.x/24 management interface gw 172.20.10.254
bridge-service: 0.0.0.0/24
tun0: openvpn tunnel to external data center
internal-bridge: x.x.x.x/28 ; routed subnet that goes to openvpn tun0
on vm:
eth0: x.x.x.x/28 on internal-bridge (default gw)
eth1: 172.20.10.x/24 bridge-service gw 172.20.10.254 (same as eno1)
Connectivity to and from OpenVPN (to and from the data center) is perfect. All VMs are directly reachable from our management services, no NAT.
From the hypervisor I can ping the gw; from the VM I cannot ping 172.20.10.254.
My gut feeling is that this cannot work because the hypervisor's traffic for subnet 172.20.10.x/24 flows through eno1, while the VM's goes through the bridge-service interface. So that cannot work.
Should we just ask the customer to give us different subnets for the host
and the vm?
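To double check where the replies actually go, something like this might help (a sketch; the addresses are the placeholders from above):

# on the VM: which interface would be used to reach the gateway?
ip route get 172.20.10.254
# on the hypervisor: watch whether the VM's ICMP shows up on the service bridge
tcpdump -ni bridge-service icmp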
TIA.
--
regards,
Natxo
2 years, 10 months
domblkinfo->allocation is not equal to domstats.1->allocation for a disk with <slices> settings
by Zhen Tang
Hi,
I did some testing of disks with the <slices> setting and found that the value of domblkinfo->allocation is not the same as domstats.1->allocation for such a disk.
Env:
libvirt-8.0.0-4.el9
qemu-kvm-6.2.0-9.el9
Step:
1. Prepare an image
# qemu-img create -f raw -o preallocation=full /var/lib/libvirt/images/disk-raw 100M
2. start a guest and attach the disk
➜ ~ cat disk.xml
<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native' copy_on_read='off' discard='ignore' detect_zeroes='on'/>
  <source file='/var/lib/libvirt/images/disk-raw' index='1'>
    <slices>
      <slice type='storage' offset='0' size='104857600'/>
    </slices>
  </source>
  <backingStore/>
  <target dev='sdb' bus='scsi'/>
  <iotune>
    <total_bytes_sec>10000000</total_bytes_sec>
    <group_name>slice</group_name>
  </iotune>
  <alias name='ua-slices'/>
  <address type='drive' controller='0' bus='0' target='0' unit='1'/>
</disk>
3. Write some data to the sliced disk in the VM
(in vm) dd if=/dev/zero of=/dev/sdb bs=10M count=1
4. check *domblkinfo* and *domstats*
➜ ~ virsh domstats rhel9.0-1 --block | grep -Ei 'block.1.(allocation|capacity|physical)'
*block.1.allocation=0*
block.1.capacity=104857600
block.1.physical=104865792
➜ ~
➜ ~ virsh domblkinfo rhel9.0-1 sdb | grep -Ei '(allocation|capacity|physical)'
Capacity: 104857600
*Allocation: 104865792*
Physical: 104857600
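For easier comparison, a small sketch that pulls the two values side by side (same domain and target as above):

➜ ~ virsh domstats rhel9.0-1 --block | awk -F= '$1 ~ /block\.1\.allocation/ {print "domstats   allocation:", $2}'
➜ ~ virsh domblkinfo rhel9.0-1 sdb | awk '/^Allocation/ {print "domblkinfo allocation:", $2}'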
Are these 'allocation' values expected to be equal to each other?
Thanks,
Zhen Tang
2 years, 10 months
Re: Hugepages -- "Memory backend 'pc.ram' not found"
by Michal Prívozník
On 2/21/22 17:12, Charles Polisher wrote:
Hey, please keep the list on CC for the benefit of others, e.g. when somebody runs into the same problem they can find the discussion in the archive.
> On 2/21/22 01:54, Michal Prívozník wrote:
>
>> On 2/20/22 04:07, Charles Polisher wrote:
>>> Hello,
>>>
>>> After defining hugepages, as documented at
>>> https://libvirt.org/formatdomain.html#memory-backing ,
>>> when I start the guest, I get a dialogue
>>> box that says:
>>>
>>> Error starting domain: internal error: qemu unexpectedly
>>> closed the monitor: 2022-02-20T01:10:36.520955Z
>>> qemu-system-x86_64: Memory backend 'pc.ram' not found
>>> Traceback (most recent call last):
>>> File "/usr/share/virt-manager/virtManager/asyncjob.py", line 65,
>>> in cb_wrapper
>>> callback(asyncjob, *args, **kwargs)
>>> File "/usr/share/virt-manager/virtManager/asyncjob.py", line 101,
>>> in tmpcb
>>> callback(*args, **kwargs)
>>> File
>>> "/usr/share/virt-manager/virtManager/object/libvirtobject.py", line 57,
>>> in newfn
>>> ret = fn(self, *args, **kwargs)
>>> File "/usr/share/virt-manager/virtManager/object/domain.py", line
>>> 1329, in startup
>>> self._backend.create()
>>> File "/usr/lib64/python3.9/site-packages/libvirt.py", line 1353,
>>> in create
>>> raise libvirtError('virDomainCreate() failed'
>>>
>>> After backing out changes, guest starts normally.
>>> I searched online for the error message, but found nothing useful.
>>> The hypervisor is running libvirtd (libvirt) 7.8.0 and QEMU emulator
>>> version 6.1.0,
>>> both build from source. I've got plenty of hugepages available.
>>> The domain's XML definition is attached.
>> Hey, can you share your domain XML and the generated cmd line? The
>> latter should be found in /var/log/libvirt/qemu/$domain.log
>>
>> Thanks,
>> Michal
>
> Thanks for your reply. As requested, the domain XML:
>
> <domain type="kvm">
> <name>slacky-0</name>
> <uuid>4a67eb39-9b92-8b8a-97ba-7e1250d56b07</uuid>
> <title>slacky-0</title>
> <description>elided</description>
> <memory unit="KiB">4194304</memory>
> <currentMemory unit="KiB">4194304</currentMemory>
> <memoryBacking>
> <hugepages>
> <page size="4194304" unit="KiB"/>
This does not look correct: 4194304 KiB is 4 GiB, and there is no such hugepage size on x86_64; only 2 MiB and 1 GiB exist.
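For 2 MiB pages the element would look like this (the matching hugepage pool must of course exist on the host):

<page size="2048" unit="KiB"/>

or, for 1 GiB pages:

<page size="1048576" unit="KiB"/>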
> </hugepages>
> </memoryBacking>
> <vcpu placement="static">2</vcpu>
> <os>
> <type arch="x86_64" machine="pc-i440fx-5.1">hvm</type>
> <bootmenu enable="no"/>
> </os>
> <features>
> <acpi/>
> <apic/>
> <pae/>
> </features>
> <cpu mode="custom" match="exact" check="none">
> <model fallback="forbid">kvm64</model>
> </cpu>
> <clock offset="utc"/>
> <on_poweroff>destroy</on_poweroff>
> <on_reboot>restart</on_reboot>
> <on_crash>restart</on_crash>
> <devices>
> <emulator>/usr/bin/qemu-system-x86_64</emulator>
> <disk type="file" device="cdrom">
> <driver name="qemu" type="raw"/>
> <target dev="hdc" bus="ide"/>
> <readonly/>
> <address type="drive" controller="0" bus="1" target="0"
> unit="0"/>
> </disk>
> <disk type="file" device="disk">
> <driver name="qemu" type="qcow2" cache="writethrough"/>
> <source file="/mnt/nvme1/VIRTUAL_MACHINES/slacky-0.qcow2"/>
> <target dev="vda" bus="virtio"/>
> <boot order="1"/>
> <address type="pci" domain="0x0000" bus="0x00" slot="0x09"
> function="0x0"/>
> </disk>
> <controller type="usb" index="0" model="ich9-ehci1">
> <address type="pci" domain="0x0000" bus="0x00" slot="0x05"
> function="0x7"/>
> </controller>
> <controller type="usb" index="0" model="ich9-uhci1">
> <master startport="0"/>
> <address type="pci" domain="0x0000" bus="0x00" slot="0x05"
> function="0x0" multifunction="on"/>
> </controller>
> <controller type="usb" index="0" model="ich9-uhci2">
> <master startport="2"/>
> <address type="pci" domain="0x0000" bus="0x00" slot="0x05"
> function="0x1"/>
> </controller>
> <controller type="usb" index="0" model="ich9-uhci3">
> <master startport="4"/>
> <address type="pci" domain="0x0000" bus="0x00" slot="0x05"
> function="0x2"/>
> </controller>
> <controller type="ide" index="0">
> <address type="pci" domain="0x0000" bus="0x00" slot="0x01"
> function="0x1"/>
> </controller>
> <controller type="virtio-serial" index="0">
> <address type="pci" domain="0x0000" bus="0x00" slot="0x07"
> function="0x0"/>
> </controller>
> <controller type="scsi" index="0" model="virtio-scsi">
> <address type="pci" domain="0x0000" bus="0x00" slot="0x08"
> function="0x0"/>
> </controller>
> <controller type="pci" index="0" model="pci-root"/>
> <interface type="network">
> <mac address="52:54:00:c3:93:40"/>
> <source network="default"/>
> <model type="virtio"/>
> <address type="pci" domain="0x0000" bus="0x00" slot="0x03"
> function="0x0"/>
> </interface>
> <serial type="file">
> <source path="/tmp/myconsoleoutput.txt"/>
> <target type="isa-serial" port="0">
> <model name="isa-serial"/>
> </target>
> </serial>
> <console type="file">
> <source path="/tmp/myconsoleoutput.txt"/>
> <target type="serial" port="0"/>
> </console>
> <input type="tablet" bus="usb">
> <address type="usb" bus="0" port="1"/>
> </input>
> <input type="mouse" bus="ps2"/>
> <input type="keyboard" bus="ps2"/>
> <graphics type="spice" autoport="yes" listen="127.0.0.1">
> <listen type="address" address="127.0.0.1"/>
> </graphics>
> <sound model="ich9">
> <address type="pci" domain="0x0000" bus="0x00" slot="0x04"
> function="0x0"/>
> </sound>
> <audio id="1" type="spice"/>
> <video>
> <model type="qxl" ram="65536" vram="65536" vgamem="16384"
> heads="1" primary="yes"/>
> <address type="pci" domain="0x0000" bus="0x00" slot="0x02"
> function="0x0"/>
> </video>
> <memballoon model="virtio">
> <address type="pci" domain="0x0000" bus="0x00" slot="0x06"
> function="0x0"/>
> </memballoon>
> </devices>
> </domain>
>
>
> And the guest log with the generated command line:
>
> 2022-02-20 01:13:12.985+0000: starting up libvirt version: 7.8.0,
> qemu version: 6.1.0, kernel: 5.15.19, hostname: godzilla.peecee3.com
> LC_ALL=C \
> PATH=/sbin:/usr/sbin:/bin:/usr/bin \
> HOME=/var/lib/libvirt/qemu/domain-34-slacky-0 \
> XDG_DATA_HOME=/var/lib/libvirt/qemu/domain-34-slacky-0/.local/share \
> XDG_CACHE_HOME=/var/lib/libvirt/qemu/domain-34-slacky-0/.cache \
> XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain-34-slacky-0/.config \
> /usr/bin/qemu-system-x86_64 \
> -name guest=slacky-0,process=qemu:slacky-0,debug-threads=on \
> -S \
> -object
> '{"qom-type":"secret","id":"masterKey0","format":"raw","file":"/var/lib/libvirt/qemu/domain-34-slacky-0/master-key.aes"}'
> \
> -machine
> pc-i440fx-5.1,accel=kvm,usb=off,dump-guest-core=off,memory-backend=pc.ram \
So this instructs qemu to use a memory device with id='pc.ram' as the
default/generic memory for the guest..
> -cpu kvm64 \
> -m 4096 \
> -overcommit mem-lock=off \
.. but we never generate such a device. Here libvirt should have generated:
-object memory-backend-file,id=pc.ram,path=/hugepages/...
And I think I know why. Let me post a patch.
> -smp 2,sockets=2,cores=1,threads=1 \
> -uuid 4a67eb39-9b92-8b8a-97ba-7e1250d56b07 \
> -no-user-config \
> -nodefaults \
> -chardev socket,id=charmonitor,fd=34,server=on,wait=off \
> -mon chardev=charmonitor,id=monitor,mode=control \
> -rtc base=utc \
> -no-shutdown \
> -boot menu=off,strict=on \
> -device ich9-usb-ehci1,id=usb,bus=pci.0,addr=0x5.0x7 \
> -device
> ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pci.0,multifunction=on,addr=0x5
> \
> -device
> ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pci.0,addr=0x5.0x1 \
> -device
> ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pci.0,addr=0x5.0x2 \
> -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x8 \
> -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x7 \
> -device ide-cd,bus=ide.1,unit=0,id=ide0-1-0 \
> -blockdev
> '{"driver":"file","filename":"/mnt/nvme1/VIRTUAL_MACHINES/slacky-0.qcow2","node-name":"libvirt-1-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}'
> \
> -blockdev
> '{"node-name":"libvirt-1-format","read-only":false,"cache":{"direct":false,"no-flush":false},"driver":"qcow2","file":"libvirt-1-storage","backing":null}'
> \
> -device
> virtio-blk-pci,bus=pci.0,addr=0x9,drive=libvirt-1-format,id=virtio-disk0,bootindex=1,write-cache=off
> \
> -netdev tap,fd=58,id=hostnet0,vhost=on,vhostfd=60 \
> -device
> virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:c3:93:40,bus=pci.0,addr=0x3
> \
> -add-fd set=3,fd=62 \
> -chardev file,id=charserial0,path=/dev/fdset/3,append=on \
> -device isa-serial,chardev=charserial0,id=serial0 \
> -device usb-tablet,id=input0,bus=usb.0,port=1 \
> -audiodev id=audio1,driver=spice \
> -spice
> port=5901,addr=127.0.0.1,disable-ticketing=on,seamless-migration=on \
> -device
> qxl-vga,id=video0,ram_size=67108864,vram_size=67108864,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pci.0,addr=0x2
> \
> -device ich9-intel-hda,id=sound0,bus=pci.0,addr=0x4 \
> -device
> hda-duplex,id=sound0-codec0,bus=sound0.0,cad=0,audiodev=audio1 \
> -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 \
> -msg timestamp=on
> 2022-02-20T01:13:13.136602Z qemu-system-x86_64: Memory backend
> 'pc.ram' not found
> 2022-02-20 01:13:13.186+0000: shutting down, reason=failed
>
> Again, thank you!
Michal
2 years, 10 months
help
by admin@foundryserver.com
help
2 years, 10 months
random but frequent VM "BUG: soft lockup"
by lejeczek
Hi guys
I see this often (perhaps once a day) on VMs:
... watchdog: BUG: soft lockup - CPU#1 stuck for xxXXxxs! [swapper/1:0]
At the moment I cannot say I see a pattern (perhaps I just have not spotted one yet), and it can happen to any VM on any of the three hosts, which run CentOS 9 on AMD Ryzen 5600G hardware. The VMs are mostly CentOS 8, with a few Ubuntus too.
Would anybody know what that might be a symptom of? And, most importantly, how can I dig in to troubleshoot this? (When it happens, the affected VM can only be hard reset.)
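Would logging each VM's serial console to a file help capture the full trace before the hard reset? Something like this in the domain XML is what I have in mind (a sketch; the path is arbitrary and the guest needs console=ttyS0 on its kernel command line):

<serial type='file'>
  <source path='/var/log/libvirt/consoles/vm-name.log'/>
  <target port='0'/>
</serial>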
many thanks, L.
2 years, 10 months
Hugepages -- "Memory backend 'pc.ram' not found"
by Charles Polisher
Hello,
After defining hugepages, as documented at
https://libvirt.org/formatdomain.html#memory-backing ,
when I start the guest, I get a dialogue
box that says:
Error starting domain: internal error: qemu unexpectedly
closed the monitor: 2022-02-20T01:10:36.520955Z
qemu-system-x86_64: Memory backend 'pc.ram' not found
Traceback (most recent call last):
File "/usr/share/virt-manager/virtManager/asyncjob.py", line 65,
in cb_wrapper
callback(asyncjob, *args, **kwargs)
File "/usr/share/virt-manager/virtManager/asyncjob.py", line 101,
in tmpcb
callback(*args, **kwargs)
File
"/usr/share/virt-manager/virtManager/object/libvirtobject.py", line 57,
in newfn
ret = fn(self, *args, **kwargs)
File "/usr/share/virt-manager/virtManager/object/domain.py", line
1329, in startup
self._backend.create()
File "/usr/lib64/python3.9/site-packages/libvirt.py", line 1353,
in create
raise libvirtError('virDomainCreate() failed'
After backing out the changes, the guest starts normally.
I searched online for the error message, but found nothing useful.
The hypervisor is running libvirtd (libvirt) 7.8.0 and QEMU emulator version 6.1.0, both built from source. I've got plenty of hugepages available.
The domain's XML definition is attached.
Any ideas where to look next?
Thanks,
--
Charles Polisher
2 years, 10 months