Libvirt
by Gk Gk
Hi All,
I am trying to collect memory, disk and network statistics for a VM on a KVM host.
It seems that the statistics do not match what the OS inside the VM is
reporting. Why is there this discrepancy?
Is this a known bug in libvirt? I have also heard that libvirt shows cumulative
figures for these measures ever since the VM was created. I also tested by
creating a new VM and comparing the stats without a reboot. Even in this
case, the stats don't agree. Can someone help me here, please?
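For reference, the host-side counters in question can be read like this (a minimal sketch, assuming a domain name of "myvm" and device names "vda"/"vnet0"; adjust to your setup):
# Aggregated statistics for one domain (balloon, vcpu, block and net counters)
virsh domstats myvm
# Per-subsystem views
virsh dommemstat myvm          # memory balloon statistics
virsh domblkstat myvm vda      # cumulative block I/O counters for disk vda
virsh domifstat myvm vnet0     # cumulative RX/TX counters for interface vnet0
As far as I know, the block and network figures are cumulative counters as seen from the host since the device was started, while tools inside the guest usually report rates, so the raw numbers are not expected to line up directly.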
Thanks
Kumar
1 year, 1 month
Xen with libvirt and SR-IOV
by nospam@godawa.de
Hi everybody,
for a long time I have been using Xen on CentOS with xl, currently the latest
CentOS 7 with Xen 4.15 from the CentOS Xen project. For several VMs I
have to use SR-IOV to lower the CPU usage in Dom0 on the host.
CentOS 7 is coming to an end and Xen is no longer supported by RHEL or
Rocky Linux, so unfortunately I have to switch to KVM.
The first step is now converting all the scripts for managing and
running VMs so that they work with the additional libvirt layer.
Almost everything is working, but I do not get a network interface in
the VM when I start it with "virsh start ..." instead of "xl create ...".
First of all, is there documentation on how to configure the VMs in the
domain definition for Xen (all the docs I found are KVM-related)?
The converted xl config does not do the job:
virsh -c xen:/// domxml-from-native --format xen-xl vm > vm.xml
These are some non-working examples I tried:
...
<interface type='hostdev' managed='yes'>
  <mac address='02:16:32:10:20:30'/>
  <driver name='xen'/>
  <source>
    <address type='pci' domain='0x0000' bus='0x81' slot='0x02' function='0x6'/>
  </source>
  <vlan>
    <tag id='11'/>
  </vlan>
</interface>
...
...
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='xen'/>
  <source>
    <address domain='0x0000' bus='0x81' slot='0x02' function='0x6'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x81' slot='0x02' function='0x6'/>
</hostdev>
...
The result is always the same: the VM does not find any interface to
configure when it is started via libvirt:
# dmesg | egrep -i "net|eth"
...
[ 4.523173] iavf: Intel(R) Ethernet Adaptive Virtual Function Network Driver - version 4.4.2.1
# lspci | egrep -i "net|eth"
00:00.6 Ethernet controller: Intel Corporation Ethernet Virtual Function 700 Series (rev 02)
# lspci -vmmks 00:00.6
Slot: 00:00.6
Class: Ethernet controller
Vendor: Intel Corporation
Device: Ethernet Virtual Function 700 Series
SVendor: Intel Corporation
SDevice: Device 0000
Rev: 02
Module: i40evf
Module: iavf
NUMANode: 0
# lsmod | egrep -i "iavf|i40"
iavf 135168 0
auxiliary 16384 1 iavf
ptp 20480 1 iavf
# ifconfig eth0
eth0: error fetching interface information: Device not found
The same VM after starting with XL:
# dmesg | egrep -i "net|eth"
...
[ 4.742038] iavf: Intel(R) Ethernet Adaptive Virtual Function Network Driver - version 4.4.2.1
[ 40.578461] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[ 40.642868] iavf 0000:00:00.6 eth0: NIC Link is Up Speed is 10 Gbps Full Duplex
[ 40.644015] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
# lspci | egrep -i "net|eth"
00:00.6 Ethernet controller: Intel Corporation Ethernet Virtual Function 700 Series (rev 02)
# lspci -vmmks 00:00.6
Slot: 00:00.6
Class: Ethernet controller
Vendor: Intel Corporation
Device: Ethernet Virtual Function 700 Series
SVendor: Intel Corporation
SDevice: Device 0000
Rev: 02
Driver: iavf
Module: i40evf
Module: iavf
NUMANode: 0
# lsmod | egrep -i "iavf|i40"
iavf 135168 0
auxiliary 16384 1 iavf
ptp 20480 1 iavf
# ifconfig eth0
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.20.30.40 netmask 255.255.255.0 broadcast 10.20.30.1
...
ether 02:16:32:10:20:30 txqueuelen 1000 (Ethernet)
I expect that all the SR-IOV stuff is configured correctly, because the
VM runs without any problems when started with xl.
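Not an answer, but it may help to compare how libvirt itself sees the VF and which XML it actually generated for the running domain (a sketch; the node-device name pci_0000_81_02_6 is derived from the PCI address shown above, and "vm" stands in for the domain name):
# How libvirt sees the VF on the host
virsh nodedev-list --cap pci | grep 81_02
virsh nodedev-dumpxml pci_0000_81_02_6
# The XML libvirt actually uses for the running guest
virsh dumpxml vm | grep -A 10 -i hostdev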
Thanks a lot for any ideas,
--
kind regards,
Thorolf
1 year, 8 months
About libvirt domain dump state and persistent state
by Shen, Tao
Hi all,
I use libvirt-go in my agent to attach rbd volumes. I often run into inconsistencies between the domain's dump XML and the domain's persistent XML file. For example, when I try to attach a volume, the domain obtained via the “virConnectListAllDomains” API tells me that vdf or PCI address 0x0a is free, but when I attach the volume as the vdf device or at the 0x0a address, QEMU returns an error that the vdf target or the PCI address is already in use:
2021-09-28T16:00:03.816682107-07:00 stderr F I0928 23:00:03.816558 1 cephvolume.go:73] attach disk &{XMLName:{Space: Local:} Device:disk RawIO: SGIO: Snapshot: Model: Driver:0xc00073b420 Auth:0xc000f49e40 Source:0xc000159860 BackingStore:<nil> Geometry:<nil> BlockIO:<nil> Mirror:<nil> Target:0xc000a58b40 IOTune:0xc00019b970 ReadOnly:<nil> Shareable:<nil>
Transient:<nil> Serial:pvc-33003998-6624-4ac9-a923-d94f9401abdf WWN: Vendor: Product: Encryption:<nil> Boot:<nil> Alias:<nil> Address:0xc001420b40} error: virError(Code=27, Domain=20, Message='XML error: target 'vdf' duplicated for disk sources 'volume-0aab375c-1858-4f09-b276-ea297cd29a3d' and 'volume-63ef92c4-a027-476c-a2de-9fcf501dd4de'')
I0330 00:28:55.070331 1 cephvolume.go:73] attach disk &{XMLName:{Space: Local:} Device:disk RawIO: SGIO: Snapshot: Model: Driver:0xc000024380 Auth:0xc006a51600 Source:0xc000640cd0 BackingStore:<nil> Geometry:<nil> BlockIO:<nil> Mirror:<nil> Target:0xc00342db80 IOTune:0xc005071c30 ReadOnly:<nil> Shareable:<nil> Transient:<nil> Serial:pvc-39c80157-0862-433c-a1ec-49475db818cf WWN: Vendor: Product: Encryption:<nil> Boot:<nil> Alias:<nil> Address:0xc000f5e240} error: virError(Code=27, Domain=20, Message='XML error: Attempted double use of PCI Address 0000:00:0a.0')
In the first case (“target 'vdf' duplicated”), I find the volume in the dump XML from “virsh dumpxml” but not in the persistent XML at /etc/libvirt/qemu/xxx.xml or in “virsh edit”. In the second case (“double use of PCI Address”), I found the volume in the persistent XML but not in the dump XML.
I think there is an intermediate state where QEMU is trying to attach or detach but has not finished. But what API, other than “virConnectListAllDomains”, can I use to get the different domain states?
The reverse case is that when I try to detach a volume, the domain tells me the volume is allocated at vde, but when I detach the volume as vde, it returns the error “vde not found”:
I0330 00:38:54.615254 1 cephvolume.go:227] detach disk &{XMLName:{Space: Local:disk} Device:disk RawIO: SGIO: Snapshot: Model: Driver:0xc000736540 Auth:0xc0027b68a0 Source:0xc000641040 BackingStore:<nil> Geometry:<nil> BlockIO:<nil> Mirror:<nil> Target:0xc004557640 IOTune:0xc0006e4370 ReadOnly:<nil> Shareable:<nil> Transient:<nil> Serial:pvc-febae406-15ad-4d05-9c93-b2d09c197840 WWN: Vendor: Product: Encryption:<nil> Boot:<nil> Alias:<nil> Address:0xc00709cba0} error: virError(Code=9, Domain=10, Message='operation failed: disk vde not found')
In this case, I found the volume is not in the dump XML but is in the persistent XML. All my volume API calls use the persistent operation. So I want to know:
1. Which state is returned by “virConnectListAllDomains”?
2. What are the details of these states and how do the state transitions in qemu/libvirt work?
3. Is there any way in the libvirt API to get the different states of a domain?
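For what it is worth, the same live/persistent split can be inspected from the command line, which may help to see where the two views diverge (a minimal sketch, assuming a domain named "mydom" and a device XML file "disk.xml"):
# Live state of the running domain (what QEMU currently has)
virsh dumpxml mydom > live.xml
# Persistent configuration (what the next boot will use)
virsh dumpxml --inactive mydom > persistent.xml
diff live.xml persistent.xml
# Attach to both the running domain and the persistent config in one call
virsh attach-device mydom disk.xml --live --config
If I remember the API right, the corresponding flags are VIR_DOMAIN_AFFECT_LIVE / VIR_DOMAIN_AFFECT_CONFIG for attach/detach and VIR_DOMAIN_XML_INACTIVE for the XML dump.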
Thanks,
Tao
1 year, 8 months
Option Flags
by Simon Fairweather
Hi
Are the flags documented? Can this function be used to do the same as virsh
undefine --nvram "name of VM"?
libvirt_domain_undefine_flags($res, $flags)
[Since version (null)]
Function is used to undefine (with flags) the domain identified by its
resource.
*@res [resource]*: libvirt domain resource, e.g. from
libvirt_domain_lookup_by_*()
*@flags [int]*: optional flags
*Returns*: : TRUE for success, FALSE on error
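For reference, the flags of this call correspond to the VIR_DOMAIN_UNDEFINE_* constants of the underlying C API (virDomainUndefineFlags); whether the PHP binding exposes them as named constants is something to check in its docs, but the numeric values below should match libvirt-domain.h, and the virsh option shown is the equivalent (a sketch):
# virDomainUndefineFlags flag values (from libvirt's C API):
#   VIR_DOMAIN_UNDEFINE_MANAGED_SAVE       = 1
#   VIR_DOMAIN_UNDEFINE_SNAPSHOTS_METADATA = 2
#   VIR_DOMAIN_UNDEFINE_NVRAM              = 4
#   VIR_DOMAIN_UNDEFINE_KEEP_NVRAM         = 8
# virsh equivalent of passing VIR_DOMAIN_UNDEFINE_NVRAM:
virsh undefine --nvram "name of VM"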
1 year, 9 months
virsh domifaddr --domain domname --source {lease, arp} not showing results with ipv6
by Natxo Asenjo
hi,
I have configured a routed network on my laptop with an IPv6 subnet, and
dnsmasq is handing out IPv6 addresses to my VMs. It works really well,
but finding out which IPs have been used is not as easy as with IPv4.
[root@lenovo ~]# virsh domifaddr --domain wec --source lease
Name MAC address Protocol Address
-------------------------------------------------------------------------------
[root@lenovo ~]# virsh domifaddr --domain wec --source arp
Name MAC address Protocol Address
-------------------------------------------------------------------------------
When using an IPv4 network, this works:
[root@lenovo ~]# virsh domifaddr --domain evenng --source arp
Name MAC address Protocol Address
-------------------------------------------------------------------------------
vnet2 52:54:00:4c:83:98 ipv4 192.168.122.229/0
[root@lenovo ~]# virsh domifaddr --domain evenng --source lease
Name MAC address Protocol Address
-------------------------------------------------------------------------------
vnet2 52:54:00:4c:83:98 ipv4 192.168.122.229/24
I can obviously look into the leases file and find out the addresses, but it
would be nice to be able to use the virt tooling.
This is on Fedora 37 running qemu-kvm-7.0.0-14.fc37.x86_64 and
dnsmasq-2.89-1.fc37.x86_64, everything installed from the Fedora
repositories.
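In case it helps, two other query paths exist in the virt tooling (a sketch; --source agent requires the qemu-guest-agent running inside the guest, and "routed-net" is a placeholder for the network name):
# Ask the guest agent instead of the lease/ARP tables
virsh domifaddr --domain wec --source agent
# List the leases libvirt's dnsmasq instance has handed out for the network
virsh net-dhcp-leases routed-net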
--
Groeten,
natxo
1 year, 9 months
Upgrade machine type during migration
by Michael Schwartzkopff
Hi,
I have an old system. The guest there is defined with:
<os>
<type arch='x86_64' machine='pc-q35-rhel8.2.0'>hvm</type>
</os>
When I try to migrate this guest to a new system I get the error:
error: internal error: unable to execute QEMU command 'blockdev-add': Failed to connect socket: Permission denied
On the new host I see the log entries:
libvirtd[22411]: Domain id=18 name='test02' uuid=9bad33a8-d18e-4c68-bdbe-dad34142dc22 is tainted: deprecated-config (machine type 'pc-q35-rhel8.2.0')
systemd-machined[9980]: New machine qemu-18-test02.
systemd[1]: Started Virtual Machine qemu-18-test02.
systemd-networkd[3914]: vnet20: Link DOWN
systemd-networkd[3914]: vnet20: Lost carrier
kernel: br1: port 3(vnet20) entered disabled state
kernel: device vnet20 left promiscuous mode
kernel: br1: port 3(vnet20) entered disabled state
systemd[1]: machine-qemu\x2d18\x2dtest02.scope: Deactivated successfully.
systemd-machined[9980]: Machine qemu-18-test02 terminated.
libvirtd[22411]: migration successfully aborted
It seems there is no combination of machine types that is compatible
with both hosts. The old host can only start rhel8.2 guests, the new one
only rhel9.0 guests.
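For comparing the two sides, each host's supported machine types can be listed as below (a sketch; the qemu binary path is an assumption and differs between distributions):
# Machine types libvirt advertises on this host
virsh capabilities | grep machine
# Machine types supported by the qemu binary itself (path may differ)
/usr/libexec/qemu-kvm -machine help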
Is there a way to change the machine type during a migration? Is such a
migration possible at all? Or do I have to shut down the guest?
Michael Schwartzkopff
1 year, 9 months
virNetSocketReadWire
by Simon Fairweather
What are the standard reasons for this to fail?
virNetSocketReadWire:1791 : End of file while reading data: Input/output error
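This message generally just means the other end of the connection closed the stream while libvirt was reading from it (for example the daemon exited, an SSH tunnel dropped, or the client went away), so the interesting detail is usually on the peer's side. Raising the RPC log level can show what was being exchanged; a sketch, assuming a libvirt new enough to ship virt-admin and a monolithic libvirtd:
# Raise RPC/event log verbosity on the running daemon and follow the journal
virt-admin daemon-log-filters "1:rpc 1:event"
virt-admin daemon-log-outputs "1:journald"
journalctl -f -u libvirtd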
1 year, 9 months