Libvirt
by Gk Gk
Hi All,
I am trying to collect memory, disk and network stats for a VM on a KVM host.
It seems that the statistics do not match what the OS inside the VM is
reporting. Why is there this discrepancy?
Is this a known bug in libvirt? I have also heard that libvirt shows cumulative
figures for these measures ever since the VM was created. I also tested by
creating a new VM and comparing the stats without a reboot. Even in this
case, the stats don't agree. Can someone help me here, please?
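For reference, this is roughly how I am pulling the numbers (a minimal
sketch with the libvirt Python bindings; "myvm", "vda" and "vnet0" are
placeholders, not my actual domain name, disk target or tap device):

import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("myvm")          # placeholder domain name

# Memory stats (KiB); most fields are reported by the guest balloon driver
print(dom.memoryStats())

# Block stats: (rd_req, rd_bytes, wr_req, wr_bytes, errs) -- cumulative
# counters since the QEMU process started, not a current rate
print(dom.blockStats("vda"))

# Interface stats: (rx_bytes, rx_packets, rx_errs, rx_drop,
#                   tx_bytes, tx_packets, tx_errs, tx_drop) -- also
# cumulative, and measured on the host side of the tap device
print(dom.interfaceStats("vnet0"))

conn.close()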
Thanks
Kumar
Starting guest VM with default NAT network breaks host routing
by Mathias Gibbens
I'm running libvirt 9.0.0 on a Debian 12 bookworm host, setting up a
Windows 11 guest using qemu-kvm and the default NAT network[1]. When I
start the guest VM, it successfully acquires a DHCP lease, and I can
ping other hosts on my local network subnet, but something then happens
which messes up routing on the *host* machine, resulting in no access
to anything beyond my LAN. If I shut down the VM, network access returns
to normal on the host system.
I've been unable to figure out how to fix this, and unfortunately the
terms are generic enough that Google isn't much help -- most of the
results are about issues with NAT setup (which is working), not route
configuration of the host. I did find one similar report[2], but
there's no reported solution.
I feel like this should be a very common use case, so maybe I've just
set up something wrong. Since I haven't been able to solve it on my own,
I'm hoping someone will have a pointer to get me going in the right
direction.
Further details are below, and I'm happy to provide anything else
that might prove useful.
Thanks,
Mathias
Before starting the guest VM, routes on the host are:
> $ ip route
> default via 172.20.1.1 dev wlan0 proto dhcp src 172.20.1.110 metric 600
> 169.254.0.0/16 dev virbr0 scope link metric 1000 linkdown
> 172.20.1.0/24 dev wlan0 proto kernel scope link src 172.20.1.110 metric 600
> 192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1 linkdown
A few seconds after the guest finishes booting, the host gets some
new routes, and connectivity outside of the local LAN subnet breaks:
> $ ip route
> 0.0.0.0 dev vnet0 scope link
> default dev vnet0 scope link
> default via 172.20.1.1 dev wlan0 proto dhcp src 172.20.1.110 metric 600
> 169.254.0.0/16 dev vnet0 proto kernel scope link src 169.254.103.112
> 169.254.0.0/16 dev virbr0 scope link metric 1000
> 172.20.1.0/24 dev wlan0 proto kernel scope link src 172.20.1.110 metric 600
> 192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1
Within the guest, I can ping the NAT gateway (192.168.122.1), the
host's IP (172.20.1.110), and another computer on the network
(172.20.1.105), so NAT appears to be working correctly:
> C:\Users\user>ipconfig /all
>
> Windows IP Configuration
>
> Host Name . . . . . . . . . . . . : DESKTOP-LGNEPEC
> Primary Dns Suffix . . . . . . . :
> Node Type . . . . . . . . . . . . : Hybrid
> IP Routing Enabled. . . . . . . . : No
> WINS Proxy Enabled. . . . . . . . : No
>
> Ethernet adapter Ethernet:
>
> Connection-specific DNS Suffix . :
> Description . . . . . . . . . . . : Red Hat VirtIO Ethernet Adapter
> Physical Address. . . . . . . . . : 52-54-00-AE-05-B2
> DHCP Enabled. . . . . . . . . . . : Yes
> Autoconfiguration Enabled . . . . : Yes
> Link-local IPv6 Address . . . . . : fe80::2881:98b7:34b8:fe2%11(Preferred)
> IPv4 Address. . . . . . . . . . . : 192.168.122.203(Preferred)
> Subnet Mask . . . . . . . . . . . : 255.255.255.0
> Lease Obtained. . . . . . . . . . : Monday, June 19, 2023 14:03:57
> Lease Expires . . . . . . . . . . : Monday, June 19, 2023 15:03:57
> Default Gateway . . . . . . . . . : 192.168.122.1
> DHCP Server . . . . . . . . . . . : 192.168.122.1
> DHCPv6 IAID . . . . . . . . . . . : 340939776
> DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-2C-0E-E6-F4-52-54-00-AE-05-B2
> DNS Servers . . . . . . . . . . . : 192.168.122.1
> NetBIOS over Tcpip. . . . . . . . : Enabled
>
> C:\Users\user>ping 192.168.122.1
>
> Pinging 192.168.122.1 with 32 bytes of data:
> Reply from 192.168.122.1: bytes=32 time<1ms TTL=64
> Reply from 192.168.122.1: bytes=32 time<1ms TTL=64
> Reply from 192.168.122.1: bytes=32 time<1ms TTL=64
> Reply from 192.168.122.1: bytes=32 time<1ms TTL=64
>
> Ping statistics for 192.168.122.1:
> Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
> Approximate round trip times in milli-seconds:
> Minimum = 0ms, Maximum = 0ms, Average = 0ms
>
> C:\Users\user>ping 172.20.1.110
>
> Pinging 172.20.1.110 with 32 bytes of data:
> Reply from 172.20.1.110: bytes=32 time<1ms TTL=64
> Reply from 172.20.1.110: bytes=32 time<1ms TTL=64
> Reply from 172.20.1.110: bytes=32 time<1ms TTL=64
> Reply from 172.20.1.110: bytes=32 time<1ms TTL=64
>
> Ping statistics for 172.20.1.110:
> Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
> Approximate round trip times in milli-seconds:
> Minimum = 0ms, Maximum = 0ms, Average = 0ms
>
> C:\Users\user>ping 172.20.1.105
>
> Pinging 172.20.1.105 with 32 bytes of data:
> Reply from 172.20.1.105: bytes=32 time=49ms TTL=63
> Reply from 172.20.1.105: bytes=32 time=31ms TTL=63
> Reply from 172.20.1.105: bytes=32 time=26ms TTL=63
> Reply from 172.20.1.105: bytes=32 time=26ms TTL=63
>
> Ping statistics for 172.20.1.105:
> Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
> Approximate round trip times in milli-seconds:
> Minimum = 26ms, Maximum = 49ms, Average = 33ms
-----
[1] -- Here's the NAT configuration:
> <network>
>   <name>default</name>
>   <uuid>ff6cd6ed-a8fe-4e50-8852-3c93a169e156</uuid>
>   <forward mode="nat">
>     <nat>
>       <port start="1024" end="65535"/>
>     </nat>
>   </forward>
>   <bridge name="virbr0" stp="on" delay="0"/>
>   <mac address="52:54:00:4e:80:30"/>
>   <ip address="192.168.122.1" netmask="255.255.255.0">
>     <dhcp>
>       <range start="192.168.122.2" end="192.168.122.254"/>
>     </dhcp>
>   </ip>
> </network>
[2] -- https://unix.stackexchange.com/questions/723091/kvm-booting-guest-breaks-...
Changing EFI boot order via API
by Lukas Zapletal
Hello,
I would like to be able to change the boot order of an EFI VM via the
libvirt API. Specifically, I am looking into configuring a VM to boot
over HTTP UEFI Boot or HTTPS UEFI Boot (also available as HTTP IPv4 in
the firmware).
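The only related mechanism I can see is the per-device <boot order='N'/>
element in the domain XML, applied by redefining the domain, but that only
reorders devices libvirt models (disk, NIC, ...) and I don't see how it
could pick a specific firmware entry such as HTTP(S) UEFI Boot. A rough
sketch of that pattern, in case I'm missing something ("efi-vm" is a
placeholder name, it assumes per-device boot order rather than
<boot dev=.../> under <os>, and the change only applies on the next boot):

import libvirt
import xml.etree.ElementTree as ET

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("efi-vm")        # placeholder domain name

root = ET.fromstring(dom.XMLDesc())
for iface in root.findall("./devices/interface"):
    boot = iface.find("boot")
    if boot is None:
        boot = ET.SubElement(iface, "boot")
    boot.set("order", "1")               # try network devices first

conn.defineXML(ET.tostring(root, encoding="unicode"))
conn.close()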
Is this possible? I cannot find anything relevant in the documentation. Thanks.
--
Later,
Lukas @lzap Zapletal
Re: virsh not connecting to libvirtd?
by Jerry Buburuz
Michal Prívozník
> On 6/12/23 20:17, Jerry Buburuz wrote:
>> Just found my issue.
>> After I removed the cephfs mounts it worked!
>> I will debug ceph.
>> I assumed because I could touch files on mounted cephfs it was working.
>> Now virsh list works!
> Out of curiosity. Do you perhaps have a storage pool defined over
> cephfs? I can see two possible sources for the problem:
> 1) autostarted storage pool that makes libvirt mount cephfs, or
My storage is hard mounted in fstab. This works and it does mount on boot.
# fstab on hypervisor
user@.mynamefs-01=/ /data ceph noatime,_netdev 0 0
usern@.mynamefs-02=/ /data2 ceph noatime,_netdev 0 0
> 2) a storage pool defined over a path where cephfs is mounted.
> The problem with 1) is obvious (in fact it's not specific to ceph, if it
> was NFS/iSCSI and the server wasn't responding then libvirtd would just
> hang).
I agree; with NFS/cephfs or any storage over the network, if it's not
available, libvirtd-defined pools will cause problems.
> The problem with 2) is that for some types of storage pools ('dir'
> typically) libvirt assumes they are always 'running'. And proceeds to
> enumerate volumes in that pool (i.e. files under the dir). And if
> there's a stale mount point, this might get libvirtd stuck. But again,
> this is not limited to ceph, any network FS might do this.
> Michal
In my case I built this hypervisor using cephfs as my primary storage for
virtual machines over the past year. It has worked until recently.
Recently I had an issue with my ceph which likely caused a stale mount. In
the past, if the storage went offline, after fixing the issue I had no
problems and my virtual machines came back to life.
In the current case I found:
* cephfs working as normal, healthy.
* hypervisors mounting it via fstab as usual.
* libvirtd starts normally, no errors (even with cephfs mounted).
* virsh fails to connect to libvirtd when the cephfs is mounted.
If I umount /cephfs and systemctl restart libvirtd, virsh works!
For example, "virsh list", "virsh version", etc.
I am going to try deleting the ISO pools I have.
One interesting thing I found yesterday: after I restart the hypervisor and
try to umount /cephfs, libvirtd has one of my pools locked and open. It's
the pool with ISOs in it. I know someone in a previous response to me
mentioned they had issues with ISOs stored on cephfs and libvirtd. In
order for me to umount /cephfs I have to stop all the libvirtd
services (libvirtd, libvirtd.socket, libvirtd-ro.socket and
libvirtd-admin.socket). This makes sense, since libvirtd has pools defined
on my cephfs.
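For anyone hitting the same thing, this is roughly how I'm checking which
defined pools sit on the cephfs mounts, run while cephfs is unmounted since
the connection hangs otherwise (a quick sketch with the Python bindings;
/data and /data2 are the mount points from my fstab above):

import libvirt
import xml.etree.ElementTree as ET

conn = libvirt.open("qemu:///system")
for pool in conn.listAllStoragePools():
    root = ET.fromstring(pool.XMLDesc())
    target = root.findtext("./target/path")
    # flag pools whose target path lives on one of the cephfs mounts
    on_ceph = bool(target and target.startswith(("/data", "/data2")))
    print(pool.name(), "autostart:", pool.autostart(),
          "active:", pool.isActive(), "path:", target, "cephfs:", on_ceph)
conn.close()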
thanks
jerry
virsh not connecting to libvirtd?
by Jerry Buburuz
I have two identical hypervisors running the same operating system: Ubuntu 22.04.2 LTS.
Recently virsh stopped talking to libvirtd on both. They stopped within a
few days of each other.
Currently if I run:
virsh uri
virsh version
virsh list
# virsh list
...nothing, it just hangs
When I run strace on these broken machines, it gets stuck at the same spot:
strace virsh list
...
access("/var/run/libvirt/virtqemud-sock", F_OK) = -1 ENOENT (No such file
or directory)
access("/var/run/libvirt/libvirt-sock", F_OK) = 0
socket(AF_UNIX, SOCK_STREAM, 0) = 5
connect(5, {sa_family=AF_UNIX, sun_path="/var/run/libvirt/libvirt-sock"},
110) = 0
getsockname(5, {sa_family=AF_UNIX}, [128 => 2]) = 0
futex(0x7fa716a672f0, FUTEX_WAKE_PRIVATE, 2147483647) = 0
fcntl(5, F_GETFD) = 0
fcntl(5, F_SETFD, FD_CLOEXEC) = 0
fcntl(5, F_GETFL) = 0x2 (flags O_RDWR)
fcntl(5, F_SETFL, O_RDWR|O_NONBLOCK) = 0
futex(0x7fa716a67348, FUTEX_WAKE_PRIVATE, 2147483647) = 0
eventfd2(0, EFD_CLOEXEC|EFD_NONBLOCK) = 6
write(6, "\1\0\0\0\0\0\0\0", 8) = 8
write(4, "\1\0\0\0\0\0\0\0", 8) = 8
write(4, "\1\0\0\0\0\0\0\0", 8) = 8
futex(0x7fa70c001cb0, FUTEX_WAKE_PRIVATE, 1) = 1
futex(0x7fa716a6786c, FUTEX_WAKE_PRIVATE, 2147483647) = 0
futex(0x7fa716a67378, FUTEX_WAKE_PRIVATE, 2147483647) = 0
write(4, "\1\0\0\0\0\0\0\0", 8) = 8
futex(0x7fa70c001cb0, FUTEX_WAKE_PRIVATE, 1) = 1
write(6, "\1\0\0\0\0\0\0\0", 8) = 8
rt_sigprocmask(SIG_BLOCK, [PIPE CHLD WINCH], [], 8) = 0
poll([{fd=5, events=POLLOUT}, {fd=6, events=POLLIN}], 2, -1) = 2 ([{fd=5,
revents=POLLOUT}, {fd=6, revents=POLLIN}])
read(6, "\2\0\0\0\0\0\0\0", 16) = 8
write(6, "\1\0\0\0\0\0\0\0", 8) = 8
write(6, "\1\0\0\0\0\0\0\0", 8) = 8
futex(0x5628ce6e9710, FUTEX_WAKE_PRIVATE, 2147483647) = 0
write(6, "\1\0\0\0\0\0\0\0", 8) = 8
write(6, "\1\0\0\0\0\0\0\0", 8) = 8
rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
write(5, "\0\0\0\34 \0\200\206\0\0\0\1\0\0\0B\0\0\0\0\0\0\0\0\0\0\0\0",
28) = 28
write(6, "\1\0\0\0\0\0\0\0", 8) = 8
rt_sigprocmask(SIG_BLOCK, [PIPE CHLD WINCH], [], 8) = 0
poll([{fd=5, events=POLLIN}, {fd=6, events=POLLIN}], 2, -1) = 1 ([{fd=6,
revents=POLLIN}])
read(6, "\5\0\0\0\0\0\0\0", 16) = 8
poll([{fd=5, events=POLLIN}, {fd=6, events=POLLIN}], 2, -1
It gets stuck at this poll(). Note that I ran strace on an identical new
install of Ubuntu 22.04 where virsh connects fine and got an identical
strace, except that after this poll() it continues on with read/write, etc.
I turned on debugging for libvirtd and got no errors while virsh was trying
to connect.
I am able to get a virsh# shell. The shell only hangs when I try "connect",
"uri", or "version".
Another method of debugging I tried was:
LIBVIRT_DEBUG=error LIBVIRT_LOG_FILTERS="1:* " virsh uri
..
..
2023-06-06 20:51:22.312+0000: 1647: debug : doRemoteOpen:1128 : Trying
authentication
2023-06-06 20:51:22.312+0000: 1647: debug : virNetMessageNew:44 :
msg=0x55b996539680 tracked=0
2023-06-06 20:51:22.312+0000: 1647: debug : virNetMessageEncodePayload:383
: Encode length as 28
2023-06-06 20:51:22.312+0000: 1647: info : virNetClientSendInternal:2151 :
RPC_CLIENT_MSG_TX_QUEUE: client=0x55b996538010 len=28 prog=536903814
vers=1 proc=66 type=0 status=0 serial=0
2023-06-06 20:51:22.312+0000: 1647: debug : virNetClientCallNew:2107 : New
call 0x55b996535f80: msg=0x55b996539680, expectReply=1, nonBlock=0
2023-06-06 20:51:22.312+0000: 1647: debug : virNetClientIO:1920 : Outgoing
message prog=536903814 version=1 serial=0 proc=66 type=0 length=28
dispatch=(nil)
2023-06-06 20:51:22.312+0000: 1647: debug : virNetClientIO:1978 : We have
the buck head=0x55b996535f80 call=0x55b996535f80
2023-06-06 20:51:22.312+0000: 1647: info : virEventGLibHandleUpdate:195 :
EVENT_GLIB_UPDATE_HANDLE: watch=1 events=0
2023-06-06 20:51:22.312+0000: 1647: debug : virEventGLibHandleUpdate:206 :
Update handle data=0x55b996534d30 watch=1 fd=5 events=0
2023-06-06 20:51:22.312+0000: 1647: debug : virEventGLibHandleUpdate:229 :
Removed old handle source=0x55b996534de0
2023-06-06 20:51:22.312+0000: 1648: debug : virEventRunDefaultImpl:341 :
running default event implementation
Any help would be appreciated.
thanks
jerry
Criteria used to allow or not live migration
by Gianluca Cecchi
Hello,
I would like to know what criteria allow or prevent live migration
when the source and destination hosts have the same CPU vendor (e.g. Intel)
but different models (e.g. Cascade Lake vs Ice Lake).
I imagine it is libvirtd that drives this decision, correct?
Does libvirt have a sort of table or list of CPU flags that are mandatory /
optional? Or how does it otherwise decide whether to allow live migration
across systems with different CPU models?
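The closest thing I have found so far is asking a host whether it can run a
given CPU definition via the compareCPU API (virsh cpu-compare /
hypervisor-cpu-compare); is this the same check migration relies on? A
minimal sketch, where the <cpu> XML is just an illustrative Cascade Lake
model, not taken from a real guest:

import libvirt

# Illustrative guest CPU definition, not from a real domain
GUEST_CPU = """
<cpu mode='custom' match='exact'>
  <model fallback='forbid'>Cascadelake-Server</model>
</cpu>
"""

# Connect to the destination host and ask whether it can provide this CPU
conn = libvirt.open("qemu:///system")
result = conn.compareCPU(GUEST_CPU, 0)

labels = {
    libvirt.VIR_CPU_COMPARE_INCOMPATIBLE: "incompatible",
    libvirt.VIR_CPU_COMPARE_IDENTICAL: "identical to host CPU",
    libvirt.VIR_CPU_COMPARE_SUPERSET: "host CPU is a superset",
}
print(labels.get(result, result))
conn.close()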
Thanks,
Gianluca