WinServer2016 guest no mouse in VirtManager
by John McInnes
Hi! I recently converted several Windows Server VMs from Hyper-V to libvirt/KVM. The host is running openSUSE Leap 15.3. I used virt-v2v and installed virtio drivers on all of them, and it all went well - except for one VM. The mouse does not work for this VM in Virtual Machine Manager: there is no cursor and no response. There are no issues showing in Windows Device Manager, and the mouse shows up as a PS/2 mouse. Interestingly, if I RDP into this VM using Microsoft Remote Desktop, the mouse works fine. Any ideas?
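In case it helps, here is a sketch of what I plan to check next, assuming the problem is the pointer device in the domain XML (the domain name is a placeholder and this is only a guess, not a confirmed fix):
```
# Check which input devices the guest has (domain name is a placeholder)
virsh dumpxml winserver2016 | grep -A1 '<input'
# If there is no absolute-pointer device, add a USB tablet to the inactive XML
# and cold boot the guest
cat > tablet.xml <<'EOF'
<input type='tablet' bus='usb'/>
EOF
virsh attach-device winserver2016 tablet.xml --config
```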
----
John McInnes
jmcinnes /\T svt.org
Libvirt slow after a couple of months uptime
by André Malm
Hello,
I have some issues with libvirtd getting slow over time.
After a fresh reboot (or systemctl restart libvirtd), virsh list and
virt-install are fast, as expected, but after a couple of months of uptime
they both take significantly longer.
Virsh list takes around 3 seconds (from 0.04s on a fresh reboot) and
virt-install takes over a minute (from around a second).
Running strace on virsh list, it seems to get stuck in a loop on this:
poll([{fd=5<socket:[173169773]>, events=POLLOUT},
{fd=6<anon_inode:[eventfd]>, events=POLLIN}], 2, -1) = 2 ([{fd=5,
revents=POLLOUT}, {fd=6, revents=POLLIN}])
While restarting libvirtd fixes it, a restart takes around 1 minute, during
which ebtables rules etc. are recreated, and it interrupts the service. What
could cause this? How would I troubleshoot it?
I'm running Ubuntu 22.04 / libvirt 8.0.0 with 70 active VMs on a 16/32
core machine with 256 GB of RAM. CPU usage is below 50% at all times,
memory usage is below 50% and swap usage is 0%.
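For reference, here is a sketch of how I plan to gather more data (log paths and filter values are only examples):
```
# Time the slow call and capture per-syscall latency (PID lookup assumes a monolithic libvirtd)
time virsh list --all
strace -f -T -o /tmp/libvirtd.strace -p "$(pidof libvirtd)"
# Raise libvirtd logging at runtime via virt-admin (filter/output syntax as in libvirtd.conf)
virt-admin daemon-log-filters "1:qemu 1:rpc"
virt-admin daemon-log-outputs "1:file:/var/log/libvirt/libvirtd-debug.log"
```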
Thanks,
André
Caching qemu capabilities and KubeVirt
by Roman Mohr
Hi,
I have a question regarding capability caching in the context of KubeVirt.
Since KubeVirt starts one libvirt instance per VM, libvirt has to
re-discover the qemu capabilities on every VM start, which leads to a 1-2s+
delay in startup.
We already discover the features in a dedicated KubeVirt pod on each node.
Therefore I tried to copy the capabilities over to see if that would work.
It looks like in general it could work, but libvirt seems to detect a
mismatch in the exposed KVM CPU ID in every pod. Therefore it invalidates
the cache. The recreated capability cache looks exactly like the original
one, though ...
The check responsible for the invalidation is this:
```
Outdated capabilities for '%s': host cpuid changed
```
So the KVM_GET_SUPPORTED_CPUID call seems to return
slightly different values in different containers.
After trying out the attached golang scripts in different containers, I
could indeed see differences.
However, I cannot really judge what the differences in these KVM function
registers mean, and I am curious whether someone else knows. The files are
attached too (as JSON for easy diffing).
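For completeness, here is a rough shell sketch of the comparison without the Go scripts (pod names are placeholders; the cache path assumes the default system qemu driver location):
```
# Dump the cached qemu capabilities, which should embed the host CPUID data
# used for the "host cpuid changed" check, from two different pods and diff them
kubectl exec virt-launcher-pod-a -- sh -c 'cat /var/cache/libvirt/qemu/capabilities/*.xml' > caps-a.xml
kubectl exec virt-launcher-pod-b -- sh -c 'cat /var/cache/libvirt/qemu/capabilities/*.xml' > caps-b.xml
diff caps-a.xml caps-b.xml
```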
Best regards,
Roman
Re: Why does the host device disappear in the libvirt domain XML?
by Michal Prívozník
[Once again, please keep the list on CC]
On 9/29/22 04:59, 陈新隆 wrote:
> Thanks for the detailed explanation, it's very helpful.
>
> I guess :
> 1. virsh dumpxml <domain> outputs the live xml (or active xml)
Yes, if the domain is running then it outputs the live XML, otherwise it
outputs the inactive XML. For running domains you can use the --inactive
flag to get the inactive XML.
> 2. virsh edit <domain> edits the inactive xml, so I should cold boot the
> domain for it to take effect
Correct.
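A quick illustration (the domain name 'demo' is a placeholder):
```
virsh dumpxml demo              # live XML while the domain is running
virsh dumpxml demo --inactive   # inactive XML, i.e. what the next cold boot will use
virsh edit demo                 # edits the inactive XML only
virsh shutdown demo             # cold boot ...
virsh start demo                # ... so the edited XML takes effect
```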
Michal
Re: Why does the host device disappear in the libvirt domain XML?
by Michal Prívozník
[Please, keep the list on CC for the benefit of others]
On 9/28/22 10:13, 陈新隆 wrote:
> I'm using `virsh dumpxml <domain name>` to check the XML. When I
> first executed this command, there were two hostdev (GPU) elements in the
> XML, but the next time I executed the same command these two hostdev
> elements had disappeared. During this time, the vm didn't restart or stop.
> Also, within the vm the `lspci | grep -i nvidia` command did not print any
> GPU info.
>
> So I was wondering if there's a mechanism in libvirtd that detaches the
> hostdev element without the vm stopping or restarting. This problem doesn't
> happen often, so I am looking for a way to reproduce it.
>
> Can you help me with these questions :
>
> 1. If I edit the XML manually and then detach the hostdev elements from
> the XML by invoking libvirtd APIs, will the vm apply it immediately without
> a stop or restart? Or must I restart the vm to apply the latest XML?
I'm not sure what you mean. Editing the XML manually is different from using
libvirt APIs to detach hostdevs. Here's how it works:
1) a domain is defined (say using virsh define file.xml); libvirt parses
this XML, keeps it in memory and stores it "somewhere" (it's under
/etc/libvirt/qemu/, but we do not want users to hand edit those files
manually, as libvirt reads them only on libvirtd/virtqemud restart). This
is called the inactive XML, because it reflects the inactive state of the
domain. Sometimes it's also called the config XML.
2) when the domain is started (e.g. virsh start), libvirt creates a copy
of the inactive XML, populates it with runtime information and saves it
elsewhere (/run/libvirt/qemu/, but again, we do not want users to hand
edit those files). This copy is referred to as the live XML.
3) users can alter the live XML using APIs (e.g. to hotplug a device or
hotunplug it). The inactive XML can be altered by providing the altered XML
and defining it again (here, the domain name and UUID must match the
already existing domain).
4) upon domain shutoff, the live XML is thrown away, and finally
5) the inactive XML is never thrown away, until virsh undefine is called.
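To make the above concrete, here is a small sketch (file and domain names are placeholders):
```
virsh define demo.xml             # 1) parse and store the inactive XML
cat /etc/libvirt/qemu/demo.xml    #    on-disk copy - not meant for hand editing
virsh start demo                  # 2) create the live XML from a copy of the inactive one
cat /run/libvirt/qemu/demo.xml    #    runtime copy - again, not for hand editing
```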
Now, you can see that there's no real connection between the live and
inactive XMLs and changing one has no effect on the other, except when
the domain is cold booted again. Live and inactive XMLs can vary wildly.
Therefore, you can have an inactive XML with two <hostdev/>-s, and an active
XML with no <hostdev/> at all. NB, the hotplug APIs can also be used to alter
the inactive XML (virsh attach-device --config / virsh detach-device
--config / ...) or both at the same time (virsh attach-device --config
--live / virsh detach-device --config --live / ...).
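For instance (the domain name and device XML file are placeholders):
```
virsh detach-device demo gpu.xml --live             # change only the running guest
virsh detach-device demo gpu.xml --config           # change only the inactive XML
virsh detach-device demo gpu.xml --live --config    # change both at once
```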
And what you describe sounds as if those two <hostdev/>-s you saw at domain
startup were hot-unplugged. The fact that even 'lspci' run from inside the
domain can't find them only supports this theory.
And no, libvirt never tries to bring the inactive and live XMLs together. It
has no intelligence built in to do that, and we, the developers, do not want
such a thing either. We might change something that the user specifically
wanted, and I believe nobody likes those "smart" tools that get in your way.
> 2. After a host device is hot-unplugged, will libvirtd be aware of it and
> then remove the related `hostdev` element from the XML?
Yes. I believe the reasoning is seen in the previous block of my reply.
>
> On Mon, Sep 26, 2022 at 10:10 PM Michal Prívozník <mprivozn(a)redhat.com> wrote:
>
> On 9/26/22 15:06, 陈新隆 wrote:
> >
> > <https://stackoverflow.com/posts/73854544/timeline>
> >
> > I'm using Kubevirt to manage my virtual machine instances. When I use
> > Kubevirt to create a vm (with two GPUs), Kubevirt will generate a libvirt
> > guest domain xml for this vm which includes the two GPUs. The domain
> > xml is as follows:
> >
> > <hostdev mode='subsystem' type='pci' managed='no'>
> >   <driver name='vfio'/>
> >   <source>
> >     <address domain='0x0000' bus='0x83' slot='0x00' function='0x0'/>
> >   </source>
> >   <alias name='ua-gpu-gpu0'/>
> >   <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
> > </hostdev>
> > <hostdev mode='subsystem' type='pci' managed='no'>
> >   <driver name='vfio'/>
> >   <source>
> >     <address domain='0x0000' bus='0x84' slot='0x00' function='0x0'/>
> >   </source>
> >   <alias name='ua-gpu-gpu1'/>
> >   <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
> > </hostdev>
> >
> > No one ever edited this domain xml, but these two hostdev elements
> > disappeared from the domain xml. During this time, I've run
> > the gpu_burn command to do a stress test on these two GPUs.
> >
> > My question is:
> >
> > * when will libvirtd change the guest domain xml?
> > * why did libvirtd delete these two hostdev elements from the domain xml?
> >
>
> Libvirt does not remove anything from the domain XML (except for
> elements it does not understand, but that is not the case here). My suspicion
> is that you're looking at the live XML instead of the inactive XML or vice
> versa. Libvirt allows guests to be defined (i.e. libvirt manages their
> inactive definition). However, a guest can be started with a wildly
> different configuration (e.g. without those two <hostdev/>-s). OR, they
> might have been hot-unplugged.
>
>
I still recommend reading this link:
> This article can explain more details:
> https://wiki.libvirt.org/page/VM_lifecycle
Michal
[Question] Should libvirt update live xml accordingly when guest mac is changed?
by Fangge Jin
Hi
I met an issue when testing trustGuestRxFilters:
Attach a macvtap interface with trustGuestRxFilters='yes' to a vm, then
change the interface mac address in the vm.
Should libvirt update the interface mac in the live vm xml accordingly? If
not, the vm network will be broken after managedsave and restore.
BR,
Fangge Jin
Steps:
1. Start a vm
2. Attach a macvtap interface with trustGuestRxFilters='yes' to vm:
<interface type='direct' trustGuestRxFilters='yes'>
<source dev='enp175s0v0' mode='passthrough'/>
<target dev='macvtap0'/>
<model type='virtio'/>
<alias name='net1'/>
</interface>
3. Check vm xml:
# virsh dumpxml uefi --xpath //interface
<interface type="direct" trustGuestRxFilters="yes">
<mac address="52:54:00:46:88:8b"/>
<source dev="enp175s0v0" mode="passthrough"/>
<target dev="macvtap2"/>
<model type="virtio"/>
<alias name="net0"/>
<address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
</interface>
4. Change interface mac in guest:
# ip link set dev enp1s0 address 52:54:00:9d:a1:1e
# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
4: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
fq_codel state UP group default qlen 1000
link/ether 52:54:00:9d:a1:1e brd ff:ff:ff:ff:ff:ff permaddr
52:54:00:46:88:8b
inet 192.168.124.5/24 scope global enp1s0
valid_lft forever preferred_lft forever
# ping 192.168.124.4
PING 192.168.124.4 (192.168.124.4) 56(84) bytes of data.
64 bytes from 192.168.124.4: icmp_seq=1 ttl=64 time=0.240 ms
64 bytes from 192.168.124.4: icmp_seq=2 ttl=64 time=0.138 ms
--- 192.168.124.4 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
5. Check vm xml:
# virsh dumpxml uefi --xpath //interface
<interface type="direct" trustGuestRxFilters="yes">
<mac address="52:54:00:46:88:8b"/>
<source dev="enp175s0v0" mode="passthrough"/>
<target dev="macvtap2"/>
<model type="virtio"/>
<alias name="net0"/>
<address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
</interface>
6. Check on host:
16: macvtap1@enp175s0v0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
qdisc noqueue state UP group default qlen 500
link/ether 52:54:00:9d:a1:1e brd ff:ff:ff:ff:ff:ff
inet6 fe80::5054:ff:fe46:888b/64 scope link
valid_lft forever preferred_lft forever
7. Do managedsave and restore:
# virsh managedsave uefi
Domain 'uefi' state saved by libvirt
# virsh start uefi
Domain 'uefi' started
8. Check vm network function:
# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
4: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
fq_codel state UP group default qlen 1000
link/ether 52:54:00:9d:a1:1e brd ff:ff:ff:ff:ff:ff permaddr
52:54:00:46:88:8b
inet 192.168.124.5/24 scope global enp1s0
valid_lft forever preferred_lft forever
# ping 192.168.124.4
PING 192.168.124.4 (192.168.124.4) 56(84) bytes of data.
--- 192.168.124.4 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2036ms
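As an untested workaround sketch (addresses taken from the steps above), resetting the guest NIC back to the MAC recorded in the live XML should restore connectivity after the restore:
```
# Inside the guest, after restore: set the NIC back to the MAC libvirt has in the live XML
ip link set dev enp1s0 address 52:54:00:46:88:8b
ping -c 2 192.168.124.4
```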
Why does the host device disappear in the libvirt domain XML?
by 陈新隆
<https://stackoverflow.com/posts/73854544/timeline>
I'm using Kubevirt to manage my virtual machine instances. When I use
Kubevirt to create a vm (with two GPUs), Kubevirt will generate a libvirt
guest domain xml for this vm which includes the two GPUs. The domain xml is
as follows:
<hostdev mode='subsystem' type='pci' managed='no'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x83' slot='0x00' function='0x0'/>
  </source>
  <alias name='ua-gpu-gpu0'/>
  <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='no'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x84' slot='0x00' function='0x0'/>
  </source>
  <alias name='ua-gpu-gpu1'/>
  <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
</hostdev>
No one ever edited this domain xml, but these two hostdev elements disappeared
from the domain xml. During this time, I ran the gpu_burn command to do a
stress test on these two GPUs.
My questions are:
- when will libvirtd change the guest domain xml?
- why did libvirtd delete these two hostdev elements from the domain xml?
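A sketch of the checks I can run the next time this happens (the domain name is a placeholder):
```
# Compare live and inactive XML, and watch for hot-unplug events
virsh dumpxml vm1 | grep -c '<hostdev'
virsh dumpxml vm1 --inactive | grep -c '<hostdev'
virsh event vm1 --event device-removed --loop
```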
Backup KVM Guest VM in OVA or VMDK format
by Kaushal Shriyan
Hi,
Is there a way to back up a KVM guest VM in kvmguestosimage.ova or
kvmguestosimage.vmdk format? I am trying to restore it in AWS by
referring to the https://aws.amazon.com/ec2/vm-import/ article, which
lists the below supported file formats:
[1] Open Virtualization Archive (OVA)
[2] Virtual Machine Disk (VMDK)
[3] Virtual Hard Disk (VHD/VHDX)
[4] raw
Also, is there any method to take full and incremental backups of a KVM guest VM?
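For the disk itself, a hedged sketch of a plain conversion with qemu-img (paths are placeholders; this produces only a VMDK disk image, not a complete OVA package):
```
# Convert the guest disk to a stream-optimized VMDK for import (paths are placeholders)
qemu-img convert -p -O vmdk -o subformat=streamOptimized \
    /var/lib/libvirt/images/kvmguest.qcow2 kvmguestosimage.vmdk
```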
Any help will be highly appreciated. I look forward to hearing from you.
Thanks in Advance.
Best Regards,
Kaushal