WinServer2016 guest no mouse in VirtManager
by John McInnes
Hi! I recently converted several Windows Server VMs from Hyper-V to libvirt/KVM. The host is running openSUSE Leap 15.3. I used virt-v2v and installed the virtio drivers on all of them, and it all went well - except for one VM. The mouse does not work for this VM in Virtual Machine Manager: there is no cursor and no response. No issues show up in Windows Device Manager, where the mouse appears as a PS/2 mouse. Interestingly, if I RDP into this VM using Microsoft Remote Desktop, the mouse works fine. Any ideas?
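A common workaround for this symptom - offered here as an assumption, not something confirmed for this particular VM - is to give the guest an absolute-pointing USB tablet device, so the graphical console no longer depends on the emulated PS/2 mouse. In the domain XML (edited with "virsh edit <domain>", then a guest power cycle), that would be:
<!-- hypothetical addition under <devices>: an absolute pointer for the console -->
<input type='tablet' bus='usb'/>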
----
John McInnes
jmcinnes /\T svt.org
1 year, 10 months
[Question] Should libvirt update live xml accordingly when guest mac is changed?
by Fangge Jin
Hi
I met an issue when testing trustGuestRxFilters:
Attach a macvtap interface with trustGuestRxFilters='yes' to a vm, then change the interface MAC address inside the vm.
Should libvirt update the interface MAC in the live vm XML accordingly? If not, the vm network will be broken after managedsave and restore.
BR,
Fangge Jin
Steps:
1. Start a vm
2. Attach a macvtap interface with trustGuestRxFilters='yes' to the vm:
<interface type='direct' trustGuestRxFilters='yes'>
  <source dev='enp175s0v0' mode='passthrough'/>
  <target dev='macvtap0'/>
  <model type='virtio'/>
  <alias name='net1'/>
</interface>
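(For reference, the attach itself can be done with something like the following, where net.xml is an assumed filename holding the snippet above:)
# virsh attach-device uefi net.xml --live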
3. Check the vm XML:
# virsh dumpxml uefi --xpath //interface
<interface type="direct" trustGuestRxFilters="yes">
<mac address="52:54:00:46:88:8b"/>
<source dev="enp175s0v0" mode="passthrough"/>
<target dev="macvtap2"/>
<model type="virtio"/>
<alias name="net0"/>
<address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
</interface>
4. Change the interface MAC in the guest:
# ip link set dev enp1s0 address 52:54:00:9d:a1:1e
# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
4: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:9d:a1:1e brd ff:ff:ff:ff:ff:ff permaddr 52:54:00:46:88:8b
    inet 192.168.124.5/24 scope global enp1s0
       valid_lft forever preferred_lft forever
# ping 192.168.124.4
PING 192.168.124.4 (192.168.124.4) 56(84) bytes of data.
64 bytes from 192.168.124.4: icmp_seq=1 ttl=64 time=0.240 ms
64 bytes from 192.168.124.4: icmp_seq=2 ttl=64 time=0.138 ms
--- 192.168.124.4 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
5. Check the vm XML again - the MAC shown is still the old one:
# virsh dumpxml uefi --xpath //interface
<interface type="direct" trustGuestRxFilters="yes">
<mac address="52:54:00:46:88:8b"/>
<source dev="enp175s0v0" mode="passthrough"/>
<target dev="macvtap2"/>
<model type="virtio"/>
<alias name="net0"/>
<address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
</interface>
6. Check on the host - the macvtap has picked up the guest's new MAC:
16: macvtap1@enp175s0v0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 500
    link/ether 52:54:00:9d:a1:1e brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fe46:888b/64 scope link
       valid_lft forever preferred_lft forever
7. Do managedsave and restore:
# virsh managedsave uefi
Domain 'uefi' state saved by libvirt
# virsh start uefi
Domain 'uefi' started
8. Check the vm network function in the guest:
# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
4: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:9d:a1:1e brd ff:ff:ff:ff:ff:ff permaddr 52:54:00:46:88:8b
    inet 192.168.124.5/24 scope global enp1s0
       valid_lft forever preferred_lft forever
# ping 192.168.124.4
PING 192.168.124.4 (192.168.124.4) 56(84) bytes of data.
--- 192.168.124.4 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2036ms
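The breakage makes sense if the restore recreates the macvtap from the stale MAC still recorded in the XML (52:54:00:46:88:8b) while the guest keeps using 52:54:00:9d:a1:1e, so frames addressed to the guest's MAC no longer reach it. A hedged workaround sketch, not verified: re-align the host side by hand after the restore (macvtap1 is the device name from step 6; the restored vm may get a different one):
# ip link set dev macvtap1 address 52:54:00:9d:a1:1e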
2 years, 2 months
[libvirt-users] [virtual interface] detach interface during boot succeeds with no changes
by Yalan Zhang
Hi guys,
When I detach an interface from a vm during boot (before the guest has finished booting), it never actually takes effect: the command reports success, but the interface is still present. I'm not sure if there is an existing bug for this. I have confirmed with someone that disks show similar behavior; is this also considered acceptable?
# virsh destroy rhel7.2; virsh start rhel7.2; sleep 2; virsh detach-interface rhel7.2 network 52:54:00:98:c4:a0; sleep 2; virsh dumpxml rhel7.2 | grep /interface -B9
Domain rhel7.2 destroyed
Domain rhel7.2 started
Interface detached successfully
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </controller>
    <interface type='network'>
      <mac address='52:54:00:98:c4:a0'/>
      <source network='default' bridge='virbr0'/>
      <target dev='vnet0'/>
      <model type='rtl8139'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
When I detach after the vm has booted (expanding the sleep time to 10), it succeeds:
# virsh destroy rhel7.2; virsh start rhel7.2; sleep 10; virsh detach-interface rhel7.2 network 52:54:00:98:c4:a0; sleep 2; virsh dumpxml rhel7.2 | grep /interface -B9
Domain rhel7.2 destroyed
Domain rhel7.2 started
Interface detached successfully
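If I understand the semantics correctly, device unplug is asynchronous: libvirt delivers the unplug request, but the guest OS has to acknowledge it, which it cannot do while it is still booting, so "successfully" only means the request was sent. A hedged sketch for scripting around it, polling until the device is really gone (names taken from the commands above):
for i in $(seq 1 30); do
    # stop waiting once the MAC no longer appears in the live XML
    virsh dumpxml rhel7.2 | grep -q 52:54:00:98:c4:a0 || break
    sleep 2
done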
-------
Best Regards,
Yalan Zhang
IRC: yalzhang
Internal phone: 8389413
2 years, 3 months
Libvirtd slower over time
by André Malm
Hello,
I have some issues with libvirtd getting slow over time.
After a fresh reboot (or systemctl restart libvirtd), virsh list and virt-install are fast, as expected, but after a couple of months of uptime they both take significantly longer.
virsh list takes around 3 seconds (up from 0.04s on a fresh reboot) and virt-install takes over a minute (up from around a second).
Running strace on virsh list it seems to get stuck in a loop on this:
poll([{fd=5<socket:[173169773]>, events=POLLOUT}, {fd=6<anon_inode:[eventfd]>, events=POLLIN}], 2, -1) = 2 ([{fd=5, revents=POLLOUT}, {fd=6, revents=POLLIN}])
While restarting libvirtd fixes it, a restart takes around 1 minute (ebtables rules etc. are recreated) and it interrupts the service. What could cause this? How would I troubleshoot it?
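(A hedged troubleshooting idea: raise the daemon's own debug logging at runtime with virt-admin and look for which RPC calls are slow; the filter string and log path below are illustrative, not prescribed:)
# virt-admin daemon-log-filters "1:qemu 1:rpc"
# virt-admin daemon-log-outputs "1:file:/var/log/libvirt/libvirtd-debug.log"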
I'm running Ubuntu 22.04 / libvirt 8.0.0 with 70 active VMs on a 16-core/32-thread machine with 256 GB of RAM. CPU is below 50% usage at all times, memory below 50% usage, and swap at 0% usage.
Thanks,
André
2 years, 3 months
NUMA node - Memory Only
by Jin Huang
Hi, libvirt-users
How can I set up a memory-only, no-CPU NUMA node for a qemu VM in the domain XML?
It seems each NUMA cell has to be bound to specific CPU ids.
If I write the element like this, it is wrong:
<cpu mode='host-passthrough'>
  <numa>
    <cell id='0' cpus='0-3' memory='16' unit='GiB'/>
    <cell id='1' cpus='*null*' memory='16' unit='GiB'/>
  </numa>
</cpu>
Also, if I simply omit the cpus attribute, virsh does not accept it.
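(If memory serves - treat the version requirement as an assumption to verify against the release notes - libvirt 6.6.0 made the cpus attribute optional precisely so that CPU-less, memory-only cells can be declared, given a new enough QEMU. The sketch would then be:)
<cpu mode='host-passthrough'>
  <numa>
    <cell id='0' cpus='0-3' memory='16' unit='GiB'/>
    <!-- no cpus attribute: a memory-only NUMA cell -->
    <cell id='1' memory='16' unit='GiB'/>
  </numa>
</cpu>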
The reason I want the memory-only node is that I want to set up multiple
memory tiers for the VM, just like
https://stevescargall.com/2022/06/10/using-linux-kernel-memory-tiering/
Thank You
Best
Jin Huang
2 years, 4 months
Memory locking limit and zero-copy migrations
by Milan Zamazal
Hi,
Do I read the libvirt sources right that, when <memtune> is not used in the libvirt domain, libvirt takes proper care of setting memory locking limits when zero-copy is requested for a migration?
I also wonder whether there are any other situations where memory limits might be set automatically by libvirt or QEMU, rather than there being no memory limits at all. We had oVirt bugs in the past where certain VMs with VFIO devices couldn't be started due to the extra requirements on the amount of locked memory, and adding <hard_limit> to the domain apparently helped.
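(For context, the element in question looks like this; the value is purely illustrative, and for VFIO guests it would need to cover guest RAM plus device/IO overhead:)
<memtune>
  <!-- upper bound on the memory the QEMU process may consume/lock -->
  <hard_limit unit='GiB'>20</hard_limit>
</memtune>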
Thanks,
Milan
2 years, 4 months
Problem with a disk device of type 'volume'
by Frédéric Lespez
Hi,
I need some help to debug a problem with libvirt and a disk device of
type 'volume'.
I have a VM failing to start with the following error:
$ virsh -c qemu:///system start server
error: Failed to start domain 'server'
error: internal error: process exited while connecting to monitor: 2022-08-13T09:26:50.121259Z qemu-system-x86_64: -blockdev {"driver":"file","filename":"/mnt/images/debian-11-genericcloud-amd64.qcow2","node-name":"libvirt-3-storage","auto-read-only":true,"discard":"unmap"}: Could not open '/mnt/images/debian-11-genericcloud-amd64.qcow2': Permission denied
I checked the file access permissions, but they are correct. I tried setting everything to 777 and even running qemu as root, but the problem persists.
$ ll -d /mnt /mnt/images /mnt/images/*
drwxr-xr-x 9 root root 4,0K 31 déc. 2021 /mnt
drwxr-xr-x 2 root root 4,0K 13 août 11:31 /mnt/images
-rw-r--r-- 1 libvirt-qemu libvirt-qemu 242M 13 août 11:31 /mnt/images/debian-11-genericcloud-amd64.qcow2
-rw-r--r-- 1 libvirt-qemu libvirt-qemu 366K 13 août 11:31 /mnt/images/server_cloudinit.iso
-rw-r--r-- 1 libvirt-qemu libvirt-qemu 593M 13 août 11:59 /mnt/images/server_image.qcow2
After a lot of searching and testing, I found out that the disk device definition is linked to the source of the problem.
The disk device is defined like this:
<disk type="volume" device="disk">
<driver name="qemu" type="qcow2"/>
<source pool="TERRAFORM" volume="server_image.qcow2"/>
<target dev="vda" bus="virtio"/>
<address type="pci" domain="0x0000" bus="0x00" slot="0x05"
function="0x0"/>
</disk>
This image 'server_image.qcow2' uses a backing file:
$ qemu-img info /mnt/images/server_image.qcow2 --backing-chain
image: /mnt/stockage_rapide/VMs/terraform/puppetdev_server_image.qcow2
file format: qcow2
virtual size: 6 GiB (6442450944 bytes)
disk size: 475 MiB
cluster_size: 65536
backing file: /mnt/images/debian-11-genericcloud-amd64.qcow2
backing file format: qcow2
Format specific information:
    compat: 0.10
    compression type: zlib
    refcount bits: 16

image: /mnt/images/debian-11-genericcloud-amd64.qcow2
file format: qcow2
virtual size: 2 GiB (2147483648 bytes)
disk size: 242 MiB
cluster_size: 65536
Format specific information:
    compat: 1.1
    compression type: zlib
    lazy refcounts: false
    refcount bits: 16
    corrupt: false
    extended l2: false
And here is the definition of the associated storage pool:
<pool type="dir">
<name>TERRAFORM</name>
<uuid>dae00836-db4d-49ba-9d32-1f0278055516</uuid>
<capacity unit="bytes">155674652672</capacity>
<allocation unit="bytes">74396299264</allocation>
<available unit="bytes">81278353408</available>
<source>
</source>
<target>
<path>/mnt/images</path>
<permissions>
<mode>0755</mode>
<owner>0</owner>
<group>0</group>
</permissions>
</target>
</pool>
If I change the disk device definition to this (changing only that), the domain starts and works fine (no permission problem!):
<disk type="file" device="disk">
<driver name="qemu" type="qcow2"/>
<source file="/mnt/images/server_image.qcow2"/>
<target dev="vda" bus="virtio"/>
<address type="pci" domain="0x0000" bus="0x00" slot="0x05"
function="0x0"/>
</disk>
Could you help me find the reason why the domain doesn't work when the disk device is of type 'volume'?
Thanks in advance for your help.
Regards,
Fred
Additional information:
- Running this on Debian 11 with libvirt 8.0.0 (from backports) and qemu
7.0 (from backports).
- Vanilla configuration of libvirt. I have just added my regular user to
the libvirt group.
- Problem exists even if AppArmor is disabled.
PS: I want to use a disk device of type 'volume' because this domain is created by Terraform using the libvirt provider, which uses this kind of disk since it has some advantages. See the details here:
https://github.com/dmacvicar/terraform-provider-libvirt/issues/126#issuec...
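A hedged check rather than a confirmed diagnosis: with type='volume', libvirt takes the disk and its backing chain from the storage pool's metadata instead of probing the file directly, so a pool scanned before the volume or its backing file existed may leave libvirt unaware of the backing file, and the qemu process then never gets access to it. Refreshing the pool and inspecting the volume XML would confirm or rule that out:
$ virsh pool-refresh TERRAFORM
$ virsh vol-dumpxml server_image.qcow2 --pool TERRAFORM
The second command should show a <backingStore> element pointing at debian-11-genericcloud-amd64.qcow2 if libvirt has picked up the chain.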
2 years, 4 months
Libvirt virsh : Error starting network, cannot execute binary /usr/sbin/iptables
by Pascal
Hi,
I am a bit lost and hope someone can help me. I am running Debian bookworm (testing) with the latest updates.
$ sudo apt policy libvirt-daemon
libvirt-daemon:
Installé : 8.5.0-1
Candidat : 8.5.0-1
Table de version :
*** 8.5.0-1 100
100 /var/lib/dpkg/status
I am unable to start the default network, and get an error related to iptables:
$ sudo virsh net-start default
erreur :Impossible de démarrer le réseau default
erreur :internal error: Failed to apply firewall rules /usr/sbin/iptables -w --table filter --list-rules: libvirt: erreur : cannot execute binary /usr/sbin/iptables: Aucun fichier ou dossier de ce type
Sorry for the French; it says "impossible to start default network" and "no such file or folder" at the end.
It is true that I removed iptables, because I want to use only nftables (I removed both the ufw and iptables packages with apt remove, and enabled the nftables service before the error appeared). Before this all was fine, but when I enabled nftables, all VMs disappeared from virt-manager.
I uninstalled the KVM-related packages and reinstalled them; still the same.
I also installed iptables back, but strangely I still get the same error, although the binary /usr/sbin/iptables is there.
I tried many things with no luck: restarted the libvirtd service, recreated the network, etc.
Does anyone have an idea about what is happening here? Is there some incompatibility between nftables (the firewalld service is disabled) and libvirt?
Thank you,
Pascal
2 years, 4 months