WinServer2016 guest no mouse in VirtManager
by John McInnes
Hi! I recently converted several Windows Server VMs from Hyper-V to libvirt/KVM. The host is running openSUSE Leap 15.3. I used virt-v2v and installed the virtio drivers on all of them, and it all went well, except for one VM. The mouse does not work for this VM in Virtual Machine Manager: there is no cursor and no response. There are no issues showing in Windows Device Manager, where the mouse shows up as a PS/2 mouse. Interestingly, if I RDP into this VM using Microsoft Remote Desktop, the mouse works fine. Any ideas?
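One hedged first check (the domain name below is illustrative, not taken from this report): a cursor that works over RDP but not in the graphical console often points at the guest lacking a tablet device for absolute pointer positioning, leaving only the emulated PS/2 mouse.
# does the domain have a tablet input device?
virsh dumpxml WinServer2016 | grep -B1 -A1 '<input'
# if only the PS/2 mouse is present, a candidate fix via 'virsh edit' is:
#   <input type='tablet' bus='usb'/>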
----
John McInnes
jmcinnes /\T svt.org
[libvirt-users] [virtual interface] detach interface during boot succeeds with no changes
by Yalan Zhang
Hi guys,
When I detach an interface from a VM during boot (before the guest has finished booting), it always fails silently: the command reports success, but the interface is still present in the domain XML. I'm not sure if there is an existing bug for this. I have confirmed with someone that disk detach shows similar behavior; is this also considered acceptable?
# virsh destroy rhel7.2; virsh start rhel7.2; sleep 2; \
  virsh detach-interface rhel7.2 network 52:54:00:98:c4:a0; sleep 2; \
  virsh dumpxml rhel7.2 | grep /interface -B9
Domain rhel7.2 destroyed
Domain rhel7.2 started
Interface detached successfully
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </controller>
    <interface type='network'>
      <mac address='52:54:00:98:c4:a0'/>
      <source network='default' bridge='virbr0'/>
      <target dev='vnet0'/>
      <model type='rtl8139'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
When I detach after the VM has booted (expanding the sleep to 10 seconds), it succeeds and the interface is removed from the XML:
# virsh destroy rhel7.2; virsh start rhel7.2; sleep 10; \
  virsh detach-interface rhel7.2 network 52:54:00:98:c4:a0; sleep 2; \
  virsh dumpxml rhel7.2 | grep /interface -B9
Domain rhel7.2 destroyed
Domain rhel7.2 started
Interface detached successfully
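As a workaround, a hedged sketch (same domain and MAC as above) that retries the detach until the interface is actually gone from the live XML, instead of relying on a fixed sleep:
# keep retrying; a repeat attempt may report an error while an earlier
# detach request is still pending, which does not stop the loop
virsh start rhel7.2
while virsh dumpxml rhel7.2 | grep -q '52:54:00:98:c4:a0'; do
    virsh detach-interface rhel7.2 network 52:54:00:98:c4:a0
    sleep 2
done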
-------
Best Regards,
Yalan Zhang
IRC: yalzhang
Internal phone: 8389413
Issue with Guest and 2 Displays
by Thomas Luening
Hi all,
Since switching from Debian 10 to Debian 11, I've had a new, small, unsolved problem, with essentially the same configuration as before.
It all takes place on my local desktop PC, where I have set up a restricted VM for Internet use.
After starting the VM locally on my PC, I can operate it in virt-viewer without any errors. But if I then open the second monitor via the menu 'View->Display', a blank white window appears with the message "Waiting for display 2".
Inside the guest, xrandr correctly shows the second display as disconnected, and it also lists all available display modes.
After the following command:
$ xrandr --output Virtual-2 --auto --right-of Virtual-1
both (!) displays turn white, both wait for a display, and nothing works anymore.
A previously started SSH connection to the VM from the PC shows the following at that moment:
tom@pc:~
$ ssh 192.168.100.10
tom@vm:~
$ su -
Password:
root@vm:~
# journalctl -f
Apr 29 16:24:34 internet kernel: [drm] driver is in bug mode
Apr 29 16:24:44 internet spice-vdagent[809]: Unable to find a display id for output index 2)
Apr 29 16:24:44 internet spice-vdagent[809]: Unable to find a display id for output index 3)
Apr 29 16:24:44 internet spice-vdagent[809]: Unable to find a display id for output index 2)
Apr 29 16:24:44 internet spice-vdagent[809]: Unable to find a display id for output index 3)
Apr 29 16:24:44 internet kernel: input: spice vdagent tablet as /devices/virtual/input/input7
It is no longer possible to restart the VM via "systemctl reboot"; the VM is probably stuck in some systemd timeouts. Only "systemctl reboot -ff" leads to a reboot, as a hard intervention.
Does anyone know this problem? Is there a solution to get the second display working?
Below is some relevant system information for my PC and the VM.
root@pc:~
# lsb_release -a
Distributor ID: Debian
Description: Debian GNU/Linux 11 (bullseye)
Release: 11
Codename: bullseye
root@pc:~
# uname -a
Linux pc 5.10.0-13-amd64 #1 SMP Debian 5.10.106-1 (2022-03-17) x86_64 GNU/Linux
root@pc:~
# cat /etc/libvirt/qemu/internet.xml | grep '<video' -A 3
    <video>
      <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='2' primary='yes'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
tom@vm:~
$ lsb_release -a
Distributor ID: Debian
Description: Debian GNU/Linux 11 (bullseye)
Release: 11
Codename: bullseye
tom@vm:~
$ dpkg -l | grep -i spice
ii spice-vdagent 0.20.0-2 amd64 Spice agent for Linux
tom@vm:~
$ systemctl status spice-vdagent.service
● spice-vdagentd.service - Agent daemon for Spice guests
     Loaded: loaded (/lib/systemd/system/spice-vdagentd.service; indirect; vendor preset: enabled)
     Active: active (running) since Fri 2022-04-29 16:00:56 CEST; 16min ago
TriggeredBy: ● spice-vdagentd.socket
    Process: 847 ExecStart=/usr/sbin/spice-vdagentd $SPICE_VDAGENTD_EXTRA_ARGS (code=exited, status=0/SUCCESS)
   Main PID: 849 (spice-vdagentd)
      Tasks: 2 (limit: 4702)
     Memory: 944.0K
        CPU: 1.918s
     CGroup: /system.slice/spice-vdagentd.service
             └─849 /usr/sbin/spice-vdagentd
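One hedged check worth adding here (not part of the original report): multi-display handling is done by the per-session agent, spice-vdagent, which must run inside the desktop session in addition to the spice-vdagentd system daemon shown above.
tom@vm:~
$ pgrep -a spice-vdagent
# expect both the system daemon (spice-vdagentd) and the session agent
# (spice-vdagent); if the session agent is missing, display 2 typically
# never activates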
Thanks and Best Regards
Tom
Re: how to change emulator path during live migration
by Peter Krempa
[re-adding libvirt-users list]
Please always reply to the list so that the follow-up conversation is
archived and delivered to all subscribers.
On Wed, Apr 27, 2022 at 15:36:54 +0800, Jiatong Shen wrote:
> Thank you for the feedback!
>
> Is it ok if the source node does not contain an emulator path used by the
> dest node? For example, on src the emulator path is /a/b/c, but
> on dest it is /a/b/d, and /a/b/d does not exist on src.
You can change the emulator path arbitrarily. The only limitation is
that the emulator you pick (the binary, not the path) must be able to
run the VM, but that will be validated during the migration.
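For illustration, a hedged sketch of one way to do that with virsh (the --xml option supplies an updated domain XML for the destination; the domain name and paths below are illustrative):
# dump the current definition, then edit <emulator> to a binary path
# that exists on the destination host
virsh dumpxml guest > guest-dest.xml
virsh migrate --live --persistent --xml guest-dest.xml \
    guest qemu+ssh://dest/system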
how to change emulator path during live migration
by Jiatong Shen
Hello libvirt experts,
I am facing the following exception while live-migrating a virtual machine from one compute node to another:
  File "/var/lib/openstack/lib/python3.6/site-packages/eventlet/tpool.py", line 83, in tworker
    rv = meth(*args, **kwargs)
  File "/var/lib/openstack/lib/python3.6/site-packages/libvirt.py", line 1745, in migrateToURI3
    if ret == -1: raise libvirtError('virDomainMigrateToURI3() failed', dom=self)
libvirt.libvirtError: Cannot check QEMU binary /usr/bin/kvm-spice: No such file or directory
After some investigation, we found that this error is triggered because we do not have qemu-kvm installed in our container (by the way, libvirt is installed directly on the source node).
I have the following question: is it possible to change the emulator during live migration? I tried removing the <emulator> element under <devices>, but it does not seem to help.
Thank you very much; looking forward to your feedback.
--
Best Regards,
Jiatong Shen
set fixed time at vm guest startup?
by Fred Clift
I'm looking at the <clock> libvirt parameters.
Is there a way to set the guest's clock to a specific time/date at VM startup? I'm trying to virtualize a system with PCI passthrough for a hardware device that needs to always live pre-2011: the license manager for the software/hardware has a 31-bit overflow in its time calculations.
The current process on the physical machine is to power it on, use the BIOS to set the hardware date to Jan 1 2010, boot the system, use an RC script to NTP-set the date now that the hardware is up and running, and then everything works until the next boot. We even have a nice RC script that sets the clock back to 2010 on a clean shutdown/reboot.
I see I can set my clock sync to 'variable' and specify a negative offset; is there a way to just say "always be Jan 1 2010 when you power on"?
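A hedged sketch of what that could look like, assuming a libvirt new enough to support the 'absolute' clock offset (added in libvirt 8.4.0); its start attribute is a Unix epoch timestamp, and 1262304000 is Jan 1 2010 00:00:00 UTC:
<!-- guest clock pinned to the same point in time at every domain start -->
<clock offset='absolute' start='1262304000'/>
With older libvirt, offset='variable' only shifts the clock relative to the current time, which is why it cannot express "always be Jan 1 2010 at power-on".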
Fred
discard option on ext4 mounts in QEMU qcow2 images
by Nate Collins
Hello,
Does anyone have any real-world statistics on the reliability and performance of the ext4 'discard' mount option in QEMU VMs? I've heard it's discouraged for physical devices, as some are inefficient in how they process TRIM requests, but I was wondering if there are any drawbacks/warnings when it comes to virtual disks (in this case, qcow2 images).
I currently use either the virtio-scsi or virtio-blk driver for disk images, depending on what the host/guest support, and run TRIMs weekly, but having space freed immediately is desirable for many reasons.
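For context, a hedged sketch of the disk stanza this depends on (file path and target are illustrative): guest discards only reach the qcow2 image if the disk driver has discard='unmap'; virtio-scsi has supported this the longest, and virtio-blk does in reasonably recent QEMU.
<disk type='file' device='disk'>
  <!-- discard='unmap' passes guest TRIM/discard through to the image -->
  <driver name='qemu' type='qcow2' discard='unmap'/>
  <source file='/var/lib/libvirt/images/guest.qcow2'/>
  <target dev='sda' bus='scsi'/>
</disk>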
I'm also wondering whether snapshots on snapshotting filesystems (such as ZFS) can run into corruption or other issues if a snapshot is taken during a TRIM.
Thanks.
race condition? virsh migrate --copy-storage-all
by Valentijn Sessink
Hi list,
I'm trying to migrate a few QEMU virtual machines between two hosts connected by 1G Ethernet, with local storage only. I got endless "error: operation failed: migration of disk vda failed: Input/output error" errors and thought: something must be wrong with my settings.
However, then, suddenly, it succeeded without my changing anything. And, hey:
while ! time virsh migrate --live --persistent --undefinesource \
        --copy-storage-all ubuntu20.04 qemu+ssh://duikboot/system; do
    a=$(( $a + 1 )); echo $a
done
... it retried 8 times, but then: success. This smells like a race condition, doesn't it? A bit weird is the fact that the migration seems to succeed every time when copying from spinning disks to SSD, but the other way around produces this Input/output error.
There are some messages in /var/log/syslog, but not at the time of the failure, and no disk errors. These disks are LVM2 volumes living on RAID arrays, so there is no real (as in physical) I/O error. The source system has SSDs; the target system has spinning disks.
1) Is this the right mailing list? I'm not 100% sure.
2) How can I research this further (one starting point is sketched below)? Spending hours on a while/retry loop to try and retry live migration looks like a dull job for my poor computers ;-)
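On (2), a hedged starting point using standard libvirtd settings (nothing specific to this report): enable debug logging for the QEMU driver on both hosts and compare the timelines around a failed disk copy.
# /etc/libvirt/libvirtd.conf on both hosts, then restart libvirtd
log_filters="1:qemu 1:libvirt"
log_outputs="1:file:/var/log/libvirt/libvirtd-debug.log"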
Best regards,
Valentijn
Virtio-scsi and block mirroring
by Bjoern Teipel
Hello everyone,
I'm looking at an issue where I see guest processes freezing (stuck in "Dl" state) during a block disk mirror from one storage to another (NFS), where the guest's network stack can freeze for up to 10 seconds.
Looking at the storage and I/O, I noticed good throughput and low latency (<3 ms), and I am having trouble tracking down the source of the issue, as neither storage nor networking shows problems. Interestingly, when I run the same test with virtio-blk, I do not see the process freezes at the frequency or duration I do with virtio-scsi, which seems to indicate a client-side rather than a storage-side problem.
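For reference, a hedged sketch of how such a mirror is typically driven through libvirt (domain name, disk target, and destination path are illustrative; historically blockcopy required a transient domain):
# start a drive-mirror job to the NFS destination, wait for it to
# converge, then pivot the guest onto the new image
virsh blockcopy guest vda /mnt/nfs/guest-vda.qcow2 --wait --verbose --pivot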
I had looked at the syscalls and nothing stuck out:
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 28.51   20.672654        8339      2479           ioctl
 27.81   20.162714        3379      5967        31 futex
 22.02   15.964498         785     20335           poll
 15.22   11.038403         150     73561           io_submit
  4.17    3.023285          41     73540           lseek
  1.20    0.868003           5    158591           write
  0.63    0.459030          11     42871           ppoll
  0.22    0.159263           8     19314           recvmsg
  0.16    0.115520           5     22526           read
  0.04    0.029149       29149         1           restart_syscall
  0.01    0.009252          28       330           sendmsg
  0.00    0.001221        1221         1           munmap
  0.00    0.000458          22        21           fcntl
  0.00    0.000286          95         3           openat
  0.00    0.000166           5        32           rt_sigprocmask
  0.00    0.000103          10        10           fdatasync
  0.00    0.000099          25         4           clone
  0.00    0.000081           7        12           mmap
  0.00    0.000077          19         4           close
  0.00    0.000076           6        12           mprotect
  0.00    0.000056          14         4           madvise
  0.00    0.000025           6         4           set_robust_list
  0.00    0.000023           6         4           prctl
------ ----------- ----------- --------- --------- ----------------
100.00   72.504442                419626        31 total
Does anyone have an idea how to debug this issue further?
Thanks
Bjoern