WinServer2016 guest no mouse in VirtManager
by John McInnes
Hi! I recently converted several Windows Server VMs from Hyper-V to libvirt/KVM. The host is running openSUSE Leap 15.3. I used virt-v2v and installed the virtio drivers on all of them, and it all went well - except for one VM. The mouse does not work for this VM in Virtual Machine Manager: there is no cursor and no response. No issues show up in Windows Device Manager, where the mouse appears as a PS/2 mouse. Interestingly, if I RDP into this VM using Microsoft Remote Desktop, the mouse works fine. Any ideas?
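(A common cause of this exact symptom - pointer fine over RDP but absent in the graphical console - is the guest lacking an absolute-pointing device, which conversions sometimes leave out. The element below is standard libvirt domain XML, but whether it applies to this particular VM is an assumption; it is a sketch, not a confirmed fix.)

```xml
<!-- Hypothetical sketch: run `virsh edit <domain>` and add this inside
     <devices>, then power-cycle the guest. A USB tablet reports absolute
     coordinates, so the console cursor tracks the client pointer. -->
<input type='tablet' bus='usb'/>
```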
----
John McInnes
jmcinnes /\T svt.org
1 year, 10 months
[libvirt-users] [virtual interface] detach interface during boot succeed with no changes
by Yalan Zhang
Hi guys,
when I detach an interface from a VM during boot (before the guest has finished booting), it always fails: the command reports success but the interface is not actually removed. I'm not sure if there is an existing bug for this. I have confirmed with someone that disks show similar behavior - is this considered acceptable?
# virsh destroy rhel7.2; virsh start rhel7.2; sleep 2; \
  virsh detach-interface rhel7.2 network 52:54:00:98:c4:a0; sleep 2; \
  virsh dumpxml rhel7.2 | grep /interface -B9
Domain rhel7.2 destroyed
Domain rhel7.2 started
Interface detached successfully
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </controller>
    <interface type='network'>
      <mac address='52:54:00:98:c4:a0'/>
      <source network='default' bridge='virbr0'/>
      <target dev='vnet0'/>
      <model type='rtl8139'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
When I detach after the VM has finished booting (expanding the sleep time to 10 seconds), it succeeds:
# virsh destroy rhel7.2; virsh start rhel7.2; sleep 10; \
  virsh detach-interface rhel7.2 network 52:54:00:98:c4:a0; sleep 2; \
  virsh dumpxml rhel7.2 | grep /interface -B9
Domain rhel7.2 destroyed
Domain rhel7.2 started
Interface detached successfully
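If waiting a fixed number of seconds feels fragile, one workaround sketch (an assumption on my part, using the domain name and MAC from the commands above) is to retry the detach and poll the live XML until the interface is actually gone:

```shell
# Hypothetical retry loop: keep issuing detach-interface until the MAC
# disappears from the live domain XML, giving up after ~60 seconds.
dom=rhel7.2
mac=52:54:00:98:c4:a0
for i in $(seq 1 30); do
    virsh detach-interface "$dom" network "$mac" 2>/dev/null
    sleep 2
    if ! virsh dumpxml "$dom" | grep -q "$mac"; then
        echo "interface really detached"
        break
    fi
done
```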
-------
Best Regards,
Yalan Zhang
IRC: yalzhang
Internal phone: 8389413
2 years, 3 months
Virtiofs xattr options on domain xml
by ksobrenat32
Hi!
I have a Debian 11 (bullseye) machine running libvirtd version 7.0.0 and a RHEL 9 virtual machine with which I need to share a disk, and I thought about virtiofs.
The disk is a btrfs disk and I have successfully mounted it with:
<filesystem type='mount' accessmode='passthrough'>
  <driver type='virtiofs' queue='1024'/>
  <binary path='/usr/lib/qemu/virtiofsd' xattr='on'>
    <cache mode='always'/>
    <lock posix='on' flock='on'/>
  </binary>
  <source dir='/mnt/WD-Disk'/>
  <target dir='media'/>
  <alias name='fs0'/>
  <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
</filesystem>
The problem I have is with SELinux: when I try to change the context of a file inside the virtual machine I get an 'Operation not permitted' error. I can change the context on the Debian host and see the change in the virtual machine, but I want to be able to change the context from within the VM, so I can use podman containers with SELinux enabled.
I see in the docs
https://qemu.readthedocs.io/en/latest/tools/virtiofsd.html#selinux-support
that you can run virtiofsd with an xattr option so it is compatible with SELinux, but I cannot find a way to change the domain XML to add this option. Is there a way to add it? Does a better option exist (maybe on the guest side)?
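(One workaround sometimes suggested when libvirt's XML has no knob for a virtiofsd command-line option - not verified against libvirt 7.0.0, so treat this as an assumption - is to point `<binary path='...'>` at a wrapper script that appends the extra option. The wrapper path below is hypothetical, and the exact `-o xattrmap=...` remapping rule for SELinux should be taken from the virtiofsd documentation linked above.)

```shell
#!/bin/sh
# Hypothetical wrapper sketch: save as e.g. /usr/local/bin/virtiofsd-selinux,
# mark it executable, and reference it from <binary path='...'> in the
# domain XML. It forwards libvirt's arguments to the real virtiofsd and
# appends an xattr remapping so the guest can write security.selinux.
exec /usr/lib/qemu/virtiofsd "$@" \
    -o xattrmap='<mapping rule from the virtiofsd SELinux docs>'
```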
2 years, 7 months
something (qemu?) is leaking
by lejeczek
Hi guys.
I do a simple thing which should be easy to reproduce.
-> $ virt-install -n rum1 --virt-type kvm --os-variant centos8 \
     --memory $((4*1024)) \
     --disk=/VMs3/rum1.qcow2,device=disk,bus=virtio \
     --network network=10_3_1,model=virtio \
     --graphics=listen=0.0.0.0 \
     --cpu EPYC-Rome --vcpus 3 \
     --cdrom /VMs3/CentOS-Stream-9-latest-x86_64-dvd1.iso
During manual setup in the VM I set the hostname to something, and when installation begins and the disk configuration stage takes place I can see - and later, when the VM (c9s) is ready, can confirm - that the VG name is taken from another VM defined/running on the host.
Host is c9s with:
qemu-kvm-7.0.0-1.el9.x86_64
libvirt-daemon-8.2.0-1.el9.x86_64
5.14.0-86.el9.x86_64
Does anybody see this/similar?
many thanks, L.
2 years, 7 months
Slow VM start/revert, when trying to start/revert dozens of VMs in parallel
by Petr Beneš
Hi,
my problem can be described simply: libvirt can't handle starting dozens of VMs at the same time.
(Technically it can, but it's really slow.)
We have an AMD machine with 256 logical cores and 1.5T ram.
On that machine there is roughly 200 VMs.
Each VM is the same: 8 GB of RAM, 4 vCPUs. Half of them are Win7 x86, the other half Win7 x64.
VMs are using qcow2 as the disk image. These images reside in the ramdisk (tmpfs).
We use these machines for automatic malware analysis, so our scenario consists of this cycle:
- revert the VM to a running state
- execute a sample inside the VM for ~1-2 minutes
- shut down the VM
Of course, this results in multiple VMs trying to start at the same time.
At first, reverts/starts are really fast - a second or two.
After about a minute, "revertToSnapshot" suddenly takes 10-15 seconds, which is really unacceptable.
For comparison, we're running the same scenario on Proxmox, where revertToSnapshot usually takes 2 seconds.
Few notes:
- Because of this fast cycle (~2-3 minutes) and because of VMs taking 10-15 seconds to start, there are barely more than 25-30 VMs running at once.
We would really love to utilise the full potential of such a beast of a machine, and have at least ~100 VMs running at any given time.
- During the run, the avg. CPU load isn't higher than 25%, and only about 280 GB of RAM is used. Therefore, it's not a limitation of our resources.
- When the framework is running and libvirt is doing its best to start our VMs, I noticed that every libvirt operation suddenly becomes very slow.
Even a simple "virsh list [--all]" takes a few seconds to complete, even though it finishes instantly when no VM is running/starting.
I was trying to search for this issue, but didn't really find anything besides this presentation:
https://events19.linuxfoundation.org/wp-content/uploads/2017/12/Scalabili...
However, I couldn't find those commits in your upstream.
Is this a known issue? Or is there some setting I don't know of which would magically make the VMs start faster?
As for steps to reproduce - I don't think there is anything special needed. Just try to start/destroy several VMs in a loop.
There is even provided one-liner for that in the presentation above.
```
# For multiple domains:
# while virsh start $vm && virsh destroy $vm; do : ; done
# → ~30s hang ups of the libvirtd main loop
```
Best Regards,
Petr
2 years, 7 months
macvtap with disconnected physical interface
by Gionatan Danti
Dear list,
I just discovered the hard way that if the lower-level physical
interface of a macvlan bridge is disconnected (i.e. by unplugging the
Ethernet cable, resulting in no carrier), inter-guest network traffic from
all virtual machines bound to the disconnected interface is dropped.
This behavior surprises me, as with classic bridges I can disconnect
the underlying physical interface without causing any harm to
inter-guest traffic.
Am I doing something wrong, or is this really the expected behavior? If
so, can I force the macvtap interfaces to bridge traffic even when the
underlying physical interface is disconnected?
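(If routing guest traffic through a host bridge is acceptable, one alternative - a sketch assuming a pre-configured host bridge named br0, which is not part of the original setup - is to attach the guests to a classic Linux bridge instead of macvtap, since, as noted above, bridge-attached guests keep inter-guest connectivity when the uplink loses carrier.)

```xml
<!-- Hypothetical per-guest interface definition: replace the macvtap
     <interface type='direct'> with a bridge attachment to host bridge br0. -->
<interface type='bridge'>
  <source bridge='br0'/>
  <model type='virtio'/>
</interface>
```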
Regards.
--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti(a)assyoma.it - info(a)assyoma.it
GPG public key ID: FF5F32A8
2 years, 7 months
Updating domains definitions via API
by Darragh Bailey
Hi,
I'm looking into a bug in vagrant-libvirt where an error during an update
causes the domain to be completely discarded.
https://github.com/vagrant-libvirt/vagrant-libvirt/issues/949
Basically I think it stems from doing an undefine followed by a
create-with-new-XML: if there is an issue with the new XML (due to the KVM
module not being loaded, or something similar) it will be rejected, but
unfortunately it is also unlikely that the old definition can be restored.
I'm looking around to see if there is an API (specifically in
ruby-libvirt) for updating the domain definition, so that if the new XML is
rejected at least the old definition remains, and so far I'm drawing a
blank.
Is the only option here to write using a temporary domain name, then remove
the old domain and rename the new definition to the old domain?
Or have I missed the obvious API analogous to the edit functionality?
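(For what it's worth, my understanding - worth verifying against the libvirt docs - is that virDomainDefineXML redefines an existing domain in place when the name/UUID matches, without an undefine step, and that a rejected XML leaves the stored definition untouched. A minimal sketch with ruby-libvirt, where `new_xml` is a placeholder for the updated definition:)

```ruby
require 'libvirt'

# Hypothetical sketch: redefine a domain in place instead of
# undefine-then-define. If new_xml is rejected, Libvirt::Error is raised
# and the previous definition should remain as it was.
conn = Libvirt::open('qemu:///system')
begin
  conn.define_domain_xml(new_xml)   # new_xml holds the updated domain XML
rescue Libvirt::Error => e
  warn "new XML rejected, old definition kept: #{e.message}"
ensure
  conn.close
end
```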
--
Darragh Bailey
"Nothing is foolproof to a sufficiently talented fool" - unknown
2 years, 7 months
Any size limitation on the backing store?
by Jeff Brown
We have a few QCOW2 images that have grown rather large (> 250 GB), and
backing them up across the VLAN is resource-intensive and time-consuming.
What I'd like to do to optimise backups is freeze the backing store and
always run the VMs in 'snapshot 1', then run 'snapshot 2' to back up
'snapshot 1' to reciprocal failover hypervisors in the DC.
I do know it's common practice to run even multiple VMs off a single
"golden image" backing store, but I'm concerned about the performance
impact with such a large backing store, with 'snapshot 1' growing at
~10 GB a year. With that in mind, it should be feasible to pivot
'snapshot 1' into the backing image periodically.
Further, I am aware of incremental backups, which Debian 11 doesn't yet
support. I think it's coming in Debian 12, but some of our servers are
actually still on Debian 10.
jeff@abacus:/var/lib/libvirt/images$ virsh backup-begin dev
error: Operation not supported: incremental backup is not supported yet
In a nutshell, is it feasible to run off a >250 GB (and growing) backing
store?
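(The freeze-and-pivot cycle described above can be sketched with stock virsh commands. This assumes the domain 'dev' from the error output and a disk target of 'vda' - the target name is an assumption - and should be rehearsed on a scratch VM first.)

```shell
# 1. Create an external overlay; the current image becomes a read-only
#    backing file and all new writes go to the overlay ('snapshot 1').
virsh snapshot-create-as dev --disk-only --atomic --no-metadata

# 2. Back up the now-quiescent backing file at leisure.

# 3. Periodically merge the overlay back into the backing image and pivot
#    the running domain onto it, so the overlay does not grow unbounded.
virsh blockcommit dev vda --active --pivot --verbose
```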
Thanks in advance for any advice.
--
Jeff Brown
Future Foundation
Cell 074 101 5170
VoIP 087 550 1210
Fax: 086 532 3508
2 years, 7 months