Activate storage during domain migration
by e-m@mailbox.org
Hi,
I have a block storage device that I want mounted on only a single node at a
time. I know there are many options for shared storage, but I want to know if
the following is possible (using the API):
- Have a domain running on node-A
- Initialize a migration for that domain to node-B
- Run a hook or something just before the domain starts on node-B to:
- unmount storage on node-A
- mount/prepare storage on node-B
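A minimal sketch of the kind of hook I have in mind, based on the qemu hook
described at https://libvirt.org/hooks.html ("migrate"/"prepare" fire on the
destination before the domain starts there, "release" fires on the source once
it has stopped); the device, mount point and domain name are placeholders:
#!/bin/sh
# /etc/libvirt/hooks/qemu (must be executable); libvirt calls it as:
#   qemu <domain-name> <operation> <sub-operation> ...
DOMAIN="$1"
OP="$2"
DEV=/dev/disk/by-id/my-shared-block-device   # placeholder
MNT=/mnt/vmdata                              # placeholder
if [ "$DOMAIN" = "my-guest" ]; then
    case "$OP" in
        migrate|prepare)
            mountpoint -q "$MNT" || mount "$DEV" "$MNT"
            ;;
        release)
            mountpoint -q "$MNT" && umount "$MNT"
            ;;
    esac
fi
exit 0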
Thanks and best regards,
Etienne
11 months, 1 week
Libvirt
by Gk Gk
Hi All,
I am trying to collect memory, disk and network stats for a VM on a KVM host.
The statistics do not seem to match what the OS inside the VM reports. Why is
there this discrepancy?
Is this a known bug in libvirt? I have also heard that libvirt shows
cumulative figures for these measures, counted since the VM was created. I
also tested by creating a new VM and comparing the stats without a reboot;
even in this case the stats don't agree. Can someone help me here, please?
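For reference, a minimal sketch of pulling the raw counters with virsh (the
domain name 'vm1' is a placeholder); as far as I understand, the block and
interface counters are cumulative totals since the guest started, so rates
have to be derived by sampling twice and taking the difference:
# cumulative balloon/vcpu/block/network counters for one domain
virsh domstats vm1 --balloon --vcpu --block --interface
# guest-side memory statistics (requires the balloon driver in the guest)
virsh dommemstat vm1
# crude rate: sample twice, e.g. 5 seconds apart, and diff the counters
virsh domstats vm1 --block; sleep 5; virsh domstats vm1 --block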
Thanks
Kumar
11 months, 1 week
CAN virtualization
by Sánta, Márton (ext)
Dear Users,
I use KVM with libvirt 9.0.0. The host and guest OSes are both AGL Needlefish images. I am currently trying to virtualize a CAN driver and give virtual machines access to the physical CAN channels.
Since a CAN interface is a network interface, I started from libvirt's virtual network handling and looked for analogies with "traditional" network configuration, but that did not work.
I also tried defining a nodedev device via an .xml config file. Interestingly, when I list all available nodedev devices with 'virsh nodedev-list', I can see 'net_can0' and 'net_can1' in the output as 'net' type devices, but I cannot attach these devices to the guests, and I do not know how to define them in the guest .xml file. I tried many different things, but whenever I add one as a 'hostdev' device with various mode and type settings, I always get an error (e.g. not a PCI device, or unsupported device type). It would take too long to write down all the configurations I tried, so my first question is: does anybody know how I could give guests access to the physical CAN interfaces? The aim is to be able to send CAN messages from the guest OSes. If no direct access is possible, it would also be OK to have access to virtual CAN interfaces on the host and then forward messages to the physical CAN channel.
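In case it helps the discussion: libvirt has no dedicated CAN device type, but
QEMU itself (since roughly 2.11) can emulate a PCI CAN controller backed by a
host SocketCAN interface, and those extra options can be passed into a domain
through the <qemu:commandline> namespace. A hedged sketch of the raw QEMU side
only; whether these objects/devices are compiled into qemu-system-arm and
whether the AGL guest kernel carries the matching (SJA1000) driver still needs
to be checked:
qemu-system-arm ... \
    -object can-bus,id=canbus0 \
    -object can-host-socketcan,id=canhost0,if=can0,canbus=canbus0 \
    -device kvaser_pci,canbus=canbus0
From libvirt these options would go into a <qemu:commandline> block, with
xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0' on the <domain>
element.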
Thank you in advance for an early reply!
Best regards,
Márton Sánta
1 year
SEV, SEV-ES, SEV-SNP
by Derek Lee
When SEV is enabled in domcapabilities, does that just mean that any of SEV,
SEV-ES, or SEV-SNP may be possible on the hardware?
Similarly, does enabling SEV as a launchSecurity option in the domain XML mean
that whichever SEV variant is available will be used? And if the guest policy
has the ES flag set, will the guest fail to be created unless SEV-ES is
enabled?
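For reference, the XML shapes I am asking about (values are illustrative, not
from a real host), with my reading that bit 2 (0x4) of the SEV guest policy is
the "require SEV-ES" flag:
<!-- virsh domcapabilities -->
<sev supported='yes'>
  <cbitpos>47</cbitpos>
  <reducedPhysBits>1</reducedPhysBits>
</sev>
<!-- domain XML -->
<launchSecurity type='sev'>
  <cbitpos>47</cbitpos>
  <reducedPhysBits>1</reducedPhysBits>
  <policy>0x0007</policy>  <!-- bits 0,1 plus bit 2 = ES required, as I read it -->
</launchSecurity>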
Sorry if these questions don't make sense or are ill-formed.
Best,
Derek
1 year, 1 month
Performance Discrepancies and Limitations in Local Storage IOPS Testing
by Jan Wasilewski
Hi,
In the past few weeks, I conducted performance tests to evaluate IOPS
(Input/Output Operations Per Second) performance for locally attached
disks. The original discussion started on the openstack-discuss mailing
list since all the tests were conducted within an OpenStack cloud
environment. However, I decided to initiate a discussion here, as it
appears that the performance differences might be rooted in lower-level
factors rather than the OpenStack platform itself. It seems that these
differences are closely related to variations in libvirt/qemu versions and
kernel configurations.
Ultimately, I discovered that performance is significantly better when the
hypervisor is deployed on top of Ubuntu 22.04 LTS. Under this setup, I was
able to achieve around 100,000 IOPS in my fio tests [1][2]. In contrast,
running a similar test with the hypervisor deployed on Ubuntu 20.04 LTS
yielded significantly lower results, averaging around 20,000 IOPS [3][4]. An
intriguing observation is that attaching an NVMe disk to my Ubuntu 22.04 LTS
system and using it as local storage led to slightly lower performance,
hovering around 90,000 IOPS [5]. This outcome is somewhat unexpected, as I
initially anticipated higher figures. It is particularly noteworthy that when
I run the same test directly on the hypervisor, the numbers align more
closely with expectations [6][7]. This pattern suggests that there might be a
limitation imposed either by libvirt/qemu or by the kernel itself.
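One knob I would like to rule out, purely as a guess on my side, is the
virtio-blk iothread/queue setup in the domain XML, since (if I remember
correctly) the virtio-blk multi-queue defaults changed between the QEMU
versions shipped by these two Ubuntu releases. A sketch of what I mean (paths
and values are illustrative):
<domain type='kvm'>
  ...
  <iothreads>2</iothreads>
  <devices>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none' io='native' iothread='1' queues='4'/>
      <source dev='/dev/nvme0n1'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>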
Having meticulously reviewed all the release notes, I failed to come across
any information pertaining to noteworthy performance enhancements
concerning local storage and IOPS. Given this, I'd like to reach out to you
directly to inquire if you possess any insights into such limitations. Any
guidance or suggestions that could help optimize my local storage results
would be greatly appreciated.
Looking forward to your input.
Best regards
/Jan Wasilewski
References:
[1] Configuration details (libvirt/qemu/kernel version) for Ubuntu 22.04: https://paste.openstack.org/show/b8svl0bOfX0WHTGvgI1h/
[2] fio results for VM test with Ubuntu 22.04: https://paste.openstack.org/show/bUpiq1y0o58S1ThNgqLd/
[3] Configuration details (libvirt/qemu/kernel version) for Ubuntu 20.04: https://paste.openstack.org/show/b005K2L6ZutGxDrCOZbL/
[4] fio results for VM test with Ubuntu 20.04: https://paste.openstack.org/show/b8JeVOn4YCPSX7uaqR0N/
[5] fio results for VM test with Ubuntu 22.04 and NVMe as the local storage disk: https://paste.openstack.org/show/b75M4bI00LQePTmYSaUI/
[6] fio results for hypervisor test with Ubuntu 22.04 and NVMe storage: https://paste.openstack.org/show/bj7NeSiwwiWS6MbJROts/
[7] fio results for hypervisor test with Ubuntu 22.04 and SSD storage: https://paste.openstack.org/show/bSeEHXcbrY9YlYWQWGKS/
1 year, 1 month
Warning : Failed to set up UEFI / The Libvirt version does not support UEFI / Install options are limited...
by Mario Marietto
Hello to everyone.
I'm trying to use qemu 5.1 with virt-manager and libvirt on my ARM
Chromebook (armhf, 32-bit CPU) running Devuan 4 as the host OS. By default it
ships qemu and its dependencies at version 5.2. I remember that I can't use
qemu 5.2, because it doesn't have any support for KVM, as you can read here:
https://lists.gnu.org/archive/html/qemu-devel/2020-09/msg02074.html
For this reason, I've compiled qemu 5.1 from source. Below I show how I
configured everything, along with a small excerpt of the compilation messages:
# apt install libgtk-3-dev libpulse-dev libgbm-dev libspice-protocol-dev \
    libspice-server-dev libusb-1.0-0-dev libepoxy-dev
# cd /usr/share
# mv qemu qemu_
# cd /usr/lib
# mv qemu qemu_
# cd /usr/lib/arm-linux-gnueabihf
# mv qemu qemu_
# cd /usr/lib/ipxe
# mv qemu qemu_
# cd /usr/share/bash-completion/completions/
# mv qemu qemu_
# mv qemu-kvm qemu-kvm_
# mv qemu-system-i386 qemu-system-i386_
# mv qemu-system-x86_64 qemu-system-x86_64_
# cd /usr/bin
# mv qemu-system-arm qemu-system-arm_
# cp /root/Desktop/qemu-v5.1.0/arm-softmmu/qemu-system-arm /usr/bin
# CFLAGS=-Wno-error ./configure --target-list=x86_64-softmmu --enable-opengl \
    --enable-gtk --enable-kvm --enable-guest-agent --enable-spice \
    --audio-drv-list="oss pa" --enable-libusb
A small excerpt of the log messages I got from the compilation of qemu 5.1:
https://pastebin.ubuntu.com/p/8DYfgPvhXy/
These are the resulting versions after my Frankenstein operation:
# virsh version
Compiled against library: libvirt 7.0.0
Using library: libvirt 7.0.0
Using API: QEMU 7.0.0
Running hypervisor: QEMU 5.1.0
At this point I ran virt-manager. It was able to detect qemu, but I get the
following error:
Warning : Failed to set up UEFI.
The Libvirt version does not support UEFI.
Install options are limited.
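For reference, a couple of checks that might narrow this down (the package
name and firmware path are assumptions about Debian/Devuan packaging):
# does libvirt report any UEFI loader for 32-bit ARM at all?
virsh domcapabilities --arch armv7l --machine virt --virttype kvm | grep -A6 '<loader'
# 32-bit ARM firmware normally comes from qemu-efi-arm on Debian-based systems
apt install qemu-efi-arm
ls /usr/share/AAVMF/AAVMF32_CODE.fd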
Do you have any suggestions to fix this error? I'm sure I've missed
something. Thanks.
--
Mario.
1 year, 1 month
Does libvirt support intra-host KVM migration?
by Eric Wheeler
Hello all, I'm reposting this to the libvirt-users list:
I looked around for documentation on intra-host KVM migration but haven't
found much. For example, this could be useful to "migrate" a VM so it runs on
an upgraded version of `qemu-kvm` without migrating to a different host and
back again.
We tested migrating a VM to the same host on an old version of libvirt
(el7), and it complained about UUID conflicts if the destination already
hosts the same VM.
Changing the name or the UUID in the destination XML during migration
(`virsh migrate --xml`) gives an error that the UUID or name does not
match during migration:
UUID change:
error: unsupported configuration: Target domain uuid 66a113f4-a101-4db1-8478-49cf088fedb9 does not match source b5256b25-e137-45b8-bddb-78545ab55fc4
Name change:
error: unsupported configuration: Target domain name 'diskless2' does not match source 'diskless'
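For the archives: the workaround usually suggested instead of a true same-host
migration is a managed save/restore cycle, since the saved image is restored
with whatever emulator binary is current. A sketch (it assumes the new
qemu-kvm still supports the machine type recorded in the saved image):
# save guest state to disk and stop the old qemu-kvm process
virsh managedsave diskless
# upgrade qemu-kvm, then resume; libvirt starts the new binary and
# restores the saved state into it
virsh start diskless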
Questions before we do more testing:
- Do modern versions of libvirt support intra-host migration?
- If so, which version?
- Documentation?
Any help you can provide would be greatly appreciated!
--
Eric Wheeler
1 year, 1 month
You will need to grant the 'libvirt-qemu' user search permissions for the following directories....
by Sebastien WILLEMIJNS
Hello,
Why does libvirt need permissions changed "near the root level" (/home/blahblah/) when a raw/vdi/vhd image can sit many directories deep, e.g. /home/user/Virtual_HDs/desktop/daddy/private/bedroom/number2/hd.vdi?
On Ubuntu, "/media/hostname" can contain all of our external HDDs, which have nothing to do with virtualization! :-(
Another example picked up on the net:
WARNING /home/jwright/virtualMachines/images/fedora25.qcow2 may not be accessible by the hypervisor. You will need to grant the 'qemu' user search permissions for the following directories: ['/home/jwright']
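For what it's worth, the directories only need search ("x") permission for the
qemu/libvirt-qemu user, not read access to their contents, so a traverse-only
ACL on each path component is the usual fix; a sketch using the paths from the
warning above (the user name is 'qemu' or 'libvirt-qemu' depending on the
distribution):
# show which component of the path blocks the user (util-linux)
namei -l /home/jwright/virtualMachines/images/fedora25.qcow2
# grant traverse-only permission; the user still cannot list or read the
# directories' contents
setfacl -m u:qemu:x /home/jwright /home/jwright/virtualMachines
setfacl -m u:qemu:x /home/jwright/virtualMachines/images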
1 year, 1 month
ipv6 can not work for direct type interface
by Yalan Zhang
Hi there,
I have a question regarding direct type interfaces. Would someone be able
to take a look at it?
When I start two VMs on the same host with a "direct type + bridge mode"
interface, as below:
<interface type="direct">
  <mac address="52:54:00:9e:7b:51"/>
  <source dev="eno1" mode="bridge"/>
  <model type="virtio"/>
</interface>
The two VMs can reach each other over IPv4, but cannot reach each other over
IPv6.
Maybe it's related to some kernel parameters, but I don't know how to debug
it.
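A debugging sketch (the workaround at the end is only a commonly mentioned
guess for macvtap setups, not something verified here): IPv6 neighbour
discovery relies on multicast, so the first thing to check is whether the
ICMPv6 neighbour solicitations from one guest ever reach the other guest.
# watch for neighbour solicitation (135) / advertisement (136) packets while
# pinging the other guest over IPv6
tcpdump -ni eno1 'icmp6 and (ip6[40] == 135 or ip6[40] == 136)'
# commonly suggested workaround (assumption): let the parent NIC accept the
# multicast traffic the macvtap endpoints need
ip link set eno1 allmulticast on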
Is there anyone who can help me?
Thank you!
BR,
Yalan
1 year, 1 month