Greetings,
I'm running three VMs in session mode on Arch Linux, and I need each VM to start on boot. The libvirt version is 12.1.0.
From what I understand, libvirt-guests lacks session support, so I looked around and found that I can use per-user systemd services, following this doc: https://wiki.archlinux.org/title/Systemd/User
I enabled lingering for each user, then created and enabled a service file for each user.
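The service file for each user looks roughly like this (a sketch; "myvm" stands in for the actual domain name and the unit name is illustrative):

```ini
# ~/.config/systemd/user/vm-autostart.service (name illustrative)
[Unit]
Description=Start libvirt session VM myvm

[Service]
Type=oneshot
RemainAfterExit=yes
# qemu:///session is the per-user libvirt connection
ExecStart=/usr/bin/virsh -c qemu:///session start myvm
ExecStop=/usr/bin/virsh -c qemu:///session shutdown myvm

[Install]
WantedBy=default.target
```

Lingering was enabled with `loginctl enable-linger <user>`, so each user manager (and with it the service) starts at boot rather than at first login.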
After rebooting the machine, I found that not all the VMs started; some of them failed. The outcome differs on each boot, but usually at least one VM doesn't start up.
I checked the status of the failed services and found that the error was that the VM is already active, even though virsh list shows the VM is off.
None of the VMs is marked as autostart.
How can I find out what other source is starting the VMs? I assume it is something internal to libvirt.
Ideally, if possible, I'd like to use something from within libvirt, or at least stop whatever is auto-starting the VMs.
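For completeness, here is what I have checked so far (a sketch; "myvm" stands in for the actual domain name, and on a modular-daemon setup the session daemon is virtqemud rather than libvirtd):

```shell
# Per-user autostart flags live as symlinks under the session config
# directory; an empty listing means no domain is marked autostart there.
ls -l ~/.config/libvirt/qemu/autostart/

# Cross-check from virsh itself: list only autostart-flagged domains.
virsh -c qemu:///session list --all --autostart

# Look in the session daemon's log for who asked it to start the domain.
journalctl --user -u virtqemud -u libvirtd --since today | grep -i myvm
```

These commands need a running libvirt session setup, so they are shown for reference rather than as a self-contained script.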
Thanks,
Dagg
Hi all,
I've been reading the QEMU documentation, the Libvirt documentation, as
well as doing some testing and code investigation regarding disk cache
modes.
In the Libvirt documentation, it is stated that the accepted disk driver cache modes are "default", "none", "writethrough", "writeback", "directsync" and "unsafe". Analyzing the code, we can see that this roughly mirrors the cache modes that QEMU documents for the -drive option (https://qemu-project.gitlab.io/qemu/system/invocation.html):
https://github.com/libvirt/libvirt/blob/04c1f458313e9001f5a804a898408e1f498…
However, the default option (used when the "cache" attribute is not defined) is different from the other options and does not seem to map to any option on the QEMU side. From testing and analyzing the code, I see that when the default is used, cache.direct and cache.no-flush are not passed to QEMU; the write-cache option is not passed either.
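For reference, here is the mapping I pieced together from the QEMU documentation for the explicit modes, sketched as a small shell table ("default" is deliberately absent, since Libvirt then emits none of these keys at all):

```shell
#!/bin/bash
# Mapping of explicit libvirt cache modes to the three underlying QEMU
# knobs, per the cache-mode table in the QEMU -drive documentation:
#   mode : cache.direct : cache.no-flush : write-cache
declare -A DIRECT NOFLUSH WCACHE
for row in \
    "writeback:false:false:on" \
    "none:true:false:on" \
    "writethrough:false:false:off" \
    "directsync:true:false:off" \
    "unsafe:false:true:on"
do
    IFS=: read -r mode d n w <<< "$row"
    DIRECT[$mode]=$d
    NOFLUSH[$mode]=$n
    WCACHE[$mode]=$w
done
echo "writeback: direct=${DIRECT[writeback]} no-flush=${NOFLUSH[writeback]} write-cache=${WCACHE[writeback]}"
```

The writeback row matches the generated command line below (direct=false, no-flush=false, write-cache=on); only the "default" case emits nothing.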
Just for comparison, here is a disk defined with cache=writeback:
XML:
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='writeback'/>
  <source file='/mnt/a66f48f5-13c6-3676-b320-a15d32bbf32f/c0e925a9-6697-4ee9-8abf-d36c7a95cea4' index='2'/>
  <backingStore type='file' index='3'>
    <format type='qcow2'/>
    <source file='/mnt/a66f48f5-13c6-3676-b320-a15d32bbf32f/0db52809-9f33-43ac-8151-5452e9dee330'/>
    <backingStore/>
  </backingStore>
  <target dev='vda' bus='virtio'/>
  <serial>c0e925a966974ee98abf</serial>
  <alias name='virtio-disk0'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</disk>
Generated QEMU command:
-blockdev {"driver":"file","filename":"/mnt/a66f48f5-13c6-3676-b320-a15d32bbf32f/c0e925a9-6697-4ee9-8abf-d36c7a95cea4","node-name":"libvirt-2-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}
-blockdev {"node-name":"libvirt-2-format","read-only":false,"cache":{"direct":false,"no-flush":false},"driver":"qcow2","file":"libvirt-2-storage","backing":"libvirt-3-format"}
-device virtio-blk-pci,bus=pci.0,addr=0x5,drive=libvirt-2-format,id=virtio-disk0,bootindex=2,write-cache=on,serial=c0e925a966974ee98abf
And here is a disk that does not set the cache option:
XML:
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/mnt/b59eff0f-5b97-37ed-a513-9e1983d1d19b/b430296b-0924-412e-a7b1-ddc3d4c90f83' index='2'/>
  <backingStore/>
  <target dev='vdc' bus='virtio'/>
  <serial>b430296b0924412ea7b1</serial>
  <alias name='virtio-disk2'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
</disk>
Generated QEMU command:
-blockdev {"driver":"file","filename":"/mnt/b59eff0f-5b97-37ed-a513-9e1983d1d19b/b430296b-0924-412e-a7b1-ddc3d4c90f83","node-name":"libvirt-2-storage","auto-read-only":true,"discard":"unmap"}
-blockdev {"node-name":"libvirt-2-format","read-only":false,"driver":"qcow2","file":"libvirt-2-storage","backing":null}
-device virtio-blk-pci,bus=pci.0,addr=0x8,drive=libvirt-2-format,id=virtio-disk2,serial=b430296b0924412ea7b1
The QEMU documentation for the -drive option states that the default cache mode is "writeback" (see https://qemu-project.gitlab.io/qemu/system/invocation.html), so I expected that setting "writeback" or "default" as the cache mode in Libvirt would behave the same. I ran some performance tests using both options to try to verify whether they are equivalent, but they are clearly different. For each of the configuration profiles below, I created a VM and ran some fio tests:
Combination  Disk cache mode  Disk controller  IO thread  IO policy
hds0i        default          virtio-scsi      0          iouring
hds0t        default          virtio-scsi      0          threads
hds1i        default          virtio-scsi      1          iouring
hds1t        default          virtio-scsi      1          threads
hdv0i        default          virtio           0          iouring
hdv0t        default          virtio           0          threads
hdv1i        default          virtio           1          iouring
hdv1t        default          virtio           1          threads
wbs0i        writeback        virtio-scsi      0          iouring
wbs0t        writeback        virtio-scsi      0          threads
wbs1i        writeback        virtio-scsi      1          iouring
wbs1t        writeback        virtio-scsi      1          threads
wbv0i        writeback        virtio           0          iouring
wbv0t        writeback        virtio           0          threads
wbv1i        writeback        virtio           1          iouring
wbv1t        writeback        virtio           1          threads
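The fio invocations were along these lines (illustrative parameters, not the exact job definitions; the target path is a placeholder for a file on the disk under test):

```shell
# Random 4k writes against a file on the disk under test, with O_DIRECT
# so the guest page cache does not mask the host-side cache mode.
fio --name=randwrite \
    --filename=/tmp/fio-test.dat --size=64M \
    --rw=randwrite --bs=4k --ioengine=psync \
    --direct=1 --runtime=30 --time_based
rm -f /tmp/fio-test.dat
```

This needs fio installed inside the guest, so it is shown for reference rather than as a self-contained script.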
The fio test results are in an HTML file attached to the email.
I would like to understand the practical difference between not specifying the cache mode and specifying any of the other options. I can see that the disk is defined in a different way, but just by analyzing the code I was not able to understand what that leads to; also, looking at the performance tests, we clearly see a difference in the results. Could anyone explain what the expected behavior is when the disk is defined in such a way?
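In case it helps the discussion, the cache flags a running disk actually ended up with can be inspected at runtime (a sketch; "vm1" is a placeholder for the domain name):

```shell
# Ask QEMU, via libvirt's monitor passthrough, what each block node's
# effective cache settings are; look at the "cache" object ("writeback",
# "direct", "no-flush") under "inserted" in the output.
virsh qemu-monitor-command vm1 --pretty '{"execute": "query-block"}'
```

This needs a running domain, so it is shown for reference rather than as a self-contained script.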
By the way, I'm using Libvirt 8.0.0 and QEMU 6.2.0.
Best regards,
João Jandre