Hi,
I've seen that, in the past, libvirt couldn't start VMs whose disk
image was stored on a file system that doesn't support direct I/O
when the disk was configured with 'cache=none' [0].
In the KubeVirt project, we have storage tests on a particular
provider that do exactly that: they create / start a VM whose disk
is on tmpfs and whose definition specifies 'cache=none'.
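For reference, the generated domain ends up with a disk definition
roughly like the following (a minimal sketch; the file path is taken
from the error below, while the target dev/bus values are
illustrative):
```
<disk type='file' device='disk'>
  <!-- cache='none' asks QEMU to open the image with O_DIRECT -->
  <driver name='qemu' type='qcow2' cache='none'/>
  <source file='/var/run/kubevirt-ephemeral-disks/disk-data/disk0/disk.qcow2'/>
  <!-- illustrative target; the actual dev/bus come from the VMI spec -->
  <target dev='vda' bus='virtio'/>
</disk>
```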
The behavior we're seeing is that libvirt throws this warning:
```
Unexpected Warning event received:
testvmig4zsxc2f8swkxv22xkhx2vrb4ppqbfdfgfgqh5gq8plqzrv5,853ff3d9-70d4-43c5-b9ff-4d5815ea557d:
server error. command SyncVMI failed: "LibvirtError(Code=1, Domain=10,
Message='internal error: process exited while connecting to monitor:
2020-03-25T10:09:21.656238Z qemu-kvm: -drive
file=/var/run/kubevirt-ephemeral-disks/disk-data/disk0/disk.qcow2,format=qcow2,if=none,id=drive-ua-disk0,cache=none:
file system may not support O_DIRECT\n2020-03-25T10:09:21.656391Z
qemu-kvm: -drive
file=/var/run/kubevirt-ephemeral-disks/disk-data/disk0/disk.qcow2,format=qcow2,if=none,id=drive-ua-disk0,cache=none:
Could not open backing file: Could not open
'/var/run/kubevirt-private/vmi-disks/disk0/disk.img': Invalid
argument')"
```
But libvirt actually proceeds and is able to start the VM - it
seems, though, to coerce the cache value to 'writethrough'.
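If that's what is happening, I'd expect the live domain XML (e.g.
from 'virsh dumpxml') to show a driver element along these lines - a
sketch of what we believe the effective configuration is, not
captured output:
```
<!-- assumed effective setting after falling back from O_DIRECT;
     'writethrough' reflects the behavior we observed, not dumped XML -->
<driver name='qemu' type='qcow2' cache='writethrough'/>
```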
Is this the expected behavior, i.e. that cache=none can't be used
when the disk images are on a tmpfs file system? I know it was the
case in the past; I'm not sure about now (libvirt-5.6.0-7)...
[0] - https://bugs.launchpad.net/nova/+bug/959637