ZFS storage backend
by Paul B. Henson
I'm running libvirt to manage virtual machines using ZFS zvols for
storage. For organization and management purposes, these zvols are
nested (zfs list output):
NAME USED AVAIL REFER MOUNTPOINT
virt/qemu/debian12-template 15.8G 1.57T 24K none
virt/qemu/debian12-template/bootefi 1.02G 1.57T 141M -
virt/qemu/debian12-template/home 522M 1.57T 2.57M -
virt/qemu/debian12-template/opt 522M 1.57T 3.00M -
virt/qemu/debian12-template/root 8.13G 1.58T 970M -
virt/qemu/debian12-template/swap 522M 1.57T 366M -
virt/qemu/debian12-template/tmp 522M 1.57T 2.39M -
virt/qemu/debian12-template/var 3.05G 1.57T 161M -
virt/qemu/debian12-template/varlog 1.02G 1.57T 101M -
virt/qemu/debian12-template/vartmp 522M 1.57T 3.75M -
The libvirt ZFS integration seems to assume that all zvols exist at
the top level of the pool; it won't let you create or manage a
hierarchical structure. That is what it is, but it then also
misrepresents the actual structure, showing volumes in the root of the
pool when they are not there:
virsh # vol-list virt
Name Path
-------------------------------------
backup /dev/zvol/virt/backup
boot-nb /dev/zvol/virt/boot-nb
bootefi /dev/zvol/virt/bootefi
dest /dev/zvol/virt/dest
disk1 /dev/zvol/virt/disk1
home /dev/zvol/virt/home
opt /dev/zvol/virt/opt
root /dev/zvol/virt/root
swap /dev/zvol/virt/swap
tmp /dev/zvol/virt/tmp
usr /dev/zvol/virt/usr
usrlocal /dev/zvol/virt/usrlocal
usrobj /dev/zvol/virt/usrobj
usrports /dev/zvol/virt/usrports
usrsrc /dev/zvol/virt/usrsrc
var /dev/zvol/virt/var
varlog /dev/zvol/virt/varlog
vartmp /dev/zvol/virt/vartmp
Ideally libvirt would support a nested ZFS organizational structure, as
that is a very common layout. But even if it doesn't, it probably
shouldn't misrepresent a structure it doesn't understand?
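One workaround I'm considering, assuming the ZFS backend accepts a
dataset (rather than only a whole zpool) as the pool source name, is to
define a separate libvirt pool per nested dataset, something like:

```xml
<pool type='zfs'>
  <name>debian12-template</name>
  <source>
    <!-- nested dataset from the listing above -->
    <name>virt/qemu/debian12-template</name>
  </source>
</pool>
```

But even if that works, it only papers over the problem one dataset at
a time.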
Thanks for any thoughts…
7 hours, 59 minutes
network usually failing (nested virt, Debian)
by Misha Ramendik
Hello,
I have a VPS with hardware nested virtualization enabled, and I am
trying to use this nested virtualization. The VPS runs Debian 12 and
has 16 GB of RAM.
I installed libvirt/virt-manager/etc and downloaded the "nocloud" and
"genericcloud" images from https://cdimage.debian.org/images/cloud/ . The
description says that the "nocloud" image should allow passwordless root
login but unfortunately it does not. I run things as root (this is a test
setup) but I did chown all qcow images to "libvirt-qemu".
I use the following command line:
# virt-install --name test-cloud-vnc --os-variant debian11 --ram 8192 \
    --disk debian-12-genericcloud-amd64.qcow2,device=disk,bus=virtio,size=10,format=qcow2 \
    --hvm --import --noautoconsole --network default \
    --graphics vnc,port=-1,listen=0.0.0.0
(Or the same for the nocloud image)
The nocloud image sometimes, rarely, gets a DHCP lease (visible in
"virsh net-dhcp-leases default") and then responds to pings. But
usually the nocloud image, and so far always the genericcloud image
(though that might just be chance), gets no DHCP lease and cannot be
pinged. This means that my attempt to set up cloud-init via an ad hoc
webserver (as per
https://cloudinit.readthedocs.io/en/latest/tutorial/qemu.html ) never
got tested, because the cloud-init image can't access the network to
start with.
I did try --network default,model=e1000 - no change. I do successfully see
the guest console when I connect to the VPS by VNC. Unfortunately, I don't
have a password to log in with, so I can't even try to see whether it sees
any network adapter.
dmesg output for the time:
[71382.495314] audit: type=1400 audit(1732157273.151:173): apparmor="STATUS" operation="profile_load" profile="unconfined" name="libvirt-3ca46e41-5cca-40b0-a5cd-d7d7e60de326" pid=30675 comm="apparmor_parser"
[71382.855419] audit: type=1400 audit(1732157273.511:174): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="libvirt-3ca46e41-5cca-40b0-a5cd-d7d7e60de326" pid=30678 comm="apparmor_parser"
[71383.228796] audit: type=1400 audit(1732157273.883:175): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="libvirt-3ca46e41-5cca-40b0-a5cd-d7d7e60de326" pid=30682 comm="apparmor_parser"
[71383.626483] audit: type=1400 audit(1732157274.279:176): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="libvirt-3ca46e41-5cca-40b0-a5cd-d7d7e60de326" pid=30686 comm="apparmor_parser"
[71383.664542] virbr0: port 1(vnet0) entered blocking state
[71383.667108] virbr0: port 1(vnet0) entered disabled state
[71383.671212] device vnet0 entered promiscuous mode
[71383.674775] virbr0: port 1(vnet0) entered blocking state
[71383.677431] virbr0: port 1(vnet0) entered listening state
[71384.077738] audit: type=1400 audit(1732157274.731:177): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="libvirt-3ca46e41-5cca-40b0-a5cd-d7d7e60de326" pid=30697 comm="apparmor_parser"
[71385.702614] virbr0: port 1(vnet0) entered learning state
[71387.718555] virbr0: port 1(vnet0) entered forwarding state
[71387.720995] virbr0: topology change detected, propagating
I tried to boot the GRML ISO ( https://grml.org/ ) using the following
command:
# virt-install --name test-cloud-vnc --os-variant debian11 --ram 8192 \
    --disk debian-12-genericcloud-amd64.qcow2,device=disk,bus=virtio,size=10,format=qcow2 \
    --hvm --import --noautoconsole --network default \
    --cdrom grml64-full_2024.02.iso --boot cdrom \
    --graphics vnc,port=-1,listen=0.0.0.0
Unfortunately, the GRML boot hangs shortly after starting, apparently while
trying to load the initrd. So I can't poke around in the guest in this way,
either.
Advice about debugging this would be highly appreciated.
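For the record, here is what I plan to check next on the host side; a
generic debugging sketch using standard libvirt/iproute2/tcpdump
tooling (the bridge name virbr0 matches the dmesg output above):

```shell
# Is the default network active and its bridge up?
virsh net-info default
ip -br addr show virbr0

# Is dnsmasq running for the default network, and has it handed out leases?
ps aux | grep '[d]nsmasq.*default'
virsh net-dhcp-leases default

# Watch DHCP traffic on the bridge while the guest boots; if no
# DISCOVER packets show up, the guest side never sends a request
tcpdump -ni virbr0 port 67 or port 68

# Check the firewall rules libvirt installed for the bridge
iptables -L LIBVIRT_FWI -nv 2>/dev/null || nft list ruleset | grep -i -A5 libvirt
```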
--
Yours, Misha Ramendik
Unless explicitly stated, all opinions in my mail are my own and do not
reflect the views of any organization
18 hours, 45 minutes
Re: Set permissions and ownership of disk image created by vol-upload
by Martin Kletzander
On Tue, Nov 19, 2024 at 07:01:39PM +0000, Andrew Martin wrote:
>Hello,
>
>I am using libvirt 8.0 on Ubuntu 22.04 and would like to utilize the vol-upload
>command to upload a disk image:
>https://www.libvirt.org/manpages/virsh.html#vol-upload
>
>I am using the "directory" storage pool type:
>https://libvirt.org/storage.html#directory-pool
>
>However, when uploading the disk image, it gets written with octal permissions
>0600 and owner root:root. Ideally I'd like this file to be owned by
>libvirt-qemu:libvirt-qemu with permissions 0660 so that the group can read it.
>
>I've tried the following, none of which seem to alter the owner or permissions:
>
>- change the umask in the libvirtd systemd unit
>- edit the user, group, and dynamic_ownership settings in /etc/libvirt/qemu.conf
>- run "virsh pool-edit default" and change the <mode>, <owner>, or <group> tags
>
>How can I configure libvirtd to create these uploaded files with the desired
>permissions and ownership?
>
Use virsh vol-create <pool> <volume.xml> where the volume XML looks
something like this (adjust to your liking):
<volume>
  <name>perms.img</name>
  <capacity unit='M'>100</capacity>
  <target>
    <path>/var/lib/libvirt/images/perms.img</path>
    <format type='raw'/>
    <permissions>
      <mode>0755</mode>
      <owner>77</owner>
      <group>77</group>
    </permissions>
  </target>
</volume>
And then use virsh vol-upload to populate the volume with what you want.
That ought to be enough.
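In other words, the whole flow would be roughly (pool and file names
here are just examples):

```shell
# create the empty volume with the ownership/permissions from the XML
virsh vol-create default perms-vol.xml

# then upload the actual image contents into it
virsh vol-upload perms.img /path/to/local/disk.img --pool default
```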
HTH,
Martin
>Thanks,
>
>Andrew
1 day, 17 hours
Set permissions and ownership of disk image created by vol-upload
by Andrew Martin
Hello,
I am using libvirt 8.0 on Ubuntu 22.04 and would like to utilize the vol-upload
command to upload a disk image:
https://www.libvirt.org/manpages/virsh.html#vol-upload
I am using the "directory" storage pool type:
https://libvirt.org/storage.html#directory-pool
However, when uploading the disk image, it gets written with octal permissions
0600 and owner root:root. Ideally I'd like this file to be owned by
libvirt-qemu:libvirt-qemu with permissions 0660 so that the group can read it.
I've tried the following, none of which seem to alter the owner or permissions:
- change the umask in the libvirtd systemd unit
- edit the user, group, and dynamic_ownership settings in /etc/libvirt/qemu.conf
- run "virsh pool-edit default" and change the <mode>, <owner>, or <group> tags
How can I configure libvirtd to create these uploaded files with the desired
permissions and ownership?
Thanks,
Andrew
2 days, 13 hours
Immediate "system reset" when booting UEFI?
by Lars Kellogg-Stedman
Hey folks,
I'm running libvirt 10.1.0/qemu-system-x86-core-9.0.1-1.fc40.x86_64 on Fedora
40. I'm trying to boot an Ubuntu image in UEFI mode, like this:
virt-install -r 2048 -n ubuntu.virt --os-variant ubuntu24.04 \
    --disk pool=default,size=10,backing_store=mantic-server-cloudimg-amd64.img,backing_format=qcow2 \
    --cloud-init root-ssh-key=$HOME/.ssh/id_ed25519.pub \
    --boot uefi
This results in the domain booting up and then immediately resetting:
BdsDxe: loading Boot0001 "UEFI Misc Device" from
PciRoot(0x0)/Pci(0x2,0x3)/Pci(0x0,0x0)
BdsDxe: starting Boot0001 "UEFI Misc Device" from
PciRoot(0x0)/Pci(0x2,0x3)/Pci(0x0,0x0)
Reset System
Domain creation completed.
At this point, the machine is actually powered down and needs to be
restarted manually:
virsh start ubuntu.virt
This works fine, and the domain boots successfully, but now the cloud-init
metadata provided by the `--cloud-init` option to `virt-install` is no longer
available (because this is no longer the initial, virt-install managed boot of
the domain).
What is causing the firmware to reset the system when it first boots?
--
Lars Kellogg-Stedman <lars(a)redhat.com>
3 days, 17 hours
hard-disk via virtio-blk under windows (discard_granularity=0)
by d tbsky
Hi:
A few years ago, the virtio-blk device showed up as a hard disk under
Windows. In recent years the driver changed to present the device as a
thin-provisioned disk. That change is good for SSDs, but not so good
for raw hard disks.
Under Windows Server 2022 the default virtio-blk situation is quite
bad: SSD trim is very slow, and defragging a bigger volume like a 1TB
hard disk always fails with "memory not enough", even when the volume
is empty.
I found discussions about changing "discard_granularity" to make trim
happy again, and libvirt supports syntax like:
<blockio discard_granularity='2097152'/>
I also found that if I set "discard_granularity" to zero, Windows
recognizes the device as a "traditional hard drive" again and won't
issue unnecessary trims to it. I have wanted to do this for years, but
couldn't find a way to set it up like virtio-scsi's rotational
parameter.
The sad part is that if I set it up under RHEL 9.4 with libvirt 10.0
like below:
<blockio discard_granularity='0'/>
the line just disappears when I close "virsh edit", so I can only use
the more complex "<qemu:override>" format to set
"discard_granularity='0'".
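For reference, the "<qemu:override>" form I mean looks something like
this (the "ua-disk0" alias is an example; it has to be set as a user
alias on the disk itself, and the qemu XML namespace has to be
declared on the domain element):

```xml
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  ...
  <qemu:override>
    <qemu:device alias='ua-disk0'>
      <qemu:frontend>
        <qemu:property name='discard_granularity' type='unsigned' value='0'/>
      </qemu:frontend>
    </qemu:device>
  </qemu:override>
</domain>
```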
I wonder if libvirt could be changed to accept
"discard_granularity='0'" so that a traditional hard disk can be
recognized under Windows again. Or is there a better way to
distinguish hard disk/SSD/thin disk for virtio-blk now?
Regards,
tbskyd
3 days, 18 hours
error: unsupported flags (0x4) in function virStorageVolDefParseXML
by Veiko Kukk
Hi!
Simple volume definition:
<volume type='file'>
  <name>tstlog01-system</name>
  <capacity unit='GiB'>20</capacity>
  <target>
    <compat>1.1</compat>
    <format type='qcow2'/>
  </target>
</volume>
# virsh vol-create --pool libvirt-ssd0 --file vm-files/tstlog01/tstlog01-system-vol.xml --validate
error: Failed to create vol from vm-files/tstlog01/tstlog01-system-vol.xml
error: unsupported flags (0x4) in function virStorageVolDefParseXML
Omitting --validate creates the volume:
# virsh vol-dumpxml --pool libvirt-ssd0 --vol tstlog01-system
<volume type='file'>
  <name>tstlog01-system</name>
  <key>/var/lib/libvirt/images/tstlog01-system</key>
  <capacity unit='bytes'>21474836480</capacity>
  <allocation unit='bytes'>200704</allocation>
  <physical unit='bytes'>196928</physical>
  <target>
    <path>/var/lib/libvirt/images/tstlog01-system</path>
    <format type='qcow2'/>
    <permissions>
      <mode>0600</mode>
      <owner>0</owner>
      <group>0</group>
      <label>system_u:object_r:virt_image_t:s0</label>
    </permissions>
    <timestamps>
      <atime>1730277533.331347844</atime>
      <mtime>1730277533.329347839</mtime>
      <ctime>1730277533.330347841</ctime>
      <btime>0</btime>
    </timestamps>
  </target>
</volume>
# file /var/lib/libvirt/images/tstlog01-system
/var/lib/libvirt/images/tstlog01-system: QEMU QCOW2 Image (v2), 21474836480 bytes
# virsh vol-create-as libvirt-ssd0 tstlog01-system 20g --format qcow2
Vol tstlog01-system created
# virsh vol-dumpxml --pool libvirt-ssd0 --vol tstlog01-system
<volume type='file'>
  <name>tstlog01-system</name>
  <key>/var/lib/libvirt/images/tstlog01-system</key>
  <capacity unit='bytes'>21474836480</capacity>
  <allocation unit='bytes'>200704</allocation>
  <physical unit='bytes'>196928</physical>
  <target>
    <path>/var/lib/libvirt/images/tstlog01-system</path>
    <format type='qcow2'/>
    <permissions>
      <mode>0600</mode>
      <owner>0</owner>
      <group>0</group>
      <label>system_u:object_r:virt_image_t:s0</label>
    </permissions>
    <timestamps>
      <atime>1730277391.657950326</atime>
      <mtime>1730277391.656950323</mtime>
      <ctime>1730277391.657950326</ctime>
      <btime>0</btime>
    </timestamps>
  </target>
</volume>
It seems both volumes are identical - the one created without
--validate and the one created with vol-create-as.
Questions:
1) Why does validation fail? How to debug it?
2) Why doesn't libvirt create qcow2 v3 even when specifying compat
1.1? When creating an image using virt-manager on the same libvirt
hypervisor host, v3 images are created.
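For comparison, creating an image directly with qemu-img does let me
pick the compat level explicitly (paths here are just examples):

```shell
# create a qcow2 with an explicit compat level
qemu-img create -f qcow2 -o compat=1.1 /tmp/compat-test.qcow2 1G

# a v3 image reports "compat: 1.1" in the format-specific information
qemu-img info /tmp/compat-test.qcow2 | grep -E 'file format|compat'
```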
Alma Linux 9.4, libvirt-10.0.0-6.7.el9_4.alma.1.x86_64,
qemu-kvm-8.2.0-11.el9_4.6.x86_64
With best regards,
Veiko
1 week, 6 days
How to assign vHBA to guest VM?
by voron_g@inbox.ru
Hi all!
I've created several vHBAs and I need to assign these devices to a
guest VM. Is this possible?
Thanks a lot for your answers
BR, Gennady
2 weeks, 5 days
trustGuestRxFilters broken after upgrade to Debian 12
by Paul B. Henson
We've been running Debian 11 for a while, using SR-IOV:
<network>
  <name>sr-iov-intel-10G-1</name>
  <uuid>6bdaa4c8-e720-4ea0-9a50-91cb7f2c83b1</uuid>
  <forward mode='hostdev' managed='yes'>
    <pf dev='eth2'/>
  </forward>
</network>
and allocating VFs from the pool:
<interface type='network' trustGuestRxFilters='yes'>
  <mac address='52:54:00:08:da:5b'/>
  <source network='sr-iov-intel-10G-1'/>
  <vlan>
    <tag id='50'/>
  </vlan>
  <model type='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
After upgrading to Debian 12, when I try to start any vm which uses the
trustGuestRxFilters option, it fails to start with the message:
error: internal error: unable to execute QEMU command 'query-rx-filter':
invalid net client name: hostdev0
If I remove the option, the VM starts fine (but of course that breaks
functionality, as the option wasn't there just for fun :) ).
Any thoughts on what's going on here? The Debian 12 versions are:
libvirt-daemon/stable,now 9.0.0-4
qemu-system-x86/stable,now 1:7.2+dfsg-7+deb12u3
I see Debian 12 backports has version 8.1.2+ds-1~bpo12+1 of qemu, but no
newer versions of libvirt. I haven't tried the backports version to
see if that resolves the problem.
Thanks much...
3 weeks