Running libvirt without dnsmasq
by procmem@riseup.net
Hi, we are trying to document a way for our users to run libvirt without
dnsmasq, to reduce attack surface on the host. We are aware that the default
network uses it, but we plan to disable that network and use our own
custom-configured networks instead. However, uninstalling dnsmasq causes
libvirt to refuse to start, even when the default network is no longer
running. Is this possible, or is it something that needs code changes
upstream?
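For context, a custom network of the kind we have in mind could look something like this minimal sketch (names and addresses are placeholders): with no <dhcp> element and the embedded DNS explicitly disabled, libvirt should have no reason to spawn dnsmasq for the network.

```xml
<network>
  <name>nodnsmasq</name>
  <forward mode='nat'/>
  <bridge name='virbr10' stp='on' delay='0'/>
  <!-- disable the embedded DNS service; with no <dhcp> block either,
       libvirt should not need to start dnsmasq for this network -->
  <dns enable='no'/>
  <ip address='192.168.100.1' netmask='255.255.255.0'/>
</network>
```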
Re: Set permissions and ownership of disk image created by vol-upload
by Martin Kletzander
On Tue, Nov 19, 2024 at 07:01:39PM +0000, Andrew Martin wrote:
>Hello,
>
>I am using libvirt 8.0 on Ubuntu 22.04 and would like to utilize the vol-upload
>command to upload a disk image:
>https://www.libvirt.org/manpages/virsh.html#vol-upload
>
>I am using the "directory" storage pool type:
>https://libvirt.org/storage.html#directory-pool
>
>However, when uploading the disk image, it gets written with octal permissions
>0600 and owner root:root. Ideally I'd like this file to be owned by
>libvirt-qemu:libvirt-qemu with permissions 0660 so that the group can read it.
>
>I've tried the following, none of which seem to alter the owner or permissions:
>
>- change the umask in the libvirtd systemd unit
>- edit the user, group, and dynamic_ownership settings in /etc/libvirt/qemu.conf
>- run "virsh pool-edit default" and change the <mode>, <owner>, or <group> tags
>
>How can I configure libvirtd to create these uploaded files with the desired
>permissions and ownership?
>
Use virsh vol-create <pool> <volume.xml> where the volume xml looks
something like this (adjust to your liking):
<volume>
  <name>perms.img</name>
  <capacity unit='M'>100</capacity>
  <target>
    <path>/var/lib/libvirt/images/perms.img</path>
    <format type='raw'/>
    <permissions>
      <mode>0755</mode>
      <owner>77</owner>
      <group>77</group>
    </permissions>
  </target>
</volume>
And then use virsh vol-upload to populate the volume with what you want.
That ought to be enough.
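A minimal sketch of the two-step flow (the pool and volume names follow the example XML above; the local image path is a placeholder):

```shell
# 1. Define the volume up front so libvirt creates it with the
#    mode/owner/group requested in the volume XML
virsh vol-create default perms-vol.xml

# 2. Stream the local image into the pre-created volume
virsh vol-upload --pool default perms.img /path/to/local-disk.img
```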
HTH,
Martin
>Thanks,
>
>Andrew
Help building .rpm of libvirt from fork
by Filippo Ferrando Damillano
Hi, I'm a CS student trying to build a full RPM of libvirt from a
fork <https://gitlab.com/filippo-ferrando/libvirt-sd>, because I need to add
a scheduler to the libvirt configuration.
I built the source code successfully, but I cannot manage to build a
working .rpm package. I'm using an Alma9 machine and the original .spec
file from the repo.
When I run rpmbuild, it gets as far as meson, which reports that a file
called `meson.build` is missing from the directory, even though the file
exists.
My fork is based on the master branch at tag 10.9.0.
I'll include in the mail the log from meson and the spec file I'm using.
Thanks to everyone for the help!
--
------------------------
Official University of Turin email address for students and graduates
ZFS storage backend
by Paul B. Henson
I'm running libvirt to manage virtual machines utilizing ZFS zvols for
storage. For organization and management purposes, these zvols are nested:
NAME USED AVAIL REFER MOUNTPOINT
virt/qemu/debian12-template 15.8G 1.57T 24K none
virt/qemu/debian12-template/bootefi 1.02G 1.57T 141M -
virt/qemu/debian12-template/home 522M 1.57T 2.57M -
virt/qemu/debian12-template/opt 522M 1.57T 3.00M -
virt/qemu/debian12-template/root 8.13G 1.58T 970M -
virt/qemu/debian12-template/swap 522M 1.57T 366M -
virt/qemu/debian12-template/tmp 522M 1.57T 2.39M -
virt/qemu/debian12-template/var 3.05G 1.57T 161M -
virt/qemu/debian12-template/varlog 1.02G 1.57T 101M -
virt/qemu/debian12-template/vartmp 522M 1.57T 3.75M -
The libvirt ZFS integration seems to assume that all zvols exist
at the top level of the pool; it won't let you create or manage a
hierarchical structure. That is what it is, but it also
misrepresents the actual structure, showing volumes as being in the
pool root when they are not:
virsh # vol-list virt
Name Path
-------------------------------------
backup /dev/zvol/virt/backup
boot-nb /dev/zvol/virt/boot-nb
bootefi /dev/zvol/virt/bootefi
dest /dev/zvol/virt/dest
disk1 /dev/zvol/virt/disk1
home /dev/zvol/virt/home
opt /dev/zvol/virt/opt
root /dev/zvol/virt/root
swap /dev/zvol/virt/swap
tmp /dev/zvol/virt/tmp
usr /dev/zvol/virt/usr
usrlocal /dev/zvol/virt/usrlocal
usrobj /dev/zvol/virt/usrobj
usrports /dev/zvol/virt/usrports
usrsrc /dev/zvol/virt/usrsrc
var /dev/zvol/virt/var
varlog /dev/zvol/virt/varlog
vartmp /dev/zvol/virt/vartmp
Ideally libvirt would support nested ZFS organizational structure, as
that is a very common layout. But even if not, it probably shouldn't
misrepresent a structure it doesn't understand?
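As a possible workaround (an untested sketch; whether a nested dataset is accepted as a pool source may depend on the libvirt version), one could try pointing a separate pool definition at the nested dataset itself, so volume names resolve relative to the right parent:

```xml
<pool type='zfs'>
  <name>debian12-template</name>
  <source>
    <!-- nested dataset as the pool source; 'virt' alone would be the pool root -->
    <name>virt/qemu/debian12-template</name>
  </source>
</pool>
```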
Thanks for any thoughts…
network usually failing (nested virt, Debian)
by Misha Ramendik
Hello,
I have a VPS where hardware nested virtualization is enabled, and I am
trying to use it. The VPS runs Debian 12 and has 16
GB of RAM.
I installed libvirt/virt-manager/etc and downloaded the "nocloud" and
"genericcloud" images from https://cdimage.debian.org/images/cloud/ . The
description says that the "nocloud" image should allow passwordless root
login but unfortunately it does not. I run things as root (this is a test
setup) but I did chown all qcow images to "libvirt-qemu".
I use the following command line:
# virt-install --name test-cloud-vnc --os-variant debian11 --ram 8192 \
    --disk debian-12-genericcloud-amd64.qcow2,device=disk,bus=virtio,size=10,format=qcow2 \
    --hvm --import --noautoconsole --network default \
    --graphics vnc,port=-1,listen=0.0.0.0
(Or the same for the nocloud image)
The nocloud image sometimes, rarely, gets a DHCP lease (visible in "virsh
net-dhcp-leases default") and then responds to pings. But usually the
nocloud image, and always the genericcloud image (though that might just be
chance), gets no DHCP lease and cannot be pinged. This means that my
attempt to set up cloud-init via an ad hoc webserver (as per
https://cloudinit.readthedocs.io/en/latest/tutorial/qemu.html ) never got
tested, because the cloud-init image can't access the network to start
with.
I did try --network default,model=e1000 - no change. I do successfully see
the guest console when I connect to the VPS by VNC. Unfortunately, I don't
have a password to log in with, so I can't even try to see whether it sees
any network adapter.
dmesg output for the time:
[71382.495314] audit: type=1400 audit(1732157273.151:173): apparmor="STATUS" operation="profile_load" profile="unconfined" name="libvirt-3ca46e41-5cca-40b0-a5cd-d7d7e60de326" pid=30675 comm="apparmor_parser"
[71382.855419] audit: type=1400 audit(1732157273.511:174): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="libvirt-3ca46e41-5cca-40b0-a5cd-d7d7e60de326" pid=30678 comm="apparmor_parser"
[71383.228796] audit: type=1400 audit(1732157273.883:175): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="libvirt-3ca46e41-5cca-40b0-a5cd-d7d7e60de326" pid=30682 comm="apparmor_parser"
[71383.626483] audit: type=1400 audit(1732157274.279:176): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="libvirt-3ca46e41-5cca-40b0-a5cd-d7d7e60de326" pid=30686 comm="apparmor_parser"
[71383.664542] virbr0: port 1(vnet0) entered blocking state
[71383.667108] virbr0: port 1(vnet0) entered disabled state
[71383.671212] device vnet0 entered promiscuous mode
[71383.674775] virbr0: port 1(vnet0) entered blocking state
[71383.677431] virbr0: port 1(vnet0) entered listening state
[71384.077738] audit: type=1400 audit(1732157274.731:177): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="libvirt-3ca46e41-5cca-40b0-a5cd-d7d7e60de326" pid=30697 comm="apparmor_parser"
[71385.702614] virbr0: port 1(vnet0) entered learning state
[71387.718555] virbr0: port 1(vnet0) entered forwarding state
[71387.720995] virbr0: topology change detected, propagating
I tried to boot the GRML ISO ( https://grml.org/ ) using the following
command:
# virt-install --name test-cloud-vnc --os-variant debian11 --ram 8192 \
    --disk debian-12-genericcloud-amd64.qcow2,device=disk,bus=virtio,size=10,format=qcow2 \
    --hvm --import --noautoconsole --network default \
    --cdrom grml64-full_2024.02.iso --boot cdrom \
    --graphics vnc,port=-1,listen=0.0.0.0
Unfortunately, the GRML boot hangs shortly after starting, apparently while
trying to load the initrd. So I can't poke around in the guest in this way,
either.
Advice about debugging this would be highly appreciated.
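One avenue worth noting (a sketch, not from the thread; the bridge name is the libvirt default and may differ): watching the default network's bridge for DHCP traffic distinguishes a guest that never transmits from a host side that never answers.

```shell
# Watch DHCP traffic on the default libvirt bridge (run as root).
# No DISCOVER packets  -> the guest side never transmits;
# DISCOVER but no OFFER -> the host-side DHCP service is the problem.
tcpdump -ni virbr0 port 67 or port 68
```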
--
Yours, Misha Ramendik
Unless explicitly stated, all opinions in my mail are my own and do not
reflect the views of any organization
Set permissions and ownership of disk image created by vol-upload
by Andrew Martin
Hello,
I am using libvirt 8.0 on Ubuntu 22.04 and would like to utilize the vol-upload
command to upload a disk image:
https://www.libvirt.org/manpages/virsh.html#vol-upload
I am using the "directory" storage pool type:
https://libvirt.org/storage.html#directory-pool
However, when uploading the disk image, it gets written with octal permissions
0600 and owner root:root. Ideally I'd like this file to be owned by
libvirt-qemu:libvirt-qemu with permissions 0660 so that the group can read it.
I've tried the following, none of which seem to alter the owner or permissions:
- change the umask in the libvirtd systemd unit
- edit the user, group, and dynamic_ownership settings in /etc/libvirt/qemu.conf
- run "virsh pool-edit default" and change the <mode>, <owner>, or <group> tags
How can I configure libvirtd to create these uploaded files with the desired
permissions and ownership?
Thanks,
Andrew
Immediate "system reset" when booting UEFI?
by Lars Kellogg-Stedman
Hey folks,
I'm running libvirt 10.1.0/qemu-system-x86-core-9.0.1-1.fc40.x86_64 on Fedora
40. I'm trying to boot an Ubuntu image in UEFI mode, like this:
virt-install -r 2048 -n ubuntu.virt --os-variant ubuntu24.04 \
    --disk pool=default,size=10,backing_store=mantic-server-cloudimg-amd64.img,backing_format=qcow2 \
    --cloud-init root-ssh-key=$HOME/.ssh/id_ed25519.pub \
    --boot uefi
This results in the domain booting up and then immediately resetting:
BdsDxe: loading Boot0001 "UEFI Misc Device" from
PciRoot(0x0)/Pci(0x2,0x3)/Pci(0x0,0x0)
BdsDxe: starting Boot0001 "UEFI Misc Device" from
PciRoot(0x0)/Pci(0x2,0x3)/Pci(0x0,0x0)
Reset System
Domain creation completed.
At this point, the machine is actually powered down and needs to be
restarted manually:
virsh start ubuntu.virt
This works fine, and the domain boots successfully, but now the cloud-init
metadata provided by the `--cloud-init` option to `virt-install` is no longer
available (because this is no longer the initial, virt-install managed boot of
the domain).
What is causing the firmware to reset the system when it first boots?
--
Lars Kellogg-Stedman <lars@redhat.com>
hard-disk via virtio-blk under windows (discard_granularity=0)
by d tbsky
Hi,
A few years ago, virtio-blk devices showed up as hard disks under
Windows. In recent years the driver changed so that the device shows up
as a thin-provisioned disk. The change is good for SSDs, but not so good
for raw hard disks.
Under Windows Server 2022 the default virtio-blk situation is quite
bad: SSD trim is very slow, and defragmenting a larger volume such as a
1 TB hard disk always fails with "not enough memory", even when the
volume is empty.
I found discussions suggesting changing "discard_granularity" to make
trim happy again, and libvirt supports syntax like the following:
<blockio discard_granularity='2097152'/>
I also found that if I set "discard_granularity" to zero, Windows
recognizes the device as a "traditional hard drive" again
and won't do unnecessary trims to it. I have wanted to do this for
years, but couldn't find a way to set it up, unlike virtio-scsi's
rotational parameter.
The sad part is that if I set it up under RHEL 9.4 with libvirt 10.0 like below:
<blockio discard_granularity='0'/>
the line just disappears when I close "virsh edit".
So I can only use the more complex "<qemu:override>" format to set
"discard_granularity='0'".
I wonder whether libvirt could be changed to accept
"discard_granularity='0'" so traditional hard disks can be
recognized under Windows again.
Or is there a better way to distinguish hard disk/SSD/thin disk
for virtio-blk now?
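For completeness, the <qemu:override> workaround looks roughly like this sketch (the disk alias "ua-disk0" is an assumption and must match a user alias set on the target disk; the domain root must declare the qemu XML namespace):

```xml
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <!-- ... rest of the domain definition ... -->
  <qemu:override>
    <!-- 'ua-disk0' is an assumed <alias name='ua-disk0'/> on the disk -->
    <qemu:device alias='ua-disk0'>
      <qemu:frontend>
        <qemu:property name='discard_granularity' type='unsigned' value='0'/>
      </qemu:frontend>
    </qemu:device>
  </qemu:override>
</domain>
```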
Regards,
tbskyd
error: unsupported flags (0x4) in function virStorageVolDefParseXML
by Veiko Kukk
Hi!
Simple volume definition:
<volume type='file'>
  <name>tstlog01-system</name>
  <capacity unit='GiB'>20</capacity>
  <target>
    <compat>1.1</compat>
    <format type='qcow2'/>
  </target>
</volume>
# virsh vol-create --pool libvirt-ssd0 --file vm-files/tstlog01/tstlog01-system-vol.xml --validate
error: Failed to create vol from vm-files/tstlog01/tstlog01-system-vol.xml
error: unsupported flags (0x4) in function virStorageVolDefParseXML
Omitting --validate creates the volume:
# virsh vol-dumpxml --pool libvirt-ssd0 --vol tstlog01-system
<volume type='file'>
  <name>tstlog01-system</name>
  <key>/var/lib/libvirt/images/tstlog01-system</key>
  <capacity unit='bytes'>21474836480</capacity>
  <allocation unit='bytes'>200704</allocation>
  <physical unit='bytes'>196928</physical>
  <target>
    <path>/var/lib/libvirt/images/tstlog01-system</path>
    <format type='qcow2'/>
    <permissions>
      <mode>0600</mode>
      <owner>0</owner>
      <group>0</group>
      <label>system_u:object_r:virt_image_t:s0</label>
    </permissions>
    <timestamps>
      <atime>1730277533.331347844</atime>
      <mtime>1730277533.329347839</mtime>
      <ctime>1730277533.330347841</ctime>
      <btime>0</btime>
    </timestamps>
  </target>
</volume>
# file /var/lib/libvirt/images/tstlog01-system
/var/lib/libvirt/images/tstlog01-system: QEMU QCOW2 Image (v2),
21474836480 bytes
# virsh vol-create-as libvirt-ssd0 tstlog01-system 20g --format qcow2
Vol tstlog01-system created
# virsh vol-dumpxml --pool libvirt-ssd0 --vol tstlog01-system
<volume type='file'>
  <name>tstlog01-system</name>
  <key>/var/lib/libvirt/images/tstlog01-system</key>
  <capacity unit='bytes'>21474836480</capacity>
  <allocation unit='bytes'>200704</allocation>
  <physical unit='bytes'>196928</physical>
  <target>
    <path>/var/lib/libvirt/images/tstlog01-system</path>
    <format type='qcow2'/>
    <permissions>
      <mode>0600</mode>
      <owner>0</owner>
      <group>0</group>
      <label>system_u:object_r:virt_image_t:s0</label>
    </permissions>
    <timestamps>
      <atime>1730277391.657950326</atime>
      <mtime>1730277391.656950323</mtime>
      <ctime>1730277391.657950326</ctime>
      <btime>0</btime>
    </timestamps>
  </target>
</volume>
It seems both volumes are identical - the one created without
--validate and the one created with vol-create-as.
Questions:
1) Why does validation fail, and how can I debug it?
2) Why doesn't libvirt create qcow2 v3 even when compat 1.1 is
specified? When creating an image using virt-manager on the same libvirt
hypervisor host, v3 images are created.
Alma Linux 9.4, libvirt-10.0.0-6.7.el9_4.alma.1.x86_64,
qemu-kvm-8.2.0-11.el9_4.6.x86_64
With best regards,
Veiko