Libvirt
by Gk Gk
Hi All,
I am trying to collect memory, disk and network stats for a VM on a KVM host.
It seems that the statistics do not match what the OS inside the VM is
reporting. Why is there this discrepancy?
Is this a known bug in libvirt? I also heard that libvirt shows cumulative
figures for these measures ever since the VM was created, so I tested by
creating a new VM and comparing the stats without a reboot. Even in this
case, the stats don't agree. Can someone help me here please?
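For reference, these are the commands I am using to pull the counters (the
domain, disk and interface names are just examples from my setup):
virsh dommemstat myvm
virsh domblkstat myvm vda
virsh domifstat myvm vnet0
virsh domstats myvm --balloon --block --interface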
Thanks
Kumar
1 year, 1 month
UEFI and External Snapshots
by Simon Fairweather
Is there a plan to support UEFI together with external snapshots? Or is there
a location for up-to-date documentation, as the info I can find is quite old.
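For reference, these are the commands I mean (domain and snapshot names are
just examples); if I understand correctly, the internal form is refused for
pflash/UEFI guests, so only the external, disk-only form is left:
virsh snapshot-create-as myvm snap1
virsh snapshot-create-as myvm snap1-ext --disk-only --atomic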
1 year, 7 months
seclabel & fuse
by lejeczek
Hi guys.
With images stored on FUSE-mounted storage - does it make
sense to try 'seclabel', and does it even work?
Are there any other techniques that would help to add
that, or a similar, layer of security when FUSE is used?
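For reference, this is the kind of per-image labelling I have in mind (the
path is just an example); I am not sure whether relabelling can work at all
on a FUSE mount:
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/mnt/fuse-share/images/guest.qcow2'>
    <seclabel model='selinux' relabel='yes'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>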
many thanks, L.
1 year, 8 months
Encrypting image file breaks EFI boot of the guest (Ubuntu) - ?
by lejeczek
Hi guys.
I have a guest that differs from all other
guests by:
<os>
<type arch='x86_64' machine='pc-q35-rhel9.0.0'>hvm</type>
<loader readonly='yes' secure='yes'
type='pflash'>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</loader>
<nvram>/var/lib/libvirt/qemu/nvram/ubusrv1_VARS.fd</nvram>
<boot dev='hd'/>
<bootmenu enable='yes'/>
</os>
whereas everything else has:
<os>
<type arch='x86_64' machine='pc-q35-rhel9.0.0'>hvm</type>
<boot dev='hd'/>
<boot dev='cdrom'/>
<bootmenu enable='yes'/>
</os>
Now, that guest is the only one that fails to boot after
its qcow2 image was LUKS-encrypted.
The guest starts but then reports:
BdsDxe: failed to load Boot0001 "Uefi Misc Device" from
PciRoot (0x0)/Pci(0x2,0x3)/Pci(0x0,0x0): Not found
Reverting back to the original, non-encrypted qcow2 image
makes everything work OK again.
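For completeness, this is roughly how I reference the encrypted image in the
domain XML (the file name and secret UUID are placeholders); the same pattern
works fine for the other, non-UEFI guests:
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/ubusrv1.qcow2'>
    <encryption format='luks'>
      <secret type='passphrase' uuid='SECRET-UUID-HERE'/>
    </encryption>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>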
All and any thoughts shared are much appreciated.
many thanks, L.
1 year, 8 months
storage backup with encryption on-the-fly ?
by lejeczek
Hi guys.
Is there a solution, perhaps a function of libvirt, to
back up a guest's storage and encrypt the resulting image file?
On-the-fly, ideally.
If there is no ready/built-in solution, then perhaps there is a
best technique you recommend/use?
I currently use 'backup-begin' on qcow2s, which are LUKS
encrypted.
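Not on-the-fly, but the fallback I have been considering is a plain backup
followed by re-encrypting the copy with qemu-img (file names and the
passphrase file are placeholders):
virsh backup-begin myvm
# once the backup job has finished, re-encrypt the resulting copy:
qemu-img convert -O qcow2 \
  --object secret,id=sec0,file=/root/backup.pass \
  -o encrypt.format=luks,encrypt.key-secret=sec0 \
  vda.backup.qcow2 vda.backup.luks.qcow2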
many thanks, L.
1 year, 8 months
Virtiofsd
by Simon Fairweather
Hi
In QEMU 8 the C implementation of virtiofsd has been removed in favor of the
Rust version, which no longer supports some of the legacy options.
Do you have a view on what should be used going forward to support
virtiofsd in libvirt with QEMU 8? (An XML sketch of what I have in mind
follows the Rust help output below.)
The legacy-style options are shown as deprecated:
-o <compat-options>...
Options in a format compatible with the legacy implementation
[deprecated]
Rust version options
virtiofsd backend 1.5.1
Launch a virtiofsd backend.
USAGE:
virtiofsd [FLAGS] [OPTIONS] --fd <fd> --socket <socket> --socket-path
<socket-path>
FLAGS:
--allow-direct-io
Honor the O_DIRECT flag passed down by guest applications
--announce-submounts
Tell the guest which directories are mount points
-d
Set log level to "debug" [deprecated]
-f
Compatibility option that has no effect [deprecated]
-h, --help
Prints help information
--killpriv-v2
Enable KILLPRIV V2 support
--no-killpriv-v2
Disable KILLPRIV V2 support [default]
--no-readdirplus
Disable support for READDIRPLUS operations
--posix-acl
Enable support for posix ACLs (implies --xattr)
--print-capabilities
Print vhost-user.json backend program capabilities and exit
--security-label
Enable security label support. Expects SELinux xattr on file
creation from client and stores it in the newly
created file
--syslog
Log to syslog [default: stderr]
-V, --version
Prints version information
--writeback
Enable writeback cache
--xattr
Enable support for extended attributes
OPTIONS:
--cache <cache>
The caching policy the file system should use (auto, always,
never) [default: auto]
-o <compat-options>...
Options in a format compatible with the legacy implementation
[deprecated]
--fd <fd>
File descriptor for the listening socket
--inode-file-handles=<inode-file-handles>
When to use file handles to reference inodes instead of O_PATH
file descriptors (never, prefer, mandatory)
- never: Never use file handles, always use O_PATH file
descriptors.
- prefer: Attempt to generate file handles, but fall back to
O_PATH file descriptors where the underlying
filesystem does not support file handles. Useful when there
are various different filesystems under the
shared directory and some of them do not support file handles.
("fallback" is a deprecated alias for
"prefer".)
- mandatory: Always use file handles, never fall back to O_PATH
file descriptors.
Using file handles reduces the number of file descriptors
virtiofsd keeps open, which is not only helpful
with resources, but may also be important in cases where
virtiofsd should only have file descriptors open
for files that are open in the guest, e.g. to get around bad
interactions with NFS's silly renaming.
[default: never]
--log-level <log-level>
Log level (error, warn, info, debug, trace, off) [default: info]
--modcaps <modcaps>
Modify the list of capabilities, e.g.,
--modcaps=+sys_admin:-chown
--rlimit-nofile <rlimit-nofile>
Set maximum number of file descriptors (0 leaves rlimit
unchanged) [default: min(1000000,
'/proc/sys/fs/nr_open')]
--sandbox <sandbox>
Sandbox mechanism to isolate the daemon process (namespace,
chroot, none) [default: namespace]
--seccomp <seccomp>
Action to take when seccomp finds a not allowed syscall (none,
kill, log, trap) [default: kill]
--shared-dir <shared-dir>
Shared directory path
--socket <socket>
vhost-user socket path [deprecated]
--socket-group <socket-group>
Name of group for the vhost-user socket
--socket-path <socket-path>
vhost-user socket path
--thread-pool-size <thread-pool-size>
Maximum thread pool size. A value of "0" disables the pool
[default: 0]
--xattrmap <xattrmap>
Add custom rules for translating extended attributes between
host and guest (e.g. :map::user.virtiofs.:)
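For reference, this is the domain XML I would expect to keep using (paths and
the target tag are examples); as I understand it, libvirt simply launches
whatever binary is named in <binary>, so pointing it at the Rust virtiofsd
plus shared memory backing should be the way forward:
<memoryBacking>
  <source type='memfd'/>
  <access mode='shared'/>
</memoryBacking>
...
<filesystem type='mount' accessmode='passthrough'>
  <driver type='virtiofs' queue='1024'/>
  <binary path='/usr/libexec/virtiofsd' xattr='on'/>
  <source dir='/srv/share'/>
  <target dir='hostshare'/>
</filesystem>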
Regards
Simon.
1 year, 8 months
backup-begin
by André Malm
Hello,
For some VMs, virsh backup-begin sometimes shuts off the VM and
returns "error: operation failed: domain is not running", although the
domain was clearly in the running (or paused) state.
Is the idea that you should run guest-fsfreeze-freeze / virsh suspend before
virsh backup-begin? I have tried both, with the same results.
What could be causing the machine to shut off?
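For reference, this is the sequence I have been testing (the domain name is a
placeholder), and where I look when the domain disappears:
virsh domfsfreeze myvm
virsh backup-begin myvm
virsh domfsthaw myvm
# when the guest ends up shut off, check why QEMU exited:
tail -n 50 /var/log/libvirt/qemu/myvm.log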
Thanks,
André
1 year, 8 months
Xen with libvirt and SR-IOV
by nospam@godawa.de
Hi everybody,
For a long time I have been using Xen on CentOS with xl, currently the latest
CentOS 7 with Xen 4.15 from the CentOS Xen project. For several VMs I
have to use SR-IOV to lower the CPU usage of Dom0 on the host.
CentOS 7 is coming to an end, and Xen is not supported by RHEL or Rocky Linux
anymore, so unfortunately I have to switch to KVM.
The first step now is converting all the scripts for managing and
running VMs so that they run with the additional libvirt layer.
Almost everything is working, but I do not get a network interface in
the VM when I start it with "virsh start ..." instead of "xl create ...".
First of all, is there documentation on how to configure the VMs in the
domain definition for Xen (all docs I found are KVM-related)?
The converted xl config does not do the job:
virsh -c xen:/// domxml-from-native --format xen-xl vm > vm.xml
These are some non-working examples I tried out:
...
<interface type='hostdev' managed='yes'>
<mac address='02:16:32:10:20:30'/>
<driver name='xen'/>
<source>
<address type='pci' domain='0x0000' bus='0x81' slot='0x02'
function='0x6'/>
</source>
<vlan>
<tag id='11'/>
</vlan>
</interface>
...
...
<hostdev mode='subsystem' type='pci' managed='yes'>
<driver name='xen'/>
<source>
<address domain='0x0000' bus='0x81' slot='0x02' function='0x6'/>
</source>
<address type='pci' domain='0x0000' bus='0x81' slot='0x02'
function='0x6'/>
</hostdev>
...
The result is always the same: the VM does not find any interface to
configure when started via libvirt:
# dmesg | egrep -i "net|eth"
...
[ 4.523173] iavf: Intel(R) Ethernet Adaptive Virtual Function Network
Driver - version 4.4.2.1
# lspci | egrep -i "net|eth"
00:00.6 Ethernet controller: Intel Corporation Ethernet Virtual Function
700 Series (rev 02)
# lspci -vmmks 00:00.6
Slot: 00:00.6
Class: Ethernet controller
Vendor: Intel Corporation
Device: Ethernet Virtual Function 700 Series
SVendor: Intel Corporation
SDevice: Device 0000
Rev: 02
Module: i40evf
Module: iavf
NUMANode: 0
# lsmod | egrep -i "iavf|i40"
iavf 135168 0
auxiliary 16384 1 iavf
ptp 20480 1 iavf
# ifconfig eth0
eth0: error fetching interface information: Device not found
The same VM after starting with xl:
# dmesg | egrep -i "net|eth"
...
[ 4.742038] iavf: Intel(R) Ethernet Adaptive Virtual Function Network
Driver - version 4.4.2.1
[ 40.578461] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[ 40.642868] iavf 0000:00:00.6 eth0: NIC Link is Up Speed is 10 Gbps
Full Duplex
[ 40.644015] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
# lspci | egrep -i "net|eth"
00:00.6 Ethernet controller: Intel Corporation Ethernet Virtual Function
700 Series (rev 02)
# lspci -vmmks 00:00.6
Slot: 00:00.6
Class: Ethernet controller
Vendor: Intel Corporation
Device: Ethernet Virtual Function 700 Series
SVendor: Intel Corporation
SDevice: Device 0000
Rev: 02
Driver: iavf
Module: i40evf
Module: iavf
NUMANode: 0
# lsmod | egrep -i "iavf|i40"
iavf 135168 0
auxiliary 16384 1 iavf
ptp 20480 1 iavf
# ifconfig eth0
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.20.30.40 netmask 255.255.255.0 broadcast 10.20.30.1
...
ether 02:16:32:10:20:30 txqueuelen 1000 (Ethernet)
I expect that all the SR-IOV stuff is configured correctly, because the
VM runs without any problems when started via xl.
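For what it is worth, this is how I have been comparing the two cases (the
domain name is a placeholder); in the xl-started guest the VF shows up, in
the libvirt-started one it does not:
# libvirt's view of the running domain
virsh dumpxml vm1 | egrep -i -A8 "hostdev|interface"
# Xen's view of the same domain
xl list -l vm1
xl pci-list vm1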
Thanks a lot for any ideas,
--
kind regards,
Thorolf
1 year, 8 months