[PATCH 0/4] virsh completer cleanups
by Peter Krempa
Make the core completer code common to all virt shells and annotate a few
arguments with the existing empty and local-file completers.
Peter Krempa (4):
virsh: completer: Extract common completer methods from virsh to vsh
vsh: Apply empty/local completers to global commands
virsh: Apply empty completer to arguments where completion doesn't
make sense
virsh: domain: Annotate rest of arguments taking local existing file
tools/meson.build | 2 +-
tools/virsh-backup.c | 6 +-
tools/virsh-checkpoint.c | 8 +-
tools/virsh-completer-domain.c | 70 ++++++------
tools/virsh-completer-host.c | 12 +-
tools/virsh-completer-nodedev.c | 10 +-
tools/virsh-completer-pool.c | 6 +-
tools/virsh-completer-volume.c | 4 +-
tools/virsh-completer.h | 19 +---
tools/virsh-domain.c | 112 ++++++++++---------
tools/virsh-host.c | 6 +-
tools/virsh-interface.c | 2 +-
tools/virsh-network.c | 9 +-
tools/virsh-nodedev.c | 2 +-
tools/virsh-nwfilter.c | 4 +-
tools/virsh-pool.c | 18 +--
tools/virsh-secret.c | 6 +-
tools/virsh-snapshot.c | 8 +-
tools/virsh-volume.c | 12 +-
tools/virsh.c | 2 +-
tools/virsh.h | 2 +-
tools/{virsh-completer.c => vsh-completer.c} | 34 +++---
tools/vsh-completer.h | 41 +++++++
tools/vsh.c | 6 +
24 files changed, 222 insertions(+), 179 deletions(-)
rename tools/{virsh-completer.c => vsh-completer.c} (85%)
create mode 100644 tools/vsh-completer.h
--
2.49.0
[PATCH] cputest: Skip more tests requiring JSON_MODELS if QEMU is disabled
by Jaroslav Suchanek
From: Jaroslav Suchanek <jsuchane(a)redhat.com>
Mark more tests with JSON_MODELS_REQUIRED, as these tests fail when QEMU is
disabled, typically when running the test suite on FreeBSD or macOS.
Signed-off-by: Jaroslav Suchanek <jsuchane(a)redhat.com>
---
tests/cputest.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/tests/cputest.c b/tests/cputest.c
index de313f6102..bb471d2ae7 100644
--- a/tests/cputest.c
+++ b/tests/cputest.c
@@ -1183,14 +1183,14 @@ mymain(void)
DO_TEST_CPUID(VIR_ARCH_X86_64, "Atom-D510", JSON_NONE);
DO_TEST_CPUID(VIR_ARCH_X86_64, "Atom-N450", JSON_NONE);
DO_TEST_CPUID(VIR_ARCH_X86_64, "Atom-P5362", JSON_MODELS_REQUIRED);
- DO_TEST_CPUID(VIR_ARCH_X86_64, "Atom-P5362-2", JSON_MODELS);
+ DO_TEST_CPUID(VIR_ARCH_X86_64, "Atom-P5362-2", JSON_MODELS_REQUIRED);
DO_TEST_CPUID(VIR_ARCH_X86_64, "Core-i5-650", JSON_MODELS_REQUIRED);
DO_TEST_CPUID(VIR_ARCH_X86_64, "Core-i5-2500", JSON_HOST);
DO_TEST_CPUID(VIR_ARCH_X86_64, "Core-i5-2540M", JSON_MODELS);
DO_TEST_CPUID(VIR_ARCH_X86_64, "Core-i5-4670T", JSON_HOST);
DO_TEST_CPUID(VIR_ARCH_X86_64, "Core-i5-6600", JSON_HOST);
- DO_TEST_CPUID(VIR_ARCH_X86_64, "Core-i7-1270P", JSON_MODELS);
- DO_TEST_CPUID(VIR_ARCH_X86_64, "Core-i7-1365U", JSON_MODELS);
+ DO_TEST_CPUID(VIR_ARCH_X86_64, "Core-i7-1270P", JSON_MODELS_REQUIRED);
+ DO_TEST_CPUID(VIR_ARCH_X86_64, "Core-i7-1365U", JSON_MODELS_REQUIRED);
DO_TEST_CPUID(VIR_ARCH_X86_64, "Core-i7-2600", JSON_HOST);
DO_TEST_CPUID(VIR_ARCH_X86_64, "Core-i7-2600-xsaveopt", JSON_MODELS_REQUIRED);
DO_TEST_CPUID(VIR_ARCH_X86_64, "Core-i7-3520M", JSON_NONE);
@@ -1225,7 +1225,7 @@ mymain(void)
DO_TEST_CPUID(VIR_ARCH_X86_64, "Ryzen-9-3900X-12-Core", JSON_MODELS);
DO_TEST_CPUID(VIR_ARCH_X86_64, "Xeon-5110", JSON_NONE);
DO_TEST_CPUID(VIR_ARCH_X86_64, "Xeon-6731E", JSON_MODELS);
- DO_TEST_CPUID(VIR_ARCH_X86_64, "Xeon-Bronze-3408U", JSON_MODELS);
+ DO_TEST_CPUID(VIR_ARCH_X86_64, "Xeon-Bronze-3408U", JSON_MODELS_REQUIRED);
DO_TEST_CPUID(VIR_ARCH_X86_64, "Xeon-E3-1225-v5", JSON_MODELS);
DO_TEST_CPUID(VIR_ARCH_X86_64, "Xeon-E3-1245-v5", JSON_MODELS);
DO_TEST_CPUID(VIR_ARCH_X86_64, "Xeon-E3-1270-v5", JSON_MODELS);
@@ -1244,12 +1244,12 @@ mymain(void)
DO_TEST_CPUID(VIR_ARCH_X86_64, "Xeon-Gold-6130", JSON_MODELS);
DO_TEST_CPUID(VIR_ARCH_X86_64, "Xeon-Gold-6148", JSON_HOST);
DO_TEST_CPUID(VIR_ARCH_X86_64, "Xeon-Gold-6152", JSON_MODELS);
- DO_TEST_CPUID(VIR_ARCH_X86_64, "Xeon-Gold-6530", JSON_MODELS);
+ DO_TEST_CPUID(VIR_ARCH_X86_64, "Xeon-Gold-6530", JSON_MODELS_REQUIRED);
DO_TEST_CPUID(VIR_ARCH_X86_64, "Xeon-Platinum-8268", JSON_HOST);
DO_TEST_CPUID(VIR_ARCH_X86_64, "Xeon-Platinum-9242", JSON_MODELS);
DO_TEST_CPUID(VIR_ARCH_X86_64, "Xeon-Silver-4214R", JSON_MODELS);
DO_TEST_CPUID(VIR_ARCH_X86_64, "Xeon-W3520", JSON_HOST);
- DO_TEST_CPUID(VIR_ARCH_X86_64, "Xeon-w7-3465X", JSON_MODELS);
+ DO_TEST_CPUID(VIR_ARCH_X86_64, "Xeon-w7-3465X", JSON_MODELS_REQUIRED);
DO_TEST_CPUID(VIR_ARCH_X86_64, "Xeon-X5460", JSON_NONE);
DO_TEST_CPUID(VIR_ARCH_X86_64, "Ice-Lake-Server", JSON_MODELS);
DO_TEST_CPUID(VIR_ARCH_X86_64, "Cooperlake", JSON_MODELS);
--
2.49.0
[RFC PATCH 0/5] qemu: Implement support for iommufd and multiple vSMMUs
by Nathan Chen
Hi,
This is a follow-up to the first RFC patchset [0] for supporting multiple
vSMMU instances in a qemu VM. This patchset also introduces support for
using iommufd to propagate DMA mappings to the kernel for assigned devices.
This patchset implements support for specifying multiple <iommu> devices
in the VM definition when the smmuv3Dev IOMMU model is specified, and is
tested against Shameer's latest qemu RFC for HW-accelerated vSMMU devices [1].
It also adds a new 'iommufd' member to virDomainIOMMUDef to represent the
iommufd object on the qemu command line, and new 'iommufdId' and
'iommufdFd' attributes that associate hostdev devices with the iommufd
object.
For instance, here is the iommufd object and its associated hostdevs in a
VM definition with multiple IOMMUs, routed to pcie-expander-bus
controllers in a way where VFIO-device-to-SMMUv3 associations are matched
with the host (pcie-expander-bus and pcie-root-port controllers are no
longer auto-added/auto-routed as in the first revision of this RFC, since
the PCIe topology will be configured by management apps):
<devices>
...
<controller type='pci' index='1' model='pcie-expander-bus'>
<model name='pxb-pcie'/>
<target busNr='252'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
</controller>
<controller type='pci' index='2' model='pcie-expander-bus'>
<model name='pxb-pcie'/>
<target busNr='248'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</controller>
...
<controller type='pci' index='21' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='21' port='0x0'/>
<address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
</controller>
<controller type='pci' index='22' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='22' port='0xa8'/>
<address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
</controller>
...
<hostdev mode='subsystem' type='pci' managed='no'>
<source>
<address domain='0x0009' bus='0x01' slot='0x00' function='0x0'/>
</source>
<iommufdId>iommufd0</iommufdId>
<address type='pci' domain='0x0000' bus='0x15' slot='0x00' function='0x0'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='no'>
<source>
<address domain='0x0019' bus='0x01' slot='0x00' function='0x0'/>
</source>
<iommufdId>iommufd0</iommufdId>
<address type='pci' domain='0x0000' bus='0x16' slot='0x00' function='0x0'/>
</hostdev>
<iommu model='smmuv3Dev'>
<iommufd>
<id>iommufd0</id>
</iommufd>
<address type='pci' domain='0x0000' bus='0x01' slot='0x01' function='0x0'/>
</iommu>
<iommu model='smmuv3Dev'>
<iommufd>
<id>iommufd0</id>
</iommufd>
<address type='pci' domain='0x0000' bus='0x02' slot='0x01' function='0x0'/>
</iommu>
</devices>
This would get translated to a qemu command line with the arguments below:
-device '{"driver":"pxb-pcie","bus_nr":252,"id":"pci.1","bus":"pcie.0","addr":"0x1"}' \
-device '{"driver":"pxb-pcie","bus_nr":248,"id":"pci.2","bus":"pcie.0","addr":"0x2"}' \
-device '{"driver":"pcie-root-port","port":0,"chassis":21,"id":"pci.21","bus":"pci.1","addr":"0x0"}' \
-device '{"driver":"pcie-root-port","port":168,"chassis":22,"id":"pci.22","bus":"pci.2","addr":"0x0"}' \
-object '{"qom-type":"iommufd","id":"iommufd0"}' \
-device '{"driver":"arm-smmuv3-accel","bus":"pci.1"}' \
-device '{"driver":"arm-smmuv3-accel","bus":"pci.2"}' \
-device '{"driver":"vfio-pci","host":"0009:01:00.0","id":"hostdev0","iommufd":"iommufd0","bus":"pci.21","addr":"0x0"}' \
-device '{"driver":"vfio-pci","host":"0019:01:00.0","id":"hostdev1","iommufd":"iommufd0","bus":"pci.22","addr":"0x0"}' \
If users would like to leverage qemu's iommufd feature to open the VFIO
cdev and /dev/iommu via an external management layer, the fd can be
specified like so in the VM definition:
<devices>
<hostdev mode='subsystem' type='pci' managed='yes'>
<driver name='vfio'/>
<source>
<address domain='0x0000' bus='0x06' slot='0x12' function='0x2'/>
</source>
<iommufdId>iommufd0</iommufdId>
<iommufdFd>23</iommufdFd>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</hostdev>
<iommu model='intel'>
<iommufd>
<id>iommufd0</id>
<fd>22</fd>
</iommufd>
</iommu>
</devices>
This would get translated to a qemu command line with the arguments below:
-object '{"qom-type":"iommufd","id":"iommufd0","fd":"22"}' \
-device '{"driver":"vfio-pci","host":"0000:06:12.2","id":"hostdev1","iommufd":"iommufd0","fd":"23","bus":"pci.0","addr":"0x3"}' \
Summary of changes:
- Introduced support for specifying multiple <iommu> stanzas in the VM
XML definition when using smmuv3Dev model.
- Automating the PCIe topology to populate the VM definition with multiple
vSMMUs routed to pcie-expander-bus controllers is excluded, in favor of
deferring creation of PXBs and routing of VFIO devices to management apps.
- Introduced iommufd support.
TODO:
- I updated the namespace and cgroup configuration to allow access to iommufd
paths at /dev/vfio/devices/vfio* and /dev/iommu. However, qemu needs to be
launched with user and group set to 'root' in order for these paths to be
accessible. A passthrough device represented by /dev/vfio/18 normally has
'root' user and group permissions, but in the mount namespace it's changed to
'libvirt-qemu' and 'kvm'. I wasn't able to discern where this is happening by
looking at src/qemu/qemu_namespace.c and src/qemu/qemu_cgroup.c. Would you have
any pointers on how to change the iommufd paths' user and group permissions in
the libvirt mount namespace?
This series is on Github:
https://github.com/NathanChenNVIDIA/libvirt/tree/smmuv3Dev-iommufd-04-15-25
Thanks,
Nathan
[0] https://lists.libvirt.org/archives/list/devel@lists.libvirt.org/thread/7G...
[1] https://lore.kernel.org/qemu-devel/20250311141045.66620-1-shameerali.kolo...
Signed-off-by: Nathan Chen <nathanc(a)nvidia.com>
Nathan Chen (5):
conf: Support multiple smmuv3Dev IOMMU devices
conf: Add an iommufd member struct to virDomainIOMMUDef
qemu: Implement support for associating iommufd to hostdev
qemu: Update Cgroup and namespace for qemu to access iommufd paths
qemu: Add test case for specifying iommufd
docs/formatdomain.rst | 5 +-
src/conf/domain_addr.c | 12 +-
src/conf/domain_addr.h | 4 +-
src/conf/domain_conf.c | 292 ++++++++++++++++--
src/conf/domain_conf.h | 21 +-
src/conf/domain_validate.c | 94 +++++-
src/conf/schemas/domaincommon.rng | 37 ++-
src/conf/virconftypes.h | 2 +
src/libvirt_private.syms | 2 +
src/qemu/qemu_alias.c | 15 +-
src/qemu/qemu_cgroup.c | 47 +++
src/qemu/qemu_cgroup.h | 1 +
src/qemu/qemu_command.c | 146 ++++++---
src/qemu/qemu_domain_address.c | 33 +-
src/qemu/qemu_driver.c | 8 +-
src/qemu/qemu_namespace.c | 36 +++
src/qemu/qemu_postparse.c | 11 +-
src/qemu/qemu_validate.c | 22 +-
...fio-iommufd-intel-iommu.x86_64-latest.args | 43 +++
...vfio-iommufd-intel-iommu.x86_64-latest.xml | 80 +++++
.../hostdev-vfio-iommufd-intel-iommu.xml | 80 +++++
tests/qemuxmlconftest.c | 1 +
22 files changed, 878 insertions(+), 114 deletions(-)
create mode 100644 tests/qemuxmlconfdata/hostdev-vfio-iommufd-intel-iommu.x86_64-latest.args
create mode 100644 tests/qemuxmlconfdata/hostdev-vfio-iommufd-intel-iommu.x86_64-latest.xml
create mode 100644 tests/qemuxmlconfdata/hostdev-vfio-iommufd-intel-iommu.xml
--
2.43.0
[PATCH 0/1] [RFC] Live migration support for ch driver
by Stefan Kober
This change serves as a proof of concept that adds live migration support to
the Cloud Hypervisor driver. It is meant to show feasibility and to receive
early feedback.
I tested the live migration by invoking:
virsh -c ch:///session migrate --domain vmName --desturi ch+ssh://dstHost/session --live
Opens:
* What is required for a minimum viable live migration to be merged?
* Job state tracking? (virDomainObjBeginJob, ...)
* What should 'virsh domjobinfo' show?
* Testing?
* Anything else?
Stefan Kober (1):
Initial CH migrate API
src/ch/ch_conf.h | 4 +
src/ch/ch_domain.h | 2 +
src/ch/ch_driver.c | 362 +++++++++++++++++++++++++++++-
src/ch/ch_monitor.c | 156 +++++++++++++
src/ch/ch_monitor.h | 8 +
src/ch/ch_process.c | 136 ++++++++++-
src/ch/ch_process.h | 6 +
src/hypervisor/domain_interface.c | 1 +
src/libvirt-domain.c | 15 +-
9 files changed, 680 insertions(+), 10 deletions(-)
--
2.49.0
[libvirt PATCH] qemu: forbid readonly attribute for externally launched virtiofsd
by Ján Tomko
From: Ján Tomko <jtomko(a)redhat.com>
In that case, libvirtd cannot set it on the command line because
virtiofsd is not launched by libvirt.
https://issues.redhat.com/browse/RHEL-87522
Signed-off-by: Ján Tomko <jtomko(a)redhat.com>
---
src/qemu/qemu_validate.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/src/qemu/qemu_validate.c b/src/qemu/qemu_validate.c
index 87588024ce..013ff14d75 100644
--- a/src/qemu/qemu_validate.c
+++ b/src/qemu/qemu_validate.c
@@ -4684,6 +4684,11 @@ qemuValidateDomainDeviceDefFS(virDomainFSDef *fs,
_("virtiofs does not support wrpolicy"));
return -1;
}
+ if (fs->readonly) {
+ virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
+ _("readonly mode cannot be set for externally started virtiofsd"));
+ return -1;
+ }
}
if (fs->model != VIR_DOMAIN_FS_MODEL_DEFAULT) {
--
2.49.0
libvirt TLS certificates letsencrypt
by Jani Heikkinen
Hello! I am part of the infrastructure team at BFH and we are building an
OpenStack installation.
A couple of months ago I was experimenting with setting up noVNC
consoles with encryption.
Connecting as far as the novncproxy service with TLS works with no
issues, but the connection between libvirt and the proxy turned out to
be problematic.
We would really love to use Let's Encrypt certificates, which are generated
for each server/container, instead of creating our own CA and generating
certificates ourselves.
With libvirt, using Let's Encrypt is impossible, since the code does a bunch
of checks on properties of the TLS certificates, see:
https://github.com/libvirt/libvirt/blob/bf79a021a6437b4f85469a53f650bff62...
Does anybody know why these checks exist? They de facto prevent using
anything other than self-generated certificates for securing VNC console
traffic.
Is using my own self-created CA somehow more trustworthy than the
Let's Encrypt root CA?
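For reference, the certificate properties this kind of validation looks at (key usage, key purpose) can be inspected with openssl. A throwaway sketch, assuming openssl >= 1.1.1; the paths and subject are made up:

```shell
# Generate a short-lived self-signed cert with typical server extensions.
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem -days 1 \
    -subj "/CN=demo.example" \
    -addext "keyUsage=digitalSignature,keyEncipherment" \
    -addext "extendedKeyUsage=serverAuth"

# Show the key-usage extensions a TLS-validating peer would examine.
openssl x509 -in /tmp/demo-cert.pem -noout -text | grep -A1 "Key Usage"
```

Comparing this output for a Let's Encrypt certificate against a certtool-generated one is a quick way to narrow down which specific check libvirt is rejecting.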
Best, Jani Heikkinen
--
Berner Fachhochschule / Bern University of Applied Sciences
IT-Services / Team Linux & Infrastructure Services
Jani Heikkinen
IT Linux Engineer
___________________________________________________________
Dammweg 3, CH-3013 Bern
Telefon direkt +41 31 848 68 14
Telefon Servicedesk +41 31 848 48 48
jani.heikkinen(a)bfh.ch
[RFC PATCH v3 0/6] RFC: Add Arm CCA support for getting capability information and running Realm VM
by Kazuhiro Abe
Hi all,
This patch series adds Arm CCA support to the QEMU driver for aarch64
systems.
CCA stands for the Arm Confidential Compute Architecture feature; it
enhances the virtualization capabilities of the platform by separating
the management of resources from access to those resources.
We are not yet at the stage where we can merge this patch, as host
Linux/QEMU support is not yet merged, but I would like to receive
reviews and comments on the overall direction.
Changes in v3:
Support two VM launch options: personalization-value and
measurement-log.
The measurement-log option was added to allow us to check whether the
data loaded into guest RAM is a good image.
The Realm Personalization Value (RPV) was added to allow us to
distinguish between realms that have the same initial measurement
results.
Both were added as new Realm startup options in Linaro QEMU after the
v2 patch was posted, which is why we made this change.
[summary]
At this stage, all you can do is get the CCA capability with the
virsh domcapabilities command and start a CCA VM with the
virsh create command.
Capability info is queried from QEMU via QMP; the only option that
exists now is for selecting a hash algorithm.
The QEMU QMP section currently contains only a single member, but it
is wrapped in sections to allow for expansion.
[Capability example]
Execution results of 'virsh domcapabilities' on QEMU:
<domaincapabilities>
...
<features>
...
</sgx>
<cca supported='yes'>
<enum name='measurement-algo'>
<value>sha256</value>
<value>sha512</value>
</enum>
</cca>
<hyperv supported='yes'>
...
</features>
</domaincapabilities>
[XML example]
<domain>
...
<launchsecurity type='cca'>
<measurement-algo>sha256</measurement-algo>
</launchsecurity>
...
</domain>
[limitations/tests]
Obtaining capability info requires a QEMU QMP command that QEMU does
not yet support; we added a QMP command to retrieve CCA info for
testing (see "[software version]" below). I also need to check
qemu_firmware.c to see whether the CPU firmware supports CCA. Since
that is not implemented yet, I'll wait until a Linux distributor
provides a JSON file for CCA.
We have confirmed that the added tests (qemucapabilitiestest,
domaincapstest and qemuxmlconftest) and the CCA VM startup test
(starting a CCA VM with the virsh create command) pass.
The "personalization-value" and "measurement-log" parameters that
exist in the current Linaro QEMU cca/latest branch cannot yet be
specified as CCA VM startup parameters with the virsh create
command.
[software version]
I followed the steps in Linaro's blog below.
https://linaro.atlassian.net/wiki/spaces/QEMU/pages/29051027459/Building+...
The QEMU used was enhanced with CCA QMP command and found at:
https://github.com/Kazuhiro-Abe-fj/linaro_qemu/tree/cca-latest-qmp
which is based on Linaro QEMU (cca/latest)
https://git.codelinaro.org/linaro/dcap/qemu/-/tree/cca/latest?ref_type=heads
RFC v1:
https://lists.libvirt.org/archives/list/devel@lists.libvirt.org/thread/V4...
RFC v2:
https://lists.libvirt.org/archives/list/devel@lists.libvirt.org/message/5...
Signed-off-by: Kazuhiro Abe <fj1078ii(a)aa.jp.fujitsu.com>
Akio Kakuno (6):
src: Add ARM CCA support in qemu driver to launch VM
src: Add ARM CCA support in domain capabilities command
src: Add ARM CCA support in domain schema
qemucapabilitiestest: Adds Arm CCA support
domaincapstest: Adds Arm CCA support
qemuxmlconftest: Adds Arm CCA support
docs/formatdomain.rst | 43 +
docs/formatdomaincaps.rst | 27 +-
src/conf/domain_capabilities.c | 48 +
src/conf/domain_capabilities.h | 12 +
src/conf/domain_conf.c | 25 +
src/conf/domain_conf.h | 9 +
src/conf/domain_validate.c | 1 +
src/conf/schemas/domaincaps.rng | 36 +
src/conf/schemas/domaincommon.rng | 26 +
src/conf/virconftypes.h | 2 +
src/libvirt_private.syms | 1 +
src/qemu/qemu_capabilities.c | 145 +
src/qemu/qemu_capabilities.h | 4 +
src/qemu/qemu_cgroup.c | 2 +
src/qemu/qemu_command.c | 29 +
src/qemu/qemu_driver.c | 2 +
src/qemu/qemu_firmware.c | 1 +
src/qemu/qemu_monitor.c | 10 +
src/qemu/qemu_monitor.h | 3 +
src/qemu/qemu_monitor_json.c | 98 +
src/qemu/qemu_monitor_json.h | 4 +
src/qemu/qemu_namespace.c | 2 +
src/qemu/qemu_process.c | 4 +
src/qemu/qemu_validate.c | 4 +
src/security/security_dac.c | 2 +
.../qemu_9.1.0-virt.aarch64.xml | 244 +
tests/domaincapsdata/qemu_9.1.0.aarch64.xml | 244 +
.../caps_9.1.0_aarch64.replies | 36222 ++++++++++++++++
.../caps_9.1.0_aarch64.xml | 530 +
.../launch-security-cca.aarch64-latest.args | 30 +
.../launch-security-cca.aarch64-latest.xml | 24 +
tests/qemuxmlconfdata/launch-security-cca.xml | 16 +
tests/qemuxmlconftest.c | 2 +
33 files changed, 37851 insertions(+), 1 deletion(-)
create mode 100644 tests/domaincapsdata/qemu_9.1.0-virt.aarch64.xml
create mode 100644 tests/domaincapsdata/qemu_9.1.0.aarch64.xml
create mode 100644 tests/qemucapabilitiesdata/caps_9.1.0_aarch64.replies
create mode 100644 tests/qemucapabilitiesdata/caps_9.1.0_aarch64.xml
create mode 100644 tests/qemuxmlconfdata/launch-security-cca.aarch64-latest.args
create mode 100644 tests/qemuxmlconfdata/launch-security-cca.aarch64-latest.xml
create mode 100644 tests/qemuxmlconfdata/launch-security-cca.xml
--
2.43.5
[PATCH 0/3] qemu: fix validation of 'fd' passing for backup job and improve debugability of passed FDs
by Peter Krempa
Peter Krempa (3):
qemuBackupPrepare: Actually allow 'VIR_STORAGE_NET_HOST_TRANS_FD'
docs: backup: Hint at proper selinux labelling of the FD-passed NBD
socket
qemu: fd: Log information about passed file descriptor
docs/formatbackup.rst | 4 +++
src/qemu/qemu_backup.c | 4 ++-
src/qemu/qemu_fd.c | 62 ++++++++++++++++++++++++++++++++++++++++++
3 files changed, 69 insertions(+), 1 deletion(-)
--
2.49.0
[PATCH v1 0/2] Disable Deprecated Features by Default on s390 CPU Models
by Collin Walling
The intention of reporting deprecated features and modifying the guest
CPU model was to relieve the user of the burden of preparing a guest
with the necessary amendments to ensure migration to newer hardware.
While that goal was met by way of the "deprecated_features='on|off'"
attribute, it still adds an extra step that the user must be aware of
when preparing a guest for migration, and the errors that stem from an
unsuccessful migration (due to feature incompatibility) are not always
clear to resolve.
These patches make s390 CPU host models migration-ready from the get-go
by disabling deprecated features by default. The features may still be
disabled for other model types via the respective attribute, or
re-enabled if desired.
Collin Walling (2):
qemu: caps: add virCPUFeaturePolicy param to
virQEMUCapsUpdateCPUDeprecatedFeatures
qemu: caps: disable deprecated features for s390 models by default
src/qemu/qemu_capabilities.c | 10 +++++++---
src/qemu/qemu_capabilities.h | 3 ++-
src/qemu/qemu_driver.c | 3 ++-
src/qemu/qemu_process.c | 19 ++++++++++++-------
tests/domaincapsdata/qemu_10.0.0.s390x.xml | 8 ++++----
tests/domaincapsdata/qemu_9.1.0.s390x.xml | 8 ++++----
tests/domaincapsdata/qemu_9.2.0.s390x.xml | 8 ++++----
...default-video-type-s390x.s390x-latest.args | 2 +-
...vfio-zpci-ccw-memballoon.s390x-latest.args | 2 +-
.../launch-security-s390-pv.s390x-latest.args | 2 +-
...t-cpu-kvm-ccw-virtio-4.2.s390x-latest.args | 2 +-
.../s390-defaultconsole.s390x-latest.args | 2 +-
.../s390-panic.s390x-latest.args | 2 +-
13 files changed, 41 insertions(+), 30 deletions(-)
--
2.47.1
[libvirt PATCH 0/2] Remove 'inline' keyword from C files
by Ján Tomko
Ján Tomko (2):
build: prohibit inline functions in C files by syntax-check
build: do not use -Winline
build-aux/syntax-check.mk | 10 ++++++++++
meson.build | 6 ------
src/hyperv/hyperv_wmi.c | 2 +-
src/locking/lock_daemon.c | 4 ++--
src/qemu/qemu_block.c | 2 +-
src/qemu/qemu_command.c | 2 +-
src/qemu/qemu_saveimage.c | 2 +-
src/rpc/virnetserver.c | 8 ++++----
src/storage/storage_util.c | 4 ++--
src/util/virhashcode.c | 4 ++--
src/util/virthreadpool.c | 2 +-
src/util/virxml.c | 2 +-
tools/nss/libvirt_nss.c | 2 +-
13 files changed, 27 insertions(+), 23 deletions(-)
--
2.49.0