[PATCH v2] cpu_x86: Do not inline cpuidCall()
by Fabio Estevam
The following build error is observed when the DEBUG_BUILD variable
is enabled in OpenEmbedded:
src/cpu/cpu_x86.c: In function 'cpuidSetLeaf4':
src/cpu/cpu_x86.c:2563:1: error: inlining failed in call to 'cpuidCall': function not considered for inlining [-Werror=inline]
2563 | cpuidCall(virCPUx86CPUID *cpuid)
| ^~~~~~~~~
Remove the 'inline' specifier to avoid the problem.
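As an illustration, the pattern boils down to a 'static inline' helper
whose call site the compiler refuses to inline once debug-oriented
flags are in effect (the log shows -Winline being treated as an error;
OpenEmbedded's DEBUG_BUILD typically lowers the optimization level,
e.g. to -Og). A minimal sketch, with hypothetical names and not taken
from libvirt:

    /* sketch.c -- built with something like: gcc -Og -Winline -Werror=inline -c sketch.c */
    struct cpuid_regs { unsigned int eax_in, eax, ebx, ecx, edx; };

    static inline void
    cpuid_call(struct cpuid_regs *regs)
    {
        /* stand-in for the real CPUID helper */
        regs->eax = regs->eax_in;
    }

    void
    cpuid_set_leaf4(struct cpuid_regs *regs)
    {
        regs->eax_in = 4;
        cpuid_call(regs); /* may fail: "function not considered for inlining" */
    }

Dropping the 'inline' specifier leaves the inlining decision entirely
to the compiler, so there is no hint left for -Winline to complain
about.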
Reported-by: Hongxu Jia <hongxu.jia(a)windriver.com>
Signed-off-by: Fabio Estevam <festevam(a)gmail.com>
Reviewed-by: Ján Tomko <jtomko(a)redhat.com>
---
Changes since v1:
- Improve the commit log by explaining where DEBUG_BUILD comes from. (Ján)
src/cpu/cpu_x86.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/cpu/cpu_x86.c b/src/cpu/cpu_x86.c
index 213af67ea478..0f7eb8f48b35 100644
--- a/src/cpu/cpu_x86.c
+++ b/src/cpu/cpu_x86.c
@@ -2564,7 +2564,7 @@ virCPUx86DataCheckFeature(const virCPUData *data,
#if defined(__i386__) || defined(__x86_64__)
-static inline void
+static void
cpuidCall(virCPUx86CPUID *cpuid)
{
virHostCPUX86GetCPUID(cpuid->eax_in,
--
2.34.1
[PATCH 0/2] spec: Bump min_rhel and min_fedora
by Michal Privoznik
Per our support policy [1], the minimal versions we aim to support are
RHEL-9 and Fedora 41. Reflect this in the spec file.
1: https://libvirt.org/platforms.html
Michal Prívozník (2):
spec: Bump min_rhel
spec: Bump min_fedora
libvirt.spec.in | 38 ++++++++++++--------------------------
1 file changed, 12 insertions(+), 26 deletions(-)
--
2.49.0
[PATCH] ch: Support RNG device
by Stefan Kober
Cloud Hypervisor supports virtio-rng devices and the configuration of
the randomness source (e.g. /dev/random or /dev/urandom).
This commit adds support for configuring the RNG device via libvirt for
the ch driver.
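For reference, the guest XML a user writes for such a device is the
standard libvirt RNG element (shown purely as a usage illustration, it
is not something this patch changes):

    <rng model='virtio'>
      <backend model='random'>/dev/urandom</backend>
    </rng>

The new virCHMonitorBuildRngJson() below turns the backend source file
into a "rng": {"src": "/dev/urandom"} entry of the VM config passed to
Cloud Hypervisor.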
Signed-off-by: Stefan Kober <stefan.kober(a)cyberus-technology.de>
---
src/ch/ch_domain.c | 8 +++++++-
src/ch/ch_monitor.c | 41 +++++++++++++++++++++++++++++++++++++++++
2 files changed, 48 insertions(+), 1 deletion(-)
diff --git a/src/ch/ch_domain.c b/src/ch/ch_domain.c
index c0c9acd85b..7231fdc49f 100644
--- a/src/ch/ch_domain.c
+++ b/src/ch/ch_domain.c
@@ -163,6 +163,7 @@ chValidateDomainDeviceDef(const virDomainDeviceDef *dev,
case VIR_DOMAIN_DEVICE_CONTROLLER:
case VIR_DOMAIN_DEVICE_CHR:
case VIR_DOMAIN_DEVICE_HOSTDEV:
+ case VIR_DOMAIN_DEVICE_RNG:
break;
case VIR_DOMAIN_DEVICE_LEASE:
@@ -177,7 +178,6 @@ chValidateDomainDeviceDef(const virDomainDeviceDef *dev,
case VIR_DOMAIN_DEVICE_SMARTCARD:
case VIR_DOMAIN_DEVICE_MEMBALLOON:
case VIR_DOMAIN_DEVICE_NVRAM:
- case VIR_DOMAIN_DEVICE_RNG:
case VIR_DOMAIN_DEVICE_SHMEM:
case VIR_DOMAIN_DEVICE_TPM:
case VIR_DOMAIN_DEVICE_PANIC:
@@ -218,6 +218,12 @@ chValidateDomainDeviceDef(const virDomainDeviceDef *dev,
return -1;
}
+ if (def->nrngs > 1) {
+ virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+ _("Only a single RNG device can be configured for this domain"));
+ return -1;
+ }
+
if (def->nserials > 1) {
virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
_("Only a single serial can be configured for this domain"));
diff --git a/src/ch/ch_monitor.c b/src/ch/ch_monitor.c
index 5a490b75f6..3d3b4cb87d 100644
--- a/src/ch/ch_monitor.c
+++ b/src/ch/ch_monitor.c
@@ -302,6 +302,44 @@ virCHMonitorBuildDisksJson(virJSONValue *content, virDomainDef *vmdef)
return 0;
}
+static int
+virCHMonitorBuildRngJson(virJSONValue *content, virDomainDef *vmdef)
+{
+ g_autoptr(virJSONValue) rng = virJSONValueNewObject();
+
+ if (vmdef->nrngs == 0) {
+ return 0;
+ }
+
+ if (vmdef->rngs[0]->model != VIR_DOMAIN_RNG_MODEL_VIRTIO) {
+ virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
+ _("Only virtio model is supported for RNG devices"));
+ return -1;
+ }
+
+ switch (vmdef->rngs[0]->backend) {
+ case VIR_DOMAIN_RNG_BACKEND_RANDOM:
+ if (virJSONValueObjectAppendString(rng, "src", vmdef->rngs[0]->source.file) < 0)
+ return -1;
+
+ if (virJSONValueObjectAppend(content, "rng", &rng) < 0)
+ return -1;
+
+ break;
+
+ case VIR_DOMAIN_RNG_BACKEND_EGD:
+ case VIR_DOMAIN_RNG_BACKEND_BUILTIN:
+ virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
+ _("Only RANDOM backend is supported for RNG devices"));
+ return -1;
+
+ case VIR_DOMAIN_RNG_BACKEND_LAST:
+ break;
+ }
+
+ return 0;
+}
+
/**
* virCHMonitorBuildNetJson:
* @net: pointer to a guest network definition
@@ -501,6 +539,9 @@ virCHMonitorBuildVMJson(virCHDriver *driver, virDomainDef *vmdef,
if (virCHMonitorBuildDisksJson(content, vmdef) < 0)
return -1;
+ if (virCHMonitorBuildRngJson(content, vmdef) < 0)
+ return -1;
+
if (virCHMonitorBuildDevicesJson(content, vmdef) < 0)
return -1;
--
2.49.0
[PATCH 0/3] domain_capabilities: add console capabilities
by Roman Bogorodskiy
The motivation behind this series is to give management software the
possibility to check whether the 'pty' console can be used, or whether
it should use something else, e.g. 'nmdm' for bhyve.
Because of the complex relationships between 'serial' and 'console',
I wasn't entirely sure whether I should report 'console' or 'serial'.
Also, I wasn't sure if I needed to report anything but 'type'.
Eventually I decided to stay close to the problem I'm trying to solve,
and report only console types.
I have updated only qemu and bhyve drivers for now, as I'm not sure if
the approach is correct. I'll update other drivers if that's ok.
Also, it was surprisingly tricky to get a list of supported console
types for qemu, as the model is heavily shared between consoles,
serials, parallel ports and channels, and sometimes it's not obvious
whether there's any difference between these devices' supported types.
Interestingly, formatdomain.html doesn't provide much information about
'console type'. The only occurrence of a non-pty console I was able to
find is type='stdio', though the test data files use a few more console
types.
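To make the intent concrete, here is a rough sketch of the kind of
output the series is aiming for in 'virsh domcapabilities'; the exact
element and value names are defined by the patches themselves, so
treat this as illustrative only:

    <console supported='yes'>
      <enum name='type'>
        <value>pty</value>
        <value>nmdm</value>
      </enum>
    </console>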
Roman Bogorodskiy (3):
domain_capabilities: add console capabilities
bhyve: capabilities: report NMDM console
qemu: capabilities: report supported console types
src/bhyve/bhyve_capabilities.c | 5 +++
src/conf/domain_capabilities.c | 12 +++++++
src/conf/domain_capabilities.h | 8 +++++
src/conf/schemas/domaincaps.rng | 10 ++++++
src/qemu/qemu_capabilities.c | 32 +++++++++++++++++++
src/qemu/qemu_capabilities.h | 3 ++
tests/domaincapsdata/bhyve_basic.x86_64.xml | 5 +++
tests/domaincapsdata/bhyve_fbuf.x86_64.xml | 5 +++
tests/domaincapsdata/bhyve_uefi.x86_64.xml | 5 +++
.../qemu_10.0.0-q35.x86_64+amdsev.xml | 18 +++++++++++
.../domaincapsdata/qemu_10.0.0-q35.x86_64.xml | 18 +++++++++++
.../qemu_10.0.0-tcg.x86_64+amdsev.xml | 18 +++++++++++
.../domaincapsdata/qemu_10.0.0-tcg.x86_64.xml | 18 +++++++++++
tests/domaincapsdata/qemu_10.0.0.s390x.xml | 15 +++++++++
.../qemu_10.0.0.x86_64+amdsev.xml | 18 +++++++++++
tests/domaincapsdata/qemu_10.0.0.x86_64.xml | 18 +++++++++++
.../domaincapsdata/qemu_6.2.0-q35.x86_64.xml | 18 +++++++++++
.../domaincapsdata/qemu_6.2.0-tcg.x86_64.xml | 18 +++++++++++
tests/domaincapsdata/qemu_6.2.0.ppc64.xml | 15 +++++++++
tests/domaincapsdata/qemu_6.2.0.x86_64.xml | 18 +++++++++++
.../domaincapsdata/qemu_7.0.0-q35.x86_64.xml | 18 +++++++++++
.../domaincapsdata/qemu_7.0.0-tcg.x86_64.xml | 18 +++++++++++
tests/domaincapsdata/qemu_7.0.0.ppc64.xml | 16 ++++++++++
tests/domaincapsdata/qemu_7.0.0.x86_64.xml | 18 +++++++++++
.../domaincapsdata/qemu_7.1.0-q35.x86_64.xml | 18 +++++++++++
.../domaincapsdata/qemu_7.1.0-tcg.x86_64.xml | 18 +++++++++++
tests/domaincapsdata/qemu_7.1.0.ppc64.xml | 16 ++++++++++
tests/domaincapsdata/qemu_7.1.0.x86_64.xml | 18 +++++++++++
.../qemu_7.2.0-hvf.x86_64+hvf.xml | 18 +++++++++++
.../domaincapsdata/qemu_7.2.0-q35.x86_64.xml | 18 +++++++++++
.../qemu_7.2.0-tcg.x86_64+hvf.xml | 18 +++++++++++
.../domaincapsdata/qemu_7.2.0-tcg.x86_64.xml | 18 +++++++++++
tests/domaincapsdata/qemu_7.2.0.ppc.xml | 18 +++++++++++
tests/domaincapsdata/qemu_7.2.0.x86_64.xml | 18 +++++++++++
.../domaincapsdata/qemu_8.0.0-q35.x86_64.xml | 18 +++++++++++
.../domaincapsdata/qemu_8.0.0-tcg.x86_64.xml | 18 +++++++++++
tests/domaincapsdata/qemu_8.0.0.x86_64.xml | 18 +++++++++++
.../domaincapsdata/qemu_8.1.0-q35.x86_64.xml | 18 +++++++++++
.../domaincapsdata/qemu_8.1.0-tcg.x86_64.xml | 18 +++++++++++
tests/domaincapsdata/qemu_8.1.0.s390x.xml | 15 +++++++++
tests/domaincapsdata/qemu_8.1.0.x86_64.xml | 18 +++++++++++
.../domaincapsdata/qemu_8.2.0-q35.x86_64.xml | 18 +++++++++++
.../qemu_8.2.0-tcg-virt.loongarch64.xml | 18 +++++++++++
.../domaincapsdata/qemu_8.2.0-tcg.x86_64.xml | 18 +++++++++++
.../qemu_8.2.0-virt.aarch64.xml | 16 ++++++++++
.../qemu_8.2.0-virt.loongarch64.xml | 18 +++++++++++
tests/domaincapsdata/qemu_8.2.0.aarch64.xml | 16 ++++++++++
tests/domaincapsdata/qemu_8.2.0.armv7l.xml | 18 +++++++++++
tests/domaincapsdata/qemu_8.2.0.s390x.xml | 15 +++++++++
tests/domaincapsdata/qemu_8.2.0.x86_64.xml | 18 +++++++++++
.../domaincapsdata/qemu_9.0.0-q35.x86_64.xml | 18 +++++++++++
.../domaincapsdata/qemu_9.0.0-tcg.x86_64.xml | 18 +++++++++++
tests/domaincapsdata/qemu_9.0.0.sparc.xml | 18 +++++++++++
tests/domaincapsdata/qemu_9.0.0.x86_64.xml | 18 +++++++++++
.../domaincapsdata/qemu_9.1.0-q35.x86_64.xml | 18 +++++++++++
.../qemu_9.1.0-tcg-virt.riscv64.xml | 18 +++++++++++
.../domaincapsdata/qemu_9.1.0-tcg.x86_64.xml | 18 +++++++++++
.../qemu_9.1.0-virt.riscv64.xml | 18 +++++++++++
tests/domaincapsdata/qemu_9.1.0.s390x.xml | 15 +++++++++
tests/domaincapsdata/qemu_9.1.0.x86_64.xml | 18 +++++++++++
.../qemu_9.2.0-hvf.aarch64+hvf.xml | 16 ++++++++++
.../qemu_9.2.0-q35.x86_64+amdsev.xml | 18 +++++++++++
.../domaincapsdata/qemu_9.2.0-q35.x86_64.xml | 18 +++++++++++
.../qemu_9.2.0-tcg.x86_64+amdsev.xml | 18 +++++++++++
.../domaincapsdata/qemu_9.2.0-tcg.x86_64.xml | 18 +++++++++++
tests/domaincapsdata/qemu_9.2.0.s390x.xml | 15 +++++++++
.../qemu_9.2.0.x86_64+amdsev.xml | 18 +++++++++++
tests/domaincapsdata/qemu_9.2.0.x86_64.xml | 18 +++++++++++
68 files changed, 1119 insertions(+)
--
2.49.0
[PATCH 00/17] qemu: Fix regression when loading internal snapshots and a few cleanups
by Peter Krempa
This series:
1) Fixes the regression in loading internal snapshots:
https://gitlab.com/libvirt/libvirt/-/issues/771
2) Fixes bugs in cleanup paths of snapshot reversion where we'd keep an
inactive transient VM definition in the domain list (Noticed when
debugging the former issue)
3) Cleans unneeded snapshot-reversion arguments out of the qemu command
line generator, following the recent removal of old code
4) Renames the argument used to revert internal snapshots to something
more obvious.
5) Cleans up some unneeded passing of the qemu driver struct
Peter Krempa (17):
qemuProcessStartWithMemoryState: Don't setup qemu for incoming
migration when reverting internal snapshot
NEWS: Mention fix for internal snapshot reversion regression
qemuSnapshotRevertActive: Remove transient domain on failure
qemuSnapshotRevertInactive: Ensure all error paths handle transient
domains properly
qemuBuildCommandLine: Drop 'snapshot' argument
qemuProcessLaunch: Rename 'snapshot' to 'internalSnapshotRevert'
qemuProcessStart: Rename 'snapshot' to 'internalSnapshotRevert'
qemuProcessStartWithMemoryState: Rename 'snapshot' to
'internalSnapshotRevert'
qemuExtDevicesCleanupHost: Use 'virQEMUDriverConfig' instead of
'virQEMUDriver'
qemuCheckpointDiscardAllMetadata: Remove 'driver' argument
qemuSnapshotDiscardAllMetadata: Remove 'driver' argument
qemuDomainRemoveInactiveCommon: Remove 'driver' argument
qemuProcessStop: Drop 'driver' argument
qemuDomainRemoveInactiveLocked: Remove 'driver' argument
qemuProcessReconnect: Modernize local variable setup
qemuProcessReconnectData: Drop 'driver' struct and clean up
qemuDomainRemoveInactive: Remove 'driver' argument
NEWS.rst | 10 ++++
src/qemu/qemu_checkpoint.c | 5 +-
src/qemu/qemu_checkpoint.h | 3 +-
src/qemu/qemu_command.c | 5 +-
src/qemu/qemu_command.h | 1 -
src/qemu/qemu_domain.c | 25 +++++-----
src/qemu/qemu_domain.h | 6 +--
src/qemu/qemu_driver.c | 46 ++++++++----------
src/qemu/qemu_extdevice.c | 18 +++----
src/qemu/qemu_extdevice.h | 4 +-
src/qemu/qemu_migration.c | 20 ++++----
src/qemu/qemu_process.c | 98 +++++++++++++++++++-------------------
src/qemu/qemu_process.h | 7 ++-
src/qemu/qemu_saveimage.c | 2 +-
src/qemu/qemu_snapshot.c | 75 ++++++++++++++---------------
src/qemu/qemu_snapshot.h | 3 +-
src/qemu/qemu_tpm.c | 14 ++----
src/qemu/qemu_tpm.h | 4 +-
18 files changed, 167 insertions(+), 179 deletions(-)
--
2.49.0
[PATCH] ci: refresh with 'lcitool manifest'
by Michal Privoznik
From: Michal Privoznik <mprivozn(a)redhat.com>
- Add Fedora 42
- Remove EOL Fedora 40
- Switch mingw from Fedora 41 to Fedora 42
Signed-off-by: Michal Privoznik <mprivozn(a)redhat.com>
---
Green pipeline:
https://gitlab.com/MichalPrivoznik/libvirt/-/pipelines/1819375318
It also contains spec file changes that I'll be sending shortly. I have
them all in one branch.
...-mingw32.sh => fedora-42-cross-mingw32.sh} | 0
...-mingw64.sh => fedora-42-cross-mingw64.sh} | 0
ci/buildenv/{fedora-40.sh => fedora-42.sh} | 3 +-
...ile => fedora-42-cross-mingw32.Dockerfile} | 2 +-
...ile => fedora-42-cross-mingw64.Dockerfile} | 2 +-
...ora-40.Dockerfile => fedora-42.Dockerfile} | 5 +-
ci/gitlab/builds.yml | 46 +++++++++----------
ci/gitlab/containers.yml | 22 ++++-----
ci/manifest.yml | 16 +++----
9 files changed, 49 insertions(+), 47 deletions(-)
rename ci/buildenv/{fedora-41-cross-mingw32.sh => fedora-42-cross-mingw32.sh} (100%)
rename ci/buildenv/{fedora-41-cross-mingw64.sh => fedora-42-cross-mingw64.sh} (100%)
rename ci/buildenv/{fedora-40.sh => fedora-42.sh} (97%)
rename ci/containers/{fedora-41-cross-mingw32.Dockerfile => fedora-42-cross-mingw32.Dockerfile} (98%)
rename ci/containers/{fedora-41-cross-mingw64.Dockerfile => fedora-42-cross-mingw64.Dockerfile} (98%)
rename ci/containers/{fedora-40.Dockerfile => fedora-42.Dockerfile} (96%)
diff --git a/ci/buildenv/fedora-41-cross-mingw32.sh b/ci/buildenv/fedora-42-cross-mingw32.sh
similarity index 100%
rename from ci/buildenv/fedora-41-cross-mingw32.sh
rename to ci/buildenv/fedora-42-cross-mingw32.sh
diff --git a/ci/buildenv/fedora-41-cross-mingw64.sh b/ci/buildenv/fedora-42-cross-mingw64.sh
similarity index 100%
rename from ci/buildenv/fedora-41-cross-mingw64.sh
rename to ci/buildenv/fedora-42-cross-mingw64.sh
diff --git a/ci/buildenv/fedora-40.sh b/ci/buildenv/fedora-42.sh
similarity index 97%
rename from ci/buildenv/fedora-40.sh
rename to ci/buildenv/fedora-42.sh
index e45ac2230f..c32a689fbd 100644
--- a/ci/buildenv/fedora-40.sh
+++ b/ci/buildenv/fedora-42.sh
@@ -9,7 +9,7 @@ function install_buildenv() {
dnf install -y \
audit-libs-devel \
augeas \
- bash-completion \
+ bash-completion-devel \
ca-certificates \
ccache \
clang \
@@ -82,6 +82,7 @@ function install_buildenv() {
systemd-devel \
systemd-rpm-macros \
systemtap-sdt-devel \
+ systemtap-sdt-dtrace \
wireshark-devel \
xen-devel
rm -f /usr/lib*/python3*/EXTERNALLY-MANAGED
diff --git a/ci/containers/fedora-41-cross-mingw32.Dockerfile b/ci/containers/fedora-42-cross-mingw32.Dockerfile
similarity index 98%
rename from ci/containers/fedora-41-cross-mingw32.Dockerfile
rename to ci/containers/fedora-42-cross-mingw32.Dockerfile
index 6ab14be6fc..5689128ed9 100644
--- a/ci/containers/fedora-41-cross-mingw32.Dockerfile
+++ b/ci/containers/fedora-42-cross-mingw32.Dockerfile
@@ -4,7 +4,7 @@
#
# https://gitlab.com/libvirt/libvirt-ci
-FROM registry.fedoraproject.org/fedora:41
+FROM registry.fedoraproject.org/fedora:42
RUN dnf install -y nosync && \
printf '#!/bin/sh\n\
diff --git a/ci/containers/fedora-41-cross-mingw64.Dockerfile b/ci/containers/fedora-42-cross-mingw64.Dockerfile
similarity index 98%
rename from ci/containers/fedora-41-cross-mingw64.Dockerfile
rename to ci/containers/fedora-42-cross-mingw64.Dockerfile
index a0ec65d74a..dff7d84fa3 100644
--- a/ci/containers/fedora-41-cross-mingw64.Dockerfile
+++ b/ci/containers/fedora-42-cross-mingw64.Dockerfile
@@ -4,7 +4,7 @@
#
# https://gitlab.com/libvirt/libvirt-ci
-FROM registry.fedoraproject.org/fedora:41
+FROM registry.fedoraproject.org/fedora:42
RUN dnf install -y nosync && \
printf '#!/bin/sh\n\
diff --git a/ci/containers/fedora-40.Dockerfile b/ci/containers/fedora-42.Dockerfile
similarity index 96%
rename from ci/containers/fedora-40.Dockerfile
rename to ci/containers/fedora-42.Dockerfile
index b82a975bdb..a59c7beb41 100644
--- a/ci/containers/fedora-40.Dockerfile
+++ b/ci/containers/fedora-42.Dockerfile
@@ -4,7 +4,7 @@
#
# https://gitlab.com/libvirt/libvirt-ci
-FROM registry.fedoraproject.org/fedora:40
+FROM registry.fedoraproject.org/fedora:42
RUN dnf install -y nosync && \
printf '#!/bin/sh\n\
@@ -20,7 +20,7 @@ exec "$@"\n' > /usr/bin/nosync && \
nosync dnf install -y \
audit-libs-devel \
augeas \
- bash-completion \
+ bash-completion-devel \
ca-certificates \
ccache \
clang \
@@ -93,6 +93,7 @@ exec "$@"\n' > /usr/bin/nosync && \
systemd-devel \
systemd-rpm-macros \
systemtap-sdt-devel \
+ systemtap-sdt-dtrace \
wireshark-devel \
xen-devel && \
nosync dnf autoremove -y && \
diff --git a/ci/gitlab/builds.yml b/ci/gitlab/builds.yml
index 893843cb64..5fab2008d8 100644
--- a/ci/gitlab/builds.yml
+++ b/ci/gitlab/builds.yml
@@ -103,21 +103,6 @@ x86_64-debian-sid:
TARGET_BASE_IMAGE: docker.io/library/debian:sid-slim
-x86_64-fedora-40:
- extends: .native_build_job
- needs:
- - job: x86_64-fedora-40-container
- optional: true
- allow_failure: false
- variables:
- NAME: fedora-40
- TARGET_BASE_IMAGE: registry.fedoraproject.org/fedora:40
- artifacts:
- expire_in: 1 day
- paths:
- - libvirt-rpms
-
-
x86_64-fedora-41:
extends: .native_build_job
needs:
@@ -133,6 +118,21 @@ x86_64-fedora-41:
- libvirt-rpms
+x86_64-fedora-42:
+ extends: .native_build_job
+ needs:
+ - job: x86_64-fedora-42-container
+ optional: true
+ allow_failure: false
+ variables:
+ NAME: fedora-42
+ TARGET_BASE_IMAGE: registry.fedoraproject.org/fedora:42
+ artifacts:
+ expire_in: 1 day
+ paths:
+ - libvirt-rpms
+
+
x86_64-fedora-rawhide:
extends: .native_build_job
needs:
@@ -416,29 +416,29 @@ s390x-debian-sid:
TARGET_BASE_IMAGE: docker.io/library/debian:sid-slim
-mingw32-fedora-41:
+mingw32-fedora-42:
extends: .cross_build_job
needs:
- - job: mingw32-fedora-41-container
+ - job: mingw32-fedora-42-container
optional: true
allow_failure: false
variables:
CROSS: mingw32
JOB_OPTIONAL: 1
- NAME: fedora-41
- TARGET_BASE_IMAGE: registry.fedoraproject.org/fedora:41
+ NAME: fedora-42
+ TARGET_BASE_IMAGE: registry.fedoraproject.org/fedora:42
-mingw64-fedora-41:
+mingw64-fedora-42:
extends: .cross_build_job
needs:
- - job: mingw64-fedora-41-container
+ - job: mingw64-fedora-42-container
optional: true
allow_failure: false
variables:
CROSS: mingw64
- NAME: fedora-41
- TARGET_BASE_IMAGE: registry.fedoraproject.org/fedora:41
+ NAME: fedora-42
+ TARGET_BASE_IMAGE: registry.fedoraproject.org/fedora:42
mingw32-fedora-rawhide:
diff --git a/ci/gitlab/containers.yml b/ci/gitlab/containers.yml
index f88a39a1f8..05809fbdeb 100644
--- a/ci/gitlab/containers.yml
+++ b/ci/gitlab/containers.yml
@@ -49,13 +49,6 @@ x86_64-debian-sid-container:
NAME: debian-sid
-x86_64-fedora-40-container:
- extends: .container_job
- allow_failure: false
- variables:
- NAME: fedora-40
-
-
x86_64-fedora-41-container:
extends: .container_job
allow_failure: false
@@ -63,6 +56,13 @@ x86_64-fedora-41-container:
NAME: fedora-41
+x86_64-fedora-42-container:
+ extends: .container_job
+ allow_failure: false
+ variables:
+ NAME: fedora-42
+
+
x86_64-fedora-rawhide-container:
extends: .container_job
allow_failure: true
@@ -220,19 +220,19 @@ s390x-debian-sid-container:
NAME: debian-sid-cross-s390x
-mingw32-fedora-41-container:
+mingw32-fedora-42-container:
extends: .container_job
allow_failure: false
variables:
JOB_OPTIONAL: 1
- NAME: fedora-41-cross-mingw32
+ NAME: fedora-42-cross-mingw32
-mingw64-fedora-41-container:
+mingw64-fedora-42-container:
extends: .container_job
allow_failure: false
variables:
- NAME: fedora-41-cross-mingw64
+ NAME: fedora-42-cross-mingw64
mingw32-fedora-rawhide-container:
diff --git a/ci/manifest.yml b/ci/manifest.yml
index 3b06f4827e..14bfef25d2 100644
--- a/ci/manifest.yml
+++ b/ci/manifest.yml
@@ -104,14 +104,6 @@ targets:
containers: false
builds: false
- fedora-40:
- jobs:
- - arch: x86_64
- artifacts:
- expire_in: 1 day
- paths:
- - libvirt-rpms
-
fedora-41:
jobs:
- arch: x86_64
@@ -120,6 +112,14 @@ targets:
paths:
- libvirt-rpms
+ fedora-42:
+ jobs:
+ - arch: x86_64
+ artifacts:
+ expire_in: 1 day
+ paths:
+ - libvirt-rpms
+
- arch: mingw32
builds: false
--
2.49.0
[PATCH v3 0/6] qemu: acpi-generic-initiator support
by Andrea Righi
= Overview =
This patch set introduces support for acpi-generic-initiator devices,
supported by QEMU [1].
The acpi-generic-initiator object is required to support Multi-Instance GPU
(MIG) configurations on NVIDIA GPUs [2]. MIG enables partitioning of GPU
resources into multiple isolated instances, each requiring a dedicated NUMA
node definition.
= Implementation =
This patch set implements the libvirt counterpart to the QEMU feature,
enabling users to configure acpi-generic-initiator objects within libvirt
domain XML.
This includes:
- adding XML syntax to define acpi-generic-initiator objects,
- resolving the acpi-generic-initiator definitions into the proper QEMU
command-line arguments,
- ensuring compatibility with existing NUMA configuration.
= Example =
- Domain XML:
```
  ...
  <cpu mode='host-passthrough' check='none'>
    <numa>
      <cell id='0' cpus='0-15' memory='8388608' unit='KiB'/>
      <cell id='1' memory='0' unit='KiB'/>
      <cell id='2' memory='0' unit='KiB'/>
      <cell id='3' memory='0' unit='KiB'/>
      <cell id='4' memory='0' unit='KiB'/>
      <cell id='5' memory='0' unit='KiB'/>
      <cell id='6' memory='0' unit='KiB'/>
      <cell id='7' memory='0' unit='KiB'/>
      <cell id='8' memory='0' unit='KiB'/>
    </numa>
  </cpu>
  ...
  <devices>
    ...
    <hostdev mode='subsystem' type='pci' managed='no'>
      <source>
        <address domain='0x0009' bus='0x01' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </hostdev>
    <acpi-generic-initiator>
      <alias name="gi1"/>
      <pci-dev>hostdev0</pci-dev>
      <numa-node>1</numa-node>
    </acpi-generic-initiator>
    <acpi-generic-initiator>
      <alias name="gi2"/>
      <pci-dev>hostdev0</pci-dev>
      <numa-node>2</numa-node>
    </acpi-generic-initiator>
    <acpi-generic-initiator>
      <alias name="gi3"/>
      <pci-dev>hostdev0</pci-dev>
      <numa-node>3</numa-node>
    </acpi-generic-initiator>
    <acpi-generic-initiator>
      <alias name="gi4"/>
      <pci-dev>hostdev0</pci-dev>
      <numa-node>4</numa-node>
    </acpi-generic-initiator>
    <acpi-generic-initiator>
      <alias name="gi5"/>
      <pci-dev>hostdev0</pci-dev>
      <numa-node>5</numa-node>
    </acpi-generic-initiator>
    <acpi-generic-initiator>
      <alias name="gi6"/>
      <pci-dev>hostdev0</pci-dev>
      <numa-node>6</numa-node>
    </acpi-generic-initiator>
    <acpi-generic-initiator>
      <alias name="gi7"/>
      <pci-dev>hostdev0</pci-dev>
      <numa-node>7</numa-node>
    </acpi-generic-initiator>
    <acpi-generic-initiator>
      <alias name="gi8"/>
      <pci-dev>hostdev0</pci-dev>
      <numa-node>8</numa-node>
    </acpi-generic-initiator>
  </devices>
```
- Generated QEMU command line options:
```
... /usr/bin/qemu-system-aarch64 \
...
-object '{"qom-type":"memory-backend-ram","id":"ram-node0","size":8589934592}' \
-numa node,nodeid=0,cpus=0-15,memdev=ram-node0 \
-numa node,nodeid=1 \
-numa node,nodeid=2 \
-numa node,nodeid=3 \
-numa node,nodeid=4 \
-numa node,nodeid=5 \
-numa node,nodeid=6 \
-numa node,nodeid=7 \
-numa node,nodeid=8 \
...
-device '{"driver":"vfio-pci","host":"0009:01:00.0","id":"hostdev0","bus":"pci.3","addr":"0x0"}'
...
-object acpi-generic-initiator,id=gi1,pci-dev=hostdev0,node=1 \
-object acpi-generic-initiator,id=gi2,pci-dev=hostdev0,node=2 \
-object acpi-generic-initiator,id=gi3,pci-dev=hostdev0,node=3 \
-object acpi-generic-initiator,id=gi4,pci-dev=hostdev0,node=4 \
-object acpi-generic-initiator,id=gi5,pci-dev=hostdev0,node=5 \
-object acpi-generic-initiator,id=gi6,pci-dev=hostdev0,node=6 \
-object acpi-generic-initiator,id=gi7,pci-dev=hostdev0,node=7 \
-object acpi-generic-initiator,id=gi8,pci-dev=hostdev0,node=8
```
= References =
[1] https://lore.kernel.org/all/20231225045603.7654-2-ankita@nvidia.com/
[2] https://www.nvidia.com/en-in/technologies/multi-instance-gpu/
ChangeLog v2 -> v3:
- replaced <text/> with proper types in the XML schema
- avoid mixing g_free() and VIR_FREE()
- use virXMLPropString() instead of looping all XML nodes
- report proper errors with virReportError()
- use virBufferEscapeString() to process strings passed by the user
- fix broken formatting of function headers
- misc coding style fixes
ChangeLog v1 -> v2:
- split parser and driver changes in separate patches
- introduce a new qemu capability flag
- introduce test in qemuxmlconftest
Andrea Righi (6):
schema: Introduce acpi-generic-initiator definition
conf: Introduce acpi-generic-initiator device
qemu: Allow to define NUMA nodes without memory or CPUs assigned
qemu: capabilities: Introduce QEMU_CAPS_ACPI_GENERIC_INITIATOR
qemu: support acpi-generic-initiator
qemu: Add test case for acpi-generic-initiator
src/ch/ch_domain.c | 1 +
src/conf/domain_conf.c | 159 +++++++++++++++++++++
src/conf/domain_conf.h | 14 ++
src/conf/domain_postparse.c | 1 +
src/conf/domain_validate.c | 37 +++++
src/conf/numa_conf.c | 3 +
src/conf/schemas/domaincommon.rng | 19 +++
src/conf/virconftypes.h | 2 +
src/libxl/libxl_driver.c | 6 +
src/lxc/lxc_driver.c | 6 +
src/qemu/qemu_capabilities.c | 2 +
src/qemu/qemu_capabilities.h | 1 +
src/qemu/qemu_command.c | 49 ++++++-
src/qemu/qemu_domain.c | 2 +
src/qemu/qemu_domain_address.c | 4 +
src/qemu/qemu_driver.c | 3 +
src/qemu/qemu_hotplug.c | 5 +
src/qemu/qemu_postparse.c | 1 +
src/qemu/qemu_validate.c | 1 +
src/test/test_driver.c | 4 +
.../caps_10.0.0_x86_64+amdsev.xml | 1 +
tests/qemucapabilitiesdata/caps_10.0.0_x86_64.xml | 1 +
tests/qemucapabilitiesdata/caps_9.0.0_x86_64.xml | 1 +
tests/qemucapabilitiesdata/caps_9.1.0_riscv64.xml | 1 +
tests/qemucapabilitiesdata/caps_9.1.0_x86_64.xml | 1 +
.../caps_9.2.0_aarch64+hvf.xml | 1 +
.../caps_9.2.0_x86_64+amdsev.xml | 1 +
tests/qemucapabilitiesdata/caps_9.2.0_x86_64.xml | 1 +
.../acpi-generic-initiator.x86_64-latest.args | 55 +++++++
.../acpi-generic-initiator.x86_64-latest.xml | 102 +++++++++++++
tests/qemuxmlconfdata/acpi-generic-initiator.xml | 102 +++++++++++++
tests/qemuxmlconftest.c | 1 +
32 files changed, 581 insertions(+), 7 deletions(-)
create mode 100644 tests/qemuxmlconfdata/acpi-generic-initiator.x86_64-latest.args
create mode 100644 tests/qemuxmlconfdata/acpi-generic-initiator.x86_64-latest.xml
create mode 100644 tests/qemuxmlconfdata/acpi-generic-initiator.xml
[PATCH v2 0/3] Add virt machine support for configuring PCI high memory MMIO size
by Matthew R. Ochs
This patch series adds support for configuring the PCI high memory MMIO
window size for aarch64 virt machine types using the highmem-mmio-size
feature introduced in QEMU v10.0.0 [1]. It allows users to configure the
size of the high memory MMIO window above 4GB, which can be required to
support PCI passthrough with devices that have large BARs.
The feature is exposed through the existing pcihole64 element associated
with the PCIe root controller:
<controller type='pci' index='0' model='pcie-root'>
<pcihole64 unit='GiB'>512</pcihole64>
</controller>
This existing schema already carries the same semantics as the QEMU PC
machine's pci-hole64-size parameter and is a natural fit for supporting
the highmem-mmio-size feature on the aarch64 virt machine.
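Assuming the mapping mirrors the example above, the end result would be
an extra property on the generated -machine line, roughly (sketch only,
not copied from the patches):

    -machine virt,...,highmem-mmio-size=512G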
This series is applied over master and depends on the recently merged
patch [2] that added support for QEMU v10.0.0 aarch64 capabilities.
For your convenience, this series is also available on Github [3].
[1] https://github.com/qemu/qemu/commit/f10104aeae3a17f181d5bb37b7fd7dad7fe86cba
[2] https://github.com/nvmochs/libvirt/commit/cea2ee1d28780808172911e5c586478...
[3] git fetch https://github.com/nvmochs/libvirt.git pci_highmem_mmio_size_pcihole64
Signed-off-by: Matthew R. Ochs <mochs(a)nvidia.com>
Changelog:
v2
- Use existing XML schema (pcihole64 element) instead of a new one
Matthew R. Ochs (3):
qemu: Add capability for PCI high memory MMIO size
qemu: Add command line support for PCI high memory MMIO size
tests: Add pcihole64 test for virt machine
src/qemu/qemu_capabilities.c | 2 ++
src/qemu/qemu_capabilities.h | 1 +
src/qemu/qemu_command.c | 14 ++++++++-
src/qemu/qemu_validate.c | 8 +++--
.../caps_10.0.0_aarch64.xml | 1 +
.../pcihole64-virt.aarch64-latest.args | 31 +++++++++++++++++++
.../pcihole64-virt.aarch64-latest.xml | 29 +++++++++++++++++
tests/qemuxmlconfdata/pcihole64-virt.xml | 17 ++++++++++
tests/qemuxmlconftest.c | 1 +
9 files changed, 101 insertions(+), 3 deletions(-)
create mode 100644 tests/qemuxmlconfdata/pcihole64-virt.aarch64-latest.args
create mode 100644 tests/qemuxmlconfdata/pcihole64-virt.aarch64-latest.xml
create mode 100644 tests/qemuxmlconfdata/pcihole64-virt.xml
--
2.46.0
[PATCH 0/7] Fix return type of some functions
by Michal Privoznik
Inspired by Peter's patch:
https://lists.libvirt.org/archives/list/devel@lists.libvirt.org/thread/TR...
(since it wasn't merged yet, one hunk in patch 7/7 is the same as
Peter's; I'll update it after Peter merges his patches)
There are some functions which are declared to return an int, but in
fact return a boolean:
int foo(...)
{
    ...
    return true;
}
Worse, some mix ints and bools. With the help of coccinelle I was able
to identify some offenders and either redeclare them to return a bool,
or fix those (misleading) return statements.
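As a hedged illustration with made-up function names, the two kinds of
fixes look roughly like this:

    #include <stdbool.h>

    /* Offender: declared to return an int, yet it returns a truth value */
    static int
    thing_is_ready_old(void)
    {
        return true;
    }

    /* Fix 1: redeclare the function to return a bool */
    static bool
    thing_is_ready(void)
    {
        return true;
    }

    /* Fix 2: keep the int contract, but return 0 on success consistently */
    static int
    thing_check_ready(void)
    {
        return 0;
    }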
Michal Prívozník (7):
storage_backend_rbd.c: Make virStorageBackendRBDSetAllocation() stub
report an error
nwfilter: Fix return type of virNWFilterCanApplyBasicRules callback
qemu_process: Fix return type of
qemuDomainHasHotpluggableStartupVcpus()
storage_backend_rbd.c: Fix return type of a
volStorageBackendRBDUseFastDiff() stub
virnetdevvlan: Fix return type of virNetDevVlanEqual()
virsh-pool.c: Fix return type of virshBuildPoolXML()
src: Fix retval of some functions declared to return an int
src/ch/ch_hostdev.c | 8 ++++----
src/nwfilter/nwfilter_ebiptables_driver.c | 2 +-
src/nwfilter/nwfilter_tech_driver.h | 2 +-
src/qemu/qemu_domain.c | 8 ++++----
src/qemu/qemu_process.c | 2 +-
src/storage/storage_backend_rbd.c | 5 +++--
src/util/vircommand.c | 2 +-
src/util/virnetdevvlan.c | 2 +-
src/util/virnetdevvlan.h | 2 +-
tools/virsh-pool.c | 2 +-
10 files changed, 18 insertions(+), 17 deletions(-)
--
2.49.0