[PATCH] Extend libvirt-guests to shutdown only persistent VMs
by Benjamin Taubmann
At the moment, there is no configuration option for the libvirt-guests
service that allows users to specify that only persistent virtual machines
should be shut down on host shutdown.
Currently, the service config allows choosing between two ON_SHUTDOWN
actions that are executed on running virtual machines when the host goes
down: shutdown and suspend.
The ON_SHUTDOWN action should be orthogonal to the type of the virtual
machine. However, the existing implementation does not suspend
transient virtual machines.
This is the matrix of actions executed on a virtual machine, based on
the configured ON_SHUTDOWN action and the type of the machine:
         | persistent | transient
shutdown | shutdown   | shutdown (what we want to change)
suspend  | suspend    | nothing
Add a config option PERSISTENT_ONLY to the libvirt-guests config that
allows users to define whether the ON_SHUTDOWN action should be applied
only to persistent virtual machines. PERSISTENT_ONLY can be set to true,
false, or default. The default value preserves the already existing
logic; see the case tables and the configuration example below.
Case 1: PERSISTENT_ONLY=default
         | persistent | transient
shutdown | shutdown   | shutdown
suspend  | suspend    | nothing
Case 2: PERSISTENT_ONLY=true
         | persistent | transient
shutdown | shutdown   | nothing
suspend  | suspend    | nothing
Case 3: PERSISTENT_ONLY=false
         | persistent | transient
shutdown | shutdown   | shutdown
suspend  | suspend    | suspend
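For illustration, a minimal sketch of the relevant settings in the
libvirt-guests config file (the file is sourced as shell by
libvirt-guests.sh; its exact path, e.g. /etc/sysconfig/libvirt-guests,
depends on the distribution):

  # Shut down persistent guests when the host goes down, but apply
  # no action to transient guests (Case 2 above):
  ON_SHUTDOWN=shutdown
  PERSISTENT_ONLY=true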
Change-Id: Ib03013d00b3ec60716287dad4743a038cf000763
---
tools/libvirt-guests.sh.in | 37 ++++++++++++++++++++++++++++++-------
1 file changed, 30 insertions(+), 7 deletions(-)
diff --git a/tools/libvirt-guests.sh.in b/tools/libvirt-guests.sh.in
index 344b54390a..c3c5954e17 100644
--- a/tools/libvirt-guests.sh.in
+++ b/tools/libvirt-guests.sh.in
@@ -38,6 +38,7 @@ PARALLEL_SHUTDOWN=0
START_DELAY=0
BYPASS_CACHE=0
SYNC_TIME=0
+PERSISTENT_ONLY="default"
test -f "$initconfdir"/libvirt-guests &&
. "$initconfdir"/libvirt-guests
@@ -438,14 +439,16 @@ shutdown_guests_parallel()
# stop
# Shutdown or save guests on the configured uris
stop() {
- local suspending="true"
local uri=
+ local action="suspend"
+ local persistent_only="default"
+
# last stop was not followed by start
[ -f "$LISTFILE" ] && return 0
if [ "x$ON_SHUTDOWN" = xshutdown ]; then
- suspending="false"
+ action="shutdown"
if [ $SHUTDOWN_TIMEOUT -lt 0 ]; then
gettext "SHUTDOWN_TIMEOUT must be equal or greater than 0"
echo
@@ -454,6 +457,22 @@ stop() {
fi
fi
+ case "x$PERSISTENT_ONLY" in
+ xtrue)
+ persistent_only="true"
+ ;;
+ xfalse)
+ persistent_only="false"
+ ;;
+ *)
+ if [ "x$action" = xshutdown ]; then
+ persistent_only="false"
+ elif [ "x$action" = xsuspend ]; then
+ persistent_only="true"
+ fi
+ ;;
+ esac
+
: >"$LISTFILE"
set -f
for uri in $URIS; do
@@ -478,7 +497,7 @@ stop() {
echo
fi
- if "$suspending"; then
+ if "$persistent_only"; then
local transient="$(list_guests "$uri" "--transient")"
if [ $? -eq 0 ]; then
local empty="true"
@@ -486,7 +505,11 @@ stop() {
for uuid in $transient; do
if "$empty"; then
- eval_gettext "Not suspending transient guests on URI: \$uri: "
+ if [ "x$action" = xsuspend ]; then
+ eval_gettext "Not suspending transient guests on URI: \$uri: "
+ else
+ eval_gettext "Not shutting down transient guests on URI: \$uri: "
+ fi
empty="false"
else
printf ", "
@@ -520,19 +543,19 @@ stop() {
if [ -s "$LISTFILE" ]; then
while read uri list; do
- if "$suspending"; then
+ if [ "x$action" = xsuspend ]; then
eval_gettext "Suspending guests on \$uri URI..."; echo
else
eval_gettext "Shutting down guests on \$uri URI..."; echo
fi
if [ "$PARALLEL_SHUTDOWN" -gt 1 ] &&
- ! "$suspending"; then
+ [ "x$action" = xshutdown ]; then
shutdown_guests_parallel "$uri" "$list"
else
local guest=
for guest in $list; do
- if "$suspending"; then
+ if [ "x$action" = xsuspend ]; then
suspend_guest "$uri" "$guest"
else
shutdown_guest "$uri" "$guest"
--
2.39.2
1 month
[PATCH v3 00/12] Improve versioned CPU support in libvirt
by Jonathon Jongsma
For SEV-SNP support we will need to be able to specify versioned CPU models
that are not yet available in libvirt. Rather than just adding a versioned CPU
or two that would satisfy that immediate need, I decided to try to add
versioned CPUs in a more standard way. This series generates CPU definitions
for all cpu versions that are defined in upstream qemu (at least for
recent Intel and AMD CPUs).
libvirt already provides a select subset of these versions as configurable CPU
models. But we only include the ones that have defined aliases in qemu, such as
EPYC-IBPB. After this patchset, all versioned CPU models supported by qemu
will be available in libvirt.
In addition to adding these new versioned models, based on feedback from Daniel
Berrange, I've also translated all CPU model aliases to a specific version when
specifying a CPU model to qemu. This means that we will no longer specify e.g.
'-cpu EPYC' to qemu, but will rather specify '-cpu EPYC-v1'.
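As a quick illustration (a sketch; the resulting list depends on the
qemu and libvirt builds involved), the new versioned models should show
up next to the existing aliased names when listing the CPU models
libvirt knows about:

  # After this series, versioned variants such as EPYC-v3 or
  # Cascadelake-Server-v5 are expected to appear here as well.
  virsh cpu-models x86_64 | grep -E 'EPYC|Cascadelake'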
Changes in v3:
- handle unversioned aliases
Changes in v2:
- don't make any changes to existing CPU models
- drop concept of aliases from libvirt and only provide new versioned models
that aren't already available via their qemu alias.
Jonathon Jongsma (12):
cpu_map: update script to handle versioned CPUs
cpu_map: add canonical names to existing CPU models
cpu: parse the canonical name from the cpu model
qemu: use canonical name for CPU models
cpu_map: Add versioned EPYC CPUs
cpu_map: Add versioned Intel Skylake CPUs
cpu_map: Add versioned Intel Cascadelake CPUs
cpu_map: Add versioned Intel Icelake CPUs
cpu_map: Add versioned Intel Cooperlake CPUs
cpu_map: Add versioned Intel Snowridge CPUs
cpu_map: Add versioned Intel SapphireRapids CPUs
cpu_map: Add versioned Dhyana CPUs
src/conf/cpu_conf.c | 3 +
src/conf/cpu_conf.h | 1 +
src/cpu/cpu_x86.c | 24 ++++
src/cpu_map/index.xml | 22 +++
src/cpu_map/meson.build | 22 +++
src/cpu_map/sync_qemu_models_i386.py | 42 ++++--
src/cpu_map/x86_Broadwell-IBRS.xml | 1 +
src/cpu_map/x86_Broadwell-noTSX-IBRS.xml | 1 +
src/cpu_map/x86_Broadwell-noTSX.xml | 1 +
src/cpu_map/x86_Broadwell.xml | 1 +
src/cpu_map/x86_Cascadelake-Server-noTSX.xml | 1 +
src/cpu_map/x86_Cascadelake-Server-v2.xml | 93 +++++++++++++
src/cpu_map/x86_Cascadelake-Server-v4.xml | 91 +++++++++++++
src/cpu_map/x86_Cascadelake-Server-v5.xml | 92 +++++++++++++
src/cpu_map/x86_Cascadelake-Server.xml | 1 +
src/cpu_map/x86_Cooperlake-v2.xml | 98 ++++++++++++++
src/cpu_map/x86_Cooperlake.xml | 1 +
src/cpu_map/x86_Dhyana-v2.xml | 81 ++++++++++++
src/cpu_map/x86_Dhyana.xml | 1 +
src/cpu_map/x86_EPYC-IBPB.xml | 1 +
src/cpu_map/x86_EPYC-Milan-v2.xml | 108 +++++++++++++++
src/cpu_map/x86_EPYC-Milan.xml | 1 +
src/cpu_map/x86_EPYC-Rome-v2.xml | 93 +++++++++++++
src/cpu_map/x86_EPYC-Rome-v3.xml | 95 +++++++++++++
src/cpu_map/x86_EPYC-Rome-v4.xml | 94 +++++++++++++
src/cpu_map/x86_EPYC-Rome.xml | 1 +
src/cpu_map/x86_EPYC-v3.xml | 87 ++++++++++++
src/cpu_map/x86_EPYC-v4.xml | 88 ++++++++++++
src/cpu_map/x86_EPYC.xml | 1 +
src/cpu_map/x86_Haswell-IBRS.xml | 1 +
src/cpu_map/x86_Haswell-noTSX-IBRS.xml | 1 +
src/cpu_map/x86_Haswell-noTSX.xml | 1 +
src/cpu_map/x86_Haswell.xml | 1 +
src/cpu_map/x86_Icelake-Server-noTSX.xml | 1 +
src/cpu_map/x86_Icelake-Server-v3.xml | 103 +++++++++++++++
src/cpu_map/x86_Icelake-Server-v4.xml | 108 +++++++++++++++
src/cpu_map/x86_Icelake-Server-v5.xml | 109 +++++++++++++++
src/cpu_map/x86_Icelake-Server-v6.xml | 109 +++++++++++++++
src/cpu_map/x86_Icelake-Server.xml | 1 +
src/cpu_map/x86_IvyBridge-IBRS.xml | 1 +
src/cpu_map/x86_IvyBridge.xml | 1 +
src/cpu_map/x86_Nehalem-IBRS.xml | 1 +
src/cpu_map/x86_Nehalem.xml | 1 +
src/cpu_map/x86_SandyBridge-IBRS.xml | 1 +
src/cpu_map/x86_SandyBridge.xml | 1 +
src/cpu_map/x86_SapphireRapids-v2.xml | 125 ++++++++++++++++++
src/cpu_map/x86_SapphireRapids.xml | 1 +
src/cpu_map/x86_Skylake-Client-IBRS.xml | 1 +
src/cpu_map/x86_Skylake-Client-noTSX-IBRS.xml | 1 +
src/cpu_map/x86_Skylake-Client-v4.xml | 77 +++++++++++
src/cpu_map/x86_Skylake-Client.xml | 1 +
src/cpu_map/x86_Skylake-Server-IBRS.xml | 1 +
src/cpu_map/x86_Skylake-Server-noTSX-IBRS.xml | 1 +
src/cpu_map/x86_Skylake-Server-v4.xml | 83 ++++++++++++
src/cpu_map/x86_Skylake-Server-v5.xml | 85 ++++++++++++
src/cpu_map/x86_Skylake-Server.xml | 1 +
src/cpu_map/x86_Snowridge-v2.xml | 78 +++++++++++
src/cpu_map/x86_Snowridge-v3.xml | 80 +++++++++++
src/cpu_map/x86_Snowridge-v4.xml | 78 +++++++++++
src/cpu_map/x86_Snowridge.xml | 1 +
src/cpu_map/x86_Westmere-IBRS.xml | 1 +
src/cpu_map/x86_Westmere.xml | 1 +
src/qemu/qemu_command.c | 5 +-
.../x86_64-cpuid-Atom-P5362-guest.xml | 3 +-
.../x86_64-cpuid-Atom-P5362-json.xml | 3 +-
.../x86_64-cpuid-Cooperlake-host.xml | 3 +-
.../x86_64-cpuid-EPYC-7502-32-Core-host.xml | 5 +-
.../x86_64-cpuid-EPYC-7601-32-Core-guest.xml | 9 +-
...6_64-cpuid-EPYC-7601-32-Core-ibpb-host.xml | 8 +-
..._64-cpuid-Hygon-C86-7185-32-core-guest.xml | 5 +-
...6_64-cpuid-Hygon-C86-7185-32-core-host.xml | 5 +-
...6_64-cpuid-Hygon-C86-7185-32-core-json.xml | 6 +-
...4-cpuid-Ryzen-7-1800X-Eight-Core-guest.xml | 9 +-
.../x86_64-cpuid-Xeon-Platinum-8268-guest.xml | 9 +-
.../x86_64-cpuid-Xeon-Platinum-8268-host.xml | 9 +-
.../x86_64-cpuid-Xeon-Platinum-9242-guest.xml | 9 +-
.../x86_64-cpuid-Xeon-Platinum-9242-host.xml | 9 +-
.../x86_64-cpuid-Xeon-Platinum-9242-json.xml | 9 +-
..._64-cpuid-baseline-Cascadelake+Icelake.xml | 9 +-
...-cpuid-baseline-Cooperlake+Cascadelake.xml | 9 +-
...6_64-cpuid-baseline-Cooperlake+Icelake.xml | 9 +-
.../domaincapsdata/qemu_4.2.0-q35.x86_64.xml | 2 +
.../domaincapsdata/qemu_4.2.0-tcg.x86_64.xml | 2 +
tests/domaincapsdata/qemu_4.2.0.x86_64.xml | 2 +
.../domaincapsdata/qemu_5.0.0-q35.x86_64.xml | 4 +
.../domaincapsdata/qemu_5.0.0-tcg.x86_64.xml | 4 +
tests/domaincapsdata/qemu_5.0.0.x86_64.xml | 4 +
.../domaincapsdata/qemu_5.1.0-q35.x86_64.xml | 7 +
.../domaincapsdata/qemu_5.1.0-tcg.x86_64.xml | 7 +
tests/domaincapsdata/qemu_5.1.0.x86_64.xml | 7 +
.../domaincapsdata/qemu_5.2.0-q35.x86_64.xml | 7 +
.../domaincapsdata/qemu_5.2.0-tcg.x86_64.xml | 7 +
tests/domaincapsdata/qemu_5.2.0.x86_64.xml | 7 +
.../domaincapsdata/qemu_6.0.0-q35.x86_64.xml | 8 ++
.../domaincapsdata/qemu_6.0.0-tcg.x86_64.xml | 8 ++
tests/domaincapsdata/qemu_6.0.0.x86_64.xml | 8 ++
.../domaincapsdata/qemu_6.1.0-q35.x86_64.xml | 15 +++
.../domaincapsdata/qemu_6.1.0-tcg.x86_64.xml | 15 +++
tests/domaincapsdata/qemu_6.1.0.x86_64.xml | 15 +++
.../domaincapsdata/qemu_6.2.0-q35.x86_64.xml | 16 +++
.../domaincapsdata/qemu_6.2.0-tcg.x86_64.xml | 16 +++
tests/domaincapsdata/qemu_6.2.0.x86_64.xml | 16 +++
.../domaincapsdata/qemu_7.0.0-q35.x86_64.xml | 17 +++
.../domaincapsdata/qemu_7.0.0-tcg.x86_64.xml | 17 +++
tests/domaincapsdata/qemu_7.0.0.x86_64.xml | 17 +++
.../domaincapsdata/qemu_7.1.0-q35.x86_64.xml | 17 +++
.../domaincapsdata/qemu_7.1.0-tcg.x86_64.xml | 17 +++
tests/domaincapsdata/qemu_7.1.0.x86_64.xml | 17 +++
.../domaincapsdata/qemu_7.2.0-q35.x86_64.xml | 17 +++
.../qemu_7.2.0-tcg.x86_64+hvf.xml | 17 +++
.../domaincapsdata/qemu_7.2.0-tcg.x86_64.xml | 17 +++
tests/domaincapsdata/qemu_7.2.0.x86_64.xml | 17 +++
.../domaincapsdata/qemu_8.0.0-q35.x86_64.xml | 17 +++
.../domaincapsdata/qemu_8.0.0-tcg.x86_64.xml | 17 +++
tests/domaincapsdata/qemu_8.0.0.x86_64.xml | 17 +++
.../domaincapsdata/qemu_8.1.0-q35.x86_64.xml | 27 +++-
.../domaincapsdata/qemu_8.1.0-tcg.x86_64.xml | 22 +++
tests/domaincapsdata/qemu_8.1.0.x86_64.xml | 27 +++-
.../domaincapsdata/qemu_8.2.0-q35.x86_64.xml | 27 +++-
.../domaincapsdata/qemu_8.2.0-tcg.x86_64.xml | 22 +++
tests/domaincapsdata/qemu_8.2.0.x86_64.xml | 27 +++-
.../cpu-Haswell-noTSX.x86_64-latest.args | 2 +-
.../cpu-Haswell.x86_64-latest.args | 2 +-
.../cpu-Haswell2.x86_64-latest.args | 2 +-
.../cpu-Haswell3.x86_64-latest.args | 2 +-
...-Icelake-Server-pconfig.x86_64-latest.args | 2 +-
.../cpu-cache-disable3.x86_64-latest.args | 2 +-
...u-check-default-partial.x86_64-latest.args | 2 +-
.../cpu-fallback.x86_64-5.2.0.args | 2 +-
.../cpu-fallback.x86_64-8.0.0.args | 2 +-
.../cpu-host-model-cmt.x86_64-latest.args | 2 +-
...-host-model-fallback-kvm.x86_64-4.2.0.args | 2 +-
...-host-model-fallback-kvm.x86_64-5.0.0.args | 2 +-
...-host-model-fallback-kvm.x86_64-5.1.0.args | 2 +-
...-host-model-fallback-kvm.x86_64-5.2.0.args | 2 +-
...-host-model-fallback-kvm.x86_64-6.0.0.args | 2 +-
...-host-model-fallback-kvm.x86_64-6.1.0.args | 2 +-
...-host-model-fallback-kvm.x86_64-6.2.0.args | 2 +-
...-host-model-fallback-kvm.x86_64-7.0.0.args | 2 +-
...-host-model-fallback-kvm.x86_64-7.1.0.args | 2 +-
...-host-model-fallback-kvm.x86_64-7.2.0.args | 2 +-
...-host-model-fallback-kvm.x86_64-8.0.0.args | 2 +-
...-host-model-fallback-kvm.x86_64-8.1.0.args | 2 +-
...host-model-fallback-kvm.x86_64-latest.args | 2 +-
...-host-model-fallback-tcg.x86_64-7.2.0.args | 2 +-
...-host-model-fallback-tcg.x86_64-8.0.0.args | 2 +-
...-host-model-fallback-tcg.x86_64-8.1.0.args | 2 +-
...host-model-fallback-tcg.x86_64-latest.args | 2 +-
.../cpu-host-model-kvm.x86_64-4.2.0.args | 2 +-
.../cpu-host-model-kvm.x86_64-5.0.0.args | 2 +-
.../cpu-host-model-kvm.x86_64-5.1.0.args | 2 +-
.../cpu-host-model-kvm.x86_64-5.2.0.args | 2 +-
.../cpu-host-model-kvm.x86_64-6.0.0.args | 2 +-
.../cpu-host-model-kvm.x86_64-6.1.0.args | 2 +-
.../cpu-host-model-kvm.x86_64-6.2.0.args | 2 +-
.../cpu-host-model-kvm.x86_64-7.0.0.args | 2 +-
.../cpu-host-model-kvm.x86_64-7.1.0.args | 2 +-
.../cpu-host-model-kvm.x86_64-7.2.0.args | 2 +-
.../cpu-host-model-kvm.x86_64-8.0.0.args | 2 +-
.../cpu-host-model-kvm.x86_64-8.1.0.args | 2 +-
.../cpu-host-model-kvm.x86_64-latest.args | 2 +-
...ost-model-nofallback-kvm.x86_64-4.2.0.args | 2 +-
...ost-model-nofallback-kvm.x86_64-5.0.0.args | 2 +-
...ost-model-nofallback-kvm.x86_64-5.1.0.args | 2 +-
...ost-model-nofallback-kvm.x86_64-5.2.0.args | 2 +-
...ost-model-nofallback-kvm.x86_64-6.0.0.args | 2 +-
...ost-model-nofallback-kvm.x86_64-6.1.0.args | 2 +-
...ost-model-nofallback-kvm.x86_64-6.2.0.args | 2 +-
...ost-model-nofallback-kvm.x86_64-7.0.0.args | 2 +-
...ost-model-nofallback-kvm.x86_64-7.1.0.args | 2 +-
...ost-model-nofallback-kvm.x86_64-7.2.0.args | 2 +-
...ost-model-nofallback-kvm.x86_64-8.0.0.args | 2 +-
...ost-model-nofallback-kvm.x86_64-8.1.0.args | 2 +-
...st-model-nofallback-kvm.x86_64-latest.args | 2 +-
...ost-model-nofallback-tcg.x86_64-7.2.0.args | 2 +-
...ost-model-nofallback-tcg.x86_64-8.0.0.args | 2 +-
...ost-model-nofallback-tcg.x86_64-8.1.0.args | 2 +-
...st-model-nofallback-tcg.x86_64-latest.args | 2 +-
.../cpu-host-model-tcg.x86_64-7.2.0.args | 2 +-
.../cpu-host-model-tcg.x86_64-8.0.0.args | 2 +-
.../cpu-host-model-tcg.x86_64-8.1.0.args | 2 +-
.../cpu-host-model-tcg.x86_64-latest.args | 2 +-
.../cpu-host-model-vendor.x86_64-latest.args | 2 +-
.../cpu-minimum1.x86_64-latest.args | 2 +-
.../cpu-minimum2.x86_64-latest.args | 2 +-
.../cpu-nofallback.x86_64-8.0.0.args | 2 +-
.../cpu-phys-bits-emulate2.x86_64-latest.args | 2 +-
.../cpu-strict1.x86_64-latest.args | 2 +-
.../cpu-translation.x86_64-latest.args | 2 +-
.../cpu-tsc-frequency.x86_64-latest.args | 2 +-
190 files changed, 2837 insertions(+), 187 deletions(-)
create mode 100644 src/cpu_map/x86_Cascadelake-Server-v2.xml
create mode 100644 src/cpu_map/x86_Cascadelake-Server-v4.xml
create mode 100644 src/cpu_map/x86_Cascadelake-Server-v5.xml
create mode 100644 src/cpu_map/x86_Cooperlake-v2.xml
create mode 100644 src/cpu_map/x86_Dhyana-v2.xml
create mode 100644 src/cpu_map/x86_EPYC-Milan-v2.xml
create mode 100644 src/cpu_map/x86_EPYC-Rome-v2.xml
create mode 100644 src/cpu_map/x86_EPYC-Rome-v3.xml
create mode 100644 src/cpu_map/x86_EPYC-Rome-v4.xml
create mode 100644 src/cpu_map/x86_EPYC-v3.xml
create mode 100644 src/cpu_map/x86_EPYC-v4.xml
create mode 100644 src/cpu_map/x86_Icelake-Server-v3.xml
create mode 100644 src/cpu_map/x86_Icelake-Server-v4.xml
create mode 100644 src/cpu_map/x86_Icelake-Server-v5.xml
create mode 100644 src/cpu_map/x86_Icelake-Server-v6.xml
create mode 100644 src/cpu_map/x86_SapphireRapids-v2.xml
create mode 100644 src/cpu_map/x86_Skylake-Client-v4.xml
create mode 100644 src/cpu_map/x86_Skylake-Server-v4.xml
create mode 100644 src/cpu_map/x86_Skylake-Server-v5.xml
create mode 100644 src/cpu_map/x86_Snowridge-v2.xml
create mode 100644 src/cpu_map/x86_Snowridge-v3.xml
create mode 100644 src/cpu_map/x86_Snowridge-v4.xml
--
2.41.0
1 month, 2 weeks
[libvirt PATCH 00/12] Add vmx-* features from qemu
by Tim Wiederhake
For rationale, see patch 3 (cpu_map: No longer ignore vmx- features in
sync_qemu_features_i386.py).
Features are added in bunches (one patch per MSR index), as this series
adds ~100 features.
This also adds cpu features "gds-no" and "amx-complex" and brings
libvirt in sync with qemu commit ad6ef0a42e.
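As a hedged sketch of how the new features surface (the exact names are
whatever the sync script imports from qemu), the host-model CPU
expansion reported by the hypervisor can be inspected with:

  # vmx-* features should now appear in the host-model expansion
  virsh domcapabilities | grep 'vmx-'

Individual features can then be required in the guest CPU definition
like any other feature, e.g. <feature policy='require' name='vmx-ept'/>.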
Tim Wiederhake (12):
cpu_map: Add missing feature "gds-no"
cpu_map: Add missing feature "amx-complex"
cpu_map: No longer ignore vmx- features in sync_qemu_features_i386.py
cpu_map: Add missing vmx features from MSR 0x480
cpu_map: Add missing vmx features from MSR 0x485
cpu_map: Add missing vmx features from MSR 0x48B
cpu_map: Add missing vmx features from MSR 0x48C
cpu_map: Add missing vmx features from MSR 0x48D
cpu_map: Add missing vmx features from MSR 0x48E
cpu_map: Add missing vmx features from MSR 0x48F
cpu_map: Add missing vmx features from MSR 0x490
cpu_map: Add missing vmx features from MSR 0x491
src/cpu_map/sync_qemu_features_i386.py | 2 +-
src/cpu_map/x86_features.xml | 295 ++++++++++++++++++
.../x86_64-cpuid-Atom-P5362-enabled.xml | 9 +
.../x86_64-cpuid-Atom-P5362-json.xml | 70 +++++
.../x86_64-cpuid-Cooperlake-enabled.xml | 9 +
.../x86_64-cpuid-Cooperlake-json.xml | 71 +++++
.../x86_64-cpuid-Core-i7-8550U-enabled.xml | 9 +
.../x86_64-cpuid-Core-i7-8550U-json.xml | 68 ++++
...86_64-cpuid-Xeon-Platinum-9242-enabled.xml | 9 +
.../x86_64-cpuid-Xeon-Platinum-9242-json.xml | 71 +++++
...-cpuid-baseline-Cooperlake+Cascadelake.xml | 71 +++++
.../domaincapsdata/qemu_4.2.0-q35.x86_64.xml | 68 ++++
tests/domaincapsdata/qemu_4.2.0.x86_64.xml | 68 ++++
.../domaincapsdata/qemu_5.0.0-q35.x86_64.xml | 68 ++++
tests/domaincapsdata/qemu_5.0.0.x86_64.xml | 68 ++++
.../domaincapsdata/qemu_8.2.0-q35.x86_64.xml | 1 +
tests/domaincapsdata/qemu_8.2.0.x86_64.xml | 1 +
.../cpu-host-model.x86_64-4.2.0.args | 2 +-
.../cpu-host-model.x86_64-5.0.0.args | 2 +-
.../cpu-host-model.x86_64-latest.args | 2 +-
20 files changed, 960 insertions(+), 4 deletions(-)
--
2.39.2
1 month, 3 weeks
[PATCH 00/10] Code cleanup
by Artem Chernyshev
Several functions were modified to become invariant: they return
nothing except zero. Change their type to void and remove the
unnecessary checks of their return values.
Found by Linux Verification Center (linuxtesting.org) with SVACE.
Artem Chernyshev (10):
conf: domain_conf: virDomainHostdevInsert/virDomainNetInsert to void
libxl: libxlMakeDomainCapabilities to void
virtime: virTimeMillisNowRaw to void
conf: virDomainGraphicsListenAppendAddress() to void
conf: virDomainDefMaybeAddInput() to void
virbitmap: virBitmapToData() to void
tools: cmdDomblkinfoGet() to void
vireventglib: virEventRunDefaultImpl() to void
vsh: vshCompleterFilter() to void
vsh: vshInitDebug() to void
examples/c/misc/event-test.c | 5 +-
include/libvirt/libvirt-event.h | 2 +-
src/ch/ch_monitor.c | 5 +-
src/conf/domain_conf.c | 35 +++-------
src/conf/domain_conf.h | 8 +--
src/conf/virdomainjob.c | 7 +-
src/hypervisor/domain_driver.c | 5 +-
src/libxl/libxl_capabilities.c | 40 ++++--------
src/libxl/libxl_capabilities.h | 2 +-
src/libxl/libxl_domain.c | 16 ++---
src/libxl/libxl_domain.h | 2 +-
src/libxl/libxl_driver.c | 17 ++---
src/libxl/xen_common.c | 25 +++----
src/libxl/xen_common.h | 2 +-
src/libxl/xen_xl.c | 3 +-
src/lxc/lxc_driver.c | 6 +-
src/nwfilter/nwfilter_dhcpsnoop.c | 18 ++---
src/qemu/qemu_agent.c | 6 +-
src/qemu/qemu_backup.c | 3 +-
src/qemu/qemu_dbus.c | 3 +-
src/qemu/qemu_domain.c | 35 ++++------
src/qemu/qemu_domainjob.c | 22 +++----
src/qemu/qemu_domainjob.h | 4 +-
src/qemu/qemu_driver.c | 19 +++---
src/qemu/qemu_hotplug.c | 6 +-
src/qemu/qemu_migration.c | 4 +-
src/qemu/qemu_nbdkit.c | 3 +-
src/qemu/qemu_process.c | 12 ++--
src/qemu/qemu_tpm.c | 3 +-
src/rpc/virnetdaemon.c | 6 +-
src/storage/storage_backend_iscsi_direct.c | 4 +-
src/util/virbitmap.c | 6 +-
src/util/virbitmap.h | 2 +-
src/util/virevent.c | 7 +-
src/util/vireventglib.c | 4 +-
src/util/vireventglib.h | 2 +-
src/util/virfdstream.c | 3 +-
src/util/virhostcpu.c | 4 +-
src/util/virhostuptime.c | 3 +-
src/util/virtime.c | 48 ++++----------
src/util/virtime.h | 14 ++--
src/vbox/vbox_common.c | 4 +-
src/vmx/vmx.c | 3 +-
src/vz/vz_driver.c | 38 +++++------
src/vz/vz_sdk.c | 14 ++--
src/vz/vz_utils.c | 16 ++---
src/vz/vz_utils.h | 2 +-
tests/domaincapstest.c | 3 +-
tests/objecteventtest.c | 76 ++++++++++------------
tests/qemucapsprobe.c | 5 +-
tests/qemumonitortestutils.c | 8 +--
tests/virbitmaptest.c | 3 +-
tests/virnetsockettest.c | 3 +-
tools/virsh-completer-domain.c | 6 +-
tools/virsh-domain-monitor.c | 10 +--
tools/virsh-domain.c | 15 ++---
tools/vsh.c | 24 +++----
57 files changed, 231 insertions(+), 420 deletions(-)
--
2.43.0
2 months, 1 week
[PATCH 0/4] vmx: A couple of disk related fixes
by Michal Privoznik
The first patch fixes an issue.
The last one - I am not sure that it won't break something, and thus
it's optional.
Michal Prívozník (4):
vmx: Accept empty fileName for cdrom-image
vmx2xmltest: Add another test case
vmx: Separate disk target name generation into a function
vmx: Ensure unique disk targets when parsing
src/vmx/vmx.c | 199 +++++++++++++++--------
tests/vmx2xmldata/esx-in-the-wild-12.vmx | 86 ++++++++++
tests/vmx2xmldata/esx-in-the-wild-12.xml | 39 +++++
tests/vmx2xmldata/esx-in-the-wild-8.xml | 4 +-
tests/vmx2xmltest.c | 1 +
5 files changed, 261 insertions(+), 68 deletions(-)
create mode 100644 tests/vmx2xmldata/esx-in-the-wild-12.vmx
create mode 100644 tests/vmx2xmldata/esx-in-the-wild-12.xml
--
2.41.0
3 months, 2 weeks
[PATCH 0/3] Allow reserving more memory for PCI controllers
by Michal Privoznik
*** BLURB HERE ***
Michal Prívozník (3):
conf: Introduce @memReserve to <controller/>
qemu_validate: Restrict setting @memReserve only to some controllers
qemu_command: Generate mem-reserve for controllers
docs/formatdomain.rst | 6 +++++
src/conf/domain_conf.c | 9 +++++++
src/conf/domain_conf.h | 3 +++
src/conf/schemas/domaincommon.rng | 5 ++++
src/qemu/qemu_command.c | 3 +++
src/qemu/qemu_validate.c | 25 +++++++++++++++++++
.../q35-usb2.x86_64-latest.args | 2 +-
tests/qemuxml2argvdata/q35-usb2.xml | 2 +-
.../q35-usb2.x86_64-latest.xml | 2 +-
9 files changed, 54 insertions(+), 3 deletions(-)
--
2.41.0
3 months, 2 weeks
[v3 0/4] Support for dirty-limit live migration
by Hyman Huang
v3:
- adjust the parameter check location as suggested by Michal
- mark the VIR_MIGRATE_DIRTY_LIMIT flag as introduced in 10.0.0
- rebase on master
Thanks Michal for the comments.
Please review,
Yong.
v1:
The dirty-limit functionality for live migration was
introduced in qemu >= 8.1.
In the live migration scenario, it implements forced
convergence using the dirty-limit approach, which results
in more reliable read performance.
A straightforward dirty-limit capability for live migration
is added by this patchset. Users might not care about other
dirty-limit arguments like "x-vcpu-dirty-limit-period"
or "vcpu-dirty-limit", so these are not exposed to libvirt,
and their default configurations and values are kept in place.
For more details about dirty-limit, please see the following
reference:
https://lore.kernel.org/qemu-devel/169024923116.19090.10825599068950039132-0@git.sr.ht/
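A usage sketch from the virsh side; note that the --dirty-limit option
name below is an assumption derived from the capability name, and the
authoritative spelling is in the virsh.rst change of patch 3:

  # Assumed option spelling; enables dirty-limit forced convergence
  virsh migrate --live --dirty-limit guest1 qemu+ssh://dst/system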
Hyman Huang (4):
Add VIR_MIGRATE_DIRTY_LIMIT flag
qemu_migration: Implement VIR_MIGRATE_DIRTY_LIMIT flag
virsh: Add support for VIR_MIGRATE_DIRTY_LIMIT flag
NEWS: document support for dirty-limit live migration
NEWS.rst | 8 ++++++++
docs/manpages/virsh.rst | 10 +++++++++-
include/libvirt/libvirt-domain.h | 5 +++++
src/libvirt-domain.c | 8 ++++++++
src/qemu/qemu_migration.c | 8 ++++++++
src/qemu/qemu_migration.h | 1 +
src/qemu/qemu_migration_params.c | 6 ++++++
src/qemu/qemu_migration_params.h | 1 +
tools/virsh-domain.c | 6 ++++++
9 files changed, 52 insertions(+), 1 deletion(-)
--
2.39.1
3 months, 3 weeks
[PATCH rfcv3 00/11] LIBVIRT: X86: TDX support
by Zhenzhong Duan
Hi,
This series brings libvirt the x86 TDX support.
* What's TDX?
TDX stands for Trust Domain Extensions, which isolates VMs from
the virtual-machine manager (VMM)/hypervisor and any other software on
the platform.
To support TDX, multiple software components, not only KVM but also QEMU,
guest Linux and the virtual BIOS, need to be updated. For more details,
please check link [1]; it has TDX spec links and a public GitHub
repository link for each software component.
This patchset is another software component to extend libvirt to support TDX,
with which one can start a VM from high level rather than running qemu directly.
* Misc
As QEMU uses a software-emulated way to reset guests, which isn't supported
by TDX guests for security reasons, we add a new way to emulate the reset
for TDX guests, called "hard reboot". We achieve this by killing the old
qemu and starting a new one.
Complete code can be found at [1], matching qemu code can be found at [2].
There are some new properties for the tdx-guest object, i.e. `mrconfigid`,
`mrowner`, `mrownerconfig` and `debug`, which aren't in the matching
qemu [2] yet. I keep them intentionally as they will be implemented in
qemu as an extension series of [2].
* Test
start/stop/reboot with virsh
stop/reboot trigger in guest
stop with on_poweroff=destroy/restart
reboot with on_reboot=destroy/restart
* Patch organization
- patch 1-3: Support query of TDX capabilities.
- patch 4-6: Add TDX type to launchsecurity framework (see the XML sketch after this list)
- patch 7-11: Add hard reboot support to TDX guest
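As a sketch of where the new type lands in the domain XML (only
type='tdx' is taken from this series; any child elements, e.g. for
mrconfigid or mrowner, follow the schema added in the domaincommon.rng
change and are not spelled out here):

  <launchSecurity type='tdx'>
    <!-- tdx-guest properties such as mrconfigid/mrowner/mrownerconfig
         are configured per the schema from patches 4-6 -->
  </launchSecurity>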
[1] https://github.com/intel/libvirt-tdx/commits/tdx_for_upstream_rfcv3
[2] https://github.com/intel/qemu-tdx/tree/tdx-qemu-upstream-v3
Thanks
Zhenzhong
Changelog:
rfcv3:
- Change to generate qemu cmdline with -bios
- drop firmware auto match as -bios is used
- add a hard reboot method to reboot TDX guest
rfcv2:
- give up using qmp cmd and check TDX directly on host for TDX capabilities.
- use launchsecurity framework to support TDX
- use <os>.<loader> for general loader
- add auto firmware match feature for TDX
An example TDVF firmware description file 70-edk2-x86_64-tdx.json:
{
  "description": "UEFI firmware for x86_64, supporting Intel TDX",
  "interface-types": [
    "uefi"
  ],
  "mapping": {
    "device": "generic",
    "filename": "/usr/share/OVMF/OVMF_CODE-tdx.fd"
  },
  "targets": [
    {
      "architecture": "x86_64",
      "machines": [
        "pc-q35-*"
      ]
    }
  ],
  "features": [
    "intel-tdx",
    "verbose-dynamic"
  ],
  "tags": [
  ]
}
rfcv2:
https://www.mail-archive.com/libvir-list@redhat.com/msg219378.html
Chenyi Qiang (3):
qemu: add hard reboot in QEMU driver
qemu: make hard reboot as the TDX default reboot mode
virsh: add new option "timekeep" to keep virsh console alive
Zhenzhong Duan (8):
qemu: Check if INTEL Trust Domain Extention support is enabled
qemu: Add TDX capability
conf: expose TDX feature in domain capabilities
conf: add tdx as launch security type
qemu: Add command line and validation for TDX type
qemu: force special parameters enabled for TDX guest
qemu: Extend hard reboot in Qemu driver
conf: Add support to keep same domid for hard reboot
docs/formatdomaincaps.rst | 1 +
include/libvirt/libvirt-domain.h | 2 +
src/conf/domain_capabilities.c | 1 +
src/conf/domain_capabilities.h | 1 +
src/conf/domain_conf.c | 50 ++++++++++++++++
src/conf/domain_conf.h | 11 ++++
src/conf/schemas/domaincaps.rng | 9 +++
src/conf/schemas/domaincommon.rng | 34 +++++++++++
src/conf/virconftypes.h | 2 +
src/qemu/qemu_capabilities.c | 38 +++++++++++-
src/qemu/qemu_capabilities.h | 1 +
src/qemu/qemu_command.c | 29 +++++++++
src/qemu/qemu_domain.c | 18 ++++++
src/qemu/qemu_domain.h | 4 ++
src/qemu/qemu_driver.c | 85 ++++++++++++++++++++------
src/qemu/qemu_firmware.c | 1 +
src/qemu/qemu_monitor.c | 19 +++++-
src/qemu/qemu_monitor.h | 2 +-
src/qemu/qemu_monitor_json.c | 6 +-
src/qemu/qemu_namespace.c | 1 +
src/qemu/qemu_process.c | 99 ++++++++++++++++++++++++++++++-
src/qemu/qemu_validate.c | 18 ++++++
tools/virsh-console.c | 3 +
tools/virsh-domain.c | 64 +++++++++++++++-----
tools/virsh.h | 1 +
25 files changed, 463 insertions(+), 37 deletions(-)
--
2.34.1
3 months, 3 weeks
[PATCH v2 00/14] Unreachable code cleanup
by Dmitry Frolov
A lot of unreachable code was caused by int functions which return
nothing except zero. The unreachable code was removed, and the
corresponding functions' type was changed to void.
Dmitry Frolov (14):
cpu: turn virCPUx86DataAddItem() to void
cpu: turn virCPUx86DataAdd() to void
libxl: turn libxlCapsAddCPUID() to void
libxl: turn virCapabilitiesAddHostFeature() to void
libxl: turn libxl_get_physinfo() to void
conf: turn virCapabilitiesSetNetPrefix() to void
libxl: turn libxlMakeDomainOSCaps() to void
libxl: turn libxlMakeDomainDeviceDiskCaps() to void
libxl: turn libxlMakeDomainDeviceGraphicsCaps() to void
libxl: turn libxlMakeDomainDeviceVideoCaps() to void
conf: turn virDomainGraphicsListenAppendAddress() to void
vbox: turn vboxDumpDisplay() to void
libxl: turn xenParseXLNamespaceData() to void
rpc: turn virNetClientAddProgram() to void
src/admin/admin_remote.c | 3 +-
src/conf/capabilities.c | 8 +--
src/conf/capabilities.h | 4 +-
src/conf/domain_conf.c | 4 +-
src/conf/domain_conf.h | 2 +-
src/cpu/cpu_x86.c | 91 ++++++++++++---------------------
src/cpu/cpu_x86.h | 2 +-
src/libxl/libxl_capabilities.c | 54 +++++++------------
src/libxl/libxl_conf.c | 20 ++------
src/libxl/libxl_driver.c | 6 +--
src/libxl/xen_common.c | 7 +--
src/libxl/xen_xl.c | 8 ++-
src/locking/lock_driver_lockd.c | 3 +-
src/logging/log_manager.c | 3 +-
src/lxc/lxc_monitor.c | 4 +-
src/remote/remote_driver.c | 7 ++-
src/rpc/virnetclient.c | 3 +-
src/rpc/virnetclient.h | 2 +-
src/test/test_driver.c | 6 +--
src/vbox/vbox_common.c | 13 ++---
src/vmx/vmx.c | 3 +-
tests/libxlmock.c | 5 +-
22 files changed, 86 insertions(+), 172 deletions(-)
--
2.34.1
3 months, 4 weeks
[PATCH 0/2] ci: Update to Fedora 39
by Michal Privoznik
Green pipeline:
https://gitlab.com/MichalPrivoznik/libvirt/-/pipelines/1106643445
Michal Prívozník (2):
ci: integration: Switch upstream integration tests to Fedora 39
ci: Update Alpine and Fedora and regenerate
ci/buildenv/{alpine-317.sh => alpine-319.sh} | 0
ci/buildenv/{fedora-37.sh => fedora-39.sh} | 0
...e-317.Dockerfile => alpine-319.Dockerfile} | 2 +-
...ora-37.Dockerfile => fedora-39.Dockerfile} | 2 +-
ci/gitlab/builds.yml | 64 +++++++--------
ci/gitlab/containers.yml | 18 ++---
ci/integration.yml | 80 +++++++++----------
ci/manifest.yml | 18 ++---
8 files changed, 92 insertions(+), 92 deletions(-)
rename ci/buildenv/{alpine-317.sh => alpine-319.sh} (100%)
rename ci/buildenv/{fedora-37.sh => fedora-39.sh} (100%)
rename ci/containers/{alpine-317.Dockerfile => alpine-319.Dockerfile} (98%)
rename ci/containers/{fedora-37.Dockerfile => fedora-39.Dockerfile} (98%)
--
2.41.0
4 months