[PATCH v2 00/36] Add support for versioned CPU models
by Jiri Denemark
Each CPU model with a -v* suffix is defined as a standalone model
copying all attributes of the previous version. CPU model versions with
an alias are handled differently: the full definition is used for the
alias and the versioned model is created as an identical copy of the
alias.
To avoid breaking migration compatibility of host-model CPUs, all
versioned models are marked with <decode guest='off'/> so that they are
ignored when selecting candidates for host-model. It's not ideal, but
not doing so would break almost all host-model CPUs: the new versioned
CPU models have had all vmx-* features included since their
introduction, while existing CPU models were only updated with them
later. This means existing models would have to be accompanied by a
long list of vmx-* features to properly describe a host CPU, while the
newly added CPU models would have those features enabled implicitly and
their list of features would be significantly shorter. Thus the new
models would always be better candidates for host-model than the
existing models.
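As an illustration, a versioned model that is an identical copy of its
alias could be a tiny XML file of roughly this shape (hypothetical
sketch — the exact elements are defined by the cpu_map schema in this
series; the 6-line x86_*-v1.xml files in the diffstat suggest
definitions about this small):

```xml
<cpus>
  <model name='EPYC-v1'>
    <!-- versioned variants are excluded from host-model selection -->
    <decode host='on' guest='off'/>
    <!-- identical copy of the non-versioned EPYC model -->
    <model name='EPYC'/>
  </model>
</cpus>
```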
Version 2:
- removed patches:
  - cpu_x86: Copy added and removed features from ancestor
  - qemu: Canonicalize CPU models
- new patches:
  - cpu_x86: Annotate virCPUx86Model fields
  - cpu_x86: Promote added/removed from ancestor
  - cpu_x86: Record relations between CPU models
  - cpu: Introduce virCPUGetCanonicalModel
  - domain_capabilities: Report canonical names of CPU models
  - cpu_map: Add Denverton CPU model
  - cpu_map: Add KnightsMill CPU model
- make -v? variants linked to their corresponding non-versioned models
  (such as -noTSX, -IBRS, etc.)
- all -v? variants are marked with <decode host='on' guest='off'/>
- do not add absolute path to CPU model XMLs to index.xml
- use <group name='...'> for all groups rather than a strange mix of
  <group name='...'> and <group vendor='...'>
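As a sketch, the new grouping in index.xml could look roughly like this
(a hypothetical fragment based on the changelog above — the actual
element layout is defined by the cpu_map schema in this series, and the
real file lists many more models):

```xml
<arch name='x86'>
  <!-- hypothetical grouping; names and contents are illustrative -->
  <group name='Intel'>
    <include filename='x86_Skylake-Server.xml'/>
    <include filename='x86_Skylake-Server-v1.xml'/>
  </group>
  <group name='AMD'>
    <include filename='x86_EPYC.xml'/>
    <include filename='x86_EPYC-v1.xml'/>
  </group>
</arch>
```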
Jiri Denemark (36):
cpu_x86: Annotate virCPUx86Model fields
cpu_x86: Promote added/removed from ancestor
sync_qemu_features_i386: Add some removed features back
sync_qemu_models_i386: Use f-strings
sync_qemu_models_i386: Do not overwrite existing models
sync_qemu_models_i386: Do not require full path to QEMU's cpu.c
sync_qemu_models_i386: Add support for versioned CPU models
sync_qemu_models_i386: Store extra info in a separate file
sync_qemu_models_i386: Switch to lxml
cpu_map: Properly group models in index.xml
sync_qemu_models_i386: Update index.xml
sync_qemu_models_i386: Copy signatures from base model
cpu_x86: Record relations between CPU models
cpu: Introduce virCPUGetCanonicalModel
domain_capabilities: Report canonical names of CPU models
cpu_map: Add versions of SierraForest CPU model
cpu_map: Add versions of GraniteRapids CPU model
cpu_map: Add versions of SapphireRapids CPU model
cpu_map: Add versions of Snowridge CPU model
cpu_map: Add versions of Cooperlake CPU model
cpu_map: Add versions of Icelake-Server CPU model
cpu_map: Add versions of Cascadelake-Server CPU model
cpu_map: Add versions of Skylake-Server CPU model
cpu_map: Add versions of Skylake-Client CPU model
cpu_map: Add versions of Broadwell CPU model
cpu_map: Add versions of Haswell CPU model
cpu_map: Add versions of IvyBridge CPU model
cpu_map: Add versions of SandyBridge CPU model
cpu_map: Add versions of Westmere CPU model
cpu_map: Add versions of Nehalem CPU model
cpu_map: Add versions of EPYC-Milan CPU model
cpu_map: Add versions of EPYC-Rome CPU model
cpu_map: Add versions of EPYC CPU model
cpu_map: Add versions of Dhyana CPU model
cpu_map: Add Denverton CPU model
cpu_map: Add KnightsMill CPU model
docs/formatdomaincaps.rst | 8 +-
src/conf/domain_capabilities.c | 11 +-
src/conf/domain_capabilities.h | 4 +-
src/cpu/cpu.c | 25 +
src/cpu/cpu.h | 8 +
src/cpu/cpu_map.c | 2 +-
src/cpu/cpu_x86.c | 88 +-
src/cpu_map/index.xml | 291 ++--
src/cpu_map/meson.build | 60 +
src/cpu_map/sync_qemu_features_i386.py | 3 +
src/cpu_map/sync_qemu_models_i386.py | 184 +-
src/cpu_map/x86_Broadwell-v1.xml | 6 +
src/cpu_map/x86_Broadwell-v2.xml | 6 +
src/cpu_map/x86_Broadwell-v3.xml | 6 +
src/cpu_map/x86_Broadwell-v4.xml | 6 +
src/cpu_map/x86_Cascadelake-Server-v1.xml | 6 +
src/cpu_map/x86_Cascadelake-Server-v2.xml | 157 ++
src/cpu_map/x86_Cascadelake-Server-v3.xml | 6 +
src/cpu_map/x86_Cascadelake-Server-v4.xml | 156 ++
src/cpu_map/x86_Cascadelake-Server-v5.xml | 158 ++
src/cpu_map/x86_Cooperlake-v1.xml | 6 +
src/cpu_map/x86_Cooperlake-v2.xml | 164 ++
src/cpu_map/x86_Denverton-v1.xml | 6 +
src/cpu_map/x86_Denverton-v2.xml | 137 ++
src/cpu_map/x86_Denverton-v3.xml | 139 ++
src/cpu_map/x86_Denverton.xml | 138 ++
src/cpu_map/x86_Dhyana-v1.xml | 6 +
src/cpu_map/x86_Dhyana-v2.xml | 73 +
src/cpu_map/x86_EPYC-Milan-v1.xml | 6 +
src/cpu_map/x86_EPYC-Milan-v2.xml | 99 ++
src/cpu_map/x86_EPYC-Rome-v1.xml | 6 +
src/cpu_map/x86_EPYC-Rome-v2.xml | 86 +
src/cpu_map/x86_EPYC-Rome-v3.xml | 86 +
src/cpu_map/x86_EPYC-Rome-v4.xml | 85 +
src/cpu_map/x86_EPYC-v1.xml | 6 +
src/cpu_map/x86_EPYC-v2.xml | 6 +
src/cpu_map/x86_EPYC-v3.xml | 79 +
src/cpu_map/x86_EPYC-v4.xml | 79 +
src/cpu_map/x86_GraniteRapids-v1.xml | 6 +
src/cpu_map/x86_Haswell-v1.xml | 6 +
src/cpu_map/x86_Haswell-v2.xml | 6 +
src/cpu_map/x86_Haswell-v3.xml | 6 +
src/cpu_map/x86_Haswell-v4.xml | 6 +
src/cpu_map/x86_Icelake-Server-v1.xml | 6 +
src/cpu_map/x86_Icelake-Server-v2.xml | 6 +
src/cpu_map/x86_Icelake-Server-v3.xml | 165 ++
src/cpu_map/x86_Icelake-Server-v4.xml | 172 ++
src/cpu_map/x86_Icelake-Server-v5.xml | 174 ++
src/cpu_map/x86_Icelake-Server-v6.xml | 175 ++
src/cpu_map/x86_Icelake-Server-v7.xml | 177 ++
src/cpu_map/x86_IvyBridge-v1.xml | 6 +
src/cpu_map/x86_IvyBridge-v2.xml | 6 +
src/cpu_map/x86_KnightsMill.xml | 71 +
src/cpu_map/x86_Nehalem-v1.xml | 6 +
src/cpu_map/x86_Nehalem-v2.xml | 6 +
src/cpu_map/x86_SandyBridge-v1.xml | 6 +
src/cpu_map/x86_SandyBridge-v2.xml | 6 +
src/cpu_map/x86_SapphireRapids-v1.xml | 6 +
src/cpu_map/x86_SapphireRapids-v2.xml | 193 +++
src/cpu_map/x86_SapphireRapids-v3.xml | 198 +++
src/cpu_map/x86_SierraForest-v1.xml | 6 +
src/cpu_map/x86_Skylake-Client-v1.xml | 6 +
src/cpu_map/x86_Skylake-Client-v2.xml | 6 +
src/cpu_map/x86_Skylake-Client-v3.xml | 6 +
src/cpu_map/x86_Skylake-Client-v4.xml | 141 ++
src/cpu_map/x86_Skylake-Server-v1.xml | 6 +
src/cpu_map/x86_Skylake-Server-v2.xml | 6 +
src/cpu_map/x86_Skylake-Server-v3.xml | 6 +
src/cpu_map/x86_Skylake-Server-v4.xml | 148 ++
src/cpu_map/x86_Skylake-Server-v5.xml | 150 ++
src/cpu_map/x86_Snowridge-v1.xml | 6 +
src/cpu_map/x86_Snowridge-v2.xml | 143 ++
src/cpu_map/x86_Snowridge-v3.xml | 145 ++
src/cpu_map/x86_Snowridge-v4.xml | 143 ++
src/cpu_map/x86_Westmere-v1.xml | 6 +
src/cpu_map/x86_Westmere-v2.xml | 6 +
src/libvirt_private.syms | 1 +
src/qemu/qemu_capabilities.c | 10 +-
tests/cputest.c | 5 +-
.../x86_64-cpuid-Atom-P5362-host.xml | 2 +-
.../x86_64-cpuid-Cooperlake-host.xml | 2 +-
.../x86_64-cpuid-Core-i5-2500-host.xml | 2 +-
.../x86_64-cpuid-Core-i5-2540M-host.xml | 2 +-
.../x86_64-cpuid-Core-i5-4670T-host.xml | 2 +-
.../x86_64-cpuid-Core-i5-650-host.xml | 2 +-
.../x86_64-cpuid-Core-i5-6600-host.xml | 2 +-
.../x86_64-cpuid-Core-i7-2600-host.xml | 2 +-
...86_64-cpuid-Core-i7-2600-xsaveopt-host.xml | 2 +-
.../x86_64-cpuid-Core-i7-3520M-host.xml | 2 +-
.../x86_64-cpuid-Core-i7-3740QM-host.xml | 2 +-
.../x86_64-cpuid-Core-i7-3770-host.xml | 2 +-
.../x86_64-cpuid-Core-i7-4510U-host.xml | 2 +-
.../x86_64-cpuid-Core-i7-4600U-host.xml | 2 +-
.../x86_64-cpuid-Core-i7-5600U-arat-host.xml | 2 +-
.../x86_64-cpuid-Core-i7-5600U-host.xml | 2 +-
.../x86_64-cpuid-Core-i7-5600U-ibrs-host.xml | 2 +-
.../x86_64-cpuid-Core-i7-7600U-host.xml | 2 +-
.../x86_64-cpuid-Core-i7-7700-host.xml | 2 +-
.../x86_64-cpuid-Core-i7-8550U-host.xml | 2 +-
.../x86_64-cpuid-Core-i7-8700-host.xml | 2 +-
.../x86_64-cpuid-EPYC-7502-32-Core-host.xml | 5 +-
.../x86_64-cpuid-EPYC-7601-32-Core-host.xml | 2 +-
...6_64-cpuid-EPYC-7601-32-Core-ibpb-host.xml | 8 +-
...6_64-cpuid-Hygon-C86-7185-32-core-host.xml | 5 +-
.../x86_64-cpuid-Ice-Lake-Server-host.xml | 2 +-
...64-cpuid-Ryzen-7-1800X-Eight-Core-host.xml | 2 +-
...86_64-cpuid-Ryzen-9-3900X-12-Core-host.xml | 2 +-
.../x86_64-cpuid-Xeon-E3-1225-v5-host.xml | 2 +-
.../x86_64-cpuid-Xeon-E3-1245-v5-host.xml | 2 +-
.../x86_64-cpuid-Xeon-E5-2609-v3-host.xml | 2 +-
.../x86_64-cpuid-Xeon-E5-2623-v4-host.xml | 2 +-
.../x86_64-cpuid-Xeon-E5-2630-v3-host.xml | 2 +-
.../x86_64-cpuid-Xeon-E5-2630-v4-host.xml | 2 +-
.../x86_64-cpuid-Xeon-E5-2650-host.xml | 2 +-
.../x86_64-cpuid-Xeon-E5-2650-v3-host.xml | 2 +-
.../x86_64-cpuid-Xeon-E5-2650-v4-host.xml | 2 +-
.../x86_64-cpuid-Xeon-E7-4820-host.xml | 2 +-
.../x86_64-cpuid-Xeon-E7-4830-host.xml | 2 +-
.../x86_64-cpuid-Xeon-E7-8890-v3-host.xml | 2 +-
.../x86_64-cpuid-Xeon-E7540-host.xml | 2 +-
.../x86_64-cpuid-Xeon-Gold-5115-host.xml | 2 +-
.../x86_64-cpuid-Xeon-Gold-6130-host.xml | 2 +-
.../x86_64-cpuid-Xeon-Gold-6148-host.xml | 2 +-
.../x86_64-cpuid-Xeon-Platinum-8268-host.xml | 2 +-
.../x86_64-cpuid-Xeon-Platinum-9242-host.xml | 2 +-
.../x86_64-cpuid-Xeon-W3520-host.xml | 2 +-
.../domaincapsdata/qemu_5.2.0-q35.x86_64.xml | 462 ++++-
.../domaincapsdata/qemu_5.2.0-tcg.x86_64.xml | 836 +++++++++-
tests/domaincapsdata/qemu_5.2.0.x86_64.xml | 462 ++++-
.../domaincapsdata/qemu_6.0.0-q35.x86_64.xml | 477 +++++-
.../domaincapsdata/qemu_6.0.0-tcg.x86_64.xml | 896 +++++++++-
tests/domaincapsdata/qemu_6.0.0.x86_64.xml | 477 +++++-
.../domaincapsdata/qemu_6.1.0-q35.x86_64.xml | 576 ++++++-
.../domaincapsdata/qemu_6.1.0-tcg.x86_64.xml | 1458 +++++++++++++---
tests/domaincapsdata/qemu_6.1.0.x86_64.xml | 576 ++++++-
.../domaincapsdata/qemu_6.2.0-q35.x86_64.xml | 583 ++++++-
.../domaincapsdata/qemu_6.2.0-tcg.x86_64.xml | 1461 +++++++++++++---
tests/domaincapsdata/qemu_6.2.0.x86_64.xml | 583 ++++++-
.../domaincapsdata/qemu_7.0.0-q35.x86_64.xml | 609 ++++++-
.../domaincapsdata/qemu_7.0.0-tcg.x86_64.xml | 1485 ++++++++++++++---
tests/domaincapsdata/qemu_7.0.0.x86_64.xml | 609 ++++++-
.../domaincapsdata/qemu_7.1.0-q35.x86_64.xml | 609 ++++++-
.../domaincapsdata/qemu_7.1.0-tcg.x86_64.xml | 1425 +++++++++++++---
tests/domaincapsdata/qemu_7.1.0.x86_64.xml | 609 ++++++-
.../domaincapsdata/qemu_7.2.0-q35.x86_64.xml | 609 ++++++-
.../qemu_7.2.0-tcg.x86_64+hvf.xml | 979 ++++++++++-
.../domaincapsdata/qemu_7.2.0-tcg.x86_64.xml | 979 ++++++++++-
tests/domaincapsdata/qemu_7.2.0.x86_64.xml | 609 ++++++-
.../domaincapsdata/qemu_8.0.0-q35.x86_64.xml | 652 +++++++-
.../domaincapsdata/qemu_8.0.0-tcg.x86_64.xml | 1015 ++++++++++-
tests/domaincapsdata/qemu_8.0.0.x86_64.xml | 652 +++++++-
.../domaincapsdata/qemu_8.1.0-q35.x86_64.xml | 815 ++++++++-
.../domaincapsdata/qemu_8.1.0-tcg.x86_64.xml | 1063 +++++++++++-
tests/domaincapsdata/qemu_8.1.0.x86_64.xml | 815 ++++++++-
.../domaincapsdata/qemu_8.2.0-q35.x86_64.xml | 815 ++++++++-
.../domaincapsdata/qemu_8.2.0-tcg.x86_64.xml | 959 ++++++++++-
tests/domaincapsdata/qemu_8.2.0.x86_64.xml | 815 ++++++++-
.../domaincapsdata/qemu_9.0.0-q35.x86_64.xml | 815 ++++++++-
.../domaincapsdata/qemu_9.0.0-tcg.x86_64.xml | 915 +++++++++-
tests/domaincapsdata/qemu_9.0.0.x86_64.xml | 815 ++++++++-
.../domaincapsdata/qemu_9.1.0-q35.x86_64.xml | 922 +++++++++-
.../domaincapsdata/qemu_9.1.0-tcg.x86_64.xml | 1139 +++++++++++--
tests/domaincapsdata/qemu_9.1.0.x86_64.xml | 922 +++++++++-
.../domaincapsdata/qemu_9.2.0-q35.x86_64.xml | 922 +++++++++-
.../domaincapsdata/qemu_9.2.0-tcg.x86_64.xml | 1139 +++++++++++--
tests/domaincapsdata/qemu_9.2.0.x86_64.xml | 922 +++++++++-
166 files changed, 35711 insertions(+), 2629 deletions(-)
create mode 100644 src/cpu_map/x86_Broadwell-v1.xml
create mode 100644 src/cpu_map/x86_Broadwell-v2.xml
create mode 100644 src/cpu_map/x86_Broadwell-v3.xml
create mode 100644 src/cpu_map/x86_Broadwell-v4.xml
create mode 100644 src/cpu_map/x86_Cascadelake-Server-v1.xml
create mode 100644 src/cpu_map/x86_Cascadelake-Server-v2.xml
create mode 100644 src/cpu_map/x86_Cascadelake-Server-v3.xml
create mode 100644 src/cpu_map/x86_Cascadelake-Server-v4.xml
create mode 100644 src/cpu_map/x86_Cascadelake-Server-v5.xml
create mode 100644 src/cpu_map/x86_Cooperlake-v1.xml
create mode 100644 src/cpu_map/x86_Cooperlake-v2.xml
create mode 100644 src/cpu_map/x86_Denverton-v1.xml
create mode 100644 src/cpu_map/x86_Denverton-v2.xml
create mode 100644 src/cpu_map/x86_Denverton-v3.xml
create mode 100644 src/cpu_map/x86_Denverton.xml
create mode 100644 src/cpu_map/x86_Dhyana-v1.xml
create mode 100644 src/cpu_map/x86_Dhyana-v2.xml
create mode 100644 src/cpu_map/x86_EPYC-Milan-v1.xml
create mode 100644 src/cpu_map/x86_EPYC-Milan-v2.xml
create mode 100644 src/cpu_map/x86_EPYC-Rome-v1.xml
create mode 100644 src/cpu_map/x86_EPYC-Rome-v2.xml
create mode 100644 src/cpu_map/x86_EPYC-Rome-v3.xml
create mode 100644 src/cpu_map/x86_EPYC-Rome-v4.xml
create mode 100644 src/cpu_map/x86_EPYC-v1.xml
create mode 100644 src/cpu_map/x86_EPYC-v2.xml
create mode 100644 src/cpu_map/x86_EPYC-v3.xml
create mode 100644 src/cpu_map/x86_EPYC-v4.xml
create mode 100644 src/cpu_map/x86_GraniteRapids-v1.xml
create mode 100644 src/cpu_map/x86_Haswell-v1.xml
create mode 100644 src/cpu_map/x86_Haswell-v2.xml
create mode 100644 src/cpu_map/x86_Haswell-v3.xml
create mode 100644 src/cpu_map/x86_Haswell-v4.xml
create mode 100644 src/cpu_map/x86_Icelake-Server-v1.xml
create mode 100644 src/cpu_map/x86_Icelake-Server-v2.xml
create mode 100644 src/cpu_map/x86_Icelake-Server-v3.xml
create mode 100644 src/cpu_map/x86_Icelake-Server-v4.xml
create mode 100644 src/cpu_map/x86_Icelake-Server-v5.xml
create mode 100644 src/cpu_map/x86_Icelake-Server-v6.xml
create mode 100644 src/cpu_map/x86_Icelake-Server-v7.xml
create mode 100644 src/cpu_map/x86_IvyBridge-v1.xml
create mode 100644 src/cpu_map/x86_IvyBridge-v2.xml
create mode 100644 src/cpu_map/x86_KnightsMill.xml
create mode 100644 src/cpu_map/x86_Nehalem-v1.xml
create mode 100644 src/cpu_map/x86_Nehalem-v2.xml
create mode 100644 src/cpu_map/x86_SandyBridge-v1.xml
create mode 100644 src/cpu_map/x86_SandyBridge-v2.xml
create mode 100644 src/cpu_map/x86_SapphireRapids-v1.xml
create mode 100644 src/cpu_map/x86_SapphireRapids-v2.xml
create mode 100644 src/cpu_map/x86_SapphireRapids-v3.xml
create mode 100644 src/cpu_map/x86_SierraForest-v1.xml
create mode 100644 src/cpu_map/x86_Skylake-Client-v1.xml
create mode 100644 src/cpu_map/x86_Skylake-Client-v2.xml
create mode 100644 src/cpu_map/x86_Skylake-Client-v3.xml
create mode 100644 src/cpu_map/x86_Skylake-Client-v4.xml
create mode 100644 src/cpu_map/x86_Skylake-Server-v1.xml
create mode 100644 src/cpu_map/x86_Skylake-Server-v2.xml
create mode 100644 src/cpu_map/x86_Skylake-Server-v3.xml
create mode 100644 src/cpu_map/x86_Skylake-Server-v4.xml
create mode 100644 src/cpu_map/x86_Skylake-Server-v5.xml
create mode 100644 src/cpu_map/x86_Snowridge-v1.xml
create mode 100644 src/cpu_map/x86_Snowridge-v2.xml
create mode 100644 src/cpu_map/x86_Snowridge-v3.xml
create mode 100644 src/cpu_map/x86_Snowridge-v4.xml
create mode 100644 src/cpu_map/x86_Westmere-v1.xml
create mode 100644 src/cpu_map/x86_Westmere-v2.xml
--
2.47.0
[PATCH 0/5] network: fix dhcp response packet checksums on virtual networks
by Laine Stump
Patch 4/4 explains the problem and how these patches fix it. Assuming
no problems are found (none so far) this should go into 10.10.0, as it
solves a regression caused by switching the network driver to the
nftables backend.
There was a prior attempt at fixing this that was accepted and pushed,
but bugs were discovered and it was reverted (see Patch 4/4 for
details). This will hopefully be the final attempt.
Please test with as many different guests as possible, both with
nftables backend and iptables backend, and using different guest
interface types, etc.
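For context, the mechanism boils down to tc rules of roughly this shape
on the bridge device (illustrative only — the device name and exact
commands are assumptions on my part; see Patch 4/4 for the real rules
the backend emits):

```
# attach a root qdisc once; per these patches it is not re-added
# if a suitable one already exists
tc qdisc add dev virbr0 root handle 1: htb

# recompute the UDP checksum of DHCP responses (source port 67) so
# guests that validate checksums accept them
tc filter add dev virbr0 parent 1: protocol ip u32 \
    match ip sport 67 0xffff \
    action csum udp
```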
Laine Stump (5):
util: make it optional to clear existing tc qdiscs/filters in
virNetDevBandwidthSet()
util: put the command that adds a tx filter qdisc into a separate
function
util: don't re-add the qdisc used for tx filters if it already exists
util: add new "raw" layer for virFirewallCmd objects
network: add tc filter rule to nftables backend to fix checksum of
DHCP responses
src/libvirt_private.syms | 1 +
src/lxc/lxc_driver.c | 2 +-
src/lxc/lxc_process.c | 2 +-
src/network/bridge_driver.c | 4 +-
src/network/network_nftables.c | 69 +++++++++++++++++
src/qemu/qemu_command.c | 2 +-
src/qemu/qemu_driver.c | 3 +-
src/qemu/qemu_hotplug.c | 4 +-
src/util/virfirewall.c | 74 ++++++++++++-------
src/util/virfirewall.h | 1 +
src/util/virfirewalld.c | 1 +
src/util/virnetdevbandwidth.c | 70 ++++++++++++++++--
src/util/virnetdevbandwidth.h | 4 +
.../forward-dev-linux.nftables | 40 ++++++++++
.../isolated-linux.nftables | 40 ++++++++++
.../nat-default-linux.nftables | 40 ++++++++++
.../nat-ipv6-linux.nftables | 40 ++++++++++
.../nat-ipv6-masquerade-linux.nftables | 40 ++++++++++
.../nat-many-ips-linux.nftables | 40 ++++++++++
.../nat-no-dhcp-linux.nftables | 40 ++++++++++
.../nat-port-range-ipv6-linux.nftables | 40 ++++++++++
.../nat-port-range-linux.nftables | 40 ++++++++++
.../nat-tftp-linux.nftables | 40 ++++++++++
.../route-default-linux.nftables | 40 ++++++++++
tests/virnetdevbandwidthtest.c | 5 +-
25 files changed, 639 insertions(+), 43 deletions(-)
--
2.47.0
[PATCH v3 00/15] Implement support for QCOW2 data files
by Nikolai Barybin
Hello everyone!
With the help of Peter's comprehensive review I finally present the 3rd
version of this series.
Changes since the last revision:
- minor code improvements, including memory leak fixes
- split and regrouped some patches for a more logical structure
- fixed an issue with erroneous detach of a disk with a data file
- completely rewrote the tests
- added some documentation
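For readers new to the feature, a disk whose qcow2 image uses an
external data file might be described with XML of roughly this shape
(hypothetical sketch — the dataFileStore element name comes from the
patch titles, but its exact placement and attributes are defined by the
schema and docs patches in this series):

```xml
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/metadata.qcow2'>
    <!-- hypothetical: the raw data file holds the guest-visible
         data, while the qcow2 file above holds only metadata -->
    <dataFileStore type='file'>
      <source file='/var/lib/libvirt/images/data.raw'/>
      <format type='raw'/>
    </dataFileStore>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>
```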
Nikolai Barybin (15):
conf: add data-file feature and related fields to virStorageSource
Add VIR_STORAGE_FILE_FEATURE_DATA_FILE to virStorageFileFeature enum
conf: schemas: add data-file store to domain rng schema
conf: implement XML parsing/formating for dataFileStore
storage file: add getDataFile function to FileTypeInfo
storage file: add qcow2 data-file path parsing from header
storage file: fill in src->dataFileStore during file probe
security: DAC: handle qcow2 data-file on image label set/restore
security: selinux: handle qcow2 data-file on image label set/restore
security: apparmor: handle qcow2 data-file
qemu: put data-file path to VM's cgroup and namespace
qemu: factor out qemuDomainPrepareStorageSource()
qemu: enable basic qcow2 data-file feature support
tests: add qcow2 data-file tests
docs: formatdomain: describe dataFileStore element of disk
docs/formatdomain.rst | 45 +++++++-
src/conf/domain_conf.c | 100 ++++++++++++++++++
src/conf/domain_conf.h | 13 +++
src/conf/schemas/domaincommon.rng | 15 +++
src/conf/storage_source_conf.c | 11 ++
src/conf/storage_source_conf.h | 5 +
src/qemu/qemu_block.c | 14 +++
src/qemu/qemu_cgroup.c | 13 ++-
src/qemu/qemu_command.c | 5 +
src/qemu/qemu_domain.c | 50 ++++++---
src/qemu/qemu_namespace.c | 7 ++
src/security/security_dac.c | 27 ++++-
src/security/security_selinux.c | 27 ++++-
src/security/virt-aa-helper.c | 4 +
src/storage_file/storage_file_probe.c | 78 ++++++++++----
src/storage_file/storage_source.c | 39 +++++++
src/storage_file/storage_source.h | 4 +
.../qcow2-data-file-in.xml | 86 +++++++++++++++
.../qcow2-data-file-out.xml | 86 +++++++++++++++
tests/qemuxmlactivetest.c | 2 +
...sk-qcow2-datafile-store.x86_64-latest.args | 51 +++++++++
...isk-qcow2-datafile-store.x86_64-latest.xml | 81 ++++++++++++++
.../disk-qcow2-datafile-store.xml | 67 ++++++++++++
tests/qemuxmlconftest.c | 1 +
tests/virstoragetest.c | 98 +++++++++++++++++
tests/virstoragetestdata/out/qcow2-data_file | 20 ++++
.../out/qcow2-qcow2_qcow2-qcow2-data_file | 31 ++++++
27 files changed, 941 insertions(+), 39 deletions(-)
create mode 100644 tests/qemustatusxml2xmldata/qcow2-data-file-in.xml
create mode 100644 tests/qemustatusxml2xmldata/qcow2-data-file-out.xml
create mode 100644 tests/qemuxmlconfdata/disk-qcow2-datafile-store.x86_64-latest.args
create mode 100644 tests/qemuxmlconfdata/disk-qcow2-datafile-store.x86_64-latest.xml
create mode 100644 tests/qemuxmlconfdata/disk-qcow2-datafile-store.xml
create mode 100644 tests/virstoragetestdata/out/qcow2-data_file
create mode 100644 tests/virstoragetestdata/out/qcow2-qcow2_qcow2-qcow2-data_file
--
2.43.5
[PATCH 00/32] Add support for versioned CPU models
by Jiri Denemark
When parsing a domain XML which uses a non-versioned CPU model we want
to replace it with the appropriate version variant similarly to what we
do with machine types. Theoretically QEMU supports per machine type
specification of a version with which a non-versioned CPU model is
replaced, but this is always 1 for all machine types and the
query-machines QMP command does not even report the value.
Luckily, as it turned out after talking to Igor, a single number per
machine type does not really allow for setting it to anything but 1, as
CPU models have different numbers of versions. Each machine type would
need to define a specific version for each CPU model, which would be a
maintenance nightmare. For this reason there's no desire to ever resolve
non-versioned CPU models to anything but v1 in QEMU, and the per machine
type setting will most likely even be removed completely. Thus it is
safe for us to always use v1 as the canonical CPU model.
Some non-versioned CPU models, however, are actually aliases to specific
versions of a base model rather than being base models themselves. These
are the old CPU model variants before model versions were introduced,
e.g., -noTSX, -IBRS, etc. The mapping of these names to versions is
hardcoded and will never change. We do not translate such CPU models to
the corresponding versioned names. This allows us to introduce the
corresponding -v* variants that match the QEMU models rather than the
existing definitions in our CPU map. The guest CPU will be the same
either way, but the way libvirt checks the CPU model compatibility with
the host will be different. The old "partial" check done by libvirt
using the definition from CPU map will still be used for the old names
(we can't change this for compatibility reasons), but the corresponding
versioned variants (as well as all other versions that do not have a
non-versioned alias) will benefit from the recently introduced new
"partial" check which uses only the information we get from QEMU to
check whether a specific CPU definition is usable on the host.
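The canonicalization rule described above can be sketched in a few
lines (a toy illustration, not libvirt's actual code; the alias set
below is a small sample I believe matches QEMU's hardcoded table, not
an exhaustive list):

```python
# Toy sketch of the canonicalization rule: plain base models resolve
# to -v1, while hardcoded alias names (-noTSX, -IBRS, ...) and
# explicitly versioned names are kept untranslated.
KNOWN_ALIASES = {
    "EPYC-IBPB",                 # sample entries only; the real
    "Cascadelake-Server-noTSX",  # mapping is hardcoded in QEMU
}

def canonical_model(name: str) -> str:
    if name in KNOWN_ALIASES:
        return name  # old alias names stay as-is for compatibility
    base, sep, version = name.rpartition("-v")
    if sep and version.isdigit():
        return name  # already versioned, keep it
    return name + "-v1"  # base model canonicalizes to v1
```

With this rule, "EPYC" resolves to "EPYC-v1" while "EPYC-IBPB" and
"EPYC-v3" are left unchanged.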
Other options I considered were:
- replace the -noTSX, -IBRS, ... models with their versioned variants
  - we'd need to translate them back for migration (just as we do
    for -v1) for backward compatibility
  - I found the benefit of the new partial checking when explicitly
    using the versioned variants quite appealing and dropped the
    relevant changes in progress
- do not translate anything, i.e., not even base models to -v1
  - the idea behind translating was to make sure QEMU doesn't suddenly
    start translating the base CPU model to a different version (this
    does not happen with -noTSX etc. as they are hardcoded aliases);
    Igor said they will never do that, so is this still valid?
  - not translating would bring the same benefit of explicitly using
    -v1 vs a non-versioned name
I guess the current mix does not look very consistent (i.e., it's
neither all nor nothing), but it makes sense to me. The question is
whether it also makes sense to others :-)
Jiri Denemark (32):
cpu_x86: Copy added and removed features from ancestor
sync_qemu_features_i386: Add some removed features back
sync_qemu_models_i386: Use f-strings
sync_qemu_models_i386: Do not overwrite existing models
sync_qemu_models_i386: Do not require full path to QEMU's cpu.c
sync_qemu_models_i386: Add support for versioned CPU models
sync_qemu_models_i386: Store extra info in a separate file
sync_qemu_models_i386: Switch to lxml
cpu_map: Group models in index.xml
sync_qemu_models_i386: Update index.xml
sync_qemu_models_i386: Copy signatures from base model
cpu: Introduce virCPUCheckModel
qemu: Canonicalize CPU models
cpu_map: Add versions of SierraForest CPU model
cpu_map: Add versions of GraniteRapids CPU model
cpu_map: Add versions of SapphireRapids CPU model
cpu_map: Add versions of Snowridge CPU model
cpu_map: Add versions of Cooperlake CPU model
cpu_map: Add versions of Icelake-Server CPU model
cpu_map: Add versions of Cascadelake-Server CPU model
cpu_map: Add versions of Skylake-Server CPU model
cpu_map: Add versions of Skylake-Client CPU model
cpu_map: Add versions of Broadwell CPU model
cpu_map: Add versions of Haswell CPU model
cpu_map: Add versions of IvyBridge CPU model
cpu_map: Add versions of SandyBridge CPU model
cpu_map: Add versions of Westmere CPU model
cpu_map: Add versions of Nehalem CPU model
cpu_map: Add versions of EPYC-Milan CPU model
cpu_map: Add versions of EPYC-Rome CPU model
cpu_map: Add versions of EPYC CPU model
cpu_map: Add versions of Dhyana CPU model
src/cpu/cpu.c | 25 +
src/cpu/cpu.h | 8 +
src/cpu/cpu_map.c | 2 +-
src/cpu/cpu_x86.c | 40 +-
src/cpu_map/index.xml | 286 ++--
src/cpu_map/meson.build | 60 +
src/cpu_map/sync_qemu_features_i386.py | 3 +
src/cpu_map/sync_qemu_models_i386.py | 178 ++-
src/cpu_map/x86_Broadwell-v1.xml | 6 +
src/cpu_map/x86_Broadwell-v2.xml | 140 ++
src/cpu_map/x86_Broadwell-v3.xml | 143 ++
src/cpu_map/x86_Broadwell-v4.xml | 141 ++
src/cpu_map/x86_Cascadelake-Server-v1.xml | 6 +
src/cpu_map/x86_Cascadelake-Server-v2.xml | 157 +++
src/cpu_map/x86_Cascadelake-Server-v3.xml | 155 +++
src/cpu_map/x86_Cascadelake-Server-v4.xml | 156 +++
src/cpu_map/x86_Cascadelake-Server-v5.xml | 158 +++
src/cpu_map/x86_Cooperlake-v1.xml | 6 +
src/cpu_map/x86_Cooperlake-v2.xml | 164 +++
src/cpu_map/x86_Dhyana-v1.xml | 6 +
src/cpu_map/x86_Dhyana-v2.xml | 73 ++
src/cpu_map/x86_EPYC-Milan-v1.xml | 6 +
src/cpu_map/x86_EPYC-Milan-v2.xml | 99 ++
src/cpu_map/x86_EPYC-Rome-v1.xml | 6 +
src/cpu_map/x86_EPYC-Rome-v2.xml | 86 ++
src/cpu_map/x86_EPYC-Rome-v3.xml | 86 ++
src/cpu_map/x86_EPYC-Rome-v4.xml | 85 ++
src/cpu_map/x86_EPYC-v1.xml | 6 +
src/cpu_map/x86_EPYC-v2.xml | 75 ++
src/cpu_map/x86_EPYC-v3.xml | 79 ++
src/cpu_map/x86_EPYC-v4.xml | 79 ++
src/cpu_map/x86_GraniteRapids-v1.xml | 6 +
src/cpu_map/x86_Haswell-v1.xml | 6 +
src/cpu_map/x86_Haswell-v2.xml | 134 ++
src/cpu_map/x86_Haswell-v3.xml | 137 ++
src/cpu_map/x86_Haswell-v4.xml | 135 ++
src/cpu_map/x86_Icelake-Server-v1.xml | 6 +
src/cpu_map/x86_Icelake-Server-v2.xml | 158 +++
src/cpu_map/x86_Icelake-Server-v3.xml | 165 +++
src/cpu_map/x86_Icelake-Server-v4.xml | 172 +++
src/cpu_map/x86_Icelake-Server-v5.xml | 174 +++
src/cpu_map/x86_Icelake-Server-v6.xml | 175 +++
src/cpu_map/x86_Icelake-Server-v7.xml | 177 +++
src/cpu_map/x86_IvyBridge-v1.xml | 6 +
src/cpu_map/x86_IvyBridge-v2.xml | 119 ++
src/cpu_map/x86_Nehalem-v1.xml | 6 +
src/cpu_map/x86_Nehalem-v2.xml | 101 ++
src/cpu_map/x86_SandyBridge-v1.xml | 6 +
src/cpu_map/x86_SandyBridge-v2.xml | 110 ++
src/cpu_map/x86_SapphireRapids-v1.xml | 6 +
src/cpu_map/x86_SapphireRapids-v2.xml | 193 +++
src/cpu_map/x86_SapphireRapids-v3.xml | 198 +++
src/cpu_map/x86_SierraForest-v1.xml | 6 +
src/cpu_map/x86_Skylake-Client-v1.xml | 6 +
src/cpu_map/x86_Skylake-Client-v2.xml | 141 ++
src/cpu_map/x86_Skylake-Client-v3.xml | 139 ++
src/cpu_map/x86_Skylake-Client-v4.xml | 141 ++
src/cpu_map/x86_Skylake-Server-v1.xml | 6 +
src/cpu_map/x86_Skylake-Server-v2.xml | 149 +++
src/cpu_map/x86_Skylake-Server-v3.xml | 147 +++
src/cpu_map/x86_Skylake-Server-v4.xml | 148 +++
src/cpu_map/x86_Skylake-Server-v5.xml | 150 +++
src/cpu_map/x86_Snowridge-v1.xml | 6 +
src/cpu_map/x86_Snowridge-v2.xml | 143 ++
src/cpu_map/x86_Snowridge-v3.xml | 145 +++
src/cpu_map/x86_Snowridge-v4.xml | 143 ++
src/cpu_map/x86_Westmere-v1.xml | 6 +
src/cpu_map/x86_Westmere-v2.xml | 105 ++
src/libvirt_private.syms | 1 +
src/qemu/qemu_capabilities.c | 53 +
src/qemu/qemu_capabilities.h | 3 +
src/qemu/qemu_domain.c | 6 +
src/qemu/qemu_postparse.c | 19 +
.../x86_64-cpuid-Atom-P5362-json.xml | 75 +-
.../x86_64-cpuid-Core-i7-8550U-json.xml | 72 +-
.../x86_64-cpuid-EPYC-7502-32-Core-host.xml | 5 +-
.../x86_64-cpuid-EPYC-7601-32-Core-guest.xml | 9 +-
...6_64-cpuid-EPYC-7601-32-Core-ibpb-host.xml | 8 +-
.../x86_64-cpuid-EPYC-7601-32-Core-json.xml | 6 +-
..._64-cpuid-Hygon-C86-7185-32-core-guest.xml | 5 +-
...6_64-cpuid-Hygon-C86-7185-32-core-host.xml | 5 +-
...6_64-cpuid-Hygon-C86-7185-32-core-json.xml | 6 +-
...4-cpuid-Ryzen-7-1800X-Eight-Core-guest.xml | 9 +-
...64-cpuid-Ryzen-7-1800X-Eight-Core-json.xml | 6 +-
.../x86_64-cpuid-Xeon-Platinum-9242-json.xml | 79 +-
...-cpuid-baseline-Cooperlake+Cascadelake.xml | 84 +-
.../x86_64-cpuid-baseline-EPYC+Rome.xml | 6 +-
.../x86_64-cpuid-baseline-Ryzen+Rome.xml | 6 +-
.../domaincapsdata/qemu_5.2.0-q35.x86_64.xml | 369 ++++++
.../domaincapsdata/qemu_5.2.0-tcg.x86_64.xml | 740 ++++++++++-
tests/domaincapsdata/qemu_5.2.0.x86_64.xml | 369 ++++++
.../domaincapsdata/qemu_6.0.0-q35.x86_64.xml | 382 ++++++
.../domaincapsdata/qemu_6.0.0-tcg.x86_64.xml | 798 +++++++++++-
tests/domaincapsdata/qemu_6.0.0.x86_64.xml | 382 ++++++
.../domaincapsdata/qemu_6.1.0-q35.x86_64.xml | 476 +++++++
.../domaincapsdata/qemu_6.1.0-tcg.x86_64.xml | 1003 +++++++++++++-
tests/domaincapsdata/qemu_6.1.0.x86_64.xml | 476 +++++++
.../domaincapsdata/qemu_6.2.0-q35.x86_64.xml | 483 +++++++
.../domaincapsdata/qemu_6.2.0-tcg.x86_64.xml | 1008 +++++++++++++-
tests/domaincapsdata/qemu_6.2.0.x86_64.xml | 483 +++++++
.../domaincapsdata/qemu_7.0.0-q35.x86_64.xml | 509 ++++++++
.../domaincapsdata/qemu_7.0.0-tcg.x86_64.xml | 1018 ++++++++++++++-
tests/domaincapsdata/qemu_7.0.0.x86_64.xml | 509 ++++++++
.../domaincapsdata/qemu_7.1.0-q35.x86_64.xml | 509 ++++++++
.../domaincapsdata/qemu_7.1.0-tcg.x86_64.xml | 1154 +++++++++++++++--
tests/domaincapsdata/qemu_7.1.0.x86_64.xml | 509 ++++++++
.../domaincapsdata/qemu_7.2.0-q35.x86_64.xml | 509 ++++++++
.../qemu_7.2.0-tcg.x86_64+hvf.xml | 830 +++++++++++-
.../domaincapsdata/qemu_7.2.0-tcg.x86_64.xml | 830 +++++++++++-
tests/domaincapsdata/qemu_7.2.0.x86_64.xml | 509 ++++++++
.../domaincapsdata/qemu_8.0.0-q35.x86_64.xml | 550 ++++++++
.../domaincapsdata/qemu_8.0.0-tcg.x86_64.xml | 862 +++++++++++-
tests/domaincapsdata/qemu_8.0.0.x86_64.xml | 550 ++++++++
.../domaincapsdata/qemu_8.1.0-q35.x86_64.xml | 711 +++++++++-
.../domaincapsdata/qemu_8.1.0-tcg.x86_64.xml | 864 +++++++++++-
tests/domaincapsdata/qemu_8.1.0.x86_64.xml | 711 +++++++++-
.../domaincapsdata/qemu_8.2.0-q35.x86_64.xml | 711 +++++++++-
.../domaincapsdata/qemu_8.2.0-tcg.x86_64.xml | 848 +++++++++++-
tests/domaincapsdata/qemu_8.2.0.x86_64.xml | 711 +++++++++-
.../domaincapsdata/qemu_9.0.0-q35.x86_64.xml | 711 +++++++++-
.../domaincapsdata/qemu_9.0.0-tcg.x86_64.xml | 811 +++++++++++-
tests/domaincapsdata/qemu_9.0.0.x86_64.xml | 711 +++++++++-
.../domaincapsdata/qemu_9.1.0-q35.x86_64.xml | 816 +++++++++++-
.../domaincapsdata/qemu_9.1.0-tcg.x86_64.xml | 1099 ++++++++++++++--
tests/domaincapsdata/qemu_9.1.0.x86_64.xml | 816 +++++++++++-
.../domaincapsdata/qemu_9.2.0-q35.x86_64.xml | 816 +++++++++++-
.../domaincapsdata/qemu_9.2.0-tcg.x86_64.xml | 1099 ++++++++++++++--
tests/domaincapsdata/qemu_9.2.0.x86_64.xml | 816 +++++++++++-
.../cpu-Haswell.x86_64-latest.args | 2 +-
.../cpu-Haswell.x86_64-latest.xml | 2 +-
.../cpu-Haswell2.x86_64-latest.args | 2 +-
.../cpu-Haswell2.x86_64-latest.xml | 2 +-
.../cpu-Haswell3.x86_64-latest.args | 2 +-
.../cpu-Haswell3.x86_64-latest.xml | 2 +-
...-Icelake-Server-pconfig.x86_64-latest.args | 2 +-
...u-Icelake-Server-pconfig.x86_64-latest.xml | 2 +-
.../cpu-fallback.x86_64-8.0.0.args | 2 +-
.../cpu-fallback.x86_64-8.0.0.xml | 2 +-
...-host-model-fallback-kvm.x86_64-8.1.0.args | 2 +-
...host-model-fallback-kvm.x86_64-latest.args | 2 +-
...host-model-fallback-tcg.x86_64-latest.args | 2 +-
...cpu-host-model-features.x86_64-latest.args | 2 +-
.../cpu-host-model-kvm.x86_64-8.1.0.args | 2 +-
.../cpu-host-model-kvm.x86_64-latest.args | 2 +-
...ost-model-nofallback-kvm.x86_64-8.1.0.args | 2 +-
...st-model-nofallback-kvm.x86_64-latest.args | 2 +-
...st-model-nofallback-tcg.x86_64-latest.args | 2 +-
.../cpu-host-model-tcg.x86_64-latest.args | 2 +-
.../cpu-nofallback.x86_64-8.0.0.args | 2 +-
.../cpu-nofallback.x86_64-8.0.0.xml | 2 +-
.../cpu-strict1.x86_64-latest.args | 2 +-
.../cpu-strict1.x86_64-latest.xml | 2 +-
.../cpu-translation.x86_64-latest.args | 2 +-
.../cpu-translation.x86_64-latest.xml | 2 +-
154 files changed, 33779 insertions(+), 1095 deletions(-)
create mode 100644 src/cpu_map/x86_Broadwell-v1.xml
create mode 100644 src/cpu_map/x86_Broadwell-v2.xml
create mode 100644 src/cpu_map/x86_Broadwell-v3.xml
create mode 100644 src/cpu_map/x86_Broadwell-v4.xml
create mode 100644 src/cpu_map/x86_Cascadelake-Server-v1.xml
create mode 100644 src/cpu_map/x86_Cascadelake-Server-v2.xml
create mode 100644 src/cpu_map/x86_Cascadelake-Server-v3.xml
create mode 100644 src/cpu_map/x86_Cascadelake-Server-v4.xml
create mode 100644 src/cpu_map/x86_Cascadelake-Server-v5.xml
create mode 100644 src/cpu_map/x86_Cooperlake-v1.xml
create mode 100644 src/cpu_map/x86_Cooperlake-v2.xml
create mode 100644 src/cpu_map/x86_Dhyana-v1.xml
create mode 100644 src/cpu_map/x86_Dhyana-v2.xml
create mode 100644 src/cpu_map/x86_EPYC-Milan-v1.xml
create mode 100644 src/cpu_map/x86_EPYC-Milan-v2.xml
create mode 100644 src/cpu_map/x86_EPYC-Rome-v1.xml
create mode 100644 src/cpu_map/x86_EPYC-Rome-v2.xml
create mode 100644 src/cpu_map/x86_EPYC-Rome-v3.xml
create mode 100644 src/cpu_map/x86_EPYC-Rome-v4.xml
create mode 100644 src/cpu_map/x86_EPYC-v1.xml
create mode 100644 src/cpu_map/x86_EPYC-v2.xml
create mode 100644 src/cpu_map/x86_EPYC-v3.xml
create mode 100644 src/cpu_map/x86_EPYC-v4.xml
create mode 100644 src/cpu_map/x86_GraniteRapids-v1.xml
create mode 100644 src/cpu_map/x86_Haswell-v1.xml
create mode 100644 src/cpu_map/x86_Haswell-v2.xml
create mode 100644 src/cpu_map/x86_Haswell-v3.xml
create mode 100644 src/cpu_map/x86_Haswell-v4.xml
create mode 100644 src/cpu_map/x86_Icelake-Server-v1.xml
create mode 100644 src/cpu_map/x86_Icelake-Server-v2.xml
create mode 100644 src/cpu_map/x86_Icelake-Server-v3.xml
create mode 100644 src/cpu_map/x86_Icelake-Server-v4.xml
create mode 100644 src/cpu_map/x86_Icelake-Server-v5.xml
create mode 100644 src/cpu_map/x86_Icelake-Server-v6.xml
create mode 100644 src/cpu_map/x86_Icelake-Server-v7.xml
create mode 100644 src/cpu_map/x86_IvyBridge-v1.xml
create mode 100644 src/cpu_map/x86_IvyBridge-v2.xml
create mode 100644 src/cpu_map/x86_Nehalem-v1.xml
create mode 100644 src/cpu_map/x86_Nehalem-v2.xml
create mode 100644 src/cpu_map/x86_SandyBridge-v1.xml
create mode 100644 src/cpu_map/x86_SandyBridge-v2.xml
create mode 100644 src/cpu_map/x86_SapphireRapids-v1.xml
create mode 100644 src/cpu_map/x86_SapphireRapids-v2.xml
create mode 100644 src/cpu_map/x86_SapphireRapids-v3.xml
create mode 100644 src/cpu_map/x86_SierraForest-v1.xml
create mode 100644 src/cpu_map/x86_Skylake-Client-v1.xml
create mode 100644 src/cpu_map/x86_Skylake-Client-v2.xml
create mode 100644 src/cpu_map/x86_Skylake-Client-v3.xml
create mode 100644 src/cpu_map/x86_Skylake-Client-v4.xml
create mode 100644 src/cpu_map/x86_Skylake-Server-v1.xml
create mode 100644 src/cpu_map/x86_Skylake-Server-v2.xml
create mode 100644 src/cpu_map/x86_Skylake-Server-v3.xml
create mode 100644 src/cpu_map/x86_Skylake-Server-v4.xml
create mode 100644 src/cpu_map/x86_Skylake-Server-v5.xml
create mode 100644 src/cpu_map/x86_Snowridge-v1.xml
create mode 100644 src/cpu_map/x86_Snowridge-v2.xml
create mode 100644 src/cpu_map/x86_Snowridge-v3.xml
create mode 100644 src/cpu_map/x86_Snowridge-v4.xml
create mode 100644 src/cpu_map/x86_Westmere-v1.xml
create mode 100644 src/cpu_map/x86_Westmere-v2.xml
--
2.47.0
[libvirt PATCH] docs: document external swtpm
by Ján Tomko
When external swtpm support was added back in 9.0.0, I omitted
the update of the XML docs.
Add it now, especially since the 'emulator' backend can now
also use the <source> element.
Signed-off-by: Ján Tomko <jtomko(a)redhat.com>
---
docs/formatdomain.rst | 43 ++++++++++++++++++++++++++++++++++++-------
1 file changed, 36 insertions(+), 7 deletions(-)
diff --git a/docs/formatdomain.rst b/docs/formatdomain.rst
index b3f9f453aa..a5510e82f5 100644
--- a/docs/formatdomain.rst
+++ b/docs/formatdomain.rst
@@ -8140,6 +8140,20 @@ Example: usage of the TPM Emulator
</devices>
...
+Example: usage of external TPM emulator :since:`Since 9.0.0`
+
+::
+
+ ...
+ <devices>
+ <tpm model='tpm-tis'>
+ <backend type='external'>
+ <source type='unix' mode='connect' path='/tmp/path.sock'/>
+ </backend>
+ </tpm>
+ </devices>
+ ...
+
``model``
The ``model`` attribute specifies what device model QEMU provides to the
guest. If no model name is provided, ``tpm-tis`` will automatically be chosen
@@ -8178,6 +8192,12 @@ Example: usage of the TPM Emulator
parameter can be used to enable logging in the emulator backend, and
accepts non-zero integer values.
+ ``external``
+ For this backend, libvirt expects the TPM emulator to be started externally.
+ The path to the unix socket where the emulator is listening is passed
+ via the ``source`` element. Other ``backend`` sub-elements do not apply
in this case, since they are controlled by the emulator command line.
+
``version``
The ``version`` attribute indicates the version of the TPM. This attribute
only works with the ``emulator`` backend. The following versions are
@@ -8190,8 +8210,13 @@ Example: usage of the TPM Emulator
architecture, TPM model and backend.
``source``
- The ``source`` element specifies the location of the TPM state storage . This
- element only works with the ``emulator`` backend.
+ For the ``emulator`` backend, the ``source`` element specifies the location
+ of the TPM state storage. :since:`Since v10.10.0`
+
+ For the ``external`` backend, it specifies the socket of the externally
+ run TPM emulator. :since:`Since v9.0.0`
+
+ This element does not work with the ``passthrough`` backend.
When specified, it is the user's responsibility to prevent files from being
used by multiple VMs or emulators (swtpm will also use advisory locking). If
@@ -8202,14 +8227,18 @@ Example: usage of the TPM Emulator
The following attributes are supported:
``type``
- The type of storage. It's possible to provide "file" to utilize a single
- file or block device where the TPM state will be stored, or "dir" for the
- directory where the files will be stored.
+ For ``external`` backend, only type ``unix`` is supported.
+ For ``emulator`` backend, it's possible to provide ``file`` to utilize
+ a single file or block device where the TPM state will be stored,
+ or ``dir`` for the directory where the files will be stored.
+
+ ``mode``
+ Connection mode for the ``unix`` socket. Only ``connect`` is supported.
+ Can be omitted.
``path``
- The path to the TPM state storage.
+ The path to the TPM state storage, or the unix socket.
- :since:`Since v10.10.0`
``persistent_state``
The ``persistent_state`` attribute indicates whether 'swtpm' TPM state is
--
2.47.0
[PATCH] QEMU: allow hot plugging the virtio-serial-pci device
by shenjiatong
The virtio-serial-pci device is hot pluggable; loosen the restriction and
allow users to hot plug it.
---
src/qemu/qemu_hotplug.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/src/qemu/qemu_hotplug.c b/src/qemu/qemu_hotplug.c
index bddd553c88..55512476e4 100644
--- a/src/qemu/qemu_hotplug.c
+++ b/src/qemu/qemu_hotplug.c
@@ -837,7 +837,8 @@ qemuDomainAttachControllerDevice(virDomainObj *vm,
{ .controller = controller } };
bool releaseaddr = false;
- if (controller->type != VIR_DOMAIN_CONTROLLER_TYPE_SCSI) {
+ if (controller->type != VIR_DOMAIN_CONTROLLER_TYPE_SCSI &&
+ controller->type != VIR_DOMAIN_CONTROLLER_TYPE_VIRTIO_SERIAL) {
virReportError(VIR_ERR_OPERATION_UNSUPPORTED,
_("'%1$s' controller cannot be hot plugged."),
virDomainControllerTypeToString(controller->type));
--
2.43.0
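For context (not part of the patch): once this restriction is lifted, a virtio-serial controller could presumably be hot plugged with ``virsh attach-device``. The device XML below is an illustrative sketch only; the index is an example and the PCI address is left for libvirt to assign.

```
<!-- controller.xml: hypothetical example, not taken from the patch -->
<controller type='virtio-serial' index='1'/>
```

Usage would be along the lines of ``virsh attach-device <domain> controller.xml --live``.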
[PATCH] qemu: qemu_snapshot: Fix a libvirtd crash when deleting a snapshot
by jungle man
qemuDomainDiskByName() can return a NULL pointer on failure, but this
returned value in qemuSnapshotDeleteValidate is not checked.
It will make libvirtd crash.
diff --git a/src/qemu/qemu_snapshot.c b/src/qemu/qemu_snapshot.c
index 5b3aadcbf0..52312b4a7b 100644
--- a/src/qemu/qemu_snapshot.c
+++ b/src/qemu/qemu_snapshot.c
@@ -4235,8 +4235,11 @@ qemuSnapshotDeleteValidate(virDomainObj *vm,
virDomainDiskDef *vmdisk = NULL;
virDomainDiskDef *disk = NULL;
-    vmdisk = qemuDomainDiskByName(vm->def, snapDisk->name);
-    disk = qemuDomainDiskByName(snapdef->parent.dom, snapDisk->name);
+    if (!(vmdisk = qemuDomainDiskByName(vm->def, snapDisk->name)))
+        return -1;
+
+    if (!(disk = qemuDomainDiskByName(snapdef->parent.dom, snapDisk->name)))
+        return -1;
if (!virStorageSourceIsSameLocation(vmdisk->src, disk->src)) {
virReportError(VIR_ERR_OPERATION_UNSUPPORTED,
[PATCH] docs: formatsecret: Fix an example of secret-set-value
by Han Han
The previous example causes an error like:
error: Options --file and --base64 are mutually exclusive
Reported-by: Yanqiu Zhang <yanqzhan(a)redhat.com>
Signed-off-by: Han Han <hhan(a)redhat.com>
---
docs/formatsecret.rst | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/docs/formatsecret.rst b/docs/formatsecret.rst
index aeeb67610d..606a9cc587 100644
--- a/docs/formatsecret.rst
+++ b/docs/formatsecret.rst
@@ -318,7 +318,7 @@ be omitted if the file contents are base64-encoded.
::
- # virsh secret-set-value 6dd3e4a5-1d76-44ce-961f-f119f5aad935 --file --plain secretinfile
+ # virsh secret-set-value 6dd3e4a5-1d76-44ce-961f-f119f5aad935 --file secretinfile --plain
Secret value set
**WARNING** The following approach is **insecure** and deprecated. The secret
--
2.47.0
[PATCH 0/4] qemu: Adapt to deprecation of 'reconnect' field for 'stream' netdevs and update qemu capabilities
by Peter Krempa
Peter Krempa (4):
qemu: capabilities: Restore grouping in 'virQEMUCapsQMPSchemaQueries'
qemu: capabilities: Introduce
QEMU_CAPS_NETDEV_STREAM_RECONNECT_MILISECONDS
qemu: passt: Use 'reconnect-ms' instead of 'reconnect' with new qemus
tests: qemucapabilitiesdata: Update 'x86_64' capabilities for the
qemu-9.2 dev cycle
src/qemu/qemu_capabilities.c | 18 +-
src/qemu/qemu_capabilities.h | 1 +
src/qemu/qemu_passt.c | 14 +-
.../qemucapabilitiesdata/caps_9.2.0_s390x.xml | 1 +
.../caps_9.2.0_x86_64.replies | 4407 +++++++++--------
.../caps_9.2.0_x86_64.xml | 310 +-
.../net-user-passt.x86_64-latest.args | 2 +-
7 files changed, 2523 insertions(+), 2230 deletions(-)
--
2.47.0
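For readers unfamiliar with the deprecation this series adapts to: QEMU is replacing the whole-second ``reconnect`` option of stream netdevs with the millisecond-granularity ``reconnect-ms``. The command-line fragments below are an illustrative sketch; the socket path, id and timeout values are examples only.

```
# Older QEMU ('reconnect', whole seconds):
-netdev stream,id=net0,addr.type=unix,addr.path=/tmp/passt.sock,reconnect=5

# QEMU with the new option ('reconnect-ms', milliseconds):
-netdev stream,id=net0,addr.type=unix,addr.path=/tmp/passt.sock,reconnect-ms=5000
```

The capability flag added in patch 2/4 is what lets libvirt pick the new spelling only when the running QEMU advertises it.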