[PATCH] NEWS: mention nbdkit config option
by Jonathon Jongsma
Signed-off-by: Jonathon Jongsma <jjongsma(a)redhat.com>
---
NEWS.rst | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/NEWS.rst b/NEWS.rst
index af3c4906df..8088097ad6 100644
--- a/NEWS.rst
+++ b/NEWS.rst
@@ -37,6 +37,16 @@ v10.0.0 (unreleased)
``virDomainBlockResize`` allows resizing a block-device backed ``raw`` disk
of a VM without the need to specify the full size of the block device.
+ * qemu: add runtime configuration option for nbdkit
+
+ Since the new nbdkit support requires a recent SELinux policy that is not
+ widely available yet, it is now possible to build libvirt with nbdkit
+ support for remote disks but have it disabled at runtime. This behavior
+ is controlled via the ``storage_use_nbdkit`` option of the qemu driver
+ configuration file. The option currently defaults to disabled, but this
+ may change in a future release and can be customized with the
+ ``nbdkit_config_default`` build option.
+
* **Improvements**
* qemu: Improve migration XML use when persisting VM on destination
--
2.43.0
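The runtime toggle described in the NEWS entry above lives in the qemu driver configuration file. A minimal sketch, assuming the conventional /etc/libvirt/qemu.conf location; the comment text is illustrative, only the ``storage_use_nbdkit`` and ``nbdkit_config_default`` names come from the patch:

```
# /etc/libvirt/qemu.conf  (sketch)
#
# Use nbdkit to serve remote disk sources to qemu. Requires a build with
# nbdkit support and a recent enough SELinux policy. The built-in default
# can be changed at build time via the nbdkit_config_default option.
storage_use_nbdkit = 1
```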
[v3 0/4] Support for dirty-limit live migration
by Hyman Huang
v3:
- adjust the parameter check location as suggested by Michal
- mark the VIR_MIGRATE_DIRTY_LIMIT flag since 10.0.0
- rebase on master
Thanks Michal for the comments.
Please review,
Yong.
v1:
The dirty-limit functionality for live migration was
introduced in qemu >= 8.1.
In the live migration scenario, it implements forced
convergence using the dirty-limit approach, which results
in more reliable read performance.
This patchset adds a straightforward dirty-limit capability
for live migration. Users might not care about other
dirty-limit arguments like "x-vcpu-dirty-limit-period"
or "vcpu-dirty-limit", so we do not expose them to libvirt
and keep the default configurations and values in place.
For more details about dirty-limit, please see the following
reference:
https://lore.kernel.org/qemu-devel/169024923116.19090.10825599068950039132-0(a)git.sr.ht/
Hyman Huang (4):
Add VIR_MIGRATE_DIRTY_LIMIT flag
qemu_migration: Implement VIR_MIGRATE_DIRTY_LIMIT flag
virsh: Add support for VIR_MIGRATE_DIRTY_LIMIT flag
NEWS: document support for dirty-limit live migration
NEWS.rst | 8 ++++++++
docs/manpages/virsh.rst | 10 +++++++++-
include/libvirt/libvirt-domain.h | 5 +++++
src/libvirt-domain.c | 8 ++++++++
src/qemu/qemu_migration.c | 8 ++++++++
src/qemu/qemu_migration.h | 1 +
src/qemu/qemu_migration_params.c | 6 ++++++
src/qemu/qemu_migration_params.h | 1 +
tools/virsh-domain.c | 6 ++++++
9 files changed, 52 insertions(+), 1 deletion(-)
--
2.39.1
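Once the series lands, enabling the capability from virsh would presumably look like the sketch below. Treat this as pseudocode: only the VIR_MIGRATE_DIRTY_LIMIT flag is named by the series, so the ``--dirty-limit`` option name is a hypothetical stand-in.

```
# Hypothetical invocation sketch (option name not confirmed by the series):
virsh migrate --live --dirty-limit demo-guest qemu+ssh://dst.example.com/system
```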
[PATCH] NEWS: Document my contributions for upcoming release
by Michal Privoznik
Signed-off-by: Michal Privoznik <mprivozn(a)redhat.com>
---
NEWS.rst | 27 +++++++++++++++++++++++++++
1 file changed, 27 insertions(+)
diff --git a/NEWS.rst b/NEWS.rst
index af3c4906df..e8cc89a2ee 100644
--- a/NEWS.rst
+++ b/NEWS.rst
@@ -55,6 +55,10 @@ v10.0.0 (unreleased)
Libvirt alleviates this by automatically adding a ``<slice>`` to match the
size of the source image rather than failing the migration.
+ * test driver: Support for hotplug/hotunplug of PCI devices
+
+ The test driver now supports basic hotplug and hotunplug of PCI devices.
+
* **Bug fixes**
* qemu: Various migration bug fixes and debuggability improvement
@@ -63,6 +67,29 @@ v10.0.0 (unreleased)
migration arguments and XMLs and modifies error reporting for better
debugging.
+ * conf: Restore setting default bus for input devices
+
+ Because of a regression, starting with 9.3.0 libvirt did not autofill
+ the bus for input devices. With this release the regression was
+ identified and fixed.
+
+ * qemu: Relax check for memory device coldplug
+
+ Because of a too aggressive check, a virtio-mem memory device could not
+ be cold plugged. This is now fixed.
+
+ * qemu: Be less aggressive when dropping channel source paths
+
+ Another regression (introduced in 9.7.0) is resolved, in which libvirt
+ was too aggressive when dropping parsed paths for <channel/> sources.
+
+ * qemuDomainChangeNet: Reflect trustGuestRxFilters change
+
+ On device-update, when a user requested a change of trustGuestRxFilters
+ for a domain's <interface/>, libvirt did nothing. It neither threw an
+ error nor reflected the change. Starting with this release, the change
+ is reflected.
+
v9.10.0 (2023-12-01)
====================
--
2.41.0
[PATCH 0/2] ci: Fix upstream QEMU integration job
by Andrea Bolognani
Or at least, hopefully it does!
This is completely untested, as validating changes to the integration
tests is difficult/annoying. The job's currently broken anyway, so
it's not like these changes could possibly make things worse :)
Andrea Bolognani (2):
ci: Fix .integration_tests_upstream_qemu
ci: Do more as part of .qemu-build-template
ci/integration-template.yml | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
--
2.43.0
[PATCH rfcv3 00/11] LIBVIRT: X86: TDX support
by Zhenzhong Duan
Hi,
This series brings libvirt the x86 TDX support.
* What's TDX?
TDX stands for Trust Domain Extensions which isolates VMs from
the virtual-machine manager (VMM)/hypervisor and any other software on
the platform.
To support TDX, multiple software components, not only KVM but also QEMU,
guest Linux and virtual bios, need to be updated. For more details, please
check link[1], there are TDX spec links and public repository link at github
for each software component.
This patchset covers another software component, extending libvirt to support TDX,
so that one can start a VM at a high level rather than running qemu directly.
* Misc
As QEMU uses a software-emulated way to reset the guest, which isn't supported
by a TDX guest for security reasons, we add a new way to emulate the reset for
a TDX guest, called "hard reboot". We achieve this by killing the old qemu and
starting a new one.
Complete code can be found at [1], matching qemu code can be found at [2].
There are some new properties for the tdx-guest object, i.e. `mrconfigid`, `mrowner`,
`mrownerconfig` and `debug`, which aren't in the matching qemu[2] yet. I keep them
intentionally as they will be implemented in qemu as an extension series of [2].
* Test
start/stop/reboot with virsh
stop/reboot trigger in guest
stop with on_poweroff=destroy/restart
reboot with on_reboot=destroy/restart
* Patch organization
- patch 1-3: Support query of TDX capabilities.
- patch 4-6: Add TDX type to launchsecurity framework.
- patch 7-11: Add hard reboot support to TDX guest
[1] https://github.com/intel/libvirt-tdx/commits/tdx_for_upstream_rfcv3
[2] https://github.com/intel/qemu-tdx/tree/tdx-qemu-upstream-v3
Thanks
Zhenzhong
Changelog:
rfcv3:
- Change to generate qemu cmdline with -bios
- drop firmware auto match as -bios is used
- add a hard reboot method to reboot TDX guest
rfcv2:
- give up using qmp cmd and check TDX directly on host for TDX capabilities.
- use launchsecurity framework to support TDX
- use <os>.<loader> for general loader
- add auto firmware match feature for TDX
An example TDVF firmware description file 70-edk2-x86_64-tdx.json:
{
  "description": "UEFI firmware for x86_64, supporting Intel TDX",
  "interface-types": [
    "uefi"
  ],
  "mapping": {
    "device": "generic",
    "filename": "/usr/share/OVMF/OVMF_CODE-tdx.fd"
  },
  "targets": [
    {
      "architecture": "x86_64",
      "machines": [
        "pc-q35-*"
      ]
    }
  ],
  "features": [
    "intel-tdx",
    "verbose-dynamic"
  ],
  "tags": []
}
rfcv2:
https://www.mail-archive.com/libvir-list@redhat.com/msg219378.html
Chenyi Qiang (3):
qemu: add hard reboot in QEMU driver
qemu: make hard reboot as the TDX default reboot mode
virsh: add new option "timekeep" to keep virsh console alive
Zhenzhong Duan (8):
qemu: Check if INTEL Trust Domain Extension support is enabled
qemu: Add TDX capability
conf: expose TDX feature in domain capabilities
conf: add tdx as launch security type
qemu: Add command line and validation for TDX type
qemu: force special parameters enabled for TDX guest
qemu: Extend hard reboot in Qemu driver
conf: Add support to keep same domid for hard reboot
docs/formatdomaincaps.rst | 1 +
include/libvirt/libvirt-domain.h | 2 +
src/conf/domain_capabilities.c | 1 +
src/conf/domain_capabilities.h | 1 +
src/conf/domain_conf.c | 50 ++++++++++++++++
src/conf/domain_conf.h | 11 ++++
src/conf/schemas/domaincaps.rng | 9 +++
src/conf/schemas/domaincommon.rng | 34 +++++++++++
src/conf/virconftypes.h | 2 +
src/qemu/qemu_capabilities.c | 38 +++++++++++-
src/qemu/qemu_capabilities.h | 1 +
src/qemu/qemu_command.c | 29 +++++++++
src/qemu/qemu_domain.c | 18 ++++++
src/qemu/qemu_domain.h | 4 ++
src/qemu/qemu_driver.c | 85 ++++++++++++++++++++------
src/qemu/qemu_firmware.c | 1 +
src/qemu/qemu_monitor.c | 19 +++++-
src/qemu/qemu_monitor.h | 2 +-
src/qemu/qemu_monitor_json.c | 6 +-
src/qemu/qemu_namespace.c | 1 +
src/qemu/qemu_process.c | 99 ++++++++++++++++++++++++++++++-
src/qemu/qemu_validate.c | 18 ++++++
tools/virsh-console.c | 3 +
tools/virsh-domain.c | 64 +++++++++++++++-----
tools/virsh.h | 1 +
25 files changed, 463 insertions(+), 37 deletions(-)
--
2.34.1
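Since the series adds ``tdx`` as a launch security type and passes a general loader via ``<os>.<loader>`` (mapped to qemu's -bios), guest XML would plausibly look like the sketch below. The exact element and attribute shapes are assumptions inferred from the existing SEV launchSecurity syntax and this cover letter:

```xml
<domain type='kvm'>
  <os>
    <type arch='x86_64' machine='q35'>hvm</type>
    <!-- general loader, passed to qemu as -bios -->
    <loader>/usr/share/OVMF/OVMF_CODE-tdx.fd</loader>
  </os>
  <!-- hypothetical shape; measurement properties such as mrconfigid,
       mrowner and mrownerconfig are not yet in the matching qemu -->
  <launchSecurity type='tdx'/>
</domain>
```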
Entering freeze for libvirt-10.0.0
by Jiri Denemark
I have just tagged v10.0.0-rc1 in the repository and pushed signed
tarballs and source RPMs to https://download.libvirt.org/
Please give the release candidate some testing and in case you find a
serious issue which should have a fix in the upcoming release, feel
free to reply to this thread to make sure the issue is more visible.
If you have not done so yet, please update NEWS.rst to document any
significant change you made since the last release.
Thanks,
Jirka
[PATCH 0/3] ci: Fixes for integration jobs
by Andrea Bolognani
Andrea Bolognani (3):
ci: Fix upstream-qemu job definitions
ci: Move upstream-qemu job to Fedora 39
ci: Add notes for integration jobs
ci/integration.yml | 58 +++++++++++++++++++++++++++++++---------------
1 file changed, 39 insertions(+), 19 deletions(-)
--
2.43.0
[PATCH] NEWS: Mention migration fixes and iothread mapping
by Peter Krempa
Signed-off-by: Peter Krempa <pkrempa(a)redhat.com>
---
NEWS.rst | 35 +++++++++++++++++++++++++++++++++++
1 file changed, 35 insertions(+)
diff --git a/NEWS.rst b/NEWS.rst
index 9e538a8f57..af3c4906df 100644
--- a/NEWS.rst
+++ b/NEWS.rst
@@ -24,10 +24,45 @@ v10.0.0 (unreleased)
This should enable faster migration of memory pages that the destination
tries to read before they are migrated from the source.
+ * qemu: Add support for mapping iothreads to virtqueues of ``virtio-blk`` devices
+
+ QEMU added the possibility to map multiple ``iothreads`` to a single
+ ``virtio-blk`` device and map them even to specific virtqueues. Libvirt
+ adds a ``<iothreads>`` subelement of the ``<disk> <driver>`` element that
+ users can use to configure the mapping.
+
+ * qemu: Allow automatic resize of block-device-backed disk to full size of the device
+
+ The new flag ``VIR_DOMAIN_BLOCK_RESIZE_CAPACITY`` for
+ ``virDomainBlockResize`` allows resizing a block-device backed ``raw`` disk
+ of a VM without the need to specify the full size of the block device.
+
* **Improvements**
+ * qemu: Improve migration XML use when persisting VM on destination
+
+ When migrating a VM with a custom migration XML, use it as the base for
+ persisting the VM on the destination, as users could have changed details
+ (without breaking the ABI) that would prevent a subsequent start with the old XML.
+
+ * qemu: Simplify non-shared storage migration to ``raw`` block devices
+
+ The phase of copying storage during migration without shared storage
+ requires that both the source and destination image are identical in size.
+ This may not be possible if the destination is backed by a block device
+ and the source image size is not a multiple of the block device block size.
+
+ Libvirt alleviates this by automatically adding a ``<slice>`` to match the
+ size of the source image rather than failing the migration.
+
* **Bug fixes**
+ * qemu: Various migration bug fixes and debuggability improvement
+
+ This release fixes multiple bugs in virsh and libvirt in handling of
+ migration arguments and XMLs and modifies error reporting for better
+ debugging.
+
v9.10.0 (2023-12-01)
====================
--
2.43.0
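For the iothread-to-virtqueue mapping item above, the ``<iothreads>`` subelement sits under the disk's ``<driver>`` element. A sketch of what such a mapping could look like; the inner ``<iothread>``/``<queue>`` names and ids are illustrative assumptions, as the NEWS entry only names the ``<iothreads>`` subelement itself:

```xml
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'>
    <!-- map two iothreads; pin iothread 2 to specific virtqueues -->
    <iothreads>
      <iothread id='1'/>
      <iothread id='2'>
        <queue id='0'/>
        <queue id='1'/>
      </iothread>
    </iothreads>
  </driver>
  <source file='/var/lib/libvirt/images/demo.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>
```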
[PATCH] conf: domain_conf: cleanup def in case of errors
by Shaleen Bathla
Just like in the rest of the function virDomainFSDefParseXML,
use goto error so that def will be cleaned up in error cases.
Signed-off-by: Shaleen Bathla <shaleen.bathla(a)oracle.com>
---
src/conf/domain_conf.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index be57a1981e7d..5d55d2acdace 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -8866,23 +8866,23 @@ virDomainFSDefParseXML(virDomainXMLOption *xmlopt,
goto error;
if ((n = virXPathNodeSet("./idmap/uid", ctxt, &uid_nodes)) < 0)
- return NULL;
+ goto error;
if (n) {
def->idmap.uidmap = virDomainIdmapDefParseXML(ctxt, uid_nodes, n);
if (!def->idmap.uidmap)
- return NULL;
+ goto error;
def->idmap.nuidmap = n;
}
if ((n = virXPathNodeSet("./idmap/gid", ctxt, &gid_nodes)) < 0)
- return NULL;
+ goto error;
if (n) {
def->idmap.gidmap = virDomainIdmapDefParseXML(ctxt, gid_nodes, n);
if (!def->idmap.gidmap)
- return NULL;
+ goto error;
def->idmap.ngidmap = n;
}
--
2.39.3
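The pattern the patch restores can be sketched in isolation. This is a hypothetical mirror of the structure (DemoDef, demo_parse and demo_def_free are made-up names, not libvirt API): every failure path jumps to a single cleanup label so the half-built object is freed, instead of `return NULL` leaking it.

```c
#include <stdlib.h>

/* Hypothetical stand-in for the def object built by virDomainFSDefParseXML. */
typedef struct {
    int *uidmap;
    int *gidmap;
} DemoDef;

void demo_def_free(DemoDef *def)
{
    if (!def)
        return;
    free(def->uidmap);
    free(def->gidmap);
    free(def);
}

/* fail_at: 0 = success, 1 = fail in uidmap step, 2 = fail in gidmap step. */
DemoDef *demo_parse(int fail_at)
{
    DemoDef *def = calloc(1, sizeof(*def));

    if (!def)
        return NULL;

    if (fail_at == 1)
        goto error;           /* NOT `return NULL`: def must be freed */
    if (!(def->uidmap = calloc(4, sizeof(int))))
        goto error;

    if (fail_at == 2)
        goto error;
    if (!(def->gidmap = calloc(4, sizeof(int))))
        goto error;

    return def;

 error:
    demo_def_free(def);       /* single cleanup point, no leaks */
    return NULL;
}
```

The fix in the patch is exactly this substitution: the early `return NULL` statements skipped the cleanup label, leaking `def` and anything already attached to it.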
[PATCH 00/11] qemu: Add support for CPU clusters
by Andrea Bolognani
Edited for brevity. Full version:
$ git fetch https://gitlab.com/abologna/libvirt.git cpu-clusters
Depends on:
https://lists.libvirt.org/archives/list/devel@lists.libvirt.org/thread/ID...
Andrea Bolognani (11):
tests: Add hostcpudata for machine with CPU clusters
conf: Report CPU clusters in capabilities XML
conf: Allow specifying CPU clusters
qemu: Introduce QEMU_CAPS_SMP_CLUSTERS
qemu: Use CPU clusters for guests
tests: Add test case for CPU clusters
qemu: Make monitor aware of CPU clusters
tests: Verify handling of CPU clusters in QMP data
docs: Tweak documentation for CPU topology
docs: Document CPU clusters
news: Mention support for CPU clusters
NEWS.rst | 6 +
docs/formatcaps.rst | 2 +-
docs/formatdomain.rst | 24 +-
src/bhyve/bhyve_command.c | 5 +
src/conf/capabilities.c | 5 +-
src/conf/capabilities.h | 1 +
src/conf/cpu_conf.c | 16 +-
src/conf/cpu_conf.h | 1 +
src/conf/domain_conf.c | 1 +
src/conf/schemas/capability.rng | 3 +
src/conf/schemas/cputypes.rng | 5 +
src/cpu/cpu.c | 1 +
src/libvirt_linux.syms | 1 +
src/libxl/libxl_capabilities.c | 1 +
src/qemu/qemu_capabilities.c | 2 +
src/qemu/qemu_capabilities.h | 1 +
src/qemu/qemu_command.c | 7 +
src/qemu/qemu_domain.c | 3 +-
src/qemu/qemu_monitor.c | 2 +
src/qemu/qemu_monitor.h | 2 +
src/qemu/qemu_monitor_json.c | 5 +
src/util/virhostcpu.c | 22 +
src/util/virhostcpu.h | 1 +
src/vmx/vmx.c | 7 +
tests/capabilityschemadata/caps-qemu-kvm.xml | 32 +-
.../x86_64-host+guest,model486-result.xml | 2 +-
.../x86_64-host+guest,models-result.xml | 2 +-
.../cputestdata/x86_64-host+guest-result.xml | 2 +-
tests/cputestdata/x86_64-host+guest.xml | 2 +-
.../x86_64-host+host-model-nofallback.xml | 2 +-
...t-Haswell-noTSX+Haswell,haswell-result.xml | 2 +-
...ell-noTSX+Haswell-noTSX,haswell-result.xml | 2 +-
...ost-Haswell-noTSX+Haswell-noTSX-result.xml | 2 +-
.../x86_64-host-worse+guest-result.xml | 2 +-
.../qemucapabilitiesdata/caps_7.1.0_ppc64.xml | 1 +
.../caps_7.1.0_x86_64.xml | 1 +
tests/qemucapabilitiesdata/caps_7.2.0_ppc.xml | 1 +
.../caps_7.2.0_x86_64+hvf.xml | 1 +
.../caps_7.2.0_x86_64.xml | 1 +
.../caps_8.0.0_riscv64.xml | 1 +
.../caps_8.0.0_x86_64.xml | 1 +
.../qemucapabilitiesdata/caps_8.1.0_s390x.xml | 1 +
.../caps_8.1.0_x86_64.xml | 1 +
.../caps_8.2.0_aarch64.xml | 1 +
.../caps_8.2.0_x86_64.xml | 1 +
.../caps_9.0.0_x86_64.xml | 1 +
.../ppc64-modern-bulk-result-conf.xml | 2 +-
.../ppc64-modern-bulk-result-live.xml | 2 +-
.../ppc64-modern-individual-result-conf.xml | 2 +-
.../ppc64-modern-individual-result-live.xml | 2 +-
.../x86-modern-bulk-result-conf.xml | 2 +-
.../x86-modern-bulk-result-live.xml | 2 +-
.../x86-modern-individual-add-result-conf.xml | 2 +-
.../x86-modern-individual-add-result-live.xml | 2 +-
...imeout+graphics-spice-timeout-password.xml | 2 +-
.../qemuhotplug-graphics-spice-timeout.xml | 2 +-
...torjson-cpuinfo-aarch64-clusters-cpus.json | 88 +
...json-cpuinfo-aarch64-clusters-hotplug.json | 171 ++
...umonitorjson-cpuinfo-aarch64-clusters.data | 108 +
tests/qemumonitorjsontest.c | 9 +-
.../cpu-hotplug-startup.x86_64-latest.args | 2 +-
.../cpu-numa-disjoint.x86_64-latest.args | 2 +-
.../cpu-numa-disordered.x86_64-latest.args | 2 +-
.../cpu-numa-memshared.x86_64-latest.args | 2 +-
...-numa-no-memory-element.x86_64-latest.args | 2 +-
.../cpu-numa1.x86_64-latest.args | 2 +-
.../cpu-numa2.x86_64-latest.args | 2 +-
.../cpu-topology1.x86_64-latest.args | 2 +-
.../cpu-topology2.x86_64-latest.args | 2 +-
.../cpu-topology3.x86_64-latest.args | 2 +-
.../cpu-topology4.x86_64-latest.args | 2 +-
...args => cpu-topology5.aarch64-latest.args} | 12 +-
tests/qemuxml2argvdata/cpu-topology5.xml | 17 +
...memory-no-numa-topology.x86_64-latest.args | 2 +-
.../fd-memory-no-numa-topology.xml | 2 +-
...fd-memory-numa-topology.x86_64-latest.args | 2 +-
.../fd-memory-numa-topology.xml | 2 +-
...d-memory-numa-topology2.x86_64-latest.args | 2 +-
.../fd-memory-numa-topology2.xml | 2 +-
...d-memory-numa-topology3.x86_64-latest.args | 2 +-
.../fd-memory-numa-topology3.xml | 2 +-
.../hugepages-nvdimm.x86_64-latest.args | 2 +-
tests/qemuxml2argvdata/hugepages-nvdimm.xml | 2 +-
...memory-default-hugepage.x86_64-latest.args | 2 +-
.../memfd-memory-default-hugepage.xml | 2 +-
.../memfd-memory-numa.x86_64-latest.args | 2 +-
tests/qemuxml2argvdata/memfd-memory-numa.xml | 2 +-
...emory-hotplug-dimm-addr.x86_64-latest.args | 2 +-
.../memory-hotplug-dimm.x86_64-latest.args | 2 +-
...memory-hotplug-multiple.x86_64-latest.args | 2 +-
...y-hotplug-nvdimm-access.x86_64-latest.args | 2 +-
.../memory-hotplug-nvdimm-access.xml | 2 +-
...ry-hotplug-nvdimm-align.x86_64-latest.args | 2 +-
.../memory-hotplug-nvdimm-align.xml | 2 +-
...ry-hotplug-nvdimm-label.x86_64-latest.args | 2 +-
.../memory-hotplug-nvdimm-label.xml | 2 +-
...ory-hotplug-nvdimm-pmem.x86_64-latest.args | 2 +-
.../memory-hotplug-nvdimm-pmem.xml | 2 +-
...-nvdimm-ppc64-abi-update.ppc64-latest.args | 2 +-
...ory-hotplug-nvdimm-ppc64.ppc64-latest.args | 2 +-
...hotplug-nvdimm-readonly.x86_64-latest.args | 2 +-
.../memory-hotplug-nvdimm-readonly.xml | 2 +-
.../memory-hotplug-nvdimm.x86_64-latest.args | 2 +-
.../memory-hotplug-nvdimm.xml | 2 +-
...mory-hotplug-virtio-mem.x86_64-latest.args | 2 +-
.../memory-hotplug-virtio-mem.xml | 2 +-
...ory-hotplug-virtio-pmem.x86_64-latest.args | 2 +-
.../memory-hotplug-virtio-pmem.xml | 2 +-
.../memory-hotplug.x86_64-latest.args | 2 +-
...auto-memory-vcpu-cpuset.x86_64-latest.args | 2 +-
...no-cpuset-and-placement.x86_64-latest.args | 2 +-
...d-auto-vcpu-no-numatune.x86_64-latest.args | 2 +-
...to-vcpu-static-numatune.x86_64-latest.args | 2 +-
...static-memory-auto-vcpu.x86_64-latest.args | 2 +-
...static-vcpu-no-numatune.x86_64-latest.args | 2 +-
.../qemuxml2argvdata/numad.x86_64-latest.args | 2 +-
...ne-auto-nodeset-invalid.x86_64-latest.args | 2 +-
.../pci-expander-bus.x86_64-latest.args | 2 +-
.../pcie-expander-bus.x86_64-latest.args | 2 +-
.../pseries-phb-numa-node.ppc64-latest.args | 2 +-
tests/qemuxml2argvtest.c | 1 +
.../cpu-numa-disjoint.x86_64-latest.xml | 2 +-
.../cpu-numa-disordered.x86_64-latest.xml | 2 +-
.../cpu-numa-memshared.x86_64-latest.xml | 2 +-
...u-numa-no-memory-element.x86_64-latest.xml | 2 +-
.../cpu-numa1.x86_64-latest.xml | 2 +-
.../cpu-numa2.x86_64-latest.xml | 2 +-
...memory-hotplug-dimm-addr.x86_64-latest.xml | 2 +-
.../memory-hotplug-dimm.x86_64-latest.xml | 2 +-
.../memory-hotplug-multiple.x86_64-latest.xml | 2 +-
...g-nvdimm-ppc64-abi-update.ppc64-latest.xml | 2 +-
...mory-hotplug-nvdimm-ppc64.ppc64-latest.xml | 2 +-
.../memory-hotplug.x86_64-latest.xml | 2 +-
...-auto-memory-vcpu-cpuset.x86_64-latest.xml | 2 +-
...-no-cpuset-and-placement.x86_64-latest.xml | 2 +-
...ad-auto-vcpu-no-numatune.x86_64-latest.xml | 2 +-
...-static-vcpu-no-numatune.x86_64-latest.xml | 2 +-
.../pci-expander-bus.x86_64-latest.xml | 2 +-
.../pcie-expander-bus.x86_64-latest.xml | 2 +-
.../pseries-phb-numa-node.ppc64-latest.xml | 2 +-
.../linux-basic-clusters/system/cpu | 1 +
.../linux-basic-clusters/system/node | 1 +
.../vircaps-aarch64-basic-clusters.xml | 255 +++
.../vircaps2xmldata/vircaps-aarch64-basic.xml | 32 +-
.../vircaps-x86_64-basic-dies.xml | 24 +-
.../vircaps2xmldata/vircaps-x86_64-basic.xml | 32 +-
.../vircaps2xmldata/vircaps-x86_64-caches.xml | 16 +-
tests/vircaps2xmldata/vircaps-x86_64-hmat.xml | 48 +-
.../vircaps-x86_64-resctrl-cdp.xml | 24 +-
.../vircaps-x86_64-resctrl-cmt.xml | 24 +-
.../vircaps-x86_64-resctrl-fake-feature.xml | 24 +-
.../vircaps-x86_64-resctrl-skx-twocaches.xml | 2 +-
.../vircaps-x86_64-resctrl-skx.xml | 2 +-
.../vircaps-x86_64-resctrl.xml | 24 +-
tests/vircaps2xmltest.c | 1 +
.../linux-aarch64-with-clusters.cpuinfo | 2016 +++++++++++++++++
.../linux-aarch64-with-clusters.expected | 1 +
.../cpu/cpu0/topology/cluster_cpus | 1 +
.../cpu/cpu0/topology/cluster_cpus_list | 1 +
.../cpu/cpu0/topology/cluster_id | 1 +
.../cpu/cpu0/topology/core_cpus | 1 +
.../cpu/cpu0/topology/core_cpus_list | 1 +
.../cpu/cpu0/topology/core_id | 1 +
.../cpu/cpu0/topology/core_siblings | 1 +
.../cpu/cpu0/topology/core_siblings_list | 1 +
.../cpu/cpu0/topology/package_cpus | 1 +
.../cpu/cpu0/topology/package_cpus_list | 1 +
.../cpu/cpu0/topology/physical_package_id | 1 +
.../cpu/cpu0/topology/thread_siblings | 1 +
.../cpu/cpu0/topology/thread_siblings_list | 1 +
[...]
tests/virhostcputest.c | 1 +
tests/vmx2xmldata/esx-in-the-wild-10.xml | 2 +-
tests/vmx2xmldata/esx-in-the-wild-8.xml | 2 +-
tests/vmx2xmldata/esx-in-the-wild-9.xml | 2 +-
3303 files changed, 6180 insertions(+), 262 deletions(-)
create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-aarch64-clusters-cpus.json
create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-aarch64-clusters-hotplug.json
create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-aarch64-clusters.data
copy tests/qemuxml2argvdata/{cpu-topology2.x86_64-latest.args => cpu-topology5.aarch64-latest.args} (69%)
create mode 100644 tests/qemuxml2argvdata/cpu-topology5.xml
create mode 120000 tests/vircaps2xmldata/linux-basic-clusters/system/cpu
create mode 120000 tests/vircaps2xmldata/linux-basic-clusters/system/node
create mode 100644 tests/vircaps2xmldata/vircaps-aarch64-basic-clusters.xml
create mode 100644 tests/virhostcpudata/linux-aarch64-with-clusters.cpuinfo
create mode 100644 tests/virhostcpudata/linux-aarch64-with-clusters.expected
create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu0/topology/cluster_cpus
create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu0/topology/cluster_cpus_list
create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu0/topology/cluster_id
create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu0/topology/core_cpus
create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu0/topology/core_cpus_list
create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu0/topology/core_id
create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu0/topology/core_siblings
create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu0/topology/core_siblings_list
create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu0/topology/package_cpus
create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu0/topology/package_cpus_list
create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu0/topology/physical_package_id
create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu0/topology/thread_siblings
create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu0/topology/thread_siblings_list
[...]
--
2.43.0
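With the series applied, a cluster-aware guest topology would be declared on the CPU element. A sketch, assuming the new attribute follows the existing ``dies``/``cores``/``threads`` convention in the topology element:

```xml
<cpu mode='host-passthrough'>
  <!-- 1 socket x 2 clusters x 2 cores x 1 thread = 4 vCPUs -->
  <topology sockets='1' clusters='2' cores='2' threads='1'/>
</cpu>
```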