[PATCH v2 00/11] qemu: Add support for CPU clusters

Changes from [v1]:

  * minimize amount of newly-introduced test data;
  * add documentation for CPU topology information in the host
    capabilities XML;
  * address other review comments.

[v1] https://lists.libvirt.org/archives/list/devel@lists.libvirt.org/thread/ZAZOR...

Andrea Bolognani (11):
  tests: Add hostcpudata for machine with CPU clusters
  conf: Report CPU clusters in capabilities XML
  conf: Allow specifying CPU clusters
  qemu: Introduce QEMU_CAPS_SMP_CLUSTERS
  qemu: Use CPU clusters for guests
  tests: Add test case for CPU clusters
  qemu: Make monitor aware of CPU clusters
  tests: Verify handling of CPU clusters in QMP data
  docs: Improve documentation for CPU topology
  docs: Document CPU clusters
  news: Mention support for CPU clusters

 NEWS.rst | 6 +
 docs/formatcaps.rst | 72 ++++++--
 docs/formatdomain.rst | 24 ++-
 src/bhyve/bhyve_command.c | 5 +
 src/conf/capabilities.c | 5 +-
 src/conf/capabilities.h | 1 +
 src/conf/cpu_conf.c | 16 +-
 src/conf/cpu_conf.h | 1 +
 src/conf/domain_conf.c | 1 +
 src/conf/schemas/capability.rng | 3 +
 src/conf/schemas/cputypes.rng | 5 +
 src/cpu/cpu.c | 1 +
 src/libvirt_linux.syms | 1 +
 src/libxl/libxl_capabilities.c | 1 +
 src/qemu/qemu_capabilities.c | 2 +
 src/qemu/qemu_capabilities.h | 1 +
 src/qemu/qemu_command.c | 8 +
 src/qemu/qemu_domain.c | 3 +-
 src/qemu/qemu_monitor.c | 2 +
 src/qemu/qemu_monitor.h | 2 +
 src/qemu/qemu_monitor_json.c | 5 +
 src/util/virhostcpu.c | 22 +++
 src/util/virhostcpu.h | 1 +
 src/vmx/vmx.c | 7 +
 tests/capabilityschemadata/caps-qemu-kvm.xml | 32 ++--
 .../x86_64-host+guest,model486-result.xml | 2 +-
 .../x86_64-host+guest,models-result.xml | 2 +-
 .../cputestdata/x86_64-host+guest-result.xml | 2 +-
 tests/cputestdata/x86_64-host+guest.xml | 2 +-
 .../x86_64-host+host-model-nofallback.xml | 2 +-
 ...t-Haswell-noTSX+Haswell,haswell-result.xml | 2 +-
 ...ell-noTSX+Haswell-noTSX,haswell-result.xml | 2 +-
 ...ost-Haswell-noTSX+Haswell-noTSX-result.xml | 2 +-
 .../x86_64-host-worse+guest-result.xml | 2 +-
 .../qemucapabilitiesdata/caps_7.1.0_ppc64.xml | 1 +
 .../caps_7.1.0_x86_64.xml | 1 +
 tests/qemucapabilitiesdata/caps_7.2.0_ppc.xml | 1 +
 .../caps_7.2.0_x86_64+hvf.xml | 1 +
 .../caps_7.2.0_x86_64.xml | 1 +
 .../caps_8.0.0_riscv64.xml | 1 +
 .../caps_8.0.0_x86_64.xml | 1 +
 .../qemucapabilitiesdata/caps_8.1.0_s390x.xml | 1 +
 .../caps_8.1.0_x86_64.xml | 1 +
 .../caps_8.2.0_aarch64.xml | 1 +
 .../caps_8.2.0_x86_64.xml | 1 +
 .../caps_9.0.0_x86_64.xml | 1 +
 .../ppc64-modern-bulk-result-conf.xml | 2 +-
 .../ppc64-modern-bulk-result-live.xml | 2 +-
 .../ppc64-modern-individual-result-conf.xml | 2 +-
 .../ppc64-modern-individual-result-live.xml | 2 +-
 .../x86-modern-bulk-result-conf.xml | 2 +-
 .../x86-modern-bulk-result-live.xml | 2 +-
 .../x86-modern-individual-add-result-conf.xml | 2 +-
 .../x86-modern-individual-add-result-live.xml | 2 +-
 ...imeout+graphics-spice-timeout-password.xml | 2 +-
 .../qemuhotplug-graphics-spice-timeout.xml | 2 +-
 ...torjson-cpuinfo-aarch64-clusters-cpus.json | 88 +++++++++
 ...json-cpuinfo-aarch64-clusters-hotplug.json | 171 ++++++++++++++++++
 ...umonitorjson-cpuinfo-aarch64-clusters.data | 108 +++++++++++
 tests/qemumonitorjsontest.c | 9 +-
 .../cpu-hotplug-startup.x86_64-latest.args | 2 +-
 .../cpu-numa-disjoint.x86_64-latest.args | 2 +-
 .../cpu-numa-disordered.x86_64-latest.args | 2 +-
 .../cpu-numa-memshared.x86_64-latest.args | 2 +-
 ...-numa-no-memory-element.x86_64-latest.args | 2 +-
 .../cpu-numa1.x86_64-latest.args | 2 +-
 .../cpu-numa2.x86_64-latest.args | 2 +-
 .../cpu-topology1.x86_64-latest.args | 2 +-
 .../cpu-topology2.x86_64-latest.args | 2 +-
.../cpu-topology3.x86_64-latest.args | 2 +- .../cpu-topology4.x86_64-latest.args | 2 +- ...args => cpu-topology5.aarch64-latest.args} | 12 +- tests/qemuxml2argvdata/cpu-topology5.xml | 17 ++ ...memory-no-numa-topology.x86_64-latest.args | 2 +- .../fd-memory-no-numa-topology.xml | 2 +- ...fd-memory-numa-topology.x86_64-latest.args | 2 +- .../fd-memory-numa-topology.xml | 2 +- ...d-memory-numa-topology2.x86_64-latest.args | 2 +- .../fd-memory-numa-topology2.xml | 2 +- ...d-memory-numa-topology3.x86_64-latest.args | 2 +- .../fd-memory-numa-topology3.xml | 2 +- .../hugepages-nvdimm.x86_64-latest.args | 2 +- tests/qemuxml2argvdata/hugepages-nvdimm.xml | 2 +- ...memory-default-hugepage.x86_64-latest.args | 2 +- .../memfd-memory-default-hugepage.xml | 2 +- .../memfd-memory-numa.x86_64-latest.args | 2 +- tests/qemuxml2argvdata/memfd-memory-numa.xml | 2 +- ...emory-hotplug-dimm-addr.x86_64-latest.args | 2 +- .../memory-hotplug-dimm.x86_64-latest.args | 2 +- ...memory-hotplug-multiple.x86_64-latest.args | 2 +- ...y-hotplug-nvdimm-access.x86_64-latest.args | 2 +- .../memory-hotplug-nvdimm-access.xml | 2 +- ...ry-hotplug-nvdimm-align.x86_64-latest.args | 2 +- .../memory-hotplug-nvdimm-align.xml | 2 +- ...ry-hotplug-nvdimm-label.x86_64-latest.args | 2 +- .../memory-hotplug-nvdimm-label.xml | 2 +- ...ory-hotplug-nvdimm-pmem.x86_64-latest.args | 2 +- .../memory-hotplug-nvdimm-pmem.xml | 2 +- ...-nvdimm-ppc64-abi-update.ppc64-latest.args | 2 +- ...ory-hotplug-nvdimm-ppc64.ppc64-latest.args | 2 +- ...hotplug-nvdimm-readonly.x86_64-latest.args | 2 +- .../memory-hotplug-nvdimm-readonly.xml | 2 +- .../memory-hotplug-nvdimm.x86_64-latest.args | 2 +- .../memory-hotplug-nvdimm.xml | 2 +- ...mory-hotplug-virtio-mem.x86_64-latest.args | 2 +- .../memory-hotplug-virtio-mem.xml | 2 +- ...ory-hotplug-virtio-pmem.x86_64-latest.args | 2 +- .../memory-hotplug-virtio-pmem.xml | 2 +- .../memory-hotplug.x86_64-latest.args | 2 +- ...auto-memory-vcpu-cpuset.x86_64-latest.args | 2 +- ...no-cpuset-and-placement.x86_64-latest.args | 2 +- ...d-auto-vcpu-no-numatune.x86_64-latest.args | 2 +- ...to-vcpu-static-numatune.x86_64-latest.args | 2 +- ...static-memory-auto-vcpu.x86_64-latest.args | 2 +- ...static-vcpu-no-numatune.x86_64-latest.args | 2 +- .../qemuxml2argvdata/numad.x86_64-latest.args | 2 +- ...ne-auto-nodeset-invalid.x86_64-latest.args | 2 +- .../pci-expander-bus.x86_64-latest.args | 2 +- .../pcie-expander-bus.x86_64-latest.args | 2 +- .../pseries-phb-numa-node.ppc64-latest.args | 2 +- tests/qemuxml2argvtest.c | 1 + .../cpu-numa-disjoint.x86_64-latest.xml | 2 +- .../cpu-numa-disordered.x86_64-latest.xml | 2 +- .../cpu-numa-memshared.x86_64-latest.xml | 2 +- ...u-numa-no-memory-element.x86_64-latest.xml | 2 +- .../cpu-numa1.x86_64-latest.xml | 2 +- .../cpu-numa2.x86_64-latest.xml | 2 +- .../cpu-topology5.aarch64-latest.xml | 29 +++ ...memory-hotplug-dimm-addr.x86_64-latest.xml | 2 +- .../memory-hotplug-dimm.x86_64-latest.xml | 2 +- .../memory-hotplug-multiple.x86_64-latest.xml | 2 +- ...g-nvdimm-ppc64-abi-update.ppc64-latest.xml | 2 +- ...mory-hotplug-nvdimm-ppc64.ppc64-latest.xml | 2 +- .../memory-hotplug.x86_64-latest.xml | 2 +- ...-auto-memory-vcpu-cpuset.x86_64-latest.xml | 2 +- ...-no-cpuset-and-placement.x86_64-latest.xml | 2 +- ...ad-auto-vcpu-no-numatune.x86_64-latest.xml | 2 +- ...-static-vcpu-no-numatune.x86_64-latest.xml | 2 +- .../pci-expander-bus.x86_64-latest.xml | 2 +- .../pcie-expander-bus.x86_64-latest.xml | 2 +- .../pseries-phb-numa-node.ppc64-latest.xml | 2 +- tests/qemuxml2xmltest.c | 2 + 
.../linux-basic-clusters/system/cpu | 1 + .../linux-basic-clusters/system/node | 1 + .../vircaps-aarch64-basic-clusters.xml | 39 ++++ .../vircaps2xmldata/vircaps-aarch64-basic.xml | 32 ++-- .../vircaps-x86_64-basic-dies.xml | 24 +-- .../vircaps2xmldata/vircaps-x86_64-basic.xml | 32 ++-- .../vircaps2xmldata/vircaps-x86_64-caches.xml | 16 +- tests/vircaps2xmldata/vircaps-x86_64-hmat.xml | 48 ++--- .../vircaps-x86_64-resctrl-cdp.xml | 24 +-- .../vircaps-x86_64-resctrl-cmt.xml | 24 +-- .../vircaps-x86_64-resctrl-fake-feature.xml | 24 +-- .../vircaps-x86_64-resctrl-skx-twocaches.xml | 2 +- .../vircaps-x86_64-resctrl-skx.xml | 2 +- .../vircaps-x86_64-resctrl.xml | 24 +-- tests/vircaps2xmltest.c | 1 + .../linux-aarch64-with-clusters.cpuinfo | 72 ++++++++ .../linux-aarch64-with-clusters.expected | 1 + .../cpu/cpu0/topology/cluster_cpus_list | 1 + .../cpu/cpu0/topology/cluster_id | 1 + .../cpu/cpu0/topology/core_cpus_list | 1 + .../cpu/cpu0/topology/core_id | 1 + .../cpu/cpu0/topology/core_siblings_list | 1 + .../cpu/cpu0/topology/package_cpus_list | 1 + .../cpu/cpu0/topology/physical_package_id | 1 + .../cpu/cpu0/topology/thread_siblings_list | 1 + .../cpu/cpu1/topology/cluster_cpus_list | 1 + .../cpu/cpu1/topology/cluster_id | 1 + .../cpu/cpu1/topology/core_cpus_list | 1 + .../cpu/cpu1/topology/core_id | 1 + .../cpu/cpu1/topology/core_siblings_list | 1 + .../cpu/cpu1/topology/package_cpus_list | 1 + .../cpu/cpu1/topology/physical_package_id | 1 + .../cpu/cpu1/topology/thread_siblings_list | 1 + .../cpu/cpu2/topology/cluster_cpus_list | 1 + .../cpu/cpu2/topology/cluster_id | 1 + .../cpu/cpu2/topology/core_cpus_list | 1 + .../cpu/cpu2/topology/core_id | 1 + .../cpu/cpu2/topology/core_siblings_list | 1 + .../cpu/cpu2/topology/package_cpus_list | 1 + .../cpu/cpu2/topology/physical_package_id | 1 + .../cpu/cpu2/topology/thread_siblings_list | 1 + .../cpu/cpu3/topology/cluster_cpus_list | 1 + .../cpu/cpu3/topology/cluster_id | 1 + .../cpu/cpu3/topology/core_cpus_list | 1 + .../cpu/cpu3/topology/core_id | 1 + .../cpu/cpu3/topology/core_siblings_list | 1 + .../cpu/cpu3/topology/package_cpus_list | 1 + .../cpu/cpu3/topology/physical_package_id | 1 + .../cpu/cpu3/topology/thread_siblings_list | 1 + .../cpu/cpu4/topology/cluster_cpus_list | 1 + .../cpu/cpu4/topology/cluster_id | 1 + .../cpu/cpu4/topology/core_cpus_list | 1 + .../cpu/cpu4/topology/core_id | 1 + .../cpu/cpu4/topology/core_siblings_list | 1 + .../cpu/cpu4/topology/package_cpus_list | 1 + .../cpu/cpu4/topology/physical_package_id | 1 + .../cpu/cpu4/topology/thread_siblings_list | 1 + .../cpu/cpu5/topology/cluster_cpus_list | 1 + .../cpu/cpu5/topology/cluster_id | 1 + .../cpu/cpu5/topology/core_cpus_list | 1 + .../cpu/cpu5/topology/core_id | 1 + .../cpu/cpu5/topology/core_siblings_list | 1 + .../cpu/cpu5/topology/package_cpus_list | 1 + .../cpu/cpu5/topology/physical_package_id | 1 + .../cpu/cpu5/topology/thread_siblings_list | 1 + .../cpu/cpu6/topology/cluster_cpus_list | 1 + .../cpu/cpu6/topology/cluster_id | 1 + .../cpu/cpu6/topology/core_cpus_list | 1 + .../cpu/cpu6/topology/core_id | 1 + .../cpu/cpu6/topology/core_siblings_list | 1 + .../cpu/cpu6/topology/package_cpus_list | 1 + .../cpu/cpu6/topology/physical_package_id | 1 + .../cpu/cpu6/topology/thread_siblings_list | 1 + .../cpu/cpu7/topology/cluster_cpus_list | 1 + .../cpu/cpu7/topology/cluster_id | 1 + .../cpu/cpu7/topology/core_cpus_list | 1 + .../cpu/cpu7/topology/core_id | 1 + .../cpu/cpu7/topology/core_siblings_list | 1 + .../cpu/cpu7/topology/package_cpus_list | 1 + 
.../cpu/cpu7/topology/physical_package_id | 1 + .../cpu/cpu7/topology/thread_siblings_list | 1 + .../linux-with-clusters/cpu/online | 1 + .../linux-with-clusters/cpu/present | 1 + .../linux-with-clusters/node/node0/cpu0 | 1 + .../linux-with-clusters/node/node0/cpu1 | 1 + .../linux-with-clusters/node/node0/cpu2 | 1 + .../linux-with-clusters/node/node0/cpu3 | 1 + .../linux-with-clusters/node/node0/cpulist | 1 + .../linux-with-clusters/node/node1/cpu4 | 1 + .../linux-with-clusters/node/node1/cpu5 | 1 + .../linux-with-clusters/node/node1/cpu6 | 1 + .../linux-with-clusters/node/node1/cpu7 | 1 + .../linux-with-clusters/node/node1/cpulist | 1 + .../linux-with-clusters/node/online | 1 + .../linux-with-clusters/node/possible | 1 + tests/virhostcputest.c | 1 + tests/vmx2xmldata/esx-in-the-wild-10.xml | 2 +- tests/vmx2xmldata/esx-in-the-wild-8.xml | 2 +- tests/vmx2xmldata/esx-in-the-wild-9.xml | 2 +- 241 files changed, 1044 insertions(+), 276 deletions(-) create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-aarch64-clusters-cpus.json create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-aarch64-clusters-hotplug.json create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-aarch64-clusters.data copy tests/qemuxml2argvdata/{cpu-topology2.x86_64-latest.args => cpu-topology5.aarch64-latest.args} (69%) create mode 100644 tests/qemuxml2argvdata/cpu-topology5.xml create mode 100644 tests/qemuxml2xmloutdata/cpu-topology5.aarch64-latest.xml create mode 120000 tests/vircaps2xmldata/linux-basic-clusters/system/cpu create mode 120000 tests/vircaps2xmldata/linux-basic-clusters/system/node create mode 100644 tests/vircaps2xmldata/vircaps-aarch64-basic-clusters.xml create mode 100644 tests/virhostcpudata/linux-aarch64-with-clusters.cpuinfo create mode 100644 tests/virhostcpudata/linux-aarch64-with-clusters.expected create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu0/topology/cluster_cpus_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu0/topology/cluster_id create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu0/topology/core_cpus_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu0/topology/core_id create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu0/topology/core_siblings_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu0/topology/package_cpus_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu0/topology/physical_package_id create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu0/topology/thread_siblings_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu1/topology/cluster_cpus_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu1/topology/cluster_id create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu1/topology/core_cpus_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu1/topology/core_id create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu1/topology/core_siblings_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu1/topology/package_cpus_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu1/topology/physical_package_id create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu1/topology/thread_siblings_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu2/topology/cluster_cpus_list create mode 100644 
tests/virhostcpudata/linux-with-clusters/cpu/cpu2/topology/cluster_id create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu2/topology/core_cpus_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu2/topology/core_id create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu2/topology/core_siblings_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu2/topology/package_cpus_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu2/topology/physical_package_id create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu2/topology/thread_siblings_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu3/topology/cluster_cpus_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu3/topology/cluster_id create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu3/topology/core_cpus_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu3/topology/core_id create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu3/topology/core_siblings_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu3/topology/package_cpus_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu3/topology/physical_package_id create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu3/topology/thread_siblings_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu4/topology/cluster_cpus_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu4/topology/cluster_id create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu4/topology/core_cpus_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu4/topology/core_id create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu4/topology/core_siblings_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu4/topology/package_cpus_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu4/topology/physical_package_id create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu4/topology/thread_siblings_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu5/topology/cluster_cpus_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu5/topology/cluster_id create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu5/topology/core_cpus_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu5/topology/core_id create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu5/topology/core_siblings_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu5/topology/package_cpus_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu5/topology/physical_package_id create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu5/topology/thread_siblings_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu6/topology/cluster_cpus_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu6/topology/cluster_id create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu6/topology/core_cpus_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu6/topology/core_id create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu6/topology/core_siblings_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu6/topology/package_cpus_list create mode 100644 
tests/virhostcpudata/linux-with-clusters/cpu/cpu6/topology/physical_package_id create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu6/topology/thread_siblings_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu7/topology/cluster_cpus_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu7/topology/cluster_id create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu7/topology/core_cpus_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu7/topology/core_id create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu7/topology/core_siblings_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu7/topology/package_cpus_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu7/topology/physical_package_id create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu7/topology/thread_siblings_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/online create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/present create mode 120000 tests/virhostcpudata/linux-with-clusters/node/node0/cpu0 create mode 120000 tests/virhostcpudata/linux-with-clusters/node/node0/cpu1 create mode 120000 tests/virhostcpudata/linux-with-clusters/node/node0/cpu2 create mode 120000 tests/virhostcpudata/linux-with-clusters/node/node0/cpu3 create mode 100644 tests/virhostcpudata/linux-with-clusters/node/node0/cpulist create mode 120000 tests/virhostcpudata/linux-with-clusters/node/node1/cpu4 create mode 120000 tests/virhostcpudata/linux-with-clusters/node/node1/cpu5 create mode 120000 tests/virhostcpudata/linux-with-clusters/node/node1/cpu6 create mode 120000 tests/virhostcpudata/linux-with-clusters/node/node1/cpu7 create mode 100644 tests/virhostcpudata/linux-with-clusters/node/node1/cpulist create mode 100644 tests/virhostcpudata/linux-with-clusters/node/online create mode 100644 tests/virhostcpudata/linux-with-clusters/node/possible -- 2.43.0
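As a quick illustration of what the series enables (a sketch based on the patch titles above and on QEMU's existing -smp "clusters" parameter, not an excerpt from the patches themselves), a guest CPU topology that includes clusters would be written roughly like this in the domain XML:

  <vcpu>8</vcpu>
  <cpu>
    <!-- sketch: 1 socket x 1 die x 2 clusters x 2 cores x 2 threads = 8 vCPUs;
         the 'clusters' attribute is the one introduced by this series -->
    <topology sockets='1' dies='1' clusters='2' cores='2' threads='2'/>
  </cpu>

With QEMU_CAPS_SMP_CLUSTERS available, this would be expected to translate to a QEMU command line option along the lines of "-smp 8,sockets=1,dies=1,clusters=2,cores=2,threads=2"; the authoritative details are in the docs/formatdomain.rst and src/qemu/qemu_command.c changes listed above.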

The data is taken from an HPE Apollo 70 machine, which uses aarch64
CPUs. It is interesting for us because non-dummy information about
CPU clusters is exposed through sysfs.

In order to keep things reasonable, the data was manually modified
so that only 8 of the original 224 CPUs are included. Care has been
taken to ensure that the topology is otherwise unaltered.

Signed-off-by: Andrea Bolognani <abologna@redhat.com>
---
 .../linux-basic-clusters/system/cpu | 1 +
 .../linux-basic-clusters/system/node | 1 +
 .../vircaps-aarch64-basic-clusters.xml | 39 ++++++++++
 tests/vircaps2xmltest.c | 1 +
 .../linux-aarch64-with-clusters.cpuinfo | 72 +++++++++++++++++++
 .../linux-aarch64-with-clusters.expected | 1 +
 .../cpu/cpu0/topology/cluster_cpus_list | 1 +
 .../cpu/cpu0/topology/cluster_id | 1 +
 .../cpu/cpu0/topology/core_cpus_list | 1 +
 .../cpu/cpu0/topology/core_id | 1 +
 .../cpu/cpu0/topology/core_siblings_list | 1 +
 .../cpu/cpu0/topology/package_cpus_list | 1 +
 .../cpu/cpu0/topology/physical_package_id | 1 +
 .../cpu/cpu0/topology/thread_siblings_list | 1 +
 .../cpu/cpu1/topology/cluster_cpus_list | 1 +
 .../cpu/cpu1/topology/cluster_id | 1 +
 .../cpu/cpu1/topology/core_cpus_list | 1 +
 .../cpu/cpu1/topology/core_id | 1 +
 .../cpu/cpu1/topology/core_siblings_list | 1 +
 .../cpu/cpu1/topology/package_cpus_list | 1 +
 .../cpu/cpu1/topology/physical_package_id | 1 +
 .../cpu/cpu1/topology/thread_siblings_list | 1 +
 .../cpu/cpu2/topology/cluster_cpus_list | 1 +
 .../cpu/cpu2/topology/cluster_id | 1 +
 .../cpu/cpu2/topology/core_cpus_list | 1 +
 .../cpu/cpu2/topology/core_id | 1 +
 .../cpu/cpu2/topology/core_siblings_list | 1 +
 .../cpu/cpu2/topology/package_cpus_list | 1 +
 .../cpu/cpu2/topology/physical_package_id | 1 +
 .../cpu/cpu2/topology/thread_siblings_list | 1 +
 .../cpu/cpu3/topology/cluster_cpus_list | 1 +
 .../cpu/cpu3/topology/cluster_id | 1 +
 .../cpu/cpu3/topology/core_cpus_list | 1 +
 .../cpu/cpu3/topology/core_id | 1 +
 .../cpu/cpu3/topology/core_siblings_list | 1 +
 .../cpu/cpu3/topology/package_cpus_list | 1 +
 .../cpu/cpu3/topology/physical_package_id | 1 +
 .../cpu/cpu3/topology/thread_siblings_list | 1 +
 .../cpu/cpu4/topology/cluster_cpus_list | 1 +
 .../cpu/cpu4/topology/cluster_id | 1 +
 .../cpu/cpu4/topology/core_cpus_list | 1 +
 .../cpu/cpu4/topology/core_id | 1 +
 .../cpu/cpu4/topology/core_siblings_list | 1 +
 .../cpu/cpu4/topology/package_cpus_list | 1 +
 .../cpu/cpu4/topology/physical_package_id | 1 +
 .../cpu/cpu4/topology/thread_siblings_list | 1 +
 .../cpu/cpu5/topology/cluster_cpus_list | 1 +
 .../cpu/cpu5/topology/cluster_id | 1 +
 .../cpu/cpu5/topology/core_cpus_list | 1 +
 .../cpu/cpu5/topology/core_id | 1 +
 .../cpu/cpu5/topology/core_siblings_list | 1 +
 .../cpu/cpu5/topology/package_cpus_list | 1 +
 .../cpu/cpu5/topology/physical_package_id | 1 +
 .../cpu/cpu5/topology/thread_siblings_list | 1 +
 .../cpu/cpu6/topology/cluster_cpus_list | 1 +
 .../cpu/cpu6/topology/cluster_id | 1 +
 .../cpu/cpu6/topology/core_cpus_list | 1 +
 .../cpu/cpu6/topology/core_id | 1 +
 .../cpu/cpu6/topology/core_siblings_list | 1 +
 .../cpu/cpu6/topology/package_cpus_list | 1 +
 .../cpu/cpu6/topology/physical_package_id | 1 +
 .../cpu/cpu6/topology/thread_siblings_list | 1 +
 .../cpu/cpu7/topology/cluster_cpus_list | 1 +
 .../cpu/cpu7/topology/cluster_id | 1 +
 .../cpu/cpu7/topology/core_cpus_list | 1 +
 .../cpu/cpu7/topology/core_id | 1 +
 .../cpu/cpu7/topology/core_siblings_list | 1 +
 .../cpu/cpu7/topology/package_cpus_list | 1 +
 .../cpu/cpu7/topology/physical_package_id | 1 +
 .../cpu/cpu7/topology/thread_siblings_list | 1 +
.../linux-with-clusters/cpu/online | 1 + .../linux-with-clusters/cpu/present | 1 + .../linux-with-clusters/node/node0/cpu0 | 1 + .../linux-with-clusters/node/node0/cpu1 | 1 + .../linux-with-clusters/node/node0/cpu2 | 1 + .../linux-with-clusters/node/node0/cpu3 | 1 + .../linux-with-clusters/node/node0/cpulist | 1 + .../linux-with-clusters/node/node1/cpu4 | 1 + .../linux-with-clusters/node/node1/cpu5 | 1 + .../linux-with-clusters/node/node1/cpu6 | 1 + .../linux-with-clusters/node/node1/cpu7 | 1 + .../linux-with-clusters/node/node1/cpulist | 1 + .../linux-with-clusters/node/online | 1 + .../linux-with-clusters/node/possible | 1 + tests/virhostcputest.c | 1 + 85 files changed, 194 insertions(+) create mode 120000 tests/vircaps2xmldata/linux-basic-clusters/system/cpu create mode 120000 tests/vircaps2xmldata/linux-basic-clusters/system/node create mode 100644 tests/vircaps2xmldata/vircaps-aarch64-basic-clusters.xml create mode 100644 tests/virhostcpudata/linux-aarch64-with-clusters.cpuinfo create mode 100644 tests/virhostcpudata/linux-aarch64-with-clusters.expected create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu0/topology/cluster_cpus_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu0/topology/cluster_id create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu0/topology/core_cpus_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu0/topology/core_id create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu0/topology/core_siblings_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu0/topology/package_cpus_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu0/topology/physical_package_id create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu0/topology/thread_siblings_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu1/topology/cluster_cpus_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu1/topology/cluster_id create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu1/topology/core_cpus_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu1/topology/core_id create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu1/topology/core_siblings_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu1/topology/package_cpus_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu1/topology/physical_package_id create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu1/topology/thread_siblings_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu2/topology/cluster_cpus_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu2/topology/cluster_id create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu2/topology/core_cpus_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu2/topology/core_id create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu2/topology/core_siblings_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu2/topology/package_cpus_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu2/topology/physical_package_id create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu2/topology/thread_siblings_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu3/topology/cluster_cpus_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu3/topology/cluster_id create mode 100644 
tests/virhostcpudata/linux-with-clusters/cpu/cpu3/topology/core_cpus_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu3/topology/core_id create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu3/topology/core_siblings_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu3/topology/package_cpus_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu3/topology/physical_package_id create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu3/topology/thread_siblings_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu4/topology/cluster_cpus_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu4/topology/cluster_id create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu4/topology/core_cpus_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu4/topology/core_id create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu4/topology/core_siblings_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu4/topology/package_cpus_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu4/topology/physical_package_id create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu4/topology/thread_siblings_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu5/topology/cluster_cpus_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu5/topology/cluster_id create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu5/topology/core_cpus_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu5/topology/core_id create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu5/topology/core_siblings_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu5/topology/package_cpus_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu5/topology/physical_package_id create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu5/topology/thread_siblings_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu6/topology/cluster_cpus_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu6/topology/cluster_id create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu6/topology/core_cpus_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu6/topology/core_id create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu6/topology/core_siblings_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu6/topology/package_cpus_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu6/topology/physical_package_id create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu6/topology/thread_siblings_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu7/topology/cluster_cpus_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu7/topology/cluster_id create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu7/topology/core_cpus_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu7/topology/core_id create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu7/topology/core_siblings_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu7/topology/package_cpus_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/cpu7/topology/physical_package_id create mode 100644 
tests/virhostcpudata/linux-with-clusters/cpu/cpu7/topology/thread_siblings_list create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/online create mode 100644 tests/virhostcpudata/linux-with-clusters/cpu/present create mode 120000 tests/virhostcpudata/linux-with-clusters/node/node0/cpu0 create mode 120000 tests/virhostcpudata/linux-with-clusters/node/node0/cpu1 create mode 120000 tests/virhostcpudata/linux-with-clusters/node/node0/cpu2 create mode 120000 tests/virhostcpudata/linux-with-clusters/node/node0/cpu3 create mode 100644 tests/virhostcpudata/linux-with-clusters/node/node0/cpulist create mode 120000 tests/virhostcpudata/linux-with-clusters/node/node1/cpu4 create mode 120000 tests/virhostcpudata/linux-with-clusters/node/node1/cpu5 create mode 120000 tests/virhostcpudata/linux-with-clusters/node/node1/cpu6 create mode 120000 tests/virhostcpudata/linux-with-clusters/node/node1/cpu7 create mode 100644 tests/virhostcpudata/linux-with-clusters/node/node1/cpulist create mode 100644 tests/virhostcpudata/linux-with-clusters/node/online create mode 100644 tests/virhostcpudata/linux-with-clusters/node/possible diff --git a/tests/vircaps2xmldata/linux-basic-clusters/system/cpu b/tests/vircaps2xmldata/linux-basic-clusters/system/cpu new file mode 120000 index 0000000000..f7354e3525 --- /dev/null +++ b/tests/vircaps2xmldata/linux-basic-clusters/system/cpu @@ -0,0 +1 @@ +../../../virhostcpudata/linux-with-clusters/cpu \ No newline at end of file diff --git a/tests/vircaps2xmldata/linux-basic-clusters/system/node b/tests/vircaps2xmldata/linux-basic-clusters/system/node new file mode 120000 index 0000000000..57b972ce90 --- /dev/null +++ b/tests/vircaps2xmldata/linux-basic-clusters/system/node @@ -0,0 +1 @@ +../../../virhostcpudata/linux-with-clusters/node \ No newline at end of file diff --git a/tests/vircaps2xmldata/vircaps-aarch64-basic-clusters.xml b/tests/vircaps2xmldata/vircaps-aarch64-basic-clusters.xml new file mode 100644 index 0000000000..fe61fc42cc --- /dev/null +++ b/tests/vircaps2xmldata/vircaps-aarch64-basic-clusters.xml @@ -0,0 +1,39 @@ +<capabilities> + + <host> + <cpu> + <arch>aarch64</arch> + </cpu> + <power_management/> + <iommu support='no'/> + <topology> + <cells num='2'> + <cell id='0'> + <memory unit='KiB'>1048576</memory> + <pages unit='KiB' size='4'>2048</pages> + <pages unit='KiB' size='2048'>4096</pages> + <pages unit='KiB' size='1048576'>6144</pages> + <cpus num='4'> + <cpu id='0' socket_id='36' die_id='0' core_id='0' siblings='0-1'/> + <cpu id='1' socket_id='36' die_id='0' core_id='0' siblings='0-1'/> + <cpu id='2' socket_id='36' die_id='0' core_id='1' siblings='2-3'/> + <cpu id='3' socket_id='36' die_id='0' core_id='1' siblings='2-3'/> + </cpus> + </cell> + <cell id='1'> + <memory unit='KiB'>2097152</memory> + <pages unit='KiB' size='4'>4096</pages> + <pages unit='KiB' size='2048'>6144</pages> + <pages unit='KiB' size='1048576'>8192</pages> + <cpus num='4'> + <cpu id='4' socket_id='3180' die_id='0' core_id='256' siblings='4-5'/> + <cpu id='5' socket_id='3180' die_id='0' core_id='256' siblings='4-5'/> + <cpu id='6' socket_id='3180' die_id='0' core_id='257' siblings='6-7'/> + <cpu id='7' socket_id='3180' die_id='0' core_id='257' siblings='6-7'/> + </cpus> + </cell> + </cells> + </topology> + </host> + +</capabilities> diff --git a/tests/vircaps2xmltest.c b/tests/vircaps2xmltest.c index 26a512e87f..2fdf694640 100644 --- a/tests/vircaps2xmltest.c +++ b/tests/vircaps2xmltest.c @@ -93,6 +93,7 @@ mymain(void) DO_TEST_FULL("basic", VIR_ARCH_X86_64, false, false); 
DO_TEST_FULL("basic", VIR_ARCH_AARCH64, true, false); DO_TEST_FULL("basic-dies", VIR_ARCH_X86_64, false, false); + DO_TEST_FULL("basic-clusters", VIR_ARCH_AARCH64, false, false); DO_TEST_FULL("caches", VIR_ARCH_X86_64, true, true); diff --git a/tests/virhostcpudata/linux-aarch64-with-clusters.cpuinfo b/tests/virhostcpudata/linux-aarch64-with-clusters.cpuinfo new file mode 100644 index 0000000000..94030201d2 --- /dev/null +++ b/tests/virhostcpudata/linux-aarch64-with-clusters.cpuinfo @@ -0,0 +1,72 @@ +processor : 0 +BogoMIPS : 400.00 +Features : fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics cpuid asimdrdm +CPU implementer : 0x43 +CPU architecture: 8 +CPU variant : 0x1 +CPU part : 0x0af +CPU revision : 1 + +processor : 1 +BogoMIPS : 400.00 +Features : fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics cpuid asimdrdm +CPU implementer : 0x43 +CPU architecture: 8 +CPU variant : 0x1 +CPU part : 0x0af +CPU revision : 1 + +processor : 2 +BogoMIPS : 400.00 +Features : fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics cpuid asimdrdm +CPU implementer : 0x43 +CPU architecture: 8 +CPU variant : 0x1 +CPU part : 0x0af +CPU revision : 1 + +processor : 3 +BogoMIPS : 400.00 +Features : fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics cpuid asimdrdm +CPU implementer : 0x43 +CPU architecture: 8 +CPU variant : 0x1 +CPU part : 0x0af +CPU revision : 1 + +processor : 4 +BogoMIPS : 400.00 +Features : fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics cpuid asimdrdm +CPU implementer : 0x43 +CPU architecture: 8 +CPU variant : 0x1 +CPU part : 0x0af +CPU revision : 1 + +processor : 5 +BogoMIPS : 400.00 +Features : fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics cpuid asimdrdm +CPU implementer : 0x43 +CPU architecture: 8 +CPU variant : 0x1 +CPU part : 0x0af +CPU revision : 1 + +processor : 6 +BogoMIPS : 400.00 +Features : fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics cpuid asimdrdm +CPU implementer : 0x43 +CPU architecture: 8 +CPU variant : 0x1 +CPU part : 0x0af +CPU revision : 1 + +processor : 7 +BogoMIPS : 400.00 +Features : fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics cpuid asimdrdm +CPU implementer : 0x43 +CPU architecture: 8 +CPU variant : 0x1 +CPU part : 0x0af +CPU revision : 1 + diff --git a/tests/virhostcpudata/linux-aarch64-with-clusters.expected b/tests/virhostcpudata/linux-aarch64-with-clusters.expected new file mode 100644 index 0000000000..bf350bd40b --- /dev/null +++ b/tests/virhostcpudata/linux-aarch64-with-clusters.expected @@ -0,0 +1 @@ +CPUs: 8/8, MHz: 0, Nodes: 2, Sockets: 1, Cores: 2, Threads: 2 diff --git a/tests/virhostcpudata/linux-with-clusters/cpu/cpu0/topology/cluster_cpus_list b/tests/virhostcpudata/linux-with-clusters/cpu/cpu0/topology/cluster_cpus_list new file mode 100644 index 0000000000..8b0fab869c --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/cpu/cpu0/topology/cluster_cpus_list @@ -0,0 +1 @@ +0-1 diff --git a/tests/virhostcpudata/linux-with-clusters/cpu/cpu0/topology/cluster_id b/tests/virhostcpudata/linux-with-clusters/cpu/cpu0/topology/cluster_id new file mode 100644 index 0000000000..573541ac97 --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/cpu/cpu0/topology/cluster_id @@ -0,0 +1 @@ +0 diff --git a/tests/virhostcpudata/linux-with-clusters/cpu/cpu0/topology/core_cpus_list b/tests/virhostcpudata/linux-with-clusters/cpu/cpu0/topology/core_cpus_list new file mode 100644 index 0000000000..8b0fab869c --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/cpu/cpu0/topology/core_cpus_list @@ -0,0 +1 @@ +0-1 diff --git 
a/tests/virhostcpudata/linux-with-clusters/cpu/cpu0/topology/core_id b/tests/virhostcpudata/linux-with-clusters/cpu/cpu0/topology/core_id new file mode 100644 index 0000000000..573541ac97 --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/cpu/cpu0/topology/core_id @@ -0,0 +1 @@ +0 diff --git a/tests/virhostcpudata/linux-with-clusters/cpu/cpu0/topology/core_siblings_list b/tests/virhostcpudata/linux-with-clusters/cpu/cpu0/topology/core_siblings_list new file mode 100644 index 0000000000..40c7bb2f1a --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/cpu/cpu0/topology/core_siblings_list @@ -0,0 +1 @@ +0-3 diff --git a/tests/virhostcpudata/linux-with-clusters/cpu/cpu0/topology/package_cpus_list b/tests/virhostcpudata/linux-with-clusters/cpu/cpu0/topology/package_cpus_list new file mode 100644 index 0000000000..40c7bb2f1a --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/cpu/cpu0/topology/package_cpus_list @@ -0,0 +1 @@ +0-3 diff --git a/tests/virhostcpudata/linux-with-clusters/cpu/cpu0/topology/physical_package_id b/tests/virhostcpudata/linux-with-clusters/cpu/cpu0/topology/physical_package_id new file mode 100644 index 0000000000..7facc89938 --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/cpu/cpu0/topology/physical_package_id @@ -0,0 +1 @@ +36 diff --git a/tests/virhostcpudata/linux-with-clusters/cpu/cpu0/topology/thread_siblings_list b/tests/virhostcpudata/linux-with-clusters/cpu/cpu0/topology/thread_siblings_list new file mode 100644 index 0000000000..8b0fab869c --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/cpu/cpu0/topology/thread_siblings_list @@ -0,0 +1 @@ +0-1 diff --git a/tests/virhostcpudata/linux-with-clusters/cpu/cpu1/topology/cluster_cpus_list b/tests/virhostcpudata/linux-with-clusters/cpu/cpu1/topology/cluster_cpus_list new file mode 100644 index 0000000000..8b0fab869c --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/cpu/cpu1/topology/cluster_cpus_list @@ -0,0 +1 @@ +0-1 diff --git a/tests/virhostcpudata/linux-with-clusters/cpu/cpu1/topology/cluster_id b/tests/virhostcpudata/linux-with-clusters/cpu/cpu1/topology/cluster_id new file mode 100644 index 0000000000..573541ac97 --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/cpu/cpu1/topology/cluster_id @@ -0,0 +1 @@ +0 diff --git a/tests/virhostcpudata/linux-with-clusters/cpu/cpu1/topology/core_cpus_list b/tests/virhostcpudata/linux-with-clusters/cpu/cpu1/topology/core_cpus_list new file mode 100644 index 0000000000..8b0fab869c --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/cpu/cpu1/topology/core_cpus_list @@ -0,0 +1 @@ +0-1 diff --git a/tests/virhostcpudata/linux-with-clusters/cpu/cpu1/topology/core_id b/tests/virhostcpudata/linux-with-clusters/cpu/cpu1/topology/core_id new file mode 100644 index 0000000000..573541ac97 --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/cpu/cpu1/topology/core_id @@ -0,0 +1 @@ +0 diff --git a/tests/virhostcpudata/linux-with-clusters/cpu/cpu1/topology/core_siblings_list b/tests/virhostcpudata/linux-with-clusters/cpu/cpu1/topology/core_siblings_list new file mode 100644 index 0000000000..40c7bb2f1a --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/cpu/cpu1/topology/core_siblings_list @@ -0,0 +1 @@ +0-3 diff --git a/tests/virhostcpudata/linux-with-clusters/cpu/cpu1/topology/package_cpus_list b/tests/virhostcpudata/linux-with-clusters/cpu/cpu1/topology/package_cpus_list new file mode 100644 index 0000000000..40c7bb2f1a --- /dev/null +++ 
b/tests/virhostcpudata/linux-with-clusters/cpu/cpu1/topology/package_cpus_list @@ -0,0 +1 @@ +0-3 diff --git a/tests/virhostcpudata/linux-with-clusters/cpu/cpu1/topology/physical_package_id b/tests/virhostcpudata/linux-with-clusters/cpu/cpu1/topology/physical_package_id new file mode 100644 index 0000000000..7facc89938 --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/cpu/cpu1/topology/physical_package_id @@ -0,0 +1 @@ +36 diff --git a/tests/virhostcpudata/linux-with-clusters/cpu/cpu1/topology/thread_siblings_list b/tests/virhostcpudata/linux-with-clusters/cpu/cpu1/topology/thread_siblings_list new file mode 100644 index 0000000000..8b0fab869c --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/cpu/cpu1/topology/thread_siblings_list @@ -0,0 +1 @@ +0-1 diff --git a/tests/virhostcpudata/linux-with-clusters/cpu/cpu2/topology/cluster_cpus_list b/tests/virhostcpudata/linux-with-clusters/cpu/cpu2/topology/cluster_cpus_list new file mode 100644 index 0000000000..7a9857542a --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/cpu/cpu2/topology/cluster_cpus_list @@ -0,0 +1 @@ +2-3 diff --git a/tests/virhostcpudata/linux-with-clusters/cpu/cpu2/topology/cluster_id b/tests/virhostcpudata/linux-with-clusters/cpu/cpu2/topology/cluster_id new file mode 100644 index 0000000000..d00491fd7e --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/cpu/cpu2/topology/cluster_id @@ -0,0 +1 @@ +1 diff --git a/tests/virhostcpudata/linux-with-clusters/cpu/cpu2/topology/core_cpus_list b/tests/virhostcpudata/linux-with-clusters/cpu/cpu2/topology/core_cpus_list new file mode 100644 index 0000000000..7a9857542a --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/cpu/cpu2/topology/core_cpus_list @@ -0,0 +1 @@ +2-3 diff --git a/tests/virhostcpudata/linux-with-clusters/cpu/cpu2/topology/core_id b/tests/virhostcpudata/linux-with-clusters/cpu/cpu2/topology/core_id new file mode 100644 index 0000000000..d00491fd7e --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/cpu/cpu2/topology/core_id @@ -0,0 +1 @@ +1 diff --git a/tests/virhostcpudata/linux-with-clusters/cpu/cpu2/topology/core_siblings_list b/tests/virhostcpudata/linux-with-clusters/cpu/cpu2/topology/core_siblings_list new file mode 100644 index 0000000000..40c7bb2f1a --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/cpu/cpu2/topology/core_siblings_list @@ -0,0 +1 @@ +0-3 diff --git a/tests/virhostcpudata/linux-with-clusters/cpu/cpu2/topology/package_cpus_list b/tests/virhostcpudata/linux-with-clusters/cpu/cpu2/topology/package_cpus_list new file mode 100644 index 0000000000..40c7bb2f1a --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/cpu/cpu2/topology/package_cpus_list @@ -0,0 +1 @@ +0-3 diff --git a/tests/virhostcpudata/linux-with-clusters/cpu/cpu2/topology/physical_package_id b/tests/virhostcpudata/linux-with-clusters/cpu/cpu2/topology/physical_package_id new file mode 100644 index 0000000000..7facc89938 --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/cpu/cpu2/topology/physical_package_id @@ -0,0 +1 @@ +36 diff --git a/tests/virhostcpudata/linux-with-clusters/cpu/cpu2/topology/thread_siblings_list b/tests/virhostcpudata/linux-with-clusters/cpu/cpu2/topology/thread_siblings_list new file mode 100644 index 0000000000..7a9857542a --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/cpu/cpu2/topology/thread_siblings_list @@ -0,0 +1 @@ +2-3 diff --git a/tests/virhostcpudata/linux-with-clusters/cpu/cpu3/topology/cluster_cpus_list 
b/tests/virhostcpudata/linux-with-clusters/cpu/cpu3/topology/cluster_cpus_list new file mode 100644 index 0000000000..7a9857542a --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/cpu/cpu3/topology/cluster_cpus_list @@ -0,0 +1 @@ +2-3 diff --git a/tests/virhostcpudata/linux-with-clusters/cpu/cpu3/topology/cluster_id b/tests/virhostcpudata/linux-with-clusters/cpu/cpu3/topology/cluster_id new file mode 100644 index 0000000000..d00491fd7e --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/cpu/cpu3/topology/cluster_id @@ -0,0 +1 @@ +1 diff --git a/tests/virhostcpudata/linux-with-clusters/cpu/cpu3/topology/core_cpus_list b/tests/virhostcpudata/linux-with-clusters/cpu/cpu3/topology/core_cpus_list new file mode 100644 index 0000000000..7a9857542a --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/cpu/cpu3/topology/core_cpus_list @@ -0,0 +1 @@ +2-3 diff --git a/tests/virhostcpudata/linux-with-clusters/cpu/cpu3/topology/core_id b/tests/virhostcpudata/linux-with-clusters/cpu/cpu3/topology/core_id new file mode 100644 index 0000000000..d00491fd7e --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/cpu/cpu3/topology/core_id @@ -0,0 +1 @@ +1 diff --git a/tests/virhostcpudata/linux-with-clusters/cpu/cpu3/topology/core_siblings_list b/tests/virhostcpudata/linux-with-clusters/cpu/cpu3/topology/core_siblings_list new file mode 100644 index 0000000000..40c7bb2f1a --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/cpu/cpu3/topology/core_siblings_list @@ -0,0 +1 @@ +0-3 diff --git a/tests/virhostcpudata/linux-with-clusters/cpu/cpu3/topology/package_cpus_list b/tests/virhostcpudata/linux-with-clusters/cpu/cpu3/topology/package_cpus_list new file mode 100644 index 0000000000..40c7bb2f1a --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/cpu/cpu3/topology/package_cpus_list @@ -0,0 +1 @@ +0-3 diff --git a/tests/virhostcpudata/linux-with-clusters/cpu/cpu3/topology/physical_package_id b/tests/virhostcpudata/linux-with-clusters/cpu/cpu3/topology/physical_package_id new file mode 100644 index 0000000000..7facc89938 --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/cpu/cpu3/topology/physical_package_id @@ -0,0 +1 @@ +36 diff --git a/tests/virhostcpudata/linux-with-clusters/cpu/cpu3/topology/thread_siblings_list b/tests/virhostcpudata/linux-with-clusters/cpu/cpu3/topology/thread_siblings_list new file mode 100644 index 0000000000..7a9857542a --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/cpu/cpu3/topology/thread_siblings_list @@ -0,0 +1 @@ +2-3 diff --git a/tests/virhostcpudata/linux-with-clusters/cpu/cpu4/topology/cluster_cpus_list b/tests/virhostcpudata/linux-with-clusters/cpu/cpu4/topology/cluster_cpus_list new file mode 100644 index 0000000000..e66d883ade --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/cpu/cpu4/topology/cluster_cpus_list @@ -0,0 +1 @@ +4-5 diff --git a/tests/virhostcpudata/linux-with-clusters/cpu/cpu4/topology/cluster_id b/tests/virhostcpudata/linux-with-clusters/cpu/cpu4/topology/cluster_id new file mode 100644 index 0000000000..9183bf03fc --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/cpu/cpu4/topology/cluster_id @@ -0,0 +1 @@ +256 diff --git a/tests/virhostcpudata/linux-with-clusters/cpu/cpu4/topology/core_cpus_list b/tests/virhostcpudata/linux-with-clusters/cpu/cpu4/topology/core_cpus_list new file mode 100644 index 0000000000..e66d883ade --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/cpu/cpu4/topology/core_cpus_list @@ -0,0 +1 @@ +4-5 diff --git 
a/tests/virhostcpudata/linux-with-clusters/cpu/cpu4/topology/core_id b/tests/virhostcpudata/linux-with-clusters/cpu/cpu4/topology/core_id new file mode 100644 index 0000000000..9183bf03fc --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/cpu/cpu4/topology/core_id @@ -0,0 +1 @@ +256 diff --git a/tests/virhostcpudata/linux-with-clusters/cpu/cpu4/topology/core_siblings_list b/tests/virhostcpudata/linux-with-clusters/cpu/cpu4/topology/core_siblings_list new file mode 100644 index 0000000000..93fccd3cc6 --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/cpu/cpu4/topology/core_siblings_list @@ -0,0 +1 @@ +4-7 diff --git a/tests/virhostcpudata/linux-with-clusters/cpu/cpu4/topology/package_cpus_list b/tests/virhostcpudata/linux-with-clusters/cpu/cpu4/topology/package_cpus_list new file mode 100644 index 0000000000..93fccd3cc6 --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/cpu/cpu4/topology/package_cpus_list @@ -0,0 +1 @@ +4-7 diff --git a/tests/virhostcpudata/linux-with-clusters/cpu/cpu4/topology/physical_package_id b/tests/virhostcpudata/linux-with-clusters/cpu/cpu4/topology/physical_package_id new file mode 100644 index 0000000000..58cecca290 --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/cpu/cpu4/topology/physical_package_id @@ -0,0 +1 @@ +3180 diff --git a/tests/virhostcpudata/linux-with-clusters/cpu/cpu4/topology/thread_siblings_list b/tests/virhostcpudata/linux-with-clusters/cpu/cpu4/topology/thread_siblings_list new file mode 100644 index 0000000000..e66d883ade --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/cpu/cpu4/topology/thread_siblings_list @@ -0,0 +1 @@ +4-5 diff --git a/tests/virhostcpudata/linux-with-clusters/cpu/cpu5/topology/cluster_cpus_list b/tests/virhostcpudata/linux-with-clusters/cpu/cpu5/topology/cluster_cpus_list new file mode 100644 index 0000000000..e66d883ade --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/cpu/cpu5/topology/cluster_cpus_list @@ -0,0 +1 @@ +4-5 diff --git a/tests/virhostcpudata/linux-with-clusters/cpu/cpu5/topology/cluster_id b/tests/virhostcpudata/linux-with-clusters/cpu/cpu5/topology/cluster_id new file mode 100644 index 0000000000..9183bf03fc --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/cpu/cpu5/topology/cluster_id @@ -0,0 +1 @@ +256 diff --git a/tests/virhostcpudata/linux-with-clusters/cpu/cpu5/topology/core_cpus_list b/tests/virhostcpudata/linux-with-clusters/cpu/cpu5/topology/core_cpus_list new file mode 100644 index 0000000000..e66d883ade --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/cpu/cpu5/topology/core_cpus_list @@ -0,0 +1 @@ +4-5 diff --git a/tests/virhostcpudata/linux-with-clusters/cpu/cpu5/topology/core_id b/tests/virhostcpudata/linux-with-clusters/cpu/cpu5/topology/core_id new file mode 100644 index 0000000000..9183bf03fc --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/cpu/cpu5/topology/core_id @@ -0,0 +1 @@ +256 diff --git a/tests/virhostcpudata/linux-with-clusters/cpu/cpu5/topology/core_siblings_list b/tests/virhostcpudata/linux-with-clusters/cpu/cpu5/topology/core_siblings_list new file mode 100644 index 0000000000..93fccd3cc6 --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/cpu/cpu5/topology/core_siblings_list @@ -0,0 +1 @@ +4-7 diff --git a/tests/virhostcpudata/linux-with-clusters/cpu/cpu5/topology/package_cpus_list b/tests/virhostcpudata/linux-with-clusters/cpu/cpu5/topology/package_cpus_list new file mode 100644 index 0000000000..93fccd3cc6 --- /dev/null +++ 
b/tests/virhostcpudata/linux-with-clusters/cpu/cpu5/topology/package_cpus_list @@ -0,0 +1 @@ +4-7 diff --git a/tests/virhostcpudata/linux-with-clusters/cpu/cpu5/topology/physical_package_id b/tests/virhostcpudata/linux-with-clusters/cpu/cpu5/topology/physical_package_id new file mode 100644 index 0000000000..58cecca290 --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/cpu/cpu5/topology/physical_package_id @@ -0,0 +1 @@ +3180 diff --git a/tests/virhostcpudata/linux-with-clusters/cpu/cpu5/topology/thread_siblings_list b/tests/virhostcpudata/linux-with-clusters/cpu/cpu5/topology/thread_siblings_list new file mode 100644 index 0000000000..e66d883ade --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/cpu/cpu5/topology/thread_siblings_list @@ -0,0 +1 @@ +4-5 diff --git a/tests/virhostcpudata/linux-with-clusters/cpu/cpu6/topology/cluster_cpus_list b/tests/virhostcpudata/linux-with-clusters/cpu/cpu6/topology/cluster_cpus_list new file mode 100644 index 0000000000..fdd9f37517 --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/cpu/cpu6/topology/cluster_cpus_list @@ -0,0 +1 @@ +6-7 diff --git a/tests/virhostcpudata/linux-with-clusters/cpu/cpu6/topology/cluster_id b/tests/virhostcpudata/linux-with-clusters/cpu/cpu6/topology/cluster_id new file mode 100644 index 0000000000..a700e79997 --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/cpu/cpu6/topology/cluster_id @@ -0,0 +1 @@ +257 diff --git a/tests/virhostcpudata/linux-with-clusters/cpu/cpu6/topology/core_cpus_list b/tests/virhostcpudata/linux-with-clusters/cpu/cpu6/topology/core_cpus_list new file mode 100644 index 0000000000..fdd9f37517 --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/cpu/cpu6/topology/core_cpus_list @@ -0,0 +1 @@ +6-7 diff --git a/tests/virhostcpudata/linux-with-clusters/cpu/cpu6/topology/core_id b/tests/virhostcpudata/linux-with-clusters/cpu/cpu6/topology/core_id new file mode 100644 index 0000000000..a700e79997 --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/cpu/cpu6/topology/core_id @@ -0,0 +1 @@ +257 diff --git a/tests/virhostcpudata/linux-with-clusters/cpu/cpu6/topology/core_siblings_list b/tests/virhostcpudata/linux-with-clusters/cpu/cpu6/topology/core_siblings_list new file mode 100644 index 0000000000..93fccd3cc6 --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/cpu/cpu6/topology/core_siblings_list @@ -0,0 +1 @@ +4-7 diff --git a/tests/virhostcpudata/linux-with-clusters/cpu/cpu6/topology/package_cpus_list b/tests/virhostcpudata/linux-with-clusters/cpu/cpu6/topology/package_cpus_list new file mode 100644 index 0000000000..93fccd3cc6 --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/cpu/cpu6/topology/package_cpus_list @@ -0,0 +1 @@ +4-7 diff --git a/tests/virhostcpudata/linux-with-clusters/cpu/cpu6/topology/physical_package_id b/tests/virhostcpudata/linux-with-clusters/cpu/cpu6/topology/physical_package_id new file mode 100644 index 0000000000..58cecca290 --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/cpu/cpu6/topology/physical_package_id @@ -0,0 +1 @@ +3180 diff --git a/tests/virhostcpudata/linux-with-clusters/cpu/cpu6/topology/thread_siblings_list b/tests/virhostcpudata/linux-with-clusters/cpu/cpu6/topology/thread_siblings_list new file mode 100644 index 0000000000..fdd9f37517 --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/cpu/cpu6/topology/thread_siblings_list @@ -0,0 +1 @@ +6-7 diff --git a/tests/virhostcpudata/linux-with-clusters/cpu/cpu7/topology/cluster_cpus_list 
b/tests/virhostcpudata/linux-with-clusters/cpu/cpu7/topology/cluster_cpus_list new file mode 100644 index 0000000000..fdd9f37517 --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/cpu/cpu7/topology/cluster_cpus_list @@ -0,0 +1 @@ +6-7 diff --git a/tests/virhostcpudata/linux-with-clusters/cpu/cpu7/topology/cluster_id b/tests/virhostcpudata/linux-with-clusters/cpu/cpu7/topology/cluster_id new file mode 100644 index 0000000000..a700e79997 --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/cpu/cpu7/topology/cluster_id @@ -0,0 +1 @@ +257 diff --git a/tests/virhostcpudata/linux-with-clusters/cpu/cpu7/topology/core_cpus_list b/tests/virhostcpudata/linux-with-clusters/cpu/cpu7/topology/core_cpus_list new file mode 100644 index 0000000000..fdd9f37517 --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/cpu/cpu7/topology/core_cpus_list @@ -0,0 +1 @@ +6-7 diff --git a/tests/virhostcpudata/linux-with-clusters/cpu/cpu7/topology/core_id b/tests/virhostcpudata/linux-with-clusters/cpu/cpu7/topology/core_id new file mode 100644 index 0000000000..a700e79997 --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/cpu/cpu7/topology/core_id @@ -0,0 +1 @@ +257 diff --git a/tests/virhostcpudata/linux-with-clusters/cpu/cpu7/topology/core_siblings_list b/tests/virhostcpudata/linux-with-clusters/cpu/cpu7/topology/core_siblings_list new file mode 100644 index 0000000000..93fccd3cc6 --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/cpu/cpu7/topology/core_siblings_list @@ -0,0 +1 @@ +4-7 diff --git a/tests/virhostcpudata/linux-with-clusters/cpu/cpu7/topology/package_cpus_list b/tests/virhostcpudata/linux-with-clusters/cpu/cpu7/topology/package_cpus_list new file mode 100644 index 0000000000..93fccd3cc6 --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/cpu/cpu7/topology/package_cpus_list @@ -0,0 +1 @@ +4-7 diff --git a/tests/virhostcpudata/linux-with-clusters/cpu/cpu7/topology/physical_package_id b/tests/virhostcpudata/linux-with-clusters/cpu/cpu7/topology/physical_package_id new file mode 100644 index 0000000000..58cecca290 --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/cpu/cpu7/topology/physical_package_id @@ -0,0 +1 @@ +3180 diff --git a/tests/virhostcpudata/linux-with-clusters/cpu/cpu7/topology/thread_siblings_list b/tests/virhostcpudata/linux-with-clusters/cpu/cpu7/topology/thread_siblings_list new file mode 100644 index 0000000000..fdd9f37517 --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/cpu/cpu7/topology/thread_siblings_list @@ -0,0 +1 @@ +6-7 diff --git a/tests/virhostcpudata/linux-with-clusters/cpu/online b/tests/virhostcpudata/linux-with-clusters/cpu/online new file mode 100644 index 0000000000..5f4593c34a --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/cpu/online @@ -0,0 +1 @@ +0-223 diff --git a/tests/virhostcpudata/linux-with-clusters/cpu/present b/tests/virhostcpudata/linux-with-clusters/cpu/present new file mode 100644 index 0000000000..5f4593c34a --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/cpu/present @@ -0,0 +1 @@ +0-223 diff --git a/tests/virhostcpudata/linux-with-clusters/node/node0/cpu0 b/tests/virhostcpudata/linux-with-clusters/node/node0/cpu0 new file mode 120000 index 0000000000..c841bea28b --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/node/node0/cpu0 @@ -0,0 +1 @@ +../../cpu/cpu0 \ No newline at end of file diff --git a/tests/virhostcpudata/linux-with-clusters/node/node0/cpu1 b/tests/virhostcpudata/linux-with-clusters/node/node0/cpu1 new file mode 120000 index 
0000000000..5f4536279e --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/node/node0/cpu1 @@ -0,0 +1 @@ +../../cpu/cpu1 \ No newline at end of file diff --git a/tests/virhostcpudata/linux-with-clusters/node/node0/cpu2 b/tests/virhostcpudata/linux-with-clusters/node/node0/cpu2 new file mode 120000 index 0000000000..2dcca332ce --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/node/node0/cpu2 @@ -0,0 +1 @@ +../../cpu/cpu2 \ No newline at end of file diff --git a/tests/virhostcpudata/linux-with-clusters/node/node0/cpu3 b/tests/virhostcpudata/linux-with-clusters/node/node0/cpu3 new file mode 120000 index 0000000000..c7690e5aa6 --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/node/node0/cpu3 @@ -0,0 +1 @@ +../../cpu/cpu3 \ No newline at end of file diff --git a/tests/virhostcpudata/linux-with-clusters/node/node0/cpulist b/tests/virhostcpudata/linux-with-clusters/node/node0/cpulist new file mode 100644 index 0000000000..40c7bb2f1a --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/node/node0/cpulist @@ -0,0 +1 @@ +0-3 diff --git a/tests/virhostcpudata/linux-with-clusters/node/node1/cpu4 b/tests/virhostcpudata/linux-with-clusters/node/node1/cpu4 new file mode 120000 index 0000000000..9e77a64eb4 --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/node/node1/cpu4 @@ -0,0 +1 @@ +../../cpu/cpu4 \ No newline at end of file diff --git a/tests/virhostcpudata/linux-with-clusters/node/node1/cpu5 b/tests/virhostcpudata/linux-with-clusters/node/node1/cpu5 new file mode 120000 index 0000000000..cc07c3b97b --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/node/node1/cpu5 @@ -0,0 +1 @@ +../../cpu/cpu5 \ No newline at end of file diff --git a/tests/virhostcpudata/linux-with-clusters/node/node1/cpu6 b/tests/virhostcpudata/linux-with-clusters/node/node1/cpu6 new file mode 120000 index 0000000000..2e7576354f --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/node/node1/cpu6 @@ -0,0 +1 @@ +../../cpu/cpu6 \ No newline at end of file diff --git a/tests/virhostcpudata/linux-with-clusters/node/node1/cpu7 b/tests/virhostcpudata/linux-with-clusters/node/node1/cpu7 new file mode 120000 index 0000000000..09e3f79b43 --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/node/node1/cpu7 @@ -0,0 +1 @@ +../../cpu/cpu7 \ No newline at end of file diff --git a/tests/virhostcpudata/linux-with-clusters/node/node1/cpulist b/tests/virhostcpudata/linux-with-clusters/node/node1/cpulist new file mode 100644 index 0000000000..93fccd3cc6 --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/node/node1/cpulist @@ -0,0 +1 @@ +4-7 diff --git a/tests/virhostcpudata/linux-with-clusters/node/online b/tests/virhostcpudata/linux-with-clusters/node/online new file mode 100644 index 0000000000..8b0fab869c --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/node/online @@ -0,0 +1 @@ +0-1 diff --git a/tests/virhostcpudata/linux-with-clusters/node/possible b/tests/virhostcpudata/linux-with-clusters/node/possible new file mode 100644 index 0000000000..8b0fab869c --- /dev/null +++ b/tests/virhostcpudata/linux-with-clusters/node/possible @@ -0,0 +1 @@ +0-1 diff --git a/tests/virhostcputest.c b/tests/virhostcputest.c index 0990013878..cf310cb4ce 100644 --- a/tests/virhostcputest.c +++ b/tests/virhostcputest.c @@ -273,6 +273,7 @@ mymain(void) {"subcores3", VIR_ARCH_PPC64}, {"with-frequency", VIR_ARCH_S390X}, {"with-die", VIR_ARCH_X86_64}, + {"with-clusters", VIR_ARCH_AARCH64}, }; if (virInitialize() < 0) -- 2.43.0

On Thu, Jan 11, 2024 at 15:26:33 +0100, Andrea Bolognani wrote:
> The data is taken from an HPE Apollo 70 machine, which uses aarch64 CPUs. It is interesting for us because non-dummy information about CPU clusters is exposed through sysfs.
> In order to keep things reasonable, the data was manually modified so that only 8 of the original 224 CPUs are included. Care has been taken to ensure that the topology is otherwise unaltered.
> Signed-off-by: Andrea Bolognani <abologna@redhat.com>
[...]
> --- tests/virhostcputest.c | 1 + 85 files changed, 194 insertions(+)
So much better! Reviewed-by: Peter Krempa <pkrempa@redhat.com>
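
For reference, the cluster-related information that the new linux-with-clusters test data captures for a single CPU looks roughly like this (file names and values copied from the diff above; paths relative to tests/virhostcpudata/linux-with-clusters/):

  cpu/cpu6/topology/physical_package_id   3180
  cpu/cpu6/topology/cluster_id            257
  cpu/cpu6/topology/core_id               257
  cpu/cpu6/topology/cluster_cpus_list     6-7
  cpu/cpu6/topology/core_cpus_list        6-7
  cpu/cpu6/topology/thread_siblings_list  6-7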

For machines that don't expose useful information through sysfs, the dummy ID 0 is used. https://issues.redhat.com/browse/RHEL-7043 Signed-off-by: Andrea Bolognani <abologna@redhat.com> Reviewed-by: Peter Krempa <pkrempa@redhat.com> --- src/conf/capabilities.c | 5 +- src/conf/capabilities.h | 1 + src/conf/schemas/capability.rng | 3 ++ src/libvirt_linux.syms | 1 + src/util/virhostcpu.c | 22 +++++++++ src/util/virhostcpu.h | 1 + tests/capabilityschemadata/caps-qemu-kvm.xml | 32 ++++++------- .../vircaps-aarch64-basic-clusters.xml | 16 +++---- .../vircaps2xmldata/vircaps-aarch64-basic.xml | 32 ++++++------- .../vircaps-x86_64-basic-dies.xml | 24 +++++----- .../vircaps2xmldata/vircaps-x86_64-basic.xml | 32 ++++++------- .../vircaps2xmldata/vircaps-x86_64-caches.xml | 16 +++---- tests/vircaps2xmldata/vircaps-x86_64-hmat.xml | 48 +++++++++---------- .../vircaps-x86_64-resctrl-cdp.xml | 24 +++++----- .../vircaps-x86_64-resctrl-cmt.xml | 24 +++++----- .../vircaps-x86_64-resctrl-fake-feature.xml | 24 +++++----- .../vircaps-x86_64-resctrl-skx-twocaches.xml | 2 +- .../vircaps-x86_64-resctrl-skx.xml | 2 +- .../vircaps-x86_64-resctrl.xml | 24 +++++----- 19 files changed, 182 insertions(+), 151 deletions(-) diff --git a/src/conf/capabilities.c b/src/conf/capabilities.c index 32badee7b3..02298e40a3 100644 --- a/src/conf/capabilities.c +++ b/src/conf/capabilities.c @@ -811,9 +811,10 @@ virCapsHostNUMACellCPUFormat(virBuffer *buf, return -1; virBufferAsprintf(&childBuf, - " socket_id='%d' die_id='%d' core_id='%d' siblings='%s'", + " socket_id='%d' die_id='%d' cluster_id='%d' core_id='%d' siblings='%s'", cpus[j].socket_id, cpus[j].die_id, + cpus[j].cluster_id, cpus[j].core_id, siblings); } @@ -1453,6 +1454,7 @@ virCapabilitiesFillCPUInfo(int cpu_id G_GNUC_UNUSED, if (virHostCPUGetSocket(cpu_id, &cpu->socket_id) < 0 || virHostCPUGetDie(cpu_id, &cpu->die_id) < 0 || + virHostCPUGetCluster(cpu_id, &cpu->cluster_id) < 0 || virHostCPUGetCore(cpu_id, &cpu->core_id) < 0) return -1; @@ -1712,6 +1714,7 @@ virCapabilitiesHostNUMAInitFake(virCapsHostNUMA *caps) if (tmp) { cpus[cid].id = id; cpus[cid].die_id = 0; + cpus[cid].cluster_id = 0; cpus[cid].socket_id = s; cpus[cid].core_id = c; cpus[cid].siblings = virBitmapNewCopy(siblings); diff --git a/src/conf/capabilities.h b/src/conf/capabilities.h index 9eaf6e2807..52e395de14 100644 --- a/src/conf/capabilities.h +++ b/src/conf/capabilities.h @@ -89,6 +89,7 @@ struct _virCapsHostNUMACellCPU { unsigned int id; unsigned int socket_id; unsigned int die_id; + unsigned int cluster_id; unsigned int core_id; virBitmap *siblings; }; diff --git a/src/conf/schemas/capability.rng b/src/conf/schemas/capability.rng index b1968df258..a1606941e7 100644 --- a/src/conf/schemas/capability.rng +++ b/src/conf/schemas/capability.rng @@ -201,6 +201,9 @@ <attribute name="die_id"> <ref name="unsignedInt"/> </attribute> + <attribute name="cluster_id"> + <ref name="unsignedInt"/> + </attribute> <attribute name="core_id"> <ref name="unsignedInt"/> </attribute> diff --git a/src/libvirt_linux.syms b/src/libvirt_linux.syms index 55649ae39c..004cbfee97 100644 --- a/src/libvirt_linux.syms +++ b/src/libvirt_linux.syms @@ -3,6 +3,7 @@ # # util/virhostcpu.h +virHostCPUGetCluster; virHostCPUGetCore; virHostCPUGetDie; virHostCPUGetInfoPopulateLinux; diff --git a/src/util/virhostcpu.c b/src/util/virhostcpu.c index 4027547e1e..a3781ca870 100644 --- a/src/util/virhostcpu.c +++ b/src/util/virhostcpu.c @@ -232,6 +232,28 @@ virHostCPUGetDie(unsigned int cpu, unsigned int *die) return 0; } +int 
+virHostCPUGetCluster(unsigned int cpu, unsigned int *cluster) +{ + int cluster_id; + int ret = virFileReadValueInt(&cluster_id, + "%s/cpu/cpu%u/topology/cluster_id", + SYSFS_SYSTEM_PATH, cpu); + + if (ret == -1) + return -1; + + /* If the file doesn't exists (old kernel) or the value contained + * in it is -1 (architecture without CPU clusters), report 0 to + * indicate the lack of information */ + if (ret == -2 || cluster_id < 0) + cluster_id = 0; + + *cluster = cluster_id; + + return 0; +} + int virHostCPUGetCore(unsigned int cpu, unsigned int *core) { diff --git a/src/util/virhostcpu.h b/src/util/virhostcpu.h index 5f0d43e069..d7e09bff22 100644 --- a/src/util/virhostcpu.h +++ b/src/util/virhostcpu.h @@ -68,6 +68,7 @@ int virHostCPUStatsAssign(virNodeCPUStatsPtr param, #ifdef __linux__ int virHostCPUGetSocket(unsigned int cpu, unsigned int *socket); int virHostCPUGetDie(unsigned int cpu, unsigned int *die); +int virHostCPUGetCluster(unsigned int cpu, unsigned int *cluster); int virHostCPUGetCore(unsigned int cpu, unsigned int *core); virBitmap *virHostCPUGetSiblingsList(unsigned int cpu); diff --git a/tests/capabilityschemadata/caps-qemu-kvm.xml b/tests/capabilityschemadata/caps-qemu-kvm.xml index acdbb362cc..317fa0885f 100644 --- a/tests/capabilityschemadata/caps-qemu-kvm.xml +++ b/tests/capabilityschemadata/caps-qemu-kvm.xml @@ -64,14 +64,14 @@ <sibling id='1' value='21'/> </distances> <cpus num='8'> - <cpu id='0' socket_id='0' die_id='0' core_id='0' siblings='0'/> - <cpu id='2' socket_id='0' die_id='0' core_id='1' siblings='2'/> - <cpu id='4' socket_id='0' die_id='0' core_id='2' siblings='4'/> - <cpu id='6' socket_id='0' die_id='0' core_id='3' siblings='6'/> - <cpu id='8' socket_id='0' die_id='0' core_id='4' siblings='8'/> - <cpu id='10' socket_id='0' die_id='0' core_id='5' siblings='10'/> - <cpu id='12' socket_id='0' die_id='0' core_id='6' siblings='12'/> - <cpu id='14' socket_id='0' die_id='0' core_id='7' siblings='14'/> + <cpu id='0' socket_id='0' die_id='0' cluster_id='0' core_id='0' siblings='0'/> + <cpu id='2' socket_id='0' die_id='0' cluster_id='0' core_id='1' siblings='2'/> + <cpu id='4' socket_id='0' die_id='0' cluster_id='0' core_id='2' siblings='4'/> + <cpu id='6' socket_id='0' die_id='0' cluster_id='0' core_id='3' siblings='6'/> + <cpu id='8' socket_id='0' die_id='0' cluster_id='0' core_id='4' siblings='8'/> + <cpu id='10' socket_id='0' die_id='0' cluster_id='0' core_id='5' siblings='10'/> + <cpu id='12' socket_id='0' die_id='0' cluster_id='0' core_id='6' siblings='12'/> + <cpu id='14' socket_id='0' die_id='0' cluster_id='0' core_id='7' siblings='14'/> </cpus> </cell> <cell id='1'> @@ -84,14 +84,14 @@ <sibling id='1' value='10'/> </distances> <cpus num='8'> - <cpu id='1' socket_id='1' die_id='0' core_id='0' siblings='1'/> - <cpu id='3' socket_id='1' die_id='0' core_id='1' siblings='3'/> - <cpu id='5' socket_id='1' die_id='0' core_id='2' siblings='5'/> - <cpu id='7' socket_id='1' die_id='0' core_id='3' siblings='7'/> - <cpu id='9' socket_id='1' die_id='0' core_id='4' siblings='9'/> - <cpu id='11' socket_id='1' die_id='0' core_id='5' siblings='11'/> - <cpu id='13' socket_id='1' die_id='0' core_id='6' siblings='13'/> - <cpu id='15' socket_id='1' die_id='0' core_id='7' siblings='15'/> + <cpu id='1' socket_id='1' die_id='0' cluster_id='0' core_id='0' siblings='1'/> + <cpu id='3' socket_id='1' die_id='0' cluster_id='0' core_id='1' siblings='3'/> + <cpu id='5' socket_id='1' die_id='0' cluster_id='0' core_id='2' siblings='5'/> + <cpu id='7' socket_id='1' die_id='0' cluster_id='0' 
core_id='3' siblings='7'/> + <cpu id='9' socket_id='1' die_id='0' cluster_id='0' core_id='4' siblings='9'/> + <cpu id='11' socket_id='1' die_id='0' cluster_id='0' core_id='5' siblings='11'/> + <cpu id='13' socket_id='1' die_id='0' cluster_id='0' core_id='6' siblings='13'/> + <cpu id='15' socket_id='1' die_id='0' cluster_id='0' core_id='7' siblings='15'/> </cpus> </cell> </cells> diff --git a/tests/vircaps2xmldata/vircaps-aarch64-basic-clusters.xml b/tests/vircaps2xmldata/vircaps-aarch64-basic-clusters.xml index fe61fc42cc..b37c8e7a20 100644 --- a/tests/vircaps2xmldata/vircaps-aarch64-basic-clusters.xml +++ b/tests/vircaps2xmldata/vircaps-aarch64-basic-clusters.xml @@ -14,10 +14,10 @@ <pages unit='KiB' size='2048'>4096</pages> <pages unit='KiB' size='1048576'>6144</pages> <cpus num='4'> - <cpu id='0' socket_id='36' die_id='0' core_id='0' siblings='0-1'/> - <cpu id='1' socket_id='36' die_id='0' core_id='0' siblings='0-1'/> - <cpu id='2' socket_id='36' die_id='0' core_id='1' siblings='2-3'/> - <cpu id='3' socket_id='36' die_id='0' core_id='1' siblings='2-3'/> + <cpu id='0' socket_id='36' die_id='0' cluster_id='0' core_id='0' siblings='0-1'/> + <cpu id='1' socket_id='36' die_id='0' cluster_id='0' core_id='0' siblings='0-1'/> + <cpu id='2' socket_id='36' die_id='0' cluster_id='1' core_id='1' siblings='2-3'/> + <cpu id='3' socket_id='36' die_id='0' cluster_id='1' core_id='1' siblings='2-3'/> </cpus> </cell> <cell id='1'> @@ -26,10 +26,10 @@ <pages unit='KiB' size='2048'>6144</pages> <pages unit='KiB' size='1048576'>8192</pages> <cpus num='4'> - <cpu id='4' socket_id='3180' die_id='0' core_id='256' siblings='4-5'/> - <cpu id='5' socket_id='3180' die_id='0' core_id='256' siblings='4-5'/> - <cpu id='6' socket_id='3180' die_id='0' core_id='257' siblings='6-7'/> - <cpu id='7' socket_id='3180' die_id='0' core_id='257' siblings='6-7'/> + <cpu id='4' socket_id='3180' die_id='0' cluster_id='256' core_id='256' siblings='4-5'/> + <cpu id='5' socket_id='3180' die_id='0' cluster_id='256' core_id='256' siblings='4-5'/> + <cpu id='6' socket_id='3180' die_id='0' cluster_id='257' core_id='257' siblings='6-7'/> + <cpu id='7' socket_id='3180' die_id='0' cluster_id='257' core_id='257' siblings='6-7'/> </cpus> </cell> </cells> diff --git a/tests/vircaps2xmldata/vircaps-aarch64-basic.xml b/tests/vircaps2xmldata/vircaps-aarch64-basic.xml index 0a04052c40..5533ae0586 100644 --- a/tests/vircaps2xmldata/vircaps-aarch64-basic.xml +++ b/tests/vircaps2xmldata/vircaps-aarch64-basic.xml @@ -16,10 +16,10 @@ <pages unit='KiB' size='2048'>4096</pages> <pages unit='KiB' size='1048576'>6144</pages> <cpus num='4'> - <cpu id='0' socket_id='0' die_id='0' core_id='0' siblings='0'/> - <cpu id='1' socket_id='0' die_id='0' core_id='1' siblings='1'/> - <cpu id='2' socket_id='0' die_id='0' core_id='2' siblings='2'/> - <cpu id='3' socket_id='0' die_id='0' core_id='3' siblings='3'/> + <cpu id='0' socket_id='0' die_id='0' cluster_id='0' core_id='0' siblings='0'/> + <cpu id='1' socket_id='0' die_id='0' cluster_id='0' core_id='1' siblings='1'/> + <cpu id='2' socket_id='0' die_id='0' cluster_id='0' core_id='2' siblings='2'/> + <cpu id='3' socket_id='0' die_id='0' cluster_id='0' core_id='3' siblings='3'/> </cpus> </cell> <cell id='1'> @@ -28,10 +28,10 @@ <pages unit='KiB' size='2048'>6144</pages> <pages unit='KiB' size='1048576'>8192</pages> <cpus num='4'> - <cpu id='4' socket_id='1' die_id='0' core_id='4' siblings='4'/> - <cpu id='5' socket_id='1' die_id='0' core_id='5' siblings='5'/> - <cpu id='6' socket_id='1' die_id='0' core_id='6' 
siblings='6'/> - <cpu id='7' socket_id='1' die_id='0' core_id='7' siblings='7'/> + <cpu id='4' socket_id='1' die_id='0' cluster_id='0' core_id='4' siblings='4'/> + <cpu id='5' socket_id='1' die_id='0' cluster_id='0' core_id='5' siblings='5'/> + <cpu id='6' socket_id='1' die_id='0' cluster_id='0' core_id='6' siblings='6'/> + <cpu id='7' socket_id='1' die_id='0' cluster_id='0' core_id='7' siblings='7'/> </cpus> </cell> <cell id='2'> @@ -40,10 +40,10 @@ <pages unit='KiB' size='2048'>8192</pages> <pages unit='KiB' size='1048576'>10240</pages> <cpus num='4'> - <cpu id='8' socket_id='2' die_id='0' core_id='8' siblings='8'/> - <cpu id='9' socket_id='2' die_id='0' core_id='9' siblings='9'/> - <cpu id='10' socket_id='2' die_id='0' core_id='10' siblings='10'/> - <cpu id='11' socket_id='2' die_id='0' core_id='11' siblings='11'/> + <cpu id='8' socket_id='2' die_id='0' cluster_id='0' core_id='8' siblings='8'/> + <cpu id='9' socket_id='2' die_id='0' cluster_id='0' core_id='9' siblings='9'/> + <cpu id='10' socket_id='2' die_id='0' cluster_id='0' core_id='10' siblings='10'/> + <cpu id='11' socket_id='2' die_id='0' cluster_id='0' core_id='11' siblings='11'/> </cpus> </cell> <cell id='3'> @@ -52,10 +52,10 @@ <pages unit='KiB' size='2048'>10240</pages> <pages unit='KiB' size='1048576'>12288</pages> <cpus num='4'> - <cpu id='12' socket_id='3' die_id='0' core_id='12' siblings='12'/> - <cpu id='13' socket_id='3' die_id='0' core_id='13' siblings='13'/> - <cpu id='14' socket_id='3' die_id='0' core_id='14' siblings='14'/> - <cpu id='15' socket_id='3' die_id='0' core_id='15' siblings='15'/> + <cpu id='12' socket_id='3' die_id='0' cluster_id='0' core_id='12' siblings='12'/> + <cpu id='13' socket_id='3' die_id='0' cluster_id='0' core_id='13' siblings='13'/> + <cpu id='14' socket_id='3' die_id='0' cluster_id='0' core_id='14' siblings='14'/> + <cpu id='15' socket_id='3' die_id='0' cluster_id='0' core_id='15' siblings='15'/> </cpus> </cell> </cells> diff --git a/tests/vircaps2xmldata/vircaps-x86_64-basic-dies.xml b/tests/vircaps2xmldata/vircaps-x86_64-basic-dies.xml index 8a3ca2d13c..c86dc4defc 100644 --- a/tests/vircaps2xmldata/vircaps-x86_64-basic-dies.xml +++ b/tests/vircaps2xmldata/vircaps-x86_64-basic-dies.xml @@ -14,18 +14,18 @@ <pages unit='KiB' size='2048'>4096</pages> <pages unit='KiB' size='1048576'>6144</pages> <cpus num='12'> - <cpu id='0' socket_id='0' die_id='0' core_id='0' siblings='0'/> - <cpu id='1' socket_id='0' die_id='0' core_id='1' siblings='1'/> - <cpu id='2' socket_id='0' die_id='1' core_id='0' siblings='2'/> - <cpu id='3' socket_id='0' die_id='1' core_id='1' siblings='3'/> - <cpu id='4' socket_id='0' die_id='2' core_id='0' siblings='4'/> - <cpu id='5' socket_id='0' die_id='2' core_id='1' siblings='5'/> - <cpu id='6' socket_id='1' die_id='0' core_id='0' siblings='6'/> - <cpu id='7' socket_id='1' die_id='0' core_id='1' siblings='7'/> - <cpu id='8' socket_id='1' die_id='1' core_id='0' siblings='8'/> - <cpu id='9' socket_id='1' die_id='1' core_id='1' siblings='9'/> - <cpu id='10' socket_id='1' die_id='2' core_id='0' siblings='10'/> - <cpu id='11' socket_id='1' die_id='2' core_id='1' siblings='11'/> + <cpu id='0' socket_id='0' die_id='0' cluster_id='0' core_id='0' siblings='0'/> + <cpu id='1' socket_id='0' die_id='0' cluster_id='0' core_id='1' siblings='1'/> + <cpu id='2' socket_id='0' die_id='1' cluster_id='0' core_id='0' siblings='2'/> + <cpu id='3' socket_id='0' die_id='1' cluster_id='0' core_id='1' siblings='3'/> + <cpu id='4' socket_id='0' die_id='2' cluster_id='0' core_id='0' siblings='4'/> + 
<cpu id='5' socket_id='0' die_id='2' cluster_id='0' core_id='1' siblings='5'/> + <cpu id='6' socket_id='1' die_id='0' cluster_id='0' core_id='0' siblings='6'/> + <cpu id='7' socket_id='1' die_id='0' cluster_id='0' core_id='1' siblings='7'/> + <cpu id='8' socket_id='1' die_id='1' cluster_id='0' core_id='0' siblings='8'/> + <cpu id='9' socket_id='1' die_id='1' cluster_id='0' core_id='1' siblings='9'/> + <cpu id='10' socket_id='1' die_id='2' cluster_id='0' core_id='0' siblings='10'/> + <cpu id='11' socket_id='1' die_id='2' cluster_id='0' core_id='1' siblings='11'/> </cpus> </cell> </cells> diff --git a/tests/vircaps2xmldata/vircaps-x86_64-basic.xml b/tests/vircaps2xmldata/vircaps-x86_64-basic.xml index 4da09f889c..9ae155d571 100644 --- a/tests/vircaps2xmldata/vircaps-x86_64-basic.xml +++ b/tests/vircaps2xmldata/vircaps-x86_64-basic.xml @@ -14,10 +14,10 @@ <pages unit='KiB' size='2048'>4096</pages> <pages unit='KiB' size='1048576'>6144</pages> <cpus num='4'> - <cpu id='0' socket_id='0' die_id='0' core_id='0' siblings='0'/> - <cpu id='1' socket_id='0' die_id='0' core_id='1' siblings='1'/> - <cpu id='2' socket_id='0' die_id='0' core_id='2' siblings='2'/> - <cpu id='3' socket_id='0' die_id='0' core_id='3' siblings='3'/> + <cpu id='0' socket_id='0' die_id='0' cluster_id='0' core_id='0' siblings='0'/> + <cpu id='1' socket_id='0' die_id='0' cluster_id='0' core_id='1' siblings='1'/> + <cpu id='2' socket_id='0' die_id='0' cluster_id='0' core_id='2' siblings='2'/> + <cpu id='3' socket_id='0' die_id='0' cluster_id='0' core_id='3' siblings='3'/> </cpus> </cell> <cell id='1'> @@ -26,10 +26,10 @@ <pages unit='KiB' size='2048'>6144</pages> <pages unit='KiB' size='1048576'>8192</pages> <cpus num='4'> - <cpu id='4' socket_id='1' die_id='0' core_id='4' siblings='4'/> - <cpu id='5' socket_id='1' die_id='0' core_id='5' siblings='5'/> - <cpu id='6' socket_id='1' die_id='0' core_id='6' siblings='6'/> - <cpu id='7' socket_id='1' die_id='0' core_id='7' siblings='7'/> + <cpu id='4' socket_id='1' die_id='0' cluster_id='0' core_id='4' siblings='4'/> + <cpu id='5' socket_id='1' die_id='0' cluster_id='0' core_id='5' siblings='5'/> + <cpu id='6' socket_id='1' die_id='0' cluster_id='0' core_id='6' siblings='6'/> + <cpu id='7' socket_id='1' die_id='0' cluster_id='0' core_id='7' siblings='7'/> </cpus> </cell> <cell id='2'> @@ -38,10 +38,10 @@ <pages unit='KiB' size='2048'>8192</pages> <pages unit='KiB' size='1048576'>10240</pages> <cpus num='4'> - <cpu id='8' socket_id='2' die_id='0' core_id='8' siblings='8'/> - <cpu id='9' socket_id='2' die_id='0' core_id='9' siblings='9'/> - <cpu id='10' socket_id='2' die_id='0' core_id='10' siblings='10'/> - <cpu id='11' socket_id='2' die_id='0' core_id='11' siblings='11'/> + <cpu id='8' socket_id='2' die_id='0' cluster_id='0' core_id='8' siblings='8'/> + <cpu id='9' socket_id='2' die_id='0' cluster_id='0' core_id='9' siblings='9'/> + <cpu id='10' socket_id='2' die_id='0' cluster_id='0' core_id='10' siblings='10'/> + <cpu id='11' socket_id='2' die_id='0' cluster_id='0' core_id='11' siblings='11'/> </cpus> </cell> <cell id='3'> @@ -50,10 +50,10 @@ <pages unit='KiB' size='2048'>10240</pages> <pages unit='KiB' size='1048576'>12288</pages> <cpus num='4'> - <cpu id='12' socket_id='3' die_id='0' core_id='12' siblings='12'/> - <cpu id='13' socket_id='3' die_id='0' core_id='13' siblings='13'/> - <cpu id='14' socket_id='3' die_id='0' core_id='14' siblings='14'/> - <cpu id='15' socket_id='3' die_id='0' core_id='15' siblings='15'/> + <cpu id='12' socket_id='3' die_id='0' cluster_id='0' core_id='12' 
siblings='12'/> + <cpu id='13' socket_id='3' die_id='0' cluster_id='0' core_id='13' siblings='13'/> + <cpu id='14' socket_id='3' die_id='0' cluster_id='0' core_id='14' siblings='14'/> + <cpu id='15' socket_id='3' die_id='0' cluster_id='0' core_id='15' siblings='15'/> </cpus> </cell> </cells> diff --git a/tests/vircaps2xmldata/vircaps-x86_64-caches.xml b/tests/vircaps2xmldata/vircaps-x86_64-caches.xml index 28f00c0a90..05b33147b7 100644 --- a/tests/vircaps2xmldata/vircaps-x86_64-caches.xml +++ b/tests/vircaps2xmldata/vircaps-x86_64-caches.xml @@ -17,14 +17,14 @@ <pages unit='KiB' size='2048'>4096</pages> <pages unit='KiB' size='1048576'>6144</pages> <cpus num='8'> - <cpu id='0' socket_id='0' die_id='0' core_id='0' siblings='0,4'/> - <cpu id='1' socket_id='0' die_id='0' core_id='1' siblings='1,5'/> - <cpu id='2' socket_id='0' die_id='0' core_id='2' siblings='2,6'/> - <cpu id='3' socket_id='0' die_id='0' core_id='3' siblings='3,7'/> - <cpu id='4' socket_id='0' die_id='0' core_id='0' siblings='0,4'/> - <cpu id='5' socket_id='0' die_id='0' core_id='1' siblings='1,5'/> - <cpu id='6' socket_id='0' die_id='0' core_id='2' siblings='2,6'/> - <cpu id='7' socket_id='0' die_id='0' core_id='3' siblings='3,7'/> + <cpu id='0' socket_id='0' die_id='0' cluster_id='0' core_id='0' siblings='0,4'/> + <cpu id='1' socket_id='0' die_id='0' cluster_id='0' core_id='1' siblings='1,5'/> + <cpu id='2' socket_id='0' die_id='0' cluster_id='0' core_id='2' siblings='2,6'/> + <cpu id='3' socket_id='0' die_id='0' cluster_id='0' core_id='3' siblings='3,7'/> + <cpu id='4' socket_id='0' die_id='0' cluster_id='0' core_id='0' siblings='0,4'/> + <cpu id='5' socket_id='0' die_id='0' cluster_id='0' core_id='1' siblings='1,5'/> + <cpu id='6' socket_id='0' die_id='0' cluster_id='0' core_id='2' siblings='2,6'/> + <cpu id='7' socket_id='0' die_id='0' cluster_id='0' core_id='3' siblings='3,7'/> </cpus> </cell> </cells> diff --git a/tests/vircaps2xmldata/vircaps-x86_64-hmat.xml b/tests/vircaps2xmldata/vircaps-x86_64-hmat.xml index 6fe5751666..2b97354bf3 100644 --- a/tests/vircaps2xmldata/vircaps-x86_64-hmat.xml +++ b/tests/vircaps2xmldata/vircaps-x86_64-hmat.xml @@ -25,30 +25,30 @@ <line value='16' unit='B'/> </cache> <cpus num='24'> - <cpu id='0' socket_id='0' die_id='0' core_id='0' siblings='0'/> - <cpu id='1' socket_id='1' die_id='0' core_id='0' siblings='1'/> - <cpu id='2' socket_id='2' die_id='0' core_id='0' siblings='2'/> - <cpu id='3' socket_id='3' die_id='0' core_id='0' siblings='3'/> - <cpu id='4' socket_id='4' die_id='0' core_id='0' siblings='4'/> - <cpu id='5' socket_id='5' die_id='0' core_id='0' siblings='5'/> - <cpu id='6' socket_id='6' die_id='0' core_id='0' siblings='6'/> - <cpu id='7' socket_id='7' die_id='0' core_id='0' siblings='7'/> - <cpu id='8' socket_id='8' die_id='0' core_id='0' siblings='8'/> - <cpu id='9' socket_id='9' die_id='0' core_id='0' siblings='9'/> - <cpu id='10' socket_id='10' die_id='0' core_id='0' siblings='10'/> - <cpu id='11' socket_id='11' die_id='0' core_id='0' siblings='11'/> - <cpu id='12' socket_id='12' die_id='0' core_id='0' siblings='12'/> - <cpu id='13' socket_id='13' die_id='0' core_id='0' siblings='13'/> - <cpu id='14' socket_id='14' die_id='0' core_id='0' siblings='14'/> - <cpu id='15' socket_id='15' die_id='0' core_id='0' siblings='15'/> - <cpu id='16' socket_id='16' die_id='0' core_id='0' siblings='16'/> - <cpu id='17' socket_id='17' die_id='0' core_id='0' siblings='17'/> - <cpu id='18' socket_id='18' die_id='0' core_id='0' siblings='18'/> - <cpu id='19' socket_id='19' die_id='0' 
core_id='0' siblings='19'/> - <cpu id='20' socket_id='20' die_id='0' core_id='0' siblings='20'/> - <cpu id='21' socket_id='21' die_id='0' core_id='0' siblings='21'/> - <cpu id='22' socket_id='22' die_id='0' core_id='0' siblings='22'/> - <cpu id='23' socket_id='23' die_id='0' core_id='0' siblings='23'/> + <cpu id='0' socket_id='0' die_id='0' cluster_id='0' core_id='0' siblings='0'/> + <cpu id='1' socket_id='1' die_id='0' cluster_id='0' core_id='0' siblings='1'/> + <cpu id='2' socket_id='2' die_id='0' cluster_id='0' core_id='0' siblings='2'/> + <cpu id='3' socket_id='3' die_id='0' cluster_id='0' core_id='0' siblings='3'/> + <cpu id='4' socket_id='4' die_id='0' cluster_id='0' core_id='0' siblings='4'/> + <cpu id='5' socket_id='5' die_id='0' cluster_id='0' core_id='0' siblings='5'/> + <cpu id='6' socket_id='6' die_id='0' cluster_id='0' core_id='0' siblings='6'/> + <cpu id='7' socket_id='7' die_id='0' cluster_id='0' core_id='0' siblings='7'/> + <cpu id='8' socket_id='8' die_id='0' cluster_id='0' core_id='0' siblings='8'/> + <cpu id='9' socket_id='9' die_id='0' cluster_id='0' core_id='0' siblings='9'/> + <cpu id='10' socket_id='10' die_id='0' cluster_id='0' core_id='0' siblings='10'/> + <cpu id='11' socket_id='11' die_id='0' cluster_id='0' core_id='0' siblings='11'/> + <cpu id='12' socket_id='12' die_id='0' cluster_id='0' core_id='0' siblings='12'/> + <cpu id='13' socket_id='13' die_id='0' cluster_id='0' core_id='0' siblings='13'/> + <cpu id='14' socket_id='14' die_id='0' cluster_id='0' core_id='0' siblings='14'/> + <cpu id='15' socket_id='15' die_id='0' cluster_id='0' core_id='0' siblings='15'/> + <cpu id='16' socket_id='16' die_id='0' cluster_id='0' core_id='0' siblings='16'/> + <cpu id='17' socket_id='17' die_id='0' cluster_id='0' core_id='0' siblings='17'/> + <cpu id='18' socket_id='18' die_id='0' cluster_id='0' core_id='0' siblings='18'/> + <cpu id='19' socket_id='19' die_id='0' cluster_id='0' core_id='0' siblings='19'/> + <cpu id='20' socket_id='20' die_id='0' cluster_id='0' core_id='0' siblings='20'/> + <cpu id='21' socket_id='21' die_id='0' cluster_id='0' core_id='0' siblings='21'/> + <cpu id='22' socket_id='22' die_id='0' cluster_id='0' core_id='0' siblings='22'/> + <cpu id='23' socket_id='23' die_id='0' cluster_id='0' core_id='0' siblings='23'/> </cpus> </cell> <cell id='1'> diff --git a/tests/vircaps2xmldata/vircaps-x86_64-resctrl-cdp.xml b/tests/vircaps2xmldata/vircaps-x86_64-resctrl-cdp.xml index ee26fe9464..167b217d8e 100644 --- a/tests/vircaps2xmldata/vircaps-x86_64-resctrl-cdp.xml +++ b/tests/vircaps2xmldata/vircaps-x86_64-resctrl-cdp.xml @@ -17,12 +17,12 @@ <pages unit='KiB' size='2048'>4096</pages> <pages unit='KiB' size='1048576'>6144</pages> <cpus num='6'> - <cpu id='0' socket_id='0' die_id='0' core_id='0' siblings='0'/> - <cpu id='1' socket_id='0' die_id='0' core_id='1' siblings='1'/> - <cpu id='2' socket_id='0' die_id='0' core_id='2' siblings='2'/> - <cpu id='3' socket_id='0' die_id='0' core_id='3' siblings='3'/> - <cpu id='4' socket_id='0' die_id='0' core_id='4' siblings='4'/> - <cpu id='5' socket_id='0' die_id='0' core_id='5' siblings='5'/> + <cpu id='0' socket_id='0' die_id='0' cluster_id='0' core_id='0' siblings='0'/> + <cpu id='1' socket_id='0' die_id='0' cluster_id='0' core_id='1' siblings='1'/> + <cpu id='2' socket_id='0' die_id='0' cluster_id='0' core_id='2' siblings='2'/> + <cpu id='3' socket_id='0' die_id='0' cluster_id='0' core_id='3' siblings='3'/> + <cpu id='4' socket_id='0' die_id='0' cluster_id='0' core_id='4' siblings='4'/> + <cpu id='5' socket_id='0' 
die_id='0' cluster_id='0' core_id='5' siblings='5'/> </cpus> </cell> <cell id='1'> @@ -31,12 +31,12 @@ <pages unit='KiB' size='2048'>6144</pages> <pages unit='KiB' size='1048576'>8192</pages> <cpus num='6'> - <cpu id='6' socket_id='1' die_id='0' core_id='0' siblings='6'/> - <cpu id='7' socket_id='1' die_id='0' core_id='1' siblings='7'/> - <cpu id='8' socket_id='1' die_id='0' core_id='2' siblings='8'/> - <cpu id='9' socket_id='1' die_id='0' core_id='3' siblings='9'/> - <cpu id='10' socket_id='1' die_id='0' core_id='4' siblings='10'/> - <cpu id='11' socket_id='1' die_id='0' core_id='5' siblings='11'/> + <cpu id='6' socket_id='1' die_id='0' cluster_id='0' core_id='0' siblings='6'/> + <cpu id='7' socket_id='1' die_id='0' cluster_id='0' core_id='1' siblings='7'/> + <cpu id='8' socket_id='1' die_id='0' cluster_id='0' core_id='2' siblings='8'/> + <cpu id='9' socket_id='1' die_id='0' cluster_id='0' core_id='3' siblings='9'/> + <cpu id='10' socket_id='1' die_id='0' cluster_id='0' core_id='4' siblings='10'/> + <cpu id='11' socket_id='1' die_id='0' cluster_id='0' core_id='5' siblings='11'/> </cpus> </cell> </cells> diff --git a/tests/vircaps2xmldata/vircaps-x86_64-resctrl-cmt.xml b/tests/vircaps2xmldata/vircaps-x86_64-resctrl-cmt.xml index acdd97ec58..311bb58e6a 100644 --- a/tests/vircaps2xmldata/vircaps-x86_64-resctrl-cmt.xml +++ b/tests/vircaps2xmldata/vircaps-x86_64-resctrl-cmt.xml @@ -17,12 +17,12 @@ <pages unit='KiB' size='2048'>4096</pages> <pages unit='KiB' size='1048576'>6144</pages> <cpus num='6'> - <cpu id='0' socket_id='0' die_id='0' core_id='0' siblings='0'/> - <cpu id='1' socket_id='0' die_id='0' core_id='1' siblings='1'/> - <cpu id='2' socket_id='0' die_id='0' core_id='2' siblings='2'/> - <cpu id='3' socket_id='0' die_id='0' core_id='3' siblings='3'/> - <cpu id='4' socket_id='0' die_id='0' core_id='4' siblings='4'/> - <cpu id='5' socket_id='0' die_id='0' core_id='5' siblings='5'/> + <cpu id='0' socket_id='0' die_id='0' cluster_id='0' core_id='0' siblings='0'/> + <cpu id='1' socket_id='0' die_id='0' cluster_id='0' core_id='1' siblings='1'/> + <cpu id='2' socket_id='0' die_id='0' cluster_id='0' core_id='2' siblings='2'/> + <cpu id='3' socket_id='0' die_id='0' cluster_id='0' core_id='3' siblings='3'/> + <cpu id='4' socket_id='0' die_id='0' cluster_id='0' core_id='4' siblings='4'/> + <cpu id='5' socket_id='0' die_id='0' cluster_id='0' core_id='5' siblings='5'/> </cpus> </cell> <cell id='1'> @@ -31,12 +31,12 @@ <pages unit='KiB' size='2048'>6144</pages> <pages unit='KiB' size='1048576'>8192</pages> <cpus num='6'> - <cpu id='6' socket_id='1' die_id='0' core_id='0' siblings='6'/> - <cpu id='7' socket_id='1' die_id='0' core_id='1' siblings='7'/> - <cpu id='8' socket_id='1' die_id='0' core_id='2' siblings='8'/> - <cpu id='9' socket_id='1' die_id='0' core_id='3' siblings='9'/> - <cpu id='10' socket_id='1' die_id='0' core_id='4' siblings='10'/> - <cpu id='11' socket_id='1' die_id='0' core_id='5' siblings='11'/> + <cpu id='6' socket_id='1' die_id='0' cluster_id='0' core_id='0' siblings='6'/> + <cpu id='7' socket_id='1' die_id='0' cluster_id='0' core_id='1' siblings='7'/> + <cpu id='8' socket_id='1' die_id='0' cluster_id='0' core_id='2' siblings='8'/> + <cpu id='9' socket_id='1' die_id='0' cluster_id='0' core_id='3' siblings='9'/> + <cpu id='10' socket_id='1' die_id='0' cluster_id='0' core_id='4' siblings='10'/> + <cpu id='11' socket_id='1' die_id='0' cluster_id='0' core_id='5' siblings='11'/> </cpus> </cell> </cells> diff --git a/tests/vircaps2xmldata/vircaps-x86_64-resctrl-fake-feature.xml 
b/tests/vircaps2xmldata/vircaps-x86_64-resctrl-fake-feature.xml index 1327d85c98..d85407f0b1 100644 --- a/tests/vircaps2xmldata/vircaps-x86_64-resctrl-fake-feature.xml +++ b/tests/vircaps2xmldata/vircaps-x86_64-resctrl-fake-feature.xml @@ -17,12 +17,12 @@ <pages unit='KiB' size='2048'>4096</pages> <pages unit='KiB' size='1048576'>6144</pages> <cpus num='6'> - <cpu id='0' socket_id='0' die_id='0' core_id='0' siblings='0'/> - <cpu id='1' socket_id='0' die_id='0' core_id='1' siblings='1'/> - <cpu id='2' socket_id='0' die_id='0' core_id='2' siblings='2'/> - <cpu id='3' socket_id='0' die_id='0' core_id='3' siblings='3'/> - <cpu id='4' socket_id='0' die_id='0' core_id='4' siblings='4'/> - <cpu id='5' socket_id='0' die_id='0' core_id='5' siblings='5'/> + <cpu id='0' socket_id='0' die_id='0' cluster_id='0' core_id='0' siblings='0'/> + <cpu id='1' socket_id='0' die_id='0' cluster_id='0' core_id='1' siblings='1'/> + <cpu id='2' socket_id='0' die_id='0' cluster_id='0' core_id='2' siblings='2'/> + <cpu id='3' socket_id='0' die_id='0' cluster_id='0' core_id='3' siblings='3'/> + <cpu id='4' socket_id='0' die_id='0' cluster_id='0' core_id='4' siblings='4'/> + <cpu id='5' socket_id='0' die_id='0' cluster_id='0' core_id='5' siblings='5'/> </cpus> </cell> <cell id='1'> @@ -31,12 +31,12 @@ <pages unit='KiB' size='2048'>6144</pages> <pages unit='KiB' size='1048576'>8192</pages> <cpus num='6'> - <cpu id='6' socket_id='1' die_id='0' core_id='0' siblings='6'/> - <cpu id='7' socket_id='1' die_id='0' core_id='1' siblings='7'/> - <cpu id='8' socket_id='1' die_id='0' core_id='2' siblings='8'/> - <cpu id='9' socket_id='1' die_id='0' core_id='3' siblings='9'/> - <cpu id='10' socket_id='1' die_id='0' core_id='4' siblings='10'/> - <cpu id='11' socket_id='1' die_id='0' core_id='5' siblings='11'/> + <cpu id='6' socket_id='1' die_id='0' cluster_id='0' core_id='0' siblings='6'/> + <cpu id='7' socket_id='1' die_id='0' cluster_id='0' core_id='1' siblings='7'/> + <cpu id='8' socket_id='1' die_id='0' cluster_id='0' core_id='2' siblings='8'/> + <cpu id='9' socket_id='1' die_id='0' cluster_id='0' core_id='3' siblings='9'/> + <cpu id='10' socket_id='1' die_id='0' cluster_id='0' core_id='4' siblings='10'/> + <cpu id='11' socket_id='1' die_id='0' cluster_id='0' core_id='5' siblings='11'/> </cpus> </cell> </cells> diff --git a/tests/vircaps2xmldata/vircaps-x86_64-resctrl-skx-twocaches.xml b/tests/vircaps2xmldata/vircaps-x86_64-resctrl-skx-twocaches.xml index 6769bd0591..eb53eb2142 100644 --- a/tests/vircaps2xmldata/vircaps-x86_64-resctrl-skx-twocaches.xml +++ b/tests/vircaps2xmldata/vircaps-x86_64-resctrl-skx-twocaches.xml @@ -17,7 +17,7 @@ <pages unit='KiB' size='2048'>4096</pages> <pages unit='KiB' size='1048576'>6144</pages> <cpus num='1'> - <cpu id='0' socket_id='0' die_id='0' core_id='0' siblings='0'/> + <cpu id='0' socket_id='0' die_id='0' cluster_id='0' core_id='0' siblings='0'/> </cpus> </cell> </cells> diff --git a/tests/vircaps2xmldata/vircaps-x86_64-resctrl-skx.xml b/tests/vircaps2xmldata/vircaps-x86_64-resctrl-skx.xml index bc52480905..38ea0bdc27 100644 --- a/tests/vircaps2xmldata/vircaps-x86_64-resctrl-skx.xml +++ b/tests/vircaps2xmldata/vircaps-x86_64-resctrl-skx.xml @@ -17,7 +17,7 @@ <pages unit='KiB' size='2048'>4096</pages> <pages unit='KiB' size='1048576'>6144</pages> <cpus num='1'> - <cpu id='0' socket_id='0' die_id='0' core_id='0' siblings='0'/> + <cpu id='0' socket_id='0' die_id='0' cluster_id='0' core_id='0' siblings='0'/> </cpus> </cell> </cells> diff --git a/tests/vircaps2xmldata/vircaps-x86_64-resctrl.xml 
b/tests/vircaps2xmldata/vircaps-x86_64-resctrl.xml index b638bbd1c9..fd854ee91e 100644 --- a/tests/vircaps2xmldata/vircaps-x86_64-resctrl.xml +++ b/tests/vircaps2xmldata/vircaps-x86_64-resctrl.xml @@ -17,12 +17,12 @@ <pages unit='KiB' size='2048'>4096</pages> <pages unit='KiB' size='1048576'>6144</pages> <cpus num='6'> - <cpu id='0' socket_id='0' die_id='0' core_id='0' siblings='0'/> - <cpu id='1' socket_id='0' die_id='0' core_id='1' siblings='1'/> - <cpu id='2' socket_id='0' die_id='0' core_id='2' siblings='2'/> - <cpu id='3' socket_id='0' die_id='0' core_id='3' siblings='3'/> - <cpu id='4' socket_id='0' die_id='0' core_id='4' siblings='4'/> - <cpu id='5' socket_id='0' die_id='0' core_id='5' siblings='5'/> + <cpu id='0' socket_id='0' die_id='0' cluster_id='0' core_id='0' siblings='0'/> + <cpu id='1' socket_id='0' die_id='0' cluster_id='0' core_id='1' siblings='1'/> + <cpu id='2' socket_id='0' die_id='0' cluster_id='0' core_id='2' siblings='2'/> + <cpu id='3' socket_id='0' die_id='0' cluster_id='0' core_id='3' siblings='3'/> + <cpu id='4' socket_id='0' die_id='0' cluster_id='0' core_id='4' siblings='4'/> + <cpu id='5' socket_id='0' die_id='0' cluster_id='0' core_id='5' siblings='5'/> </cpus> </cell> <cell id='1'> @@ -31,12 +31,12 @@ <pages unit='KiB' size='2048'>6144</pages> <pages unit='KiB' size='1048576'>8192</pages> <cpus num='6'> - <cpu id='6' socket_id='1' die_id='0' core_id='0' siblings='6'/> - <cpu id='7' socket_id='1' die_id='0' core_id='1' siblings='7'/> - <cpu id='8' socket_id='1' die_id='0' core_id='2' siblings='8'/> - <cpu id='9' socket_id='1' die_id='0' core_id='3' siblings='9'/> - <cpu id='10' socket_id='1' die_id='0' core_id='4' siblings='10'/> - <cpu id='11' socket_id='1' die_id='0' core_id='5' siblings='11'/> + <cpu id='6' socket_id='1' die_id='0' cluster_id='0' core_id='0' siblings='6'/> + <cpu id='7' socket_id='1' die_id='0' cluster_id='0' core_id='1' siblings='7'/> + <cpu id='8' socket_id='1' die_id='0' cluster_id='0' core_id='2' siblings='8'/> + <cpu id='9' socket_id='1' die_id='0' cluster_id='0' core_id='3' siblings='9'/> + <cpu id='10' socket_id='1' die_id='0' cluster_id='0' core_id='4' siblings='10'/> + <cpu id='11' socket_id='1' die_id='0' cluster_id='0' core_id='5' siblings='11'/> </cpus> </cell> </cells> -- 2.43.0
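
As an illustration, with this change the per-CPU topology information in the host capabilities XML gains a cluster_id attribute next to the existing socket_id, die_id and core_id ones. On a host with real cluster information, a NUMA cell ends up looking roughly like this (values copied from the vircaps-aarch64-basic-clusters.xml test data above):

  <cpus num='4'>
    <cpu id='0' socket_id='36' die_id='0' cluster_id='0' core_id='0' siblings='0-1'/>
    <cpu id='1' socket_id='36' die_id='0' cluster_id='0' core_id='0' siblings='0-1'/>
    <cpu id='2' socket_id='36' die_id='0' cluster_id='1' core_id='1' siblings='2-3'/>
    <cpu id='3' socket_id='36' die_id='0' cluster_id='1' core_id='1' siblings='2-3'/>
  </cpus>

Hosts without usable cluster information simply report the dummy cluster_id='0' everywhere.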

The default number of CPU clusters is 1, and values other than that one are currently rejected by all hypervisor drivers. Signed-off-by: Andrea Bolognani <abologna@redhat.com> Reviewed-by: Peter Krempa <pkrempa@redhat.com> --- src/bhyve/bhyve_command.c | 5 +++++ src/conf/cpu_conf.c | 16 +++++++++++++++- src/conf/cpu_conf.h | 1 + src/conf/domain_conf.c | 1 + src/conf/schemas/cputypes.rng | 5 +++++ src/cpu/cpu.c | 1 + src/libxl/libxl_capabilities.c | 1 + src/qemu/qemu_command.c | 5 +++++ src/vmx/vmx.c | 7 +++++++ .../x86_64-host+guest,model486-result.xml | 2 +- .../x86_64-host+guest,models-result.xml | 2 +- tests/cputestdata/x86_64-host+guest-result.xml | 2 +- tests/cputestdata/x86_64-host+guest.xml | 2 +- .../x86_64-host+host-model-nofallback.xml | 2 +- ...host-Haswell-noTSX+Haswell,haswell-result.xml | 2 +- ...aswell-noTSX+Haswell-noTSX,haswell-result.xml | 2 +- ...4-host-Haswell-noTSX+Haswell-noTSX-result.xml | 2 +- .../x86_64-host-worse+guest-result.xml | 2 +- .../ppc64-modern-bulk-result-conf.xml | 2 +- .../ppc64-modern-bulk-result-live.xml | 2 +- .../ppc64-modern-individual-result-conf.xml | 2 +- .../ppc64-modern-individual-result-live.xml | 2 +- .../x86-modern-bulk-result-conf.xml | 2 +- .../x86-modern-bulk-result-live.xml | 2 +- .../x86-modern-individual-add-result-conf.xml | 2 +- .../x86-modern-individual-add-result-live.xml | 2 +- ...e-timeout+graphics-spice-timeout-password.xml | 2 +- .../qemuhotplug-graphics-spice-timeout.xml | 2 +- .../fd-memory-no-numa-topology.xml | 2 +- .../qemuxml2argvdata/fd-memory-numa-topology.xml | 2 +- .../fd-memory-numa-topology2.xml | 2 +- .../fd-memory-numa-topology3.xml | 2 +- tests/qemuxml2argvdata/hugepages-nvdimm.xml | 2 +- .../memfd-memory-default-hugepage.xml | 2 +- tests/qemuxml2argvdata/memfd-memory-numa.xml | 2 +- .../memory-hotplug-nvdimm-access.xml | 2 +- .../memory-hotplug-nvdimm-align.xml | 2 +- .../memory-hotplug-nvdimm-label.xml | 2 +- .../memory-hotplug-nvdimm-pmem.xml | 2 +- .../memory-hotplug-nvdimm-readonly.xml | 2 +- tests/qemuxml2argvdata/memory-hotplug-nvdimm.xml | 2 +- .../memory-hotplug-virtio-mem.xml | 2 +- .../memory-hotplug-virtio-pmem.xml | 2 +- .../cpu-numa-disjoint.x86_64-latest.xml | 2 +- .../cpu-numa-disordered.x86_64-latest.xml | 2 +- .../cpu-numa-memshared.x86_64-latest.xml | 2 +- .../cpu-numa-no-memory-element.x86_64-latest.xml | 2 +- .../cpu-numa1.x86_64-latest.xml | 2 +- .../cpu-numa2.x86_64-latest.xml | 2 +- .../memory-hotplug-dimm-addr.x86_64-latest.xml | 2 +- .../memory-hotplug-dimm.x86_64-latest.xml | 2 +- .../memory-hotplug-multiple.x86_64-latest.xml | 2 +- ...plug-nvdimm-ppc64-abi-update.ppc64-latest.xml | 2 +- .../memory-hotplug-nvdimm-ppc64.ppc64-latest.xml | 2 +- .../memory-hotplug.x86_64-latest.xml | 2 +- ...mad-auto-memory-vcpu-cpuset.x86_64-latest.xml | 2 +- ...cpu-no-cpuset-and-placement.x86_64-latest.xml | 2 +- ...numad-auto-vcpu-no-numatune.x86_64-latest.xml | 2 +- ...mad-static-vcpu-no-numatune.x86_64-latest.xml | 2 +- .../pci-expander-bus.x86_64-latest.xml | 2 +- .../pcie-expander-bus.x86_64-latest.xml | 2 +- .../pseries-phb-numa-node.ppc64-latest.xml | 2 +- tests/vmx2xmldata/esx-in-the-wild-10.xml | 2 +- tests/vmx2xmldata/esx-in-the-wild-8.xml | 2 +- tests/vmx2xmldata/esx-in-the-wild-9.xml | 2 +- 65 files changed, 97 insertions(+), 57 deletions(-) diff --git a/src/bhyve/bhyve_command.c b/src/bhyve/bhyve_command.c index 5b388c7a8f..d05b01ae5d 100644 --- a/src/bhyve/bhyve_command.c +++ b/src/bhyve/bhyve_command.c @@ -672,6 +672,11 @@ virBhyveProcessBuildBhyveCmd(struct _bhyveConn *driver, 
virDomainDef *def, _("Only 1 die per socket is supported")); return NULL; } + if (def->cpu->clusters != 1) { + virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s", + _("Only 1 cluster per die is supported")); + return NULL; + } if (nvcpus != def->cpu->sockets * def->cpu->cores * def->cpu->threads) { virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s", _("Invalid CPU topology: total number of vCPUs must equal the product of sockets, cores, and threads")); diff --git a/src/conf/cpu_conf.c b/src/conf/cpu_conf.c index 7abe489733..6e6e1b9a89 100644 --- a/src/conf/cpu_conf.c +++ b/src/conf/cpu_conf.c @@ -241,6 +241,7 @@ virCPUDefCopyWithoutModel(const virCPUDef *cpu) copy->fallback = cpu->fallback; copy->sockets = cpu->sockets; copy->dies = cpu->dies; + copy->clusters = cpu->clusters; copy->cores = cpu->cores; copy->threads = cpu->threads; copy->arch = cpu->arch; @@ -572,6 +573,12 @@ virCPUDefParseXML(xmlXPathContextPtr ctxt, return -1; } + if (virXMLPropUIntDefault(topology, "clusters", 10, + VIR_XML_PROP_NONZERO, + &def->clusters, 1) < 0) { + return -1; + } + if (virXMLPropUInt(topology, "cores", 10, VIR_XML_PROP_REQUIRED | VIR_XML_PROP_NONZERO, &def->cores) < 0) { @@ -827,10 +834,11 @@ virCPUDefFormatBuf(virBuffer *buf, virBufferAddLit(buf, "/>\n"); } - if (def->sockets && def->dies && def->cores && def->threads) { + if (def->sockets && def->dies && def->clusters && def->cores && def->threads) { virBufferAddLit(buf, "<topology"); virBufferAsprintf(buf, " sockets='%u'", def->sockets); virBufferAsprintf(buf, " dies='%u'", def->dies); + virBufferAsprintf(buf, " clusters='%u'", def->clusters); virBufferAsprintf(buf, " cores='%u'", def->cores); virBufferAsprintf(buf, " threads='%u'", def->threads); virBufferAddLit(buf, "/>\n"); @@ -1106,6 +1114,12 @@ virCPUDefIsEqual(virCPUDef *src, return false; } + if (src->clusters != dst->clusters) { + MISMATCH(_("Target CPU clusters %1$d does not match source %2$d"), + dst->clusters, src->clusters); + return false; + } + if (src->cores != dst->cores) { MISMATCH(_("Target CPU cores %1$d does not match source %2$d"), dst->cores, src->cores); diff --git a/src/conf/cpu_conf.h b/src/conf/cpu_conf.h index 3e4c53675c..2694022fed 100644 --- a/src/conf/cpu_conf.h +++ b/src/conf/cpu_conf.h @@ -148,6 +148,7 @@ struct _virCPUDef { unsigned int microcodeVersion; unsigned int sockets; unsigned int dies; + unsigned int clusters; unsigned int cores; unsigned int threads; unsigned int sigFamily; diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c index be57a1981e..f9d643fc12 100644 --- a/src/conf/domain_conf.c +++ b/src/conf/domain_conf.c @@ -2316,6 +2316,7 @@ virDomainDefGetVcpusTopology(const virDomainDef *def, /* multiplication of 32bit numbers fits into a 64bit variable */ if ((tmp *= def->cpu->dies) > UINT_MAX || + (tmp *= def->cpu->clusters) > UINT_MAX || (tmp *= def->cpu->cores) > UINT_MAX || (tmp *= def->cpu->threads) > UINT_MAX) { virReportError(VIR_ERR_CONFIG_UNSUPPORTED, diff --git a/src/conf/schemas/cputypes.rng b/src/conf/schemas/cputypes.rng index db1aa57158..3a8910e09f 100644 --- a/src/conf/schemas/cputypes.rng +++ b/src/conf/schemas/cputypes.rng @@ -92,6 +92,11 @@ <ref name="positiveInteger"/> </attribute> </optional> + <optional> + <attribute name="clusters"> + <ref name="positiveInteger"/> + </attribute> + </optional> <attribute name="cores"> <ref name="positiveInteger"/> </attribute> diff --git a/src/cpu/cpu.c b/src/cpu/cpu.c index bc43aa4e93..4f048d0dad 100644 --- a/src/cpu/cpu.c +++ b/src/cpu/cpu.c @@ -435,6 +435,7 @@ virCPUGetHost(virArch arch, if 
(nodeInfo) { cpu->sockets = nodeInfo->sockets; cpu->dies = 1; + cpu->clusters = 1; cpu->cores = nodeInfo->cores; cpu->threads = nodeInfo->threads; } diff --git a/src/libxl/libxl_capabilities.c b/src/libxl/libxl_capabilities.c index dfb602ca2f..522256777d 100644 --- a/src/libxl/libxl_capabilities.c +++ b/src/libxl/libxl_capabilities.c @@ -152,6 +152,7 @@ libxlCapsInitCPU(virCaps *caps, libxl_physinfo *phy_info) cpu->cores = phy_info->cores_per_socket; cpu->threads = phy_info->threads_per_core; cpu->dies = 1; + cpu->clusters = 1; cpu->sockets = phy_info->nr_cpus / (cpu->cores * cpu->threads); if (!(data = libxlCapsNodeData(cpu, phy_info->hw_cap)) || diff --git a/src/qemu/qemu_command.c b/src/qemu/qemu_command.c index 653817173b..71daa85e55 100644 --- a/src/qemu/qemu_command.c +++ b/src/qemu/qemu_command.c @@ -7226,6 +7226,11 @@ qemuBuildSmpCommandLine(virCommand *cmd, _("Only 1 die per socket is supported")); return -1; } + if (def->cpu->clusters != 1) { + virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s", + _("Only 1 cluster per die is supported")); + return -1; + } virBufferAsprintf(&buf, ",sockets=%u", def->cpu->sockets); if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_SMP_DIES)) virBufferAsprintf(&buf, ",dies=%u", def->cpu->dies); diff --git a/src/vmx/vmx.c b/src/vmx/vmx.c index 26b89776e1..4ac2320251 100644 --- a/src/vmx/vmx.c +++ b/src/vmx/vmx.c @@ -1583,6 +1583,7 @@ virVMXParseConfig(virVMXContext *ctx, goto cleanup; } cpu->dies = 1; + cpu->clusters = 1; cpu->cores = coresPerSocket; cpu->threads = 1; @@ -3377,6 +3378,12 @@ virVMXFormatConfig(virVMXContext *ctx, virDomainXMLOption *xmlopt, virDomainDef goto cleanup; } + if (def->cpu->clusters != 1) { + virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s", + _("Only 1 cluster per die is supported")); + goto cleanup; + } + calculated_vcpus = def->cpu->sockets * def->cpu->cores; if (calculated_vcpus != maxvcpus) { virReportError(VIR_ERR_INTERNAL_ERROR, diff --git a/tests/cputestdata/x86_64-host+guest,model486-result.xml b/tests/cputestdata/x86_64-host+guest,model486-result.xml index ea8e2d3a48..b533f22b88 100644 --- a/tests/cputestdata/x86_64-host+guest,model486-result.xml +++ b/tests/cputestdata/x86_64-host+guest,model486-result.xml @@ -1,6 +1,6 @@ <cpu mode='custom' match='exact'> <model fallback='allow'>486</model> - <topology sockets='2' dies='1' cores='4' threads='1'/> + <topology sockets='2' dies='1' clusters='1' cores='4' threads='1'/> <feature policy='require' name='de'/> <feature policy='require' name='tsc'/> <feature policy='require' name='msr'/> diff --git a/tests/cputestdata/x86_64-host+guest,models-result.xml b/tests/cputestdata/x86_64-host+guest,models-result.xml index 42664a48b4..e975d9bc18 100644 --- a/tests/cputestdata/x86_64-host+guest,models-result.xml +++ b/tests/cputestdata/x86_64-host+guest,models-result.xml @@ -1,6 +1,6 @@ <cpu mode='custom' match='exact'> <model fallback='allow'>qemu64</model> - <topology sockets='2' dies='1' cores='4' threads='1'/> + <topology sockets='2' dies='1' clusters='1' cores='4' threads='1'/> <feature policy='force' name='pbe'/> <feature policy='force' name='monitor'/> <feature policy='require' name='ssse3'/> diff --git a/tests/cputestdata/x86_64-host+guest-result.xml b/tests/cputestdata/x86_64-host+guest-result.xml index 28e3152cbf..cf41b3f872 100644 --- a/tests/cputestdata/x86_64-host+guest-result.xml +++ b/tests/cputestdata/x86_64-host+guest-result.xml @@ -1,6 +1,6 @@ <cpu mode='custom' match='exact'> <model fallback='allow'>Penryn</model> - <topology sockets='2' dies='1' cores='4' threads='1'/> + 
<topology sockets='2' dies='1' clusters='1' cores='4' threads='1'/> <feature policy='require' name='dca'/> <feature policy='require' name='xtpr'/> <feature policy='disable' name='sse4.2'/> diff --git a/tests/cputestdata/x86_64-host+guest.xml b/tests/cputestdata/x86_64-host+guest.xml index 28e3152cbf..cf41b3f872 100644 --- a/tests/cputestdata/x86_64-host+guest.xml +++ b/tests/cputestdata/x86_64-host+guest.xml @@ -1,6 +1,6 @@ <cpu mode='custom' match='exact'> <model fallback='allow'>Penryn</model> - <topology sockets='2' dies='1' cores='4' threads='1'/> + <topology sockets='2' dies='1' clusters='1' cores='4' threads='1'/> <feature policy='require' name='dca'/> <feature policy='require' name='xtpr'/> <feature policy='disable' name='sse4.2'/> diff --git a/tests/cputestdata/x86_64-host+host-model-nofallback.xml b/tests/cputestdata/x86_64-host+host-model-nofallback.xml index 16d6e1daf2..881eea7bd0 100644 --- a/tests/cputestdata/x86_64-host+host-model-nofallback.xml +++ b/tests/cputestdata/x86_64-host+host-model-nofallback.xml @@ -1,7 +1,7 @@ <cpu mode='custom' match='exact'> <model fallback='forbid'>Penryn</model> <vendor>Intel</vendor> - <topology sockets='1' dies='1' cores='2' threads='1'/> + <topology sockets='1' dies='1' clusters='1' cores='2' threads='1'/> <feature policy='require' name='dca'/> <feature policy='require' name='xtpr'/> <feature policy='require' name='tm2'/> diff --git a/tests/cputestdata/x86_64-host-Haswell-noTSX+Haswell,haswell-result.xml b/tests/cputestdata/x86_64-host-Haswell-noTSX+Haswell,haswell-result.xml index 8eda6684a0..67994c62cc 100644 --- a/tests/cputestdata/x86_64-host-Haswell-noTSX+Haswell,haswell-result.xml +++ b/tests/cputestdata/x86_64-host-Haswell-noTSX+Haswell,haswell-result.xml @@ -1,6 +1,6 @@ <cpu mode='custom' match='exact'> <model fallback='allow'>Haswell</model> - <topology sockets='1' dies='1' cores='2' threads='2'/> + <topology sockets='1' dies='1' clusters='1' cores='2' threads='2'/> <feature policy='disable' name='rtm'/> <feature policy='disable' name='hle'/> </cpu> diff --git a/tests/cputestdata/x86_64-host-Haswell-noTSX+Haswell-noTSX,haswell-result.xml b/tests/cputestdata/x86_64-host-Haswell-noTSX+Haswell-noTSX,haswell-result.xml index cb02449d60..4804c0b818 100644 --- a/tests/cputestdata/x86_64-host-Haswell-noTSX+Haswell-noTSX,haswell-result.xml +++ b/tests/cputestdata/x86_64-host-Haswell-noTSX+Haswell-noTSX,haswell-result.xml @@ -1,6 +1,6 @@ <cpu mode='custom' match='exact'> <model fallback='allow'>Haswell</model> - <topology sockets='1' dies='1' cores='2' threads='2'/> + <topology sockets='1' dies='1' clusters='1' cores='2' threads='2'/> <feature policy='disable' name='hle'/> <feature policy='disable' name='rtm'/> </cpu> diff --git a/tests/cputestdata/x86_64-host-Haswell-noTSX+Haswell-noTSX-result.xml b/tests/cputestdata/x86_64-host-Haswell-noTSX+Haswell-noTSX-result.xml index 7ee926aba8..c21b331248 100644 --- a/tests/cputestdata/x86_64-host-Haswell-noTSX+Haswell-noTSX-result.xml +++ b/tests/cputestdata/x86_64-host-Haswell-noTSX+Haswell-noTSX-result.xml @@ -1,4 +1,4 @@ <cpu mode='custom' match='exact'> <model fallback='allow'>Haswell-noTSX</model> - <topology sockets='1' dies='1' cores='2' threads='2'/> + <topology sockets='1' dies='1' clusters='1' cores='2' threads='2'/> </cpu> diff --git a/tests/cputestdata/x86_64-host-worse+guest-result.xml b/tests/cputestdata/x86_64-host-worse+guest-result.xml index 9d54c66a8f..712c3ad341 100644 --- a/tests/cputestdata/x86_64-host-worse+guest-result.xml +++ 
b/tests/cputestdata/x86_64-host-worse+guest-result.xml @@ -1,6 +1,6 @@ <cpu mode='custom' match='exact'> <model fallback='allow'>Penryn</model> - <topology sockets='2' dies='1' cores='4' threads='1'/> + <topology sockets='2' dies='1' clusters='1' cores='4' threads='1'/> <feature policy='disable' name='dca'/> <feature policy='disable' name='xtpr'/> <feature policy='disable' name='sse4.2'/> diff --git a/tests/qemuhotplugtestcpus/ppc64-modern-bulk-result-conf.xml b/tests/qemuhotplugtestcpus/ppc64-modern-bulk-result-conf.xml index ad11b2f8a6..1a0d28257e 100644 --- a/tests/qemuhotplugtestcpus/ppc64-modern-bulk-result-conf.xml +++ b/tests/qemuhotplugtestcpus/ppc64-modern-bulk-result-conf.xml @@ -44,7 +44,7 @@ </os> <cpu mode='custom' match='exact' check='none'> <model fallback='forbid'>POWER9</model> - <topology sockets='1' dies='1' cores='4' threads='8'/> + <topology sockets='1' dies='1' clusters='1' cores='4' threads='8'/> </cpu> <clock offset='utc'/> <on_poweroff>destroy</on_poweroff> diff --git a/tests/qemuhotplugtestcpus/ppc64-modern-bulk-result-live.xml b/tests/qemuhotplugtestcpus/ppc64-modern-bulk-result-live.xml index 2a3b4a495f..b127883b36 100644 --- a/tests/qemuhotplugtestcpus/ppc64-modern-bulk-result-live.xml +++ b/tests/qemuhotplugtestcpus/ppc64-modern-bulk-result-live.xml @@ -44,7 +44,7 @@ </os> <cpu mode='custom' match='exact' check='none'> <model fallback='forbid'>POWER9</model> - <topology sockets='1' dies='1' cores='4' threads='8'/> + <topology sockets='1' dies='1' clusters='1' cores='4' threads='8'/> </cpu> <clock offset='utc'/> <on_poweroff>destroy</on_poweroff> diff --git a/tests/qemuhotplugtestcpus/ppc64-modern-individual-result-conf.xml b/tests/qemuhotplugtestcpus/ppc64-modern-individual-result-conf.xml index 34aec9b965..29f1a5ac45 100644 --- a/tests/qemuhotplugtestcpus/ppc64-modern-individual-result-conf.xml +++ b/tests/qemuhotplugtestcpus/ppc64-modern-individual-result-conf.xml @@ -44,7 +44,7 @@ </os> <cpu mode='custom' match='exact' check='none'> <model fallback='forbid'>POWER9</model> - <topology sockets='1' dies='1' cores='4' threads='8'/> + <topology sockets='1' dies='1' clusters='1' cores='4' threads='8'/> </cpu> <clock offset='utc'/> <on_poweroff>destroy</on_poweroff> diff --git a/tests/qemuhotplugtestcpus/ppc64-modern-individual-result-live.xml b/tests/qemuhotplugtestcpus/ppc64-modern-individual-result-live.xml index 5ce2cfd0b0..76a85ac9f0 100644 --- a/tests/qemuhotplugtestcpus/ppc64-modern-individual-result-live.xml +++ b/tests/qemuhotplugtestcpus/ppc64-modern-individual-result-live.xml @@ -44,7 +44,7 @@ </os> <cpu mode='custom' match='exact' check='none'> <model fallback='forbid'>POWER9</model> - <topology sockets='1' dies='1' cores='4' threads='8'/> + <topology sockets='1' dies='1' clusters='1' cores='4' threads='8'/> </cpu> <clock offset='utc'/> <on_poweroff>destroy</on_poweroff> diff --git a/tests/qemuhotplugtestcpus/x86-modern-bulk-result-conf.xml b/tests/qemuhotplugtestcpus/x86-modern-bulk-result-conf.xml index 8d52ffedb4..bec46987ff 100644 --- a/tests/qemuhotplugtestcpus/x86-modern-bulk-result-conf.xml +++ b/tests/qemuhotplugtestcpus/x86-modern-bulk-result-conf.xml @@ -20,7 +20,7 @@ </os> <cpu mode='custom' match='exact' check='none'> <model fallback='forbid'>qemu64</model> - <topology sockets='4' dies='1' cores='2' threads='1'/> + <topology sockets='4' dies='1' clusters='1' cores='2' threads='1'/> </cpu> <clock offset='utc'/> <on_poweroff>destroy</on_poweroff> diff --git a/tests/qemuhotplugtestcpus/x86-modern-bulk-result-live.xml 
b/tests/qemuhotplugtestcpus/x86-modern-bulk-result-live.xml index f416397e33..be9769c686 100644 --- a/tests/qemuhotplugtestcpus/x86-modern-bulk-result-live.xml +++ b/tests/qemuhotplugtestcpus/x86-modern-bulk-result-live.xml @@ -20,7 +20,7 @@ </os> <cpu mode='custom' match='exact' check='none'> <model fallback='forbid'>qemu64</model> - <topology sockets='4' dies='1' cores='2' threads='1'/> + <topology sockets='4' dies='1' clusters='1' cores='2' threads='1'/> </cpu> <clock offset='utc'/> <on_poweroff>destroy</on_poweroff> diff --git a/tests/qemuhotplugtestcpus/x86-modern-individual-add-result-conf.xml b/tests/qemuhotplugtestcpus/x86-modern-individual-add-result-conf.xml index 0bd2af8e43..539f607818 100644 --- a/tests/qemuhotplugtestcpus/x86-modern-individual-add-result-conf.xml +++ b/tests/qemuhotplugtestcpus/x86-modern-individual-add-result-conf.xml @@ -20,7 +20,7 @@ </os> <cpu mode='custom' match='exact' check='none'> <model fallback='forbid'>qemu64</model> - <topology sockets='4' dies='1' cores='2' threads='1'/> + <topology sockets='4' dies='1' clusters='1' cores='2' threads='1'/> </cpu> <clock offset='utc'/> <on_poweroff>destroy</on_poweroff> diff --git a/tests/qemuhotplugtestcpus/x86-modern-individual-add-result-live.xml b/tests/qemuhotplugtestcpus/x86-modern-individual-add-result-live.xml index b31e6ebe55..acbdd3cfd5 100644 --- a/tests/qemuhotplugtestcpus/x86-modern-individual-add-result-live.xml +++ b/tests/qemuhotplugtestcpus/x86-modern-individual-add-result-live.xml @@ -20,7 +20,7 @@ </os> <cpu mode='custom' match='exact' check='none'> <model fallback='forbid'>qemu64</model> - <topology sockets='4' dies='1' cores='2' threads='1'/> + <topology sockets='4' dies='1' clusters='1' cores='2' threads='1'/> </cpu> <clock offset='utc'/> <on_poweroff>destroy</on_poweroff> diff --git a/tests/qemuhotplugtestdomains/qemuhotplug-graphics-spice-timeout+graphics-spice-timeout-password.xml b/tests/qemuhotplugtestdomains/qemuhotplug-graphics-spice-timeout+graphics-spice-timeout-password.xml index 03964ad01c..ee53339338 100644 --- a/tests/qemuhotplugtestdomains/qemuhotplug-graphics-spice-timeout+graphics-spice-timeout-password.xml +++ b/tests/qemuhotplugtestdomains/qemuhotplug-graphics-spice-timeout+graphics-spice-timeout-password.xml @@ -18,7 +18,7 @@ <cpu mode='custom' match='exact' check='partial'> <model fallback='allow'>core2duo</model> <vendor>Intel</vendor> - <topology sockets='1' dies='1' cores='2' threads='1'/> + <topology sockets='1' dies='1' clusters='1' cores='2' threads='1'/> <feature policy='require' name='lahf_lm'/> <feature policy='require' name='xtpr'/> <feature policy='require' name='cx16'/> diff --git a/tests/qemuhotplugtestdomains/qemuhotplug-graphics-spice-timeout.xml b/tests/qemuhotplugtestdomains/qemuhotplug-graphics-spice-timeout.xml index e6b0cc833a..eb9b902fc5 100644 --- a/tests/qemuhotplugtestdomains/qemuhotplug-graphics-spice-timeout.xml +++ b/tests/qemuhotplugtestdomains/qemuhotplug-graphics-spice-timeout.xml @@ -18,7 +18,7 @@ <cpu mode='custom' match='exact' check='partial'> <model fallback='allow'>core2duo</model> <vendor>Intel</vendor> - <topology sockets='1' dies='1' cores='2' threads='1'/> + <topology sockets='1' dies='1' clusters='1' cores='2' threads='1'/> <feature policy='require' name='lahf_lm'/> <feature policy='require' name='xtpr'/> <feature policy='require' name='cx16'/> diff --git a/tests/qemuxml2argvdata/fd-memory-no-numa-topology.xml b/tests/qemuxml2argvdata/fd-memory-no-numa-topology.xml index 2090bb8288..92f418fb88 100644 --- 
a/tests/qemuxml2argvdata/fd-memory-no-numa-topology.xml +++ b/tests/qemuxml2argvdata/fd-memory-no-numa-topology.xml @@ -15,7 +15,7 @@ </os> <cpu mode='custom' match='exact' check='none'> <model fallback='forbid'>qemu64</model> - <topology sockets='8' dies='1' cores='1' threads='1'/> + <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/> </cpu> <clock offset='utc'/> <on_poweroff>destroy</on_poweroff> diff --git a/tests/qemuxml2argvdata/fd-memory-numa-topology.xml b/tests/qemuxml2argvdata/fd-memory-numa-topology.xml index 2f94690656..543509d832 100644 --- a/tests/qemuxml2argvdata/fd-memory-numa-topology.xml +++ b/tests/qemuxml2argvdata/fd-memory-numa-topology.xml @@ -15,7 +15,7 @@ </os> <cpu mode='custom' match='exact' check='none'> <model fallback='forbid'>qemu64</model> - <topology sockets='1' dies='1' cores='8' threads='1'/> + <topology sockets='1' dies='1' clusters='1' cores='8' threads='1'/> <numa> <cell id='0' cpus='0-7' memory='14680064' unit='KiB'/> </numa> diff --git a/tests/qemuxml2argvdata/fd-memory-numa-topology2.xml b/tests/qemuxml2argvdata/fd-memory-numa-topology2.xml index 3a4e9b478e..d3b98da3c6 100644 --- a/tests/qemuxml2argvdata/fd-memory-numa-topology2.xml +++ b/tests/qemuxml2argvdata/fd-memory-numa-topology2.xml @@ -15,7 +15,7 @@ </os> <cpu mode='custom' match='exact' check='none'> <model fallback='forbid'>qemu64</model> - <topology sockets='1' dies='1' cores='20' threads='1'/> + <topology sockets='1' dies='1' clusters='1' cores='20' threads='1'/> <numa> <cell id='0' cpus='0-7,16-19' memory='14680064' unit='KiB'/> <cell id='1' cpus='8-15' memory='14680064' unit='KiB' memAccess='shared'/> diff --git a/tests/qemuxml2argvdata/fd-memory-numa-topology3.xml b/tests/qemuxml2argvdata/fd-memory-numa-topology3.xml index 0f7f74283b..459d1b9d1d 100644 --- a/tests/qemuxml2argvdata/fd-memory-numa-topology3.xml +++ b/tests/qemuxml2argvdata/fd-memory-numa-topology3.xml @@ -15,7 +15,7 @@ </os> <cpu mode='custom' match='exact' check='none'> <model fallback='forbid'>qemu64</model> - <topology sockets='1' dies='1' cores='32' threads='1'/> + <topology sockets='1' dies='1' clusters='1' cores='32' threads='1'/> <numa> <cell id='0' cpus='0-1,6-31' memory='14680064' unit='KiB'/> <cell id='1' cpus='2-3' memory='14680064' unit='KiB' memAccess='shared'/> diff --git a/tests/qemuxml2argvdata/hugepages-nvdimm.xml b/tests/qemuxml2argvdata/hugepages-nvdimm.xml index 1a1500895b..b786b0d3dd 100644 --- a/tests/qemuxml2argvdata/hugepages-nvdimm.xml +++ b/tests/qemuxml2argvdata/hugepages-nvdimm.xml @@ -17,7 +17,7 @@ </os> <cpu mode='custom' match='exact' check='none'> <model fallback='forbid'>qemu64</model> - <topology sockets='2' dies='1' cores='1' threads='1'/> + <topology sockets='2' dies='1' clusters='1' cores='1' threads='1'/> <numa> <cell id='0' cpus='0-1' memory='1048576' unit='KiB'/> </numa> diff --git a/tests/qemuxml2argvdata/memfd-memory-default-hugepage.xml b/tests/qemuxml2argvdata/memfd-memory-default-hugepage.xml index 238d4c6b52..a70bd53134 100644 --- a/tests/qemuxml2argvdata/memfd-memory-default-hugepage.xml +++ b/tests/qemuxml2argvdata/memfd-memory-default-hugepage.xml @@ -19,7 +19,7 @@ </os> <cpu mode='custom' match='exact' check='none'> <model fallback='forbid'>qemu64</model> - <topology sockets='1' dies='1' cores='8' threads='1'/> + <topology sockets='1' dies='1' clusters='1' cores='8' threads='1'/> <numa> <cell id='0' cpus='0-7' memory='14680064' unit='KiB'/> </numa> diff --git a/tests/qemuxml2argvdata/memfd-memory-numa.xml b/tests/qemuxml2argvdata/memfd-memory-numa.xml index 
1ac87e3aef..0c5d7ba4ef 100644 --- a/tests/qemuxml2argvdata/memfd-memory-numa.xml +++ b/tests/qemuxml2argvdata/memfd-memory-numa.xml @@ -22,7 +22,7 @@ </os> <cpu mode='custom' match='exact' check='none'> <model fallback='forbid'>qemu64</model> - <topology sockets='1' dies='1' cores='8' threads='1'/> + <topology sockets='1' dies='1' clusters='1' cores='8' threads='1'/> <numa> <cell id='0' cpus='0-7' memory='14680064' unit='KiB'/> </numa> diff --git a/tests/qemuxml2argvdata/memory-hotplug-nvdimm-access.xml b/tests/qemuxml2argvdata/memory-hotplug-nvdimm-access.xml index bee0346aca..84baf82bf5 100644 --- a/tests/qemuxml2argvdata/memory-hotplug-nvdimm-access.xml +++ b/tests/qemuxml2argvdata/memory-hotplug-nvdimm-access.xml @@ -15,7 +15,7 @@ </idmap> <cpu mode='custom' match='exact' check='none'> <model fallback='forbid'>qemu64</model> - <topology sockets='2' dies='1' cores='1' threads='1'/> + <topology sockets='2' dies='1' clusters='1' cores='1' threads='1'/> <numa> <cell id='0' cpus='0-1' memory='219136' unit='KiB'/> </numa> diff --git a/tests/qemuxml2argvdata/memory-hotplug-nvdimm-align.xml b/tests/qemuxml2argvdata/memory-hotplug-nvdimm-align.xml index decf87db63..664418e805 100644 --- a/tests/qemuxml2argvdata/memory-hotplug-nvdimm-align.xml +++ b/tests/qemuxml2argvdata/memory-hotplug-nvdimm-align.xml @@ -15,7 +15,7 @@ </idmap> <cpu mode='custom' match='exact' check='none'> <model fallback='forbid'>qemu64</model> - <topology sockets='2' dies='1' cores='1' threads='1'/> + <topology sockets='2' dies='1' clusters='1' cores='1' threads='1'/> <numa> <cell id='0' cpus='0-1' memory='219136' unit='KiB'/> </numa> diff --git a/tests/qemuxml2argvdata/memory-hotplug-nvdimm-label.xml b/tests/qemuxml2argvdata/memory-hotplug-nvdimm-label.xml index 8a0dab3908..f998f7f276 100644 --- a/tests/qemuxml2argvdata/memory-hotplug-nvdimm-label.xml +++ b/tests/qemuxml2argvdata/memory-hotplug-nvdimm-label.xml @@ -15,7 +15,7 @@ </idmap> <cpu mode='custom' match='exact' check='none'> <model fallback='forbid'>qemu64</model> - <topology sockets='2' dies='1' cores='1' threads='1'/> + <topology sockets='2' dies='1' clusters='1' cores='1' threads='1'/> <numa> <cell id='0' cpus='0-1' memory='219136' unit='KiB'/> </numa> diff --git a/tests/qemuxml2argvdata/memory-hotplug-nvdimm-pmem.xml b/tests/qemuxml2argvdata/memory-hotplug-nvdimm-pmem.xml index a712adfe1e..d66481fd35 100644 --- a/tests/qemuxml2argvdata/memory-hotplug-nvdimm-pmem.xml +++ b/tests/qemuxml2argvdata/memory-hotplug-nvdimm-pmem.xml @@ -15,7 +15,7 @@ </idmap> <cpu mode='custom' match='exact' check='none'> <model fallback='forbid'>qemu64</model> - <topology sockets='2' dies='1' cores='1' threads='1'/> + <topology sockets='2' dies='1' clusters='1' cores='1' threads='1'/> <numa> <cell id='0' cpus='0-1' memory='219136' unit='KiB'/> </numa> diff --git a/tests/qemuxml2argvdata/memory-hotplug-nvdimm-readonly.xml b/tests/qemuxml2argvdata/memory-hotplug-nvdimm-readonly.xml index 57629ccb8c..56d6b7b712 100644 --- a/tests/qemuxml2argvdata/memory-hotplug-nvdimm-readonly.xml +++ b/tests/qemuxml2argvdata/memory-hotplug-nvdimm-readonly.xml @@ -15,7 +15,7 @@ </idmap> <cpu mode='custom' match='exact' check='none'> <model fallback='forbid'>qemu64</model> - <topology sockets='2' dies='1' cores='1' threads='1'/> + <topology sockets='2' dies='1' clusters='1' cores='1' threads='1'/> <numa> <cell id='0' cpus='0-1' memory='219136' unit='KiB'/> </numa> diff --git a/tests/qemuxml2argvdata/memory-hotplug-nvdimm.xml b/tests/qemuxml2argvdata/memory-hotplug-nvdimm.xml index 865ddcf0ea..ff6e3b7b0f 
100644 --- a/tests/qemuxml2argvdata/memory-hotplug-nvdimm.xml +++ b/tests/qemuxml2argvdata/memory-hotplug-nvdimm.xml @@ -15,7 +15,7 @@ </idmap> <cpu mode='custom' match='exact' check='none'> <model fallback='forbid'>qemu64</model> - <topology sockets='2' dies='1' cores='1' threads='1'/> + <topology sockets='2' dies='1' clusters='1' cores='1' threads='1'/> <numa> <cell id='0' cpus='0-1' memory='1048576' unit='KiB'/> </numa> diff --git a/tests/qemuxml2argvdata/memory-hotplug-virtio-mem.xml b/tests/qemuxml2argvdata/memory-hotplug-virtio-mem.xml index c578209d8a..52fa6b14e9 100644 --- a/tests/qemuxml2argvdata/memory-hotplug-virtio-mem.xml +++ b/tests/qemuxml2argvdata/memory-hotplug-virtio-mem.xml @@ -11,7 +11,7 @@ </os> <cpu mode='custom' match='exact' check='none'> <model fallback='forbid'>qemu64</model> - <topology sockets='2' dies='1' cores='1' threads='1'/> + <topology sockets='2' dies='1' clusters='1' cores='1' threads='1'/> <numa> <cell id='0' cpus='0-1' memory='2095104' unit='KiB'/> </numa> diff --git a/tests/qemuxml2argvdata/memory-hotplug-virtio-pmem.xml b/tests/qemuxml2argvdata/memory-hotplug-virtio-pmem.xml index a8b22dd3c5..2786a739ad 100644 --- a/tests/qemuxml2argvdata/memory-hotplug-virtio-pmem.xml +++ b/tests/qemuxml2argvdata/memory-hotplug-virtio-pmem.xml @@ -11,7 +11,7 @@ </os> <cpu mode='custom' match='exact' check='none'> <model fallback='forbid'>qemu64</model> - <topology sockets='2' dies='1' cores='1' threads='1'/> + <topology sockets='2' dies='1' clusters='1' cores='1' threads='1'/> <numa> <cell id='0' cpus='0-1' memory='2095104' unit='KiB'/> </numa> diff --git a/tests/qemuxml2xmloutdata/cpu-numa-disjoint.x86_64-latest.xml b/tests/qemuxml2xmloutdata/cpu-numa-disjoint.x86_64-latest.xml index fa2ec31463..4f33094949 100644 --- a/tests/qemuxml2xmloutdata/cpu-numa-disjoint.x86_64-latest.xml +++ b/tests/qemuxml2xmloutdata/cpu-numa-disjoint.x86_64-latest.xml @@ -10,7 +10,7 @@ </os> <cpu mode='custom' match='exact' check='none'> <model fallback='forbid'>qemu64</model> - <topology sockets='2' dies='1' cores='4' threads='2'/> + <topology sockets='2' dies='1' clusters='1' cores='4' threads='2'/> <numa> <cell id='0' cpus='0-3,8-11' memory='109550' unit='KiB'/> <cell id='1' cpus='4-7,12-15' memory='109550' unit='KiB'/> diff --git a/tests/qemuxml2xmloutdata/cpu-numa-disordered.x86_64-latest.xml b/tests/qemuxml2xmloutdata/cpu-numa-disordered.x86_64-latest.xml index 1b4d0bfa67..75dcb8c9e2 100644 --- a/tests/qemuxml2xmloutdata/cpu-numa-disordered.x86_64-latest.xml +++ b/tests/qemuxml2xmloutdata/cpu-numa-disordered.x86_64-latest.xml @@ -10,7 +10,7 @@ </os> <cpu mode='custom' match='exact' check='none'> <model fallback='forbid'>qemu64</model> - <topology sockets='2' dies='1' cores='4' threads='2'/> + <topology sockets='2' dies='1' clusters='1' cores='4' threads='2'/> <numa> <cell id='0' cpus='0-5' memory='109550' unit='KiB'/> <cell id='1' cpus='11-15' memory='109550' unit='KiB'/> diff --git a/tests/qemuxml2xmloutdata/cpu-numa-memshared.x86_64-latest.xml b/tests/qemuxml2xmloutdata/cpu-numa-memshared.x86_64-latest.xml index 47ed9efd69..c45e295921 100644 --- a/tests/qemuxml2xmloutdata/cpu-numa-memshared.x86_64-latest.xml +++ b/tests/qemuxml2xmloutdata/cpu-numa-memshared.x86_64-latest.xml @@ -10,7 +10,7 @@ </os> <cpu mode='custom' match='exact' check='none'> <model fallback='forbid'>qemu64</model> - <topology sockets='2' dies='1' cores='4' threads='2'/> + <topology sockets='2' dies='1' clusters='1' cores='4' threads='2'/> <numa> <cell id='0' cpus='0-7' memory='109550' unit='KiB' 
memAccess='shared'/> <cell id='1' cpus='8-15' memory='109550' unit='KiB' memAccess='private'/> diff --git a/tests/qemuxml2xmloutdata/cpu-numa-no-memory-element.x86_64-latest.xml b/tests/qemuxml2xmloutdata/cpu-numa-no-memory-element.x86_64-latest.xml index 57bbacdff0..663d137ff5 100644 --- a/tests/qemuxml2xmloutdata/cpu-numa-no-memory-element.x86_64-latest.xml +++ b/tests/qemuxml2xmloutdata/cpu-numa-no-memory-element.x86_64-latest.xml @@ -10,7 +10,7 @@ </os> <cpu mode='custom' match='exact' check='none'> <model fallback='forbid'>qemu64</model> - <topology sockets='2' dies='1' cores='4' threads='2'/> + <topology sockets='2' dies='1' clusters='1' cores='4' threads='2'/> <numa> <cell id='0' cpus='0-7' memory='109550' unit='KiB'/> <cell id='1' cpus='8-15' memory='109550' unit='KiB'/> diff --git a/tests/qemuxml2xmloutdata/cpu-numa1.x86_64-latest.xml b/tests/qemuxml2xmloutdata/cpu-numa1.x86_64-latest.xml index 57bbacdff0..663d137ff5 100644 --- a/tests/qemuxml2xmloutdata/cpu-numa1.x86_64-latest.xml +++ b/tests/qemuxml2xmloutdata/cpu-numa1.x86_64-latest.xml @@ -10,7 +10,7 @@ </os> <cpu mode='custom' match='exact' check='none'> <model fallback='forbid'>qemu64</model> - <topology sockets='2' dies='1' cores='4' threads='2'/> + <topology sockets='2' dies='1' clusters='1' cores='4' threads='2'/> <numa> <cell id='0' cpus='0-7' memory='109550' unit='KiB'/> <cell id='1' cpus='8-15' memory='109550' unit='KiB'/> diff --git a/tests/qemuxml2xmloutdata/cpu-numa2.x86_64-latest.xml b/tests/qemuxml2xmloutdata/cpu-numa2.x86_64-latest.xml index 57bbacdff0..663d137ff5 100644 --- a/tests/qemuxml2xmloutdata/cpu-numa2.x86_64-latest.xml +++ b/tests/qemuxml2xmloutdata/cpu-numa2.x86_64-latest.xml @@ -10,7 +10,7 @@ </os> <cpu mode='custom' match='exact' check='none'> <model fallback='forbid'>qemu64</model> - <topology sockets='2' dies='1' cores='4' threads='2'/> + <topology sockets='2' dies='1' clusters='1' cores='4' threads='2'/> <numa> <cell id='0' cpus='0-7' memory='109550' unit='KiB'/> <cell id='1' cpus='8-15' memory='109550' unit='KiB'/> diff --git a/tests/qemuxml2xmloutdata/memory-hotplug-dimm-addr.x86_64-latest.xml b/tests/qemuxml2xmloutdata/memory-hotplug-dimm-addr.x86_64-latest.xml index 0a32d5491a..38b41e6719 100644 --- a/tests/qemuxml2xmloutdata/memory-hotplug-dimm-addr.x86_64-latest.xml +++ b/tests/qemuxml2xmloutdata/memory-hotplug-dimm-addr.x86_64-latest.xml @@ -11,7 +11,7 @@ </os> <cpu mode='custom' match='exact' check='none'> <model fallback='forbid'>qemu64</model> - <topology sockets='2' dies='1' cores='1' threads='1'/> + <topology sockets='2' dies='1' clusters='1' cores='1' threads='1'/> <numa> <cell id='0' cpus='0-1' memory='219136' unit='KiB'/> </numa> diff --git a/tests/qemuxml2xmloutdata/memory-hotplug-dimm.x86_64-latest.xml b/tests/qemuxml2xmloutdata/memory-hotplug-dimm.x86_64-latest.xml index 7c1b7b2c5d..7f0dc85c0e 100644 --- a/tests/qemuxml2xmloutdata/memory-hotplug-dimm.x86_64-latest.xml +++ b/tests/qemuxml2xmloutdata/memory-hotplug-dimm.x86_64-latest.xml @@ -15,7 +15,7 @@ </idmap> <cpu mode='custom' match='exact' check='none'> <model fallback='forbid'>qemu64</model> - <topology sockets='2' dies='1' cores='1' threads='1'/> + <topology sockets='2' dies='1' clusters='1' cores='1' threads='1'/> <numa> <cell id='0' cpus='0-1' memory='219136' unit='KiB'/> </numa> diff --git a/tests/qemuxml2xmloutdata/memory-hotplug-multiple.x86_64-latest.xml b/tests/qemuxml2xmloutdata/memory-hotplug-multiple.x86_64-latest.xml index 42b0f7b880..b3306fb569 100644 --- 
a/tests/qemuxml2xmloutdata/memory-hotplug-multiple.x86_64-latest.xml +++ b/tests/qemuxml2xmloutdata/memory-hotplug-multiple.x86_64-latest.xml @@ -11,7 +11,7 @@ </os> <cpu mode='custom' match='exact' check='none'> <model fallback='forbid'>qemu64</model> - <topology sockets='2' dies='1' cores='1' threads='1'/> + <topology sockets='2' dies='1' clusters='1' cores='1' threads='1'/> <numa> <cell id='0' cpus='0-1' memory='2095104' unit='KiB'/> </numa> diff --git a/tests/qemuxml2xmloutdata/memory-hotplug-nvdimm-ppc64-abi-update.ppc64-latest.xml b/tests/qemuxml2xmloutdata/memory-hotplug-nvdimm-ppc64-abi-update.ppc64-latest.xml index ae157c4849..4cc0c674df 100644 --- a/tests/qemuxml2xmloutdata/memory-hotplug-nvdimm-ppc64-abi-update.ppc64-latest.xml +++ b/tests/qemuxml2xmloutdata/memory-hotplug-nvdimm-ppc64-abi-update.ppc64-latest.xml @@ -11,7 +11,7 @@ </os> <cpu mode='custom' match='exact' check='none'> <model fallback='forbid'>POWER9</model> - <topology sockets='2' dies='1' cores='1' threads='1'/> + <topology sockets='2' dies='1' clusters='1' cores='1' threads='1'/> <numa> <cell id='0' cpus='0-1' memory='1048576' unit='KiB'/> </numa> diff --git a/tests/qemuxml2xmloutdata/memory-hotplug-nvdimm-ppc64.ppc64-latest.xml b/tests/qemuxml2xmloutdata/memory-hotplug-nvdimm-ppc64.ppc64-latest.xml index 3c1cbc731d..a5c26e3c5b 100644 --- a/tests/qemuxml2xmloutdata/memory-hotplug-nvdimm-ppc64.ppc64-latest.xml +++ b/tests/qemuxml2xmloutdata/memory-hotplug-nvdimm-ppc64.ppc64-latest.xml @@ -11,7 +11,7 @@ </os> <cpu mode='custom' match='exact' check='none'> <model fallback='forbid'>POWER9</model> - <topology sockets='2' dies='1' cores='1' threads='1'/> + <topology sockets='2' dies='1' clusters='1' cores='1' threads='1'/> <numa> <cell id='0' cpus='0-1' memory='1048576' unit='KiB'/> </numa> diff --git a/tests/qemuxml2xmloutdata/memory-hotplug.x86_64-latest.xml b/tests/qemuxml2xmloutdata/memory-hotplug.x86_64-latest.xml index 083102e8d6..697819387f 100644 --- a/tests/qemuxml2xmloutdata/memory-hotplug.x86_64-latest.xml +++ b/tests/qemuxml2xmloutdata/memory-hotplug.x86_64-latest.xml @@ -11,7 +11,7 @@ </os> <cpu mode='custom' match='exact' check='none'> <model fallback='forbid'>qemu64</model> - <topology sockets='2' dies='1' cores='1' threads='1'/> + <topology sockets='2' dies='1' clusters='1' cores='1' threads='1'/> <numa> <cell id='0' cpus='0-1' memory='219136' unit='KiB'/> </numa> diff --git a/tests/qemuxml2xmloutdata/numad-auto-memory-vcpu-cpuset.x86_64-latest.xml b/tests/qemuxml2xmloutdata/numad-auto-memory-vcpu-cpuset.x86_64-latest.xml index 2d04bc23c2..6068a76464 100644 --- a/tests/qemuxml2xmloutdata/numad-auto-memory-vcpu-cpuset.x86_64-latest.xml +++ b/tests/qemuxml2xmloutdata/numad-auto-memory-vcpu-cpuset.x86_64-latest.xml @@ -13,7 +13,7 @@ </os> <cpu mode='custom' match='exact' check='none'> <model fallback='forbid'>qemu64</model> - <topology sockets='2' dies='1' cores='1' threads='1'/> + <topology sockets='2' dies='1' clusters='1' cores='1' threads='1'/> </cpu> <clock offset='utc'/> <on_poweroff>destroy</on_poweroff> diff --git a/tests/qemuxml2xmloutdata/numad-auto-memory-vcpu-no-cpuset-and-placement.x86_64-latest.xml b/tests/qemuxml2xmloutdata/numad-auto-memory-vcpu-no-cpuset-and-placement.x86_64-latest.xml index 80f7284126..6c558526e9 100644 --- a/tests/qemuxml2xmloutdata/numad-auto-memory-vcpu-no-cpuset-and-placement.x86_64-latest.xml +++ b/tests/qemuxml2xmloutdata/numad-auto-memory-vcpu-no-cpuset-and-placement.x86_64-latest.xml @@ -13,7 +13,7 @@ </os> <cpu mode='custom' match='exact' check='none'> <model 
fallback='forbid'>qemu64</model> - <topology sockets='2' dies='1' cores='1' threads='1'/> + <topology sockets='2' dies='1' clusters='1' cores='1' threads='1'/> </cpu> <clock offset='utc'/> <on_poweroff>destroy</on_poweroff> diff --git a/tests/qemuxml2xmloutdata/numad-auto-vcpu-no-numatune.x86_64-latest.xml b/tests/qemuxml2xmloutdata/numad-auto-vcpu-no-numatune.x86_64-latest.xml index 724209f6e3..6e1fecb488 100644 --- a/tests/qemuxml2xmloutdata/numad-auto-vcpu-no-numatune.x86_64-latest.xml +++ b/tests/qemuxml2xmloutdata/numad-auto-vcpu-no-numatune.x86_64-latest.xml @@ -13,7 +13,7 @@ </os> <cpu mode='custom' match='exact' check='none'> <model fallback='forbid'>qemu64</model> - <topology sockets='2' dies='1' cores='1' threads='1'/> + <topology sockets='2' dies='1' clusters='1' cores='1' threads='1'/> </cpu> <clock offset='utc'/> <on_poweroff>destroy</on_poweroff> diff --git a/tests/qemuxml2xmloutdata/numad-static-vcpu-no-numatune.x86_64-latest.xml b/tests/qemuxml2xmloutdata/numad-static-vcpu-no-numatune.x86_64-latest.xml index 2a4ee0d496..c42d7066f9 100644 --- a/tests/qemuxml2xmloutdata/numad-static-vcpu-no-numatune.x86_64-latest.xml +++ b/tests/qemuxml2xmloutdata/numad-static-vcpu-no-numatune.x86_64-latest.xml @@ -10,7 +10,7 @@ </os> <cpu mode='custom' match='exact' check='none'> <model fallback='forbid'>qemu64</model> - <topology sockets='2' dies='1' cores='1' threads='1'/> + <topology sockets='2' dies='1' clusters='1' cores='1' threads='1'/> </cpu> <clock offset='utc'/> <on_poweroff>destroy</on_poweroff> diff --git a/tests/qemuxml2xmloutdata/pci-expander-bus.x86_64-latest.xml b/tests/qemuxml2xmloutdata/pci-expander-bus.x86_64-latest.xml index b63c8c145a..2a6c329a40 100644 --- a/tests/qemuxml2xmloutdata/pci-expander-bus.x86_64-latest.xml +++ b/tests/qemuxml2xmloutdata/pci-expander-bus.x86_64-latest.xml @@ -10,7 +10,7 @@ </os> <cpu mode='custom' match='exact' check='none'> <model fallback='forbid'>qemu64</model> - <topology sockets='2' dies='1' cores='4' threads='2'/> + <topology sockets='2' dies='1' clusters='1' cores='4' threads='2'/> <numa> <cell id='0' cpus='0-7' memory='109550' unit='KiB'/> <cell id='1' cpus='8-15' memory='109550' unit='KiB'/> diff --git a/tests/qemuxml2xmloutdata/pcie-expander-bus.x86_64-latest.xml b/tests/qemuxml2xmloutdata/pcie-expander-bus.x86_64-latest.xml index a441be8ebe..99612740b2 100644 --- a/tests/qemuxml2xmloutdata/pcie-expander-bus.x86_64-latest.xml +++ b/tests/qemuxml2xmloutdata/pcie-expander-bus.x86_64-latest.xml @@ -10,7 +10,7 @@ </os> <cpu mode='custom' match='exact' check='none'> <model fallback='forbid'>qemu64</model> - <topology sockets='2' dies='1' cores='4' threads='2'/> + <topology sockets='2' dies='1' clusters='1' cores='4' threads='2'/> <numa> <cell id='0' cpus='0-7' memory='109550' unit='KiB'/> <cell id='1' cpus='8-15' memory='109550' unit='KiB'/> diff --git a/tests/qemuxml2xmloutdata/pseries-phb-numa-node.ppc64-latest.xml b/tests/qemuxml2xmloutdata/pseries-phb-numa-node.ppc64-latest.xml index 59015846fb..0a044f50b0 100644 --- a/tests/qemuxml2xmloutdata/pseries-phb-numa-node.ppc64-latest.xml +++ b/tests/qemuxml2xmloutdata/pseries-phb-numa-node.ppc64-latest.xml @@ -14,7 +14,7 @@ </os> <cpu mode='custom' match='exact' check='none'> <model fallback='forbid'>POWER9</model> - <topology sockets='2' dies='1' cores='1' threads='4'/> + <topology sockets='2' dies='1' clusters='1' cores='1' threads='4'/> <numa> <cell id='0' cpus='0-3' memory='1048576' unit='KiB'/> <cell id='1' cpus='4-7' memory='1048576' unit='KiB'/> diff --git 
a/tests/vmx2xmldata/esx-in-the-wild-10.xml b/tests/vmx2xmldata/esx-in-the-wild-10.xml index 47ed637920..78129682bd 100644 --- a/tests/vmx2xmldata/esx-in-the-wild-10.xml +++ b/tests/vmx2xmldata/esx-in-the-wild-10.xml @@ -12,7 +12,7 @@ <type arch='x86_64'>hvm</type> </os> <cpu> - <topology sockets='1' dies='1' cores='2' threads='1'/> + <topology sockets='1' dies='1' clusters='1' cores='2' threads='1'/> </cpu> <clock offset='utc'/> <on_poweroff>destroy</on_poweroff> diff --git a/tests/vmx2xmldata/esx-in-the-wild-8.xml b/tests/vmx2xmldata/esx-in-the-wild-8.xml index 0eea610709..47d22ced2a 100644 --- a/tests/vmx2xmldata/esx-in-the-wild-8.xml +++ b/tests/vmx2xmldata/esx-in-the-wild-8.xml @@ -11,7 +11,7 @@ <type arch='x86_64'>hvm</type> </os> <cpu> - <topology sockets='4' dies='1' cores='2' threads='1'/> + <topology sockets='4' dies='1' clusters='1' cores='2' threads='1'/> </cpu> <clock offset='utc'/> <on_poweroff>destroy</on_poweroff> diff --git a/tests/vmx2xmldata/esx-in-the-wild-9.xml b/tests/vmx2xmldata/esx-in-the-wild-9.xml index 66eca400dd..ee6be2527f 100644 --- a/tests/vmx2xmldata/esx-in-the-wild-9.xml +++ b/tests/vmx2xmldata/esx-in-the-wild-9.xml @@ -12,7 +12,7 @@ <type arch='x86_64'>hvm</type> </os> <cpu> - <topology sockets='4' dies='1' cores='4' threads='1'/> + <topology sockets='4' dies='1' clusters='1' cores='4' threads='1'/> </cpu> <clock offset='utc'/> <on_poweroff>destroy</on_poweroff> -- 2.43.0
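The hunks above only spell out the previously implicit default, clusters='1', in the existing test configurations. As a minimal sketch of the attribute itself (values chosen purely for illustration, not taken from any file in this series), a guest asking for two clusters per die would look like:

  <vcpu>8</vcpu>
  <cpu mode='host-passthrough' check='none'>
    <topology sockets='1' dies='1' clusters='2' cores='4' threads='1'/>
  </cpu>

Here 1*1*2*4*1 matches the 8 vCPUs requested above; whether a value other than 1 is actually accepted depends on the hypervisor driver, which the QEMU patches later in the series take care of.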

Signed-off-by: Andrea Bolognani <abologna@redhat.com> Reviewed-by: Peter Krempa <pkrempa@redhat.com> --- src/qemu/qemu_capabilities.c | 2 ++ src/qemu/qemu_capabilities.h | 1 + tests/qemucapabilitiesdata/caps_7.1.0_ppc64.xml | 1 + tests/qemucapabilitiesdata/caps_7.1.0_x86_64.xml | 1 + tests/qemucapabilitiesdata/caps_7.2.0_ppc.xml | 1 + tests/qemucapabilitiesdata/caps_7.2.0_x86_64+hvf.xml | 1 + tests/qemucapabilitiesdata/caps_7.2.0_x86_64.xml | 1 + tests/qemucapabilitiesdata/caps_8.0.0_riscv64.xml | 1 + tests/qemucapabilitiesdata/caps_8.0.0_x86_64.xml | 1 + tests/qemucapabilitiesdata/caps_8.1.0_s390x.xml | 1 + tests/qemucapabilitiesdata/caps_8.1.0_x86_64.xml | 1 + tests/qemucapabilitiesdata/caps_8.2.0_aarch64.xml | 1 + tests/qemucapabilitiesdata/caps_8.2.0_x86_64.xml | 1 + tests/qemucapabilitiesdata/caps_9.0.0_x86_64.xml | 1 + 14 files changed, 15 insertions(+) diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c index 3d35333f09..a4d42b40ed 100644 --- a/src/qemu/qemu_capabilities.c +++ b/src/qemu/qemu_capabilities.c @@ -699,6 +699,7 @@ VIR_ENUM_IMPL(virQEMUCaps, "run-with.async-teardown", /* QEMU_CAPS_RUN_WITH_ASYNC_TEARDOWN */ "virtio-blk-vhost-vdpa", /* QEMU_CAPS_DEVICE_VIRTIO_BLK_VHOST_VDPA */ "virtio-blk.iothread-mapping", /* QEMU_CAPS_VIRTIO_BLK_IOTHREAD_MAPPING */ + "smp-clusters", /* QEMU_CAPS_SMP_CLUSTERS */ ); @@ -1552,6 +1553,7 @@ static struct virQEMUCapsStringFlags virQEMUCapsQMPSchemaQueries[] = { { "query-display-options/ret-type/+sdl", QEMU_CAPS_SDL }, { "query-display-options/ret-type/+egl-headless", QEMU_CAPS_EGL_HEADLESS }, { "query-hotpluggable-cpus/ret-type/props/die-id", QEMU_CAPS_SMP_DIES }, + { "query-hotpluggable-cpus/ret-type/props/cluster-id", QEMU_CAPS_SMP_CLUSTERS }, { "query-named-block-nodes/arg-type/flat", QEMU_CAPS_QMP_QUERY_NAMED_BLOCK_NODES_FLAT }, { "screendump/arg-type/device", QEMU_CAPS_SCREENDUMP_DEVICE }, { "set-numa-node/arg-type/+hmat-lb", QEMU_CAPS_NUMA_HMAT }, diff --git a/src/qemu/qemu_capabilities.h b/src/qemu/qemu_capabilities.h index 279e9a8273..a353750670 100644 --- a/src/qemu/qemu_capabilities.h +++ b/src/qemu/qemu_capabilities.h @@ -678,6 +678,7 @@ typedef enum { /* virQEMUCapsFlags grouping marker for syntax-check */ QEMU_CAPS_RUN_WITH_ASYNC_TEARDOWN, /* asynchronous teardown -run-with async-teardown=on|off */ QEMU_CAPS_DEVICE_VIRTIO_BLK_VHOST_VDPA, /* virtio-blk-vhost-vdpa block driver */ QEMU_CAPS_VIRTIO_BLK_IOTHREAD_MAPPING, /* virtio-blk supports per-virtqueue iothread mapping */ + QEMU_CAPS_SMP_CLUSTERS, /* -smp clusters= */ QEMU_CAPS_LAST /* this must always be the last item */ } virQEMUCapsFlags; diff --git a/tests/qemucapabilitiesdata/caps_7.1.0_ppc64.xml b/tests/qemucapabilitiesdata/caps_7.1.0_ppc64.xml index 4315241d1d..536524cf18 100644 --- a/tests/qemucapabilitiesdata/caps_7.1.0_ppc64.xml +++ b/tests/qemucapabilitiesdata/caps_7.1.0_ppc64.xml @@ -154,6 +154,7 @@ <flag name='virtio-crypto'/> <flag name='pvpanic-pci'/> <flag name='virtio-gpu.blob'/> + <flag name='smp-clusters'/> <version>7001000</version> <microcodeVersion>42900244</microcodeVersion> <package>v7.1.0</package> diff --git a/tests/qemucapabilitiesdata/caps_7.1.0_x86_64.xml b/tests/qemucapabilitiesdata/caps_7.1.0_x86_64.xml index bd84750dc5..58e1111982 100644 --- a/tests/qemucapabilitiesdata/caps_7.1.0_x86_64.xml +++ b/tests/qemucapabilitiesdata/caps_7.1.0_x86_64.xml @@ -188,6 +188,7 @@ <flag name='virtio-crypto'/> <flag name='pvpanic-pci'/> <flag name='virtio-gpu.blob'/> + <flag name='smp-clusters'/> <version>7001000</version> 
<microcodeVersion>43100244</microcodeVersion> <package>v7.1.0</package> diff --git a/tests/qemucapabilitiesdata/caps_7.2.0_ppc.xml b/tests/qemucapabilitiesdata/caps_7.2.0_ppc.xml index a1fc441412..127b8ee4c2 100644 --- a/tests/qemucapabilitiesdata/caps_7.2.0_ppc.xml +++ b/tests/qemucapabilitiesdata/caps_7.2.0_ppc.xml @@ -149,6 +149,7 @@ <flag name='virtio-crypto'/> <flag name='pvpanic-pci'/> <flag name='virtio-gpu.blob'/> + <flag name='smp-clusters'/> <version>7002000</version> <microcodeVersion>0</microcodeVersion> <package>qemu-7.2.0-6.fc37</package> diff --git a/tests/qemucapabilitiesdata/caps_7.2.0_x86_64+hvf.xml b/tests/qemucapabilitiesdata/caps_7.2.0_x86_64+hvf.xml index 06a01a2c4c..a30ec3c164 100644 --- a/tests/qemucapabilitiesdata/caps_7.2.0_x86_64+hvf.xml +++ b/tests/qemucapabilitiesdata/caps_7.2.0_x86_64+hvf.xml @@ -192,6 +192,7 @@ <flag name='cryptodev-backend-lkcf'/> <flag name='pvpanic-pci'/> <flag name='virtio-gpu.blob'/> + <flag name='smp-clusters'/> <version>7002000</version> <microcodeVersion>43100245</microcodeVersion> <package>v7.2.0</package> diff --git a/tests/qemucapabilitiesdata/caps_7.2.0_x86_64.xml b/tests/qemucapabilitiesdata/caps_7.2.0_x86_64.xml index 8ac1529c30..24ac7b8f6e 100644 --- a/tests/qemucapabilitiesdata/caps_7.2.0_x86_64.xml +++ b/tests/qemucapabilitiesdata/caps_7.2.0_x86_64.xml @@ -192,6 +192,7 @@ <flag name='cryptodev-backend-lkcf'/> <flag name='pvpanic-pci'/> <flag name='virtio-gpu.blob'/> + <flag name='smp-clusters'/> <version>7002000</version> <microcodeVersion>43100245</microcodeVersion> <package>v7.2.0</package> diff --git a/tests/qemucapabilitiesdata/caps_8.0.0_riscv64.xml b/tests/qemucapabilitiesdata/caps_8.0.0_riscv64.xml index 31300d3d31..3f2acb5018 100644 --- a/tests/qemucapabilitiesdata/caps_8.0.0_riscv64.xml +++ b/tests/qemucapabilitiesdata/caps_8.0.0_riscv64.xml @@ -138,6 +138,7 @@ <flag name='virtio-crypto'/> <flag name='pvpanic-pci'/> <flag name='virtio-gpu.blob'/> + <flag name='smp-clusters'/> <version>7002050</version> <microcodeVersion>0</microcodeVersion> <package>v7.2.0-333-g222059a0fc</package> diff --git a/tests/qemucapabilitiesdata/caps_8.0.0_x86_64.xml b/tests/qemucapabilitiesdata/caps_8.0.0_x86_64.xml index c2fa8eb028..85869f6b5d 100644 --- a/tests/qemucapabilitiesdata/caps_8.0.0_x86_64.xml +++ b/tests/qemucapabilitiesdata/caps_8.0.0_x86_64.xml @@ -196,6 +196,7 @@ <flag name='virtio-gpu.blob'/> <flag name='rbd-encryption-layering'/> <flag name='rbd-encryption-luks-any'/> + <flag name='smp-clusters'/> <version>8000000</version> <microcodeVersion>43100244</microcodeVersion> <package>v8.0.0</package> diff --git a/tests/qemucapabilitiesdata/caps_8.1.0_s390x.xml b/tests/qemucapabilitiesdata/caps_8.1.0_s390x.xml index 427ee9d5c7..19422f08fa 100644 --- a/tests/qemucapabilitiesdata/caps_8.1.0_s390x.xml +++ b/tests/qemucapabilitiesdata/caps_8.1.0_s390x.xml @@ -112,6 +112,7 @@ <flag name='rbd-encryption-layering'/> <flag name='rbd-encryption-luks-any'/> <flag name='run-with.async-teardown'/> + <flag name='smp-clusters'/> <version>8000050</version> <microcodeVersion>39100245</microcodeVersion> <package>v8.0.0-1270-g1c12355b</package> diff --git a/tests/qemucapabilitiesdata/caps_8.1.0_x86_64.xml b/tests/qemucapabilitiesdata/caps_8.1.0_x86_64.xml index d266dd0f31..0caee53550 100644 --- a/tests/qemucapabilitiesdata/caps_8.1.0_x86_64.xml +++ b/tests/qemucapabilitiesdata/caps_8.1.0_x86_64.xml @@ -198,6 +198,7 @@ <flag name='qcow2-discard-no-unref'/> <flag name='run-with.async-teardown'/> <flag name='virtio-blk-vhost-vdpa'/> + <flag 
name='smp-clusters'/> <version>8001000</version> <microcodeVersion>43100245</microcodeVersion> <package>v8.1.0</package> diff --git a/tests/qemucapabilitiesdata/caps_8.2.0_aarch64.xml b/tests/qemucapabilitiesdata/caps_8.2.0_aarch64.xml index 40d490c1c0..54fd349365 100644 --- a/tests/qemucapabilitiesdata/caps_8.2.0_aarch64.xml +++ b/tests/qemucapabilitiesdata/caps_8.2.0_aarch64.xml @@ -162,6 +162,7 @@ <flag name='rbd-encryption-luks-any'/> <flag name='qcow2-discard-no-unref'/> <flag name='run-with.async-teardown'/> + <flag name='smp-clusters'/> <version>8002000</version> <microcodeVersion>61700246</microcodeVersion> <package>v8.2.0</package> diff --git a/tests/qemucapabilitiesdata/caps_8.2.0_x86_64.xml b/tests/qemucapabilitiesdata/caps_8.2.0_x86_64.xml index ee52952702..8a6527810a 100644 --- a/tests/qemucapabilitiesdata/caps_8.2.0_x86_64.xml +++ b/tests/qemucapabilitiesdata/caps_8.2.0_x86_64.xml @@ -199,6 +199,7 @@ <flag name='qcow2-discard-no-unref'/> <flag name='run-with.async-teardown'/> <flag name='virtio-blk-vhost-vdpa'/> + <flag name='smp-clusters'/> <version>8002000</version> <microcodeVersion>43100246</microcodeVersion> <package>v8.2.0</package> diff --git a/tests/qemucapabilitiesdata/caps_9.0.0_x86_64.xml b/tests/qemucapabilitiesdata/caps_9.0.0_x86_64.xml index 65d86f7016..b4c3b1bae3 100644 --- a/tests/qemucapabilitiesdata/caps_9.0.0_x86_64.xml +++ b/tests/qemucapabilitiesdata/caps_9.0.0_x86_64.xml @@ -200,6 +200,7 @@ <flag name='run-with.async-teardown'/> <flag name='virtio-blk-vhost-vdpa'/> <flag name='virtio-blk.iothread-mapping'/> + <flag name='smp-clusters'/> <version>8002050</version> <microcodeVersion>43100245</microcodeVersion> <package>v8.2.0-196-g7425b6277f</package> -- 2.43.0
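As the qemu_capabilities.c hunk above shows, the new flag is probed purely from the QMP schema: if query-hotpluggable-cpus advertises a cluster-id property, the binary is assumed to take clusters= in -smp, which is why every capability data file touched by this series gains <flag name='smp-clusters'/>. Tying this to the next patch, a guest using the topology sketched earlier would, on such a binary, end up with a command line along these lines (illustrative values):

  -smp 8,sockets=1,dies=1,clusters=2,cores=4,threads=1

On binaries that lack the flag, the next patch keeps rejecting topologies with more than one cluster per die, so the information is never silently dropped.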

https://issues.redhat.com/browse/RHEL-7043 Signed-off-by: Andrea Bolognani <abologna@redhat.com> Reviewed-by: Peter Krempa <pkrempa@redhat.com> --- src/qemu/qemu_command.c | 5 ++++- .../qemuxml2argvdata/cpu-hotplug-startup.x86_64-latest.args | 2 +- tests/qemuxml2argvdata/cpu-numa-disjoint.x86_64-latest.args | 2 +- .../qemuxml2argvdata/cpu-numa-disordered.x86_64-latest.args | 2 +- tests/qemuxml2argvdata/cpu-numa-memshared.x86_64-latest.args | 2 +- .../cpu-numa-no-memory-element.x86_64-latest.args | 2 +- tests/qemuxml2argvdata/cpu-numa1.x86_64-latest.args | 2 +- tests/qemuxml2argvdata/cpu-numa2.x86_64-latest.args | 2 +- tests/qemuxml2argvdata/cpu-topology1.x86_64-latest.args | 2 +- tests/qemuxml2argvdata/cpu-topology2.x86_64-latest.args | 2 +- tests/qemuxml2argvdata/cpu-topology3.x86_64-latest.args | 2 +- tests/qemuxml2argvdata/cpu-topology4.x86_64-latest.args | 2 +- .../fd-memory-no-numa-topology.x86_64-latest.args | 2 +- .../fd-memory-numa-topology.x86_64-latest.args | 2 +- .../fd-memory-numa-topology2.x86_64-latest.args | 2 +- .../fd-memory-numa-topology3.x86_64-latest.args | 2 +- tests/qemuxml2argvdata/hugepages-nvdimm.x86_64-latest.args | 2 +- .../memfd-memory-default-hugepage.x86_64-latest.args | 2 +- tests/qemuxml2argvdata/memfd-memory-numa.x86_64-latest.args | 2 +- .../memory-hotplug-dimm-addr.x86_64-latest.args | 2 +- .../qemuxml2argvdata/memory-hotplug-dimm.x86_64-latest.args | 2 +- .../memory-hotplug-multiple.x86_64-latest.args | 2 +- .../memory-hotplug-nvdimm-access.x86_64-latest.args | 2 +- .../memory-hotplug-nvdimm-align.x86_64-latest.args | 2 +- .../memory-hotplug-nvdimm-label.x86_64-latest.args | 2 +- .../memory-hotplug-nvdimm-pmem.x86_64-latest.args | 2 +- .../memory-hotplug-nvdimm-ppc64-abi-update.ppc64-latest.args | 2 +- .../memory-hotplug-nvdimm-ppc64.ppc64-latest.args | 2 +- .../memory-hotplug-nvdimm-readonly.x86_64-latest.args | 2 +- .../memory-hotplug-nvdimm.x86_64-latest.args | 2 +- .../memory-hotplug-virtio-mem.x86_64-latest.args | 2 +- .../memory-hotplug-virtio-pmem.x86_64-latest.args | 2 +- tests/qemuxml2argvdata/memory-hotplug.x86_64-latest.args | 2 +- .../numad-auto-memory-vcpu-cpuset.x86_64-latest.args | 2 +- ...to-memory-vcpu-no-cpuset-and-placement.x86_64-latest.args | 2 +- .../numad-auto-vcpu-no-numatune.x86_64-latest.args | 2 +- .../numad-auto-vcpu-static-numatune.x86_64-latest.args | 2 +- .../numad-static-memory-auto-vcpu.x86_64-latest.args | 2 +- .../numad-static-vcpu-no-numatune.x86_64-latest.args | 2 +- tests/qemuxml2argvdata/numad.x86_64-latest.args | 2 +- .../numatune-auto-nodeset-invalid.x86_64-latest.args | 2 +- tests/qemuxml2argvdata/pci-expander-bus.x86_64-latest.args | 2 +- tests/qemuxml2argvdata/pcie-expander-bus.x86_64-latest.args | 2 +- .../qemuxml2argvdata/pseries-phb-numa-node.ppc64-latest.args | 2 +- 44 files changed, 47 insertions(+), 44 deletions(-) diff --git a/src/qemu/qemu_command.c b/src/qemu/qemu_command.c index 71daa85e55..712feb7b81 100644 --- a/src/qemu/qemu_command.c +++ b/src/qemu/qemu_command.c @@ -7226,7 +7226,8 @@ qemuBuildSmpCommandLine(virCommand *cmd, _("Only 1 die per socket is supported")); return -1; } - if (def->cpu->clusters != 1) { + if (def->cpu->clusters != 1 && + !virQEMUCapsGet(qemuCaps, QEMU_CAPS_SMP_CLUSTERS)) { virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s", _("Only 1 cluster per die is supported")); return -1; @@ -7234,6 +7235,8 @@ qemuBuildSmpCommandLine(virCommand *cmd, virBufferAsprintf(&buf, ",sockets=%u", def->cpu->sockets); if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_SMP_DIES)) virBufferAsprintf(&buf, 
",dies=%u", def->cpu->dies); + if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_SMP_CLUSTERS)) + virBufferAsprintf(&buf, ",clusters=%u", def->cpu->clusters); virBufferAsprintf(&buf, ",cores=%u", def->cpu->cores); virBufferAsprintf(&buf, ",threads=%u", def->cpu->threads); } else { diff --git a/tests/qemuxml2argvdata/cpu-hotplug-startup.x86_64-latest.args b/tests/qemuxml2argvdata/cpu-hotplug-startup.x86_64-latest.args index 009c08d71a..af1b464104 100644 --- a/tests/qemuxml2argvdata/cpu-hotplug-startup.x86_64-latest.args +++ b/tests/qemuxml2argvdata/cpu-hotplug-startup.x86_64-latest.args @@ -16,7 +16,7 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \ -m size=219136k \ -object '{"qom-type":"memory-backend-ram","id":"pc.ram","size":224395264}' \ -overcommit mem-lock=off \ --smp 1,maxcpus=6,sockets=3,dies=1,cores=2,threads=1 \ +-smp 1,maxcpus=6,sockets=3,dies=1,clusters=1,cores=2,threads=1 \ -uuid c7a5fdbd-edaf-9455-926a-d65c16db1809 \ -display none \ -no-user-config \ diff --git a/tests/qemuxml2argvdata/cpu-numa-disjoint.x86_64-latest.args b/tests/qemuxml2argvdata/cpu-numa-disjoint.x86_64-latest.args index 3b12934425..22fca082a8 100644 --- a/tests/qemuxml2argvdata/cpu-numa-disjoint.x86_64-latest.args +++ b/tests/qemuxml2argvdata/cpu-numa-disjoint.x86_64-latest.args @@ -15,7 +15,7 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \ -cpu qemu64 \ -m size=219136k \ -overcommit mem-lock=off \ --smp 16,sockets=2,dies=1,cores=4,threads=2 \ +-smp 16,sockets=2,dies=1,clusters=1,cores=4,threads=2 \ -object '{"qom-type":"memory-backend-ram","id":"ram-node0","size":112197632}' \ -numa node,nodeid=0,cpus=0-3,cpus=8-11,memdev=ram-node0 \ -object '{"qom-type":"memory-backend-ram","id":"ram-node1","size":112197632}' \ diff --git a/tests/qemuxml2argvdata/cpu-numa-disordered.x86_64-latest.args b/tests/qemuxml2argvdata/cpu-numa-disordered.x86_64-latest.args index ee6974326d..bc4a6ad5f3 100644 --- a/tests/qemuxml2argvdata/cpu-numa-disordered.x86_64-latest.args +++ b/tests/qemuxml2argvdata/cpu-numa-disordered.x86_64-latest.args @@ -15,7 +15,7 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \ -cpu qemu64 \ -m size=328704k \ -overcommit mem-lock=off \ --smp 16,sockets=2,dies=1,cores=4,threads=2 \ +-smp 16,sockets=2,dies=1,clusters=1,cores=4,threads=2 \ -object '{"qom-type":"memory-backend-ram","id":"ram-node0","size":112197632}' \ -numa node,nodeid=0,cpus=0-5,memdev=ram-node0 \ -object '{"qom-type":"memory-backend-ram","id":"ram-node1","size":112197632}' \ diff --git a/tests/qemuxml2argvdata/cpu-numa-memshared.x86_64-latest.args b/tests/qemuxml2argvdata/cpu-numa-memshared.x86_64-latest.args index 0c9ec88b8b..1e486b1bbc 100644 --- a/tests/qemuxml2argvdata/cpu-numa-memshared.x86_64-latest.args +++ b/tests/qemuxml2argvdata/cpu-numa-memshared.x86_64-latest.args @@ -15,7 +15,7 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \ -cpu qemu64 \ -m size=219136k \ -overcommit mem-lock=off \ --smp 16,sockets=2,dies=1,cores=4,threads=2 \ +-smp 16,sockets=2,dies=1,clusters=1,cores=4,threads=2 \ -object '{"qom-type":"memory-backend-file","id":"ram-node0","mem-path":"/var/lib/libvirt/qemu/ram/-1-QEMUGuest1/ram-node0","share":true,"size":112197632}' \ -numa node,nodeid=0,cpus=0-7,memdev=ram-node0 \ -object '{"qom-type":"memory-backend-file","id":"ram-node1","mem-path":"/var/lib/libvirt/qemu/ram/-1-QEMUGuest1/ram-node1","share":false,"size":112197632}' \ diff --git a/tests/qemuxml2argvdata/cpu-numa-no-memory-element.x86_64-latest.args 
b/tests/qemuxml2argvdata/cpu-numa-no-memory-element.x86_64-latest.args index 31a61f023e..59372c4ab9 100644 --- a/tests/qemuxml2argvdata/cpu-numa-no-memory-element.x86_64-latest.args +++ b/tests/qemuxml2argvdata/cpu-numa-no-memory-element.x86_64-latest.args @@ -15,7 +15,7 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \ -cpu qemu64 \ -m size=219136k \ -overcommit mem-lock=off \ --smp 16,sockets=2,dies=1,cores=4,threads=2 \ +-smp 16,sockets=2,dies=1,clusters=1,cores=4,threads=2 \ -object '{"qom-type":"memory-backend-ram","id":"ram-node0","size":112197632}' \ -numa node,nodeid=0,cpus=0-7,memdev=ram-node0 \ -object '{"qom-type":"memory-backend-ram","id":"ram-node1","size":112197632}' \ diff --git a/tests/qemuxml2argvdata/cpu-numa1.x86_64-latest.args b/tests/qemuxml2argvdata/cpu-numa1.x86_64-latest.args index 31a61f023e..59372c4ab9 100644 --- a/tests/qemuxml2argvdata/cpu-numa1.x86_64-latest.args +++ b/tests/qemuxml2argvdata/cpu-numa1.x86_64-latest.args @@ -15,7 +15,7 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \ -cpu qemu64 \ -m size=219136k \ -overcommit mem-lock=off \ --smp 16,sockets=2,dies=1,cores=4,threads=2 \ +-smp 16,sockets=2,dies=1,clusters=1,cores=4,threads=2 \ -object '{"qom-type":"memory-backend-ram","id":"ram-node0","size":112197632}' \ -numa node,nodeid=0,cpus=0-7,memdev=ram-node0 \ -object '{"qom-type":"memory-backend-ram","id":"ram-node1","size":112197632}' \ diff --git a/tests/qemuxml2argvdata/cpu-numa2.x86_64-latest.args b/tests/qemuxml2argvdata/cpu-numa2.x86_64-latest.args index 31a61f023e..59372c4ab9 100644 --- a/tests/qemuxml2argvdata/cpu-numa2.x86_64-latest.args +++ b/tests/qemuxml2argvdata/cpu-numa2.x86_64-latest.args @@ -15,7 +15,7 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \ -cpu qemu64 \ -m size=219136k \ -overcommit mem-lock=off \ --smp 16,sockets=2,dies=1,cores=4,threads=2 \ +-smp 16,sockets=2,dies=1,clusters=1,cores=4,threads=2 \ -object '{"qom-type":"memory-backend-ram","id":"ram-node0","size":112197632}' \ -numa node,nodeid=0,cpus=0-7,memdev=ram-node0 \ -object '{"qom-type":"memory-backend-ram","id":"ram-node1","size":112197632}' \ diff --git a/tests/qemuxml2argvdata/cpu-topology1.x86_64-latest.args b/tests/qemuxml2argvdata/cpu-topology1.x86_64-latest.args index 009c08d71a..af1b464104 100644 --- a/tests/qemuxml2argvdata/cpu-topology1.x86_64-latest.args +++ b/tests/qemuxml2argvdata/cpu-topology1.x86_64-latest.args @@ -16,7 +16,7 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \ -m size=219136k \ -object '{"qom-type":"memory-backend-ram","id":"pc.ram","size":224395264}' \ -overcommit mem-lock=off \ --smp 1,maxcpus=6,sockets=3,dies=1,cores=2,threads=1 \ +-smp 1,maxcpus=6,sockets=3,dies=1,clusters=1,cores=2,threads=1 \ -uuid c7a5fdbd-edaf-9455-926a-d65c16db1809 \ -display none \ -no-user-config \ diff --git a/tests/qemuxml2argvdata/cpu-topology2.x86_64-latest.args b/tests/qemuxml2argvdata/cpu-topology2.x86_64-latest.args index 7ba175fa80..8560eb6126 100644 --- a/tests/qemuxml2argvdata/cpu-topology2.x86_64-latest.args +++ b/tests/qemuxml2argvdata/cpu-topology2.x86_64-latest.args @@ -16,7 +16,7 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \ -m size=219136k \ -object '{"qom-type":"memory-backend-ram","id":"pc.ram","size":224395264}' \ -overcommit mem-lock=off \ --smp 6,sockets=1,dies=1,cores=2,threads=3 \ +-smp 6,sockets=1,dies=1,clusters=1,cores=2,threads=3 \ -uuid c7a5fdbd-edaf-9455-926a-d65c16db1809 \ -display none \ -no-user-config \ diff --git 
a/tests/qemuxml2argvdata/cpu-topology3.x86_64-latest.args b/tests/qemuxml2argvdata/cpu-topology3.x86_64-latest.args index c11b4cd307..3878c558b8 100644 --- a/tests/qemuxml2argvdata/cpu-topology3.x86_64-latest.args +++ b/tests/qemuxml2argvdata/cpu-topology3.x86_64-latest.args @@ -16,7 +16,7 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \ -m size=219136k \ -object '{"qom-type":"memory-backend-ram","id":"pc.ram","size":224395264}' \ -overcommit mem-lock=off \ --smp 6,sockets=3,dies=1,cores=2,threads=1 \ +-smp 6,sockets=3,dies=1,clusters=1,cores=2,threads=1 \ -uuid c7a5fdbd-edaf-9455-926a-d65c16db1809 \ -display none \ -no-user-config \ diff --git a/tests/qemuxml2argvdata/cpu-topology4.x86_64-latest.args b/tests/qemuxml2argvdata/cpu-topology4.x86_64-latest.args index d0e31ba2b5..8720038c0d 100644 --- a/tests/qemuxml2argvdata/cpu-topology4.x86_64-latest.args +++ b/tests/qemuxml2argvdata/cpu-topology4.x86_64-latest.args @@ -16,7 +16,7 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \ -m size=219136k \ -object '{"qom-type":"memory-backend-ram","id":"pc.ram","size":224395264}' \ -overcommit mem-lock=off \ --smp 1,maxcpus=6,sockets=1,dies=3,cores=2,threads=1 \ +-smp 1,maxcpus=6,sockets=1,dies=3,clusters=1,cores=2,threads=1 \ -uuid c7a5fdbd-edaf-9455-926a-d65c16db1809 \ -display none \ -no-user-config \ diff --git a/tests/qemuxml2argvdata/fd-memory-no-numa-topology.x86_64-latest.args b/tests/qemuxml2argvdata/fd-memory-no-numa-topology.x86_64-latest.args index 58b3c7b544..1bd75a85a6 100644 --- a/tests/qemuxml2argvdata/fd-memory-no-numa-topology.x86_64-latest.args +++ b/tests/qemuxml2argvdata/fd-memory-no-numa-topology.x86_64-latest.args @@ -16,7 +16,7 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-instance-00000092/.config \ -m size=14680064k \ -object '{"qom-type":"memory-backend-file","id":"pc.ram","mem-path":"/var/lib/libvirt/qemu/ram/-1-instance-00000092/pc.ram","share":true,"x-use-canonical-path-for-ramblock-id":false,"prealloc":true,"size":15032385536}' \ -overcommit mem-lock=off \ --smp 8,sockets=8,dies=1,cores=1,threads=1 \ +-smp 8,sockets=8,dies=1,clusters=1,cores=1,threads=1 \ -uuid 126f2720-6f8e-45ab-a886-ec9277079a67 \ -display none \ -no-user-config \ diff --git a/tests/qemuxml2argvdata/fd-memory-numa-topology.x86_64-latest.args b/tests/qemuxml2argvdata/fd-memory-numa-topology.x86_64-latest.args index 21f9a16540..17ef506431 100644 --- a/tests/qemuxml2argvdata/fd-memory-numa-topology.x86_64-latest.args +++ b/tests/qemuxml2argvdata/fd-memory-numa-topology.x86_64-latest.args @@ -15,7 +15,7 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-instance-00000092/.config \ -cpu qemu64 \ -m size=14680064k \ -overcommit mem-lock=off \ --smp 8,sockets=1,dies=1,cores=8,threads=1 \ +-smp 8,sockets=1,dies=1,clusters=1,cores=8,threads=1 \ -object '{"qom-type":"memory-backend-file","id":"ram-node0","mem-path":"/var/lib/libvirt/qemu/ram/-1-instance-00000092/ram-node0","share":true,"prealloc":true,"size":15032385536}' \ -numa node,nodeid=0,cpus=0-7,memdev=ram-node0 \ -uuid 126f2720-6f8e-45ab-a886-ec9277079a67 \ diff --git a/tests/qemuxml2argvdata/fd-memory-numa-topology2.x86_64-latest.args b/tests/qemuxml2argvdata/fd-memory-numa-topology2.x86_64-latest.args index 3bf16f9caf..b247231b85 100644 --- a/tests/qemuxml2argvdata/fd-memory-numa-topology2.x86_64-latest.args +++ b/tests/qemuxml2argvdata/fd-memory-numa-topology2.x86_64-latest.args @@ -15,7 +15,7 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-instance-00000092/.config \ -cpu qemu64 \ -m 
size=29360128k \ -overcommit mem-lock=off \ --smp 20,sockets=1,dies=1,cores=20,threads=1 \ +-smp 20,sockets=1,dies=1,clusters=1,cores=20,threads=1 \ -object '{"qom-type":"memory-backend-file","id":"ram-node0","mem-path":"/var/lib/libvirt/qemu/ram/-1-instance-00000092/ram-node0","share":false,"prealloc":true,"size":15032385536}' \ -numa node,nodeid=0,cpus=0-7,cpus=16-19,memdev=ram-node0 \ -object '{"qom-type":"memory-backend-file","id":"ram-node1","mem-path":"/var/lib/libvirt/qemu/ram/-1-instance-00000092/ram-node1","share":true,"prealloc":true,"size":15032385536}' \ diff --git a/tests/qemuxml2argvdata/fd-memory-numa-topology3.x86_64-latest.args b/tests/qemuxml2argvdata/fd-memory-numa-topology3.x86_64-latest.args index 3153e22d56..9e94209499 100644 --- a/tests/qemuxml2argvdata/fd-memory-numa-topology3.x86_64-latest.args +++ b/tests/qemuxml2argvdata/fd-memory-numa-topology3.x86_64-latest.args @@ -15,7 +15,7 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-instance-00000092/.config \ -cpu qemu64 \ -m size=44040192k \ -overcommit mem-lock=off \ --smp 32,sockets=1,dies=1,cores=32,threads=1 \ +-smp 32,sockets=1,dies=1,clusters=1,cores=32,threads=1 \ -object '{"qom-type":"memory-backend-file","id":"ram-node0","mem-path":"/var/lib/libvirt/qemu/ram/-1-instance-00000092/ram-node0","share":true,"prealloc":true,"size":15032385536}' \ -numa node,nodeid=0,cpus=0-1,cpus=6-31,memdev=ram-node0 \ -object '{"qom-type":"memory-backend-file","id":"ram-node1","mem-path":"/var/lib/libvirt/qemu/ram/-1-instance-00000092/ram-node1","share":true,"prealloc":true,"size":15032385536}' \ diff --git a/tests/qemuxml2argvdata/hugepages-nvdimm.x86_64-latest.args b/tests/qemuxml2argvdata/hugepages-nvdimm.x86_64-latest.args index fa376accb5..f30db0ad09 100644 --- a/tests/qemuxml2argvdata/hugepages-nvdimm.x86_64-latest.args +++ b/tests/qemuxml2argvdata/hugepages-nvdimm.x86_64-latest.args @@ -15,7 +15,7 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \ -cpu qemu64 \ -m size=1048576k,slots=16,maxmem=1099511627776k \ -overcommit mem-lock=off \ --smp 2,sockets=2,dies=1,cores=1,threads=1 \ +-smp 2,sockets=2,dies=1,clusters=1,cores=1,threads=1 \ -object '{"qom-type":"memory-backend-file","id":"ram-node0","mem-path":"/dev/hugepages2M/libvirt/qemu/-1-QEMUGuest1","share":true,"prealloc":true,"size":1073741824}' \ -numa node,nodeid=0,cpus=0-1,memdev=ram-node0 \ -uuid c7a5fdbd-edaf-9455-926a-d65c16db1809 \ diff --git a/tests/qemuxml2argvdata/memfd-memory-default-hugepage.x86_64-latest.args b/tests/qemuxml2argvdata/memfd-memory-default-hugepage.x86_64-latest.args index 55969eb2fd..f850d7be60 100644 --- a/tests/qemuxml2argvdata/memfd-memory-default-hugepage.x86_64-latest.args +++ b/tests/qemuxml2argvdata/memfd-memory-default-hugepage.x86_64-latest.args @@ -15,7 +15,7 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-instance-00000092/.config \ -cpu qemu64 \ -m size=14680064k \ -overcommit mem-lock=off \ --smp 8,sockets=1,dies=1,cores=8,threads=1 \ +-smp 8,sockets=1,dies=1,clusters=1,cores=8,threads=1 \ -object '{"qom-type":"thread-context","id":"tc-ram-node0","node-affinity":[3]}' \ -object '{"qom-type":"memory-backend-memfd","id":"ram-node0","hugetlb":true,"hugetlbsize":2097152,"share":true,"prealloc":true,"size":15032385536,"host-nodes":[3],"policy":"preferred","prealloc-context":"tc-ram-node0"}' \ -numa node,nodeid=0,cpus=0-7,memdev=ram-node0 \ diff --git a/tests/qemuxml2argvdata/memfd-memory-numa.x86_64-latest.args b/tests/qemuxml2argvdata/memfd-memory-numa.x86_64-latest.args index 1ef2d69fcb..dbe2b82a56 100644 
--- a/tests/qemuxml2argvdata/memfd-memory-numa.x86_64-latest.args +++ b/tests/qemuxml2argvdata/memfd-memory-numa.x86_64-latest.args @@ -15,7 +15,7 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-instance-00000092/.config \ -cpu qemu64 \ -m size=14680064k,slots=16,maxmem=1099511627776k \ -overcommit mem-lock=off \ --smp 8,sockets=1,dies=1,cores=8,threads=1 \ +-smp 8,sockets=1,dies=1,clusters=1,cores=8,threads=1 \ -object '{"qom-type":"thread-context","id":"tc-ram-node0","node-affinity":[3]}' \ -object '{"qom-type":"memory-backend-memfd","id":"ram-node0","hugetlb":true,"hugetlbsize":2097152,"share":true,"prealloc":true,"prealloc-threads":8,"size":15032385536,"host-nodes":[3],"policy":"preferred","prealloc-context":"tc-ram-node0"}' \ -numa node,nodeid=0,cpus=0-7,memdev=ram-node0 \ diff --git a/tests/qemuxml2argvdata/memory-hotplug-dimm-addr.x86_64-latest.args b/tests/qemuxml2argvdata/memory-hotplug-dimm-addr.x86_64-latest.args index 6ae1fd1b98..c15fe191de 100644 --- a/tests/qemuxml2argvdata/memory-hotplug-dimm-addr.x86_64-latest.args +++ b/tests/qemuxml2argvdata/memory-hotplug-dimm-addr.x86_64-latest.args @@ -15,7 +15,7 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \ -cpu qemu64 \ -m size=219136k,slots=16,maxmem=1099511627776k \ -overcommit mem-lock=off \ --smp 2,sockets=2,dies=1,cores=1,threads=1 \ +-smp 2,sockets=2,dies=1,clusters=1,cores=1,threads=1 \ -object '{"qom-type":"memory-backend-ram","id":"ram-node0","size":224395264}' \ -numa node,nodeid=0,cpus=0-1,memdev=ram-node0 \ -uuid c7a5fdbd-edaf-9455-926a-d65c16db1809 \ diff --git a/tests/qemuxml2argvdata/memory-hotplug-dimm.x86_64-latest.args b/tests/qemuxml2argvdata/memory-hotplug-dimm.x86_64-latest.args index 71817da309..a729930db2 100644 --- a/tests/qemuxml2argvdata/memory-hotplug-dimm.x86_64-latest.args +++ b/tests/qemuxml2argvdata/memory-hotplug-dimm.x86_64-latest.args @@ -15,7 +15,7 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \ -cpu qemu64 \ -m size=219136k,slots=16,maxmem=1099511627776k \ -overcommit mem-lock=off \ --smp 2,sockets=2,dies=1,cores=1,threads=1 \ +-smp 2,sockets=2,dies=1,clusters=1,cores=1,threads=1 \ -object '{"qom-type":"memory-backend-ram","id":"ram-node0","size":224395264}' \ -numa node,nodeid=0,cpus=0-1,memdev=ram-node0 \ -uuid c7a5fdbd-edaf-9455-926a-d65c16db1809 \ diff --git a/tests/qemuxml2argvdata/memory-hotplug-multiple.x86_64-latest.args b/tests/qemuxml2argvdata/memory-hotplug-multiple.x86_64-latest.args index ad1dad01ac..f1f2f93a11 100644 --- a/tests/qemuxml2argvdata/memory-hotplug-multiple.x86_64-latest.args +++ b/tests/qemuxml2argvdata/memory-hotplug-multiple.x86_64-latest.args @@ -15,7 +15,7 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \ -cpu qemu64 \ -m size=2095104k,slots=2,maxmem=1099511627776k \ -overcommit mem-lock=off \ --smp 2,sockets=2,dies=1,cores=1,threads=1 \ +-smp 2,sockets=2,dies=1,clusters=1,cores=1,threads=1 \ -object '{"qom-type":"memory-backend-ram","id":"ram-node0","size":2145386496}' \ -numa node,nodeid=0,cpus=0-1,memdev=ram-node0 \ -uuid c7a5fdbd-edaf-9455-926a-d65c16db1809 \ diff --git a/tests/qemuxml2argvdata/memory-hotplug-nvdimm-access.x86_64-latest.args b/tests/qemuxml2argvdata/memory-hotplug-nvdimm-access.x86_64-latest.args index f09ae22927..d53732b39e 100644 --- a/tests/qemuxml2argvdata/memory-hotplug-nvdimm-access.x86_64-latest.args +++ b/tests/qemuxml2argvdata/memory-hotplug-nvdimm-access.x86_64-latest.args @@ -15,7 +15,7 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \ -cpu 
qemu64 \ -m size=219136k,slots=16,maxmem=1099511627776k \ -overcommit mem-lock=off \ --smp 2,sockets=2,dies=1,cores=1,threads=1 \ +-smp 2,sockets=2,dies=1,clusters=1,cores=1,threads=1 \ -object '{"qom-type":"memory-backend-ram","id":"ram-node0","size":224395264}' \ -numa node,nodeid=0,cpus=0-1,memdev=ram-node0 \ -uuid c7a5fdbd-edaf-9455-926a-d65c16db1809 \ diff --git a/tests/qemuxml2argvdata/memory-hotplug-nvdimm-align.x86_64-latest.args b/tests/qemuxml2argvdata/memory-hotplug-nvdimm-align.x86_64-latest.args index 6cfe4b8263..cba467d9d3 100644 --- a/tests/qemuxml2argvdata/memory-hotplug-nvdimm-align.x86_64-latest.args +++ b/tests/qemuxml2argvdata/memory-hotplug-nvdimm-align.x86_64-latest.args @@ -15,7 +15,7 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \ -cpu qemu64 \ -m size=219136k,slots=16,maxmem=1099511627776k \ -overcommit mem-lock=off \ --smp 2,sockets=2,dies=1,cores=1,threads=1 \ +-smp 2,sockets=2,dies=1,clusters=1,cores=1,threads=1 \ -object '{"qom-type":"memory-backend-ram","id":"ram-node0","size":224395264}' \ -numa node,nodeid=0,cpus=0-1,memdev=ram-node0 \ -uuid c7a5fdbd-edaf-9455-926a-d65c16db1809 \ diff --git a/tests/qemuxml2argvdata/memory-hotplug-nvdimm-label.x86_64-latest.args b/tests/qemuxml2argvdata/memory-hotplug-nvdimm-label.x86_64-latest.args index 4041c15b2b..2ad23a0224 100644 --- a/tests/qemuxml2argvdata/memory-hotplug-nvdimm-label.x86_64-latest.args +++ b/tests/qemuxml2argvdata/memory-hotplug-nvdimm-label.x86_64-latest.args @@ -15,7 +15,7 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \ -cpu qemu64 \ -m size=219136k,slots=16,maxmem=1099511627776k \ -overcommit mem-lock=off \ --smp 2,sockets=2,dies=1,cores=1,threads=1 \ +-smp 2,sockets=2,dies=1,clusters=1,cores=1,threads=1 \ -object '{"qom-type":"memory-backend-ram","id":"ram-node0","size":224395264}' \ -numa node,nodeid=0,cpus=0-1,memdev=ram-node0 \ -uuid c7a5fdbd-edaf-9455-926a-d65c16db1809 \ diff --git a/tests/qemuxml2argvdata/memory-hotplug-nvdimm-pmem.x86_64-latest.args b/tests/qemuxml2argvdata/memory-hotplug-nvdimm-pmem.x86_64-latest.args index 3547e96c00..ac5ca187b1 100644 --- a/tests/qemuxml2argvdata/memory-hotplug-nvdimm-pmem.x86_64-latest.args +++ b/tests/qemuxml2argvdata/memory-hotplug-nvdimm-pmem.x86_64-latest.args @@ -15,7 +15,7 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \ -cpu qemu64 \ -m size=219136k,slots=16,maxmem=1099511627776k \ -overcommit mem-lock=off \ --smp 2,sockets=2,dies=1,cores=1,threads=1 \ +-smp 2,sockets=2,dies=1,clusters=1,cores=1,threads=1 \ -object '{"qom-type":"memory-backend-ram","id":"ram-node0","size":224395264}' \ -numa node,nodeid=0,cpus=0-1,memdev=ram-node0 \ -uuid c7a5fdbd-edaf-9455-926a-d65c16db1809 \ diff --git a/tests/qemuxml2argvdata/memory-hotplug-nvdimm-ppc64-abi-update.ppc64-latest.args b/tests/qemuxml2argvdata/memory-hotplug-nvdimm-ppc64-abi-update.ppc64-latest.args index 9b57518fca..c2c1623d9f 100644 --- a/tests/qemuxml2argvdata/memory-hotplug-nvdimm-ppc64-abi-update.ppc64-latest.args +++ b/tests/qemuxml2argvdata/memory-hotplug-nvdimm-ppc64-abi-update.ppc64-latest.args @@ -15,7 +15,7 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \ -cpu POWER9 \ -m size=1048576k,slots=16,maxmem=1099511627776k \ -overcommit mem-lock=off \ --smp 2,sockets=2,dies=1,cores=1,threads=1 \ +-smp 2,sockets=2,dies=1,clusters=1,cores=1,threads=1 \ -object '{"qom-type":"memory-backend-ram","id":"ram-node0","size":1073741824}' \ -numa node,nodeid=0,cpus=0-1,memdev=ram-node0 \ -uuid 
c7a5fdbd-edaf-9455-926a-d65c16db1809 \ diff --git a/tests/qemuxml2argvdata/memory-hotplug-nvdimm-ppc64.ppc64-latest.args b/tests/qemuxml2argvdata/memory-hotplug-nvdimm-ppc64.ppc64-latest.args index 9b57518fca..c2c1623d9f 100644 --- a/tests/qemuxml2argvdata/memory-hotplug-nvdimm-ppc64.ppc64-latest.args +++ b/tests/qemuxml2argvdata/memory-hotplug-nvdimm-ppc64.ppc64-latest.args @@ -15,7 +15,7 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \ -cpu POWER9 \ -m size=1048576k,slots=16,maxmem=1099511627776k \ -overcommit mem-lock=off \ --smp 2,sockets=2,dies=1,cores=1,threads=1 \ +-smp 2,sockets=2,dies=1,clusters=1,cores=1,threads=1 \ -object '{"qom-type":"memory-backend-ram","id":"ram-node0","size":1073741824}' \ -numa node,nodeid=0,cpus=0-1,memdev=ram-node0 \ -uuid c7a5fdbd-edaf-9455-926a-d65c16db1809 \ diff --git a/tests/qemuxml2argvdata/memory-hotplug-nvdimm-readonly.x86_64-latest.args b/tests/qemuxml2argvdata/memory-hotplug-nvdimm-readonly.x86_64-latest.args index 17bacfb2f6..8af4673841 100644 --- a/tests/qemuxml2argvdata/memory-hotplug-nvdimm-readonly.x86_64-latest.args +++ b/tests/qemuxml2argvdata/memory-hotplug-nvdimm-readonly.x86_64-latest.args @@ -15,7 +15,7 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \ -cpu qemu64 \ -m size=219136k,slots=16,maxmem=1099511627776k \ -overcommit mem-lock=off \ --smp 2,sockets=2,dies=1,cores=1,threads=1 \ +-smp 2,sockets=2,dies=1,clusters=1,cores=1,threads=1 \ -object '{"qom-type":"memory-backend-ram","id":"ram-node0","size":224395264}' \ -numa node,nodeid=0,cpus=0-1,memdev=ram-node0 \ -uuid c7a5fdbd-edaf-9455-926a-d65c16db1809 \ diff --git a/tests/qemuxml2argvdata/memory-hotplug-nvdimm.x86_64-latest.args b/tests/qemuxml2argvdata/memory-hotplug-nvdimm.x86_64-latest.args index 1321e5556e..6531caa908 100644 --- a/tests/qemuxml2argvdata/memory-hotplug-nvdimm.x86_64-latest.args +++ b/tests/qemuxml2argvdata/memory-hotplug-nvdimm.x86_64-latest.args @@ -15,7 +15,7 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \ -cpu qemu64 \ -m size=1048576k,slots=16,maxmem=1099511627776k \ -overcommit mem-lock=off \ --smp 2,sockets=2,dies=1,cores=1,threads=1 \ +-smp 2,sockets=2,dies=1,clusters=1,cores=1,threads=1 \ -object '{"qom-type":"memory-backend-ram","id":"ram-node0","size":1073741824}' \ -numa node,nodeid=0,cpus=0-1,memdev=ram-node0 \ -uuid c7a5fdbd-edaf-9455-926a-d65c16db1809 \ diff --git a/tests/qemuxml2argvdata/memory-hotplug-virtio-mem.x86_64-latest.args b/tests/qemuxml2argvdata/memory-hotplug-virtio-mem.x86_64-latest.args index 607ce9b0e8..dbe96ae21d 100644 --- a/tests/qemuxml2argvdata/memory-hotplug-virtio-mem.x86_64-latest.args +++ b/tests/qemuxml2argvdata/memory-hotplug-virtio-mem.x86_64-latest.args @@ -15,7 +15,7 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \ -cpu qemu64 \ -m size=2095104k,maxmem=1099511627776k \ -overcommit mem-lock=off \ --smp 2,sockets=2,dies=1,cores=1,threads=1 \ +-smp 2,sockets=2,dies=1,clusters=1,cores=1,threads=1 \ -object '{"qom-type":"memory-backend-ram","id":"ram-node0","size":2145386496}' \ -numa node,nodeid=0,cpus=0-1,memdev=ram-node0 \ -uuid c7a5fdbd-edaf-9455-926a-d65c16db1809 \ diff --git a/tests/qemuxml2argvdata/memory-hotplug-virtio-pmem.x86_64-latest.args b/tests/qemuxml2argvdata/memory-hotplug-virtio-pmem.x86_64-latest.args index 9bbde420a9..df7b7f80a9 100644 --- a/tests/qemuxml2argvdata/memory-hotplug-virtio-pmem.x86_64-latest.args +++ b/tests/qemuxml2argvdata/memory-hotplug-virtio-pmem.x86_64-latest.args @@ -15,7 +15,7 @@ 
XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \ -cpu qemu64 \ -m size=2095104k,maxmem=1099511627776k \ -overcommit mem-lock=off \ --smp 2,sockets=2,dies=1,cores=1,threads=1 \ +-smp 2,sockets=2,dies=1,clusters=1,cores=1,threads=1 \ -object '{"qom-type":"memory-backend-ram","id":"ram-node0","size":2145386496}' \ -numa node,nodeid=0,cpus=0-1,memdev=ram-node0 \ -uuid c7a5fdbd-edaf-9455-926a-d65c16db1809 \ diff --git a/tests/qemuxml2argvdata/memory-hotplug.x86_64-latest.args b/tests/qemuxml2argvdata/memory-hotplug.x86_64-latest.args index 53f0fbc68f..d04d9d73e9 100644 --- a/tests/qemuxml2argvdata/memory-hotplug.x86_64-latest.args +++ b/tests/qemuxml2argvdata/memory-hotplug.x86_64-latest.args @@ -15,7 +15,7 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \ -cpu qemu64 \ -m size=219136k,slots=16,maxmem=1099511627776k \ -overcommit mem-lock=off \ --smp 2,sockets=2,dies=1,cores=1,threads=1 \ +-smp 2,sockets=2,dies=1,clusters=1,cores=1,threads=1 \ -object '{"qom-type":"memory-backend-ram","id":"ram-node0","size":224395264}' \ -numa node,nodeid=0,cpus=0-1,memdev=ram-node0 \ -uuid c7a5fdbd-edaf-9455-926a-d65c16db1809 \ diff --git a/tests/qemuxml2argvdata/numad-auto-memory-vcpu-cpuset.x86_64-latest.args b/tests/qemuxml2argvdata/numad-auto-memory-vcpu-cpuset.x86_64-latest.args index d4238f3d9e..138c8255f7 100644 --- a/tests/qemuxml2argvdata/numad-auto-memory-vcpu-cpuset.x86_64-latest.args +++ b/tests/qemuxml2argvdata/numad-auto-memory-vcpu-cpuset.x86_64-latest.args @@ -16,7 +16,7 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \ -m size=219136k \ -object '{"qom-type":"memory-backend-ram","id":"pc.ram","size":224395264,"host-nodes":[0,1,2,3],"policy":"interleave"}' \ -overcommit mem-lock=off \ --smp 2,sockets=2,dies=1,cores=1,threads=1 \ +-smp 2,sockets=2,dies=1,clusters=1,cores=1,threads=1 \ -uuid c7a5fdbd-edaf-9455-926a-d65c16db1809 \ -display none \ -no-user-config \ diff --git a/tests/qemuxml2argvdata/numad-auto-memory-vcpu-no-cpuset-and-placement.x86_64-latest.args b/tests/qemuxml2argvdata/numad-auto-memory-vcpu-no-cpuset-and-placement.x86_64-latest.args index d4238f3d9e..138c8255f7 100644 --- a/tests/qemuxml2argvdata/numad-auto-memory-vcpu-no-cpuset-and-placement.x86_64-latest.args +++ b/tests/qemuxml2argvdata/numad-auto-memory-vcpu-no-cpuset-and-placement.x86_64-latest.args @@ -16,7 +16,7 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \ -m size=219136k \ -object '{"qom-type":"memory-backend-ram","id":"pc.ram","size":224395264,"host-nodes":[0,1,2,3],"policy":"interleave"}' \ -overcommit mem-lock=off \ --smp 2,sockets=2,dies=1,cores=1,threads=1 \ +-smp 2,sockets=2,dies=1,clusters=1,cores=1,threads=1 \ -uuid c7a5fdbd-edaf-9455-926a-d65c16db1809 \ -display none \ -no-user-config \ diff --git a/tests/qemuxml2argvdata/numad-auto-vcpu-no-numatune.x86_64-latest.args b/tests/qemuxml2argvdata/numad-auto-vcpu-no-numatune.x86_64-latest.args index 7022d2cc00..f13f04c9d4 100644 --- a/tests/qemuxml2argvdata/numad-auto-vcpu-no-numatune.x86_64-latest.args +++ b/tests/qemuxml2argvdata/numad-auto-vcpu-no-numatune.x86_64-latest.args @@ -16,7 +16,7 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \ -m size=219136k \ -object '{"qom-type":"memory-backend-ram","id":"pc.ram","size":224395264,"host-nodes":[0,1,2,3],"policy":"bind"}' \ -overcommit mem-lock=off \ --smp 2,sockets=2,dies=1,cores=1,threads=1 \ +-smp 2,sockets=2,dies=1,clusters=1,cores=1,threads=1 \ -uuid c7a5fdbd-edaf-9455-926a-d65c16db1809 \ -display 
none \ -no-user-config \ diff --git a/tests/qemuxml2argvdata/numad-auto-vcpu-static-numatune.x86_64-latest.args b/tests/qemuxml2argvdata/numad-auto-vcpu-static-numatune.x86_64-latest.args index 9ddfb286b5..f1c49619db 100644 --- a/tests/qemuxml2argvdata/numad-auto-vcpu-static-numatune.x86_64-latest.args +++ b/tests/qemuxml2argvdata/numad-auto-vcpu-static-numatune.x86_64-latest.args @@ -16,7 +16,7 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \ -m size=219136k \ -object '{"qom-type":"memory-backend-ram","id":"pc.ram","size":224395264,"host-nodes":[0],"policy":"interleave"}' \ -overcommit mem-lock=off \ --smp 2,sockets=2,dies=1,cores=1,threads=1 \ +-smp 2,sockets=2,dies=1,clusters=1,cores=1,threads=1 \ -uuid c7a5fdbd-edaf-9455-926a-d65c16db1809 \ -display none \ -no-user-config \ diff --git a/tests/qemuxml2argvdata/numad-static-memory-auto-vcpu.x86_64-latest.args b/tests/qemuxml2argvdata/numad-static-memory-auto-vcpu.x86_64-latest.args index d4238f3d9e..138c8255f7 100644 --- a/tests/qemuxml2argvdata/numad-static-memory-auto-vcpu.x86_64-latest.args +++ b/tests/qemuxml2argvdata/numad-static-memory-auto-vcpu.x86_64-latest.args @@ -16,7 +16,7 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \ -m size=219136k \ -object '{"qom-type":"memory-backend-ram","id":"pc.ram","size":224395264,"host-nodes":[0,1,2,3],"policy":"interleave"}' \ -overcommit mem-lock=off \ --smp 2,sockets=2,dies=1,cores=1,threads=1 \ +-smp 2,sockets=2,dies=1,clusters=1,cores=1,threads=1 \ -uuid c7a5fdbd-edaf-9455-926a-d65c16db1809 \ -display none \ -no-user-config \ diff --git a/tests/qemuxml2argvdata/numad-static-vcpu-no-numatune.x86_64-latest.args b/tests/qemuxml2argvdata/numad-static-vcpu-no-numatune.x86_64-latest.args index ffbccb8408..76ca5c4bea 100644 --- a/tests/qemuxml2argvdata/numad-static-vcpu-no-numatune.x86_64-latest.args +++ b/tests/qemuxml2argvdata/numad-static-vcpu-no-numatune.x86_64-latest.args @@ -16,7 +16,7 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \ -m size=219136k \ -object '{"qom-type":"memory-backend-ram","id":"pc.ram","size":224395264}' \ -overcommit mem-lock=off \ --smp 2,sockets=2,dies=1,cores=1,threads=1 \ +-smp 2,sockets=2,dies=1,clusters=1,cores=1,threads=1 \ -uuid c7a5fdbd-edaf-9455-926a-d65c16db1809 \ -display none \ -no-user-config \ diff --git a/tests/qemuxml2argvdata/numad.x86_64-latest.args b/tests/qemuxml2argvdata/numad.x86_64-latest.args index d4238f3d9e..138c8255f7 100644 --- a/tests/qemuxml2argvdata/numad.x86_64-latest.args +++ b/tests/qemuxml2argvdata/numad.x86_64-latest.args @@ -16,7 +16,7 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \ -m size=219136k \ -object '{"qom-type":"memory-backend-ram","id":"pc.ram","size":224395264,"host-nodes":[0,1,2,3],"policy":"interleave"}' \ -overcommit mem-lock=off \ --smp 2,sockets=2,dies=1,cores=1,threads=1 \ +-smp 2,sockets=2,dies=1,clusters=1,cores=1,threads=1 \ -uuid c7a5fdbd-edaf-9455-926a-d65c16db1809 \ -display none \ -no-user-config \ diff --git a/tests/qemuxml2argvdata/numatune-auto-nodeset-invalid.x86_64-latest.args b/tests/qemuxml2argvdata/numatune-auto-nodeset-invalid.x86_64-latest.args index 57a2b893f1..e35471d91b 100644 --- a/tests/qemuxml2argvdata/numatune-auto-nodeset-invalid.x86_64-latest.args +++ b/tests/qemuxml2argvdata/numatune-auto-nodeset-invalid.x86_64-latest.args @@ -16,7 +16,7 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \ -m size=219136k \ -object 
'{"qom-type":"memory-backend-ram","id":"pc.ram","size":224395264,"host-nodes":[0,1,2,3],"policy":"preferred"}' \ -overcommit mem-lock=off \ --smp 2,sockets=2,dies=1,cores=1,threads=1 \ +-smp 2,sockets=2,dies=1,clusters=1,cores=1,threads=1 \ -uuid c7a5fdbd-edaf-9455-926a-d65c16db1809 \ -display none \ -no-user-config \ diff --git a/tests/qemuxml2argvdata/pci-expander-bus.x86_64-latest.args b/tests/qemuxml2argvdata/pci-expander-bus.x86_64-latest.args index bf553a8e32..d3960731be 100644 --- a/tests/qemuxml2argvdata/pci-expander-bus.x86_64-latest.args +++ b/tests/qemuxml2argvdata/pci-expander-bus.x86_64-latest.args @@ -15,7 +15,7 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-expander-test/.config \ -cpu qemu64 \ -m size=219136k \ -overcommit mem-lock=off \ --smp 16,sockets=2,dies=1,cores=4,threads=2 \ +-smp 16,sockets=2,dies=1,clusters=1,cores=4,threads=2 \ -object '{"qom-type":"memory-backend-ram","id":"ram-node0","size":112197632}' \ -numa node,nodeid=0,cpus=0-7,memdev=ram-node0 \ -object '{"qom-type":"memory-backend-ram","id":"ram-node1","size":112197632}' \ diff --git a/tests/qemuxml2argvdata/pcie-expander-bus.x86_64-latest.args b/tests/qemuxml2argvdata/pcie-expander-bus.x86_64-latest.args index 3fb86c29c2..b179fadc27 100644 --- a/tests/qemuxml2argvdata/pcie-expander-bus.x86_64-latest.args +++ b/tests/qemuxml2argvdata/pcie-expander-bus.x86_64-latest.args @@ -15,7 +15,7 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-pcie-expander-bus-te/.config \ -cpu qemu64 \ -m size=219136k \ -overcommit mem-lock=off \ --smp 16,sockets=2,dies=1,cores=4,threads=2 \ +-smp 16,sockets=2,dies=1,clusters=1,cores=4,threads=2 \ -object '{"qom-type":"memory-backend-ram","id":"ram-node0","size":112197632}' \ -numa node,nodeid=0,cpus=0-7,memdev=ram-node0 \ -object '{"qom-type":"memory-backend-ram","id":"ram-node1","size":112197632}' \ diff --git a/tests/qemuxml2argvdata/pseries-phb-numa-node.ppc64-latest.args b/tests/qemuxml2argvdata/pseries-phb-numa-node.ppc64-latest.args index 7ffcb1d8c5..942540a296 100644 --- a/tests/qemuxml2argvdata/pseries-phb-numa-node.ppc64-latest.args +++ b/tests/qemuxml2argvdata/pseries-phb-numa-node.ppc64-latest.args @@ -15,7 +15,7 @@ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \ -cpu POWER9 \ -m size=2097152k \ -overcommit mem-lock=off \ --smp 8,sockets=2,dies=1,cores=1,threads=4 \ +-smp 8,sockets=2,dies=1,clusters=1,cores=1,threads=4 \ -object '{"qom-type":"memory-backend-ram","id":"ram-node0","size":1073741824,"host-nodes":[1],"policy":"bind"}' \ -numa node,nodeid=0,cpus=0-3,memdev=ram-node0 \ -object '{"qom-type":"memory-backend-ram","id":"ram-node1","size":1073741824,"host-nodes":[2],"policy":"bind"}' \ -- 2.43.0

Signed-off-by: Andrea Bolognani <abologna@redhat.com> --- .../cpu-topology5.aarch64-latest.args | 31 +++++++++++++++++++ tests/qemuxml2argvdata/cpu-topology5.xml | 17 ++++++++++ tests/qemuxml2argvtest.c | 1 + .../cpu-topology5.aarch64-latest.xml | 29 +++++++++++++++++ tests/qemuxml2xmltest.c | 2 ++ 5 files changed, 80 insertions(+) create mode 100644 tests/qemuxml2argvdata/cpu-topology5.aarch64-latest.args create mode 100644 tests/qemuxml2argvdata/cpu-topology5.xml create mode 100644 tests/qemuxml2xmloutdata/cpu-topology5.aarch64-latest.xml diff --git a/tests/qemuxml2argvdata/cpu-topology5.aarch64-latest.args b/tests/qemuxml2argvdata/cpu-topology5.aarch64-latest.args new file mode 100644 index 0000000000..d835e1c0fa --- /dev/null +++ b/tests/qemuxml2argvdata/cpu-topology5.aarch64-latest.args @@ -0,0 +1,31 @@ +LC_ALL=C \ +PATH=/bin \ +HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1 \ +USER=test \ +LOGNAME=test \ +XDG_DATA_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.local/share \ +XDG_CACHE_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.cache \ +XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain--1-QEMUGuest1/.config \ +/usr/bin/qemu-system-aarch64 \ +-name guest=QEMUGuest1,debug-threads=on \ +-S \ +-object '{"qom-type":"secret","id":"masterKey0","format":"raw","file":"/var/lib/libvirt/qemu/domain--1-QEMUGuest1/master-key.aes"}' \ +-machine virt,usb=off,gic-version=2,dump-guest-core=off,memory-backend=mach-virt.ram,acpi=off \ +-accel tcg \ +-cpu cortex-a15 \ +-m size=219136k \ +-object '{"qom-type":"memory-backend-ram","id":"mach-virt.ram","size":224395264}' \ +-overcommit mem-lock=off \ +-smp 1,maxcpus=8,sockets=1,dies=1,clusters=2,cores=2,threads=2 \ +-uuid c7a5fdbd-edaf-9455-926a-d65c16db1809 \ +-display none \ +-no-user-config \ +-nodefaults \ +-chardev socket,id=charmonitor,fd=1729,server=on,wait=off \ +-mon chardev=charmonitor,id=monitor,mode=control \ +-rtc base=utc \ +-no-shutdown \ +-boot strict=on \ +-audiodev '{"id":"audio1","driver":"none"}' \ +-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \ +-msg timestamp=on diff --git a/tests/qemuxml2argvdata/cpu-topology5.xml b/tests/qemuxml2argvdata/cpu-topology5.xml new file mode 100644 index 0000000000..f78f0b6b54 --- /dev/null +++ b/tests/qemuxml2argvdata/cpu-topology5.xml @@ -0,0 +1,17 @@ +<domain type='qemu'> + <name>QEMUGuest1</name> + <uuid>c7a5fdbd-edaf-9455-926a-d65c16db1809</uuid> + <memory unit='KiB'>219100</memory> + <vcpu placement='static' current='1'>8</vcpu> + <os> + <type arch='aarch64' machine='virt'>hvm</type> + </os> + <cpu> + <topology sockets='1' dies='1' clusters='2' cores='2' threads='2'/> + </cpu> + <devices> + <emulator>/usr/bin/qemu-system-aarch64</emulator> + <controller type='usb' model='none'/> + <memballoon model='none'/> + </devices> +</domain> diff --git a/tests/qemuxml2argvtest.c b/tests/qemuxml2argvtest.c index cb78465fc2..1be138bb0f 100644 --- a/tests/qemuxml2argvtest.c +++ b/tests/qemuxml2argvtest.c @@ -1813,6 +1813,7 @@ mymain(void) DO_TEST_CAPS_LATEST("cpu-topology2"); DO_TEST_CAPS_LATEST("cpu-topology3"); DO_TEST_CAPS_LATEST("cpu-topology4"); + DO_TEST_CAPS_ARCH_LATEST("cpu-topology5", "aarch64"); DO_TEST_CAPS_ARCH_LATEST_FULL("cpu-minimum1", "x86_64", ARG_CAPS_HOST_CPU_MODEL, QEMU_CPU_DEF_HASWELL); DO_TEST_CAPS_ARCH_LATEST_FULL("cpu-minimum2", "x86_64", ARG_CAPS_HOST_CPU_MODEL, QEMU_CPU_DEF_HASWELL); diff --git a/tests/qemuxml2xmloutdata/cpu-topology5.aarch64-latest.xml b/tests/qemuxml2xmloutdata/cpu-topology5.aarch64-latest.xml new file mode 100644 index 
0000000000..2f5645baab --- /dev/null +++ b/tests/qemuxml2xmloutdata/cpu-topology5.aarch64-latest.xml @@ -0,0 +1,29 @@ +<domain type='qemu'> + <name>QEMUGuest1</name> + <uuid>c7a5fdbd-edaf-9455-926a-d65c16db1809</uuid> + <memory unit='KiB'>219100</memory> + <currentMemory unit='KiB'>219100</currentMemory> + <vcpu placement='static' current='1'>8</vcpu> + <os> + <type arch='aarch64' machine='virt'>hvm</type> + <boot dev='hd'/> + </os> + <features> + <gic version='2'/> + </features> + <cpu mode='custom' match='exact' check='none'> + <model fallback='forbid'>cortex-a15</model> + <topology sockets='1' dies='1' clusters='2' cores='2' threads='2'/> + </cpu> + <clock offset='utc'/> + <on_poweroff>destroy</on_poweroff> + <on_reboot>restart</on_reboot> + <on_crash>destroy</on_crash> + <devices> + <emulator>/usr/bin/qemu-system-aarch64</emulator> + <controller type='usb' index='0' model='none'/> + <controller type='pci' index='0' model='pcie-root'/> + <audio id='1' type='none'/> + <memballoon model='none'/> + </devices> +</domain> diff --git a/tests/qemuxml2xmltest.c b/tests/qemuxml2xmltest.c index 4e39763dc7..15cb6bd692 100644 --- a/tests/qemuxml2xmltest.c +++ b/tests/qemuxml2xmltest.c @@ -674,6 +674,8 @@ mymain(void) DO_TEST_CAPS_LATEST("chardev-label"); + DO_TEST_CAPS_ARCH_LATEST("cpu-topology5", "aarch64"); + DO_TEST_CAPS_LATEST("cpu-numa1"); DO_TEST_CAPS_LATEST("cpu-numa2"); DO_TEST_CAPS_LATEST("cpu-numa-no-memory-element"); -- 2.43.0

On Thu, Jan 11, 2024 at 15:26:38 +0100, Andrea Bolognani wrote:
Signed-off-by: Andrea Bolognani <abologna@redhat.com> --- .../cpu-topology5.aarch64-latest.args | 31 +++++++++++++++++++ tests/qemuxml2argvdata/cpu-topology5.xml | 17 ++++++++++ tests/qemuxml2argvtest.c | 1 + .../cpu-topology5.aarch64-latest.xml | 29 +++++++++++++++++ tests/qemuxml2xmltest.c | 2 ++ 5 files changed, 80 insertions(+) create mode 100644 tests/qemuxml2argvdata/cpu-topology5.aarch64-latest.args create mode 100644 tests/qemuxml2argvdata/cpu-topology5.xml create mode 100644 tests/qemuxml2xmloutdata/cpu-topology5.aarch64-latest.xml
Reviewed-by: Peter Krempa <pkrempa@redhat.com>

This makes it so libvirt can obtain accurate information about guest CPUs from QEMU, and should make it possible to correctly perform operations such as CPU hotplug. Of course this is mostly moot at the moment: only aarch64 can use CPU clusters, and CPU hotplug is not yet implemented on that architecture. Signed-off-by: Andrea Bolognani <abologna@redhat.com> Reviewed-by: Peter Krempa <pkrempa@redhat.com> --- src/qemu/qemu_domain.c | 3 ++- src/qemu/qemu_monitor.c | 2 ++ src/qemu/qemu_monitor.h | 2 ++ src/qemu/qemu_monitor_json.c | 5 +++++ 4 files changed, 11 insertions(+), 1 deletion(-) diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c index 3a00fb689e..e2a1bf2c13 100644 --- a/src/qemu/qemu_domain.c +++ b/src/qemu/qemu_domain.c @@ -9900,11 +9900,12 @@ qemuDomainRefreshVcpuInfo(virDomainObj *vm, if (validTIDs) VIR_DEBUG("vCPU[%zu] PID %llu is valid " - "(node=%d socket=%d die=%d core=%d thread=%d)", + "(node=%d socket=%d die=%d cluster=%d core=%d thread=%d)", i, (unsigned long long)info[i].tid, info[i].node_id, info[i].socket_id, info[i].die_id, + info[i].cluster_id, info[i].core_id, info[i].thread_id); } diff --git a/src/qemu/qemu_monitor.c b/src/qemu/qemu_monitor.c index dfad4ee1ea..a1773d86d4 100644 --- a/src/qemu/qemu_monitor.c +++ b/src/qemu/qemu_monitor.c @@ -1501,6 +1501,7 @@ qemuMonitorCPUInfoClear(qemuMonitorCPUInfo *cpus, cpus[i].qemu_id = -1; cpus[i].socket_id = -1; cpus[i].die_id = -1; + cpus[i].cluster_id = -1; cpus[i].core_id = -1; cpus[i].thread_id = -1; cpus[i].node_id = -1; @@ -1658,6 +1659,7 @@ qemuMonitorGetCPUInfoHotplug(struct qemuMonitorQueryHotpluggableCpusEntry *hotpl !vcpus[mainvcpu].online; vcpus[mainvcpu].socket_id = hotplugvcpus[i].socket_id; vcpus[mainvcpu].die_id = hotplugvcpus[i].die_id; + vcpus[mainvcpu].cluster_id = hotplugvcpus[i].cluster_id; vcpus[mainvcpu].core_id = hotplugvcpus[i].core_id; vcpus[mainvcpu].thread_id = hotplugvcpus[i].thread_id; vcpus[mainvcpu].node_id = hotplugvcpus[i].node_id; diff --git a/src/qemu/qemu_monitor.h b/src/qemu/qemu_monitor.h index c4af9b407d..981c609e9f 100644 --- a/src/qemu/qemu_monitor.h +++ b/src/qemu/qemu_monitor.h @@ -590,6 +590,7 @@ struct qemuMonitorQueryHotpluggableCpusEntry { int node_id; int socket_id; int die_id; + int cluster_id; int core_id; int thread_id; @@ -613,6 +614,7 @@ struct _qemuMonitorCPUInfo { * all entries are -1 */ int socket_id; int die_id; + int cluster_id; int core_id; int thread_id; int node_id; diff --git a/src/qemu/qemu_monitor_json.c b/src/qemu/qemu_monitor_json.c index 9cb0f3d1d8..e114b6bfb1 100644 --- a/src/qemu/qemu_monitor_json.c +++ b/src/qemu/qemu_monitor_json.c @@ -7579,12 +7579,14 @@ qemuMonitorJSONProcessHotpluggableCpusReply(virJSONValue *vcpu, entry->node_id = -1; entry->socket_id = -1; entry->die_id = -1; + entry->cluster_id = -1; entry->core_id = -1; entry->thread_id = -1; ignore_value(virJSONValueObjectGetNumberInt(props, "node-id", &entry->node_id)); ignore_value(virJSONValueObjectGetNumberInt(props, "socket-id", &entry->socket_id)); ignore_value(virJSONValueObjectGetNumberInt(props, "die-id", &entry->die_id)); + ignore_value(virJSONValueObjectGetNumberInt(props, "cluster-id", &entry->cluster_id)); ignore_value(virJSONValueObjectGetNumberInt(props, "core-id", &entry->core_id)); ignore_value(virJSONValueObjectGetNumberInt(props, "thread-id", &entry->thread_id)); @@ -7622,6 +7624,9 @@ qemuMonitorQueryHotpluggableCpusEntrySort(const void *p1, if (a->die_id != b->die_id) return a->die_id - b->die_id; + if (a->cluster_id != b->cluster_id) + return a->cluster_id - 
b->cluster_id; + if (a->core_id != b->core_id) return a->core_id - b->core_id; -- 2.43.0

Since aarch64 doesn't support CPU hotplug at the moment, we have to get a bit creative. While the 'query-cpus-fast' output is taken directly from a VM configured as <vcpu current='7'>16</vcpu> <cpu mode='host-passthrough'> <topology sockets='2' dies='1' clusters='2' cores='2' threads='2'/> </cpu> the 'query-hotpluggable-cpus' output is constructed by hand starting from the former and using the 'x86-dies' test data as a model. Signed-off-by: Andrea Bolognani <abologna@redhat.com> Reviewed-by: Peter Krempa <pkrempa@redhat.com> --- ...torjson-cpuinfo-aarch64-clusters-cpus.json | 88 +++++++++ ...json-cpuinfo-aarch64-clusters-hotplug.json | 171 ++++++++++++++++++ ...umonitorjson-cpuinfo-aarch64-clusters.data | 108 +++++++++++ tests/qemumonitorjsontest.c | 9 +- 4 files changed, 375 insertions(+), 1 deletion(-) create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-aarch64-clusters-cpus.json create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-aarch64-clusters-hotplug.json create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-aarch64-clusters.data diff --git a/tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-aarch64-clusters-cpus.json b/tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-aarch64-clusters-cpus.json new file mode 100644 index 0000000000..817f65d109 --- /dev/null +++ b/tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-aarch64-clusters-cpus.json @@ -0,0 +1,88 @@ +{ + "return": [ + { + "thread-id": 284700, + "props": { + "core-id": 0, + "thread-id": 0, + "socket-id": 0, + "cluster-id": 0 + }, + "qom-path": "/machine/unattached/device[0]", + "cpu-index": 0, + "target": "aarch64" + }, + { + "thread-id": 284701, + "props": { + "core-id": 0, + "thread-id": 1, + "socket-id": 0, + "cluster-id": 0 + }, + "qom-path": "/machine/unattached/device[1]", + "cpu-index": 1, + "target": "aarch64" + }, + { + "thread-id": 284702, + "props": { + "core-id": 1, + "thread-id": 0, + "socket-id": 0, + "cluster-id": 0 + }, + "qom-path": "/machine/unattached/device[2]", + "cpu-index": 2, + "target": "aarch64" + }, + { + "thread-id": 284703, + "props": { + "core-id": 1, + "thread-id": 1, + "socket-id": 0, + "cluster-id": 0 + }, + "qom-path": "/machine/unattached/device[3]", + "cpu-index": 3, + "target": "aarch64" + }, + { + "thread-id": 284704, + "props": { + "core-id": 0, + "thread-id": 0, + "socket-id": 0, + "cluster-id": 1 + }, + "qom-path": "/machine/unattached/device[4]", + "cpu-index": 4, + "target": "aarch64" + }, + { + "thread-id": 284705, + "props": { + "core-id": 0, + "thread-id": 1, + "socket-id": 0, + "cluster-id": 1 + }, + "qom-path": "/machine/unattached/device[5]", + "cpu-index": 5, + "target": "aarch64" + }, + { + "thread-id": 284706, + "props": { + "core-id": 1, + "thread-id": 0, + "socket-id": 0, + "cluster-id": 1 + }, + "qom-path": "/machine/unattached/device[6]", + "cpu-index": 6, + "target": "aarch64" + } + ] +} diff --git a/tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-aarch64-clusters-hotplug.json b/tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-aarch64-clusters-hotplug.json new file mode 100644 index 0000000000..7ae30bf111 --- /dev/null +++ b/tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-aarch64-clusters-hotplug.json @@ -0,0 +1,171 @@ +{ + "return": [ + { + "props": { + "core-id": 1, + "thread-id": 1, + "socket-id": 1, + "cluster-id": 1 + }, + "vcpus-count": 1, + "type": "host-arm-cpu" + }, + { + "props": { + "core-id": 1, + "thread-id": 0, + "socket-id": 1, + "cluster-id": 1 + }, + "vcpus-count": 1, + "type": "host-arm-cpu" + }, + { 
+ "props": { + "core-id": 0, + "thread-id": 1, + "socket-id": 1, + "cluster-id": 1 + }, + "vcpus-count": 1, + "type": "host-arm-cpu" + }, + { + "props": { + "core-id": 0, + "thread-id": 0, + "socket-id": 1, + "cluster-id": 1 + }, + "vcpus-count": 1, + "type": "host-arm-cpu" + }, + { + "props": { + "core-id": 1, + "thread-id": 1, + "socket-id": 1, + "cluster-id": 0 + }, + "vcpus-count": 1, + "type": "host-arm-cpu" + }, + { + "props": { + "core-id": 1, + "thread-id": 0, + "socket-id": 1, + "cluster-id": 0 + }, + "vcpus-count": 1, + "type": "host-arm-cpu" + }, + { + "props": { + "core-id": 0, + "thread-id": 1, + "socket-id": 1, + "cluster-id": 0 + }, + "vcpus-count": 1, + "type": "host-arm-cpu" + }, + { + "props": { + "core-id": 0, + "thread-id": 0, + "socket-id": 1, + "cluster-id": 0 + }, + "vcpus-count": 1, + "type": "host-arm-cpu" + }, + { + "props": { + "core-id": 1, + "thread-id": 1, + "socket-id": 0, + "cluster-id": 1 + }, + "vcpus-count": 1, + "type": "host-arm-cpu" + }, + { + "props": { + "core-id": 1, + "thread-id": 0, + "socket-id": 0, + "cluster-id": 1 + }, + "vcpus-count": 1, + "qom-path": "/machine/unattached/device[6]", + "type": "host-arm-cpu" + }, + { + "props": { + "core-id": 0, + "thread-id": 1, + "socket-id": 0, + "cluster-id": 1 + }, + "vcpus-count": 1, + "qom-path": "/machine/unattached/device[5]", + "type": "host-arm-cpu" + }, + { + "props": { + "core-id": 0, + "thread-id": 0, + "socket-id": 0, + "cluster-id": 1 + }, + "vcpus-count": 1, + "qom-path": "/machine/unattached/device[4]", + "type": "host-arm-cpu" + }, + { + "props": { + "core-id": 1, + "thread-id": 1, + "socket-id": 0, + "cluster-id": 0 + }, + "vcpus-count": 1, + "qom-path": "/machine/unattached/device[3]", + "type": "host-arm-cpu" + }, + { + "props": { + "core-id": 1, + "thread-id": 0, + "socket-id": 0, + "cluster-id": 0 + }, + "vcpus-count": 1, + "qom-path": "/machine/unattached/device[2]", + "type": "host-arm-cpu" + }, + { + "props": { + "core-id": 0, + "thread-id": 1, + "socket-id": 0, + "cluster-id": 0 + }, + "vcpus-count": 1, + "qom-path": "/machine/unattached/device[1]", + "type": "host-arm-cpu" + }, + { + "props": { + "core-id": 0, + "thread-id": 0, + "socket-id": 0, + "cluster-id": 0 + }, + "vcpus-count": 1, + "qom-path": "/machine/unattached/device[0]", + "type": "host-arm-cpu" + } + ] +} diff --git a/tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-aarch64-clusters.data b/tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-aarch64-clusters.data new file mode 100644 index 0000000000..87e927e7a8 --- /dev/null +++ b/tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-aarch64-clusters.data @@ -0,0 +1,108 @@ +[vcpu libvirt-id='0'] + online=yes + hotpluggable=no + thread-id='284700' + enable-id='1' + query-cpus-id='0' + type='host-arm-cpu' + qom_path='/machine/unattached/device[0]' + topology: socket='0' cluster_id='0' core='0' thread='0' vcpus='1' +[vcpu libvirt-id='1'] + online=yes + hotpluggable=no + thread-id='284701' + enable-id='2' + query-cpus-id='1' + type='host-arm-cpu' + qom_path='/machine/unattached/device[1]' + topology: socket='0' cluster_id='0' core='0' thread='1' vcpus='1' +[vcpu libvirt-id='2'] + online=yes + hotpluggable=no + thread-id='284702' + enable-id='3' + query-cpus-id='2' + type='host-arm-cpu' + qom_path='/machine/unattached/device[2]' + topology: socket='0' cluster_id='0' core='1' thread='0' vcpus='1' +[vcpu libvirt-id='3'] + online=yes + hotpluggable=no + thread-id='284703' + enable-id='4' + query-cpus-id='3' + type='host-arm-cpu' + qom_path='/machine/unattached/device[3]' + 
topology: socket='0' cluster_id='0' core='1' thread='1' vcpus='1' +[vcpu libvirt-id='4'] + online=yes + hotpluggable=no + thread-id='284704' + enable-id='5' + query-cpus-id='4' + type='host-arm-cpu' + qom_path='/machine/unattached/device[4]' + topology: socket='0' cluster_id='1' core='0' thread='0' vcpus='1' +[vcpu libvirt-id='5'] + online=yes + hotpluggable=no + thread-id='284705' + enable-id='6' + query-cpus-id='5' + type='host-arm-cpu' + qom_path='/machine/unattached/device[5]' + topology: socket='0' cluster_id='1' core='0' thread='1' vcpus='1' +[vcpu libvirt-id='6'] + online=yes + hotpluggable=no + thread-id='284706' + enable-id='7' + query-cpus-id='6' + type='host-arm-cpu' + qom_path='/machine/unattached/device[6]' + topology: socket='0' cluster_id='1' core='1' thread='0' vcpus='1' +[vcpu libvirt-id='7'] + online=no + hotpluggable=yes + type='host-arm-cpu' + topology: socket='0' cluster_id='1' core='1' thread='1' vcpus='1' +[vcpu libvirt-id='8'] + online=no + hotpluggable=yes + type='host-arm-cpu' + topology: socket='1' cluster_id='0' core='0' thread='0' vcpus='1' +[vcpu libvirt-id='9'] + online=no + hotpluggable=yes + type='host-arm-cpu' + topology: socket='1' cluster_id='0' core='0' thread='1' vcpus='1' +[vcpu libvirt-id='10'] + online=no + hotpluggable=yes + type='host-arm-cpu' + topology: socket='1' cluster_id='0' core='1' thread='0' vcpus='1' +[vcpu libvirt-id='11'] + online=no + hotpluggable=yes + type='host-arm-cpu' + topology: socket='1' cluster_id='0' core='1' thread='1' vcpus='1' +[vcpu libvirt-id='12'] + online=no + hotpluggable=yes + type='host-arm-cpu' + topology: socket='1' cluster_id='1' core='0' thread='0' vcpus='1' +[vcpu libvirt-id='13'] + online=no + hotpluggable=yes + type='host-arm-cpu' + topology: socket='1' cluster_id='1' core='0' thread='1' vcpus='1' +[vcpu libvirt-id='14'] + online=no + hotpluggable=yes + type='host-arm-cpu' + topology: socket='1' cluster_id='1' core='1' thread='0' vcpus='1' +[vcpu libvirt-id='15'] + online=no + hotpluggable=yes + type='host-arm-cpu' + topology: socket='1' cluster_id='1' core='1' thread='1' vcpus='1' diff --git a/tests/qemumonitorjsontest.c b/tests/qemumonitorjsontest.c index d9ebb429e7..45cee23798 100644 --- a/tests/qemumonitorjsontest.c +++ b/tests/qemumonitorjsontest.c @@ -2262,13 +2262,16 @@ testQemuMonitorCPUInfoFormat(qemuMonitorCPUInfo *vcpus, if (vcpu->qom_path) virBufferAsprintf(&buf, "qom_path='%s'\n", vcpu->qom_path); - if (vcpu->socket_id != -1 || vcpu->core_id != -1 || + if (vcpu->socket_id != -1 || vcpu->die_id != -1 || + vcpu->cluster_id != -1 || vcpu->core_id != -1 || vcpu->thread_id != -1 || vcpu->vcpus != 0) { virBufferAddLit(&buf, "topology:"); if (vcpu->socket_id != -1) virBufferAsprintf(&buf, " socket='%d'", vcpu->socket_id); if (vcpu->die_id != -1) virBufferAsprintf(&buf, " die='%d'", vcpu->die_id); + if (vcpu->cluster_id != -1) + virBufferAsprintf(&buf, " cluster_id='%d'", vcpu->cluster_id); if (vcpu->core_id != -1) virBufferAsprintf(&buf, " core='%d'", vcpu->core_id); if (vcpu->thread_id != -1) @@ -2919,6 +2922,10 @@ mymain(void) DO_TEST_CPU_INFO("ppc64-hotplug-4", 24); DO_TEST_CPU_INFO("ppc64-no-threads", 16); + /* aarch64 doesn't support CPU hotplug yet, so the data used in + * this test is partially synthetic */ + DO_TEST_CPU_INFO("aarch64-clusters", 16); + DO_TEST_CPU_INFO("s390", 2); -- 2.43.0

On the guest configuration side, mention that support for the "dies" attribute was introduced in libvirt 6.1.0 and clarify that the ability to use non-default values is subject to architecture and machine limitations. On the host capabilities side, the documentation was pretty much entirely missing. It's still far from perfect, but anything is better than having no information at all. Signed-off-by: Andrea Bolognani <abologna@redhat.com> --- docs/formatcaps.rst | 48 +++++++++++++++++++++++++++++++++++++------ docs/formatdomain.rst | 16 ++++++++++------ 2 files changed, 52 insertions(+), 12 deletions(-) diff --git a/docs/formatcaps.rst b/docs/formatcaps.rst index 3cccf70882..60f8b7caca 100644 --- a/docs/formatcaps.rst +++ b/docs/formatcaps.rst @@ -37,6 +37,12 @@ The ``<host/>`` element consists of the following child elements: The host UUID. ``cpu`` The host CPU architecture and features. + + Note that, while this element contains a ``topology`` sub-element, + the information contained therein is fairly high-level and likely + not very useful when it comes to optimizing guest vCPU placement. + Look into the ``topology`` *element*, described below, for more + detailed information. ``power_management`` whether host is capable of memory suspend, disk hibernation, or hybrid suspend. @@ -44,12 +50,42 @@ The ``<host/>`` element consists of the following child elements: This element exposes information on the hypervisor's migration capabilities, like live migration, supported URI transports, and so on. ``topology`` - This element embodies the host internal topology. Management applications may - want to learn this information when orchestrating new guests - e.g. due to - reduce inter-NUMA node transfers. Note that the ``sockets`` value reported - here is per-NUMA-node; this is in contrast to the value given in domain - definitions, which is interpreted as a total number of sockets for the - domain. + This element describes the host CPU topology in detail. + + Management applications may want to use this information when defining new + guests: for example, in order to ensure that all vCPUs are scheduled on + CPUs that are in the same NUMA node or even CPU core. + + The ``cells`` sub-element contains a list of NUMA nodes, each one + represented by a single ``cell`` element. Within each ``cell``, a ``cpus`` + sub-element contains a list of logical CPUs, each one represented by a + single ``cpu`` element. In both cases, the ``num`` attribute of the + top-level element contains the number of children. + + Each ``cpu`` element contains the following attributes: + + ``id`` + CPU identifier. Can be used to refer to it in the context of + `CPU tuning <formatdomain.html#cpu-tuning>`__. + + ``socket_id`` + Identifier for the physical package the CPU is in. + + ``die_id`` + Identifier for the die the CPU is in. + + Note that not all architectures support CPU dies: if the current + architecture doesn't, the value will be 0 for all CPUs. + + ``core_id`` + Identifier for the core the CPU is in. + + ``siblings`` + List of CPUs that are in the same core. + + The list will include the current CPU, plus all other CPUs that have the + same values for ``socket_id``, ``die_id`` and ``core_id``. + ``secmodel`` To find out default security labels for different security models you need to parse this element.
In contrast with the former elements, this is repeated diff --git a/docs/formatdomain.rst b/docs/formatdomain.rst index 298ad46a45..73deaa5cb3 100644 --- a/docs/formatdomain.rst +++ b/docs/formatdomain.rst @@ -1578,14 +1578,18 @@ In case no restrictions need to be put on CPU model and its features, a simpler supported vendors can be found in ``cpu_map/*_vendors.xml``. ``topology`` The ``topology`` element specifies requested topology of virtual CPU provided - to the guest. Four attributes, ``sockets``, ``dies``, ``cores``, and - ``threads``, accept non-zero positive integer values. They refer to the - total number of CPU sockets, number of dies per socket, number of cores per - die, and number of threads per core, respectively. The ``dies`` attribute is - optional and will default to 1 if omitted, while the other attributes are all - mandatory. Hypervisors may require that the maximum number of vCPUs specified + to the guest. + Its attributes ``sockets``, ``dies`` (:since:`Since 6.1.0`), ``cores``, + and ``threads`` accept non-zero positive integer values. + They refer to the total number of CPU sockets, number of dies per socket, + number of cores per die, and number of threads per core, respectively. + The ``dies`` attribute is optional and will default to 1 if omitted, while + the other attributes are all mandatory. + Hypervisors may require that the maximum number of vCPUs specified by the ``cpus`` element equals to the number of vcpus resulting from the topology. + Moreover, not all architectures and machine types support specifying a value + other than 1 for all attributes. ``feature`` The ``cpu`` element can contain zero or more ``feature`` elements used to fine-tune features provided by the selected CPU model. The list of known -- 2.43.0
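As a rough sketch of the structure described by the new formatcaps.rst text (the host below is hypothetical and the values purely illustrative: two NUMA nodes, each with two single-threaded cores), the relevant part of the capabilities XML looks like this:

    <topology>
      <cells num='2'>        <!-- number of NUMA nodes -->
        <cell id='0'>
          <cpus num='2'>     <!-- number of logical CPUs in this node -->
            <cpu id='0' socket_id='0' die_id='0' core_id='0' siblings='0'/>
            <cpu id='1' socket_id='0' die_id='0' core_id='1' siblings='1'/>
          </cpus>
        </cell>
        <cell id='1'>
          <cpus num='2'>
            <cpu id='2' socket_id='1' die_id='0' core_id='0' siblings='2'/>
            <cpu id='3' socket_id='1' die_id='0' core_id='1' siblings='3'/>
          </cpus>
        </cell>
      </cells>
    </topology>

Other per-cell information (memory, distances and so on) is omitted here for brevity; with single-threaded cores, each CPU's ``siblings`` list contains only the CPU itself.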

On Thu, Jan 11, 2024 at 15:26:41 +0100, Andrea Bolognani wrote:
On the guest configuration side, mention that support for the "dies" attribute was introduced in libvirt 6.1.0 and clarify that the ability to use non-default values is subject to architecture and machine limitations.
On the host capabilities side, the documentation was pretty much entirely missing. It's still far from perfect, but anything is better than having no information at all.
Signed-off-by: Andrea Bolognani <abologna@redhat.com> --- docs/formatcaps.rst | 48 +++++++++++++++++++++++++++++++++++++------ docs/formatdomain.rst | 16 +++++++++------ 2 files changed, 52 insertions(+), 12 deletions(-)
diff --git a/docs/formatcaps.rst b/docs/formatcaps.rst index 3cccf70882..60f8b7caca 100644 --- a/docs/formatcaps.rst +++ b/docs/formatcaps.rst
[....]
@@ -44,12 +50,42 @@ The ``<host/>`` element consists of the following child elements: This element exposes information on the hypervisor's migration capabilities, like live migration, supported URI transports, and so on. ``topology`` - This element embodies the host internal topology. Management applications may - want to learn this information when orchestrating new guests - e.g. due to - reduce inter-NUMA node transfers. Note that the ``sockets`` value reported - here is per-NUMA-node; this is in contrast to the value given in domain - definitions, which is interpreted as a total number of sockets for the - domain. + This element describes the host CPU topology in detail. + + Management applications may want to use this information when defining new + guests: for example, in order to ensure that all vCPUs are scheduled on + CPUs that are in the same NUMA node or even CPU core. + + The ``cells`` sub-element contains a list of NUMA nodes, each one + represented by a single ``cell`` element. Within each ``cell``, a ``cpus`` + sub-element contains a list of logical CPUs, each one represented by a + single ``cpu`` element. In both cases, the ``num`` attribute of the + top-level element contains the number of children. + + Each ``cpu`` element contains the following attributes: + + ``id`` + CPU identifier. Can be used to refer to it in the context of + `CPU tuning <formatdomain.html#cpu-tuning>`__. + + ``socket_id`` + Identifier for the physical package the CPU is in. + + ``die_id`` + Identifier for the die the CPU is in. + + Note that not all architectures support CPU dies: if the current + architecture doesn't, the value will be 0 for all CPUs. + + ``core_id`` + Identifier for the core the CPU is in. + + ``siblings`` + List of CPUs that are in the same core. + + The list will include the current CPU, plus all other CPUs that have the + same values for ``socket_id``, ``die_id`` and ``core_id``.
IIRC the bit about 'core_id' is not true, at least for some older AMD cpus which had two fixed point units (each having its own core id) sharing an FPU and some other less-used modules. That was a long time ago though, but the distinction was that the lowest level cache was shared at this level (again IIRC). See commit 828820e2d371205d6a6061301165d58a1a92e611; the 'bulldozer' example.
``secmodel`` To find out default security labels for different security models you need to parse this element. In contrast with the former elements, this is repeated diff --git a/docs/formatdomain.rst b/docs/formatdomain.rst index 298ad46a45..73deaa5cb3 100644 --- a/docs/formatdomain.rst +++ b/docs/formatdomain.rst @@ -1578,14 +1578,18 @@ In case no restrictions need to be put on CPU model and its features, a simpler supported vendors can be found in ``cpu_map/*_vendors.xml``. ``topology`` The ``topology`` element specifies requested topology of virtual CPU provided - to the guest. Four attributes, ``sockets``, ``dies``, ``cores``, and - ``threads``, accept non-zero positive integer values. They refer to the - total number of CPU sockets, number of dies per socket, number of cores per - die, and number of threads per core, respectively. The ``dies`` attribute is - optional and will default to 1 if omitted, while the other attributes are all - mandatory. Hypervisors may require that the maximum number of vCPUs specified + to the guest. + Its attributes ``sockets``, ``dies`` (:since:`Since 6.1.0`), ``cores``, + and ``threads`` accept non-zero positive integer values. + They refer to the total number of CPU sockets, number of dies per socket, + number of cores per die, and number of threads per core, respectively. + The ``dies`` attribute is optional and will default to 1 if omitted, while + the other attributes are all mandatory. + Hypervisors may require that the maximum number of vCPUs specified by the ``cpus`` element equals to the number of vcpus resulting from the topology. + Moreover, not all architectures and machine types support specifying a value + other than 1 for all attributes. ``feature`` The ``cpu`` element can contain zero or more ``feature`` elements used to
I'm not sure what to do with the siblings thing, but the rest: Reviewed-by: Peter Krempa <pkrempa@redhat.com>

On Fri, Jan 12, 2024 at 05:18:31PM +0100, Peter Krempa wrote:
On Thu, Jan 11, 2024 at 15:26:41 +0100, Andrea Bolognani wrote:
+ Each ``cpu`` element contains the following attributes: + + ``core_id`` + Identifier for the core the CPU is in. + + ``siblings`` + List of CPUs that are in the same core. + + The list will include the current CPU, plus all other CPUs that have the + same values for ``socket_id``, ``die_id`` and ``core_id``.
IIRC the bit about 'core_id' is not true, at least for some older AMD cpus which had two fixed point units (each having its own core id) sharing an FPU and some other less-used modules.
That was a long time ago though, but the distinction was that the lowest level cache was shared at this level (again IIRC)
See commit 828820e2d371205d6a6061301165d58a1a92e611 ; the 'bulldozer' example.
I've heard the AMD Bulldozer being mentioned as a curiosity several times over the years. My understanding is that the architecture has now been completely abandoned, and that the most recent hardware that employs it was manufactured roughly a decade ago. The kernel documentation[1] for the files that we parse to produce those values is the following: What: /sys/devices/system/cpu/cpuX/topology/core_id Description: the CPU core ID of cpuX. Typically it is the hardware platform's identifier (rather than the kernel's). The actual value is architecture and platform dependent. Values: integer What: /sys/devices/system/cpu/cpuX/topology/core_cpus_list Description: human-readable list of CPUs within the same core. The format is like 0-3, 8-11, 14,17. (deprecated name: "thread_siblings_list"). Values: decimal list. So I think that, for all cases that are actually relevant today, the explanations I'm introducing are accurate. If you have reservations about them, please let me know how you'd like to change them and we can certainly find a compromise :) [1] https://www.kernel.org/doc/Documentation/ABI/stable/sysfs-devices-system-cpu -- Andrea Bolognani / Red Hat / Virtualization
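To make the mapping concrete: on a hypothetical host where cpu0 and cpu4 are the two hyperthreads of one core, i.e. core_id reads 0 for both and core_cpus_list reads 0,4 for both, the corresponding capabilities entries would look roughly like

    <!-- illustrative values for the hypothetical host described above -->
    <cpu id='0' socket_id='0' die_id='0' core_id='0' siblings='0,4'/>
    <cpu id='4' socket_id='0' die_id='0' core_id='0' siblings='0,4'/>

which is exactly the relationship between ``core_id`` and ``siblings`` that the proposed wording describes.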

On Fri, Jan 12, 2024 at 08:58:52AM -0800, Andrea Bolognani wrote:
On Fri, Jan 12, 2024 at 05:18:31PM +0100, Peter Krempa wrote:
On Thu, Jan 11, 2024 at 15:26:41 +0100, Andrea Bolognani wrote:
+ Each ``cpu`` element contains the following attributes: + + ``core_id`` + Identifier for the core the CPU is in. + + ``siblings`` + List of CPUs that are in the same core. + + The list will include the current CPU, plus all other CPUs that have the + same values for ``socket_id``, ``die_id`` and ``core_id``.
IIRC the bit about 'core_id' is not true, at least for some older AMD cpus which had two fixed point units (each having its own core id) sharing an FPU and some other less-used modules.
That was a long time ago though, but the distinction was that the lowest level cache was shared at this level (again IIRC)
See commit 828820e2d371205d6a6061301165d58a1a92e611 ; the 'bulldozer' example.
I've heard the AMD Bulldozer being mentioned as a curiosity several times over the years. My understanding is that the architecture has now been completely abandoned, and that the most recent hardware that employs it was manufactured roughly a decade ago.
The kernel documentation[1] for the files that we parse to produce those values is the following:
What: /sys/devices/system/cpu/cpuX/topology/core_id Description: the CPU core ID of cpuX. Typically it is the hardware platform's identifier (rather than the kernel's). The actual value is architecture and platform dependent. Values: integer
What: /sys/devices/system/cpu/cpuX/topology/core_cpus_list Description: human-readable list of CPUs within the same core. The format is like 0-3, 8-11, 14,17. (deprecated name: "thread_siblings_list"). Values: decimal list.
So I think that, for all cases that are actually relevant today, the explanations I'm introducing are accurate. If you have reservations about them, please let me know how you'd like to change them and we can certainly find a compromise :)
So, can I push this as is with your R-b, or do you want me to make further tweaks?
[1] https://www.kernel.org/doc/Documentation/ABI/stable/sysfs-devices-system-cpu -- Andrea Bolognani / Red Hat / Virtualization

On Mon, Jan 15, 2024 at 05:58:18 -0800, Andrea Bolognani wrote:
On Fri, Jan 12, 2024 at 08:58:52AM -0800, Andrea Bolognani wrote:
On Fri, Jan 12, 2024 at 05:18:31PM +0100, Peter Krempa wrote:
On Thu, Jan 11, 2024 at 15:26:41 +0100, Andrea Bolognani wrote:
+ Each ``cpu`` element contains the following attributes: + + ``core_id`` + Identifier for the core the CPU is in. + + ``siblings`` + List of CPUs that are in the same core. + + The list will include the current CPU, plus all other CPUs that have the + same values for ``socket_id``, ``die_id`` and ``core_id``.
IIRC the bit about 'core_id' is not true, at least for some older AMD cpus which had two fixed point units (each having its own core id) sharing an FPU and some other less-used modules.
That was a long time ago though, but the distinction was that the lowest level cache was shared at this level (again IIRC)
See commit 828820e2d371205d6a6061301165d58a1a92e611 ; the 'bulldozer' example.
I've heard the AMD Bulldozer being mentioned as a curiosity several times over the years. My understanding is that the architecture has now been completely abandoned, and that the most recent hardware that employs it was manufactured roughly a decade ago.
The kernel documentation[1] for the files that we parse to produce those values is the following:
What: /sys/devices/system/cpu/cpuX/topology/core_id Description: the CPU core ID of cpuX. Typically it is the hardware platform's identifier (rather than the kernel's). The actual value is architecture and platform dependent. Values: integer
What: /sys/devices/system/cpu/cpuX/topology/core_cpus_list Description: human-readable list of CPUs within the same core. The format is like 0-3, 8-11, 14,17. (deprecated name: "thread_siblings_list"). Values: decimal list.
So I think that, for all cases that are actually relevant today, the explanations I'm introducing are accurate. If you have reservations about them, please let me know how you'd like to change them and we can certainly find a compromise :)
So, can I push this as is with your R-b, or do you want me to make further tweaks?
Ah, sorry, I forgot to respond. I think this explanation makes sense, and since the HW I've mentioned is now obsolete, as well as the kernel marking the fields as deprecated: Reviewed-by: Peter Krempa <pkrempa@redhat.com>

Signed-off-by: Andrea Bolognani <abologna@redhat.com> --- docs/formatcaps.rst | 24 +++++++++++++++--------- docs/formatdomain.rst | 18 ++++++++++-------- 2 files changed, 25 insertions(+), 17 deletions(-) diff --git a/docs/formatcaps.rst b/docs/formatcaps.rst index 60f8b7caca..d16cf182dc 100644 --- a/docs/formatcaps.rst +++ b/docs/formatcaps.rst @@ -77,6 +77,12 @@ The ``<host/>`` element consists of the following child elements: Note that not all architectures support CPU dies: if the current architecture doesn't, the value will be 0 for all CPUs. + ``cluster_id`` + Identifier for the cluster the CPU is in. + + Note that not all architectures support CPU clusters: if the current + architecture doesn't, the value will be 0 for all CPUs. + ``core_id`` Identifier for the core the CPU is in. @@ -196,7 +202,7 @@ capabilities enabled in the chip and BIOS you will see: <microcode version='236'/> <signature family='6' model='142' stepping='12'/> <counter name='tsc' frequency='2303997000' scaling='no'/> - <topology sockets='1' dies='1' cores='4' threads='2'/> + <topology sockets='1' dies='1' clusters='1' cores='4' threads='2'/> <maxphysaddr mode='emulate' bits='39'/> <feature name='ds'/> <feature name='acpi'/> @@ -261,14 +267,14 @@ capabilities enabled in the chip and BIOS you will see: <sibling id='0' value='10'/> </distances> <cpus num='8'> - <cpu id='0' socket_id='0' die_id='0' core_id='0' siblings='0,4'/> - <cpu id='1' socket_id='0' die_id='0' core_id='1' siblings='1,5'/> - <cpu id='2' socket_id='0' die_id='0' core_id='2' siblings='2,6'/> - <cpu id='3' socket_id='0' die_id='0' core_id='3' siblings='3,7'/> - <cpu id='4' socket_id='0' die_id='0' core_id='0' siblings='0,4'/> - <cpu id='5' socket_id='0' die_id='0' core_id='1' siblings='1,5'/> - <cpu id='6' socket_id='0' die_id='0' core_id='2' siblings='2,6'/> - <cpu id='7' socket_id='0' die_id='0' core_id='3' siblings='3,7'/> + <cpu id='0' socket_id='0' die_id='0' cluster_id='0' core_id='0' siblings='0,4'/> + <cpu id='1' socket_id='0' die_id='0' cluster_id='0' core_id='1' siblings='1,5'/> + <cpu id='2' socket_id='0' die_id='0' cluster_id='0' core_id='2' siblings='2,6'/> + <cpu id='3' socket_id='0' die_id='0' cluster_id='0' core_id='3' siblings='3,7'/> + <cpu id='4' socket_id='0' die_id='0' cluster_id='0' core_id='0' siblings='0,4'/> + <cpu id='5' socket_id='0' die_id='0' cluster_id='0' core_id='1' siblings='1,5'/> + <cpu id='6' socket_id='0' die_id='0' cluster_id='0' core_id='2' siblings='2,6'/> + <cpu id='7' socket_id='0' die_id='0' cluster_id='0' core_id='3' siblings='3,7'/> </cpus> </cell> </cells> diff --git a/docs/formatdomain.rst b/docs/formatdomain.rst index 73deaa5cb3..67d5f958d5 100644 --- a/docs/formatdomain.rst +++ b/docs/formatdomain.rst @@ -1377,7 +1377,7 @@ following collection of elements. :since:`Since 0.7.5` <cpu match='exact'> <model fallback='allow'>core2duo</model> <vendor>Intel</vendor> - <topology sockets='1' dies='1' cores='2' threads='1'/> + <topology sockets='1' dies='1' clusters='1' cores='2' threads='1'/> <cache level='3' mode='emulate'/> <maxphysaddr mode='emulate' bits='42'/> <feature policy='disable' name='lahf_lm'/> @@ -1388,7 +1388,7 @@ following collection of elements. :since:`Since 0.7.5` <cpu mode='host-model'> <model fallback='forbid'/> - <topology sockets='1' dies='1' cores='2' threads='1'/> + <topology sockets='1' dies='1' clusters='1' cores='2' threads='1'/> </cpu> ... @@ -1414,7 +1414,7 @@ In case no restrictions need to be put on CPU model and its features, a simpler ... 
<cpu> - <topology sockets='1' dies='1' cores='2' threads='1'/> + <topology sockets='1' dies='1' clusters='1' cores='2' threads='1'/> </cpu> ... @@ -1579,12 +1579,14 @@ In case no restrictions need to be put on CPU model and its features, a simpler ``topology`` The ``topology`` element specifies requested topology of virtual CPU provided to the guest. - Its attributes ``sockets``, ``dies`` (:since:`Since 6.1.0`), ``cores``, - and ``threads`` accept non-zero positive integer values. + Its attributes ``sockets``, ``dies`` (:since:`Since 6.1.0`), ``clusters`` + (:since:`Since 10.1.0`), ``cores``, and ``threads`` accept non-zero positive + integer values. They refer to the total number of CPU sockets, number of dies per socket, - number of cores per die, and number of threads per core, respectively. - The ``dies`` attribute is optional and will default to 1 if omitted, while - the other attributes are all mandatory. + number of clusters per die, number of cores per cluster, and number of + threads per core, respectively. + The ``dies`` and ``clusters`` attributes are optional and will default to 1 + if omitted, while the other attributes are all mandatory. Hypervisors may require that the maximum number of vCPUs specified by the ``cpus`` element equals to the number of vcpus resulting from the topology. -- 2.43.0
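As a worked example of how the attributes relate to the vCPU count (same shape as the cpu-topology5 test case added earlier in the series): sockets=1, dies=1, clusters=2, cores=2 and threads=2 describes 1 x 1 x 2 x 2 x 2 = 8 vCPUs, so the guest would be configured along these lines:

    <vcpu placement='static'>8</vcpu>
    <cpu>
      <topology sockets='1' dies='1' clusters='2' cores='2' threads='2'/>
    </cpu>

With a QEMU binary that supports clusters, this should translate to something like -smp 8,sockets=1,dies=1,clusters=2,cores=2,threads=2 on the command line.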

On Thu, Jan 11, 2024 at 15:26:42 +0100, Andrea Bolognani wrote:
Signed-off-by: Andrea Bolognani <abologna@redhat.com> --- docs/formatcaps.rst | 24 +++++++++++++++--------- docs/formatdomain.rst | 18 ++++++++++-------- 2 files changed, 25 insertions(+), 17 deletions(-)
Reviewed-by: Peter Krempa <pkrempa@redhat.com>

On Fri, Jan 12, 2024 at 05:18:57PM +0100, Peter Krempa wrote:
On Thu, Jan 11, 2024 at 15:26:42 +0100, Andrea Bolognani wrote:
Signed-off-by: Andrea Bolognani <abologna@redhat.com> --- docs/formatcaps.rst | 24 +++++++++++++++--------- docs/formatdomain.rst | 18 ++++++++++-------- 2 files changed, 25 insertions(+), 17 deletions(-)
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
I'll squash in the diff below before pushing. diff --git a/docs/formatcaps.rst b/docs/formatcaps.rst index d16cf182dc..f37532296f 100644 --- a/docs/formatcaps.rst +++ b/docs/formatcaps.rst @@ -90,7 +90,7 @@ The ``<host/>`` element consists of the following child elements: List of CPUs that are in the same core. The list will include the current CPU, plus all other CPUs that have the - same values for ``socket_id``, ``die_id`` and ``core_id``. + same values for ``socket_id``, ``die_id``, ``cluster_id`` and ``core_id``. ``secmodel`` To find out default security labels for different security models you need to -- Andrea Bolognani / Red Hat / Virtualization

Signed-off-by: Andrea Bolognani <abologna@redhat.com> Reviewed-by: Peter Krempa <pkrempa@redhat.com> --- NEWS.rst | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/NEWS.rst b/NEWS.rst index 9e538a8f57..7accddfbd7 100644 --- a/NEWS.rst +++ b/NEWS.rst @@ -24,6 +24,12 @@ v10.0.0 (unreleased) This should enable faster migration of memory pages that the destination tries to read before they are migrated from the source. + * qemu: Support clusters in CPU topology + + It is now possible to configure the guest CPU topology to use clusters. + Additionally, if CPU clusters are present in the host topology, they will + be reported as part of the capabilities XML. + * **Improvements** * **Bug fixes** -- 2.43.0
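For reference, both sides of the feature in a nutshell (illustrative snippets based on the examples earlier in the series): a guest opts in through the ``clusters`` attribute of the ``topology`` element, and a host exposes per-CPU cluster information through the new ``cluster_id`` attribute in the capabilities XML:

    <!-- guest: domain XML -->
    <topology sockets='1' dies='1' clusters='2' cores='2' threads='2'/>

    <!-- host: capabilities XML -->
    <cpu id='0' socket_id='0' die_id='0' cluster_id='0' core_id='0' siblings='0,4'/>

On architectures that don't have the concept of clusters, ``cluster_id`` is simply reported as 0 for all CPUs.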