[libvirt] [PATCH v6 00/13] Add support for Veritas HyperScale (VxHS) block device protocol
by John Ferlan
Here's the reworked v5 series I promised:
https://www.redhat.com/archives/libvir-list/2017-August/thread.html
Each of the patches lists the changes that I recall making in that
area. I may have missed a few... and I may have missed something
from my own review - so hopefully, Ashish, you can keep me honest, and
of course, since you have the environment, please check/test that
things actually work.
I've done quite a bit of reordering and splitting things up so that
XML changes are in one patch and qemu changes are in a subsequent
patch. Not a trivial amount of change, but hopefully not excessive.
I think we do need to think about the default TLS environment and
whether we really want to fail in the event that cfg->vxhsTLS = 0
and src->haveTLS = yes.
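For illustration, one possible shape of that check (a minimal sketch
using the field names from this series; the calling context and error
message are hypothetical):

    /* Fail hard when the XML requests TLS but qemu.conf disables it. */
    if (src->haveTLS == VIR_TRISTATE_BOOL_YES && !cfg->vxhsTLS) {
        virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
                       _("TLS transport requested for VxHS disk, but "
                         "vxhs_tls is disabled in qemu.conf"));
        return -1;
    }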
Ashish Mittal (10):
storage: Introduce VIR_STORAGE_NET_PROTOCOL_VXHS
docs: Add schema and docs for Veritas HyperScale (VxHS)
util: storage: Add JSON backing volume parse for VxHS
qemu: Add qemu command line generation for a VxHS block device
conf: Introduce TLS options for VxHS block device clients
util: Add haveTLS to virStorageSource
util: Add virstoragetest to parse/format tls='yes'
qemu: Add TLS support for Veritas HyperScale (VxHS)
tests: Add test for failure when vxhs_tls=0
tests: Add a test case for multiple VxHS disk configuration
John Ferlan (3):
qemu: Add QEMU 2.10 x86_64 generated capabilities
qemu: Detect support for vxhs
qemu: Introduce qemuDomainPrepareDiskSource
docs/formatdomain.html.in | 46 +-
docs/schemas/domaincommon.rng | 18 +
src/conf/domain_conf.c | 19 +
src/libxl/libxl_conf.c | 1 +
src/qemu/libvirtd_qemu.aug | 4 +
src/qemu/qemu.conf | 33 +
src/qemu/qemu_block.c | 70 +-
src/qemu/qemu_block.h | 4 +-
src/qemu/qemu_capabilities.c | 4 +
src/qemu/qemu_capabilities.h | 3 +
src/qemu/qemu_command.c | 41 +-
src/qemu/qemu_conf.c | 16 +
src/qemu/qemu_conf.h | 3 +
src/qemu/qemu_domain.c | 58 +
src/qemu/qemu_domain.h | 5 +
src/qemu/qemu_driver.c | 3 +
src/qemu/qemu_parse_command.c | 15 +
src/qemu/qemu_process.c | 4 +
src/qemu/test_libvirtd_qemu.aug.in | 2 +
src/util/virstoragefile.c | 54 +-
src/util/virstoragefile.h | 4 +
src/xenconfig/xen_xl.c | 1 +
.../caps_2.10.0.x86_64.replies | 17994 +++++++++++++++++++
tests/qemucapabilitiesdata/caps_2.10.0.x86_64.xml | 792 +
tests/qemucapabilitiestest.c | 1 +
...ml2argv-disk-drive-network-tlsx509-err-vxhs.xml | 34 +
...-disk-drive-network-tlsx509-multidisk-vxhs.args | 43 +
...v-disk-drive-network-tlsx509-multidisk-vxhs.xml | 50 +
...muxml2argv-disk-drive-network-tlsx509-vxhs.args | 30 +
...emuxml2argv-disk-drive-network-tlsx509-vxhs.xml | 32 +
.../qemuxml2argv-disk-drive-network-vxhs.args | 27 +
.../qemuxml2argv-disk-drive-network-vxhs.xml | 32 +
tests/qemuxml2argvtest.c | 10 +
...uxml2xmlout-disk-drive-network-tlsx509-vxhs.xml | 34 +
.../qemuxml2xmlout-disk-drive-network-vxhs.xml | 34 +
tests/qemuxml2xmltest.c | 2 +
tests/virstoragetest.c | 23 +
37 files changed, 19534 insertions(+), 12 deletions(-)
create mode 100644 tests/qemucapabilitiesdata/caps_2.10.0.x86_64.replies
create mode 100644 tests/qemucapabilitiesdata/caps_2.10.0.x86_64.xml
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-tlsx509-err-vxhs.xml
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-tlsx509-multidisk-vxhs.args
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-tlsx509-multidisk-vxhs.xml
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-tlsx509-vxhs.args
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-tlsx509-vxhs.xml
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-vxhs.args
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-vxhs.xml
create mode 100644 tests/qemuxml2xmloutdata/qemuxml2xmlout-disk-drive-network-tlsx509-vxhs.xml
create mode 100644 tests/qemuxml2xmloutdata/qemuxml2xmlout-disk-drive-network-vxhs.xml
--
2.9.5
[libvirt] [PATCH] virsh: migrate --timeout-postcopy requires --postcopy
by Jiri Denemark
Requesting an automated switch to a post-copy migration (using
--timeout-postcopy) without actually enabling post-copy migration (using
--postcopy) doesn't really do anything. Let's make this dependency
explicit to avoid unexpected behavior.
https://bugzilla.redhat.com/show_bug.cgi?id=1455023
Signed-off-by: Jiri Denemark <jdenemar(a)redhat.com>
---
tools/virsh-domain.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/tools/virsh-domain.c b/tools/virsh-domain.c
index f235c66b07..a3f3b7c7bd 100644
--- a/tools/virsh-domain.c
+++ b/tools/virsh-domain.c
@@ -10768,6 +10768,7 @@ cmdMigrate(vshControl *ctl, const vshCmd *cmd)
VSH_EXCLUSIVE_OPTIONS("live", "offline");
VSH_EXCLUSIVE_OPTIONS("timeout-suspend", "timeout-postcopy");
VSH_REQUIRE_OPTION("postcopy-after-precopy", "postcopy");
+ VSH_REQUIRE_OPTION("timeout-postcopy", "postcopy");
VSH_REQUIRE_OPTION("persistent-xml", "persistent");
if (!(dom = virshCommandOptDomain(ctl, cmd, NULL)))
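For reference, the macro's effect is roughly equivalent to the
following open-coded check (a conceptual sketch, not the actual vsh.h
macro expansion):

    if (vshCommandOptBool(cmd, "timeout-postcopy") &&
        !vshCommandOptBool(cmd, "postcopy")) {
        vshError(ctl, _("Option --%s requires --%s"),
                 "timeout-postcopy", "postcopy");
        return false;
    }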
--
2.14.1
[libvirt] [PATCH] virsh: help: Drop 'id' from possible values for <domain> argument
by Erik Skultety
At the moment, we can only rename inactive domains.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1490164
Signed-off-by: Erik Skultety <eskultet(a)redhat.com>
---
Theoretically, we could also remove the following check from qemu_driver.c, as
it's useless: a VM which passes the 'active' check is necessarily a
persistent domain:
    if (!vm->persistent) {
        virReportError(VIR_ERR_OPERATION_INVALID, "%s",
                       _("cannot rename a transient domain"));
        goto endjob;
    }
Also, virshCommandOptDomainBy could be used instead of virshCommandOptDomain in
this case, but we might eventually enable rename for active domains as well,
who knows - and I don't think this is worth any more attention than just a help
string tweak.
tools/virsh-domain.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/virsh-domain.c b/tools/virsh-domain.c
index f235c66b0..84c8dccae 100644
--- a/tools/virsh-domain.c
+++ b/tools/virsh-domain.c
@@ -10151,7 +10151,7 @@ static const vshCmdInfo info_domrename[] = {
};
static const vshCmdOptDef opts_domrename[] = {
- VIRSH_COMMON_OPT_DOMAIN_FULL,
+ VIRSH_COMMON_OPT_DOMAIN(N_("domain name or uuid")),
{.name = "new-name",
.type = VSH_OT_DATA,
.flags = VSH_OFLAG_REQ,
--
2.13.3
[libvirt] [PATCH v3 0/4] numa: describe sibling nodes distances
by Wim Ten Have
From: Wim ten Have <wim.ten.have(a)oracle.com>
This patch set extends guest domain administration, adding support to advertise
node sibling distances when configuring HVM NUMA guests.
NUMA (non-uniform memory access) is a method of configuring a cluster of nodes
within a single multiprocessing system such that each node shares its local
processor memory with the others, improving performance and the ability of the
system to be expanded.
A NUMA system could be illustrated as shown below. Within this 4-node
system, every socket is equipped with its own distinct memory. The whole
typically resembles a SMP (symmetric multiprocessing) system being a
"tightly-coupled," "share everything" system in which multiple processors
are working under a single operating system and can access each others'
memory over multiple "Bus Interconnect" paths.
+-----+-----+-----+ +-----+-----+-----+
| M | CPU | CPU | | CPU | CPU | M |
| E | | | | | | E |
| M +- Socket0 -+ +- Socket3 -+ M |
| O | | | | | | O |
| R | CPU | CPU <---------> CPU | CPU | R |
| Y | | | | | | Y |
+-----+--^--+-----+ +-----+--^--+-----+
| |
| Bus Interconnect |
| |
+-----+--v--+-----+ +-----+--v--+-----+
| M | | | | | | M |
| E | CPU | CPU <---------> CPU | CPU | E |
| M | | | | | | M |
| O +- Socket1 -+ +- Socket2 -+ O |
| R | | | | | | R |
| Y | CPU | CPU | | CPU | CPU | Y |
+-----+-----+-----+ +-----+-----+-----+
Contrast this with a flat SMP system (not illustrated), which has an inherent
limitation: as sockets are added, the bus (data and address path) gets
overloaded under high activity and easily becomes a performance bottleneck.
NUMA adds an intermediate level of memory shared amongst a few cores per
socket as illustrated above, so that data accesses do not have to travel
over a single bus.
Unfortunately the way NUMA does this adds its own limitations. This,
as visualized in the illustration above, happens when data is stored in
memory associated with Socket2 and is accessed by a CPU (core) in Socket0.
The processors use the "Bus Interconnect" to create gateways between the
sockets (nodes) enabling inter-socket access to memory. These "Bus
Interconnect" hops add data access delays when a CPU (core) accesses
memory associated with a remote socket (node).
For terminology, we refer to sockets as "nodes", where access to each
other's distinct resources, such as memory, makes them "siblings" with a
designated "distance" between them. A specific design is described under
the ACPI (Advanced Configuration and Power Interface Specification)
within the chapter explaining the system's SLIT (System Locality Distance
Information Table).
These patches extend core libvirt's XML description of a virtual machine's
hardware to include NUMA distance information for sibling nodes, which
is then passed to Xen guests via libxl. QEMU recently landed support for
constructing the SLIT in commit 0f203430dd ("numa: Allow setting NUMA
distance for different NUMA nodes"), hence these core libvirt extensions
can also help other drivers support this feature.
The XML changes made allow to describe the <cell> (or node/sockets) <distances>
amongst <sibling> node identifiers and propagate these towards the numa
domain functionality finally adding support to libxl.
[below is an example illustrating a 4 node/socket <cell> setup]
<cpu>
<numa>
<cell id='0' cpus='0,4-7' memory='2097152' unit='KiB'>
<distances>
<sibling id='0' value='10'/>
<sibling id='1' value='21'/>
<sibling id='2' value='31'/>
<sibling id='3' value='41'/>
</distances>
</cell>
<cell id='1' cpus='1,8-10,12-15' memory='2097152' unit='KiB'>
<distances>
<sibling id='0' value='21'/>
<sibling id='1' value='10'/>
<sibling id='2' value='21'/>
<sibling id='3' value='31'/>
</distances>
</cell>
<cell id='2' cpus='2,11' memory='2097152' unit='KiB'>
<distances>
<sibling id='0' value='31'/>
<sibling id='1' value='21'/>
<sibling id='2' value='10'/>
<sibling id='3' value='21'/>
</distances>
</cell>
<cell id='3' cpus='3' memory='2097152' unit='KiB'>
<distances>
<sibling id='0' value='41'/>
<sibling id='1' value='31'/>
<sibling id='2' value='21'/>
<sibling id='3' value='10'/>
</distances>
</cell>
</numa>
</cpu>
By default on libxl, if no <distances> are given to describe the SLIT data
between different <cell>s, this patch defaults to a scheme using 10 for the
local node and 21 for any remote node/socket, which is what a guest OS
assumes when no SLIT is specified. While the SLIT is optional, libxl requires
that distances be set nonetheless.
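A sketch of that defaulting rule (variable and array names here are
illustrative, not the exact code in the libxl patch):

    /* No <distances> given: local nodes are 10 apart, remote ones 21. */
    size_t i, j;
    for (i = 0; i < nr_nodes; i++) {
        for (j = 0; j < nr_nodes; j++)
            distances[i * nr_nodes + j] = (i == j) ? 10 : 21;
    }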
On Linux systems, the SLIT detail can be listed with the help of the 'numactl -H'
command. The HVM guest described above would respond to that command with the output below.
[root@f25 ~]# numactl -H
available: 4 nodes (0-3)
node 0 cpus: 0 4 5 6 7
node 0 size: 1988 MB
node 0 free: 1743 MB
node 1 cpus: 1 8 9 10 12 13 14 15
node 1 size: 1946 MB
node 1 free: 1885 MB
node 2 cpus: 2 11
node 2 size: 2011 MB
node 2 free: 1912 MB
node 3 cpus: 3
node 3 size: 2010 MB
node 3 free: 1980 MB
node distances:
node 0 1 2 3
0: 10 21 31 41
1: 21 10 21 31
2: 31 21 10 21
3: 41 31 21 10
Wim ten Have (4):
numa: describe siblings distances within cells
libxl: vnuma support
xenconfig: add domxml conversions for xen-xl
xlconfigtest: add tests for numa cell sibling distances
docs/formatdomain.html.in | 70 ++++-
docs/schemas/basictypes.rng | 9 +
docs/schemas/cputypes.rng | 18 ++
src/conf/cpu_conf.c | 2 +-
src/conf/numa_conf.c | 323 +++++++++++++++++++-
src/conf/numa_conf.h | 25 +-
src/libvirt_private.syms | 6 +
src/libxl/libxl_conf.c | 120 ++++++++
src/libxl/libxl_driver.c | 3 +-
src/xenconfig/xen_xl.c | 333 +++++++++++++++++++++
.../test-fullvirt-vnuma-nodistances.cfg | 26 ++
.../test-fullvirt-vnuma-nodistances.xml | 53 ++++
tests/xlconfigdata/test-fullvirt-vnuma.cfg | 26 ++
tests/xlconfigdata/test-fullvirt-vnuma.xml | 81 +++++
tests/xlconfigtest.c | 4 +
15 files changed, 1089 insertions(+), 10 deletions(-)
create mode 100644 tests/xlconfigdata/test-fullvirt-vnuma-nodistances.cfg
create mode 100644 tests/xlconfigdata/test-fullvirt-vnuma-nodistances.xml
create mode 100644 tests/xlconfigdata/test-fullvirt-vnuma.cfg
create mode 100644 tests/xlconfigdata/test-fullvirt-vnuma.xml
--
2.9.5
[libvirt] [PATCH 0/2] Tiny adjustments to the virsh's man page
by Erik Skultety
*** BLURB HERE, BLURB THERE, BLURB EVERYWHERE :) ***
Erik Skultety (2):
virsh: man: Be more explicit about 'create' creating transient domain
virsh: man: Document the --validate option for create and define cmds
tools/virsh.pod | 27 ++++++++++++++++++---------
1 file changed, 18 insertions(+), 9 deletions(-)
--
2.13.3
[libvirt] [PATCH] docs: Update --timeout description in libvirtd's man page
by Erik Skultety
Since commit @ae2163f8, only active client connections or running
domains are allowed to inhibit daemon shutdown. The man page, however,
wasn't updated accordingly.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1325066
Signed-off-by: Erik Skultety <eskultet(a)redhat.com>
---
daemon/libvirtd.pod | 6 ++----
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/daemon/libvirtd.pod b/daemon/libvirtd.pod
index 9f5a17d9c..8d3fbd13d 100644
--- a/daemon/libvirtd.pod
+++ b/daemon/libvirtd.pod
@@ -56,10 +56,8 @@ Use this name for the PID file, overriding the default value.
=item B<-t, --timeout> I<SECONDS>
-Exit after timeout period (in seconds) elapse with no client connections
-or registered resources. Be aware that resources such as autostart
-networks will result in never reaching the timeout, even when there are
-no client connections.
+Exit after timeout period (in seconds), provided there are neither any client
+connections nor any running domains.
=item B<-v, --verbose>
--
2.13.3
[libvirt] [PATCH] cpu: Add new EPYC CPU model
by Brijesh Singh
Add a new CPU model called 'EPYC' to model processors from the AMD EPYC
family (which includes EPYC 76xx, 75xx, 74xx, 73xx, and 72xx).
The following feature bits have been added/removed compared to Opteron_G5:
Added: monitor, movbe, rdrand, mmxext, ffxsr, rdtscp, cr8legacy, osvw,
fsgsbase, bmi1, avx2, smep, bmi2, rdseed, adx, smap, clflushopt, sha_ni,
xsaveopt, xsavec, xgetbv1, arat
Removed: xop, fma4, tbm
This patch depends on the EPYC CPU model support introduced in QEMU [1].
[1] https://patchwork.kernel.org/patch/9902205/
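For reference, the new sha_ni mask corresponds to CPUID leaf 0x07
(ECX=0) EBX bit 29 (1 << 29 == 0x20000000). A standalone host-side
check, sketched using GCC's cpuid.h:

    #include <cpuid.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;

        /* Leaf 0x07, subleaf 0x00: structured extended feature flags. */
        if (__get_cpuid_count(0x07, 0x00, &eax, &ebx, &ecx, &edx))
            printf("sha_ni: %s\n", (ebx & 0x20000000) ? "yes" : "no");
        return 0;
    }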
Cc: Tom Lendacky <Thomas.Lendacky(a)amd.com>
Signed-off-by: Brijesh Singh <brijesh.singh(a)amd.com>
---
src/cpu/cpu_map.xml | 74 +++++++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 74 insertions(+)
diff --git a/src/cpu/cpu_map.xml b/src/cpu/cpu_map.xml
index 8e7ac49..522d66b 100644
--- a/src/cpu/cpu_map.xml
+++ b/src/cpu/cpu_map.xml
@@ -251,6 +251,9 @@
<feature name='clflushopt'>
<cpuid eax_in='0x07' ecx_in='0x00' ebx='0x00800000'/>
</feature>
+ <feature name='sha_ni'>
+ <cpuid eax_in='0x07' ecx_in='0x00' ebx='0x20000000'/>
+ </feature>
<feature name='avx512pf'>
<cpuid eax_in='0x07' ecx_in='0x00' ebx='0x04000000'/>
</feature>
@@ -1545,6 +1548,77 @@
<feature name='xop'/>
<feature name='xsave'/>
</model>
+
+ <model name='EPYC'>
+ <signature family='23' model='1'/>
+ <vendor name='AMD'/>
+ <feature name='sse2'/>
+ <feature name='sse'/>
+ <feature name='fxsr'/>
+ <feature name='mmx'/>
+ <feature name='clflush'/>
+ <feature name='pse36'/>
+ <feature name='pat'/>
+ <feature name='cmov'/>
+ <feature name='mca'/>
+ <feature name='pge'/>
+ <feature name='mtrr'/>
+ <feature name='sep'/>
+ <feature name='apic'/>
+ <feature name='cx8'/>
+ <feature name='mce'/>
+ <feature name='pae'/>
+ <feature name='msr'/>
+ <feature name='tsc'/>
+ <feature name='pse'/>
+ <feature name='de'/>
+ <feature name='vme'/>
+ <feature name='fpu'/>
+ <feature name='rdrand'/>
+ <feature name='f16c'/>
+ <feature name='avx'/>
+ <feature name='xsave'/>
+ <feature name='aes'/>
+ <feature name='popcnt'/>
+ <feature name='movbe'/>
+ <feature name='sse4.2'/>
+ <feature name='sse4.1'/>
+ <feature name='cx16'/>
+ <feature name='fma'/>
+ <feature name='ssse3'/>
+ <feature name='monitor'/>
+ <feature name='pclmuldq'/>
+ <feature name='pni'/>
+ <feature name='lm'/>
+ <feature name='rdtscp'/>
+ <feature name='pdpe1gb'/>
+ <feature name='fxsr_opt'/>
+ <feature name='mmxext'/>
+ <feature name='nx'/>
+ <feature name='syscall'/>
+ <feature name='osvw'/>
+ <feature name='3dnowprefetch'/>
+ <feature name='misalignsse'/>
+ <feature name='sse4a'/>
+ <feature name='abm'/>
+ <feature name='cr8legacy'/>
+ <feature name='svm'/>
+ <feature name='lahf_lm'/>
+ <feature name='fsgsbase'/>
+ <feature name='bmi1'/>
+ <feature name='avx2'/>
+ <feature name='smep'/>
+ <feature name='bmi2'/>
+ <feature name='rdseed'/>
+ <feature name='adx'/>
+ <feature name='smap'/>
+ <feature name='clflushopt'/>
+ <feature name='sha_ni'/>
+ <feature name='xsaveopt'/>
+ <feature name='xsavec'/>
+ <feature name='xgetbv1'/>
+ <feature name='arat'/>
+ </model>
</arch>
<arch name='ppc64'>
--
2.9.4
[libvirt] [PATCH v4 00/13] qemu: migration: show disks stats for nbd migration
by Nikolay Shirokovskiy
diff from v3:
============
1. Fix misc style issues
2. Use different structure to store mirror stats
3. Drop logic to update mirror stats after the mirror becomes ready
This patch series adds disk stats to the domain job info (stats), as
well as to the migration completed event, when the NBD scheme is used.
Patches that were explicitly ACKed in the previous review
(up to style issues) are marked with A.
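As a client-side illustration of what the series exposes (a sketch
using the public job-stats API; error handling kept minimal):

    #include <stdio.h>
    #include <libvirt/libvirt.h>

    static void
    print_disk_progress(virDomainPtr dom)
    {
        virTypedParameterPtr params = NULL;
        int nparams = 0;
        int type;
        unsigned long long processed = 0, total = 0;

        if (virDomainGetJobStats(dom, &type, &params, &nparams, 0) < 0)
            return;

        /* Disk totals now populated during NBD (drive-mirror) migration. */
        if (virTypedParamsGetULLong(params, nparams,
                                    VIR_DOMAIN_JOB_DISK_PROCESSED,
                                    &processed) == 1 &&
            virTypedParamsGetULLong(params, nparams,
                                    VIR_DOMAIN_JOB_DISK_TOTAL,
                                    &total) == 1)
            printf("disk: %llu / %llu bytes\n", processed, total);

        virTypedParamsFree(params, nparams);
    }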
Nikolay Shirokovskiy (13):
A qemu: drop code for VIR_DOMAIN_JOB_BOUNDED and timeRemaining
A qemu: introduce qemu domain job status
A qemu: introduce QEMU_DOMAIN_JOB_STATUS_POSTCOPY
A qemu: drop QEMU_MIGRATION_COMPLETED_UPDATE_STATS
A qemu: drop excessive zero-out in qemuMigrationFetchJobStatus
qemu: refactor fetching migration stats
qemu: simplify getting completed job stats
A qemu: fail querying destination migration statistics always
A qemu: start all async job with job status active
A qemu: introduce migrating job status
A qemu: always get job condition on getting job stats
qemu: migrate: add mirror stats to migration stats
A qemu: migration: don't expose incomplete job as complete
src/qemu/qemu_domain.c | 69 ++++++++++----
src/qemu/qemu_domain.h | 23 ++++-
src/qemu/qemu_driver.c | 86 +++++++++--------
src/qemu/qemu_migration.c | 195 +++++++++++++++++++++++----------------
src/qemu/qemu_migration.h | 14 ++-
src/qemu/qemu_migration_cookie.c | 7 +-
src/qemu/qemu_process.c | 8 +-
7 files changed, 243 insertions(+), 159 deletions(-)
--
1.8.3.1
[libvirt] [PATCH] tpm: Use /dev/null for cancel path if none was found
by Stefan Berger
TPM 2 does not implement sysfs files for cancellation of commands.
We therefore use /dev/null for the cancel path passed to QEMU.
Signed-off-by: Stefan Berger <stefanb(a)linux.vnet.ibm.com>
---
src/util/virtpm.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/src/util/virtpm.c b/src/util/virtpm.c
index 6d9b065..d5c10da 100644
--- a/src/util/virtpm.c
+++ b/src/util/virtpm.c
@@ -61,9 +61,7 @@ virTPMCreateCancelPath(const char *devpath)
VIR_FREE(path);
}
if (!path)
- virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
- _("No usable sysfs TPM cancel file could be "
- "found"));
+ ignore_value(VIR_STRDUP(path, "/dev/null"));
} else {
virReportError(VIR_ERR_INTERNAL_ERROR,
_("TPM device path %s is invalid"), devpath);
--
2.5.5
[libvirt] [PATCH v2 0/3] Add new EPYC CPU model
by Jiri Denemark
Brijesh Singh (1):
cpu: Add new EPYC CPU model
Jiri Denemark (2):
tests: Add CPUID data for AMD Ryzen 7 1800X Eight-Core Processor
tests: Add CPUID data for AMD EPYC 7601 32-Core Processor
src/cpu/cpu_map.xml | 74 +++++++
tests/cputest.c | 2 +
.../x86_64-cpuid-EPYC-7601-32-Core-disabled.xml | 8 +
.../x86_64-cpuid-EPYC-7601-32-Core-enabled.xml | 10 +
.../x86_64-cpuid-EPYC-7601-32-Core-guest.xml | 16 ++
.../x86_64-cpuid-EPYC-7601-32-Core-host.xml | 17 ++
.../x86_64-cpuid-EPYC-7601-32-Core-json.xml | 11 +
.../x86_64-cpuid-EPYC-7601-32-Core.json | 241 +++++++++++++++++++++
.../cputestdata/x86_64-cpuid-EPYC-7601-32-Core.xml | 54 +++++
..._64-cpuid-Ryzen-7-1800X-Eight-Core-disabled.xml | 9 +
...6_64-cpuid-Ryzen-7-1800X-Eight-Core-enabled.xml | 10 +
...x86_64-cpuid-Ryzen-7-1800X-Eight-Core-guest.xml | 16 ++
.../x86_64-cpuid-Ryzen-7-1800X-Eight-Core-host.xml | 17 ++
.../x86_64-cpuid-Ryzen-7-1800X-Eight-Core-json.xml | 11 +
.../x86_64-cpuid-Ryzen-7-1800X-Eight-Core.json | 203 +++++++++++++++++
.../x86_64-cpuid-Ryzen-7-1800X-Eight-Core.xml | 52 +++++
16 files changed, 751 insertions(+)
create mode 100644 tests/cputestdata/x86_64-cpuid-EPYC-7601-32-Core-disabled.xml
create mode 100644 tests/cputestdata/x86_64-cpuid-EPYC-7601-32-Core-enabled.xml
create mode 100644 tests/cputestdata/x86_64-cpuid-EPYC-7601-32-Core-guest.xml
create mode 100644 tests/cputestdata/x86_64-cpuid-EPYC-7601-32-Core-host.xml
create mode 100644 tests/cputestdata/x86_64-cpuid-EPYC-7601-32-Core-json.xml
create mode 100644 tests/cputestdata/x86_64-cpuid-EPYC-7601-32-Core.json
create mode 100644 tests/cputestdata/x86_64-cpuid-EPYC-7601-32-Core.xml
create mode 100644 tests/cputestdata/x86_64-cpuid-Ryzen-7-1800X-Eight-Core-disabled.xml
create mode 100644 tests/cputestdata/x86_64-cpuid-Ryzen-7-1800X-Eight-Core-enabled.xml
create mode 100644 tests/cputestdata/x86_64-cpuid-Ryzen-7-1800X-Eight-Core-guest.xml
create mode 100644 tests/cputestdata/x86_64-cpuid-Ryzen-7-1800X-Eight-Core-host.xml
create mode 100644 tests/cputestdata/x86_64-cpuid-Ryzen-7-1800X-Eight-Core-json.xml
create mode 100644 tests/cputestdata/x86_64-cpuid-Ryzen-7-1800X-Eight-Core.json
create mode 100644 tests/cputestdata/x86_64-cpuid-Ryzen-7-1800X-Eight-Core.xml
--
2.14.1