[libvirt] [PATCH 0/7] Keep non-persistent changes alive in snapshot
by Kothapally Madhu Pavan
Restoring to a snapshot should not overwrite the persistent XML configuration
of the guest as a side effect; this patchset fixes that. Currently,
virDomainSnapshotDef saves only the active domain definition of the guest,
and on restore the active domain definition is used as both the active and
inactive domain definition. This makes non-persistent changes persistent
in the snapshot image. This patchset allows saving the inactive domain
definition as well, so that on snapshot-revert the non-persistent
configuration is restored as is.
Currently, snapshot-revert makes non-persistent changes persistent.
Here are the steps to reproduce.
Step1: virsh define $dom
Step2: virsh attach-device $dom $memory-device.xml --live
Step3: virsh snapshot-create $dom
Step4: virsh destroy $dom
Step5: virsh snapshot-revert $dom $snapshot-name
Step6: virsh destroy $dom
Step7: virsh start $dom
After Step 7, the $memory-device attached in Step 2 is still present.
This patchset solves this issue. It will also allow the user to dump and
edit the inactive XML configuration of a snapshot. Dumping the inactive
domain definition of a snapshot is important because --redefine uses the
snapshot-dumpxml output to redefine a snapshot.
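As an illustration, a redefined snapshot carrying both definitions might look
roughly like this (the <inactiveDomain> element name and nesting here are an
assumption for illustration; the actual schema is what patch 7 adds to
domainsnapshot.rng):

```xml
<domainsnapshot>
  <name>snap1</name>
  <!-- active definition, as saved today -->
  <domain>
    ...
  </domain>
  <!-- inactive definition, so live-only changes do not become persistent -->
  <inactiveDomain>
    ...
  </inactiveDomain>
</domainsnapshot>
```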
Kothapally Madhu Pavan (7):
qemu: Store inactive domain configuration in snapshot
qemu: Use active and inactive snapshot configuration on restore
conf: Allow editing inactive snapshot configuration
virsh: Dump inactive XML configuration of snapshot using
snapshot-dumpxml
virsh: Edit inactive XML configuration of snapshot using snapshot-edit
virsh: Allow restoring snapshot with non-persistent configuration
tests: docs: Add schema and testcase for domainsnapshot
docs/schemas/domainsnapshot.rng | 19 +++++
include/libvirt/libvirt-domain-snapshot.h | 10 ++-
include/libvirt/libvirt-domain.h | 1 +
src/conf/domain_conf.c | 6 +-
src/conf/domain_conf.h | 2 +
src/conf/snapshot_conf.c | 48 ++++++++++++-
src/conf/snapshot_conf.h | 1 +
src/qemu/qemu_driver.c | 33 ++++++++-
.../full_domain_withinactive.xml | 83 ++++++++++++++++++++++
tests/domainsnapshotxml2xmltest.c | 1 +
tools/virsh-snapshot.c | 20 ++++++
tools/virsh.pod | 37 +++++++++-
12 files changed, 251 insertions(+), 10 deletions(-)
create mode 100644 tests/domainsnapshotxml2xmlout/full_domain_withinactive.xml
--
1.8.3.1
[libvirt] [PATCH v4 0/4] nwfilter common object adjustments
by John Ferlan
v3: https://www.redhat.com/archives/libvir-list/2017-October/msg00264.html
Although v3 didn't get any attention - I figured I'd update and repost.
The only difference between this series and that one is that I dropped
patch 1 from v3. It was an attempt to fix a perceived issue in nwfilter
that I have now determined is actually in nodedev, for which I'll have a
different set of patches.
John Ferlan (4):
nwfilter: Remove unnecessary UUID comparison bypass
nwfilter: Convert _virNWFilterObj to use virObjectRWLockable
nwfilter: Convert _virNWFilterObjList to use virObjectRWLockable
nwfilter: Remove need for nwfilterDriverLock in some API's
src/conf/virnwfilterobj.c | 555 +++++++++++++++++++++++----------
src/conf/virnwfilterobj.h | 11 +-
src/libvirt_private.syms | 3 +-
src/nwfilter/nwfilter_driver.c | 71 ++---
src/nwfilter/nwfilter_gentech_driver.c | 11 +-
5 files changed, 427 insertions(+), 224 deletions(-)
--
2.13.6
[libvirt] [PATCH 0/6] port allocator: make used port bitmap global etc
by Nikolay Shirokovskiy
This patch set addresses the issue described in [1]; the core of the
changes is in the first patch. The others are cleanups and
refactorings.
[1] https://www.redhat.com/archives/libvir-list/2017-December/msg00600.html
Nikolay Shirokovskiy (6):
port allocator: make used port bitmap global
port allocator: remove range on manual port reserving
port allocator: remove range check in release function
port allocator: drop skip flag
port allocator: remove release functionality from set used
port allocator: make port range constant object
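The idea behind the first patch, a single global used-port bitmap shared by all
callers instead of per-range state, can be sketched as follows. This is an
illustration of the concept only, not libvirt's actual virPortAllocator API;
all names here are made up:

```c
#include <assert.h>
#include <limits.h>

/* One global bitmap covering the whole port space, instead of one
 * bitmap per allocator range. */
#define PORT_MIN 1024
#define PORT_MAX 65535
#define NBITS (PORT_MAX - PORT_MIN + 1)

static unsigned char used[(NBITS + CHAR_BIT - 1) / CHAR_BIT];

/* Scan the requested range for the first free port, mark it used,
 * and return it; -1 when the range is exhausted. */
static int port_acquire(int start, int end)
{
    for (int p = start; p <= end; p++) {
        int bit = p - PORT_MIN;
        if (!(used[bit / CHAR_BIT] & (1u << (bit % CHAR_BIT)))) {
            used[bit / CHAR_BIT] |= 1u << (bit % CHAR_BIT);
            return p;
        }
    }
    return -1;
}

/* Releasing needs no range at all: it just clears the port's bit. */
static void port_release(int p)
{
    int bit = p - PORT_MIN;
    used[bit / CHAR_BIT] &= ~(1u << (bit % CHAR_BIT));
}
```

Because the bitmap is global, releasing a port no longer needs to know which
range it was allocated from, which is what the "remove range" patches exploit.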
src/bhyve/bhyve_command.c | 4 +-
src/bhyve/bhyve_driver.c | 4 +-
src/bhyve/bhyve_process.c | 7 +-
src/bhyve/bhyve_utils.h | 2 +-
src/libvirt_private.syms | 3 +-
src/libxl/libxl_conf.c | 8 +--
src/libxl/libxl_conf.h | 12 ++--
src/libxl/libxl_domain.c | 3 +-
src/libxl/libxl_driver.c | 17 +++--
src/libxl/libxl_migration.c | 4 +-
src/qemu/qemu_conf.h | 12 ++--
src/qemu/qemu_driver.c | 27 ++++----
src/qemu/qemu_migration.c | 12 ++--
src/qemu/qemu_process.c | 55 +++++----------
src/util/virportallocator.c | 148 +++++++++++++++++++++++------------------
src/util/virportallocator.h | 24 +++----
tests/bhyvexml2argvtest.c | 5 +-
tests/libxlxml2domconfigtest.c | 7 +-
tests/virportallocatortest.c | 49 ++++++++------
19 files changed, 196 insertions(+), 207 deletions(-)
--
1.8.3.1
[libvirt] PATCH add q35 support ide
by Paul Schlacter
hello everyone:
This patch allows using the IDE bus with the q35 machine type; QEMU now
supports an IDE bus on q35 boards.
diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index cc7596b..2dbade8 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -7188,6 +7188,7 @@ bool
qemuDomainMachineHasBuiltinIDE(const char *machine)
{
return qemuDomainMachineIsI440FX(machine) ||
+ qemuDomainMachineIsQ35(machine) ||
STREQ(machine, "malta") ||
STREQ(machine, "sun4u") ||
STREQ(machine, "g3beige");
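The helpers involved can be approximated like this (a sketch of the prefix
checks, not the exact libvirt source):

```c
#include <assert.h>
#include <string.h>

/* Machine type strings look like "pc-i440fx-2.9" or "pc-q35-rhel7.3.0",
 * so the helpers match on a prefix rather than the full name. */
static int machine_is_i440fx(const char *machine)
{
    return strncmp(machine, "pc-i440fx", strlen("pc-i440fx")) == 0 ||
           strcmp(machine, "pc") == 0;
}

static int machine_is_q35(const char *machine)
{
    return strncmp(machine, "pc-q35", strlen("pc-q35")) == 0 ||
           strcmp(machine, "q35") == 0;
}

static int machine_has_builtin_ide(const char *machine)
{
    return machine_is_i440fx(machine) ||
           machine_is_q35(machine) ||   /* the change proposed above */
           strcmp(machine, "malta") == 0 ||
           strcmp(machine, "sun4u") == 0 ||
           strcmp(machine, "g3beige") == 0;
}
```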
[root@kvm ~]# virsh dumpxml instance-00000004 | grep machine=
<type arch='x86_64' machine='pc-q35-rhel7.3.0'>hvm</type>
[root@kvm~]#
[root@kvm~]# virsh dumpxml instance-00000004 | grep "'disk'" -A 13
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2' cache='none'/>
<source file='/var/lib/nova/instances/288271ce-69eb-4629-b98c-779036661294/disk'/>
<backingStore type='file' index='1'>
<format type='raw'/>
<source file='/var/lib/nova/instances/_base/8d383eef2e628adfc197a6e40e656916de566ab1'/>
<backingStore/>
</backingStore>
<target dev='vda' bus='ide'/>
<alias name='ide0-0-0'/>
<address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
[libvirt] [PATCH] cpu: Add support for la57 Intel feature
by Shaohe Feng
We can start qemu with "+la57" in the -cpu string to set a 57-bit virtual
address space, so the VM can be aware that it needs to enable 5-level paging.
Corresponding QEMU commit:
la57 6c7c3c21f95dd9af8a0691c0dd29b07247984122
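The <cpuid> entry added in the patch below encodes where the feature is
advertised: CPUID leaf 0x07, subleaf 0, ECX bit 16. A hedged sketch of how
that mask would be tested against a guest's CPUID data (names are made up
for illustration):

```c
#include <assert.h>

/* la57 is reported in CPUID.(EAX=07H,ECX=0):ECX bit 16,
 * which is the 0x00010000 mask in the cpu_map.xml entry below. */
#define LA57_ECX_MASK 0x00010000u

static int la57_supported(unsigned int ecx)
{
    return (ecx & LA57_ECX_MASK) != 0;
}
```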
---
src/cpu/cpu_map.xml | 3 +++
1 file changed, 3 insertions(+)
diff --git a/src/cpu/cpu_map.xml b/src/cpu/cpu_map.xml
index e5da7a8..922a195 100644
--- a/src/cpu/cpu_map.xml
+++ b/src/cpu/cpu_map.xml
@@ -285,6 +285,9 @@
<feature name='ospke'>
<cpuid eax_in='0x07' ecx_in='0x00' ecx='0x00000010'/>
</feature>
+ <feature name='la57'>
+ <cpuid eax_in='0x07' ecx_in='0x00' ecx='0x00010000'/>
+ </feature>
<feature name='avx512-4vnniw'>
<cpuid eax_in='0x07' ecx_in='0x00' edx='0x00000004'/>
--
2.7.4
[libvirt] [PATCH 0/9] Yet another version of CAT stuff (no idea about the version number)
by Martin Kletzander
Added stuff that Pavel wanted, removed some testcases that I still have in my
repo, but it's way too complicated to add them properly, so for now I would
just go with this and manual testing of starting some domains. I currently
have no hardware to test this on, so some testing is needed before pushing.
@Eli: Can you help with the testing?
Martin Kletzander (9):
Rename virResctrlInfo to virResctrlInfoPerCache
util: Add virResctrlInfo
conf: Use virResctrlInfo in capabilities
util: Remove now-unneeded resctrl functions
resctrl: Add functions to work with resctrl allocations
conf: Add support for cputune/cachetune
tests: Add virresctrltest
qemu: Add support for resctrl
docs: Add CAT (resctrl) support into news.xml
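For illustration, the new cputune/cachetune element added by these patches
could look something like this (the attribute names here are an assumption
based on the testcase names above, not necessarily the final schema):

```xml
<cputune>
  <cachetune vcpus='0-3'>
    <cache id='0' level='3' type='both' size='3' unit='MiB'/>
  </cachetune>
</cputune>
```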
docs/formatdomain.html.in | 54 +
docs/news.xml | 9 +
docs/schemas/domaincommon.rng | 32 +
po/POTFILES.in | 1 +
src/Makefile.am | 2 +-
src/conf/capabilities.c | 55 +-
src/conf/capabilities.h | 4 +-
src/conf/domain_conf.c | 251 ++++
src/conf/domain_conf.h | 13 +
src/libvirt_private.syms | 16 +-
src/qemu/qemu_process.c | 61 +-
src/util/virresctrl.c | 1380 ++++++++++++++++++--
src/util/virresctrl.h | 86 +-
src/util/virresctrlpriv.h | 27 +
tests/Makefile.am | 8 +-
tests/genericxml2xmlindata/cachetune-cdp.xml | 36 +
.../cachetune-colliding-allocs.xml | 30 +
.../cachetune-colliding-tunes.xml | 32 +
.../cachetune-colliding-types.xml | 30 +
tests/genericxml2xmlindata/cachetune-small.xml | 29 +
tests/genericxml2xmlindata/cachetune.xml | 33 +
tests/genericxml2xmltest.c | 10 +
tests/virresctrldata/resctrl-cdp.schemata | 2 +
.../virresctrldata/resctrl-skx-twocaches.schemata | 1 +
tests/virresctrldata/resctrl-skx.schemata | 1 +
tests/virresctrldata/resctrl.schemata | 1 +
tests/virresctrltest.c | 102 ++
27 files changed, 2174 insertions(+), 132 deletions(-)
create mode 100644 src/util/virresctrlpriv.h
create mode 100644 tests/genericxml2xmlindata/cachetune-cdp.xml
create mode 100644 tests/genericxml2xmlindata/cachetune-colliding-allocs.xml
create mode 100644 tests/genericxml2xmlindata/cachetune-colliding-tunes.xml
create mode 100644 tests/genericxml2xmlindata/cachetune-colliding-types.xml
create mode 100644 tests/genericxml2xmlindata/cachetune-small.xml
create mode 100644 tests/genericxml2xmlindata/cachetune.xml
create mode 100644 tests/virresctrldata/resctrl-cdp.schemata
create mode 100644 tests/virresctrldata/resctrl-skx-twocaches.schemata
create mode 100644 tests/virresctrldata/resctrl-skx.schemata
create mode 100644 tests/virresctrldata/resctrl.schemata
create mode 100644 tests/virresctrltest.c
--
2.15.1
[libvirt] [PATCH v2] treat host models as case-insensitive strings
by Scott Garfinkle
Qemu now allows case-insensitive specification of CPU models. This fixes the
resulting problems on (at least) POWER arch machines.
Patch V2: Change only the internal interface. This solves the actual problem at
hand of reporting unsupported models now that qemu allows case-insensitive
strings (e.g. "Power8" instead of "POWER8").
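The one-line change amounts to switching from an exact string match to a
case-insensitive one; STRCASEEQ is roughly libvirt's wrapper around
strcasecmp(), as sketched here:

```c
#include <assert.h>
#include <string.h>
#include <strings.h>

/* Rough equivalents of libvirt's STREQ/STRCASEEQ helper macros. */
#define STREQ(a, b)     (strcmp(a, b) == 0)
#define STRCASEEQ(a, b) (strcasecmp(a, b) == 0)
```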
Signed-off-by: Scott Garfinkle <scottgar(a)linux.vnet.ibm.com>
---
src/conf/domain_capabilities.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/conf/domain_capabilities.c b/src/conf/domain_capabilities.c
index e7323a8..f7d9be5 100644
--- a/src/conf/domain_capabilities.c
+++ b/src/conf/domain_capabilities.c
@@ -271,7 +271,7 @@ virDomainCapsCPUModelsGet(virDomainCapsCPUModelsPtr cpuModels,
return NULL;
for (i = 0; i < cpuModels->nmodels; i++) {
- if (STREQ(cpuModels->models[i].name, name))
+ if (STRCASEEQ(cpuModels->models[i].name, name))
return cpuModels->models + i;
}
--
1.8.3.1
[libvirt] [PATCH] vsh: add a necessary assertion
by Marc Hartmayer
This fixes the compilation error (compiled with the compiler option
'-03').
In file included from ../../tools/vsh.c:28:0:
../../tools/vsh.c: In function 'vshCommandOptStringQuiet':
../../tools/vsh.c:838:30: error: potential null pointer dereference [-Werror=null-dereference]
assert(!needData || valid->type != VSH_OT_BOOL);
Signed-off-by: Marc Hartmayer <mhartmay(a)linux.vnet.ibm.com>
Reviewed-by: Boris Fiuczynski <fiuczy(a)linux.vnet.ibm.com>
---
tools/vsh.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/tools/vsh.c b/tools/vsh.c
index e878119b988f..677eb9db3e41 100644
--- a/tools/vsh.c
+++ b/tools/vsh.c
@@ -816,8 +816,8 @@ vshCommandFree(vshCmd *cmd)
* to the option if found, 0 with *OPT set to NULL if the name is
* valid and the option is not required, -1 with *OPT set to NULL if
* the option is required but not present, and assert if NAME is not
- * valid (which indicates a programming error). No error messages are
- * issued if a value is returned.
+ * valid or the option was not found (which indicates a programming
+ * error). No error messages are issued if a value is returned.
*/
static int
vshCommandOpt(const vshCmd *cmd, const char *name, vshCmdOpt **opt,
@@ -835,6 +835,8 @@ vshCommandOpt(const vshCmd *cmd, const char *name, vshCmdOpt **opt,
break;
valid++;
}
+ assert(valid);
+
assert(!needData || valid->type != VSH_OT_BOOL);
if (valid->flags & VSH_OFLAG_REQ)
ret = -1;
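Reduced to its essence, the pattern the compiler warns about is a search loop
whose result pointer is only provably valid via the loop's exit condition; the
added assert makes that invariant explicit to the optimizer. A minimal
reconstruction (not the actual vsh.c code):

```c
#include <assert.h>
#include <string.h>

struct opt { const char *name; int type; };

/* Find NAME in a sentinel-terminated option table.  Without an assert
 * after the loop, gcc -O3 cannot prove the pointer refers to a valid
 * entry when it is later dereferenced, hence -Wnull-dereference. */
static const struct opt *find_opt(const struct opt *valid, const char *name)
{
    while (valid->name) {
        if (strcmp(valid->name, name) == 0)
            break;
        valid++;
    }
    /* NAME not being found would indicate a programming error. */
    assert(valid->name);
    return valid;
}
```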
--
2.13.4
[libvirt] [PATCH 0/5] Proof of concept for libvirt_qemu shim process
by Daniel P. Berrange
This patch series provides a proof of concept impl of the libvirt_qemu
shim process I previously suggested here:
https://www.redhat.com/archives/libvir-list/2017-November/msg00526.html
The end goal is that we'll be able to fully isolate management of each
QEMU process, ie all the virDomain* APIs would be executed inside the
libvirt_qemu shim process. The QEMU driver in libvirtd would merely
deal with aggregating the views / tracking central resource allocations.
This series, however, does *not* do that. It is a very much smaller
proof of concept, principally to:
- Learn about pros/cons of different approaches for the long term
goal
- Provide a working PoC that can be used by the KubeVirt project
such that they can spawn QEMU in a separate docker container
than libvirtd is in, and inherit namespaces & cgroup placement, etc
So, in this series, libvirtd functionality remains essentially
unchanged. All I have done is provide a new binary 'libvirt_qemu'
that accepts an XML file as input, launches the QEMU process
directly, and then calls a new virDomainQemuReconnect() API to
make libvirtd aware of its existence. At this point libvirtd
can deal with it normally (some caveats listed in last patch).
Usage is pretty simple - start libvirtd normally, then to launch
a guest just use
$ libvirt_qemu /path/to/xml/file
It'll be associated with whatever libvirtd instance is running
with the same user account. ie if you launch libvirt_qemu as
root, it'll associate with qemu:///system.
By default it will still place VMs in a dedicated cgroup. To
inherit the cgroup of the caller, use <resource register="none"/>
in the XML schema to turn off cgroup setup in libvirt_qemu.
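That opt-out would sit in the domain XML as described above, roughly (the rest
of the document is elided here):

```xml
<domain type='kvm'>
  ...
  <resource register="none"/>
  ...
</domain>
```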
Having written this PoC, however, I'm less convinced that a bottom-up,
minimal impl which incrementally picks certain subsets of QEMU
driver APIs to call is the right way to attack this problem. ie I
was intending to have this minimal shim, then gradually move
functionality into it from libvirtd. This feels like it is going
to create a lot of busy-work, delaying the end goal.
I think instead a different approach might be better in the short
term. Take the existing libvirtd code as a starting point, clone
it to a libvirt_qemu and just start cutting out existing code to
make a lighter weight binary that can only run a single guest,
whose XML is passed in. We would still ultimately need to deal
with much of the same issues, like getting VMs reported to the
central libvirtd, but I think that might get to the end result,
where all APIs run inside the shim, quicker. The key difference
is that we could sooner focus on the harder problems of dealing
with shared resource allocation tracking, instead of doing lots
of code rewiring for API execution.
Daniel P. Berrange (5):
conf: allow different resource registration modes
conf: expose APIs to let drivers load individual config / status files
qemu: add a public API to trigger QEMU driver to connect to running
guest
qemu: implement the new virDomainQemuReconnect method
qemu: implement the 'libvirt_qemu' shim for launching guests
externally
include/libvirt/libvirt-qemu.h | 4 +
po/POTFILES.in | 1 +
src/Makefile.am | 49 ++++
src/conf/domain_conf.c | 42 +++-
src/conf/domain_conf.h | 12 +
src/conf/virdomainobjlist.c | 98 +++++---
src/conf/virdomainobjlist.h | 17 ++
src/driver-hypervisor.h | 5 +
src/libvirt-qemu.c | 48 ++++
src/libvirt_private.syms | 4 +
src/libvirt_qemu.syms | 5 +
src/lxc/lxc_cgroup.c | 34 +++
src/lxc/lxc_cgroup.h | 3 +
src/lxc/lxc_process.c | 11 +-
src/qemu/qemu_cgroup.c | 69 +++++-
src/qemu/qemu_conf.h | 2 +-
src/qemu/qemu_controller.c | 551 +++++++++++++++++++++++++++++++++++++++++
src/qemu/qemu_domain.c | 2 +-
src/qemu/qemu_driver.c | 59 ++++-
src/qemu/qemu_process.c | 31 ++-
src/qemu/qemu_process.h | 1 +
src/remote/qemu_protocol.x | 18 +-
src/remote/remote_driver.c | 1 +
src/util/vircgroup.c | 55 ++--
src/util/vircgroup.h | 10 +-
25 files changed, 1046 insertions(+), 86 deletions(-)
create mode 100644 src/qemu/qemu_controller.c
--
2.14.3
[libvirt] [PATCH 0/3] processor frequency information on S390
by Bjoern Walk
Since kernel version 4.7, processor frequency information is available
on S390. This patch series extends the parsers for both node information
and system information.
Let's also add a testcase to the test suite for an S390 CPU configuration
running kernel version 4.14 on LPAR.
This goes on top of Andrea's changes in here:
https://www.redhat.com/archives/libvir-list/2017-December/msg00519.html
Bjoern Walk (3):
util: virhostcpu: parse frequency information on S390
tests: virhostcputest: testcase for S390 system
util: virsysinfo: parse frequency information on S390
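On S390 the frequency shows up in /proc/cpuinfo as lines such as
"cpu MHz dynamic : 5000" and "cpu MHz static : 5000". Parsing such a line can
be sketched as follows (an illustration of the idea, not the virhostcpu.c
code):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Extract the MHz value from an S390 cpuinfo frequency line,
 * e.g. "cpu MHz dynamic : 5000".  Returns -1 when the line does
 * not carry frequency information. */
static int parse_s390_mhz(const char *line)
{
    unsigned int mhz;
    const char *colon;

    if (strncmp(line, "cpu MHz", strlen("cpu MHz")) != 0)
        return -1;
    colon = strchr(line, ':');
    if (!colon || sscanf(colon + 1, "%u", &mhz) != 1)
        return -1;
    return (int)mhz;
}
```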
src/util/virhostcpu.c | 2 +
src/util/virsysinfo.c | 31 +++++++++++++
.../linux-s390x-with-frequency.cpuinfo | 52 ++++++++++++++++++++++
.../linux-s390x-with-frequency.expected | 1 +
.../linux-with-frequency/cpu/cpu0/online | 1 +
.../linux-with-frequency/cpu/cpu0/topology/book_id | 1 +
.../cpu/cpu0/topology/book_siblings | 1 +
.../cpu/cpu0/topology/book_siblings_list | 1 +
.../linux-with-frequency/cpu/cpu0/topology/core_id | 1 +
.../cpu/cpu0/topology/core_siblings | 1 +
.../cpu/cpu0/topology/core_siblings_list | 1 +
.../cpu/cpu0/topology/drawer_id | 1 +
.../cpu/cpu0/topology/drawer_siblings | 1 +
.../cpu/cpu0/topology/drawer_siblings_list | 1 +
.../cpu/cpu0/topology/physical_package_id | 1 +
.../cpu/cpu0/topology/thread_siblings | 1 +
.../cpu/cpu0/topology/thread_siblings_list | 1 +
.../linux-with-frequency/cpu/cpu1/online | 1 +
.../linux-with-frequency/cpu/cpu1/topology/book_id | 1 +
.../cpu/cpu1/topology/book_siblings | 1 +
.../cpu/cpu1/topology/book_siblings_list | 1 +
.../linux-with-frequency/cpu/cpu1/topology/core_id | 1 +
.../cpu/cpu1/topology/core_siblings | 1 +
.../cpu/cpu1/topology/core_siblings_list | 1 +
.../cpu/cpu1/topology/drawer_id | 1 +
.../cpu/cpu1/topology/drawer_siblings | 1 +
.../cpu/cpu1/topology/drawer_siblings_list | 1 +
.../cpu/cpu1/topology/physical_package_id | 1 +
.../cpu/cpu1/topology/thread_siblings | 1 +
.../cpu/cpu1/topology/thread_siblings_list | 1 +
.../linux-with-frequency/cpu/cpu2/online | 1 +
.../linux-with-frequency/cpu/cpu2/topology/book_id | 1 +
.../cpu/cpu2/topology/book_siblings | 1 +
.../cpu/cpu2/topology/book_siblings_list | 1 +
.../linux-with-frequency/cpu/cpu2/topology/core_id | 1 +
.../cpu/cpu2/topology/core_siblings | 1 +
.../cpu/cpu2/topology/core_siblings_list | 1 +
.../cpu/cpu2/topology/drawer_id | 1 +
.../cpu/cpu2/topology/drawer_siblings | 1 +
.../cpu/cpu2/topology/drawer_siblings_list | 1 +
.../cpu/cpu2/topology/physical_package_id | 1 +
.../cpu/cpu2/topology/thread_siblings | 1 +
.../cpu/cpu2/topology/thread_siblings_list | 1 +
.../linux-with-frequency/cpu/cpu3/online | 1 +
.../linux-with-frequency/cpu/cpu3/topology/book_id | 1 +
.../cpu/cpu3/topology/book_siblings | 1 +
.../cpu/cpu3/topology/book_siblings_list | 1 +
.../linux-with-frequency/cpu/cpu3/topology/core_id | 1 +
.../cpu/cpu3/topology/core_siblings | 1 +
.../cpu/cpu3/topology/core_siblings_list | 1 +
.../cpu/cpu3/topology/drawer_id | 1 +
.../cpu/cpu3/topology/drawer_siblings | 1 +
.../cpu/cpu3/topology/drawer_siblings_list | 1 +
.../cpu/cpu3/topology/physical_package_id | 1 +
.../cpu/cpu3/topology/thread_siblings | 1 +
.../cpu/cpu3/topology/thread_siblings_list | 1 +
.../linux-with-frequency/cpu/cpu4/online | 1 +
.../linux-with-frequency/cpu/cpu4/topology/book_id | 1 +
.../cpu/cpu4/topology/book_siblings | 1 +
.../cpu/cpu4/topology/book_siblings_list | 1 +
.../linux-with-frequency/cpu/cpu4/topology/core_id | 1 +
.../cpu/cpu4/topology/core_siblings | 1 +
.../cpu/cpu4/topology/core_siblings_list | 1 +
.../cpu/cpu4/topology/drawer_id | 1 +
.../cpu/cpu4/topology/drawer_siblings | 1 +
.../cpu/cpu4/topology/drawer_siblings_list | 1 +
.../cpu/cpu4/topology/physical_package_id | 1 +
.../cpu/cpu4/topology/thread_siblings | 1 +
.../cpu/cpu4/topology/thread_siblings_list | 1 +
.../linux-with-frequency/cpu/cpu5/online | 1 +
.../linux-with-frequency/cpu/cpu5/topology/book_id | 1 +
.../cpu/cpu5/topology/book_siblings | 1 +
.../cpu/cpu5/topology/book_siblings_list | 1 +
.../linux-with-frequency/cpu/cpu5/topology/core_id | 1 +
.../cpu/cpu5/topology/core_siblings | 1 +
.../cpu/cpu5/topology/core_siblings_list | 1 +
.../cpu/cpu5/topology/drawer_id | 1 +
.../cpu/cpu5/topology/drawer_siblings | 1 +
.../cpu/cpu5/topology/drawer_siblings_list | 1 +
.../cpu/cpu5/topology/physical_package_id | 1 +
.../cpu/cpu5/topology/thread_siblings | 1 +
.../cpu/cpu5/topology/thread_siblings_list | 1 +
.../linux-with-frequency/cpu/cpu6/online | 1 +
.../linux-with-frequency/cpu/cpu6/topology/book_id | 1 +
.../cpu/cpu6/topology/book_siblings | 1 +
.../cpu/cpu6/topology/book_siblings_list | 1 +
.../linux-with-frequency/cpu/cpu6/topology/core_id | 1 +
.../cpu/cpu6/topology/core_siblings | 1 +
.../cpu/cpu6/topology/core_siblings_list | 1 +
.../cpu/cpu6/topology/drawer_id | 1 +
.../cpu/cpu6/topology/drawer_siblings | 1 +
.../cpu/cpu6/topology/drawer_siblings_list | 1 +
.../cpu/cpu6/topology/physical_package_id | 1 +
.../cpu/cpu6/topology/thread_siblings | 1 +
.../cpu/cpu6/topology/thread_siblings_list | 1 +
.../linux-with-frequency/cpu/cpu7/online | 1 +
.../linux-with-frequency/cpu/cpu7/topology/book_id | 1 +
.../cpu/cpu7/topology/book_siblings | 1 +
.../cpu/cpu7/topology/book_siblings_list | 1 +
.../linux-with-frequency/cpu/cpu7/topology/core_id | 1 +
.../cpu/cpu7/topology/core_siblings | 1 +
.../cpu/cpu7/topology/core_siblings_list | 1 +
.../cpu/cpu7/topology/drawer_id | 1 +
.../cpu/cpu7/topology/drawer_siblings | 1 +
.../cpu/cpu7/topology/drawer_siblings_list | 1 +
.../cpu/cpu7/topology/physical_package_id | 1 +
.../cpu/cpu7/topology/thread_siblings | 1 +
.../cpu/cpu7/topology/thread_siblings_list | 1 +
.../virhostcpudata/linux-with-frequency/cpu/online | 1 +
.../linux-with-frequency/cpu/present | 1 +
tests/virhostcputest.c | 1 +
111 files changed, 193 insertions(+)
create mode 100644 tests/virhostcpudata/linux-s390x-with-frequency.cpuinfo
create mode 100644 tests/virhostcpudata/linux-s390x-with-frequency.expected
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu0/online
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu0/topology/book_id
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu0/topology/book_siblings
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu0/topology/book_siblings_list
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu0/topology/core_id
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu0/topology/core_siblings
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu0/topology/core_siblings_list
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu0/topology/drawer_id
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu0/topology/drawer_siblings
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu0/topology/drawer_siblings_list
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu0/topology/physical_package_id
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu0/topology/thread_siblings
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu0/topology/thread_siblings_list
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu1/online
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu1/topology/book_id
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu1/topology/book_siblings
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu1/topology/book_siblings_list
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu1/topology/core_id
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu1/topology/core_siblings
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu1/topology/core_siblings_list
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu1/topology/drawer_id
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu1/topology/drawer_siblings
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu1/topology/drawer_siblings_list
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu1/topology/physical_package_id
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu1/topology/thread_siblings
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu1/topology/thread_siblings_list
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu2/online
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu2/topology/book_id
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu2/topology/book_siblings
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu2/topology/book_siblings_list
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu2/topology/core_id
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu2/topology/core_siblings
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu2/topology/core_siblings_list
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu2/topology/drawer_id
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu2/topology/drawer_siblings
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu2/topology/drawer_siblings_list
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu2/topology/physical_package_id
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu2/topology/thread_siblings
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu2/topology/thread_siblings_list
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu3/online
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu3/topology/book_id
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu3/topology/book_siblings
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu3/topology/book_siblings_list
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu3/topology/core_id
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu3/topology/core_siblings
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu3/topology/core_siblings_list
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu3/topology/drawer_id
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu3/topology/drawer_siblings
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu3/topology/drawer_siblings_list
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu3/topology/physical_package_id
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu3/topology/thread_siblings
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu3/topology/thread_siblings_list
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu4/online
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu4/topology/book_id
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu4/topology/book_siblings
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu4/topology/book_siblings_list
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu4/topology/core_id
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu4/topology/core_siblings
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu4/topology/core_siblings_list
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu4/topology/drawer_id
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu4/topology/drawer_siblings
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu4/topology/drawer_siblings_list
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu4/topology/physical_package_id
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu4/topology/thread_siblings
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu4/topology/thread_siblings_list
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu5/online
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu5/topology/book_id
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu5/topology/book_siblings
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu5/topology/book_siblings_list
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu5/topology/core_id
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu5/topology/core_siblings
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu5/topology/core_siblings_list
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu5/topology/drawer_id
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu5/topology/drawer_siblings
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu5/topology/drawer_siblings_list
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu5/topology/physical_package_id
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu5/topology/thread_siblings
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu5/topology/thread_siblings_list
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu6/online
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu6/topology/book_id
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu6/topology/book_siblings
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu6/topology/book_siblings_list
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu6/topology/core_id
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu6/topology/core_siblings
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu6/topology/core_siblings_list
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu6/topology/drawer_id
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu6/topology/drawer_siblings
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu6/topology/drawer_siblings_list
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu6/topology/physical_package_id
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu6/topology/thread_siblings
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu6/topology/thread_siblings_list
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu7/online
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu7/topology/book_id
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu7/topology/book_siblings
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu7/topology/book_siblings_list
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu7/topology/core_id
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu7/topology/core_siblings
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu7/topology/core_siblings_list
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu7/topology/drawer_id
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu7/topology/drawer_siblings
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu7/topology/drawer_siblings_list
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu7/topology/physical_package_id
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu7/topology/thread_siblings
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/cpu7/topology/thread_siblings_list
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/online
create mode 100644 tests/virhostcpudata/linux-with-frequency/cpu/present
--
2.13.4