[PATCH v8 0/3] qemu: Support rbd namespace
by Han Han
Diff from v7:
- Squash a commit
- Rebase to latest code
v7: https://listman.redhat.com/archives/libvir-list/2020-November/msg00480.html
Han Han (3):
qemu_capabilities: Add QEMU_CAPS_RBD_NAMESPACE
conf: Support to parse rbd namespace from source name
qemu: Implement rbd namespace to the source name attribute
docs/formatdomain.rst | 16 ++++++
src/conf/domain_conf.c | 47 +++++++++++++++--
src/conf/storage_source_conf.c | 2 +
src/conf/storage_source_conf.h | 1 +
src/qemu/qemu_block.c | 1 +
src/qemu/qemu_capabilities.c | 2 +
src/qemu/qemu_capabilities.h | 1 +
src/qemu/qemu_domain.c | 8 +++
.../caps_5.0.0.aarch64.xml | 1 +
.../qemucapabilitiesdata/caps_5.0.0.ppc64.xml | 1 +
.../caps_5.0.0.riscv64.xml | 1 +
.../caps_5.0.0.x86_64.xml | 1 +
.../qemucapabilitiesdata/caps_5.1.0.sparc.xml | 1 +
.../caps_5.1.0.x86_64.xml | 1 +
.../caps_5.2.0.aarch64.xml | 1 +
.../qemucapabilitiesdata/caps_5.2.0.ppc64.xml | 1 +
.../caps_5.2.0.riscv64.xml | 1 +
.../qemucapabilitiesdata/caps_5.2.0.s390x.xml | 1 +
.../caps_5.2.0.x86_64.xml | 1 +
.../qemucapabilitiesdata/caps_6.0.0.s390x.xml | 1 +
.../caps_6.0.0.x86_64.xml | 1 +
.../caps_6.1.0.x86_64.xml | 1 +
...k-network-rbd-namespace.x86_64-latest.args | 38 ++++++++++++++
.../disk-network-rbd-namespace.xml | 40 +++++++++++++++
tests/qemuxml2argvtest.c | 1 +
...sk-network-rbd-namespace.x86_64-latest.xml | 50 +++++++++++++++++++
tests/qemuxml2xmltest.c | 1 +
27 files changed, 218 insertions(+), 4 deletions(-)
create mode 100644 tests/qemuxml2argvdata/disk-network-rbd-namespace.x86_64-latest.args
create mode 100644 tests/qemuxml2argvdata/disk-network-rbd-namespace.xml
create mode 100644 tests/qemuxml2xmloutdata/disk-network-rbd-namespace.x86_64-latest.xml
--
2.31.1
3 years, 6 months
[PATCH v1 00/10] capabilities: Expose HMAT
by Michal Privoznik
We allow configuring HMAT for domains since v6.6.0-rc1~249 (and
friends). Basically, HMAT is more fine grained description of
interconnects of NUMA nodes than basic NUMA distances. The former
describes bandwidths and latencies while the latter is some
dimensionless and normalized number.
Anyway, mgmt apps did not really know what values to set for HMAT
because we are not exposing them in capabilities because we were waiting
on kernel to expose them. And it just did.
In 09/10 I'm describing sysfs interface briefly and also mention that
there's no interpretation of links to memory side caches, yet. I'm
talking to kernel developers so we might get some movement there. But
also, I'm not sure whether it's worth the effort OR if there really is a
machine that has separate links to main memory and caches.
Here's link to ACPI spec:
https://uefi.org/sites/default/files/resources/ACPI_6_2.pdf
Look for "5.2.27.4 System Locality Latency and Bandwidth Information
Structure".
And here's link to sysfs docs:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/D...
Michal Prívozník (10):
tests: glib-ify vircaps2xmltest
schemas: Allow zero <cpu/> for capabilities
capabilities: Separate <cpu/> formatting into a function
numa_conf: Rename virDomainCache* to virNumaCache*
numa_conf: Expose virNumaCache formatter
capabilities: Expose NUMA memory side cache
numa_conf: Rename virDomainNumaInterconnect* to virNumaInterconnect*
numa_conf: Expose virNumaInterconnect formatter
capabilities: Expose NUMA interconnects
vircaps2xmltest: Introduce HMAT test case
build-aux/syntax-check.mk | 2 +-
docs/schemas/capability.rng | 11 +-
src/conf/capabilities.c | 388 ++++++++++++++++--
src/conf/capabilities.h | 5 +-
src/conf/numa_conf.c | 244 +++++------
src/conf/numa_conf.h | 81 ++--
src/libvirt_private.syms | 14 +-
src/libxl/libxl_capabilities.c | 3 +-
src/qemu/qemu_command.c | 30 +-
src/test/test_driver.c | 3 +-
tests/testutils.c | 3 +-
.../system/cpu/cpu0/cache/index0/level | 1 +
.../system/cpu/cpu0/cache/index1/level | 1 +
.../system/cpu/cpu0/cache/index2/level | 1 +
.../system/cpu/cpu0/cache/index3/id | 1 +
.../system/cpu/cpu0/cache/index3/level | 1 +
.../cpu/cpu0/cache/index3/shared_cpu_list | 1 +
.../system/cpu/cpu0/cache/index3/size | 1 +
.../system/cpu/cpu0/cache/index3/type | 1 +
.../system/cpu/cpu0/topology/core_id | 1 +
.../system/cpu/cpu0/topology/die_id | 1 +
.../cpu/cpu0/topology/physical_package_id | 1 +
.../cpu/cpu0/topology/thread_siblings_list | 1 +
.../system/cpu/cpu1/cache/index0/level | 1 +
.../system/cpu/cpu1/cache/index1/level | 1 +
.../system/cpu/cpu1/cache/index2/level | 1 +
.../system/cpu/cpu1/cache/index3/id | 1 +
.../system/cpu/cpu1/cache/index3/level | 1 +
.../cpu/cpu1/cache/index3/shared_cpu_list | 1 +
.../system/cpu/cpu1/cache/index3/size | 1 +
.../system/cpu/cpu1/cache/index3/type | 1 +
.../system/cpu/cpu1/topology/core_id | 1 +
.../system/cpu/cpu1/topology/die_id | 1 +
.../cpu/cpu1/topology/physical_package_id | 1 +
.../cpu/cpu1/topology/thread_siblings_list | 1 +
.../system/cpu/cpu10/cache/index0/level | 1 +
.../system/cpu/cpu10/cache/index1/level | 1 +
.../system/cpu/cpu10/cache/index2/level | 1 +
.../system/cpu/cpu10/cache/index3/id | 1 +
.../system/cpu/cpu10/cache/index3/level | 1 +
.../cpu/cpu10/cache/index3/shared_cpu_list | 1 +
.../system/cpu/cpu10/cache/index3/size | 1 +
.../system/cpu/cpu10/cache/index3/type | 1 +
.../system/cpu/cpu10/topology/core_id | 1 +
.../system/cpu/cpu10/topology/die_id | 1 +
.../cpu/cpu10/topology/physical_package_id | 1 +
.../cpu/cpu10/topology/thread_siblings_list | 1 +
.../system/cpu/cpu11/cache/index0/level | 1 +
.../system/cpu/cpu11/cache/index1/level | 1 +
.../system/cpu/cpu11/cache/index2/level | 1 +
.../system/cpu/cpu11/cache/index3/id | 1 +
.../system/cpu/cpu11/cache/index3/level | 1 +
.../cpu/cpu11/cache/index3/shared_cpu_list | 1 +
.../system/cpu/cpu11/cache/index3/size | 1 +
.../system/cpu/cpu11/cache/index3/type | 1 +
.../system/cpu/cpu11/topology/core_id | 1 +
.../system/cpu/cpu11/topology/die_id | 1 +
.../cpu/cpu11/topology/physical_package_id | 1 +
.../cpu/cpu11/topology/thread_siblings_list | 1 +
.../system/cpu/cpu12/cache/index0/level | 1 +
.../system/cpu/cpu12/cache/index1/level | 1 +
.../system/cpu/cpu12/cache/index2/level | 1 +
.../system/cpu/cpu12/cache/index3/id | 1 +
.../system/cpu/cpu12/cache/index3/level | 1 +
.../cpu/cpu12/cache/index3/shared_cpu_list | 1 +
.../system/cpu/cpu12/cache/index3/size | 1 +
.../system/cpu/cpu12/cache/index3/type | 1 +
.../system/cpu/cpu12/topology/core_id | 1 +
.../system/cpu/cpu12/topology/die_id | 1 +
.../cpu/cpu12/topology/physical_package_id | 1 +
.../cpu/cpu12/topology/thread_siblings_list | 1 +
.../system/cpu/cpu13/cache/index0/level | 1 +
.../system/cpu/cpu13/cache/index1/level | 1 +
.../system/cpu/cpu13/cache/index2/level | 1 +
.../system/cpu/cpu13/cache/index3/id | 1 +
.../system/cpu/cpu13/cache/index3/level | 1 +
.../cpu/cpu13/cache/index3/shared_cpu_list | 1 +
.../system/cpu/cpu13/cache/index3/size | 1 +
.../system/cpu/cpu13/cache/index3/type | 1 +
.../system/cpu/cpu13/topology/core_id | 1 +
.../system/cpu/cpu13/topology/die_id | 1 +
.../cpu/cpu13/topology/physical_package_id | 1 +
.../cpu/cpu13/topology/thread_siblings_list | 1 +
.../system/cpu/cpu14/cache/index0/level | 1 +
.../system/cpu/cpu14/cache/index1/level | 1 +
.../system/cpu/cpu14/cache/index2/level | 1 +
.../system/cpu/cpu14/cache/index3/id | 1 +
.../system/cpu/cpu14/cache/index3/level | 1 +
.../cpu/cpu14/cache/index3/shared_cpu_list | 1 +
.../system/cpu/cpu14/cache/index3/size | 1 +
.../system/cpu/cpu14/cache/index3/type | 1 +
.../system/cpu/cpu14/topology/core_id | 1 +
.../system/cpu/cpu14/topology/die_id | 1 +
.../cpu/cpu14/topology/physical_package_id | 1 +
.../cpu/cpu14/topology/thread_siblings_list | 1 +
.../system/cpu/cpu15/cache/index0/level | 1 +
.../system/cpu/cpu15/cache/index1/level | 1 +
.../system/cpu/cpu15/cache/index2/level | 1 +
.../system/cpu/cpu15/cache/index3/id | 1 +
.../system/cpu/cpu15/cache/index3/level | 1 +
.../cpu/cpu15/cache/index3/shared_cpu_list | 1 +
.../system/cpu/cpu15/cache/index3/size | 1 +
.../system/cpu/cpu15/cache/index3/type | 1 +
.../system/cpu/cpu15/topology/core_id | 1 +
.../system/cpu/cpu15/topology/die_id | 1 +
.../cpu/cpu15/topology/physical_package_id | 1 +
.../cpu/cpu15/topology/thread_siblings_list | 1 +
.../system/cpu/cpu16/cache/index0/level | 1 +
.../system/cpu/cpu16/cache/index1/level | 1 +
.../system/cpu/cpu16/cache/index2/level | 1 +
.../system/cpu/cpu16/cache/index3/id | 1 +
.../system/cpu/cpu16/cache/index3/level | 1 +
.../cpu/cpu16/cache/index3/shared_cpu_list | 1 +
.../system/cpu/cpu16/cache/index3/size | 1 +
.../system/cpu/cpu16/cache/index3/type | 1 +
.../system/cpu/cpu16/topology/core_id | 1 +
.../system/cpu/cpu16/topology/die_id | 1 +
.../cpu/cpu16/topology/physical_package_id | 1 +
.../cpu/cpu16/topology/thread_siblings_list | 1 +
.../system/cpu/cpu17/cache/index0/level | 1 +
.../system/cpu/cpu17/cache/index1/level | 1 +
.../system/cpu/cpu17/cache/index2/level | 1 +
.../system/cpu/cpu17/cache/index3/id | 1 +
.../system/cpu/cpu17/cache/index3/level | 1 +
.../cpu/cpu17/cache/index3/shared_cpu_list | 1 +
.../system/cpu/cpu17/cache/index3/size | 1 +
.../system/cpu/cpu17/cache/index3/type | 1 +
.../system/cpu/cpu17/topology/core_id | 1 +
.../system/cpu/cpu17/topology/die_id | 1 +
.../cpu/cpu17/topology/physical_package_id | 1 +
.../cpu/cpu17/topology/thread_siblings_list | 1 +
.../system/cpu/cpu18/cache/index0/level | 1 +
.../system/cpu/cpu18/cache/index1/level | 1 +
.../system/cpu/cpu18/cache/index2/level | 1 +
.../system/cpu/cpu18/cache/index3/id | 1 +
.../system/cpu/cpu18/cache/index3/level | 1 +
.../cpu/cpu18/cache/index3/shared_cpu_list | 1 +
.../system/cpu/cpu18/cache/index3/size | 1 +
.../system/cpu/cpu18/cache/index3/type | 1 +
.../system/cpu/cpu18/topology/core_id | 1 +
.../system/cpu/cpu18/topology/die_id | 1 +
.../cpu/cpu18/topology/physical_package_id | 1 +
.../cpu/cpu18/topology/thread_siblings_list | 1 +
.../system/cpu/cpu19/cache/index0/level | 1 +
.../system/cpu/cpu19/cache/index1/level | 1 +
.../system/cpu/cpu19/cache/index2/level | 1 +
.../system/cpu/cpu19/cache/index3/id | 1 +
.../system/cpu/cpu19/cache/index3/level | 1 +
.../cpu/cpu19/cache/index3/shared_cpu_list | 1 +
.../system/cpu/cpu19/cache/index3/size | 1 +
.../system/cpu/cpu19/cache/index3/type | 1 +
.../system/cpu/cpu19/topology/core_id | 1 +
.../system/cpu/cpu19/topology/die_id | 1 +
.../cpu/cpu19/topology/physical_package_id | 1 +
.../cpu/cpu19/topology/thread_siblings_list | 1 +
.../system/cpu/cpu2/cache/index0/level | 1 +
.../system/cpu/cpu2/cache/index1/level | 1 +
.../system/cpu/cpu2/cache/index2/level | 1 +
.../system/cpu/cpu2/cache/index3/id | 1 +
.../system/cpu/cpu2/cache/index3/level | 1 +
.../cpu/cpu2/cache/index3/shared_cpu_list | 1 +
.../system/cpu/cpu2/cache/index3/size | 1 +
.../system/cpu/cpu2/cache/index3/type | 1 +
.../system/cpu/cpu2/topology/core_id | 1 +
.../system/cpu/cpu2/topology/die_id | 1 +
.../cpu/cpu2/topology/physical_package_id | 1 +
.../cpu/cpu2/topology/thread_siblings_list | 1 +
.../system/cpu/cpu20/cache/index0/level | 1 +
.../system/cpu/cpu20/cache/index1/level | 1 +
.../system/cpu/cpu20/cache/index2/level | 1 +
.../system/cpu/cpu20/cache/index3/id | 1 +
.../system/cpu/cpu20/cache/index3/level | 1 +
.../cpu/cpu20/cache/index3/shared_cpu_list | 1 +
.../system/cpu/cpu20/cache/index3/size | 1 +
.../system/cpu/cpu20/cache/index3/type | 1 +
.../system/cpu/cpu20/topology/core_id | 1 +
.../system/cpu/cpu20/topology/die_id | 1 +
.../cpu/cpu20/topology/physical_package_id | 1 +
.../cpu/cpu20/topology/thread_siblings_list | 1 +
.../system/cpu/cpu21/cache/index0/level | 1 +
.../system/cpu/cpu21/cache/index1/level | 1 +
.../system/cpu/cpu21/cache/index2/level | 1 +
.../system/cpu/cpu21/cache/index3/id | 1 +
.../system/cpu/cpu21/cache/index3/level | 1 +
.../cpu/cpu21/cache/index3/shared_cpu_list | 1 +
.../system/cpu/cpu21/cache/index3/size | 1 +
.../system/cpu/cpu21/cache/index3/type | 1 +
.../system/cpu/cpu21/topology/core_id | 1 +
.../system/cpu/cpu21/topology/die_id | 1 +
.../cpu/cpu21/topology/physical_package_id | 1 +
.../cpu/cpu21/topology/thread_siblings_list | 1 +
.../system/cpu/cpu22/cache/index0/level | 1 +
.../system/cpu/cpu22/cache/index1/level | 1 +
.../system/cpu/cpu22/cache/index2/level | 1 +
.../system/cpu/cpu22/cache/index3/id | 1 +
.../system/cpu/cpu22/cache/index3/level | 1 +
.../cpu/cpu22/cache/index3/shared_cpu_list | 1 +
.../system/cpu/cpu22/cache/index3/size | 1 +
.../system/cpu/cpu22/cache/index3/type | 1 +
.../system/cpu/cpu22/topology/core_id | 1 +
.../system/cpu/cpu22/topology/die_id | 1 +
.../cpu/cpu22/topology/physical_package_id | 1 +
.../cpu/cpu22/topology/thread_siblings_list | 1 +
.../system/cpu/cpu23/cache/index0/level | 1 +
.../system/cpu/cpu23/cache/index1/level | 1 +
.../system/cpu/cpu23/cache/index2/level | 1 +
.../system/cpu/cpu23/cache/index3/id | 1 +
.../system/cpu/cpu23/cache/index3/level | 1 +
.../cpu/cpu23/cache/index3/shared_cpu_list | 1 +
.../system/cpu/cpu23/cache/index3/size | 1 +
.../system/cpu/cpu23/cache/index3/type | 1 +
.../system/cpu/cpu23/topology/core_id | 1 +
.../system/cpu/cpu23/topology/die_id | 1 +
.../cpu/cpu23/topology/physical_package_id | 1 +
.../cpu/cpu23/topology/thread_siblings_list | 1 +
.../system/cpu/cpu3/cache/index0/level | 1 +
.../system/cpu/cpu3/cache/index1/level | 1 +
.../system/cpu/cpu3/cache/index2/level | 1 +
.../system/cpu/cpu3/cache/index3/id | 1 +
.../system/cpu/cpu3/cache/index3/level | 1 +
.../cpu/cpu3/cache/index3/shared_cpu_list | 1 +
.../system/cpu/cpu3/cache/index3/size | 1 +
.../system/cpu/cpu3/cache/index3/type | 1 +
.../system/cpu/cpu3/topology/core_id | 1 +
.../system/cpu/cpu3/topology/die_id | 1 +
.../cpu/cpu3/topology/physical_package_id | 1 +
.../cpu/cpu3/topology/thread_siblings_list | 1 +
.../system/cpu/cpu4/cache/index0/level | 1 +
.../system/cpu/cpu4/cache/index1/level | 1 +
.../system/cpu/cpu4/cache/index2/level | 1 +
.../system/cpu/cpu4/cache/index3/id | 1 +
.../system/cpu/cpu4/cache/index3/level | 1 +
.../cpu/cpu4/cache/index3/shared_cpu_list | 1 +
.../system/cpu/cpu4/cache/index3/size | 1 +
.../system/cpu/cpu4/cache/index3/type | 1 +
.../system/cpu/cpu4/topology/core_id | 1 +
.../system/cpu/cpu4/topology/die_id | 1 +
.../cpu/cpu4/topology/physical_package_id | 1 +
.../cpu/cpu4/topology/thread_siblings_list | 1 +
.../system/cpu/cpu5/cache/index0/level | 1 +
.../system/cpu/cpu5/cache/index1/level | 1 +
.../system/cpu/cpu5/cache/index2/level | 1 +
.../system/cpu/cpu5/cache/index3/id | 1 +
.../system/cpu/cpu5/cache/index3/level | 1 +
.../cpu/cpu5/cache/index3/shared_cpu_list | 1 +
.../system/cpu/cpu5/cache/index3/size | 1 +
.../system/cpu/cpu5/cache/index3/type | 1 +
.../system/cpu/cpu5/topology/core_id | 1 +
.../system/cpu/cpu5/topology/die_id | 1 +
.../cpu/cpu5/topology/physical_package_id | 1 +
.../cpu/cpu5/topology/thread_siblings_list | 1 +
.../system/cpu/cpu6/cache/index0/level | 1 +
.../system/cpu/cpu6/cache/index1/level | 1 +
.../system/cpu/cpu6/cache/index2/level | 1 +
.../system/cpu/cpu6/cache/index3/id | 1 +
.../system/cpu/cpu6/cache/index3/level | 1 +
.../cpu/cpu6/cache/index3/shared_cpu_list | 1 +
.../system/cpu/cpu6/cache/index3/size | 1 +
.../system/cpu/cpu6/cache/index3/type | 1 +
.../system/cpu/cpu6/topology/core_id | 1 +
.../system/cpu/cpu6/topology/die_id | 1 +
.../cpu/cpu6/topology/physical_package_id | 1 +
.../cpu/cpu6/topology/thread_siblings_list | 1 +
.../system/cpu/cpu7/cache/index0/level | 1 +
.../system/cpu/cpu7/cache/index1/level | 1 +
.../system/cpu/cpu7/cache/index2/level | 1 +
.../system/cpu/cpu7/cache/index3/id | 1 +
.../system/cpu/cpu7/cache/index3/level | 1 +
.../cpu/cpu7/cache/index3/shared_cpu_list | 1 +
.../system/cpu/cpu7/cache/index3/size | 1 +
.../system/cpu/cpu7/cache/index3/type | 1 +
.../system/cpu/cpu7/topology/core_id | 1 +
.../system/cpu/cpu7/topology/die_id | 1 +
.../cpu/cpu7/topology/physical_package_id | 1 +
.../cpu/cpu7/topology/thread_siblings_list | 1 +
.../system/cpu/cpu8/cache/index0/level | 1 +
.../system/cpu/cpu8/cache/index1/level | 1 +
.../system/cpu/cpu8/cache/index2/level | 1 +
.../system/cpu/cpu8/cache/index3/id | 1 +
.../system/cpu/cpu8/cache/index3/level | 1 +
.../cpu/cpu8/cache/index3/shared_cpu_list | 1 +
.../system/cpu/cpu8/cache/index3/size | 1 +
.../system/cpu/cpu8/cache/index3/type | 1 +
.../system/cpu/cpu8/topology/core_id | 1 +
.../system/cpu/cpu8/topology/die_id | 1 +
.../cpu/cpu8/topology/physical_package_id | 1 +
.../cpu/cpu8/topology/thread_siblings_list | 1 +
.../system/cpu/cpu9/cache/index0/level | 1 +
.../system/cpu/cpu9/cache/index1/level | 1 +
.../system/cpu/cpu9/cache/index2/level | 1 +
.../system/cpu/cpu9/cache/index3/id | 1 +
.../system/cpu/cpu9/cache/index3/level | 1 +
.../cpu/cpu9/cache/index3/shared_cpu_list | 1 +
.../system/cpu/cpu9/cache/index3/size | 1 +
.../system/cpu/cpu9/cache/index3/type | 1 +
.../system/cpu/cpu9/topology/core_id | 1 +
.../system/cpu/cpu9/topology/die_id | 1 +
.../cpu/cpu9/topology/physical_package_id | 1 +
.../cpu/cpu9/topology/thread_siblings_list | 1 +
.../linux-hmat/system/cpu/online | 1 +
.../node/node0/access0/initiators/node0 | 1 +
.../node0/access0/initiators/read_bandwidth | 1 +
.../node0/access0/initiators/read_latency | 1 +
.../node0/access0/initiators/write_bandwidth | 1 +
.../node0/access0/initiators/write_latency | 1 +
.../system/node/node0/access0/targets/node0 | 1 +
.../system/node/node0/access0/targets/node1 | 1 +
.../node/node0/access1/initiators/node0 | 1 +
.../node0/access1/initiators/read_bandwidth | 1 +
.../node0/access1/initiators/read_latency | 1 +
.../node0/access1/initiators/write_bandwidth | 1 +
.../node0/access1/initiators/write_latency | 1 +
.../system/node/node0/access1/targets/node0 | 1 +
.../system/node/node0/access1/targets/node1 | 1 +
.../linux-hmat/system/node/node0/cpulist | 1 +
.../linux-hmat/system/node/node0/distance | 1 +
.../hugepages-1048576kB/free_hugepages | 1 +
.../hugepages-1048576kB/nr_hugepages | 1 +
.../hugepages-1048576kB/surplus_hugepages | 1 +
.../hugepages/hugepages-2048kB/free_hugepages | 1 +
.../hugepages/hugepages-2048kB/nr_hugepages | 1 +
.../hugepages-2048kB/surplus_hugepages | 1 +
.../node0/memory_side_cache/index1/indexing | 1 +
.../node0/memory_side_cache/index1/line_size | 1 +
.../node/node0/memory_side_cache/index1/size | 1 +
.../memory_side_cache/index1/write_policy | 1 +
.../node0/memory_side_cache/index2/indexing | 1 +
.../node0/memory_side_cache/index2/line_size | 1 +
.../node/node0/memory_side_cache/index2/size | 1 +
.../memory_side_cache/index2/write_policy | 1 +
.../node/node1/access0/initiators/node0 | 1 +
.../node1/access0/initiators/read_bandwidth | 1 +
.../node1/access0/initiators/read_latency | 1 +
.../node1/access0/initiators/write_bandwidth | 1 +
.../node1/access0/initiators/write_latency | 1 +
.../node/node1/access1/initiators/node0 | 1 +
.../node1/access1/initiators/read_bandwidth | 1 +
.../node1/access1/initiators/read_latency | 1 +
.../node1/access1/initiators/write_bandwidth | 1 +
.../node1/access1/initiators/write_latency | 1 +
.../linux-hmat/system/node/node1/cpulist | 1 +
.../linux-hmat/system/node/node1/distance | 1 +
.../hugepages-1048576kB/free_hugepages | 1 +
.../hugepages-1048576kB/nr_hugepages | 1 +
.../hugepages-1048576kB/surplus_hugepages | 1 +
.../hugepages/hugepages-2048kB/free_hugepages | 1 +
.../hugepages/hugepages-2048kB/nr_hugepages | 1 +
.../hugepages-2048kB/surplus_hugepages | 1 +
.../node1/memory_side_cache/index1/indexing | 1 +
.../node1/memory_side_cache/index1/line_size | 1 +
.../node/node1/memory_side_cache/index1/size | 1 +
.../memory_side_cache/index1/write_policy | 1 +
.../linux-hmat/system/node/online | 1 +
tests/vircaps2xmldata/vircaps-x86_64-hmat.xml | 105 +++++
tests/vircaps2xmltest.c | 33 +-
355 files changed, 1042 insertions(+), 222 deletions(-)
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu0/cache/index0/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu0/cache/index1/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu0/cache/index2/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu0/cache/index3/id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu0/cache/index3/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu0/cache/index3/shared_cpu_list
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu0/cache/index3/size
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu0/cache/index3/type
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu0/topology/core_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu0/topology/die_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu0/topology/physical_package_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu0/topology/thread_siblings_list
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu1/cache/index0/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu1/cache/index1/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu1/cache/index2/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu1/cache/index3/id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu1/cache/index3/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu1/cache/index3/shared_cpu_list
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu1/cache/index3/size
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu1/cache/index3/type
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu1/topology/core_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu1/topology/die_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu1/topology/physical_package_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu1/topology/thread_siblings_list
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu10/cache/index0/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu10/cache/index1/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu10/cache/index2/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu10/cache/index3/id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu10/cache/index3/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu10/cache/index3/shared_cpu_list
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu10/cache/index3/size
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu10/cache/index3/type
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu10/topology/core_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu10/topology/die_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu10/topology/physical_package_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu10/topology/thread_siblings_list
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu11/cache/index0/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu11/cache/index1/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu11/cache/index2/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu11/cache/index3/id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu11/cache/index3/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu11/cache/index3/shared_cpu_list
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu11/cache/index3/size
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu11/cache/index3/type
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu11/topology/core_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu11/topology/die_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu11/topology/physical_package_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu11/topology/thread_siblings_list
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu12/cache/index0/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu12/cache/index1/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu12/cache/index2/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu12/cache/index3/id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu12/cache/index3/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu12/cache/index3/shared_cpu_list
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu12/cache/index3/size
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu12/cache/index3/type
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu12/topology/core_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu12/topology/die_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu12/topology/physical_package_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu12/topology/thread_siblings_list
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu13/cache/index0/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu13/cache/index1/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu13/cache/index2/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu13/cache/index3/id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu13/cache/index3/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu13/cache/index3/shared_cpu_list
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu13/cache/index3/size
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu13/cache/index3/type
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu13/topology/core_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu13/topology/die_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu13/topology/physical_package_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu13/topology/thread_siblings_list
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu14/cache/index0/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu14/cache/index1/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu14/cache/index2/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu14/cache/index3/id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu14/cache/index3/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu14/cache/index3/shared_cpu_list
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu14/cache/index3/size
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu14/cache/index3/type
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu14/topology/core_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu14/topology/die_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu14/topology/physical_package_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu14/topology/thread_siblings_list
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu15/cache/index0/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu15/cache/index1/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu15/cache/index2/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu15/cache/index3/id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu15/cache/index3/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu15/cache/index3/shared_cpu_list
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu15/cache/index3/size
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu15/cache/index3/type
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu15/topology/core_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu15/topology/die_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu15/topology/physical_package_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu15/topology/thread_siblings_list
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu16/cache/index0/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu16/cache/index1/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu16/cache/index2/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu16/cache/index3/id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu16/cache/index3/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu16/cache/index3/shared_cpu_list
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu16/cache/index3/size
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu16/cache/index3/type
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu16/topology/core_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu16/topology/die_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu16/topology/physical_package_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu16/topology/thread_siblings_list
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu17/cache/index0/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu17/cache/index1/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu17/cache/index2/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu17/cache/index3/id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu17/cache/index3/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu17/cache/index3/shared_cpu_list
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu17/cache/index3/size
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu17/cache/index3/type
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu17/topology/core_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu17/topology/die_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu17/topology/physical_package_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu17/topology/thread_siblings_list
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu18/cache/index0/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu18/cache/index1/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu18/cache/index2/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu18/cache/index3/id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu18/cache/index3/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu18/cache/index3/shared_cpu_list
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu18/cache/index3/size
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu18/cache/index3/type
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu18/topology/core_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu18/topology/die_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu18/topology/physical_package_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu18/topology/thread_siblings_list
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu19/cache/index0/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu19/cache/index1/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu19/cache/index2/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu19/cache/index3/id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu19/cache/index3/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu19/cache/index3/shared_cpu_list
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu19/cache/index3/size
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu19/cache/index3/type
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu19/topology/core_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu19/topology/die_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu19/topology/physical_package_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu19/topology/thread_siblings_list
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu2/cache/index0/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu2/cache/index1/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu2/cache/index2/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu2/cache/index3/id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu2/cache/index3/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu2/cache/index3/shared_cpu_list
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu2/cache/index3/size
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu2/cache/index3/type
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu2/topology/core_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu2/topology/die_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu2/topology/physical_package_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu2/topology/thread_siblings_list
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu20/cache/index0/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu20/cache/index1/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu20/cache/index2/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu20/cache/index3/id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu20/cache/index3/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu20/cache/index3/shared_cpu_list
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu20/cache/index3/size
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu20/cache/index3/type
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu20/topology/core_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu20/topology/die_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu20/topology/physical_package_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu20/topology/thread_siblings_list
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu21/cache/index0/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu21/cache/index1/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu21/cache/index2/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu21/cache/index3/id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu21/cache/index3/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu21/cache/index3/shared_cpu_list
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu21/cache/index3/size
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu21/cache/index3/type
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu21/topology/core_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu21/topology/die_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu21/topology/physical_package_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu21/topology/thread_siblings_list
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu22/cache/index0/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu22/cache/index1/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu22/cache/index2/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu22/cache/index3/id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu22/cache/index3/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu22/cache/index3/shared_cpu_list
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu22/cache/index3/size
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu22/cache/index3/type
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu22/topology/core_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu22/topology/die_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu22/topology/physical_package_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu22/topology/thread_siblings_list
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu23/cache/index0/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu23/cache/index1/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu23/cache/index2/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu23/cache/index3/id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu23/cache/index3/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu23/cache/index3/shared_cpu_list
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu23/cache/index3/size
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu23/cache/index3/type
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu23/topology/core_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu23/topology/die_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu23/topology/physical_package_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu23/topology/thread_siblings_list
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu3/cache/index0/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu3/cache/index1/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu3/cache/index2/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu3/cache/index3/id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu3/cache/index3/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu3/cache/index3/shared_cpu_list
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu3/cache/index3/size
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu3/cache/index3/type
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu3/topology/core_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu3/topology/die_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu3/topology/physical_package_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu3/topology/thread_siblings_list
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu4/cache/index0/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu4/cache/index1/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu4/cache/index2/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu4/cache/index3/id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu4/cache/index3/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu4/cache/index3/shared_cpu_list
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu4/cache/index3/size
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu4/cache/index3/type
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu4/topology/core_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu4/topology/die_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu4/topology/physical_package_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu4/topology/thread_siblings_list
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu5/cache/index0/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu5/cache/index1/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu5/cache/index2/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu5/cache/index3/id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu5/cache/index3/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu5/cache/index3/shared_cpu_list
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu5/cache/index3/size
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu5/cache/index3/type
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu5/topology/core_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu5/topology/die_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu5/topology/physical_package_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu5/topology/thread_siblings_list
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu6/cache/index0/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu6/cache/index1/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu6/cache/index2/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu6/cache/index3/id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu6/cache/index3/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu6/cache/index3/shared_cpu_list
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu6/cache/index3/size
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu6/cache/index3/type
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu6/topology/core_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu6/topology/die_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu6/topology/physical_package_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu6/topology/thread_siblings_list
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu7/cache/index0/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu7/cache/index1/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu7/cache/index2/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu7/cache/index3/id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu7/cache/index3/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu7/cache/index3/shared_cpu_list
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu7/cache/index3/size
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu7/cache/index3/type
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu7/topology/core_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu7/topology/die_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu7/topology/physical_package_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu7/topology/thread_siblings_list
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu8/cache/index0/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu8/cache/index1/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu8/cache/index2/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu8/cache/index3/id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu8/cache/index3/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu8/cache/index3/shared_cpu_list
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu8/cache/index3/size
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu8/cache/index3/type
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu8/topology/core_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu8/topology/die_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu8/topology/physical_package_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu8/topology/thread_siblings_list
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu9/cache/index0/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu9/cache/index1/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu9/cache/index2/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu9/cache/index3/id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu9/cache/index3/level
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu9/cache/index3/shared_cpu_list
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu9/cache/index3/size
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu9/cache/index3/type
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu9/topology/core_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu9/topology/die_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu9/topology/physical_package_id
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/cpu9/topology/thread_siblings_list
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/cpu/online
create mode 120000 tests/vircaps2xmldata/linux-hmat/system/node/node0/access0/initiators/node0
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/node/node0/access0/initiators/read_bandwidth
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/node/node0/access0/initiators/read_latency
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/node/node0/access0/initiators/write_bandwidth
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/node/node0/access0/initiators/write_latency
create mode 120000 tests/vircaps2xmldata/linux-hmat/system/node/node0/access0/targets/node0
create mode 120000 tests/vircaps2xmldata/linux-hmat/system/node/node0/access0/targets/node1
create mode 120000 tests/vircaps2xmldata/linux-hmat/system/node/node0/access1/initiators/node0
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/node/node0/access1/initiators/read_bandwidth
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/node/node0/access1/initiators/read_latency
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/node/node0/access1/initiators/write_bandwidth
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/node/node0/access1/initiators/write_latency
create mode 120000 tests/vircaps2xmldata/linux-hmat/system/node/node0/access1/targets/node0
create mode 120000 tests/vircaps2xmldata/linux-hmat/system/node/node0/access1/targets/node1
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/node/node0/cpulist
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/node/node0/distance
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/node/node0/hugepages/hugepages-1048576kB/free_hugepages
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/node/node0/hugepages/hugepages-1048576kB/surplus_hugepages
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/node/node0/hugepages/hugepages-2048kB/free_hugepages
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/node/node0/hugepages/hugepages-2048kB/surplus_hugepages
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/node/node0/memory_side_cache/index1/indexing
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/node/node0/memory_side_cache/index1/line_size
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/node/node0/memory_side_cache/index1/size
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/node/node0/memory_side_cache/index1/write_policy
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/node/node0/memory_side_cache/index2/indexing
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/node/node0/memory_side_cache/index2/line_size
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/node/node0/memory_side_cache/index2/size
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/node/node0/memory_side_cache/index2/write_policy
create mode 120000 tests/vircaps2xmldata/linux-hmat/system/node/node1/access0/initiators/node0
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/node/node1/access0/initiators/read_bandwidth
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/node/node1/access0/initiators/read_latency
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/node/node1/access0/initiators/write_bandwidth
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/node/node1/access0/initiators/write_latency
create mode 120000 tests/vircaps2xmldata/linux-hmat/system/node/node1/access1/initiators/node0
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/node/node1/access1/initiators/read_bandwidth
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/node/node1/access1/initiators/read_latency
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/node/node1/access1/initiators/write_bandwidth
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/node/node1/access1/initiators/write_latency
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/node/node1/cpulist
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/node/node1/distance
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/node/node1/hugepages/hugepages-1048576kB/free_hugepages
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/node/node1/hugepages/hugepages-1048576kB/nr_hugepages
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/node/node1/hugepages/hugepages-1048576kB/surplus_hugepages
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/node/node1/hugepages/hugepages-2048kB/free_hugepages
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/node/node1/hugepages/hugepages-2048kB/surplus_hugepages
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/node/node1/memory_side_cache/index1/indexing
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/node/node1/memory_side_cache/index1/line_size
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/node/node1/memory_side_cache/index1/size
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/node/node1/memory_side_cache/index1/write_policy
create mode 100644 tests/vircaps2xmldata/linux-hmat/system/node/online
create mode 100644 tests/vircaps2xmldata/vircaps-x86_64-hmat.xml
--
2.31.1
3 years, 6 months
Libvirt CI for running functional tests
by Praveen K Paladugu
Hi,
While developing cloud-hypervisor driver for libvirt, we re-fitted
cloud-hypervisor project's CI to libvirt. This CI was built on Rust and
currently supports VM boot up tests.
https://github.com/cloud-hypervisor/libvirt/tree/ch/ch_integration_tests
We are working on extending this CI to incorporate more functional
tests: Networking, Thread Pinning etc. We are curious to know if libvirt
project has any plans to setup a CI to run functional tests.
I noticed https://gitlab.com/libvirt/libvirt-ci effort which focuses on
running builds against various platforms and formats. Could you please
clarify libvirt project's plan for setting up a CI to run functional tests.
Regards,
Praveen K Paladugu
3 years, 6 months
[PATCH 0/3] Replace some libvirt handling functions with GLib APIs
by Luke Yue
Note that the g_find_program_in_path() will search file in current dir
when PATH env is not set, while virFindFileInPath() won't.
The virFileAbsPath() could be replaced by g_canonicalize_file() when not
take the return value into account, and it just simply return 0 now, so
maybe remove the function is a better choice?
Related issue: https://gitlab.com/libvirt/libvirt/-/issues/12
Luke Yue (3):
virfile: Use g_build_filename() when building paths
virfile: Simplify virFindFileInPath() with g_find_program_in_path()
virfile: Use g_canonicalize_file() to simplify virFileAbsPath()
src/util/virfile.c | 71 +++++++++++++---------------------------------
1 file changed, 19 insertions(+), 52 deletions(-)
--
2.31.1
3 years, 6 months
[PATCH v2] Add basic driver for the Cloud-Hypervisor
by William Douglas
Cloud-Hypervisor is a KVM virtualization using hypervisor. It
functions similarly to qemu and the libvirt Cloud-Hypervisor driver
uses a very similar structure to the libvirt driver.
The biggest difference from the libvirt perspective is that the
"monitor" socket is seperated into two sockets one that commands are
issued to and one that events are notified from. The current
implementation only uses the command socket (running over a REST API
with json encoded data) with future changes to add support for the
event socket (to better handle shutdowns from inside the VM).
This patch adds support for the following initial VM actions using the
Cloud-Hypervsior API:
* vm.create
* vm.delete
* vm.boot
* vm.shutdown
* vm.reboot
* vm.pause
* vm.resume
To use the Cloud-Hypervisor driver, the v15.0 release of
Cloud-Hypervisor is required to be installed.
Some additional notes:
* The curl handle is persistent but not useful to detect ch process
shutdown/crash (a future patch will address this shortcoming)
* On a 64-bit host Cloud-Hypervisor needs to support PVH and so can
emulate 32-bit mode but it isn't fully tested (a 64-bit kernel and
32-bit userspace is fine, a 32-bit kernel isn't validated)
Signed-off-by: William Douglas <william.douglas(a)intel.com>
---
The original RFC is
https://listman.redhat.com/archives/libvir-list/2020-August/msg01040.html
The v1 patch is
https://listman.redhat.com/archives/libvir-list/2021-April/msg01271.html
Changes since v1:
* Removed specfile changes as cloud-hypervisor isn't included in
distros yet
* Clarified booting a 32-bit kernel is untested
* Removed unused ch_cmd variable from a previous refactor
* Updated version detection based on latest cloud-hypervisor release
* Update the libvirt version cloud-hypervisor driver features were
added in
* Updated the supported URI schemas to remove "Cloud-Hypervisor:///"
* Updated build system to only build the cloud-hypervisor driver on
x86_64 and aarch64
---
docs/drivers.html.in | 1 +
docs/drvch.rst | 55 ++
docs/meson.build | 1 +
include/libvirt/virterror.h | 1 +
meson.build | 43 ++
meson_options.txt | 3 +
po/POTFILES.in | 5 +
src/ch/ch_conf.c | 251 ++++++++
src/ch/ch_conf.h | 85 +++
src/ch/ch_domain.c | 203 ++++++
src/ch/ch_domain.h | 65 ++
src/ch/ch_driver.c | 930 ++++++++++++++++++++++++++++
src/ch/ch_driver.h | 24 +
src/ch/ch_monitor.c | 837 +++++++++++++++++++++++++
src/ch/ch_monitor.h | 60 ++
src/ch/ch_process.c | 126 ++++
src/ch/ch_process.h | 31 +
src/ch/meson.build | 74 +++
src/ch/virtchd.service.in | 47 ++
src/ch/virtchd.sysconf | 3 +
src/meson.build | 1 +
src/remote/remote_daemon.c | 4 +
src/remote/remote_daemon_dispatch.c | 3 +-
src/util/virerror.c | 1 +
tools/virsh.c | 3 +
25 files changed, 2856 insertions(+), 1 deletion(-)
create mode 100644 docs/drvch.rst
create mode 100644 src/ch/ch_conf.c
create mode 100644 src/ch/ch_conf.h
create mode 100644 src/ch/ch_domain.c
create mode 100644 src/ch/ch_domain.h
create mode 100644 src/ch/ch_driver.c
create mode 100644 src/ch/ch_driver.h
create mode 100644 src/ch/ch_monitor.c
create mode 100644 src/ch/ch_monitor.h
create mode 100644 src/ch/ch_process.c
create mode 100644 src/ch/ch_process.h
create mode 100644 src/ch/meson.build
create mode 100644 src/ch/virtchd.service.in
create mode 100644 src/ch/virtchd.sysconf
diff --git a/docs/drivers.html.in b/docs/drivers.html.in
index 34f98f60b6..824604998e 100644
--- a/docs/drivers.html.in
+++ b/docs/drivers.html.in
@@ -37,6 +37,7 @@
<li><strong><a href="drvhyperv.html">Microsoft Hyper-V</a></strong></li>
<li><strong><a href="drvvirtuozzo.html">Virtuozzo</a></strong></li>
<li><strong><a href="drvbhyve.html">Bhyve</a></strong> - The BSD Hypervisor</li>
+ <li><strong><a href="drvch.html">Cloud Hypervisor</a></strong></li>
</ul>
</body>
diff --git a/docs/drvch.rst b/docs/drvch.rst
new file mode 100644
index 0000000000..bb13599e6f
--- /dev/null
+++ b/docs/drvch.rst
@@ -0,0 +1,55 @@
+=======================
+Cloud Hypervisor driver
+=======================
+
+.. contents::
+
+Cloud Hypervisor is an open source Virtual Machine Monitor (VMM) that
+runs on top of KVM. The project focuses on exclusively running modern,
+cloud workloads, on top of a limited set of hardware architectures and
+platforms. Cloud workloads refers to those that are usually run by
+customers inside a cloud provider. For our purposes this means modern
+operating systems with most I/O handled by paravirtualised devices
+(i.e. virtio), no requirement for legacy devices, and 64-bit CPUs.
+
+The libvirt Cloud Hypervisor driver is intended to be run as a session
+driver without privileges. The cloud-hypervisor binary itself should be
+``setcap cap_net_admin+ep`` (in order to create tap interfaces).
+
+Expected connection URI would be
+
+``ch:///session``
+
+
+Example guest domain XML configurations
+=======================================
+
+The Cloud Hypervisor driver in libvirt is in its early stage under active
+development only supporting a limited number of Cloud Hypervisor features.
+
+Firmware is from
+`hypervisor-fw <https://github.com/cloud-hypervisor/rust-hypervisor-firmware/releases>`__
+
+**Note: Only virtio devices are supported**
+
+::
+
+ <domain type='kvm'>
+ <name>cloudhypervisor</name>
+ <uuid>4dea22b3-1d52-d8f3-2516-782e98ab3fa0</uuid>
+ <os>
+ <type>hvm</type>
+ <kernel>hypervisor-fw</kernel>
+ </os>
+ <memory unit='G'>2</memory>
+ <devices>
+ <disk type='file'>
+ <source file='disk.raw'/>
+ <target dev='vda' bus='virtio'/>
+ </disk>
+ <interface type='ethernet'>
+ <model type='virtio'/>
+ </interface>
+ </devices>
+ <vcpu>2</vcpu>
+ </domain>
diff --git a/docs/meson.build b/docs/meson.build
index f550629d8e..bee0d80d53 100644
--- a/docs/meson.build
+++ b/docs/meson.build
@@ -111,6 +111,7 @@ docs_rst_files = [
'daemons',
'developer-tooling',
'drvqemu',
+ 'drvch',
'formatbackup',
'formatcheckpoint',
'formatdomain',
diff --git a/include/libvirt/virterror.h b/include/libvirt/virterror.h
index 524a7bf9e8..57986931fd 100644
--- a/include/libvirt/virterror.h
+++ b/include/libvirt/virterror.h
@@ -136,6 +136,7 @@ typedef enum {
VIR_FROM_TPM = 70, /* Error from TPM */
VIR_FROM_BPF = 71, /* Error from BPF code */
+ VIR_FROM_CH = 72, /* Error from Cloud-Hypervisor driver */
# ifdef VIR_ENUM_SENTINELS
VIR_ERR_DOMAIN_LAST
diff --git a/meson.build b/meson.build
index 1f97842319..933055ef33 100644
--- a/meson.build
+++ b/meson.build
@@ -1525,6 +1525,48 @@ elif get_option('driver_lxc').enabled()
error('linux and remote_driver are required for LXC')
endif
+if not get_option('driver_ch').disabled() and host_machine.system() == 'linux' and (host_machine.cpu_family() == 'x86_64' or host_machine.cpu_family() == 'aarch64')
+ use_ch = true
+
+ if not conf.has('WITH_LIBVIRTD')
+ use_ch = false
+ if get_option('driver_ch').enabled()
+ error('libvirtd is required to build Cloud-Hypervisor driver')
+ endif
+ endif
+
+ if not yajl_dep.found()
+ use_ch = false
+ if get_option('driver_ch').enabled()
+ error('YAJL 2 is required to build Cloud-Hypervisor driver')
+ endif
+ endif
+
+ if not curl_dep.found()
+ use_ch = false
+ if get_option('driver_ch').enabled()
+ error('curl is required to build Cloud-Hypervisor driver')
+ endif
+ endif
+
+ if use_ch
+ conf.set('WITH_CH', 1)
+
+ default_ch_user = 'root'
+ default_ch_group = 'root'
+ ch_user = get_option('ch_user')
+ if ch_user == ''
+ ch_user = default_ch_user
+ endif
+ ch_group = get_option('ch_group')
+ if ch_group == ''
+ ch_group = default_ch_group
+ endif
+ conf.set_quoted('CH_USER', ch_user)
+ conf.set_quoted('CH_GROUP', ch_group)
+ endif
+endif
+
# there's no use compiling the network driver without the libvirt
# daemon, nor compiling it for macOS, where it breaks the compile
if not get_option('driver_network').disabled() and conf.has('WITH_LIBVIRTD') and host_machine.system() != 'darwin'
@@ -2178,6 +2220,7 @@ driver_summary = {
'VBox': conf.has('WITH_VBOX'),
'libxl': conf.has('WITH_LIBXL'),
'LXC': conf.has('WITH_LXC'),
+ 'Cloud-Hypervisor': conf.has('WITH_CH'),
'ESX': conf.has('WITH_ESX'),
'Hyper-V': conf.has('WITH_HYPERV'),
'vz': conf.has('WITH_VZ'),
diff --git a/meson_options.txt b/meson_options.txt
index 2606648b64..cd0b4b33be 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -53,6 +53,9 @@ option('driver_interface', type: 'feature', value: 'auto', description: 'host in
option('driver_libvirtd', type: 'feature', value: 'auto', description: 'libvirtd driver')
option('driver_libxl', type: 'feature', value: 'auto', description: 'libxenlight driver')
option('driver_lxc', type: 'feature', value: 'auto', description: 'Linux Container driver')
+option('driver_ch', type: 'feature', value: 'auto', description: 'Cloud-Hypervisor driver')
+option('ch_user', type: 'string', value: '', description: 'username to run Cloud-Hypervisor system instance as')
+option('ch_group', type: 'string', value: '', description: 'groupname to run Cloud-Hypervisor system instance as')
option('driver_network', type: 'feature', value: 'auto', description: 'virtual network driver')
option('driver_openvz', type: 'feature', value: 'auto', description: 'OpenVZ driver')
option('driver_qemu', type: 'feature', value: 'auto', description: 'QEMU/KVM driver')
diff --git a/po/POTFILES.in b/po/POTFILES.in
index 413783ee35..c200d7452a 100644
--- a/po/POTFILES.in
+++ b/po/POTFILES.in
@@ -18,6 +18,11 @@
@SRCDIR(a)src/bhyve/bhyve_monitor.c
@SRCDIR(a)src/bhyve/bhyve_parse_command.c
@SRCDIR(a)src/bhyve/bhyve_process.c
+@SRCDIR(a)src/ch/ch_conf.c
+@SRCDIR(a)src/ch/ch_domain.c
+@SRCDIR(a)src/ch/ch_driver.c
+@SRCDIR(a)src/ch/ch_monitor.c
+@SRCDIR(a)src/ch/ch_process.c
@SRCDIR(a)src/conf/backup_conf.c
@SRCDIR(a)src/conf/capabilities.c
@SRCDIR(a)src/conf/checkpoint_conf.c
diff --git a/src/ch/ch_conf.c b/src/ch/ch_conf.c
new file mode 100644
index 0000000000..2dd104b8a8
--- /dev/null
+++ b/src/ch/ch_conf.c
@@ -0,0 +1,251 @@
+/*
+ * Copyright Intel Corp. 2020-2021
+ *
+ * ch_conf.c: functions for Cloud-Hypervisor configuration
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library. If not, see
+ * <http://www.gnu.org/licenses/>.
+ */
+
+#include <config.h>
+
+#include "configmake.h"
+#include "viralloc.h"
+#include "vircommand.h"
+#include "virlog.h"
+#include "virobject.h"
+#include "virstring.h"
+#include "virutil.h"
+
+#include "ch_conf.h"
+#include "ch_domain.h"
+
+#define VIR_FROM_THIS VIR_FROM_CH
+
+VIR_LOG_INIT("ch.ch_conf");
+
+static virClass *virCHDriverConfigClass;
+static void virCHDriverConfigDispose(void *obj);
+
+static int virCHConfigOnceInit(void)
+{
+ if (!VIR_CLASS_NEW(virCHDriverConfig, virClassForObject()))
+ return -1;
+
+ return 0;
+}
+
+VIR_ONCE_GLOBAL_INIT(virCHConfig);
+
+
+/* Functions */
+virCaps *virCHDriverCapsInit(void)
+{
+ virCaps *caps;
+ virCapsGuest *guest;
+
+ if ((caps = virCapabilitiesNew(virArchFromHost(),
+ false, false)) == NULL)
+ goto cleanup;
+
+ if (!(caps->host.numa = virCapabilitiesHostNUMANewHost()))
+ goto cleanup;
+
+ if (virCapabilitiesInitCaches(caps) < 0)
+ goto cleanup;
+
+ if ((guest = virCapabilitiesAddGuest(caps,
+ VIR_DOMAIN_OSTYPE_HVM,
+ caps->host.arch,
+ NULL,
+ NULL,
+ 0,
+ NULL)) == NULL)
+ goto cleanup;
+
+ if (virCapabilitiesAddGuestDomain(guest,
+ VIR_DOMAIN_VIRT_KVM,
+ NULL,
+ NULL,
+ 0,
+ NULL) == NULL)
+ goto cleanup;
+
+ return caps;
+
+ cleanup:
+ virObjectUnref(caps);
+ return NULL;
+}
+
+/**
+ * virCHDriverGetCapabilities:
+ *
+ * Get a reference to the virCaps instance for the
+ * driver. If @refresh is true, the capabilities will be
+ * rebuilt first
+ *
+ * The caller must release the reference with virObjetUnref
+ *
+ * Returns: a reference to a virCaps instance or NULL
+ */
+virCaps *virCHDriverGetCapabilities(virCHDriver *driver,
+ bool refresh)
+{
+ virCaps *ret;
+ if (refresh) {
+ virCaps *caps = NULL;
+ if ((caps = virCHDriverCapsInit()) == NULL)
+ return NULL;
+
+ chDriverLock(driver);
+ virObjectUnref(driver->caps);
+ driver->caps = caps;
+ } else {
+ chDriverLock(driver);
+ }
+
+ ret = virObjectRef(driver->caps);
+ chDriverUnlock(driver);
+ return ret;
+}
+
+virDomainXMLOption *
+chDomainXMLConfInit(virCHDriver *driver)
+{
+ virCHDriverDomainDefParserConfig.priv = driver;
+ return virDomainXMLOptionNew(&virCHDriverDomainDefParserConfig,
+ &virCHDriverPrivateDataCallbacks,
+ NULL, NULL, NULL);
+}
+
+virCHDriverConfig *
+virCHDriverConfigNew(bool privileged)
+{
+ virCHDriverConfig *cfg;
+
+ if (virCHConfigInitialize() < 0)
+ return NULL;
+
+ if (!(cfg = virObjectNew(virCHDriverConfigClass)))
+ return NULL;
+
+ cfg->uri = g_strdup(privileged ? "ch:///system" : "ch:///session");
+ if (privileged) {
+ if (virGetUserID(CH_USER, &cfg->user) < 0)
+ return NULL;
+ if (virGetGroupID(CH_GROUP, &cfg->group) < 0)
+ return NULL;
+ } else {
+ cfg->user = (uid_t)-1;
+ cfg->group = (gid_t)-1;
+ }
+
+ if (privileged) {
+ cfg->logDir = g_strdup_printf("%s/log/libvirt/ch", LOCALSTATEDIR);
+ cfg->stateDir = g_strdup_printf("%s/libvirt/ch", RUNSTATEDIR);
+
+ } else {
+ g_autofree char *rundir = NULL;
+ g_autofree char *cachedir = NULL;
+
+ cachedir = virGetUserCacheDirectory();
+
+ cfg->logDir = g_strdup_printf("%s/ch/log", cachedir);
+
+ rundir = virGetUserRuntimeDirectory();
+ cfg->stateDir = g_strdup_printf("%s/ch/run", rundir);
+ }
+
+ return cfg;
+}
+
+virCHDriverConfig *virCHDriverGetConfig(virCHDriver *driver)
+{
+ virCHDriverConfig *cfg;
+ chDriverLock(driver);
+ cfg = virObjectRef(driver->config);
+ chDriverUnlock(driver);
+ return cfg;
+}
+
+static void
+virCHDriverConfigDispose(void *obj)
+{
+ virCHDriverConfig *cfg = obj;
+
+ g_free(cfg->stateDir);
+ g_free(cfg->logDir);
+}
+
+#define MIN_VERSION ((15 * 1000000) + (0 * 1000) + (0))
+
+static int
+chExtractVersionInfo(int *retversion)
+{
+ int ret = -1;
+ unsigned long version;
+ char *help = NULL;
+ char *tmp = NULL;
+ g_autofree char *ch_cmd = g_find_program_in_path(CH_CMD);
+ virCommand *cmd = virCommandNewArgList(ch_cmd, "--version", NULL);
+
+ if (retversion)
+ *retversion = 0;
+
+ virCommandAddEnvString(cmd, "LC_ALL=C");
+ virCommandSetOutputBuffer(cmd, &help);
+
+ if (virCommandRun(cmd, NULL) < 0)
+ goto cleanup;
+
+ tmp = help;
+
+ /* expected format: cloud-hypervisor v<major>.<minor>.<micro> */
+ if ((tmp = STRSKIP(tmp, "cloud-hypervisor v")) == NULL)
+ goto cleanup;
+
+ if (virParseVersionString(tmp, &version, true) < 0)
+ goto cleanup;
+
+ if (version < MIN_VERSION) {
+ virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+ _("Cloud-Hypervisor version is too old (v15.0 is the minimum supported version)"));
+ goto cleanup;
+ }
+
+ if (retversion)
+ *retversion = version;
+
+ ret = 0;
+
+ cleanup:
+ virCommandFree(cmd);
+
+ return ret;
+}
+
+int chExtractVersion(virCHDriver *driver)
+{
+ if (driver->version > 0)
+ return 0;
+
+ if (chExtractVersionInfo(&driver->version) < 0) {
+ virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+ _("Could not extract Cloud-Hypervisor version"));
+ return -1;
+ }
+
+ return 0;
+}
diff --git a/src/ch/ch_conf.h b/src/ch/ch_conf.h
new file mode 100644
index 0000000000..d856825377
--- /dev/null
+++ b/src/ch/ch_conf.h
@@ -0,0 +1,85 @@
+/*
+ * Copyright Intel Corp. 2020-2021
+ *
+ * ch_conf.h: header file for Cloud-Hypervisor configuration
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library. If not, see
+ * <http://www.gnu.org/licenses/>.
+ */
+
+#pragma once
+
+#include "virdomainobjlist.h"
+#include "virthread.h"
+
+#define CH_DRIVER_NAME "CH"
+#define CH_CMD "cloud-hypervisor"
+
+typedef struct _virCHDriver virCHDriver;
+
+typedef struct _virCHDriverConfig virCHDriverConfig;
+
+struct _virCHDriverConfig {
+ GObject parent;
+
+ char *stateDir;
+ char *logDir;
+ char *uri;
+
+ uid_t user;
+ gid_t group;
+};
+
+struct _virCHDriver
+{
+ virMutex lock;
+
+ /* Require lock to get a reference on the object,
+ * lockless access thereafter */
+ virCaps *caps;
+
+ /* Immutable pointer, Immutable object */
+ virDomainXMLOption *xmlopt;
+
+ /* Immutable pointer, self-locking APIs */
+ virDomainObjList *domains;
+
+ /* Cloud-Hypervisor version */
+ int version;
+
+ /* Require lock to get reference on 'config',
+ * then lockless thereafter */
+ virCHDriverConfig *config;
+
+ /* pid file FD, ensures two copies of the driver can't use the same root */
+ int lockFD;
+};
+
+virCaps *virCHDriverCapsInit(void);
+virCaps *virCHDriverGetCapabilities(virCHDriver *driver,
+ bool refresh);
+virDomainXMLOption *chDomainXMLConfInit(virCHDriver *driver);
+virCHDriverConfig *virCHDriverConfigNew(bool privileged);
+virCHDriverConfig *virCHDriverGetConfig(virCHDriver *driver);
+int chExtractVersion(virCHDriver *driver);
+
+static inline void chDriverLock(virCHDriver *driver)
+{
+ virMutexLock(&driver->lock);
+}
+
+static inline void chDriverUnlock(virCHDriver *driver)
+{
+ virMutexUnlock(&driver->lock);
+}
diff --git a/src/ch/ch_domain.c b/src/ch/ch_domain.c
new file mode 100644
index 0000000000..9d0b091699
--- /dev/null
+++ b/src/ch/ch_domain.c
@@ -0,0 +1,203 @@
+/*
+ * Copyright Intel Corp. 2020-2021
+ *
+ * ch_domain.c: Domain manager functions for Cloud-Hypervisor driver
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library. If not, see
+ * <http://www.gnu.org/licenses/>.
+ */
+
+#include <config.h>
+
+#include "ch_domain.h"
+#include "viralloc.h"
+#include "virlog.h"
+#include "virtime.h"
+
+#define VIR_FROM_THIS VIR_FROM_CH
+
+VIR_ENUM_IMPL(virCHDomainJob,
+ CH_JOB_LAST,
+ "none",
+ "query",
+ "destroy",
+ "modify",
+);
+
+VIR_LOG_INIT("ch.ch_domain");
+
+static int
+virCHDomainObjInitJob(virCHDomainObjPrivate *priv)
+{
+ memset(&priv->job, 0, sizeof(priv->job));
+
+ if (virCondInit(&priv->job.cond) < 0)
+ return -1;
+
+ return 0;
+}
+
+static void
+virCHDomainObjResetJob(virCHDomainObjPrivate *priv)
+{
+ struct virCHDomainJobObj *job = &priv->job;
+
+ job->active = CH_JOB_NONE;
+ job->owner = 0;
+}
+
+static void
+virCHDomainObjFreeJob(virCHDomainObjPrivate *priv)
+{
+ ignore_value(virCondDestroy(&priv->job.cond));
+}
+
+/*
+ * obj must be locked before calling, virCHDriver must NOT be locked
+ *
+ * This must be called by anything that will change the VM state
+ * in any way
+ *
+ * Upon successful return, the object will have its ref count increased.
+ * Successful calls must be followed by EndJob eventually.
+ */
+int
+virCHDomainObjBeginJob(virDomainObj *obj, enum virCHDomainJob job)
+{
+ virCHDomainObjPrivate *priv = obj->privateData;
+ unsigned long long now;
+ unsigned long long then;
+
+ if (virTimeMillisNow(&now) < 0)
+ return -1;
+ then = now + CH_JOB_WAIT_TIME;
+
+ while (priv->job.active) {
+ VIR_DEBUG("Wait normal job condition for starting job: %s",
+ virCHDomainJobTypeToString(job));
+ if (virCondWaitUntil(&priv->job.cond, &obj->parent.lock, then) < 0)
+ goto error;
+ }
+
+ virCHDomainObjResetJob(priv);
+
+ VIR_DEBUG("Starting job: %s", virCHDomainJobTypeToString(job));
+ priv->job.active = job;
+ priv->job.owner = virThreadSelfID();
+
+ return 0;
+
+ error:
+ VIR_WARN("Cannot start job (%s) for domain %s;"
+ " current job is (%s) owned by (%d)",
+ virCHDomainJobTypeToString(job),
+ obj->def->name,
+ virCHDomainJobTypeToString(priv->job.active),
+ priv->job.owner);
+
+ if (errno == ETIMEDOUT)
+ virReportError(VIR_ERR_OPERATION_TIMEOUT,
+ "%s", _("cannot acquire state change lock"));
+ else
+ virReportSystemError(errno,
+ "%s", _("cannot acquire job mutex"));
+ return -1;
+}
+
+/*
+ * obj must be locked and have a reference before calling
+ *
+ * To be called after completing the work associated with the
+ * earlier virCHDomainBeginJob() call
+ */
+void
+virCHDomainObjEndJob(virDomainObj *obj)
+{
+ virCHDomainObjPrivate *priv = obj->privateData;
+ enum virCHDomainJob job = priv->job.active;
+
+ VIR_DEBUG("Stopping job: %s",
+ virCHDomainJobTypeToString(job));
+
+ virCHDomainObjResetJob(priv);
+ virCondSignal(&priv->job.cond);
+}
+
+static void *
+virCHDomainObjPrivateAlloc(void *opaque G_GNUC_UNUSED)
+{
+ virCHDomainObjPrivate *priv;
+
+ priv = g_new0(virCHDomainObjPrivate, 1);
+
+ if (virCHDomainObjInitJob(priv) < 0) {
+ g_free(priv);
+ return NULL;
+ }
+
+ return priv;
+}
+
+static void
+virCHDomainObjPrivateFree(void *data)
+{
+ virCHDomainObjPrivate *priv = data;
+
+ virCHDomainObjFreeJob(priv);
+ g_free(priv);
+}
+
+virDomainXMLPrivateDataCallbacks virCHDriverPrivateDataCallbacks = {
+ .alloc = virCHDomainObjPrivateAlloc,
+ .free = virCHDomainObjPrivateFree,
+};
+
+static int
+virCHDomainDefPostParseBasic(virDomainDef *def,
+ void *opaque G_GNUC_UNUSED)
+{
+ /* check for emulator and create a default one if needed */
+ if (!def->emulator) {
+ if (!(def->emulator = g_find_program_in_path(CH_CMD))) {
+ virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
+ _("No emulator found for cloud-hypervisor"));
+ return 1;
+ }
+ }
+
+ return 0;
+}
+
+static int
+virCHDomainDefPostParse(virDomainDef *def,
+ unsigned int parseFlags G_GNUC_UNUSED,
+ void *opaque,
+ void *parseOpaque G_GNUC_UNUSED)
+{
+ virCHDriver *driver = opaque;
+ g_autoptr(virCaps) caps = virCHDriverGetCapabilities(driver, false);
+ if (!caps)
+ return -1;
+ if (!virCapabilitiesDomainSupported(caps, def->os.type,
+ def->os.arch,
+ def->virtType))
+ return -1;
+
+ return 0;
+}
+
+virDomainDefParserConfig virCHDriverDomainDefParserConfig = {
+ .domainPostParseBasicCallback = virCHDomainDefPostParseBasic,
+ .domainPostParseCallback = virCHDomainDefPostParse,
+};
diff --git a/src/ch/ch_domain.h b/src/ch/ch_domain.h
new file mode 100644
index 0000000000..b4e0d4c212
--- /dev/null
+++ b/src/ch/ch_domain.h
@@ -0,0 +1,65 @@
+/*
+ * Copyright Intel Corp. 2020-2021
+ *
+ * ch_domain.h: header file for domain manager's Cloud-Hypervisor driver functions
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library. If not, see
+ * <http://www.gnu.org/licenses/>.
+ */
+
+#pragma once
+
+#include "ch_conf.h"
+#include "ch_monitor.h"
+
+/* Give up waiting for mutex after 30 seconds */
+#define CH_JOB_WAIT_TIME (1000ull * 30)
+
+/* Only 1 job is allowed at any time
+ * A job includes *all* ch.so api, even those just querying
+ * information, not merely actions */
+
+enum virCHDomainJob {
+ CH_JOB_NONE = 0, /* Always set to 0 for easy if (jobActive) conditions */
+ CH_JOB_QUERY, /* Doesn't change any state */
+ CH_JOB_DESTROY, /* Destroys the domain (cannot be masked out) */
+ CH_JOB_MODIFY, /* May change state */
+ CH_JOB_LAST
+};
+VIR_ENUM_DECL(virCHDomainJob);
+
+
+struct virCHDomainJobObj {
+ virCond cond; /* Use to coordinate jobs */
+ enum virCHDomainJob active; /* Currently running job */
+ int owner; /* Thread which set current job */
+};
+
+
+typedef struct _virCHDomainObjPrivate virCHDomainObjPrivate;
+struct _virCHDomainObjPrivate {
+ struct virCHDomainJobObj job;
+
+ virCHMonitor *monitor;
+};
+
+extern virDomainXMLPrivateDataCallbacks virCHDriverPrivateDataCallbacks;
+extern virDomainDefParserConfig virCHDriverDomainDefParserConfig;
+
+int
+virCHDomainObjBeginJob(virDomainObj *obj, enum virCHDomainJob job)
+ G_GNUC_WARN_UNUSED_RESULT;
+
+void
+virCHDomainObjEndJob(virDomainObj *obj);
diff --git a/src/ch/ch_driver.c b/src/ch/ch_driver.c
new file mode 100644
index 0000000000..a4f03d644d
--- /dev/null
+++ b/src/ch/ch_driver.c
@@ -0,0 +1,930 @@
+/*
+ * Copyright Intel Corp. 2020-2021
+ *
+ * ch_driver.c: Core Cloud-Hypervisor driver functions
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library. If not, see
+ * <http://www.gnu.org/licenses/>.
+ */
+
+#include <config.h>
+
+#include "ch_conf.h"
+#include "ch_domain.h"
+#include "ch_driver.h"
+#include "ch_monitor.h"
+#include "ch_process.h"
+#include "datatypes.h"
+#include "driver.h"
+#include "viraccessapicheck.h"
+#include "viralloc.h"
+#include "virbuffer.h"
+#include "vircommand.h"
+#include "virerror.h"
+#include "virfile.h"
+#include "virlog.h"
+#include "virnetdevtap.h"
+#include "virobject.h"
+#include "virstring.h"
+#include "virtypedparam.h"
+#include "viruri.h"
+#include "virutil.h"
+#include "viruuid.h"
+
+#define VIR_FROM_THIS VIR_FROM_CH
+
+VIR_LOG_INIT("ch.ch_driver");
+
+static int chStateInitialize(bool privileged,
+ const char *root,
+ virStateInhibitCallback callback,
+ void *opaque);
+static int chStateCleanup(void);
+virCHDriver *ch_driver = NULL;
+
+static virDomainObj *
+chDomObjFromDomain(virDomain *domain)
+{
+ virDomainObj *vm;
+ virCHDriver *driver = domain->conn->privateData;
+ char uuidstr[VIR_UUID_STRING_BUFLEN];
+
+ vm = virDomainObjListFindByUUID(driver->domains, domain->uuid);
+ if (!vm) {
+ virUUIDFormat(domain->uuid, uuidstr);
+ virReportError(VIR_ERR_NO_DOMAIN,
+ _("no domain with matching uuid '%s' (%s)"),
+ uuidstr, domain->name);
+ return NULL;
+ }
+
+ return vm;
+}
+
+/* Functions */
+static int
+chConnectURIProbe(char **uri)
+{
+ if (ch_driver == NULL)
+ return 0;
+
+ *uri = g_strdup("ch:///system");
+ return 1;
+}
+
+static virDrvOpenStatus chConnectOpen(virConnectPtr conn,
+ virConnectAuthPtr auth G_GNUC_UNUSED,
+ virConf *conf G_GNUC_UNUSED,
+ unsigned int flags)
+{
+ virCheckFlags(VIR_CONNECT_RO, VIR_DRV_OPEN_ERROR);
+
+ /* URI was good, but driver isn't active */
+ if (ch_driver == NULL) {
+ virReportError(VIR_ERR_INTERNAL_ERROR,
+ "%s", _("Cloud-Hypervisor state driver is not active"));
+ return VIR_DRV_OPEN_ERROR;
+ }
+
+ if (virConnectOpenEnsureACL(conn) < 0)
+ return VIR_DRV_OPEN_ERROR;
+
+ conn->privateData = ch_driver;
+
+ return VIR_DRV_OPEN_SUCCESS;
+}
+
+static int chConnectClose(virConnectPtr conn)
+{
+ conn->privateData = NULL;
+ return 0;
+}
+
+static const char *chConnectGetType(virConnectPtr conn)
+{
+ if (virConnectGetTypeEnsureACL(conn) < 0)
+ return NULL;
+
+ return "CH";
+}
+
+static int chConnectGetVersion(virConnectPtr conn,
+ unsigned long *version)
+{
+ virCHDriver *driver = conn->privateData;
+
+ if (virConnectGetVersionEnsureACL(conn) < 0)
+ return -1;
+
+ chDriverLock(driver);
+ *version = driver->version;
+ chDriverUnlock(driver);
+ return 0;
+}
+
+static char *chConnectGetHostname(virConnectPtr conn)
+{
+ if (virConnectGetHostnameEnsureACL(conn) < 0)
+ return NULL;
+
+ return virGetHostname();
+}
+
+static int chConnectNumOfDomains(virConnectPtr conn)
+{
+ virCHDriver *driver = conn->privateData;
+
+ if (virConnectNumOfDomainsEnsureACL(conn) < 0)
+ return -1;
+
+ return virDomainObjListNumOfDomains(driver->domains, true,
+ virConnectNumOfDomainsCheckACL, conn);
+}
+
+static int chConnectListDomains(virConnectPtr conn, int *ids, int nids)
+{
+ virCHDriver *driver = conn->privateData;
+
+ if (virConnectListDomainsEnsureACL(conn) < 0)
+ return -1;
+
+ return virDomainObjListGetActiveIDs(driver->domains, ids, nids,
+ virConnectListDomainsCheckACL, conn);
+}
+
+static int
+chConnectListAllDomains(virConnectPtr conn,
+ virDomainPtr **domains,
+ unsigned int flags)
+{
+ virCHDriver *driver = conn->privateData;
+
+ virCheckFlags(VIR_CONNECT_LIST_DOMAINS_FILTERS_ALL, -1);
+
+ if (virConnectListAllDomainsEnsureACL(conn) < 0)
+ return -1;
+
+ return virDomainObjListExport(driver->domains, conn, domains,
+ virConnectListAllDomainsCheckACL, flags);
+}
+
+static int chNodeGetInfo(virConnectPtr conn,
+ virNodeInfoPtr nodeinfo)
+{
+ if (virNodeGetInfoEnsureACL(conn) < 0)
+ return -1;
+
+ return virCapabilitiesGetNodeInfo(nodeinfo);
+}
+
+static char *chConnectGetCapabilities(virConnectPtr conn)
+{
+ virCHDriver *driver = conn->privateData;
+ virCaps *caps;
+ char *xml;
+
+ if (virConnectGetCapabilitiesEnsureACL(conn) < 0)
+ return NULL;
+
+ if (!(caps = virCHDriverGetCapabilities(driver, true)))
+ return NULL;
+
+ xml = virCapabilitiesFormatXML(caps);
+
+ virObjectUnref(caps);
+ return xml;
+}
+
+/**
+ * chDomainCreateXML:
+ * @conn: pointer to connection
+ * @xml: XML definition of domain
+ * @flags: bitwise-OR of supported virDomainCreateFlags
+ *
+ * Creates a domain based on xml and starts it
+ *
+ * Returns a new domain object or NULL in case of failure.
+ */
+static virDomainPtr
+chDomainCreateXML(virConnectPtr conn,
+ const char *xml,
+ unsigned int flags)
+{
+ virCHDriver *driver = conn->privateData;
+ virDomainDef *vmdef = NULL;
+ virDomainObj *vm = NULL;
+ virDomainPtr dom = NULL;
+ unsigned int parse_flags = VIR_DOMAIN_DEF_PARSE_INACTIVE;
+
+ virCheckFlags(VIR_DOMAIN_START_VALIDATE, NULL);
+
+ if (flags & VIR_DOMAIN_START_VALIDATE)
+ parse_flags |= VIR_DOMAIN_DEF_PARSE_VALIDATE_SCHEMA;
+
+
+ if ((vmdef = virDomainDefParseString(xml, driver->xmlopt,
+ NULL, parse_flags)) == NULL)
+ goto cleanup;
+
+ if (virDomainCreateXMLEnsureACL(conn, vmdef) < 0)
+ goto cleanup;
+
+ if (!(vm = virDomainObjListAdd(driver->domains,
+ vmdef,
+ driver->xmlopt,
+ VIR_DOMAIN_OBJ_LIST_ADD_LIVE |
+ VIR_DOMAIN_OBJ_LIST_ADD_CHECK_LIVE,
+ NULL)))
+ goto cleanup;
+
+ vmdef = NULL;
+
+ if (virCHDomainObjBeginJob(vm, CH_JOB_MODIFY) < 0)
+ goto cleanup;
+
+ if (virCHProcessStart(driver, vm, VIR_DOMAIN_RUNNING_BOOTED) < 0)
+ goto cleanup;
+
+ dom = virGetDomain(conn, vm->def->name, vm->def->uuid, vm->def->id);
+
+ virCHDomainObjEndJob(vm);
+
+ cleanup:
+ if (!dom) {
+ virDomainObjListRemove(driver->domains, vm);
+ }
+ virDomainDefFree(vmdef);
+ virDomainObjEndAPI(&vm);
+ chDriverUnlock(driver);
+ return dom;
+}
+
+static int
+chDomainCreateWithFlags(virDomainPtr dom, unsigned int flags)
+{
+ virCHDriver *driver = dom->conn->privateData;
+ virDomainObj *vm;
+ int ret = -1;
+
+ virCheckFlags(0, -1);
+
+ if (!(vm = chDomObjFromDomain(dom)))
+ goto cleanup;
+
+ if (virDomainCreateWithFlagsEnsureACL(dom->conn, vm->def) < 0)
+ goto cleanup;
+
+ if (virCHDomainObjBeginJob(vm, CH_JOB_MODIFY) < 0)
+ goto cleanup;
+
+ ret = virCHProcessStart(driver, vm, VIR_DOMAIN_RUNNING_BOOTED);
+
+ virCHDomainObjEndJob(vm);
+
+ cleanup:
+ virDomainObjEndAPI(&vm);
+ return ret;
+}
+
+static int
+chDomainCreate(virDomainPtr dom)
+{
+ return chDomainCreateWithFlags(dom, 0);
+}
+
+static virDomainPtr
+chDomainDefineXMLFlags(virConnectPtr conn, const char *xml, unsigned int flags)
+{
+ virCHDriver *driver = conn->privateData;
+ virDomainDef *vmdef = NULL;
+ virDomainObj *vm = NULL;
+ virDomainPtr dom = NULL;
+ unsigned int parse_flags = VIR_DOMAIN_DEF_PARSE_INACTIVE;
+
+ virCheckFlags(VIR_DOMAIN_DEFINE_VALIDATE, NULL);
+
+ if (flags & VIR_DOMAIN_START_VALIDATE)
+ parse_flags |= VIR_DOMAIN_DEF_PARSE_VALIDATE_SCHEMA;
+
+ if ((vmdef = virDomainDefParseString(xml, driver->xmlopt,
+ NULL, parse_flags)) == NULL)
+ goto cleanup;
+
+ if (virXMLCheckIllegalChars("name", vmdef->name, "\n") < 0)
+ goto cleanup;
+
+ if (virDomainDefineXMLFlagsEnsureACL(conn, vmdef) < 0)
+ goto cleanup;
+
+ if (!(vm = virDomainObjListAdd(driver->domains, vmdef,
+ driver->xmlopt,
+ 0, NULL)))
+ goto cleanup;
+
+ vmdef = NULL;
+ vm->persistent = 1;
+
+ dom = virGetDomain(conn, vm->def->name, vm->def->uuid, vm->def->id);
+
+ cleanup:
+ virDomainDefFree(vmdef);
+ virDomainObjEndAPI(&vm);
+ return dom;
+}
+
+static virDomainPtr
+chDomainDefineXML(virConnectPtr conn, const char *xml)
+{
+ return chDomainDefineXMLFlags(conn, xml, 0);
+}
+
+static int
+chDomainUndefineFlags(virDomainPtr dom,
+ unsigned int flags)
+{
+ virCHDriver *driver = dom->conn->privateData;
+ virDomainObj *vm;
+ int ret = -1;
+
+ virCheckFlags(0, -1);
+
+ if (!(vm = chDomObjFromDomain(dom)))
+ goto cleanup;
+
+ if (virDomainUndefineFlagsEnsureACL(dom->conn, vm->def) < 0)
+ goto cleanup;
+
+ if (!vm->persistent) {
+ virReportError(VIR_ERR_OPERATION_INVALID,
+ "%s", _("Cannot undefine transient domain"));
+ goto cleanup;
+ }
+
+ if (virDomainObjIsActive(vm)) {
+ vm->persistent = 0;
+ } else {
+ virDomainObjListRemove(driver->domains, vm);
+ }
+
+ ret = 0;
+
+ cleanup:
+ virDomainObjEndAPI(&vm);
+ return ret;
+}
+
+static int
+chDomainUndefine(virDomainPtr dom)
+{
+ return chDomainUndefineFlags(dom, 0);
+}
+
+static int chDomainIsActive(virDomainPtr dom)
+{
+ virCHDriver *driver = dom->conn->privateData;
+ virDomainObj *vm;
+ int ret = -1;
+
+ chDriverLock(driver);
+ if (!(vm = chDomObjFromDomain(dom)))
+ goto cleanup;
+
+ if (virDomainIsActiveEnsureACL(dom->conn, vm->def) < 0)
+ goto cleanup;
+
+ ret = virDomainObjIsActive(vm);
+
+ cleanup:
+ virDomainObjEndAPI(&vm);
+ chDriverUnlock(driver);
+ return ret;
+}
+
+static int
+chDomainShutdownFlags(virDomainPtr dom,
+ unsigned int flags)
+{
+ virCHDomainObjPrivate *priv;
+ virDomainObj *vm;
+ virDomainState state;
+ int ret = -1;
+
+ virCheckFlags(VIR_DOMAIN_SHUTDOWN_ACPI_POWER_BTN, -1);
+
+ if (!(vm = chDomObjFromDomain(dom)))
+ goto cleanup;
+
+ priv = vm->privateData;
+
+ if (virDomainShutdownFlagsEnsureACL(dom->conn, vm->def, flags) < 0)
+ goto cleanup;
+
+ if (virCHDomainObjBeginJob(vm, CH_JOB_MODIFY) < 0)
+ goto cleanup;
+
+ if (virDomainObjCheckActive(vm) < 0)
+ goto endjob;
+
+ state = virDomainObjGetState(vm, NULL);
+ if (state != VIR_DOMAIN_RUNNING && state != VIR_DOMAIN_PAUSED) {
+ virReportError(VIR_ERR_OPERATION_INVALID, "%s",
+ _("only can shutdown running/paused domain"));
+ goto endjob;
+ } else {
+ if (virCHMonitorShutdownVM(priv->monitor) < 0) {
+ virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+ _("failed to shutdown guest VM"));
+ goto endjob;
+ }
+ }
+
+ virDomainObjSetState(vm, VIR_DOMAIN_SHUTDOWN, VIR_DOMAIN_SHUTDOWN_USER);
+
+ ret = 0;
+
+ endjob:
+ virCHDomainObjEndJob(vm);
+
+ cleanup:
+ virDomainObjEndAPI(&vm);
+ return ret;
+}
+
+static int
+chDomainShutdown(virDomainPtr dom)
+{
+ return chDomainShutdownFlags(dom, 0);
+}
+
+
+static int
+chDomainReboot(virDomainPtr dom, unsigned int flags)
+{
+ virCHDomainObjPrivate *priv;
+ virDomainObj *vm;
+ virDomainState state;
+ int ret = -1;
+
+ virCheckFlags(VIR_DOMAIN_REBOOT_ACPI_POWER_BTN, -1);
+
+ if (!(vm = chDomObjFromDomain(dom)))
+ goto cleanup;
+
+ priv = vm->privateData;
+
+ if (virDomainRebootEnsureACL(dom->conn, vm->def, flags) < 0)
+ goto cleanup;
+
+ if (virCHDomainObjBeginJob(vm, CH_JOB_MODIFY) < 0)
+ goto cleanup;
+
+ if (virDomainObjCheckActive(vm) < 0)
+ goto endjob;
+
+ state = virDomainObjGetState(vm, NULL);
+ if (state != VIR_DOMAIN_RUNNING && state != VIR_DOMAIN_PAUSED) {
+ virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s",
+ _("only can reboot running/paused domain"));
+ goto endjob;
+ } else {
+ if (virCHMonitorRebootVM(priv->monitor) < 0) {
+ virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+ _("failed to reboot domain"));
+ goto endjob;
+ }
+ }
+
+ if (state == VIR_DOMAIN_RUNNING)
+ virDomainObjSetState(vm, VIR_DOMAIN_RUNNING, VIR_DOMAIN_RUNNING_BOOTED);
+ else
+ virDomainObjSetState(vm, VIR_DOMAIN_RUNNING, VIR_DOMAIN_RUNNING_UNPAUSED);
+
+ ret = 0;
+
+ endjob:
+ virCHDomainObjEndJob(vm);
+
+ cleanup:
+ virDomainObjEndAPI(&vm);
+ return ret;
+}
+
+static int
+chDomainSuspend(virDomainPtr dom)
+{
+ virCHDomainObjPrivate *priv;
+ virDomainObj *vm;
+ int ret = -1;
+
+ if (!(vm = chDomObjFromDomain(dom)))
+ goto cleanup;
+
+ priv = vm->privateData;
+
+ if (virDomainSuspendEnsureACL(dom->conn, vm->def) < 0)
+ goto cleanup;
+
+ if (virCHDomainObjBeginJob(vm, CH_JOB_MODIFY) < 0)
+ goto cleanup;
+
+ if (virDomainObjCheckActive(vm) < 0)
+ goto endjob;
+
+ if (virDomainObjGetState(vm, NULL) != VIR_DOMAIN_RUNNING) {
+ virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s",
+ _("only can suspend running domain"));
+ goto endjob;
+ } else {
+ if (virCHMonitorSuspendVM(priv->monitor) < 0) {
+ virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+ _("failed to suspend domain"));
+ goto endjob;
+ }
+ }
+
+ virDomainObjSetState(vm, VIR_DOMAIN_PAUSED, VIR_DOMAIN_PAUSED_USER);
+
+ ret = 0;
+
+ endjob:
+ virCHDomainObjEndJob(vm);
+
+ cleanup:
+ virDomainObjEndAPI(&vm);
+ return ret;
+}
+
+static int
+chDomainResume(virDomainPtr dom)
+{
+ virCHDomainObjPrivate *priv;
+ virDomainObj *vm;
+ int ret = -1;
+
+ if (!(vm = chDomObjFromDomain(dom)))
+ goto cleanup;
+
+ priv = vm->privateData;
+
+ if (virDomainResumeEnsureACL(dom->conn, vm->def) < 0)
+ goto cleanup;
+
+ if (virCHDomainObjBeginJob(vm, CH_JOB_MODIFY) < 0)
+ goto cleanup;
+
+ if (virDomainObjCheckActive(vm) < 0)
+ goto endjob;
+
+ if (virDomainObjGetState(vm, NULL) != VIR_DOMAIN_PAUSED) {
+ virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s",
+ _("only can resume paused domain"));
+ goto endjob;
+ } else {
+ if (virCHMonitorResumeVM(priv->monitor) < 0) {
+ virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+ _("failed to resume domain"));
+ goto endjob;
+ }
+ }
+
+ virDomainObjSetState(vm, VIR_DOMAIN_RUNNING, VIR_DOMAIN_RUNNING_UNPAUSED);
+
+ ret = 0;
+
+ endjob:
+ virCHDomainObjEndJob(vm);
+
+ cleanup:
+ virDomainObjEndAPI(&vm);
+ return ret;
+}
+
+/**
+ * chDomainDestroyFlags:
+ * @dom: pointer to domain to destroy
+ * @flags: extra flags; not used yet.
+ *
+ * Sends SIGKILL to Cloud-Hypervisor process to terminate it
+ *
+ * Returns 0 on success or -1 in case of error
+ */
+static int
+chDomainDestroyFlags(virDomainPtr dom, unsigned int flags)
+{
+ virCHDriver *driver = dom->conn->privateData;
+ virDomainObj *vm;
+ int ret = -1;
+
+ virCheckFlags(0, -1);
+
+ if (!(vm = chDomObjFromDomain(dom)))
+ goto cleanup;
+
+ if (virDomainDestroyFlagsEnsureACL(dom->conn, vm->def) < 0)
+ goto cleanup;
+
+ if (virCHDomainObjBeginJob(vm, CH_JOB_DESTROY) < 0)
+ goto cleanup;
+
+ if (virDomainObjCheckActive(vm) < 0)
+ goto endjob;
+
+ ret = virCHProcessStop(driver, vm, VIR_DOMAIN_SHUTOFF_DESTROYED);
+
+ endjob:
+ virCHDomainObjEndJob(vm);
+ if (!vm->persistent)
+ virDomainObjListRemove(driver->domains, vm);
+
+ cleanup:
+ virDomainObjEndAPI(&vm);
+ return ret;
+}
+
+static int
+chDomainDestroy(virDomainPtr dom)
+{
+ return chDomainDestroyFlags(dom, 0);
+}
+
+static virDomainPtr chDomainLookupByID(virConnectPtr conn,
+ int id)
+{
+ virCHDriver *driver = conn->privateData;
+ virDomainObj *vm;
+ virDomainPtr dom = NULL;
+
+ chDriverLock(driver);
+ vm = virDomainObjListFindByID(driver->domains, id);
+ chDriverUnlock(driver);
+
+ if (!vm) {
+ virReportError(VIR_ERR_NO_DOMAIN,
+ _("no domain with matching id '%d'"), id);
+ goto cleanup;
+ }
+
+ if (virDomainLookupByIDEnsureACL(conn, vm->def) < 0)
+ goto cleanup;
+
+ dom = virGetDomain(conn, vm->def->name, vm->def->uuid, vm->def->id);
+
+ cleanup:
+ virDomainObjEndAPI(&vm);
+ return dom;
+}
+
+static virDomainPtr chDomainLookupByName(virConnectPtr conn,
+ const char *name)
+{
+ virCHDriver *driver = conn->privateData;
+ virDomainObj *vm;
+ virDomainPtr dom = NULL;
+
+ chDriverLock(driver);
+ vm = virDomainObjListFindByName(driver->domains, name);
+ chDriverUnlock(driver);
+
+ if (!vm) {
+ virReportError(VIR_ERR_NO_DOMAIN,
+ _("no domain with matching name '%s'"), name);
+ goto cleanup;
+ }
+
+ if (virDomainLookupByNameEnsureACL(conn, vm->def) < 0)
+ goto cleanup;
+
+ dom = virGetDomain(conn, vm->def->name, vm->def->uuid, vm->def->id);
+
+ cleanup:
+ virDomainObjEndAPI(&vm);
+ return dom;
+}
+
+static virDomainPtr chDomainLookupByUUID(virConnectPtr conn,
+ const unsigned char *uuid)
+{
+ virCHDriver *driver = conn->privateData;
+ virDomainObj *vm;
+ virDomainPtr dom = NULL;
+
+ chDriverLock(driver);
+ vm = virDomainObjListFindByUUID(driver->domains, uuid);
+ chDriverUnlock(driver);
+
+ if (!vm) {
+ char uuidstr[VIR_UUID_STRING_BUFLEN];
+ virUUIDFormat(uuid, uuidstr);
+ virReportError(VIR_ERR_NO_DOMAIN,
+ _("no domain with matching uuid '%s'"), uuidstr);
+ goto cleanup;
+ }
+
+ if (virDomainLookupByUUIDEnsureACL(conn, vm->def) < 0)
+ goto cleanup;
+
+ dom = virGetDomain(conn, vm->def->name, vm->def->uuid, vm->def->id);
+
+ cleanup:
+ virDomainObjEndAPI(&vm);
+ return dom;
+}
+
+static int
+chDomainGetState(virDomainPtr dom,
+ int *state,
+ int *reason,
+ unsigned int flags)
+{
+ virDomainObj *vm;
+ int ret = -1;
+
+ virCheckFlags(0, -1);
+
+ if (!(vm = chDomObjFromDomain(dom)))
+ goto cleanup;
+
+ if (virDomainGetStateEnsureACL(dom->conn, vm->def) < 0)
+ goto cleanup;
+
+ *state = virDomainObjGetState(vm, reason);
+ ret = 0;
+
+ cleanup:
+ virDomainObjEndAPI(&vm);
+ return ret;
+}
+
+static char *chDomainGetXMLDesc(virDomainPtr dom,
+ unsigned int flags)
+{
+ virCHDriver *driver = dom->conn->privateData;
+ virDomainObj *vm;
+ char *ret = NULL;
+
+ virCheckFlags(VIR_DOMAIN_XML_COMMON_FLAGS, NULL);
+
+ if (!(vm = chDomObjFromDomain(dom)))
+ goto cleanup;
+
+ if (virDomainGetXMLDescEnsureACL(dom->conn, vm->def, flags) < 0)
+ goto cleanup;
+
+ ret = virDomainDefFormat(vm->def, driver->xmlopt,
+ virDomainDefFormatConvertXMLFlags(flags));
+
+ cleanup:
+ virDomainObjEndAPI(&vm);
+ return ret;
+}
+
+static int chDomainGetInfo(virDomainPtr dom,
+ virDomainInfoPtr info)
+{
+ virDomainObj *vm;
+ int ret = -1;
+
+ if (!(vm = chDomObjFromDomain(dom)))
+ goto cleanup;
+
+ if (virDomainGetInfoEnsureACL(dom->conn, vm->def) < 0)
+ goto cleanup;
+
+ info->state = virDomainObjGetState(vm, NULL);
+
+ info->cpuTime = 0;
+
+ info->maxMem = virDomainDefGetMemoryTotal(vm->def);
+ info->memory = vm->def->mem.cur_balloon;
+ info->nrVirtCpu = virDomainDefGetVcpus(vm->def);
+
+ ret = 0;
+
+ cleanup:
+ virDomainObjEndAPI(&vm);
+ return ret;
+}
+
+static int chStateCleanup(void)
+{
+ if (ch_driver == NULL)
+ return -1;
+
+ virObjectUnref(ch_driver->domains);
+ virObjectUnref(ch_driver->xmlopt);
+ virObjectUnref(ch_driver->caps);
+ virObjectUnref(ch_driver->config);
+ virMutexDestroy(&ch_driver->lock);
+ g_free(ch_driver);
+
+ return 0;
+}
+
+static int chStateInitialize(bool privileged,
+ const char *root,
+ virStateInhibitCallback callback G_GNUC_UNUSED,
+ void *opaque G_GNUC_UNUSED)
+{
+ if (root != NULL) {
+ virReportError(VIR_ERR_INVALID_ARG, "%s",
+ _("Driver does not support embedded mode"));
+ return -1;
+ }
+
+ ch_driver = g_new0(virCHDriver, 1);
+
+ if (virMutexInit(&ch_driver->lock) < 0) {
+ g_free(ch_driver);
+ return VIR_DRV_STATE_INIT_ERROR;
+ }
+
+ if (!(ch_driver->domains = virDomainObjListNew()))
+ goto cleanup;
+
+ if (!(ch_driver->caps = virCHDriverCapsInit()))
+ goto cleanup;
+
+ if (!(ch_driver->xmlopt = chDomainXMLConfInit(ch_driver)))
+ goto cleanup;
+
+ if (!(ch_driver->config = virCHDriverConfigNew(privileged)))
+ goto cleanup;
+
+ if (chExtractVersion(ch_driver) < 0)
+ goto cleanup;
+
+ return VIR_DRV_STATE_INIT_COMPLETE;
+
+ cleanup:
+ chStateCleanup();
+ return VIR_DRV_STATE_INIT_ERROR;
+}
+
+/* Function Tables */
+static virHypervisorDriver chHypervisorDriver = {
+ .name = "CH",
+ .connectURIProbe = chConnectURIProbe,
+ .connectOpen = chConnectOpen, /* 7.3.0 */
+ .connectClose = chConnectClose, /* 7.3.0 */
+ .connectGetType = chConnectGetType, /* 7.3.0 */
+ .connectGetVersion = chConnectGetVersion, /* 7.3.0 */
+ .connectGetHostname = chConnectGetHostname, /* 7.3.0 */
+ .connectNumOfDomains = chConnectNumOfDomains, /* 7.3.0 */
+ .connectListAllDomains = chConnectListAllDomains, /* 7.3.0 */
+ .connectListDomains = chConnectListDomains, /* 7.3.0 */
+ .connectGetCapabilities = chConnectGetCapabilities, /* 7.3.0 */
+ .domainCreateXML = chDomainCreateXML, /* 7.3.0 */
+ .domainCreate = chDomainCreate, /* 7.3.0 */
+ .domainCreateWithFlags = chDomainCreateWithFlags, /* 7.3.0 */
+ .domainShutdown = chDomainShutdown, /* 7.3.0 */
+ .domainShutdownFlags = chDomainShutdownFlags, /* 7.3.0 */
+ .domainReboot = chDomainReboot, /* 7.3.0 */
+ .domainSuspend = chDomainSuspend, /* 7.3.0 */
+ .domainResume = chDomainResume, /* 7.3.0 */
+ .domainDestroy = chDomainDestroy, /* 7.3.0 */
+ .domainDestroyFlags = chDomainDestroyFlags, /* 7.3.0 */
+ .domainDefineXML = chDomainDefineXML, /* 7.3.0 */
+ .domainDefineXMLFlags = chDomainDefineXMLFlags, /* 7.3.0 */
+ .domainUndefine = chDomainUndefine, /* 7.3.0 */
+ .domainUndefineFlags = chDomainUndefineFlags, /* 7.3.0 */
+ .domainLookupByID = chDomainLookupByID, /* 7.3.0 */
+ .domainLookupByUUID = chDomainLookupByUUID, /* 7.3.0 */
+ .domainLookupByName = chDomainLookupByName, /* 7.3.0 */
+ .domainGetState = chDomainGetState, /* 7.3.0 */
+ .domainGetXMLDesc = chDomainGetXMLDesc, /* 7.3.0 */
+ .domainGetInfo = chDomainGetInfo, /* 7.3.0 */
+ .domainIsActive = chDomainIsActive, /* 7.3.0 */
+ .nodeGetInfo = chNodeGetInfo, /* 7.3.0 */
+};
+
+static virConnectDriver chConnectDriver = {
+ .localOnly = true,
+ .uriSchemes = (const char *[]){"ch", NULL},
+ .hypervisorDriver = &chHypervisorDriver,
+};
+
+static virStateDriver chStateDriver = {
+ .name = "cloud-hypervisor",
+ .stateInitialize = chStateInitialize,
+ .stateCleanup = chStateCleanup,
+};
+
+int chRegister(void)
+{
+ if (virRegisterConnectDriver(&chConnectDriver, false) < 0)
+ return -1;
+ if (virRegisterStateDriver(&chStateDriver) < 0)
+ return -1;
+ return 0;
+}
diff --git a/src/ch/ch_driver.h b/src/ch/ch_driver.h
new file mode 100644
index 0000000000..933be3953b
--- /dev/null
+++ b/src/ch/ch_driver.h
@@ -0,0 +1,24 @@
+/*
+ * Copyright Intel Corp. 2020-2021
+ *
+ * ch_driver.h: header file for Cloud-Hypervisor driver functions
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library. If not, see
+ * <http://www.gnu.org/licenses/>.
+ */
+
+#pragma once
+
+/* Function declarations */
+int chRegister(void);
diff --git a/src/ch/ch_monitor.c b/src/ch/ch_monitor.c
new file mode 100644
index 0000000000..2968c2aae7
--- /dev/null
+++ b/src/ch/ch_monitor.c
@@ -0,0 +1,837 @@
+/*
+ * Copyright Intel Corp. 2020-2021
+ *
+ * ch_monitor.c: Manage Cloud-Hypervisor interactions
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library. If not, see
+ * <http://www.gnu.org/licenses/>.
+ */
+
+#include <config.h>
+
+#include <stdio.h>
+#include <unistd.h>
+#include <curl/curl.h>
+
+#include "ch_conf.h"
+#include "ch_monitor.h"
+#include "viralloc.h"
+#include "vircommand.h"
+#include "virerror.h"
+#include "virfile.h"
+#include "virjson.h"
+#include "virlog.h"
+#include "virstring.h"
+#include "virtime.h"
+
+#define VIR_FROM_THIS VIR_FROM_CH
+
+VIR_LOG_INIT("ch.ch_monitor");
+
+static virClass *virCHMonitorClass;
+static void virCHMonitorDispose(void *obj);
+
+static int virCHMonitorOnceInit(void)
+{
+ if (!VIR_CLASS_NEW(virCHMonitor, virClassForObjectLockable()))
+ return -1;
+
+ return 0;
+}
+
+VIR_ONCE_GLOBAL_INIT(virCHMonitor);
+
+int virCHMonitorShutdownVMM(virCHMonitor *mon);
+int virCHMonitorPutNoContent(virCHMonitor *mon, const char *endpoint);
+int virCHMonitorGet(virCHMonitor *mon, const char *endpoint);
+
+static int
+virCHMonitorBuildCPUJson(virJSONValue *content, virDomainDef *vmdef)
+{
+ virJSONValue *cpus;
+ unsigned int maxvcpus = 0;
+ unsigned int nvcpus = 0;
+ virDomainVcpuDef *vcpu;
+ size_t i;
+
+ /* count maximum allowed number vcpus and enabled vcpus when boot.*/
+ maxvcpus = virDomainDefGetVcpusMax(vmdef);
+ for (i = 0; i < maxvcpus; i++) {
+ vcpu = virDomainDefGetVcpu(vmdef, i);
+ if (vcpu->online)
+ nvcpus++;
+ }
+
+ if (maxvcpus != 0 || nvcpus != 0) {
+ cpus = virJSONValueNewObject();
+ if (virJSONValueObjectAppendNumberInt(cpus, "boot_vcpus", nvcpus) < 0)
+ goto cleanup;
+ if (virJSONValueObjectAppendNumberInt(cpus, "max_vcpus", vmdef->maxvcpus) < 0)
+ goto cleanup;
+ if (virJSONValueObjectAppend(content, "cpus", &cpus) < 0)
+ goto cleanup;
+ }
+
+ return 0;
+
+ cleanup:
+ virJSONValueFree(cpus);
+ return -1;
+}
+
+static int
+virCHMonitorBuildKernelRelatedJson(virJSONValue *content, virDomainDef *vmdef)
+{
+ virJSONValue *kernel = virJSONValueNewObject();
+ virJSONValue *cmdline = virJSONValueNewObject();
+ virJSONValue *initramfs = virJSONValueNewObject();
+
+ if (vmdef->os.kernel == NULL) {
+ virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+ _("Kernel image path in this domain is not defined"));
+ goto cleanup;
+ } else {
+ kernel = virJSONValueNewObject();
+ if (virJSONValueObjectAppendString(kernel, "path", vmdef->os.kernel) < 0)
+ goto cleanup;
+ if (virJSONValueObjectAppend(content, "kernel", &kernel) < 0)
+ goto cleanup;
+ }
+
+ if (vmdef->os.cmdline) {
+ if (virJSONValueObjectAppendString(cmdline, "args", vmdef->os.cmdline) < 0)
+ goto cleanup;
+ if (virJSONValueObjectAppend(content, "cmdline", &cmdline) < 0)
+ goto cleanup;
+ }
+
+ if (vmdef->os.initrd != NULL) {
+ initramfs = virJSONValueNewObject();
+ if (virJSONValueObjectAppendString(initramfs, "path", vmdef->os.initrd) < 0)
+ goto cleanup;
+ if (virJSONValueObjectAppend(content, "initramfs", &initramfs) < 0)
+ goto cleanup;
+ }
+
+ return 0;
+
+ cleanup:
+ virJSONValueFree(kernel);
+ virJSONValueFree(cmdline);
+ virJSONValueFree(initramfs);
+
+ return -1;
+}
+
+static int
+virCHMonitorBuildMemoryJson(virJSONValue *content, virDomainDef *vmdef)
+{
+ virJSONValue *memory;
+ unsigned long long total_memory = virDomainDefGetMemoryInitial(vmdef) * 1024;
+
+ if (total_memory != 0) {
+ memory = virJSONValueNewObject();
+ if (virJSONValueObjectAppendNumberUlong(memory, "size", total_memory) < 0)
+ goto cleanup;
+ if (virJSONValueObjectAppend(content, "memory", &memory) < 0)
+ goto cleanup;
+ }
+
+ return 0;
+
+ cleanup:
+ virJSONValueFree(memory);
+ return -1;
+}
+
+static int
+virCHMonitorBuildDiskJson(virJSONValue *disks, virDomainDiskDef *diskdef)
+{
+ virJSONValue *disk = virJSONValueNewObject();
+
+ if (!diskdef->src)
+ goto cleanup;
+
+ switch (diskdef->src->type) {
+ case VIR_STORAGE_TYPE_FILE:
+ if (!diskdef->src->path) {
+ virReportError(VIR_ERR_INVALID_ARG, "%s",
+ _("Missing disk file path in domain"));
+ goto cleanup;
+ }
+ if (diskdef->bus != VIR_DOMAIN_DISK_BUS_VIRTIO) {
+ virReportError(VIR_ERR_INVALID_ARG,
+ _("Only virtio bus types are supported for '%s'"), diskdef->src->path);
+ goto cleanup;
+ }
+ if (virJSONValueObjectAppendString(disk, "path", diskdef->src->path) < 0)
+ goto cleanup;
+ if (diskdef->src->readonly) {
+ if (virJSONValueObjectAppendBoolean(disk, "readonly", true) < 0)
+ goto cleanup;
+ }
+ if (virJSONValueArrayAppend(disks, &disk) < 0)
+ goto cleanup;
+
+ break;
+ case VIR_STORAGE_TYPE_NONE:
+ case VIR_STORAGE_TYPE_BLOCK:
+ case VIR_STORAGE_TYPE_DIR:
+ case VIR_STORAGE_TYPE_NETWORK:
+ case VIR_STORAGE_TYPE_VOLUME:
+ case VIR_STORAGE_TYPE_NVME:
+ case VIR_STORAGE_TYPE_VHOST_USER:
+ default:
+ virReportEnumRangeError(virStorageType, diskdef->src->type);
+ goto cleanup;
+ }
+
+ return 0;
+
+ cleanup:
+ virJSONValueFree(disk);
+ return -1;
+}
+
+static int
+virCHMonitorBuildDisksJson(virJSONValue *content, virDomainDef *vmdef)
+{
+ virJSONValue *disks;
+ size_t i;
+
+ if (vmdef->ndisks > 0) {
+ disks = virJSONValueNewArray();
+
+ for (i = 0; i < vmdef->ndisks; i++) {
+ if (virCHMonitorBuildDiskJson(disks, vmdef->disks[i]) < 0)
+ goto cleanup;
+ }
+ if (virJSONValueObjectAppend(content, "disks", &disks) < 0)
+ goto cleanup;
+ }
+
+ return 0;
+
+ cleanup:
+ virJSONValueFree(disks);
+ return -1;
+}
+
+static int
+virCHMonitorBuildNetJson(virJSONValue *nets, virDomainNetDef *netdef)
+{
+ virDomainNetType netType = virDomainNetGetActualType(netdef);
+ char macaddr[VIR_MAC_STRING_BUFLEN];
+ virJSONValue *net;
+
+ // check net type at first
+ net = virJSONValueNewObject();
+
+ switch (netType) {
+ case VIR_DOMAIN_NET_TYPE_ETHERNET:
+ if (netdef->guestIP.nips == 1) {
+ const virNetDevIPAddr *ip = netdef->guestIP.ips[0];
+ g_autofree char *addr = NULL;
+ virSocketAddr netmask;
+ g_autofree char *netmaskStr = NULL;
+ if (!(addr = virSocketAddrFormat(&ip->address)))
+ goto cleanup;
+ if (virJSONValueObjectAppendString(net, "ip", addr) < 0)
+ goto cleanup;
+
+ if (virSocketAddrPrefixToNetmask(ip->prefix, &netmask, AF_INET) < 0) {
+ virReportError(VIR_ERR_INTERNAL_ERROR,
+ _("Failed to translate net prefix %d to netmask"),
+ ip->prefix);
+ goto cleanup;
+ }
+ if (!(netmaskStr = virSocketAddrFormat(&netmask)))
+ goto cleanup;
+ if (virJSONValueObjectAppendString(net, "mask", netmaskStr) < 0)
+ goto cleanup;
+ } else if (netdef->guestIP.nips > 1) {
+ virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
+ _("ethernet type supports a single guest ip"));
+ }
+ break;
+ case VIR_DOMAIN_NET_TYPE_VHOSTUSER:
+ if ((virDomainChrType)netdef->data.vhostuser->type != VIR_DOMAIN_CHR_TYPE_UNIX) {
+ virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
+ _("vhost_user type support UNIX socket in this CH"));
+ goto cleanup;
+ } else {
+ if (virJSONValueObjectAppendString(net, "vhost_socket", netdef->data.vhostuser->data.nix.path) < 0)
+ goto cleanup;
+ if (virJSONValueObjectAppendBoolean(net, "vhost_user", true) < 0)
+ goto cleanup;
+ }
+ break;
+ case VIR_DOMAIN_NET_TYPE_BRIDGE:
+ case VIR_DOMAIN_NET_TYPE_NETWORK:
+ case VIR_DOMAIN_NET_TYPE_DIRECT:
+ case VIR_DOMAIN_NET_TYPE_USER:
+ case VIR_DOMAIN_NET_TYPE_SERVER:
+ case VIR_DOMAIN_NET_TYPE_CLIENT:
+ case VIR_DOMAIN_NET_TYPE_MCAST:
+ case VIR_DOMAIN_NET_TYPE_INTERNAL:
+ case VIR_DOMAIN_NET_TYPE_HOSTDEV:
+ case VIR_DOMAIN_NET_TYPE_UDP:
+ case VIR_DOMAIN_NET_TYPE_VDPA:
+ case VIR_DOMAIN_NET_TYPE_LAST:
+ default:
+ virReportEnumRangeError(virDomainNetType, netType);
+ goto cleanup;
+ }
+
+ if (netdef->ifname != NULL) {
+ if (virJSONValueObjectAppendString(net, "tap", netdef->ifname) < 0)
+ goto cleanup;
+ }
+ if (virJSONValueObjectAppendString(net, "mac", virMacAddrFormat(&netdef->mac, macaddr)) < 0)
+ goto cleanup;
+
+
+ if (netdef->virtio != NULL) {
+ if (netdef->virtio->iommu == VIR_TRISTATE_SWITCH_ON) {
+ if (virJSONValueObjectAppendBoolean(net, "iommu", true) < 0)
+ goto cleanup;
+ }
+ }
+ if (netdef->driver.virtio.queues) {
+ if (virJSONValueObjectAppendNumberInt(net, "num_queues", netdef->driver.virtio.queues) < 0)
+ goto cleanup;
+ }
+
+ if (netdef->driver.virtio.rx_queue_size || netdef->driver.virtio.tx_queue_size) {
+ if (netdef->driver.virtio.rx_queue_size != netdef->driver.virtio.tx_queue_size) {
+ virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
+ _("virtio rx_queue_size option %d is not same with tx_queue_size %d"),
+ netdef->driver.virtio.rx_queue_size,
+ netdef->driver.virtio.tx_queue_size);
+ goto cleanup;
+ }
+ if (virJSONValueObjectAppendNumberInt(net, "queue_size", netdef->driver.virtio.rx_queue_size) < 0)
+ goto cleanup;
+ }
+
+ if (virJSONValueArrayAppend(nets, &net) < 0)
+ goto cleanup;
+
+ return 0;
+
+ cleanup:
+ virJSONValueFree(net);
+ return -1;
+}
+
+static int
+virCHMonitorBuildNetsJson(virJSONValue *content, virDomainDef *vmdef)
+{
+ virJSONValue *nets;
+ size_t i;
+
+ if (vmdef->nnets > 0) {
+ nets = virJSONValueNewArray();
+
+ for (i = 0; i < vmdef->nnets; i++) {
+ if (virCHMonitorBuildNetJson(nets, vmdef->nets[i]) < 0)
+ goto cleanup;
+ }
+ if (virJSONValueObjectAppend(content, "net", &nets) < 0)
+ goto cleanup;
+ }
+
+ return 0;
+
+ cleanup:
+ virJSONValueFree(nets);
+ return -1;
+}
+
+static int
+virCHMonitorDetectUnsupportedDevices(virDomainDef *vmdef)
+{
+ int ret = 0;
+
+ if (vmdef->ngraphics > 0) {
+ virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
+ _("Cloud-Hypervisor doesn't support graphics"));
+ ret = 1;
+ }
+ if (vmdef->ncontrollers > 0) {
+ virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
+ _("Cloud-Hypervisor doesn't support controllers"));
+ ret = 1;
+ }
+ if (vmdef->nfss > 0) {
+ virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
+ _("Cloud-Hypervisor doesn't support fss"));
+ ret = 1;
+ }
+ if (vmdef->ninputs > 0) {
+ virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
+ _("Cloud-Hypervisor doesn't support inputs"));
+ ret = 1;
+ }
+ if (vmdef->nsounds > 0) {
+ virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
+ _("Cloud-Hypervisor doesn't support sounds"));
+ ret = 1;
+ }
+ if (vmdef->naudios > 0) {
+ virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
+ _("Cloud-Hypervisor doesn't support audios"));
+ ret = 1;
+ }
+ if (vmdef->nvideos > 0) {
+ virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
+ _("Cloud-Hypervisor doesn't support videos"));
+ ret = 1;
+ }
+ if (vmdef->nhostdevs > 0) {
+ virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
+ _("Cloud-Hypervisor doesn't support hostdevs"));
+ ret = 1;
+ }
+ if (vmdef->nredirdevs > 0) {
+ virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
+ _("Cloud-Hypervisor doesn't support redirdevs"));
+ ret = 1;
+ }
+ if (vmdef->nsmartcards > 0) {
+ virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
+ _("Cloud-Hypervisor doesn't support smartcards"));
+ ret = 1;
+ }
+ if (vmdef->nserials > 0) {
+ virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
+ _("Cloud-Hypervisor doesn't support serials"));
+ ret = 1;
+ }
+ if (vmdef->nparallels > 0) {
+ virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
+ _("Cloud-Hypervisor doesn't support parallels"));
+ ret = 1;
+ }
+ if (vmdef->nchannels > 0) {
+ virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
+ _("Cloud-Hypervisor doesn't support channels"));
+ ret = 1;
+ }
+ if (vmdef->nconsoles > 0) {
+ virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
+ _("Cloud-Hypervisor doesn't support consoles"));
+ ret = 1;
+ }
+ if (vmdef->nleases > 0) {
+ virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
+ _("Cloud-Hypervisor doesn't support leases"));
+ ret = 1;
+ }
+ if (vmdef->nhubs > 0) {
+ virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
+ _("Cloud-Hypervisor doesn't support hubs"));
+ ret = 1;
+ }
+ if (vmdef->nseclabels > 0) {
+ virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
+ _("Cloud-Hypervisor doesn't support seclabels"));
+ ret = 1;
+ }
+ if (vmdef->nrngs > 0) {
+ virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
+ _("Cloud-Hypervisor doesn't support rngs"));
+ ret = 1;
+ }
+ if (vmdef->nshmems > 0) {
+ virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
+ _("Cloud-Hypervisor doesn't support shmems"));
+ ret = 1;
+ }
+ if (vmdef->nmems > 0) {
+ virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
+ _("Cloud-Hypervisor doesn't support mems"));
+ ret = 1;
+ }
+ if (vmdef->npanics > 0) {
+ virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
+ _("Cloud-Hypervisor doesn't support panics"));
+ ret = 1;
+ }
+ if (vmdef->nsysinfo > 0) {
+ virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
+ _("Cloud-Hypervisor doesn't support sysinfo"));
+ ret = 1;
+ }
+
+ return ret;
+}
+
+static int
+virCHMonitorBuildVMJson(virDomainDef *vmdef, char **jsonstr)
+{
+ virJSONValue *content = virJSONValueNewObject();
+ int ret = -1;
+
+ if (vmdef == NULL) {
+ virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+ _("VM is not defined"));
+ goto cleanup;
+ }
+
+ if (virCHMonitorDetectUnsupportedDevices(vmdef))
+ goto cleanup;
+
+ if (virCHMonitorBuildCPUJson(content, vmdef) < 0)
+ goto cleanup;
+
+ if (virCHMonitorBuildMemoryJson(content, vmdef) < 0)
+ goto cleanup;
+
+ if (virCHMonitorBuildKernelRelatedJson(content, vmdef) < 0)
+ goto cleanup;
+
+ if (virCHMonitorBuildDisksJson(content, vmdef) < 0)
+ goto cleanup;
+
+ if (virCHMonitorBuildNetsJson(content, vmdef) < 0)
+ goto cleanup;
+
+ if (!(*jsonstr = virJSONValueToString(content, false)))
+ goto cleanup;
+
+ ret = 0;
+
+ cleanup:
+ virJSONValueFree(content);
+ return ret;
+}
+
+static int
+chMonitorCreateSocket(const char *socket_path)
+{
+ struct sockaddr_un addr;
+ socklen_t addrlen = sizeof(addr);
+ int fd;
+
+ if ((fd = socket(AF_UNIX, SOCK_STREAM, 0)) < 0) {
+ virReportSystemError(errno, "%s",
+ _("Unable to create UNIX socket"));
+ goto error;
+ }
+
+ memset(&addr, 0, sizeof(addr));
+ addr.sun_family = AF_UNIX;
+ if (virStrcpyStatic(addr.sun_path, socket_path) < 0) {
+ virReportError(VIR_ERR_INTERNAL_ERROR,
+ _("UNIX socket path '%s' too long"),
+ socket_path);
+ goto error;
+ }
+
+ if (unlink(socket_path) < 0 && errno != ENOENT) {
+ virReportSystemError(errno,
+ _("Unable to unlink %s"),
+ socket_path);
+ goto error;
+ }
+
+ if (bind(fd, (struct sockaddr *)&addr, addrlen) < 0) {
+ virReportSystemError(errno,
+ _("Unable to bind to UNIX socket path '%s'"),
+ socket_path);
+ goto error;
+ }
+
+ if (listen(fd, 1) < 0) {
+ virReportSystemError(errno,
+ _("Unable to listen to UNIX socket path '%s'"),
+ socket_path);
+ goto error;
+ }
+
+ /* We run cloud-hypervisor with umask 0002. Compensate for the umask
+ * libvirtd might be running under to get the same permission
+ * cloud-hypervisor would have. */
+ if (virFileUpdatePerm(socket_path, 0002, 0664) < 0)
+ goto error;
+
+ return fd;
+
+ error:
+ VIR_FORCE_CLOSE(fd);
+ return -1;
+}
+
+virCHMonitor *
+virCHMonitorNew(virDomainObj *vm, const char *socketdir)
+{
+ virCHMonitor *ret = NULL;
+ virCHMonitor *mon = NULL;
+ virCommand *cmd = NULL;
+ int socket_fd = 0;
+
+ if (virCHMonitorInitialize() < 0)
+ return NULL;
+
+ if (!(mon = virObjectLockableNew(virCHMonitorClass)))
+ return NULL;
+
+ if (!vm->def) {
+ virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+ _("VM is not defined"));
+ return NULL;
+ }
+
+ /* prepare to launch Cloud-Hypervisor socket */
+ mon->socketpath = g_strdup_printf("%s/%s-socket", socketdir, vm->def->name);
+ if (g_mkdir_with_parents(socketdir, 0777) < 0) {
+ virReportSystemError(errno,
+ _("Cannot create socket directory '%s'"),
+ socketdir);
+ goto cleanup;
+ }
+
+ cmd = virCommandNew(vm->def->emulator);
+ virCommandSetUmask(cmd, 0x002);
+ socket_fd = chMonitorCreateSocket(mon->socketpath);
+ if (socket_fd < 0) {
+ virReportSystemError(errno,
+ _("Cannot create socket '%s'"),
+ mon->socketpath);
+ goto cleanup;
+ }
+
+ virCommandAddArg(cmd, "--api-socket");
+ virCommandAddArgFormat(cmd, "fd=%d", socket_fd);
+ virCommandPassFD(cmd, socket_fd, VIR_COMMAND_PASS_FD_CLOSE_PARENT);
+
+ /* launch Cloud-Hypervisor socket */
+ if (virCommandRunAsync(cmd, &mon->pid) < 0)
+ goto cleanup;
+
+ /* get a curl handle */
+ mon->handle = curl_easy_init();
+
+ /* now has its own reference */
+ virObjectRef(mon);
+ mon->vm = virObjectRef(vm);
+
+ ret = mon;
+
+ cleanup:
+ virCommandFree(cmd);
+ return ret;
+}
+
+static void virCHMonitorDispose(void *opaque)
+{
+ virCHMonitor *mon = opaque;
+
+ VIR_DEBUG("mon=%p", mon);
+ virObjectUnref(mon->vm);
+}
+
+void virCHMonitorClose(virCHMonitor *mon)
+{
+ if (!mon)
+ return;
+
+ if (mon->pid > 0) {
+ /* try cleaning up the Cloud-Hypervisor process */
+ virProcessAbort(mon->pid);
+ mon->pid = 0;
+ }
+
+ if (mon->handle)
+ curl_easy_cleanup(mon->handle);
+
+ if (mon->socketpath) {
+ if (virFileRemove(mon->socketpath, -1, -1) < 0) {
+ VIR_WARN("Unable to remove CH socket file '%s'",
+ mon->socketpath);
+ }
+ g_free(mon->socketpath);
+ }
+
+ virObjectUnref(mon);
+ if (mon->vm)
+ virObjectUnref(mon->vm);
+}
+
+static int
+virCHMonitorCurlPerform(CURL *handle)
+{
+ CURLcode errorCode;
+ long responseCode = 0;
+
+ errorCode = curl_easy_perform(handle);
+
+ if (errorCode != CURLE_OK) {
+ virReportError(VIR_ERR_INTERNAL_ERROR,
+ _("curl_easy_perform() returned an error: %s (%d)"),
+ curl_easy_strerror(errorCode), errorCode);
+ return -1;
+ }
+
+ errorCode = curl_easy_getinfo(handle, CURLINFO_RESPONSE_CODE,
+ &responseCode);
+
+ if (errorCode != CURLE_OK) {
+ virReportError(VIR_ERR_INTERNAL_ERROR,
+ _("curl_easy_getinfo(CURLINFO_RESPONSE_CODE) returned an "
+ "error: %s (%d)"), curl_easy_strerror(errorCode),
+ errorCode);
+ return -1;
+ }
+
+ if (responseCode < 0) {
+ virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+ _("curl_easy_getinfo(CURLINFO_RESPONSE_CODE) returned a "
+ "negative response code"));
+ return -1;
+ }
+
+ return responseCode;
+}
+
+int
+virCHMonitorPutNoContent(virCHMonitor *mon, const char *endpoint)
+{
+ g_autofree char *url = NULL;
+ int responseCode = 0;
+ int ret = -1;
+
+ url = g_strdup_printf("%s/%s", URL_ROOT, endpoint);
+
+ virObjectLock(mon);
+
+ /* reset all options of a libcurl session handle at first */
+ curl_easy_reset(mon->handle);
+
+ curl_easy_setopt(mon->handle, CURLOPT_UNIX_SOCKET_PATH, mon->socketpath);
+ curl_easy_setopt(mon->handle, CURLOPT_URL, url);
+ curl_easy_setopt(mon->handle, CURLOPT_PUT, true);
+ curl_easy_setopt(mon->handle, CURLOPT_HTTPHEADER, NULL);
+
+ responseCode = virCHMonitorCurlPerform(mon->handle);
+
+ virObjectUnlock(mon);
+
+ if (responseCode == 200 || responseCode == 204)
+ ret = 0;
+
+ return ret;
+}
+
+int
+virCHMonitorGet(virCHMonitor *mon, const char *endpoint)
+{
+ g_autofree char *url = NULL;
+ int responseCode = 0;
+ int ret = -1;
+
+ url = g_strdup_printf("%s/%s", URL_ROOT, endpoint);
+
+ virObjectLock(mon);
+
+ /* reset all options of a libcurl session handle at first */
+ curl_easy_reset(mon->handle);
+
+ curl_easy_setopt(mon->handle, CURLOPT_UNIX_SOCKET_PATH, mon->socketpath);
+ curl_easy_setopt(mon->handle, CURLOPT_URL, url);
+
+ responseCode = virCHMonitorCurlPerform(mon->handle);
+
+ virObjectUnlock(mon);
+
+ if (responseCode == 200 || responseCode == 204)
+ ret = 0;
+
+ return ret;
+}
+
+int
+virCHMonitorShutdownVMM(virCHMonitor *mon)
+{
+ return virCHMonitorPutNoContent(mon, URL_VMM_SHUTDOWN);
+}
+
+int
+virCHMonitorCreateVM(virCHMonitor *mon)
+{
+ g_autofree char *url = NULL;
+ int responseCode = 0;
+ int ret = -1;
+ g_autofree char *payload = NULL;
+ struct curl_slist *headers = NULL;
+
+ url = g_strdup_printf("%s/%s", URL_ROOT, URL_VM_CREATE);
+ headers = curl_slist_append(headers, "Accept: application/json");
+ headers = curl_slist_append(headers, "Content-Type: application/json");
+
+ if (virCHMonitorBuildVMJson(mon->vm->def, &payload) != 0)
+ return -1;
+
+ virObjectLock(mon);
+
+ /* reset all options of a libcurl session handle at first */
+ curl_easy_reset(mon->handle);
+
+ curl_easy_setopt(mon->handle, CURLOPT_UNIX_SOCKET_PATH, mon->socketpath);
+ curl_easy_setopt(mon->handle, CURLOPT_URL, url);
+ curl_easy_setopt(mon->handle, CURLOPT_CUSTOMREQUEST, "PUT");
+ curl_easy_setopt(mon->handle, CURLOPT_HTTPHEADER, headers);
+ curl_easy_setopt(mon->handle, CURLOPT_POSTFIELDS, payload);
+
+ responseCode = virCHMonitorCurlPerform(mon->handle);
+
+ virObjectUnlock(mon);
+
+ if (responseCode == 200 || responseCode == 204)
+ ret = 0;
+
+ curl_slist_free_all(headers);
+ return ret;
+}
+
+int
+virCHMonitorBootVM(virCHMonitor *mon)
+{
+ return virCHMonitorPutNoContent(mon, URL_VM_BOOT);
+}
+
+int
+virCHMonitorShutdownVM(virCHMonitor *mon)
+{
+ return virCHMonitorPutNoContent(mon, URL_VM_SHUTDOWN);
+}
+
+int
+virCHMonitorRebootVM(virCHMonitor *mon)
+{
+ return virCHMonitorPutNoContent(mon, URL_VM_REBOOT);
+}
+
+int
+virCHMonitorSuspendVM(virCHMonitor *mon)
+{
+ return virCHMonitorPutNoContent(mon, URL_VM_Suspend);
+}
+
+int
+virCHMonitorResumeVM(virCHMonitor *mon)
+{
+ return virCHMonitorPutNoContent(mon, URL_VM_RESUME);
+}
diff --git a/src/ch/ch_monitor.h b/src/ch/ch_monitor.h
new file mode 100644
index 0000000000..e717e11cbc
--- /dev/null
+++ b/src/ch/ch_monitor.h
@@ -0,0 +1,60 @@
+/*
+ * Copyright Intel Corp. 2020-2021
+ *
+ * ch_monitor.h: header file for managing Cloud-Hypervisor interactions
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library. If not, see
+ * <http://www.gnu.org/licenses/>.
+ */
+
+#pragma once
+
+#include <curl/curl.h>
+
+#include "virobject.h"
+#include "domain_conf.h"
+
+#define URL_ROOT "http://localhost/api/v1"
+#define URL_VMM_SHUTDOWN "vmm.shutdown"
+#define URL_VM_CREATE "vm.create"
+#define URL_VM_DELETE "vm.delete"
+#define URL_VM_BOOT "vm.boot"
+#define URL_VM_SHUTDOWN "vm.shutdown"
+#define URL_VM_REBOOT "vm.reboot"
+#define URL_VM_Suspend "vm.pause"
+#define URL_VM_RESUME "vm.resume"
+
+typedef struct _virCHMonitor virCHMonitor;
+
+struct _virCHMonitor {
+ virObjectLockable parent;
+
+ CURL *handle;
+
+ char *socketpath;
+
+ pid_t pid;
+
+ virDomainObj *vm;
+};
+
+virCHMonitor *virCHMonitorNew(virDomainObj *vm, const char *socketdir);
+void virCHMonitorClose(virCHMonitor *mon);
+
+int virCHMonitorCreateVM(virCHMonitor *mon);
+int virCHMonitorBootVM(virCHMonitor *mon);
+int virCHMonitorShutdownVM(virCHMonitor *mon);
+int virCHMonitorRebootVM(virCHMonitor *mon);
+int virCHMonitorSuspendVM(virCHMonitor *mon);
+int virCHMonitorResumeVM(virCHMonitor *mon);
diff --git a/src/ch/ch_process.c b/src/ch/ch_process.c
new file mode 100644
index 0000000000..93b1f7f97e
--- /dev/null
+++ b/src/ch/ch_process.c
@@ -0,0 +1,126 @@
+/*
+ * Copyright Intel Corp. 2020-2021
+ *
+ * ch_process.c: Process controller for Cloud-Hypervisor driver
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library. If not, see
+ * <http://www.gnu.org/licenses/>.
+ */
+
+#include <config.h>
+
+#include <unistd.h>
+#include <fcntl.h>
+
+#include "ch_domain.h"
+#include "ch_monitor.h"
+#include "ch_process.h"
+#include "viralloc.h"
+#include "virerror.h"
+#include "virlog.h"
+
+#define VIR_FROM_THIS VIR_FROM_CH
+
+VIR_LOG_INIT("ch.ch_process");
+
+#define START_SOCKET_POSTFIX ": starting up socket\n"
+#define START_VM_POSTFIX ": starting up vm\n"
+
+
+
+static virCHMonitor *
+virCHProcessConnectMonitor(virCHDriver *driver,
+ virDomainObj *vm)
+{
+ virCHMonitor *monitor = NULL;
+ virCHDriverConfig *cfg = virCHDriverGetConfig(driver);
+
+ monitor = virCHMonitorNew(vm, cfg->stateDir);
+
+ virObjectUnref(cfg);
+ return monitor;
+}
+
+/**
+ * virCHProcessStart:
+ * @driver: pointer to driver structure
+ * @vm: pointer to virtual machine structure
+ * @reason: reason for switching vm to running state
+ *
+ * Starts Cloud-Hypervisor listen on a local socket
+ *
+ * Returns 0 on success or -1 in case of error
+ */
+int virCHProcessStart(virCHDriver *driver,
+ virDomainObj *vm,
+ virDomainRunningReason reason)
+{
+ int ret = -1;
+ virCHDomainObjPrivate *priv = vm->privateData;
+
+ if (!priv->monitor) {
+ /* And we can get the first monitor connection now too */
+ if (!(priv->monitor = virCHProcessConnectMonitor(driver, vm))) {
+ virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+ _("failed to create connection to CH socket"));
+ goto cleanup;
+ }
+
+ if (virCHMonitorCreateVM(priv->monitor) < 0) {
+ virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+ _("failed to create guest VM"));
+ goto cleanup;
+ }
+ }
+
+ if (virCHMonitorBootVM(priv->monitor) < 0) {
+ virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+ _("failed to boot guest VM"));
+ goto cleanup;
+ }
+
+ vm->pid = priv->monitor->pid;
+ vm->def->id = vm->pid;
+ virDomainObjSetState(vm, VIR_DOMAIN_RUNNING, reason);
+
+ return 0;
+
+ cleanup:
+ if (ret)
+ virCHProcessStop(driver, vm, VIR_DOMAIN_SHUTOFF_FAILED);
+
+ return ret;
+}
+
+int virCHProcessStop(virCHDriver *driver G_GNUC_UNUSED,
+ virDomainObj *vm,
+ virDomainShutoffReason reason)
+{
+ virCHDomainObjPrivate *priv = vm->privateData;
+
+ VIR_DEBUG("Stopping VM name=%s pid=%d reason=%d",
+ vm->def->name, (int)vm->pid, (int)reason);
+
+ if (priv->monitor) {
+ virCHMonitorClose(priv->monitor);
+ priv->monitor = NULL;
+ }
+
+ vm->pid = -1;
+ vm->def->id = -1;
+
+ virDomainObjSetState(vm, VIR_DOMAIN_SHUTOFF, reason);
+
+ return 0;
+}
diff --git a/src/ch/ch_process.h b/src/ch/ch_process.h
new file mode 100644
index 0000000000..abc4915979
--- /dev/null
+++ b/src/ch/ch_process.h
@@ -0,0 +1,31 @@
+/*
+ * Copyright Intel Corp. 2020-2021
+ *
+ * ch_process.h: header file for Cloud-Hypervisor's process controller
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library. If not, see
+ * <http://www.gnu.org/licenses/>.
+ */
+
+#pragma once
+
+#include "ch_conf.h"
+#include "internal.h"
+
+int virCHProcessStart(virCHDriver *driver,
+ virDomainObj *vm,
+ virDomainRunningReason reason);
+int virCHProcessStop(virCHDriver *driver,
+ virDomainObj *vm,
+ virDomainShutoffReason reason);
diff --git a/src/ch/meson.build b/src/ch/meson.build
new file mode 100644
index 0000000000..e34974d56c
--- /dev/null
+++ b/src/ch/meson.build
@@ -0,0 +1,74 @@
+ch_driver_sources = [
+ 'ch_conf.c',
+ 'ch_conf.h',
+ 'ch_domain.c',
+ 'ch_domain.h',
+ 'ch_driver.c',
+ 'ch_driver.h',
+ 'ch_monitor.c',
+ 'ch_monitor.h',
+ 'ch_process.c',
+ 'ch_process.h',
+]
+
+driver_source_files += files(ch_driver_sources)
+
+stateful_driver_source_files += files(ch_driver_sources)
+
+if conf.has('WITH_CH')
+ ch_driver_impl = static_library(
+ 'virt_driver_ch_impl',
+ [
+ ch_driver_sources,
+ ],
+ dependencies: [
+ access_dep,
+ curl_dep,
+ log_dep,
+ src_dep,
+ ],
+ include_directories: [
+ conf_inc_dir,
+ ],
+ )
+
+ virt_modules += {
+ 'name': 'virt_driver_ch',
+ 'link_whole': [
+ ch_driver_impl,
+ ],
+ 'link_args': [
+ libvirt_no_undefined,
+ ],
+ }
+
+ virt_daemons += {
+ 'name': 'virtchd',
+ 'c_args': [
+ '-DDAEMON_NAME="virtchd"',
+ '-DMODULE_NAME="ch"',
+ ],
+ }
+
+ virt_daemon_confs += {
+ 'name': 'virtchd',
+ }
+
+ virt_daemon_units += {
+ 'service': 'virtchd',
+ 'service_in': files('virtchd.service.in'),
+ 'name': 'Libvirt ch',
+ 'sockprefix': 'virtchd',
+ 'sockets': [ 'main', 'ro', 'admin' ],
+ }
+
+ sysconf_files += {
+ 'name': 'virtchd',
+ 'file': files('virtchd.sysconf'),
+ }
+
+ virt_install_dirs += [
+ localstatedir / 'lib' / 'libvirt' / 'ch',
+ runstatedir / 'libvirt' / 'ch',
+ ]
+endif
diff --git a/src/ch/virtchd.service.in b/src/ch/virtchd.service.in
new file mode 100644
index 0000000000..cc1e85d1df
--- /dev/null
+++ b/src/ch/virtchd.service.in
@@ -0,0 +1,47 @@
+[Unit]
+Description=Virtualization Cloud-Hypervisor daemon
+Conflicts=libvirtd.service
+Requires=virtchd.socket
+Requires=virtchd-ro.socket
+Requires=virtchd-admin.socket
+Wants=systemd-machined.service
+Before=libvirt-guests.service
+After=network.target
+After=dbus.service
+After=apparmor.service
+After=local-fs.target
+After=remote-fs.target
+After=systemd-logind.service
+After=systemd-machined.service
+Documentation=man:libvirtd(8)
+Documentation=https://libvirt.org
+
+[Service]
+Type=notify
+EnvironmentFile=-@sysconfdir@/sysconfig/virtchd
+ExecStart=@sbindir@/virtchd $VIRTCHD_ARGS
+ExecReload=/bin/kill -HUP $MAINPID
+KillMode=process
+Restart=on-failure
+# At least 2 FD per guest (eg ch monitor + ch socket).
+# eg if we want to support 4096 guests, we'll typically need 8192 FDs
+# If changing this, also consider virtlogd.service & virtlockd.service
+# limits which are also related to number of guests
+LimitNOFILE=8192
+# The cgroups pids controller can limit the number of tasks started by
+# the daemon, which can limit the number of domains for some hypervisors.
+# A conservative default of 8 tasks per guest results in a TasksMax of
+# 32k to support 4096 guests.
+TasksMax=32768
+# With cgroups v2 there is no devices controller anymore, we have to use
+# eBPF to control access to devices. In order to do that we create a eBPF
+# hash MAP which locks memory. The default map size for 64 devices together
+# with program takes 12k per guest. After rounding up we will get 64M to
+# support 4096 guests.
+LimitMEMLOCK=64M
+
+[Install]
+WantedBy=multi-user.target
+Also=virtchd.socket
+Also=virtchd-ro.socket
+Also=virtchd-admin.socket
diff --git a/src/ch/virtchd.sysconf b/src/ch/virtchd.sysconf
new file mode 100644
index 0000000000..5ee44be5cf
--- /dev/null
+++ b/src/ch/virtchd.sysconf
@@ -0,0 +1,3 @@
+# Customizations for the virtchd.service systemd unit
+
+VIRTCHD_ARGS="--timeout 120"
diff --git a/src/meson.build b/src/meson.build
index c7ff9e978c..2bd88e6699 100644
--- a/src/meson.build
+++ b/src/meson.build
@@ -271,6 +271,7 @@ subdir('esx')
subdir('hyperv')
subdir('libxl')
subdir('lxc')
+subdir('ch')
subdir('openvz')
subdir('qemu')
subdir('test')
diff --git a/src/remote/remote_daemon.c b/src/remote/remote_daemon.c
index ec2661d0a8..7076fe3294 100644
--- a/src/remote/remote_daemon.c
+++ b/src/remote/remote_daemon.c
@@ -169,6 +169,10 @@ static int daemonInitialize(void)
if (virDriverLoadModule("qemu", "qemuRegister", false) < 0)
return -1;
# endif
+# ifdef WITH_CH
+ if (virDriverLoadModule("ch", "chRegister", false) < 0)
+ return -1;
+# endif
# ifdef WITH_LXC
if (virDriverLoadModule("lxc", "lxcRegister", false) < 0)
return -1;
diff --git a/src/remote/remote_daemon_dispatch.c b/src/remote/remote_daemon_dispatch.c
index 1b4f5256c3..e500b41564 100644
--- a/src/remote/remote_daemon_dispatch.c
+++ b/src/remote/remote_daemon_dispatch.c
@@ -2139,7 +2139,8 @@ remoteDispatchConnectOpen(virNetServer *server G_GNUC_UNUSED,
STREQ(type, "VBOX") ||
STREQ(type, "bhyve") ||
STREQ(type, "vz") ||
- STREQ(type, "Parallels")) {
+ STREQ(type, "Parallels") ||
+ STREQ(type, "CH")) {
VIR_DEBUG("Hypervisor driver found, setting URIs for secondary drivers");
if (getuid() == 0) {
priv->interfaceURI = "interface:///system";
diff --git a/src/util/virerror.c b/src/util/virerror.c
index 1746487f7d..d9e2c65dc8 100644
--- a/src/util/virerror.c
+++ b/src/util/virerror.c
@@ -145,6 +145,7 @@ VIR_ENUM_IMPL(virErrorDomain,
"TPM", /* 70 */
"BPF",
+ "Cloud-Hypervisor Driver",
);
diff --git a/tools/virsh.c b/tools/virsh.c
index 7d7109cfdf..70355a606b 100644
--- a/tools/virsh.c
+++ b/tools/virsh.c
@@ -506,6 +506,9 @@ virshShowVersion(vshControl *ctl G_GNUC_UNUSED)
#ifdef WITH_OPENVZ
vshPrint(ctl, " OpenVZ");
#endif
+#ifdef WITH_CH
+ vshPrint(ctl, " Cloud-Hypervisor");
+#endif
#ifdef WITH_VZ
vshPrint(ctl, " Virtuozzo");
#endif
--
2.31.1
3 years, 6 months
Storing PCI(e) VPD Board Serial Numbers
by Dmitrii Shcherbakov
Hello Libvirt Developers,
I am looking for some feedback on a planned enhancement to Libvirt: the aim
is
to store a portion of PCI(e) Vital Product Data (VPD) for each device along
with other PCI/PCIe device information already collected. Specifically, the
SN
(Serial Number) read-only field of a VPD data structure of a device is of
interest which is described in PCI/PCIe specs (PCI local bus 2.1+ and PCIe
4.0+).
The context for this is the cross-project work in OpenStack (Nova, Neutron),
OVS and OVN to support for off-path SmartNIC DPUs ([1], [2], [3], [4]). The
Nova specification [1] provides an overview of the relevant hardware and the
use-case for board serial numbers, however, VPD is the standard capability
in
the PCI/PCIe specifications not tied to the use-case in particular so the
suggestion from the Nova core team was to aim at introducing means of
collecting this information via Libvirt. It can then be retrieved by the
respective virt driver in Nova via Libvirt without having to introduce this
code into Nova itself.
Quoting the PCI(e) specs:
* "Vital Product Data (VPD) is the information that uniquely defines items
such
as the hardware, software, and microcode elements of a system.";
* "Vital Product Data is made up of Small and Large Resource Data Types.";
* "Large resource type VPD-R Tag: This tag contains the read only VPD
keywords
for an add-in card."
* SN read-only field: "The characters are alphanumeric and represent the
unique
add-in card Serial Number."
The VPD capability is optional per the specification so it may or may not
appear for PCI(e) endpoints. The devices of interest (SmartNIC DPUs),
however,
generally have it exposed.
The PCI/PCIe specs define a binary format for VPD and a sysfs entry exposing
a binary blob in that format has been available since kernel v2.6.26 [5].
The
relevant sections of specs are:
* "6.4. Vital Product Data" in the PCI Local Bus specification;
* "6.28 Vital Product Data (VPD)" in the PCIe 4.0 Base Specification.
Note that the serial number stored in VPD is not identical to the
information
stored in the Device Serial Number (DSN) capability also present in the
specs
as it may identify a component on a board which presents a multi-function
device but the board itself may have multiple components ([9] also makes a
distinction between a board serial and a device serial).
As a reference, there is some code to parse and print the VPD in lspci [6]
and
there is a prototype along those lines in Python [7], a polished version of
which I plan to use in Nova until the relevant functionality appears in
Libvirt.
Likewise, the devlink kernel infrastructure, which is already used in
Libvirt
to query additional device capabilities [8] (e.g. the presence of an
eswitch
and its switchdev mode) has a devlink-info API [9] that exposes a way to
query
a board serial number if a device driver exposes it (in turn, by querying
controller firmware or via PCIe VPD). This allows doing that in a
bus-independent manner (e.g. it would work for PCIe, platform devices or
other
I/O interconnects) but in the context of devices that implement devlink API
only (which are not necessarily network devices [10] but most of them
currently
are).
I would like to suggest the following to be done in Libvirt:
1) adding the code for extracting a serial number from VPD for PCI/PCIe
devices
in general and storing it for exposure via the Libvirt API;
More specifically, I propose adding a nested capability called "vpd" under
VIR_NODE_DEV_CAP_PCI_DEV:
<capability type='pci'>
<capability type='vpd'>
<serial>UNIQUESERIAL</serial>
<!-- ... other VPD attributes if present -->
</capability>
<!-- ... -->
</capability>
2) (optional) implementing functionality to obtain a board serial number via
devlink-info for PCIe devices if they do not expose a VPD capability
but the device driver can retrieve it via firmware. The board serial number
can be stored in the same element as suggested above.
Not all devices expose the devlink API and even fewer do expose board serial
via devlink-info:
* devlink was added in 4.10 [11];
* devlink-info was introduced in 5.1 [12];
* querying for board.serial_number was added in kernel 5.9 [13] and iproute2
5.9.0 [14];
* Besides the generic devlink infrastructure support above, device drivers
also need to support exposing this field.
Therefore, implementing two approaches (sysfs VPD, devlink) is preferable
for better compatibility.
I would appreciate any feedback on whether this potential addition makes
sense.
If so, I can look into implementing this.
[1] https://review.opendev.org/c/openstack/nova-specs/+/787458
[2] https://review.opendev.org/c/openstack/neutron-specs/+/788821
[3]
https://patchwork.ozlabs.org/project/openvswitch/patch/20210323145032.453...
[4]
https://patchwork.ozlabs.org/project/ovn/patch/20210509140305.1910796-1-f...
[5]
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit...
[6] https://github.com/pciutils/pciutils/blob/v3.7.0/ls-vpd.c#L95-L216
[7] https://gist.github.com/dshcherb/40e982989599a757e5b1e25999501019
[8]
https://github.com/libvirt/libvirt/blob/v7.3.0/src/util/virnetdev.c#L3167...
[9]
https://www.kernel.org/doc/html/latest/networking/devlink/devlink-info.html
[10] https://www.kernel.org/doc/html/latest/networking/devlink/index.html
[11]
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit...
[12]
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit...
[13]
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit...
[14]
https://git.kernel.org/pub/scm/network/iproute2/iproute2.git/commit/?id=7...
Best Regards,
Dmitrii Shcherbakov
LP: ~dmitriis
3 years, 6 months
[PATCH v2 0/3] virsh: Fix logic wrt to --current flag in cmdSetmem
by Michal Privoznik
v2 of:
https://listman.redhat.com/archives/libvir-list/2021-May/msg00484.html
diff to v1:
- Work in Jano's and Peter's review suggestions in 1/3.
- Patches 2/3 and 3/3 are new. I've done 2/3 before 3/3 because it's
still worth merging even if 3/3 is disliked.
Michal Prívozník (3):
virsh: Fix logic wrt to --current flag in cmdSetmem
virsh-domain: Fix @ret handling in cmdSetmem and cmdSetmaxmem
virsh-domain: Drop support for old APIs in cmdSetmem and cmdSetmaxmem
tools/virsh-domain.c | 48 +++++++++++++++++---------------------------
1 file changed, 18 insertions(+), 30 deletions(-)
--
2.26.3
3 years, 6 months
Recommended volume permissions (being created for vagrant-libvirt via fog-libvirt)
by Darragh Bailey
Hi,
A request has come up recently in vagrant-libvirt about changing the
permissions used for the VM volume image file.
Currently there is a backing image file uploaded that gets 744 as the file
permissions, and then the VM domain is created using this as the backing
file for any changes. The file containing the changes for the VM gets 600,
so accessing what is contained is limited to libvirt and thus to those that
can connect to libvirt.
The request is to change this to be 744, it appears to have been triggered
due to a desire to try and use virt-v2v to create a portable XML and export
the disks.
However I'm a little hesitant as in general I would default to more secure
rather than less secure to avoid creating security concerns down the line.
Even though vagrant-libvirt is typically used for development, it wouldn't
surprise me to see it being used on CI build infrastructure and given the
shared nature of that, making things less secure may cause issues for some
users. Of course working out who would be impacted is virtually impossible
without making the change and seeing who is concerned. And that might be
several months down the line before it's raised.
Rather than just merging this, wondering if there are any security
guidelines on the file permissions for VM image files? That or something
that can outline the risks, or even clarify that it's unnecessary to worry
about?
--
Darragh Bailey
"Nothing is foolproof to a sufficiently talented fool"
3 years, 6 months
[libvirt PATCH 0/2] fix disk XML formatting for qemuDomainBlockCopy API
by Pavel Hrdina
Pavel Hrdina (2):
domain_conf: extract disk driver source bits to its own function
virDomainDiskDefParseSource: parse source bits from driver element
src/conf/domain_conf.c | 52 ++++++++++++++++++++++++++++++------------
1 file changed, 37 insertions(+), 15 deletions(-)
--
2.31.1
3 years, 6 months