[PATCH] virQEMUDriverConfigNew: Add slash to cfg->defaultTLSx509certdir for non-embedded driver
by Peter Krempa
Commit 068efae5b1a9ef accidentally removed the slash.
https://bugzilla.redhat.com/show_bug.cgi?id=1847234
Signed-off-by: Peter Krempa <pkrempa(a)redhat.com>
---
src/qemu/qemu_conf.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/qemu/qemu_conf.c b/src/qemu/qemu_conf.c
index b49299e1de..33b3989268 100644
--- a/src/qemu/qemu_conf.c
+++ b/src/qemu/qemu_conf.c
@@ -234,7 +234,7 @@ virQEMUDriverConfigPtr virQEMUDriverConfigNew(bool privileged,
* directory doesn't exist (although we don't check if this exists).
*/
if (root == NULL) {
- cfg->defaultTLSx509certdir = g_strdup(SYSCONFDIR "pki/qemu");
+ cfg->defaultTLSx509certdir = g_strdup(SYSCONFDIR "/pki/qemu");
} else {
cfg->defaultTLSx509certdir = g_strdup_printf("%s/etc/pki/qemu", root);
}
--
2.26.2
4 years, 5 months
[libvirt] [PATCH 0/5] Add support for fine grained discard control of qemu
by Lin Ma
* The 'discard_granularity' property is available to 'scsi-hd', 'virtio-blk' and 'ide-*'.
It impacts the 'Optimal Unmap Granularity' field in the block limits VPD page:
Optimal Unmap Granularity = discard granularity / logical block size
* The 'max_unmap_size' property is available to 'scsi-hd' and 'scsi-block'.
It impacts the 'Maximum unmap LBA count' field in the block limits VPD page:
Maximum unmap LBA count = max unmap size / logical block size
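To illustrate the arithmetic above, here is a small standalone sketch; the
property values below are made up for the example and are not defaults taken
from this series:

/* Standalone sketch of the unit conversion described above. */
#include <stdio.h>

int main(void)
{
    unsigned long long logical_block_size = 512;        /* bytes */
    unsigned long long discard_granularity = 4096;      /* bytes, example value */
    unsigned long long max_unmap_size = 1073741824ULL;  /* 1 GiB, example value */

    /* 'Optimal Unmap Granularity' is reported in logical blocks */
    printf("Optimal Unmap Granularity: %llu blocks\n",
           discard_granularity / logical_block_size);

    /* 'Maximum unmap LBA count' is likewise reported in logical blocks */
    printf("Maximum unmap LBA count:   %llu blocks\n",
           max_unmap_size / logical_block_size);
    return 0;
}

With these example values that works out to 4096 / 512 = 8 blocks and
1073741824 / 512 = 2097152 blocks.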
Lin Ma (5):
conf: Add support for discard granularity
qemu: Add Support for discard granularity
conf: Add support for unmap limit
qemu: caps: Add max_unmap_size property of scsi-disk
qemu: Add support for max_unmap_size property of scsi-disk
docs/formatdomain.html.in | 18 +++++++-
docs/schemas/domaincommon.rng | 10 +++++
src/conf/domain_conf.c | 43 ++++++++++++++++++-
src/conf/domain_conf.h | 2 +
src/qemu/qemu_capabilities.c | 2 +
src/qemu/qemu_capabilities.h | 1 +
src/qemu/qemu_command.c | 13 ++++++
src/qemu/qemu_domain.c | 4 ++
.../caps_2.1.1.x86_64.xml | 1 +
.../caps_2.10.0.aarch64.xml | 1 +
.../caps_2.10.0.ppc64.xml | 1 +
.../caps_2.10.0.s390x.xml | 1 +
.../caps_2.10.0.x86_64.xml | 1 +
.../caps_2.11.0.s390x.xml | 1 +
.../caps_2.11.0.x86_64.xml | 1 +
.../caps_2.12.0.aarch64.xml | 1 +
.../caps_2.12.0.ppc64.xml | 1 +
.../caps_2.12.0.s390x.xml | 1 +
.../caps_2.12.0.x86_64.xml | 1 +
.../caps_2.4.0.x86_64.xml | 1 +
.../caps_2.5.0.x86_64.xml | 1 +
.../caps_2.6.0.aarch64.xml | 1 +
.../qemucapabilitiesdata/caps_2.6.0.ppc64.xml | 1 +
.../caps_2.6.0.x86_64.xml | 1 +
.../qemucapabilitiesdata/caps_2.7.0.s390x.xml | 1 +
.../caps_2.7.0.x86_64.xml | 1 +
.../qemucapabilitiesdata/caps_2.8.0.s390x.xml | 1 +
.../caps_2.8.0.x86_64.xml | 1 +
.../qemucapabilitiesdata/caps_2.9.0.ppc64.xml | 1 +
.../qemucapabilitiesdata/caps_2.9.0.s390x.xml | 1 +
.../caps_2.9.0.x86_64.xml | 1 +
.../qemucapabilitiesdata/caps_3.0.0.ppc64.xml | 1 +
.../caps_3.0.0.riscv32.xml | 1 +
.../caps_3.0.0.riscv64.xml | 1 +
.../qemucapabilitiesdata/caps_3.0.0.s390x.xml | 1 +
.../caps_3.0.0.x86_64.xml | 1 +
.../qemucapabilitiesdata/caps_3.1.0.ppc64.xml | 1 +
.../caps_3.1.0.x86_64.xml | 1 +
.../caps_4.0.0.aarch64.xml | 1 +
.../qemucapabilitiesdata/caps_4.0.0.ppc64.xml | 1 +
.../caps_4.0.0.riscv32.xml | 1 +
.../caps_4.0.0.riscv64.xml | 1 +
.../qemucapabilitiesdata/caps_4.0.0.s390x.xml | 1 +
.../caps_4.0.0.x86_64.xml | 1 +
.../caps_4.1.0.x86_64.xml | 1 +
.../caps_4.2.0.aarch64.xml | 1 +
.../qemucapabilitiesdata/caps_4.2.0.ppc64.xml | 1 +
.../qemucapabilitiesdata/caps_4.2.0.s390x.xml | 1 +
.../caps_4.2.0.x86_64.xml | 1 +
.../caps_5.0.0.aarch64.xml | 1 +
.../qemucapabilitiesdata/caps_5.0.0.ppc64.xml | 1 +
.../caps_5.0.0.riscv64.xml | 1 +
.../caps_5.0.0.x86_64.xml | 1 +
.../caps_5.1.0.x86_64.xml | 1 +
tests/qemuxml2argvdata/disk-blockio.args | 2 +-
tests/qemuxml2argvdata/disk-blockio.xml | 2 +-
.../disk-scsi-disk-max_unmap_size.args | 32 ++++++++++++++
.../disk-scsi-disk-max_unmap_size.xml | 28 ++++++++++++
tests/qemuxml2argvtest.c | 4 ++
59 files changed, 203 insertions(+), 4 deletions(-)
create mode 100644 tests/qemuxml2argvdata/disk-scsi-disk-max_unmap_size.args
create mode 100644 tests/qemuxml2argvdata/disk-scsi-disk-max_unmap_size.xml
--
2.26.0
4 years, 5 months
[libvirt PATCH 0/2] conf: Increase cpuset length limit for CPU pinning
by Jiri Denemark
The tests in patch 2/2 would fail without the first patch.
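For background (not part of this series), a rough sketch of how a vCPU gets
pinned to a large host CPU ID through the public API; the helper name and the
CPU number are illustrative only:

#include <stdlib.h>
#include <libvirt/libvirt.h>

/* Pin vCPU 0 of 'dom' to a single, possibly very large, host CPU ID.
 * The cpumap is a bitmap whose length grows with the highest CPU ID. */
int pin_vcpu0_to_host_cpu(virDomainPtr dom, unsigned int hostcpu)
{
    int maplen = VIR_CPU_MAPLEN(hostcpu + 1);  /* bytes needed for the bitmap */
    unsigned char *cpumap = calloc(maplen, 1);
    int ret;

    if (!cpumap)
        return -1;

    VIR_USE_CPU(cpumap, hostcpu);              /* set only the bit for hostcpu */
    ret = virDomainPinVcpu(dom, 0, cpumap, maplen);
    free(cpumap);
    return ret;
}

Pinning to host CPU 4095, for instance, needs a VIR_CPU_MAPLEN(4096) = 512
byte map.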
Jiri Denemark (2):
conf: Increase cpuset length limit for CPU pinning
qemuxml2*test: Add cases for CPU pinning to large host CPU IDs
src/conf/domain_conf.h | 2 +-
.../cputune-cpuset-big-id.x86_64-latest.args | 39 +++++++++++++++++
.../cputune-cpuset-big-id.xml | 33 ++++++++++++++
tests/qemuxml2argvtest.c | 1 +
.../cputune-cpuset-big-id.x86_64-latest.xml | 43 +++++++++++++++++++
tests/qemuxml2xmltest.c | 1 +
6 files changed, 118 insertions(+), 1 deletion(-)
create mode 100644 tests/qemuxml2argvdata/cputune-cpuset-big-id.x86_64-latest.args
create mode 100644 tests/qemuxml2argvdata/cputune-cpuset-big-id.xml
create mode 100644 tests/qemuxml2xmloutdata/cputune-cpuset-big-id.x86_64-latest.xml
--
2.27.0
4 years, 5 months
Does 'numad' interact with the memory migration triggered by 'numatune'?
by Daniel Henrique Barboza
Hi,
While investigating 'virsh numatune' behavior in Power 9 guests I came
across this question and couldn't find a direct answer.
numad's role, as far as [1] goes, is automatic NUMA affinity only. As far as
libvirt and my understanding go, numad is used for placement='auto' setups,
which aren't even allowed for numatune operations in the first place.
The problem is that I'm not sure whether the mere presence of numad running
on the host might be accelerating the memory migration triggered by numatune,
regardless of placement settings. My first answer would be no, but several
examples on the internet show all of the guest's RAM being migrated from one
NUMA node to the other almost instantly*, and since those were all done on
x86 I wonder whether numad has any impact on that.
The reason I'm asking is that I don't have an x86 setup with multiple NUMA
nodes to compare results against, numad has been broken for sparse NUMA
setups for some time now ([2] tells the story if you're interested), and
Power 8/9 happens to operate with sparse NUMA setups, so no numad for me.
I would appreciate it if someone could confirm my suspicion (i.e. that numad
has no influence on the NUMA memory migration triggered by numatune).
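For reference, here is a minimal sketch of what I believe such a live
numatune call boils down to at the API level; the connection URI, domain
name and nodeset are just placeholders and error handling is trimmed:

#include <stdio.h>
#include <libvirt/libvirt.h>

int main(void)
{
    virConnectPtr conn = virConnectOpen("qemu:///system");
    virDomainPtr dom = conn ? virDomainLookupByName(conn, "guest1") : NULL;
    virTypedParameterPtr params = NULL;
    int nparams = 0, maxparams = 0;
    int ret = 1;

    if (!dom)
        goto cleanup;

    /* Bind the guest memory to host NUMA node 1 while the guest runs */
    if (virTypedParamsAddString(&params, &nparams, &maxparams,
                                VIR_DOMAIN_NUMA_NODESET, "1") < 0)
        goto cleanup;

    if (virDomainSetNumaParameters(dom, params, nparams,
                                   VIR_DOMAIN_AFFECT_LIVE) < 0)
        fprintf(stderr, "numatune failed\n");
    else
        ret = 0;

 cleanup:
    virTypedParamsFree(params, nparams);
    if (dom)
        virDomainFree(dom);
    if (conn)
        virConnectClose(conn);
    return ret;
}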
Thanks,
DHB
* or at the very least no one cared to point out that the memory is migrated
according to the paging demands of the guest, which is what I see happening
in Power guests and which works as intended according to the kernel cgroup docs.
[1] https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/...
[2] https://bugs.launchpad.net/ubuntu/+source/numad/+bug/1832915
4 years, 5 months
[libvirt-dockerfiles PATCH] README: Update to reflect repository status
by Andrea Bolognani
The contents of this repository are no longer used, so briefly
explain why that is the case and point interested people in the
right direction.
Signed-off-by: Andrea Bolognani <abologna(a)redhat.com>
---
README.md | 51 ---------------------------------------------------
README.rst | 12 ++++++++++++
2 files changed, 12 insertions(+), 51 deletions(-)
delete mode 100644 README.md
create mode 100644 README.rst
diff --git a/README.md b/README.md
deleted file mode 100644
index 3508517..0000000
--- a/README.md
+++ /dev/null
@@ -1,51 +0,0 @@
-Docker-based build environments for libvirt
-===========================================
-
-These images come with all libvirt build dependencies, including
-optional ones, already installed: this makes it possible to run
-something like
-
- $ docker run \
- -v $(pwd):/libvirt \
- -w /libvirt \
- -it \
- buildenv-centos-7
-
-from a git clone and start building libvirt right away.
-
-Image availability is influenced by libvirt's
-[platform support policy](https://libvirt.org/platforms.html),
-with the obvious caveat that non-Linux operating systems can't
-be run on top of a Linux kernel and as such are not included.
-
-
-Intended use
-------------
-
-The images are primarily intended for use on
-[Travis CI](https://travis-ci.org/libvirt/libvirt).
-
-The primary CI environment for the libvirt project is hosted on
-[CentOS CI](https://ci.centos.org/view/libvirt/); however, since
-that environment feeds off the `master` branch of the various
-projects, it can only detect issues after the code causing them
-has already been merged.
-
-While testing on Travis CI doesn't cover as many platforms or the
-interactions between as many components, it can be very useful as
-a smoke test of sorts that allows developers to catch mistakes
-before posting patches to the mailing list.
-
-As an alternative, images can be used locally without relying on
-third-party services; in this scenario, the number of platforms
-patches are tested against is only limited by image availability
-and hardware resources.
-
-
-Information about build dependencies
-------------------------------------
-
-The list of build dependencies for libvirt (as well as many
-other virtualization-related projects) is taken from the
-[libvirt-ci](https://gitlab.com/libvirt/libvirt-ci) repository,
-which also contains the tooling used to generate Dockerfiles.
diff --git a/README.rst b/README.rst
new file mode 100644
index 0000000..ecdabcf
--- /dev/null
+++ b/README.rst
@@ -0,0 +1,12 @@
+===================================
+!!! This repository is obsolete !!!
+===================================
+
+Container images are now generated as part of the GitLab CI pipeline
+for the respective projects and hosted on the GitLab container
+registry instead of being generated and hosted on Quay, with builds
+triggered manually using the scripts in this repository; the
+Dockerfiles themselves are also now part of the project's repository
+instead of being centrally managed here.
+
+For more information, look into each project's ``ci/`` directory.
--
2.25.4
4 years, 5 months
[libvirt PATCH] docs: add kbase entry showing KVM real time guest config
by Daniel P. Berrangé
There are many different settings required to configure a KVM guest for
real time, low latency workloads. The documentation included here is
based on guidance developed & tested by the Red Hat KVM real time team.
Signed-off-by: Daniel P. Berrangé <berrange(a)redhat.com>
---
docs/kbase.html.in | 3 +
docs/kbase/kvm-realtime.rst | 213 ++++++++++++++++++++++++++++++++++++
2 files changed, 216 insertions(+)
create mode 100644 docs/kbase/kvm-realtime.rst
diff --git a/docs/kbase.html.in b/docs/kbase.html.in
index c586e0f676..e663ca525f 100644
--- a/docs/kbase.html.in
+++ b/docs/kbase.html.in
@@ -36,6 +36,9 @@
<dt><a href="kbase/virtiofs.html">Virtio-FS</a></dt>
<dd>Share a filesystem between the guest and the host</dd>
+
+ <dt><a href="kbase/kvm-realtime.html">KVM real time</a></dt>
+ <dd>Run real time workloads in guests on a KVM hypervisor</dd>
</dl>
</div>
diff --git a/docs/kbase/kvm-realtime.rst b/docs/kbase/kvm-realtime.rst
new file mode 100644
index 0000000000..ac6102879b
--- /dev/null
+++ b/docs/kbase/kvm-realtime.rst
@@ -0,0 +1,213 @@
+==========================
+KVM Real Time Guest Config
+==========================
+
+.. contents::
+
+The KVM hypervisor is capable of running real time guest workloads. This page
+describes the key pieces of configuration required in the domain XML to achieve
+the low latency needs of real time workloads.
+
+For the most part, configuration of the host OS is out of scope of this
+documentation. Refer to the operating system vendor's guidance on configuring
+the host OS and hardware for real time. Note in particular that the default
+kernel used by most Linux distros is not suitable for low latency real time and
+must be replaced by a special kernel build.
+
+
+Host partitioning plan
+======================
+
+Running real time workloads requires carefully partitioning up the host OS
+resources, such that the KVM / QEMU processes are strictly separated from any
+other workload running on the host, both userspace processes and kernel threads.
+
+As such, a subset of the host CPUs needs to be reserved exclusively for running
+KVM guests. This requires that the host kernel be booted using the ``isolcpus``
+kernel command line parameter. This parameter removes a set of CPUs from the
+scheduler, such that no kernel threads or userspace processes will ever get
+placed on those CPUs automatically. KVM guests are then manually placed onto
+these CPUs.
+
+Deciding which host CPUs to reserve for real time requires understanding of the
+guest workload needs and balancing with the host OS needs. The trade off will
+also vary based on the physical hardware available.
+
+For the sake of illustration, this guide will assume a physical machine with two
+NUMA nodes, each with 2 sockets and 4 cores, giving a total of 16 CPUs on the
+host. Furthermore, it is assumed that hyperthreading is either not supported or
+has been disabled in the BIOS, since it is incompatible with real time. Each
+NUMA node is assumed to have 32 GB of RAM, giving 64 GB total for the host.
+
+It is assumed that 2 CPUs in each NUMA node are reserved for the host OS, with
+the remaining 6 CPUs available for KVM real time. With this in mind, the host
+kernel should have been booted with ``isolcpus=2-7,10-15`` to reserve CPUs.
+
+To maximise efficiency of page table lookups for the guest, the host needs to be
+configured with most RAM exposed as huge pages, ideally 1 GB sized. 6 GB of RAM
+in each NUMA node will be reserved for general host OS usage as normal sized
+pages, leaving 26 GB for KVM usage as huge pages.
+
+Once huge pages are reserved on the hypothetical machine, the ``virsh
+capabilities`` command output is expected to look approximately like:
+
+::
+
+ <topology>
+ <cells num='2'>
+ <cell id='0'>
+ <memory unit='KiB'>33554432</memory>
+ <pages unit='KiB' size='4'>1572864</pages>
+ <pages unit='KiB' size='2048'>0</pages>
+ <pages unit='KiB' size='1048576'>26</pages>
+ <distances>
+ <sibling id='0' value='10'/>
+ <sibling id='1' value='21'/>
+ </distances>
+ <cpus num='8'>
+ <cpu id='0' socket_id='0' core_id='0' siblings='0'/>
+ <cpu id='1' socket_id='0' core_id='1' siblings='1'/>
+ <cpu id='2' socket_id='0' core_id='2' siblings='2'/>
+ <cpu id='3' socket_id='0' core_id='3' siblings='3'/>
+ <cpu id='4' socket_id='1' core_id='0' siblings='4'/>
+ <cpu id='5' socket_id='1' core_id='1' siblings='5'/>
+ <cpu id='6' socket_id='1' core_id='2' siblings='6'/>
+ <cpu id='7' socket_id='1' core_id='3' siblings='7'/>
+ </cpus>
+ </cell>
+ <cell id='1'>
+ <memory unit='KiB'>33554432</memory>
+ <pages unit='KiB' size='4'>1572864</pages>
+ <pages unit='KiB' size='2048'>0</pages>
+ <pages unit='KiB' size='1048576'>26</pages>
+ <distances>
+ <sibling id='0' value='21'/>
+ <sibling id='1' value='10'/>
+ </distances>
+ <cpus num='8'>
+ <cpu id='8' socket_id='0' core_id='0' siblings='8'/>
+ <cpu id='9' socket_id='0' core_id='1' siblings='9'/>
+ <cpu id='10' socket_id='0' core_id='2' siblings='10'/>
+ <cpu id='11' socket_id='0' core_id='3' siblings='11'/>
+ <cpu id='12' socket_id='1' core_id='0' siblings='12'/>
+ <cpu id='13' socket_id='1' core_id='1' siblings='13'/>
+ <cpu id='14' socket_id='1' core_id='2' siblings='14'/>
+ <cpu id='15' socket_id='1' core_id='3' siblings='15'/>
+ </cpus>
+ </cell>
+ </cells>
+ </topology>
+
+Be aware that CPU ID numbers are not always allocated sequentially as shown
+here. It is not unusual to see IDs interleaved between sockets on the two NUMA
+nodes, such that ``0-3,8-11`` are on the first node and ``4-7,12-15`` are on
+the second node. Carefully check the ``virsh capabilities`` output to determine
+the CPU ID numbers when configuring both ``isolcpus`` and the guest ``cpuset``
+values.
+
+Guest configuration
+===================
+
+What follows is an overview of the key parts of the domain XML that need to be
+configured to achieve low latency for real time workflows. The following example
+will assume a 4 CPU guest, requiring 16 GB of RAM. It is intended to be placed
+on the second host NUMA node.
+
+CPU configuration
+-----------------
+
+Real time KVM guests intended to run Linux should have a minimum of 2 CPUs.
+One vCPU is for running non-real time processes and performing I/O. The other
+vCPUs will run real time applications. Some non-Linux OSes may not require a
+special non-real time CPU to be available, in which case the 2 CPU minimum would
+not apply.
+
+Each guest CPU, even the non-real time one, needs to be pinned to a dedicated
+host core that is in the ``isolcpus`` reserved set. The QEMU emulator threads
+also need to be pinned to host CPUs that are not in the ``isolcpus`` reserved set.
+The vCPUs need to be given a real time CPU scheduler policy.
+
+When configuring the `guest CPU count <../formatdomain.html#elementsCPUAllocation>`_,
+do not include any CPU affinity at this stage:
+
+::
+
+ <vcpu placement='static'>4</vcpu>
+
+The guest CPUs now need to be placed individually. In this case, they will all
+be put within the same host socket, such that they can be exposed as core
+siblings. This is achieved using the `CPU tuning config <../formatdomain.html#elementsCPUTuning>`_:
+
+::
+
+ <cputune>
+ <emulatorpin cpuset="8-9"/>
+ <vcpupin vcpu="0" cpuset="12"/>
+ <vcpupin vcpu="1" cpuset="13"/>
+ <vcpupin vcpu="2" cpuset="14"/>
+ <vcpupin vcpu="3" cpuset="15"/>
+ <vcpusched vcpus='0-3' scheduler='fifo' priority='1'/>
+ </cputune>
+
+The `guest CPU model <../formatdomain.html#elementsCPU>`_ now needs to be
+configured to pass through the host model unchanged, with topology matching the
+placement:
+
+::
+
+ <cpu mode='host-passthrough'>
+ <topology sockets='1' dies='1' cores='4' threads='1'/>
+ <feature policy='require' name='tsc-deadline'/>
+ </cpu>
+
+The performance monitoring unit virtualization needs to be disabled
+via the `hypervisor features <../formatdomain.html#elementsFeatures>`_:
+
+::
+
+ <features>
+ ...
+ <pmu state='off'/>
+ </features>
+
+
+Memory configuration
+--------------------
+
+The host memory used for guest RAM needs to be allocated from huge pages on the
+second NUMA node, and all other memory allocation needs to be locked into RAM
+with memory page sharing disabled.
+This is achieved by using the `memory backing config <../formatdomain.html#elementsMemoryBacking>`_:
+
+::
+
+ <memoryBacking>
+ <hugepages>
+ <page size="1" unit="G" nodeset="1"/>
+ </hugepages>
+ <locked/>
+ <nosharepages/>
+ </memoryBacking>
+
+
+Device configuration
+--------------------
+
+Libvirt adds a few devices by default to maintain historical QEMU configuration
+behaviour. It is unlikely these devices are required by real time guests, so it
+is wise to disable them. Remove all USB controllers that may exist in the XML
+config and replace them with:
+
+::
+
+ <controller type="usb" model="none"/>
+
+Similarly, the memory balloon config should be changed to:
+
+::
+
+ <memballoon model="none"/>
+
+If the guest had a graphical console at installation time, this can also be
+disabled, with remote access provided over SSH and a minimal serial console
+kept for emergencies.
--
2.26.2
4 years, 5 months
[PATCH 0/2] virDevMapperGetTargetsImpl: Check for dm major properly
by Michal Privoznik
*** BLURB HERE ***
Michal Prívozník (2):
util: Move virIsDevMapperDevice() to virdevmapper.c
virDevMapperGetTargetsImpl: Check for dm major properly
src/libvirt_private.syms | 2 +-
src/storage/parthelper.c | 2 +-
src/storage/storage_backend_disk.c | 1 +
src/util/virdevmapper.c | 33 ++++++++++++++++++++++--------
src/util/virdevmapper.h | 3 +++
src/util/virutil.c | 24 ----------------------
src/util/virutil.h | 2 --
7 files changed, 31 insertions(+), 36 deletions(-)
--
2.26.2
4 years, 5 months
[libvirt-dockerfiles PATCH] Drop libvirt images
by Andrea Bolognani
As of
https://gitlab.com/libvirt/libvirt/-/commit/95abbdc432133b9ae4a76d15251d6...
libvirt uses the GitLab container registry for its CI, so we no
longer need to build these on Quay.
Signed-off-by: Andrea Bolognani <abologna(a)redhat.com>
---
Pushed under the Dockerfile refresh rule.
buildenv-libvirt-centos-7.zip | Bin 2022 -> 0 bytes
buildenv-libvirt-centos-8.zip | Bin 946 -> 0 bytes
buildenv-libvirt-debian-10-cross-aarch64.zip | Bin 1117 -> 0 bytes
buildenv-libvirt-debian-10-cross-armv6l.zip | Bin 1110 -> 0 bytes
buildenv-libvirt-debian-10-cross-armv7l.zip | Bin 1115 -> 0 bytes
buildenv-libvirt-debian-10-cross-i686.zip | Bin 1112 -> 0 bytes
buildenv-libvirt-debian-10-cross-mips.zip | Bin 1110 -> 0 bytes
buildenv-libvirt-debian-10-cross-mips64el.zip | Bin 1121 -> 0 bytes
buildenv-libvirt-debian-10-cross-mipsel.zip | Bin 1113 -> 0 bytes
buildenv-libvirt-debian-10-cross-ppc64le.zip | Bin 1123 -> 0 bytes
buildenv-libvirt-debian-10-cross-s390x.zip | Bin 1112 -> 0 bytes
buildenv-libvirt-debian-10.zip | Bin 1039 -> 0 bytes
buildenv-libvirt-debian-9-cross-aarch64.zip | Bin 1150 -> 0 bytes
buildenv-libvirt-debian-9-cross-armv6l.zip | Bin 1142 -> 0 bytes
buildenv-libvirt-debian-9-cross-armv7l.zip | Bin 1147 -> 0 bytes
buildenv-libvirt-debian-9-cross-mips.zip | Bin 1141 -> 0 bytes
buildenv-libvirt-debian-9-cross-mips64el.zip | Bin 1152 -> 0 bytes
buildenv-libvirt-debian-9-cross-mipsel.zip | Bin 1146 -> 0 bytes
buildenv-libvirt-debian-9-cross-ppc64le.zip | Bin 1154 -> 0 bytes
buildenv-libvirt-debian-9-cross-s390x.zip | Bin 1144 -> 0 bytes
buildenv-libvirt-debian-9.zip | Bin 1069 -> 0 bytes
buildenv-libvirt-debian-sid-cross-aarch64.zip | Bin 1117 -> 0 bytes
buildenv-libvirt-debian-sid-cross-armv6l.zip | Bin 1110 -> 0 bytes
buildenv-libvirt-debian-sid-cross-armv7l.zip | Bin 1115 -> 0 bytes
buildenv-libvirt-debian-sid-cross-i686.zip | Bin 1112 -> 0 bytes
buildenv-libvirt-debian-sid-cross-mips64el.zip | Bin 1121 -> 0 bytes
buildenv-libvirt-debian-sid-cross-mipsel.zip | Bin 1109 -> 0 bytes
buildenv-libvirt-debian-sid-cross-ppc64le.zip | Bin 1123 -> 0 bytes
buildenv-libvirt-debian-sid-cross-s390x.zip | Bin 1112 -> 0 bytes
buildenv-libvirt-debian-sid.zip | Bin 1039 -> 0 bytes
buildenv-libvirt-fedora-31.zip | Bin 910 -> 0 bytes
buildenv-libvirt-fedora-32.zip | Bin 910 -> 0 bytes
...denv-libvirt-fedora-rawhide-cross-mingw32.zip | Bin 1057 -> 0 bytes
...denv-libvirt-fedora-rawhide-cross-mingw64.zip | Bin 1060 -> 0 bytes
buildenv-libvirt-fedora-rawhide.zip | Bin 929 -> 0 bytes
buildenv-libvirt-opensuse-151.zip | Bin 920 -> 0 bytes
buildenv-libvirt-ubuntu-1804.zip | Bin 1075 -> 0 bytes
buildenv-libvirt-ubuntu-2004.zip | Bin 1045 -> 0 bytes
38 files changed, 0 insertions(+), 0 deletions(-)
delete mode 100644 buildenv-libvirt-centos-7.zip
delete mode 100644 buildenv-libvirt-centos-8.zip
delete mode 100644 buildenv-libvirt-debian-10-cross-aarch64.zip
delete mode 100644 buildenv-libvirt-debian-10-cross-armv6l.zip
delete mode 100644 buildenv-libvirt-debian-10-cross-armv7l.zip
delete mode 100644 buildenv-libvirt-debian-10-cross-i686.zip
delete mode 100644 buildenv-libvirt-debian-10-cross-mips.zip
delete mode 100644 buildenv-libvirt-debian-10-cross-mips64el.zip
delete mode 100644 buildenv-libvirt-debian-10-cross-mipsel.zip
delete mode 100644 buildenv-libvirt-debian-10-cross-ppc64le.zip
delete mode 100644 buildenv-libvirt-debian-10-cross-s390x.zip
delete mode 100644 buildenv-libvirt-debian-10.zip
delete mode 100644 buildenv-libvirt-debian-9-cross-aarch64.zip
delete mode 100644 buildenv-libvirt-debian-9-cross-armv6l.zip
delete mode 100644 buildenv-libvirt-debian-9-cross-armv7l.zip
delete mode 100644 buildenv-libvirt-debian-9-cross-mips.zip
delete mode 100644 buildenv-libvirt-debian-9-cross-mips64el.zip
delete mode 100644 buildenv-libvirt-debian-9-cross-mipsel.zip
delete mode 100644 buildenv-libvirt-debian-9-cross-ppc64le.zip
delete mode 100644 buildenv-libvirt-debian-9-cross-s390x.zip
delete mode 100644 buildenv-libvirt-debian-9.zip
delete mode 100644 buildenv-libvirt-debian-sid-cross-aarch64.zip
delete mode 100644 buildenv-libvirt-debian-sid-cross-armv6l.zip
delete mode 100644 buildenv-libvirt-debian-sid-cross-armv7l.zip
delete mode 100644 buildenv-libvirt-debian-sid-cross-i686.zip
delete mode 100644 buildenv-libvirt-debian-sid-cross-mips64el.zip
delete mode 100644 buildenv-libvirt-debian-sid-cross-mipsel.zip
delete mode 100644 buildenv-libvirt-debian-sid-cross-ppc64le.zip
delete mode 100644 buildenv-libvirt-debian-sid-cross-s390x.zip
delete mode 100644 buildenv-libvirt-debian-sid.zip
delete mode 100644 buildenv-libvirt-fedora-31.zip
delete mode 100644 buildenv-libvirt-fedora-32.zip
delete mode 100644 buildenv-libvirt-fedora-rawhide-cross-mingw32.zip
delete mode 100644 buildenv-libvirt-fedora-rawhide-cross-mingw64.zip
delete mode 100644 buildenv-libvirt-fedora-rawhide.zip
delete mode 100644 buildenv-libvirt-opensuse-151.zip
delete mode 100644 buildenv-libvirt-ubuntu-1804.zip
delete mode 100644 buildenv-libvirt-ubuntu-2004.zip
--
2.25.4
4 years, 5 months
driver-storage: undefined symbols in libvirt_storage_backend_*.so
by liangpeng (H)
Hello everyone,
There are lots of undefined symbols in /usr/lib64/libvirt/storage-backend/libvirt_storage_backend_*.so. For example,
# ldd -r /usr/lib64/libvirt/storage-backend/libvirt_storage_backend_fs.so
linux-vdso.so.1 (0x0000ffff97901000)
...
libgpg-error.so.0 => /usr/lib64/libgpg-error.so.0 (0x0000ffff9555a000)
undefined symbol: virStorageBackendRefreshLocal (/usr/lib64/libvirt/storage-backend/libvirt_storage_backend_fs.so)
undefined symbol: virStorageBackendDeleteLocal (/usr/lib64/libvirt/storage-backend/libvirt_storage_backend_fs.so)
undefined symbol: virStorageBackendVolBuildLocal (/usr/lib64/libvirt/storage-backend/libvirt_storage_backend_fs.so)
undefined symbol: virStorageBackendVolBuildFromLocal (/usr/lib64/libvirt/storage-backend/libvirt_storage_backend_fs.so)
undefined symbol: virStorageBackendVolCreateLocal (/usr/lib64/libvirt/storage-backend/libvirt_storage_backend_fs.so)
undefined symbol: virStorageBackendVolRefreshLocal (/usr/lib64/libvirt/storage-backend/libvirt_storage_backend_fs.so)
undefined symbol: virStorageBackendVolDeleteLocal (/usr/lib64/libvirt/storage-backend/libvirt_storage_backend_fs.so)
undefined symbol: virStorageBackendVolResizeLocal (/usr/lib64/libvirt/storage-backend/libvirt_storage_backend_fs.so)
undefined symbol: virStorageBackendVolUploadLocal (/usr/lib64/libvirt/storage-backend/libvirt_storage_backend_fs.so)
undefined symbol: virStorageBackendVolDownloadLocal (/usr/lib64/libvirt/storage-backend/libvirt_storage_backend_fs.so)
undefined symbol: virStorageBackendVolWipeLocal (/usr/lib64/libvirt/storage-backend/libvirt_storage_backend_fs.so)
undefined symbol: virStorageBackendFileSystemMountCmd (/usr/lib64/libvirt/storage-backend/libvirt_storage_backend_fs.so)
undefined symbol: virStorageBackendFindGlusterPoolSources (/usr/lib64/libvirt/storage-backend/libvirt_storage_backend_fs.so)
undefined symbol: virStorageBackendRegister (/usr/lib64/libvirt/storage-backend/libvirt_storage_backend_fs.so)
undefined symbol: virStorageBackendNamespaceInit (/usr/lib64/libvirt/storage-backend/libvirt_storage_backend_fs.so)
undefined symbol: virStorageBackendDeviceIsEmpty (/usr/lib64/libvirt/storage-backend/libvirt_storage_backend_fs.so)
undefined symbol: virStorageBackendFileSystemGetPoolSource (/usr/lib64/libvirt/storage-backend/libvirt_storage_backend_fs.so)
undefined symbol: virStorageBackendBuildLocal (/usr/lib64/libvirt/storage-backend/libvirt_storage_backend_fs.so)
All of the undefined symbols are defined in libvirt_driver_storage.so, and the libvirt_storage_backend_*.so modules are loaded by
virStorageDriverLoadBackendModule in libvirt_driver_storage.so, so there is no error at runtime.
Shall we add libvirt_driver_storage.so to the shared object dependencies of libvirt_storage_backend_*.so?
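For what it's worth, the behaviour can be illustrated with a small standalone
program; this is just a sketch of dlopen() resolving symbols from libraries
that are already loaded, not libvirt's actual loader code, and the flags are
my own choice:

#include <stdio.h>
#include <dlfcn.h>

int main(void)
{
    /* Opened directly, without libvirt_driver_storage.so already mapped
     * into the process, this fails with the same "undefined symbol"
     * errors that ldd -r reports above. Inside libvirtd the module is
     * only dlopen()ed by code in libvirt_driver_storage.so, so that
     * library is necessarily loaded first and the symbols resolve. */
    void *handle = dlopen("/usr/lib64/libvirt/storage-backend/"
                          "libvirt_storage_backend_fs.so",
                          RTLD_NOW | RTLD_GLOBAL);

    if (!handle) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return 1;
    }

    puts("module loaded, symbols resolved from already-loaded libraries");
    dlclose(handle);
    return 0;
}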
4 years, 5 months