[libvirt][PATCH v1 0/3] introduce 'restrictive' mode in numatune
by Luyao Zhong
Before this patch set, numatune only supports three memory modes:
strict, interleave and preferred. These memory policies are
ultimately set via the mbind() system call.
The memory policy may also be 'hard coded' into the kernel, and none
of the above policies fits our requirement in that case. mbind() does
support the default memory policy, but it requires a NULL nodemask, so
restricting the allowed memory nodes is clearly a job for cgroups here.
Therefore we introduce a new option for the numatune mode named 'restrictive':
<numatune>
  <memory mode="restrictive" nodeset="1-4,^3"/>
  <memnode cellid="0" mode="restrictive" nodeset="1"/>
  <memnode cellid="2" mode="restrictive" nodeset="2"/>
</numatune>
The config above means we only use cgroups to restrict the allowed
memory nodes and do not set any specific memory policy explicitly.
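For illustration only (not part of the patches; the helper name and cgroup
path below are made up), a minimal C sketch of the idea: keep the kernel's
default policy via mbind(), and leave the node restriction to cgroups.

#include <numaif.h>   /* mbind(), MPOL_DEFAULT; link with -lnuma */

/* MPOL_DEFAULT must be given a NULL nodemask, so mbind() alone cannot
 * express "default kernel policy, but only allocate from nodes 1-2". */
static long
keepDefaultPolicy(void *addr, unsigned long len)
{
    return mbind(addr, len, MPOL_DEFAULT, NULL, 0, 0);
}

/* The allowed-node restriction therefore has to come from cgroups, e.g.:
 *
 *   echo 1-2 > /sys/fs/cgroup/cpuset/machine.slice/<vm>/cpuset.mems
 */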
RFC discussion:
https://www.redhat.com/archives/libvir-list/2020-November/msg01256.html
Regards,
Luyao
Luyao Zhong (3):
docs: add docs for 'restrictive' option for mode in numatune
schema: add 'restrictive' config option for mode in numatune
qemu: add parser and formatter for 'restrictive' mode in numatune
docs/formatdomain.rst | 7 +++-
docs/schemas/domaincommon.rng | 2 +
include/libvirt/libvirt-domain.h | 1 +
src/conf/numa_conf.c | 9 +++++
src/qemu/qemu_command.c | 6 ++-
src/qemu/qemu_process.c | 27 +++++++++++++
src/util/virnuma.c | 3 ++
.../numatune-memnode-invalid-mode.err | 1 +
.../numatune-memnode-invalid-mode.xml | 33 +++++++++++++++
...emnode-restrictive-mode.x86_64-latest.args | 40 +++++++++++++++++++
.../numatune-memnode-restrictive-mode.xml | 33 +++++++++++++++
tests/qemuxml2argvtest.c | 2 +
...memnode-restrictive-mode.x86_64-latest.xml | 40 +++++++++++++++++++
tests/qemuxml2xmltest.c | 1 +
14 files changed, 202 insertions(+), 3 deletions(-)
create mode 100644 tests/qemuxml2argvdata/numatune-memnode-invalid-mode.err
create mode 100644 tests/qemuxml2argvdata/numatune-memnode-invalid-mode.xml
create mode 100644 tests/qemuxml2argvdata/numatune-memnode-restrictive-mode.x86_64-latest.args
create mode 100644 tests/qemuxml2argvdata/numatune-memnode-restrictive-mode.xml
create mode 100644 tests/qemuxml2xmloutdata/numatune-memnode-restrictive-mode.x86_64-latest.xml
--
2.25.4
[PATCH v2] gitlab-ci: publish test report as an artifact
by Paolo Bonzini
Since version 0.55, "meson test" produces JUnit XML in the meson-logs
directory. The XML can be parsed by GitLab and shown as part of the
CI report.
However, if the build and tests are performed by "meson dist",
the tests run in "meson dist"'s own build directory
and the logs are not accessible. So switch from "ninja dist"
to "meson dist --no-tests" after a separate build step that
is shared by the normal and the DIST=skip cases.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
v1->v2: only do it for new-enough distros
For an example see
https://gitlab.com/bonzini/libvirt/-/pipelines/221545357/test_report.
Test durations, however, are not yet available in upstream Meson.
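As an aside (not part of the patch): with Meson >= 0.55 the same report can
be produced locally, e.g.:

$ meson setup build
$ meson test -C build
$ ls build/meson-logs/testlog.junit.xml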
---
.gitlab-ci.yml | 38 ++++++++++++++++++++++++++++++++++----
1 file changed, 34 insertions(+), 4 deletions(-)
diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
index 6792accf8f..c4b54201f8 100644
--- a/.gitlab-ci.yml
+++ b/.gitlab-ci.yml
@@ -40,6 +40,36 @@ stages:
<<: *container_job_definition
allow_failure: true
+# For new enough distros that have Meson 0.55, include the JUnit XML
+# report in the artifacts. In order to preserve it, we cannot use
+# "meson dist" and need to do the compilation test manually.
+# Fortunately, "meson dist --no-tests" was also added in Meson 0.55.
+.native_meson055_build_job_template: &native_meson055_build_job_definition
+ stage: builds
+ image: $CI_REGISTRY_IMAGE/ci-$NAME:latest
+ cache:
+ paths:
+ - ccache/
+ key: "$CI_JOB_NAME"
+ before_script:
+ - *script_variables
+ script:
+ - meson build --werror || (cat build/meson-logs/meson-log.txt && exit 1)
+ - ninja -C build
+ - ninja -C build test
+ - DESTDIR=$PWD/install/ ninja -C build install
+ - meson dist -C build --no-tests
+ - if test -x /usr/bin/rpmbuild && test "$RPM" != "skip";
+ then
+ rpmbuild --nodeps -ta build/meson-dist/libvirt-*.tar.xz;
+ fi
+ artifacts:
+ when: always
+ paths:
+ - build/meson-logs/
+ reports:
+ junit: build/meson-logs/testlog.junit.xml
+
.native_build_job_template: &native_build_job_definition
stage: builds
image: $CI_REGISTRY_IMAGE/ci-$NAME:latest
@@ -292,7 +322,7 @@ x64-debian-10-clang:
CC: clang
x64-debian-sid:
- <<: *native_build_job_definition
+ <<: *native_meson055_build_job_definition
needs:
- x64-debian-sid-container
variables:
@@ -343,21 +373,21 @@ x64-fedora-31:
RPM: skip
x64-fedora-32:
- <<: *native_build_job_definition
+ <<: *native_meson055_build_job_definition
needs:
- x64-fedora-32-container
variables:
NAME: fedora-32
x64-fedora-rawhide:
- <<: *native_build_job_definition
+ <<: *native_meson055_build_job_definition
needs:
- x64-fedora-rawhide-container
variables:
NAME: fedora-rawhide
x64-fedora-rawhide-clang:
- <<: *native_build_job_definition
+ <<: *native_meson055_build_job_definition
needs:
- x64-fedora-rawhide-container
variables:
--
2.28.0
[PATCH] NEWS: Mention network disk support in 'virsh attach-disk'
by Peter Krempa
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
---
NEWS.rst | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/NEWS.rst b/NEWS.rst
index aa8a217eb6..8c2b5def0f 100644
--- a/NEWS.rst
+++ b/NEWS.rst
@@ -42,6 +42,14 @@ v6.10.0 (unreleased)
* **Improvements**
+ * virsh: Support network disks in ``virsh attach-disk``
+
+ The ``virsh attach-disk`` helper command, which simplifies attaching disks
+ without the need for the user to formulate the disk XML manually, now
+ supports network-backed images. Users can specify the protocol and host
+ specification with new command line arguments. Please refer to the man
+ page of virsh for further information.
+
* **Bug fixes**
* **Removed features**
--
2.28.0
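As an illustration of the new usage described above (the option names and
source syntax below are my best guess and may not match the final
implementation; please check virsh(1) for the authoritative spelling):

$ virsh attach-disk guest1 imagepool/imagename vdb \
    --source-protocol rbd --source-host-name mon.example.com:6789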
[PATCH] gitlab-ci: publish test report as an artifact
by Paolo Bonzini
"meson test" produces JUnit XML in the meson-logs directory. The
XML can be parsed by GitLab and shown as part of the CI report.
However, if the build and tests are performed by "meson dist",
the tests run in "meson dist"'s own build directory
and the logs are not accessible. So switch from "ninja dist"
to "meson dist --no-tests" after a separate build step that
is shared by the normal and the DIST=skip cases.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
For an example see
https://gitlab.com/bonzini/libvirt/-/pipelines/221545357/test_report.
Test durations, however, are not yet available in upstream Meson.
---
.gitlab-ci.yml | 14 ++++++++++----
1 file changed, 10 insertions(+), 4 deletions(-)
diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
index 6792accf8f..ce7b60dc6b 100644
--- a/.gitlab-ci.yml
+++ b/.gitlab-ci.yml
@@ -51,17 +51,23 @@ stages:
- *script_variables
script:
- meson build --werror || (cat build/meson-logs/meson-log.txt && exit 1)
+ - ninja -C build
+ - ninja -C build test
+ - DESTDIR=$PWD/install/ ninja -C build install
- if test "$DIST" != "skip";
then
- ninja -C build dist;
- else
- ninja -C build;
- ninja -C build test;
+ meson dist -C build --no-tests;
fi
- if test -x /usr/bin/rpmbuild && test "$RPM" != "skip";
then
rpmbuild --nodeps -ta build/meson-dist/libvirt-*.tar.xz;
fi
+ artifacts:
+ when: always
+ paths:
+ - build/meson-logs/
+ reports:
+ junit: build/meson-logs/testlog.junit.xml
# Jobs that we delegate to Cirrus CI because they require an operating
# system other than Linux. These jobs will only run if the required
--
2.28.0
[libvirt PATCH 0/2] fix regression in SSH tunnelling performance
by Daniel P. Berrangé
In testing the "vol-download" command in virsh, downloading a
1G file takes a ridiculous amount of time (minutes) with the
new SSH helper.
After the first patch is applied, the time gets down to a much
more reasonable 5.5 seconds on my test machine.
By comparison, netcat achieved 4 seconds.
After applying the second patch, the time is reduced to 3.5
seconds, so we actually end up beating netcat, as long as
we have glib >= 2.64.0 available.
Daniel P. Berrangé (2):
remote: make ssh-helper massively faster
util: avoid glib event loop workaround where possible
src/remote/remote_ssh_helper.c | 113 ++++++++++++++++++++-------------
src/util/vireventglib.c | 29 ++++++---
2 files changed, 89 insertions(+), 53 deletions(-)
--
2.25.4
Migration with "--p2p --tunnelled" hanging in v6.9.0
by Christian Ehrhardt
Hi,
I'm wondering about the best next steps to debug a migration issue.
What I found is that with libvirt v6.9.0 a migration hangs if used like:
$ virsh migrate --unsafe --live --p2p --tunnelled h-migr-test \
qemu+ssh://testkvm-hirsute-to/system
Just "--live --p2p" works fine. Also a bunch of other migration option
combinations work with the same build/setup, just p2p+tunnelled fails.
Also if either source or target are not on 6.9 the migration works
(former version used is v6.6 for me).
I looked at alternating qemu versions (5.0 / 5.1), but it had no impact.
It only depends on libvirt to be on the new version on source&target to
trigger the issue.
Unfortunately there is no crash or error to debug into, it just gets stuck
with the "virsh migrate" hanging on the source and never leaving the "paused"
state on the target.
I have compared setups with the least amount of "change":
good: Qemu 5.1 + Libvirt 6.9 -> Qemu 5.1 / Libvirt 6.6
bad: Qemu 5.1 + Libvirt 6.9 -> Qemu 5.1 / Libvirt 6.9
[1] has the debug logs of both, beginning with a libvirtd restart that one can
likely skip, and then continuing into the migration that hangs in the bad case.
But I failed to see an obvious reason in the log.
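(For context, and not taken from the report: debug logs of this kind are
typically enabled on both hosts in /etc/libvirt/libvirtd.conf, for example

  log_filters="1:qemu 1:libvirt 3:object 3:json 3:event 1:util"
  log_outputs="1:file:/var/log/libvirt/libvirtd.log"

followed by a libvirtd restart; the filter values shown are illustrative.)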
In git/news I only found these changes, which sounded relevant:
f51cbe92c0 qemu: Allow migration over UNIX socket
c69915ccaf peer2peer migration: allow connecting to local sockets
But I'm not using unix:, and in the logs the only unix: mentions are for the
qemu monitor and qemu-guest-agent.
I wanted to ask:
- if something related was recently changed that comes to mind?
- if someone else sees anything in the linked logs that I missed?
- if someone else has seen/reproduced the same?
- for best practices to debug a hanging migration
Thanks in advance!
[1]: https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1904584/+attachment/5...
--
Christian Ehrhardt
Staff Engineer, Ubuntu Server
Canonical Ltd
[libvirt PATCH 0/4] cgroup cpu period and quota fixes
by Pavel Hrdina
Pavel Hrdina (4):
qemu: move cgroup cpu period and quota defines to vircgroup.h
vircgroupv1: use defines for cpu period and quota limits
vircgroupv2: use defines for cpu period and quota limits
vircgroup: fix cpu quota maximum limit
src/qemu/qemu_driver.c | 21 ++++++++-------------
src/util/vircgroup.h | 7 +++++++
src/util/vircgroupv1.c | 23 ++++++++++++-----------
src/util/vircgroupv2.c | 25 +++++++++++++------------
4 files changed, 40 insertions(+), 36 deletions(-)
--
2.26.2
[PATCH] qemu: Tweak debug message for qemuMigrationSrcPerformPeer2Peer3
by Martin Kletzander
Commit 49186372dbe8 forgot to add the new parameter.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
---
Pushed as 'trivial'.
src/qemu/qemu_migration.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index 4be8f3c64c94..fcb33d0364a1 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -4356,12 +4356,12 @@ qemuMigrationSrcPerformPeer2Peer3(virQEMUDriverPtr driver,
VIR_DEBUG("driver=%p, sconn=%p, dconn=%p, dconnuri=%s, vm=%p, xmlin=%s, "
"dname=%s, uri=%s, graphicsuri=%s, listenAddress=%s, "
- "nmigrate_disks=%zu, migrate_disks=%p, nbdPort=%d, "
+ "nmigrate_disks=%zu, migrate_disks=%p, nbdPort=%d, nbdURI=%s, "
"bandwidth=%llu, useParams=%d, flags=0x%lx",
driver, sconn, dconn, NULLSTR(dconnuri), vm, NULLSTR(xmlin),
NULLSTR(dname), NULLSTR(uri), NULLSTR(graphicsuri),
NULLSTR(listenAddress), nmigrate_disks, migrate_disks, nbdPort,
- bandwidth, useParams, flags);
+ NULLSTR(nbdURI), bandwidth, useParams, flags);
/* Unlike the virDomainMigrateVersion3 counterpart, we don't need
* to worry about auto-setting the VIR_MIGRATE_CHANGE_PROTECTION
--
2.29.2
[libvirt PATCH 0/2] Add more documentation for migrations over UNIX sockets
by Martin Kletzander
A few words about SELinux that might not be very clear to some.
Martin Kletzander (2):
qemu: Disable NBD TLS migration over UNIX socket
docs: Document SELinux caveats when migrating over UNIX sockets
docs/manpages/virsh.rst | 9 ++++++++-
docs/migration.html.in | 9 +++++++++
src/qemu/qemu_migration.c | 10 ++++++++--
3 files changed, 25 insertions(+), 3 deletions(-)
--
2.29.2
[PATCH v2 0/5] Hypervisor CPU Baseline Cleanups and Fixes
by Collin Walling
The following patches provide some TLC to the hypervisor CPU baseline
handler within the qemu_driver code.
#1 checks for the cpu-model-expansion capability before
executing the baseline handler since it is used for feature expansion.
#2 fixes a styling issue where a < 0 condition was missing from one of
the if (function()) lines, for consistency's sake.
#3 will check if the cpu definition(s) are valid and contain a model
name.
#4 checks the cpu definition(s) model names against the model names
known by the hypervisor. This patch must come before #5.
#5 will allow the baseline command to be run with a single cpu
definition, whereas before the command would simply fail with an
unhelpful error message. A CPU model expansion will be performed in
this case, which will produce the same result as if the model were
actually baselined.
Note: without patch #4, #5 can result in a segfault in the case where
a single CPU model is provided and the model is not recognized by the
hypervisor. This is because cpu model expansion will return 0 and the
result will be NULL.
Since the QMP response returns "GenericError" in the case of a missing
CPU model, or if the command is not supported (and perhaps for other
reasons I am unsure of -- the response does not explicitly detail that
the CPU model provided was erroneous), we cannot rely on this
response always meaning there was a missing CPU model.
So, to be safe and sure, the CPU model is checked against the list of
CPU models known to the hypervisor prior to baselining / expanding (a
list that was retrieved earlier during libvirt init).
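For reference (the CPU model name below is illustrative, not taken from
the series), patch #5 enables single-CPU invocations such as:

$ cat single-cpu.xml
<cpu>
  <model>z14-base</model>
</cpu>
$ virsh hypervisor-cpu-baseline single-cpu.xml --features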
Collin Walling (5):
qemu: check for model-expansion cap before baselining
qemu: fix one instance of rc check styling in baseline
qemu: report error if missing model name when baselining
qemu: check if cpu model is supported before baselining
qemu: fix error message when baselining with a single cpu
src/qemu/qemu_driver.c | 45 ++++++++++++++++++++++++++++++++++--------
1 file changed, 37 insertions(+), 8 deletions(-)
--
2.26.2