[PATCH] virnetdevtap: Fix memory leak in virNetDevTapReattachBridge
by liu.song13@zte.com.cn
From: QiangWei Zhang <zhang.qiangwei(a)zte.com.cn>
Variable 'master' needs to be freed because it will be reassigned in
virNetDevOpenvswitchInterfaceGetMaster().
The leak stack reported by AddressSanitizer:
Direct leak of 11 byte(s) in 1 object(s) allocated from:
#0 0x7f7dad8ba6df in __interceptor_malloc (/lib64/libasan.so.8+0xba6df)
#1 0x7f7dad715728 in g_malloc (/lib64/libglib-2.0.so.0+0x60728)
#2 0x7f7dad72d8b2 in g_strdup (/lib64/libglib-2.0.so.0+0x788b2)
#3 0x7f7dacb63088 in g_strdup_inline /usr/include/glib-2.0/glib/gstrfuncs.h:321
#4 0x7f7dacb63088 in virNetDevGetName ../src/util/virnetdev.c:823
#5 0x7f7dacb63886 in virNetDevGetMaster ../src/util/virnetdev.c:909
#6 0x7f7dacb90288 in virNetDevTapReattachBridge ../src/util/virnetdevtap.c:527
#7 0x7f7dacd5cd67 in virDomainNetNotifyActualDevice ../src/conf/domain_conf.c:30505
#8 0x7f7da3a10bc3 in qemuProcessNotifyNets ../src/qemu/qemu_process.c:3290
#9 0x7f7da3a375c6 in qemuProcessReconnect ../src/qemu/qemu_process.c:9211
#10 0x7f7dacc0cc53 in virThreadHelper ../src/util/virthread.c:256
#11 0x7f7dac2875d4 in start_thread (/lib64/libc.so.6+0x875d4)
#12 0x7f7dac3091bb in __GI___clone3 (/lib64/libc.so.6+0x1091bb)
Signed-off-by: QiangWei Zhang <zhang.qiangwei(a)zte.com.cn>
---
src/util/virnetdevtap.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/src/util/virnetdevtap.c b/src/util/virnetdevtap.c
index 1dc77f0f5c..860a8e2dd5 100644
--- a/src/util/virnetdevtap.c
+++ b/src/util/virnetdevtap.c
@@ -541,6 +541,9 @@ virNetDevTapReattachBridge(const char *tapname,
/* IFLA_MASTER for a tap on an OVS switch is always "ovs-system" */
if (STREQ_NULLABLE(master, "ovs-system")) {
useOVS = true;
+
+ /* master needs to be released here because it will be reassigned */
+ VIR_FREE(master);
if (virNetDevOpenvswitchInterfaceGetMaster(tapname, &master) < 0)
return -1;
}
--
2.27.0
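As an aside, ASan leak reports like the one above are regular enough to post-process mechanically, e.g. to group leaks by the first frame inside the project. A minimal sketch in Python; the regexes are an assumption based on this particular report excerpt, not a guaranteed ASan output format:

```python
import re

# Match the "Direct leak of N byte(s) in M object(s)" header and the
# "#k 0xADDR in func location" frames of an AddressSanitizer leak report.
LEAK_RE = re.compile(r"Direct leak of (\d+) byte\(s\) in (\d+) object\(s\)")
FRAME_RE = re.compile(r"#(\d+) 0x[0-9a-f]+ in (\S+) (\S+)")

def parse_asan_leak(report):
    """Return (bytes_leaked, objects, frames) for the first leak found,
    where frames is a list of (index, function, location) tuples."""
    m = LEAK_RE.search(report)
    if m is None:
        return None
    frames = [(int(n), func, loc) for n, func, loc in FRAME_RE.findall(report)]
    return int(m.group(1)), int(m.group(2)), frames

# Sample taken verbatim from the report quoted above (abridged).
sample = """\
Direct leak of 11 byte(s) in 1 object(s) allocated from:
    #0 0x7f7dad8ba6df in __interceptor_malloc (/lib64/libasan.so.8+0xba6df)
    #4 0x7f7dacb63088 in virNetDevGetName ../src/util/virnetdev.c:823
"""
nbytes, nobjs, frames = parse_asan_leak(sample)
print(nbytes, nobjs, frames[1][1])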
KVM Forum 2025 Call for Presentations
by Paolo Bonzini
###########################
KVM Forum 2025
September 4-5, 2025
Milan, Italy
https://kvm-forum.qemu.org/
###########################
KVM Forum is an annual event that brings together developers and users to
discuss the state of Linux virtualization technology and plan for the
challenges ahead. Sessions include updates on the KVM virtualization
stack, ideas for the future, and collaborative "birds of a feather"
(BoF) sessions to plan for the year ahead. KVM Forum provides a unique
platform to contribute to the growth of the open source virtualization
ecosystem.
This year's event will be held in Milan, Italy on September 4-5, 2025,
at the Politecnico di Milano university.
CALL FOR PRESENTATIONS
======================
We encourage you to submit presentations via Pretalx at
https://kvm-forum.qemu.org/2025/cfp/. Suggested topics include:
* Scalability and Optimization
* Hardening and security
* Confidential computing
* KVM and the Linux Kernel:
* New Features and Ports
* Device Passthrough: VFIO, mdev, vDPA
* Network Virtualization
* Virtio and vhost
* Virtual Machine Monitors and Management:
* VMM Implementation: APIs, Live Migration, Performance Tuning, etc.
* Multi-process VMMs: vhost-user, vfio-user, QEMU Storage Daemon
* QEMU without KVM: Hypervisor.framework and other hypervisors
* Managing KVM: Libvirt, KubeVirt, Kata Containers
* Emulation:
* New Devices, Boards and Architectures
* CPU Emulation and Binary Translation
* Developer-focused content:
* Tooling improvements
* Enabling Rust
* Testing frameworks and strategies
All presentation slots will be 25 minutes + 5 minutes for questions.
IMPORTANT DATES
===============
The deadline for submitting presentations is June 8, 2025 - 11:59 PM CEST.
Accepted speakers will be notified on July 5, 2025.
ATTENDING KVM FORUM
===================
Admission to KVM Forum costs $75. You can get your ticket at
https://kvm-forum.qemu.org/2025/register/
Admission is free for accepted speakers.
The conference will be held at the Politecnico di Milano university.
The venue is a 5-minute walk from the Piola stop of the "green" M2
subway line. Downtown Milan can be reached by subway in about 10
minutes.
Special hotel room prices will be available for attendees
of KVM Forum. More information will be available soon at
https://kvm-forum.qemu.org/location/.
We are committed to fostering an open and welcoming environment at our
conference. Participants are expected to abide by our code of conduct
and media policy:
https://kvm-forum.qemu.org/coc/
https://kvm-forum.qemu.org/media-policy/
GETTING TO MILAN
================
The main airport in Milan is Milano Malpensa (MXP). It is well
connected by trains to the city center and to the subway lines. Milano
Linate (LIN) is a city airport with a fast connection to downtown via
the "blue" M4 subway line.
Flights are available between the Milan area and most European
countries, as well as from America and Asia to Malpensa.
Another airport, Bergamo (BGY), hosts low-cost airlines and is
connected to the city center by buses.
Milan is also accessible by rail, including high-speed and international
routes.
If you need a visa invitation letter, please reach out to the organizers
at kvm-forum-pc(a)redhat.com.
CONTACTS
========
Reach out to us should you have any questions. The program committee may
be contacted as a group via email: kvm-forum-pc(a)redhat.com.
[PATCH] docs: hooks: Document when shutoff-reason argument was introduced
by Michal Privoznik
From: Michal Privoznik <mprivozn(a)redhat.com>
Since v10.5.0-rc1~52, qemu and lxc hook scripts are executed with an
additional argument: the shutoff reason. But the wording of our docs
makes it look like it's been that way forever. Make it clear this is a
recent feature.
Resolves: https://gitlab.com/libvirt/libvirt/-/issues/766
Signed-off-by: Michal Privoznik <mprivozn(a)redhat.com>
---
docs/hooks.rst | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/docs/hooks.rst b/docs/hooks.rst
index 48128ba3d8..b363f51da1 100644
--- a/docs/hooks.rst
+++ b/docs/hooks.rst
@@ -211,7 +211,9 @@ operation. There is no specific operation to indicate a "restart" is occurring.
/etc/libvirt/hooks/qemu guest_name stopped end -
Then, after libvirt has released all resources, the hook is called again,
- :since:`since 0.9.0`, to allow any additional resource cleanup:
+ :since:`since 0.9.0`, to allow any additional resource cleanup
+ (:since:`since 10.5.0` an additional argument, ``shutoff-reason``, is passed
+ to the hook):
::
@@ -331,7 +333,9 @@ operation. There is no specific operation to indicate a "restart" is occurring.
/etc/libvirt/hooks/lxc guest_name stopped end -
Then, after libvirt has released all resources, the hook is called again,
- :since:`since 0.9.0`, to allow any additional resource cleanup:
+ :since:`since 0.9.0`, to allow any additional resource cleanup
+ (:since:`since 10.5.0` an additional argument, ``shutoff-reason``, is passed
+ to the hook):
::
--
2.49.0
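For hook authors, the practical consequence of the change documented above is that a script must tolerate both the old four-argument and the new five-argument form. A minimal sketch in Python; the argument order follows the docs/hooks.rst excerpt in the patch, while the dictionary keys are my own naming, not anything libvirt defines:

```python
import sys

def parse_hook_args(argv):
    """Split libvirt hook arguments: guest name, operation, sub-operation,
    extra argument, and (since libvirt 10.5.0) an optional shutoff reason."""
    if len(argv) < 5:
        raise ValueError("expected at least 4 hook arguments")
    return {
        "guest": argv[1],
        "operation": argv[2],      # e.g. "stopped"
        "sub_operation": argv[3],  # e.g. "end"
        "extra": argv[4],          # usually "-"
        # Older libvirt (< 10.5.0) does not pass a shutoff reason at all,
        # so the script must treat it as optional.
        "shutoff_reason": argv[5] if len(argv) > 5 else None,
    }

if __name__ == "__main__":
    print(parse_hook_args(sys.argv))
```

A hook written this way keeps working unchanged when the host is upgraded across the 10.5.0 boundary.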
[PATCH] ci: refresh with 'lcitool manifest'
by Daniel P. Berrangé
From: Daniel P. Berrangé <berrange(a)redhat.com>
This removes librbd from 32-bit arches on Debian sid, where it no longer
exists.
Signed-off-by: Daniel P. Berrangé <berrange(a)redhat.com>
---
ci/buildenv/debian-sid-cross-armv6l.sh | 1 -
ci/buildenv/debian-sid-cross-armv7l.sh | 1 -
ci/buildenv/debian-sid-cross-i686.sh | 1 -
ci/containers/debian-sid-cross-armv6l.Dockerfile | 1 -
ci/containers/debian-sid-cross-armv7l.Dockerfile | 1 -
ci/containers/debian-sid-cross-i686.Dockerfile | 1 -
6 files changed, 6 deletions(-)
diff --git a/ci/buildenv/debian-sid-cross-armv6l.sh b/ci/buildenv/debian-sid-cross-armv6l.sh
index ac03caeb5c..598c50c518 100644
--- a/ci/buildenv/debian-sid-cross-armv6l.sh
+++ b/ci/buildenv/debian-sid-cross-armv6l.sh
@@ -77,7 +77,6 @@ function install_buildenv() {
libparted-dev:armel \
libpcap0.8-dev:armel \
libpciaccess-dev:armel \
- librbd-dev:armel \
libreadline-dev:armel \
libsanlock-dev:armel \
libsasl2-dev:armel \
diff --git a/ci/buildenv/debian-sid-cross-armv7l.sh b/ci/buildenv/debian-sid-cross-armv7l.sh
index c540104cb0..5592b1f19f 100644
--- a/ci/buildenv/debian-sid-cross-armv7l.sh
+++ b/ci/buildenv/debian-sid-cross-armv7l.sh
@@ -77,7 +77,6 @@ function install_buildenv() {
libparted-dev:armhf \
libpcap0.8-dev:armhf \
libpciaccess-dev:armhf \
- librbd-dev:armhf \
libreadline-dev:armhf \
libsanlock-dev:armhf \
libsasl2-dev:armhf \
diff --git a/ci/buildenv/debian-sid-cross-i686.sh b/ci/buildenv/debian-sid-cross-i686.sh
index b558576fca..60b4862674 100644
--- a/ci/buildenv/debian-sid-cross-i686.sh
+++ b/ci/buildenv/debian-sid-cross-i686.sh
@@ -77,7 +77,6 @@ function install_buildenv() {
libparted-dev:i386 \
libpcap0.8-dev:i386 \
libpciaccess-dev:i386 \
- librbd-dev:i386 \
libreadline-dev:i386 \
libsanlock-dev:i386 \
libsasl2-dev:i386 \
diff --git a/ci/containers/debian-sid-cross-armv6l.Dockerfile b/ci/containers/debian-sid-cross-armv6l.Dockerfile
index d3034c0131..130bd8a12d 100644
--- a/ci/containers/debian-sid-cross-armv6l.Dockerfile
+++ b/ci/containers/debian-sid-cross-armv6l.Dockerfile
@@ -88,7 +88,6 @@ RUN export DEBIAN_FRONTEND=noninteractive && \
libparted-dev:armel \
libpcap0.8-dev:armel \
libpciaccess-dev:armel \
- librbd-dev:armel \
libreadline-dev:armel \
libsanlock-dev:armel \
libsasl2-dev:armel \
diff --git a/ci/containers/debian-sid-cross-armv7l.Dockerfile b/ci/containers/debian-sid-cross-armv7l.Dockerfile
index 30234b6755..fd0992b308 100644
--- a/ci/containers/debian-sid-cross-armv7l.Dockerfile
+++ b/ci/containers/debian-sid-cross-armv7l.Dockerfile
@@ -88,7 +88,6 @@ RUN export DEBIAN_FRONTEND=noninteractive && \
libparted-dev:armhf \
libpcap0.8-dev:armhf \
libpciaccess-dev:armhf \
- librbd-dev:armhf \
libreadline-dev:armhf \
libsanlock-dev:armhf \
libsasl2-dev:armhf \
diff --git a/ci/containers/debian-sid-cross-i686.Dockerfile b/ci/containers/debian-sid-cross-i686.Dockerfile
index 2c2c4772c8..8aedb83266 100644
--- a/ci/containers/debian-sid-cross-i686.Dockerfile
+++ b/ci/containers/debian-sid-cross-i686.Dockerfile
@@ -88,7 +88,6 @@ RUN export DEBIAN_FRONTEND=noninteractive && \
libparted-dev:i386 \
libpcap0.8-dev:i386 \
libpciaccess-dev:i386 \
- librbd-dev:i386 \
libreadline-dev:i386 \
libsanlock-dev:i386 \
libsasl2-dev:i386 \
--
2.49.0
Re: [PATCH V1 0/6] fast qom tree get
by Markus Armbruster
Hi Steve, I apologize for the slow response.
Steve Sistare <steven.sistare(a)oracle.com> writes:
> Using qom-list and qom-get to get all the nodes and property values in a
> QOM tree can take multiple seconds because it requires 1000's of individual
> QOM requests. Some managers fetch the entire tree or a large subset
> of it when starting a new VM, and this cost is a substantial fraction of
> start up time.
"Some managers"... could you name one?
> To reduce this cost, consider QAPI calls that fetch more information in
> each call:
> * qom-list-get: given a path, return a list of properties and values.
> * qom-list-getv: given a list of paths, return a list of properties and
> values for each path.
> * qom-tree-get: given a path, return all descendant nodes rooted at that
> path, with properties and values for each.
Libvirt developers, would you be interested in any of these?
> In all cases, a returned property is represented by ObjectPropertyValue,
> with fields name, type, value, and error. If an error occurs when reading
> a value, the value field is omitted, and the error message is returned in
> the error field. Thus an error for one property will not cause a bulk fetch
> operation to fail.
Returning errors this way is highly unusual. That's an observation, not a
rejection out of hand. Can you elaborate a bit on why it's useful?
> To evaluate each method, I modified scripts/qmp/qom-tree to use the method,
> verified all methods produce the same output, and timed each using:
>
> qemu-system-x86_64 -display none \
> -chardev socket,id=monitor0,path=/tmp/vm1.sock,server=on,wait=off \
> -mon monitor0,mode=control &
>
> time qom-tree -s /tmp/vm1.sock > /dev/null
Cool!
> I only measured once per method, but the variation is low after a warm up run.
> The 'real - user - sys' column is a proxy for QEMU CPU time.
>
> method real(s) user(s) sys(s) (real - user - sys)(s)
> qom-list / qom-get 2.048 0.932 0.057 1.059
> qom-list-get 0.402 0.230 0.029 0.143
> qom-list-getv 0.200 0.132 0.015 0.053
> qom-tree-get 0.143 0.123 0.012 0.008
>
> qom-tree-get is the clear winner, reducing elapsed time by a factor of 14,
> and reducing QEMU CPU time by a factor of 132.
>
> qom-list-getv is slower when fetching the entire tree, but can beat
> qom-tree-get when only a subset of the tree needs to be fetched (not shown).
>
> qom-list-get is shown for comparison only, and is not included in this series.
If we have qom-list-getv, then qom-list-get is not worth having.
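For reference, QMP commands are plain JSON objects written to a socket, so trying the proposed bulk commands from a management client is cheap. A minimal sketch that only builds the request string; note that these commands exist only in this patch series, not in any released QEMU, and the ``paths`` argument name is my assumption from the cover letter:

```python
import json

def build_qmp_command(name, **arguments):
    """Serialize a QMP 'execute' request as a single JSON line,
    which is the framing the QMP monitor expects."""
    cmd = {"execute": name}
    if arguments:
        cmd["arguments"] = arguments
    return json.dumps(cmd)

# Hypothetical request for the proposed bulk-fetch command: one round trip
# covering several QOM paths instead of one qom-get per property.
req = build_qmp_command("qom-list-getv", paths=["/machine", "/machine/peripheral"])
print(req)
```

The same helper works for the existing qom-list/qom-get commands, which makes it easy to compare round-trip counts between the old and proposed interfaces.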
Release of libvirt-11.3.0
by Jiri Denemark
The 11.3.0 release of both libvirt and libvirt-python is tagged, and
signed tarballs are available at
https://download.libvirt.org/
https://download.libvirt.org/python/
Thanks to everybody who helped with this release by sending patches,
reviewing, testing, or providing feedback. Your work is greatly
appreciated.
* Removed features
* Support for AppArmor versions prior to 3.0.0 has been dropped.
* New features
* xen: Support configuration of ``<hyperv/>`` flags for Xen domains.
The following flags are now configurable for Xen: ``vapic``, ``synic``,
``stimer``, ``frequencies``, ``tlbflush`` and ``ipi``.
* bhyve: Support virtio random number generator devices
Domain XMLs can now include virtio random number generator devices.
They are configured with::
<rng model='virtio'>
<backend model='random'/>
</rng>
* bhyve: Support ``<interface type='network'>``
At the moment it doesn't provide any new features compared to
``<interface type='bridge'>``, but allows a more flexible configuration.
* Bug fixes
* cpu_map: Install Ampere-1 ARM CPU models
The Ampere-1 CPU models added in the previous release were not properly
installed, and thus every attempt to start an ARM domain with a custom
CPU definition would fail.
* storage: Fix new volume creation
No more errors occur when a new storage volume is created using ``virsh
vol-create`` with the ``--validate`` option and/or ``virStorageVolCreateXML()``
with the ``VIR_VOL_XML_PARSE_VALIDATE`` flag.
Don't spam logs with an error about ``qemu-rdp`` when starting a qemu VM
On hosts where the ``qemu-rdp`` binary is not installed, starting a VM
would cause an error such as ::
error : qemuRdpNewForHelper:103 : 'qemu-rdp' is not a suitable qemu-rdp helper name: No such file or directory
to be logged in the system log. It is safe to ignore the error. The code
was fixed to avoid the message when probing for support.
* Fix libvirt daemon crash on failure to hotplug a disk into a ``qemu`` VM
Some failures of disk hotplug could cause the libvirt daemon to crash due
to a bug when rolling back disk throttling filters.
Enjoy.
Jirka