[libvirt] [PATCH] libxl: fix leaking of allocated migration ports
by Jim Fehlig
Although the migration port is immediately released in the
finish phase of migration, it was never set in the domain
private object when allocated in the prepare phase. So
libxlDomainMigrationFinish() always released a 0-initialized
migrationPort, leaking any allocated port. After enough
migrations to exhaust the migration port pool, migration would
fail with
error: internal error: Unable to find an unused port in range
'migration' (49152-49216)
Fix it by setting libxlDomainObjPrivate->migrationPort to the
port allocated in the prepare phase. While at it, also fix
leaking an allocated port if the prepare phase fails.
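For reference, the release in the finish phase looks roughly like this
(a simplified sketch, not the verbatim source):

    /* libxlDomainMigrationFinish(), simplified */
    virPortAllocatorRelease(driver->migrationPorts, priv->migrationPort);
    priv->migrationPort = 0;

Without the assignment added below, priv->migrationPort is still 0 here,
so the port actually acquired in the prepare phase is never returned to
the pool.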
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
---
src/libxl/libxl_migration.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/src/libxl/libxl_migration.c b/src/libxl/libxl_migration.c
index 534abb8..a471d2a 100644
--- a/src/libxl/libxl_migration.c
+++ b/src/libxl/libxl_migration.c
@@ -594,6 +594,7 @@ libxlDomainMigrationPrepare(virConnectPtr dconn,
if (virPortAllocatorAcquire(driver->migrationPorts, &port) < 0)
goto error;
+ priv->migrationPort = port;
if (virAsprintf(uri_out, "tcp://%s:%d", hostname, port) < 0)
goto error;
} else {
@@ -628,6 +629,7 @@ libxlDomainMigrationPrepare(virConnectPtr dconn,
if (virPortAllocatorAcquire(driver->migrationPorts, &port) < 0)
goto error;
+ priv->migrationPort = port;
} else {
port = uri->port;
}
@@ -690,6 +692,8 @@ libxlDomainMigrationPrepare(virConnectPtr dconn,
}
VIR_FREE(socks);
virObjectUnref(args);
+ virPortAllocatorRelease(driver->migrationPorts, priv->migrationPort);
+ priv->migrationPort = 0;
/* Remove virDomainObj from domain list */
if (vm) {
--
2.9.2
[libvirt] [RFC PATCH v2] qemu: Use virtio-pci by default for mach-virt guests
by Andrea Bolognani
virtio-pci is the way forward for aarch64 guests: it's faster
and less alien to people coming from other architectures.
Now that guest support is finally getting there, we'd like to
start using it by default instead of virtio-mmio.
Users and applications can already opt in by explicitly using
<address type='pci'/>
inside the relevant elements, but that's kind of cumbersome and
requires all users and management applications to adapt, which
we'd really like to avoid.
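For illustration, such an explicit opt-in for a disk looks roughly like
this (hand-written snippet, file name made up):

    <disk type='file' device='disk'>
      <source file='/guest.img'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci'/>
    </disk>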
What we can do instead is use virtio-mmio only if the guest
already has at least one virtio-mmio device, and use virtio-pci
in all other situations.
That means existing virtio-mmio guests will keep using the old
addressing scheme, and new guests will automatically be created
using virtio-pci instead. Users can still override the default
in either direction.
---
RFC: still needs polish (documentation, test cases) before it
can be considered for merging, plus it builds on top of changes
that haven't made it into master yet.
Changes from v1:
* use virDomainDeviceInfoIterate(), as suggested by Martin
and Laine, which results in cleaner and more robust code
src/qemu/qemu_domain_address.c | 39 ++++++++++++++++++++--
...l2argv-aarch64-virt-2.6-virtio-pci-default.args | 14 +++++---
.../qemuxml2argv-aarch64-virtio-pci-default.args | 14 +++++---
.../qemuxml2xmlout-aarch64-virtio-pci-default.xml | 24 ++++++++++---
4 files changed, 73 insertions(+), 18 deletions(-)
diff --git a/src/qemu/qemu_domain_address.c b/src/qemu/qemu_domain_address.c
index f27d1e3..30b1ddf 100644
--- a/src/qemu/qemu_domain_address.c
+++ b/src/qemu/qemu_domain_address.c
@@ -386,6 +386,32 @@ qemuDomainAssignS390Addresses(virDomainDefPtr def,
}
+static int
+qemuDomainCountVirtioMMIODevicesCallback(virDomainDefPtr def ATTRIBUTE_UNUSED,
+ virDomainDeviceDefPtr dev ATTRIBUTE_UNUSED,
+ virDomainDeviceInfoPtr info,
+ void *opaque)
+{
+ if (info->type == VIR_DOMAIN_DEVICE_ADDRESS_TYPE_VIRTIO_MMIO)
+ (*((size_t *) opaque))++;
+
+ return 0;
+}
+
+
+static size_t
+qemuDomainCountVirtioMMIODevices(virDomainDefPtr def)
+{
+ size_t count = 0;
+
+ virDomainDeviceInfoIterate(def,
+ qemuDomainCountVirtioMMIODevicesCallback,
+ &count);
+
+ return count;
+}
+
+
static void
qemuDomainAssignARMVirtioMMIOAddresses(virDomainDefPtr def,
virQEMUCapsPtr qemuCaps)
@@ -398,9 +424,16 @@ qemuDomainAssignARMVirtioMMIOAddresses(virDomainDefPtr def,
qemuDomainMachineIsVirt(def)))
return;
- if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_DEVICE_VIRTIO_MMIO)) {
- qemuDomainPrimeVirtioDeviceAddresses(
- def, VIR_DOMAIN_DEVICE_ADDRESS_TYPE_VIRTIO_MMIO);
+ /* We use virtio-mmio by default on mach-virt guests only if they already
+ * have at least one virtio-mmio device: in all other cases, we prefer
+ * virtio-pci */
+ if (qemuDomainMachineHasPCIeRoot(def) &&
+ qemuDomainCountVirtioMMIODevices(def) == 0) {
+ qemuDomainPrimeVirtioDeviceAddresses(def,
+ VIR_DOMAIN_DEVICE_ADDRESS_TYPE_PCI);
+ } else if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_DEVICE_VIRTIO_MMIO)) {
+ qemuDomainPrimeVirtioDeviceAddresses(def,
+ VIR_DOMAIN_DEVICE_ADDRESS_TYPE_VIRTIO_MMIO);
}
}
diff --git a/tests/qemuxml2argvdata/qemuxml2argv-aarch64-virt-2.6-virtio-pci-default.args b/tests/qemuxml2argvdata/qemuxml2argv-aarch64-virt-2.6-virtio-pci-default.args
index 75db1a4..df03c6e 100644
--- a/tests/qemuxml2argvdata/qemuxml2argv-aarch64-virt-2.6-virtio-pci-default.args
+++ b/tests/qemuxml2argvdata/qemuxml2argv-aarch64-virt-2.6-virtio-pci-default.args
@@ -21,14 +21,18 @@ QEMU_AUDIO_DRV=none \
-initrd /aarch64.initrd \
-append 'earlyprintk console=ttyAMA0,115200n8 rw root=/dev/vda rootwait' \
-dtb /aarch64.dtb \
--device virtio-serial-device,id=virtio-serial0 \
+-device i82801b11-bridge,id=pci.1,bus=pcie.0,addr=0x1 \
+-device pci-bridge,chassis_nr=2,id=pci.2,bus=pci.1,addr=0x0 \
+-device ioh3420,port=0x10,chassis=3,id=pci.3,bus=pcie.0,addr=0x2 \
+-device virtio-serial-pci,id=virtio-serial0,bus=pci.2,addr=0x2 \
-drive file=/aarch64.raw,format=raw,if=none,id=drive-virtio-disk0 \
--device virtio-blk-device,drive=drive-virtio-disk0,id=virtio-disk0 \
--device virtio-net-device,vlan=0,id=net0,mac=52:54:00:09:a4:37 \
+-device virtio-blk-pci,bus=pci.2,addr=0x3,drive=drive-virtio-disk0,\
+id=virtio-disk0 \
+-device virtio-net-pci,vlan=0,id=net0,mac=52:54:00:09:a4:37,bus=pci.2,addr=0x1 \
-net user,vlan=0,name=hostnet0 \
-serial pty \
-chardev pty,id=charconsole1 \
-device virtconsole,chardev=charconsole1,id=console1 \
--device virtio-balloon-device,id=balloon0 \
+-device virtio-balloon-pci,id=balloon0,bus=pci.2,addr=0x4 \
-object rng-random,id=objrng0,filename=/dev/random \
--device virtio-rng-device,rng=objrng0,id=rng0
+-device virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.2,addr=0x5
diff --git a/tests/qemuxml2argvdata/qemuxml2argv-aarch64-virtio-pci-default.args b/tests/qemuxml2argvdata/qemuxml2argv-aarch64-virtio-pci-default.args
index b5b010c..f205d9a 100644
--- a/tests/qemuxml2argvdata/qemuxml2argv-aarch64-virtio-pci-default.args
+++ b/tests/qemuxml2argvdata/qemuxml2argv-aarch64-virtio-pci-default.args
@@ -21,14 +21,18 @@ QEMU_AUDIO_DRV=none \
-initrd /aarch64.initrd \
-append 'earlyprintk console=ttyAMA0,115200n8 rw root=/dev/vda rootwait' \
-dtb /aarch64.dtb \
--device virtio-serial-device,id=virtio-serial0 \
+-device i82801b11-bridge,id=pci.1,bus=pcie.0,addr=0x1 \
+-device pci-bridge,chassis_nr=2,id=pci.2,bus=pci.1,addr=0x0 \
+-device ioh3420,port=0x10,chassis=3,id=pci.3,bus=pcie.0,addr=0x2 \
+-device virtio-serial-pci,id=virtio-serial0,bus=pci.2,addr=0x2 \
-drive file=/aarch64.raw,format=raw,if=none,id=drive-virtio-disk0 \
--device virtio-blk-device,drive=drive-virtio-disk0,id=virtio-disk0 \
--device virtio-net-device,vlan=0,id=net0,mac=52:54:00:09:a4:37 \
+-device virtio-blk-pci,bus=pci.2,addr=0x3,drive=drive-virtio-disk0,\
+id=virtio-disk0 \
+-device virtio-net-pci,vlan=0,id=net0,mac=52:54:00:09:a4:37,bus=pci.2,addr=0x1 \
-net user,vlan=0,name=hostnet0 \
-serial pty \
-chardev pty,id=charconsole1 \
-device virtconsole,chardev=charconsole1,id=console1 \
--device virtio-balloon-device,id=balloon0 \
+-device virtio-balloon-pci,id=balloon0,bus=pci.2,addr=0x4 \
-object rng-random,id=objrng0,filename=/dev/random \
--device virtio-rng-device,rng=objrng0,id=rng0
+-device virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.2,addr=0x5
diff --git a/tests/qemuxml2xmloutdata/qemuxml2xmlout-aarch64-virtio-pci-default.xml b/tests/qemuxml2xmloutdata/qemuxml2xmlout-aarch64-virtio-pci-default.xml
index 7c3fc19..90659a1 100644
--- a/tests/qemuxml2xmloutdata/qemuxml2xmlout-aarch64-virtio-pci-default.xml
+++ b/tests/qemuxml2xmloutdata/qemuxml2xmlout-aarch64-virtio-pci-default.xml
@@ -30,16 +30,30 @@
<disk type='file' device='disk'>
<source file='/aarch64.raw'/>
<target dev='vda' bus='virtio'/>
- <address type='virtio-mmio'/>
+ <address type='pci' domain='0x0000' bus='0x02' slot='0x03' function='0x0'/>
</disk>
<controller type='pci' index='0' model='pcie-root'/>
<controller type='virtio-serial' index='0'>
- <address type='virtio-mmio'/>
+ <address type='pci' domain='0x0000' bus='0x02' slot='0x02' function='0x0'/>
+ </controller>
+ <controller type='pci' index='1' model='dmi-to-pci-bridge'>
+ <model name='i82801b11-bridge'/>
+ <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
+ </controller>
+ <controller type='pci' index='2' model='pci-bridge'>
+ <model name='pci-bridge'/>
+ <target chassisNr='2'/>
+ <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
+ </controller>
+ <controller type='pci' index='3' model='pcie-root-port'>
+ <model name='ioh3420'/>
+ <target chassis='3' port='0x10'/>
+ <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</controller>
<interface type='user'>
<mac address='52:54:00:09:a4:37'/>
<model type='virtio'/>
- <address type='virtio-mmio'/>
+ <address type='pci' domain='0x0000' bus='0x02' slot='0x01' function='0x0'/>
</interface>
<serial type='pty'>
<target port='0'/>
@@ -51,11 +65,11 @@
<target type='virtio' port='1'/>
</console>
<memballoon model='virtio'>
- <address type='virtio-mmio'/>
+ <address type='pci' domain='0x0000' bus='0x02' slot='0x04' function='0x0'/>
</memballoon>
<rng model='virtio'>
<backend model='random'>/dev/random</backend>
- <address type='virtio-mmio'/>
+ <address type='pci' domain='0x0000' bus='0x02' slot='0x05' function='0x0'/>
</rng>
</devices>
</domain>
--
2.7.4
[libvirt] [PATCH 00/11] clean up after "slot sharing" patches
by Laine Stump
Some of the patches that enabled sharing of PCI slots among multiple
pcie-root-ports made the idea of reserving an entire slot obsolete. To
reduce confusion and misunderstandings, this patch series gets rid of
the name "Slot" in all of the functions that reserve and release PCI
addresses.
None of these patches makes any functional change.
This is a lot of patches, but each one is trivial, I promise - most
simply rename a single function, or change one argument in some calls
to a function.
Laine Stump (11):
conf: fix fromConfig argument to virDomainPCIAddressReserveAddr()
conf: fix fromConfig argument to virDomainPCIAddressValidate()
conf: rename virDomainPCIAddressGetNextSlot() to ...GetNextAddr()
conf: eliminate virDomainPCIAddressReserveNextSlot()
qemu: replace virDomainPCIAddressReserveAddr with
virDomainPCIAddressReserveSlot
conf: make virDomainPCIAddressReserveAddr() a static function
conf: rename virDomainPCIAddressReserveAddr() to ...Internal()
conf: rename virDomainPCIAddressReserveSlot() to ...Addr()
qemu: remove qemuDomainPCIAddressReserveNextAddr()
qemu: rename qemuDomainPCIAddressReserveNextSlot() to ...Addr()
conf: eliminate virDomainPCIAddressReleaseSlot() in favor of ...Addr()
src/bhyve/bhyve_device.c | 26 +++++++----
src/conf/domain_addr.c | 60 ++++++------------------
src/conf/domain_addr.h | 15 ------
src/libvirt_private.syms | 4 +-
src/qemu/qemu_domain_address.c | 102 +++++++++++++++++++----------------------
5 files changed, 78 insertions(+), 129 deletions(-)
--
2.7.4
[libvirt] [PATCH 0/8] aggregate multiple pcie-root-ports onto a single slot
by Laine Stump
These patches implement a recommendation from Gerd Hoffmann during
discussion of Marcel Apfelbaum's proposed "PCIe devices placement
guidelines" document on qemu-devel:
https://lists.gnu.org/archive/html/qemu-devel/2016-09/msg01381.html
The basic idea is to put up to 8 pcie-root-ports on each slot of
pcie-root, rather than libvirt's historical practice of assigning only
a single device to each slot.
This is done by defining a new pciConnectFlag -
VIR_PCI_CONNECT_AGGREGATE_SLOT - which is set for any device that can
be automatically placed together with other similarly flagged devices
on multiple functions of the same slot. In this way we can
auto-address up to 224 hotpluggable PCIe devices without needing to
figure out how to automatically add upstream/downstream ports.
Other types of devices could be given the same treatment, although it
would make no sense for anything that you wanted to be hotpluggable.
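As an illustration of the end result, two aggregated root ports end up
addressed along these lines (hand-written example, not taken from the
test data in this series):

    <controller type='pci' index='1' model='pcie-root-port'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02'
               function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='2' model='pcie-root-port'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02'
               function='0x1'/>
    </controller>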
In order for this all to work nicely and make sense, the PCI address
reservation code has eliminated the "reserveEntireSlot" concept - if
the first device assigned to a particular slot doesn't have the
AGGREGATE_SLOT flag set, then it will be the only device allowed on
that slot even though the address set only shows function 0 as being
in use.
After all of these changes, using the term "Slot" in so many function
names no longer makes sense; there is another patchset that I will
post shortly that gets rid of all that "old inaccurate" naming. I made
it separate because it isn't strictly necessary for the AGGREGATE_SLOT
functionality.
This series needs to be applied on top of my earlier series that adds
support for plugging virtio devices into PCIe slots.
Laine Stump (8):
conf: use struct instead of int for each slot in
virDomainPCIAddressBus
conf: eliminate concept of "reserveEntireSlot"
conf: eliminate repetitive code in virDomainPCIAddressGetNextSlot()
conf: start search for next unused PCI address at same slot as
previous find
conf: new function virDomainPCIAddressIsMulti()
qemu: use virDomainPCIAddressIsMulti() to determine multifunction
setting
conf: aggregate multiple devices on a slot when assigning PCI
addresses
conf: aggregate multiple pcie-root-ports onto a single slot
src/conf/domain_addr.c | 290 ++++++++++++++-------
src/conf/domain_addr.h | 48 +++-
src/libvirt_private.syms | 1 +
src/qemu/qemu_command.c | 16 +-
src/qemu/qemu_domain_address.c | 35 ++-
.../qemuxml2argv-pcie-root-port.args | 5 +-
.../qemuxml2argv-pcie-switch-upstream-port.args | 5 +-
.../qemuxml2argv-q35-default-devices-only.args | 7 +-
.../qemuxml2argv-q35-multifunction.args | 43 +++
.../qemuxml2argv-q35-multifunction.xml | 51 ++++
.../qemuxml2argv-q35-pcie-autoadd.args | 30 ++-
tests/qemuxml2argvdata/qemuxml2argv-q35-pcie.args | 28 +-
.../qemuxml2argv-q35-virt-manager-basic.args | 13 +-
.../qemuxml2argv-q35-virtio-pci.args | 28 +-
tests/qemuxml2argvtest.c | 25 ++
.../qemuxml2xmlout-pcie-root-port.xml | 2 +-
.../qemuxml2xmlout-pcie-switch-upstream-port.xml | 4 +-
.../qemuxml2xmlout-q35-default-devices-only.xml | 8 +-
.../qemuxml2xmlout-q35-multifunction.xml | 120 +++++++++
.../qemuxml2xmlout-q35-pcie-autoadd.xml | 52 ++--
.../qemuxml2xmloutdata/qemuxml2xmlout-q35-pcie.xml | 48 ++--
.../qemuxml2xmlout-q35-virt-manager-basic.xml | 20 +-
.../qemuxml2xmlout-q35-virtio-pci.xml | 48 ++--
tests/qemuxml2xmltest.c | 25 ++
24 files changed, 676 insertions(+), 276 deletions(-)
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-q35-multifunction.args
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-q35-multifunction.xml
create mode 100644 tests/qemuxml2xmloutdata/qemuxml2xmlout-q35-multifunction.xml
--
2.7.4
[libvirt] [PATCH] virQEMUCapsReset: also clear out hostCPUModel
by Ján Tomko
After succesfully reading an outdated caps cache from disk,
calling virQEMUCapsReset did not properly clear out the host
CPU model. This lead to a memory leak when the host CPU model
pointer was overwritten later in virQEMUCapsNewForBinaryInternal.
Introduced by commit 68c70118.
---
src/qemu/qemu_capabilities.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c
index 9132469..130f1db 100644
--- a/src/qemu/qemu_capabilities.c
+++ b/src/qemu/qemu_capabilities.c
@@ -3430,6 +3430,9 @@ virQEMUCapsReset(virQEMUCapsPtr qemuCaps)
VIR_FREE(qemuCaps->gicCapabilities);
qemuCaps->ngicCapabilities = 0;
+
+ virCPUDefFree(qemuCaps->hostCPUModel);
+ qemuCaps->hostCPUModel = NULL;
}
--
2.7.3
[libvirt] [PATCH 0/3] last part of TLS encryption for char devices
by Pavel Hrdina
Pavel Hrdina (3):
qemu_hotplug: remove union for one member
domain: Add optional 'tls' attribute for TCP chardev
domain: fix migration to older libvirt
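For a quick idea of the XML this enables: the new attribute sits on the
chardev <source> element, roughly like this (illustrative example, host
and port made up):

    <serial type='tcp'>
      <source mode='connect' host='127.0.0.1' service='5555' tls='no'/>
      <protocol type='raw'/>
      <target port='0'/>
    </serial>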
docs/formatdomain.html.in | 28 +++++++++
docs/schemas/domaincommon.rng | 5 ++
src/conf/domain_conf.c | 67 +++++++++++++++++-----
src/conf/domain_conf.h | 6 +-
src/qemu/qemu_command.c | 4 +-
src/qemu/qemu_domain.c | 67 ++++++++++++++++++++++
src/qemu/qemu_domain.h | 8 +++
src/qemu/qemu_hotplug.c | 16 ++++--
src/qemu/qemu_process.c | 2 +
...uxml2argv-serial-tcp-tlsx509-chardev-notls.args | 30 ++++++++++
...muxml2argv-serial-tcp-tlsx509-chardev-notls.xml | 50 ++++++++++++++++
tests/qemuxml2argvtest.c | 3 +
...xml2xmlout-serial-tcp-tlsx509-chardev-notls.xml | 1 +
tests/qemuxml2xmltest.c | 1 +
14 files changed, 266 insertions(+), 22 deletions(-)
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-serial-tcp-tlsx509-chardev-notls.args
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-serial-tcp-tlsx509-chardev-notls.xml
create mode 120000 tests/qemuxml2xmloutdata/qemuxml2xmlout-serial-tcp-tlsx509-chardev-notls.xml
--
2.10.1
[libvirt] [PATCH] doc: Describe the VCPU states returned by virsh vcpuinfo
by Viktor Mihajlovski
Added a brief description of the VCPU states.
Signed-off-by: Viktor Mihajlovski <mihajlov@linux.vnet.ibm.com>
---
tools/virsh.pod | 49 +++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 49 insertions(+)
diff --git a/tools/virsh.pod b/tools/virsh.pod
index 9e3a248..6d4fd07 100644
--- a/tools/virsh.pod
+++ b/tools/virsh.pod
@@ -2536,6 +2536,55 @@ vCPUs, the running time, the affinity to physical processors.
With I<--pretty>, cpu affinities are shown as ranges.
+An example output is
+
+ $ virsh vcpuinfo fedora
+ VCPU: 0
+ CPU: 0
+ State: running
+ CPU time: 7,0s
+ CPU Affinity: yyyy
+
+ VCPU: 1
+ CPU: 1
+ State: running
+ CPU time: 0,7s
+ CPU Affinity: yyyy
+
+B<STATES>
+
+The State field displays the current operating state of a virtual CPU:
+
+=over 4
+
+=item B<offline>
+
+The virtual CPU is offline and not usable by the domain.
+This state is not supported by all hypervisors.
+
+=item B<running>
+
+The virtual CPU is available to the domain and is operating.
+
+=item B<blocked>
+
+The virtual CPU is available to the domain but is waiting for a resource.
+This state is not supported by all hypervisors, in which case I<running>
+may be reported instead.
+
+=item B<no state>
+
+The virtual CPU state could not be determined. This could happen if
+the hypervisor is newer than virsh.
+
+=item B<N/A>
+
+There's no information about the virtual CPU state available. This can
+be the case if the domain is not running or the hypervisor does
+not report the virtual CPU state.
+
+=back
+
=item B<vcpupin> I<domain> [I<vcpu>] [I<cpulist>] [[I<--live>]
[I<--config>] | [I<--current>]]
--
1.9.1
[libvirt] [PATCH] domain_conf: fix memory leak in virDomainDefAddConsoleCompat
by Pavel Hrdina
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
---
src/conf/domain_conf.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index 748ffd5..6100c3d 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -3895,7 +3895,7 @@ virDomainDefAddConsoleCompat(virDomainDefPtr def)
0,
def->nconsoles,
chr) < 0) {
- VIR_FREE(chr);
+ virDomainChrDefFree(chr);
return -1;
}
--
2.10.1
[libvirt] Race in log manager causes segfault
by Bjoern Walk
Hello,
I am currently investigating a rare segfault in libvirt. I have attached a
backtrace; the coredump is from s390x. I am also trying to reproduce the
segfault on x86, but it has not occurred yet (the timespan is too short so far).
The crash can be triggered by rapidly performing domain start/stop cycles in a
tight loop and takes on the order of a couple of weeks to show up.
I have come to the conclusion that there seems to be a race condition in the
log manager client. When the log manager gets freed via virLogManagerFree() it
(asynchronously) invokes virNetClientClose() and unrefs the associated client
structure in virLogManager. If other threads are waiting for data on the
socket, they will be woken up, but because they rely on virLogManager holding
a reference to the client, we get a use-after-free.
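In generic terms, the pattern and the usual fix (the waiter holds its
own reference to the object it blocks on) look like this; nothing below
is libvirt code, it is only meant to illustrate the issue:

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct {
        pthread_mutex_t lock;
        int refs;
        int fd;                 /* stands in for the client socket */
    } client;

    static client *client_ref(client *c)
    {
        pthread_mutex_lock(&c->lock);
        c->refs++;
        pthread_mutex_unlock(&c->lock);
        return c;
    }

    static void client_unref(client *c)
    {
        pthread_mutex_lock(&c->lock);
        int last = (--c->refs == 0);
        pthread_mutex_unlock(&c->lock);
        if (last) {
            pthread_mutex_destroy(&c->lock);
            free(c);            /* nobody may touch c after this point */
        }
    }

    static void *waiter(void *opaque)
    {
        client *c = opaque;     /* reference was taken for us by the owner */
        /* ... poll()/recv() on c->fd would happen here ... */
        printf("waiter done, fd=%d\n", c->fd);
        client_unref(c);        /* drop the waiter's own reference */
        return NULL;
    }

    int main(void)
    {
        client *c = calloc(1, sizeof(*c));
        pthread_t thr;

        pthread_mutex_init(&c->lock, NULL);
        c->refs = 1;
        c->fd = 42;

        client_ref(c);          /* reference owned by the waiter thread */
        pthread_create(&thr, NULL, waiter, c);
        client_unref(c);        /* the owner's "free" path; without the
                                   extra reference above, the waiter would
                                   now be working on freed memory */
        pthread_join(thr, NULL);
        return 0;
    }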
Can anyone verify this analysis and either provide a fix or at least give me
some pointers in the right direction for further debugging? Should I open a
bug for this?
Best regards,
Bjoern