[libvirt] [PATCH 0/4] Couple more hotplug cleanups
by John Ferlan
Patches speak for themselves - they also resolve a couple of new
Coverity whines.
John Ferlan (4):
conf,qemu: Check for NULL addrs in virDomainUSBAddressRelease
conf,qemu: Check for NULL addrs in virDomainUSBAddressEnsure
qemu: Remove unnecessary virtio disk detach info.alias check
qemu: Remove unnecessary controller detach info.alias check
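For reference, the NULL-addrs guard added by the first two patches has
roughly this shape (a sketch of the idea, not the literal hunk):

    int
    virDomainUSBAddressRelease(virDomainUSBAddressSetPtr addrs,
                               virDomainDeviceInfoPtr info)
    {
        /* No USB address set was built for this domain (or the device
         * has no USB address): nothing to release. */
        if (!addrs || info->type != VIR_DOMAIN_DEVICE_ADDRESS_TYPE_USB)
            return 0;
        ...
    }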
src/conf/domain_addr.c | 5 ++++-
src/conf/domain_addr.h | 4 ++--
src/qemu/qemu_domain_address.c | 7 ++-----
src/qemu/qemu_hotplug.c | 43 ++++++++++++++----------------------------
4 files changed, 22 insertions(+), 37 deletions(-)
--
2.13.6
Re: [libvirt] [PATCH 0/7] x86: Rework KVM-defaults compat code, enable kvm_pv_unhalt by default
by Eduardo Habkost
(CCing libvir-list)
On Mon, Oct 09, 2017 at 05:47:44PM +0200, Paolo Bonzini wrote:
> On 09/10/2017 17:15, Waiman Long wrote:
> > On 10/09/2017 09:39 AM, Paolo Bonzini wrote:
> >> On 06/10/2017 23:52, Eduardo Habkost wrote:
> >>> This series enables kvm_pv_unhalt by default on pc-*-2.11 and
> >>> newer.
> >>
> >> I've discussed PV spinlocks with some folks at Microsoft for a few weeks
> >> now, and I'm not 100% sure about enabling kvm_pv_unhalt by default.
> >> It's probably a good idea overall, but it does come with some caveats.
> >>
> >> The problem is that there were two different implementations of fair
> >> spinlocks in Linux, ticket spinlocks and queued spinlocks. When
> >> kvm_pv_unhalt is disabled, ticket spinlocks sucked badly indeed; queued
> >> spinlocks however simply revert to unfair spinlocks, which loses the
> >> fairness but has the best performance. See virt_spin_lock in
> >> arch/x86/include/asm/qspinlock.h.
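For readers without the kernel tree at hand, the test-and-set fallback in
arch/x86/include/asm/qspinlock.h looks approximately like this (paraphrased
from the 4.13-era code, not guaranteed to match any given release):

    static inline bool virt_spin_lock(struct qspinlock *lock)
    {
        if (!static_cpu_has(X86_FEATURE_HYPERVISOR))
            return false;

        /*
         * Without PARAVIRT_SPINLOCKS support we fall back to a
         * test-and-set spinlock, because fair locks have horrible
         * lock 'holder' preemption issues.
         */
        do {
            while (atomic_read(&lock->val) != 0)
                cpu_relax();
        } while (atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL) != 0);

        return true;
    }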
> >>
> >> Now, fair spinlocks are only really needed for large NUMA machines.
> >> With a single NUMA node, for example, test-and-set spinlocks work well
> >> enough; there's not _much_ need for fairness in practice, and the
> >> paravirtualization does introduce some overhead.
> >>
> >> Therefore, the best performance would be achieved with kvm_pv_unhalt
> >> disabled on small VMs, and enabled on large VMs spanning multiple host
> >> NUMA nodes.
> >>
> >> Waiman, Davidlohr, do you have an opinion on this as well?
> >
> > I agree. Using an unfair lock in a small VM is good for performance. The
> > only problem I see is how we define what is small. Even a machine with a
> > single socket can now have up to 28 cores, 56 threads. If a VM has up to
> > 56 vCPUs, we may still want pvqspinlock to be used.
>
> Yes. Another possibility is to enable it when there is >1 NUMA node in
> the guest. We generally don't do this kind of magic but higher layers
> (oVirt/OpenStack) do.
Can't the guest make this decision, instead of the host?
>
> It also depends on how worse performance is with PV qspinlocks,
> especially since now there's also vcpu_is_preempted that has to be
> thrown in the mix. A lot more testing is needed.
>
This is an interesting case for "-cpu host"/host-passthrough:
usage of "-cpu host" has an implicit assumption that no feature
will decrease performance just because we told the guest that it
is available.
Can't the guest choose the most reasonable option based on the
information we provide to it, instead of having to hide
information from the guest to get better performance?
> > Is the kvm_pv_unhalt flag a global one for all VMs within a machine? Or
> > can it be different for each VM? We may want to have this flag
> > dynamically determined depending on the configuration of the VM.
>
> It's per-VM, so that's easy.
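For the archives: per-VM control is just a CPU feature flag, e.g. on the
QEMU command line (assuming a QEMU new enough to spell the property this
way; older releases use the +/- feature syntax):

    qemu-system-x86_64 -cpu host,kvm-pv-unhalt=off ...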
--
Eduardo
[libvirt] Libvirt config converter can't handle file not ending with new line
by Wei Liu
Hi Jim
I discovered a problem when using the xen_xl converter: when the file in
question doesn't end with a newline, I get the following error:
error: configuration file syntax error: memory conf:53: expecting a value
After digging a bit (though I haven't read the libvirt code), it appears
the parser requires the file to end with a newline.
I have worked around this, but I think it is an issue that should be fixed.
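A trivial workaround, for anyone hitting the same thing, is to append a
newline before converting, e.g. (assuming the xl config is in a file
called guest.cfg):

    echo >> guest.cfg
    virsh domxml-from-native xen-xl guest.cfg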
Thanks,
Wei.
[libvirt] [PATCH v2 00/15] vbox: Improve handling of storage devices
by Dawid Zamirski
From: Dawid Zamirski <dzrudy(a)gmail.com>
v1: https://www.redhat.com/archives/libvir-list/2017-October/msg00381.html
Changes since v1:
* split out the original patches into smaller ones, with each bugfix or
cleanup task in its own isolated and compilable patch
* make sure switch statements are type casted and all cases are handled
* fixed implementation of partial VM cleanup code and isolated into
separate patch (patch #3)
* squashed doc and rng updates into a single patch (#9), updated doc
wording as suggested
* addressed other minor issues (added debug statements, fixed code
formatting, improved comments, fixed commit message spelling, etc.)
Dawid Zamirski (15):
vbox: Update ATTRIBUTE_UNUSED usage
vbox: Close media when undefining domains
vbox: Cleanup partially-defined VM on failure
vbox: vboxAttachDrives now relies on address info
vbox: Cleanup vboxAttachDrives implementation
vbox: Errors in vboxAttachDrives are now critical
vbox: Support empty removable drives.
vbox: Add more IStorageController API mappings
domain: Allow 'model' attribute for ide controller
vbox: Process <controller> element in domain XML
vbox: Add vboxDumpStorageControllers
vbox: Correctly generate drive name in dumpxml
vbox: Cleanup vboxDumpDisks implementation
vbox: Process empty removable disks in dumpxml
vbox: Generate disk address element in dumpxml
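For illustration, with this series a VirtualBox domain XML can spell out
its storage controllers, e.g. (values made up; patch #9 carries the
authoritative doc and schema changes):

    <controller type='ide' index='0' model='ich6'/>
    <controller type='sata' index='0'/>
    <controller type='scsi' index='0' model='lsilogic'/>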
docs/formatdomain.html.in | 4 +
docs/schemas/domaincommon.rng | 18 +-
src/conf/domain_conf.c | 9 +
src/conf/domain_conf.h | 9 +
src/libvirt_private.syms | 2 +
src/vbox/vbox_common.c | 1422 +++++++++++++++++++++++++----------------
src/vbox/vbox_common.h | 21 +
src/vbox/vbox_tmpl.c | 93 ++-
src/vbox/vbox_uniformed_api.h | 3 +
9 files changed, 983 insertions(+), 598 deletions(-)
--
2.14.2
[libvirt] [PATCH REPOST 0/8] Privatize _virStoragePoolObj and _virStorageVolDefList (Batch #3)
by John Ferlan
Since the original series (19 patches):
https://www.redhat.com/archives/libvir-list/2017-September/msg00594.html
didn't garner any attention, I'm going with smaller patch piles to make
forward progress.
This is the last half of the storage backends, plus one missed merge
(because I broke things up <sigh>).
John Ferlan (8):
storage: Use virStoragePoolObjGetDef accessor for iSCSI backend
storage: Use virStoragePoolObjGetDef accessor for MPATH backend
storage: Use virStoragePoolObjGetDef accessor for RBD backend
storage: Use virStoragePoolObjGetDef accessor for SCSI backend
storage: Use virStoragePoolObjGetDef accessor for VSTORAGE backend
storage: Use virStoragePoolObjGetDef accessor for ZFS backend
storage: Use virStoragePoolObjGetDef accessor for new driver events
storage: Privatize virStoragePoolObj and virStorageVolDefList
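The mechanical change in each backend is the same one used in the earlier
batches: fetch the definition through the accessor instead of dereferencing
the object directly (a sketch, not a literal hunk from the series):

    virStoragePoolDefPtr def = virStoragePoolObjGetDef(obj);

    /* ... then use def->source, def->target, etc. instead of
     * obj->def->..., so that _virStoragePoolObj can become private
     * to virstorageobj.c */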
src/conf/storage_conf.h | 4 ---
src/conf/virstorageobj.c | 20 +++++++++++
src/conf/virstorageobj.h | 15 --------
src/storage/storage_backend_iscsi.c | 41 ++++++++++++----------
src/storage/storage_backend_mpath.c | 8 +++--
src/storage/storage_backend_rbd.c | 64 ++++++++++++++++++----------------
src/storage/storage_backend_scsi.c | 30 +++++++++-------
src/storage/storage_backend_vstorage.c | 31 ++++++++--------
src/storage/storage_backend_zfs.c | 39 ++++++++++++---------
src/storage/storage_driver.c | 8 ++---
10 files changed, 144 insertions(+), 116 deletions(-)
--
2.13.6
[libvirt] [PATCH v2] vz: allow starting vz driver without host cache info
by Mikhail Feoktistov
Show a warning message instead of failing the operation.
This happens when the kernel or CPU doesn't support reporting CPU cache
info; in the Virtuozzo case, the "id" file doesn't exist.
---
src/vz/vz_driver.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/vz/vz_driver.c b/src/vz/vz_driver.c
index 6f4aee3..eb97e54 100644
--- a/src/vz/vz_driver.c
+++ b/src/vz/vz_driver.c
@@ -119,7 +119,7 @@ vzBuildCapabilities(void)
goto error;
if (virCapabilitiesInitCaches(caps) < 0)
- goto error;
+ VIR_WARN("Failed to get host CPU cache info");
verify(ARRAY_CARDINALITY(archs) == ARRAY_CARDINALITY(emulators));
--
1.8.3.1
[libvirt] [RFC] docs: Discourage usage of cache mode=passthrough
by Eduardo Habkost
Cache mode=passthrough can result in a broken cache topology if
the domain topology is not exactly the same as the host topology.
Warn about that in the documentation.
Bug report for reference:
https://bugzilla.redhat.com/show_bug.cgi?id=1184125
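For context, the mode being discouraged is selected in the domain XML via
the <cache> element under <cpu>, along these lines:

    <cpu mode='host-passthrough'>
      <cache mode='passthrough'/>
    </cpu>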
Signed-off-by: Eduardo Habkost <ehabkost(a)redhat.com>
---
docs/formatdomain.html.in | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/docs/formatdomain.html.in b/docs/formatdomain.html.in
index 57ec2ff34..9c21892f3 100644
--- a/docs/formatdomain.html.in
+++ b/docs/formatdomain.html.in
@@ -1478,7 +1478,9 @@
<dt><code>passthrough</code></dt>
<dd>The real CPU cache data reported by the host CPU will be
- passed through to the virtual CPU.</dd>
+ passed through to the virtual CPU. Using this mode is not
+ recommended unless the domain CPU and NUMA topology is exactly
+ the same as the host CPU and NUMA topology.</dd>
<dt><code>disable</code></dt>
<dd>The virtual CPU will report no CPU cache of the specified
--
2.13.5
[libvirt] [PATCH] remote: Fix libvirtd service memory leak when libvirt client is terminated unexpectedly
by xinhua.Cao
Based on commits 2033e8cc119454bc4273e8a41e66c899c60ba58b and
fe8f1c8b8650c8ab5e7d27338f4538582651bd14, we solved the libvirtd coredump
problem, but those commits introduced a memory leak. The scenario is as
follows:
1. A client registers a domain event callback, such as a reboot event.
2. The client is terminated unexpectedly; the client object is then never
freed inside the libvirtd service.
remoteDispatchConnectDomainEventCallbackRegisterAny takes a reference on
the client, but when the client is terminated before it calls the
corresponding deregister, the client's reference count never drops to
zero, so the memory leaks. This patch deregisters all events when the
client connection is closed.
---
daemon/remote.c | 47 +++++++++++++++++++++++++++++------------------
1 file changed, 29 insertions(+), 18 deletions(-)
diff --git a/daemon/remote.c b/daemon/remote.c
index 3f7d2d3..2b5a18b 100644
--- a/daemon/remote.c
+++ b/daemon/remote.c
@@ -1686,25 +1686,16 @@ void remoteRelayConnectionClosedEvent(virConnectPtr conn ATTRIBUTE_UNUSED, int r
VIR_WARN("unexpected %s event deregister failure", name); \
} \
VIR_FREE(eventCallbacks); \
+ neventCallbacks = 0; \
} while (0);
-/*
- * You must hold lock for at least the client
- * We don't free stuff here, merely disconnect the client's
- * network socket & resources.
- * We keep the libvirt connection open until any async
- * jobs have finished, then clean it up elsewhere
- */
-void remoteClientFreeFunc(void *data)
+static void
+remoteFreePrivCallbacks(void *data)
{
struct daemonClientPrivate *priv = data;
/* Deregister event delivery callback */
- if (priv->conn) {
- virIdentityPtr sysident = virIdentityGetSystem();
-
- virIdentitySetCurrent(sysident);
-
+ if (priv && priv->conn) {
DEREG_CB(priv->conn, priv->domainEventCallbacks,
priv->ndomainEventCallbacks,
virConnectDomainEventDeregisterAny, "domain");
@@ -1723,6 +1714,26 @@ void remoteClientFreeFunc(void *data)
DEREG_CB(priv->conn, priv->qemuEventCallbacks,
priv->nqemuEventCallbacks,
virConnectDomainQemuMonitorEventDeregister, "qemu monitor");
+ }
+}
+#undef DEREG_CB
+
+/*
+ * You must hold lock for at least the client
+ * We don't free stuff here, merely disconnect the client's
+ * network socket & resources.
+ * We keep the libvirt connection open until any async
+ * jobs have finished, then clean it up elsewhere
+ */
+void remoteClientFreeFunc(void *data)
+{
+ struct daemonClientPrivate *priv = data;
+
+ if (priv) {
+ virIdentityPtr sysident = virIdentityGetSystem();
+
+ virIdentitySetCurrent(sysident);
+ remoteFreePrivCallbacks(priv);
if (priv->closeRegistered) {
if (virConnectUnregisterCloseCallback(priv->conn,
@@ -1734,18 +1745,18 @@ void remoteClientFreeFunc(void *data)
virIdentitySetCurrent(NULL);
virObjectUnref(sysident);
+ VIR_FREE(priv);
}
-
- VIR_FREE(priv);
}
-#undef DEREG_CB
-
static void remoteClientCloseFunc(virNetServerClientPtr client)
{
struct daemonClientPrivate *priv = virNetServerClientGetPrivateData(client);
- daemonRemoveAllClientStreams(priv->streams);
+ if (priv) {
+ daemonRemoveAllClientStreams(priv->streams);
+ remoteFreePrivCallbacks(priv);
+ }
}
--
2.8.3
[libvirt] [PATCH 0/2] Removing backend support when net interface is user/direct/hostdev.
by Julio Faracco
The <backend> XML tag is not supported for some interface types, yet the virsh
command 'attach-device' permits adding <backend> settings to all of them.
These commits reject <backend> for the user, direct, and hostdev interface types.
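For illustration, the sort of configuration these patches now reject pairs
<backend> with an interface type that cannot honour it (example made up;
the tap/vhost paths only make sense for tap-based types):

    <interface type='user'>
      <mac address='52:54:00:e5:48:58'/>
      <model type='virtio'/>
      <backend tap='/dev/net/tun' vhost='/dev/vhost-net'/>
    </interface>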
Julio Faracco (2):
tests: removing backend xml tag inside some test cases.
conf: network user/direct/hostdev do not support backend tag.
src/conf/domain_conf.c | 10 ++++++++++
tests/qemuxml2argvdata/qemuxml2argv-tap-vhost-incorrect.xml | 2 --
tests/qemuxml2argvdata/qemuxml2argv-tap-vhost.xml | 1 -
tests/qemuxml2argvtest.c | 6 +++---
.../qemuxml2xmloutdata/qemuxml2xmlout-tap-vhost-incorrect.xml | 2 --
tests/qemuxml2xmloutdata/qemuxml2xmlout-tap-vhost.xml | 1 -
6 files changed, 13 insertions(+), 9 deletions(-)
--
2.7.4
[libvirt] [PATCH 00/12] qemu: block: Prepare for user-specified backing chains (blockdev-add saga)
by Peter Krempa
When users specify the backing chain themselves, we need to take into account
what was passed in and/or inherit the data from the parents, since we copy
data (labels, etc.) from the parent disk source.
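For reference, a user-specified chain would look roughly like this in the
domain XML (a sketch; paths made up):

    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/images/top.qcow2'/>
      <backingStore type='file'>
        <format type='qcow2'/>
        <source file='/images/base.qcow2'/>
      </backingStore>
      <target dev='vda' bus='virtio'/>
    </disk>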
Peter Krempa (12):
storage: Extract common code to retrieve driver backend for support
check
storage: Add feature check for storage file backend supporting access
check
storage: Extract error reporting for broken chains
security: selinux: Pass parent storage source into image labeling
helper
security: dac: Take parent security label into account
security: selinux: Take parent security label into account
qemu: domain: Simplify using DAC permissions of top of backing chain
qemu: domain: Extract setup for disk source secrets
qemu: domain: Destroy secrets for complete backing chain
qemu: domain: Remove pointless alias check
qemu: domain: Prepare TLS data for the whole backing chain
qemu: domain: skip chain detection to end of backing chain
src/qemu/qemu_domain.c | 177 ++++++++++++++++++++++++++--------------
src/qemu/qemu_domain.h | 6 +-
src/qemu/qemu_driver.c | 6 +-
src/qemu/qemu_hotplug.c | 2 +-
src/qemu/qemu_process.c | 2 +-
src/security/security_dac.c | 38 +++++++--
src/security/security_selinux.c | 26 +++---
src/storage/storage_source.c | 110 ++++++++++++++++---------
src/storage/storage_source.h | 5 ++
9 files changed, 246 insertions(+), 126 deletions(-)
--
2.14.1