[libvirt] [PATCH v2 0/4] cleanup resolving persistent/running/current flags
by Nikolay Shirokovskiy
The original name of the series was:
Subject: [PATCH 0/3] make virDomainObjGetPersistentDef return only persistent definition
Changes from version 1
======================
The original motivation of the series was fixing virDomainObjGetPersistentDef
so that it would not return configs other than the persistent one. However, the
final patch that did this in the first version is absent from the current
version. I think the fix should be done another way and in a different series.
Thus this version contains only misc cleanups, which are also split into
patches better than before.
A few words on why I am leaving out the patch for virDomainObjGetPersistentDef.
First, I should not return NULL from the function, as this value already has a
different meaning - a memory allocation error - and there is code that checks
for this error. That code calls virDomainObjGetPersistentDef unconditionally
but later uses the return value only for persistent domains; thus it could get
a NULL pointer from the call and treat it as an allocation error.
I think the proper way would be changing virDomainObjGetPersistentDef so that
it cannot fail, that is, so that it does not call virDomainObjSetDefTransient.
This looks to be already true, because drivers that distinguish between
running/persistent configs call virDomainObjSetDefTransient early in the domain
start process (and even twice), so the later calls to
virDomainObjSetDefTransient are unnecessary.
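To illustrate the ambiguity with a minimal standalone sketch (toy types, not
the real libvirt signatures), a single NULL return cannot distinguish "no
persistent config" from an allocation failure:

#include <stdio.h>
#include <stdlib.h>

/* Toy stand-ins for the libvirt types involved; illustrative only. */
typedef struct {
    int persistent;
    void *persistentDef;
} DomainObj;

/* If NULL meant "no persistent config" in addition to "allocation error",
 * callers could not tell the two cases apart. */
static void *
get_persistent_def(DomainObj *vm)
{
    if (!vm->persistent)
        return NULL;            /* transient domain: no persistent config */
    return vm->persistentDef;   /* may itself be NULL on a failed allocation */
}

int main(void)
{
    DomainObj transient = { 0, NULL };

    if (!get_persistent_def(&transient)) {
        /* Existing callers take this branch as "out of memory", which is
         * wrong for a domain that is merely transient. */
        fprintf(stderr, "allocation error?\n");
        return EXIT_FAILURE;
    }
    return EXIT_SUCCESS;
}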
Nikolay Shirokovskiy (4):
virDomainObjUpdateModificationImpact: reduce nesting
libxlDomainSetMemoryFlags: reuse virDomainLiveConfigHelperMethod
lxc, libxl: reuse virDomainObjUpdateModificationImpact
libxlDomainPinVcpuFlags: remove check duplicates
src/conf/domain_conf.c | 12 +++---
src/libxl/libxl_driver.c | 103 ++++-------------------------------------------
src/lxc/lxc_driver.c | 75 +++-------------------------------
3 files changed, 19 insertions(+), 171 deletions(-)
--
1.8.3.1
[libvirt] [PATCH v3] This patch gives an error when migration is attempted with both --live and --offline options.
by Nitesh Konkar
Signed-off-by: Nitesh Konkar <nitkon12(a)linux.vnet.ibm.com>
---
tools/virsh-domain.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/tools/virsh-domain.c b/tools/virsh-domain.c
index 43c8436..b9f678f 100644
--- a/tools/virsh-domain.c
+++ b/tools/virsh-domain.c
@@ -9838,6 +9838,8 @@ cmdMigrate(vshControl *ctl, const vshCmd *cmd)
bool live_flag = false;
virshCtrlData data = { .dconn = NULL };
+ VSH_EXCLUSIVE_OPTIONS("live", "offline");
+
if (!(dom = virshCommandOptDomain(ctl, cmd, NULL)))
return false;
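The check added above is, in spirit, equivalent to the following standalone
sketch (the names and error wording here are illustrative assumptions, not the
actual virsh macro implementation):

#include <stdbool.h>
#include <stdio.h>

static bool
options_exclusive(bool opt1, bool opt2, const char *name1, const char *name2)
{
    /* Reject the command when both flags were given together. */
    if (opt1 && opt2) {
        fprintf(stderr,
                "error: Options --%s and --%s are mutually exclusive\n",
                name1, name2);
        return false;
    }
    return true;
}

int main(void)
{
    /* Models "virsh migrate dom --live --offline": both flags present. */
    return options_exclusive(true, true, "live", "offline") ? 0 : 1;
}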
--
1.8.3.1
[libvirt] [PATCH] Libvirt: Add missing default value for config option max_queued_clients
by Jason J. Herne
Commit 1199edb1d4e3 added config option max_queued_clients and documented the
default value as 1000 but never actually set that value. This patch sets the
default value.
This addresses an issue whereby the following error message is reported if too
many migrations are started simultaneously:
error: End of file while reading data: Ncat: Invalid argument.: Input/output error
The problem is that too many ncat processes are spawned on the destination
system. They all attempt to connect to the libvirt socket. Because the
destination libvirtd cannot respond to the connect requests quickly enough, we
overrun the socket's pending-connections queue.
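The mechanics can be reproduced with a plain socket listener (the path below is
illustrative): the backlog argument of listen() caps the pending-connection
queue, and connects beyond it fail, which is what max_queued_clients controls
for libvirtd's socket.

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    struct sockaddr_un addr;

    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, "/tmp/demo.sock", sizeof(addr.sun_path) - 1);
    unlink(addr.sun_path);
    if (fd < 0 || bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        return 1;

    /* The kernel queues at most 'backlog' completed-but-unaccepted
     * connections; a burst of connects beyond that is refused, which is
     * the ncat failure described above. max_queued_clients supplies this
     * value for libvirtd's socket. */
    if (listen(fd, 1000) < 0)
        return 1;

    pause();  /* never accept(), so the queue can actually fill up */
    return 0;
}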
Signed-off-by: Jason J. Herne <jjherne(a)linux.vnet.ibm.com>
Reviewed-by: Boris Fiuczynski <fiuczy(a)linux.vnet.ibm.com>
---
daemon/libvirtd-config.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/daemon/libvirtd-config.c b/daemon/libvirtd-config.c
index c31c8b2..7a448f9 100644
--- a/daemon/libvirtd-config.c
+++ b/daemon/libvirtd-config.c
@@ -280,6 +280,7 @@ daemonConfigNew(bool privileged ATTRIBUTE_UNUSED)
data->min_workers = 5;
data->max_workers = 20;
data->max_clients = 5000;
+ data->max_queued_clients = 1000;
data->max_anonymous_clients = 20;
data->prio_workers = 5;
--
1.9.1
[libvirt] [PATCH v7 0/6] Global domain cpu.cfs_period_us and cpu.cfs_quota_us setup
by Alexander Burluka
This patchset implements the ability to specify values for a domain's top-level
cpu.cfs_period_us and cpu.cfs_quota_us cgroup parameters. These parameters are
opt-in and named "global_period" and "global_quota".
Introducing these settings gives management applications a further
choice in controlling CPU usage.
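As a sketch of how a management application might drive this (the
typed-parameter field names follow the series' naming; the exact public
constants added in include/libvirt/libvirt-domain.h may differ):

#include <stdio.h>
#include <string.h>
#include <libvirt/libvirt.h>

int main(void)
{
    virConnectPtr conn = virConnectOpen("qemu:///system");
    virDomainPtr dom = conn ? virDomainLookupByName(conn, "demo") : NULL;
    virTypedParameter params[2];

    if (!dom)
        return 1;

    memset(params, 0, sizeof(params));
    snprintf(params[0].field, VIR_TYPED_PARAM_FIELD_LENGTH, "global_period");
    params[0].type = VIR_TYPED_PARAM_ULLONG;
    params[0].value.ul = 1000000;   /* top-level cpu.cfs_period_us, in us */
    snprintf(params[1].field, VIR_TYPED_PARAM_FIELD_LENGTH, "global_quota");
    params[1].type = VIR_TYPED_PARAM_LLONG;
    params[1].value.l = 500000;     /* top-level cpu.cfs_quota_us, in us */

    if (virDomainSetSchedulerParametersFlags(dom, params, 2,
                                             VIR_DOMAIN_AFFECT_LIVE) < 0)
        fprintf(stderr, "failed to set global bandwidth\n");

    virDomainFree(dom);
    virConnectClose(conn);
    return 0;
}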
Changes in v2: add XML validation test
Changes in v3: remove unnecessary cgroup copying
Changes in v4: fix a small rebase error
Changes in v5: rebase to version 1.3.1
Changes in v6: remove unnecessary check
Changes in v7: rebase to current master
Alexander Burluka (6):
Add global period definitions
Add global quota parameter necessary definitions
Add error checking on global quota and period
Add global_period and global_quota XML validation test
Implement qemuSetupGlobalCpuCgroup
Implement handling of per-domain bandwidth settings
docs/schemas/domaincommon.rng | 10 +++
include/libvirt/libvirt-domain.h | 32 +++++++
src/conf/domain_conf.c | 37 +++++++++
src/conf/domain_conf.h | 2 +
src/qemu/qemu_cgroup.c | 49 +++++++++++
src/qemu/qemu_cgroup.h | 1 +
src/qemu/qemu_command.c | 3 +-
src/qemu/qemu_driver.c | 97 +++++++++++++++++++++-
src/qemu/qemu_process.c | 4 +
tests/qemuxml2argvdata/qemuxml2argv-cputune.xml | 2 +
.../qemuxml2xmloutdata/qemuxml2xmlout-cputune.xml | 2 +
11 files changed, 236 insertions(+), 3 deletions(-)
--
1.8.3.1
[libvirt] [PATCH] qemu: Don't always wait for SPICE to finish migration
by Jiri Denemark
When SPICE graphics is configured for a domain but we did not ask the
client to switch to the destination, we should not wait for the
SPICE_MIGRATE_COMPLETED event (which will never come).
https://bugzilla.redhat.com/show_bug.cgi?id=1151723
Signed-off-by: Jiri Denemark <jdenemar(a)redhat.com>
---
src/qemu/qemu_domain.h | 2 ++
src/qemu/qemu_migration.c | 4 +++-
2 files changed, 5 insertions(+), 1 deletion(-)
diff --git a/src/qemu/qemu_domain.h b/src/qemu/qemu_domain.h
index 8359b1a..0144792 100644
--- a/src/qemu/qemu_domain.h
+++ b/src/qemu/qemu_domain.h
@@ -137,6 +137,8 @@ struct qemuDomainJobObj {
qemuDomainJobInfoPtr current; /* async job progress data */
qemuDomainJobInfoPtr completed; /* statistics data of a recently completed job */
bool abortJob; /* abort of the job requested */
+ bool spiceMigration; /* we asked for spice migration and we
+ * should wait for it to finish */
bool spiceMigrated; /* spice migration completed */
};
diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index 704e182..64cbffa 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -2415,7 +2415,8 @@ qemuMigrationWaitForSpice(virDomainObjPtr vm)
bool wait_for_spice = false;
size_t i = 0;
- if (!virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_SEAMLESS_MIGRATION))
+ if (!virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_SEAMLESS_MIGRATION) ||
+ !priv->job.spiceMigration)
return 0;
for (i = 0; i < vm->def->ngraphics; i++) {
@@ -2789,6 +2790,7 @@ qemuDomainMigrateGraphicsRelocate(virQEMUDriverPtr driver,
QEMU_ASYNC_JOB_MIGRATION_OUT) == 0) {
ret = qemuMonitorGraphicsRelocate(priv->mon, type, listenAddress,
port, tlsPort, tlsSubject);
+ priv->job.spiceMigration = !ret;
if (qemuDomainObjExitMonitor(driver, vm) < 0)
ret = -1;
}
--
2.7.2
[libvirt] [PATCH] qemu: Don't try to fetch migration stats on destination
by Jiri Denemark
Migration statistics are not available on the destination host, and
starting a query job during incoming migration is not allowed. Trying
to do so would result in the following error:
    Timed out during operation: cannot acquire state change lock (held
    by remoteDispatchDomainMigratePrepare3Params)
We should not even try to start the job.
https://bugzilla.redhat.com/show_bug.cgi?id=1278727
Signed-off-by: Jiri Denemark <jdenemar(a)redhat.com>
---
src/qemu/qemu_driver.c | 13 ++++++++++---
1 file changed, 10 insertions(+), 3 deletions(-)
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 45ff3c0..241de67 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -12901,9 +12901,16 @@ qemuDomainGetJobStatsInternal(virQEMUDriverPtr driver,
if (!priv->job.current || !priv->job.current->stats.status)
fetch = false;
- if (fetch &&
- qemuDomainObjBeginJob(driver, vm, QEMU_JOB_QUERY) < 0)
- return -1;
+ if (fetch) {
+ if (priv->job.asyncJob == QEMU_ASYNC_JOB_MIGRATION_IN) {
+ virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s",
+ _("migration statistics are available only on "
+ "the source host"));
+ return -1;
+ }
+ if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_QUERY) < 0)
+ return -1;
+ }
if (!completed &&
!virDomainObjIsActive(vm)) {
--
2.7.2
[libvirt] [PATCH] qemu: enable hotplugging of macvtap device with multiqueue
by Shanzhi Yu
In commit 81a110, multiqueue for macvtap was enabled, but support for
hotplugging it was forgotten.
Signed-off-by: Shanzhi Yu <shyu(a)redhat.com>
---
src/qemu/qemu_hotplug.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/src/qemu/qemu_hotplug.c b/src/qemu/qemu_hotplug.c
index dc76268..b580283 100644
--- a/src/qemu/qemu_hotplug.c
+++ b/src/qemu/qemu_hotplug.c
@@ -892,10 +892,11 @@ int qemuDomainAttachNetDevice(virConnectPtr conn,
goto cleanup;
}
- /* Currently nothing besides TAP devices supports multiqueue. */
+ /* Currently only TAP/macvtap devices support multiqueue. */
if (net->driver.virtio.queues > 0 &&
!(actualType == VIR_DOMAIN_NET_TYPE_NETWORK ||
- actualType == VIR_DOMAIN_NET_TYPE_BRIDGE)) {
+ actualType == VIR_DOMAIN_NET_TYPE_BRIDGE ||
+ actualType == VIR_DOMAIN_NET_TYPE_DIRECT)) {
virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
_("Multiqueue network is not supported for: %s"),
virDomainNetTypeToString(actualType));
--
1.8.3.1
[libvirt] [RFC] [libvirt-gconfig] Suggestion about (maybe) re-factoring GVirtConfigDomainGraphics
by Fabiano Fidêncio
Howdy!
I've been trying to use libvirt-gobject and libvirt-gconfig in virt-viewer for
accessing VMs and looking at their config, instead of using libvirt and parsing
XML directly, and it turns out that libvirt-gconfig is not exactly handy for a
"read-only" use case (such as virt-viewer's).
Let me try to explain by pointing to some code as an example, and then I'll
give my suggestions.
For example, let's take a look at
https://git.fedorahosted.org/cgit/virt-viewer.git/tree/src/virt-viewer.c#...
In this function, the first thing done is to get the type of the
graphics device (SPICE or VNC), and here is the first problem: there is
no straightforward way of doing this in libvirt-gconfig (or is
there?).
It seems easier to keep getting the type using libxml and then use
the specific spice/vnc functions for getting the properties. And here
is the second problem: I'll always need an if/else statement to get
the properties. Something like:
if (g_str_equal(type, "vnc"))
port = gvir_config_domain_graphics_vnc_get_port(domain);
else if (g_str_equal(type, "spice"))
port = gvir_config_domain_graphics_spice_get_port(domain);
This kind of usage makes me think that libvirt-gconfig is missing an
abstract class, a parent of GVirConfigDomainGraphics{Spice,Vnc,...},
which could provide virtual functions like:
gtype = gvir_config_domain_graphics_get_gtype(domain);
port = gvir_config_domain_graphics_get_port(domain);
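Concretely, such an abstract parent could look roughly like this sketch (all
names hypothetical, assuming GObject derivable types; this is not existing
libvirt-gconfig API):

#include <glib-object.h>

#define DEMO_TYPE_GRAPHICS demo_graphics_get_type()
G_DECLARE_DERIVABLE_TYPE(DemoGraphics, demo_graphics, DEMO, GRAPHICS, GObject)

struct _DemoGraphicsClass {
    GObjectClass parent_class;
    /* Virtual: each backend (SPICE, VNC, ...) overrides this. */
    gint (*get_port)(DemoGraphics *self);
};

G_DEFINE_ABSTRACT_TYPE(DemoGraphics, demo_graphics, G_TYPE_OBJECT)

static void demo_graphics_class_init(DemoGraphicsClass *klass) { (void)klass; }
static void demo_graphics_init(DemoGraphics *self) { (void)self; }

/* One entry point for callers, no matter which subtype they hold. */
gint
demo_graphics_get_port(DemoGraphics *self)
{
    g_return_val_if_fail(DEMO_IS_GRAPHICS(self), -1);
    return DEMO_GRAPHICS_GET_CLASS(self)->get_port(self);
}

A caller like virt-viewer would then invoke the single getter regardless of
whether the device is SPICE or VNC, with no if/else chain.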
Thinking a bit about it and taking a look at the
GVirConfigDomainGraphics code, it is possible to see room for two
new classes:
GVirConfigDomainGraphicsLocal and GVirConfigDomainGraphicsRemote.
Then we could have something like:
GVirConfigDomainGraphics
|_
| GVirConfigDomainGraphicsLocal
| |_
| | GVirConfigDomainGraphicsLocalDesktop
| |_
| GVirConfigDomainGraphicsLocalSdl
|_
GVirConfigDomainGraphicsRemote
|_
| GVirConfigDomainGraphicsRemoteSpice
|_
| GVirConfigDomainGraphicsRemoteVnc
|_
GVirConfigDomainGraphicsRemoteRdp
I do know that Local and Remote are not exactly accurate names, but it is
the best that I could come up with.
So, would it be acceptable to introduce these two new classes and then
have the specific graphics classes inherit from them? Does this
make sense to you, people?
I'm more than happy to provide the code for this, but not before we
discuss and reach a decision about the approach. :-)
I'm looking forward to some feedback.
--
Fabiano Fidêncio
[libvirt] [PATCH v5 0/10] add close callback for drivers with persistent connection
by Nikolay Shirokovskiy
Currently the close callback API can only inform us of the closing of the
connection between the remote driver and the daemon. But what if a driver
running in the daemon itself has another persistent connection? In that case
we want to be informed when that connection changes state too.
This patch series extends the meaning of the current close callback API so
that it now notifies of the closing of any internal persistent connection.
The overall approach is to move close callback support into the drivers.
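For reference, here is a caller's view through the existing public
close-callback API, which this series keeps intact (a sketch; the vz URI is
just an example, and a real client needs the event loop running for the
callback to fire):

#include <stdio.h>
#include <libvirt/libvirt.h>

static void
conn_closed(virConnectPtr conn, int reason, void *opaque)
{
    (void)conn;
    (void)opaque;
    fprintf(stderr, "connection closed, reason=%d\n", reason);
}

int main(void)
{
    virConnectPtr conn;
    int i;

    if (virEventRegisterDefaultImpl() < 0)
        return 1;
    if (!(conn = virConnectOpen("vz:///system")))
        return 1;

    /* With this series the callback also fires when the driver's own
     * persistent connection drops, not only the remote daemon link. */
    if (virConnectRegisterCloseCallback(conn, conn_closed, NULL, NULL) < 0)
        fprintf(stderr, "close callbacks not supported for this driver\n");

    for (i = 0; i < 10; i++)
        virEventRunDefaultImpl();  /* dispatch events; conn_closed runs here */

    virConnectUnregisterCloseCallback(conn, conn_closed);
    virConnectClose(conn);
    return 0;
}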
Changes from v4:
================
1. New patch "remote: factor out feature checks on connection open" to get
rid of code dups.
2. "daemon: add connection close rpc" now checks if peer supports
rpc for close callbacks. If it is not then we handle only disconnections
to peer as before.
Changes from v3:
================
Added patch [3] "close callback: make unregister clean after connect close event."
Made the register/unregister methods of the connection close callback object
return void. This solves the consistent-unregistering problem of patch [8]
"daemon: add connection close rpc" ([7] in the previous version). All checks
are moved outside of the methods. I hesitated over whether to add means of
tracking the connection close callback object's consistency, and finally
decided to add them (checks and warnings inside the methods). The reason is
that without these checks we get memory leaks which are rather difficult to
track down. Unfortunately this change touches a number of patches, as the
first change is made in the first patch of the series.
Changes from v2:
================
Split the patches further to make the series more comprehensible.
Nikolay Shirokovskiy (10):
factor out virConnectCloseCallbackDataPtr methods
virConnectCloseCallbackData: fix connection object refcount
close callback: make unregister clean after connect close event
virConnectCloseCallbackData: factor out callback disarming
close callback API: remove unnecessary locks
virConnectCloseCallbackDataDispose: remove unnecessary locks
close callback: move it to driver
remote: factor out feature checks on connection open
daemon: add connection close rpc
vz: implement connection close notification
daemon/libvirtd.h | 1 +
daemon/remote.c | 85 ++++++++++++++++++++++
src/datatypes.c | 118 +++++++++++++++++++++++-------
src/datatypes.h | 16 ++++-
src/driver-hypervisor.h | 12 ++++
src/libvirt-host.c | 46 ++----------
src/libvirt_internal.h | 5 ++
src/remote/remote_driver.c | 167 ++++++++++++++++++++++++++++++++-----------
src/remote/remote_protocol.x | 24 ++++++-
src/remote_protocol-structs | 6 ++
src/vz/vz_driver.c | 59 +++++++++++++++
src/vz/vz_sdk.c | 4 ++
src/vz/vz_utils.h | 3 +
13 files changed, 433 insertions(+), 113 deletions(-)
--
1.8.3.1
[libvirt] [PATCHv2 0/3] reorder qemu cgroups operations
by Henning Schild
This is a much shorter series focusing on the key point, the second patch.
The first patch is something that was found when looking at the code and is
just a cosmetic change. The third patch just cleans up. They were both
already ACKed.
Patch 2 was also already ACKed but conflicted with another pending change.
It should be reviewed in its new context. Note the new order with the
"manual" affinity setting code.
@Peter:
qemuProcessInitCpuAffinity and qemuProcessSetupEmulator have a lot in
common. I guess there is potential for further simplification.
The series is based on 92ec2e5e9b79b7df4d575040224bd606ab0b6dd8 with
these two patches on top:
http://www.redhat.com/archives/libvir-list/2016-February/msg01211.html
Henning Schild (3):
vircgroup: one central point for adding tasks to cgroups
qemu_cgroup: put qemu right into emulator sub-cgroup
qemu_cgroup: use virCgroupAddTask instead of virCgroupMoveTask
src/libvirt_private.syms | 1 -
src/qemu/qemu_process.c | 10 ++---
src/util/vircgroup.c | 105 +----------------------------------------------
src/util/vircgroup.h | 3 --
4 files changed, 6 insertions(+), 113 deletions(-)
--
2.4.10