[libvirt] [PATCH v2 00/16] Fix styles of curly braces around function bodies
by Martin Kletzander
Basically a v2 of:
https://www.redhat.com/archives/libvir-list/2014-March/msg00987.html
with a much more finely tuned regexp, wrapped lines, a syntax-check rule, and so on.
Martin Kletzander (16):
Use K&R style for curly braces in tests/
Use K&R style for curly braces in src/xen*/
Use K&R style for curly braces in src/util/
Use K&R style for curly braces in src/rpc/
Use K&R style for curly braces in src/conf/
Use K&R style for curly braces in src/qemu/
Use K&R style for curly braces in src/storage/
Use K&R style for curly braces in src/openvz/
Use K&R style for curly braces in src/nwfilter/
Use K&R style for curly braces in src/test/test_driver.c
Use K&R style for curly braces in src/uml/
Use K&R style for curly braces in src/lxc/lxc_driver.c
Use K&R style for curly braces in src/network/bridge_driver.c
Use K&R style for curly braces in src/vbox/
Use K&R style for curly braces in remaining files
Require K&R styled curly braces around function bodies
cfg.mk | 7 ++
daemon/libvirtd-config.c | 9 +-
docs/internals/command.html.in | 3 +-
examples/dominfo/info1.c | 7 +-
src/conf/domain_conf.c | 9 +-
src/conf/domain_nwfilter.c | 13 ++-
src/conf/interface_conf.c | 54 ++++++---
src/conf/nwfilter_conf.c | 30 +++--
src/conf/nwfilter_params.c | 3 +-
src/interface/interface_backend_netcf.c | 8 +-
src/interface/interface_backend_udev.c | 4 +-
src/interface/interface_driver.c | 4 +-
src/libxl/libxl_driver.c | 3 +-
src/lxc/lxc_driver.c | 30 +++--
src/network/bridge_driver.c | 54 ++++++---
src/node_device/node_device_driver.c | 5 +-
src/nwfilter/nwfilter_driver.c | 32 +++--
src/nwfilter/nwfilter_ebiptables_driver.c | 3 +-
src/nwfilter/nwfilter_learnipaddr.c | 41 ++++---
src/openvz/openvz_conf.c | 15 ++-
src/openvz/openvz_driver.c | 45 ++++---
src/qemu/qemu_agent.c | 5 +-
src/qemu/qemu_command.c | 6 +-
src/qemu/qemu_driver.c | 94 ++++++++++-----
src/qemu/qemu_migration.c | 3 +-
src/qemu/qemu_monitor.c | 3 +-
src/qemu/qemu_monitor_json.c | 3 +-
src/qemu/qemu_monitor_text.c | 20 +++-
src/rpc/virnetserver.c | 8 +-
src/rpc/virnetserverclient.c | 5 +-
src/rpc/virnettlscontext.c | 5 +-
src/secret/secret_driver.c | 8 +-
src/security/security_stack.c | 8 +-
src/storage/storage_backend_fs.c | 12 +-
src/storage/storage_driver.c | 78 ++++++++-----
src/test/test_driver.c | 135 ++++++++++++++-------
src/uml/uml_conf.c | 5 +-
src/uml/uml_driver.c | 78 ++++++++-----
src/util/vircgroup.c | 9 +-
src/util/virconf.c | 5 +-
src/util/virdbus.c | 8 +-
src/util/virerror.c | 3 +-
src/util/vireventpoll.c | 29 +++--
src/util/virhook.c | 11 +-
src/util/virnetdevvportprofile.c | 5 +-
src/util/virrandom.c | 5 +-
src/util/virsocketaddr.c | 26 +++--
src/util/virsysinfo.c | 15 ++-
src/util/virthread.c | 5 +-
src/util/virutil.c | 20 ++--
src/util/virutil.h | 12 +-
src/util/viruuid.c | 5 +-
src/vbox/vbox_driver.c | 5 +-
src/vbox/vbox_tmpl.c | 188 ++++++++++++++++++++----------
src/xen/xen_driver.c | 6 +-
src/xen/xen_hypervisor.c | 5 +-
src/xen/xm_internal.c | 10 +-
src/xenapi/xenapi_utils.c | 5 +-
src/xenxs/xen_xm.c | 35 ++++--
tests/commandhelper.c | 5 +-
tests/qemuargv2xmltest.c | 3 +-
tests/shunloadhelper.c | 5 +-
tests/testutils.c | 15 ++-
tests/testutilslxc.c | 3 +-
tests/testutilsqemu.c | 3 +-
tests/testutilsxen.c | 3 +-
tests/virshtest.c | 54 ++++++---
tests/virusbtest.c | 3 +-
tests/xencapstest.c | 33 ++++--
69 files changed, 929 insertions(+), 465 deletions(-)
--
1.9.0
10 years, 9 months
[libvirt] Adding support to limit client connections for vnc/spice display driver
by Patil, Tushar
Hello,
Greetings!
We are using the KVM hypervisor driver to run an OpenStack IaaS. A couple of months back we reported a security issue [1] in OpenStack: we want to limit the number of VNC client connections that users can open to a given VM. I know there is a share policy where you can specify the "VNC_SHARE_MODE_EXCLUSIVE" share mode, but that allows only one client connection to the VNC console and disconnects all previously opened VNC client connections. Since the VNC display driver already has the data on client connections, it should be possible to add logic limiting their number.
What we want is the ability to specify, in the domain XML for the graphics device (especially for vnc/spice), a threshold for how many VNC client connections should be allowed at any given point in time.
Example:
<graphics type="vnc" autoport="yes" keymap="en-us" listen="127.0.0.1" share-policy="limit-connections" connections="5"/>
This adds support for a new share policy, "limit-connections".
So in the above example, when a user tries to open the VNC display for the 6th time, the request should be rejected.
I need your expert opinion on whether it's a good idea to add this support in the vnc/spice display driver, or whether this kind of constraint should be imposed outside libvirt.
[1] : https://bugs.launchpad.net/nova/+bug/1227575
Tushar
[libvirt] [PATCH] Fix uninitialized data in virSocketAddrMask
by Daniel P. Berrange
The virSocketAddrMask method did not initialize all fields
in the sockaddr_in6 struct. In particular the 'sin6_scope_id'
field could contain random garbage, which would in turn
affect the result of any later virSocketAddrFormat calls.
This led to ip6tables rules in the FORWARD chain which
matched on random garbage sin6_scope_id. Fortunately these
were ACCEPT rules, so the impact was merely that desired
traffic was blocked, rather than undesired traffic allowed.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
---
src/util/virsocketaddr.c | 1 +
tests/sockettest.c | 53 ++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 54 insertions(+)
diff --git a/src/util/virsocketaddr.c b/src/util/virsocketaddr.c
index 3f270e2..1099eae 100644
--- a/src/util/virsocketaddr.c
+++ b/src/util/virsocketaddr.c
@@ -424,6 +424,7 @@ virSocketAddrMask(const virSocketAddr *addr,
const virSocketAddr *netmask,
virSocketAddrPtr network)
{
+ memset(network, 0, sizeof(*network));
if (addr->data.stor.ss_family != netmask->data.stor.ss_family) {
network->data.stor.ss_family = AF_UNSPEC;
return -1;
diff --git a/tests/sockettest.c b/tests/sockettest.c
index e613546..68b0536 100644
--- a/tests/sockettest.c
+++ b/tests/sockettest.c
@@ -153,6 +153,49 @@ static int testNetmaskHelper(const void *opaque)
return testNetmask(data->addr1, data->addr2, data->netmask, data->pass);
}
+
+
+static int testMaskNetwork(const char *addrstr,
+ int prefix,
+ const char *networkstr)
+{
+ virSocketAddr addr;
+ virSocketAddr network;
+ char *gotnet = NULL;
+
+ /* Intentionally fill with garbage */
+ memset(&network, 1, sizeof(network));
+
+ if (virSocketAddrParse(&addr, addrstr, AF_UNSPEC) < 0)
+ return -1;
+
+ if (virSocketAddrMaskByPrefix(&addr, prefix, &network) < 0)
+ return -1;
+
+ if (!(gotnet = virSocketAddrFormat(&network)))
+ return -1;
+
+ if (STRNEQ(networkstr, gotnet)) {
+ fprintf(stderr, "Expected %s, got %s\n", networkstr, gotnet);
+ VIR_FREE(gotnet);
+ return -1;
+ }
+ VIR_FREE(gotnet);
+ return 0;
+}
+
+struct testMaskNetworkData {
+ const char *addr1;
+ int prefix;
+ const char *network;
+};
+static int testMaskNetworkHelper(const void *opaque)
+{
+ const struct testMaskNetworkData *data = opaque;
+ return testMaskNetwork(data->addr1, data->prefix, data->network);
+}
+
+
static int testWildcard(const char *addrstr,
bool pass)
{
@@ -255,6 +298,14 @@ mymain(void)
ret = -1; \
} while (0)
+#define DO_TEST_MASK_NETWORK(addr1, prefix, network) \
+ do { \
+ struct testMaskNetworkData data = { addr1, prefix, network }; \
+ if (virtTestRun("Test mask network " addr1 " / " #prefix " == " network, \
+ testMaskNetworkHelper, &data) < 0) \
+ ret = -1; \
+ } while (0)
+
#define DO_TEST_WILDCARD(addr, pass) \
do { \
struct testWildcardData data = { addr, pass}; \
@@ -324,6 +375,8 @@ mymain(void)
DO_TEST_NETMASK("2000::1:1", "9000::1:1",
"ffff:ffff:ffff:ffff:ffff:ffff:ffff:0", false);
+ DO_TEST_MASK_NETWORK("2001:db8:ca2:2::1", 64, "2001:db8:ca2:2::");
+
DO_TEST_WILDCARD("0.0.0.0", true);
DO_TEST_WILDCARD("::", true);
DO_TEST_WILDCARD("0", true);
--
1.8.5.3
[libvirt] [PATCH] Fix virQEMUCapsLoadCache leaks
by Ján Tomko
Valgrind reported leaks of the maxCpus and arch strings returned by
virXPathString, as well as a leak of the machineMaxCpus array.
Use 'tmp' for the strings we don't want to free, to allow freeing
of 'str' in the cleanup label, and free machineMaxCpus
in virQEMUCapsReset too.
---
src/qemu/qemu_capabilities.c | 18 +++++++++++-------
1 file changed, 11 insertions(+), 7 deletions(-)
diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c
index 2914200..e742c03 100644
--- a/src/qemu/qemu_capabilities.c
+++ b/src/qemu/qemu_capabilities.c
@@ -2376,7 +2376,8 @@ virQEMUCapsLoadCache(virQEMUCapsPtr qemuCaps, const char *filename,
int n;
xmlNodePtr *nodes = NULL;
xmlXPathContextPtr ctxt = NULL;
- char *str;
+ char *str = NULL;
+ char *tmp;
long long int l;
if (!(doc = virXMLParseFile(filename)))
@@ -2432,7 +2433,6 @@ virQEMUCapsLoadCache(virQEMUCapsPtr qemuCaps, const char *filename,
if (flag < 0) {
virReportError(VIR_ERR_INTERNAL_ERROR,
_("Unknown qemu capabilities flag %s"), str);
- VIR_FREE(str);
goto cleanup;
}
VIR_FREE(str);
@@ -2463,6 +2463,7 @@ virQEMUCapsLoadCache(virQEMUCapsPtr qemuCaps, const char *filename,
_("unknown arch %s in QEMU capabilities cache"), str);
goto cleanup;
}
+ VIR_FREE(str);
if ((n = virXPathNodeSet("./cpu", ctxt, &nodes)) < 0) {
virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
@@ -2476,12 +2477,12 @@ virQEMUCapsLoadCache(virQEMUCapsPtr qemuCaps, const char *filename,
goto cleanup;
for (i = 0; i < n; i++) {
- if (!(str = virXMLPropString(nodes[i], "name"))) {
+ if (!(tmp = virXMLPropString(nodes[i], "name"))) {
virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
_("missing cpu name in QEMU capabilities cache"));
goto cleanup;
}
- qemuCaps->cpuDefinitions[i] = str;
+ qemuCaps->cpuDefinitions[i] = tmp;
}
}
VIR_FREE(nodes);
@@ -2503,12 +2504,12 @@ virQEMUCapsLoadCache(virQEMUCapsPtr qemuCaps, const char *filename,
goto cleanup;
for (i = 0; i < n; i++) {
- if (!(str = virXMLPropString(nodes[i], "name"))) {
+ if (!(tmp = virXMLPropString(nodes[i], "name"))) {
virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
_("missing machine name in QEMU capabilities cache"));
goto cleanup;
}
- qemuCaps->machineTypes[i] = str;
+ qemuCaps->machineTypes[i] = tmp;
qemuCaps->machineAliases[i] = virXMLPropString(nodes[i], "alias");
@@ -2519,12 +2520,14 @@ virQEMUCapsLoadCache(virQEMUCapsPtr qemuCaps, const char *filename,
_("malformed machine cpu count in QEMU capabilities cache"));
goto cleanup;
}
+ VIR_FREE(str);
}
}
VIR_FREE(nodes);
ret = 0;
- cleanup:
+cleanup:
+ VIR_FREE(str);
VIR_FREE(nodes);
xmlXPathFreeContext(ctxt);
xmlFreeDoc(doc);
@@ -2668,6 +2671,7 @@ virQEMUCapsReset(virQEMUCapsPtr qemuCaps)
}
VIR_FREE(qemuCaps->machineTypes);
VIR_FREE(qemuCaps->machineAliases);
+ VIR_FREE(qemuCaps->machineMaxCpus);
qemuCaps->nmachineTypes = 0;
}
--
1.8.3.2
[libvirt] [PATCH v2] daemon: Enhance documentation for changing NOFILE limit
by Jiri Denemark
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
---
daemon/libvirtd.sysconf | 10 +++++++++-
1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/daemon/libvirtd.sysconf b/daemon/libvirtd.sysconf
index 3af1f03..8bdddd3 100644
--- a/daemon/libvirtd.sysconf
+++ b/daemon/libvirtd.sysconf
@@ -20,5 +20,13 @@
#
#SDL_AUDIODRIVER=pulse
-# Override the maximum number of opened files
+# Override the maximum number of opened files.
+# This only works with traditional init scripts. In systemd world, the limit
+# can only be changed by overriding LimitNOFILE for libvirtd.service. To do
+# that, just create a *.conf file in /etc/systemd/system/libvirtd.service.d/
+# (for example /etc/systemd/system/libvirtd.service.d/openfiles.conf) and
+# write the following two lines in it:
+# [Service]
+# LimitNOFILE=2048
+#
#LIBVIRTD_NOFILES_LIMIT=2048
--
1.9.0
[libvirt] [PATCH V2 00/13] libxl: add basic support for migration
by Jim Fehlig
V2 of
https://www.redhat.com/archives/libvir-list/2014-March/msg00156.html
New in this version: not much. I rebased on the hostdev passthrough
changes and added a few 'begin phase' checks to fail early if the
domain is not migratable. See 13/13 for details.
Based on an earlier patch from Chunyan Liu
https://www.redhat.com/archives/libvir-list/2013-September/msg00667.html
This patch series adds basic migration support to the libxl driver.
Follow-up patches can improve pre-migration checks and add support for
additional migration flags.
Patches 1-12 are almost exclusively code motion, moving functions from
the main driver module into the libxl_domain and libxl_conf modules.
Patch 13 contains the actual migration impl.
Jim Fehlig (13):
libxl: move libxlDomainEventQueue to libxl_domain
libxl: move libxlDomainManagedSavePath to libxl_domain
libxl: move libxlSaveImageOpen to libxl_domain
libxl: move libxlVmCleanup{,Job} to libxl_domain
libxl: move libxlDomEventsRegister to libxl_domain
libxl: move libxlDomainAutoCoreDump to libxl_domain
libxl: move libxlDoNodeGetInfo to libxl_conf
libxl: move libxlDomainSetVcpuAffinities to libxl_domain
libxl: move libxlFreeMem to libxl_domain
libxl: move libxlVmStart to libxl_domain
libxl: include a pointer to the driver in libxlDomainObjPrivate
libxl: move domain event handler to libxl_domain
libxl: add migration support
po/POTFILES.in | 1 +
src/Makefile.am | 3 +-
src/libxl/libxl_conf.c | 36 ++
src/libxl/libxl_conf.h | 10 +
src/libxl/libxl_domain.c | 705 +++++++++++++++++++++++++++++++
src/libxl/libxl_domain.h | 51 ++-
src/libxl/libxl_driver.c | 988 +++++++++++---------------------------------
src/libxl/libxl_migration.c | 598 +++++++++++++++++++++++++++
src/libxl/libxl_migration.h | 78 ++++
9 files changed, 1716 insertions(+), 754 deletions(-)
create mode 100644 src/libxl/libxl_migration.c
create mode 100644 src/libxl/libxl_migration.h
--
1.8.1.4
[libvirt] [Question] Starting VMs concurrently fails
by Wangyufei (James)
Hello,
When I start multiple VMs concurrently and the cgroup directory named
'machine' doesn't exist yet, there's a chance that a VM fails to start.
The errors reported are:
1. Unable to initialize /machine cgroup: File exists
2. Unable to create cgroup for sit_vm_16: No such file or directory
So I analyze it and find that:
If thread A and thread B start VMs at the same time, there's a chance that thread A
fails to create the directory while thread B succeeds. Thread A then cleans up the machine
directory, while thread B goes on to do something in the machine directory that thread A
just removed. In the end thread B fails too, because it can't find the directory.
Thread A reports error 1 and thread B reports error 2.
The true reason is that the check for directory existence and the creation of the
directory are not atomic. At first I wanted to find a lock to fix it, but failed. Then I applied a patch that
lets EEXIST through to fix it.
Guys, any better idea?
My best regards
WangYufei
[libvirt] Why AC_PATH_PROG is used to detect the location of run-time external programs
by Howard Tsai
Hello,
I am wondering why AC_PATH_PROG is used to detect the location of run-time
external programs (such as dnsmasq, ovs-vsctl, etc.) and hardcode those
locations in the binary. It seems wrong to me.
At compile time, the configure script should only determine whether the
resulting binary should support the use of these external programs. The
actual location of these programs should be determined at run time, either
from libvirtd.conf or via $PATH.
I haven't looked into how to fix it.
The problem I ran into earlier is this: on my dev box, I have the Open vSwitch
utility ovs-vsctl installed locally in /usr/local/bin. Therefore, at
build time, libvirt detected that location and hardcoded
'/usr/local/bin/ovs-vsctl' into libvirtd. When I packaged libvirt and installed
it on my test box, it couldn't find ovs-vsctl, since ovs-vsctl is installed at
/usr/bin/ovs-vsctl on the test box, so it failed.
Thanks,
Howard
[libvirt] [PATCH] qemu: Fix seamless SPICE migration
by Martin Kletzander
Since the wait is done during migration (still inside
QEMU_ASYNC_JOB_MIGRATION_OUT), the code should enter the monitor as such
in order to prohibit all other jobs from interfering in the meantime.
This patch fixes bug #1009886, in which qemuDomainGetBlockInfo was
waiting on the monitor condition; after GetSpiceMigrationStatus
mangled its internal data, the daemon crashed.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1009886
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
---
src/qemu/qemu_migration.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index d7b89fc..3a1aab7 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -1595,7 +1595,10 @@ qemuMigrationWaitForSpice(virQEMUDriverPtr driver,
/* Poll every 50ms for progress & to allow cancellation */
struct timespec ts = { .tv_sec = 0, .tv_nsec = 50 * 1000 * 1000ull };
- qemuDomainObjEnterMonitor(driver, vm);
+ if (qemuDomainObjEnterMonitorAsync(driver, vm,
+ QEMU_ASYNC_JOB_MIGRATION_OUT) < 0)
+ return -1;
+
if (qemuMonitorGetSpiceMigrationStatus(priv->mon,
&spice_migrated) < 0) {
qemuDomainObjExitMonitor(driver, vm);
--
1.8.3.2