[libvirt] RHBZ 1013045: Crash on xen domain startup: *** Error in `/usr/sbin/libvirtd': free(): invalid next size (fast): 0x00007f82c8003210 ***
by Jeremy Fitzhardinge
Hi all,
I posted this bug (https://bugzilla.redhat.com/show_bug.cgi?id=1013045)
to the Red Hat Bugzilla a while ago, and the only response so far has
been a suggestion to post a note about it to this list.
A summary is below, but it looks like a fairly clear use-after-free or
similar heap corruption. The full details are attached to the bug report.
Thanks,
J
--
Description of problem:
When starting a Xen domain with libvirt + libxl, libvirtd crashes after
creating the domain. The domain is left in a paused state, and works
fine if manually unpaused with xl unpause. virt-manager never shows the
domain as running.
[...]
Steps to Reproduce:
1. Open virt-manager
2. Connect to localhost (xen)
3. Start a domain
Actual results:
The domain is created in a paused state, and virt-manager shows errors
about losing the connection to the daemon. The logs show that libvirtd
crashed.
Expected results:
The domain is created and starts running.
Additional info:
Sep 27 09:08:30 saboo libvirtd[24880]: *** Error in `/usr/sbin/libvirtd': free(): invalid next size (fast): 0x00007f82c8003210 ***
Sep 27 09:08:30 saboo libvirtd[24880]: ======= Backtrace: =========
Sep 27 09:08:30 saboo libvirtd[24880]: /lib64/libc.so.6(+0x365b27d0e8)[0x7f82f5a7a0e8]
Sep 27 09:08:30 saboo libvirtd[24880]: /lib64/libvirt.so.0(virFree+0x1a)[0x7f82f8f07d5a]
Sep 27 09:08:30 saboo libvirtd[24880]: /usr/lib64/libvirt/connection-driver/libvirt_driver_libxl.so(+0x14b6c)[0x7f82e032bb6c]
Sep 27 09:08:30 saboo libvirtd[24880]: /usr/lib64/libvirt/connection-driver/libvirt_driver_libxl.so(+0x154d4)[0x7f82e032c4d4]
Sep 27 09:08:30 saboo libvirtd[24880]: /lib64/libvirt.so.0(virDomainCreate+0xf7)[0x7f82f8fdb6b7]
Sep 27 09:08:30 saboo libvirtd[24880]: /usr/sbin/libvirtd(+0x350c7)[0x7f82f9a1a0c7]
Sep 27 09:08:30 saboo libvirtd[24880]: /lib64/libvirt.so.0(virNetServerProgramDispatch+0x3ba)[0x7f82f90314aa]
Sep 27 09:08:30 saboo libvirtd[24880]: /lib64/libvirt.so.0(+0x3a33f822d8)[0x7f82f902c2d8]
Sep 27 09:08:30 saboo libvirtd[24880]: /lib64/libvirt.so.0(+0x3a33ea0c15)[0x7f82f8f4ac15]
Sep 27 09:08:30 saboo libvirtd[24880]: /lib64/libvirt.so.0(+0x3a33ea0691)[0x7f82f8f4a691]
Sep 27 09:08:30 saboo libvirtd[24880]: /lib64/libpthread.so.0(+0x365ba07c53)[0x7f82f61ccc53]
Sep 27 09:08:30 saboo libvirtd[24880]: /lib64/libc.so.6(clone+0x6d)[0x7f82f5af2d3d]
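For context on the error message: glibc prints "free(): invalid next size
(fast)" when the size field of the chunk adjacent to a small, fastbin-sized
allocation has been clobbered, i.e. the heap is already corrupted before
free() is ever called -- typically by a buffer overflow. A minimal
illustration of how that abort arises (not the libvirt bug itself; the exact
message assumes a glibc of that era):

/* Illustration only: provoke "free(): invalid next size (fast)".
 * Writing past the end of a fastbin-sized chunk smashes the next
 * chunk's header, which glibc's free() detects and aborts on. */
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char *p = malloc(24);   /* fastbin-sized chunk */
    memset(p, 'A', 64);     /* heap overflow into the next chunk header */
    free(p);                /* glibc aborts: free(): invalid next size (fast) */
    return 0;
}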
[libvirt] [PATCH] fix api changes in xen restore
by Bamvor Jian Zhang
The recent Xen commit 7051d5c8 changed the API of
libxl_domain_create_restore():
Author: Andrew Cooper <andrew.cooper3(a)citrix.com>
Date: Thu Oct 10 12:23:10 2013 +0100
tools/migrate: Fix regression when migrating from older version of Xen
Use the LIBXL_HAVE_DOMAIN_CREATE_RESTORE_PARAMS macro from libxl.h so
that libvirt compiles against both old and new Xen.
The checkpointed_stream parameter will be useful once the libvirt libxl
driver supports migration; for now, set it to zero.
Signed-off-by: Bamvor Jian Zhang <bjzhang(a)suse.com>
---
src/libxl/libxl_driver.c | 11 +++++++++++
1 file changed, 11 insertions(+)
diff --git a/src/libxl/libxl_driver.c b/src/libxl/libxl_driver.c
index 4928695..104ad31 100644
--- a/src/libxl/libxl_driver.c
+++ b/src/libxl/libxl_driver.c
@@ -555,6 +555,9 @@ libxlVmStart(libxlDriverPrivatePtr driver, virDomainObjPtr vm,
     int managed_save_fd = -1;
     libxlDomainObjPrivatePtr priv = vm->privateData;
     libxlDriverConfigPtr cfg = libxlDriverConfigGet(driver);
+#ifdef LIBXL_HAVE_DOMAIN_CREATE_RESTORE_PARAMS
+    libxl_domain_restore_params params;
+#endif

     if (libxlDomainObjPrivateInitCtx(vm) < 0)
         goto error;
@@ -619,8 +622,16 @@ libxlVmStart(libxlDriverPrivatePtr driver, virDomainObjPtr vm,
         ret = libxl_domain_create_new(priv->ctx, &d_config,
                                       &domid, NULL, NULL);
     else
+#ifdef LIBXL_HAVE_DOMAIN_CREATE_RESTORE_PARAMS
+    {
+        params.checkpointed_stream = 0;
+        ret = libxl_domain_create_restore(priv->ctx, &d_config, &domid,
+                                          restore_fd, &params, NULL, NULL);
+    }
+#else
         ret = libxl_domain_create_restore(priv->ctx, &d_config, &domid,
                                           restore_fd, NULL, NULL);
+#endif

     if (ret) {
         if (restore_fd < 0)
--
1.8.1.4
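For reference, the signature change that LIBXL_HAVE_DOMAIN_CREATE_RESTORE_PARAMS
advertises looks roughly like this (paraphrased from libxl.h, not quoted
verbatim; argument names and qualifiers may differ):

/* Paraphrased from libxl.h -- not verbatim. */

/* Old signature (before Xen commit 7051d5c8): */
int libxl_domain_create_restore(libxl_ctx *ctx, libxl_domain_config *d_config,
                                uint32_t *domid, int restore_fd,
                                const libxl_asyncop_how *ao_how,
                                const libxl_asyncprogress_how *aop_console_how);

/* New signature (LIBXL_HAVE_DOMAIN_CREATE_RESTORE_PARAMS defined): */
int libxl_domain_create_restore(libxl_ctx *ctx, libxl_domain_config *d_config,
                                uint32_t *domid, int restore_fd,
                                const libxl_domain_restore_params *params,
                                const libxl_asyncop_how *ao_how,
                                const libxl_asyncprogress_how *aop_console_how);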
[libvirt] [PATCH] Use a port from the migration range for NBD as well
by Ján Tomko
Instead of using a port from the remote display range.
---
src/qemu/qemu_migration.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index cb59620..4f35a7a 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -1115,7 +1115,7 @@ qemuMigrationStartNBDServer(virQEMUDriverPtr driver,
         goto cleanup;

     if (!port &&
-        ((virPortAllocatorAcquire(driver->remotePorts, &port) < 0) ||
+        ((virPortAllocatorAcquire(driver->migrationPorts, &port) < 0) ||
          (qemuMonitorNBDServerStart(priv->mon, listenAddr, port) < 0))) {
         qemuDomainObjExitMonitor(driver, vm);
         goto cleanup;
--
1.8.1.5
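The qemu driver keeps a separate virPortAllocator instance per port range, so
the fix is a one-word change of which pool the NBD port is charged against. A
sketch of the acquire/release pattern, assuming the v1.1-era allocator API:

/* Sketch only -- assumes the v1.1-era virPortAllocator signatures. */
unsigned short port = 0;

if (virPortAllocatorAcquire(driver->migrationPorts, &port) < 0)
    goto cleanup;              /* migration port range exhausted */

/* ... start the NBD server listening on 'port' ... */

/* once the migration completes or fails: */
if (virPortAllocatorRelease(driver->migrationPorts, port) < 0)
    VIR_WARN("failed to release migration port %d", port);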
[libvirt] [PATCH] Fix race in starting transient VMs
by Daniel P. Berrange
From: "Daniel P. Berrange" <berrange(a)redhat.com>
When starting a transient VM the first thing done is to check
for duplicates. The check looks if there are any running VMs
with the matching name/uuid. It explicitly allows there to
be inactive VMs, so that a persistent VM can be temporarily
booted with a different config.
There is a race condition, however, where 2 or more clients
try to create the same transient VM. The first client will
cause a virDomainObjPtr to be added to the domain list, and
it is inactive at this stage. The second client may then
come along and see this inactive VM, and mistake it for a
persistent VM.
If the first client fails to start its transient guest for any
reason, it will remove the virDomainObjPtr from the list. The second
client is now holding a virDomainObjPtr that it can try to boot, but
that libvirt no longer has any record of. The result can be a running
QEMU process that is orphaned. It was also possible for the
virDomainObjPtr to be completely freed, which would cause libvirtd to
crash in some scenarios.
The fix is to only allow an existing inactive VM if it is
marked as persistent.
Signed-off-by: Daniel P. Berrange <berrange(a)redhat.com>
---
src/conf/domain_conf.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index 51c4e29..454fbfe 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -2171,6 +2171,12 @@ virDomainObjListAddLocked(virDomainObjListPtr doms,
                            vm->def->name);
             goto error;
         }
+        if (!vm->persistent) {
+            virReportError(VIR_ERR_OPERATION_INVALID,
+                           _("domain is being started as '%s'"),
+                           vm->def->name);
+            goto error;
+        }
     }

     virDomainObjAssignDef(vm,
virDomainObjAssignDef(vm,
--
1.8.3.1
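In outline, the duplicate check with the new test reads like this (a
paraphrase of virDomainObjListAddLocked(), not the verbatim source; the hash
lookup is condensed):

/* Paraphrased sketch of the duplicate check, post-fix. */
if ((vm = virHashLookup(doms->objs, uuidstr))) {
    if (virDomainObjIsActive(vm)) {
        /* pre-existing check: a running duplicate is always an error */
        goto error;
    }
    if (!vm->persistent) {
        /* new check: an inactive, non-persistent entry means another
         * client is part-way through starting this transient domain */
        virReportError(VIR_ERR_OPERATION_INVALID,
                       _("domain is being started as '%s'"),
                       vm->def->name);
        goto error;
    }
    /* inactive + persistent: fine -- a persistent VM may be booted
     * temporarily with a different transient config */
}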
[libvirt] [PATCH v2] xenapi: fix the coding style in xenapi_driver.c
by Hongwei Bi
Fix the coding style of single-line if statements.
Signed-off-by: Hongwei Bi <hwbi2008(a)gmail.com>
---
src/xenapi/xenapi_driver.c | 63 ++++++++++++++++++++++++++++++----------------
1 file changed, 42 insertions(+), 21 deletions(-)
diff --git a/src/xenapi/xenapi_driver.c b/src/xenapi/xenapi_driver.c
index 4b522c0..c5b8d8f 100644
--- a/src/xenapi/xenapi_driver.c
+++ b/src/xenapi/xenapi_driver.c
@@ -437,7 +437,8 @@ xenapiConnectGetCapabilities(virConnectPtr conn)
virCapsPtr caps = ((struct _xenapiPrivate *)(conn->privateData))->caps;
if (caps) {
char *xml = virCapabilitiesFormatXML(caps);
- if (!xml) goto cleanup;
+ if (!xml)
+ goto cleanup;
return xml;
}
cleanup:
@@ -704,7 +705,8 @@ xenapiDomainLookupByName(virConnectPtr conn,
}
}
}
- if (vms) xen_vm_set_free(vms);
+ if (vms)
+ xen_vm_set_free(vms);
xenapiSessionErrorHandler(conn, VIR_ERR_NO_DOMAIN, NULL);
return NULL;
}
@@ -739,7 +741,8 @@ xenapiDomainSuspend(virDomainPtr dom)
return 0;
}
}
- if (vms) xen_vm_set_free(vms);
+ if (vms)
+ xen_vm_set_free(vms);
xenapiSessionErrorHandler(dom->conn, VIR_ERR_NO_DOMAIN, NULL);
return -1;
}
@@ -774,7 +777,8 @@ xenapiDomainResume(virDomainPtr dom)
return 0;
}
}
- if (vms) xen_vm_set_free(vms);
+ if (vms)
+ xen_vm_set_free(vms);
xenapiSessionErrorHandler(dom->conn, VIR_ERR_NO_DOMAIN, NULL);
return -1;
}
@@ -812,7 +816,8 @@ xenapiDomainShutdownFlags(virDomainPtr dom, unsigned int flags)
return 0;
}
}
- if (vms) xen_vm_set_free(vms);
+ if (vms)
+ xen_vm_set_free(vms);
xenapiSessionErrorHandler(dom->conn, VIR_ERR_NO_DOMAIN, NULL);
return -1;
}
@@ -855,7 +860,8 @@ xenapiDomainReboot(virDomainPtr dom, unsigned int flags)
xen_vm_set_free(vms);
return 0;
}
- if (vms) xen_vm_set_free(vms);
+ if (vms)
+ xen_vm_set_free(vms);
xenapiSessionErrorHandler(dom->conn, VIR_ERR_NO_DOMAIN, NULL);
return -1;
}
@@ -899,7 +905,8 @@ xenapiDomainDestroyFlags(virDomainPtr dom,
dom->id = -1;
return 0;
}
- if (vms) xen_vm_set_free(vms);
+ if (vms)
+ xen_vm_set_free(vms);
xenapiSessionErrorHandler(dom->conn, VIR_ERR_NO_DOMAIN, NULL);
return -1;
}
@@ -949,7 +956,8 @@ xenapiDomainGetOSType(virDomainPtr dom)
xenapiSessionErrorHandler(dom->conn, VIR_ERR_NO_DOMAIN, NULL);
cleanup:
- if (vms) xen_vm_set_free(vms);
+ if (vms)
+ xen_vm_set_free(vms);
return ostype;
}
/*
@@ -977,7 +985,8 @@ xenapiDomainGetMaxMemory(virDomainPtr dom)
xen_vm_set_free(vms);
return mem_static_max / 1024;
} else {
- if (vms) xen_vm_set_free(vms);
+ if (vms)
+ xen_vm_set_free(vms);
xenapiSessionErrorHandler(dom->conn, VIR_ERR_NO_DOMAIN, NULL);
return 0;
}
@@ -1011,7 +1020,8 @@ xenapiDomainSetMaxMemory(virDomainPtr dom, unsigned long memory)
}
xen_vm_set_free(vms);
} else {
- if (vms) xen_vm_set_free(vms);
+ if (vms)
+ xen_vm_set_free(vms);
xenapiSessionErrorHandler(dom->conn, VIR_ERR_NO_DOMAIN, NULL);
return -1;
}
@@ -1057,7 +1067,8 @@ xenapiDomainGetInfo(virDomainPtr dom, virDomainInfoPtr info)
xen_vm_set_free(vms);
return 0;
}
- if (vms) xen_vm_set_free(vms);
+ if (vms)
+ xen_vm_set_free(vms);
xenapiSessionErrorHandler(dom->conn, VIR_ERR_NO_DOMAIN, NULL);
return -1;
}
@@ -1145,7 +1156,8 @@ xenapiDomainSetVcpusFlags(virDomainPtr dom, unsigned int nvcpus,
return 0;
}
}
- if (vms) xen_vm_set_free(vms);
+ if (vms)
+ xen_vm_set_free(vms);
xenapiSessionErrorHandler(dom->conn, VIR_ERR_NO_DOMAIN, NULL);
return -1;
}
@@ -1198,7 +1210,8 @@ xenapiDomainPinVcpu(virDomainPtr dom, unsigned int vcpu ATTRIBUTE_UNUSED,
return -1;
}
}
- if (vms) xen_vm_set_free(vms);
+ if (vms)
+ xen_vm_set_free(vms);
xenapiSessionErrorHandler(dom->conn, VIR_ERR_INTERNAL_ERROR, NULL);
return -1;
}
@@ -1319,7 +1332,8 @@ xenapiDomainGetVcpusFlags(virDomainPtr dom, unsigned int flags)
xen_vm_set_free(vms);
return (int)maxvcpu;
}
- if (vms) xen_vm_set_free(vms);
+ if (vms)
+ xen_vm_set_free(vms);
xenapiSessionErrorHandler(dom->conn, VIR_ERR_INTERNAL_ERROR, NULL);
return -1;
}
@@ -1360,7 +1374,8 @@ xenapiDomainGetXMLDesc(virDomainPtr dom, unsigned int flags)
/* Flags checked by virDomainDefFormat */
- if (!xen_vm_get_by_name_label(session, &vms, dom->name)) return NULL;
+ if (!xen_vm_get_by_name_label(session, &vms, dom->name))
+ return NULL;
if (vms->size != 1) {
xenapiSessionErrorHandler(dom->conn, VIR_ERR_INTERNAL_ERROR,
_("Domain name is not unique"));
@@ -1524,7 +1539,8 @@ xenapiDomainGetXMLDesc(virDomainPtr dom, unsigned int flags)
}
xen_vif_set_free(vif_set);
}
- if (vms) xen_vm_set_free(vms);
+ if (vms)
+ xen_vm_set_free(vms);
xml = virDomainDefFormat(defPtr, flags);
virDomainDefFree(defPtr);
return xml;
@@ -1654,7 +1670,8 @@ xenapiDomainCreateWithFlags(virDomainPtr dom, unsigned int flags)
xen_vm_set_free(vms);
} else {
- if (vms) xen_vm_set_free(vms);
+ if (vms)
+ xen_vm_set_free(vms);
xenapiSessionErrorHandler(dom->conn, VIR_ERR_NO_DOMAIN, NULL);
return -1;
}
@@ -1748,7 +1765,8 @@ xenapiDomainUndefineFlags(virDomainPtr dom, unsigned int flags)
xen_vm_set_free(vms);
return 0;
}
- if (vms) xen_vm_set_free(vms);
+ if (vms)
+ xen_vm_set_free(vms);
xenapiSessionErrorHandler(dom->conn, VIR_ERR_NO_DOMAIN, NULL);
return -1;
}
@@ -1800,10 +1818,12 @@ xenapiDomainGetAutostart(virDomainPtr dom, int *autostart)
}
xen_vm_set_free(vms);
xen_string_string_map_free(result);
- if (flag == 0) return -1;
+ if (flag == 0)
+ return -1;
return 0;
}
- if (vms) xen_vm_set_free(vms);
+ if (vms)
+ xen_vm_set_free(vms);
xenapiSessionErrorHandler(dom->conn, VIR_ERR_NO_DOMAIN, NULL);
return -1;
}
@@ -1842,7 +1862,8 @@ xenapiDomainSetAutostart(virDomainPtr dom, int autostart)
xen_vm_set_free(vms);
return 0;
}
- if (vms) xen_vm_set_free(vms);
+ if (vms)
+ xen_vm_set_free(vms);
xenapiSessionErrorHandler(dom->conn, VIR_ERR_NO_DOMAIN, NULL);
return -1;
}
--
1.8.1.2
[libvirt] [PATCH 0/3] Test our PCI device handling functions
by Michal Privoznik
*** BLURB HERE ***
Michal Privoznik (3):
tests: Introduce virpcitest
virpcitest: Test virPCIDeviceDetach
virpcitest: Introduce testVirPCIDeviceReattach
.gitignore | 1 +
cfg.mk | 4 +-
tests/Makefile.am | 21 +-
tests/virpcimock.c | 872 +++++++++++++++++++++++++++++++++++++++++++++++++++++
tests/virpcitest.c | 184 +++++++++++
5 files changed, 1078 insertions(+), 4 deletions(-)
create mode 100644 tests/virpcimock.c
create mode 100644 tests/virpcitest.c
--
1.8.1.5
[libvirt] [PATCH] storage: implement rudimentary glusterfs pool refresh
by Eric Blake
Actually put gfapi to use, by allowing the creation of a gluster
pool. Right now, all volumes are treated as raw; further patches
will allow peering into files to allow for qcow2 files and backing
chains, and reporting proper volume allocation.
I've reported a couple of glusterfs bugs; if we were to require a
minimum of (not-yet-released) glusterfs 3.5, we could use the new
glfs_readdir [1] and not worry about the bogus return value of
glfs_fini [2], but for now I'm testing with Fedora 19's glusterfs
3.4.1.
[1] http://lists.gnu.org/archive/html/gluster-devel/2013-10/msg00085.html
[2] http://lists.gnu.org/archive/html/gluster-devel/2013-10/msg00086.html
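As background for the diff below, the gfapi connection lifecycle the new
helpers wrap is small. A standalone sketch (the volume name, host, and port
are placeholders, not values from the patch):

/* Minimal gfapi session; error handling trimmed, names are placeholders. */
#include <glusterfs/api/glfs.h>
#include <stdio.h>

int main(void)
{
    glfs_t *fs = glfs_new("myvol");    /* pool->def->source.name in the patch */
    if (!fs)
        return 1;
    if (glfs_set_volfile_server(fs, "tcp", "gluster.example.com", 24007) < 0 ||
        glfs_init(fs) < 0) {
        perror("connecting to gluster");
        glfs_fini(fs);
        return 1;
    }
    /* ... glfs_opendir("."), glfs_readdir_r(), glfs_statvfs() as below ... */
    if (glfs_fini(fs) < 0)   /* 3.4.1 may fail here spuriously; see [2] */
        fprintf(stderr, "glfs_fini reported failure\n");
    return 0;
}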
* src/storage/storage_backend_gluster.c
(virStorageBackendGlusterRefreshPool): Initial implementation.
(virStorageBackendGlusterOpen, virStorageBackendGlusterClose): New
helper functions.
Signed-off-by: Eric Blake <eblake(a)redhat.com>
---
Depends on these pre-req patches:
https://www.redhat.com/archives/libvir-list/2013-October/msg01266.html
https://www.redhat.com/archives/libvir-list/2013-October/msg00913.html
My next task - figuring out the use of glfs_open() to read metadata
from a file and determine backing chains.
src/storage/storage_backend_gluster.c | 138 ++++++++++++++++++++++++++++++++--
1 file changed, 133 insertions(+), 5 deletions(-)
diff --git a/src/storage/storage_backend_gluster.c b/src/storage/storage_backend_gluster.c
index 2863c73..b0b6ce6 100644
--- a/src/storage/storage_backend_gluster.c
+++ b/src/storage/storage_backend_gluster.c
@@ -23,20 +23,148 @@
#include <glusterfs/api/glfs.h>
-#include "virerror.h"
#include "storage_backend_gluster.h"
#include "storage_conf.h"
+#include "viralloc.h"
+#include "virerror.h"
+#include "virlog.h"
+#include "virstoragefile.h"
+#include "virstring.h"
#define VIR_FROM_THIS VIR_FROM_STORAGE
+struct _virStorageBackendGlusterState {
+ glfs_t *vol;
+};
+
+typedef struct _virStorageBackendGlusterState virStorageBackendGlusterState;
+typedef virStorageBackendGlusterState *virStorageBackendGlusterStatePtr;
+
+static void
+virStorageBackendGlusterClose(virStorageBackendGlusterStatePtr state)
+{
+ if (!state || !state->vol)
+ return;
+ /* Yuck - glusterfs-api-3.4.1 appears to always return -1 for
+ * glfs_fini, with errno containing random data, so there's no way
+ * to tell if it succeeded. 3.4.2 is supposed to fix this.*/
+ if (glfs_fini(state->vol) < 0)
+ VIR_DEBUG("shutdown of gluster failed with errno %d", errno);
+}
+
+static virStorageBackendGlusterStatePtr
+virStorageBackendGlusterOpen(virStoragePoolObjPtr pool)
+{
+ virStorageBackendGlusterStatePtr ret = NULL;
+
+ if (VIR_ALLOC(ret) < 0)
+ return NULL;
+
+ if (!(ret->vol = glfs_new(pool->def->source.name))) {
+ virReportOOMError();
+ goto error;
+ }
+
+ /* FIXME: allow alternate transport in the pool xml */
+ if (glfs_set_volfile_server(ret->vol, "tcp",
+ pool->def->source.hosts[0].name,
+ pool->def->source.hosts[0].port) < 0 ||
+ glfs_init(ret->vol) < 0) {
+ virReportSystemError(errno, _("failed to connect to gluster %s/%s"),
+ pool->def->source.hosts[0].name,
+ pool->def->name);
+ goto error;
+ }
+
+ return ret;
+
+error:
+ virStorageBackendGlusterClose(ret);
+ return NULL;
+}
static int
virStorageBackendGlusterRefreshPool(virConnectPtr conn ATTRIBUTE_UNUSED,
- virStoragePoolObjPtr pool ATTRIBUTE_UNUSED)
+ virStoragePoolObjPtr pool)
{
- virReportError(VIR_ERR_NO_SUPPORT, "%s",
- _("gluster pool type not fully supported yet"));
- return -1;
+ int ret = -1;
+ virStorageBackendGlusterStatePtr state = NULL;
+ struct {
+ struct dirent ent;
+ /* See comment below about readdir_r needing padding */
+ char padding[MAX(1, 256 - (int) (sizeof(struct dirent)
+ - offsetof(struct dirent, d_name)))];
+ } de;
+ struct dirent *ent;
+ glfs_fd_t *dir = NULL;
+ virStorageVolDefPtr vol = NULL;
+ struct statvfs sb;
+
+ if (!(state = virStorageBackendGlusterOpen(pool)))
+ goto cleanup;
+
+ /* Why oh why did glfs 3.4 decide to expose only readdir_r rather
+ * than readdir? POSIX admits that readdir_r is inherently a
+ * flawed design, because systems are not required to define
+ * NAME_MAX: http://austingroupbugs.net/view.php?id=696
+ * http://womble.decadent.org.uk/readdir_r-advisory.html
+ *
+ * Fortunately, gluster uses _only_ XFS file systems, and XFS has
+ * a known NAME_MAX of 255; so we are guaranteed that if we
+ * provide 256 bytes of tail padding, then we have enough space to
+ * avoid buffer overflow no matter whether the OS used d_name[],
+ * d_name[1], or d_name[256] in its 'struct dirent'.
+ * http://lists.gnu.org/archive/html/gluster-devel/2013-10/msg00083.html
+ */
+
+ if (!(dir = glfs_opendir(state->vol, "."))) {
+ virReportSystemError(errno, _("cannot open path '%s'"),
+ pool->def->name);
+ goto cleanup;
+ }
+ while (!(errno = glfs_readdir_r(dir, &de.ent, &ent)) && ent) {
+ if (STREQ(ent->d_name, ".") || STREQ(ent->d_name, ".."))
+ continue;
+ if (VIR_ALLOC(vol) < 0 ||
+ VIR_STRDUP(vol->name, ent->d_name) < 0)
+ goto cleanup;
+ /* FIXME - must open files to determine if they are non-raw */
+ vol->type = VIR_STORAGE_VOL_NETWORK;
+ vol->target.format = VIR_STORAGE_FILE_RAW;
+ if (virAsprintf(&vol->key, "%s/%s",
+ pool->def->name, vol->name) < 0)
+ goto cleanup;
+ if (VIR_APPEND_ELEMENT(pool->volumes.objs, pool->volumes.count,
+ vol) < 0)
+ goto cleanup;
+ }
+ if (errno) {
+ virReportSystemError(errno, _("failed to read directory '%s'"),
+ pool->def->name);
+ goto cleanup;
+ }
+
+ if (glfs_statvfs(state->vol, ".", &sb) < 0) {
+ virReportSystemError(errno, _("cannot statvfs path '%s'"),
+ pool->def->name);
+ goto cleanup;
+ }
+
+ pool->def->capacity = ((unsigned long long)sb.f_frsize *
+ (unsigned long long)sb.f_blocks);
+ pool->def->available = ((unsigned long long)sb.f_bfree *
+ (unsigned long long)sb.f_frsize);
+ pool->def->allocation = pool->def->capacity - pool->def->available;
+
+ ret = 0;
+cleanup:
+ if (dir)
+ glfs_closedir(dir);
+ virStorageVolDefFree(vol);
+ virStorageBackendGlusterClose(state);
+ if (ret < 0)
+ virStoragePoolObjClearVols(pool);
+ return ret;
}
virStorageBackend virStorageBackendGluster = {
--
1.8.3.1
[libvirt] [PATCH] network_conf.c: correct the value of the 'result' variable
by Hongwei Bi
The result variable in virNetworkDNSDefFormat() should be initialized
to -1 so that the error paths that jump to the out: label actually
return failure; it is then set to 0 only when formatting succeeds.
Signed-off-by: Hongwei Bi <hwbi2008(a)gmail.com>
---
src/conf/network_conf.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/src/conf/network_conf.c b/src/conf/network_conf.c
index 447eca4..8ab4e96 100644
--- a/src/conf/network_conf.c
+++ b/src/conf/network_conf.c
@@ -2295,7 +2295,7 @@ static int
 virNetworkDNSDefFormat(virBufferPtr buf,
                        const virNetworkDNSDef *def)
 {
-    int result = 0;
+    int result = -1;
     size_t i, j;

     if (!(def->forwardPlainNames || def->forwarders || def->nhosts ||
@@ -2363,6 +2363,8 @@ virNetworkDNSDefFormat(virBufferPtr buf,
     }
     virBufferAdjustIndent(buf, -2);
     virBufferAddLit(buf, "</dns>\n");
+
+    result = 0;
 out:
     return result;
 }
--
1.8.1.2
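The underlying pattern is the usual goto-cleanup idiom: assume failure up
front, jump to a single exit label on any error, and flip the result to
success only once every step has completed. In general form (step1() and
step2() are hypothetical stand-ins):

/* Illustrative form of the idiom; step1()/step2() are hypothetical. */
static int
formatSomething(void)
{
    int result = -1;           /* pessimistic default: any early 'goto out'
                                * now reports failure automatically */

    if (step1() < 0)
        goto out;
    if (step2() < 0)
        goto out;

    result = 0;                /* reached only when every step succeeded */
 out:
    return result;
}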