[libvirt] [PATCH] virsh: Move --completed from resume to domjobinfo
by Jiri Denemark
Because of similar contexts, the git rebase I did just before pushing
the series which added the --completed option patched the wrong command.
Signed-off-by: Jiri Denemark <jdenemar(a)redhat.com>
---
Pushed as trivial.
tools/virsh-domain.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/tools/virsh-domain.c b/tools/virsh-domain.c
index cc1e554..30b3fa9 100644
--- a/tools/virsh-domain.c
+++ b/tools/virsh-domain.c
@@ -5114,10 +5114,6 @@ static const vshCmdOptDef opts_resume[] = {
.flags = VSH_OFLAG_REQ,
.help = N_("domain name, id or uuid")
},
- {.name = "completed",
- .type = VSH_OT_BOOL,
- .help = N_("return statistics of a recently completed job")
- },
{.name = NULL}
};
@@ -5377,6 +5373,10 @@ static const vshCmdOptDef opts_domjobinfo[] = {
.flags = VSH_OFLAG_REQ,
.help = N_("domain name, id or uuid")
},
+ {.name = "completed",
+ .type = VSH_OT_BOOL,
+ .help = N_("return statistics of a recently completed job")
+ },
{.name = NULL}
};
--
2.1.0
10 years, 2 months
[libvirt] [PATCHv3 0/8] bulk stats: QEMU implementation
by Francesco Romani
This patchset enhances the QEMU support
for the new bulk stats API to include
equivalents of these APIs:
virDomainBlockInfo
virDomainGetInfo - for balloon stats
virDomainGetCPUStats
virDomainBlockStatsFlags
virDomainInterfaceStats
virDomainGetVcpusFlags
virDomainGetVcpus
This subset of APIs is the one oVirt relies on.
A scale/stress test in an oVirt test environment is in progress.
changes in v3: more polishing and fixes after the first review
- addressed Eric's comments.
- squashed the patches which extract helpers into the patches which
use them.
- changed the gathering strategy: the code now tries to reap as much
information as possible instead of giving up and bailing out with an
error. Only critical errors cause the bulk stats call to fail.
- moved away from the transfer semantics. I find it error-prone
and not flexible enough, so I'd like to avoid it as much as possible.
- rearranged the helpers to have one single QEMU query job with
many monitor jobs nested inside.
- fixed docs.
- implemented the missing virsh domstats bits.
changes in v2: polishing and optimizations.
- incorporated feedback from Li Wei (thanks).
- added documentation.
- optimized the block group to gather all the information with just
one call to the QEMU monitor.
- stripped to the bare bones and merged the 'block info' group into the
'block' group - oVirt actually needs just one stat from there.
- reorganized the keys to be more consistent and shorter.
The patchset is organized as follows:
- the first patch enhances the internal stats gathering API
to accommodate the needs of the groups which extract information
using QEMU monitor jobs.
- the next 6 patches implement the bulk stats groups, refactoring
to extract internal helpers whenever it is feasible and convenient.
- the last patch enhances the virsh domstats command with options to
use the new bulk stats.
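The "reap as much information as possible" strategy described in the v3
changes can be sketched roughly as follows. This is an illustrative,
self-contained model, not the actual libvirt code: all names and the
three toy collectors are invented for the example; the point is only
that a soft collector failure skips one group instead of aborting the
whole bulk-stats call.

```c
#include <assert.h>
#include <stddef.h>

/* Each stats group has a collector that may fail; only a critical
 * failure would abort the whole call (none occur in this sketch). */
typedef int (*statsCollector)(int *out);   /* 0 = ok, -1 = soft error */

static int collectBalloon(int *out) { *out = 512; return 0; }
static int collectVcpu(int *out)    { (void)out; return -1; } /* soft failure */
static int collectBlock(int *out)   { *out = 3;   return 0; }

/* Returns the number of groups successfully collected. */
static int
collectAllStats(int *balloon, int *vcpus, int *blocks)
{
    statsCollector collectors[] = { collectBalloon, collectVcpu, collectBlock };
    int *slots[] = { balloon, vcpus, blocks };
    int collected = 0;
    size_t i;

    for (i = 0; i < sizeof(collectors) / sizeof(collectors[0]); i++) {
        if (collectors[i](slots[i]) < 0)
            continue;        /* soft error: skip this group, keep going */
        collected++;
    }
    return collected;
}
```

Here the failing VCPU collector is simply skipped, so callers still get
the balloon and block groups.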
Francesco Romani (8):
qemu: bulk stats: extend internal collection API
qemu: bulk stats: implement CPU stats group
qemu: bulk stats: implement balloon group
qemu: bulk stats: implement VCPU group
qemu: bulk stats: implement interface group
qemu: bulk stats: implement block group
qemu: bulk stats: add block allocation information
virsh: add options to query bulk stats group
include/libvirt/libvirt.h.in | 5 +
src/libvirt.c | 61 +++++
src/qemu/qemu_driver.c | 577 +++++++++++++++++++++++++++++++++++++------
src/qemu/qemu_monitor.c | 22 ++
src/qemu/qemu_monitor.h | 21 ++
src/qemu/qemu_monitor_json.c | 211 +++++++++++-----
src/qemu/qemu_monitor_json.h | 4 +
src/qemu/qemu_monitor_text.c | 13 +
src/qemu/qemu_monitor_text.h | 4 +
tools/virsh-domain-monitor.c | 35 +++
tools/virsh.pod | 4 +-
11 files changed, 823 insertions(+), 134 deletions(-)
--
1.9.3
[libvirt] libvirt-python: memory leak after GetXMLDesc?
by Junichi Nomura
Hello,
I've observed a memory leak in a long-running Python program and
suspect a bug in libvirt-python.
libvirt-python contains auto-generated code like this:
libvirt_virDomainGetXMLDesc(...) {
...
LIBVIRT_BEGIN_ALLOW_THREADS;
c_retval = virDomainGetXMLDesc(domain, flags);
LIBVIRT_END_ALLOW_THREADS;
py_retval = libvirt_charPtrWrap((char *) c_retval);
return py_retval;
}
virDomainGetXMLDesc() expects the caller to free c_retval.
Though it used to be freed in libvirt_charPtrWrap(), commit bb3301ba
("Don't free passed in args in libvirt_charPtrWrap /
libvirt_charPtrSizeWrap") moved that responsibility to the caller.
So, it seems either GetXMLDesc should not depend on auto-generation or
the generator should be fixed.
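The ownership change can be modeled with a minimal, self-contained
sketch. The names here (getXMLDesc, wrapString, dupString) are
hypothetical stand-ins, not the real libvirt-python wrappers: the
returned C string is heap-allocated, the wrapper only copies it, so the
caller must free the original after wrapping.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Portable strdup stand-in. */
static char *
dupString(const char *s)
{
    char *copy = malloc(strlen(s) + 1);
    if (copy)
        strcpy(copy, s);
    return copy;
}

/* Stand-in for virDomainGetXMLDesc(): caller owns the result. */
static char *
getXMLDesc(void)
{
    return dupString("<domain/>");
}

/* Stand-in for libvirt_charPtrWrap() after commit bb3301ba:
 * copies its argument but does NOT take ownership of it. */
static char *
wrapString(const char *s)
{
    return s ? dupString(s) : NULL;
}

/* Leak-free caller: free the C string once it has been wrapped. */
static char *
getXMLDescWrapped(void)
{
    char *c_retval = getXMLDesc();
    char *py_retval = wrapString(c_retval);
    free(c_retval);              /* without this line, c_retval leaks */
    return py_retval;
}
```

Fixing the generator to emit the equivalent of that free() call after
each wrap would plug the leak for all auto-generated accessors.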
Any comments?
--
Jun'ichi Nomura, NEC Corporation
[libvirt] [PATCH 0/2] Couple of NVRAM fixes
by Michal Privoznik
Michal Privoznik (2):
nvram: Fix permissions
virDomainUndefineFlags: Allow NVRAM unlinking
include/libvirt/libvirt.h.in | 2 ++
libvirt.spec.in | 2 +-
src/qemu/qemu_driver.c | 19 ++++++++++++++++++-
src/security/security_selinux.c | 5 ++++-
tools/virsh-domain.c | 15 ++++++++++++---
tools/virsh.pod | 6 +++++-
6 files changed, 42 insertions(+), 7 deletions(-)
--
1.8.5.5
[libvirt] QEMU migration with non-shared storage
by Michael Chapman
Hello,
I am trying to understand libvirt's logic for checking whether migration
of a VM is safe, and how it determines which disks should be mirrored by
QEMU. My particular use case involves VMs that may have disks backed onto
LVM or onto Ceph RBD, or both.
As far as I can tell, the qemuMigrationIsSafe check is there to ensure
that all disks are readonly, or have cache=none, or their backends can
guarantee cache coherence. However, as far as I can tell, QEMU flushes
*all* block devices when it pauses a VM's CPUs (just before the final part
of migration, for instance), so I'm wondering why this check is needed. Is
there any possible situation for the source VM to be paused, for its block
devices to be flushed, and yet the destination VM can't see all completed
writes?
Why is RBD handled specially in this function? The current logic is
that an RBD-backed disk is safe to be migrated even if it's got caching
enabled, but I'm not sure how RBD is different from other backends in this
regard.
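The safety rule as described above (read-only, or cache=none, or a
backend assumed cache-coherent, with RBD as the special case) boils down
to a predicate like the following. This is a simplified illustrative
model with invented struct and enum names, not the actual
qemuMigrationIsSafe code:

```c
#include <assert.h>
#include <stdbool.h>

enum diskBackend { BACKEND_FILE, BACKEND_LVM, BACKEND_RBD };
enum diskCache   { CACHE_NONE, CACHE_WRITEBACK };

struct disk {
    bool readonly;
    enum diskCache cache;
    enum diskBackend backend;
};

/* A disk is considered migration-safe when it is read-only, uses
 * cache=none, or its backend (here, RBD) is treated as cache-coherent -
 * the special case the question above is about. */
static bool
diskIsMigrationSafe(const struct disk *d)
{
    if (d->readonly)
        return true;
    if (d->cache == CACHE_NONE)
        return true;
    if (d->backend == BACKEND_RBD)
        return true;
    return false;
}
```

Under this predicate an LVM disk with writeback caching is unsafe, while
an RBD disk with the same caching is deemed safe, which is exactly the
asymmetry being questioned.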
If VIR_MIGRATE_NON_SHARED_DISK or _INC is specified, should these safety
checks be relaxed? It seems to me that if any non-shared disk is going to
be *explicitly* copied from the source to the destination VM, then cache
coherence in the backend is irrelevant.
At the moment, the set of non-shared block devices copied by
VIR_MIGRATE_NON_SHARED_* differs depending on whether NBD is being used in
the migration:
- If NBD can't be used (e.g. with a tunnelled migration), then QEMU will
copy *all* non-readonly block devices;
- If NBD is being used, then QEMU will only mirror "shareable", "readonly"
or "sourceless" disks.
A problem arises with RBD disks that have caching enabled. According to
qemuMigrationIsSafe, these disks are "safe" to be migrated. However in
either the NBD or the non-NBD case, the RBD disk will be copied. This is
clearly not desirable. If RBD is a special case in qemuMigrationIsSafe,
does it also need to be a special case when configuring the NBD server?
Or, if an NBD server is not going to be used, should the migration be
considered "unsafe" if an RBD disk is present?
I'd very much appreciate some help in understanding all of this. At the
moment, I think my only option is to run RBD without caching at all.
However, not only does that result in very poor performance, it also
doesn't seem to match the qemuMigrationIsSafe check.
Regards,
Michael
[libvirt] [PATCH V4 0/3] Testing libvirt XML -> libxl_domain_config conversion
by Jim Fehlig
This is version 4 of the work started by danpb:
https://www.redhat.com/archives/libvir-list/2014-May/msg01102.html
This series tests the conversion of libvirt XML to libxl_domain_config
objects by the libvirt libxl driver.
Changed in V4:
- V3 patches 2-4 have been pushed
- Patch 1 is unchanged from V3
- Patch 2 is new and adds tests for virJSONStringCompare
- Cleanup of ignored context paths definition in patch 3
(was #5 in V3)
Daniel P. Berrange (2):
util: Introduce virJSONStringCompare for JSON doc comparisons
libxl: Add a test suite for libxl option generator
Jim Fehlig (1):
tests: add tests for virJSONStringCompare
configure.ac | 2 +
src/libvirt_private.syms | 1 +
src/util/virjson.c | 242 +++++++++++++++++++++++++++++++++
src/util/virjson.h | 16 +++
tests/Makefile.am | 25 +++-
tests/jsontest.c | 63 ++++++++-
tests/libxlxml2jsondata/basic-hvm.json | 217 +++++++++++++++++++++++++++++
tests/libxlxml2jsondata/basic-hvm.xml | 36 +++++
tests/libxlxml2jsondata/basic-pv.json | 163 ++++++++++++++++++++++
tests/libxlxml2jsondata/basic-pv.xml | 28 ++++
tests/libxlxml2jsontest.c | 228 +++++++++++++++++++++++++++++++
tests/virmocklibxl.c | 87 ++++++++++++
12 files changed, 1102 insertions(+), 6 deletions(-)
create mode 100644 tests/libxlxml2jsondata/basic-hvm.json
create mode 100644 tests/libxlxml2jsondata/basic-hvm.xml
create mode 100644 tests/libxlxml2jsondata/basic-pv.json
create mode 100644 tests/libxlxml2jsondata/basic-pv.xml
create mode 100644 tests/libxlxml2jsontest.c
create mode 100644 tests/virmocklibxl.c
--
1.8.4.5
[libvirt] [PATCH V3 0/5] Testing libvirt XML -> libxl_domain_config conversion
by Jim Fehlig
This is version 3 of the work started by danpb:
https://www.redhat.com/archives/libvir-list/2014-May/msg01102.html
This series tests the conversion of libvirt XML to libxl_domain_config
objects by the libvirt libxl driver.
Changed in v3:
- Change virJSONStringCompare to accept a list of context paths to
ignore
- Report an error in virJSONStringCompare if libyajl is not available
- Fix a bug (4/5) exposed by the new tests
- Add tests for conversion of both PV and HVM config
- Define json context paths to ignore based on features defined
in libxl.h
Daniel P. Berrange (4):
util: Introduce virJSONStringCompare for JSON doc comparisons
util: Allow port allocator to skip bind() check
tests: Add more test suite mock helpers
libxl: Add a test suite for libxl option generator
Jim Fehlig (1):
libxl: fix mapping of libvirt and libxl lifecycle actions
configure.ac | 2 +
src/libvirt_private.syms | 1 +
src/libxl/libxl_conf.c | 62 +++++++-
src/libxl/libxl_driver.c | 5 +-
src/qemu/qemu_driver.c | 9 +-
src/util/virjson.c | 242 +++++++++++++++++++++++++++++++
src/util/virjson.h | 16 +++
src/util/virportallocator.c | 14 +-
src/util/virportallocator.h | 7 +-
tests/Makefile.am | 25 +++-
tests/libxlxml2jsondata/basic-hvm.json | 217 ++++++++++++++++++++++++++++
tests/libxlxml2jsondata/basic-hvm.xml | 36 +++++
tests/libxlxml2jsondata/basic-pv.json | 163 +++++++++++++++++++++
tests/libxlxml2jsondata/basic-pv.xml | 28 ++++
tests/libxlxml2jsontest.c | 251 +++++++++++++++++++++++++++++++++
tests/virfirewalltest.c | 4 +-
tests/virmock.h | 54 +++++--
tests/virmocklibxl.c | 87 ++++++++++++
tests/virportallocatortest.c | 4 +-
tests/virsystemdtest.c | 4 +-
20 files changed, 1198 insertions(+), 33 deletions(-)
create mode 100644 tests/libxlxml2jsondata/basic-hvm.json
create mode 100644 tests/libxlxml2jsondata/basic-hvm.xml
create mode 100644 tests/libxlxml2jsondata/basic-pv.json
create mode 100644 tests/libxlxml2jsondata/basic-pv.xml
create mode 100644 tests/libxlxml2jsontest.c
create mode 100644 tests/virmocklibxl.c
--
1.8.4.5
[libvirt] [PATCH] add migration support for OpenVZ driver
by Hongbin Lu
This patch adds initial migration support to the OpenVZ driver,
using the VIR_DRV_FEATURE_MIGRATION_PARAMS family of migration
functions.
---
src/openvz/openvz_conf.h | 5 +-
src/openvz/openvz_driver.c | 348 ++++++++++++++++++++++++++++++++++++++++++++
src/openvz/openvz_driver.h | 10 ++
3 files changed, 361 insertions(+), 2 deletions(-)
diff --git a/src/openvz/openvz_conf.h b/src/openvz/openvz_conf.h
index a7de7d2..33998d6 100644
--- a/src/openvz/openvz_conf.h
+++ b/src/openvz/openvz_conf.h
@@ -35,8 +35,9 @@
/* OpenVZ commands - Replace with wrapper scripts later? */
-# define VZLIST "/usr/sbin/vzlist"
-# define VZCTL "/usr/sbin/vzctl"
+# define VZLIST "/usr/sbin/vzlist"
+# define VZCTL "/usr/sbin/vzctl"
+# define VZMIGRATE "/usr/sbin/vzmigrate"
# define VZ_CONF_FILE "/etc/vz/vz.conf"
# define VZCTL_BRIDGE_MIN_VERSION ((3 * 1000 * 1000) + (0 * 1000) + 22 + 1)
diff --git a/src/openvz/openvz_driver.c b/src/openvz/openvz_driver.c
index 851ed30..0f46872 100644
--- a/src/openvz/openvz_driver.c
+++ b/src/openvz/openvz_driver.c
@@ -2207,6 +2207,348 @@ openvzNodeGetCPUMap(virConnectPtr conn ATTRIBUTE_UNUSED,
}
+static int
+openvzConnectSupportsFeature(virConnectPtr conn ATTRIBUTE_UNUSED, int feature)
+{
+ switch (feature) {
+ case VIR_DRV_FEATURE_MIGRATION_PARAMS:
+ case VIR_DRV_FEATURE_MIGRATION_V3:
+ return 1;
+ default:
+ return 0;
+ }
+}
+
+
+static char *
+openvzDomainMigrateBegin3Params(virDomainPtr domain,
+ virTypedParameterPtr params,
+ int nparams,
+ char **cookieout ATTRIBUTE_UNUSED,
+ int *cookieoutlen ATTRIBUTE_UNUSED,
+ unsigned int flags)
+{
+ virDomainObjPtr vm = NULL;
+ struct openvz_driver *driver = domain->conn->privateData;
+ char *xml = NULL;
+ int status;
+
+ virCheckFlags(OPENVZ_MIGRATION_FLAGS, NULL);
+ if (virTypedParamsValidate(params, nparams, OPENVZ_MIGRATION_PARAMETERS) < 0)
+ return NULL;
+
+ openvzDriverLock(driver);
+ vm = virDomainObjListFindByUUID(driver->domains, domain->uuid);
+ openvzDriverUnlock(driver);
+
+ if (!vm) {
+ virReportError(VIR_ERR_NO_DOMAIN, "%s",
+ _("no domain with matching uuid"));
+ goto cleanup;
+ }
+
+ if (!virDomainObjIsActive(vm)) {
+ virReportError(VIR_ERR_OPERATION_INVALID,
+ "%s", _("domain is not running"));
+ goto cleanup;
+ }
+
+ if (openvzGetVEStatus(vm, &status, NULL) == -1)
+ goto cleanup;
+
+ if (status != VIR_DOMAIN_RUNNING) {
+ virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+ _("domain is not in running state"));
+ goto cleanup;
+ }
+
+ xml = virDomainDefFormat(vm->def, VIR_DOMAIN_XML_SECURE);
+
+ cleanup:
+ if (vm)
+ virObjectUnlock(vm);
+ return xml;
+}
+
+static int
+openvzDomainMigratePrepare3Params(virConnectPtr dconn,
+ virTypedParameterPtr params,
+ int nparams,
+ const char *cookiein ATTRIBUTE_UNUSED,
+ int cookieinlen ATTRIBUTE_UNUSED,
+ char **cookieout ATTRIBUTE_UNUSED,
+ int *cookieoutlen ATTRIBUTE_UNUSED,
+ char **uri_out,
+ unsigned int fflags ATTRIBUTE_UNUSED)
+{
+ struct openvz_driver *driver = dconn->privateData;
+ const char *dom_xml = NULL;
+ const char *uri_in = NULL;
+ virDomainDefPtr def = NULL;
+ virDomainObjPtr vm = NULL;
+ char *hostname = NULL;
+ virURIPtr uri = NULL;
+ int ret = -1;
+
+ if (virTypedParamsValidate(params, nparams, OPENVZ_MIGRATION_PARAMETERS) < 0)
+ goto error;
+
+ if (virTypedParamsGetString(params, nparams,
+ VIR_MIGRATE_PARAM_DEST_XML,
+ &dom_xml) < 0 ||
+ virTypedParamsGetString(params, nparams,
+ VIR_MIGRATE_PARAM_URI,
+ &uri_in) < 0)
+ goto error;
+
+ if (!dom_xml) {
+ virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+ _("no domain XML passed"));
+ goto error;
+ }
+
+ if (!(def = virDomainDefParseString(dom_xml, driver->caps, driver->xmlopt,
+ 1 << VIR_DOMAIN_VIRT_OPENVZ,
+ VIR_DOMAIN_XML_INACTIVE)))
+ goto error;
+
+ if (!(vm = virDomainObjListAdd(driver->domains, def,
+ driver->xmlopt,
+ VIR_DOMAIN_OBJ_LIST_ADD_LIVE |
+ VIR_DOMAIN_OBJ_LIST_ADD_CHECK_LIVE,
+ NULL)))
+ goto error;
+ def = NULL;
+
+ if (!uri_in) {
+ if ((hostname = virGetHostname()) == NULL)
+ goto error;
+
+ if (STRPREFIX(hostname, "localhost")) {
+ virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+ _("hostname on destination resolved to localhost,"
+ " but migration requires an FQDN"));
+ goto error;
+ }
+ } else {
+ uri = virURIParse(uri_in);
+
+ if (uri == NULL) {
+ virReportError(VIR_ERR_INVALID_ARG,
+ _("unable to parse URI: %s"),
+ uri_in);
+ goto error;
+ }
+
+ if (uri->server == NULL) {
+ virReportError(VIR_ERR_INVALID_ARG,
+ _("missing host in migration URI: %s"),
+ uri_in);
+ goto error;
+ } else {
+ hostname = uri->server;
+ }
+ }
+
+ if (virAsprintf(uri_out, "tcp://%s", hostname) < 0)
+ goto error;
+
+ ret = 0;
+ goto done;
+
+ error:
+ virDomainDefFree(def);
+ if (vm) {
+ virDomainObjListRemove(driver->domains, vm);
+ vm = NULL;
+ }
+
+ done:
+ virURIFree(uri);
+ if (vm)
+ virObjectUnlock(vm);
+ return ret;
+}
+
+static int
+openvzDomainMigratePerform3Params(virDomainPtr domain,
+ const char *dconnuri ATTRIBUTE_UNUSED,
+ virTypedParameterPtr params,
+ int nparams,
+ const char *cookiein ATTRIBUTE_UNUSED,
+ int cookieinlen ATTRIBUTE_UNUSED,
+ char **cookieout ATTRIBUTE_UNUSED,
+ int *cookieoutlen ATTRIBUTE_UNUSED,
+ unsigned int flags)
+{
+ struct openvz_driver *driver = domain->conn->privateData;
+ virDomainObjPtr vm = NULL;
+ const char *uri_str = NULL;
+ virURIPtr uri = NULL;
+ virCommandPtr cmd = virCommandNew(VZMIGRATE);
+ int ret = -1;
+
+ virCheckFlags(OPENVZ_MIGRATION_FLAGS, -1);
+ if (virTypedParamsValidate(params, nparams, OPENVZ_MIGRATION_PARAMETERS) < 0)
+ goto cleanup;
+
+ if (virTypedParamsGetString(params, nparams,
+ VIR_MIGRATE_PARAM_URI,
+ &uri_str) < 0)
+ goto cleanup;
+
+ openvzDriverLock(driver);
+ vm = virDomainObjListFindByUUID(driver->domains, domain->uuid);
+ openvzDriverUnlock(driver);
+
+ if (!vm) {
+ virReportError(VIR_ERR_NO_DOMAIN, "%s",
+ _("no domain with matching uuid"));
+ goto cleanup;
+ }
+
+ /* parse dst host:port from uri */
+ uri = virURIParse(uri_str);
+ if (uri == NULL || uri->server == NULL)
+ goto cleanup;
+
+ if (flags & VIR_MIGRATE_LIVE)
+ virCommandAddArg(cmd, "--live");
+ virCommandAddArg(cmd, uri->server);
+ virCommandAddArg(cmd, vm->def->name);
+
+ if (virCommandRun(cmd, NULL) < 0)
+ goto cleanup;
+
+ ret = 0;
+
+ cleanup:
+ virCommandFree(cmd);
+ virURIFree(uri);
+ if (vm)
+ virObjectUnlock(vm);
+ return ret;
+}
+
+static virDomainPtr
+openvzDomainMigrateFinish3Params(virConnectPtr dconn,
+ virTypedParameterPtr params,
+ int nparams,
+ const char *cookiein ATTRIBUTE_UNUSED,
+ int cookieinlen ATTRIBUTE_UNUSED,
+ char **cookieout ATTRIBUTE_UNUSED,
+ int *cookieoutlen ATTRIBUTE_UNUSED,
+ unsigned int flags,
+ int cancelled)
+{
+ struct openvz_driver *driver = dconn->privateData;
+ virDomainObjPtr vm = NULL;
+ const char *dname = NULL;
+ virDomainPtr dom = NULL;
+ int status;
+
+ if (cancelled)
+ goto cleanup;
+
+ virCheckFlags(OPENVZ_MIGRATION_FLAGS, NULL);
+ if (virTypedParamsValidate(params, nparams, OPENVZ_MIGRATION_PARAMETERS) < 0)
+ goto cleanup;
+
+ if (virTypedParamsGetString(params, nparams,
+ VIR_MIGRATE_PARAM_DEST_NAME,
+ &dname) < 0)
+ goto cleanup;
+
+ if (!dname ||
+ !(vm = virDomainObjListFindByName(driver->domains, dname))) {
+ /* Migration obviously failed if the domain doesn't exist */
+ virReportError(VIR_ERR_OPERATION_FAILED,
+ _("Migration failed. No domain on destination host "
+ "with matching name '%s'"),
+ NULLSTR(dname));
+ goto cleanup;
+ }
+
+ if (openvzGetVEStatus(vm, &status, NULL) == -1)
+ goto cleanup;
+
+ if (status != VIR_DOMAIN_RUNNING) {
+ virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+ _("domain is not running on destination host"));
+ goto cleanup;
+ }
+
+ vm->def->id = strtoI(vm->def->name);
+ virDomainObjSetState(vm, VIR_DOMAIN_RUNNING, VIR_DOMAIN_RUNNING_MIGRATED);
+
+ dom = virGetDomain(dconn, vm->def->name, vm->def->uuid);
+ if (dom)
+ dom->id = vm->def->id;
+
+ cleanup:
+ if (vm)
+ virObjectUnlock(vm);
+ return dom;
+}
+
+static int
+openvzDomainMigrateConfirm3Params(virDomainPtr domain,
+ virTypedParameterPtr params,
+ int nparams,
+ const char *cookiein ATTRIBUTE_UNUSED,
+ int cookieinlen ATTRIBUTE_UNUSED,
+ unsigned int flags,
+ int cancelled)
+{
+ struct openvz_driver *driver = domain->conn->privateData;
+ virDomainObjPtr vm = NULL;
+ int status;
+ int ret = -1;
+
+ virCheckFlags(OPENVZ_MIGRATION_FLAGS, -1);
+ if (virTypedParamsValidate(params, nparams, OPENVZ_MIGRATION_PARAMETERS) < 0)
+ goto cleanup;
+
+ openvzDriverLock(driver);
+ vm = virDomainObjListFindByUUID(driver->domains, domain->uuid);
+ openvzDriverUnlock(driver);
+
+ if (!vm) {
+ virReportError(VIR_ERR_NO_DOMAIN, "%s",
+ _("no domain with matching uuid"));
+ goto cleanup;
+ }
+
+ if (cancelled) {
+ if (openvzGetVEStatus(vm, &status, NULL) == -1)
+ goto cleanup;
+
+ if (status == VIR_DOMAIN_RUNNING) {
+ ret = 0;
+ } else {
+ VIR_DEBUG("Domain '%s' does not recover after failed migration",
+ vm->def->name);
+ }
+
+ goto cleanup;
+ }
+
+ vm->def->id = -1;
+
+ VIR_DEBUG("Domain '%s' successfully migrated", vm->def->name);
+
+ virDomainObjListRemove(driver->domains, vm);
+ vm = NULL;
+
+ ret = 0;
+
+ cleanup:
+ if (vm)
+ virObjectUnlock(vm);
+ return ret;
+}
+
+
static virDriver openvzDriver = {
.no = VIR_DRV_OPENVZ,
.name = "OPENVZ",
@@ -2265,6 +2607,12 @@ static virDriver openvzDriver = {
.connectIsAlive = openvzConnectIsAlive, /* 0.9.8 */
.domainUpdateDeviceFlags = openvzDomainUpdateDeviceFlags, /* 0.9.13 */
.domainGetHostname = openvzDomainGetHostname, /* 0.10.0 */
+ .connectSupportsFeature = openvzConnectSupportsFeature, /* 1.2.8 */
+ .domainMigrateBegin3Params = openvzDomainMigrateBegin3Params, /* 1.2.8 */
+ .domainMigratePrepare3Params = openvzDomainMigratePrepare3Params, /* 1.2.8 */
+ .domainMigratePerform3Params = openvzDomainMigratePerform3Params, /* 1.2.8 */
+ .domainMigrateFinish3Params = openvzDomainMigrateFinish3Params, /* 1.2.8 */
+ .domainMigrateConfirm3Params = openvzDomainMigrateConfirm3Params, /* 1.2.8 */
};
int openvzRegister(void)
diff --git a/src/openvz/openvz_driver.h b/src/openvz/openvz_driver.h
index b39e81c..0c7a070 100644
--- a/src/openvz/openvz_driver.h
+++ b/src/openvz/openvz_driver.h
@@ -31,6 +31,16 @@
# include "internal.h"
+# define OPENVZ_MIGRATION_FLAGS \
+ (VIR_MIGRATE_LIVE)
+
+/* All supported migration parameters and their types. */
+# define OPENVZ_MIGRATION_PARAMETERS \
+ VIR_MIGRATE_PARAM_URI, VIR_TYPED_PARAM_STRING, \
+ VIR_MIGRATE_PARAM_DEST_NAME, VIR_TYPED_PARAM_STRING, \
+ VIR_MIGRATE_PARAM_DEST_XML, VIR_TYPED_PARAM_STRING, \
+ NULL
+
int openvzRegister(void);
#endif
--
1.7.1
[libvirt] [PATCH] conf: snapshot: Don't default-snapshot empty floppy drives
by Peter Krempa
If a floppy drive isn't explicitly selected for snapshot and is empty,
don't try to snapshot it. For external snapshots this would fail, as we
can't generate a name for the snapshot from an empty drive.
Reported-by: Pavel Hrdina <phrdina(a)redhat.com>
---
src/conf/snapshot_conf.c | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/src/conf/snapshot_conf.c b/src/conf/snapshot_conf.c
index c53a66b..cbaff74 100644
--- a/src/conf/snapshot_conf.c
+++ b/src/conf/snapshot_conf.c
@@ -561,7 +561,14 @@ virDomainSnapshotAlignDisks(virDomainSnapshotDefPtr def,
if (VIR_STRDUP(disk->name, def->dom->disks[i]->dst) < 0)
goto cleanup;
disk->index = i;
- disk->snapshot = def->dom->disks[i]->snapshot;
+
+ /* Don't snapshot empty floppy drives */
+ if (def->dom->disks[i]->device == VIR_DOMAIN_DISK_DEVICE_FLOPPY &&
+ !virDomainDiskGetSource(def->dom->disks[i]))
+ disk->snapshot = VIR_DOMAIN_SNAPSHOT_LOCATION_NONE;
+ else
+ disk->snapshot = def->dom->disks[i]->snapshot;
+
disk->src->type = VIR_STORAGE_TYPE_FILE;
if (!disk->snapshot)
disk->snapshot = default_snapshot;
--
2.1.0
[libvirt] [PATCH 00/26] Resolve more Coverity issues
by John Ferlan
Sorry for the large dump, but before I got too involved in other things
I figured I'd go through the list of the remaining 68 Coverity issues
from the new version in order to reduce the pile. Many are benign, some
seemingly false positives, and I think most are error paths. The one
non error path that does stick out is the qemu_driver.c changes in the
qemuDomainSetBlkioParameters() routine where 'param' and 'params' were
used differently between LIVE and CONFIG. In particular, in CONFIG the
use of 'params->field' instead of 'param->field'.
One that does bear looking at more closely and if someone has a better
idea is avoiding a false positive resource_leak in remote_driver.c. I
left a healthy comment in the code - you'll know when you see it.
These patches get the numbers down to 19 issues. Of the remaining
issues - some are related to Coverity thinking that 'mgetgroups' could
return a negative value with an allocated groups structure (which I'm
still scratching my head over). There are also a few calls to
virJSONValueObjectGetNumberUlong() in qemu_monitor_json.c that don't
check status, but I'm not sure why - just didn't have the research
cycles for that.
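The 'param' vs 'params' COPY_PASTE_ERROR class mentioned above can be
illustrated with a tiny self-contained example (invented struct and
function names, not the actual qemu_driver.c code): inside a loop over
'params', reading through the array pointer 'params' instead of the
per-iteration cursor 'param' silently touches element 0 every time.

```c
#include <assert.h>

struct typedParam { int value; };

/* Buggy variant: 'params->value' always dereferences element 0. */
static int
sumParamsBuggy(struct typedParam *params, int nparams)
{
    int sum = 0;
    for (int i = 0; i < nparams; i++) {
        struct typedParam *param = &params[i];
        sum += params->value;    /* BUG: should be param->value */
        (void)param;
    }
    return sum;
}

/* Fixed variant: uses the per-iteration cursor. */
static int
sumParamsFixed(struct typedParam *params, int nparams)
{
    int sum = 0;
    for (int i = 0; i < nparams; i++) {
        struct typedParam *param = &params[i];
        sum += param->value;     /* correct: current element */
    }
    return sum;
}
```

The two variants compile identically cleanly, which is why this class of
bug tends to survive review and only shows up via tools like Coverity.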
John Ferlan (26):
qemu_driver: Resolve Coverity COPY_PASTE_ERROR
remote_driver: Resolve Coverity RESOURCE_LEAK
storage: Resolve Coverity UNUSED_VALUE
vbox: Resolve Coverity UNUSED_VALUE
qemu: Resolve Coverity REVERSE_INULL
storage: Resolve Coverity OVERFLOW_BEFORE_WIDEN
virsh: Resolve Coverity DEADCODE
virfile: Resolve Coverity DEADCODE
virsh: Resolve Coverity DEADCODE
qemu: Resolve Coverity DEADCODE
tests: Resolve Coverity DEADCODE
virsh: Resolve Coverity DEADCODE
qemu: Resolve Coverity FORWARD_NULL
lxc: Resolve Coverity FORWARD_NULL
qemu: Resolve Coverity FORWARD_NULL
network: Resolve Coverity FORWARD_NULL
virstring: Resolve Coverity FORWARD_NULL
qemu: Resolve Coverity FORWARD_NULL
network_conf: Resolve Coverity FORWARD_NULL
qemu: Resolve Coverity NEGATIVE_RETURNS
nodeinfo: Resolve Coverity NEGATIVE_RETURNS
virsh: Resolve Coverity NEGATIVE_RETURNS
xen: Resolve Coverity NEGATIVE_RETURNS
qemu: Resolve Coverity NEGATIVE_RETURNS
qemu: Resolve Coverity NEGATIVE_RETURNS
libxl: Resolve Coverity NULL_RETURNS
src/conf/network_conf.c | 4 ++--
src/libxl/libxl_migration.c | 1 -
src/lxc/lxc_driver.c | 6 ++++--
src/network/leaseshelper.c | 3 +--
src/nodeinfo.c | 2 +-
src/qemu/qemu_capabilities.c | 2 +-
src/qemu/qemu_command.c | 1 +
src/qemu/qemu_driver.c | 26 +++++++++++++++-----------
src/qemu/qemu_migration.c | 3 ++-
src/qemu/qemu_monitor_json.c | 2 +-
src/qemu/qemu_process.c | 5 +++--
src/remote/remote_driver.c | 12 ++++++++++++
src/storage/storage_backend_disk.c | 2 +-
src/storage/storage_backend_fs.c | 1 -
src/util/virfile.c | 5 ++---
src/util/virstring.c | 3 +++
src/vbox/vbox_common.c | 9 +++++++--
src/xen/xend_internal.c | 3 ++-
tests/virstringtest.c | 5 +++++
tools/virsh-domain.c | 22 ++++++++--------------
tools/virsh-edit.c | 9 ---------
tools/virsh-interface.c | 3 ---
tools/virsh-network.c | 12 +++++-------
tools/virsh-nwfilter.c | 3 ---
tools/virsh-pool.c | 3 ---
tools/virsh-snapshot.c | 3 ---
26 files changed, 76 insertions(+), 74 deletions(-)
--
1.9.3