[libvirt] [PATCH 0/4] ivshmem support
by Osier Yang
Shawn Furrow proposed a patch more than a month ago:
https://www.redhat.com/archives/libvir-list/2012-September/msg01612.html
But this is a completely different implementation. Considering
that there could be other memory-related devices in the future, this
introduces a new device model, called "memory device", instead
of a specific device like "ivshmem", though only "ivshmem"
is supported currently. Please refer to PATCH 1/4 for more
details.
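For reference, on the QEMU side an ivshmem device is typically configured with
something like the following (a sketch only, not taken from this series; "my_shmem"
and the size are illustrative, and the exact set of ivshmem properties depends on
the QEMU version):

  -device ivshmem,shm=my_shmem,size=32m

where "shm" names the POSIX shared memory object backing the device and "size"
sets the size of the shared memory BAR.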
CC'ed to Cam and Shawn, to see if there is any advice on the documentation.
Osier Yang (4):
docs: Add documents for memory device
conf: Parse and format memory device XML
qemu: Add cap flag QEMU_CAPS_IVSHMEM
qemu: Build command line for ivshmem device
docs/formatdomain.html.in | 39 +++++
docs/schemas/domaincommon.rng | 38 +++++
src/conf/domain_conf.c | 184 +++++++++++++++++++++-
src/conf/domain_conf.h | 27 +++
src/libvirt_private.syms | 3 +
src/qemu/qemu_capabilities.c | 2 +
src/qemu/qemu_capabilities.h | 1 +
src/qemu/qemu_command.c | 85 ++++++++++
src/util/util.c | 5 +
src/util/util.h | 2 +
tests/qemuhelptest.c | 12 +-
tests/qemuxml2argvdata/qemuxml2argv-ivshmem.args | 7 +
tests/qemuxml2argvdata/qemuxml2argv-ivshmem.xml | 33 ++++
tests/qemuxml2argvtest.c | 2 +
14 files changed, 435 insertions(+), 5 deletions(-)
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-ivshmem.args
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-ivshmem.xml
--
1.7.7.6
Regards,
Osier
[libvirt] [PATCH 0/2] S390: Adding support for SCLP Console
by Viktor Mihajlovski
The S390 architecture comes with a native console type (SCLP
console) which is now also supported by current QEMU.
This series enables libvirt to configure S390 domains with SCLP
consoles.
The domain XML has to be extended for the new console target types
'sclp' and 'sclplm' (line mode = dumb).
As usual, the QEMU driver must do capability probing in order to find
out whether SCLP is supported, and format the QEMU command line
for the new console type.
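For illustration, a console using the new target type would look something like this
in the domain XML (a sketch based on the description above; the exact form is in the
new qemuxml2argv-console-sclp.xml test case):

  <console type='pty'>
    <target type='sclp' port='0'/>
  </console>

with 'sclplm' used instead of 'sclp' for the line-mode (dumb) console.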
J.B. Joret (2):
S390: Add SCLP console front end support
S390: Enable SCLP Console in QEMU driver
docs/formatdomain.html.in | 19 ++++++-
docs/schemas/domaincommon.rng | 2 +
src/conf/domain_conf.c | 4 +-
src/conf/domain_conf.h | 2 +
src/qemu/qemu_capabilities.c | 3 ++
src/qemu/qemu_capabilities.h | 1 +
src/qemu/qemu_command.c | 59 ++++++++++++++++++++++
.../qemuxml2argv-console-sclp.args | 8 +++
.../qemuxml2argvdata/qemuxml2argv-console-sclp.xml | 24 +++++++++
tests/qemuxml2argvtest.c | 3 ++
10 files changed, 123 insertions(+), 2 deletions(-)
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-console-sclp.args
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-console-sclp.xml
--
1.7.12.4
[libvirt] [PATCH v13] support offline migration
by liguang
The original migration code was not aware of the offline case,
so this tries to support offline migration quietly
(without disturbing the existing migration paths) by passing the
VIR_MIGRATE_OFFLINE flag to the migration APIs only when
the domain is really inactive, so that the
migration process is not confused by an offline domain
and does not exit unexpectedly.
These changes do not take care of the disk images the
domain requires; they can be transferred by
other APIs as suggested, so VIR_MIGRATE_OFFLINE
must not be combined with VIR_MIGRATE_NON_SHARED_*.
If you want a persistent migration,
you should run "virsh migrate --persistent" yourself.
v12:
rebased to resolve conflicts with commit 2f3e2c0c434218a3d656c08779cb98b327170e11,
and incorporated some changes from Doug Goldstein's patch
https://www.redhat.com/archives/libvir-list/2012-October/msg00957.html
v13:
changed according to comments from Jiri Denemark
https://www.redhat.com/archives/libvir-list/2012-November/msg00153.html
Signed-off-by: liguang <lig.fnst@cn.fujitsu.com>
---
include/libvirt/libvirt.h.in | 1 +
src/qemu/qemu_driver.c | 8 ++--
src/qemu/qemu_migration.c | 99 +++++++++++++++++++++++++++++++++---------
src/qemu/qemu_migration.h | 3 +-
tools/virsh-domain.c | 10 ++++
5 files changed, 95 insertions(+), 26 deletions(-)
diff --git a/include/libvirt/libvirt.h.in b/include/libvirt/libvirt.h.in
index fe58c08..1e0500d 100644
--- a/include/libvirt/libvirt.h.in
+++ b/include/libvirt/libvirt.h.in
@@ -1090,6 +1090,7 @@ typedef enum {
* whole migration process; this will be used automatically
* when supported */
VIR_MIGRATE_UNSAFE = (1 << 9), /* force migration even if it is considered unsafe */
+ VIR_MIGRATE_OFFLINE = (1 << 10), /* offline migrate */
} virDomainMigrateFlags;
/* Domain migration. */
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 978af57..6c2bf98 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -9796,7 +9796,7 @@ qemuDomainMigrateBegin3(virDomainPtr domain,
asyncJob = QEMU_ASYNC_JOB_NONE;
}
- if (!virDomainObjIsActive(vm)) {
+ if (!virDomainObjIsActive(vm) && !(flags & VIR_MIGRATE_OFFLINE)) {
virReportError(VIR_ERR_OPERATION_INVALID,
"%s", _("domain is not running"));
goto endjob;
@@ -9805,9 +9805,9 @@ qemuDomainMigrateBegin3(virDomainPtr domain,
/* Check if there is any ejected media.
* We don't want to require them on the destination.
*/
-
- if (qemuDomainCheckEjectableMedia(driver, vm, asyncJob) < 0)
- goto endjob;
+ if (virDomainObjIsActive(vm))
+ if (qemuDomainCheckEjectableMedia(driver, vm, asyncJob) < 0)
+ goto endjob;
if (!(xml = qemuMigrationBegin(driver, vm, xmlin, dname,
cookieout, cookieoutlen,
diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index 5f8a9c5..66fbc02 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -72,6 +72,7 @@ enum qemuMigrationCookieFlags {
QEMU_MIGRATION_COOKIE_FLAG_LOCKSTATE,
QEMU_MIGRATION_COOKIE_FLAG_PERSISTENT,
QEMU_MIGRATION_COOKIE_FLAG_NETWORK,
+ QEMU_MIGRATION_COOKIE_FLAG_OFFLINE,
QEMU_MIGRATION_COOKIE_FLAG_LAST
};
@@ -79,13 +80,14 @@ enum qemuMigrationCookieFlags {
VIR_ENUM_DECL(qemuMigrationCookieFlag);
VIR_ENUM_IMPL(qemuMigrationCookieFlag,
QEMU_MIGRATION_COOKIE_FLAG_LAST,
- "graphics", "lockstate", "persistent", "network");
+ "graphics", "lockstate", "persistent", "network", "offline");
enum qemuMigrationCookieFeatures {
QEMU_MIGRATION_COOKIE_GRAPHICS = (1 << QEMU_MIGRATION_COOKIE_FLAG_GRAPHICS),
QEMU_MIGRATION_COOKIE_LOCKSTATE = (1 << QEMU_MIGRATION_COOKIE_FLAG_LOCKSTATE),
QEMU_MIGRATION_COOKIE_PERSISTENT = (1 << QEMU_MIGRATION_COOKIE_FLAG_PERSISTENT),
QEMU_MIGRATION_COOKIE_NETWORK = (1 << QEMU_MIGRATION_COOKIE_FLAG_NETWORK),
+ QEMU_MIGRATION_COOKIE_OFFLINE = (1 << QEMU_MIGRATION_COOKIE_FLAG_OFFLINE),
};
typedef struct _qemuMigrationCookieGraphics qemuMigrationCookieGraphics;
@@ -594,6 +596,9 @@ qemuMigrationCookieXMLFormat(struct qemud_driver *driver,
if ((mig->flags & QEMU_MIGRATION_COOKIE_NETWORK) && mig->network)
qemuMigrationCookieNetworkXMLFormat(buf, mig->network);
+ if (mig->flags & QEMU_MIGRATION_COOKIE_OFFLINE)
+ virBufferAsprintf(buf, " <offline/>\n");
+
virBufferAddLit(buf, "</qemu-migration>\n");
return 0;
}
@@ -874,6 +879,11 @@ qemuMigrationCookieXMLParse(qemuMigrationCookiePtr mig,
(!(mig->network = qemuMigrationCookieNetworkXMLParse(ctxt))))
goto error;
+ if ((flags & QEMU_MIGRATION_COOKIE_OFFLINE)) {
+ if (virXPathBoolean("count(./offline) > 0", ctxt))
+ mig->flags |= QEMU_MIGRATION_COOKIE_OFFLINE;
+ }
+
return 0;
error:
@@ -938,6 +948,10 @@ qemuMigrationBakeCookie(qemuMigrationCookiePtr mig,
return -1;
}
+ if (flags & QEMU_MIGRATION_COOKIE_OFFLINE) {
+ mig->flags |= QEMU_MIGRATION_COOKIE_OFFLINE;
+ }
+
if (!(*cookieout = qemuMigrationCookieXMLFormatStr(driver, mig)))
return -1;
@@ -1443,6 +1457,24 @@ char *qemuMigrationBegin(struct qemud_driver *driver,
QEMU_MIGRATION_COOKIE_LOCKSTATE) < 0)
goto cleanup;
+ if (flags & VIR_MIGRATE_OFFLINE) {
+ if (flags & (VIR_MIGRATE_NON_SHARED_DISK|
+ VIR_MIGRATE_NON_SHARED_INC)) {
+ virReportError(VIR_ERR_OPERATION_INVALID,
+ "%s", _("offline migration cannot handle non-shared storage"));
+ goto cleanup;
+ }
+ if (!(flags & VIR_MIGRATE_PERSIST_DEST)) {
+ virReportError(VIR_ERR_OPERATION_INVALID,
+ "%s", _("offline migration must be specified with the persistent flag set"));
+ goto cleanup;
+ }
+ if (qemuMigrationBakeCookie(mig, driver, vm,
+ cookieout, cookieoutlen,
+ QEMU_MIGRATION_COOKIE_OFFLINE) < 0)
+ goto cleanup;
+ }
+
if (xmlin) {
if (!(def = virDomainDefParseString(driver->caps, xmlin,
QEMU_EXPECTED_VIRT_TYPES,
@@ -1607,6 +1639,15 @@ qemuMigrationPrepareAny(struct qemud_driver *driver,
goto endjob;
}
+ if (!(mig = qemuMigrationEatCookie(driver, vm, cookiein, cookieinlen,
+ QEMU_MIGRATION_COOKIE_OFFLINE)))
+ return ret;
+
+ if (mig->flags & QEMU_MIGRATION_COOKIE_OFFLINE) {
+ ret = 0;
+ goto done;
+ }
+
/* Start the QEMU daemon, with the same command-line arguments plus
* -incoming $migrateFrom
*/
@@ -1658,6 +1699,7 @@ qemuMigrationPrepareAny(struct qemud_driver *driver,
VIR_DOMAIN_EVENT_STARTED,
VIR_DOMAIN_EVENT_STARTED_MIGRATED);
+done:
/* We keep the job active across API calls until the finish() call.
* This prevents any other APIs being invoked while incoming
* migration is taking place.
@@ -2150,6 +2192,9 @@ qemuMigrationRun(struct qemud_driver *driver,
return -1;
}
+ if (flags & VIR_MIGRATE_OFFLINE)
+ return 0;
+
if (!(mig = qemuMigrationEatCookie(driver, vm, cookiein, cookieinlen,
QEMU_MIGRATION_COOKIE_GRAPHICS)))
goto cleanup;
@@ -2665,7 +2710,12 @@ static int doPeer2PeerMigrate3(struct qemud_driver *driver,
uri, &uri_out, flags, dname, resource, dom_xml);
qemuDomainObjExitRemoteWithDriver(driver, vm);
}
+
VIR_FREE(dom_xml);
+
+ if (flags & VIR_MIGRATE_OFFLINE)
+ goto cleanup;
+
if (ret == -1)
goto cleanup;
@@ -2771,7 +2821,7 @@ finish:
vm->def->name);
cleanup:
- if (ddomain) {
+ if (ddomain || (flags & VIR_MIGRATE_OFFLINE)) {
virObjectUnref(ddomain);
ret = 0;
} else {
@@ -2848,7 +2898,7 @@ static int doPeer2PeerMigrate(struct qemud_driver *driver,
}
/* domain may have been stopped while we were talking to remote daemon */
- if (!virDomainObjIsActive(vm)) {
+ if (!virDomainObjIsActive(vm) && !(flags & VIR_MIGRATE_OFFLINE)) {
virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
_("guest unexpectedly quit"));
goto cleanup;
@@ -2911,7 +2961,7 @@ qemuMigrationPerformJob(struct qemud_driver *driver,
if (qemuMigrationJobStart(driver, vm, QEMU_ASYNC_JOB_MIGRATION_OUT) < 0)
goto cleanup;
- if (!virDomainObjIsActive(vm)) {
+ if (!virDomainObjIsActive(vm) && !(flags & VIR_MIGRATE_OFFLINE)) {
virReportError(VIR_ERR_OPERATION_INVALID,
"%s", _("domain is not running"));
goto endjob;
@@ -3235,26 +3285,27 @@ qemuMigrationFinish(struct qemud_driver *driver,
* object, but if no, clean up the empty qemu process.
*/
if (retcode == 0) {
- if (!virDomainObjIsActive(vm)) {
+ if (!virDomainObjIsActive(vm) && !(flags & VIR_MIGRATE_OFFLINE)) {
virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
_("guest unexpectedly quit"));
goto endjob;
}
- if (qemuMigrationVPAssociatePortProfiles(vm->def) < 0) {
- qemuProcessStop(driver, vm, VIR_DOMAIN_SHUTOFF_FAILED,
- VIR_QEMU_PROCESS_STOP_MIGRATED);
- virDomainAuditStop(vm, "failed");
- event = virDomainEventNewFromObj(vm,
- VIR_DOMAIN_EVENT_STOPPED,
- VIR_DOMAIN_EVENT_STOPPED_FAILED);
- goto endjob;
+ if (!(flags & VIR_MIGRATE_OFFLINE)) {
+ if (qemuMigrationVPAssociatePortProfiles(vm->def) < 0) {
+ qemuProcessStop(driver, vm, VIR_DOMAIN_SHUTOFF_FAILED,
+ VIR_QEMU_PROCESS_STOP_MIGRATED);
+ virDomainAuditStop(vm, "failed");
+ event = virDomainEventNewFromObj(vm,
+ VIR_DOMAIN_EVENT_STOPPED,
+ VIR_DOMAIN_EVENT_STOPPED_FAILED);
+ goto endjob;
+ }
+ if (mig->network)
+ if (qemuDomainMigrateOPDRelocate(driver, vm, mig) < 0)
+ VIR_WARN("unable to provide network data for relocation");
}
- if (mig->network)
- if (qemuDomainMigrateOPDRelocate(driver, vm, mig) < 0)
- VIR_WARN("unable to provide network data for relocation");
-
if (flags & VIR_MIGRATE_PERSIST_DEST) {
virDomainDefPtr vmdef;
if (vm->persistent)
@@ -3302,7 +3353,7 @@ qemuMigrationFinish(struct qemud_driver *driver,
event = NULL;
}
- if (!(flags & VIR_MIGRATE_PAUSED)) {
+ if (!(flags & VIR_MIGRATE_PAUSED) && !(flags & VIR_MIGRATE_OFFLINE)) {
/* run 'cont' on the destination, which allows migration on qemu
* >= 0.10.6 to work properly. This isn't strictly necessary on
* older qemu's, but it also doesn't hurt anything there
@@ -3351,9 +3402,11 @@ qemuMigrationFinish(struct qemud_driver *driver,
VIR_DOMAIN_EVENT_SUSPENDED,
VIR_DOMAIN_EVENT_SUSPENDED_PAUSED);
}
- if (virDomainSaveStatus(driver->caps, driver->stateDir, vm) < 0) {
- VIR_WARN("Failed to save status on vm %s", vm->def->name);
- goto endjob;
+ if (virDomainObjIsActive(vm)) {
+ if (virDomainSaveStatus(driver->caps, driver->stateDir, vm) < 0) {
+ VIR_WARN("Failed to save status on vm %s", vm->def->name);
+ goto endjob;
+ }
}
/* Guest is successfully running, so cancel previous auto destroy */
@@ -3420,6 +3473,9 @@ int qemuMigrationConfirm(struct qemud_driver *driver,
if (!(mig = qemuMigrationEatCookie(driver, vm, cookiein, cookieinlen, 0)))
return -1;
+ if (flags & VIR_MIGRATE_OFFLINE)
+ goto done;
+
/* Did the migration go as planned? If yes, kill off the
* domain object, but if no, resume CPUs
*/
@@ -3455,6 +3511,7 @@ int qemuMigrationConfirm(struct qemud_driver *driver,
}
}
+done:
qemuMigrationCookieFree(mig);
rv = 0;
diff --git a/src/qemu/qemu_migration.h b/src/qemu/qemu_migration.h
index 7a2269a..b4f6a77 100644
--- a/src/qemu/qemu_migration.h
+++ b/src/qemu/qemu_migration.h
@@ -36,7 +36,8 @@
VIR_MIGRATE_NON_SHARED_DISK | \
VIR_MIGRATE_NON_SHARED_INC | \
VIR_MIGRATE_CHANGE_PROTECTION | \
- VIR_MIGRATE_UNSAFE)
+ VIR_MIGRATE_UNSAFE | \
+ VIR_MIGRATE_OFFLINE)
enum qemuMigrationJobPhase {
QEMU_MIGRATION_PHASE_NONE = 0,
diff --git a/tools/virsh-domain.c b/tools/virsh-domain.c
index 393b67b..54ba63a 100644
--- a/tools/virsh-domain.c
+++ b/tools/virsh-domain.c
@@ -6644,6 +6644,7 @@ static const vshCmdInfo info_migrate[] = {
static const vshCmdOptDef opts_migrate[] = {
{"live", VSH_OT_BOOL, 0, N_("live migration")},
+ {"offline", VSH_OT_BOOL, 0, N_("offline (domain's inactive) migration")},
{"p2p", VSH_OT_BOOL, 0, N_("peer-2-peer migration")},
{"direct", VSH_OT_BOOL, 0, N_("direct migration")},
{"tunneled", VSH_OT_ALIAS, 0, "tunnelled"},
@@ -6729,6 +6730,15 @@ doMigrate(void *opaque)
if (vshCommandOptBool(cmd, "unsafe"))
flags |= VIR_MIGRATE_UNSAFE;
+ if (vshCommandOptBool(cmd, "offline")) {
+ flags |= VIR_MIGRATE_OFFLINE;
+ }
+
+ if (virDomainIsActive(dom) && (flags & VIR_MIGRATE_OFFLINE)) {
+ vshError(ctl, "%s", _("domain is active, offline migration for inactive domain only"));
+ goto out;
+ }
+
if (xmlfile &&
virFileReadAll(xmlfile, 8192, &xml) < 0) {
vshError(ctl, _("file '%s' doesn't exist"), xmlfile);
--
1.7.1
[libvirt] [PATCH v7 1/6] add a configure option --with-fuse to prepare introduction of fuse support for libvirt lxc
by Gao feng
Add a configure option --with-fuse to prepare for the introduction
of FUSE support for libvirt LXC.
With help from Daniel and Richard.
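As a usage sketch, FUSE support is requested explicitly at configure time with:

  ./configure --with-fuse

If --with-fuse is given explicitly and the fuse development files (>= 2.8.6) are not
found, configure aborts with an error; with the default --with-fuse=check, FUSE
support is simply disabled when the library is missing.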
Signed-off-by: Gao feng <gaofeng@cn.fujitsu.com>
---
configure.ac | 29 +++++++++++++++++++++++++++++
libvirt.spec.in | 9 +++++++++
2 files changed, 38 insertions(+), 0 deletions(-)
diff --git a/configure.ac b/configure.ac
index 9108ea8..495cbfa 100644
--- a/configure.ac
+++ b/configure.ac
@@ -115,6 +115,7 @@ LIBSSH2_REQUIRED="1.0"
LIBSSH2_TRANSPORT_REQUIRED="1.3"
LIBBLKID_REQUIRED="2.17"
DBUS_REQUIRED="1.0.0"
+FUSE_REQUIRED="2.8.6"
dnl Checks for C compiler.
AC_PROG_CC
@@ -1859,6 +1860,29 @@ AC_SUBST([CAPNG_CFLAGS])
AC_SUBST([CAPNG_LIBS])
+dnl libfuse
+AC_ARG_WITH([fuse],
+ AC_HELP_STRING([--with-fuse], [use libfuse to provide fuse filesystem support for libvirt lxc]),
+ [],
+ [with_fuse=check])
+dnl
+dnl This check looks for 'fuse'
+dnl
+AS_IF([test "x$with_fuse" != "xno"],
+ [PKG_CHECK_MODULES([FUSE], [fuse >= $FUSE_REQUIRED],
+ [with_fuse=yes
+ AC_SUBST([FUSE_CFLAGS])
+ AC_SUBST([FUSE_LIBS])
+ AC_DEFINE_UNQUOTED([HAVE_FUSE], 1, [whether fuse is available for libvirt lxc])
+ ],
+ [if test "x$with_fuse" = "xyes" ; then
+ AC_MSG_ERROR([You must install fuse library to compile libvirt])
+ else
+ with_fuse=no
+ fi
+ ])
+ ])
+AM_CONDITIONAL([HAVE_FUSE], [test "x$with_fuse" = "xyes"])
dnl virsh libraries
AC_CHECK_HEADERS([readline/readline.h])
@@ -3163,6 +3187,11 @@ AC_MSG_NOTICE([ capng: $CAPNG_CFLAGS $CAPNG_LIBS])
else
AC_MSG_NOTICE([ capng: no])
fi
+if test "$with_fuse" = "yes" ; then
+AC_MSG_NOTICE([ fuse: $FUSE_CFLAGS $FUSE_LIBS])
+else
+AC_MSG_NOTICE([ fuse: no])
+fi
if test "$with_xen" = "yes" ; then
AC_MSG_NOTICE([ xen: $XEN_CFLAGS $XEN_LIBS])
else
diff --git a/libvirt.spec.in b/libvirt.spec.in
index b6ded04..55408b6 100644
--- a/libvirt.spec.in
+++ b/libvirt.spec.in
@@ -93,6 +93,7 @@
# A few optional bits off by default, we enable later
%define with_polkit 0%{!?_without_polkit:0}
%define with_capng 0%{!?_without_capng:0}
+%define with_fuse 0%{!?_without_fuse:0}
%define with_netcf 0%{!?_without_netcf:0}
%define with_udev 0%{!?_without_udev:0}
%define with_hal 0%{!?_without_hal:0}
@@ -503,6 +504,9 @@ BuildRequires: numactl-devel
%if %{with_capng}
BuildRequires: libcap-ng-devel >= 0.5.0
%endif
+%if %{with_fuse}
+BuildRequires: fuse-devel >= 2.8.6
+%endif
%if %{with_phyp} || %{with_libssh2_transport}
%if %{with_libssh2_transport}
BuildRequires: libssh2-devel >= 1.3.0
@@ -1186,6 +1190,10 @@ of recent versions of Linux (and other OSes).
%define _without_capng --without-capng
%endif
+%if ! %{with_fuse}
+%define _without_fuse --without-fuse
+%endif
+
%if ! %{with_netcf}
%define _without_netcf --without-netcf
%endif
@@ -1289,6 +1297,7 @@ autoreconf -if
%{?_without_numactl} \
%{?_without_numad} \
%{?_without_capng} \
+ %{?_without_fuse} \
%{?_without_netcf} \
%{?_without_selinux} \
%{?_with_selinux_mount} \
--
1.7.7.6
[libvirt] [PATCH] qemu: Fix function header formating of 2 functions
by Peter Krempa
Headers of qemuDomainSnapshotLoad and qemuDomainNetsRestart were
improperly formatted.
---
Pushing under the trivial rule.
---
src/qemu/qemu_driver.c | 15 +++++++++------
1 file changed, 9 insertions(+), 6 deletions(-)
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 79b9607..3309f34 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -445,9 +445,11 @@ err_exit:
return NULL;
}
-static void qemuDomainSnapshotLoad(void *payload,
- const void *name ATTRIBUTE_UNUSED,
- void *data)
+
+static void
+qemuDomainSnapshotLoad(void *payload,
+ const void *name ATTRIBUTE_UNUSED,
+ void *data)
{
virDomainObjPtr vm = (virDomainObjPtr)payload;
char *baseDir = (char *)data;
@@ -559,9 +561,10 @@ cleanup:
}
-static void qemuDomainNetsRestart(void *payload,
- const void *name ATTRIBUTE_UNUSED,
- void *data ATTRIBUTE_UNUSED)
+static void
+qemuDomainNetsRestart(void *payload,
+ const void *name ATTRIBUTE_UNUSED,
+ void *data ATTRIBUTE_UNUSED)
{
int i;
virDomainObjPtr vm = (virDomainObjPtr)payload;
--
1.8.0
[libvirt] [PATCH] qemu: Allow migration to be cancelled at any phase
by Michal Privoznik
Currently, if the user calls virDomainAbortJob we just issue
'migrate_cancel' and hope for the best. However, if the user calls
the API in the wrong phase, when migration hasn't been started yet
(i.e. before the perform phase), the cancel request is just ignored.
With this patch, the request is remembered and, as soon as the
perform phase starts, the migration is cancelled.
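From the user's point of view, the sequence this patch is meant to handle looks
roughly like this (a sketch; virsh domjobabort is the client command that ends up
calling virDomainAbortJob):

  virsh migrate --live guest qemu+ssh://desthost/system &
  virsh domjobabort guest

where the abort may now arrive before the perform phase has begun and is remembered
instead of being silently ignored.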
---
src/qemu/qemu_domain.c | 26 ++++++++++++++++++++++++++
src/qemu/qemu_domain.h | 4 ++++
src/qemu/qemu_driver.c | 31 +++++++++++++++++++++++++++----
src/qemu/qemu_migration.c | 43 +++++++++++++++++++++++++++++++++++++++++--
4 files changed, 98 insertions(+), 6 deletions(-)
diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index a5592b9..031be5f 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -160,6 +160,7 @@ qemuDomainObjResetAsyncJob(qemuDomainObjPrivatePtr priv)
job->mask = DEFAULT_JOB_MASK;
job->start = 0;
job->dump_memory_only = false;
+ job->asyncAbort = false;
memset(&job->info, 0, sizeof(job->info));
}
@@ -959,6 +960,31 @@ qemuDomainObjEndAsyncJob(struct qemud_driver *driver, virDomainObjPtr obj)
return virObjectUnref(obj);
}
+void
+qemuDomainObjAbortAsyncJob(virDomainObjPtr obj)
+{
+ qemuDomainObjPrivatePtr priv = obj->privateData;
+
+ VIR_DEBUG("Requesting abort of async job: %s",
+ qemuDomainAsyncJobTypeToString(priv->job.asyncJob));
+
+ priv->job.asyncAbort = true;
+}
+
+/**
+ * qemuDomainObjAbortAsyncJobRequested:
+ * @obj: domain object
+ *
+ * Was abort requested? @obj MUST be locked when calling this.
+ */
+bool
+qemuDomainObjAbortAsyncJobRequested(virDomainObjPtr obj)
+{
+ qemuDomainObjPrivatePtr priv = obj->privateData;
+
+ return priv->job.asyncAbort;
+}
+
static int
qemuDomainObjEnterMonitorInternal(struct qemud_driver *driver,
bool driver_locked,
diff --git a/src/qemu/qemu_domain.h b/src/qemu/qemu_domain.h
index 9c2f67c..9a31bbe 100644
--- a/src/qemu/qemu_domain.h
+++ b/src/qemu/qemu_domain.h
@@ -111,6 +111,7 @@ struct qemuDomainJobObj {
unsigned long long start; /* When the async job started */
bool dump_memory_only; /* use dump-guest-memory to do dump */
virDomainJobInfo info; /* Async job progress data */
+ bool asyncAbort; /* abort of async job requested */
};
typedef struct _qemuDomainPCIAddressSet qemuDomainPCIAddressSet;
@@ -204,6 +205,9 @@ bool qemuDomainObjEndJob(struct qemud_driver *driver,
bool qemuDomainObjEndAsyncJob(struct qemud_driver *driver,
virDomainObjPtr obj)
ATTRIBUTE_RETURN_CHECK;
+void qemuDomainObjAbortAsyncJob(virDomainObjPtr obj);
+bool qemuDomainObjAbortAsyncJobRequested(virDomainObjPtr obj);
+
void qemuDomainObjSetJobPhase(struct qemud_driver *driver,
virDomainObjPtr obj,
int phase);
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 7b8eec6..009c2c8 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -10331,6 +10331,8 @@ static int qemuDomainAbortJob(virDomainPtr dom) {
virDomainObjPtr vm;
int ret = -1;
qemuDomainObjPrivatePtr priv;
+ /* Poll every 50ms for job termination */
+ struct timespec ts = { .tv_sec = 0, .tv_nsec = 50 * 1000 * 1000ull };
qemuDriverLock(driver);
vm = virDomainFindByUUID(&driver->domains, dom->uuid);
@@ -10365,10 +10367,31 @@ static int qemuDomainAbortJob(virDomainPtr dom) {
goto endjob;
}
- VIR_DEBUG("Cancelling job at client request");
- qemuDomainObjEnterMonitor(driver, vm);
- ret = qemuMonitorMigrateCancel(priv->mon);
- qemuDomainObjExitMonitor(driver, vm);
+ qemuDomainObjAbortAsyncJob(vm);
+ VIR_DEBUG("Waiting for async job '%s' to finish",
+ qemuDomainAsyncJobTypeToString(priv->job.asyncJob));
+ while (priv->job.asyncJob) {
+ if (qemuDomainObjEndJob(driver, vm) == 0) {
+ vm = NULL;
+ goto cleanup;
+ }
+ virDomainObjUnlock(vm);
+
+ nanosleep(&ts, NULL);
+
+ virDomainObjLock(vm);
+ if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_ABORT) < 0)
+ goto cleanup;
+
+ if (!virDomainObjIsActive(vm)) {
+ virReportError(VIR_ERR_OPERATION_INVALID,
+ "%s", _("domain is not running"));
+ goto endjob;
+ }
+
+ }
+
+ ret = 0;
endjob:
if (qemuDomainObjEndJob(driver, vm) == 0)
diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index 5f8a9c5..c840686 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -1172,6 +1172,12 @@ qemuMigrationUpdateJobStatus(struct qemud_driver *driver,
}
priv->job.info.timeElapsed -= priv->job.start;
+ if (qemuDomainObjAbortAsyncJobRequested(vm)) {
+ VIR_DEBUG("Migration abort requested. Translating "
+ "status to MIGRATION_STATUS_CANCELLED");
+ status = QEMU_MONITOR_MIGRATION_STATUS_CANCELLED;
+ }
+
ret = -1;
switch (status) {
case QEMU_MONITOR_MIGRATION_STATUS_INACTIVE:
@@ -1214,6 +1220,35 @@ qemuMigrationUpdateJobStatus(struct qemud_driver *driver,
return ret;
}
+static int
+qemuMigrationCancel(struct qemud_driver *driver, virDomainObjPtr vm)
+{
+ qemuDomainObjPrivatePtr priv = vm->privateData;
+ int ret = -1;
+
+ if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_ABORT) < 0)
+ goto cleanup;
+
+ if (!virDomainObjIsActive(vm)) {
+ virReportError(VIR_ERR_OPERATION_INVALID,
+ "%s", _("domain is not running"));
+ goto endjob;
+ }
+
+ qemuDomainObjEnterMonitor(driver, vm);
+ ret = qemuMonitorMigrateCancel(priv->mon);
+ qemuDomainObjExitMonitor(driver, vm);
+
+endjob:
+ if (qemuDomainObjEndJob(driver, vm) == 0) {
+ virReportError(VIR_ERR_OPEN_FAILED, "%s",
+ _("domain unexpectedly died"));
+ ret = -1;
+ }
+
+cleanup:
+ return ret;
+}
static int
qemuMigrationWaitForCompletion(struct qemud_driver *driver, virDomainObjPtr vm,
@@ -1262,10 +1297,14 @@ qemuMigrationWaitForCompletion(struct qemud_driver *driver, virDomainObjPtr vm,
}
cleanup:
- if (priv->job.info.type == VIR_DOMAIN_JOB_COMPLETED)
+ if (priv->job.info.type == VIR_DOMAIN_JOB_COMPLETED) {
return 0;
- else
+ } else {
+ if (priv->job.info.type == VIR_DOMAIN_JOB_CANCELLED &&
+ qemuMigrationCancel(driver, vm) < 0)
+ VIR_DEBUG("Cancelling job at client request");
return -1;
+ }
}
--
1.7.8.6
[libvirt] [PATCHv3] snapshot: qemu: Add support for external inactive snapshots
by Peter Krempa
This patch adds support for external disk snapshots of inactive domains.
The snapshot is created using qemu-img by calling:
qemu-img create -f format_of_snapshot -o
backing_file=/path/to/src,backing_fmt=format_of_backing_image
/path/to/snapshot
if the backing image format is known or probing is allowed, and
otherwise:
qemu-img create -f format_of_snapshot -o backing_file=/path/to/src
/path/to/snapshot
on each of the disks selected for snapshotting. This patch also modifies
the snapshot prepare function to support creating external snapshots
and to sanitize arguments. For now the user isn't able to mix external
and internal snapshots, but this restriction might be lifted in the
future.
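As a usage sketch (the disk target and file path are illustrative), an external
snapshot of an inactive domain could be requested with:

  virsh snapshot-create-as guest snap1 --disk-only \
      --diskspec vda,snapshot=external,file=/var/lib/libvirt/images/guest.snap1.qcow2

which, for a shut-off domain, ends up in the new
qemuDomainSnapshotCreateInactiveExternal() code path.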
---
Diff to v2: incorporated a ton of review feedback
- fixed unlinking of images on OOM
- formatted command line arguments better
- fixed the prepare function to reject flags properly
---
src/qemu/qemu_driver.c | 219 ++++++++++++++++++++++++++++++++++++++++---------
1 file changed, 179 insertions(+), 40 deletions(-)
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 01ba7eb..17de98e 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -10658,13 +10658,125 @@ qemuDomainSnapshotFSThaw(struct qemud_driver *driver,
/* The domain is expected to be locked and inactive. */
static int
-qemuDomainSnapshotCreateInactive(struct qemud_driver *driver,
- virDomainObjPtr vm,
- virDomainSnapshotObjPtr snap)
+qemuDomainSnapshotCreateInactiveInternal(struct qemud_driver *driver,
+ virDomainObjPtr vm,
+ virDomainSnapshotObjPtr snap)
{
return qemuDomainSnapshotForEachQcow2(driver, vm, snap, "-c", false);
}
+/* The domain is expected to be locked and inactive. */
+static int
+qemuDomainSnapshotCreateInactiveExternal(struct qemud_driver *driver,
+ virDomainObjPtr vm,
+ virDomainSnapshotObjPtr snap,
+ bool reuse)
+{
+ int i;
+ virDomainSnapshotDiskDefPtr snapdisk;
+ virDomainDiskDefPtr defdisk;
+ virCommandPtr cmd = NULL;
+ const char *qemuImgPath;
+ struct stat st;
+
+ int ret = -1;
+
+ if (!(qemuImgPath = qemuFindQemuImgBinary(driver)))
+ return -1;
+
+ for (i = 0; i < snap->def->ndisks; i++) {
+ snapdisk = &(snap->def->disks[i]);
+ defdisk = snap->def->dom->disks[snapdisk->index];
+
+ /* no-op if reuse is true and file exists and is valid */
+ if (reuse) {
+ if (stat(snapdisk->file, &st) < 0) {
+ if (errno != ENOENT) {
+ virReportSystemError(errno,
+ _("unable to stat snapshot image %s"),
+ snapdisk->file);
+ goto cleanup;
+ }
+ } else if (!S_ISBLK(st.st_mode) && st.st_size > 0) {
+ /* the existing image is reused */
+ continue;
+ }
+ }
+
+ if (!snapdisk->format)
+ snapdisk->format = VIR_STORAGE_FILE_QCOW2;
+
+ /* creates cmd line args: qemu-img create -f qcow2 -o */
+ if (!(cmd = virCommandNewArgList(qemuImgPath,
+ "create",
+ "-f",
+ virStorageFileFormatTypeToString(snapdisk->format),
+ "-o",
+ NULL)))
+ goto cleanup;
+
+ if (defdisk->format > 0) {
+ /* adds cmd line arg: backing_file=/path/to/backing/file,backing_fmd=format */
+ virCommandAddArgFormat(cmd, "backing_file=%s,backing_fmt=%s",
+ defdisk->src,
+ virStorageFileFormatTypeToString(defdisk->format));
+ } else {
+ if (!driver->allowDiskFormatProbing) {
+ virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
+ _("unknown image format of '%s' and "
+ "format probing is disabled"),
+ defdisk->src);
+ goto cleanup;
+ }
+
+ /* adds cmd line arg: backing_file=/path/to/backing/file */
+ virCommandAddArgFormat(cmd, "backing_file=%s", defdisk->src);
+ }
+
+ /* adds cmd line args: /path/to/target/file */
+ virCommandAddArg(cmd, snapdisk->file);
+
+ if (virCommandRun(cmd, NULL) < 0)
+ goto cleanup;
+
+ virCommandFree(cmd);
+ cmd = NULL;
+ }
+
+ /* update disk definitions */
+ for (i = 0; i < snap->def->ndisks; i++) {
+ snapdisk = &(snap->def->disks[i]);
+ defdisk = vm->def->disks[snapdisk->index];
+
+ if (snapdisk->snapshot == VIR_DOMAIN_SNAPSHOT_LOCATION_EXTERNAL) {
+ VIR_FREE(defdisk->src);
+ if (!(defdisk->src = strdup(snapdisk->file))) {
+ /* we cannot rollback here in a sane way */
+ virReportOOMError();
+ return -1;
+ }
+ defdisk->format = snapdisk->format;
+ }
+ }
+
+ ret = 0;
+
+cleanup:
+ virCommandFree(cmd);
+
+ /* unlink images if creation has failed */
+ if (ret < 0 && i > 0) {
+ for (; i > 0; i--) {
+ snapdisk = &(snap->def->disks[i]);
+ if (unlink(snapdisk->file) < 0)
+ VIR_WARN("Failed to remove snapshot image '%s'",
+ snapdisk->file);
+ }
+ }
+
+ return ret;
+}
+
/* The domain is expected to be locked and active. */
static int
@@ -10758,11 +10870,11 @@ qemuDomainSnapshotPrepare(virDomainObjPtr vm, virDomainSnapshotDefPtr def,
{
int ret = -1;
int i;
- bool found = false;
bool active = virDomainObjIsActive(vm);
struct stat st;
bool reuse = (*flags & VIR_DOMAIN_SNAPSHOT_CREATE_REUSE_EXT) != 0;
bool atomic = (*flags & VIR_DOMAIN_SNAPSHOT_CREATE_ATOMIC) != 0;
+ bool found_internal = false;
int external = 0;
qemuDomainObjPrivatePtr priv = vm->privateData;
@@ -10783,7 +10895,6 @@ qemuDomainSnapshotPrepare(virDomainObjPtr vm, virDomainSnapshotDefPtr def,
dom_disk->type == VIR_DOMAIN_DISK_TYPE_NETWORK &&
(dom_disk->protocol == VIR_DOMAIN_DISK_PROTOCOL_SHEEPDOG ||
dom_disk->protocol == VIR_DOMAIN_DISK_PROTOCOL_RBD)) {
- found = true;
break;
}
if (vm->def->disks[i]->format > 0 &&
@@ -10803,7 +10914,7 @@ qemuDomainSnapshotPrepare(virDomainObjPtr vm, virDomainSnapshotDefPtr def,
disk->name);
goto cleanup;
}
- found = true;
+ found_internal = true;
break;
case VIR_DOMAIN_SNAPSHOT_LOCATION_EXTERNAL:
@@ -10837,7 +10948,6 @@ qemuDomainSnapshotPrepare(virDomainObjPtr vm, virDomainSnapshotDefPtr def,
disk->name, disk->file);
goto cleanup;
}
- found = true;
external++;
break;
@@ -10852,15 +10962,37 @@ qemuDomainSnapshotPrepare(virDomainObjPtr vm, virDomainSnapshotDefPtr def,
}
}
- /* external snapshot is possible without specifying a disk to snapshot */
- if (!found &&
- def->memory != VIR_DOMAIN_SNAPSHOT_LOCATION_EXTERNAL) {
+ /* internal snapshot requires a disk image to store the memory image to */
+ if (def->memory == VIR_DOMAIN_SNAPSHOT_LOCATION_INTERNAL &&
+ !found_internal) {
+ virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
+ _("internal checkpoints require at least "
+ "one disk to be selected for snapshot"));
+ goto cleanup;
+ }
+
+ /* disk snapshot requires at least one disk */
+ if (def->state == VIR_DOMAIN_DISK_SNAPSHOT && !external) {
virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
- _("internal and disk-only snapshots require at least "
+ _("disk-only snapshots require at least "
"one disk to be selected for snapshot"));
goto cleanup;
}
+ /* For now, we don't allow mixing internal and external disks.
+ * XXX technically, we could mix internal and external disks for
+ * offline snapshots */
+ if (found_internal && external) {
+ virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
+ _("mixing internal and external snapshots is not "
+ "supported yet"));
+ goto cleanup;
+ }
+
+ /* Alter flags to let later users know what we learned. */
+ if (external && !active)
+ *flags |= VIR_DOMAIN_SNAPSHOT_CREATE_DISK_ONLY;
+
if (def->state != VIR_DOMAIN_DISK_SNAPSHOT && active) {
if (external == 1 ||
qemuCapsGet(priv->caps, QEMU_CAPS_TRANSACTION)) {
@@ -11360,6 +11492,25 @@ qemuDomainSnapshotCreateXML(virDomainPtr domain,
parse_flags)))
goto cleanup;
+ /* reject the VIR_DOMAIN_SNAPSHOT_CREATE_LIVE flag where not supported */
+ if (flags & VIR_DOMAIN_SNAPSHOT_CREATE_LIVE &&
+ (!virDomainObjIsActive(vm) ||
+ def->memory != VIR_DOMAIN_SNAPSHOT_LOCATION_EXTERNAL ||
+ flags & VIR_DOMAIN_SNAPSHOT_CREATE_REDEFINE)) {
+ virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s",
+ _("live snapshot creation is supported only "
+ "with external checkpoints"));
+ goto cleanup;
+ }
+ if ((def->memory == VIR_DOMAIN_SNAPSHOT_LOCATION_EXTERNAL ||
+ def->memory == VIR_DOMAIN_SNAPSHOT_LOCATION_INTERNAL) &&
+ flags & VIR_DOMAIN_SNAPSHOT_CREATE_DISK_ONLY) {
+ virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s",
+ _("disk-only snapshot creation is not compatible with "
+ "memory snapshot"));
+ goto cleanup;
+ }
+
if (flags & VIR_DOMAIN_SNAPSHOT_CREATE_REDEFINE) {
/* Prevent circular chains */
if (def->parent) {
@@ -11472,15 +11623,12 @@ qemuDomainSnapshotCreateXML(virDomainPtr domain,
goto cleanup;
if (flags & VIR_DOMAIN_SNAPSHOT_CREATE_DISK_ONLY) {
- if (!virDomainObjIsActive(vm)) {
- virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
- _("disk snapshots of inactive domains not "
- "implemented yet"));
- goto cleanup;
- }
align_location = VIR_DOMAIN_SNAPSHOT_LOCATION_EXTERNAL;
align_match = false;
- def->state = VIR_DOMAIN_DISK_SNAPSHOT;
+ if (virDomainObjIsActive(vm))
+ def->state = VIR_DOMAIN_DISK_SNAPSHOT;
+ else
+ def->state = VIR_DOMAIN_SHUTOFF;
def->memory = VIR_DOMAIN_SNAPSHOT_LOCATION_NONE;
} else if (def->memory == VIR_DOMAIN_SNAPSHOT_LOCATION_EXTERNAL) {
def->state = virDomainObjGetState(vm, NULL);
@@ -11523,25 +11671,6 @@ qemuDomainSnapshotCreateXML(virDomainPtr domain,
}
}
- /* reject the VIR_DOMAIN_SNAPSHOT_CREATE_LIVE flag where not supported */
- if (flags & VIR_DOMAIN_SNAPSHOT_CREATE_LIVE &&
- (!virDomainObjIsActive(vm) ||
- snap->def->memory != VIR_DOMAIN_SNAPSHOT_LOCATION_EXTERNAL ||
- flags & VIR_DOMAIN_SNAPSHOT_CREATE_REDEFINE)) {
- virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s",
- _("live snapshot creation is supported only "
- "with external checkpoints"));
- goto cleanup;
- }
- if ((snap->def->memory == VIR_DOMAIN_SNAPSHOT_LOCATION_EXTERNAL ||
- snap->def->memory == VIR_DOMAIN_SNAPSHOT_LOCATION_INTERNAL) &&
- flags & VIR_DOMAIN_SNAPSHOT_CREATE_DISK_ONLY) {
- virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s",
- _("disk-only snapshot creation is not compatible with "
- "memory snapshot"));
- goto cleanup;
- }
-
/* actually do the snapshot */
if (flags & VIR_DOMAIN_SNAPSHOT_CREATE_REDEFINE) {
/* XXX Should we validate that the redefined snapshot even
@@ -11561,9 +11690,19 @@ qemuDomainSnapshotCreateXML(virDomainPtr domain,
goto cleanup;
}
} else {
- /* inactive */
- if (qemuDomainSnapshotCreateInactive(driver, vm, snap) < 0)
- goto cleanup;
+ /* inactive; qemuDomainSnapshotPrepare guaranteed that we
+ * aren't mixing internal and external, and altered flags to
+ * contain DISK_ONLY if there is an external disk. */
+ if (flags & VIR_DOMAIN_SNAPSHOT_CREATE_DISK_ONLY) {
+ bool reuse = !!(flags & VIR_DOMAIN_SNAPSHOT_CREATE_REUSE_EXT);
+
+ if (qemuDomainSnapshotCreateInactiveExternal(driver, vm, snap,
+ reuse) < 0)
+ goto cleanup;
+ } else {
+ if (qemuDomainSnapshotCreateInactiveInternal(driver, vm, snap) < 0)
+ goto cleanup;
+ }
}
/* If we fail after this point, there's not a whole lot we can
--
1.8.0
[libvirt] [PATCH] qemu: Allow the user to specify vendor and product for disk
by Osier Yang
QEMU has supported setting vendor and product strings for a disk since
1.2.0 (only scsi-disk, scsi-hd and scsi-cd support it); this patch
exposes that with new XML elements <vendor> and <product> of the disk
device.
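For example (abridged from the test case added by this patch), the new elements are
used inside a <disk> element like this:

  <disk type='block' device='disk'>
    <source dev='/dev/HostVG/QEMUGuest2'/>
    <target dev='sdb' bus='scsi'/>
    <vendor>SEAGATE</vendor>
    <product>ST3567807GD</product>
  </disk>

and are translated into ",vendor=SEAGATE,product=ST3567807GD" on the corresponding
-device scsi-hd line.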
---
docs/formatdomain.html.in | 10 +++++
docs/schemas/domaincommon.rng | 10 +++++
src/conf/domain_conf.c | 30 ++++++++++++++++
src/conf/domain_conf.h | 2 +
src/qemu/qemu_command.c | 29 ++++++++++++++++
.../qemuxml2argv-disk-scsi-disk-vpd.args | 13 +++++++
.../qemuxml2argv-disk-scsi-disk-vpd.xml | 36 ++++++++++++++++++++
tests/qemuxml2argvtest.c | 4 ++
8 files changed, 134 insertions(+), 0 deletions(-)
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-disk-scsi-disk-vpd.args
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-disk-scsi-disk-vpd.xml
diff --git a/docs/formatdomain.html.in b/docs/formatdomain.html.in
index c8da33d..cc9e871 100644
--- a/docs/formatdomain.html.in
+++ b/docs/formatdomain.html.in
@@ -1657,6 +1657,16 @@
of 16 hexadecimal digits.
<span class='since'>Since 0.10.1</span>
</dd>
+ <dt><code>vendor</code></dt>
+ <dd>If present, this element specifies the vendor of a virtual hard
+ disk or CD-ROM device. It's a not more than 8 bytes alphanumeric string.
+ <span class='since'>Since 1.0.1</span>
+ </dd>
+ <dt><code>product</code></dt>
+ <dd>If present, this element specifies the product of a virtual hard
+ disk or CD-ROM device. It's a not more than 16 bytes alphanumeric string.
+ <span class='since'>Since 1.0.1</span>
+ </dd>
<dt><code>host</code></dt>
<dd>The <code>host</code> element has two attributes "name" and "port",
which specify the hostname and the port number. The meaning of this
diff --git a/docs/schemas/domaincommon.rng b/docs/schemas/domaincommon.rng
index 2beb035..ed7d1d0 100644
--- a/docs/schemas/domaincommon.rng
+++ b/docs/schemas/domaincommon.rng
@@ -905,6 +905,16 @@
<ref name="wwn"/>
</element>
</optional>
+ <optional>
+ <element name="vendor">
+ <text/>
+ </element>
+ </optional>
+ <optional>
+ <element name="product">
+ <text/>
+ </element>
+ </optional>
</interleave>
</define>
<define name="snapshot">
diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index 0575fcd..db6608e 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -979,6 +979,8 @@ void virDomainDiskDefFree(virDomainDiskDefPtr def)
VIR_FREE(def->mirror);
VIR_FREE(def->auth.username);
VIR_FREE(def->wwn);
+ VIR_FREE(def->vendor);
+ VIR_FREE(def->product);
if (def->auth.secretType == VIR_DOMAIN_DISK_SECRET_TYPE_USAGE)
VIR_FREE(def->auth.secret.usage);
virStorageEncryptionFree(def->encryption);
@@ -3498,6 +3500,8 @@ cleanup:
goto cleanup;
}
+#define VENDOR_LEN 8
+#define PRODUCT_LEN 16
/* Parse the XML definition for a disk
* @param node XML nodeset to parse for disk definition
@@ -3550,6 +3554,8 @@ virDomainDiskDefParseXML(virCapsPtr caps,
char *logical_block_size = NULL;
char *physical_block_size = NULL;
char *wwn = NULL;
+ char *vendor = NULL;
+ char *product = NULL;
if (VIR_ALLOC(def) < 0) {
virReportOOMError();
@@ -3888,6 +3894,24 @@ virDomainDiskDefParseXML(virCapsPtr caps,
if (!virValidateWWN(wwn))
goto error;
+ } else if (!vendor &&
+ xmlStrEqual(cur->name, BAD_CAST "vendor")) {
+ vendor = (char *)xmlNodeGetContent(cur);
+
+ if (strlen(vendor) > VENDOR_LEN) {
+ virReportError(VIR_ERR_XML_ERROR, "%s",
+ _("disk vendor is more than 8 bytes string"));
+ goto error;
+ }
+ } else if (!product &&
+ xmlStrEqual(cur->name, BAD_CAST "product")) {
+ product = (char *)xmlNodeGetContent(cur);
+
+ if (strlen(product) > PRODUCT_LEN) {
+ virReportError(VIR_ERR_XML_ERROR, "%s",
+ _("disk product is more than 16 bytes string"));
+ goto error;
+ }
} else if (xmlStrEqual(cur->name, BAD_CAST "boot")) {
/* boot is parsed as part of virDomainDeviceInfoParseXML */
}
@@ -4184,6 +4208,10 @@ virDomainDiskDefParseXML(virCapsPtr caps,
serial = NULL;
def->wwn = wwn;
wwn = NULL;
+ def->vendor = vendor;
+ vendor = NULL;
+ def->product = product;
+ product = NULL;
if (driverType) {
def->format = virStorageFileFormatTypeFromString(driverType);
@@ -4257,6 +4285,8 @@ cleanup:
VIR_FREE(logical_block_size);
VIR_FREE(physical_block_size);
VIR_FREE(wwn);
+ VIR_FREE(vendor);
+ VIR_FREE(product);
ctxt->node = save_ctxt;
return def;
diff --git a/src/conf/domain_conf.h b/src/conf/domain_conf.h
index 6539281..c7c1ca6 100644
--- a/src/conf/domain_conf.h
+++ b/src/conf/domain_conf.h
@@ -591,6 +591,8 @@ struct _virDomainDiskDef {
char *serial;
char *wwn;
+ char *vendor;
+ char *product;
int cachemode;
int error_policy; /* enum virDomainDiskErrorPolicy */
int rerror_policy; /* enum virDomainDiskErrorPolicy */
diff --git a/src/qemu/qemu_command.c b/src/qemu/qemu_command.c
index 389c480..b0b81f3 100644
--- a/src/qemu/qemu_command.c
+++ b/src/qemu/qemu_command.c
@@ -2428,6 +2428,13 @@ qemuBuildDriveDevStr(virDomainDefPtr def,
}
}
+ if ((disk->vendor || disk->product) &&
+ disk->bus != VIR_DOMAIN_DISK_BUS_SCSI) {
+ virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
+ _("Only scsi disk support vendor and product"));
+ goto error;
+ }
+
if (disk->device == VIR_DOMAIN_DISK_DEVICE_LUN) {
/* make sure that both the bus and the qemu binary support
* type='lun' (SG_IO).
@@ -2455,6 +2462,11 @@ qemuBuildDriveDevStr(virDomainDefPtr def,
_("Setting wwn is not supported for lun device"));
goto error;
}
+ if (disk->vendor || disk->product) {
+ virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
+ _("Setting vendor or product is not supported for lun device"));
+ goto error;
+ }
}
switch (disk->bus) {
@@ -2504,6 +2516,17 @@ qemuBuildDriveDevStr(virDomainDefPtr def,
goto error;
}
+ /* Properties wwn, vendor and product were introduced in the
+ * same QEMU release (1.2.0).
+ */
+ if ((disk->vendor || disk->product) &&
+ !qemuCapsGet(caps, QEMU_CAPS_SCSI_DISK_WWN)) {
+ virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
+ _("Setting vendor or product for scsi disk is not "
+ "supported by this QEMU"));
+ goto error;
+ }
+
controllerModel =
virDomainDiskFindControllerModel(def, disk,
VIR_DOMAIN_CONTROLLER_TYPE_SCSI);
@@ -2649,6 +2672,12 @@ qemuBuildDriveDevStr(virDomainDefPtr def,
if (disk->wwn)
virBufferAsprintf(&opt, ",wwn=%s", disk->wwn);
+ if (disk->vendor)
+ virBufferAsprintf(&opt, ",vendor=%s", disk->vendor);
+
+ if (disk->product)
+ virBufferAsprintf(&opt, ",product=%s", disk->product);
+
if (virBufferError(&opt)) {
virReportOOMError();
goto error;
diff --git a/tests/qemuxml2argvdata/qemuxml2argv-disk-scsi-disk-vpd.args b/tests/qemuxml2argvdata/qemuxml2argv-disk-scsi-disk-vpd.args
new file mode 100644
index 0000000..4aefb7f
--- /dev/null
+++ b/tests/qemuxml2argvdata/qemuxml2argv-disk-scsi-disk-vpd.args
@@ -0,0 +1,13 @@
+LC_ALL=C PATH=/bin HOME=/home/test USER=test LOGNAME=test \
+/usr/bin/qemu -S -M pc -m 214 -smp 1 -nographic -nodefconfig -nodefaults \
+-monitor unix:/tmp/test-monitor,server,nowait -no-acpi -boot c \
+-device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x3 \
+-device lsi,id=scsi1,bus=pci.0,addr=0x4 \
+-usb \
+-drive file=/dev/HostVG/QEMUGuest1,if=none,id=drive-scsi0-0-1-0 \
+-device scsi-cd,bus=scsi0.0,channel=0,scsi-id=1,lun=0,drive=drive-scsi0-0-1-0,\
+id=scsi0-0-1-0,vendor=SEAGATE,product=ST3146707LC \
+-drive file=/dev/HostVG/QEMUGuest2,if=none,id=drive-scsi0-0-0-0 \
+-device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,\
+id=scsi0-0-0-0,vendor=SEAGATE,product=ST3567807GD \
+-device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
diff --git a/tests/qemuxml2argvdata/qemuxml2argv-disk-scsi-disk-vpd.xml b/tests/qemuxml2argvdata/qemuxml2argv-disk-scsi-disk-vpd.xml
new file mode 100644
index 0000000..4918e37
--- /dev/null
+++ b/tests/qemuxml2argvdata/qemuxml2argv-disk-scsi-disk-vpd.xml
@@ -0,0 +1,36 @@
+<domain type='qemu'>
+ <name>QEMUGuest1</name>
+ <uuid>c7a5fdbd-edaf-9455-926a-d65c16db1809</uuid>
+ <memory unit='KiB'>219136</memory>
+ <currentMemory unit='KiB'>219136</currentMemory>
+ <vcpu placement='static'>1</vcpu>
+ <os>
+ <type arch='i686' machine='pc'>hvm</type>
+ <boot dev='hd'/>
+ </os>
+ <clock offset='utc'/>
+ <on_poweroff>destroy</on_poweroff>
+ <on_reboot>restart</on_reboot>
+ <on_crash>destroy</on_crash>
+ <devices>
+ <emulator>/usr/bin/qemu</emulator>
+ <disk type='block' device='cdrom'>
+ <source dev='/dev/HostVG/QEMUGuest1'/>
+ <target dev='sda' bus='scsi'/>
+ <address type='drive' controller='0' bus='0' target='1' unit='0'/>
+ <vendor>SEAGATE</vendor>
+ <product>ST3146707LC</product>
+ </disk>
+ <disk type='block' device='disk'>
+ <source dev='/dev/HostVG/QEMUGuest2'/>
+ <target dev='sdb' bus='scsi'/>
+ <address type='drive' controller='0' bus='0' target='0' unit='0'/>
+ <vendor>SEAGATE</vendor>
+ <product>ST3567807GD</product>
+ </disk>
+ <controller type='usb' index='0'/>
+ <controller type='scsi' index='0' model='virtio-scsi'/>
+ <controller type='scsi' index='1' model='lsilogic'/>
+ <memballoon model='virtio'/>
+ </devices>
+</domain>
diff --git a/tests/qemuxml2argvtest.c b/tests/qemuxml2argvtest.c
index 20b0b35..39a7e3f 100644
--- a/tests/qemuxml2argvtest.c
+++ b/tests/qemuxml2argvtest.c
@@ -499,6 +499,10 @@ mymain(void)
QEMU_CAPS_DRIVE, QEMU_CAPS_DEVICE, QEMU_CAPS_NODEFCONFIG,
QEMU_CAPS_SCSI_CD, QEMU_CAPS_SCSI_LSI, QEMU_CAPS_VIRTIO_SCSI_PCI,
QEMU_CAPS_SCSI_DISK_WWN);
+ DO_TEST("disk-scsi-disk-vpd",
+ QEMU_CAPS_DRIVE, QEMU_CAPS_DEVICE, QEMU_CAPS_NODEFCONFIG,
+ QEMU_CAPS_SCSI_CD, QEMU_CAPS_SCSI_LSI, QEMU_CAPS_VIRTIO_SCSI_PCI,
+ QEMU_CAPS_SCSI_DISK_WWN);
DO_TEST("disk-scsi-vscsi",
QEMU_CAPS_DRIVE, QEMU_CAPS_DEVICE, QEMU_CAPS_NODEFCONFIG);
DO_TEST("disk-scsi-virtio-scsi",
--
1.7.7.6
[libvirt] [PATCH] Fix "virsh create" example
by Guido Günther
We require a file and don't accept standard input:
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=692322
---
tools/virsh.pod | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/virsh.pod b/tools/virsh.pod
index 0808d72..0984e6e 100644
--- a/tools/virsh.pod
+++ b/tools/virsh.pod
@@ -553,7 +553,7 @@ B<Example>
virsh dumpxml <domain> > domain.xml
vi domain.xml (or make changes with your other text editor)
- virsh create < domain.xml
+ virsh create domain.xml
=item B<define> I<FILE>
--
1.7.10.4
Re: [libvirt] [PATCH 1/2] Introduce a lock for libxl long-running api
by Jim Fehlig
Bamvor Jian Zhang wrote:
>>> +static int
>>> +libxlDomainAbortJob(virDomainPtr dom)
>>> +{
>>> + libxlDriverPrivatePtr driver = dom->conn->privateData;
>>> + virDomainObjPtr vm;
>>> + int ret = -1;
>>> + libxlDomainObjPrivatePtr priv;
>>> +
>>> + libxlDriverLock(driver);
>>> + vm = virDomainFindByUUID(&driver->domains, dom->uuid);
>>> + libxlDriverUnlock(driver);
>>> + if (!vm) {
>>> + char uuidstr[VIR_UUID_STRING_BUFLEN];
>>> + virUUIDFormat(dom->uuid, uuidstr);
>>> + virReportError(VIR_ERR_NO_DOMAIN,
>>> + _("no domain with matching uuid '%s'"), uuidstr);
>>> + goto cleanup;
>>> + }
>>> +
>>> + if (libxlDomainObjBeginJob(driver, vm, LIBXL_JOB_ABORT) < 0)
>>> + goto cleanup;
>>> +
>>> + if (!virDomainObjIsActive(vm)) {
>>> + virReportError(VIR_ERR_OPERATION_INVALID,
>>> + "%s", _("domain is not running"));
>>> + goto endjob;
>>> + }
>>> +
>>> + priv = vm->privateData;
>>> +
>>> + if (!priv->job.asyncJob) {
>>> + virReportError(VIR_ERR_OPERATION_INVALID,
>>> + "%s", _("no job is active on the domain"));
>>> + goto endjob;
>>> + } else {
>>> + virReportError(VIR_ERR_OPERATION_INVALID,
>>> + _("cannot abort %s; use virDomainDestroy instead"),
>>> + libxlDomainAsyncJobTypeToString(priv->job.asyncJob));
>>> + goto endjob;
>>> + }
>>>
>>>
>>
>> This function will always fail with the above logic. ret is initialized
>> to -1 and is never changed.
>>
>> Is it even possible to safely abort a libxl operation? If not, this
>> function should probably remain unimplemented. Maybe it will be useful
>> when the libxl driver supports migration.
>>
> It returns an error because there is no cancellation operation in the libvirt libxl
> driver with Xen 4.1. According to the Xen 4.2 release document, cancellation
> of long-running jobs may be supported.
I finally got some time to take a closer look at Xen 4.2 libxl and
noticed that the "long running" operations (create, save, dump, restore,
etc.) now support a 'libxl_asyncop_how *ao_how' parameter to control
their concurrency. If ao_how->callback is NULL, a libxl_event is
generated when the operation completes. We'll just need to handle these
events in the existing libxlEventHandler. Some of the async operations
support reporting intermediate progress (e.g. for
libxlDomainGetJobInfo), but at this time none of them support
cancellation AFAICT.
With the new asynchronous support in Xen 4.2 libxl, IMO we should delay
this patchset until converting the driver to support 4.2. I didn't
think this patch would be affected by Xen 4.1 vs 4.2 libxl, but it is
and I don't see any reason to add code that further complicates the
conversion.
BTW, Ondrej was working on a patch to convert the driver to 4.2. Now
that I have some free time, I'll pick up some of this work too.
> but it is still useful for save, dump and migration (in the future), because libvirt
> should block the user's abort operation, otherwise xenlight might crash
>
If it is left unimplemented, libvirt would block the operation anyhow,
failing with "not supported"
Regards,
Jim