Hi all,
Recently, QEMU developers have been working on a feature that allows upgrading a live QEMU
instance to a new version without restarting the VM. It is implemented as a live migration
between the old and new QEMU processes on the same host [1]. Here is the use case:
1) Guests are running QEMU release 1.6.1.
2) Admin installs QEMU release 1.6.2 via RPM or deb.
3) Admin starts a new VM using the updated QEMU binary, and asks the old
QEMU process to migrate the VM to the newly started VM.
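With the new virsh command proposed below, steps 2) and 3) reduce to something like the
following for the admin (a sketch; the package file and domain name are placeholders):
  rpm -U qemu-kvm-1.6.2-1.fc19.x86_64.rpm   # step 2: install the new QEMU build
  virsh qemu-live-upgrade guest01           # step 3: hand the guest over to the new binary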
I think it would be very useful to support QEMU live upgrade in libvirt. After some
investigation, I found that migrating to the same host breaks the current migration code,
so I'd like to propose a new workflow for QEMU live upgrade. It implements step 3) above.
I added a new API named virDomainQemuLiveUpgrade, and a new domain command in virsh named
qemu-live-upgrade. The workflow of virDomainQemuLiveUpgrade is as follows:
  newDef = deep copy of the oldVm definition
  newVm = create a VM using newDef, starting the QEMU process with all vCPUs paused
  migrate oldVm to newVm over a Unix socket
  shut down oldVm
  newPid = newVm->pid
  finalDef = live deep copy of the newVm definition
  drop newVm from the QEMU domain table without shutting down its QEMU process
  assign finalDef to oldVm
  attach oldVm to the QEMU process newPid using finalDef
  resume all vCPUs in oldVm
I wrote an RFC patch to demonstrate this workflow. To try it, first apply the patch, build
and install libvirt, then create and start a KVM virtual machine, and finally run the
following command:
virsh qemu-live-upgrade your_domain_name
Check the output of "virsh list" and "ps aux | grep qemu"; you will find that the virtual
machine has a new ID, and that a new QEMU process is running with a different process ID.
I tested this patch on a Fedora 19 box with QEMU upgraded to 1.6. I'd like to hear your
opinions on this upgrade flow. Once we reach agreement on it, I can start writing a more
formal patch.
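The same upgrade can also be driven through the new API from a small C program. Here is a
minimal usage sketch, assuming this patch is applied (error reporting trimmed):

  #include <stdio.h>
  #include <libvirt/libvirt.h>

  int main(int argc, char **argv)
  {
      virConnectPtr conn;
      virDomainPtr dom, upgraded;

      if (argc < 2 || !(conn = virConnectOpen("qemu:///system")))
          return 1;
      if ((dom = virDomainLookupByName(conn, argv[1]))) {
          /* On success we get a new domain object backed by the new QEMU process */
          if ((upgraded = virDomainQemuLiveUpgrade(dom, 0))) {
              printf("upgraded, new id: %u\n", virDomainGetID(upgraded));
              virDomainFree(upgraded);
          }
          virDomainFree(dom);
      }
      virConnectClose(conn);
      return 0;
  }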
After the upgrade workflow matures, I want to add a "page-flipping" flag to this new API.
The reason is that migrating to localhost requires twice the host memory of the original VM
during the upgrade. Hence the ongoing development of using the vmsplice system call to move
memory pages between the two QEMU instances, so that the kernel can simply re-map pages
from the source QEMU process to the destination QEMU process in a zero-copy manner. This is
expected to reduce memory consumption and speed up the whole procedure. The mechanism is
based on Unix domain sockets, pipes and FD inter-process passing magic [2].
The page re-mapping mechanism is transparent to libvirt; all we need to do to trigger this
magic is (1) set the QEMU migration capability that enables page re-mapping, (2) start the
destination QEMU process with "-incoming unix:/path/to/socket", and (3) use the
"unix:/path/to/socket" URI when issuing the QMP migrate command to the source QEMU process.
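For illustration, the QMP side of steps (1) and (3) might look like this on the source QEMU
monitor (a sketch: "page-flipping" as the capability name is my assumption, the actual name
depends on the final QEMU patches):

  { "execute": "migrate-set-capabilities",
    "arguments": { "capabilities": [
      { "capability": "page-flipping", "state": true } ] } }
  { "execute": "migrate", "arguments": { "uri": "unix:/path/to/socket" } }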
This RFC patch already uses a Unix socket to migrate the QEMU virtual machine. I'll add
code to parse and inspect a "page-flipping" flag, and call the QEMU monitor to enable this
capability. Thanks very much for any comments on this patch!
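On the libvirt side, the flag could be exposed along these lines (purely illustrative and
not part of this patch; the name and value are assumptions until we agree on the design):

  /* Hypothetical flag for virDomainQemuLiveUpgrade() */
  typedef enum {
      VIR_DOMAIN_QEMU_LIVE_UPGRADE_PAGE_FLIPPING = (1 << 0),
  } virDomainQemuLiveUpgradeFlags;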
[1] http://lists.nongnu.org/archive/html/qemu-devel/2013-08/msg02916.html
[2] http://lists.gnu.org/archive/html/qemu-devel/2013-09/msg04043.html
From 2b659584f2cbe676c843ddeaf198c9a8368ff0ff Mon Sep 17 00:00:00 2001
From: Zhou Zheng Sheng <zhshzhou@linux.vnet.ibm.com>
Date: Wed, 30 Oct 2013 15:36:49 +0800
Subject: [PATCH] RFC: Support QEMU live upgrade
This patch supports upgrading the QEMU version without restarting the
virtual machine.
Add a new API virDomainQemuLiveUpgrade(), and a new virsh command
qemu-live-upgrade. virDomainQemuLiveUpgrade() migrates a running VM to
the same host as a new VM with a new name and a new UUID. Then it shuts
down the original VM and drops the new VM definition without shutting
down the QEMU process of the new VM. Finally it attaches the original
VM to the new QEMU process.
First the admin installs the new QEMU package, then runs
virsh qemu-live-upgrade domain_name
to trigger the virDomainQemuLiveUpgrade() upgrade flow.
Signed-off-by: Zhou Zheng Sheng <zhshzhou@linux.vnet.ibm.com>
---
include/libvirt/libvirt.h.in | 3 +
src/driver.h | 4 +
src/libvirt.c | 23 +++
src/libvirt_public.syms | 1 +
src/qemu/qemu_driver.c | 339 +++++++++++++++++++++++++++++++++++++++++++
src/qemu/qemu_migration.c | 2 +-
src/qemu/qemu_migration.h | 3 +
src/remote/remote_driver.c | 1 +
src/remote/remote_protocol.x | 19 ++-
tools/virsh-domain.c | 139 ++++++++++++++++++
10 files changed, 532 insertions(+), 2 deletions(-)
diff --git a/include/libvirt/libvirt.h.in b/include/libvirt/libvirt.h.in
index 80b2d78..7c87044 100644
--- a/include/libvirt/libvirt.h.in
+++ b/include/libvirt/libvirt.h.in
@@ -1331,6 +1331,9 @@ int virDomainMigrateGetMaxSpeed(virDomainPtr domain,
unsigned long *bandwidth,
unsigned int flags);
+virDomainPtr virDomainQemuLiveUpgrade(virDomainPtr domain,
+ unsigned int flags);
+
/**
* VIR_NODEINFO_MAXCPUS:
* @nodeinfo: virNodeInfo instance
diff --git a/src/driver.h b/src/driver.h
index 8cd164a..1bafa98 100644
--- a/src/driver.h
+++ b/src/driver.h
@@ -686,6 +686,9 @@ typedef int
const char *args,
char ***models,
unsigned int flags);
+typedef virDomainPtr
+(*virDrvDomainQemuLiveUpgrade)(virDomainPtr domain,
+ unsigned int flags);
typedef int
(*virDrvDomainGetJobInfo)(virDomainPtr domain,
@@ -1339,6 +1342,7 @@ struct _virDriver {
virDrvDomainMigrateFinish3Params domainMigrateFinish3Params;
virDrvDomainMigrateConfirm3Params domainMigrateConfirm3Params;
virDrvConnectGetCPUModelNames connectGetCPUModelNames;
+ virDrvDomainQemuLiveUpgrade domainQemuLiveUpgrade;
};
diff --git a/src/libvirt.c b/src/libvirt.c
index 90608ab..9e5ff8a 100644
--- a/src/libvirt.c
+++ b/src/libvirt.c
@@ -7524,6 +7524,29 @@ error:
/**
+ * virDomainQemuLiveUpgrade:
+ * @domain: a domain object
+ * @flags: extra flags; not used yet, so callers should always pass 0
+ *
+ * Live upgrade the QEMU binary version of the domain.
+ *
+ * Returns the new domain object if the upgrade was successful,
+ * or NULL in case of error.
+ */
+virDomainPtr
+virDomainQemuLiveUpgrade(virDomainPtr domain,
+ unsigned int flags)
+{
+ VIR_DEBUG("domain=%p, flags=%x", domain, flags);
+ if (!domain->conn->driver->domainQemuLiveUpgrade) {
+ virLibConnError(VIR_ERR_INTERNAL_ERROR, __FUNCTION__);
+ return NULL;
+ }
+ return domain->conn->driver->domainQemuLiveUpgrade(domain, flags);
+}
+
+
+/**
* virNodeGetInfo:
* @conn: pointer to the hypervisor connection
* @info: pointer to a virNodeInfo structure allocated by the user
diff --git a/src/libvirt_public.syms b/src/libvirt_public.syms
index fe9b497..82f0b37 100644
--- a/src/libvirt_public.syms
+++ b/src/libvirt_public.syms
@@ -637,6 +637,7 @@ LIBVIRT_1.1.1 {
LIBVIRT_1.1.3 {
global:
virConnectGetCPUModelNames;
+ virDomainQemuLiveUpgrade;
} LIBVIRT_1.1.1;
# .... define new API here using predicted next version number ....
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index ef1359c..7cd76e0 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -15705,6 +15705,344 @@ qemuConnectGetCPUModelNames(virConnectPtr conn,
}
+static virDomainDefPtr
+virDomainDefLiveCopy(virDomainDefPtr src,
+ virCapsPtr caps,
+ virDomainXMLOptionPtr xmlopt)
+{
+ char *xml;
+ virDomainDefPtr ret;
+ unsigned int flags = VIR_DOMAIN_XML_SECURE;
+
+ /* Easiest to clone via a round-trip through XML. */
+ if (!(xml = virDomainDefFormat(src, flags)))
+ return NULL;
+
+ ret = virDomainDefParseString(xml, caps, xmlopt, -1, flags);
+
+ VIR_FREE(xml);
+ return ret;
+}
+
+
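+/* Begin: deep-copy the domain definition, giving the copy a temporary
+ * name and a fresh UUID for the destination VM. */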
+static virDomainDefPtr
+qemuLiveUpgradeMiniBegin(virQEMUDriverPtr driver, virDomainObjPtr vm) {
+ virCapsPtr caps = NULL;
+ virDomainDefPtr newDef = NULL;
+ virDomainDefPtr result = NULL;
+ char *newName = NULL;
+
+ if (!(caps = virQEMUDriverGetCapabilities(driver, false)))
+ goto cleanup;
+
+ if (!(newDef = virDomainDefCopy(vm->def, caps, driver->xmlopt, true)))
+ goto cleanup;
+
+ if (virAsprintf(&newName, "%s_qemu_live_upgrade", vm->def->name) < 0)
+ goto cleanup;
+
+ VIR_FREE(newDef->name);
+ newDef->name = newName;
+
+ if (-1 == virUUIDGenerate(newDef->uuid))
+ goto cleanup;
+
+ result = newDef;
+ newDef = NULL;
+
+cleanup:
+ virDomainDefFree(newDef);
+ virObjectUnref(caps);
+
+ return result;
+}
+
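+/* Prepare: register the temporary definition and start a paused QEMU
+ * that listens for the incoming migration on a Unix socket. */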
+static virDomainObjPtr
+qemuLiveUpgradeMiniPrepare(virQEMUDriverPtr driver, virConnectPtr conn,
+ virDomainDefPtr newDef) {
+ virDomainObjPtr newVm = NULL;
+ virDomainObjPtr result = NULL;
+ char *upgradeUri = NULL;
+ virQEMUDriverConfigPtr cfg = virQEMUDriverGetConfig(driver);
+
+ newVm = virDomainObjListAdd(driver->domains, newDef, driver->xmlopt,
+ VIR_DOMAIN_OBJ_LIST_ADD_LIVE |
+ VIR_DOMAIN_OBJ_LIST_ADD_CHECK_LIVE, NULL);
+ if (!newVm)
+ goto cleanup;
+ newDef = NULL;
+
+ if (virAsprintf(&upgradeUri, "unix:%s/qemu.live.upgrade.%s.sock",
+ cfg->libDir, newVm->def->name) < 0)
+ goto cleanup;
+
+ if (qemuProcessStart(conn, driver, newVm, upgradeUri, -1, NULL,
+ NULL, VIR_NETDEV_VPORT_PROFILE_OP_MIGRATE_IN_START,
+ VIR_QEMU_PROCESS_START_PAUSED |
+ VIR_QEMU_PROCESS_START_AUTODESTROY) < 0)
+ goto cleanup;
+
+ result = newVm;
+ newVm = NULL;
+
+cleanup:
+ if (newVm)
+ qemuDomainRemoveInactive(driver, newVm);
+ VIR_FREE(upgradeUri);
+ virDomainDefFree(newDef);
+ virObjectUnref(cfg);
+
+ if (result)
+ virObjectUnlock(result);
+ return result;
+}
+
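+/* Perform: migrate the running VM into the Unix socket. The source QEMU
+ * is stopped afterwards, as it cannot be resumed once migration started. */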
+static bool
+qemuLiveUpgradeMiniPerform(virQEMUDriverPtr driver, virDomainObjPtr vm,
+ const char *newName) {
+ char *upgradeSock = NULL;
+ qemuDomainObjPrivatePtr priv = vm->privateData;
+ bool result = false;
+ bool migrate = false;
+ int r = 0;
+ virQEMUDriverConfigPtr cfg = virQEMUDriverGetConfig(driver);
+
+ if (virAsprintf(&upgradeSock, "%s/qemu.live.upgrade.%s.sock",
+ cfg->libDir, newName) < 0)
+ goto cleanup;
+
+ if (qemuDomainObjEnterMonitorAsync(driver, vm, QEMU_ASYNC_JOB_NONE) < 0)
+ goto cleanup;
+ r = qemuMonitorMigrateToUnix(priv->mon, 0, upgradeSock);
+ qemuDomainObjExitMonitor(driver, vm);
+ migrate = true;
+ if (r < 0)
+ goto cleanup;
+
+ if (!virDomainObjIsActive(vm)) {
+ virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+ _("guest unexpectedly quit"));
+ goto cleanup;
+ }
+
+ if (qemuMigrationWaitForCompletion(driver, vm, QEMU_ASYNC_JOB_NONE,
+ NULL, true) < 0)
+ goto cleanup;
+
+ if (virDomainObjGetState(vm, NULL) == VIR_DOMAIN_RUNNING) {
+ if (qemuMigrationSetOffline(driver, vm) < 0)
+ goto cleanup;
+ }
+
+ result = true;
+
+cleanup:
+ /* The QEMU memory pages have been re-mapped during the migration, so
+ * there is no way to continue the original VM */
+ if (migrate)
+ qemuProcessStop(driver, vm, VIR_DOMAIN_SHUTOFF_MIGRATED,
+ QEMU_ASYNC_JOB_NONE);
+ VIR_FREE(upgradeSock);
+ virObjectUnref(cfg);
+
+ return result;
+}
+
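+/* Finish: steal the destination QEMU's monitor config, pidfile and pid,
+ * restore the original name/UUID in a copied definition, and drop the
+ * temporary domain from the list without killing the process. */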
+static bool
+qemuLiveUpgradeMiniFinish(virQEMUDriverPtr driver, virDomainObjPtr newVm,
+ const char *origName, const unsigned char *origUuid,
+ virDomainDefPtr *newDef, bool *newMonJSON,
+ virDomainChrSourceDefPtr *newMonConf,
+ pid_t *newPid, const char **newPidFile) {
+ virCapsPtr caps = NULL;
+ qemuDomainObjPrivatePtr newPriv = newVm->privateData;
+ bool result = false;
+ size_t i = 0;
+
+ virDomainDefPtr tmpDef = NULL;
+ virDomainChrSourceDefPtr tmpMonConf = NULL;
+ char *tmpPidFile = NULL;
+
+ if (!(caps = virQEMUDriverGetCapabilities(driver, false)))
+ goto abort;
+
+ newVm->persistent = 0;
+ qemuProcessAutoDestroyRemove(driver, newVm);
+
+ if (!(tmpDef = virDomainDefLiveCopy(newVm->def, caps, driver->xmlopt)))
+ goto abort;
+
+ VIR_FREE(tmpDef->name);
+ if (VIR_STRDUP(tmpDef->name, origName) < 0) {
+ goto abort;
+ }
+
+ for (i=0; i < VIR_UUID_BUFLEN; ++i) {
+ tmpDef->uuid[i] = origUuid[i];
+ }
+
+ if (VIR_ALLOC(tmpMonConf) < 0)
+ goto abort;
+
+ if (virDomainChrSourceDefCopy(tmpMonConf, newPriv->monConfig) < 0)
+ goto abort;
+
+ if (VIR_STRDUP(tmpPidFile, newPriv->pidfile) < 0)
+ goto abort;
+
+ if (newPriv->mon) {
+ qemuMonitorClose(newPriv->mon);
+ newPriv->mon = NULL;
+ }
+
+ *newPid = newVm->pid;
+ *newPidFile = tmpPidFile;
+ *newDef = tmpDef;
+ *newMonConf = tmpMonConf;
+ *newMonJSON = newPriv->monJSON;
+ result = true;
+
+cleanup:
+ virObjectUnref(caps);
+
+ qemuDomainRemoveInactive(driver, newVm);
+ return result;
+
+abort:
+ VIR_FREE(tmpPidFile);
+ virDomainChrSourceDefFree(tmpMonConf);
+ virDomainDefFree(tmpDef);
+ qemuProcessStop(driver, newVm, VIR_DOMAIN_SHUTOFF_MIGRATED,
+ QEMU_ASYNC_JOB_NONE);
+ goto cleanup;
+}
+
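+/* Confirm: re-register the definition under the original name, attach
+ * to the destination QEMU process and resume its vCPUs. */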
+static virDomainPtr
+qemuLiveUpgradeMiniConfirm(virQEMUDriverPtr driver, virConnectPtr conn,
+ virDomainDefPtr newDef, bool newMonJSON,
+ virDomainChrSourceDefPtr newMonConf,
+ pid_t newPid, const char *newPidFile) {
+ virDomainPtr newDom = NULL;
+ virDomainObjPtr newVm = NULL;
+ int r = 0;
+
+ if (!(newVm = virDomainObjListAdd(driver->domains, newDef, driver->xmlopt,
+ 0, NULL)))
+ goto cleanup;
+ newDef = NULL;
+ newVm->def->id = -1;
+
+ VIR_SHRINK_N(newVm->def->seclabels, newVm->def->nseclabels,
+ newVm->def->nseclabels);
+ if (virSecurityManagerGenLabel(driver->securityManager, newVm->def) < 0)
+ goto cleanup;
+ r = qemuProcessAttach(conn, driver, newVm, newPid,
+ newPidFile, newMonConf, newMonJSON);
+ newMonConf = NULL;
+ if (r < 0)
+ goto cleanup;
+
+ if (qemuProcessStartCPUs(driver, newVm, conn,
+ VIR_DOMAIN_RUNNING_MIGRATED,
+ QEMU_ASYNC_JOB_NONE) < 0) {
+ qemuProcessStop(driver, newVm, VIR_DOMAIN_SHUTOFF_FAILED,
+ VIR_QEMU_PROCESS_STOP_MIGRATED);
+ goto cleanup;
+ }
+ newDom = virGetDomain(conn, newVm->def->name, newVm->def->uuid);
+
+cleanup:
+ VIR_FREE(newPidFile);
+ virDomainChrSourceDefFree(newMonConf);
+ virDomainDefFree(newDef);
+ if (newVm)
+ virObjectUnlock(newVm);
+
+ return newDom;
+}
+
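+/* Drive the whole Begin/Prepare/Perform/Finish/Confirm sequence for one
+ * domain. */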
+static virDomainPtr
+qemuDomObjQemuLiveUpgrade(virQEMUDriverPtr driver, virConnectPtr conn,
+ virDomainObjPtr vm, unsigned int flags) {
+ char *origName = NULL;
+ unsigned char origUuid[VIR_UUID_BUFLEN];
+ size_t i = 0;
+ virDomainPtr newDom = NULL;
+ virDomainDefPtr newDef = NULL;
+ virDomainObjPtr newVm = NULL;
+ virDomainChrSourceDefPtr newMonConf = NULL;
+ bool newMonJSON = false;
+ pid_t newPid = -1;
+ const char * newPidFile = NULL;
+ virDomainDefPtr finalDef = NULL;
+
+ VIR_DEBUG("vm=%p, flags=%x", vm, flags);
+
+ if (!(newDef = qemuLiveUpgradeMiniBegin(driver, vm)))
+ goto cleanup;
+
+ virObjectUnlock(vm);
+ newVm = qemuLiveUpgradeMiniPrepare(driver, conn, newDef);
+ virObjectLock(vm);
+ newDef = NULL;
+ if (!newVm) {
+ goto cleanup;
+ }
+
+ if (!qemuLiveUpgradeMiniPerform(driver, vm, newVm->def->name)) {
+ goto cleanup;
+ }
+
+ if (VIR_STRDUP(origName, vm->def->name) < 0)
+ goto cleanup;
+ for (i=0; i < VIR_UUID_BUFLEN; ++i) {
+ origUuid[i] = vm->def->uuid[i];
+ }
+ virObjectUnlock(vm);
+ vm = NULL;
+
+ if (!qemuLiveUpgradeMiniFinish(driver, newVm, origName, origUuid,
+ &finalDef, &newMonJSON, &newMonConf, &newPid,
+ &newPidFile))
+ goto cleanup;
+ newVm = NULL;
+
+ newDom = qemuLiveUpgradeMiniConfirm(driver, conn, finalDef, newMonJSON,
+ newMonConf, newPid, newPidFile);
+ finalDef = NULL;
+ newMonConf = NULL;
+ newPidFile = NULL;
+
+cleanup:
+ VIR_FREE(origName);
+ if (newVm)
+ qemuDomainRemoveInactive(driver, newVm);
+ if (vm)
+ virObjectUnlock(vm);
+
+ return newDom;
+}
+
+static virDomainPtr
+qemuDomainQemuLiveUpgrade(virDomainPtr domain,
+ unsigned int flags)
+{
+ virQEMUDriverPtr driver = domain->conn->privateData;
+ virDomainObjPtr vm = NULL;
+
+ VIR_DEBUG("domain=%p, flags=%x", domain, flags);
+
+ if (!(vm = qemuDomObjFromDomain(domain)))
+ return NULL;
+
+ if (virDomainQemuLiveUpgradeEnsureACL(domain->conn, vm->def) < 0) {
+ virObjectUnlock(vm);
+ return NULL;
+ }
+
+ return qemuDomObjQemuLiveUpgrade(driver, domain->conn, vm, flags);
+}
+
+
static virDriver qemuDriver = {
.no = VIR_DRV_QEMU,
.name = QEMU_DRIVER_NAME,
@@ -15892,6 +16230,7 @@ static virDriver qemuDriver = {
.domainMigrateFinish3Params = qemuDomainMigrateFinish3Params, /* 1.1.0 */
.domainMigrateConfirm3Params = qemuDomainMigrateConfirm3Params, /* 1.1.0 */
.connectGetCPUModelNames = qemuConnectGetCPUModelNames, /* 1.1.3 */
+ .domainQemuLiveUpgrade = qemuDomainQemuLiveUpgrade, /* 1.1.3 */
};
diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index a3d986f..f859936 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -1686,7 +1686,7 @@ qemuMigrationUpdateJobStatus(virQEMUDriverPtr driver,
}
-static int
+int
qemuMigrationWaitForCompletion(virQEMUDriverPtr driver, virDomainObjPtr vm,
enum qemuDomainAsyncJob asyncJob,
virConnectPtr dconn, bool abort_on_error)
diff --git a/src/qemu/qemu_migration.h b/src/qemu/qemu_migration.h
index cafa2a2..48b2009 100644
--- a/src/qemu/qemu_migration.h
+++ b/src/qemu/qemu_migration.h
@@ -162,6 +162,9 @@ int qemuMigrationConfirm(virConnectPtr conn,
int cookieinlen,
unsigned int flags,
int cancelled);
+int qemuMigrationWaitForCompletion(virQEMUDriverPtr driver, virDomainObjPtr vm,
+ enum qemuDomainAsyncJob asyncJob,
+ virConnectPtr dconn, bool abort_on_error);
bool qemuMigrationIsAllowed(virQEMUDriverPtr driver, virDomainObjPtr vm,
virDomainDefPtr def, bool remote,
diff --git a/src/remote/remote_driver.c b/src/remote/remote_driver.c
index 7181949..cfa70bd 100644
--- a/src/remote/remote_driver.c
+++ b/src/remote/remote_driver.c
@@ -7013,6 +7013,7 @@ static virDriver remote_driver = {
.domainMigrateFinish3Params = remoteDomainMigrateFinish3Params, /* 1.1.0 */
.domainMigrateConfirm3Params = remoteDomainMigrateConfirm3Params, /* 1.1.0 */
.connectGetCPUModelNames = remoteConnectGetCPUModelNames, /* 1.1.3 */
+ .domainQemuLiveUpgrade = remoteDomainQemuLiveUpgrade, /* 1.1.3 */
};
static virNetworkDriver network_driver = {
diff --git a/src/remote/remote_protocol.x b/src/remote/remote_protocol.x
index f942670..25f35b2 100644
--- a/src/remote/remote_protocol.x
+++ b/src/remote/remote_protocol.x
@@ -2849,6 +2849,15 @@ struct remote_connect_get_cpu_model_names_ret {
int ret;
};
+struct remote_domain_qemu_live_upgrade_args {
+ remote_nonnull_domain dom;
+ unsigned int flags;
+};
+
+struct remote_domain_qemu_live_upgrade_ret {
+ remote_nonnull_domain domUpgraded;
+};
+
/*----- Protocol. -----*/
/* Define the program number, protocol version and procedure numbers here. */
@@ -5018,5 +5027,13 @@ enum remote_procedure {
* @generate: none
* @acl: connect:read
*/
- REMOTE_PROC_CONNECT_GET_CPU_MODEL_NAMES = 312
+ REMOTE_PROC_CONNECT_GET_CPU_MODEL_NAMES = 312,
+
+ /**
+ * @generate: both
+ * @acl: domain:migrate
+ * @acl: domain:start
+ * @acl: domain:write
+ */
+ REMOTE_PROC_DOMAIN_QEMU_LIVE_UPGRADE = 313
};
diff --git a/tools/virsh-domain.c b/tools/virsh-domain.c
index 60abd3d..e0c7997 100644
--- a/tools/virsh-domain.c
+++ b/tools/virsh-domain.c
@@ -10451,6 +10451,139 @@ cleanup:
return ret;
}
+/*
+ * "qemu-live-upgrade" command
+ */
+static const vshCmdInfo info_qemu_live_upgrade[] = {
+ {.name = "help",
+ .data = N_("Live upgrade QEMU binary version of a running domain")
+ },
+ {.name = "desc",
+ .data = N_("Let the domain make use of a newly upgraded QEMU binary without
restart.")
+ },
+ {.name = NULL}
+};
+
+static const vshCmdOptDef opts_qemu_live_upgrade[] = {
+ {.name = "domain",
+ .type = VSH_OT_DATA,
+ .flags = VSH_OFLAG_REQ,
+ .help = N_("domain name, id or uuid")
+ },
+ {.name = "page-flipping",
+ .type = VSH_OT_BOOL,
+ .help = N_("enable memory page flipping when upgrading the QEMU binray")
+ },
+ {.name = "timeout",
+ .type = VSH_OT_INT,
+ .help = N_("force guest to suspend if QEMU live upgrade exceeds timeout (in
seconds)")
+ },
+ {.name = "verbose",
+ .type = VSH_OT_BOOL,
+ .help = N_("display the progress of uprgade")
+ },
+ {.name = NULL}
+};
+
+static void
+doQemuLiveUpgrade(void *opaque)
+{
+ char ret = '1';
+ virDomainPtr dom = NULL;
+ virDomainPtr domUpgraded = NULL;
+ unsigned int flags = 0;
+ vshCtrlData *data = opaque;
+ vshControl *ctl = data->ctl;
+ const vshCmd *cmd = data->cmd;
+ sigset_t sigmask, oldsigmask;
+
+ sigemptyset(&sigmask);
+ sigaddset(&sigmask, SIGINT);
+ if (pthread_sigmask(SIG_BLOCK, &sigmask, &oldsigmask) < 0)
+ goto out_sig;
+
+ if (!(dom = vshCommandOptDomain(ctl, cmd, NULL)))
+ goto out;
+
+ if ((domUpgraded = virDomainQemuLiveUpgrade(dom, flags))) {
+ ret = '0';
+ }
+
+out:
+ pthread_sigmask(SIG_SETMASK, &oldsigmask, NULL);
+out_sig:
+ if (domUpgraded)
+ virDomainFree(domUpgraded);
+ if (dom)
+ virDomainFree(dom);
+ ignore_value(safewrite(data->writefd, &ret, sizeof(ret)));
+ return;
+}
+
+static void
+vshQemuLiveUpgradeTimeout(vshControl *ctl,
+ virDomainPtr dom,
+ void *opaque ATTRIBUTE_UNUSED)
+{
+ vshDebug(ctl, VSH_ERR_DEBUG, "suspending the domain, "
+ "since QEMU live uprgade timed out\n");
+ virDomainSuspend(dom);
+}
+
+static bool
+cmdQemuLiveUpgrade(vshControl *ctl, const vshCmd *cmd)
+{
+ virDomainPtr dom = NULL;
+ int p[2] = {-1, -1};
+ virThread workerThread;
+ bool verbose = false;
+ bool functionReturn = false;
+ int timeout = 0;
+ vshCtrlData data;
+ int rv;
+
+ if (!(dom = vshCommandOptDomain(ctl, cmd, NULL)))
+ return false;
+
+ if (vshCommandOptBool(cmd, "verbose"))
+ verbose = true;
+
+ if ((rv = vshCommandOptInt(cmd, "timeout", &timeout)) < 0 ||
+ (rv > 0 && timeout < 1)) {
+ vshError(ctl, "%s", _("qemu-live-upgrade: Invalid timeout"));
+ goto cleanup;
+ } else if (rv > 0) {
+ /* Ensure that we can multiply by 1000 without overflowing. */
+ if (timeout > INT_MAX / 1000) {
+ vshError(ctl, "%s", _("qemu-live-upgrade: Timeout is too big"));
+ goto cleanup;
+ }
+ }
+
+ if (pipe(p) < 0)
+ goto cleanup;
+
+ data.ctl = ctl;
+ data.cmd = cmd;
+ data.writefd = p[1];
+
+ if (virThreadCreate(&workerThread,
+ true,
+ doQemuLiveUpgrade,
+ &data) < 0)
+ goto cleanup;
+ functionReturn = vshWatchJob(ctl, dom, verbose, p[0], timeout,
+ vshQemuLiveUpgradeTimeout, NULL, _("Upgrade"));
+
+ virThreadJoin(&workerThread);
+
+cleanup:
+ virDomainFree(dom);
+ VIR_FORCE_CLOSE(p[0]);
+ VIR_FORCE_CLOSE(p[1]);
+ return functionReturn;
+}
+
const vshCmdDef domManagementCmds[] = {
{.name = "attach-device",
.handler = cmdAttachDevice,
@@ -10796,6 +10929,12 @@ const vshCmdDef domManagementCmds[] = {
.info = info_qemu_agent_command,
.flags = 0
},
+ {.name = "qemu-live-upgrade",
+ .handler = cmdQemuLiveUpgrade,
+ .opts = opts_qemu_live_upgrade,
+ .info = info_qemu_live_upgrade,
+ .flags = 0
+ },
{.name = "reboot",
.handler = cmdReboot,
.opts = opts_reboot,
--
1.8.3.1
_____________________________
Zhou Zheng Sheng / 周征晟
Software Engineer
E-mail: zhshzhou@cn.ibm.com
Telephone: 86-10-82454397