[libvirt] RFC (V2) New virDomainBlockPull API family to libvirt

Here are the patches to implement the BlockPull/BlockJob API as discussed and agreed to. I am testing with a python script (included for completeness as the final patch). The qemu monitor interface is not expected to change in the future. Stefan is planning to submit placeholder commands for upstream qemu until the generic streaming support is implemented.

Changes since V1:
- Make virDomainBlockPullAbort() and virDomainGetBlockPullInfo() into a generic BlockJob interface
- Added virDomainBlockJobSetSpeed()
- Renamed the VIR_DOMAIN_EVENT_ID_BLOCK_PULL event to fit into the block job API
- Added a bandwidth argument to virDomainBlockPull()

Summary of changes since the first-generation patch series:
- Qemu dropped incremental streaming, so remove the libvirt incremental BlockPull() API
- Rename virDomainBlockPullAll() to virDomainBlockPull()
- Changes required to the qemu monitor handlers for the changed command names

--

To help speed the provisioning process for large domains, new QED disks are created with backing to a template image. These disks are configured with copy-on-read such that blocks read from the backing file are copied to the new disk. This reduces I/O over a potentially costly path to the backing image. In such a configuration, there is a desire to remove the dependency on the backing image as the domain runs. To accomplish this, qemu will provide an interface to perform sequential copy-on-read operations during normal VM operation. Once all data has been copied, the disk image's link to the backing file is removed.

The virDomainBlockPull API family brings this functionality to libvirt. virDomainBlockPull() instructs the hypervisor to stream the entire device in the background. Progress of this operation can be checked with virDomainGetBlockJobInfo(). An ongoing stream can be cancelled with virDomainBlockJobAbort(), and virDomainBlockJobSetSpeed() allows you to limit the bandwidth that the operation may consume.
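The progress check can be sketched in a few lines of Python. `get_info` below is a hypothetical stand-in for a virDomainGetBlockJobInfo() call (the real python bindings arrive in patch 6/8); it mimics that function's 1/0/-1 return convention and the cur/end cursor pair from virDomainBlockJobInfo:

```python
def pull_progress(get_info):
    """Approximate BlockPull progress by dividing cur by end.

    get_info is a hypothetical stand-in for virDomainGetBlockJobInfo():
    it returns (status, cur, end), where status is 1 if job info was
    found, 0 if no job is active, and -1 on error.
    """
    status, cur, end = get_info()
    if status != 1:
        return None  # no active job, or an error occurred
    if end == 0:
        return 0.0   # guard against division by zero before qemu reports a length
    return cur / end


# A job halfway through streaming a 1 GiB image:
print(pull_progress(lambda: (1, 512 * 1024 * 1024, 1024 * 1024 * 1024)))  # 0.5
```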
An event (VIR_DOMAIN_EVENT_ID_BLOCK_JOB) will be emitted when a disk has been fully populated or when a BlockPull() operation is terminated due to an error. This event is useful to avoid polling virDomainGetBlockJobInfo() for completion, and could also be used by the security driver to revoke access to the backing file when it is no longer needed.

make check: PASS
make syntax-check: PASS
make -C tests valgrind: PASS

[PATCH 1/8] Add new API virDomainBlockPull* to headers
[PATCH 2/8] virDomainBlockPull: Implement the main entry points
[PATCH 3/8] Add virDomainBlockPull support to the remote driver
[PATCH 4/8] Implement virDomainBlockPull for the qemu driver
[PATCH 5/8] Enable the virDomainBlockPull API in virsh
[PATCH 6/8] Enable virDomainBlockPull in the python API
[PATCH 7/8] Asynchronous event for BlockJob completion
[PATCH 8/8] Test the blockJob/BlockPull API
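The point of the completion event is that a caller registers once instead of polling. The toy dispatcher below is purely illustrative — the class and method names are invented and are not the libvirt event API — but it models the intended one-shot delivery per disk path:

```python
class BlockJobEvents:
    """Toy model of VIR_DOMAIN_EVENT_ID_BLOCK_JOB delivery (illustrative only).

    Callers register a callback per disk path; when the hypervisor reports
    completion or failure, the callbacks fire once, replacing a loop that
    polls virDomainGetBlockJobInfo().
    """

    def __init__(self):
        self._waiters = {}

    def register(self, path, callback):
        self._waiters.setdefault(path, []).append(callback)

    def emit(self, path, status):
        # status models the job outcome, e.g. "completed" or "failed";
        # waiters for the path are removed so each fires exactly once.
        for cb in self._waiters.pop(path, []):
            cb(path, status)
```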

Set up the types for the block pull functions and insert them into the virDriver structure definition. Symbols are exported in this patch to prevent documentation compile failures. * include/libvirt/libvirt.h.in: new API * src/driver.h: add the new entry to the driver structure * python/generator.py: fix compiler errors, the actual python bindings are implemented later * src/libvirt_public.syms: export symbols * docs/apibuild.py: Extend 'unsigned long' parameter exception to this API Signed-off-by: Adam Litke <agl@us.ibm.com> --- docs/apibuild.py | 7 ++++- include/libvirt/libvirt.h.in | 44 ++++++++++++++++++++++++++++++++++++++++++ python/generator.py | 2 + src/driver.h | 24 ++++++++++++++++++++++ src/libvirt_public.syms | 7 ++++++ 5 files changed, 82 insertions(+), 2 deletions(-) diff --git a/docs/apibuild.py b/docs/apibuild.py index 6e35cfb..53b3421 100755 --- a/docs/apibuild.py +++ b/docs/apibuild.py @@ -1641,7 +1641,9 @@ class CParser: "virDomainMigrateSetMaxSpeed" : (False, ("bandwidth")), "virDomainSetMaxMemory" : (False, ("memory")), "virDomainSetMemory" : (False, ("memory")), - "virDomainSetMemoryFlags" : (False, ("memory")) } + "virDomainSetMemoryFlags" : (False, ("memory")), + "virDomainBlockJobSetSpeed" : (False, ("bandwidth")), + "virDomainBlockPull" : (False, ("bandwidth")) } def checkLongLegacyFunction(self, name, return_type, signature): if "long" in return_type and "long long" not in return_type: @@ -1667,7 +1669,8 @@ class CParser: # [unsigned] long long long_legacy_struct_fields = \ { "_virDomainInfo" : ("maxMem", "memory"), - "_virNodeInfo" : ("memory") } + "_virNodeInfo" : ("memory"), + "_virDomainBlockJobInfo" : ("bandwidth") } def checkLongLegacyStruct(self, name, fields): for field in fields: diff --git a/include/libvirt/libvirt.h.in b/include/libvirt/libvirt.h.in index 607b5bc..23947c7 100644 --- a/include/libvirt/libvirt.h.in +++ b/include/libvirt/libvirt.h.in @@ -1375,6 +1375,50 @@ int virDomainUpdateDeviceFlags(virDomainPtr domain, const 
char *xml, unsigned int flags); /* + * BlockJob API + */ + +/** + * virDomainBlockJobType: + * + * VIR_DOMAIN_BLOCK_JOB_TYPE_PULL: Block Pull (virDomainBlockPull) + */ +typedef enum { + VIR_DOMAIN_BLOCK_JOB_TYPE_UNKNOWN = 0, + VIR_DOMAIN_BLOCK_JOB_TYPE_PULL = 1, +} virDomainBlockJobType; + +/* An iterator for monitoring block job operations */ +typedef unsigned long long virDomainBlockJobCursor; + +typedef struct _virDomainBlockJobInfo virDomainBlockJobInfo; +struct _virDomainBlockJobInfo { + virDomainBlockJobType type; + unsigned long bandwidth; + /* + * The following fields provide an indication of block job progress. @cur + * indicates the current position and will be between 0 and @end. @end is + * the final cursor position for this operation and represents completion. + * To approximate progress, divide @cur by @end. + */ + virDomainBlockJobCursor cur; + virDomainBlockJobCursor end; +}; +typedef virDomainBlockJobInfo *virDomainBlockJobInfoPtr; + +int virDomainBlockJobAbort(virDomainPtr dom, const char *path, + unsigned int flags); +int virDomainGetBlockJobInfo(virDomainPtr dom, const char *path, + virDomainBlockJobInfoPtr info, + unsigned int flags); +int virDomainBlockJobSetSpeed(virDomainPtr dom, const char *path, + unsigned long bandwidth, unsigned int flags); + +int virDomainBlockPull(virDomainPtr dom, const char *path, + unsigned long bandwidth, unsigned int flags); + + +/* * NUMA support */ diff --git a/python/generator.py b/python/generator.py index 1cb82f5..b25c74e 100755 --- a/python/generator.py +++ b/python/generator.py @@ -186,6 +186,7 @@ def enum(type, name, value): functions_failed = [] functions_skipped = [ "virConnectListDomains", + 'virDomainGetBlockJobInfo', ] skipped_modules = { @@ -202,6 +203,7 @@ skipped_types = { 'virStreamEventCallback': "No function types in python", 'virEventHandleCallback': "No function types in python", 'virEventTimeoutCallback': "No function types in python", + 'virDomainBlockJobInfoPtr': "Not implemented yet", } 
####################################################################### diff --git a/src/driver.h b/src/driver.h index 9d0d3de..776bb7f 100644 --- a/src/driver.h +++ b/src/driver.h @@ -661,6 +661,26 @@ typedef int unsigned long flags, int cancelled); + +typedef int + (*virDrvDomainBlockJobAbort)(virDomainPtr dom, const char *path, + unsigned int flags); + +typedef int + (*virDrvDomainGetBlockJobInfo)(virDomainPtr dom, const char *path, + virDomainBlockJobInfoPtr info, + unsigned int flags); + +typedef int + (*virDrvDomainBlockJobSetSpeed)(virDomainPtr dom, + const char *path, unsigned long bandwidth, + unsigned int flags); + +typedef int + (*virDrvDomainBlockPull)(virDomainPtr dom, const char *path, + unsigned long bandwidth, unsigned int flags); + + /** * _virDriver: * @@ -802,6 +822,10 @@ struct _virDriver { virDrvDomainMigrateFinish3 domainMigrateFinish3; virDrvDomainMigrateConfirm3 domainMigrateConfirm3; virDrvDomainSendKey domainSendKey; + virDrvDomainBlockJobAbort domainBlockJobAbort; + virDrvDomainGetBlockJobInfo domainGetBlockJobInfo; + virDrvDomainBlockJobSetSpeed domainBlockJobSetSpeed; + virDrvDomainBlockPull domainBlockPull; }; typedef int diff --git a/src/libvirt_public.syms b/src/libvirt_public.syms index 5f2541a..5fc6398 100644 --- a/src/libvirt_public.syms +++ b/src/libvirt_public.syms @@ -467,3 +467,10 @@ LIBVIRT_0.9.3 { } LIBVIRT_0.9.2; # .... define new API here using predicted next version number .... +LIBVIRT_0.9.4 { + global: + virDomainBlockJobAbort; + virDomainGetBlockJobInfo; + virDomainBlockJobSetSpeed; + virDomainBlockPull; +} LIBVIRT_0.9.3; -- 1.7.3

* src/libvirt.c: implement the main entry points Signed-off-by: Adam Litke <agl@us.ibm.com> --- src/libvirt.c | 226 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++ 1 files changed, 226 insertions(+), 0 deletions(-) diff --git a/src/libvirt.c b/src/libvirt.c index 39e2041..5ca8c03 100644 --- a/src/libvirt.c +++ b/src/libvirt.c @@ -15417,3 +15417,229 @@ error: virDispatchError(conn); return -1; } + +/** + * virDomainBlockJobAbort: + * @dom: pointer to domain object + * @path: fully-qualified filename of disk + * @flags: currently unused, for future extension + * + * Cancel the active block job on the given disk. + * + * Returns -1 in case of failure, 0 when successful. + */ +int virDomainBlockJobAbort(virDomainPtr dom, const char *path, + unsigned int flags) +{ + virConnectPtr conn; + + VIR_DOMAIN_DEBUG(dom, "path=%p, flags=%x", path, flags); + + virResetLastError(); + + if (!VIR_IS_CONNECTED_DOMAIN (dom)) { + virLibDomainError(VIR_ERR_INVALID_DOMAIN, __FUNCTION__); + virDispatchError(NULL); + return -1; + } + conn = dom->conn; + + if (dom->conn->flags & VIR_CONNECT_RO) { + virLibDomainError(VIR_ERR_OPERATION_DENIED, __FUNCTION__); + goto error; + } + + if (!path) { + virLibDomainError(VIR_ERR_INVALID_ARG, + _("path is NULL")); + goto error; + } + + if (conn->driver->domainBlockJobAbort) { + int ret; + ret = conn->driver->domainBlockJobAbort(dom, path, flags); + if (ret < 0) + goto error; + return ret; + } + + virLibDomainError(VIR_ERR_NO_SUPPORT, __FUNCTION__); + +error: + virDispatchError(dom->conn); + return -1; +} + +/** + * virDomainGetBlockJobInfo: + * @dom: pointer to domain object + * @path: fully-qualified filename of disk + * @info: pointer to a virDomainBlockJobInfo structure + * @flags: currently unused, for future extension + * + * Request block job information for the given disk. If an operation is active + * @info will be updated with the current progress. + * + * Returns -1 in case of failure, 0 when nothing found, 1 when info was found. 
+ */ +int virDomainGetBlockJobInfo(virDomainPtr dom, const char *path, + virDomainBlockJobInfoPtr info, unsigned int flags) +{ + virConnectPtr conn; + + VIR_DOMAIN_DEBUG(dom, "path=%p, info=%p, flags=%x", path, info, flags); + + virResetLastError(); + + if (!VIR_IS_CONNECTED_DOMAIN (dom)) { + virLibDomainError(VIR_ERR_INVALID_DOMAIN, __FUNCTION__); + virDispatchError(NULL); + return -1; + } + conn = dom->conn; + + if (!path) { + virLibDomainError(VIR_ERR_INVALID_ARG, + _("path is NULL")); + goto error; + } + + if (!info) { + virLibDomainError(VIR_ERR_INVALID_ARG, + _("info is NULL")); + goto error; + } + + if (conn->driver->domainGetBlockJobInfo) { + int ret; + ret = conn->driver->domainGetBlockJobInfo(dom, path, info, flags); + if (ret < 0) + goto error; + return ret; + } + + virLibDomainError(VIR_ERR_NO_SUPPORT, __FUNCTION__); + +error: + virDispatchError(dom->conn); + return -1; +} + +/** + * virDomainBlockJobSetSpeed: + * @dom: pointer to domain object + * @path: fully-qualified filename of disk + * @bandwidth: specify bandwidth limit in Mbps + * @flags: currently unused, for future extension + * + * Set the maximum allowable bandwidth that a block job may consume. If + * bandwidth is 0, the limit will revert to the hypervisor default. + * + * Returns -1 in case of failure, 0 when successful. 
+ */ +int virDomainBlockJobSetSpeed(virDomainPtr dom, const char *path, + unsigned long bandwidth, unsigned int flags) +{ + virConnectPtr conn; + + VIR_DOMAIN_DEBUG(dom, "path=%p, bandwidth=%lu, flags=%x", + path, bandwidth, flags); + + virResetLastError(); + + if (!VIR_IS_CONNECTED_DOMAIN (dom)) { + virLibDomainError(VIR_ERR_INVALID_DOMAIN, __FUNCTION__); + virDispatchError(NULL); + return -1; + } + conn = dom->conn; + + if (dom->conn->flags & VIR_CONNECT_RO) { + virLibDomainError(VIR_ERR_OPERATION_DENIED, __FUNCTION__); + goto error; + } + + if (!path) { + virLibDomainError(VIR_ERR_INVALID_ARG, + _("path is NULL")); + goto error; + } + + if (conn->driver->domainBlockJobSetSpeed) { + int ret; + ret = conn->driver->domainBlockJobSetSpeed(dom, path, bandwidth, flags); + if (ret < 0) + goto error; + return ret; + } + + virLibDomainError(VIR_ERR_NO_SUPPORT, __FUNCTION__); + +error: + virDispatchError(dom->conn); + return -1; +} + +/** + * virDomainBlockPull: + * @dom: pointer to domain object + * @path: Fully-qualified filename of disk + * @bandwidth: (optional) specify copy bandwidth limit in Mbps + * @flags: currently unused, for future extension + * + * Populate a disk image with data from its backing image. Once all data from + * its backing image has been pulled, the disk no longer depends on a backing + * image. This function pulls data for the entire device in the background. + * Progress of the operation can be checked with virDomainGetBlockJobInfo() and + * the operation can be aborted with virDomainBlockJobAbort(). When finished, + * an asynchronous event is raised to indicate the final status. + * + * The maximum bandwidth (in Mbps) that will be used to do the copy can be + * specified with the bandwidth parameter. If set to 0, libvirt will choose a + * suitable default. Some hypervisors do not support this feature and will + * return an error if bandwidth is not 0. + * + * Returns 0 if the operation has started, -1 on failure. 
+ */ +int virDomainBlockPull(virDomainPtr dom, const char *path, + unsigned long bandwidth, unsigned int flags) +{ + virConnectPtr conn; + + VIR_DOMAIN_DEBUG(dom, "path=%p, bandwidth=%lu, flags=%x", + path, bandwidth, flags); + + virResetLastError(); + + if (!VIR_IS_CONNECTED_DOMAIN (dom)) { + virLibDomainError(VIR_ERR_INVALID_DOMAIN, __FUNCTION__); + virDispatchError(NULL); + return -1; + } + conn = dom->conn; + + if (dom->conn->flags & VIR_CONNECT_RO) { + virLibDomainError(VIR_ERR_OPERATION_DENIED, __FUNCTION__); + goto error; + } + + if (!path) { + virLibDomainError(VIR_ERR_INVALID_ARG, + _("path is NULL")); + goto error; + } + + if (conn->driver->domainBlockPull) { + int ret; + ret = conn->driver->domainBlockPull(dom, path, bandwidth, flags); + if (ret < 0) + goto error; + return ret; + } + + virLibDomainError(VIR_ERR_NO_SUPPORT, __FUNCTION__); + +error: + virDispatchError(dom->conn); + return -1; +} -- 1.7.3
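Every entry point in this patch follows the same guard sequence: validate the domain, refuse read-only connections, reject a NULL path, then dispatch through the driver table if the hypervisor driver implements the call. A condensed Python sketch of that flow — the dict-based `conn` is a stand-in, not a real libvirt object, and the real code reports errors via virLibDomainError()/virDispatchError() and returns -1 rather than raising:

```python
def block_pull(conn, dom, path, bandwidth=0, flags=0):
    """Sketch of the virDomainBlockPull() entry-point checks in src/libvirt.c.

    conn is modeled as a dict with 'readonly' and 'driver' keys.
    """
    if conn is None or dom is None:
        raise ValueError("invalid domain")             # VIR_ERR_INVALID_DOMAIN
    if conn.get("readonly"):
        raise PermissionError("read-only connection")  # VIR_ERR_OPERATION_DENIED
    if path is None:
        raise ValueError("path is NULL")               # VIR_ERR_INVALID_ARG
    impl = conn.get("driver", {}).get("domainBlockPull")
    if impl is None:
        raise NotImplementedError("no support")        # VIR_ERR_NO_SUPPORT
    return impl(dom, path, bandwidth, flags)
```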

The generator can handle everything except virDomainGetBlockJobInfo(). * src/remote/remote_protocol.x: provide defines for the new entry points * src/remote/remote_driver.c daemon/remote.c: implement the client and server side for virDomainGetBlockJobInfo. * src/remote_protocol-structs: structure definitions for protocol verification * src/rpc/gendispatch.pl: Permit some unsigned long parameters Signed-off-by: Adam Litke <agl@us.ibm.com> --- daemon/remote.c | 42 ++++++++++++++++++++++++++++++++++++++++++ src/remote/remote_driver.c | 42 ++++++++++++++++++++++++++++++++++++++++++ src/remote/remote_protocol.x | 41 ++++++++++++++++++++++++++++++++++++++++- src/remote_protocol-structs | 34 ++++++++++++++++++++++++++++++++++ src/rpc/gendispatch.pl | 2 ++ 5 files changed, 160 insertions(+), 1 deletions(-) diff --git a/daemon/remote.c b/daemon/remote.c index daad39d..b471abc 100644 --- a/daemon/remote.c +++ b/daemon/remote.c @@ -1587,6 +1587,48 @@ no_memory: goto cleanup; } +static int +remoteDispatchDomainGetBlockJobInfo(virNetServerPtr server ATTRIBUTE_UNUSED, + virNetServerClientPtr client ATTRIBUTE_UNUSED, + virNetMessageHeaderPtr hdr ATTRIBUTE_UNUSED, + virNetMessageErrorPtr rerr, + remote_domain_get_block_job_info_args *args, + remote_domain_get_block_job_info_ret *ret) +{ + virDomainPtr dom = NULL; + virDomainBlockJobInfo tmp; + int rv = -1; + struct daemonClientPrivate *priv = + virNetServerClientGetPrivateData(client); + + if (!priv->conn) { + virNetError(VIR_ERR_INTERNAL_ERROR, "%s", _("connection not open")); + goto cleanup; + } + + if (!(dom = get_nonnull_domain(priv->conn, args->dom))) + goto cleanup; + + rv = virDomainGetBlockJobInfo(dom, args->path, &tmp, args->flags); + if (rv <= 0) + goto cleanup; + + ret->type = tmp.type; + ret->bandwidth = tmp.bandwidth; + ret->cur = tmp.cur; + ret->end = tmp.end; + ret->found = 1; + rv = 0; + +cleanup: + if (rv < 0) + virNetMessageSaveError(rerr); + if (dom) + virDomainFree(dom); + return rv; +} + + 
/*-------------------------------------------------------------*/ static int diff --git a/src/remote/remote_driver.c b/src/remote/remote_driver.c index c2f8bbd..a70b455 100644 --- a/src/remote/remote_driver.c +++ b/src/remote/remote_driver.c @@ -1995,6 +1995,44 @@ done: return rv; } +static int remoteDomainGetBlockJobInfo(virDomainPtr domain, + const char *path, + virDomainBlockJobInfoPtr info, + unsigned int flags) +{ + int rv = -1; + remote_domain_get_block_job_info_args args; + remote_domain_get_block_job_info_ret ret; + struct private_data *priv = domain->conn->privateData; + + remoteDriverLock(priv); + + make_nonnull_domain(&args.dom, domain); + args.path = (char *)path; + args.flags = flags; + + if (call(domain->conn, priv, 0, REMOTE_PROC_DOMAIN_GET_BLOCK_JOB_INFO, + (xdrproc_t)xdr_remote_domain_get_block_job_info_args, + (char *)&args, + (xdrproc_t)xdr_remote_domain_get_block_job_info_ret, + (char *)&ret) == -1) + goto done; + + if (ret.found) { + info->type = ret.type; + info->bandwidth = ret.bandwidth; + info->cur = ret.cur; + info->end = ret.end; + rv = 1; + } else { + rv = 0; + } + +done: + remoteDriverUnlock(priv); + return rv; +} + /*----------------------------------------------------------------------*/ static virDrvOpenStatus ATTRIBUTE_NONNULL (1) @@ -4254,6 +4292,10 @@ static virDriver remote_driver = { .domainMigrateFinish3 = remoteDomainMigrateFinish3, /* 0.9.2 */ .domainMigrateConfirm3 = remoteDomainMigrateConfirm3, /* 0.9.2 */ .domainSendKey = remoteDomainSendKey, /* 0.9.3 */ + .domainBlockJobAbort = remoteDomainBlockJobAbort, /* 0.9.4 */ + .domainGetBlockJobInfo = remoteDomainGetBlockJobInfo, /* 0.9.4 */ + .domainBlockJobSetSpeed = remoteDomainBlockJobSetSpeed, /* 0.9.4 */ + .domainBlockPull = remoteDomainBlockPull, /* 0.9.4 */ }; static virNetworkDriver network_driver = { diff --git a/src/remote/remote_protocol.x b/src/remote/remote_protocol.x index d72a60d..96113d8 100644 --- a/src/remote/remote_protocol.x +++ b/src/remote/remote_protocol.x 
@@ -980,6 +980,40 @@ struct remote_domain_set_autostart_args { int autostart; }; +struct remote_domain_block_job_abort_args { + remote_nonnull_domain dom; + remote_nonnull_string path; + unsigned int flags; +}; + +struct remote_domain_get_block_job_info_args { + remote_nonnull_domain dom; + remote_nonnull_string path; + unsigned int flags; +}; + +struct remote_domain_get_block_job_info_ret { + int found; + int type; + unsigned hyper bandwidth; + unsigned hyper cur; + unsigned hyper end; +}; + +struct remote_domain_block_job_set_speed_args { + remote_nonnull_domain dom; + remote_nonnull_string path; + unsigned hyper bandwidth; + unsigned int flags; +}; + +struct remote_domain_block_pull_args { + remote_nonnull_domain dom; + remote_nonnull_string path; + unsigned hyper bandwidth; + unsigned int flags; +}; + /* Network calls: */ struct remote_num_of_networks_ret { @@ -2383,7 +2417,12 @@ enum remote_procedure { REMOTE_PROC_NODE_GET_CPU_STATS = 227, /* skipgen skipgen */ REMOTE_PROC_NODE_GET_MEMORY_STATS = 228, /* skipgen skipgen */ REMOTE_PROC_DOMAIN_GET_CONTROL_INFO = 229, /* autogen autogen */ - REMOTE_PROC_DOMAIN_GET_VCPU_PIN_INFO = 230 /* skipgen skipgen */ + REMOTE_PROC_DOMAIN_GET_VCPU_PIN_INFO = 230, /* skipgen skipgen */ + + REMOTE_PROC_DOMAIN_BLOCK_JOB_ABORT = 231, /* autogen autogen */ + REMOTE_PROC_DOMAIN_GET_BLOCK_JOB_INFO = 232, /* skipgen skipgen */ + REMOTE_PROC_DOMAIN_BLOCK_JOB_SET_SPEED = 233, /* autogen autogen */ + REMOTE_PROC_DOMAIN_BLOCK_PULL = 234 /* autogen autogen */ /* * Notice how the entries are grouped in sets of 10 ? 
diff --git a/src/remote_protocol-structs b/src/remote_protocol-structs index 221562d..d6bcdd0 100644 --- a/src/remote_protocol-structs +++ b/src/remote_protocol-structs @@ -680,6 +680,35 @@ struct remote_domain_set_autostart_args { remote_nonnull_domain dom; int autostart; }; +struct remote_domain_block_job_abort_args { + remote_nonnull_domain dom; + remote_nonnull_string path; + u_int flags; +}; +struct remote_domain_get_block_job_info_args { + remote_nonnull_domain dom; + remote_nonnull_string path; + u_int flags; +}; +struct remote_domain_get_block_job_info_ret { + int found; + int type; + uint64_t bandwidth; + uint64_t cur; + uint64_t end; +}; +struct remote_domain_block_job_set_speed_args { + remote_nonnull_domain dom; + remote_nonnull_string path; + uint64_t bandwidth; + u_int flags; +}; +struct remote_domain_block_pull_args { + remote_nonnull_domain dom; + remote_nonnull_string path; + uint64_t bandwidth; + u_int flags; +}; struct remote_num_of_networks_ret { int num; }; @@ -1859,4 +1888,9 @@ enum remote_procedure { REMOTE_PROC_NODE_GET_MEMORY_STATS = 228, REMOTE_PROC_DOMAIN_GET_CONTROL_INFO = 229, REMOTE_PROC_DOMAIN_GET_VCPU_PIN_INFO = 230, + REMOTE_PROC_DOMAIN_BLOCK_JOB_ABORT = 231, + REMOTE_PROC_DOMAIN_GET_BLOCK_JOB_INFO = 232, + REMOTE_PROC_DOMAIN_BLOCK_JOB_SET_SPEED = 233, + REMOTE_PROC_DOMAIN_BLOCK_PULL = 234, + }; diff --git a/src/rpc/gendispatch.pl b/src/rpc/gendispatch.pl index e068b53..583e808 100755 --- a/src/rpc/gendispatch.pl +++ b/src/rpc/gendispatch.pl @@ -218,6 +218,8 @@ my $long_legacy = { GetLibVersion => { ret => { lib_ver => 1 } }, GetVersion => { ret => { hv_ver => 1 } }, NodeGetInfo => { ret => { memory => 1 } }, + DomainBlockPull => { arg => { bandwidth => 1 } }, + DomainBlockJobSetSpeed => { arg => { bandwidth => 1 } }, }; sub hyper_to_long -- 1.7.3
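The bandwidth argument travels on the wire as an XDR `unsigned hyper` (64 bits) but lands in a C `unsigned long`, which may be only 32 bits wide on some platforms — that is why the gendispatch.pl hunk above lists these calls in its `$long_legacy` table. A sketch of the range check the generator is assumed to emit, with the 32-bit width chosen here purely for illustration:

```python
def hyper_to_ulong(value, ulong_bits=32):
    """Range-check a 64-bit wire value destined for a C unsigned long.

    Models the overflow guard generated for parameters in the
    gendispatch.pl long_legacy table; ulong_bits=32 represents a platform
    where unsigned long is 32 bits wide.
    """
    limit = (1 << ulong_bits) - 1
    if value > limit:
        raise OverflowError("bandwidth exceeds the range of unsigned long")
    return value
```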

The virDomainBlockPull* family of commands are enabled by the following HMP/QMP commands: 'block_stream', 'block_job_cancel', 'info block-jobs' / 'query-block-jobs', and 'block_job_set_speed'. * src/qemu/qemu_driver.c src/qemu/qemu_monitor_text.[ch]: implement disk streaming by using the proper qemu monitor commands. * src/qemu/qemu_monitor_json.[ch]: implement commands using the qmp monitor Signed-off-by: Adam Litke <agl@us.ibm.com> --- src/qemu/qemu_driver.c | 113 +++++++++++++++++++++++++++++ src/qemu/qemu_monitor.c | 18 +++++ src/qemu/qemu_monitor.h | 13 ++++ src/qemu/qemu_monitor_json.c | 147 ++++++++++++++++++++++++++++++++++++++ src/qemu/qemu_monitor_json.h | 5 ++ src/qemu/qemu_monitor_text.c | 162 ++++++++++++++++++++++++++++++++++++++++++ src/qemu/qemu_monitor_text.h | 6 ++ 7 files changed, 464 insertions(+), 0 deletions(-) diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index 8870e33..0f556a9 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -8493,6 +8493,115 @@ cleanup: return ret; } +static const char * +qemuDiskPathToAlias(virDomainObjPtr vm, const char *path) { + int i; + char *ret = NULL; + + for (i = 0 ; i < vm->def->ndisks ; i++) { + virDomainDiskDefPtr disk = vm->def->disks[i]; + + if (disk->type != VIR_DOMAIN_DISK_TYPE_BLOCK && + disk->type != VIR_DOMAIN_DISK_TYPE_FILE) + continue; + + if (disk->src != NULL && STREQ(disk->src, path)) { + if (virAsprintf(&ret, "drive-%s", disk->info.alias) < 0) { + virReportOOMError(); + return NULL; + } + break; + } + } + + if (!ret) { + qemuReportError(VIR_ERR_INVALID_ARG, + "%s", _("No device found for specified path")); + } + return ret; +} + +static int +qemuDomainBlockJobImpl(virDomainPtr dom, const char *path, + unsigned long bandwidth, virDomainBlockJobInfoPtr info, + int mode) +{ + struct qemud_driver *driver = dom->conn->privateData; + virDomainObjPtr vm = NULL; + qemuDomainObjPrivatePtr priv; + char uuidstr[VIR_UUID_STRING_BUFLEN]; + const char *device = NULL; + int ret = 
-1; + + qemuDriverLock(driver); + virUUIDFormat(dom->uuid, uuidstr); + vm = virDomainFindByUUID(&driver->domains, dom->uuid); + if (!vm) { + qemuReportError(VIR_ERR_NO_DOMAIN, + _("no domain with matching uuid '%s'"), uuidstr); + goto cleanup; + } + + if (!virDomainObjIsActive(vm)) { + qemuReportError(VIR_ERR_OPERATION_INVALID, + "%s", _("domain is not running")); + goto cleanup; + } + + device = qemuDiskPathToAlias(vm, path); + if (!device) { + goto cleanup; + } + + if (qemuDomainObjBeginJobWithDriver(driver, vm, QEMU_JOB_MODIFY) < 0) + goto cleanup; + ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm)); + priv = vm->privateData; + ret = qemuMonitorBlockJob(priv->mon, device, bandwidth, info, mode); + qemuDomainObjExitMonitorWithDriver(driver, vm); + if (qemuDomainObjEndJob(driver, vm) == 0) { + vm = NULL; + goto cleanup; + } + +cleanup: + VIR_FREE(device); + if (vm) + virDomainObjUnlock(vm); + qemuDriverUnlock(driver); + return ret; +} + +static int +qemuDomainBlockJobAbort(virDomainPtr dom, const char *path, unsigned int flags) +{ + virCheckFlags(0, -1); + return qemuDomainBlockJobImpl(dom, path, 0, NULL, BLOCK_JOB_ABORT); +} + +static int +qemuDomainGetBlockJobInfo(virDomainPtr dom, const char *path, + virDomainBlockJobInfoPtr info, unsigned int flags) +{ + virCheckFlags(0, -1); + return qemuDomainBlockJobImpl(dom, path, 0, info, BLOCK_JOB_INFO); +} + +static int +qemuDomainBlockJobSetSpeed(virDomainPtr dom, const char *path, + unsigned long bandwidth, unsigned int flags) +{ + virCheckFlags(0, -1); + return qemuDomainBlockJobImpl(dom, path, bandwidth, NULL, BLOCK_JOB_SPEED); +} + +static int +qemuDomainBlockPull(virDomainPtr dom, const char *path, unsigned long bandwidth, + unsigned int flags) +{ + virCheckFlags(0, -1); + return qemuDomainBlockJobImpl(dom, path, bandwidth, NULL, BLOCK_JOB_PULL); +} static virDriver qemuDriver = { .no = VIR_DRV_QEMU, @@ -8619,6 +8728,10 @@ static virDriver qemuDriver = { .domainMigratePerform3 = 
qemuDomainMigratePerform3, /* 0.9.2 */ .domainMigrateFinish3 = qemuDomainMigrateFinish3, /* 0.9.2 */ .domainMigrateConfirm3 = qemuDomainMigrateConfirm3, /* 0.9.2 */ + .domainBlockJobAbort = qemuDomainBlockJobAbort, /* 0.9.4 */ + .domainGetBlockJobInfo = qemuDomainGetBlockJobInfo, /* 0.9.4 */ + .domainBlockJobSetSpeed = qemuDomainBlockJobSetSpeed, /* 0.9.4 */ + .domainBlockPull = qemuDomainBlockPull, /* 0.9.4 */ }; diff --git a/src/qemu/qemu_monitor.c b/src/qemu/qemu_monitor.c index 3a30a15..5c048eb 100644 --- a/src/qemu/qemu_monitor.c +++ b/src/qemu/qemu_monitor.c @@ -2427,3 +2427,21 @@ int qemuMonitorScreendump(qemuMonitorPtr mon, ret = qemuMonitorTextScreendump(mon, file); return ret; } + +int qemuMonitorBlockJob(qemuMonitorPtr mon, + const char *device, + unsigned long bandwidth, + virDomainBlockJobInfoPtr info, + int mode) +{ + int ret; + + VIR_DEBUG("mon=%p, device=%p, bandwidth=%lu, info=%p, mode=%o", + mon, device, bandwidth, info, mode); + + if (mon->json) + ret = qemuMonitorJSONBlockJob(mon, device, bandwidth, info, mode); + else + ret = qemuMonitorTextBlockJob(mon, device, bandwidth, info, mode); + return ret; +} diff --git a/src/qemu/qemu_monitor.h b/src/qemu/qemu_monitor.h index f246d21..c5d27ef 100644 --- a/src/qemu/qemu_monitor.h +++ b/src/qemu/qemu_monitor.h @@ -447,6 +447,19 @@ int qemuMonitorInjectNMI(qemuMonitorPtr mon); int qemuMonitorScreendump(qemuMonitorPtr mon, const char *file); +typedef enum { + BLOCK_JOB_ABORT = 0, + BLOCK_JOB_INFO = 1, + BLOCK_JOB_SPEED = 2, + BLOCK_JOB_PULL = 3, +} BLOCK_JOB_CMD; + +int qemuMonitorBlockJob(qemuMonitorPtr mon, + const char *device, + unsigned long bandwidth, + virDomainBlockJobInfoPtr info, + int mode); + /** * When running two dd process and using <> redirection, we need a * shell that will not truncate files. 
These two strings serve that diff --git a/src/qemu/qemu_monitor_json.c b/src/qemu/qemu_monitor_json.c index 4db2b78..e7163bb 100644 --- a/src/qemu/qemu_monitor_json.c +++ b/src/qemu/qemu_monitor_json.c @@ -2717,3 +2717,150 @@ int qemuMonitorJSONScreendump(qemuMonitorPtr mon, virJSONValueFree(reply); return ret; } + +static int qemuMonitorJSONGetBlockJobInfoOne(virJSONValuePtr entry, + const char *device, + virDomainBlockJobInfoPtr info) +{ + const char *this_dev; + const char *type; + unsigned long long speed_bytes; + + if ((this_dev = virJSONValueObjectGetString(entry, "device")) == NULL) { + qemuReportError(VIR_ERR_INTERNAL_ERROR, "%s", + _("entry was missing 'device'")); + return -1; + } + if (!STREQ(this_dev, device)) + return -1; + + type = virJSONValueObjectGetString(entry, "type"); + if (!type) { + qemuReportError(VIR_ERR_INTERNAL_ERROR, "%s", + _("entry was missing 'type'")); + return -1; + } + if (STREQ(type, "stream")) + info->type = VIR_DOMAIN_BLOCK_JOB_TYPE_PULL; + else + info->type = VIR_DOMAIN_BLOCK_JOB_TYPE_UNKNOWN; + + if (virJSONValueObjectGetNumberUlong(entry, "speed", &speed_bytes) < 0) { + qemuReportError(VIR_ERR_INTERNAL_ERROR, "%s", + _("entry was missing 'speed'")); + return -1; + } + info->bandwidth = speed_bytes / 1024ULL / 1024ULL; + + if (virJSONValueObjectGetNumberUlong(entry, "offset", &info->cur) < 0) { + qemuReportError(VIR_ERR_INTERNAL_ERROR, "%s", + _("entry was missing 'offset'")); + return -1; + } + + if (virJSONValueObjectGetNumberUlong(entry, "len", &info->end) < 0) { + qemuReportError(VIR_ERR_INTERNAL_ERROR, "%s", + _("entry was missing 'len'")); + return -1; + } + return 0; +} + +/** qemuMonitorJSONGetBlockJobInfo: + * Parse Block Job information. + * The reply is a JSON array of objects, one per active job. 
+ */ +static int qemuMonitorJSONGetBlockJobInfo(virJSONValuePtr reply, + const char *device, + virDomainBlockJobInfoPtr info) +{ + virJSONValuePtr data; + int nr_results, i; + + if (!info) + return -1; + + if ((data = virJSONValueObjectGet(reply, "return")) == NULL) { + qemuReportError(VIR_ERR_INTERNAL_ERROR, "%s", + _("reply was missing return data")); + return -1; + } + + if (data->type != VIR_JSON_TYPE_ARRAY) { + qemuReportError(VIR_ERR_INTERNAL_ERROR, "%s", + _("unrecognized format of block job information")); + return -1; + } + + if ((nr_results = virJSONValueArraySize(data)) < 0) { + qemuReportError(VIR_ERR_INTERNAL_ERROR, "%s", + _("unable to determine array size")); + return -1; + } + + for (i = 0; i < nr_results; i++) { + virJSONValuePtr entry = virJSONValueArrayGet(data, i); + if (qemuMonitorJSONGetBlockJobInfoOne(entry, device, info) == 0) + return 1; + } + + return 0; +} + + +int qemuMonitorJSONBlockJob(qemuMonitorPtr mon, + const char *device, + unsigned long bandwidth, + virDomainBlockJobInfoPtr info, + int mode) +{ + int ret = -1; + virJSONValuePtr cmd = NULL; + virJSONValuePtr reply = NULL; + + if (mode == BLOCK_JOB_ABORT) + cmd = qemuMonitorJSONMakeCommand("block_job_cancel", + "s:device", device, NULL); + else if (mode == BLOCK_JOB_INFO) + cmd = qemuMonitorJSONMakeCommand("query-block-jobs", NULL); + else if (mode == BLOCK_JOB_SPEED) + cmd = qemuMonitorJSONMakeCommand("block_job_set_speed", + "s:device", device, + "U:value", bandwidth * 1024ULL * 1024ULL, + NULL); + else if (mode == BLOCK_JOB_PULL) + cmd = qemuMonitorJSONMakeCommand("block_stream", + "s:device", device, NULL); + + if (!cmd) + return -1; + + ret = qemuMonitorJSONCommand(mon, cmd, &reply); + + if (ret == 0 && virJSONValueObjectHasKey(reply, "error")) { + if (qemuMonitorJSONHasError(reply, "DeviceNotActive")) + qemuReportError(VIR_ERR_OPERATION_INVALID, + _("No active operation on device: %s"), device); + else if (qemuMonitorJSONHasError(reply, "DeviceInUse")) + 
qemuReportError(VIR_ERR_OPERATION_FAILED, + _("Device %s in use"), device); + else if (qemuMonitorJSONHasError(reply, "NotSupported")) + qemuReportError(VIR_ERR_OPERATION_INVALID, + _("Operation is not supported for device: %s"), device); + else + qemuReportError(VIR_ERR_INTERNAL_ERROR, "%s", + _("Unexpected error")); + ret = -1; + } + + if (ret == 0 && mode == BLOCK_JOB_INFO) + ret = qemuMonitorJSONGetBlockJobInfo(reply, device, info); + + if (ret == 0 && mode == BLOCK_JOB_PULL && bandwidth != 0) + ret = qemuMonitorJSONBlockJob(mon, device, bandwidth, NULL, + BLOCK_JOB_SPEED); + + virJSONValueFree(cmd); + virJSONValueFree(reply); + return ret; +} diff --git a/src/qemu/qemu_monitor_json.h b/src/qemu/qemu_monitor_json.h index 380e26a..1804390 100644 --- a/src/qemu/qemu_monitor_json.h +++ b/src/qemu/qemu_monitor_json.h @@ -220,5 +220,10 @@ int qemuMonitorJSONInjectNMI(qemuMonitorPtr mon); int qemuMonitorJSONScreendump(qemuMonitorPtr mon, const char *file); +int qemuMonitorJSONBlockJob(qemuMonitorPtr mon, + const char *device, + unsigned long bandwidth, + virDomainBlockJobInfoPtr info, + int mode); #endif /* QEMU_MONITOR_JSON_H */ diff --git a/src/qemu/qemu_monitor_text.c b/src/qemu/qemu_monitor_text.c index 0965a08..c7632e2 100644 --- a/src/qemu/qemu_monitor_text.c +++ b/src/qemu/qemu_monitor_text.c @@ -2785,3 +2785,165 @@ cleanup: VIR_FREE(cmd); return ret; } + +static int qemuMonitorTextParseBlockJobOne(const char *text, + const char *device, + virDomainBlockJobInfoPtr info, + const char **next) +{ + virDomainBlockJobInfo tmp; + char *p; + unsigned long long speed_bytes; + int mismatch = 0; + + if (next == NULL) + return -1; + *next = NULL; + + /* + * Each active stream will appear on its own line in the following format: + * Streaming device <device>: Completed <cur> of <end> bytes + */ + if ((text = STRSKIP(text, "Streaming device ")) == NULL) + return -EINVAL; + + if (!STREQLEN(text, device, strlen(device))) + mismatch = 1; + + if ((text = strstr(text, ": 
Completed ") == NULL) + return -EINVAL; + text += 11; + + if (virStrToLong_ull (text, &p, 10, &tmp.cur)) + return -EINVAL; + text = p; + + if (!STRPREFIX(text, " of ")) + return -EINVAL; + text += 4; + + if (virStrToLong_ull (text, &p, 10, &tmp.end)) + return -EINVAL; + text = p; + + if (!STRPREFIX(text, " bytes, speed limit ")) + return -EINVAL; + text += 20; + + if (virStrToLong_ull (text, &p, 10, &speed_bytes)) + return -EINVAL; + text = p; + + if (!STRPREFIX(text, " bytes/s")) + return -EINVAL; + + if (mismatch) { + *next = STRSKIP(text, "\n"); + return -EAGAIN; + } + + if (info) { + info->cur = tmp.cur; + info->end = tmp.end; + info->bandwidth = speed_bytes / 1024ULL / 1024ULL; + info->type = VIR_DOMAIN_BLOCK_JOB_TYPE_PULL; + } + return 1; +} + +static int qemuMonitorTextParseBlockJob(const char *text, + const char *device, + virDomainBlockJobInfoPtr info) +{ + const char *next = NULL; + int ret = 0; + + /* Check error: Device not found */ + if (strstr(text, "Device '") && strstr(text, "' not found")) { + qemuReportError(VIR_ERR_OPERATION_INVALID, "%s", _("Device not found")); + return -1; + } + + /* Check error: Job already active on this device */ + if (strstr(text, "Device '") && strstr(text, "' is in use")) { + qemuReportError(VIR_ERR_OPERATION_FAILED, _("Device %s in use"), + device); + return -1; + } + + /* Check error: Stop non-existent job */ + if (strstr(text, "has not been activated")) { + qemuReportError(VIR_ERR_OPERATION_INVALID, + _("No active operation on device: %s"), device); + return -1; + } + + /* This is not an error condition, there are just no results to report.
*/ + if (strstr(text, "No active jobs")) { + return 0; + } + + /* Check for unsupported operation */ + if (strstr(text, "Operation is not supported")) { + qemuReportError(VIR_ERR_OPERATION_INVALID, + _("Operation is not supported for device: %s"), device); + return -1; + } + + /* No output indicates success for Pull, JobAbort, and JobSetSpeed */ + if (STREQ(text, "")) + return 0; + + /* Now try to parse BlockJobInfo */ + do { + ret = qemuMonitorTextParseBlockJobOne(text, device, info, &next); + text = next; + } while (text && ret == -EAGAIN); + + if (ret < 0) + return -1; + return ret; +} + +int qemuMonitorTextBlockJob(qemuMonitorPtr mon, + const char *device, + unsigned long bandwidth, + virDomainBlockJobInfoPtr info, + int mode) +{ + char *cmd = NULL; + char *reply = NULL; + int ret; + + if (mode == BLOCK_JOB_ABORT) + ret = virAsprintf(&cmd, "block_job_cancel %s", device); + else if (mode == BLOCK_JOB_INFO) + ret = virAsprintf(&cmd, "info block-jobs"); + else if (mode == BLOCK_JOB_SPEED) + ret = virAsprintf(&cmd, "block_job_set_speed %s %llu", device, + bandwidth * 1024ULL * 1024ULL); + else if (mode == BLOCK_JOB_PULL) + ret = virAsprintf(&cmd, "block_stream %s", device); + else + return -1; + + if (ret < 0) { + virReportOOMError(); + return -1; + } + + ret = 0; + if (qemuMonitorHMPCommand(mon, cmd, &reply) < 0) { + qemuReportError(VIR_ERR_INTERNAL_ERROR, + "%s", _("cannot run monitor command")); + ret = -1; + goto cleanup; + } + + ret = qemuMonitorTextParseBlockJob(reply, device, info); + +cleanup: + VIR_FREE(cmd); + VIR_FREE(reply); + return ret; +} diff --git a/src/qemu/qemu_monitor_text.h b/src/qemu/qemu_monitor_text.h index e53f693..9a1c7c0 100644 --- a/src/qemu/qemu_monitor_text.h +++ b/src/qemu/qemu_monitor_text.h @@ -213,4 +213,10 @@ int qemuMonitorTextInjectNMI(qemuMonitorPtr mon); int qemuMonitorTextScreendump(qemuMonitorPtr mon, const char *file); +int qemuMonitorTextBlockJob(qemuMonitorPtr mon, + const char *device, + unsigned long bandwidth, + 
virDomainBlockJobInfoPtr info, + int mode); + #endif /* QEMU_MONITOR_TEXT_H */ -- 1.7.3
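The text-monitor parser above is driven entirely by the fixed line format qemu prints for `info block-jobs`. As a sanity check of that format, here is a rough Python equivalent of qemuMonitorTextParseBlockJobOne(); the sample reply string is invented for illustration, and the MiB conversion matches the `speed_bytes / 1024 / 1024` step in the C code:

```python
import re

# One line per active job, per the parser above:
#   Streaming device <device>: Completed <cur> of <end> bytes, speed limit <n> bytes/s
JOB_RE = re.compile(
    r"Streaming device (?P<device>\S+): Completed (?P<cur>\d+) "
    r"of (?P<end>\d+) bytes, speed limit (?P<speed>\d+) bytes/s"
)

def parse_block_job(reply, device):
    """Return a virDomainBlockJobInfo-like dict, or None if no job matches."""
    for line in reply.splitlines():
        m = JOB_RE.match(line)
        if m and m.group("device") == device:
            return {
                "cur": int(m.group("cur")),
                "end": int(m.group("end")),
                # qemu reports bytes/s; libvirt exposes MiB/s
                "bandwidth": int(m.group("speed")) // (1024 * 1024),
            }
    return None

reply = ("Streaming device drive-virtio-disk0: Completed 1048576 "
         "of 8388608 bytes, speed limit 8388608 bytes/s")
info = parse_block_job(reply, "drive-virtio-disk0")
```

Like the C parser, a device-name mismatch simply moves on to the next line instead of failing the whole reply.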

Define two new virsh commands: * blockpull: Initiate a blockPull for the given disk * blockjob: Retrieve progress info, modify speed, and cancel active block jobs Share print_job_progress() with the migration code. * tools/virsh.c: implement the new commands Signed-off-by: Adam Litke <agl@us.ibm.com> --- tools/virsh.c | 135 +++++++++++++++++++++++++++++++++++++++++++++++++++++++-- 1 files changed, 131 insertions(+), 4 deletions(-) diff --git a/tools/virsh.c b/tools/virsh.c index a6803d8..2b590d3 100644 --- a/tools/virsh.c +++ b/tools/virsh.c @@ -4527,7 +4527,8 @@ out_sig: } static void -print_job_progress(unsigned long long remaining, unsigned long long total) +print_job_progress(const char *label, unsigned long long remaining, + unsigned long long total) { int progress; @@ -4547,7 +4548,7 @@ print_job_progress(unsigned long long remaining, unsigned long long total) } } - fprintf(stderr, "\rMigration: [%3d %%]", progress); + fprintf(stderr, "\r%s: [%3d %%]", label, progress); } static bool @@ -4632,7 +4633,7 @@ repoll: functionReturn = true; if (verbose) { /* print [100 %] */ - print_job_progress(0, 1); + print_job_progress("Migration", 0, 1); } } else functionReturn = false; @@ -4668,7 +4669,8 @@ repoll: ret = virDomainGetJobInfo(dom, &jobinfo); pthread_sigmask(SIG_SETMASK, &oldsigmask, NULL); if (ret == 0) - print_job_progress(jobinfo.dataRemaining, jobinfo.dataTotal); + print_job_progress("Migration", jobinfo.dataRemaining, + jobinfo.dataTotal); } } @@ -4771,6 +4773,129 @@ done: return ret; } +typedef enum { + VSH_CMD_BLOCK_JOB_ABORT = 0, + VSH_CMD_BLOCK_JOB_INFO = 1, + VSH_CMD_BLOCK_JOB_SPEED = 2, + VSH_CMD_BLOCK_JOB_PULL = 3, +} VSH_CMD_BLOCK_JOB_MODE; + +static int +blockJobImpl(vshControl *ctl, const vshCmd *cmd, + virDomainBlockJobInfoPtr info, int mode) +{ + virDomainPtr dom = NULL; + const char *name, *path; + unsigned long bandwidth = 0; + int ret = -1; + + if (!vshConnectionUsability(ctl, ctl->conn)) + goto out; + + if (!(dom = vshCommandOptDomain(ctl, 
cmd, &name))) + goto out; + + if (vshCommandOptString(cmd, "path", &path) < 0) + goto out; + + if (vshCommandOptUL(cmd, "bandwidth", &bandwidth) < 0) + goto out; + + if (mode == VSH_CMD_BLOCK_JOB_ABORT) + ret = virDomainBlockJobAbort(dom, path, 0); + else if (mode == VSH_CMD_BLOCK_JOB_INFO) + ret = virDomainGetBlockJobInfo(dom, path, info, 0); + else if (mode == VSH_CMD_BLOCK_JOB_SPEED) + ret = virDomainBlockJobSetSpeed(dom, path, bandwidth, 0); + else if (mode == VSH_CMD_BLOCK_JOB_PULL) + ret = virDomainBlockPull(dom, path, bandwidth, 0); + +out: + virDomainFree(dom); + return ret; +} + +/* + * "blockpull" command + */ +static const vshCmdInfo info_block_pull[] = { + {"help", N_("Populate a disk from its backing image.")}, + {"desc", N_("Populate a disk from its backing image.")}, + {NULL, NULL} +}; + +static const vshCmdOptDef opts_block_pull[] = { + {"domain", VSH_OT_DATA, VSH_OFLAG_REQ, N_("domain name, id or uuid")}, + {"path", VSH_OT_DATA, VSH_OFLAG_REQ, N_("Fully-qualified path of disk")}, + {"bandwidth", VSH_OT_DATA, VSH_OFLAG_NONE, N_("Bandwidth limit in MB/s")}, + {NULL, 0, 0, NULL} +}; + +static bool +cmdBlockPull(vshControl *ctl, const vshCmd *cmd) +{ + if (blockJobImpl(ctl, cmd, NULL, VSH_CMD_BLOCK_JOB_PULL) != 0) + return false; + return true; +} + +/* + * "blockjob" command + */ +static const vshCmdInfo info_block_job[] = { + {"help", N_("Manage active block operations.")}, + {"desc", N_("Manage active block operations.")}, + {NULL, NULL} +}; + +static const vshCmdOptDef opts_block_job[] = { + {"domain", VSH_OT_DATA, VSH_OFLAG_REQ, N_("domain name, id or uuid")}, + {"path", VSH_OT_DATA, VSH_OFLAG_REQ, N_("Fully-qualified path of disk")}, + {"abort", VSH_OT_BOOL, VSH_OFLAG_NONE, N_("Abort the active job on the specified disk")}, + {"info", VSH_OT_BOOL, VSH_OFLAG_NONE, N_("Get active job information for the specified disk")}, + {"bandwidth", VSH_OT_DATA, VSH_OFLAG_NONE, N_("Set the bandwidth limit in MB/s")}, + {NULL, 0, 0, NULL} +}; + +static bool
+cmdBlockJob(vshControl *ctl, const vshCmd *cmd) +{ + int mode; + virDomainBlockJobInfo info; + const char *type; + int ret; + + if (vshCommandOptBool (cmd, "abort")) { + mode = VSH_CMD_BLOCK_JOB_ABORT; + } else if (vshCommandOptBool (cmd, "info")) { + mode = VSH_CMD_BLOCK_JOB_INFO; + } else if (vshCommandOptBool (cmd, "bandwidth")) { + mode = VSH_CMD_BLOCK_JOB_SPEED; + } else { + vshError(ctl, "%s", + _("One of --abort, --info, or --bandwidth is required")); + return false; + } + + ret = blockJobImpl(ctl, cmd, &info, mode); + if (ret < 0) + return false; + + if (ret == 0 || mode != VSH_CMD_BLOCK_JOB_INFO) + return true; + + if (info.type == VIR_DOMAIN_BLOCK_JOB_TYPE_PULL) + type = "Block Pull"; + else + type = "Unknown job"; + + print_job_progress(type, info.end - info.cur, info.end); + if (info.bandwidth != 0) + vshPrint(ctl, " Bandwidth limit: %lu MB/s\n", info.bandwidth); + return true; +} + + /* * "net-autostart" command */ @@ -11920,6 +12045,8 @@ static const vshCmdDef domManagementCmds[] = { info_attach_interface, 0}, {"autostart", cmdAutostart, opts_autostart, info_autostart, 0}, {"blkiotune", cmdBlkiotune, opts_blkiotune, info_blkiotune, 0}, + {"blockpull", cmdBlockPull, opts_block_pull, info_block_pull, 0}, + {"blockjob", cmdBlockJob, opts_block_job, info_block_job, 0}, #ifndef WIN32 {"console", cmdConsole, opts_console, info_console, 0}, #endif -- 1.7.3
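print_job_progress() gains a label argument so migration and block jobs share one status line of the form `Label: [ NN %]`. A minimal Python sketch of that formatting, assuming the usual convention of capping at 99% while work remains (virsh's exact rounding may differ):

```python
def job_progress(label, remaining, total):
    # virsh passes (0, 1) after completion to force "[100 %]"
    if total == 0:
        progress = 0
    elif remaining == 0:
        progress = 100
    else:
        # round down, and cap at 99% while any bytes remain
        progress = min((total - remaining) * 100 // total, 99)
    return "%s: [%3d %%]" % (label, progress)
```

The caller only swaps the label ("Migration" vs. "Block Pull"); the percentage logic stays shared.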

virDomainGetBlockJobInfo requires manual override since it returns a custom type. * python/generator.py: reenable bindings for this entry point * python/libvirt-override-api.xml python/libvirt-override.c: manual overrides Signed-off-by: Adam Litke <agl@us.ibm.com> --- python/generator.py | 2 +- python/libvirt-override-api.xml | 7 +++++++ python/libvirt-override.c | 39 +++++++++++++++++++++++++++++++++++++++ 3 files changed, 47 insertions(+), 1 deletions(-) diff --git a/python/generator.py b/python/generator.py index b25c74e..d0d3ae6 100755 --- a/python/generator.py +++ b/python/generator.py @@ -186,7 +186,6 @@ def enum(type, name, value): functions_failed = [] functions_skipped = [ "virConnectListDomains", - 'virDomainGetBlockJobInfo', ] skipped_modules = { @@ -370,6 +369,7 @@ skip_impl = ( 'virDomainSendKey', 'virNodeGetCPUStats', 'virNodeGetMemoryStats', + 'virDomainGetBlockJobInfo', ) diff --git a/python/libvirt-override-api.xml b/python/libvirt-override-api.xml index 01207d6..268f897 100644 --- a/python/libvirt-override-api.xml +++ b/python/libvirt-override-api.xml @@ -320,5 +320,12 @@ <arg name='flags' type='unsigned int' info='flags, curently unused'/> <return type='int' info="0 on success, -1 on error"/> </function> + <function name='virDomainGetBlockJobInfo' file='python'> + <info>Get progress information for a block job</info> + <arg name='dom' type='virDomainPtr' info='pointer to the domain'/> + <arg name='path' type='const char *' info='Fully-qualified filename of disk'/> + <arg name='flags' type='unsigned int' info='fine-tuning flags, currently unused, pass 0.'/> + <return type='virDomainBlockJobInfo' info='A dictionary containing job information.' 
/> + </function> </symbols> </api> diff --git a/python/libvirt-override.c b/python/libvirt-override.c index b713b6a..e89bc97 100644 --- a/python/libvirt-override.c +++ b/python/libvirt-override.c @@ -2413,6 +2413,44 @@ libvirt_virDomainGetJobInfo(PyObject *self ATTRIBUTE_UNUSED, PyObject *args) { return(py_retval); } +static PyObject * +libvirt_virDomainGetBlockJobInfo(PyObject *self ATTRIBUTE_UNUSED, + PyObject *args) +{ + virDomainPtr domain; + PyObject *pyobj_domain; + const char *path; + unsigned int flags; + virDomainBlockJobInfo info; + int c_ret; + PyObject *ret; + + if (!PyArg_ParseTuple(args, (char *)"Ozi:virDomainGetBlockJobInfo", + &pyobj_domain, &path, &flags)) + return(NULL); + domain = (virDomainPtr) PyvirDomain_Get(pyobj_domain); + +LIBVIRT_BEGIN_ALLOW_THREADS; + c_ret = virDomainGetBlockJobInfo(domain, path, &info, flags); +LIBVIRT_END_ALLOW_THREADS; + + if (c_ret != 1) + return VIR_PY_NONE; + + if ((ret = PyDict_New()) == NULL) + return VIR_PY_NONE; + + PyDict_SetItem(ret, libvirt_constcharPtrWrap("type"), + libvirt_intWrap(info.type)); + PyDict_SetItem(ret, libvirt_constcharPtrWrap("bandwidth"), + libvirt_ulongWrap(info.bandwidth)); + PyDict_SetItem(ret, libvirt_constcharPtrWrap("cur"), + libvirt_ulonglongWrap(info.cur)); + PyDict_SetItem(ret, libvirt_constcharPtrWrap("end"), + libvirt_ulonglongWrap(info.end)); + + return ret; +} /******************************************* * Helper functions to avoid importing modules @@ -3872,6 +3910,7 @@ static PyMethodDef libvirtMethods[] = { {(char *) "virDomainGetJobInfo", libvirt_virDomainGetJobInfo, METH_VARARGS, NULL}, {(char *) "virDomainSnapshotListNames", libvirt_virDomainSnapshotListNames, METH_VARARGS, NULL}, {(char *) "virDomainRevertToSnapshot", libvirt_virDomainRevertToSnapshot, METH_VARARGS, NULL}, + {(char *) "virDomainGetBlockJobInfo", libvirt_virDomainGetBlockJobInfo, METH_VARARGS, NULL}, {NULL, NULL, 0, NULL} }; -- 1.7.3
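The override returns a dict on success and None once no job is active, so a caller can poll until the pull finishes. Below is a hypothetical polling loop over that dict shape ({"type", "bandwidth", "cur", "end"}); `get_info` and `wait_for_pull` are illustrative names standing in for repeated `dom.blockJobInfo(path, 0)` calls, and nothing here touches libvirt itself:

```python
def wait_for_pull(get_info, max_polls=10):
    """Poll a blockJobInfo-style callable until it reports no active job."""
    for _ in range(max_polls):
        info = get_info()
        if info is None:  # job gone: the stream completed or was aborted
            return True
        assert 0 <= info["cur"] <= info["end"]
    return False

# Simulated job that finishes on the third poll:
samples = iter([
    {"type": 0, "bandwidth": 0, "cur": 1048576, "end": 4194304},
    {"type": 0, "bandwidth": 0, "cur": 3145728, "end": 4194304},
    None,
])
done = wait_for_pull(lambda: next(samples))
```

In real use a sleep between polls (or the block job event from the next patch) avoids busy-waiting.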

When an operation started by virDomainBlockPull completes (either with success or with failure), raise an event to indicate the final status. This allows an API user to avoid polling on virDomainGetBlockJobInfo if they would prefer to use the event mechanism. * daemon/remote.c: Dispatch events to client * include/libvirt/libvirt.h.in: Define event ID and callback signature * src/conf/domain_event.c, src/conf/domain_event.h, src/libvirt_private.syms: Extend API to handle the new event * src/qemu/qemu_driver.c: Connect to the QEMU monitor event for block_stream completion and emit a libvirt block pull event * src/remote/remote_driver.c: Receive and dispatch events to application * src/remote/remote_protocol.x: Wire protocol definition for the event * src/remote_protocol-structs: structure definitions for protocol verification * src/qemu/qemu_monitor.c, src/qemu/qemu_monitor.h, src/qemu/qemu_monitor_json.c: Watch for BLOCK_STREAM_COMPLETED event from QEMU monitor Signed-off-by: Adam Litke <agl@us.ibm.com> --- daemon/remote.c | 31 ++++++++++++++++++ include/libvirt/libvirt.h.in | 29 +++++++++++++++++ python/libvirt-override-virConnect.py | 12 +++++++ python/libvirt-override.c | 52 ++++++++++++++++++++++++++++++ src/conf/domain_event.c | 56 +++++++++++++++++++++++++++++++++ src/conf/domain_event.h | 9 +++++- src/libvirt_private.syms | 2 + src/qemu/qemu_monitor.c | 13 +++++++ src/qemu/qemu_monitor.h | 10 ++++++ src/qemu/qemu_monitor_json.c | 40 +++++++++++++++++++++++ src/qemu/qemu_process.c | 31 ++++++++++++++++++ src/remote/remote_driver.c | 31 ++++++++++++++++++ src/remote/remote_protocol.x | 10 +++++- src/remote_protocol-structs | 8 ++++- 14 files changed, 331 insertions(+), 3 deletions(-) diff --git a/daemon/remote.c b/daemon/remote.c index b471abc..939044c 100644 --- a/daemon/remote.c +++ b/daemon/remote.c @@ -339,6 +339,36 @@ static int remoteRelayDomainEventGraphics(virConnectPtr conn ATTRIBUTE_UNUSED, return 0; } +static int 
remoteRelayDomainEventBlockJob(virConnectPtr conn ATTRIBUTE_UNUSED, + virDomainPtr dom, + const char *path, + int type, + int status, + void *opaque) +{ + virNetServerClientPtr client = opaque; + remote_domain_event_block_job_msg data; + + if (!client) + return -1; + + VIR_DEBUG("Relaying domain block job event %s %d %s %i, %i", + dom->name, dom->id, path, type, status); + + /* build return data */ + memset(&data, 0, sizeof data); + make_nonnull_domain(&data.dom, dom); + data.path = (char*)path; + data.type = type; + data.status = status; + + remoteDispatchDomainEventSend(client, remoteProgram, + REMOTE_PROC_DOMAIN_EVENT_BLOCK_JOB, + (xdrproc_t)xdr_remote_domain_event_block_job_msg, &data); + + return 0; +} + static int remoteRelayDomainEventControlError(virConnectPtr conn ATTRIBUTE_UNUSED, virDomainPtr dom, @@ -373,6 +403,7 @@ static virConnectDomainEventGenericCallback domainEventCallbacks[] = { VIR_DOMAIN_EVENT_CALLBACK(remoteRelayDomainEventGraphics), VIR_DOMAIN_EVENT_CALLBACK(remoteRelayDomainEventIOErrorReason), VIR_DOMAIN_EVENT_CALLBACK(remoteRelayDomainEventControlError), + VIR_DOMAIN_EVENT_CALLBACK(remoteRelayDomainEventBlockJob), }; verify(ARRAY_CARDINALITY(domainEventCallbacks) == VIR_DOMAIN_EVENT_ID_LAST); diff --git a/include/libvirt/libvirt.h.in b/include/libvirt/libvirt.h.in index 23947c7..d215655 100644 --- a/include/libvirt/libvirt.h.in +++ b/include/libvirt/libvirt.h.in @@ -2735,6 +2735,34 @@ typedef void (*virConnectDomainEventGraphicsCallback)(virConnectPtr conn, void *opaque); /** + * virConnectDomainEventBlockJobStatus: + * + * The final status of a virDomainBlockPull() operation + */ +typedef enum { + VIR_DOMAIN_BLOCK_JOB_COMPLETED = 0, + VIR_DOMAIN_BLOCK_JOB_FAILED = 1, +} virConnectDomainEventBlockJobStatus; + +/** + * virConnectDomainEventBlockJobCallback: + * @conn: connection object + * @dom: domain on which the event occurred + * @path: fully-qualified filename of the affected disk + * @type: type of block job (virDomainBlockJobType)
+ * @status: final status of the operation (virConnectDomainEventBlockJobStatus) + * + * The callback signature to use when registering for an event of type + * VIR_DOMAIN_EVENT_ID_BLOCK_JOB with virConnectDomainEventRegisterAny() + */ +typedef void (*virConnectDomainEventBlockJobCallback)(virConnectPtr conn, + virDomainPtr dom, + const char *path, + int type, + int status, + void *opaque); + +/** * VIR_DOMAIN_EVENT_CALLBACK: * * Used to cast the event specific callback into the generic one @@ -2752,6 +2780,7 @@ typedef enum { VIR_DOMAIN_EVENT_ID_GRAPHICS = 5, /* virConnectDomainEventGraphicsCallback */ VIR_DOMAIN_EVENT_ID_IO_ERROR_REASON = 6, /* virConnectDomainEventIOErrorReasonCallback */ VIR_DOMAIN_EVENT_ID_CONTROL_ERROR = 7, /* virConnectDomainEventGenericCallback */ + VIR_DOMAIN_EVENT_ID_BLOCK_JOB = 8, /* virConnectDomainEventBlockJobCallback */ /* * NB: this enum value will increase over time as new events are diff --git a/python/libvirt-override-virConnect.py b/python/libvirt-override-virConnect.py index eeeedf9..65b5342 100644 --- a/python/libvirt-override-virConnect.py +++ b/python/libvirt-override-virConnect.py @@ -113,6 +113,18 @@ authScheme, subject, opaque) return 0 + def dispatchDomainEventBlockPullCallback(self, dom, path, type, status, cbData): + """Dispatches events to python user domain blockJob event callbacks + """ + try: + cb = cbData["cb"] + opaque = cbData["opaque"] + + cb(self, virDomain(self, _obj=dom), path, type, status, opaque) + return 0 + except AttributeError: + pass + def domainEventDeregisterAny(self, callbackID): """Removes a Domain Event Callback. 
De-registering for a domain callback will disable delivery of this event type """ diff --git a/python/libvirt-override.c b/python/libvirt-override.c index e89bc97..db76315 100644 --- a/python/libvirt-override.c +++ b/python/libvirt-override.c @@ -3582,6 +3582,55 @@ libvirt_virConnectDomainEventGraphicsCallback(virConnectPtr conn ATTRIBUTE_UNUSE return ret; } +static int +libvirt_virConnectDomainEventBlockJobCallback(virConnectPtr conn ATTRIBUTE_UNUSED, + virDomainPtr dom, + const char *path, + int type, + int status, + void *opaque) +{ + PyObject *pyobj_cbData = (PyObject*)opaque; + PyObject *pyobj_dom; + PyObject *pyobj_ret; + PyObject *pyobj_conn; + PyObject *dictKey; + int ret = -1; + + LIBVIRT_ENSURE_THREAD_STATE; + + /* Create a python instance of this virDomainPtr */ + virDomainRef(dom); + pyobj_dom = libvirt_virDomainPtrWrap(dom); + Py_INCREF(pyobj_cbData); + + dictKey = libvirt_constcharPtrWrap("conn"); + pyobj_conn = PyDict_GetItem(pyobj_cbData, dictKey); + Py_DECREF(dictKey); + + /* Call the Callback Dispatcher */ + pyobj_ret = PyObject_CallMethod(pyobj_conn, + (char*)"dispatchDomainEventBlockPullCallback", + (char*)"OsiiO", + pyobj_dom, path, type, status, pyobj_cbData); + + Py_DECREF(pyobj_cbData); + Py_DECREF(pyobj_dom); + + if(!pyobj_ret) { +#if DEBUG_ERROR + printf("%s - ret:%p\n", __FUNCTION__, pyobj_ret); +#endif + PyErr_Print(); + } else { + Py_DECREF(pyobj_ret); + ret = 0; + } + + LIBVIRT_RELEASE_THREAD_STATE; + return ret; +} + static PyObject * libvirt_virConnectDomainEventRegisterAny(ATTRIBUTE_UNUSED PyObject * self, PyObject * args) @@ -3636,6 +3685,9 @@ libvirt_virConnectDomainEventRegisterAny(ATTRIBUTE_UNUSED PyObject * self, case VIR_DOMAIN_EVENT_ID_CONTROL_ERROR: cb = VIR_DOMAIN_EVENT_CALLBACK(libvirt_virConnectDomainEventGenericCallback); break; + case VIR_DOMAIN_EVENT_ID_BLOCK_JOB: + cb = VIR_DOMAIN_EVENT_CALLBACK(libvirt_virConnectDomainEventBlockJobCallback); + break; } if (!cb) { diff --git a/src/conf/domain_event.c 
b/src/conf/domain_event.c index c435484..dda7e74 100644 --- a/src/conf/domain_event.c +++ b/src/conf/domain_event.c @@ -83,6 +83,11 @@ struct _virDomainEvent { char *authScheme; virDomainEventGraphicsSubjectPtr subject; } graphics; + struct { + char *path; + int type; + int status; + } blockJob; } data; }; @@ -499,6 +504,11 @@ void virDomainEventFree(virDomainEventPtr event) } VIR_FREE(event->data.graphics.subject); } + break; + + case VIR_DOMAIN_EVENT_ID_BLOCK_JOB: + VIR_FREE(event->data.blockJob.path); + break; } VIR_FREE(event->dom.name); @@ -874,6 +884,44 @@ virDomainEventPtr virDomainEventGraphicsNewFromObj(virDomainObjPtr obj, return ev; } +static virDomainEventPtr +virDomainEventBlockJobNew(int id, const char *name, unsigned char *uuid, + const char *path, int type, int status) +{ + virDomainEventPtr ev = + virDomainEventNewInternal(VIR_DOMAIN_EVENT_ID_BLOCK_JOB, + id, name, uuid); + + if (ev) { + if (!(ev->data.blockJob.path = strdup(path))) { + virReportOOMError(); + virDomainEventFree(ev); + return NULL; + } + ev->data.blockJob.type = type; + ev->data.blockJob.status = status; + } + + return ev; +} + +virDomainEventPtr virDomainEventBlockJobNewFromObj(virDomainObjPtr obj, + const char *path, + int type, + int status) +{ + return virDomainEventBlockJobNew(obj->def->id, obj->def->name, + obj->def->uuid, path, type, status); +} + +virDomainEventPtr virDomainEventBlockJobNewFromDom(virDomainPtr dom, + const char *path, + int type, + int status) +{ + return virDomainEventBlockJobNew(dom->id, dom->name, dom->uuid, + path, type, status); +} virDomainEventPtr virDomainEventControlErrorNewFromDom(virDomainPtr dom) { @@ -1027,6 +1075,14 @@ void virDomainEventDispatchDefaultFunc(virConnectPtr conn, cbopaque); break; + case VIR_DOMAIN_EVENT_ID_BLOCK_JOB: + ((virConnectDomainEventBlockJobCallback)cb)(conn, dom, + event->data.blockJob.path, + event->data.blockJob.type, + event->data.blockJob.status, + cbopaque); + break; + default: VIR_WARN("Unexpected event ID %d", 
event->eventID); break; diff --git a/src/conf/domain_event.h b/src/conf/domain_event.h index f56408f..b06be16 100644 --- a/src/conf/domain_event.h +++ b/src/conf/domain_event.h @@ -169,7 +169,14 @@ virDomainEventPtr virDomainEventGraphicsNewFromObj(virDomainObjPtr obj, virDomainEventPtr virDomainEventControlErrorNewFromDom(virDomainPtr dom); virDomainEventPtr virDomainEventControlErrorNewFromObj(virDomainObjPtr obj); - +virDomainEventPtr virDomainEventBlockJobNewFromObj(virDomainObjPtr obj, + const char *path, + int type, + int status); +virDomainEventPtr virDomainEventBlockJobNewFromDom(virDomainPtr dom, + const char *path, + int type, + int status); int virDomainEventQueuePush(virDomainEventQueuePtr evtQueue, virDomainEventPtr event); diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms index 3e3b1dd..c21119f 100644 --- a/src/libvirt_private.syms +++ b/src/libvirt_private.syms @@ -403,6 +403,8 @@ virDomainWatchdogModelTypeToString; # domain_event.h +virDomainEventBlockJobNewFromObj; +virDomainEventBlockJobNewFromDom; virDomainEventCallbackListAdd; virDomainEventCallbackListAddID; virDomainEventCallbackListCount; diff --git a/src/qemu/qemu_monitor.c b/src/qemu/qemu_monitor.c index 5c048eb..e5ab9f3 100644 --- a/src/qemu/qemu_monitor.c +++ b/src/qemu/qemu_monitor.c @@ -957,6 +957,19 @@ int qemuMonitorEmitGraphics(qemuMonitorPtr mon, return ret; } +int qemuMonitorEmitBlockJob(qemuMonitorPtr mon, + const char *diskAlias, + int type, + int status) +{ + int ret = -1; + VIR_DEBUG("mon=%p", mon); + + QEMU_MONITOR_CALLBACK(mon, ret, domainBlockJob, mon->vm, + diskAlias, type, status); + return ret; +} + int qemuMonitorSetCapabilities(qemuMonitorPtr mon) diff --git a/src/qemu/qemu_monitor.h b/src/qemu/qemu_monitor.h index c5d27ef..73be07d 100644 --- a/src/qemu/qemu_monitor.h +++ b/src/qemu/qemu_monitor.h @@ -117,6 +117,11 @@ struct _qemuMonitorCallbacks { const char *authScheme, const char *x509dname, const char *saslUsername); + int 
(*domainBlockJob)(qemuMonitorPtr mon, + virDomainObjPtr vm, + const char *diskAlias, + int type, + int status); }; @@ -179,6 +184,11 @@ int qemuMonitorEmitGraphics(qemuMonitorPtr mon, const char *authScheme, const char *x509dname, const char *saslUsername); +int qemuMonitorEmitBlockJob(qemuMonitorPtr mon, + const char *diskAlias, + int type, + int status); + int qemuMonitorStartCPUs(qemuMonitorPtr mon, diff --git a/src/qemu/qemu_monitor_json.c b/src/qemu/qemu_monitor_json.c index e7163bb..2b18cd4 100644 --- a/src/qemu/qemu_monitor_json.c +++ b/src/qemu/qemu_monitor_json.c @@ -56,6 +56,7 @@ static void qemuMonitorJSONHandleIOError(qemuMonitorPtr mon, virJSONValuePtr dat static void qemuMonitorJSONHandleVNCConnect(qemuMonitorPtr mon, virJSONValuePtr data); static void qemuMonitorJSONHandleVNCInitialize(qemuMonitorPtr mon, virJSONValuePtr data); static void qemuMonitorJSONHandleVNCDisconnect(qemuMonitorPtr mon, virJSONValuePtr data); +static void qemuMonitorJSONHandleBlockJob(qemuMonitorPtr mon, virJSONValuePtr data); struct { const char *type; @@ -71,6 +72,7 @@ struct { { "VNC_CONNECTED", qemuMonitorJSONHandleVNCConnect, }, { "VNC_INITIALIZED", qemuMonitorJSONHandleVNCInitialize, }, { "VNC_DISCONNECTED", qemuMonitorJSONHandleVNCDisconnect, }, + { "BLOCK_JOB_COMPLETED", qemuMonitorJSONHandleBlockJob, }, }; @@ -678,6 +680,44 @@ static void qemuMonitorJSONHandleVNCDisconnect(qemuMonitorPtr mon, virJSONValueP qemuMonitorJSONHandleVNC(mon, data, VIR_DOMAIN_EVENT_GRAPHICS_DISCONNECT); } +static void qemuMonitorJSONHandleBlockJob(qemuMonitorPtr mon, virJSONValuePtr data) +{ + const char *device; + const char *type_str; + int type = VIR_DOMAIN_BLOCK_JOB_TYPE_UNKNOWN; + unsigned long long offset, len; + int status = VIR_DOMAIN_BLOCK_JOB_FAILED; + + if ((device = virJSONValueObjectGetString(data, "device")) == NULL) { + VIR_WARN("missing device in block job event"); + goto out; + } + + if (virJSONValueObjectGetNumberUlong(data, "offset", &offset) < 0) { + VIR_WARN("missing 
offset in block job event"); + goto out; + } + + if (virJSONValueObjectGetNumberUlong(data, "len", &len) < 0) { + VIR_WARN("missing len in block job event"); + goto out; + } + + if ((type_str = virJSONValueObjectGetString(data, "type")) == NULL) { + VIR_WARN("missing type in block job event"); + goto out; + } + + if (STREQ(type_str, "stream")) + type = VIR_DOMAIN_BLOCK_JOB_TYPE_PULL; + + if (offset != 0 && offset == len) + status = VIR_DOMAIN_BLOCK_JOB_COMPLETED; + +out: + qemuMonitorEmitBlockJob(mon, device, type, status); +} + int qemuMonitorJSONHumanCommandWithFd(qemuMonitorPtr mon, diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c index 448b06e..1a76d52 100644 --- a/src/qemu/qemu_process.c +++ b/src/qemu/qemu_process.c @@ -661,6 +661,36 @@ qemuProcessHandleIOError(qemuMonitorPtr mon ATTRIBUTE_UNUSED, return 0; } +static int +qemuProcessHandleBlockJob(qemuMonitorPtr mon ATTRIBUTE_UNUSED, + virDomainObjPtr vm, + const char *diskAlias, + int type, + int status) +{ + struct qemud_driver *driver = qemu_driver; + virDomainEventPtr event = NULL; + const char *path; + virDomainDiskDefPtr disk; + + virDomainObjLock(vm); + disk = qemuProcessFindDomainDiskByAlias(vm, diskAlias); + + if (disk) { + path = disk->src; + event = virDomainEventBlockJobNewFromObj(vm, path, type, status); + } + + virDomainObjUnlock(vm); + + if (event) { + qemuDriverLock(driver); + qemuDomainEventQueue(driver, event); + qemuDriverUnlock(driver); + } + + return 0; +} static int qemuProcessHandleGraphics(qemuMonitorPtr mon ATTRIBUTE_UNUSED, @@ -778,6 +808,7 @@ static qemuMonitorCallbacks monitorCallbacks = { .domainWatchdog = qemuProcessHandleWatchdog, .domainIOError = qemuProcessHandleIOError, .domainGraphics = qemuProcessHandleGraphics, + .domainBlockJob = qemuProcessHandleBlockJob, }; static int diff --git a/src/remote/remote_driver.c b/src/remote/remote_driver.c index a70b455..c627644 100644 --- a/src/remote/remote_driver.c +++ b/src/remote/remote_driver.c @@ -223,6 +223,11 @@ 
remoteDomainBuildEventControlError(virNetClientProgramPtr prog, virNetClientPtr client, void *evdata, void *opaque); +static void +remoteDomainBuildEventBlockJob(virNetClientProgramPtr prog, + virNetClientPtr client, + void *evdata, void *opaque); + static virNetClientProgramEvent remoteDomainEvents[] = { { REMOTE_PROC_DOMAIN_EVENT_RTC_CHANGE, remoteDomainBuildEventRTCChange, @@ -256,6 +261,10 @@ static virNetClientProgramEvent remoteDomainEvents[] = { remoteDomainBuildEventControlError, sizeof(remote_domain_event_control_error_msg), (xdrproc_t)xdr_remote_domain_event_control_error_msg }, + { REMOTE_PROC_DOMAIN_EVENT_BLOCK_JOB, + remoteDomainBuildEventBlockJob, + sizeof(remote_domain_event_block_job_msg), + (xdrproc_t)xdr_remote_domain_event_block_job_msg }, }; enum virDrvOpenRemoteFlags { @@ -3093,6 +3102,28 @@ remoteDomainBuildEventIOErrorReason(virNetClientProgramPtr prog ATTRIBUTE_UNUSED remoteDomainEventQueue(priv, event); } +static void +remoteDomainBuildEventBlockJob(virNetClientProgramPtr prog ATTRIBUTE_UNUSED, + virNetClientPtr client ATTRIBUTE_UNUSED, + void *evdata, void *opaque) +{ + virConnectPtr conn = opaque; + struct private_data *priv = conn->privateData; + remote_domain_event_block_job_msg *msg = evdata; + virDomainPtr dom; + virDomainEventPtr event = NULL; + + dom = get_nonnull_domain(conn, msg->dom); + if (!dom) + return; + + event = virDomainEventBlockJobNewFromDom(dom, msg->path, msg->type, + msg->status); + + virDomainFree(dom); + + remoteDomainEventQueue(priv, event); +} static void remoteDomainBuildEventGraphics(virNetClientProgramPtr prog ATTRIBUTE_UNUSED, diff --git a/src/remote/remote_protocol.x b/src/remote/remote_protocol.x index 96113d8..6d6dff3 100644 --- a/src/remote/remote_protocol.x +++ b/src/remote/remote_protocol.x @@ -1934,6 +1934,13 @@ struct remote_domain_event_graphics_msg { remote_domain_event_graphics_identity subject<REMOTE_DOMAIN_EVENT_GRAPHICS_IDENTITY_MAX>; }; +struct remote_domain_event_block_job_msg { + 
remote_nonnull_domain dom; + remote_nonnull_string path; + int type; + int status; +}; + struct remote_domain_managed_save_args { remote_nonnull_domain dom; unsigned int flags; @@ -2422,7 +2429,8 @@ enum remote_procedure { REMOTE_PROC_DOMAIN_BLOCK_JOB_ABORT = 231, /* autogen autogen */ REMOTE_PROC_DOMAIN_GET_BLOCK_JOB_INFO = 232, /* skipgen skipgen */ REMOTE_PROC_DOMAIN_BLOCK_JOB_SET_SPEED = 233, /* autogen autogen */ - REMOTE_PROC_DOMAIN_BLOCK_PULL = 234 /* autogen autogen */ + REMOTE_PROC_DOMAIN_BLOCK_PULL = 234, /* autogen autogen */ + REMOTE_PROC_DOMAIN_EVENT_BLOCK_JOB = 235 /* skipgen skipgen */ /* * Notice how the entries are grouped in sets of 10 ? diff --git a/src/remote_protocol-structs b/src/remote_protocol-structs index d6bcdd0..ab9b190 100644 --- a/src/remote_protocol-structs +++ b/src/remote_protocol-structs @@ -1448,6 +1448,12 @@ struct remote_domain_event_graphics_msg { remote_domain_event_graphics_identity * subject_val; } subject; }; +struct remote_domain_event_block_job_msg { + remote_nonnull_domain dom; + remote_nonnull_string path; + int type; + int status; +}; struct remote_domain_managed_save_args { remote_nonnull_domain dom; u_int flags; @@ -1892,5 +1898,5 @@ enum remote_procedure { REMOTE_PROC_DOMAIN_GET_BLOCK_JOB_INFO = 232, REMOTE_PROC_DOMAIN_BLOCK_JOB_SET_SPEED = 233, REMOTE_PROC_DOMAIN_BLOCK_PULL = 234, - + REMOTE_PROC_DOMAIN_EVENT_BLOCK_JOB = 235, }; -- 1.7.3
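The JSON handler decides the final status purely from the BLOCK_JOB_COMPLETED payload: success only when `offset` is non-zero and equals `len`. The same rule in Python (the constants mirror the enum added to libvirt.h.in):

```python
VIR_DOMAIN_BLOCK_JOB_COMPLETED = 0
VIR_DOMAIN_BLOCK_JOB_FAILED = 1

def block_job_status(event):
    """Map a qemu BLOCK_JOB_COMPLETED event dict to a libvirt status.

    Mirrors qemuMonitorJSONHandleBlockJob(): a missing field, an empty
    job, or a partial copy (offset != len) is reported as a failure.
    """
    offset = event.get("offset")
    if offset and offset == event.get("len"):
        return VIR_DOMAIN_BLOCK_JOB_COMPLETED
    return VIR_DOMAIN_BLOCK_JOB_FAILED
```

An early cancel thus arrives as FAILED too; the callback's `type` field is what tells a pull apart from future job kinds.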

This patch is for information only and should not be committed. Signed-off-by: Adam Litke <agl@us.ibm.com> --- blockPull-test.py | 281 +++++++++++++++++++++++++++++++++++++++++++++++++++++ 1 files changed, 281 insertions(+), 0 deletions(-) create mode 100644 blockPull-test.py diff --git a/blockPull-test.py b/blockPull-test.py new file mode 100644 index 0000000..a75e0d9 --- /dev/null +++ b/blockPull-test.py @@ -0,0 +1,281 @@ +#!/usr/bin/env python + +import sys +import subprocess +import time +import unittest +import re +import threading +import libvirt + +qemu_img_bin = "/home/aglitke/src/qemu/qemu-img" +virsh_bin = "/home/aglitke/src/libvirt/tools/virsh" + +dom_xml = """ +<domain type='kvm'> + <name>blockPull-test</name> + <memory>131072</memory> + <currentMemory>131072</currentMemory> + <vcpu>1</vcpu> + <os> + <type arch='x86_64' machine='pc-0.13'>hvm</type> + <boot dev='hd'/> + </os> + <features> + <acpi/> + <apic/> + <pae/> + </features> + <clock offset='utc'/> + <on_poweroff>destroy</on_poweroff> + <on_reboot>restart</on_reboot> + <on_crash>restart</on_crash> + <devices> + <emulator>/home/aglitke/src/qemu/x86_64-softmmu/qemu-system-x86_64</emulator> + <disk type='file' device='disk'> + <driver name='qemu' type='qed'/> + <source file='/tmp/disk1.qed' /> + <target dev='vda' bus='virtio'/> + </disk> + <disk type='file' device='disk'> + <driver name='qemu' type='qed'/> + <source file='/tmp/disk2.qed' /> + <target dev='vdb' bus='virtio'/> + </disk> + <disk type='file' device='disk'> + <driver name='qemu' type='raw'/> + <source file='/tmp/disk3.raw' /> + <target dev='vdc' bus='virtio'/> + </disk> + <graphics type='vnc' port='-1' autoport='yes'/> + </devices> +</domain> +""" + +def qemu_img(*args): + global qemu_img_bin + + devnull = open('/dev/null', 'r+') + return subprocess.call([qemu_img_bin] + list(args), stdin=devnull, stdout=devnull) + +def virsh(*args): + global virsh_bin + + devnull = open('/dev/null', 'r+') + return subprocess.Popen([virsh_bin] + list(args),
+ stdout=subprocess.PIPE).communicate()[0] + #return subprocess.call([virsh_bin] + list(args), + # stdin=devnull, stdout=devnull, stderr=devnull) + +def make_baseimage(name, size_mb): + devnull = open('/dev/null', 'r+') + return subprocess.call(['dd', 'if=/dev/zero', "of=%s" % name, 'bs=1M', + 'count=%i' % size_mb], stdin=devnull, stdout=devnull, stderr=devnull) + +def has_backing_file(path): + global qemu_img_bin + p1 = subprocess.Popen([qemu_img_bin, "info", path], + stdout=subprocess.PIPE).communicate()[0] + matches = re.findall("^backing file:", p1, re.M) + if len(matches) > 0: + return True + return False + +class BlockPullTestCase(unittest.TestCase): + def _error_handler(self, ctx, error, dummy=None): + pass + + def create_disks(self, sparse): + self.disks = [ '/tmp/disk1.qed', '/tmp/disk2.qed', '/tmp/disk3.raw' ] + if sparse: + qemu_img('create', '-f', 'raw', '/tmp/backing1.img', '100M') + qemu_img('create', '-f', 'raw', '/tmp/backing2.img', '100M') + else: + make_baseimage('/tmp/backing1.img', 100) + make_baseimage('/tmp/backing2.img', 100) + qemu_img('create', '-f', 'qed', '-o', 'backing_file=/tmp/backing1.img', self.disks[0]) + qemu_img('create', '-f', 'qed', '-o', 'backing_file=/tmp/backing2.img', self.disks[1]) + qemu_img('create', '-f', 'raw', self.disks[2], '100M') + + def begin(self, sparse=True): + global dom_xml + + libvirt.registerErrorHandler(self._error_handler, None) + self.create_disks(sparse) + self.conn = libvirt.open('qemu:///system') + self.dom = self.conn.createXML(dom_xml, 0) + + def end(self): + self.dom.destroy() + self.conn.close() + +class TestBasicErrors(BlockPullTestCase): + def setUp(self): + self.begin() + + def tearDown(self): + self.end() + + def test_bad_path(self): + try: + self.dom.blockPull('/dev/null', 0, 0) + except libvirt.libvirtError, e: + self.assertEqual(libvirt.VIR_ERR_INVALID_ARG, e.get_error_code()) + else: + e = self.conn.virConnGetLastError() + self.assertEqual(libvirt.VIR_ERR_INVALID_ARG, e[0]) + + def 
test_abort_no_stream(self): + try: + self.dom.blockJobAbort(self.disks[0], 0) + except libvirt.libvirtError, e: + self.assertEqual(libvirt.VIR_ERR_OPERATION_INVALID, e.get_error_code()) + else: + e = self.conn.virConnGetLastError() + self.assertEqual(libvirt.VIR_ERR_OPERATION_INVALID, e[0]) + + def test_start_same_twice(self): + self.dom.blockPull(self.disks[0], 0, 0) + try: + self.dom.blockPull(self.disks[0], 0, 0) + except libvirt.libvirtError, e: + self.assertEqual(libvirt.VIR_ERR_OPERATION_FAILED, e.get_error_code()) + else: + e = self.conn.virConnGetLastError() + self.assertEqual(libvirt.VIR_ERR_OPERATION_FAILED, e[0]) + + def test_unsupported_disk(self): + try: + self.dom.blockPull(self.disks[2], 0, 0) + except libvirt.libvirtError, e: + self.assertEqual(libvirt.VIR_ERR_OPERATION_INVALID, e.get_error_code()) + else: + e = self.conn.virConnGetLastError() + self.assertEqual(libvirt.VIR_ERR_OPERATION_INVALID, e[0]) + +class TestBasicCommands(BlockPullTestCase): + def setUp(self): + pass + + def tearDown(self): + self.end() + + def test_start_stop(self): + self.begin(sparse=False) + self.dom.blockPull(self.disks[0], 0, 0) + time.sleep(1) + info = self.dom.blockJobInfo(self.disks[0], 0) + self.assertIsNot(None, info) + self.assertEqual(info['type'], libvirt.VIR_DOMAIN_BLOCK_JOB_TYPE_PULL) + self.dom.blockJobAbort(self.disks[0], 0) + time.sleep(1) + self.assertIs(None, self.dom.blockJobInfo(self.disks[0], 0)) + + def test_whole_disk(self): + self.begin() + self.assertTrue(has_backing_file(self.disks[0])) + self.dom.blockPull(self.disks[0], 0, 0) + for i in xrange(1, 5): + if self.dom.blockJobInfo(self.disks[0], 0) is None: + break + time.sleep(1) + self.assertFalse(has_backing_file(self.disks[0])) + + def test_two_disks_at_once(self): + self.begin() + disk_list = range(2) + for d in disk_list: + self.dom.blockPull(self.disks[d], 0, 0) + + for i in xrange(5): + for d in disk_list: + info = self.dom.blockJobInfo(self.disks[d], 0) + if info is None: + 
disk_list.remove(d) + if len(disk_list) == 0: + break + time.sleep(1) + for d in range(2): + self.assertFalse(has_backing_file(self.disks[d])) + +class TestEvents(BlockPullTestCase): + def eventLoopRun(self): + while self.do_events: + libvirt.virEventRunDefaultImpl() + + def eventLoopStart(self): + libvirt.virEventRegisterDefaultImpl() + self.eventLoopThread = threading.Thread(target=self.eventLoopRun, name="libvirtEventLoop") + self.eventLoopThread.setDaemon(True) + self.do_events = True + self.eventLoopThread.start() + + def eventLoopStop(self): + self.do_events = False + + def setUp(self): + self.eventLoopStart() + + def tearDown(self): + self.end() + + @staticmethod + def recordBlockJobEvent(conn, dom, path, type, status, inst): + inst.event = (dom, path, type, status) + + def test_event_complete(self): + self.begin() + self.event = None + self.conn.domainEventRegisterAny(self.dom, libvirt.VIR_DOMAIN_EVENT_ID_BLOCK_JOB, + TestEvents.recordBlockJobEvent, self) + self.dom.blockPull(self.disks[0], 0, 0) + for i in xrange(1, 5): + if self.event is not None: + break + time.sleep(1) + self.eventLoopStop() + self.assertIsNot(None, self.event) + self.assertFalse(has_backing_file(self.disks[0])) + self.assertEqual(self.event[1], self.disks[0]) + self.assertEqual(self.event[2], libvirt.VIR_DOMAIN_BLOCK_JOB_TYPE_PULL) + self.assertEqual(self.event[3], libvirt.VIR_DOMAIN_BLOCK_JOB_COMPLETED) + +class TestVirsh(BlockPullTestCase): + def setUp(self): + pass + + def tearDown(self): + self.end() + + def test_blockpull(self): + self.begin() + virsh('blockpull', self.dom.name(), self.disks[0]) + for i in xrange(1, 5): + if self.dom.blockJobInfo(self.disks[0], 0) is None: + break + time.sleep(1) + self.assertFalse(has_backing_file(self.disks[0])) + + def test_job_abort(self): + self.begin(sparse=False) + self.dom.blockPull(self.disks[0], 0, 0) + time.sleep(1) + self.assertIsNot(None, self.dom.blockJobInfo(self.disks[0], 0)) + virsh('blockjob', self.dom.name(), '--abort', 
self.disks[0]) + time.sleep(2) + self.assertIs(None, self.dom.blockJobInfo(self.disks[0], 0)) + self.assertTrue(has_backing_file(self.disks[0])) + + def test_job_info(self): + self.begin(sparse=False) + virsh('blockpull', self.dom.name(), self.disks[0]) + for i in xrange(1, 10): + output = virsh('blockjob', self.dom.name(), '--info', self.disks[0]) + matches = re.findall("^Block Pull:", output, re.M) + if len(matches) > 0: + break + time.sleep(1) + self.assertFalse(has_backing_file(self.disks[0])) + +if __name__ == '__main__': + unittest.main() -- 1.7.3

On Thu, Jul 21, 2011 at 01:55:04PM -0500, Adam Litke wrote:
An event (VIR_DOMAIN_EVENT_ID_BLOCK_JOB) will be emitted when a disk has been fully populated or if a BlockPull() operation was terminated due to an error. This event is useful to avoid polling on virDomainBlockJobInfo() for completion and could also be used by the security driver to revoke access to the backing file when it is no longer needed.
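The calling pattern described above can be sketched in Python. This is an illustrative sketch only: FakeDomain is a stand-in object invented here to mimic the documented semantics (blockPull() starts a background job, blockJobInfo() reports progress and stops reporting the job once it completes, blockJobAbort() cancels it); real code would call the same methods on a virDomain object from libvirt-python, as the test script in this thread does, and the constant value is assumed to mirror libvirt's.

```python
# Illustrative sketch of the virDomainBlockPull calling pattern.
# FakeDomain is a stand-in invented for this example; with real libvirt
# you would call the same methods on a virDomain object instead.

VIR_DOMAIN_BLOCK_JOB_TYPE_PULL = 1  # assumed to mirror libvirt's constant

class FakeDomain(object):
    """Mimics a domain whose block pull copies 20 units per poll."""
    def __init__(self, path, end=100):
        self.path = path
        self.end = end
        self.job = None          # None means no block job is active

    def blockPull(self, path, bandwidth, flags):
        # Start streaming the whole device in the background.
        self.job = {'type': VIR_DOMAIN_BLOCK_JOB_TYPE_PULL,
                    'cur': 0, 'end': self.end, 'bandwidth': bandwidth}

    def blockJobInfo(self, path, flags):
        # Progress the simulated job; a finished job is no longer reported.
        if self.job is not None:
            self.job['cur'] = min(self.job['cur'] + 20, self.job['end'])
            if self.job['cur'] == self.job['end']:
                self.job = None
        return self.job

    def blockJobAbort(self, path, flags):
        # Cancel an ongoing stream.
        self.job = None

def pull_and_wait(dom, path):
    """Start a pull and poll blockJobInfo() until the job disappears."""
    dom.blockPull(path, 0, 0)    # bandwidth=0 requests no rate limit
    progress = []
    while True:
        info = dom.blockJobInfo(path, 0)
        if info is None:         # completed (or aborted)
            return progress
        progress.append((info['cur'], info['end']))

print(pull_and_wait(FakeDomain('/tmp/disk1.qed'), '/tmp/disk1.qed'))
# prints [(20, 100), (40, 100), (60, 100), (80, 100)]
```

In practice a caller would register for VIR_DOMAIN_EVENT_ID_BLOCK_JOB, as described in the paragraph above, rather than poll in a tight loop.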
Thanks Adam for that revised patch set. ACK. It all looked good to me, based on previous review and a last look. I just had to fix a few merge conflicts due to new entry points being added in the meantime, and one commit message, but basically it was clean :-)

So I pushed the set, except patch 8 of course. I'm not sure if we should try to store it in the examples, or on the wiki. The wiki might be a bit more logical, because I'm not sure we can run the test as-is in all setups.

I think the remaining item would be to add documentation about how to use this; the paragraphs above should probably land somewhere on the web site, ideally in the development guide http://libvirt.org/devguide.html, but I'm open to suggestions :-)

Daniel

--
Daniel Veillard      | libxml Gnome XML XSLT toolkit  http://xmlsoft.org/
daniel@veillard.com  | Rpmfind RPM search engine  http://rpmfind.net/
http://veillard.com/ | virtualization library  http://libvirt.org/

Thanks Daniel. The upstream code is looking good. I will work on adding some documentation to the development guide. On 07/22/2011 01:07 AM, Daniel Veillard wrote:
-- Adam Litke IBM Linux Technology Center

Hi, Daniel and Adam. Has the patchset been merged into libvirt upstream?

On Fri, Jul 22, 2011 at 11:01 PM, Adam Litke <agl@us.ibm.com> wrote:
-- Regards, Zhi Yong Wu

On 08/14/2011 11:40 PM, Zhi Yong Wu wrote:
Hi, Daniel and Adam.
Has the patchset been merged into libvirt upstream?
Yes they have. However, the functionality is still missing from qemu. The two communities have agreed upon the interface and semantics, but work continues on the qemu implementation. Let me know if you would like a link to some qemu patches that support this functionality for qed images. -- Adam Litke IBM Linux Technology Center

On Mon, Aug 15, 2011 at 1:36 PM, Adam Litke <agl@us.ibm.com> wrote:
I also have a series to put these commands into QEMU without any image format support. They just return NotSupported, but it puts the commands into QEMU so we can run the libvirt commands against them. I will send those patches to qemu-devel soon.

Stefan

On Tue, Aug 16, 2011 at 6:52 AM, Stefan Hajnoczi <stefanha@gmail.com> wrote:
I also have a series to put these commands into QEMU without any image format support. They just return NotSupported but it puts the commands into QEMU so we can run the libvirt commands against them.

Without image format support, it will not be a nice sample for us. :) Why did you not implement them?
-- Regards, Zhi Yong Wu

On Tue, Aug 16, 2011 at 09:28:27AM +0800, Zhi Yong Wu wrote:
Without image format support, it will not be a nice sample for us. :) Why did you not implement them?
Code for block streaming with QED will be going into QEMU in the not too distant future too.

Daniel

--
|: http://berrange.com      -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org       -o- http://virt-manager.org                 :|
|: http://autobuild.org     -o- http://search.cpan.org/~danberr/        :|
|: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc          :|

On Tue, Aug 16, 2011 at 11:39 PM, Daniel P. Berrange <berrange@redhat.com> wrote:
Got it, thanks.
-- Regards, Zhi Yong Wu

On Tue, Aug 16, 2011 at 2:28 AM, Zhi Yong Wu <zwu.kernel@gmail.com> wrote:
Without image format support, it will not be a nice sample for us. :) Why did you not implement them?
The point is just to commit to the API in QEMU while we do the generic image streaming implementation that covers all formats that can do backing files.

Stefan

On Mon, Aug 15, 2011 at 8:36 PM, Adam Litke <agl@us.ibm.com> wrote:
Sure. If you share it with me, I can also sample this feature while learning your libvirt API. Anyway, thanks, Adam.
-- Regards, Zhi Yong Wu

Stefan has a git repo with QED block streaming support here:

git://repo.or.cz/qemu/stefanha.git stream-command

On 08/15/2011 08:25 PM, Zhi Yong Wu wrote:
-- Adam Litke IBM Linux Technology Center

On Tue, 2011-08-16 at 09:31 -0500, Adam Litke wrote:
Stefan has a git repo with QED block streaming support here:
git://repo.or.cz/qemu/stefanha.git stream-command

OK, thanks.
-- Regards, Zhi Yong Wu

On Tue, Aug 16, 2011 at 10:31 PM, Adam Litke <agl@us.ibm.com> wrote:
Stefan has a git repo with QED block streaming support here:
git://repo.or.cz/qemu/stefanha.git stream-command

This tree branch seems to be a little wrong; I tried multiple times and still got this error:
Cloning into stream-command...
remote: Counting objects: 85498, done.
remote: Compressing objects: 100% (17878/17878), done.
remote: Total 85498 (delta 67839), reused 85023 (delta 67511)
Receiving objects: 100% (85498/85498), 31.90 MiB | 2.46 MiB/s, done.
Resolving deltas: 100% (67839/67839), done.
warning: remote HEAD refers to nonexistent ref, unable to checkout.
-- Regards, Zhi Yong Wu

On Wed, Aug 17, 2011 at 1:47 PM, Zhi Yong Wu <zwu.kernel@gmail.com> wrote:
You have cloned successfully. My public repo just doesn't have a 'master' branch, only topic branches for features that I develop. Try this:

cd stream-command
git checkout stream-command

BTW, the git clone command to directly clone and check out a particular branch is (but be warned that older versions of git do not support the -b option):

git clone -b stream-command git://repo.or.cz/qemu/stefanha.git

Stefan
participants (6):
- Adam Litke
- Daniel P. Berrange
- Daniel Veillard
- Stefan Hajnoczi
- Zhi Yong Wu
- Zhi Yong Wu