On Wed, Jun 10, 2015 at 07:56:32 -0400, John Ferlan wrote:
> On 06/02/2015 08:34 AM, Jiri Denemark wrote:
> By switching block jobs to use domain conditions, we can drop some
> pretty complicated code in NBD storage migration. Moreover, we are
> getting ready for migration events (to replace our 50ms polling on
> query-migrate). The ultimate goal is to have a single loop waiting
> (virDomainObjWait) for any migration-related event (changed status of a
> migration, disk mirror events, internal abort requests, ...). This patch
> makes NBD storage migration ready for this: first we call a QMP command
> to start or cancel drive mirror on all disks we are interested in, and
> then we wait for a single condition which is signaled on any event
> related to any of the mirrors.
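
For illustration, the single wait loop described above might look roughly
like this. This is only a sketch: qemuMigrationWaitForDriveMirror is a
hypothetical name, while qemuMigrationDriveMirrorReady and virDomainObjWait
are the functions the patch and commit message refer to, and the real code
in the patch also handles cancellation and cleanup.

/* Hypothetical sketch, not the code from the patch. */
static int
qemuMigrationWaitForDriveMirror(virQEMUDriverPtr driver,
                                virDomainObjPtr vm)
{
    int rv;

    /* qemuMigrationDriveMirrorReady() presumably returns 1 once every
     * mirror is ready, 0 while some are still synchronizing, and -1 on
     * failure. */
    while ((rv = qemuMigrationDriveMirrorReady(driver, vm)) != 1) {
        if (rv < 0)
            return -1;

        /* Sleep until the domain condition is signaled by any block job
         * event (or an abort request) instead of polling every 50ms. */
        if (virDomainObjWait(vm) < 0)
            return -1;
    }

    return 0;
}
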
>
> Signed-off-by: Jiri Denemark <jdenemar(a)redhat.com>
> ---
>
> Notes:
> Version 2:
> - slightly modified to use domain conditions
>
> po/POTFILES.in | 1 -
> src/qemu/qemu_blockjob.c | 137 ++-------------------
> src/qemu/qemu_blockjob.h | 12 +-
> src/qemu/qemu_domain.c | 17 +--
> src/qemu/qemu_domain.h | 1 -
> src/qemu/qemu_driver.c | 24 ++--
> src/qemu/qemu_migration.c | 299 ++++++++++++++++++++++++++--------------------
> src/qemu/qemu_process.c | 13 +-
> 8 files changed, 197 insertions(+), 307 deletions(-)
>
...
> diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
> index 93e29e7..61b3e34 100644
> --- a/src/qemu/qemu_migration.c
> +++ b/src/qemu/qemu_migration.c
> @@ -1720,7 +1720,7 @@ qemuMigrationStopNBDServer(virQEMUDriverPtr driver,
>
>
> /**
> - * qemuMigrationCheckDriveMirror:
> + * qemuMigrationDriveMirrorReady:
> * @driver: qemu driver
> * @vm: domain
> *
> @@ -1733,111 +1733,148 @@ qemuMigrationStopNBDServer(virQEMUDriverPtr driver,
> * -1 on error.
> */
> static int
> -qemuMigrationCheckDriveMirror(virQEMUDriverPtr driver,
> +qemuMigrationDriveMirrorReady(virQEMUDriverPtr driver,
> virDomainObjPtr vm)
> {
> size_t i;
> - int ret = 1;
> + size_t notReady = 0;
> + int status;
>
> for (i = 0; i < vm->def->ndisks; i++) {
> virDomainDiskDefPtr disk = vm->def->disks[i];
> qemuDomainDiskPrivatePtr diskPriv = QEMU_DOMAIN_DISK_PRIVATE(disk);
>
> - if (!diskPriv->migrating || !diskPriv->blockJobSync)
> + if (!diskPriv->migrating)
> continue;
>
> - /* process any pending event */
> - if (qemuBlockJobSyncWaitWithTimeout(driver, vm, disk,
> - 0ull, NULL) < 0)
> - return -1;
> -
> - switch (disk->mirrorState) {
> - case VIR_DOMAIN_DISK_MIRROR_STATE_NONE:
> - ret = 0;
> - break;
> - case VIR_DOMAIN_DISK_MIRROR_STATE_ABORT:
> + status = qemuBlockJobUpdate(driver, vm, disk);
> + if (status == VIR_DOMAIN_BLOCK_JOB_FAILED) {
> ah - and now I see it cannot be a void function; however, this could
> cause the Coverity checker to complain about other uses. I wasn't able
> to apply the series to my Coverity branch since it wouldn't apply with
> git am. Still, that leaves me wondering why other paths don't check the
> status. They may not care, but with one or more paths checking (as seen
> in the rest of this module), a future reader is left not knowing whether
> the paths that aren't checking should check.
Well, the return value is the event qemuBlockJobUpdate processed; it does
not indicate success or failure, since qemuBlockJobUpdate always succeeds.
So I think it's pretty clear that if a caller is not interested in the
event which got processed (most callers just check the resulting state),
it simply ignores it. We can always add ignore_value() if Coverity is not
happy about it...
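
For illustration, the two caller styles discussed here might look roughly
like this (a sketch only: the example* helpers are hypothetical, while
qemuBlockJobUpdate(), ignore_value(), mirrorState and the enum values are
the names used in the patch and elsewhere in libvirt):

/* Style 1: the caller cares which event was just processed and checks
 * the return value of qemuBlockJobUpdate() directly. */
static bool
exampleMirrorFailed(virQEMUDriverPtr driver,
                    virDomainObjPtr vm,
                    virDomainDiskDefPtr disk)
{
    return qemuBlockJobUpdate(driver, vm, disk) == VIR_DOMAIN_BLOCK_JOB_FAILED;
}

/* Style 2: the caller only cares about the state the update left behind,
 * so the processed event is explicitly discarded with ignore_value(). */
static bool
exampleMirrorAborted(virQEMUDriverPtr driver,
                     virDomainObjPtr vm,
                     virDomainDiskDefPtr disk)
{
    ignore_value(qemuBlockJobUpdate(driver, vm, disk));
    return disk->mirrorState == VIR_DOMAIN_DISK_MIRROR_STATE_ABORT;
}
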
Jirka