[GSoC][PATCH v2 0/6] remove dependency of domainJobs on

This series mainly moves code around, with the aim of making domain jobs
hypervisor-agnostic. The previous version of this series can be found
here[1].

[1]: https://www.redhat.com/archives/libvir-list/2020-August/msg00180.html

Prathamesh Chavan (6):
  qemu_domain: Added `qemuDomainJobInfo` to domainJob's `privateData`
  qemu_domainjob: jobs_queued parameter added to `qemuDomainJobPrivate`
  qemu_domainjob: `maxQueuedJobs` added to `qemuDomainJobPrivate`
  qemu_domain: function declarations moved to correct file
  virmigration: `qemuMigrationJobPhase` transformed for more generic use
  qemu_domainjob: remove dependency on `qemuDomainDiskPrivatePtr`

 src/hypervisor/meson.build       |   1 +
 src/hypervisor/virmigration.c    |  41 ++
 src/hypervisor/virmigration.h    |  38 ++
 src/libvirt_private.syms         |   4 +
 src/qemu/MIGRATION.txt           |   8 +-
 src/qemu/qemu_backup.c           |  22 +-
 src/qemu/qemu_domain.c           | 653 +++++++++++++++++++++++++++++-
 src/qemu/qemu_domain.h           |  82 +++-
 src/qemu/qemu_domainjob.c        | 662 +------------------------------
 src/qemu/qemu_domainjob.h        |  99 +----
 src/qemu/qemu_driver.c           |  49 ++-
 src/qemu/qemu_migration.c        | 135 ++++---
 src/qemu/qemu_migration.h        |  17 +-
 src/qemu/qemu_migration_cookie.c |   8 +-
 src/qemu/qemu_process.c          |  82 ++--
 15 files changed, 1014 insertions(+), 887 deletions(-)
 create mode 100644 src/hypervisor/virmigration.c
 create mode 100644 src/hypervisor/virmigration.h

-- 
2.25.1

Since `qemuDomainJobInfo` contains attributes specific to the qemu
hypervisor's jobs, move the `current` and `completed` attributes from
`qemuDomainJobObj` into its `privateData` structure. In the process,
two callbacks, `setJobInfoOperation` and `currentJobInfoInit`, are
introduced into qemuDomainJob's callback structure.

Signed-off-by: Prathamesh Chavan <pc44800@gmail.com>
---
 src/qemu/qemu_backup.c           |  22 +-
 src/qemu/qemu_domain.c           | 498 +++++++++++++++++++++++++++++++
 src/qemu/qemu_domain.h           |  74 +++++
 src/qemu/qemu_domainjob.c        | 483 +----------------------------
 src/qemu/qemu_domainjob.h        |  81 +----
 src/qemu/qemu_driver.c           |  49 +--
 src/qemu/qemu_migration.c        |  62 ++--
 src/qemu/qemu_migration_cookie.c |   8 +-
 src/qemu/qemu_process.c          |  32 +-
 9 files changed, 680 insertions(+), 629 deletions(-)

diff --git a/src/qemu/qemu_backup.c b/src/qemu/qemu_backup.c
index a402730d38..1822c6f267 100644
--- a/src/qemu/qemu_backup.c
+++ b/src/qemu/qemu_backup.c
@@ -529,20 +529,21 @@ qemuBackupJobTerminate(virDomainObjPtr vm,
 {
     qemuDomainObjPrivatePtr priv = vm->privateData;
+    qemuDomainJobPrivatePtr jobPriv = priv->job.privateData;
     size_t i;
 
-    qemuDomainJobInfoUpdateTime(priv->job.current);
+    qemuDomainJobInfoUpdateTime(jobPriv->current);
 
-    g_clear_pointer(&priv->job.completed, qemuDomainJobInfoFree);
-    priv->job.completed = qemuDomainJobInfoCopy(priv->job.current);
+    g_clear_pointer(&jobPriv->completed, qemuDomainJobInfoFree);
+    jobPriv->completed = qemuDomainJobInfoCopy(jobPriv->current);
 
-    priv->job.completed->stats.backup.total = priv->backup->push_total;
-    priv->job.completed->stats.backup.transferred = priv->backup->push_transferred;
-    priv->job.completed->stats.backup.tmp_used = priv->backup->pull_tmp_used;
-    priv->job.completed->stats.backup.tmp_total = priv->backup->pull_tmp_total;
+    jobPriv->completed->stats.backup.total = priv->backup->push_total;
+    jobPriv->completed->stats.backup.transferred = priv->backup->push_transferred;
+    jobPriv->completed->stats.backup.tmp_used = priv->backup->pull_tmp_used;
+    jobPriv->completed->stats.backup.tmp_total = priv->backup->pull_tmp_total;
 
-    priv->job.completed->status = jobstatus;
-    priv->job.completed->errmsg = g_strdup(priv->backup->errmsg);
+    jobPriv->completed->status = jobstatus;
+    jobPriv->completed->errmsg = g_strdup(priv->backup->errmsg);
 
     qemuDomainEventEmitJobCompleted(priv->driver, vm);
 
@@ -694,6 +695,7 @@ qemuBackupBegin(virDomainObjPtr vm,
                 unsigned int flags)
 {
     qemuDomainObjPrivatePtr priv = vm->privateData;
+    qemuDomainJobPrivatePtr jobPriv = priv->job.privateData;
     g_autoptr(virQEMUDriverConfig) cfg = virQEMUDriverGetConfig(priv->driver);
     g_autoptr(virDomainBackupDef) def = NULL;
     g_autofree char *suffix = NULL;
@@ -745,7 +747,7 @@ qemuBackupBegin(virDomainObjPtr vm,
     qemuDomainObjSetAsyncJobMask(vm, (QEMU_JOB_DEFAULT_MASK |
                                       JOB_MASK(QEMU_JOB_SUSPEND) |
                                       JOB_MASK(QEMU_JOB_MODIFY)));
-    priv->job.current->statsType = QEMU_DOMAIN_JOB_STATS_TYPE_BACKUP;
+    jobPriv->current->statsType = QEMU_DOMAIN_JOB_STATS_TYPE_BACKUP;
 
     if (!virDomainObjIsActive(vm)) {
         virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s",
diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index c440c79e1d..1ae44ae39f 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -75,6 +75,457 @@
 VIR_LOG_INIT("qemu.qemu_domain");
 
+static virDomainJobType
+qemuDomainJobStatusToType(qemuDomainJobStatus status)
+{
+    switch (status) {
+    case QEMU_DOMAIN_JOB_STATUS_NONE:
+        break;
+
+    case QEMU_DOMAIN_JOB_STATUS_ACTIVE:
+    case QEMU_DOMAIN_JOB_STATUS_MIGRATING:
+    case QEMU_DOMAIN_JOB_STATUS_QEMU_COMPLETED:
+    case QEMU_DOMAIN_JOB_STATUS_POSTCOPY:
+    case QEMU_DOMAIN_JOB_STATUS_PAUSED:
+        return VIR_DOMAIN_JOB_UNBOUNDED;
+
+    case QEMU_DOMAIN_JOB_STATUS_COMPLETED:
+        return VIR_DOMAIN_JOB_COMPLETED;
+
+    case QEMU_DOMAIN_JOB_STATUS_FAILED:
+        return VIR_DOMAIN_JOB_FAILED;
+
+    case QEMU_DOMAIN_JOB_STATUS_CANCELED:
+        return VIR_DOMAIN_JOB_CANCELLED;
+    }
+
+    return VIR_DOMAIN_JOB_NONE;
+}
+
+int
+qemuDomainJobInfoUpdateTime(qemuDomainJobInfoPtr jobInfo)
+{
+    unsigned long long now;
+
+    if (!jobInfo->started)
+        return 0;
+
+    if (virTimeMillisNow(&now) < 0)
+        return -1;
+
+    if (now < jobInfo->started) {
+        VIR_WARN("Async job starts in the future");
+        jobInfo->started = 0;
+        return 0;
+    }
+
+    jobInfo->timeElapsed = now - jobInfo->started;
+    return 0;
+}
+
+int
+qemuDomainJobInfoUpdateDowntime(qemuDomainJobInfoPtr jobInfo)
+{
+    unsigned long long now;
+
+    if (!jobInfo->stopped)
+        return 0;
+
+    if (virTimeMillisNow(&now) < 0)
+        return -1;
+
+    if (now < jobInfo->stopped) {
+        VIR_WARN("Guest's CPUs stopped in the future");
+        jobInfo->stopped = 0;
+        return 0;
+    }
+
+    jobInfo->stats.mig.downtime = now - jobInfo->stopped;
+    jobInfo->stats.mig.downtime_set = true;
+    return 0;
+}
+
+
+int
+qemuDomainJobInfoToInfo(qemuDomainJobInfoPtr jobInfo,
+                        virDomainJobInfoPtr info)
+{
+    info->type = qemuDomainJobStatusToType(jobInfo->status);
+    info->timeElapsed = jobInfo->timeElapsed;
+
+    switch (jobInfo->statsType) {
+    case QEMU_DOMAIN_JOB_STATS_TYPE_MIGRATION:
+        info->memTotal = jobInfo->stats.mig.ram_total;
+        info->memRemaining = jobInfo->stats.mig.ram_remaining;
+        info->memProcessed = jobInfo->stats.mig.ram_transferred;
+        info->fileTotal = jobInfo->stats.mig.disk_total +
+                          jobInfo->mirrorStats.total;
+        info->fileRemaining = jobInfo->stats.mig.disk_remaining +
+                              (jobInfo->mirrorStats.total -
+                               jobInfo->mirrorStats.transferred);
+        info->fileProcessed = jobInfo->stats.mig.disk_transferred +
+                              jobInfo->mirrorStats.transferred;
+        break;
+
+    case QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP:
+        info->memTotal = jobInfo->stats.mig.ram_total;
+        info->memRemaining = jobInfo->stats.mig.ram_remaining;
+        info->memProcessed = jobInfo->stats.mig.ram_transferred;
+        break;
+
+    case QEMU_DOMAIN_JOB_STATS_TYPE_MEMDUMP:
+        info->memTotal = jobInfo->stats.dump.total;
+        info->memProcessed = jobInfo->stats.dump.completed;
+        info->memRemaining = info->memTotal - info->memProcessed;
+        break;
+
+    case QEMU_DOMAIN_JOB_STATS_TYPE_BACKUP:
+        info->fileTotal = jobInfo->stats.backup.total;
+        info->fileProcessed = jobInfo->stats.backup.transferred;
+        info->fileRemaining = info->fileTotal - info->fileProcessed;
+        break;
+
+    case QEMU_DOMAIN_JOB_STATS_TYPE_NONE:
+        break;
+    }
+
+    info->dataTotal = info->memTotal + info->fileTotal;
+    info->dataRemaining = info->memRemaining + info->fileRemaining;
+    info->dataProcessed = info->memProcessed + info->fileProcessed;
+
+    return 0;
+}
+
+
+static int
+qemuDomainMigrationJobInfoToParams(qemuDomainJobInfoPtr jobInfo,
+                                   int *type,
+                                   virTypedParameterPtr *params,
+                                   int *nparams)
+{
+    qemuMonitorMigrationStats *stats = &jobInfo->stats.mig;
+    qemuDomainMirrorStatsPtr mirrorStats = &jobInfo->mirrorStats;
+    virTypedParameterPtr par = NULL;
+    int maxpar = 0;
+    int npar = 0;
+    unsigned long long mirrorRemaining = mirrorStats->total -
+                                         mirrorStats->transferred;
+
+    if (virTypedParamsAddInt(&par, &npar, &maxpar,
+                             VIR_DOMAIN_JOB_OPERATION,
+                             jobInfo->operation) < 0)
+        goto error;
+
+    if (virTypedParamsAddULLong(&par, &npar, &maxpar,
+                                VIR_DOMAIN_JOB_TIME_ELAPSED,
+                                jobInfo->timeElapsed) < 0)
+        goto error;
+
+    if (jobInfo->timeDeltaSet &&
+        jobInfo->timeElapsed > jobInfo->timeDelta &&
+        virTypedParamsAddULLong(&par, &npar, &maxpar,
+                                VIR_DOMAIN_JOB_TIME_ELAPSED_NET,
+                                jobInfo->timeElapsed - jobInfo->timeDelta) < 0)
+        goto error;
+
+    if (stats->downtime_set &&
+        virTypedParamsAddULLong(&par, &npar, &maxpar,
+                                VIR_DOMAIN_JOB_DOWNTIME,
+                                stats->downtime) < 0)
+        goto error;
+
+    if (stats->downtime_set &&
+        jobInfo->timeDeltaSet &&
+        stats->downtime > jobInfo->timeDelta &&
+        virTypedParamsAddULLong(&par, &npar, &maxpar,
+                                VIR_DOMAIN_JOB_DOWNTIME_NET,
+                                stats->downtime - jobInfo->timeDelta) < 0)
+        goto error;
+
+    if (stats->setup_time_set &&
+        virTypedParamsAddULLong(&par, &npar, &maxpar,
+                                VIR_DOMAIN_JOB_SETUP_TIME,
+                                stats->setup_time) < 0)
+        goto error;
+
+    if (virTypedParamsAddULLong(&par, &npar, &maxpar,
+                                VIR_DOMAIN_JOB_DATA_TOTAL,
+                                stats->ram_total +
+                                stats->disk_total +
+                                mirrorStats->total) < 0 ||
+        virTypedParamsAddULLong(&par, &npar, &maxpar,
+                                VIR_DOMAIN_JOB_DATA_PROCESSED,
+                                stats->ram_transferred +
+                                stats->disk_transferred +
+                                mirrorStats->transferred) < 0 ||
+        virTypedParamsAddULLong(&par, &npar, &maxpar,
+                                VIR_DOMAIN_JOB_DATA_REMAINING,
+                                stats->ram_remaining +
+                                stats->disk_remaining +
+                                mirrorRemaining) < 0)
+        goto error;
+
+    if (virTypedParamsAddULLong(&par, &npar, &maxpar,
+                                VIR_DOMAIN_JOB_MEMORY_TOTAL,
+                                stats->ram_total) < 0 ||
+        virTypedParamsAddULLong(&par, &npar, &maxpar,
+                                VIR_DOMAIN_JOB_MEMORY_PROCESSED,
+                                stats->ram_transferred) < 0 ||
+        virTypedParamsAddULLong(&par, &npar, &maxpar,
+                                VIR_DOMAIN_JOB_MEMORY_REMAINING,
+                                stats->ram_remaining) < 0)
+        goto error;
+
+    if (stats->ram_bps &&
+        virTypedParamsAddULLong(&par, &npar, &maxpar,
+                                VIR_DOMAIN_JOB_MEMORY_BPS,
+                                stats->ram_bps) < 0)
+        goto error;
+
+    if (stats->ram_duplicate_set) {
+        if (virTypedParamsAddULLong(&par, &npar, &maxpar,
+                                    VIR_DOMAIN_JOB_MEMORY_CONSTANT,
+                                    stats->ram_duplicate) < 0 ||
+            virTypedParamsAddULLong(&par, &npar, &maxpar,
+                                    VIR_DOMAIN_JOB_MEMORY_NORMAL,
+                                    stats->ram_normal) < 0 ||
+            virTypedParamsAddULLong(&par, &npar, &maxpar,
+                                    VIR_DOMAIN_JOB_MEMORY_NORMAL_BYTES,
+                                    stats->ram_normal_bytes) < 0)
+            goto error;
+    }
+
+    if (virTypedParamsAddULLong(&par, &npar, &maxpar,
+                                VIR_DOMAIN_JOB_MEMORY_DIRTY_RATE,
+                                stats->ram_dirty_rate) < 0 ||
+        virTypedParamsAddULLong(&par, &npar, &maxpar,
+                                VIR_DOMAIN_JOB_MEMORY_ITERATION,
+                                stats->ram_iteration) < 0 ||
+        virTypedParamsAddULLong(&par, &npar, &maxpar,
+                                VIR_DOMAIN_JOB_MEMORY_POSTCOPY_REQS,
+                                stats->ram_postcopy_reqs) < 0)
+        goto error;
+
+    if (stats->ram_page_size > 0 &&
+        virTypedParamsAddULLong(&par, &npar, &maxpar,
+                                VIR_DOMAIN_JOB_MEMORY_PAGE_SIZE,
+                                stats->ram_page_size) < 0)
+        goto error;
+
+    /* The remaining stats are disk, mirror, or migration specific
+     * so if this is a SAVEDUMP, we can just skip them */
+    if (jobInfo->statsType == QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP)
+        goto done;
+
+    if (virTypedParamsAddULLong(&par, &npar, &maxpar,
+                                VIR_DOMAIN_JOB_DISK_TOTAL,
+                                stats->disk_total +
+                                mirrorStats->total) < 0 ||
+        virTypedParamsAddULLong(&par, &npar, &maxpar,
+                                VIR_DOMAIN_JOB_DISK_PROCESSED,
+                                stats->disk_transferred +
+                                mirrorStats->transferred) < 0 ||
+        virTypedParamsAddULLong(&par, &npar, &maxpar,
+                                VIR_DOMAIN_JOB_DISK_REMAINING,
+                                stats->disk_remaining +
+                                mirrorRemaining) < 0)
+        goto error;
+
+    if (stats->disk_bps &&
+        virTypedParamsAddULLong(&par, &npar, &maxpar,
+                                VIR_DOMAIN_JOB_DISK_BPS,
+                                stats->disk_bps) < 0)
+        goto error;
+
+    if (stats->xbzrle_set) {
+        if (virTypedParamsAddULLong(&par, &npar, &maxpar,
+                                    VIR_DOMAIN_JOB_COMPRESSION_CACHE,
+                                    stats->xbzrle_cache_size) < 0 ||
+            virTypedParamsAddULLong(&par, &npar, &maxpar,
+                                    VIR_DOMAIN_JOB_COMPRESSION_BYTES,
+                                    stats->xbzrle_bytes) < 0 ||
+            virTypedParamsAddULLong(&par, &npar, &maxpar,
+                                    VIR_DOMAIN_JOB_COMPRESSION_PAGES,
+                                    stats->xbzrle_pages) < 0 ||
+            virTypedParamsAddULLong(&par, &npar, &maxpar,
+                                    VIR_DOMAIN_JOB_COMPRESSION_CACHE_MISSES,
+                                    stats->xbzrle_cache_miss) < 0 ||
+            virTypedParamsAddULLong(&par, &npar, &maxpar,
+                                    VIR_DOMAIN_JOB_COMPRESSION_OVERFLOW,
+                                    stats->xbzrle_overflow) < 0)
+            goto error;
+    }
+
+    if (stats->cpu_throttle_percentage &&
+        virTypedParamsAddInt(&par, &npar, &maxpar,
+                             VIR_DOMAIN_JOB_AUTO_CONVERGE_THROTTLE,
+                             stats->cpu_throttle_percentage) < 0)
+        goto error;
+
+ done:
+    *type = qemuDomainJobStatusToType(jobInfo->status);
+    *params = par;
+    *nparams = npar;
+    return 0;
+
+ error:
+    virTypedParamsFree(par, npar);
+    return -1;
+}
+
+
+static int
+qemuDomainDumpJobInfoToParams(qemuDomainJobInfoPtr jobInfo,
+                              int *type,
+                              virTypedParameterPtr *params,
+                              int *nparams)
+{
+    qemuMonitorDumpStats *stats = &jobInfo->stats.dump;
+    virTypedParameterPtr par = NULL;
+    int maxpar = 0;
+    int npar = 0;
+
+    if (virTypedParamsAddInt(&par, &npar, &maxpar,
+                             VIR_DOMAIN_JOB_OPERATION,
+                             jobInfo->operation) < 0)
+        goto error;
+
+    if (virTypedParamsAddULLong(&par, &npar, &maxpar,
+                                VIR_DOMAIN_JOB_TIME_ELAPSED,
+                                jobInfo->timeElapsed) < 0)
+        goto error;
+
+    if (virTypedParamsAddULLong(&par, &npar, &maxpar,
+                                VIR_DOMAIN_JOB_MEMORY_TOTAL,
+                                stats->total) < 0 ||
+        virTypedParamsAddULLong(&par, &npar, &maxpar,
+                                VIR_DOMAIN_JOB_MEMORY_PROCESSED,
+                                stats->completed) < 0 ||
+        virTypedParamsAddULLong(&par, &npar, &maxpar,
+                                VIR_DOMAIN_JOB_MEMORY_REMAINING,
+                                stats->total - stats->completed) < 0)
+        goto error;
+
+    *type = qemuDomainJobStatusToType(jobInfo->status);
+    *params = par;
+    *nparams = npar;
+    return 0;
+
+ error:
+    virTypedParamsFree(par, npar);
+    return -1;
+}
+
+
+static int
+qemuDomainBackupJobInfoToParams(qemuDomainJobInfoPtr jobInfo,
+                                int *type,
+                                virTypedParameterPtr *params,
+                                int *nparams)
+{
+    qemuDomainBackupStats *stats = &jobInfo->stats.backup;
+    g_autoptr(virTypedParamList) par = g_new0(virTypedParamList, 1);
+
+    if (virTypedParamListAddInt(par, jobInfo->operation,
+                                VIR_DOMAIN_JOB_OPERATION) < 0)
+        return -1;
+
+    if (virTypedParamListAddULLong(par, jobInfo->timeElapsed,
+                                   VIR_DOMAIN_JOB_TIME_ELAPSED) < 0)
+        return -1;
+
+    if (stats->transferred > 0 || stats->total > 0) {
+        if (virTypedParamListAddULLong(par, stats->total,
+                                       VIR_DOMAIN_JOB_DISK_TOTAL) < 0)
+            return -1;
+
+        if (virTypedParamListAddULLong(par, stats->transferred,
+                                       VIR_DOMAIN_JOB_DISK_PROCESSED) < 0)
+            return -1;
+
+        if (virTypedParamListAddULLong(par, stats->total - stats->transferred,
+                                       VIR_DOMAIN_JOB_DISK_REMAINING) < 0)
+            return -1;
+    }
+
+    if (stats->tmp_used > 0 || stats->tmp_total > 0) {
+        if (virTypedParamListAddULLong(par, stats->tmp_used,
+                                       VIR_DOMAIN_JOB_DISK_TEMP_USED) < 0)
+            return -1;
+
+        if (virTypedParamListAddULLong(par, stats->tmp_total,
+                                       VIR_DOMAIN_JOB_DISK_TEMP_TOTAL) < 0)
+            return -1;
+    }
+
+    if (jobInfo->status != QEMU_DOMAIN_JOB_STATUS_ACTIVE &&
+        virTypedParamListAddBoolean(par,
+                                    jobInfo->status == QEMU_DOMAIN_JOB_STATUS_COMPLETED,
+                                    VIR_DOMAIN_JOB_SUCCESS) < 0)
+        return -1;
+
+    if (jobInfo->errmsg &&
+        virTypedParamListAddString(par, jobInfo->errmsg, VIR_DOMAIN_JOB_ERRMSG) < 0)
+        return -1;
+
+    *nparams = virTypedParamListStealParams(par, params);
+    *type = qemuDomainJobStatusToType(jobInfo->status);
+    return 0;
+}
+
+
+int
+qemuDomainJobInfoToParams(qemuDomainJobInfoPtr jobInfo,
+                          int *type,
+                          virTypedParameterPtr *params,
+                          int *nparams)
+{
+    switch (jobInfo->statsType) {
+    case QEMU_DOMAIN_JOB_STATS_TYPE_MIGRATION:
+    case QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP:
+        return qemuDomainMigrationJobInfoToParams(jobInfo, type, params, nparams);
+
+    case QEMU_DOMAIN_JOB_STATS_TYPE_MEMDUMP:
+        return qemuDomainDumpJobInfoToParams(jobInfo, type, params, nparams);
+
+    case QEMU_DOMAIN_JOB_STATS_TYPE_BACKUP:
+        return qemuDomainBackupJobInfoToParams(jobInfo, type, params, nparams);
+
+    case QEMU_DOMAIN_JOB_STATS_TYPE_NONE:
+        virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+                       _("invalid job statistics type"));
+        break;
+
+    default:
+        virReportEnumRangeError(qemuDomainJobStatsType, jobInfo->statsType);
+        break;
+    }
+
+    return -1;
+}
+
+
+void
+qemuDomainJobInfoFree(qemuDomainJobInfoPtr info)
+{
+    g_free(info->errmsg);
+    g_free(info);
+}
+
+
+qemuDomainJobInfoPtr
+qemuDomainJobInfoCopy(qemuDomainJobInfoPtr info)
+{
+    qemuDomainJobInfoPtr ret = g_new0(qemuDomainJobInfo, 1);
+
+    memcpy(ret, info, sizeof(*info));
+
+    ret->errmsg = g_strdup(info->errmsg);
+
+    return ret;
+}
+
+
 static void *
 qemuJobAllocPrivate(void)
 {
@@ -91,6 +542,8 @@ qemuJobFreePrivate(void *opaque)
         return;
 
     qemuMigrationParamsFree(priv->migParams);
+    g_clear_pointer(&priv->current, qemuDomainJobInfoFree);
+    g_clear_pointer(&priv->completed, qemuDomainJobInfoFree);
     VIR_FREE(priv);
 }
 
@@ -104,6 +557,7 @@ qemuJobResetPrivate(void *opaque)
     priv->spiceMigrated = false;
     priv->dumpCompleted = false;
     qemuMigrationParamsFree(priv->migParams);
+    g_clear_pointer(&priv->current, qemuDomainJobInfoFree);
     priv->migParams = NULL;
 }
 
@@ -120,6 +574,48 @@ qemuDomainFormatJobPrivate(virBufferPtr buf,
     return 0;
 }
 
+static void
+qemuDomainCurrentJobInfoInit(qemuDomainJobObjPtr job,
+                             unsigned long long now)
+{
+    qemuDomainJobPrivatePtr priv = job->privateData;
+    priv->current = g_new0(qemuDomainJobInfo, 1);
+    priv->current->status = QEMU_DOMAIN_JOB_STATUS_ACTIVE;
+    priv->current->started = now;
+
+}
+
+static void
+qemuDomainJobInfoSetOperation(qemuDomainJobObjPtr job,
+                              virDomainJobOperation operation)
+{
+    qemuDomainJobPrivatePtr priv = job->privateData;
+    priv->current->operation = operation;
+}
+
+void
+qemuDomainEventEmitJobCompleted(virQEMUDriverPtr driver,
+                               virDomainObjPtr vm)
+{
+    qemuDomainObjPrivatePtr priv = vm->privateData;
+    qemuDomainJobPrivatePtr jobPriv = priv->job.privateData;
+    virObjectEventPtr event;
+    virTypedParameterPtr params = NULL;
+    int nparams = 0;
+    int type;
+
+    if (!jobPriv->completed)
+        return;
+
+    if (qemuDomainJobInfoToParams(jobPriv->completed, &type,
+                                  &params, &nparams) < 0) {
+        VIR_WARN("Could not get stats for completed job; domain %s",
+                 vm->def->name);
+    }
+
+    event = virDomainEventJobCompletedNewFromObj(vm, params, nparams);
+    virObjectEventStateQueue(driver->domainEventState, event);
+}
 
 static int
 qemuDomainParseJobPrivate(xmlXPathContextPtr ctxt,
@@ -140,6 +636,8 @@
     .resetJobPrivate = qemuJobResetPrivate,
     .formatJob = qemuDomainFormatJobPrivate,
     .parseJob = qemuDomainParseJobPrivate,
+    .setJobInfoOperation = qemuDomainJobInfoSetOperation,
+    .currentJobInfoInit = qemuDomainCurrentJobInfoInit,
 };
 
 /**
diff --git a/src/qemu/qemu_domain.h b/src/qemu/qemu_domain.h
index 3a1bcbbfa3..386ae17272 100644
--- a/src/qemu/qemu_domain.h
+++ b/src/qemu/qemu_domain.h
@@ -483,6 +483,52 @@ struct _qemuDomainXmlNsDef {
     char **capsdel;
 };
 
+typedef struct _qemuDomainMirrorStats qemuDomainMirrorStats;
+typedef qemuDomainMirrorStats *qemuDomainMirrorStatsPtr;
+struct _qemuDomainMirrorStats {
+    unsigned long long transferred;
+    unsigned long long total;
+};
+
+typedef struct _qemuDomainBackupStats qemuDomainBackupStats;
+struct _qemuDomainBackupStats {
+    unsigned long long transferred;
+    unsigned long long total;
+    unsigned long long tmp_used;
+    unsigned long long tmp_total;
+};
+
+typedef struct _qemuDomainJobInfo qemuDomainJobInfo;
+typedef qemuDomainJobInfo *qemuDomainJobInfoPtr;
+struct _qemuDomainJobInfo {
+    qemuDomainJobStatus status;
+    virDomainJobOperation operation;
+    unsigned long long started; /* When the async job started */
+    unsigned long long stopped; /* When the domain's CPUs were stopped */
+    unsigned long long sent; /* When the source sent status info to the
+                                destination (only for migrations). */
+    unsigned long long received; /* When the destination host received status
+                                    info from the source (migrations only). */
+    /* Computed values */
+    unsigned long long timeElapsed;
+    long long timeDelta; /* delta = received - sent, i.e., the difference
+                            between the source and the destination time plus
+                            the time between the end of Perform phase on the
+                            source and the beginning of Finish phase on the
+                            destination. */
+    bool timeDeltaSet;
+    /* Raw values from QEMU */
+    qemuDomainJobStatsType statsType;
+    union {
+        qemuMonitorMigrationStats mig;
+        qemuMonitorDumpStats dump;
+        qemuDomainBackupStats backup;
+    } stats;
+    qemuDomainMirrorStats mirrorStats;
+
+    char *errmsg; /* optional error message for failed completed jobs */
+};
+
 typedef struct _qemuDomainJobPrivate qemuDomainJobPrivate;
 typedef qemuDomainJobPrivate *qemuDomainJobPrivatePtr;
 struct _qemuDomainJobPrivate {
@@ -491,8 +537,36 @@ struct _qemuDomainJobPrivate {
     bool spiceMigrated; /* spice migration completed */
     bool dumpCompleted; /* dump completed */
     qemuMigrationParamsPtr migParams;
+    qemuDomainJobInfoPtr current; /* async job progress data */
+    qemuDomainJobInfoPtr completed; /* statistics data of a recently completed job */
 };
 
+
+void qemuDomainEventEmitJobCompleted(virQEMUDriverPtr driver,
+                                     virDomainObjPtr vm);
+
+void
+qemuDomainJobInfoFree(qemuDomainJobInfoPtr info);
+
+G_DEFINE_AUTOPTR_CLEANUP_FUNC(qemuDomainJobInfo, qemuDomainJobInfoFree);
+
+qemuDomainJobInfoPtr
+qemuDomainJobInfoCopy(qemuDomainJobInfoPtr info);
+
+int qemuDomainJobInfoUpdateTime(qemuDomainJobInfoPtr jobInfo)
+    ATTRIBUTE_NONNULL(1);
+int qemuDomainJobInfoUpdateDowntime(qemuDomainJobInfoPtr jobInfo)
+    ATTRIBUTE_NONNULL(1);
+int qemuDomainJobInfoToInfo(qemuDomainJobInfoPtr jobInfo,
+                            virDomainJobInfoPtr info)
+    ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2);
+int qemuDomainJobInfoToParams(qemuDomainJobInfoPtr jobInfo,
+                              int *type,
+                              virTypedParameterPtr *params,
+                              int *nparams)
+    ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2)
+    ATTRIBUTE_NONNULL(3) ATTRIBUTE_NONNULL(4);
+
 int qemuDomainObjStartWorker(virDomainObjPtr dom);
 void qemuDomainObjStopWorker(virDomainObjPtr dom);
 
diff --git a/src/qemu/qemu_domainjob.c b/src/qemu/qemu_domainjob.c
index 6393cc0b40..503a87bb12 100644
--- a/src/qemu/qemu_domainjob.c
+++ b/src/qemu/qemu_domainjob.c
@@ -115,51 +115,6 @@ qemuDomainAsyncJobPhaseFromString(qemuDomainAsyncJob job,
     return -1;
 }
 
-
-void
-qemuDomainJobInfoFree(qemuDomainJobInfoPtr info)
-{
-    g_free(info->errmsg);
-    g_free(info);
-}
-
-
-qemuDomainJobInfoPtr
-qemuDomainJobInfoCopy(qemuDomainJobInfoPtr info)
-{
-    qemuDomainJobInfoPtr ret = g_new0(qemuDomainJobInfo, 1);
-
-    memcpy(ret, info, sizeof(*info));
-
-    ret->errmsg = g_strdup(info->errmsg);
-
-    return ret;
-}
-
-void
-qemuDomainEventEmitJobCompleted(virQEMUDriverPtr driver,
-                               virDomainObjPtr vm)
-{
-    qemuDomainObjPrivatePtr priv = vm->privateData;
-    virObjectEventPtr event;
-    virTypedParameterPtr params = NULL;
-    int nparams = 0;
-    int type;
-
-    if (!priv->job.completed)
-        return;
-
-    if (qemuDomainJobInfoToParams(priv->job.completed, &type,
-                                  &params, &nparams) < 0) {
-        VIR_WARN("Could not get stats for completed job; domain %s",
-                 vm->def->name);
-    }
-
-    event = virDomainEventJobCompletedNewFromObj(vm, params, nparams);
-    virObjectEventStateQueue(driver->domainEventState, event);
-}
-
-
 int
 qemuDomainObjInitJob(qemuDomainJobObjPtr job,
                      qemuDomainObjPrivateJobCallbacksPtr cb)
@@ -216,7 +171,6 @@ qemuDomainObjResetAsyncJob(qemuDomainJobObjPtr job)
     job->mask = QEMU_JOB_DEFAULT_MASK;
     job->abortJob = false;
     VIR_FREE(job->error);
-    g_clear_pointer(&job->current, qemuDomainJobInfoFree);
     job->cb->resetJobPrivate(job->privateData);
     job->apiFlags = 0;
 }
@@ -251,8 +205,6 @@ qemuDomainObjFreeJob(qemuDomainJobObjPtr job)
     qemuDomainObjResetJob(job);
     qemuDomainObjResetAsyncJob(job);
     job->cb->freeJobPrivate(job->privateData);
-    g_clear_pointer(&job->current, qemuDomainJobInfoFree);
-    g_clear_pointer(&job->completed, qemuDomainJobInfoFree);
     virCondDestroy(&job->cond);
     virCondDestroy(&job->asyncCond);
 }
@@ -264,435 +216,6 @@ qemuDomainTrackJob(qemuDomainJob job)
 }
 
 
-int
-qemuDomainJobInfoUpdateTime(qemuDomainJobInfoPtr jobInfo)
-{
-    unsigned long long now;
-
-    if (!jobInfo->started)
-        return 0;
-
-    if (virTimeMillisNow(&now) < 0)
-        return -1;
-
-    if (now < jobInfo->started) {
-        VIR_WARN("Async job starts in the future");
-        jobInfo->started = 0;
-        return 0;
-    }
-
-    jobInfo->timeElapsed = now - jobInfo->started;
-    return 0;
-}
-
-int
-qemuDomainJobInfoUpdateDowntime(qemuDomainJobInfoPtr jobInfo)
-{
-    unsigned long long now;
-
-    if (!jobInfo->stopped)
-        return 0;
-
-    if (virTimeMillisNow(&now) < 0)
-        return -1;
-
-    if (now < jobInfo->stopped) {
-        VIR_WARN("Guest's CPUs stopped in the future");
-        jobInfo->stopped = 0;
-        return 0;
-    }
-
-    jobInfo->stats.mig.downtime = now - jobInfo->stopped;
-    jobInfo->stats.mig.downtime_set = true;
-    return 0;
-}
-
-static virDomainJobType
-qemuDomainJobStatusToType(qemuDomainJobStatus status)
-{
-    switch (status) {
-    case QEMU_DOMAIN_JOB_STATUS_NONE:
-        break;
-
-    case QEMU_DOMAIN_JOB_STATUS_ACTIVE:
-    case QEMU_DOMAIN_JOB_STATUS_MIGRATING:
-    case QEMU_DOMAIN_JOB_STATUS_QEMU_COMPLETED:
-    case QEMU_DOMAIN_JOB_STATUS_POSTCOPY:
-    case QEMU_DOMAIN_JOB_STATUS_PAUSED:
-        return VIR_DOMAIN_JOB_UNBOUNDED;
-
-    case QEMU_DOMAIN_JOB_STATUS_COMPLETED:
-        return VIR_DOMAIN_JOB_COMPLETED;
-
-    case QEMU_DOMAIN_JOB_STATUS_FAILED:
-        return VIR_DOMAIN_JOB_FAILED;
-
-    case QEMU_DOMAIN_JOB_STATUS_CANCELED:
-        return VIR_DOMAIN_JOB_CANCELLED;
-    }
-
-    return VIR_DOMAIN_JOB_NONE;
-}
-
-int
-qemuDomainJobInfoToInfo(qemuDomainJobInfoPtr jobInfo,
-                        virDomainJobInfoPtr info)
-{
-    info->type = qemuDomainJobStatusToType(jobInfo->status);
-    info->timeElapsed = jobInfo->timeElapsed;
-
-    switch (jobInfo->statsType) {
-    case QEMU_DOMAIN_JOB_STATS_TYPE_MIGRATION:
-        info->memTotal = jobInfo->stats.mig.ram_total;
-        info->memRemaining = jobInfo->stats.mig.ram_remaining;
-        info->memProcessed = jobInfo->stats.mig.ram_transferred;
-        info->fileTotal = jobInfo->stats.mig.disk_total +
-                          jobInfo->mirrorStats.total;
-        info->fileRemaining = jobInfo->stats.mig.disk_remaining +
-                              (jobInfo->mirrorStats.total -
-                               jobInfo->mirrorStats.transferred);
-        info->fileProcessed = jobInfo->stats.mig.disk_transferred +
-                              jobInfo->mirrorStats.transferred;
-        break;
-
-    case QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP:
-        info->memTotal = jobInfo->stats.mig.ram_total;
-        info->memRemaining = jobInfo->stats.mig.ram_remaining;
-        info->memProcessed = jobInfo->stats.mig.ram_transferred;
-        break;
-
-    case QEMU_DOMAIN_JOB_STATS_TYPE_MEMDUMP:
-        info->memTotal = jobInfo->stats.dump.total;
-        info->memProcessed = jobInfo->stats.dump.completed;
-        info->memRemaining = info->memTotal - info->memProcessed;
-        break;
-
-    case QEMU_DOMAIN_JOB_STATS_TYPE_BACKUP:
-        info->fileTotal = jobInfo->stats.backup.total;
-        info->fileProcessed = jobInfo->stats.backup.transferred;
-        info->fileRemaining = info->fileTotal - info->fileProcessed;
-        break;
-
-    case QEMU_DOMAIN_JOB_STATS_TYPE_NONE:
-        break;
-    }
-
-    info->dataTotal = info->memTotal + info->fileTotal;
-    info->dataRemaining = info->memRemaining + info->fileRemaining;
-    info->dataProcessed = info->memProcessed + info->fileProcessed;
-
-    return 0;
-}
-
-
-static int
-qemuDomainMigrationJobInfoToParams(qemuDomainJobInfoPtr jobInfo,
-                                   int *type,
-                                   virTypedParameterPtr *params,
-                                   int *nparams)
-{
-    qemuMonitorMigrationStats *stats = &jobInfo->stats.mig;
-    qemuDomainMirrorStatsPtr mirrorStats = &jobInfo->mirrorStats;
-    virTypedParameterPtr par = NULL;
-    int maxpar = 0;
-    int npar = 0;
-    unsigned long long mirrorRemaining = mirrorStats->total -
-                                         mirrorStats->transferred;
-
-    if (virTypedParamsAddInt(&par, &npar, &maxpar,
-                             VIR_DOMAIN_JOB_OPERATION,
-                             jobInfo->operation) < 0)
-        goto error;
-
-    if (virTypedParamsAddULLong(&par, &npar, &maxpar,
-                                VIR_DOMAIN_JOB_TIME_ELAPSED,
-                                jobInfo->timeElapsed) < 0)
-        goto error;
-
-    if (jobInfo->timeDeltaSet &&
-        jobInfo->timeElapsed > jobInfo->timeDelta &&
-        virTypedParamsAddULLong(&par, &npar, &maxpar,
-                                VIR_DOMAIN_JOB_TIME_ELAPSED_NET,
-                                jobInfo->timeElapsed - jobInfo->timeDelta) < 0)
-        goto error;
-
-    if (stats->downtime_set &&
-        virTypedParamsAddULLong(&par, &npar, &maxpar,
-                                VIR_DOMAIN_JOB_DOWNTIME,
-                                stats->downtime) < 0)
-        goto error;
-
-    if (stats->downtime_set &&
-        jobInfo->timeDeltaSet &&
-        stats->downtime > jobInfo->timeDelta &&
-        virTypedParamsAddULLong(&par, &npar, &maxpar,
-                                VIR_DOMAIN_JOB_DOWNTIME_NET,
-                                stats->downtime - jobInfo->timeDelta) < 0)
-        goto error;
-
-    if (stats->setup_time_set &&
-        virTypedParamsAddULLong(&par, &npar, &maxpar,
-                                VIR_DOMAIN_JOB_SETUP_TIME,
-                                stats->setup_time) < 0)
-        goto error;
-
-    if (virTypedParamsAddULLong(&par, &npar, &maxpar,
-                                VIR_DOMAIN_JOB_DATA_TOTAL,
-                                stats->ram_total +
-                                stats->disk_total +
-                                mirrorStats->total) < 0 ||
-        virTypedParamsAddULLong(&par, &npar, &maxpar,
-                                VIR_DOMAIN_JOB_DATA_PROCESSED,
-                                stats->ram_transferred +
-                                stats->disk_transferred +
-                                mirrorStats->transferred) < 0 ||
-        virTypedParamsAddULLong(&par, &npar, &maxpar,
-                                VIR_DOMAIN_JOB_DATA_REMAINING,
-                                stats->ram_remaining +
-                                stats->disk_remaining +
-                                mirrorRemaining) < 0)
-        goto error;
-
-    if (virTypedParamsAddULLong(&par, &npar, &maxpar,
-                                VIR_DOMAIN_JOB_MEMORY_TOTAL,
-                                stats->ram_total) < 0 ||
-        virTypedParamsAddULLong(&par, &npar, &maxpar,
-                                VIR_DOMAIN_JOB_MEMORY_PROCESSED,
-                                stats->ram_transferred) < 0 ||
-        virTypedParamsAddULLong(&par, &npar, &maxpar,
-                                VIR_DOMAIN_JOB_MEMORY_REMAINING,
-                                stats->ram_remaining) < 0)
-        goto error;
-
-    if (stats->ram_bps &&
-        virTypedParamsAddULLong(&par, &npar, &maxpar,
-                                VIR_DOMAIN_JOB_MEMORY_BPS,
-                                stats->ram_bps) < 0)
-        goto error;
-
-    if (stats->ram_duplicate_set) {
-        if (virTypedParamsAddULLong(&par, &npar, &maxpar,
-                                    VIR_DOMAIN_JOB_MEMORY_CONSTANT,
-                                    stats->ram_duplicate) < 0 ||
-            virTypedParamsAddULLong(&par, &npar, &maxpar,
-                                    VIR_DOMAIN_JOB_MEMORY_NORMAL,
-                                    stats->ram_normal) < 0 ||
-            virTypedParamsAddULLong(&par, &npar, &maxpar,
-                                    VIR_DOMAIN_JOB_MEMORY_NORMAL_BYTES,
-                                    stats->ram_normal_bytes) < 0)
-            goto error;
-    }
-
-    if (virTypedParamsAddULLong(&par, &npar, &maxpar,
-                                VIR_DOMAIN_JOB_MEMORY_DIRTY_RATE,
-                                stats->ram_dirty_rate) < 0 ||
-        virTypedParamsAddULLong(&par, &npar, &maxpar,
-                                VIR_DOMAIN_JOB_MEMORY_ITERATION,
-                                stats->ram_iteration) < 0 ||
-        virTypedParamsAddULLong(&par, &npar, &maxpar,
-                                VIR_DOMAIN_JOB_MEMORY_POSTCOPY_REQS,
-                                stats->ram_postcopy_reqs) < 0)
-        goto error;
-
-    if (stats->ram_page_size > 0 &&
-        virTypedParamsAddULLong(&par, &npar, &maxpar,
-                                VIR_DOMAIN_JOB_MEMORY_PAGE_SIZE,
-                                stats->ram_page_size) < 0)
-        goto error;
-
-    /* The remaining stats are disk, mirror, or migration specific
-     * so if this is a SAVEDUMP, we can just skip them */
-    if (jobInfo->statsType == QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP)
-        goto done;
-
-    if (virTypedParamsAddULLong(&par, &npar, &maxpar,
-                                VIR_DOMAIN_JOB_DISK_TOTAL,
-                                stats->disk_total +
-                                mirrorStats->total) < 0 ||
-        virTypedParamsAddULLong(&par, &npar, &maxpar,
-                                VIR_DOMAIN_JOB_DISK_PROCESSED,
-                                stats->disk_transferred +
-                                mirrorStats->transferred) < 0 ||
-        virTypedParamsAddULLong(&par, &npar, &maxpar,
-                                VIR_DOMAIN_JOB_DISK_REMAINING,
-                                stats->disk_remaining +
-                                mirrorRemaining) < 0)
-        goto error;
-
-    if (stats->disk_bps &&
-        virTypedParamsAddULLong(&par, &npar, &maxpar,
-                                VIR_DOMAIN_JOB_DISK_BPS,
-                                stats->disk_bps) < 0)
-        goto error;
-
-    if (stats->xbzrle_set) {
-        if (virTypedParamsAddULLong(&par, &npar, &maxpar,
-                                    VIR_DOMAIN_JOB_COMPRESSION_CACHE,
-                                    stats->xbzrle_cache_size) < 0 ||
-            virTypedParamsAddULLong(&par, &npar, &maxpar,
-                                    VIR_DOMAIN_JOB_COMPRESSION_BYTES,
-                                    stats->xbzrle_bytes) < 0 ||
-            virTypedParamsAddULLong(&par, &npar, &maxpar,
-                                    VIR_DOMAIN_JOB_COMPRESSION_PAGES,
-                                    stats->xbzrle_pages) < 0 ||
-            virTypedParamsAddULLong(&par, &npar, &maxpar,
-                                    VIR_DOMAIN_JOB_COMPRESSION_CACHE_MISSES,
-                                    stats->xbzrle_cache_miss) < 0 ||
-            virTypedParamsAddULLong(&par, &npar, &maxpar,
-                                    VIR_DOMAIN_JOB_COMPRESSION_OVERFLOW,
-                                    stats->xbzrle_overflow) < 0)
-            goto error;
-    }
-
-    if (stats->cpu_throttle_percentage &&
-        virTypedParamsAddInt(&par, &npar, &maxpar,
-                             VIR_DOMAIN_JOB_AUTO_CONVERGE_THROTTLE,
-                             stats->cpu_throttle_percentage) < 0)
-        goto error;
-
- done:
-    *type = qemuDomainJobStatusToType(jobInfo->status);
-    *params = par;
-    *nparams = npar;
-    return 0;
-
- error:
-    virTypedParamsFree(par, npar);
-    return -1;
-}
-
-
-static int
-qemuDomainDumpJobInfoToParams(qemuDomainJobInfoPtr jobInfo,
-                              int *type,
-                              virTypedParameterPtr *params,
-                              int *nparams)
-{
-    qemuMonitorDumpStats *stats = &jobInfo->stats.dump;
-    virTypedParameterPtr par = NULL;
-    int maxpar = 0;
-    int npar = 0;
-
-    if (virTypedParamsAddInt(&par, &npar, &maxpar,
-                             VIR_DOMAIN_JOB_OPERATION,
-                             jobInfo->operation) < 0)
-        goto error;
-
-    if (virTypedParamsAddULLong(&par, &npar, &maxpar,
-                                VIR_DOMAIN_JOB_TIME_ELAPSED,
-                                jobInfo->timeElapsed) < 0)
-        goto error;
-
-    if (virTypedParamsAddULLong(&par, &npar, &maxpar,
-                                VIR_DOMAIN_JOB_MEMORY_TOTAL,
-                                stats->total) < 0 ||
-        virTypedParamsAddULLong(&par, &npar, &maxpar,
-                                VIR_DOMAIN_JOB_MEMORY_PROCESSED,
-                                stats->completed) < 0 ||
-        virTypedParamsAddULLong(&par, &npar, &maxpar,
-                                VIR_DOMAIN_JOB_MEMORY_REMAINING,
-                                stats->total - stats->completed) < 0)
-        goto error;
-
-    *type = qemuDomainJobStatusToType(jobInfo->status);
-    *params = par;
-    *nparams = npar;
-    return 0;
-
- error:
-    virTypedParamsFree(par, npar);
-    return -1;
-}
-
-
-static int
-qemuDomainBackupJobInfoToParams(qemuDomainJobInfoPtr jobInfo,
-                                int *type,
-                                virTypedParameterPtr *params,
-                                int *nparams)
-{
-    qemuDomainBackupStats *stats = &jobInfo->stats.backup;
-    g_autoptr(virTypedParamList) par = g_new0(virTypedParamList, 1);
-
-    if (virTypedParamListAddInt(par, jobInfo->operation,
-                                VIR_DOMAIN_JOB_OPERATION) < 0)
-        return -1;
-
-    if (virTypedParamListAddULLong(par, jobInfo->timeElapsed,
-                                   VIR_DOMAIN_JOB_TIME_ELAPSED) < 0)
-        return -1;
-
-    if (stats->transferred > 0 || stats->total > 0) {
-        if (virTypedParamListAddULLong(par, stats->total,
-                                       VIR_DOMAIN_JOB_DISK_TOTAL) < 0)
-            return -1;
-
-        if (virTypedParamListAddULLong(par, stats->transferred,
-                                       VIR_DOMAIN_JOB_DISK_PROCESSED) < 0)
-            return -1;
-
-        if (virTypedParamListAddULLong(par, stats->total - stats->transferred,
-                                       VIR_DOMAIN_JOB_DISK_REMAINING) < 0)
-            return -1;
-    }
-
-    if (stats->tmp_used > 0 || stats->tmp_total > 0) {
-        if (virTypedParamListAddULLong(par, stats->tmp_used,
- VIR_DOMAIN_JOB_DISK_TEMP_USED) < 0) - return -1; - - if (virTypedParamListAddULLong(par, stats->tmp_total, - VIR_DOMAIN_JOB_DISK_TEMP_TOTAL) < 0) - return -1; - } - - if (jobInfo->status != QEMU_DOMAIN_JOB_STATUS_ACTIVE && - virTypedParamListAddBoolean(par, - jobInfo->status == QEMU_DOMAIN_JOB_STATUS_COMPLETED, - VIR_DOMAIN_JOB_SUCCESS) < 0) - return -1; - - if (jobInfo->errmsg && - virTypedParamListAddString(par, jobInfo->errmsg, VIR_DOMAIN_JOB_ERRMSG) < 0) - return -1; - - *nparams = virTypedParamListStealParams(par, params); - *type = qemuDomainJobStatusToType(jobInfo->status); - return 0; -} - - -int -qemuDomainJobInfoToParams(qemuDomainJobInfoPtr jobInfo, - int *type, - virTypedParameterPtr *params, - int *nparams) -{ - switch (jobInfo->statsType) { - case QEMU_DOMAIN_JOB_STATS_TYPE_MIGRATION: - case QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP: - return qemuDomainMigrationJobInfoToParams(jobInfo, type, params, nparams); - - case QEMU_DOMAIN_JOB_STATS_TYPE_MEMDUMP: - return qemuDomainDumpJobInfoToParams(jobInfo, type, params, nparams); - - case QEMU_DOMAIN_JOB_STATS_TYPE_BACKUP: - return qemuDomainBackupJobInfoToParams(jobInfo, type, params, nparams); - - case QEMU_DOMAIN_JOB_STATS_TYPE_NONE: - virReportError(VIR_ERR_INTERNAL_ERROR, "%s", - _("invalid job statistics type")); - break; - - default: - virReportEnumRangeError(qemuDomainJobStatsType, jobInfo->statsType); - break; - } - - return -1; -} - - void qemuDomainObjSetJobPhase(virQEMUDriverPtr driver, virDomainObjPtr obj, @@ -894,13 +417,11 @@ qemuDomainObjBeginJobInternal(virQEMUDriverPtr driver, qemuDomainAsyncJobTypeToString(asyncJob), obj, obj->def->name); qemuDomainObjResetAsyncJob(&priv->job); - priv->job.current = g_new0(qemuDomainJobInfo, 1); - priv->job.current->status = QEMU_DOMAIN_JOB_STATUS_ACTIVE; + priv->job.cb->currentJobInfoInit(&priv->job, now); priv->job.asyncJob = asyncJob; priv->job.asyncOwner = virThreadSelfID(); priv->job.asyncOwnerAPI = virThreadJobGet(); priv->job.asyncStarted = now; - 
priv->job.current->started = now; } } @@ -1066,7 +587,7 @@ int qemuDomainObjBeginAsyncJob(virQEMUDriverPtr driver, return -1; priv = obj->privateData; - priv->job.current->operation = operation; + priv->job.cb->setJobInfoOperation(&priv->job, operation); priv->job.apiFlags = apiFlags; return 0; } diff --git a/src/qemu/qemu_domainjob.h b/src/qemu/qemu_domainjob.h index c83e055647..88051d099a 100644 --- a/src/qemu/qemu_domainjob.h +++ b/src/qemu/qemu_domainjob.h @@ -19,7 +19,6 @@ #pragma once #include <glib-object.h> -#include "qemu_monitor.h" #define JOB_MASK(job) (job == 0 ? 0 : 1 << (job - 1)) #define QEMU_JOB_DEFAULT_MASK \ @@ -99,61 +98,6 @@ typedef enum { QEMU_DOMAIN_JOB_STATS_TYPE_BACKUP, } qemuDomainJobStatsType; - -typedef struct _qemuDomainMirrorStats qemuDomainMirrorStats; -typedef qemuDomainMirrorStats *qemuDomainMirrorStatsPtr; -struct _qemuDomainMirrorStats { - unsigned long long transferred; - unsigned long long total; -}; - -typedef struct _qemuDomainBackupStats qemuDomainBackupStats; -struct _qemuDomainBackupStats { - unsigned long long transferred; - unsigned long long total; - unsigned long long tmp_used; - unsigned long long tmp_total; -}; - -typedef struct _qemuDomainJobInfo qemuDomainJobInfo; -typedef qemuDomainJobInfo *qemuDomainJobInfoPtr; -struct _qemuDomainJobInfo { - qemuDomainJobStatus status; - virDomainJobOperation operation; - unsigned long long started; /* When the async job started */ - unsigned long long stopped; /* When the domain's CPUs were stopped */ - unsigned long long sent; /* When the source sent status info to the - destination (only for migrations). */ - unsigned long long received; /* When the destination host received status - info from the source (migrations only). 
*/ - /* Computed values */ - unsigned long long timeElapsed; - long long timeDelta; /* delta = received - sent, i.e., the difference - between the source and the destination time plus - the time between the end of Perform phase on the - source and the beginning of Finish phase on the - destination. */ - bool timeDeltaSet; - /* Raw values from QEMU */ - qemuDomainJobStatsType statsType; - union { - qemuMonitorMigrationStats mig; - qemuMonitorDumpStats dump; - qemuDomainBackupStats backup; - } stats; - qemuDomainMirrorStats mirrorStats; - - char *errmsg; /* optional error message for failed completed jobs */ -}; - -void -qemuDomainJobInfoFree(qemuDomainJobInfoPtr info); - -G_DEFINE_AUTOPTR_CLEANUP_FUNC(qemuDomainJobInfo, qemuDomainJobInfoFree); - -qemuDomainJobInfoPtr -qemuDomainJobInfoCopy(qemuDomainJobInfoPtr info); - typedef struct _qemuDomainJobObj qemuDomainJobObj; typedef qemuDomainJobObj *qemuDomainJobObjPtr; @@ -164,6 +108,10 @@ typedef int (*qemuDomainObjPrivateJobFormat)(virBufferPtr, qemuDomainJobObjPtr); typedef int (*qemuDomainObjPrivateJobParse)(xmlXPathContextPtr, qemuDomainJobObjPtr); +typedef void (*qemuDomainObjJobInfoSetOperation)(qemuDomainJobObjPtr, + virDomainJobOperation); +typedef void (*qemuDomainObjCurrentJobInfoInit)(qemuDomainJobObjPtr, + unsigned long long); typedef struct _qemuDomainObjPrivateJobCallbacks qemuDomainObjPrivateJobCallbacks; typedef qemuDomainObjPrivateJobCallbacks *qemuDomainObjPrivateJobCallbacksPtr; @@ -173,6 +121,8 @@ struct _qemuDomainObjPrivateJobCallbacks { qemuDomainObjPrivateJobReset resetJobPrivate; qemuDomainObjPrivateJobFormat formatJob; qemuDomainObjPrivateJobParse parseJob; + qemuDomainObjJobInfoSetOperation setJobInfoOperation; + qemuDomainObjCurrentJobInfoInit currentJobInfoInit; }; struct _qemuDomainJobObj { @@ -198,8 +148,6 @@ struct _qemuDomainJobObj { unsigned long long asyncStarted; /* When the current async job started */ int phase; /* Job phase (mainly for migrations) */ unsigned long long mask; /* 
Jobs allowed during async job */ - qemuDomainJobInfoPtr current; /* async job progress data */ - qemuDomainJobInfoPtr completed; /* statistics data of a recently completed job */ bool abortJob; /* abort of the job requested */ char *error; /* job event completion error */ unsigned long apiFlags; /* flags passed to the API which started the async job */ @@ -213,9 +161,6 @@ const char *qemuDomainAsyncJobPhaseToString(qemuDomainAsyncJob job, int qemuDomainAsyncJobPhaseFromString(qemuDomainAsyncJob job, const char *phase); -void qemuDomainEventEmitJobCompleted(virQEMUDriverPtr driver, - virDomainObjPtr vm); - int qemuDomainObjBeginJob(virQEMUDriverPtr driver, virDomainObjPtr obj, qemuDomainJob job) @@ -262,20 +207,6 @@ void qemuDomainRemoveInactiveJob(virQEMUDriverPtr driver, void qemuDomainRemoveInactiveJobLocked(virQEMUDriverPtr driver, virDomainObjPtr vm); -int qemuDomainJobInfoUpdateTime(qemuDomainJobInfoPtr jobInfo) - ATTRIBUTE_NONNULL(1); -int qemuDomainJobInfoUpdateDowntime(qemuDomainJobInfoPtr jobInfo) - ATTRIBUTE_NONNULL(1); -int qemuDomainJobInfoToInfo(qemuDomainJobInfoPtr jobInfo, - virDomainJobInfoPtr info) - ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2); -int qemuDomainJobInfoToParams(qemuDomainJobInfoPtr jobInfo, - int *type, - virTypedParameterPtr *params, - int *nparams) - ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2) - ATTRIBUTE_NONNULL(3) ATTRIBUTE_NONNULL(4); - bool qemuDomainTrackJob(qemuDomainJob job); void qemuDomainObjFreeJob(qemuDomainJobObjPtr job); diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index 8008da6d16..27be53d3e4 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -2724,6 +2724,7 @@ qemuDomainGetControlInfo(virDomainPtr dom, { virDomainObjPtr vm; qemuDomainObjPrivatePtr priv; + qemuDomainJobPrivatePtr jobPriv; int ret = -1; virCheckFlags(0, -1); @@ -2738,6 +2739,7 @@ qemuDomainGetControlInfo(virDomainPtr dom, goto cleanup; priv = vm->privateData; + jobPriv = priv->job.privateData; memset(info, 0, sizeof(*info)); 
@@ -2747,9 +2749,9 @@ qemuDomainGetControlInfo(virDomainPtr dom, } else if (priv->job.active) { if (virTimeMillisNow(&info->stateTime) < 0) goto cleanup; - if (priv->job.current) { + if (jobPriv->current) { info->state = VIR_DOMAIN_CONTROL_JOB; - info->stateTime -= priv->job.current->started; + info->stateTime -= jobPriv->current->started; } else { if (priv->monStart > 0) { info->state = VIR_DOMAIN_CONTROL_OCCUPIED; @@ -3314,6 +3316,7 @@ qemuDomainSaveInternal(virQEMUDriverPtr driver, int ret = -1; virObjectEventPtr event = NULL; qemuDomainObjPrivatePtr priv = vm->privateData; + qemuDomainJobPrivatePtr jobPriv = priv->job.privateData; virQEMUSaveDataPtr data = NULL; g_autoptr(qemuDomainSaveCookie) cookie = NULL; @@ -3330,7 +3333,7 @@ qemuDomainSaveInternal(virQEMUDriverPtr driver, goto endjob; } - priv->job.current->statsType = QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP; + jobPriv->current->statsType = QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP; /* Pause */ if (virDomainObjGetState(vm, NULL) == VIR_DOMAIN_RUNNING) { @@ -3715,7 +3718,7 @@ qemuDumpWaitForCompletion(virDomainObjPtr vm) return -1; } - if (priv->job.current->stats.dump.status == QEMU_MONITOR_DUMP_STATUS_FAILED) { + if (jobPriv->current->stats.dump.status == QEMU_MONITOR_DUMP_STATUS_FAILED) { if (priv->job.error) virReportError(VIR_ERR_OPERATION_FAILED, _("memory-only dump failed: %s"), @@ -3726,7 +3729,7 @@ qemuDumpWaitForCompletion(virDomainObjPtr vm) return -1; } - qemuDomainJobInfoUpdateTime(priv->job.current); + qemuDomainJobInfoUpdateTime(jobPriv->current); return 0; } @@ -3740,6 +3743,7 @@ qemuDumpToFd(virQEMUDriverPtr driver, const char *dumpformat) { qemuDomainObjPrivatePtr priv = vm->privateData; + qemuDomainJobPrivatePtr jobPriv = priv->job.privateData; bool detach = false; int ret = -1; @@ -3755,9 +3759,9 @@ qemuDumpToFd(virQEMUDriverPtr driver, return -1; if (detach) - priv->job.current->statsType = QEMU_DOMAIN_JOB_STATS_TYPE_MEMDUMP; + jobPriv->current->statsType = QEMU_DOMAIN_JOB_STATS_TYPE_MEMDUMP; 
else - g_clear_pointer(&priv->job.current, qemuDomainJobInfoFree); + g_clear_pointer(&jobPriv->current, qemuDomainJobInfoFree); if (qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob) < 0) return -1; @@ -3894,6 +3898,7 @@ qemuDomainCoreDumpWithFormat(virDomainPtr dom, virQEMUDriverPtr driver = dom->conn->privateData; virDomainObjPtr vm; qemuDomainObjPrivatePtr priv = NULL; + qemuDomainJobPrivatePtr jobPriv; bool resume = false, paused = false; int ret = -1; virObjectEventPtr event = NULL; @@ -3918,7 +3923,8 @@ qemuDomainCoreDumpWithFormat(virDomainPtr dom, goto endjob; priv = vm->privateData; - priv->job.current->statsType = QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP; + jobPriv = priv->job.privateData; + jobPriv->current->statsType = QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP; /* Migrate will always stop the VM, so the resume condition is independent of whether the stop command is issued. */ @@ -7479,6 +7485,7 @@ qemuDomainObjStart(virConnectPtr conn, bool force_boot = (flags & VIR_DOMAIN_START_FORCE_BOOT) != 0; unsigned int start_flags = VIR_QEMU_PROCESS_START_COLD; qemuDomainObjPrivatePtr priv = vm->privateData; + qemuDomainJobPrivatePtr jobPriv = priv->job.privateData; start_flags |= start_paused ? VIR_QEMU_PROCESS_START_PAUSED : 0; start_flags |= autodestroy ? 
VIR_QEMU_PROCESS_START_AUTODESTROY : 0; @@ -7502,8 +7509,8 @@ qemuDomainObjStart(virConnectPtr conn, } vm->hasManagedSave = false; } else { - virDomainJobOperation op = priv->job.current->operation; - priv->job.current->operation = VIR_DOMAIN_JOB_OPERATION_RESTORE; + virDomainJobOperation op = jobPriv->current->operation; + jobPriv->current->operation = VIR_DOMAIN_JOB_OPERATION_RESTORE; ret = qemuDomainObjRestore(conn, driver, vm, managed_save, start_paused, bypass_cache, asyncJob); @@ -7521,7 +7528,7 @@ qemuDomainObjStart(virConnectPtr conn, return ret; } else { VIR_WARN("Ignoring incomplete managed state %s", managed_save); - priv->job.current->operation = op; + jobPriv->current->operation = op; vm->hasManagedSave = false; } } @@ -13575,13 +13582,14 @@ qemuDomainGetJobStatsInternal(virQEMUDriverPtr driver, qemuDomainJobInfoPtr *jobInfo) { qemuDomainObjPrivatePtr priv = vm->privateData; + qemuDomainJobPrivatePtr jobPriv = priv->job.privateData; int ret = -1; *jobInfo = NULL; if (completed) { - if (priv->job.completed && !priv->job.current) - *jobInfo = qemuDomainJobInfoCopy(priv->job.completed); + if (jobPriv->completed && !jobPriv->current) + *jobInfo = qemuDomainJobInfoCopy(jobPriv->completed); return 0; } @@ -13599,11 +13607,11 @@ qemuDomainGetJobStatsInternal(virQEMUDriverPtr driver, if (virDomainObjCheckActive(vm) < 0) goto cleanup; - if (!priv->job.current) { + if (!jobPriv->current) { ret = 0; goto cleanup; } - *jobInfo = qemuDomainJobInfoCopy(priv->job.current); + *jobInfo = qemuDomainJobInfoCopy(jobPriv->current); switch ((*jobInfo)->statsType) { case QEMU_DOMAIN_JOB_STATS_TYPE_MIGRATION: @@ -13678,6 +13686,7 @@ qemuDomainGetJobStats(virDomainPtr dom, virQEMUDriverPtr driver = dom->conn->privateData; virDomainObjPtr vm; qemuDomainObjPrivatePtr priv; + qemuDomainJobPrivatePtr jobPriv; g_autoptr(qemuDomainJobInfo) jobInfo = NULL; bool completed = !!(flags & VIR_DOMAIN_JOB_STATS_COMPLETED); int ret = -1; @@ -13692,6 +13701,7 @@ 
qemuDomainGetJobStats(virDomainPtr dom, goto cleanup; priv = vm->privateData; + jobPriv = priv->job.privateData; if (qemuDomainGetJobStatsInternal(driver, vm, completed, &jobInfo) < 0) goto cleanup; @@ -13707,7 +13717,7 @@ qemuDomainGetJobStats(virDomainPtr dom, ret = qemuDomainJobInfoToParams(jobInfo, type, params, nparams); if (completed && ret == 0 && !(flags & VIR_DOMAIN_JOB_STATS_KEEP_COMPLETED)) - g_clear_pointer(&priv->job.completed, qemuDomainJobInfoFree); + g_clear_pointer(&jobPriv->completed, qemuDomainJobInfoFree); cleanup: virDomainObjEndAPI(&vm); @@ -13739,6 +13749,7 @@ static int qemuDomainAbortJob(virDomainPtr dom) virDomainObjPtr vm; int ret = -1; qemuDomainObjPrivatePtr priv; + qemuDomainJobPrivatePtr jobPriv; int reason; if (!(vm = qemuDomainObjFromDomain(dom))) @@ -13754,6 +13765,7 @@ static int qemuDomainAbortJob(virDomainPtr dom) goto endjob; priv = vm->privateData; + jobPriv = priv->job.privateData; switch (priv->job.asyncJob) { case QEMU_ASYNC_JOB_NONE: @@ -13774,7 +13786,7 @@ static int qemuDomainAbortJob(virDomainPtr dom) break; case QEMU_ASYNC_JOB_MIGRATION_OUT: - if ((priv->job.current->status == QEMU_DOMAIN_JOB_STATUS_POSTCOPY || + if ((jobPriv->current->status == QEMU_DOMAIN_JOB_STATUS_POSTCOPY || (virDomainObjGetState(vm, &reason) == VIR_DOMAIN_PAUSED && reason == VIR_DOMAIN_PAUSED_POSTCOPY))) { virReportError(VIR_ERR_OPERATION_INVALID, "%s", @@ -15442,6 +15454,7 @@ qemuDomainSnapshotCreateActiveExternal(virQEMUDriverPtr driver, bool resume = false; int ret = -1; qemuDomainObjPrivatePtr priv = vm->privateData; + qemuDomainJobPrivatePtr jobPriv = priv->job.privateData; g_autofree char *xml = NULL; virDomainSnapshotDefPtr snapdef = virDomainSnapshotObjGetDef(snap); bool memory = snapdef->memory == VIR_DOMAIN_SNAPSHOT_LOCATION_EXTERNAL; @@ -15519,7 +15532,7 @@ qemuDomainSnapshotCreateActiveExternal(virQEMUDriverPtr driver, if (!qemuMigrationSrcIsAllowed(driver, vm, false, 0)) goto cleanup; - priv->job.current->statsType = 
QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP; + jobPriv->current->statsType = QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP; /* allow the migration job to be cancelled or the domain to be paused */ qemuDomainObjSetAsyncJobMask(vm, (QEMU_JOB_DEFAULT_MASK | diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c index 0f2f92b211..c517774c9f 100644 --- a/src/qemu/qemu_migration.c +++ b/src/qemu/qemu_migration.c @@ -1008,6 +1008,7 @@ qemuMigrationSrcNBDStorageCopy(virQEMUDriverPtr driver, unsigned int flags) { qemuDomainObjPrivatePtr priv = vm->privateData; + qemuDomainJobPrivatePtr jobPriv = priv->job.privateData; int port; size_t i; unsigned long long mirror_speed = speed; @@ -1052,7 +1053,7 @@ qemuMigrationSrcNBDStorageCopy(virQEMUDriverPtr driver, return -1; if (priv->job.abortJob) { - priv->job.current->status = QEMU_DOMAIN_JOB_STATUS_CANCELED; + jobPriv->current->status = QEMU_DOMAIN_JOB_STATUS_CANCELED; virReportError(VIR_ERR_OPERATION_ABORTED, _("%s: %s"), qemuDomainAsyncJobTypeToString(priv->job.asyncJob), _("canceled by client")); @@ -1070,7 +1071,7 @@ qemuMigrationSrcNBDStorageCopy(virQEMUDriverPtr driver, } qemuMigrationSrcFetchMirrorStats(driver, vm, QEMU_ASYNC_JOB_MIGRATION_OUT, - priv->job.current); + jobPriv->current); /* Okay, all disks are ready. 
Modify migrate_flags */ *migrate_flags &= ~(QEMU_MONITOR_MIGRATE_NON_SHARED_DISK | @@ -1550,7 +1551,8 @@ qemuMigrationJobCheckStatus(virQEMUDriverPtr driver, qemuDomainAsyncJob asyncJob) { qemuDomainObjPrivatePtr priv = vm->privateData; - qemuDomainJobInfoPtr jobInfo = priv->job.current; + qemuDomainJobPrivatePtr jobPriv = priv->job.privateData; + qemuDomainJobInfoPtr jobInfo = jobPriv->current; char *error = NULL; bool events = virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_MIGRATION_EVENT); int ret = -1; @@ -1620,7 +1622,8 @@ qemuMigrationAnyCompleted(virQEMUDriverPtr driver, unsigned int flags) { qemuDomainObjPrivatePtr priv = vm->privateData; - qemuDomainJobInfoPtr jobInfo = priv->job.current; + qemuDomainJobPrivatePtr jobPriv = priv->job.privateData; + qemuDomainJobInfoPtr jobInfo = jobPriv->current; int pauseReason; if (qemuMigrationJobCheckStatus(driver, vm, asyncJob) < 0) @@ -1711,7 +1714,8 @@ qemuMigrationSrcWaitForCompletion(virQEMUDriverPtr driver, unsigned int flags) { qemuDomainObjPrivatePtr priv = vm->privateData; - qemuDomainJobInfoPtr jobInfo = priv->job.current; + qemuDomainJobPrivatePtr jobPriv = priv->job.privateData; + qemuDomainJobInfoPtr jobInfo = jobPriv->current; bool events = virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_MIGRATION_EVENT); int rv; @@ -1743,9 +1747,9 @@ qemuMigrationSrcWaitForCompletion(virQEMUDriverPtr driver, qemuDomainJobInfoUpdateTime(jobInfo); qemuDomainJobInfoUpdateDowntime(jobInfo); - g_clear_pointer(&priv->job.completed, qemuDomainJobInfoFree); - priv->job.completed = qemuDomainJobInfoCopy(jobInfo); - priv->job.completed->status = QEMU_DOMAIN_JOB_STATUS_COMPLETED; + g_clear_pointer(&jobPriv->completed, qemuDomainJobInfoFree); + jobPriv->completed = qemuDomainJobInfoCopy(jobInfo); + jobPriv->completed->status = QEMU_DOMAIN_JOB_STATUS_COMPLETED; if (asyncJob != QEMU_ASYNC_JOB_MIGRATION_OUT && jobInfo->status == QEMU_DOMAIN_JOB_STATUS_QEMU_COMPLETED) @@ -3018,16 +3022,16 @@ qemuMigrationSrcConfirmPhase(virQEMUDriverPtr driver, 
return -1; if (retcode == 0) - jobInfo = priv->job.completed; + jobInfo = jobPriv->completed; else - g_clear_pointer(&priv->job.completed, qemuDomainJobInfoFree); + g_clear_pointer(&jobPriv->completed, qemuDomainJobInfoFree); /* Update times with the values sent by the destination daemon */ if (mig->jobInfo && jobInfo) { int reason; /* We need to refresh migration statistics after a completed post-copy - * migration since priv->job.completed contains obsolete data from the + * migration since jobPriv->completed contains obsolete data from the * time we switched to post-copy mode. */ if (virDomainObjGetState(vm, &reason) == VIR_DOMAIN_PAUSED && @@ -3479,6 +3483,7 @@ qemuMigrationSrcRun(virQEMUDriverPtr driver, int ret = -1; unsigned int migrate_flags = QEMU_MONITOR_MIGRATE_BACKGROUND; qemuDomainObjPrivatePtr priv = vm->privateData; + qemuDomainJobPrivatePtr jobPriv = priv->job.privateData; g_autoptr(qemuMigrationCookie) mig = NULL; g_autofree char *tlsAlias = NULL; qemuMigrationIOThreadPtr iothread = NULL; @@ -3636,7 +3641,7 @@ qemuMigrationSrcRun(virQEMUDriverPtr driver, /* explicitly do this *after* we entered the monitor, * as this is a critical section so we are guaranteed * priv->job.abortJob will not change */ - priv->job.current->status = QEMU_DOMAIN_JOB_STATUS_CANCELED; + jobPriv->current->status = QEMU_DOMAIN_JOB_STATUS_CANCELED; virReportError(VIR_ERR_OPERATION_ABORTED, _("%s: %s"), qemuDomainAsyncJobTypeToString(priv->job.asyncJob), _("canceled by client")); @@ -3741,7 +3746,7 @@ qemuMigrationSrcRun(virQEMUDriverPtr driver, * resume it now once we finished all block jobs and wait for the real * end of the migration. 
*/ - if (priv->job.current->status == QEMU_DOMAIN_JOB_STATUS_PAUSED) { + if (jobPriv->current->status == QEMU_DOMAIN_JOB_STATUS_PAUSED) { if (qemuMigrationSrcContinue(driver, vm, QEMU_MONITOR_MIGRATION_STATUS_PRE_SWITCHOVER, QEMU_ASYNC_JOB_MIGRATION_OUT) < 0) @@ -3769,11 +3774,11 @@ qemuMigrationSrcRun(virQEMUDriverPtr driver, goto error; } - if (priv->job.completed) { - priv->job.completed->stopped = priv->job.current->stopped; - qemuDomainJobInfoUpdateTime(priv->job.completed); - qemuDomainJobInfoUpdateDowntime(priv->job.completed); - ignore_value(virTimeMillisNow(&priv->job.completed->sent)); + if (jobPriv->completed) { + jobPriv->completed->stopped = jobPriv->current->stopped; + qemuDomainJobInfoUpdateTime(jobPriv->completed); + qemuDomainJobInfoUpdateDowntime(jobPriv->completed); + ignore_value(virTimeMillisNow(&jobPriv->completed->sent)); } cookieFlags |= QEMU_MIGRATION_COOKIE_NETWORK | @@ -3801,7 +3806,7 @@ qemuMigrationSrcRun(virQEMUDriverPtr driver, if (virDomainObjIsActive(vm)) { if (cancel && - priv->job.current->status != QEMU_DOMAIN_JOB_STATUS_QEMU_COMPLETED && + jobPriv->current->status != QEMU_DOMAIN_JOB_STATUS_QEMU_COMPLETED && qemuDomainObjEnterMonitorAsync(driver, vm, QEMU_ASYNC_JOB_MIGRATION_OUT) == 0) { qemuMonitorMigrateCancel(priv->mon); @@ -3814,8 +3819,8 @@ qemuMigrationSrcRun(virQEMUDriverPtr driver, QEMU_ASYNC_JOB_MIGRATION_OUT, dconn); - if (priv->job.current->status != QEMU_DOMAIN_JOB_STATUS_CANCELED) - priv->job.current->status = QEMU_DOMAIN_JOB_STATUS_FAILED; + if (jobPriv->current->status != QEMU_DOMAIN_JOB_STATUS_CANCELED) + jobPriv->current->status = QEMU_DOMAIN_JOB_STATUS_FAILED; } if (iothread) @@ -5023,7 +5028,7 @@ qemuMigrationDstFinish(virQEMUDriverPtr driver, : QEMU_MIGRATION_PHASE_FINISH2); qemuDomainCleanupRemove(vm, qemuMigrationDstPrepareCleanup); - g_clear_pointer(&priv->job.completed, qemuDomainJobInfoFree); + g_clear_pointer(&jobPriv->completed, qemuDomainJobInfoFree); cookie_flags = QEMU_MIGRATION_COOKIE_NETWORK | 
QEMU_MIGRATION_COOKIE_STATS | @@ -5115,7 +5120,7 @@ qemuMigrationDstFinish(virQEMUDriverPtr driver, goto endjob; } - if (priv->job.current->status == QEMU_DOMAIN_JOB_STATUS_POSTCOPY) + if (jobPriv->current->status == QEMU_DOMAIN_JOB_STATUS_POSTCOPY) inPostCopy = true; if (!(flags & VIR_MIGRATE_PAUSED)) { @@ -5229,9 +5234,9 @@ qemuMigrationDstFinish(virQEMUDriverPtr driver, if (dom) { if (jobInfo) { - priv->job.completed = g_steal_pointer(&jobInfo); - priv->job.completed->status = QEMU_DOMAIN_JOB_STATUS_COMPLETED; - priv->job.completed->statsType = QEMU_DOMAIN_JOB_STATS_TYPE_MIGRATION; + jobPriv->completed = g_steal_pointer(&jobInfo); + jobPriv->completed->status = QEMU_DOMAIN_JOB_STATUS_COMPLETED; + jobPriv->completed->statsType = QEMU_DOMAIN_JOB_STATS_TYPE_MIGRATION; } if (qemuMigrationBakeCookie(mig, driver, vm, @@ -5244,7 +5249,7 @@ qemuMigrationDstFinish(virQEMUDriverPtr driver, * is obsolete anyway. */ if (inPostCopy) - g_clear_pointer(&priv->job.completed, qemuDomainJobInfoFree); + g_clear_pointer(&jobPriv->completed, qemuDomainJobInfoFree); } qemuMigrationParamsReset(driver, vm, QEMU_ASYNC_JOB_MIGRATION_IN, @@ -5473,6 +5478,7 @@ qemuMigrationJobStart(virQEMUDriverPtr driver, unsigned long apiFlags) { qemuDomainObjPrivatePtr priv = vm->privateData; + qemuDomainJobPrivatePtr jobPriv = priv->job.privateData; virDomainJobOperation op; unsigned long long mask; @@ -5489,7 +5495,7 @@ qemuMigrationJobStart(virQEMUDriverPtr driver, if (qemuDomainObjBeginAsyncJob(driver, vm, job, op, apiFlags) < 0) return -1; - priv->job.current->statsType = QEMU_DOMAIN_JOB_STATS_TYPE_MIGRATION; + jobPriv->current->statsType = QEMU_DOMAIN_JOB_STATS_TYPE_MIGRATION; qemuDomainObjSetAsyncJobMask(vm, mask); return 0; diff --git a/src/qemu/qemu_migration_cookie.c b/src/qemu/qemu_migration_cookie.c index 81b557e0a8..a0e8cba8ba 100644 --- a/src/qemu/qemu_migration_cookie.c +++ b/src/qemu/qemu_migration_cookie.c @@ -509,12 +509,13 @@ qemuMigrationCookieAddStatistics(qemuMigrationCookiePtr 
mig, virDomainObjPtr vm) { qemuDomainObjPrivatePtr priv = vm->privateData; + qemuDomainJobPrivatePtr jobPriv = priv->job.privateData; - if (!priv->job.completed) + if (!jobPriv->completed) return 0; g_clear_pointer(&mig->jobInfo, qemuDomainJobInfoFree); - mig->jobInfo = qemuDomainJobInfoCopy(priv->job.completed); + mig->jobInfo = qemuDomainJobInfoCopy(jobPriv->completed); mig->flags |= QEMU_MIGRATION_COOKIE_STATS; @@ -1465,6 +1466,7 @@ qemuMigrationEatCookie(virQEMUDriverPtr driver, unsigned int flags) { g_autoptr(qemuMigrationCookie) mig = NULL; + qemuDomainJobPrivatePtr jobPriv = priv->job.privateData; /* Parse & validate incoming cookie (if any) */ if (cookiein && cookieinlen && @@ -1513,7 +1515,7 @@ qemuMigrationEatCookie(virQEMUDriverPtr driver, } if (flags & QEMU_MIGRATION_COOKIE_STATS && mig->jobInfo) - mig->jobInfo->operation = priv->job.current->operation; + mig->jobInfo->operation = jobPriv->current->operation; return g_steal_pointer(&mig); } diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c index 126fabf5ef..652d217b5c 100644 --- a/src/qemu/qemu_process.c +++ b/src/qemu/qemu_process.c @@ -657,6 +657,7 @@ qemuProcessHandleStop(qemuMonitorPtr mon G_GNUC_UNUSED, virDomainEventSuspendedDetailType detail; g_autoptr(virQEMUDriverConfig) cfg = virQEMUDriverGetConfig(driver); qemuDomainObjPrivatePtr priv = vm->privateData; + qemuDomainJobPrivatePtr jobPriv = priv->job.privateData; virObjectLock(vm); @@ -668,7 +669,7 @@ qemuProcessHandleStop(qemuMonitorPtr mon G_GNUC_UNUSED, if (virDomainObjGetState(vm, NULL) == VIR_DOMAIN_RUNNING && !priv->pausedShutdown) { if (priv->job.asyncJob == QEMU_ASYNC_JOB_MIGRATION_OUT) { - if (priv->job.current->status == QEMU_DOMAIN_JOB_STATUS_POSTCOPY) + if (jobPriv->current->status == QEMU_DOMAIN_JOB_STATUS_POSTCOPY) reason = VIR_DOMAIN_PAUSED_POSTCOPY; else reason = VIR_DOMAIN_PAUSED_MIGRATION; @@ -680,8 +681,8 @@ qemuProcessHandleStop(qemuMonitorPtr mon G_GNUC_UNUSED, vm->def->name, 
virDomainPausedReasonTypeToString(reason), detail); - if (priv->job.current) - ignore_value(virTimeMillisNow(&priv->job.current->stopped)); + if (jobPriv->current) + ignore_value(virTimeMillisNow(&jobPriv->current->stopped)); if (priv->signalStop) virDomainObjBroadcast(vm); @@ -1649,6 +1650,7 @@ qemuProcessHandleMigrationStatus(qemuMonitorPtr mon G_GNUC_UNUSED, void *opaque) { qemuDomainObjPrivatePtr priv; + qemuDomainJobPrivatePtr jobPriv; virQEMUDriverPtr driver = opaque; virObjectEventPtr event = NULL; g_autoptr(virQEMUDriverConfig) cfg = virQEMUDriverGetConfig(driver); @@ -1661,12 +1663,13 @@ qemuProcessHandleMigrationStatus(qemuMonitorPtr mon G_GNUC_UNUSED, qemuMonitorMigrationStatusTypeToString(status)); priv = vm->privateData; + jobPriv = priv->job.privateData; if (priv->job.asyncJob == QEMU_ASYNC_JOB_NONE) { VIR_DEBUG("got MIGRATION event without a migration job"); goto cleanup; } - priv->job.current->stats.mig.status = status; + jobPriv->current->stats.mig.status = status; virDomainObjBroadcast(vm); if (status == QEMU_MONITOR_MIGRATION_STATUS_POSTCOPY && @@ -1747,13 +1750,13 @@ qemuProcessHandleDumpCompleted(qemuMonitorPtr mon G_GNUC_UNUSED, goto cleanup; } jobPriv->dumpCompleted = true; - priv->job.current->stats.dump = *stats; + jobPriv->current->stats.dump = *stats; priv->job.error = g_strdup(error); /* Force error if extracting the DUMP_COMPLETED status failed */ if (!error && status < 0) { priv->job.error = g_strdup(virGetLastErrorMessage()); - priv->job.current->stats.dump.status = QEMU_MONITOR_DUMP_STATUS_FAILED; + jobPriv->current->stats.dump.status = QEMU_MONITOR_DUMP_STATUS_FAILED; } virDomainObjBroadcast(vm); @@ -3267,6 +3270,7 @@ int qemuProcessStopCPUs(virQEMUDriverPtr driver, { int ret = -1; qemuDomainObjPrivatePtr priv = vm->privateData; + qemuDomainJobPrivatePtr jobPriv = priv->job.privateData; VIR_FREE(priv->lockState); @@ -3285,8 +3289,8 @@ int qemuProcessStopCPUs(virQEMUDriverPtr driver, /* de-activate netdevs after stopping CPUs */ 
ignore_value(qemuInterfaceStopDevices(vm->def)); - if (priv->job.current) - ignore_value(virTimeMillisNow(&priv->job.current->stopped)); + if (jobPriv->current) + ignore_value(virTimeMillisNow(&jobPriv->current->stopped)); /* The STOP event handler will change the domain state with the reason * saved in priv->pausedReason and it will also emit corresponding domain @@ -3583,6 +3587,7 @@ qemuProcessRecoverJob(virQEMUDriverPtr driver, unsigned int *stopFlags) { qemuDomainObjPrivatePtr priv = vm->privateData; + qemuDomainJobPrivatePtr jobPriv = priv->job.privateData; virDomainState state; int reason; unsigned long long now; @@ -3651,11 +3656,11 @@ qemuProcessRecoverJob(virQEMUDriverPtr driver, /* We reset the job parameters for backup so that the job will look * active. This is possible because we are able to recover the state * of blockjobs and also the backup job allows all sub-job types */ - priv->job.current = g_new0(qemuDomainJobInfo, 1); - priv->job.current->operation = VIR_DOMAIN_JOB_OPERATION_BACKUP; - priv->job.current->statsType = QEMU_DOMAIN_JOB_STATS_TYPE_BACKUP; - priv->job.current->status = QEMU_DOMAIN_JOB_STATUS_ACTIVE; - priv->job.current->started = now; + jobPriv->current = g_new0(qemuDomainJobInfo, 1); + jobPriv->current->operation = VIR_DOMAIN_JOB_OPERATION_BACKUP; + jobPriv->current->statsType = QEMU_DOMAIN_JOB_STATS_TYPE_BACKUP; + jobPriv->current->status = QEMU_DOMAIN_JOB_STATUS_ACTIVE; + jobPriv->current->started = now; break; case QEMU_ASYNC_JOB_NONE: @@ -3760,7 +3765,6 @@ qemuDomainPerfRestart(virDomainObjPtr vm) return 0; } - static void qemuProcessReconnectCheckMemAliasOrderMismatch(virDomainObjPtr vm) { -- 2.25.1

On Mon, Aug 17, 2020 at 10:37:16AM +0530, Prathamesh Chavan wrote:
As `qemuDomainJobInfo` had attributes specific to the QEMU hypervisor's jobs, we moved the attributes `current` and `completed` from `qemuDomainJobObj` to its `privateData` structure.
In this process, two callback functions, `setJobInfoOperation` and `currentJobInfoInit`, were introduced to qemuDomainJob's callback structure.
Signed-off-by: Prathamesh Chavan <pc44800@gmail.com> --- src/qemu/qemu_backup.c | 22 +- src/qemu/qemu_domain.c | 498 +++++++++++++++++++++++++++++++ src/qemu/qemu_domain.h | 74 +++++ src/qemu/qemu_domainjob.c | 483 +----------------------------- src/qemu/qemu_domainjob.h | 81 +---- src/qemu/qemu_driver.c | 49 +-- src/qemu/qemu_migration.c | 62 ++-- src/qemu/qemu_migration_cookie.c | 8 +- src/qemu/qemu_process.c | 32 +- 9 files changed, 680 insertions(+), 629 deletions(-)
diff --git a/src/qemu/qemu_backup.c b/src/qemu/qemu_backup.c index a402730d38..1822c6f267 100644 --- a/src/qemu/qemu_backup.c +++ b/src/qemu/qemu_backup.c @@ -529,20 +529,21 @@ qemuBackupJobTerminate(virDomainObjPtr vm,
{ qemuDomainObjPrivatePtr priv = vm->privateData; + qemuDomainJobPrivatePtr jobPriv = priv->job.privateData; size_t i;
-    qemuDomainJobInfoUpdateTime(priv->job.current);
+    qemuDomainJobInfoUpdateTime(jobPriv->current);

-    g_clear_pointer(&priv->job.completed, qemuDomainJobInfoFree);
-    priv->job.completed = qemuDomainJobInfoCopy(priv->job.current);
+    g_clear_pointer(&jobPriv->completed, qemuDomainJobInfoFree);
+    jobPriv->completed = qemuDomainJobInfoCopy(jobPriv->current);

-    priv->job.completed->stats.backup.total = priv->backup->push_total;
-    priv->job.completed->stats.backup.transferred = priv->backup->push_transferred;
-    priv->job.completed->stats.backup.tmp_used = priv->backup->pull_tmp_used;
-    priv->job.completed->stats.backup.tmp_total = priv->backup->pull_tmp_total;
+    jobPriv->completed->stats.backup.total = priv->backup->push_total;
+    jobPriv->completed->stats.backup.transferred = priv->backup->push_transferred;
+    jobPriv->completed->stats.backup.tmp_used = priv->backup->pull_tmp_used;
+    jobPriv->completed->stats.backup.tmp_total = priv->backup->pull_tmp_total;

-    priv->job.completed->status = jobstatus;
-    priv->job.completed->errmsg = g_strdup(priv->backup->errmsg);
+    jobPriv->completed->status = jobstatus;
+    jobPriv->completed->errmsg = g_strdup(priv->backup->errmsg);
qemuDomainEventEmitJobCompleted(priv->driver, vm);
@@ -694,6 +695,7 @@ qemuBackupBegin(virDomainObjPtr vm, unsigned int flags) { qemuDomainObjPrivatePtr priv = vm->privateData; + qemuDomainJobPrivatePtr jobPriv = priv->job.privateData; g_autoptr(virQEMUDriverConfig) cfg = virQEMUDriverGetConfig(priv->driver); g_autoptr(virDomainBackupDef) def = NULL; g_autofree char *suffix = NULL; @@ -745,7 +747,7 @@ qemuBackupBegin(virDomainObjPtr vm, qemuDomainObjSetAsyncJobMask(vm, (QEMU_JOB_DEFAULT_MASK | JOB_MASK(QEMU_JOB_SUSPEND) | JOB_MASK(QEMU_JOB_MODIFY))); - priv->job.current->statsType = QEMU_DOMAIN_JOB_STATS_TYPE_BACKUP; + jobPriv->current->statsType = QEMU_DOMAIN_JOB_STATS_TYPE_BACKUP;
if (!virDomainObjIsActive(vm)) { virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s",
You could perform changes such as the ones ^above in a separate patch, while... [snip]
+
+
+int
+qemuDomainJobInfoToInfo(qemuDomainJobInfoPtr jobInfo,
+                        virDomainJobInfoPtr info)
+{
+    info->type = qemuDomainJobStatusToType(jobInfo->status);
+    info->timeElapsed = jobInfo->timeElapsed;
+
+    switch (jobInfo->statsType) {
+    case QEMU_DOMAIN_JOB_STATS_TYPE_MIGRATION:
+        info->memTotal = jobInfo->stats.mig.ram_total;
+        info->memRemaining = jobInfo->stats.mig.ram_remaining;
+        info->memProcessed = jobInfo->stats.mig.ram_transferred;
+        info->fileTotal = jobInfo->stats.mig.disk_total +
+                          jobInfo->mirrorStats.total;
+        info->fileRemaining = jobInfo->stats.mig.disk_remaining +
+                              (jobInfo->mirrorStats.total -
+                               jobInfo->mirrorStats.transferred);
+        info->fileProcessed = jobInfo->stats.mig.disk_transferred +
+                              jobInfo->mirrorStats.transferred;
+        break;
+
+    case QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP:
+        info->memTotal = jobInfo->stats.mig.ram_total;
+        info->memRemaining = jobInfo->stats.mig.ram_remaining;
+        info->memProcessed = jobInfo->stats.mig.ram_transferred;
+        break;
+
+    case QEMU_DOMAIN_JOB_STATS_TYPE_MEMDUMP:
+        info->memTotal = jobInfo->stats.dump.total;
+        info->memProcessed = jobInfo->stats.dump.completed;
+        info->memRemaining = info->memTotal - info->memProcessed;
+        break;
+
+    case QEMU_DOMAIN_JOB_STATS_TYPE_BACKUP:
+        info->fileTotal = jobInfo->stats.backup.total;
+        info->fileProcessed = jobInfo->stats.backup.transferred;
+        info->fileRemaining = info->fileTotal - info->fileProcessed;
+        break;
+
+    case QEMU_DOMAIN_JOB_STATS_TYPE_NONE:
+        break;
+    }
+
+    info->dataTotal = info->memTotal + info->fileTotal;
+    info->dataRemaining = info->memRemaining + info->fileRemaining;
+    info->dataProcessed = info->memProcessed + info->fileProcessed;
+
+    return 0;
+}
...moving helper functions like ^this one in the current patch, if... [snip]
diff --git a/src/qemu/qemu_domain.h b/src/qemu/qemu_domain.h index 3a1bcbbfa3..386ae17272 100644 --- a/src/qemu/qemu_domain.h +++ b/src/qemu/qemu_domain.h @@ -483,6 +483,52 @@ struct _qemuDomainXmlNsDef { char **capsdel; };
+typedef struct _qemuDomainMirrorStats qemuDomainMirrorStats;
+typedef qemuDomainMirrorStats *qemuDomainMirrorStatsPtr;
+struct _qemuDomainMirrorStats {
+    unsigned long long transferred;
+    unsigned long long total;
+};
+
+typedef struct _qemuDomainBackupStats qemuDomainBackupStats;
+struct _qemuDomainBackupStats {
+    unsigned long long transferred;
+    unsigned long long total;
+    unsigned long long tmp_used;
+    unsigned long long tmp_total;
+};
+
+typedef struct _qemuDomainJobInfo qemuDomainJobInfo;
+typedef qemuDomainJobInfo *qemuDomainJobInfoPtr;
+struct _qemuDomainJobInfo {
+    qemuDomainJobStatus status;
+    virDomainJobOperation operation;
+    unsigned long long started; /* When the async job started */
+    unsigned long long stopped; /* When the domain's CPUs were stopped */
+    unsigned long long sent; /* When the source sent status info to the
+                                destination (only for migrations). */
+    unsigned long long received; /* When the destination host received status
+                                    info from the source (migrations only). */
+    /* Computed values */
+    unsigned long long timeElapsed;
+    long long timeDelta; /* delta = received - sent, i.e., the difference
+                            between the source and the destination time plus
+                            the time between the end of Perform phase on the
+                            source and the beginning of Finish phase on the
+                            destination. */
+    bool timeDeltaSet;
+    /* Raw values from QEMU */
+    qemuDomainJobStatsType statsType;
+    union {
+        qemuMonitorMigrationStats mig;
+        qemuMonitorDumpStats dump;
+        qemuDomainBackupStats backup;
+    } stats;
+    qemuDomainMirrorStats mirrorStats;
+
+    char *errmsg; /* optional error message for failed completed jobs */
+};
+
...you keep ^these structures in qemu_domainjob.h in patch 1, as qemu_domain.h includes qemu_domainjob.h, so all the functions that you'd move would have the symbol. Then, in patch 2 you'd move the structures themselves and perform changes like the ones below for example.
@@ -3330,7 +3333,7 @@ qemuDomainSaveInternal(virQEMUDriverPtr driver, goto endjob; }
-    priv->job.current->statsType = QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP;
+    jobPriv->current->statsType = QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP;
/* Pause */ if (virDomainObjGetState(vm, NULL) == VIR_DOMAIN_RUNNING) { @@ -3715,7 +3718,7 @@ qemuDumpWaitForCompletion(virDomainObjPtr vm) return -1; }
-    if (priv->job.current->stats.dump.status == QEMU_MONITOR_DUMP_STATUS_FAILED) {
+    if (jobPriv->current->stats.dump.status == QEMU_MONITOR_DUMP_STATUS_FAILED) {
         if (priv->job.error)
             virReportError(VIR_ERR_OPERATION_FAILED,
                            _("memory-only dump failed: %s"),
Regards, Erik

Since the attribute `jobs_queued` was specific to jobs, we decided to move this from `qemuDomainObjPrivate` to `qemuDomainJobObj` structure. Signed-off-by: Prathamesh Chavan <pc44800@gmail.com> --- src/qemu/qemu_domain.h | 2 -- src/qemu/qemu_domainjob.c | 14 +++++++------- src/qemu/qemu_domainjob.h | 2 ++ src/qemu/qemu_process.c | 2 +- 4 files changed, 10 insertions(+), 10 deletions(-) diff --git a/src/qemu/qemu_domain.h b/src/qemu/qemu_domain.h index 386ae17272..507f710200 100644 --- a/src/qemu/qemu_domain.h +++ b/src/qemu/qemu_domain.h @@ -161,8 +161,6 @@ struct _qemuDomainObjPrivate { bool pausedShutdown; virTristateBool allowReboot; - int jobs_queued; - unsigned long migMaxBandwidth; char *origname; int nbdPort; /* Port used for migration with NBD */ diff --git a/src/qemu/qemu_domainjob.c b/src/qemu/qemu_domainjob.c index 503a87bb12..7cd1aabd9e 100644 --- a/src/qemu/qemu_domainjob.c +++ b/src/qemu/qemu_domainjob.c @@ -365,13 +365,13 @@ qemuDomainObjBeginJobInternal(virQEMUDriverPtr driver, if (virTimeMillisNow(&now) < 0) return -1; - priv->jobs_queued++; + priv->job.jobs_queued++; then = now + QEMU_JOB_WAIT_TIME; retry: if ((!async && job != QEMU_JOB_DESTROY) && cfg->maxQueuedJobs && - priv->jobs_queued > cfg->maxQueuedJobs) { + priv->job.jobs_queued > cfg->maxQueuedJobs) { goto error; } @@ -502,7 +502,7 @@ qemuDomainObjBeginJobInternal(virQEMUDriverPtr driver, } ret = -2; } else if (cfg->maxQueuedJobs && - priv->jobs_queued > cfg->maxQueuedJobs) { + priv->job.jobs_queued > cfg->maxQueuedJobs) { if (blocker && agentBlocker) { virReportError(VIR_ERR_OPERATION_FAILED, _("cannot acquire state change " @@ -532,7 +532,7 @@ qemuDomainObjBeginJobInternal(virQEMUDriverPtr driver, } cleanup: - priv->jobs_queued--; + priv->job.jobs_queued--; return ret; } @@ -653,7 +653,7 @@ qemuDomainObjEndJob(virQEMUDriverPtr driver, virDomainObjPtr obj) qemuDomainObjPrivatePtr priv = obj->privateData; qemuDomainJob job = priv->job.active; - priv->jobs_queued--; + 
priv->job.jobs_queued--; VIR_DEBUG("Stopping job: %s (async=%s vm=%p name=%s)", qemuDomainJobTypeToString(job), @@ -674,7 +674,7 @@ qemuDomainObjEndAgentJob(virDomainObjPtr obj) qemuDomainObjPrivatePtr priv = obj->privateData; qemuDomainAgentJob agentJob = priv->job.agentActive; - priv->jobs_queued--; + priv->job.jobs_queued--; VIR_DEBUG("Stopping agent job: %s (async=%s vm=%p name=%s)", qemuDomainAgentJobTypeToString(agentJob), @@ -692,7 +692,7 @@ qemuDomainObjEndAsyncJob(virQEMUDriverPtr driver, virDomainObjPtr obj) { qemuDomainObjPrivatePtr priv = obj->privateData; - priv->jobs_queued--; + priv->job.jobs_queued--; VIR_DEBUG("Stopping async job: %s (vm=%p name=%s)", qemuDomainAsyncJobTypeToString(priv->job.asyncJob), diff --git a/src/qemu/qemu_domainjob.h b/src/qemu/qemu_domainjob.h index 88051d099a..0696b79fe3 100644 --- a/src/qemu/qemu_domainjob.h +++ b/src/qemu/qemu_domainjob.h @@ -152,6 +152,8 @@ struct _qemuDomainJobObj { char *error; /* job event completion error */ unsigned long apiFlags; /* flags passed to the API which started the async job */ + int jobs_queued; + void *privateData; /* job specific collection of data */ qemuDomainObjPrivateJobCallbacksPtr cb; }; diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c index 652d217b5c..e114e4c4ce 100644 --- a/src/qemu/qemu_process.c +++ b/src/qemu/qemu_process.c @@ -3644,7 +3644,7 @@ qemuProcessRecoverJob(virQEMUDriverPtr driver, ignore_value(virTimeMillisNow(&now)); /* Restore the config of the async job which is not persisted */ - priv->jobs_queued++; + priv->job.jobs_queued++; priv->job.asyncJob = QEMU_ASYNC_JOB_BACKUP; priv->job.asyncOwnerAPI = virThreadJobGet(); priv->job.asyncStarted = now; -- 2.25.1

Reference to `maxQueuedJobs` required us to access config of the qemu-driver. And creating its copy in the `qemuDomainJob` helped us access the variable without referencing the driver's config. Signed-off-by: Prathamesh Chavan <pc44800@gmail.com> --- src/qemu/qemu_domain.c | 5 ++++- src/qemu/qemu_domainjob.c | 13 +++++++------ src/qemu/qemu_domainjob.h | 4 +++- 3 files changed, 14 insertions(+), 8 deletions(-) diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c index 1ae44ae39f..677fa7ea91 100644 --- a/src/qemu/qemu_domain.c +++ b/src/qemu/qemu_domain.c @@ -2085,11 +2085,14 @@ static void * qemuDomainObjPrivateAlloc(void *opaque) { qemuDomainObjPrivatePtr priv; + virQEMUDriverPtr driver = opaque; + g_autoptr(virQEMUDriverConfig) cfg = virQEMUDriverGetConfig(driver); if (VIR_ALLOC(priv) < 0) return NULL; - if (qemuDomainObjInitJob(&priv->job, &qemuPrivateJobCallbacks) < 0) { + if (qemuDomainObjInitJob(&priv->job, &qemuPrivateJobCallbacks, + cfg->maxQueuedJobs) < 0) { virReportSystemError(errno, "%s", _("Unable to init qemu driver mutexes")); goto error; diff --git a/src/qemu/qemu_domainjob.c b/src/qemu/qemu_domainjob.c index 7cd1aabd9e..eebc144747 100644 --- a/src/qemu/qemu_domainjob.c +++ b/src/qemu/qemu_domainjob.c @@ -117,10 +117,12 @@ qemuDomainAsyncJobPhaseFromString(qemuDomainAsyncJob job, int qemuDomainObjInitJob(qemuDomainJobObjPtr job, - qemuDomainObjPrivateJobCallbacksPtr cb) + qemuDomainObjPrivateJobCallbacksPtr cb, + unsigned int maxQueuedJobs) { memset(job, 0, sizeof(*job)); job->cb = cb; + job->maxQueuedJobs = maxQueuedJobs; if (!(job->privateData = job->cb->allocJobPrivate())) return -1; @@ -344,7 +346,6 @@ qemuDomainObjBeginJobInternal(virQEMUDriverPtr driver, unsigned long long then; bool nested = job == QEMU_JOB_ASYNC_NESTED; bool async = job == QEMU_JOB_ASYNC; - g_autoptr(virQEMUDriverConfig) cfg = virQEMUDriverGetConfig(driver); const char *blocker = NULL; const char *agentBlocker = NULL; int ret = -1; @@ -370,8 +371,8 @@ 
qemuDomainObjBeginJobInternal(virQEMUDriverPtr driver, retry: if ((!async && job != QEMU_JOB_DESTROY) && - cfg->maxQueuedJobs && - priv->job.jobs_queued > cfg->maxQueuedJobs) { + priv->job.maxQueuedJobs && + priv->job.jobs_queued > priv->job.maxQueuedJobs) { goto error; } @@ -501,8 +502,8 @@ qemuDomainObjBeginJobInternal(virQEMUDriverPtr driver, _("cannot acquire state change lock")); } ret = -2; - } else if (cfg->maxQueuedJobs && - priv->job.jobs_queued > cfg->maxQueuedJobs) { + } else if (priv->job.maxQueuedJobs && + priv->job.jobs_queued > priv->job.maxQueuedJobs) { if (blocker && agentBlocker) { virReportError(VIR_ERR_OPERATION_FAILED, _("cannot acquire state change " diff --git a/src/qemu/qemu_domainjob.h b/src/qemu/qemu_domainjob.h index 0696b79fe3..11e7f2f432 100644 --- a/src/qemu/qemu_domainjob.h +++ b/src/qemu/qemu_domainjob.h @@ -153,6 +153,7 @@ struct _qemuDomainJobObj { unsigned long apiFlags; /* flags passed to the API which started the async job */ int jobs_queued; + unsigned int maxQueuedJobs; void *privateData; /* job specific collection of data */ qemuDomainObjPrivateJobCallbacksPtr cb; @@ -215,7 +216,8 @@ void qemuDomainObjFreeJob(qemuDomainJobObjPtr job); int qemuDomainObjInitJob(qemuDomainJobObjPtr job, - qemuDomainObjPrivateJobCallbacksPtr cb); + qemuDomainObjPrivateJobCallbacksPtr cb, + unsigned int maxQueuedJobs); bool qemuDomainJobAllowed(qemuDomainJobObjPtr jobs, qemuDomainJob newJob); -- 2.25.1

On Mon, Aug 17, 2020 at 10:37:18AM +0530, Prathamesh Chavan wrote:
Reference to `maxQueuedJobs` required us to access config of the qemu-driver. And creating its copy in the `qemuDomainJob` helped us access the variable without referencing the driver's config.
Signed-off-by: Prathamesh Chavan <pc44800@gmail.com> --- src/qemu/qemu_domain.c | 5 ++++- src/qemu/qemu_domainjob.c | 13 +++++++------ src/qemu/qemu_domainjob.h | 4 +++- 3 files changed, 14 insertions(+), 8 deletions(-)
diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c index 1ae44ae39f..677fa7ea91 100644 --- a/src/qemu/qemu_domain.c +++ b/src/qemu/qemu_domain.c @@ -2085,11 +2085,14 @@ static void * qemuDomainObjPrivateAlloc(void *opaque) { qemuDomainObjPrivatePtr priv; + virQEMUDriverPtr driver = opaque; + g_autoptr(virQEMUDriverConfig) cfg = virQEMUDriverGetConfig(driver);
if (VIR_ALLOC(priv) < 0) return NULL;
- if (qemuDomainObjInitJob(&priv->job, &qemuPrivateJobCallbacks) < 0) { + if (qemuDomainObjInitJob(&priv->job, &qemuPrivateJobCallbacks, + cfg->maxQueuedJobs) < 0) { virReportSystemError(errno, "%s", _("Unable to init qemu driver mutexes")); goto error; diff --git a/src/qemu/qemu_domainjob.c b/src/qemu/qemu_domainjob.c index 7cd1aabd9e..eebc144747 100644 --- a/src/qemu/qemu_domainjob.c +++ b/src/qemu/qemu_domainjob.c @@ -117,10 +117,12 @@ qemuDomainAsyncJobPhaseFromString(qemuDomainAsyncJob job,
int qemuDomainObjInitJob(qemuDomainJobObjPtr job, - qemuDomainObjPrivateJobCallbacksPtr cb) + qemuDomainObjPrivateJobCallbacksPtr cb, + unsigned int maxQueuedJobs) { memset(job, 0, sizeof(*job)); job->cb = cb; + job->maxQueuedJobs = maxQueuedJobs;
if (!(job->privateData = job->cb->allocJobPrivate())) return -1; @@ -344,7 +346,6 @@ qemuDomainObjBeginJobInternal(virQEMUDriverPtr driver, unsigned long long then; bool nested = job == QEMU_JOB_ASYNC_NESTED; bool async = job == QEMU_JOB_ASYNC; - g_autoptr(virQEMUDriverConfig) cfg = virQEMUDriverGetConfig(driver); const char *blocker = NULL; const char *agentBlocker = NULL; int ret = -1; @@ -370,8 +371,8 @@ qemuDomainObjBeginJobInternal(virQEMUDriverPtr driver,
retry: if ((!async && job != QEMU_JOB_DESTROY) && - cfg->maxQueuedJobs && - priv->job.jobs_queued > cfg->maxQueuedJobs) { + priv->job.maxQueuedJobs && + priv->job.jobs_queued > priv->job.maxQueuedJobs) { goto error; }
@@ -501,8 +502,8 @@ qemuDomainObjBeginJobInternal(virQEMUDriverPtr driver, _("cannot acquire state change lock")); } ret = -2; - } else if (cfg->maxQueuedJobs && - priv->job.jobs_queued > cfg->maxQueuedJobs) { + } else if (priv->job.maxQueuedJobs && + priv->job.jobs_queued > priv->job.maxQueuedJobs) { if (blocker && agentBlocker) { virReportError(VIR_ERR_OPERATION_FAILED, _("cannot acquire state change " diff --git a/src/qemu/qemu_domainjob.h b/src/qemu/qemu_domainjob.h index 0696b79fe3..11e7f2f432 100644 --- a/src/qemu/qemu_domainjob.h +++ b/src/qemu/qemu_domainjob.h @@ -153,6 +153,7 @@ struct _qemuDomainJobObj { unsigned long apiFlags; /* flags passed to the API which started the async job */
int jobs_queued; + unsigned int maxQueuedJobs;
void *privateData; /* job specific collection of data */ qemuDomainObjPrivateJobCallbacksPtr cb;
The comment below applies to both 2/6 and 3/6.

Looking at this again, I don't think we want to move either of the jobs_queued or maxQueuedJobs attributes to the job structure. I'm sorry I didn't spot this right away in the previous revision of the series; I take responsibility for it. The job structure represents a single job, so from the design POV, referencing both jobs_queued (which is really tied to a domain) and maxQueuedJobs (which is a global driver setting) from within the job structure simply doesn't feel right.

Instead, I think qemuDomainObjBeginJobInternal will simply have to be split into parts specific to QEMU that have access to the driver and hypervisor-agnostic bits that can be moved to src/hypervisor.

Erik

On 8/18/20 10:48 AM, Erik Skultety wrote:
On Mon, Aug 17, 2020 at 10:37:18AM +0530, Prathamesh Chavan wrote:
Reference to `maxQueuedJobs` required us to access config of the qemu-driver. And creating its copy in the `qemuDomainJob` helped us access the variable without referencing the driver's config.
Signed-off-by: Prathamesh Chavan <pc44800@gmail.com> --- src/qemu/qemu_domain.c | 5 ++++- src/qemu/qemu_domainjob.c | 13 +++++++------ src/qemu/qemu_domainjob.h | 4 +++- 3 files changed, 14 insertions(+), 8 deletions(-)
Looking at this again, I don't think we want to move any of the jobs_queued or maxQueuedJobs attributes to the job structure, I'm sorry I didn't spot this right away in the previous revision of the series, I take responsibility for it. The job structure represents a single job, so from the design POV, referencing both jobs_queued (which is really tied to a domain) and maxQueuedJobs (which is a global driver setting) from within the job structure simply doesn't feel right. Instead, I think qemuDomainObjBeginJobInternal will simply have to be split in parts specific to QEMU that have access to the driver and hypervisor-agnostic bits that can be moved to src/hypervisor.
Actually, I think maxQueuedJobs is job specific and not a global driver setting. I mean, I can imagine other drivers benefiting from it too. It's true that we currently don't allow specifying different values for different domains and only apply the setting from qemu.conf, but in the future we might make this available in the domain XML somehow [1] and thus be per domain. Moreover, I can imagine users wanting to set the limit for volumes too [2].

Michal

1: this refers to a discussion we had on the list (too lazy to find the link, sorry) about qemu:///embed. So far, if a user wants to set some values, they have to create qemu.conf in the root before connecting to the embed driver (so that the driver loads them). The idea was to allow them to specify everything in the domain XML so no file needs to be created.

2: although it's not as bad as it used to be, with rotational disks having two processes accessing different portions of a file resulted in very poor performance as the head had to jump up & down.

On Wed, Aug 19, 2020 at 01:45:37PM +0200, Michal Privoznik wrote:
On 8/18/20 10:48 AM, Erik Skultety wrote:
On Mon, Aug 17, 2020 at 10:37:18AM +0530, Prathamesh Chavan wrote:
Reference to `maxQueuedJobs` required us to access config of the qemu-driver. And creating its copy in the `qemuDomainJob` helped us access the variable without referencing the driver's config.
Signed-off-by: Prathamesh Chavan <pc44800@gmail.com> --- src/qemu/qemu_domain.c | 5 ++++- src/qemu/qemu_domainjob.c | 13 +++++++------ src/qemu/qemu_domainjob.h | 4 +++- 3 files changed, 14 insertions(+), 8 deletions(-)
Looking at this again, I don't think we want to move any of the jobs_queued or maxQueuedJobs attributes to the job structure, I'm sorry I didn't spot this right away in the previous revision of the series, I take responsibility for it. The job structure represents a single job, so from the design POV, referencing both jobs_queued (which is really tied to a domain) and maxQueuedJobs (which is a global driver setting) from within the job structure simply doesn't feel right. Instead, I think qemuDomainObjBeginJobInternal will simply have to be split in parts specific to QEMU that have access to the driver and hypervisor-agnostic bits that can be moved to src/hypervisor.
Actually, I think maxQueuedJobs is job specific and not a global driver setting. I mean, I can imagine other drivers benefiting from it too. It's true that we currently don't allow specifying different values for different domains and only apply the setting from qemu.conf, but in the future we might make this available in domain XML somehow [1] and thus be per domain. Moreover, I can imagine users wanting to set the limit for volumes too [2].
I can relate to the reasoning, but I still don't think the job structure is the right place. Like I said, it holds everything that a single job needs; from that POV a single job doesn't really need to do anything with that value, it's just that it happens to be a very convenient place right now. I say let's try finding a different location for the attribute where it makes more sense.

Erik
Michal
1: this refers to discussion we had on the list (too lazy to find the link, sorry) about the qemu:///embed. So far, if an user wants to set some values, they have to create qemu.conf in the root before connecting to the embed driver (so that the driver loads them). The idea was to allow them to specify everything in domain XML so no file needs to be created.
2: although, it's not as bad as it used to be, but with rotational disks having two processes accessing different portions of a file resulted in very poor performance as the head had to jump up & down.

On 8/19/20 2:00 PM, Erik Skultety wrote:
On Wed, Aug 19, 2020 at 01:45:37PM +0200, Michal Privoznik wrote:
On 8/18/20 10:48 AM, Erik Skultety wrote:
On Mon, Aug 17, 2020 at 10:37:18AM +0530, Prathamesh Chavan wrote:
Reference to `maxQueuedJobs` required us to access config of the qemu-driver. And creating its copy in the `qemuDomainJob` helped us access the variable without referencing the driver's config.
Signed-off-by: Prathamesh Chavan <pc44800@gmail.com> --- src/qemu/qemu_domain.c | 5 ++++- src/qemu/qemu_domainjob.c | 13 +++++++------ src/qemu/qemu_domainjob.h | 4 +++- 3 files changed, 14 insertions(+), 8 deletions(-)
Looking at this again, I don't think we want to move any of the jobs_queued or maxQueuedJobs attributes to the job structure, I'm sorry I didn't spot this right away in the previous revision of the series, I take responsibility for it. The job structure represents a single job, so from the design POV, referencing both jobs_queued (which is really tied to a domain) and maxQueuedJobs (which is a global driver setting) from within the job structure simply doesn't feel right. Instead, I think qemuDomainObjBeginJobInternal will simply have to be split in parts specific to QEMU that have access to the driver and hypervisor-agnostic bits that can be moved to src/hypervisor.
Actually, I think maxQueuedJobs is job specific and not a global driver setting. I mean, I can imagine other drivers benefiting from it too. It's true that we currently don't allow specifying different values for different domains and only apply the setting from qemu.conf, but in the future we might make this available in domain XML somehow [1] and thus be per domain. Moreover, I can imagine users wanting to set the limit for volumes too [2].
I can relate to the reasoning, but I still don't think the Job structure is the right place, like I said, it holds everything that a single job needs, from that POV a single job doesn't really need to do anything with that value, it's just it happens to be a very convenient place right now, I say let's try finding a different location for the attribute where it makes more sense.
Sure, we can probably find a different location, but the question then is how usable the APIs would be? My idea of a good API is like this:

  job = initJob(parameters); /* alternatively, initJob(&job, parameters); */
  beginJob(&job, type);
  /* do something useful */
  endJob(&job);

If we'd have maxQueuedJobs separate, then we would have to pass it in beginJob(), right?

Michal

On Wed, Aug 19, 2020 at 05:04:58PM +0200, Michal Privoznik wrote:
On 8/19/20 2:00 PM, Erik Skultety wrote:
On Wed, Aug 19, 2020 at 01:45:37PM +0200, Michal Privoznik wrote:
On 8/18/20 10:48 AM, Erik Skultety wrote:
On Mon, Aug 17, 2020 at 10:37:18AM +0530, Prathamesh Chavan wrote:
Reference to `maxQueuedJobs` required us to access config of the qemu-driver. And creating its copy in the `qemuDomainJob` helped us access the variable without referencing the driver's config.
Signed-off-by: Prathamesh Chavan <pc44800@gmail.com> --- src/qemu/qemu_domain.c | 5 ++++- src/qemu/qemu_domainjob.c | 13 +++++++------ src/qemu/qemu_domainjob.h | 4 +++- 3 files changed, 14 insertions(+), 8 deletions(-)
Looking at this again, I don't think we want to move any of the jobs_queued or maxQueuedJobs attributes to the job structure, I'm sorry I didn't spot this right away in the previous revision of the series, I take responsibility for it. The job structure represents a single job, so from the design POV, referencing both jobs_queued (which is really tied to a domain) and maxQueuedJobs (which is a global driver setting) from within the job structure simply doesn't feel right. Instead, I think qemuDomainObjBeginJobInternal will simply have to be split in parts specific to QEMU that have access to the driver and hypervisor-agnostic bits that can be moved to src/hypervisor.
Actually, I think maxQueuedJobs is job specific and not a global driver setting. I mean, I can imagine other drivers benefiting from it too. It's true that we currently don't allow specifying different values for different domains and only apply the setting from qemu.conf, but in the future we might make this available in domain XML somehow [1] and thus be per domain. Moreover, I can imagine users wanting to set the limit for volumes too [2].
I can relate to the reasoning, but I still don't think the Job structure is the right place, like I said, it holds everything that a single job needs, from that POV a single job doesn't really need to do anything with that value, it's just it happens to be a very convenient place right now, I say let's try finding a different location for the attribute where it makes more sense.
Sure, we can probably find different location, but question then is how usable the APIs would be? My idea of a good API is like this:
job = initJob(parameters); /* alternatively, initJob(&job, parameters); */
beginJob(&job, type);
/* do something useful */
endJob(&job);
If we'd have maxQueuedJobs separate, then we would have to pass it in beginJob(), right?
Naturally. In a standard OOP design, such information would come from the object calling the beginJob() method, in our case this would most likely translate to the domain object over which we're executing a job. We'd therefore need the domain object as well. I see what you mean, but a usable API is as important as logically structured data the API takes. Erik

Keeping the above arguments in mind, I think having callback functions for accessing them, instead of adding them to the job structure, should do the work for now.

Thanks,
Prathamesh Chavan

On Wed, Aug 19, 2020 at 9:05 PM Erik Skultety <eskultet@redhat.com> wrote:
[snip]

Functions `qemuDomainRemoveInactiveJob` and `qemuDomainRemoveInactiveJobLocked` had their declaration misplaced in `qemu_domainjob` and were moved to `qemu_domain`. Signed-off-by: Prathamesh Chavan <pc44800@gmail.com> --- src/qemu/qemu_domain.h | 6 ++++++ src/qemu/qemu_domainjob.h | 6 ------ 2 files changed, 6 insertions(+), 6 deletions(-) diff --git a/src/qemu/qemu_domain.h b/src/qemu/qemu_domain.h index 507f710200..3e5d115096 100644 --- a/src/qemu/qemu_domain.h +++ b/src/qemu/qemu_domain.h @@ -540,6 +540,12 @@ struct _qemuDomainJobPrivate { }; +void qemuDomainRemoveInactiveJob(virQEMUDriverPtr driver, + virDomainObjPtr vm); + +void qemuDomainRemoveInactiveJobLocked(virQEMUDriverPtr driver, + virDomainObjPtr vm); + void qemuDomainEventEmitJobCompleted(virQEMUDriverPtr driver, virDomainObjPtr vm); diff --git a/src/qemu/qemu_domainjob.h b/src/qemu/qemu_domainjob.h index 11e7f2f432..c5f68ca778 100644 --- a/src/qemu/qemu_domainjob.h +++ b/src/qemu/qemu_domainjob.h @@ -204,12 +204,6 @@ void qemuDomainObjDiscardAsyncJob(virQEMUDriverPtr driver, virDomainObjPtr obj); void qemuDomainObjReleaseAsyncJob(virDomainObjPtr obj); -void qemuDomainRemoveInactiveJob(virQEMUDriverPtr driver, - virDomainObjPtr vm); - -void qemuDomainRemoveInactiveJobLocked(virQEMUDriverPtr driver, - virDomainObjPtr vm); - bool qemuDomainTrackJob(qemuDomainJob job); void qemuDomainObjFreeJob(qemuDomainJobObjPtr job); -- 2.25.1

On Mon, Aug 17, 2020 at 10:37:19AM +0530, Prathamesh Chavan wrote:
Functions `qemuDomainRemoveInactiveJob` and `qemuDomainRemoveInactiveJobLocked` had their declarations misplaced in `qemu_domainjob` and were moved to `qemu_domain`.
Signed-off-by: Prathamesh Chavan <pc44800@gmail.com> --- Reviewed-by: Erik Skultety <eskultet@redhat.com>
I'll merge this one right away as it's an independent change.

`qemuMigrationJobPhase` was transformed into `virMigrationJobPhase` and a common util file `virmigration` was created to store its defination. Signed-off-by: Prathamesh Chavan <pc44800@gmail.com> --- src/hypervisor/meson.build | 1 + src/hypervisor/virmigration.c | 41 ++++++++++++++++++++ src/hypervisor/virmigration.h | 38 ++++++++++++++++++ src/libvirt_private.syms | 4 ++ src/qemu/MIGRATION.txt | 8 ++-- src/qemu/qemu_domainjob.c | 4 +- src/qemu/qemu_migration.c | 73 +++++++++++++++++------------------ src/qemu/qemu_migration.h | 17 +------- src/qemu/qemu_process.c | 48 +++++++++++------------ 9 files changed, 151 insertions(+), 83 deletions(-) create mode 100644 src/hypervisor/virmigration.c create mode 100644 src/hypervisor/virmigration.h diff --git a/src/hypervisor/meson.build b/src/hypervisor/meson.build index 85149c683e..c81bdfa2fc 100644 --- a/src/hypervisor/meson.build +++ b/src/hypervisor/meson.build @@ -3,6 +3,7 @@ hypervisor_sources = [ 'domain_driver.c', 'virclosecallbacks.c', 'virhostdev.c', + 'virmigration.c', ] hypervisor_lib = static_library( diff --git a/src/hypervisor/virmigration.c b/src/hypervisor/virmigration.c new file mode 100644 index 0000000000..2cad5a6b1b --- /dev/null +++ b/src/hypervisor/virmigration.c @@ -0,0 +1,41 @@ +/* + * virmigration.c: hypervisor migration handling + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2.1 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. + * + * You should have received a copy of the GNU Lesser General Public + * License along with this library. 
If not, see + * <http://www.gnu.org/licenses/>. + */ + +#include <config.h> + +#include "virmigration.h" +#include "domain_driver.h" +#include "virlog.h" + +#define VIR_FROM_THIS VIR_FROM_DOMAIN + +VIR_LOG_INIT("util.migration"); + +VIR_ENUM_IMPL(virMigrationJobPhase, + VIR_MIGRATION_PHASE_LAST, + "none", + "perform2", + "begin3", + "perform3", + "perform3_done", + "confirm3_cancelled", + "confirm3", + "prepare", + "finish2", + "finish3", +); diff --git a/src/hypervisor/virmigration.h b/src/hypervisor/virmigration.h new file mode 100644 index 0000000000..e03d71c1bb --- /dev/null +++ b/src/hypervisor/virmigration.h @@ -0,0 +1,38 @@ +/* + * virmigration.h: hypervisor migration handling + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2.1 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. + * + * You should have received a copy of the GNU Lesser General Public + * License along with this library. If not, see + * <http://www.gnu.org/licenses/>. 
+ */ + +#pragma once + +#include "virenum.h" + + +typedef enum { + VIR_MIGRATION_PHASE_NONE = 0, + VIR_MIGRATION_PHASE_PERFORM2, + VIR_MIGRATION_PHASE_BEGIN3, + VIR_MIGRATION_PHASE_PERFORM3, + VIR_MIGRATION_PHASE_PERFORM3_DONE, + VIR_MIGRATION_PHASE_CONFIRM3_CANCELLED, + VIR_MIGRATION_PHASE_CONFIRM3, + VIR_MIGRATION_PHASE_PREPARE, + VIR_MIGRATION_PHASE_FINISH2, + VIR_MIGRATION_PHASE_FINISH3, + + VIR_MIGRATION_PHASE_LAST +} virMigrationJobPhase; +VIR_ENUM_DECL(virMigrationJobPhase); diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms index 01c2e710cd..cf78c2f27a 100644 --- a/src/libvirt_private.syms +++ b/src/libvirt_private.syms @@ -1474,6 +1474,10 @@ virHostdevUpdateActiveSCSIDevices; virHostdevUpdateActiveUSBDevices; +# hypervisor/virmigration.h +virMigrationJobPhaseTypeFromString; +virMigrationJobPhaseTypeToString; + # libvirt_internal.h virConnectSupportsFeature; virDomainMigrateBegin3; diff --git a/src/qemu/MIGRATION.txt b/src/qemu/MIGRATION.txt index e861fd001e..dd044c6064 100644 --- a/src/qemu/MIGRATION.txt +++ b/src/qemu/MIGRATION.txt @@ -74,7 +74,7 @@ The sequence of calling qemuMigrationJob* helper methods is as follows: migration type and version) has to start migration job and keep it active: qemuMigrationJobStart(driver, vm, QEMU_JOB_MIGRATION_{IN,OUT}); - qemuMigrationJobSetPhase(driver, vm, QEMU_MIGRATION_PHASE_*); + qemuMigrationJobSetPhase(driver, vm, VIR_MIGRATION_PHASE_*); ...do work... qemuMigrationJobContinue(vm); @@ -82,7 +82,7 @@ The sequence of calling qemuMigrationJob* helper methods is as follows: if (!qemuMigrationJobIsActive(vm, QEMU_JOB_MIGRATION_{IN,OUT})) return; - qemuMigrationJobStartPhase(driver, vm, QEMU_MIGRATION_PHASE_*); + qemuMigrationJobStartPhase(driver, vm, VIR_MIGRATION_PHASE_*); ...do work... 
qemuMigrationJobContinue(vm); @@ -90,11 +90,11 @@ The sequence of calling qemuMigrationJob* helper methods is as follows: if (!qemuMigrationJobIsActive(vm, QEMU_JOB_MIGRATION_{IN,OUT})) return; - qemuMigrationJobStartPhase(driver, vm, QEMU_MIGRATION_PHASE_*); + qemuMigrationJobStartPhase(driver, vm, VIR_MIGRATION_PHASE_*); ...do work... qemuMigrationJobFinish(driver, vm); While migration job is running (i.e., after qemuMigrationJobStart* but before qemuMigrationJob{Continue,Finish}), migration phase can be advanced using - qemuMigrationJobSetPhase(driver, vm, QEMU_MIGRATION_PHASE_*); + qemuMigrationJobSetPhase(driver, vm, VIR_MIGRATION_PHASE_*); diff --git a/src/qemu/qemu_domainjob.c b/src/qemu/qemu_domainjob.c index eebc144747..18abc0d986 100644 --- a/src/qemu/qemu_domainjob.c +++ b/src/qemu/qemu_domainjob.c @@ -70,7 +70,7 @@ qemuDomainAsyncJobPhaseToString(qemuDomainAsyncJob job, switch (job) { case QEMU_ASYNC_JOB_MIGRATION_OUT: case QEMU_ASYNC_JOB_MIGRATION_IN: - return qemuMigrationJobPhaseTypeToString(phase); + return virMigrationJobPhaseTypeToString(phase); case QEMU_ASYNC_JOB_SAVE: case QEMU_ASYNC_JOB_DUMP: @@ -96,7 +96,7 @@ qemuDomainAsyncJobPhaseFromString(qemuDomainAsyncJob job, switch (job) { case QEMU_ASYNC_JOB_MIGRATION_OUT: case QEMU_ASYNC_JOB_MIGRATION_IN: - return qemuMigrationJobPhaseTypeFromString(phase); + return virMigrationJobPhaseTypeFromString(phase); case QEMU_ASYNC_JOB_SAVE: case QEMU_ASYNC_JOB_DUMP: diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c index c517774c9f..996c03e948 100644 --- a/src/qemu/qemu_migration.c +++ b/src/qemu/qemu_migration.c @@ -67,8 +67,8 @@ VIR_LOG_INIT("qemu.qemu_migration"); -VIR_ENUM_IMPL(qemuMigrationJobPhase, - QEMU_MIGRATION_PHASE_LAST, +VIR_ENUM_IMPL(virMigrationJobPhase, + VIR_MIGRATION_PHASE_LAST, "none", "perform2", "begin3", @@ -91,13 +91,13 @@ qemuMigrationJobStart(virQEMUDriverPtr driver, static void qemuMigrationJobSetPhase(virQEMUDriverPtr driver, virDomainObjPtr vm, - 
qemuMigrationJobPhase phase) + virMigrationJobPhase phase) ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2); static void qemuMigrationJobStartPhase(virQEMUDriverPtr driver, virDomainObjPtr vm, - qemuMigrationJobPhase phase) + virMigrationJobPhase phase) ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2); static void @@ -2027,13 +2027,13 @@ qemuMigrationSrcCleanup(virDomainObjPtr vm, " was closed; canceling the migration", vm->def->name); - switch ((qemuMigrationJobPhase) priv->job.phase) { - case QEMU_MIGRATION_PHASE_BEGIN3: + switch ((virMigrationJobPhase) priv->job.phase) { + case VIR_MIGRATION_PHASE_BEGIN3: /* just forget we were about to migrate */ qemuDomainObjDiscardAsyncJob(driver, vm); break; - case QEMU_MIGRATION_PHASE_PERFORM3_DONE: + case VIR_MIGRATION_PHASE_PERFORM3_DONE: VIR_WARN("Migration of domain %s finished but we don't know if the" " domain was successfully started on destination or not", vm->def->name); @@ -2043,19 +2043,19 @@ qemuMigrationSrcCleanup(virDomainObjPtr vm, qemuDomainObjDiscardAsyncJob(driver, vm); break; - case QEMU_MIGRATION_PHASE_PERFORM3: + case VIR_MIGRATION_PHASE_PERFORM3: /* cannot be seen without an active migration API; unreachable */ - case QEMU_MIGRATION_PHASE_CONFIRM3: - case QEMU_MIGRATION_PHASE_CONFIRM3_CANCELLED: + case VIR_MIGRATION_PHASE_CONFIRM3: + case VIR_MIGRATION_PHASE_CONFIRM3_CANCELLED: /* all done; unreachable */ - case QEMU_MIGRATION_PHASE_PREPARE: - case QEMU_MIGRATION_PHASE_FINISH2: - case QEMU_MIGRATION_PHASE_FINISH3: + case VIR_MIGRATION_PHASE_PREPARE: + case VIR_MIGRATION_PHASE_FINISH2: + case VIR_MIGRATION_PHASE_FINISH3: /* incoming migration; unreachable */ - case QEMU_MIGRATION_PHASE_PERFORM2: + case VIR_MIGRATION_PHASE_PERFORM2: /* single phase outgoing migration; unreachable */ - case QEMU_MIGRATION_PHASE_NONE: - case QEMU_MIGRATION_PHASE_LAST: + case VIR_MIGRATION_PHASE_NONE: + case VIR_MIGRATION_PHASE_LAST: /* unreachable */ ; } @@ -2091,7 +2091,7 @@ qemuMigrationSrcBeginPhase(virQEMUDriverPtr driver, * change 
protection. */ if (priv->job.asyncJob == QEMU_ASYNC_JOB_MIGRATION_OUT) - qemuMigrationJobSetPhase(driver, vm, QEMU_MIGRATION_PHASE_BEGIN3); + qemuMigrationJobSetPhase(driver, vm, VIR_MIGRATION_PHASE_BEGIN3); if (!qemuMigrationSrcIsAllowed(driver, vm, true, flags)) return NULL; @@ -2550,7 +2550,7 @@ qemuMigrationDstPrepareAny(virQEMUDriverPtr driver, if (qemuMigrationJobStart(driver, vm, QEMU_ASYNC_JOB_MIGRATION_IN, flags) < 0) goto cleanup; - qemuMigrationJobSetPhase(driver, vm, QEMU_MIGRATION_PHASE_PREPARE); + qemuMigrationJobSetPhase(driver, vm, VIR_MIGRATION_PHASE_PREPARE); /* Domain starts inactive, even if the domain XML had an id field. */ vm->def->id = -1; @@ -3011,10 +3011,9 @@ qemuMigrationSrcConfirmPhase(virQEMUDriverPtr driver, virCheckFlags(QEMU_MIGRATION_FLAGS, -1); - qemuMigrationJobSetPhase(driver, vm, - retcode == 0 - ? QEMU_MIGRATION_PHASE_CONFIRM3 - : QEMU_MIGRATION_PHASE_CONFIRM3_CANCELLED); + qemuMigrationJobSetPhase(driver, vm, retcode == 0 + ? VIR_MIGRATION_PHASE_CONFIRM3 + : VIR_MIGRATION_PHASE_CONFIRM3_CANCELLED); if (!(mig = qemuMigrationEatCookie(driver, vm->def, priv->origname, priv, cookiein, cookieinlen, @@ -3104,7 +3103,7 @@ qemuMigrationSrcConfirm(virQEMUDriverPtr driver, unsigned int flags, int cancelled) { - qemuMigrationJobPhase phase; + virMigrationJobPhase phase; virQEMUDriverConfigPtr cfg = NULL; int ret = -1; @@ -3114,9 +3113,9 @@ qemuMigrationSrcConfirm(virQEMUDriverPtr driver, goto cleanup; if (cancelled) - phase = QEMU_MIGRATION_PHASE_CONFIRM3_CANCELLED; + phase = VIR_MIGRATION_PHASE_CONFIRM3_CANCELLED; else - phase = QEMU_MIGRATION_PHASE_CONFIRM3; + phase = VIR_MIGRATION_PHASE_CONFIRM3; qemuMigrationJobStartPhase(driver, vm, phase); virCloseCallbacksUnset(driver->closeCallbacks, vm, @@ -4064,7 +4063,7 @@ qemuMigrationSrcPerformPeer2Peer2(virQEMUDriverPtr driver, * until the migration is complete. 
*/ VIR_DEBUG("Perform %p", sconn); - qemuMigrationJobSetPhase(driver, vm, QEMU_MIGRATION_PHASE_PERFORM2); + qemuMigrationJobSetPhase(driver, vm, VIR_MIGRATION_PHASE_PERFORM2); if (flags & VIR_MIGRATE_TUNNELLED) ret = qemuMigrationSrcPerformTunnel(driver, vm, st, NULL, NULL, 0, NULL, NULL, @@ -4302,7 +4301,7 @@ qemuMigrationSrcPerformPeer2Peer3(virQEMUDriverPtr driver, * confirm migration completion. */ VIR_DEBUG("Perform3 %p uri=%s", sconn, NULLSTR(uri)); - qemuMigrationJobSetPhase(driver, vm, QEMU_MIGRATION_PHASE_PERFORM3); + qemuMigrationJobSetPhase(driver, vm, VIR_MIGRATION_PHASE_PERFORM3); VIR_FREE(cookiein); cookiein = g_steal_pointer(&cookieout); cookieinlen = cookieoutlen; @@ -4328,7 +4327,7 @@ qemuMigrationSrcPerformPeer2Peer3(virQEMUDriverPtr driver, virErrorPreserveLast(&orig_err); } else { qemuMigrationJobSetPhase(driver, vm, - QEMU_MIGRATION_PHASE_PERFORM3_DONE); + VIR_MIGRATION_PHASE_PERFORM3_DONE); } /* If Perform returns < 0, then we need to cancel the VM @@ -4692,7 +4691,7 @@ qemuMigrationSrcPerformJob(virQEMUDriverPtr driver, migParams, flags, dname, resource, &v3proto); } else { - qemuMigrationJobSetPhase(driver, vm, QEMU_MIGRATION_PHASE_PERFORM2); + qemuMigrationJobSetPhase(driver, vm, VIR_MIGRATION_PHASE_PERFORM2); ret = qemuMigrationSrcPerformNative(driver, vm, persist_xml, uri, cookiein, cookieinlen, cookieout, cookieoutlen, flags, resource, NULL, NULL, 0, NULL, @@ -4777,7 +4776,7 @@ qemuMigrationSrcPerformPhase(virQEMUDriverPtr driver, return ret; } - qemuMigrationJobStartPhase(driver, vm, QEMU_MIGRATION_PHASE_PERFORM3); + qemuMigrationJobStartPhase(driver, vm, VIR_MIGRATION_PHASE_PERFORM3); virCloseCallbacksUnset(driver->closeCallbacks, vm, qemuMigrationSrcCleanup); @@ -4791,7 +4790,7 @@ qemuMigrationSrcPerformPhase(virQEMUDriverPtr driver, goto endjob; } - qemuMigrationJobSetPhase(driver, vm, QEMU_MIGRATION_PHASE_PERFORM3_DONE); + qemuMigrationJobSetPhase(driver, vm, VIR_MIGRATION_PHASE_PERFORM3_DONE); if 
(virCloseCallbacksSet(driver->closeCallbacks, vm, conn, qemuMigrationSrcCleanup) < 0) @@ -5024,8 +5023,8 @@ qemuMigrationDstFinish(virQEMUDriverPtr driver, ignore_value(virTimeMillisNow(&timeReceived)); qemuMigrationJobStartPhase(driver, vm, - v3proto ? QEMU_MIGRATION_PHASE_FINISH3 - : QEMU_MIGRATION_PHASE_FINISH2); + v3proto ? VIR_MIGRATION_PHASE_FINISH3 + : VIR_MIGRATION_PHASE_FINISH2); qemuDomainCleanupRemove(vm, qemuMigrationDstPrepareCleanup); g_clear_pointer(&jobPriv->completed, qemuDomainJobInfoFree); @@ -5504,14 +5503,14 @@ qemuMigrationJobStart(virQEMUDriverPtr driver, static void qemuMigrationJobSetPhase(virQEMUDriverPtr driver, virDomainObjPtr vm, - qemuMigrationJobPhase phase) + virMigrationJobPhase phase) { qemuDomainObjPrivatePtr priv = vm->privateData; if (phase < priv->job.phase) { VIR_ERROR(_("migration protocol going backwards %s => %s"), - qemuMigrationJobPhaseTypeToString(priv->job.phase), - qemuMigrationJobPhaseTypeToString(phase)); + virMigrationJobPhaseTypeToString(priv->job.phase), + virMigrationJobPhaseTypeToString(phase)); return; } @@ -5521,7 +5520,7 @@ qemuMigrationJobSetPhase(virQEMUDriverPtr driver, static void qemuMigrationJobStartPhase(virQEMUDriverPtr driver, virDomainObjPtr vm, - qemuMigrationJobPhase phase) + virMigrationJobPhase phase) { qemuMigrationJobSetPhase(driver, vm, phase); } diff --git a/src/qemu/qemu_migration.h b/src/qemu/qemu_migration.h index b6f88d3fd9..b05f5254b4 100644 --- a/src/qemu/qemu_migration.h +++ b/src/qemu/qemu_migration.h @@ -24,6 +24,7 @@ #include "qemu_conf.h" #include "qemu_domain.h" #include "qemu_migration_params.h" +#include "virmigration.h" #include "virenum.h" /* @@ -87,22 +88,6 @@ NULL -typedef enum { - QEMU_MIGRATION_PHASE_NONE = 0, - QEMU_MIGRATION_PHASE_PERFORM2, - QEMU_MIGRATION_PHASE_BEGIN3, - QEMU_MIGRATION_PHASE_PERFORM3, - QEMU_MIGRATION_PHASE_PERFORM3_DONE, - QEMU_MIGRATION_PHASE_CONFIRM3_CANCELLED, - QEMU_MIGRATION_PHASE_CONFIRM3, - QEMU_MIGRATION_PHASE_PREPARE, - 
QEMU_MIGRATION_PHASE_FINISH2, - QEMU_MIGRATION_PHASE_FINISH3, - - QEMU_MIGRATION_PHASE_LAST -} qemuMigrationJobPhase; -VIR_ENUM_DECL(qemuMigrationJobPhase); - char * qemuMigrationSrcBegin(virConnectPtr conn, virDomainObjPtr vm, diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c index e114e4c4ce..0f2cd47044 100644 --- a/src/qemu/qemu_process.c +++ b/src/qemu/qemu_process.c @@ -3437,24 +3437,24 @@ qemuProcessRecoverMigrationIn(virQEMUDriverPtr driver, (state == VIR_DOMAIN_RUNNING && reason == VIR_DOMAIN_RUNNING_POSTCOPY); - switch ((qemuMigrationJobPhase) job->phase) { - case QEMU_MIGRATION_PHASE_NONE: - case QEMU_MIGRATION_PHASE_PERFORM2: - case QEMU_MIGRATION_PHASE_BEGIN3: - case QEMU_MIGRATION_PHASE_PERFORM3: - case QEMU_MIGRATION_PHASE_PERFORM3_DONE: - case QEMU_MIGRATION_PHASE_CONFIRM3_CANCELLED: - case QEMU_MIGRATION_PHASE_CONFIRM3: - case QEMU_MIGRATION_PHASE_LAST: + switch ((virMigrationJobPhase) job->phase) { + case VIR_MIGRATION_PHASE_NONE: + case VIR_MIGRATION_PHASE_PERFORM2: + case VIR_MIGRATION_PHASE_BEGIN3: + case VIR_MIGRATION_PHASE_PERFORM3: + case VIR_MIGRATION_PHASE_PERFORM3_DONE: + case VIR_MIGRATION_PHASE_CONFIRM3_CANCELLED: + case VIR_MIGRATION_PHASE_CONFIRM3: + case VIR_MIGRATION_PHASE_LAST: /* N/A for incoming migration */ break; - case QEMU_MIGRATION_PHASE_PREPARE: + case VIR_MIGRATION_PHASE_PREPARE: VIR_DEBUG("Killing unfinished incoming migration for domain %s", vm->def->name); return -1; - case QEMU_MIGRATION_PHASE_FINISH2: + case VIR_MIGRATION_PHASE_FINISH2: /* source domain is already killed so let's just resume the domain * and hope we are all set */ VIR_DEBUG("Incoming migration finished, resuming domain %s", @@ -3466,7 +3466,7 @@ qemuProcessRecoverMigrationIn(virQEMUDriverPtr driver, } break; - case QEMU_MIGRATION_PHASE_FINISH3: + case VIR_MIGRATION_PHASE_FINISH3: /* migration finished, we started resuming the domain but didn't * confirm success or failure yet; killing it seems safest unless * we already started guest CPUs 
or we were in post-copy mode */ @@ -3498,22 +3498,22 @@ qemuProcessRecoverMigrationOut(virQEMUDriverPtr driver, reason == VIR_DOMAIN_PAUSED_POSTCOPY_FAILED); bool resume = false; - switch ((qemuMigrationJobPhase) job->phase) { - case QEMU_MIGRATION_PHASE_NONE: - case QEMU_MIGRATION_PHASE_PREPARE: - case QEMU_MIGRATION_PHASE_FINISH2: - case QEMU_MIGRATION_PHASE_FINISH3: - case QEMU_MIGRATION_PHASE_LAST: + switch ((virMigrationJobPhase) job->phase) { + case VIR_MIGRATION_PHASE_NONE: + case VIR_MIGRATION_PHASE_PREPARE: + case VIR_MIGRATION_PHASE_FINISH2: + case VIR_MIGRATION_PHASE_FINISH3: + case VIR_MIGRATION_PHASE_LAST: /* N/A for outgoing migration */ break; - case QEMU_MIGRATION_PHASE_BEGIN3: + case VIR_MIGRATION_PHASE_BEGIN3: /* nothing happened so far, just forget we were about to migrate the * domain */ break; - case QEMU_MIGRATION_PHASE_PERFORM2: - case QEMU_MIGRATION_PHASE_PERFORM3: + case VIR_MIGRATION_PHASE_PERFORM2: + case VIR_MIGRATION_PHASE_PERFORM3: /* migration is still in progress, let's cancel it and resume the * domain; however we can only do that before migration enters * post-copy mode @@ -3531,7 +3531,7 @@ qemuProcessRecoverMigrationOut(virQEMUDriverPtr driver, } break; - case QEMU_MIGRATION_PHASE_PERFORM3_DONE: + case VIR_MIGRATION_PHASE_PERFORM3_DONE: /* migration finished but we didn't have a chance to get the result * of Finish3 step; third party needs to check what to do next; in * post-copy mode we can use PAUSED_POSTCOPY_FAILED state for this @@ -3540,7 +3540,7 @@ qemuProcessRecoverMigrationOut(virQEMUDriverPtr driver, qemuMigrationAnyPostcopyFailed(driver, vm); break; - case QEMU_MIGRATION_PHASE_CONFIRM3_CANCELLED: + case VIR_MIGRATION_PHASE_CONFIRM3_CANCELLED: /* Finish3 failed, we need to resume the domain, but once we enter * post-copy mode there's no way back, so let's just mark the domain * as broken in that case @@ -3554,7 +3554,7 @@ qemuProcessRecoverMigrationOut(virQEMUDriverPtr driver, } break; - case 
QEMU_MIGRATION_PHASE_CONFIRM3: + case VIR_MIGRATION_PHASE_CONFIRM3: /* migration completed, we need to kill the domain here */ *stopFlags |= VIR_QEMU_PROCESS_STOP_MIGRATED; return -1; -- 2.25.1

On Mon, Aug 17, 2020 at 10:37:20AM +0530, Prathamesh Chavan wrote:
`qemuMigrationJobPhase` was transformed into `virMigrationJobPhase` and a common util file `virmigration` was created to store its definition.
Signed-off-by: Prathamesh Chavan <pc44800@gmail.com> ---
Like I mentioned in review of v1, moving *just* enums from one module into another is rarely a desirable change without any additional context. So this patch should be combined with changes that actually require such a code movement, either in this series or in a separate one (doesn't matter), but in its current shape this patch is a little out of place in this series. Erik

Dependency on qemu-specific `diskPrivatePtr` was removed by moving the funcitons `qemuDomainObjPrivateXMLParseJobNBD` and `qemuDomainObjPrivateXMLFormatNBDMigration` to `qemu_domain`, and moving their calls inside the `parseJob` and `formatJob` callback functions. Signed-off-by: Prathamesh Chavan <pc44800@gmail.com> --- src/qemu/qemu_domain.c | 150 ++++++++++++++++++++++++++++++++++++- src/qemu/qemu_domainjob.c | 152 +------------------------------------- src/qemu/qemu_domainjob.h | 6 +- 3 files changed, 155 insertions(+), 153 deletions(-) diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c index 677fa7ea91..615a8a293c 100644 --- a/src/qemu/qemu_domain.c +++ b/src/qemu/qemu_domain.c @@ -562,12 +562,70 @@ qemuJobResetPrivate(void *opaque) } +static int +qemuDomainObjPrivateXMLFormatNBDMigrationSource(virBufferPtr buf, + virStorageSourcePtr src, + virDomainXMLOptionPtr xmlopt) +{ + g_auto(virBuffer) attrBuf = VIR_BUFFER_INITIALIZER; + g_auto(virBuffer) childBuf = VIR_BUFFER_INIT_CHILD(buf); + + virBufferAsprintf(&attrBuf, " type='%s' format='%s'", + virStorageTypeToString(src->type), + virStorageFileFormatTypeToString(src->format)); + + if (virDomainDiskSourceFormat(&childBuf, src, "source", 0, false, + VIR_DOMAIN_DEF_FORMAT_STATUS, + false, false, xmlopt) < 0) + return -1; + + virXMLFormatElement(buf, "migrationSource", &attrBuf, &childBuf); + + return 0; +} + + +static int +qemuDomainObjPrivateXMLFormatNBDMigration(virBufferPtr buf, + virDomainObjPtr vm) +{ + qemuDomainObjPrivatePtr priv = vm->privateData; + size_t i; + virDomainDiskDefPtr disk; + qemuDomainDiskPrivatePtr diskPriv; + + for (i = 0; i < vm->def->ndisks; i++) { + g_auto(virBuffer) attrBuf = VIR_BUFFER_INITIALIZER; + g_auto(virBuffer) childBuf = VIR_BUFFER_INIT_CHILD(buf); + disk = vm->def->disks[i]; + diskPriv = QEMU_DOMAIN_DISK_PRIVATE(disk); + + virBufferAsprintf(&attrBuf, " dev='%s' migrating='%s'", + disk->dst, diskPriv->migrating ? 
"yes" : "no"); + + if (diskPriv->migrSource && + qemuDomainObjPrivateXMLFormatNBDMigrationSource(&childBuf, + diskPriv->migrSource, + priv->driver->xmlopt) < 0) + return -1; + + virXMLFormatElement(buf, "disk", &attrBuf, &childBuf); + } + + return 0; +} + static int qemuDomainFormatJobPrivate(virBufferPtr buf, - qemuDomainJobObjPtr job) + qemuDomainJobObjPtr job, + virDomainObjPtr vm) { qemuDomainJobPrivatePtr priv = job->privateData; + if (job->asyncJob == QEMU_ASYNC_JOB_MIGRATION_OUT && + qemuDomainObjPrivateXMLFormatNBDMigration(buf, vm) < 0) + return -1; + if (priv->migParams) qemuMigrationParamsFormat(buf, priv->migParams); @@ -617,12 +675,100 @@ qemuDomainEventEmitJobCompleted(virQEMUDriverPtr driver, virObjectEventStateQueue(driver->domainEventState, event); } +static int +qemuDomainObjPrivateXMLParseJobNBDSource(xmlNodePtr node, + xmlXPathContextPtr ctxt, + virDomainDiskDefPtr disk, + virDomainXMLOptionPtr xmlopt) +{ + VIR_XPATH_NODE_AUTORESTORE(ctxt); + qemuDomainDiskPrivatePtr diskPriv = QEMU_DOMAIN_DISK_PRIVATE(disk); + g_autofree char *format = NULL; + g_autofree char *type = NULL; + g_autoptr(virStorageSource) migrSource = NULL; + xmlNodePtr sourceNode; + + ctxt->node = node; + + if (!(ctxt->node = virXPathNode("./migrationSource", ctxt))) + return 0; + + if (!(type = virXMLPropString(ctxt->node, "type"))) { + virReportError(VIR_ERR_XML_ERROR, "%s", + _("missing storage source type")); + return -1; + } + + if (!(format = virXMLPropString(ctxt->node, "format"))) { + virReportError(VIR_ERR_XML_ERROR, "%s", + _("missing storage source format")); + return -1; + } + + if (!(migrSource = virDomainStorageSourceParseBase(type, format, NULL))) + return -1; + + /* newer libvirt uses the <source> subelement instead of formatting the + * source directly into <migrationSource> */ + if ((sourceNode = virXPathNode("./source", ctxt))) + ctxt->node = sourceNode; + + if (virDomainStorageSourceParse(ctxt->node, ctxt, migrSource, + VIR_DOMAIN_DEF_PARSE_STATUS, xmlopt) < 
0) + return -1; + + diskPriv->migrSource = g_steal_pointer(&migrSource); + return 0; +} + + +static int +qemuDomainObjPrivateXMLParseJobNBD(virDomainObjPtr vm, + xmlXPathContextPtr ctxt) +{ + qemuDomainObjPrivatePtr priv = vm->privateData; + g_autofree xmlNodePtr *nodes = NULL; + size_t i; + int n; + + if ((n = virXPathNodeSet("./disk[@migrating='yes']", ctxt, &nodes)) < 0) + return -1; + + if (n > 0) { + if (priv->job.asyncJob != QEMU_ASYNC_JOB_MIGRATION_OUT) { + VIR_WARN("Found disks marked for migration but we were not " + "migrating"); + n = 0; + } + for (i = 0; i < n; i++) { + virDomainDiskDefPtr disk; + g_autofree char *dst = NULL; + + if ((dst = virXMLPropString(nodes[i], "dev")) && + (disk = virDomainDiskByTarget(vm->def, dst))) { + QEMU_DOMAIN_DISK_PRIVATE(disk)->migrating = true; + + if (qemuDomainObjPrivateXMLParseJobNBDSource(nodes[i], ctxt, + disk, + priv->driver->xmlopt) < 0) + return -1; + } + } + } + + return 0; +} + static int qemuDomainParseJobPrivate(xmlXPathContextPtr ctxt, - qemuDomainJobObjPtr job) + qemuDomainJobObjPtr job, + virDomainObjPtr vm) { qemuDomainJobPrivatePtr priv = job->privateData; + if (qemuDomainObjPrivateXMLParseJobNBD(vm, ctxt) < 0) + return -1; + if (qemuMigrationParamsParse(ctxt, &priv->migParams) < 0) return -1; diff --git a/src/qemu/qemu_domainjob.c b/src/qemu/qemu_domainjob.c index 18abc0d986..ae4ac9e0c1 100644 --- a/src/qemu/qemu_domainjob.c +++ b/src/qemu/qemu_domainjob.c @@ -19,7 +19,7 @@ #include <config.h> #include "qemu_domain.h" -#include "qemu_migration.h" +#include "virmigration.h" #include "qemu_domainjob.h" #include "viralloc.h" #include "virlog.h" @@ -717,66 +717,11 @@ qemuDomainObjAbortAsyncJob(virDomainObjPtr obj) virDomainObjBroadcast(obj); } - -static int -qemuDomainObjPrivateXMLFormatNBDMigrationSource(virBufferPtr buf, - virStorageSourcePtr src, - virDomainXMLOptionPtr xmlopt) -{ - g_auto(virBuffer) attrBuf = VIR_BUFFER_INITIALIZER; - g_auto(virBuffer) childBuf = VIR_BUFFER_INIT_CHILD(buf); - - 
virBufferAsprintf(&attrBuf, " type='%s' format='%s'", - virStorageTypeToString(src->type), - virStorageFileFormatTypeToString(src->format)); - - if (virDomainDiskSourceFormat(&childBuf, src, "source", 0, false, - VIR_DOMAIN_DEF_FORMAT_STATUS, - false, false, xmlopt) < 0) - return -1; - - virXMLFormatElement(buf, "migrationSource", &attrBuf, &childBuf); - - return 0; -} - - -static int -qemuDomainObjPrivateXMLFormatNBDMigration(virBufferPtr buf, - virDomainObjPtr vm) -{ - qemuDomainObjPrivatePtr priv = vm->privateData; - size_t i; - virDomainDiskDefPtr disk; - qemuDomainDiskPrivatePtr diskPriv; - - for (i = 0; i < vm->def->ndisks; i++) { - g_auto(virBuffer) attrBuf = VIR_BUFFER_INITIALIZER; - g_auto(virBuffer) childBuf = VIR_BUFFER_INIT_CHILD(buf); - disk = vm->def->disks[i]; - diskPriv = QEMU_DOMAIN_DISK_PRIVATE(disk); - - virBufferAsprintf(&attrBuf, " dev='%s' migrating='%s'", - disk->dst, diskPriv->migrating ? "yes" : "no"); - - if (diskPriv->migrSource && - qemuDomainObjPrivateXMLFormatNBDMigrationSource(&childBuf, - diskPriv->migrSource, - priv->driver->xmlopt) < 0) - return -1; - - virXMLFormatElement(buf, "disk", &attrBuf, &childBuf); - } - - return 0; -} - int qemuDomainObjPrivateXMLFormatJob(virBufferPtr buf, virDomainObjPtr vm) { qemuDomainObjPrivatePtr priv = vm->privateData; - qemuDomainJobObjPtr jobObj = &priv->job; g_auto(virBuffer) attrBuf = VIR_BUFFER_INITIALIZER; g_auto(virBuffer) childBuf = VIR_BUFFER_INIT_CHILD(buf); qemuDomainJob job = priv->job.active; @@ -801,11 +746,7 @@ qemuDomainObjPrivateXMLFormatJob(virBufferPtr buf, if (priv->job.asyncJob != QEMU_ASYNC_JOB_NONE) virBufferAsprintf(&attrBuf, " flags='0x%lx'", priv->job.apiFlags); - if (priv->job.asyncJob == QEMU_ASYNC_JOB_MIGRATION_OUT && - qemuDomainObjPrivateXMLFormatNBDMigration(&childBuf, vm) < 0) - return -1; - - if (jobObj->cb->formatJob(&childBuf, jobObj) < 0) + if (priv->job.cb->formatJob(&childBuf, &priv->job, vm) < 0) return -1; virXMLFormatElement(buf, "job", &attrBuf, 
&childBuf); @@ -814,90 +755,6 @@ qemuDomainObjPrivateXMLFormatJob(virBufferPtr buf, } -static int -qemuDomainObjPrivateXMLParseJobNBDSource(xmlNodePtr node, - xmlXPathContextPtr ctxt, - virDomainDiskDefPtr disk, - virDomainXMLOptionPtr xmlopt) -{ - VIR_XPATH_NODE_AUTORESTORE(ctxt); - qemuDomainDiskPrivatePtr diskPriv = QEMU_DOMAIN_DISK_PRIVATE(disk); - g_autofree char *format = NULL; - g_autofree char *type = NULL; - g_autoptr(virStorageSource) migrSource = NULL; - xmlNodePtr sourceNode; - - ctxt->node = node; - - if (!(ctxt->node = virXPathNode("./migrationSource", ctxt))) - return 0; - - if (!(type = virXMLPropString(ctxt->node, "type"))) { - virReportError(VIR_ERR_XML_ERROR, "%s", - _("missing storage source type")); - return -1; - } - - if (!(format = virXMLPropString(ctxt->node, "format"))) { - virReportError(VIR_ERR_XML_ERROR, "%s", - _("missing storage source format")); - return -1; - } - - if (!(migrSource = virDomainStorageSourceParseBase(type, format, NULL))) - return -1; - - /* newer libvirt uses the <source> subelement instead of formatting the - * source directly into <migrationSource> */ - if ((sourceNode = virXPathNode("./source", ctxt))) - ctxt->node = sourceNode; - - if (virDomainStorageSourceParse(ctxt->node, ctxt, migrSource, - VIR_DOMAIN_DEF_PARSE_STATUS, xmlopt) < 0) - return -1; - - diskPriv->migrSource = g_steal_pointer(&migrSource); - return 0; -} - - -static int -qemuDomainObjPrivateXMLParseJobNBD(virDomainObjPtr vm, - xmlXPathContextPtr ctxt) -{ - qemuDomainObjPrivatePtr priv = vm->privateData; - g_autofree xmlNodePtr *nodes = NULL; - size_t i; - int n; - - if ((n = virXPathNodeSet("./disk[@migrating='yes']", ctxt, &nodes)) < 0) - return -1; - - if (n > 0) { - if (priv->job.asyncJob != QEMU_ASYNC_JOB_MIGRATION_OUT) { - VIR_WARN("Found disks marked for migration but we were not " - "migrating"); - n = 0; - } - for (i = 0; i < n; i++) { - virDomainDiskDefPtr disk; - g_autofree char *dst = NULL; - - if ((dst = virXMLPropString(nodes[i], 
"dev")) && - (disk = virDomainDiskByTarget(vm->def, dst))) { - QEMU_DOMAIN_DISK_PRIVATE(disk)->migrating = true; - - if (qemuDomainObjPrivateXMLParseJobNBDSource(nodes[i], ctxt, - disk, - priv->driver->xmlopt) < 0) - return -1; - } - } - } - - return 0; -} - int qemuDomainObjPrivateXMLParseJob(virDomainObjPtr vm, xmlXPathContextPtr ctxt) @@ -949,10 +806,7 @@ qemuDomainObjPrivateXMLParseJob(virDomainObjPtr vm, return -1; } - if (qemuDomainObjPrivateXMLParseJobNBD(vm, ctxt) < 0) - return -1; - - if (job->cb->parseJob(ctxt, job) < 0) + if (priv->job.cb->parseJob(ctxt, job, vm) < 0) return -1; return 0; diff --git a/src/qemu/qemu_domainjob.h b/src/qemu/qemu_domainjob.h index c5f68ca778..8f3dfbcf1e 100644 --- a/src/qemu/qemu_domainjob.h +++ b/src/qemu/qemu_domainjob.h @@ -105,9 +105,11 @@ typedef void *(*qemuDomainObjPrivateJobAlloc)(void); typedef void (*qemuDomainObjPrivateJobFree)(void *); typedef void (*qemuDomainObjPrivateJobReset)(void *); typedef int (*qemuDomainObjPrivateJobFormat)(virBufferPtr, - qemuDomainJobObjPtr); + qemuDomainJobObjPtr, + virDomainObjPtr); typedef int (*qemuDomainObjPrivateJobParse)(xmlXPathContextPtr, - qemuDomainJobObjPtr); + qemuDomainJobObjPtr, + virDomainObjPtr); typedef void (*qemuDomainObjJobInfoSetOperation)(qemuDomainJobObjPtr, virDomainJobOperation); typedef void (*qemuDomainObjCurrentJobInfoInit)(qemuDomainJobObjPtr, -- 2.25.1

On Mon, Aug 17, 2020 at 10:37:21AM +0530, Prathamesh Chavan wrote:
Dependency on qemu-specific `diskPrivatePtr` was removed by moving the functions `qemuDomainObjPrivateXMLParseJobNBD` and `qemuDomainObjPrivateXMLFormatNBDMigration` to `qemu_domain`, and moving their calls inside the `parseJob` and `formatJob` callback functions.
Mentioning diskPrivatePtr here is a little bit confusing to me; I'd simply say that both parsing and formatting of NBD is qemu-specific and collides with our effort to convert the qemu_domainjob module to a hypervisor-agnostic one.
Signed-off-by: Prathamesh Chavan <pc44800@gmail.com> ---
Overall, I agree with this change, so Reviewed-by: Erik Skultety <eskultet@redhat.com> ...
diff --git a/src/qemu/qemu_domainjob.c b/src/qemu/qemu_domainjob.c index 18abc0d986..ae4ac9e0c1 100644 --- a/src/qemu/qemu_domainjob.c +++ b/src/qemu/qemu_domainjob.c @@ -19,7 +19,7 @@ #include <config.h>
#include "qemu_domain.h" -#include "qemu_migration.h" +#include "virmigration.h"
If we skip ^this hunk, the patch can be merged independently. It's up to you, Prathamesh, whether you want to add more changes or whether you want to come up with a follow-up patch. Erik

On Tue, Aug 18, 2020 at 3:13 PM Erik Skultety <eskultet@redhat.com> wrote:
On Mon, Aug 17, 2020 at 10:37:21AM +0530, Prathamesh Chavan wrote:
Dependency on qemu-specific `diskPrivatePtr` was removed by moving the functions `qemuDomainObjPrivateXMLParseJobNBD` and `qemuDomainObjPrivateXMLFormatNBDMigration` to `qemu_domain`, and moving their calls inside the `parseJob` and `formatJob` callback functions.
Mentioning diskPrivatePtr here is a little bit confusing to me; I'd simply say that both parsing and formatting of NBD is qemu-specific and collides with our effort to convert the qemu_domainjob module to a hypervisor-agnostic one.
Signed-off-by: Prathamesh Chavan <pc44800@gmail.com> ---
Overall, I agree with this change, so Reviewed-by: Erik Skultety <eskultet@redhat.com>
...
diff --git a/src/qemu/qemu_domainjob.c b/src/qemu/qemu_domainjob.c index 18abc0d986..ae4ac9e0c1 100644 --- a/src/qemu/qemu_domainjob.c +++ b/src/qemu/qemu_domainjob.c @@ -19,7 +19,7 @@ #include <config.h>
#include "qemu_domain.h" -#include "qemu_migration.h" +#include "virmigration.h"
If we skip ^this hunk, the patch can be merged independently. It's up to you, Prathamesh, whether you want to add more changes or whether you want to come up with a follow-up patch.
I'll post a follow-up patch. Thanks, Prathamesh Chavan
participants (3)
-
Erik Skultety
-
Michal Privoznik
-
Prathamesh Chavan