> typedef struct _qemuDomainObjPrivate qemuDomainObjPrivate;
> @@ -107,6 +112,7 @@ struct _qemuDomainObjPrivate {
>      virCond jobCond; /* Use in conjunction with main virDomainObjPtr lock */
>      enum qemuDomainJob jobActive; /* Currently running job */
>      unsigned int jobSignals; /* Signals for running job */
> +    struct qemuDomainJobSignalsData jobSignalsData; /* Signal specific data */
>      virDomainJobInfo jobInfo;
>      unsigned long long jobStart;
I think I would have had an easier time understanding the patch if it
came with some explanation that we use jobSignalsData to indicate the
user's desired change, and that the change is applied once the background
migration job wakes up, looks at the data, and then passes it to qemu.
Ah yes, sorry about that. There was a description of signals in the patch
which introduced them and I forgot to write more about it when I extended them
with additional data.
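
To make this a bit more concrete, the background migration job consumes
the signal roughly like this (just a sketch, not the exact hunk from the
patch; qemuMonitorSetMigrationDowntime stands in for whatever monitor
helper ends up issuing the command):

    /* inside the loop in qemuDomainWaitForMigrationComplete();
     * the vm object is locked at this point */
    if (priv->jobSignals & QEMU_JOB_SIGNAL_MIGRATE_DOWNTIME) {
        unsigned long long downtime = priv->jobSignalsData.migrateDowntime;

        /* clear the signal and its data before acting on it */
        priv->jobSignals ^= QEMU_JOB_SIGNAL_MIGRATE_DOWNTIME;
        priv->jobSignalsData.migrateDowntime = 0;

        /* the vm lock is only dropped while we talk to the monitor */
        qemuDomainObjEnterMonitorWithDriver(driver, vm);
        qemuMonitorSetMigrationDowntime(priv->mon, downtime);
        qemuDomainObjExitMonitorWithDriver(driver, vm);
    }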
> +static int
> +qemuDomainMigrateSetMaxDowntime(virDomainPtr dom,
> +                                unsigned long long downtime,
> +                                unsigned int flags ATTRIBUTE_UNUSED)
> +{
> +    struct qemud_driver *driver = dom->conn->privateData;
> +    virDomainObjPtr vm;
> +    qemuDomainObjPrivatePtr priv;
> +    int ret = -1;
> +
> +    qemuDriverLock(driver);
[...]
> +    priv = vm->privateData;
> +
> +    if (priv->jobActive != QEMU_JOB_MIGRATION) {
> +        qemuReportError(VIR_ERR_OPERATION_INVALID,
> +                        "%s", _("domain is not being migrated"));
> +        goto cleanup;
> +    }
> +
> +    VIR_DEBUG("Requesting migration downtime change to %lluns", downtime);
> +    priv->jobSignals |= QEMU_JOB_SIGNAL_MIGRATE_DOWNTIME;
> +    priv->jobSignalsData.migrateDowntime = downtime;
There is something I don't fully see in this, which is how the
producer/consumer for priv->jobSignals are synchronized.
I would actually swap those 2 statements if there is no synchronization,
to make sure we don't pick an uninitialized (or zeroed) value in
qemuDomainWaitForMigrationComplete(). I'm not clear if this can be
processed by 2 different threads.
Well, swapping wouldn't help as CPUs don't guarantee assignment order
without synchronizing. Anyway, the two threads are synchronized on the vm
object. In the part you removed ([...]) qemuDomainMigrateSetMaxDowntime
calls virDomainFindByUUID, which automatically locks the vm object it
returns. On the other hand, qemuDomainWaitForMigrationComplete keeps the
vm object locked all the time except while it communicates with the qemu
monitor, i.e. between qemuDomainObjEnterMonitorWithDriver and
qemuDomainObjExitMonitorWithDriver.
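
So, roughly, the producer side does the following (again just a sketch to
show the lock scope, with the lookup being the part elided above):

    /* qemuDomainMigrateSetMaxDowntime() */
    qemuDriverLock(driver);
    vm = virDomainFindByUUID(&driver->domains, dom->uuid); /* returned locked */
    ...
    /* both the flag and its data are written while holding the vm lock,
     * so the migration job can never see one without the other */
    priv->jobSignals |= QEMU_JOB_SIGNAL_MIGRATE_DOWNTIME;
    priv->jobSignalsData.migrateDowntime = downtime;
    ...
    virDomainObjUnlock(vm);
    qemuDriverUnlock(driver);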
Jirka