On Wed, Aug 11, 2010 at 03:07:29PM -0600, Eric Blake wrote:
> On 08/11/2010 02:53 PM, Chris Lalancette wrote:
> > Unfortunately, this is not how migration works in qemu/kvm. Using your
> > nomenclature above, it's more like the following:
> >
> > A guest is running on S. A migration is then initiated, at which point D
> > fires up a qemu process with a -incoming argument. This is sort of
> > a container process that will receive all of the migration data. Crucially
> > for sync-manager, though, qemu completely starts up and "attaches" to all of
> > the resources (including disks) *while* qemu at S is still running. Then it
> > enters a sort of paused state (where the guest cannot run), and receives
> > all of the migration data. Once all of the migration data has been received,
> > the guest on S is destroyed, and the guest on D is unpaused. That's why Dan
> > mentioned that we need two hosts to access the disk at once.
>
> On the other hand, does D do any writes to the disk prior to the point
> at which it is unpaused? Would it work if D were granted a read-only
> lease on the disk for the duration of the migration data transfer, and
> then converted over to read-write at the point when S is destroyed?
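To restate the ordering Chris describes, here's a rough sketch in Python
(the Host class and every method name below are invented purely for
illustration; none of this is a real qemu, libvirt, or sync_manager
interface):

  # Illustrative only: each "host" just reports what it would be doing.
  class Host:
      def __init__(self, name):
          self.name = name

      def start_qemu(self, guest, incoming=False):
          state = "paused with -incoming, disks attached" if incoming else "running"
          print(f"{self.name}: qemu for {guest} started ({state})")

      def send_migration_data(self, dst, guest):
          print(f"{self.name}: streaming {guest} state to {dst.name}; guest still runs here")

      def destroy_qemu(self, guest):
          print(f"{self.name}: qemu for {guest} destroyed")

      def resume_qemu(self, guest):
          print(f"{self.name}: {guest} unpaused, now running here")

  S, D = Host("S"), Host("D")
  D.start_qemu("guest", incoming=True)   # D attaches to the disks while...
  S.send_migration_data(D, "guest")      # ...the guest is still running (and writing) on S
  S.destroy_qemu("guest")                # only now does S let go
  D.resume_qemu("guest")                 # and the guest is unpaused on D

Between the first and last step, both S and D have the guest's disks open
at the same time.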
Even if sync_manager had read/write lease semantics, this use case wouldn't
map onto it, because S is in write mode the entire time that D is in
read mode, and read locks are not compatible with write locks.
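To put that concretely (again, the names below are invented for
illustration and are not sync_manager's real locking interface):

  # Illustrative only: a toy reader/writer lease.
  class RWLease:
      def __init__(self):
          self.writer = None
          self.readers = set()

      def acquire_write(self, host):
          # A write lock requires that no readers or writer hold the lease.
          if self.writer or self.readers:
              return False
          self.writer = host
          return True

      def acquire_read(self, host):
          # A read lock cannot coexist with a held write lock.
          if self.writer:
              return False
          self.readers.add(host)
          return True

  lease = RWLease()
  print(lease.acquire_write("S"))   # True:  S holds write access while the guest runs
  print(lease.acquire_read("D"))    # False: refused, yet this is exactly the window
                                    #        in which D already has the disks attached

D would need read access for precisely the window in which S still holds
write access, so reader/writer semantics don't help here.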
sync_manager shouldn't be viewed as something that's trying to add any new
protection to the migration case. It's just trying to accurately represent,
on disk, where qemu is unpaused.
Dave