On Fri, Sep 10, 2010 at 02:39:41PM -0600, Eric Blake wrote:
On 09/10/2010 10:01 AM, Daniel P. Berrange wrote:
>
>At libvirtd startup:
>
>    driver = virLockManagerPluginLoad("sync-manager");
>
>
>At libvirtd shutdown:
>
>    virLockManagerPluginUnload(driver);
Can you load more than one lock manager at a time, or just one active
lock manager? How does a user configure which lock manager(s) to load
when libvirtd is started?
The intention is that any specific libvirt driver will only use
one lock manager, but LXC vs QEMU vs UML could each use a different
driver if required.
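So at its own startup/shutdown each stateful driver would do something roughly like this (just a sketch against the proposed API; the per-driver config key and the driver pointer type name are illustrative only, nothing is finalised):

    /* e.g. in the QEMU driver startup, with "sync-manager" coming from
     * a hypothetical lock_manager setting in qemu.conf */
    virLockManagerDriverPtr lockDriver;

    if (!(lockDriver = virLockManagerPluginLoad("sync-manager")))
        return -1;   /* refuse to start the driver without its lock manager */

    ...

    /* and in the corresponding driver shutdown path */
    virLockManagerPluginUnload(lockDriver);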
>At guest startup:
>
>    manager = virLockManagerNew(driver,
>                                VIR_LOCK_MANAGER_START_DOMAIN,
>                                0);
>    virLockManagerSetParameter(manager, "id", id);
>    virLockManagerSetParameter(manager, "uuid", uuid);
>    virLockManagerSetParameter(manager, "name", name);
>
>    foreach disk
>        virLockManagerRegisterResource(manager,
>                                       VIR_LOCK_MANAGER_RESOURCE_TYPE_DISK,
>                                       disk.path,
>                                       ..flags...);
>
>    char **supervisorargv;
>    int supervisorargc;
>
>    supervisor = virLockManagerGetSupervisorPath(manager);
>    virLockManagerGetSupervisorArgs(&supervisorargv, &supervisorargc);
>
>    cmd = qemuBuildCommandLine(supervisor, supervisorargv, supervisorargc);
>
>    supervisorpid = virCommandExec(cmd);
>
>    if (!virLockManagerGetChild(manager, &qemupid))
>        kill(supervisorpid, SIGTERM); /* XXX or leave it running ??? */
Would it be better to first try virLockManagerShutdown? And rather than
a direct kill(), shouldn't this be virLockManagerFree?
Yes, I guess so.
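i.e. the error path at guest startup would turn into something like this (rough sketch):

    if (!virLockManagerGetChild(manager, &qemupid)) {
        /* ask the supervisor to release its leases & exit cleanly,
         * rather than kill()ing it directly */
        virLockManagerShutdown(manager);
        virLockManagerFree(manager);
        manager = NULL;
        /* ...then report the failure to start the guest... */
    }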
>During migration:
>
> 1. On source host
>
>      if (!virLockManagerPrepareMigrate(manager, hosturi))
>          ..don't start migration..
>
> 2. On dest host
>
>      manager = virLockManagerNew(driver,
>                                  VIR_LOCK_MANAGER_START_DOMAIN,
>                                  VIR_LOCK_MANAGER_NEW_MIGRATE);
>      virLockManagerSetParameter(manager, "id", id);
>      virLockManagerSetParameter(manager, "uuid", uuid);
>      virLockManagerSetParameter(manager, "name", name);
>
>      foreach disk
>          virLockManagerRegisterResource(manager,
>                                         VIR_LOCK_MANAGER_RESOURCE_TYPE_DISK,
>                                         disk.path,
>                                         ..flags...);
So if there needs to be any relaxation of locks from exclusive to
shared-write for the duration of the migration, that would be the
responsibility of virLockManagerPrepareMigrate, and not done directly by
libvirt?
As with my other reply on this topic, I didn't want to force a particular
design / implementation strategy for migration, so I just put in actions
at each key stage of migration. The driver impl can decide whether to do
a plain release+reacquire, or use some kind of shared lock.
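e.g. a plugin taking the shared-lock route might implement its migrate-prepare callback roughly like this (purely illustrative; the callback name, the virLockManagerPtr type and the helper below are internal details a plugin is free to invent):

    static int syncManagerPrepareMigrate(virLockManagerPtr manager,
                                         const char *hosturi)
    {
        /* downgrade each exclusive lease to shared-write so the
         * destination host can also hold it for the duration of the
         * migration window; a simpler plugin could instead just note
         * that leases must be dropped in CompleteMigrateOut and
         * re-acquired if the migration gets cancelled */
        if (syncManagerDowngradeLeases(manager, hosturi) < 0)
            return -1;   /* libvirt then refuses to start the migration */

        return 0;
    }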
>      char **supervisorargv;
>      int supervisorargc;
>
>      supervisor = virLockManagerGetSupervisorPath(manager);
>      virLockManagerGetSupervisorArgs(&supervisorargv, &supervisorargc);
>
>      cmd = qemuBuildCommandLine(supervisor, supervisorargv, supervisorargc);
>
>      supervisorpid = virCommandExec(cmd);
>
>      if (!virLockManagerGetChild(manager, &qemupid))
>          kill(supervisorpid, SIGTERM); /* XXX or leave it running ??? */
>
> 3. Initiate migration in QEMU on source and wait for completion
>
> 4a. On failure
>
>    4a1 On target
>
>      virLockManagerCompleteMigrateIn(manager,
>                                      VIR_LOCK_MANAGER_MIGRATE_CANCEL);
>      virLockManagerShutdown(manager);
>      virLockManagerFree(manager);
>
>    4a2 On source
>
>      virLockManagerCompleteMigrateIn(manager,
>                                      VIR_LOCK_MANAGER_MIGRATE_CANCEL);
Wouldn't this be virLockManagerCompleteMigrateOut?
Oops, yes.
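i.e. 4a2 should have read:

      virLockManagerCompleteMigrateOut(manager,
                                       VIR_LOCK_MANAGER_MIGRATE_CANCEL);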
>
> 4b. On success
>
>
>    4b1 On target
>
>      virLockManagerCompleteMigrateIn(manager, 0);
>
>    4b2 On source
>
>      virLockManagerCompleteMigrateIn(manager, 0);
Likewise?
Yes
>      virLockManagerShutdown(manager);
>      virLockManagerFree(manager);
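So with the In/Out naming fixed, the success path ends up as:

   4b1 On target

      virLockManagerCompleteMigrateIn(manager, 0);

   4b2 On source

      virLockManagerCompleteMigrateOut(manager, 0);
      virLockManagerShutdown(manager);
      virLockManagerFree(manager);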
>
>
>Notes:
>
> - If a lock manager impl does just VM level leases, it can
>   ignore all the resource paths at startup.
>
> - If a lock manager impl does not support migrate
>   it can return an error from all migrate calls
>
> - If a lock manager impl does not support hotplug
>   it can return an error from all resource acquire/release calls
>
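i.e. the most minimal plugin conceivable just wires up the VM-level lifecycle calls and stubs out everything else, along these lines (sketch only; none of the callback signatures are final):

    /* plugin doing whole-VM leases only, no migration, no hotplug */

    static int dumbRegisterResource(virLockManagerPtr manager,
                                    unsigned int type,
                                    const char *path,
                                    unsigned int flags)
    {
        return 0;    /* VM level leases only, so disk paths are ignored */
    }

    static int dumbPrepareMigrate(virLockManagerPtr manager,
                                  const char *hosturi)
    {
        return -1;   /* migration not supported */
    }

    static int dumbAcquireResource(virLockManagerPtr manager,
                                   const char *path,
                                   unsigned int flags)
    {
        return -1;   /* hotplug not supported */
    }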
Overall, this looks workable to me. As proposed, this assumes a 1:1
relation between LockManager process and managed VMs. But I guess you
can still have a central manager process that manages all the VMs, by
having the lock manager plugin spawn a simple shim process that does all
the communication with the central lock manager.
I could have designed it such that it didn't assume the presence of an
angel process around each VM, but I think it is easier to be able to
presume that there is one. It can be an incredibly thin stub if desired,
so I don't think it'll be too onerous on implementations.
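e.g. the per-VM stub could be little more than this (very rough sketch, everything in it is hypothetical; it is only meant to show how thin the angel process can be):

    /* hypothetical supervisor stub the plugin hands back to libvirt:
     *   stub MANAGER-SOCKET VM-UUID -- /usr/bin/qemu-kvm <qemu args...> */
    #include <sys/wait.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        int mgr = connect_to_manager(argv[1], argv[2]);  /* hypothetical */

        acquire_leases(mgr);             /* blocks until the leases are held */

        pid_t child = fork();
        if (child == 0) {
            execv(argv[4], &argv[4]);    /* run the real emulator binary */
            _exit(127);
        }

        report_child_pid(mgr, child);    /* so virLockManagerGetChild() can
                                          * hand the qemu pid back to libvirt */

        int status;
        waitpid(child, &status, 0);
        release_leases(mgr);             /* leases go away with the guest */
        return 0;
    }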
Daniel
--
|: Red Hat, Engineering, London  -o-  http://people.redhat.com/berrange/ :|
|: http://libvirt.org  -o-  http://virt-manager.org  -o-  http://deltacloud.org :|
|: http://autobuild.org  -o-  http://search.cpan.org/~danberr/ :|
|: GnuPG: 7D3B9505  -o-  F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 :|