Sorry for commenting so late, but it was Rosh Hashana over here.
Just to give you some pointers: sync_manager should also be zero config.
The host ID, timeouts, etc. are not kept in a config file. The host ID is
decided when sync_manager starts, and the timeouts are decided upon lease
creation, so that all the sync_managers in the cluster are synchronized.
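To make that concrete, here is a rough sketch of the principle (the field
names are invented for illustration, this is not the real sync_manager
on-disk format): the timeouts travel with the lease on shared storage, so
every host reads the same values instead of relying on a local config file.

    #include <stdint.h>

    /* Illustrative only: timeout values are written into the lease area
     * on shared storage when the lease is created, so every sync_manager
     * in the cluster sees identical values. */
    struct example_lease_header {
        uint64_t leader_version;       /* bumped whenever ownership changes */
        uint32_t host_id;              /* chosen when sync_manager starts   */
        uint32_t io_timeout_sec;       /* fixed at lease creation time      */
        uint32_t renewal_interval_sec;
    };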
Also, when thinking about migration, we have to consider the situation
where the source host in the migration can't see the storage and thus
can't update the lease.
The solution we currently have in mind is to use the leader version to
make sure no one took the lease while the handover was happening. This
would mean giving sync_manager an optional parameter (forceLVer?). This
will all be managed by vdsm/rhev-m. You just have to keep in mind that
there might be something to pass other than the lease location. Maybe
just add a generic addArg parameter, or, if you think all locking
mechanisms should have this feature, make it part of the API.
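To sketch what I mean by a generic argument (the names below are made up,
this is not an existing libvirt or sync_manager interface): the plugin
call could take opaque key/value pairs, so a forced leader version is
just one more argument that vdsm/rhev-m passes through.

    /* Illustrative sketch only: a generic way to pass driver-specific
     * arguments (such as a forced leader version during migration)
     * through to the lock plugin, instead of hardcoding a forceLVer
     * field in the API. */
    typedef struct {
        const char *key;    /* e.g. "forceLeaderVersion" (hypothetical) */
        const char *value;  /* opaque string, interpreted by the plugin */
    } example_lock_arg;

    /* A plugin entry point could then accept an array of such arguments. */
    int example_acquire_lease(const char *lease_path,
                              example_lock_arg *args,
                              unsigned int nargs);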
On Fri, 2010-09-10 at 16:49 +0100, Daniel P. Berrange wrote:
A few weeks back David introduced sync-manager as a means of protecting
against the same VM being started on multiple hosts:
https://www.redhat.com/archives/libvir-list/2010-August/msg00179.html
This is obviously a desirable feature for libvirt, but we don't want to
have a hardcoded dependency on a particular implementation. We can
achieve isolation between libvirt & sync-manager, allowing for alternative
implementations by specifying a formal plugin for supervisor processes.
What follows is my mockup of what a plugin API would look like, its
internal libvirt API and an outline of the usage scenarios in pseudo
code.
At the minimum level of compliance, the plugin implementation provides
for per-VM level exclusion. Optionally a plugin can declare that it
- supports locks against individual resources (disks)
- supports hotplug of individual resources
- even supports migration of the supervised process
Migration/hotplug will be forbidden if those capabilities aren't declared
by the plugin.
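For instance, the capability declaration could be as simple as a set of
flags the plugin reports once at load time (names here are illustrative,
not the actual proposed API):

    /* Illustrative sketch: a plugin reports which optional features it
     * supports beyond the mandatory per-VM exclusion; libvirt refuses
     * hotplug or migration when the matching flag is missing. */
    enum example_lock_plugin_flags {
        EXAMPLE_LOCK_RESOURCE_LOCKS = (1 << 0), /* per-disk locks         */
        EXAMPLE_LOCK_HOTPLUG        = (1 << 1), /* lock/unlock on hotplug */
        EXAMPLE_LOCK_MIGRATION      = (1 << 2), /* handover between hosts */
    };

    unsigned int example_lock_plugin_get_flags(void);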
In parallel with David's work on sync-manager, I intend to write a simple
plugin implementation based solely on fcntl() locking. It is important
that we have two independent implementations to prove the validity of the
plugin API. The fcntl() impl should be a zero-config impl we can deploy
out of the box that will protect against two VMs using the same disk
image on a host, and give limited protection in multi-host scenarios with
shared filesystems (no protection for shared block devs).
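As a rough illustration of the sort of lock such a plugin could take on
each disk image (a sketch only, not the actual implementation), an
exclusive advisory lock held for the lifetime of the VM would be enough:

    #include <fcntl.h>
    #include <unistd.h>

    /* Take an exclusive advisory write lock over the whole disk image.
     * On a shared filesystem with working POSIX locks this also excludes
     * other hosts; on local storage it only protects against processes
     * on the same host, and it does nothing for shared block devices. */
    static int lock_disk_image(const char *path)
    {
        int fd = open(path, O_RDWR);
        if (fd < 0)
            return -1;

        struct flock fl = {
            .l_type   = F_WRLCK,    /* exclusive lock            */
            .l_whence = SEEK_SET,
            .l_start  = 0,
            .l_len    = 0,          /* 0 == lock the entire file */
        };

        if (fcntl(fd, F_SETLK, &fl) < 0) {  /* non-blocking attempt */
            close(fd);
            return -1;              /* another process holds the lock */
        }
        return fd;   /* keep the fd open for as long as the VM runs */
    }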
Perhaps we should have a 3rd impl based on libdlm too, for Red Hat
ClusterSuite scenarios, though sync-manager may well be general &
simple enough to easily cope with this already.
Daniel