> How are locks acquired by libdlm scoped? The reason we have virtlockd is
> that the fcntl() locks need to be held by a running process, and we wanted
> them to persist across libvirtd restarts. This required holding them in a
> separate process.
Locks are managed by the 'dlm_controld' daemon. There is a flag,
`LKF_PERSISTENT`, which achieves exactly this purpose: the locks still
exist after libvirtd restarts. As far as I currently understand, such a
lock only disappears when the OS reboots (if it is not released manually);
at the same time, the other nodes in the cluster can recover the lock
context.
> Are libdlm locks automatically released when the process that acquired
> them dies, or is a manual release action required? If they always require
> a manual release, then there would be no need to use virtlockd - the
> plugin can just take care of acquire & release, and still cope with
> libvirtd restarts.
So why do I talk about virtlockd? It is about the lockid. What is a
lockid? In dlm there are three concepts: 'lockspace', 'resource' and
'lockid'. Their relationship: multiple locks can be associated with the
same resource, and one lockspace can hold multiple resources. When a lock
is acquired on a resource, the lockid is returned by the acquiring API
call.
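To make this concrete, here is a minimal sketch of how a persistent lock
might be acquired with libdlm. This is my own example: the "virt"
lockspace and resource name are invented, error handling is abbreviated,
and it needs to be linked with `-ldlm` and have a running dlm_controld to
actually do anything.

```c
#include <stdio.h>
#include <string.h>
#include <libdlm.h>   /* link with -ldlm; requires a running dlm_controld */

int main(void)
{
    struct dlm_lksb lksb;
    memset(&lksb, 0, sizeof(lksb));

    /* The "virt" lockspace name is an example, not an existing convention */
    dlm_lshandle_t ls = dlm_create_lockspace("virt", 0600);
    if (!ls) {
        perror("dlm_create_lockspace");
        return 1;
    }

    const char *res = "disk-sda-lease";   /* example resource name */

    /* LKF_PERSISTENT: the lock survives the death of this process,
     * so a restarted libvirtd could adopt it instead of losing it */
    int rv = dlm_ls_lock_wait(ls, LKM_EXMODE, &lksb,
                              LKF_NOQUEUE | LKF_PERSISTENT,
                              res, strlen(res),
                              0, NULL, NULL, NULL);
    if (rv != 0 || lksb.sb_status != 0) {
        fprintf(stderr, "lock failed\n");
        return 1;
    }

    /* sb_lkid is the lockid: the only handle for a later unlock/convert,
     * and exactly the piece of state that is lost on libvirtd restart */
    printf("acquired lockid 0x%x on %s\n", lksb.sb_lkid, res);
    return 0;
}
```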
Question: how can I know which lockid is associated with which resource?
After all, in general all lockid information is lost after libvirtd
restarts.
I am considering some solutions: one is to use virtlockd to maintain that
information; another is to persist it by some other means (such as shared
memory, or writing it to disk). Fortunately, maybe I don't have to use
virtlockd: the LVM project also uses dlm, and I could refer to its
implementation.
-----
> In the first half of 2017, QEMU introduced `share-rw` and `file.locking`
> to handle a problem: https://bugzilla.redhat.com/show_bug.cgi?id=1080152

Link correction: https://bugzilla.redhat.com/show_bug.cgi?id=1378241
--
Regards
River