Dear Daniel,
Thank you very much for your response! For the two-node Gluster KVM cluster, Jason Brooks
wrote (https://community.redhat.com/blog/2014/10/up-and-running-with-ovirt-3-5/):
"Due to a conflict between Gluster's built-in NFS server and NFS client-locking,
it's necessary to disable file locking in the /etc/nfsmount.conf file with the line
Lock=False to ensure that Gluster will reliably both serve up and access the engine volume
over NFS." Based on my experience, I fear that this is still true and that, in the end, it
prevents VM locking in this setup. This would be unfortunate, but it probably is a fact
for the time being.
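For reference, that workaround amounts to a one-line change in /etc/nfsmount.conf,
usually placed under the global options section, roughly as follows:

    # /etc/nfsmount.conf - disables NFS client-side locking (comparable to
    # mounting with -o nolock); this is the setting the quoted article asks for:
    [ NFSMount_Global_Options ]
    Lock=False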
Personally, I will probably switch to oVirt once it supports automated VM backups that
yield qcow2 files with human-readable names, i.e., something one could manually carry
over to a plain KVM host in an emergency or in case oVirt should later turn out to be
too difficult to manage.
Regards,
Michael
-----Original Message-----
From: Daniel P. Berrange [mailto:berrange@redhat.com]
Sent: Tuesday, 1 September 2015 10:50
To: Prof. Dr. Michael Schefczyk <michael(a)schefczyk.net>
Cc: libvirt-users(a)redhat.com
Subject: Re: [libvirt-users] VM locking
On Mon, Aug 31, 2015 at 08:01:58PM +0000, Prof. Dr. Michael Schefczyk wrote:
Dear All,
I am trying to use VM (disk) locking on a two-node CentOS 7 KVM cluster. Unfortunately, I
have not been successful.
Using virtlockd (https://libvirt.org/locking-lockd.html), I get each host to write a
zero-length file with a hashed filename to the shared folder specified. Regardless of
which host I start a VM (domain) on, both hosts produce the identical filename per VM.
What does not work, however, is preventing the second host from starting a VM that is
already running on the first host.
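For completeness, the lockd configuration behind this is roughly the following sketch
(the lockspace directory shown is only an illustrative example of a path on the shared
folder, not my actual one):

    # /etc/libvirt/qemu.conf - tell the QEMU driver to use the lockd plugin:
    lock_manager = "lockd"

    # /etc/libvirt/qemu-lockd.conf - indirect lockspace on the shared folder;
    # this is where the hashed, zero-length lock files get created:
    file_lockspace_dir = "/mnt/shared/lockd/files"

The virtlockd service also needs to be enabled and running on both hosts (systemctl
enable virtlockd; systemctl start virtlockd), and libvirtd restarted after the change.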
Ok, so you've configured the locks to happen on a shared filesystem, which sounds correct.
My system is a current CentOS 7 installation using a Gluster storage setup as suggested
for oVirt (https://community.redhat.com/blog/2014/10/up-and-running-with-ovirt-3-5/),
based on the oVirt 3.5 repo but without an engine. I do this because I want to retain VM
files in qcow2 format with human-readable names for live backups. What does work, e.g.,
is live migration. A locking mechanism to prevent starting a VM twice would be good.
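As an illustration (guest and destination host names are made up), a live migration in
this setup is typically triggered like this:

    virsh migrate --live --verbose guest1 qemu+ssh://node2.example.com/system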
Ok, so you are using gluster for your qcow2 files.
Please note that this configuration - both according to Red Hat and to my own trial and
error - requires Lock=False in /etc/nfsmount.conf. Is there a connection with my findings?
My issues occur regardless of whether the files are in NFS or Gluster folders. KVM must
access the Gluster storage indirectly via NFS rather than directly via Gluster, as direct
Gluster access does not seem to work fully, at least in virt-manager.
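To illustrate what "indirectly via NFS" means here (the volume and mount point names are
placeholders): the Gluster volume is mounted through Gluster's built-in NFS server, which
only speaks NFSv3, for example:

    # with Lock=False in /etc/nfsmount.conf this effectively behaves like -o nolock:
    mount -t nfs -o vers=3 node1:/vmstore /mnt/vmstore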
I'm a little confused about whether the virtlockd lock directory is stored on NFS or
on GlusterFS at this point.
If using NFS though, the lock=false setting will definitely break virtlockd. When you set
lock=false, it means that any fcntl() locks applications acquire are only scoped to the
local host - no other NFS clients will see them, which matches the behaviour you
describe.
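A quick way to see this effect is to try taking an fcntl() lock on a file in the shared
directory from both hosts - a minimal sketch, not part of libvirt, with an arbitrary
example path:

    #!/usr/bin/env python3
    # Run on host A first, then on host B while A still holds the lock. If host B
    # also "acquires" the lock, fcntl() locks are not propagated across the mount
    # (e.g. because of Lock=False / -o nolock).
    import fcntl, sys, time

    path = sys.argv[1] if len(sys.argv) > 1 else "/mnt/shared/locktest"
    with open(path, "w") as f:
        try:
            fcntl.lockf(f, fcntl.LOCK_EX | fcntl.LOCK_NB)  # non-blocking exclusive lock
        except OSError:
            print("lock already held elsewhere - locking propagates between hosts")
            sys.exit(1)
        print("lock acquired; now run the same script on the other host")
        time.sleep(120)  # keep holding the lock while the other host tests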
Regards,
Daniel
--
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org -o- http://virt-manager.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|