
On Fri, Jan 22, 2016 at 03:56:08PM +0000, Daniel P. Berrange wrote:
> We have had virtlockd available for a long time now, but have always
> defaulted to the 'nop' lock driver, which does no locking. This gives
> users an unsafe deployment by default unless they know to turn on
> lockd.
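
(For readers following along: the manual opt-in being made the default
here is the lock_manager setting in /etc/libvirt/qemu.conf, roughly:

  # Load the 'lockd' lock manager plugin instead of the default 'nop'.
  # virtlockd must be running (or socket-activated under systemd)
  # for guests to start once this is set.
  lock_manager = "lockd"

plus enabling virtlockd itself, e.g. via its systemd socket unit on
distros that ship one.)
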
> virtlockd will auto-activate via systemd when guests launch, so
> setting it on by default will only cause upgrade pain for people on
> non-systemd distros.
>
> Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
> ---
>  src/qemu/qemu.conf     | 3 ++-
>  src/qemu/qemu_driver.c | 2 +-
>  2 files changed, 3 insertions(+), 2 deletions(-)

Does the default setup of virtlockd offer any safety?

After looking at:
https://bugzilla.redhat.com/show_bug.cgi?id=1191802

it seems that with dynamic_ownership enabled we first apply the
security labels, then check the disk locks by calling
virDomainLockProcessResume right before starting the CPUs. After that
fails, resetting the uid:gid back to root:root cuts off access to the
disk for the already running domain.
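
To illustrate the ordering (a simplified sketch from my reading of the
qemu startup path, not the literal code):

  /* 1. dynamic_ownership: disks are chown'ed to the qemu user and
   *    security labels applied, before we hold any lease on them, so
   *    the files of a domain running elsewhere get relabelled too. */
  if (virSecurityManagerSetAllLabel(driver->securityManager,
                                    vm->def, ...) < 0)
      goto cleanup;

  /* ... qemu is spawned with its vCPUs still paused ... */

  /* 2. Only now, just before starting the CPUs, are the disk leases
   *    acquired. If another host already holds them, we fail here. */
  if (virDomainLockProcessResume(driver->lockManager, cfg->uri,
                                 vm, priv->lockState) < 0)
      goto cleanup;

  /* 3. cleanup then resets ownership back to root:root, which is what
   *    cuts off disk access for the domain that holds the lease. */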

Is there a reason why the locks are checked so late? I assume this
only ever worked for the case of migration with shared storage (and a
shared lockspace).

Jan