On Tue, Jan 26, 2016 at 01:47:02PM +0000, Daniel P. Berrange wrote:
On Tue, Jan 26, 2016 at 02:28:56PM +0100, Ján Tomko wrote:
> On Fri, Jan 22, 2016 at 03:56:08PM +0000, Daniel P. Berrange wrote:
> > We have had virtlockd available for a long time now but
> > have always defaulted to the 'nop' lock driver which does
> > no locking. This gives users an unsafe deployment by
> > default unless they know to turn on lockd.
>
> Does the default setup of virtlockd offer any safety?
>
> After looking at:
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1191802
> It seems that with dynamic_ownership enabled we first apply
> the security labels, then check the disk locks by calling
> virDomainLockProcessResume right before starting the CPUs.
> After that fails, resetting the uid:gid back to root:root
> cuts off access to the disk for the already running domain.
>
> Is there a reason why the locks are checked so late?
>
> I assume this only ever worked for the case of migration with
> shared storage (and shared lockspace).

NB, the virtlockd integration is intended to protect the
*content* of the disk image from being written to by two
processes at once.
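
To make that concrete: the protection is of the advisory-lock kind,
where whichever process fails to take the exclusive lock on the image
has to refuse to run. Here is a minimal sketch of that idea using
fcntl() locks - purely an illustration of the mechanism, not
libvirt's actual lock driver code:

/* Illustration only: take an exclusive advisory lock on an image,
 * refusing to proceed if another process already holds one. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

static int try_exclusive_lock(const char *path)
{
    int fd = open(path, O_RDWR);
    if (fd < 0) {
        perror("open");
        return -1;
    }

    struct flock fl = {
        .l_type = F_WRLCK,      /* exclusive (write) lock */
        .l_whence = SEEK_SET,
        .l_start = 0,
        .l_len = 0,             /* 0 == lock the whole file */
    };

    if (fcntl(fd, F_SETLK, &fl) < 0) {
        /* Someone else holds the lock: do not touch the content. */
        perror("fcntl(F_SETLK)");
        close(fd);
        return -1;
    }
    return fd;   /* keep the fd open for as long as the lock is held */
}

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s /path/to/disk.img\n", argv[0]);
        return 1;
    }
    if (try_exclusive_lock(argv[1]) < 0)
        return 1;
    pause();   /* hold the lock until interrupted */
    return 0;
}

Both processes can still open() the image; it is only the lock that
arbitrates who may write, which is roughly the arbitration that
virtlockd centralises on behalf of the QEMU processes.
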
Protecting the image metadata is not a goal, since that is
something that should be dealt with by the security drivers.
i.e. the security driver should be acquiring suitable locks of
its own so that it does not blow away in-use labels for other
VMs. We've had patches floating around for the security drivers
for a while, but never got as far as merging them yet.
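
The shape of the missing piece looks something like the sketch below.
To be clear, this is hypothetical, not the pending patches: the
helper functions are made-up placeholders for whatever per-image lock
and in-use tracking the real code would provide. The point is only
the ordering - take the metadata lock, check whether any other
running domain still uses the image, and only then chown it back:

#include <stdbool.h>
#include <sys/types.h>
#include <unistd.h>

/* Placeholder: serialise relabel/restore operations per image path. */
static bool image_metadata_lock(const char *path)
{
    (void)path;
    return true;
}

static void image_metadata_unlock(const char *path)
{
    (void)path;
}

/* Placeholder: would consult a shared record of which domains
 * currently have the image in use. */
static bool image_in_use_by_other_domain(const char *path)
{
    (void)path;
    return false;
}

static int restore_image_label(const char *path, uid_t uid, gid_t gid)
{
    int ret = 0;

    if (!image_metadata_lock(path))
        return -1;

    /* Skip the chown back if another running domain still needs the
     * current label - otherwise its disk access is cut off, which is
     * exactly the failure described in the bug above. */
    if (!image_in_use_by_other_domain(path) &&
        chown(path, uid, gid) < 0)
        ret = -1;

    image_metadata_unlock(path);
    return ret;
}

int main(void)
{
    /* e.g. hand a (hypothetical) image back to root:root */
    return restore_image_label("/var/lib/libvirt/images/demo.img",
                               0, 0) == 0 ? 0 : 1;
}
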
Having said all that, I wonder if we should actually *not* turn
on virtlockd, until we have addressed the problem in the security
driver. Without the latter fix, it seems we'll get a never-ending
stream of bug reports about this issue.
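
For anyone who does want the locking today, opting in is roughly a
matter of switching the QEMU driver away from the 'nop' driver and
making sure virtlockd is running, e.g.

  # /etc/libvirt/qemu.conf
  lock_manager = "lockd"

and then restarting libvirtd (virtlockd itself is normally socket
activated). Further behaviour, such as the lockspace directory, is
tuned from qemu-lockd.conf.
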
Regards,
Daniel