On Thu, Dec 08, 2022 at 10:12:22AM +0100, Martin Kletzander wrote:
> On Wed, Dec 07, 2022 at 12:07:11PM +0000, Daniel P. Berrangé wrote:
> > On Wed, Dec 07, 2022 at 12:42:06PM +0100, Martin Kletzander wrote:
> > > On Thu, Dec 01, 2022 at 10:17:49AM +0000, Daniel P. Berrangé wrote:
> > > > The other end of the
> > > >
> > > > virInternalSetProcessSetMaxMemLockHandler
> > > >
> > > > wouldn't have the ability to validate the VM identity even if we
> > > > passed it, since the master source of VM identity info is
> > > > the unprivileged and untrusted component.
> > > >
> > > > This means it is a big challenge to do more than just a blanket
> > > > allow/deny for the entire 'max mem lock' feature, rather than try
> > > > to finesse it per VM.
> > > >
> > >
> > > Exactly what I was afraid of with another approach I discussed with
> > > someone else a while ago.  If you start providing ways to do arbitrary
> > > privileged operations, then you are effectively giving away privileged
> > > access.
> > >
> > > In this case I think it was an unfortunate choice of an API.  If the API
> > > is *designed* to provide the proper identifying information, then the
> > > management application can choose the action properly.
> >
> > I think it is challenging no matter what, because the privileged component
> > is placing trust in the unprivileged component to supply honest identifying
> > info.  This is a key reason why polkit ACL checks are done based on
> > process ID + permission name.  The process ID can't be faked, and you're
> > asking the user to confirm the honesty of the permission name.
> >
> > Overall, I think if you're going to allow "mem lock" to an unprivileged
> > VM that's fine, but the expectation should be that we're allowing this
> > for *any* VM, and not able to offer per-VM access control on that usage.
>
> What I meant was something more along the lines of "place_vm_in_cgroup",
> where we would offload the cgroup placement to kubevirt.  It already has
> to trust us with the information we provide now (which threads are placed
> in which cgroups).  Having the callback per-connection, and calling it on
> the connection that is starting the VM, would also make that argument
> easier.

In terms of privilege questions, I think kubevirt is rather a special
case, because they don't have any privilege boundary between libvirt,
qemu or virt-handler, which are all running inside the same POD.  Their
plugin would also be requesting something to be done on their behalf
outside the VM POD.  In terms of privilege checks they won't need
anything more fine-grained than a PID check, as that's sufficient to
identify the POD that is requesting the resource, from any other process.

With regards,
Daniel
--
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|