On 05/19/2016 08:52 AM, Daniel P. Berrange wrote:
> On Thu, May 19, 2016 at 08:36:35AM -0400, Cole Robinson wrote:
>> On 05/19/2016 08:21 AM, Daniel P. Berrange wrote:
>>> On Thu, May 19, 2016 at 01:29:07PM +0200, Ján Tomko wrote:
>>>> Allow access to /dev/dri/render* devices for domains
>>>> using <graphics type="spice"> with <gl enable="yes"/>
>>>>
>>>> https://bugzilla.redhat.com/show_bug.cgi?id=1337290
>>>
>>> Ignoring cgroups for a minute, how exactly does QEMU get access to
>>> the /dev/dri/render* devices in general? ie when QEMU is running
>>> as the 'qemu:qemu' user/group account, with selinux enforcing I
>>> don't see how it can possibly open these files, as we're not granting
>>> access to them in any of the security drivers. Given this, allowing
>>> them in cgroups seems like the least of our problems.
>>>
>>
>> The svirt bits can at least be temporarily worked around with chmod 666
>> /dev/dri/render* and setenforce 0. The cgroup bit requires duplicating the
>> entire cgroup_device_acl block in qemu.conf, which is less friendly and not
>> very future proof. Seems like an easy win.
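(To illustrate the pain: qemu.conf has no way to just append one entry, so the
workaround means copying the whole default cgroup_device_acl list into
/etc/libvirt/qemu.conf and tacking the render node on the end - roughly, on a
host with the renderD128 node shown below:

  cgroup_device_acl = [
      "/dev/null", "/dev/full", "/dev/zero",
      "/dev/random", "/dev/urandom",
      "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
      "/dev/rtc", "/dev/hpet",
      "/dev/dri/renderD128"
  ]

and then restarting libvirtd.)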
> There's a potential issue though with going down a path now which is not
> viable long term, which we then get stuck supporting for upgradability.
> e.g. if we start granting permission to use these devices to multiple QEMUs
> concurrently, will we regret doing that later and have to break people's
> deployments to fix it properly?
Hmm, I see. CCing the gl guys.
How this works on my f24 host:
$ ls -lZ /dev/dri/
total 0
crw-rw----+ 1 root video system_u:object_r:dri_device_t:s0 226,   0 May 18 19:17 card0
crw-------. 1 root video system_u:object_r:dri_device_t:s0 226,  64 May 18 19:17 controlD64
crw-rw----+ 1 root video system_u:object_r:dri_device_t:s0 226, 128 May 18 19:17 renderD128
qemu itself loops over /dev/dri, grabs the first matching renderD* that it can
open, then does its magic. Some questions I have in general:
- is there only ever one render node, or one per video card?
- is it okay to use the same node for multiple VMs simultaneously?
Maybe the long term fix is to have libvirt pass in a pre-opened fd for
qemu:///system, since I don't know if it would be acceptable to chown
qemu:qemu on the render node; maybe we could use setfacl instead.
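e.g. rather than a one-time chown, libvirtd could add and remove an ACL entry
for the qemu user around VM startup/shutdown, the same way logind grants the
console user access - something like:

  # setfacl -m u:qemu:rw /dev/dri/renderD128

and the matching 'setfacl -x u:qemu /dev/dri/renderD128' once no VM needs it.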
> Without sVirt integration though I'd suggest we don't really advertize
> this to users, as telling them to chmod / setenforce is not really a
> supportable strategy for usage in any case.
I certainly agree with that, at least WRT UI tools... after playing with this
stuff a bit I won't be adding a clicky 'enable gl' UI option to virt-manager in
the short term, there are just too many operational caveats. But advertisement
in the form of blog posts with all the caveats listed will probably save us bug
reports, and maybe a command line virt-xml one liner to turn it on for an
existing VM (roughly sketched below). And of course virt-manager will work with
it correctly on the viewer side; patches are in git and a fedora build is
coming soon.
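For the record, the virt-xml one liner would presumably look something like
this (exact suboption spelling is a guess until the cli bits land):

  $ virt-xml VMNAME --confirm --edit --graphics clearxml=yes,type=spice,gl=on,listen=none

i.e. just rewrite the <graphics> XML to the <gl enable="yes"/> form from Ján's
patch, with no listen address since this only works for local clients.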
>> But yes, there needs to be a larger discussion about how to correctly handle
>> this WRT svirt for both qemu:///system and qemu:///session. selinux bug here:
>>
>> https://bugzilla.redhat.com/show_bug.cgi?id=1337333
> Looks like we'd need to consider those separately - as in the session
> case, even libvirtd won't have the option to fix permissioning. It is
> something that would have to be done at the OS level to grant access.
> Once you grant access to just an unprivileged QEMU, you might as well
> just grant access to all of a user's processes, since there's no separation
> stopping other processes in the user session getting access to the devices
> via QEMU. IOW, if you want qemu:///session mode to have access you end up
> with a chmod 666 world, where everyone has access. I don't know enough about
> it to know if that's reasonable or not.
Actually qemu:///session DAC permissions are fine, because the logged-in user
already has ACL access to the render node, like /dev/snd/* for example. It's
just the svirt selinux policy that is rejecting access.
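e.g. on this host logind's uaccess machinery is what puts the '+' ACL on the
render node for the active seat, so getfacl shows something like (username
illustrative):

  $ getfacl /dev/dri/renderD128
  # file: dev/dri/renderD128
  # owner: root
  # group: video
  user::rw-
  user:cole:rw-
  group::rw-
  mask::rw-
  other::---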
The DAC permissions are an issue with qemu:qemu on qemu:///system, though.
- Cole