On 20 May 2016 at 00:23, Daniel P. Berrange <berrange(a)redhat.com> wrote:
On Thu, May 19, 2016 at 04:12:52PM +0200, Gerd Hoffmann wrote:
> Hi,
>
> > $ ls -lZ /dev/dri/
> > total 0
> > crw-rw----+ 1 root video system_u:object_r:dri_device_t:s0 226, 0 May 18
> > 19:17 card0
> > crw-------. 1 root video system_u:object_r:dri_device_t:s0 226, 64 May 18
> > 19:17 controlD64
> > crw-rw----+ 1 root video system_u:object_r:dri_device_t:s0 226, 128 May 18
> > 19:17 renderD128
> >
> > qemu itself loops over /dev/dri, grabs the first matching renderD* that it can
> > open, then does its magic. Some questions I have in general:
> >
> > - is there only ever one render node? or one per video card?
>
> One per video card.
Is there a way to tell QEMU which video card to use? If so we need to
somehow represent this in libvirt.
We should probably add support for using an explicit path as the backing
for a particular virtio-gpu device. At the moment I think we just open the
first one we find, which may or may not be a great decision.
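The "loop over /dev/dri and grab the first renderD* that opens" behaviour described above can be sketched in shell. This is an illustrative sketch, not QEMU's actual code: the directory is parameterised so it can be tried on a mock tree, and the readability check stands in for a real open() attempt.

```shell
# Pick the first usable render node under a DRI directory, mimicking
# the "grab the first renderD* we can open" behaviour described above.
# The directory argument defaults to the standard /dev/dri location.
pick_render_node() {
    dri_dir="${1:-/dev/dri}"
    for node in "$dri_dir"/renderD*; do
        # -r: stand-in for "can we actually open this node?"
        if [ -r "$node" ]; then
            echo "$node"
            return 0
        fi
    done
    return 1
}
```

Note that with one render node per video card, this implicitly picks whichever card sorts first, which is exactly why an explicit path option would help.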
> > - is it okay to use the same node for multiple VMs simultaneously?
>
> Yes.
Presumably they're going to compete for execution time and potentially
VRAM at least? I assume they have 100% security isolation from each
other though. IOW, permissioning is really just there to prevent a
rogue process from doing denial of service on the GPU resources,
rather than actively compromising other users of the GPU?
Securing 3D accelerated VM access 100% is unlikely to ever be possible;
the GPU hardware just doesn't support it in some cases. Later GPU
hardware is a lot better, but there will always be DoS and possible info
leaks through a GPU. I don't think VMware or anyone else does much
different here. What using a render node does is block you from
deliberately/accidentally accessing other users' buffers through the
defined API; the old drm API had a global namespace you could stumble
through for shared buffers.
> > Maybe the long term fix is to have libvirt pass in a pre-opened fd
> > for qemu:///system, since I don't know if it would be acceptable to
> > chown qemu:qemu on the render node, but maybe we use setfacl instead.
>
> chown isn't a good idea I think. But doesn't libvirt use setfacl anyway
> for similar things (e.g. /dev/bus/usb/... for usb pass-through)?
No, we switch access exclusively to QEMU.
Obviously the DRI stuff is different, as we expect the host OS to
have continued concurrent use of the video card.
chowning wouldn't be acceptable; adding an ACL for qemu:qemu
would be fine, though.
Dave.