On Thu, May 19, 2016 at 04:12:52PM +0200, Gerd Hoffmann wrote:
> Hi,
> > $ ls -lZ /dev/dri/
> > total 0
> > crw-rw----+ 1 root video system_u:object_r:dri_device_t:s0 226, 0   May 18 19:17 card0
> > crw-------. 1 root video system_u:object_r:dri_device_t:s0 226, 64  May 18 19:17 controlD64
> > crw-rw----+ 1 root video system_u:object_r:dri_device_t:s0 226, 128 May 18 19:17 renderD128
> >
> > qemu itself loops over /dev/dri, grabs the first matching renderD* that it can
> > open, then does its magic. Some questions I have in general:
> >
> > - is there only ever one render node? or one per video card?
>
> One per video card.
Is there a way to tell QEMU which video card to use ? If so we need to
somehow represent this in libvirt.
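
For illustration only, here is a minimal sketch (in the spirit of what Cole
describes, not QEMU's actual code) of picking a render node: scan /dev/dri
for renderD* entries, which on Linux are numbered from minor 128 upwards,
one per card, and open the first one that can be opened. Selecting a
specific card would just mean matching a particular name or minor instead
of taking the first hit.

#include <dirent.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Scan /dev/dri and return an fd for the first render node we can open,
 * or -1 if none is usable.  Render nodes are named renderD<minor>, with
 * minors starting at 128, one node per card. */
static int open_first_render_node(void)
{
    DIR *dir = opendir("/dev/dri");
    struct dirent *ent;
    int fd = -1;

    if (!dir)
        return -1;

    while ((ent = readdir(dir)) != NULL) {
        char path[256];

        if (strncmp(ent->d_name, "renderD", 7) != 0)
            continue;

        snprintf(path, sizeof(path), "/dev/dri/%s", ent->d_name);
        fd = open(path, O_RDWR | O_CLOEXEC);
        if (fd >= 0) {
            fprintf(stderr, "using render node %s\n", path);
            break;
        }
    }

    closedir(dir);
    return fd;
}

int main(void)
{
    int fd = open_first_render_node();
    if (fd < 0) {
        fprintf(stderr, "no usable render node found\n");
        return 1;
    }
    close(fd);
    return 0;
}

(Purely illustrative; QEMU's own probing logic lives in its virgl/display code.)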
> > - is it okay to use the same node for multiple VMs simultaneously?
>
> Yes.
Presumably they're going to compete for execution time and potentially
VRAM at least ? I assume they have 100% security isolation from each
other though. IOW, permissioning is really just there to prevent a
rogue process from doing denial of service on the GPU resources,
rather than actively compromising other users of the GPU ?
> > Maybe the long term fix is to have libvirt pass in a pre-opened fd for
> > qemu:///system, since I don't know if it would be acceptable to chown
> > qemu:qemu on the render node, but maybe we use setfacl instead.
>
> chown isn't a good idea I think. But doesn't libvirt use setfacl anyway
> for similar things (i.e. /dev/bus/usb/... for usb pass-through) ?
No, for USB pass-through we switch access exclusively to QEMU.
Obviously the DRI stuff is different, as we expect the host OS to
have continued concurrent use of the video card.
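
For reference, a rough sketch of what a setfacl-style grant would look like
programmatically, using the POSIX ACL API from libacl. This is only an
illustration of the mechanism being discussed, not libvirt's actual
implementation; the "qemu" user name and the renderD128 path are just
example arguments.

#include <pwd.h>
#include <stdio.h>
#include <sys/acl.h>
#include <sys/types.h>

/* Grant 'user' rw access to 'path' via a POSIX ACL entry,
 * roughly equivalent to: setfacl -m u:<user>:rw <path> */
static int grant_rw_acl(const char *path, const char *user)
{
    struct passwd *pw = getpwnam(user);
    acl_t acl;
    acl_entry_t entry;
    acl_permset_t perms;
    int ret = -1;

    if (!pw)
        return -1;

    acl = acl_get_file(path, ACL_TYPE_ACCESS);
    if (!acl)
        return -1;

    if (acl_create_entry(&acl, &entry) < 0 ||
        acl_set_tag_type(entry, ACL_USER) < 0 ||
        acl_set_qualifier(entry, &pw->pw_uid) < 0 ||
        acl_get_permset(entry, &perms) < 0 ||
        acl_add_perm(perms, ACL_READ) < 0 ||
        acl_add_perm(perms, ACL_WRITE) < 0 ||
        acl_calc_mask(&acl) < 0 ||
        acl_set_file(path, ACL_TYPE_ACCESS, acl) < 0)
        goto cleanup;

    ret = 0;
cleanup:
    acl_free(acl);
    return ret;
}

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s /dev/dri/renderD128 qemu\n", argv[0]);
        return 1;
    }
    if (grant_rw_acl(argv[1], argv[2]) < 0) {
        fprintf(stderr, "failed to add ACL entry\n");
        return 1;
    }
    return 0;
}

(Link with -lacl. An ACL entry like this leaves the host's own access to the
node untouched, unlike a chown, which is why it fits the shared-GPU case.)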
Regards,
Daniel
--
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org -o- http://virt-manager.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|