Hi,
> $ ls -lZ /dev/dri/
> total 0
> crw-rw----+ 1 root video system_u:object_r:dri_device_t:s0 226,   0 May 18 19:17 card0
> crw-------. 1 root video system_u:object_r:dri_device_t:s0 226,  64 May 18 19:17 controlD64
> crw-rw----+ 1 root video system_u:object_r:dri_device_t:s0 226, 128 May 18 19:17 renderD128
> qemu itself loops over /dev/dri, grabs the first matching renderD* that
> it can open, then does its magic (sketched below). Some questions I have
> in general:
> - is there only ever one render node? or one per video card?

One per video card.

> - is it okay to use the same node for multiple VMs simultaneously?

Yes.
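
For illustration, the node selection boils down to something like this
shell sketch (not qemu's actual code, which does the equivalent in C):

    # try render nodes in order, take the first one we can open
    # read/write -- roughly what qemu's /dev/dri scan does
    for node in /dev/dri/renderD*; do
        if [ -r "$node" ] && [ -w "$node" ]; then
            echo "using $node"
            break
        fi
    done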
> Maybe the long term fix is to have libvirt pass in a pre-opened fd for
> qemu:///system, since I don't know if it would be acceptable to chown
> qemu:qemu on the render node, but maybe we use setfacl instead.
chown isn't a good idea I think. But doesn't libvirt use setfacl anyway
for similar things (e.g. /dev/bus/usb/... for usb pass-through)?
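
For reference, the setfacl approach would look roughly like this,
assuming qemu runs as user 'qemu' and picked renderD128 (a sketch of the
idea, not what libvirt actually executes):

    # grant the qemu user access without chown'ing the node
    setfacl -m u:qemu:rw /dev/dri/renderD128
    # check the result; an extra ACL entry like this is also what the
    # '+' in the ls -lZ output above indicates
    getfacl /dev/dri/renderD128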
> I certainly agree with that at least WRT UI tools... after playing with
> this stuff a bit I won't be adding a UI clicky 'enable gl' option in
> virt-manager in the short term, there are just too many operational
> caveats. But advertisement in the form of blog posts with all the
> caveats listed will probably save us bug reports, and maybe a command
> line virt-xml one liner to turn it on for an existing VM (XML sketch
> below). And of course have virt-manager work with it correctly on the
> viewer side, patches in git and fedora build coming soon.
Agree, we should first get things running smoothly before adding a big
"enable gl" switch in the UI.
cheers,
Gerd