On Thu, Nov 22, 2018 at 05:35:58PM +0100, Erik Skultety wrote:
> Fixes https://bugzilla.redhat.com/show_bug.cgi?id=1628892.
> The problem is that we didn't put the DRI device into the namespace for QEMU
> to access, but that was only part of the issue. The other part is that QEMU
> doesn't support specifying 'rendernode' for egl-headless yet (some patches to
> solve this are already upstream for 3.1, some are still waiting to be merged).
> Instead, QEMU has been autoselecting the DRI device on its own. There's no
> compelling reason why libvirt shouldn't do that instead, thus preventing any
> permission-related issues.
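
For reference, the autoselection can be as simple as the sketch below (not
the actual code, just the idea: take the first render node under /dev/dri;
the helper name and error handling are made up):

  #define _GNU_SOURCE
  #include <dirent.h>
  #include <stdio.h>
  #include <string.h>

  /* Sketch only: return the path of the first DRM render node
   * (/dev/dri/renderD<n>), or NULL if there is none. */
  static char *
  pickDefaultRenderNode(void)
  {
      DIR *dir = opendir("/dev/dri");
      struct dirent *ent;
      char *path = NULL;

      if (!dir)
          return NULL;

      while ((ent = readdir(dir))) {
          if (strncmp(ent->d_name, "renderD", 7) == 0) {
              if (asprintf(&path, "/dev/dri/%s", ent->d_name) < 0)
                  path = NULL;
              break;
          }
      }

      closedir(dir);
      return path;
  }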
> Unlike for SPICE though, I deliberately didn't add an XML attribute for users
> to select the rendernode for egl-headless, because:
> a) most of the time, users really don't care which DRM node will be used and
> libvirt will most probably make a good decision

Picking a default does not conflict with displaying it in live XML.

> b) egl-headless is only useful until we have remote OpenGL acceleration
> support within SPICE
> c) for SPICE (or for SDL, for that matter, at some point), the rendernode is
> specified as part of the <gl> subelement, which says "if enabled, use OpenGL
> acceleration"; the egl-headless graphics type essentially serves the same
> purpose, it's like having <gl enable='yes'/> for SPICE, so having a <gl>
> subelement for the egl-headless type is rather confusing

Could be just <gl rendernode=''/>
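
Roughly (the rendernode value is made up for illustration):

  <graphics type='egl-headless'>
    <gl rendernode='/dev/dri/renderD128'/>
  </graphics>

which would mirror what we already accept for SPICE:

  <graphics type='spice'>
    <gl enable='yes' rendernode='/dev/dri/renderD128'/>
  </graphics>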

Even if its usefulness is short-lived, not exposing this knob of the domain
XML while we do expose it for SPICE feels wrong.

Jano