[libvirt] [PATCH 00/12] Autoselect a DRM node for egl-headless and add it to cmdline

Fixes https://bugzilla.redhat.com/show_bug.cgi?id=1628892.

The problem is that we didn't put the DRI device into the namespace for QEMU
to access, but that was only a part of the issue. The other part of the issue
is that QEMU doesn't support specifying 'rendernode' for egl-headless yet
(some patches to solve this are already upstream for 3.1, some are still
waiting to be merged). Instead, QEMU's been autoselecting the DRI device on
its own. There's no compelling reason why libvirt shouldn't do that instead,
thereby preventing any permission-related issues.

Unlike for SPICE though, I deliberately didn't add an XML attribute for users
to select the rendernode for egl-headless, because:

a) most of the time, users really don't care about which DRM node will be
   used and libvirt will most probably make a good decision

b) egl-headless is only useful until we have remote OpenGL acceleration
   support within SPICE

c) for SPICE (or for SDL for that matter at some point), the rendernode is
   specified as part of the <gl> subelement which says "if enabled, use OpenGL
   acceleration", but egl-headless graphics type essentially serves the same
   purpose, it's like having <gl enabled='yes'/> for SPICE, thus having a <gl>
   subelement for egl-headless type is rather confusing

Erik Skultety (12):
  util: Introduce virHostGetDRMRenderNode helper
  qemu: command: spice: Pick the first available DRM render node
  qemu: caps: Start probing for egl-headless display type
  qemu: caps: Introduce QEMU_EGL_HEADLESS_RENDERNODE capability
  qemu: command: Introduce qemuBuildGraphicsEGLHeadlessCommandLine helper
  conf: Add egl-headless to virDomainGraphicsDef union
  qemu: domain: Put the egl-headless' rendernode device into the namespace
  qemu: cgroup: Add the DRI device to the cgroup list for egl-headless too
  command: Put the 'rendernode' option onto egl-headless graphics cmdline
  security: dac: Relabel the DRI render device for egl-headless too
  tests: Add a test case for the egl-headless' rendernode option
  docs: Provide a news update for libvirt being able to pick a DRI device

 docs/news.xml                                 | 13 ++++
 src/conf/domain_conf.h                        |  3 +
 src/libvirt_private.syms                      |  1 +
 src/qemu/qemu_capabilities.c                  | 17 +++--
 src/qemu/qemu_capabilities.h                  |  1 +
 src/qemu/qemu_cgroup.c                        | 15 ++--
 src/qemu/qemu_command.c                       | 72 +++++++++++++++----
 src/qemu/qemu_domain.c                        | 15 ++--
 src/security/security_dac.c                   | 21 +++---
 src/util/virutil.c                            | 53 ++++++++++++++
 src/util/virutil.h                            |  2 +
 ...cs-egl-headless-rendernode-autoselect.args | 26 +++++++
 ...ics-egl-headless-rendernode-autoselect.xml |  1 +
 tests/qemuxml2argvmock.c                      |  9 +++
 tests/qemuxml2argvtest.c                      |  4 ++
 15 files changed, 220 insertions(+), 33 deletions(-)
 create mode 100644 tests/qemuxml2argvdata/graphics-egl-headless-rendernode-autoselect.args
 create mode 120000 tests/qemuxml2argvdata/graphics-egl-headless-rendernode-autoselect.xml

--
2.19.1

This is the first step towards libvirt picking the first available render
node instead of QEMU. It also makes sense for us to be able to do that, since
we allow specifying the node directly for SPICE, so if there's no render node
specified by the user, we should pick the first available one. The algorithm
used for that is essentially the same as the one QEMU uses.

Signed-off-by: Erik Skultety <eskultet@redhat.com>
---
 src/libvirt_private.syms |  1 +
 src/util/virutil.c       | 53 ++++++++++++++++++++++++++++++++++++++++
 src/util/virutil.h       |  2 ++
 3 files changed, 56 insertions(+)

diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms
index 8889aaa379..f3a615595c 100644
--- a/src/libvirt_private.syms
+++ b/src/libvirt_private.syms
@@ -3150,6 +3150,7 @@ virGetUserName;
 virGetUserRuntimeDirectory;
 virGetUserShell;
 virHexToBin;
+virHostGetDRMRenderNode;
 virHostHasIOMMU;
 virIndexToDiskName;
 virIsDevMapperDevice;
diff --git a/src/util/virutil.c b/src/util/virutil.c
index 974cffc2ee..948c082f37 100644
--- a/src/util/virutil.c
+++ b/src/util/virutil.c
@@ -2145,3 +2145,56 @@ virHostHasIOMMU(void)
     VIR_DIR_CLOSE(iommuDir);
     return ret;
 }
+
+
+/**
+ * virHostGetDRMRenderNode:
+ *
+ * Picks the first DRM render node available. Missing DRI or missing DRM render
+ * nodes in the system results in an error.
+ *
+ * Returns an absolute path to the first render node available or NULL in case
+ * of an error with the error being reported.
+ * Caller is responsible for freeing the result string.
+ *
+ */
+char *
+virHostGetDRMRenderNode(void)
+{
+    char *ret = NULL;
+    DIR *driDir = NULL;
+    const char *driPath = "/dev/dri";
+    struct dirent *ent = NULL;
+    int dirErr = 0;
+    bool have_rendernode = false;
+
+    if (virDirOpen(&driDir, driPath) < 0)
+        return NULL;
+
+    while ((dirErr = virDirRead(driDir, &ent, driPath)) > 0) {
+        if (ent->d_type != DT_CHR)
+            continue;
+
+        if (STREQLEN(ent->d_name, "renderD", 7)) {
+            have_rendernode = true;
+            break;
+        }
+    }
+
+    if (dirErr < 0)
+        goto cleanup;
+
+    /* even if /dev/dri exists, there might be no renderDX nodes available */
+    if (!have_rendernode) {
+        virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+                       _("No DRM render nodes available"));
+        goto cleanup;
+    }
+
+    if (virAsprintf(&ret, "%s/%s", driPath, ent->d_name) < 0)
+        goto cleanup;
+
+ cleanup:
+    VIR_DIR_CLOSE(driDir);
+    return ret;
+}
diff --git a/src/util/virutil.h b/src/util/virutil.h
index e0ab0da0f2..89bd21b148 100644
--- a/src/util/virutil.h
+++ b/src/util/virutil.h
@@ -222,6 +222,8 @@ unsigned long long virMemoryMaxValue(bool ulong) ATTRIBUTE_NOINLINE;
 
 bool virHostHasIOMMU(void);
 
+char *virHostGetDRMRenderNode(void);
+
 /**
  * VIR_ASSIGN_IS_OVERFLOW:
  * @rvalue: value that is checked (evaluated twice)
--
2.19.1
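(For reference, a minimal sketch of how a caller is expected to consume the
helper, based only on the documented contract above; the surrounding code is
illustrative and not part of this series:)

    char *rendernode = NULL;

    /* the helper reports its own error and returns NULL on failure;
     * the caller owns the returned path and must free it */
    if (!(rendernode = virHostGetDRMRenderNode()))
        return -1;

    /* ... use the path, e.g. format it onto the QEMU cmdline ... */
    VIR_FREE(rendernode);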

On Thu, Nov 22, 2018 at 05:35:59PM +0100, Erik Skultety wrote:
+char *
+virHostGetDRMRenderNode(void)
+{
+    char *ret = NULL;
+    DIR *driDir = NULL;
+    const char *driPath = "/dev/dri";
+    struct dirent *ent = NULL;
+    int dirErr = 0;
+    bool have_rendernode = false;
+
+    if (virDirOpen(&driDir, driPath) < 0)
+        return NULL;
+
+    while ((dirErr = virDirRead(driDir, &ent, driPath)) > 0) {
+        if (ent->d_type != DT_CHR)
+            continue;
+
+        if (STREQLEN(ent->d_name, "renderD", 7)) {
aka STRPREFIX

Jano
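(For context, a sketch of the suggested simplification; this assumes STRPREFIX
is libvirt's prefix-matching helper, which checks whether a string starts with
the given literal so the hard-coded length can be dropped:)

        /* equivalent to STREQLEN(ent->d_name, "renderD", 7), but without
         * the explicit prefix length */
        if (STRPREFIX(ent->d_name, "renderD")) {
            have_rendernode = true;
            break;
        }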

Up until now, we formatted 'rendernode=' onto the QEMU cmdline only if the
user specified it in the XML; otherwise we let QEMU do it for us. This causes
permission issues because by default the /dev/dri/renderDX permissions are as
follows:

crw-rw----. 1 root video

There's literally no reason why it shouldn't be libvirt picking the DRM render
node instead of QEMU; that way (and because we're using namespaces by
default), we can safely relabel the device within the namespace.

Signed-off-by: Erik Skultety <eskultet@redhat.com>
---
 src/qemu/qemu_command.c | 25 +++++++++++++++----------
 1 file changed, 15 insertions(+), 10 deletions(-)

diff --git a/src/qemu/qemu_command.c b/src/qemu/qemu_command.c
index 23a6661c10..f6bee10d5c 100644
--- a/src/qemu/qemu_command.c
+++ b/src/qemu/qemu_command.c
@@ -8235,6 +8235,8 @@ qemuBuildGraphicsSPICECommandLine(virQEMUDriverConfigPtr cfg,
     }
 
     if (graphics->data.spice.gl == VIR_TRISTATE_BOOL_YES) {
+        char **rendernode = &graphics->data.spice.rendernode;
+
         if (!virQEMUCapsGet(qemuCaps, QEMU_CAPS_SPICE_GL)) {
             virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
                            _("This QEMU doesn't support spice OpenGL"));
@@ -8246,17 +8248,20 @@ qemuBuildGraphicsSPICECommandLine(virQEMUDriverConfigPtr cfg,
         virBufferAsprintf(&opt, "gl=%s,",
                           virTristateSwitchTypeToString(graphics->data.spice.gl));
 
-        if (graphics->data.spice.rendernode) {
-            if (!virQEMUCapsGet(qemuCaps, QEMU_CAPS_SPICE_RENDERNODE)) {
-                virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
-                               _("This QEMU doesn't support spice OpenGL rendernode"));
-                goto error;
-            }
-
-            virBufferAddLit(&opt, "rendernode=");
-            virQEMUBuildBufferEscapeComma(&opt, graphics->data.spice.rendernode);
-            virBufferAddLit(&opt, ",");
+        if (!virQEMUCapsGet(qemuCaps, QEMU_CAPS_SPICE_RENDERNODE)) {
+            virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
+                           _("This QEMU doesn't support spice OpenGL rendernode"));
+            goto error;
         }
+
+        /* pick the first DRM render node if none was specified */
+        if (!*rendernode &&
+            !(*rendernode = virHostGetDRMRenderNode()))
+            goto error;
+
+        virBufferAddLit(&opt, "rendernode=");
+        virQEMUBuildBufferEscapeComma(&opt, graphics->data.spice.rendernode);
+        virBufferAddLit(&opt, ",");
     }
 
     if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_SEAMLESS_MIGRATION)) {
--
2.19.1

[I see you're taking the Peter approach to prefixes]

On Thu, Nov 22, 2018 at 05:36:00PM +0100, Erik Skultety wrote:
Up until now, we formatted 'rendernode=' onto the QEMU cmdline only if the user specified it in the XML; otherwise we let QEMU do it for us. This causes permission issues because by default the /dev/dri/renderDX permissions are as follows:
crw-rw----. 1 root video
There's literally no reason why it shouldn't be libvirt picking the DRM render node instead of QEMU; that way (and because we're using namespaces by default), we can safely relabel the device within the namespace.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
---
 src/qemu/qemu_command.c | 25 +++++++++++++++----------
 1 file changed, 15 insertions(+), 10 deletions(-)
diff --git a/src/qemu/qemu_command.c b/src/qemu/qemu_command.c
index 23a6661c10..f6bee10d5c 100644
--- a/src/qemu/qemu_command.c
+++ b/src/qemu/qemu_command.c
@@ -8235,6 +8235,8 @@ qemuBuildGraphicsSPICECommandLine(virQEMUDriverConfigPtr cfg,
     }
     if (graphics->data.spice.gl == VIR_TRISTATE_BOOL_YES) {
+        char **rendernode = &graphics->data.spice.rendernode;
+
         if (!virQEMUCapsGet(qemuCaps, QEMU_CAPS_SPICE_GL)) {
             virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
                            _("This QEMU doesn't support spice OpenGL"));
@@ -8246,17 +8248,20 @@ qemuBuildGraphicsSPICECommandLine(virQEMUDriverConfigPtr cfg,
         virBufferAsprintf(&opt, "gl=%s,",
                           virTristateSwitchTypeToString(graphics->data.spice.gl));
-        if (graphics->data.spice.rendernode) {
-            if (!virQEMUCapsGet(qemuCaps, QEMU_CAPS_SPICE_RENDERNODE)) {
-                virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
-                               _("This QEMU doesn't support spice OpenGL rendernode"));
-                goto error;
-            }
-
-            virBufferAddLit(&opt, "rendernode=");
-            virQEMUBuildBufferEscapeComma(&opt, graphics->data.spice.rendernode);
-            virBufferAddLit(&opt, ",");
+        if (!virQEMUCapsGet(qemuCaps, QEMU_CAPS_SPICE_RENDERNODE)) {
+            virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
+                           _("This QEMU doesn't support spice OpenGL rendernode"));
+            goto error;
         }
+
+        /* pick the first DRM render node if none was specified */
+        if (!*rendernode &&
+            !(*rendernode = virHostGetDRMRenderNode()))
Ideally qemuBuild*CommandLine would not be affected by host state.
qemuProcessPrepareDomain is the place for live XML modifications.

Jano
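(To illustrate the suggestion, a rough, hypothetical sketch of how the node
selection could move into qemuProcessPrepareDomain() so that command-line
generation stays free of host state; the helper name below is made up, only
virHostGetDRMRenderNode and the graphics struct fields come from this series:)

static int
qemuProcessPrepareGraphicsRendernode(virDomainGraphicsDefPtr graphics)
{
    /* modify the live definition up front, before the cmdline is built */
    if (graphics->type == VIR_DOMAIN_GRAPHICS_TYPE_SPICE &&
        graphics->data.spice.gl == VIR_TRISTATE_BOOL_YES &&
        !graphics->data.spice.rendernode) {
        if (!(graphics->data.spice.rendernode = virHostGetDRMRenderNode()))
            return -1;
    }

    return 0;
}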

QEMU 3.1 introduced the 'query-display-options' QMP command which in itself
isn't of any use to us since it can only provide information during runtime
as the information is based on what appears on the cmdline. However, just
having the command in place allows us to introspect the @DisplayOptions data
type in the QAPI schema.

This patch implements the proper way of checking for the 'egl-headless'
display capability. Unfortunately, we can't get rid of the old code which set
the capability based on a specific QEMU (2.10) version just yet as that would
break backwards compatibility.

Signed-off-by: Erik Skultety <eskultet@redhat.com>
---
 src/qemu/qemu_capabilities.c | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c
index fde27010e4..90b76db034 100644
--- a/src/qemu/qemu_capabilities.c
+++ b/src/qemu/qemu_capabilities.c
@@ -1248,6 +1248,7 @@ static struct virQEMUCapsStringFlags virQEMUCapsQMPSchemaQueries[] = {
     { "screendump/arg-type/device", QEMU_CAPS_SCREENDUMP_DEVICE },
     { "block-commit/arg-type/*top", QEMU_CAPS_ACTIVE_COMMIT },
     { "query-iothreads/ret-type/poll-max-ns", QEMU_CAPS_IOTHREAD_POLLING },
+    { "query-display-options/ret-type/+egl-headless", QEMU_CAPS_EGL_HEADLESS },
 };
 
 typedef struct _virQEMUCapsObjectTypeProps virQEMUCapsObjectTypeProps;
@@ -4122,11 +4123,6 @@ virQEMUCapsInitQMPMonitor(virQEMUCapsPtr qemuCaps,
         virQEMUCapsSet(qemuCaps, QEMU_CAPS_MACHINE_PSERIES_RESIZE_HPT);
     }
 
-    /* '-display egl-headless' cmdline option is supported since QEMU 2.10, but
-     * there's no way to probe it */
-    if (qemuCaps->version >= 2010000)
-        virQEMUCapsSet(qemuCaps, QEMU_CAPS_EGL_HEADLESS);
-
     /* no way to query for -numa dist */
     if (qemuCaps->version >= 2010000)
         virQEMUCapsSet(qemuCaps, QEMU_CAPS_NUMA_DIST);
@@ -4212,6 +4208,15 @@ virQEMUCapsInitQMPMonitor(virQEMUCapsPtr qemuCaps,
         virQEMUCapsClear(qemuCaps, QEMU_CAPS_SEV_GUEST);
     }
 
+    /* '-display egl-headless' cmdline option is supported since QEMU 2.10, but
+     * until QEMU 3.1 there hasn't been a way to probe it
+     *
+     * NOTE: One day in a future far far away, we can ditch this check
+     */
+    if (!virQEMUCapsGet(qemuCaps, QEMU_CAPS_EGL_HEADLESS) &&
+        qemuCaps->version >= 2010000)
+        virQEMUCapsSet(qemuCaps, QEMU_CAPS_EGL_HEADLESS);
+
     ret = 0;
  cleanup:
     return ret;
--
2.19.1

On Thu, Nov 22, 2018 at 05:36:01PM +0100, Erik Skultety wrote:
QEMU 3.1 introduced the 'query-display-options' QMP command which in itself isn't of any use to us since it can only provide information during runtime as the information is based on what appears on the cmdline. However, just having the command in place allows us to introspect the @DisplayOptions data type in the QAPI schema.
This patch implements the proper way of checking for the 'egl-headless' display capability. Unfortunately, we can't get rid of the old code which set the capability based on a specific QEMU (2.10) version just yet as that would break backwards compatibility.
But we can skip the old code for QEMUs with query-display-options, can't we?
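(A rough sketch of what that could look like; the QEMU_CAPS_QUERY_DISPLAY_OPTIONS
flag name is purely illustrative and not something introduced by this series:)

    /* only fall back to the version-based check when the binary does not
     * expose 'query-display-options' at all, i.e. is older than QEMU 3.1 */
    if (!virQEMUCapsGet(qemuCaps, QEMU_CAPS_QUERY_DISPLAY_OPTIONS) &&
        qemuCaps->version >= 2010000)
        virQEMUCapsSet(qemuCaps, QEMU_CAPS_EGL_HEADLESS);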
Signed-off-by: Erik Skultety <eskultet@redhat.com>
---
 src/qemu/qemu_capabilities.c | 15 ++++++++++-----
Missing test changes.

Jano

On Thu, Nov 22, 2018 at 17:36:01 +0100, Erik Skultety wrote:
QEMU 3.1 introduced the 'query-display-options' QMP command which in itself isn't of any use to us since it can only provide information during runtime as the information is based on what appears on the cmdline. However, just having the command in place allows us to introspect the @DisplayOptions data type in the QAPI schema.
This patch implements the proper way of checking for the 'egl-headless' display capability. Unfortunately, we can't get rid of the old code which set the capability based on a specific QEMU (2.10) version just yet as that would break backwards compatibility.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
---
 src/qemu/qemu_capabilities.c | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)
diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c
index fde27010e4..90b76db034 100644
--- a/src/qemu/qemu_capabilities.c
+++ b/src/qemu/qemu_capabilities.c
@@ -1248,6 +1248,7 @@ static struct virQEMUCapsStringFlags virQEMUCapsQMPSchemaQueries[] = {
     { "screendump/arg-type/device", QEMU_CAPS_SCREENDUMP_DEVICE },
     { "block-commit/arg-type/*top", QEMU_CAPS_ACTIVE_COMMIT },
     { "query-iothreads/ret-type/poll-max-ns", QEMU_CAPS_IOTHREAD_POLLING },
+    { "query-display-options/ret-type/+egl-headless", QEMU_CAPS_EGL_HEADLESS },
 };
 typedef struct _virQEMUCapsObjectTypeProps virQEMUCapsObjectTypeProps;
@@ -4122,11 +4123,6 @@ virQEMUCapsInitQMPMonitor(virQEMUCapsPtr qemuCaps,
         virQEMUCapsSet(qemuCaps, QEMU_CAPS_MACHINE_PSERIES_RESIZE_HPT);
     }
-    /* '-display egl-headless' cmdline option is supported since QEMU 2.10, but
-     * there's no way to probe it */
-    if (qemuCaps->version >= 2010000)
-        virQEMUCapsSet(qemuCaps, QEMU_CAPS_EGL_HEADLESS);
-
     /* no way to query for -numa dist */
     if (qemuCaps->version >= 2010000)
         virQEMUCapsSet(qemuCaps, QEMU_CAPS_NUMA_DIST);
@@ -4212,6 +4208,15 @@ virQEMUCapsInitQMPMonitor(virQEMUCapsPtr qemuCaps,
         virQEMUCapsClear(qemuCaps, QEMU_CAPS_SEV_GUEST);
     }
+    /* '-display egl-headless' cmdline option is supported since QEMU 2.10, but
+     * until QEMU 3.1 there hasn't been a way to probe it
+     *
+     * NOTE: One day in a future far far away, we can ditch this check
+     */
+    if (!virQEMUCapsGet(qemuCaps, QEMU_CAPS_EGL_HEADLESS) &&
+        qemuCaps->version >= 2010000)
+        virQEMUCapsSet(qemuCaps, QEMU_CAPS_EGL_HEADLESS);
I don't see much point in this patch and much less with the comment above. The point where that code can be deleted is when 2.10 becomes unsupported by libvirt. At that point the capability can always be assumed and we don't need to gate it on the presence of that field. This just adds some more code while the net result is the same.

Now that we have QAPI introspection of display types in QEMU upstream, we can
check whether the 'rendernode' option is supported with egl-headless display
type.

Signed-off-by: Erik Skultety <eskultet@redhat.com>
---
 src/qemu/qemu_capabilities.c | 2 ++
 src/qemu/qemu_capabilities.h | 1 +
 2 files changed, 3 insertions(+)

diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c
index 90b76db034..87b0ce4af5 100644
--- a/src/qemu/qemu_capabilities.c
+++ b/src/qemu/qemu_capabilities.c
@@ -515,6 +515,7 @@ VIR_ENUM_IMPL(virQEMUCaps, QEMU_CAPS_LAST,
               /* 320 */
               "memory-backend-memfd.hugetlb",
               "iothread.poll-max-ns",
+              "egl-headless.rendernode"
     );
 
@@ -1249,6 +1250,7 @@ static struct virQEMUCapsStringFlags virQEMUCapsQMPSchemaQueries[] = {
     { "block-commit/arg-type/*top", QEMU_CAPS_ACTIVE_COMMIT },
     { "query-iothreads/ret-type/poll-max-ns", QEMU_CAPS_IOTHREAD_POLLING },
     { "query-display-options/ret-type/+egl-headless", QEMU_CAPS_EGL_HEADLESS },
+    { "query-display-options/ret-type/+egl-headless/rendernode", QEMU_CAPS_EGL_HEADLESS_RENDERNODE },
 };
 
 typedef struct _virQEMUCapsObjectTypeProps virQEMUCapsObjectTypeProps;
diff --git a/src/qemu/qemu_capabilities.h b/src/qemu/qemu_capabilities.h
index c2caaf6fe1..250dc81423 100644
--- a/src/qemu/qemu_capabilities.h
+++ b/src/qemu/qemu_capabilities.h
@@ -499,6 +499,7 @@ typedef enum { /* virQEMUCapsFlags grouping marker for syntax-check */
     /* 320 */
     QEMU_CAPS_OBJECT_MEMORY_MEMFD_HUGETLB, /* -object memory-backend-memfd.hugetlb */
     QEMU_CAPS_IOTHREAD_POLLING, /* -object iothread.poll-max-ns */
+    QEMU_CAPS_EGL_HEADLESS_RENDERNODE, /* -display egl-headless,rendernode= */
 
     QEMU_CAPS_LAST /* this must always be the last item */
 } virQEMUCapsFlags;
--
2.19.1

On Thu, Nov 22, 2018 at 05:36:02PM +0100, Erik Skultety wrote:
Now that we have QAPI introspection of display types in QEMU upstream, we can check whether the 'rendernode' option is supported with egl-headless display type.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
---
 src/qemu/qemu_capabilities.c | 2 ++
 src/qemu/qemu_capabilities.h | 1 +
 2 files changed, 3 insertions(+)
Missing test changes.

Jano

We're going to need a bit more logic for egl-headless down the road, so
prepare a helper just like for the other display types.

Signed-off-by: Erik Skultety <eskultet@redhat.com>
---
 src/qemu/qemu_command.c | 18 ++++++++++++++++--
 1 file changed, 16 insertions(+), 2 deletions(-)

diff --git a/src/qemu/qemu_command.c b/src/qemu/qemu_command.c
index f6bee10d5c..dd2b4fa445 100644
--- a/src/qemu/qemu_command.c
+++ b/src/qemu/qemu_command.c
@@ -8292,6 +8292,19 @@ qemuBuildGraphicsSPICECommandLine(virQEMUDriverConfigPtr cfg,
 }
 
 
+static int
+qemuBuildGraphicsEGLHeadlessCommandLine(virQEMUDriverConfigPtr cfg ATTRIBUTE_UNUSED,
+                                        virCommandPtr cmd,
+                                        virQEMUCapsPtr qemuCaps ATTRIBUTE_UNUSED,
+                                        virDomainGraphicsDefPtr graphics ATTRIBUTE_UNUSED)
+{
+    virCommandAddArg(cmd, "-display");
+    virCommandAddArg(cmd, "egl-headless");
+
+    return 0;
+}
+
+
 static int
 qemuBuildGraphicsCommandLine(virQEMUDriverConfigPtr cfg,
                              virCommandPtr cmd,
@@ -8323,8 +8336,9 @@ qemuBuildGraphicsCommandLine(virQEMUDriverConfigPtr cfg,
         break;
 
     case VIR_DOMAIN_GRAPHICS_TYPE_EGL_HEADLESS:
-        virCommandAddArg(cmd, "-display");
-        virCommandAddArg(cmd, "egl-headless");
+        if (qemuBuildGraphicsEGLHeadlessCommandLine(cfg, cmd,
+                                                    qemuCaps, graphics) < 0)
+            return -1;
 
         break;
 
     case VIR_DOMAIN_GRAPHICS_TYPE_RDP:
--
2.19.1

Since we need to put the rendernode option onto the QEMU cmdline, we need
this union member to retain consistency in how we build the cmdline.

Signed-off-by: Erik Skultety <eskultet@redhat.com>
---
 src/conf/domain_conf.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/src/conf/domain_conf.h b/src/conf/domain_conf.h
index 467785cd83..b425bda00b 100644
--- a/src/conf/domain_conf.h
+++ b/src/conf/domain_conf.h
@@ -1658,6 +1658,9 @@ struct _virDomainGraphicsDef {
             virTristateBool gl;
             char *rendernode;
         } spice;
+        struct {
+            char *rendernode;
+        } egl_headless;
     } data;
     /* nListens, listens, and *port are only useful if type is vnc,
      * rdp, or spice. They've been extracted from the union only to
--
2.19.1

Just like for SPICE, we need to put the DRI device into the namespace,
otherwise it will be left out from the DAC relabeling process.

Signed-off-by: Erik Skultety <eskultet@redhat.com>
---
 src/qemu/qemu_domain.c | 15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index 2f65bbe34e..569b35bcd0 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -11877,11 +11877,18 @@ qemuDomainSetupGraphics(virQEMUDriverConfigPtr cfg ATTRIBUTE_UNUSED,
                         virDomainGraphicsDefPtr gfx,
                         const struct qemuDomainCreateDeviceData *data)
 {
-    const char *rendernode = gfx->data.spice.rendernode;
+    const char *rendernode = NULL;
 
-    if (gfx->type != VIR_DOMAIN_GRAPHICS_TYPE_SPICE ||
-        gfx->data.spice.gl != VIR_TRISTATE_BOOL_YES ||
-        !rendernode)
+    if (gfx->type != VIR_DOMAIN_GRAPHICS_TYPE_SPICE &&
+        gfx->type != VIR_DOMAIN_GRAPHICS_TYPE_EGL_HEADLESS)
+        return 0;
+
+    if (gfx->type == VIR_DOMAIN_GRAPHICS_TYPE_SPICE)
+        rendernode = gfx->data.spice.rendernode;
+    else
+        rendernode = gfx->data.egl_headless.rendernode;
+
+    if (!rendernode)
         return 0;
 
     return qemuDomainCreateDevice(rendernode, data, false);
--
2.19.1

On Thu, Nov 22, 2018 at 05:36:05PM +0100, Erik Skultety wrote:
Just like for SPICE, we need to put the DRI device into the namespace, otherwise it will be left out from the DAC relabeling process.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
---
 src/qemu/qemu_domain.c | 15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)
diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index 2f65bbe34e..569b35bcd0 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -11877,11 +11877,18 @@ qemuDomainSetupGraphics(virQEMUDriverConfigPtr cfg ATTRIBUTE_UNUSED,
                         virDomainGraphicsDefPtr gfx,
                         const struct qemuDomainCreateDeviceData *data)
 {
-    const char *rendernode = gfx->data.spice.rendernode;
+    const char *rendernode = NULL;
-    if (gfx->type != VIR_DOMAIN_GRAPHICS_TYPE_SPICE ||
-        gfx->data.spice.gl != VIR_TRISTATE_BOOL_YES ||
-        !rendernode)
+    if (gfx->type != VIR_DOMAIN_GRAPHICS_TYPE_SPICE &&
+        gfx->type != VIR_DOMAIN_GRAPHICS_TYPE_EGL_HEADLESS)
+        return 0;
+
+    if (gfx->type == VIR_DOMAIN_GRAPHICS_TYPE_SPICE)
+        rendernode = gfx->data.spice.rendernode;
+    else
+        rendernode = gfx->data.egl_headless.rendernode;
+
These changes are repetitive, consider a helper like virDomainInputDefGetPath.

Jano
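(Sketching what such a helper might look like, analogous to
virDomainInputDefGetPath; the function name and placement are assumptions,
not part of this series:)

/* returns the render node configured for graphics types that can use one,
 * or NULL otherwise, so the namespace/cgroup/DAC callers stay identical */
static const char *
virDomainGraphicsDefGetRenderNode(virDomainGraphicsDefPtr gfx)
{
    switch (gfx->type) {
    case VIR_DOMAIN_GRAPHICS_TYPE_SPICE:
        return gfx->data.spice.rendernode;
    case VIR_DOMAIN_GRAPHICS_TYPE_EGL_HEADLESS:
        return gfx->data.egl_headless.rendernode;
    default:
        return NULL;
    }
}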

Just like for SPICE, we need to put the render node DRI device into the
cgroup list so that users don't need to add it manually via the qemu.conf
file.

Signed-off-by: Erik Skultety <eskultet@redhat.com>
---
 src/qemu/qemu_cgroup.c | 15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/src/qemu/qemu_cgroup.c b/src/qemu/qemu_cgroup.c
index 43e17d786e..1698f401e9 100644
--- a/src/qemu/qemu_cgroup.c
+++ b/src/qemu/qemu_cgroup.c
@@ -491,15 +491,22 @@ qemuSetupGraphicsCgroup(virDomainObjPtr vm,
                         virDomainGraphicsDefPtr gfx)
 {
     qemuDomainObjPrivatePtr priv = vm->privateData;
-    const char *rendernode = gfx->data.spice.rendernode;
+    const char *rendernode = NULL;
     int ret;
 
     if (!virCgroupHasController(priv->cgroup, VIR_CGROUP_CONTROLLER_DEVICES))
         return 0;
 
-    if (gfx->type != VIR_DOMAIN_GRAPHICS_TYPE_SPICE ||
-        gfx->data.spice.gl != VIR_TRISTATE_BOOL_YES ||
-        !rendernode)
+    if (gfx->type != VIR_DOMAIN_GRAPHICS_TYPE_SPICE &&
+        gfx->type != VIR_DOMAIN_GRAPHICS_TYPE_EGL_HEADLESS)
+        return 0;
+
+    if (gfx->type == VIR_DOMAIN_GRAPHICS_TYPE_SPICE)
+        rendernode = gfx->data.spice.rendernode;
+    else
+        rendernode = gfx->data.egl_headless.rendernode;
+
+    if (!rendernode)
         return 0;
 
     ret = virCgroupAllowDevicePath(priv->cgroup, rendernode,
--
2.19.1

If QEMU actually supports the 'rendernode' option, we need to pick the first
available render node before formatting it onto the cmdline.

Signed-off-by: Erik Skultety <eskultet@redhat.com>
---
 src/qemu/qemu_command.c | 37 +++++++++++++++++++++++++++++++++----
 1 file changed, 33 insertions(+), 4 deletions(-)

diff --git a/src/qemu/qemu_command.c b/src/qemu/qemu_command.c
index dd2b4fa445..63b8f81835 100644
--- a/src/qemu/qemu_command.c
+++ b/src/qemu/qemu_command.c
@@ -8295,13 +8295,42 @@ qemuBuildGraphicsSPICECommandLine(virQEMUDriverConfigPtr cfg,
 static int
 qemuBuildGraphicsEGLHeadlessCommandLine(virQEMUDriverConfigPtr cfg ATTRIBUTE_UNUSED,
                                         virCommandPtr cmd,
-                                        virQEMUCapsPtr qemuCaps ATTRIBUTE_UNUSED,
-                                        virDomainGraphicsDefPtr graphics ATTRIBUTE_UNUSED)
+                                        virQEMUCapsPtr qemuCaps,
+                                        virDomainGraphicsDefPtr graphics)
 {
+    int ret = -1;
+    virBuffer opt = VIR_BUFFER_INITIALIZER;
+
+    /* Until QEMU 3.1, there wasn't any support for the 'rendernode' option on
+     * the cmdline, so don't bother picking one, the user is responsible for
+     * ensuring the correct permissions on the DRI devices.
+     */
+    if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_EGL_HEADLESS_RENDERNODE)) {
+
+        /* we must populate @def so we actually have something to relabel */
+        graphics->data.egl_headless.rendernode = virHostGetDRMRenderNode();
+        if (!graphics->data.egl_headless.rendernode)
+            return -1;
+    }
+
+    virBufferAddLit(&opt, "egl-headless");
+
+    if (graphics->data.egl_headless.rendernode) {
+        virBufferAddLit(&opt, ",rendernode=");
+        virQEMUBuildBufferEscapeComma(&opt,
+                                      graphics->data.egl_headless.rendernode);
+    }
+
+    if (virBufferCheckError(&opt) < 0)
+        goto cleanup;
+
     virCommandAddArg(cmd, "-display");
-    virCommandAddArg(cmd, "egl-headless");
+    virCommandAddArgBuffer(cmd, &opt);
 
-    return 0;
+    ret = 0;
+ cleanup:
+    virBufferFreeAndReset(&opt);
+    return ret;
 }
--
2.19.1

On Thu, Nov 22, 2018 at 05:36:07PM +0100, Erik Skultety wrote:
+
+    /* Until QEMU 3.1, there wasn't any support for the 'rendernode' option on
+     * the cmdline, so don't bother picking one, the user is responsible for
+     * ensuring the correct permissions on the DRI devices.
+     */
+    if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_EGL_HEADLESS_RENDERNODE)) {
+
+        /* we must populate @def so we actually have something to relabel */
+        graphics->data.egl_headless.rendernode = virHostGetDRMRenderNode();
qemuProcessPrepareDomain

Jano

Just like for SPICE, we need to change the permissions on the DRI device used
as the @rendernode for the egl-headless graphics type.

Signed-off-by: Erik Skultety <eskultet@redhat.com>
---
 src/security/security_dac.c | 21 +++++++++++++--------
 1 file changed, 13 insertions(+), 8 deletions(-)

diff --git a/src/security/security_dac.c b/src/security/security_dac.c
index 6b64d2c07a..646b3d4745 100644
--- a/src/security/security_dac.c
+++ b/src/security/security_dac.c
@@ -1492,11 +1492,17 @@ virSecurityDACSetGraphicsLabel(virSecurityManagerPtr mgr,
                                virDomainGraphicsDefPtr gfx)
 {
+    const char *rendernode = NULL;
     virSecurityDACDataPtr priv = virSecurityManagerGetPrivateData(mgr);
     virSecurityLabelDefPtr seclabel;
     uid_t user;
     gid_t group;
 
+    /* So far, only SPICE and EGL headless support rendering on DRM nodes */
+    if (gfx->type != VIR_DOMAIN_GRAPHICS_TYPE_SPICE &&
+        gfx->type != VIR_DOMAIN_GRAPHICS_TYPE_EGL_HEADLESS)
+        return 0;
+
     /* Skip chowning the shared render file if namespaces are disabled */
     if (!priv->mountNamespace)
         return 0;
@@ -1508,14 +1514,13 @@ virSecurityDACSetGraphicsLabel(virSecurityManagerPtr mgr,
     if (virSecurityDACGetIds(seclabel, priv, &user, &group, NULL, NULL) < 0)
         return -1;
 
-    if (gfx->type == VIR_DOMAIN_GRAPHICS_TYPE_SPICE &&
-        gfx->data.spice.gl == VIR_TRISTATE_BOOL_YES &&
-        gfx->data.spice.rendernode) {
-        if (virSecurityDACSetOwnership(mgr, NULL,
-                                       gfx->data.spice.rendernode,
-                                       user, group) < 0)
-            return -1;
-    }
+    if (gfx->type == VIR_DOMAIN_GRAPHICS_TYPE_EGL_HEADLESS)
+        rendernode = gfx->data.egl_headless.rendernode;
+    else if (gfx->data.spice.gl == VIR_TRISTATE_BOOL_YES)
+        rendernode = gfx->data.spice.rendernode;
+
+    if (virSecurityDACSetOwnership(mgr, NULL, rendernode, user, group) < 0)
+        return -1;
 
     return 0;
 }
--
2.19.1

We don't need a new input source (hence the symlink); we just need a new
.args output, since the functionality is determined by a QEMU capability.

Signed-off-by: Erik Skultety <eskultet@redhat.com>
---
 src/util/virutil.h                            |  2 +-
 ...cs-egl-headless-rendernode-autoselect.args | 26 +++++++
 ...ics-egl-headless-rendernode-autoselect.xml |  1 +
 tests/qemuxml2argvmock.c                      |  9 +++
 tests/qemuxml2argvtest.c                      |  4 ++
 5 files changed, 41 insertions(+), 1 deletion(-)
 create mode 100644 tests/qemuxml2argvdata/graphics-egl-headless-rendernode-autoselect.args
 create mode 120000 tests/qemuxml2argvdata/graphics-egl-headless-rendernode-autoselect.xml

diff --git a/src/util/virutil.h b/src/util/virutil.h
index 89bd21b148..588d779d10 100644
--- a/src/util/virutil.h
+++ b/src/util/virutil.h
@@ -222,7 +222,7 @@ unsigned long long virMemoryMaxValue(bool ulong) ATTRIBUTE_NOINLINE;
 
 bool virHostHasIOMMU(void);
 
-char *virHostGetDRMRenderNode(void);
+char *virHostGetDRMRenderNode(void) ATTRIBUTE_NOINLINE;
 
 /**
  * VIR_ASSIGN_IS_OVERFLOW:
diff --git a/tests/qemuxml2argvdata/graphics-egl-headless-rendernode-autoselect.args b/tests/qemuxml2argvdata/graphics-egl-headless-rendernode-autoselect.args
new file mode 100644
index 0000000000..84633abca8
--- /dev/null
+++ b/tests/qemuxml2argvdata/graphics-egl-headless-rendernode-autoselect.args
@@ -0,0 +1,26 @@
+LC_ALL=C \
+PATH=/bin \
+HOME=/home/test \
+USER=test \
+LOGNAME=test \
+/usr/bin/qemu-system-i686 \
+-name QEMUGuest1 \
+-S \
+-machine pc,accel=tcg,usb=off,dump-guest-core=off \
+-m 214 \
+-smp 1,sockets=1,cores=1,threads=1 \
+-uuid c7a5fdbd-edaf-9455-926a-d65c16db1809 \
+-no-user-config \
+-nodefaults \
+-chardev socket,id=charmonitor,path=/tmp/lib/domain--1-QEMUGuest1/monitor.sock,\
+server,nowait \
+-mon chardev=charmonitor,id=monitor,mode=control \
+-rtc base=utc \
+-no-shutdown \
+-no-acpi \
+-usb \
+-drive file=/dev/HostVG/QEMUGuest1,format=raw,if=none,id=drive-ide0-0-0 \
+-device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,\
+bootindex=1 \
+-display egl-headless,rendernode=/dev/dri/foo \
+-vga cirrus
diff --git a/tests/qemuxml2argvdata/graphics-egl-headless-rendernode-autoselect.xml b/tests/qemuxml2argvdata/graphics-egl-headless-rendernode-autoselect.xml
new file mode 120000
index 0000000000..065e77919e
--- /dev/null
+++ b/tests/qemuxml2argvdata/graphics-egl-headless-rendernode-autoselect.xml
@@ -0,0 +1 @@
+graphics-egl-headless.xml
\ No newline at end of file
diff --git a/tests/qemuxml2argvmock.c b/tests/qemuxml2argvmock.c
index 79152d928e..a64cd955c4 100644
--- a/tests/qemuxml2argvmock.c
+++ b/tests/qemuxml2argvmock.c
@@ -184,6 +184,15 @@ virNetDevRunEthernetScript(const char *ifname ATTRIBUTE_UNUSED,
     return 0;
 }
 
+char *
+virHostGetDRMRenderNode(void)
+{
+    char *dst = NULL;
+
+    ignore_value(VIR_STRDUP(dst, "/dev/dri/foo"));
+    return dst;
+}
+
 static void (*real_virCommandPassFD)(virCommandPtr cmd,
                                      int fd,
                                      unsigned int flags);
 
 static const int testCommandPassSafeFDs[] = { 1730, 1731 };
diff --git a/tests/qemuxml2argvtest.c b/tests/qemuxml2argvtest.c
index 95429b3ae7..856d374902 100644
--- a/tests/qemuxml2argvtest.c
+++ b/tests/qemuxml2argvtest.c
@@ -1246,6 +1246,10 @@ mymain(void)
     DO_TEST("graphics-egl-headless",
             QEMU_CAPS_EGL_HEADLESS,
             QEMU_CAPS_DEVICE_CIRRUS_VGA);
+    DO_TEST("graphics-egl-headless-rendernode-autoselect",
+            QEMU_CAPS_EGL_HEADLESS,
+            QEMU_CAPS_EGL_HEADLESS_RENDERNODE,
+            QEMU_CAPS_DEVICE_CIRRUS_VGA);
     DO_TEST("graphics-vnc", QEMU_CAPS_VNC, QEMU_CAPS_DEVICE_CIRRUS_VGA);
     DO_TEST("graphics-vnc-socket", QEMU_CAPS_VNC,
             QEMU_CAPS_DEVICE_CIRRUS_VGA);
--
2.19.1

Signed-off-by: Erik Skultety <eskultet@redhat.com>
---
 docs/news.xml | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/docs/news.xml b/docs/news.xml
index 4406aeb775..0a98e5b963 100644
--- a/docs/news.xml
+++ b/docs/news.xml
@@ -70,6 +70,19 @@
       </change>
     </section>
     <section title="Improvements">
+      <change>
+        <summary>
+          Start selecting the first available DRI device for OpenGL operations
+        </summary>
+        <description>
+          If OpenGL support is needed (either with SPICE gl enabled or with
+          egl-headless), libvirt is now able to pick the first available DRI
+          device for the job. At the same time, this improvement is also a
+          bugfix as it prevents permission-related issues with regards to our
+          mount namespaces and the default DRI render node's permissions which
+          would normally prevent QEMU from accessing such a device.
+        </description>
+      </change>
     </section>
     <section title="Bug fixes">
       <change>
--
2.19.1

On Thu, Nov 22, 2018 at 05:35:58PM +0100, Erik Skultety wrote:
Fixes https://bugzilla.redhat.com/show_bug.cgi?id=1628892.

The problem is that we didn't put the DRI device into the namespace for QEMU to access, but that was only a part of the issue. The other part of the issue is that QEMU doesn't support specifying 'rendernode' for egl-headless yet (some patches to solve this are already upstream for 3.1, some are still waiting to be merged). Instead, QEMU's been autoselecting the DRI device on its own. There's no compelling reason why libvirt shouldn't do that instead, thereby preventing any permission-related issues.
Unlike for SPICE though, I deliberately didn't add an XML attribute for users to select the rendernode for egl-headless, because:
a) most of the time, users really don't care about which DRM node will be used and libvirt will most probably make a good decision
Picking a default does not conflict with displaying it in live XML.
b) egl-headless is only useful until we have remote OpenGL acceleration support within SPICE
c) for SPICE (or for SDL for that matter at some point), the rendernode is specified as part of the <gl> subelement which says "if enabled, use OpenGL acceleration", but egl-headless graphics type essentially serves the same purpose, it's like having <gl enabled='yes'/> for SPICE, thus having a <gl> subelement for egl-headless type is rather confusing
Could be just <gl rendernode=''/>

Even if its usefulness is short-lived, not exposing this knob of the domain
while we do expose it for SPICE feels wrong.

Jano
participants (3):
  - Erik Skultety
  - Ján Tomko
  - Peter Krempa