On Thu, Jun 28, 2018 at 06:34:56PM -0400, John Ferlan wrote:
On 06/27/2018 09:34 AM, Erik Skultety wrote:
> Since QEMU 2.10, there's a new cmdline option '-display egl-headless'
> which enables OpenGL support for cases where we can't render on a local
> display (i.e. <listen> is set to either 'address' or 'network').
> This is to work around the current restriction on the local node with
> native OpenGL SPICE support. However, compared to native support,
> egl-headless has overhead, because the rendered framebuffer has to be
> copied back to QEMU's display area which is then accessed by either
> SPICE or VNC and sent to the remote side.
A blank line here would help readability.
> This patch enables this configuration feature by introducing a new
> SPICE-only attribute 'native' which helps libvirt to determine whether
> to use '-spice gl=on' or '-display egl-headless' on the cmdline. If the
> attribute wasn't specified but OpenGL is to be enabled, then given the
> nature of the <listen> element, libvirt will determine the default
> value.
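(Aside for anyone following along: concretely, the two configurations end up as roughly the following QEMU cmdlines. This is only a sketch - the socket path, port, and rendernode are illustrative, and exact option spellings vary by QEMU version:

```shell
# Local listen (<listen type='socket'/> or 'none'): native SPICE OpenGL
qemu-system-x86_64 ... -spice unix,addr=/tmp/vm.spice,gl=on,rendernode=/dev/dri/renderD128

# Remote listen (<listen type='address'/> or 'network'): headless EGL rendering,
# with the framebuffer copied back for SPICE/VNC to transport
qemu-system-x86_64 ... -spice port=5901,addr=0.0.0.0 -display egl-headless
```
)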
>
> Signed-off-by: Erik Skultety <eskultet(a)redhat.com>
> ---
> docs/formatdomain.html.in | 8 ++++
> docs/schemas/domaincommon.rng | 16 +++++--
> src/conf/domain_conf.c | 49 ++++++++++++++++++++--
> src/conf/domain_conf.h | 1 +
> src/qemu/qemu_command.c | 43 ++++++++++++-------
> .../qemuxml2argvdata/graphics-spice-gl-native.args | 26 ++++++++++++
> .../qemuxml2argvdata/graphics-spice-gl-native.xml | 25 +++++++++++
> .../graphics-spice-gl-non-native.args | 27 ++++++++++++
> .../graphics-spice-gl-non-native.xml | 24 +++++++++++
> tests/qemuxml2argvtest.c | 8 ++++
> .../video-virtio-gpu-spice-gl.xml | 2 +-
> 11 files changed, 206 insertions(+), 23 deletions(-)
> create mode 100644 tests/qemuxml2argvdata/graphics-spice-gl-native.args
> create mode 100644 tests/qemuxml2argvdata/graphics-spice-gl-native.xml
> create mode 100644 tests/qemuxml2argvdata/graphics-spice-gl-non-native.args
> create mode 100644 tests/qemuxml2argvdata/graphics-spice-gl-non-native.xml
>
> diff --git a/docs/formatdomain.html.in b/docs/formatdomain.html.in
> index aa0d6b26df..c6ebd39bd9 100644
> --- a/docs/formatdomain.html.in
> +++ b/docs/formatdomain.html.in
> @@ -6461,6 +6461,14 @@ qemu-kvm -net nic,model=? /dev/null
> You can enable or disable OpenGL support explicitly with
> the <code>gl</code> element, by setting the <code>enable</code>
> property. (QEMU only, <span class="since">since 1.3.3</span>).
> + Additionally, attribute <code>native</code>
> + (<span class="since">Since 4.6.0</span>) can be used to specify
> + whether native OpenGL support should be used with SPICE or
> + egl-headless should be used instead. Note that the native OpenGL
> + support is limited to the local setup (<code>listen</code> is
> + either 'none' or 'socket') and for remote access egl-headless
> + needs to be used. The supported values for this attribute are 'on'
> + and 'off'.
> </p>
> <p>
> By default, QEMU will pick the first available GPU DRM render node.
> diff --git a/docs/schemas/domaincommon.rng b/docs/schemas/domaincommon.rng
> index 20649c5f6f..4ad53d976b 100644
> --- a/docs/schemas/domaincommon.rng
> +++ b/docs/schemas/domaincommon.rng
> @@ -3066,6 +3066,11 @@
> <attribute name="enable">
> <ref name="virYesNo"/>
> </attribute>
> + <optional>
> + <attribute name="native">
> + <ref name="virYesNo"/>
> + </attribute>
> + </optional>
> <empty/>
> </element>
> </optional>
> @@ -3322,9 +3327,14 @@
> <ref name="absFilePath"/>
> </attribute>
> </optional>
> - <empty/>
> - </element>
> - </optional>
> + <optional>
> + <attribute name="native">
> + <ref name="virYesNo"/>
> + </attribute>
> + </optional>
> + <empty/>
> + </element>
> + </optional>
> </interleave>
> </group>
> <group>
> diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
> index 2ccd9e124f..6d399a198e 100644
> --- a/src/conf/domain_conf.c
> +++ b/src/conf/domain_conf.c
> @@ -4280,6 +4280,7 @@ virDomainDefPostParseGraphics(virDomainDef *def)
> * same. */
> if (graphics->type == VIR_DOMAIN_GRAPHICS_TYPE_SPICE) {
> virDomainGraphicsListenDefPtr glisten = &graphics->listens[0];
> + virDomainGraphicsGLDefPtr gl = graphics->gl;
>
> if (glisten->type == VIR_DOMAIN_GRAPHICS_LISTEN_TYPE_ADDRESS &&
> graphics->data.spice.port == 0 &&
> @@ -4288,6 +4289,28 @@ virDomainDefPostParseGraphics(virDomainDef *def)
> VIR_FREE(glisten->address);
> glisten->type = VIR_DOMAIN_GRAPHICS_LISTEN_TYPE_NONE;
> }
> +
> + /* Next we need to figure out how to properly configure the OpenGL
> + * if that is enabled and the 'native' attribute is missing.
> + * The cases are:
> + * 1) Listen type is either 'socket' or 'none' - SPICE native
> + * OpenGL support (,gl=on) should be used because we're
> + * rendering on a local display.
> + *
> + * 2) Listen is either network or address - SPICE can't use
> + * the native OpenGL support remotely yet, so we use
> + * native='no' and format '-display egl-headless' onto the
> + * cmdline.
> + */
> + if (graphics->gl &&
> + graphics->gl->enable == VIR_TRISTATE_BOOL_YES &&
> + graphics->gl->native == VIR_TRISTATE_BOOL_ABSENT) {
> + gl->native = VIR_TRISTATE_BOOL_NO;
> +
> + if (glisten->type == VIR_DOMAIN_GRAPHICS_LISTEN_TYPE_NONE ||
> + glisten->type == VIR_DOMAIN_GRAPHICS_LISTEN_TYPE_SOCKET)
> + gl->native = VIR_TRISTATE_BOOL_YES;
> + }
So the upside/downside to setting something in the XML that wasn't
previously set by the consumer is that "we're stuck with it" going
forward, right? Additionally, we cannot ascertain whether we set native
to NO or YES or if the consumer did.
Does it matter? If libvirt set it, then we set it in a way that is safe for us, IOW we can ensure the VM will start with that setting. If the consumer set it, then we evaluate the setting and either fail or proceed with starting the VM.
So the "other" side of this is that it's being done in a domain_conf function instead of some qemu device post-parse function since, if I'm reading the RFC comments right, the desire is not to somehow automagically add it, but rather to force some usage.
Correct, although it's not that our "automagic" was wrong (in fact it was safe); it just wasn't very flexible, and 'egl-headless' can find use cases even without mdevs - that was the main driver to expose it.
Still, I wonder if what's been done follows the spirit of Gerd's comment:
"Hmm, I think it would be better to have egl-headless explicitly
configured in the domain xml instead of doing it automagically depending
on configuration."
As I already explained in the cover letter reply, the facts that a standalone graphics type='headless' doesn't make sense, is basically just some kind of OpenGL switch, and can't be combined with some of the other types felt like the wrong way of doing things to me; that's why I merged it with the 'gl' element.
But okay, I'll give it a try on a separate branch and see how 'making it a graphics type' goes. The one thing that won't be necessary or worth doing anymore is separating the 'gl' data into a separate element, but I'm curious how much code will be saved (if any) if done this way.
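For the record, my rough mental sketch of the 'graphics type' shape would be something like the following - purely hypothetical XML, not something this patch accepts:

```xml
<graphics type='spice'>
  <listen type='address' address='0.0.0.0'/>
</graphics>
<!-- hypothetical standalone element enabling headless EGL rendering -->
<graphics type='egl-headless'/>
```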
Perhaps it could be turned into a "headless='yes|no|absent'" attribute instead of native and described as being necessary for specific conditions. Then perhaps a qemu domain validation check to "force" the issue.
> }
> }
> }
> @@ -13573,6 +13596,7 @@ virDomainGraphicsGLDefParseXML(virDomainGraphicsDefPtr def,
> {
> virDomainGraphicsGLDefPtr gl = NULL;
> char *enable = NULL;
> + char *native = NULL;
> int ret = -1;
>
> if (!node)
> @@ -13596,14 +13620,26 @@ virDomainGraphicsGLDefParseXML(virDomainGraphicsDefPtr def,
> goto cleanup;
> }
>
> - if (def->type == VIR_DOMAIN_GRAPHICS_TYPE_SPICE)
> - gl->rendernode = virXMLPropString(node, "rendernode");
> + /* SPICE recognizes a few more attributes */
> + if (def->type == VIR_DOMAIN_GRAPHICS_TYPE_SPICE) {
> + gl->rendernode = virXMLPropString(node, "rendernode");
> +
> + native = virXMLPropString(node, "native");
> + if (native &&
> + (gl->native = virTristateBoolTypeFromString(native)) <= 0) {
> + virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
> + _("unknown value for attribute enable '%s'"),
> + enable);
I believe you wanted s/enable/native/ for 2 instances - copy-paste error.
> + goto cleanup;
> + }
> + }
>
> VIR_STEAL_PTR(def->gl, gl);
>
> ret = 0;
> cleanup:
> VIR_FREE(enable);
> + VIR_FREE(native);
> virDomainGraphicsGLDefFree(gl);
> return ret;
> }
> @@ -26184,8 +26220,13 @@ virDomainGraphicsGLDefFormat(virBufferPtr buf, virDomainGraphicsDefPtr def)
> virBufferAsprintf(buf, "<gl enable='%s'",
> virTristateBoolTypeToString(gl->enable));
>
> - if (def->type == VIR_DOMAIN_GRAPHICS_TYPE_SPICE)
> - virBufferEscapeString(buf, " rendernode='%s'", gl->rendernode);
> + if (gl->enable == VIR_TRISTATE_BOOL_YES) {
Not clear why the YES guard was put in place. Not that "enable=no and rendernode=xxx" means anything, but it would be dropped with this change. Since the rng grammar uses interleave, the order of attributes shouldn't matter, should it?
> + if (def->type == VIR_DOMAIN_GRAPHICS_TYPE_SPICE) {
> + virBufferAsprintf(buf, " native='%s'",
> + virTristateBoolTypeToString(gl->native));
> + virBufferEscapeString(buf, " rendernode='%s'", gl->rendernode);
> + }
> + }
>
> virBufferAddLit(buf, "/>\n");
> }
Do you think we need ABI checks for this? Could be challenging given the post-parse setting of the value.
I don't see an ABI problem here: old libvirt doesn't know what to do with it (and VNC will just ignore the gl element), and a new enough libvirt would ensure that '-spice gl=on' and '-display egl-headless' will not appear on the cmdline together. Everything else has been working for a very long time.
Erik