On Mon, Jun 04, 2018 at 07:39:00PM -0400, John Ferlan wrote:
On 05/30/2018 09:42 AM, Erik Skultety wrote:
> QEMU introduced a new type of display for mediated devices using
> vfio-pci backend which controls whether a mediated device can be used as
> a native rendering device as an alternative to an emulated video device.
> This patch adds the necessary bits to domain config handling in order to
> expose this feature.
Add blank line between paragraphs
> It's worth noting that even though QEMU supports and defaults to the
> value 'auto', this patch only introduces values 'on' and 'off',
> defaulting to 'off' (for safety), since there's no convenient way for
> libvirt to come up with relevant defaults, given that libvirt doesn't
> know whether the underlying device uses dma-buf or vfio region mapping.
Such a strange feature - it almost seems like QEMU tries to enable it by
default via 'auto', and now it's up to libvirt to say no, set the default
to off, and if someone then turns it on, make sure the "expected"
graphics device is available in order to avoid issues.
QEMU has already realized that and they're trying to change it to 'off' too,
see [1].
Perhaps if you drag in some of the comments from the cover letter it may
clear up a few things... The TL;DR in particular and part of the
paragraph just above it.
Noted, I will do that.
Looking at the QEMU code it seems there is the expectation of
essentially "on" or "off" - auto just seems to be a way to have a
default value and try to force usage of on or off... It seems to me
that vfio_realize will check the value != ON_OFF_AUTO_OFF(2), then do
the probe. The "default" for the property is set as ON_OFF_AUTO_AUTO
(or 0), so perhaps since 2.12 there's always an attempt to do this. Of
course I could be reading things wrong. The assumption in that code
being that the "user" of the qemu api would "know" to add the right
graphics device. Mind boggling.
It doesn't really work reliably even though QEMU has much more info about the
device than libvirt could possibly have, and there's still the fact that you
can't predict the user's intention with the device, so 'auto' is not really
the smartest default option in this case (given the 3rd party vendors and two
different approaches), hence the 'off' default for libvirt; 'auto' would be,
IMHO, too tricky and dangerous in terms of guaranteeing backwards
compatibility.
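
For reference, the flow John describes (probe unless the property is
explicitly 'off') boils down to something like the standalone sketch below;
this is not actual QEMU code, the enum merely mirrors QEMU's OnOffAuto values
and the probe helper is made up for the illustration:

#include <stdbool.h>
#include <stdio.h>

/* Mirrors QEMU's OnOffAuto: AUTO is 0, so it is the implicit default. */
typedef enum {
    ON_OFF_AUTO_AUTO = 0,
    ON_OFF_AUTO_ON   = 1,
    ON_OFF_AUTO_OFF  = 2,
} OnOffAuto;

/* Placeholder for the device display probe (dma-buf vs. vfio region
 * mapping); in real QEMU this depends on the vendor driver, here it is
 * simply hardcoded for the sake of the example. */
static bool
display_probe_succeeds(void)
{
    return false;
}

/* Anything other than an explicit 'off' (i.e. 'auto' or 'on') triggers
 * the probe; only 'on' turns a failed probe into a hard error. */
static int
realize_display(OnOffAuto display)
{
    if (display == ON_OFF_AUTO_OFF)
        return 0;                       /* explicitly disabled */

    if (!display_probe_succeeds()) {
        if (display == ON_OFF_AUTO_ON) {
            fprintf(stderr, "display requested but not available\n");
            return -1;
        }
        return 0;                       /* 'auto' silently falls back */
    }
    return 0;
}

int main(void)
{
    return realize_display(ON_OFF_AUTO_AUTO) == 0 ? 0 : 1;
}

Which is exactly why 'auto' is hard for libvirt to mimic: whether the probe
succeeds depends on the vendor driver, not on anything libvirt can see.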
...
> +        a new optional <code>display</code> attribute to enable or disable
> +        support for an accelerated remote desktop backed by a mediated
> +        device (such as NVIDIA vGPU or Intel GVT-g) as an alternative to
> +        emulated <a href="#elementsVideo">video devices</a>. This attribute
> +        is limited to <code>model='vfio-pci'</code> only. Supported values
> +        are either 'on' or 'off' (default is 'off').
Usage of this option requires ???which types/styles??? of graphics
devices to be defined for the domain.
Right, worth mentioning that it currently only works with the QEMU+VNC/SPICE
combo, thanks.
...
> + <p>
> + Note: There are also some implications on the usage of guest's
> +        address type depending on the <code>model</code> attribute,
> + see the <code>address</code> element below.
> + </p>
> </dd>
> </dl>
> <p>
> diff --git a/docs/schemas/domaincommon.rng b/docs/schemas/domaincommon.rng
> index 703a1bb6f8..345bd3c757 100644
> --- a/docs/schemas/domaincommon.rng
> +++ b/docs/schemas/domaincommon.rng
> @@ -4507,6 +4507,11 @@
> <value>vfio-ccw</value>
> </choice>
> </attribute>
> + <optional>
> + <attribute name="display">
> + <ref name="virOnOff"/>
> + </attribute>
> + </optional>
> <element name="source">
> <ref name="mdevaddress"/>
> </element>
> diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
> index c868d8de08..7844b6d272 100644
> --- a/src/conf/domain_conf.c
> +++ b/src/conf/domain_conf.c
> @@ -7606,6 +7606,7 @@ virDomainHostdevDefParseXMLSubsys(xmlNodePtr node,
> char *rawio = NULL;
> char *backendStr = NULL;
> char *model = NULL;
> + char *display = NULL;
> int backend;
> int ret = -1;
> virDomainHostdevSubsysPCIPtr pcisrc = &def->source.subsys.u.pci;
> @@ -7625,6 +7626,7 @@ virDomainHostdevDefParseXMLSubsys(xmlNodePtr node,
> sgio = virXMLPropString(node, "sgio");
> rawio = virXMLPropString(node, "rawio");
> model = virXMLPropString(node, "model");
> + display = virXMLPropString(node, "display");
>
> /* @type is passed in from the caller rather than read from the
> * xml document, because it is specified in different places for
> @@ -7712,6 +7714,15 @@ virDomainHostdevDefParseXMLSubsys(xmlNodePtr node,
> model);
> goto error;
> }
> +
> + if (display &&
> +            (mdevsrc->display = virTristateSwitchTypeFromString(display)) < 0) {
> +            virReportError(VIR_ERR_XML_ERROR,
> +                           _("unknown value '%s' for <hostdev> attribute "
> +                             "'display'"),
> +                           display);
> + goto error;
> + }
> }
>
> switch (def->source.subsys.type) {
> @@ -26297,9 +26308,14 @@ virDomainHostdevDefFormat(virBufferPtr buf,
> virTristateBoolTypeToString(scsisrc->rawio));
> }
>
> - if (def->source.subsys.type == VIR_DOMAIN_HOSTDEV_SUBSYS_TYPE_MDEV)
> + if (def->source.subsys.type == VIR_DOMAIN_HOSTDEV_SUBSYS_TYPE_MDEV) {
>          virBufferAsprintf(buf, " model='%s'",
>                            virMediatedDeviceModelTypeToString(mdevsrc->model));
> + if (mdevsrc->display)
->display > VIR_TRISTATE_SWITCH_ABSENT
> +            virBufferAsprintf(buf, " display='%s'",
> +                              virTristateSwitchTypeToString(mdevsrc->display));
> + }
> +
> }
> virBufferAddLit(buf, ">\n");
> virBufferAdjustIndent(buf, 2);
What about a virDomainHostdevDefCheckABIStability check that display
type didn't change?
I'm sorry, I'm failing to see how it could change; the way I designed it is that
whenever QEMU supports the capability we always default to 'off', which means
that it'll always get formatted explicitly as 'off', even if the attribute was
missing in the source XML.
The only problem would be an older libvirt which doesn't know what to do with
that attribute, so it would ignore it, which could cause issues with a newer
QEMU without the patch linked below, but the ABI stability check wouldn't help
anyway.
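
Just to make the "always explicit" part concrete, the defaulting I have in
mind is roughly the following (a sketch only; the capability flag is assumed
here, the real one comes with the rest of the series):

static void
qemuDomainMdevDefaultDisplay(virDomainHostdevSubsysMediatedDevPtr mdevsrc,
                             virQEMUCapsPtr qemuCaps)
{
    /* if the user didn't specify anything and the QEMU binary knows
     * about the vfio-pci 'display' property, pin the default to 'off'
     * so that it is always formatted explicitly */
    if (mdevsrc->display == VIR_TRISTATE_SWITCH_ABSENT &&
        virQEMUCapsGet(qemuCaps, QEMU_CAPS_VFIO_PCI_DISPLAY))
        mdevsrc->display = VIR_TRISTATE_SWITCH_OFF;
}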
> diff --git a/src/conf/domain_conf.h b/src/conf/domain_conf.h
> index 8493dfdd76..123a676ab9 100644
> --- a/src/conf/domain_conf.h
> +++ b/src/conf/domain_conf.h
> @@ -382,6 +382,7 @@ typedef struct _virDomainHostdevSubsysMediatedDev virDomainHostdevSubsysMediated
> typedef virDomainHostdevSubsysMediatedDev *virDomainHostdevSubsysMediatedDevPtr;
> struct _virDomainHostdevSubsysMediatedDev {
> int model; /* enum virMediatedDeviceModelType */
> + int display; /* virTristateSwitchType */
> char uuidstr[VIR_UUID_STRING_BUFLEN]; /* mediated device's uuid string
*/
> };
>
> diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
> index 2c51e4c0d8..27088d4456 100644
> --- a/src/qemu/qemu_domain.c
> +++ b/src/qemu/qemu_domain.c
> @@ -4277,10 +4277,35 @@ qemuDomainDeviceDefValidateNetwork(const virDomainNetDef *net)
> }
>
>
> +static int
> +qemuDomainDeviceDefValidateHostdevMdev(const virDomainHostdevSubsysMediatedDev *mdevsrc,
> + const virDomainDef *def)
> +{
> + if (mdevsrc->display) {
VIR_TRISTATE_SWITCH_ABSENT
> + if (mdevsrc->model != VIR_MDEV_MODEL_TYPE_VFIO_PCI) {
> + virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
> +            virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
> +                           _("<hostdev> attribute 'display' is only supported"
> +                             " with model='vfio-pci'"));
> +
> + return -1;
> + } else if (def->ngraphics == 0) {
> + virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
> +                           _("graphics device is needed for attribute value "
> +                             "'display=on' in <hostdev>"));
Hmmm.. not sure we noted that in the formatdomain docs - probably should.
Also, if this were a PostParse check, would the xml2xml tests fail
if no graphics were defined (e.g. hostdev-mdev-display-missing-graphics.xml)?
Right, good point. I preferred validation instead of post parse so as not to
risk 'losing' a domain, but it doesn't make any sense to want a display and
not have any graphical framebuffer, so I'll change that.
> + return -1;
> + }
> + }
> +
> + return 0;
> +}
> +
> +
> static int
> qemuDomainDeviceDefValidateHostdev(const virDomainHostdevDef *hostdev,
> const virDomainDef *def)
> {
> + const virDomainHostdevSubsysMediatedDev *mdevsrc;
> +
> /* forbid capabilities mode hostdev in this kind of hypervisor */
> if (hostdev->mode == VIR_DOMAIN_HOSTDEV_MODE_CAPABILITIES) {
> virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
> @@ -4290,6 +4315,24 @@ qemuDomainDeviceDefValidateHostdev(const virDomainHostdevDef *hostdev,
> return -1;
> }
>
> + if (hostdev->mode == VIR_DOMAIN_HOSTDEV_MODE_SUBSYS) {
> + switch ((virDomainHostdevSubsysType) hostdev->source.subsys.type) {
> + case VIR_DOMAIN_HOSTDEV_SUBSYS_TYPE_USB:
> + case VIR_DOMAIN_HOSTDEV_SUBSYS_TYPE_PCI:
> + case VIR_DOMAIN_HOSTDEV_SUBSYS_TYPE_SCSI:
> + case VIR_DOMAIN_HOSTDEV_SUBSYS_TYPE_SCSI_HOST:
> + break;
> + case VIR_DOMAIN_HOSTDEV_SUBSYS_TYPE_MDEV:
> + mdevsrc = &hostdev->source.subsys.u.mdev;
> + return qemuDomainDeviceDefValidateHostdevMdev(mdevsrc, def);
> + case VIR_DOMAIN_HOSTDEV_SUBSYS_TYPE_LAST:
> + default:
> + virReportEnumRangeError(virDomainHostdevSubsysType,
> + hostdev->source.subsys.type);
> + return -1;
> + }
> + }
> +
> return 0;
> }
>
> diff --git a/tests/qemuxml2argvdata/hostdev-mdev-display-missing-graphics.xml b/tests/qemuxml2argvdata/hostdev-mdev-display-missing-graphics.xml
> new file mode 100644
> index 0000000000..ea559a6444
> --- /dev/null
> +++ b/tests/qemuxml2argvdata/hostdev-mdev-display-missing-graphics.xml
This one is not used until the last patch... It could be moved there, or,
as I noted above, would moving the no-graphics check to PostParse allow
an xml2xml failure of some sort? (I cannot remember any more and it's too
late for me to dig.)
Sure, I'll make the PostParse change.
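
I.e. roughly the same condition as in the validation hunk above, just executed
at post-parse time so the xml2xml tests can exercise it (the helper name is
made up for the sketch):

static int
qemuDomainMdevDisplayPostParse(const virDomainHostdevSubsysMediatedDev *mdevsrc,
                               const virDomainDef *def)
{
    /* reject 'display=on' without any graphical framebuffer defined */
    if (mdevsrc->display == VIR_TRISTATE_SWITCH_ON && def->ngraphics == 0) {
        virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
                       _("graphics device is needed for attribute value "
                         "'display=on' in <hostdev>"));
        return -1;
    }
    return 0;
}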
[1] https://www.redhat.com/archives/libvir-list/2018-May/msg02128.html
Erik