From: Alex Williamson <alex.williamson(a)redhat.com>
Sent: Monday, April 25, 2022 10:38 PM
On Mon, 25 Apr 2022 11:10:14 +0100
Daniel P. Berrangé <berrange(a)redhat.com> wrote:
> On Fri, Apr 22, 2022 at 04:09:43PM -0600, Alex Williamson wrote:
> > [Cc +libvirt folks]
> >
> > On Thu, 14 Apr 2022 03:46:52 -0700
> > Yi Liu <yi.l.liu(a)intel.com> wrote:
> >
> > > With the introduction of iommufd[1], the linux kernel provides a generic
> > > interface for userspace drivers to propagate their DMA mappings to the
> > > kernel for assigned devices. This series does the porting of the VFIO
> > > devices onto the /dev/iommu uapi and lets it coexist with the legacy
> > > implementation. Other devices like vdpa, vfio mdev, etc. are not
> > > considered yet.
>
> snip
>
> > > The selection of the backend is made on a device basis using the new
> > > iommufd option (on/off/auto). By default the iommufd backend is selected
> > > if supported by the host and by QEMU (iommufd KConfig). This option is
> > > currently available only for the vfio-pci device. For other types of
> > > devices, it does not yet exist and the legacy BE is chosen by default.
> >
> > I've discussed this a bit with Eric, but let me propose a different
> > command line interface. Libvirt generally likes to pass file
> > descriptors to QEMU rather than grant it access to those files
> > directly. This was problematic with vfio-pci because libvirt can't
> > easily know when QEMU will want to grab another /dev/vfio/vfio
> > container. Therefore we abandoned this approach and instead libvirt
> > grants file permissions.
> >
> > However, with iommufd there's no reason that QEMU ever needs more than
> > a single instance of /dev/iommufd, and we're using per-device vfio file
> > descriptors, so it seems like a good time to revisit this.
>
> I assume access to '/dev/iommufd' gives the process somewhat elevated
> privileges, such that you don't want to unconditionally give QEMU
> access to this device ?
It's not all that dissimilar to /dev/vfio/vfio; it's an unprivileged
interface which should have limited scope for abuse, but the goal here
would be to de-privilege QEMU one step further, such that it cannot
open the device file itself.
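As a rough illustration of that model (a hypothetical launcher, not
code from this series, using the fd= option proposed below): a
privileged process opens /dev/iommufd and execs an unprivileged QEMU
that simply inherits the fd, so QEMU itself never opens the device
file.

  #include <fcntl.h>
  #include <stdio.h>
  #include <unistd.h>

  int main(void)
  {
      /* Open /dev/iommufd while still privileged. */
      int fd = open("/dev/iommufd", O_RDWR);
      if (fd < 0) {
          perror("open /dev/iommufd");
          return 1;
      }
      /* O_CLOEXEC is not set, so the fd survives exec and QEMU can
       * consume it via the proposed fd= option. */
      char opt[64];
      snprintf(opt, sizeof(opt), "iommufd,id=iommufd0,fd=%d", fd);
      /* Privileges would be dropped here before exec'ing QEMU. */
      execlp("qemu-system-x86_64", "qemu-system-x86_64",
             "-object", opt, (char *)NULL);
      perror("exec qemu");
      return 1;
  }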
> > The interface I was considering would be to add an iommufd object to
> > QEMU, so we might have a:
> >
> > -device iommufd[,fd=#][,id=foo]
> >
> > For non-libvirt usage this would have the ability to open /dev/iommufd
> > itself if an fd is not provided. This object could be shared with
> > other iommufd users in the VM and maybe we'd allow multiple instances
> > for more esoteric use cases. [NB, maybe this should be a -object
> > rather than -device since the iommufd is not a guest visible device?]
>
> Yes, -object would be the right answer for something that's purely
> a host side backend impl selector.
>
> > The vfio-pci device might then become:
> >
> > -device vfio-pci[,host=DDDD:BB:DD.f][,sysfsdev=/sys/path/to/device][,fd=#][,iommufd=foo]
> >
> > So essentially we can specify the device via host, sysfsdev, or passing
> > an fd to the vfio device file. When an iommufd object is specified,
> > "foo" in the example above, each of those options would use the
> > vfio-device access mechanism, essentially the same as iommufd=on in
> > your example. With the fd passing option, an iommufd object would be
> > required and necessarily use device level access.
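Spelled out, the three ways of identifying the device under an iommufd
object "foo" might look like this (fd numbers and addresses are
illustrative):

-object iommufd,id=foo,fd=23
-device vfio-pci,host=0000:02:00.0,iommufd=foo
-device vfio-pci,sysfsdev=/sys/bus/pci/devices/0000:02:00.0,iommufd=foo
-device vfio-pci,fd=42,iommufd=foo

where the last form would require the iommufd object, per the above.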
> >
> > In your example, the iommufd=auto seems especially troublesome for
> > libvirt because QEMU is going to have different locked memory
> > requirements based on whether we're using type1 or iommufd, where the
> > latter resolves the duplicate accounting issues. libvirt needs to know
Based on the current plan there is probably a transition window between
the point where the first vfio device type (vfio-pci) gains iommufd
support and the point where all vfio device types support iommufd.
Libvirt can figure out whether to use iommufd for a given device by
checking for the presence of /dev/vfio/devices/vfioX. But what would
libvirt's resource limit policy be in such a transition window, when
both type1 and iommufd might be in use? Or do we simply expect libvirt
to support iommufd only after the transition window ends, to avoid
handling such a mess?
> > deterministically which backend is being used, which this proposal
> > seems to provide, while at the same time bringing us more in line with
> > fd passing. Thoughts? Thanks,
>
> Yep, I agree that libvirt needs to have more direct control over this.
> This is also even more important if there are notable feature differences
> in the 2 backends.
>
> I wonder if anyone has considered an even more distinct impl, whereby
> we have a completely different device type on the backend, eg
>
> -device vfio-iommu-pci[,host=DDDD:BB:DD.f][,sysfsdev=/sys/path/to/device][,fd=#][,iommufd=foo]
>
> If a vendor wants to fully remove the legacy impl, they can then use the
> Kconfig mechanism to disable the build of the legacy impl device, while
> keeping the iommu impl (or vice-versa if the new iommu impl isn't
> considered reliable enough for them to support yet).
>
> Libvirt would use
>
> -object iommu,id=iommu0,fd=NNN
> -device vfio-iommu-pci,fd=MMM,iommu=iommu0
>
> Non-libvirt would use a simpler
>
> -device vfio-iommu-pci,host=0000:03:22.1
>
> with QEMU auto-creating an 'iommu' object in the background.
>
> This would fit into libvirt's existing modelling better. We currently have
> a concept of a PCI assignment backend, which previously supported the
> legacy PCI assignment, vs the VFIO PCI assignment. This new iommu impl
> feels like a 3rd PCI assignment approach, and so fits with how we modelled
> it as a different device type in the past.
I don't think we want to conflate "iommu" and "iommufd"; we're creating
an object that interfaces into the iommufd uAPI, not an iommu itself.
Likewise "vfio-iommu-pci" is just confusing. There was an iommu
interface previously; it's just a different implementation now, and as
far as the VM interface to the device goes, it's identical. Note that a
"vfio-iommufd-pci" device multiplies the matrix of every vfio device
for a rather subtle implementation detail.
My expectation would be that libvirt uses:
-object iommufd,id=iommufd0,fd=NNN
-device vfio-pci,fd=MMM,iommufd=iommufd0
Whereas simple QEMU command line would be:
-object iommufd,id=iommufd0
-device vfio-pci,iommufd=iommufd0,host=0000:02:00.0
The iommufd object would open /dev/iommufd itself. Creating an
implicit iommufd object is somewhat problematic because one of the
things I forgot to highlight in my previous description is that the
iommufd object is meant to be shared across not only various vfio
devices (platform, ccw, ap, nvme, etc.), but also across subsystems,
e.g. vdpa.
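For instance, two assigned devices sharing one iommufd object would
simply reference the same object id (addresses illustrative):

-object iommufd,id=iommufd0
-device vfio-pci,iommufd=iommufd0,host=0000:02:00.0
-device vfio-pci,iommufd=iommufd0,host=0000:03:00.0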
Out of curiosity - in concept one iommufd is sufficient to support all
IOAS requirements across subsystems, while having multiple iommufds
instead loses the benefit of centralized accounting. The latter would
also cause some trouble when we start virtualizing ENQCMD, which
requires VM-wide PASID virtualization and thus further needs to share
that information across iommufds. Not unsolvable, but there is really
no gain from adding such complexity. So I'm curious whether QEMU
provides a way to restrict a certain object type to a single instance,
to discourage such multi-iommufd attempts?
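One conceivable way to do that, sketched against QEMU's user-creatable
object interface (the iommufd backend object and this hook are
hypothetical, not code from this series): have the object's completion
hook refuse a second instance.

  #include "qemu/osdep.h"
  #include "qapi/error.h"
  #include "qom/object_interfaces.h"

  static bool iommufd_instance_created;

  /* UserCreatableClass::complete hook for a hypothetical iommufd
   * object: fail creation if an instance already exists. */
  static void iommufd_backend_complete(UserCreatable *uc, Error **errp)
  {
      if (iommufd_instance_created) {
          error_setg(errp, "only one iommufd object may be created");
          return;
      }
      iommufd_instance_created = true;
      /* ... open /dev/iommufd here if no fd property was given ... */
  }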
If the old style were used:
-device vfio-pci,host=0000:02:00.0
Then QEMU would use vfio for the IOMMU backend.
If libvirt/userspace wants to query whether "legacy" vfio is still
supported by the host kernel, I think it'd only need to look for
whether the /dev/vfio/vfio container interface still exists.
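A trivial host-side probe along those lines (paths as discussed in
this thread):

  #include <stdio.h>
  #include <unistd.h>

  int main(void)
  {
      /* Legacy type1/container interface still present? */
      printf("legacy vfio: %s\n",
             access("/dev/vfio/vfio", F_OK) == 0 ? "yes" : "no");
      /* iommufd interface available? Per-device cdev nodes would
       * additionally appear under /dev/vfio/devices/. */
      printf("iommufd:     %s\n",
             access("/dev/iommufd", F_OK) == 0 ? "yes" : "no");
      return 0;
  }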
If we need some means for QEMU to remove legacy support, I'd rather
find a way to do it via probing device options. It's easy enough to
see if iommufd support exists by looking for the presence of the
iommufd option for the vfio-pci device, and Kconfig within QEMU could
be used regardless of whether we define a new device name. Thanks,
Alex
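
As an illustration of the option probing described above, a management
application could query vfio-pci's properties with the existing
device-list-properties QMP command and look for an "iommufd" entry
(assuming the option is exposed as a regular device property):

  { "execute": "device-list-properties",
    "arguments": { "typename": "vfio-pci" } }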