On Thu, 12 Dec 2019 12:09:48 +0800
Jason Wang <jasowang(a)redhat.com> wrote:
On 2019/12/7 1:42 AM, Alex Williamson wrote:
> On Fri, 6 Dec 2019 17:40:02 +0800
> Jason Wang <jasowang(a)redhat.com> wrote:
>
>> On 2019/12/6 4:22 PM, Yan Zhao wrote:
>>> On Thu, Dec 05, 2019 at 09:05:54PM +0800, Jason Wang wrote:
>>>> On 2019/12/5 4:51 PM, Yan Zhao wrote:
>>>>> On Thu, Dec 05, 2019 at 02:33:19PM +0800, Jason Wang wrote:
>>>>>> Hi:
>>>>>>
>>>>>> On 2019/12/5 11:24 AM, Yan Zhao wrote:
>>>>>>> For SRIOV devices, VFs are passed through into the guest directly
>>>>>>> without host driver mediation. However, when VMs migrate with
>>>>>>> passed-through VFs, dynamic host mediation is required to (1) get device
>>>>>>> states, (2) get dirty pages. Since device states as well as other
>>>>>>> critical information required for dirty page tracking for VFs are
>>>>>>> usually retrieved from PFs, it is handy to provide an extension in the
>>>>>>> PF driver to centrally control VFs' migration.
>>>>>>>
>>>>>>> Therefore, in order to realize (1) passing through VFs at normal time,
>>>>>>> (2) dynamically trapping VFs' BARs for dirty page tracking, and
>>>>>> A silly question: what's the reason for doing this? Is it a must for
>>>>>> dirty page tracking?
>>>>>>
>>>>> For performance considerations. VFs' BARs should be passed through at
>>>>> normal times and only enter the trap state when needed.
>>>> Right, but how does this matter for the case of dirty page tracking?
>>>>
>>> Take a NIC as an example: to track its VF's dirty pages, a software
>>> approach is required that traps every write of the ring tail, which
>>> resides in BAR0.
>>
>> Interesting, but it looks like we need:
>> - decode the instruction
>> - mediate all access to BAR0
>> All of which seems a great burden for the VF driver. I wonder whether
>> doing interrupt relay and tracking the head is better in this case.
> This sounds like a NIC-specific solution; I believe the goal here is to
> allow any device type to implement a partial mediation solution, in
> this case to sufficiently track the device while in the migration
> saving state.
I doubt there's a solution that can work for every device type. E.g. for
virtio, the avail index (head) doesn't belong to any BAR, and the device may
decide to disable the doorbell from the guest. The same goes for interrupt
relay, since the driver may choose to disable interrupts from the device. In
this case, the only way to track dirty pages correctly is to switch to a
software datapath.
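(For what it's worth, the BAR-trap idea above amounts to something like the
following on the host side. This is a purely conceptual sketch, assuming an RX
ring whose tail doorbell write has been trapped; the descriptor layout, the
register offset, and the dirty-marking helper are hypothetical placeholders,
not taken from any real driver.)

/* Conceptual sketch only: rx_desc, RX_TAIL_REG, vf_bar0_base and
 * mark_iova_range_dirty() are all hypothetical placeholders. */
#include <linux/io.h>
#include <linux/types.h>

#define RX_TAIL_REG	0x18			/* placeholder offset in BAR0 */

struct rx_desc {
	u64 buf_iova;				/* guest buffer the VF will DMA into */
	u32 len;
	u32 flags;
};

extern void __iomem *vf_bar0_base;		/* mapped VF BAR0 (placeholder) */
extern void mark_iova_range_dirty(u64 iova, u32 len);	/* placeholder */

/* Called when a guest write to the RX tail register has been trapped. */
static void handle_trapped_rx_tail_write(struct rx_desc *ring, u32 ring_size,
					 u32 old_tail, u32 new_tail)
{
	u32 i;

	/*
	 * Every descriptor the guest just posted names a buffer the device
	 * may later write to; conservatively mark those pages dirty.
	 */
	for (i = old_tail; i != new_tail; i = (i + 1) % ring_size)
		mark_iova_range_dirty(ring[i].buf_iova, ring[i].len);

	/* Forward the doorbell to the real tail register in BAR0. */
	writel(new_tail, vf_bar0_base + RX_TAIL_REG);
}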
>
>>> There's
>>> still no IOMMU Dirty bit available.
>>>>>>> (3) centralizing VF critical state retrieval and VF controls into one
>>>>>>> driver, we propose to introduce mediate ops on top of the current
>>>>>>> vfio-pci device driver.
>>>>>>>
>>>>>>>
>>>>>>>                                      _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
>>>>>>>  __________   register mediate ops  |  ___________      ___________    |
>>>>>>> |          |<-------------------------|    VF     |    |           |   |
>>>>>>> | vfio-pci |                        | |  mediate  |    | PF driver |   |
>>>>>>> |__________|------------------------->|  driver   |    |___________|   |
>>>>>>>       |         open(pdev)          |  -----------           |         |
>>>>>>>       |                             |                        |         |
>>>>>>>       |                             |_ _ _ _ _ _ _ _ _ _ _ _ |_ _ _ _ _|
>>>>>>>      \|/                                                    \|/
>>>>>>>  -----------                                            -----------
>>>>>>> |    VF     |                                          |     PF    |
>>>>>>>  -----------                                            -----------
>>>>>>>
>>>>>>> The VF mediate driver could be a standalone driver that does not bind
>>>>>>> to any devices (as in the demo code in patches 5-6), or it could be a
>>>>>>> built-in extension of the PF driver (as in patches 7-9).
>>>>>>>
>>>>>>> Rather than directly binding to a VF, the VF mediate driver registers a
>>>>>>> mediate ops into vfio-pci at driver init. vfio-pci maintains a list of
>>>>>>> such mediate ops.
>>>>>>> (Note that a VF mediate driver can register mediate ops into vfio-pci
>>>>>>> before vfio-pci binds to any devices, and a VF mediate driver can
>>>>>>> support mediating multiple devices.)
>>>>>>>
>>>>>>> When opening a device (e.g. a VF), vfio-pci goes through the mediate ops
>>>>>>> list and calls each vfio_pci_mediate_ops->open() with the pdev of the
>>>>>>> opening device as a parameter.
>>>>>>> The VF mediate driver should return success or failure depending on
>>>>>>> whether it supports the pdev or not.
>>>>>>> E.g. a VF mediate driver would compare its supported VF devfn with the
>>>>>>> devfn of the passed-in pdev.
>>>>>>> Once vfio-pci finds a successful vfio_pci_mediate_ops->open(), it will
>>>>>>> stop querying other mediate ops and bind the opening device to this
>>>>>>> mediate ops using the returned mediate handle.
>>>>>>>
>>>>>>> Further vfio-pci ops (the VFIO_DEVICE_GET_REGION_INFO ioctl, rw, mmap)
>>>>>>> on the VF will be intercepted into the VF mediate driver as
>>>>>>> vfio_pci_mediate_ops->get_region_info(),
>>>>>>> vfio_pci_mediate_ops->rw, and
>>>>>>> vfio_pci_mediate_ops->mmap, and get customized.
>>>>>>> vfio_pci_mediate_ops->rw and vfio_pci_mediate_ops->mmap additionally
>>>>>>> return 'pt' to indicate whether vfio-pci should further pass the access
>>>>>>> through to hardware.
>>>>>>>
>>>>>>> When vfio-pci closes the VF, it calls its vfio_pci_mediate_ops->release()
>>>>>>> with the mediate handle as a parameter.
>>>>>>>
>>>>>>> The mediate handle returned from vfio_pci_mediate_ops->open() lets the
>>>>>>> VF mediate driver differentiate between two opened VFs with the same
>>>>>>> device id and vendor id.
>>>>>>>
>>>>>>> When the VF mediate driver exits, it unregisters its mediate ops from
>>>>>>> vfio-pci.
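(As an aside for readers following along: the ops table described above might
look roughly like the sketch below. The field names and signatures here are
assumptions inferred from the cover letter text, not necessarily what the
actual patches define.)

/* Sketch only, inferred from the description above, not the real patches. */
#include <linux/list.h>
#include <linux/mm.h>
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/types.h>
#include <linux/vfio.h>

struct vfio_pci_mediate_ops {
	struct module *owner;
	const char *name;
	/* return a mediate handle (>= 0) if pdev is supported, -ENODEV if not */
	int (*open)(struct pci_dev *pdev);
	void (*release)(int handle);
	/* customize the region info reported through GET_REGION_INFO */
	int (*get_region_info)(int handle, struct vfio_region_info *info);
	/* 'pt' tells vfio-pci whether to also pass the access through to HW */
	ssize_t (*rw)(int handle, char __user *buf, size_t count,
		      loff_t *ppos, bool iswrite, bool *pt);
	int (*mmap)(int handle, struct vm_area_struct *vma, bool *pt);

	struct list_head next;	/* linked into vfio-pci's global list */
};

int vfio_pci_register_mediate_ops(struct vfio_pci_mediate_ops *ops);
void vfio_pci_unregister_mediate_ops(struct vfio_pci_mediate_ops *ops);

With something of this shape, vfio-pci only needs to walk its list at open()
time and then delegate get_region_info/rw/mmap to whichever ops claimed the
device.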
>>>>>>>
>>>>>>>
>>>>>>> In this patchset, we enable vfio-pci to provide 3 things:
>>>>>>> (1) calling mediate ops to allow the vendor driver to customize the
>>>>>>> default region info/rw/mmap of a region.
>>>>>>> (2) provide a migration region to support migration
>>>>>> What's the benefit of introducing a region? It looks to me like we don't
>>>>>> expect the region to be accessed directly from the guest. Could we simply
>>>>>> extend the device fd ioctls for doing such things?
>>>>>>
>>>>> You may take a look at the mdev live migration discussion in
>>>>> https://lists.gnu.org/archive/html/qemu-devel/2019-11/msg01763.html
>>>>>
>>>>> or the previous discussion at
>>>>> https://lists.gnu.org/archive/html/qemu-devel/2019-02/msg04908.html,
>>>>> which has a kernel-side implementation at
>>>>> https://patchwork.freedesktop.org/series/56876/
>>>>>
>>>>> Generally speaking, the QEMU part of live migration is consistent between
>>>>> the vfio-pci + mediate ops way and the mdev way.
>>>> So in mdev, do you still have a mediate driver? Or do you expect the
>>>> parent to implement the region?
>>>>
>>> No, currently it's only for vfio-pci.
>> And specific to PCI.
> What's PCI specific? The implementation, yes, it's done in the vfio
> bus driver here, but all device access is performed by the bus
> driver. I'm not sure how we could introduce the intercept at the
> vfio-core level, but I'm open to suggestions.
I haven't thought about this too much, but if we can intercept at the core
level, it can basically do what mdev does right now.
An intercept at the core level is essentially a new vfio bus driver.
>>> An mdev parent driver is free to customize its regions and hence does not
>>> require these mediate ops hooks.
>>>
>>>>> The region is only a channel for QEMU and the kernel to communicate
>>>>> information without introducing new ioctls.
>>>> Well, at least you introduce a new type of region in the uapi. So this does
>>>> not answer why a region is better than an ioctl. If the region will only be
>>>> used by QEMU, using an ioctl is much easier and more straightforward.
>>>>
>>> It's not introduced by me :)
>>> mdev live migration is actually using this approach; I'm just keeping
>>> compatible with that uapi.
>>
>> I meant e.g. VFIO_REGION_TYPE_MIGRATION.
>>
>>
>>> From my own perspective, my answer is that a region is more flexible
>>> compared to an ioctl. The vendor driver can freely define the size
>>>
>> Probably not since it's an ABI I think.
> I think Kirti's thread proposing the migration interface is a better
> place for this discussion, I believe Yan has already linked to it. In
> general we prefer to be frugal in our introduction of new ioctls,
> especially when we have existing mechanisms via regions to support the
> interactions. The interface is designed to be flexible to the vendor
> driver needs, partially thanks to it being a region.
>
>>> and mmap capability of
>>> its data subregion.
>>>
>> It doesn't help much unless it can be mapped into the guest (which I don't
>> think is the case here).
>>
>>> Also, there're already too many ioctls in vfio.
>> Probably not :) We have a bunch of subsystems that have many more
>> ioctls than VFIO (e.g. DRM).
> And this is a good thing?
Well, I just meant that "having too many ioctls already" is not a good
reason for not introducing new ones.
Avoiding ioctl proliferation is a reason to require a high bar for any
new ioctl though. Push back on every ioctl and maybe we won't get to
that point.
> We can more easily deprecate and revise
> region support than we can take back ioctls that have been previously
> used.
It belongs to the uapi; how easily can we deprecate that?
I'm not saying there shouldn't be a deprecation process, but the core
uapi for vfio remains (relatively) unchanged. The user has a protocol
for discovering the features of a device and if we decide we've screwed
up the implementation of the migration_region-v1 we can simply add a
migration_region-v2 and both can be exposed via the same core ioctls
until we decide to no longer expose v1. Our address space of region
types is defined within vfio, not shared with every driver in the
kernel. The "add an ioctl for that" approach is not the model I
advocate.
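(To make that discovery protocol concrete, userspace probing for such a
versioned region looks roughly like the sketch below. The type/subtype
constants are placeholders for whatever the migration proposal ends up
defining; the rest uses only the existing region_info capability chain.)

/* Sketch: probe a vfio device fd for a vendor/migration region via the
 * region_info capability chain.  VFIO_MY_TYPE/SUBTYPE are placeholders. */
#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

#define VFIO_MY_TYPE	0x1	/* placeholder */
#define VFIO_MY_SUBTYPE	0x1	/* placeholder; a v2 would just bump this */

static int find_region(int device_fd, unsigned int nregions)
{
	/* device-specific regions follow the fixed vfio-pci region indexes */
	for (unsigned int i = VFIO_PCI_NUM_REGIONS; i < nregions; i++) {
		struct vfio_region_info hdr = {
			.argsz = sizeof(hdr), .index = i,
		};
		struct vfio_region_info *info;

		if (ioctl(device_fd, VFIO_DEVICE_GET_REGION_INFO, &hdr))
			continue;
		if (!(hdr.flags & VFIO_REGION_INFO_FLAG_CAPS))
			continue;

		/* re-fetch with room for the capability chain */
		info = calloc(1, hdr.argsz);
		if (!info)
			return -1;
		info->argsz = hdr.argsz;
		info->index = i;
		if (ioctl(device_fd, VFIO_DEVICE_GET_REGION_INFO, info)) {
			free(info);
			continue;
		}

		/* walk the cap chain looking for a matching type capability */
		for (unsigned int off = info->cap_offset; off; ) {
			struct vfio_info_cap_header *cap =
				(void *)((char *)info + off);

			if (cap->id == VFIO_REGION_INFO_CAP_TYPE) {
				struct vfio_region_info_cap_type *t =
					(void *)cap;

				if (t->type == VFIO_MY_TYPE &&
				    t->subtype == VFIO_MY_SUBTYPE) {
					free(info);
					return i;	/* found it */
				}
			}
			off = cap->next;
		}
		free(info);
	}
	return -1;
}

Here nregions would come from VFIO_DEVICE_GET_INFO; moving to a v2 region is
just a different subtype for this same loop to match, with no new ioctl.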
> I generally don't like the "let's create a new ioctl for that" approach
> versus trying to fit something within the existing architecture and
> convention.
>
>>>>>>> (3) provide a dynamic trap BAR info region to allow the vendor driver
>>>>>>> to control trapping/untrapping of the device's PCI BARs
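(Purely as an illustration of what such a region's contents could be; this
layout is invented here, not taken from the patches: a small structure the
vendor driver updates to say which BARs it currently wants trapped.)

/* Hypothetical layout only, not from the patches. */
#include <linux/types.h>

struct vfio_pci_trap_bar_info {
	__u32 argsz;
	__u32 flags;
	__u32 trapped_bars;	/* bit n set => BAR n should be trapped, not mmap'd */
	__u32 reserved;
};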
>>>>>>>
>>>>>>> This vfio-pci + mediate ops way differs from the mdev way in that
>>>>>>> (1) the mdev way needs to create a 1:1 mdev device on top of one VF; a
>>>>>>> device-specific mdev parent driver is bound to the VF directly.
>>>>>>> (2) the vfio-pci + mediate ops way does not create mdev devices and the
>>>>>>> VF mediate driver does not bind to VFs. Instead, vfio-pci binds to VFs.
>>>>>>>
>>>>>>> The reason why we don't choose the way of writing an mdev parent driver
>>>>>>> is that
>>>>>>> (1) VFs are directly passed through almost all the time. Directly binding
>>>>>>> to vfio-pci lets most of the code be shared/reused.
>>>>>> Can we split out the common parts from vfio-pci?
>>>>>>
>>>>> That's very attractive, but one cannot implement a vfio-pci without
>>>>> exporting everything in it as common code :)
>>>> Well, I think it should not be hard to do that. E.g. you can route it
>>>> back like:
>>>>
>>>> vfio -> vfio_mdev -> parent -> vfio_pci
>>>>
>>> It's desirable for us to have the mediate driver bind to the PF device,
>>> so that once a VF device is created, only the PF driver and vfio-pci are
>>> required, just the same as what needs to be done for a normal VF
>>> passthrough. Otherwise, a separate parent driver binding to the VF is
>>> required. Also, this parent driver has many drawbacks, as I mention in
>>> this cover letter.
>> Well, as discussed, there's no need to duplicate the code; the BAR trick
>> should still work. The main issues I see with this proposal are:
>>
>> 1) PCI specific; other buses may need something similar
> Propose how it could be implemented higher in the vfio stack to make it
> device agnostic.
E.g. doing it in vfio_device_fops instead of vfio_pci_ops?
Which is essentially a new vfio bus driver. This is something vfio has
supported since day one. Issues with doing that here are that it puts
the burden on the mediation/vendor driver to re-implement or re-use a
lot of existing code in vfio-pci, and I think it creates user confusion
around which driver to use for what feature set when using a device
through vfio. You're complaining this series is PCI specific, when
re-using the vfio-pci code is exactly what we're trying to achieve.
Other bus types can do something similar and injecting vendor
specific drivers a layer above the bus driver is already a fundamental
part of the infrastructure.
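(For readers unfamiliar with the term: "another vfio bus driver" here means a
driver that registers its own vfio_device_ops for the device, along the lines
of the sketch below. The vfio_foo_* names are hypothetical and error handling
is trimmed; the point is simply that every one of these callbacks is something
vfio-pci already implements.)

/* Hypothetical sketch of a vendor-specific vfio bus driver for one PCI
 * device; "foo" names are made up and error handling is trimmed. */
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/slab.h>
#include <linux/vfio.h>

struct vfio_foo_device {
	struct pci_dev *pdev;
	/* vendor-specific migration/dirty-tracking state would live here */
};

static int vfio_foo_open(void *device_data)
{
	return 0;	/* enable the device, set up save/restore state, ... */
}

static void vfio_foo_release(void *device_data)
{
}

static ssize_t vfio_foo_read(void *device_data, char __user *buf,
			     size_t count, loff_t *ppos)
{
	/* config space, BARs, a migration region... all re-implemented here */
	return -EINVAL;
}

static ssize_t vfio_foo_write(void *device_data, const char __user *buf,
			      size_t count, loff_t *ppos)
{
	return -EINVAL;
}

static long vfio_foo_ioctl(void *device_data, unsigned int cmd,
			   unsigned long arg)
{
	/* DEVICE_GET_INFO, GET_REGION_INFO, GET_IRQ_INFO, SET_IRQS, ... */
	return -ENOTTY;
}

static int vfio_foo_mmap(void *device_data, struct vm_area_struct *vma)
{
	return -EINVAL;
}

static const struct vfio_device_ops vfio_foo_ops = {
	.name		= "vfio-foo",
	.open		= vfio_foo_open,
	.release	= vfio_foo_release,
	.read		= vfio_foo_read,
	.write		= vfio_foo_write,
	.ioctl		= vfio_foo_ioctl,
	.mmap		= vfio_foo_mmap,
};

static int vfio_foo_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	struct vfio_foo_device *vdev = kzalloc(sizeof(*vdev), GFP_KERNEL);

	if (!vdev)
		return -ENOMEM;
	vdev->pdev = pdev;

	/* a real driver would also take an IOMMU group reference first,
	 * as vfio-pci does, then hook the device into vfio core */
	return vfio_add_group_dev(&pdev->dev, &vfio_foo_ops, vdev);
}

static void vfio_foo_remove(struct pci_dev *pdev)
{
	kfree(vfio_del_group_dev(&pdev->dev));
}

static struct pci_driver vfio_foo_driver = {
	.name		= "vfio-foo",
	.probe		= vfio_foo_probe,
	.remove		= vfio_foo_remove,
	/* .id_table would list the supported VF device IDs */
};
module_pci_driver(vfio_foo_driver);

Nearly all of those callbacks end up duplicating vfio-pci, which is the burden
on the vendor driver being described above.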
>> 2) Functionality is duplicated with mdev, and mdev can do even more
> mdev also comes with a device lifecycle interface that doesn't really
> make sense when a driver is only trying to partially mediate a single
> physical device rather than multiplex a physical device into virtual
> devices.
Yes, but that part could be decoupled out of mdev.
There would be nothing left. vfio-mdev is essentially nothing more than
a vfio bus driver that forwards through to mdev to provide that
lifecycle interface to the vendor driver. Without that, it's just
another vfio bus driver.
> mdev would also require vendor drivers to re-implement
> much of vfio-pci for the direct access mechanisms. Also, do we really
> want users or management tools to decide between binding a device to
> vfio-pci or a separate mdev driver to get this functionality. We've
> already been burnt trying to use mdev beyond its scope.
The problem is, what if we had a device that supports both SR-IOV and mdev?
Does this mean we need to prepare two sets of drivers?
We have this situation today, modulo SR-IOV, but that's a red herring
anyway, VF vs PF is irrelevant. For example we can either directly
assign IGD graphics to a VM with vfio-pci or we can enable GVT-g
support in the i915 driver, which registers vGPU support via mdev.
These are different use cases, expose different features, and have
different support models. NVIDIA is the same way, assigning a GPU via
vfio-pci or a vGPU via vfio-mdev are entirely separate usage models.
Once we use mdev, it's at the vendor driver's discretion how the device
resources are backed, they might make use of the resource isolation of
SR-IOV or they might divide a single function.
If your question is whether there's a concern around proliferation of
vfio bus drivers and user confusion over which to use for what
features, yes, absolutely. I think this is why we're starting with
seeing what it looks like to add mediation to vfio-pci rather than
modularize vfio-pci and ask Intel to develop a new vfio-pci-intel-dsa
driver. I'm not yet convinced we won't eventually come back to that
latter approach though if this initial draft is what we can expect of a
mediated vfio-pci.
>>>>>>> If we write a
>>>>>>> vendor-specific mdev parent driver, most of the code (like the
>>>>>>> passthrough style of rw/mmap) still needs to be copied from the
>>>>>>> vfio-pci driver, which is duplicated and tedious work.
>>>>>> The mediate ops look quite similar to what vfio-mdev does. And it looks
>>>>>> to me like we need to consider live migration for mdev as well. In that
>>>>>> case, do we still expect mediate ops through VFIO directly?
>>>>>>
>>>>>>
>>>>>>> (2) For features like dynamically trapping/untrapping PCI BARs, if they
>>>>>>> are in vfio-pci, they can be available to most people without repeated
>>>>>>> code copying and re-testing.
>>>>>>> (3) With a 1:1 mdev driver that passes through VFs most of the time,
>>>>>>> people have to decide whether to bind VFs to vfio-pci or to the mdev
>>>>>>> parent driver before they run into a real migration need. However, if
>>>>>>> vfio-pci is bound initially, they have no chance to do live migration
>>>>>>> when there's a need later.
>>>>>> We can teach the management layer to do this.
>>>>>>
>>>>> No, not possible, as vfio-pci by default has no migration region, and
>>>>> dirty page tracking needs the vendor's mediation, at least for most
>>>>> passthrough devices now.
>>>> I'm not quite sure I get it here, but in this case, just teach them to
>>>> use the driver that has migration support?
>>>>
>>> That's one way, but as more and more passthrough devices have the demand
>>> and capability to do migration, will vfio-pci still be used in the future?
>>
>> This should not be a problem:
>> - If we introduce a common mdev for vfio-pci, we can just always bind
>> that driver
> There's too much of mdev that doesn't make sense for this usage model;
> this is why Yi's proposed generic mdev PCI wrapper is only a sample
> driver. I think we do not want to introduce user confusion regarding
> which driver to use and there are outstanding non-singleton group
> issues with mdev that don't seem worthwhile to resolve.
I agree, but I think what users want is a unified driver that works for
both SR-IOV and mdev. That's why trying to have a common way of doing
mediation may make sense.
I don't think we can get to one driver, nor is it clear to me that we
should. Direct assignment and mdev currently provide different usage
models. Both are valid, both are useful. That said, I don't
necessarily want a user to need to choose whether to bind a device to
vfio-pci for base functionality or vfio-pci-vendor-foo for extended
functionality either. I think that's why we're exploring this
mediation approach and there's already precedent in vfio-pci for some
extent of device specific support. It's still a question though
whether it can be done clean enough to make it worthwhile. Thanks,
Alex