On Wed, Sep 07, 2016 at 11:17:39AM -0700, Neo Jia wrote:
On Wed, Sep 07, 2016 at 10:44:56AM -0600, Alex Williamson wrote:
> On Wed, 7 Sep 2016 21:45:31 +0530
> Kirti Wankhede <kwankhede@nvidia.com> wrote:
>
> > To hot-plug an mdev device into a domain that already has an mdev
> > device assigned, the new mdev device should be created with the same
> > group number as the existing devices and then hot-plugged. If there
> > is no mdev device in that domain, the group number should be unique.
> >
> > This simplifies mdev grouping and also provides flexibility for the
> > vendor driver implementation.
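[For illustration: a minimal user-space sketch of the create-with-group
flow described above. The sysfs node name (mdev_create) and the
"UUID:group" write format are assumptions made for this sketch, not
necessarily the exact interface proposed in the patch series.]

#include <stdio.h>

/* Write "UUID:group" to a hypothetical per-parent create node; mdevs
 * created with the same group number would belong to the same set. */
int mdev_create_grouped(const char *parent, const char *uuid,
                        unsigned int group)
{
    char path[256];
    FILE *f;

    snprintf(path, sizeof(path),
             "/sys/class/mdev_bus/%s/mdev_create", parent);
    f = fopen(path, "w");
    if (!f)
        return -1;
    fprintf(f, "%s:%u\n", uuid, group);
    return fclose(f) ? -1 : 0;
}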
>
> The 'start' operation for NVIDIA mdev devices allocates peer-to-peer
> resources between mdev devices. Does this not represent some degree of
> an isolation hole between those devices? Will peer-to-peer DMA between
> devices honor the guest IOVA when mdev devices are placed into separate
> address spaces, such as is possible with a vIOMMU?
Hi Alex,
In reality, the p2p operation will only work within the same translation
domain. But as we are discussing the multiple-mdev-per-VM use cases, I
think we probably should not limit this to just the p2p operation.
So, in general, the NVIDIA vGPU device model's requirement is to
know/register all mdevs per VM before opening any of those mdev devices.
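[A hypothetical sketch of how a vendor driver might enforce that
ordering requirement; the structure and field names below are invented
for illustration and are not from the mdev core or the NVIDIA driver.]

#include <linux/errno.h>
#include <linux/types.h>

struct vgpu_group {
    unsigned int expected;   /* mdevs the VM declared for this group */
    unsigned int registered; /* mdevs created/registered so far */
    bool started;            /* p2p resources already allocated? */
};

/* Refuse the first open until every peer mdev of the VM is known, so
 * peer-to-peer resources can be set up across the whole set at once. */
int vgpu_group_open(struct vgpu_group *grp)
{
    if (grp->registered < grp->expected)
        return -EAGAIN;

    if (!grp->started) {
        /* allocate p2p resources spanning all group members here */
        grp->started = true;
    }
    return 0;
}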

It concerns me that if we bake this rule into the sysfs interface, we
will make it very hard to later support hot-plug/unplug of mdevs for
running VMs. Conversely, if we can solve the hot-plug/unplug problem,
then we would potentially not need this grouping concept at all.

I'd hate for us to do all this complex work to group multiple mdevs
per VM only to throw it away later once hot-plug support is made to
work.
Regards,
Daniel
--
|:
http://berrange.com -o-
http://www.flickr.com/photos/dberrange/ :|
|:
http://libvirt.org -o-
http://virt-manager.org :|
|:
http://autobuild.org -o-
http://search.cpan.org/~danberr/ :|
|:
http://entangle-photo.org -o-
http://live.gnome.org/gtk-vnc :|