On Tue, 2016-03-15 at 13:31 -0600, Alex Williamson wrote:
> So we have all sorts of driver issues that are sure to come and go over
> time and all sorts of use cases that seem difficult to predict. If we
> know we're in a ovirt/openstack environment, managed='detach' might
> actually be a more typical use case than managed='yes'. It still
> leaves a gap that we hope the host driver doesn't do anything bad when
> it initializes the device and hope that it releases the device cleanly,
> but it's probably better than tempting fate by unnecessarily bouncing
> it back and forth between drivers.
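(For context, the attribute in question lives on the <hostdev> element
of the domain XML. A minimal sketch with a made-up PCI address;
'yes' and 'no' are the values that exist today, while 'detach' is the
new value being discussed in this thread:

    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
      </source>
    </hostdev>

With managed='yes' libvirt detaches the device from the host driver at
guest startup and reattaches it afterwards; with managed='no' both
steps are left to the admin.)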
Is sharing a hostdev between multiple guests more solid in general?
E.g. if I have g1 and g2, both configured to use the host's GPU, can
I start up g1, shut it down, start up g2 and expect things to just
work? Hopefully that's the case, because the device would go through
a more complete setup / teardown cycle.
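In other words, with the same <hostdev> entry present in both guests'
configurations, a sequence along these lines (g1 and g2 being the
hypothetical guest names from above):

    virsh start g1
    # ... use the GPU in g1 ...
    virsh shutdown g1     # wait until the guest has actually powered off
    virsh start g2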
Anyway, after reading your explanation I'm wondering if we
shouldn't always recommend a setup where devices that are going
to be assigned to guests are just never bound to any host driver,
as that sounds like it would have the best chance of working
reliably.
IIUC, the pci-stub.ids kernel parameter you mentioned above does
exactly that. Maybe blacklisting the host driver as well would be
a good idea? Anything else a user would need to do? Would the
user or management layer not be able to configure something like
that in an oVirt / OpenStack environment? What should we change
to make it possible or easier?
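Just to make sure I understand the kind of setup we would be
recommending, it would be something along these lines, where the
10de:13c2 vendor:device ID and the nouveau driver name are only
examples (the real IDs come from lspci -nn, and the modprobe.d file
name is arbitrary):

    # kernel command line: grab the device with pci-stub before
    # any host driver gets a chance to bind to it
    pci-stub.ids=10de:13c2

    # /etc/modprobe.d/blacklist-gpu.conf (name is arbitrary):
    # additionally keep the host driver from being loaded at all
    blacklist nouveau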
That would give us a setup we can rely on, and cover all use
cases you mentioned except that of someone assigning his laptop's
GPU to a guest and needing the console to be available before the
guest has started up because no other access to the machine is
available. But in that case, even with managed='detach', the user
would need to blindly restart the machine after guest shutdown,
wouldn't he?
Cheers.
--
Andrea Bolognani
Software Engineer - Virtualization Team