On 02/09/2016 19:15, Kirti Wankhede wrote:
> On 9/2/2016 3:35 PM, Paolo Bonzini wrote:
>> <device>
>>   <name>my-vgpu</name>
>>   <parent>pci_0000_86_00_0</parent>
>>   <capability type='mdev'>
>>     <type id='11'/>
>>     <uuid>0695d332-7831-493f-9e71-1c85c8911a08</uuid>
>>   </capability>
>> </device>
>>
>> After creating the vGPU, if required by the host driver, all the other
>> type ids would disappear from "virsh nodedev-dumpxml pci_0000_86_00_0" too.
>
> Thanks, Paolo, for the details.
>
> 'nodedev-create' parses the XML file and accordingly writes to the
> 'create' file in sysfs to create the mdev device. Right?
>
> At this moment, does libvirt know which VM this device would be
> associated with?

No, the VM will be associated with the nodedev through the UUID. The
nodedev is created separately from the VM.
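
For concreteness, the flow might look like this; a rough sketch assuming
a sysfs layout along the lines discussed in this thread (the path and
attribute names are illustrative, not settled):

  # What "virsh nodedev-create my-vgpu.xml" might boil down to: libvirt
  # parses the XML above, finds the parent device and the requested type
  # id, and writes the UUID to a hypothetical per-type "create" file.
  UUID=0695d332-7831-493f-9e71-1c85c8911a08
  PARENT=/sys/bus/pci/devices/0000:86:00.0
  echo "$UUID" > "$PARENT/mdev_supported_types/11/create"
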
>> When dumping the mdev with nodedev-dumpxml, it could show more
>> complete info, again taken from sysfs:
>>
>> <device>
>>   <name>my-vgpu</name>
>>   <parent>pci_0000_86_00_0</parent>
>>   <capability type='mdev'>
>>     <uuid>0695d332-7831-493f-9e71-1c85c8911a08</uuid>
>>     <!-- only the chosen type -->
>>     <type id='11'>
>>       <!-- ... snip ... -->
>>     </type>
>>     <capability type='pci'>
>>       <!-- no domain/bus/slot/function of course -->
>>       <!-- could show whatever PCI IDs are seen by the guest: -->
>>       <product id='...'>...</product>
>>       <vendor id='0x10de'>NVIDIA</vendor>
>>     </capability>
>>   </capability>
>> </device>
>>
>> Notice how the parent has mdev inside pci; the vGPU, if it has to have
>> pci at all, would have it inside mdev. This represents the difference
>> between the mdev provider and the mdev device.
>
> The parent of an mdev device might not always be a PCI device. I think
> we shouldn't consider it a PCI capability.

The <capability type='pci'> in the vGPU means that it _will_ be exposed
as a PCI device by VFIO.
The <capability type='pci'> in the physical GPU means that the GPU is a
PCI device.
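
To illustrate the nesting on the parent side, a sketch of what the
physical GPU's nodedev XML might look like (the mdev sub-capability and
its contents are assumptions, not a settled schema):

  <device>
    <name>pci_0000_86_00_0</name>
    <capability type='pci'>
      <!-- the usual domain/bus/slot/function, product, vendor... -->
      <capability type='mdev'>
        <!-- supported types advertised by the host driver -->
        <type id='11'/>
        <!-- ...other type ids, while still available... -->
      </capability>
    </capability>
  </device>
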
>> Random proposal for the domain XML too:
>>
>> <hostdev mode='subsystem' type='pci'>
>>   <source type='mdev'>
>>     <!-- possible alternative to uuid: <name>my-vgpu</name> ?!? -->
>>     <uuid>0695d332-7831-493f-9e71-1c85c8911a08</uuid>
>>   </source>
>>   <address type='pci' bus='0' slot='2' function='0'/>
>> </hostdev>
>
> When a user wants to assign two mdev devices to one VM, does the user
> have to add two such entries, or can the two devices be grouped in one
> entry?

Two entries, one per UUID, each with its own PCI address in the guest.
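
For example, two vGPUs in one domain might look like this (the second
UUID is made up for illustration):

  <hostdev mode='subsystem' type='pci'>
    <source type='mdev'>
      <uuid>0695d332-7831-493f-9e71-1c85c8911a08</uuid>
    </source>
    <address type='pci' bus='0' slot='2' function='0'/>
  </hostdev>
  <hostdev mode='subsystem' type='pci'>
    <source type='mdev'>
      <!-- hypothetical UUID of a second vGPU -->
      <uuid>7b67370c-3f2c-4a3d-9a66-63f7f0284a5e</uuid>
    </source>
    <address type='pci' bus='0' slot='3' function='0'/>
  </hostdev>
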
> On the other mail thread with the same subject we are thinking of
> creating a group of mdev devices in order to assign multiple mdev
> devices to one VM.

What is the advantage of managing mdev groups? (Sorry, I didn't follow
the other thread.)

Paolo