On Wed, Nov 18, 2020 at 09:57:14 +0000, Thanos Makatos wrote:
> > As a separate question, is there any performance benefit of emulating
> > a NVMe controller compared to e.g. virtio-scsi?
>
> We haven't measured that yet; I would expect it to be slightly faster
> and/or more CPU efficient but wouldn't be surprised if it isn't. The
> main benefit of using NVMe is that we don't have to install VirtIO
> drivers in the guest.

Okay, I'm not sold on the drivers bit, but that is definitely not a
problem with regard to adding support for emulating NVMe controllers in
libvirt.

As a starting point, a trivial way to model this in the XML would be:
  <controller type='nvme' index='1' model='nvme'/>

And then attach the storage to it as:
  <disk type='file' device='disk'>
    <source file='/Host/QEMUGuest1.qcow2'/>
    <target dev='sda' bus='nvme'/>
    <address type='drive' controller='1' bus='0' target='0' unit='0'/>
  </disk>
  <disk type='file' device='disk'>
    <source file='/Host/QEMUGuest2.qcow2'/>
    <target dev='sdb' bus='nvme'/>
    <address type='drive' controller='1' bus='0' target='0' unit='1'/>
  </disk>

The 'drive' address here maps the disk to the controller. This example
uses unit= to specify the namespace ID; both 'bus' and 'target' must
be 0.

You can theoretically also add your own address type if 'drive' doesn't
look right.
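
For illustration only, a dedicated address type might look like the
following (this 'nvme' address type and its 'namespace' attribute are
hypothetical, not part of the current libvirt schema):

  <disk type='file' device='disk'>
    <source file='/Host/QEMUGuest1.qcow2'/>
    <target dev='sda' bus='nvme'/>
    <!-- hypothetical address type; 'namespace' carries the NSID directly -->
    <address type='nvme' controller='1' namespace='1'/>
  </disk>
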
This model will have problems with hotplug/unplug if the NVMe spec
doesn't actually allow hotplug of a single namespace into a controller,
as libvirt's hotplug APIs only deal with one element at a time.
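
For illustration, attaching another namespace via e.g. 'virsh
attach-device' would mean feeding libvirt a single-disk fragment such as
the one below (the path and unit= value are just an example continuing
the XML above), which only works if the controller can accept a new
namespace at runtime:

  <disk type='file' device='disk'>
    <source file='/Host/QEMUGuest3.qcow2'/>
    <target dev='sdc' bus='nvme'/>
    <address type='drive' controller='1' bus='0' target='0' unit='2'/>
  </disk>
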
We could theoretically work around this by allowing hotplug of disks
corresponding to namespaces while the controller is not yet attached,
so that attaching the controller then attaches both the backends and
the controller itself. This is a bit hacky, though.

Another obvious solution is to disallow hotplug of the namespaces and
thus also the controller.