Hi Cole & Michal,
I'm sorry for the late response; I just got back from my trip today.
Thank you for your response; your suggestions are very helpful.
I have added Michal to this mail; Michal helped review my initial patchset
(https://www.spinics.net/linux/fedora/libvir/msg191339.html).
The main concern about this feature is the XML design.
My original XML design exposed too many QEMU details:
<vhost-user-blk-pci type='unix'>
  <source type='bind' path='/tmp/vhost-blk.sock'>
    <reconnect enabled='yes' timeout='5'/>
  </source>
  <queue num='4'/>
</vhost-user-blk-pci>
Following Cole's suggestion, a better design covering all the
vhost-user-scsi/blk features would look like this:
vhost-user-blk:
<disk type='vhostuser' device='disk'>
  <source type='unix' path='/path/to/vhost-user-blk.sock' mode='client'>
    <reconnect enabled='yes' timeout='5'/>
  </source>
  <target dev='vda' bus='virtio'/>
  <queue num='4'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</disk>
vhost-user-scsi:
<disk type='vhostuser' device='disk'>
  <source type='unix' path='/path/to/vhost-user-scsi.sock' mode='client'>
    <reconnect enabled='yes' timeout='5'/>
  </source>
  <target dev='sda' bus='scsi'/>
  <queue num='4'/>
</disk>
Conclusion:
1. Add a new type (vhostuser) for the disk element;
2. Add a queue sub-element to disk to support multiqueue (<queue num='4'/>),
   or reuse the driver element (<driver name='vhostuser' queues='4'/>);
   which one is better?
QEMU supports multiqueue like this:
-device vhost-user-scsi-pci,id=scsi0,chardev=spdk_vhost_scsi0,num_queues=4
-device vhost-user-blk-pci,chardev=spdk_vhost_blk0,num-queues=4
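Putting the pieces together, a full QEMU invocation for the vhost-user-blk case might look like the sketch below. The socket path and IDs here are illustrative placeholders; note that vhost-user also requires the guest RAM to be shared with the backend process, hence the memory-backend-file with share=on:

```sh
# Sketch of a vhost-user-blk setup with 4 queues (paths and ids are
# placeholders). Guest RAM must be file-backed and shared so the
# vhost-user backend can map it.
qemu-system-x86_64 \
    -m 1G \
    -object memory-backend-file,id=mem0,size=1G,mem-path=/dev/hugepages,share=on \
    -numa node,memdev=mem0 \
    -chardev socket,id=spdk_vhost_blk0,path=/tmp/vhost-blk.sock \
    -device vhost-user-blk-pci,chardev=spdk_vhost_blk0,num-queues=4
```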
Another question:
When QEMU connects to a vhost-user-scsi controller [1], there may be
multiple LUNs under one target, so a single <disk/> element would
represent multiple SCSI LUNs, and the 'dev' attribute
(<target dev='sda' bus='scsi'/>) would be ignored, right?
In other words, a vhost-user-scsi disk behaves more like a controller,
so maybe a <controller> element would be more suitable.
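For concreteness, this is roughly how a backend such as SPDK exposes multiple LUNs through a single vhost-user-scsi socket (a sketch using SPDK's RPC script; the bdev names and sizes are made up):

```sh
# Create two 64 MiB malloc bdevs and attach both to one vhost-scsi
# controller; the guest then sees two SCSI targets behind one socket.
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
scripts/rpc.py vhost_create_scsi_controller vhost.0
scripts/rpc.py vhost_scsi_controller_add_target vhost.0 0 Malloc0
scripts/rpc.py vhost_scsi_controller_add_target vhost.0 1 Malloc1
```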
I look forward to hearing from you as soon as possible.
[1]:
https://spdk.io/doc/vhost.html
Feng Li
Cole Robinson <crobinso(a)redhat.com> wrote on Thu, Oct 10, 2019 at 6:48 AM:
Sorry for the late reply, and thanks Jano for pointing out elsewhere
that this didn't receive a response.
On 8/12/19 5:56 AM, Li Feng wrote:
> Hi Guys,
>
> I want to add vhost-user-scsi-pci/vhost-user-blk-pci support
> to libvirt.
>
> The usage in qemu looks like this:
>
> Vhost-SCSI
> -chardev socket,id=char0,path=/var/tmp/vhost.0
> -device vhost-user-scsi-pci,id=scsi0,chardev=char0
> Vhost-BLK
> -chardev socket,id=char1,path=/var/tmp/vhost.1
> -device vhost-user-blk-pci,id=blk0,chardev=char1
>
Indeed that matches what I see for the qemu commits too:
https://git.qemu.org/?p=qemu.git;a=commit;h=00343e4b54b
https://git.qemu.org/?p=qemu.git;a=commit;h=f12c1ebddf7
> Which type should I add to libvirt?
> Type1:
> <hostdev mode='subsystem' type='vhost-user'>
>   <source protocol='vhost-user-scsi' path='/tmp/vhost-scsi.sock'/>
>   <alias name="vhost-user-scsi-disk1"/>
> </hostdev>
>
>
> Type2:
>
> <disk type='network' device='disk'>
>   <driver name='qemu' type='raw' cache='none' io='native'/>
>   <source protocol='vhost-user' path='/tmp/vhost-scsi.sock'/>
>   <target dev='sdb' bus='vhost-user-scsi'/>
>   <boot order='3'/>
>   <alias name='scsi0-0-0-1'/>
>   <address type='drive' controller='0' bus='0' target='0' unit='1'/>
> </disk>
>
>
> <disk type='network' device='disk'>
>   <driver name='qemu' type='raw' cache='none' io='native'/>
>   <source protocol='vhost-user' path='/tmp/vhost-blk.sock'/>
>   <target dev='vda' bus='vhost-user-blk'/>
>   <boot order='1'/>
>   <alias name='virtio-disk0'/>
>   <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
> </disk>
>
I think wiring this into <disk> makes more sense. <hostdev> is really an
abstraction for assigning a (typically) physical host device to the VM,
so it handles things like hiding a PCI device from the host, and passing
that exact device to the VM.
In the vhost-user-scsi/blk case, the host device is just a special
process running on the other side of a socket, and the device
represented to the guest is a typical virtio device. So to me it makes
more sense as a <disk> with a <source> that points at that socket.
target bus=virtio vs bus=scsi is already used to distinguish between
virtio-blk and virtio-scsi, so I think we can keep that bit as is, with
the <address type=drive|pci> to match. We just need to differentiate
between plain virtio and vhost-user.
Network devices already have vhostuser support:
<interface type='vhostuser'>
  <source type='unix' path='/tmp/vhost1.sock' mode='server|client'/>
  <model type='virtio'/>
</interface>
Internally that <source> is a virDomainChrSourceDefPtr, which is our
internal representation of a chardev. So I think something akin to this
is the way to go. It will likely require updating a LOT of places in the
code that check the disk type= field; probably most places that care
about whether type==NETWORK or type!=NETWORK will need to be mirrored
for the new type.
<disk type='vhostuser' device='disk'>
  <source type='unix' path='/path/to/vhost-user-blk.sock' mode='client'/>
  <target dev='vda' bus='virtio'/>
</disk>

<disk type='vhostuser' device='disk'>
  <source type='unix' path='/path/to/vhost-user-scsi.sock' mode='client'/>
  <target dev='sda' bus='scsi'/>
</disk>
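Presumably the vhost-user-blk <disk> above would map to QEMU arguments along these lines; the chardev id shown is hypothetical (libvirt would generate its own at runtime):

```sh
-chardev socket,id=chr-vu-vda,path=/path/to/vhost-user-blk.sock \
-device vhost-user-blk-pci,chardev=chr-vu-vda
```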
- Cole
--
The SmartX email address is only for business purpose. Any sent message
that is not related to the business is not authorized or permitted by
SmartX.
This mailbox is the work mailbox of 北京志凌海纳科技有限公司 (SmartX). Any mail sent from it that is unrelated to work has not received any express or implied authorization from the company.