How to hot-plug a new vhost-user-blk-pci device to a running VM?

On Thu, May 13, 2021 at 15:25:23 +0800, 梁朝军 wrote:
Hi guys,
Does anyone know how to hot-plug a new vhost-user-blk-pci device to a running VM?
Before starting the VM, I pass the disk through the QEMU command line like below.

<qemu:commandline>
  <qemu:arg value='-object'/>
  <qemu:arg value='memory-backend-file,id=mem0,size=4G,mem-path=/dev/hugepages,share=on'/>
  <qemu:arg value='-numa'/>
  <qemu:arg value='node,memdev=mem0'/>
  <qemu:arg value='-chardev'/>
  <qemu:arg value='socket,id=spdk_vhost_blk721ea46a-b306-11eb-a280-525400a98761,path=/var/tmp/vhost.721ea46a-b306-11eb-a280-525400a98761,reconnect=1'/>
  <qemu:arg value='-device'/>
  <qemu:arg value='vhost-user-blk-pci,chardev=spdk_vhost_blk721ea46a-b306-11eb-a280-525400a98761,bootindex=1,num-queues=4'/>
  <qemu:arg value='-chardev'/>
  <qemu:arg value='socket,id=spdk_vhost_blk2f699c58-d222-4629-9fdc-400c3aadc55e,path=/var/tmp/vhost.2f699c58-d222-4629-9fdc-400c3aadc55e,reconnect=1'/>
  <qemu:arg value='-device'/>
  <qemu:arg value='vhost-user-blk-pci,chardev=spdk_vhost_blk2f699c58-d222-4629-9fdc-400c3aadc55e,num-queues=4'/>
</qemu:commandline>

But I don't know how to live-add a vhost-user-blk-pci device to a running VM, even by calling the attachDevice API.
You need to use the proper and supported way to use vhost-user-blk:

<disk type='vhostuser' device='disk'>
  <driver name='qemu' type='raw'/>
  <source type='unix' path='/tmp/vhost-blk.sock'>
    <reconnect enabled='yes' timeout='10'/>
  </source>
  <target dev='vdf' bus='virtio'/>
</disk>

That works also with attachDevice.
OS: RHEL 7.4, libvirt version: 3.4
This is obviously way too old for it. You'll need at least libvirt-7.1 for that.
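The <disk type='vhostuser'> definition above can also be built programmatically before hot-plugging it via attachDevice. A minimal sketch using only the Python standard library; the socket path and target device mirror Peter's example, and the virsh invocation in the trailing comment is an illustrative assumption:

```python
import xml.etree.ElementTree as ET

def vhostuser_disk_xml(sock_path, target_dev):
    """Build the <disk type='vhostuser'> definition Peter describes."""
    disk = ET.Element("disk", type="vhostuser", device="disk")
    ET.SubElement(disk, "driver", name="qemu", type="raw")
    src = ET.SubElement(disk, "source", type="unix", path=sock_path)
    ET.SubElement(src, "reconnect", enabled="yes", timeout="10")
    ET.SubElement(disk, "target", dev=target_dev, bus="virtio")
    return ET.tostring(disk, encoding="unicode")

disk_xml = vhostuser_disk_xml("/tmp/vhost-blk.sock", "vdf")
print(disk_xml)
# The resulting XML could then be hot-plugged with e.g.
#   virsh attach-device <domain> disk.xml --live
# (domain and file names are placeholders), or passed to the
# attachDevice API directly.
```

The same string is what libvirt's attachDevice expects as its device-XML argument.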

Thanks Peter for your quick response. Is there any workaround to do that? As you know, we must take care of the risk of using the latest version in a production environment. Thanks a lot!

On Thu, May 13, 2021 at 23:11:36 +0800, Liang Chaojun wrote:
Thanks Peter for your quick response. Is there any workaround to do that? As you know, we must take care of the risk of using the latest version in a production environment.
The manual approach is to use 'virsh qemu-monitor-command' or the equivalent to attach the appropriate backends and frontends by hand. Obviously that is very far from anything I'd recommend using in any production environment.

Thanks, I have tried the qemu monitor as below. I used chardev-add to add a chardev and device_add to add it to the running VM. But I often hit an issue that causes the VM to crash:

qemu-system-x86_64: ../hw/virtio/vhost.c:1566: vhost_dev_get_config: Assertion `hdev->vhost_ops' failed.

virsh qemu-monitor-command spdk1 --hmp --cmd "chardev-add socket,id=spdk_vhost_blk0,path=/var/tmp/vhost.0,reconnect=1"
virsh qemu-monitor-command spdk1 --hmp --cmd "device_add vhost-user-blk-pci,chardev=spdk_vhost_blk0,num-queues=4"

On 5/14/21 8:33 AM, Liang Chaojun wrote:
Thanks, I have tried the qemu monitor as below. I used chardev-add to add a chardev and device_add to add it to the running VM. But I often hit an issue that causes the VM to crash:
qemu-system-x86_64: ../hw/virtio/vhost.c:1566: vhost_dev_get_config: Assertion `hdev->vhost_ops' failed.
virsh qemu-monitor-command spdk1 --hmp --cmd "chardev-add socket,id=spdk_vhost_blk0,path=/var/tmp/vhost.0,reconnect=1"
virsh qemu-monitor-command spdk1 --hmp --cmd "device_add vhost-user-blk-pci,chardev=spdk_vhost_blk0,num-queues=4"
Smells like the problem described here:

https://gitlab.com/qemu-project/qemu/-/commit/bc79c87bcde6587a37347f81332fbb...

But that's a pretty fresh commit (part of the qemu 6.0.0 release). And the commit it references is not that old either (qemu 5.1.0 release). What version of qemu are you running?

Michal

Thanks Michal and Peter for your response. I'm running it on qemu 5.1, built by myself. BTW, following Peter's suggestion, where can I get the latest rpms if I want to upgrade to libvirt 7.1? As far as I know, it seems to need more than twenty related rpms, not including dependencies.

On Mon, May 17, 2021 at 1:09 PM Liang Chaojun <jesonliang040705@hotmail.com> wrote:
Thanks Michal and Peter for your response. I'm running it on qemu 5.1, built by myself. BTW, following Peter's suggestion, where can I get the latest rpms if I want to upgrade to libvirt 7.1? As far as I know, it seems to need more than twenty related rpms, not including dependencies.
You can get the rpms from CentOS koji:

https://cbs.centos.org/koji/
https://koji.mbox.centos.org/koji/

or Fedora koji:

https://koji.fedoraproject.org/koji/

Since the versions of the rpm packages in RHEL 7.4 are too low, I am afraid it will be hard to install libvirt 7.1. It's better to install it on RHEL 8 or CentOS 8.

On 5/17/21 7:08 AM, Liang Chaojun wrote:
Thanks Michal and Peter for your response. I'm running it on qemu 5.1, built by myself. BTW, following Peter's suggestion, where can I get the latest rpms if I want to upgrade to libvirt 7.1? As far as I know, it seems to need more than twenty related rpms, not including dependencies.
Well, since you're building qemu yourself you could also build libvirt. Just be aware that the v7.3.0 release was the last one that supports RHEL-7. Newer releases might still work, but upstream does not aim to keep everything working there.

https://libvirt.org/platforms.html

Michal

Thanks all of you for your help. One more question regarding the vhost-user-blk-pci device type: how do I identify a vhost-blk disk inside a QEMU VM? For example, disk names look like vda, vdb, ..., but some applications in the VM want to detect that a certain entry is really the device they are waiting for. On Windows specifically, they always show up as disk 0, 1, 2, etc. Is there any way to tell those disks apart inside the VM? Thanks

On 5/21/21 5:28 PM, 梁朝军 wrote:
Thanks all of you for your help. One more question regarding the vhost-user-blk-pci device type: how do I identify a vhost-blk disk inside a QEMU VM? For example, disk names look like vda, vdb, ..., but some applications in the VM want to detect that a certain entry is really the device they are waiting for. On Windows specifically, they always show up as disk 0, 1, 2, etc. Is there any way to tell those disks apart inside the VM?
In general, no. Usually disks will be enumerated sequentially - thus the first disk on a sata/scsi/usb/... bus will be sda, the second will be sdb, and so on. But libvirt can't guarantee it - the same way you can't guarantee how a disk is going to be named with real HW.

Michal

On Mon, May 24, 2021 at 01:04:44PM +0200, Michal Prívozník wrote:
In general, no. Usually disks will be enumerated sequentially - thus the first disk on a sata/scsi/usb/... bus will be sda, the second will be sdb, and so on. But libvirt can't guarantee it - the same way you can't guarantee how a disk is going to be named with real HW.
You can set the 'serial' property in the disk in libvirt, and then match that in the guest. For Linux guests that's used in /dev/disk/by-id symlinks.

Regards,
Daniel

--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
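In libvirt XML terms, Daniel's suggestion means adding a <serial> element to the disk definition. A minimal sketch building such a definition; the serial string 'spdk-vol-0001', the source path, and the target device are all illustrative assumptions:

```python
import xml.etree.ElementTree as ET

# Virtio disk definition carrying a <serial> element, per Daniel's
# suggestion; every concrete value here is an illustrative example.
disk = ET.Element("disk", type="block", device="disk")
ET.SubElement(disk, "driver", name="qemu", type="raw")
ET.SubElement(disk, "source", dev="/dev/mapper/example-volume")
ET.SubElement(disk, "target", dev="vdb", bus="virtio")
ET.SubElement(disk, "serial").text = "spdk-vol-0001"

serial_xml = ET.tostring(disk, encoding="unicode")
print(serial_xml)
# In a Linux guest the serial typically surfaces as a symlink such as
#   /dev/disk/by-id/virtio-spdk-vol-0001
# (exact naming depends on the guest's udev rules); on Windows it is
# visible as the disk's serial number in the device properties.
```

An application in the guest can then look for its own serial instead of guessing at vda/vdb ordering.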

Hi guys, who can help me? What does this issue mean? I hit it when I attach a network interface:

libvirt: QEMU Driver error : internal error: unable to execute QEMU command 'netdev_add': Invalid parameter type for 'vhost', expected: boolean

Thanks!

On Fri, Jun 04, 2021 at 19:22:31 +0800, 梁朝军 wrote:
Hi guys,
Who can help me? What does this issue mean? I hit it when I attach a network interface:
libvirt: QEMU Driver error : internal error: unable to execute QEMU command 'netdev_add': Invalid parameter type for 'vhost', expected: boolean
Your qemu is too new for your libvirt. The 'netdev_add' command was converted to a strict description by the QMP schema and libvirt wasn't ready for that.

commit b6738ffc9f8be5a2a61236cd9bef7fd317982f01
Author: Peter Krempa <pkrempa@redhat.com>
Date:   Thu May 14 22:50:59 2020 +0200

    qemu: command: Generate -netdev command line via JSON->cmdline conversion

    The 'netdev_add' command was recently formally described in qemu via
    the QMP schema. This means that it also requires the arguments to be
    properly formatted. Our current approach is to generate the command
    line and then use qemuMonitorJSONKeywordStringToJSON to get the JSON
    properties for the monitor. This will not work if we need to pass
    some fields as numbers or booleans.

    In this step we re-do internals of qemuBuildHostNetStr to format a
    JSON object which is converted back via
    virQEMUBuildNetdevCommandlineFromJSON to the equivalent command line.
    This will later allow fixing of the monitor code to use the JSON
    object directly rather than rely on the conversion.

v6.3.0-139-gb6738ffc9f

Thus you need at least libvirt 6.4.0 with that qemu.
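Concretely, the failure is about JSON typing in the QMP command. A sketch of the difference; the field names follow qemu's tap netdev options, and the id/fd values are illustrative:

```python
import json

# Pre-schema style: every option was passed as a string, so older
# libvirt would send "vhost": "on" - a schema-validating qemu rejects
# that with exactly the "expected: boolean" error quoted above.
legacy_args = {"type": "tap", "id": "hostnet0", "fd": "29", "vhost": "on"}

# Schema-correct style: 'vhost' must be a real JSON boolean.
strict_args = {"type": "tap", "id": "hostnet0", "fd": "29", "vhost": True}

cmd = {"execute": "netdev_add", "arguments": strict_args}
print(json.dumps(cmd))
```

Upgrading libvirt fixes this because the newer monitor code emits the typed JSON form directly instead of converting a keyword string.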

Thanks a lot, Peter. BTW, one more question: recently we sometimes hit another issue where the VM can't boot from disk and is stuck at a black screen showing "Guest has not initialized the display (yet).". Our QEMU command line is like below:

/usr/bin/qemu-system-x86_64 -name guest=testvm_j5ei9x2a,debug-threads=on -S \
  -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-46-testvm_j5ei9x2a/master-key.aes \
  -machine pc-i440fx-5.1,accel=kvm,usb=off,dump-guest-core=off,mem-merge=off \
  -m 8192 -mem-prealloc -realtime mlock=off -smp 2,sockets=2,cores=1,threads=1 \
  -uuid 44b984dd-e1c7-45c8-b235-6c9ce3a8b86c -smbios type=0,vendor=phegda \
  -smbios type=1,manufacturer=phegda.com,product=hippo -no-user-config \
  -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-46-testvm_j5ei9x2a/monitor.sock,server,nowait \
  -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew \
  -global kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -boot strict=on \
  -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 \
  -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x3 \
  -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 \
  -drive if=none,id=drive-ide0-0-0,readonly=on \
  -device ide-cd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 \
  -drive file=/dev/disk/by-id/pbdx-vol-88663c6a-c5cb-11eb-9c3b-001b21bc1e4e,format=raw,if=none,id=drive-scsi0-0-0-0,cache=writethrough,werror=report,rerror=report \
  -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0,bootindex=1 \
  -drive file=/dev/disk/by-id/pbdx-vol-d8596026-862c-47cf-9fa5-8a16f337d02a,format=raw,if=none,id=drive-scsi0-0-0-1,cache=none,werror=report,rerror=report \
  -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=1,drive=drive-scsi0-0-0-1,id=scsi0-0-0-1 \
  -netdev tap,fd=29,id=hostnet0,vhost=on,vhostfd=32 \
  -device virtio-net-pci,netdev=hostnet0,id=net0,mac=e6:8e:42:c8:47:dd,bus=pci.0,addr=0x8 \
  -netdev tap,fd=33,id=hostnet1,vhost=on,vhostfd=34 \
  -device virtio-net-pci,netdev=hostnet1,id=net1,mac=e6:8e:fe:99:0f:63,bus=pci.0,addr=0x9 \
  -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 \
  -chardev socket,id=charchannel0,path=/var/hippo/channel/44b984dd-e1c7-45c8-b235-6c9ce3a8b86c.channel,server,nowait \
  -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=cn.com.pbdata.hippo.0 \
  -chardev socket,id=charchannel1,path=/var/hippo/channel/testvm_j5ei9x2a.44b984dd-e1c7-45c8-b235-6c9ce3a8b86c.channel,server,nowait \
  -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=cn.com.pbdata.hippo.1 \
  -vnc 0.0.0.0:2 -device qxl-vga,id=video0,ram_size=67108864,vram_size=67108864,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pci.0,addr=0x2 \
  -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 \
  -msg timestamp=on

Is it related to the QEMU version being too high for libvirt? Thanks


On Fri, May 14, 2021 at 14:33:37 +0800, Liang Chaojun wrote:
Thanks, I have tried the qemu monitor as below. I used chardev-add to add a chardev and device_add to add it to the running VM. But I often hit an issue that causes the VM to crash:
qemu-system-x86_64: ../hw/virtio/vhost.c:1566: vhost_dev_get_config: Assertion `hdev->vhost_ops' failed.
virsh qemu-monitor-command spdk1 --hmp --cmd "chardev-add socket,id=spdk_vhost_blk0,path=/var/tmp/vhost.0,reconnect=1"
virsh qemu-monitor-command spdk1 --hmp --cmd "device_add vhost-user-blk-pci,chardev=spdk_vhost_blk0,num-queues=4"
As noted, this is not an interface we'd provide support for. Please upgrade both libvirt and qemu and try the supported interface if you want us to deal with any problems you might have.
participants (6)
- Daniel P. Berrangé
- Han Han
- Liang Chaojun
- Michal Prívozník
- Peter Krempa
- 梁朝军