scsi passthrough differs between guests

Greetings,

I have two machines running the same distro, both with qemu 5.1.0; one runs libvirt 6.7.0, the other 6.8.0. I decided to test the viability of passing my SATA cdrom through to a vm, so I went to the libvirt docs, read a bit and added the following to a debian10 uefi vm running on libvirt 6.8.0:

  <controller type='scsi' index='0' model='virtio-scsi'>
    <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
  </controller>
  <hostdev mode='subsystem' type='scsi' managed='no'>
    <source>
      <adapter name='scsi_host11'/>
      <address bus='0' target='0' unit='0'/>
    </source>
    <readonly/>
    <address type='drive' controller='0' bus='0' target='0' unit='0'/>
  </hostdev>

I booted the vm and saw the cdrom in the guest; eject worked and so did mount. I then decided to move it to the other machine, running libvirt 6.7.0. I verified the cdrom is visible on that system, see:

  # lsscsi
  [0:0:0:0]    cd/dvd  HL-DT-ST  DVDRAM GH24NSD1  LW00  /dev/sr0

I inserted the same xml into another vm (libreelec uefi) but changed the host from 11 to 0; it looks like this:

  <controller type='scsi' index='0' model='virtio-scsi'>
    <alias name='scsi0'/>
    <address type='pci' domain='0x0000' bus='0x02' slot='0x01' function='0x0'/>
  </controller>
  <hostdev mode='subsystem' type='scsi' managed='no'>
    <source>
      <adapter name='scsi_host0'/>
      <address bus='0' target='0' unit='0'/>
    </source>
    <readonly/>
    <alias name='hostdev0'/>
    <address type='drive' controller='0' bus='0' target='0' unit='0'/>
  </hostdev>

But when I boot the vm, I see no cdrom. lspci shows this:

  03:00.0 SCSI storage controller: Red Hat, Inc. Virtio block device (rev 01)
  07:01.0 SCSI storage controller: Red Hat, Inc. Virtio SCSI

As there is no lsscsi or /proc/scsi/scsi in the guest, I cannot see any other possible scsi devices. I know for a fact that libreelec supports scsi/sata cdroms, as it is a common use case for streamers. I'm baffled about the two scsi controllers I see and why I cannot see the device. Is there a known issue in 6.7.0 with scsi pass-through?

Dagg.
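For reference, one quick way to double-check which scsi_hostN a drive sits behind on each host (the host numbers below are only examples; adjust to your own output):

  # the first field of lsscsi output, [H:C:T:L], is the SCSI host number,
  # so [0:0:0:0] means the drive hangs off scsi_host0
  lsscsi
  lsscsi -H

  # libvirt's own view of the same adapters
  virsh nodedev-list --cap scsi_host
  virsh nodedev-dumpxml scsi_host0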

On Wed, Oct 14, 2020 at 23:00:39 +0200, daggs wrote:
Greetings,
[...]
I'm baffled about the two scsi controllers I see and why I cannot see the device. is there a known issue in 6.7.0 with scsi pass-trough?
I don't see anything wrong with your configs. There were some changes related to SCSI hostdevs between 6.7.0 and 6.8.0, but none of them should actually impact that use case.

That said, could you please post the actual qemu command lines that libvirt formatted for the two VMs you mention above? The command line can be found in /var/log/libvirt/qemu/$VMNAME.log . Please make sure you post the latest/actual one. It'll help show whether anything changed between the two or whether there is a different problem.
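For example, one way to grab the most recent entry from that log (the domain name here is just an example):

  # the most recent start of the VM logs the full command line,
  # spread over continuation lines ending in '\'
  grep -n 'qemu-system-x86_64' /var/log/libvirt/qemu/debian10.log | tail -n 1
  tail -n 120 /var/log/libvirt/qemu/debian10.log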

Greetings Peter,
Sent: Thursday, October 15, 2020 at 9:52 AM
From: "Peter Krempa" <pkrempa@redhat.com>
To: "daggs" <daggs@gmx.com>
Cc: "libvirt-users@redhat.com" <libvirt-users@redhat.com>
Subject: Re: scsi passthrough differs between guests
I don't see anything wrong with your configs. There were some changes related to SCSI hostdevs between 6.7.0 and 6.8.0, but none of them should actually impact that use case.
That said, could you please post the actual qemu command lines that libvirt formatted for the two VMs you mention above?
The command line can be found in /var/log/libvirt/qemu/$VMNAME.log . Please make sure you post the latest/actual one. It'll help show whether anything changed between the two or whether there is a different problem.
here is the good cmd:

/usr/bin/qemu-system-x86_64 \
-name guest=debian10,debug-threads=on \
-S \
-object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-debian10/master-key.aes \
-blockdev '{"driver":"file","filename":"/usr/share/edk2-ovmf/OVMF_CODE.fd","node-name":"libvirt-pflash0-storage","auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-pflash0-format","read-only":true,"driver":"raw","file":"libvirt-pflash0-storage"}' \
-blockdev '{"driver":"file","filename":"/var/lib/libvirt/qemu/nvram/debian9_VARS.fd","node-name":"libvirt-pflash1-storage","auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-pflash1-format","read-only":false,"driver":"raw","file":"libvirt-pflash1-storage"}' \
-machine pc-q35-4.0,accel=kvm,usb=off,vmport=off,dump-guest-core=off,pflash0=libvirt-pflash0-format,pflash1=libvirt-pflash1-format \
-cpu host,migratable=on \
-m 6144 \
-overcommit mem-lock=off \
-smp 4,sockets=4,cores=1,threads=1 \
-uuid 84af935f-0afd-4021-a431-b6408a53efea \
-no-user-config \
-nodefaults \
-chardev socket,id=charmonitor,fd=26,server,nowait \
-mon chardev=charmonitor,id=monitor,mode=control \
-rtc base=utc,driftfix=slew \
-global kvm-pit.lost_tick_policy=delay \
-no-hpet \
-no-shutdown \
-global ICH9-LPC.disable_s3=1 \
-global ICH9-LPC.disable_s4=1 \
-boot strict=on \
-device pcie-root-port,port=0x10,chassis=1,id=pci.1,bus=pcie.0,multifunction=on,addr=0x2 \
-device pcie-root-port,port=0x11,chassis=2,id=pci.2,bus=pcie.0,addr=0x2.0x1 \
-device pcie-root-port,port=0x12,chassis=3,id=pci.3,bus=pcie.0,addr=0x2.0x2 \
-device pcie-root-port,port=0x13,chassis=4,id=pci.4,bus=pcie.0,addr=0x2.0x3 \
-device pcie-root-port,port=0x14,chassis=5,id=pci.5,bus=pcie.0,addr=0x2.0x4 \
-device pcie-root-port,port=0x15,chassis=6,id=pci.6,bus=pcie.0,addr=0x2.0x5 \
-device pcie-root-port,port=0x16,chassis=7,id=pci.7,bus=pcie.0,addr=0x2.0x6 \
-device qemu-xhci,p2=15,p3=15,id=usb,bus=pci.2,addr=0x0 \
-device virtio-scsi-pci,id=scsi0,bus=pci.7,addr=0x0 \
-device virtio-serial-pci,id=virtio-serial0,bus=pci.3,addr=0x0 \
-blockdev '{"driver":"file","filename":"/home/virt_admin/Machines/kvm/debian10.qcow2","node-name":"libvirt-1-storage","auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-1-format","read-only":false,"driver":"qcow2","file":"libvirt-1-storage","backing":null}' \
-device virtio-blk-pci,bus=pci.4,addr=0x0,drive=libvirt-1-format,id=virtio-disk0,bootindex=2 \
-netdev tap,fd=28,id=hostnet0 \
-device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:11:92:dd,bus=pci.1,addr=0x0 \
-chardev pty,id=charserial0 \
-device isa-serial,chardev=charserial0,id=serial0 \
-chardev socket,id=charchannel0,fd=29,server,nowait \
-device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 \
-vnc 127.0.0.1:0 \
-k en-us \
-device qxl-vga,id=video0,ram_size=67108864,vram_size=67108864,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pcie.0,addr=0x1 \
-device ich9-intel-hda,id=sound0,bus=pcie.0,addr=0x1b \
-device hda-duplex,id=sound0-codec0,bus=sound0.0,cad=0 \
-drive file=/dev/sg5,if=none,format=raw,id=drive-hostdev0,readonly=on \
-device scsi-generic,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-hostdev0,id=hostdev0 \
-device virtio-balloon-pci,id=balloon0,bus=pci.5,addr=0x0 \
-object rng-random,id=objrng0,filename=/dev/urandom \
-device virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.6,addr=0x0 \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
-msg timestamp=on

the bad one:

/usr/bin/qemu-system-x86_64 \
-name guest=streamer-vm-q35,debug-threads=on \
-S \
-object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-12-streamer-vm-q35/master-key.aes \
-blockdev '{"driver":"file","filename":"/usr/share/qemu/edk2-x86_64-secure-code.fd","node-name":"libvirt-pflash0-storage","auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-pflash0-format","read-only":true,"driver":"raw","file":"libvirt-pflash0-storage"}' \
-blockdev '{"driver":"file","filename":"/var/lib/libvirt/qemu/nvram/streamer-vm-q35_VARS.fd","node-name":"libvirt-pflash1-storage","auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-pflash1-format","read-only":false,"driver":"raw","file":"libvirt-pflash1-storage"}' \
-machine pc-q35-5.0,accel=kvm,usb=off,smm=on,dump-guest-core=off,pflash0=libvirt-pflash0-format,pflash1=libvirt-pflash1-format \
-cpu host,migratable=on \
-m 8192 \
-overcommit mem-lock=off \
-smp 1,maxcpus=2,sockets=1,dies=1,cores=1,threads=2 \
-uuid 4fb1463b-837c-40fc-a760-a69afc040a1a \
-display none \
-no-user-config \
-nodefaults \
-chardev socket,id=charmonitor,fd=25,server,nowait \
-mon chardev=charmonitor,id=monitor,mode=control \
-rtc base=utc,driftfix=slew \
-global kvm-pit.lost_tick_policy=delay \
-no-hpet \
-no-shutdown \
-global ICH9-LPC.disable_s3=1 \
-global ICH9-LPC.disable_s4=1 \
-boot strict=on \
-device i82801b11-bridge,id=pci.1,bus=pcie.0,addr=0x1e \
-device pci-bridge,chassis_nr=2,id=pci.2,bus=pci.1,addr=0x0 \
-device pcie-root-port,port=0x8,chassis=3,id=pci.3,bus=pcie.0,multifunction=on,addr=0x1 \
-device pcie-root-port,port=0x9,chassis=4,id=pci.4,bus=pcie.0,addr=0x1.0x1 \
-device pcie-root-port,port=0xa,chassis=5,id=pci.5,bus=pcie.0,addr=0x1.0x2 \
-device pcie-root-port,port=0xb,chassis=6,id=pci.6,bus=pcie.0,addr=0x1.0x3 \
-device pcie-root-port,port=0xc,chassis=7,id=pci.7,bus=pcie.0,addr=0x1.0x4 \
-device qemu-xhci,id=usb,bus=pci.4,addr=0x0 \
-device virtio-scsi-pci,id=scsi0,bus=pci.2,addr=0x1 \
-blockdev '{"driver":"file","filename":"/home/streamer/streamer.qcow2","node-name":"libvirt-1-storage","auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-1-format","read-only":false,"driver":"qcow2","file":"libvirt-1-storage","backing":null}' \
-device virtio-blk-pci,bus=pci.5,addr=0x0,drive=libvirt-1-format,id=virtio-disk0,bootindex=1 \
-netdev tap,fd=28,id=hostnet0 \
-device e1000e,netdev=hostnet0,id=net0,mac=52:54:00:5a:4c:8c,bus=pci.3,addr=0x0 \
-chardev pty,id=charserial0 \
-device isa-serial,chardev=charserial0,id=serial0 \
-blockdev '{"driver":"host_device","filename":"/dev/sg0","node-name":"libvirt-hostdev0-backend","read-only":true}' \
-device scsi-generic,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=libvirt-hostdev0-backend,id=hostdev0 \
-device vfio-pci,host=0000:00:02.0,id=hostdev1,bus=pci.7,addr=0x0,romfile=/home/streamer/gpu-8086:5902-uefi.rom \
-device vfio-pci,host=0000:00:1f.3,id=hostdev2,bus=pci.2,addr=0x2 \
-device usb-host,id=hostdev3,bus=usb.0,port=1 \
-device usb-host,id=hostdev4,bus=usb.0,port=2 \
-device virtio-balloon-pci,id=balloon0,bus=pci.6,addr=0x0 \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
-msg timestamp=on

Dagg

On Thu, Oct 15, 2020 at 12:36:08 +0200, daggs wrote:
Greetings Peter,
Sent: Thursday, October 15, 2020 at 9:52 AM
From: "Peter Krempa" <pkrempa@redhat.com>
To: "daggs" <daggs@gmx.com>
Cc: "libvirt-users@redhat.com" <libvirt-users@redhat.com>
Subject: Re: scsi passthrough differs between guests
I don't see anything wrong with your configs. There were some changes related to SCSI hostdevs between 6.7.0 and 6.8.0, but none of them should actually impact that use case.
That said, could you please post the actual qemu command lines that libvirt formatted for the two VMs you mention above?
The command line can be found in /var/log/libvirt/qemu/$VMNAME.log . Please make sure you post the latest/actual one. It'll help show whether anything changed between the two or whether there is a different problem.
here is the good cmd: /usr/bin/qemu-system-x86_64 \
[...]
-drive file=/dev/sg5,if=none,format=raw,id=drive-hostdev0,readonly=on \
-device scsi-generic,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-hostdev0,id=hostdev0 \
[...]
the bad one: /usr/bin/qemu-system-x86_64 \
[...]
-blockdev '{"driver":"host_device","filename":"/dev/sg0","node-name":"libvirt-hostdev0-backend","read-only":true}' \
-device scsi-generic,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=libvirt-hostdev0-backend,id=hostdev0 \
This doesn't correlate with the version numbers you've mentioned, because the "new" syntax which uses -blockdev was present in both 6.7.0 and 6.8.0.

Anyway, the problem is almost certainly that the hostdev code doesn't detect that it's a cdrom. We have such a hack in the disk code which turns a 'host_device' into a 'host_cdrom'. I'll try fixing it, but I don't have a machine with a cdrom handy, so it would be nice if you could test it afterwards.

Thanks for the report.
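To illustrate the distinction described above (a sketch only, not the actual patch; the /dev/sr0 path and node name are examples): the hostdev backend from the log uses the generic 'host_device' driver, while the disk-code hack would have emitted the cdrom-aware 'host_cdrom' driver for an optical device:

  # what libvirt generated for the SCSI hostdev backend (from the "bad" log):
  -blockdev '{"driver":"host_device","filename":"/dev/sg0","node-name":"libvirt-hostdev0-backend","read-only":true}' \
  # what the disk code produces for a cdrom-backed disk, shown only to
  # illustrate the host_device -> host_cdrom switch (example path/node name):
  -blockdev '{"driver":"host_cdrom","filename":"/dev/sr0","node-name":"libvirt-1-storage","read-only":true}' \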

Sent: Thursday, October 15, 2020 at 2:01 PM
From: "Peter Krempa" <pkrempa@redhat.com>
To: "daggs" <daggs@gmx.com>
Cc: "libvirt-users@redhat.com" <libvirt-users@redhat.com>
Subject: Re: scsi passthrough differs between guests
[...]
-drive file=/dev/sg5,if=none,format=raw,id=drive-hostdev0,readonly=on \
-device scsi-generic,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-hostdev0,id=hostdev0 \
[...]
the bad one: /usr/bin/qemu-system-x86_64 \
[...]
-blockdev '{"driver":"host_device","filename":"/dev/sg0","node-name":"libvirt-hostdev0-backend","read-only":true}' \
-device scsi-generic,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=libvirt-hostdev0-backend,id=hostdev0 \
This doesn't correlate with the version numbers you've mentioned, because the "new" syntax which uses -blockdev was present in both 6.7.0 and 6.8.0.
Greetings Peter,
The "good" syntax was created with a libvirt version earlier than 6.7; libvirt has been upgraded several times since the vm was created. I don't know whether the syntax changes with every version or not.
Anyway, the problem is almost certainly that the hostdev code doesn't detect that it's a cdrom. We have such a hack in the disk code which turns a 'host_device' into a 'host_cdrom'. I'll try fixing it, but I don't have a machine with a cdrom handy, so it would be nice if you could test it afterwards.
Thanks for the report.
My system is gentoo, hence every pkg is compiled; if you provide a patch, I can test it easily. Thanks for the effort, Dagg.
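(For anyone else on Gentoo wanting to test such a patch, one way is Portage's user-patch mechanism; the patch file name below is made up:)

  # drop the patch into the user-patch directory for libvirt and rebuild it once
  mkdir -p /etc/portage/patches/app-emulation/libvirt
  cp scsi-hostdev-cdrom.patch /etc/portage/patches/app-emulation/libvirt/
  emerge --oneshot app-emulation/libvirt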

On Thu, Oct 15, 2020 at 13:14:56 +0200, daggs wrote:
Greetings Peter,
Sent: Thursday, October 15, 2020 at 2:01 PM
From: "Peter Krempa" <pkrempa@redhat.com>
To: "daggs" <daggs@gmx.com>
Cc: "libvirt-users@redhat.com" <libvirt-users@redhat.com>
Subject: Re: scsi passthrough differs between guests
[...]
-drive file=/dev/sg5,if=none,format=raw,id=drive-hostdev0,readonly=on \
-device scsi-generic,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-hostdev0,id=hostdev0 \
[...]
the bad one: /usr/bin/qemu-system-x86_64 \
[...]
-blockdev '{"driver":"host_device","filename":"/dev/sg0","node-name":"libvirt-hostdev0-backend","read-only":true}' \
-device scsi-generic,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=libvirt-hostdev0-backend,id=hostdev0 \
This doesn't correlate with the version numbers you've mentioned, because the "new" syntax which uses -blockdev was present in both 6.7.0 and 6.8.0.
The "good" syntax was created with a libvirt version earlier than 6.7; libvirt has been upgraded several times since the vm was created. I don't know whether the syntax changes with every version or not.
The new syntax was added in libvirt-6.6.0. Your VM was probably started before libvirt-6.6.0 and you then upgraded your system to a newer version. Running VMs keep the configuration they were started with; restarting the VM will use the new syntax and thus break.
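The per-domain log also records which libvirt and qemu versions each start of the VM used, so this is easy to confirm (domain name below is just an example):

  # each start entry begins with a header naming the libvirt and qemu versions
  grep 'libvirt version' /var/log/libvirt/qemu/debian10.log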
Anyway, the problem is almost certainly that the hostdev code doesn't detect that it's a cdrom. We have such a hack in the disk code which turns a 'host_device' into a 'host_cdrom'. I'll try fixing it, but I don't have a machine with a cdrom handy, so it would be nice if you could test it afterwards.
Thanks for the report.
My system is gentoo, hence every pkg is compiled; if you provide a patch, I can test it easily.
Cool. I'll CC you on the patches; I need to make some changes to the test suite first, though, as we didn't even have unit tests for this case.

Greetings Peter,
Sent: Thursday, October 15, 2020 at 2:23 PM
From: "Peter Krempa" <pkrempa@redhat.com>
To: "daggs" <daggs@gmx.com>
Cc: "libvirt-users@redhat.com" <libvirt-users@redhat.com>
Subject: Re: scsi passthrough differs between guests
The new syntax was added in libvirt-6.6.0. Your VM was probably started before libvirt-6.6.0 and you then upgraded your system to a newer version. Running VMs keep the configuration they were started with; restarting the VM will use the new syntax and thus break.
I've checked the log; the first vm boot was with libvirt version: 5.6.0, qemu version: 4.0.0, kernel: 5.3.5.
Cool. I'll CC you on the patches; I need to make some changes to the test suite first, though, as we didn't even have unit tests for this case.
np. send them when you can. Dagg.