On 09/24/2015 03:26 PM, Alex Holst wrote:
> Quoting Laine Stump (laine@laine.org):
>> On 08/12/2015 02:34 PM, Alex Holst wrote:
>>> I would really appreciate some pointers on what I am doing wrong here.
>>>
>>> I have a need to run multiple virtual guests which each have their own
>>> GPU and some USB controllers passed through. I am able to run one of
>>> the guests like this (assuming the vfio stuff has happened elsewhere),
>>> but I would prefer to use virsh:
[..]
> Thank you for your input. I have been working on this issue on and off
> since my original mail to this list.
>
> I have been unable to properly migrate the single VM from a shell
> script,
You will not be able to migrate a guest that has a passthrough GPU
device (or any other PCI device assigned to the guest using vfio or kvm
device assignment), if that's one of the things you're trying to do.
> much less run several VMs that each have a
> pass-through device.
So you have multiple GPUs on the hardware?
> As for details missing from my previous mail: This is an Ubuntu 15.04
> host running several Windows 10 guests. The entire kvm command line I
> have running is from this guide at Puget Systems:
>
> https://www.pugetsystems.com/labs/articles/Multiheaded-NVIDIA-Gaming-usin...
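(For context, that style of setup invokes qemu directly with vfio-pci
devices on the command line. Roughly this shape -- a reconstruction for
illustration using the device addresses from this thread, not the
article's exact command, and with the disk format assumed to be raw:

  qemu-system-x86_64 -enable-kvm -machine q35 -m 8192 \
      -cpu host,kvm=off \
      -vga none -nographic \
      -device vfio-pci,host=02:00.0,multifunction=on \
      -device vfio-pci,host=02:00.1 \
      -device vfio-pci,host=00:1a.0 \
      -device vfio-pci,host=00:1d.0 \
      -drive file=/vm2/foo.img,format=raw,if=virtio

The kvm=off flag hides the KVM signature from the guest, which NVIDIA's
driver checks for.)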
I haven't looked at this page in detail (and wouldn't know what to look
for if I did :-), but it appears that it was last edited over a year
ago, and I think there has been substantial progress/change in GPU
passthrough since then. The "new hotness" for information about GPU
passthrough is here:
http://vfio.blogspot.com/
In particular, start with this article:
http://vfio.blogspot.com/2015/05/vfio-gpu-how-to-series-part-1-hardware.html
> I have discovered several problems with this guide, in particular that I
> can remove the pci_stub ids from /etc/initramfs-tools/modules and the
> virtual Windows host continues to work just fine.
> So, now I'm back to scratch using virt-install and pointing to
> the existing img file that works with the kvm shell script:
>
> $ virt-install --name foo --memory 8192 --machine q35 \
>     --host-device 02:00.0 --host-device 02:00.1 \
>     --host-device 00:1a.0 --host-device 00:1d.0 \
>     --disk /vm2/foo.img --boot menu=on
>
> Starting install...
> Creating domain...
> Connected to domain foo
> Escape character is ^]
>
> Even though the 02:00.0 and 02:00.1 devices are the GPU and its on-board
> audio, the console remains in text mode and the actual guest OS is
> nowhere to be seen.
> $ lspci -nn | grep 02:00
> 02:00.0 VGA compatible controller [0300]: NVIDIA Corporation Device [10de:1401] (rev a1)
> 02:00.1 Audio device [0403]: NVIDIA Corporation Device [10de:0fba] (rev a1)
>
> $ lspci -nn | egrep '00:1[ad]'
> 00:1a.0 USB controller [0c03]: Intel Corporation C600/X79 series chipset USB2 Enhanced Host Controller #2 [8086:1d2d] (rev 06)
> 00:1d.0 USB controller [0c03]: Intel Corporation C600/X79 series chipset USB2 Enhanced Host Controller #1 [8086:1d26] (rev 06)
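(As an aside, a quick way to check which driver actually owns those
devices is lspci's -k flag, which prints a "Kernel driver in use:" line:

  $ lspci -nnk -s 02:00.0

If that shows vfio-pci or pci-stub rather than nouveau/nvidia, the
binding took effect regardless of the initramfs entries.)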
I'm guessing the above lspci is on the host rather than the guest (since
you say the guest OS is "nowhere to be seen").
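A quick way to see what virt-install actually generated -- including
whether an emulated video device was added alongside the passthrough
devices -- is to dump the domain XML and look at the graphics, video,
and hostdev elements:

  $ virsh dumpxml foo | grep -E -A4 '<(graphics|video|hostdev)'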
> Do you have any additional pointers for me on how to properly pass the
> GPU through so the guest OS detects it and is able to make use of the
> attached display?
I don't use virt-install enough to be intimately familiar with what is
generated from the --host-device option, but I don't see that you've
specified the PCI address on the *guest* anywhere, nor that you've told
it not to set up an emulated graphics device. So I'm guessing that the
generated guest has an emulated graphics device at the standard location,
and the GPU is visible at some other address in the guest (at best it
would show up there as a secondary display). But really, I think you may
have better luck just starting over using the information at
vfio.blogspot.com, since the person writing it is one of the people
actually debugging problems with GPU passthrough and submitting kernel
and qemu patches to fix them.
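As a sketch of what that could look like (assuming the virt-install on
15.04 accepts these flags -- I haven't verified them against that exact
version), telling virt-install to create no emulated graphics at all
would make the passed-through GPU the only display adapter the guest
sees:

  $ virt-install --name foo --memory 8192 --machine q35 \
      --graphics none \
      --host-device 02:00.0 --host-device 02:00.1 \
      --host-device 00:1a.0 --host-device 00:1d.0 \
      --disk /vm2/foo.img --boot menu=on

And if the guest-side address still needs pinning afterwards, each
<hostdev> entry in "virsh edit foo" can be given an explicit guest
<address> element, e.g. (the slot value here is just an example):

  <hostdev mode='subsystem' type='pci' managed='yes'>
    <source>
      <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
    </source>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
  </hostdev>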