On Tue, Jun 29, 2021 at 10:05:17AM +0200, Erik Skultety wrote:
...
> > +Example guest definition without launchSecurity
> > +===============================================
> > +
> > +Minimal domain XML for a protected virtualization guest using the
> > +``iommu='on'`` setting for each virtio device.
>
> I don't know how s390-pv works but for example with AMD SEV it is
> required to use `iommu='on'` otherwise the device is not visible inside
> the VM so I would like to make sure there is no misunderstanding and
> it is correct.
> Can you elaborate on how the device is not visible in the VM? IIRC 'iommu=on'
> makes sure that the guest virtio driver is able to negotiate the
> VIRTIO_F_IOMMU_PLATFORM feature, which in connection with the correct IOMMU
> model setting makes SEV work with virtio and an IOMMU
> (AFAIR OVMF has a dedicated SEV IOMMU driver).
> Therefore, that flag should have nothing to do with device visibility; in
> fact, in the x86_64 case it will be a PCI device, so you'll always be able
> to list those.
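For reference, the setting under discussion is the per-device ``iommu``
driver attribute in the domain XML; a minimal sketch for a virtio-net
interface (the surrounding interface/network elements here are just
illustrative):

```xml
<!-- virtio device with iommu='on', so the guest driver can negotiate
     the VIRTIO_F_IOMMU_PLATFORM feature -->
<interface type='network'>
  <source network='default'/>
  <model type='virtio'/>
  <driver iommu='on'/>
</interface>
```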
https://bugzilla.redhat.com/show_bug.cgi?id=1804227

We had a discussion about this BZ: someone tried to hot-plug a device
into a VM with AMD SEV enabled and the device was not visible (or was
possibly visible but did not work) in the guest because iommu was not set.

Here is a QEMU commit message that enables iommu_platform if
confidential guest support is used:
commit 9f88a7a3df11a5aaa6212ea535d40d5f92561683
Author: David Gibson <david@gibson.dropbear.id.au>
Date:   Thu Jun 4 14:20:24 2020 +1000

    confidential guest support: Alter virtio default properties for protected guests

    The default behaviour for virtio devices is not to use the platforms normal
    DMA paths, but instead to use the fact that it's running in a hypervisor
    to directly access guest memory. That doesn't work if the guest's memory
    is protected from hypervisor access, such as with AMD's SEV or POWER's PEF.

    So, if a confidential guest mechanism is enabled, then apply the
    iommu_platform=on option so it will go through normal DMA mechanisms.
    Those will presumably have some way of marking memory as shared with
    the hypervisor or hardware so that DMA will work.
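On the libvirt side the counterpart of that QEMU behaviour is combining
<launchSecurity> with iommu='on' on each virtio device. A minimal SEV sketch
(the cbitpos/reducedPhysBits/policy values below are illustrative and
platform-dependent):

```xml
<!-- values are illustrative; the real ones can be queried with
     'virsh domcapabilities' on the target host -->
<launchSecurity type='sev'>
  <cbitpos>47</cbitpos>
  <reducedPhysBits>1</reducedPhysBits>
  <policy>0x0003</policy>
</launchSecurity>
```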
Pavel