On 1/13/20 1:16 PM, Andrea Bolognani wrote:
On Mon, 2020-01-13 at 12:53 +0100, Peter Krempa wrote:
> On Mon, Jan 13, 2020 at 12:44:21 +0100, Andrea Bolognani wrote:
>> + <summary>
>> + qemu: Add NVMe support
>> + </summary>
>> + <description>
>> + NVMe disks present in the host can now be assigned to QEMU guests.
>
> This is severely misleading. NVMe could be used before in at least two
> different ways [1][2]. This one adds another way which is a combination of
> those two. The driver is in userspace but the qemu block layer can be
> used. This means that the frontend can be emulated and blockjobs are
> possible but there's some performance benefit.
>
> [1] device assignment: you get performance but can't migrate or use
> blockjobs. Guest requires drivers.
> [2] normal block device: kernel is involved thus has performance penalty
> but there's more features and flexibility.
I tried to describe the change as well as I could, based on my
limited understanding of the feature and what I could gather from
skimming the relevant commit messages, so I'm not entirely surprised
such a description is lacking :)
This is *exactly* why we should get whoever contributes a change to
also document it in the release notes at the same time: not only does
it naturally distribute the load so that I don't have to scramble
almost every month to get them done before release, but it also
ensures the result is of higher quality because of 1) deep
familiarity with the patchset at hand and 2) memory not having had
a chance to degrade in the intervening weeks.
Right. Mea culpa.
CC'ing Michal who contributed the patches. Can one of you please
come up with a better replacement for the above? Thanks!
How about this:
<change>
<summary>
qemu: Allow accessing NVMe disks directly
</summary>
<description>
Before this release there were two ways to configure an NVMe disk for
a domain. The first was using <disk/> with the <source/>
pointing to the <code>/dev/nvmeXXXX</code> device. The other was PCI
assignment via the <hostdev/> element. Both have their disadvantages:
the former adds the latency of the file system and block layers of the
host kernel, while the latter prohibits domain migration. This release
adds a third way of configuring an NVMe disk, which combines the
advantages and avoids the disadvantages of the previous two. It is
accessible via <disk type='nvme'/>.
</description>
</change>
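For reference, a disk configured the new way could look roughly like
this (a sketch based on my reading of the patches; the PCI address,
namespace and target device are just example values):

```xml
<disk type='nvme' device='disk'>
  <!-- QEMU's userspace NVMe driver handles the device directly -->
  <driver name='qemu' type='raw'/>
  <!-- the NVMe controller is identified by its PCI address;
       namespace selects which NVMe namespace to expose -->
  <source type='pci' managed='yes' namespace='1'>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
  <!-- the frontend is still emulated, so blockjobs keep working -->
  <target dev='vda' bus='virtio'/>
</disk>
```

Since the device goes through the QEMU block layer rather than VFIO
device assignment, the guest sees an ordinary virtio disk.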
Michal