[libvirt] [PATCH 0/3] news: Fix and update for libvirt 6.0.0

Andrea Bolognani (3):
  news: Fix typo (Libivrt -> Libvirt)
  news: Rearrange a few entries
  news: Update for libvirt 6.0.0

 docs/news.xml | 111 +++++++++++++++++++++++++++++++++++++++-----------
 1 file changed, 87 insertions(+), 24 deletions(-)

--
2.24.1

Signed-off-by: Andrea Bolognani <abologna@redhat.com>
---
 docs/news.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/news.xml b/docs/news.xml
index 4f1bea4fb5..f05d5f1736 100644
--- a/docs/news.xml
+++ b/docs/news.xml
@@ -95,7 +95,7 @@
         <description>
           For a long time libvirt was assuming that a backing file is RAW
           when the format was not specified. This didn't pose a problem until blockdev
-          support was enabled in last release. Libivrt now requires that
+          support was enabled in last release. Libvirt now requires that
           the format is specified in the image metadata or domain XML and the
           VM will refuse to start otherwise. Additionally the error message
           now links to the knowledge base which summarizes how to fix the images.
--
2.24.1
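
For anyone needing to fix affected guests: the backing format can be recorded either in the image metadata (e.g. qemu-img's backing_fmt creation option) or explicitly in the domain XML. A minimal sketch of the latter, with illustrative file names:

  <disk type='file' device='disk'>
    <driver name='qemu' type='qcow2'/>
    <source file='/var/lib/libvirt/images/overlay.qcow2'/>
    <backingStore type='file'>
      <format type='qcow2'/>
      <source file='/var/lib/libvirt/images/base.qcow2'/>
    </backingStore>
    <target dev='vda' bus='virtio'/>
  </disk>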

On 1/13/20 12:44 PM, Andrea Bolognani wrote:
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
---
 docs/news.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

Michal

Some were in the wrong section, some in the wrong version.

Signed-off-by: Andrea Bolognani <abologna@redhat.com>
---
 docs/news.xml | 68 ++++++++++++++++++++++++++-------------------------
 1 file changed, 35 insertions(+), 33 deletions(-)

diff --git a/docs/news.xml b/docs/news.xml
index f05d5f1736..56b1e36d38 100644
--- a/docs/news.xml
+++ b/docs/news.xml
@@ -43,7 +43,16 @@
 <libvirt>
   <release version="v6.0.0" date="unreleased">
-    <section title="New features">
+    <section title="Packaging changes">
+      <change>
+        <summary>
+          support for python2 is removed
+        </summary>
+        <description>
+          Libvirt is no longer able to be built using the
+          Python 2 binary. Python 3 must be used instead.
+        </description>
+      </change>
       <change>
         <summary>
           docs: the python docutils toolset is now required
@@ -54,6 +63,8 @@
           written in the RST as an alternative to HTML.
         </description>
       </change>
+    </section>
+    <section title="New features">
       <change>
         <summary>
           new PCI hostdev address type: unassigned
@@ -68,6 +79,29 @@
           guest.
         </description>
       </change>
+      <change>
+        <summary>
+          Provide init scripts for sub-deaemons
+        </summary>
+        <description>
+          So far libvirt shipped systemd unit files for sub-daemons. With this
+          release, init scripts are available too. Package maintainers can
+          choose which one to install via <code>--with-init-script</code>
+          configure option.
+        </description>
+      </change>
+    </section>
+    <section title="Removed features">
+      <change>
+        <summary>
+          'phyp' Power Hypervisor driver removed
+        </summary>
+        <description>
+          The 'phyp' Power Hypervisor driver has not seen active development
+          since 2011 and does not seem to have any real world usage. It
+          has now been removed.
+        </description>
+      </change>
     </section>
     <section title="Improvements">
       <change>
@@ -102,27 +136,6 @@
         </description>
       </change>
     </section>
-    <section title="Removed features">
-      <change>
-        <summary>
-          support for python2 is removed
-        </summary>
-        <description>
-          Libvirt is no longer able to be built using the
-          Python 2 binary. Python 3 must be used instead.
-        </description>
-      </change>
-      <change>
-        <summary>
-          'phyp' Power Hypervisor driver removed
-        </summary>
-        <description>
-          The 'phyp' Power Hypervisor driver has not seen active development
-          since 2011 and does not seem to have any real world usage. It
-          has now been removed.
-        </description>
-      </change>
-    </section>
   </release>
   <release version="v5.10.0" date="2019-12-02">
     <section title="New features">
@@ -224,17 +237,6 @@
           down, these scripts were rewritten into Python.
         </description>
       </change>
-      <change>
-        <summary>
-          Provide init scripts for sub-deaemons
-        </summary>
-        <description>
-          So far libvirt shipped systemd unit files for sub-daemons. With this
-          release, init scripts are available too. Package maintainers can
-          choose which one to install via <code>--with-init-script</code>
-          configure option.
-        </description>
-      </change>
     </section>
     <section title="Bug fixes">
       <change>
--
2.24.1
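
As context for the "unassigned" entry above: the new address type is set on the <hostdev/> device itself, roughly as in the sketch below (the PCI source address is illustrative). The device is managed by libvirt but not actually attached to the guest:

  <hostdev mode='subsystem' type='pci' managed='yes'>
    <source>
      <address domain='0x0000' bus='0x06' slot='0x02' function='0x0'/>
    </source>
    <address type='unassigned'/>
  </hostdev>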

On 1/13/20 12:44 PM, Andrea Bolognani wrote:
Some were in the wrong section, some in the wrong version.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
---
 docs/news.xml | 68 ++++++++++++++++++++++++++-------------------------
 1 file changed, 35 insertions(+), 33 deletions(-)
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

Michal

Signed-off-by: Andrea Bolognani <abologna@redhat.com>
---
 docs/news.xml | 61 +++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 61 insertions(+)

diff --git a/docs/news.xml b/docs/news.xml
index 56b1e36d38..2da7e1e297 100644
--- a/docs/news.xml
+++ b/docs/news.xml
@@ -90,6 +90,30 @@
           configure option.
         </description>
       </change>
+      <change>
+        <summary>
+          qemu: Support cold-unplug of sound devices
+        </summary>
+      </change>
+      <change>
+        <summary>
+          qemu: Implement VIR_MIGRATE_PARAM_TLS_DESTINATION
+        </summary>
+        <description>
+          This flag, which can be enabled using <code>virsh</code>'s
+          <code>--tls-destination</code> option, allows migration to succeed
+          in situations where there is a mismatch between the destination's
+          hostname and the information stored in its TLS certificate.
+        </description>
+      </change>
+      <change>
+        <summary>
+          qemu: Add NVMe support
+        </summary>
+        <description>
+          NVMe disks present in the host can now be assigned to QEMU guests.
+        </description>
+      </change>
     </section>
     <section title="Removed features">
       <change>
@@ -120,6 +144,24 @@
           lzop should be used.
         </description>
       </change>
+      <change>
+        <summary>
+          domain: Improve job stat handling
+        </summary>
+        <description>
+          It is now possible to retrieve stats for completed and failed jobs.
+        </description>
+      </change>
+      <change>
+        <summary>
+          qemu: Don't hold monitor and agent job at the same time
+        </summary>
+        <description>
+          Before this change, a malicious (or buggy)
+          <code>qemu-guest-agent</code> running in the guest could make other
+          libvirt APIs unavailable for an unbounded amount of time.
+        </description>
+      </change>
     </section>
     <section title="Bug fixes">
       <change>
@@ -135,6 +177,25 @@
           now links to the knowledge base which summarizes how to fix the images.
         </description>
       </change>
+      <change>
+        <summary>
+          qemu: Fix non-shared storage migration over NBD
+        </summary>
+      </change>
+      <change>
+        <summary>
+          qemu: Generate a single MAC address for hotplugged network devices
+        </summary>
+        <description>
+          Since libvirt 4.6.0, when hotplugging a network device that didn't
+          have a MAC address already assigned by the user, two separate
+          addresses would be generated: one for the live configuration, which
+          would show up immediately, and one for the inactive configuration,
+          which would show up after the first reboot. This situation was
+          clearly undesirable, so a single MAC address is now generated and
+          used both for the live configuration and the inactive one.
+        </description>
+      </change>
     </section>
   </release>
   <release version="v5.10.0" date="2019-12-02">
--
2.24.1
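
For the VIR_MIGRATE_PARAM_TLS_DESTINATION entry, the corresponding virsh invocation would look something like the following (guest name, hostname and URI are illustrative):

  virsh migrate --live --tls --tls-destination dst.example.com \
        guest qemu+ssh://dst.internal.example.com/system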

On Mon, Jan 13, 2020 at 12:44:21 +0100, Andrea Bolognani wrote:
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
---
 docs/news.xml | 61 +++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 61 insertions(+)
[...]
+      <change>
+        <summary>
+          qemu: Add NVMe support
+        </summary>
+        <description>
+          NVMe disks present in the host can now be assigned to QEMU guests.
This is severely misleading. NVMe could be used before in at least two
different ways [1][2]. This one adds another way which is a combination of
those two: the driver is in userspace, but the qemu block layer can be
used. This means that the frontend can be emulated and blockjobs are
possible, but there's still some performance benefit.

[1] device assignment: you get performance but can't migrate or use
    blockjobs. Guest requires drivers.
[2] normal block device: the kernel is involved, thus there's a performance
    penalty, but more features and flexibility.
+        </description>
+      </change>
     </section>
     <section title="Removed features">
       <change>
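
For reference, the two pre-existing options Peter describes map to domain XML roughly as follows (device path and PCI address are illustrative):

  <!-- [2] normal block device, going through the host kernel -->
  <disk type='block' device='disk'>
    <driver name='qemu' type='raw'/>
    <source dev='/dev/nvme0n1'/>
    <target dev='vdb' bus='virtio'/>
  </disk>

  <!-- [1] PCI device assignment -->
  <hostdev mode='subsystem' type='pci' managed='yes'>
    <source>
      <address domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
    </source>
  </hostdev>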

On Mon, 2020-01-13 at 12:53 +0100, Peter Krempa wrote:
On Mon, Jan 13, 2020 at 12:44:21 +0100, Andrea Bolognani wrote:
+        <summary>
+          qemu: Add NVMe support
+        </summary>
+        <description>
+          NVMe disks present in the host can now be assigned to QEMU guests.
This is severely misleading. NVMe could be used before in at least two
different ways [1][2]. This one adds another way which is a combination of
those two: the driver is in userspace, but the qemu block layer can be
used. This means that the frontend can be emulated and blockjobs are
possible, but there's still some performance benefit.

[1] device assignment: you get performance but can't migrate or use
    blockjobs. Guest requires drivers.
[2] normal block device: the kernel is involved, thus there's a performance
    penalty, but more features and flexibility.
I tried to describe the change as well as I could, based on my limited
understanding of the feature and what I could gather from skimming the
relevant commit messages, so I'm not entirely surprised such a description
is lacking :)

This is *exactly* why we should get whoever contributes a change to also
document it in the release notes at the same time: not only does it
naturally distribute the load so that I don't have to scramble almost every
month to get them done before release, but it also ensures the result is of
higher quality because of 1) deep familiarity with the patchset at hand and
2) memory not having had a chance to degrade in the intervening weeks.

CC'ing Michal, who contributed the patches. Can either of you please come
up with a superior replacement for the above? Thanks!

--
Andrea Bolognani / Red Hat / Virtualization

On 1/13/20 1:16 PM, Andrea Bolognani wrote:
On Mon, 2020-01-13 at 12:53 +0100, Peter Krempa wrote:
On Mon, Jan 13, 2020 at 12:44:21 +0100, Andrea Bolognani wrote:
+        <summary>
+          qemu: Add NVMe support
+        </summary>
+        <description>
+          NVMe disks present in the host can now be assigned to QEMU guests.
This is severely misleading. NVMe could be used before in at least two
different ways [1][2]. This one adds another way which is a combination of
those two: the driver is in userspace, but the qemu block layer can be
used. This means that the frontend can be emulated and blockjobs are
possible, but there's still some performance benefit.

[1] device assignment: you get performance but can't migrate or use
    blockjobs. Guest requires drivers.
[2] normal block device: the kernel is involved, thus there's a performance
    penalty, but more features and flexibility.
I tried to describe the change as well as I could, based on my limited
understanding of the feature and what I could gather from skimming the
relevant commit messages, so I'm not entirely surprised such a description
is lacking :)

This is *exactly* why we should get whoever contributes a change to also
document it in the release notes at the same time: not only does it
naturally distribute the load so that I don't have to scramble almost every
month to get them done before release, but it also ensures the result is of
higher quality because of 1) deep familiarity with the patchset at hand and
2) memory not having had a chance to degrade in the intervening weeks.
Right. Mea culpa.
CC'ing Michal, who contributed the patches. Can either of you please come
up with a superior replacement for the above? Thanks!
How about this:

<change>
  <summary>
    qemu: Allow accessing NVMe disks directly
  </summary>
  <description>
    Before this release there were two ways to configure an NVMe disk for a
    domain. The first was using <disk/> with the <source/> pointing to
    <code>/dev/nvmeXXXX</code>; the other was PCI assignment via the
    <hostdev/> element. Both have their disadvantages: the former adds the
    latency of the host kernel's file system and block layers, while the
    latter prohibits domain migration. This release adds a third way of
    configuring an NVMe disk which combines the advantages and drops the
    disadvantages of the previous two. It's accessible via
    <disk type='nvme'/>.
  </description>
</change>

Michal
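
For completeness, the new syntax looks roughly like this in a domain definition (PCI address and namespace values are illustrative):

  <disk type='nvme' device='disk'>
    <driver name='qemu' type='raw'/>
    <source type='pci' managed='yes' namespace='1'>
      <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </source>
    <target dev='vde' bus='virtio'/>
  </disk>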

On Mon, 2020-01-13 at 13:35 +0100, Michal Privoznik wrote:
On 1/13/20 1:16 PM, Andrea Bolognani wrote:
CC'ing Michal, who contributed the patches. Can either of you please come
up with a superior replacement for the above? Thanks!
How about this:
<change>
  <summary>
    qemu: Allow accessing NVMe disks directly
  </summary>
  <description>
    Before this release there were two ways to configure an NVMe disk for a
    domain. The first was using <disk/> with the <source/> pointing to
    <code>/dev/nvmeXXXX</code>; the other was PCI assignment via the
    <hostdev/> element. Both have their disadvantages: the former adds the
    latency of the host kernel's file system and block layers, while the
    latter prohibits domain migration. This release adds a third way of
    configuring an NVMe disk which combines the advantages and drops the
    disadvantages of the previous two. It's accessible via
    <disk type='nvme'/>.
  </description>
</change>
Looks good. Care to send that as a separate patch, and possibly give an
R-b to this one (with the NVMe hunk dropped, of course)?

--
Andrea Bolognani / Red Hat / Virtualization

On 1/13/20 2:00 PM, Andrea Bolognani wrote:
On Mon, 2020-01-13 at 13:35 +0100, Michal Privoznik wrote:
On 1/13/20 1:16 PM, Andrea Bolognani wrote:
CC'ing Michal, who contributed the patches. Can either of you please come
up with a superior replacement for the above? Thanks!
How about this:
<change>
  <summary>
    qemu: Allow accessing NVMe disks directly
  </summary>
  <description>
    Before this release there were two ways to configure an NVMe disk for a
    domain. The first was using <disk/> with the <source/> pointing to
    <code>/dev/nvmeXXXX</code>; the other was PCI assignment via the
    <hostdev/> element. Both have their disadvantages: the former adds the
    latency of the host kernel's file system and block layers, while the
    latter prohibits domain migration. This release adds a third way of
    configuring an NVMe disk which combines the advantages and drops the
    disadvantages of the previous two. It's accessible via
    <disk type='nvme'/>.
  </description>
</change>
Looks good. Care to send that as a separate patch, and possibly give an
R-b to this one (with the NVMe hunk dropped, of course)?
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

Michal