Release of libvirt-9.9.0
by Jiri Denemark
The 9.9.0 release of both libvirt and libvirt-python is tagged, and
signed tarballs and source RPMs are available at
https://download.libvirt.org/
https://download.libvirt.org/python/
Thanks to everybody who helped with this release by sending patches,
reviewing, testing, or providing feedback. Your work is greatly
appreciated.
* New features
* QEMU: implement reverting external snapshots
Reverting external snapshots is now possible using the existing API
``virDomainRevertToSnapshot()``. Management applications can check the
host capabilities for an ``<externalSnapshot/>`` element within the list
of guest features to see whether the current libvirt supports both
deleting and reverting external snapshots.
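As a minimal sketch of that capability check, assuming the element shows up under ``<guest><features>`` in the capabilities XML (the sample document below is an abbreviated, made-up excerpt, not real libvirt output):

```python
import xml.etree.ElementTree as ET

def supports_external_snapshot(caps_xml: str) -> bool:
    """Return True if the capabilities XML lists <externalSnapshot/>
    among the guest features, i.e. this libvirt can both delete and
    revert external snapshots."""
    root = ET.fromstring(caps_xml)
    return root.find("./guest/features/externalSnapshot") is not None

# Abbreviated, hypothetical capabilities excerpt for illustration only.
SAMPLE = """
<capabilities>
  <guest>
    <features>
      <externalSnapshot/>
    </features>
  </guest>
</capabilities>
"""

print(supports_external_snapshot(SAMPLE))  # True for this sample
```

A real application would feed the XML returned by the connection's capabilities call into such a check before attempting the revert.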
* virsh: add ``console --resume`` support
The ``virsh console`` subcommand now accepts a ``--resume`` option. This
will resume a paused guest after connecting to the console.
* Improvements
* virsh: Improve ``virsh start --console`` behavior
``virsh start --console`` now tries to connect to the guest console
before starting the vCPUs.
* virsh: Improve ``virsh create --console`` behavior
``virsh create --console`` now tries to connect to the guest console
before starting the vCPUs.
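The point of attaching before the vCPUs run is to avoid losing early boot output. A rough sketch of the equivalent call order with libvirt-python follows; the ``FakeDomain`` class is a stand-in used purely to illustrate the ordering, not a real libvirt object:

```python
def start_with_console(dom, stream):
    """Sketch of what `virsh start --console` now does: create the
    domain paused, attach to the console, and only then start the
    vCPUs, so no early boot output is lost."""
    dom.createWithFlags(1)            # VIR_DOMAIN_START_PAUSED == 1
    dom.openConsole(None, stream, 0)  # virDomainOpenConsole
    dom.resume()                      # virDomainResume starts the vCPUs

class FakeDomain:
    """Stand-in for libvirt.virDomain that just records call order."""
    def __init__(self):
        self.calls = []
    def createWithFlags(self, flags):
        self.calls.append("create-paused" if flags & 1 else "create")
    def openConsole(self, dev, stream, flags):
        self.calls.append("console")
    def resume(self):
        self.calls.append("resume")

dom = FakeDomain()
start_with_console(dom, stream=None)
print(dom.calls)  # console attach happens before the vCPUs run
```

The same ordering applies to ``virsh console --resume`` on an already-created but paused guest: console first, resume second.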
Enjoy.
Jirka
Re: [PATCH v8 0/4] pci hotplug tracking
by Vladimir Sementsov-Ogievskiy
On 02.11.23 14:31, Michael S. Tsirkin wrote:
> On Thu, Oct 05, 2023 at 12:29:22PM +0300, Vladimir Sementsov-Ogievskiy wrote:
>> Hi all!
>>
>> Main thing this series does is DEVICE_ON event - a counter-part to
>> DEVICE_DELETED. A guest-driven event that device is powered-on.
>> Details are in patch 2. The new event is paired with a corresponding
>> command, query-hotplug.
>
> Several things questionable here:
> 1. depending on guest activity you can get as many
> DEVICE_ON events as you like
No, I've made it so that it may be sent only once per device.
> 2. it's just for shpc and native pcie - things are
> confusing enough for management, we should make sure
> it can work for all devices
Agreed, I'm thinking about it.
> 3. what about non hotpluggable devices? do we want the event for them?
>
I think yes, especially if we add an async=true|false flag for device_add, so that a successful device_add is always followed by DEVICE_ON - just as device_del is followed by DEVICE_DELETED.
Maybe, to generalize, it should be called not DEVICE_ON (which mostly relates to hotplug controller statuses) but DEVICE_ADDED - a full counterpart to DEVICE_DELETED.
>
> I feel this needs actual motivation so we can judge what's the
> right way to do it.
My first motivation for this series was the fact that a successful device_add doesn't guarantee that the hard disk was actually hotplugged into the guest. This relates to some problems with shpc/pcie hotplug we had in the past; they are mostly fixed now, but it is still good for a management tool to know that all actions related to the hotplug controller are done and we have a "green light".
Recently a new motivation came up, as I described in my "ping" letter <6bd19a07-5224-464d-b54d-1d738f5ba8f7(a)yandex-team.ru>: we have a performance degradation because of commit 7bed89958bfbf40df, which introduced drain_call_rcu() in device_add to make it more synchronous. So my suggestion is to instead make it more asynchronous (probably behind a special flag) and rely on the DEVICE_ON event.
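As a rough sketch of the management-side flow this implies (the names DEVICE_ON and query-hotplug come from the series itself, but the helper functions and the exact event payload shape below are hypothetical, and the QMP transport is stubbed out with plain dicts):

```python
import json

def build_device_add(driver: str, dev_id: str) -> str:
    """Build a QMP device_add command; in the proposed scheme the
    management tool would not treat its success as hotplug completion."""
    return json.dumps({
        "execute": "device_add",
        "arguments": {"driver": driver, "id": dev_id},
    })

def hotplug_complete(events, dev_id: str) -> bool:
    """Scan received QMP events for a DEVICE_ON matching our device id,
    mirroring how DEVICE_DELETED is awaited after device_del."""
    return any(
        ev.get("event") == "DEVICE_ON"
        and ev.get("data", {}).get("device") == dev_id
        for ev in events
    )

cmd = build_device_add("virtio-blk-pci", "disk1")
received = [{"event": "DEVICE_ON", "data": {"device": "disk1"}}]
print(hotplug_complete(received, "disk1"))  # True once DEVICE_ON arrived
```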
>
>
>>
>> v8:
>> - improve naming, wording and style
>> - make new QMP interface experimental
>>
>>
>> Vladimir Sementsov-Ogievskiy (4):
>> qapi/qdev.json: unite DEVICE_* event data into single structure
>> qapi: add DEVICE_ON and query-hotplug infrastructure
>> shpc: implement DEVICE_ON event and query-hotplug
>> pcie: implement DEVICE_ON event and query-hotplug
>>
>> hw/core/hotplug.c | 12 +++
>> hw/pci-bridge/pci_bridge_dev.c | 14 +++
>> hw/pci-bridge/pcie_pci_bridge.c | 1 +
>> hw/pci/pcie.c | 83 +++++++++++++++
>> hw/pci/pcie_port.c | 1 +
>> hw/pci/shpc.c | 86 +++++++++++++++
>> include/hw/hotplug.h | 11 ++
>> include/hw/pci/pci_bridge.h | 2 +
>> include/hw/pci/pcie.h | 2 +
>> include/hw/pci/shpc.h | 2 +
>> include/hw/qdev-core.h | 7 ++
>> include/monitor/qdev.h | 6 ++
>> qapi/qdev.json | 178 +++++++++++++++++++++++++++++---
>> softmmu/qdev-monitor.c | 58 +++++++++++
>> 14 files changed, 451 insertions(+), 12 deletions(-)
>>
>> --
>> 2.34.1
>
--
Best regards,
Vladimir
Re: [PATCH v8 0/4] pci hotplug tracking
by Vladimir Sementsov-Ogievskiy
[cc Peter, Nikolay and libvirt list]
On 02.11.23 11:06, Vladimir Sementsov-Ogievskiy wrote:
> Ping.
>
> And one addition. We have a case where the commit
>
> commit 7bed89958bfbf40df9ca681cefbdca63abdde39d
> Author: Maxim Levitsky <mlevitsk(a)redhat.com>
> Date: Tue Oct 6 14:38:58 2020 +0200
>
> device_core: use drain_call_rcu in in qmp_device_add
> Soon, a device removal might only happen on RCU callback execution.
> This is okay for device-del which provides a DEVICE_DELETED event,
> but not for the failure case of device-add. To avoid changing
> monitor semantics, just drain all pending RCU callbacks on error.
>
> noticeably slows down VM initialization (several calls to device_add for pc-dimm).
>
> And looking at the commit message, I see that what I do in this series is exactly the kind of change to monitor semantics it wanted to avoid.
>
> What do you think?
>
> Maybe we need a boolean "async" parameter for device_add, which would turn off the drain_call_rcu() call and rely on the user to handle DEVICE_ON?
>
> On 05.10.23 12:29, Vladimir Sementsov-Ogievskiy wrote:
>> Hi all!
>>
>> Main thing this series does is DEVICE_ON event - a counter-part to
>> DEVICE_DELETED. A guest-driven event that device is powered-on.
>> Details are in patch 2. The new event is paired with a corresponding
>> command, query-hotplug.
>>
>>
>> v8:
>> - improve naming, wording and style
>> - make new QMP interface experimental
>>
>>
>> Vladimir Sementsov-Ogievskiy (4):
>> qapi/qdev.json: unite DEVICE_* event data into single structure
>> qapi: add DEVICE_ON and query-hotplug infrastructure
>> shpc: implement DEVICE_ON event and query-hotplug
>> pcie: implement DEVICE_ON event and query-hotplug
>>
>> hw/core/hotplug.c | 12 +++
>> hw/pci-bridge/pci_bridge_dev.c | 14 +++
>> hw/pci-bridge/pcie_pci_bridge.c | 1 +
>> hw/pci/pcie.c | 83 +++++++++++++++
>> hw/pci/pcie_port.c | 1 +
>> hw/pci/shpc.c | 86 +++++++++++++++
>> include/hw/hotplug.h | 11 ++
>> include/hw/pci/pci_bridge.h | 2 +
>> include/hw/pci/pcie.h | 2 +
>> include/hw/pci/shpc.h | 2 +
>> include/hw/qdev-core.h | 7 ++
>> include/monitor/qdev.h | 6 ++
>> qapi/qdev.json | 178 +++++++++++++++++++++++++++++---
>> softmmu/qdev-monitor.c | 58 +++++++++++
>> 14 files changed, 451 insertions(+), 12 deletions(-)
>>
>
--
Best regards,
Vladimir
ANNOUNCE: Mailing list move complete
by Daniel P. Berrangé
This is an announcement to the effect that the mailing list move is now
complete. TL;DR the new list addresses are:
* announce(a)lists.libvirt.org (formerly libvirt-announce(a)redhat.com)
Low volume, announcements of releases and other important info
* users(a)lists.libvirt.org (formerly libvirt-users(a)redhat.com)
End user questions and discussions and collaboration
* devel(a)lists.libvirt.org (formerly libvir-list(a)redhat.com)
Patch submission for development of main project
* security(a)lists.libvirt.org (formerly libvir-security(a)redhat.com)
Submission of security sensitive bug reports
The online archive and membership mgmt interface is
https://lists.libvirt.org
In my original announcement[1] I mentioned that people would need to
manually re-subscribe. Due to a mix-up in communications, our IT admins
went ahead and migrated the entire existing subscriber base for all
lists, so there is NO need to re-subscribe to any of the lists. If you
were filtering your mail, you may need to update your filters to match
the new list IDs.
With the new list server, HyperKitty is providing the web interface, so
if you wish to interact with the lists entirely via the browser, this is
now possible. Note that it requires you to register for an account and
set a password, even if you are already a list subscriber.
If you mistakenly send to the old lists you should receive an auto-reply
about the moved destinations.
Note, we had some technical issues on Thursday/Friday, so if you sent
mails on those two days they probably will not have reached any lists,
and so you may wish to re-send them.
With regards,
Daniel
[1] https://listman.redhat.com/archives/libvirt-announce/2023-October/000650....
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
RFC: Switch to a date-based versioning scheme
by Andrea Bolognani
Since we're just a few months away from the 10.0.0 release, I thought
it would be a good time to bring up this idea.
Can we move to date-based version numbers? I suggest having
libvirt 24.01.0 instead of 10.0.0
        24.03.0            10.1.0
        24.04.0            10.2.0
        ...
        24.11.0            10.9.0
        24.12.0           10.10.0
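For illustration, the proposed numbers could be derived mechanically from the release date (a hypothetical helper, assuming a YY.MM.P scheme as in the table above):

```python
def date_based_version(year: int, month: int, point: int = 0) -> str:
    """Map a release date to the proposed YY.MM.P version string."""
    return f"{year % 100:02d}.{month:02d}.{point}"

print(date_based_version(2024, 1))   # the release currently named 10.0.0
print(date_based_version(2024, 12))  # the release currently named 10.10.0
```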
The big advantage is that, once version numbers are obviously
date-based, any expectation of them being interpreted according to
semver[1] is immediately gone.
Of course semver doesn't make sense for us, given our extremely
strong backwards compatibility guarantees, and that's exactly why
we've left it behind with 2.0.0; however, that's something that's not
immediately obvious to someone who's not very involved with our
development process, and regarless of our intentions libvirt version
numbers *will* be mistakenly assumed to be semver-compliant on
occasion.
People are quite used to date-based version numbers thanks to Ubuntu
having used them for almost two decades, so I don't think anyone is
going to be confused by the move. And since our release schedule is
already date-based, having the versioning scheme match that just
makes perfect sense IMO.
Up until now, one could have argued in favor of the current
versioning scheme because of the single-digit major version
component, but that's going away next year regardless, which makes
this the perfect time to raise the topic :)
Thoughts?
[1] https://semver.org/
--
Andrea Bolognani / Red Hat / Virtualization