[libvirt PATCH 0/5] Add support for vDPA block devices
by Jonathon Jongsma
See https://bugzilla.redhat.com/show_bug.cgi?id=1900770.
Jonathon Jongsma (5):
conf: add ability to configure a vdpa block disk device
qemu: add virtio-blk-vhost-vdpa capability
qemu: make vdpa connect function more generic
qemu: consider vdpa block devices for memlock limits
qemu: Implement support for vDPA block devices
docs/formatdomain.rst | 19 ++++++++-
src/ch/ch_monitor.c | 1 +
src/conf/domain_conf.c | 7 ++++
src/conf/schemas/domaincommon.rng | 13 +++++++
src/conf/storage_source_conf.c | 6 ++-
src/conf/storage_source_conf.h | 1 +
src/libxl/xen_xl.c | 1 +
src/qemu/qemu_block.c | 20 ++++++++++
src/qemu/qemu_capabilities.c | 2 +
src/qemu/qemu_capabilities.h | 1 +
src/qemu/qemu_command.c | 24 +++++++++++-
src/qemu/qemu_command.h | 1 +
src/qemu/qemu_domain.c | 37 +++++++++++++++++-
src/qemu/qemu_interface.c | 23 -----------
src/qemu/qemu_interface.h | 2 -
src/qemu/qemu_migration.c | 2 +
src/qemu/qemu_snapshot.c | 4 ++
src/qemu/qemu_validate.c | 45 +++++++++++++++++++---
src/storage_file/storage_source.c | 1 +
tests/qemuhotplugmock.c | 4 +-
tests/qemuxml2argvdata/disk-vhostvdpa.args | 35 +++++++++++++++++
tests/qemuxml2argvdata/disk-vhostvdpa.xml | 21 ++++++++++
tests/qemuxml2argvmock.c | 2 +-
tests/qemuxml2argvtest.c | 2 +
24 files changed, 235 insertions(+), 39 deletions(-)
create mode 100644 tests/qemuxml2argvdata/disk-vhostvdpa.args
create mode 100644 tests/qemuxml2argvdata/disk-vhostvdpa.xml
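For context, a disk of the new type would presumably be configured along these lines (a sketch inferred from the test file names above and the virtio-blk-vhost-vdpa capability; the exact element and attribute names are defined by patch 1):

```xml
<disk type='vhostvdpa' device='disk'>
  <driver name='qemu' type='raw'/>
  <!-- the vhost-vdpa character device exposed by the kernel -->
  <source dev='/dev/vhost-vdpa-0'/>
  <target dev='vda' bus='virtio'/>
</disk>
```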
--
2.40.1
1 year, 3 months
Error : virHostCPUGetKVMMaxVCPUs:1228 : KVM is not supported on this platform: Function not implemented.
by Mario Marietto
Hello.
I'm running Debian bookworm on my ARM Chromebook (model "xe303c12") and
I've recompiled the kernel (5.4) to enable KVM, so my system now looks like
this:
$ lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description: Debian GNU/Linux 12 (bookworm)
Release: 12
Codename: bookworm
$ uname -a
Linux chromarietto 5.4.244-stb-cbe
#8 SMP PREEMPT Sat Aug 19 22:19:32 UTC 2023 armv7l GNU/Linux
$ uname -r
5.4.244-stb-cbe
$ kvm-ok
INFO: /dev/kvm exists
KVM acceleration can be used
$ qemu-system-arm --version
QEMU emulator version 5.1.0 (v5.1.0-dirty)
Copyright (c) 2003-2020 Fabrice Bellard and the QEMU Project developers
$ python3 --version
Python 3.11.2
I have installed libvirt 9.7.0, QEMU 5.1, and virt-manager from source,
with the final goal of connecting QEMU, KVM, and libvirt together to
virtualize FreeBSD 13.2 for 32-bit ARM.
Some useful information about my platform:
root@chromarietto:/home/marietto/Desktop# virt-host-validate
QEMU: Checking if device /dev/kvm exists : PASS
QEMU: Checking if device /dev/kvm is accessible : PASS
QEMU: Checking if device /dev/vhost-net exists : PASS
QEMU: Checking if device /dev/net/tun exists : PASS
QEMU: Checking for cgroup 'cpu' controller support : PASS
QEMU: Checking for cgroup 'cpuacct' controller support : PASS
QEMU: Checking for cgroup 'cpuset' controller support : PASS
QEMU: Checking for cgroup 'memory' controller support : PASS
QEMU: Checking for cgroup 'devices' controller support : PASS
QEMU: Checking for cgroup 'blkio' controller support : PASS
QEMU: Checking for device assignment IOMMU support : WARN (No ACPI IORT table found, IOMMU not supported by this hardware platform)
QEMU: Checking for secure guest support : WARN (Unknown if this platform has Secure Guest support)
LXC: Checking for Linux >= 2.6.26 : PASS
LXC: Checking for namespace ipc : PASS
LXC: Checking for namespace mnt : PASS
LXC: Checking for namespace pid : PASS
LXC: Checking for namespace uts : PASS
LXC: Checking for namespace net : PASS
LXC: Checking for namespace user : PASS
LXC: Checking for cgroup 'cpu' controller support : PASS
LXC: Checking for cgroup 'cpuacct' controller support : PASS
LXC: Checking for cgroup 'cpuset' controller support : PASS
LXC: Checking for cgroup 'memory' controller support : PASS
LXC: Checking for cgroup 'devices' controller support : PASS
LXC: Checking for cgroup 'freezer' controller support : FAIL (Enable 'freezer' in kernel Kconfig file or mount/enable cgroup controller in your system)
LXC: Checking for cgroup 'blkio' controller support : PASS
LXC: Checking if device /sys/fs/fuse/connections exists : PASS
# lsmod | grep kvm
(no output; I built the options needed to enable KVM directly into the kernel rather than as modules)
# virsh --connect qemu:///system capabilities | grep baselabel
<baselabel type='kvm'>+1002:+1002</baselabel>
<baselabel type='qemu'>+1002:+1002</baselabel>
The error that I'm not able to fix is the following:
root@chromarietto:~# virsh domcapabilities --machine virt
--emulatorbin /usr/local/bin/qemu-system-arm
2023-08-29 10:17:59.110+0000: 1763: error : virHostCPUGetKVMMaxVCPUs:1228 :
KVM is not supported on this platform: Function not implemented
error: failed to get emulator capabilities
error: KVM is not supported on this platform: Function not implemented
and this is the log I got when running libvirtd with the debug option
enabled:
root@chromarietto:~# libvirtd --debug
[Tue, 29 Aug 2023 10:10:11 virt-manager 2141] DEBUG (createvm:494)
UEFI found, setting it as default.
[Tue, 29 Aug 2023 10:10:11 virt-manager 2141] DEBUG (createvm:728)
Guest type set to os_type=hvm, arch=armv7l, dom_type=kvm
[Tue, 29 Aug 2023 10:10:11 virt-manager 2141] DEBUG (guest:546) Prefer
EFI => True
2023-08-29 10:10:12.972+0000: 1765: error :
virHostCPUGetKVMMaxVCPUs:1228 : KVM is not supported on this platform:
Function not implemented
[Tue, 29 Aug 2023 10:10:12 virt-manager 2141] DEBUG
(domcapabilities:250) Error fetching domcapabilities XML
Traceback (most recent call last):
File "/usr/local/share/virt-manager/virtinst/domcapabilities.py",
line 245, in build_from_params
xml = conn.getDomainCapabilities(emulator, arch,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/libvirt.py", line 4612, in
getDomainCapabilities
raise libvirtError('virConnectGetDomainCapabilities() failed')
libvirt.libvirtError: KVM is not supported on this platform: Function
not implemented
2023-08-29 10:10:14.157+0000: 1762: error : virHostCPUGetKVMMaxVCPUs:1228
: KVM is not supported on this platform: Function not implemented
[Tue, 29 Aug 2023 10:10:14 virt-manager 2141] DEBUG
(domcapabilities:250) Error fetching domcapabilities XML
Traceback (most recent call last):
File "/usr/local/share/virt-manager/virtinst/domcapabilities.py",
line 245, in build_from_params
xml = conn.getDomainCapabilities(emulator, arch,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/libvirt.py", line 4612, in
getDomainCapabilities
raise libvirtError('virConnectGetDomainCapabilities() failed')
libvirt.libvirtError: KVM is not supported on this platform: Function
not implemented
[Tue, 29 Aug 2023 10:10:14 virt-manager 2141] DEBUG (createvm:497)
Error checking for UEFI default
Traceback (most recent call last):
File "/usr/local/share/virt-manager/virtManager/createvm.py", line
491, in _set_caps_state
guest.enable_uefi()
File "/usr/local/share/virt-manager/virtinst/guest.py", line 589, in
enable_uefi
path = self._lookup_default_uefi_path()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/share/virt-manager/virtinst/guest.py", line 848, in
_lookup_default_uefi_path
raise RuntimeError(_("Libvirt version does not support UEFI."))
RuntimeError: Libvirt version does not support UEFI
Does anyone know how to fix this error?
--
Mario.
[PATCH Libvirt v2 00/10] Support dirty page rate upper limit
by ~hyman
Hi, this is the latest version of the series. Compared with version 1,
some key modifications have been made, inspired and suggested by Peter,
as follows:
1. Introduce XML for dirty limit persistent configuration
2. Merge the cancel API into the set API
3. Extend the domstats/virDomainListGetStats API for dirty limit
information query
4. Introduce the virDomainModificationImpact flags to control the
behavior of the API
5. Enrich the comments and docs about the feature and API
The patch set introduces a new API, virDomainSetVcpuDirtyLimit, which
allows applications to set upper limits on the dirty page rate of
virtual CPUs;
the corresponding virsh command is as follows:
# limit-dirty-page-rate <domain> <rate> [--vcpu <number>] \
[--config] [--live] [--current]
We put the persistent dirty limit info in the "vcpus" element of the
domain XML and
extend the dirtylimit statistics for domGetStats:
<domain>
...
<vcpu current='2'>3</vcpu>
<vcpus>
<vcpu id='0' hotpluggable='no' dirty_limit='10' order='1'.../>
<vcpu id='1' hotpluggable='yes' dirty_limit='10' order='2'.../>
</vcpus>
...
If the --vcpu option is not passed to the virsh command, the limit
applies to all virtual CPUs;
if the rate is set to zero, the upper limit is cancelled.
Examples:
To set the dirty page rate upper limit 10 MB/s for all virtual CPUs in
c81_node1, use:
[root@srv2 my_libvirt]# virsh limit-dirty-page-rate c81_node1 --rate 10
--live
Set dirty page rate limit 10(MB/s) for all virtual CPUs successfully
[root@srv2 my_libvirt]# virsh dumpxml c81_node1 | grep dirty_limit
<vcpu id='0' enabled='yes' hotpluggable='no' order='1'
dirty_limit='10'/>
<vcpu id='1' enabled='yes' hotpluggable='no' order='2'
dirty_limit='10'/>
<vcpu id='2' enabled='yes' hotpluggable='no' order='3'
dirty_limit='10'/>
<vcpu id='3' enabled='no' hotpluggable='yes' dirty_limit='10'/>
<vcpu id='4' enabled='no' hotpluggable='yes' dirty_limit='10'/>
......
Query the dirty limit info dynamically:
[root@srv2 my_libvirt]# virsh domstats c81_node1 --dirtylimit
Domain: 'c81_node1'
dirtylimit.vcpu.0.limit=10
dirtylimit.vcpu.0.current=0
dirtylimit.vcpu.1.limit=10
dirtylimit.vcpu.1.current=0
dirtylimit.vcpu.2.limit=10
dirtylimit.vcpu.2.current=0
dirtylimit.vcpu.3.limit=10
dirtylimit.vcpu.3.current=0
dirtylimit.vcpu.4.limit=10
dirtylimit.vcpu.4.current=0
......
To cancel the upper limit, use:
[root@srv2 my_libvirt]# virsh limit-dirty-page-rate c81_node1 \
--rate 0 --live
Cancel dirty page rate limit for all virtual CPUs successfully
[root@srv2 my_libvirt]# virsh dumpxml c81_node1 | grep dirty_limit
[root@srv2 my_libvirt]# virsh domstats c81_node1 --dirtylimit
Domain: 'c81_node1'
The dirty limit uses the QEMU dirty-limit feature introduced in
7.1.0, which allows vCPUs to be throttled as needed to keep
their dirty page rate within the limit. It could, in some scenarios,
be used to provide quality of service for the memory workload of
virtual CPUs; QEMU itself uses the feature to implement the
dirty-limit throttle algorithm and applies it to live migration,
which improves the responsiveness of large guests during live
migration and can result in more stable read performance. Other
application scenarios remain unexplored; until then, libvirt can
provide the basic API.
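Under the hood this presumably maps onto the QMP commands QEMU added with the dirty-limit feature in 7.1.0; sketched from memory of the QMP schema (verify against qapi/migration.json), the monitor exchange would look roughly like:

```
{"execute": "set-vcpu-dirty-limit", "arguments": {"cpu-index": 0, "dirty-rate": 10}}
{"execute": "query-vcpu-dirty-limit"}
{"execute": "cancel-vcpu-dirty-limit", "arguments": {"cpu-index": 0}}
```

Omitting "cpu-index" applies the command to all vCPUs, matching the virsh behavior described above.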
Please review, thanks
Yong
Hyman Huang(黄勇) (10):
qemu_capabilities: Introduce QEMU_CAPS_VCPU_DIRTY_LIMIT capability
conf: Introduce XML for dirty limit configuration
libvirt: Add virDomainSetVcpuDirtyLimit API
qemu_driver: Implement qemuDomainSetVcpuDirtyLimit
domain_validate: Export virDomainDefHasDirtyLimitStartupVcpus symbol
qemu_process: Setup dirty limit after launching VM
virsh: Introduce limit-dirty-page-rate api
qemu_monitor: Implement qemuMonitorQueryVcpuDirtyLimit
qemu_driver: Extend dirtlimit statistics for domGetStats
virsh: Introduce command 'virsh domstats --dirtylimit'
docs/formatdomain.rst | 7 +-
docs/manpages/virsh.rst | 33 +++-
include/libvirt/libvirt-domain.h | 5 +
src/conf/domain_conf.c | 26 +++
src/conf/domain_conf.h | 8 +
src/conf/domain_validate.c | 33 ++++
src/conf/domain_validate.h | 2 +
src/conf/schemas/domaincommon.rng | 5 +
src/driver-hypervisor.h | 7 +
src/libvirt-domain.c | 68 +++++++
src/libvirt_private.syms | 1 +
src/libvirt_public.syms | 5 +
src/qemu/qemu_capabilities.c | 2 +
src/qemu/qemu_capabilities.h | 1 +
src/qemu/qemu_driver.c | 181 ++++++++++++++++++
src/qemu/qemu_monitor.c | 25 +++
src/qemu/qemu_monitor.h | 22 +++
src/qemu/qemu_monitor_json.c | 107 +++++++++++
src/qemu/qemu_monitor_json.h | 9 +
src/qemu/qemu_process.c | 44 +++++
src/remote/remote_driver.c | 1 +
src/remote/remote_protocol.x | 17 +-
src/remote_protocol-structs | 7 +
.../qemucapabilitiesdata/caps_7.1.0_ppc64.xml | 1 +
.../caps_7.1.0_x86_64.xml | 1 +
tests/qemucapabilitiesdata/caps_7.2.0_ppc.xml | 1 +
.../caps_7.2.0_x86_64+hvf.xml | 1 +
.../caps_7.2.0_x86_64.xml | 1 +
.../caps_8.0.0_riscv64.xml | 1 +
.../caps_8.0.0_x86_64.xml | 1 +
.../qemucapabilitiesdata/caps_8.1.0_s390x.xml | 1 +
.../caps_8.1.0_x86_64.xml | 1 +
tools/virsh-domain-monitor.c | 7 +
tools/virsh-domain.c | 109 +++++++++++
34 files changed, 737 insertions(+), 4 deletions(-)
--
2.38.5
[PATCH 0/8] Finish effort to decrease maximum stack frame to 2048
by Peter Krempa
A few outstanding patches for code compiled only on FreeBSD.
Pipeline:
https://gitlab.com/pipo.sk/libvirt/-/pipelines/986470469
Peter Krempa (8):
bhyve: Don't stack-allocate huge error buffers
virHostValidateBhyve: Declare one variable per line
virHostValidateBhyve: Heap allocate massive 'struct kld_file_stat'
nss: aiforaf: Format one argument/variable per line
nss: aiforaf: Remove unused 'ret' variable
nss: aiforaf: Drop unused buffer 'port'
nss: aiforaf: Decrease stack size by scoping off large buffers.
build: Decrease maximum stack frame size to 2048
meson.build | 2 +-
src/bhyve/bhyve_process.c | 4 +-
tools/nss/libvirt_nss.c | 105 ++++++++++++++++++-------------
tools/virt-host-validate-bhyve.c | 20 +++---
4 files changed, 74 insertions(+), 57 deletions(-)
--
2.41.0
[libvirt PATCH 00/33] ci: Unify the GitLab CI jobs with local executions && adopt lcitool container executions
by Erik Skultety
Technically a v2 of:
https://listman.redhat.com/archives/libvir-list/2023-February/237552.html
However, the approach here is slightly different, and the migration to
lcitool container executions as a replacement for ci/Makefile that the
above series described is actually done here. One of the core problems of
that series, pointed out in review, was that it introduced more shell
logic (CLI parsing, conditional execution, etc.), which we had fought hard
to get rid of in the past. I reworked the shell functions quite a bit and
dropped whatever extra shell logic the original series added.
Obviously we can't get rid of shell completely because of .gitlab-ci.yml, so
I merely extracted the recipes into functions which are then sourced from
ci/build.sh and executed. That on its own would hide the actual commands
being run in the GitLab job log, so before any command is executed, it
is formatted with a color sequence, so that we don't lose that information,
which would be a regression from the status quo.
Lastly, this series takes the effort inside the ci/build.sh script and
basically mirrors whatever GitLab would do to run a job, inside a local
container executed by lcitool (yes, we already have that capability).
Please give this a try; I'm already looking forward to comments, as I'd like
to expand this effort to local VM executions running the TCK integration tests,
so this series is quite important in that regard.
Erik Skultety (33):
ci: build.sh: Add variables from .gitlab-ci.yml
ci: build.sh: Add GIT_ROOT env helper variable
ci: build.sh: Don't mention that MESON_ARGS are available via CLI
ci: build.sh: Add a wrapper function over meson's setup
ci: build.sh: Add a wrapper function executing 'shell' commands
ci: build.sh: Add a wrapper function over the 'build' job
ci: build.sh: Add a helper function to create the dist tarball
ci: build.sh: Add a wrapper function over the 'test' job
ci: build.sh: Add a wrapper function over the 'codestyle' job
ci: build.sh: Add a wrapper function over the 'potfile' job
ci: build.sh: Add a wrapper function over the 'rpmbuild' job
ci: build.sh: Add a wrapper function over the 'website' job
ci: build.sh: Drop changing working directory to CI_CONT_DIR
ci: build.sh: Drop direct invocation of meson/ninja commands
ci: build.sh: Drop MESON_ARGS definition from global level
gitlab-ci.yml: Add 'after_script' stage to prep for artifact
collection
.gitlab-ci.yml: Convert the native build job to the build.sh usage
.gitlab-ci.yml: Convert the cross build job to the build.sh usage
.gitlab-ci.yml: Convert the website build job to the build.sh usage
.gitlab-ci.yml: Convert the codestyle job to the build.sh usage
.gitlab-ci.yml: Convert the potfile job to the build.sh usage
ci: helper: Drop _lcitool_get_targets method
ci: helper: Don't make ':' literal a static part of the image tag
ci: helper: Add --lcitool-path CLI option
ci: helper: Add a job argparse subparser
ci: helper: Add a required_deps higher order helper/decorator
ci: helper: Add Python code hangling git clones
ci: helper: Add a helper to create a local repo clone Pythonic way
ci: helper: Rework _lcitool_run method logic
ci: helper: Add an action to run the container workload via lcitool
ci: helper: Drop original actions
ci: helper: Drop the --meson-args/--ninja-args CLI options
ci: helper: Drop the _make_run method
.gitlab-ci.yml | 47 +++++++------
ci/build.sh | 105 +++++++++++++++++++++++++----
ci/helper | 176 ++++++++++++++++++++++++++++---------------------
3 files changed, 218 insertions(+), 110 deletions(-)
--
2.41.0
[PATCH v2] NEWS: Announcing Network Metadata APIs
by K Shiva Kiran
Ref to patchset implementing the above:
https://listman.redhat.com/archives/libvir-list/2023-August/241250.html
Signed-off-by: K Shiva Kiran <shiva_kr(a)riseup.net>
---
This is a v2 of:
https://listman.redhat.com/archives/libvir-list/2023-August/241469.html
Diff to v1:
- Shortened the text and put all text under one section.
NEWS.rst | 11 +++++++++++
1 file changed, 11 insertions(+)
diff --git a/NEWS.rst b/NEWS.rst
index e40c8ac259..5275a8299a 100644
--- a/NEWS.rst
+++ b/NEWS.rst
@@ -28,6 +28,17 @@ v9.7.0 (unreleased)
2) pre-binding the variant driver using the ``--driver`` option of
``virsh nodedev-detach``.
+ * network: Support for ``<title>`` and ``<description>`` fields in Network XML
+
+ The network object adds two more user defined metadata fields ``<title>``
+ and ``<description>``.
+ Two new APIs ``virNetworkGetMetadata()`` and ``virNetworkSetMetadata()`` can be
+ used to view and modify the above including the existing ``<metadata>`` field.
+
+ virsh adds two new commands ``net-desc`` and ``net-metadata`` to view/modify the same.
+ ``net-list`` adds a new option ``--title`` that prints the content of ``<title>``
+ in an extra column within the default ``--table`` output.
+
* **Improvements**
* **Bug fixes**
--
2.42.0
[PATCH] virsh: Fix net-desc --config output
by K Shiva Kiran
Fixes the following bug:
Command: `net-desc --config [--title] my_network`
Expected Output: Title/Description of persistent config
Actual output: Title/Description of live config
This was caused by the use of a single `flags` variable in
`virshGetNetworkDescription()`, which ended up passing the wrong enum
value to `virNetworkGetMetadata()` (the LIVE value instead of
CONFIG).
Although the domain object has the same code, this didn't cause a problem
there because the enum values of `VIR_DOMAIN_INACTIVE_XML` and
`VIR_DOMAIN_METADATA_CONFIG` turn out to be the same (1 << 1), whereas
they are not for network equivalent ones (1 << 0, 1 << 1).
Signed-off-by: K Shiva Kiran <shiva_kr(a)riseup.net>
---
tools/virsh-network.c | 11 ++++++-----
1 file changed, 6 insertions(+), 5 deletions(-)
diff --git a/tools/virsh-network.c b/tools/virsh-network.c
index 49778d0f4f..8965d87c9c 100644
--- a/tools/virsh-network.c
+++ b/tools/virsh-network.c
@@ -366,7 +366,8 @@ static const vshCmdOptDef opts_network_desc[] = {
/* extract description or title from network xml */
static char *
virshGetNetworkDescription(vshControl *ctl, virNetworkPtr net,
- bool title, unsigned int flags)
+ bool title, unsigned int flags,
+ unsigned int queryflags)
{
char *desc = NULL;
g_autoptr(xmlDoc) doc = NULL;
@@ -394,7 +395,7 @@ virshGetNetworkDescription(vshControl *ctl, virNetworkPtr net,
}
/* fall back to xml */
- if (virshNetworkGetXMLFromNet(ctl, net, flags, &doc, &ctxt) < 0)
+ if (virshNetworkGetXMLFromNet(ctl, net, queryflags, &doc, &ctxt) < 0)
return NULL;
if (title)
@@ -454,7 +455,7 @@ cmdNetworkDesc(vshControl *ctl, const vshCmd *cmd)
g_autofree char *descNet = NULL;
g_autofree char *descNew = NULL;
- if (!(descNet = virshGetNetworkDescription(ctl, net, title, queryflags)))
+ if (!(descNet = virshGetNetworkDescription(ctl, net, title, flags, queryflags)))
return false;
if (!descArg)
@@ -515,7 +516,7 @@ cmdNetworkDesc(vshControl *ctl, const vshCmd *cmd)
vshPrintExtra(ctl, "%s", _("Network description updated successfully"));
} else {
- g_autofree char *desc = virshGetNetworkDescription(ctl, net, title, queryflags);
+ g_autofree char *desc = virshGetNetworkDescription(ctl, net, title, flags, queryflags);
if (!desc)
return false;
@@ -1128,7 +1129,7 @@ cmdNetworkList(vshControl *ctl, const vshCmd *cmd G_GNUC_UNUSED)
if (optTitle) {
g_autofree char *title = NULL;
- if (!(title = virshGetNetworkDescription(ctl, network, true, 0)))
+ if (!(title = virshGetNetworkDescription(ctl, network, true, 0, 0)))
goto cleanup;
if (vshTableRowAppend(table,
virNetworkGetName(network),
--
2.42.0
[libvirt PATCH 0/7] external snapshot revert fixes
by Pavel Hrdina
This fixes reverting external snapshots so that it no longer errors out in
cases where it should work, and makes it correctly load the memory state
when reverting to a snapshot of a running VM.
Pavel Hrdina (7):
qemu_saveimage: extract starting process to qemuSaveImageStartProcess
qemuSaveImageStartProcess: allow setting reason for audit log
qemuSaveImageStartProcess: add snapshot argument
qemuSaveImageStartProcess: make it possible to use without header
qemu_snapshot: fix reverting external snapshot when not all disks are
included
qemu_snapshot: correctly load the saved memory state file
NEWS: document support for reverting external snapshots
NEWS.rst | 8 +++
src/qemu/qemu_saveimage.c | 111 ++++++++++++++++++++++++++------------
src/qemu/qemu_saveimage.h | 14 +++++
src/qemu/qemu_snapshot.c | 90 ++++++++++++++++++++-----------
4 files changed, 158 insertions(+), 65 deletions(-)
--
2.41.0
Entering freeze for libvirt-9.7.0
by Jiri Denemark
I have just tagged v9.7.0-rc1 in the repository and pushed signed
tarballs and source RPMs to https://download.libvirt.org/
Please give the release candidate some testing and in case you find a
serious issue which should have a fix in the upcoming release, feel
free to reply to this thread to make sure the issue is more visible.
If you have not done so yet, please update NEWS.rst to document any
significant change you made since the last release.
Thanks,
Jirka