[libvirt PATCH v2 0/5] Cleanup and test more firmware handling scenarios
by Daniel P. Berrangé
There is a mind-bending number of possible ways to configure the
firmware with/without NVRAM. Only a small portion of them is tested
and many error scenarios are silently ignored.
This series attempts to get coverage of every possible XML config
scenario and report explicit errors in all invalid configs.
There is an open question on patch 4. Essentially the use of NVRAM
combined with a writable executable feels like an accidental feature
in libvirt that hasn't really been thought through. I'd like to
better define the expectations here, but there are several possible
strategies and I'm undecided which is best.
Changes in v2:
- Merged 5 self contained patches already reviewed
- Moved checks out of post-parse, into validate methods
- Instead of rejecting an R/W loader with an NVRAM template,
honour the template in the QEMU driver. An R/W loader is
conceptually relevant if the loader allows the guest
to live flash upgrade itself. That is not possible today,
but we shouldn't reject this combo since QEMU allows it.
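For readers unfamiliar with the configs under discussion, the R/W-loader-plus-NVRAM-template combination from the note above looks roughly like this in the domain XML (a sketch only; the firmware paths and machine type are illustrative, not taken from this series):

```xml
<os>
  <type arch='x86_64' machine='q35'>hvm</type>
  <!-- readonly='no' makes this an R/W loader; combined with an
       NVRAM template, v2 of the series honours the template in the
       QEMU driver instead of rejecting the combination -->
  <loader readonly='no' type='pflash'>/usr/share/OVMF/OVMF_CODE.fd</loader>
  <nvram template='/usr/share/OVMF/OVMF_VARS.fd'>/var/lib/libvirt/qemu/nvram/guest_VARS.fd</nvram>
</os>
```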
Daniel P. Berrangé (5):
qemu: fix populating NVRAM vars from template with R/W loader
tests: don't permit NVRAM path when using firmware auto-select
conf: switch nvram parsing to use XML node / property helpers
conf: move nvram parsing into virDomainLoaderDefParseXML
conf: stop ignoring <loader>/<nvram> with firmware auto-select
src/conf/domain_conf.c | 67 ++++++++----------
src/conf/domain_validate.c | 32 +++++++++
src/qemu/qemu_domain.c | 6 +-
...-nvram-rw-template-vars.x86_64-latest.args | 41 +++++++++++
.../bios-nvram-rw-template-vars.xml | 36 ++++++++++
.../bios-nvram-rw-template.x86_64-latest.args | 41 +++++++++++
.../bios-nvram-rw-template.xml | 36 ++++++++++
.../bios-nvram-rw-vars.x86_64-latest.args | 41 +++++++++++
tests/qemuxml2argvdata/bios-nvram-rw-vars.xml | 36 ++++++++++
tests/qemuxml2argvdata/os-firmware-bios.xml | 1 -
...ware-efi-bad-loader-path.x86_64-latest.err | 1 +
.../os-firmware-efi-bad-loader-path.xml | 67 ++++++++++++++++++
...ware-efi-bad-loader-type.x86_64-latest.err | 1 +
.../os-firmware-efi-bad-loader-type.xml | 67 ++++++++++++++++++
...mware-efi-bad-nvram-path.x86_64-latest.err | 1 +
.../os-firmware-efi-bad-nvram-path.xml | 68 +++++++++++++++++++
...e-efi-bad-nvram-template.x86_64-latest.err | 1 +
.../os-firmware-efi-bad-nvram-template.xml | 68 +++++++++++++++++++
.../os-firmware-efi-secboot.xml | 1 -
tests/qemuxml2argvdata/os-firmware-efi.xml | 1 -
tests/qemuxml2argvtest.c | 7 ++
.../os-firmware-bios.x86_64-latest.xml | 1 -
.../os-firmware-efi-secboot.x86_64-latest.xml | 1 -
.../os-firmware-efi.x86_64-latest.xml | 1 -
24 files changed, 578 insertions(+), 45 deletions(-)
create mode 100644 tests/qemuxml2argvdata/bios-nvram-rw-template-vars.x86_64-latest.args
create mode 100644 tests/qemuxml2argvdata/bios-nvram-rw-template-vars.xml
create mode 100644 tests/qemuxml2argvdata/bios-nvram-rw-template.x86_64-latest.args
create mode 100644 tests/qemuxml2argvdata/bios-nvram-rw-template.xml
create mode 100644 tests/qemuxml2argvdata/bios-nvram-rw-vars.x86_64-latest.args
create mode 100644 tests/qemuxml2argvdata/bios-nvram-rw-vars.xml
create mode 100644 tests/qemuxml2argvdata/os-firmware-efi-bad-loader-path.x86_64-latest.err
create mode 100644 tests/qemuxml2argvdata/os-firmware-efi-bad-loader-path.xml
create mode 100644 tests/qemuxml2argvdata/os-firmware-efi-bad-loader-type.x86_64-latest.err
create mode 100644 tests/qemuxml2argvdata/os-firmware-efi-bad-loader-type.xml
create mode 100644 tests/qemuxml2argvdata/os-firmware-efi-bad-nvram-path.x86_64-latest.err
create mode 100644 tests/qemuxml2argvdata/os-firmware-efi-bad-nvram-path.xml
create mode 100644 tests/qemuxml2argvdata/os-firmware-efi-bad-nvram-template.x86_64-latest.err
create mode 100644 tests/qemuxml2argvdata/os-firmware-efi-bad-nvram-template.xml
--
2.34.1
[PATCH] qemu: Don't ignore failure when building default memory backend
by Michal Privoznik
When building the default memory backend (which has id='pc.ram')
and no guest NUMA is configured,
qemuBuildMemCommandLineMemoryDefaultBackend() is called. However,
its return value is ignored, which means that on an invalid
configuration (e.g. when a non-existent hugepage size was
requested) an error is reported into the logs but QEMU is started
anyway. And while QEMU does error out, its error message doesn't
give much of a clue about what's going on:
qemu-system-x86_64: Memory backend 'pc.ram' not found
While at it, introduce a test case. While I could have chosen a
nice-looking value (e.g. 4MiB), that's exactly what I wanted to
avoid, because while such a value might not be possible on x86_64
it may be possible on other arches (e.g. ppc is notoriously known
for supporting a wide range of HP sizes). Let's stick with the
obviously wrong value of 5MiB.
Reported-by: Charles Polisher <chas(a)chasmo.org>
Signed-off-by: Michal Privoznik <mprivozn(a)redhat.com>
---
src/qemu/qemu_command.c | 5 ++--
.../hugepages-default-5M.x86_64-latest.err | 1 +
.../qemuxml2argvdata/hugepages-default-5M.xml | 27 +++++++++++++++++++
tests/qemuxml2argvtest.c | 1 +
4 files changed, 32 insertions(+), 2 deletions(-)
create mode 100644 tests/qemuxml2argvdata/hugepages-default-5M.x86_64-latest.err
create mode 100644 tests/qemuxml2argvdata/hugepages-default-5M.xml
diff --git a/src/qemu/qemu_command.c b/src/qemu/qemu_command.c
index 2c963a7297..c836799888 100644
--- a/src/qemu/qemu_command.c
+++ b/src/qemu/qemu_command.c
@@ -7369,8 +7369,9 @@ qemuBuildMemCommandLine(virCommand *cmd,
* regular memory because -mem-path and -mem-prealloc are obsolete.
* However, if domain has one or more NUMA nodes then there is no
* default RAM and we mustn't generate the memory object. */
- if (!virDomainNumaGetNodeCount(def->numa))
- qemuBuildMemCommandLineMemoryDefaultBackend(cmd, def, priv, defaultRAMid);
+ if (!virDomainNumaGetNodeCount(def->numa) &&
+ qemuBuildMemCommandLineMemoryDefaultBackend(cmd, def, priv, defaultRAMid) < 0)
+ return -1;
} else {
/*
* Add '-mem-path' (and '-mem-prealloc') parameter here if
diff --git a/tests/qemuxml2argvdata/hugepages-default-5M.x86_64-latest.err b/tests/qemuxml2argvdata/hugepages-default-5M.x86_64-latest.err
new file mode 100644
index 0000000000..bf5e54c9e4
--- /dev/null
+++ b/tests/qemuxml2argvdata/hugepages-default-5M.x86_64-latest.err
@@ -0,0 +1 @@
+internal error: Unable to find any usable hugetlbfs mount for 5120 KiB
diff --git a/tests/qemuxml2argvdata/hugepages-default-5M.xml b/tests/qemuxml2argvdata/hugepages-default-5M.xml
new file mode 100644
index 0000000000..280ea4bb71
--- /dev/null
+++ b/tests/qemuxml2argvdata/hugepages-default-5M.xml
@@ -0,0 +1,27 @@
+<domain type="kvm">
+ <name>NonExistentPageSize</name>
+ <uuid>21433e10-aea8-434a-8f81-55781c2e9035</uuid>
+ <memory unit="KiB">4194304</memory>
+ <currentMemory unit="KiB">4194304</currentMemory>
+ <memoryBacking>
+ <hugepages>
+ <page size="5" unit="MiB"/>
+ </hugepages>
+ </memoryBacking>
+ <vcpu placement="static">2</vcpu>
+ <os>
+ <type arch="x86_64" machine="pc">hvm</type>
+ </os>
+ <features>
+ <acpi/>
+ <apic/>
+ <pae/>
+ </features>
+ <clock offset="utc"/>
+ <on_poweroff>destroy</on_poweroff>
+ <on_reboot>restart</on_reboot>
+ <on_crash>restart</on_crash>
+ <devices>
+ <emulator>/usr/bin/qemu-system-x86_64</emulator>
+ </devices>
+</domain>
diff --git a/tests/qemuxml2argvtest.c b/tests/qemuxml2argvtest.c
index 9c5c394e03..a32c5a8250 100644
--- a/tests/qemuxml2argvtest.c
+++ b/tests/qemuxml2argvtest.c
@@ -1270,6 +1270,7 @@ mymain(void)
DO_TEST("hugepages-default", QEMU_CAPS_OBJECT_MEMORY_FILE);
DO_TEST("hugepages-default-2M", QEMU_CAPS_OBJECT_MEMORY_FILE);
DO_TEST("hugepages-default-system-size", QEMU_CAPS_OBJECT_MEMORY_FILE);
+ DO_TEST_CAPS_LATEST_FAILURE("hugepages-default-5M");
DO_TEST_PARSE_ERROR_NOCAPS("hugepages-default-1G-nodeset-2M");
DO_TEST("hugepages-nodeset", QEMU_CAPS_OBJECT_MEMORY_FILE);
DO_TEST_PARSE_ERROR("hugepages-nodeset-nonexist",
--
2.34.1
[PATCH] NEWS: Mention chardev hot(un)plug fixes, '-sock' removal and RPM storage driver fix
by Peter Krempa
Signed-off-by: Peter Krempa <pkrempa(a)redhat.com>
---
NEWS.rst | 19 +++++++++++++++++++
1 file changed, 19 insertions(+)
diff --git a/NEWS.rst b/NEWS.rst
index b684416909..169ac9b740 100644
--- a/NEWS.rst
+++ b/NEWS.rst
@@ -64,6 +64,25 @@ v8.1.0 (unreleased)
* **Bug fixes**
+ * Remove unix sockets from filesystem when disabling a '.socket' systemd unit
+
+ The presence of the socket files is used by our remote driver to determine
+ which service to access. Since neither systemd nor the daemons clean up the
+ socket file, clients were running into problems when a modular deployment was
+ switched to monolithic ``libvirtd``.
+
+ * qemu: Fixes of fd passing during hotplug and hotunplug of chardevs
+
+ FDs used as chardev backing are now properly removed when hot-unplugging
+ a chardev from qemu and hotplugged chardevs now properly use ``virtlogd``
+ to handle the input and output from qemu.
+
+ * RPM: Run pre/post-install steps on ``daemon-driver-storage-core``
+
+ Previously the pre/post-install code was part of the meta-package which
+ installed all storage driver sub-packages, so a minimalistic install
+ of the storage driver didn't behave correctly.
+
v8.0.0 (2022-01-14)
===================
--
2.35.1
[PATCH 0/3] Unbreak MIPS Malta
by Lubomir Rintel
My day started like this:
# virt-install --connect qemu:///system --arch mips --machine malta --memory 256 --disk none --import
Using default --name vm-mips
Starting install...
ERROR XML error: No PCI buses available
Needless to say, it ended up completely ruined.
Chained to this message are the patches I've created in an attempt to
remedy the highly unfortunate situation, with hope that they'll be
treated with warmth, understanding and perhaps even applied to the
libvirt tree.
Yours,
Lubo
[libvirt PATCH 00/11] Automatic mutex management - part 3
by Tim Wiederhake
Use the recently implemented VIR_LOCK_GUARD and VIR_WITH_MUTEX_LOCK_GUARD
to simplify mutex management.
Tim Wiederhake (11):
test: Use automatic mutex management
openvz: Use automatic mutex management
remote_daemon_dispatch: Use automatic mutex management
netdev: Use automatic mutex management
nodesuspend: Use automatic mutex management
admin: Use automatic mutex management
esx_stream: Use automatic mutex management
esx_vi: Use automatic mutex management
storage: Statically initialize mutex
storage: Move and split up storageStateCleanup
storage: Use automatic mutex management
src/admin/admin_server_dispatch.c | 3 +-
src/conf/virstorageobj.h | 2 -
src/esx/esx_stream.c | 65 ++++------
src/esx/esx_vi.c | 109 +++++++---------
src/openvz/openvz_driver.c | 91 +++++---------
src/remote/remote_daemon_dispatch.c | 187 +++++++++-------------------
src/storage/storage_driver.c | 97 +++++++--------
src/test/test_driver.c | 15 +--
src/util/virnetdev.c | 20 ++-
src/util/virnodesuspend.c | 54 +++-----
10 files changed, 228 insertions(+), 415 deletions(-)
--
2.31.1
[libvirt PATCH][merged][trivial] Fix typo in NEWS
by Tim Wiederhake
Signed-off-by: Tim Wiederhake <twiederh(a)redhat.com>
---
NEWS.rst | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/NEWS.rst b/NEWS.rst
index b684416909..cc5666fa91 100644
--- a/NEWS.rst
+++ b/NEWS.rst
@@ -32,7 +32,7 @@ v8.1.0 (unreleased)
either of the following 3 options:
``page-sampling, dirty-bitmap, dirty-ring``.
- Add ``calc_mode`` field for dirtyrate statistics retured by
+ Add ``calc_mode`` field for dirtyrate statistics returned by
``virsh domstats --dirtyrate``, also add ``vCPU dirtyrate`` if
``dirty-ring`` mode was used in last measurement.
--
2.31.1
[PATCH] qemu: Move some enums impl to qemu_monitor.c
by Michal Privoznik
There are some enums that are declared in qemu_monitor.h but
implemented in qemu_monitor_json.c. While from the compiler and
linker POV it doesn't matter, the code is cleaner if an enum is
implemented in the .c file that corresponds to the .h file which
declared it.
Signed-off-by: Michal Privoznik <mprivozn(a)redhat.com>
---
src/qemu/qemu_monitor.c | 40 ++++++++++++++++++++++++++++++++++++
src/qemu/qemu_monitor_json.c | 34 ------------------------------
2 files changed, 40 insertions(+), 34 deletions(-)
diff --git a/src/qemu/qemu_monitor.c b/src/qemu/qemu_monitor.c
index 0ff938a577..8fc2a49abf 100644
--- a/src/qemu/qemu_monitor.c
+++ b/src/qemu/qemu_monitor.c
@@ -111,6 +111,38 @@ static int qemuMonitorOnceInit(void)
VIR_ONCE_GLOBAL_INIT(qemuMonitor);
+VIR_ENUM_IMPL(qemuMonitorJob,
+ QEMU_MONITOR_JOB_TYPE_LAST,
+ "",
+ "commit",
+ "stream",
+ "mirror",
+ "backup",
+ "create",
+);
+
+VIR_ENUM_IMPL(qemuMonitorJobStatus,
+ QEMU_MONITOR_JOB_STATUS_LAST,
+ "",
+ "created",
+ "running",
+ "paused",
+ "ready",
+ "standby",
+ "waiting",
+ "pending",
+ "aborting",
+ "concluded",
+ "undefined",
+ "null",
+);
+
+VIR_ENUM_IMPL(qemuMonitorCPUProperty,
+ QEMU_MONITOR_CPU_PROPERTY_LAST,
+ "boolean",
+ "string",
+ "number",
+);
VIR_ENUM_IMPL(qemuMonitorMigrationStatus,
QEMU_MONITOR_MIGRATION_STATUS_LAST,
@@ -4473,6 +4505,14 @@ qemuMonitorTransactionBackup(virJSONValue *actions,
}
+VIR_ENUM_IMPL(qemuMonitorDirtyRateCalcMode,
+ QEMU_MONITOR_DIRTYRATE_CALC_MODE_LAST,
+ "page-sampling",
+ "dirty-bitmap",
+ "dirty-ring",
+);
+
+
int
qemuMonitorStartDirtyRateCalc(qemuMonitor *mon,
int seconds,
diff --git a/src/qemu/qemu_monitor_json.c b/src/qemu/qemu_monitor_json.c
index 345b81cd12..4d339f29b8 100644
--- a/src/qemu/qemu_monitor_json.c
+++ b/src/qemu/qemu_monitor_json.c
@@ -53,30 +53,6 @@ VIR_LOG_INIT("qemu.qemu_monitor_json");
#define LINE_ENDING "\r\n"
-VIR_ENUM_IMPL(qemuMonitorJob,
- QEMU_MONITOR_JOB_TYPE_LAST,
- "",
- "commit",
- "stream",
- "mirror",
- "backup",
- "create");
-
-VIR_ENUM_IMPL(qemuMonitorJobStatus,
- QEMU_MONITOR_JOB_STATUS_LAST,
- "",
- "created",
- "running",
- "paused",
- "ready",
- "standby",
- "waiting",
- "pending",
- "aborting",
- "concluded",
- "undefined",
- "null");
-
static void qemuMonitorJSONHandleShutdown(qemuMonitor *mon, virJSONValue *data);
static void qemuMonitorJSONHandleReset(qemuMonitor *mon, virJSONValue *data);
static void qemuMonitorJSONHandleStop(qemuMonitor *mon, virJSONValue *data);
@@ -5347,11 +5323,6 @@ qemuMonitorJSONGetCPUDefinitions(qemuMonitor *mon,
}
-VIR_ENUM_IMPL(qemuMonitorCPUProperty,
- QEMU_MONITOR_CPU_PROPERTY_LAST,
- "boolean", "string", "number",
-);
-
static int
qemuMonitorJSONParseCPUModelProperty(const char *key,
virJSONValue *value,
@@ -8740,11 +8711,6 @@ qemuMonitorJSONGetCPUMigratable(qemuMonitor *mon,
migratable);
}
-VIR_ENUM_IMPL(qemuMonitorDirtyRateCalcMode,
- QEMU_MONITOR_DIRTYRATE_CALC_MODE_LAST,
- "page-sampling",
- "dirty-bitmap",
- "dirty-ring");
int
qemuMonitorJSONStartDirtyRateCalc(qemuMonitor *mon,
--
2.34.1
[libvirt PATCH v2] Make systemd unit ordering more robust
by Martin Kletzander
Since the libvirt-guests script/service can operate on various URIs and we do
support both socket activation and traditional services, the ordering should be
specified for all the possible sockets and services.
Also remove the Wants= dependency since we do not want to start any service. We
cannot know which one libvirt-guests is configured to use, so we'd have to
start all the daemons, which would break if unused colliding services are not
masked (libvirtd.service in the modular case and all the modular daemon service
units in the monolithic scenario). Fortunately we can assume that the system is
configured properly to start the services/sockets that are of interest to the
user. That also works with the setup described in
https://libvirt.org/daemons.html.
To make it even more robust, we add the daemon service to the machine units
created for individual domains, as it was missing there.
https://bugzilla.redhat.com/show_bug.cgi?id=1868537
Signed-off-by: Martin Kletzander <mkletzan(a)redhat.com>
---
src/util/virsystemd.c | 8 ++++++--
tools/libvirt-guests.service.in | 12 +++++++++++-
2 files changed, 17 insertions(+), 3 deletions(-)
diff --git a/src/util/virsystemd.c b/src/util/virsystemd.c
index a86d4c6bb905..f156c2f39ae5 100644
--- a/src/util/virsystemd.c
+++ b/src/util/virsystemd.c
@@ -441,8 +441,10 @@ int virSystemdCreateMachine(const char *name,
nicindexes, nnicindexes, sizeof(int));
gprops = g_variant_new_parsed("[('Slice', <%s>),"
" ('After', <['libvirtd.service']>),"
+ " ('After', <['virt%sd.service']>),"
" ('Before', <['virt-guest-shutdown.target']>)]",
- slicename);
+ slicename,
+ drivername);
message = g_variant_new("(s@ayssus@ai@a(sv))",
name,
guuid,
@@ -489,8 +491,10 @@ int virSystemdCreateMachine(const char *name,
uuid, 16, sizeof(unsigned char));
gprops = g_variant_new_parsed("[('Slice', <%s>),"
" ('After', <['libvirtd.service']>),"
+ " ('After', <['virt%sd.service']>),"
" ('Before', <['virt-guest-shutdown.target']>)]",
- slicename);
+ slicename,
+ drivername);
message = g_variant_new("(s@ayssus@a(sv))",
name,
guuid,
diff --git a/tools/libvirt-guests.service.in b/tools/libvirt-guests.service.in
index 1a9b233e1177..3cf647619612 100644
--- a/tools/libvirt-guests.service.in
+++ b/tools/libvirt-guests.service.in
@@ -1,10 +1,20 @@
[Unit]
Description=Suspend/Resume Running libvirt Guests
-Wants=libvirtd.service
Requires=virt-guest-shutdown.target
After=network.target
After=time-sync.target
+After=libvirtd.socket
+After=virtqemud.socket
+After=virtlxcd.socket
+After=virtvboxd.socket
+After=virtvzd.socket
+After=virtxend.socket
After=libvirtd.service
+After=virtqemud.service
+After=virtlxcd.service
+After=virtvboxd.service
+After=virtvzd.service
+After=virtxend.service
After=virt-guest-shutdown.target
Documentation=man:libvirt-guests(8)
Documentation=https://libvirt.org
--
2.35.1
Re: Call for GSoC and Outreachy project ideas for summer 2022
by Paolo Bonzini
On 1/28/22 16:47, Stefan Hajnoczi wrote:
> Dear QEMU, KVM, and rust-vmm communities,
> QEMU will apply for Google Summer of Code 2022
> (https://summerofcode.withgoogle.com/) and has been accepted into
> Outreachy May-August 2022 (https://www.outreachy.org/). You can now
> submit internship project ideas for QEMU, KVM, and rust-vmm!
>
> If you have experience contributing to QEMU, KVM, or rust-vmm you can
> be a mentor. It's a great way to give back and you get to work with
> people who are just starting out in open source.
>
> Please reply to this email by February 21st with your project ideas.
I would like to co-mentor one or more projects about adding more
statistics to Mark Kanda's newly-born introspectable statistics
subsystem in QEMU
(https://patchew.org/QEMU/20220215150433.2310711-1-mark.kanda@oracle.com/),
for example integrating "info blockstats"; and/or, to add matching
functionality to libvirt.
However, I will only be available for co-mentoring unfortunately.
Paolo
> Good project ideas are suitable for remote work by a competent
> programmer who is not yet familiar with the codebase. In
> addition, they are:
> - Well-defined - the scope is clear
> - Self-contained - there are few dependencies
> - Uncontroversial - they are acceptable to the community
> - Incremental - they produce deliverables along the way
>
> Feel free to post ideas even if you are unable to mentor the project.
> It doesn't hurt to share the idea!
>
> I will review project ideas and keep you up-to-date on QEMU's
> acceptance into GSoC.
>
> Internship program details:
> - Paid, remote work open source internships
> - GSoC projects are 175 or 350 hours, Outreachy projects are 30
> hrs/week for 12 weeks
> - Mentored by volunteers from QEMU, KVM, and rust-vmm
> - Mentors typically spend at least 5 hours per week during the coding period
>
> Changes since last year: GSoC now has 175 or 350 hour project sizes
> instead of 12 week full-time projects. GSoC will accept applicants who
> are not students, before it was limited to students.
>
> For more background on QEMU internships, check out this video:
> https://www.youtube.com/watch?v=xNVCX7YMUL8
>
> Please let me know if you have any questions!
>
> Stefan
>
[PATCH] NEWS: Document domain dirty page rate calculation APIs
by huangy81@chinatelecom.cn
From: Hyman Huang(黄勇) <huangy81(a)chinatelecom.cn>
The Libvirt API virDomainStartDirtyRateCalc was extended.
Document this change.
Signed-off-by: Hyman Huang(黄勇) <huangy81(a)chinatelecom.cn>
---
NEWS.rst | 14 ++++++++++++++
1 file changed, 14 insertions(+)
diff --git a/NEWS.rst b/NEWS.rst
index f545325..b684416 100644
--- a/NEWS.rst
+++ b/NEWS.rst
@@ -22,6 +22,20 @@ v8.1.0 (unreleased)
It works on Intel machines as well as recent machines powered by Apple
Silicon. QEMU 6.2.0 is needed for Apple Silicon support.
+ * qemu: Support mode option for dirtyrate calculation
+
+ Introduce ``virDomainDirtyRateCalcFlags`` as parameter of
+ ``virDomainStartDirtyRateCalc``, which is used to specify the mode of
+ dirty page rate calculation.
+
+ Add ``--mode`` option to ``virsh domdirtyrate-calc``, which can be
+ either of the following 3 options:
+ ``page-sampling, dirty-bitmap, dirty-ring``.
+
+ Add ``calc_mode`` field for dirtyrate statistics retured by
+ ``virsh domstats --dirtyrate``, also add ``vCPU dirtyrate`` if
+ ``dirty-ring`` mode was used in last measurement.
+
* **Improvements**
* packaging: sysconfig files no longer installed
--
1.8.3.1