[PATCH 00/33] qemu: monitor: Clean up qemu monitor event callbacks
by Peter Krempa
Peter Krempa (33):
qemu: monitor: Remove handlers for the 'POWERDOWN' event
qemu: monitor: Remove return value from qemuMonitorEmit* functions
qemu: Remove return value from qemuMonitorDomainEventCallback
qemu: Remove return value from qemuMonitorDomainShutdownCallback
qemu: Remove return value from qemuMonitorDomainResetCallback
qemu: Remove return value from qemuMonitorDomainStopCallback
qemu: Remove return value from qemuMonitorDomainResumeCallback
qemu: Remove return value from qemuMonitorDomainRTCChangeCallback
qemu: Remove return value from qemuMonitorDomainWatchdogCallback
qemu: Remove return value from qemuMonitorDomainIOErrorCallback
qemu: Remove return value from qemuMonitorDomainGraphicsCallback
qemu: Remove return value from qemuMonitorDomainBlockJobCallback
qemu: Remove return value from qemuMonitorDomainJobStatusChangeCallback
qemu: Remove return value from qemuMonitorDomainTrayChangeCallback
qemu: Remove return value from qemuMonitorDomainPMWakeupCallback
qemu: Remove return value from qemuMonitorDomainPMSuspendCallback
qemu: Remove return value from qemuMonitorDomainBalloonChangeCallback
qemu: Remove return value from qemuMonitorDomainPMSuspendDiskCallback
qemu: Remove return value from qemuMonitorDomainGuestPanicCallback
qemu: Remove return value from qemuMonitorDomainDeviceDeletedCallback
qemu: Remove return value from qemuMonitorDomainNicRxFilterChangedCallback
qemu: Remove return value from qemuMonitorDomainSerialChangeCallback
qemu: Remove return value from qemuMonitorDomainSpiceMigratedCallback
qemu: Remove return value from qemuMonitorDomainMigrationStatusCallback
qemu: Remove return value from qemuMonitorDomainMigrationPassCallback
qemu: Remove return value from qemuMonitorDomainAcpiOstInfoCallback
qemu: Remove return value from qemuMonitorDomainBlockThresholdCallback
qemu: Remove return value from qemuMonitorDomainDumpCompletedCallback
qemu: Remove return value from qemuMonitorDomainPRManagerStatusChangedCallback
qemu: Remove return value from qemuMonitorDomainRdmaGidStatusChangedCallback
qemu: Remove return value from qemuMonitorDomainGuestCrashloadedCallback
qemu: Remove return value from qemuMonitorDomainMemoryFailureCallback
qemu: process: Extract code for submitting event handling to separate thread
src/qemu/qemu_monitor.c | 215 +++++------------
src/qemu/qemu_monitor.h | 443 +++++++++++++++++------------------
src/qemu/qemu_monitor_json.c | 7 -
src/qemu/qemu_process.c | 236 ++++++-------------
src/qemu/qemu_processpriv.h | 8 +-
5 files changed, 363 insertions(+), 546 deletions(-)
--
2.31.1
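
All 33 patches above make the same mechanical change: event callbacks and their Emit* wrappers that returned a status value nobody acted on become void. A minimal sketch of the shape of that conversion; the Monitor type and callback names below are illustrative stand-ins, not libvirt's exact signatures:

    /* Illustrative sketch only: names are hypothetical stand-ins for
     * libvirt's qemuMonitor machinery. */

    typedef struct _Monitor Monitor;   /* opaque; only pointers are used */

    /* Before: every handler returned a status that callers ignored. */
    typedef int (*MonitorShutdownCallbackOld)(Monitor *mon, void *opaque);

    /* After: handlers are fire-and-forget. */
    typedef void (*MonitorShutdownCallback)(Monitor *mon, void *opaque);

    typedef struct {
        MonitorShutdownCallback domainShutdown;
    } MonitorCallbacks;

    /* The emitter loses its return value too: no more 'ret' plumbing,
     * just invoke the handler if one is registered. */
    static void
    monitorEmitShutdown(Monitor *mon, MonitorCallbacks *cb, void *opaque)
    {
        if (cb->domainShutdown)
            cb->domainShutdown(mon, opaque);
    }

Once the typedefs are void, the per-event wrappers and their callers can drop the dead status propagation, which is what the bulk of the series does.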
[libvirt PATCH] ci: maximize cirrus CI ram/cpu allocation
by Daniel P. Berrangé
For macOS you always get the maximum configuration by default (12 CPUs,
24 GB RAM), but for FreeBSD you get 2 CPUs and 4 GB by default. This
change increases the FreeBSD allocation to 8 CPUs and 8 GB RAM.
Signed-off-by: Daniel P. Berrangé <berrange(a)redhat.com>
---
In theory this could make builds quicker. In practice I've not been
able to measure a difference due to large variance between runs.
.gitlab-ci.yml | 8 ++++++++
ci/cirrus/build.yml | 2 ++
2 files changed, 10 insertions(+)
diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
index 3cb6ff5e6b..24588628f2 100644
--- a/.gitlab-ci.yml
+++ b/.gitlab-ci.yml
@@ -114,6 +114,8 @@ stages:
-e "s|[@]CIRRUS_VM_INSTANCE_TYPE@|$CIRRUS_VM_INSTANCE_TYPE|g"
-e "s|[@]CIRRUS_VM_IMAGE_SELECTOR@|$CIRRUS_VM_IMAGE_SELECTOR|g"
-e "s|[@]CIRRUS_VM_IMAGE_NAME@|$CIRRUS_VM_IMAGE_NAME|g"
+ -e "s|[@]CIRRUS_VM_CPUS@|$CIRRUS_VM_CPUS|g"
+ -e "s|[@]CIRRUS_VM_RAM@|$CIRRUS_VM_RAM|g"
-e "s|[@]UPDATE_COMMAND@|$UPDATE_COMMAND|g"
-e "s|[@]UPGRADE_COMMAND@|$UPGRADE_COMMAND|g"
-e "s|[@]INSTALL_COMMAND@|$INSTALL_COMMAND|g"
@@ -423,6 +425,8 @@ x64-freebsd-12-build:
CIRRUS_VM_INSTANCE_TYPE: freebsd_instance
CIRRUS_VM_IMAGE_SELECTOR: image_family
CIRRUS_VM_IMAGE_NAME: freebsd-12-2
+ CIRRUS_VM_CPUS: 8
+ CIRRUS_VM_RAM: 8G
UPDATE_COMMAND: pkg update
UPGRADE_COMMAND: pkg upgrade -y
INSTALL_COMMAND: pkg install -y
@@ -434,6 +438,8 @@ x64-freebsd-13-build:
CIRRUS_VM_INSTANCE_TYPE: freebsd_instance
CIRRUS_VM_IMAGE_SELECTOR: image_family
CIRRUS_VM_IMAGE_NAME: freebsd-13-0
+ CIRRUS_VM_CPUS: 8
+ CIRRUS_VM_RAM: 8G
UPDATE_COMMAND: pkg update
UPGRADE_COMMAND: pkg upgrade -y
INSTALL_COMMAND: pkg install -y
@@ -445,6 +451,8 @@ x64-macos-11-build:
CIRRUS_VM_INSTANCE_TYPE: osx_instance
CIRRUS_VM_IMAGE_SELECTOR: image
CIRRUS_VM_IMAGE_NAME: big-sur-base
+ CIRRUS_VM_CPUS: 12
+ CIRRUS_VM_RAM: 24G
UPDATE_COMMAND: brew update
UPGRADE_COMMAND: brew upgrade
INSTALL_COMMAND: brew install
diff --git a/ci/cirrus/build.yml b/ci/cirrus/build.yml
index 867d5f297b..e9ad427765 100644
--- a/ci/cirrus/build.yml
+++ b/ci/cirrus/build.yml
@@ -1,5 +1,7 @@
@CIRRUS_VM_INSTANCE_TYPE@:
@CIRRUS_VM_IMAGE_SELECTOR@: @CIRRUS_VM_IMAGE_NAME@
+ cpu: @CIRRUS_VM_CPUS@
+ memory: @CIRRUS_VM_RAM@
env:
CI_REPOSITORY_URL: "@CI_REPOSITORY_URL@"
--
2.31.1
[libvirt PATCH 00/10] virHashNew refactorings - part VI
by Tim Wiederhake
"virHashNew" cannot return NULL, yet we check for NULL in various places.
See https://listman.redhat.com/archives/libvir-list/2021-July/msg00074.html.
Tim Wiederhake (10):
qemuStateInitialize: `virHashNew` cannot return NULL
qemuMonitorGetPRManagerInfo: `virHashNew` cannot return NULL
qemuMonitorGetPRManagerInfo: Use automatic memory management
qemuMonitorGetPRManagerInfo: Remove superfluous `goto`s
qemuMonitorJSONGetAllBlockJobInfo: `virHashNew` cannot return NULL
qemuMonitorJSONGetAllBlockJobInfo: Use automatic memory management
qemuMonitorJSONGetAllBlockJobInfo: Remove superfluous `goto`s
testQemuGetLatestCaps: `virHashNew` cannot return NULL
testQemuGetLatestCaps: Use automatic memory management
testQemuGetLatestCaps: Remove superfluous `goto`s
src/qemu/qemu_driver.c | 3 +--
src/qemu/qemu_monitor.c | 13 +++----------
src/qemu/qemu_monitor_json.c | 26 ++++++++------------------
tests/testutilsqemu.c | 13 +++----------
4 files changed, 15 insertions(+), 40 deletions(-)
--
2.31.1
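
The three-step pattern repeated across these patches ("cannot return NULL", automatic memory management, superfluous gotos) is easiest to see side by side. A minimal GLib sketch follows; g_hash_table_new_full stands in for libvirt's virHashNew wrapper so the example stays self-contained. Since GLib aborts on OOM rather than returning NULL, both the NULL check and the goto-based cleanup become removable:

    #include <glib.h>

    /* Before: NULL check plus goto-based manual cleanup. */
    static int
    build_table_old(void)
    {
        GHashTable *table = NULL;
        int ret = -1;

        if (!(table = g_hash_table_new_full(g_str_hash, g_str_equal,
                                            g_free, g_free)))
            goto cleanup;  /* dead branch: GLib aborts on OOM instead */

        g_hash_table_insert(table, g_strdup("disk0"), g_strdup("running"));
        ret = 0;

     cleanup:
        if (table)
            g_hash_table_unref(table);
        return ret;
    }

    /* After: allocation cannot fail, cleanup is automatic, gotos are gone. */
    static int
    build_table_new(void)
    {
        g_autoptr(GHashTable) table = g_hash_table_new_full(g_str_hash,
                                                            g_str_equal,
                                                            g_free, g_free);

        g_hash_table_insert(table, g_strdup("disk0"), g_strdup("running"));
        return 0;
    }

g_autoptr(GHashTable) releases the table via g_hash_table_unref() when it goes out of scope, which is what makes the gotos removable.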
[PATCH v5 00/11] Support for launchSecurity type s390-pv
by Boris Fiuczynski
This patch series introduces the launch security type s390-pv.
Specifying s390-pv as the launch security type in an s390 domain prepares for
running the guest in protected virtualization secure mode, also known as
IBM Secure Execution.
diff to v4:
- changed rng to do the verification for every launchSecurity type
- removed previously added XML fail tests
- added domain capability documentation
diff to v3:
- rebased to current master
- moved virDomainSEVDef into a union
- improved XML formatting for launchSecurity
- use a shared id on the qemu cmd line for confidential-guest-support
- added check for s390-pv host support into XML validation
- changed from ignoring to failing if launchSecurity child elements are provided for s390-pv
- reduced test to a single failing test
- add availability of s390-pv in domain capabilities
diff to v2:
- broke up previous patch one into three patches
diff to v1:
- rebased to current master
- added verification check for confidential-guest-support capability
Boris Fiuczynski (11):
schemas: Refactor launch security
conf: Rework SEV XML parse and format methods
qemu: Make KVMSupportsSecureGuest capability available
conf: Refactor launch security to allow more types
qemu: Add s390-pv-guest capability
conf: Add s390-pv as launch security type
docs: Add s390-pv documentation
conf: Add availability of s390-pv in domain capabilities
docs: Add s390-pv in domain capabilities documentation
qemu: Use common id lsec0 for launchSecurity
qemu: Fix error code for SEV launchSecurity unsupported
docs/formatdomain.rst | 7 +
docs/formatdomaincaps.html.in | 10 ++
docs/kbase/s390_protected_virt.rst | 55 ++++++--
docs/schemas/domaincaps.rng | 9 ++
docs/schemas/domaincommon.rng | 79 ++++++-----
src/conf/domain_capabilities.c | 1 +
src/conf/domain_capabilities.h | 1 +
src/conf/domain_conf.c | 130 ++++++++++++------
src/conf/domain_conf.h | 17 ++-
src/conf/virconftypes.h | 2 +
src/qemu/qemu_capabilities.c | 24 ++++
src/qemu/qemu_capabilities.h | 4 +
src/qemu/qemu_cgroup.c | 4 +-
src/qemu/qemu_command.c | 75 ++++++++--
src/qemu/qemu_driver.c | 3 +-
src/qemu/qemu_firmware.c | 33 +++--
src/qemu/qemu_namespace.c | 21 ++-
src/qemu/qemu_process.c | 35 ++++-
src/qemu/qemu_validate.c | 32 ++++-
src/security/security_dac.c | 6 +-
tests/domaincapsdata/qemu_2.11.0.s390x.xml | 1 +
tests/domaincapsdata/qemu_2.12.0.s390x.xml | 1 +
tests/domaincapsdata/qemu_3.0.0.s390x.xml | 1 +
tests/domaincapsdata/qemu_4.0.0.s390x.xml | 1 +
tests/domaincapsdata/qemu_4.2.0.s390x.xml | 1 +
tests/domaincapsdata/qemu_5.2.0.s390x.xml | 1 +
tests/domaincapsdata/qemu_6.0.0.s390x.xml | 1 +
tests/domaincapsmock.c | 17 +++
.../launch-security-s390-pv.xml | 18 +++
tests/genericxml2xmltest.c | 1 +
.../qemucapabilitiesdata/caps_6.0.0.s390x.xml | 1 +
.../launch-security-s390-pv.s390x-latest.args | 35 +++++
.../launch-security-s390-pv.xml | 30 ++++
...v-missing-platform-info.x86_64-2.12.0.args | 4 +-
.../launch-security-sev.x86_64-2.12.0.args | 4 +-
.../launch-security-sev.x86_64-6.0.0.args | 4 +-
tests/qemuxml2argvmock.c | 16 +++
tests/qemuxml2argvtest.c | 2 +
38 files changed, 552 insertions(+), 135 deletions(-)
create mode 100644 tests/genericxml2xmlindata/launch-security-s390-pv.xml
create mode 100644 tests/qemuxml2argvdata/launch-security-s390-pv.s390x-latest.args
create mode 100644 tests/qemuxml2argvdata/launch-security-s390-pv.xml
--
2.31.1
[PATCH] qemu_migration: Unregister close callback only if connection still exists
by Michal Privoznik
When doing a peer-to-peer migration it may happen that the
connection to the destination disappears. If that happens,
there's no point in trying to unregister the close callback
because the connection is closed already. It results only in
polluting logs with this message:
error : virNetSocketReadWire:1814 : End of file while reading data: : Input/output error
The reason is that unregistering a close callback results in an RPC
call (among other things), which cannot succeed on a closed connection.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1918211
Signed-off-by: Michal Privoznik <mprivozn(a)redhat.com>
---
src/qemu/qemu_migration.c | 8 +++++---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index a4f44b465d..4d651aeb1a 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -5214,9 +5214,11 @@ qemuMigrationSrcPerformPeer2Peer(virQEMUDriver *driver,
cleanup:
virErrorPreserveLast(&orig_err);
- qemuDomainObjEnterRemote(vm);
- virConnectUnregisterCloseCallback(dconn, qemuMigrationSrcConnectionClosed);
- ignore_value(qemuDomainObjExitRemote(vm, false));
+ if (dconn && virConnectIsAlive(dconn) == 1) {
+ qemuDomainObjEnterRemote(vm);
+ virConnectUnregisterCloseCallback(dconn, qemuMigrationSrcConnectionClosed);
+ ignore_value(qemuDomainObjExitRemote(vm, false));
+ }
virErrorRestore(&orig_err);
return ret;
}
--
2.31.1
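
The same guard is useful anywhere a cleanup path issues RPCs on a connection that may already be dead. A minimal self-contained sketch of the pattern using libvirt's public API (error handling trimmed; the teardown helper is hypothetical):

    #include <libvirt/libvirt.h>

    static void
    connClosed(virConnectPtr conn, int reason, void *opaque)
    {
        /* react to the connection dropping */
        (void)conn; (void)reason; (void)opaque;
    }

    static void
    teardown(virConnectPtr dconn)
    {
        /* Unregistering is itself an RPC, so only attempt it while the
         * connection is still alive; otherwise we merely provoke the
         * "End of file while reading data" error quoted above. */
        if (dconn && virConnectIsAlive(dconn) == 1)
            virConnectUnregisterCloseCallback(dconn, connClosed);
    }

virConnectIsAlive() returns 1 for a live connection, 0 for a dead one, and -1 on error, so comparing against 1 skips the unregistration in both failure cases.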
[libvirt PATCH] virIdentityEnsureSystemToken: Fix error message
by Tim Wiederhake
This appears to be a copy-paste mistake from the check directly above.
Signed-off-by: Tim Wiederhake <twiederh(a)redhat.com>
---
src/util/viridentity.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/util/viridentity.c b/src/util/viridentity.c
index eb77f69e2e..c18326c8cb 100644
--- a/src/util/viridentity.c
+++ b/src/util/viridentity.c
@@ -284,7 +284,7 @@ virIdentityEnsureSystemToken(void)
} else {
if (virFileReadLimFD(fd, TOKEN_STRLEN, &token) < 0) {
virReportSystemError(errno,
- _("Failed to write system token '%s'"),
+ _("Failed to read system token '%s'"),
tokenfile);
return NULL;
}
--
2.31.1
[PATCH v2 0/2] Add support for 'id' attribute for 'cachetune'
by Kristina Hanicova
This is v2 of:
https://listman.redhat.com/archives/libvir-list/2021-July/msg00441.html
Changes since v1 (suggested by Michal):
* slight change of documentation
* improved commit message
Kristina Hanicova (2):
docs: Allow 'id' attribute for 'cachetune' element
genericxml2xmltest: Modify cachetune test to include id
docs/formatdomain.rst | 1 +
docs/schemas/domaincommon.rng | 5 ++++
tests/genericxml2xmlindata/cachetune.xml | 8 +++---
tests/genericxml2xmloutdata/cachetune.xml | 34 -----------------------
tests/genericxml2xmltest.c | 2 +-
5 files changed, 11 insertions(+), 39 deletions(-)
delete mode 100644 tests/genericxml2xmloutdata/cachetune.xml
--
2.31.1
[libvirt PATCH] docs: add kbase article on how to configure core dumps for QEMU
by Daniel P. Berrangé
Enabling core dumps is a reasonably straightforward task, but is not
documented clearly. This page provides an easy link to point users
to when they need to debug QEMU.
Signed-off-by: Daniel P. Berrangé <berrange(a)redhat.com>
---
docs/kbase/index.rst | 4 ++
docs/kbase/meson.build | 1 +
docs/kbase/qemu-core-dump.rst | 132 ++++++++++++++++++++++++++++++++++
3 files changed, 137 insertions(+)
create mode 100644 docs/kbase/qemu-core-dump.rst
diff --git a/docs/kbase/index.rst b/docs/kbase/index.rst
index 91083ee49d..372042886d 100644
--- a/docs/kbase/index.rst
+++ b/docs/kbase/index.rst
@@ -67,3 +67,7 @@ Internals / Debugging
`VM migration internals <migrationinternals.html>`__
VM migration implementation details, complementing the info in
`migration <../migration.html>`__
+
+`Capturing core dumps for QEMU <qemu-core-dump.html>`__
+ How to configure libvirt to enable capture of core dumps from
+ QEMU virtual machines
diff --git a/docs/kbase/meson.build b/docs/kbase/meson.build
index 7631b47018..6d17a83d1d 100644
--- a/docs/kbase/meson.build
+++ b/docs/kbase/meson.build
@@ -12,6 +12,7 @@ docs_kbase_files = [
'locking-sanlock',
'merging_disk_image_chains',
'migrationinternals',
+ 'qemu-core-dump',
'qemu-passthrough-security',
'rpm-deployment',
's390_protected_virt',
diff --git a/docs/kbase/qemu-core-dump.rst b/docs/kbase/qemu-core-dump.rst
new file mode 100644
index 0000000000..d27f81c4d6
--- /dev/null
+++ b/docs/kbase/qemu-core-dump.rst
@@ -0,0 +1,132 @@
+=============================
+Capturing core dumps for QEMU
+=============================
+
+The default behaviour for a QEMU virtual machine launched by libvirt is to
+have core dumps disabled. There can be times, however, when it is beneficial
+to collect a core dump to enable debugging.
+
+QEMU driver configuration
+=========================
+
+There is a global setting in the QEMU driver configuration file that controls
+whether core dumps are permitted, and their maximum size. Enabling core dumps
+is simply a matter of setting the maximum size to a non-zero value by editing
+the ``/etc/libvirt/qemu.conf`` file:
+
+::
+
+ max_core = "unlimited"
+
+For an ad hoc debugging session, setting the core dump size to "unlimited" is
+viable, on the assumption that the core dumps will be disabled again once the
+requisite information is collected. If the intention is to leave core dumps
+permanently enabled, more careful consideration of limits is required.
+
+Note that by default, a core dump will **NOT** include the guest RAM
+region, so will only include memory regions used by QEMU for emulation and
+backend purposes. This is expected to be sufficient for the vast majority
+of debugging needs.
+
+When there is a need to examine guest RAM though, a further setting is
+available
+
+::
+
+ dump_guest_core = 1
+
+This will of course result in core dumps that are as large as the biggest
+virtual machine on the host - potentially tens or even hundreds of GB in
+size. To allow more fine-grained control it is possible to toggle this on
+a per-VM basis in the XML configuration.
+
+After changing either of the settings in ``/etc/libvirt/qemu.conf``, the daemon
+hosting the QEMU driver must be restarted. For deployments using the monolithic
+daemons, this means ``libvirtd``, while for those using modular daemons this
+means ``virtqemud``
+
+::
+
+ systemctl restart libvirtd (for a monolithic deployment)
+ systemctl restart virtqemud (for a modular deployment)
+
+While libvirt attempts to make it possible to restart the daemons without
+negatively impacting running guests, there are some management operations
+that may get interrupted. In particular long running jobs like live
+migration or block device copy jobs may abort. It is thus wise to check
+that the host is mostly idle before restarting the daemons.
+
+Guest core dump configuration
+=============================
+
+The ``dump_guest_core`` setting mentioned above will allow guest RAM to be
+included in core dumps for all virtual machines on the host. This may not
+be desirable, so it is also possible to control this on a per-virtual
+machine basis in the XML configuration:
+
+::
+
+ <memory dumpCore="on">...</memory>
+
+Note, it is still necessary to at least set ``max_core`` to a non-zero
+value in the global configuration file.
+
+Some management applications may not offer the ability to customize the
+XML configuration for a guest. In such situations, using the global
+``dump_guest_core`` setting is the only option.
+
+Host OS core dump storage
+=========================
+
+The Linux kernel default behaviour is to write core dumps to a file in the
+current working directory of the process. This will not work with QEMU
+processes launched by libvirt, because their working directory is ``/``
+which will not be writable.
+
+Most modern OS distros, however, now include systemd, which configures a
+custom core dump handler out of the box. When this is in effect, core dumps
+from QEMU can be seen using the ``coredumpctl`` commands
+
+::
+
+ $ coredumpctl list -r
+ TIME PID UID GID SIG COREFILE EXE SIZE
+ Tue 2021-07-20 12:12:52 BST 2649303 107 107 SIGABRT present /usr/bin/qemu-system-x86_64 1.8M
+ ...snip...
+
+ $ coredumpctl info 2649303
+ PID: 2649303 (qemu-system-x86)
+ UID: 107 (qemu)
+ GID: 107 (qemu)
+ Signal: 6 (ABRT)
+ Timestamp: Tue 2021-07-20 12:12:52 BST (48min ago)
+ Command Line: /usr/bin/qemu-system-x86_64 -name guest=f30,debug-threads=on ..snip... -msg timestamp=on
+ Executable: /usr/bin/qemu-system-x86_64
+ Control Group: /machine.slice/machine-qemu\x2d1\x2df30.scope/libvirt/emulator
+ Unit: machine-qemu\x2d1\x2df30.scope
+ Slice: machine.slice
+ Boot ID: 6b9015d0c05f4e7fbfe4197a2c7824a2
+ Machine ID: c78c8286d6d74b22ac0dd275975f9ced
+ Hostname: localhost.localdomain
+ Storage: /var/lib/systemd/coredump/core.qemu-system-x86.107.6b9015d0c05f4e7fbfe4197a2c7824a2.2649303.1626779572000000.zst (present)
+ Disk Size: 1.8M
+ Message: Process 2649303 (qemu-system-x86) of user 107 dumped core.
+
+ Stack trace of thread 2649303:
+ #0 0x00007ff3c32436be n/a (libc.so.6 + 0xf56be)
+ #1 0x000055a949c0ed05 qemu_poll_ns (qemu-system-x86_64 + 0x7b0d05)
+ #2 0x000055a949c0e476 main_loop_wait (qemu-system-x86_64 + 0x7b0476)
+ #3 0x000055a949a36d27 qemu_main_loop (qemu-system-x86_64 + 0x5d8d27)
+ #4 0x000055a94979e4d2 main (qemu-system-x86_64 + 0x3404d2)
+ #5 0x00007ff3c3175b75 n/a (libc.so.6 + 0x27b75)
+ #6 0x000055a9497a1f5e _start (qemu-system-x86_64 + 0x343f5e)
+
+ Stack trace of thread 2649368:
+ #0 0x00007ff3c32435bf n/a (libc.so.6 + 0xf55bf)
+ #1 0x00007ff3c3af547c g_main_context_iterate.constprop.0 (libglib-2.0.so.0 + 0xa947c)
+ #2 0x00007ff3c3aa0a93 g_main_loop_run (libglib-2.0.so.0 + 0x54a93)
+ #3 0x00007ff3c17a727a red_worker_main.lto_priv.0 (libspice-server.so.1 + 0x5227a)
+ #4 0x00007ff3c3326299 start_thread (libpthread.so.0 + 0x9299)
+ #5 0x00007ff3c324e353 n/a (libc.so.6 + 0x100353)
+
+ ...snip...
--
2.31.1
Plans for the next release
by Jiri Denemark
We are getting close to the next release of libvirt. To aim for the
release on Aug 02 I suggest entering the freeze on Tuesday Jul 27 and
tagging RC2 on Thursday Jul 29.
I'll be on PTO next week and thus I won't be able to make the RC
releases; Pavel Hrdina volunteered to make them while I'm away. This
means RC1 and RC2 releases won't be signed by my PGP key, but it
shouldn't be a big deal as the final release is what matters and I will
be back in time to handle it myself.
Thanks Pavel for backing me up.
Jirka
[PATCH v3 0/2] domstats:add haltpolling time statistic interface
by Yang Fei
This series add the ability to statistic the halt polling time when
VM execute HLT(arm is WFI).
v1:
https://listman.redhat.com/archives/libvir-list/2021-July/msg00029.html
v2:
https://listman.redhat.com/archives/libvir-list/2021-July/msg00339.html
changes from v1:
- Move virGetCgroupValueRaw to utils.c and rename it virGetValueRaw, so
that it can be called to obtain the halt polling time.
- Helper functions virGetCpuHaltPollTime and virGetDebugFsKvmValue are
added in a separate patch.
- Use STRPREFIX to match the path prefix.
- Fix the logic so that domstats does not break when the platform is
non-Linux, debugfs isn't mounted, and so on.
changes from v2:
- Drop patch 1, use virFileReadValueUllong() to get halt polling data.
- Delete unnecessary error report in logs.
- Remove the Linux-only conditional compilation of the
qemuDomainGetStatsCpuHaltPollTime function.
- Document the new parameters in src/libvirt-domain.c.
Yang Fei (2):
util: Add virGetCpuHaltPollTime
qemu: Introduce qemuDomainGetStatsCpuHaltPollTime
src/libvirt-domain.c | 7 +++++++
src/libvirt_private.syms | 1 +
src/qemu/qemu_driver.c | 20 +++++++++++++++++++
src/util/virutil.c | 43 ++++++++++++++++++++++++++++++++++++++++
src/util/virutil.h | 4 ++++
5 files changed, 75 insertions(+)
--
2.23.0
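
For readers who want to inspect the underlying counters directly, the KVM statistics this series exposes live in debugfs. A standalone sketch of reading them follows; the per-VM directory layout (/sys/kernel/debug/kvm/<pid>-<vm-fd>/) and the printed stat names are assumptions based on the series description, and debugfs must be mounted with the reader running as root:

    /* Standalone sketch: read KVM's halt-polling counters for one VM
     * from debugfs. Path layout and output names are assumptions, not
     * copied from the final libvirt implementation. */
    #include <stdio.h>

    static int
    read_ull(const char *path, unsigned long long *val)
    {
        FILE *fp = fopen(path, "r");

        if (!fp)
            return -1;
        if (fscanf(fp, "%llu", val) != 1) {
            fclose(fp);
            return -1;
        }
        fclose(fp);
        return 0;
    }

    int
    main(int argc, char **argv)
    {
        unsigned long long success_ns, fail_ns;
        char path[512];

        if (argc != 2) {
            fprintf(stderr, "usage: %s /sys/kernel/debug/kvm/<pid>-<vm-fd>\n",
                    argv[0]);
            return 1;
        }

        /* Time the vCPUs spent polling that avoided a halt ... */
        snprintf(path, sizeof(path), "%s/halt_poll_success_ns", argv[1]);
        if (read_ull(path, &success_ns) < 0)
            return 1;

        /* ... and time spent polling that still ended in a halt. */
        snprintf(path, sizeof(path), "%s/halt_poll_fail_ns", argv[1]);
        if (read_ull(path, &fail_ns) < 0)
            return 1;

        printf("haltpoll.success.time=%llu\nhaltpoll.fail.time=%llu\n",
               success_ns, fail_ns);
        return 0;
    }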