[PATCH] qemu_tpm: Don't crash if qemuTPMPcrBankBitmapToStr(NULL)
by Michal Privoznik
Historically, the tpm->data.emulator.activePcrBanks member was an
unsigned int, but since it was used as a bitmap it was converted
to the virBitmap type instead. Now, the virBitmap is allocated inside
virDomainTPMDefParseXML(), but only if <activePcrBanks/> was
found with at least one child element. Otherwise it stays NULL.
Fast forward to starting a domain with TPM 2.0 and no
<activePcrBanks/> configured. Eventually,
qemuTPMEmulatorBuildCommand() is called, which subsequently calls
qemuTPMEmulatorReconfigure() and finally
qemuTPMPcrBankBitmapToStr() passing the NULL value. Before
the rewrite to virBitmap this function would return NULL for an empty
activePcrBanks, but now it crashes.
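The failure boils down to iterating a bitmap without a NULL check
first. A self-contained sketch of the pattern (the Bitmap type and
helpers below are simplified stand-ins, not the real virBitmap API):

#include <stdio.h>

/* Stand-in for virBitmap. */
typedef struct { unsigned long bits; } Bitmap;

/* Stand-in for virBitmapNextSetBit(): returns the next set bit strictly
 * after 'pos', or -1 when none is left. Like the real iterator it
 * dereferences 'map', so passing NULL crashes here. */
static long
nextSetBit(const Bitmap *map, long pos)
{
    long i;

    for (i = pos + 1; i < (long)(8 * sizeof(map->bits)); i++)
        if (map->bits & (1UL << i))
            return i;
    return -1;
}

static void
printBanks(const Bitmap *map)
{
    long bank = -1;

    if (!map)   /* the guard this patch adds */
        return;

    while ((bank = nextSetBit(map, bank)) > -1)
        printf("bank %ld\n", bank);
}

int
main(void)
{
    Bitmap b = { 0x5 };   /* bits 0 and 2 set */

    printBanks(&b);
    printBanks(NULL);     /* safe only with the guard in place */
    return 0;
}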
Fixes: 52c7c31c8038aa31d502f59a40e4fb4ba9f61113
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
---
src/qemu/qemu_tpm.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/src/qemu/qemu_tpm.c b/src/qemu/qemu_tpm.c
index c08b0851da..584c787b70 100644
--- a/src/qemu/qemu_tpm.c
+++ b/src/qemu/qemu_tpm.c
@@ -449,6 +449,9 @@ qemuTPMPcrBankBitmapToStr(virBitmap *activePcrBanks)
g_auto(virBuffer) buf = VIR_BUFFER_INITIALIZER;
ssize_t bank = -1;
+ if (!activePcrBanks)
+ return NULL;
+
while ((bank = virBitmapNextSetBit(activePcrBanks, bank)) > -1)
virBufferAsprintf(&buf, "%s,", virDomainTPMPcrBankTypeToString(bank));
--
2.35.1
[PATCH 00/11] Dirty page rate limit support
by huangy81@chinatelecom.cn
From: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
QEMU introduced the dirty page rate limit feature in 7.1.0; see the details in
the following link:
https://lore.kernel.org/qemu-devel/cover.1656177590.git.huangy81@chinatel...
So it may be the right time to enable this feature in libvirt so that
upper-layer users can make use of it.
Upper-layer applications could use this feature to implement vCPU QoS,
among other things.
This series adds 2 new APIs to implement dirty page rate limiting:
1. virDomainSetVcpuDirtyLimit, which sets the vCPU dirty page rate
limit. The virsh command 'vcpudirtylimit' is implemented correspondingly.
2. virDomainCancelVcpuDirtyLimit, which cancels the vCPU dirty page rate
limit. A 'cancel' option was introduced to 'vcpudirtylimit' to cancel
the limit correspondingly.
In addition, the function 'qemuMonitorQueryVcpuDirtyLimit' was implemented
to query the dirty page rate limit, and the virsh command 'vcpuinfo' was
extended so that users can query dirty page rate limit info via 'vcpuinfo'.
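For illustration, a minimal client-side sketch. The exact C signatures
are not shown in this cover letter, so the parameter lists below (vCPU
index, rate, flags) are assumptions rather than the proposed API, and it
would only compile against a libvirt with this series applied:

#include <stdio.h>
#include <libvirt/libvirt.h>

int
main(void)
{
    virConnectPtr conn = virConnectOpen("qemu:///system");
    virDomainPtr dom;

    if (!conn)
        return 1;

    dom = virDomainLookupByName(conn, "guest1");
    if (!dom) {
        virConnectClose(conn);
        return 1;
    }

    /* Assumed signature: limit vCPU 0 to dirtying 10 MB/s. */
    if (virDomainSetVcpuDirtyLimit(dom, 0, 10, 0) < 0)
        fprintf(stderr, "setting dirty page rate limit failed\n");

    /* Assumed signature: lift the limit on vCPU 0 again. */
    if (virDomainCancelVcpuDirtyLimit(dom, 0, 0) < 0)
        fprintf(stderr, "cancelling dirty page rate limit failed\n");

    virDomainFree(dom);
    virConnectClose(conn);
    return 0;
}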
The main modifications in this series are as follows:
- introduce the QEMU_CAPS_VCPU_DIRTY_LIMIT capability so that libvirt can
  probe before using the dirty page rate limit feature.
- implement the virsh command 'vcpudirtylimit' to set/cancel the dirty page
  rate limit.
- extend the vcpuinfo API so that it can display dirty limit info.
- document the dirty page rate limit feature.
Please review, thanks!
Best regards!
Hyman Huang(黄勇) (11):
qemu_capabilities: Introduce QEMU_CAPS_VCPU_DIRTY_LIMIT capability
libvirt: Add virDomainSetVcpuDirtyLimit API
qemu_driver: Implement qemuDomainSetVcpuDirtyLimit
virsh: Introduce vcpudirtylimit api
qemu_monitor: Implement qemuMonitorQueryVcpuDirtyLimit
qemu_driver: Extend qemuDomainGetVcpus for dirtylimit
virsh: Extend vcpuinfo api to display dirtylimit info
libvirt: Add virDomainCancelVcpuDirtyLimit API
qemu_driver: Implement qemuDomainCancelVcpuDirtyLimit
virsh: Add cancel option to vcpudirtylimit api
NEWS: Document dirty page rate limit APIs
NEWS.rst | 13 ++
include/libvirt/libvirt-domain.h | 22 ++++
src/driver-hypervisor.h | 13 ++
src/libvirt-domain.c | 95 +++++++++++++++
src/libvirt_public.syms | 6 +
src/qemu/qemu_capabilities.c | 4 +
src/qemu/qemu_capabilities.h | 3 +
src/qemu/qemu_driver.c | 146 +++++++++++++++++++++++
src/qemu/qemu_monitor.c | 35 ++++++
src/qemu/qemu_monitor.h | 26 ++++
src/qemu/qemu_monitor_json.c | 137 +++++++++++++++++++++
src/qemu/qemu_monitor_json.h | 13 ++
src/remote/remote_daemon_dispatch.c | 2 +
src/remote/remote_driver.c | 4 +
src/remote/remote_protocol.x | 30 ++++-
src/remote_protocol-structs | 13 ++
tests/qemucapabilitiesdata/caps_7.1.0.x86_64.xml | 1 +
tools/virsh-domain.c | 112 +++++++++++++++++
18 files changed, 673 insertions(+), 2 deletions(-)
--
1.8.3.1
[PATCH for-7.1?] Fix some typos in documentation (most of them found by codespell)
by Stefan Weil
Signed-off-by: Stefan Weil <sw@weilnetz.de>
---
docs/about/deprecated.rst | 2 +-
docs/specs/acpi_erst.rst | 4 ++--
docs/system/devices/canokey.rst | 8 ++++----
docs/system/devices/cxl.rst | 12 ++++++------
4 files changed, 13 insertions(+), 13 deletions(-)
diff --git a/docs/about/deprecated.rst b/docs/about/deprecated.rst
index 7ee26626d5..91b03115ee 100644
--- a/docs/about/deprecated.rst
+++ b/docs/about/deprecated.rst
@@ -297,7 +297,7 @@ by using ``-machine graphics=off``.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
In QEMU versions 6.1, 6.2 and 7.0, the ``nvme-ns`` generates an EUI-64
-identifer that is not globally unique. If an EUI-64 identifer is required, the
+identifier that is not globally unique. If an EUI-64 identifier is required, the
user must set it explicitly using the ``nvme-ns`` device parameter ``eui64``.
``-device nvme,use-intel-id=on|off`` (since 7.1)
diff --git a/docs/specs/acpi_erst.rst b/docs/specs/acpi_erst.rst
index a8a9d22d25..2339b60ad7 100644
--- a/docs/specs/acpi_erst.rst
+++ b/docs/specs/acpi_erst.rst
@@ -108,7 +108,7 @@ Slot 0 contains a backend storage header that identifies the contents
as ERST and also facilitates efficient access to the records.
Depending upon the size of the backend storage, additional slots will
be designated to be a part of the slot 0 header. For example, at 8KiB,
-the slot 0 header can accomodate 1021 records. Thus a storage size
+the slot 0 header can accommodate 1021 records. Thus a storage size
of 8MiB (8KiB * 1024) requires an additional slot for use by the
header. In this scenario, slot 0 and slot 1 form the backend storage
header, and records can be stored starting at slot 2.
@@ -196,5 +196,5 @@ References
[2] "Unified Extensible Firmware Interface Specification",
version 2.1, October 2008.
-[3] "Windows Hardware Error Architecture", specfically
+[3] "Windows Hardware Error Architecture", specifically
"Error Record Persistence Mechanism".
diff --git a/docs/system/devices/canokey.rst b/docs/system/devices/canokey.rst
index c2c58ae3e7..cfa6186e48 100644
--- a/docs/system/devices/canokey.rst
+++ b/docs/system/devices/canokey.rst
@@ -28,9 +28,9 @@ With the same software configuration as a hardware key,
the guest OS can use all the functionalities of a secure key as if
there was actually an hardware key plugged in.
-CanoKey QEMU provides much convenience for debuging:
+CanoKey QEMU provides much convenience for debugging:
-* libcanokey-qemu supports debuging output thus developers can
+* libcanokey-qemu supports debugging output thus developers can
inspect what happens inside a secure key
* CanoKey QEMU supports trace event thus event
* QEMU USB stack supports pcap thus USB packet between the guest
@@ -102,8 +102,8 @@ and find CanoKey QEMU there:
You may setup the key as guided in [6]_. The console for the key is at [7]_.
-Debuging
-========
+Debugging
+=========
CanoKey QEMU consists of two parts, ``libcanokey-qemu.so`` and ``canokey.c``,
the latter of which resides in QEMU. The former provides core functionality
diff --git a/docs/system/devices/cxl.rst b/docs/system/devices/cxl.rst
index 36031325cc..f25783a4ec 100644
--- a/docs/system/devices/cxl.rst
+++ b/docs/system/devices/cxl.rst
@@ -83,7 +83,7 @@ CXL Fixed Memory Windows (CFMW)
A CFMW consists of a particular range of Host Physical Address space
which is routed to particular CXL Host Bridges. At time of generic
software initialization it will have a particularly interleaving
-configuration and associated Quality of Serice Throtling Group (QTG).
+configuration and associated Quality of Service Throttling Group (QTG).
This information is available to system software, when making
decisions about how to configure interleave across available CXL
memory devices. It is provide as CFMW Structures (CFMWS) in
@@ -98,7 +98,7 @@ specification defined register interface called CXL Host Bridge
Component Registers (CHBCR). The location of this CHBCR MMIO
space is described to system software via a CXL Host Bridge
Structure (CHBS) in the CEDT ACPI table. The actual interfaces
-are identical to those used for other parts of the CXL heirarchy
+are identical to those used for other parts of the CXL hierarchy
as CXL Component Registers in PCI BARs.
Interfaces provided include:
@@ -143,7 +143,7 @@ CXL Memory Devices - Type 3
~~~~~~~~~~~~~~~~~~~~~~~~~~~
CXL type 3 devices use a PCI class code and are intended to be supported
by a generic operating system driver. They have HDM decoders
-though in these EP devices, the decoder is reponsible not for
+though in these EP devices, the decoder is responsible not for
routing but for translation of the incoming host physical address (HPA)
into a Device Physical Address (DPA).
@@ -209,7 +209,7 @@ Notes:
ranges of the system physical address map. Each CFMW has
particular interleave setup across the CXL Host Bridges (HB)
CFMW0 provides uninterleaved access to HB0, CFW2 provides
- uninterleaved acess to HB1. CFW1 provides interleaved memory access
+ uninterleaved access to HB1. CFW1 provides interleaved memory access
across HB0 and HB1.
(2) **Two CXL Host Bridges**. Each of these has 2 CXL Root Ports and
@@ -282,7 +282,7 @@ Example topology involving a switch::
---------------------------------------------------
| Switch 0 USP as PCI 0d:00.0 |
| USP has HDM decoder which direct traffic to |
- | appropiate downstream port |
+ | appropriate downstream port |
| Switch BUS appears as 0e |
|x__________________________________________________|
| | | |
@@ -366,7 +366,7 @@ An example of 4 devices below a switch suitable for 1, 2 or 4 way interleave::
Kernel Configuration Options
----------------------------
-In Linux 5.18 the followings options are necessary to make use of
+In Linux 5.18 the following options are necessary to make use of
OS management of CXL memory devices as described here.
* CONFIG_CXL_BUS
--
2.30.2
[libvirt PATCH] schema: Add maxphysaddr element to hostcpu
by Tim Wiederhake
The output of "virsh capabilities" was not conformant to the
capability.rng schema. Add the missing element to the schema.
Fixes: c647bf29afb9890c792172ecf7db2c9c27babbb6
Signed-off-by: Tim Wiederhake <twiederh@redhat.com>
---
src/conf/schemas/cputypes.rng | 3 +++
1 file changed, 3 insertions(+)
diff --git a/src/conf/schemas/cputypes.rng b/src/conf/schemas/cputypes.rng
index 4ae386c3c0..d02f2f88cf 100644
--- a/src/conf/schemas/cputypes.rng
+++ b/src/conf/schemas/cputypes.rng
@@ -387,6 +387,9 @@
<optional>
<ref name="cpuTopology"/>
</optional>
+ <optional>
+ <ref name="cpuMaxPhysAddr"/>
+ </optional>
<zeroOrMore>
<element name="feature">
<attribute name="name">
--
2.36.1
[PATCH] NEWS: Mention support for specifying vCPU address size
by Jim Fehlig
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
---
NEWS.rst | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/NEWS.rst b/NEWS.rst
index efd63bc9c3..548828c60d 100644
--- a/NEWS.rst
+++ b/NEWS.rst
@@ -17,6 +17,11 @@ v8.7.0 (unreleased)
* **New features**
+ * qemu: Add support for specifying vCPU physical address size in bits
+
+ Users can now specify the number of vCPU physical address bits with
+ the `<maxphysaddr>` subelement of the `<cpu>` element.
+
* **Improvements**
* **Bug fixes**
--
2.37.1
[libvirt PATCH 0/3] qemu: retire QEMU_CAPS_VIRTIO_TX_ALG
by Ján Tomko
Ján Tomko (3):
tests: qemuxml2xmltest: remove interface from disk test
qemu: always assume QEMU_CAPS_VIRTIO_TX_ALG
qemu: retire QEMU_CAPS_VIRTIO_TX_ALG
src/qemu/qemu_capabilities.c | 3 +-
src/qemu/qemu_capabilities.h | 2 +-
src/qemu/qemu_command.c | 36 +++++++++----------
src/qemu/qemu_validate.c | 8 -----
.../caps_4.2.0.aarch64.xml | 1 -
.../qemucapabilitiesdata/caps_4.2.0.ppc64.xml | 1 -
.../qemucapabilitiesdata/caps_4.2.0.s390x.xml | 1 -
.../caps_4.2.0.x86_64.xml | 1 -
.../caps_5.0.0.aarch64.xml | 1 -
.../qemucapabilitiesdata/caps_5.0.0.ppc64.xml | 1 -
.../caps_5.0.0.riscv64.xml | 1 -
.../caps_5.0.0.x86_64.xml | 1 -
.../caps_5.1.0.x86_64.xml | 1 -
.../caps_5.2.0.aarch64.xml | 1 -
.../qemucapabilitiesdata/caps_5.2.0.ppc64.xml | 1 -
.../caps_5.2.0.riscv64.xml | 1 -
.../qemucapabilitiesdata/caps_5.2.0.s390x.xml | 1 -
.../caps_5.2.0.x86_64.xml | 1 -
.../caps_6.0.0.aarch64.xml | 1 -
.../qemucapabilitiesdata/caps_6.0.0.s390x.xml | 1 -
.../caps_6.0.0.x86_64.xml | 1 -
.../caps_6.1.0.x86_64.xml | 1 -
.../caps_6.2.0.aarch64.xml | 1 -
.../qemucapabilitiesdata/caps_6.2.0.ppc64.xml | 1 -
.../caps_6.2.0.x86_64.xml | 1 -
.../caps_7.0.0.aarch64.xml | 1 -
.../qemucapabilitiesdata/caps_7.0.0.ppc64.xml | 1 -
.../caps_7.0.0.x86_64.xml | 1 -
.../caps_7.1.0.x86_64.xml | 1 -
.../disk-copy_on_read.x86_64-latest.args | 4 +--
tests/qemuxml2argvdata/disk-copy_on_read.xml | 5 ---
tests/qemuxml2argvtest.c | 3 +-
.../qemuxml2xmloutdata/disk-copy_on_read.xml | 8 +----
tests/qemuxml2xmltest.c | 4 +--
34 files changed, 24 insertions(+), 74 deletions(-)
--
2.37.1
[libvirt PATCH 0/3] tests: Fix qemucapabilitiestest on macOS
by Andrea Bolognani
We need to mock the function that probes for HVF support.
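For reference, a sketch of what such a mock boils down to (the exact
body and include are assumptions; per the diffstat below, the real
change lives in tests/domaincapsmock.c): an override of the now-public
probe function that wins over the real implementation at link/preload
time, so capability probing never touches the host's
Hypervisor.framework:

#include "qemu/qemu_capabilities.h"

/* Hypothetical mock body: report "no HVF" regardless of the host. */
int
virQEMUCapsProbeHVF(virQEMUCaps *qemuCaps G_GNUC_UNUSED)
{
    return 0;
}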
Andrea Bolognani (3):
tests: Use domaincapsmock in qemucapabilitiestest
qemu: Make virQEMUCapsProbeHVF() non-static
tests: Mock virQEMUCapsProbeHVF()
src/qemu/qemu_capabilities.c | 4 ++--
src/qemu/qemu_capabilities.h | 2 ++
tests/domaincapsmock.c | 6 ++++++
tests/qemucapabilitiestest.c | 3 ++-
4 files changed, 12 insertions(+), 3 deletions(-)
--
2.37.1
[PATCH v2] qemuDomainObjWait: Report error when VM is being destroyed
by Peter Krempa
Since we started handling the monitor EOF event inside a job, any code
which uses virDomainObjWait would no longer properly abort in case
the VM crashed during the wait.
This is because virDomainObjWait uses virDomainObjIsActive which checks
'vm->def->id' to see if the VM is still active. Unfortunately the domain
id is cleared in qemuProcessStop which is run only inside the job.
To fix this we can use the 'beingDestroyed' flag stored in the VM
private data which is set to true around the time when the condition is
signalled.
Reported-by: Pavel Hrdina <phrdina@redhat.com>
Fixes: 8c9ff9960b29d4703a99efdd1cadcf6f48799cc0
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
---
v2: Fix the condition in qemuMigrationSrcWaitForCompletion, where we check
whether the VM is active, to use the same logic
src/qemu/qemu_domain.c | 12 +++++++++++-
src/qemu/qemu_migration.c | 2 +-
2 files changed, 12 insertions(+), 2 deletions(-)
diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index 2caed7315b..43b5ad5d6e 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -11780,5 +11780,15 @@ qemuDomainRemoveLogs(virQEMUDriver *driver,
int
qemuDomainObjWait(virDomainObj *vm)
{
- return virDomainObjWait(vm);
+ qemuDomainObjPrivate *priv = vm->privateData;
+
+ if (virDomainObjWait(vm) < 0)
+ return -1;
+
+ if (priv->beingDestroyed) {
+ virReportError(VIR_ERR_OPERATION_FAILED, "%s", _("domain is not running"));
+ return -1;
+ }
+
+ return 0;
}
diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index 0b48852b9d..8a8e9ab207 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -2078,7 +2078,7 @@ qemuMigrationSrcWaitForCompletion(virDomainObj *vm,
return rv;
if (qemuDomainObjWait(vm) < 0) {
- if (virDomainObjIsActive(vm))
+ if (virDomainObjIsActive(vm) && !priv->beingDestroyed)
jobData->status = VIR_DOMAIN_JOB_STATUS_FAILED;
return -2;
}
--
2.37.1
[PATCH] qemuMigrationSrcWaitForSpice: Remove return value
by Peter Krempa
The only caller doesn't check the return value and has no return value
of its own either. Remove the return value and adjust the return statements.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
---
src/qemu/qemu_migration.c | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index 0b48852b9d..e28f738ebf 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -1773,21 +1773,20 @@ qemuMigrationDstPostcopyFailed(virDomainObj *vm)
}
-static int
+static void
qemuMigrationSrcWaitForSpice(virDomainObj *vm)
{
qemuDomainObjPrivate *priv = vm->privateData;
qemuDomainJobPrivate *jobPriv = priv->job.privateData;
if (!jobPriv->spiceMigration)
- return 0;
+ return;
VIR_DEBUG("Waiting for SPICE to finish migration");
while (!jobPriv->spiceMigrated && !priv->job.abortJob) {
if (qemuDomainObjWait(vm) < 0)
- return -1;
+ return;
}
- return 0;
}
--
2.37.1