[PATCH 0/3] conf,qemu: add AIA support for RISC-V 'virt'
by Daniel Henrique Barboza
Hi,
This series adds official support for RISC-V AIA (Advanced Interrupt
Architecture). AIA has been supported by the RISC-V 'virt' board, as
a machine property, since QEMU 7.0.
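On the QEMU command line this maps to the 'aia' property of the 'virt'
machine, e.g.:

    qemu-system-riscv64 -machine virt,aia=aplic-imsic ...

and the domain XML added by this series looks roughly like this (a sketch
only; the exact schema is defined in patch 2):

    <features>
      <aia value='aplic-imsic'/>
    </features>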
Daniel Henrique Barboza (3):
qemu: add capability for RISC-V AIA feature
conf,qemu: implement RISC-V 'aia' virt domain feature
qemu: add RISC-V 'aia' command line
docs/formatdomain.rst | 8 ++++
src/conf/domain_conf.c | 39 +++++++++++++++++++
src/conf/domain_conf.h | 11 ++++++
src/conf/schemas/domaincommon.rng | 15 +++++++
src/libvirt_private.syms | 2 +
src/qemu/qemu_capabilities.c | 2 +
src/qemu/qemu_capabilities.h | 1 +
src/qemu/qemu_command.c | 5 +++
src/qemu/qemu_validate.c | 15 +++++++
.../caps_8.0.0_riscv64.xml | 1 +
.../caps_9.1.0_riscv64.xml | 1 +
...cv64-virt-features-aia.riscv64-latest.args | 31 +++++++++++++++
...scv64-virt-features-aia.riscv64-latest.xml | 1 +
.../riscv64-virt-features-aia.xml | 27 +++++++++++++
tests/qemuxmlconftest.c | 2 +
15 files changed, 161 insertions(+)
create mode 100644 tests/qemuxmlconfdata/riscv64-virt-features-aia.riscv64-latest.args
create mode 120000 tests/qemuxmlconfdata/riscv64-virt-features-aia.riscv64-latest.xml
create mode 100644 tests/qemuxmlconfdata/riscv64-virt-features-aia.xml
--
2.45.2
[PATCH 00/12] Introduce SEV-SNP support
by Michal Privoznik
SEV-SNP support just landed in QEMU. Here is the first round of patches
to incorporate support into libvirt.
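For orientation, the new guest configuration looks roughly like this (a
sketch; the exact element and attribute set is defined in the 'conf:
Introduce SEV-SNP support' patch):

    <launchSecurity type='sev-snp'>
      <policy>0x00030000</policy>
    </launchSecurity>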
TODOs (aka problems of future me):
- Teach tools/virt-qemu-sev-validate how to deal with SEV-SNP
- Try to find a SEV-SNP machine and test these patches in the real world
- Write a kbase article on attestation with SEV-SNP
Michal Prívozník (12):
qemu_monitor_json: Report error in error paths in SEV related code
conf: Move some members of virDomainSEVDef into virDomainSEVCommonDef
conf: Separate SEV formatting into a function
Drop needless typecast to virDomainLaunchSecurity
src: Convert some _virDomainSecDef::sectype checks to switch()
qemu_monitor: Allow querying SEV-SNP state in 'query-sev'
qemu: Report snp-policy in virDomainGetLaunchSecurityInfo()
qemu_capabilities: Introduce QEMU_CAPS_SEV_SNP_GUEST
conf: Introduce SEV-SNP support
qemu: Build cmd line for SEV-SNP
qemu: Allow setting launch security for SEV-SNP
qemu_firmware: Pick the right firmware for SEV-SNP guests
docs/formatdomain.rst | 108 ++++++++++++
include/libvirt/libvirt-domain.h | 10 ++
src/conf/domain_conf.c | 156 ++++++++++++++----
src/conf/domain_conf.h | 28 +++-
src/conf/domain_validate.c | 44 +++++
src/conf/schemas/domaincommon.rng | 73 ++++++--
src/conf/virconftypes.h | 4 +
src/qemu/qemu_capabilities.c | 4 +
src/qemu/qemu_capabilities.h | 3 +
src/qemu/qemu_cgroup.c | 19 ++-
src/qemu/qemu_command.c | 56 ++++++-
src/qemu/qemu_driver.c | 60 +++++--
src/qemu/qemu_firmware.c | 20 ++-
src/qemu/qemu_monitor.c | 7 +-
src/qemu/qemu_monitor.h | 41 ++++-
src/qemu/qemu_monitor_json.c | 67 ++++++--
src/qemu/qemu_monitor_json.h | 8 +-
src/qemu/qemu_namespace.c | 3 +-
src/qemu/qemu_process.c | 34 ++--
src/qemu/qemu_validate.c | 13 +-
src/security/security_dac.c | 34 +++-
.../caps_9.1.0_x86_64.xml | 1 +
.../firmware/60-edk2-ovmf-x64-amdsev.json | 1 +
tests/qemumonitorjsontest.c | 65 +++++++-
...launch-security-sev-snp.x86_64-latest.args | 35 ++++
.../launch-security-sev-snp.x86_64-latest.xml | 1 +
.../launch-security-sev-snp.xml | 47 ++++++
tests/qemuxmlconftest.c | 2 +
28 files changed, 817 insertions(+), 127 deletions(-)
create mode 100644 tests/qemuxmlconfdata/launch-security-sev-snp.x86_64-latest.args
create mode 120000 tests/qemuxmlconfdata/launch-security-sev-snp.x86_64-latest.xml
create mode 100644 tests/qemuxmlconfdata/launch-security-sev-snp.xml
--
2.44.2
[PATCH 00/20] qemu: support mapped-ram+directio+multifd
by Jim Fehlig
This series is essentially V1 of a prior RFC [1] to support QEMU's
mapped-ram stream format [2] and migration capability. Along with
supporting mapped-ram, it implements a design approach we discussed
for supporting parallel save/restore [3]. In summary, the approach is:
1. Add mapped-ram migration capability
2. Steal an element from save header 'unused' for a 'features' variable
and bump save version to 3.
3. Add /etc/libvirt/qemu.conf knob for the save format version,
defaulting to latest v3
4. Use v3 (aka mapped-ram) by default
5. Use mapped-ram with BYPASS_CACHE for v3, old approach for v2
6. include: Define constants for parallel save/restore
7. qemu: Add support for parallel save. Implies mapped-ram, reject if v2
8. qemu: Add support for parallel restore. Implies mapped-ram.
Reject if v2
9. tools: add parallel parameter to virsh save command
10. tools: add parallel parameter to virsh restore command
With this series, saving and restoring using mapped-ram is enabled by
default if the underlying QEMU advertises the mapped-ram migration
capability. It can be disabled by changing the 'save_image_version'
setting in qemu.conf.
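For example, to stay on the old v2 format (a sketch; 'save_image_version'
is the knob introduced by this series):

    # /etc/libvirt/qemu.conf
    save_image_version = 2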
To use mapped-ram with QEMU:
- The 'mapped-ram' migration capability must be set to true
- The 'multifd' migration capability must be set to true and
the 'multifd-channels' migration parameter must be set to a
value >= 1
- QEMU must be provided an fdset containing the migration fd(s)
- The 'migrate' qmp command is invoked with a URI referencing the fdset
and an offset at which to start reading or writing the data stream, e.g.
{"execute":"migrate",
"arguments":{"detach":true,"resume":false,
"uri":"file:/dev/fdset/0,offset=0x11921"}}
The mapped-ram stream, in conjunction with direct IO and multifd, can
significantly improve the time required to save VM memory state. The
following tables compare mapped-ram with the existing, sequential save
stream. In all cases, the save and restore operations are to/from a
block device comprised of two NVMe disks in RAID0 configuration with
xfs (~8600MiB/s). The values in the 'save time' and 'restore time'
columns were scraped from the 'real' time reported by time(1). The
'Size' and 'Blocks' columns were provided by the corresponding
outputs of stat(1).
VM: 32G RAM, 1 vcpu, idle (shortly after boot)
                       |  save   | restore |              |
                       |  time   |  time   |     Size     | Blocks
-----------------------+---------+---------+--------------+--------
legacy                 | 6.193s  | 4.399s  |    985744812 | 1925288
-----------------------+---------+---------+--------------+--------
mapped-ram             | 5.109s  | 1.176s  |  34368554354 | 1774472
-----------------------+---------+---------+--------------+--------
legacy + direct IO     | 5.725s  | 4.512s  |    985765251 | 1925328
-----------------------+---------+---------+--------------+--------
mapped-ram + direct IO | 4.627s  | 1.490s  |  34368554354 | 1774304
-----------------------+---------+---------+--------------+--------
mapped-ram + direct IO |         |         |              |
 + multifd-channels=8  | 4.421s  | 0.845s  |  34368554318 | 1774312
-------------------------------------------------------------------
VM: 32G RAM, 30G dirty, 1 vcpu in tight loop dirtying memory
                       |  save   | restore |              |
                       |  time   |  time   |     Size     | Blocks
-----------------------+---------+---------+--------------+---------
legacy                 | 25.800s | 14.332s |  33154309983 | 64754512
-----------------------+---------+---------+--------------+---------
mapped-ram             | 18.742s | 15.027s |  34368559228 | 64617160
-----------------------+---------+---------+--------------+---------
legacy + direct IO     | 13.115s | 18.050s |  33154310496 | 64754520
-----------------------+---------+---------+--------------+---------
mapped-ram + direct IO | 13.623s | 15.959s |  34368557392 | 64662040
-----------------------+---------+---------+--------------+---------
mapped-ram + direct IO |         |         |              |
 + multifd-channels=8  | 6.994s  | 6.470s  |  34368554980 | 64665776
--------------------------------------------------------------------
As can be seen from the tables, one caveat of mapped-ram is that the
logical file size of a saved image is essentially equal to the VM memory
size. Note, however, that mapped-ram typically uses fewer blocks on disk.
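This is easy to confirm on a saved image (a sketch; the path is
hypothetical, the numbers are from the idle-VM run above):

    $ stat -c 'size=%s blocks=%b' /var/lib/libvirt/qemu/save/vm.save
    size=34368554354 blocks=1774472

i.e. a ~32G logical file with well under 1G actually allocated.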
Support for mapped-ram+direct-io only recently landed in upstream QEMU
and will first appear in the 9.1 release, which may complicate merging
support in libvirt. Specifically, I'm not sure how to detect if the
combination is supported by QEMU. Suggestions welcomed.
Similar to the RFC, V1 ignores compression. libvirt currently supports
compression by connecting the output of QEMU's save stream to the specified
compression program via a pipe. This approach is incompatible with mapped-ram
since the fd provided to QEMU must be seekable. In general, we can consider
mapped-ram and compression incompatible and document they cannot be used
together.
[1] https://lists.libvirt.org/archives/list/devel@lists.libvirt.org/message/E...
[2] https://gitlab.com/qemu-project/qemu/-/blob/master/docs/devel/migration/m...
[3] https://lists.libvirt.org/archives/list/devel@lists.libvirt.org/message/K...
Claudio Fontana (2):
include: Define constants for parallel save/restore
tools: add parallel parameter to virsh restore command
Jim Fehlig (17):
lib: virDomainSaveParams: Ensure absolute save path
qemu_fd: Add function to retrieve fdset ID
qemu: Add function to check capability in migration params
qemu: Add function to get bool value from migration params
qemu: Add mapped-ram migration capability
qemu: Add function to get migration params for save
qemu: QEMU_SAVE_VERSION: Bump to version 3
qemu: conf: Add setting for save image version
qemu: Add helper function for creating save image fd
qemu: Add support for mapped-ram on save
qemu: Decompose qemuSaveImageOpen
qemu: Move creation of qemuProcessIncomingDef struct
qemu: Apply migration parameters in qemuMigrationDstRun
qemu: Add support for mapped-ram on restore
qemu: Support O_DIRECT with mapped-ram on save
qemu: Support O_DIRECT with mapped-ram on restore
qemu: Add support for parallel save and restore
Li Zhang (1):
tools: add parallel parameter to virsh save command
docs/manpages/virsh.rst | 9 +-
include/libvirt/libvirt-domain.h | 13 ++
src/libvirt-domain.c | 52 +++++--
src/qemu/libvirtd_qemu.aug | 1 +
src/qemu/qemu.conf.in | 6 +
src/qemu/qemu_conf.c | 16 +++
src/qemu/qemu_conf.h | 5 +
src/qemu/qemu_driver.c | 104 +++++++++-----
src/qemu/qemu_fd.c | 18 +++
src/qemu/qemu_fd.h | 3 +
src/qemu/qemu_migration.c | 192 +++++++++++++++++--------
src/qemu/qemu_migration.h | 9 +-
src/qemu/qemu_migration_params.c | 86 ++++++++++++
src/qemu/qemu_migration_params.h | 17 +++
src/qemu/qemu_monitor.c | 39 ++++++
src/qemu/qemu_monitor.h | 5 +
src/qemu/qemu_process.c | 120 +++++++++++-----
src/qemu/qemu_process.h | 19 ++-
src/qemu/qemu_saveimage.c | 216 ++++++++++++++++++++---------
src/qemu/qemu_saveimage.h | 35 +++--
src/qemu/qemu_snapshot.c | 26 ++--
src/qemu/test_libvirtd_qemu.aug.in | 1 +
tools/virsh-domain.c | 79 +++++++++--
23 files changed, 827 insertions(+), 244 deletions(-)
--
2.35.3
[PATCH v3 0/5] ch: handle events from cloud-hypervisor
by Purna Pavan Chandra Aekkaladevi
changes from v2->v3:
* Remove patch 'utils: Implement virFileIsNamedPipe' as it is no longer needed.
* Remove the eventmonitorpath only if it exists
* Added domain name as a prefix to logs from ch_events.c. This will make
debugging easier.
* Simplified event parsing logic by reserving a byte for null char.
changes from v1->v2:
* Rebase on latest master
* Use /* */ for comments
* Remove the fifo file if it already exists
* Address other comments from Praveen Paladugu
cloud-hypervisor raises various events, including VM lifecycle operations
such as boot, shutdown, pause, resume, etc. Libvirt will now read these
events and take the necessary actions, such as correctly updating the
domain state. A FIFO file is passed to the `--event-monitor` option of
cloud-hypervisor. Libvirt creates a new thread that acts as the reader
of the fifo file and continuously monitors for new events. Currently,
shutdown events are handled by updating the domain state appropriately.
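For illustration, cloud-hypervisor ends up being invoked with something
like (the fifo path here is hypothetical):

    cloud-hypervisor --event-monitor path=/var/run/libvirt/ch/guest-event.fifo ...

and the events are JSON records read from that fifo; a shutdown event
looks roughly like this (a sketch, not an exact transcript):

    {"source":"vm","event":"shutdown","timestamp":{...}}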
Purna Pavan Chandra Aekkaladevi (5):
ch: pass --event-monitor option to cloud-hypervisor
ch: start a new thread for handling ch events
ch: events: Read and parse cloud-hypervisor events
ch: events: facilitate lifecycle events handling
NEWS: Mention event handling support in ch driver
NEWS.rst | 7 +
po/POTFILES | 1 +
src/ch/ch_events.c | 329 ++++++++++++++++++++++++++++++++++++++++++++
src/ch/ch_events.h | 54 ++++++++
src/ch/ch_monitor.c | 52 ++++++-
src/ch/ch_monitor.h | 11 ++
src/ch/meson.build | 2 +
7 files changed, 449 insertions(+), 7 deletions(-)
create mode 100644 src/ch/ch_events.c
create mode 100644 src/ch/ch_events.h
--
2.34.1
[PATCH 0/4] Add news for recent features and CVEs
by Han Han
Han Han (4):
NEWS: qemu: Add support for hyperv enlightenments features
NEWS: cpu_map: Add the EPYC-Genoa cpu mode
NEWS: Add the news for CVE-2024-2494
NEWS: Add the news for CVE-2024-4418
NEWS.rst | 39 +++++++++++++++++++++++++++++++++++++++
1 file changed, 39 insertions(+)
--
2.47.0
[PATCH] qemuDomainDiskChangeSupported: Add missing iothreads check
by Adam Julis
The GSList of iothreads is not allowed to change while the
virtual machine is running.
Resolves: https://issues.redhat.com/browse/RHEL-23607
Signed-off-by: Adam Julis <ajulis@redhat.com>
---
While the qemuDomainDiskChangeSupported() design primarily uses
its macros (CHECK_EQ and CHECK_STREQ_NULLABLE), the logic for comparing two
GSLists of iothreads could perhaps be extracted into a separate function
(e.g. IothreadsGslistCompare(GSList *first, GSList *second)). I am
absolutely not sure about this idea, so feel free to comment.
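For the record, such a helper might look like this (a hypothetical sketch,
not part of this patch):

    static bool
    IothreadsGslistCompare(GSList *first, GSList *second)
    {
        GSList *a = first;
        GSList *b = second;

        /* walk both lists in lockstep */
        for (; a && b; a = a->next, b = b->next) {
            virDomainDiskIothreadDef *da = a->data;
            virDomainDiskIothreadDef *db = b->data;
            ssize_t i;

            if (da->id != db->id || da->nqueues != db->nqueues)
                return false;

            for (i = 0; i < da->nqueues; i++) {
                if (da->queues[i] != db->queues[i])
                    return false;
            }
        }

        /* equal only if both lists ended at the same time */
        return a == NULL && b == NULL;
    }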
src/qemu/qemu_domain.c | 53 ++++++++++++++++++++++++++++++++++++++++++
1 file changed, 53 insertions(+)
diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index 298f4bfb9e..2b5222c685 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -8505,6 +8505,59 @@ qemuDomainDiskChangeSupported(virDomainDiskDef *disk,
CHECK_EQ(discard, "discard", true);
CHECK_EQ(iothread, "iothread", true);
+ /* compare list of iothreads, no change allowed */
+ if (orig_disk->iothreads != disk->iothreads) {
+ GSList *old;
+ GSList *new = disk->iothreads;
+ bool print_err = true;
+
+ for (old = orig_disk->iothreads; old; old = old->next) {
+ virDomainDiskIothreadDef *orig = old->data;
+ virDomainDiskIothreadDef *update;
+ print_err = false;
+
+ if (new == NULL) {
+ print_err = true;
+ break;
+ }
+
+ update = new->data;
+
+ if (orig->id != update->id) {
+ print_err = true;
+ break;
+ }
+
+ if (orig->nqueues != update->nqueues) {
+ print_err = true;
+ break;
+ }
+
+            if (orig->nqueues != 0) {
+                ssize_t i;
+
+                /* compare the per-iothread queue arrays element by element */
+                for (i = 0; i < orig->nqueues; i++) {
+                    if (orig->queues[i] != update->queues[i]) {
+                        print_err = true;
+                        break;
+                    }
+                }
+
+                /* bail out before the next iteration resets print_err */
+                if (print_err)
+                    break;
+            }
+
+ new = new->next;
+ if (new)
+ print_err = true;
+ }
+
+        if (print_err) {
+            virReportError(VIR_ERR_OPERATION_UNSUPPORTED,
+                           _("cannot modify field '%1$s' (or its parts) of the disk"),
+                           "iothreads");
+ return false;
+ }
+ }
+
+
CHECK_STREQ_NULLABLE(domain_name,
"backenddomain");
--
2.45.2
[PATCH v2] ch: Enable callbacks for ch domain events
by Praveen K Paladugu
From: Praveen K Paladugu <prapal@linux.microsoft.com>
Enable callbacks for define, undefine, started, booted, stopped,
destroyed events of ch guests.
Signed-off-by: Praveen K Paladugu <praveenkpaladugu@gmail.com>
---
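For manual testing, the new events should be observable with virsh (a
sketch; the domain name is hypothetical):

    $ virsh -c ch:///system event --all --loop
    event 'lifecycle' for domain 'ch-guest': Started Booted
    event 'lifecycle' for domain 'ch-guest': Stopped Destroyed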
src/ch/ch_conf.h | 4 +++
src/ch/ch_driver.c | 82 ++++++++++++++++++++++++++++++++++++++++++++--
2 files changed, 84 insertions(+), 2 deletions(-)
diff --git a/src/ch/ch_conf.h b/src/ch/ch_conf.h
index a77cad7a2a..97c6c24aa5 100644
--- a/src/ch/ch_conf.h
+++ b/src/ch/ch_conf.h
@@ -24,6 +24,7 @@
#include "virthread.h"
#include "ch_capabilities.h"
#include "virebtables.h"
+#include "object_event.h"
#define CH_DRIVER_NAME "CH"
#define CH_CMD "cloud-hypervisor"
@@ -75,6 +76,9 @@ struct _virCHDriver
* then lockless thereafter */
virCHDriverConfig *config;
+ /* Immutable pointer, self-locking APIs */
+ virObjectEventState *domainEventState;
+
/* pid file FD, ensures two copies of the driver can't use the same root */
int lockFD;
diff --git a/src/ch/ch_driver.c b/src/ch/ch_driver.c
index dab025edc1..d18f266387 100644
--- a/src/ch/ch_driver.c
+++ b/src/ch/ch_driver.c
@@ -28,6 +28,7 @@
#include "ch_monitor.h"
#include "ch_process.h"
#include "domain_cgroup.h"
+#include "domain_event.h"
#include "datatypes.h"
#include "driver.h"
#include "viraccessapicheck.h"
@@ -263,6 +264,7 @@ chDomainCreateWithFlags(virDomainPtr dom, unsigned int flags)
virCHDriver *driver = dom->conn->privateData;
virDomainObj *vm;
virCHDomainObjPrivate *priv;
+ virObjectEvent *event;
g_autofree char *managed_save_path = NULL;
int ret = -1;
@@ -304,6 +306,14 @@ chDomainCreateWithFlags(virDomainPtr dom, unsigned int flags)
ret = virCHProcessStart(driver, vm, VIR_DOMAIN_RUNNING_BOOTED);
}
+ if (ret == 0) {
+ event = virDomainEventLifecycleNewFromObj(vm,
+ VIR_DOMAIN_EVENT_STARTED,
+ VIR_DOMAIN_EVENT_STARTED_BOOTED);
+ if (event)
+ virObjectEventStateQueue(driver->domainEventState, event);
+ }
+
endjob:
virDomainObjEndJob(vm);
@@ -323,8 +333,10 @@ chDomainDefineXMLFlags(virConnectPtr conn, const char *xml, unsigned int flags)
{
virCHDriver *driver = conn->privateData;
g_autoptr(virDomainDef) vmdef = NULL;
+ g_autoptr(virDomainDef) oldDef = NULL;
virDomainObj *vm = NULL;
virDomainPtr dom = NULL;
+ virObjectEvent *event = NULL;
g_autofree char *managed_save_path = NULL;
unsigned int parse_flags = VIR_DOMAIN_DEF_PARSE_INACTIVE;
@@ -345,7 +357,7 @@ chDomainDefineXMLFlags(virConnectPtr conn, const char *xml, unsigned int flags)
if (!(vm = virDomainObjListAdd(driver->domains, &vmdef,
driver->xmlopt,
- 0, NULL)))
+ 0, &oldDef)))
goto cleanup;
/* cleanup if there's any stale managedsave dir */
@@ -358,11 +370,17 @@ chDomainDefineXMLFlags(virConnectPtr conn, const char *xml, unsigned int flags)
}
vm->persistent = 1;
-
+ event = virDomainEventLifecycleNewFromObj(vm,
+ VIR_DOMAIN_EVENT_DEFINED,
+ !oldDef ?
+ VIR_DOMAIN_EVENT_DEFINED_ADDED :
+ VIR_DOMAIN_EVENT_DEFINED_UPDATED);
dom = virGetDomain(conn, vm->def->name, vm->def->uuid, vm->def->id);
cleanup:
virDomainObjEndAPI(&vm);
+ virObjectEventStateQueue(driver->domainEventState, event);
+
return dom;
}
@@ -378,6 +396,7 @@ chDomainUndefineFlags(virDomainPtr dom,
{
virCHDriver *driver = dom->conn->privateData;
virDomainObj *vm;
+ virObjectEvent *event = NULL;
int ret = -1;
virCheckFlags(0, -1);
@@ -393,6 +412,9 @@ chDomainUndefineFlags(virDomainPtr dom,
"%s", _("Cannot undefine transient domain"));
goto cleanup;
}
+ event = virDomainEventLifecycleNewFromObj(vm,
+ VIR_DOMAIN_EVENT_UNDEFINED,
+ VIR_DOMAIN_EVENT_UNDEFINED_REMOVED);
vm->persistent = 0;
if (!virDomainObjIsActive(vm)) {
@@ -403,6 +425,8 @@ chDomainUndefineFlags(virDomainPtr dom,
cleanup:
virDomainObjEndAPI(&vm);
+ virObjectEventStateQueue(driver->domainEventState, event);
+
return ret;
}
@@ -643,6 +667,7 @@ chDomainDestroyFlags(virDomainPtr dom, unsigned int flags)
{
virCHDriver *driver = dom->conn->privateData;
virDomainObj *vm;
+ virObjectEvent *event = NULL;
int ret = -1;
virCheckFlags(0, -1);
@@ -662,6 +687,9 @@ chDomainDestroyFlags(virDomainPtr dom, unsigned int flags)
if (virCHProcessStop(driver, vm, VIR_DOMAIN_SHUTOFF_DESTROYED) < 0)
goto endjob;
+ event = virDomainEventLifecycleNewFromObj(vm,
+ VIR_DOMAIN_EVENT_STOPPED,
+ VIR_DOMAIN_EVENT_STOPPED_DESTROYED);
virCHDomainRemoveInactive(driver, vm);
ret = 0;
@@ -670,6 +698,8 @@ chDomainDestroyFlags(virDomainPtr dom, unsigned int flags)
cleanup:
virDomainObjEndAPI(&vm);
+ virObjectEventStateQueue(driver->domainEventState, event);
+
return ret;
}
@@ -1365,6 +1395,7 @@ static int chStateCleanup(void)
virObjectUnref(ch_driver->xmlopt);
virObjectUnref(ch_driver->caps);
virObjectUnref(ch_driver->domains);
+ virObjectUnref(ch_driver->domainEventState);
virMutexDestroy(&ch_driver->lock);
g_clear_pointer(&ch_driver, g_free);
@@ -1414,6 +1445,9 @@ chStateInitialize(bool privileged,
if (!(ch_driver->config = virCHDriverConfigNew(privileged)))
goto cleanup;
+ if (!(ch_driver->domainEventState = virObjectEventStateNew()))
+ goto cleanup;
+
if ((rv = chExtractVersion(ch_driver)) < 0) {
if (rv == -2)
ret = VIR_DRV_STATE_INIT_SKIPPED;
@@ -2205,6 +2239,48 @@ chDomainSetNumaParameters(virDomainPtr dom,
return ret;
}
+static int
+chConnectDomainEventRegisterAny(virConnectPtr conn,
+ virDomainPtr dom,
+ int eventID,
+ virConnectDomainEventGenericCallback callback,
+ void *opaque,
+ virFreeCallback freecb)
+{
+ virCHDriver *driver = conn->privateData;
+ int ret = -1;
+
+ if (virConnectDomainEventRegisterAnyEnsureACL(conn) < 0)
+ return -1;
+
+ if (virDomainEventStateRegisterID(conn,
+ driver->domainEventState,
+ dom, eventID,
+ callback, opaque, freecb, &ret) < 0)
+ ret = -1;
+
+ return ret;
+}
+
+
+static int
+chConnectDomainEventDeregisterAny(virConnectPtr conn,
+ int callbackID)
+{
+ virCHDriver *driver = conn->privateData;
+
+ if (virConnectDomainEventDeregisterAnyEnsureACL(conn) < 0)
+ return -1;
+
+ if (virObjectEventStateDeregisterID(conn,
+ driver->domainEventState,
+ callbackID, true) < 0)
+ return -1;
+
+ return 0;
+}
+
+
/* Function Tables */
static virHypervisorDriver chHypervisorDriver = {
.name = "CH",
@@ -2262,6 +2338,8 @@ static virHypervisorDriver chHypervisorDriver = {
.domainHasManagedSaveImage = chDomainHasManagedSaveImage, /* 10.2.0 */
.domainRestore = chDomainRestore, /* 10.2.0 */
.domainRestoreFlags = chDomainRestoreFlags, /* 10.2.0 */
+ .connectDomainEventRegisterAny = chConnectDomainEventRegisterAny, /* 10.8.0 */
+ .connectDomainEventDeregisterAny = chConnectDomainEventDeregisterAny, /* 10.8.0 */
};
static virConnectDriver chConnectDriver = {
--
2.44.0
[PATCH 00/10] PCI passthrough support for ch guests
by Praveen K Paladugu
This patch series introduces PCI passthrough support for ch guests. While
enabling this feature, I refactored a bunch of methods from qemu to
hypervisor in order to reduce duplication of logic between the drivers.
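On the guest side this relies on the standard libvirt hostdev XML, e.g.
(the PCI address is illustrative):

    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
      </source>
    </hostdev>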
Praveen K Paladugu (7):
hypervisor: move HostdevNeedsVFIO to hypervisor
hypervisor: move HostdevHostSupportsPassthroughVFIO
qemu: replace qemuHostdevPreparePCIDevices
ch: prepare domain definition for pci passthrough
ch: allow hostdev in domain definitions
ch: reattach PCI devices to host while stopping guest
ch: explicitly set INFILESIZE to 0
Wei Liu (3):
ch: add host device manager to driver
ch: add scaffolding for host devices management
ch: prepare host for PCI passthrough
po/POTFILES | 1 +
src/ch/ch_conf.h | 4 ++
src/ch/ch_domain.c | 2 +-
src/ch/ch_driver.c | 4 ++
src/ch/ch_hostdev.c | 115 +++++++++++++++++++++++++++++++++++
src/ch/ch_hostdev.h | 32 ++++++++++
src/ch/ch_monitor.c | 1 +
src/ch/ch_process.c | 74 +++++++++++++++++++++-
src/ch/meson.build | 2 +
src/hypervisor/virhostdev.c | 23 +++++++
src/hypervisor/virhostdev.h | 5 ++
src/libvirt_private.syms | 2 +
src/qemu/qemu_capabilities.c | 2 +-
src/qemu/qemu_cgroup.c | 5 +-
src/qemu/qemu_domain.c | 2 +-
src/qemu/qemu_driver.c | 2 +-
src/qemu/qemu_hostdev.c | 40 +-----------
src/qemu/qemu_hostdev.h | 10 ---
src/qemu/qemu_hotplug.c | 5 +-
src/qemu/qemu_namespace.c | 2 +-
tests/domaincapstest.c | 2 +-
21 files changed, 276 insertions(+), 59 deletions(-)
create mode 100644 src/ch/ch_hostdev.c
create mode 100644 src/ch/ch_hostdev.h
--
2.44.0
[PATCH] virnetdevopenvswitch: Warn on unsupported QoS settings
by Michal Privoznik
Let me preface this by stating the obvious: documentation on
QoS in OVS is very sparse. This is all based on my observations
and analysis of the OVS code base.
For the following QoS setting:
<bandwidth>
<inbound average="512" peak="1024" burst="32"/>
</bandwidth>
the following QoS setting is generated into OVS (NB, our XML
values are in KiB/s, OVS has them in bits/s):
# ovs-vsctl list qos
_uuid : a087226b-2da6-4575-ad4c-bf570cb812a9
external_ids : {ifname=vnet1, vm-id="7714e6b5-4885-4140-bc59-2f77cc99b3b5"}
other_config : {burst="262144", max-rate="8192000", min-rate="4096000"}
queues : {0=655bf3a7-e530-4516-9caf-ec9555dfbd4c}
type : linux-htb
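(The conversion factors are visible in the numbers: min-rate 4096000 =
512 x 8 x 1000 and max-rate 8192000 = 1024 x 8 x 1000, while burst
262144 = 32 x 1024 x 8.)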
from which the following topology is generated:
# for i in qdisc class; do tc -s -d -g $i show dev vnet1; done
qdisc htb 1: root refcnt 2 r2q 10 default 0x1 direct_packets_stat 0 ver 3.17 direct_qlen 1000
Sent 2186 bytes 16 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
+---(1:fffe) htb rate 8192Kbit ceil 8192Kbit linklayer ethernet burst 1499b/1mpu 60b cburst 1499b/1mpu 60b level 7
| Sent 2186 bytes 16 pkt (dropped 0, overlimits 0 requeues 0)
| backlog 0b 0p requeues 0
|
+---(1:1) htb prio 0 quantum 51200 rate 4096Kbit ceil 8192Kbit linklayer ethernet burst 32Kb/1mpu 60b cburst 32Kb/1mpu 60b level 0
Sent 2186 bytes 16 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
Long story short, the default class (1:1) for an OVS interface has
average and peak set exactly as requested. But since it's nested
under another class (1:fffe), it can borrow unused bandwidth. And
the parent is set to have rate = ceil = peak from our XML. From
[1]: htb_tc_install() calls htb_parse_qdisc_details__() which
sets: 'hc->min_rate = hc->max_rate;' and then calls
htb_setup_class_(..., tc_make_handle(1, 0xfffe), tc_make_handle(1, 0), &hc);
to set up the top parent class.
In other words - the interface is set up so that it can always
consume 'peak' bandwidth and there is no way for us to set it up
differently. It's too late to deny setting 'peak' different from
'average' at the XML validation phase, so do the next best thing -
throw a warning, just like we do in case <bandwidth/> is set for
an unsupported <interface/> type.
1: https://github.com/openvswitch/ovs/blob/main/lib/netdev-linux.c#L5039
Resolves: https://issues.redhat.com/browse/RHEL-53963
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
---
src/util/virnetdevopenvswitch.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/src/util/virnetdevopenvswitch.c b/src/util/virnetdevopenvswitch.c
index e23f4c83b6..598cfa0031 100644
--- a/src/util/virnetdevopenvswitch.c
+++ b/src/util/virnetdevopenvswitch.c
@@ -945,6 +945,10 @@ virNetDevOpenvswitchInterfaceSetQos(const char *ifname,
}
if (tx && tx->average) {
+ if (tx->peak && tx->peak != tx->average) {
+ VIR_WARN("Setting different 'peak' value than 'average' for QoS for OVS interface %s is unsupported",
+ ifname);
+ }
if (virNetDevOpenvswitchInterfaceSetTxQos(ifname, tx, vmuuid) < 0)
return -1;
} else {
@@ -954,6 +958,10 @@ virNetDevOpenvswitchInterfaceSetQos(const char *ifname,
}
if (rx) {
+        if (rx->peak && rx->peak != rx->average) {
+ VIR_WARN("Setting different 'peak' value than 'average' for QoS for OVS interface %s is unsupported",
+ ifname);
+ }
if (virNetDevOpenvswitchInterfaceSetRxQos(ifname, rx) < 0)
return -1;
} else {
--
2.45.2
[PATCH v3 0/7] introduce job-change qmp command
by Vladimir Sementsov-Ogievskiy
Hi all!
Here is new job-change command - a replacement for (becoming deprecated)
block-job-change, as a first step of my "[RFC 00/15] block job API"
v3:
01: add a-b by Markus
03: add a-b by Markus, s/9.1/9.2/ in QAPI
05: update commit message, s/9.1/9.2/ in QAPI
06: update commit message (and subject!), s/9.1/9.2/ in deprecated.rst
Vladimir Sementsov-Ogievskiy (7):
qapi: rename BlockJobChangeOptions to JobChangeOptions
blockjob: block_job_change_locked(): check job type
qapi: block-job-change: make copy-mode parameter optional
blockjob: move change action implementation to job from block-job
qapi: add job-change
qapi/block-core: deprecate block-job-change
iotests/mirror-change-copy-mode: switch to job-change command
block/mirror.c | 13 +++++---
blockdev.c | 4 +--
blockjob.c | 20 ------------
docs/about/deprecated.rst | 5 +++
include/block/blockjob.h | 11 -------
include/block/blockjob_int.h | 7 -----
include/qemu/job.h | 12 +++++++
job-qmp.c | 15 +++++++++
job.c | 23 ++++++++++++++
qapi/block-core.json | 31 ++++++++++++++-----
.../tests/mirror-change-copy-mode | 2 +-
11 files changed, 90 insertions(+), 53 deletions(-)
--
2.34.1