[RFC 0/4] meson: Enable -Wundef
by Andrea Bolognani
A few days ago I posted a patch[1] that addresses an issue
introduced when a meson check was dropped but some uses of the
corresponding WITH_ macro were not removed at the same time.
That got me thinking about what we can do to prevent such scenarios
from happening again in the future. I have come up with something
that I think would be effective, but since applying the approach
throughout the entire codebase would require a non-trivial amount of
work, I figured I'd ask for feedback before embarking on it.
The idea is that there are two types of macros we can use for
conditional compilation: external ones, coming from the OS or other
libraries, and internal ones, which are the result of meson tests.
The external ones (e.g. SIOCSIFFLAGS, __APPLE__) are usually only
defined if they apply, so it is correct to check for their presence
with #ifdef. Using #if will also work, as undefined macros evaluate
to zero, but it's not good practice to use them that way. If -Wundef
has been passed to the compiler, those incorrect uses will be
reported (only on platforms where they are not defined, of course).
The internal ones (e.g. WITH_QEMU, WITH_STRUCT_IFREQ) are similar,
but in this case we control their definition. This makes checking
them with #ifdef ambiguous: an undefined macro could mean that the
feature is not available on the machine we're building on, but it
could also mean that we've removed the meson check and forgot to
update all users of the macro. If we instead always define these
macros, to either 0 or 1, and only ever check them with #if, then
-Wundef will work 100% reliably to detect the issue: if the meson
check doesn't exist, neither will the macro, regardless of what
platform we're building on.
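To make the distinction concrete, here is a minimal sketch of the two
conventions (the function names are made up for illustration):

  /* External macro: only defined on platforms where it applies,
   * so checking for its presence with #ifdef is correct. */
  #ifdef __APPLE__
      setup_macos_quirks();
  #endif

  /* Internal macro: under this proposal it is always defined by meson,
   * to either 0 or 1, so #if is the correct check and -Wundef reports
   * any stale or misspelled WITH_ macro. */
  #if WITH_QEMU
      setup_qemu_driver();
  #endif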
So the approach I'm suggesting is to use a syntax-check rule to
ensure that internal macros are only ever checked with #if instead
of #ifdef.
Of course this requires a full sweep to fix all cases in which we're
not already doing things according to the proposal. Should be fairly
easy, if annoying. A couple of examples are included here for
demonstration purposes.
The bigger impact is going to be on the build system. Right now we
generally only define WITH_ macros if the check passed, but that will
have to change, and the result is going to be quite a bit of
additional meson code, I'm afraid.
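To illustrate on a host where the QEMU driver is disabled, the
generated config header would change roughly like this (a sketch, not
the actual meson output):

  /* today: the macro is simply absent when the check fails,
   * so '#if WITH_QEMU' trips -Wundef and '#ifdef WITH_QEMU'
   * silently hides stale uses */
  /* (no WITH_QEMU line at all) */

  /* with the proposal: the macro is always present, as 0 or 1 */
  #define WITH_QEMU 0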
Thoughts?
[1] https://lists.libvirt.org/archives/list/devel@lists.libvirt.org/message/S...
Andrea Bolognani (4):
configmake: Check for WIN32 correctly
meson: Always define WITH_*_DECL macros
syntax-check: Ensure WITH_ macros are used correctly
meson: Enable -Wundef
build-aux/syntax-check.mk | 5 +++++
configmake.h.in | 2 +-
meson.build | 3 +++
tests/virmockstathelpers.c | 28 ++++++++++++++--------------
4 files changed, 23 insertions(+), 15 deletions(-)
--
2.43.2
Re: [PATCH] Check snapshot disk is not NULL when searching it in the VM config
by Peter Krempa
On Mon, May 20, 2024 at 14:48:47 +0000, Efim Shevrin via Devel wrote:
> Hello,
>
> > If vmdisk is NULL, shouldn't this function (qemuSnapshotDeleteValidate()) return an error?
>
> I think this qemuSnapshotDeleteValidate should not return an error.
>
> It seems to me that when vmdisk is NULL, this does not invalidate
> the snapshot itself, but indicates that the config has changed since
> the snapshot was done. And if the VM config has changed, this adds evidence that the snapshot should be deleted,
> because the snapshot does not reflect the real vm config.
>
> Since we do not have an analogue of the --force option for deleting a snapshot, in the case when qemuSnapshotDeleteValidate returns
> an error when vmdisk is NULL, we will never be able to delete a snapshot which has an invalid disk.
Snapshot deletion does have something that can be considered force and
that is the '--metadata' option that removes just the snapshot
definition (metadata) and doesn't touch the disk images.
> > Similarly, disk can be NULL too
> Thank you for the comment regarding the disk variable. I've reworked the patch.
>
> When creating a snapshot of a VM with multiple hard disks,
> the snapshot takes into account the presence of all disks
> in the system. If, over time, one of the disks is deleted,
> the snapshot will continue to store knowledge of the deleted disk.
> This means that at the moment of deleting the snapshot,
> at the validation stage, a disk from the snapshot will be looked up which
> is not in the VM configuration. As a result, the vmdisk variable will
> be equal to NULL. Dereferencing a null pointer at the time of calling
> virStorageSourceIsSameLocation(vmdisk->src, disk->src)
> will result in SIGSEGV.
Crashing is obviously not okay ...
> Also, the disk variable can be equal to NULL, and this
> requires checking that disk != NULL before calling the
> virStorageSourceIsSameLocation function to avoid SIGSEGV.
... but going ahead with the snapshot deletion isn't always okay either.
The disk isn't referenced by the VM, so its state can't be merged,
while the state would be merged for any other disk.
When reverting back to a previous snapshot that still references the
older state of the disk which was removed from the VM, the guest would
see merged state for the disks that were present at deletion time, but
only partial state for the disk that was later removed.
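For illustration only, the kind of guard being debated could look
roughly like this in qemuSnapshotDeleteValidate() (the variable names
and the exact error are assumptions, and per the above it is still an
open question what the right behaviour for the missing disk should be):

  /* sketch: refuse to crash, report the inconsistency instead of
   * silently skipping the disk */
  if (!vmdisk || !disk) {
      virReportError(VIR_ERR_OPERATION_INVALID,
                     _("disk '%1$s' referenced by snapshot is missing from the domain definition"),
                     snapdisk->name);
      return -1;
  }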
[libvirt PATCH] qemu_snapshot: allow reverting to external disk only snapshot
by Pavel Hrdina
When a snapshot is created with the disk-only flag it is always an
external snapshot without memory state. Historically, when there was
no support for reverting external snapshots, this produced an error
message:
error: Failed to revert snapshot s1
error: internal error: Invalid target domain state 'disk-snapshot'. Refusing snapshot reversion
Now we can simply treat this as reverting to an offline snapshot, as
any possible damage to the file system has already been done at the
point of snapshot creation.
Resolves: https://issues.redhat.com/browse/RHEL-21549
Signed-off-by: Pavel Hrdina <phrdina(a)redhat.com>
---
src/qemu/qemu_snapshot.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/src/qemu/qemu_snapshot.c b/src/qemu/qemu_snapshot.c
index 0cac0c4146..7964f70553 100644
--- a/src/qemu/qemu_snapshot.c
+++ b/src/qemu/qemu_snapshot.c
@@ -2606,6 +2606,7 @@ qemuSnapshotRevert(virDomainObj *vm,
case VIR_DOMAIN_SNAPSHOT_SHUTDOWN:
case VIR_DOMAIN_SNAPSHOT_SHUTOFF:
case VIR_DOMAIN_SNAPSHOT_CRASHED:
+ case VIR_DOMAIN_SNAPSHOT_DISK_SNAPSHOT:
ret = qemuSnapshotRevertInactive(vm, snapshot, snap,
driver, cfg,
&inactiveConfig,
@@ -2617,8 +2618,6 @@ qemuSnapshotRevert(virDomainObj *vm,
_("qemu doesn't support reversion of snapshot taken in PMSUSPENDED state"));
goto endjob;
- case VIR_DOMAIN_SNAPSHOT_DISK_SNAPSHOT:
- /* Rejected earlier as an external snapshot */
case VIR_DOMAIN_SNAPSHOT_NOSTATE:
case VIR_DOMAIN_SNAPSHOT_BLOCKED:
case VIR_DOMAIN_SNAPSHOT_LAST:
--
2.43.0
[PATCH 0/3] conf,qemu: add AIA support for RISC-V 'virt'
by Daniel Henrique Barboza
Hi,
This series adds official support for RISC-V AIA (Advanced Interrupt
Architecture). AIA has been supported by the 'virt' RISC-V board, as
a machine property, since QEMU 7.0.
Daniel Henrique Barboza (3):
qemu: add capability for RISC-V AIA feature
conf,qemu: implement RISC-V 'aia' virt domain feature
qemu: add RISC-V 'aia' command line
docs/formatdomain.rst | 8 ++++
src/conf/domain_conf.c | 39 +++++++++++++++++++
src/conf/domain_conf.h | 11 ++++++
src/conf/schemas/domaincommon.rng | 15 +++++++
src/libvirt_private.syms | 2 +
src/qemu/qemu_capabilities.c | 2 +
src/qemu/qemu_capabilities.h | 1 +
src/qemu/qemu_command.c | 5 +++
src/qemu/qemu_validate.c | 15 +++++++
.../caps_8.0.0_riscv64.xml | 1 +
.../caps_9.1.0_riscv64.xml | 1 +
...cv64-virt-features-aia.riscv64-latest.args | 31 +++++++++++++++
...scv64-virt-features-aia.riscv64-latest.xml | 1 +
.../riscv64-virt-features-aia.xml | 27 +++++++++++++
tests/qemuxmlconftest.c | 2 +
15 files changed, 161 insertions(+)
create mode 100644 tests/qemuxmlconfdata/riscv64-virt-features-aia.riscv64-latest.args
create mode 120000 tests/qemuxmlconfdata/riscv64-virt-features-aia.riscv64-latest.xml
create mode 100644 tests/qemuxmlconfdata/riscv64-virt-features-aia.xml
--
2.45.2
[PATCH 00/12] Introduce SEV-SNP support
by Michal Privoznik
SEV-SNP support just landed in QEMU. Here is the first round of patches
to incorporate support into libvirt.
TODOs (aka problems of future me):
- Teach tools/virt-qemu-sev-validate how to deal with SEV-SNP
- Try to find a SEV-SNP machine and test these patches in the real world
- Write a kbase article on attestation with SEV-SNP
Michal Prívozník (12):
qemu_monitor_json: Report error in error paths in SEV related code
conf: Move some members of virDomainSEVDef into virDomainSEVCommonDef
conf: Separate SEV formatting into a function
Drop needless typecast to virDomainLaunchSecurity
src: Convert some _virDomainSecDef::sectype checks to switch()
qemu_monitor: Allow querying SEV-SNP state in 'query-sev'
qemu: Report snp-policy in virDomainGetLaunchSecurityInfo()
qemu_capabilities: Introduce QEMU_CAPS_SEV_SNP_GUEST
conf: Introduce SEV-SNP support
qemu: Build cmd line for SEV-SNP
qemu: Allow setting launch security for SEV-SNP
qemu_firmware: Pick the right firmware for SEV-SNP guests
docs/formatdomain.rst | 108 ++++++++++++
include/libvirt/libvirt-domain.h | 10 ++
src/conf/domain_conf.c | 156 ++++++++++++++----
src/conf/domain_conf.h | 28 +++-
src/conf/domain_validate.c | 44 +++++
src/conf/schemas/domaincommon.rng | 73 ++++++--
src/conf/virconftypes.h | 4 +
src/qemu/qemu_capabilities.c | 4 +
src/qemu/qemu_capabilities.h | 3 +
src/qemu/qemu_cgroup.c | 19 ++-
src/qemu/qemu_command.c | 56 ++++++-
src/qemu/qemu_driver.c | 60 +++++--
src/qemu/qemu_firmware.c | 20 ++-
src/qemu/qemu_monitor.c | 7 +-
src/qemu/qemu_monitor.h | 41 ++++-
src/qemu/qemu_monitor_json.c | 67 ++++++--
src/qemu/qemu_monitor_json.h | 8 +-
src/qemu/qemu_namespace.c | 3 +-
src/qemu/qemu_process.c | 34 ++--
src/qemu/qemu_validate.c | 13 +-
src/security/security_dac.c | 34 +++-
.../caps_9.1.0_x86_64.xml | 1 +
.../firmware/60-edk2-ovmf-x64-amdsev.json | 1 +
tests/qemumonitorjsontest.c | 65 +++++++-
...launch-security-sev-snp.x86_64-latest.args | 35 ++++
.../launch-security-sev-snp.x86_64-latest.xml | 1 +
.../launch-security-sev-snp.xml | 47 ++++++
tests/qemuxmlconftest.c | 2 +
28 files changed, 817 insertions(+), 127 deletions(-)
create mode 100644 tests/qemuxmlconfdata/launch-security-sev-snp.x86_64-latest.args
create mode 120000 tests/qemuxmlconfdata/launch-security-sev-snp.x86_64-latest.xml
create mode 100644 tests/qemuxmlconfdata/launch-security-sev-snp.xml
--
2.44.2
[PATCH 00/20] qemu: support mapped-ram+directio+multifd
by Jim Fehlig
This series is essentially V1 of a prior RFC [1] to support QEMU's
mapped-ram stream format [2] and migration capability. Along with
supporting mapped-ram, it implements a design approach we discussed
for supporting parallel save/restore [3]. In summary, the approach is:
1. Add mapped-ram migration capability
2. Steal an element from save header 'unused' for a 'features' variable
and bump save version to 3.
3. Add /etc/libvirt/qemu.conf knob for the save format version,
defaulting to latest v3
4. Use v3 (aka mapped-ram) by default
5. Use mapped-ram with BYPASS_CACHE for v3, old approach for v2
6. include: Define constants for parallel save/restore
7. qemu: Add support for parallel save. Implies mapped-ram, reject if v2
8. qemu: Add support for parallel restore. Implies mapped-ram.
Reject if v2
9. tools: add parallel parameter to virsh save command
10. tools: add parallel parameter to virsh restore command
With this series, saving and restoring using mapped-ram is enabled by
default if the underlying QEMU advertises the mapped-ram migration
capability. It can be disabled by changing the 'save_image_version'
setting in qemu.conf.
To use mapped-ram with QEMU:
- The 'mapped-ram' migration capability must be set to true
- The 'multifd' migration capability must be set to true and
the 'multifd-channels' migration parameter must be set to a
value >= 1
- QEMU must be provided an fdset containing the migration fd(s)
- The 'migrate' qmp command is invoked with a URI referencing the fdset
and an offset at which to start reading or writing the data stream, e.g.
{"execute":"migrate",
"arguments":{"detach":true,"resume":false,
"uri":"file:/dev/fdset/0,offset=0x11921"}}
The mapped-ram stream, in conjunction with direct IO and multifd, can
significantly improve the time required to save VM memory state. The
following tables compare mapped-ram with the existing, sequential save
stream. In all cases, the save and restore operations are to/from a
block device comprised of two NVMe disks in RAID0 configuration with
xfs (~8600MiB/s). The values in the 'save time' and 'restore time'
columns were scraped from the 'real' time reported by time(1). The
'Size' and 'Blocks' columns were provided by the corresponding
outputs of stat(1).
VM: 32G RAM, 1 vcpu, idle (shortly after boot)
                       |  save   | restore |              |
                       |  time   |  time   |     Size     | Blocks
-----------------------+---------+---------+--------------+--------
legacy                 | 6.193s  | 4.399s  |    985744812 | 1925288
-----------------------+---------+---------+--------------+--------
mapped-ram             | 5.109s  | 1.176s  |  34368554354 | 1774472
-----------------------+---------+---------+--------------+--------
legacy + direct IO     | 5.725s  | 4.512s  |    985765251 | 1925328
-----------------------+---------+---------+--------------+--------
mapped-ram + direct IO | 4.627s  | 1.490s  |  34368554354 | 1774304
-----------------------+---------+---------+--------------+--------
mapped-ram + direct IO |         |         |              |
 + multifd-channels=8  | 4.421s  | 0.845s  |  34368554318 | 1774312
-------------------------------------------------------------------
VM: 32G RAM, 30G dirty, 1 vcpu in tight loop dirtying memory
                       |  save   | restore |              |
                       |  time   |  time   |     Size     |  Blocks
-----------------------+---------+---------+--------------+---------
legacy                 | 25.800s | 14.332s |  33154309983 | 64754512
-----------------------+---------+---------+--------------+---------
mapped-ram             | 18.742s | 15.027s |  34368559228 | 64617160
-----------------------+---------+---------+--------------+---------
legacy + direct IO     | 13.115s | 18.050s |  33154310496 | 64754520
-----------------------+---------+---------+--------------+---------
mapped-ram + direct IO | 13.623s | 15.959s |  34368557392 | 64662040
-----------------------+---------+---------+--------------+---------
mapped-ram + direct IO |         |         |              |
 + multifd-channels=8  | 6.994s  | 6.470s  |  34368554980 | 64665776
--------------------------------------------------------------------
As can be seen from the tables, one caveat of mapped-ram is that the
logical file size of a saved image is basically equivalent to the VM
memory size. Note, however, that mapped-ram typically uses fewer blocks
on disk.
Support for mapped-ram+direct-io only recently landed in upstream QEMU
and will first appear in the 9.1 release, which may complicate merging
support in libvirt. Specifically, I'm not sure how to detect if the
combination is supported by QEMU. Suggestions welcomed.
Similar to the RFC, V1 ignores compression. libvirt currently supports
compression by connecting the output of QEMU's save stream to the specified
compression program via a pipe. This approach is incompatible with mapped-ram
since the fd provided to QEMU must be seekable. In general, we can consider
mapped-ram and compression incompatible and document that they cannot be used
together.
[1] https://lists.libvirt.org/archives/list/devel@lists.libvirt.org/message/E...
[2] https://gitlab.com/qemu-project/qemu/-/blob/master/docs/devel/migration/m...
[3] https://lists.libvirt.org/archives/list/devel@lists.libvirt.org/message/K...
Claudio Fontana (2):
include: Define constants for parallel save/restore
tools: add parallel parameter to virsh restore command
Jim Fehlig (17):
lib: virDomainSaveParams: Ensure absolute save path
qemu_fd: Add function to retrieve fdset ID
qemu: Add function to check capability in migration params
qemu: Add function to get bool value from migration params
qemu: Add mapped-ram migration capability
qemu: Add function to get migration params for save
qemu: QEMU_SAVE_VERSION: Bump to version 3
qemu: conf: Add setting for save image version
qemu: Add helper function for creating save image fd
qemu: Add support for mapped-ram on save
qemu: Decompose qemuSaveImageOpen
qemu: Move creation of qemuProcessIncomingDef struct
qemu: Apply migration parameters in qemuMigrationDstRun
qemu: Add support for mapped-ram on restore
qemu: Support O_DIRECT with mapped-ram on save
qemu: Support O_DIRECT with mapped-ram on restore
qemu: Add support for parallel save and restore
Li Zhang (1):
tools: add parallel parameter to virsh save command
docs/manpages/virsh.rst | 9 +-
include/libvirt/libvirt-domain.h | 13 ++
src/libvirt-domain.c | 52 +++++--
src/qemu/libvirtd_qemu.aug | 1 +
src/qemu/qemu.conf.in | 6 +
src/qemu/qemu_conf.c | 16 +++
src/qemu/qemu_conf.h | 5 +
src/qemu/qemu_driver.c | 104 +++++++++-----
src/qemu/qemu_fd.c | 18 +++
src/qemu/qemu_fd.h | 3 +
src/qemu/qemu_migration.c | 192 +++++++++++++++++--------
src/qemu/qemu_migration.h | 9 +-
src/qemu/qemu_migration_params.c | 86 ++++++++++++
src/qemu/qemu_migration_params.h | 17 +++
src/qemu/qemu_monitor.c | 39 ++++++
src/qemu/qemu_monitor.h | 5 +
src/qemu/qemu_process.c | 120 +++++++++++-----
src/qemu/qemu_process.h | 19 ++-
src/qemu/qemu_saveimage.c | 216 ++++++++++++++++++++---------
src/qemu/qemu_saveimage.h | 35 +++--
src/qemu/qemu_snapshot.c | 26 ++--
src/qemu/test_libvirtd_qemu.aug.in | 1 +
tools/virsh-domain.c | 79 +++++++++--
23 files changed, 827 insertions(+), 244 deletions(-)
--
2.35.3
[PATCH v3 0/5] ch: handle events from cloud-hypervisor
by Purna Pavan Chandra Aekkaladevi
changes from v2->v3:
* Remove patch 'utils: Implement virFileIsNamedPipe' as it is no longer needed.
* Remove the eventmonitorpath only if it exists
* Added domain name as a prefix to logs from ch_events.c. This will make
debugging easier.
* Simplified event parsing logic by reserving a byte for null char.
changes from v1->v2:
* Rebase on latest master
* Use /* */ for comments
* Remove fifo file if already exists
* Address other comments from Praveen Paladugu
cloud-hypervisor raises various events, including VM lifecycle operations
such as boot, shutdown, pause, resume, etc. Libvirt will now read these
events and take the necessary actions, such as correctly updating the
domain state. A FIFO file is passed to the `--event-monitor` option of
cloud-hypervisor. Libvirt creates a new thread that acts as the reader
of the FIFO file and continuously monitors it for new events. Currently,
shutdown events are handled by updating the domain state appropriately.
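For readers unfamiliar with the mechanism, here is a minimal standalone
sketch of such a reader loop (names, paths and buffer handling are
illustrative only, not the actual ch_events.c code):

  #include <fcntl.h>
  #include <stdio.h>
  #include <unistd.h>

  static void
  ch_event_reader(const char *fifo_path, const char *dom_name)
  {
      char buf[1024];
      ssize_t got;
      int fd;

      /* opening a FIFO read-only blocks until the writer
       * (cloud-hypervisor) opens the other end */
      if ((fd = open(fifo_path, O_RDONLY)) < 0)
          return;

      /* keep one byte spare so the chunk can always be NUL-terminated */
      while ((got = read(fd, buf, sizeof(buf) - 1)) > 0) {
          buf[got] = '\0';
          fprintf(stderr, "%s: event chunk: %s\n", dom_name, buf);
          /* parse the JSON event(s) here and update the domain state */
      }

      close(fd);
  }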
Purna Pavan Chandra Aekkaladevi (5):
ch: pass --event-monitor option to cloud-hypervisor
ch: start a new thread for handling ch events
ch: events: Read and parse cloud-hypervisor events
ch: events: facilitate lifecycle events handling
NEWS: Mention event handling support in ch driver
NEWS.rst | 7 +
po/POTFILES | 1 +
src/ch/ch_events.c | 329 ++++++++++++++++++++++++++++++++++++++++++++
src/ch/ch_events.h | 54 ++++++++
src/ch/ch_monitor.c | 52 ++++++-
src/ch/ch_monitor.h | 11 ++
src/ch/meson.build | 2 +
7 files changed, 449 insertions(+), 7 deletions(-)
create mode 100644 src/ch/ch_events.c
create mode 100644 src/ch/ch_events.h
--
2.34.1
[PATCH 0/4] Add news for recent features and CVEs
by Han Han
Han Han (4):
NEWS: qemu: Add support for hyperv enlightenments features
NEWS: cpu_map: Add the EPYC-Genoa cpu mode
NEWS: Add the news for CVE-2024-2494
NEWS: Add the news for CVE-2024-4418
NEWS.rst | 39 +++++++++++++++++++++++++++++++++++++++
1 file changed, 39 insertions(+)
--
2.47.0
[PATCH] qemuDomainDiskChangeSupported: Add missing iothreads check
by Adam Julis
The GSList of iothreads is not allowed to change while the
virtual machine is running.
Resolves: https://issues.redhat.com/browse/RHEL-23607
Signed-off-by: Adam Julis <ajulis(a)redhat.com>
---
While the qemuDomainDiskChangeSupported() design primarily uses
its macros (CHECK_EQ and CHECK_STREQ_NULLABLE), the logic for comparing
two GSLists of iothreads could perhaps be extracted into a separate
function (e.g. IothreadsGslistCompare(GSList *first, GSList *second)).
I am absolutely not sure about this idea, so feel free to comment.
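For what it's worth, a rough sketch of what such a helper could look
like (the name is made up; the field accesses mirror the diff below):

  static bool
  iothreadsGSListEqual(GSList *first, GSList *second)
  {
      while (first && second) {
          virDomainDiskIothreadDef *a = first->data;
          virDomainDiskIothreadDef *b = second->data;
          ssize_t i;

          if (a->id != b->id || a->nqueues != b->nqueues)
              return false;

          for (i = 0; i < a->nqueues; i++) {
              if (a->queues[i] != b->queues[i])
                  return false;
          }

          first = first->next;
          second = second->next;
      }

      /* both lists must run out at the same time */
      return first == second;
  }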
src/qemu/qemu_domain.c | 53 ++++++++++++++++++++++++++++++++++++++++++
1 file changed, 53 insertions(+)
diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index 298f4bfb9e..2b5222c685 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -8505,6 +8505,59 @@ qemuDomainDiskChangeSupported(virDomainDiskDef *disk,
CHECK_EQ(discard, "discard", true);
CHECK_EQ(iothread, "iothread", true);
+ /* compare list of iothreads, no change allowed */
+ if (orig_disk->iothreads != disk->iothreads) {
+ GSList *old;
+ GSList *new = disk->iothreads;
+ bool print_err = true;
+
+ for (old = orig_disk->iothreads; old; old = old->next) {
+ virDomainDiskIothreadDef *orig = old->data;
+ virDomainDiskIothreadDef *update;
+ print_err = false;
+
+ if (new == NULL) {
+ print_err = true;
+ break;
+ }
+
+ update = new->data;
+
+ if (orig->id != update->id) {
+ print_err = true;
+ break;
+ }
+
+ if (orig->nqueues != update->nqueues) {
+ print_err = true;
+ break;
+ }
+
+ if (orig->nqueues != 0) {
+ ssize_t i;
+
+ for (i = 0; i < orig->nqueues; i++) {
+ if (orig->queues[i] != update->queues[i])
+ print_err = true;
+ }
+ if (print_err)
+ break;
+ }
+
+ new = new->next;
+ if (new)
+ print_err = true;
+ }
+
+ if (print_err) {
+ virReportError(VIR_ERR_OPERATION_UNSUPPORTED,
+ _("cannot modify field '%1$s' (or it's parts) of the disk"),
+ "iothreads");
+ return false;
+ }
+ }
+
+
CHECK_STREQ_NULLABLE(domain_name,
"backenddomain");
--
2.45.2
[PATCH v2] ch: Enable callbacks for ch domain events
by Praveen K Paladugu
From: Praveen K Paladugu <prapal(a)linux.microsoft.com>
Enable callbacks for define, undefine, started, booted, stopped,
destroyed events of ch guests.
Signed-off-by: Praveen K Paladugu <praveenkpaladugu(a)gmail.com>
---
src/ch/ch_conf.h | 4 +++
src/ch/ch_driver.c | 82 ++++++++++++++++++++++++++++++++++++++++++++--
2 files changed, 84 insertions(+), 2 deletions(-)
diff --git a/src/ch/ch_conf.h b/src/ch/ch_conf.h
index a77cad7a2a..97c6c24aa5 100644
--- a/src/ch/ch_conf.h
+++ b/src/ch/ch_conf.h
@@ -24,6 +24,7 @@
#include "virthread.h"
#include "ch_capabilities.h"
#include "virebtables.h"
+#include "object_event.h"
#define CH_DRIVER_NAME "CH"
#define CH_CMD "cloud-hypervisor"
@@ -75,6 +76,9 @@ struct _virCHDriver
* then lockless thereafter */
virCHDriverConfig *config;
+ /* Immutable pointer, self-locking APIs */
+ virObjectEventState *domainEventState;
+
/* pid file FD, ensures two copies of the driver can't use the same root */
int lockFD;
diff --git a/src/ch/ch_driver.c b/src/ch/ch_driver.c
index dab025edc1..d18f266387 100644
--- a/src/ch/ch_driver.c
+++ b/src/ch/ch_driver.c
@@ -28,6 +28,7 @@
#include "ch_monitor.h"
#include "ch_process.h"
#include "domain_cgroup.h"
+#include "domain_event.h"
#include "datatypes.h"
#include "driver.h"
#include "viraccessapicheck.h"
@@ -263,6 +264,7 @@ chDomainCreateWithFlags(virDomainPtr dom, unsigned int flags)
virCHDriver *driver = dom->conn->privateData;
virDomainObj *vm;
virCHDomainObjPrivate *priv;
+ virObjectEvent *event;
g_autofree char *managed_save_path = NULL;
int ret = -1;
@@ -304,6 +306,14 @@ chDomainCreateWithFlags(virDomainPtr dom, unsigned int flags)
ret = virCHProcessStart(driver, vm, VIR_DOMAIN_RUNNING_BOOTED);
}
+ if (ret == 0) {
+ event = virDomainEventLifecycleNewFromObj(vm,
+ VIR_DOMAIN_EVENT_STARTED,
+ VIR_DOMAIN_EVENT_STARTED_BOOTED);
+ if (event)
+ virObjectEventStateQueue(driver->domainEventState, event);
+ }
+
endjob:
virDomainObjEndJob(vm);
@@ -323,8 +333,10 @@ chDomainDefineXMLFlags(virConnectPtr conn, const char *xml, unsigned int flags)
{
virCHDriver *driver = conn->privateData;
g_autoptr(virDomainDef) vmdef = NULL;
+ g_autoptr(virDomainDef) oldDef = NULL;
virDomainObj *vm = NULL;
virDomainPtr dom = NULL;
+ virObjectEvent *event = NULL;
g_autofree char *managed_save_path = NULL;
unsigned int parse_flags = VIR_DOMAIN_DEF_PARSE_INACTIVE;
@@ -345,7 +357,7 @@ chDomainDefineXMLFlags(virConnectPtr conn, const char *xml, unsigned int flags)
if (!(vm = virDomainObjListAdd(driver->domains, &vmdef,
driver->xmlopt,
- 0, NULL)))
+ 0, &oldDef)))
goto cleanup;
/* cleanup if there's any stale managedsave dir */
@@ -358,11 +370,17 @@ chDomainDefineXMLFlags(virConnectPtr conn, const char *xml, unsigned int flags)
}
vm->persistent = 1;
-
+ event = virDomainEventLifecycleNewFromObj(vm,
+ VIR_DOMAIN_EVENT_DEFINED,
+ !oldDef ?
+ VIR_DOMAIN_EVENT_DEFINED_ADDED :
+ VIR_DOMAIN_EVENT_DEFINED_UPDATED);
dom = virGetDomain(conn, vm->def->name, vm->def->uuid, vm->def->id);
cleanup:
virDomainObjEndAPI(&vm);
+ virObjectEventStateQueue(driver->domainEventState, event);
+
return dom;
}
@@ -378,6 +396,7 @@ chDomainUndefineFlags(virDomainPtr dom,
{
virCHDriver *driver = dom->conn->privateData;
virDomainObj *vm;
+ virObjectEvent *event = NULL;
int ret = -1;
virCheckFlags(0, -1);
@@ -393,6 +412,9 @@ chDomainUndefineFlags(virDomainPtr dom,
"%s", _("Cannot undefine transient domain"));
goto cleanup;
}
+ event = virDomainEventLifecycleNewFromObj(vm,
+ VIR_DOMAIN_EVENT_UNDEFINED,
+ VIR_DOMAIN_EVENT_UNDEFINED_REMOVED);
vm->persistent = 0;
if (!virDomainObjIsActive(vm)) {
@@ -403,6 +425,8 @@ chDomainUndefineFlags(virDomainPtr dom,
cleanup:
virDomainObjEndAPI(&vm);
+ virObjectEventStateQueue(driver->domainEventState, event);
+
return ret;
}
@@ -643,6 +667,7 @@ chDomainDestroyFlags(virDomainPtr dom, unsigned int flags)
{
virCHDriver *driver = dom->conn->privateData;
virDomainObj *vm;
+ virObjectEvent *event = NULL;
int ret = -1;
virCheckFlags(0, -1);
@@ -662,6 +687,9 @@ chDomainDestroyFlags(virDomainPtr dom, unsigned int flags)
if (virCHProcessStop(driver, vm, VIR_DOMAIN_SHUTOFF_DESTROYED) < 0)
goto endjob;
+ event = virDomainEventLifecycleNewFromObj(vm,
+ VIR_DOMAIN_EVENT_STOPPED,
+ VIR_DOMAIN_EVENT_STOPPED_DESTROYED);
virCHDomainRemoveInactive(driver, vm);
ret = 0;
@@ -670,6 +698,8 @@ chDomainDestroyFlags(virDomainPtr dom, unsigned int flags)
cleanup:
virDomainObjEndAPI(&vm);
+ virObjectEventStateQueue(driver->domainEventState, event);
+
return ret;
}
@@ -1365,6 +1395,7 @@ static int chStateCleanup(void)
virObjectUnref(ch_driver->xmlopt);
virObjectUnref(ch_driver->caps);
virObjectUnref(ch_driver->domains);
+ virObjectUnref(ch_driver->domainEventState);
virMutexDestroy(&ch_driver->lock);
g_clear_pointer(&ch_driver, g_free);
@@ -1414,6 +1445,9 @@ chStateInitialize(bool privileged,
if (!(ch_driver->config = virCHDriverConfigNew(privileged)))
goto cleanup;
+ if (!(ch_driver->domainEventState = virObjectEventStateNew()))
+ goto cleanup;
+
if ((rv = chExtractVersion(ch_driver)) < 0) {
if (rv == -2)
ret = VIR_DRV_STATE_INIT_SKIPPED;
@@ -2205,6 +2239,48 @@ chDomainSetNumaParameters(virDomainPtr dom,
return ret;
}
+static int
+chConnectDomainEventRegisterAny(virConnectPtr conn,
+ virDomainPtr dom,
+ int eventID,
+ virConnectDomainEventGenericCallback callback,
+ void *opaque,
+ virFreeCallback freecb)
+{
+ virCHDriver *driver = conn->privateData;
+ int ret = -1;
+
+ if (virConnectDomainEventRegisterAnyEnsureACL(conn) < 0)
+ return -1;
+
+ if (virDomainEventStateRegisterID(conn,
+ driver->domainEventState,
+ dom, eventID,
+ callback, opaque, freecb, &ret) < 0)
+ ret = -1;
+
+ return ret;
+}
+
+
+static int
+chConnectDomainEventDeregisterAny(virConnectPtr conn,
+ int callbackID)
+{
+ virCHDriver *driver = conn->privateData;
+
+ if (virConnectDomainEventDeregisterAnyEnsureACL(conn) < 0)
+ return -1;
+
+ if (virObjectEventStateDeregisterID(conn,
+ driver->domainEventState,
+ callbackID, true) < 0)
+ return -1;
+
+ return 0;
+}
+
+
/* Function Tables */
static virHypervisorDriver chHypervisorDriver = {
.name = "CH",
@@ -2262,6 +2338,8 @@ static virHypervisorDriver chHypervisorDriver = {
.domainHasManagedSaveImage = chDomainHasManagedSaveImage, /* 10.2.0 */
.domainRestore = chDomainRestore, /* 10.2.0 */
.domainRestoreFlags = chDomainRestoreFlags, /* 10.2.0 */
+ .connectDomainEventRegisterAny = chConnectDomainEventRegisterAny, /* 10.8.0 */
+ .connectDomainEventDeregisterAny = chConnectDomainEventDeregisterAny, /* 10.8.0 */
};
static virConnectDriver chConnectDriver = {
--
2.44.0