[libvirt RFC 00/24] basic snapshot delete implementation
by Pavel Hrdina
I'm sending this as an RFC even though it is mostly complete and working;
it still needs some documentation and most likely unit tests.
This implements the virDomainSnapshotDelete API for external snapshots.
The support doesn't cover the VIR_DOMAIN_SNAPSHOT_DELETE_CHILDREN and
VIR_DOMAIN_SNAPSHOT_DELETE_CHILDREN_ONLY flags, as they would add more
complexity and IMHO these flags should not exist at all.
The last patch is only here to show how we could support deleting an
external snapshot when all of its children are internal; without it the
user would have to call delete with the children-only flag first and then
delete the external snapshot itself with a second call.
There are some limitations that will need the documentation mentioned
above. If the parent snapshot is internal, the external snapshot cannot be
deleted; the workaround is to delete any internal parent snapshots first,
after which the external snapshot can be deleted.
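A rough sketch of that workaround with virsh (the domain and snapshot
names are made up for illustration; 's1' is an internal snapshot that is
the parent of the external snapshot 's2'):

$ virsh snapshot-delete mydomain s1   # delete the internal parent first
$ virsh snapshot-delete mydomain s2   # now the external snapshot can be deleted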
Pavel Hrdina (24):
qemu_block: extract block commit code to separate function
qemu_block: move qemuDomainBlockPivot out of qemu_driver
qemu_block: extract qemuBlockCommit impl to separate function
qemu_block: add sync option to qemuBlockCommitImpl
qemu_monitor: introduce qemuMonitorJobFinalize
qemu_monitor: allow setting autofinalize for block commit
qemu_block: introduce qemuBlockFinalize
qemu_blockjob: process QEMU_MONITOR_JOB_STATUS_PENDING signal
qemu_snapshot: refactor qemuSnapshotDelete
qemu_snapshot: extract single snapshot deletion to separate function
qemu_snapshot: extract children snapshot deletion to separate function
qemu_snapshot: rework snapshot children deletion
qemu_snapshot: move snapshot discard out of qemu_domain.c
qemu_snapshot: introduce qemuSnapshotDiscardMetadata
qemu_snapshot: call qemuSnapshotDiscardMetadata from
qemuSnapshotDiscard
qemu_snapshot: pass update_parent into qemuSnapshotDiscardMetadata
qemu_snapshot: move metadata changes to qemuSnapshotDiscardMetadata
qemu_snapshot: introduce qemuSnapshotDeleteValidate function
qemu_snapshot: refactor validation of snapshot delete
qemu_snapshot: prepare data for external snapshot deletion
qemu_snapshot: implement deletion of external snapshot
qemu_snapshot: update metadata when deleting snapshots
qemu_snapshot: when deleting snapshot invalidate parent snapshot
qemu_snapshot: allow deletion of external snapshot with internal
snapshot children
src/conf/snapshot_conf.c | 5 +
src/conf/snapshot_conf.h | 1 +
src/qemu/qemu_backup.c | 1 +
src/qemu/qemu_block.c | 356 ++++++++++++++++
src/qemu/qemu_block.h | 30 ++
src/qemu/qemu_blockjob.c | 13 +-
src/qemu/qemu_blockjob.h | 1 +
src/qemu/qemu_domain.c | 95 +----
src/qemu/qemu_domain.h | 9 -
src/qemu/qemu_driver.c | 306 +-------------
src/qemu/qemu_monitor.c | 21 +-
src/qemu/qemu_monitor.h | 8 +-
src/qemu/qemu_monitor_json.c | 26 +-
src/qemu/qemu_monitor_json.h | 8 +-
src/qemu/qemu_snapshot.c | 764 +++++++++++++++++++++++++++++++----
src/qemu/qemu_snapshot.h | 4 +
tests/qemumonitorjsontest.c | 2 +-
17 files changed, 1151 insertions(+), 499 deletions(-)
--
2.37.2
2 years
[PATCH] Fix race condition when detaching a device
by Pierre LIBEAU
QEMU replies to libvirt with "DeviceNotFound", but libvirt does not clean
up the live configuration on its side.
qemuMonitorDelDevice() returns -2 to qemuDomainDeleteDevice(), and in that
case the device removal in qemuDomainDetachDeviceLive() is never called.
Ref #359
Signed-off-by: Pierre LIBEAU <pierre.libeau(a)corp.ovh.com>
---
src/qemu/qemu_hotplug.c | 9 +++++++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/src/qemu/qemu_hotplug.c b/src/qemu/qemu_hotplug.c
index 9b508dc8f0..52a14a4476 100644
--- a/src/qemu/qemu_hotplug.c
+++ b/src/qemu/qemu_hotplug.c
@@ -93,6 +93,8 @@ qemuDomainResetDeviceRemoval(virDomainObj *vm);
*
* Returns: 0 on success,
* -1 otherwise.
+ * -2 device does not exist in qemu, but it still
+ * exists in libvirt
*/
static int
qemuDomainDeleteDevice(virDomainObj *vm,
@@ -124,7 +126,6 @@ qemuDomainDeleteDevice(virDomainObj *vm,
* domain XML is queried right after detach API the
* device would still be there. */
VIR_DEBUG("Detaching of device %s failed and no event arrived", alias);
- rc = 0;
}
}
@@ -6055,7 +6056,11 @@ qemuDomainDetachDeviceLive(virDomainObj *vm,
if (!async)
qemuDomainMarkDeviceForRemoval(vm, info);
- if (qemuDomainDeleteDevice(vm, info->alias) < 0) {
+ int rc;
+ rc = qemuDomainDeleteDevice(vm, info->alias);
+ if (rc < 0) {
+ if (rc == -2)
+ ret = qemuDomainRemoveDevice(driver, vm, &detach);
if (virDomainObjIsActive(vm))
qemuDomainRemoveAuditDevice(vm, &detach, false);
goto cleanup;
--
2.37.3
2 years, 1 month
[libvirt PATCH v2 00/16] Use nbdkit for http/ftp/ssh network drives in libvirt
by Jonathon Jongsma
After a bit of a lengthy delay, this is the second version of this patch
series. See https://bugzilla.redhat.com/show_bug.cgi?id=2016527 for more
information about the goal, but the summary is that RHEL does not want to ship
the qemu storage plugins for curl and ssh. Handling them outside of the qemu
process provides several advantages such as reduced attack surface and
stability.
A quick summary of the code:
- at startup I query to see whether nbdkit exists on the host and if
so, I query which plugins/filters are installed. These capabilities
are cached and stored in the qemu driver
- When the driver prepares the domain, we go through each disk source
and determine whether the nbdkit capabilities allow us to support
this disk via nbdkit, and if so, we allocate a qemuNbdkitProcess
object and stash it in the private data of the virStorageSource.
- The presence or absence of this qemuNbdkitProcess data then indicates
whether this disk will be served to qemu indirectly via nbdkit or
directly
- When we launch the qemuProcess, as part of the "external device
start" step, I launch an nbdkit process for each disk that is supported
by nbdkit.
- for devices which are served by an intermediate nbdkit process, I
change the qemu commandline in the following ways (a rough sketch
follows after this list):
- I no longer pass auth/cookie secrets to qemu (those are handled by
nbdkit)
- I replace the actual network URL of the remote disk source with the
path to the nbdkit unix socket
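As a rough illustration of that hand-off (the URL, plugin arguments and
socket path below are invented for the example, not the exact command line
libvirt generates), libvirtd would run something along the lines of:

$ nbdkit curl https://example.org/disk.img \
    --unix /var/lib/libvirt/qemu/domain-1-vm/nbdkit-disk0.sock

and qemu's -blockdev would then point at that unix socket as an NBD server
instead of at the https URL, so the curl handling never happens inside the
qemu process.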
Open questions
- selinux: I need some help from people more familiar with selinux to figure
out what is needed here. When selinux is enforcing, I get a failure to
launch nbdkit to serve the disks. I suspect we need a new context and policy
for /usr/sbin/nbdkit that allows it to transition to the appropriate selinux
context. The current context (on fedora) is "system_u:object_r:bin_t:s0".
When I (temporarily) change the context to something like qemu_exec_t,
I am able to start nbdkit and the domain launches.
Known shortcomings
- creating disks (in ssh) still isn't supported. I wanted to send out the
patch series anyway since it's been delayed too long already.
Changes since v1:
- split into multiple patches
- added a build option for nbdkit_moddir
- don't instantiate any secret / cookie props for disks that are being served
by nbdkit since we don't send secrets to qemu anymore
- ensure that nbdkit processes are started/stopped for the entire backing
chain
- switch to virFileCache-based capabilities for nbdkit so that we don't need
to requery every time
- switch to using pipes for communicating sensitive data to nbdkit
- use pidfile support built into virCommand rather than nbdkit's --pidfile
argument
- added significantly more tests
Jonathon Jongsma (16):
schema: allow 'ssh' as a protocol for network disks
qemu: Add qemuNbdkitCaps to qemu driver
qemu: expand nbdkit capabilities
util: Allow virFileCache data to be any GObject
qemu: implement basic virFileCache for nbdkit caps
qemu: implement persistent file cache for nbdkit caps
qemu: use file cache for nbdkit caps
qemu: Add qemuNbdkitProcess
qemu: add functions to start and stop nbdkit
tests: add ability to test various nbdkit capabilities
qemu: split qemuDomainSecretStorageSourcePrepare
qemu: use nbdkit to serve network disks if available
qemu: include nbdkit state in private xml
tests: add tests for nbdkit invocation
qemu: pass sensitive data to nbdkit via pipe
qemu: add test for authenticating a https network disk
build-aux/syntax-check.mk | 4 +-
docs/formatdomain.rst | 2 +-
meson.build | 6 +
meson_options.txt | 1 +
po/POTFILES | 1 +
src/conf/schemas/domaincommon.rng | 1 +
src/qemu/meson.build | 1 +
src/qemu/qemu_block.c | 168 ++-
src/qemu/qemu_command.c | 4 +-
src/qemu/qemu_conf.c | 22 +
src/qemu/qemu_conf.h | 6 +
src/qemu/qemu_domain.c | 176 ++-
src/qemu/qemu_domain.h | 4 +
src/qemu/qemu_driver.c | 3 +
src/qemu/qemu_extdevice.c | 84 ++
src/qemu/qemu_nbdkit.c | 1051 +++++++++++++++++
src/qemu/qemu_nbdkit.h | 90 ++
src/qemu/qemu_nbdkitpriv.h | 46 +
src/util/virfilecache.c | 15 +-
src/util/virfilecache.h | 2 +-
src/util/virutil.h | 2 +-
tests/meson.build | 1 +
.../disk-cdrom-network.args.disk0 | 7 +
.../disk-cdrom-network.args.disk1 | 9 +
.../disk-cdrom-network.args.disk1.pipe.45 | 1 +
.../disk-cdrom-network.args.disk2 | 9 +
.../disk-cdrom-network.args.disk2.pipe.47 | 1 +
.../disk-network-http.args.disk0 | 7 +
.../disk-network-http.args.disk1 | 6 +
.../disk-network-http.args.disk2 | 7 +
.../disk-network-http.args.disk2.pipe.45 | 1 +
.../disk-network-http.args.disk3 | 8 +
.../disk-network-http.args.disk3.pipe.47 | 1 +
...work-source-curl-nbdkit-backing.args.disk0 | 8 +
...rce-curl-nbdkit-backing.args.disk0.pipe.45 | 1 +
.../disk-network-source-curl.args.1.pipe.1 | 1 +
.../disk-network-source-curl.args.disk0 | 8 +
...isk-network-source-curl.args.disk0.pipe.45 | 1 +
.../disk-network-source-curl.args.disk1 | 10 +
...isk-network-source-curl.args.disk1.pipe.47 | 1 +
...isk-network-source-curl.args.disk1.pipe.49 | 1 +
.../disk-network-source-curl.args.disk2 | 8 +
...isk-network-source-curl.args.disk2.pipe.49 | 1 +
...isk-network-source-curl.args.disk2.pipe.51 | 1 +
.../disk-network-source-curl.args.disk3 | 7 +
.../disk-network-source-curl.args.disk4 | 7 +
.../disk-network-ssh.args.disk0 | 7 +
tests/qemunbdkittest.c | 271 +++++
...sk-cdrom-network-nbdkit.x86_64-latest.args | 42 +
.../disk-cdrom-network-nbdkit.xml | 1 +
...isk-network-http-nbdkit.x86_64-latest.args | 45 +
.../disk-network-http-nbdkit.xml | 1 +
...rce-curl-nbdkit-backing.x86_64-latest.args | 38 +
...isk-network-source-curl-nbdkit-backing.xml | 45 +
...work-source-curl-nbdkit.x86_64-latest.args | 50 +
.../disk-network-source-curl-nbdkit.xml | 1 +
...isk-network-source-curl.x86_64-latest.args | 54 +
.../disk-network-source-curl.xml | 74 ++
...disk-network-ssh-nbdkit.x86_64-latest.args | 36 +
.../disk-network-ssh-nbdkit.xml | 1 +
.../disk-network-ssh.x86_64-latest.args | 36 +
tests/qemuxml2argvdata/disk-network-ssh.xml | 31 +
tests/qemuxml2argvtest.c | 18 +
tests/testutilsqemu.c | 27 +
tests/testutilsqemu.h | 5 +
65 files changed, 2474 insertions(+), 111 deletions(-)
create mode 100644 src/qemu/qemu_nbdkit.c
create mode 100644 src/qemu/qemu_nbdkit.h
create mode 100644 src/qemu/qemu_nbdkitpriv.h
create mode 100644 tests/qemunbdkitdata/disk-cdrom-network.args.disk0
create mode 100644 tests/qemunbdkitdata/disk-cdrom-network.args.disk1
create mode 100644 tests/qemunbdkitdata/disk-cdrom-network.args.disk1.pipe.45
create mode 100644 tests/qemunbdkitdata/disk-cdrom-network.args.disk2
create mode 100644 tests/qemunbdkitdata/disk-cdrom-network.args.disk2.pipe.47
create mode 100644 tests/qemunbdkitdata/disk-network-http.args.disk0
create mode 100644 tests/qemunbdkitdata/disk-network-http.args.disk1
create mode 100644 tests/qemunbdkitdata/disk-network-http.args.disk2
create mode 100644 tests/qemunbdkitdata/disk-network-http.args.disk2.pipe.45
create mode 100644 tests/qemunbdkitdata/disk-network-http.args.disk3
create mode 100644 tests/qemunbdkitdata/disk-network-http.args.disk3.pipe.47
create mode 100644 tests/qemunbdkitdata/disk-network-source-curl-nbdkit-backing.args.disk0
create mode 100644 tests/qemunbdkitdata/disk-network-source-curl-nbdkit-backing.args.disk0.pipe.45
create mode 100644 tests/qemunbdkitdata/disk-network-source-curl.args.1.pipe.1
create mode 100644 tests/qemunbdkitdata/disk-network-source-curl.args.disk0
create mode 100644 tests/qemunbdkitdata/disk-network-source-curl.args.disk0.pipe.45
create mode 100644 tests/qemunbdkitdata/disk-network-source-curl.args.disk1
create mode 100644 tests/qemunbdkitdata/disk-network-source-curl.args.disk1.pipe.47
create mode 100644 tests/qemunbdkitdata/disk-network-source-curl.args.disk1.pipe.49
create mode 100644 tests/qemunbdkitdata/disk-network-source-curl.args.disk2
create mode 100644 tests/qemunbdkitdata/disk-network-source-curl.args.disk2.pipe.49
create mode 100644 tests/qemunbdkitdata/disk-network-source-curl.args.disk2.pipe.51
create mode 100644 tests/qemunbdkitdata/disk-network-source-curl.args.disk3
create mode 100644 tests/qemunbdkitdata/disk-network-source-curl.args.disk4
create mode 100644 tests/qemunbdkitdata/disk-network-ssh.args.disk0
create mode 100644 tests/qemunbdkittest.c
create mode 100644 tests/qemuxml2argvdata/disk-cdrom-network-nbdkit.x86_64-latest.args
create mode 120000 tests/qemuxml2argvdata/disk-cdrom-network-nbdkit.xml
create mode 100644 tests/qemuxml2argvdata/disk-network-http-nbdkit.x86_64-latest.args
create mode 120000 tests/qemuxml2argvdata/disk-network-http-nbdkit.xml
create mode 100644 tests/qemuxml2argvdata/disk-network-source-curl-nbdkit-backing.x86_64-latest.args
create mode 100644 tests/qemuxml2argvdata/disk-network-source-curl-nbdkit-backing.xml
create mode 100644 tests/qemuxml2argvdata/disk-network-source-curl-nbdkit.x86_64-latest.args
create mode 120000 tests/qemuxml2argvdata/disk-network-source-curl-nbdkit.xml
create mode 100644 tests/qemuxml2argvdata/disk-network-source-curl.x86_64-latest.args
create mode 100644 tests/qemuxml2argvdata/disk-network-source-curl.xml
create mode 100644 tests/qemuxml2argvdata/disk-network-ssh-nbdkit.x86_64-latest.args
create mode 120000 tests/qemuxml2argvdata/disk-network-ssh-nbdkit.xml
create mode 100644 tests/qemuxml2argvdata/disk-network-ssh.x86_64-latest.args
create mode 100644 tests/qemuxml2argvdata/disk-network-ssh.xml
--
2.37.1
2 years, 1 month
[PATCH 0/2] qemu: tpm: Improve TPM state files management
by Stefan Berger
This series of patches adds the --keep-tpm and --tpm flags to virsh for
keeping and removing the TPM state directory structure when a VM is
undefined. It also fixes the removal of state when a VM is migrated so
that the state files are removed on the source upon successful
migration and deleted on the destination after migration failure.
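A quick sketch of the intended usage, assuming the new flags are exposed
on 'virsh undefine' as described above (the domain name 'myguest' is just
a placeholder):

$ virsh undefine myguest --tpm       # remove the TPM state with the domain
$ virsh undefine myguest --keep-tpm  # undefine but keep the TPM state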
Regards,
Stefan
Stefan Berger (2):
qemu: Add UNDEFINE_TPM and UNDEFINE_KEEP_TPM flags
qemu: tpm: Remove TPM state after successful migration
include/libvirt/libvirt-domain.h | 6 ++++++
src/qemu/qemu_domain.c | 12 +++++++-----
src/qemu/qemu_domain.h | 3 ++-
src/qemu/qemu_driver.c | 31 ++++++++++++++++++++-----------
src/qemu/qemu_extdevice.c | 5 +++--
src/qemu/qemu_extdevice.h | 3 ++-
src/qemu/qemu_migration.c | 22 +++++++++++++++-------
src/qemu/qemu_process.c | 4 ++--
src/qemu/qemu_snapshot.c | 4 ++--
src/qemu/qemu_tpm.c | 14 ++++++++++----
src/qemu/qemu_tpm.h | 15 ++++++++++++++-
tools/virsh-domain.c | 15 +++++++++++++++
12 files changed, 98 insertions(+), 36 deletions(-)
--
2.37.1
2 years, 1 month
[PATCH 0/7] Fix two bugs in XML schema
by Peter Krempa
Patches 1/7 and 6/7 outline the individual bugs.
Peter Krempa (7):
schema: nodedev: Fix schema attribute value for the 'vport_ops'
capability
nodedevschematest: Add example file for a HBA with 'vport_ops'
capability
qemudomainsnapshotxml2xmltest: Allow regenerating into non-existing
output file
schemas: Extract overrides for the domain element from 'domain.rng'
schemas: domaincommon: Extract contents of the 'domain' element
definition
schema: Add schema for '<inactiveDomain>' element used in the snapshot
definition
qemudomainsnapshotxml2xmltest: Add test case for a snapshot with
'inactiveDomain' element
docs/formatnode.rst | 2 +-
src/conf/schemas/domain.rng | 12 +-
src/conf/schemas/domaincommon.rng | 150 ++++++++++--------
src/conf/schemas/domainoverrides.rng | 16 ++
src/conf/schemas/domainsnapshot.rng | 5 +
src/conf/schemas/inactiveDomain.rng | 10 ++
src/conf/schemas/nodedev.rng | 2 +-
tests/nodedevschemadata/hba_vport_ops.xml | 18 +++
tests/nodedevxml2xmlout/hba_vport_ops.xml | 18 +++
tests/nodedevxml2xmltest.c | 1 +
.../memory-snapshot-inactivedomain.xml | 148 +++++++++++++++++
tests/qemudomainsnapshotxml2xmltest.c | 10 +-
12 files changed, 303 insertions(+), 89 deletions(-)
create mode 100644 src/conf/schemas/domainoverrides.rng
create mode 100644 src/conf/schemas/inactiveDomain.rng
create mode 100644 tests/nodedevschemadata/hba_vport_ops.xml
create mode 100644 tests/nodedevxml2xmlout/hba_vport_ops.xml
create mode 100644 tests/qemudomainsnapshotxml2xmlout/memory-snapshot-inactivedomain.xml
--
2.37.1
2 years, 1 month
[PATCH] libvirt-guests: Fix dependency ordering in service file
by Martin Kletzander
After some debugging and discussion with systemd team it turns out we
are misusing the ordering in libvirt-guests.service. That happened
because we want to support both monolithic and modular daemon setups and
on top of that we also want to support socket activation and services
without socket activation. Unfortunately this is impossible to express
in the unit file because of how transactions are handled in systemd when
dependencies are resolved and multiple actions (jobs) are queued. For an
explanation from Michal Sekletar, see comment #7 in the BZ this patch is
fixing:
https://bugzilla.redhat.com/show_bug.cgi?id=1964855#c7
In order to support all the scenarios, this patch also amends the
manpages so that users who change the default can also read how to
correct the dependency ordering in the libvirt-guests unit file.
Ideally we would also keep the existing configuration during upgrade,
but due to our huge support matrix this seems hardly feasible as it
could introduce even more problems.
Signed-off-by: Martin Kletzander <mkletzan(a)redhat.com>
---
docs/manpages/libvirtd.rst | 14 ++++++++++++++
docs/manpages/virtlxcd.rst | 14 ++++++++++++++
docs/manpages/virtqemud.rst | 14 ++++++++++++++
docs/manpages/virtvboxd.rst | 14 ++++++++++++++
docs/manpages/virtvzd.rst | 14 ++++++++++++++
docs/manpages/virtxend.rst | 14 ++++++++++++++
tools/libvirt-guests.service.in | 6 ------
7 files changed, 84 insertions(+), 6 deletions(-)
diff --git a/docs/manpages/libvirtd.rst b/docs/manpages/libvirtd.rst
index ee72f0838221..1347b9b21042 100644
--- a/docs/manpages/libvirtd.rst
+++ b/docs/manpages/libvirtd.rst
@@ -79,6 +79,20 @@ unit files must be masked:
$ systemctl mask libvirtd.socket libvirtd-ro.socket \
libvirtd-admin.socket libvirtd-tls.socket libvirtd-tcp.socket
+If using libvirt-guests service then the ordering for that service needs to be
+adapted so that it is ordered after the service unit instead of the socket unit.
+Since dependencies and ordering cannot be changed with drop-in overrides, the
+whole libvirt-guests unit file needs to be changed. In order to preserve such
+change copy the installed ``/usr/lib/systemd/system/libvirt-guests.service`` to
+``/etc/systemd/system/libvirt-guests.service`` and make the change there,
+specifically make sure the ``After=`` ordering mentions ``libvirtd.service`` and
+not ``libvirtd.socket``:
+
+::
+
+ [Unit]
+ After=libvirtd.service
+
OPTIONS
=======
diff --git a/docs/manpages/virtlxcd.rst b/docs/manpages/virtlxcd.rst
index 2e9d8fd14bbb..aebc8adb5822 100644
--- a/docs/manpages/virtlxcd.rst
+++ b/docs/manpages/virtlxcd.rst
@@ -60,6 +60,20 @@ unit files must be masked:
$ systemctl mask virtlxcd.socket virtlxcd-ro.socket \
virtlxcd-admin.socket
+If using libvirt-guests service then the ordering for that service needs to be
+adapted so that it is ordered after the service unit instead of the socket unit.
+Since dependencies and ordering cannot be changed with drop-in overrides, the
+whole libvirt-guests unit file needs to be changed. In order to preserve such
+change copy the installed ``/usr/lib/systemd/system/libvirt-guests.service`` to
+``/etc/systemd/system/libvirt-guests.service`` and make the change there,
+specifically make sure the ``After=`` ordering mentions ``virtlxcd.service`` and
+not ``virtlxcd.socket``:
+
+::
+
+ [Unit]
+ After=virtlxcd.service
+
OPTIONS
=======
diff --git a/docs/manpages/virtqemud.rst b/docs/manpages/virtqemud.rst
index ea8d6e3105db..fa9a6ce3755c 100644
--- a/docs/manpages/virtqemud.rst
+++ b/docs/manpages/virtqemud.rst
@@ -60,6 +60,20 @@ unit files must be masked:
$ systemctl mask virtqemud.socket virtqemud-ro.socket \
virtqemud-admin.socket
+If using libvirt-guests service then the ordering for that service needs to be
+adapted so that it is ordered after the service unit instead of the socket unit.
+Since dependencies and ordering cannot be changed with drop-in overrides, the
+whole libvirt-guests unit file needs to be changed. In order to preserve such
+change copy the installed ``/usr/lib/systemd/system/libvirt-guests.service`` to
+``/etc/systemd/system/libvirt-guests.service`` and make the change there,
+specifically make sure the ``After=`` ordering mentions ``virtqemud.service`` and
+not ``virtqemud.socket``:
+
+::
+
+ [Unit]
+ After=virtqemud.service
+
OPTIONS
=======
diff --git a/docs/manpages/virtvboxd.rst b/docs/manpages/virtvboxd.rst
index d7339d99f22b..f90de3451d8d 100644
--- a/docs/manpages/virtvboxd.rst
+++ b/docs/manpages/virtvboxd.rst
@@ -58,6 +58,20 @@ unit files must be masked:
$ systemctl mask virtvboxd.socket virtvboxd-ro.socket \
virtvboxd-admin.socket
+If using libvirt-guests service then the ordering for that service needs to be
+adapted so that it is ordered after the service unit instead of the socket unit.
+Since dependencies and ordering cannot be changed with drop-in overrides, the
+whole libvirt-guests unit file needs to be changed. In order to preserve such
+change copy the installed ``/usr/lib/systemd/system/libvirt-guests.service`` to
+``/etc/systemd/system/libvirt-guests.service`` and make the change there,
+specifically make sure the ``After=`` ordering mentions ``virtvboxd.service`` and
+not ``virtvboxd.socket``:
+
+::
+
+ [Unit]
+ After=virtvboxd.service
+
OPTIONS
=======
diff --git a/docs/manpages/virtvzd.rst b/docs/manpages/virtvzd.rst
index 42dfa263e450..970719aac1d5 100644
--- a/docs/manpages/virtvzd.rst
+++ b/docs/manpages/virtvzd.rst
@@ -60,6 +60,20 @@ unit files must be masked:
$ systemctl mask virtvzd.socket virtvzd-ro.socket \
virtvzd-admin.socket
+If using libvirt-guests service then the ordering for that service needs to be
+adapted so that it is ordered after the service unit instead of the socket unit.
+Since dependencies and ordering cannot be changed with drop-in overrides, the
+whole libvirt-guests unit file needs to be changed. In order to preserve such
+change copy the installed ``/usr/lib/systemd/system/libvirt-guests.service`` to
+``/etc/systemd/system/libvirt-guests.service`` and make the change there,
+specifically make sure the ``After=`` ordering mentions ``virtvzd.service`` and
+not ``virtvzd.socket``:
+
+::
+
+ [Unit]
+ After=virtvzd.service
+
OPTIONS
=======
diff --git a/docs/manpages/virtxend.rst b/docs/manpages/virtxend.rst
index b08346b489d2..cf7685ecc0e6 100644
--- a/docs/manpages/virtxend.rst
+++ b/docs/manpages/virtxend.rst
@@ -60,6 +60,20 @@ unit files must be masked:
$ systemctl mask virtxend.socket virtxend-ro.socket \
virtxend-admin.socket
+If using libvirt-guests service then the ordering for that service needs to be
+adapted so that it is ordered after the service unit instead of the socket unit.
+Since dependencies and ordering cannot be changed with drop-in overrides, the
+whole libvirt-guests unit file needs to be changed. In order to preserve such
+change copy the installed ``/usr/lib/systemd/system/libvirt-guests.service`` to
+``/etc/systemd/system/libvirt-guests.service`` and make the change there,
+specifically make sure the ``After=`` ordering mentions ``virtxend.service`` and
+not ``virtxend.socket``:
+
+::
+
+ [Unit]
+ After=virtxend.service
+
OPTIONS
=======
diff --git a/tools/libvirt-guests.service.in b/tools/libvirt-guests.service.in
index 3cf647619612..1c569c320dfd 100644
--- a/tools/libvirt-guests.service.in
+++ b/tools/libvirt-guests.service.in
@@ -9,12 +9,6 @@ After=virtlxcd.socket
After=virtvboxd.socket
After=virtvzd.socket
After=virtxend.socket
-After=libvirtd.service
-After=virtqemud.service
-After=virtlxcd.service
-After=virtvboxd.service
-After=virtvzd.service
-After=virtxend.service
After=virt-guest-shutdown.target
Documentation=man:libvirt-guests(8)
Documentation=https://libvirt.org
--
2.37.2
2 years, 1 month
[libvirt PATCH] src: warn if client hits the max requests limit
by Daniel P. Berrangé
Since they are simply normal RPC messages, the keep alive packets are
subject to the "max_client_requests" limit just like any API calls.
Thus, if a client hits the 'max_client_requests' limit and all the
pending API calls take a long time to complete, it may result in
keep-alives firing and dropping the client connection.
This has been seen by a number of users with the default value of
max_client_requests=5, by issuing 5 concurrent live migration
operations.
By printing a warning message when this happens, admins will be alerted
to the fact that their active clients are exceeding the default client
requests limit.
Signed-off-by: Daniel P. Berrangé <berrange(a)redhat.com>
---
I'm a little wary of this change. If we use anything less than VIR_WARN
it is not going to be useful, as we need it visible by default. At the
same time though I'm concerned that this might expose very many
deployments using an unreasonably low max_client_requests value for
their workload. For example OpenStack deployment tools have often left
this on the default setting and have been known to exceed it with live
migration running concurrently.
One possible optimization would be to only issue this warning once per
connected client, so we don't spam repeatedly?
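For admins who do hit this warning, a minimal sketch of the tuning it
points at (the value is only an example, not a recommendation) is to raise
the limit in /etc/libvirt/libvirtd.conf and restart the daemon:

# /etc/libvirt/libvirtd.conf
max_client_requests = 20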
src/rpc/virnetserverclient.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/src/rpc/virnetserverclient.c b/src/rpc/virnetserverclient.c
index d57ca07167..0d82726194 100644
--- a/src/rpc/virnetserverclient.c
+++ b/src/rpc/virnetserverclient.c
@@ -1259,6 +1259,10 @@ static virNetMessage *virNetServerClientDispatchRead(virNetServerClient *client)
client->rx->buffer = g_new0(char, client->rx->bufferLength);
client->nrequests++;
}
+ } else {
+ VIR_WARN("Client hit max requests limit %zd. This may result "
+ "in keep-alive timeouts. Consider tuning the "
+ "max_client_requests server parameter", client->nrequests);
}
virNetServerClientUpdateEvent(client);
--
2.37.2
2 years, 2 months
[libvirt][PATCH v15 0/9] Support query and use SGX
by Lin Yang
The previous v14 version can be found here:
https://listman.redhat.com/archives/libvir-list/2022-July/233257.html
Diff to v14:
- Dropped SGX support for QEMU 6.2.0 and only focus on QEMU 7.0.0 (BTW,
I noticed the default QEMU version in RHEL9 is still 6.2.0, so those
users cannot access this feature unless they manually upgrade QEMU)
- Removed total EPC size from domain capability, since the corresponding
attribute is marked as deprecated in QMP command
"query-sgx-capabilities"
- Some cleanups to address comments (pin test to 7.0.0, more validations
on qemu_validate.c, name issue, use built-in functions, ...)
BTW, it still adds SGX EPC as a memory device, since SGX EPC is basically
one kind of memory, more specifically a private region of memory, so no
additional general memory is added. QEMU allocates part of it and passes
it through to the guest VM. I don't have a better alternative for
representing it in the domain definition.
Haibin Huang (4):
domain_capabilities: Define SGX capabilities structs
qemu: Get SGX capabilities form QMP
Convert QMP capabilities to domain capabilities
conf: expose SGX feature in domain capabilities
Lin Yang (2):
conf: Introduce SGX EPC element into device memory xml
qemu: Add command-line to generate SGX EPC memory backend
Michal Prívozník (3):
qemu_cgroup: Allow SGX in devices controller
qemu_namespace: Create SGX related nodes in domain's namespace
security_dac: Set DAC label on SGX /dev nodes
docs/formatdomain.rst | 25 +-
docs/formatdomaincaps.rst | 40 ++++
src/conf/domain_capabilities.c | 46 ++++
src/conf/domain_capabilities.h | 21 ++
src/conf/domain_conf.c | 30 +++
src/conf/domain_conf.h | 1 +
src/conf/domain_postparse.c | 1 +
src/conf/domain_validate.c | 9 +
src/conf/schemas/domaincaps.rng | 37 +++
src/conf/schemas/domaincommon.rng | 1 +
src/libvirt_private.syms | 1 +
src/qemu/qemu_alias.c | 6 +-
src/qemu/qemu_capabilities.c | 219 ++++++++++++++++++
src/qemu/qemu_capabilities.h | 6 +
src/qemu/qemu_cgroup.c | 76 +++++-
src/qemu/qemu_command.c | 66 +++++-
src/qemu/qemu_domain.c | 48 ++--
src/qemu/qemu_domain.h | 2 +
src/qemu/qemu_domain_address.c | 6 +
src/qemu/qemu_driver.c | 1 +
src/qemu/qemu_monitor.c | 10 +
src/qemu/qemu_monitor.h | 3 +
src/qemu/qemu_monitor_json.c | 137 ++++++++++-
src/qemu/qemu_monitor_json.h | 4 +
src/qemu/qemu_namespace.c | 20 +-
src/qemu/qemu_process.c | 2 +
src/qemu/qemu_validate.c | 40 ++++
src/security/security_apparmor.c | 1 +
src/security/security_dac.c | 46 ++--
src/security/security_selinux.c | 2 +
tests/domaincapsdata/bhyve_basic.x86_64.xml | 1 +
tests/domaincapsdata/bhyve_fbuf.x86_64.xml | 1 +
tests/domaincapsdata/bhyve_uefi.x86_64.xml | 1 +
tests/domaincapsdata/empty.xml | 1 +
tests/domaincapsdata/libxl-xenfv.xml | 1 +
tests/domaincapsdata/libxl-xenpv.xml | 1 +
.../domaincapsdata/qemu_4.2.0-q35.x86_64.xml | 1 +
.../domaincapsdata/qemu_4.2.0-tcg.x86_64.xml | 1 +
.../qemu_4.2.0-virt.aarch64.xml | 1 +
tests/domaincapsdata/qemu_4.2.0.aarch64.xml | 1 +
tests/domaincapsdata/qemu_4.2.0.ppc64.xml | 1 +
tests/domaincapsdata/qemu_4.2.0.s390x.xml | 1 +
tests/domaincapsdata/qemu_4.2.0.x86_64.xml | 1 +
.../domaincapsdata/qemu_5.0.0-q35.x86_64.xml | 1 +
.../domaincapsdata/qemu_5.0.0-tcg.x86_64.xml | 1 +
.../qemu_5.0.0-virt.aarch64.xml | 1 +
tests/domaincapsdata/qemu_5.0.0.aarch64.xml | 1 +
tests/domaincapsdata/qemu_5.0.0.ppc64.xml | 1 +
tests/domaincapsdata/qemu_5.0.0.x86_64.xml | 1 +
.../domaincapsdata/qemu_5.1.0-q35.x86_64.xml | 1 +
.../domaincapsdata/qemu_5.1.0-tcg.x86_64.xml | 1 +
tests/domaincapsdata/qemu_5.1.0.sparc.xml | 1 +
tests/domaincapsdata/qemu_5.1.0.x86_64.xml | 1 +
.../domaincapsdata/qemu_5.2.0-q35.x86_64.xml | 1 +
.../domaincapsdata/qemu_5.2.0-tcg.x86_64.xml | 1 +
.../qemu_5.2.0-virt.aarch64.xml | 1 +
tests/domaincapsdata/qemu_5.2.0.aarch64.xml | 1 +
tests/domaincapsdata/qemu_5.2.0.ppc64.xml | 1 +
tests/domaincapsdata/qemu_5.2.0.s390x.xml | 1 +
tests/domaincapsdata/qemu_5.2.0.x86_64.xml | 1 +
.../domaincapsdata/qemu_6.0.0-q35.x86_64.xml | 1 +
.../domaincapsdata/qemu_6.0.0-tcg.x86_64.xml | 1 +
.../qemu_6.0.0-virt.aarch64.xml | 1 +
tests/domaincapsdata/qemu_6.0.0.aarch64.xml | 1 +
tests/domaincapsdata/qemu_6.0.0.s390x.xml | 1 +
tests/domaincapsdata/qemu_6.0.0.x86_64.xml | 1 +
.../domaincapsdata/qemu_6.1.0-q35.x86_64.xml | 1 +
.../domaincapsdata/qemu_6.1.0-tcg.x86_64.xml | 1 +
tests/domaincapsdata/qemu_6.1.0.x86_64.xml | 1 +
.../domaincapsdata/qemu_6.2.0-q35.x86_64.xml | 1 +
.../domaincapsdata/qemu_6.2.0-tcg.x86_64.xml | 1 +
.../qemu_6.2.0-virt.aarch64.xml | 1 +
tests/domaincapsdata/qemu_6.2.0.aarch64.xml | 1 +
tests/domaincapsdata/qemu_6.2.0.ppc64.xml | 1 +
tests/domaincapsdata/qemu_6.2.0.x86_64.xml | 1 +
.../domaincapsdata/qemu_7.0.0-q35.x86_64.xml | 9 +
.../domaincapsdata/qemu_7.0.0-tcg.x86_64.xml | 9 +
.../qemu_7.0.0-virt.aarch64.xml | 1 +
tests/domaincapsdata/qemu_7.0.0.aarch64.xml | 1 +
tests/domaincapsdata/qemu_7.0.0.ppc64.xml | 1 +
tests/domaincapsdata/qemu_7.0.0.x86_64.xml | 9 +
.../domaincapsdata/qemu_7.1.0-q35.x86_64.xml | 1 +
.../domaincapsdata/qemu_7.1.0-tcg.x86_64.xml | 1 +
tests/domaincapsdata/qemu_7.1.0.x86_64.xml | 1 +
.../caps_6.2.0.x86_64.replies | 27 ++-
.../caps_7.0.0.x86_64.replies | 34 ++-
.../caps_7.0.0.x86_64.xml | 10 +
.../caps_7.1.0.x86_64.replies | 21 +-
.../sgx-epc.x86_64-7.0.0.args | 40 ++++
tests/qemuxml2argvdata/sgx-epc.xml | 64 +++++
tests/qemuxml2argvtest.c | 2 +
.../sgx-epc.x86_64-7.0.0.xml | 1 +
tests/qemuxml2xmltest.c | 2 +
93 files changed, 1107 insertions(+), 79 deletions(-)
create mode 100644 tests/qemuxml2argvdata/sgx-epc.x86_64-7.0.0.args
create mode 100644 tests/qemuxml2argvdata/sgx-epc.xml
create mode 120000 tests/qemuxml2xmloutdata/sgx-epc.x86_64-7.0.0.xml
--
2.25.1
2 years, 2 months
libvirtd: failed to connect to socket after installation
by Carlos Bilbao
Hello,
I am trying to test some changes made to libvirt. I tried compiling and
installing, following the available documentation, with:
ninja -C build clean
meson build --prefix=$HOME/usr
ninja -C build -Dsystem=true
sudo ninja -C build install
After doing this, I try to run virt-install and get the following error on
the active libvirtd daemon:
Failed to connect socket to '/var/local/run/libvirt/virtqemud-sock': No
such file or directory
Indeed, that file does not exist:
$ ls /var/local/run/libvirt/
common hostdevmgr lockd lxc network nwfilter nwfilter-binding
secrets storage
virt-install was working fine before I started changing libvirt's source code.
I'm working with Ubuntu 22.04 LTS, virsh v8.7.0.
I would appreciate any directions on how to fix this/successfully install
libvirt.
Thanks in advance.
Carlos
2 years, 2 months
[libvirt PATCH 00/12] qemu: retire some more capabilities
by Ján Tomko
Applies on top of Peter's QEMU_CAPS_VIRTIO_PCI_DISABLE_LEGACY series
Ján Tomko (12):
qemu: assume QEMU_CAPS_CHARDEV_FILE_APPEND
qemu: retire QEMU_CAPS_CHARDEV_FILE_APPEND
qemu: assume QEMU_CAPS_CHARDEV_LOGFILE
qemu: retire QEMU_CAPS_CHARDEV_LOGFILE
qemu: assume QEMU_CAPS_NEC_USB_XHCI_PORTS
qemu: retire QEMU_CAPS_NEC_USB_XHCI_PORTS
qemu: assume QEMU_CAPS_VIRTIO_SCSI_IOTHREAD
qemu: retire QEMU_CAPS_VIRTIO_SCSI_IOTHREAD
qemu: assume QEMU_CAPS_VIRTIO_PACKED_QUEUES
qemu: retire QEMU_CAPS_VIRTIO_PACKED_QUEUES
qemu: remove qemuValidateDomainVirtioOptions
qemu: do not probe for properties of nec-usb-xhci
src/qemu/qemu_capabilities.c | 25 +--
src/qemu/qemu_capabilities.h | 10 +-
src/qemu/qemu_command.c | 3 +-
src/qemu/qemu_process.c | 21 +-
src/qemu/qemu_validate.c | 80 +------
.../caps_4.2.0.aarch64.replies | 156 +++-----------
.../caps_4.2.0.aarch64.xml | 5 -
.../caps_4.2.0.ppc64.replies | 140 +++---------
.../qemucapabilitiesdata/caps_4.2.0.ppc64.xml | 4 -
.../qemucapabilitiesdata/caps_4.2.0.s390x.xml | 4 -
.../caps_4.2.0.x86_64.replies | 168 ++++-----------
.../caps_4.2.0.x86_64.xml | 5 -
.../caps_5.0.0.aarch64.replies | 169 +++------------
.../caps_5.0.0.aarch64.xml | 5 -
.../caps_5.0.0.ppc64.replies | 165 +++-----------
.../qemucapabilitiesdata/caps_5.0.0.ppc64.xml | 5 -
.../caps_5.0.0.riscv64.replies | 153 +++----------
.../caps_5.0.0.riscv64.xml | 5 -
.../caps_5.0.0.x86_64.replies | 181 ++++------------
.../caps_5.0.0.x86_64.xml | 5 -
.../qemucapabilitiesdata/caps_5.1.0.sparc.xml | 2 -
.../caps_5.1.0.x86_64.replies | 185 ++++------------
.../caps_5.1.0.x86_64.xml | 5 -
.../caps_5.2.0.aarch64.replies | 181 ++++------------
.../caps_5.2.0.aarch64.xml | 5 -
.../caps_5.2.0.ppc64.replies | 173 +++------------
.../qemucapabilitiesdata/caps_5.2.0.ppc64.xml | 5 -
.../caps_5.2.0.riscv64.replies | 161 +++-----------
.../caps_5.2.0.riscv64.xml | 5 -
.../qemucapabilitiesdata/caps_5.2.0.s390x.xml | 4 -
.../caps_5.2.0.x86_64.replies | 193 ++++-------------
.../caps_5.2.0.x86_64.xml | 5 -
.../caps_6.0.0.aarch64.replies | 191 ++++------------
.../caps_6.0.0.aarch64.xml | 5 -
.../qemucapabilitiesdata/caps_6.0.0.s390x.xml | 4 -
.../caps_6.0.0.x86_64.replies | 203 ++++--------------
.../caps_6.0.0.x86_64.xml | 5 -
.../caps_6.1.0.x86_64.replies | 203 ++++--------------
.../caps_6.1.0.x86_64.xml | 5 -
.../caps_6.2.0.aarch64.replies | 191 ++++------------
.../caps_6.2.0.aarch64.xml | 5 -
.../caps_6.2.0.ppc64.replies | 183 +++-------------
.../qemucapabilitiesdata/caps_6.2.0.ppc64.xml | 5 -
.../caps_6.2.0.x86_64.replies | 203 ++++--------------
.../caps_6.2.0.x86_64.xml | 5 -
.../caps_7.0.0.aarch64.replies | 195 ++++-------------
.../caps_7.0.0.aarch64.xml | 5 -
.../caps_7.0.0.ppc64.replies | 183 +++-------------
.../qemucapabilitiesdata/caps_7.0.0.ppc64.xml | 5 -
.../caps_7.0.0.x86_64.replies | 203 ++++--------------
.../caps_7.0.0.x86_64.xml | 5 -
.../caps_7.1.0.x86_64.replies | 203 ++++--------------
.../caps_7.1.0.x86_64.xml | 5 -
.../virtio-options-controller-packed.err | 1 -
.../virtio-options-disk-packed.err | 1 -
.../virtio-options-fs-packed.err | 1 -
.../virtio-options-input-packed.err | 1 -
.../virtio-options-memballoon-packed.err | 1 -
.../virtio-options-net-packed.err | 1 -
.../virtio-options-rng-packed.err | 1 -
.../virtio-options-video-packed.err | 1 -
tests/qemuxml2argvtest.c | 28 +--
tests/qemuxml2xmltest.c | 1 -
63 files changed, 811 insertions(+), 3471 deletions(-)
delete mode 100644 tests/qemuxml2argvdata/virtio-options-controller-packed.err
delete mode 100644 tests/qemuxml2argvdata/virtio-options-disk-packed.err
delete mode 100644 tests/qemuxml2argvdata/virtio-options-fs-packed.err
delete mode 100644 tests/qemuxml2argvdata/virtio-options-input-packed.err
delete mode 100644 tests/qemuxml2argvdata/virtio-options-memballoon-packed.err
delete mode 100644 tests/qemuxml2argvdata/virtio-options-net-packed.err
delete mode 100644 tests/qemuxml2argvdata/virtio-options-rng-packed.err
delete mode 100644 tests/qemuxml2argvdata/virtio-options-video-packed.err
--
2.37.1
2 years, 2 months