[libvirt PATCH 00/16] refactor and fix graphics formatting
by Pavel Hrdina
Pavel Hrdina (16):
domain_conf: graphics: use a function to format gl element
domain_conf: graphics: use a function to format audio element
domain_conf: modernize graphics formatting
domain_conf: graphics: extract VNC formatting to separate function
domain_conf: graphics: extract SDL formatting to separate function
domain_conf: graphics: extract RDP formatting to separate function
domain_conf: graphics: extract Desktop formatting to separate function
domain_conf: graphics: extract Spice formatting to separate function
domain_conf: graphics: extract EGL-Headless formatting to separate
function
domain_conf: graphics: extract DBus formatting to separate function
domain_conf: graphics: extract listen formatting to separate function
domain_conf: graphics: move listens formatting to relevant graphics
types
domain_conf: graphics: move remaining spice formatting
domain_conf: graphics: move remaining VNC formatting
domain_conf: graphics: fix error messages when formatting XML
domain_conf: graphics: properly escape user provided strings when
formatting XML
src/conf/domain_conf.c | 724 +++++++++++++++++++++--------------------
1 file changed, 372 insertions(+), 352 deletions(-)
--
2.48.1
3 weeks, 5 days
[libvirt] [PATCH] Fix python error reporting for some storage operations
by Cole Robinson
In the python bindings, all vir* classes expect to be
passed a virConnect object when instantiated. Before
the storage stuff, these classes were only instantiated
in virConnect methods, so the generator is hardcoded to
pass 'self' as the connection instance to these classes.
The problem is that there are some methods that return pool or vol
instances but aren't called from virConnect: you can look up a
storage volume's associated pool, and can look up volumes from a
pool. In these cases passing 'self' doesn't
give the vir* instance a connection, so when it comes time
to raise an exception crap hits the fan.
Rather than rework the generator to accommodate this edge
case, I just fixed the init functions for virStorage* to
pull the associated connection out of the passed value
if it's not a virConnect instance.
Thanks,
Cole
diff --git a/python/generator.py b/python/generator.py
index 01a17da..c706b19 100755
--- a/python/generator.py
+++ b/python/generator.py
@@ -962,8 +962,12 @@ def buildWrappers():
list = reference_keepers[classname]
for ref in list:
classes.write(" self.%s = None\n" % ref[1])
- if classname in [ "virDomain", "virNetwork", "virStoragePool", "virStorageVol" ]:
+ if classname in [ "virDomain", "virNetwork" ]:
classes.write(" self._conn = conn\n")
+ elif classname in [ "virStorageVol", "virStoragePool" ]:
+ classes.write(" self._conn = conn\n" + \
+ " if not isinstance(conn, virConnect):\n" + \
+ " self._conn = conn._conn\n")
classes.write(" if _obj != None:self._o = _obj;return\n")
classes.write(" self._o = None\n\n");
destruct=None
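The effect of the generated __init__ logic above can be sketched as a hand-written Python equivalent (the classes here are simplified stand-ins mirroring the binding names, not the real generated code):

```python
class virConnect:
    """Stand-in for the real connection class."""


class virStoragePool:
    def __init__(self, conn, _obj=None):
        # The generator always passes the calling object's 'self' as
        # 'conn'. That may be a virConnect, or another vir* instance
        # (e.g. a pool when looking up one of its volumes).
        self._conn = conn
        if not isinstance(conn, virConnect):
            # Pull the real connection out of the passed vir* object,
            # so error reporting has a connection to raise from.
            self._conn = conn._conn
        self._o = _obj


class virStorageVol(virStoragePool):
    pass


conn = virConnect()
pool = virStoragePool(conn)
vol = virStorageVol(pool)   # instantiated from a pool, not a virConnect
assert vol._conn is conn    # still resolves to the real connection
```

Either way the instance ends up holding the actual virConnect, which is what the exception-raising path needs.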
3 weeks, 5 days
[PATCH RFC] util: pick a better runtime directory when XDG_RUNTIME_DIR isn't set
by Laine Stump
======
I'm sending this as an RFC just because what it's doing feels kind of
dirty - directly examining XDG_RUNTIME_DIR seems like an "end run"
around the Glib API. If anyone has a better idea of what to do, please
give details :-)
======
When running unprivileged (i.e. not as root, but as a regular user),
libvirt calls g_get_user_runtime_dir() (from Glib) to get the name of
a directory where status files can be saved. This is a directory that
is 1) writeable by the current user, and 2) will remain there until
the host reboots, but then 3) be erased after the reboot. This is used
for pidfiles, sockets created to communicate between processes, status
XML of active domains, etc.
Normally g_get_user_runtime_dir() returns the setting of
XDG_RUNTIME_DIR in the user's environment; usually this is set to
/run/user/${UID} (e.g. /run/user/1000) - that directory is created
when a user first logs in and is owned by the user, but is cleared out
when the system reboots (more specifically, this directory usually
resides in a tmpfs, and so disappears when that tmpfs is unmounted).
But sometimes XDG_RUNTIME_DIR isn't set in the user's environment. In
that case, g_get_user_runtime_dir() returns ${HOME}/.cache
(e.g. /home/laine/.cache). This directory fulfills the first 2
criteria above, but fails the 3rd. This isn't just some pedantic
complaint - libvirt actually depends on the directory being cleared
out during a reboot - otherwise it might think that stale status files
indicate active guests when in fact the guests were shut down
during the reboot.
In my opinion this behavior is a bug in Glib - see the requirements
for XDG_RUNTIME in the FreeDesktop documentation here:
https://specifications.freedesktop.org/basedir-spec/latest/#variables
but they've documented the behavior as proper in the Glib docs for
g_get_user_runtime_dir(), and so likely will consider it not a bug.
Beyond that, aside from failing the "must be cleared out during a reboot"
requirement, use of $HOME/.cache in this way also disturbs SELinux,
which gives an AVC denial when libvirt (or passt) tries to create a
file or socket in that directory (the SELinux policy permits use of
/run/user/$UID, but not of $HOME/.cache). We *could* add that to the
SELinux policy, but since the glib behavior doesn't meet our
requirements anyway, it makes more sense to fix the directory
selection on our side.
All of the above is a very long lead-up to the functionality in this
patch: rather than blindly accepting the path returned from
g_get_user_runtime_dir(), we first check if XDG_RUNTIME_DIR is set. If
it isn't set, then we look to see if /run/user/$UID exists and is
writable by this user; if so, we use *that* as the directory for our
status files. Otherwise (both when XDG_RUNTIME_DIR is set, and when
/run/user/$UID isn't usable) we fall back to just using the path
returned by g_get_user_runtime_dir() - that isn't perfect, but it's
what we were doing before, so at least it's not any worse.
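The selection logic described above can be sketched in Python (a simplified stand-in for the C patch; `glib_fallback` plays the role of g_get_user_runtime_dir(), and the function name is hypothetical):

```python
import os
import stat


def get_user_runtime_directory(glib_fallback):
    """Pick a runtime dir: prefer /run/user/$UID when XDG_RUNTIME_DIR
    is unset, otherwise trust the glib-provided fallback path."""
    if "XDG_RUNTIME_DIR" not in os.environ:
        runtime_dir = "/run/user/%d" % os.getuid()
        try:
            sb = os.stat(runtime_dir)
            # use /run/user/$UID only if it's a directory with the
            # owner-write bit set (mirroring the S_IWUSR check)
            if stat.S_ISDIR(sb.st_mode) and sb.st_mode & stat.S_IWUSR:
                return os.path.join(runtime_dir, "libvirt")
        except OSError:
            pass
    # either XDG_RUNTIME_DIR was set, or /run/user/$UID wasn't usable
    return os.path.join(glib_fallback, "libvirt")
```

Note that, like the C code, this only consults the owner-write mode bit rather than doing a full access check.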
Resolves: https://issues.redhat.com/browse/RHEL-70222
Signed-off-by: Laine Stump <laine(a)redhat.com>
---
src/util/virutil.c | 22 ++++++++++++++++++++++
1 file changed, 22 insertions(+)
diff --git a/src/util/virutil.c b/src/util/virutil.c
index 2abcb282fe..4c7f4b62bc 100644
--- a/src/util/virutil.c
+++ b/src/util/virutil.c
@@ -538,6 +538,28 @@ char *virGetUserRuntimeDirectory(void)
#ifdef WIN32
return g_strdup(g_get_user_runtime_dir());
#else
+ /* tl;dr - if XDG_RUNTIME_DIR is set, use g_get_user_runtime_dir().
+ * if not set, then see if /run/user/$UID works
+ * if so, use that, else fallback to g_get_user_runtime_dir()
+ *
+ * this is done because the directory returned by
+ * g_get_user_runtime_dir() when XDG_RUNTIME_DIR isn't set is
+ * "suboptimal" (it's a location that is owned by the user, but
+ * isn't erased when the user completely logs out)
+ */
+
+ if (!getenv("XDG_RUNTIME_DIR")) {
+ g_autofree char *runtime_dir = NULL;
+ struct stat sb;
+
+ runtime_dir = g_strdup_printf("/run/user/%d", getuid());
+ if (virFileIsDir(runtime_dir) &&
+ (stat(runtime_dir, &sb) == 0) && (sb.st_mode & S_IWUSR)) {
+ return g_build_filename(runtime_dir, "libvirt", NULL);
+ }
+ }
+
+ /* either XDG_RUNTIME_DIR was set, or /run/user/$UID wasn't writable */
return g_build_filename(g_get_user_runtime_dir(), "libvirt", NULL);
#endif
}
--
2.47.1
3 weeks, 6 days
[PATCH] qemu: snapshot: Remove dead code in qemuSnapshotDeleteBlockJobFinishing()
by Alexander Kuznetsov
qemuSnapshotDeleteBlockJobIsActive() returns only 0 and 1. Convert it
to bool and remove the dead code handling a -1 return in the caller.
Found by Linux Verification Center (linuxtesting.org) with Svace.
Reported-by: Andrey Slepykh <a.slepykh(a)fobos-nt.ru>
Signed-off-by: Alexander Kuznetsov <kuznetsovam(a)altlinux.org>
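The pattern being cleaned up can be illustrated in Python (simplified, hypothetical stand-ins for the C enum and functions): a predicate that can only answer yes or no should be a bool, because a caller's "negative return means error" branch is unreachable dead code.

```python
from enum import Enum, auto


class BlockjobState(Enum):
    RUNNING = auto()
    ABORTING = auto()
    PENDING = auto()
    PIVOTING = auto()
    COMPLETED = auto()
    FAILED = auto()


def blockjob_is_active(state):
    """bool predicate: there is no third 'error' outcome, so callers
    must not pretend one exists (the old 'rc < 0' check was dead)."""
    return state in (BlockjobState.RUNNING, BlockjobState.ABORTING,
                     BlockjobState.PENDING, BlockjobState.PIVOTING)


assert blockjob_is_active(BlockjobState.PENDING)
assert not blockjob_is_active(BlockjobState.COMPLETED)
```

With the bool return, the caller's loop condition says exactly what it means and the impossible error path disappears.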
---
src/qemu/qemu_snapshot.c | 12 ++++--------
1 file changed, 4 insertions(+), 8 deletions(-)
diff --git a/src/qemu/qemu_snapshot.c b/src/qemu/qemu_snapshot.c
index 80cd54bf33..d277f76b4b 100644
--- a/src/qemu/qemu_snapshot.c
+++ b/src/qemu/qemu_snapshot.c
@@ -3465,7 +3465,7 @@ qemuSnapshotDeleteBlockJobIsRunning(qemuBlockjobState state)
/* When finishing or aborting qemu blockjob we only need to know if the
* job is still active or not. */
-static int
+static bool
qemuSnapshotDeleteBlockJobIsActive(qemuBlockjobState state)
{
switch (state) {
@@ -3475,7 +3475,7 @@ qemuSnapshotDeleteBlockJobIsActive(qemuBlockjobState state)
case QEMU_BLOCKJOB_STATE_ABORTING:
case QEMU_BLOCKJOB_STATE_PENDING:
case QEMU_BLOCKJOB_STATE_PIVOTING:
- return 1;
+ return true;
case QEMU_BLOCKJOB_STATE_COMPLETED:
case QEMU_BLOCKJOB_STATE_FAILED:
@@ -3485,7 +3485,7 @@ qemuSnapshotDeleteBlockJobIsActive(qemuBlockjobState state)
break;
}
- return 0;
+ return false;
}
@@ -3513,18 +3513,14 @@ static int
qemuSnapshotDeleteBlockJobFinishing(virDomainObj *vm,
qemuBlockJobData *job)
{
- int rc;
qemuBlockJobUpdate(vm, job, VIR_ASYNC_JOB_SNAPSHOT);
- while ((rc = qemuSnapshotDeleteBlockJobIsActive(job->state)) > 0) {
+ while (qemuSnapshotDeleteBlockJobIsActive(job->state)) {
if (qemuDomainObjWait(vm) < 0)
return -1;
qemuBlockJobUpdate(vm, job, VIR_ASYNC_JOB_SNAPSHOT);
}
- if (rc < 0)
- return -1;
-
return 0;
}
--
2.42.4
3 weeks, 6 days
[PATCH 0/2] ch: minor fixes - segfault fix and error preserving
by Kirill Shchetiniuk
*** BLURB HERE ***
Kirill Shchetiniuk (2):
ch: memory (segmentation fault) fix
ch: preserve last error before stop fix
src/ch/ch_events.c | 6 +++---
src/ch/ch_process.c | 6 ++++++
2 files changed, 9 insertions(+), 3 deletions(-)
--
2.48.1
3 weeks, 6 days
[PATCH pushed] domain_caps: Don't leak 'cpu0_id' in 'virSEVCapabilitiesFree'
by Peter Krempa
Freeing the 'virSEVCapability' object leaked the 'cpu0_id' field since
its introduction.
Fixes: 0236e6154c46603bc443eda2f05c8ce511c55b08
Signed-off-by: Peter Krempa <pkrempa(a)redhat.com>
---
Trivial and fixed build.
src/conf/domain_capabilities.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/src/conf/domain_capabilities.c b/src/conf/domain_capabilities.c
index ab715b19d8..27551f6102 100644
--- a/src/conf/domain_capabilities.c
+++ b/src/conf/domain_capabilities.c
@@ -75,6 +75,7 @@ virSEVCapabilitiesFree(virSEVCapability *cap)
g_free(cap->pdh);
g_free(cap->cert_chain);
+ g_free(cap->cpu0_id);
g_free(cap);
}
--
2.48.1
3 weeks, 6 days
[PATCH v2 0/7] Add support for more ACPI table types
by Daniel P. Berrangé
This was triggered by a request by KubeVirt in
https://gitlab.com/libvirt/libvirt/-/issues/748
I've not functionally tested this, since I lack a suitable Windows
guest environment that looks for MSDM tables, nor does my
machine have MSDM ACPI tables to pass to a guest.
I'm blindly assuming that the QEMU CLI code is identical except for
s/SLIC/MSDM/.
In this 2nd version I've addressed two further issues:
* the xen driver was incorrectly mapping its 'acpi_firmware'
option to type=slic. Xen's setting accepts a concatenation
of tables of any type. This is different from type=slic
which represents a single table, whose type will be forced
to 'SLIC' if not already set. To address this we introduce
a new 'rawset' type
* The QEMU driver does not require a type to be set in the
first place; if set it merely overrides what is in the
data file. Supporting this would let us handle any ACPI
table type without further XML changes. To address this
we introduce a new 'raw' type, which can occur many
times.
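As a rough illustration of the distinction, the new types might appear in domain XML like this (a hypothetical sketch extrapolated from the existing type='slic' form and the description above — the paths are invented and the exact syntax is not taken from the patch):

```xml
<os>
  <acpi>
    <!-- single table; type forced to 'SLIC' if not already set -->
    <table type='slic'>/var/lib/libvirt/acpi/slic.dat</table>
    <!-- 'raw': type taken from the data file; may occur many times -->
    <table type='raw'>/var/lib/libvirt/acpi/table1.dat</table>
    <table type='raw'>/var/lib/libvirt/acpi/table2.dat</table>
    <!-- 'rawset': a concatenation of tables of any type (Xen) -->
    <table type='rawset'>/var/lib/libvirt/acpi/tables.bin</table>
  </acpi>
</os>
```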
Daniel P. Berrangé (7):
conf: introduce support for multiple ACPI tables
src: validate permitted ACPI table types in libxl/qemu drivers
src: introduce 'raw' and 'rawset' ACPI table types
qemu: support 'raw' ACPI table type
libxl: support 'rawset' ACPI table type
conf: support MSDM ACPI table type
qemu: support MSDM ACPI table type
docs/formatdomain.rst | 23 ++++-
src/conf/domain_conf.c | 94 ++++++++++++++-----
src/conf/domain_conf.h | 24 ++++-
src/conf/schemas/domaincommon.rng | 7 +-
src/libvirt_private.syms | 2 +
src/libxl/libxl_conf.c | 5 +-
src/libxl/libxl_domain.c | 28 ++++++
src/libxl/xen_xl.c | 15 ++-
src/qemu/qemu_command.c | 18 +++-
src/qemu/qemu_validate.c | 23 +++++
src/security/security_dac.c | 18 ++--
src/security/security_selinux.c | 16 ++--
src/security/virt-aa-helper.c | 5 +-
.../acpi-table-many.x86_64-latest.args | 37 ++++++++
.../acpi-table-many.x86_64-latest.xml | 42 +++++++++
tests/qemuxmlconfdata/acpi-table-many.xml | 34 +++++++
tests/qemuxmlconftest.c | 1 +
.../xlconfigdata/test-fullvirt-acpi-slic.xml | 2 +-
18 files changed, 337 insertions(+), 57 deletions(-)
create mode 100644 tests/qemuxmlconfdata/acpi-table-many.x86_64-latest.args
create mode 100644 tests/qemuxmlconfdata/acpi-table-many.x86_64-latest.xml
create mode 100644 tests/qemuxmlconfdata/acpi-table-many.xml
--
2.47.1
3 weeks, 6 days
[PATCH] Fix a typo in docs/formatdomain.rst
by Yalan Zhang
Fix a typo and update the setting in the example. The documentation
explains that "when passt is the backend,...the ``<source>``
path/type/mode are all implied to be "matching the passt process"
so **must not** be specified." Additionally, this source dev setting is
ignored in practice. Therefore, let's remove it from the example to avoid any
confusion.
Signed-off-by: Yalan Zhang <yalzhang(a)redhat.com>
---
docs/formatdomain.rst | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/docs/formatdomain.rst b/docs/formatdomain.rst
index cbe378e61d..e84e0a0e8e 100644
--- a/docs/formatdomain.rst
+++ b/docs/formatdomain.rst
@@ -5154,7 +5154,7 @@ destined for the host toward the guest instead), and a socket between
passt and QEMU forwards that traffic on to the guest (and back out,
of course).
-*(:since:`Since 11.1.0 (QEMU and KVM only)` you may prefer to use the
+:since:`Since 11.1.0 (QEMU and KVM only)` you may prefer to use the
passt backend with the more efficient and performant type='vhostuser'
rather than type='user'. All the options related to passt in the
paragraphs below here also apply when using the passt backend with
@@ -6378,7 +6378,6 @@ setting guest-side IP addresses with ``<ip>`` and port forwarding with
<interface type='vhostuser'>
<backend type='passt'/>
<mac address='52:54:00:3b:83:1a'/>
- <source dev='enp1s0'/>
<ip address='10.30.0.5' prefix='24'/>
</interface>
</devices>
--
2.48.1
3 weeks, 6 days
[PATCH 0/5] Introduce UEFI shim support
by Michal Privoznik
*** BLURB HERE ***
Michal Prívozník (5):
conf: Introduce os/shim element
qemu_capabilities: Introduce QEMU_CAPS_MACHINE_SHIM
qemu_validate: Check whether UEFI shim is supported
qemu_command: Generate cmd line for UEFI shim
security: Set seclabels on UEFI shim
docs/formatdomain.rst | 5 +++++
src/conf/domain_conf.c | 12 ++++++++----
src/conf/domain_conf.h | 1 +
src/conf/domain_validate.c | 6 ++++++
src/conf/schemas/domaincommon.rng | 5 +++++
src/qemu/qemu_capabilities.c | 2 ++
src/qemu/qemu_capabilities.h | 1 +
src/qemu/qemu_command.c | 2 ++
src/qemu/qemu_validate.c | 7 +++++++
src/security/security_dac.c | 10 ++++++++++
src/security/security_selinux.c | 9 +++++++++
src/security/virt-aa-helper.c | 4 ++++
tests/qemucapabilitiesdata/caps_10.0.0_s390x.xml | 1 +
tests/qemucapabilitiesdata/caps_10.0.0_x86_64.xml | 1 +
.../launch-security-sev-direct.x86_64-latest.args | 1 +
.../launch-security-sev-direct.x86_64-latest.xml | 1 +
tests/qemuxmlconfdata/launch-security-sev-direct.xml | 1 +
17 files changed, 65 insertions(+), 4 deletions(-)
--
2.45.3
3 weeks, 6 days
[PATCH 0/8] qemu: Follow-up to "schemas: domaincaps: Add missing schema for '<cpu0Id>'"
by Peter Krempa
As promised in the original patch fixing the schema this is the
test-case follow up.
As not entirely expected it's a bit more involved and also contains
fixes for other bugs.
Peter Krempa (8):
qemu: capabilities: Parse 'cpu0Id' from capability cache XML
domaincapstest: Use proper input file based on 'variant' in
'fillQemuCaps'
domaincapstest: Allow tests of all capability variants
qemucapabilitiesdata: Document '+amdsev' variant
qemucapabilitiestest: Add test data for 'qemu-9.2' on a SEV-enabled
AMD host
qemuxmlconftest: Properly discriminate output files for caps variants
qemuxmlconftest: Add 'latest' version of 'launch-security-sev*'
originally using 6.0.0
qemuxmlconftest: Add '+amdsev' versions of the rest of
'launch-security-sev*' cases
src/qemu/qemu_capabilities.c | 1 +
.../qemu_7.0.0-hvf.aarch64+hvf.xml | 43 +-
.../qemu_7.2.0-hvf.x86_64+hvf.xml | 952 +-
.../qemu_9.2.0-q35.x86_64+amdsev.xml | 852 +
.../qemu_9.2.0-tcg.x86_64+amdsev.xml | 1821 +
.../qemu_9.2.0.x86_64+amdsev.xml | 852 +
tests/domaincapstest.c | 21 +-
tests/qemucapabilitiesdata/README.rst | 4 +
.../caps_9.2.0_x86_64+amdsev.replies | 43857 ++++++++++++++++
.../caps_9.2.0_x86_64+amdsev.xml | 3132 ++
.../caps.x86_64+amdsev.xml | 29 +
...h64-virt-headless.aarch64-latest+hvf.args} | 0
...ch64-virt-headless.aarch64-latest+hvf.xml} | 0
...86_64-q35-headless.x86_64-latest+hvf.args} | 0
...x86_64-q35-headless.x86_64-latest+hvf.xml} | 0
...urity-sev-direct.x86_64-latest+amdsev.args | 38 +
...curity-sev-direct.x86_64-latest+amdsev.xml | 48 +
...ng-platform-info.x86_64-latest+amdsev.args | 35 +
...ing-platform-info.x86_64-latest+amdsev.xml | 43 +
...security-sev-snp.x86_64-latest+amdsev.args | 42 +
...-security-sev-snp.x86_64-latest+amdsev.xml | 73 +
...nch-security-sev.x86_64-latest+amdsev.args | 35 +
...unch-security-sev.x86_64-latest+amdsev.xml | 45 +
tests/qemuxmlconftest.c | 40 +-
tests/testutilsqemu.c | 6 +-
25 files changed, 51941 insertions(+), 28 deletions(-)
create mode 100644 tests/domaincapsdata/qemu_9.2.0-q35.x86_64+amdsev.xml
create mode 100644 tests/domaincapsdata/qemu_9.2.0-tcg.x86_64+amdsev.xml
create mode 100644 tests/domaincapsdata/qemu_9.2.0.x86_64+amdsev.xml
create mode 100644 tests/qemucapabilitiesdata/caps_9.2.0_x86_64+amdsev.replies
create mode 100644 tests/qemucapabilitiesdata/caps_9.2.0_x86_64+amdsev.xml
create mode 100644 tests/qemucaps2xmloutdata/caps.x86_64+amdsev.xml
rename tests/qemuxmlconfdata/{hvf-aarch64-virt-headless.aarch64-latest.args => hvf-aarch64-virt-headless.aarch64-latest+hvf.args} (100%)
rename tests/qemuxmlconfdata/{hvf-aarch64-virt-headless.aarch64-latest.xml => hvf-aarch64-virt-headless.aarch64-latest+hvf.xml} (100%)
rename tests/qemuxmlconfdata/{hvf-x86_64-q35-headless.x86_64-latest.args => hvf-x86_64-q35-headless.x86_64-latest+hvf.args} (100%)
rename tests/qemuxmlconfdata/{hvf-x86_64-q35-headless.x86_64-latest.xml => hvf-x86_64-q35-headless.x86_64-latest+hvf.xml} (100%)
create mode 100644 tests/qemuxmlconfdata/launch-security-sev-direct.x86_64-latest+amdsev.args
create mode 100644 tests/qemuxmlconfdata/launch-security-sev-direct.x86_64-latest+amdsev.xml
create mode 100644 tests/qemuxmlconfdata/launch-security-sev-missing-platform-info.x86_64-latest+amdsev.args
create mode 100644 tests/qemuxmlconfdata/launch-security-sev-missing-platform-info.x86_64-latest+amdsev.xml
create mode 100644 tests/qemuxmlconfdata/launch-security-sev-snp.x86_64-latest+amdsev.args
create mode 100644 tests/qemuxmlconfdata/launch-security-sev-snp.x86_64-latest+amdsev.xml
create mode 100644 tests/qemuxmlconfdata/launch-security-sev.x86_64-latest+amdsev.args
create mode 100644 tests/qemuxmlconfdata/launch-security-sev.x86_64-latest+amdsev.xml
--
2.48.1
4 weeks, 1 day