[PATCH v2] rpc: fix memory leak in virNetServerClientNew and virNetServerProgramDispatchCall
by Jiang Jiacheng
From: jiangjiacheng <jiangjiacheng@huawei.com>
In virNetServerProgramDispatchCall, the arg is passed as a void * and is used to
point to a struct that depends on the dispatcher, so it is the memory of the
struct's members that leaks, and this memory should be freed with xdr_free.
In virNetServerClientNew, client->rx is assigned by invoking virNetMessageNew,
but isn't freed if client->privateData's initialization fails, which leads to a
memory leak. Following Liang Peng's suggestion, put virNetMessageFree(client->rx)
into virNetServerClientDispose() to release the memory.
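For reference, the XDR contract at play is roughly the following; a
minimal sketch with made-up names (demo_args, demo_args_filter), not
libvirt's actual dispatcher types, built against libtirpc:

    #include <rpc/rpc.h>
    #include <string.h>

    typedef struct { char *path; } demo_args;

    static bool_t demo_args_filter(XDR *xdrs, demo_args *a)
    {
        return xdr_string(xdrs, &a->path, ~0U); /* allocates on decode */
    }

    void dispatch_once(XDR *xdrs)
    {
        demo_args arg;
        memset(&arg, 0, sizeof(arg));
        if (!demo_args_filter(xdrs, &arg))
            return;
        /* ... the dispatcher's handler runs with &arg here ... */
        /* whichever exit path is taken afterwards, the decoded members
         * must be released or the xdr_string() allocation leaks: */
        xdr_free((xdrproc_t)demo_args_filter, (char *)&arg);
    }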
Signed-off-by: jiangjiacheng <jiangjiacheng@huawei.com>
---
src/rpc/virnetserverclient.c | 2 ++
src/rpc/virnetserverprogram.c | 12 +++++++++---
2 files changed, 11 insertions(+), 3 deletions(-)
diff --git a/src/rpc/virnetserverclient.c b/src/rpc/virnetserverclient.c
index a7d2dfa795..30f6af7be5 100644
--- a/src/rpc/virnetserverclient.c
+++ b/src/rpc/virnetserverclient.c
@@ -931,6 +931,8 @@ void virNetServerClientDispose(void *obj)
PROBE(RPC_SERVER_CLIENT_DISPOSE,
"client=%p", client);
+ if (client->rx)
+ virNetMessageFree(client->rx);
if (client->privateData)
client->privateDataFreeFunc(client->privateData);
diff --git a/src/rpc/virnetserverprogram.c b/src/rpc/virnetserverprogram.c
index 3ddf9f0428..a813e821a3 100644
--- a/src/rpc/virnetserverprogram.c
+++ b/src/rpc/virnetserverprogram.c
@@ -409,11 +409,15 @@ virNetServerProgramDispatchCall(virNetServerProgram *prog,
if (virNetMessageDecodePayload(msg, dispatcher->arg_filter, arg) < 0)
goto error;
- if (!(identity = virNetServerClientGetIdentity(client)))
+ if (!(identity = virNetServerClientGetIdentity(client))) {
+ xdr_free(dispatcher->arg_filter, arg);
goto error;
+ }
- if (virIdentitySetCurrent(identity) < 0)
+ if (virIdentitySetCurrent(identity) < 0) {
+ xdr_free(dispatcher->arg_filter, arg);
goto error;
+ }
/*
* When the RPC handler is called:
@@ -427,8 +431,10 @@ virNetServerProgramDispatchCall(virNetServerProgram *prog,
*/
rv = (dispatcher->func)(server, client, msg, &rerr, arg, ret);
- if (virIdentitySetCurrent(NULL) < 0)
+ if (virIdentitySetCurrent(NULL) < 0) {
+ xdr_free(dispatcher->arg_filter, arg);
goto error;
+ }
/*
* If rv == 1, this indicates the dispatch func has
--
2.27.0
[libvirt PATCH v2 00/16] Use nbdkit for http/ftp/ssh network drives in libvirt
by Jonathon Jongsma
After a bit of a lengthy delay, this is the second version of this patch
series. See https://bugzilla.redhat.com/show_bug.cgi?id=2016527 for more
information about the goal, but the summary is that RHEL does not want to ship
the qemu storage plugins for curl and ssh. Handling them outside of the qemu
process provides several advantages, such as a reduced attack surface and
improved stability.
A quick summary of the code:
- at startup I check whether nbdkit exists on the host and, if so,
query which plugins/filters are installed. These capabilities
are cached and stored in the qemu driver
- When the driver prepares the domain, we go through each disk source
and determine whether the nbdkit capabilities allow us to support
this disk via nbdkit, and if so, we allocate a qemuNbdkitProcess
object and stash it in the private data of the virStorageSource.
- The presence or absence of this qemuNbdkitProcess data then indicates
whether this disk will be served to qemu indirectly via nbdkit or
directly
- When we launch the qemuProcess, as part of the "external device
start" step, I launch an nbdkit process for each disk that is supported
by nbdkit.
- for devices which are served by an intermediate nbdkit process, I
change the qemu commandline in the following ways (a rough illustration
follows below):
- I no longer pass auth/cookie secrets to qemu (those are handled by
nbdkit)
- I replace the actual network URL of the remote disk source with the
path to the nbdkit unix socket
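To make the change concrete, a rough illustration (socket path, URL and
JSON layout are invented for this example and are not the exact command
lines the series generates):

    # before: qemu opens the remote image itself and needs the secrets
    qemu-system-x86_64 ... -blockdev '{"driver":"https","url":"https://example.com/disk.img",...}'

    # after: libvirt starts one nbdkit per disk and points qemu at its socket
    nbdkit --unix /run/libvirt/qemu/nbdkit-disk0.sock curl url=https://example.com/disk.img
    qemu-system-x86_64 ... -blockdev '{"driver":"nbd","server":{"type":"unix","path":"/run/libvirt/qemu/nbdkit-disk0.sock"}}'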
Open questions
- selinux: I need some help from people more familiar with selinux to figure
out what is needed here. When selinux is enforcing, I get a failure to
launch nbdkit to serve the disks. I suspect we need a new context and policy
for /usr/sbin/nbdkit that allows it to transition to the appropriate selinux
context. The current context (on fedora) is "system_u:object_r:bin_t:s0".
When I (temporarily) change the context to something like qemu_exec_t,
I am able to start nbdkit and the domain launches.
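For anyone wanting to reproduce that experiment, something along these
lines should do (a temporary test only, not a proposed policy):

    ls -Z /usr/sbin/nbdkit                      # system_u:object_r:bin_t:s0
    sudo chcon -t qemu_exec_t /usr/sbin/nbdkit  # temporary relabel for testing
    # start the domain; nbdkit now launches
    sudo restorecon /usr/sbin/nbdkit            # revert to the packaged context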
Known shortcomings
- creating disks (in ssh) still isn't supported. I wanted to send out the
patch series anyway since it's been delayed too long already.
Changes since v1:
- split into multiple patches
- added a build option for nbdkit_moddir
- don't instantiate any secret / cookie props for disks that are being served
by nbdkit since we don't send secrets to qemu anymore
- ensure that nbdkit processes are started/stopped for the entire backing
chain
- switch to virFileCache-based capabilities for nbdkit so that we don't need
to requery every time
- switch to using pipes for communicating sensitive data to nbdkit
- use pidfile support built into virCommand rather than nbdkit's --pidfile
argument
- added significantly more tests
Jonathon Jongsma (16):
schema: allow 'ssh' as a protocol for network disks
qemu: Add qemuNbdkitCaps to qemu driver
qemu: expand nbdkit capabilities
util: Allow virFileCache data to be any GObject
qemu: implement basic virFileCache for nbdkit caps
qemu: implement persistent file cache for nbdkit caps
qemu: use file cache for nbdkit caps
qemu: Add qemuNbdkitProcess
qemu: add functions to start and stop nbdkit
tests: add ability to test various nbdkit capabilities
qemu: split qemuDomainSecretStorageSourcePrepare
qemu: use nbdkit to serve network disks if available
qemu: include nbdkit state in private xml
tests: add tests for nbdkit invocation
qemu: pass sensitive data to nbdkit via pipe
qemu: add test for authenticating a https network disk
build-aux/syntax-check.mk | 4 +-
docs/formatdomain.rst | 2 +-
meson.build | 6 +
meson_options.txt | 1 +
po/POTFILES | 1 +
src/conf/schemas/domaincommon.rng | 1 +
src/qemu/meson.build | 1 +
src/qemu/qemu_block.c | 168 ++-
src/qemu/qemu_command.c | 4 +-
src/qemu/qemu_conf.c | 22 +
src/qemu/qemu_conf.h | 6 +
src/qemu/qemu_domain.c | 176 ++-
src/qemu/qemu_domain.h | 4 +
src/qemu/qemu_driver.c | 3 +
src/qemu/qemu_extdevice.c | 84 ++
src/qemu/qemu_nbdkit.c | 1051 +++++++++++++++++
src/qemu/qemu_nbdkit.h | 90 ++
src/qemu/qemu_nbdkitpriv.h | 46 +
src/util/virfilecache.c | 15 +-
src/util/virfilecache.h | 2 +-
src/util/virutil.h | 2 +-
tests/meson.build | 1 +
.../disk-cdrom-network.args.disk0 | 7 +
.../disk-cdrom-network.args.disk1 | 9 +
.../disk-cdrom-network.args.disk1.pipe.45 | 1 +
.../disk-cdrom-network.args.disk2 | 9 +
.../disk-cdrom-network.args.disk2.pipe.47 | 1 +
.../disk-network-http.args.disk0 | 7 +
.../disk-network-http.args.disk1 | 6 +
.../disk-network-http.args.disk2 | 7 +
.../disk-network-http.args.disk2.pipe.45 | 1 +
.../disk-network-http.args.disk3 | 8 +
.../disk-network-http.args.disk3.pipe.47 | 1 +
...work-source-curl-nbdkit-backing.args.disk0 | 8 +
...rce-curl-nbdkit-backing.args.disk0.pipe.45 | 1 +
.../disk-network-source-curl.args.1.pipe.1 | 1 +
.../disk-network-source-curl.args.disk0 | 8 +
...isk-network-source-curl.args.disk0.pipe.45 | 1 +
.../disk-network-source-curl.args.disk1 | 10 +
...isk-network-source-curl.args.disk1.pipe.47 | 1 +
...isk-network-source-curl.args.disk1.pipe.49 | 1 +
.../disk-network-source-curl.args.disk2 | 8 +
...isk-network-source-curl.args.disk2.pipe.49 | 1 +
...isk-network-source-curl.args.disk2.pipe.51 | 1 +
.../disk-network-source-curl.args.disk3 | 7 +
.../disk-network-source-curl.args.disk4 | 7 +
.../disk-network-ssh.args.disk0 | 7 +
tests/qemunbdkittest.c | 271 +++++
...sk-cdrom-network-nbdkit.x86_64-latest.args | 42 +
.../disk-cdrom-network-nbdkit.xml | 1 +
...isk-network-http-nbdkit.x86_64-latest.args | 45 +
.../disk-network-http-nbdkit.xml | 1 +
...rce-curl-nbdkit-backing.x86_64-latest.args | 38 +
...isk-network-source-curl-nbdkit-backing.xml | 45 +
...work-source-curl-nbdkit.x86_64-latest.args | 50 +
.../disk-network-source-curl-nbdkit.xml | 1 +
...isk-network-source-curl.x86_64-latest.args | 54 +
.../disk-network-source-curl.xml | 74 ++
...disk-network-ssh-nbdkit.x86_64-latest.args | 36 +
.../disk-network-ssh-nbdkit.xml | 1 +
.../disk-network-ssh.x86_64-latest.args | 36 +
tests/qemuxml2argvdata/disk-network-ssh.xml | 31 +
tests/qemuxml2argvtest.c | 18 +
tests/testutilsqemu.c | 27 +
tests/testutilsqemu.h | 5 +
65 files changed, 2474 insertions(+), 111 deletions(-)
create mode 100644 src/qemu/qemu_nbdkit.c
create mode 100644 src/qemu/qemu_nbdkit.h
create mode 100644 src/qemu/qemu_nbdkitpriv.h
create mode 100644 tests/qemunbdkitdata/disk-cdrom-network.args.disk0
create mode 100644 tests/qemunbdkitdata/disk-cdrom-network.args.disk1
create mode 100644 tests/qemunbdkitdata/disk-cdrom-network.args.disk1.pipe.45
create mode 100644 tests/qemunbdkitdata/disk-cdrom-network.args.disk2
create mode 100644 tests/qemunbdkitdata/disk-cdrom-network.args.disk2.pipe.47
create mode 100644 tests/qemunbdkitdata/disk-network-http.args.disk0
create mode 100644 tests/qemunbdkitdata/disk-network-http.args.disk1
create mode 100644 tests/qemunbdkitdata/disk-network-http.args.disk2
create mode 100644 tests/qemunbdkitdata/disk-network-http.args.disk2.pipe.45
create mode 100644 tests/qemunbdkitdata/disk-network-http.args.disk3
create mode 100644 tests/qemunbdkitdata/disk-network-http.args.disk3.pipe.47
create mode 100644 tests/qemunbdkitdata/disk-network-source-curl-nbdkit-backing.args.disk0
create mode 100644 tests/qemunbdkitdata/disk-network-source-curl-nbdkit-backing.args.disk0.pipe.45
create mode 100644 tests/qemunbdkitdata/disk-network-source-curl.args.1.pipe.1
create mode 100644 tests/qemunbdkitdata/disk-network-source-curl.args.disk0
create mode 100644 tests/qemunbdkitdata/disk-network-source-curl.args.disk0.pipe.45
create mode 100644 tests/qemunbdkitdata/disk-network-source-curl.args.disk1
create mode 100644 tests/qemunbdkitdata/disk-network-source-curl.args.disk1.pipe.47
create mode 100644 tests/qemunbdkitdata/disk-network-source-curl.args.disk1.pipe.49
create mode 100644 tests/qemunbdkitdata/disk-network-source-curl.args.disk2
create mode 100644 tests/qemunbdkitdata/disk-network-source-curl.args.disk2.pipe.49
create mode 100644 tests/qemunbdkitdata/disk-network-source-curl.args.disk2.pipe.51
create mode 100644 tests/qemunbdkitdata/disk-network-source-curl.args.disk3
create mode 100644 tests/qemunbdkitdata/disk-network-source-curl.args.disk4
create mode 100644 tests/qemunbdkitdata/disk-network-ssh.args.disk0
create mode 100644 tests/qemunbdkittest.c
create mode 100644 tests/qemuxml2argvdata/disk-cdrom-network-nbdkit.x86_64-latest.args
create mode 120000 tests/qemuxml2argvdata/disk-cdrom-network-nbdkit.xml
create mode 100644 tests/qemuxml2argvdata/disk-network-http-nbdkit.x86_64-latest.args
create mode 120000 tests/qemuxml2argvdata/disk-network-http-nbdkit.xml
create mode 100644 tests/qemuxml2argvdata/disk-network-source-curl-nbdkit-backing.x86_64-latest.args
create mode 100644 tests/qemuxml2argvdata/disk-network-source-curl-nbdkit-backing.xml
create mode 100644 tests/qemuxml2argvdata/disk-network-source-curl-nbdkit.x86_64-latest.args
create mode 120000 tests/qemuxml2argvdata/disk-network-source-curl-nbdkit.xml
create mode 100644 tests/qemuxml2argvdata/disk-network-source-curl.x86_64-latest.args
create mode 100644 tests/qemuxml2argvdata/disk-network-source-curl.xml
create mode 100644 tests/qemuxml2argvdata/disk-network-ssh-nbdkit.x86_64-latest.args
create mode 120000 tests/qemuxml2argvdata/disk-network-ssh-nbdkit.xml
create mode 100644 tests/qemuxml2argvdata/disk-network-ssh.x86_64-latest.args
create mode 100644 tests/qemuxml2argvdata/disk-network-ssh.xml
--
2.37.1
[PATCH 0/2] qemu: tpm: Improve TPM state files management
by Stefan Berger
This series of patches adds the --keep-tpm and --tpm flags to virsh for
keeping or removing the TPM state directory structure when a VM is
undefined. It also fixes the removal of state when a VM is migrated, so
that the state files are removed on the source upon successful
migration and on the destination after migration failure.
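With the series applied, usage looks roughly like this (flag names per
this cover letter):

    virsh undefine myguest --tpm        # also remove the TPM state files
    virsh undefine myguest --keep-tpm   # keep the TPM state files around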
Regards,
Stefan
Stefan Berger (2):
qemu: Add UNDEFINE_TPM and UNDEFINE_KEEP_TPM flags
qemu: tpm: Remove TPM state after successful migration
include/libvirt/libvirt-domain.h | 6 ++++++
src/qemu/qemu_domain.c | 12 +++++++-----
src/qemu/qemu_domain.h | 3 ++-
src/qemu/qemu_driver.c | 31 ++++++++++++++++++++-----------
src/qemu/qemu_extdevice.c | 5 +++--
src/qemu/qemu_extdevice.h | 3 ++-
src/qemu/qemu_migration.c | 22 +++++++++++++++-------
src/qemu/qemu_process.c | 4 ++--
src/qemu/qemu_snapshot.c | 4 ++--
src/qemu/qemu_tpm.c | 14 ++++++++++----
src/qemu/qemu_tpm.h | 15 ++++++++++++++-
tools/virsh-domain.c | 15 +++++++++++++++
12 files changed, 98 insertions(+), 36 deletions(-)
--
2.37.1
[PATCH] virpcivpd: reduce errors in log due to invalid VPD
by christian.ehrhardt@canonical.com
From: Christian Ehrhardt <christian.ehrhardt@canonical.com>
Sadly some devices provide invalid VPD data even with fully updated
firmware. Former hardening like commit 600f580d ("PCI VPD: Skip fields
with invalid values") has already helped for those to some extent.
But if one happens to have such a device installed in the system,
despite all other things working properly the log is potentially
flooded with messages like:
internal error: The keyword is not comprised only of uppercase ASCII
letters or digits
internal error: A field data length violates the resource length boundary.
The user can't do anything to change that; the messages will reappear on
any libvirt restart and potentially distract from other more important
issues.
Since the VPD decoding is implemented in a resilient way (if parsing fails,
everything carries on fine; the respective device just ends up with no VPD
data populated), we can lower those from virReportError(VIR_ERR_INTERNAL_ERROR, ...)
to just VIR_INFO. If needed for debugging, people can raise the log level
accordingly, but otherwise we no longer fill the logs with errors
without a strong reason.
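For reference, a filter along these lines in libvirtd.conf should bring
the messages back when debugging; the module name below is an assumption,
check the VIR_LOG_INIT() in src/util/virpcivpd.c for the exact name:

    log_level = 3
    log_filters = "2:util.pcivpd"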
Fixes: https://launchpad.net/bugs/1990949
Signed-off-by: Christian Ehrhardt <christian.ehrhardt@canonical.com>
---
src/util/virpcivpd.c | 47 +++++++++++++++-----------------------------
1 file changed, 16 insertions(+), 31 deletions(-)
diff --git a/src/util/virpcivpd.c b/src/util/virpcivpd.c
index 4ba4fea237..39557c7347 100644
--- a/src/util/virpcivpd.c
+++ b/src/util/virpcivpd.c
@@ -62,12 +62,11 @@ virPCIVPDResourceGetKeywordPrefix(const char *keyword)
/* Keywords must have a length of 2 bytes. */
if (strlen(keyword) != 2) {
- virReportError(VIR_ERR_INTERNAL_ERROR, _("The keyword length is not 2 bytes: %s"), keyword);
+ VIR_INFO("The keyword length is not 2 bytes: %s", keyword);
return NULL;
} else if (!(virPCIVPDResourceIsUpperOrNumber(keyword[0]) &&
virPCIVPDResourceIsUpperOrNumber(keyword[1]))) {
- virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
- _("The keyword is not comprised only of uppercase ASCII letters or digits"));
+ VIR_INFO("The keyword is not comprised only of uppercase ASCII letters or digits");
return NULL;
}
/* Special-case the system-specific keywords since they share the "Y" prefix with "YA". */
@@ -328,19 +327,16 @@ virPCIVPDResourceUpdateKeyword(virPCIVPDResource *res, const bool readOnly,
const char *const keyword, const char *const value)
{
if (!res) {
- virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
- _("Cannot update the resource: a NULL resource pointer has been provided."));
+ VIR_INFO("Cannot update the resource: a NULL resource pointer has been provided.");
return false;
} else if (!keyword) {
- virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
- _("Cannot update the resource: a NULL keyword pointer has been provided."));
+ VIR_INFO("Cannot update the resource: a NULL keyword pointer has been provided.");
return false;
}
if (readOnly) {
if (!res->ro) {
- virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
- _("Cannot update the read-only keyword: RO section not initialized."));
+ VIR_INFO("Cannot update the read-only keyword: RO section not initialized.");
return false;
}
@@ -375,9 +371,7 @@ virPCIVPDResourceUpdateKeyword(virPCIVPDResource *res, const bool readOnly,
} else {
if (!res->rw) {
- virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
- _
- ("Cannot update the read-write keyword: read-write section not initialized."));
+ VIR_INFO("Cannot update the read-write keyword: read-write section not initialized.");
return false;
}
@@ -476,8 +470,7 @@ virPCIVPDParseVPDLargeResourceFields(int vpdFileFd, uint16_t resPos, uint16_t re
if (virPCIVPDReadVPDBytes(vpdFileFd, buf, 3, fieldPos, csum) != 3) {
/* Invalid field encountered which means the resource itself is invalid too. Report
* That VPD has invalid format and bail. */
- virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
- _("Could not read a resource field header - VPD has invalid format"));
+ VIR_INFO("Could not read a resource field header - VPD has invalid format");
return false;
}
fieldDataLen = buf[2];
@@ -488,12 +481,10 @@ virPCIVPDParseVPDLargeResourceFields(int vpdFileFd, uint16_t resPos, uint16_t re
/* Handle special cases first */
if (!readOnly && fieldFormat == VIR_PCI_VPD_RESOURCE_FIELD_VALUE_FORMAT_RESVD) {
- virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
- _("Unexpected RV keyword in the read-write section."));
+ VIR_INFO("Unexpected RV keyword in the read-write section.");
return false;
} else if (readOnly && fieldFormat == VIR_PCI_VPD_RESOURCE_FIELD_VALUE_FORMAT_RDWR) {
- virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
- _("Unexpected RW keyword in the read-only section."));
+ VIR_INFO("Unexpected RW keyword in the read-only section.");
return false;
}
@@ -517,21 +508,18 @@ virPCIVPDParseVPDLargeResourceFields(int vpdFileFd, uint16_t resPos, uint16_t re
bytesToRead = fieldDataLen;
break;
default:
- virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
- _("Unexpected field value format encountered."));
+ VIR_INFO("Unexpected field value format encountered.");
return false;
}
if (resPos + resDataLen < fieldPos + fieldDataLen) {
/* In this case the field cannot simply be skipped since the position of the
* next field is determined based on the length of a previous field. */
- virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
- _("A field data length violates the resource length boundary."));
+ VIR_INFO("A field data length violates the resource length boundary.");
return false;
}
if (virPCIVPDReadVPDBytes(vpdFileFd, buf, bytesToRead, fieldPos, csum) != bytesToRead) {
- virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
- _("Could not parse a resource field data - VPD has invalid format"));
+ VIR_INFO("Could not parse a resource field data - VPD has invalid format");
return false;
}
/* Advance the position to the first byte of the next field. */
@@ -552,7 +540,7 @@ virPCIVPDParseVPDLargeResourceFields(int vpdFileFd, uint16_t resPos, uint16_t re
} else if (fieldFormat == VIR_PCI_VPD_RESOURCE_FIELD_VALUE_FORMAT_RESVD) {
if (*csum) {
/* All bytes up to and including the checksum byte should add up to 0. */
- virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("Checksum validation has failed"));
+ VIR_INFO("Checksum validation has failed");
return false;
}
hasChecksum = true;
@@ -578,8 +566,7 @@ virPCIVPDParseVPDLargeResourceFields(int vpdFileFd, uint16_t resPos, uint16_t re
}
/* The field format, keyword and value are determined. Attempt to update the resource. */
if (!virPCIVPDResourceUpdateKeyword(res, readOnly, fieldKeyword, fieldValue)) {
- virReportError(VIR_ERR_INTERNAL_ERROR,
- _("Could not update the VPD resource keyword: %s"), fieldKeyword);
+ VIR_INFO("Could not update the VPD resource keyword: %s", fieldKeyword);
return false;
}
}
@@ -627,14 +614,12 @@ virPCIVPDParseVPDLargeResourceString(int vpdFileFd, uint16_t resPos,
g_autofree char *buf = g_malloc0(resDataLen + 1);
if (virPCIVPDReadVPDBytes(vpdFileFd, (uint8_t *)buf, resDataLen, resPos, csum) != resDataLen) {
- virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
- _("Could not read a part of a resource - VPD has invalid format"));
+ VIR_INFO("Could not read a part of a resource - VPD has invalid format");
return false;
}
resValue = g_strdup(g_strstrip(buf));
if (!virPCIVPDResourceIsValidTextValue(resValue)) {
- virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
- _("The string resource has invalid characters in its value"));
+ VIR_INFO("The string resource has invalid characters in its value");
return false;
}
res->name = g_steal_pointer(&resValue);
--
2.37.3
[PATCH] virt-aa-helper: allow common riscv64 loader paths
by christian.ehrhardt@canonical.com
From: Christian Ehrhardt <christian.ehrhardt@canonical.com>
Riscv64 guests usually use u-boot as the external -kernel and a loader
from opensbi, the open implementation of the RISC-V SBI. As packaged in
Debian and Ubuntu those binaries live under /usr/lib..., in paths which
the user is usually forbidden to add.
People used to start riscv64 guests only manually via the qemu command line,
but trying to encapsulate that via libvirt now causes failures when
starting the guest because the apparmor isolation does not allow those paths:
virt-aa-helper: error: skipped restricted file
virt-aa-helper: error: invalid VM definition
Explicitly allow the sub-paths used by u-boot-qemu and opensbi
under /usr/lib/ as readonly rules.
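For illustration, the kind of guest definition this unblocks looks
roughly like the following; the binary paths are the usual Debian/Ubuntu
u-boot-qemu and opensbi package layouts and may differ per release:

    <os>
      <type arch='riscv64' machine='virt'>hvm</type>
      <kernel>/usr/lib/u-boot/qemu-riscv64_smode/uboot.elf</kernel>
      <loader>/usr/lib/riscv64-linux-gnu/opensbi/generic/fw_jump.elf</loader>
    </os>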
Signed-off-by: Christian Ehrhardt <christian.ehrhardt@canonical.com>
---
src/security/virt-aa-helper.c | 12 +++++++-----
1 file changed, 7 insertions(+), 5 deletions(-)
diff --git a/src/security/virt-aa-helper.c b/src/security/virt-aa-helper.c
index f338488da3..ceadaef99b 100644
--- a/src/security/virt-aa-helper.c
+++ b/src/security/virt-aa-helper.c
@@ -476,11 +476,13 @@ valid_path(const char *path, const bool readonly)
"/initrd",
"/initrd.img",
"/usr/share/edk2/",
- "/usr/share/OVMF/", /* for OVMF images */
- "/usr/share/ovmf/", /* for OVMF images */
- "/usr/share/AAVMF/", /* for AAVMF images */
- "/usr/share/qemu-efi/", /* for AAVMF images */
- "/usr/share/qemu-efi-aarch64/" /* for AAVMF images */
+ "/usr/share/OVMF/", /* for OVMF images */
+ "/usr/share/ovmf/", /* for OVMF images */
+ "/usr/share/AAVMF/", /* for AAVMF images */
+ "/usr/share/qemu-efi/", /* for AAVMF images */
+ "/usr/share/qemu-efi-aarch64/", /* for AAVMF images */
+ "/usr/lib/u-boot/", /* u-boot loaders for qemu */
+ "/usr/lib/riscv64-linux-gnu/opensbi" /* RISC-V SBI implementation */
};
/* override the above with these */
const char * const override[] = {
--
2.37.3
[PATCH v3] rpc: fix memory leak in virNetServerClientNew and virNetServerProgramDispatchCall
by Jiang Jiacheng
From: jiangjiacheng <jiangjiacheng@huawei.com>
In virNetServerProgramDispatchCall, the arg is passed as a void * and is used to
point to a struct that depends on the dispatcher, so it is the memory of the
struct's members that leaks, and this memory should be freed with xdr_free.
In virNetServerClientNew, client->rx is assigned by invoking virNetMessageNew,
but isn't freed if client->privateData's initialization fails, which leads to a
memory leak. Following Liang Peng's suggestion, put virNetMessageFree(client->rx)
into virNetServerClientDispose() to release the memory.
diff to v2:
- in virNetServerProgramDispatchCall, only free the memory on the successful
path and in the error label.
Signed-off-by: jiangjiacheng <jiangjiacheng@huawei.com>
---
src/rpc/virnetserverclient.c | 2 ++
src/rpc/virnetserverprogram.c | 22 ++++++++++------------
2 files changed, 12 insertions(+), 12 deletions(-)
diff --git a/src/rpc/virnetserverclient.c b/src/rpc/virnetserverclient.c
index a7d2dfa795..30f6af7be5 100644
--- a/src/rpc/virnetserverclient.c
+++ b/src/rpc/virnetserverclient.c
@@ -931,6 +931,8 @@ void virNetServerClientDispose(void *obj)
PROBE(RPC_SERVER_CLIENT_DISPOSE,
"client=%p", client);
+ if (client->rx)
+ virNetMessageFree(client->rx);
if (client->privateData)
client->privateDataFreeFunc(client->privateData);
diff --git a/src/rpc/virnetserverprogram.c b/src/rpc/virnetserverprogram.c
index 3ddf9f0428..94660f867a 100644
--- a/src/rpc/virnetserverprogram.c
+++ b/src/rpc/virnetserverprogram.c
@@ -368,7 +368,7 @@ virNetServerProgramDispatchCall(virNetServerProgram *prog,
g_autofree char *arg = NULL;
g_autofree char *ret = NULL;
int rv = -1;
- virNetServerProgramProc *dispatcher;
+ virNetServerProgramProc *dispatcher = NULL;
virNetMessageError rerr;
size_t i;
g_autoptr(virIdentity) identity = NULL;
@@ -446,8 +446,6 @@ virNetServerProgramDispatchCall(virNetServerProgram *prog,
msg->nfds = 0;
}
- xdr_free(dispatcher->arg_filter, arg);
-
if (rv < 0)
goto error;
@@ -460,28 +458,28 @@ virNetServerProgramDispatchCall(virNetServerProgram *prog,
/*msg->header.serial = msg->header.serial;*/
msg->header.status = VIR_NET_OK;
- if (virNetMessageEncodeHeader(msg) < 0) {
- xdr_free(dispatcher->ret_filter, ret);
+ if (virNetMessageEncodeHeader(msg) < 0)
goto error;
- }
if (msg->nfds &&
- virNetMessageEncodeNumFDs(msg) < 0) {
- xdr_free(dispatcher->ret_filter, ret);
+ virNetMessageEncodeNumFDs(msg) < 0)
goto error;
- }
- if (virNetMessageEncodePayload(msg, dispatcher->ret_filter, ret) < 0) {
- xdr_free(dispatcher->ret_filter, ret);
+ if (virNetMessageEncodePayload(msg, dispatcher->ret_filter, ret) < 0)
goto error;
- }
+ xdr_free(dispatcher->arg_filter, arg);
xdr_free(dispatcher->ret_filter, ret);
/* Put reply on end of tx queue to send out */
return virNetServerClientSendMessage(client, msg);
error:
+ if (dispatcher) {
+ xdr_free(dispatcher->arg_filter, arg);
+ xdr_free(dispatcher->ret_filter, ret);
+ }
+
/* Bad stuff (de-)serializing message, but we have an
* RPC error message we can send back to the client */
rv = virNetServerProgramSendReplyError(prog, client, msg, &rerr, &msg->header);
--
2.33.0
[libvirt PATCH 0/6] qemu: s390x: retire some CCW capabilites
by Ján Tomko
Now that we bumped the minimum QEMU version to 4.2.0, we can stop
probing some capabilities.
Ján Tomko (6):
tests: qemuxml2argvdata: switch zpci address generation to real caps
qemu: convert some s390x tests to use real capability data
qemu: Assume QEMU_CAPS_CCW_CSSID_UNRESTRICTED
qemu: Assume QEMU_CAPS_CCW
qemu: retire QEMU_CAPS_CCW_CSSID_UNRESTRICTED
qemu: retire QEMU_CAPS_CCW
src/qemu/qemu_capabilities.c | 21 +---
src/qemu/qemu_capabilities.h | 4 +-
src/qemu/qemu_domain_address.c | 7 +-
src/qemu/qemu_hotplug.c | 6 +-
src/qemu/qemu_validate.c | 6 -
.../caps_4.2.0.s390x.replies | 71 ++++-------
.../qemucapabilitiesdata/caps_4.2.0.s390x.xml | 2 -
.../caps_5.2.0.s390x.replies | 76 ++++--------
.../qemucapabilitiesdata/caps_5.2.0.s390x.xml | 2 -
.../caps_6.0.0.s390x.replies | 76 ++++--------
.../qemucapabilitiesdata/caps_6.0.0.s390x.xml | 2 -
tests/qemuhotplugtest.c | 1 -
... => balloon-ccw-deflate.s390x-latest.args} | 7 +-
...s => console-virtio-ccw.s390x-latest.args} | 7 +-
...ev-subsys-mdev-vfio-ccw.s390x-latest.args} | 7 +-
...-zpci-autogenerate-fids.s390x-latest.args} | 10 +-
...-zpci-autogenerate-uids.s390x-latest.args} | 10 +-
...-vfio-zpci-autogenerate.s390x-latest.args} | 10 +-
.../hostdev-vfio-zpci-boundaries.args | 5 +-
...fio-zpci-ccw-memballoon.s390x-latest.args} | 7 +-
...ci-invalid-uid-valid-fid.s390x-latest.err} | 0
...o-zpci-multidomain-many.s390x-latest.args} | 10 +-
tests/qemuxml2argvdata/hostdev-vfio-zpci.args | 3 +-
...othreads-disk-virtio-ccw.s390x-4.2.0.args} | 8 +-
....args => net-virtio-ccw.s390x-latest.args} | 7 +-
...> non-x86_64-timer-error.s390x-latest.err} | 0
....args => virtio-rng-ccw.s390x-latest.args} | 9 +-
tests/qemuxml2argvtest.c | 112 ++++++------------
.../hostdev-vfio-zpci-autogenerate-fids.xml | 4 +-
.../hostdev-vfio-zpci-autogenerate-uids.xml | 4 +-
.../hostdev-vfio-zpci-autogenerate.xml | 4 +-
.../hostdev-vfio-zpci-boundaries.xml | 4 +-
.../hostdev-vfio-zpci-multidomain-many.xml | 4 +-
tests/qemuxml2xmltest.c | 41 +++----
34 files changed, 199 insertions(+), 348 deletions(-)
rename tests/qemuxml2argvdata/{balloon-ccw-deflate.args => balloon-ccw-deflate.s390x-latest.args} (68%)
rename tests/qemuxml2argvdata/{console-virtio-ccw.args => console-virtio-ccw.s390x-latest.args} (77%)
rename tests/qemuxml2argvdata/{hostdev-subsys-mdev-vfio-ccw.args => hostdev-subsys-mdev-vfio-ccw.s390x-latest.args} (70%)
rename tests/qemuxml2argvdata/{hostdev-vfio-zpci-autogenerate-fids.args => hostdev-vfio-zpci-autogenerate-fids.s390x-latest.args} (69%)
rename tests/qemuxml2argvdata/{hostdev-vfio-zpci-autogenerate-uids.args => hostdev-vfio-zpci-autogenerate-uids.s390x-latest.args} (69%)
rename tests/qemuxml2argvdata/{hostdev-vfio-zpci-autogenerate.args => hostdev-vfio-zpci-autogenerate.s390x-latest.args} (66%)
rename tests/qemuxml2argvdata/{hostdev-vfio-zpci-ccw-memballoon.args => hostdev-vfio-zpci-ccw-memballoon.s390x-latest.args} (54%)
rename tests/qemuxml2argvdata/{hostdev-vfio-zpci-invalid-uid-valid-fid.err => hostdev-vfio-zpci-invalid-uid-valid-fid.s390x-latest.err} (100%)
rename tests/qemuxml2argvdata/{hostdev-vfio-zpci-multidomain-many.args => hostdev-vfio-zpci-multidomain-many.s390x-latest.args} (79%)
rename tests/qemuxml2argvdata/{iothreads-disk-virtio-ccw.args => iothreads-disk-virtio-ccw.s390x-4.2.0.args} (79%)
rename tests/qemuxml2argvdata/{net-virtio-ccw.args => net-virtio-ccw.s390x-latest.args} (73%)
rename tests/qemuxml2argvdata/{non-x86_64-timer-error.err => non-x86_64-timer-error.s390x-latest.err} (100%)
rename tests/qemuxml2argvdata/{virtio-rng-ccw.args => virtio-rng-ccw.s390x-latest.args} (74%)
--
2.37.3
[PATCH 0/2] util: xml: Clean up other instances of problems pointed out in review
by Peter Krempa
In the review of my series adding new XML property fetching helpers,
I was asked to change a few things which were also present in
other instances in existing code. Fix those too.
Peter Krempa (2):
util: xml: Fix declaration of 'const char *' parameters in
virXMLProp* helpers
util: xml: Use common formatting of 'Bitwise-OR' in function param
description
src/util/virxml.c | 34 +++++++++++++++++-----------------
src/util/virxml.h | 12 ++++++------
2 files changed, 23 insertions(+), 23 deletions(-)
--
2.37.3
Release of libvirt-8.8.0
by Jiri Denemark
The 8.8.0 release of both libvirt and libvirt-python is tagged, and
signed tarballs and source RPMs are available at
https://libvirt.org/sources/
https://libvirt.org/sources/python/
Thanks everybody who helped with this release by sending patches,
reviewing, testing, or providing feedback. Your work is greatly
appreciated.
* Removed features
* storage: Remove 'sheepdog' storage driver backend
The 'sheepdog' project is no longer maintained and upstream bug reports
go unaddressed. Libvirt thus removed support for the sheepdog storage
driver backend, following qemu's removal of sheepdog support in qemu-6.1.
* Improvements
* qemu: Implement VIR_DOMAIN_STATS_CPU_TOTAL for qemu:///session
Users can now query VIR_DOMAIN_STATS_CPU_TOTAL (also known as cpu.time)
statistics for session domains.
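For example (domain name illustrative):

    virsh -c qemu:///session domstats --cpu-total myguest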
* Bug fixes
* qemu: Fix non-shared storage migration setup
This release fixes a bug in the setup of migration with non-shared storage
(``virsh migrate --copy-storage-all``), which was broken by a refactor of
the code in libvirt-8.7.
* selinux: Don't ignore NVMe disks when setting image label
Libvirt did not set any SELinux label on NVMe disks and relied only on the
default SELinux policy. This turned out to cause problems when using
namespaces or an altered policy and is now fixed.
* qemu: Fix a deadlock when setting up namespace
When starting a domain, libvirt creates a mount namespace and manages a
private /dev with only a handful of nodes exposed. But when creating those
nodes, a deadlock inside glib could occur. The code was changed so that
libvirt does not tickle the glib bug.
* qemu: Don't build memory paths on daemon restart
When the daemon was restarted, it tried to create domain private paths for
each mounted hugetlbfs. When this failed, the corresponding domain was
killed. This operation is now performed during domain startup and memory
hotplug instead, and no longer leads to a sudden kill of the domain.
Enjoy.
Jirka