[PATCH] qemuAppendDomainMemoryMachineParams: Refactor formatting of 'dump-guest-core'
by Peter Krempa
Use virTristateSwitchFromBool to fill in the default if the user didn't
request a value explicitly.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
---
src/qemu/qemu_command.c | 12 +++++-------
1 file changed, 5 insertions(+), 7 deletions(-)
diff --git a/src/qemu/qemu_command.c b/src/qemu/qemu_command.c
index 150824f2e1..bb2a3ea82f 100644
--- a/src/qemu/qemu_command.c
+++ b/src/qemu/qemu_command.c
@@ -6625,15 +6625,13 @@ qemuAppendDomainMemoryMachineParams(virBuffer *buf,
const virDomainDef *def,
virQEMUCaps *qemuCaps)
{
+ virTristateSwitch dump = def->mem.dump_core;
size_t i;
- if (def->mem.dump_core) {
- virBufferAsprintf(buf, ",dump-guest-core=%s",
- virTristateSwitchTypeToString(def->mem.dump_core));
- } else {
- virBufferAsprintf(buf, ",dump-guest-core=%s",
- cfg->dumpGuestCore ? "on" : "off");
- }
+ if (dump == VIR_TRISTATE_SWITCH_ABSENT)
+ dump = virTristateSwitchFromBool(cfg->dumpGuestCore);
+
+ virBufferAsprintf(buf, ",dump-guest-core=%s", virTristateSwitchTypeToString(dump));
if (def->mem.nosharepages)
virBufferAddLit(buf, ",mem-merge=off");
--
2.37.3
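
For context, the helper used above behaves roughly like this self-contained
sketch; the real definitions live in libvirt's util/virenum.h and virenum.c:

#include <stdbool.h>

/* Minimal sketch of the tristate type and helper (not the libvirt
 * source): ABSENT means the user made no explicit choice, so a
 * default can be filled in from configuration. */
typedef enum {
    VIR_TRISTATE_SWITCH_ABSENT = 0,
    VIR_TRISTATE_SWITCH_ON,
    VIR_TRISTATE_SWITCH_OFF,
} virTristateSwitch;

static virTristateSwitch
virTristateSwitchFromBool(bool val)
{
    return val ? VIR_TRISTATE_SWITCH_ON : VIR_TRISTATE_SWITCH_OFF;
}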
[PATCH] docs: Update best practices wrt "Fixes:" and GitLab
by Michal Privoznik
We document that a commit fixing an issue tracked in GitLab
should put just "Fixes: #NNN" into its commit message. But when
viewing git log, having the full URL, which is directly clickable, is
more developer friendly, and GitLab is capable of handling both.
Therefore, document that users should put the full URL, just like
when fixing a bug tracked on other sites.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
---
docs/best-practices.rst | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/docs/best-practices.rst b/docs/best-practices.rst
index ba8478ab05..a4651c19d0 100644
--- a/docs/best-practices.rst
+++ b/docs/best-practices.rst
@@ -15,11 +15,11 @@ with minimal back-and-forth.
by any longer description of why your patch makes sense. If the
patch fixes a regression, and you know what commit introduced
the problem, mentioning that is useful. If the patch resolves a
- upstream bug reported in GitLab, put "Fixes: #NNN" in the commit
- message. For a downstream bug, mention the URL of the bug instead.
- In both cases also summarize the issue rather than making all
- readers follow the link. You can use 'git shortlog -30' to get
- an idea of typical summary lines.
+ upstream bug reported in GitLab, or downstream bug, put
+ "Resolves: $fullURL" of the bug. In both cases also summarize
+ the issue rather than making all readers follow the link. You
+ can use 'git shortlog -30' to get an idea of typical summary
+ lines.
- Split large changes into a series of smaller patches,
self-contained if possible, with an explanation of each patch
--
2.37.4
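
For illustration, a commit message following the updated guidance might end
like this (the issue URL is a made-up placeholder):

    qemu: Fix crash when hot-unplugging a disk

    Summarize the issue here so readers don't have to follow the link.

    Resolves: https://gitlab.com/libvirt/libvirt/-/issues/NNN
    Signed-off-by: Jane Doe <jane@example.com>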
Re: [libvirt PATCH v3] cgroup/LXC: Do not condition availability of v2 by controllers
by Pavel Hrdina
On Mon, Oct 24, 2022 at 12:51:03PM +0000, Eric van Blokland wrote:
> ------- Original Message -------
> On Monday, October 24th, 2022 at 1:54 PM, Pavel Hrdina <phrdina@redhat.com> wrote:
> >
> > On Sun, Oct 23, 2022 at 02:08:28PM +0200, Eric van Blokland wrote:
> >
> > > systemd in hybrid mode uses v1 hierarchies for controllers and v2 for
> > > process tracking.
> > >
> > > The LXC code uses virCgroupAddMachineProcess() to move processes into
> > > the appropriate cgroup by manipulating cgroupfs directly. (Note that
> > > libvirt also supports talking to systemd directly via the
> > > org.freedesktop.machine1 API.)
> > >
> > > If this path is taken, libvirt/lxc must convince systemd that processes
> > > really belong to the new cgroup, i.e. the tracking v2 hierarchy must
> > > undergo the migration too.
> > >
> > > The current check would evaluate the v2 backend as unavailable in hybrid
> > > mode (because there are no available controllers). Simplify the
> > > condition and consider a mounted cgroup2 as sufficient to touch the v2
> > > hierarchy.
> > >
> > > This consequently creates an issue with binding the V2 mount. In hybrid
> > > mode the V2 filesystem may be mounted upon the V1 filesystem. By reversing
> > > the order in which backends are mounted in virCgroupBindMount this problem
> > > is circumvented.
> > >
> > > Fixes: #182
> > > Signed-off-by: Eric van Blokland <mail@ericvanblokland.nl>
> > > ---
> > > src/util/vircgroup.c | 8 +++++---
> > > src/util/vircgroupv2.c | 12 ------------
> > > 2 files changed, 5 insertions(+), 15 deletions(-)
> > >
> > > diff --git a/src/util/vircgroup.c b/src/util/vircgroup.c
> > > index a6a409af3d..48fbcf625a 100644
> > > --- a/src/util/vircgroup.c
> > > +++ b/src/util/vircgroup.c
> > > @@ -2924,9 +2924,11 @@ virCgroupBindMount(virCgroup *group, const char *oldroot,
> > > size_t i;
> > > virCgroup *parent = virCgroupGetNested(group);
> > >
> > > - for (i = 0; i < VIR_CGROUP_BACKEND_TYPE_LAST; i++) {
> > > - if (parent->backends[i] &&
> > > - parent->backends[i]->bindMount(parent, oldroot, mountopts) < 0) {
> > > + /* In hybrid environments, V2 may be mounted over V1.
> > > + * Mount the backends in reverse order. */
> >
> >
> > I don't understand what you mean by mounted over?
> >
>
> In the hybrid environments I've seen, V2 is mounted upon "/unified" in the V1 cgroup filesystem.
What I've seen is that hybrid systemd environments have:
/sys/fs/cgroup/unified
/sys/fs/cgroup/memory
/sys/fs/cgroup/blkio
...
but that doesn't mean it is mounted in the V1 cgroup filesystem.
In this case /sys/fs/cgroup is a tmpfs filesystem and there is nothing
mounted over V1. If you've managed to run into an environment where
unified is actually mounted inside a cgroup filesystem, then something
else is seriously broken.
> > > + for (i = 1; i <= VIR_CGROUP_BACKEND_TYPE_LAST; i++) {
> > > + if (parent->backends[VIR_CGROUP_BACKEND_TYPE_LAST - i] &&
> > > + parent->backends[VIR_CGROUP_BACKEND_TYPE_LAST - i]->bindMount(parent, oldroot, mountopts) < 0) {
> > > return -1;
> > > }
> > > }
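
For readability, the index arithmetic in the hunk above is equivalent to
this plain descending iteration (a sketch against the same structures, not
part of the patch):

size_t i;

/* Walk backends[] from the last index down to 0 so the cgroup v2
 * backend is bind-mounted before v1. */
for (i = VIR_CGROUP_BACKEND_TYPE_LAST; i > 0; i--) {
    if (parent->backends[i - 1] &&
        parent->backends[i - 1]->bindMount(parent, oldroot, mountopts) < 0) {
        return -1;
    }
}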
> > > diff --git a/src/util/vircgroupv2.c b/src/util/vircgroupv2.c
> > > index 4c110940cf..0e0c61d466 100644
> > > --- a/src/util/vircgroupv2.c
> > > +++ b/src/util/vircgroupv2.c
> > > @@ -75,22 +75,10 @@ virCgroupV2Available(void)
> > > if (STRNEQ(entry.mnt_type, "cgroup2"))
> > > continue;
> > >
> > > - /* Systemd uses cgroup v2 for process tracking but no controller is
> > > - * available. We should consider this configuration as cgroup v2 is
> > > - * not available. */
> > > - contFile = g_strdup_printf("%s/cgroup.controllers", entry.mnt_dir);
> > > -
> > > - if (virFileReadAll(contFile, 1024 * 1024, &contStr) < 0)
> > > - goto cleanup;
> > > -
> > > - if (STREQ(contStr, ""))
> > > - continue;
> > > -
> >
> >
> > I don't like this at all and IMO this is an incorrect fix of the issue you
> > are trying to address. In hybrid mode with systemd the cgroup v2
> > controller is not a real controller. It's something systemd uses for
> > process tracking and some other features. It is owned by systemd and we
> > should not touch it directly at all. We need to use proper systemd APIs
> > to make any changes to that directory, or if needed ask systemd to create
> > a cgroup with Delegate=yes, which in this case is probably also not the
> > correct approach.
> >
>
> I must admit I'm a little in over my head here, but if I understand correctly,
> there isn't anything done in the v2 backend in hybrid mode that wouldn't be done
> in the v2 backend in unified mode. Does systemd behave differently or is the v2
> implementation in error in both hybrid and unified modes?
>
> Also in theory there could be a controller bound to the v2 hierarchy which would
> activate the v2 backend anyway.
So org.freedesktop.systemd1.Manager has a method called
AttachProcessesToUnit which would most likely be perfect for us for this
specific case, but it is not documented and from the commit message it
was intended for internal use only.
I'll ask the systemd developers what its state is and whether we could use
it; otherwise we might need to drop the check like this patch does.
Pavel
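
If AttachProcessesToUnit turns out to be usable, a call from libvirt might
look roughly like this GDBus sketch; the method exists but is undocumented,
so the argument layout (unit name, sub-cgroup path, pid array) is purely an
assumption:

#include <gio/gio.h>

static gboolean
attachPidToUnit(GDBusConnection *conn, const char *unit, pid_t pid,
                GError **err)
{
    GVariantBuilder pids;
    GVariant *ret;

    /* Assumed signature "(ssau)": unit name, sub-cgroup path, pids. */
    g_variant_builder_init(&pids, G_VARIANT_TYPE("au"));
    g_variant_builder_add(&pids, "u", (guint32) pid);

    ret = g_dbus_connection_call_sync(conn,
                                      "org.freedesktop.systemd1",
                                      "/org/freedesktop/systemd1",
                                      "org.freedesktop.systemd1.Manager",
                                      "AttachProcessesToUnit",
                                      g_variant_new("(ssau)", unit, "/", &pids),
                                      NULL, G_DBUS_CALL_FLAGS_NONE, -1,
                                      NULL, err);
    if (!ret)
        return FALSE;
    g_variant_unref(ret);
    return TRUE;
}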
> > I know it was already pushed but I'll most likely revert this patch and
> > we should find a better and proper solution.
> >
> > Pavel
> >
>
> I'd love to get suggestions for a better solution.
>
> Eric
>
> > > ret = true;
> > > break;
> > > }
> > >
> > > - cleanup:
> > > VIR_FORCE_FCLOSE(mounts);
> > > return ret;
> > > }
> > > --
> > > 2.35.3
>
[libvirt PATCH 0/3] qemu: Fix canceling migration
by Jiri Denemark
This series fixes commit v8.7.0-57-g2d7b22b561 "qemu: Make
qemuMigrationSrcCancel optionally synchronous", which was broken in
several ways (although the overall idea was correct).
Jiri Denemark (3):
qemu_migration: Properly wait for migration to be canceled
qemu: Do not crash when canceling migration on reconnect
NEWS: Document daemon crash on reconnect
NEWS.rst | 5 ++++
src/qemu/qemu_migration.c | 61 ++++++++++++++++++++++++++++-----------
src/qemu/qemu_migration.h | 3 +-
src/qemu/qemu_process.c | 4 +--
4 files changed, 53 insertions(+), 20 deletions(-)
--
2.38.0
[PATCH] vircgroup: Remove unused variables in virCgroupV2Available
by Peter Krempa
After a recent commit, 'contFile' and 'contStr' became unused, breaking the
build with clang:
../../../libvirt/src/util/vircgroupv2.c:72:26: error: unused variable 'contFile' [-Werror,-Wunused-variable]
g_autofree char *contFile = NULL;
^
../../../libvirt/src/util/vircgroupv2.c:73:26: error: unused variable 'contStr' [-Werror,-Wunused-variable]
g_autofree char *contStr = NULL;
^
Fixes: a0f37232b9c4296ca16955cc625f75eb848ace39
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
---
src/util/vircgroupv2.c | 3 ---
1 file changed, 3 deletions(-)
Pushed as a build fix.
diff --git a/src/util/vircgroupv2.c b/src/util/vircgroupv2.c
index 0e0c61d466..bf6bd11fef 100644
--- a/src/util/vircgroupv2.c
+++ b/src/util/vircgroupv2.c
@@ -69,9 +69,6 @@ virCgroupV2Available(void)
return false;
while (getmntent_r(mounts, &entry, buf, sizeof(buf)) != NULL) {
- g_autofree char *contFile = NULL;
- g_autofree char *contStr = NULL;
-
if (STRNEQ(entry.mnt_type, "cgroup2"))
continue;
--
2.37.3
[PATCH v3 0/6] qemu: tpm: Add support for migration across shared storage
by Stefan Berger
This series of patches adds support for migrating vTPMs across hosts whose
storage has been set up to share the directory structure holding the state
of the TPM (swtpm). The existence of shared storage influences the
management of the directory structure holding the TPM state, which for
example is only removed when a domain is undefined, not when a VM is
removed on the migration source host. Further, when shared storage is used,
security labeling on the destination side is skipped, assuming that the
labeling was already done on the source side.
I have tested this with an NFS setup where I had to turn SELinux off on
the hosts since the SELinux MLS range labeling is not supported by NFS.
For shared storage support to work properly, both sides of the migration
need to be clients of the shared storage setup, meaning that they both have
to have /var/lib/libvirt/swtpm mounted as shared storage; otherwise
virFileIsSharedFS() may not detect shared storage and in the worst case the
TPM emulator (swtpm) may malfunction, for example if the source side removed
the TPM state directory structure.
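
To make that dependency concrete, the source-side decision might look like
this sketch (virFileIsSharedFS() returns >0 for shared storage, 0 for local,
<0 on error; qemuTPMRemoveStateDir is a hypothetical helper, not a function
from this series):

/* Only remove swtpm state on the outgoing side when the state
 * directory is NOT on shared storage; on shared storage the
 * destination keeps using it. */
int shared = virFileIsSharedFS(swtpmStateDir);

if (shared < 0)
    return -1;                        /* detection failed, propagate */
if (shared == 0)
    return qemuTPMRemoveStateDir(vm); /* local: safe to clean up */
return 0;                             /* shared: leave state for the peer */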
Shared storage migration requires (upcoming) swtpm v0.8.
Stefan
v3:
- Relying entirely on virFileIsSharedFS() on migration source and
destination sides to detect whether shared storage is set up between
hosts; no more hint about shared storage from user via flag
- Added support for virDomainTPMPrivate structure to store and persist
TPM-related private data
Stefan Berger (6):
util: Add parsing support for swtpm's cmdarg-migration capability
qemu: tpm: Conditionally create storage on incoming migration
qemu: tpm: Add support for storing private TPM-related data
qemu: tpm: Pass --migration option to swtpm if supported and needed
qemu: tpm: Avoid security labels on incoming migration with shared
storage
qemu: tpm: Never remove state on outgoing migration and shared storage
src/conf/domain_conf.c | 63 ++++++++++++++++++++--
src/conf/domain_conf.h | 9 ++++
src/qemu/qemu_domain.c | 85 +++++++++++++++++++++++++++--
src/qemu/qemu_domain.h | 17 +++++-
src/qemu/qemu_driver.c | 20 +++----
src/qemu/qemu_extdevice.c | 10 ++--
src/qemu/qemu_extdevice.h | 6 ++-
src/qemu/qemu_migration.c | 20 ++++---
src/qemu/qemu_process.c | 9 ++--
src/qemu/qemu_snapshot.c | 4 +-
src/qemu/qemu_tpm.c | 111 ++++++++++++++++++++++++++++++++++----
src/qemu/qemu_tpm.h | 12 ++++-
src/util/virtpm.c | 1 +
src/util/virtpm.h | 1 +
14 files changed, 318 insertions(+), 50 deletions(-)
--
2.37.3
QEMU Advent Calendar 2022 Call for Images
by Eldon Stegall
Hi,
We are working to make QEMU Advent Calendar 2022 happen this year, and
if you have had an interesting experience with QEMU recently, we would
love for you to contribute! QEMU invocations that showcase new functionality,
something cool, bring back retro computing memories, or simply entertain
with a puzzle or game are welcome. If you have an idea but aren't sure
if it fits, email me and we can try to put something together.
QEMU Advent Calendar publishes a QEMU disk image each day from December
1-24. Each image is a surprise designed to delight an audience
consisting of the QEMU community and beyond. You can see previous years
here:
https://www.qemu-advent-calendar.org/
You can help us make this year's calendar awesome by:
* Sending disk images (or links to larger images)
* Replying with ideas for disk images (reply off-list to avoid spoilers!)
If you have an idea after the start of the advent, go ahead and send it. We may
find space to include it, or go ahead and get a jump on 2024!
Here is the format we will work with you to create:
* A name and a short description of the disk image
(e.g. with hints on what to try)
* A ./run shell script that prints out the name and
description/hints and launches QEMU (see the sketch after this list)
* A 320x240 screenshot/image/logo for the website
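
A minimal sketch of such a ./run script (the image name, hints, and QEMU
flags are all just placeholders):

#!/bin/sh
# Print the name and hints, then boot the bundled disk image.
cat <<'EOF'
Day N: Mystery Image
Hint: try the serial console first.
EOF
exec qemu-system-x86_64 -m 512 -drive file=disk.qcow2,format=qcow2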
Content must be freely redistributable (i.e. no proprietary
license that prevents distribution). For GPL based software,
you need to provide the source code, too.
Check out this disk image as an example of how to distribute an image:
https://www.qemu-advent-calendar.org/2018/download/day24.tar.xz
PS: QEMU Advent Calendar is a secular calendar (not
religious). The idea is to create a fun experience for the QEMU
community which can be shared with everyone. You don't need
to celebrate Christmas or another religious festival to participate!
Thanks, and best wishes!
Eldon
Re: [PATCH RFC v2 00/13] IOMMUFD Generic interface
by Alex Williamson
[Cc+ Steve, libvirt, Daniel, Laine]
On Tue, 20 Sep 2022 16:56:42 -0300
Jason Gunthorpe <jgg@nvidia.com> wrote:
> On Tue, Sep 13, 2022 at 09:28:18AM +0200, Eric Auger wrote:
> > Hi,
> >
> > On 9/13/22 03:55, Tian, Kevin wrote:
> > > We didn't close the open question of how to get this merged at LPC due to
> > > the audio issue, so let's use mails.
> > >
> > > Overall there are three options on the table:
> > >
> > > 1) Require vfio-compat to be 100% compatible with vfio-type1
> > >
> > > Probably not a good choice given the amount of work to fix the remaining
> > > gaps. And this will block support of new IOMMU features for a longer time.
> > >
> > > 2) Leave vfio-compat as what it is in this series
> > >
> > > Treat it as a vehicle to validate the iommufd logic instead of immediately
> > > replacing vfio-type1. Functionally most vfio applications can work w/o
> > > change, putting aside the differences in locked mm accounting, p2p, etc.
> > >
> > > Then work on new features and 100% vfio-type1 compat. in parallel.
> > >
> > > 3) Focus on iommufd native uAPI first
> > >
> > > Require vfio_device cdev and adoption in Qemu. Only for new vfio app.
> > >
> > > Then work on new features and vfio-compat in parallel.
> > >
> > > I'm fine with either 2) or 3). Per a quick chat with Alex, he prefers 3).
> >
> > I am also inclined to pursue 3) as this was Jason's initial guidance
> > and a prerequisite to integrating new features. In the past we concluded
> > vfio-compat would mostly be used for testing purposes. Our QEMU
> > integration is fully based on the device-based API.
>
> There are some poor chicken and egg problems here.
>
> I had some assumptions:
> a - the vfio cdev model is going to be iommufd only
> b - any uAPI we add as we go along should be generally useful going
> forward
> c - we should try to minimize the 'minimally viable iommufd' series
>
> The compat as it stands now (eg #2) is threading this needle. Since it
> can exist without cdev it means (c) is made smaller, to two series.
>
> Since we add something useful to some use cases, eg DPDK is deployable
> that way, (b) is OK.
>
> If we focus on a strict path with 3, and avoid adding non-useful code,
> then we have to have two more (unwritten!) series beyond where we are
> now - vfio group compartmentalization, and cdev integration, and the
> initial (c) will increase.
>
> 3 also has us merging something that currently has no usable
> userspace, which I also do dislike a lot.
>
> I still think the compat gaps are small. I've realized that
> VFIO_DMA_UNMAP_FLAG_VADDR has no implementation in qemu, and since it
> can deadlock the kernel I propose we purge it completely.
Steve won't be happy to hear that; QEMU support exists but isn't yet
merged.
> P2P is ongoing.
>
> That really just leaves the accounting, and I'm still not convinced that
> this must be a critical thing. Linus's latest remarks reported in lwn
> at the maintainer summit on tracepoints/BPF as ABI seem to support
> this. Let's see an actual deployed production configuration that would
> be impacted, and we won't find that unless we move forward.
I'll try to summarize the proposed change so that we can get better
advice from libvirt folks, or potentially anyone else managing locked
memory limits for device assignment VMs.
Background: when a DMA range, ex. guest RAM, is mapped to a vfio device,
we use the system IOMMU to provide GPA to HPA translation for assigned
devices. Unlike CPU page tables, we don't generally have a means to
demand-fault these translations; therefore the memory target of the
translation is pinned so that it cannot be swapped or
relocated, i.e. to guarantee the translation is always valid.
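
For concreteness, a mapping is established from userspace roughly like this
(standard vfio type1 API; container/group setup and error handling elided,
and container_fd, guest_ram, and gpa are placeholders):

#include <stdio.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

/* Pin one page of guest RAM and install a GPA->HPA translation in
 * the IOMMU; the backing page stays pinned until unmapped. */
struct vfio_iommu_type1_dma_map map = {
    .argsz = sizeof(map),
    .flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
    .vaddr = (uintptr_t) guest_ram,   /* host virtual address */
    .iova  = gpa,                     /* device-visible (guest physical) */
    .size  = 4096,
};

if (ioctl(container_fd, VFIO_IOMMU_MAP_DMA, &map) < 0)
    perror("VFIO_IOMMU_MAP_DMA");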
The issue is where we account these pinned pages, where accounting is
necessary such that a user cannot lock an arbitrary number of pages
into RAM to generate a DoS attack. Duplicate accounting should be
resolved by iommufd, but is outside the scope of this discussion.
Currently, vfio tests mm_struct.locked_vm against
rlimit(RLIMIT_MEMLOCK), which reads task->signal->rlim[limit].rlim_cur,
where task is the current process. This is the same limit set via the
setrlimit syscall used by prlimit(1) and reported via 'ulimit -l'.
Note that in both cases above, we're dealing with a task (or process)
limit, and both the prlimit and ulimit man pages describe them as such.
iommufd instead supposes, referencing existing kernel
implementations, that despite the descriptions above these limits are
actually meant to be user limits, and it therefore charges pinned
pages against user_struct.locked_vm and also marks them in
mm_struct.pinned_vm.
The proposed algorithm is to read the _task_ locked memory limit, then
attempt to charge the _user_ locked_vm, such that user_struct.locked_vm
cannot exceed the task locked memory limit.
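
In kernel-style pseudocode, the proposed scheme is roughly this (a sketch of
the described algorithm, not actual iommufd code):

/* Read the limit from the *task*, charge the pages to the *user*. */
static int account_pinned(struct user_struct *user, struct mm_struct *mm,
                          unsigned long npages)
{
    unsigned long limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;

    if (atomic_long_add_return(npages, &user->locked_vm) > limit) {
        atomic_long_sub(npages, &user->locked_vm);
        return -ENOMEM;
    }
    atomic64_add(npages, &mm->pinned_vm);   /* marked, not limited */
    return 0;
}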
This obviously has implications. AFAICT, any management tool that
doesn't instantiate assigned device VMs under separate users is
essentially untenable. For example, if we launch VM1 under userA and
set a locked memory limit of 4GB via prlimit to account for an assigned
device, that works fine, until we launch VM2 from userA as well. In
that case we can't simply set a 4GB limit on the VM2 task because
there's already 4GB charged against user_struct.locked_vm for VM1. So
we'd need to set the VM2 task limit to 8GB to be able to launch VM2.
But not only that, we'd need to go back and also set VM1's task limit
to 8GB or else it will fail if a DMA mapped memory region is transient
and needs to be re-mapped.
Effectively any task under the same user and requiring pinned memory
needs to have a locked memory limit set, and updated, to account for
all tasks using pinned memory by that user.
How does this affect known current use cases of locked memory
management for assigned device VMs?
Does qemu://system by default sandbox into per-VM uids, or do they all
use the qemu user by default? I imagine qemu://session mode is pretty
screwed by this, but I also don't know who/where locked limits are
lifted for such VMs. Boxes, which I think now supports assigned device
VMs, could also be affected.
> So, I still like 2 because it yields the smallest next step before we
> can bring all the parallel work onto the list, and it makes testing
> and converting non-qemu stuff easier even going forward.
If a vfio compatible interface isn't transparently compatible, then I
have a hard time understanding its value. Please correct my above
description and implications, but I suspect these are not just
theoretical ABI compat issues. Thanks,
Alex
[libvirt RFC PATCH 0/4] add external backend for tpm
by Ján Tomko
Ján Tomko (3):
qemu: tpm: fix spacing
qemu: add external backend for tpm
qemu: add tests for external swtpm
Peter Krempa (1):
schema: domain: Allow interleaving of 'tpm' config elements
src/conf/domain_audit.c | 11 +++++
src/conf/domain_conf.c | 16 +++++++
src/conf/domain_conf.h | 4 ++
src/conf/domain_validate.c | 1 +
src/conf/schemas/domaincommon.rng | 42 ++++++++++++++-----
src/qemu/qemu_capabilities.c | 4 +-
src/qemu/qemu_cgroup.c | 1 +
src/qemu/qemu_command.c | 11 ++++-
src/qemu/qemu_domain.c | 3 ++
src/qemu/qemu_namespace.c | 1 +
src/qemu/qemu_tpm.c | 2 +-
src/security/security_dac.c | 2 +
src/security/security_selinux.c | 2 +
.../domaincapsdata/qemu_4.2.0-q35.x86_64.xml | 1 +
.../domaincapsdata/qemu_4.2.0-tcg.x86_64.xml | 1 +
tests/domaincapsdata/qemu_4.2.0.x86_64.xml | 1 +
.../domaincapsdata/qemu_5.0.0-q35.x86_64.xml | 1 +
.../domaincapsdata/qemu_5.0.0-tcg.x86_64.xml | 1 +
.../qemu_5.0.0-virt.aarch64.xml | 1 +
tests/domaincapsdata/qemu_5.0.0.aarch64.xml | 1 +
tests/domaincapsdata/qemu_5.0.0.ppc64.xml | 1 +
tests/domaincapsdata/qemu_5.0.0.x86_64.xml | 1 +
.../domaincapsdata/qemu_5.1.0-q35.x86_64.xml | 1 +
.../domaincapsdata/qemu_5.1.0-tcg.x86_64.xml | 1 +
tests/domaincapsdata/qemu_5.1.0.x86_64.xml | 1 +
.../domaincapsdata/qemu_5.2.0-q35.x86_64.xml | 1 +
.../domaincapsdata/qemu_5.2.0-tcg.x86_64.xml | 1 +
.../qemu_5.2.0-virt.aarch64.xml | 1 +
tests/domaincapsdata/qemu_5.2.0.aarch64.xml | 1 +
tests/domaincapsdata/qemu_5.2.0.ppc64.xml | 1 +
tests/domaincapsdata/qemu_5.2.0.x86_64.xml | 1 +
.../domaincapsdata/qemu_6.0.0-q35.x86_64.xml | 1 +
.../domaincapsdata/qemu_6.0.0-tcg.x86_64.xml | 1 +
.../qemu_6.0.0-virt.aarch64.xml | 1 +
tests/domaincapsdata/qemu_6.0.0.aarch64.xml | 1 +
tests/domaincapsdata/qemu_6.0.0.x86_64.xml | 1 +
.../domaincapsdata/qemu_6.1.0-q35.x86_64.xml | 1 +
.../domaincapsdata/qemu_6.1.0-tcg.x86_64.xml | 1 +
tests/domaincapsdata/qemu_6.1.0.x86_64.xml | 1 +
.../domaincapsdata/qemu_6.2.0-q35.x86_64.xml | 1 +
.../domaincapsdata/qemu_6.2.0-tcg.x86_64.xml | 1 +
.../qemu_6.2.0-virt.aarch64.xml | 1 +
tests/domaincapsdata/qemu_6.2.0.aarch64.xml | 1 +
tests/domaincapsdata/qemu_6.2.0.ppc64.xml | 1 +
tests/domaincapsdata/qemu_6.2.0.x86_64.xml | 1 +
.../domaincapsdata/qemu_7.0.0-q35.x86_64.xml | 1 +
.../domaincapsdata/qemu_7.0.0-tcg.x86_64.xml | 1 +
.../qemu_7.0.0-virt.aarch64.xml | 1 +
tests/domaincapsdata/qemu_7.0.0.aarch64.xml | 1 +
tests/domaincapsdata/qemu_7.0.0.ppc64.xml | 1 +
tests/domaincapsdata/qemu_7.0.0.x86_64.xml | 1 +
.../domaincapsdata/qemu_7.1.0-q35.x86_64.xml | 1 +
.../domaincapsdata/qemu_7.1.0-tcg.x86_64.xml | 1 +
tests/domaincapsdata/qemu_7.1.0.ppc64.xml | 1 +
tests/domaincapsdata/qemu_7.1.0.x86_64.xml | 1 +
.../tpm-external.x86_64-latest.args | 36 ++++++++++++++++
tests/qemuxml2argvdata/tpm-external.xml | 40 ++++++++++++++++++
tests/qemuxml2argvtest.c | 1 +
.../tpm-external.x86_64-latest.xml | 1 +
tests/qemuxml2xmltest.c | 1 +
60 files changed, 208 insertions(+), 13 deletions(-)
create mode 100644 tests/qemuxml2argvdata/tpm-external.x86_64-latest.args
create mode 100644 tests/qemuxml2argvdata/tpm-external.xml
create mode 120000 tests/qemuxml2xmloutdata/tpm-external.x86_64-latest.xml
--
2.37.3