[PATCH 0/2] docs: two fixes
by Peter Krempa
Please see individual patches.
Peter Krempa (2):
docs: formatdomain: Document a few NVRAM config limitations
docs: formatdomain: Mention that vhostuser interface with
mode='server' waits for connection
docs/formatdomain.rst | 15 +++++++++++++--
1 file changed, 13 insertions(+), 2 deletions(-)
--
2.48.1
2 weeks, 6 days
[PATCH v2] NEWS: Improve mention of vTPM transient VM crash fix in v11.0.0
by Peter Krempa
The original NEWS entry for the vTPM transient VM crash was rather
vague and non-actionable.
As the bug is still actively experienced by users [1] of distros that
haven't yet shipped an update to v11.0.0, and is hit by relatively common
usage, improve the entry to mention the situations in which it happens and
link to upstream bug reports containing workarounds.
[1]: https://gitlab.com/libvirt/libvirt/-/issues/746
Signed-off-by: Peter Krempa <pkrempa(a)redhat.com>
---
v2: Instead of listing the workarounds link to the upstream issues.
NEWS.rst | 13 ++++++++++---
1 file changed, 10 insertions(+), 3 deletions(-)
diff --git a/NEWS.rst b/NEWS.rst
index 7984f358f3..96e6ee9ada 100644
--- a/NEWS.rst
+++ b/NEWS.rst
@@ -116,10 +116,17 @@ v11.0.0 (2025-01-15)
* **Bug fixes**
- * qemu: tpm: do not update profile name for transient domains
+ * qemu: tpm: Fix crash on startup of transient domains with vTPM
- Fix a possible crash when starting a transient domain which was
- introduced in the previous release.
+ A bug introduced in `v10.10.0 (2024-12-02)`_ can crash the libvirt daemon
+ (``virtqemud`` or ``libvirtd``) when a transient VM with a vTPM device is
+ started, either via ``virsh create domain.xml`` or when creating VMs via
+ ``virt-manager``, which uses a transient definition for the initial
+ installation.
+
+ More information and workarounds are documented in the upstream issues
+ `#746 <https://gitlab.com/libvirt/libvirt/-/issues/746>`__ and
+ `#715 <https://gitlab.com/libvirt/libvirt/-/issues/715>`__.
* qemu: Fix snapshot to not delete disk image with internal snapshot
--
2.48.1
2 weeks, 6 days
[PATCH] NEWS: Improve mention of vTPM transient VM crash fix in v11.0.0
by Peter Krempa
The original NEWS entry for the vTPM transient VM crash was rather
vague and non-actionable.
As the bug is still actively experienced by users [1] of distros that
haven't yet shipped an update to v11.0.0, and is hit by relatively common
usage, improve the entry to describe the situations in which it happens
more closely and to provide workarounds for users who are unable to update.
[1]: https://gitlab.com/libvirt/libvirt/-/issues/746
Signed-off-by: Peter Krempa <pkrempa(a)redhat.com>
---
NEWS.rst | 29 ++++++++++++++++++++++++++---
1 file changed, 26 insertions(+), 3 deletions(-)
diff --git a/NEWS.rst b/NEWS.rst
index 7dc6a3fa37..117287044a 100644
--- a/NEWS.rst
+++ b/NEWS.rst
@@ -97,10 +97,33 @@ v11.0.0 (2025-01-15)
* **Bug fixes**
- * qemu: tpm: do not update profile name for transient domains
+ * qemu: tpm: Fix crash on startup of transient domains with vTPM
- Fix a possible crash when starting a transient domain which was
- introduced in the previous release.
+ A bug introduced in `v10.10.0 (2024-12-02)`_ can crash the libvirt daemon
+ (``virtqemud`` or ``libvirtd``) when a transient VM with a vTPM device is
+ started, either via ``virsh create domain.xml`` or when creating VMs via
+ ``virt-manager``, which uses a transient definition for the initial
+ installation.
+
+ Note that in many cases ``virt-install`` auto-adds a vTPM based on the
+ guest OS that is about to be installed.
+
+ The bug is fixed in this release. The following workarounds are possible
+ if upgrading is currently not an option:
+
+ - make the VM persistent instead of starting it as transient::
+
+ virsh define domain.xml
+ virsh start domain
+
+ - disable vTPM if practical in your deployment, either by dropping the
+ ``<tpm>`` element or via::
+
+ virt-install --tpm none ...
+
+ To obtain the XML that ``virt-install`` would use for the above steps, run::
+
+ virt-install --print-xml ...
* qemu: Fix snapshot to not delete disk image with internal snapshot
--
2.48.1
2 weeks, 6 days
[PATCH RFC 00/13] qemu: Add support for iothread to virtqueue mapping for 'virtio-scsi'
by Peter Krempa
The first part of the series refactors the existing code for reuse and
then uses the new helpers to implement the feature.
Note that this series is in RFC state as the qemu patches are still
being discussed; thus the capability bump is also not final.
Also note that we should perhaps discuss the libvirt interface, as it
turns out that 'virtio-scsi' has two internal queues that need to be
mapped as well.
For now I've solved this administratively by instructing users to also
add mappings for queues '0' and '1', which are the special ones in the
case of virtio-scsi.
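To illustrate, a sketch of the intended XML, extrapolated from the
existing virtio-blk disk mapping syntax (the controller placement and
the queue numbering here are my assumptions and may change during
review):

  <controller type='scsi' model='virtio-scsi'>
    <driver queues='2'>
      <iothreads>
        <iothread id='1'>
          <queue id='0'/>  <!-- control queue (special) -->
          <queue id='1'/>  <!-- event queue (special) -->
        </iothread>
        <iothread id='2'>
          <queue id='2'/>  <!-- request queues -->
          <queue id='3'/>
        </iothread>
      </iothreads>
    </driver>
  </controller>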
qemu-patches:
https://mail.gnu.org/archive/html/qemu-devel/2025-02/msg02810.html
Peter Krempa (13):
conf: Rename 'virDomainDiskIothreadDef' to
'virDomainIothreadMappingDef'
conf: domain: Extract code for parsing and formatting iothread mapping
definition
hypervisor: domain: Extract code for checking iothread usage
qemu: command: Rename 'qemuBuildDiskDeviceIothreadMappingProps' to
'qemuBuildIothreadMappingProps'
qemu: validate: Extract iothread mapping validation code
qemuValidateCheckSCSIControllerIOThreads: Return '0' and '-1' instead
of bools
conf: schemas: Rename 'diskDriverIothreads' to 'iothreadMapping'
conf: Validate that iothreads are used only with 'virtio-scsi'
controllers
qemucapabilitiestest: Update 'caps_10.0.0_x86_64' to XXXXXX
qemu: capabilities: Introduce QEMU_CAPS_VIRTIO_SCSI_IOTHREAD_MAPPING
conf: Add support for iothread to queue mapping config for
'virtio-scsi'
qemu: Implement support for iothread <-> virtqueue mapping for
'virtio-scsi' controllers
qemuxmlconftest: Add 'iothreads-virtio-scsi-mapping' case
docs/formatdomain.rst | 33 +++
src/conf/domain_conf.c | 157 +++++++-----
src/conf/domain_conf.h | 11 +-
src/conf/domain_validate.c | 19 ++
src/conf/schemas/domaincommon.rng | 7 +-
src/hypervisor/domain_driver.c | 34 +--
src/qemu/qemu_capabilities.c | 2 +
src/qemu/qemu_capabilities.h | 1 +
src/qemu/qemu_command.c | 12 +-
src/qemu/qemu_domain.c | 4 +-
src/qemu/qemu_validate.c | 234 ++++++++++--------
.../caps_10.0.0_x86_64.replies | 12 +-
.../caps_10.0.0_x86_64.xml | 3 +-
...r-virtio-serial-iothread.x86_64-latest.err | 1 +
.../controller-virtio-serial-iothread.xml | 27 ++
...ads-virtio-scsi-mapping.x86_64-latest.args | 39 +++
...eads-virtio-scsi-mapping.x86_64-latest.xml | 54 ++++
.../iothreads-virtio-scsi-mapping.xml | 46 ++++
tests/qemuxmlconftest.c | 3 +
19 files changed, 506 insertions(+), 193 deletions(-)
create mode 100644 tests/qemuxmlconfdata/controller-virtio-serial-iothread.x86_64-latest.err
create mode 100644 tests/qemuxmlconfdata/controller-virtio-serial-iothread.xml
create mode 100644 tests/qemuxmlconfdata/iothreads-virtio-scsi-mapping.x86_64-latest.args
create mode 100644 tests/qemuxmlconfdata/iothreads-virtio-scsi-mapping.x86_64-latest.xml
create mode 100644 tests/qemuxmlconfdata/iothreads-virtio-scsi-mapping.xml
--
2.48.1
2 weeks, 6 days
virsh console hangs when ssh connection is lost
by Olaf Hering
Hello,
the command 'virsh -c qemu+ssh://root@remotehost/system console vm' from
libvirt 10.0.0 just hangs when the remotehost is rebooted. It prints
error: Disconnected from qemu+ssh://root@remotehost/system due to end of file
and waits for the user to press return. Then it prints
error: internal error: client socket is closed
and the command terminates as expected. I tried to add -k and -K, but
that does not help. If for some reason the vm is shut down manually via
'virsh shutdown --domain vm' on remotehost or via "poweroff", then
'virsh console' also terminates properly.
Is there a way to avoid such a hang state if the ssh connection drops?
It seems a simple reproducer is to send SIGTERM to the sshd child process.
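Spelled out (the pid of the per-connection child is environment
specific):

  # on remotehost, while the console session is open:
  pgrep -a sshd            # identify the child serving the session
  kill -TERM <child-pid>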
Olaf
2 weeks, 6 days
[PATCH 0/4] Add support for MSDM ACPI table type
by Daniel P. Berrangé
This was requested by KubeVirt in
https://gitlab.com/libvirt/libvirt/-/issues/748
I've not functionally tested this, since I lack a suitable Windows
guest environment that looks for MSDM tables, nor does my machine
have MSDM ACPI tables to pass to a guest.
I'm blindly assuming that the QEMU CLI code is identical except for
s/SLIC/MSDM/.
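For reference, the XML this enables would look something like the
following, extrapolated from the existing SLIC syntax (the file paths
are placeholders):

  <os>
    <acpi>
      <table type='slic'>/path/to/slic.dat</table>
      <table type='msdm'>/path/to/msdm.bin</table>
    </acpi>
  </os>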
Also I'm pretty unhappy about the situation with the Xen driver
support. This is pre-existing, and IMHO should never have been added
as it exists today, as it allows arbitrary passthrough of *any* set
of ACPI tables, as opposed to a single table of the specific type
listed in the XML. This should have been handled with a different
XML syntax, but we're stuck with this undesirable approach now, so
I've kept it as is.
Daniel P. Berrangé (4):
conf: introduce support for multiple ACPI tables
src: validate permitted ACPI table types in libxl/qemu drivers
conf: support MSDM ACPI table type
qemu: support MSDM ACPI table type
docs/formatdomain.rst | 4 +-
src/conf/domain_conf.c | 88 ++++++++++++++-----
src/conf/domain_conf.h | 22 ++++-
src/conf/schemas/domaincommon.rng | 5 +-
src/libvirt_private.syms | 2 +
src/libxl/libxl_conf.c | 8 +-
src/libxl/libxl_domain.c | 21 +++++
src/libxl/xen_xl.c | 22 ++++-
src/qemu/qemu_command.c | 14 ++-
src/qemu/qemu_validate.c | 16 ++++
src/security/security_dac.c | 18 ++--
src/security/security_selinux.c | 16 ++--
src/security/virt-aa-helper.c | 5 +-
.../acpi-table-many.x86_64-latest.args | 34 +++++++
.../acpi-table-many.x86_64-latest.xml | 39 ++++++++
tests/qemuxmlconfdata/acpi-table-many.xml | 31 +++++++
tests/qemuxmlconftest.c | 1 +
17 files changed, 296 insertions(+), 50 deletions(-)
create mode 100644 tests/qemuxmlconfdata/acpi-table-many.x86_64-latest.args
create mode 100644 tests/qemuxmlconfdata/acpi-table-many.x86_64-latest.xml
create mode 100644 tests/qemuxmlconfdata/acpi-table-many.xml
--
2.47.1
2 weeks, 6 days
Re: [PATCH 1/1] RFC: Add Arm CCA support for getting capability
information and running Realm VM
by Michal Prívozník
On 2/14/25 02:06, Akio Kakuno (Fujitsu) via Devel wrote:
> Hi, all!
>
> I'm adding three tests for CCA compatibility:
> domaincapstest, qemucapabilitiestest, and qemuxmlconftest.
> This is because SEV-SNP added these three tests.
>
> I have three questions regarding these tests:
> 1. How to add tests to qemuxmlconftest
> 2. How to create launch-security-cca.xml
> 3. About the file output with VIR_TEST_REGENERATE_OUTPUT=1
>
> 1. How to add tests to qemuxmlconftest
> Following the example of launch-security-sev-snp tests, I've done the following.
> Is this correct?
>
> (1) Placed the following three files in tests/qemuxmlconfdata:
> launch-security-cca.xml
> launch-security-cca.aarch64-latest.xml
> launch-security-cca.aarch64-latest.args
>
The placement is correct, yes. BUT ...
> (2) Added the test processing to qemuxmlconftest.c's mymain() function:
> DO_TEST_CAPS_ARCH_LATEST_FULL("launch-security-cca",
> "aarch64",
> ARG_QEMU_CAPS,
> QEMU_CAPS_CCA_GUEST,
> QEMU_CAPS_LAST);
... this can be simplified to:
DO_TEST_CAPS_ARCH_LATEST("launch-security-cca", "aarch64");
>
> 2. How to create launch-security-cca.xml
> Do I need to hand-write this from scratch or is there an automated method?
Basically it's hand written. What I usually do is I copy-paste the
domain XML I used when developing and testing a feature. And then cut
off all unnecessary elements.
>
> 3. About the file output with VIR_TEST_REGENERATE_OUTPUT=1
> I created launch-security-cca.aarch64-latest.* using the method described in
> doc/advanced-tests.rst.
> And, I created the test data for qemucapabilitiestest and domaincapstest using
> the method described in tests/qemucapabilitiesdata/README.rst.
> VIR_TEST_REGENERATE_OUTPUT=1 ./qemuxmlconftest
> VIR_TEST_REGENERATE_OUTPUT=1 ./domaincapstest
> VIR_TEST_REGENERATE_OUTPUT=1 ./qemucapabilitiestest
>
> Can I use the generated file for testing as is?
> Because doc/advanced-tests.rst says:
> "VERY CAREFULLY to ensure they are correct"
> I assume that automatically generated expected values are
> checked for accuracy.
Not really. It takes a machine brain to decide whether those files
follow some syntax (e.g. whether JSON is valid), but it takes a human
brain to decide whether full combination of cmd line arguments actually
makes sense. I'd say - if you're able to start the generated cmd line
(modulo some FD passing stuff - see my point above about cutting off
unnecessary elements), then you're probably fine.
> If correct, they are adopted; otherwise, investigation and remediation are undertaken.
> However, due to the lack of explicit documentation, we require confirmation.
>
> Also, it appears to be generating expected values based on the execution environment.
> Do we need to worry about variations in execution environments?
No. Our tests should generate a stable enough (and reproducible!) environment.
> For example, executing qemuxmlconftest detects the following comparison errors,
> such as with aarch64-virt-minimal.aarch64-latest, etc.
> Expect [sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
> -m]
> Actual [m]
Yeah, printing diffs is not very user-friendly. You can get better
results with VIR_TEST_DEBUG=2.
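For example (VIR_TEST_RANGE additionally limits the run to specific
test numbers):

  VIR_TEST_DEBUG=2 ./qemuxmlconftest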
>
> Best Regards.
>
Michal
2 weeks, 6 days
[PATCH v7 00/18] qemu: block: Support block disk along with throttle filters
by Harikumar Rajkumar
Support block disk along with throttle filters.
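A rough sketch of the new XML (element and attribute names follow the
schema patches in this series; see the 'throttlefilter' test data for
complete examples):

  <throttlegroups>
    <throttlegroup>
      <group_name>limit0</group_name>
      <total_bytes_sec>10000000</total_bytes_sec>
    </throttlegroup>
  </throttlegroups>
  ...
  <disk type='file' device='disk'>
    ...
    <throttlefilters>
      <throttlefilter group='limit0'/>
    </throttlefilters>
  </disk>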
Harikumar Rajkumar (18):
schema: Add new domain elements to support multiple throttle groups
schema: Add new domain elements to support multiple throttle filters
config: Introduce ThrottleGroup and corresponding XML parsing
config: Introduce ThrottleFilter and corresponding XML parsing
qemu: monitor: Add support for ThrottleGroup operations
tests: Test qemuMonitorJSONGetThrottleGroup and
qemuMonitorJSONUpdateThrottleGroup
remote: New APIs for ThrottleGroup lifecycle management
qemu: Refactor qemuDomainSetBlockIoTune to extract common methods
qemu: Implement qemu driver for throttle API
qemu: helper: throttle filter nodename and preparation processing
qemu: block: Support block disk along with throttle filters
config: validate: Verify iotune, throttle group and filter
qemuxmlconftest: Add 'throttlefilter' tests
qemustatusxml2xmldata: Add 'throttlefilter' tests
test_driver: Test throttle group lifecycle APIs
virsh: Refactor iotune options for re-use
virsh: Add support for throttle group operations
virsh: Add option throttle-groups to attach_disk
docs/formatdomain.rst | 47 ++
docs/manpages/virsh.rst | 137 +++-
include/libvirt/libvirt-domain.h | 14 +
src/conf/domain_conf.c | 407 ++++++++++
src/conf/domain_conf.h | 47 ++
src/conf/domain_validate.c | 118 ++-
src/conf/schemas/domaincommon.rng | 293 ++++---
src/conf/virconftypes.h | 4 +
src/driver-hypervisor.h | 14 +
src/libvirt-domain.c | 122 +++
src/libvirt_private.syms | 8 +
src/libvirt_public.syms | 6 +
src/qemu/qemu_block.c | 136 ++++
src/qemu/qemu_block.h | 49 ++
src/qemu/qemu_command.c | 180 +++++
src/qemu/qemu_command.h | 6 +
src/qemu/qemu_domain.c | 77 +-
src/qemu/qemu_driver.c | 486 +++++++++---
src/qemu/qemu_hotplug.c | 29 +
src/qemu/qemu_monitor.c | 21 +
src/qemu/qemu_monitor.h | 9 +
src/qemu/qemu_monitor_json.c | 129 +++
src/qemu/qemu_monitor_json.h | 14 +
src/remote/remote_daemon_dispatch.c | 105 +++
src/remote/remote_driver.c | 3 +
src/remote/remote_protocol.x | 50 +-
src/remote_protocol-structs | 28 +
src/test/test_driver.c | 367 ++++++---
tests/qemumonitorjsontest.c | 86 ++
.../throttlefilter-in.xml | 392 ++++++++++
.../throttlefilter-out.xml | 393 ++++++++++
tests/qemuxmlactivetest.c | 1 +
.../throttlefilter-invalid.x86_64-latest.err | 1 +
.../throttlefilter-invalid.xml | 89 +++
.../throttlefilter.x86_64-latest.args | 55 ++
.../throttlefilter.x86_64-latest.xml | 105 +++
tests/qemuxmlconfdata/throttlefilter.xml | 95 +++
tests/qemuxmlconftest.c | 2 +
tools/virsh-completer-domain.c | 82 ++
tools/virsh-completer-domain.h | 16 +
tools/virsh-domain.c | 736 ++++++++++++++----
41 files changed, 4429 insertions(+), 530 deletions(-)
create mode 100644 tests/qemustatusxml2xmldata/throttlefilter-in.xml
create mode 100644 tests/qemustatusxml2xmldata/throttlefilter-out.xml
create mode 100644 tests/qemuxmlconfdata/throttlefilter-invalid.x86_64-latest.err
create mode 100644 tests/qemuxmlconfdata/throttlefilter-invalid.xml
create mode 100644 tests/qemuxmlconfdata/throttlefilter.x86_64-latest.args
create mode 100644 tests/qemuxmlconfdata/throttlefilter.x86_64-latest.xml
create mode 100644 tests/qemuxmlconfdata/throttlefilter.xml
--
2.39.5 (Apple Git-154)
3 weeks
libvirt hook to change domain xml
by Victor Toso
Hi,
I'm particularly interested in the PoC of KubeVirt to allow
custom changes in libvirt domain xml in a more straightforward
manner than they have today [0].
When I was looking into the libvirt hooks, I was a bit excited
when I read:
> you can also place several hook scripts in the directory
> /etc/libvirt/hooks/qemu.d/. They are executed in alphabetical
> order after the main script. In this case each script also acts as
> a filter and can modify the domain XML and print it out on its
> standard output. This script's output is passed to the standard
> input of the next script in order. Empty output from any script is
> identical to copying the input XML without changing it.
> In case any script returns failure, the common process will be
> aborted, but all scripts from the directory will still be executed.
But that's not the case in every situation: when the domain is
being defined, the scripts are run but the output is ignored [1].
Is there a reason for that?
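For concreteness, the kind of filter script qemu.d supports (a sketch;
assumes xmlstarlet is installed, any XML-aware tool would do):

  #!/bin/sh
  # /etc/libvirt/hooks/qemu.d/50-example
  # argv: $1 = guest name, $2 = operation, $3 = sub-operation
  # The domain XML arrives on stdin; the (possibly modified) XML must
  # be printed on stdout. Empty output keeps the input unchanged.
  case "$2" in
  prepare)
      # hypothetical tweak: rewrite <description> (no-op if absent)
      xmlstarlet ed -u '/domain/description' -v 'edited by hook' -
      ;;
  *)
      cat  # pass the XML through untouched
      ;;
  esac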
In KubeVirt we basically have to re-implement [2] what the logic
around scripts under qemu.d already does. If we could use libvirt
alone, it would make testing easier for users and mean less code in
KubeVirt.
[0] https://kubevirt.io/user-guide/user_workloads/hook-sidecar/
[1] https://gitlab.com/libvirt/libvirt/-/blob/master/src/qemu/qemu_process.c#...
[2] https://github.com/kubevirt/kubevirt/blob/main/pkg/hooks/manager.go#L200
Cheers,
Victor
3 weeks
[PATCH v2 00/12] qemu: support passt as a backend for vhost-user network interfaces
by Laine Stump
====
Changes from V1:
* fixed missing change to error log message pointed out by abologna
* added a validation check to ensure that shared memory is enabled
if there is a type='vhostuser' interface in the domain definition
* included a patch documenting differences between type='user' SLIRP
and passt behaviors (because I had to do it anyway, and the
reorganization made documenting type='vhostuser' passt slightly
easier).
* added documentation for type='vhostuser' backend type='passt'
====
passt (https://passt.top) provides a method of connecting QEMU virtual
machines to the external network without requiring special privileges
or capabilities of any participating processes - even libvirt itself
can run unprivileged and create an instance of passt (which *always*
runs unprivileged) that is then connected to the qemu process (and
thus the virtual machine) with a unix socket.
Originally passt used its own protocol for this socket, sending both
control messages and data packets over the socket. This works, and is
already much more efficient than slirp, previously the only
unprivileged networking solution.
But recently passt added support for using the vhost-user protocol for
communication between the passt process (which is connected to the
external network) and the QEMU process (and thus the VM). vhost-user
also uses a unix socket, but only for control plane messages - all
data packets are "sent" between the VM and passt process via a shared
memory region. This is unsurprisingly much more efficient.
From the point of view of QEMU, the passt process looks identical to
any normal vhost-user backend, so we can run QEMU with exactly the
same interface commandline options as normal vhost-user. Also, the
passt process supports all of the same options as it does when used in
its "traditional" mode, so really in the end all we need to do is
twist libvirt around so that when <backend type='passt'/> is specified
for an <interface type='vhostuser'>, it will run passt just as before
(except with the added "--vhost-user" option so that passt will know
to use that), and then force feed the vhost-user code in libvirt with
the same socket path used by passt.
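Concretely, the configuration ends up looking roughly like this (a
sketch; the shared-memory part is what the new validation check
insists on):

  <memoryBacking>
    <source type='memfd'/>
    <access mode='shared'/>
  </memoryBacking>
  ...
  <interface type='vhostuser'>
    <backend type='passt'/>
    <model type='virtio'/>
  </interface>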
This series does that, while also switching up a few bits of code
prior to adding in the new functionality.
So far this has been tested both unprivileged and privileged on Fedora
40 (with the latest passt package) and selinux enabled (there are a couple
of selinux policy tweaks that still need to be pushed to
passt-selinux) as well as unprivileged on Debian (I *think* with
AppArmor enabled) and everything seems to work.
(I haven't gotten to testing hotplug, but it *should* work, and I'll
be testing it while (hopefully) someone is reviewing these patches.)
To test, you will need the latest (20250121) passt package and the
aforementioned upstream passt-selinux patch if you're using selinux.
This Resolves: https://issues.redhat.com/browse/RHEL-69455
Laine Stump (12):
conf: change virDomainHostdevInsert() to return void
qemu: fix qemu validation to forbid guest-side IP address for
type='vdpa'
qemu: validate that model is virtio for vhostuser and vdpa interfaces
in the same place
qemu: automatically set model type='virtio' for interface
type='vhostuser'
qemu: do all vhostuser attribute validation in qemu driver
conf/qemu: make <source> element *almost* optional for type=vhostuser
qemu: use switch instead of if in qemuProcessPrepareDomainNetwork()
qemu: make qemuPasstCreateSocketPath() public
qemu: complete vhostuser + passt support
qemu: fail validation if a domain def has vhostuser/passt but no
shared mem
docs: improve type='user' docs to highlight differences between SLIRP
and passt
docs: document using passt backend with <interface type='vhostuser'>
docs/formatdomain.rst | 189 +++++++++++++-----
src/conf/domain_conf.c | 107 +++++-----
src/conf/domain_conf.h | 2 +-
src/conf/domain_validate.c | 85 +++-----
src/conf/schemas/domaincommon.rng | 32 ++-
src/libxl/libxl_domain.c | 5 +-
src/libxl/libxl_driver.c | 3 +-
src/lxc/lxc_driver.c | 3 +-
src/qemu/qemu_command.c | 7 +-
src/qemu/qemu_driver.c | 3 +-
src/qemu/qemu_extdevice.c | 6 +-
src/qemu/qemu_hotplug.c | 21 +-
src/qemu/qemu_passt.c | 5 +-
src/qemu/qemu_passt.h | 3 +
src/qemu/qemu_postparse.c | 3 +-
src/qemu/qemu_process.c | 85 +++++---
src/qemu/qemu_validate.c | 65 ++++--
...t-user-slirp-portforward.x86_64-latest.err | 2 +-
...vhostuser-passt-no-shmem.x86_64-latest.err | 1 +
.../net-vhostuser-passt-no-shmem.xml | 70 +++++++
.../net-vhostuser-passt.x86_64-latest.args | 42 ++++
.../net-vhostuser-passt.x86_64-latest.xml | 75 +++++++
tests/qemuxmlconfdata/net-vhostuser-passt.xml | 73 +++++++
tests/qemuxmlconftest.c | 2 +
24 files changed, 657 insertions(+), 232 deletions(-)
create mode 100644 tests/qemuxmlconfdata/net-vhostuser-passt-no-shmem.x86_64-latest.err
create mode 100644 tests/qemuxmlconfdata/net-vhostuser-passt-no-shmem.xml
create mode 100644 tests/qemuxmlconfdata/net-vhostuser-passt.x86_64-latest.args
create mode 100644 tests/qemuxmlconfdata/net-vhostuser-passt.x86_64-latest.xml
create mode 100644 tests/qemuxmlconfdata/net-vhostuser-passt.xml
--
2.47.1
3 weeks, 1 day