[libvirt PATCH v3 00/16] Add QEMU "-display dbus" support
From: Marc-André Lureau <marcandre.lureau(a)redhat.com>
Hi,
This series implements support for the QEMU "-display dbus" option, which
landed earlier this week for 7.0.
By default, libvirt will start a private D-Bus bus (sharing and reusing the
existing "vmstate" D-Bus bus & code).
The feature set should cover the needs to replace Spice as the local client of
choice, including 3D acceleration (dmabuf), audio, clipboard sharing, USB
redirection, and arbitrary chardevs/channels (for serial etc.).
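For reference, assembled from the patch titles below, the new domain XML
would look roughly like this (a sketch only; exact attributes and element
placement may differ from the final schema):

```xml
<!-- Hypothetical sketch: the type='dbus' element names come from the
     patch titles, the surrounding attributes are assumptions -->
<graphics type='dbus'/>
<audio id='1' type='dbus'/>
<serial type='dbus'>
  <target port='0'/>
</serial>
```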
A test Gtk4 client is also in development at
https://gitlab.com/marcandre.lureau/qemu-display/. A few dependencies, such as
zbus, require an upcoming release. virt-viewer & boxes will need a port to Gtk4
to make use of the shared widget.
Comments welcome, as we can still adjust the QEMU side etc.
thanks
v3: after QEMU 7.0 dev cycle opening and merge
- rebased
- add 7.0 x86-64 capabilities (instead of tweaking 6.2)
- fix version annotations
Marc-André Lureau (16):
qemu: add chardev-vdagent capability check
qemu: add -display dbus capability check
qemucapabilitiestest: Add x64 test data for the qemu-7.0 development
cycle
conf: add <graphics type='dbus'>
qemu: start the D-Bus daemon for the display
qemu: add -display dbus support
virsh: refactor/split cmdDomDisplay()
virsh: report the D-Bus bus URI for domdisplay
conf: add <audio type='dbus'> support
qemu: add audio type 'dbus'
conf: add dbus <clipboard>
qemu: add dbus clipboard sharing
conf: add <serial type='dbus'>
qemu: add -chardev dbus support
qemu: add usbredir type 'dbus'
docs: document <graphics> type dbus
NEWS.rst | 7 +-
docs/formatdomain.rst | 43 +-
docs/schemas/basictypes.rng | 7 +
docs/schemas/domaincommon.rng | 71 +
src/bhyve/bhyve_command.c | 1 +
src/conf/domain_conf.c | 141 +-
src/conf/domain_conf.h | 15 +
src/conf/domain_validate.c | 41 +-
src/libxl/libxl_conf.c | 1 +
src/qemu/qemu_capabilities.c | 8 +
src/qemu/qemu_capabilities.h | 4 +
src/qemu/qemu_command.c | 77 +-
src/qemu/qemu_domain.c | 1 +
src/qemu/qemu_driver.c | 10 +-
src/qemu/qemu_extdevice.c | 13 +
src/qemu/qemu_hotplug.c | 1 +
src/qemu/qemu_monitor_json.c | 10 +
src/qemu/qemu_process.c | 41 +-
src/qemu/qemu_validate.c | 33 +
src/security/security_dac.c | 2 +
src/vmx/vmx.c | 1 +
.../domaincapsdata/qemu_7.0.0-q35.x86_64.xml | 231 +
.../domaincapsdata/qemu_7.0.0-tcg.x86_64.xml | 237 +
tests/domaincapsdata/qemu_7.0.0.x86_64.xml | 231 +
.../caps_6.1.0.x86_64.xml | 1 +
.../caps_6.2.0.aarch64.xml | 1 +
.../caps_6.2.0.x86_64.xml | 1 +
.../caps_7.0.0.x86_64.replies | 37335 ++++++++++++++++
.../caps_7.0.0.x86_64.xml | 3720 ++
.../graphics-dbus-address.args | 30 +
.../graphics-dbus-address.xml | 35 +
.../qemuxml2argvdata/graphics-dbus-audio.args | 33 +
.../qemuxml2argvdata/graphics-dbus-audio.xml | 45 +
.../graphics-dbus-chardev.args | 32 +
.../graphics-dbus-chardev.xml | 43 +
.../graphics-dbus-clipboard.args | 31 +
.../graphics-dbus-clipboard.xml | 35 +
tests/qemuxml2argvdata/graphics-dbus-p2p.args | 30 +
tests/qemuxml2argvdata/graphics-dbus-p2p.xml | 33 +
.../graphics-dbus-usbredir.args | 34 +
.../graphics-dbus-usbredir.xml | 30 +
tests/qemuxml2argvdata/graphics-dbus.args | 30 +
tests/qemuxml2argvdata/graphics-dbus.xml | 33 +
tests/qemuxml2argvtest.c | 22 +
.../graphics-dbus-address.xml | 1 +
.../graphics-dbus-audio.xml | 1 +
.../graphics-dbus-chardev.xml | 1 +
.../graphics-dbus-clipboard.xml | 1 +
.../qemuxml2xmloutdata/graphics-dbus-p2p.xml | 1 +
tests/qemuxml2xmloutdata/graphics-dbus.xml | 1 +
tests/qemuxml2xmltest.c | 20 +
tools/virsh-domain.c | 366 +-
52 files changed, 42981 insertions(+), 192 deletions(-)
create mode 100644 tests/domaincapsdata/qemu_7.0.0-q35.x86_64.xml
create mode 100644 tests/domaincapsdata/qemu_7.0.0-tcg.x86_64.xml
create mode 100644 tests/domaincapsdata/qemu_7.0.0.x86_64.xml
create mode 100644 tests/qemucapabilitiesdata/caps_7.0.0.x86_64.replies
create mode 100644 tests/qemucapabilitiesdata/caps_7.0.0.x86_64.xml
create mode 100644 tests/qemuxml2argvdata/graphics-dbus-address.args
create mode 100644 tests/qemuxml2argvdata/graphics-dbus-address.xml
create mode 100644 tests/qemuxml2argvdata/graphics-dbus-audio.args
create mode 100644 tests/qemuxml2argvdata/graphics-dbus-audio.xml
create mode 100644 tests/qemuxml2argvdata/graphics-dbus-chardev.args
create mode 100644 tests/qemuxml2argvdata/graphics-dbus-chardev.xml
create mode 100644 tests/qemuxml2argvdata/graphics-dbus-clipboard.args
create mode 100644 tests/qemuxml2argvdata/graphics-dbus-clipboard.xml
create mode 100644 tests/qemuxml2argvdata/graphics-dbus-p2p.args
create mode 100644 tests/qemuxml2argvdata/graphics-dbus-p2p.xml
create mode 100644 tests/qemuxml2argvdata/graphics-dbus-usbredir.args
create mode 100644 tests/qemuxml2argvdata/graphics-dbus-usbredir.xml
create mode 100644 tests/qemuxml2argvdata/graphics-dbus.args
create mode 100644 tests/qemuxml2argvdata/graphics-dbus.xml
create mode 120000 tests/qemuxml2xmloutdata/graphics-dbus-address.xml
create mode 120000 tests/qemuxml2xmloutdata/graphics-dbus-audio.xml
create mode 120000 tests/qemuxml2xmloutdata/graphics-dbus-chardev.xml
create mode 120000 tests/qemuxml2xmloutdata/graphics-dbus-clipboard.xml
create mode 120000 tests/qemuxml2xmloutdata/graphics-dbus-p2p.xml
create mode 120000 tests/qemuxml2xmloutdata/graphics-dbus.xml
--
2.34.1.8.g35151cf07204
[libvirt PATCH 0/2] Fix tests on macOS
by Daniel P. Berrangé
NB, even with this done there is still a latent bug affecting all
platforms. When we call g_source_destroy the removal is async but
we usually close the FD synchronously. This leads to poll'ing on
a bad FD.
We've actually had this race in libvirt since day 1 - our previous
poll() event loop implementation (before glib) would also implement the
virEventRemoveHandle call asynchronously, by just writing to a pipe to
interrupt the other thread in poll(), just as glib does.
We've always relied on parallelism to make this async call almost
instantaneous but under the right load conditions we trigger the
POLLNVAL / EBADF issue.
The only viable solution to this that I see is to only ever
call g_source_destroy + g_source_unref from an idle callback,
to guarantee that poll() isn't currently running.
We know this has a bit of a perf hit on code that is sensitive
to main loop iterations, so we tried to avoid it where possible
right now:
https://listman.redhat.com/archives/libvir-list/2020-November/212411.html
I think we'll need to revisit this though, as known EBADF problems
are not good.
Daniel P. Berrangé (2):
ci: print stack traces on macOS if any tests fail
tests: don't set G_DEBUG=fatal-warnings on macOS
ci/cirrus/build.yml | 2 +-
tests/meson.build | 17 ++++++++++++++++-
2 files changed, 17 insertions(+), 2 deletions(-)
--
2.35.1
[libvirt PATCH] conf: ensure only one vgpu has ramfb enabled
by Jonathon Jongsma
Validate the domain configuration to ensure that if more than one vgpu is
assigned to a domain, only one of them has 'ramfb' enabled.
This was never a supported configuration: QEMU failed confusingly when
attempting to start such a domain. This change provides a clearer
error message.
https://bugzilla.redhat.com/show_bug.cgi?id=2079760
Signed-off-by: Jonathon Jongsma <jjongsma(a)redhat.com>
---
src/conf/domain_validate.c | 19 ++++++++--
...v-display-ramfb-multiple.x86_64-latest.err | 1 +
.../hostdev-mdev-display-ramfb-multiple.xml | 38 +++++++++++++++++++
tests/qemuxml2argvtest.c | 1 +
4 files changed, 56 insertions(+), 3 deletions(-)
create mode 100644 tests/qemuxml2argvdata/hostdev-mdev-display-ramfb-multiple.x86_64-latest.err
create mode 100644 tests/qemuxml2argvdata/hostdev-mdev-display-ramfb-multiple.xml
diff --git a/src/conf/domain_validate.c b/src/conf/domain_validate.c
index 68190fc3e2..f4d6e6e0c5 100644
--- a/src/conf/domain_validate.c
+++ b/src/conf/domain_validate.c
@@ -1194,20 +1194,33 @@ virDomainDefDuplicateDiskInfoValidate(const virDomainDef *def)
}
static int
-virDomainDefDuplicateHostdevInfoValidate(const virDomainDef *def)
+virDomainDefHostdevValidate(const virDomainDef *def)
{
size_t i;
size_t j;
+ bool ramfbEnabled = false;
for (i = 0; i < def->nhostdevs; i++) {
+ virDomainHostdevDef *dev = def->hostdevs[i];
+
for (j = i + 1; j < def->nhostdevs; j++) {
- if (virDomainHostdevMatch(def->hostdevs[i],
+ if (virDomainHostdevMatch(dev,
def->hostdevs[j])) {
virReportError(VIR_ERR_XML_ERROR, "%s",
_("Hostdev already exists in the domain configuration"));
return -1;
}
}
+
+ if (dev->source.subsys.type == VIR_DOMAIN_HOSTDEV_SUBSYS_TYPE_MDEV &&
+ dev->source.subsys.u.mdev.ramfb == VIR_TRISTATE_SWITCH_ON) {
+ if (ramfbEnabled) {
+ virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
+ _("Only one vgpu device can have 'ramfb' enabled"));
+ return -1;
+ }
+ ramfbEnabled = true;
+ }
}
return 0;
@@ -1664,7 +1677,7 @@ virDomainDefValidateInternal(const virDomainDef *def,
if (virDomainDefDuplicateDiskInfoValidate(def) < 0)
return -1;
- if (virDomainDefDuplicateHostdevInfoValidate(def) < 0)
+ if (virDomainDefHostdevValidate(def) < 0)
return -1;
if (virDomainDefDuplicateDriveAddressesValidate(def) < 0)
diff --git a/tests/qemuxml2argvdata/hostdev-mdev-display-ramfb-multiple.x86_64-latest.err b/tests/qemuxml2argvdata/hostdev-mdev-display-ramfb-multiple.x86_64-latest.err
new file mode 100644
index 0000000000..07ce47abf7
--- /dev/null
+++ b/tests/qemuxml2argvdata/hostdev-mdev-display-ramfb-multiple.x86_64-latest.err
@@ -0,0 +1 @@
+unsupported configuration: Only one vgpu device can have 'ramfb' enabled
diff --git a/tests/qemuxml2argvdata/hostdev-mdev-display-ramfb-multiple.xml b/tests/qemuxml2argvdata/hostdev-mdev-display-ramfb-multiple.xml
new file mode 100644
index 0000000000..1fe53726b5
--- /dev/null
+++ b/tests/qemuxml2argvdata/hostdev-mdev-display-ramfb-multiple.xml
@@ -0,0 +1,38 @@
+<domain type='qemu'>
+ <name>QEMUGuest2</name>
+ <uuid>c7a5fdbd-edaf-9455-926a-d65c16db1809</uuid>
+ <memory unit='KiB'>219136</memory>
+ <currentMemory unit='KiB'>219136</currentMemory>
+ <vcpu placement='static'>1</vcpu>
+ <os>
+ <type arch='x86_64' machine='pc'>hvm</type>
+ <boot dev='hd'/>
+ </os>
+ <clock offset='utc'/>
+ <on_poweroff>destroy</on_poweroff>
+ <on_reboot>restart</on_reboot>
+ <on_crash>destroy</on_crash>
+ <devices>
+ <emulator>/usr/bin/qemu-system-x86_64</emulator>
+ <controller type='usb' index='0'>
+ </controller>
+ <controller type='pci' index='0' model='pci-root'/>
+ <controller type='ide' index='0'>
+ </controller>
+ <graphics type='vnc'/>
+ <hostdev mode='subsystem' type='mdev' model='vfio-pci' display='on' ramfb='on'>
+ <source>
+ <address uuid='53764d0e-85a0-42b4-af5c-2046b460b1dc'/>
+ </source>
+ </hostdev>
+ <hostdev mode='subsystem' type='mdev' model='vfio-pci' display='on' ramfb='on'>
+ <source>
+ <address uuid='53764d0e-85a0-42b4-af5c-2046b460b1dd'/>
+ </source>
+ </hostdev>
+ <video>
+ <model type='qxl' heads='1'/>
+ </video>
+ <memballoon model='none'/>
+ </devices>
+</domain>
diff --git a/tests/qemuxml2argvtest.c b/tests/qemuxml2argvtest.c
index 41fd032f19..e334c59eb7 100644
--- a/tests/qemuxml2argvtest.c
+++ b/tests/qemuxml2argvtest.c
@@ -1896,6 +1896,7 @@ mymain(void)
QEMU_CAPS_DEVICE_VFIO_PCI,
QEMU_CAPS_VFIO_PCI_DISPLAY);
DO_TEST_CAPS_LATEST("hostdev-mdev-display-ramfb");
+ DO_TEST_CAPS_LATEST_PARSE_ERROR("hostdev-mdev-display-ramfb-multiple");
DO_TEST_PARSE_ERROR("hostdev-vfio-zpci-wrong-arch",
QEMU_CAPS_DEVICE_VFIO_PCI);
DO_TEST("hostdev-vfio-zpci",
--
2.35.1
[libvirt RFCv5 00/27] multifd save restore prototype
by Claudio Fontana
This is the multifd save prototype in semi-functional state,
with both save and restore minimally functional.
There are still quite a few rough edges.
KNOWN ISSUES:
1) this applies only to virsh save and virsh restore for now
(no managed save etc).
2) error handling is not good yet, especially during resume,
errors may leave behind a qemu process and such.
May need some help finding all of these cases
3) the compression part is demonstrative only; more attention needs
to be paid to compression options, and to detecting the
compression used to store the multifd saves.
...
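Usage-wise, the new paths would be exercised roughly like this (a sketch;
the option names are taken from the patch titles and may differ in the
final version):

```
# hypothetical invocations -- option names may not match the final series
virsh save mydomain /var/tmp/mydomain.sav --parallel
virsh restore /var/tmp/mydomain.sav --parallel
```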
changes from v4:
* runIO renamed to virFileDiskCopy and rethought arguments
* renamed new APIs from ...ParametersFlags to ...Params
* introduce the new virDomainSaveParams and virDomainRestoreParams
without any additional parameters, so they can be upstreamed first.
* solved the issue in the gendispatch.pl script generating code that
was missing the conn parameter.
---
changes from v3:
* reordered series to have all helper-related change at the start
* solved all reported issues from ninja test, including documentation
* fixed most broken migration capabilities code (likely still imperfect)
* added G_GNUC_UNUSED as needed
* after multifd restore, added what I think were the missing operations:
qemuProcessRefreshState(),
qemuProcessStartCPUs() - most importantly,
virDomainObjSave()
The domain now starts running after restore without further encouragement
* removed the sleep(10) from the multifd-helper
changes from v2:
* added ability to restore the VM from disk using multifd
* fixed the multifd-helper to work in both directions,
assuming the need to listen for save, and connect for restore.
* fixed a large number of bugs, and probably introduced some :-)
Thanks for your thoughts,
Claudio
Claudio Fontana (27):
iohelper: introduce new struct to carry copy operation parameters
iohelper: refactor copy operation as a separate function
iohelper: move runIO function to virfile.c
virfile: rename runIO to virFileDiskCopy
virfile: change virFileDiskCopy arguments to extend beyond stdin,
stdout
virfile: add comment about the use of SEEK_END in virFileDiskCopy
multifd-helper: new helper for parallel save/restore
libvirt: introduce virDomainSaveParams public API
libvirt: introduce virDomainRestoreParams public API
remote: Add RPC support for the virDomainSaveParams API
gendispatch: add DomainRestoreParams as requiring conn argument
remote: Add RPC support for the virDomainRestoreParams API
qemu: add implementation for virDomainSaveParams API
qemu: add implementation for virDomainRestoreParams API
libvirt: add new VIR_DOMAIN_SAVE_PARALLEL flag and parameter
qemu: add stub support for VIR_DOMAIN_SAVE_PARALLEL in save
qemu: add stub support for VIR_DOMAIN_SAVE_PARALLEL in restore
qemu: saveimage: introduce virQEMUSaveFd
qemu: wire up saveimage code with the multifd helper
qemu: capabilities: add multifd to the probed migration capabilities
qemu: implement qemuMigrationSrcToFilesMultiFd
qemu: add parameter to qemuMigrationDstRun to skip waiting
qemu: implement qemuSaveImageLoadMultiFd
tools: add parallel parameter to virsh save command
tools: add parallel parameter to virsh restore command
docs: update refs to virDomainSaveParams and virDomainRestoreParams
qemu: add migration parameter multifd-compression
docs/formatsnapshot.rst | 5 +-
docs/manpages/virsh.rst | 34 +-
include/libvirt/libvirt-domain.h | 49 ++
po/POTFILES.in | 1 +
src/driver-hypervisor.h | 14 +
src/libvirt-domain.c | 99 +++-
src/libvirt_private.syms | 1 +
src/libvirt_public.syms | 6 +
src/qemu/qemu_capabilities.c | 6 +
src/qemu/qemu_capabilities.h | 4 +
src/qemu/qemu_driver.c | 239 +++++++--
src/qemu/qemu_migration.c | 155 ++++--
src/qemu/qemu_migration.h | 16 +-
src/qemu/qemu_migration_params.c | 71 ++-
src/qemu/qemu_migration_params.h | 15 +
src/qemu/qemu_process.c | 3 +-
src/qemu/qemu_process.h | 5 +-
src/qemu/qemu_saveimage.c | 496 ++++++++++++++----
src/qemu/qemu_saveimage.h | 49 +-
src/qemu/qemu_snapshot.c | 6 +-
src/remote/remote_driver.c | 2 +
src/remote/remote_protocol.x | 29 +-
src/remote_protocol-structs | 17 +
src/rpc/gendispatch.pl | 5 +-
src/util/iohelper.c | 162 +-----
src/util/meson.build | 19 +
src/util/multifd-helper.c | 249 +++++++++
src/util/virfile.c | 218 ++++++++
src/util/virfile.h | 2 +
src/util/virthread.c | 5 +
src/util/virthread.h | 1 +
.../caps_4.0.0.aarch64.xml | 1 +
.../qemucapabilitiesdata/caps_4.0.0.ppc64.xml | 1 +
.../caps_4.0.0.riscv32.xml | 1 +
.../caps_4.0.0.riscv64.xml | 1 +
.../qemucapabilitiesdata/caps_4.0.0.s390x.xml | 1 +
.../caps_4.0.0.x86_64.xml | 1 +
.../caps_4.1.0.x86_64.xml | 1 +
.../caps_4.2.0.aarch64.xml | 1 +
.../qemucapabilitiesdata/caps_4.2.0.ppc64.xml | 1 +
.../qemucapabilitiesdata/caps_4.2.0.s390x.xml | 1 +
.../caps_4.2.0.x86_64.xml | 1 +
.../caps_5.0.0.aarch64.xml | 2 +
.../qemucapabilitiesdata/caps_5.0.0.ppc64.xml | 2 +
.../caps_5.0.0.riscv64.xml | 2 +
.../caps_5.0.0.x86_64.xml | 2 +
.../qemucapabilitiesdata/caps_5.1.0.sparc.xml | 2 +
.../caps_5.1.0.x86_64.xml | 2 +
.../caps_5.2.0.aarch64.xml | 2 +
.../qemucapabilitiesdata/caps_5.2.0.ppc64.xml | 2 +
.../caps_5.2.0.riscv64.xml | 2 +
.../qemucapabilitiesdata/caps_5.2.0.s390x.xml | 2 +
.../caps_5.2.0.x86_64.xml | 2 +
.../caps_6.0.0.aarch64.xml | 2 +
.../qemucapabilitiesdata/caps_6.0.0.s390x.xml | 2 +
.../caps_6.0.0.x86_64.xml | 2 +
.../caps_6.1.0.x86_64.xml | 2 +
.../caps_6.2.0.aarch64.xml | 2 +
.../qemucapabilitiesdata/caps_6.2.0.ppc64.xml | 2 +
.../caps_6.2.0.x86_64.xml | 2 +
.../caps_7.0.0.aarch64.xml | 2 +
.../qemucapabilitiesdata/caps_7.0.0.ppc64.xml | 2 +
.../caps_7.0.0.x86_64.xml | 2 +
tools/virsh-domain.c | 96 +++-
64 files changed, 1686 insertions(+), 446 deletions(-)
create mode 100644 src/util/multifd-helper.c
--
2.34.1
[libvirt RFCv4 00/20] multifd save restore prototype
by Claudio Fontana
This is the multifd save prototype in semi-functional state,
with both save and restore minimally functional.
There are still quite a few rough edges.
KNOWN ISSUES:
1) this applies only to virsh save and virsh restore for now
(no managed save etc).
2) the .pl scripts to generate the headers for the new APIs
do not reliably work for me, for the Restore case. I get:
src/remote/remote_daemon_dispatch_stubs.h:10080:9:
error: too few arguments to function ‘virDomainRestoreParametersFlags’
if (virDomainRestoreParametersFlags(params, nparams, args->flags) < 0)
To work around this I had to fixup the header manually to add the
conn parameter like this:
...(conn, params, nparams, args->flags) < 0)
3) error handling is not good yet, especially during resume,
errors may leave behind a qemu process and such.
May need some help finding all of these cases...
...
changes from v3:
* reordered series to have all helper-related change at the start
* solved all reported issues from ninja test, including documentation
* fixed most broken migration capabilities code (likely still imperfect)
* added G_GNUC_UNUSED as needed
* after multifd restore, added what I think were the missing operations:
qemuProcessRefreshState(),
qemuProcessStartCPUs() - most importantly,
virDomainObjSave()
The domain now starts running after restore without further encouragement
* removed the sleep(10) from the multifd-helper
changes from v2:
* added ability to restore the VM from disk using multifd
* fixed the multifd-helper to work in both directions,
assuming the need to listen for save, and connect for restore.
* fixed a large number of bugs, and probably introduced some :-)
Thanks for your thoughts,
Claudio
Claudio Fontana (20):
iohelper: introduce new struct to carry copy operation parameters
iohelper: refactor copy operation as a separate function
iohelper: move runIO function to a separate module
runio: add arguments to extend use beyond just stdin and stdout
multifd-helper: new helper for parallel save/restore
libvirt: introduce virDomainSaveParametersFlags public API
libvirt: introduce virDomainRestoreParametersFlags public API
remote: Add RPC support for the virDomainSaveParametersFlags API
remote: Add RPC support for the virDomainRestoreParametersFlags API
qemu: add a stub for virDomainSaveParametersFlags API
qemu: add a stub for virDomainRestoreParametersFlags API
qemu: saveimage: introduce virQEMUSaveFd
qemu: wire up saveimage code with the multifd helper
qemu: capabilities: add multifd to the probed migration capabilities
qemu: implement qemuMigrationSrcToFilesMultiFd
qemu: add parameter to qemuMigrationDstRun to skip waiting
qemu: implement qemuSaveImageLoadMultiFd
tools: add parallel parameter to virsh save command
tools: add parallel parameter to virsh restore command
qemu: add migration parameter multifd-compression
docs/manpages/virsh.rst | 34 +-
include/libvirt/libvirt-domain.h | 49 ++
po/POTFILES.in | 2 +
src/driver-hypervisor.h | 14 +
src/libvirt-domain.c | 99 ++++
src/libvirt_private.syms | 1 +
src/libvirt_public.syms | 6 +
src/qemu/qemu_capabilities.c | 6 +
src/qemu/qemu_capabilities.h | 4 +
src/qemu/qemu_driver.c | 239 +++++++--
src/qemu/qemu_migration.c | 155 ++++--
src/qemu/qemu_migration.h | 16 +-
src/qemu/qemu_migration_params.c | 71 ++-
src/qemu/qemu_migration_params.h | 15 +
src/qemu/qemu_process.c | 3 +-
src/qemu/qemu_process.h | 5 +-
src/qemu/qemu_saveimage.c | 496 ++++++++++++++----
src/qemu/qemu_saveimage.h | 49 +-
src/qemu/qemu_snapshot.c | 6 +-
src/remote/remote_driver.c | 2 +
src/remote/remote_protocol.x | 29 +-
src/remote_protocol-structs | 17 +
src/util/iohelper.c | 150 +-----
src/util/meson.build | 15 +
src/util/multifd-helper.c | 250 +++++++++
src/util/runio.c | 214 ++++++++
src/util/runio.h | 38 ++
src/util/virthread.c | 5 +
src/util/virthread.h | 1 +
.../caps_4.0.0.aarch64.xml | 1 +
.../qemucapabilitiesdata/caps_4.0.0.ppc64.xml | 1 +
.../caps_4.0.0.riscv32.xml | 1 +
.../caps_4.0.0.riscv64.xml | 1 +
.../qemucapabilitiesdata/caps_4.0.0.s390x.xml | 1 +
.../caps_4.0.0.x86_64.xml | 1 +
.../caps_4.1.0.x86_64.xml | 1 +
.../caps_4.2.0.aarch64.xml | 1 +
.../qemucapabilitiesdata/caps_4.2.0.ppc64.xml | 1 +
.../qemucapabilitiesdata/caps_4.2.0.s390x.xml | 1 +
.../caps_4.2.0.x86_64.xml | 1 +
.../caps_5.0.0.aarch64.xml | 2 +
.../qemucapabilitiesdata/caps_5.0.0.ppc64.xml | 2 +
.../caps_5.0.0.riscv64.xml | 2 +
.../caps_5.0.0.x86_64.xml | 2 +
.../qemucapabilitiesdata/caps_5.1.0.sparc.xml | 2 +
.../caps_5.1.0.x86_64.xml | 2 +
.../caps_5.2.0.aarch64.xml | 2 +
.../qemucapabilitiesdata/caps_5.2.0.ppc64.xml | 2 +
.../caps_5.2.0.riscv64.xml | 2 +
.../qemucapabilitiesdata/caps_5.2.0.s390x.xml | 2 +
.../caps_5.2.0.x86_64.xml | 2 +
.../caps_6.0.0.aarch64.xml | 2 +
.../qemucapabilitiesdata/caps_6.0.0.s390x.xml | 2 +
.../caps_6.0.0.x86_64.xml | 2 +
.../caps_6.1.0.x86_64.xml | 2 +
.../caps_6.2.0.aarch64.xml | 2 +
.../qemucapabilitiesdata/caps_6.2.0.ppc64.xml | 2 +
.../caps_6.2.0.x86_64.xml | 2 +
.../caps_7.0.0.aarch64.xml | 2 +
.../qemucapabilitiesdata/caps_7.0.0.ppc64.xml | 2 +
.../caps_7.0.0.x86_64.xml | 2 +
tools/virsh-domain.c | 96 +++-
62 files changed, 1713 insertions(+), 427 deletions(-)
create mode 100644 src/util/multifd-helper.c
create mode 100644 src/util/runio.c
create mode 100644 src/util/runio.h
--
2.34.1
[PATCH v2 2/5] hw/nvme: do not auto-generate eui64
by Klaus Jensen
From: Klaus Jensen <k.jensen(a)samsung.com>
We cannot provide auto-generated unique or persistent namespace
identifiers (EUI64, NGUID, UUID) easily. Since 6.1, namespaces have been
assigned a generated EUI64 of the form "52:54:00:<namespace counter>".
This will be unique within a QEMU instance, but not globally.
Revert the automatic assignment and immediately deprecate the
compatibility parameter. Users can opt in to this with the
`eui64-default=on` device parameter or set it explicitly with
`eui64=UINT64`.
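For example, an explicit identifier would be set like this (a sketch; the
drive setup is omitted and the value is arbitrary):

```
-device nvme-ns,drive=nvm1,eui64=0x5254000123456789
```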
Cc: libvir-list(a)redhat.com
Signed-off-by: Klaus Jensen <k.jensen(a)samsung.com>
---
docs/about/deprecated.rst | 7 +++++++
hw/core/machine.c | 4 +++-
hw/nvme/ns.c | 2 +-
3 files changed, 11 insertions(+), 2 deletions(-)
diff --git a/docs/about/deprecated.rst b/docs/about/deprecated.rst
index 896e5a97abbd..c65faa5ab4ad 100644
--- a/docs/about/deprecated.rst
+++ b/docs/about/deprecated.rst
@@ -356,6 +356,13 @@ contains native support for this feature and thus use of the option
ROM approach is obsolete. The native SeaBIOS support can be activated
by using ``-machine graphics=off``.
+``-device nvme-ns,eui64-default=on|off`` (since 7.1)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+In QEMU versions 6.1, 6.2 and 7.0, the ``nvme-ns`` device generates an EUI-64
+identifier that is not globally unique. If an EUI-64 identifier is required, the
+user must set it explicitly using the ``nvme-ns`` device parameter ``eui64``.
+
Block device options
''''''''''''''''''''
diff --git a/hw/core/machine.c b/hw/core/machine.c
index cb9bbc844d24..1e2108d95f11 100644
--- a/hw/core/machine.c
+++ b/hw/core/machine.c
@@ -37,7 +37,9 @@
#include "hw/virtio/virtio.h"
#include "hw/virtio/virtio-pci.h"
-GlobalProperty hw_compat_7_0[] = {};
+GlobalProperty hw_compat_7_0[] = {
+ { "nvme-ns", "eui64-default", "on"},
+};
const size_t hw_compat_7_0_len = G_N_ELEMENTS(hw_compat_7_0);
GlobalProperty hw_compat_6_2[] = {
diff --git a/hw/nvme/ns.c b/hw/nvme/ns.c
index af6504fad2d8..06a04131f192 100644
--- a/hw/nvme/ns.c
+++ b/hw/nvme/ns.c
@@ -641,7 +641,7 @@ static Property nvme_ns_props[] = {
DEFINE_PROP_SIZE("zoned.zrwas", NvmeNamespace, params.zrwas, 0),
DEFINE_PROP_SIZE("zoned.zrwafg", NvmeNamespace, params.zrwafg, -1),
DEFINE_PROP_BOOL("eui64-default", NvmeNamespace, params.eui64_default,
- true),
+ false),
DEFINE_PROP_END_OF_LIST(),
};
--
2.35.1
Entering freeze for libvirt-8.3.0
by Jiri Denemark
I have just tagged v8.3.0-rc1 in the repository and pushed signed
tarballs and source RPMs to https://libvirt.org/sources/
Please give the release candidate some testing and in case you find a
serious issue which should have a fix in the upcoming release, feel
free to reply to this thread to make sure the issue is more visible.
If you have not done so yet, please update NEWS.rst to document any
significant change you made since the last release.
Thanks,
Jirka
Re: [RFC 00/18] vfio: Adopt iommufd
by Alex Williamson
[Cc +libvirt folks]
On Thu, 14 Apr 2022 03:46:52 -0700
Yi Liu <yi.l.liu(a)intel.com> wrote:
> With the introduction of iommufd[1], the linux kernel provides a generic
> interface for userspace drivers to propagate their DMA mappings to kernel
> for assigned devices. This series does the porting of the VFIO devices
> onto the /dev/iommu uapi and let it coexist with the legacy implementation.
> Other devices like vdpa, vfio mdev, etc. are not considered yet.
>
> For vfio devices, the new interface is tied with device fd and iommufd
> as the iommufd solution is device-centric. This is different from legacy
> vfio which is group-centric. To support both interfaces in QEMU, this
> series introduces the iommu backend concept in the form of different
> container classes. The existing vfio container is named legacy container
> (equivalent with legacy iommu backend in this series), while the new
> iommufd based container is named as iommufd container (may also be mentioned
> as iommufd backend in this series). The two backend types have their own
> way to setup secure context and dma management interface. Below diagram
> shows how it looks like with both BEs.
>
> VFIO AddressSpace/Memory
> +-------+ +----------+ +-----+ +-----+
> | pci | | platform | | ap | | ccw |
> +---+---+ +----+-----+ +--+--+ +--+--+ +----------------------+
> | | | | | AddressSpace |
> | | | | +------------+---------+
> +---V-----------V-----------V--------V----+ /
> | VFIOAddressSpace | <------------+
> | | | MemoryListener
> | VFIOContainer list |
> +-------+----------------------------+----+
> | |
> | |
> +-------V------+ +--------V----------+
> | iommufd | | vfio legacy |
> | container | | container |
> +-------+------+ +--------+----------+
> | |
> | /dev/iommu | /dev/vfio/vfio
> | /dev/vfio/devices/vfioX | /dev/vfio/$group_id
> Userspace | |
> ===========+============================+================================
> Kernel | device fd |
> +---------------+ | group/container fd
> | (BIND_IOMMUFD | | (SET_CONTAINER/SET_IOMMU)
> | ATTACH_IOAS) | | device fd
> | | |
> | +-------V------------V-----------------+
> iommufd | | vfio |
> (map/unmap | +---------+--------------------+-------+
> ioas_copy) | | | map/unmap
> | | |
> +------V------+ +-----V------+ +------V--------+
> | iommfd core | | device | | vfio iommu |
> +-------------+ +------------+ +---------------+
>
> [Secure Context setup]
> - iommufd BE: uses device fd and iommufd to setup secure context
> (bind_iommufd, attach_ioas)
> - vfio legacy BE: uses group fd and container fd to setup secure context
> (set_container, set_iommu)
> [Device access]
> - iommufd BE: device fd is opened through /dev/vfio/devices/vfioX
> - vfio legacy BE: device fd is retrieved from group fd ioctl
> [DMA Mapping flow]
> - VFIOAddressSpace receives MemoryRegion add/del via MemoryListener
> - VFIO populates DMA map/unmap via the container BEs
> *) iommufd BE: uses iommufd
> *) vfio legacy BE: uses container fd
>
> This series qomifies the VFIOContainer object which acts as a base class
> for a container. This base class is derived into the legacy VFIO container
> and the new iommufd based container. The base class implements generic code
> such as code related to memory_listener and address space management whereas
> the derived class implements callbacks that depend on the kernel user space
> being used.
>
> The selection of the backend is made on a device basis using the new
> iommufd option (on/off/auto). By default the iommufd backend is selected
> if supported by the host and by QEMU (iommufd KConfig). This option is
> currently available only for the vfio-pci device. For other types of
> devices, it does not yet exist and the legacy BE is chosen by default.
I've discussed this a bit with Eric, but let me propose a different
command line interface. Libvirt generally likes to pass file
descriptors to QEMU rather than grant it access to those files
directly. This was problematic with vfio-pci because libvirt can't
easily know when QEMU will want to grab another /dev/vfio/vfio
container. Therefore we abandoned this approach and instead libvirt
grants file permissions.
However, with iommufd there's no reason that QEMU ever needs more than
a single instance of /dev/iommufd and we're using per device vfio file
descriptors, so it seems like a good time to revisit this.
The interface I was considering would be to add an iommufd object to
QEMU, so we might have a:
-device iommufd[,fd=#][,id=foo]
For non-libvirt usage this would have the ability to open /dev/iommufd
itself if an fd is not provided. This object could be shared with
other iommufd users in the VM and maybe we'd allow multiple instances
for more esoteric use cases. [NB, maybe this should be a -object rather than
-device since the iommufd is not a guest visible device?]
The vfio-pci device might then become:
-device vfio-pci[,host=DDDD:BB:DD.f][,sysfsdev=/sys/path/to/device][,fd=#][,iommufd=foo]
So essentially we can specify the device via host, sysfsdev, or passing
an fd to the vfio device file. When an iommufd object is specified,
"foo" in the example above, each of those options would use the
vfio-device access mechanism, essentially the same as iommufd=on in
your example. With the fd passing option, an iommufd object would be
required and necessarily use device level access.
In your example, the iommufd=auto seems especially troublesome for
libvirt because QEMU is going to have different locked memory
requirements based on whether we're using type1 or iommufd, where the
latter resolves the duplicate accounting issues. libvirt needs to know
deterministically which backend is being used, which this proposal seems
to provide, while at the same time bringing us more in line with fd
passing. Thoughts? Thanks,
Alex
[libvirt RFC v3 00/19] multifd save restore prototype
by Claudio Fontana
This is the multifd save prototype in its first semi-functional state,
now with both save and restore minimally functional.
Still, as mentioned before, there are likely quite a few rough edges;
let me know what you think about this possible option.
changes from v2 are many, mainly:
* added ability to restore the VM from disk using multifd
* fixed the multifd-helper to work in both directions,
assuming the need to listen for save, and connect for restore.
* fixed a large number of bugs, and probably introduced some :-)
KNOWN ISSUES:
1) this applies only to virsh save and virsh restore for now
(no managed save etc).
2) the .pl scripts to generate the headers for the new APIs
do not reliably work for me, for the Restore case. I get:
src/remote/remote_daemon_dispatch_stubs.h:10080:9:
error: too few arguments to function ‘virDomainRestoreParametersFlags’
if (virDomainRestoreParametersFlags(params, nparams, args->flags) < 0)
To work around this I had to fixup the header manually to look like:
...(conn, params, nparams, args->flags) < 0)
Thanks for your thoughts,
Claudio
Claudio Fontana (19):
iohelper: introduce new struct to carry copy operation parameters
iohelper: refactor copy operation as a separate function
libvirt: introduce virDomainSaveParametersFlags public API
libvirt: introduce virDomainRestoreParametersFlags public API
remote: Add RPC support for the virDomainSaveParametersFlags API
remote: Add RPC support for the virDomainRestoreParametersFlags API
qemu: add a stub for virDomainSaveParametersFlags API
qemu: add a stub for virDomainRestoreParametersFlags API
qemu: saveimage: introduce virQEMUSaveFd
iohelper: move runIO function to a separate module
runio: add arguments to extend use beyond just stdin and stdout
multifd-helper: new helper for parallel save/restore
qemu: wire up saveimage code with the multifd helper
qemu: implement qemuMigrationSrcToFilesMultiFd
qemu: add parameter to qemuMigrationDstRun to skip waiting
qemu: implement qemuSaveImageLoadMultiFd
tools: add parallel parameter to virsh save command
tools: add parallel parameter to virsh restore command
qemu: add migration parameter multifd-compression
docs/manpages/virsh.rst | 34 ++-
include/libvirt/libvirt-domain.h | 13 +
src/driver-hypervisor.h | 14 +
src/libvirt-domain.c | 99 +++++++
src/libvirt_private.syms | 1 +
src/libvirt_public.syms | 6 +
src/qemu/qemu_capabilities.c | 5 +
src/qemu/qemu_capabilities.h | 3 +
src/qemu/qemu_driver.c | 233 +++++++++++----
src/qemu/qemu_migration.c | 155 ++++++----
src/qemu/qemu_migration.h | 16 +-
src/qemu/qemu_migration_params.c | 71 +++--
src/qemu/qemu_migration_params.h | 15 +
src/qemu/qemu_process.c | 3 +-
src/qemu/qemu_process.h | 5 +-
src/qemu/qemu_saveimage.c | 480 ++++++++++++++++++++++++-------
src/qemu/qemu_saveimage.h | 49 +++-
src/qemu/qemu_snapshot.c | 6 +-
src/remote/remote_driver.c | 2 +
src/remote/remote_protocol.x | 29 +-
src/remote_protocol-structs | 17 ++
src/util/iohelper.c | 150 +---------
src/util/meson.build | 15 +
src/util/multifd-helper.c | 250 ++++++++++++++++
src/util/runio.c | 214 ++++++++++++++
src/util/runio.h | 38 +++
src/util/virthread.c | 5 +
src/util/virthread.h | 1 +
tools/virsh-domain.c | 96 +++++--
29 files changed, 1599 insertions(+), 426 deletions(-)
create mode 100644 src/util/multifd-helper.c
create mode 100644 src/util/runio.c
create mode 100644 src/util/runio.h
--
2.34.1
[PATCH] build-aux: remove duplicated syntax check filter for 'select'
by Daniel P. Berrangé
Signed-off-by: Daniel P. Berrangé <berrange(a)redhat.com>
---
build-aux/syntax-check.mk | 3 ---
1 file changed, 3 deletions(-)
diff --git a/build-aux/syntax-check.mk b/build-aux/syntax-check.mk
index 66ebc3e066..6664763faf 100644
--- a/build-aux/syntax-check.mk
+++ b/build-aux/syntax-check.mk
@@ -1621,9 +1621,6 @@ exclude_file_name_regexp--sc_prohibit_newline_at_end_of_diagnostic = \
exclude_file_name_regexp--sc_prohibit_nonreentrant = \
^((po|tests|examples)/|docs/.*(py|js|html\.in|.rst)|run.in$$|tools/wireshark/util/genxdrstub\.pl|tools/virt-login-shell\.c$$)
-exclude_file_name_regexp--sc_prohibit_select = \
- ^build-aux/syntax-check\.mk$$
-
exclude_file_name_regexp--sc_prohibit_canonicalize_file_name = \
^(build-aux/syntax-check\.mk|tests/virfilemock\.c)$$
--
2.35.1