[libvirt PATCH] kbase: More info on firmware change for existing VMs
by Andrea Bolognani
The need to remove the <loader> and <nvram> elements in order
to make the firmware autoselection process kick in again is
not exactly intuitive, so document it explicitly.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
---
docs/kbase/secureboot.rst | 30 +++++++++++++++++++++++++-----
1 file changed, 25 insertions(+), 5 deletions(-)
diff --git a/docs/kbase/secureboot.rst b/docs/kbase/secureboot.rst
index 4340454a7b..6c22b08d22 100644
--- a/docs/kbase/secureboot.rst
+++ b/docs/kbase/secureboot.rst
@@ -72,16 +72,36 @@ relevant documentation
Changing an existing VM
=======================
-Once the VM has been created, updating the XML configuration as
-described above is **not** enough to change the Secure Boot status:
-the NVRAM file associated with the VM has to be regenerated from its
-template as well.
+When a VM is defined, libvirt will pick the firmware that best
+satisfies the provided criteria and record this information for use
+on subsequent boots. The resulting XML configuration will look like
+this:
+
+::
+
+ <os firmware='efi'>
+ <firmware>
+ <feature enabled='yes' name='enrolled-keys'/>
+ <feature enabled='yes' name='secure-boot'/>
+ </firmware>
+ <loader readonly='yes' secure='yes' type='pflash'>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</loader>
+ <nvram template='/usr/share/edk2/ovmf/OVMF_VARS.secboot.fd'>/var/lib/libvirt/qemu/nvram/vm_VARS.fd</nvram>
+ </os>
+
+In order to force libvirt to repeat the firmware autoselection
+process, it's necessary to remove the ``<loader>`` and ``<nvram>``
+elements. Failure to do so will likely result in an error.
+
+Note that updating the XML configuration as described above is
+**not** enough to change the Secure Boot status: the NVRAM file
+associated with the VM has to be regenerated from its template as
+well.
In order to do that, update the XML and then start the VM with
::
- $ virsh start $vm --reset-nvram
+ $ virsh start vm --reset-nvram
This option is only available starting with libvirt 8.1.0, so if your
version of libvirt is older than that you will have to delete the
--
2.41.0
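For context, a minimal sketch of what the edited <os> element from the
documentation above might look like once the <loader> and <nvram> elements
have been removed, so that firmware autoselection runs again on the next
boot (the feature values shown are illustrative and depend on the desired
configuration):

    <os firmware='efi'>
      <firmware>
        <feature enabled='no' name='enrolled-keys'/>
        <feature enabled='no' name='secure-boot'/>
      </firmware>
    </os>

On the next "virsh start vm --reset-nvram", libvirt would then pick a
firmware matching the requested features and regenerate the NVRAM file from
the new template.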
[PATCH-for-8.2] target/nios2: Deprecate the Nios II architecture
by Philippe Mathieu-Daudé
See commit 9ba1caf510 ("MAINTAINERS: Mark the Nios II CPU as orphan");
the last contribution from Chris was in 2012 [1] and from Marek in 2018 [2].
[1] https://lore.kernel.org/qemu-devel/1352607539-10455-2-git-send-email-crwu...
[2] https://lore.kernel.org/qemu-devel/805fc7b5-03f0-56d4-abfd-ed010d4fa769@d...
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
---
docs/about/deprecated.rst | 15 +++++++++++++++
hw/nios2/10m50_devboard.c | 1 +
hw/nios2/generic_nommu.c | 1 +
3 files changed, 17 insertions(+)
diff --git a/docs/about/deprecated.rst b/docs/about/deprecated.rst
index 78550c07bf..f7aa556294 100644
--- a/docs/about/deprecated.rst
+++ b/docs/about/deprecated.rst
@@ -236,6 +236,16 @@ it. Since all recent x86 hardware from the past >10 years is capable of the
64-bit x86 extensions, a corresponding 64-bit OS should be used instead.
+System emulator CPUs
+--------------------
+
+Nios II CPU (since 8.2)
+'''''''''''''''''''''''
+
+The Nios II architecture is orphaned. The ``nios2`` guest CPU support is
+deprecated and will be removed in a future version of QEMU.
+
+
System emulator machines
------------------------
@@ -254,6 +264,11 @@ These old machine types are quite neglected nowadays and thus might have
various pitfalls with regards to live migration. Use a newer machine type
instead.
+Nios II ``10m50-ghrd`` and ``nios2-generic-nommu`` machines (since 8.2)
+'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
+
+The Nios II architecture is orphaned.
+
Backend options
---------------
diff --git a/hw/nios2/10m50_devboard.c b/hw/nios2/10m50_devboard.c
index 952a0dc33e..6cb32f777b 100644
--- a/hw/nios2/10m50_devboard.c
+++ b/hw/nios2/10m50_devboard.c
@@ -160,6 +160,7 @@ static void nios2_10m50_ghrd_class_init(ObjectClass *oc, void *data)
mc->desc = "Altera 10M50 GHRD Nios II design";
mc->init = nios2_10m50_ghrd_init;
mc->is_default = true;
+ mc->deprecation_reason = "Nios II architecture is deprecated";
object_class_property_add_bool(oc, "vic", get_vic, set_vic);
object_class_property_set_description(oc, "vic",
diff --git a/hw/nios2/generic_nommu.c b/hw/nios2/generic_nommu.c
index 48edb3ae37..defa16953f 100644
--- a/hw/nios2/generic_nommu.c
+++ b/hw/nios2/generic_nommu.c
@@ -95,6 +95,7 @@ static void nios2_generic_nommu_machine_init(struct MachineClass *mc)
{
mc->desc = "Generic NOMMU Nios II design";
mc->init = nios2_generic_nommu_init;
+ mc->deprecation_reason = "Nios II architecture is deprecated";
}
DEFINE_MACHINE("nios2-generic-nommu", nios2_generic_nommu_machine_init);
--
2.41.0
[libvirt PATCH] rpc: Pass GPG_TTY and TERM environment variables
by Andrea Bolognani
gpg-agent can be used instead of ssh-agent to authenticate
against an SSH server, but in order to do so the GPG_TTY and
TERM environment variables need to be passed through.
For obvious reasons, we avoid doing that when no_tty=1 is found
in the connection URI.
https://bugs.debian.org/843863
https://gitlab.com/libvirt/libvirt/-/merge_requests/290
Thanks: Guilhem Moulin <guilhem@guilhem.org>
Thanks: Kunwu Chan <chentao@kylinos.cn>
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
---
src/rpc/virnetsocket.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/src/rpc/virnetsocket.c b/src/rpc/virnetsocket.c
index b58f7a6b8f..151077c2dd 100644
--- a/src/rpc/virnetsocket.c
+++ b/src/rpc/virnetsocket.c
@@ -843,6 +843,11 @@ int virNetSocketNewConnectSSH(const char *nodename,
virCommandAddEnvPass(cmd, "OPENSSL_CONF");
virCommandAddEnvPass(cmd, "DISPLAY");
virCommandAddEnvPass(cmd, "XAUTHORITY");
+ if (!noTTY) {
+ /* Needed for gpg-agent's curses-based authentication prompt */
+ virCommandAddEnvPass(cmd, "GPG_TTY");
+ virCommandAddEnvPass(cmd, "TERM");
+ }
virCommandClearCaps(cmd);
if (service)
--
2.41.0
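For illustration, a connection URI carrying no_tty=1 (hostname purely
hypothetical), for which the two variables above would intentionally not be
passed through, could look like:

    $ virsh -c 'qemu+ssh://example.org/system?no_tty=1' list --all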
[libvirt PATCH] qemu_snapshot: fix reverting to inactive snapshot
by Pavel Hrdina
When reverting to an inactive snapshot, updating the domain definition needs
to happen after the new overlays are created; otherwise qemu-img will
correctly fail with the error:
Trying to create an image with the same filename as the backing file
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
---
src/qemu/qemu_snapshot.c | 15 ++++++++++-----
1 file changed, 10 insertions(+), 5 deletions(-)
diff --git a/src/qemu/qemu_snapshot.c b/src/qemu/qemu_snapshot.c
index 1962ba4027..5fc0b82e79 100644
--- a/src/qemu/qemu_snapshot.c
+++ b/src/qemu/qemu_snapshot.c
@@ -2157,13 +2157,20 @@ qemuSnapshotRevertExternalInactive(virDomainObj *vm,
{
virQEMUDriver *driver = QEMU_DOMAIN_PRIVATE(vm)->driver;
g_autoptr(virBitmap) created = NULL;
+ int ret = -1;
created = virBitmapNew(tmpsnapdef->ndisks);
+ if (qemuSnapshotCreateQcow2Files(driver, domdef, tmpsnapdef, created) < 0)
+ goto cleanup;
+
if (qemuSnapshotDomainDefUpdateDisk(domdef, tmpsnapdef, false) < 0)
- return -1;
+ goto cleanup;
- if (qemuSnapshotCreateQcow2Files(driver, domdef, tmpsnapdef, created) < 0) {
+ ret = 0;
+
+ cleanup:
+ if (ret < 0 && created) {
ssize_t bit = -1;
virErrorPtr err = NULL;
@@ -2180,11 +2187,9 @@ qemuSnapshotRevertExternalInactive(virDomainObj *vm,
}
virErrorRestore(&err);
-
- return -1;
}
- return 0;
+ return ret;
}
--
2.41.0
[libvirt PATCH] qemu_snapshot: fix snapshot deletion that had multiple children
by Pavel Hrdina
When we revert to a non-leaf snapshot and create a new branch or branches,
the overlay stored in the snapshot metadata is no longer usable as a disk
source for deleting that snapshot. We need to look at other places to
figure out the correct storage source.
Fixes: https://gitlab.com/libvirt/libvirt/-/issues/534
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
---
src/qemu/qemu_snapshot.c | 46 ++++++++++++++++++++++++++++++++++++++--
1 file changed, 44 insertions(+), 2 deletions(-)
diff --git a/src/qemu/qemu_snapshot.c b/src/qemu/qemu_snapshot.c
index 1962ba4027..ee06e72b11 100644
--- a/src/qemu/qemu_snapshot.c
+++ b/src/qemu/qemu_snapshot.c
@@ -2748,6 +2748,44 @@ qemuSnapshotGetDisksWithBackingStore(virDomainObj *vm,
}
+/**
+ * qemuSnapshotExternalGetSnapDiskSrc:
+ * @vm: domain object
+ * @snap: snapshot object
+ * @snapDisk: disk definition from snapshot
+ *
+ * Try to get actual disk source for @snapDisk as the source stored in
+ * snapshot metadata is not always the correct source we need to work with.
+ * This happens mainly after reverting to non-leaf snapshot and creating
+ * new branch with new snapshot.
+ *
+ * Returns disk source on success, NULL on error.
+ */
+static virStorageSource *
+qemuSnapshotExternalGetSnapDiskSrc(virDomainObj *vm,
+ virDomainMomentObj *snap,
+ virDomainSnapshotDiskDef *snapDisk)
+{
+ virDomainDiskDef *disk = NULL;
+
+ /* Should never happen when deleting an external snapshot as we do not
+ * support this specific case for now. */
+ if (snap->nchildren > 1)
+ return snapDisk->src;
+
+ if (snap->first_child) {
+ disk = qemuDomainDiskByName(snap->first_child->def->dom, snapDisk->name);
+ } else if (virDomainSnapshotGetCurrent(vm->snapshots) == snap) {
+ disk = qemuDomainDiskByName(vm->def, snapDisk->name);
+ }
+
+ if (disk)
+ return disk->src;
+
+ return snapDisk->src;
+}
+
+
/**
* qemuSnapshotDeleteExternalPrepareData:
* @vm: domain object
@@ -2802,18 +2840,22 @@ qemuSnapshotDeleteExternalPrepareData(virDomainObj *vm,
}
if (data->merge) {
+ virStorageSource *snapDiskSrc = NULL;
+
data->domDisk = qemuDomainDiskByName(vm->def, snapDisk->name);
if (!data->domDisk)
return -1;
+ snapDiskSrc = qemuSnapshotExternalGetSnapDiskSrc(vm, snap, data->snapDisk);
+
if (virDomainObjIsActive(vm)) {
data->diskSrc = virStorageSourceChainLookupBySource(data->domDisk->src,
- data->snapDisk->src,
+ snapDiskSrc,
&data->prevDiskSrc);
if (!data->diskSrc)
return -1;
- if (!virStorageSourceIsSameLocation(data->diskSrc, data->snapDisk->src)) {
+ if (!virStorageSourceIsSameLocation(data->diskSrc, snapDiskSrc)) {
virReportError(VIR_ERR_OPERATION_FAILED, "%s",
_("VM disk source and snapshot disk source are not the same"));
return -1;
--
2.41.0
[libvirt PATCH 0/3] Introduce VIR_MIGRATE_ASSUME_SHARED_STORAGE
by Andrea Bolognani
This was initially motivated by a KubeVirt issue[1] concerning
integration with the Portworx storage provider, but it turns out to be
more generally applicable: since mounting an NFS share on the same
host that is exporting it is known to cause issues and is therefore
not recommended, we need a way to allow migration in such a
configuration while still not going quite as far as
VIR_MIGRATE_UNSAFE does and losing all handrails.
[1] https://issues.redhat.com/browse/CNV-34322
Andrea Bolognani (3):
include: Introduce VIR_MIGRATE_ASSUME_SHARED_STORAGE
qemu: Implement VIR_MIGRATE_ASSUME_SHARED_STORAGE support
virsh: Wire up VIR_MIGRATE_ASSUME_SHARED_STORAGE support
docs/manpages/virsh.rst | 5 ++++-
include/libvirt/libvirt-domain.h | 14 ++++++++++++++
src/qemu/qemu_migration.c | 5 +++++
src/qemu/qemu_migration.h | 1 +
tools/virsh-domain.c | 5 +++++
5 files changed, 29 insertions(+), 1 deletion(-)
--
2.41.0
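A rough sketch of how a client could request the proposed behaviour from the
C API once the series lands, using the flag name introduced in patch 1 (the
destination URI and the other flags are arbitrary placeholders):

    #include <stdio.h>
    #include <libvirt/libvirt.h>

    /* 'dom' obtained earlier, e.g. via virDomainLookupByName() on the
     * source connection; connection setup omitted for brevity. */
    unsigned long flags = VIR_MIGRATE_LIVE |
                          VIR_MIGRATE_PEER2PEER |
                          VIR_MIGRATE_ASSUME_SHARED_STORAGE;

    if (virDomainMigrateToURI(dom, "qemu+ssh://dst.example.org/system",
                              flags, NULL, 0) < 0)
        fprintf(stderr, "migration failed: %s\n", virGetLastErrorMessage());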
[v2 0/4] Support for dirty-limit live migration
by Hyman Huang
Hello, this is version 2 of the series, just making sure that it
won't be overlooked.
Please review,
thanks, Yong
v2:
- rebase on master
v1:
In qemu >= 8.1, the dirty-limit functionality for live
migration was included. In the live migration scenario,
it implements forced convergence using the dirty-limit
approach, which results in more reliable read performance.
This patchset adds a straightforward dirty-limit capability
for live migration. Users might not care about other
dirty-limit arguments like "x-vcpu-dirty-limit-period"
or "vcpu-dirty-limit", so they are not exposed to libvirt
and the default values are kept. For more details about
dirty-limit, please see the following reference:
https://lore.kernel.org/qemu-devel/169024923116.19090.10825599068950039132-0@git.sr.ht/
Hyman Huang (4):
Add VIR_MIGRATE_DIRTY_LIMIT flag
qemu_migration: Implement VIR_MIGRATE_DIRTY_LIMIT flag
virsh: Add support for VIR_MIGRATE_DIRTY_LIMIT flag
NEWS: document support for dirty-limit live migration
NEWS.rst | 8 ++++++++
docs/manpages/virsh.rst | 10 +++++++++-
include/libvirt/libvirt-domain.h | 5 +++++
src/libvirt-domain.c | 8 ++++++++
src/qemu/qemu_migration.c | 8 ++++++++
src/qemu/qemu_migration.h | 1 +
src/qemu/qemu_migration_params.c | 6 ++++++
src/qemu/qemu_migration_params.h | 1 +
tools/virsh-domain.c | 10 ++++++++++
9 files changed, 56 insertions(+), 1 deletion(-)
--
2.39.1
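A minimal sketch of how a management application might opt in to the new
capability through the typed-parameter migration API once merged (the
destination URI is a placeholder and error checking is omitted):

    /* dconn is an open connection to the destination host;
     * dom is the domain being migrated. */
    virTypedParameterPtr params = NULL;
    int nparams = 0, maxparams = 0;
    virDomainPtr ddom;

    virTypedParamsAddString(&params, &nparams, &maxparams,
                            VIR_MIGRATE_PARAM_URI, "tcp://dst.example.org");

    ddom = virDomainMigrate3(dom, dconn, params, nparams,
                             VIR_MIGRATE_LIVE | VIR_MIGRATE_DIRTY_LIMIT);
    virTypedParamsFree(params, nparams);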
[libvirt PATCH 00/19] RFC: Add versioned CPUs to libvirt
by Jonathon Jongsma
This is not necessarily intended as a finished proposal, but as a discussion
starter. I mentioned in an email last week that for SEV-SNP support we will
need to be able to specify versioned CPU models that are not yet supported by
libvirt. Rather than just adding a versioned CPU or two that would satisfy my
immediate need, I decided to try to add versioned CPUs in a standard way. This
involves adding the concept of an 'alias' for a CPU model in libvirt. Qemu
already has the concept of a CPU alias for a versioned CPU. In fact, libvirt
already provides a select subset of these as configurable CPU models (e.g.
'EPYC-IBPB'). After this patchset, these aliased CPU versions would be
configurable by either their versioned name ('EPYC-v2') or their alias
('EPYC-IBPB'). And it would also provide non-aliased CPU versions as options
within libvirt ('EPYC-v4').
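For reference, once a versioned model is exposed by libvirt as described
above, selecting it from a domain definition should presumably look no
different from selecting any other named model, e.g. (sketch only, using one
of the versioned names mentioned above):

    <cpu mode='custom' match='exact'>
      <model fallback='forbid'>EPYC-v2</model>
    </cpu>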
Assuming that we want to offer all versioned CPUs like this, there are two
approaches to naming. I chose to maintain the existing names (e.g. EPYC-IBPB)
as the primary name where available, and use the versioned name (EPYC-v2) as
the alias. However, some CPU models don't have an alias, so their versioned
name would be their primary name. So we have the following set of 'EPYC' CPU
models:
- EPYC (alias = EPYC-v1)
- EPYC-IBPB (alias = EPYC-v2)
- EPYC-v3 (no alias)
- EPYC-v4 (no alias)
An alternative approach is something more like:
- EPYC-v1 (alias = EPYC)
- EPYC-v2 (alias = EPYC-IBPB)
- EPYC-v3 (no alias)
- EPYC-v4 (no alias)
The naming of the second set is more consistent, but it could result in slight
changes to behavior. For example, any call to cpuDecode() that returned
EPYC-IBPB in the past might now return EPYC-v2. These two CPUs are just two
different names for the same model, so I'm not sure it would result in any
issues. But in this patch series I went with the first approach since it
maintained stability and resulted in less churn in the test output.
Note also that there are a couple of patches that update existing CPU models by
re-running this script against the current qemu source code. For example, the
patch "cpu_map: Update EPYC cpu definitions from qemu" results in some minor
changes to the existing EPYC CPUs by adding a couple of feature flags. In
theory, it seems like a good idea for our libvirt models to match how the model
is defined in qemu, but I admit that I don't have a great understanding of
whether this will result in undesirable side-effects. I'm hoping those of you
with deeper knowledge will tell me why this is or is not a good idea. In the
same vein, I've included the last patch of the series showing what it would
look like if we regenerated all of the other CPU definitions from the qemu
source code.
Jonathon Jongsma (19):
cpu_map: update script to generate versioned CPUs
cpu: handle aliases in CPU definitions
cpu_map: Update EPYC cpu definitions from qemu
cpu_map: Add versioned EPYC CPUs
cpu_map: Add versioned Intel Nehalem CPUs
cpu_map: Add versioned Intel Westmere CPUs
cpu_map: Add versioned Intel SandyBridge CPUs
cpu_map: Add versioned Intel IvyBridge CPUs
cpu_map: Add versioned Intel Haswell CPUs
cpu_map: Add versioned Intel Broadwell CPUs
cpu_map: Add versioned Intel Skylake CPUs
cpu_map: Add versioned Intel Cascadelake CPUs
cpu_map: Add versioned Intel Icelake CPUs
cpu_map: Add versioned Intel Cooperlake CPUs
cpu_map: Add versioned Intel Snowridge CPUs
cpu_map: Add versioned Intel SapphireRapids CPUs
cpu_map: Add versioned Dhyana CPUs
cpu: advertise CPU aliases
NOMERGE: RFC: regenerate all cpu definitions
src/cpu/cpu_x86.c | 88 ++++++++----
src/cpu_map/index.xml | 22 +++
src/cpu_map/meson.build | 22 +++
src/cpu_map/sync_qemu_models_i386.py | 44 ++++--
src/cpu_map/x86_Broadwell-IBRS.xml | 19 ++-
src/cpu_map/x86_Broadwell-noTSX-IBRS.xml | 19 ++-
src/cpu_map/x86_Broadwell-noTSX.xml | 19 ++-
src/cpu_map/x86_Broadwell.xml | 18 ++-
src/cpu_map/x86_Cascadelake-Server-noTSX.xml | 19 ++-
src/cpu_map/x86_Cascadelake-Server-v2.xml | 93 +++++++++++++
src/cpu_map/x86_Cascadelake-Server-v4.xml | 91 +++++++++++++
src/cpu_map/x86_Cascadelake-Server-v5.xml | 92 +++++++++++++
src/cpu_map/x86_Cascadelake-Server.xml | 11 +-
src/cpu_map/x86_Cooperlake-v2.xml | 98 ++++++++++++++
src/cpu_map/x86_Cooperlake.xml | 9 +-
src/cpu_map/x86_Dhyana-v2.xml | 81 ++++++++++++
src/cpu_map/x86_Dhyana.xml | 13 +-
src/cpu_map/x86_EPYC-Genoa.xml | 7 +
src/cpu_map/x86_EPYC-IBPB.xml | 14 +-
src/cpu_map/x86_EPYC-Milan-v2.xml | 108 +++++++++++++++
src/cpu_map/x86_EPYC-Milan.xml | 8 ++
src/cpu_map/x86_EPYC-Rome-v2.xml | 93 +++++++++++++
src/cpu_map/x86_EPYC-Rome-v3.xml | 95 +++++++++++++
src/cpu_map/x86_EPYC-Rome-v4.xml | 94 +++++++++++++
src/cpu_map/x86_EPYC-Rome.xml | 9 ++
src/cpu_map/x86_EPYC-v3.xml | 87 ++++++++++++
src/cpu_map/x86_EPYC-v4.xml | 88 ++++++++++++
src/cpu_map/x86_EPYC.xml | 13 +-
src/cpu_map/x86_Haswell-IBRS.xml | 20 ++-
src/cpu_map/x86_Haswell-noTSX-IBRS.xml | 20 ++-
src/cpu_map/x86_Haswell-noTSX.xml | 20 ++-
src/cpu_map/x86_Haswell.xml | 18 ++-
src/cpu_map/x86_Icelake-Server-noTSX.xml | 14 +-
src/cpu_map/x86_Icelake-Server-v3.xml | 103 +++++++++++++++
src/cpu_map/x86_Icelake-Server-v4.xml | 108 +++++++++++++++
src/cpu_map/x86_Icelake-Server-v5.xml | 109 +++++++++++++++
src/cpu_map/x86_Icelake-Server-v6.xml | 109 +++++++++++++++
src/cpu_map/x86_Icelake-Server.xml | 11 +-
src/cpu_map/x86_IvyBridge-IBRS.xml | 13 +-
src/cpu_map/x86_IvyBridge.xml | 12 +-
src/cpu_map/x86_Nehalem-IBRS.xml | 14 +-
src/cpu_map/x86_Nehalem.xml | 13 +-
src/cpu_map/x86_SandyBridge-IBRS.xml | 14 +-
src/cpu_map/x86_SandyBridge.xml | 13 +-
src/cpu_map/x86_SapphireRapids-v2.xml | 125 ++++++++++++++++++
src/cpu_map/x86_SapphireRapids.xml | 7 +
src/cpu_map/x86_Skylake-Client-IBRS.xml | 16 ++-
src/cpu_map/x86_Skylake-Client-noTSX-IBRS.xml | 18 +--
src/cpu_map/x86_Skylake-Client-v4.xml | 77 +++++++++++
src/cpu_map/x86_Skylake-Client.xml | 15 ++-
src/cpu_map/x86_Skylake-Server-IBRS.xml | 12 +-
src/cpu_map/x86_Skylake-Server-noTSX-IBRS.xml | 14 +-
src/cpu_map/x86_Skylake-Server-v4.xml | 83 ++++++++++++
src/cpu_map/x86_Skylake-Server-v5.xml | 85 ++++++++++++
src/cpu_map/x86_Skylake-Server.xml | 12 +-
src/cpu_map/x86_Snowridge-v2.xml | 78 +++++++++++
src/cpu_map/x86_Snowridge-v3.xml | 80 +++++++++++
src/cpu_map/x86_Snowridge-v4.xml | 78 +++++++++++
src/cpu_map/x86_Snowridge.xml | 10 +-
src/cpu_map/x86_Westmere-IBRS.xml | 13 +-
src/cpu_map/x86_Westmere.xml | 14 +-
...4-baseline-Westmere+Nehalem-migratable.xml | 4 +-
...86_64-baseline-Westmere+Nehalem-result.xml | 4 +-
.../x86_64-baseline-features-expanded.xml | 1 +
.../x86_64-baseline-features-result.xml | 2 -
.../x86_64-baseline-simple-expanded.xml | 3 +
.../x86_64-cpuid-Atom-P5362-guest.xml | 3 +-
.../x86_64-cpuid-Atom-P5362-host.xml | 3 -
.../x86_64-cpuid-Atom-P5362-json.xml | 3 +-
.../x86_64-cpuid-Cooperlake-host.xml | 3 +-
.../x86_64-cpuid-Core-i5-2500-guest.xml | 3 -
.../x86_64-cpuid-Core-i5-2500-host.xml | 3 -
.../x86_64-cpuid-Core-i5-2500-json.xml | 3 -
.../x86_64-cpuid-Core-i5-2540M-guest.xml | 3 -
.../x86_64-cpuid-Core-i5-2540M-host.xml | 3 -
.../x86_64-cpuid-Core-i5-2540M-json.xml | 3 -
.../x86_64-cpuid-Core-i5-4670T-guest.xml | 6 +-
.../x86_64-cpuid-Core-i5-4670T-host.xml | 19 ++-
.../x86_64-cpuid-Core-i5-4670T-json.xml | 6 +-
.../x86_64-cpuid-Core-i5-650-guest.xml | 3 -
.../x86_64-cpuid-Core-i5-650-host.xml | 3 -
.../x86_64-cpuid-Core-i5-650-json.xml | 3 -
.../x86_64-cpuid-Core-i5-6600-guest.xml | 1 +
.../x86_64-cpuid-Core-i5-6600-host.xml | 1 +
.../x86_64-cpuid-Core-i5-6600-json.xml | 1 +
.../x86_64-cpuid-Core-i7-2600-guest.xml | 3 -
.../x86_64-cpuid-Core-i7-2600-host.xml | 3 -
.../x86_64-cpuid-Core-i7-2600-json.xml | 3 -
...6_64-cpuid-Core-i7-2600-xsaveopt-guest.xml | 2 -
...86_64-cpuid-Core-i7-2600-xsaveopt-host.xml | 9 +-
...86_64-cpuid-Core-i7-2600-xsaveopt-json.xml | 2 -
.../x86_64-cpuid-Core-i7-3520M-guest.xml | 2 -
.../x86_64-cpuid-Core-i7-3520M-host.xml | 2 -
.../x86_64-cpuid-Core-i7-3740QM-guest.xml | 2 +-
.../x86_64-cpuid-Core-i7-3740QM-host.xml | 13 +-
.../x86_64-cpuid-Core-i7-3740QM-json.xml | 2 +-
.../x86_64-cpuid-Core-i7-3770-guest.xml | 2 -
.../x86_64-cpuid-Core-i7-3770-host.xml | 2 -
.../x86_64-cpuid-Core-i7-3770-json.xml | 2 +-
.../x86_64-cpuid-Core-i7-4510U-guest.xml | 6 -
.../x86_64-cpuid-Core-i7-4510U-host.xml | 3 -
.../x86_64-cpuid-Core-i7-4510U-json.xml | 6 -
.../x86_64-cpuid-Core-i7-4600U-guest.xml | 6 -
.../x86_64-cpuid-Core-i7-4600U-host.xml | 6 -
.../x86_64-cpuid-Core-i7-4600U-json.xml | 6 -
.../x86_64-cpuid-Core-i7-5600U-arat-guest.xml | 6 -
.../x86_64-cpuid-Core-i7-5600U-arat-host.xml | 6 -
.../x86_64-cpuid-Core-i7-5600U-arat-json.xml | 6 +-
.../x86_64-cpuid-Core-i7-5600U-guest.xml | 6 -
.../x86_64-cpuid-Core-i7-5600U-host.xml | 6 -
.../x86_64-cpuid-Core-i7-5600U-ibrs-guest.xml | 6 -
.../x86_64-cpuid-Core-i7-5600U-ibrs-host.xml | 6 -
.../x86_64-cpuid-Core-i7-5600U-ibrs-json.xml | 6 -
.../x86_64-cpuid-Core-i7-5600U-json.xml | 6 -
.../x86_64-cpuid-Core-i7-7600U-guest.xml | 1 +
.../x86_64-cpuid-Core-i7-7600U-host.xml | 1 +
.../x86_64-cpuid-Core-i7-7600U-json.xml | 1 +
.../x86_64-cpuid-Core-i7-7700-guest.xml | 1 +
.../x86_64-cpuid-Core-i7-7700-host.xml | 1 +
.../x86_64-cpuid-Core-i7-7700-json.xml | 1 +
.../x86_64-cpuid-Core-i7-8550U-guest.xml | 5 +-
.../x86_64-cpuid-Core-i7-8550U-host.xml | 4 +-
.../x86_64-cpuid-Core-i7-8550U-json.xml | 5 +-
.../x86_64-cpuid-Core-i7-8700-guest.xml | 1 +
.../x86_64-cpuid-Core-i7-8700-host.xml | 1 +
.../x86_64-cpuid-Core-i7-8700-json.xml | 1 +
.../x86_64-cpuid-EPYC-7502-32-Core-guest.xml | 1 -
.../x86_64-cpuid-EPYC-7502-32-Core-host.xml | 5 +-
.../x86_64-cpuid-EPYC-7502-32-Core-json.xml | 1 -
.../x86_64-cpuid-EPYC-7601-32-Core-guest.xml | 9 +-
.../x86_64-cpuid-EPYC-7601-32-Core-host.xml | 2 -
..._64-cpuid-EPYC-7601-32-Core-ibpb-guest.xml | 2 -
...6_64-cpuid-EPYC-7601-32-Core-ibpb-host.xml | 8 +-
...6_64-cpuid-EPYC-7601-32-Core-ibpb-json.xml | 3 -
.../x86_64-cpuid-EPYC-7601-32-Core-json.xml | 3 -
..._64-cpuid-Hygon-C86-7185-32-core-guest.xml | 5 +-
...6_64-cpuid-Hygon-C86-7185-32-core-host.xml | 5 +-
...6_64-cpuid-Hygon-C86-7185-32-core-json.xml | 3 -
.../x86_64-cpuid-Ice-Lake-Server-guest.xml | 1 +
.../x86_64-cpuid-Ice-Lake-Server-host.xml | 1 +
.../x86_64-cpuid-Ice-Lake-Server-json.xml | 2 +-
.../x86_64-cpuid-Pentium-P6100-guest.xml | 10 +-
...4-cpuid-Ryzen-7-1800X-Eight-Core-guest.xml | 9 +-
...64-cpuid-Ryzen-7-1800X-Eight-Core-host.xml | 2 -
...64-cpuid-Ryzen-7-1800X-Eight-Core-json.xml | 3 -
...6_64-cpuid-Ryzen-9-3900X-12-Core-guest.xml | 1 -
...86_64-cpuid-Ryzen-9-3900X-12-Core-host.xml | 1 -
...86_64-cpuid-Ryzen-9-3900X-12-Core-json.xml | 1 -
.../x86_64-cpuid-Xeon-E3-1225-v5-guest.xml | 1 +
.../x86_64-cpuid-Xeon-E3-1225-v5-host.xml | 1 +
.../x86_64-cpuid-Xeon-E3-1225-v5-json.xml | 1 +
.../x86_64-cpuid-Xeon-E3-1245-v5-guest.xml | 1 +
.../x86_64-cpuid-Xeon-E3-1245-v5-host.xml | 1 +
.../x86_64-cpuid-Xeon-E3-1245-v5-json.xml | 1 +
.../x86_64-cpuid-Xeon-E5-2609-v3-guest.xml | 6 -
.../x86_64-cpuid-Xeon-E5-2609-v3-host.xml | 6 -
.../x86_64-cpuid-Xeon-E5-2609-v3-json.xml | 6 -
.../x86_64-cpuid-Xeon-E5-2623-v4-guest.xml | 6 -
.../x86_64-cpuid-Xeon-E5-2623-v4-host.xml | 6 -
.../x86_64-cpuid-Xeon-E5-2623-v4-json.xml | 6 -
.../x86_64-cpuid-Xeon-E5-2630-v3-guest.xml | 6 -
.../x86_64-cpuid-Xeon-E5-2630-v3-host.xml | 6 -
.../x86_64-cpuid-Xeon-E5-2630-v3-json.xml | 6 +-
.../x86_64-cpuid-Xeon-E5-2630-v4-guest.xml | 6 -
.../x86_64-cpuid-Xeon-E5-2630-v4-host.xml | 6 -
.../x86_64-cpuid-Xeon-E5-2630-v4-json.xml | 6 -
.../x86_64-cpuid-Xeon-E5-2650-guest.xml | 3 -
.../x86_64-cpuid-Xeon-E5-2650-host.xml | 3 -
.../x86_64-cpuid-Xeon-E5-2650-json.xml | 3 -
.../x86_64-cpuid-Xeon-E5-2650-v3-guest.xml | 6 -
.../x86_64-cpuid-Xeon-E5-2650-v3-host.xml | 6 -
.../x86_64-cpuid-Xeon-E5-2650-v3-json.xml | 6 +-
.../x86_64-cpuid-Xeon-E5-2650-v4-guest.xml | 6 -
.../x86_64-cpuid-Xeon-E5-2650-v4-host.xml | 6 -
.../x86_64-cpuid-Xeon-E5-2650-v4-json.xml | 6 -
.../x86_64-cpuid-Xeon-E7-4820-guest.xml | 3 -
.../x86_64-cpuid-Xeon-E7-4820-host.xml | 3 -
.../x86_64-cpuid-Xeon-E7-4820-json.xml | 4 +-
.../x86_64-cpuid-Xeon-E7-4830-guest.xml | 3 -
.../x86_64-cpuid-Xeon-E7-4830-host.xml | 3 -
.../x86_64-cpuid-Xeon-E7-4830-json.xml | 3 -
.../x86_64-cpuid-Xeon-E7-8890-v3-guest.xml | 6 -
.../x86_64-cpuid-Xeon-E7-8890-v3-host.xml | 6 -
.../x86_64-cpuid-Xeon-E7-8890-v3-json.xml | 6 -
.../x86_64-cpuid-Xeon-E7540-guest.xml | 1 -
.../x86_64-cpuid-Xeon-E7540-host.xml | 1 -
.../x86_64-cpuid-Xeon-E7540-json.xml | 1 -
.../x86_64-cpuid-Xeon-Gold-5115-guest.xml | 2 +-
.../x86_64-cpuid-Xeon-Gold-5115-host.xml | 2 +-
.../x86_64-cpuid-Xeon-Gold-5115-json.xml | 2 +
.../x86_64-cpuid-Xeon-Gold-6130-guest.xml | 2 +-
.../x86_64-cpuid-Xeon-Gold-6130-host.xml | 2 +-
.../x86_64-cpuid-Xeon-Gold-6130-json.xml | 2 +-
.../x86_64-cpuid-Xeon-Gold-6148-guest.xml | 3 +-
.../x86_64-cpuid-Xeon-Gold-6148-host.xml | 3 +-
.../x86_64-cpuid-Xeon-Gold-6148-json.xml | 3 +-
.../x86_64-cpuid-Xeon-Platinum-8268-guest.xml | 9 +-
.../x86_64-cpuid-Xeon-Platinum-8268-host.xml | 9 +-
.../x86_64-cpuid-Xeon-Platinum-8268-json.xml | 2 +-
.../x86_64-cpuid-Xeon-Platinum-9242-guest.xml | 9 +-
.../x86_64-cpuid-Xeon-Platinum-9242-host.xml | 9 +-
.../x86_64-cpuid-Xeon-Platinum-9242-json.xml | 9 +-
.../x86_64-cpuid-Xeon-W3520-guest.xml | 1 -
.../x86_64-cpuid-Xeon-W3520-host.xml | 1 -
.../x86_64-cpuid-Xeon-W3520-json.xml | 1 -
...id-baseline-Broadwell-IBRS+Cascadelake.xml | 6 -
..._64-cpuid-baseline-Cascadelake+Icelake.xml | 9 +-
...puid-baseline-Cascadelake+Skylake-IBRS.xml | 2 +-
..._64-cpuid-baseline-Cascadelake+Skylake.xml | 3 +-
...-cpuid-baseline-Cooperlake+Cascadelake.xml | 9 +-
...6_64-cpuid-baseline-Cooperlake+Icelake.xml | 9 +-
.../x86_64-cpuid-baseline-EPYC+Rome.xml | 3 -
.../x86_64-cpuid-baseline-Haswell+Skylake.xml | 6 -
...-baseline-Haswell-noTSX-IBRS+Broadwell.xml | 6 -
...seline-Haswell-noTSX-IBRS+Skylake-IBRS.xml | 6 -
...id-baseline-Haswell-noTSX-IBRS+Skylake.xml | 6 -
.../x86_64-cpuid-baseline-Ryzen+Rome.xml | 3 -
...4-cpuid-baseline-Skylake-Client+Server.xml | 1 +
.../domaincapsdata/qemu_4.2.0-q35.x86_64.xml | 33 +++++
.../domaincapsdata/qemu_4.2.0-tcg.x86_64.xml | 32 +++++
tests/domaincapsdata/qemu_4.2.0.x86_64.xml | 33 +++++
.../domaincapsdata/qemu_5.0.0-q35.x86_64.xml | 37 ++++++
.../domaincapsdata/qemu_5.0.0-tcg.x86_64.xml | 36 +++++
tests/domaincapsdata/qemu_5.0.0.x86_64.xml | 37 ++++++
.../domaincapsdata/qemu_5.1.0-q35.x86_64.xml | 40 +++++-
.../domaincapsdata/qemu_5.1.0-tcg.x86_64.xml | 39 ++++++
tests/domaincapsdata/qemu_5.1.0.x86_64.xml | 40 +++++-
.../domaincapsdata/qemu_5.2.0-q35.x86_64.xml | 40 +++++-
.../domaincapsdata/qemu_5.2.0-tcg.x86_64.xml | 39 ++++++
tests/domaincapsdata/qemu_5.2.0.x86_64.xml | 40 +++++-
.../domaincapsdata/qemu_6.0.0-q35.x86_64.xml | 42 +++++-
.../domaincapsdata/qemu_6.0.0-tcg.x86_64.xml | 41 ++++++
tests/domaincapsdata/qemu_6.0.0.x86_64.xml | 42 +++++-
.../domaincapsdata/qemu_6.1.0-q35.x86_64.xml | 49 ++++++-
.../domaincapsdata/qemu_6.1.0-tcg.x86_64.xml | 48 +++++++
tests/domaincapsdata/qemu_6.1.0.x86_64.xml | 49 ++++++-
.../domaincapsdata/qemu_6.2.0-q35.x86_64.xml | 50 ++++++-
.../domaincapsdata/qemu_6.2.0-tcg.x86_64.xml | 49 +++++++
tests/domaincapsdata/qemu_6.2.0.x86_64.xml | 50 ++++++-
.../domaincapsdata/qemu_7.0.0-q35.x86_64.xml | 51 ++++++-
.../domaincapsdata/qemu_7.0.0-tcg.x86_64.xml | 50 +++++++
tests/domaincapsdata/qemu_7.0.0.x86_64.xml | 51 ++++++-
.../domaincapsdata/qemu_7.1.0-q35.x86_64.xml | 51 ++++++-
.../domaincapsdata/qemu_7.1.0-tcg.x86_64.xml | 50 +++++++
tests/domaincapsdata/qemu_7.1.0.x86_64.xml | 51 ++++++-
.../domaincapsdata/qemu_7.2.0-q35.x86_64.xml | 51 ++++++-
.../qemu_7.2.0-tcg.x86_64+hvf.xml | 51 ++++++-
.../domaincapsdata/qemu_7.2.0-tcg.x86_64.xml | 51 ++++++-
tests/domaincapsdata/qemu_7.2.0.x86_64.xml | 51 ++++++-
.../domaincapsdata/qemu_8.0.0-q35.x86_64.xml | 52 +++++++-
.../domaincapsdata/qemu_8.0.0-tcg.x86_64.xml | 52 +++++++-
tests/domaincapsdata/qemu_8.0.0.x86_64.xml | 52 +++++++-
.../domaincapsdata/qemu_8.1.0-q35.x86_64.xml | 61 ++++++++-
.../domaincapsdata/qemu_8.1.0-tcg.x86_64.xml | 57 +++++++-
tests/domaincapsdata/qemu_8.1.0.x86_64.xml | 61 ++++++++-
.../domaincapsdata/qemu_8.2.0-q35.x86_64.xml | 61 ++++++++-
.../domaincapsdata/qemu_8.2.0-tcg.x86_64.xml | 57 +++++++-
tests/domaincapsdata/qemu_8.2.0.x86_64.xml | 61 ++++++++-
...-Icelake-Server-pconfig.x86_64-latest.args | 2 +-
.../cpu-fallback.x86_64-5.2.0.args | 2 +-
.../cpu-fallback.x86_64-8.0.0.args | 2 +-
tests/qemuxml2argvdata/cpu-fallback.xml | 1 -
.../cpu-host-model-fallback.x86_64-7.2.0.args | 2 +-
.../cpu-host-model-fallback.x86_64-8.0.0.args | 2 +-
...cpu-host-model-fallback.x86_64-latest.args | 2 +-
...pu-host-model-nofallback.x86_64-7.2.0.args | 2 +-
...pu-host-model-nofallback.x86_64-8.0.0.args | 2 +-
...u-host-model-nofallback.x86_64-latest.args | 2 +-
.../cpu-host-model.x86_64-4.2.0.args | 2 +-
.../cpu-host-model.x86_64-5.0.0.args | 2 +-
.../cpu-host-model.x86_64-5.1.0.args | 2 +-
.../cpu-host-model.x86_64-5.2.0.args | 2 +-
.../cpu-host-model.x86_64-6.0.0.args | 2 +-
.../cpu-host-model.x86_64-6.1.0.args | 2 +-
.../cpu-host-model.x86_64-6.2.0.args | 2 +-
.../cpu-host-model.x86_64-7.0.0.args | 2 +-
.../cpu-host-model.x86_64-7.1.0.args | 2 +-
.../cpu-host-model.x86_64-7.2.0.args | 2 +-
.../cpu-host-model.x86_64-8.0.0.args | 2 +-
.../cpu-host-model.x86_64-latest.args | 2 +-
.../cpu-nofallback.x86_64-8.0.0.args | 2 +-
tests/qemuxml2argvdata/cpu-nofallback.xml | 1 -
282 files changed, 4591 insertions(+), 692 deletions(-)
create mode 100644 src/cpu_map/x86_Cascadelake-Server-v2.xml
create mode 100644 src/cpu_map/x86_Cascadelake-Server-v4.xml
create mode 100644 src/cpu_map/x86_Cascadelake-Server-v5.xml
create mode 100644 src/cpu_map/x86_Cooperlake-v2.xml
create mode 100644 src/cpu_map/x86_Dhyana-v2.xml
create mode 100644 src/cpu_map/x86_EPYC-Milan-v2.xml
create mode 100644 src/cpu_map/x86_EPYC-Rome-v2.xml
create mode 100644 src/cpu_map/x86_EPYC-Rome-v3.xml
create mode 100644 src/cpu_map/x86_EPYC-Rome-v4.xml
create mode 100644 src/cpu_map/x86_EPYC-v3.xml
create mode 100644 src/cpu_map/x86_EPYC-v4.xml
create mode 100644 src/cpu_map/x86_Icelake-Server-v3.xml
create mode 100644 src/cpu_map/x86_Icelake-Server-v4.xml
create mode 100644 src/cpu_map/x86_Icelake-Server-v5.xml
create mode 100644 src/cpu_map/x86_Icelake-Server-v6.xml
create mode 100644 src/cpu_map/x86_SapphireRapids-v2.xml
create mode 100644 src/cpu_map/x86_Skylake-Client-v4.xml
create mode 100644 src/cpu_map/x86_Skylake-Server-v4.xml
create mode 100644 src/cpu_map/x86_Skylake-Server-v5.xml
create mode 100644 src/cpu_map/x86_Snowridge-v2.xml
create mode 100644 src/cpu_map/x86_Snowridge-v3.xml
create mode 100644 src/cpu_map/x86_Snowridge-v4.xml
--
2.41.0
Versioned CPU types in libvirt
by Jonathon Jongsma
I'm currently looking at getting libvirt working with AMD's SEV-SNP
encrypted virtualization technology. I have access to a test machine
with an AMD EPYC 7713 processor which I can use to launch SNP guests
with qemu, but only when I specify one of the following versioned -cpu
values:
- EPYC-v4
- EPYC-Milan-v2
- EPYC-Rome-v3
From what I understand, the unversioned CPU models in qemu are supposed
to resolve to a specific versioned CPU model depending on the machine
type. But I'm not exactly sure how machine type influences it.
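To make the distinction concrete, the difference on the QEMU command line is
roughly the following (invocations heavily abbreviated):

    # unversioned: resolved to a specific version based on the machine type
    qemu-system-x86_64 -machine q35 -cpu EPYC-Milan ...

    # versioned: pins that exact variant independently of the machine type
    qemu-system-x86_64 -machine q35 -cpu EPYC-Milan-v2 ...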
I've got some libvirt patches to launch an SEV-SNP guest working now
except for the CPU model specification. As far as I can tell, I can
currently only specify the un-versioned model in libvirt. Is there any
way to request a particular versioned CPU from qemu? I feel like I'm
missing something here.
I should perhaps also mention that I'm running a development version of
qemu from Cole's copr repo[1], which could still have some related bugs.
[1] https://copr.fedorainfracloud.org/coprs/g/virtmaint-sig/sev-snp-coconut/
Thanks,
Jonathon
[libvirt PATCH] src: reject empty string for 'dname' in migrate APIs
by Daniel P. Berrangé
A domain name is expected to be non-empty, and we validate this when
parsing XML or when accepting a new name during renames. We fail to
enforce this property, however, when performing a migration. This
was discovered when a user complained about inaccessible VMs after
migrating with the Rust APIs, which mistakenly hardcoded 'dname' to
the empty string.
Fixes: https://gitlab.com/libvirt/libvirt-rust/-/issues/11
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
---
src/internal.h | 14 +++++++
src/libvirt-domain.c | 97 +++++++++++++++++++++++++++++++++++++++-----
2 files changed, 100 insertions(+), 11 deletions(-)
diff --git a/src/internal.h b/src/internal.h
index 5a9e1c7cd0..01860efad9 100644
--- a/src/internal.h
+++ b/src/internal.h
@@ -441,6 +441,20 @@
goto label; \
} \
} while (0)
+#define virCheckNonEmptyOptStringArgGoto(argname, label) \
+ do { \
+ if (argname && *argname == '\0') { \
+ virReportInvalidEmptyStringArg(argname); \
+ goto label; \
+ } \
+ } while (0)
+#define virCheckNonEmptyOptStringArgReturn(argname, retval) \
+ do { \
+ if (argname && *argname == '\0') { \
+ virReportInvalidEmptyStringArg(argname); \
+ return retval; \
+ } \
+ } while (0)
#define virCheckPositiveArgGoto(argname, label) \
do { \
if (argname <= 0) { \
diff --git a/src/libvirt-domain.c b/src/libvirt-domain.c
index 58e1e5ea8d..77a9682ecb 100644
--- a/src/libvirt-domain.c
+++ b/src/libvirt-domain.c
@@ -2991,6 +2991,8 @@ virDomainMigrateVersion1(virDomainPtr domain,
"dconn=%p, flags=0x%lx, dname=%s, uri=%s, bandwidth=%lu",
dconn, flags, NULLSTR(dname), NULLSTR(uri), bandwidth);
+ virCheckNonEmptyOptStringArgReturn(dname, NULL);
+
ret = virDomainGetInfo(domain, &info);
if (ret == 0 && info.state == VIR_DOMAIN_PAUSED)
flags |= VIR_MIGRATE_PAUSED;
@@ -3085,6 +3087,8 @@ virDomainMigrateVersion2(virDomainPtr domain,
"dconn=%p, flags=0x%lx, dname=%s, uri=%s, bandwidth=%lu",
dconn, flags, NULLSTR(dname), NULLSTR(uri), bandwidth);
+ virCheckNonEmptyOptStringArgReturn(dname, NULL);
+
/* Prepare the migration.
*
* The destination host may return a cookie, or leave cookie as
@@ -3242,6 +3246,8 @@ virDomainMigrateVersion3Full(virDomainPtr domain,
bandwidth, params, nparams, useParams, flags);
VIR_TYPED_PARAMS_DEBUG(params, nparams);
+ virCheckNonEmptyOptStringArgReturn(dname, NULL);
+
if ((!useParams &&
(!domain->conn->driver->domainMigrateBegin3 ||
!domain->conn->driver->domainMigratePerform3 ||
@@ -3582,6 +3588,8 @@ virDomainMigrateUnmanagedProto2(virDomainPtr domain,
return -1;
}
+ virCheckNonEmptyOptStringArgReturn(dname, -1);
+
if (flags & VIR_MIGRATE_PEER2PEER) {
if (miguri) {
virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
@@ -3632,6 +3640,8 @@ virDomainMigrateUnmanagedProto3(virDomainPtr domain,
return -1;
}
+ virCheckNonEmptyOptStringArgReturn(dname, -1);
+
return domain->conn->driver->domainMigratePerform3
(domain, xmlin, NULL, 0, NULL, NULL, dconnuri,
miguri, flags, dname, bandwidth);
@@ -3802,6 +3812,8 @@ virDomainMigrate(virDomainPtr domain,
virCheckConnectGoto(dconn, error);
virCheckReadOnlyGoto(dconn->flags, error);
+ virCheckNonEmptyOptStringArgReturn(dname, NULL);
+
VIR_EXCLUSIVE_FLAGS_GOTO(VIR_MIGRATE_NON_SHARED_DISK,
VIR_MIGRATE_NON_SHARED_INC,
error);
@@ -3999,6 +4011,8 @@ virDomainMigrate2(virDomainPtr domain,
virCheckConnectGoto(dconn, error);
virCheckReadOnlyGoto(dconn->flags, error);
+ virCheckNonEmptyOptStringArgReturn(dname, NULL);
+
VIR_EXCLUSIVE_FLAGS_GOTO(VIR_MIGRATE_NON_SHARED_DISK,
VIR_MIGRATE_NON_SHARED_INC,
error);
@@ -4232,6 +4246,19 @@ virDomainMigrate3(virDomainPtr domain,
goto error;
}
+ if (virTypedParamsGetString(params, nparams,
+ VIR_MIGRATE_PARAM_URI, &uri) < 0 ||
+ virTypedParamsGetString(params, nparams,
+ VIR_MIGRATE_PARAM_DEST_NAME, &dname) < 0 ||
+ virTypedParamsGetString(params, nparams,
+ VIR_MIGRATE_PARAM_DEST_XML, &dxml) < 0 ||
+ virTypedParamsGetULLong(params, nparams,
+ VIR_MIGRATE_PARAM_BANDWIDTH, &bandwidth) < 0) {
+ goto error;
+ }
+
+ virCheckNonEmptyOptStringArgReturn(dname, NULL);
+
if (flags & VIR_MIGRATE_OFFLINE) {
rc = VIR_DRV_SUPPORTS_FEATURE(domain->conn->driver, domain->conn,
VIR_DRV_FEATURE_MIGRATION_OFFLINE);
@@ -4293,17 +4320,6 @@ virDomainMigrate3(virDomainPtr domain,
goto error;
}
- if (virTypedParamsGetString(params, nparams,
- VIR_MIGRATE_PARAM_URI, &uri) < 0 ||
- virTypedParamsGetString(params, nparams,
- VIR_MIGRATE_PARAM_DEST_NAME, &dname) < 0 ||
- virTypedParamsGetString(params, nparams,
- VIR_MIGRATE_PARAM_DEST_XML, &dxml) < 0 ||
- virTypedParamsGetULLong(params, nparams,
- VIR_MIGRATE_PARAM_BANDWIDTH, &bandwidth) < 0) {
- goto error;
- }
-
rc_src = VIR_DRV_SUPPORTS_FEATURE(domain->conn->driver, domain->conn,
VIR_DRV_FEATURE_MIGRATION_V3);
if (rc_src < 0)
@@ -4475,6 +4491,7 @@ virDomainMigrateToURI(virDomainPtr domain,
virCheckDomainReturn(domain, -1);
virCheckReadOnlyGoto(domain->conn->flags, error);
virCheckNonNullArgGoto(duri, error);
+ virCheckNonEmptyOptStringArgGoto(dname, error);
VIR_EXCLUSIVE_FLAGS_GOTO(VIR_MIGRATE_TUNNELLED,
VIR_MIGRATE_PARALLEL,
@@ -4554,6 +4571,8 @@ virDomainMigrateToURI2(virDomainPtr domain,
virCheckDomainReturn(domain, -1);
virCheckReadOnlyGoto(domain->conn->flags, error);
+ virCheckNonEmptyOptStringArgGoto(dname, error);
+
VIR_EXCLUSIVE_FLAGS_GOTO(VIR_MIGRATE_TUNNELLED,
VIR_MIGRATE_PARALLEL,
error);
@@ -4679,6 +4698,7 @@ virDomainMigratePrepare(virConnectPtr dconn,
virCheckConnectReturn(dconn, -1);
virCheckReadOnlyGoto(dconn->flags, error);
+ virCheckNonEmptyOptStringArgGoto(dname, error);
if (dconn->driver->domainMigratePrepare) {
int ret;
@@ -4723,6 +4743,7 @@ virDomainMigratePerform(virDomainPtr domain,
conn = domain->conn;
virCheckReadOnlyGoto(conn->flags, error);
+ virCheckNonEmptyOptStringArgGoto(dname, error);
if (conn->driver->domainMigratePerform) {
int ret;
@@ -4762,6 +4783,7 @@ virDomainMigrateFinish(virConnectPtr dconn,
virCheckConnectReturn(dconn, NULL);
virCheckReadOnlyGoto(dconn->flags, error);
+ virCheckNonEmptyOptStringArgGoto(dname, error);
if (dconn->driver->domainMigrateFinish) {
virDomainPtr ret;
@@ -4805,6 +4827,7 @@ virDomainMigratePrepare2(virConnectPtr dconn,
virCheckConnectReturn(dconn, -1);
virCheckReadOnlyGoto(dconn->flags, error);
+ virCheckNonEmptyOptStringArgGoto(dname, error);
if (dconn->driver->domainMigratePrepare2) {
int ret;
@@ -4846,6 +4869,7 @@ virDomainMigrateFinish2(virConnectPtr dconn,
virCheckConnectReturn(dconn, NULL);
virCheckReadOnlyGoto(dconn->flags, error);
+ virCheckNonEmptyOptStringArgGoto(dname, error);
if (dconn->driver->domainMigrateFinish2) {
virDomainPtr ret;
@@ -4886,6 +4910,7 @@ virDomainMigratePrepareTunnel(virConnectPtr conn,
virCheckConnectReturn(conn, -1);
virCheckReadOnlyGoto(conn->flags, error);
+ virCheckNonEmptyOptStringArgGoto(dname, error);
if (conn != st->conn) {
virReportInvalidArg(conn, "%s",
@@ -4936,6 +4961,7 @@ virDomainMigrateBegin3(virDomainPtr domain,
conn = domain->conn;
virCheckReadOnlyGoto(conn->flags, error);
+ virCheckNonEmptyOptStringArgGoto(dname, error);
if (conn->driver->domainMigrateBegin3) {
char *xml;
@@ -4983,6 +5009,7 @@ virDomainMigratePrepare3(virConnectPtr dconn,
virCheckConnectReturn(dconn, -1);
virCheckReadOnlyGoto(dconn->flags, error);
+ virCheckNonEmptyOptStringArgGoto(dname, error);
if (dconn->driver->domainMigratePrepare3) {
int ret;
@@ -5031,6 +5058,7 @@ virDomainMigratePrepareTunnel3(virConnectPtr conn,
virCheckConnectReturn(conn, -1);
virCheckReadOnlyGoto(conn->flags, error);
+ virCheckNonEmptyOptStringArgGoto(dname, error);
if (conn != st->conn) {
virReportInvalidArg(conn, "%s",
@@ -5089,6 +5117,7 @@ virDomainMigratePerform3(virDomainPtr domain,
conn = domain->conn;
virCheckReadOnlyGoto(conn->flags, error);
+ virCheckNonEmptyOptStringArgGoto(dname, error);
if (conn->driver->domainMigratePerform3) {
int ret;
@@ -5135,6 +5164,7 @@ virDomainMigrateFinish3(virConnectPtr dconn,
virCheckConnectReturn(dconn, NULL);
virCheckReadOnlyGoto(dconn->flags, error);
+ virCheckNonEmptyOptStringArgGoto(dname, error);
if (dconn->driver->domainMigrateFinish3) {
virDomainPtr ret;
@@ -5211,6 +5241,7 @@ virDomainMigrateBegin3Params(virDomainPtr domain,
unsigned int flags)
{
virConnectPtr conn;
+ const char *dname = NULL;
VIR_DOMAIN_DEBUG(domain, "params=%p, nparams=%d, "
"cookieout=%p, cookieoutlen=%p, flags=0x%x",
@@ -5224,6 +5255,12 @@ virDomainMigrateBegin3Params(virDomainPtr domain,
virCheckReadOnlyGoto(conn->flags, error);
+ if (virTypedParamsGetString(params, nparams,
+ VIR_MIGRATE_PARAM_DEST_NAME, &dname) < 0)
+ goto error;
+
+ virCheckNonEmptyOptStringArgGoto(dname, error);
+
if (conn->driver->domainMigrateBegin3Params) {
char *xml;
xml = conn->driver->domainMigrateBegin3Params(domain, params, nparams,
@@ -5258,6 +5295,8 @@ virDomainMigratePrepare3Params(virConnectPtr dconn,
char **uri_out,
unsigned int flags)
{
+ const char *dname = NULL;
+
VIR_DEBUG("dconn=%p, params=%p, nparams=%d, cookiein=%p, cookieinlen=%d, "
"cookieout=%p, cookieoutlen=%p, uri_out=%p, flags=0x%x",
dconn, params, nparams, cookiein, cookieinlen,
@@ -5269,6 +5308,12 @@ virDomainMigratePrepare3Params(virConnectPtr dconn,
virCheckConnectReturn(dconn, -1);
virCheckReadOnlyGoto(dconn->flags, error);
+ if (virTypedParamsGetString(params, nparams,
+ VIR_MIGRATE_PARAM_DEST_NAME, &dname) < 0)
+ goto error;
+
+ virCheckNonEmptyOptStringArgGoto(dname, error);
+
if (dconn->driver->domainMigratePrepare3Params) {
int ret;
ret = dconn->driver->domainMigratePrepare3Params(dconn, params, nparams,
@@ -5303,6 +5348,8 @@ virDomainMigratePrepareTunnel3Params(virConnectPtr conn,
int *cookieoutlen,
unsigned int flags)
{
+ const char *dname = NULL;
+
VIR_DEBUG("conn=%p, stream=%p, params=%p, nparams=%d, cookiein=%p, "
"cookieinlen=%d, cookieout=%p, cookieoutlen=%p, flags=0x%x",
conn, st, params, nparams, cookiein, cookieinlen,
@@ -5320,6 +5367,12 @@ virDomainMigratePrepareTunnel3Params(virConnectPtr conn,
goto error;
}
+ if (virTypedParamsGetString(params, nparams,
+ VIR_MIGRATE_PARAM_DEST_NAME, &dname) < 0)
+ goto error;
+
+ virCheckNonEmptyOptStringArgGoto(dname, error);
+
if (conn->driver->domainMigratePrepareTunnel3Params) {
int rv;
rv = conn->driver->domainMigratePrepareTunnel3Params(
@@ -5354,6 +5407,7 @@ virDomainMigratePerform3Params(virDomainPtr domain,
unsigned int flags)
{
virConnectPtr conn;
+ const char *dname = NULL;
VIR_DOMAIN_DEBUG(domain, "dconnuri=%s, params=%p, nparams=%d, cookiein=%p, "
"cookieinlen=%d, cookieout=%p, cookieoutlen=%p, flags=0x%x",
@@ -5368,6 +5422,12 @@ virDomainMigratePerform3Params(virDomainPtr domain,
virCheckReadOnlyGoto(conn->flags, error);
+ if (virTypedParamsGetString(params, nparams,
+ VIR_MIGRATE_PARAM_DEST_NAME, &dname) < 0)
+ goto error;
+
+ virCheckNonEmptyOptStringArgGoto(dname, error);
+
if (conn->driver->domainMigratePerform3Params) {
int ret;
ret = conn->driver->domainMigratePerform3Params(
@@ -5401,6 +5461,8 @@ virDomainMigrateFinish3Params(virConnectPtr dconn,
unsigned int flags,
int cancelled)
{
+ const char *dname = NULL;
+
VIR_DEBUG("dconn=%p, params=%p, nparams=%d, cookiein=%p, cookieinlen=%d, "
"cookieout=%p, cookieoutlen=%p, flags=0x%x, cancelled=%d",
dconn, params, nparams, cookiein, cookieinlen, cookieout,
@@ -5412,6 +5474,12 @@ virDomainMigrateFinish3Params(virConnectPtr dconn,
virCheckConnectReturn(dconn, NULL);
virCheckReadOnlyGoto(dconn->flags, error);
+ if (virTypedParamsGetString(params, nparams,
+ VIR_MIGRATE_PARAM_DEST_NAME, &dname) < 0)
+ goto error;
+
+ virCheckNonEmptyOptStringArgGoto(dname, error);
+
if (dconn->driver->domainMigrateFinish3Params) {
virDomainPtr ret;
ret = dconn->driver->domainMigrateFinish3Params(
@@ -5444,6 +5512,7 @@ virDomainMigrateConfirm3Params(virDomainPtr domain,
int cancelled)
{
virConnectPtr conn;
+ const char *dname = NULL;
VIR_DOMAIN_DEBUG(domain, "params=%p, nparams=%d, cookiein=%p, "
"cookieinlen=%d, flags=0x%x, cancelled=%d",
@@ -5457,6 +5526,12 @@ virDomainMigrateConfirm3Params(virDomainPtr domain,
virCheckReadOnlyGoto(conn->flags, error);
+ if (virTypedParamsGetString(params, nparams,
+ VIR_MIGRATE_PARAM_DEST_NAME, &dname) < 0)
+ goto error;
+
+ virCheckNonEmptyOptStringArgGoto(dname, error);
+
if (conn->driver->domainMigrateConfirm3Params) {
int ret;
ret = conn->driver->domainMigrateConfirm3Params(
--
2.41.0
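To illustrate the check enforced by the patch above from the caller's side
(connection setup omitted): passing NULL for 'dname' keeps the original name
on the destination, while an empty string is now rejected with an "invalid
argument" error before any migration work starts:

    /* OK: NULL dname means "keep the same domain name on the destination" */
    virDomainPtr ddom = virDomainMigrate(dom, dconn, VIR_MIGRATE_LIVE,
                                         NULL, NULL, 0);

    /* Previously this could leave an inaccessible, unnamed VM behind;
     * now it fails up front with an invalid argument error. */
    ddom = virDomainMigrate(dom, dconn, VIR_MIGRATE_LIVE, "", NULL, 0);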