[RFC] virsysinfo: Try reading DMI table
by brett.holman@canonical.com
This patch intends to add DMI support to libvirt for RISC-V and MIPS.
This change is based on commit ec6ce6363, which added ARM support.
This is not tested on the target arches, as I've been unable to find
hardware to test this on.
src/util/virsysinfo.c | 2 ++
1 file changed, 2 insertions(+)
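A sketch of what the two added lines presumably look like, following the
pattern of commit ec6ce6363 (the arch guard in src/util/virsysinfo.c that
selects the DMI-table backend); the exact macros below are my assumption,
not the actual hunk:

#if defined(__x86_64__) || defined(__i386__) || defined(__amd64__) || \
    defined(__arm__) || defined(__aarch64__) || \
    defined(__riscv) || \
    defined(__mips__)

virSysinfoDef *
virSysinfoRead(void)
{
    /* parse the SMBIOS/DMI table, as already done on x86 and ARM */
    return virSysinfoReadDMI();
}

#endif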
Cheers,
Brett Holman
PS: This is my first post on this mailing list. I believe that I've followed
the documented steps, but if I've missed anything please let me know.
[PATCH 00/28] vsh: Fix handling of commands and help - part 2 (unwanted positional parsing of optional arguments)
by Peter Krempa
Part 2 was supposed to be the refactor of the parser, but it turns out
that our annotations of arguments and quirks of the parser itself
resulted in basically every optional argument with a value also being
considered for positional parsing.
This series explains where this happens and annotates all cases.
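To illustrate, here is a sketch of the annotations (the field names
'positional' and 'unwanted_positional' are taken from the patch subjects
below; the rest of the struct layout is assumed). They distinguish
arguments that may legitimately be given by position from those that were
only ever parsed positionally by accident:

static const vshCmdOptDef opts_example[] = {
    {.name = "domain",
     .type = VSH_OT_STRING,
     .positional = true,           /* "virsh example mydom" stays valid */
     .help = N_("domain name, id or uuid")},
    {.name = "type",
     .type = VSH_OT_STRING,
     .unwanted_positional = true,  /* historically parsed positionally;
                                    * kept only for compatibility */
     .help = N_("device type")},
    {.name = NULL}
};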
Peter Krempa (28):
vshCmddefHelp: Drop empty line at the end
virsh: cmdDomdisplayReload: Require option name for --type
vshCmddefCheckInternals: Improve some checks
virsh: Inline VIRSH_COMMON_OPT_DOMAIN_OT_STRING macro
virsh: Inline VIRSH_COMMON_OPT_FILE_FULL macro
virsh: Inline VIRSH_COMMON_OPT_NETWORK_OT_STRING macro
vsh: Introduce tool to find unwanted positional arguments to
'self-test'
vsh: Introduce annotation for vsh options which are unexpectedly
parsed positionally
virsh: Annotate '--diskspec' _ARGV options as unwanted positional
virsh: Annotate rest of _ARGV arguments as positional
vsh: Fix option formatting for 'VSH_OT_ARGV' options
virsh: Require option flags for 'blkdeviotune' arguments
virsh: Fix "positional" argument annotations for 'migrate' command
virsh: Require option flags for all optional arguments of
'attach-disk'
virsh-checkpoint: Make 'checkpointname' positional and required
vsh: Allow one optional positional argument
vsh: Make the only argument of 'connect', 'cd', and 'help' commands
positional
virsh: snapshot: Make 'snapshotname' argument positional
virsh: Make '(snapshot|checkpoint)-create' 'xmlfile' argument
positional
virsh-backup: Fix argument annotations of 'backup-begin' command
virsh: Annotate some optional arguments as positional
virsh: volume: Mark optional 'pool' argument as 'positional'
virsh: Annotate "unwanted_positional" arguments for
'pool-(define|create)-as' commands
virsh: Annotate 'unwanted_positional' arguments
virt-admin: Annotate 'unwanted_positional' arguments
vsh: Make positional parsing of arguments opt-in
vsh: Replace 'VSH_OFLAG_EMPTY_OK' bitwise flag with a separate struct
member
vshCmdOptDef: Remove unused 'flags' member
tools/virsh-backup.c | 3 +-
tools/virsh-checkpoint.c | 26 ++++---
tools/virsh-domain-event.c | 10 ++-
tools/virsh-domain-monitor.c | 11 +--
tools/virsh-domain.c | 134 ++++++++++++++++++++++++++++-------
tools/virsh-host.c | 33 +++++++--
tools/virsh-interface.c | 2 +-
tools/virsh-network.c | 28 ++++----
tools/virsh-nodedev.c | 6 +-
tools/virsh-nwfilter.c | 2 -
tools/virsh-pool.c | 31 ++++++--
tools/virsh-secret.c | 6 +-
tools/virsh-snapshot.c | 13 +++-
tools/virsh-volume.c | 12 +++-
tools/virsh.c | 3 +-
tools/virsh.h | 20 +-----
tools/virt-admin.c | 15 ++--
tools/vsh.c | 64 +++++++++++++----
tools/vsh.h | 18 +++--
19 files changed, 316 insertions(+), 121 deletions(-)
--
2.44.0
RE: join running core dump job
by Thanos Makatos
> -----Original Message-----
> From: Thanos Makatos <thanos.makatos(a)nutanix.com>
> Sent: Friday, March 8, 2024 2:21 PM
> To: devel(a)lists.libvirt.org
> Cc: Michal Privoznik <mprivozn(a)redhat.com>
> Subject: RE: join running core dump job
>
> > -----Original Message-----
> > From: Thanos Makatos <thanos.makatos(a)nutanix.com>
> > Sent: Monday, March 4, 2024 9:45 PM
> > To: Thanos Makatos <thanos.makatos(a)nutanix.com>; devel(a)lists.libvirt.org
> > Subject: RE: join running core dump job
> >
> > > -----Original Message-----
> > > From: Thanos Makatos <thanos.makatos(a)nutanix.com>
> > > Sent: Monday, March 4, 2024 5:24 PM
> > > To: devel(a)lists.libvirt.org
> > > Subject: join running core dump job
> > >
> > > Is there a way to programmatically wait for a previously initiated
> > > virDomainCoreDumpWithFormat() where the process that started it died?
> > > I'm looking at the API and don't seem to find anything relevant. I
> > > suppose I could poll via virDomainGetJobStats(), but, ideally, I'd like
> > > a function that would join the dump job and return when the dump job
> > > finishes.
> >
> > I see there's qemuDumpWaitForCompletion(), looks promising.
>
> I've made some progress (added a
> virHypervisorDriver.domainCoreDumpWait and the relevant scaffolding to
> make 'virsh dump --wait' work), calling qemuDumpWaitForCompletion() is all
> that's needed.
>
> However, it doesn't seem trivial to implement this in the test_driver.
> First, IIUC testDomainCoreDumpWithFormat() gets an exclusive lock on the
> domain (haven't tested anything yet), so calling domainCoreDumpWait()
> would block for the wrong reason. Is making
> testDomainCoreDumpWithFormat() use asynchronous jobs internally
> unavoidable?
> Second, I want to test behaviour in my application where (1) it calls
> domainCoreDump(), (2) crashes before domainCoreDump() finishes, (3) then
> my application starts again and looks for a pending dump job and (4) joins it
> using domainCoreDumpWait(). I can't see an easy way of faking a dump job
> in the test_driver when it starts. How about adding persistent tasks, which I
> can pre-populate before starting my application, or fake jobs via an
> environment variable, so that when the test_driver starts it can internally
> continue them? E.g. we could specify how long to run the job for and have
> domainCoreDumpWait() sleep for that long.
I ended up using an environment variable in the test_driver to fake jobs, so the application under test doesn't need to know anything. Would something like that be accepted?
>
> I'm open to suggestions.
I have managed to implement a PoC by introducing virDomainJobWait(), which checks the job type and, if it's a dump, calls qemuDumpWaitForCompletion(), purely because that's the easiest thing to do (it fails with VIR_ERR_OPERATION_INVALID for all other operations). We'd like to implement this for all other job types and I'm looking for some pointers: is virCondWaitUntil(&priv->job.asyncCond, &obj->parent.lock) all that's required? (And return priv->job.error?)
Also, as I explained earlier, I want to implement joining a potentially existing dump job. In my specific use case, I want to either join an ongoing dump job if it exists, or otherwise start a new one. I do this by calling virDomainGetJobStats(), but I'm wondering whether we could add a new, optional dump flag, e.g. 'join', which would make virDomainCoreDumpWithFormat() do all this internally. Would something like that be accepted upstream?
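For reference, a minimal sketch of the polling approach mentioned above,
using only the existing public API (error handling trimmed); it blocks
until whatever job is active on the domain has finished:

#include <libvirt/libvirt.h>
#include <unistd.h>

/* Poll until the domain reports no active job. To confirm the job being
 * awaited really is a dump, the VIR_DOMAIN_JOB_OPERATION typed parameter
 * could additionally be checked for VIR_DOMAIN_JOB_OPERATION_DUMP. */
static void
waitForJob(virDomainPtr dom)
{
    for (;;) {
        virTypedParameterPtr params = NULL;
        int nparams = 0;
        int jobtype = VIR_DOMAIN_JOB_NONE;

        if (virDomainGetJobStats(dom, &jobtype, &params, &nparams, 0) < 0)
            break;                      /* query failed; give up */
        virTypedParamsFree(params, nparams);

        if (jobtype == VIR_DOMAIN_JOB_NONE)
            break;                      /* no active job: the dump finished */

        sleep(1);                       /* no blocking join exists, so poll */
    }
}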
[PATCH] qemuDomainChangeNet: Error when boot index changes in live XML
by Adam Julis
If the original code detected a missing or null boot index in the
new XML, it automatically added the current value. This
autocompletion was incorrect because it was impossible to
distinguish between user intent and user error - changing the
boot order itself is forbidden and should always be an error.
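For context, the boot index in question is the per-device <boot order='N'/>
element; with this fix, a device-update that changes or omits it on a live
interface is rejected, e.g.:

<interface type='network'>
  <source network='default'/>
  <model type='virtio'/>
  <boot order='1'/>   <!-- omitting or changing this during device-update
                           now reports an error instead of being silently
                           filled in from the live config -->
</interface>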
Resolves: https://issues.redhat.com/browse/RHEL-23416
Fixes: aa3e07caec6179dfa6479deab14a21a493637d53
Signed-off-by: Adam Julis <ajulis(a)redhat.com>
---
src/qemu/qemu_hotplug.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/src/qemu/qemu_hotplug.c b/src/qemu/qemu_hotplug.c
index b9c502613c..62dc879ed4 100644
--- a/src/qemu/qemu_hotplug.c
+++ b/src/qemu/qemu_hotplug.c
@@ -3847,8 +3847,6 @@ qemuDomainChangeNet(virQEMUDriver *driver,
goto cleanup;
}
- if (newdev->info.bootIndex == 0)
- newdev->info.bootIndex = olddev->info.bootIndex;
if (olddev->info.bootIndex != newdev->info.bootIndex) {
virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s",
_("cannot modify network device boot index setting"));
--
2.44.0
[PATCH v3 00/12] Improve versioned CPU support in libvirt
by Jonathon Jongsma
For SEV-SNP support we will need to be able to specify versioned CPU models
that are not yet available in libvirt. Rather than just adding a versioned CPU
or two that would satisfy that immediate need, I decided to try to add
versioned CPUs in a more standard way. This series generates CPU definitions
for all cpu versions that are defined in upstream qemu (at least for
recent Intel and AMD CPUs).
libvirt already provides a select subset of these versions as configurable CPU
models. But we only include the ones that have defined aliases in qemu, such as
EPYC-IBPB. After this patchset, all versioned CPU models supported by qemu
will be available in libvirt.
In addition to adding these new versioned models, based on feedback from Daniel
Berrange, I've also translated all CPU model aliases to a specific version when
specifying a CPU model to qemu. This means that we will no longer specify e.g.
'-cpu EPYC' to qemu, but will rather specify '-cpu EPYC-v1'.
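A hypothetical sketch of the cpu_map change (the element and attribute
names here are my guess, not copied from the series): each existing model
records the versioned model it canonically maps to, and new versioned
models get their own definitions:

<cpus>
  <model name='EPYC'>
    <!-- assumed one-line addition per existing model: its canonical,
         versioned equivalent used when talking to qemu -->
    <canonical name='EPYC-v1'/>
  </model>
  <!-- newly generated versioned model, previously only reachable via a
       qemu-side alias -->
  <model name='EPYC-v3'/>
</cpus>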
Changes in v3:
- handle unversioned aliases
Changes in v2:
- don't make any changes to existing CPU models
- drop concept of aliases from libvirt and only provide new versioned models
that aren't already available via their qemu alias.
Jonathon Jongsma (12):
cpu_map: update script to handle versioned CPUs
cpu_map: add canonical names to existing CPU models
cpu: parse the canonical name from the cpu model
qemu: use canonical name for CPU models
cpu_map: Add versioned EPYC CPUs
cpu_map: Add versioned Intel Skylake CPUs
cpu_map: Add versioned Intel Cascadelake CPUs
cpu_map: Add versioned Intel Icelake CPUs
cpu_map: Add versioned Intel Cooperlake CPUs
cpu_map: Add versioned Intel Snowridge CPUs
cpu_map: Add versioned Intel SapphireRapids CPUs
cpu_map: Add versioned Dhyana CPUs
src/conf/cpu_conf.c | 3 +
src/conf/cpu_conf.h | 1 +
src/cpu/cpu_x86.c | 24 ++++
src/cpu_map/index.xml | 22 +++
src/cpu_map/meson.build | 22 +++
src/cpu_map/sync_qemu_models_i386.py | 42 ++++--
src/cpu_map/x86_Broadwell-IBRS.xml | 1 +
src/cpu_map/x86_Broadwell-noTSX-IBRS.xml | 1 +
src/cpu_map/x86_Broadwell-noTSX.xml | 1 +
src/cpu_map/x86_Broadwell.xml | 1 +
src/cpu_map/x86_Cascadelake-Server-noTSX.xml | 1 +
src/cpu_map/x86_Cascadelake-Server-v2.xml | 93 +++++++++++++
src/cpu_map/x86_Cascadelake-Server-v4.xml | 91 +++++++++++++
src/cpu_map/x86_Cascadelake-Server-v5.xml | 92 +++++++++++++
src/cpu_map/x86_Cascadelake-Server.xml | 1 +
src/cpu_map/x86_Cooperlake-v2.xml | 98 ++++++++++++++
src/cpu_map/x86_Cooperlake.xml | 1 +
src/cpu_map/x86_Dhyana-v2.xml | 81 ++++++++++++
src/cpu_map/x86_Dhyana.xml | 1 +
src/cpu_map/x86_EPYC-IBPB.xml | 1 +
src/cpu_map/x86_EPYC-Milan-v2.xml | 108 +++++++++++++++
src/cpu_map/x86_EPYC-Milan.xml | 1 +
src/cpu_map/x86_EPYC-Rome-v2.xml | 93 +++++++++++++
src/cpu_map/x86_EPYC-Rome-v3.xml | 95 +++++++++++++
src/cpu_map/x86_EPYC-Rome-v4.xml | 94 +++++++++++++
src/cpu_map/x86_EPYC-Rome.xml | 1 +
src/cpu_map/x86_EPYC-v3.xml | 87 ++++++++++++
src/cpu_map/x86_EPYC-v4.xml | 88 ++++++++++++
src/cpu_map/x86_EPYC.xml | 1 +
src/cpu_map/x86_Haswell-IBRS.xml | 1 +
src/cpu_map/x86_Haswell-noTSX-IBRS.xml | 1 +
src/cpu_map/x86_Haswell-noTSX.xml | 1 +
src/cpu_map/x86_Haswell.xml | 1 +
src/cpu_map/x86_Icelake-Server-noTSX.xml | 1 +
src/cpu_map/x86_Icelake-Server-v3.xml | 103 +++++++++++++++
src/cpu_map/x86_Icelake-Server-v4.xml | 108 +++++++++++++++
src/cpu_map/x86_Icelake-Server-v5.xml | 109 +++++++++++++++
src/cpu_map/x86_Icelake-Server-v6.xml | 109 +++++++++++++++
src/cpu_map/x86_Icelake-Server.xml | 1 +
src/cpu_map/x86_IvyBridge-IBRS.xml | 1 +
src/cpu_map/x86_IvyBridge.xml | 1 +
src/cpu_map/x86_Nehalem-IBRS.xml | 1 +
src/cpu_map/x86_Nehalem.xml | 1 +
src/cpu_map/x86_SandyBridge-IBRS.xml | 1 +
src/cpu_map/x86_SandyBridge.xml | 1 +
src/cpu_map/x86_SapphireRapids-v2.xml | 125 ++++++++++++++++++
src/cpu_map/x86_SapphireRapids.xml | 1 +
src/cpu_map/x86_Skylake-Client-IBRS.xml | 1 +
src/cpu_map/x86_Skylake-Client-noTSX-IBRS.xml | 1 +
src/cpu_map/x86_Skylake-Client-v4.xml | 77 +++++++++++
src/cpu_map/x86_Skylake-Client.xml | 1 +
src/cpu_map/x86_Skylake-Server-IBRS.xml | 1 +
src/cpu_map/x86_Skylake-Server-noTSX-IBRS.xml | 1 +
src/cpu_map/x86_Skylake-Server-v4.xml | 83 ++++++++++++
src/cpu_map/x86_Skylake-Server-v5.xml | 85 ++++++++++++
src/cpu_map/x86_Skylake-Server.xml | 1 +
src/cpu_map/x86_Snowridge-v2.xml | 78 +++++++++++
src/cpu_map/x86_Snowridge-v3.xml | 80 +++++++++++
src/cpu_map/x86_Snowridge-v4.xml | 78 +++++++++++
src/cpu_map/x86_Snowridge.xml | 1 +
src/cpu_map/x86_Westmere-IBRS.xml | 1 +
src/cpu_map/x86_Westmere.xml | 1 +
src/qemu/qemu_command.c | 5 +-
.../x86_64-cpuid-Atom-P5362-guest.xml | 3 +-
.../x86_64-cpuid-Atom-P5362-json.xml | 3 +-
.../x86_64-cpuid-Cooperlake-host.xml | 3 +-
.../x86_64-cpuid-EPYC-7502-32-Core-host.xml | 5 +-
.../x86_64-cpuid-EPYC-7601-32-Core-guest.xml | 9 +-
...6_64-cpuid-EPYC-7601-32-Core-ibpb-host.xml | 8 +-
..._64-cpuid-Hygon-C86-7185-32-core-guest.xml | 5 +-
...6_64-cpuid-Hygon-C86-7185-32-core-host.xml | 5 +-
...6_64-cpuid-Hygon-C86-7185-32-core-json.xml | 6 +-
...4-cpuid-Ryzen-7-1800X-Eight-Core-guest.xml | 9 +-
.../x86_64-cpuid-Xeon-Platinum-8268-guest.xml | 9 +-
.../x86_64-cpuid-Xeon-Platinum-8268-host.xml | 9 +-
.../x86_64-cpuid-Xeon-Platinum-9242-guest.xml | 9 +-
.../x86_64-cpuid-Xeon-Platinum-9242-host.xml | 9 +-
.../x86_64-cpuid-Xeon-Platinum-9242-json.xml | 9 +-
..._64-cpuid-baseline-Cascadelake+Icelake.xml | 9 +-
...-cpuid-baseline-Cooperlake+Cascadelake.xml | 9 +-
...6_64-cpuid-baseline-Cooperlake+Icelake.xml | 9 +-
.../domaincapsdata/qemu_4.2.0-q35.x86_64.xml | 2 +
.../domaincapsdata/qemu_4.2.0-tcg.x86_64.xml | 2 +
tests/domaincapsdata/qemu_4.2.0.x86_64.xml | 2 +
.../domaincapsdata/qemu_5.0.0-q35.x86_64.xml | 4 +
.../domaincapsdata/qemu_5.0.0-tcg.x86_64.xml | 4 +
tests/domaincapsdata/qemu_5.0.0.x86_64.xml | 4 +
.../domaincapsdata/qemu_5.1.0-q35.x86_64.xml | 7 +
.../domaincapsdata/qemu_5.1.0-tcg.x86_64.xml | 7 +
tests/domaincapsdata/qemu_5.1.0.x86_64.xml | 7 +
.../domaincapsdata/qemu_5.2.0-q35.x86_64.xml | 7 +
.../domaincapsdata/qemu_5.2.0-tcg.x86_64.xml | 7 +
tests/domaincapsdata/qemu_5.2.0.x86_64.xml | 7 +
.../domaincapsdata/qemu_6.0.0-q35.x86_64.xml | 8 ++
.../domaincapsdata/qemu_6.0.0-tcg.x86_64.xml | 8 ++
tests/domaincapsdata/qemu_6.0.0.x86_64.xml | 8 ++
.../domaincapsdata/qemu_6.1.0-q35.x86_64.xml | 15 +++
.../domaincapsdata/qemu_6.1.0-tcg.x86_64.xml | 15 +++
tests/domaincapsdata/qemu_6.1.0.x86_64.xml | 15 +++
.../domaincapsdata/qemu_6.2.0-q35.x86_64.xml | 16 +++
.../domaincapsdata/qemu_6.2.0-tcg.x86_64.xml | 16 +++
tests/domaincapsdata/qemu_6.2.0.x86_64.xml | 16 +++
.../domaincapsdata/qemu_7.0.0-q35.x86_64.xml | 17 +++
.../domaincapsdata/qemu_7.0.0-tcg.x86_64.xml | 17 +++
tests/domaincapsdata/qemu_7.0.0.x86_64.xml | 17 +++
.../domaincapsdata/qemu_7.1.0-q35.x86_64.xml | 17 +++
.../domaincapsdata/qemu_7.1.0-tcg.x86_64.xml | 17 +++
tests/domaincapsdata/qemu_7.1.0.x86_64.xml | 17 +++
.../domaincapsdata/qemu_7.2.0-q35.x86_64.xml | 17 +++
.../qemu_7.2.0-tcg.x86_64+hvf.xml | 17 +++
.../domaincapsdata/qemu_7.2.0-tcg.x86_64.xml | 17 +++
tests/domaincapsdata/qemu_7.2.0.x86_64.xml | 17 +++
.../domaincapsdata/qemu_8.0.0-q35.x86_64.xml | 17 +++
.../domaincapsdata/qemu_8.0.0-tcg.x86_64.xml | 17 +++
tests/domaincapsdata/qemu_8.0.0.x86_64.xml | 17 +++
.../domaincapsdata/qemu_8.1.0-q35.x86_64.xml | 27 +++-
.../domaincapsdata/qemu_8.1.0-tcg.x86_64.xml | 22 +++
tests/domaincapsdata/qemu_8.1.0.x86_64.xml | 27 +++-
.../domaincapsdata/qemu_8.2.0-q35.x86_64.xml | 27 +++-
.../domaincapsdata/qemu_8.2.0-tcg.x86_64.xml | 22 +++
tests/domaincapsdata/qemu_8.2.0.x86_64.xml | 27 +++-
.../cpu-Haswell-noTSX.x86_64-latest.args | 2 +-
.../cpu-Haswell.x86_64-latest.args | 2 +-
.../cpu-Haswell2.x86_64-latest.args | 2 +-
.../cpu-Haswell3.x86_64-latest.args | 2 +-
...-Icelake-Server-pconfig.x86_64-latest.args | 2 +-
.../cpu-cache-disable3.x86_64-latest.args | 2 +-
...u-check-default-partial.x86_64-latest.args | 2 +-
.../cpu-fallback.x86_64-5.2.0.args | 2 +-
.../cpu-fallback.x86_64-8.0.0.args | 2 +-
.../cpu-host-model-cmt.x86_64-latest.args | 2 +-
...-host-model-fallback-kvm.x86_64-4.2.0.args | 2 +-
...-host-model-fallback-kvm.x86_64-5.0.0.args | 2 +-
...-host-model-fallback-kvm.x86_64-5.1.0.args | 2 +-
...-host-model-fallback-kvm.x86_64-5.2.0.args | 2 +-
...-host-model-fallback-kvm.x86_64-6.0.0.args | 2 +-
...-host-model-fallback-kvm.x86_64-6.1.0.args | 2 +-
...-host-model-fallback-kvm.x86_64-6.2.0.args | 2 +-
...-host-model-fallback-kvm.x86_64-7.0.0.args | 2 +-
...-host-model-fallback-kvm.x86_64-7.1.0.args | 2 +-
...-host-model-fallback-kvm.x86_64-7.2.0.args | 2 +-
...-host-model-fallback-kvm.x86_64-8.0.0.args | 2 +-
...-host-model-fallback-kvm.x86_64-8.1.0.args | 2 +-
...host-model-fallback-kvm.x86_64-latest.args | 2 +-
...-host-model-fallback-tcg.x86_64-7.2.0.args | 2 +-
...-host-model-fallback-tcg.x86_64-8.0.0.args | 2 +-
...-host-model-fallback-tcg.x86_64-8.1.0.args | 2 +-
...host-model-fallback-tcg.x86_64-latest.args | 2 +-
.../cpu-host-model-kvm.x86_64-4.2.0.args | 2 +-
.../cpu-host-model-kvm.x86_64-5.0.0.args | 2 +-
.../cpu-host-model-kvm.x86_64-5.1.0.args | 2 +-
.../cpu-host-model-kvm.x86_64-5.2.0.args | 2 +-
.../cpu-host-model-kvm.x86_64-6.0.0.args | 2 +-
.../cpu-host-model-kvm.x86_64-6.1.0.args | 2 +-
.../cpu-host-model-kvm.x86_64-6.2.0.args | 2 +-
.../cpu-host-model-kvm.x86_64-7.0.0.args | 2 +-
.../cpu-host-model-kvm.x86_64-7.1.0.args | 2 +-
.../cpu-host-model-kvm.x86_64-7.2.0.args | 2 +-
.../cpu-host-model-kvm.x86_64-8.0.0.args | 2 +-
.../cpu-host-model-kvm.x86_64-8.1.0.args | 2 +-
.../cpu-host-model-kvm.x86_64-latest.args | 2 +-
...ost-model-nofallback-kvm.x86_64-4.2.0.args | 2 +-
...ost-model-nofallback-kvm.x86_64-5.0.0.args | 2 +-
...ost-model-nofallback-kvm.x86_64-5.1.0.args | 2 +-
...ost-model-nofallback-kvm.x86_64-5.2.0.args | 2 +-
...ost-model-nofallback-kvm.x86_64-6.0.0.args | 2 +-
...ost-model-nofallback-kvm.x86_64-6.1.0.args | 2 +-
...ost-model-nofallback-kvm.x86_64-6.2.0.args | 2 +-
...ost-model-nofallback-kvm.x86_64-7.0.0.args | 2 +-
...ost-model-nofallback-kvm.x86_64-7.1.0.args | 2 +-
...ost-model-nofallback-kvm.x86_64-7.2.0.args | 2 +-
...ost-model-nofallback-kvm.x86_64-8.0.0.args | 2 +-
...ost-model-nofallback-kvm.x86_64-8.1.0.args | 2 +-
...st-model-nofallback-kvm.x86_64-latest.args | 2 +-
...ost-model-nofallback-tcg.x86_64-7.2.0.args | 2 +-
...ost-model-nofallback-tcg.x86_64-8.0.0.args | 2 +-
...ost-model-nofallback-tcg.x86_64-8.1.0.args | 2 +-
...st-model-nofallback-tcg.x86_64-latest.args | 2 +-
.../cpu-host-model-tcg.x86_64-7.2.0.args | 2 +-
.../cpu-host-model-tcg.x86_64-8.0.0.args | 2 +-
.../cpu-host-model-tcg.x86_64-8.1.0.args | 2 +-
.../cpu-host-model-tcg.x86_64-latest.args | 2 +-
.../cpu-host-model-vendor.x86_64-latest.args | 2 +-
.../cpu-minimum1.x86_64-latest.args | 2 +-
.../cpu-minimum2.x86_64-latest.args | 2 +-
.../cpu-nofallback.x86_64-8.0.0.args | 2 +-
.../cpu-phys-bits-emulate2.x86_64-latest.args | 2 +-
.../cpu-strict1.x86_64-latest.args | 2 +-
.../cpu-translation.x86_64-latest.args | 2 +-
.../cpu-tsc-frequency.x86_64-latest.args | 2 +-
190 files changed, 2837 insertions(+), 187 deletions(-)
create mode 100644 src/cpu_map/x86_Cascadelake-Server-v2.xml
create mode 100644 src/cpu_map/x86_Cascadelake-Server-v4.xml
create mode 100644 src/cpu_map/x86_Cascadelake-Server-v5.xml
create mode 100644 src/cpu_map/x86_Cooperlake-v2.xml
create mode 100644 src/cpu_map/x86_Dhyana-v2.xml
create mode 100644 src/cpu_map/x86_EPYC-Milan-v2.xml
create mode 100644 src/cpu_map/x86_EPYC-Rome-v2.xml
create mode 100644 src/cpu_map/x86_EPYC-Rome-v3.xml
create mode 100644 src/cpu_map/x86_EPYC-Rome-v4.xml
create mode 100644 src/cpu_map/x86_EPYC-v3.xml
create mode 100644 src/cpu_map/x86_EPYC-v4.xml
create mode 100644 src/cpu_map/x86_Icelake-Server-v3.xml
create mode 100644 src/cpu_map/x86_Icelake-Server-v4.xml
create mode 100644 src/cpu_map/x86_Icelake-Server-v5.xml
create mode 100644 src/cpu_map/x86_Icelake-Server-v6.xml
create mode 100644 src/cpu_map/x86_SapphireRapids-v2.xml
create mode 100644 src/cpu_map/x86_Skylake-Client-v4.xml
create mode 100644 src/cpu_map/x86_Skylake-Server-v4.xml
create mode 100644 src/cpu_map/x86_Skylake-Server-v5.xml
create mode 100644 src/cpu_map/x86_Snowridge-v2.xml
create mode 100644 src/cpu_map/x86_Snowridge-v3.xml
create mode 100644 src/cpu_map/x86_Snowridge-v4.xml
--
2.41.0
[PATCH 0/2] NEWS: Mention loongarch64 guest support
by Andrea Bolognani
Plus a random fix.
Andrea Bolognani (2):
NEWS: Fix spacing
NEWS: Mention loongarch64 guest support
NEWS.rst | 14 +++++++++++---
1 file changed, 11 insertions(+), 3 deletions(-)
--
2.44.0
[PATCH 0/2] Add 'display' and 'ramfb' option for pci hostdevs
by Jonathon Jongsma
see https://issues.redhat.com/browse/RHEL-28808
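A sketch of the domain XML this presumably enables, mirroring the
display/ramfb attributes that already exist for mdev hostdevs (attribute
placement assumed from the test names, not copied from the patches):

<hostdev mode='subsystem' type='pci' managed='yes' display='on' ramfb='on'>
  <source>
    <address domain='0x0000' bus='0x06' slot='0x02' function='0x0'/>
  </source>
</hostdev>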
Jonathon Jongsma (2):
conf: allow display and ramfb for vfio pci hostdevs
qemu: enable display/ramfb for vfio pci hostdevs
docs/formatdomain.rst | 8 ++++
src/conf/domain_conf.c | 20 ++++++++-
src/conf/domain_conf.h | 2 +
src/conf/domain_validate.c | 21 +++++----
src/conf/schemas/domaincommon.rng | 25 ++++++-----
src/qemu/qemu_command.c | 10 ++++-
src/qemu/qemu_validate.c | 8 ++++
...stdev-pci-display-ramfb.x86_64-latest.args | 33 ++++++++++++++
...ostdev-pci-display-ramfb.x86_64-latest.xml | 44 +++++++++++++++++++
.../hostdev-pci-display-ramfb.xml | 33 ++++++++++++++
tests/qemuxmlconftest.c | 1 +
11 files changed, 185 insertions(+), 20 deletions(-)
create mode 100644 tests/qemuxmlconfdata/hostdev-pci-display-ramfb.x86_64-latest.args
create mode 100644 tests/qemuxmlconfdata/hostdev-pci-display-ramfb.x86_64-latest.xml
create mode 100644 tests/qemuxmlconfdata/hostdev-pci-display-ramfb.xml
--
2.44.0
[PATCH] qemu: warn on pausing of guest due to watchdog or io error
by Michal Privoznik
From: Lennart Fricke <lennart.fricke(a)drehpunkt.com>
Change the log level for pauses of guests due to watchdog timeouts
or I/O errors from debug to warn, to enhance the visibility of such
events.
Signed-off-by: Lennart Fricke <lennart.fricke(a)drehpunkt.com>
Reviewed-by: Michal Privoznik <mprivozn(a)redhat.com>
---
Here's a commit from MR 339 [1] I've merged.
1: https://gitlab.com/libvirt/libvirt/-/merge_requests/339
src/qemu/qemu_process.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 8e35d0a73f..8145205fa8 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -790,7 +790,7 @@ qemuProcessHandleWatchdog(qemuMonitor *mon G_GNUC_UNUSED,
if (action == VIR_DOMAIN_EVENT_WATCHDOG_PAUSE &&
virDomainObjGetState(vm, NULL) == VIR_DOMAIN_RUNNING) {
qemuDomainObjPrivate *priv = vm->privateData;
- VIR_DEBUG("Transitioned guest %s to paused state due to watchdog", vm->def->name);
+ VIR_WARN("Transitioned guest %s to paused state due to watchdog", vm->def->name);
virDomainObjSetState(vm, VIR_DOMAIN_PAUSED, VIR_DOMAIN_PAUSED_WATCHDOG);
lifecycleEvent = virDomainEventLifecycleNewFromObj(vm,
@@ -860,7 +860,7 @@ qemuProcessHandleIOError(qemuMonitor *mon G_GNUC_UNUSED,
if (action == VIR_DOMAIN_EVENT_IO_ERROR_PAUSE &&
virDomainObjGetState(vm, NULL) == VIR_DOMAIN_RUNNING) {
qemuDomainObjPrivate *priv = vm->privateData;
- VIR_DEBUG("Transitioned guest %s to paused state due to IO error", vm->def->name);
+ VIR_WARN("Transitioned guest %s to paused state due to IO error", vm->def->name);
if (priv->signalIOError)
virDomainObjBroadcast(vm);
--
2.43.2
[libvirt PATCH V4 0/4] add loongarch support for libvirt
by Xianglai Li
Hello, Everyone:
This patch series adds libvirt support for loongarch.
After three rounds of code review and modification, our patch series has reached its 4th version.
We would like to thank Andrea very much for his comments and for helping us improve the patches.
You can get the full patch from the link below:
https://gitlab.com/lixianglai/libvirt
branch: loongarch
Since the loongarch patches have not yet been submitted to the
virt-manager community, for the time being we are providing a
virt-manager temporarily patched for loongarch; the work of adding
loongarch support to upstream virt-manager will follow later or be
synchronized with the libvirt upstreaming.
You can get the virt-manager code with loongarch patch from the link below:
https://github.com/lixianglai/virt-manager
branch: loongarch
The loongarch virtual machine BIOS is not yet available in qemu, so you need to compile the loongarch UEFI yourself;
you can refer to the following link to compile UEFI:
https://github.com/tianocore/edk2-platforms/blob/master/Platform/Loongson...
Here we provide a compiled UEFI for ease of testing; you can get it from the following link:
https://github.com/lixianglai/LoongarchVirtFirmware
(Note: you should clone the repository using git instead of downloading the files via wget, or you'll get XML instead of the firmware binaries.)
We named the BIOS QEMU_EFI.fd; QEMU_VARS.fd is used to store the pflash image of non-volatile variables.
After installing qemu-system-loongarch64, you can install the loongarch BIOS by executing the script
install-loongarch-virt-firmware.sh.
Since there is no official Fedora release that supports the loongarch
architecture, you can find a Fedora remix ISO that supports loongarch at the link below for testing purposes:
https://github.com/fedora-remix-loongarch/releases-info
The operating system provided above may have minor problems of one kind or another;
alternatively, as the following link shows, openEuler's operating system
also supports loongarch:
If you have completed the above steps, you can now install a loongarch virtual machine,
either through the virt-manager graphical interface or through virt-install;
here is an example of installing it using virt-install:
virt-install \
--virt-type=qemu \
--name loongarch-test \
--memory 4096 \
--vcpus=4 \
--arch=loongarch64 \
--boot cdrom \
--disk device=cdrom,bus=scsi,path=/root/livecd-fedora-mate-4.loongarch64.iso \
--disk path=/var/lib/libvirt/images/debian12-loongarch64.qcow2,size=10,format=qcow2,bus=scsi \
--network network=default \
--osinfo archlinux \
--video=virtio \
--graphics=vnc,listen=0.0.0.0
CHANGES
V3->V4:
1.Rebase according to the latest master code, making appropriate adjustments for relevant changes.
2.Modify the default scsi controller model.
3.Add firmware test cases.
4.Remove unnecessary acpi functional flags from test cases.
V2->V3:
1.Delete redundant header files in cpu_loongarch.c
2.Fixed code formatting issues
3.Modify the commit message of [PATCH 2/4]
4.Remove useless test cases for loongarch and
rebuild loongarch's test cases based on the latest code.
V1->V2:
1.Modify the link addresses of virt-manager and firmware in the cover
letter. Please obtain the code and firmware from the latest link
address.
2.Rename the BIOS. Remove the loongarch BIOS name from libvirt
and use the JSON file to obtain the BIOS path.
3.Refer to riscv64 to simplify the implementation of the loongarch cpu
driver, and fix some code style errors.
4.Delete unnecessary or redundant device enablement, such as USB NEC.
5.Add some test cases for loongarch.
Xianglai Li (4):
Add loongarch cpu support
Support for loongarch64 in the QEMU driver
Implement the method of getting host info for loongarch
Add test script for loongarch
src/conf/schemas/basictypes.rng | 1 +
src/cpu/cpu.c | 2 +
src/cpu/cpu_loongarch.c | 58 +
src/cpu/cpu_loongarch.h | 25 +
src/cpu/meson.build | 1 +
src/qemu/qemu_capabilities.c | 16 +-
src/qemu/qemu_command.c | 6 +-
src/qemu/qemu_domain.c | 42 +-
src/qemu/qemu_domain.h | 1 +
src/qemu/qemu_validate.c | 1 +
src/util/virarch.c | 13 +-
src/util/virarch.h | 13 +-
src/util/virhostcpu.c | 7 +-
src/util/virsysinfo.c | 3 +-
.../qemu_8.2.0-tcg-virt.loongarch64.xml | 165 +
.../qemu_8.2.0-virt.loongarch64.xml | 169 +
tests/domaincapstest.c | 4 +-
.../caps_8.2.0_loongarch64.replies | 30121 ++++++++++++++++
.../caps_8.2.0_loongarch64.xml | 177 +
.../qemucaps2xmloutdata/caps.loongarch64.xml | 28 +
tests/qemufirmwaretest.c | 2 +-
...o-type-loongarch64.loongarch64-latest.args | 34 +
...eo-type-loongarch64.loongarch64-latest.xml | 45 +
.../default-video-type-loongarch64.xml | 18 +
...garch64.loongarch64-latest.abi-update.args | 35 +
...ngarch64.loongarch64-latest.abi-update.xml | 34 +
...to-efi-loongarch64.loongarch64-latest.args | 35 +
...uto-efi-loongarch64.loongarch64-latest.xml | 34 +
.../firmware-auto-efi-loongarch64.xml | 17 +
...-models.loongarch64-latest.abi-update.args | 43 +
...t-models.loongarch64-latest.abi-update.xml | 72 +
...irt-default-models.loongarch64-latest.args | 43 +
...virt-default-models.loongarch64-latest.xml | 72 +
.../loongarch64-virt-default-models.xml | 21 +
...ch64-virt-graphics.loongarch64-latest.args | 56 +
...rch64-virt-graphics.loongarch64-latest.xml | 116 +
.../loongarch64-virt-graphics.xml | 48 +
...ch64-virt-headless.loongarch64-latest.args | 52 +
...rch64-virt-headless.loongarch64-latest.xml | 102 +
.../loongarch64-virt-headless.xml | 42 +
...minimal.loongarch64-latest.abi-update.args | 31 +
...-minimal.loongarch64-latest.abi-update.xml | 23 +
...rch64-virt-minimal.loongarch64-latest.args | 31 +
...arch64-virt-minimal.loongarch64-latest.xml | 23 +
.../loongarch64-virt-minimal.xml | 12 +
tests/qemuxmlconftest.c | 9 +
tests/testutilshostcpus.h | 10 +
47 files changed, 31885 insertions(+), 28 deletions(-)
create mode 100644 src/cpu/cpu_loongarch.c
create mode 100644 src/cpu/cpu_loongarch.h
create mode 100644 tests/domaincapsdata/qemu_8.2.0-tcg-virt.loongarch64.xml
create mode 100644 tests/domaincapsdata/qemu_8.2.0-virt.loongarch64.xml
create mode 100644 tests/qemucapabilitiesdata/caps_8.2.0_loongarch64.replies
create mode 100644 tests/qemucapabilitiesdata/caps_8.2.0_loongarch64.xml
create mode 100644 tests/qemucaps2xmloutdata/caps.loongarch64.xml
create mode 100644 tests/qemuxmlconfdata/default-video-type-loongarch64.loongarch64-latest.args
create mode 100644 tests/qemuxmlconfdata/default-video-type-loongarch64.loongarch64-latest.xml
create mode 100644 tests/qemuxmlconfdata/default-video-type-loongarch64.xml
create mode 100644 tests/qemuxmlconfdata/firmware-auto-efi-loongarch64.loongarch64-latest.abi-update.args
create mode 100644 tests/qemuxmlconfdata/firmware-auto-efi-loongarch64.loongarch64-latest.abi-update.xml
create mode 100644 tests/qemuxmlconfdata/firmware-auto-efi-loongarch64.loongarch64-latest.args
create mode 100644 tests/qemuxmlconfdata/firmware-auto-efi-loongarch64.loongarch64-latest.xml
create mode 100644 tests/qemuxmlconfdata/firmware-auto-efi-loongarch64.xml
create mode 100644 tests/qemuxmlconfdata/loongarch64-virt-default-models.loongarch64-latest.abi-update.args
create mode 100644 tests/qemuxmlconfdata/loongarch64-virt-default-models.loongarch64-latest.abi-update.xml
create mode 100644 tests/qemuxmlconfdata/loongarch64-virt-default-models.loongarch64-latest.args
create mode 100644 tests/qemuxmlconfdata/loongarch64-virt-default-models.loongarch64-latest.xml
create mode 100644 tests/qemuxmlconfdata/loongarch64-virt-default-models.xml
create mode 100644 tests/qemuxmlconfdata/loongarch64-virt-graphics.loongarch64-latest.args
create mode 100644 tests/qemuxmlconfdata/loongarch64-virt-graphics.loongarch64-latest.xml
create mode 100644 tests/qemuxmlconfdata/loongarch64-virt-graphics.xml
create mode 100644 tests/qemuxmlconfdata/loongarch64-virt-headless.loongarch64-latest.args
create mode 100644 tests/qemuxmlconfdata/loongarch64-virt-headless.loongarch64-latest.xml
create mode 100644 tests/qemuxmlconfdata/loongarch64-virt-headless.xml
create mode 100644 tests/qemuxmlconfdata/loongarch64-virt-minimal.loongarch64-latest.abi-update.args
create mode 100644 tests/qemuxmlconfdata/loongarch64-virt-minimal.loongarch64-latest.abi-update.xml
create mode 100644 tests/qemuxmlconfdata/loongarch64-virt-minimal.loongarch64-latest.args
create mode 100644 tests/qemuxmlconfdata/loongarch64-virt-minimal.loongarch64-latest.xml
create mode 100644 tests/qemuxmlconfdata/loongarch64-virt-minimal.xml
--
2.39.1
[PATCH] remote: check for negative array lengths before allocation
by Daniel P. Berrangé
While the C API entry points will validate non-negative lengths
for various parameters, the RPC server de-serialization code
will need to allocate memory for arrays before entering the C
API. These allocations will thus happen before the non-negative
length check is performed.
Passing a negative length to the g_new0 function will usually
result in a crash due to the negative length being treated as
a huge positive number.
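A standalone illustration (plain GLib, not libvirt code) of that failure
mode:

#include <glib.h>
#include <stdio.h>

int main(void)
{
    int nparams = -1;  /* e.g. an attacker-controlled RPC field */

    /* g_new0() converts the count to an unsigned gsize before multiplying,
     * so -1 becomes SIZE_MAX... */
    printf("requested: %zu bytes\n", (size_t)nparams * sizeof(unsigned char));

    /* ...and the allocation aborts the process via GLib's OOM handler. */
    unsigned char *buf = g_new0(unsigned char, nparams);
    g_free(buf);
    return 0;
}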
This was found and diagnosed by ALT Linux Team with AFLplusplus.
CVE-2024-2494
Found-by: Alexandr Shashkin <dutyrok(a)altlinux.org>
Co-developed-by: Alexander Kuznetsov <kuznetsovam(a)altlinux.org>
Signed-off-by: Daniel P. Berrangé <berrange(a)redhat.com>
---
src/remote/remote_daemon_dispatch.c | 65 +++++++++++++++++++++++++++++
src/rpc/gendispatch.pl | 5 +++
2 files changed, 70 insertions(+)
diff --git a/src/remote/remote_daemon_dispatch.c b/src/remote/remote_daemon_dispatch.c
index aaabd1e56c..01dcac4b12 100644
--- a/src/remote/remote_daemon_dispatch.c
+++ b/src/remote/remote_daemon_dispatch.c
@@ -2291,6 +2291,10 @@ remoteDispatchDomainGetSchedulerParameters(virNetServer *server G_GNUC_UNUSED,
if (!conn)
goto cleanup;
+ if (args->nparams < 0) {
+ virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("nparams must be non-negative"));
+ goto cleanup;
+ }
if (args->nparams > REMOTE_DOMAIN_SCHEDULER_PARAMETERS_MAX) {
virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("nparams too large"));
goto cleanup;
@@ -2339,6 +2343,10 @@ remoteDispatchDomainGetSchedulerParametersFlags(virNetServer *server G_GNUC_UNUS
if (!conn)
goto cleanup;
+ if (args->nparams < 0) {
+ virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("nparams must be non-negative"));
+ goto cleanup;
+ }
if (args->nparams > REMOTE_DOMAIN_SCHEDULER_PARAMETERS_MAX) {
virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("nparams too large"));
goto cleanup;
@@ -2497,6 +2505,10 @@ remoteDispatchDomainBlockStatsFlags(virNetServer *server G_GNUC_UNUSED,
goto cleanup;
flags = args->flags;
+ if (args->nparams < 0) {
+ virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("nparams must be non-negative"));
+ goto cleanup;
+ }
if (args->nparams > REMOTE_DOMAIN_BLOCK_STATS_PARAMETERS_MAX) {
virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("nparams too large"));
goto cleanup;
@@ -2717,6 +2729,14 @@ remoteDispatchDomainGetVcpuPinInfo(virNetServer *server G_GNUC_UNUSED,
if (!(dom = get_nonnull_domain(conn, args->dom)))
goto cleanup;
+ if (args->ncpumaps < 0) {
+ virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("ncpumaps must be non-negative"));
+ goto cleanup;
+ }
+ if (args->maplen < 0) {
+ virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("maplen must be non-negative"));
+ goto cleanup;
+ }
if (args->ncpumaps > REMOTE_VCPUINFO_MAX) {
virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("ncpumaps > REMOTE_VCPUINFO_MAX"));
goto cleanup;
@@ -2811,6 +2831,11 @@ remoteDispatchDomainGetEmulatorPinInfo(virNetServer *server G_GNUC_UNUSED,
if (!(dom = get_nonnull_domain(conn, args->dom)))
goto cleanup;
+ if (args->maplen < 0) {
+ virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("maplen must be non-negative"));
+ goto cleanup;
+ }
+
/* Allocate buffers to take the results */
if (args->maplen > 0)
cpumaps = g_new0(unsigned char, args->maplen);
@@ -2858,6 +2883,14 @@ remoteDispatchDomainGetVcpus(virNetServer *server G_GNUC_UNUSED,
if (!(dom = get_nonnull_domain(conn, args->dom)))
goto cleanup;
+ if (args->maxinfo < 0) {
+ virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("maxinfo must be non-negative"));
+ goto cleanup;
+ }
+ if (args->maplen < 0) {
+ virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("maplen must be non-negative"));
+ goto cleanup;
+ }
if (args->maxinfo > REMOTE_VCPUINFO_MAX) {
virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("maxinfo > REMOTE_VCPUINFO_MAX"));
goto cleanup;
@@ -3096,6 +3129,10 @@ remoteDispatchDomainGetMemoryParameters(virNetServer *server G_GNUC_UNUSED,
flags = args->flags;
+ if (args->nparams < 0) {
+ virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("nparams must be non-negative"));
+ goto cleanup;
+ }
if (args->nparams > REMOTE_DOMAIN_MEMORY_PARAMETERS_MAX) {
virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("nparams too large"));
goto cleanup;
@@ -3156,6 +3193,10 @@ remoteDispatchDomainGetNumaParameters(virNetServer *server G_GNUC_UNUSED,
flags = args->flags;
+ if (args->nparams < 0) {
+ virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("nparams must be non-negative"));
+ goto cleanup;
+ }
if (args->nparams > REMOTE_DOMAIN_NUMA_PARAMETERS_MAX) {
virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("nparams too large"));
goto cleanup;
@@ -3216,6 +3257,10 @@ remoteDispatchDomainGetBlkioParameters(virNetServer *server G_GNUC_UNUSED,
flags = args->flags;
+ if (args->nparams < 0) {
+ virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("nparams must be non-negative"));
+ goto cleanup;
+ }
if (args->nparams > REMOTE_DOMAIN_BLKIO_PARAMETERS_MAX) {
virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("nparams too large"));
goto cleanup;
@@ -3277,6 +3322,10 @@ remoteDispatchNodeGetCPUStats(virNetServer *server G_GNUC_UNUSED,
flags = args->flags;
+ if (args->nparams < 0) {
+ virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("nparams must be non-negative"));
+ goto cleanup;
+ }
if (args->nparams > REMOTE_NODE_CPU_STATS_MAX) {
virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("nparams too large"));
goto cleanup;
@@ -3339,6 +3388,10 @@ remoteDispatchNodeGetMemoryStats(virNetServer *server G_GNUC_UNUSED,
flags = args->flags;
+ if (args->nparams < 0) {
+ virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("nparams must be non-negative"));
+ goto cleanup;
+ }
if (args->nparams > REMOTE_NODE_MEMORY_STATS_MAX) {
virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("nparams too large"));
goto cleanup;
@@ -3514,6 +3567,10 @@ remoteDispatchDomainGetBlockIoTune(virNetServer *server G_GNUC_UNUSED,
if (!conn)
goto cleanup;
+ if (args->nparams < 0) {
+ virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("nparams must be non-negative"));
+ goto cleanup;
+ }
if (args->nparams > REMOTE_DOMAIN_BLOCK_IO_TUNE_PARAMETERS_MAX) {
virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("nparams too large"));
goto cleanup;
@@ -5081,6 +5138,10 @@ remoteDispatchDomainGetInterfaceParameters(virNetServer *server G_GNUC_UNUSED,
flags = args->flags;
+ if (args->nparams < 0) {
+ virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("nparams must be non-negative"));
+ goto cleanup;
+ }
if (args->nparams > REMOTE_DOMAIN_INTERFACE_PARAMETERS_MAX) {
virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("nparams too large"));
goto cleanup;
@@ -5301,6 +5362,10 @@ remoteDispatchNodeGetMemoryParameters(virNetServer *server G_GNUC_UNUSED,
flags = args->flags;
+ if (args->nparams < 0) {
+ virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("nparams must be non-negative"));
+ goto cleanup;
+ }
if (args->nparams > REMOTE_NODE_MEMORY_PARAMETERS_MAX) {
virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("nparams too large"));
goto cleanup;
diff --git a/src/rpc/gendispatch.pl b/src/rpc/gendispatch.pl
index 5ce988c5ae..c5842dc796 100755
--- a/src/rpc/gendispatch.pl
+++ b/src/rpc/gendispatch.pl
@@ -1070,6 +1070,11 @@ elsif ($mode eq "server") {
print "\n";
if ($single_ret_as_list) {
+ print " if (args->$single_ret_list_max_var < 0) {\n";
+ print " virReportError(VIR_ERR_RPC,\n";
+ print " \"%s\", _(\"max$single_ret_list_name must be non-negative\"));\n";
+ print " goto cleanup;\n";
+ print " }\n";
print " if (args->$single_ret_list_max_var > $single_ret_list_max_define) {\n";
print " virReportError(VIR_ERR_RPC,\n";
print " \"%s\", _(\"max$single_ret_list_name > $single_ret_list_max_define\"));\n";
--
2.43.0