[libvirt PATCH 0/6] scripts: group-qemu-caps: improve readability
by Ján Tomko
Even though it still stays a Perl script at heart.
Ján Tomko (6):
scripts: group-qemu-caps: use read
scripts: group-qemu-caps: remove cryptic bool
scripts: group-qemu-caps: remove unnecessary regex
scripts: group-qemu-caps: separate file loading
scripts: group-qemu-caps: regroup_caps: operate on split lines
scripts: group-qemu-caps: separate file operations from the check
scripts/group-qemu-caps.py | 120 +++++++++++++++++++++----------------
1 file changed, 68 insertions(+), 52 deletions(-)
--
2.45.2
6 months, 2 weeks
[RFC PATCH] conf: check for migration job during domain start
by Sergey Dyasli
It's possible to hit the following situation during qemu p2p live
migration:
1. qemu has live migrated and exited (making virDomainObjIsActive()
return false)
2. the live migration job is still in progress, waiting for a
confirmation from the remote libvirt daemon. This may last for
a while in the presence of networking issues (up to the keepalive
timeout).
Any attempt to start the domain again would fail with a "domain is already
being started" message, which is misleading in this situation as it
doesn't reflect what's really happening.
Add a check for the migration job and report a different error message
if the migration job is still running.
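A hypothetical reproduction of the resulting behaviour (domain name and
output assumed, not copied from a real run):
  # on the source host, while the p2p migration job is still waiting for
  # confirmation from the remote daemon
  $ virsh start guest
  error: Requested operation is not valid: domain 'guest' is being migrated out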
Signed-off-by: Sergey Dyasli <sergey.dyasli(a)nutanix.com>
---
src/conf/virdomainobjlist.c | 12 +++++++++---
1 file changed, 9 insertions(+), 3 deletions(-)
diff --git a/src/conf/virdomainobjlist.c b/src/conf/virdomainobjlist.c
index bb5807d00b42..166bbc5cfd57 100644
--- a/src/conf/virdomainobjlist.c
+++ b/src/conf/virdomainobjlist.c
@@ -301,9 +301,15 @@ virDomainObjListAddLocked(virDomainObjList *doms,
goto error;
}
if (!vm->persistent) {
- virReportError(VIR_ERR_OPERATION_INVALID,
- _("domain '%1$s' is already being started"),
- vm->def->name);
+ if (vm->job->asyncJob == VIR_ASYNC_JOB_MIGRATION_OUT) {
+ virReportError(VIR_ERR_OPERATION_INVALID,
+ _("domain '%1$s' is being migrated out"),
+ vm->def->name);
+ } else {
+ virReportError(VIR_ERR_OPERATION_INVALID,
+ _("domain '%1$s' is already being started"),
+ vm->def->name);
+ }
goto error;
}
}
--
2.22.3
7 months, 3 weeks
[PATCH RFC v3 00/16] Support throttle block filters
by wucf@linux.ibm.com
From: Chun Feng Wu <wucf(a)linux.ibm.com>
Hi,
I am thinking of leveraging the "throttle block filter" in QEMU to support more flexible I/O limits (e.g. tiered I/O groups); one sample provided by the QEMU docs is:
https://github.com/qemu/qemu/blob/master/docs/throttle.txt
"For example, let's say that we have three different drives and we want to set I/O limits for
each one of them and an additional set of limits for the combined I/O of all three drives."
The implementation idea is to
- Define throttle groups (limits) in the domain (a rough XML sketch follows this list)
- Define throttle filters that reference throttle groups within a disk
- Within a domain disk, throttle filters reference multiple throttle groups to form a filter chain, applying multiple limits in QEMU as in the sample above
- Add new virsh cmds for throttle group management:
throttlegroupset Add or update a throttling group.
throttlegroupdel Delete a throttling group.
throttlegroupinfo Get a throttling group.
throttlegrouplist List all domain throttling groups.
- Update "attach-disk" to add one more option "--throttle-groups" to apply throttle filters e.g. "virsh attach-disk $VM_ID ${DISK_PATH}/vm1_disk_2.qcow2 vdd --driver qemu --subdriver qcow2 --targetbus virtio --throttle-groups limit2,limit012"
- I chose the above semantics as I felt they're appropriate; if there are better ones, please kindly suggest.
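For illustration, a rough sketch of how such a domain XML could look (the element and attribute names here are placeholders of mine; the authoritative schema is what patches 1 and 2 define), chaining limit0 and limit012 on one disk to match the QMP sample below:
  <throttlegroups>
    <throttlegroup>
      <group_name>limit0</group_name>
      <total_iops_sec>200</total_iops_sec>
      <total_iops_sec_max>200</total_iops_sec_max>
      <total_iops_sec_max_length>1</total_iops_sec_max_length>
    </throttlegroup>
    <throttlegroup>
      <group_name>limit012</group_name>
      <total_iops_sec>400</total_iops_sec>
    </throttlegroup>
  </throttlegroups>
  ...
  <disk type='file' device='disk'>
    <driver name='qemu' type='qcow2'/>
    <source file='/virt/disks/vm1_disk_1.qcow2'/>
    <target dev='vdb' bus='virtio'/>
    <throttlefilters>
      <throttlefilter group='limit0'/>
      <throttlefilter group='limit012'/>
    </throttlefilters>
  </disk>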
Note, this implementation requires flag "QEMU_CAPS_OBJECT_JSON".
From QMP perspective, the sample flow works this way:
- Throttle group creation:
virsh qemu-monitor-command 1 '{"execute":"object-add", "arguments":{"qom-type":"throttle-group","id":"limit0","limits":{"iops-total":200,"iops-read":0,"iops-total-max":200,"iops-total-max-length":1}}}'
virsh qemu-monitor-command 1 '{"execute":"object-add", "arguments":{"qom-type":"throttle-group","id":"limit1","limits":{"iops-total":250,"iops-read":0,"iops-total-max":250,"iops-total-max-length":1}}}'
virsh qemu-monitor-command 1 '{"execute":"object-add", "arguments":{"qom-type":"throttle-group","id":"limit2","limits":{"iops-total":300,"iops-read":0,"iops-total-max":300,"iops-total-max-length":1}}}'
virsh qemu-monitor-command 1 '{"execute":"object-add", "arguments":{"qom-type":"throttle-group","id":"limit012","limits":{"iops-total":400,"iops-read":0,"iops-total-max":400,"iops-total-max-length":1}}}'
- Chain up filters during attaching disk to apply two filters(limit0 and limit012):
virsh qemu-monitor-command 1 '{"execute":"blockdev-add", "arguments": {"driver":"file","filename":"/virt/disks/vm1_disk_1.qcow2","node-name":"test-3-storage","auto-read-only":true,"discard":"unmap"}}'
virsh qemu-monitor-command 1 '{"execute":"blockdev-add", "arguments":{"node-name":"test-4-format","read-only":false,"driver":"qcow2","file":"test-3-storage","backing":null}}'
virsh qemu-monitor-command 1 '{"execute":"blockdev-add", "arguments":{"driver":"throttle","node-name":"libvirt-5-filter","throttle-group": "limit0","file":"test-4-format"}}'
virsh qemu-monitor-command 1 '{"execute":"blockdev-add", "arguments": {"driver":"throttle","node-name":"libvirt-6-filter","throttle-group":"limit012","file":"libvirt-5-filter"}}'
virsh qemu-monitor-command 1 '{"execute": "device_add", "arguments": {"driver":"virtio-blk-pci","scsi":false,"bus":"pci.0","addr":"0x5","drive":"libvirt-6-filter","id":"virtio-disk1"}}'
This patchset includes:
- Throttle group XML schema definition in patch 1
- Throttle filter XML schema definition in patch 2
- Throttle group struct definition, parsing and formatting in patch 3
- Throttle filter struct definition, parsing and formatting in patch 4
- New QMP processing to update and get throttle group in patch 5&6
- New API definition and implementation in patch 7
- QEMU driver implementation in patch 8
- Hotplug processing for throttle filters in patch 9
- Extract common iotune validation in patch 10
- qemuProcessLaunch flow implementation for throttle group in patch 11
- qemuProcessLaunch flow implementation for throttle filter in patch 12
- Domain XML test for processing throttle groups and filters in patch 13
- Test new implemented driver in patch 14
- New virsh cmd implementation for group in patch 15
- Update Virsh cmd "attach_disk" to include throttle filters in patch 16
v3 changes:
- re-org commits by splitting changes containing throttle group and filters
- update commits msgs
- move schema commits to be the first ones
- refactor "diskIoTune" to extract common schema "iotune"
- add new tests for throttle groups and filters in qemuxmlconftest
- check flag "QEMU_CAPS_OBJECT_JSON" when preparing "-object"(qemu: command: Support throttle groups during qemuProcessLaunch ) or creating throttle group (qemu: Implement qemu driver for throttle API)
- when creating throttle group through "object-add" (qemu: Implement qemu driver for throttle API), reuse "qemuMonitorAddObject" to check if "objectAddNoWrap"("props") is required
- remove "virObject parent;" in "_virDomainThrottleFilterDef" in domain_conf.h
- remove "virDomainThrottleGroupIndexByName" in both domain_conf.h and domain_conf.c
- remove "virDomainThrottleFilterDefNew" in domain_conf.c
- update "virDomainThrottleFilterDefFree" to use "g_free" rather than "VIR_FREE" in domain_conf.c
- update "virDomainDiskThrottleFilterDefParse" to remove "xmlXPathContextPtr ctx" parameter and check NULL against "filter->group_name" in domain_conf.c
- use "virBufferEscapeString" instead of "virBufferAsprintf" in "virDomainDiskDefFormatThrottleFilterChain"
- use "group->val > 0" instead of "if (group->val)" in FORMAT_THROTTLE_GROUP
- remove NULL check for "group->group_name" since virBufferEscapeString checked NULL already in "virDomainThrottleGroupFormat"
- I haven't added a new conf module (src/conf/virdomainthrottle.c/h) because "virDomainThrottleGroupDef" is an alias of "_virDomainBlockIoTuneInfo", to avoid a circular dependency
- remove "NULLSTR" in qemuMonitorUpdateThrottleGroup and qemuMonitorGetThrottleGroup in qemu_monitor.c
- refactor "qemuMonitorMakeThrottleGroupLimits" to use virJSONValueObjectAdd in qemu_monitor_json.c
- use "g_strdup_printf" to avoid static buffers in "qemuMonitorJSONGetThrottleGroup"
- remove virReportError after qemuMonitorJSONGetReply in "qemuMonitorJSONGetThrottleGroup" to avoid overriding error
- remove "VIR_DOMAIN_THROTTLE_GROUP" in libvirt/include/libvirt/libvirt-domain.h
- update "virDomainGetThrottleGroup" to not first query the number of parameters,
- update "remote_domain_get_throttle_group_args" and "remote_domain_get_throttle_group_ret" to remove "nparams" in src/remote_protocol-structs, also updated "remote_domain_get_throttle_group_args" and "remote_domain_get_throttle_group_ret" in src/remote/remote_protocol.x
- update parameter "virTypedParameterPtr params" to be "virTypedParameterPtr *params" in "virDrvDomainGetThrottleGroup" in driver-hypervisor.h
- update "qemuDomainSetThrottleGroup" to not query number of parameters first
- remove wrapper "qemuDomainThrottleGroupByName" and "qemuDomainSetThrottleGroupDefaults"
- refactor "qemuDomainSetThrottleGroup" and "qemuDomainSetBlockIoTune" to use common logic
- update "qemuDomainDelThrottleGroup" to use VIR_JOB_MODIFY by referencing "qemuDomainHotplugDelIOThread"
- check if group is still being used by some filter(qemuDomainCheckThrottleGroupRef) during deletion
- replace "ThrottleFilterChain" with "ThrottleFilters"
- update "qemuDomainDiskGetTopNodename" to take top throttle node name if disk has throttles, and reuse "qemuDomainDiskGetBackendAlias" in "qemuBuildDiskDeviceProps" to get top node name as "drive"
- after enabling throttle filters, if a disk has them, during blockcommit the top node name is no longer "libvirt-x-format"; instead, the top node name references the top filter, e.g. "libvirt-x-filter"
- add check "cdrom device with throttle filters isn't supported"
- delete "filternodenameindex" and reuse "nodenameindex" to generate index for throttle nodes
- refactor detaching filters by adding "qemuBuildThrottleFiltersDetachPrepareBlockdev" to just build parameters for "blockdev-del"
- refactor "testDomainSetBlockIoTune" and "testDomainSetThrottleGroup" to use common logic
Any comments/suggestions will be appreciated!
Chun Feng Wu (16):
schema: Add new domain elements to support multiple throttle groups
schema: Add new domain elements to support multiple throttle filters
config: Introduce ThrottleGroup and corresponding XML parsing
config: Introduce ThrottleFilter and corresponding XML parsing
qemu: monitor: Add support for ThrottleGroup operations
tests: Test qemuMonitorJSONGetThrottleGroup and
qemuMonitorJSONUpdateThrottleGroup
remote: New APIs for ThrottleGroup lifecycle management
qemu: Implement qemu driver for throttle API
qemu: hotplug: Support hot attach and detach block disk along with
throttle filters
config: validate: Refactor disk iotune validation for reuse
qemu: command: Support throttle groups during qemuProcessLaunch
qemu: command: Support throttle filters during qemuProcessLaunch
qemuxmlconftest: Add 'throttlefilter' tests
test_driver: Test throttle group lifecycle APIs
virsh: Add support for throttle group operations
virsh: Add option "throttle-groups" to "attach_disk"
docs/formatdomain.rst | 48 ++
include/libvirt/libvirt-domain.h | 21 +
src/conf/domain_conf.c | 376 +++++++++++
src/conf/domain_conf.h | 51 ++
src/conf/domain_validate.c | 118 +++-
src/conf/schemas/domaincommon.rng | 293 +++++----
src/conf/virconftypes.h | 4 +
src/driver-hypervisor.h | 22 +
src/libvirt-domain.c | 196 ++++++
src/libvirt_private.syms | 9 +
src/libvirt_public.syms | 7 +
src/qemu/qemu_block.c | 131 ++++
src/qemu/qemu_block.h | 53 ++
src/qemu/qemu_command.c | 182 ++++++
src/qemu/qemu_command.h | 12 +
src/qemu/qemu_domain.c | 39 +-
src/qemu/qemu_domain.h | 8 +
src/qemu/qemu_driver.c | 617 +++++++++++++++---
src/qemu/qemu_hotplug.c | 33 +
src/qemu/qemu_monitor.c | 34 +
src/qemu/qemu_monitor.h | 14 +
src/qemu/qemu_monitor_json.c | 150 +++++
src/qemu/qemu_monitor_json.h | 14 +
src/remote/remote_daemon_dispatch.c | 44 ++
src/remote/remote_driver.c | 40 ++
src/remote/remote_protocol.x | 48 +-
src/remote_protocol-structs | 28 +
src/test/test_driver.c | 452 +++++++++----
tests/qemumonitorjsontest.c | 86 +++
.../throttlefilter.x86_64-latest.args | 43 ++
.../throttlefilter.x86_64-latest.xml | 65 ++
tests/qemuxmlconfdata/throttlefilter.xml | 55 ++
tests/qemuxmlconftest.c | 1 +
tools/virsh-completer-domain.c | 64 ++
tools/virsh-completer-domain.h | 5 +
tools/virsh-domain.c | 453 ++++++++++++-
36 files changed, 3447 insertions(+), 369 deletions(-)
create mode 100644 tests/qemuxmlconfdata/throttlefilter.x86_64-latest.args
create mode 100644 tests/qemuxmlconfdata/throttlefilter.x86_64-latest.xml
create mode 100644 tests/qemuxmlconfdata/throttlefilter.xml
--
2.34.1
7 months, 4 weeks
Re: [PATCH v2 4/4] virtio-net: Add support for USO features
by Thomas Huth
On 26/07/2024 08.08, Michael S. Tsirkin wrote:
> On Thu, Jul 25, 2024 at 06:18:20PM -0400, Peter Xu wrote:
>> On Tue, Aug 01, 2023 at 01:31:48AM +0300, Yuri Benditovich wrote:
>>> USO features of virtio-net device depend on kernel ability
>>> to support them, for backward compatibility by default the
>>> features are disabled on 8.0 and earlier.
>>>
>>> Signed-off-by: Yuri Benditovich <yuri.benditovich(a)daynix.com>
>>> Signed-off-by: Andrew Melnychecnko <andrew(a)daynix.com>
>>
>> Looks like this patch broke migration when the VM starts on a host that has
>> USO supported, to another host that doesn't..
>
> This was always the case with all offloads. The answer at the moment is,
> don't do this.
May I ask for my understanding:
"don't do this" = don't automatically enable/disable virtio features in QEMU
depending on host kernel features, or "don't do this" = don't try to migrate
between machines that have different host kernel features?
> Long term, we need to start exposing management APIs
> to discover this, and management has to disable unsupported features.
Ack, this likely needs some treatments from the libvirt side, too.
Thomas
8 months
[RFC PATCH v4 0/5] Added virtio-net RSS with eBPF support.
by Andrew Melnychenko
This series of RFC patches adds support for loading the RSS eBPF
program and passing it to QEMU.
Comments and suggestions would be useful.
QEMU with vhost may work with RSS through eBPF. Loading eBPF requires
capabilities that Libvirt may provide.
The eBPF program and maps may be unique to a particular QEMU, so
Libvirt retrieves the eBPF through QAPI.
For now, there is only the "RSS" eBPF object in QEMU; in the future,
there may be others (e.g. network filters).
That's why logic was added in Libvirt to load and store any
eBPF object that QEMU provides, using the QAPI schema.
One of the reasons why this series of patches is an RFC is the tests.
Tests have been added to this series.
For now, the tests are synthetic; the proper "reply" file should
be generated together with a new "caps" file. Currently, there are changes
in the caps-9.0.0* and caps-9.1.0 files, adding support for the
ebpf_rss_fds feature and the request-ebpf command.
So, overall, the tests are included for review, comments, and discussion
of how we want them to be implemented in the future.
For virtio-net RSS, the document has not changed.
```
<interface type="network">
  <model type="virtio"/>
  <driver queues="4" rss="on" rss_hash_report="off"/>
</interface>
```
Simplified routine for RSS:
* Libvirt retrieves the eBPF "RSS" object and loads it.
* Libvirt passes file descriptors to virtio-net with the property "ebpf_rss_fds" (the "rss" property should be "on" too).
* If fds were provided - QEMU uses the eBPF RSS implementation.
* If fds were not provided - QEMU tries to load eBPF RSS in its own context and use it.
* If eBPF RSS was not loaded - QEMU uses "in-qemu" RSS (vhost not supported).
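For reference, the eBPF object can also be fetched by hand through the monitor. The command name comes from this series; the argument layout below is my assumption of the QAPI schema added in QEMU 9.0:
virsh qemu-monitor-command 1 '{"execute":"request-ebpf", "arguments":{"id":"rss"}}'
The reply carries a base64-encoded eBPF ELF object, which Libvirt stores in the capabilities XML cache and later loads to obtain the program/map fds handed to virtio-net via "ebpf_rss_fds".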
Changes since RFC v3:
* changed tests a bit
* refactored and rebased
* removed "allowEBPF" from qemu config(now env is used for tests)
Changes since RFC v2:
* refactored and rebased.
* applied changes according to the Qemu.
* added basic test.
Changes since RFC v1:
* changed eBPF format saved in the XML cache.
* refactored and checked with syntax test.
* refactored patch hunks.
Andrew Melnychenko (5):
qemu_monitor: Added QEMU's "request-ebpf" support.
qemu_capabilities: Added logic for retrieving eBPF objects from QEMU.
qemu_interface: Added routine for loading the eBPF objects.
qemu_command: Added "ebpf_rss_fds" support for virtio-net.
tests: Added tests for eBPF blob loading.
libvirt.spec.in | 3 +
meson.build | 7 +
meson_options.txt | 1 +
src/qemu/meson.build | 1 +
src/qemu/qemu_capabilities.c | 132 ++++++++++++
src/qemu/qemu_capabilities.h | 5 +
src/qemu/qemu_command.c | 60 ++++++
src/qemu/qemu_domain.c | 4 +
src/qemu/qemu_domain.h | 3 +
src/qemu/qemu_interface.c | 87 ++++++++
src/qemu/qemu_interface.h | 7 +
src/qemu/qemu_monitor.c | 9 +
src/qemu/qemu_monitor.h | 4 +
src/qemu/qemu_monitor_json.c | 27 +++
src/qemu/qemu_monitor_json.h | 4 +
.../caps_9.0.0_sparc.replies | 95 +++++----
.../qemucapabilitiesdata/caps_9.0.0_sparc.xml | 3 +
.../caps_9.0.0_x86_64.replies | 199 ++++++++++--------
.../caps_9.0.0_x86_64.xml | 4 +
.../caps_9.1.0_x86_64.replies | 199 ++++++++++--------
.../caps_9.1.0_x86_64.xml | 4 +
tests/qemuxml2argvmock.c | 24 +++
.../net-virtio-rss-bpf.x86_64-latest.args | 37 ++++
.../net-virtio-rss-bpf.x86_64-latest.xml | 46 ++++
tests/qemuxmlconfdata/net-virtio-rss-bpf.xml | 46 ++++
tests/qemuxmlconftest.c | 5 +
26 files changed, 792 insertions(+), 224 deletions(-)
create mode 100644 tests/qemuxmlconfdata/net-virtio-rss-bpf.x86_64-latest.args
create mode 100644 tests/qemuxmlconfdata/net-virtio-rss-bpf.x86_64-latest.xml
create mode 100644 tests/qemuxmlconfdata/net-virtio-rss-bpf.xml
--
2.45.2
8 months
[PATCH RFC 0/9] qemu: Support mapped-ram migration capability
by Jim Fehlig
This series is a RFC for support of QEMU's mapped-ram migration
capability [1] for saving and restoring VMs. It implements the first
part of the design approach we discussed for supporting parallel
save/restore [2]. In summary, the approach is
1. Add mapped-ram migration capability
2. Steal an element from save header 'unused' for a 'features' variable
and bump save version to 3.
3. Add /etc/libvirt/qemu.conf knob for the save format version,
defaulting to latest v3
4. Use v3 (aka mapped-ram) by default
5. Use mapped-ram with BYPASS_CACHE for v3, old approach for v2
6. include: Define constants for parallel save/restore
7. qemu: Add support for parallel save. Implies mapped-ram, reject if v2
8. qemu: Add support for parallel restore. Implies mapped-ram.
Reject if v2
9. tools: add parallel parameter to virsh save command
10. tools: add parallel parameter to virsh restore command
This series implements 1-5, with the BYPASS_CACHE support in patches 8
and 9 being quite hacky. They are included to discuss approaches to make
them less hacky. See the patches for details.
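As a sketch of the qemu.conf knob mentioned in point 3 (the setting name is my guess, not quoted from the patch):
  # Version of the save image format to produce; 3 (mapped-ram) is the
  # default, set 2 to keep producing images readable by older libvirt.
  save_image_version = 2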
The QEMU mapped-ram capability currently does not support directio.
Fabino is working on that now [3]. This complicates merging support
in libvirt. I don't think it's reasonable to enable mapped-ram by
default when BYPASS_CACHE cannot be supported. Should we wait until
the mapped-ram directio support is merged in QEMU before supporting
mapped-ram in libvirt?
For the moment, compression is ignored in the new save version.
Currently, libvirt connects the output of QEMU's save stream to the
specified compression program via a pipe. This approach is incompatible
with mapped-ram since the fd provided to QEMU must be seekable. One
option is to reopen and compress the saved image after the actual save
operation has completed. This has the downside of requiring the iohelper
to handle BYPASS_CACHE, which would preclude us from removing it
sometime in the future. Other suggestions much welcomed.
Note the logical file size of mapped-ram saved images is slightly
larger than guest RAM size, so the files are often much larger than the
files produced by the existing, sequential format. However, actual blocks
written to disk is often lower with mapped-ram saved images. E.g. a saved
image from a 30G, freshly booted, idle guest results in the following
'Size' and 'Blocks' values reported by stat(1)
                     Size      Blocks
 sequential     998595770     1950392
 mapped-ram   34368584225     1800456
With the same guest running a workload that dirties memory
                     Size      Blocks
 sequential   33173330615    64791672
 mapped-ram   34368578210    64706944
Thanks for any comments on this RFC!
[1] https://gitlab.com/qemu-project/qemu/-/blob/master/docs/devel/migration/m...
[2] https://lists.libvirt.org/archives/list/devel@lists.libvirt.org/message/K...
[3] https://mail.gnu.org/archive/html/qemu-devel/2024-05/msg04432.html
Jim Fehlig (9):
qemu: Enable mapped-ram migration capability
qemu_fd: Add function to retrieve fdset ID
qemu: Add function to get migration params for save
qemu: Add a 'features' element to save image header and bump version
qemu: conf: Add setting for save image version
qemu: Add support for mapped-ram on save
qemu: Enable mapped-ram on restore
qemu: Support O_DIRECT with mapped-ram on save
qemu: Support O_DIRECT with mapped-ram on restore
src/qemu/libvirtd_qemu.aug | 1 +
src/qemu/qemu.conf.in | 6 +
src/qemu/qemu_conf.c | 8 ++
src/qemu/qemu_conf.h | 1 +
src/qemu/qemu_driver.c | 25 ++--
src/qemu/qemu_fd.c | 18 +++
src/qemu/qemu_fd.h | 3 +
src/qemu/qemu_migration.c | 99 ++++++++++++++-
src/qemu/qemu_migration.h | 11 +-
src/qemu/qemu_migration_params.c | 20 +++
src/qemu/qemu_migration_params.h | 4 +
src/qemu/qemu_monitor.c | 40 ++++++
src/qemu/qemu_monitor.h | 5 +
src/qemu/qemu_process.c | 63 +++++++---
src/qemu/qemu_process.h | 16 ++-
src/qemu/qemu_saveimage.c | 187 +++++++++++++++++++++++------
src/qemu/qemu_saveimage.h | 20 ++-
src/qemu/qemu_snapshot.c | 12 +-
src/qemu/test_libvirtd_qemu.aug.in | 1 +
19 files changed, 455 insertions(+), 85 deletions(-)
--
2.44.0
8 months
[PATCH] conf: Check for bandwidth limits during parsing
by Michal Privoznik
The 'tc' program stores speeds in 64bit integers (unit is bytes
per second) and sizes in uints (unit is bytes). We use different
units: kilobytes per second and kibibytes and therefore we can
parse values larger than 'tc' can handle. Reject those values
right away.
And while at it, fix the schema which assumed speed values fit
into uint.
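For context, here is how the chosen bounds line up with the unit conversion
(my own back-of-the-envelope arithmetic, assuming the usual 1024 multiplier
when converting to tc's units):
  average/peak/floor: (1ULL << 54) KiB/s * 1024 = 2^64 bytes/s, i.e. anything
                      larger no longer fits tc's 64-bit speed field
  burst:              (UINT_MAX >> 10) KiB * 1024 ~= UINT_MAX bytes, the
                      largest size tc's unsigned int can hold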
Resolves: https://issues.redhat.com/browse/RHEL-45200
Signed-off-by: Michal Privoznik <mprivozn(a)redhat.com>
---
src/conf/netdev_bandwidth_conf.c | 17 +++++++++++++++++
src/conf/schemas/networkcommon.rng | 2 +-
2 files changed, 18 insertions(+), 1 deletion(-)
diff --git a/src/conf/netdev_bandwidth_conf.c b/src/conf/netdev_bandwidth_conf.c
index 9faa46a27f..f3f0b2209a 100644
--- a/src/conf/netdev_bandwidth_conf.c
+++ b/src/conf/netdev_bandwidth_conf.c
@@ -24,6 +24,16 @@
#define VIR_FROM_THIS VIR_FROM_NONE
+#define CHECK_LIMIT(val, limit, name) \
+ do { \
+ if ((val) > (limit)) { \
+ virReportError(VIR_ERR_OVERFLOW, \
+ _("value '%1$llu' is too big for '%2$s' parameter, maximum is '%3$llu'"), \
+ val, name, (unsigned long long) limit); \
+ return -1; \
+ } \
+ } while (0)
+
static int
virNetDevBandwidthParseRate(xmlNodePtr node,
virNetDevBandwidthRate *rate,
@@ -50,6 +60,11 @@ virNetDevBandwidthParseRate(xmlNodePtr node,
&rate->floor)) < 0)
return -1;
+ CHECK_LIMIT(rate->average, 1ULL << 54, "average");
+ CHECK_LIMIT(rate->peak, 1ULL << 54, "peak");
+ CHECK_LIMIT(rate->burst, UINT_MAX >> 10, "burst");
+ CHECK_LIMIT(rate->floor, 1ULL << 54, "floor");
+
if (!rc_average && !rc_floor) {
virReportError(VIR_ERR_XML_DETAIL, "%s",
_("Missing mandatory average or floor attributes"));
@@ -71,6 +86,8 @@ virNetDevBandwidthParseRate(xmlNodePtr node,
return 0;
}
+#undef CHECK_LIMIT
+
/**
* virNetDevBandwidthParse:
* @bandwidth: parsed bandwidth
diff --git a/src/conf/schemas/networkcommon.rng b/src/conf/schemas/networkcommon.rng
index 6df6d43f54..2b3f902ffe 100644
--- a/src/conf/schemas/networkcommon.rng
+++ b/src/conf/schemas/networkcommon.rng
@@ -180,7 +180,7 @@
</define>
<define name="speed">
- <data type="unsignedInt">
+ <data type="unsignedLong">
<param name="pattern">[0-9]+</param>
<param name="minInclusive">1</param>
</data>
--
2.44.2
8 months, 1 week
[PATCH v2] vmx: Ensure unique disk targets when parsing
by Adam Julis
Disk targets are generated in virVMXParseConfig() with
virVMXGenerateDiskTarget(). It works on a combination of
controller, fixed offset, unit and prefix. Since SCSI and SATA share
the same prefix "sd", virVMXGenerateDiskTarget() could
in some cases return the same target more than once.
In this patch, after the SCSI and SATA disks are loaded into the def,
the indexes are regenerated, now simply from the position of the disk
in the array of disks (def). With this, the required uniqueness is
guaranteed.
Because the assigned addresses of disks are generated from their
indexes, virDomainDiskDefAssignAddress() is called with the updated
value for every changed SATA disk.
The corresponding tests have been modified to match the index
changes.
Signed-off-by: Adam Julis <ajulis(a)redhat.com>
---
Since the previous version on the mailing list was complicated by trying to
preserve the SCSI indexes and the previous tests, this one goes for the
straightforward approach, although it changes all (SCSI and SATA) indexes.
It's not a bug, since we cannot guarantee the same naming inside the guest
anyway.
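For clarity, virIndexToDiskName() just maps the array position to a
drive-letter style name, e.g.:
  index 0  -> sda
  index 1  -> sdb
  index 25 -> sdz
  index 26 -> sdaa
so the second disk in the array always becomes "sdb" regardless of which
controller it sits on, which is what the updated test files below reflect.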
src/vmx/vmx.c | 19 +++++++++++++++++++
tests/vmx2xmldata/esx-in-the-wild-11.xml | 4 ++--
tests/vmx2xmldata/esx-in-the-wild-12.xml | 4 ++--
tests/vmx2xmldata/esx-in-the-wild-2.xml | 4 ++--
tests/vmx2xmldata/esx-in-the-wild-8.xml | 4 ++--
tests/vmx2xmldata/scsi-driver.xml | 12 ++++++------
6 files changed, 33 insertions(+), 14 deletions(-)
diff --git a/src/vmx/vmx.c b/src/vmx/vmx.c
index 227744d062..22e59726c8 100644
--- a/src/vmx/vmx.c
+++ b/src/vmx/vmx.c
@@ -1400,6 +1400,7 @@ virVMXParseConfig(virVMXContext *ctx,
virCPUDef *cpu = NULL;
char *firmware = NULL;
size_t saved_ndisks = 0;
+ size_t i;
if (ctx->parseFileName == NULL) {
virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
@@ -1805,6 +1806,24 @@ virVMXParseConfig(virVMXContext *ctx,
}
}
+ /* now disks contain only SCSI and SATA, SATA could have same name (dst) as SCSI
+ * so replace all their names with new ones to guarantee their uniqueness
+ * finally, regenerate correct addresses, while it depends on the index */
+
+ for (i = 0; i < def->ndisks; i++) {
+ virDomainDiskDef *disk = def->disks[i];
+
+ VIR_FREE(disk->dst);
+ disk->dst = virIndexToDiskName(i, "sd");
+
+ if (virDomainDiskDefAssignAddress(NULL, disk, def) < 0) {
+ virReportError(VIR_ERR_INTERNAL_ERROR,
+ _("Could not assign address to disk '%1$s'"),
+ virDomainDiskGetSource(disk));
+ goto cleanup;
+ }
+ }
+
/* def:disks (ide) */
for (bus = 0; bus < 2; ++bus) {
for (unit = 0; unit < 2; ++unit) {
diff --git a/tests/vmx2xmldata/esx-in-the-wild-11.xml b/tests/vmx2xmldata/esx-in-the-wild-11.xml
index 8807a057d7..d9522c1be2 100644
--- a/tests/vmx2xmldata/esx-in-the-wild-11.xml
+++ b/tests/vmx2xmldata/esx-in-the-wild-11.xml
@@ -22,8 +22,8 @@
</disk>
<disk type='file' device='disk'>
<source file='[datastore] directory/esx6.7-rhel7.7-x86_64_3.vmdk'/>
- <target dev='sdp' bus='scsi'/>
- <address type='drive' controller='0' bus='0' target='0' unit='16'/>
+ <target dev='sdb' bus='scsi'/>
+ <address type='drive' controller='0' bus='0' target='0' unit='1'/>
</disk>
<controller type='scsi' index='0' model='vmpvscsi'/>
<interface type='bridge'>
diff --git a/tests/vmx2xmldata/esx-in-the-wild-12.xml b/tests/vmx2xmldata/esx-in-the-wild-12.xml
index 42184501d0..a7730845ee 100644
--- a/tests/vmx2xmldata/esx-in-the-wild-12.xml
+++ b/tests/vmx2xmldata/esx-in-the-wild-12.xml
@@ -21,9 +21,9 @@
<address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
<disk type='file' device='cdrom'>
- <target dev='sda' bus='sata'/>
+ <target dev='sdb' bus='sata'/>
<readonly/>
- <address type='drive' controller='0' bus='0' target='0' unit='0'/>
+ <address type='drive' controller='0' bus='0' target='0' unit='1'/>
</disk>
<controller type='scsi' index='0' model='vmpvscsi'/>
<controller type='sata' index='0'/>
diff --git a/tests/vmx2xmldata/esx-in-the-wild-2.xml b/tests/vmx2xmldata/esx-in-the-wild-2.xml
index 59071b5d3a..1a66f5e9c7 100644
--- a/tests/vmx2xmldata/esx-in-the-wild-2.xml
+++ b/tests/vmx2xmldata/esx-in-the-wild-2.xml
@@ -20,9 +20,9 @@
</disk>
<disk type='file' device='cdrom'>
<source file='[datastore] directory/Debian1-cdrom.iso'/>
- <target dev='sdp' bus='scsi'/>
+ <target dev='sdb' bus='scsi'/>
<readonly/>
- <address type='drive' controller='1' bus='0' target='0' unit='0'/>
+ <address type='drive' controller='0' bus='0' target='0' unit='1'/>
</disk>
<disk type='file' device='cdrom'>
<source file='/vmimages/tools-isoimages/linux.iso'/>
diff --git a/tests/vmx2xmldata/esx-in-the-wild-8.xml b/tests/vmx2xmldata/esx-in-the-wild-8.xml
index 47d22ced2a..d5356bda34 100644
--- a/tests/vmx2xmldata/esx-in-the-wild-8.xml
+++ b/tests/vmx2xmldata/esx-in-the-wild-8.xml
@@ -36,9 +36,9 @@
</disk>
<disk type='file' device='cdrom'>
<source file='[692eb778-2d4937fe] CentOS-4.7.ServerCD-x86_64.iso'/>
- <target dev='sda' bus='sata'/>
+ <target dev='sdd' bus='sata'/>
<readonly/>
- <address type='drive' controller='0' bus='0' target='0' unit='0'/>
+ <address type='drive' controller='0' bus='0' target='0' unit='3'/>
</disk>
<controller type='scsi' index='0' model='vmpvscsi'/>
<controller type='sata' index='0'/>
diff --git a/tests/vmx2xmldata/scsi-driver.xml b/tests/vmx2xmldata/scsi-driver.xml
index e5b73420c3..42b6fffe24 100644
--- a/tests/vmx2xmldata/scsi-driver.xml
+++ b/tests/vmx2xmldata/scsi-driver.xml
@@ -19,18 +19,18 @@
</disk>
<disk type='file' device='disk'>
<source file='[datastore] directory/harddisk2.vmdk'/>
- <target dev='sdp' bus='scsi'/>
- <address type='drive' controller='1' bus='0' target='0' unit='0'/>
+ <target dev='sdb' bus='scsi'/>
+ <address type='drive' controller='0' bus='0' target='0' unit='1'/>
</disk>
<disk type='file' device='disk'>
<source file='[datastore] directory/harddisk3.vmdk'/>
- <target dev='sdae' bus='scsi'/>
- <address type='drive' controller='2' bus='0' target='0' unit='0'/>
+ <target dev='sdc' bus='scsi'/>
+ <address type='drive' controller='0' bus='0' target='0' unit='2'/>
</disk>
<disk type='file' device='disk'>
<source file='[datastore] directory/harddisk4.vmdk'/>
- <target dev='sdat' bus='scsi'/>
- <address type='drive' controller='3' bus='0' target='0' unit='0'/>
+ <target dev='sdd' bus='scsi'/>
+ <address type='drive' controller='0' bus='0' target='0' unit='3'/>
</disk>
<controller type='scsi' index='0' model='buslogic'/>
<controller type='scsi' index='1' model='lsilogic'/>
--
2.45.2
8 months, 1 week
[PATCH v2 0/8] New changes in v2:
by Purna Pavan Chandra
* Add version checks in save/restore validations
* Add use_timeout in chSocketRecv
* Address Praveen Paladugu's comments
v1: https://lists.libvirt.org/archives/list/devel@lists.libvirt.org/thread/PT...
ch: support restore with network devices
The current ch driver supports restore only for domains without any network
configuration defined. This was because libvirt explicitly passes network fds
and CH did not have support for restoring with new net FDs. This support has been
added recently: https://github.com/cloud-hypervisor/cloud-hypervisor/pull/6402
The changes in this patch series include moving to socket communication for
the restore API, creating new net fds, and passing them via SCM_RIGHTS to CH.
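To illustrate the mechanism, here is a minimal generic POSIX sketch of passing
an fd over a connected UNIX socket with SCM_RIGHTS (not the actual libvirt
helper; the send_fd name and signature are made up):
  #include <string.h>
  #include <sys/socket.h>
  #include <sys/uio.h>

  /* Send one fd alongside a request payload, using an SCM_RIGHTS
   * ancillary message on a connected UNIX socket. */
  static int send_fd(int sock, int fd, const void *payload, size_t len)
  {
      struct msghdr msg = { 0 };
      struct iovec iov = { .iov_base = (void *)payload, .iov_len = len };
      char cbuf[CMSG_SPACE(sizeof(int))];
      struct cmsghdr *cmsg;

      memset(cbuf, 0, sizeof(cbuf));
      msg.msg_iov = &iov;
      msg.msg_iovlen = 1;
      msg.msg_control = cbuf;
      msg.msg_controllen = sizeof(cbuf);

      cmsg = CMSG_FIRSTHDR(&msg);
      cmsg->cmsg_level = SOL_SOCKET;
      cmsg->cmsg_type = SCM_RIGHTS;
      cmsg->cmsg_len = CMSG_LEN(sizeof(int));
      memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

      return sendmsg(sock, &msg, 0) < 0 ? -1 : 0;
  }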
Purna Pavan Chandra (8):
ch: report response message instead of just code
ch: Pass net ids explicitly during vm creation
ch: refactor chProcessAddNetworkDevices
ch: support poll with -1 in chSocketRecv
ch: use monitor socket fd to send restore request
ch: refactor virCHMonitorSaveVM
ch: support restore with net devices
ch: kill CH process if restore fails
src/ch/ch_capabilities.c | 6 +
src/ch/ch_capabilities.h | 1 +
src/ch/ch_driver.c | 29 +++--
src/ch/ch_monitor.c | 62 +++++++----
src/ch/ch_monitor.h | 6 +-
src/ch/ch_process.c | 233 +++++++++++++++++++++++++++++++--------
6 files changed, 254 insertions(+), 83 deletions(-)
--
2.34.1
8 months, 2 weeks