[PATCH Libvirt v2 00/10] Support dirty page rate upper limit

Hi,

This is the latest version of the series. Compared with version 1, some key
modifications have been made, inspired and suggested by Peter:

1. Introduce XML for dirty limit persistent configuration
2. Merge the cancel API into the set API
3. Extend the domstats/virDomainListGetStats API for dirty limit
   information query
4. Introduce the virDomainModificationImpact flags to control the
   behavior of the API
5. Enrich the comments and docs about the feature and API

The patch set introduces the new API virDomainSetVcpuDirtyLimit to allow
upper-layer applications to set upper limits of dirty page rate for virtual
CPUs; the corresponding virsh command is as follows:

# limit-dirty-page-rate <domain> <rate> [--vcpu <number>] \
      [--config] [--live] [--current]

We put the dirty limit persistent info in the "vcpus" element of the domain
XML and extend dirtylimit statistics for domGetStats:

<domain>
  ...
  <vcpu current='2'>3</vcpu>
  <vcpus>
    <vcpu id='0' hotpluggable='no' dirty_limit='10' order='1'.../>
    <vcpu id='1' hotpluggable='yes' dirty_limit='10' order='2'.../>
  </vcpus>
  ...

If the --vcpu option is not passed to the virsh command, the limit is set
for all virtual CPUs; if the rate is set to zero, the upper limit is
canceled.

Examples:

To set a dirty page rate upper limit of 10 MB/s for all virtual CPUs in
c81_node1, use:

[root@srv2 my_libvirt]# virsh limit-dirty-page-rate c81_node1 --rate 10 --live
Set dirty page rate limit 10(MB/s) for all virtual CPUs successfully

[root@srv2 my_libvirt]# virsh dumpxml c81_node1 | grep dirty_limit
    <vcpu id='0' enabled='yes' hotpluggable='no' order='1' dirty_limit='10'/>
    <vcpu id='1' enabled='yes' hotpluggable='no' order='2' dirty_limit='10'/>
    <vcpu id='2' enabled='yes' hotpluggable='no' order='3' dirty_limit='10'/>
    <vcpu id='3' enabled='no' hotpluggable='yes' dirty_limit='10'/>
    <vcpu id='4' enabled='no' hotpluggable='yes' dirty_limit='10'/>
......

Query the dirty limit info dynamically:

[root@srv2 my_libvirt]# virsh domstats c81_node1 --dirtylimit
Domain: 'c81_node1'
  dirtylimit.vcpu.0.limit=10
  dirtylimit.vcpu.0.current=0
  dirtylimit.vcpu.1.limit=10
  dirtylimit.vcpu.1.current=0
  dirtylimit.vcpu.2.limit=10
  dirtylimit.vcpu.2.current=0
  dirtylimit.vcpu.3.limit=10
  dirtylimit.vcpu.3.current=0
  dirtylimit.vcpu.4.limit=10
  dirtylimit.vcpu.4.current=0
......

To cancel the upper limit, use:

[root@srv2 my_libvirt]# virsh limit-dirty-page-rate c81_node1 \
      --rate 0 --live
Cancel dirty page rate limit for all virtual CPUs successfully

[root@srv2 my_libvirt]# virsh dumpxml c81_node1 | grep dirty_limit
[root@srv2 my_libvirt]# virsh domstats c81_node1 --dirtylimit
Domain: 'c81_node1'

The dirty limit uses the QEMU dirty-limit feature introduced in QEMU 7.1.0.
This feature allows CPUs to be throttled as needed to keep their dirty page
rate within the limit. It could, in some scenarios, be used to provide
quality of service for the memory workload of virtual CPUs. QEMU itself uses
the feature to implement the dirty-limit throttle algorithm and applies it
to live migration, which improves the responsiveness of large guests during
live migration and can result in more stable read performance. Other
application scenarios remain unexplored; until then, Libvirt can provide
the basic API.
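For management applications that consume the C API directly, a minimal
sketch of the new call might look as follows (error handling trimmed; the
connection URI and domain name are illustrative):

    #include <stdio.h>
    #include <libvirt/libvirt.h>

    int main(void)
    {
        virConnectPtr conn = virConnectOpen("qemu:///system");
        virDomainPtr dom = virDomainLookupByName(conn, "c81_node1");

        /* -1 selects all vCPUs; limit them to 10 MB/s on the live domain */
        if (virDomainSetVcpuDirtyLimit(dom, -1, 10, VIR_DOMAIN_AFFECT_LIVE) < 0)
            fprintf(stderr, "failed to set dirty page rate limit\n");

        /* rate == 0 cancels the limit again */
        virDomainSetVcpuDirtyLimit(dom, -1, 0, VIR_DOMAIN_AFFECT_LIVE);

        virDomainFree(dom);
        virConnectClose(conn);
        return 0;
    }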
Please review, thanks,
Yong

Hyman Huang(黄勇) (10):
  qemu_capabilities: Introduce QEMU_CAPS_VCPU_DIRTY_LIMIT capability
  conf: Introduce XML for dirty limit configuration
  libvirt: Add virDomainSetVcpuDirtyLimit API
  qemu_driver: Implement qemuDomainSetVcpuDirtyLimit
  domain_validate: Export virDomainDefHasDirtyLimitStartupVcpus symbol
  qemu_process: Setup dirty limit after launching VM
  virsh: Introduce limit-dirty-page-rate api
  qemu_monitor: Implement qemuMonitorQueryVcpuDirtyLimit
  qemu_driver: Extend dirtylimit statistics for domGetStats
  virsh: Introduce command 'virsh domstats --dirtylimit'

 docs/formatdomain.rst                                |   7 +-
 docs/manpages/virsh.rst                              |  33 +++-
 include/libvirt/libvirt-domain.h                     |   5 +
 src/conf/domain_conf.c                               |  26 +++
 src/conf/domain_conf.h                               |   8 +
 src/conf/domain_validate.c                           |  33 ++++
 src/conf/domain_validate.h                           |   2 +
 src/conf/schemas/domaincommon.rng                    |   5 +
 src/driver-hypervisor.h                              |   7 +
 src/libvirt-domain.c                                 |  68 +++++++
 src/libvirt_private.syms                             |   1 +
 src/libvirt_public.syms                              |   5 +
 src/qemu/qemu_capabilities.c                         |   2 +
 src/qemu/qemu_capabilities.h                         |   1 +
 src/qemu/qemu_driver.c                               | 181 ++++++++++++++++++
 src/qemu/qemu_monitor.c                              |  25 +++
 src/qemu/qemu_monitor.h                              |  22 +++
 src/qemu/qemu_monitor_json.c                         | 107 +++++++++++
 src/qemu/qemu_monitor_json.h                         |   9 +
 src/qemu/qemu_process.c                              |  44 +++++
 src/remote/remote_driver.c                           |   1 +
 src/remote/remote_protocol.x                         |  17 +-
 src/remote_protocol-structs                          |   7 +
 .../qemucapabilitiesdata/caps_7.1.0_ppc64.xml        |   1 +
 .../caps_7.1.0_x86_64.xml                            |   1 +
 tests/qemucapabilitiesdata/caps_7.2.0_ppc.xml        |   1 +
 .../caps_7.2.0_x86_64+hvf.xml                        |   1 +
 .../caps_7.2.0_x86_64.xml                            |   1 +
 .../caps_8.0.0_riscv64.xml                           |   1 +
 .../caps_8.0.0_x86_64.xml                            |   1 +
 tests/qemucapabilitiesdata/caps_8.1.0_s390x.xml      |   1 +
 .../caps_8.1.0_x86_64.xml                            |   1 +
 tools/virsh-domain-monitor.c                         |   7 +
 tools/virsh-domain.c                                 | 109 +++++++++++
 34 files changed, 737 insertions(+), 4 deletions(-)

--
2.38.5

From: Hyman Huang(黄勇) <yong.huang@smartx.com>

The upper limit (MB/s) of the dirty page rate configured by the user can
be tracked in the XML. To allow this, add the following XML:

<domain>
  ...
  <vcpu current='2'>3</vcpu>
  <vcpus>
    <vcpu id='0' hotpluggable='no' dirty_limit='10' order='1'.../>
    <vcpu id='1' hotpluggable='yes' dirty_limit='10' order='2'.../>
  </vcpus>
  ...

The "dirty_limit" attribute of the "vcpu" sub-element within the "vcpus"
element allows setting an upper limit for the individual vCPU. The value
can be changed dynamically by the limit-dirty-page-rate API. Note that
the dirty limit feature is based on the dirty-ring feature, so it
requires a dirty-ring size configuration in the XML.

Signed-off-by: Hyman Huang(黄勇) <yong.huang@smartx.com>
---
 docs/formatdomain.rst             |  7 ++++++-
 src/conf/domain_conf.c            | 26 ++++++++++++++++++++++++
 src/conf/domain_conf.h            |  8 ++++++++
 src/conf/domain_validate.c        | 33 +++++++++++++++++++++++++++++++
 src/conf/schemas/domaincommon.rng |  5 +++++
 5 files changed, 78 insertions(+), 1 deletion(-)

diff --git a/docs/formatdomain.rst b/docs/formatdomain.rst
index cd9cb02bf8..7305ba38ea 100644
--- a/docs/formatdomain.rst
+++ b/docs/formatdomain.rst
@@ -649,7 +649,7 @@ CPU Allocation
    ...
    <vcpu placement='static' cpuset="1-4,^3,6" current="1">2</vcpu>
    <vcpus>
-     <vcpu id='0' enabled='yes' hotpluggable='no' order='1'/>
+     <vcpu id='0' enabled='yes' hotpluggable='no' order='1' dirty_limit='64'/>
      <vcpu id='1' enabled='no' hotpluggable='yes'/>
    </vcpus>
    ...
@@ -715,6 +715,11 @@ CPU Allocation
    be enabled and non-hotpluggable. On PPC64 along with it vCPUs that are in the
    same core need to be enabled as well. All non-hotpluggable CPUs present at
    boot need to be grouped after vCPU 0. :since:`Since 2.2.0 (QEMU only)`
+``dirty_limit`` :since:`Since 9.6.0 (QEMU and KVM only)`
+   The optional attribute ``dirty_limit`` allows setting an upper limit (MB/s)
+   of the dirty page rate for the vCPU. Users can change the upper limit value
+   dynamically by using the ``limit-dirty-page-rate`` API. Requires the
+   ``dirty-ring`` size to be configured.

 IOThreads Allocation
diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index 47693a49bf..0af6ddd358 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -17061,6 +17061,7 @@ virDomainVcpuParse(virDomainDef *def,
                    virDomainXMLOption *xmlopt)
 {
     int n;
+    int rv;
     xmlNodePtr vcpuNode;
     size_t i;
     unsigned int maxvcpus;
@@ -17148,6 +17149,13 @@ virDomainVcpuParse(virDomainDef *def,
             if (virXMLPropUInt(nodes[i], "order", 10, VIR_XML_PROP_NONE,
                                &vcpu->order) < 0)
                 return -1;
+
+            if ((rv = virXMLPropULongLong(nodes[i], "dirty_limit", 10, VIR_XML_PROP_NONNEGATIVE,
+                                          &vcpu->dirty_limit)) < 0) {
+                return -1;
+            } else if (rv > 0) {
+                vcpu->dirtyLimitSet = true;
+            }
         }
     } else {
         if (virDomainDefSetVcpus(def, vcpus) < 0)
@@ -21147,6 +21155,20 @@ virDomainDefVcpuCheckAbiStability(virDomainDef *src,
                            i);
             return false;
         }
+
+        if (svcpu->dirtyLimitSet != dvcpu->dirtyLimitSet) {
+            virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
+                           _("Dirty limit state of vCPU '%1$zu' differs between source and destination definitions"),
+                           i);
+            return false;
+        }
+
+        if (svcpu->dirty_limit != dvcpu->dirty_limit) {
+            virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
+                           _("Dirty limit of vCPU '%1$zu' differs between source and destination definitions"),
+                           i);
+            return false;
+        }
     }

     return true;
@@ -26712,6 +26734,10 @@ virDomainCpuDefFormat(virBuffer *buf,
         if (vcpu->order != 0)
             virBufferAsprintf(buf, " order='%d'", vcpu->order);

+        if (vcpu->dirtyLimitSet) {
+            virBufferAsprintf(buf, " dirty_limit='%llu'", vcpu->dirty_limit);
+        }
+
         virBufferAddLit(buf, "/>\n");
     }
     virBufferAdjustIndent(buf, -2);
diff --git a/src/conf/domain_conf.h b/src/conf/domain_conf.h
index c857ba556f..7e8bfcb884 100644
--- a/src/conf/domain_conf.h
+++ b/src/conf/domain_conf.h
@@ -2785,6 +2785,14 @@ struct _virDomainVcpuDef {
     virDomainThreadSchedParam sched;

     virObject *privateData;
+
+    /* set to true if the dirty page rate upper limit for
+     * the virtual CPU is configured */
+    bool dirtyLimitSet;
+
+    /* dirty page rate upper limit */
+    unsigned long long dirty_limit;
 };

 struct _virDomainBlkiotune {
diff --git a/src/conf/domain_validate.c b/src/conf/domain_validate.c
index ad383b604e..db3b7e1d9d 100644
--- a/src/conf/domain_validate.c
+++ b/src/conf/domain_validate.c
@@ -1798,6 +1798,36 @@ virDomainDefValidateIOThreads(const virDomainDef *def)
     return 0;
 }

+static int
+virDomainDefHasDirtyLimitStartupVcpus(const virDomainDef *def)
+{
+    size_t maxvcpus = virDomainDefGetVcpusMax(def);
+    virDomainVcpuDef *vcpu;
+    size_t i;
+
+    for (i = 0; i < maxvcpus; i++) {
+        vcpu = def->vcpus[i];
+
+        if (vcpu->dirtyLimitSet && (vcpu->dirty_limit != 0))
+            return true;
+    }
+
+    return false;
+}
+
+static int
+virDomainDefDirtyLimitValidate(const virDomainDef *def)
+{
+    if (virDomainDefHasDirtyLimitStartupVcpus(def)) {
+        if (def->kvm_features->dirty_ring_size == 0) {
+            virReportError(VIR_ERR_XML_ERROR, "%s",
+                           _("Dirty limit requires dirty-ring size configuration"));
+            return -1;
+        }
+    }
+
+    return 0;
+}

 static int
 virDomainDefValidateInternal(const virDomainDef *def,
@@ -1854,6 +1884,9 @@ virDomainDefValidateInternal(const virDomainDef *def,
     if (virDomainDefValidateIOThreads(def) < 0)
         return -1;

+    if (virDomainDefDirtyLimitValidate(def) < 0)
+        return -1;
+
     return 0;
 }

diff --git a/src/conf/schemas/domaincommon.rng b/src/conf/schemas/domaincommon.rng
index c2f56b0490..da0986b7c3 100644
--- a/src/conf/schemas/domaincommon.rng
+++ b/src/conf/schemas/domaincommon.rng
@@ -859,6 +859,11 @@
               <ref name="unsignedInt"/>
             </attribute>
           </optional>
+          <optional>
+            <attribute name="dirty_limit">
+              <ref name="unsignedLong"/>
+            </attribute>
+          </optional>
         </element>
       </zeroOrMore>
     </element>
--
2.38.5

From: Hyman Huang(黄勇) <yong.huang@smartx.com>

The dirty_limit attribute in XML requires setting up the upper limit of
the dirty page rate once after launching the VM, so add the
implementation.

Signed-off-by: Hyman Huang(黄勇) <yong.huang@smartx.com>
---
 src/qemu/qemu_process.c | 44 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 44 insertions(+)

diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 0644f80161..47763bbefc 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -6161,6 +6161,46 @@ qemuDomainHasHotpluggableStartupVcpus(virDomainDef *def)
 }


+static int
+qemuProcessSetupDirtyLimit(virDomainObj *vm,
+                           virDomainAsyncJob asyncJob)
+{
+    qemuDomainObjPrivate *priv = vm->privateData;
+    virDomainDef *def = vm->def;
+    int ret = -1;
+
+    /* Dirty limit capability is not present, skip the setup */
+    if (!virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_VCPU_DIRTY_LIMIT))
+        return 0;
+
+    if (virDomainDefHasDirtyLimitStartupVcpus(def)) {
+        size_t maxvcpus = virDomainDefGetVcpusMax(def);
+        virDomainVcpuDef *vcpu;
+        size_t i;
+
+        if (qemuDomainObjEnterMonitorAsync(vm, asyncJob) < 0)
+            return -1;
+
+        for (i = 0; i < maxvcpus; i++) {
+            vcpu = virDomainDefGetVcpu(def, i);
+
+            if (vcpu->dirtyLimitSet && (vcpu->dirty_limit != 0)) {
+                if ((ret = qemuMonitorSetVcpuDirtyLimit(priv->mon, i, vcpu->dirty_limit)) < 0) {
+                    virReportError(VIR_ERR_INTERNAL_ERROR,
+                                   _("Failed to set dirty page rate limit of vcpu[%1$zu]"), i);
+                    qemuDomainObjExitMonitor(vm);
+                    return ret;
+                }
+                VIR_DEBUG("Set vcpu[%zu] dirty page rate limit %lld", i, vcpu->dirty_limit);
+            }
+        }
+        qemuDomainObjExitMonitor(vm);
+    }
+
+    return 0;
+}
+
+
 static int
 qemuProcessVcpusSortOrder(const void *a,
                           const void *b)
@@ -7839,6 +7879,10 @@ qemuProcessLaunch(virConnectPtr conn,
     if (qemuProcessUpdateAndVerifyCPU(vm, asyncJob) < 0)
         goto cleanup;

+    VIR_DEBUG("Setting Dirty Limit for virtual CPUs");
+    if (qemuProcessSetupDirtyLimit(vm, asyncJob) < 0)
+        goto cleanup;
+
     VIR_DEBUG("Detecting IOThread PIDs");
     if (qemuProcessDetectIOThreadPIDs(vm, asyncJob) < 0)
         goto cleanup;
--
2.38.5

From: Hyman Huang(黄勇) <yong.huang@smartx.com>

Export virDomainDefHasDirtyLimitStartupVcpus as a util function, which
can be used in qemu_process.c in the next commit.

Signed-off-by: Hyman Huang(黄勇) <yong.huang@smartx.com>
---
 src/conf/domain_validate.c | 2 +-
 src/conf/domain_validate.h | 2 ++
 src/libvirt_private.syms   | 1 +
 3 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/src/conf/domain_validate.c b/src/conf/domain_validate.c
index db3b7e1d9d..7036fcae1d 100644
--- a/src/conf/domain_validate.c
+++ b/src/conf/domain_validate.c
@@ -1798,7 +1798,7 @@ virDomainDefValidateIOThreads(const virDomainDef *def)
     return 0;
 }

-static int
+int
 virDomainDefHasDirtyLimitStartupVcpus(const virDomainDef *def)
 {
     size_t maxvcpus = virDomainDefGetVcpusMax(def);
diff --git a/src/conf/domain_validate.h b/src/conf/domain_validate.h
index fc441cef5b..ccec3663cc 100644
--- a/src/conf/domain_validate.h
+++ b/src/conf/domain_validate.h
@@ -47,3 +47,5 @@ int virDomainDiskDefSourceLUNValidate(const virStorageSource *src);

 int virDomainDefOSValidate(const virDomainDef *def,
                            virDomainXMLOption *xmlopt);
+
+int virDomainDefHasDirtyLimitStartupVcpus(const virDomainDef *def);
diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms
index da60c965dd..984e9d6b3a 100644
--- a/src/libvirt_private.syms
+++ b/src/libvirt_private.syms
@@ -793,6 +793,7 @@ virDomainDefPostParse;

 # conf/domain_validate.h
 virDomainActualNetDefValidate;
+virDomainDefHasDirtyLimitStartupVcpus;
 virDomainDefOSValidate;
 virDomainDefValidate;
 virDomainDeviceValidateAliasForHotplug;
--
2.38.5

From: Hyman Huang(黄勇) <yong.huang@smartx.com>

Extend dirtylimit statistics for domGetStats to display the information
about the upper limit of the dirty page rate for virtual CPUs.

Signed-off-by: Hyman Huang(黄勇) <yong.huang@smartx.com>
---
 include/libvirt/libvirt-domain.h |  1 +
 src/libvirt-domain.c             |  9 ++++++
 src/qemu/qemu_driver.c           | 50 ++++++++++++++++++++++++++++++++
 3 files changed, 60 insertions(+)

diff --git a/include/libvirt/libvirt-domain.h b/include/libvirt/libvirt-domain.h
index 3d3c7cdcba..14fc5ff82e 100644
--- a/include/libvirt/libvirt-domain.h
+++ b/include/libvirt/libvirt-domain.h
@@ -2739,6 +2739,7 @@ typedef enum {
     VIR_DOMAIN_STATS_MEMORY = (1 << 8), /* return domain memory info (Since: 6.0.0) */
     VIR_DOMAIN_STATS_DIRTYRATE = (1 << 9), /* return domain dirty rate info (Since: 7.2.0) */
     VIR_DOMAIN_STATS_VM = (1 << 10), /* return vm info (Since: 8.9.0) */
+    VIR_DOMAIN_STATS_DIRTYLIMIT = (1 << 11), /* return domain dirty limit info (Since: 9.6.0) */
 } virDomainStatsTypes;

 /**
diff --git a/src/libvirt-domain.c b/src/libvirt-domain.c
index 9a60ac7f67..9117881703 100644
--- a/src/libvirt-domain.c
+++ b/src/libvirt-domain.c
@@ -12531,6 +12531,15 @@ virConnectGetDomainCapabilities(virConnectPtr conn,
  * naming or meaning will stay consistent. Changes to existing fields,
  * however, are expected to be rare.
  *
+ * VIR_DOMAIN_STATS_DIRTYLIMIT:
+ *     Return virtual CPU dirty limit information. The typed parameter keys
+ *     are in this format:
+ *
+ *     "dirtylimit.vcpu.<num>.limit" - The dirty page rate upper limit for
+ *                                     the virtual CPU, in MB/s.
+ *     "dirtylimit.vcpu.<num>.current" - The current dirty page rate for
+ *                                       the virtual CPU, in MB/s.
+ *
  * Note that entire stats groups or individual stat fields may be missing from
  * the output in case they are not supported by the given hypervisor, are not
  * applicable for the current state of the guest domain, or their retrieval
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 9779cd0579..cbeab252a4 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -17679,6 +17679,50 @@ qemuDomainGetStatsVm(virQEMUDriver *driver G_GNUC_UNUSED,
     return 0;
 }

+
+static int
+qemuDomainGetStatsDirtyLimitMon(virDomainObj *vm,
+                                qemuMonitorVcpuDirtyLimitInfo *info)
+{
+    qemuDomainObjPrivate *priv = vm->privateData;
+    int ret;
+
+    qemuDomainObjEnterMonitor(vm);
+    ret = qemuMonitorQueryVcpuDirtyLimit(priv->mon, info);
+    qemuDomainObjExitMonitor(vm);
+
+    return ret;
+}
+
+
+static int
+qemuDomainGetStatsDirtyLimit(virQEMUDriver *driver G_GNUC_UNUSED,
+                             virDomainObj *dom,
+                             virTypedParamList *params,
+                             unsigned int privflags)
+{
+    qemuMonitorVcpuDirtyLimitInfo info;
+    size_t i;
+
+    if (!HAVE_JOB(privflags) || !virDomainObjIsActive(dom))
+        return 0;
+
+    if (qemuDomainGetStatsDirtyLimitMon(dom, &info) < 0)
+        return -1;
+
+    for (i = 0; i < info.nvcpus; i++) {
+        virTypedParamListAddULLong(params, info.limits[i].limit,
+                                   "dirtylimit.vcpu.%d.limit",
+                                   info.limits[i].idx);
+        virTypedParamListAddULLong(params, info.limits[i].current,
+                                   "dirtylimit.vcpu.%d.current",
+                                   info.limits[i].idx);
+    }
+
+    return 0;
+}
+
+
 typedef int
 (*qemuDomainGetStatsFunc)(virQEMUDriver *driver,
                           virDomainObj *dom,
@@ -17703,6 +17747,11 @@ static virQEMUCapsFlags queryVmRequired[] = {
     QEMU_CAPS_LAST
 };

+static virQEMUCapsFlags queryDirtyLimitRequired[] = {
+    QEMU_CAPS_VCPU_DIRTY_LIMIT,
+    QEMU_CAPS_LAST
+};
+
 static struct qemuDomainGetStatsWorker qemuDomainGetStatsWorkers[] = {
     { qemuDomainGetStatsState, VIR_DOMAIN_STATS_STATE, false, NULL },
     { qemuDomainGetStatsCpu, VIR_DOMAIN_STATS_CPU_TOTAL, true, NULL },
@@ -17715,6 +17764,7 @@ static struct qemuDomainGetStatsWorker qemuDomainGetStatsWorkers[] = {
     { qemuDomainGetStatsMemory, VIR_DOMAIN_STATS_MEMORY, false, NULL },
     { qemuDomainGetStatsDirtyRate, VIR_DOMAIN_STATS_DIRTYRATE, true, queryDirtyRateRequired },
     { qemuDomainGetStatsVm, VIR_DOMAIN_STATS_VM, true, queryVmRequired },
+    { qemuDomainGetStatsDirtyLimit, VIR_DOMAIN_STATS_DIRTYLIMIT, true, queryDirtyLimitRequired },
     { NULL, 0, false, NULL }
 };
--
2.38.5
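For completeness, a rough sketch of fetching the new stats group through
the public API (error handling omitted; the domain name is illustrative):

    #include <stdio.h>
    #include <libvirt/libvirt.h>

    int main(void)
    {
        virConnectPtr conn = virConnectOpen("qemu:///system");
        virDomainPtr dom = virDomainLookupByName(conn, "c81_node1");
        virDomainPtr doms[] = { dom, NULL };
        virDomainStatsRecordPtr *records = NULL;
        int i;

        /* request only the dirtylimit.* group added by this patch */
        if (virDomainListGetStats(doms, VIR_DOMAIN_STATS_DIRTYLIMIT,
                                  &records, 0) > 0) {
            for (i = 0; i < records[0]->nparams; i++)
                printf("%s=%llu\n", records[0]->params[i].field,
                       records[0]->params[i].value.ul);
        }

        virDomainStatsRecordListFree(records);
        virDomainFree(dom);
        virConnectClose(conn);
        return 0;
    }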

From: Hyman Huang(黄勇) <yong.huang@smartx.com>

Introduce the command 'virsh domstats --dirtylimit' for reporting dirty
page rate upper limit information. The info is listed as follows:

Domain: 'vm'
  dirtylimit.vcpu.0.limit=10
  dirtylimit.vcpu.0.current=16
  dirtylimit.vcpu.1.limit=10
  dirtylimit.vcpu.1.current=0
  ...

Signed-off-by: Hyman Huang(黄勇) <yong.huang@smartx.com>
---
 docs/manpages/virsh.rst      | 12 ++++++++++--
 tools/virsh-domain-monitor.c |  7 +++++++
 2 files changed, 17 insertions(+), 2 deletions(-)

diff --git a/docs/manpages/virsh.rst b/docs/manpages/virsh.rst
index 59eecbcef0..ac619b7697 100644
--- a/docs/manpages/virsh.rst
+++ b/docs/manpages/virsh.rst
@@ -2317,7 +2317,7 @@ domstats
 domstats [--raw] [--enforce] [--backing] [--nowait] [--state]
    [--cpu-total] [--balloon] [--vcpu] [--interface] [--block]
    [--perf] [--iothread] [--memory] [--dirtyrate] [--vm]
-   [[--list-active] [--list-inactive]
+   [--dirtylimit] [[--list-active] [--list-inactive]
    [--list-persistent] [--list-transient] [--list-running]
    [--list-paused] [--list-shutoff] [--list-other]] | [domain ...]
@@ -2336,7 +2336,7 @@ The individual statistics groups are selectable via specific flags. By
 default all supported statistics groups are returned. Supported
 statistics groups flags are: *--state*, *--cpu-total*, *--balloon*,
 *--vcpu*, *--interface*, *--block*, *--perf*, *--iothread*, *--memory*,
-*--dirtyrate*, *--vm*.
+*--dirtyrate*, *--vm*, *--dirtylimit*.

 Note that - depending on the hypervisor type and version or the domain
 state - not all of the following statistics may be returned.
@@ -2579,6 +2579,14 @@ not available for statistical purposes.

 The *--vm* option enables reporting of hypervisor-specific statistics.
 Naming and meaning of the fields is entirely hypervisor dependent.
+
+*--dirtylimit* returns:
+
+* ``dirtylimit.vcpu.<num>.limit`` - the upper limit of dirty page rate for a
+  virtual CPU in MiB/s
+* ``dirtylimit.vcpu.<num>.current`` - the current dirty page rate for a
+  virtual CPU in MiB/s
+
 The statistics in this group have the following naming scheme:
 ``vm.$NAME.$TYPE``
diff --git a/tools/virsh-domain-monitor.c b/tools/virsh-domain-monitor.c
index 89fdc7a050..efa2609719 100644
--- a/tools/virsh-domain-monitor.c
+++ b/tools/virsh-domain-monitor.c
@@ -2063,6 +2063,10 @@ static const vshCmdOptDef opts_domstats[] = {
      .type = VSH_OT_BOOL,
      .help = N_("report hypervisor-specific statistics"),
     },
+    {.name = "dirtylimit",
+     .type = VSH_OT_BOOL,
+     .help = N_("report domain dirty page rate upper limit information"),
+    },
     {.name = "list-active",
      .type = VSH_OT_BOOL,
      .help = N_("list only active domains"),
@@ -2187,6 +2191,9 @@ cmdDomstats(vshControl *ctl, const vshCmd *cmd)
     if (vshCommandOptBool(cmd, "vm"))
         stats |= VIR_DOMAIN_STATS_VM;

+    if (vshCommandOptBool(cmd, "dirtylimit"))
+        stats |= VIR_DOMAIN_STATS_DIRTYLIMIT;
+
     if (vshCommandOptBool(cmd, "list-active"))
         flags |= VIR_CONNECT_GET_ALL_DOMAINS_STATS_ACTIVE;
--
2.38.5

From: Hyman Huang(黄勇) <yong.huang@smartx.com>

Introduce the virDomainSetVcpuDirtyLimit API to set or cancel the dirty
page rate upper limit. The API will throttle the virtual CPU as needed
to keep its dirty page rate within the limit set by @rate. Since it only
throttles the virtual CPU, which dirties memory, read processes in the
guest OS aren't penalized. This could, in some scenarios, be used to
provide quality of service for the memory workload of virtual CPUs.

Signed-off-by: Hyman Huang(黄勇) <yong.huang@smartx.com>
---
 include/libvirt/libvirt-domain.h |  4 +++
 src/driver-hypervisor.h          |  7 ++++
 src/libvirt-domain.c             | 59 ++++++++++++++++++++++++++++++++
 src/libvirt_public.syms          |  5 +++
 src/remote/remote_driver.c       |  1 +
 src/remote/remote_protocol.x     | 17 ++++++++-
 src/remote_protocol-structs      |  7 ++++
 7 files changed, 99 insertions(+), 1 deletion(-)

diff --git a/include/libvirt/libvirt-domain.h b/include/libvirt/libvirt-domain.h
index a1902546bb..3d3c7cdcba 100644
--- a/include/libvirt/libvirt-domain.h
+++ b/include/libvirt/libvirt-domain.h
@@ -6506,4 +6506,8 @@ int virDomainFDAssociate(virDomainPtr domain,
                          int *fds,
                          unsigned int flags);

+int virDomainSetVcpuDirtyLimit(virDomainPtr domain,
+                               int vcpu,
+                               unsigned long long rate,
+                               unsigned int flags);
 #endif /* LIBVIRT_DOMAIN_H */
diff --git a/src/driver-hypervisor.h b/src/driver-hypervisor.h
index 5219344b72..e61b9efca5 100644
--- a/src/driver-hypervisor.h
+++ b/src/driver-hypervisor.h
@@ -1448,6 +1448,12 @@ typedef int
                               int *fds,
                               unsigned int flags);

+typedef int
+(*virDrvDomainSetVcpuDirtyLimit)(virDomainPtr domain,
+                                 int vcpu,
+                                 unsigned long long rate,
+                                 unsigned int flags);
+
 typedef struct _virHypervisorDriver virHypervisorDriver;

 /**
@@ -1720,4 +1726,5 @@ struct _virHypervisorDriver {
     virDrvDomainGetMessages domainGetMessages;
     virDrvDomainStartDirtyRateCalc domainStartDirtyRateCalc;
     virDrvDomainFDAssociate domainFDAssociate;
+    virDrvDomainSetVcpuDirtyLimit domainSetVcpuDirtyLimit;
 };
diff --git a/src/libvirt-domain.c b/src/libvirt-domain.c
index ec42bb9a53..9a60ac7f67 100644
--- a/src/libvirt-domain.c
+++ b/src/libvirt-domain.c
@@ -14059,3 +14059,62 @@ virDomainFDAssociate(virDomainPtr domain,
     virDispatchError(conn);
     return -1;
 }
+
+
+/**
+ * virDomainSetVcpuDirtyLimit:
+ * @domain: pointer to domain object
+ * @vcpu: index of the limited virtual CPU
+ * @rate: upper limit of dirty page rate (mebibyte/s) for virtual CPUs
+ * @flags: bitwise-OR of virDomainModificationImpact
+ *
+ * Dynamically set the dirty page rate upper limit for the virtual CPUs.
+ *
+ * @vcpu may be a non-negative value or -1. If -1 is given, the change
+ * affects all virtual CPUs of the VM; otherwise it affects only the
+ * specified virtual CPU.
+ * @rate may be 0 to cancel the limit or a positive value to enable it.
+ * Hypervisors are free to round it down to the nearest mebibyte/s.
+ *
+ * The API will throttle the virtual CPU as needed to keep its dirty
+ * page rate within the limit set by @rate. Since it only throttles the
+ * virtual CPU, which dirties memory, read processes in the guest OS
+ * aren't penalized. This could, in some scenarios, be used to provide
+ * quality of service for the memory workload of virtual CPUs.
+ *
+ * Returns 0 in case of success, -1 in case of failure.
+ *
+ * Since: 9.6.0
+ */
+int
+virDomainSetVcpuDirtyLimit(virDomainPtr domain,
+                           int vcpu,
+                           unsigned long long rate,
+                           unsigned int flags)
+{
+    virConnectPtr conn;
+
+    VIR_DOMAIN_DEBUG(domain, "vcpu=%d, rate=%llu, flags=0x%x",
+                     vcpu, rate, flags);
+
+    virResetLastError();
+
+    virCheckDomainReturn(domain, -1);
+    conn = domain->conn;
+
+    virCheckReadOnlyGoto(conn->flags, error);
+
+    if (conn->driver->domainSetVcpuDirtyLimit) {
+        int ret;
+        ret = conn->driver->domainSetVcpuDirtyLimit(domain, vcpu, rate, flags);
+        if (ret < 0)
+            goto error;
+        return ret;
+    }
+
+    virReportUnsupportedError();
+
+ error:
+    virDispatchError(domain->conn);
+    return -1;
+}
diff --git a/src/libvirt_public.syms b/src/libvirt_public.syms
index 80742f268e..6fc01b518f 100644
--- a/src/libvirt_public.syms
+++ b/src/libvirt_public.syms
@@ -932,4 +932,9 @@ LIBVIRT_9.0.0 {
     virDomainFDAssociate;
 } LIBVIRT_8.5.0;

+LIBVIRT_9.6.0 {
+    global:
+        virDomainSetVcpuDirtyLimit;
+} LIBVIRT_9.0.0;
+
 # .... define new API here using predicted next version number ....
diff --git a/src/remote/remote_driver.c b/src/remote/remote_driver.c
index faad7292ed..9d7522d3bf 100644
--- a/src/remote/remote_driver.c
+++ b/src/remote/remote_driver.c
@@ -8119,6 +8119,7 @@ static virHypervisorDriver hypervisor_driver = {
     .domainStartDirtyRateCalc = remoteDomainStartDirtyRateCalc, /* 7.2.0 */
     .domainSetLaunchSecurityState = remoteDomainSetLaunchSecurityState, /* 8.0.0 */
     .domainFDAssociate = remoteDomainFDAssociate, /* 9.0.0 */
+    .domainSetVcpuDirtyLimit = remoteDomainSetVcpuDirtyLimit, /* 9.6.0 */
 };

 static virNetworkDriver network_driver = {
diff --git a/src/remote/remote_protocol.x b/src/remote/remote_protocol.x
index 5d86a51116..33bdad7865 100644
--- a/src/remote/remote_protocol.x
+++ b/src/remote/remote_protocol.x
@@ -3935,6 +3935,14 @@ struct remote_domain_fd_associate_args {
     remote_nonnull_string name;
     unsigned int flags;
 };
+
+struct remote_domain_set_vcpu_dirty_limit_args {
+    remote_nonnull_domain dom;
+    int vcpu;
+    unsigned hyper rate;
+    unsigned int flags;
+};
+
 /*----- Protocol. -----*/

 /* Define the program number, protocol version and procedure numbers here. */
@@ -6974,5 +6982,12 @@ enum remote_procedure {
      * @generate: none
      * @acl: domain:write
      */
-    REMOTE_PROC_DOMAIN_FD_ASSOCIATE = 443
+    REMOTE_PROC_DOMAIN_FD_ASSOCIATE = 443,
+
+    /**
+     * @generate: both
+     * @acl: domain:write
+     * @acl: domain:save:!VIR_DOMAIN_AFFECT_CONFIG|VIR_DOMAIN_AFFECT_LIVE
+     * @acl: domain:save:VIR_DOMAIN_AFFECT_CONFIG
+     */
+    REMOTE_PROC_DOMAIN_SET_VCPU_DIRTY_LIMIT = 444
 };
diff --git a/src/remote_protocol-structs b/src/remote_protocol-structs
index 3c6c230a16..f7543ec667 100644
--- a/src/remote_protocol-structs
+++ b/src/remote_protocol-structs
@@ -3273,6 +3273,12 @@ struct remote_domain_fd_associate_args {
     remote_nonnull_string name;
     u_int flags;
 };
+struct remote_domain_set_vcpu_dirty_limit_args {
+    remote_nonnull_domain dom;
+    int vcpu;
+    uint64_t rate;
+    u_int flags;
+};
 enum remote_procedure {
     REMOTE_PROC_CONNECT_OPEN = 1,
     REMOTE_PROC_CONNECT_CLOSE = 2,
@@ -3717,4 +3723,5 @@ enum remote_procedure {
     REMOTE_PROC_DOMAIN_RESTORE_PARAMS = 441,
     REMOTE_PROC_DOMAIN_ABORT_JOB_FLAGS = 442,
     REMOTE_PROC_DOMAIN_FD_ASSOCIATE = 443,
+    REMOTE_PROC_DOMAIN_SET_VCPU_DIRTY_LIMIT = 444,
 };
--
2.38.5
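The @flags argument follows the usual virDomainModificationImpact
semantics; for instance, a sketch that limits vCPU 2 of a hypothetical
domain handle `dom` in both the live state and the persistent config:

    if (virDomainSetVcpuDirtyLimit(dom, 2, 10,
                                   VIR_DOMAIN_AFFECT_LIVE |
                                   VIR_DOMAIN_AFFECT_CONFIG) < 0)
        /* report the libvirt error */;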

From: Hyman Huang(黄勇) <yong.huang@smartx.com>

set-vcpu-dirty-limit/cancel-vcpu-dirty-limit/query-vcpu-dirty-limit have
been available since QEMU 7.1.0. Introduce the corresponding capability.

Signed-off-by: Hyman Huang(黄勇) <yong.huang@smartx.com>
---
 src/qemu/qemu_capabilities.c                         | 2 ++
 src/qemu/qemu_capabilities.h                         | 1 +
 tests/qemucapabilitiesdata/caps_7.1.0_ppc64.xml      | 1 +
 tests/qemucapabilitiesdata/caps_7.1.0_x86_64.xml     | 1 +
 tests/qemucapabilitiesdata/caps_7.2.0_ppc.xml        | 1 +
 tests/qemucapabilitiesdata/caps_7.2.0_x86_64+hvf.xml | 1 +
 tests/qemucapabilitiesdata/caps_7.2.0_x86_64.xml     | 1 +
 tests/qemucapabilitiesdata/caps_8.0.0_riscv64.xml    | 1 +
 tests/qemucapabilitiesdata/caps_8.0.0_x86_64.xml     | 1 +
 tests/qemucapabilitiesdata/caps_8.1.0_s390x.xml      | 1 +
 tests/qemucapabilitiesdata/caps_8.1.0_x86_64.xml     | 1 +
 11 files changed, 12 insertions(+)

diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c
index f80bdb579d..6e0c095b55 100644
--- a/src/qemu/qemu_capabilities.c
+++ b/src/qemu/qemu_capabilities.c
@@ -697,6 +697,7 @@ VIR_ENUM_IMPL(virQEMUCaps,

               /* 450 */
               "run-with.async-teardown", /* QEMU_CAPS_RUN_WITH_ASYNC_TEARDOWN */
+              "set-vcpu-dirty-limit", /* QEMU_CAPS_VCPU_DIRTY_LIMIT */
     );
@@ -1221,6 +1222,7 @@ struct virQEMUCapsStringFlags virQEMUCapsCommands[] = {
     { "calc-dirty-rate", QEMU_CAPS_CALC_DIRTY_RATE },
     { "query-stats", QEMU_CAPS_QUERY_STATS },
     { "query-stats-schemas", QEMU_CAPS_QUERY_STATS_SCHEMAS },
+    { "set-vcpu-dirty-limit", QEMU_CAPS_VCPU_DIRTY_LIMIT },
 };

 struct virQEMUCapsStringFlags virQEMUCapsMigration[] = {
diff --git a/src/qemu/qemu_capabilities.h b/src/qemu/qemu_capabilities.h
index c72f73a161..9e96f548af 100644
--- a/src/qemu/qemu_capabilities.h
+++ b/src/qemu/qemu_capabilities.h
@@ -676,6 +676,7 @@ typedef enum { /* virQEMUCapsFlags grouping marker for syntax-check */

     /* 450 */
     QEMU_CAPS_RUN_WITH_ASYNC_TEARDOWN, /* asynchronous teardown -run-with async-teardown=on|off */
+    QEMU_CAPS_VCPU_DIRTY_LIMIT, /* 'set-vcpu-dirty-limit' QMP command present */

     QEMU_CAPS_LAST /* this must always be the last item */
 } virQEMUCapsFlags;
diff --git a/tests/qemucapabilitiesdata/caps_7.1.0_ppc64.xml b/tests/qemucapabilitiesdata/caps_7.1.0_ppc64.xml
index 3ff7a88cd2..f333df6599 100644
--- a/tests/qemucapabilitiesdata/caps_7.1.0_ppc64.xml
+++ b/tests/qemucapabilitiesdata/caps_7.1.0_ppc64.xml
@@ -158,6 +158,7 @@
   <flag name='virtio-crypto'/>
   <flag name='pvpanic-pci'/>
   <flag name='virtio-gpu.blob'/>
+  <flag name='set-vcpu-dirty-limit'/>
   <version>7001000</version>
   <microcodeVersion>42900244</microcodeVersion>
   <package>v7.1.0</package>
diff --git a/tests/qemucapabilitiesdata/caps_7.1.0_x86_64.xml b/tests/qemucapabilitiesdata/caps_7.1.0_x86_64.xml
index 4e2addd76b..20e10b3090 100644
--- a/tests/qemucapabilitiesdata/caps_7.1.0_x86_64.xml
+++ b/tests/qemucapabilitiesdata/caps_7.1.0_x86_64.xml
@@ -195,6 +195,7 @@
   <flag name='virtio-crypto'/>
   <flag name='pvpanic-pci'/>
   <flag name='virtio-gpu.blob'/>
+  <flag name='set-vcpu-dirty-limit'/>
   <version>7001000</version>
   <microcodeVersion>43100244</microcodeVersion>
   <package>v7.1.0</package>
diff --git a/tests/qemucapabilitiesdata/caps_7.2.0_ppc.xml b/tests/qemucapabilitiesdata/caps_7.2.0_ppc.xml
index 06f8c5801f..50e1d6c359 100644
--- a/tests/qemucapabilitiesdata/caps_7.2.0_ppc.xml
+++ b/tests/qemucapabilitiesdata/caps_7.2.0_ppc.xml
@@ -153,6 +153,7 @@
   <flag name='virtio-crypto'/>
   <flag name='pvpanic-pci'/>
   <flag name='virtio-gpu.blob'/>
+  <flag name='set-vcpu-dirty-limit'/>
   <version>7002000</version>
   <microcodeVersion>0</microcodeVersion>
   <package>qemu-7.2.0-6.fc37</package>
diff --git a/tests/qemucapabilitiesdata/caps_7.2.0_x86_64+hvf.xml b/tests/qemucapabilitiesdata/caps_7.2.0_x86_64+hvf.xml
index 0007a33dca..d804bb51e1 100644
--- a/tests/qemucapabilitiesdata/caps_7.2.0_x86_64+hvf.xml
+++ b/tests/qemucapabilitiesdata/caps_7.2.0_x86_64+hvf.xml
@@ -199,6 +199,7 @@
   <flag name='cryptodev-backend-lkcf'/>
   <flag name='pvpanic-pci'/>
   <flag name='virtio-gpu.blob'/>
+  <flag name='set-vcpu-dirty-limit'/>
   <version>7002000</version>
   <microcodeVersion>43100245</microcodeVersion>
   <package>v7.2.0</package>
diff --git a/tests/qemucapabilitiesdata/caps_7.2.0_x86_64.xml b/tests/qemucapabilitiesdata/caps_7.2.0_x86_64.xml
index e298cbd9b1..618e2e7778 100644
--- a/tests/qemucapabilitiesdata/caps_7.2.0_x86_64.xml
+++ b/tests/qemucapabilitiesdata/caps_7.2.0_x86_64.xml
@@ -199,6 +199,7 @@
   <flag name='cryptodev-backend-lkcf'/>
   <flag name='pvpanic-pci'/>
   <flag name='virtio-gpu.blob'/>
+  <flag name='set-vcpu-dirty-limit'/>
   <version>7002000</version>
   <microcodeVersion>43100245</microcodeVersion>
   <package>v7.2.0</package>
diff --git a/tests/qemucapabilitiesdata/caps_8.0.0_riscv64.xml b/tests/qemucapabilitiesdata/caps_8.0.0_riscv64.xml
index 987962ca41..0643fd8054 100644
--- a/tests/qemucapabilitiesdata/caps_8.0.0_riscv64.xml
+++ b/tests/qemucapabilitiesdata/caps_8.0.0_riscv64.xml
@@ -140,6 +140,7 @@
   <flag name='virtio-crypto'/>
   <flag name='pvpanic-pci'/>
   <flag name='virtio-gpu.blob'/>
+  <flag name='set-vcpu-dirty-limit'/>
   <version>7002050</version>
   <microcodeVersion>0</microcodeVersion>
   <package>v7.2.0-333-g222059a0fc</package>
diff --git a/tests/qemucapabilitiesdata/caps_8.0.0_x86_64.xml b/tests/qemucapabilitiesdata/caps_8.0.0_x86_64.xml
index c43c209328..1e0bc96f88 100644
--- a/tests/qemucapabilitiesdata/caps_8.0.0_x86_64.xml
+++ b/tests/qemucapabilitiesdata/caps_8.0.0_x86_64.xml
@@ -203,6 +203,7 @@
   <flag name='virtio-gpu.blob'/>
   <flag name='rbd-encryption-layering'/>
   <flag name='rbd-encryption-luks-any'/>
+  <flag name='set-vcpu-dirty-limit'/>
   <version>8000000</version>
   <microcodeVersion>43100244</microcodeVersion>
   <package>v8.0.0</package>
diff --git a/tests/qemucapabilitiesdata/caps_8.1.0_s390x.xml b/tests/qemucapabilitiesdata/caps_8.1.0_s390x.xml
index 35751ed441..6d5e6ee76f 100644
--- a/tests/qemucapabilitiesdata/caps_8.1.0_s390x.xml
+++ b/tests/qemucapabilitiesdata/caps_8.1.0_s390x.xml
@@ -114,6 +114,7 @@
   <flag name='rbd-encryption-layering'/>
   <flag name='rbd-encryption-luks-any'/>
   <flag name='run-with.async-teardown'/>
+  <flag name='set-vcpu-dirty-limit'/>
   <version>8000050</version>
   <microcodeVersion>39100245</microcodeVersion>
   <package>v8.0.0-1270-g1c12355b</package>
diff --git a/tests/qemucapabilitiesdata/caps_8.1.0_x86_64.xml b/tests/qemucapabilitiesdata/caps_8.1.0_x86_64.xml
index e656a2024a..ca8b5d056c 100644
--- a/tests/qemucapabilitiesdata/caps_8.1.0_x86_64.xml
+++ b/tests/qemucapabilitiesdata/caps_8.1.0_x86_64.xml
@@ -204,6 +204,7 @@
   <flag name='rbd-encryption-luks-any'/>
   <flag name='qcow2-discard-no-unref'/>
   <flag name='run-with.async-teardown'/>
+  <flag name='set-vcpu-dirty-limit'/>
   <version>8000050</version>
   <microcodeVersion>43100245</microcodeVersion>
   <package>v8.0.0-2835-g361d539735</package>
--
2.38.5

From: Hyman Huang(黄勇) <yong.huang@smartx.com>

Introduce the limit-dirty-page-rate virsh command to set or cancel the
dirty page rate upper limit for virtual CPUs. Usage is below:

$ virsh limit-dirty-page-rate <domain> --rate <number> \
      [--vcpu <number>]

Set the dirty page rate upper limit for the given vCPU specified by
"vcpu"; set it for all virtual CPUs if the vcpu option is not passed in.
Cancel the dirty page rate upper limit if the "rate" option is set to
zero. Note that the API requires a dirty-ring size to be configured.

Signed-off-by: Hyman Huang(黄勇) <yong.huang@smartx.com>
---
 docs/manpages/virsh.rst |  21 ++++++++
 tools/virsh-domain.c    | 109 ++++++++++++++++++++++++++++++++++++++++
 2 files changed, 130 insertions(+)

diff --git a/docs/manpages/virsh.rst b/docs/manpages/virsh.rst
index f4e5a0bd62..59eecbcef0 100644
--- a/docs/manpages/virsh.rst
+++ b/docs/manpages/virsh.rst
@@ -5273,6 +5273,27 @@ use to avoid keeping them open unnecessarily.
 Best-effort security label restore may be requested by using the
 *--seclabel-restore* flag.

+limit-dirty-page-rate
+---------------------
+
+**Syntax:**
+
+::
+
+   limit-dirty-page-rate <domain> --rate <number> [--vcpu <number>]
+
+Set or cancel a domain's dirty page rate upper limit for the given virtual
+CPU specified by ``vcpu``; set it for all virtual CPUs if ``vcpu`` is not
+specified; and cancel the domain's dirty page rate upper limit if ``rate``
+is set to zero.
+
+CPUs will be throttled as needed to keep their dirty page rate within the
+limit if the feature is enabled. This could, in some scenarios, be used to
+provide quality of service for the memory workload of virtual CPUs.
+
+
 NODEDEV COMMANDS
 ================
diff --git a/tools/virsh-domain.c b/tools/virsh-domain.c
index f8758f18a3..c0b0ef7472 100644
--- a/tools/virsh-domain.c
+++ b/tools/virsh-domain.c
@@ -13812,6 +13812,109 @@ cmdDomDirtyRateCalc(vshControl *ctl, const vshCmd *cmd)
     return true;
 }

+/*
+ * "limit-dirty-page-rate" command
+ */
+static const vshCmdInfo info_limit_dirty_page_rate[] = {
+    {.name = "help",
+     .data = N_("Set or cancel dirty page rate upper limit")
+    },
+    {.name = "desc",
+     .data = N_("Set or cancel dirty page rate upper limit, "
+                "require dirty-ring size configured")
+    },
+    {.name = NULL}
+};
+
+static const vshCmdOptDef opts_limit_dirty_page_rate[] = {
+    VIRSH_COMMON_OPT_DOMAIN_FULL(0),
+    {.name = "rate",
+     .type = VSH_OT_INT,
+     .flags = VSH_OFLAG_REQ,
+     .help = N_("Upper limit of dirty page rate (MB/s) for "
+                "virtual CPUs, use 0 to cancel")
+    },
+    {.name = "vcpu",
+     .type = VSH_OT_INT,
+     .help = N_("Index of a virtual CPU")
+    },
+    VIRSH_COMMON_OPT_DOMAIN_PERSISTENT,
+    VIRSH_COMMON_OPT_DOMAIN_CONFIG,
+    VIRSH_COMMON_OPT_DOMAIN_LIVE,
+    VIRSH_COMMON_OPT_DOMAIN_CURRENT,
+    {.name = NULL}
+};
+
+static bool
+cmdLimitDirtyPageRate(vshControl *ctl, const vshCmd *cmd)
+{
+    g_autoptr(virshDomain) dom = NULL;
+    int vcpu_idx = -1;
+    unsigned long long rate = 0;
+    unsigned int flags = VIR_DOMAIN_AFFECT_CURRENT;
+    bool vcpu = vshCommandOptBool(cmd, "vcpu");
+    bool current = vshCommandOptBool(cmd, "current");
+    bool config = vshCommandOptBool(cmd, "config");
+    bool live = vshCommandOptBool(cmd, "live");
+
+    VSH_EXCLUSIVE_OPTIONS_VAR(current, live);
+    VSH_EXCLUSIVE_OPTIONS_VAR(current, config);
+
+    if (config)
+        flags |= VIR_DOMAIN_AFFECT_CONFIG;
+    if (live)
+        flags |= VIR_DOMAIN_AFFECT_LIVE;
+
+    if (!(dom = virshCommandOptDomain(ctl, cmd, NULL)))
+        return false;
+
+    if (vshCommandOptULongLong(ctl, cmd, "rate", &rate) < 0)
+        return false;
+
+    if (vcpu) {
+        if (vshCommandOptInt(ctl, cmd, "vcpu", &vcpu_idx) < 0)
+            return false;
+
+        if (vcpu_idx < 0) {
+            vshError(ctl, "%s", _("Invalid vcpu index, using --vcpu "
+                                  "to specify cpu index"));
+            return false;
+        }
+    }
+
+    if (vcpu) {
+        /* Set the dirty page rate upper limit for the specified
+         * virtual CPU in the given VM; cancel it if rate is set
+         * to zero.
+         */
+        if (virDomainSetVcpuDirtyLimit(dom, vcpu_idx,
+                                       rate, flags) < 0)
+            return false;
+        if (rate == 0)
+            vshPrintExtra(ctl, _("Cancel vcpu[%1$d] dirty page rate upper "
+                                 "limit successfully\n"),
+                          vcpu_idx);
+        else
+            vshPrintExtra(ctl, _("Set vcpu[%1$d] dirty page rate upper "
+                                 "limit %2$lld(MB/s) successfully\n"),
+                          vcpu_idx, rate);
+    } else {
+        /* Set all dirty page rate upper limits for virtual CPUs in
+         * the given VM; cancel it if the rate is set to zero.
+         */
+        if (virDomainSetVcpuDirtyLimit(dom, -1, rate, flags) < 0)
+            return false;
+        if (rate == 0)
+            vshPrintExtra(ctl, "%s", _("Cancel dirty page rate limit for "
+                                       "all virtual CPUs successfully\n"));
+        else
+            vshPrintExtra(ctl, _("Set dirty page rate limit %1$lld(MB/s) "
+                                 "for all virtual CPUs successfully\n"),
+                          rate);
+    }
+
+    return true;
+}

 const vshCmdDef domManagementCmds[] = {
     {.name = "attach-device",
@@ -14476,5 +14579,11 @@ const vshCmdDef domManagementCmds[] = {
      .info = info_dom_fd_associate,
      .flags = 0
     },
+    {.name = "limit-dirty-page-rate",
+     .handler = cmdLimitDirtyPageRate,
+     .opts = opts_limit_dirty_page_rate,
+     .info = info_limit_dirty_page_rate,
+     .flags = 0
+    },
     {.name = NULL}
 };
--
2.38.5
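Assuming a running domain named c81_node1, a per-vCPU invocation with
persistence might look like this (the output is the message string added
above):

    # virsh limit-dirty-page-rate c81_node1 --rate 10 --vcpu 2 --live --config
    Set vcpu[2] dirty page rate upper limit 10(MB/s) successfully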

From: Hyman Huang(黄勇) <yong.huang@smartx.com>

Implement qemuMonitorQueryVcpuDirtyLimit, which queries vCPU dirty limit
info by calling the QMP command 'query-vcpu-dirty-limit'.

Signed-off-by: Hyman Huang(黄勇) <yong.huang@smartx.com>
---
 src/qemu/qemu_monitor.c      | 12 +++++++
 src/qemu/qemu_monitor.h      | 17 ++++++++++
 src/qemu/qemu_monitor_json.c | 64 ++++++++++++++++++++++++++++++++++++
 src/qemu/qemu_monitor_json.h |  4 +++
 4 files changed, 97 insertions(+)

diff --git a/src/qemu/qemu_monitor.c b/src/qemu/qemu_monitor.c
index 5756b4ff50..14a70404ec 100644
--- a/src/qemu/qemu_monitor.c
+++ b/src/qemu/qemu_monitor.c
@@ -4514,3 +4514,15 @@ qemuMonitorSetVcpuDirtyLimit(qemuMonitor *mon,

     return qemuMonitorJSONSetVcpuDirtyLimit(mon, vcpu, rate);
 }
+
+
+int
+qemuMonitorQueryVcpuDirtyLimit(qemuMonitor *mon,
+                               qemuMonitorVcpuDirtyLimitInfo *info)
+{
+    VIR_DEBUG("info=%p", info);
+
+    QEMU_CHECK_MONITOR(mon);
+
+    return qemuMonitorJSONQueryVcpuDirtyLimit(mon, info);
+}
diff --git a/src/qemu/qemu_monitor.h b/src/qemu/qemu_monitor.h
index 07a05365cf..1828bb202a 100644
--- a/src/qemu/qemu_monitor.h
+++ b/src/qemu/qemu_monitor.h
@@ -1584,3 +1584,20 @@ int
 qemuMonitorSetVcpuDirtyLimit(qemuMonitor *mon,
                              int vcpu,
                              unsigned long long rate);
+
+typedef struct _qemuMonitorVcpuDirtyLimit qemuMonitorVcpuDirtyLimit;
+struct _qemuMonitorVcpuDirtyLimit {
+    int idx; /* virtual cpu index */
+    unsigned long long limit; /* virtual cpu dirty page rate limit in MB/s */
+    unsigned long long current; /* virtual cpu dirty page rate in MB/s */
+};
+
+typedef struct _qemuMonitorVcpuDirtyLimitInfo qemuMonitorVcpuDirtyLimitInfo;
+struct _qemuMonitorVcpuDirtyLimitInfo {
+    size_t nvcpus; /* number of virtual cpus */
+    qemuMonitorVcpuDirtyLimit *limits; /* array of dirty page rate limits */
+};
+
+int
+qemuMonitorQueryVcpuDirtyLimit(qemuMonitor *mon,
+                               qemuMonitorVcpuDirtyLimitInfo *info);
diff --git a/src/qemu/qemu_monitor_json.c b/src/qemu/qemu_monitor_json.c
index 6200cf097d..2eff813de1 100644
--- a/src/qemu/qemu_monitor_json.c
+++ b/src/qemu/qemu_monitor_json.c
@@ -8929,3 +8929,67 @@ qemuMonitorJSONSetVcpuDirtyLimit(qemuMonitor *mon,

     return 0;
 }
+
+static int
+qemuMonitorJSONExtractVcpuDirtyLimitInfo(virJSONValue *data,
+                                         qemuMonitorVcpuDirtyLimitInfo *info)
+{
+    size_t nvcpus;
+    size_t i;
+
+    nvcpus = virJSONValueArraySize(data);
+    info->nvcpus = nvcpus;
+    info->limits = g_new0(qemuMonitorVcpuDirtyLimit, nvcpus);
+
+    for (i = 0; i < nvcpus; i++) {
+        virJSONValue *entry = virJSONValueArrayGet(data, i);
+        if (virJSONValueObjectGetNumberInt(entry, "cpu-index",
+                                           &info->limits[i].idx) < 0) {
+            virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+                           _("query-vcpu-dirty-limit reply was missing 'cpu-index' data"));
+            return -1;
+        }
+
+        if (virJSONValueObjectGetNumberUlong(entry, "limit-rate",
+                                             &info->limits[i].limit) < 0) {
+            virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+                           _("query-vcpu-dirty-limit reply was missing 'limit-rate' data"));
+            return -1;
+        }
+
+        if (virJSONValueObjectGetNumberUlong(entry, "current-rate",
+                                             &info->limits[i].current) < 0) {
+            virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+                           _("query-vcpu-dirty-limit reply was missing 'current-rate' data"));
+            return -1;
+        }
+    }
+
+    return 0;
+}
+
+int
+qemuMonitorJSONQueryVcpuDirtyLimit(qemuMonitor *mon,
+                                   qemuMonitorVcpuDirtyLimitInfo *info)
+{
+    g_autoptr(virJSONValue) cmd = NULL;
+    g_autoptr(virJSONValue) reply = NULL;
+    virJSONValue *data = NULL;
+
+    if (!(cmd = qemuMonitorJSONMakeCommand("query-vcpu-dirty-limit", NULL)))
+        return -1;
+
+    if (qemuMonitorJSONCommand(mon, cmd, &reply) < 0)
+        return -1;
+
+    if (qemuMonitorJSONCheckError(cmd, reply) < 0)
+        return -1;
+
+    if (!(data = virJSONValueObjectGetArray(reply, "return"))) {
+        virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+                       _("query-vcpu-dirty-limit reply was missing 'return' data"));
+        return -1;
+    }
+
+    return qemuMonitorJSONExtractVcpuDirtyLimitInfo(data, info);
+}
diff --git a/src/qemu/qemu_monitor_json.h b/src/qemu/qemu_monitor_json.h
index 89f61b3052..bd8131508b 100644
--- a/src/qemu/qemu_monitor_json.h
+++ b/src/qemu/qemu_monitor_json.h
@@ -830,3 +830,7 @@ int
 qemuMonitorJSONSetVcpuDirtyLimit(qemuMonitor *mon,
                                  int vcpu,
                                  unsigned long long rate);
+
+int
+qemuMonitorJSONQueryVcpuDirtyLimit(qemuMonitor *mon,
+                                   qemuMonitorVcpuDirtyLimitInfo *info);
--
2.38.5
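For reference, the QMP exchange this wraps should look roughly like the
following, matching the fields parsed above (values illustrative):

    -> { "execute": "query-vcpu-dirty-limit" }
    <- { "return": [
           { "cpu-index": 0, "limit-rate": 10, "current-rate": 3 },
           { "cpu-index": 1, "limit-rate": 10, "current-rate": 0 } ] }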

From: Hyman Huang(黄勇) <yong.huang@smartx.com>

Implement qemuDomainSetVcpuDirtyLimit, which can be used to set or
cancel the upper limit of the dirty page rate for virtual CPUs.

Signed-off-by: Hyman Huang(黄勇) <yong.huang@smartx.com>
---
 src/qemu/qemu_driver.c       | 131 +++++++++++++++++++++++++++++++++++
 src/qemu/qemu_monitor.c      |  13 ++++
 src/qemu/qemu_monitor.h      |   5 ++
 src/qemu/qemu_monitor_json.c |  43 ++++++++++++
 src/qemu/qemu_monitor_json.h |   5 ++
 5 files changed, 197 insertions(+)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index f8039160f4..9779cd0579 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -19906,6 +19906,136 @@ qemuDomainFDAssociate(virDomainPtr domain,
     return ret;
 }

+static void
+qemuDomainSetDirtyLimit(virDomainVcpuDef *vcpu,
+                        unsigned long long rate)
+{
+    if (rate > 0) {
+        vcpu->dirtyLimitSet = true;
+        vcpu->dirty_limit = rate;
+    } else {
+        vcpu->dirtyLimitSet = false;
+        vcpu->dirty_limit = 0;
+    }
+}
+
+static void
+qemuDomainSetVcpuDirtyLimitConfig(virDomainDef *def,
+                                  int vcpu,
+                                  unsigned long long rate)
+{
+    def->individualvcpus = true;
+
+    if (vcpu == -1) {
+        size_t maxvcpus = virDomainDefGetVcpusMax(def);
+        size_t i;
+        for (i = 0; i < maxvcpus; i++) {
+            qemuDomainSetDirtyLimit(virDomainDefGetVcpu(def, i), rate);
+        }
+    } else {
+        qemuDomainSetDirtyLimit(virDomainDefGetVcpu(def, vcpu), rate);
+    }
+}
+
+static int
+qemuDomainSetVcpuDirtyLimitInternal(virQEMUDriver *driver,
+                                    virDomainObj *vm,
+                                    virDomainDef *def,
+                                    virDomainDef *persistentDef,
+                                    int vcpu,
+                                    unsigned long long rate)
+{
+    g_autoptr(virQEMUDriverConfig) cfg = virQEMUDriverGetConfig(driver);
+    qemuDomainObjPrivate *priv = vm->privateData;
+
+    VIR_DEBUG("vcpu %d, rate %llu", vcpu, rate);
+    if (def) {
+        qemuDomainObjEnterMonitor(vm);
+        if (qemuMonitorSetVcpuDirtyLimit(priv->mon, vcpu, rate) < 0) {
+            virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+                           _("Failed to set dirty page rate limit"));
+            qemuDomainObjExitMonitor(vm);
+            return -1;
+        }
+        qemuDomainObjExitMonitor(vm);
+        qemuDomainSetVcpuDirtyLimitConfig(def, vcpu, rate);
+    }
+
+    if (persistentDef) {
+        qemuDomainSetVcpuDirtyLimitConfig(persistentDef, vcpu, rate);
+        if (virDomainDefSave(persistentDef, driver->xmlopt, cfg->configDir) < 0)
+            return -1;
+    }
+
+    return 0;
+}
+
+static int
+qemuDomainSetVcpuDirtyLimit(virDomainPtr domain,
+                            int vcpu,
+                            unsigned long long rate,
+                            unsigned int flags)
+{
+    virQEMUDriver *driver = domain->conn->privateData;
+    virDomainObj *vm = NULL;
+    qemuDomainObjPrivate *priv;
+    virDomainDef *def = NULL;
+    virDomainDef *persistentDef = NULL;
+    int ret = -1;
+
+    virCheckFlags(VIR_DOMAIN_AFFECT_LIVE |
+                  VIR_DOMAIN_AFFECT_CONFIG, -1);
+
+    if (!(vm = qemuDomainObjFromDomain(domain)))
+        return -1;
+
+    if (virDomainSetVcpuDirtyLimitEnsureACL(domain->conn, vm->def, flags) < 0)
+        goto cleanup;
+
+    if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
+        goto cleanup;
+
+    if (virDomainObjGetDefs(vm, flags, &def, &persistentDef) < 0)
+        goto endjob;
+
+    if (persistentDef) {
+        if (vcpu >= 0 && vcpu >= (int)virDomainDefGetVcpusMax(persistentDef)) {
+            virReportError(VIR_ERR_INVALID_ARG,
+                           _("vcpu %1$d is not present in persistent config"),
+                           vcpu);
+            goto endjob;
+        }
+    }
+
+    if (def) {
+        if (virDomainObjCheckActive(vm) < 0)
+            goto endjob;
+
+        priv = vm->privateData;
+        if (!virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_VCPU_DIRTY_LIMIT)) {
+            virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s",
+                           _("QEMU does not support setting dirty page rate limit"));
+            goto endjob;
+        }
+
+        if (vcpu >= 0 && vcpu >= (int)virDomainDefGetVcpusMax(def)) {
+            virReportError(VIR_ERR_INVALID_ARG,
+                           _("vcpu %1$d is not present in live config"),
+                           vcpu);
+            goto endjob;
+        }
+    }
+
+    ret = qemuDomainSetVcpuDirtyLimitInternal(driver, vm, def, persistentDef,
+                                              vcpu, rate);
+
+ endjob:
+    virDomainObjEndJob(vm);
+
+ cleanup:
+    virDomainObjEndAPI(&vm);
+    return ret;
+}

 static virHypervisorDriver qemuHypervisorDriver = {
     .name = QEMU_DRIVER_NAME,
@@ -20156,6 +20286,7 @@ static virHypervisorDriver qemuHypervisorDriver = {
     .domainStartDirtyRateCalc = qemuDomainStartDirtyRateCalc, /* 7.2.0 */
     .domainSetLaunchSecurityState = qemuDomainSetLaunchSecurityState, /* 8.0.0 */
     .domainFDAssociate = qemuDomainFDAssociate, /* 9.0.0 */
+    .domainSetVcpuDirtyLimit = qemuDomainSetVcpuDirtyLimit, /* 9.6.0 */
 };

diff --git a/src/qemu/qemu_monitor.c b/src/qemu/qemu_monitor.c
index 02da1d6dfc..5756b4ff50 100644
--- a/src/qemu/qemu_monitor.c
+++ b/src/qemu/qemu_monitor.c
@@ -4501,3 +4501,16 @@ qemuMonitorGetStatsByQOMPath(virJSONValue *arr,

     return NULL;
 }
+
+
+int
+qemuMonitorSetVcpuDirtyLimit(qemuMonitor *mon,
+                             int vcpu,
+                             unsigned long long rate)
+{
+    VIR_DEBUG("set vcpu %d dirty page rate limit %llu", vcpu, rate);
+
+    QEMU_CHECK_MONITOR(mon);
+
+    return qemuMonitorJSONSetVcpuDirtyLimit(mon, vcpu, rate);
+}
diff --git a/src/qemu/qemu_monitor.h b/src/qemu/qemu_monitor.h
index 6c590933aa..07a05365cf 100644
--- a/src/qemu/qemu_monitor.h
+++ b/src/qemu/qemu_monitor.h
@@ -1579,3 +1579,8 @@ qemuMonitorExtractQueryStats(virJSONValue *info);
 virJSONValue *
 qemuMonitorGetStatsByQOMPath(virJSONValue *arr,
                              char *qom_path);
+
+int
+qemuMonitorSetVcpuDirtyLimit(qemuMonitor *mon,
+                             int vcpu,
+                             unsigned long long rate);
diff --git a/src/qemu/qemu_monitor_json.c b/src/qemu/qemu_monitor_json.c
index 34c4b543e8..6200cf097d 100644
--- a/src/qemu/qemu_monitor_json.c
+++ b/src/qemu/qemu_monitor_json.c
@@ -8886,3 +8886,46 @@ qemuMonitorJSONQueryStats(qemuMonitor *mon,

     return virJSONValueObjectStealArray(reply, "return");
 }
+
+/**
+ * qemuMonitorJSONSetVcpuDirtyLimit:
+ * @mon: monitor object
+ * @vcpu: virtual cpu index to be set, -1 affects all virtual CPUs
+ * @rate: dirty page rate upper limit to be set, use 0 to disable
+ *        and a positive value to enable
+ *
+ * Returns -1 on failure.
+ */
+int
+qemuMonitorJSONSetVcpuDirtyLimit(qemuMonitor *mon,
+                                 int vcpu,
+                                 unsigned long long rate)
+{
+    g_autoptr(virJSONValue) cmd = NULL;
+    g_autoptr(virJSONValue) reply = NULL;
+
+    if (rate != 0) {
+        /* set the vcpu dirty page rate limit */
+        if (!(cmd = qemuMonitorJSONMakeCommand("set-vcpu-dirty-limit",
+                                               "k:cpu-index", vcpu,
+                                               "U:dirty-rate", rate,
+                                               NULL))) {
+            return -1;
+        }
+    } else {
+        /* cancel the vcpu dirty page rate limit */
+        if (!(cmd = qemuMonitorJSONMakeCommand("cancel-vcpu-dirty-limit",
+                                               "k:cpu-index", vcpu,
+                                               NULL))) {
+            return -1;
+        }
+    }
+
+    if (qemuMonitorJSONCommand(mon, cmd, &reply) < 0)
+        return -1;
+
+    if (qemuMonitorJSONCheckError(cmd, reply) < 0)
+        return -1;
+
+    return 0;
+}
diff --git a/src/qemu/qemu_monitor_json.h b/src/qemu/qemu_monitor_json.h
index 06023b98ea..89f61b3052 100644
--- a/src/qemu/qemu_monitor_json.h
+++ b/src/qemu/qemu_monitor_json.h
@@ -825,3 +825,8 @@ qemuMonitorJSONQueryStats(qemuMonitor *mon,
                           qemuMonitorQueryStatsTargetType target,
                           char **vcpus,
                           GPtrArray *providers);
+
+int
+qemuMonitorJSONSetVcpuDirtyLimit(qemuMonitor *mon,
+                                 int vcpu,
+                                 unsigned long long rate);
--
2.38.5
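For reference, the two QMP commands issued above take the following rough
shape ("cpu-index" is omitted when @vcpu is -1, thanks to the 'k:'
optional-argument modifier; values illustrative):

    -> { "execute": "set-vcpu-dirty-limit",
         "arguments": { "cpu-index": 0, "dirty-rate": 10 } }
    <- { "return": {} }

    -> { "execute": "cancel-vcpu-dirty-limit",
         "arguments": { "cpu-index": 0 } }
    <- { "return": {} }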

Ping.

On Mon, Aug 7, 2023 at 11:56 PM ~hyman <hyman@git.sr.ht> wrote:

--
Best regards

Ping1 On Tue, Aug 15, 2023 at 9:48 AM Yong Huang <yong.huang@smartx.com> wrote:
Ping.
On Mon, Aug 7, 2023 at 11:56 PM ~hyman <hyman@git.sr.ht> wrote:
[...]

Ping2, I'm hoping for comments about the series.

Thanks,
Yong

On Sun, Aug 27, 2023 at 11:11 AM Yong Huang <yong.huang@smartx.com> wrote:
Ping1
On Tue, Aug 15, 2023 at 9:48 AM Yong Huang <yong.huang@smartx.com> wrote:
Ping.
On Mon, Aug 7, 2023 at 11:56 PM ~hyman <hyman@git.sr.ht> wrote:
[...]

Sorry for not looking into this earlier, but it's been quite a while and I, personally, received only patches 2, 6, 5, 9, and 10 from this series. I, however, see the rest in the archive, so the issue is probably somewhere on my part.

Would you mind resending the second version again, ideally rebased?

Thanks,
Martin

On Mon, Sep 04, 2023 at 09:32:08PM +0800, Yong Huang wrote:
Ping2, I'm hoping for comments about the series.
Thanks, Yong
On Sun, Aug 27, 2023 at 11:11 AM Yong Huang <yong.huang@smartx.com> wrote:
Ping1
On Tue, Aug 15, 2023 at 9:48 AM Yong Huang <yong.huang@smartx.com> wrote:
Ping.
On Mon, Aug 7, 2023 at 11:56 PM ~hyman <hyman@git.sr.ht> wrote:
[...]

On Tue, Sep 5, 2023 at 6:22 PM Martin Kletzander <mkletzan@redhat.com> wrote:
Sorry for not looking into this earlier, but it's been quite a while and I, personally, received only patches 2, 6, 5, 9, and 10 from this series. I, however, see the rest in the archive, so the issue is probably somewhere on my part.
Would you mind resending the second version again, ideally rebased?
Sure, yes. I'll rebase onto master and resend in the near future.

Thanks,
Yong
Thanks, Martin
On Mon, Sep 04, 2023 at 09:32:08PM +0800, Yong Huang wrote:
Ping2, I'm hoping for comments about the series.
Thanks, Yong
On Sun, Aug 27, 2023 at 11:11 AM Yong Huang <yong.huang@smartx.com> wrote:
Ping1
On Tue, Aug 15, 2023 at 9:48 AM Yong Huang <yong.huang@smartx.com> wrote:
Ping.
On Mon, Aug 7, 2023 at 11:56 PM ~hyman <hyman@git.sr.ht> wrote:
[...]
participants (3)
- Martin Kletzander
- Yong Huang
- ~hyman