Thanks, Daniel, for the comments!
On Wed, Sep 6, 2023 at 4:48 PM Daniel P. Berrangé <berrange(a)redhat.com> wrote:

> On Tue, Aug 01, 2023 at 05:31:12PM +0800, ~hyman wrote:
> > From: Hyman Huang(黄勇) <yong.huang(a)smartx.com>
> >
> > The upper limit (megabyte/s) of the dirty page rate configured
> > by the user can be tracked by the XML. To allow this, add the
> > following XML:
> >
> >   <domain>
> >     ...
> >     <vcpu current='2'>3</vcpu>
> >     <vcpus>
> >       <vcpu id='0' hotpluggable='no' dirty_limit='10' order='1'.../>
> >       <vcpu id='1' hotpluggable='yes' dirty_limit='10' order='2'.../>
> >     </vcpus>
> >     ...
> >
> > The "dirty_limit" attribute in "vcpu" sub-element within "vcpus"
> > element allows to set an upper limit for the individual vCPU. The
> > value can be set dynamically by limit-dirty-page-rate API.
> >
> > Note that the dirty limit feature is based on the dirty-ring
> > feature, so it requires dirty-ring size configuration in XML.
> >
> > Signed-off-by: Hyman Huang(黄勇) <yong.huang(a)smartx.com>
> > ---
> >  docs/formatdomain.rst             |  7 ++++++-
> >  src/conf/domain_conf.c            | 26 ++++++++++++++++++++++++
> >  src/conf/domain_conf.h            |  8 ++++++++
> >  src/conf/domain_validate.c        | 33 +++++++++++++++++++++++++++++++
> >  src/conf/schemas/domaincommon.rng |  5 +++++
> >  5 files changed, 78 insertions(+), 1 deletion(-)
> >
> > diff --git a/docs/formatdomain.rst b/docs/formatdomain.rst
> > index bc469e5f9f..337b7ec9cc 100644
> > --- a/docs/formatdomain.rst
> > +++ b/docs/formatdomain.rst
> > @@ -715,6 +715,11 @@ CPU Allocation
> >     be enabled and non-hotpluggable. On PPC64 along with it vCPUs that are in the
> >     same core need to be enabled as well. All non-hotpluggable CPUs present at
> >     boot need to be grouped after vCPU 0. :since:`Since 2.2.0 (QEMU only)`
> > +  ``dirty_limit`` :since:`Since 9.7.0 (QEMU and KVM only)`
> > +    The optional attribute ``dirty_limit`` allows to set an upper limit (MB/s)
> > +    of the dirty page rate for the vCPU. User can change the upper limit value
> > +    dynamically by using ``limit-dirty-page-rate`` API. Require ``dirty-ring``
> > +    size configured.
>
> What scenarios would you want to apply such a limit ?
To be completely honest, I haven't given the scenarios much thought. Up to
now, this mechanism has been used by migration to throttle the guest within
QEMU.
The most likely situation, in my opinion, is that some virtual machines on a
host are using up all of the memory bandwidth, either intentionally or
accidentally, while the other virtual machines on the host suffer degraded
performance when writing to memory.
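
For instance, an admin could cap such a guest with XML along these lines (a
minimal sketch: the 4096-entry dirty-ring size and the 10 MB/s limits are
illustrative values, not recommendations; the <dirty-ring> element under
<features>/<kvm> is the existing libvirt knob that this patch's "dirty_limit"
attribute depends on):

  <domain>
    ...
    <vcpu current='2'>3</vcpu>
    <vcpus>
      <vcpu id='0' hotpluggable='no' dirty_limit='10' order='1'/>
      <vcpu id='1' hotpluggable='yes' dirty_limit='10' order='2'/>
    </vcpus>
    <features>
      <kvm>
        <dirty-ring state='on' size='4096'/>
      </kvm>
    </features>
    ...
  </domain>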
> Is there guidance for admins on sensible values to use when
> setting this limit ?
Currently, the "calc-dirty-rate" API can be used by upper-level apps to peek
at the dirty page rate of each vCPU. If more guidance on choosing values is
required, we can add it to the documentation.
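
As a sketch of that flow over QMP (the 1-second calc-time is arbitrary, and
"dirty-ring" mode is what produces per-vCPU results):

  { "execute": "calc-dirty-rate",
    "arguments": { "calc-time": 1, "mode": "dirty-ring" } }

  ... wait for calc-time to elapse, then fetch the measurement:

  { "execute": "query-dirty-rate" }

The "vcpu-dirty-rate" list in the reply gives a per-vCPU rate in MB/s that an
app could use as a baseline when picking a limit.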
> What is the impact on the guest if it hits the limit ?
Guest memory writes would take longer than usual to complete.
> Is the single vCPU blocked for the remainder of some
> timeslice, or are all vCPUs blocked ?
Single vCPU.
> Does it even make sense to control this with different
> values per-VCPU as opposed to a single value for the VM
> as a whole ?
Since this is open to discussion, I'm inclined to offer a per-vCPU API and
let the upper-layer app make the final decision.
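
To illustrate, QEMU's underlying set-vcpu-dirty-limit QMP command already
treats cpu-index as optional, so a per-vCPU interface can express the
VM-wide case as well (a sketch; the 10 MB/s value is arbitrary):

  { "execute": "set-vcpu-dirty-limit",
    "arguments": { "cpu-index": 0, "dirty-rate": 10 } }

  { "execute": "set-vcpu-dirty-limit",
    "arguments": { "dirty-rate": 10 } }

The second form, with cpu-index omitted, applies the limit to every vCPU.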
Yong
--
Best regards