On Tue, Nov 26, 2013 at 07:32:37AM -0700, Eric Blake wrote:
[adding libvirt]
On 11/26/2013 06:58 AM, Paolo Bonzini wrote:
> On 26/11/2013 14:43, Amos Kong wrote:
>> /* Set a default rate limit of 2^47 bytes per minute or roughly 2TB/s. If
>> * you have an entropy source capable of generating more entropy than this
>> * and you can pass it through via virtio-rng, then hats off to you. Until
>> * then, this is unlimited for all practical purposes.
>> */
>>
>> But the current rate is (INT64_MAX) bytes per (1 << 16) ms, which is
>> about 128,000 TB/s.
>
> You are changing:
>
> * max-bytes from 2^63 to 2^47
>
> * period from 65536 to 60000
>
> For a user, changing only period would have no effect, the limit rate
> would remain effectively infinite. Changing max-bytes would give a 7%
> higher rate after your patch.
>
> Not a big deal, and max-bytes is easier to explain after your patch
> (bytes/minute) than before (bytes/65536ms).
>
> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
>
Hmm. Libvirt is already converting a user's rate of bytes/period
into the qemu parameters, defaulting the period to 1 second. So
libvirt will always pass a fixed 1 second period to qemu?
Am I correct that as long as libvirt specifies both rate AND period,
this change has no impact (and that the 7% change occurs if you
specify period while leaving max-bytes alone)? Or is this an ABI
change, where libvirt will have to be taught to detect whether it is
talking to an old or a new qemu and adjust its calculations
accordingly when converting the user's rate into qemu terms?
Nothing needs to be done in libvirt for _this patch_.
There is no API change here; it just changes the default rate from
1.9 TB/s to 2.1 TB/s, and it doesn't change any other rate-limit
logic.
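To make those numbers concrete, here is a quick standalone check of
the arithmetic (plain C, not QEMU code; I am reading "TB" as 2^40
bytes, which is the reading that matches 1.9 and 2.1):

  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
      double tb = (double)(INT64_C(1) << 40);      /* 1 TB = 2^40 bytes */
      double old_max = (double)INT64_MAX;          /* old default max-bytes */
      double new_max = (double)(INT64_C(1) << 47); /* new default max-bytes */

      /* old default: INT64_MAX bytes per 65536 ms */
      printf("old default:  %.0f TB/s\n", old_max / 65.536 / tb); /* ~128000 */
      /* 2^47 bytes per 65536 ms vs per 60000 ms */
      printf("2^47/65536ms: %.2f TB/s\n", new_max / 65.536 / tb); /* ~1.95 */
      printf("2^47/60000ms: %.2f TB/s\n", new_max / 60.0 / tb);   /* ~2.13 */
      return 0;
  }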
== Effect of the period parameter ==
When we set a fixed rate, we can use different periods. The period
still affects how stable the guest's I/O is: if the period is too
large and the rate is set high, the I/O will come in waves (a burst
followed by a long wait).
Example: the maximum I/O speed is 20 M/s, and we test for 5 minutes.
The timelines below make it clear that the first period is better.
_Theoretical_ timelines:
* period 1: 20M / 1s
  from 0 ~ 20 second, read packets,
  from 21 ~ 100 second, wait ...
  (the cycle repeats)
  from 260 ~ 281 second, read packets,
  from 281 ~ 300 second, wait ...
* period 2: 100M / 5s
  from 0 ~ 60 second, read packets,
  from 61 ~ 300 second, wait ...
A smaller period is better, but a smaller period causes more timer
expirations; the remaining I/O waves are then smoothed out by the
scheduling of other processes.
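To see why, here is a minimal simulation of the quota-per-period
behaviour (a sketch of the idea, not the actual virtio-rng code; the
4M/1s vs 20M/5s limits are made-up values, deliberately set below the
20 M/s device speed so that the burst/wait pattern is visible):

  #include <inttypes.h>
  #include <stdint.h>
  #include <stdio.h>

  #define DEVICE_SPEED 20000000LL   /* assumed backend speed: 20 M/s */

  /* Model of the limiter: a timer resets the byte quota to max_bytes
   * every period_ms; reads stall once the quota for the current
   * period is used up. */
  static void simulate(int64_t max_bytes, int64_t period_ms, int64_t total_ms)
  {
      int64_t quota = 0;
      int reading = 0;

      printf("limit: %" PRId64 " bytes per %" PRId64 " ms\n",
             max_bytes, period_ms);
      for (int64_t t = 0; t < total_ms; t++) {
          if (t % period_ms == 0) {
              quota = max_bytes;              /* timer tick: refill quota */
          }
          int64_t want = DEVICE_SPEED / 1000; /* bytes deliverable per ms */
          int64_t got = want < quota ? want : quota;
          quota -= got;
          if (got > 0 && !reading) {          /* wait -> read transition */
              printf("%6" PRId64 " ms: reading\n", t);
              reading = 1;
          } else if (got == 0 && reading) {   /* read -> wait transition */
              printf("%6" PRId64 " ms: waiting\n", t);
              reading = 0;
          }
      }
  }

  int main(void)
  {
      /* Same 4 M/s average rate, two different periods. */
      simulate(4 * 1000 * 1000, 1000, 10000);  /* short, frequent bursts */
      simulate(20 * 1000 * 1000, 5000, 10000); /* long burst, long wait  */
      return 0;
  }

The first call reads for 200 ms out of every second; the second reads
for a full second and then stalls for four, even though the average
rate is identical.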
So libvirt should pass the period & max-bytes from the XML to qemu
without converting them.
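For example (illustrative values; the <rate> element is the one
libvirt already exposes, and max-bytes/period are the qemu device
properties discussed above), the mapping would be one-to-one:

  <rng model='virtio'>
    <rate bytes='140737488355328' period='60000'/>  <!-- 2^47 per minute -->
    <backend model='random'>/dev/random</backend>
  </rng>

would become:

  -object rng-random,id=objrng0,filename=/dev/random \
  -device virtio-rng-pci,rng=objrng0,max-bytes=140737488355328,period=60000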
--
Amos.