Hi Jinsheng,
Thank you for the explanation. From the statistics above, the tc output
for outbound matches. But I'm confused about the inbound statistics:
# virsh domiftune rhel vnet5
inbound.average: 100
inbound.peak : 200
inbound.burst : 256
...
# tc -d class show dev vnet5
class htb 1:1 parent 1:fffe prio 0 quantum 10240 rate 819200bit ceil
1638Kbit linklayer ethernet burst 256Kb/1 mpu 0b cburst 256Kb/1 mpu 0b
level 0
class htb 1:fffe root rate 1638Kbit ceil 1638Kbit linklayer ethernet burst
1499b/1 mpu 0b cburst 1499b/1 mpu 0b level 7
As the value in the libvirt xml is in KB, inbound.average = 100 KB cannot
match "rate 819200bit" in the tc output; I supposed it should be 800Kbit
(100 KB/s * 8 bits/byte = 800 Kbit/s). Please help to confirm.
The same applies to "ceil 1638Kbit" (maybe it should be 1600Kbit, since
inbound.peak is 200).
I have run netperf to test the actual rate, and the result passes: 2 VMs
connected to the same bridge, with QoS set on one of them. See the test
results below:
# virsh domiftune rhel vnet0
inbound.average: 400
inbound.peak : 500
inbound.burst : 125
inbound.floor : 0
outbound.average: 100
outbound.peak : 200
outbound.burst : 256
Throughput for inbound: 3.92 * 10^6 bits/sec
Throughput for outbound: 0.93 * 10^6 bits/sec
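For reference, the test was along these lines (the guest IP and test length
are illustrative, not the exact values used):
# netserver                        (on the receiving vm)
# netperf -H 192.168.100.2 -l 30   (on the sending vm)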
These patches fixed bug [1], which was closed with a deferred resolution.
Thank you!
This also reminds me of another OVS QoS related bug [2], which was about
the network. I tried the scenarios in [2], and there are no changes (not
fixed). Just for information. :-)
[1]
https://bugzilla.redhat.com/show_bug.cgi?id=1510237
[2]
https://bugzilla.redhat.com/show_bug.cgi?id=1826168
-------
Best Regards,
Yalan Zhang
IRC: yalzhang
On Tue, Oct 26, 2021 at 3:23 PM Jinsheng Zhang (张金生)-云服务集团 <
zhangjl02(a)inspur.com> wrote:
Hi Yalan,
1) For inbound, we can use `ovs-vsctl list qos` and `ovs-vsctl list
queue` to check them from the openvswitch side. The values can be found in
other_config, as sketched below.
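For example, a minimal sketch of what those records may look like
(illustrative output, assuming the standard linux-htb keys; UUIDs omitted,
values derived from inbound average=100, peak=200, burst=256 via the
conversion explained below):
# ovs-vsctl list qos
other_config        : {max-rate="1638400"}
queues              : {0=<queue-uuid>}
type                : linux-htb
# ovs-vsctl list queue
other_config        : {burst="2097152", max-rate="1638400", min-rate="819200"}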
Inbound is in kbytes when QoS is set with `virsh domiftune ...`, while it
is in bits in OVS. Therefore, when inbound.average is set to 100, the
corresponding value set in OVS will be 819200.
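To make the conversion concrete (the 819200 implies 1 KB is treated as
1024 bytes):
inbound.average = 100 -> 100 * 1024 * 8 = 819200 bit/s  ("rate 819200bit" in tc)
inbound.peak    = 200 -> 200 * 1024 * 8 = 1638400 bit/s (tc prints "ceil 1638Kbit")
So the tc output does match the settings; the factor is 8192 rather than 8.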
2) For outbound, the value is in kbytes in libvirt, while the
ingress_policing_XX fields on the OVS interface are in kbits.
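Here the conversion is a plain kbyte-to-kbit multiplication by 8, which
matches the values from your outbound test:
outbound.average = 100 -> 100 * 8 = 800   (ingress_policing_rate: 800)
outbound.burst   = 256 -> 256 * 8 = 2048  (ingress_policing_burst: 2048)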
3) OVS uses tc to set QoS, so we can see output from the tc command.
This patch unifies the QoS control and query on OVS ports.
The conversion explanation is included in this patch:
https://listman.redhat.com/archives/libvir-list/2021-August/msg00422.html
And there are 6 follow-up patches to fix some bugs. See
https://listman.redhat.com/archives/libvir-list/2021-August/msg00423.html
-------
Best Regards,
Jinsheng Zhang
From: Yalan Zhang [mailto:yalzhang@redhat.com]
Sent: October 25, 2021 17:54
To: Michal Prívozník; Jinsheng Zhang (张金生)-云服务集团
Cc: libvir-list(a)redhat.com; Norman Shen(申嘉童); zhangjl02
Subject: Re: [PATCH v3 0/4] Add qemu support setting qos via ovs on ovs
interface
Hi Jinsheng,
I have tested the patch and have some questions; could you please help to
confirm?
1) For inbound, how can I check it from the openvswitch side? tc still
shows the statistics; is that expected?
2) For outbound, the peak is ignored. I just cannot understand
"ingress_policing_burst: 2048"; how can it come from the setting
"outbound.burst: 256"?
3) Is the output from the tc command expected?
Test inbound:
1. Start the vm with the settings below:
<interface type='bridge'>
<source bridge='ovsbr0'/>
<virtualport type='openvswitch'/>
<bandwidth>
<inbound average='100' peak='200' burst='256'/>
</bandwidth>
...
</interface>
2. Check the result from libvirt, OVS, and tc:
# virsh domiftune rhel vnet5
inbound.average: 100
inbound.peak : 200
inbound.burst : 256
inbound.floor : 0
outbound.average: 0
outbound.peak : 0
outbound.burst : 0
# ip l
17: vnet5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc htb master
ovs-system state UNKNOWN mode DEFAULT group default qlen 1000
link/ether fe:54:00:4d:43:5a brd ff:ff:ff:ff:ff:ff
# ovs-vsctl list interface
...
ingress_policing_burst: 0
ingress_policing_kpkts_burst: 0
ingress_policing_kpkts_rate: 0
ingress_policing_rate: 0
...
name : vnet5
# tc -d class show dev vnet5
class htb 1:1 parent 1:fffe prio 0 quantum 10240 rate 819200bit ceil
1638Kbit linklayer ethernet burst 256Kb/1 mpu 0b cburst 256Kb/1 mpu
0b level 0
class htb 1:fffe root rate 1638Kbit ceil 1638Kbit linklayer ethernet burst
1499b/1 mpu 0b cburst 1499b/1 mpu 0b level 7
# tc -d filter show dev vnet5 parent ffff:
(no output)
For outbound:
# virsh dumpxml rhel | grep /bandwidth -B2
<bandwidth>
<outbound average='100' peak='200' burst='256'/>
</bandwidth>
# virsh domiftune rhel vnet9
inbound.average: 0
inbound.peak : 0
inbound.burst : 0
inbound.floor : 0
outbound.average: 100
outbound.peak : 200
outbound.burst : 256
# ovs-vsctl list interface
ingress_policing_burst: 2048
ingress_policing_kpkts_burst: 0
ingress_policing_kpkts_rate: 0
ingress_policing_rate: 800
...
# tc -d filter show dev vnet9 parent ffff:
filter protocol all pref 49 basic chain 0
filter protocol all pref 49 basic chain 0 handle 0x1
action order 1: police 0x1 rate 800Kbit burst 256Kb mtu 64Kb
action drop/pipe overhead 0b linklayer unspec
ref 1 bind 1
# tc -d class show dev vnet9
(no output)
-------
Best Regards,
Yalan Zhang
IRC: yalzhang
On Mon, Jul 12, 2021 at 3:43 PM Michal Prívozník <mprivozn(a)redhat.com>
wrote:
On 7/9/21 3:31 PM, Jinsheng Zhang (张金生)-云服务集团 wrote:
> Here is my signed-off-by line
>
> Signed-off-by: zhangjl02(a)inspur.com
>
> Thanks again for reminding:) .
Perfect.
Reviewed-by: Michal Privoznik <mprivozn(a)redhat.com>
and pushed. Congratulations on your first libvirt contribution!
Michal