My original conclusion is based on the following test XML:
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  ...
  <throttlegroups>
    <throttlegroup>
      <total_iops_sec>200</total_iops_sec>
      <total_iops_sec_max>200</total_iops_sec_max>
      <group_name>limit0</group_name>
      <total_iops_sec_max_length>1</total_iops_sec_max_length>
    </throttlegroup>
    <throttlegroup>
      <total_iops_sec>250</total_iops_sec>
      <total_iops_sec_max>250</total_iops_sec_max>
      <group_name>limit1</group_name>
      <total_iops_sec_max_length>1</total_iops_sec_max_length>
    </throttlegroup>
    <throttlegroup>
      <total_iops_sec>300</total_iops_sec>
      <total_iops_sec_max>300</total_iops_sec_max>
      <group_name>limit2</group_name>
      <total_iops_sec_max_length>1</total_iops_sec_max_length>
    </throttlegroup>
    <throttlegroup>
      <total_iops_sec>400</total_iops_sec>
      <total_iops_sec_max>400</total_iops_sec_max>
      <group_name>limit012</group_name>
      <total_iops_sec_max_length>1</total_iops_sec_max_length>
    </throttlegroup>
  </throttlegroups>
  ...
  <devices>
    <!-- Disk for the operating system -->
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/virt/images/jammy-server-cloudimg-amd64.img'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/virt/disks/vm1_disk_1.qcow2'/>
      <target dev='vdb' bus='virtio'/>
      <throttlefilters>
        <throttlefilter group='limit0'/>
        <throttlefilter group='limit012'/>
      </throttlefilters>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/virt/disks/vm1_disk_2.qcow2'/>
      <target dev='vdc' bus='virtio'/>
      <throttlefilters>
        <throttlefilter group='limit1'/>
        <throttlefilter group='limit012'/>
      </throttlefilters>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/virt/disks/vm1_disk_3.qcow2'/>
      <target dev='vdd' bus='virtio'/>
      <throttlefilters>
        <throttlefilter group='limit2'/>
        <throttlefilter group='limit012'/>
      </throttlefilters>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </disk>
    ...
  </devices>
</domain>
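For context, my understanding is that each <throttlegroup> above becomes a QEMU throttle-group object and each <throttlefilter> becomes a throttle blockdev node stacked above the disk's format node. A rough sketch of the equivalent chain for vdb (the node names and elided options are made up for illustration, not the exact command line libvirt generates):

  qemu-system-x86_64 ... \
    -object throttle-group,id=limit0,x-iops-total=200,x-iops-total-max=200,x-iops-total-max-length=1 \
    -object throttle-group,id=limit012,x-iops-total=400,x-iops-total-max=400,x-iops-total-max-length=1 \
    -blockdev driver=file,node-name=vdb-storage,filename=/virt/disks/vm1_disk_1.qcow2 \
    -blockdev driver=qcow2,node-name=vdb-format,file=vdb-storage \
    -blockdev driver=throttle,node-name=vdb-filter-0,throttle-group=limit0,file=vdb-format \
    -blockdev driver=throttle,node-name=vdb-filter-1,throttle-group=limit012,file=vdb-filter-0 \
    -device virtio-blk-pci,drive=vdb-filter-1,bus=pci.0,addr=0x5

Every request to vdb has to pass through both throttle nodes, so both the 200 and 400 IOPS limits apply no matter which node sits on top.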
If I re-order the filters on vdc as below, the fio tests (randwrite) show the same results for both the concurrent test (400 IOPS in total, around 133 (400/3) for each disk) and the individual disk tests (200 for vdb, 250 for vdc, 300 for vdd):
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/virt/disks/vm1_disk_2.qcow2'/>
  <target dev='vdc' bus='virtio'/>
  <throttlefilters>
    <throttlefilter group='limit012'/>
    <throttlefilter group='limit1'/>
  </throttlefilters>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</disk>
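For reference, the individual-disk numbers above come from jobs along these lines (the guest device path and job parameters are illustrative):

  fio --name=randwrite-vdb --filename=/dev/vdb --rw=randwrite --bs=4k \
      --ioengine=libaio --iodepth=32 --direct=1 --runtime=60 --time_based

and the concurrent case simply runs the same job against /dev/vdb, /dev/vdc and /dev/vdd at the same time.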
And back to your case (vdb and vdc in the following XML):
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  ...
  <throttlegroups>
    <throttlegroup>
      <total_iops_sec>200</total_iops_sec>
      <total_iops_sec_max>200</total_iops_sec_max>
      <group_name>limit0</group_name>
      <total_iops_sec_max_length>1</total_iops_sec_max_length>
    </throttlegroup>
    <throttlegroup>
      <total_iops_sec>250</total_iops_sec>
      <total_iops_sec_max>250</total_iops_sec_max>
      <group_name>limit1</group_name>
      <total_iops_sec_max_length>1</total_iops_sec_max_length>
    </throttlegroup>
    <throttlegroup>
      <total_iops_sec>300</total_iops_sec>
      <total_iops_sec_max>300</total_iops_sec_max>
      <group_name>limit2</group_name>
      <total_iops_sec_max_length>1</total_iops_sec_max_length>
    </throttlegroup>
    <throttlegroup>
      <total_iops_sec>400</total_iops_sec>
      <total_iops_sec_max>400</total_iops_sec_max>
      <group_name>limit012</group_name>
      <total_iops_sec_max_length>1</total_iops_sec_max_length>
    </throttlegroup>
  </throttlegroups>
  ...
  <devices>
    <!-- Disk for the operating system -->
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/virt/images/jammy-server-cloudimg-amd64.img'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/virt/disks/vm1_disk_1.qcow2'/>
      <target dev='vdb' bus='virtio'/>
      <throttlefilters>
        <throttlefilter group='limit012'/>
        <throttlefilter group='limit0'/>
      </throttlefilters>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/virt/disks/vm1_disk_2.qcow2'/>
      <target dev='vdc' bus='virtio'/>
      <throttlefilters>
        <throttlefilter group='limit012'/>
      </throttlefilters>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </disk>
    ...
  </devices>
</domain>
With the above XML, the fio tests (randwrite) show:
- concurrent: 400 IOPS in total, around 200 (400/2) for each disk
- individual disk test: 200 for vdb, 400 for vdc
After I re-order the filters on the vdb disk as below, the tests show the same results:
- concurrent: 400 IOPS in total, around 200 (400/2) for each disk
- individual disk test: 200 for vdb, 400 for vdc
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/virt/disks/vm1_disk_1.qcow2'/>
  <target dev='vdb' bus='virtio'/>
  <throttlefilters>
    <throttlefilter group='limit0'/>
    <throttlefilter group='limit012'/>
  </throttlefilters>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</disk>
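If it helps to double-check which chain actually got built, the block nodes can be inspected at runtime, e.g. (the domain name vm1 is illustrative):

  virsh qemu-monitor-command vm1 --pretty \
      '{"execute": "query-named-block-nodes"}'

which should list the throttle filter nodes stacked above each disk's qcow2 format node.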
Let me know if I understand your case correctly, thanks!
On 2024/8/6 15:36, Peter Krempa wrote:
> On Tue, Aug 06, 2024 at 00:27:58 -0000, Chun Feng Wu wrote:
>
> Please keep the context in the reply. I had to check back what I've
> asked.
>
>> The order of such ``throttlefilter`` doesn't matter within ``throttlefilters``.
>
> So IIUC, re-ordering the filters doesn't have any guest-OS-visible
> impact? I'm trying to understand whether one disk can exhaust one layer
> while being blocked on the next, in which case a different disk which has
> only one layer (equivalent to the first disk's first layer) would be
> starved, but if the filters were ordered the other way around at the
> first disk it would not.
>
> If the above can happen you'll need to document how it's supposed to
> behave.
--
Thanks and Regards,
Wu