
On Fri, Aug 16, 2024 at 11:51:17 +0800, Chun Feng Wu wrote:
> On 2024/8/15 13:04, Chun Feng Wu wrote:
> > On 2024/8/9 22:04, Peter Krempa wrote:
> > > On Wed, Jun 12, 2024 at 03:02:17 -0700, wucf@linux.ibm.com wrote:
> > > > From: Chun Feng Wu <wucf@linux.ibm.com>
> > > > [...]
> > > I'd strongly prefer if you modify this such that the throttle layers
> > > will be instantiated at the per-storage-source level, both in the XML
> > > and in the code.
> > Regarding per-storage-source, do you mean XML like below? If so, when
> > "--throttle-groups" is specified in "attach-disk", should it apply the
> > groups to the top source (vm1_disk_2.qcow2) only? Is there any scenario
> > for applying throttle groups to a backing store source?
> > <disk type='file' device='disk'>
> >   <driver name='qemu' type='qcow2'/>
> >   <source file='/virt/disks/vm1_disk_2.qcow2' index='1'>
> >     <throttlefilters>
> >       <throttlefilter group='limit0'/>
> >     </throttlefilters>
> >   </source>
> >   <backingStore type='file' index='4'>
> >     <format type='qcow2'/>
> >     <source file='/virt/disks/backing.qcow2'>
> >       <throttlefilters>
> >         <throttlefilter group='limit1'/>
> >       </throttlefilters>
> >     </source>
> >     <backingStore/>
> >   </backingStore>
> >   <target dev='vdc' bus='virtio'/>
> >   <alias name='virtio-disk2'/>
> >   <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
> > </disk>
> >
> > And regarding the code part for per-storage-source: do you mean preparing
> > the throttle filter chain JSON within
> > "qemuBuildStorageSourceChainAttachPrepareBlockdev(virStorageSource *top)"?
> > If so, the current "qemuBuildStorageSourceChainAttachPrepareBlockdev"
> > doesn't include the copy-on-read layer, yet the throttle filter chain
> > depends on it for the
Yeah, this would necessarily mean that copy-on-read is on top of the throttle layer.
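
To make the ordering concrete, here is a rough sketch of the resulting
blockdev node stack, listed top-down (the node names are made up for
illustration, not what libvirt would actually generate; 'limit0' is the
throttle-group from your example):

    { "driver": "copy-on-read", "node-name": "cor0",    "file": "filter0" }
    { "driver": "throttle",     "node-name": "filter0",
      "throttle-group": "limit0",                       "file": "fmt0" }
    { "driver": "qcow2",        "node-name": "fmt0",    "file": "store0" }
    { "driver": "file",         "node-name": "store0",
      "filename": "/virt/disks/vm1_disk_2.qcow2" }

The disk frontend would attach to 'cor0', so I/O going through the
copy-on-read layer is still subject to the throttle node below it.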
> > top source. In that case, should the preparation of copy-on-read be moved
> > into "qemuBuildStorageSourceChainAttachPrepareBlockdev" as well? If yes,
> > the current
No. There's no sense in having multiple copy-on-read layers.
parameter "virStorageSource *top" is not enough, also, do you see any concern about updating "qemuBuildStorageSourceChainAttachPrepareBlockdev" for throttle chain, it seems it has a lot of callers
As I've said in a previous reply, having it at the per-node level may be
overkill. If you don't foresee ever using this differently, then maybe
let's stick to the per-disk config, by just keeping proper ordering and
remembering the node names.
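
With the per-disk config, multiple groups on one disk would then simply be
chained in order above the disk's top image, roughly like this (again an
illustrative sketch with made-up node names):

    { "driver": "throttle", "node-name": "filter1",
      "throttle-group": "limit1", "file": "filter0" }
    { "driver": "throttle", "node-name": "filter0",
      "throttle-group": "limit0", "file": "fmt0" }

The frontend attaches to the topmost filter ('filter1'), and libvirt has to
remember both filter node names so that the whole chain can be torn down
again on detach.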