It seems the interface only implements the isolated case; I remember
you proposed something for the overlapping case?
I have not seen the whole patch set yet, but I did some quick testing
on your patches and will try to find more time to review them.
(Currently I maintain another daemon dedicated to the RDT feature,
called RMD.)
Only issue 1 is a true issue; the others should be discussed, or be
treated as 'known issues'.
My env:
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 56320K
NUMA node0 CPU(s): 0-21,44-65
NUMA node1 CPU(s): 22-43,66-87
virsh capabilities:
  <cache>
    <bank id='0' level='3' type='both' size='55' unit='MiB' cpus='0-21,44-65'>
      <control granularity='2816' unit='KiB' type='both' maxAllocs='16'/>
    </bank>
    <bank id='1' level='3' type='both' size='55' unit='MiB' cpus='22-43,66-87'>
      <control granularity='2816' unit='KiB' type='both' maxAllocs='16'/>
    </bank>
  </cache>
Issues:
1. There is no support for allocating on only some of the cache ids:
I have to provide the cache allocation for every cache id, even though
I only care about the allocation on one of them, because the VM will
never be scheduled on the other cache (socket).
So I got this error if I define the domain like this:
  <vcpu placement='static'>6</vcpu>
  <cputune>
    <emulatorpin cpuset='0,37-38,44,81-82'/>
    <cachetune vcpus='0-4'>
      <cache id='0' level='3' type='both' size='2816' unit='KiB'/>
            ^^^ cache id='1' is not provided
    </cachetune>
root@s2600wt:~# virsh start kvm-cat
error: Failed to start domain kvm-cat
error: Cannot write into schemata file
'/sys/fs/resctrl/qemu-qemu-13-kvm-cat-0-4/schemata': Invalid
argument
This behavior is not correct.
I expect the CBM to look like this:
root@s2600wt:/sys/fs/resctrl# cat qemu-qemu-14-kvm-cat-0-4/*
000000,00000000,00000000
L3:0=80;1=fffff
(It does not matter what the mask for cache id 1 is, because my VM won't
be scheduled on it: either I have defined the vCPU->pCPU pinning, or I
just assume the kernel won't schedule it onto cache 1.)
Or, at least, restrict the XML when I define such a domain and tell me
that I need to provide all cache ids (even if I have 4 caches but only
run my VM on cache 0).
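(For reference, this is roughly the write I would expect libvirt to do
on my host; the group name is the one from the error above, and leaving
cache id 1 at the full mask is just my assumption of a sensible default:
    echo "L3:0=80;1=fffff" > /sys/fs/resctrl/qemu-qemu-13-kvm-cat-0-4/schemata
i.e. the mask I asked for on cache id 0, and cache id 1 simply left
untouched.)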
2. Cache way fragmentation (no good answer).
I see that for now we allocate cache ways starting from the low bits,
and a newly created VM allocates cache from the next free way. If a VM
whose ways sit in the middle (e.g. its schemata is 00100) is destroyed,
that slot (1 cache way) may not fit other requests and will be wasted.
How can we handle this? There seems to be no good way; rearranging the
existing allocations would cause cache misses for a window of time, I
think.
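A concrete illustration (the masks are made up, on this host's 20-bit
CBM):
    vm-a   L3:0=00003    <- ways 0-1
    vm-b   L3:0=00004    <- way 2
    vm-c   L3:0=00018    <- ways 3-4
After vm-b is destroyed, way 2 is free again, but a new VM that needs 2
contiguous ways cannot use it, so that way stays wasted unless the
existing groups are rearranged.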
3. The admin/user has to operate the default resource group manually;
that is to say, after resctrl is mounted, the admin/user has to change
the schemata of the default group by hand. Will libvirt provide an
interface/API to handle it?
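What I mean is something like this (mask values are only an
illustration for this 20-way host):
    mount -t resctrl resctrl /sys/fs/resctrl
    cat /sys/fs/resctrl/schemata
    L3:0=fffff;1=fffff
    echo "L3:0=000ff;1=000ff" > /sys/fs/resctrl/schemata
i.e. shrink the default group by hand so that the ways handed out to VM
groups are really exclusive.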
4. Will there be an API like `FreeCacheWay` for the end user to see how
many cache ways can still be allocated on the host?
Other users/orchestrators (e.g. nova) may need to know whether a VM
can be scheduled on the host, but the free cache ways are not linear;
they may be fragmented.
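Today an orchestrator would have to compute that by hand, roughly:
    cat /sys/fs/resctrl/schemata          # the default group
    cat /sys/fs/resctrl/*/schemata        # every per-VM group
then OR the L3 masks per cache id and look at the zero bits that remain,
ideally also reporting the largest contiguous run because of the
fragmentation in issue 2. (This is just a sketch of the idea, not an
existing libvirt API.)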
5. What if another application wants to share some cache ways with some
of the VMs?
Libvirt currently reads all of the resource groups (instead of
tracking the consumed cache ways itself), so if another resource group
has been created under /sys/fs/resctrl and its schemata is "FFFFF",
libvirt will report that there is not enough room for a new VM. But the
user may actually want another application (e.g. OVS, DPDK PMDs) to
share cache ways with the VMs created by libvirt.
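For example, if the admin creates a group for OVS by hand (the group
name is only illustrative):
    mkdir /sys/fs/resctrl/ovs
    echo "L3:0=fffff;1=fffff" > /sys/fs/resctrl/ovs/schemata
libvirt will see every way as already consumed and refuse to start any
new VM with <cachetune>, even though the intention was for the OVS/DPDK
threads to share ways with the VMs.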
I will try more cases.
Thanks, Eli.