[libvirt] RFC for supporting Intel RDT/CAT in libvirt

Hi folks,

I would like to start a discussion about how to support a new CPU feature in libvirt. CAT support is not fully merged into the Linux kernel yet; the target release is 4.10, and all patches have been merged into the Linux tip branch, so there won't be further interface/design changes.

## Background

Intel RDT is a toolkit for doing resource QoS on CPU resources such as LLC (L3) cache and memory bandwidth usage. These fine-grained resource control features are very useful in a cloud environment that runs lots of noisy instances. Libvirt already supports CMT/MBMT/MBML, but those are only for resource usage monitoring; this proposal is to support CAT to control a VM's L3 cache quota.

## CAT interface in the kernel

In the kernel, a new resource interface has been introduced under /sys/fs/resctrl; it is used for resource control. For more information, refer to Intel_rdt_ui [ https://git.kernel.org/cgit/linux/kernel/git/tip/tip.git/tree/Documentation/... ]

The kernel requires a schemata for the L3 cache to be provided before a task is added to a new partition. This interface is too much detail for a virtual machine user, so the proposal is to let libvirt manage the schemata on the host.

## What will libvirt do?

### Questions

To enable CAT support in libvirt, we need to think about the following questions:

1. Should we only set CAT when a VM has CPU pinning? That is to say, L3 cache is a per-CPU-socket resource: on a host with 2 CPU sockets, each socket has its own cache, which cannot be shared.
2. What cache allocation policy should be used? This could look like:
   a. A VM has its own dedicated L3 cache and can also share the other L3 cache.
   b. A VM can only use the cache allocated to it.
   c. There are some pre-defined policies and a priority for a VM, like COB [1].
3. Should we reserve some L3 cache for the host's own system usage? (related to 2)
4. What is the unit for L3 cache allocation? (related to 2)

### Proposed changes

XML domain user interface changes:

Option 1: explicitly specify the cache allocation for a VM.

1. Work with NUMA nodes

Some cloud orchestration software uses NUMA + vCPU pinning together, so we can enable CAT support on top of the NUMA infrastructure. Expose how much L3 cache a VM wants reserved, and require that the L3 cache be bound to a specific CPU socket, just like what we do for NUMA nodes.

This is a domain XML example generated by OpenStack Nova to allocate LLC (L3 cache) when booting a new VM:

  <domain>
    …
    <cputune>
      <vcpupin vcpu='0' cpuset='19'/>
      <vcpupin vcpu='1' cpuset='63'/>
      <vcpupin vcpu='2' cpuset='83'/>
      <vcpupin vcpu='3' cpuset='39'/>
      <vcpupin vcpu='4' cpuset='40'/>
      <vcpupin vcpu='5' cpuset='84'/>
      <emulatorpin cpuset='19,39-40,63,83-84'/>
    </cputune>
    ...
    <cpu mode='host-model'>
      <model fallback='allow'/>
      <topology sockets='3' cores='1' threads='2'/>
      <numa>
        <cell id='0' cpus='0-1' memory='2097152' l3cache='1408' unit='KiB'/>
        <cell id='1' cpus='2-5' memory='4194304' l3cache='5632' unit='KiB'/>
      </numa>
    </cpu>
    ...
  </domain>

Refer to [http://libvirt.org/formatdomain.html#elementsCPUTuning]

So finally we can calculate on which CPU socket (cell) we need to allocate how much L3 cache for a VM.

2. Work with vCPU pinning

Setting the NUMA part aside, the CAT setting should be tied to the CPU core setting: we can apply a CAT policy if the VM has vCPU pinning set (only then is the VM guaranteed not to be scheduled onto another CPU socket). The CPU socket on which to allocate cache can be calculated just as in 1.

We may need to enable both 1 and 2.
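Before turning to allocation policies, a minimal sketch of the resctrl flow described in the kernel-interface section, as libvirt would drive it on the host (the partition name and PID variable are placeholders; the mask assumes the 20-bit CBM used in the policy examples that follow):

  cd /sys/fs/resctrl
  mkdir vm-example                          # a new partition for one VM
  echo "L3:0=f0000" > vm-example/schemata   # grant the top 4 CBM bits on socket 0
  echo $QEMU_PID > vm-example/tasks         # move the VM's task into the partition
  cat schemata                              # default group; may need shrinking as well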
There are several policies for cache allocation. Let's take some examples. For an Intel E5 v4 2699 (single socket), there is 55 MB of L3 cache on the chip, and the default L3 schemata is L3:0=fffff, i.e. 20 bits control the L3 cache, so each bit represents 2.75 MB; that is the minimal allocation unit on this host.

The allocation policy could be one of 3 policies:

1. One priority VM: a high-priority VM can be allocated a dedicated amount of L3 cache (let's say 2.75 * 4 = 11 MB) while also being able to reach the remaining 44 MB, which is shared with the other processes and VMs on the same host. For that we need to create a new 'partition', n-20371:

  root@s2600wt:/sys/fs/resctrl# ls
  cpus  info  n-20371  schemata  tasks

Inside the n-20371 directory:

  root@s2600wt:/sys/fs/resctrl# ls n-20371/
  cpus  schemata  tasks

The schemata content will be L3:0=fffff (all 20 bits) and the tasks file will contain the PIDs of that VM. Along with this, we need to change the default schemata of the system:

  root@s2600wt:/sys/fs/resctrl# cat schemata
  L3:0=ffff   # the default group cannot use the highest 4 bits; only tasks in n-20371 can reach them

In this design we can only have 1 priority VM. Let's change it a bit to have 2 VMs. The schemata of the 2 VMs could be:

  1. L3:0=ffff0   # cannot use the 4 low bits (11 MB of L3 cache)
  2. L3:0=0ffff   # cannot use the 4 high bits (11 MB of L3 cache)

and the default schemata changes to:

  L3:0=0fff0   # default system processes can only use the middle 33 MB of L3 cache

2. Isolated, dedicated L3 cache allocation per VM (if required): a VM can only use the cache allocated to it. For example, VM 1 requires 11 MB, so its schemata will be L3:0=f0000; VM 2 requires 11 MB, so its schemata will be L3:0=0f000 (masks padded to the 20-bit width); and the default schemata will be L3:0=00fff. In this case we can create multiple VMs, each with its own dedicated L3 cache. The disadvantage is that the allocated cache cannot be shared efficiently. (A concrete resctrl sketch of this policy follows at the end of this message, just before the references.)

3. Isolated, shared L3 cache allocation for a group of VMs (if required by the user): in this case we put some VMs (considered noisy neighbors) into one 'partition' and restrict them to use only the cache allocated to them. By doing this, other, higher-priority VMs are ensured enough L3 cache. We then decide how much cache the noisy group should have and put all of their PIDs into that partition's tasks file.

Option 2: set cache priority and apply policies

Don't specify a cache amount at all; only define a cache usage priority when defining the VM domain XML. The cache priority decides how much L3 cache the VM can use on a host; it is not quantized, so the user doesn't need to think about how much cache the VM should get when defining the domain XML. Libvirt decides the cache allocation based on the VM's defined priority and the policy in use. The disadvantage is that cache capacity differs between hosts, so the same VM domain XML may get a different cache allocation on different hosts.

# Support CAT in libvirt itself or leverage other software

COB is the Intel Cache Orchestrator Balancer (COB); please refer to http://clrgitlab.intel.com/rdt/cob/tree/master

COB supports some pre-defined policies; it monitors CPU usage, cache occupancy and cache misses, and does cache allocation based on the policy in use. If COB can monitor specified processes (VM processes) and accept a defined priority, it would be good to reuse it.

So at last the question comes down to:

* Support fine-grained LLC cache control, letting the user specify the cache allocation, or
* Support pre-defined policies, letting the user specify an LLC allocation priority.
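As referenced in policy 2 above, a minimal sketch of the isolated, dedicated allocation in resctrl terms, assuming the 20-bit CBM host from the examples (partition names and PID variables are placeholders):

  cd /sys/fs/resctrl
  mkdir vm1 vm2
  echo "L3:0=f0000" > vm1/schemata   # VM 1: dedicated 11 MB (bits 16-19)
  echo "L3:0=0f000" > vm2/schemata   # VM 2: dedicated 11 MB (bits 12-15)
  echo "L3:0=00fff" > schemata       # default group keeps the low 33 MB (bits 0-11)
  echo $VM1_PID > vm1/tasks
  echo $VM2_PID > vm2/tasks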
Reference

[1] COB: http://clrgitlab.intel.com/rdt/cob/tree/master
[2] CAT intro: https://software.intel.com/en-us/articles/software-enabling-for-cache-alloca...
[3] Kernel Intel_rdt_ui: https://git.kernel.org/cgit/linux/kernel/git/tip/tip.git/tree/Documentation/...

Best Regards,
Eli Qiao (乔立勇), OpenStack Core team, OTC Intel

On Wed, Dec 21, 2016 at 09:51:44AM +0000, Qiao, Liyong wrote:
Hi folks
I would like to start a discussion about how to support a new CPU feature in libvirt. CAT support is not fully merged into the Linux kernel yet; the target release is 4.10, and all patches have been merged into the Linux tip branch, so there won't be further interface/design changes.
## Background
Intel RDT is a toolkit for doing resource QoS on CPU resources such as LLC (L3) cache and memory bandwidth usage. These fine-grained resource control features are very useful in a cloud environment that runs lots of noisy instances. Libvirt already supports CMT/MBMT/MBML, but those are only for resource usage monitoring; this proposal is to support CAT to control a VM's L3 cache quota.
## CAT interface in kernel
In the kernel, a new resource interface has been introduced under /sys/fs/resctrl; it is used for resource control. For more information, refer to Intel_rdt_ui [ https://git.kernel.org/cgit/linux/kernel/git/tip/tip.git/tree/Documentation/... ]
The kernel requires a schemata for the L3 cache to be provided before a task is added to a new partition. This interface is too much detail for a virtual machine user, so the proposal is to let libvirt manage the schemata on the host.
I don't quite understand this paragraph.
## What will libvirt do?
### Questions:
To enable CAT support in libvirt, we need to think about the following questions:
1. Should we only set CAT when a VM has CPU pinning? That is to say, L3 cache is a per-CPU-socket resource: on a host with 2 CPU sockets, each socket has its own cache, which cannot be shared.
It makes sense to only do it when the vCPU is pinned. It can happen that someone will want to pin it to multiple threads that are on different sockets, and at that point it's their fault.
2. What cache allocation policy should be used? This could look like: a. A VM has its own dedicated L3 cache and can also share the other L3 cache. b. A VM can only use the cache allocated to it.
I thought we need to provide options for both of these ^^. However, the difference is a setting for the default top-most hierarchical point, so, actually, the admin needs to make that decision and set it outside of libvirt. If the VM should use only .25 of the cache and *can* use the rest, then (in cgroups terms) / should have 0fff and /domain should be ffff. If the VM should use just .25, but the rest of the system can access it as well, then / = ffff and /domain = f000. If they are supposed to be exclusive, then / should be either not set or 0fff and /domain = f000. In all cases, libvirt should not touch /, just /domain. That's as far as I understand it.
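To map these three configurations onto the resctrl filesystem, a minimal sketch, assuming a 16-bit CBM as in the masks above and one partition per VM (the partition name "domain" and the PID variable are placeholders):

  cd /sys/fs/resctrl
  mkdir domain                        # "/domain" in the text above
  echo $VM_PID > domain/tasks         # move the VM's task into the partition

  # VM restricted to .25, system can use everything: / = ffff, /domain = f000
  echo "L3:0=ffff" > schemata
  echo "L3:0=f000" > domain/schemata

  # VM can use everything, system kept out of the top .25: / = 0fff, /domain = ffff
  echo "L3:0=0fff" > schemata
  echo "L3:0=ffff" > domain/schemata

  # exclusive split: / = 0fff, /domain = f000
  echo "L3:0=0fff" > schemata
  echo "L3:0=f000" > domain/schemata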
c. There are some pre-defined policies and a priority for a VM, like COB [1].
3. Should we reserve some L3 cache for the host's own system usage? (related to 2)
4. What is the unit for L3 cache allocation? (related to 2)
I think it should be size (as opposed to percentage). We should add a way to check for the size of the caches.
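For checking sizes, the host already exposes cache geometry through the standard sysfs cacheinfo interface; a quick sketch (index3 is typically the L3 cache, but the level file should be checked rather than assumed):

  cat /sys/devices/system/cpu/cpu0/cache/index3/level            # e.g. 3
  cat /sys/devices/system/cpu/cpu0/cache/index3/size             # e.g. 56320K
  cat /sys/devices/system/cpu/cpu0/cache/index3/shared_cpu_list  # CPUs sharing this cache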
### Propose Changes
XML domain user interface changes:
Option 1: explicitly specify the cache allocation for a VM
I vote for this option as it's more introspectable and predictable from my point of view.
1. Work with NUMA nodes
Some cloud orchestration software uses NUMA + vCPU pinning together, so we can enable CAT support on top of the NUMA infrastructure.
Expose how much L3 cache a VM wants reserved, and require that the L3 cache be bound to a specific CPU socket, just like what we do for NUMA nodes.
This is a domain XML example generated by OpenStack Nova to allocate LLC (L3 cache) when booting a new VM:
<domain>
  …
  <cputune>
    <vcpupin vcpu='0' cpuset='19'/>
    <vcpupin vcpu='1' cpuset='63'/>
    <vcpupin vcpu='2' cpuset='83'/>
    <vcpupin vcpu='3' cpuset='39'/>
    <vcpupin vcpu='4' cpuset='40'/>
    <vcpupin vcpu='5' cpuset='84'/>
    <emulatorpin cpuset='19,39-40,63,83-84'/>
  </cputune>
This part ^^ describes the settings for the domain in the host.
  ...
  <cpu mode='host-model'>
    <model fallback='allow'/>
    <topology sockets='3' cores='1' threads='2'/>
    <numa>
      <cell id='0' cpus='0-1' memory='2097152' l3cache='1408' unit='KiB'/>
      <cell id='1' cpus='2-5' memory='4194304' l3cache='5632' unit='KiB'/>
    </numa>
  </cpu>
This part ^^ describes how the domain will look from the guest's point of view. It looks like the domain has 1408 KiB of L3 cache. It needs to be somewhere else, like in the top part, for example. Since at some point it could be something other than L3, I would choose a slightly different schema to allow for readable updates. As to which place it should be defined in (cputune/memtune/cachetune), I'm afraid of voting, because it's already so messy that I wouldn't like my choice a few minutes after making it. It needs to be done per-thread, though.
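As a strawman for that placement question, one hypothetical shape for a host-side element (the <cachetune> name and all attributes here are invented for discussion, not an existing libvirt schema; the level attribute and per-vCPU granularity follow the comment above):

  <cachetune>
    <!-- hypothetical: reserve 1408 KiB of level-3 cache on host socket 0
         for the threads backing vCPUs 0-1 -->
    <cache level='3' host_socket='0' vcpus='0-1' size='1408' unit='KiB'/>
  </cachetune>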
... </domain>
Refer to [http://libvirt.org/formatdomain.html#elementsCPUTuning]
So finally we can calculate on which CPU socket (cell) we need to allocate how much L3 cache for a VM.
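For example, with the XML above and assuming host CPUs 19 and 63 (the pinning targets of cell 0's vCPUs) sit on one socket while CPUs 39-40 and 83-84 sit on another, libvirt would carve cell 0's 1408 KiB out of the first socket's L3 and cell 1's 5632 KiB out of the second's.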
2. Work with vCPU pinning
Setting the NUMA part aside, the CAT setting should be tied to the CPU core setting: we can apply a CAT policy if the VM has vCPU pinning set (only then is the VM guaranteed not to be scheduled onto another CPU socket).
The CPU socket on which to allocate cache can be calculated just as in 1.
We may need to enable both 1 and 2.
No, please, no settings in multiple locations with overlapping meanings. I started getting lost in the rest of the mail, sorry for skipping that.
# Support CAT in libvirt itself or leverage other software
COB is the Intel Cache Orchestrator Balancer (COB); please refer to http://clrgitlab.intel.com/rdt/cob/tree/master
COB supports some pre-defined policies; it monitors CPU usage, cache occupancy and cache misses, and does cache allocation based on the policy in use.
If COB can monitor specified processes (VM processes) and accept a defined priority, it would be good to reuse it.
So at last the question comes down to:

* Support fine-grained LLC cache control, letting the user specify the cache allocation, or
* Support pre-defined policies, letting the user specify an LLC allocation priority.
I'm for the first option. We will eventually need to do it with another tool as well, because otherwise we would need to support the host settings, and people won't want to install libvirt just to manage CAT allocations. The links for the COB you provided don't work, but there's a tiny little helper [1] that manages cache allocations.

[1] https://lkml.org/lkml/2017/1/3/171
Reference

[1] COB: http://clrgitlab.intel.com/rdt/cob/tree/master
[2] CAT intro: https://software.intel.com/en-us/articles/software-enabling-for-cache-alloca...
[3] Kernel Intel_rdt_ui: https://git.kernel.org/cgit/linux/kernel/git/tip/tip.git/tree/Documentation/...
Best Regards,
Eli Qiao (乔立勇), OpenStack Core team, OTC Intel