On Tuesday, 7 February 2017 at 7:56 PM, Marcelo Tosatti wrote:
> On Tue, Feb 07, 2017 at 02:43:13PM +0800, Eli Qiao wrote:
>> On Tuesday, 7 February 2017 at 3:03 AM, Marcelo Tosatti wrote:
>>> On Mon, Feb 06, 2017 at 01:33:09PM -0200, Marcelo Tosatti wrote:
>>>> On Mon, Feb 06, 2017 at 10:23:35AM +0800, Eli Qiao wrote:
>>>>> This series of patches adds support for the CAT feature, which is
>>>>> also called cache tune in libvirt.
>>>>>
>>>>> First, expose the cache information which could be tuned in the
>>>>> capabilities XML. Then add new domain XML element support to add a
>>>>> cache bank which will apply to this libvirt domain.
>>>>>
>>>>> This series adds a util file `resctrl.c/h`, an interface to talk
>>>>> with the Linux kernel's sysfs.
>>>>>
>>>>> There are still some TODOs, such as exposing a new public interface
>>>>> to get free cache information.
>>>>>
>>>>> Some discussion about this feature support can be found from:
>>>>
>>>> Two comments:
>>>>
>>>> 1) Please perform appropriate filesystem locking when accessing
>>>> resctrlfs, as described at:
>>
>> Sure.
>>
>>>> 2)
>>>>
>>>> <cachetune id='10' host_id='1' type='l3' size='3072' unit='KiB'/>
>>>>
>>>> [b4c270b5-e0f9-4106-a446-69032872ed7d]# cat tasks
>>>> 8654
>>>> [b4c270b5-e0f9-4106-a446-69032872ed7d]# pstree -p | grep qemu
>>>> |-qemu-kvm(8654)-+-{qemu-kvm}(8688)
>>>> |                |-{qemu-kvm}(8692)
>>>> |                `-{qemu-kvm}(8693)
>>>>
>>>> Should add individual vcpus to the "tasks" file, not the main QEMU
>>>> process.
>>>>
>>>> The NFV usecase requires exclusive CAT allocation for the vcpu which
>>>> runs the sensitive workload.
>>>>
>>>> Perhaps:
>>>>
>>>> <cachetune id='10' host_id='1' type='l3' size='3072' unit='KiB'/>
>>>>
>>>> Adds all vcpus that are pinned to the socket which has the cache
>>>> bank with host_id=1.
>>>>
>>>> <cachetune id='10' host_id='1' type='l3' size='3072' unit='KiB' vcpus='2,3'/>
>>>>
>>>> Adds the PIDs of vcpus 2 and 3 to the resctrl directory created
>>>> for this allocation.
>>
>> Hmm.. in this case, we need to figure out what the PIDs of vcpu=2 and
>> vcpu=3 are and add them to the resctrl directory.
>
> Yes, and only the PIDs of vcpus 2 and 3, not of any other vcpus.
>
>> Currently, I create one resctrl directory (a "resctrl domain") per VM
>> and just put all of its task IDs into it.
>>
>> This is my thought: let's say the VM has vcpus 0 1 2 3, and you want
>> to let 0, 1 benefit from the cache on host_id=0, and 2, 3 from the
>> cache on host_id=1. You would do:
>>
>> 1) pin vcpus 0, 1 to the cpus of socket 0, and vcpus 2, 3 to the cpus
>> of socket 1 (this can be done in cputune);
>>
>> 2) define the cache tune like this:
>>
>> <cachetune id='0' host_id='0' type='l3' size='3072' unit='KiB'/>
>> <cachetune id='1' host_id='1' type='l3' size='3072' unit='KiB'/>
>>
>> In libvirt, we create a resctrl directory named with the VM's uuid,
>> set the schemata for each of socket 0 and socket 1, and put all of
>> the QEMU task IDs into the tasks file. This will work fine.
>
> No, please don't do this.
>
>> Please note that in a resctrl directory we can define the schemata
>> for each socket id separately.
>
> Please do not put vcpus automatically into the reservations. It's
> necessary to have certain vcpus in a reservation and some not.
>
> For example: a 2-vcpu guest, vcpu0 pinned to socket 0 cpu0, vcpu1
> pinned to socket 0 cpu1.
>
> <cachetune id='0' host_id='0' type='l3' size='3072' unit='KiB'/>
>
> We want _only_ vcpu1 to be part of this reservation, and not vcpu0
> (we want vcpu0 to use the default group, i.e. the schemata file at
> /sys/fs/resctrl/schemata).
>
> So please have the ability to add vcpus to the XML syntax:
>
> <cachetune id='0' host_id='0' type='l3' size='3072' unit='KiB' vcpus='1'/>
>
> or
>
> <cachetune id='0' host_id='0' type='l3' size='3072' unit='KiB' vcpus='2,3'/>
>
> This also allows different sizes to be specified.
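Coming back to point 1: the locking the kernel's resctrl documentation
describes is flock(2) on /sys/fs/resctrl, LOCK_SH for read-only
traversal of the directory structure and LOCK_EX for any modification.
A minimal sketch of what that could look like (helper names are mine,
error handling trimmed):

#include <fcntl.h>
#include <sys/file.h>
#include <unistd.h>

/* Take the advisory lock the resctrl docs describe: LOCK_SH while
 * only reading the directory structure, LOCK_EX while creating groups
 * or writing schemata/tasks. Returns an fd to keep open while
 * working on resctrlfs. */
static int resctrl_lock(int op)
{
    int fd = open("/sys/fs/resctrl", O_RDONLY | O_DIRECTORY);

    if (fd < 0)
        return -1;
    if (flock(fd, op) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}

static void resctrl_unlock(int fd)
{
    flock(fd, LOCK_UN);
    close(fd);
}

Anything that scans the existing groups for free CBM bits and then
creates a new group has to hold LOCK_EX across the whole
read-decide-write sequence; otherwise two writers can hand out the
same bits and the reservations end up shared instead of exclusive.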
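For the per-vcpu handling in 2), the values to write are the vcpu
thread IDs (the {qemu-kvm}(8688)-style TIDs from the pstree output
above), not the main QEMU PID, and resctrl expects one TID per write
to "tasks". An untested sketch, assuming the per-VM-uuid directory
layout used in this series:

#include <limits.h>
#include <stdio.h>
#include <sys/types.h>

/* Move one vcpu thread into the VM's resctrl group. Writing a thread
 * ID to "tasks" moves just that thread, which is what lets some vcpus
 * stay in the default group while others are in the reservation. */
static int resctrl_add_task(const char *vmuuid, pid_t tid)
{
    char path[PATH_MAX];
    FILE *f;

    snprintf(path, sizeof(path), "/sys/fs/resctrl/%s/tasks", vmuuid);
    if (!(f = fopen(path, "w")))
        return -1;
    fprintf(f, "%d\n", (int)tid);
    return fclose(f) == 0 ? 0 : -1;
}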
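On the per-socket schemata point: the schemata file accepts partial
updates, so writing, say, "L3:1=ff" changes only cache id 1 and leaves
the other cache ids' masks alone. A sketch of a per-socket setter
(translating size/unit into a CBM is left out):

#include <limits.h>
#include <stdio.h>

/* Set the L3 bitmask for a single cache id in a group's schemata.
 * resctrl accepts partial writes, so other cache ids keep their
 * current masks. */
static int resctrl_set_l3(const char *vmuuid, unsigned cache_id,
                          unsigned long cbm)
{
    char path[PATH_MAX];
    FILE *f;

    snprintf(path, sizeof(path), "/sys/fs/resctrl/%s/schemata", vmuuid);
    if (!(f = fopen(path, "w")))
        return -1;
    fprintf(f, "L3:%u=%lx\n", cache_id, cbm);
    return fclose(f) == 0 ? 0 : -1;
}

With that, the two-socket example above becomes two calls, one per
cache id, against the same group directory.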
>>> 3) CDP / non-CDP conversion.
>>>
>>> In case the size determination has been performed with non-CDP, to
>>> emulate such an allocation on a CDP host, it would be good to allow
>>> both code and data allocations to share the CBM space:
>>
>> IMO, I don't think it's good to have this. In the libvirt
>> capabilities XML, the application will get to know whether the host
>> supports CDP or not.
>>
>> <cachetune id='10' host_id='1' type='l3data' size='3072' unit='KiB'/>
>> <cachetune id='10' host_id='1' type='l3code' size='3072' unit='KiB'/>
>
> Perhaps if using the same ID?

I am open to hearing what others say.

>>>> Other than that, testing looks good.
>>
>> Thanks for the testing.
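One note on 3): if the same-ID form above were accepted, emulating a
non-CDP size determination on a CDP host would come down to writing
the same CBM to both the L3CODE and L3DATA lines of the group's
schemata, so code and data share the reserved ways the way a non-CDP
host would. A sketch under the same assumptions as the helpers above:

#include <limits.h>
#include <stdio.h>

/* Emulate a non-CDP allocation on a CDP host: give L3CODE and L3DATA
 * the same mask so code and data share the reserved ways. */
static int resctrl_set_l3_shared(const char *vmuuid, unsigned cache_id,
                                 unsigned long cbm)
{
    char path[PATH_MAX];
    FILE *f;

    snprintf(path, sizeof(path), "/sys/fs/resctrl/%s/schemata", vmuuid);
    if (!(f = fopen(path, "w")))
        return -1;
    fprintf(f, "L3CODE:%u=%lx\nL3DATA:%u=%lx\n",
            cache_id, cbm, cache_id, cbm);
    return fclose(f) == 0 ? 0 : -1;
}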