2010/8/24 Balbir Singh <balbir(a)linux.vnet.ibm.com>:
* Nikunj A. Dadhania <nikunj(a)linux.vnet.ibm.com> [2010-08-24 13:35:10]:
> On Tue, 24 Aug 2010 13:05:26 +0530, Balbir Singh <balbir(a)linux.vnet.ibm.com> wrote:
> > * Nikunj A. Dadhania <nikunj(a)linux.vnet.ibm.com> [2010-08-24 11:53:27]:
> >
> > >
> > > Subject: [RFC] Memory controller exploitation in libvirt
> > >
> > > Corresponding libvirt public API:
> > > int virDomainSetMemoryParameters (virDomainPtr domain,
> > >                                   virMemoryParameterPtr params,
> > >                                   unsigned int nparams);
> > > int virDomainGetMemoryParameters (virDomainPtr domain,
> > >                                   virMemoryParameterPtr params,
> > >                                   unsigned int nparams);
> > >
> > >
> >
> > Does nparams imply setting several parameters together? Does bulk
> > loading help? I would prefer splitting out the API if possible
> > into
> Yes, it helps: when parsing the parameters from the domain XML file, we can
> call this API and set them all at once. BTW, it can also be called with one
> parameter if desired.
>
> >
> > virCgroupSetMemory() - already present in src/util/cgroup.c
> > virCgroupGetMemory() - already present in src/util/cgroup.c
> > virCgroupSetMemorySoftLimit()
> > virCgroupSetMemoryHardLimit()
> > virCgroupSetMemorySwapHardLimit()
> > virCgroupGetStats()
> This is at the cgroup level (internal API) and will be implemented in the
> way suggested. The RFC should not be specific to cgroups: libvirt is
> supported on multiple OSes, and the APIs described above in the RFC are
> public API.
>
I thought we were talking of cgroups in the QEMU driver for Linux.
IMHO the generalization is too big. ESX, for example, already
abstracts its WLM/RM needs in its driver.
Yes, the ESX driver allows controlling ballooning through
virDomainSetMemory and virDomainSetMaxMemory.
ESX itself also allows setting what this thread calls
memoryMinGuarantee, but this is not exposed in libvirt.
So you can control how much virtual memory a guest has
(virDomainSetMaxMemory) and define an upper (virDomainSetMemory) and
a lower (not exposed via libvirt) bound for the physical memory that
the hypervisor should use to satisfy the virtual memory of a guest.
ESX also allows defining shares, a relative value that sets a
priority between guests in case there is not enough physical memory
to satisfy all of them; the remaining virtual memory is then
satisfied by swapping at the hypervisor level.
The same pattern applies to the virtual CPUs. There is an upper and a
lower limit for the CPU allocation of a guest and a shares value to
define priority in case of contention. All three are exposed using the
virDomainSetSchedulerParameters API for ESX.
Regarding the new elements proposed here:
<define name="resource">
  <element memory>
    <element memoryHardLimit/>
    <element memorySoftLimit/>
    <element memoryMinGuarantee/>
    <element swapHardLimit/>
    <element swapSoftLimit/>
  </element>
</define>
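Spelled out in a domain XML config, the proposal would presumably look
something like this (the values and the kB unit are illustrative
assumptions on my part, not part of the RFC):

```xml
<domain type='qemu'>
  ...
  <resource>
    <memory>
      <memoryHardLimit>1048576</memoryHardLimit>
      <memorySoftLimit>524288</memorySoftLimit>
      <memoryMinGuarantee>262144</memoryMinGuarantee>
      <swapHardLimit>2097152</swapHardLimit>
      <swapSoftLimit>1048576</swapSoftLimit>
    </memory>
  </resource>
  ...
</domain>
```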
memoryHardLimit is already there and called memory; memorySoftLimit is
also there and called currentMemory; memoryMinGuarantee is new.
I'm not sure where swapHardLimit and swapSoftLimit apply, is that for
swapping at the hypervisor level?
Also keep in mind that there was a recent discussion about how to
express ballooning and memory configuration in the domain XML config:
https://www.redhat.com/archives/libvir-list/2010-August/msg00118.html
Regarding future additions:
CPUHardLimit
CPUSoftLimit
CPUShare
CPUPercentage
IO_BW_Softlimit
IO_BW_Hardlimit
IO_BW_percentage
The CPU part of this is already possible via the
virDomainSetSchedulerParameters API, but those values aren't expressed
in the domain XML config; maybe you're suggesting to do that?
The I/O part is in fact new, I think.
In general, when you want to extend the domain XML config, make sure
that you don't model it too closely on a specific implementation like
cgroups.
Matthias