On Mon, Jun 14, 2010 at 3:10 PM, Daniel P. Berrange <berrange(a)redhat.com> wrote:
On Sat, Jun 12, 2010 at 07:23:33AM -0400, Alex Jia wrote:
> Hey Daniel,
> The cgroup mechanism has been integrated into libvirt for the LXC and QEMU drivers.
> The LXC driver uses all of the cgroup controllers except net_cls and cpuset,
> while the QEMU driver only uses the cpu and devices controllers at present.
>
> From the user point of view, user can use some virsh commands to control some
> guest resources:
> 1. Using 'virsh schedinfo' command to get/set CPU scheduler priority for a guest
QEMU + LXC use the cpu controller 'cpu_shares' tunable
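For illustration, a hypothetical session showing how cpu_shares might be set and where it should surface; the guest name and cgroup layout (cgroup v1 mounted under /sys/fs/cgroup) are assumptions:

```shell
# Hypothetical session; 'guest1' and the cgroup path are assumptions:
#   $ virsh schedinfo guest1 --set cpu_shares=2048
#   $ cat /sys/fs/cgroup/cpu/libvirt/qemu/guest1/cpu.shares
#   2048
# cpu_shares is a relative weight: under contention, a 2048-share guest
# next to a default 1024-share guest gets roughly two thirds of the CPU:
echo $(( 2048 * 100 / (2048 + 1024) ))   # prints 66
```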
> 2. Using 'virsh vcpupin' command to control guest vcpu affinity
QEMU pins the process directly, doesn't use cgroups. LXC hasn't
implemented this yet
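A sketch of what this pinning looks like; the domain name and CPU numbers are assumptions:

```shell
# Hypothetical session:
#   $ virsh vcpupin guest1 0 0-3    # pin vcpu 0 to host CPUs 0-3
# For QEMU this is plain CPU affinity on the vcpu thread (same effect
# as taskset on the thread id), not the cpuset cgroup controller.
# The 0-3 range corresponds to affinity mask 0xf:
printf '%x\n' $(( (1 << 4) - 1 ))   # prints f
```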
> 3. Using 'virsh setmem' command to change memory allocation
> 4. Using 'virsh setmaxmem' command to change maximum memory limit
QEMU uses balloon driver. LXC uses cgroups memory controller
Not sure if I understand this, but the balloon driver and memory
cgroups are not mutually exclusive. One could use both together and I
would certainly like to see additional commands to support cgroups.
What happens if a guest (like freebsd) does not support ballooning?
Are you suggesting we'll not need cgroups at all with QEMU?
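To make the distinction concrete, a hypothetical comparison of the two mechanisms; the guest name and cgroup mount point are assumptions (cgroup v1):

```shell
# Hypothetical session:
#   $ virsh setmem guest1 524288    # balloon: needs guest cooperation
#   $ echo 512M > /sys/fs/cgroup/memory/libvirt/qemu/guest1/memory.limit_in_bytes
# The cgroup cap is enforced host-side, so it would also cover guests
# without a balloon driver (e.g. FreeBSD). virsh setmem takes KiB;
# 512 MiB in KiB is:
echo $(( 512 * 1024 ))   # prints 524288
```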
> 5. Using 'virsh setvcpus' command to change number of virtual CPUs
QEMU uses cpu hotplug. LXC hasn't implemented this.
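A sketch of the two approaches; the domain name, counts, and the LXC cpuset path are assumptions:

```shell
# Hypothetical session:
#   $ virsh setvcpus guest1 4       # QEMU: CPU hotplug inside the guest
# An LXC equivalent might be a cpuset restriction, confining the
# container to 4 host CPUs (path assumed, cgroup v1):
#   $ echo 0-3 > /sys/fs/cgroup/cpuset/libvirt/lxc/guest1/cpuset.cpus
# The range 0-3 covers this many CPUs:
echo $(( 3 - 0 + 1 ))   # prints 4
```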
> I'm only sure that 1 above uses the CPU scheduler controller; maybe 4 uses the
> memory controller? and maybe 5 uses the cpuset controller? I am not sure.
>
I think we'll need some notion of soft limits as well; I'm not sure they
can be encapsulated using the current set. We need memory shares, for
example, to encapsulate them.
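A sketch of what a memory soft limit looks like in the cgroup v1 memory controller; the path is an assumption:

```shell
# Hypothetical cgroup v1 setup:
#   $ echo 256M > /sys/fs/cgroup/memory/libvirt/qemu/guest1/memory.soft_limit_in_bytes
# Unlike memory.limit_in_bytes (a hard cap), a soft limit is only
# reclaimed against under global memory pressure. 256M in bytes:
echo $(( 256 * 1024 * 1024 ))   # prints 268435456
```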
Balbir