[libvirt] About the cgroup mechanism used in libvirt

Hey Daniel,

The cgroup mechanism has been integrated into libvirt for the LXC and QEMU drivers. The LXC driver uses all of the cgroup controllers except net_cls and cpuset, while the QEMU driver only uses the cpu and devices controllers at present.

From the user's point of view, a number of virsh commands can be used to control guest resources:
1. Using the 'virsh schedinfo' command to get/set CPU scheduler priority for a guest
2. Using the 'virsh vcpupin' command to control guest vcpu affinity
3. Using the 'virsh setmem' command to change memory allocation
4. Using the 'virsh setmaxmem' command to change the maximum memory limit
5. Using the 'virsh setvcpus' command to change the number of virtual CPUs

I am only sure that 1 uses the CPU scheduler controller; maybe 4 uses the memory controller, and maybe 5 uses the cpuset controller? I am not sure.

I also wonder how to control device access via a virsh command or a libvirt binding API such as the Python binding. In addition, for the CPU accounting and freezer controllers, how can they be used to control guest resources from the libvirt application layer? And how can one check that a setting actually took effect, for example with cpuacct? These issues have been confusing me recently.

Any comments and suggestions are welcome; thanks for your help.

Best Regards,
Alex

On Sat, Jun 12, 2010 at 07:23:33AM -0400, Alex Jia wrote:
Hey Daniel, The cgroup mechanism has been integrated into libvirt for the LXC and QEMU drivers, and the LXC driver uses all of the cgroup controllers except net_cls and cpuset, while the QEMU driver only uses the cpu and devices controllers at present.
From the user's point of view, a number of virsh commands can be used to control guest resources: 1. Using the 'virsh schedinfo' command to get/set CPU scheduler priority for a guest
QEMU + LXC use the cpu controller 'cpu_shares' tunable
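(For illustration, a minimal sketch of how 'virsh schedinfo' maps onto the libvirt Python binding; the connection URI and the guest name 'demo' are placeholders:)

    import libvirt

    conn = libvirt.open('qemu:///system')        # or 'lxc:///' for the LXC driver
    dom = conn.lookupByName('demo')              # placeholder guest name

    # 'virsh schedinfo demo' corresponds to these calls; for both drivers
    # the 'cpu_shares' value is applied through the cpu cgroup controller.
    print(dom.schedulerParameters())             # e.g. {'cpu_shares': 1024}
    dom.setSchedulerParameters({'cpu_shares': 2048})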
2. Using the 'virsh vcpupin' command to control guest vcpu affinity
QEMU pins the process directly, doesn't use cgroups. LXC hasn't implemented this yet
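(A sketch of the corresponding Python binding call; the 4-CPU host implied by the boolean map and the guest name 'demo' are assumptions:)

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('demo')              # placeholder guest name

    # 'virsh vcpupin' corresponds to pinVcpu(); the tuple holds one boolean
    # per host CPU (4 assumed here). For QEMU this sets the vcpu thread's
    # affinity directly rather than going through cgroups.
    dom.pinVcpu(0, (True, True, False, False))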
3. Using the 'virsh setmem' command to change memory allocation
4. Using the 'virsh setmaxmem' command to change the maximum memory limit
QEMU uses balloon driver. LXC uses cgroups memory controller
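(Sketch of the equivalent Python binding calls; memory values are in KiB and the guest name is a placeholder:)

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('demo')              # placeholder guest name

    # 'virsh setmaxmem' / 'virsh setmem' map to these calls. With the QEMU
    # driver setMemory() is serviced by the balloon driver; with LXC the
    # memory cgroup controller enforces it. Raising the maximum may only
    # be possible while the domain is shut off, depending on the driver.
    dom.setMaxMemory(1048576)                    # 1 GiB ceiling, in KiB
    dom.setMemory(524288)                        # current allocation, 512 MiB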
5. Using 'virsh setvcpus' command to change number of virtual CPUs
QEMU uses cpu hotplug. LXC hasn't implemented this.
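(And the matching Python binding call, again with a placeholder guest name:)

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('demo')              # placeholder guest name

    # 'virsh setvcpus' corresponds to setVcpus(); the QEMU driver implements
    # it via CPU hotplug rather than cgroups.
    dom.setVcpus(2)
    print(dom.info())                            # info()[3] is the vcpu count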
I am only sure that 1 above uses the CPU scheduler controller; maybe 4 uses the memory controller, and maybe 5 uses the cpuset controller? I am not sure.
I also wonder how to control device access via a virsh command or a libvirt binding API such as the Python binding. In addition, for the CPU accounting and freezer controllers, how can they be used to control guest resources from the libvirt application layer? And how can one check that a setting actually took effect, for example with cpuacct? These issues have been confusing me recently.
There isn't any direct access to cgroups via any APIs. The use of cgroups is a private implementation detail only. You just need to use the APIs that correspond to those virsh commands.

Daniel
--
|: Red Hat, Engineering, London    -o-   http://people.redhat.com/berrange/ :|
|: http://libvirt.org -o- http://virt-manager.org -o- http://deltacloud.org :|
|: http://autobuild.org        -o-         http://search.cpan.org/~danberr/ :|
|: GnuPG: 7D3B9505  -o-  F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 :|
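(Since there is no libvirt API for inspecting the cgroup values themselves, the only way to double-check something like cpuacct is to read the cgroup filesystem directly. A rough sketch in Python follows; the /cgroup mount point, the libvirt/qemu/<name> group layout, and the guest name 'demo' are all assumptions that vary by distribution and libvirt version:)

    # Hypothetical cgroup layout -- adjust to where the controllers are
    # actually mounted and to the group hierarchy libvirt creates.
    cgroup_root = '/cgroup'
    guest = 'demo'                # placeholder guest name

    def read_tunable(controller, filename):
        path = '%s/%s/libvirt/qemu/%s/%s' % (cgroup_root, controller, guest, filename)
        with open(path) as f:
            return f.read().strip()

    print(read_tunable('cpuacct', 'cpuacct.usage'))  # cumulative CPU time, in ns
    print(read_tunable('cpu', 'cpu.shares'))         # should match 'virsh schedinfo'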

On Mon, Jun 14, 2010 at 3:10 PM, Daniel P. Berrange <berrange@redhat.com> wrote:
On Sat, Jun 12, 2010 at 07:23:33AM -0400, Alex Jia wrote:
Hey Daniel, The cgroup mechanism has been integrated into libvirt for the LXC and QEMU drivers, and the LXC driver uses all of the cgroup controllers except net_cls and cpuset, while the QEMU driver only uses the cpu and devices controllers at present.
From the user's point of view, a number of virsh commands can be used to control guest resources: 1. Using the 'virsh schedinfo' command to get/set CPU scheduler priority for a guest
QEMU + LXC use the cpu controller 'cpu_shares' tunable
2. Using the 'virsh vcpupin' command to control guest vcpu affinity
QEMU pins the process directly, doesn't use cgroups. LXC hasn't implemented this yet
3. Using the 'virsh setmem' command to change memory allocation
4. Using the 'virsh setmaxmem' command to change the maximum memory limit
QEMU uses balloon driver. LXC uses cgroups memory controller
Not sure if I understand this, but the balloon driver and memory cgroups are not mutually exclusive. One could use both together and I would certainly like to see additional commands to support cgroups. What happens if a guest (like freebsd) does not support ballooning? Are you suggesting we'll not need cgroups at all with QEMU?
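(To illustrate that the two mechanisms are complementary rather than exclusive, here is a hedged sketch that balloons a guest through the libvirt Python binding and, separately, caps it with the memory cgroup controller by writing the cgroup file directly. The guest name 'demo' and the /cgroup/memory/libvirt/qemu/demo path are assumptions:)

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('demo')          # placeholder guest name

    # Cooperative path: ask the guest's balloon driver to shrink.
    dom.setMemory(524288)                    # 512 MiB, value in KiB

    # Host-side path: enforce a hard ceiling with the memory cgroup
    # controller, which also works for guests (e.g. FreeBSD) that have
    # no balloon driver. The path below is hypothetical.
    with open('/cgroup/memory/libvirt/qemu/demo/memory.limit_in_bytes', 'w') as f:
        f.write(str(768 * 1024 * 1024))      # 768 MiB hard limit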
5. Using 'virsh setvcpus' command to change number of virtual CPUs
QEMU uses cpu hotplug. LXC hasn't implemented this.
I am only sure that 1 above uses the CPU scheduler controller; maybe 4 uses the memory controller, and maybe 5 uses the cpuset controller? I am not sure.
I think we'll need some notion of soft limits as well; I'm not sure they can be encapsulated using the current set of commands. We need memory shares, for example, to encapsulate them.

Balbir
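(For reference, the kernel memory controller already exposes a soft-limit tunable that is only enforced under memory pressure; a minimal sketch of setting it by hand, reusing the same hypothetical cgroup path as above — libvirt itself exposes no API for this:)

    # Hypothetical path; memory.soft_limit_in_bytes is only enforced
    # when the host comes under memory pressure.
    path = '/cgroup/memory/libvirt/qemu/demo/memory.soft_limit_in_bytes'
    with open(path, 'w') as f:
        f.write(str(512 * 1024 * 1024))      # 512 MiB soft limit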

On Mon, Jun 14, 2010 at 03:28:42PM +0530, Balbir Singh wrote:
On Mon, Jun 14, 2010 at 3:10 PM, Daniel P. Berrange <berrange@redhat.com> wrote:
On Sat, Jun 12, 2010 at 07:23:33AM -0400, Alex Jia wrote:
Hey Daniel, The cgroup mechanism has been integrated into libvirt for the LXC and QEMU drivers, and the LXC driver uses all of the cgroup controllers except net_cls and cpuset, while the QEMU driver only uses the cpu and devices controllers at present.
From the user's point of view, a number of virsh commands can be used to control guest resources: 1. Using the 'virsh schedinfo' command to get/set CPU scheduler priority for a guest
QEMU + LXC use the cpu controller 'cpu_shares' tunable
2. Using the 'virsh vcpupin' command to control guest vcpu affinity
QEMU pins the process directly, doesn't use cgroups. LXC hasn't implemented this yet
3. Using the 'virsh setmem' command to change memory allocation
4. Using the 'virsh setmaxmem' command to change the maximum memory limit
QEMU uses balloon driver. LXC uses cgroups memory controller
Not sure if I understand this, but the balloon driver and memory cgroups are not mutually exclusive. One could use both together and I would certainly like to see additional commands to support cgroups. What happens if a guest (like freebsd) does not support ballooning? Are you suggesting we'll not need cgroups at all with QEMU?
No, I was merely describing the current usage. Making use of cgroups to enforce the limit is certainly a desirable RFE for the future.
5. Using 'virsh setvcpus' command to change number of virtual CPUs
QEMU uses cpu hotplug. LXC hasn't implemented this.
I am only sure that 1 above uses the CPU scheduler controller; maybe 4 uses the memory controller, and maybe 5 uses the cpuset controller? I am not sure.
I think we'll need some notion of soft limits as well; I'm not sure they can be encapsulated using the current set of commands. We need memory shares, for example, to encapsulate them.
Daniel
--
|: Red Hat, Engineering, London    -o-   http://people.redhat.com/berrange/ :|
|: http://libvirt.org -o- http://virt-manager.org -o- http://deltacloud.org :|
|: http://autobuild.org        -o-         http://search.cpan.org/~danberr/ :|
|: GnuPG: 7D3B9505  -o-  F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 :|