Hi,
On 07/03/2012 07:06 AM, Eric Blake wrote:
> On 06/05/2012 02:16 AM, tangchen wrote:
>> create a new cgroup and move all hypervisor threads to the new cgroup.
>> And then we can do the other things:
>> 1. limit only vcpu usage rather than the whole qemu
>> 2. limit for hypervisor threads(include vhost-net threads)
> A really useful thing to add to this commit message would be an ascii
> view of the cgroup hierarchy being created. If I understand correctly,
> this creates the following levels:
>
> cgroup mount point
>   libvirt subdirectory (all libvirt management)
>     driver subdirectory (all guests tied to one driver)
>       hypervisor subdirectory (all processes tied to one guest)
>         vcpu subdirectory (all processes tied to one VCPU of a guest)
>
> I would almost prefer to call it a VM cgroup instead of a hypervisor
> cgroup (and that reflects back to the naming chosen in 2/13), as I tend to
> think of 'hypervisor' meaning 'qemu' - the technology that drives
> multiple guests, while I think of 'VM' meaning 'single guest', a
> collection of possibly multiple processes under a single qemu process
> umbrella for running a given guest.
Well, actually I see it as follows:

cgroup mount point
  libvirt subdirectory (all libvirt management)
    driver subdirectory (all guests tied to one driver)
      VM subdirectory (all processes tied to one guest)
        vcpu subdirectory (all processes tied to one VCPU of a guest)
        hypervisor subdirectory (the remaining hypervisor threads, including vhost-net threads)
So I think the name is fine. What do you think?
For now, I didn't change anything here. But if you insist, we can discuss it
further.
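To make the layout concrete, here is a rough illustration (not libvirt code; the
mount point, directory names and helper below are only assumptions) of how that
nesting maps onto the cgroup v1 filesystem: creating a subdirectory creates a
child cgroup, and writing a thread id into its tasks file moves the thread there.

#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

/* Move a thread/process into a cgroup by writing its id into the tasks file. */
static int attach_task(const char *cgroup_dir, pid_t pid)
{
    char path[512];
    FILE *fp;

    snprintf(path, sizeof(path), "%s/tasks", cgroup_dir);
    if (!(fp = fopen(path, "w")))
        return -1;
    fprintf(fp, "%d\n", (int)pid);
    fclose(fp);
    return 0;
}

int main(void)
{
    /* Hypothetical paths mirroring the hierarchy discussed above;
     * assumes the parent directories already exist. */
    const char *vm = "/sys/fs/cgroup/cpu/libvirt/qemu/guest1";
    char vcpu0[512];

    mkdir(vm, 0755);                              /* per-VM cgroup         */
    snprintf(vcpu0, sizeof(vcpu0), "%s/vcpu0", vm);
    mkdir(vcpu0, 0755);                           /* per-vcpu child cgroup */

    /* As an example, attach the calling thread to the vcpu0 cgroup. */
    return attach_task(vcpu0, getpid()) == 0 ? 0 : 1;
}

(This needs the cpu controller mounted at that path and sufficient privileges,
of course.)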
>> +
>> + if (priv->nvcpupids == 0 || priv->vcpupids[0] == vm->pid) {
>> + /* If we does not know VCPU<->PID mapping or all vcpu runs in the same

> s/vcpu runs/vcpus run/

>> + * thread, we cannot control each vcpu.
>> + */
>> + virCgroupFree(&cgroup);
>> + return 0;
> It makes sense to ignore failure to set up a vcpu sub-cgroup if the user
> never requested the feature, with the end result being that they lose
> out on the functionality. But if the user explicitly requested per-vcpu
> usage and we can't set it up, should this return a failure? In other
> words, I'm worried whether we need to detect whether it is always safe
> to ignore the failure (as done here) or whether there are situations
> where setup failure should prevent running the VM until the cgroup
> situation is resolved.
I report an error here now; please refer to my new patches. A rough sketch of
the distinction is below.
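This is only a minimal sketch with hypothetical names (not the actual patch):
quietly skip the per-vcpu cgroup when nothing was requested, but report an
error when the user explicitly asked for per-vcpu tuning and no VCPU<->PID
mapping is available.

#include <stdio.h>
#include <sys/types.h>

/* Hypothetical stand-ins for the real qemu driver structures. */
struct vm_state {
    int nvcpupids;              /* number of known vcpu thread ids             */
    pid_t *vcpupids;            /* vcpupids[0] == vm pid when all vcpus share
                                 * the main thread                             */
    pid_t pid;                  /* main qemu process id                        */
    int vcpu_tuning_requested;  /* user asked for per-vcpu limits              */
};

static int setup_vcpu_cgroups(struct vm_state *vm)
{
    if (vm->nvcpupids == 0 || vm->vcpupids[0] == vm->pid) {
        /* No VCPU<->PID mapping, or all vcpus run in the same thread,
         * so individual vcpus cannot be controlled. */
        if (vm->vcpu_tuning_requested) {
            /* The user explicitly asked for it: fail instead of silently
             * losing the feature. */
            fprintf(stderr, "per-vcpu cgroup requested but VCPU<->PID "
                            "mapping is not available\n");
            return -1;
        }
        return 0;   /* nothing requested, safe to ignore */
    }

    /* ... create one child cgroup per vcpu and attach each thread ... */
    return 0;
}

int main(void)
{
    pid_t ids[1] = { 1234 };
    /* Mapping unusable (vcpupids[0] == pid) and tuning requested,
     * so setup_vcpu_cgroups() reports an error. */
    struct vm_state vm = { 1, ids, 1234, 1 };
    return setup_vcpu_cgroups(&vm) == 0 ? 0 : 1;
}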
Thanks. :)
--
Best Regards,
Tang chen