On 07/19/2011 04:59 AM, Adam Litke wrote:
On 07/18/2011 04:42 AM, Wen Congyang wrote:
> +int qemuSetupCgroupForVcpu(struct qemud_driver *driver, virDomainObjPtr vm)
> +{
> +    virCgroupPtr cgroup = NULL;
> +    virCgroupPtr cgroup_vcpu = NULL;
> +    qemuDomainObjPrivatePtr priv = vm->privateData;
> +    int rc;
> +    unsigned int i;
> +    unsigned long long period = vm->def->cputune.period;
> +    long long quota = vm->def->cputune.quota;
> +
> +    if (driver->cgroup == NULL)
> +        return 0; /* Not supported, so claim success */
> +
> +    rc = virCgroupForDomain(driver->cgroup, vm->def->name, &cgroup, 0);
> +    if (rc != 0) {
> +        virReportSystemError(-rc,
> +                             _("Unable to find cgroup for %s"),
> +                             vm->def->name);
> +        goto cleanup;
> +    }
> +
> +    if (priv->nvcpupids == 0 || priv->vcpupids[0] == vm->pid) {
> +        /* If we do not know the VCPU<->PID mapping, or all vcpus run in
> +         * the same thread, we cannot control each vcpu individually.
> +         */
> +        if (period || quota) {
> +            if (qemuCgroupControllerActive(driver, VIR_CGROUP_CONTROLLER_CPU)) {
> +                if (qemuSetupCgroupVcpuBW(cgroup, period, quota) < 0)
> +                    goto cleanup;
> +            }
> +        }
> +        return 0;
> +    }
I found a problem above. In the case where we are controlling quota at
the domain-level cgroup, we must multiply the user-specified quota by the
number of vcpus in the domain in order to get the same performance as we
would with per-vcpu cgroups. As written, the vm will be essentially
capped at one vcpu's worth of quota regardless of the number of vcpus. You
will also have to apply this logic in reverse when reporting the
scheduler statistics, so that the quota number is a per-vcpu quantity.
When quota is 1000 and the per-vcpu threads are not active, we can start
the vm successfully. But when the per-vcpu threads are active and the vm
has more than one vcpu, the vm fails to start if we multiply the
user-specified quota. This will confuse the user: with the same
configuration, the vm sometimes starts and sometimes does not.