On 07/18/2011 04:42 AM, Wen Congyang wrote:
> +int qemuSetupCgroupForVcpu(struct qemud_driver *driver, virDomainObjPtr vm)
> +{
> +    virCgroupPtr cgroup = NULL;
> +    virCgroupPtr cgroup_vcpu = NULL;
> +    qemuDomainObjPrivatePtr priv = vm->privateData;
> +    int rc;
> +    unsigned int i;
> +    unsigned long long period = vm->def->cputune.period;
> +    long long quota = vm->def->cputune.quota;
> +
> +    if (driver->cgroup == NULL)
> +        return 0; /* Not supported, so claim success */
> +
> +    rc = virCgroupForDomain(driver->cgroup, vm->def->name, &cgroup, 0);
> +    if (rc != 0) {
> +        virReportSystemError(-rc,
> +                             _("Unable to find cgroup for %s"),
> +                             vm->def->name);
> +        goto cleanup;
> +    }
> +
> +    if (priv->nvcpupids == 0 || priv->vcpupids[0] == vm->pid) {
> +        /* If we don't know the VCPU<->PID mapping, or all vcpus run in
> +         * the same thread, we cannot control each vcpu.
> +         */
> +        if (period || quota) {
> +            if (qemuCgroupControllerActive(driver, VIR_CGROUP_CONTROLLER_CPU)) {
> +                if (qemuSetupCgroupVcpuBW(cgroup, period, quota) < 0)
> +                    goto cleanup;
> +            }
> +        }
> +        return 0;
> +    }

I found a problem above.  When we are controlling quota through the
domain-level cgroup, we must multiply the user-specified quota by the
number of vcpus in the domain in order to get the same performance as we
would with per-vcpu cgroups.  As written, the VM will effectively be
capped at one vcpu's worth of quota regardless of how many vcpus it has.
You will also have to apply this logic in reverse when reporting the
scheduler statistics so that the quota number is a per-vcpu quantity.
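
Something along these lines might work (untested sketch; it assumes
vm->def->vcpus holds the domain's vcpu count and that a non-positive
quota means "unlimited" and must not be scaled):

    if (period || quota) {
        if (qemuCgroupControllerActive(driver, VIR_CGROUP_CONTROLLER_CPU)) {
            long long domain_quota = quota;

            /* Scale a positive quota by the number of vcpus so that the
             * domain-level cgroup grants the same total bandwidth that
             * per-vcpu cgroups would; leave "unlimited" values alone. */
            if (quota > 0)
                domain_quota = quota * vm->def->vcpus;

            if (qemuSetupCgroupVcpuBW(cgroup, period, domain_quota) < 0)
                goto cleanup;
        }
    }

The reporting path would then divide the domain-level quota by
vm->def->vcpus before handing the value back to the user.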


-- 
Adam Litke
IBM Linux Technology Center
