On Mon, Jun 3, 2013 at 11:23 PM, Michael Wang
<[email protected]> wrote:
> In sched_init(), there is no need to initialize 'root_task_group.shares' and
> 'root_task_group.cfs_bandwidth' repeatedly; the per-cpu loop re-initializes
> them once for every possible CPU, so hoist that setup out of the loop.
>
> CC: Ingo Molnar <[email protected]>
> CC: Peter Zijlstra <[email protected]>
> Signed-off-by: Michael Wang <[email protected]>
> ---
>  kernel/sched/core.c |   46 +++++++++++++++++++++++++---------------------
>  1 files changed, 25 insertions(+), 21 deletions(-)
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 58453b8..c0c3716 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -6955,6 +6955,31 @@ void __init sched_init(void)
>
>  #endif /* CONFIG_CGROUP_SCHED */
>
> +#ifdef CONFIG_FAIR_GROUP_SCHED
> +       root_task_group.shares = ROOT_TASK_GROUP_LOAD;
> +
> +       /*
> +        * How much cpu bandwidth does root_task_group get?
> +        *
> +        * In case of task-groups formed thr' the cgroup filesystem, it
> +        * gets 100% of the cpu resources in the system. This overall
> +        * system cpu resource is divided among the tasks of
> +        * root_task_group and its child task-groups in a fair manner,
> +        * based on each entity's (task or task-group's) weight
> +        * (se->load.weight).
> +        *
> +        * In other words, if root_task_group has 10 tasks of weight
> +        * 1024 and two child groups A0 and A1 (of weight 1024 each),
> +        * then A0's share of the cpu resource is:
> +        *
> +        *      A0's bandwidth = 1024 / (10*1024 + 1024 + 1024) = 8.33%
> +        *
> +        * We achieve this by letting root_task_group's tasks sit
> +        * directly in rq->cfs (i.e root_task_group->se[] = NULL).
> +        */
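
As a quick sanity check of the 8.33% figure in the quoted comment, the
arithmetic works out as follows (a standalone userspace snippet for
illustration only, not kernel code):

    #include <stdio.h>

    int main(void)
    {
            /* 10 root tasks of weight 1024, plus groups A0 and A1 */
            int total = 10 * 1024 + 1024 + 1024;    /* = 12288 */
            double a0 = 1024.0 / total;             /* A0's weight / total */

            printf("A0's bandwidth = %.2f%%\n", 100.0 * a0);  /* 8.33% */
            return 0;
    }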

The comment being moved here has become unglued from what it's supposed
to be attached to: it's tied to root_task_group.shares &
init_tg_cfs_entry, not to init_cfs_bandwidth, which it now immediately
precedes.
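
A rough sketch of a placement that keeps the comment glued to the code
it describes (illustrative only, not a tested patch; whether part of it
should instead stay with init_tg_cfs_entry in the per-cpu loop is a
separate question):

    #ifdef CONFIG_FAIR_GROUP_SCHED
            /*
             * How much cpu bandwidth does root_task_group get?
             * [... comment body as quoted above ...]
             */
            root_task_group.shares = ROOT_TASK_GROUP_LOAD;

            init_cfs_bandwidth(&root_task_group.cfs_bandwidth);
    #endif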

> +       init_cfs_bandwidth(&root_task_group.cfs_bandwidth);
> +#endif
> +
>         for_each_possible_cpu(i) {
>                 struct rq *rq;
>
> @@ -6966,28 +6991,7 @@ void __init sched_init(void)
>                 init_cfs_rq(&rq->cfs);
>                 init_rt_rq(&rq->rt, rq);
>  #ifdef CONFIG_FAIR_GROUP_SCHED
> -               root_task_group.shares = ROOT_TASK_GROUP_LOAD;
>                 INIT_LIST_HEAD(&rq->leaf_cfs_rq_list);
> -               /*
> -                * How much cpu bandwidth does root_task_group get?
> -                *
> -                * In case of task-groups formed thr' the cgroup filesystem, it
> -                * gets 100% of the cpu resources in the system. This overall
> -                * system cpu resource is divided among the tasks of
> -                * root_task_group and its child task-groups in a fair manner,
> -                * based on each entity's (task or task-group's) weight
> -                * (se->load.weight).
> -                *
> -                * In other words, if root_task_group has 10 tasks of weight
> -                * 1024 and two child groups A0 and A1 (of weight 1024 each),
> -                * then A0's share of the cpu resource is:
> -                *
> -                *      A0's bandwidth = 1024 / (10*1024 + 1024 + 1024) = 8.33%
> -                *
> -                * We achieve this by letting root_task_group's tasks sit
> -                * directly in rq->cfs (i.e root_task_group->se[] = NULL).
> -                */
> -               init_cfs_bandwidth(&root_task_group.cfs_bandwidth);
>                 init_tg_cfs_entry(&root_task_group, &rq->cfs, NULL, i, NULL);
>  #endif /* CONFIG_FAIR_GROUP_SCHED */
>
> --
> 1.7.4.1
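
For reference, with both hunks applied the fair-group setup in
sched_init() ends up split like this (reconstructed from the diff above;
elisions marked with ...):

    #ifdef CONFIG_FAIR_GROUP_SCHED
            /* done once, before the per-cpu loop */
            root_task_group.shares = ROOT_TASK_GROUP_LOAD;
            init_cfs_bandwidth(&root_task_group.cfs_bandwidth);
    #endif

            for_each_possible_cpu(i) {
                    struct rq *rq;
                    ...
    #ifdef CONFIG_FAIR_GROUP_SCHED
                    /* only the genuinely per-rq setup remains in the loop */
                    INIT_LIST_HEAD(&rq->leaf_cfs_rq_list);
                    init_tg_cfs_entry(&root_task_group, &rq->cfs, NULL, i, NULL);
    #endif
                    ...
            }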