On Mon, Jun 06, 2016 at 08:20:38AM +0800, Yuyang Du wrote:
> Vincent reported that the first task to a new task group's cfs_rq will
> be attached in attach_task_cfs_rq() and once more when it is enqueued
> (see https://lkml.org/lkml/2016/5/25/388).
> 
> Actually, it is worse. The sched avgs can sometimes be attached twice,
> not only when we change task groups but also when we switch to the fair
> class. The two scenarios are described below.
> 
> 1) Switch to fair class:
> 
> The sched class change is done like this:
> 
>       if (queued)
>         enqueue_task();
>       check_class_changed()
>         switched_from()
>         switched_to()
> 
> If the task is on_rq, it has already been enqueued, which MAY have
> attached its sched avgs to the cfs_rq. If so, we must not attach them
> again in switched_to(), otherwise we attach them twice.
> 
> To address both the on_rq and !on_rq cases, whether or not the task
> was previously in the fair class, the simplest solution is to reset
> the task's last_update_time to 0 when the task is switched from fair,
> and then let task enqueue do the sched avgs attachment exactly once.
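
Presumably the new helper is no more than the below; I'm guessing at
the body here, only the name is from the patch:

	static inline void reset_task_last_update_time(struct task_struct *p)
	{
		/* make the next enqueue treat the task as freshly migrated */
		p->se.avg.last_update_time = 0;
	}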
> 
> 2) Change between fair task groups:
> 
> The task groups are changed like this:
> 
>       if (queued)
>           dequeue_task()
>       task_move_group()
>       if (queued)
>           enqueue_task()
> 
> Unlike the switch-to-fair-class case, if the task is on_rq it will be
> enqueued after we move task groups. So the simplest solution is to
> reset the task's last_update_time in task_move_group(), not attach the
> sched avgs there, and then let enqueue_task() do the attachment.
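
For anyone following along: the enqueue side already keys the one-time
attach off a zeroed last_update_time, roughly like so (simplified from
enqueue_entity_load_avg(), from memory):

	static inline void
	enqueue_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
	{
		/* last_update_time == 0 means "not attached to any cfs_rq" */
		int migrated = !se->avg.last_update_time;

		/* ... age the cfs_rq (and the task, unless migrated) ... */

		if (migrated)
			attach_entity_load_avg(cfs_rq, se);
	}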

So this patch completely removes the detach->attach aging you moved
around in the previous patch -- leading me to wonder what the purpose of
the previous patch was.

Also, this Changelog completely fails to mention this fact, nor does it
explain why this is 'right'.

> +/* Virtually synchronize task with its cfs_rq */

I don't feel this comment actually enlightens the function much.

> @@ -8372,9 +8363,6 @@ static void attach_task_cfs_rq(struct task_struct *p)
>       se->depth = se->parent ? se->parent->depth + 1 : 0;
>  #endif
>  
> -     /* Synchronize task with its cfs_rq */
> -     attach_entity_load_avg(cfs_rq, se);
> -
>       if (!vruntime_normalized(p))
>               se->vruntime += cfs_rq->min_vruntime;
>  }

You leave attach/detach asymmetric and not a comment in sight explaining
why.
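
That is, after this patch the pair reads, schematically:

	detach_task_cfs_rq()
		...
		detach_entity_load_avg(cfs_rq, se);	/* still here */

	attach_task_cfs_rq()
		...
		/* no attach_entity_load_avg(); deferred to enqueue */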

> @@ -8382,16 +8370,18 @@ static void attach_task_cfs_rq(struct task_struct *p)
>  static void switched_from_fair(struct rq *rq, struct task_struct *p)
>  {
>       detach_task_cfs_rq(p);
> +     reset_task_last_update_time(p);
> +     /*
> +      * If we change back to fair class, we will attach the sched
> +      * avgs when we are enqueued, which will be done only once. We
> +      * won't have the chance to consistently age the avgs before
> +      * attaching them, so we have to continue with the last updated
> +      * sched avgs when we were detached.
> +      */

This comment needs improvement; it confuses.

> @@ -8444,6 +8434,11 @@ static void task_move_group_fair(struct task_struct *p)
>       detach_task_cfs_rq(p);
>       set_task_rq(p, task_cpu(p));
>       attach_task_cfs_rq(p);
> +     /*
> +      * This assures we will attach the sched avgs when we are enqueued,

"ensures" ? Also, more confusion.

> +      * which will be done only once.
> +      */
> +     reset_task_last_update_time(p);
>  }
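
If I read the result right, a group move now goes:

	if (queued)
		dequeue_task();
	task_move_group_fair()
		detach_task_cfs_rq();		/* detaches the avgs */
		set_task_rq();
		attach_task_cfs_rq();		/* no avg attach anymore */
		reset_task_last_update_time();
	if (queued)
		enqueue_task();			/* does the one attach */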

