On Wed, Aug 06, 2014 at 12:06:56PM +0400, Kirill Tkhai wrote:
> 
> Bad situation:
> 
> double_lock_balance() drops busiest_rq lock. The busiest_rq is *busiest*,
> and a lot of tasks and context switches there. We are dropping the lock
> and waiting for it again.
> 
> Let's just detach the task and once finally unlock it!

That wants rewording, much like the previous one I did.
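
For context, the split the changelog describes amounts to roughly the
following (only a sketch based on the existing move_task() pattern; the
helper names and the in-flight bookkeeping are made up here, this is not
the actual patch):

static void detach_task_sketch(struct task_struct *p, struct lb_env *env)
{
	/* Dequeue while still holding src_rq->lock; dst_rq->lock is not needed. */
	lockdep_assert_held(&env->src_rq->lock);

	deactivate_task(env->src_rq, p, 0);
	set_task_cpu(p, env->dst_cpu);
}

static void attach_task_sketch(struct rq *dst_rq, struct task_struct *p)
{
	/* Called only after src_rq->lock has been released. */
	raw_spin_lock(&dst_rq->lock);
	BUG_ON(task_rq(p) != dst_rq);
	activate_task(dst_rq, p, 0);
	check_preempt_curr(dst_rq, p, 0);
	raw_spin_unlock(&dst_rq->lock);
}

That way busiest_rq->lock is only held across the detach and gets dropped
once before dst_rq->lock is taken, instead of double_lock_balance()
potentially dropping and re-taking it.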

> 
> Warning: this allows unlocked use of can_migrate_task(), throttled_lb_pair(),
> and task_hot(). I've added lockdep asserts to point this out.

That doesn't make sense; see below.

> Signed-off-by: Kirill Tkhai <[email protected]>
> ---
>  kernel/sched/fair.c |   55 +++++++++++++++++++++++++++++++++++----------------
>  1 file changed, 38 insertions(+), 17 deletions(-)
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index d54b72c..cfeafb1 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -3297,6 +3297,8 @@ static inline int throttled_hierarchy(struct cfs_rq *cfs_rq)
>   * Ensure that neither of the group entities corresponding to src_cpu or
>   * dest_cpu are members of a throttled hierarchy when performing group
>   * load-balance operations.
> + *
> + * Note: RQs may be unlocked.
>   */
>  static inline int throttled_lb_pair(struct task_group *tg,
>                                   int src_cpu, int dest_cpu)

I'm not immediately seeing this; this function is only ever called from
can_migrate_task(); and there you assert that we must be holding src_rq.

And at this point src_rq is the only relevant rq, since that is the one
the task is still on.

So let me remove this comment.
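
For reference, the relevant part of can_migrate_task() then looks roughly
like this (a sketch only; the remaining checks are elided):

int can_migrate_task(struct task_struct *p, struct lb_env *env)
{
	/*
	 * p is still queued on env->src_rq, so src_rq->lock has to be held
	 * across all of the checks below, including throttled_lb_pair().
	 */
	lockdep_assert_held(&env->src_rq->lock);

	if (throttled_lb_pair(task_group(p), env->src_cpu, env->dst_cpu))
		return 0;

	/* ... cpumask, p->on_cpu and task_hot() checks elided ... */
	return 1;
}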
