I see this was just applied to Linus's tree. It probably should be
tagged for stable as well.

-- Steve


On Tue, 6 Feb 2018 03:54:16 -0800
"tip-bot for Steven Rostedt (VMware)" <[email protected]> wrote:

> Commit-ID:  ad0f1d9d65938aec72a698116cd73a980916895e
> Gitweb:     https://git.kernel.org/tip/ad0f1d9d65938aec72a698116cd73a980916895e
> Author:     Steven Rostedt (VMware) <[email protected]>
> AuthorDate: Tue, 23 Jan 2018 20:45:37 -0500
> Committer:  Ingo Molnar <[email protected]>
> CommitDate: Tue, 6 Feb 2018 10:20:33 +0100
> 
> sched/rt: Use container_of() to get root domain in rto_push_irq_work_func()
> 
> When the rto_push_irq_work_func() is called, it looks at the RT overloaded
> bitmask in the root domain via the runqueue (rq->rd). The problem is that
> during CPU up and down, nothing here stops rq->rd from changing between
> taking the rq->rd->rto_lock and releasing it. That means the lock that is
> released is not the same lock that was taken.
> 
> Instead of using this_rq()->rd to get the root domain, as the irq work is
> part of the root domain, we can simply get the root domain from the irq work
> that is passed to the routine:
> 
>  container_of(work, struct root_domain, rto_push_work)
> 
> This keeps the root domain consistent.
> 
> Reported-by: Pavan Kondeti <[email protected]>
> Signed-off-by: Steven Rostedt (VMware) <[email protected]>
> Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
> Cc: Andrew Morton <[email protected]>
> Cc: Linus Torvalds <[email protected]>
> Cc: Mike Galbraith <[email protected]>
> Cc: Peter Zijlstra <[email protected]>
> Cc: Thomas Gleixner <[email protected]>
> Fixes: 4bdced5c9a292 ("sched/rt: Simplify the IPI based RT balancing logic")
> Link:       http://lkml.kernel.org/r/CAEU1=pkiho35dzna8eqqnskw1fr1y1zrq5y66x117mg06sq...@mail.gmail.com
> Signed-off-by: Ingo Molnar <[email protected]>
> ---
>  kernel/sched/rt.c | 15 ++++++++-------
>  1 file changed, 8 insertions(+), 7 deletions(-)
> 
> diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
> index 862a513..2fb627d 100644
> --- a/kernel/sched/rt.c
> +++ b/kernel/sched/rt.c
> @@ -1907,9 +1907,8 @@ static void push_rt_tasks(struct rq *rq)
>   * the rt_loop_next will cause the iterator to perform another scan.
>   *
>   */
> -static int rto_next_cpu(struct rq *rq)
> +static int rto_next_cpu(struct root_domain *rd)
>  {
> -     struct root_domain *rd = rq->rd;
>       int next;
>       int cpu;
>  
> @@ -1985,7 +1984,7 @@ static void tell_cpu_to_push(struct rq *rq)
>        * Otherwise it is finishing up and an ipi needs to be sent.
>        */
>       if (rq->rd->rto_cpu < 0)
> -             cpu = rto_next_cpu(rq);
> +             cpu = rto_next_cpu(rq->rd);
>  
>       raw_spin_unlock(&rq->rd->rto_lock);
>  
> @@ -1998,6 +1997,8 @@ static void tell_cpu_to_push(struct rq *rq)
>  /* Called from hardirq context */
>  void rto_push_irq_work_func(struct irq_work *work)
>  {
> +     struct root_domain *rd =
> +             container_of(work, struct root_domain, rto_push_work);
>       struct rq *rq;
>       int cpu;
>  
> @@ -2013,18 +2014,18 @@ void rto_push_irq_work_func(struct irq_work *work)
>               raw_spin_unlock(&rq->lock);
>       }
>  
> -     raw_spin_lock(&rq->rd->rto_lock);
> +     raw_spin_lock(&rd->rto_lock);
>  
>       /* Pass the IPI to the next rt overloaded queue */
> -     cpu = rto_next_cpu(rq);
> +     cpu = rto_next_cpu(rd);
>  
> -     raw_spin_unlock(&rq->rd->rto_lock);
> +     raw_spin_unlock(&rd->rto_lock);
>  
>       if (cpu < 0)
>               return;
>  
>       /* Try the next RT overloaded CPU */
> -     irq_work_queue_on(&rq->rd->rto_push_work, cpu);
> +     irq_work_queue_on(&rd->rto_push_work, cpu);
>  }
>  #endif /* HAVE_RT_PUSH_IPI */
>  
