Hi Wanpeng,

On Tue, Aug 25, 2015 at 03:59:54PM +0800, Wanpeng Li wrote:
> [   15.273708] ------------[ cut here ]------------
> [   15.274097] WARNING: CPU: 0 PID: 13 at kernel/sched/core.c:1156 do_set_cpus_allowed+0x7e/0x80()
> [   15.274857] Modules linked in:
> [   15.275101] CPU: 0 PID: 13 Comm: migration/0 Not tainted 4.2.0-rc1-00049-g25834c7 #2
> [   15.275674]  00000000 00000000 d21f1d24 c19228b2 00000000 d21f1d58 c1056a3b c1ba00e4
> [   15.276084]  00000000 0000000d c1ba17d8 00000484 c10838be 00000484 c10838be d21e5000
> [   15.276084]  d2121900 d21e5158 d21f1d68 c1056b12 00000009 00000000 d21f1d7c c10838be
> [   15.276084] Call Trace:
> [   15.276084]  [<c19228b2>] dump_stack+0x4b/0x75
> [   15.276084]  [<c1056a3b>] warn_slowpath_common+0x8b/0xc0
> [   15.276084]  [<c10838be>] ? do_set_cpus_allowed+0x7e/0x80
> [   15.276084]  [<c10838be>] ? do_set_cpus_allowed+0x7e/0x80
> [   15.276084]  [<c1056b12>] warn_slowpath_null+0x22/0x30
> [   15.276084]  [<c10838be>] do_set_cpus_allowed+0x7e/0x80
> [   15.276084]  [<c110154c>] cpuset_cpus_allowed_fallback+0x7c/0x170
> [   15.276084]  [<c11014d0>] ? cpuset_cpus_allowed+0x180/0x180
> [   15.276084]  [<c1083ae1>] select_fallback_rq+0x221/0x280
> [   15.276084]  [<c1085073>] migration_call+0xe3/0x250
> [   15.276084]  [<c1079e23>] notifier_call_chain+0x53/0x70
> [   15.276084]  [<c1079e5e>] __raw_notifier_call_chain+0x1e/0x30
> [   15.276084]  [<c1056cc8>] cpu_notify+0x28/0x50
> [   15.276084]  [<c191e4d2>] take_cpu_down+0x22/0x40
> [   15.276084]  [<c1102895>] multi_cpu_stop+0xd5/0x140
> [   15.276084]  [<c11027c0>] ? __stop_cpus+0x80/0x80
> [   15.276084]  [<c11025cc>] cpu_stopper_thread+0xbc/0x170
> [   15.276084]  [<c1085ec9>] ? preempt_count_sub+0x9/0x50
> [   15.276084]  [<c192b6a7>] ? _raw_spin_unlock_irq+0x37/0x50
> [   15.276084]  [<c192b655>] ? _raw_spin_unlock_irqrestore+0x55/0x70
> [   15.276084]  [<c10a9074>] ? trace_hardirqs_on_caller+0x144/0x1e0
> [   15.276084]  [<c11024a5>] ? cpu_stop_should_run+0x35/0x40
> [   15.276084]  [<c1085ec9>] ? preempt_count_sub+0x9/0x50
> [   15.276084]  [<c192b641>] ? _raw_spin_unlock_irqrestore+0x41/0x70
> [   15.276084]  [<c107c944>] smpboot_thread_fn+0x174/0x2f0
> [   15.276084]  [<c107c7d0>] ? sort_range+0x30/0x30
> [   15.276084]  [<c1078934>] kthread+0xc4/0xe0
> [   15.276084]  [<c192c041>] ret_from_kernel_thread+0x21/0x30
> [   15.276084]  [<c1078870>] ? kthread_create_on_node+0x180/0x180
> [   15.276084] ---[ end trace 15f4c86d404693b0 ]---
> 
> After commit 25834c73f93 ("sched: Fix a race between __kthread_bind()
> and sched_setaffinity()"), do_set_cpus_allowed() should be called with
> p->pi_lock held; however, that is currently not the case on the cpuset
> path.
> 
> This patch fixes it by holding p->pi_lock on the cpuset path.
> 
> Signed-off-by: Wanpeng Li <wanpeng...@hotmail.com>
> ---
>  kernel/cpuset.c |    4 ++++
>  1 files changed, 4 insertions(+), 0 deletions(-)
> 
> diff --git a/kernel/cpuset.c b/kernel/cpuset.c
> index e414ae9..605ed66 100644
> --- a/kernel/cpuset.c
> +++ b/kernel/cpuset.c
> @@ -2376,8 +2376,12 @@ void cpuset_cpus_allowed(struct task_struct *tsk, struct cpumask *pmask)
>  
>  void cpuset_cpus_allowed_fallback(struct task_struct *tsk)
>  {
> +     unsigned long flags;
> +
>       rcu_read_lock();
> +     raw_spin_lock_irqsave(&tsk->pi_lock, flags);
>       do_set_cpus_allowed(tsk, task_cs(tsk)->effective_cpus);
> +     raw_spin_lock_irqsave(&tsk->pi_lock, flags);

Just curious: won't acquiring the lock twice like this introduce a deadlock? ;)
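
I guess the second call was meant to be the unlock. A sketch of what
the balanced pairing would presumably look like (untested, just to
illustrate the point; identifiers taken from the quoted hunk):

	unsigned long flags;

	rcu_read_lock();
	/* take pi_lock, as do_set_cpus_allowed() now asserts
	 * after commit 25834c73f93 */
	raw_spin_lock_irqsave(&tsk->pi_lock, flags);
	do_set_cpus_allowed(tsk, task_cs(tsk)->effective_cpus);
	/* unlock, not a second lock: raw spinlocks are not
	 * recursive, so re-acquiring on the same CPU deadlocks */
	raw_spin_unlock_irqrestore(&tsk->pi_lock, flags);
	rcu_read_unlock();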

Thanks,
Leo Yan

>       rcu_read_unlock();
>  
>       /*
> -- 
> 1.7.1