On Tue, 29 Sep 2015 11:28:28 +0200 Peter Zijlstra <pet...@infradead.org> wrote:
> When we warn about a preempt_count leak; reset the preempt_count to
> the known good value such that the problem does not ripple forward.
> 
> Signed-off-by: Peter Zijlstra (Intel) <pet...@infradead.org>
> ---
>  kernel/exit.c       | 4 +++-
>  kernel/sched/core.c | 4 +++-
>  2 files changed, 6 insertions(+), 2 deletions(-)
> 
> --- a/kernel/exit.c
> +++ b/kernel/exit.c
> @@ -706,10 +706,12 @@ void do_exit(long code)
>  	smp_mb();
>  	raw_spin_unlock_wait(&tsk->pi_lock);
> 
> -	if (unlikely(in_atomic()))
> +	if (unlikely(in_atomic())) {
>  		pr_info("note: %s[%d] exited with preempt_count %d\n",
>  			current->comm, task_pid_nr(current),
>  			preempt_count());
> +		preempt_count_set(PREEMPT_ENABLED);
> +	}

Looks good.

> 
>  	/* sync mm's RSS info before statistics gathering */
>  	if (tsk->mm)
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -2960,8 +2960,10 @@ static inline void schedule_debug(struct
>  	 * schedule() atomically, we ignore that path. Otherwise whine
>  	 * if we are scheduling when we should not.
>  	 */
> -	if (unlikely(in_atomic_preempt_off() && prev->state != TASK_DEAD))
> +	if (unlikely(in_atomic_preempt_off() && prev->state != TASK_DEAD)) {
>  		__schedule_bug(prev);
> +		preempt_count_set(PREEMPT_DISABLED);
> +	}

Of course, if this was not a preemption leak, but something that called
schedule() from within a preempt_disable()/preempt_enable() section, then
when schedule() returns, preemption will be enabled, right?

-- Steve

>  	rcu_sleep_check();
> 
>  	profile_hit(SCHED_PROFILING, __builtin_return_address(0));
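
To make the leak case in the first hunk concrete, here is a hypothetical
sketch (function name and error condition invented for illustration, not
taken from the patch) of how a task can reach do_exit() with a non-zero
preempt_count:

#include <linux/errno.h>
#include <linux/preempt.h>

/* Hypothetical: an error path that forgets to re-enable preemption. */
static int leaky_thread_fn(void *data)
{
	preempt_disable();		/* preempt_count: 0 -> 1 */

	if (!data)			/* assumed error condition */
		return -EINVAL;		/* leak: no preempt_enable() */

	preempt_enable();		/* preempt_count: 1 -> 0 */
	return 0;
}

When such a task exits, do_exit() prints the "exited with preempt_count 1"
note, and with the hunk above it also resets the count to PREEMPT_ENABLED,
so the remainder of do_exit(), which may block, is no longer mistaken for
atomic context.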
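
As for the schedule()-inside-a-critical-section case raised above: tracing
the count through that path (assuming __schedule() takes its own
preempt_count reference before schedule_debug() runs, as it does in this
kernel; the buggy caller below is invented for illustration) suggests the
answer is yes:

#include <linux/preempt.h>
#include <linux/sched.h>

/* Hypothetical buggy caller, not from the patch. */
static void buggy_section(void)
{
	preempt_disable();	/* preempt_count: 0 -> 1 */

	/*
	 * __schedule() takes its own reference (1 -> 2), so
	 * schedule_debug() sees in_atomic_preempt_off(), calls
	 * __schedule_bug() and, with the hunk above, resets the
	 * count to PREEMPT_DISABLED (1); __schedule() then drops
	 * that last reference (1 -> 0) before returning.
	 */
	schedule();

	/*
	 * We resume here with preempt_count == 0, i.e. preemption
	 * already enabled inside the "critical section", and this
	 * preempt_enable() underflows the count, which
	 * CONFIG_DEBUG_PREEMPT warns about in its own right.
	 */
	preempt_enable();
}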