On Tue, Sep 29, 2015 at 11:07:34AM -0400, Steven Rostedt wrote:
> On Tue, 29 Sep 2015 11:28:28 +0200
> Peter Zijlstra <pet...@infradead.org> wrote:

> > --- a/kernel/sched/core.c
> > +++ b/kernel/sched/core.c
> > @@ -2960,8 +2960,10 @@ static inline void schedule_debug(struct
> >      * schedule() atomically, we ignore that path. Otherwise whine
> >      * if we are scheduling when we should not.
> >      */
> > -   if (unlikely(in_atomic_preempt_off() && prev->state != TASK_DEAD))
> > +   if (unlikely(in_atomic_preempt_off() && prev->state != TASK_DEAD)) {
> >             __schedule_bug(prev);
> > +           preempt_count_set(PREEMPT_DISABLED);
> > +   }
> 
> Of course, if this was not a preemption leak but something that called
> schedule() within a preempt_disable()/preempt_enable() section, then when
> it returns, preemption will be enabled, right?

Indeed. But it ensures that only the task that incorrectly called
schedule() gets screwed, and not everybody else.
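
As a rough userspace sketch of what that means for the buggy task (the
preempt_* helpers, the zero-count expectation, and the printfs below are
illustrative stand-ins, not the kernel's actual machinery or invariants):

#include <stdio.h>

static int preempt_count;	/* stub for this task's count */

static void preempt_disable(void) { preempt_count++; }
static void preempt_enable(void)  { preempt_count--; }

static void schedule(void)
{
	/* crude analogue of in_atomic_preempt_off(); the real check differs */
	if (preempt_count != 0) {
		printf("BUG: scheduling while atomic (count=%d)\n",
		       preempt_count);
		preempt_count = 0;	/* forced reset, as in the patch */
	}
	/* a context switch would happen here */
}

int main(void)
{
	preempt_disable();	/* buggy caller holds a disable level */
	schedule();		/* warns, then forces a sane count */
	/* back here preemption is effectively enabled again, as Steven
	 * notes above... */
	preempt_enable();	/* ...and this underflows to -1 */
	printf("buggy task's final count: %d\n", preempt_count);
	return 0;
}

Run it and the final count comes out -1: the imbalance stays with the
task that misbehaved.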

This is most important on x86, which (after this series) has a per-CPU
preempt_count that is not saved/restored across a context switch. So if
you schedule with an invalid (i.e. != 2*PREEMPT_DISABLE_OFFSET)
preempt_count, the next task is messed up too.

Enforcing this invariant limits the borkage to just the one task.
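
A second sketch for the per-CPU point, with one shared counter standing
in for the per-CPU count that nothing saves or restores at switch time
(again, every name here is made up for illustration):

#include <stdio.h>

static int cpu_preempt_count;	/* models the per-CPU count */

static void switch_away(int fixup)
{
	if (cpu_preempt_count != 0) {
		printf("BUG: scheduling while atomic (count=%d)\n",
		       cpu_preempt_count);
		if (fixup)
			cpu_preempt_count = 0;	/* the patch's reset */
	}
	/* whatever runs next inherits the counter as-is */
	printf("next task starts with count=%d (%s)\n",
	       cpu_preempt_count,
	       fixup ? "confined to the buggy task"
		     : "leaked to an innocent task");
}

int main(void)
{
	cpu_preempt_count = 1;	/* buggy task schedules with a level held */
	switch_away(0);		/* without the reset: next task inherits 1 */

	cpu_preempt_count = 1;
	switch_away(1);		/* with the reset: next task starts clean */
	return 0;
}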