On Fri, Jan 23, 2015 at 10:13:53AM +0100, Peter Zijlstra wrote:
> 
> I picked up the patch; will drop it if Ingo also does ;-)
> 
> On Thu, Jan 22, 2015 at 06:08:04PM +0100, Frederic Weisbecker wrote:
> > +++ b/kernel/sched/core.c
> > @@ -2877,6 +2877,21 @@ void __sched schedule_preempt_disabled(void)
> >     preempt_disable();
> >  }
> >  
> > +static void preempt_schedule_common(void)
> > +{
> > +   do {
> > +           __preempt_count_add(PREEMPT_ACTIVE);
> > +           __schedule();
> > +           __preempt_count_sub(PREEMPT_ACTIVE);
> > +
> > +           /*
> > +            * Check again in case we missed a preemption opportunity
> > +            * between schedule and now.
> > +            */
> > +           barrier();
> 
> I do however wonder about this barrier() here; why do we think we need
> it?
> 
> Is that because test_bit() is 'broken'? The bitops are typically atomic
> ops and atomic reads should be through a volatile cast (x86
> constant_test_bit doesn't seem to do this).
> 
> Should we go audit and fix that?

I looked it up with git blame and this was already there prior to the
first git commit (v2.6.12), without any explanation.

We must make sure that the PREEMPT_ACTIVE decrement is visible before we
do the NEED_RESCHED test, or an interrupt could spuriously miss a
preempt_schedule_irq() opportunity.
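
To illustrate the window (a plain userspace sketch, not the kernel code;
all identifiers below are made up for the example), the flag test must
not be hoisted above the store that drops PREEMPT_ACTIVE:

#define barrier()	__asm__ __volatile__("" ::: "memory")
#define NEED_RESCHED	0x1UL

static int count;		/* stands in for the preempt count */
static unsigned long flags;	/* stands in for the thread info flags */

static void dummy_schedule(void) { }

static void common(void)
{
	do {
		count += 1;		/* __preempt_count_add(PREEMPT_ACTIVE) */
		dummy_schedule();	/* __schedule() */
		count -= 1;		/* __preempt_count_sub(PREEMPT_ACTIVE) */

		/*
		 * Without this, the compiler may test "flags" before
		 * emitting the store that drops PREEMPT_ACTIVE. An IRQ
		 * landing in that window sees a non-zero count, skips
		 * preempt_schedule_irq(), and the NEED_RESCHED it just
		 * set is missed because the test already ran.
		 */
		barrier();
	} while (flags & NEED_RESCHED);
}

int main(void)
{
	common();
	return 0;
}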

__preempt_count_sub() in asm-generic is an inline, so an implicit
barrier(). Only x86 overrides it so far, and it does so through an inline
as well.
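
For reference, the asm-generic version looks about like this (quoting
include/asm-generic/preempt.h from memory, so double-check the tree):

static __always_inline volatile int *preempt_count_ptr(void)
{
	return &current_thread_info()->preempt_count;
}

static __always_inline void __preempt_count_sub(int val)
{
	*preempt_count_ptr() -= val;
}

and the x86 override (arch/x86/include/asm/preempt.h) goes through the
per-cpu counter, also as an inline:

static __always_inline void __preempt_count_sub(int val)
{
	raw_cpu_add_4(__preempt_count, -val);
}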

And the __preempt_count_*() ops must really imply a barrier() anyway;
anything else would be insane (we should probably state that in a comment
somewhere). Although I see that this responsibility is actually taken by
the non-underscored callers...
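
i.e. the barrier() appears to live in the non-underscored paths, roughly
what include/linux/preempt.h has (again quoted from memory):

#define preempt_disable() \
do { \
	preempt_count_inc(); \
	barrier(); \
} while (0)

#define preempt_enable() \
do { \
	barrier(); \
	if (unlikely(preempt_count_dec_and_test())) \
		__preempt_schedule(); \
} while (0)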