On 06/03, Peter Zijlstra wrote:
>
> It now also has concurrency on wakeup; but afaict that's harmless, we'll
> get racing stores of p->state = TASK_RUNNING, much the same as if there
> was a remote wakeup vs a wait-loop terminating early.
>
> I suppose the tracepoint consumers might have to deal with some
> artifacts there, but that's their problem.

I guess you mean that trace_sched_waking/wakeup can be reported twice if
try_to_wake_up(current) races with ttwu_remote(), and that ttwu_stat() can be
accounted twice as well.
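
To make that concrete, here is a minimal userspace sketch (pthreads, invented
names, obviously not the real kernel paths) of the interleaving I have in mind:
both the self-wakeup and the remote path can observe the old state, each fires
the waking/wakeup pair once, and the stores of TASK_RUNNING just race.

	/* Userspace analogy only; the names and values are made up. */
	#include <pthread.h>
	#include <stdio.h>

	#define TASK_RUNNING		0
	#define TASK_INTERRUPTIBLE	1

	static volatile int task_state = TASK_INTERRUPTIBLE;

	static void fake_wakeup(const char *who)
	{
		/* mirrors "if (!(p->state & state)) goto out;" */
		if (!(task_state & TASK_INTERRUPTIBLE))
			return;			/* already running, nothing to do */

		printf("%s: trace_sched_waking\n", who);
		task_state = TASK_RUNNING;	/* racing store, same value */
		printf("%s: trace_sched_wakeup\n", who);
	}

	static void *remote(void *arg)
	{
		fake_wakeup("remote");		/* the ttwu_remote()-like side */
		return NULL;
	}

	int main(void)
	{
		pthread_t t;

		pthread_create(&t, NULL, remote, NULL);
		fake_wakeup("self");		/* the try_to_wake_up(current) side */
		pthread_join(t, NULL);
		return 0;
	}

With unlucky timing both pairs of tracepoints are printed, which I take to be
the "artifact" the tracepoint consumers would have to live with.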

> > --- a/kernel/sched/core.c
> > +++ b/kernel/sched/core.c
> > @@ -1990,6 +1990,28 @@ try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
> >     unsigned long flags;
> >     int cpu, success = 0;
> >  
> > +   if (p == current) {
> > +           /*
> > +            * We're waking current, this means 'p->on_rq' and 'task_cpu(p)
> > +            * == smp_processor_id()'. Together this means we can special
> > +            * case the whole 'p->on_rq && ttwu_remote()' case below
> > +            * without taking any locks.
> > +            *
> > +            * In particular:
> > +            *  - we rely on Program-Order guarantees for all the ordering,
> > +            *  - we're serialized against set_special_state() by virtue of
> > +            *    it disabling IRQs (this allows not taking ->pi_lock).
> > +            */
> > +           if (!(p->state & state))
> > +                   goto out;
> > +
> > +           success = 1;
> > +           trace_sched_waking(p);
> > +           p->state = TASK_RUNNING;
> > +           trace_sched_woken(p);
                ^^^^^^^^^^^^^^^^^
trace_sched_wakeup(p) ?

I see nothing wrong... but that is probably because I don't fully understand
this change. In particular, it is not clear to me who else can benefit from
this optimization...

Oleg.
