On Fri, Jul 03, 2020 at 10:30:12AM +0200, Peter Zijlstra wrote:
> On Thu, Jul 02, 2020 at 07:39:16PM +0100, Valentin Schneider wrote:
>
> > > @@ -3134,8 +3274,12 @@ static inline void prepare_task(struct task_struct *next)
> > >  	/*
> > >  	 * Claim the task as running, we do this before switching to it
> > >  	 * such that any running task will have this set.
> > > +	 *
> > > +	 * __schedule()'s rq->lock and smp_mb__after_spin_lock() orders this
> > > +	 * store against prior state change of @next, also see
> > > +	 * try_to_wake_up(), specifically smp_load_acquire(&p->on_cpu).
> >
> > smp_*cond*_load_acquire(&p->on_cpu, <blah>)
>
> Both, but yeah.. arguably the cond one is the more important one.
Ah no, this one really wants to match the WF_ON_CPU case. I'll clarify nonetheless.

