On Tue, Nov 17, 2020 at 10:46:21AM +0100, Peter Zijlstra wrote:
> On Tue, Nov 17, 2020 at 10:29:36AM +0100, Peter Zijlstra wrote:
> > On Tue, Nov 17, 2020 at 09:15:46AM +0000, Will Deacon wrote:
> > > On Tue, Nov 17, 2020 at 09:30:16AM +0100, Peter Zijlstra wrote:
> > > >         /* Unserialized, strictly 'current' */
> > > >  
> > > > +       /*
> > > > +        * p->in_iowait = 1;            ttwu()
> > > > +        * schedule()                     if (p->on_rq && ..) // false
> > > > +        *   smp_mb__after_spinlock();    if (smp_load_acquire(&p->on_cpu) && //true
> > > > +        *   deactivate_task()                ttwu_queue_wakelist())
> > > > +        *     p->on_rq = 0;                    p->sched_remote_wakeup = X;
> > > > +        *
> > > > +        * Guarantees all stores of 'current' are visible before
> > > > +        * ->sched_remote_wakeup gets used.
> > > 
> > > I'm still not sure this is particularly clear -- don't we want to
> > > highlight that the store of p->on_rq is unordered wrt the update to
> > > p->sched_contributes_to_load in deactivate_task()?
> 
> How's this then? It still doesn't explicitly call out the specific race,
> but does mention the more fundamental issue that wakelist queueing
> doesn't respect the regular rules anymore.
> 
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -775,7 +775,6 @@ struct task_struct {
>       unsigned                        sched_reset_on_fork:1;
>       unsigned                        sched_contributes_to_load:1;
>       unsigned                        sched_migrated:1;
> -     unsigned                        sched_remote_wakeup:1;
>  #ifdef CONFIG_PSI
>       unsigned                        sched_psi_wake_requeue:1;
>  #endif
> @@ -785,6 +784,21 @@ struct task_struct {
>  
>       /* Unserialized, strictly 'current' */
>  
> +     /*
> +      * This field must not be in the scheduler word above due to wakelist
> +      * queueing no longer being serialized by p->on_cpu. However:
> +      *
> +      * p->XXX = X;                  ttwu()
> +      * schedule()                     if (p->on_rq && ..) // false
> +      *   smp_mb__after_spinlock();    if (smp_load_acquire(&p->on_cpu) && //true
> +      *   deactivate_task()                ttwu_queue_wakelist())
> +      *     p->on_rq = 0;                    p->sched_remote_wakeup = Y;
> +      *
> +      * guarantees all stores of 'current' are visible before
> +      * ->sched_remote_wakeup gets used, so it can be in this word.
> +      */
> +     unsigned                        sched_remote_wakeup:1;

Much better, thanks!

Will
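
[Editor's note: the race under discussion exists because adjacent one-bit
fields share a machine word, so every store to one of them is a
read-modify-write of the whole word. Below is a minimal userspace sketch of
that hazard, assuming only plain C and pthreads; the struct and function
names (fake_task, remote_ttwu) are made up for illustration and are not
kernel identifiers. It will not reliably reproduce the lost update, it only
shows the shape of the problem. Per the quoted comment, the
smp_load_acquire(&p->on_cpu) in ttwu() guarantees all stores of 'current'
are visible before ->sched_remote_wakeup gets used, which is why that bit
can stay in the same word.]

#include <pthread.h>
#include <stdio.h>

/* Illustrative stand-in for the scheduler bitfield word; not the real layout. */
struct fake_task {
	unsigned contributes_to_load:1;	/* updated by "current" in the schedule() path */
	unsigned remote_wakeup:1;	/* updated by a remote ttwu() */
};

static struct fake_task t;

/* Stand-in for the remote CPU doing ttwu_queue_wakelist(). */
static void *remote_ttwu(void *arg)
{
	(void)arg;
	/*
	 * This single assignment compiles to load-word, set-bit, store-word.
	 * If it interleaves with the store in main(), one of the two bit
	 * updates can be lost -- the clobber the thread is worried about.
	 */
	t.remote_wakeup = 1;
	return NULL;
}

int main(void)
{
	pthread_t thr;

	pthread_create(&thr, NULL, remote_ttwu, NULL);

	/* "current" updating its own bit in the same word, unserialized. */
	t.contributes_to_load = 1;

	pthread_join(thr, NULL);
	printf("contributes_to_load=%u remote_wakeup=%u\n",
	       t.contributes_to_load, t.remote_wakeup);
	return 0;
}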
