On 02/07/2020 14:52, Peter Zijlstra wrote:
> 
> Dave hit the problem fixed by commit:
> 
>   b6e13e85829f ("sched/core: Fix ttwu() race")
> 
> and failed to understand much of the code involved. Per his request a
> few comments to (hopefully) clarify things.
> 
> Requested-by: Dave Chinner <[email protected]>
> Signed-off-by: Peter Zijlstra (Intel) <[email protected]>

LGTM. Just a couple of nitpicks below.

[...]

> + * Special state:
> + *
> + * System-calls and anything external will use task_rq_lock() which acquires
> + * both p->lock and rq->lock. As a consequence the state they change is stable

s/p->lock/p->pi_lock ?
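
(For reference, roughly what task_rq_lock() looks like in
kernel/sched/core.c -- trimmed and quoted from memory, so treat it as a
sketch rather than the exact code:)

    struct rq *task_rq_lock(struct task_struct *p, struct rq_flags *rf)
    {
            struct rq *rq;

            for (;;) {
                    /* outer lock: p->pi_lock, not p->lock */
                    raw_spin_lock_irqsave(&p->pi_lock, rf->flags);
                    rq = task_rq(p);
                    /* inner lock: the rq the task is currently on */
                    raw_spin_lock(&rq->lock);
                    if (likely(rq == task_rq(p) && !task_on_rq_migrating(p)))
                            return rq;
                    raw_spin_unlock(&rq->lock);
                    raw_spin_unlock_irqrestore(&p->pi_lock, rf->flags);

                    while (unlikely(task_on_rq_migrating(p)))
                            cpu_relax();
            }
    }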

> + * while holding either lock:
> + *
> + *  - sched_setaffinity():   p->cpus_ptr
> + *  - set_user_nice():               p->se.load, p->static_prio

Doesn't set_user_nice() also change p->prio and p->normal_prio, so
p->*prio ?
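
FWIW, the path I have in mind, paraphrased from set_user_nice() (so
modulo my memory):

    p->static_prio = NICE_TO_PRIO(nice);
    set_load_weight(p, true);           /* p->se.load */
    old_prio = p->prio;
    p->prio = effective_prio(p);        /* also recomputes p->normal_prio */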

> + *  - __sched_setscheduler():        p->sched_class, p->policy, p->*prio, p->se.load,
> + *                           p->dl.dl_{runtime, deadline, period, flags, bw, density}

p->rt_priority ?

> + *  - sched_setnuma():               p->numa_preferred_nid
> + *  - sched_move_task()/
> + *    cpu_cgroup_fork():     p->sched_task_group

maybe also list set_cpus_allowed_ptr() -> __set_cpus_allowed_ptr()
(which changes p->cpus_ptr, same path as sched_setaffinity())?
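
I.e. __set_cpus_allowed_ptr() also runs under task_rq_lock() and ends up
rewriting p->cpus_mask (which is what p->cpus_ptr normally points at);
roughly, again from memory:

    rq = task_rq_lock(p, &rf);
    ...
    do_set_cpus_allowed(p, new_mask);   /* cpumask_copy(&p->cpus_mask, new_mask) */
    ...
    task_rq_unlock(rq, p, &rf);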

[...]

> @@ -3134,8 +3274,12 @@ static inline void prepare_task(struct task_struct *next)
>       /*
>        * Claim the task as running, we do this before switching to it
>        * such that any running task will have this set.
> +      *
> +      * __schedule()'s rq->lock and smp_mb__after_spin_lock() orders this
> +      * store against prior state change of @next, also see
> +      * try_to_wake_up(), specifically smp_load_acquire(&p->on_cpu).

s/smp_load_acquire/smp_cond_load_acquire ?
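
I.e. the wait in try_to_wake_up() is the conditional flavour (quoting
from memory):

    /*
     * Wait until the CPU the task last ran on has finished with it,
     * i.e. until finish_task() has done smp_store_release(&p->on_cpu, 0).
     */
    smp_cond_load_acquire(&p->on_cpu, !VAL);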

[...]
