On Thu, Nov 29, 2018 at 01:26:34PM -0500, Waiman Long wrote:
> On 11/29/2018 01:08 PM, Peter Zijlstra wrote:
> > Hmm, I think we're missing a barrier in wake_q_add(); when cmpxchg()
> > fails we still need an smp_mb().
> >
> > Something like so.
> >
> > diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> > index 3d87a28da378..69def558edf6 100644
> > --- a/kernel/sched/core.c
> > +++ b/kernel/sched/core.c
> > @@ -400,6 +400,13 @@ void wake_q_add(struct wake_q_head *head, struct task_struct *task)
> >  {
> >  	struct wake_q_node *node = &task->wake_q;
> >  
> > +	/*
> > +	 * Ensure, that when the below cmpxchg() fails, the corresponding
> > +	 * wake_up_q() will observe our prior state.
> > +	 *
> > +	 * Pairs with the smp_mb() from wake_up_q()'s wake_up_process().
> > +	 */
> > +	smp_mb();
> >  	/*
> >  	 * Atomically grab the task, if ->wake_q is !nil already it means
> >  	 * its already queued (either by us or someone else) and will get the
> > @@ -408,7 +415,7 @@ void wake_q_add(struct wake_q_head *head, struct task_struct *task)
> >  	 * This cmpxchg() executes a full barrier, which pairs with the full
> >  	 * barrier executed by the wakeup in wake_up_q().
> >  	 */
> > -	if (cmpxchg(&node->next, NULL, WAKE_Q_TAIL))
> > +	if (cmpxchg_relaxed(&node->next, NULL, WAKE_Q_TAIL))
> >  		return;
> >  
> >  	get_task_struct(task);
>
> That can be costly for x86 which will now have 2 locked instructions.
> Should we introduce a kind of special cmpxchg (e.g. cmpxchg_mb) that
> will guarantee a memory barrier whether the operation fails or not?
I thought smp_mb__before_atomic() was designed for this sort of thing?

Will
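
For context, a minimal sketch of how that suggestion might read in
wake_q_add(), assuming smp_mb__before_atomic() is sufficient to order the
caller's prior stores even when the subsequent cmpxchg_relaxed() fails
(whether its documented semantics actually cover a relaxed cmpxchg is the
open question here):

	/*
	 * Ensure that, when the cmpxchg() below fails, the corresponding
	 * wake_up_q() will still observe our prior state.
	 *
	 * Pairs with the smp_mb() implied by wake_up_q()'s wake_up_process().
	 */
	smp_mb__before_atomic();
	if (cmpxchg_relaxed(&node->next, NULL, WAKE_Q_TAIL))
		return;

	get_task_struct(task);

On x86 smp_mb__before_atomic() compiles to a barrier() only, so this would
avoid the second serializing instruction Waiman is worried about, at the
cost of relying on the cmpxchg (even the relaxed flavour) being a LOCK'ed
instruction there anyway.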