On Fri, Jul 28, 2017 at 03:38:15PM +0000, Mathieu Desnoyers wrote:
> > Which only leaves PPC stranded.. but the 'good' news is that mpe says
> > they'll probably need a barrier in switch_mm() in any case.
> 
> As I pointed out in my other email, I plan to do this:
> 
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -2636,6 +2636,11 @@ static struct rq *finish_task_switch(struct task_struct *prev)
>         vtime_task_switch(prev);
>         perf_event_task_sched_in(prev, current);

Putting it here would place it _inside_ the rq->lock, which seems to
make more sense given the purpose of the barrier, but either way works
given its definition.
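
For concreteness, a rough sketch of that alternative placement (hunk
offsets elided; same context as the patch quoted above, only with the
barrier moved up so it is issued before finish_lock_switch() releases
rq->lock -- that placement is my suggestion, not what the quoted patch
does):

--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ ... @@ static struct rq *finish_task_switch(struct task_struct *prev)
        vtime_task_switch(prev);
        perf_event_task_sched_in(prev, current);
+       /*
+        * The membarrier system call requires a full memory barrier
+        * after storing to rq->curr, before going back to user-space.
+        */
+       smp_mb__after_unlock_lock();
        finish_lock_switch(rq, prev);
        finish_arch_post_lock_switch();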

>         finish_lock_switch(rq, prev);

You could put the whole thing inside IS_ENABLED(CONFIG_MEMBARRIER) or
something.
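
Roughly something like the below; only a sketch, and it assumes
CONFIG_MEMBARRIER (the Kconfig symbol gating the membarrier() syscall)
is the right thing to key off of:

        if (IS_ENABLED(CONFIG_MEMBARRIER)) {
                /*
                 * The membarrier system call requires a full memory
                 * barrier after storing to rq->curr, before going
                 * back to user-space.
                 */
                smp_mb__after_unlock_lock();
        }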

> +       /*
> +        * The membarrier system call requires a full memory barrier
> +        * after storing to rq->curr, before going back to user-space.
> +        */
> +       smp_mb__after_unlock_lock();
>         finish_arch_post_lock_switch();
>  
>         fire_sched_in_preempt_notifiers(current);
