On Sun, Jul 29, 2018 at 03:54:52PM -0400, Rik van Riel wrote:
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index c45de46fdf10..11724c9e88b0 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -2691,7 +2691,7 @@ static struct rq *finish_task_switch(struct task_struct *prev)
>        */
>       if (mm) {
>               membarrier_mm_sync_core_before_usermode(mm);
> -             mmdrop(mm);
> +             drop_lazy_mm(mm);
>       }
>       if (unlikely(prev_state == TASK_DEAD)) {
>               if (prev->sched_class->task_dead)
> @@ -2805,7 +2805,7 @@ context_switch(struct rq *rq, struct task_struct *prev,
>        */
>       if (!mm) {
>               next->active_mm = oldmm;
> -             mmgrab(oldmm);
> +             grab_lazy_mm(oldmm);
>               enter_lazy_tlb(oldmm, next);
>       } else
>               switch_mm_irqs_off(oldmm, mm, next);

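(The definitions of the new helpers are not in the quoted context; presumably
they are thin wrappers along these lines -- a hypothetical sketch, including
the made-up CONFIG_ARCH_NO_LAZY_MM_REFCOUNT symbol, not the actual code from
the series:

/*
 * Hypothetical sketch only; the real definitions live elsewhere in the
 * series.  On architectures that still refcount the lazy mm these are
 * just mmgrab()/mmdrop(); architectures that instead shoot down lazy
 * users at mm teardown could compile the refcounting out.
 */
static inline void grab_lazy_mm(struct mm_struct *mm)
{
#ifndef CONFIG_ARCH_NO_LAZY_MM_REFCOUNT	/* hypothetical symbol */
	mmgrab(mm);
#endif
}

static inline void drop_lazy_mm(struct mm_struct *mm)
{
#ifndef CONFIG_ARCH_NO_LAZY_MM_REFCOUNT
	mmdrop(mm);
#endif
}
)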
What happened to the rework I did there? That not only avoided fiddling
with active_mm, but also avoided the grab/drop cycles for the other
architectures when doing task->kthread->kthread->task transitions.
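To recap the idea (a rough illustration of the approach, not the actual
patch): only touch the refcount when the borrowed mm actually changes
hands, so consecutive kernel threads inherit the reference instead of
cycling it on every switch:

/*
 * Rough sketch, not the actual rework: hand the lazy reference over
 * between kernel threads instead of dropping and re-taking it.
 */
static void context_switch_mm(struct rq *rq, struct task_struct *prev,
			      struct task_struct *next)
{
	struct mm_struct *oldmm = prev->active_mm;
	struct mm_struct *mm = next->mm;

	if (!mm) {
		/* kernel thread: keep running on prev's mm */
		next->active_mm = oldmm;
		enter_lazy_tlb(oldmm, next);
		if (prev->mm) {
			/* prev keeps its own reference; pin one for next */
			mmgrab(oldmm);
		} else {
			/* kthread->kthread: hand the reference over, no ops */
			prev->active_mm = NULL;
		}
	} else {
		switch_mm_irqs_off(oldmm, mm, next);
		if (!prev->mm) {
			/* prev's lazy borrow ends; defer the drop */
			prev->active_mm = NULL;
			rq->prev_mm = oldmm;	/* dropped in finish_task_switch() */
		}
	}
}

That way a task->kthread->kthread->task chain costs one grab/drop pair
total, rather than one per switch.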

I agree with Andy that if you avoid the refcount fiddling, then you
should also not muck with active_mm.

That is, if you keep active_mm for now (which seems a reasonable first
step), then at least ensure you keep ->mm == ->active_mm at all times.
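Something like the below (a hypothetical debug assertion, not existing
kernel code) should then hold wherever a task can be observed:

/*
 * Hypothetical debug check: kernel threads run with ->mm == NULL;
 * everything else must keep the two pointers in sync if the lazy
 * refcounting is optimized away.
 */
static inline void assert_mm_active_mm(struct task_struct *p)
{
	WARN_ON_ONCE(p->mm && p->mm != p->active_mm);
}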
