On Wed, Sep 04, 2019 at 01:28:19PM +0200, Peter Zijlstra wrote:
> @@ -196,6 +198,17 @@ static int membarrier_register_global_expedited(void)
>                */
>               smp_mb();
>       } else {
> +             struct task_struct *g, *t;
> +
> +             read_lock(&tasklist_lock);
> +             do_each_thread(g, t) {
> +                     if (t->mm == mm) {
> +                             atomic_or(MEMBARRIER_STATE_GLOBAL_EXPEDITED,
> +                                       &t->membarrier_state);
> +                     }
> +             } while_each_thread(g, t);
> +             read_unlock(&tasklist_lock);
> +
>               /*
>                * For multi-mm user threads, we need to ensure all
>                * future scheduler executions will observe the new

Arguably, because this is exposed to unprivileged users, and because
walking every thread in the system under tasklist_lock is a potential
preemption-latency issue, we could do it in 3 passes:

        - RCU walk, mark all threads found lacking the flag, count them
        - RCU walk, mark all threads found lacking the flag, count them
        - if the last pass still found any, redo the walk under tasklist_lock

That way, it becomes much harder to trigger the bad case.

Do we worry about that?
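
Something like the below, perhaps. Completely untested sketch;
membarrier_mark_threads() is a helper name made up for illustration,
everything else reuses the identifiers from the patch above.

	static int membarrier_mark_threads(struct mm_struct *mm)
	{
		struct task_struct *g, *t;
		int missed = 0;

		/* RCU walk: cheap and preemptible, but can race with clone(). */
		rcu_read_lock();
		for_each_process_thread(g, t) {
			if (t->mm == mm &&
			    !(atomic_read(&t->membarrier_state) &
			      MEMBARRIER_STATE_GLOBAL_EXPEDITED)) {
				atomic_or(MEMBARRIER_STATE_GLOBAL_EXPEDITED,
					  &t->membarrier_state);
				missed++;
			}
		}
		rcu_read_unlock();

		return missed;
	}

and at the registration site:

	/* Passes 1 and 2: catch (nearly) everything without tasklist_lock. */
	membarrier_mark_threads(mm);
	if (membarrier_mark_threads(mm)) {
		struct task_struct *g, *t;

		/*
		 * Pass 3: the second RCU pass still found unmarked
		 * threads, so we are racing with thread creation; fall
		 * back to tasklist_lock for a stable walk.
		 */
		read_lock(&tasklist_lock);
		do_each_thread(g, t) {
			if (t->mm == mm)
				atomic_or(MEMBARRIER_STATE_GLOBAL_EXPEDITED,
					  &t->membarrier_state);
		} while_each_thread(g, t);
		read_unlock(&tasklist_lock);
	}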
