----- On Sep 4, 2019, at 7:49 AM, Peter Zijlstra wrote:

> On Wed, Sep 04, 2019 at 01:28:19PM +0200, Peter Zijlstra wrote:
>> @@ -196,6 +198,17 @@ static int membarrier_register_global_expedited(void)
>>               */
>>              smp_mb();
>>      } else {
>> +            struct task_struct *g, *t;
>> +
>> +            read_lock(&tasklist_lock);
>> +            do_each_thread(g, t) {
>> +                    if (t->mm == mm) {
>> +                            atomic_or(MEMBARRIER_STATE_GLOBAL_EXPEDITED,
>> +                                      &t->membarrier_state);
>> +                    }
>> +            } while_each_thread(g, t);
>> +            read_unlock(&tasklist_lock);
>> +
>>              /*
>>               * For multi-mm user threads, we need to ensure all
>>               * future scheduler executions will observe the new
> 
> Arguably, because this is exposed to unprivileged users and is a potential
> preemption latency issue, we could do it in 3 passes:
> 
>       - RCU, mark all found lacking, count
>       - RCU, mark all found lacking, count
>       - if count of last pass, tasklist_lock
> 
> That way, it becomes much harder to trigger the bad case.
> 
> Do we worry about that?

Allowing unprivileged processes to trigger an iteration over all
processes/threads with tasklist_lock held is something I try to avoid.
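
That said, your multi-pass idea would at least keep the lock out of the
common path. Very roughly, and only as an untested sketch (the helper name
is made up, and it assumes the per-task membarrier_state and flag from the
diff above):

/*
 * Untested sketch of the 3-pass idea: two RCU passes that mark and
 * count the threads still lacking the flag, then a tasklist_lock pass
 * only if the second RCU pass still found stragglers.
 */
static void membarrier_mark_mm_threads(struct mm_struct *mm)
{
	struct task_struct *g, *t;
	int lacking = 0, pass;

	for (pass = 0; pass < 2; pass++) {
		lacking = 0;
		rcu_read_lock();
		for_each_process_thread(g, t) {
			if (t->mm != mm)
				continue;
			if (!(atomic_read(&t->membarrier_state) &
			      MEMBARRIER_STATE_GLOBAL_EXPEDITED)) {
				atomic_or(MEMBARRIER_STATE_GLOBAL_EXPEDITED,
					  &t->membarrier_state);
				lacking++;
			}
		}
		rcu_read_unlock();
	}

	if (!lacking)
		return;

	/* Fallback for threads that raced with the RCU passes. */
	read_lock(&tasklist_lock);
	do_each_thread(g, t) {
		if (t->mm == mm)
			atomic_or(MEMBARRIER_STATE_GLOBAL_EXPEDITED,
				  &t->membarrier_state);
	} while_each_thread(g, t);
	read_unlock(&tasklist_lock);
}

The extra RCU pass only narrows the window for the fallback; it does not
remove the unprivileged path into tasklist_lock, which is the part I would
rather avoid.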

Thanks,

Mathieu


-- 
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com
