----- On Aug 27, 2017, at 3:53 PM, Andy Lutomirski [email protected] wrote:
>> On Aug 27, 2017, at 1:50 PM, Mathieu Desnoyers
>> <[email protected]> wrote:
>>
>> Add a new MEMBARRIER_CMD_REGISTER_SYNC_CORE command to the membarrier
>> system call. It allows processes to register their intent to have their
>> threads issue core serializing barriers in addition to memory barriers
>> whenever a membarrier command is performed.
>>
>
> Why is this stateful? That is, why not just have a new membarrier
> command to sync every thread's icache?

If we did it on every CPU's icache, it would be as trivial as you say.
The concern here is sending IPIs only to CPUs running threads that
belong to the same process, so we don't disturb unrelated processes.

If we could just grab each CPU's runqueue lock, it would be fairly
simple to do. But we want to avoid hitting each runqueue with the
exclusive atomic access associated with grabbing the lock
(cache-line bouncing).

So the "private" membarrier command ends up reading the rq->curr->mm
pointer value for each runqueue and comparing it to its own current->mm
value. However, this means that whenever we skip a CPU, we're not
sending an IPI to that CPU. So we rely on the scheduler to provide the
required full barrier both before storing to rq->curr (after user-space
memory accesses performed by "prev") and after storing to rq->curr
(before user-space memory accesses performed by "next").

The IPI of the private membarrier can issue both smp_mb() and
sync_core() (that's what my implementation does). However, having
sys_membarrier issue core serializing barriers adds extra constraints
on entry into the scheduler/resuming to user-space. It's not sufficient
to order user-space memory accesses wrt. storing to rq->curr; we also
want to serialize the core execution. This is why I'm adding sync_core
before the full barrier on entry, and sync_core after the full barrier
on exit.

Arguably, some architectures may not need the extra sync_core on exit
(e.g.
x86 has iret, which implies core serialization), but there are cases
where it's not guaranteed (AFAIK sysexit), and it's rarely guaranteed
on entry.

So we end up with the possibility of adding the core serialization
unconditionally on entry and exit of the scheduler. However, as my
numbers below show, performance is slightly impacted in
scheduler-heavy benchmarks. Therefore, I propose to make processes
register their intent to have the scheduler issue core serializing
barriers on their behalf when it schedules them out/in.

>> * Scheduler Overhead Benchmarks
>>
>> Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz
>> taskset 01 ./perf bench sched pipe -T
>> Linux v4.13-rc6
>>
>>                              Avg. usecs/op   Std.Dev. usecs/op
>> Before this change:               2.75             0.12
>> Non-registered processes:         2.73             0.08
>> Registered processes:             3.07             0.02

Thanks,

Mathieu

-- 
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com
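[Editor's note: the barrier placement described in the reply above can be
summarized in the following sketch. The sync_core lines are what the
proposal adds; the full barriers are the existing scheduler guarantees
the reply relies on. This is an illustration, not kernel code.]

```
prev's user-space memory accesses
    sync_core     <- added on entry (this proposal)
    full barrier  <- existing guarantee, before storing to rq->curr
store rq->curr = next
    full barrier  <- existing guarantee, after storing to rq->curr
    sync_core     <- added on exit (this proposal)
next's user-space memory accesses
```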

