On Wed, 2016-05-11 at 11:33 -0700, Davidlohr Bueso wrote:
> On Wed, 11 May 2016, Peter Zijlstra wrote:
>
> >On Mon, May 09, 2016 at 12:16:37PM -0700, Jason Low wrote:
> >> When acquiring the rwsem write lock in the slowpath, we first try
> >> to set count to RWSEM_WAITING_BIAS. When that is successful,
> >> we then atomically add the RWSEM_WAITING_BIAS in cases where
> >> there are other tasks on the wait list. This causes write lock
> >> operations to often issue multiple atomic operations.
> >>
> >> We can instead make the list_is_singular() check first, and then
> >> set the count accordingly, so that we issue at most 1 atomic
> >> operation when acquiring the write lock and reduce unnecessary
> >> cacheline contention.
> >>
> >> Signed-off-by: Jason Low <[email protected]>
>
> Acked-by: Davidlohr Bueso <[email protected]>
>
> (one nit: the patch title could be more informative as to what
> optimization we are talking about here... ie: 'reduce atomic ops
> in writer slowpath' or something.)
Yeah, the "optimize write lock slowpath" subject is a bit generic. I'll make the title more specific in the next version.

Thanks,
Jason

