On Mon, May 20, 2019 at 04:59:15PM -0400, Waiman Long wrote:

>  static struct rw_semaphore __sched *
> +rwsem_down_read_slowpath(struct rw_semaphore *sem, int state, long adjustment)
>  {
> +	long count;
>  	bool wake = false;
>  	struct rwsem_waiter waiter;
>  	DEFINE_WAKE_Q(wake_q);
>
> +	if (unlikely(!adjustment)) {
> +		/*
> +		 * This shouldn't happen. If it does, there is probably
> +		 * something wrong in the system.
> +		 */
> +		WARN_ON_ONCE(1);

	if (WARN_ON_ONCE(!adjustment)) {

> +
> +		/*
> +		 * An adjustment of 0 means that there are too many readers
> +		 * holding or trying to acquire the lock. So disable
> +		 * optimistic spinning and go directly into the wait list.
> +		 */
> +		if (rwsem_test_oflags(sem, RWSEM_RD_NONSPINNABLE))
> +			rwsem_set_nonspinnable(sem);

ISTR rwsem_set_nonspinnable() already does that test, so no need to do
it again, right?

> +		goto queue;
> +	}
> +
>  	/*
>  	 * Save the current read-owner of rwsem, if available, and the
>  	 * reader nonspinnable bit.
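
Putting those two together, the whole hunk could perhaps shrink to
something like this (just a sketch, untested, and assuming
rwsem_set_nonspinnable() really does the flag test itself):

	if (WARN_ON_ONCE(!adjustment)) {
		/*
		 * An adjustment of 0 means that there are too many readers
		 * holding or trying to acquire the lock, so disable
		 * optimistic spinning and go directly into the wait list.
		 */
		rwsem_set_nonspinnable(sem);
		goto queue;
	}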