On 10/24, Paul E. McKenney wrote:
>
> static inline void percpu_up_read(struct percpu_rw_semaphore *p)
> {
> 	/*
> 	 * Decrement our count, but protected by RCU-sched so that
> 	 * the writer can force proper serialization.
> 	 */
> 	rcu_read_lock_sched();
> 	this_cpu_dec(*p->counters);
> 	rcu_read_unlock_sched();
> }
Yes, the explicit lock/unlock makes the new assumptions about
synchronize_sched && barriers unnecessary. And IIUC, this could even be
written as

	rcu_read_lock_sched();
	rcu_read_unlock_sched();

	this_cpu_dec(*p->counters);

> Of course, it would be nice to get rid of the extra synchronize_sched().
> One way to do this is to use SRCU, which allows blocking operations in
> its read-side critical sections (though also increasing read-side overhead
> a bit, and also untested):
>
> ------------------------------------------------------------------------
>
> struct percpu_rw_semaphore {
> 	bool locked;
> 	struct mutex mtx; /* Could also be rw_semaphore. */
> 	struct srcu_struct s;
> 	wait_queue_head_t wq;
> };

But in this case I don't understand

> static inline void percpu_up_write(struct percpu_rw_semaphore *p)
> {
> 	/* Allow others to proceed, but not yet locklessly. */
> 	mutex_unlock(&p->mtx);
>
> 	/*
> 	 * Ensure that all calls to percpu_down_read() that did not
> 	 * start unambiguously after the above mutex_unlock() still
> 	 * acquire the lock, forcing their critical sections to be
> 	 * serialized with the one terminated by this call to
> 	 * percpu_up_write().
> 	 */
> 	synchronize_sched();

how this synchronize_sched() can help...

Oleg.
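[As a rough illustration of the protocol under discussion, here is a
hypothetical userspace sketch, not the kernel code: the per-CPU counters
are collapsed into a single atomic counter, and the RCU-sched /
synchronize_sched() machinery is replaced by an ordinary mutex and
condition variable, so everything below is effectively the slow path.
All names (rw_sem, down_read, etc.) are made up for the example; only
the reader-count / writer-flag ordering mirrors the thread above.]

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>

/*
 * Userspace analogy of a percpu_rw_semaphore.  A single seq_cst
 * atomic replaces the per-CPU counters; a mutex + condvar replace
 * RCU-sched and the wait queue.  Illustration only.
 */
struct rw_sem {
	atomic_int readers;	/* stand-in for the per-CPU counters */
	atomic_bool locked;	/* is a writer in (or entering) its CS? */
	pthread_mutex_t mtx;
	pthread_cond_t wq;
};

static void down_read(struct rw_sem *p)
{
	atomic_fetch_add(&p->readers, 1);
	if (!atomic_load(&p->locked))
		return;		/* no writer: done (the "fast" path) */

	/* Writer active: back off and wait for it to finish. */
	pthread_mutex_lock(&p->mtx);
	atomic_fetch_sub(&p->readers, 1);
	pthread_cond_broadcast(&p->wq);	/* the writer may be waiting on us */
	while (atomic_load(&p->locked))
		pthread_cond_wait(&p->wq, &p->mtx);
	atomic_fetch_add(&p->readers, 1);
	pthread_mutex_unlock(&p->mtx);
}

static void up_read(struct rw_sem *p)
{
	pthread_mutex_lock(&p->mtx);
	atomic_fetch_sub(&p->readers, 1);
	pthread_cond_broadcast(&p->wq);	/* a writer may be draining readers */
	pthread_mutex_unlock(&p->mtx);
}

static void down_write(struct rw_sem *p)
{
	pthread_mutex_lock(&p->mtx);
	while (atomic_load(&p->locked))	/* serialize with other writers */
		pthread_cond_wait(&p->wq, &p->mtx);
	atomic_store(&p->locked, true);	/* turn away new fast-path readers */
	while (atomic_load(&p->readers) > 0)	/* wait for old readers */
		pthread_cond_wait(&p->wq, &p->mtx);
	pthread_mutex_unlock(&p->mtx);
}

static void up_write(struct rw_sem *p)
{
	pthread_mutex_lock(&p->mtx);
	atomic_store(&p->locked, false);
	pthread_cond_broadcast(&p->wq);	/* let readers and writers back in */
	pthread_mutex_unlock(&p->mtx);
}
```

[The seq_cst increment-then-check in down_read against the writer's
set-flag-then-check in down_write plays the role that the barriers /
grace period play in the kernel version: the writer cannot observe
readers == 0 while a reader has incremented but still sees locked ==
false. The kernel code avoids paying for this ordering on every read
by pushing the cost into synchronize_sched(), which is exactly the
trade-off being debated above.]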