On Wed, May 23, 2018 at 02:35:20PM +0100, Mark Rutland wrote:
> This series contains a few cleanups of the atomic API, fixing an
> inconsistency between atomic_* and atomic64_*, and minimizing repetition
> in arch code. This is nicer for arch code, and the improved regularity
> will help when generating the atomic headers in future.
>
> The bulk of the patches reorganise things so architectures consistently
> provide <atomic>_fetch_add_unless(), with atomic_fetch_add_unless()
> provided as a wrapper by core code. A generic fallback is provided for
> <atomic>_fetch_add_unless(), based on <atomic>_read() and
> <atomic>_try_cmpxchg().
>
> Other patches in the series add common fallbacks for:
>
> * atomic64_inc_not_zero()
> * <atomic>_inc_and_test()
> * <atomic>_dec_and_test()
> * <atomic>_sub_and_test()
> * <atomic>_add_negative()
>
> ... as almost all architectures provide identical implementations of
> these today.
>
> The end result is a strongly negative diffstat, though <linux/atomic.h>
> grows by a reasonable amount. When we generate the headers, we can halve
> this by templating the various fallbacks for atomic{,64}_t.
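For readers following along: the generic fallback described above can be sketched roughly as below. The atomic_t type and the read/try_cmpxchg primitives here are simplified single-threaded stand-ins for illustration only, not the kernel's real (per-arch, ordering-aware) implementations; only the fetch_add_unless loop structure mirrors the kernel-style fallback.

```c
#include <stdbool.h>

/* Simplified stand-in for the kernel's atomic_t; illustration only. */
typedef struct { int counter; } atomic_t;

static int atomic_read(const atomic_t *v)
{
	return v->counter;
}

/*
 * Single-threaded stand-in for try_cmpxchg(): if *old matches, store
 * new and return true; otherwise update *old with the current value
 * and return false.
 */
static bool atomic_try_cmpxchg(atomic_t *v, int *old, int new)
{
	if (v->counter == *old) {
		v->counter = new;
		return true;
	}
	*old = v->counter;
	return false;
}

/*
 * Generic fallback shape: add @a to @v, unless @v was @u.
 * Returns the value @v held beforehand.
 */
static int atomic_fetch_add_unless(atomic_t *v, int a, int u)
{
	int c = atomic_read(v);

	do {
		if (c == u)
			break;
	} while (!atomic_try_cmpxchg(v, &c, c + a));

	return c;
}
```

With this in place, inc_not_zero-style operations fall out naturally, e.g. testing whether atomic_fetch_add_unless(v, 1, 0) returned non-zero.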
Thanks for this Mark,

Acked-by: Peter Zijlstra (Intel) <pet...@infradead.org>

Ingo, can you magic this into tip somewhere?