Currently architectures must implement atomic_fetch_add_unless(), with
common code providing atomic_add_unless(). Architectures must also
implement atomic64_add_unless() directly, with no corresponding
atomic64_fetch_add_unless().
This divergence is unfortunate, and means that the APIs for atomic_t,
atomic64_t, and atomic_long_t differ.

In preparation for unifying things, with architectures providing
atomic64_fetch_add_unless(), this patch adds a generic
atomic64_add_unless() which will use atomic64_fetch_add_unless(). The
instrumented atomics are updated to take this case into account.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <[email protected]>
Acked-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: Richard Henderson <[email protected]>
Cc: Ivan Kokshaysky <[email protected]>
Cc: Matt Turner <[email protected]>
Cc: Vineet Gupta <[email protected]>
Cc: Russell King <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Palmer Dabbelt <[email protected]>
Cc: Albert Ou <[email protected]>
---
 include/asm-generic/atomic-instrumented.h |  9 +++++++++
 include/linux/atomic.h                    | 16 ++++++++++++++++
 2 files changed, 25 insertions(+)

diff --git a/include/asm-generic/atomic-instrumented.h b/include/asm-generic/atomic-instrumented.h
index 1f9b2a767d3c..ab011e1a02fc 100644
--- a/include/asm-generic/atomic-instrumented.h
+++ b/include/asm-generic/atomic-instrumented.h
@@ -93,11 +93,20 @@ static __always_inline int atomic_fetch_add_unless(atomic_t *v, int a, int u)
 }
 #endif
 
+#ifdef arch_atomic64_fetch_add_unless
+#define atomic64_fetch_add_unless atomic64_fetch_add_unless
+static __always_inline s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
+{
+	kasan_check_write(v, sizeof(*v));
+	return arch_atomic64_fetch_add_unless(v, a, u);
+}
+#else
 static __always_inline bool atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
 {
 	kasan_check_write(v, sizeof(*v));
 	return arch_atomic64_add_unless(v, a, u);
 }
+#endif
 
 static __always_inline void atomic_inc(atomic_t *v)
 {
diff --git a/include/linux/atomic.h b/include/linux/atomic.h
index b89ba36cab94..3c03de648007 100644
--- a/include/linux/atomic.h
+++ b/include/linux/atomic.h
@@ -1043,6 +1043,22 @@ static inline int atomic_dec_if_positive(atomic_t *v)
 #endif /* atomic64_try_cmpxchg */
 
 /**
+ * atomic64_add_unless - add unless the number is already a given value
+ * @v: pointer of type atomic64_t
+ * @a: the amount to add to v...
+ * @u: ...unless v is equal to u.
+ *
+ * Atomically adds @a to @v, if @v was not already @u.
+ * Returns true if the addition was done.
+ */
+#ifdef atomic64_fetch_add_unless
+static inline bool atomic64_add_unless(atomic64_t *v, long long a, long long u)
+{
+	return atomic64_fetch_add_unless(v, a, u) != u;
+}
+#endif
+
+/**
  * atomic64_inc_not_zero - increment unless the number is zero
  * @v: pointer of type atomic64_t
  *
--
2.11.0
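
For reference, the relationship the new generic helper relies on is that
fetch_add_unless() returns the value the variable held beforehand, so the
addition happened iff that old value was not @u. Below is a minimal,
userspace C11 sketch of the same pattern; it is illustrative only (the
function names here are local stand-ins, not the kernel API, and the
kernel versions additionally go through the arch_/instrumented layers
shown in the diff above):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Atomically add @a to @v unless @v is @u; return the old value. */
static int64_t fetch_add_unless(_Atomic int64_t *v, int64_t a, int64_t u)
{
	int64_t c = atomic_load(v);

	/* Retry until the add succeeds or the value is found to be @u. */
	do {
		if (c == u)
			break;
	} while (!atomic_compare_exchange_weak(v, &c, c + a));

	return c;
}

/* The boolean form derives from the fetch form, as in the patch. */
static bool add_unless(_Atomic int64_t *v, int64_t a, int64_t u)
{
	return fetch_add_unless(v, a, u) != u;
}

int main(void)
{
	_Atomic int64_t v = 1;

	printf("%d\n", add_unless(&v, 1, 0));	/* 1: v was not 0, now 2 */
	printf("%d\n", add_unless(&v, 1, 2));	/* 0: v == 2, unchanged  */
	return 0;
}

Note that on a failed compare-exchange, C11 (like the kernel's
try_cmpxchg()) updates the expected value in place, so the loop rereads
@v for free on each retry.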

