From: Ash Wilding <ash.j.wild...@gmail.com>

Now that we have explicit implementations of the LL/SC and LSE atomics
helpers after porting Linux's versions to Xen, we can drop the reference
to gcc's built-in __sync_fetch_and_add().
This requires some fudging using container_of() because the users of
__sync_fetch_and_add(), namely xen/spinlock.c, expect the ptr to point
directly at the u32 being modified, while the atomics helpers expect the
ptr to point at an atomic_t and then access that atomic_t's counter
member. By using container_of() we can create a "fake" (atomic_t *)
pointer and pass that to the atomic_fetch_add() that we ported from
Linux.

NOTE: spinlock.c uses u32 for the value being added while the atomics
helpers use int for their counter member. This shouldn't actually matter
because we do the addition in assembly and the compiler isn't smart
enough to detect potential signed integer overflow in inline assembly,
but I thought it worth calling out in the commit message.

Signed-off-by: Ash Wilding <ash.j.wild...@gmail.com>
---
 xen/include/asm-arm/system.h | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/xen/include/asm-arm/system.h b/xen/include/asm-arm/system.h
index 65d5c8e423..0326e3ade4 100644
--- a/xen/include/asm-arm/system.h
+++ b/xen/include/asm-arm/system.h
@@ -58,7 +58,14 @@ static inline int local_abort_is_enabled(void)
     return !(flags & PSR_ABT_MASK);
 }
 
-#define arch_fetch_and_add(x, v) __sync_fetch_and_add(x, v)
+#define arch_fetch_and_add(ptr, x) ({                                  \
+    int ret;                                                           \
+                                                                       \
+    atomic_t *tmp = container_of((int *)(ptr), atomic_t, counter);     \
+    ret = atomic_fetch_add(x, tmp);                                    \
+                                                                       \
+    ret;                                                               \
+})
 
 extern struct vcpu *__context_switch(struct vcpu *prev, struct vcpu *next);

-- 
2.24.3 (Apple Git-128)