Hi!

On Thu, Apr 08, 2021 at 03:33:45PM +0000, Christophe Leroy wrote:
> +#define ATOMIC_OP(op, asm_op, dot, sign)				\
>  static __inline__ void atomic_##op(int a, atomic_t *v)		\
>  {									\
>  	int t;								\
>  									\
>  	__asm__ __volatile__(						\
>  "1:	lwarx	%0,0,%3		# atomic_" #op "\n"			\
> -	#asm_op " %0,%2,%0\n"						\
> +	#asm_op "%I2" dot " %0,%0,%2\n"					\
>  "	stwcx.	%0,0,%3 \n"						\
>  "	bne-	1b\n"							\
> -	: "=&r" (t), "+m" (v->counter)					\
> -	: "r" (a), "r" (&v->counter)					\
> +	: "=&b" (t), "+m" (v->counter)					\
> +	: "r"#sign (a), "r" (&v->counter)				\
>  	: "cc");							\
>  }									\
You need "b" (instead of "r") only for "addi".  You can use "addic"
instead, which clobbers XER[CA], but *all* inline asm does, so that is
not a downside here (it is also not slower on any CPU that matters).

> @@ -238,14 +238,14 @@ static __inline__ int atomic_fetch_add_unless(atomic_t *v, int a, int u)
>  "1:	lwarx	%0,0,%1		# atomic_fetch_add_unless\n\
>  	cmpw	0,%0,%3 \n\
>  	beq	2f \n\
> -	add	%0,%2,%0 \n"
> +	add%I2	%0,%0,%2 \n"
>  "	stwcx.	%0,0,%1 \n\
>  	bne-	1b \n"
>  	PPC_ATOMIC_EXIT_BARRIER
> -"	subf	%0,%2,%0 \n\
> +"	sub%I2	%0,%0,%2 \n\
>  2:"
> -	: "=&r" (t)
> -	: "r" (&v->counter), "r" (a), "r" (u)
> +	: "=&b" (t)
> +	: "r" (&v->counter), "rI" (a), "r" (u)
>  	: "cc", "memory");

Same here.

Nice patches!

Acked-by: Segher Boessenkool <seg...@kernel.crashing.org>


Segher