Hi Mark,

On Fri, Jan 20, 2023 at 03:18:48PM +0000, Mark Rutland wrote:
> On Fri, Jan 20, 2023 at 03:09:42PM +0100, Yann Sionneau wrote:
> > Add common headers (atomic, bitops, barrier and locking) for basic
> > kvx support.
> > 
> > Co-developed-by: Clement Leger <clem...@clement-leger.fr>
> > Signed-off-by: Clement Leger <clem...@clement-leger.fr>
> > Co-developed-by: Jules Maselbas <jmasel...@kalray.eu>
> > Signed-off-by: Jules Maselbas <jmasel...@kalray.eu>
> > Co-developed-by: Julian Vetter <jvet...@kalray.eu>
> > Signed-off-by: Julian Vetter <jvet...@kalray.eu>
> > Co-developed-by: Julien Villette <jville...@kalray.eu>
> > Signed-off-by: Julien Villette <jville...@kalray.eu>
> > Co-developed-by: Yann Sionneau <ysionn...@kalray.eu>
> > Signed-off-by: Yann Sionneau <ysionn...@kalray.eu>
> > ---
> > 
> > Notes:
> >     V1 -> V2:
> >      - use {READ,WRITE}_ONCE for arch_atomic64_{read,set}
> >      - use asm-generic/bitops/atomic.h instead of __test_and_*_bit
> >      - removed duplicated includes
> >      - rewrite xchg and cmpxchg in C using builtins for acswap insn
> 
> Thanks for those changes. I see one issue below (instantiated a few times),
> but other than that this looks good to me.
> 
> [...]
> 
> > +#define ATOMIC64_RETURN_OP(op, c_op)                                       \
> > +static inline long arch_atomic64_##op##_return(long i, atomic64_t *v)      \
> > +{                                                                  \
> > +   long new, old, ret;                                             \
> > +                                                                   \
> > +   do {                                                            \
> > +           old = v->counter;                                       \
> 
> This should be arch_atomic64_read(v), in order to avoid the potential for the
> compiler to replay the access and introduce ABA races and other such problems.
Thanks for the suggestion, this will be in v3.
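
Something like this, I think (a sketch only; the single change from the hunk
above is the initial read):

#define ATOMIC64_RETURN_OP(op, c_op)                                    \
static inline long arch_atomic64_##op##_return(long i, atomic64_t *v)  \
{                                                                       \
        long new, old, ret;                                             \
                                                                        \
        do {                                                            \
                /* no plain load the compiler could replay */           \
                old = arch_atomic64_read(v);                            \
                new = old c_op i;                                       \
                ret = arch_cmpxchg(&v->counter, old, new);              \
        } while (ret != old);                                           \
                                                                        \
        return new;                                                     \
}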

> For details, see:
> 
>   https://lore.kernel.org/lkml/Y70SWXHDmOc3RhMd@osiris/
>   https://lore.kernel.org/lkml/Y71LoCIl+IFdy9D8@FVFF77S0Q05N/
> 
> I see that the generic 32-bit atomic code suffers from that issue, and we
> should fix it.
I took a look at the generic 32-bit atomics, but I am unsure whether this
needs to be done for both the SMP and non-SMP implementations. I can send a
first patch and we can discuss from there.
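
For the cmpxchg-based (SMP) flavour I would expect the fix to go in this
direction; this is only a sketch from memory, not a verbatim copy of the
asm-generic/atomic.h code, and I am using READ_ONCE() directly since
arch_atomic_read() is built on top of these ops there:

static inline int generic_atomic_add_return(int i, atomic_t *v)
{
        int new, old, ret;

        do {
                /* READ_ONCE() so the compiler cannot replay the load */
                old = READ_ONCE(v->counter);
                new = old + i;
                ret = arch_cmpxchg(&v->counter, old, new);
        } while (ret != old);

        return new;
}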

> > +           new = old c_op i;                                       \
> > +           ret = arch_cmpxchg(&v->counter, old, new);              \
> > +   } while (ret != old);                                           \
> > +                                                                   \
> > +   return new;                                                     \
> > +}
> > +
> > +#define ATOMIC64_OP(op, c_op)                                              \
> > +static inline void arch_atomic64_##op(long i, atomic64_t *v)               \
> > +{                                                                  \
> > +   long new, old, ret;                                             \
> > +                                                                   \
> > +   do {                                                            \
> > +           old = v->counter;                                       \
> 
> Likewise, arch_atomic64_read(v) here.
ack

> > +           new = old c_op i;                                       \
> > +           ret = arch_cmpxchg(&v->counter, old, new);              \
> > +   } while (ret != old);                                           \
> > +}
> > +
> > +#define ATOMIC64_FETCH_OP(op, c_op)                                        \
> > +static inline long arch_atomic64_fetch_##op(long i, atomic64_t *v) \
> > +{                                                                  \
> > +   long new, old, ret;                                             \
> > +                                                                   \
> > +   do {                                                            \
> > +           old = v->counter;                                       \
> 
> Likewise, arch_atomic64_read(v) here.
ack

> > +           new = old c_op i;                                       \
> > +           ret = arch_cmpxchg(&v->counter, old, new);              \
> > +   } while (ret != old);                                           \
> > +                                                                   \
> > +   return old;                                                     \
> > +}
> > +
> > +#define ATOMIC64_OPS(op, c_op)                                             \
> > +   ATOMIC64_OP(op, c_op)                                           \
> > +   ATOMIC64_RETURN_OP(op, c_op)                                    \
> > +   ATOMIC64_FETCH_OP(op, c_op)
> > +
> > +ATOMIC64_OPS(and, &)
> > +ATOMIC64_OPS(or, |)
> > +ATOMIC64_OPS(xor, ^)
> > +ATOMIC64_OPS(add, +)
> > +ATOMIC64_OPS(sub, -)
> > +
> > +#undef ATOMIC64_OPS
> > +#undef ATOMIC64_FETCH_OP
> > +#undef ATOMIC64_OP
> > +
> > +static inline int arch_atomic_add_return(int i, atomic_t *v)
> > +{
> > +   int new, old, ret;
> > +
> > +   do {
> > +           old = v->counter;
> 
> Likewise, arch_atomic64_read(v) here.
ack, this will be arch_atomic_read(v) here, since this is atomic_t rather
than atomic64_t.
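
i.e. something like this (sketch; the part of the function below the quoted
hunk is assumed to mirror the 64-bit loop):

static inline int arch_atomic_add_return(int i, atomic_t *v)
{
        int new, old, ret;

        do {
                old = arch_atomic_read(v);
                new = old + i;
                ret = arch_cmpxchg(&v->counter, old, new);
        } while (ret != old);

        return new;
}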


Thanks
-- Jules
