On Fri, Apr 26, 2019 at 08:37:37AM +0000, Nadav Amit wrote:

> Interesting! (and thanks for the reference). Well, I said it would be quite
> surprising, and I see you wrote the same thing in the patch ;-)
> 
> But correct me if I’m wrong - it does sound as if you “screw” all the uses
> of atomic_inc() and atomic_dec() (~4000 instances) for the fewer uses of
> smp_mb__after_atomic() and smp_mb__before_atomic() (~400 instances).
> 
> Do you intend to at least introduce a variant of atomic_inc() without a
> memory barrier?

Based on the defconfig build changes that patch caused, no.

  
https://lkml.kernel.org/r/20190423121715.gq4...@hirez.programming.kicks-ass.net

Also note that, except for x86 and MIPS, the other architectures (ia64,
sparc, s390 and xtensa) already have this exact behaviour. Also note, as
the patch says, that on x86 only atomic_{inc,dec,add,sub}() needed the
change; atomic_{and,or,xor}() already have the memory clobber.
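
To illustrate what that clobber amounts to, something like the below (a
simplified sketch, not the actual arch/x86/include/asm/atomic.h code;
LOCK_PREFIX alternatives and the arch_ naming are left out):

/* sketch only -- simplified from what the kernel really does */
typedef struct { int counter; } atomic_t;

/* before: only the counter itself is constrained, so the compiler is
 * still free to reorder or cache unrelated memory accesses around it */
static inline void atomic_inc_before(atomic_t *v)
{
	asm volatile("lock; incl %0" : "+m" (v->counter));
}

/* after: the "memory" clobber additionally makes the op a compiler
 * barrier, matching what ia64, sparc, s390 and xtensa already do */
static inline void atomic_inc_after(atomic_t *v)
{
	asm volatile("lock; incl %0" : "+m" (v->counter) : : "memory");
}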

Also, that would complicate the API too much; people are already getting
it wrong _a_lot_.

  
https://lkml.kernel.org/r/20190423123209.gr4...@hirez.programming.kicks-ass.net
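
For completeness, the pairing those ~400 smp_mb__{before,after}_atomic()
sites are meant to implement looks roughly like this (the obj/ref_count
names are made up for the example; the shape follows the one documented
in Documentation/memory-barriers.txt):

#include <linux/atomic.h>

struct obj {
	int dead;
	atomic_t ref_count;
};

/* illustration only: publish the flag before dropping the reference */
static void obj_kill(struct obj *obj)
{
	obj->dead = 1;			/* plain store */
	smp_mb__before_atomic();	/* order the store vs. the RMW below */
	atomic_dec(&obj->ref_count);	/* non-value-returning atomic RMW */
}

Whether a given caller actually needs that extra ordering, and whether
the op in the middle is really a non-value-returning atomic RMW, is
exactly the sort of thing people keep getting wrong.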
