On 03/17, Paul E. McKenney wrote:
>
> On Sat, Mar 16, 2013 at 07:30:22PM +0100, Oleg Nesterov wrote:
> >
> > > The rule is that if an atomic primitive returns non-void, then there is
> > > a full memory barrier before and after.
> >
> > This case is documented...
> >
> > > This applies to primitives
> > > returning boolean as well, with atomic_dec_and_test() setting this
> > > precedent from what I can see.
> >
> > I don't think this is a "fair" comparison. Unlike atomic_add_unless(),
> > atomic_dec_and_test() always changes the memory, even if it "fails".
> >
> > If atomic_add_unless() returns 0, nothing was changed, and if we add
> > the barrier it is not clear what it should be paired with.
> >
> > But OK. I have to agree that "keep the rules simple" makes sense, so
> > we should change atomic_add_unless() as well.
>
> Agreed!
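
(For the record, the precedent: on x86 atomic_dec_and_test() always
executes a LOCK'ed decl, so its implied full barrier is there whichever
way the test goes. The usual caller pattern, roughly (names made up):

	obj->dead = 1;			/* plain stores before the atomic op */
	if (atomic_dec_and_test(&obj->refs))
		kfree(obj);		/* full mb() implied on both branches */

whereas __atomic_add_unless() can currently return without executing a
single LOCK'ed insn, see below.)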

OK... since nobody volunteered to make a patch, what do you think about
the change below?

It should "fix" atomic_add_unless() (x86 only) and optimize
atomic_inc_unless_negative()/atomic_dec_unless_positive().
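
To show concretely what needs fixing: with the current x86 code, the
path that finds the counter already equal to "u" never executes a
LOCK'ed instruction, so the caller gets no implied barrier at all.
Roughly:

	c = atomic_read(v);	/* plain MOV */
	if (c == u)
		return c;	/* no cmpxchg, no LOCK prefix, no mb() */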

With this change atomic_*_unless() can do an unnecessary mb() after a
failed cmpxchg(), but I think this case is very unlikely.
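
The case I mean, spelled out with made-up numbers (the counter is 5,
a == 1, u == 6):

	c = atomic_read(v);		/* reads 5, 5 != 6, enter the loop */
	old = atomic_cmpxchg(v, 5, 6);	/* another CPU sets the counter to 6
					   meanwhile, so this fails and returns
					   6; it is still a LOCK'ed insn, i.e.
					   a full barrier */
	/* c = old = 6 == u, the loop exits */
	smp_mb();			/* redundant after the failed cmpxchg() */
	return 6;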

And, in the likely case, atomic_inc_unless_negative() and
atomic_dec_unless_positive() avoid the first cmpxchg(), which in most
cases only reads the memory for the next cmpxchg().
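
IOW, for atomic_inc_unless_negative() the common (counter != 0) case
changes roughly like this:

	/* before: the first iteration is usually a wasted LOCK'ed cmpxchg */
	v = 0;
	v1 = atomic_cmpxchg(p, 0, 1);	/* fails unless *p happens to be 0,
					   it merely returns the current value */
	/* after: a plain load instead */
	v = atomic_read(p);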

Oleg.

--- x/arch/x86/include/asm/atomic.h
+++ x/arch/x86/include/asm/atomic.h
@@ -212,15 +212,12 @@ static inline int atomic_xchg(atomic_t *
 static inline int __atomic_add_unless(atomic_t *v, int a, int u)
 {
        int c, old;
-       c = atomic_read(v);
-       for (;;) {
-               if (unlikely(c == (u)))
-                       break;
-               old = atomic_cmpxchg((v), c, c + (a));
+       for (c = atomic_read(v); c != u; c = old) {
+               old = atomic_cmpxchg(v, c, c + a);
                if (likely(old == c))
-                       break;
-               c = old;
+                       return c;
        }
+       smp_mb();
        return c;
 }
 
--- x/include/linux/atomic.h
+++ x/include/linux/atomic.h
@@ -64,11 +64,12 @@ static inline int atomic_inc_not_zero_hi
 static inline int atomic_inc_unless_negative(atomic_t *p)
 {
        int v, v1;
-       for (v = 0; v >= 0; v = v1) {
+       for (v = atomic_read(p); v >= 0; v = v1) {
                v1 = atomic_cmpxchg(p, v, v + 1);
                if (likely(v1 == v))
                        return 1;
        }
+       smp_mb();
        return 0;
 }
 #endif
@@ -77,11 +78,12 @@ static inline int atomic_inc_unless_nega
 static inline int atomic_dec_unless_positive(atomic_t *p)
 {
        int v, v1;
-       for (v = 0; v <= 0; v = v1) {
+       for (v = atomic_read(p); v <= 0; v = v1) {
                v1 = atomic_cmpxchg(p, v, v - 1);
                if (likely(v1 == v))
                        return 1;
        }
+       smp_mb();
        return 0;
 }
 #endif
