Hi,

This all started when Andrea Parri found a 'surprising' behaviour for x86:

  http://lkml.kernel.org/r/20190418125412.GA10817@andrea

Basically we fail for:

        *x = 1;
        atomic_inc(u);
        smp_mb__after_atomic();
        r0 = *y;

Because, while the atomic_inc() implies memory ordering, it
(surprisingly) does not provide a compiler barrier. This then allows
the compiler to re-order like so:

        atomic_inc(u);
        *x = 1;
        smp_mb__after_atomic();
        r0 = *y;

Which the CPU is then allowed to re-order (under TSO rules) like:

        atomic_inc(u);
        r0 = *y;
        *x = 1;

And this very much was not intended.
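
For completeness, here is a minimal user-space sketch of the underlying
issue. It is an illustration only, not the kernel's actual code: the
function names are made up, and it assumes x86-64 with GCC-style inline
asm. The point is that a LOCK'ed RMW whose asm lacks a "memory" clobber
orders the CPU but not the compiler:

        #include <stdio.h>

        static int x, y, u;

        /*
         * LOCK INC orders the CPU, but without a "memory" clobber the
         * compiler only knows about *v and may move unrelated memory
         * accesses across this statement.
         */
        static inline void my_atomic_inc_no_clobber(int *v)
        {
                asm volatile("lock incl %0" : "+m" (*v));
        }

        /*
         * The same asm with a "memory" clobber is also a compiler
         * barrier, so independent accesses can no longer cross it.
         */
        static inline void my_atomic_inc_clobber(int *v)
        {
                asm volatile("lock incl %0" : "+m" (*v) : : "memory");
        }

        int main(void)
        {
                int r0;

                x = 1;                         /* compiler may sink this ...  */
                my_atomic_inc_no_clobber(&u);  /* ... below the LOCK'ed RMW   */
                asm volatile("" ::: "memory"); /* smp_mb__after_atomic(): only
                                                  a compiler barrier on x86   */
                r0 = y;

                printf("r0=%d u=%d x=%d\n", r0, u, x);
                return 0;
        }

Inspecting the generated assembly (e.g. gcc -O2 -S) shows whether the
store to x is allowed to sink below the LOCK'ed instruction; swapping in
my_atomic_inc_clobber() pins it above.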

This led me to audit all the (strong) architectures that have a weak
smp_mb__{before,after}_atomic(): ia64, mips, sparc, s390, x86, xtensa.

Of those, only x86 and MIPS were affected. Looking at MIPS to solve this
led to the other MIPS patches.

All these patches have been through 0day for quite a while.

Paul, how do you want to route the MIPS bits?
