On Mon, Oct 20, 2014 at 3:49 AM, Catalin Marinas
<catalin.mari...@arm.com> wrote:
>
> Since you mention symmetry, something like below makes the barriers more
> explicit.

Broken, for two reasons:

> diff --git a/kernel/futex.c b/kernel/futex.c
> index f3a3a071283c..5b9d857d0816 100644
> --- a/kernel/futex.c
> +++ b/kernel/futex.c
> @@ -143,9 +143,7 @@
>  static inline void futex_get_mm(union futex_key *key)
>  {
>         atomic_inc(&key->private.mm->mm_count);
> -       /*
> -        * Ensure futex_get_mm() implies a full barrier such that
> -        * get_futex_key() implies a full barrier. This is relied upon
> -        * as full barrier (B), see the ordering comment above.
> -        */
> -       smp_mb__after_atomic();
>  }

So the thing is, this means we can no longer take advantage of the fact
that "atomic_inc" is already an atomic operation - and on architectures
like x86 an atomic RMW is itself a full barrier, which is exactly why
smp_mb__after_atomic() is free there. So this part is just a performance
regression. But:

>
>  static inline int hb_waiters_pending(struct futex_hash_bucket *hb)
>  {
> +       /*
> +        * Full barrier (B), see the ordering comment above.
> +        */
> +       smp_mb__before_atomic();
>  #ifdef CONFIG_SMP
>         return atomic_read(&hb->waiters);

This is just entirely broken.

"atomic_read()" isn't really an "atomic op" at all. despite the name,
it's just a read that is basically ACCESS_ONCE.

So smp_mb__before_atomic() doesn't order anything for atomic_read() -
it only pairs with real atomic RMW operations - and the code above is
nonsensical and doesn't work. It would need to be a full memory
barrier (smp_mb()).

                   Linus