[tip:sched/core] sched/core: Use READ_ONCE()/WRITE_ONCE() in move_queued_task()/task_rq_lock()

2019-02-04 Thread tip-bot for Andrea Parri
Commit-ID:  c546951d9c9300065bad253ecdf1ac59ce9d06c8
Gitweb: https://git.kernel.org/tip/c546951d9c9300065bad253ecdf1ac59ce9d06c8
Author: Andrea Parri 
AuthorDate: Mon, 21 Jan 2019 16:52:40 +0100
Committer:  Ingo Molnar 
CommitDate: Mon, 4 Feb 2019 09:13:21 +0100

sched/core: Use READ_ONCE()/WRITE_ONCE() in move_queued_task()/task_rq_lock()

move_queued_task() synchronizes with task_rq_lock() as follows:

move_queued_task()  task_rq_lock()

[S] ->on_rq = MIGRATING [L] rq = task_rq()
WMB (__set_task_cpu())  ACQUIRE (rq->lock);
[S] ->cpu = new_cpu [L] ->on_rq

where "[L] rq = task_rq()" is ordered before "ACQUIRE (rq->lock)" by an
address dependency and, in turn, "ACQUIRE (rq->lock)" is ordered before
"[L] ->on_rq" by the ACQUIRE itself.

Use READ_ONCE() to load ->cpu in task_rq() (c.f., task_cpu()) to honor
this address dependency.  Also, mark the accesses to ->cpu and ->on_rq
with READ_ONCE()/WRITE_ONCE() to comply with the LKMM.
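
Schematically this is the message-passing-with-dependency idiom.  A
minimal litmus-style sketch of the pattern (not part of the patch; P0's
two stores stand for "[S] ->on_rq = MIGRATING" and, after the WMB of
__set_task_cpu(), "[S] ->cpu = new_cpu", while P1's dependent loads
stand for "[L] rq = task_rq()" followed by "[L] ->on_rq"):

C MP-wmb-addr-sketch

(*
 * Result: Never
 *)

{
y=z;
z=0;
}

P0(int *x, int **y)
{
	WRITE_ONCE(*x, 1);
	smp_wmb();
	WRITE_ONCE(*y, x);
}

P1(int **y)
{
	int *r0;
	int r1;

	r0 = READ_ONCE(*y);
	r1 = READ_ONCE(*r0);
}

exists (1:r0=x /\ 1:r1=0)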

Signed-off-by: Andrea Parri 
Signed-off-by: Peter Zijlstra (Intel) 
Cc: Alan Stern 
Cc: Linus Torvalds 
Cc: Mike Galbraith 
Cc: Paul E. McKenney 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: Will Deacon 
Link: https://lkml.kernel.org/r/20190121155240.27173-1-andrea.pa...@amarulasolutions.com
Signed-off-by: Ingo Molnar 
---
 include/linux/sched.h | 4 ++--
 kernel/sched/core.c   | 9 +
 kernel/sched/sched.h  | 6 +++---
 3 files changed, 10 insertions(+), 9 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 351c0fe64c85..4112639c2a85 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1745,9 +1745,9 @@ static __always_inline bool need_resched(void)
 static inline unsigned int task_cpu(const struct task_struct *p)
 {
 #ifdef CONFIG_THREAD_INFO_IN_TASK
-   return p->cpu;
+   return READ_ONCE(p->cpu);
 #else
-   return task_thread_info(p)->cpu;
+   return READ_ONCE(task_thread_info(p)->cpu);
 #endif
 }
 
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 32e06704565e..ec1b67a195cc 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -107,11 +107,12 @@ struct rq *task_rq_lock(struct task_struct *p, struct 
rq_flags *rf)
 *  [L] ->on_rq
 *  RELEASE (rq->lock)
 *
-* If we observe the old CPU in task_rq_lock, the acquire of
+* If we observe the old CPU in task_rq_lock(), the acquire of
 * the old rq->lock will fully serialize against the stores.
 *
-* If we observe the new CPU in task_rq_lock, the acquire will
-* pair with the WMB to ensure we must then also see migrating.
+* If we observe the new CPU in task_rq_lock(), the address
+* dependency headed by '[L] rq = task_rq()' and the acquire
+* will pair with the WMB to ensure we then also see migrating.
 */
if (likely(rq == task_rq(p) && !task_on_rq_migrating(p))) {
rq_pin_lock(rq, rf);
@@ -916,7 +917,7 @@ static struct rq *move_queued_task(struct rq *rq, struct 
rq_flags *rf,
 {
	lockdep_assert_held(&rq->lock);
 
-   p->on_rq = TASK_ON_RQ_MIGRATING;
+   WRITE_ONCE(p->on_rq, TASK_ON_RQ_MIGRATING);
dequeue_task(rq, p, DEQUEUE_NOCLOCK);
set_task_cpu(p, new_cpu);
rq_unlock(rq, rf);
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 99e2a7772d16..c688ef5012e5 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1479,9 +1479,9 @@ static inline void __set_task_cpu(struct task_struct *p, 
unsigned int cpu)
 */
smp_wmb();
 #ifdef CONFIG_THREAD_INFO_IN_TASK
-   p->cpu = cpu;
+   WRITE_ONCE(p->cpu, cpu);
 #else
-   task_thread_info(p)->cpu = cpu;
+   WRITE_ONCE(task_thread_info(p)->cpu, cpu);
 #endif
p->wake_cpu = cpu;
 #endif
@@ -1582,7 +1582,7 @@ static inline int task_on_rq_queued(struct task_struct *p)
 
 static inline int task_on_rq_migrating(struct task_struct *p)
 {
-   return p->on_rq == TASK_ON_RQ_MIGRATING;
+   return READ_ONCE(p->on_rq) == TASK_ON_RQ_MIGRATING;
 }
 
 /*


[tip:locking/core] tools/memory-model: Model smp_mb__after_unlock_lock()

2019-01-21 Thread tip-bot for Andrea Parri
Commit-ID:  5b735eb1ce481b2f1674a47c0995944b1cb6f5d5
Gitweb: https://git.kernel.org/tip/5b735eb1ce481b2f1674a47c0995944b1cb6f5d5
Author: Andrea Parri 
AuthorDate: Mon, 3 Dec 2018 15:04:49 -0800
Committer:  Ingo Molnar 
CommitDate: Mon, 21 Jan 2019 11:06:55 +0100

tools/memory-model: Model smp_mb__after_unlock_lock()

The kernel documents smp_mb__after_unlock_lock() the following way:

  "Place this after a lock-acquisition primitive to guarantee that
   an UNLOCK+LOCK pair acts as a full barrier.  This guarantee applies
   if the UNLOCK and LOCK are executed by the same CPU or if the
   UNLOCK and LOCK operate on the same lock variable."

Formalize in LKMM the above guarantee by defining (new) mb-links according
to the law:

  ([M] ; po ; [UL] ; (co | po) ; [LKW] ;
fencerel(After-unlock-lock) ; [M])

where the component ([UL] ; co ; [LKW]) identifies "UNLOCK+LOCK pairs on
the same lock variable" and the component ([UL] ; po ; [LKW]) identifies
"UNLOCK+LOCK pairs executed by the same CPU".

In particular, the LKMM forbids the following two behaviors (the second
litmus test below is based on:

  Documentation/RCU/Design/Memory-Ordering/Tree-RCU-Memory-Ordering.html

c.f., Section "Tree RCU Grace Period Memory Ordering Building Blocks"):

C after-unlock-lock-same-cpu

(*
 * Result: Never
 *)

{}

P0(spinlock_t *s, spinlock_t *t, int *x, int *y)
{
int r0;

spin_lock(s);
WRITE_ONCE(*x, 1);
spin_unlock(s);
spin_lock(t);
smp_mb__after_unlock_lock();
r0 = READ_ONCE(*y);
spin_unlock(t);
}

P1(int *x, int *y)
{
int r0;

WRITE_ONCE(*y, 1);
smp_mb();
r0 = READ_ONCE(*x);
}

exists (0:r0=0 /\ 1:r0=0)

C after-unlock-lock-same-lock-variable

(*
 * Result: Never
 *)

{}

P0(spinlock_t *s, int *x, int *y)
{
int r0;

spin_lock(s);
WRITE_ONCE(*x, 1);
r0 = READ_ONCE(*y);
spin_unlock(s);
}

P1(spinlock_t *s, int *y, int *z)
{
int r0;

spin_lock(s);
smp_mb__after_unlock_lock();
WRITE_ONCE(*y, 1);
r0 = READ_ONCE(*z);
spin_unlock(s);
}

P2(int *z, int *x)
{
int r0;

WRITE_ONCE(*z, 1);
smp_mb();
r0 = READ_ONCE(*x);
}

exists (0:r0=0 /\ 1:r0=0 /\ 2:r0=0)

Signed-off-by: Andrea Parri 
Signed-off-by: Paul E. McKenney 
Cc: Akira Yokosawa 
Cc: Alan Stern 
Cc: Boqun Feng 
Cc: Daniel Lustig 
Cc: David Howells 
Cc: Jade Alglave 
Cc: Linus Torvalds 
Cc: Luc Maranget 
Cc: Nicholas Piggin 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: Will Deacon 
Cc: linux-a...@vger.kernel.org
Cc: parri.and...@gmail.com
Link: http://lkml.kernel.org/r/20181203230451.28921-1-paul...@linux.ibm.com
Signed-off-by: Ingo Molnar 
---
 tools/memory-model/linux-kernel.bell | 3 ++-
 tools/memory-model/linux-kernel.cat  | 4 +++-
 tools/memory-model/linux-kernel.def  | 1 +
 3 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/tools/memory-model/linux-kernel.bell 
b/tools/memory-model/linux-kernel.bell
index b84fb2f67109..796513362c05 100644
--- a/tools/memory-model/linux-kernel.bell
+++ b/tools/memory-model/linux-kernel.bell
@@ -29,7 +29,8 @@ enum Barriers = 'wmb (*smp_wmb*) ||
'sync-rcu (*synchronize_rcu*) ||
'before-atomic (*smp_mb__before_atomic*) ||
'after-atomic (*smp_mb__after_atomic*) ||
-   'after-spinlock (*smp_mb__after_spinlock*)
+   'after-spinlock (*smp_mb__after_spinlock*) ||
+   'after-unlock-lock (*smp_mb__after_unlock_lock*)
 instructions F[Barriers]
 
 (* Compute matching pairs of nested Rcu-lock and Rcu-unlock *)
diff --git a/tools/memory-model/linux-kernel.cat 
b/tools/memory-model/linux-kernel.cat
index 882fc33274ac..8f23c74a96fd 100644
--- a/tools/memory-model/linux-kernel.cat
+++ b/tools/memory-model/linux-kernel.cat
@@ -30,7 +30,9 @@ let wmb = [W] ; fencerel(Wmb) ; [W]
 let mb = ([M] ; fencerel(Mb) ; [M]) |
([M] ; fencerel(Before-atomic) ; [RMW] ; po? ; [M]) |
([M] ; po? ; [RMW] ; fencerel(After-atomic) ; [M]) |
-   ([M] ; po? ; [LKW] ; fencerel(After-spinlock) ; [M])
+   ([M] ; po? ; [LKW] ; fencerel(After-spinlock) ; [M]) |
+   ([M] ; po ; [UL] ; (co | po) ; [LKW] ;
+   fencerel(After-unlock-lock) ; [M])
 let gp = po ; [Sync-rcu] ; po?
 
 let strong-fence = mb | gp
diff --git a/tools/memory-model/linux-kernel.def 
b/tools/memory-model/linux-kernel.def
index 6fa3eb28d40b..b27911cc087d 100644
--- a/tools/memory-model/linux-kernel.def
+++ b/tools/memory-model/linux-kernel.def
@@ -23,6 +23,7 @@ smp_wmb() { __fence{wmb}; }
 smp_mb__before_atomic() { __fence{before-atomic}; }
 smp_mb__after_atomic() { __fence{after-atomic}; }
 smp_mb__after_spinlock() { __fence{after-spinlock}; }
+smp_mb__after_unlock_lock() { __fence{after-unlock-lock}; }
 
 // Exchange
 xchg(X,V)  __xchg{mb}(X,V)


[tip:locking/core] sched/fair: Clean up comment in nohz_idle_balance()

2018-12-11 Thread tip-bot for Andrea Parri
Commit-ID:  80eb865768703c0f85a0603762742ae1dedf21f0
Gitweb: https://git.kernel.org/tip/80eb865768703c0f85a0603762742ae1dedf21f0
Author: Andrea Parri 
AuthorDate: Tue, 27 Nov 2018 12:01:10 +0100
Committer:  Ingo Molnar 
CommitDate: Tue, 11 Dec 2018 14:54:57 +0100

sched/fair: Clean up comment in nohz_idle_balance()

Concerning the comment associated with the atomic_fetch_andnot() in
nohz_idle_balance(), Vincent explains [1]:

  "[...] the comment is useless and can be removed [...]  it was
   referring to a line code above the comment that was present in
   a previous iteration of the patchset. This line disappeared in
   final version but the comment has stayed."

So remove the comment.

Vincent also points out that the full ordering associated to the
atomic_fetch_andnot() primitive could be relaxed, but this patch
insists on the current more conservative/fully ordered solution:

"Performance" isn't a concern, stay away from "correctness"/subtle
relaxed (re)ordering if possible..., just make sure not to confuse
the next reader with misleading/out-of-date comments.

[1] http://lkml.kernel.org/r/cakftptbja-ocbrko6__npqwl3+hljzk7riccpu1r7ydo-ep...@mail.gmail.com
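
For reference, the relaxed variant alluded to above would be a one-line
change along the lines below (a hypothetical sketch only; the patch
deliberately keeps the fully ordered primitive):

	/* Hypothetical relaxation, NOT part of this patch: */
	flags = atomic_fetch_andnot_relaxed(NOHZ_KICK_MASK, nohz_flags(this_cpu));
	if (!(flags & NOHZ_KICK_MASK))
		return false;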

Suggested-by: Vincent Guittot 
Signed-off-by: Andrea Parri 
Signed-off-by: Peter Zijlstra (Intel) 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Link: https://lkml.kernel.org/r/20181127110110.5533-1-andrea.pa...@amarulasolutions.com
Signed-off-by: Ingo Molnar 
---
 kernel/sched/fair.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index ac855b2f4774..db514993565b 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -9533,9 +9533,7 @@ static bool nohz_idle_balance(struct rq *this_rq, enum 
cpu_idle_type idle)
return false;
}
 
-   /*
-* barrier, pairs with nohz_balance_enter_idle(), ensures ...
-*/
+   /* could be _relaxed() */
flags = atomic_fetch_andnot(NOHZ_KICK_MASK, nohz_flags(this_cpu));
if (!(flags & NOHZ_KICK_MASK))
return false;


[tip:locking/core] tools/memory-model: Model smp_mb__after_unlock_lock()

2018-12-03 Thread tip-bot for Andrea Parri
Commit-ID:  4607abbcf464ea2be14da444215d05c73025cf6e
Gitweb: https://git.kernel.org/tip/4607abbcf464ea2be14da444215d05c73025cf6e
Author: Andrea Parri 
AuthorDate: Mon, 3 Dec 2018 15:04:49 -0800
Committer:  Ingo Molnar 
CommitDate: Tue, 4 Dec 2018 07:29:51 +0100

tools/memory-model: Model smp_mb__after_unlock_lock()

The kernel documents smp_mb__after_unlock_lock() the following way:

  "Place this after a lock-acquisition primitive to guarantee that
   an UNLOCK+LOCK pair acts as a full barrier.  This guarantee applies
   if the UNLOCK and LOCK are executed by the same CPU or if the
   UNLOCK and LOCK operate on the same lock variable."

Formalize in LKMM the above guarantee by defining (new) mb-links according
to the law:

  ([M] ; po ; [UL] ; (co | po) ; [LKW] ;
fencerel(After-unlock-lock) ; [M])

where the component ([UL] ; co ; [LKW]) identifies "UNLOCK+LOCK pairs on
the same lock variable" and the component ([UL] ; po ; [LKW]) identifies
"UNLOCK+LOCK pairs executed by the same CPU".

In particular, the LKMM forbids the following two behaviors (the second
litmus test below is based on:

  Documentation/RCU/Design/Memory-Ordering/Tree-RCU-Memory-Ordering.html

c.f., Section "Tree RCU Grace Period Memory Ordering Building Blocks"):

C after-unlock-lock-same-cpu

(*
 * Result: Never
 *)

{}

P0(spinlock_t *s, spinlock_t *t, int *x, int *y)
{
int r0;

spin_lock(s);
WRITE_ONCE(*x, 1);
spin_unlock(s);
spin_lock(t);
smp_mb__after_unlock_lock();
r0 = READ_ONCE(*y);
spin_unlock(t);
}

P1(int *x, int *y)
{
int r0;

WRITE_ONCE(*y, 1);
smp_mb();
r0 = READ_ONCE(*x);
}

exists (0:r0=0 /\ 1:r0=0)

C after-unlock-lock-same-lock-variable

(*
 * Result: Never
 *)

{}

P0(spinlock_t *s, int *x, int *y)
{
int r0;

spin_lock(s);
WRITE_ONCE(*x, 1);
r0 = READ_ONCE(*y);
spin_unlock(s);
}

P1(spinlock_t *s, int *y, int *z)
{
int r0;

spin_lock(s);
smp_mb__after_unlock_lock();
WRITE_ONCE(*y, 1);
r0 = READ_ONCE(*z);
spin_unlock(s);
}

P2(int *z, int *x)
{
int r0;

WRITE_ONCE(*z, 1);
smp_mb();
r0 = READ_ONCE(*x);
}

exists (0:r0=0 /\ 1:r0=0 /\ 2:r0=0)

Signed-off-by: Andrea Parri 
Signed-off-by: Paul E. McKenney 
Cc: Akira Yokosawa 
Cc: Alan Stern 
Cc: Boqun Feng 
Cc: Daniel Lustig 
Cc: David Howells 
Cc: Jade Alglave 
Cc: Linus Torvalds 
Cc: Luc Maranget 
Cc: Nicholas Piggin 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: Will Deacon 
Cc: linux-a...@vger.kernel.org
Cc: parri.and...@gmail.com
Link: http://lkml.kernel.org/r/20181203230451.28921-1-paul...@linux.ibm.com
Signed-off-by: Ingo Molnar 
---
 tools/memory-model/linux-kernel.bell | 3 ++-
 tools/memory-model/linux-kernel.cat  | 4 +++-
 tools/memory-model/linux-kernel.def  | 1 +
 3 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/tools/memory-model/linux-kernel.bell 
b/tools/memory-model/linux-kernel.bell
index b84fb2f67109..796513362c05 100644
--- a/tools/memory-model/linux-kernel.bell
+++ b/tools/memory-model/linux-kernel.bell
@@ -29,7 +29,8 @@ enum Barriers = 'wmb (*smp_wmb*) ||
'sync-rcu (*synchronize_rcu*) ||
'before-atomic (*smp_mb__before_atomic*) ||
'after-atomic (*smp_mb__after_atomic*) ||
-   'after-spinlock (*smp_mb__after_spinlock*)
+   'after-spinlock (*smp_mb__after_spinlock*) ||
+   'after-unlock-lock (*smp_mb__after_unlock_lock*)
 instructions F[Barriers]
 
 (* Compute matching pairs of nested Rcu-lock and Rcu-unlock *)
diff --git a/tools/memory-model/linux-kernel.cat 
b/tools/memory-model/linux-kernel.cat
index 882fc33274ac..8f23c74a96fd 100644
--- a/tools/memory-model/linux-kernel.cat
+++ b/tools/memory-model/linux-kernel.cat
@@ -30,7 +30,9 @@ let wmb = [W] ; fencerel(Wmb) ; [W]
 let mb = ([M] ; fencerel(Mb) ; [M]) |
([M] ; fencerel(Before-atomic) ; [RMW] ; po? ; [M]) |
([M] ; po? ; [RMW] ; fencerel(After-atomic) ; [M]) |
-   ([M] ; po? ; [LKW] ; fencerel(After-spinlock) ; [M])
+   ([M] ; po? ; [LKW] ; fencerel(After-spinlock) ; [M]) |
+   ([M] ; po ; [UL] ; (co | po) ; [LKW] ;
+   fencerel(After-unlock-lock) ; [M])
 let gp = po ; [Sync-rcu] ; po?
 
 let strong-fence = mb | gp
diff --git a/tools/memory-model/linux-kernel.def 
b/tools/memory-model/linux-kernel.def
index 6fa3eb28d40b..b27911cc087d 100644
--- a/tools/memory-model/linux-kernel.def
+++ b/tools/memory-model/linux-kernel.def
@@ -23,6 +23,7 @@ smp_wmb() { __fence{wmb}; }
 smp_mb__before_atomic() { __fence{before-atomic}; }
 smp_mb__after_atomic() { __fence{after-atomic}; }
 smp_mb__after_spinlock() { __fence{after-spinlock}; }
+smp_mb__after_unlock_lock() { __fence{after-unlock-lock}; }
 
 // Exchange
 xchg(X,V)  __xchg{mb}(X,V)


[tip:perf/urgent] uprobes: Fix handle_swbp() vs. unregister() + register() race once more

2018-11-22 Thread tip-bot for Andrea Parri
Commit-ID:  09d3f015d1e1b4fee7e9bbdcf54201d239393391
Gitweb: https://git.kernel.org/tip/09d3f015d1e1b4fee7e9bbdcf54201d239393391
Author: Andrea Parri 
AuthorDate: Thu, 22 Nov 2018 17:10:31 +0100
Committer:  Ingo Molnar 
CommitDate: Fri, 23 Nov 2018 08:31:19 +0100

uprobes: Fix handle_swbp() vs. unregister() + register() race once more

Commit:

  142b18ddc8143 ("uprobes: Fix handle_swbp() vs unregister() + register() race")

added the UPROBE_COPY_INSN flag, and corresponding smp_wmb() and smp_rmb()
memory barriers, to ensure that handle_swbp() uses fully-initialized
uprobes only.

However, the smp_rmb() is mis-placed: this barrier should be placed
after handle_swbp() has tested for the flag, thus guaranteeing that
(program-order) subsequent loads from the uprobe can see the initial
stores performed by prepare_uprobe().

Move the smp_rmb() accordingly.  Also amend the comments associated
with the two memory barriers to indicate their actual locations.
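
Schematically, the intended pairing is the usual flag-publication idiom;
a sketch (only the accesses relevant to the ordering are shown):

	/* prepare_uprobe(), writer side */
	/* ... fully initialize uprobe->arch ... */
	smp_wmb();					/* pairs with the smp_rmb() below */
	set_bit(UPROBE_COPY_INSN, &uprobe->flags);	/* publish the uprobe */

	/* handle_swbp(), reader side */
	if (unlikely(!test_bit(UPROBE_COPY_INSN, &uprobe->flags)))
		goto out;				/* not yet published */
	smp_rmb();					/* placed AFTER the flag test */
	/* ... later loads from uprobe->arch see prepare_uprobe()'s stores ... */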

Signed-off-by: Andrea Parri 
Acked-by: Oleg Nesterov 
Cc: Alexander Shishkin 
Cc: Andrew Morton 
Cc: Arnaldo Carvalho de Melo 
Cc: Jiri Olsa 
Cc: Linus Torvalds 
Cc: Namhyung Kim 
Cc: Paul E. McKenney 
Cc: Peter Zijlstra 
Cc: Stephane Eranian 
Cc: Thomas Gleixner 
Cc: Vince Weaver 
Cc: sta...@kernel.org
Fixes: 142b18ddc8143 ("uprobes: Fix handle_swbp() vs unregister() + register() race")
Link: http://lkml.kernel.org/r/20181122161031.15179-1-andrea.pa...@amarulasolutions.com
Signed-off-by: Ingo Molnar 
---
 kernel/events/uprobes.c | 12 ++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 96d4bee83489..322e97bbb437 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -829,7 +829,7 @@ static int prepare_uprobe(struct uprobe *uprobe, struct 
file *file,
BUG_ON((uprobe->offset & ~PAGE_MASK) +
UPROBE_SWBP_INSN_SIZE > PAGE_SIZE);
 
-   smp_wmb(); /* pairs with rmb() in find_active_uprobe() */
+   smp_wmb(); /* pairs with the smp_rmb() in handle_swbp() */
	set_bit(UPROBE_COPY_INSN, &uprobe->flags);
 
  out:
@@ -2178,10 +2178,18 @@ static void handle_swbp(struct pt_regs *regs)
 * After we hit the bp, _unregister + _register can install the
 * new and not-yet-analyzed uprobe at the same address, restart.
 */
-   smp_rmb(); /* pairs with wmb() in install_breakpoint() */
	if (unlikely(!test_bit(UPROBE_COPY_INSN, &uprobe->flags)))
goto out;
 
+   /*
+* Pairs with the smp_wmb() in prepare_uprobe().
+*
+* Guarantees that if we see the UPROBE_COPY_INSN bit set, then
+* we must also see the stores to &uprobe->arch performed by the
+* prepare_uprobe() call.
+*/
+   smp_rmb();
+
/* Tracing handlers use ->utask to communicate with fetch methods */
if (!get_utask())
goto out;


[tip:locking/core] locking/memory-barriers: Replace smp_cond_acquire() with smp_cond_load_acquire()

2018-10-02 Thread tip-bot for Andrea Parri
Commit-ID:  2f359c7ea554ba9cb78a52c82bedff351cdabd2b
Gitweb: https://git.kernel.org/tip/2f359c7ea554ba9cb78a52c82bedff351cdabd2b
Author: Andrea Parri 
AuthorDate: Wed, 26 Sep 2018 11:29:20 -0700
Committer:  Ingo Molnar 
CommitDate: Tue, 2 Oct 2018 10:28:05 +0200

locking/memory-barriers: Replace smp_cond_acquire() with smp_cond_load_acquire()

Amend the changes in commit:

  1f03e8d2919270 ("locking/barriers: Replace smp_cond_acquire() with smp_cond_load_acquire()")

... by updating the documentation accordingly.

Also remove some obsolete information related to the implementation.
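
As a reminder of the renamed primitive's shape (a usage sketch, not part
of this patch; VAL names the value just loaded by the macro),
smp_cond_load_acquire() spins until the condition holds and then provides
ACQUIRE ordering:

	/* e.g. wait for ->on_cpu to drop to zero, with ACQUIRE semantics */
	smp_cond_load_acquire(&p->on_cpu, !VAL);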

Signed-off-by: Andrea Parri 
Signed-off-by: Paul E. McKenney 
Acked-by: Will Deacon 
Acked-by: Alan Stern 
Cc: Akira Yokosawa 
Cc: Alexander Shishkin 
Cc: Arnaldo Carvalho de Melo 
Cc: Boqun Feng 
Cc: Daniel Lustig 
Cc: David Howells 
Cc: Jade Alglave 
Cc: Jiri Olsa 
Cc: Jonathan Corbet 
Cc: Linus Torvalds 
Cc: Luc Maranget 
Cc: Nicholas Piggin 
Cc: Peter Zijlstra 
Cc: Stephane Eranian 
Cc: Thomas Gleixner 
Cc: Vince Weaver 
Cc: linux-a...@vger.kernel.org
Cc: parri.and...@gmail.com
Link: http://lkml.kernel.org/r/20180926182920.27644-5-paul...@linux.ibm.com
Signed-off-by: Ingo Molnar 
---
 Documentation/memory-barriers.txt | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/Documentation/memory-barriers.txt 
b/Documentation/memory-barriers.txt
index 0d8d7ef131e9..c1d913944ad8 100644
--- a/Documentation/memory-barriers.txt
+++ b/Documentation/memory-barriers.txt
@@ -471,8 +471,7 @@ And a couple of implicit varieties:
  operations after the ACQUIRE operation will appear to happen after the
  ACQUIRE operation with respect to the other components of the system.
  ACQUIRE operations include LOCK operations and both smp_load_acquire()
- and smp_cond_acquire() operations. The later builds the necessary ACQUIRE
- semantics from relying on a control dependency and smp_rmb().
+ and smp_cond_load_acquire() operations.
 
  Memory operations that occur before an ACQUIRE operation may appear to
  happen after it completes.


[tip:locking/core] tools/memory-model: Rename litmus tests to comply to norm7

2018-07-17 Thread tip-bot for Andrea Parri
Commit-ID:  71b7ff5ebc9b1d5aa95eb48d6388234f1304fd19
Gitweb: https://git.kernel.org/tip/71b7ff5ebc9b1d5aa95eb48d6388234f1304fd19
Author: Andrea Parri 
AuthorDate: Mon, 16 Jul 2018 11:06:05 -0700
Committer:  Ingo Molnar 
CommitDate: Tue, 17 Jul 2018 09:30:36 +0200

tools/memory-model: Rename litmus tests to comply to norm7

norm7 produces the 'normalized' name of a litmus test, when the test
can be generated from a single cycle that passes through each process
exactly once. The commit renames such tests in order to comply with the
naming scheme implemented by this tool.

Signed-off-by: Andrea Parri 
Signed-off-by: Paul E. McKenney 
Acked-by: Alan Stern 
Cc: Akira Yokosawa 
Cc: Boqun Feng 
Cc: David Howells 
Cc: Jade Alglave 
Cc: Linus Torvalds 
Cc: Luc Maranget 
Cc: Nicholas Piggin 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: Will Deacon 
Cc: linux-a...@vger.kernel.org
Cc: parri.and...@gmail.com
Link: http://lkml.kernel.org/r/20180716180605.16115-14-paul...@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar 
---
 tools/memory-model/Documentation/recipes.txt |  8 
 tools/memory-model/README| 20 ++--
 litmus => IRIW+fencembonceonces+OnceOnce.litmus} |  2 +-
 ...litmus => LB+fencembonceonce+ctrlonceonce.litmus} |  2 +-
 ...s => MP+fencewmbonceonce+fencermbonceonce.litmus} |  2 +-
 ...+mbonceonces.litmus => R+fencembonceonces.litmus} |  2 +-
 tools/memory-model/litmus-tests/README   | 16 
 ...itmus => S+fencewmbonceonce+poacquireonce.litmus} |  2 +-
 ...mbonceonces.litmus => SB+fencembonceonces.litmus} |  2 +-
 ...> WRC+pooncerelease+fencermbonceonce+Once.litmus} |  2 +-
 ...erelease+poacquirerelease+fencembonceonce.litmus} |  2 +-
 11 files changed, 30 insertions(+), 30 deletions(-)

diff --git a/tools/memory-model/Documentation/recipes.txt 
b/tools/memory-model/Documentation/recipes.txt
index 1fea8ef2b184..af72700cc20a 100644
--- a/tools/memory-model/Documentation/recipes.txt
+++ b/tools/memory-model/Documentation/recipes.txt
@@ -126,7 +126,7 @@ However, it is not necessarily the case that accesses 
ordered by
 locking will be seen as ordered by CPUs not holding that lock.
 Consider this example:
 
-   /* See Z6.0+pooncelock+pooncelock+pombonce.litmus. */
+   /* See Z6.0+pooncerelease+poacquirerelease+fencembonceonce.litmus. */
void CPU0(void)
{
	spin_lock(&mylock);
@@ -292,7 +292,7 @@ and to use smp_load_acquire() instead of smp_rmb().  
However, the older
 smp_wmb() and smp_rmb() APIs are still heavily used, so it is important
 to understand their use cases.  The general approach is shown below:
 
-   /* See MP+wmbonceonce+rmbonceonce.litmus. */
+   /* See MP+fencewmbonceonce+fencermbonceonce.litmus. */
void CPU0(void)
{
WRITE_ONCE(x, 1);
@@ -360,7 +360,7 @@ can be seen in the LB+poonceonces.litmus litmus test.
 One way of avoiding the counter-intuitive outcome is through the use of a
 control dependency paired with a full memory barrier:
 
-   /* See LB+ctrlonceonce+mbonceonce.litmus. */
+   /* See LB+fencembonceonce+ctrlonceonce.litmus. */
void CPU0(void)
{
r0 = READ_ONCE(x);
@@ -476,7 +476,7 @@ that one CPU first stores to one variable and then loads 
from a second,
 while another CPU stores to the second variable and then loads from the
 first.  Preserving order requires nothing less than full barriers:
 
-   /* See SB+mbonceonces.litmus. */
+   /* See SB+fencembonceonces.litmus. */
void CPU0(void)
{
WRITE_ONCE(x, 1);
diff --git a/tools/memory-model/README b/tools/memory-model/README
index 734f7feaa5dc..ee987ce20aae 100644
--- a/tools/memory-model/README
+++ b/tools/memory-model/README
@@ -35,13 +35,13 @@ BASIC USAGE: HERD7
 The memory model is used, in conjunction with "herd7", to exhaustively
 explore the state space of small litmus tests.
 
-For example, to run SB+mbonceonces.litmus against the memory model:
+For example, to run SB+fencembonceonces.litmus against the memory model:
 
-  $ herd7 -conf linux-kernel.cfg litmus-tests/SB+mbonceonces.litmus
+  $ herd7 -conf linux-kernel.cfg litmus-tests/SB+fencembonceonces.litmus
 
 Here is the corresponding output:
 
-  Test SB+mbonceonces Allowed
+  Test SB+fencembonceonces Allowed
   States 3
   0:r0=0; 1:r0=1;
   0:r0=1; 1:r0=0;
@@ -50,8 +50,8 @@ Here is the corresponding output:
   Witnesses
   Positive: 0 Negative: 3
   Condition exists (0:r0=0 /\ 1:r0=0)
-  Observation SB+mbonceonces Never 0 3
-  Time SB+mbonceonces 0.01
+  Observation SB+fencembonceonces Never 0 3
+  Time SB+fencembonceonces 0.01
   Hash=d66d99523e2cac6b06e66f4c995ebb48
 
 The "Positive: 0 Negative: 3" and the "Never 0 3" each indicate that
@@ -67,16 +67,16 @@ BASIC USAGE: KLITMUS7
 The "klitmus7" tool converts a litmus test into a Linux kernel module,
 which may then be loaded and run.
 
-For example, to run SB+mbonceonces.litmus against 

[tip:locking/core] sched/Documentation: Update wake_up() & co. memory-barrier guarantees

2018-07-17 Thread tip-bot for Andrea Parri
Commit-ID:  7696f9910a9a40b8a952f57d3428515fabd2d889
Gitweb: https://git.kernel.org/tip/7696f9910a9a40b8a952f57d3428515fabd2d889
Author: Andrea Parri 
AuthorDate: Mon, 16 Jul 2018 11:06:03 -0700
Committer:  Ingo Molnar 
CommitDate: Tue, 17 Jul 2018 09:30:34 +0200

sched/Documentation: Update wake_up() & co. memory-barrier guarantees

Both the implementation and the users' expectation [1] for the various
wakeup primitives have evolved over time, but the documentation has not
kept up with these changes: brings it into 2018.

[1] http://lkml.kernel.org/r/20180424091510.gb4...@hirez.programming.kicks-ass.net

Also applied feedback from Alan Stern.
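
For orientation, the idiom being documented is the familiar sleeper/waker
pattern (a sketch; event_indicated and event_daemon are the illustrative
names already used by memory-barriers.txt):

	/* sleeper */
	for (;;) {
		set_current_state(TASK_UNINTERRUPTIBLE);
		if (event_indicated)
			break;
		schedule();
	}
	__set_current_state(TASK_RUNNING);

	/* waker */
	event_indicated = 1;
	wake_up_process(event_daemon);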

Suggested-by: Peter Zijlstra 
Signed-off-by: Andrea Parri 
Signed-off-by: Paul E. McKenney 
Acked-by: Peter Zijlstra (Intel) 
Cc: Akira Yokosawa 
Cc: Alan Stern 
Cc: Boqun Feng 
Cc: Daniel Lustig 
Cc: David Howells 
Cc: Jade Alglave 
Cc: Jonathan Corbet 
Cc: Linus Torvalds 
Cc: Luc Maranget 
Cc: Nicholas Piggin 
Cc: Thomas Gleixner 
Cc: Will Deacon 
Cc: linux-a...@vger.kernel.org
Cc: parri.and...@gmail.com
Link: http://lkml.kernel.org/r/20180716180605.16115-12-paul...@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar 
---
 Documentation/memory-barriers.txt | 43 ---
 include/linux/sched.h |  4 ++--
 kernel/sched/completion.c |  8 
 kernel/sched/core.c   | 30 +++
 kernel/sched/wait.c   |  8 
 5 files changed, 49 insertions(+), 44 deletions(-)

diff --git a/Documentation/memory-barriers.txt 
b/Documentation/memory-barriers.txt
index a02d6bbfc9d0..0d8d7ef131e9 100644
--- a/Documentation/memory-barriers.txt
+++ b/Documentation/memory-barriers.txt
@@ -2179,32 +2179,41 @@ or:
event_indicated = 1;
wake_up_process(event_daemon);
 
-A write memory barrier is implied by wake_up() and co.  if and only if they
-wake something up.  The barrier occurs before the task state is cleared, and so
-sits between the STORE to indicate the event and the STORE to set TASK_RUNNING:
+A general memory barrier is executed by wake_up() if it wakes something up.
+If it doesn't wake anything up then a memory barrier may or may not be
+executed; you must not rely on it.  The barrier occurs before the task state
+is accessed, in particular, it sits between the STORE to indicate the event
+and the STORE to set TASK_RUNNING:
 
-   CPU 1   CPU 2
+   CPU 1 (Sleeper) CPU 2 (Waker)
=== ===
set_current_state();STORE event_indicated
  smp_store_mb();   wake_up();
-   STORE current->state  
-STORE current->state
-   LOAD event_indicated
+   STORE current->state  ...
+
+   LOAD event_indicated  if ((LOAD task->state) & TASK_NORMAL)
+   STORE task->state
 
-To repeat, this write memory barrier is present if and only if something
-is actually awakened.  To see this, consider the following sequence of
-events, where X and Y are both initially zero:
+where "task" is the thread being woken up and it equals CPU 1's "current".
+
+To repeat, a general memory barrier is guaranteed to be executed by wake_up()
+if something is actually awakened, but otherwise there is no such guarantee.
+To see this, consider the following sequence of events, where X and Y are both
+initially zero:
 
CPU 1   CPU 2
=== ===
-   X = 1;  STORE event_indicated
+   X = 1;  Y = 1;
smp_mb();   wake_up();
-   Y = 1;  wait_event(wq, Y == 1);
-   wake_up();load from Y sees 1, no memory barrier
-   load from X might see 0
+   LOAD Y  LOAD X
+
+If a wakeup does occur, one (at least) of the two loads must see 1.  If, on
+the other hand, a wakeup does not occur, both loads might see 0.
 
-In contrast, if a wakeup does occur, CPU 2's load from X would be guaranteed
-to see 1.
+wake_up_process() always executes a general memory barrier.  The barrier again
+occurs before the task state is accessed.  In particular, if the wake_up() in
+the previous snippet were replaced by a call to wake_up_process() then one of
+the two loads would be guaranteed to see 1.
 
 The available waker functions include:
 
@@ -2224,6 +2233,8 @@ The available waker functions include:
wake_up_poll();
wake_up_process();
 
+In terms of memory ordering, these functions all provide the same guarantees of
+a wake_up() (or stronger).
 
 [!] Note that the memory barriers implied by the sleeper and the waker do _not_
 order multiple stores before the 

[tip:locking/core] locking/spinlock, sched/core: Clarify requirements for smp_mb__after_spinlock()

2018-07-17 Thread tip-bot for Andrea Parri
Commit-ID:  3d85b2703783636366560c94842affd8608ec9d1
Gitweb: https://git.kernel.org/tip/3d85b2703783636366560c94842affd8608ec9d1
Author: Andrea Parri 
AuthorDate: Mon, 16 Jul 2018 11:06:02 -0700
Committer:  Ingo Molnar 
CommitDate: Tue, 17 Jul 2018 09:30:33 +0200

locking/spinlock, sched/core: Clarify requirements for smp_mb__after_spinlock()

There are 11 interpretations of the requirements described in the header
comment for smp_mb__after_spinlock(): one for each LKMM maintainer, and
one currently encoded in the Cat file. Stick to the latter (until a more
satisfactory solution is available).

This also reworks some snippets related to the barrier to illustrate the
requirements and to link them to the idioms which are relied upon at its
call sites.
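
Schematically, at its call sites (__schedule() and try_to_wake_up()) the
barrier sits right after the lock acquisition; a sketch of the idiom (not
the exact kernel code):

	raw_spin_lock(&rq->lock);
	smp_mb__after_spinlock();	/* upgrade the ACQUIRE to a full barrier */

	/*
	 * Program-order later loads (e.g. of p->state or p->on_rq) are now
	 * ordered against program-order earlier stores by this CPU.
	 */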

Suggested-by: Boqun Feng 
Signed-off-by: Andrea Parri 
Signed-off-by: Paul E. McKenney 
Acked-by: Peter Zijlstra 
Cc: Linus Torvalds 
Cc: Thomas Gleixner 
Cc: Will Deacon 
Cc: aki...@gmail.com
Cc: dhowe...@redhat.com
Cc: j.algl...@ucl.ac.uk
Cc: linux-a...@vger.kernel.org
Cc: luc.maran...@inria.fr
Cc: npig...@gmail.com
Cc: parri.and...@gmail.com
Cc: st...@rowland.harvard.edu
Link: http://lkml.kernel.org/r/20180716180605.16115-11-paul...@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar 
---
 include/linux/spinlock.h | 53 
 kernel/sched/core.c  | 41 +++--
 2 files changed, 57 insertions(+), 37 deletions(-)

diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
index fd57888d4942..3190997df9ca 100644
--- a/include/linux/spinlock.h
+++ b/include/linux/spinlock.h
@@ -114,29 +114,48 @@ do {  
\
 #endif /*arch_spin_is_contended*/
 
 /*
- * This barrier must provide two things:
+ * smp_mb__after_spinlock() provides the equivalent of a full memory barrier
+ * between program-order earlier lock acquisitions and program-order later
+ * memory accesses.
  *
- *   - it must guarantee a STORE before the spin_lock() is ordered against a
- * LOAD after it, see the comments at its two usage sites.
+ * This guarantees that the following two properties hold:
  *
- *   - it must ensure the critical section is RCsc.
+ *   1) Given the snippet:
  *
- * The latter is important for cases where we observe values written by other
- * CPUs in spin-loops, without barriers, while being subject to scheduling.
+ *   { X = 0;  Y = 0; }
  *
- * CPU0CPU1CPU2
+ *   CPU0  CPU1
  *
- * for (;;) {
- *   if (READ_ONCE(X))
- * break;
- * }
- * X=1
- *			<sched-out>
- *						<sched-in>
- * r = X;
+ *   WRITE_ONCE(X, 1); WRITE_ONCE(Y, 1);
+ *   spin_lock(S); smp_mb();
+ *   smp_mb__after_spinlock(); r1 = READ_ONCE(X);
+ *   r0 = READ_ONCE(Y);
+ *   spin_unlock(S);
  *
- * without transitivity it could be that CPU1 observes X!=0 breaks the loop,
- * we get migrated and CPU2 sees X==0.
+ *  it is forbidden that CPU0 does not observe CPU1's store to Y (r0 = 0)
+ *  and CPU1 does not observe CPU0's store to X (r1 = 0); see the comments
+ *  preceding the call to smp_mb__after_spinlock() in __schedule() and in
+ *  try_to_wake_up().
+ *
+ *   2) Given the snippet:
+ *
+ *  { X = 0;  Y = 0; }
+ *
+ *  CPU0   CPU1CPU2
+ *
+ *  spin_lock(S);  spin_lock(S);   r1 = READ_ONCE(Y);
+ *  WRITE_ONCE(X, 1);  smp_mb__after_spinlock();   smp_rmb();
+ *  spin_unlock(S);r0 = READ_ONCE(X);  r2 = READ_ONCE(X);
+ * WRITE_ONCE(Y, 1);
+ * spin_unlock(S);
+ *
+ *  it is forbidden that CPU0's critical section executes before CPU1's
+ *  critical section (r0 = 1), CPU2 observes CPU1's store to Y (r1 = 1)
+ *  and CPU2 does not observe CPU0's store to X (r2 = 0); see the comments
+ *  preceding the calls to smp_rmb() in try_to_wake_up() for similar
+ *  snippets but "projected" onto two CPUs.
+ *
+ * Property (2) upgrades the lock to an RCsc lock.
  *
  * Since most load-store architectures implement ACQUIRE with an smp_mb() after
  * the LL/SC loop, they need no further barriers. Similarly all our TSO
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index fe365c9a08e9..0c5ec2abdf93 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1998,21 +1998,20 @@ try_to_wake_up(struct task_struct *p, unsigned int 
state, int wake_flags)
 * be possible to, falsely, observe p->on_rq == 0 and get stuck
 * in smp_cond_load_acquire() below.
 *
-* sched_ttwu_pending() try_to_wake_up()
-*   [S] p->on_rq = 1;  [L] P->state
-*   UNLOCK rq->lock  

[tip:locking/core] locking/spinlock, sched/core: Clarify requirements for smp_mb__after_spinlock()

2018-07-17 Thread tip-bot for Andrea Parri
Commit-ID:  3d85b2703783636366560c94842affd8608ec9d1
Gitweb: https://git.kernel.org/tip/3d85b2703783636366560c94842affd8608ec9d1
Author: Andrea Parri 
AuthorDate: Mon, 16 Jul 2018 11:06:02 -0700
Committer:  Ingo Molnar 
CommitDate: Tue, 17 Jul 2018 09:30:33 +0200

locking/spinlock, sched/core: Clarify requirements for smp_mb__after_spinlock()

There are 11 interpretations of the requirements described in the header
comment for smp_mb__after_spinlock(): one for each LKMM maintainer, and
one currently encoded in the Cat file. Stick to the latter (until a more
satisfactory solution is available).

This also reworks some snippets related to the barrier to illustrate the
requirements and to link them to the idioms which are relied upon at its
call sites.

Suggested-by: Boqun Feng 
Signed-off-by: Andrea Parri 
Signed-off-by: Paul E. McKenney 
Acked-by: Peter Zijlstra 
Cc: Linus Torvalds 
Cc: Thomas Gleixner 
Cc: Will Deacon 
Cc: aki...@gmail.com
Cc: dhowe...@redhat.com
Cc: j.algl...@ucl.ac.uk
Cc: linux-a...@vger.kernel.org
Cc: luc.maran...@inria.fr
Cc: npig...@gmail.com
Cc: parri.and...@gmail.com
Cc: st...@rowland.harvard.edu
Link: 
http://lkml.kernel.org/r/20180716180605.16115-11-paul...@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar 
---
 include/linux/spinlock.h | 53 
 kernel/sched/core.c  | 41 +++--
 2 files changed, 57 insertions(+), 37 deletions(-)

diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
index fd57888d4942..3190997df9ca 100644
--- a/include/linux/spinlock.h
+++ b/include/linux/spinlock.h
@@ -114,29 +114,48 @@ do {  
\
 #endif /*arch_spin_is_contended*/
 
 /*
- * This barrier must provide two things:
+ * smp_mb__after_spinlock() provides the equivalent of a full memory barrier
+ * between program-order earlier lock acquisitions and program-order later
+ * memory accesses.
  *
- *   - it must guarantee a STORE before the spin_lock() is ordered against a
- * LOAD after it, see the comments at its two usage sites.
+ * This guarantees that the following two properties hold:
  *
- *   - it must ensure the critical section is RCsc.
+ *   1) Given the snippet:
  *
- * The latter is important for cases where we observe values written by other
- * CPUs in spin-loops, without barriers, while being subject to scheduling.
+ *   { X = 0;  Y = 0; }
  *
- * CPU0CPU1CPU2
+ *   CPU0  CPU1
  *
- * for (;;) {
- *   if (READ_ONCE(X))
- * break;
- * }
- * X=1
- * 
- * 
- * r = X;
+ *   WRITE_ONCE(X, 1); WRITE_ONCE(Y, 1);
+ *   spin_lock(S); smp_mb();
+ *   smp_mb__after_spinlock(); r1 = READ_ONCE(X);
+ *   r0 = READ_ONCE(Y);
+ *   spin_unlock(S);
  *
- * without transitivity it could be that CPU1 observes X!=0 breaks the loop,
- * we get migrated and CPU2 sees X==0.
+ *  it is forbidden that CPU0 does not observe CPU1's store to Y (r0 = 0)
+ *  and CPU1 does not observe CPU0's store to X (r1 = 0); see the comments
+ *  preceding the call to smp_mb__after_spinlock() in __schedule() and in
+ *  try_to_wake_up().
+ *
+ *   2) Given the snippet:
+ *
+ *  { X = 0;  Y = 0; }
+ *
+ *  CPU0   CPU1CPU2
+ *
+ *  spin_lock(S);  spin_lock(S);   r1 = READ_ONCE(Y);
+ *  WRITE_ONCE(X, 1);  smp_mb__after_spinlock();   smp_rmb();
+ *  spin_unlock(S);r0 = READ_ONCE(X);  r2 = READ_ONCE(X);
+ * WRITE_ONCE(Y, 1);
+ * spin_unlock(S);
+ *
+ *  it is forbidden that CPU0's critical section executes before CPU1's
+ *  critical section (r0 = 1), CPU2 observes CPU1's store to Y (r1 = 1)
+ *  and CPU2 does not observe CPU0's store to X (r2 = 0); see the comments
+ *  preceding the calls to smp_rmb() in try_to_wake_up() for similar
+ *  snippets but "projected" onto two CPUs.
+ *
+ * Property (2) upgrades the lock to an RCsc lock.
  *
  * Since most load-store architectures implement ACQUIRE with an smp_mb() after
  * the LL/SC loop, they need no further barriers. Similarly all our TSO
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index fe365c9a08e9..0c5ec2abdf93 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1998,21 +1998,20 @@ try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
 * be possible to, falsely, observe p->on_rq == 0 and get stuck
 * in smp_cond_load_acquire() below.
 *
-* sched_ttwu_pending() try_to_wake_up()
-*   [S] p->on_rq = 1;  [L] P->state
-*   UNLOCK rq->lock  
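
To make the two documented properties concrete, here is a minimal litmus-test
sketch of property (1), assuming herd7 and the LKMM from tools/memory-model;
the test name and register names are illustrative and not part of the patch:

C smp-mb-after-spinlock-property1

(*
 * Expected result, per property (1) documented above: Never.
 * P0 is CPU0 of the snippet (write, lock, smp_mb__after_spinlock(), read),
 * P1 is CPU1 (write, smp_mb(), read); the "exists" clause is the forbidden
 * outcome r0 = 0 /\ r1 = 0.
 *)

{}

P0(int *x, int *y, spinlock_t *s)
{
        int r0;

        WRITE_ONCE(*x, 1);
        spin_lock(s);
        smp_mb__after_spinlock();
        r0 = READ_ONCE(*y);
        spin_unlock(s);
}

P1(int *x, int *y)
{
        int r1;

        WRITE_ONCE(*y, 1);
        smp_mb();
        r1 = READ_ONCE(*x);
}

exists (0:r0=0 /\ 1:r1=0)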

[tip:locking/core] sched/core: Use smp_mb() in wake_woken_function()

2018-07-17 Thread tip-bot for Andrea Parri
Commit-ID:  76e079fefc8f62bd9b2cd2950814d1ee806e31a5
Gitweb: https://git.kernel.org/tip/76e079fefc8f62bd9b2cd2950814d1ee806e31a5
Author: Andrea Parri 
AuthorDate: Mon, 16 Jul 2018 11:06:01 -0700
Committer:  Ingo Molnar 
CommitDate: Tue, 17 Jul 2018 09:30:33 +0200

sched/core: Use smp_mb() in wake_woken_function()

wake_woken_function() synchronizes with wait_woken() as follows:

  [wait_woken]                       [wake_woken_function]

  entry->flags &= ~wq_flag_woken;    condition = true;
  smp_mb();                          smp_wmb();
  if (condition)                     wq_entry->flags |= wq_flag_woken;
     break;

This commit replaces the above smp_wmb() with an smp_mb() in order to
guarantee that either wait_woken() sees the wait condition being true
or the store to wq_entry->flags in woken_wake_function() follows the
store in wait_woken() in the coherence order (so that the former can
eventually be observed by wait_woken()).

The commit also fixes a comment associated to set_current_state() in
wait_woken(): the comment pairs the barrier in set_current_state() to
the above smp_wmb(), while the actual pairing involves the barrier in
set_current_state() and the barrier executed by the try_to_wake_up()
in wake_woken_function().

Signed-off-by: Andrea Parri 
Signed-off-by: Paul E. McKenney 
Acked-by: Peter Zijlstra (Intel) 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: aki...@gmail.com
Cc: boqun.f...@gmail.com
Cc: dhowe...@redhat.com
Cc: j.algl...@ucl.ac.uk
Cc: linux-a...@vger.kernel.org
Cc: luc.maran...@inria.fr
Cc: npig...@gmail.com
Cc: parri.and...@gmail.com
Cc: st...@rowland.harvard.edu
Cc: will.dea...@arm.com
Link: 
http://lkml.kernel.org/r/20180716180605.16115-10-paul...@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar 
---
 kernel/sched/wait.c | 47 +--
 1 file changed, 21 insertions(+), 26 deletions(-)

diff --git a/kernel/sched/wait.c b/kernel/sched/wait.c
index 928be527477e..a7a2aaa3026a 100644
--- a/kernel/sched/wait.c
+++ b/kernel/sched/wait.c
@@ -392,35 +392,36 @@ static inline bool is_kthread_should_stop(void)
  * if (condition)
  * break;
  *
- * p->state = mode;                           condition = true;
- * smp_mb(); // A                             smp_wmb(); // C
- * if (!wq_entry->flags & WQ_FLAG_WOKEN)      wq_entry->flags |= WQ_FLAG_WOKEN;
- *     schedule()                             try_to_wake_up();
- * p->state = TASK_RUNNING;                   ~~~~~~~~~~~~~~~~~
- * wq_entry->flags &= ~WQ_FLAG_WOKEN;         condition = true;
- * smp_mb() // B                              smp_wmb(); // C
- *                                            wq_entry->flags |= WQ_FLAG_WOKEN;
- * }
- * remove_wait_queue(&wq_head, &wait);
+ * // in wait_woken()                         // in woken_wake_function()
  *
+ * p->state = mode;                           wq_entry->flags |= WQ_FLAG_WOKEN;
+ * smp_mb(); // A                             try_to_wake_up():
+ * if (!(wq_entry->flags & WQ_FLAG_WOKEN))       <full barrier>
+ *     schedule()                                if (p->state & mode)
+ * p->state = TASK_RUNNING;                         p->state = TASK_RUNNING;
+ * wq_entry->flags &= ~WQ_FLAG_WOKEN;            ~~~~~~~~~~~~~~~~~
+ * smp_mb(); // B                              condition = true;
+ * }                                           smp_mb(); // C
+ * remove_wait_queue(&wq_head, &wait);         wq_entry->flags |= WQ_FLAG_WOKEN;
  */
 long wait_woken(struct wait_queue_entry *wq_entry, unsigned mode, long timeout)
 {
-   set_current_state(mode); /* A */
/*
-* The above implies an smp_mb(), which matches with the smp_wmb() from
-* woken_wake_function() such that if we observe WQ_FLAG_WOKEN we must
-* also observe all state before the wakeup.
+* The below executes an smp_mb(), which matches with the full barrier
+* executed by the try_to_wake_up() in woken_wake_function() such that
+* either we see the store to wq_entry->flags in woken_wake_function()
+* or woken_wake_function() sees our store to current->state.
 */
+   set_current_state(mode); /* A */
if (!(wq_entry->flags & WQ_FLAG_WOKEN) && !is_kthread_should_stop())
timeout = schedule_timeout(timeout);
__set_current_state(TASK_RUNNING);
 
/*
-* The below implies an smp_mb(), it too pairs with the smp_wmb() from
-* woken_wake_function() such that we must either observe the wait
-* condition being true _OR_ WQ_FLAG_WOKEN such that we will not miss
-* an event.
+* The below executes an smp_mb(), which matches with the smp_mb() (C)
+* in woken_wake_function() such that either we see the wait condition
+* being true or the store to wq_entry->flags in woken_wake_function()
+* follows ours in the coherence order.
 */
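
The guarantee above is an instance of the "R"-like pattern: a store followed
by a full barrier and a load on the waiter side, against two stores separated
by a barrier on the waker side.  A litmus-test sketch of that abstraction,
assuming herd7 and the LKMM (the test name is illustrative; WQ_FLAG_WOKEN is
modelled as the value 1):

C wait-woken-vs-woken-wake-function

(*
 * Expected result with smp_mb() on both sides, per the commit message
 * above: Never.  P0 abstracts the tail of wait_woken() (clear the flag,
 * barrier B, read the wait condition); P1 abstracts woken_wake_function()
 * (set the condition, barrier C, set the flag).  "flags=0" in the exists
 * clause means the waker's flag store was coherence-before the waiter's
 * clearing store, i.e. the wakeup was lost; the commit message implies the
 * previous smp_wmb() was too weak to forbid this outcome.
 *)

{}

P0(int *flags, int *condition)
{
        int r0;

        WRITE_ONCE(*flags, 0);
        smp_mb();
        r0 = READ_ONCE(*condition);
}

P1(int *flags, int *condition)
{
        WRITE_ONCE(*condition, 1);
        smp_mb();
        WRITE_ONCE(*flags, 1);
}

exists (0:r0=0 /\ flags=0)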

[tip:locking/core] tools/memory-model: Add reference for 'Simplifying ARM concurrency'

2018-05-15 Thread tip-bot for Andrea Parri
Commit-ID:  99c12749b172758f6973fc023484f2fc8b91cd5a
Gitweb: https://git.kernel.org/tip/99c12749b172758f6973fc023484f2fc8b91cd5a
Author: Andrea Parri 
AuthorDate: Mon, 14 May 2018 16:33:57 -0700
Committer:  Ingo Molnar 
CommitDate: Tue, 15 May 2018 08:11:19 +0200

tools/memory-model: Add reference for 'Simplifying ARM concurrency'

The paper discusses the revised ARMv8 memory model; such revision
had an important impact on the design of the LKMM.

Signed-off-by: Andrea Parri 
Signed-off-by: Paul E. McKenney 
Cc: Akira Yokosawa 
Cc: Alan Stern 
Cc: Andrew Morton 
Cc: Boqun Feng 
Cc: David Howells 
Cc: Jade Alglave 
Cc: Linus Torvalds 
Cc: Luc Maranget 
Cc: Nicholas Piggin 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: Will Deacon 
Cc: linux-a...@vger.kernel.org
Cc: parri.and...@gmail.com
Link: 
http://lkml.kernel.org/r/1526340837-1-19-git-send-email-paul...@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar 
---
 tools/memory-model/Documentation/references.txt | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/tools/memory-model/Documentation/references.txt b/tools/memory-model/Documentation/references.txt
index 74f448f2616a..b177f3e4a614 100644
--- a/tools/memory-model/Documentation/references.txt
+++ b/tools/memory-model/Documentation/references.txt
@@ -63,6 +63,12 @@ o  Shaked Flur, Susmit Sarkar, Christopher Pulte, Kyndylan Nienhuis,
Principles of Programming Languages (POPL 2017). ACM, New York,
NY, USA, 429–442.
 
+o  Christopher Pulte, Shaked Flur, Will Deacon, Jon French,
+   Susmit Sarkar, and Peter Sewell. 2018. "Simplifying ARM concurrency:
+   multicopy-atomic axiomatic and operational models for ARMv8". In
+   Proceedings of the ACM on Programming Languages, Volume 2, Issue
+   POPL, Article No. 19. ACM, New York, NY, USA.
+
 
 Linux-kernel memory model
 =


[tip:locking/core] tools/memory-model: Update ASPLOS information

2018-05-15 Thread tip-bot for Andrea Parri
Commit-ID:  1a00b4554d477f05199e22ee71ba4c2525ca44cb
Gitweb: https://git.kernel.org/tip/1a00b4554d477f05199e22ee71ba4c2525ca44cb
Author: Andrea Parri 
AuthorDate: Mon, 14 May 2018 16:33:56 -0700
Committer:  Ingo Molnar 
CommitDate: Tue, 15 May 2018 08:11:18 +0200

tools/memory-model: Update ASPLOS information

ASPLOS 2018 was held in March: make sure this is reflected in
header comments and references.

Signed-off-by: Andrea Parri 
Signed-off-by: Paul E. McKenney 
Cc: Akira Yokosawa 
Cc: Alan Stern 
Cc: Andrew Morton 
Cc: Boqun Feng 
Cc: David Howells 
Cc: Jade Alglave 
Cc: Linus Torvalds 
Cc: Luc Maranget 
Cc: Nicholas Piggin 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: Will Deacon 
Cc: linux-a...@vger.kernel.org
Cc: parri.and...@gmail.com
Link: 
http://lkml.kernel.org/r/1526340837-1-18-git-send-email-paul...@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar 
---
 tools/memory-model/Documentation/references.txt | 11 ++-
 tools/memory-model/linux-kernel.bell|  4 ++--
 tools/memory-model/linux-kernel.cat |  4 ++--
 tools/memory-model/linux-kernel.def |  4 ++--
 4 files changed, 12 insertions(+), 11 deletions(-)

diff --git a/tools/memory-model/Documentation/references.txt b/tools/memory-model/Documentation/references.txt
index ba2e34c2ec3f..74f448f2616a 100644
--- a/tools/memory-model/Documentation/references.txt
+++ b/tools/memory-model/Documentation/references.txt
@@ -67,11 +67,12 @@ o  Shaked Flur, Susmit Sarkar, Christopher Pulte, Kyndylan Nienhuis,
 Linux-kernel memory model
 =
 
-o  Andrea Parri, Alan Stern, Luc Maranget, Paul E. McKenney,
-   and Jade Alglave.  2017. "A formal model of
-   Linux-kernel memory ordering - companion webpage".
-   http://moscova.inria.fr/∼maranget/cats7/linux/. (2017). [Online;
-   accessed 30-January-2017].
+o  Jade Alglave, Luc Maranget, Paul E. McKenney, Andrea Parri, and
+   Alan Stern.  2018. "Frightening small children and disconcerting
+   grown-ups: Concurrency in the Linux kernel". In Proceedings of
+   the 23rd International Conference on Architectural Support for
+   Programming Languages and Operating Systems (ASPLOS 2018). ACM,
+   New York, NY, USA, 405-418.  Webpage: http://diy.inria.fr/linux/.
 
 o  Jade Alglave, Luc Maranget, Paul E. McKenney, Andrea Parri, and
Alan Stern.  2017.  "A formal kernel memory-ordering model (part 1)"
diff --git a/tools/memory-model/linux-kernel.bell b/tools/memory-model/linux-kernel.bell
index 432c7cf71b23..64f5740e0e75 100644
--- a/tools/memory-model/linux-kernel.bell
+++ b/tools/memory-model/linux-kernel.bell
@@ -5,10 +5,10 @@
  * Copyright (C) 2017 Alan Stern ,
  *Andrea Parri 
  *
- * An earlier version of this file appears in the companion webpage for
+ * An earlier version of this file appeared in the companion webpage for
  * "Frightening small children and disconcerting grown-ups: Concurrency
  * in the Linux kernel" by Alglave, Maranget, McKenney, Parri, and Stern,
- * which is to appear in ASPLOS 2018.
+ * which appeared in ASPLOS 2018.
  *)
 
 "Linux-kernel memory consistency model"
diff --git a/tools/memory-model/linux-kernel.cat b/tools/memory-model/linux-kernel.cat
index 1e5c4653dd12..59b5cbe6b624 100644
--- a/tools/memory-model/linux-kernel.cat
+++ b/tools/memory-model/linux-kernel.cat
@@ -5,10 +5,10 @@
  * Copyright (C) 2017 Alan Stern ,
  *Andrea Parri 
  *
- * An earlier version of this file appears in the companion webpage for
+ * An earlier version of this file appeared in the companion webpage for
  * "Frightening small children and disconcerting grown-ups: Concurrency
  * in the Linux kernel" by Alglave, Maranget, McKenney, Parri, and Stern,
- * which is to appear in ASPLOS 2018.
+ * which appeared in ASPLOS 2018.
  *)
 
 "Linux-kernel memory consistency model"
diff --git a/tools/memory-model/linux-kernel.def b/tools/memory-model/linux-kernel.def
index f0553bd37c08..6fa3eb28d40b 100644
--- a/tools/memory-model/linux-kernel.def
+++ b/tools/memory-model/linux-kernel.def
@@ -1,9 +1,9 @@
 // SPDX-License-Identifier: GPL-2.0+
 //
-// An earlier version of this file appears in the companion webpage for
+// An earlier version of this file appeared in the companion webpage for
 // "Frightening small children and disconcerting grown-ups: Concurrency
 // in the Linux kernel" by Alglave, Maranget, McKenney, Parri, and Stern,
-// which is to appear in ASPLOS 2018.
+// which appeared in ASPLOS 2018.
 
 // ONCE
 READ_ONCE(X) __load{once}(X)


[tip:locking/core] MAINTAINERS, tools/memory-model: Update e-mail address for Andrea Parri

2018-05-15 Thread tip-bot for Andrea Parri
Commit-ID:  5ccdb7536ebec7a5f8a3883ba1985a80cec80dd3
Gitweb: https://git.kernel.org/tip/5ccdb7536ebec7a5f8a3883ba1985a80cec80dd3
Author: Andrea Parri 
AuthorDate: Mon, 14 May 2018 16:33:55 -0700
Committer:  Ingo Molnar 
CommitDate: Tue, 15 May 2018 08:11:18 +0200

MAINTAINERS, tools/memory-model: Update e-mail address for Andrea Parri

I moved to Amarula Solutions; switch to work e-mail address.

Signed-off-by: Andrea Parri 
Signed-off-by: Paul E. McKenney 
Cc: Akira Yokosawa 
Cc: Alan Stern 
Cc: Andrew Morton 
Cc: Boqun Feng 
Cc: David Howells 
Cc: Jade Alglave 
Cc: Linus Torvalds 
Cc: Luc Maranget 
Cc: Nicholas Piggin 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: Will Deacon 
Cc: linux-a...@vger.kernel.org
Cc: parri.and...@gmail.com
Link: 
http://lkml.kernel.org/r/1526340837-1-17-git-send-email-paul...@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar 
---
 MAINTAINERS | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/MAINTAINERS b/MAINTAINERS
index 649e782e4415..b6341e8a3587 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -8203,7 +8203,7 @@ F:drivers/misc/lkdtm/*
 
 LINUX KERNEL MEMORY CONSISTENCY MODEL (LKMM)
 M: Alan Stern 
-M: Andrea Parri 
+M: Andrea Parri 
 M: Will Deacon 
 M: Peter Zijlstra 
 M: Boqun Feng 


[tip:locking/core] tools/memory-model: Fix coding style in 'lock.cat'

2018-05-15 Thread tip-bot for Andrea Parri
Commit-ID:  05604e7e3adbd78f074b7f86b14f50888bf66252
Gitweb: https://git.kernel.org/tip/05604e7e3adbd78f074b7f86b14f50888bf66252
Author: Andrea Parri 
AuthorDate: Mon, 14 May 2018 16:33:54 -0700
Committer:  Ingo Molnar 
CommitDate: Tue, 15 May 2018 08:11:18 +0200

tools/memory-model: Fix coding style in 'lock.cat'

This commit uses tabs for indentation and adds spaces around binary
operator.

Signed-off-by: Andrea Parri 
Signed-off-by: Paul E. McKenney 
Cc: Andrew Morton 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: Will Deacon 
Cc: aki...@gmail.com
Cc: boqun.f...@gmail.com
Cc: dhowe...@redhat.com
Cc: j.algl...@ucl.ac.uk
Cc: linux-a...@vger.kernel.org
Cc: luc.maran...@inria.fr
Cc: npig...@gmail.com
Cc: parri.and...@gmail.com
Cc: st...@rowland.harvard.edu
Link: 
http://lkml.kernel.org/r/1526340837-1-16-git-send-email-paul...@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar 
---
 tools/memory-model/lock.cat | 28 ++--
 1 file changed, 14 insertions(+), 14 deletions(-)

diff --git a/tools/memory-model/lock.cat b/tools/memory-model/lock.cat
index cd002a33ca8a..305ded17e741 100644
--- a/tools/memory-model/lock.cat
+++ b/tools/memory-model/lock.cat
@@ -84,16 +84,16 @@ let rfi-lf = ([LKW] ; po-loc ; [LF]) \ ([LKW] ; po-loc ; [UL] ; po-loc)
 
 (* rfe for LF events *)
 let all-possible-rfe-lf =
-  (*
-   * Given an LF event r, compute the possible rfe edges for that event
-   * (all those starting from LKW events in other threads),
-   * and then convert that relation to a set of single-edge relations.
-   *)
-  let possible-rfe-lf r =
-let pair-to-relation p = p ++ 0
-in map pair-to-relation ((LKW * {r}) & loc & ext)
-  (* Do this for each LF event r that isn't in rfi-lf *)
-  in map possible-rfe-lf (LF \ range(rfi-lf))
+   (*
+* Given an LF event r, compute the possible rfe edges for that event
+* (all those starting from LKW events in other threads),
+* and then convert that relation to a set of single-edge relations.
+*)
+   let possible-rfe-lf r =
+   let pair-to-relation p = p ++ 0
+   in map pair-to-relation ((LKW * {r}) & loc & ext)
+   (* Do this for each LF event r that isn't in rfi-lf *)
+   in map possible-rfe-lf (LF \ range(rfi-lf))
 
 (* Generate all rf relations for LF events *)
 with rfe-lf from cross(all-possible-rfe-lf)
@@ -110,10 +110,10 @@ let rfi-ru = ([UL] ; po-loc ; [RU]) \ ([UL] ; po-loc ; [LKW] ; po-loc)
 
 (* rfe for RU events: an RU may read from an external UL or the initial write *)
 let all-possible-rfe-ru =
-   let possible-rfe-ru r =
- let pair-to-relation p = p ++ 0
- in map pair-to-relation (((UL|IW) * {r}) & loc & ext)
-  in map possible-rfe-ru RU
+   let possible-rfe-ru r =
+   let pair-to-relation p = p ++ 0
+   in map pair-to-relation (((UL | IW) * {r}) & loc & ext)
+   in map possible-rfe-ru RU
 
 (* Generate all rf relations for RU events *)
 with rfe-ru from cross(all-possible-rfe-ru)


[tip:locking/core] tools/memory-model: Model 'smp_store_mb()'

2018-05-15 Thread tip-bot for Andrea Parri
Commit-ID:  bf8c6d963d16d40fbe70e94b61d9bf18c455fc6b
Gitweb: https://git.kernel.org/tip/bf8c6d963d16d40fbe70e94b61d9bf18c455fc6b
Author: Andrea Parri 
AuthorDate: Mon, 14 May 2018 16:33:45 -0700
Committer:  Ingo Molnar 
CommitDate: Tue, 15 May 2018 08:11:16 +0200

tools/memory-model: Model 'smp_store_mb()'

This commit models 'smp_store_mb(x, val);' to be semantically equivalent
to 'WRITE_ONCE(x, val); smp_mb();'.

Suggested-by: Paolo Bonzini 
Suggested-by: Peter Zijlstra 
Signed-off-by: Andrea Parri 
Signed-off-by: Paul E. McKenney 
Acked-by: Alan Stern 
Cc: Andrew Morton 
Cc: Linus Torvalds 
Cc: Thomas Gleixner 
Cc: Will Deacon 
Cc: aki...@gmail.com
Cc: boqun.f...@gmail.com
Cc: dhowe...@redhat.com
Cc: j.algl...@ucl.ac.uk
Cc: linux-a...@vger.kernel.org
Cc: luc.maran...@inria.fr
Cc: npig...@gmail.com
Cc: parri.and...@gmail.com
Link: 
http://lkml.kernel.org/r/1526340837-1-7-git-send-email-paul...@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar 
---
 tools/memory-model/linux-kernel.def | 1 +
 1 file changed, 1 insertion(+)

diff --git a/tools/memory-model/linux-kernel.def b/tools/memory-model/linux-kernel.def
index 397e4e67e8c8..acf86f6f360a 100644
--- a/tools/memory-model/linux-kernel.def
+++ b/tools/memory-model/linux-kernel.def
@@ -14,6 +14,7 @@ smp_store_release(X,V) { __store{release}(*X,V); }
 smp_load_acquire(X) __load{acquire}(*X)
 rcu_assign_pointer(X,V) { __store{release}(X,V); }
 rcu_dereference(X) __load{once}(X)
+smp_store_mb(X,V) { __store{once}(X,V); __fence{mb}; }
 
 // Fences
 smp_mb() { __fence{mb} ; }
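
With this definition in place, a store-buffering test written with
smp_store_mb() behaves like one written as WRITE_ONCE() followed by
smp_mb().  A sketch, assuming herd7 and the LKMM (test name illustrative):

C SB+smp_store_mb

(*
 * Expected result: Never.  Per the definition added above, each
 * smp_store_mb() expands to a __store{once} followed by __fence{mb},
 * so the outcome in which both loads miss the other CPU's store is
 * forbidden, exactly as for SB with explicit smp_mb() fences.
 *)

{}

P0(int *x, int *y)
{
        int r0;

        smp_store_mb(*x, 1);
        r0 = READ_ONCE(*y);
}

P1(int *x, int *y)
{
        int r1;

        smp_store_mb(*y, 1);
        r1 = READ_ONCE(*x);
}

exists (0:r0=0 /\ 1:r1=0)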


[tip:locking/core] tools/memory-model: Fix coding style in 'linux-kernel.def'

2018-05-15 Thread tip-bot for Andrea Parri
Commit-ID:  d17013e0bac66bb4d1be44f061754c7e53292b64
Gitweb: https://git.kernel.org/tip/d17013e0bac66bb4d1be44f061754c7e53292b64
Author: Andrea Parri 
AuthorDate: Mon, 14 May 2018 16:33:46 -0700
Committer:  Ingo Molnar 
CommitDate: Tue, 15 May 2018 08:11:17 +0200

tools/memory-model: Fix coding style in 'linux-kernel.def'

This commit fixes white spaces around semicolons.

Signed-off-by: Andrea Parri 
Signed-off-by: Paul E. McKenney 
Cc: Andrew Morton 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: Will Deacon 
Cc: aki...@gmail.com
Cc: boqun.f...@gmail.com
Cc: dhowe...@redhat.com
Cc: j.algl...@ucl.ac.uk
Cc: linux-a...@vger.kernel.org
Cc: luc.maran...@inria.fr
Cc: npig...@gmail.com
Cc: parri.and...@gmail.com
Cc: st...@rowland.harvard.edu
Link: 
http://lkml.kernel.org/r/1526340837-1-8-git-send-email-paul...@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar 
---
 tools/memory-model/linux-kernel.def | 28 ++--
 1 file changed, 14 insertions(+), 14 deletions(-)

diff --git a/tools/memory-model/linux-kernel.def b/tools/memory-model/linux-kernel.def
index acf86f6f360a..6bd3bc431b3d 100644
--- a/tools/memory-model/linux-kernel.def
+++ b/tools/memory-model/linux-kernel.def
@@ -17,12 +17,12 @@ rcu_dereference(X) __load{once}(X)
 smp_store_mb(X,V) { __store{once}(X,V); __fence{mb}; }
 
 // Fences
-smp_mb() { __fence{mb} ; }
-smp_rmb() { __fence{rmb} ; }
-smp_wmb() { __fence{wmb} ; }
-smp_mb__before_atomic() { __fence{before-atomic} ; }
-smp_mb__after_atomic() { __fence{after-atomic} ; }
-smp_mb__after_spinlock() { __fence{after-spinlock} ; }
+smp_mb() { __fence{mb}; }
+smp_rmb() { __fence{rmb}; }
+smp_wmb() { __fence{wmb}; }
+smp_mb__before_atomic() { __fence{before-atomic}; }
+smp_mb__after_atomic() { __fence{after-atomic}; }
+smp_mb__after_spinlock() { __fence{after-spinlock}; }
 
 // Exchange
 xchg(X,V)  __xchg{mb}(X,V)
@@ -35,26 +35,26 @@ cmpxchg_acquire(X,V,W) __cmpxchg{acquire}(X,V,W)
 cmpxchg_release(X,V,W) __cmpxchg{release}(X,V,W)
 
 // Spinlocks
-spin_lock(X) { __lock(X) ; }
-spin_unlock(X) { __unlock(X) ; }
+spin_lock(X) { __lock(X); }
+spin_unlock(X) { __unlock(X); }
 spin_trylock(X) __trylock(X)
 
 // RCU
 rcu_read_lock() { __fence{rcu-lock}; }
-rcu_read_unlock() { __fence{rcu-unlock};}
+rcu_read_unlock() { __fence{rcu-unlock}; }
 synchronize_rcu() { __fence{sync-rcu}; }
 synchronize_rcu_expedited() { __fence{sync-rcu}; }
 
 // Atomic
 atomic_read(X) READ_ONCE(*X)
-atomic_set(X,V) { WRITE_ONCE(*X,V) ; }
+atomic_set(X,V) { WRITE_ONCE(*X,V); }
 atomic_read_acquire(X) smp_load_acquire(X)
 atomic_set_release(X,V) { smp_store_release(X,V); }
 
-atomic_add(V,X) { __atomic_op(X,+,V) ; }
-atomic_sub(V,X) { __atomic_op(X,-,V) ; }
-atomic_inc(X)   { __atomic_op(X,+,1) ; }
-atomic_dec(X)   { __atomic_op(X,-,1) ; }
+atomic_add(V,X) { __atomic_op(X,+,V); }
+atomic_sub(V,X) { __atomic_op(X,-,V); }
+atomic_inc(X)   { __atomic_op(X,+,1); }
+atomic_dec(X)   { __atomic_op(X,-,1); }
 
 atomic_add_return(V,X) __atomic_op_return{mb}(X,+,V)
 atomic_add_return_relaxed(V,X) __atomic_op_return{once}(X,+,V)


[tip:locking/core] locking/spinlocks: Clean up comment and #ifndef for {,queued_}spin_is_locked()

2018-05-15 Thread tip-bot for Andrea Parri
Commit-ID:  1362ae43c503a4e333ab6948fc4c6e0e794e1558
Gitweb: https://git.kernel.org/tip/1362ae43c503a4e333ab6948fc4c6e0e794e1558
Author: Andrea Parri 
AuthorDate: Mon, 14 May 2018 16:01:29 -0700
Committer:  Ingo Molnar 
CommitDate: Tue, 15 May 2018 08:11:15 +0200

locking/spinlocks: Clean up comment and #ifndef for {,queued_}spin_is_locked()

Removes "#ifndef queued_spin_is_locked" from the generic code: this is
unused and it's reasonable to conclude that it will continue to be unused.

Also removes the comment about spin_is_locked() from mutex_is_locked():
the comment remains valid but not particularly useful.

Suggested-by: Will Deacon 
Signed-off-by: Andrea Parri 
Signed-off-by: Paul E. McKenney 
Acked-by: Will Deacon 
Cc: Andrew Morton 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: aki...@gmail.com
Cc: boqun.f...@gmail.com
Cc: dhowe...@redhat.com
Cc: j.algl...@ucl.ac.uk
Cc: linux-a...@vger.kernel.org
Cc: luc.maran...@inria.fr
Cc: npig...@gmail.com
Cc: parri.and...@gmail.com
Cc: st...@rowland.harvard.edu
Link: 
http://lkml.kernel.org/r/1526338889-7003-3-git-send-email-paul...@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar 
---
 include/asm-generic/qspinlock.h | 2 --
 include/linux/mutex.h   | 3 ---
 2 files changed, 5 deletions(-)

diff --git a/include/asm-generic/qspinlock.h b/include/asm-generic/qspinlock.h
index a8ed0a352d75..9cc457597ddf 100644
--- a/include/asm-generic/qspinlock.h
+++ b/include/asm-generic/qspinlock.h
@@ -26,7 +26,6 @@
  * @lock: Pointer to queued spinlock structure
  * Return: 1 if it is locked, 0 otherwise
  */
-#ifndef queued_spin_is_locked
 static __always_inline int queued_spin_is_locked(struct qspinlock *lock)
 {
/*
@@ -35,7 +34,6 @@ static __always_inline int queued_spin_is_locked(struct qspinlock *lock)
 */
	return atomic_read(&lock->val);
 }
-#endif
 
 /**
  * queued_spin_value_unlocked - is the spinlock structure unlocked?
diff --git a/include/linux/mutex.h b/include/linux/mutex.h
index 14bc0d5d0ee5..3093dd162424 100644
--- a/include/linux/mutex.h
+++ b/include/linux/mutex.h
@@ -146,9 +146,6 @@ extern void __mutex_init(struct mutex *lock, const char *name,
  */
 static inline bool mutex_is_locked(struct mutex *lock)
 {
-   /*
-* XXX think about spin_is_locked
-*/
return __mutex_owner(lock) != NULL;
 }
 


[tip:locking/core] locking/spinlocks/arm64: Remove smp_mb() from arch_spin_is_locked()

2018-05-15 Thread tip-bot for Andrea Parri
Commit-ID:  c6f5d02b6a0fb91be5d656885ce02cf28952181d
Gitweb: https://git.kernel.org/tip/c6f5d02b6a0fb91be5d656885ce02cf28952181d
Author: Andrea Parri 
AuthorDate: Mon, 14 May 2018 16:01:28 -0700
Committer:  Ingo Molnar 
CommitDate: Tue, 15 May 2018 08:11:15 +0200

locking/spinlocks/arm64: Remove smp_mb() from arch_spin_is_locked()

The following commit:

  38b850a73034f ("arm64: spinlock: order spin_{is_locked,unlock_wait} against local locks")

... added an smp_mb() to arch_spin_is_locked(), in order
"to ensure that the lock value is always loaded after any other locks have
been taken by the current CPU", and reported one example (the "insane case"
in ipc/sem.c) relying on such guarantee.

It is however understood that spin_is_locked() is not required to provide
such an ordering guarantee (a guarantee that is currently not provided by
all the implementations/archs), and that callers relying on such ordering
should instead insert suitable memory barriers before acting on the result
of spin_is_locked().

Following a recent auditing [1] of the callers of {,raw_}spin_is_locked(),
revealing that none of them are relying on the ordering guarantee anymore,
this commit removes the leading smp_mb() from the primitive thus reverting
38b850a73034f.

[1] https://marc.info/?l=linux-kernel&m=151981440005264&w=2
    https://marc.info/?l=linux-kernel&m=152042843808540&w=2
    https://marc.info/?l=linux-kernel&m=152043346110262&w=2

Signed-off-by: Andrea Parri 
Signed-off-by: Paul E. McKenney 
Acked-by: Will Deacon 
Cc: Andrew Morton 
Cc: Catalin Marinas 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: aki...@gmail.com
Cc: boqun.f...@gmail.com
Cc: dhowe...@redhat.com
Cc: j.algl...@ucl.ac.uk
Cc: linux-a...@vger.kernel.org
Cc: luc.maran...@inria.fr
Cc: npig...@gmail.com
Cc: parri.and...@gmail.com
Cc: st...@rowland.harvard.edu
Link: 
http://lkml.kernel.org/r/1526338889-7003-2-git-send-email-paul...@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar 
---
 arch/arm64/include/asm/spinlock.h | 5 -
 1 file changed, 5 deletions(-)

diff --git a/arch/arm64/include/asm/spinlock.h b/arch/arm64/include/asm/spinlock.h
index ebdae15d665d..26c5bd7d88d8 100644
--- a/arch/arm64/include/asm/spinlock.h
+++ b/arch/arm64/include/asm/spinlock.h
@@ -122,11 +122,6 @@ static inline int arch_spin_value_unlocked(arch_spinlock_t lock)
 
 static inline int arch_spin_is_locked(arch_spinlock_t *lock)
 {
-   /*
-* Ensure prior spin_lock operations to other locks have completed
-* on this CPU before we test whether "lock" is locked.
-*/
-   smp_mb(); /* ^^^ */
return !arch_spin_value_unlocked(READ_ONCE(*lock));
 }
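
With the barrier gone, any caller that still needs its own prior lock
acquisition ordered before the spin_is_locked() load must provide that
ordering itself.  A hedged C sketch of the resulting call-site pattern (not
taken from the kernel; lock and function names are illustrative):

#include <linux/types.h>
#include <linux/spinlock.h>

/*
 * Order the acquisition of 'mine' before the load performed by
 * spin_is_locked() on 'other'.  After this patch, nothing inside
 * arch_spin_is_locked() provides such ordering on arm64.
 */
static bool other_lock_observed_locked(spinlock_t *mine, spinlock_t *other)
{
        bool ret;

        spin_lock(mine);
        smp_mb__after_spinlock();
        ret = spin_is_locked(other);
        spin_unlock(mine);

        return ret;
}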
 


[tip:locking/core] locking/spinlocks: Document the semantics of spin_is_locked()

2018-05-15 Thread tip-bot for Andrea Parri
Commit-ID:  b7e4aadef28f217de8907eec60a964328797a2be
Gitweb: https://git.kernel.org/tip/b7e4aadef28f217de8907eec60a964328797a2be
Author: Andrea Parri 
AuthorDate: Mon, 14 May 2018 16:01:27 -0700
Committer:  Ingo Molnar 
CommitDate: Tue, 15 May 2018 08:11:15 +0200

locking/spinlocks: Document the semantics of spin_is_locked()

There appeared to be a certain, recurrent uncertainty concerning the
semantics of spin_is_locked(), likely a consequence of the fact that
this semantics remains undocumented or that it has been historically
linked to the (likewise unclear) semantics of spin_unlock_wait().

A recent auditing [1] of the callers of the primitive confirmed that
none of them are relying on particular ordering guarantees; document
this semantics by adding a docbook header to spin_is_locked(). Also,
describe behaviors specific to certain CONFIG_SMP=n builds.

[1] https://marc.info/?l=linux-kernel&m=151981440005264&w=2
    https://marc.info/?l=linux-kernel&m=152042843808540&w=2
    https://marc.info/?l=linux-kernel&m=152043346110262&w=2

Co-Developed-by: Andrea Parri 
Co-Developed-by: Alan Stern 
Co-Developed-by: David Howells 
Signed-off-by: Andrea Parri 
Signed-off-by: Alan Stern 
Signed-off-by: David Howells 
Signed-off-by: Paul E. McKenney 
Acked-by: Randy Dunlap 
Cc: Akira Yokosawa 
Cc: Andrew Morton 
Cc: Boqun Feng 
Cc: Jade Alglave 
Cc: Linus Torvalds 
Cc: Luc Maranget 
Cc: Nicholas Piggin 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: Will Deacon 
Cc: linux-a...@vger.kernel.org
Cc: parri.and...@gmail.com
Link: 
http://lkml.kernel.org/r/1526338889-7003-1-git-send-email-paul...@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar 
---
 include/linux/spinlock.h | 18 ++
 1 file changed, 18 insertions(+)

diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
index 4894d322d258..1e8a46435838 100644
--- a/include/linux/spinlock.h
+++ b/include/linux/spinlock.h
@@ -380,6 +380,24 @@ static __always_inline int spin_trylock_irq(spinlock_t *lock)
raw_spin_trylock_irqsave(spinlock_check(lock), flags); \
 })
 
+/**
+ * spin_is_locked() - Check whether a spinlock is locked.
+ * @lock: Pointer to the spinlock.
+ *
+ * This function is NOT required to provide any memory ordering
+ * guarantees; it could be used for debugging purposes or, when
+ * additional synchronization is needed, accompanied with other
+ * constructs (memory barriers) enforcing the synchronization.
+ *
+ * Returns: 1 if @lock is locked, 0 otherwise.
+ *
+ * Note that the function only tells you that the spinlock is
+ * seen to be locked, not that it is locked on your CPU.
+ *
+ * Further, on CONFIG_SMP=n builds with CONFIG_DEBUG_SPINLOCK=n,
+ * the return value is always 0 (see include/linux/spinlock_up.h).
+ * Therefore you should not rely heavily on the return value.
+ */
 static __always_inline int spin_is_locked(spinlock_t *lock)
 {
	return raw_spin_is_locked(&lock->rlock);
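
A usage sketch matching the documented intent (debug-style checks that do not
depend on ordering), not taken from the patch; the structure and names are
illustrative, and the CONFIG_SMP=n caveat from the comment above is handled
explicitly:

#include <linux/spinlock.h>
#include <linux/bug.h>

struct my_stats {
        spinlock_t lock;
        unsigned long updates;
};

/* Caller is expected to hold stats->lock. */
static void my_stats_bump(struct my_stats *stats)
{
        /*
         * Debug-only sanity check: on CONFIG_SMP=n without
         * CONFIG_DEBUG_SPINLOCK, spin_is_locked() always returns 0, so
         * restrict the check to SMP builds to avoid false warnings.
         */
        WARN_ON_ONCE(IS_ENABLED(CONFIG_SMP) && !spin_is_locked(&stats->lock));

        stats->updates++;
}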


[tip:locking/core] locking/xchg/alpha: Remove superfluous memory barriers from the _local() variants

2018-03-12 Thread tip-bot for Andrea Parri
Commit-ID:  fbfcd0199170984bd3c2812e49ed0fe7b226959a
Gitweb: https://git.kernel.org/tip/fbfcd0199170984bd3c2812e49ed0fe7b226959a
Author: Andrea Parri 
AuthorDate: Tue, 27 Feb 2018 05:00:58 +0100
Committer:  Ingo Molnar 
CommitDate: Mon, 12 Mar 2018 10:59:03 +0100

locking/xchg/alpha: Remove superfluous memory barriers from the _local() 
variants

The following two commits:

  79d442461df74 ("locking/xchg/alpha: Clean up barrier usage by using smp_mb() 
in place of __ASM__MB")
  472e8c55cf662 ("locking/xchg/alpha: Fix xchg() and cmpxchg() memory ordering 
bugs")

... ended up adding unnecessary barriers to the _local() variants on Alpha,
which the previous code took care to avoid.

Fix them by adding the smp_mb() into the cmpxchg() macro rather than into the
____cmpxchg() variants.
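
As an illustrative sketch (not taken from the patch; the variables are
hypothetical), the intended contract after this change is that only the
full-ordering forms imply barriers:

	old  = xchg(&shared_counter, 0);	 /* fully ordered: mb before and after */
	prev = cmpxchg_local(&pcpu_state, 1, 2); /* CPU-local data: no barriers implied */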

Reported-by: Will Deacon 
Signed-off-by: Andrea Parri 
Cc: Alan Stern 
Cc: Andrew Morton 
Cc: Ivan Kokshaysky 
Cc: Linus Torvalds 
Cc: Matt Turner 
Cc: Paul E. McKenney 
Cc: Peter Zijlstra 
Cc: Richard Henderson 
Cc: Thomas Gleixner 
Cc: linux-al...@vger.kernel.org
Fixes: 472e8c55cf662 ("locking/xchg/alpha: Fix xchg() and cmpxchg() memory 
ordering bugs")
Fixes: 79d442461df74 ("locking/xchg/alpha: Clean up barrier usage by using 
smp_mb() in place of __ASM__MB")
Link: 
http://lkml.kernel.org/r/1519704058-13430-1-git-send-email-parri.and...@gmail.com
Signed-off-by: Ingo Molnar 
---
 arch/alpha/include/asm/cmpxchg.h | 20 
 arch/alpha/include/asm/xchg.h| 27 ---
 2 files changed, 16 insertions(+), 31 deletions(-)

diff --git a/arch/alpha/include/asm/cmpxchg.h b/arch/alpha/include/asm/cmpxchg.h
index 8a2b331e43fe..6c7c39452471 100644
--- a/arch/alpha/include/asm/cmpxchg.h
+++ b/arch/alpha/include/asm/cmpxchg.h
@@ -38,19 +38,31 @@
 #define ____cmpxchg(type, args...)	__cmpxchg ##type(args)
 #include <asm/xchg.h>
 
+/*
+ * The leading and the trailing memory barriers guarantee that these
+ * operations are fully ordered.
+ */
 #define xchg(ptr, x)   \
 ({ \
+   __typeof__(*(ptr)) __ret;   \
__typeof__(*(ptr)) _x_ = (x);   \
-   (__typeof__(*(ptr))) __xchg((ptr), (unsigned long)_x_,  \
-sizeof(*(ptr)));   \
+   smp_mb();   \
+   __ret = (__typeof__(*(ptr)))\
+   __xchg((ptr), (unsigned long)_x_, sizeof(*(ptr)));  \
+   smp_mb();   \
+   __ret;  \
 })
 
 #define cmpxchg(ptr, o, n) \
 ({ \
+   __typeof__(*(ptr)) __ret;   \
__typeof__(*(ptr)) _o_ = (o);   \
__typeof__(*(ptr)) _n_ = (n);   \
-   (__typeof__(*(ptr))) __cmpxchg((ptr), (unsigned long)_o_,   \
-   (unsigned long)_n_, sizeof(*(ptr)));\
+   smp_mb();   \
+   __ret = (__typeof__(*(ptr))) __cmpxchg((ptr),   \
+   (unsigned long)_o_, (unsigned long)_n_, sizeof(*(ptr)));\
+   smp_mb();   \
+   __ret;  \
 })
 
 #define cmpxchg64(ptr, o, n)   \
diff --git a/arch/alpha/include/asm/xchg.h b/arch/alpha/include/asm/xchg.h
index e2b59fac5257..7adb80c6746a 100644
--- a/arch/alpha/include/asm/xchg.h
+++ b/arch/alpha/include/asm/xchg.h
@@ -12,10 +12,6 @@
  * Atomic exchange.
  * Since it can be used to implement critical sections
  * it must clobber "memory" (also for interrupts in UP).
- *
- * The leading and the trailing memory barriers guarantee that these
- * operations are fully ordered.
- *
  */
 
 static inline unsigned long
@@ -23,7 +19,6 @@ xchg(_u8, volatile char *m, unsigned long val)
 {
unsigned long ret, tmp, addr64;
 
-   smp_mb();
__asm__ __volatile__(
"   andnot  %4,7,%3\n"
"   insbl   %1,%4,%1\n"
@@ -38,7 +33,6 @@ xchg(_u8, volatile char *m, unsigned long val)
".previous"
	: "=&r" (ret), "=&r" (val), "=&r" (tmp), "=&r" (addr64)
: "r" ((long)m), "1" (val) : "memory");
-   smp_mb();
 
return ret;
 }
@@ -48,7 +42,6 @@ xchg(_u16, volatile short *m, unsigned long val)
 {
unsigned long ret, tmp, addr64;
 
-   smp_mb();
__asm__ __volatile__(
"   andnot  %4,7,%3\n"
"   inswl   %1,%4,%1\n"
@@ -63,7 +56,6 @@ xchg(_u16, 

[tip:locking/urgent] locking/xchg/alpha: Clean up barrier usage by using smp_mb() in place of __ASM__MB

2018-02-23 Thread tip-bot for Andrea Parri
Commit-ID:  79d442461df7478cdd0c50d9b8a76f431f150fa3
Gitweb: https://git.kernel.org/tip/79d442461df7478cdd0c50d9b8a76f431f150fa3
Author: Andrea Parri 
AuthorDate: Thu, 22 Feb 2018 10:24:29 +0100
Committer:  Ingo Molnar 
CommitDate: Fri, 23 Feb 2018 08:38:15 +0100

locking/xchg/alpha: Clean up barrier usage by using smp_mb() in place of 
__ASM__MB

Replace each occurrence of __ASM__MB with a (trailing) smp_mb() in
xchg(), cmpxchg(), and remove the now unused __ASM__MB definitions;
this improves readability, with no additional synchronization cost.
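
To make the equivalence concrete (illustration only, not part of the patch),
the conditional macro and its replacement reduce to the same thing:

	#ifdef CONFIG_SMP
	#define __ASM__MB	"\tmb\n"	/* old scheme: barrier inside the asm */
	#endif

	smp_mb();				/* new scheme: barrier right after the asm */

On CONFIG_SMP=y both emit a single "mb"; on UP neither emits a barrier
instruction.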

Suggested-by: Will Deacon 
Signed-off-by: Andrea Parri 
Acked-by: Paul E. McKenney 
Cc: Alan Stern 
Cc: Andrew Morton 
Cc: Ivan Kokshaysky 
Cc: Linus Torvalds 
Cc: Matt Turner 
Cc: Peter Zijlstra 
Cc: Richard Henderson 
Cc: Thomas Gleixner 
Cc: linux-al...@vger.kernel.org
Link: 
http://lkml.kernel.org/r/1519291469-5702-1-git-send-email-parri.and...@gmail.com
Signed-off-by: Ingo Molnar 
---
 arch/alpha/include/asm/cmpxchg.h |  6 --
 arch/alpha/include/asm/xchg.h| 16 
 2 files changed, 8 insertions(+), 14 deletions(-)

diff --git a/arch/alpha/include/asm/cmpxchg.h b/arch/alpha/include/asm/cmpxchg.h
index 46ebf14aed4e..8a2b331e43fe 100644
--- a/arch/alpha/include/asm/cmpxchg.h
+++ b/arch/alpha/include/asm/cmpxchg.h
@@ -6,7 +6,6 @@
  * Atomic exchange routines.
  */
 
-#define __ASM__MB
 #define ____xchg(type, args...)	__xchg ## type ## _local(args)
 #define ____cmpxchg(type, args...)	__cmpxchg ## type ## _local(args)
 #include <asm/xchg.h>
@@ -33,10 +32,6 @@
cmpxchg_local((ptr), (o), (n)); \
 })
 
-#ifdef CONFIG_SMP
-#undef __ASM__MB
-#define __ASM__MB  "\tmb\n"
-#endif
 #undef ____xchg
 #undef ____cmpxchg
 #define ____xchg(type, args...)	__xchg ##type(args)
@@ -64,7 +59,6 @@
cmpxchg((ptr), (o), (n));   \
 })
 
-#undef __ASM__MB
 #undef ____cmpxchg
 
 #endif /* _ALPHA_CMPXCHG_H */
diff --git a/arch/alpha/include/asm/xchg.h b/arch/alpha/include/asm/xchg.h
index e2660866ce97..e1facf6fc244 100644
--- a/arch/alpha/include/asm/xchg.h
+++ b/arch/alpha/include/asm/xchg.h
@@ -28,12 +28,12 @@ xchg(_u8, volatile char *m, unsigned long val)
"   or  %1,%2,%2\n"
"   stq_c   %2,0(%3)\n"
"   beq %2,2f\n"
-   __ASM__MB
".subsection 2\n"
"2: br  1b\n"
".previous"
	: "=&r" (ret), "=&r" (val), "=&r" (tmp), "=&r" (addr64)
: "r" ((long)m), "1" (val) : "memory");
+   smp_mb();
 
return ret;
 }
@@ -52,12 +52,12 @@ xchg(_u16, volatile short *m, unsigned long val)
"   or  %1,%2,%2\n"
"   stq_c   %2,0(%3)\n"
"   beq %2,2f\n"
-   __ASM__MB
".subsection 2\n"
"2: br  1b\n"
".previous"
	: "=&r" (ret), "=&r" (val), "=&r" (tmp), "=&r" (addr64)
: "r" ((long)m), "1" (val) : "memory");
+   smp_mb();
 
return ret;
 }
@@ -72,12 +72,12 @@ xchg(_u32, volatile int *m, unsigned long val)
"   bis $31,%3,%1\n"
"   stl_c %1,%2\n"
"   beq %1,2f\n"
-   __ASM__MB
".subsection 2\n"
"2: br 1b\n"
".previous"
	: "=&r" (val), "=&r" (dummy), "=m" (*m)
: "rI" (val), "m" (*m) : "memory");
+   smp_mb();
 
return val;
 }
@@ -92,12 +92,12 @@ xchg(_u64, volatile long *m, unsigned long val)
"   bis $31,%3,%1\n"
"   stq_c %1,%2\n"
"   beq %1,2f\n"
-   __ASM__MB
".subsection 2\n"
"2: br 1b\n"
".previous"
	: "=&r" (val), "=&r" (dummy), "=m" (*m)
: "rI" (val), "m" (*m) : "memory");
+   smp_mb();
 
return val;
 }
@@ -150,12 +150,12 @@ cmpxchg(_u8, volatile char *m, unsigned char old, 
unsigned char new)
"   stq_c   %2,0(%4)\n"
"   beq %2,3f\n"
"2:\n"
-   __ASM__MB
".subsection 2\n"
"3: br  1b\n"
".previous"
	: "=&r" (prev), "=&r" (new), "=&r" (tmp), "=&r" (cmp), "=&r" (addr64)
: "r" ((long)m), "Ir" (old), "1" (new) : "memory");
+   smp_mb();
 
return prev;
 }
@@ -177,12 +177,12 @@ cmpxchg(_u16, volatile short *m, unsigned short old, 
unsigned short new)
"   stq_c   %2,0(%4)\n"
"   beq %2,3f\n"
"2:\n"
-   __ASM__MB
".subsection 2\n"
"3: br  1b\n"
".previous"
	: "=&r" (prev), "=&r" (new), "=&r" (tmp), "=&r" (cmp), "=&r" (addr64)
: "r" ((long)m), "Ir" (old), "1" (new) 

[tip:locking/urgent] locking/xchg/alpha: Fix xchg() and cmpxchg() memory ordering bugs

2018-02-23 Thread tip-bot for Andrea Parri
Commit-ID:  472e8c55cf6622d1c112dc2bc777f68bbd4189db
Gitweb: https://git.kernel.org/tip/472e8c55cf6622d1c112dc2bc777f68bbd4189db
Author: Andrea Parri 
AuthorDate: Thu, 22 Feb 2018 10:24:48 +0100
Committer:  Ingo Molnar 
CommitDate: Fri, 23 Feb 2018 08:38:16 +0100

locking/xchg/alpha: Fix xchg() and cmpxchg() memory ordering bugs

Successful RMW operations are supposed to be fully ordered, but
Alpha's xchg() and cmpxchg() do not meet this requirement.

Will Deacon noticed the bug:

  > So MP using xchg:
  >
  > WRITE_ONCE(x, 1)
  > xchg(y, 1)
  >
  > smp_load_acquire(y) == 1
  > READ_ONCE(x) == 0
  >
  > would be allowed.

... which thus violates the above requirement.

Fix it by adding a leading smp_mb() to the xchg() and cmpxchg() implementations.
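
Spelled out as a litmus test (an illustrative sketch, not part of the patch),
the outcome in question is:

	C MP+xchg-sketch

	{}

	P0(int *x, int *y)
	{
		WRITE_ONCE(*x, 1);
		xchg(y, 1);
	}

	P1(int *x, int *y)
	{
		int r0;
		int r1;

		r0 = smp_load_acquire(y);
		r1 = READ_ONCE(*x);
	}

	exists (1:r0=1 /\ 1:r1=0)

With xchg() fully ordered, the "exists" clause must never be satisfied.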

Reported-by: Will Deacon 
Signed-off-by: Andrea Parri 
Acked-by: Paul E. McKenney 
Cc: Alan Stern 
Cc: Andrew Morton 
Cc: Ivan Kokshaysky 
Cc: Linus Torvalds 
Cc: Matt Turner 
Cc: Peter Zijlstra 
Cc: Richard Henderson 
Cc: Thomas Gleixner 
Cc: linux-al...@vger.kernel.org
Link: 
http://lkml.kernel.org/r/1519291488-5752-1-git-send-email-parri.and...@gmail.com
Signed-off-by: Ingo Molnar 
---
 arch/alpha/include/asm/xchg.h | 21 ++---
 1 file changed, 18 insertions(+), 3 deletions(-)

diff --git a/arch/alpha/include/asm/xchg.h b/arch/alpha/include/asm/xchg.h
index e1facf6fc244..e2b59fac5257 100644
--- a/arch/alpha/include/asm/xchg.h
+++ b/arch/alpha/include/asm/xchg.h
@@ -12,6 +12,10 @@
  * Atomic exchange.
  * Since it can be used to implement critical sections
  * it must clobber "memory" (also for interrupts in UP).
+ *
+ * The leading and the trailing memory barriers guarantee that these
+ * operations are fully ordered.
+ *
  */
 
 static inline unsigned long
@@ -19,6 +23,7 @@ xchg(_u8, volatile char *m, unsigned long val)
 {
unsigned long ret, tmp, addr64;
 
+   smp_mb();
__asm__ __volatile__(
"   andnot  %4,7,%3\n"
"   insbl   %1,%4,%1\n"
@@ -43,6 +48,7 @@ xchg(_u16, volatile short *m, unsigned long val)
 {
unsigned long ret, tmp, addr64;
 
+   smp_mb();
__asm__ __volatile__(
"   andnot  %4,7,%3\n"
"   inswl   %1,%4,%1\n"
@@ -67,6 +73,7 @@ xchg(_u32, volatile int *m, unsigned long val)
 {
unsigned long dummy;
 
+   smp_mb();
__asm__ __volatile__(
"1: ldl_l %0,%4\n"
"   bis $31,%3,%1\n"
@@ -87,6 +94,7 @@ xchg(_u64, volatile long *m, unsigned long val)
 {
unsigned long dummy;
 
+   smp_mb();
__asm__ __volatile__(
"1: ldq_l %0,%4\n"
"   bis $31,%3,%1\n"
@@ -128,9 +136,12 @@ xchg(, volatile void *ptr, unsigned long x, int size)
  * store NEW in MEM.  Return the initial value in MEM.  Success is
  * indicated by comparing RETURN with OLD.
  *
- * The memory barrier is placed in SMP unconditionally, in order to
- * guarantee that dependency ordering is preserved when a dependency
- * is headed by an unsuccessful operation.
+ * The leading and the trailing memory barriers guarantee that these
+ * operations are fully ordered.
+ *
+ * The trailing memory barrier is placed in SMP unconditionally, in
+ * order to guarantee that dependency ordering is preserved when a
+ * dependency is headed by an unsuccessful operation.
  */
 
 static inline unsigned long
@@ -138,6 +149,7 @@ cmpxchg(_u8, volatile char *m, unsigned char old, 
unsigned char new)
 {
unsigned long prev, tmp, cmp, addr64;
 
+   smp_mb();
__asm__ __volatile__(
"   andnot  %5,7,%4\n"
"   insbl   %1,%5,%1\n"
@@ -165,6 +177,7 @@ cmpxchg(_u16, volatile short *m, unsigned short old, 
unsigned short new)
 {
unsigned long prev, tmp, cmp, addr64;
 
+   smp_mb();
__asm__ __volatile__(
"   andnot  %5,7,%4\n"
"   inswl   %1,%5,%1\n"
@@ -192,6 +205,7 @@ cmpxchg(_u32, volatile int *m, int old, int new)
 {
unsigned long prev, cmp;
 
+   smp_mb();
__asm__ __volatile__(
"1: ldl_l %0,%5\n"
"   cmpeq %0,%3,%1\n"
@@ -215,6 +229,7 @@ cmpxchg(_u64, volatile long *m, unsigned long old, 
unsigned long new)
 {
unsigned long prev, cmp;
 
+   smp_mb();
__asm__ __volatile__(
"1: ldq_l %0,%5\n"
"   cmpeq %0,%3,%1\n"


[tip:locking/urgent] locking/xchg/alpha: Add unconditional memory barrier to cmpxchg()

2018-02-21 Thread tip-bot for Andrea Parri
Commit-ID:  cb13b424e986aed68d74cbaec3449ea23c50e167
Gitweb: https://git.kernel.org/tip/cb13b424e986aed68d74cbaec3449ea23c50e167
Author: Andrea Parri 
AuthorDate: Tue, 20 Feb 2018 19:45:56 +0100
Committer:  Ingo Molnar 
CommitDate: Wed, 21 Feb 2018 10:12:29 +0100

locking/xchg/alpha: Add unconditional memory barrier to cmpxchg()

Continuing along with the fight against smp_read_barrier_depends() [1]
(or rather, against its improper use), add an unconditional barrier to
cmpxchg.  This guarantees that dependency ordering is preserved when a
dependency is headed by an unsuccessful cmpxchg.  As it turns out, the
change could enable further simplification of LKMM as proposed in [2].

[1] https://marc.info/?l=linux-kernel&m=150884953419377&w=2
    https://marc.info/?l=linux-kernel&m=150884946319353&w=2
    https://marc.info/?l=linux-kernel&m=151215810824468&w=2
    https://marc.info/?l=linux-kernel&m=151215816324484&w=2

[2] https://marc.info/?l=linux-kernel&m=151881978314872&w=2
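
For illustration (not from the patch; 'struct item', 'slot' and 'newp' are
made up), the kind of dependency being protected is one headed by a cmpxchg()
that may fail:

	struct item { int data; };
	static struct item *slot;

	static int publish_or_read(struct item *newp)
	{
		struct item *old;

		old = cmpxchg(&slot, NULL, newp);	/* may fail, returning the current item */
		if (old)
			return READ_ONCE(old->data);	/* dependency headed by the failed cmpxchg() */
		return 0;
	}

Without the unconditional barrier, Alpha gives no ordering guarantee for the
dependent load on the failure path.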

Signed-off-by: Andrea Parri 
Acked-by: Peter Zijlstra 
Acked-by: Paul E. McKenney 
Cc: Alan Stern 
Cc: Ivan Kokshaysky 
Cc: Linus Torvalds 
Cc: Matt Turner 
Cc: Richard Henderson 
Cc: Thomas Gleixner 
Cc: Will Deacon 
Cc: linux-al...@vger.kernel.org
Link: 
http://lkml.kernel.org/r/1519152356-4804-1-git-send-email-parri.and...@gmail.com
Signed-off-by: Ingo Molnar 
---
 arch/alpha/include/asm/xchg.h | 15 +++
 1 file changed, 7 insertions(+), 8 deletions(-)

diff --git a/arch/alpha/include/asm/xchg.h b/arch/alpha/include/asm/xchg.h
index 68dfb3c..e266086 100644
--- a/arch/alpha/include/asm/xchg.h
+++ b/arch/alpha/include/asm/xchg.h
@@ -128,10 +128,9 @@ xchg(, volatile void *ptr, unsigned long x, int size)
  * store NEW in MEM.  Return the initial value in MEM.  Success is
  * indicated by comparing RETURN with OLD.
  *
- * The memory barrier should be placed in SMP only when we actually
- * make the change. If we don't change anything (so if the returned
- * prev is equal to old) then we aren't acquiring anything new and
- * we don't need any memory barrier as far I can tell.
+ * The memory barrier is placed in SMP unconditionally, in order to
+ * guarantee that dependency ordering is preserved when a dependency
+ * is headed by an unsuccessful operation.
  */
 
 static inline unsigned long
@@ -150,8 +149,8 @@ cmpxchg(_u8, volatile char *m, unsigned char old, 
unsigned char new)
"   or  %1,%2,%2\n"
"   stq_c   %2,0(%4)\n"
"   beq %2,3f\n"
-   __ASM__MB
"2:\n"
+   __ASM__MB
".subsection 2\n"
"3: br  1b\n"
".previous"
@@ -177,8 +176,8 @@ cmpxchg(_u16, volatile short *m, unsigned short old, 
unsigned short new)
"   or  %1,%2,%2\n"
"   stq_c   %2,0(%4)\n"
"   beq %2,3f\n"
-   __ASM__MB
"2:\n"
+   __ASM__MB
".subsection 2\n"
"3: br  1b\n"
".previous"
@@ -200,8 +199,8 @@ cmpxchg(_u32, volatile int *m, int old, int new)
"   mov %4,%1\n"
"   stl_c %1,%2\n"
"   beq %1,3f\n"
-   __ASM__MB
"2:\n"
+   __ASM__MB
".subsection 2\n"
"3: br 1b\n"
".previous"
@@ -223,8 +222,8 @@ cmpxchg(_u64, volatile long *m, unsigned long old, 
unsigned long new)
"   mov %4,%1\n"
"   stq_c %1,%2\n"
"   beq %1,3f\n"
-   __ASM__MB
"2:\n"
+   __ASM__MB
".subsection 2\n"
"3: br 1b\n"
".previous"



[tip:locking/core] Documentation/memory-barriers.txt: Cross-reference "tools/memory-model/"

2018-02-21 Thread tip-bot for Andrea Parri
Commit-ID:  621df431b0ac931e318679f54047c47eb23cfdd2
Gitweb: https://git.kernel.org/tip/621df431b0ac931e318679f54047c47eb23cfdd2
Author: Andrea Parri 
AuthorDate: Tue, 20 Feb 2018 15:25:07 -0800
Committer:  Ingo Molnar 
CommitDate: Wed, 21 Feb 2018 09:58:14 +0100

Documentation/memory-barriers.txt: Cross-reference "tools/memory-model/"

A memory consistency model is now available for the Linux kernel [1],
which "can (roughly speaking) be thought of as an automated version of
memory-barriers.txt" and which is (in turn) "accompanied by extensive
documentation on its use and its design".

Inform the (occasional) reader of memory-barriers.txt of these
developments.

[1] https://marc.info/?l=linux-kernel&m=151687290114799&w=2

Co-developed-by: Andrea Parri 
Co-developed-by: Akira Yokosawa 
Signed-off-by: Andrea Parri 
Signed-off-by: Akira Yokosawa 
Signed-off-by: Paul E. McKenney 
Acked-by: Peter Zijlstra 
Cc: Linus Torvalds 
Cc: Thomas Gleixner 
Cc: boqun.f...@gmail.com
Cc: dhowe...@redhat.com
Cc: j.algl...@ucl.ac.uk
Cc: linux-a...@vger.kernel.org
Cc: luc.maran...@inria.fr
Cc: nbori...@suse.com
Cc: npig...@gmail.com
Cc: st...@rowland.harvard.edu
Cc: will.dea...@arm.com
Link: 
http://lkml.kernel.org/r/1519169112-20593-7-git-send-email-paul...@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar 
---
 Documentation/memory-barriers.txt | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/Documentation/memory-barriers.txt 
b/Documentation/memory-barriers.txt
index a863009..a37d3af 100644
--- a/Documentation/memory-barriers.txt
+++ b/Documentation/memory-barriers.txt
@@ -14,7 +14,11 @@ DISCLAIMER
 This document is not a specification; it is intentionally (for the sake of
 brevity) and unintentionally (due to being human) incomplete. This document is
 meant as a guide to using the various memory barriers provided by Linux, but
-in case of any doubt (and there are many) please ask.
+in case of any doubt (and there are many) please ask.  Some doubts may be
+resolved by referring to the formal memory consistency model and related
+documentation at tools/memory-model/.  Nevertheless, even this memory
+model should be viewed as the collective opinion of its maintainers rather
+than as an infallible oracle.
 
 To repeat, this document is not a specification of what Linux expects from
 hardware.


[tip:locking/core] MAINTAINERS: Add the Memory Consistency Model subsystem

2018-02-21 Thread tip-bot for Andrea Parri
Commit-ID:  e7d74c9f900a12ea0bd5cabb3be142441530e24e
Gitweb: https://git.kernel.org/tip/e7d74c9f900a12ea0bd5cabb3be142441530e24e
Author: Andrea Parri 
AuthorDate: Tue, 20 Feb 2018 15:25:02 -0800
Committer:  Ingo Molnar 
CommitDate: Wed, 21 Feb 2018 09:58:12 +0100

MAINTAINERS: Add the Memory Consistency Model subsystem

Move the contents of tools/memory-model/MAINTAINERS into the main
MAINTAINERS file, removing tools/memory-model/MAINTAINERS. This
allows get_maintainer.pl to correctly identify the maintainers of
tools/memory-model/.

Suggested-by: Ingo Molnar 
Signed-off-by: Andrea Parri 
Signed-off-by: Paul E. McKenney 
Acked-by: Peter Zijlstra 
Acked-by: Will Deacon 
Acked-by: Alan Stern 
Cc: Linus Torvalds 
Cc: Thomas Gleixner 
Cc: aki...@gmail.com
Cc: boqun.f...@gmail.com
Cc: dhowe...@redhat.com
Cc: j.algl...@ucl.ac.uk
Cc: linux-a...@vger.kernel.org
Cc: luc.maran...@inria.fr
Cc: nbori...@suse.com
Cc: npig...@gmail.com
Link: 
http://lkml.kernel.org/r/1519169112-20593-2-git-send-email-paul...@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar 
---
 MAINTAINERS| 16 
 tools/memory-model/MAINTAINERS | 15 ---
 2 files changed, 16 insertions(+), 15 deletions(-)

diff --git a/MAINTAINERS b/MAINTAINERS
index 9a7f76e..654c6c6 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -8148,6 +8148,22 @@ M:   Kees Cook 
 S: Maintained
 F: drivers/misc/lkdtm*
 
+LINUX KERNEL MEMORY CONSISTENCY MODEL (LKMM)
+M: Alan Stern 
+M: Andrea Parri 
+M: Will Deacon 
+M: Peter Zijlstra 
+M: Boqun Feng 
+M: Nicholas Piggin 
+M: David Howells 
+M: Jade Alglave 
+M: Luc Maranget 
+M: "Paul E. McKenney" 
+L: linux-kernel@vger.kernel.org
+S: Supported
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git
+F: tools/memory-model/
+
 LINUX SECURITY MODULE (LSM) FRAMEWORK
 M: Chris Wright 
 L: linux-security-mod...@vger.kernel.org
diff --git a/tools/memory-model/MAINTAINERS b/tools/memory-model/MAINTAINERS
deleted file mode 100644
index db3bd3f..000
--- a/tools/memory-model/MAINTAINERS
+++ /dev/null
@@ -1,15 +0,0 @@
-LINUX KERNEL MEMORY CONSISTENCY MODEL
-M: Alan Stern 
-M: Andrea Parri 
-M: Will Deacon 
-M: Peter Zijlstra 
-M: Boqun Feng 
-M: Nicholas Piggin 
-M: David Howells 
-M: Jade Alglave 
-M: Luc Maranget 
-M: "Paul E. McKenney" 
-L: linux-kernel@vger.kernel.org
-S: Supported
-T: git git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git
-F: tools/memory-model/


[tip:locking/core] MAINTAINERS: List file memory-barriers.txt within the LKMM entry

2018-02-21 Thread tip-bot for Andrea Parri
Commit-ID:  ea52d698c1ed0c4555656de0dd1f7ac5866f89e1
Gitweb: https://git.kernel.org/tip/ea52d698c1ed0c4555656de0dd1f7ac5866f89e1
Author: Andrea Parri 
AuthorDate: Tue, 20 Feb 2018 15:25:03 -0800
Committer:  Ingo Molnar 
CommitDate: Wed, 21 Feb 2018 09:58:13 +0100

MAINTAINERS: List file memory-barriers.txt within the LKMM entry

We now have a shiny new Linux-kernel memory model (LKMM) and the old
tried-and-true Documentation/memory-barriers.txt.  It would be good to
keep these automatically synchronized, but in the meantime we need to at
least let people know that they are related.  Will suggested adding the
Documentation/memory-barriers.txt file to the LKMM maintainership list,
thus making the LKMM maintainers responsible for both the old and the new.
This commit follows Will's excellent suggestion.

Suggested-by: Will Deacon 
Signed-off-by: Andrea Parri 
Signed-off-by: Paul E. McKenney 
Acked-by: Peter Zijlstra 
Cc: Linus Torvalds 
Cc: Thomas Gleixner 
Cc: aki...@gmail.com
Cc: boqun.f...@gmail.com
Cc: dhowe...@redhat.com
Cc: j.algl...@ucl.ac.uk
Cc: linux-a...@vger.kernel.org
Cc: luc.maran...@inria.fr
Cc: nbori...@suse.com
Cc: npig...@gmail.com
Cc: st...@rowland.harvard.edu
Link: 
http://lkml.kernel.org/r/1519169112-20593-3-git-send-email-paul...@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar 
---
 MAINTAINERS | 1 +
 1 file changed, 1 insertion(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 654c6c6..42da350 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -8163,6 +8163,7 @@ L:linux-kernel@vger.kernel.org
 S: Supported
 T: git git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git
 F: tools/memory-model/
+F: Documentation/memory-barriers.txt
 
 LINUX SECURITY MODULE (LSM) FRAMEWORK
 M: Chris Wright 


[tip:locking/core] tools/memory-model: Clarify the origin/scope of the tool name

2018-02-21 Thread tip-bot for Andrea Parri
Commit-ID:  48d44d4e8a583c66d9f376e18c1a1fcc445f4b64
Gitweb: https://git.kernel.org/tip/48d44d4e8a583c66d9f376e18c1a1fcc445f4b64
Author: Andrea Parri 
AuthorDate: Tue, 20 Feb 2018 15:25:01 -0800
Committer:  Ingo Molnar 
CommitDate: Wed, 21 Feb 2018 09:58:12 +0100

tools/memory-model: Clarify the origin/scope of the tool name

Ingo pointed out that:

  "The "memory model" name is overly generic, ambiguous and somewhat
   misleading, as we usually mean the virtual memory layout/model
   when we say "memory model". GCC too uses it in that sense [...]"

Make it clear that tools/memory-model/ uses the term "memory model" as
shorthand for "memory consistency model" by calling out this convention
in tools/memory-model/README.

Stick to the original "memory model" term in sources' headers and for
the subsystem name.
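
As a usage sketch (paths and file names are assumed from that tree, not part
of this patch), the model is exercised with the externally provided herd7
tool, e.g.:

	$ cd tools/memory-model
	$ herd7 -conf linux-kernel.cfg litmus-tests/MP+pooncerelease+poacquireonce.litmus

and klitmus7 can likewise turn such a litmus test into a kernel module for
runs on real hardware.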

Suggested-by: Ingo Molnar 
Signed-off-by: Andrea Parri 
Signed-off-by: Paul E. McKenney 
Acked-by: Peter Zijlstra 
Acked-by: Will Deacon 
Acked-by: Alan Stern 
Cc: Linus Torvalds 
Cc: Thomas Gleixner 
Cc: aki...@gmail.com
Cc: boqun.f...@gmail.com
Cc: dhowe...@redhat.com
Cc: j.algl...@ucl.ac.uk
Cc: linux-a...@vger.kernel.org
Cc: luc.maran...@inria.fr
Cc: nbori...@suse.com
Cc: npig...@gmail.com
Link: 
http://lkml.kernel.org/r/1519169112-20593-1-git-send-email-paul...@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar 
---
 tools/memory-model/MAINTAINERS   |  2 +-
 tools/memory-model/README| 14 +++---
 tools/memory-model/linux-kernel.bell |  2 +-
 tools/memory-model/linux-kernel.cat  |  2 +-
 4 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/tools/memory-model/MAINTAINERS b/tools/memory-model/MAINTAINERS
index 711cbe7..db3bd3f 100644
--- a/tools/memory-model/MAINTAINERS
+++ b/tools/memory-model/MAINTAINERS
@@ -1,4 +1,4 @@
-LINUX KERNEL MEMORY MODEL
+LINUX KERNEL MEMORY CONSISTENCY MODEL
 M: Alan Stern 
 M: Andrea Parri 
 M: Will Deacon 
diff --git a/tools/memory-model/README b/tools/memory-model/README
index 43ba494..91414a4 100644
--- a/tools/memory-model/README
+++ b/tools/memory-model/README
@@ -1,15 +1,15 @@
-   =========================
-   LINUX KERNEL MEMORY MODEL
-   =========================
+   =====================================
+   LINUX KERNEL MEMORY CONSISTENCY MODEL
+   =====================================
 
 
 INTRODUCTION
 
 
-This directory contains the memory model of the Linux kernel, written
-in the "cat" language and executable by the (externally provided)
-"herd7" simulator, which exhaustively explores the state space of
-small litmus tests.
+This directory contains the memory consistency model (memory model, for
+short) of the Linux kernel, written in the "cat" language and executable
+by the externally provided "herd7" simulator, which exhaustively explores
+the state space of small litmus tests.
 
 In addition, the "klitmus7" tool (also externally provided) may be used
 to convert a litmus test to a Linux kernel module, which in turn allows
diff --git a/tools/memory-model/linux-kernel.bell 
b/tools/memory-model/linux-kernel.bell
index 5711250..b984bbd 100644
--- a/tools/memory-model/linux-kernel.bell
+++ b/tools/memory-model/linux-kernel.bell
@@ -11,7 +11,7 @@
  * which is to appear in ASPLOS 2018.
  *)
 
-"Linux kernel memory model"
+"Linux-kernel memory consistency model"
 
 enum Accesses = 'once (*READ_ONCE,WRITE_ONCE,ACCESS_ONCE*) ||
'release (*smp_store_release*) ||
diff --git a/tools/memory-model/linux-kernel.cat 
b/tools/memory-model/linux-kernel.cat
index 15b7a5d..babe2b3 100644
--- a/tools/memory-model/linux-kernel.cat
+++ b/tools/memory-model/linux-kernel.cat
@@ -11,7 +11,7 @@
  * which is to appear in ASPLOS 2018.
  *)
 
-"Linux kernel memory model"
+"Linux-kernel memory consistency model"
 
 (*
  * File "lock.cat" handles locks and is experimental.


[tip:sched/core] sched/deadline: Fix comment in push_dl_tasks()

2015-08-12 Thread tip-bot for Andrea Parri
Commit-ID:  4ffa08ed4cc4c5d47d197d749aae6f79af91eb73
Gitweb: http://git.kernel.org/tip/4ffa08ed4cc4c5d47d197d749aae6f79af91eb73
Author: Andrea Parri 
AuthorDate: Wed, 5 Aug 2015 15:56:18 +0200
Committer:  Ingo Molnar 
CommitDate: Wed, 12 Aug 2015 12:06:10 +0200

sched/deadline: Fix comment in push_dl_tasks()

The comment is "misleading"; fix it by adapting a comment from
push_rt_tasks().

Signed-off-by: Andrea Parri 
Signed-off-by: Peter Zijlstra (Intel) 
Cc: Linus Torvalds 
Cc: Mike Galbraith 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Link: 
http://lkml.kernel.org/r/1438782979-9057-1-git-send-email-parri.and...@gmail.com
Signed-off-by: Ingo Molnar 
---
 kernel/sched/deadline.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index b473056..82c0dd0 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -1563,7 +1563,7 @@ out:
 
 static void push_dl_tasks(struct rq *rq)
 {
-   /* Terminates as it moves a -deadline task */
+   /* push_dl_task() will return true if it moved a -deadline task */
while (push_dl_task(rq))
;
 }


[tip:sched/core] sched/deadline: Fix comment in enqueue_task_dl()

2015-08-12 Thread tip-bot for Andrea Parri
Commit-ID:  ff277d4250fe715b219b1a3423b863418794
Gitweb: http://git.kernel.org/tip/ff277d4250fe715b219b1a3423b863418794
Author: Andrea Parri 
AuthorDate: Wed, 5 Aug 2015 15:56:19 +0200
Committer:  Ingo Molnar 
CommitDate: Wed, 12 Aug 2015 12:06:10 +0200

sched/deadline: Fix comment in enqueue_task_dl()

The "dl_boosted" flag is set by comparing *absolute* deadlines
(c.f., rt_mutex_setprio()).
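
Roughly (an illustrative sketch, not the actual code), the decision made in
rt_mutex_setprio() is of the form:

	/* Boost only if the top waiter's absolute deadline is earlier. */
	if (dl_time_before(pi_task->dl.deadline, p->dl.deadline))
		p->dl.dl_boosted = 1;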

Signed-off-by: Andrea Parri 
Signed-off-by: Peter Zijlstra (Intel) 
Cc: Linus Torvalds 
Cc: Mike Galbraith 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Link: 
http://lkml.kernel.org/r/1438782979-9057-2-git-send-email-parri.and...@gmail.com
Signed-off-by: Ingo Molnar 
---
 kernel/sched/deadline.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 82c0dd0..fc8f010 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -953,7 +953,7 @@ static void enqueue_task_dl(struct rq *rq, struct 
task_struct *p, int flags)
 
/*
 * Use the scheduling parameters of the top pi-waiter
-* task if we have one and its (relative) deadline is
+* task if we have one and its (absolute) deadline is
 * smaller than our one... OTW we keep our runtime and
 * deadline.
 */