[tip:locking/core] jump_label: Move CPU hotplug locking

2017-08-10 Thread tip-bot for Marc Zyngier
Commit-ID:  b70cecf4b6b72a9977576ab32cca0e24f286f517
Gitweb: http://git.kernel.org/tip/b70cecf4b6b72a9977576ab32cca0e24f286f517
Author: Marc Zyngier 
AuthorDate: Tue, 1 Aug 2017 09:02:54 +0100
Committer:  Ingo Molnar 
CommitDate: Thu, 10 Aug 2017 12:28:58 +0200

jump_label: Move CPU hotplug locking

As we're about to rework the locking, let's move the taking and
release of the CPU hotplug lock to locations that will make its
reworking completely obvious.

Signed-off-by: Marc Zyngier 
Signed-off-by: Peter Zijlstra (Intel) 
Cc: Leo Yan 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: linux-arm-ker...@lists.infradead.org
Link: http://lkml.kernel.org/r/20170801080257.5056-2-marc.zyng...@arm.com
Signed-off-by: Ingo Molnar 
---
 kernel/jump_label.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/kernel/jump_label.c b/kernel/jump_label.c
index f2ea678..161301f 100644
--- a/kernel/jump_label.c
+++ b/kernel/jump_label.c
@@ -83,6 +83,7 @@ void static_key_slow_inc(struct static_key *key)
 {
 	int v, v1;
 
+	cpus_read_lock();
 	STATIC_KEY_CHECK_USE();
 
 	/*
@@ -99,11 +100,12 @@ void static_key_slow_inc(struct static_key *key)
 	 */
 	for (v = atomic_read(&key->enabled); v > 0; v = v1) {
 		v1 = atomic_cmpxchg(&key->enabled, v, v + 1);
-		if (likely(v1 == v))
+		if (likely(v1 == v)) {
+			cpus_read_unlock();
 			return;
+		}
 	}
 
-	cpus_read_lock();
 	jump_label_lock();
 	if (atomic_read(&key->enabled) == 0) {
 		atomic_set(&key->enabled, -1);
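
For readers who want to see the resulting lock ordering in isolation, here is a
minimal userspace model (an assumption for illustration, not the kernel code):
a pthread rwlock stands in for the CPU hotplug lock taken by cpus_read_lock(),
a pthread mutex stands in for jump_label_mutex, and the names model_cpus_lock,
model_jump_mutex and model_slow_inc() are hypothetical. The point the patch
makes explicit is that the hotplug read lock is now taken unconditionally at
function entry and dropped on the fast-path return, so the slow path always
runs with it held when it takes the jump_label lock.

/*
 * Userspace sketch of the post-patch locking in static_key_slow_inc().
 * model_cpus_lock  ~ CPU hotplug lock (cpus_read_lock()/cpus_read_unlock())
 * model_jump_mutex ~ jump_label_mutex (jump_label_lock()/jump_label_unlock())
 * The kernel's intermediate -1 "being enabled" state and the actual code
 * patching are collapsed into a plain store here.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static pthread_rwlock_t model_cpus_lock = PTHREAD_RWLOCK_INITIALIZER;
static pthread_mutex_t model_jump_mutex = PTHREAD_MUTEX_INITIALIZER;
static atomic_int enabled;	/* models key->enabled */

static void model_slow_inc(void)
{
	int v, v1;

	pthread_rwlock_rdlock(&model_cpus_lock);	/* cpus_read_lock() at entry */

	/* Fast path: key already enabled, just bump the reference count. */
	for (v = atomic_load(&enabled); v > 0; v = v1) {
		v1 = v;
		if (atomic_compare_exchange_strong(&enabled, &v1, v + 1)) {
			pthread_rwlock_unlock(&model_cpus_lock);	/* cpus_read_unlock() */
			return;
		}
		/* on failure, v1 holds the current value; retry with v = v1 */
	}

	/* Slow path: take the jump_label lock while still holding the read lock. */
	pthread_mutex_lock(&model_jump_mutex);
	if (atomic_load(&enabled) == 0)
		atomic_store(&enabled, 1);	/* first enable (patching elided) */
	else
		atomic_fetch_add(&enabled, 1);
	pthread_mutex_unlock(&model_jump_mutex);
	pthread_rwlock_unlock(&model_cpus_lock);
}

int main(void)
{
	model_slow_inc();	/* slow path: first enable */
	model_slow_inc();	/* fast path: count bump under the read lock */
	printf("enabled = %d\n", atomic_load(&enabled));
	return 0;
}

Built with cc -pthread, the two calls exercise first the slow path (initial
enable) and then the fast path (reference bump while the read lock is held).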

