Commit-ID:  90d1098478fb08a1ef166fe91622d8046869e17b
Gitweb:     http://git.kernel.org/tip/90d1098478fb08a1ef166fe91622d8046869e17b
Author:     Davidlohr Bueso <[email protected]>
AuthorDate: Wed, 9 Mar 2016 17:55:35 -0800
Committer:  Ingo Molnar <[email protected]>
CommitDate: Thu, 10 Mar 2016 10:28:35 +0100

locking/csd_lock: Explicitly inline csd_lock*() helpers

While the compiler already tends to do this for us (except for
csd_unlock()), make it explicit. These helpers mainly deal with
the csd ->flags field, are short-lived and can be called, for
example, from smp_call_function_many().
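
For context, below is a minimal, hypothetical userspace sketch of the
->flags handshake these helpers implement, using C11 atomics as
stand-ins for the kernel's smp_load_acquire()/smp_wmb() and
cpu_relax(); the struct and function names and the flag value are
illustrative only, not kernel code:

  #include <stdatomic.h>

  #define CSD_FLAG_LOCK 0x01

  struct csd_sketch {
          atomic_uint flags;
  };

  static inline void csd_lock_wait_sketch(struct csd_sketch *csd)
  {
          /* Spin until the previous owner's release store is visible. */
          while (atomic_load_explicit(&csd->flags, memory_order_acquire) &
                 CSD_FLAG_LOCK)
                  ;       /* the kernel would call cpu_relax() here */
  }

  static inline void csd_lock_sketch(struct csd_sketch *csd)
  {
          csd_lock_wait_sketch(csd);
          atomic_fetch_or_explicit(&csd->flags, CSD_FLAG_LOCK,
                                   memory_order_relaxed);
          /*
           * The kernel follows this with smp_wmb() so the csd contents
           * are ordered before the IPI handing it to another CPU.
           */
          atomic_thread_fence(memory_order_release);
  }

  static inline void csd_unlock_sketch(struct csd_sketch *csd)
  {
          /* Release the csd; pairs with the acquire load in the wait loop. */
          atomic_fetch_and_explicit(&csd->flags, ~CSD_FLAG_LOCK,
                                    memory_order_release);
  }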

Signed-off-by: Davidlohr Bueso <[email protected]>
Acked-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
---
 kernel/smp.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/kernel/smp.c b/kernel/smp.c
index d903c02..5099db1 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -105,13 +105,13 @@ void __init call_function_init(void)
  * previous function call. For multi-cpu calls its even more interesting
  * as we'll have to ensure no other cpu is observing our csd.
  */
-static void csd_lock_wait(struct call_single_data *csd)
+static __always_inline void csd_lock_wait(struct call_single_data *csd)
 {
        while (smp_load_acquire(&csd->flags) & CSD_FLAG_LOCK)
                cpu_relax();
 }
 
-static void csd_lock(struct call_single_data *csd)
+static __always_inline void csd_lock(struct call_single_data *csd)
 {
        csd_lock_wait(csd);
        csd->flags |= CSD_FLAG_LOCK;
@@ -124,7 +124,7 @@ static void csd_lock(struct call_single_data *csd)
        smp_wmb();
 }
 
-static void csd_unlock(struct call_single_data *csd)
+static __always_inline void csd_unlock(struct call_single_data *csd)
 {
        WARN_ON(!(csd->flags & CSD_FLAG_LOCK));
 
