Commit-ID:  38460a2178d225b39ade5ac66586c3733391cf86
Gitweb:     http://git.kernel.org/tip/38460a2178d225b39ade5ac66586c3733391cf86
Author:     Davidlohr Bueso <dave@stgolabs>
AuthorDate: Wed, 9 Mar 2016 17:55:36 -0800
Committer:  Ingo Molnar <mi...@kernel.org>
CommitDate: Thu, 10 Mar 2016 10:28:35 +0100
locking/csd_lock: Use smp_cond_acquire() in csd_lock_wait()

We can micro-optimize this call and mildly relax the barrier
requirements by relying on ctrl + rmb, keeping the acquire semantics.
In addition, this is now pretty much the standard for busy-waiting
under such constraints.

Signed-off-by: Davidlohr Bueso <dbu...@suse.de>
Acked-by: Peter Zijlstra (Intel) <pet...@infradead.org>
Cc: Linus Torvalds <torva...@linux-foundation.org>
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Thomas Gleixner <t...@linutronix.de>
Cc: d...@stgolabs.net
Link: http://lkml.kernel.org/r/1457574936-19065-3-git-send-email-dbu...@suse.de
Signed-off-by: Ingo Molnar <mi...@kernel.org>
---
 kernel/smp.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/kernel/smp.c b/kernel/smp.c
index 5099db1..300d293 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -107,8 +107,7 @@ void __init call_function_init(void)
  */
 static __always_inline void csd_lock_wait(struct call_single_data *csd)
 {
-	while (smp_load_acquire(&csd->flags) & CSD_FLAG_LOCK)
-		cpu_relax();
+	smp_cond_acquire(!(csd->flags & CSD_FLAG_LOCK));
 }
 
 static __always_inline void csd_lock(struct call_single_data *csd)