Since clearing a bit in thread_info is an atomic operation, the spinlock
is redundant and can be removed. Dropping it reduces contention on
sighand->siglock, which is good for performance.
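
For reference, clear_tsk_thread_flag() reduces to an atomic bit clear on
the thread_info flags word. Roughly (paraphrased from include/linux/sched.h
and include/linux/thread_info.h; exact details may vary between kernel
versions):

	/* clear a flag bit in a task's thread_info, atomically */
	static inline void clear_ti_thread_flag(struct thread_info *ti, int flag)
	{
		clear_bit(flag, (unsigned long *)&ti->flags);
	}

	static inline void clear_tsk_thread_flag(struct task_struct *tsk, int flag)
	{
		clear_ti_thread_flag(task_thread_info(tsk), flag);
	}

clear_bit() is an atomic read-modify-write, so concurrent updates to other
bits in the flags word cannot be lost, and no extra lock is needed just to
clear TIF_SIGPENDING here.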

Acked-by: Oleg Nesterov <o...@redhat.com>
Signed-off-by: Liao Chang <liaocha...@huawei.com>
---
 kernel/events/uprobes.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 73cc47708679..76a51a1f51e2 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -1979,9 +1979,7 @@ bool uprobe_deny_signal(void)
        WARN_ON_ONCE(utask->state != UTASK_SSTEP);
 
        if (task_sigpending(t)) {
-               spin_lock_irq(&t->sighand->siglock);
                clear_tsk_thread_flag(t, TIF_SIGPENDING);
-               spin_unlock_irq(&t->sighand->siglock);
 
                if (__fatal_signal_pending(t) || arch_uprobe_xol_was_trapped(t)) {
                        utask->state = UTASK_SSTEP_TRAPPED;
-- 
2.34.1

