When nohz or nohz_full is configured, the number of concurrent calls
to tick_do_update_jiffies64() increases, and contention on
jiffies_lock and jiffies_seq grows, especially in multi-core
scenarios.

However, it is unnecessary to enter the jiffies_seq write critical
section more than once per tick period, so the jiffies_seq critical
section can be shrunk to reduce latency overheads: callers that find
nothing to update no longer bump the sequence count, so concurrent
readers are not forced to retry. Note that last_jiffies_update is
protected by jiffies_lock, so shrinking the jiffies_seq critical
section is safe.
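
For illustration, the reader side of jiffies_seq retries whenever the
sequence count moves, roughly as in the simplified sketch of the
get_jiffies_64()-style reader below (the snippet is a sketch for
context, not part of this patch). Every needless
write_seqcount_begin()/write_seqcount_end() pair on a tick that does
not advance jiffies makes such readers spin for nothing:

	u64 get_jiffies_64(void)
	{
		unsigned int seq;
		u64 ret;

		/* Retry while a writer holds or held the write section */
		do {
			seq = read_seqcount_begin(&jiffies_seq);
			ret = jiffies_64;
		} while (read_seqcount_retry(&jiffies_seq, seq));

		return ret;
	}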

Signed-off-by: Yunfeng Ye <yeyunf...@huawei.com>
---
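
For reference, the locking flow of tick_do_update_jiffies64() after
this change, reconstructed from the hunks below (the elided middle of
the function, which advances jiffies_64, is unchanged by this patch):

	raw_spin_lock(&jiffies_lock);

	delta = ktime_sub(now, last_jiffies_update);
	if (delta >= tick_period) {
		/* Only take the write section when jiffies advances */
		write_seqcount_begin(&jiffies_seq);
		/* ... advance last_jiffies_update and jiffies_64 ... */
		tick_next_period = ktime_add(last_jiffies_update, tick_period);
		write_seqcount_end(&jiffies_seq);
	} else {
		raw_spin_unlock(&jiffies_lock);
		return;
	}
	raw_spin_unlock(&jiffies_lock);
	update_wall_time();
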
 kernel/time/tick-sched.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
index f0199a4ba1ad..41fb1400439b 100644
--- a/kernel/time/tick-sched.c
+++ b/kernel/time/tick-sched.c
@@ -66,11 +66,11 @@ static void tick_do_update_jiffies64(ktime_t now)

        /* Reevaluate with jiffies_lock held */
        raw_spin_lock(&jiffies_lock);
-       write_seqcount_begin(&jiffies_seq);

        delta = ktime_sub(now, last_jiffies_update);
        if (delta >= tick_period) {

+               write_seqcount_begin(&jiffies_seq);
                delta = ktime_sub(delta, tick_period);
                /* Pairs with the lockless read in this function. */
                WRITE_ONCE(last_jiffies_update,
@@ -91,12 +91,11 @@ static void tick_do_update_jiffies64(ktime_t now)

                /* Keep the tick_next_period variable up to date */
                tick_next_period = ktime_add(last_jiffies_update, tick_period);
-       } else {
                write_seqcount_end(&jiffies_seq);
+       } else {
                raw_spin_unlock(&jiffies_lock);
                return;
        }
-       write_seqcount_end(&jiffies_seq);
        raw_spin_unlock(&jiffies_lock);
        update_wall_time();
 }
-- 
2.18.4
