The following commit has been merged into the sched/core branch of tip:

Commit-ID:     b641a8b52c6162172ca31590510569eaadcd5e49
Gitweb:        https://git.kernel.org/tip/b641a8b52c6162172ca31590510569eaadcd5e49
Author:        Vincent Donnefort <vincent.donnef...@arm.com>
AuthorDate:    Thu, 25 Feb 2021 08:36:12 
Committer:     Peter Zijlstra <pet...@infradead.org>
CommitterDate: Wed, 03 Mar 2021 10:33:00 +01:00

sched/fair: use lsub_positive in cpu_util_next()

lsub_positive(), the local-variable version of sub_positive(), saves an
explicit load-store and is sufficient for the cpu_util_next() usage, where
util is a plain local variable.

Signed-off-by: Vincent Donnefort <vincent.donnef...@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <pet...@infradead.org>
Reviewed-by: Quentin Perret <qper...@google.com>
Reviewed-by: Dietmar Eggemann <dietmar.eggem...@arm.com>
Link:          https://lkml.kernel.org/r/20210225083612.1113823-3-vincent.donnef...@arm.com
---
 kernel/sched/fair.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index b994db9..7b2fac0 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6471,7 +6471,7 @@ static unsigned long cpu_util_next(int cpu, struct task_struct *p, int dst_cpu)
         * util_avg should already be correct.
         */
        if (task_cpu(p) == cpu && dst_cpu != cpu)
-               sub_positive(&util, task_util(p));
+               lsub_positive(&util, task_util(p));
        else if (task_cpu(p) != cpu && dst_cpu == cpu)
                util += task_util(p);
 

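For context, the two helpers live in kernel/sched/fair.c and look roughly as
follows (paraphrased sketch of the definitions around this kernel version,
not the verbatim upstream text). sub_positive() does an explicit READ_ONCE()/
WRITE_ONCE() round trip so lockless readers of the target field never observe
an underflowed value; lsub_positive() just clamps a plain local variable and
skips that load-store.

/*
 * Unsigned subtract and clamp on underflow.
 *
 * The explicit load-store keeps the intermediate value out of memory, so
 * lockless observers never see a negative (wrapped) value.
 */
#define sub_positive(_ptr, _val) do {				\
	typeof(_ptr) ptr = (_ptr);				\
	typeof(*ptr) val = (_val);				\
	typeof(*ptr) res, var = READ_ONCE(*ptr);		\
	res = var - val;					\
	if (res > var)						\
		res = 0;					\
	WRITE_ONCE(*ptr, res);					\
} while (0)

/*
 * Remove and clamp on negative, from a local variable.
 *
 * A variant of sub_positive() without the explicit load-store, intended
 * for updates to local variables that nobody else can observe.
 */
#define lsub_positive(_ptr, _val) do {				\
	typeof(_ptr) ptr = (_ptr);				\
	*ptr -= min_t(typeof(*ptr), *ptr, _val);		\
} while (0)

Since util in cpu_util_next() is a local variable with no concurrent
observers, the lighter lsub_positive() is sufficient, which is what the
patch above switches to.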