task_numa_fault() is invoked from do_numa_page()/do_huge_pmd_numa_page() for memory faults induced by task_numa_work(). task_numa_work() is scheduled from task_tick_numa(), which is invoked only if sched_numa_balancing is enabled.
So task_numa_fault() will not get invoked if sched_numa_balancing is
disabled, and hence we can avoid checking it again in task_numa_fault().

Signed-off-by: Imran Khan <imran.f.k...@oracle.com>
---
 kernel/sched/fair.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 04a3ce20da67..282ebd6c4197 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2643,9 +2643,6 @@ void task_numa_fault(int last_cpupid, int mem_node, int pages, int flags)
 	struct numa_group *ng;
 	int priv;
 
-	if (!static_branch_likely(&sched_numa_balancing))
-		return;
-
 	/* for example, ksmd faulting in a user's mm */
 	if (!p->mm)
 		return;
-- 
2.25.1