From: Rik van Riel <r...@redhat.com>

commit 5085e2a328849bdee6650b32d52c87c3788ab01c upstream

When tasks have not converged on their preferred nodes yet, we want to
retry fairly often, to make sure we do not migrate a task's memory to an
undesirable location, only to have to move it again later.

This patch reduces the interval at which migration is retried, when the
task's numa_scan_period is small.

Signed-off-by: Rik van Riel <r...@redhat.com>
Tested-by: Vinod Chegu <chegu_vi...@hp.com>
Acked-by: Mel Gorman <mgor...@suse.de>
Signed-off-by: Peter Zijlstra <pet...@infradead.org>
Cc: Linus Torvalds <torva...@linux-foundation.org>
Link: http://lkml.kernel.org/r/1397235629-16328-3-git-send-email-r...@redhat.com
Signed-off-by: Ingo Molnar <mi...@kernel.org>
Signed-off-by: Yang Shi <yang....@windriver.com>
---
 kernel/sched/fair.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 813cd8e..1eda55e 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1343,12 +1343,15 @@ static int task_numa_migrate(struct task_struct *p)
 /* Attempt to migrate a task to a CPU on the preferred node. */
 static void numa_migrate_preferred(struct task_struct *p)
 {
+	unsigned long interval = HZ;
+
	/* This task has no NUMA fault statistics yet */
	if (unlikely(p->numa_preferred_nid == -1 || !p->numa_faults_memory))
		return;

	/* Periodically retry migrating the task to the preferred node */
-	p->numa_migrate_retry = jiffies + HZ;
+	interval = min(interval, msecs_to_jiffies(p->numa_scan_period) / 16);
+	p->numa_migrate_retry = jiffies + interval;

	/* Success if task is already running on preferred CPU */
	if (task_node(p) == p->numa_preferred_nid)
-- 
2.0.2
-- 
_______________________________________________
linux-yocto mailing list
linux-yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/linux-yocto