Make p->numa_shared flip-flop less around unstable equilibria: instead
of switching on a single threshold, require a significant move in
either direction before a task's NUMA status changes between
'dominantly shared accesses' and 'dominantly private accesses'.
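
As a stand-alone illustration of the hysteresis (not kernel code: the
classify_numa_shared() helper and the fault counts below are made up
for the example), requiring 2:1 dominance means a task hovering near a
50/50 fault split keeps its previous classification instead of
oscillating on every placement pass:

  #include <stdio.h>

  /*
   * Hysteretic shared/private classification, mirroring the patch:
   * 'shared_f' counts shared-page faults, 'private_f' private-page
   * faults, 'prev' is the previous state (-1 unknown, 0 private,
   * 1 shared). Flipping an existing state needs 2:1 dominance.
   */
  static int classify_numa_shared(long shared_f, long private_f, int prev)
  {
          if (prev < 0)           /* no history yet: simple majority */
                  return shared_f >= private_f;
          if (prev == 0)          /* was private: need 2x shared faults */
                  return shared_f >= 2 * private_f;
          /* was shared: flip to private only on 2x private faults */
          return 2 * shared_f > private_f;
  }

  int main(void)
  {
          printf("%d\n", classify_numa_shared(60,  50, 0)); /* 0: stays private, 60 < 100 */
          printf("%d\n", classify_numa_shared(40,  50, 1)); /* 1: stays shared,  80 > 50  */
          printf("%d\n", classify_numa_shared(110, 50, 0)); /* 1: flips, 110 >= 100       */
          return 0;
  }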

Suggested-by: Rik van Riel <r...@redhat.com>
Cc: Linus Torvalds <torva...@linux-foundation.org>
Cc: Andrew Morton <a...@linux-foundation.org>
Cc: Peter Zijlstra <a.p.zijls...@chello.nl>
Cc: Andrea Arcangeli <aarca...@redhat.com>
Cc: Rik van Riel <r...@redhat.com>
Cc: Mel Gorman <mgor...@suse.de>
Cc: Hugh Dickins <hu...@google.com>
Signed-off-by: Ingo Molnar <mi...@kernel.org>
---
 kernel/sched/fair.c | 15 ++++++++++++++-
 1 file changed, 14 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 8aa4b36..ab4a7130 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1111,7 +1111,20 @@ static void task_numa_placement(struct task_struct *p)
         * we might want to consider a different equation below to reduce
         * the impact of a little private memory accesses.
         */
-       shared = (total[0] >= total[1] / 2);
+       shared = p->numa_shared;
+
+       if (shared < 0) {
+               shared = (total[0] >= total[1]);
+       } else if (shared == 0) {
+               /* If it was private before, make it harder to become shared: */
+               if (total[0] >= total[1]*2)
+                       shared = 1;
+       } else if (shared == 1) {
+               /* If it was shared before, make it harder to become private: */
+               if (total[0]*2 <= total[1])
+                       shared = 0;
+       }
+
        if (shared)
                p->ideal_cpu = sched_update_ideal_cpu_shared(p);
        else
-- 
1.7.11.7
