Hi,

This series implements an improved version of NUMA scheduling, based on
the review and testing feedback we got.

Like the previous version, this code is driven by working set probing
faults (so much of the VM machinery remains) - but the subsequent use
of those faults and the scheduler policy have changed substantially.
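
For those unfamiliar with the probing approach, here is a rough
user-space analogue of the idea (this is not the kernel code - all names
and structure below are made up purely for illustration): access to the
working set is periodically revoked, and the resulting faults reveal
which pages a task is actually touching:

#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define NPAGES 16

static char *region;
static long page_size;
static int fault_count[NPAGES];		/* per-page "hinting fault" counter */

/*
 * Each probing fault tells us which page (and, in the kernel, which
 * CPU/node) was touched; here we just count it and restore access so
 * the faulting access can be retried.
 */
static void probe_fault_handler(int sig, siginfo_t *si, void *uc)
{
	long idx = ((char *)si->si_addr - region) / page_size;

	(void)sig; (void)uc;
	if (idx < 0 || idx >= NPAGES)
		_exit(1);			/* unexpected fault: bail out */
	fault_count[idx]++;
	mprotect(region + idx * page_size, page_size, PROT_READ | PROT_WRITE);
}

int main(void)
{
	struct sigaction sa;

	page_size = sysconf(_SC_PAGESIZE);
	memset(&sa, 0, sizeof(sa));
	sa.sa_sigaction = probe_fault_handler;
	sa.sa_flags = SA_SIGINFO;
	sigemptyset(&sa.sa_mask);
	sigaction(SIGSEGV, &sa, NULL);

	region = mmap(NULL, NPAGES * page_size, PROT_READ | PROT_WRITE,
		      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	/* the "probe": revoke access, analogous to the periodic protection scan */
	mprotect(region, NPAGES * page_size, PROT_NONE);

	region[0 * page_size] = 1;		/* first touch: probing fault */
	region[3 * page_size] = 1;		/* first touch: probing fault */
	region[3 * page_size] = 2;		/* access already restored: no fault */

	for (int i = 0; i < NPAGES; i++)
		if (fault_count[i])
			printf("page %d: %d probing fault(s)\n", i, fault_count[i]);
	return 0;
}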

The scheduler's affinity logic has been generalized, which allowed us
to eliminate the needlessly restrictive 'home node' concept.

The biggest conceptual addition, beyond the elimination of the home
node, is that the scheduler is now able to recognize 'private' versus
'shared' pages by carefully analyzing the pattern in which CPUs touch
the working set pages. The scheduler automatically recognizes tasks that
share memory with each other (and make dominant use of that memory)
versus tasks that allocate and use their working set privately.
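
In pseudo-C terms, the classification amounts to remembering who last
faulted on a page and comparing that with the current faulter. The
snippet below is only an illustrative sketch, not the actual
implementation, and all identifiers in it are made up:

#include <stdbool.h>
#include <stdio.h>

struct page_info {
	int last_accessor;		/* id of the task that last faulted here */
};

struct task_stats {
	unsigned long private_faults;
	unsigned long shared_faults;
};

/* classify one probing fault and update the per-task counters */
static bool fault_is_private(struct page_info *page, int task_id,
			     struct task_stats *stats)
{
	bool priv = (page->last_accessor == task_id);

	page->last_accessor = task_id;
	if (priv)
		stats->private_faults++;
	else
		stats->shared_faults++;
	return priv;
}

int main(void)
{
	struct page_info page = { .last_accessor = -1 };
	struct task_stats a = { 0 }, b = { 0 };

	fault_is_private(&page, 1, &a);	/* first touch: counted as shared here */
	fault_is_private(&page, 1, &a);	/* same task again: private */
	fault_is_private(&page, 2, &b);	/* different task: shared */

	printf("task 1: %lu private, %lu shared\n", a.private_faults, a.shared_faults);
	printf("task 2: %lu private, %lu shared\n", b.private_faults, b.shared_faults);
	return 0;
}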

This new scheduler code is then able to group together tasks that are
"memory related" via their memory access patterns: in the NUMA context
it moves them onto the same node if possible, and spreads them amongst
nodes if they use private memory.
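
Conceptually the placement then reduces to something like the following
(again an illustrative sketch rather than the real code): tasks that
keep taking shared faults against the same pages are accounted in one
group, and the group gravitates towards the node where it takes most of
its faults:

#include <stdio.h>

#define MAX_NODES 4

struct numa_group {
	unsigned long faults[MAX_NODES];  /* probing faults seen per node */
};

/* pick the node on which this group's working set is concentrated */
static int preferred_node(const struct numa_group *grp)
{
	int node, best = 0;

	for (node = 1; node < MAX_NODES; node++)
		if (grp->faults[node] > grp->faults[best])
			best = node;
	return best;
}

int main(void)
{
	/* two tasks sharing memory end up accounted in one group */
	struct numa_group shared_group = { .faults = { 10, 250, 30, 5 } };

	printf("group prefers node %d\n", preferred_node(&shared_group));
	return 0;
}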

Note that this adaptive NUMA affinity mechanism integrated into the
scheduler is essentially free of heuristics - only the access patterns
determine which tasks are related and grouped. As a result, this
adaptive affinity code is able to move both threads and processes
close(r) to each other if they are related - and let them spread out if
they are not. If a workload changes its characteristics dynamically,
then its scheduling will adapt dynamically as well.

You can find the finer details in the individual patches. The series is
based on commit 02743c9c03f1, which you can find in linux-next. Reviews
and testing feedback are welcome! (We'll also review some of the other
feedback we got in the last 2 weeks that we might not have reacted to
yet - please be patient.)

Next we plan to pick up bits from Mel's recent series, such as his page
migration patch.

Thanks,

        Peter, Ingo

