On Fri, 16 Nov 2012, Ingo Molnar wrote:
> > The interleaving of memory areas that have an equal amount of
> > shared accesses from multiple nodes is essential to limit the
> > traffic on the interconnect and get top performance.
>
> That is true only if the load is symmetric.
Which is usually true.
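The trade-off in this exchange can be made concrete with a toy model (purely illustrative, not kernel code; all names are invented). Two costs matter: total cross-node accesses over the interconnect, and the peak load on any one node's memory controller. With a symmetric load, interleaving doesn't reduce the remote-access total but it does halve the worst-case controller load; with an asymmetric load, local placement wins outright, which is the caveat raised above:

```python
# Toy model of page placement on a 2-node machine (illustration only).
# Each node issues some number of accesses to every page. We track the
# total cross-node (interconnect) accesses and the worst-case load on a
# single node's memory controller.

def costs(page_nodes, accesses_per_node):
    remote = 0
    controller = {node: 0 for node in accesses_per_node}
    for page_node in page_nodes:
        for node, count in accesses_per_node.items():
            controller[page_node] += count
            if node != page_node:
                remote += count
    return remote, max(controller.values())

pages = 8
local_node0 = [0] * pages
interleaved = [i % 2 for i in range(pages)]

# Symmetric load: both nodes touch every page equally.
sym = {0: 100, 1: 100}
print(costs(local_node0, sym))   # (800, 1600): one controller takes everything
print(costs(interleaved, sym))   # (800, 800): same remote total, balanced bandwidth

# Asymmetric load: only node 0 touches the pages.
asym = {0: 100, 1: 0}
print(costs(local_node0, asym))  # (0, 800): local placement wins outright
print(costs(interleaved, asym))  # (400, 400): interleaving now adds remote traffic
```

The symmetric case shows why interleaving helps shared data (it spreads memory-controller bandwidth), and the asymmetric case shows why it is not a general rule.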
* Christoph Lameter wrote:
> On Tue, 13 Nov 2012, Ingo Molnar wrote:
>
> > > the pages over both nodes in use.
> >
> > I'd not go as far as to claim that to be a general rule: the
> > correct placement depends on the system and workload
> > specifics: how much memory is on each node, how many
> > tasks run on each node, and whether the acc…
On Tue, 13 Nov 2012, Ingo Molnar wrote:
> > the pages over both nodes in use.
>
> I'd not go as far as to claim that to be a general rule: the
> correct placement depends on the system and workload specifics:
> how much memory is on each node, how many tasks run on each
> node, and whether the acc…
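The point that there is no universal placement rule can be sketched as a tiny decision heuristic (names and threshold invented for illustration; this is not what the kernel does, just a model of the reasoning): given per-node access counts for a region, prefer local placement when one node dominates, and interleaving when accesses are roughly symmetric.

```python
# Hypothetical placement chooser (illustrative sketch, not kernel code).
# accesses_per_node maps node id -> observed access count for a region.

def choose_policy(accesses_per_node, dominance=0.75):
    total = sum(accesses_per_node.values())
    if total == 0:
        return ("local", None)   # no data yet: keep the first-touch node
    node, count = max(accesses_per_node.items(), key=lambda kv: kv[1])
    if count / total >= dominance:
        return ("local", node)   # one node dominates: place (or migrate) there
    # otherwise spread the pages across the nodes that actually access them
    return ("interleave", sorted(n for n, c in accesses_per_node.items() if c))

print(choose_policy({0: 900, 1: 100}))  # ('local', 0)
print(choose_policy({0: 500, 1: 500}))  # ('interleave', [0, 1])
```

The dominance threshold is exactly the kind of workload-specific knob being debated above: no fixed value is right for every system.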
* Christoph Lameter wrote:
> On Mon, 12 Nov 2012, Peter Zijlstra wrote:
>
> > The biggest conceptual addition, beyond the elimination of
> > the home node, is that the scheduler is now able to
> > recognize 'private' versus 'shared' pages, by carefully
> > analyzing the pattern of how CPUs touch the
> > working set pages. …
On Mon, 12 Nov 2012, Peter Zijlstra wrote:
> The biggest conceptual addition, beyond the elimination of the home
> node, is that the scheduler is now able to recognize 'private' versus
> 'shared' pages, by carefully analyzing the pattern of how CPUs touch the
> working set pages. The scheduler aut…
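The private-versus-shared distinction described above can be sketched in miniature (loosely modelled on tracking the last faulting task per page; class and field names here are invented, and the real kernel packs this into page flags): a page that keeps faulting from the same task is treated as private to it, while a page touched by different tasks is flagged shared.

```python
# Simplified sketch of private-vs-shared page classification
# (illustration only, not the kernel's data structures).

class Page:
    def __init__(self):
        self.last_task = None   # task id recorded at the previous fault
        self.shared = False     # sticky: once shared, always shared here

def record_fault(page, task_id):
    """Record a NUMA hinting fault and classify the page."""
    if page.last_task is not None and page.last_task != task_id:
        page.shared = True      # a different task touched it: shared access
    page.last_task = task_id
    return "shared" if page.shared else "private"

p = Page()
print(record_fault(p, 1))  # private: first touch
print(record_fault(p, 1))  # private: same task again
print(record_fault(p, 2))  # shared: a second task touched it
```

Private pages can then be migrated to their task's node, while shared pages become candidates for interleaving, tying this back to the placement debate earlier in the thread.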
Hi,
This series implements an improved version of NUMA scheduling, based on
the review and testing feedback we got.
Like the previous version, this code is driven by working set probing
faults (so much of the VM machinery remains) - but the subsequent
utilization of those faults and the scheduler…
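The probing-fault mechanism the cover letter refers to can be modelled in a few lines (a toy, not the kernel's implementation; all names invented): a periodic scanner arms a trap on each page, and the next access takes a hinting fault that reveals which node touched the page, driving a naive migrate-to-the-faulting-node policy.

```python
# Toy model of working-set probing faults (illustration only). In the
# kernel the "probe" is a page-table protection change; here it is a flag.

class ProbedPage:
    def __init__(self, home_node):
        self.home = home_node
        self.probed = False      # True => the next access takes a hinting fault

def scan(pages):
    """Periodic scanner: arm a probe on every page."""
    for p in pages:
        p.probed = True

def access(page, node):
    """An access from `node`; returns True if the page was migrated."""
    if page.probed:              # hinting fault: we learn who touched the page
        page.probed = False
        if node != page.home:
            page.home = node     # naive policy: migrate to the faulting node
            return True
    return False

pages = [ProbedPage(home_node=0) for _ in range(4)]
scan(pages)
print(access(pages[0], node=1))  # True: remote fault, page migrated to node 1
print(access(pages[0], node=1))  # False: probe already consumed, no fault
```

The real series is precisely about replacing this naive always-migrate policy with smarter utilization of the same fault stream.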