Re: [PATCH] sched,numa: document and fix numa_preferred_nid setting

2015-06-22 Thread Rik van Riel
On 06/22/2015 12:13 PM, Srikar Dronamraju wrote:
>> + * migrating the task to where it really belongs.
>> + * The exception is a task that belongs to a large numa_group, which
>> + * spans multiple NUMA nodes. If that task migrates into one of the
>> + * workload's active nodes, rem…

Re: [PATCH] sched,numa: document and fix numa_preferred_nid setting

2015-06-22 Thread Srikar Dronamraju
Updated autonumabenchmark numbers.

Plain 4.1.0-rc7-tip (i)
Testcase:        Min     Max     Avg    StdDev
elapsed_numa01:  858.85  949.18  915.64  33.06
elapsed_numa02:   23.09   29.89   26.43   2.18
Te…

Re: [PATCH] sched,numa: document and fix numa_preferred_nid setting

2015-06-22 Thread Srikar Dronamraju
> + * migrating the task to where it really belongs.
> + * The exception is a task that belongs to a large numa_group, which
> + * spans multiple NUMA nodes. If that task migrates into one of the
> + * workload's active nodes, remember that node as the task's
> + * numa_pre…
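[Editorial aside: the exception documented in the quoted comment boils down to a simple guarded update. The following is a minimal, self-contained userspace C sketch of that check; struct task, the bitmask active_nodes, node_isset(), and maybe_set_preferred_nid() are simplified stand-ins for illustration, not the kernel's actual types or API.]

        #include <stdio.h>

        /* Simplified stand-ins for the kernel structures. */
        struct numa_group {
                unsigned long active_nodes;     /* bitmask standing in for nodemask_t */
        };

        struct task {
                int numa_preferred_nid;
                struct numa_group *numa_group;
        };

        static int node_isset(int nid, unsigned long mask)
        {
                return (mask >> nid) & 1;
        }

        /*
         * The exception described in the comment: a task in a numa_group
         * that spans several nodes only updates numa_preferred_nid when
         * it migrates into one of the workload's active nodes.
         */
        static void maybe_set_preferred_nid(struct task *p, int nid)
        {
                if (p->numa_group && node_isset(nid, p->numa_group->active_nodes))
                        p->numa_preferred_nid = nid;
        }

        int main(void)
        {
                struct numa_group grp = { .active_nodes = 0x5 }; /* nodes 0 and 2 */
                struct task p = { .numa_preferred_nid = -1, .numa_group = &grp };

                maybe_set_preferred_nid(&p, 1); /* node 1 not active: no change  */
                maybe_set_preferred_nid(&p, 2); /* node 2 active: remember it    */
                printf("numa_preferred_nid = %d\n", p.numa_preferred_nid); /* 2 */
                return 0;
        }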

Re: [PATCH] sched,numa: document and fix numa_preferred_nid setting

2015-06-22 Thread Srikar Dronamraju
> Would you happen to have 2 instance and 4 instance SPECjbb
> numbers, too? The single instance numbers seem to be within
> the margin of error, but I would expect multi-instance numbers
> to show more dramatic changes, due to changes in how workloads
> converge...
>
> Those behave very differen…

Re: [PATCH] sched,numa: document and fix numa_preferred_nid setting

2015-06-19 Thread Rik van Riel
On 06/19/2015 01:16 PM, Srikar Dronamraju wrote:
>>
>> OK, so we are looking at two multi-threaded processes
>> on a 4 node system, and waiting for them to converge?
>>
>> It may make sense to add my patch in with your patch
>> 1/4 from last week, as well as the correct part of
>> your patch 4/4, a…

Re: [PATCH] sched,numa: document and fix numa_preferred_nid setting

2015-06-19 Thread Srikar Dronamraju
>
> OK, so we are looking at two multi-threaded processes
> on a 4 node system, and waiting for them to converge?
>
> It may make sense to add my patch in with your patch
> 1/4 from last week, as well as the correct part of
> your patch 4/4, and see how they all work together.
>

Tested specjbb…

Re: [PATCH] sched,numa: document and fix numa_preferred_nid setting

2015-06-18 Thread Rik van Riel
On 06/18/2015 12:12 PM, Ingo Molnar wrote:
> * Srikar Dronamraju wrote:
>
>>>         if (p->numa_group) {
>>>                 if (env.best_cpu == -1)
>>> @@ -1513,7 +1520,7 @@ static int task_numa_migrate(struct task_struct *p)
>>>                 nid = env.dst_nid;
>>>
>>>         if (node_isse…

Re: [PATCH] sched,numa: document and fix numa_preferred_nid setting

2015-06-18 Thread Srikar Dronamraju
>
> OK, so we are looking at two multi-threaded processes
> on a 4 node system, and waiting for them to converge?
>
> It may make sense to add my patch in with your patch
> 1/4 from last week, as well as the correct part of
> your patch 4/4, and see how they all work together.
> --

Okay, I will…

Re: [PATCH] sched,numa: document and fix numa_preferred_nid setting

2015-06-18 Thread Rik van Riel
On 06/18/2015 12:41 PM, Srikar Dronamraju wrote:
> * Rik van Riel [2015-06-18 12:06:49]:
>
>>>
>>> Overall this patch does seem to produce better results. However numa02
>>> gets affected -vely.
>>
>> OK, that is kind of expected.
>>
>> The way numa02 runs means that we are essentially guara…

Re: [PATCH] sched,numa: document and fix numa_preferred_nid setting

2015-06-18 Thread Srikar Dronamraju
* Rik van Riel [2015-06-18 12:06:49]:
>
> > Overall this patch does seem to produce better results. However numa02
> > gets affected -vely.
>
> OK, that is kind of expected.
>
> The way numa02 runs means that we are essentially guaranteed
> that, on a two node system, both nodes end up…

Re: [PATCH] sched,numa: document and fix numa_preferred_nid setting

2015-06-18 Thread Ingo Molnar
* Srikar Dronamraju wrote:
> >         if (p->numa_group) {
> >                 if (env.best_cpu == -1)
> > @@ -1513,7 +1520,7 @@ static int task_numa_migrate(struct task_struct *p)
> >                 nid = env.dst_nid;
> >
> >         if (node_isset(nid, p->numa_group->active_nodes))
> > -…

Re: [PATCH] sched,numa: document and fix numa_preferred_nid setting

2015-06-18 Thread Rik van Riel
On 06/18/2015 11:55 AM, Srikar Dronamraju wrote:
>>         if (p->numa_group) {
>>                 if (env.best_cpu == -1)
>> @@ -1513,7 +1520,7 @@ static int task_numa_migrate(struct task_struct *p)
>>                 nid = env.dst_nid;
>>
>>         if (node_isset(nid, p->numa_group->active_…

Re: [PATCH] sched,numa: document and fix numa_preferred_nid setting

2015-06-18 Thread Srikar Dronamraju
>         if (p->numa_group) {
>                 if (env.best_cpu == -1)
> @@ -1513,7 +1520,7 @@ static int task_numa_migrate(struct task_struct *p)
>                 nid = env.dst_nid;
>
>         if (node_isset(nid, p->numa_group->active_nodes))
> -                sched_setnuma(p, e…

[PATCH] sched,numa: document and fix numa_preferred_nid setting

2015-06-16 Thread Rik van Riel
There are two places where the NUMA balancing code sets a task's numa_preferred_nid. The primary location is task_numa_placement(), where the kernel examines the NUMA fault statistics to determine the node on which most of the memory accessed by the task (or its numa_group) resides. The second location…
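[Editorial aside: the "primary location" described in the truncated changelog picks the node that has accumulated the most NUMA faults. The following is a minimal, self-contained userspace C sketch of that selection step; MAX_NUMNODES, struct task, and the flat numa_faults array are simplified stand-ins for illustration, not the kernel's real per-task fault bookkeeping.]

        #include <stdio.h>

        #define MAX_NUMNODES 4  /* illustrative node count */

        struct task {
                unsigned long numa_faults[MAX_NUMNODES]; /* per-node fault statistics */
                int numa_preferred_nid;
        };

        /*
         * Sketch of the primary location: scan the per-node fault
         * statistics and prefer the node with the most faults.
         */
        static void task_numa_placement(struct task *p)
        {
                unsigned long max_faults = 0;
                int nid, max_nid = -1;

                for (nid = 0; nid < MAX_NUMNODES; nid++) {
                        if (p->numa_faults[nid] > max_faults) {
                                max_faults = p->numa_faults[nid];
                                max_nid = nid;
                        }
                }

                /* Only update the preferred node when fault activity was seen. */
                if (max_faults && max_nid != p->numa_preferred_nid)
                        p->numa_preferred_nid = max_nid;
        }

        int main(void)
        {
                struct task p = { .numa_faults = { 10, 250, 30, 5 },
                                  .numa_preferred_nid = -1 };

                task_numa_placement(&p);
                printf("numa_preferred_nid = %d\n", p.numa_preferred_nid); /* 1 */
                return 0;
        }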