On 06/22/2015 12:13 PM, Srikar Dronamraju wrote:
>> + * migrating the task to where it really belongs.
>> + * The exception is a task that belongs to a large numa_group, which
>> + * spans multiple NUMA nodes. If that task migrates into one of the
>> + * workload's active nodes, remember that node as the task's
>> + * numa_preferred_nid.
Updated autonumabenchmark numbers
Plain 4.1.0-rc7-tip (i)
Testcase: Min Max Avg StdDev
elapsed_numa01: 858.85 949.18 915.64 33.06
elapsed_numa02: 23.09 29.89 26.43 2.18
> + * migrating the task to where it really belongs.
> + * The exception is a task that belongs to a large numa_group, which
> + * spans multiple NUMA nodes. If that task migrates into one of the
> + * workload's active nodes, remember that node as the task's
> + * numa_preferred_nid.
> Would you happen to have 2 instance and 4 instance SPECjbb
> numbers, too? The single instance numbers seem to be within
> the margin of error, but I would expect multi-instance numbers
> to show more dramatic changes, due to changes in how workloads
> converge...
>
> Those behave very differently.
On 06/19/2015 01:16 PM, Srikar Dronamraju wrote:
>>
>> OK, so we are looking at two multi-threaded processes
>> on a 4 node system, and waiting for them to converge?
>>
>> It may make sense to add my patch in with your patch
>> 1/4 from last week, as well as the correct part of
>> your patch 4/4, and see how they all work together.
>
> OK, so we are looking at two multi-threaded processes
> on a 4 node system, and waiting for them to converge?
>
> It may make sense to add my patch in with your patch
> 1/4 from last week, as well as the correct part of
> your patch 4/4, and see how they all work together.
>
Tested specjbb
On 06/18/2015 12:12 PM, Ingo Molnar wrote:
> * Srikar Dronamraju wrote:
>
>>> if (p->numa_group) {
>>> if (env.best_cpu == -1)
>>> @@ -1513,7 +1520,7 @@ static int task_numa_migrate(struct task_struct *p)
>>> nid = env.dst_nid;
>>>
>>> if (node_isset(nid, p->numa_group->active_nodes))
>
> OK, so we are looking at two multi-threaded processes
> on a 4 node system, and waiting for them to converge?
>
> It may make sense to add my patch in with your patch
> 1/4 from last week, as well as the correct part of
> your patch 4/4, and see how they all work together.
> --
Okay, I will
On 06/18/2015 12:41 PM, Srikar Dronamraju wrote:
> * Rik van Riel [2015-06-18 12:06:49]:
>
>>>
>>> Overall this patch does seem to produce better results. However numa02
>>> gets affected -vely.
>>
>> OK, that is kind of expected.
>>
>> The way numa02 runs means that we are essentially guaranteed
>> that, on a two node system, both nodes end up
* Rik van Riel [2015-06-18 12:06:49]:
> >>
> >
> > Overall this patch does seem to produce better results. However numa02
> > gets affected -vely.
>
> OK, that is kind of expected.
>
> The way numa02 runs means that we are essentially guaranteed
> that, on a two node system, both nodes end up
* Srikar Dronamraju wrote:
> > if (p->numa_group) {
> > if (env.best_cpu == -1)
> > @@ -1513,7 +1520,7 @@ static int task_numa_migrate(struct task_struct *p)
> > nid = env.dst_nid;
> >
> > if (node_isset(nid, p->numa_group->active_nodes))
> > -
On 06/18/2015 11:55 AM, Srikar Dronamraju wrote:
>> if (p->numa_group) {
>> if (env.best_cpu == -1)
>> @@ -1513,7 +1520,7 @@ static int task_numa_migrate(struct task_struct *p)
>> nid = env.dst_nid;
>>
>> if (node_isset(nid, p->numa_group->active_nodes))
> if (p->numa_group) {
> if (env.best_cpu == -1)
> @@ -1513,7 +1520,7 @@ static int task_numa_migrate(struct task_struct *p)
> nid = env.dst_nid;
>
> if (node_isset(nid, p->numa_group->active_nodes))
> - sched_setnuma(p, e
There are two places where the numa balancing code sets a task's
numa_preferred_nid.
The primary location is task_numa_placement(), where the kernel
examines the NUMA fault statistics to determine the location where
most of the memory that the task (or numa_group) accesses is.
The second location is the migration path: as the quoted comment above
describes, a task belonging to a large numa_group that migrates into one
of the workload's active nodes remembers that node as its
numa_preferred_nid.