On Tue, Jul 16, 2019 at 10:41:36AM +0800, 王贇 wrote:
> Actually whatever the memory node set or the cpu allowed set is, it will
> take effect on the task's behavior regarding memory and cpu
> placement, while the locality only cares about the results, rather than
> the sets.
Hi Michal,
Thx for the comments :-)
On 2019/7/15 8:10 PM, Michal Koutný wrote:
> Hello Yun.
>
> On Fri, Jul 12, 2019 at 06:10:24PM +0800, 王贇 wrote:
>> Forgive me but I have no idea on how to combine this
>> with memory cgroup's locality hierarchical update...
>> parent memory cgroup does not have influence on mems_allowed
>> to its children, correct?
Hello Yun.
On Fri, Jul 12, 2019 at 06:10:24PM +0800, 王贇 wrote:
> Forgive me but I have no idea on how to combine this
> with memory cgroup's locality hierarchical update...
> parent memory cgroup does not have influence on mems_allowed
> to its children, correct?
I'd recommend looking at

Documentation/cgroup-v1/cpusets.txt

Look for mems_allowed.
On 2019/7/12 6:10 PM, 王贇 wrote:
[snip]
>>
>> Documentation/cgroup-v1/cpusets.txt
>>
>> Look for mems_allowed.
>
> This is an attribute belonging to the cpuset cgroup, isn't it?
>
> Forgive me but I have no idea on how to combine this
> with memory cgroup's locality hierarchical update...
> parent memory cgroup does not have influence on mems_allowed
> to its children, correct?
On 2019/7/12 5:42 PM, Peter Zijlstra wrote:
> On Fri, Jul 12, 2019 at 05:11:25PM +0800, 王贇 wrote:
>>
>>
>> On 2019/7/12 3:58 PM, Peter Zijlstra wrote:
>> [snip]
>
> Then our task t1 should be accounted to B (as you do), but also to A and
> R.
I get the point but not quite sure about this...
On Fri, Jul 12, 2019 at 05:11:25PM +0800, 王贇 wrote:
>
>
> On 2019/7/12 3:58 PM, Peter Zijlstra wrote:
> [snip]
> >>>
> >>> Then our task t1 should be accounted to B (as you do), but also to A and
> >>> R.
> >>
> >> I get the point but not quite sure about this...
> >>
> >> Not like pages, there is no hierarchical limitation on locality; also tasks
On 2019/7/12 3:58 PM, Peter Zijlstra wrote:
[snip]
>>>
>>> Then our task t1 should be accounted to B (as you do), but also to A and
>>> R.
>>
>> I get the point but not quite sure about this...
>>
>> Not like pages, there is no hierarchical limitation on locality; also tasks
>
> You can use
On Fri, Jul 12, 2019 at 11:43:17AM +0800, 王贇 wrote:
>
>
> On 2019/7/11 9:47 PM, Peter Zijlstra wrote:
> [snip]
> >> + rcu_read_lock();
> >> + memcg = mem_cgroup_from_task(p);
> >> + if (idx != -1)
> >> + this_cpu_inc(memcg->stat_numa->locality[idx]);
> >
> > I thought cgroups were
On 2019/7/11 9:47 PM, Peter Zijlstra wrote:
[snip]
>> +	rcu_read_lock();
>> +	memcg = mem_cgroup_from_task(p);
>> +	if (idx != -1)
>> +		this_cpu_inc(memcg->stat_numa->locality[idx]);
>
> I thought cgroups were supposed to be hierarchical. That is, if we have:
>
>
On 2019/7/11 9:43 PM, Peter Zijlstra wrote:
> On Wed, Jul 03, 2019 at 11:28:10AM +0800, 王贇 wrote:
>> +#ifdef CONFIG_NUMA_BALANCING
>> +
>> +enum memcg_numa_locality_interval {
>> +	PERCENT_0_29,
>> +	PERCENT_30_39,
>> +	PERCENT_40_49,
>> +	PERCENT_50_59,
>> +	PERCENT_60_69,
>> +
On Wed, Jul 03, 2019 at 11:28:10AM +0800, 王贇 wrote:
> @@ -3562,10 +3563,53 @@ static int memcg_numa_stat_show(struct seq_file *m, void *v)
> seq_putc(m, '\n');
> }
>
> +#ifdef CONFIG_NUMA_BALANCING
> + seq_puts(m, "locality");
> + for (nr = 0; nr < NR_NL_INTERVAL;
On Wed, Jul 03, 2019 at 11:28:10AM +0800, 王贇 wrote:
> +#ifdef CONFIG_NUMA_BALANCING
> +
> +enum memcg_numa_locality_interval {
> + PERCENT_0_29,
> + PERCENT_30_39,
> + PERCENT_40_49,
> + PERCENT_50_59,
> + PERCENT_60_69,
> + PERCENT_70_79,
> + PERCENT_80_89,
> +
This patch introduces a NUMA locality statistic, which tries to reflect
the NUMA balancing efficiency per memory cgroup.
By doing 'cat /sys/fs/cgroup/memory/CGROUP_PATH/memory.numa_stat', we
see a new output line headed with 'locality', in the format:
locality 0%~29% 30%~39% 40%~49% 50%~59% 60%~69%