Hi Jianyu,

On Fri, Apr 11, 2014 at 01:11:08AM +0800, Jianyu Zhan wrote:
> Currently, mem_cgroup_read_stat() is used for the user interface. The
> user accounts memory usage by memory cgroup and _always_ requires an
> exact value, because he is accounting memory. So we don't use a
> quick-and-fuzzy read-and-do-periodic-synchronization scheme; instead,
> we iterate over all cpus for one read.
>
> mem_cgroup_usage() and mem_cgroup_recursive_stat() both finally
> call into mem_cgroup_read_stat().
>
> However, these *stat snapshot* operations are implemented in a quite
> coarse way: they take M*N iterations for each stat item (M=nr_memcgs,
> N=nr_possible_cpus). There are two deficiencies:
>
> 1. for every stat item, we have to iterate over all percpu values,
>    which is not so cache friendly.
> 2. for every stat item, we call mem_cgroup_read_stat() once, which
>    increases the probability of contending on pcp_counter_lock.
>
> So, this patch improves this a bit. Concretely, for all interested
> stat items, mark them in a bitmap, and then make
> mem_cgroup_read_stat() read them all in one go.
>
> This is more efficient, and to some degree makes it more like a
> *stat snapshot*.
>
> Signed-off-by: Jianyu Zhan <nasa4...@gmail.com>
> ---
>  mm/memcontrol.c | 91 +++++++++++++++++++++++++++++++++++++++------------------
>  1 file changed, 62 insertions(+), 29 deletions(-)
This is when the user reads statistics or when OOM happens, neither of which I would consider fast paths. I don't think it's worth the extra code, which looks more cumbersome than what we have.