On Thu, Sep 03, 2020 at 09:10:59PM -0700, Andrew Morton wrote:
> On Thu, 3 Sep 2020 16:00:55 -0700 Roman Gushchin <g...@fb.com> wrote:
> 
> > In the memcg case count_shadow_nodes() sums the number of pages in lru
> > lists and the amount of slab memory (reclaimable and non-reclaimable)
> > as a baseline for the allowed number of shadow entries.
> > 
> > It seems to be a good analogy for the !memcg case, where
> > node_present_pages() is used. However, it's not quite true, as there
> > are two problems:
> > 
> > 1) Due to slab reparenting introduced by commit fb2f2b0adb98 ("mm:
> > memcg/slab: reparent memcg kmem_caches on cgroup removal") local
> > per-lruvec slab counters might be inaccurate on non-leaf levels.
> > It's the only place where local slab counters are used.
> > 
> > 2) Shadow nodes by themselves are backed by slabs. So there is a loop
> > dependency: the more shadow entries there are, the less pressure the
> > kernel applies to reclaim them.
> > 
> > Fortunately, there is a simple way to solve both problems: slab
> > counters shouldn't be taken into account by count_shadow_nodes().
> > 
> > ...
> > 
> > --- a/mm/workingset.c
> > +++ b/mm/workingset.c
> > @@ -495,10 +495,6 @@ static unsigned long count_shadow_nodes(struct shrinker *shrinker,
> >  		for (pages = 0, i = 0; i < NR_LRU_LISTS; i++)
> >  			pages += lruvec_page_state_local(lruvec,
> >  							 NR_LRU_BASE + i);
> > -		pages += lruvec_page_state_local(
> > -			lruvec, NR_SLAB_RECLAIMABLE_B) >> PAGE_SHIFT;
> > -		pages += lruvec_page_state_local(
> > -			lruvec, NR_SLAB_UNRECLAIMABLE_B) >> PAGE_SHIFT;
> >  	} else
> >  #endif
> >  		pages = node_present_pages(sc->nid);
> 
> Did this have any observable runtime effects?
Most likely not. I may have seen the second effect once, but it was
compounded by a bug in the inode reclaim path in the exact kernel
version I used (not an upstream one). The first problem is purely
theoretical; I'm just not comfortable using these counters, which are
known to be inaccurate after reparenting. That's why I didn't add
stable@.

Thanks!
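
P.S. For anyone following along, here is roughly what the memcg branch
of count_shadow_nodes() looks like with the hunk applied. This is a
sketch reconstructed from the quoted diff, not copied from the exact
tree; the surrounding context (the declarations and the
mem_cgroup_lruvec() lookup) is assumed from kernels of that era.

#ifdef CONFIG_MEMCG
	if (sc->memcg) {
		struct lruvec *lruvec;
		int i;

		/* assumed context: look up this memcg's lruvec on the node */
		lruvec = mem_cgroup_lruvec(sc->memcg, NODE_DATA(sc->nid));
		/*
		 * The baseline is now LRU pages only. With the slab
		 * counters gone, shadow-node slabs can no longer inflate
		 * their own reclaim baseline.
		 */
		for (pages = 0, i = 0; i < NR_LRU_LISTS; i++)
			pages += lruvec_page_state_local(lruvec,
							 NR_LRU_BASE + i);
	} else
#endif
		pages = node_present_pages(sc->nid);

The !memcg fallback is unchanged: it still uses node_present_pages(),
which is the analogy the slab counting was originally trying to match.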