On Tue, Mar 16, 2021 at 10:08:51AM +0800, Huang, Ying wrote:
> Yu Zhao <yuz...@google.com> writes:
> [snip]
>
> > +/* Main function used by foreground, background and user-triggered aging. */
> > +static bool walk_mm_list(struct lruvec *lruvec, unsigned long next_seq,
> > +			 struct scan_control *sc, int swappiness)
> > +{
> > +	bool last;
> > +	struct mm_struct *mm = NULL;
> > +	int nid = lruvec_pgdat(lruvec)->node_id;
> > +	struct mem_cgroup *memcg = lruvec_memcg(lruvec);
> > +	struct lru_gen_mm_list *mm_list = get_mm_list(memcg);
> > +
> > +	VM_BUG_ON(next_seq > READ_ONCE(lruvec->evictable.max_seq));
> > +
> > +	/*
> > +	 * For each walk of the mm list of a memcg, we decrement the priority
> > +	 * of its lruvec. For each walk of memcgs in kswapd, we increment the
> > +	 * priorities of all lruvecs.
> > +	 *
> > +	 * So if this lruvec has a higher priority (smaller value), it means
> > +	 * other concurrent reclaimers (global or memcg reclaim) have walked
> > +	 * its mm list. Skip it for this priority to balance the pressure on
> > +	 * all memcgs.
> > +	 */
> > +#ifdef CONFIG_MEMCG
> > +	if (!mem_cgroup_disabled() && !cgroup_reclaim(sc) &&
> > +	    sc->priority > atomic_read(&lruvec->evictable.priority))
> > +		return false;
> > +#endif
> > +
> > +	do {
> > +		last = get_next_mm(lruvec, next_seq, swappiness, &mm);
> > +		if (mm)
> > +			walk_mm(lruvec, mm, swappiness);
> > +
> > +		cond_resched();
> > +	} while (mm);
>
> It appears that we need to scan the whole address space of multiple
> processes in this loop?
>
> If so, I have some concerns about the duration of the function. Do you
> have some numbers on the distribution of the duration of the function?
> And maybe the number of mm_structs and the number of pages scanned.
>
> In comparison, in the traditional LRU algorithm, for each round, only a
> small subset of the whole physical memory is scanned.
Reasonable concerns, and insightful too. We are sensitive to direct
reclaim latency, and we tuned another path carefully so that direct
reclaims virtually don't hit this path :)

Some numbers from the cover letter first:

  In addition, direct reclaim latency is reduced by 22% at the 99th
  percentile and the number of refaults is reduced by 7%. These metrics
  are important to phones and laptops as they are correlated to user
  experience.

And "another path" is the background aging in kswapd:

  age_active_anon()
    age_lru_gens()
      try_walk_mm_list()
        /* try to spread pages out across spread+1 generations */
        if (old_and_young[0] >= old_and_young[1] * spread &&
            min_nr_gens(max_seq, min_seq, swappiness) > max(spread, MIN_NR_GENS))
          return;

        walk_mm_list(lruvec, max_seq, sc, swappiness);

By default, spread = 2, which makes kswapd slightly more aggressive than
direct reclaim for our use cases. This can be disabled entirely by
setting spread to 0, for workloads that don't care about direct reclaim
latency, or set to larger values, for workloads that are more sensitive
than ours.

It's worth noting that walk_mm_list() is multithreaded -- reclaiming
threads can work on different mm_structs on the same list concurrently.

We do occasionally see this function in direct reclaims, on heavily
overcommitted systems, i.e., when kswapd CPU usage is 100%. Under the
same condition, we saw the current page reclaim livelock and trigger
hardware watchdog timeouts (our hardware watchdog is set to 2 hours)
many times.
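To make the spread heuristic concrete, below is a minimal userspace
sketch of the skip condition. The struct lruvec_snapshot type, the
should_walk_mm_list() helper and the way generations are counted are
simplifications invented for illustration; only the inequality itself
mirrors the pseudocode above.

  /* Hypothetical stand-ins for the kernel's state; not the patch's code. */
  #include <stdbool.h>
  #include <stdio.h>

  #define MIN_NR_GENS 2 /* assumed floor on the number of generations */

  struct lruvec_snapshot {
          unsigned long max_seq;          /* seq number of the youngest generation */
          unsigned long min_seq;          /* seq number of the oldest generation */
          unsigned long old_and_young[2]; /* [0]: pages in older gens, [1]: in the youngest */
  };

  /* Decide whether kswapd should walk the mm list to create a new generation. */
  static bool should_walk_mm_list(const struct lruvec_snapshot *s,
                                  unsigned long spread)
  {
          unsigned long nr_gens = s->max_seq - s->min_seq + 1;
          unsigned long floor = spread > MIN_NR_GENS ? spread : MIN_NR_GENS;

          /*
           * Pages are already spread out: the older generations hold at
           * least spread times as many pages as the youngest one, and
           * there are more than max(spread, MIN_NR_GENS) generations.
           * Skip the walk.
           */
          if (s->old_and_young[0] >= s->old_and_young[1] * spread &&
              nr_gens > floor)
                  return false;

          return true; /* otherwise age, i.e., call walk_mm_list() */
  }

  int main(void)
  {
          struct lruvec_snapshot s = {
                  .max_seq = 10, .min_seq = 8,     /* 3 generations */
                  .old_and_young = { 3000, 1000 }, /* old pages dominate */
          };

          /* With the default spread = 2, the walk is skipped here. */
          printf("walk mm list: %s\n",
                 should_walk_mm_list(&s, 2) ? "yes" : "no");
          return 0;
  }

Whether this matches the exact accounting in the patch depends on how
min_nr_gens() counts generations; the point is only that kswapd keeps
creating generations until the older ones hold enough pages, so that
direct reclaim rarely has to.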