Multi-gen LRU reclaim in evict_folios(), like shrink_inactive_list(),
isolates the folios to reclaim and invokes shrink_folio_list() on them.

But when the shrink completes, the reclaim stats are not gathered into
sc, so information such as nr_dirty/nr_congested is unavailable during
reclaim. That means we cannot throttle writeback, account dirty folios,
or mark the lruvec as LRUVEC_CONGESTED, nor can a bpf trace of the
shrink read the correct stats from sc.
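
For context (not introduced by this patch), the value of these counters
is that shrink_node() inspects sc->nr after the lruvec has been shrunk.
A simplified, illustrative sketch of those checks (based on mm/vmscan.c,
with the kswapd/memcg gating omitted, not verbatim):

	/* Illustrative only: how the caller consumes the sc->nr counters. */
	if (sc->nr.writeback && sc->nr.writeback == sc->nr.taken)
		set_bit(PGDAT_WRITEBACK, &pgdat->flags);

	/* Allow kswapd to start writing back dirty file folios. */
	if (sc->nr.unqueued_dirty == sc->nr.file_taken)
		set_bit(PGDAT_DIRTY, &pgdat->flags);

	/* Folios marked for immediate reclaim keep cycling: stall reclaim. */
	if (sc->nr.immediate)
		reclaim_throttle(pgdat, VMSCAN_THROTTLE_WRITEBACK);

	/* All dirty folios were also marked for writeback: flag congestion. */
	if (sc->nr.dirty && sc->nr.dirty == sc->nr.congested)
		set_bit(LRUVEC_CONGESTED, &target_lruvec->flags);

Before this patch, the multi-gen LRU path leaves these counters at zero.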

Fix this by copying the corresponding stat-accumulation code from
shrink_inactive_list() to the end of the shrink.

Signed-off-by: Huan Yang <l...@vivo.com>
---
 mm/vmscan.c | 35 +++++++++++++++++++++++++++++++++++
 1 file changed, 35 insertions(+)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 21099b9f21e0..88d1d586aea5 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -4593,6 +4593,41 @@ static int evict_folios(struct lruvec *lruvec, struct scan_control *sc, int swap
         */
        nr_taken = sc->nr_scanned - nr_taken;
 
+       /*
+        * If dirty folios are scanned that are not queued for IO, it
+        * implies that flushers are not doing their job. This can
+        * happen when memory pressure pushes dirty folios to the end of
+        * the LRU before the dirty limits are breached and the dirty
+        * data has expired. It can also happen when the proportion of
+        * dirty folios grows not through writes but through memory
+        * pressure reclaiming all the clean cache. And in some cases,
+        * the flushers simply cannot keep up with the allocation
+        * rate. Nudge the flusher threads in case they are asleep.
+        */
+       if (unlikely(stat.nr_unqueued_dirty == nr_taken)) {
+               wakeup_flusher_threads(WB_REASON_VMSCAN);
+               /*
+                * For cgroupv1 dirty throttling is achieved by waking up
+                * the kernel flusher here and later waiting on folios
+                * which are in writeback to finish (see shrink_folio_list()).
+                *
+                * Flusher may not be able to issue writeback quickly
+                * enough for cgroupv1 writeback throttling to work
+                * on a large system.
+                */
+               if (!writeback_throttling_sane(sc))
+                       reclaim_throttle(pgdat, VMSCAN_THROTTLE_WRITEBACK);
+       }
+
+       sc->nr.dirty += stat.nr_dirty;
+       sc->nr.congested += stat.nr_congested;
+       sc->nr.unqueued_dirty += stat.nr_unqueued_dirty;
+       sc->nr.writeback += stat.nr_writeback;
+       sc->nr.immediate += stat.nr_immediate;
+       sc->nr.taken += nr_taken;
+       if (type)
+               sc->nr.file_taken += nr_taken;
+
        sc->nr_reclaimed += total_reclaimed;
        trace_mm_vmscan_lru_shrink_inactive(pgdat->node_id, nr_taken,
                                             total_reclaimed, &stat,
-- 
2.34.1

