On 03/21/2014 05:17 PM, John Stultz wrote:
Currently we don't shrink/scan the anonymous lrus when swap is off.
This is problematic for volatile range purging on swapless systems.
This patch naively changes the vmscan code to continue scanning
and shrinking the lrus even when there is no swap.
It obviously has performance issues.
Thoughts on how best to implement this would be appreciated.
Cc: Andrew Morton <[email protected]>
Cc: Android Kernel Team <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Robert Love <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Dmitry Adamushko <[email protected]>
Cc: Neil Brown <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Mike Hommey <[email protected]>
Cc: Taras Glek <[email protected]>
Cc: Jan Kara <[email protected]>
Cc: KOSAKI Motohiro <[email protected]>
Cc: Michel Lespinasse <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: [email protected] <[email protected]>
Signed-off-by: John Stultz <[email protected]>
---
mm/vmscan.c | 26 ++++----------------------
1 file changed, 4 insertions(+), 22 deletions(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 34f159a..07b0a8c 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -155,9 +155,8 @@ static unsigned long zone_reclaimable_pages(struct zone *zone)
nr = zone_page_state(zone, NR_ACTIVE_FILE) +
zone_page_state(zone, NR_INACTIVE_FILE);
- if (get_nr_swap_pages() > 0)
- nr += zone_page_state(zone, NR_ACTIVE_ANON) +
- zone_page_state(zone, NR_INACTIVE_ANON);
+ nr += zone_page_state(zone, NR_ACTIVE_ANON) +
+ zone_page_state(zone, NR_INACTIVE_ANON);
return nr;
Not all of the anonymous pages will be reclaimable.
Is there some counter that keeps track of how many
volatile range pages there are in each zone?
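Something along these lines is what I have in mind -- completely
untested, and NR_VOLATILE_PAGES is a made-up zone_stat_item that the
volatile range code would have to account, not anything that exists
today:

	/*
	 * Rough sketch only: NR_VOLATILE_PAGES would be a new per-zone
	 * vmstat item, incremented when an anonymous page becomes part
	 * of a volatile range and decremented when it is purged or
	 * unmarked again.
	 */
	static unsigned long zone_reclaimable_pages(struct zone *zone)
	{
		unsigned long nr;

		nr = zone_page_state(zone, NR_ACTIVE_FILE) +
		     zone_page_state(zone, NR_INACTIVE_FILE);

		if (get_nr_swap_pages() > 0)
			nr += zone_page_state(zone, NR_ACTIVE_ANON) +
			      zone_page_state(zone, NR_INACTIVE_ANON);
		else
			/* swapless: only volatile pages are reclaimable */
			nr += zone_page_state(zone, NR_VOLATILE_PAGES);

		return nr;
	}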
@@ -1764,13 +1763,6 @@ static int inactive_anon_is_low_global(struct zone *zone)
*/
static int inactive_anon_is_low(struct lruvec *lruvec)
{
- /*
- * If we don't have swap space, anonymous page deactivation
- * is pointless.
- */
- if (!total_swap_pages)
- return 0;
-
if (!mem_cgroup_disabled())
return mem_cgroup_inactive_anon_is_low(lruvec);
This part is correct, and needed.
@@ -1880,12 +1872,6 @@ static void get_scan_count(struct lruvec *lruvec, struct scan_control *sc,
if (!global_reclaim(sc))
force_scan = true;
- /* If we have no swap space, do not bother scanning anon pages. */
- if (!sc->may_swap || (get_nr_swap_pages() <= 0)) {
- scan_balance = SCAN_FILE;
- goto out;
- }
-
/*
This part is too.
@@ -2181,8 +2166,8 @@ static inline bool should_continue_reclaim(struct zone *zone,
*/
pages_for_compaction = (2UL << sc->order);
inactive_lru_pages = zone_page_state(zone, NR_INACTIVE_FILE);
- if (get_nr_swap_pages() > 0)
- inactive_lru_pages += zone_page_state(zone, NR_INACTIVE_ANON);
+ inactive_lru_pages += zone_page_state(zone, NR_INACTIVE_ANON);
+
if (sc->nr_reclaimed < pages_for_compaction &&
inactive_lru_pages > pages_for_compaction)
Not sure this is a good idea, since the pages may not actually
be reclaimable, and the inactive list will continue to be
refilled indefinitely...
If there were a counter of the number of volatile range pages
in a zone, this would be easier (a rough sketch of what I mean
follows after this hunk).
Of course, the overhead of keeping such a counter might be
too high for what volatile ranges are designed for...
return true;
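For what it's worth, the accounting side could be a single hook in
the volatile range marking/unmarking path, with
should_continue_reclaim() then only counting pages it can actually
get back. Again just a rough sketch, with NR_VOLATILE_PAGES and
vrange_account_page() being made-up names rather than existing code:

	/*
	 * Hypothetical hook in the volatile range code, called when a
	 * page becomes volatile or stops being volatile.
	 */
	static void vrange_account_page(struct page *page, bool volatile_now)
	{
		mod_zone_page_state(page_zone(page), NR_VOLATILE_PAGES,
				    volatile_now ? 1 : -1);
	}

	/* ... and in should_continue_reclaim(): */
	pages_for_compaction = (2UL << sc->order);
	inactive_lru_pages = zone_page_state(zone, NR_INACTIVE_FILE);
	if (get_nr_swap_pages() > 0)
		inactive_lru_pages += zone_page_state(zone, NR_INACTIVE_ANON);
	else
		/* swapless: only the volatile pages can still be reclaimed */
		inactive_lru_pages += zone_page_state(zone, NR_VOLATILE_PAGES);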
@@ -2726,9 +2711,6 @@ static void age_active_anon(struct zone *zone, struct scan_control *sc)
{
struct mem_cgroup *memcg;
- if (!total_swap_pages)
- return;
-
This bit is correct and needed.