[+CC Michal Hocko]

On 06/09/2016 08:04 PM, Mel Gorman wrote:
> This moves the LRU lists from the zone to the node and all related data
> such as counters, tracing, congestion tracking and writeback tracking.
> This is mostly a mechanical patch but note that it introduces a number
> of anomalies. For example, the scans are per-zone but using per-node
> counters. We also mark a node as congested when a zone is congested. This
> causes weird problems that are fixed later but is easier to review.
> 
> Signed-off-by: Mel Gorman <mgor...@techsingularity.net>
> Acked-by: Johannes Weiner <han...@cmpxchg.org>


> @@ -535,17 +525,21 @@ struct zone {
>  
>  enum zone_flags {
>       ZONE_RECLAIM_LOCKED,            /* prevents concurrent reclaim */
> -     ZONE_CONGESTED,                 /* zone has many dirty pages backed by
> +     ZONE_OOM_LOCKED,                /* zone is in OOM killer zonelist */

This one has been zapped recently; it looks like rebasing resurrected it.

> @@ -1455,13 +1455,22 @@ bool compaction_zonelist_suitable(struct alloc_context *ac, int order,
>               enum compact_result compact_result;
>  
>               /*
> +              * This over-estimates the number of pages available for
> +              * reclaim/compaction but walking the LRU would take too
> +              * long. The consequences are that compaction may retry
> +              * longer than it should for a zone-constrained allocation
> +              * request.
> +              */
> +             available = pgdat_reclaimable_pages(zone->zone_pgdat);

I'm worried that "longer than it should" might mean "potentially forever",
as the limit on retries in should_compact_retry() doesn't apply when this
function returns true. Unless some later patch in the series changes that.

I'm starting to wonder if it's a good idea to give up per-zone LRU
accounting, because we still have per-zone watermarks that we are trying
to satisfy. How will we even recognize a situation where a small zone is
so depleted of LRU pages that it can't even reach its watermarks,
causing a massive whole-node reclaim? Couldn't we have a combination of
a per-node LRU with per-zone accounting?
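
To make the question concrete, I mean something like the sketch below:
pages sit on a single node-wide LRU, but we additionally bump a per-zone
counter so the watermark checks can still see how much of a small zone is
actually reclaimable. NR_ZONE_LRU_BASE and the helper name are invented
for illustration, nothing from this series:

	/* illustrative only; NR_ZONE_LRU_BASE and the helper are made up */
	static void lru_size_update(struct page *page, enum lru_list lru,
				    int nr_pages)
	{
		pg_data_t *pgdat = NODE_DATA(page_to_nid(page));

		/* node-wide LRU counter, as this patch already does */
		mod_node_page_state(pgdat, NR_LRU_BASE + lru, nr_pages);

		/* additionally keep a per-zone view for watermark decisions */
		mod_zone_page_state(page_zone(page), NR_ZONE_LRU_BASE + lru,
				    nr_pages);
	}

That obviously costs extra counter updates in the LRU hot paths, so maybe
it's not worth it, but it would keep the per-zone watermark checks honest.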

> +
> +             /*
>                * Do not consider all the reclaimable memory because we do not
>                * want to trash just for a single high order allocation which
>                * is even not guaranteed to appear even if __compaction_suitable
>                * is happy about the watermark check.
>                */
> -             available = zone_reclaimable_pages(zone) / order;

This removed the scaling by order. Accidentally, I guess, as the comment
above it is still there.
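
If the per-order scaling is meant to stay, the minimal fix would be to
reapply it to the node-wide figure at the point where the removed line
sat, e.g.

		available /= order;

right below the "Do not consider all the reclaimable memory" comment.
Or, if dropping the scaling was intentional, the comment should be
updated to match.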

>               available += zone_page_state_snapshot(zone, NR_FREE_PAGES);
> +             available = min(zone->managed_pages, available);
>               compact_result = __compaction_suitable(zone, order, alloc_flags,
>                               ac_classzone_idx(ac), available);
>               if (compact_result != COMPACT_SKIPPED &&

[...]

> @@ -1826,7 +1827,7 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
>       }
>  
>       page_lru = page_is_file_cache(page);
> -     mod_zone_page_state(page_zone(page), NR_ISOLATED_ANON + page_lru,
> +     mod_node_page_state(page_zone(page)->zone_pgdat, NR_ISOLATED_ANON + page_lru,

This again; I won't point out further occurrences. But I think a
page_node() (or page_pgdat()?) helper is called for?
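
Something along these lines, say in linux/mm.h (name and placement are
just a suggestion):

	static inline struct pglist_data *page_pgdat(struct page *page)
	{
		return NODE_DATA(page_to_nid(page));
	}

Then the call sites above could say mod_node_page_state(page_pgdat(page), ...)
instead of spelling out page_zone(page)->zone_pgdat everywhere.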

> @@ -3486,10 +3486,19 @@ should_reclaim_retry(gfp_t gfp_mask, unsigned order,
>               unsigned long available;
>               unsigned long reclaimable;
>  
> -             available = reclaimable = zone_reclaimable_pages(zone);
> -             available -= DIV_ROUND_UP(no_progress_loops * available,
> +             /*
> +              * This over-estimates the number of pages available for
> +              * reclaim but walking the LRU would take too long. The
> +              * consequences are that this may continue trying to
> +              * reclaim for zone-constrained allocations even if those
> +              * zones are already depleted.
> +              */
> +             reclaimable = pgdat_reclaimable_pages(zone->zone_pgdat);
> +             reclaimable = min(zone->managed_pages, reclaimable);
> +             available = reclaimable - DIV_ROUND_UP(no_progress_loops * reclaimable,
>                                         MAX_RECLAIM_RETRIES);
>               available += zone_page_state_snapshot(zone, NR_FREE_PAGES);
> +             available = min(zone->managed_pages, available);
>  
>               /*
>                * Would the allocation succeed if we reclaimed the whole

This adds to my worries about per-node LRU accounting :/

