On Wed, 19 Apr 2017, Minchan Kim wrote:

> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 24efcc20af91..5d2f3fa41e92 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -2174,8 +2174,17 @@ static void get_scan_count(struct lruvec *lruvec, struct mem_cgroup *memcg,
>               }
>  
>               if (unlikely(pgdatfile + pgdatfree <= total_high_wmark)) {
> -                     scan_balance = SCAN_ANON;
> -                     goto out;
> +                     /*
> +                      * Force SCAN_ANON if the inactive anonymous LRU lists
> +                      * of eligible zones have enough pages. Otherwise,
> +                      * thrashing can happen on a small anonymous LRU list.
> +                      */
> +                     if (!inactive_list_is_low(lruvec, false, NULL, sc, false) &&
> +                          lruvec_lru_size(lruvec, LRU_INACTIVE_ANON,
> +                                          sc->reclaim_idx) >> sc->priority) {
> +                             scan_balance = SCAN_ANON;
> +                             goto out;
> +                     }
>               }
>       }
>  

Hi Minchan,

This looks good and it correctly biases against SCAN_ANON for my workload
that was thrashing the anon LRUs.  Feel free to use parts of my changelog
if you'd like.

Tested-by: David Rientjes <[email protected]>
