On Mon 01-05-17 14:34:21, David Rientjes wrote:
[...]
> @@ -2204,8 +2204,17 @@ static void get_scan_count(struct lruvec *lruvec, struct mem_cgroup *memcg,
>               }
>  
>               if (unlikely(pgdatfile + pgdatfree <= total_high_wmark)) {
> -                     scan_balance = SCAN_ANON;
> -                     goto out;
> +                     /*
> +                      * Force SCAN_ANON if there are enough inactive
> +                      * anonymous pages on the LRU in eligible zones.
> +                      * Otherwise, the small LRU gets thrashed.
> +                      */
> +                     if (!inactive_list_is_low(lruvec, false, sc, false) &&
> +                         lruvec_lru_size(lruvec, LRU_INACTIVE_ANON, sc->reclaim_idx)
> +                                     >> sc->priority) {
> +                             scan_balance = SCAN_ANON;
> +                             goto out;
> +                     }

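For reference, my reading of the added check: the >> sc->priority
shift is non-zero only when the eligible inactive anon list holds at
least 2^priority pages, so early reclaim passes (priority starts high
and drops as reclaim gets more aggressive) demand a larger list before
SCAN_ANON is forced. A standalone sketch of just that threshold, with
a made-up helper name, not the kernel code:

        /*
         * Non-zero iff inactive_anon >= 2^priority. sc->priority
         * decreases as reclaim pressure grows, so the bar for
         * forcing SCAN_ANON drops on later passes.
         */
        static bool anon_lru_large_enough(unsigned long inactive_anon,
                                          int priority)
        {
                return (inactive_anon >> priority) != 0;
        }
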
I have already asked these questions and they were ignored. So let me
ask again and hopefully not get ignored this time. Why do we need a
different criterion for anon pages than for file pages? I do agree
that blindly scanning anon pages when file pages are low is very
suboptimal, but this adds yet another heuristic without _any_ numbers
to back it up. Why can't we simply treat anon and file pages equally?
Something like the following:

        if (pgdatfile + pgdatanon + pgdatfree > 2 * total_high_wmark) {
                scan_balance = SCAN_FILE;
                if (pgdatfile < pgdatanon)
                        scan_balance = SCAN_ANON;
                goto out;
        }

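To spell out the symmetric treatment I mean, an untested sketch with
made-up helper names, not a drop-in patch:

        /*
         * When file + anon + free together still exceed twice the
         * high watermarks, force scanning of whichever LRU is
         * larger rather than special-casing anon; otherwise fall
         * through to the normal fraction-based balancing.
         */
        static bool pick_forced_balance(unsigned long pgdatfile,
                                        unsigned long pgdatanon,
                                        unsigned long pgdatfree,
                                        unsigned long total_high_wmark,
                                        int *scan_balance)
        {
                if (pgdatfile + pgdatanon + pgdatfree >
                                2 * total_high_wmark) {
                        *scan_balance = (pgdatfile < pgdatanon) ?
                                        SCAN_ANON : SCAN_FILE;
                        return true;
                }

                return false;
        }
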
Also, it would help to describe the workload which triggers this
behavior so that we can compare numbers before and after this patch.
-- 
Michal Hocko
SUSE Labs
