On Thu 11-01-18 15:21:33, Andrey Ryabinin wrote:
> 
> 
> On 01/11/2018 01:42 PM, Michal Hocko wrote:
> > On Wed 10-01-18 15:43:17, Andrey Ryabinin wrote:
> > [...]
> >> @@ -2506,15 +2480,13 @@ static int mem_cgroup_resize_limit(struct mem_cgroup *memcg,
> >>            if (!ret)
> >>                    break;
> >>  
> >> -          try_to_free_mem_cgroup_pages(memcg, 1, GFP_KERNEL, !memsw);
> >> -
> >> -          curusage = page_counter_read(counter);
> >> -          /* Usage is reduced ? */
> >> -          if (curusage >= oldusage)
> >> -                  retry_count--;
> >> -          else
> >> -                  oldusage = curusage;
> >> -  } while (retry_count);
> >> +          usage = page_counter_read(counter);
> >> +          if (!try_to_free_mem_cgroup_pages(memcg, usage - limit,
> >> +                                          GFP_KERNEL, !memsw)) {
> > 
> > If the usage drops below the limit in the meantime then you get an underflow
> > and reclaim the whole memcg. I do not think this is a good idea. This
> > can also lead to over-reclaim. Why don't you simply stick with the
> > original SWAP_CLUSTER_MAX (aka 1 for try_to_free_mem_cgroup_pages)?
> > 
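[ Editor's illustration, not part of the original mails: a minimal userspace
  sketch of the underflow described above.  page_counter_read() returns an
  unsigned long, so once the usage has dropped below the new limit the
  expression "usage - limit" wraps around to a huge reclaim target.  The
  harness below only mimics that arithmetic; it is not kernel code. ]

#include <stdio.h>

int main(void)
{
	unsigned long limit = 1000;	/* new limit, in pages */
	unsigned long usage = 900;	/* usage already fell below it */

	/* same expression as in the patch hunk above */
	unsigned long nr_to_reclaim = usage - limit;

	/* prints 18446744073709551516 on a 64-bit machine, i.e. the
	 * request is effectively "reclaim everything" */
	printf("nr_to_reclaim = %lu\n", nr_to_reclaim);
	return 0;
}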
> 
> Because, if the new limit is gigabytes below the current usage, retrying to
> set the new limit after reclaiming only 32 pages seems unreasonable.
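[ Editor's note, for scale: assuming 4 KiB pages and SWAP_CLUSTER_MAX == 32,
  a 1 GiB gap between the current usage and the new limit is
  1 GiB / 4 KiB = 262144 pages, i.e. on the order of 262144 / 32 = 8192
  reclaim passes (and as many retries of the limit write) before the new
  limit can be set. ]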

Who would do insanity like that?

> @@ -2487,8 +2487,8 @@ static int mem_cgroup_resize_limit(struct mem_cgroup *memcg,
>               if (!ret)
>                       break;
>  
> -             usage = page_counter_read(counter);
> -             if (!try_to_free_mem_cgroup_pages(memcg, usage - limit,
> +             nr_pages = max_t(long, 1, page_counter_read(counter) - limit);
> +             if (!try_to_free_mem_cgroup_pages(memcg, nr_pages,
>                                               GFP_KERNEL, !memsw)) {
>                       ret = -EBUSY;
>                       break;

How does this address the over-reclaim concern?
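
[ Editor's illustration of what the max_t() clamp in the revised hunk does
  and does not change: the wrapped unsigned difference becomes negative once
  cast to long, so the result is clamped to 1 and the underflow is gone; but
  when the usage really is far above the new limit, the whole gap is still
  handed to a single reclaim call, which is the over-reclaim question above.
  The snippet uses a simplified userspace stand-in for max_t(), not the
  kernel macro, and assumes a typical 64-bit system. ]

#include <stdio.h>

/* simplified userspace stand-in for the kernel's max_t() */
#define max_t(type, a, b) ((type)(a) > (type)(b) ? (type)(a) : (type)(b))

int main(void)
{
	unsigned long limit = 1000;
	unsigned long usage;
	long nr_pages;

	/* usage already fell below the limit: the unsigned subtraction
	 * still wraps, but cast to long it is negative, so the clamp
	 * yields 1 instead of a huge reclaim target */
	usage = 900;
	nr_pages = max_t(long, 1, usage - limit);
	printf("usage below limit: nr_pages = %ld\n", nr_pages);	/* 1 */

	/* usage is 1 GiB (at 4 KiB pages) above the limit: the full gap
	 * is still requested from one reclaim call */
	usage = limit + (1UL << 18);
	nr_pages = max_t(long, 1, usage - limit);
	printf("usage above limit: nr_pages = %ld\n", nr_pages);	/* 262144 */

	return 0;
}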
-- 
Michal Hocko
SUSE Labs
