Hello Tim,

On Tue, Feb 09, 2021 at 12:29:47PM -0800, Tim Chen wrote:
> @@ -6849,7 +6850,9 @@ static void uncharge_page(struct page *page, struct uncharge_gather *ug)
>        * exclusive access to the page.
>        */
>  
> -     if (ug->memcg != page_memcg(page)) {
> +     if (ug->memcg != page_memcg(page) ||
> +         /* uncharge batch update soft limit tree on a node basis */
> +         (ug->dummy_page && ug->nid != page_to_nid(page))) {

The fix makes sense to me.

However, unconditionally breaking up the batch by node can
unnecessarily regress workloads in cgroups that do not have a soft
limit configured, and on cgroup2, which doesn't have soft limits at
all. Consider an interleaved NUMA allocation policy, for example,
where consecutive pages in a batch routinely come from different
nodes.

Can you please further gate on memcg->soft_limit != PAGE_COUNTER_MAX,
or at least on !cgroup_subsys_on_dfl(memory_cgrp_subsys)?
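
Something along these lines is what I have in mind (untested sketch,
just to illustrate the shape of the check; it only looks at the
memcg's own soft_limit and ignores soft limits set on ancestors):

	if (ug->memcg != page_memcg(page) ||
	    /*
	     * The soft limit tree is maintained per node, so the
	     * uncharge batch must not span nodes - but only bother
	     * when the memcg can actually be on that tree, i.e. on
	     * cgroup1 with a soft limit configured.
	     */
	    (ug->dummy_page && ug->nid != page_to_nid(page) &&
	     !cgroup_subsys_on_dfl(memory_cgrp_subsys) &&
	     ug->memcg->soft_limit != PAGE_COUNTER_MAX)) {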

Thanks
