On Thu, Jan 31, 2019 at 06:37:02PM +0300, Kirill Tkhai wrote:
> On the path shrink_inactive_list() ---> shrink_page_list()
> we allocate stack variables for the statistics twice.
> This is completely useless, and it just consumes more
> stack than we really need.
> 
> The patch kills the duplicate stack variables in shrink_page_list(),
> and this reduces stack usage and object file size significantly:
> 
> Stack usage:
> Before: vmscan.c:1122:22:shrink_page_list     648     static
> After:  vmscan.c:1122:22:shrink_page_list     616     static
> 
> Size of vmscan.o:
>          text    data     bss     dec     hex filename
> Before: 56866    4720     128   61714    f112 mm/vmscan.o
> After:  56770    4720     128   61618    f0b2 mm/vmscan.o
> 
> Signed-off-by: Kirill Tkhai <[email protected]>
> ---
>  mm/vmscan.c |   44 ++++++++++++++------------------------------
>  1 file changed, 14 insertions(+), 30 deletions(-)


> @@ -1534,6 +1517,7 @@ unsigned long reclaim_clean_pages_from_list(struct zone *zone,
>               .priority = DEF_PRIORITY,
>               .may_unmap = 1,
>       };
> +     struct reclaim_stat dummy_stat;
>       unsigned long ret;
>       struct page *page, *next;
>       LIST_HEAD(clean_pages);
> @@ -1547,7 +1531,7 @@ unsigned long reclaim_clean_pages_from_list(struct zone *zone,
>       }
>  
>       ret = shrink_page_list(&clean_pages, zone->zone_pgdat, &sc,
> -                     TTU_IGNORE_ACCESS, NULL, true);
> +                     TTU_IGNORE_ACCESS, &dummy_stat, true);
>       list_splice(&clean_pages, page_list);
>       mod_node_page_state(zone->zone_pgdat, NR_ISOLATED_FILE, -ret);
>       return ret;

Stack usage coming from reclaim_clean_pages_from_list() stays the same: a
dummy variable is added back after many were removed in 3c710c1ad11b ("mm,
vmscan: extract shrink_page_list...").
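To make the trade-off concrete, here is a minimal userspace sketch (not kernel code; the struct fields and functions are invented stand-ins) of the pattern the patch applies: the callee writes statistics directly into a caller-supplied struct instead of keeping its own local counters, and a caller that does not care passes a dummy struct rather than NULL, so the callee needs no NULL checks.

```c
#include <assert.h>

/* Hypothetical stand-in for the kernel's struct reclaim_stat. */
struct reclaim_stat {
	unsigned long nr_dirty;
	unsigned long nr_writeback;
};

/*
 * Callee fills the caller-supplied *stat directly, so it carries no
 * duplicate local counters on its own stack frame.  Made-up accounting
 * for illustration only.
 */
static unsigned long shrink_list(unsigned long nr_pages,
				 struct reclaim_stat *stat)
{
	stat->nr_dirty = nr_pages / 4;
	stat->nr_writeback = nr_pages / 8;
	return nr_pages - stat->nr_dirty;	/* pages "reclaimed" */
}

/*
 * Caller that ignores the statistics passes a dummy struct instead of
 * NULL, trading one caller-side stack slot for the removal of every
 * "if (stat)" check inside the callee.
 */
static unsigned long reclaim_clean(unsigned long nr_pages)
{
	struct reclaim_stat dummy_stat;

	return shrink_list(nr_pages, &dummy_stat);
}
```

The cost, as noted above, is one extra struct on the frame of callers like reclaim_clean_pages_from_list() that previously passed NULL; the win is a smaller shrink_page_list() frame and less copying.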

But overall seems like a win to me.  You can add

Reviewed-by: Daniel Jordan <[email protected]>
