On Mon, 26 Feb 2018, Aaron Lu wrote:

> Matthew Wilcox found that all callers of free_pcppages_bulk() currently
> update pcp->count immediately after calling it, so it is natural to do
> the update inside free_pcppages_bulk() itself.
> 
> No functionality or performance change is expected from this patch.
> 
> Suggested-by: Matthew Wilcox <wi...@infradead.org>
> Signed-off-by: Aaron Lu <aaron...@intel.com>
> ---
>  mm/page_alloc.c | 10 +++-------
>  1 file changed, 3 insertions(+), 7 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index cb416723538f..3154859cccd6 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1117,6 +1117,7 @@ static void free_pcppages_bulk(struct zone *zone, int count,
>       int batch_free = 0;
>       bool isolated_pageblocks;
>  
> +     pcp->count -= count;
>       spin_lock(&zone->lock);
>       isolated_pageblocks = has_isolate_pageblock(zone);
>  

Why modify pcp->count before the pages have actually been freed?

I doubt that it matters much, but /proc/zoneinfo, at least, reads these
counters under zone->lock.  I think the update should be done after the
lock is dropped.
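Something along these lines, i.e. move the update past the unlock (just a
sketch against the hunk above; the surrounding function body is elided and
the exact position of the existing unlock is assumed):

	static void free_pcppages_bulk(struct zone *zone, int count,
				       struct per_cpu_pages *pcp)
	{
		...
		spin_lock(&zone->lock);
		/* free the pages to the buddy allocator as before */
		...
		spin_unlock(&zone->lock);
		/* update pcp->count only once the pages are really freed */
		pcp->count -= count;
	}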

Otherwise, looks good.
