On Tue, Feb 12, 2019 at 06:14:05PM +0300, Kirill Tkhai wrote:
> We know, which LRU is not active.

s/,//

> 
> Signed-off-by: Kirill Tkhai <ktk...@virtuozzo.com>
> ---
>  mm/vmscan.c |   10 ++++------
>  1 file changed, 4 insertions(+), 6 deletions(-)
> 
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 84542004a277..8d7d55e71511 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -2040,12 +2040,6 @@ static unsigned move_active_pages_to_lru(struct lruvec *lruvec,
>               }
>       }
>  
> -     if (!is_active_lru(lru)) {
> -             __count_vm_events(PGDEACTIVATE, nr_moved);
> -             count_memcg_events(lruvec_memcg(lruvec), PGDEACTIVATE,
> -                                nr_moved);
> -     }
> -
>       return nr_moved;
>  }
>  
> @@ -2137,6 +2131,10 @@ static void shrink_active_list(unsigned long nr_to_scan,
>  
>       nr_activate = move_active_pages_to_lru(lruvec, &l_active, &l_hold, lru);
>       nr_deactivate = move_active_pages_to_lru(lruvec, &l_inactive, &l_hold, lru - LRU_ACTIVE);
> +
> +     __count_vm_events(PGDEACTIVATE, nr_deactivate);
> +     __count_memcg_events(lruvec_memcg(lruvec), PGDEACTIVATE, nr_deactivate);

Nice, you're using the irq-unsafe one since irqs are already disabled.  I guess
this was missed in c3cc39118c361.  Do you want to insert a patch before this
one that converts all instances of this pattern in vmscan.c over?
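
For reference, here is roughly the conversion I have in mind (hand-written
sketch, so take the exact context lines with a grain of salt): the PGREFILL
accounting in shrink_active_list() already runs with pgdat->lru_lock held via
spin_lock_irq(), so the irq-safe wrapper could be downgraded there too:

	spin_lock_irq(&pgdat->lru_lock);
	...
	__count_vm_events(PGREFILL, nr_scanned);
-	count_memcg_events(lruvec_memcg(lruvec), PGREFILL, nr_scanned);
+	__count_memcg_events(lruvec_memcg(lruvec), PGREFILL, nr_scanned);
	...
	spin_unlock_irq(&pgdat->lru_lock);

Same idea for any other count_vm_events()/count_memcg_events() caller in
vmscan.c that already sits inside a spin_lock_irq() section.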

There's a similar oversight in lru_lazyfree_fn with count_memcg_page_event, but
fixing that would mean introducing a __count_memcg_page_event, which is probably
overkill.
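
(If you did want to go that far, the helper would presumably just mirror the
existing count_memcg_page_event() wrapper. Untested sketch; the
double-underscore name is made up here since no such helper exists today:

static inline void __count_memcg_page_event(struct page *page,
					    enum vm_event_item idx)
{
	if (page->mem_cgroup)
		__count_memcg_events(page->mem_cgroup, idx, 1);
}

but agreed, adding another static inline to memcontrol.h for one callsite is
probably not worth it.)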
