On Fri, Jul 01, 2016 at 02:41:01PM +0800, Ganesh Mahendran wrote:
> the obj index value should be updated after return from
> find_alloced_obj()
 
        to avoid CPU burning caused by unnecessary object scanning.

The description should state what the goal of the change is.

> 
> Signed-off-by: Ganesh Mahendran <opensource.gan...@gmail.com>
> ---
>  mm/zsmalloc.c | 13 ++++++++-----
>  1 file changed, 8 insertions(+), 5 deletions(-)
> 
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> index 405baa5..5c96ed1 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -1744,15 +1744,16 @@ static void zs_object_copy(struct size_class *class, unsigned long dst,
>   * return handle.
>   */
>  static unsigned long find_alloced_obj(struct size_class *class,
> -                                     struct page *page, int index)
> +                                     struct page *page, int *index)
>  {
>       unsigned long head;
>       int offset = 0;
> +     int objidx = *index;

Nit:

We have used obj_idx elsewhere, so I prefer that name for consistency
with the rest of the code.

Suggestion:
Would you mind renaming index in zs_compact_control and
migrate_zspage to obj_idx while you are at it?

Strictly speaking, such a cleanup should be a separate patch, but I
don't mind it being mixed in here (of course, it would be better if
you send it as another cleanup patch). If you'd rather not, just leave
it as is and I will do it sometime.
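
IOW, something along these lines (untested sketch, just to show the
rename; the other fields stay as they are in the current code):

	struct zs_compact_control {
		/* source (sub)page for migration */
		struct page *s_page;
		/* destination page for migration */
		struct page *d_page;
		/* starting object index within @s_page */
		int obj_idx;
	};

and the matching spots in migrate_zspage():

	unsigned int obj_idx = cc->obj_idx;
	...
	handle = find_alloced_obj(class, s_page, &obj_idx);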

>       unsigned long handle = 0;
>       void *addr = kmap_atomic(page);
>  
>       offset = get_first_obj_offset(page);
> -     offset += class->size * index;
> +     offset += class->size * objidx;
>  
>       while (offset < PAGE_SIZE) {
>               head = obj_to_head(page, addr + offset);
> @@ -1764,9 +1765,11 @@ static unsigned long find_alloced_obj(struct size_class *class,
>               }
>  
>               offset += class->size;
> -             index++;
> +             objidx++;
>       }
>  
> +     *index = objidx;

We can do this outside of the kmap section, right before returning the handle.
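
i.e. something like this (untested):

	while (offset < PAGE_SIZE) {
		...
		offset += class->size;
		objidx++;
	}

	kunmap_atomic(addr);

	*index = objidx;
	return handle;
}

There is no need to hold the atomic mapping across a plain store to
*index.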

Thanks!

> +
>       kunmap_atomic(addr);
>       return handle;
>  }
> @@ -1794,11 +1797,11 @@ static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
>       unsigned long handle;
>       struct page *s_page = cc->s_page;
>       struct page *d_page = cc->d_page;
> -     unsigned long index = cc->index;
> +     unsigned int index = cc->index;
>       int ret = 0;
>  
>       while (1) {
> -             handle = find_alloced_obj(class, s_page, index);
> +             handle = find_alloced_obj(class, s_page, &index);
>               if (!handle) {
>                       s_page = get_next_page(s_page);
>                       if (!s_page)
> -- 
> 1.9.1
> 
