On Tue 26-03-19 17:02:25, Baoquan He wrote:
> Reorder the allocation of usemap and memmap, since the usemap
> allocation is much simpler and cheaper. Otherwise the hard work of
> preparing memmap is done, only to be rolled back because the usemap
> allocation failed.

Is this really worth it? I can see that the !VMEMMAP case does a
memmap-sized allocation, which at 2MB is a costly allocation, but we do
not use __GFP_RETRY_MAYFAIL so the allocator backs off early.

> Also check earlier whether the section is already present, so we
> don't bother allocating usemap and memmap if it is.

Moving the check up makes some sense.
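
The reorder itself is a common pattern: attempt the cheap allocation
first so that a failure never forces you to throw away the expensive
one. A minimal userspace sketch of that idea (the `add_section` helper
and the two size constants are hypothetical stand-ins, not the kernel
code):

```c
#include <stdlib.h>

#define USEMAP_SIZE 32          /* small bookkeeping allocation */
#define MEMMAP_SIZE (2 << 20)   /* ~2MB, the costly allocation */

/* Try the cheap allocation first; a failure there needs no rollback,
 * and a failure of the big allocation only has the cheap one to undo. */
static int add_section(void **usemap_out, void **memmap_out)
{
	void *usemap = malloc(USEMAP_SIZE);
	if (!usemap)
		return -1;              /* nothing to roll back */

	void *memmap = malloc(MEMMAP_SIZE);
	if (!memmap) {
		free(usemap);           /* only the small buffer to free */
		return -1;
	}

	*usemap_out = usemap;
	*memmap_out = memmap;
	return 0;
}
```

With the original ordering, the memmap allocation would have to be
freed on the usemap failure path, which is exactly the rollback the
patch removes.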

> Signed-off-by: Baoquan He <b...@redhat.com>

The patch is not incorrect, but I am wondering whether it is really
worth it for the current code base. Is it fixing anything real, or is
it mere code shuffling to please the eye?

> ---
> v1->v2:
>   Do section existence checking earlier to further optimize code.
> 
>  mm/sparse.c | 29 +++++++++++------------------
>  1 file changed, 11 insertions(+), 18 deletions(-)
> 
> diff --git a/mm/sparse.c b/mm/sparse.c
> index b2111f996aa6..f4f34d69131e 100644
> --- a/mm/sparse.c
> +++ b/mm/sparse.c
> @@ -714,20 +714,18 @@ int __meminit sparse_add_one_section(int nid, unsigned long start_pfn,
>       ret = sparse_index_init(section_nr, nid);
>       if (ret < 0 && ret != -EEXIST)
>               return ret;
> -     ret = 0;
> -     memmap = kmalloc_section_memmap(section_nr, nid, altmap);
> -     if (!memmap)
> -             return -ENOMEM;
> -     usemap = __kmalloc_section_usemap();
> -     if (!usemap) {
> -             __kfree_section_memmap(memmap, altmap);
> -             return -ENOMEM;
> -     }
>  
>       ms = __pfn_to_section(start_pfn);
> -     if (ms->section_mem_map & SECTION_MARKED_PRESENT) {
> -             ret = -EEXIST;
> -             goto out;
> +     if (ms->section_mem_map & SECTION_MARKED_PRESENT)
> +             return -EEXIST;
> +
> +     usemap = __kmalloc_section_usemap();
> +     if (!usemap)
> +             return -ENOMEM;
> +     memmap = kmalloc_section_memmap(section_nr, nid, altmap);
> +     if (!memmap) {
> +             kfree(usemap);
> +             return  -ENOMEM;
>       }
>  
>       /*
> @@ -739,12 +737,7 @@ int __meminit sparse_add_one_section(int nid, unsigned long start_pfn,
>       section_mark_present(ms);
>       sparse_init_one_section(ms, section_nr, memmap, usemap);
>  
> -out:
> -     if (ret < 0) {
> -             kfree(usemap);
> -             __kfree_section_memmap(memmap, altmap);
> -     }
> -     return ret;
> +     return 0;
>  }
>  
>  #ifdef CONFIG_MEMORY_HOTREMOVE
> -- 
> 2.17.2
> 

-- 
Michal Hocko
SUSE Labs
