Hello Barry,
On 07/08/2020 05:53 AM, Barry Song wrote:
> Rather than splitting hugetlb_cma across online nodes, it is better to do
> it across nodes with memory.
Right, it makes sense to avoid nodes without memory and hence not lose
portions of the CMA reservation intended for HugeTLB. N_MEMORY is better
than N_ONLINE here and will help avoid this situation.
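For reference, the mm/hugetlb.c side of this (the diffstat below shows a
4 line change there) presumably boils down to switching the nodemask that
hugetlb_cma_reserve() iterates. A minimal sketch of the core loop, modelled
on the cf11e85fc08c code rather than the exact hunk:

	/* mm/hugetlb.c: hugetlb_cma_reserve(), sketch of the core loop */
	per_node = hugetlb_cma_size / num_node_state(N_MEMORY); /* was nr_online_nodes */
	reserved = 0;
	for_each_node_state(nid, N_MEMORY) {	/* was N_ONLINE */
		size = min(per_node, hugetlb_cma_size - reserved);
		size = round_up(size, PAGE_SIZE << order);

		res = cma_declare_contiguous_nid(0, size, 0, PAGE_SIZE << order,
						 0, false, "hugetlb",
						 &hugetlb_cma[nid], nid);
		if (res) {
			pr_warn("hugetlb_cma: reservation failed: err %d, node %d",
				res, nid);
			continue;
		}

		reserved += size;
		if (reserved >= hugetlb_cma_size)
			break;
	}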
> On an ARM64 server with four NUMA nodes where only node0 has memory, if I
> set hugetlb_cma=4G in bootargs,
>
> without this patch, I got the below printk:
> hugetlb_cma: reserve 4096 MiB, up to 1024 MiB per node
> hugetlb_cma: reserved 1024 MiB on node 0
> hugetlb_cma: reservation failed: err -12, node 1
> hugetlb_cma: reservation failed: err -12, node 2
> hugetlb_cma: reservation failed: err -12, node 3
As expected.
>
> hugetlb_cma size is broken once the system has nodes without memory.
I would not say that it is 'broken'. It is just not optimal, but it still
works as designed.
>
> With this patch, I got the below printk:
> hugetlb_cma: reserve 4096 MiB, up to 4096 MiB per node
> hugetlb_cma: reserved 4096 MiB on node 0
As expected, the per-node CMA reservation quota has changed now that the
size is split across N_MEMORY instead of N_ONLINE.
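To spell out the arithmetic: the per node quota is hugetlb_cma_size divided
by the number of nodes in the scanned nodemask, i.e. 4096 MiB / 4 = 1024 MiB
with N_ONLINE (all four nodes online) versus 4096 MiB / 1 = 4096 MiB with
N_MEMORY (only node 0 has memory), which is why the entire reservation now
lands on node 0.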
>
> So this patch fixes the broken hugetlb_cma size on arm64.
There is nothing arm64 specific here. Any platform where N_ONLINE != N_MEMORY,
i.e. with some nodes lacking memory when the CMA reservation gets called,
will have this problem.
>
> Jonathan Cameron tested this patch on an x86 platform. Jonathan figured out
> that x86 is quite different from arm64; the hugetlb_cma size has never been
> broken on x86.
> On arm64 all nodes are marked online at the same time. On x86, only
> nodes with memory are initially marked as online:
> initmem_init()->x86_numa_init()->numa_init()->
> numa_register_memblks()->alloc_node_data()->node_set_online()
> So at the time of the existing cma setup call, only the memory containing
> nodes are online. The other nodes are brought up much later.
The problem is always there if N_ONLINE != N_MEMORY, but on x86 it is just
hidden because N_ONLINE happens to match N_MEMORY at the point during boot
when hugetlb_cma_reserve() gets called.
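For anyone following the x86 call chain above, it looks roughly like this
(a simplified sketch of the v5.8 era code, not a verbatim copy; mi is the
numa_meminfo being registered):

	/* arch/x86/mm/numa.c: numa_register_memblks(), simplified */
	for_each_node_mask(nid, node_possible_map) {
		u64 start = PFN_PHYS(max_pfn);
		u64 end = 0;

		/* derive the node's [start, end) span from recorded memblks */
		for (i = 0; i < mi->nr_blks; i++) {
			if (nid != mi->blk[i].nid)
				continue;
			start = min(mi->blk[i].start, start);
			end = max(mi->blk[i].end, end);
		}

		/* nodes covering no memory are skipped entirely */
		if (start >= end)
			continue;

		alloc_node_data(nid);	/* ends up calling node_set_online(nid) */
	}

So at this point only nodes that actually own memory are marked online; the
memoryless nodes are onlined much later, which is why N_ONLINE happens to
equal N_MEMORY when hugetlb_cma_reserve() runs on x86.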
>
> Thus, the change is simply to fix ARM64. A change is needed to x86 only
> because the inherent assumptions in hugetlb_cma_reserve() have changed.
hugetlb_cma_reserve() will now scan over N_MEMORY and hence expects all
platforms to have N_MEMORY initialized properly before calling it. This
needs to be well documented for the hugetlb_cma_reserve() function along
with its call sites.
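Perhaps something along these lines for the function itself (just a
suggested wording):

	/*
	 * hugetlb_cma_reserve() - reserve CMA for gigantic pages on nodes
	 * with memory
	 *
	 * The reservation is now split across node_states[N_MEMORY], so this
	 * must be called after the platform has initialized N_MEMORY, i.e.
	 * after free_area_init() has run. Call sites on a new platform need
	 * to guarantee that ordering.
	 */
	void __init hugetlb_cma_reserve(int order)

plus a one line reminder at each call site.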
>
> Fixes: cf11e85fc08c ("mm: hugetlb: optionally allocate gigantic hugepages
> using cma")
I would not call this a "Fix". The current code still works, though in
a suboptimal manner.
> Cc: Roman Gushchin
> Cc: Catalin Marinas
> Cc: Will Deacon
> Cc: Thomas Gleixner
> Cc: Ingo Molnar
> Cc: Borislav Petkov
> Cc: H. Peter Anvin
> Cc: Mike Kravetz
> Cc: Mike Rapoport
> Cc: Andrew Morton
> Cc: Anshuman Khandual
> Cc: Jonathan Cameron
> Signed-off-by: Barry Song
> ---
> arch/arm64/mm/init.c    | 18 +-
> arch/x86/kernel/setup.c | 13 ++---
> mm/hugetlb.c            |  4 ++--
> 3 files changed, 21 insertions(+), 14 deletions(-)
>
> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> index 1e93cfc7c47a..f6090ef6812b 100644
> --- a/arch/arm64/mm/init.c
> +++ b/arch/arm64/mm/init.c
> @@ -420,15 +420,6 @@ void __init bootmem_init(void)
>
> arm64_numa_init();
>
> - /*
> - * must be done after arm64_numa_init() which calls numa_init() to
> - * initialize node_online_map that gets used in hugetlb_cma_reserve()
> - * while allocating required CMA size across online nodes.
> - */
> -#ifdef CONFIG_ARM64_4K_PAGES
> - hugetlb_cma_reserve(PUD_SHIFT - PAGE_SHIFT);
> -#endif
> -
> /*
> * Sparsemem tries to allocate bootmem in memory_present(), so must be
> * done after the fixed reservations.
> @@ -438,6 +429,15 @@ void __init bootmem_init(void)
> sparse_init();
> zone_sizes_init(min, max);
>
> + /*
> + * must be done after zone_sizes_init() which calls node_set_state() to
> + * setup node_states[N_MEMORY] that gets used in hugetlb_cma_reserve()
> + * while allocating required CMA size across nodes with memory.
> + */
Needs better wording here, in particular a reference to free_area_init()
which updates N_MEMORY via node_set_state(). Also mention the fact that
hugetlb_cma_reserve() now scans over the N_MEMORY nodemask and hence
expects platforms to have it properly initialized.
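Perhaps something like (again, just a suggested wording):

	/*
	 * This must be done after zone_sizes_init() which calls
	 * free_area_init() that updates N_MEMORY via node_set_state().
	 * hugetlb_cma_reserve() scans over the N_MEMORY nodemask and
	 * hence expects the platform to have it initialized by now.
	 */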
> +#ifdef CONFIG_ARM64_4K_PAGES
> + hugetlb_cma_reserve(PUD_SHIFT - PAGE_SHIFT);
> +#endif
> +
> memblock_dump_all();
> }
>
> diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
> index a3767e74c758..fdb3a934b6c6 100644
> --- a/arch/x86/kernel/setup.c
> +++ b/arch/x86/kernel/setup.c
> @@ -1164,9 +1164,6 @@ void __init setup_arch(char **cmdline_p)
> initmem_init();
> dma_contiguous_reserve(max_pfn_mapped