> -----Original Message-----
> From: Matthias Brugger [mailto:[email protected]]
> Sent: Monday, June 8, 2020 8:15 AM
> To: Roman Gushchin <[email protected]>; Song Bao Hua (Barry Song)
> <[email protected]>
> Cc: [email protected]; John Garry <[email protected]>;
> [email protected]; Linuxarm <[email protected]>;
> [email protected]; Zengtao (B) <[email protected]>;
> Jonathan Cameron <[email protected]>;
> [email protected]; [email protected]; [email protected];
> [email protected]
> Subject: Re: [PATCH 2/3] arm64: mm: reserve hugetlb CMA after numa_init
> 
> 
> 
> On 03/06/2020 05:22, Roman Gushchin wrote:
> > On Wed, Jun 03, 2020 at 02:42:30PM +1200, Barry Song wrote:
> >> hugetlb_cma_reserve() is called at the wrong place: numa_init has not been
> >> done yet, so all reserved memory will be located on node 0.
> >>
> >> Cc: Roman Gushchin <[email protected]>
> >> Signed-off-by: Barry Song <[email protected]>
> >
> > Acked-by: Roman Gushchin <[email protected]>
> >
> 
> When did this break or was it broken since the beginning?
> In any case, could you provide a "Fixes" tag for it, so that it can easily be
> backported to older releases?

I guess it was broken from the very beginning:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=cf11e85fc08cc

Fixes: cf11e85fc08c ("mm: hugetlb: optionally allocate gigantic hugepages using cma")

Do you think it would be better for me to send a v2 of this patch separately
with this tag, and drop it from my original per-numa CMA patch set?
Please let me know your suggestion.

Best Regards
Barry

> 
> Regards,
> Matthias
> 
> > Thanks!
> >
> >> ---
> >>  arch/arm64/mm/init.c | 10 +++++-----
> >>  1 file changed, 5 insertions(+), 5 deletions(-)
> >>
> >> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> >> index e42727e3568e..8f0e70ebb49d 100644
> >> --- a/arch/arm64/mm/init.c
> >> +++ b/arch/arm64/mm/init.c
> >> @@ -458,11 +458,6 @@ void __init arm64_memblock_init(void)
> >>    high_memory = __va(memblock_end_of_DRAM() - 1) + 1;
> >>
> >>    dma_contiguous_reserve(arm64_dma32_phys_limit);
> >> -
> >> -#ifdef CONFIG_ARM64_4K_PAGES
> >> -  hugetlb_cma_reserve(PUD_SHIFT - PAGE_SHIFT);
> >> -#endif
> >> -
> >>  }
> >>
> >>  void __init bootmem_init(void)
> >> @@ -478,6 +473,11 @@ void __init bootmem_init(void)
> >>    min_low_pfn = min;
> >>
> >>    arm64_numa_init();
> >> +
> >> +#ifdef CONFIG_ARM64_4K_PAGES
> >> +  hugetlb_cma_reserve(PUD_SHIFT - PAGE_SHIFT);
> >> +#endif
> >> +
> >>    /*
> >>     * Sparsemem tries to allocate bootmem in memory_present(), so must be
> >>     * done after the fixed reservations.
> >> --
> >> 2.23.0
