Re: [PATCHv2 3/7] mm/memblock: introduce allocation boundary for tracing purpose

2019-01-14 Thread Pingfan Liu
On Mon, Jan 14, 2019 at 4:50 PM Mike Rapoport  wrote:
>
> On Mon, Jan 14, 2019 at 04:33:50PM +0800, Pingfan Liu wrote:
> > On Mon, Jan 14, 2019 at 3:51 PM Mike Rapoport  wrote:
> > >
> > > Hi Pingfan,
> > >
> > > On Fri, Jan 11, 2019 at 01:12:53PM +0800, Pingfan Liu wrote:
> > > > During boot, there is a requirement to tell whether a series of
> > > > function calls will consume memory or not. For that purpose, a
> > > > temporary memory resource can be loaned to those functions through
> > > > the memblock allocator, but at a checkpoint all of the loaned memory
> > > > must have been returned.
> > > > A typical usage pattern:
> > > >  -1. find a usable range with memblock_find_in_range(), say [A,B]
> > > >  -2. before calling a series of functions, memblock_set_current_limit(A,B,true)
> > > >  -3. call the functions
> > > >  -4. memblock_find_in_range(A,B,B-A,1); if this fails, some memory
> > > >  has not been returned
> > > >  -5. restore the original limit
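
The five steps above can be sketched roughly as follows. This is a
hypothetical fragment based only on the API proposed in this patch (the
three-argument memblock_set_current_limit() with its boolean "strict"
flag is not mainline); the size, alignment, and the called sequence are
placeholders:

```c
/* Sketch of the loan-and-check pattern proposed above (hypothetical:
 * follows this patch's three-argument memblock_set_current_limit()). */
static void __init run_with_loaned_memory(phys_addr_t size, phys_addr_t align)
{
	phys_addr_t A, B;

	/* 1. find a usable range [A, B) */
	A = memblock_find_in_range(0, MEMBLOCK_ALLOC_ACCESSIBLE, size, align);
	B = A + size;

	/* 2. loan it out: further allocations must stay inside [A, B) */
	memblock_set_current_limit(A, B, true);

	/* 3. call the functions under test */
	acpi_boot_calls();	/* placeholder for the real sequence */

	/* 4. if the whole window cannot be found again, memory was kept */
	if (!memblock_find_in_range(A, B, B - A, 1))
		pr_warn("loaned boot memory was not returned\n");

	/* 5. restore the original, non-strict limit */
	memblock_set_current_limit(0, MEMBLOCK_ALLOC_ANYWHERE, false);
}
```
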
> > > >
> > > > E.g. in the case of hotmovable memory, some ACPI routines have to
> > > > be called, and they are not allowed to own any movable memory.
> > > > Although at present these functions do not consume memory, they may
> > > > later be changed, without anyone noticing, so that they do. With the
> > > > above method, such an allocation can be detected, and a pr_warn()
> > > > can ask people to resolve it.
> > >
> > > To ensure that a sequence of function calls didn't create new
> > > memblock allocations, you can simply check the number of reserved
> > > regions before and after that sequence.
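
That suggestion could look something like the sketch below; it relies on
the internal memblock.reserved.cnt counter, and since adjacent
reservations can merge, a matching count does not strictly prove that
nothing was allocated:

```c
/* Sketch: detect new allocations across a call sequence by comparing
 * reserved-region counts (a heuristic: regions may merge or split). */
unsigned long nr_before = memblock.reserved.cnt;

acpi_boot_calls();	/* placeholder for the sequence under test */

if (memblock.reserved.cnt != nr_before)
	pr_warn("the call sequence left new memblock reservations behind\n");
```
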
> > >
> > Yes, thank you for pointing that out.
> >
> > > Still, I'm not sure it would be practical to try to track which of
> > > the code called from x86::setup_arch() did memory allocations.
> > > Probably a better approach is to verify that no memory ended up in
> > > the movable areas after their extents are known.
> > >
> > Whether the allocated memory sits in hotmovable memory or not is a
> > matter of probability. So a warning based on that verification is also
> > probabilistic, and we might miss the problem.
>
> I'm not sure I'm following you here.
>
> After the hotmovable memory configuration is detected it is possible to
> traverse reserved memblock areas and warn if some of them reside in the
> hotmovable memory.
>
Oh, sorry that I did not explain it accurately. Let us say a machine
has nodes A/B/C, ordered from low to high memory addresses. With the
default top-down allocation, at this point memory will always be
allocated from node C. But whether node C is hotmovable depends on the
machine. The verification can pass on a machine with an unmovable
node C, yet fail on a machine with a movable node C. It is a
probability issue.

Thanks

[...]


Re: [PATCHv2 3/7] mm/memblock: introduce allocation boundary for tracing purpose

2019-01-14 Thread Mike Rapoport
On Mon, Jan 14, 2019 at 04:33:50PM +0800, Pingfan Liu wrote:
> On Mon, Jan 14, 2019 at 3:51 PM Mike Rapoport  wrote:
> >
> > Hi Pingfan,
> >
> > On Fri, Jan 11, 2019 at 01:12:53PM +0800, Pingfan Liu wrote:
> > > During boot, there is a requirement to tell whether a series of
> > > function calls will consume memory or not. For that purpose, a
> > > temporary memory resource can be loaned to those functions through
> > > the memblock allocator, but at a checkpoint all of the loaned memory
> > > must have been returned.
> > > A typical usage pattern:
> > >  -1. find a usable range with memblock_find_in_range(), say [A,B]
> > >  -2. before calling a series of functions, memblock_set_current_limit(A,B,true)
> > >  -3. call the functions
> > >  -4. memblock_find_in_range(A,B,B-A,1); if this fails, some memory
> > >  has not been returned
> > >  -5. restore the original limit
> > >
> > > E.g. in the case of hotmovable memory, some ACPI routines have to be
> > > called, and they are not allowed to own any movable memory. Although
> > > at present these functions do not consume memory, they may later be
> > > changed, without anyone noticing, so that they do. With the above
> > > method, such an allocation can be detected, and a pr_warn() can ask
> > > people to resolve it.
> >
> > To ensure that a sequence of function calls didn't create new memblock
> > allocations, you can simply check the number of reserved regions
> > before and after that sequence.
> >
> Yes, thank you for pointing that out.
> 
> > Still, I'm not sure it would be practical to try to track which of
> > the code called from x86::setup_arch() did memory allocations.
> > Probably a better approach is to verify that no memory ended up in the
> > movable areas after their extents are known.
> >
> Whether the allocated memory sits in hotmovable memory or not is a
> matter of probability. So a warning based on that verification is also
> probabilistic, and we might miss the problem.

I'm not sure I'm following you here.

After the hotmovable memory configuration is detected, it is possible to
traverse the reserved memblock areas and warn if any of them reside in
hotmovable memory.
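
A sketch of such a check, using the v5.0-era iterators
(for_each_reserved_mem_region(), for_each_memblock()) and
memblock_is_hotpluggable(); treat the exact helpers as assumptions:

```c
/* Sketch: once the movable extents are known, warn about any reserved
 * range that overlaps a MEMBLOCK_HOTPLUG memory region. */
static void __init warn_reserved_in_movable(void)
{
	struct memblock_region *m;
	phys_addr_t rs, re;
	u64 i;

	for_each_reserved_mem_region(i, &rs, &re) {
		for_each_memblock(memory, m) {
			if (!memblock_is_hotpluggable(m))
				continue;
			if (rs < m->base + m->size && re > m->base)
				pr_warn("reserved [%pa-%pa] overlaps a movable region\n",
					&rs, &re);
		}
	}
}
```
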

> Thanks and regards,
> Pingfan
> 
> > > Signed-off-by: Pingfan Liu 
> > > Cc: Thomas Gleixner 
> > > Cc: Ingo Molnar 
> > > Cc: Borislav Petkov 
> > > Cc: "H. Peter Anvin" 
> > > Cc: Dave Hansen 
> > > Cc: Andy Lutomirski 
> > > Cc: Peter Zijlstra 
> > > Cc: "Rafael J. Wysocki" 
> > > Cc: Len Brown 
> > > Cc: Yinghai Lu 
> > > Cc: Tejun Heo 
> > > Cc: Chao Fan 
> > > Cc: Baoquan He 
> > > Cc: Juergen Gross 
> > > Cc: Andrew Morton 
> > > Cc: Mike Rapoport 
> > > Cc: Vlastimil Babka 
> > > Cc: Michal Hocko 
> > > Cc: x...@kernel.org
> > > Cc: linux-a...@vger.kernel.org
> > > Cc: linux...@kvack.org
> > > ---
> > >  arch/arm/mm/init.c  |  3 ++-
> > >  arch/arm/mm/mmu.c   |  4 ++--
> > >  arch/arm/mm/nommu.c |  2 +-
> > >  arch/csky/kernel/setup.c|  2 +-
> > >  arch/microblaze/mm/init.c   |  2 +-
> > >  arch/mips/kernel/setup.c|  2 +-
> > >  arch/powerpc/mm/40x_mmu.c   |  6 --
> > >  arch/powerpc/mm/44x_mmu.c   |  2 +-
> > >  arch/powerpc/mm/8xx_mmu.c   |  2 +-
> > >  arch/powerpc/mm/fsl_booke_mmu.c |  5 +++--
> > >  arch/powerpc/mm/hash_utils_64.c |  4 ++--
> > >  arch/powerpc/mm/init_32.c   |  2 +-
> > >  arch/powerpc/mm/pgtable-radix.c |  2 +-
> > >  arch/powerpc/mm/ppc_mmu_32.c|  8 ++--
> > >  arch/powerpc/mm/tlb_nohash.c|  6 --
> > >  arch/unicore32/mm/mmu.c |  2 +-
> > >  arch/x86/kernel/setup.c |  2 +-
> > >  arch/xtensa/mm/init.c   |  2 +-
> > >  include/linux/memblock.h| 10 +++---
> > >  mm/memblock.c   | 23 ++-
> > >  20 files changed, 59 insertions(+), 32 deletions(-)
> > >
> > > diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
> > > index 32e4845..58a4342 100644
> > > --- a/arch/arm/mm/init.c
> > > +++ b/arch/arm/mm/init.c
> > > @@ -93,7 +93,8 @@ __tagtable(ATAG_INITRD2, parse_tag_initrd2);
> > >  static void __init find_limits(unsigned long *min, unsigned long 
> > > *max_low,
> > >  unsigned long *max_high)
> > >  {
> > > - *max_low = PFN_DOWN(memblock_get_current_limit());
> > > + memblock_get_current_limit(NULL, max_low);
> > > + *max_low = PFN_DOWN(*max_low);
> > >   *min = PFN_UP(memblock_start_of_DRAM());
> > >   *max_high = PFN_DOWN(memblock_end_of_DRAM());
> > >  }
> > > diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
> > > index f5cc1cc..9025418 100644
> > > --- a/arch/arm/mm/mmu.c
> > > +++ b/arch/arm/mm/mmu.c
> > > @@ -1240,7 +1240,7 @@ void __init adjust_lowmem_bounds(void)
> > >   }
> > >   }
> > >
> > > - memblock_set_current_limit(memblock_limit);
> > > + memblock_set_current_limit(0, memblock_limit, false);
> > >  }
> > >
> > >  static inline void prepare_page_table(void)
> > > @@ -1625,7 +1625,7 @@ void __init paging_init(const struct machine_desc 
> 

Re: [PATCHv2 3/7] mm/memblock: introduce allocation boundary for tracing purpose

2019-01-14 Thread Pingfan Liu
On Mon, Jan 14, 2019 at 3:51 PM Mike Rapoport  wrote:
>
> Hi Pingfan,
>
> On Fri, Jan 11, 2019 at 01:12:53PM +0800, Pingfan Liu wrote:
> > During boot, there is a requirement to tell whether a series of
> > function calls will consume memory or not. For that purpose, a
> > temporary memory resource can be loaned to those functions through the
> > memblock allocator, but at a checkpoint all of the loaned memory must
> > have been returned.
> > A typical usage pattern:
> >  -1. find a usable range with memblock_find_in_range(), say [A,B]
> >  -2. before calling a series of functions, memblock_set_current_limit(A,B,true)
> >  -3. call the functions
> >  -4. memblock_find_in_range(A,B,B-A,1); if this fails, some memory has
> >  not been returned
> >  -5. restore the original limit
> >
> > E.g. in the case of hotmovable memory, some ACPI routines have to be
> > called, and they are not allowed to own any movable memory. Although at
> > present these functions do not consume memory, they may later be
> > changed, without anyone noticing, so that they do. With the above
> > method, such an allocation can be detected, and a pr_warn() can ask
> > people to resolve it.
>
> To ensure that a sequence of function calls didn't create new memblock
> allocations, you can simply check the number of reserved regions before
> and after that sequence.
>
Yes, thank you for pointing that out.

> Still, I'm not sure it would be practical to try to track which of the
> code called from x86::setup_arch() did memory allocations.
> Probably a better approach is to verify that no memory ended up in the
> movable areas after their extents are known.
>
Whether the allocated memory sits in hotmovable memory or not is a
matter of probability. So a warning based on that verification is also
probabilistic, and we might miss the problem.

Thanks and regards,
Pingfan

> > Signed-off-by: Pingfan Liu 
> > Cc: Thomas Gleixner 
> > Cc: Ingo Molnar 
> > Cc: Borislav Petkov 
> > Cc: "H. Peter Anvin" 
> > Cc: Dave Hansen 
> > Cc: Andy Lutomirski 
> > Cc: Peter Zijlstra 
> > Cc: "Rafael J. Wysocki" 
> > Cc: Len Brown 
> > Cc: Yinghai Lu 
> > Cc: Tejun Heo 
> > Cc: Chao Fan 
> > Cc: Baoquan He 
> > Cc: Juergen Gross 
> > Cc: Andrew Morton 
> > Cc: Mike Rapoport 
> > Cc: Vlastimil Babka 
> > Cc: Michal Hocko 
> > Cc: x...@kernel.org
> > Cc: linux-a...@vger.kernel.org
> > Cc: linux...@kvack.org
> > ---
> >  arch/arm/mm/init.c  |  3 ++-
> >  arch/arm/mm/mmu.c   |  4 ++--
> >  arch/arm/mm/nommu.c |  2 +-
> >  arch/csky/kernel/setup.c|  2 +-
> >  arch/microblaze/mm/init.c   |  2 +-
> >  arch/mips/kernel/setup.c|  2 +-
> >  arch/powerpc/mm/40x_mmu.c   |  6 --
> >  arch/powerpc/mm/44x_mmu.c   |  2 +-
> >  arch/powerpc/mm/8xx_mmu.c   |  2 +-
> >  arch/powerpc/mm/fsl_booke_mmu.c |  5 +++--
> >  arch/powerpc/mm/hash_utils_64.c |  4 ++--
> >  arch/powerpc/mm/init_32.c   |  2 +-
> >  arch/powerpc/mm/pgtable-radix.c |  2 +-
> >  arch/powerpc/mm/ppc_mmu_32.c|  8 ++--
> >  arch/powerpc/mm/tlb_nohash.c|  6 --
> >  arch/unicore32/mm/mmu.c |  2 +-
> >  arch/x86/kernel/setup.c |  2 +-
> >  arch/xtensa/mm/init.c   |  2 +-
> >  include/linux/memblock.h| 10 +++---
> >  mm/memblock.c   | 23 ++-
> >  20 files changed, 59 insertions(+), 32 deletions(-)
> >
> > diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
> > index 32e4845..58a4342 100644
> > --- a/arch/arm/mm/init.c
> > +++ b/arch/arm/mm/init.c
> > @@ -93,7 +93,8 @@ __tagtable(ATAG_INITRD2, parse_tag_initrd2);
> >  static void __init find_limits(unsigned long *min, unsigned long *max_low,
> >  unsigned long *max_high)
> >  {
> > - *max_low = PFN_DOWN(memblock_get_current_limit());
> > + memblock_get_current_limit(NULL, max_low);
> > + *max_low = PFN_DOWN(*max_low);
> >   *min = PFN_UP(memblock_start_of_DRAM());
> >   *max_high = PFN_DOWN(memblock_end_of_DRAM());
> >  }
> > diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
> > index f5cc1cc..9025418 100644
> > --- a/arch/arm/mm/mmu.c
> > +++ b/arch/arm/mm/mmu.c
> > @@ -1240,7 +1240,7 @@ void __init adjust_lowmem_bounds(void)
> >   }
> >   }
> >
> > - memblock_set_current_limit(memblock_limit);
> > + memblock_set_current_limit(0, memblock_limit, false);
> >  }
> >
> >  static inline void prepare_page_table(void)
> > @@ -1625,7 +1625,7 @@ void __init paging_init(const struct machine_desc 
> > *mdesc)
> >
> >   prepare_page_table();
> >   map_lowmem();
> > - memblock_set_current_limit(arm_lowmem_limit);
> > + memblock_set_current_limit(0, arm_lowmem_limit, false);
> >   dma_contiguous_remap();
> >   early_fixmap_shutdown();
> >   devicemaps_init(mdesc);
> > diff --git a/arch/arm/mm/nommu.c b/arch/arm/mm/nommu.c
> > index 7d67c70..721535c 100644
> > --- a/arch/arm/mm/nommu.c
> > +++ b/arch/arm/mm/nommu.c
> > @@ -138,7 +138,7 @@ void __init 

Re: [PATCHv2 3/7] mm/memblock: introduce allocation boundary for tracing purpose

2019-01-13 Thread Mike Rapoport
Hi Pingfan,

On Fri, Jan 11, 2019 at 01:12:53PM +0800, Pingfan Liu wrote:
> During boot, there is a requirement to tell whether a series of
> function calls will consume memory or not. For that purpose, a
> temporary memory resource can be loaned to those functions through the
> memblock allocator, but at a checkpoint all of the loaned memory must
> have been returned.
> A typical usage pattern:
>  -1. find a usable range with memblock_find_in_range(), say [A,B]
>  -2. before calling a series of functions, memblock_set_current_limit(A,B,true)
>  -3. call the functions
>  -4. memblock_find_in_range(A,B,B-A,1); if this fails, some memory has
>  not been returned
>  -5. restore the original limit
> 
> E.g. in the case of hotmovable memory, some ACPI routines have to be
> called, and they are not allowed to own any movable memory. Although at
> present these functions do not consume memory, they may later be
> changed, without anyone noticing, so that they do. With the above
> method, such an allocation can be detected, and a pr_warn() can ask
> people to resolve it.

To ensure that a sequence of function calls didn't create new memblock
allocations, you can simply check the number of reserved regions before
and after that sequence.

Still, I'm not sure it would be practical to try to track which of the
code called from x86::setup_arch() did memory allocations.
Probably a better approach is to verify that no memory ended up in the
movable areas after their extents are known.
 
> Signed-off-by: Pingfan Liu 
> Cc: Thomas Gleixner 
> Cc: Ingo Molnar 
> Cc: Borislav Petkov 
> Cc: "H. Peter Anvin" 
> Cc: Dave Hansen 
> Cc: Andy Lutomirski 
> Cc: Peter Zijlstra 
> Cc: "Rafael J. Wysocki" 
> Cc: Len Brown 
> Cc: Yinghai Lu 
> Cc: Tejun Heo 
> Cc: Chao Fan 
> Cc: Baoquan He 
> Cc: Juergen Gross 
> Cc: Andrew Morton 
> Cc: Mike Rapoport 
> Cc: Vlastimil Babka 
> Cc: Michal Hocko 
> Cc: x...@kernel.org
> Cc: linux-a...@vger.kernel.org
> Cc: linux...@kvack.org
> ---
>  arch/arm/mm/init.c  |  3 ++-
>  arch/arm/mm/mmu.c   |  4 ++--
>  arch/arm/mm/nommu.c |  2 +-
>  arch/csky/kernel/setup.c|  2 +-
>  arch/microblaze/mm/init.c   |  2 +-
>  arch/mips/kernel/setup.c|  2 +-
>  arch/powerpc/mm/40x_mmu.c   |  6 --
>  arch/powerpc/mm/44x_mmu.c   |  2 +-
>  arch/powerpc/mm/8xx_mmu.c   |  2 +-
>  arch/powerpc/mm/fsl_booke_mmu.c |  5 +++--
>  arch/powerpc/mm/hash_utils_64.c |  4 ++--
>  arch/powerpc/mm/init_32.c   |  2 +-
>  arch/powerpc/mm/pgtable-radix.c |  2 +-
>  arch/powerpc/mm/ppc_mmu_32.c|  8 ++--
>  arch/powerpc/mm/tlb_nohash.c|  6 --
>  arch/unicore32/mm/mmu.c |  2 +-
>  arch/x86/kernel/setup.c |  2 +-
>  arch/xtensa/mm/init.c   |  2 +-
>  include/linux/memblock.h| 10 +++---
>  mm/memblock.c   | 23 ++-
>  20 files changed, 59 insertions(+), 32 deletions(-)
> 
> diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
> index 32e4845..58a4342 100644
> --- a/arch/arm/mm/init.c
> +++ b/arch/arm/mm/init.c
> @@ -93,7 +93,8 @@ __tagtable(ATAG_INITRD2, parse_tag_initrd2);
>  static void __init find_limits(unsigned long *min, unsigned long *max_low,
>  unsigned long *max_high)
>  {
> - *max_low = PFN_DOWN(memblock_get_current_limit());
> + memblock_get_current_limit(NULL, max_low);
> + *max_low = PFN_DOWN(*max_low);
>   *min = PFN_UP(memblock_start_of_DRAM());
>   *max_high = PFN_DOWN(memblock_end_of_DRAM());
>  }
> diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
> index f5cc1cc..9025418 100644
> --- a/arch/arm/mm/mmu.c
> +++ b/arch/arm/mm/mmu.c
> @@ -1240,7 +1240,7 @@ void __init adjust_lowmem_bounds(void)
>   }
>   }
> 
> - memblock_set_current_limit(memblock_limit);
> + memblock_set_current_limit(0, memblock_limit, false);
>  }
> 
>  static inline void prepare_page_table(void)
> @@ -1625,7 +1625,7 @@ void __init paging_init(const struct machine_desc 
> *mdesc)
> 
>   prepare_page_table();
>   map_lowmem();
> - memblock_set_current_limit(arm_lowmem_limit);
> + memblock_set_current_limit(0, arm_lowmem_limit, false);
>   dma_contiguous_remap();
>   early_fixmap_shutdown();
>   devicemaps_init(mdesc);
> diff --git a/arch/arm/mm/nommu.c b/arch/arm/mm/nommu.c
> index 7d67c70..721535c 100644
> --- a/arch/arm/mm/nommu.c
> +++ b/arch/arm/mm/nommu.c
> @@ -138,7 +138,7 @@ void __init adjust_lowmem_bounds(void)
>   adjust_lowmem_bounds_mpu();
>   end = memblock_end_of_DRAM();
>   high_memory = __va(end - 1) + 1;
> - memblock_set_current_limit(end);
> + memblock_set_current_limit(0, end, false);
>  }
> 
>  /*
> diff --git a/arch/csky/kernel/setup.c b/arch/csky/kernel/setup.c
> index dff8b89..e6f88bf 100644
> --- a/arch/csky/kernel/setup.c
> +++ b/arch/csky/kernel/setup.c
> @@ -100,7 +100,7 @@ static void __init csky_memblock_init(void)
> 
>   highend_pfn = max_pfn;
>  #endif
> -