Re: [PATCH RFC] mm: remove CONFIG_HAVE_MEMBLOCK_NODE_MAP (was: Re: [PATCH v3 0/5] mm: Enable CONFIG_NODES_SPAN_OTHER_NODES by default for NUMA)

2020-04-10 Thread Baoquan He
On 04/09/20 at 05:33pm, Michal Hocko wrote:
> On Thu 09-04-20 22:41:19, Baoquan He wrote:
> > On 04/02/20 at 10:01am, Michal Hocko wrote:
> > > On Wed 01-04-20 10:51:55, Mike Rapoport wrote:
> > > > Hi,
> > > > 
> > > > On Wed, Apr 01, 2020 at 01:42:27PM +0800, Baoquan He wrote:
> > > [...]
> > > > > From the above information, we can remove HAVE_MEMBLOCK_NODE_MAP and
> > > > > replace it with CONFIG_NUMA. It sounds more sensible to store nid in
> > > > > memblock only when NUMA support is enabled.
> > > >  
> > > > Replacing CONFIG_HAVE_MEMBLOCK_NODE_MAP with CONFIG_NUMA will work, but
> > > > this will not help clean up the whole node/zone initialization mess, and
> > > > we'll be stuck with two implementations.
> > > 
> > > Yeah, this is far from optimal.
> > > 
> > > > The overhead of enabling HAVE_MEMBLOCK_NODE_MAP is only at init time, as
> > > > most architectures will discard the entire memblock anyway, so having it
> > > > on a UMA arch won't be a problem. The only exception is arm, which uses
> > > > memblock for pfn_valid(); there we may also think about a solution to
> > > > compensate for the addition of nid to the memblock structures.
> > > 
> > > Well, we can make memblock_region->nid defined only for CONFIG_NUMA.
> > > memblock_get_region_node would then unconditionally return 0 on UMA.
> > > Essentially the same way we do NUMA for other MM code. I only see a few
> > > direct usages of region->nid.
> > 
> > I checked the code again, and it seems HAVE_MEMBLOCK_NODE_MAP is selected
> > directly by all ARCHes which support it. That means HAVE_MEMBLOCK_NODE_MAP
> > is enabled by default on those ARCHes and has no dependency on CONFIG_NUMA
> > at all. E.g. on x86, the generic code path just calls
> > free_area_init_nodes(), which is defined inside the
> > CONFIG_HAVE_MEMBLOCK_NODE_MAP ifdeffery. So I tend to agree with Mike that
> > we should first remove HAVE_MEMBLOCK_NODE_MAP on all ARCHes. After that we
> > can check whether it's worth defining memblock_region->nid only for the
> > CONFIG_NUMA case.
> 
> This can surely go in separate patches. What I meant to say is that
> region->nid is by definition 0 on !CONFIG_NUMA.

I see, thanks.



Re: [PATCH RFC] mm: remove CONFIG_HAVE_MEMBLOCK_NODE_MAP (was: Re: [PATCH v3 0/5] mm: Enable CONFIG_NODES_SPAN_OTHER_NODES by default for NUMA)

2020-04-09 Thread Michal Hocko
On Thu 09-04-20 22:41:19, Baoquan He wrote:
> On 04/02/20 at 10:01am, Michal Hocko wrote:
> > On Wed 01-04-20 10:51:55, Mike Rapoport wrote:
> > > Hi,
> > > 
> > > On Wed, Apr 01, 2020 at 01:42:27PM +0800, Baoquan He wrote:
> > [...]
> > > > From the above information, we can remove HAVE_MEMBLOCK_NODE_MAP and
> > > > replace it with CONFIG_NUMA. It sounds more sensible to store nid in
> > > > memblock only when NUMA support is enabled.
> > >  
> > > Replacing CONFIG_HAVE_MEMBLOCK_NODE_MAP with CONFIG_NUMA will work, but
> > > this will not help clean up the whole node/zone initialization mess, and
> > > we'll be stuck with two implementations.
> > 
> > Yeah, this is far from optimal.
> > 
> > > The overhead of enabling HAVE_MEMBLOCK_NODE_MAP is only at init time, as
> > > most architectures will discard the entire memblock anyway, so having it
> > > on a UMA arch won't be a problem. The only exception is arm, which uses
> > > memblock for pfn_valid(); there we may also think about a solution to
> > > compensate for the addition of nid to the memblock structures.
> > 
> > Well, we can make memblock_region->nid defined only for CONFIG_NUMA.
> > memblock_get_region_node would then unconditionally return 0 on UMA.
> > Essentially the same way we do NUMA for other MM code. I only see a few
> > direct usages of region->nid.
> 
> I checked the code again, and it seems HAVE_MEMBLOCK_NODE_MAP is selected
> directly by all ARCHes which support it. That means HAVE_MEMBLOCK_NODE_MAP
> is enabled by default on those ARCHes and has no dependency on CONFIG_NUMA
> at all. E.g. on x86, the generic code path just calls
> free_area_init_nodes(), which is defined inside the
> CONFIG_HAVE_MEMBLOCK_NODE_MAP ifdeffery. So I tend to agree with Mike that
> we should first remove HAVE_MEMBLOCK_NODE_MAP on all ARCHes. After that we
> can check whether it's worth defining memblock_region->nid only for the
> CONFIG_NUMA case.

This can surely go in separate patches. What I meant to say is that
region->nid is by definition 0 on !CONFIG_NUMA.
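
For illustration, that shape would look roughly like this (a sketch only, not
code from the thread; the accessor bodies match the existing
!HAVE_MEMBLOCK_NODE_MAP stubs quoted later on this page):

struct memblock_region {
	phys_addr_t base;
	phys_addr_t size;
	enum memblock_flags flags;
#ifdef CONFIG_NUMA
	int nid;
#endif
};

#ifdef CONFIG_NUMA
static inline void memblock_set_region_node(struct memblock_region *r, int nid)
{
	r->nid = nid;
}

static inline int memblock_get_region_node(const struct memblock_region *r)
{
	return r->nid;
}
#else
static inline void memblock_set_region_node(struct memblock_region *r, int nid)
{
}

static inline int memblock_get_region_node(const struct memblock_region *r)
{
	return 0;	/* on UMA all memory is on node 0 by definition */
}
#endif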

-- 
Michal Hocko
SUSE Labs


Re: [PATCH RFC] mm: remove CONFIG_HAVE_MEMBLOCK_NODE_MAP (was: Re: [PATCH v3 0/5] mm: Enable CONFIG_NODES_SPAN_OTHER_NODES by default for NUMA)

2020-04-09 Thread Baoquan He
On 04/02/20 at 10:01am, Michal Hocko wrote:
> On Wed 01-04-20 10:51:55, Mike Rapoport wrote:
> > Hi,
> > 
> > On Wed, Apr 01, 2020 at 01:42:27PM +0800, Baoquan He wrote:
> [...]
> > > From the above information, we can remove HAVE_MEMBLOCK_NODE_MAP and
> > > replace it with CONFIG_NUMA. It sounds more sensible to store nid in
> > > memblock only when NUMA support is enabled.
> >  
> > Replacing CONFIG_HAVE_MEMBLOCK_NODE_MAP with CONFIG_NUMA will work, but
> > this will not help clean up the whole node/zone initialization mess, and
> > we'll be stuck with two implementations.
> 
> Yeah, this is far from optimal.
> 
> > The overhead of enabling HAVE_MEMBLOCK_NODE_MAP is only at init time, as
> > most architectures will discard the entire memblock anyway, so having it
> > on a UMA arch won't be a problem. The only exception is arm, which uses
> > memblock for pfn_valid(); there we may also think about a solution to
> > compensate for the addition of nid to the memblock structures.
> 
> Well, we can make memblock_region->nid defined only for CONFIG_NUMA.
> memblock_get_region_node would then unconditionally return 0 on UMA.
> Essentially the same way we do NUMA for other MM code. I only see a few
> direct usages of region->nid.

I checked the code again, and it seems HAVE_MEMBLOCK_NODE_MAP is selected
directly by all ARCHes which support it. That means HAVE_MEMBLOCK_NODE_MAP
is enabled by default on those ARCHes and has no dependency on CONFIG_NUMA
at all. E.g. on x86, the generic code path just calls
free_area_init_nodes(), which is defined inside the
CONFIG_HAVE_MEMBLOCK_NODE_MAP ifdeffery. So I tend to agree with Mike that
we should first remove HAVE_MEMBLOCK_NODE_MAP on all ARCHes. After that we
can check whether it's worth defining memblock_region->nid only for the
CONFIG_NUMA case.

config X86
	def_bool y
	...
	select HAVE_MEMBLOCK_NODE_MAP
	...



Re: [PATCH RFC] mm: remove CONFIG_HAVE_MEMBLOCK_NODE_MAP (was: Re: [PATCH v3 0/5] mm: Enable CONFIG_NODES_SPAN_OTHER_NODES by default for NUMA)

2020-04-02 Thread Michal Hocko
On Wed 01-04-20 10:51:55, Mike Rapoport wrote:
> Hi,
> 
> On Wed, Apr 01, 2020 at 01:42:27PM +0800, Baoquan He wrote:
[...]
> > From the above information, we can remove HAVE_MEMBLOCK_NODE_MAP and
> > replace it with CONFIG_NUMA. It sounds more sensible to store nid in
> > memblock only when NUMA support is enabled.
>  
> Replacing CONFIG_HAVE_MEMBLOCK_NODE_MAP with CONFIG_NUMA will work, but
> this will not help clean up the whole node/zone initialization mess, and
> we'll be stuck with two implementations.

Yeah, this is far from optimal.

> The overhead of enabling HAVE_MEMBLOCK_NODE_MAP is only at init time, as
> most architectures will discard the entire memblock anyway, so having it
> on a UMA arch won't be a problem. The only exception is arm, which uses
> memblock for pfn_valid(); there we may also think about a solution to
> compensate for the addition of nid to the memblock structures.

Well, we can make memblock_region->nid defined only for CONFIG_NUMA.
memblock_get_region_node would then unconditionally return 0 on UMA.
Essentially the same way we do NUMA for other MM code. I only see a few
direct usages of region->nid.
-- 
Michal Hocko
SUSE Labs


Re: [PATCH RFC] mm: remove CONFIG_HAVE_MEMBLOCK_NODE_MAP (was: Re: [PATCH v3 0/5] mm: Enable CONFIG_NODES_SPAN_OTHER_NODES by default for NUMA)

2020-04-01 Thread Mike Rapoport
Hi,

On Wed, Apr 01, 2020 at 01:42:27PM +0800, Baoquan He wrote:
> On 04/01/20 at 12:56am, Mike Rapoport wrote:
> > On Mon, Mar 30, 2020 at 11:58:43AM +0200, Michal Hocko wrote:
> > > 
> > > What would it take to make ia64 use HAVE_MEMBLOCK_NODE_MAP? I would
> > > really love to see that thing go away. It is causing problems when
> > > people try to use memblock api.
> > 
> > Well, it's a small patch in the end :)
> > 
> > Currently all NUMA architectures enable
> > CONFIG_HAVE_MEMBLOCK_NODE_MAP and use free_area_init_nodes() to initialize
> > the node and zone structures.
> 
> I did some investigation: there are nine ARCHes that have a NUMA config,
> and among them alpha doesn't have HAVE_MEMBLOCK_NODE_MAP support.

alpha has had NUMA marked as BROKEN for a long time now. I doubt anybody is
going to fix that.
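
For reference, alpha's entry in arch/alpha/Kconfig looks roughly like this
(quoted from memory, so treat the exact dependency list as approximate):

config NUMA
	bool "NUMA Support (EXPERIMENTAL)"
	depends on DISCONTIGMEM && BROKEN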

> The interesting thing is that there are two ARCHes, microblaze and riscv,
> which have the HAVE_MEMBLOCK_NODE_MAP config but no NUMA config at all.
> Apparently adding HAVE_MEMBLOCK_NODE_MAP to riscv and microblaze was not
> carefully considered.

Well, microblaze probably copied it and forgot to remove it, and riscv
might get actual NUMA at some point.

> arch/alpha/Kconfig:config NUMA
> arch/arm64/Kconfig:config NUMA
> arch/ia64/Kconfig:config NUMA
> arch/mips/Kconfig:config NUMA
> arch/powerpc/Kconfig:config NUMA
> arch/s390/Kconfig:config NUMA
> arch/sh/mm/Kconfig:config NUMA
> arch/sparc/Kconfig:config NUMA
> arch/x86/Kconfig:config NUMA
> 
> From the above information, we can remove HAVE_MEMBLOCK_NODE_MAP and
> replace it with CONFIG_NUMA. It sounds more sensible to store nid in
> memblock only when NUMA support is enabled.
 
Replacing CONFIG_HAVE_MEMBLOCK_NODE_MAP with CONFIG_NUMA will work, but
this will not help clean up the whole node/zone initialization mess, and
we'll be stuck with two implementations.

The overhead of enabling HAVE_MEMBLOCK_NODE_MAP is only at init time, as
most architectures will discard the entire memblock anyway, so having it
on a UMA arch won't be a problem. The only exception is arm, which uses
memblock for pfn_valid(); there we may also think about a solution to
compensate for the addition of nid to the memblock structures.

We also may want to make all the movable_node and movable_core
functionality dependent on CONFIG_NUMA.
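
A minimal sketch of that gating, reusing the existing
cmdline_parse_movable_node() handler from mm/memory_hotplug.c (whether an
#ifdef there is the right place is left open; this only illustrates the idea):

#ifdef CONFIG_NUMA
static int __init cmdline_parse_movable_node(char *p)
{
	movable_node_enabled = true;
	return 0;
}
early_param("movable_node", cmdline_parse_movable_node);
#endif /* CONFIG_NUMA */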
 
> > diff --git a/include/linux/memblock.h b/include/linux/memblock.h
> > index 079d17d96410..9de81112447e 100644
> > --- a/include/linux/memblock.h
> > +++ b/include/linux/memblock.h
> > @@ -50,9 +50,7 @@ struct memblock_region {
> >  	phys_addr_t base;
> >  	phys_addr_t size;
> >  	enum memblock_flags flags;
> > -#ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP
> >  	int nid;
> > -#endif
> 
> I didn't look into the other changes very carefully, but enabling the
> memblock node map for all ARCHes feels a little radical. After all, many
> ARCHes don't even have NUMA support.
> 
> >  };
> >  
> >  /**
> > @@ -215,7 +213,6 @@ static inline bool memblock_is_nomap(struct memblock_region *m)
> >  	return m->flags & MEMBLOCK_NOMAP;
> >  }
> >  
> > -#ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP
> >  int memblock_search_pfn_nid(unsigned long pfn, unsigned long *start_pfn,
> >  			    unsigned long  *end_pfn);
> >  void __next_mem_pfn_range(int *idx, int nid, unsigned long *out_start_pfn,
> > @@ -234,7 +231,6 @@ void __next_mem_pfn_range(int *idx, int nid, unsigned long *out_start_pfn,
> >  #define for_each_mem_pfn_range(i, nid, p_start, p_end, p_nid)		\
> >  	for (i = -1, __next_mem_pfn_range(&i, nid, p_start, p_end, p_nid); \
> >  	     i >= 0; __next_mem_pfn_range(&i, nid, p_start, p_end, p_nid))
> > -#endif /* CONFIG_HAVE_MEMBLOCK_NODE_MAP */
> >  
> >  #ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
> >  void __next_mem_pfn_range_in_zone(u64 *idx, struct zone *zone,
> > @@ -310,7 +306,6 @@ void __next_mem_pfn_range_in_zone(u64 *idx, struct zone *zone,
> >  	for_each_mem_range_rev(i, &memblock.memory, &memblock.reserved,	\
> >  			       nid, flags, p_start, p_end, p_nid)
> >  
> > -#ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP
> >  int memblock_set_node(phys_addr_t base, phys_addr_t size,
> >  		      struct memblock_type *type, int nid);
> >  
> > @@ -323,16 +318,6 @@ static inline int memblock_get_region_node(const struct memblock_region *r)
> >  {
> >  	return r->nid;
> >  }
> > -#else
> > -static inline void memblock_set_region_node(struct memblock_region *r, int nid)
> > -{
> > -}
> > -
> > -static inline int memblock_get_region_node(const struct memblock_region *r)
> > -{
> > -	return 0;
> > -}
> > -#endif /* CONFIG_HAVE_MEMBLOCK_NODE_MAP */
> >  
> >  /* Flags for memblock allocation APIs */
> >  #define MEMBLOCK_ALLOC_ANYWHERE	(~(phys_addr_t)0)
> > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > index c54fb96cb1e6..368a45d4696a 100644
> > --- a/include/linux/mm.h
> > +++ b/include/linux/mm.h
> > @@ -2125,9 +2125,8 @@ static inline unsigned long get_num_physpages(void)
> >  	return phys_pages;

Re: [PATCH RFC] mm: remove CONFIG_HAVE_MEMBLOCK_NODE_MAP (was: Re: [PATCH v3 0/5] mm: Enable CONFIG_NODES_SPAN_OTHER_NODES by default for NUMA)

2020-03-31 Thread Baoquan He
On 04/01/20 at 12:56am, Mike Rapoport wrote:
> On Mon, Mar 30, 2020 at 11:58:43AM +0200, Michal Hocko wrote:
> > 
> > What would it take to make ia64 use HAVE_MEMBLOCK_NODE_MAP? I would
> > really love to see that thing go away. It is causing problems when
> > people try to use memblock api.
> 
> Well, it's a small patch in the end :)
> 
> Currently all NUMA architectures enable
> CONFIG_HAVE_MEMBLOCK_NODE_MAP and use free_area_init_nodes() to initialize
> the node and zone structures.

I did some investigation: there are nine ARCHes that have a NUMA config,
and among them alpha doesn't have HAVE_MEMBLOCK_NODE_MAP support. The
interesting thing is that there are two ARCHes, microblaze and riscv,
which have the HAVE_MEMBLOCK_NODE_MAP config but no NUMA config at all.
Apparently adding HAVE_MEMBLOCK_NODE_MAP to riscv and microblaze was not
carefully considered.

arch/alpha/Kconfig:config NUMA
arch/arm64/Kconfig:config NUMA
arch/ia64/Kconfig:config NUMA
arch/mips/Kconfig:config NUMA
arch/powerpc/Kconfig:config NUMA
arch/s390/Kconfig:config NUMA
arch/sh/mm/Kconfig:config NUMA
arch/sparc/Kconfig:config NUMA
arch/x86/Kconfig:config NUMA

From the above information, we can remove HAVE_MEMBLOCK_NODE_MAP and
replace it with CONFIG_NUMA. It sounds more sensible to store nid in
memblock only when NUMA support is enabled.


> diff --git a/include/linux/memblock.h b/include/linux/memblock.h
> index 079d17d96410..9de81112447e 100644
> --- a/include/linux/memblock.h
> +++ b/include/linux/memblock.h
> @@ -50,9 +50,7 @@ struct memblock_region {
>  	phys_addr_t base;
>  	phys_addr_t size;
>  	enum memblock_flags flags;
> -#ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP
>  	int nid;
> -#endif

I didn't look into the other changes very carefully, but enabling the
memblock node map for all ARCHes feels a little radical. After all, many
ARCHes don't even have NUMA support.

>  };
>  
>  /**
> @@ -215,7 +213,6 @@ static inline bool memblock_is_nomap(struct memblock_region *m)
>  	return m->flags & MEMBLOCK_NOMAP;
>  }
>  
> -#ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP
>  int memblock_search_pfn_nid(unsigned long pfn, unsigned long *start_pfn,
>  			    unsigned long  *end_pfn);
>  void __next_mem_pfn_range(int *idx, int nid, unsigned long *out_start_pfn,
> @@ -234,7 +231,6 @@ void __next_mem_pfn_range(int *idx, int nid, unsigned long *out_start_pfn,
>  #define for_each_mem_pfn_range(i, nid, p_start, p_end, p_nid)		\
>  	for (i = -1, __next_mem_pfn_range(&i, nid, p_start, p_end, p_nid); \
>  	     i >= 0; __next_mem_pfn_range(&i, nid, p_start, p_end, p_nid))
> -#endif /* CONFIG_HAVE_MEMBLOCK_NODE_MAP */
>  
>  #ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
>  void __next_mem_pfn_range_in_zone(u64 *idx, struct zone *zone,
> @@ -310,7 +306,6 @@ void __next_mem_pfn_range_in_zone(u64 *idx, struct zone *zone,
>  	for_each_mem_range_rev(i, &memblock.memory, &memblock.reserved,	\
>  			       nid, flags, p_start, p_end, p_nid)
>  
> -#ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP
>  int memblock_set_node(phys_addr_t base, phys_addr_t size,
>  		      struct memblock_type *type, int nid);
>  
> @@ -323,16 +318,6 @@ static inline int memblock_get_region_node(const struct memblock_region *r)
>  {
>  	return r->nid;
>  }
> -#else
> -static inline void memblock_set_region_node(struct memblock_region *r, int nid)
> -{
> -}
> -
> -static inline int memblock_get_region_node(const struct memblock_region *r)
> -{
> -	return 0;
> -}
> -#endif /* CONFIG_HAVE_MEMBLOCK_NODE_MAP */
>  
>  /* Flags for memblock allocation APIs */
>  #define MEMBLOCK_ALLOC_ANYWHERE	(~(phys_addr_t)0)
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index c54fb96cb1e6..368a45d4696a 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -2125,9 +2125,8 @@ static inline unsigned long get_num_physpages(void)
>   return phys_pages;
>  }
>  
> -#ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP
>  /*
> - * With CONFIG_HAVE_MEMBLOCK_NODE_MAP set, an architecture may initialise its
> + * Using memblock node mappings, an architecture may initialise its
>   * zones, allocate the backing mem_map and account for memory holes in a more
>   * architecture independent manner. This is a substitute for creating the
>   * zone_sizes[] and zholes_size[] arrays and passing them to
> @@ -2148,9 +2147,6 @@ static inline unsigned long get_num_physpages(void)
>   * registered physical page range.  Similarly
>   * sparse_memory_present_with_active_regions() calls memory_present() for
>   * each range when SPARSEMEM is enabled.
> - *
> - * See mm/page_alloc.c for more information on each function exposed by
> - * CONFIG_HAVE_MEMBLOCK_NODE_MAP.
>   */
>  extern void free_area_init_nodes(unsigned long *max_zone_pfn);
>  unsigned long node_map_pfn_alignment(void);
> @@ -2165,22 +2161,12 @@ extern void free_bootmem_with_active_regions(int nid,
>   unsigned long max_low_pfn);

[PATCH RFC] mm: remove CONFIG_HAVE_MEMBLOCK_NODE_MAP (was: Re: [PATCH v3 0/5] mm: Enable CONFIG_NODES_SPAN_OTHER_NODES by default for NUMA)

2020-03-31 Thread Mike Rapoport
On Mon, Mar 30, 2020 at 11:58:43AM +0200, Michal Hocko wrote:
> 
> What would it take to make ia64 use HAVE_MEMBLOCK_NODE_MAP? I would
> really love to see that thing go away. It is causing problems when
> people try to use memblock api.

Well, it's a small patch in the end :)

Currently all NUMA architectures enable CONFIG_HAVE_MEMBLOCK_NODE_MAP and
use free_area_init_nodes() to initialize the node and zone structures.

On the other hand, the systems that don't have
CONFIG_HAVE_MEMBLOCK_NODE_MAP use free_area_init() or free_area_init_node()
for this purpose.
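
For concreteness, the two styles look roughly like this (simplified sketches;
e.g. the real x86 zone_sizes_init() also caps ZONE_DMA, ZONE_DMA32 and
ZONE_HIGHMEM):

/* With CONFIG_HAVE_MEMBLOCK_NODE_MAP (e.g. x86), the arch only caps the
 * zones and the core derives spans and holes from the memblock node map: */
void __init zone_sizes_init(void)
{
	unsigned long max_zone_pfns[MAX_NR_ZONES] = { 0 };

	max_zone_pfns[ZONE_NORMAL] = max_pfn;
	free_area_init_nodes(max_zone_pfns);
}

/* Without it, the arch computes explicit per-zone sizes itself: */
void __init paging_init(void)
{
	unsigned long zones_size[MAX_NR_ZONES] = { 0 };

	zones_size[ZONE_NORMAL] = max_low_pfn - min_low_pfn;
	free_area_init(zones_size);
}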

With these assumptions, it's possible to select the functions that
calculate spanned and absent pages at runtime.
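
A minimal sketch of that runtime selection (spanned_pages_from_memblock() is
an invented name for illustration; the actual patch below does this
differently in detail):

static unsigned long __init zone_spanned_pages(int nid, int zone_type,
					       unsigned long *zones_size)
{
	/* legacy free_area_init()/free_area_init_node() callers pass
	 * explicit per-zone sizes ... */
	if (zones_size)
		return zones_size[zone_type];
	/* ... everybody else gets the span derived from the memblock
	 * node map */
	return spanned_pages_from_memblock(nid, zone_type);
}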

This patch builds for arm and x86-64 and boots on qemu-system for both.

From f907df987db4d6735c4940b30cfb4764fc0007d4 Mon Sep 17 00:00:00 2001
From: Mike Rapoport 
Date: Wed, 1 Apr 2020 00:27:17 +0300
Subject: [PATCH RFC] mm: remove CONFIG_HAVE_MEMBLOCK_NODE_MAP option

CONFIG_HAVE_MEMBLOCK_NODE_MAP is used to differentiate the initialization
of node and zone structures between systems that have a region-to-node
mapping in memblock and those that don't.

Currently all the NUMA architectures enable this option, and for non-NUMA
systems we can presume that all the memory belongs to node 0, so the
compile-time configuration option is not required.

Still, free_area_init_node() must keep a backward-compatible version
because its semantics with and without CONFIG_HAVE_MEMBLOCK_NODE_MAP are
different. Once all the architectures have been converted from
free_area_init() and free_area_init_node() to free_area_init_nodes(), the
entire compatibility layer can be dropped.
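
As a rough sketch of such a compatibility layer (names and signatures are
simplified here; calculate_node_totalpages_legacy() is invented for
illustration):

void __init free_area_init_node(int nid, unsigned long *zones_size,
				unsigned long node_start_pfn,
				unsigned long *zholes_size)
{
	pg_data_t *pgdat = NODE_DATA(nid);

	pgdat->node_id = nid;
	pgdat->node_start_pfn = node_start_pfn;
	if (zones_size)
		/* old semantics: trust the arch-provided arrays */
		calculate_node_totalpages_legacy(pgdat, zones_size,
						 zholes_size);
	else
		/* new semantics: derive everything from the memblock map */
		calculate_node_totalpages(pgdat);
	free_area_init_core(pgdat);
}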

Signed-off-by: Mike Rapoport 
---
 include/linux/memblock.h |  15 --
 include/linux/mm.h       |  16 +-
 include/linux/mmzone.h   |  13 -
 mm/memblock.c            |   9 +---
 mm/memory_hotplug.c      |   4 --
 mm/page_alloc.c          | 103 ++-
 6 files changed, 61 insertions(+), 99 deletions(-)

diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index 079d17d96410..9de81112447e 100644
--- a/include/linux/memblock.h
+++ b/include/linux/memblock.h
@@ -50,9 +50,7 @@ struct memblock_region {
 	phys_addr_t base;
 	phys_addr_t size;
 	enum memblock_flags flags;
-#ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP
 	int nid;
-#endif
 };
 
 /**
@@ -215,7 +213,6 @@ static inline bool memblock_is_nomap(struct memblock_region *m)
 	return m->flags & MEMBLOCK_NOMAP;
 }
 
-#ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP
 int memblock_search_pfn_nid(unsigned long pfn, unsigned long *start_pfn,
 			    unsigned long  *end_pfn);
 void __next_mem_pfn_range(int *idx, int nid, unsigned long *out_start_pfn,
@@ -234,7 +231,6 @@ void __next_mem_pfn_range(int *idx, int nid, unsigned long *out_start_pfn,
 #define for_each_mem_pfn_range(i, nid, p_start, p_end, p_nid)		\
 	for (i = -1, __next_mem_pfn_range(&i, nid, p_start, p_end, p_nid); \
 	     i >= 0; __next_mem_pfn_range(&i, nid, p_start, p_end, p_nid))
-#endif /* CONFIG_HAVE_MEMBLOCK_NODE_MAP */
 
 #ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
 void __next_mem_pfn_range_in_zone(u64 *idx, struct zone *zone,
@@ -310,7 +306,6 @@ void __next_mem_pfn_range_in_zone(u64 *idx, struct zone *zone,
 	for_each_mem_range_rev(i, &memblock.memory, &memblock.reserved,	\
 			       nid, flags, p_start, p_end, p_nid)
 
-#ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP
 int memblock_set_node(phys_addr_t base, phys_addr_t size,
 		      struct memblock_type *type, int nid);
 
@@ -323,16 +318,6 @@ static inline int memblock_get_region_node(const struct memblock_region *r)
 {
 	return r->nid;
 }
-#else
-static inline void memblock_set_region_node(struct memblock_region *r, int nid)
-{
-}
-
-static inline int memblock_get_region_node(const struct memblock_region *r)
-{
-	return 0;
-}
-#endif /* CONFIG_HAVE_MEMBLOCK_NODE_MAP */
 
 /* Flags for memblock allocation APIs */
 #define MEMBLOCK_ALLOC_ANYWHERE	(~(phys_addr_t)0)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index c54fb96cb1e6..368a45d4696a 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2125,9 +2125,8 @@ static inline unsigned long get_num_physpages(void)
 	return phys_pages;
 }
 
-#ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP
 /*
- * With CONFIG_HAVE_MEMBLOCK_NODE_MAP set, an architecture may initialise its
+ * Using memblock node mappings, an architecture may initialise its
  * zones, allocate the backing mem_map and account for memory holes in a more
  * architecture independent manner. This is a substitute for creating the
  * zone_sizes[] and zholes_size[] arrays and passing them to
@@ -2148,9 +2147,6 @@ static inline unsigned long get_num_physpages(void)
  * registered physical page range.  Similarly
  * sparse_memory_present_with_active_regions() calls memory_present() for
  * each range when SPARSEMEM is enabled.
- *
- * See mm/page_alloc.c for more information on each function exposed by
- * CONFIG_HAVE_MEMBLOCK_NODE_MAP.