Re: [PATCH v3 5/6] mm/cma: remove MIGRATE_CMA

2016-06-28 Thread Joonsoo Kim
On Mon, Jun 27, 2016 at 11:46:39AM +0200, Vlastimil Babka wrote:
> On 05/26/2016 08:22 AM, js1...@gmail.com wrote:
> >From: Joonsoo Kim 
> >
> >Now, all reserved pages for a CMA region belong to ZONE_CMA and there
> >are no other types of pages there. Therefore, we no longer need
> >MIGRATE_CMA to distinguish CMA pages from ordinary pages and handle
> >them differently. Remove MIGRATE_CMA.
> >
> >Unfortunately, this patch makes the free CMA counter incorrect, because
> >we count it when pages are on the MIGRATE_CMA list. That will be fixed
> >by the next patch. I could squash the next patch into this one, but
> >that would make the changes complicated and hard to review, so I keep
> >them separate.
> 
> Doesn't sound like a big deal.

Okay.

> 
> >Signed-off-by: Joonsoo Kim 
> 
> [...]
> 
> >@@ -7442,14 +7401,14 @@ int alloc_contig_range(unsigned long start, unsigned long end,
> >  * allocator removing them from the buddy system.  This way
> >  * page allocator will never consider using them.
> >  *
> >- * This lets us mark the pageblocks back as
> >- * MIGRATE_CMA/MIGRATE_MOVABLE so that free pages in the
> >- * aligned range but not in the unaligned, original range are
> >- * put back to page allocator so that buddy can use them.
> >+ * This lets us mark the pageblocks back as MIGRATE_MOVABLE
> >+ * so that free pages in the aligned range but not in the
> >+ * unaligned, original range are put back to page allocator
> >+ * so that buddy can use them.
> >  */
> >
> > ret = start_isolate_page_range(pfn_max_align_down(start),
> >-   pfn_max_align_up(end), migratetype,
> >+   pfn_max_align_up(end), MIGRATE_MOVABLE,
> >false);
> > if (ret)
> > return ret;
> >@@ -7528,7 +7487,7 @@ int alloc_contig_range(unsigned long start, unsigned long end,
> >
> > done:
> > undo_isolate_page_range(pfn_max_align_down(start),
> >-pfn_max_align_up(end), migratetype);
> >+pfn_max_align_up(end), MIGRATE_MOVABLE);
> > return ret;
> > }
> 
> Looks like all callers of {start,undo}_isolate_page_range() now pass
> MIGRATE_MOVABLE, so the migratetype parameter could be removed.

You're right. Will do in next spin.

Thanks.


Re: [PATCH v3 5/6] mm/cma: remove MIGRATE_CMA

2016-06-27 Thread Vlastimil Babka

On 05/26/2016 08:22 AM, js1...@gmail.com wrote:

From: Joonsoo Kim 

Now, all reserved pages for a CMA region belong to ZONE_CMA and there
are no other types of pages there. Therefore, we no longer need
MIGRATE_CMA to distinguish CMA pages from ordinary pages and handle
them differently. Remove MIGRATE_CMA.

Unfortunately, this patch makes the free CMA counter incorrect, because
we count it when pages are on the MIGRATE_CMA list. That will be fixed
by the next patch. I could squash the next patch into this one, but
that would make the changes complicated and hard to review, so I keep
them separate.


Doesn't sound like a big deal.


Signed-off-by: Joonsoo Kim 


[...]


@@ -7442,14 +7401,14 @@ int alloc_contig_range(unsigned long start, unsigned long end,
 * allocator removing them from the buddy system.  This way
 * page allocator will never consider using them.
 *
-* This lets us mark the pageblocks back as
-* MIGRATE_CMA/MIGRATE_MOVABLE so that free pages in the
-* aligned range but not in the unaligned, original range are
-* put back to page allocator so that buddy can use them.
+* This lets us mark the pageblocks back as MIGRATE_MOVABLE
+* so that free pages in the aligned range but not in the
+* unaligned, original range are put back to page allocator
+* so that buddy can use them.
 */

ret = start_isolate_page_range(pfn_max_align_down(start),
-  pfn_max_align_up(end), migratetype,
+  pfn_max_align_up(end), MIGRATE_MOVABLE,
   false);
if (ret)
return ret;
@@ -7528,7 +7487,7 @@ int alloc_contig_range(unsigned long start, unsigned long end,

 done:
undo_isolate_page_range(pfn_max_align_down(start),
-   pfn_max_align_up(end), migratetype);
+   pfn_max_align_up(end), MIGRATE_MOVABLE);
return ret;
 }


Looks like all callers of {start,undo}_isolate_page_range() now pass 
MIGRATE_MOVABLE, so the migratetype parameter could be removed.




Re: [PATCH v3 5/6] mm/cma: remove MIGRATE_CMA

2016-05-26 Thread Joonsoo Kim
On Fri, May 27, 2016 at 09:42:24AM +0800, Chen Feng wrote:
> Hi Joonsoo,
> > -/* Free whole pageblock and set its migration type to MIGRATE_CMA. */
> > +/* Free whole pageblock and set its migration type to MIGRATE_MOVABLE. */
> >  void __init init_cma_reserved_pageblock(struct page *page)
> >  {
> > unsigned i = pageblock_nr_pages;
> > @@ -1605,7 +1602,7 @@ void __init init_cma_reserved_pageblock(struct page *page)
> >  
> > adjust_present_page_count(page, pageblock_nr_pages);
> >  
> > -   set_pageblock_migratetype(page, MIGRATE_CMA);
> > +   set_pageblock_migratetype(page, MIGRATE_MOVABLE);
> 
> I have a question here: if the ZONE_CMA pages are all movable,
> 
> then unmovable allocations will also be able to use CMA memory. Is this right?

No, the previous patch moved the CMA memory into a separate zone,
ZONE_CMA. We allow that zone only when the gfp mask is
GFP_HIGHUSER_MOVABLE, so unmovable allocations cannot be served from
CMA memory.

Thanks.


Re: [PATCH v3 5/6] mm/cma: remove MIGRATE_CMA

2016-05-26 Thread Chen Feng
Hi Joonsoo,

On 2016/5/26 14:22, js1...@gmail.com wrote:
> From: Joonsoo Kim 
> 
> Now, all reserved pages for a CMA region belong to ZONE_CMA and there
> are no other types of pages there. Therefore, we no longer need
> MIGRATE_CMA to distinguish CMA pages from ordinary pages and handle
> them differently. Remove MIGRATE_CMA.
> 
> Unfortunately, this patch makes the free CMA counter incorrect, because
> we count it when pages are on the MIGRATE_CMA list. That will be fixed
> by the next patch. I could squash the next patch into this one, but
> that would make the changes complicated and hard to review, so I keep
> them separate.
> 
> Signed-off-by: Joonsoo Kim 
> ---
>  include/linux/gfp.h|  3 +-
>  include/linux/mmzone.h | 22 -
>  include/linux/vmstat.h |  8 -
>  mm/cma.c   |  2 +-
>  mm/compaction.c| 10 ++
>  mm/hugetlb.c   |  2 +-
>  mm/page_alloc.c| 87 +-
>  mm/page_isolation.c|  5 ++-
>  mm/vmstat.c|  5 +--
>  9 files changed, 31 insertions(+), 113 deletions(-)
> 
> diff --git a/include/linux/gfp.h b/include/linux/gfp.h
> index 4d6c008..1a3b869 100644
> --- a/include/linux/gfp.h
> +++ b/include/linux/gfp.h
> @@ -559,8 +559,7 @@ static inline bool pm_suspended_storage(void)
>  
>  #if (defined(CONFIG_MEMORY_ISOLATION) && defined(CONFIG_COMPACTION)) || defined(CONFIG_CMA)
>  /* The below functions must be run on a range from a single zone. */
> -extern int alloc_contig_range(unsigned long start, unsigned long end,
> -   unsigned migratetype);
> +extern int alloc_contig_range(unsigned long start, unsigned long end);
>  extern void free_contig_range(unsigned long pfn, unsigned nr_pages);
>  #endif
>  
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index 54c92a6..236d0bd 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -41,22 +41,6 @@ enum {
>   MIGRATE_RECLAIMABLE,
>   MIGRATE_PCPTYPES,   /* the number of types on the pcp lists */
>   MIGRATE_HIGHATOMIC = MIGRATE_PCPTYPES,
> -#ifdef CONFIG_CMA
> - /*
> -  * MIGRATE_CMA migration type is designed to mimic the way
> -  * ZONE_MOVABLE works.  Only movable pages can be allocated
> -  * from MIGRATE_CMA pageblocks and page allocator never
> -  * implicitly change migration type of MIGRATE_CMA pageblock.
> -  *
> -  * The way to use it is to change migratetype of a range of
> -  * pageblocks to MIGRATE_CMA which can be done by
> -  * __free_pageblock_cma() function.  What is important though
> -  * is that a range of pageblocks must be aligned to
> -  * MAX_ORDER_NR_PAGES should biggest page be bigger then
> -  * a single pageblock.
> -  */
> - MIGRATE_CMA,
> -#endif
>  #ifdef CONFIG_MEMORY_ISOLATION
>   MIGRATE_ISOLATE,/* can't allocate from here */
>  #endif
> @@ -66,12 +50,6 @@ enum {
>  /* In mm/page_alloc.c; keep in sync also with show_migration_types() there */
>  extern char * const migratetype_names[MIGRATE_TYPES];
>  
> -#ifdef CONFIG_CMA
> -#  define is_migrate_cma(migratetype) unlikely((migratetype) == MIGRATE_CMA)
> -#else
> -#  define is_migrate_cma(migratetype) false
> -#endif
> -
>  #define for_each_migratetype_order(order, type) \
>   for (order = 0; order < MAX_ORDER; order++) \
>   for (type = 0; type < MIGRATE_TYPES; type++)
> diff --git a/include/linux/vmstat.h b/include/linux/vmstat.h
> index 0aa613d..e0eb3e5 100644
> --- a/include/linux/vmstat.h
> +++ b/include/linux/vmstat.h
> @@ -264,14 +264,6 @@ static inline void drain_zonestat(struct zone *zone,
>   struct per_cpu_pageset *pset) { }
>  #endif   /* CONFIG_SMP */
>  
> -static inline void __mod_zone_freepage_state(struct zone *zone, int nr_pages,
> -  int migratetype)
> -{
> - __mod_zone_page_state(zone, NR_FREE_PAGES, nr_pages);
> - if (is_migrate_cma(migratetype))
> - __mod_zone_page_state(zone, NR_FREE_CMA_PAGES, nr_pages);
> -}
> -
>  extern const char * const vmstat_text[];
>  
>  #endif /* _LINUX_VMSTAT_H */
> diff --git a/mm/cma.c b/mm/cma.c
> index 8684f50..bd436e4 100644
> --- a/mm/cma.c
> +++ b/mm/cma.c
> @@ -444,7 +444,7 @@ struct page *cma_alloc(struct cma *cma, size_t count, 
> unsigned int align)
>  
>   pfn = cma->base_pfn + (bitmap_no << cma->order_per_bit);
>   mutex_lock(_mutex);
> - ret = alloc_contig_range(pfn, pfn + count, MIGRATE_CMA);
> + ret = alloc_contig_range(pfn, pfn + count);
>   mutex_unlock(_mutex);
>   if (ret == 0) {
>   page = pfn_to_page(pfn);
> diff --git a/mm/compaction.c b/mm/compaction.c
> index 1427366..acb1d1a 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -76,7 +76,7 @@ static void map_pages(struct list_head *list)
>  
>  static inline bool migrate_async_suitable(int migratetype)
>  {
> -	return is_migrate_cma(migratetype) || migratetype == MIGRATE_MOVABLE;
> +	return migratetype == MIGRATE_MOVABLE;
>  }

[PATCH v3 5/6] mm/cma: remove MIGRATE_CMA

2016-05-26 Thread js1304
From: Joonsoo Kim 

Now, all reserved pages for a CMA region belong to ZONE_CMA and there
are no other types of pages there. Therefore, we no longer need
MIGRATE_CMA to distinguish CMA pages from ordinary pages and handle
them differently. Remove MIGRATE_CMA.

Unfortunately, this patch makes the free CMA counter incorrect, because
we count it when pages are on the MIGRATE_CMA list. That will be fixed
by the next patch. I could squash the next patch into this one, but
that would make the changes complicated and hard to review, so I keep
them separate.

Signed-off-by: Joonsoo Kim 
---
 include/linux/gfp.h|  3 +-
 include/linux/mmzone.h | 22 -
 include/linux/vmstat.h |  8 -
 mm/cma.c   |  2 +-
 mm/compaction.c| 10 ++
 mm/hugetlb.c   |  2 +-
 mm/page_alloc.c| 87 +-
 mm/page_isolation.c|  5 ++-
 mm/vmstat.c|  5 +--
 9 files changed, 31 insertions(+), 113 deletions(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 4d6c008..1a3b869 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -559,8 +559,7 @@ static inline bool pm_suspended_storage(void)
 
 #if (defined(CONFIG_MEMORY_ISOLATION) && defined(CONFIG_COMPACTION)) || defined(CONFIG_CMA)
 /* The below functions must be run on a range from a single zone. */
-extern int alloc_contig_range(unsigned long start, unsigned long end,
- unsigned migratetype);
+extern int alloc_contig_range(unsigned long start, unsigned long end);
 extern void free_contig_range(unsigned long pfn, unsigned nr_pages);
 #endif
 
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 54c92a6..236d0bd 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -41,22 +41,6 @@ enum {
MIGRATE_RECLAIMABLE,
MIGRATE_PCPTYPES,   /* the number of types on the pcp lists */
MIGRATE_HIGHATOMIC = MIGRATE_PCPTYPES,
-#ifdef CONFIG_CMA
-   /*
-* MIGRATE_CMA migration type is designed to mimic the way
-* ZONE_MOVABLE works.  Only movable pages can be allocated
-* from MIGRATE_CMA pageblocks and page allocator never
-* implicitly change migration type of MIGRATE_CMA pageblock.
-*
-* The way to use it is to change migratetype of a range of
-* pageblocks to MIGRATE_CMA which can be done by
-* __free_pageblock_cma() function.  What is important though
-* is that a range of pageblocks must be aligned to
-* MAX_ORDER_NR_PAGES should biggest page be bigger then
-* a single pageblock.
-*/
-   MIGRATE_CMA,
-#endif
 #ifdef CONFIG_MEMORY_ISOLATION
MIGRATE_ISOLATE,/* can't allocate from here */
 #endif
@@ -66,12 +50,6 @@ enum {
 /* In mm/page_alloc.c; keep in sync also with show_migration_types() there */
 extern char * const migratetype_names[MIGRATE_TYPES];
 
-#ifdef CONFIG_CMA
-#  define is_migrate_cma(migratetype) unlikely((migratetype) == MIGRATE_CMA)
-#else
-#  define is_migrate_cma(migratetype) false
-#endif
-
 #define for_each_migratetype_order(order, type) \
for (order = 0; order < MAX_ORDER; order++) \
for (type = 0; type < MIGRATE_TYPES; type++)
diff --git a/include/linux/vmstat.h b/include/linux/vmstat.h
index 0aa613d..e0eb3e5 100644
--- a/include/linux/vmstat.h
+++ b/include/linux/vmstat.h
@@ -264,14 +264,6 @@ static inline void drain_zonestat(struct zone *zone,
struct per_cpu_pageset *pset) { }
 #endif /* CONFIG_SMP */
 
-static inline void __mod_zone_freepage_state(struct zone *zone, int nr_pages,
-int migratetype)
-{
-   __mod_zone_page_state(zone, NR_FREE_PAGES, nr_pages);
-   if (is_migrate_cma(migratetype))
-   __mod_zone_page_state(zone, NR_FREE_CMA_PAGES, nr_pages);
-}
-
 extern const char * const vmstat_text[];
 
 #endif /* _LINUX_VMSTAT_H */
diff --git a/mm/cma.c b/mm/cma.c
index 8684f50..bd436e4 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -444,7 +444,7 @@ struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align)
 
	pfn = cma->base_pfn + (bitmap_no << cma->order_per_bit);
	mutex_lock(&cma_mutex);
-	ret = alloc_contig_range(pfn, pfn + count, MIGRATE_CMA);
+	ret = alloc_contig_range(pfn, pfn + count);
	mutex_unlock(&cma_mutex);
if (ret == 0) {
page = pfn_to_page(pfn);
diff --git a/mm/compaction.c b/mm/compaction.c
index 1427366..acb1d1a 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -76,7 +76,7 @@ static void map_pages(struct list_head *list)
 
 static inline bool migrate_async_suitable(int migratetype)
 {
-   return is_migrate_cma(migratetype) || migratetype == MIGRATE_MOVABLE;
+   return migratetype == MIGRATE_MOVABLE;
 }
 
 #ifdef CONFIG_COMPACTION
@@ -953,7 +953,7 @@ static bool suitable_migration_target(struct page *page)
