Re: [PATCH 3/9] mm: alloc_contig_range() added

2011-10-18 Thread Mel Gorman
On Thu, Oct 06, 2011 at 03:54:43PM +0200, Marek Szyprowski wrote:
 From: Michal Nazarewicz m.nazarew...@samsung.com
 
 This commit adds the alloc_contig_range() function which tries
 to allocate given range of pages.  It tries to migrate all
 already allocated pages that fall in the range thus freeing them.
 Once all pages in the range are freed they are removed from the
 buddy system thus allocated for the caller to use.
 
 Signed-off-by: Michal Nazarewicz m.nazarew...@samsung.com
 Signed-off-by: Kyungmin Park kyungmin.p...@samsung.com
 [m.szyprowski: renamed some variables for easier code reading]
 Signed-off-by: Marek Szyprowski m.szyprow...@samsung.com
 CC: Michal Nazarewicz min...@mina86.com
 Acked-by: Arnd Bergmann a...@arndb.de
 ---
  include/linux/page-isolation.h |2 +
  mm/page_alloc.c|  148 
 
  2 files changed, 150 insertions(+), 0 deletions(-)
 
 diff --git a/include/linux/page-isolation.h b/include/linux/page-isolation.h
 index b9fc428..774ecec 100644
 --- a/include/linux/page-isolation.h
 +++ b/include/linux/page-isolation.h
 @@ -36,6 +36,8 @@ extern void unset_migratetype_isolate(struct page *page);
  /* The below functions must be run on a range from a single zone. */
  extern unsigned long alloc_contig_freed_pages(unsigned long start,
 unsigned long end, gfp_t flag);
 +extern int alloc_contig_range(unsigned long start, unsigned long end,
 +   gfp_t flags);
  extern void free_contig_pages(unsigned long pfn, unsigned nr_pages);
  
  /*
 diff --git a/mm/page_alloc.c b/mm/page_alloc.c
 index fbfb920..8010854 100644
 --- a/mm/page_alloc.c
 +++ b/mm/page_alloc.c
 @@ -5773,6 +5773,154 @@ void free_contig_pages(unsigned long pfn, unsigned nr_pages)
   }
  }
  
 +static unsigned long pfn_to_maxpage(unsigned long pfn)
 +{
 + return pfn & ~(MAX_ORDER_NR_PAGES - 1);
 +}
 +

pfn_to_maxpage is a very confusing name here. It would be preferable to
create a MAX_ORDER_MASK that you apply directly.

Maybe something like SECTION_ALIGN_UP and SECTION_ALIGN_DOWN.
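
A sketch of the kind of helpers being suggested (the names are illustrative,
not existing kernel macros):

	/*
	 * Illustrative only: MAX_ORDER-aligned rounding helpers, named after
	 * the SECTION_ALIGN_UP/SECTION_ALIGN_DOWN pattern.
	 */
	#define MAX_ORDER_MASK			(MAX_ORDER_NR_PAGES - 1)
	#define MAX_ORDER_ALIGN_UP(pfn)		ALIGN((pfn), MAX_ORDER_NR_PAGES)
	#define MAX_ORDER_ALIGN_DOWN(pfn)	((pfn) & ~MAX_ORDER_MASK)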

 +static unsigned long pfn_to_maxpage_up(unsigned long pfn)
 +{
 + return ALIGN(pfn, MAX_ORDER_NR_PAGES);
 +}
 +
 +#define MIGRATION_RETRY  5
 +static int __alloc_contig_migrate_range(unsigned long start, unsigned long end)
 +{
 + int migration_failed = 0, ret;
 + unsigned long pfn = start;
 +
 + /*
 +  * Some code borrowed from KAMEZAWA Hiroyuki's
 +  * __alloc_contig_pages().
 +  */
 +

There is no need to put a comment like this here. Credit him in the
changelog.

 + /* drop all pages in pagevec and pcp list */
 + lru_add_drain_all();
 + drain_all_pages();
 +

Very similar to migrate_prep(). drain_all_pages should not be required
at this point.
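
For comparison, migrate_prep() is (roughly, from memory) nothing more than the
LRU drain:

	/* mm/migrate.c, approximately: */
	int migrate_prep(void)
	{
		/*
		 * Clear the LRU pagevecs so pages can be isolated.  Pages
		 * moved onto the LRU afterwards will simply fail to migrate
		 * like any other busy page.
		 */
		lru_add_drain_all();

		return 0;
	}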

 + for (;;) {
 + pfn = scan_lru_pages(pfn, end);

scan_lru_pages() is inefficient, this is going to be costly.
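
For context, scan_lru_pages() (the memory-hotplug helper) is essentially a
linear PFN walk, which is why repeated calls over a large range add up.  A
simplified sketch:

	/* Simplified sketch: walk PFNs until the first LRU page is found. */
	static unsigned long scan_lru_pages(unsigned long start, unsigned long end)
	{
		unsigned long pfn;

		for (pfn = start; pfn < end; pfn++) {
			if (!pfn_valid(pfn))
				continue;
			if (PageLRU(pfn_to_page(pfn)))
				return pfn;
		}
		return 0;	/* no LRU page in [start, end) */
	}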

 + if (!pfn || pfn >= end)
 + break;
 +
 + ret = do_migrate_range(pfn, end);
 + if (!ret) {
 + migration_failed = 0;
 + } else if (ret != -EBUSY
 + || ++migration_failed >= MIGRATION_RETRY) {
 + return ret;
 + } else {
 + /* There are unstable pages.on pagevec. */
 + lru_add_drain_all();
 + /*
 +  * there may be pages on pcplist before
 +  * we mark the range as ISOLATED.
 +  */
 + drain_all_pages();
 + }
 + cond_resched();
 + }
 +
 + if (!migration_failed) {
 + /* drop all pages in pagevec and pcp list */
 + lru_add_drain_all();
 + drain_all_pages();
 + }
 +
 + /* Make sure all pages are isolated */
 + if (WARN_ON(test_pages_isolated(start, end)))
 + return -EBUSY;
 +

In some respects, this is very similar to mm/compaction.c's compact_zone().
They could have shared significant code if you reworked compact_zone() to
work on ranges of memory, with whole-zone compaction expressed as operating
on zone->zone_start_pfn to zone->zone_start_pfn + zone->spanned_pages. The
compaction code is 
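
Very roughly, the idea would be something like this (hypothetical signatures,
not existing code):

	/*
	 * Hypothetical sketch: a range-based compaction entry point, with
	 * whole-zone compaction as the degenerate case.
	 */
	static int compact_pfn_range(struct zone *zone,
				     unsigned long start_pfn, unsigned long end_pfn);

	static int compact_whole_zone(struct zone *zone)
	{
		return compact_pfn_range(zone, zone->zone_start_pfn,
					 zone->zone_start_pfn + zone->spanned_pages);
	}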

 + return 0;
 +}
 +
 +/**
 + * alloc_contig_range() -- tries to allocate given range of pages
 + * @start:   start PFN to allocate
 + * @end: one-past-the-last PFN to allocate
 + * @flags:   flags passed to alloc_contig_freed_pages().
 + *
 + * The PFN range does not have to be pageblock or MAX_ORDER_NR_PAGES
 + * aligned, hovewer it's callers responsibility to guarantee that we
 + * are the only thread that changes migrate type of pageblocks the
 + * pages fall in.
 + *
 + * Returns zero on success or negative error code.  On success all
 + * pages which PFN is in (start, end) are allocated for the caller and
 + * need to be freed with free_contig_pages().
 + */
 +int 

Re: [PATCH 3/9] mm: alloc_contig_range() added

2011-10-14 Thread Andrew Morton
On Thu, 06 Oct 2011 15:54:43 +0200
Marek Szyprowski m.szyprow...@samsung.com wrote:

 From: Michal Nazarewicz m.nazarew...@samsung.com
 
 This commit adds the alloc_contig_range() function which tries
 to allocate given range of pages.  It tries to migrate all
 already allocated pages that fall in the range thus freeing them.
 Once all pages in the range are freed they are removed from the
 buddy system thus allocated for the caller to use.
 
 Signed-off-by: Michal Nazarewicz m.nazarew...@samsung.com
 Signed-off-by: Kyungmin Park kyungmin.p...@samsung.com
 [m.szyprowski: renamed some variables for easier code reading]
 Signed-off-by: Marek Szyprowski m.szyprow...@samsung.com
 CC: Michal Nazarewicz min...@mina86.com
 Acked-by: Arnd Bergmann a...@arndb.de

Where-is: Mel Gorman m...@csn.ul.ie

 +#define MIGRATION_RETRY  5
 +static int __alloc_contig_migrate_range(unsigned long start, unsigned long end)
 +{
 + int migration_failed = 0, ret;
 + unsigned long pfn = start;
 +
 + /*
 +  * Some code borrowed from KAMEZAWA Hiroyuki's
 +  * __alloc_contig_pages().
 +  */
 +
 + /* drop all pages in pagevec and pcp list */
 + lru_add_drain_all();
 + drain_all_pages();

These operations are sometimes wrong ;) Have you confirmed that we
really need to perform them here?  If so, a little comment explaining
why we're using them here would be good.
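
One possible shape for such a comment (wording illustrative only):

	/*
	 * Pages sitting on per-CPU pagevecs or pcp free lists are invisible
	 * to the isolation and migration code below: an LRU page parked on a
	 * pagevec cannot be isolated, and a page on a pcp list is not in the
	 * buddy allocator where test_pages_isolated() expects to find it.
	 * Flush both so the range can actually be isolated.
	 */
	lru_add_drain_all();
	drain_all_pages();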

 + for (;;) {
 + pfn = scan_lru_pages(pfn, end);
 + if (!pfn || pfn >= end)
 + break;
 +
 + ret = do_migrate_range(pfn, end);
 + if (!ret) {
 + migration_failed = 0;
 + } else if (ret != -EBUSY
 + || ++migration_failed >= MIGRATION_RETRY) {

Sigh, magic numbers.

Have you ever seen this retry loop actually expire in testing?

migrate_pages() tries ten times.  This code tries five times.  Is there
any science to all of this?
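
If the intent is that the two retry counts be related, making that explicit
would at least document where the number comes from (illustrative only):

	/*
	 * Number of times do_migrate_range() is retried on -EBUSY before
	 * giving up.  (For comparison, migrate_pages() internally retries
	 * 10 times.)
	 */
	#define MIGRATION_RETRY	5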

 + return ret;
 + } else {
 + /* There are unstable pages.on pagevec. */
 + lru_add_drain_all();
 + /*
 +  * there may be pages on pcplist before
 +  * we mark the range as ISOLATED.
 +  */
 + drain_all_pages();
 + }
 + cond_resched();
 + }
 +
 + if (!migration_failed) {
 + /* drop all pages in pagevec and pcp list */
 + lru_add_drain_all();
 + drain_all_pages();

hm.

 + }
 +
 + /* Make sure all pages are isolated */
 + if (WARN_ON(test_pages_isolated(start, end)))
 + return -EBUSY;
 +
 + return 0;
 +}
 +
 +/**
 + * alloc_contig_range() -- tries to allocate given range of pages
 + * @start:   start PFN to allocate
 + * @end: one-past-the-last PFN to allocate
 + * @flags:   flags passed to alloc_contig_freed_pages().
 + *
 + * The PFN range does not have to be pageblock or MAX_ORDER_NR_PAGES
 + * aligned, hovewer it's callers responsibility to guarantee that we

however

however it is the caller's responsibility..

 + * are the only thread that changes migrate type of pageblocks the
 + * pages fall in.
 + *
 + * Returns zero on success or negative error code.  On success all
 + * pages which PFN is in (start, end) are allocated for the caller and
 + * need to be freed with free_contig_pages().
 + */

 ...
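
For context, a caller of this interface would look roughly like the following
(hedged sketch; the PFN range, gfp flags and error handling are illustrative):

	/* Illustrative caller: claim nr_pages contiguous pages at start_pfn. */
	static int example_claim_range(unsigned long start_pfn, unsigned long nr_pages)
	{
		unsigned long end_pfn = start_pfn + nr_pages;
		int ret;

		ret = alloc_contig_range(start_pfn, end_pfn, GFP_KERNEL);
		if (ret)
			return ret;	/* e.g. -EBUSY if pages could not be migrated away */

		/* ... use pfn_to_page(start_pfn) .. pfn_to_page(end_pfn - 1) ... */

		free_contig_pages(start_pfn, nr_pages);
		return 0;
	}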




[PATCH 3/9] mm: alloc_contig_range() added

2011-10-06 Thread Marek Szyprowski
From: Michal Nazarewicz m.nazarew...@samsung.com

This commit adds the alloc_contig_range() function which tries
to allocate given range of pages.  It tries to migrate all
already allocated pages that fall in the range thus freeing them.
Once all pages in the range are freed they are removed from the
buddy system thus allocated for the caller to use.

Signed-off-by: Michal Nazarewicz m.nazarew...@samsung.com
Signed-off-by: Kyungmin Park kyungmin.p...@samsung.com
[m.szyprowski: renamed some variables for easier code reading]
Signed-off-by: Marek Szyprowski m.szyprow...@samsung.com
CC: Michal Nazarewicz min...@mina86.com
Acked-by: Arnd Bergmann a...@arndb.de
---
 include/linux/page-isolation.h |2 +
 mm/page_alloc.c|  148 
 2 files changed, 150 insertions(+), 0 deletions(-)

diff --git a/include/linux/page-isolation.h b/include/linux/page-isolation.h
index b9fc428..774ecec 100644
--- a/include/linux/page-isolation.h
+++ b/include/linux/page-isolation.h
@@ -36,6 +36,8 @@ extern void unset_migratetype_isolate(struct page *page);
 /* The below functions must be run on a range from a single zone. */
 extern unsigned long alloc_contig_freed_pages(unsigned long start,
  unsigned long end, gfp_t flag);
+extern int alloc_contig_range(unsigned long start, unsigned long end,
+ gfp_t flags);
 extern void free_contig_pages(unsigned long pfn, unsigned nr_pages);
 
 /*
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index fbfb920..8010854 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5773,6 +5773,154 @@ void free_contig_pages(unsigned long pfn, unsigned nr_pages)
}
 }
 
+static unsigned long pfn_to_maxpage(unsigned long pfn)
+{
+   return pfn & ~(MAX_ORDER_NR_PAGES - 1);
+}
+
+static unsigned long pfn_to_maxpage_up(unsigned long pfn)
+{
+   return ALIGN(pfn, MAX_ORDER_NR_PAGES);
+}
+
+#define MIGRATION_RETRY	5
+static int __alloc_contig_migrate_range(unsigned long start, unsigned long end)
+{
+   int migration_failed = 0, ret;
+   unsigned long pfn = start;
+
+   /*
+* Some code borrowed from KAMEZAWA Hiroyuki's
+* __alloc_contig_pages().
+*/
+
+   /* drop all pages in pagevec and pcp list */
+   lru_add_drain_all();
+   drain_all_pages();
+
+   for (;;) {
+   pfn = scan_lru_pages(pfn, end);
+   if (!pfn || pfn >= end)
+   break;
+
+   ret = do_migrate_range(pfn, end);
+   if (!ret) {
+   migration_failed = 0;
+   } else if (ret != -EBUSY
+   || ++migration_failed >= MIGRATION_RETRY) {
+   return ret;
+   } else {
+   /* There are unstable pages.on pagevec. */
+   lru_add_drain_all();
+   /*
+* there may be pages on pcplist before
+* we mark the range as ISOLATED.
+*/
+   drain_all_pages();
+   }
+   cond_resched();
+   }
+
+   if (!migration_failed) {
+   /* drop all pages in pagevec and pcp list */
+   lru_add_drain_all();
+   drain_all_pages();
+   }
+
+   /* Make sure all pages are isolated */
+   if (WARN_ON(test_pages_isolated(start, end)))
+   return -EBUSY;
+
+   return 0;
+}
+
+/**
+ * alloc_contig_range() -- tries to allocate given range of pages
+ * @start: start PFN to allocate
+ * @end:   one-past-the-last PFN to allocate
+ * @flags: flags passed to alloc_contig_freed_pages().
+ *
+ * The PFN range does not have to be pageblock or MAX_ORDER_NR_PAGES
+ * aligned, hovewer it's callers responsibility to guarantee that we
+ * are the only thread that changes migrate type of pageblocks the
+ * pages fall in.
+ *
+ * Returns zero on success or negative error code.  On success all
+ * pages which PFN is in (start, end) are allocated for the caller and
+ * need to be freed with free_contig_pages().
+ */
+int alloc_contig_range(unsigned long start, unsigned long end,
+  gfp_t flags)
+{
+   unsigned long outer_start, outer_end;
+   int ret;
+
+   /*
+* What we do here is we mark all pageblocks in range as
+* MIGRATE_ISOLATE.  Because of the way page allocator work, we
+* align the range to MAX_ORDER pages so that page allocator
+* won't try to merge buddies from different pageblocks and
+* change MIGRATE_ISOLATE to some other migration type.
+*
+* Once the pageblocks are marked as MIGRATE_ISOLATE, we
+* migrate the pages from an unaligned range (ie. pages that
+* we are interested in).  This will put all the pages in
+* range back to page allocator as MIGRATE_ISOLATE.
+*
+* When 

[PATCH 3/9] mm: alloc_contig_range() added

2011-08-12 Thread Marek Szyprowski
From: Michal Nazarewicz m.nazarew...@samsung.com

This commit adds the alloc_contig_range() function which tries
to allocate given range of pages.  It tries to migrate all
already allocated pages that fall in the range thus freeing them.
Once all pages in the range are freed they are removed from the
buddy system thus allocated for the caller to use.

Signed-off-by: Michal Nazarewicz m.nazarew...@samsung.com
Signed-off-by: Kyungmin Park kyungmin.p...@samsung.com
[m.szyprowski: renamed some variables for easier code reading]
Signed-off-by: Marek Szyprowski m.szyprow...@samsung.com
CC: Michal Nazarewicz min...@mina86.com
Acked-by: Arnd Bergmann a...@arndb.de
---
 include/linux/page-isolation.h |2 +
 mm/page_alloc.c|  144 
 2 files changed, 146 insertions(+), 0 deletions(-)

diff --git a/include/linux/page-isolation.h b/include/linux/page-isolation.h
index f1417ed..c5d1a7c 100644
--- a/include/linux/page-isolation.h
+++ b/include/linux/page-isolation.h
@@ -34,6 +34,8 @@ extern int set_migratetype_isolate(struct page *page);
 extern void unset_migratetype_isolate(struct page *page);
 extern unsigned long alloc_contig_freed_pages(unsigned long start,
  unsigned long end, gfp_t flag);
+extern int alloc_contig_range(unsigned long start, unsigned long end,
+ gfp_t flags);
 extern void free_contig_pages(struct page *page, int nr_pages);
 
 /*
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index ad6ae3f..35423c2 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5706,6 +5706,150 @@ unsigned long alloc_contig_freed_pages(unsigned long start, unsigned long end,
return pfn;
 }
 
+static unsigned long pfn_to_maxpage(unsigned long pfn)
+{
+   return pfn & ~(MAX_ORDER_NR_PAGES - 1);
+}
+
+static unsigned long pfn_to_maxpage_up(unsigned long pfn)
+{
+   return ALIGN(pfn, MAX_ORDER_NR_PAGES);
+}
+
+#define MIGRATION_RETRY	5
+static int __alloc_contig_migrate_range(unsigned long start, unsigned long end)
+{
+   int migration_failed = 0, ret;
+   unsigned long pfn = start;
+
+   /*
+* Some code borrowed from KAMEZAWA Hiroyuki's
+* __alloc_contig_pages().
+*/
+
+   for (;;) {
+   pfn = scan_lru_pages(pfn, end);
+   if (!pfn || pfn >= end)
+   break;
+
+   ret = do_migrate_range(pfn, end);
+   if (!ret) {
+   migration_failed = 0;
+   } else if (ret != -EBUSY
+   || ++migration_failed >= MIGRATION_RETRY) {
+   return ret;
+   } else {
+   /* There are unstable pages.on pagevec. */
+   lru_add_drain_all();
+   /*
+* there may be pages on pcplist before
+* we mark the range as ISOLATED.
+*/
+   drain_all_pages();
+   }
+   cond_resched();
+   }
+
+   if (!migration_failed) {
+   /* drop all pages in pagevec and pcp list */
+   lru_add_drain_all();
+   drain_all_pages();
+   }
+
+   /* Make sure all pages are isolated */
+   if (WARN_ON(test_pages_isolated(start, end)))
+   return -EBUSY;
+
+   return 0;
+}
+
+/**
+ * alloc_contig_range() -- tries to allocate given range of pages
+ * @start: start PFN to allocate
+ * @end:   one-past-the-last PFN to allocate
+ * @flags: flags passed to alloc_contig_freed_pages().
+ *
+ * The PFN range does not have to be pageblock or MAX_ORDER_NR_PAGES
+ * aligned, hovewer it's callers responsibility to guarantee that we
+ * are the only thread that changes migrate type of pageblocks the
+ * pages fall in.
+ *
+ * Returns zero on success or negative error code.  On success all
+ * pages which PFN is in (start, end) are allocated for the caller and
+ * need to be freed with free_contig_pages().
+ */
+int alloc_contig_range(unsigned long start, unsigned long end,
+  gfp_t flags)
+{
+   unsigned long outer_start, outer_end;
+   int ret;
+
+   /*
+* What we do here is we mark all pageblocks in range as
+* MIGRATE_ISOLATE.  Because of the way page allocator work, we
+* align the range to MAX_ORDER pages so that page allocator
+* won't try to merge buddies from different pageblocks and
+* change MIGRATE_ISOLATE to some other migration type.
+*
+* Once the pageblocks are marked as MIGRATE_ISOLATE, we
+* migrate the pages from an unaligned range (ie. pages that
+* we are interested in).  This will put all the pages in
+* range back to page allocator as MIGRATE_ISOLATE.
+*
+* When this is done, we take the pages in range from page
+* allocator removing them from the buddy
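
To make the flow described in that comment easier to follow, here is a
simplified, hypothetical sketch of the sequence (this is not the remainder of
the patch; isolate_range()/undo_isolate_range() are illustrative stand-ins for
whatever the patch actually uses to flip pageblock migratetypes):

	static int isolate_range(unsigned long start, unsigned long end);	/* hypothetical */
	static void undo_isolate_range(unsigned long start, unsigned long end);	/* hypothetical */

	static int alloc_contig_range_sketch(unsigned long start, unsigned long end,
					     gfp_t flags)
	{
		unsigned long outer_start = pfn_to_maxpage(start);
		unsigned long outer_end = pfn_to_maxpage_up(end);
		int ret;

		/* Mark every pageblock in the MAX_ORDER-aligned range MIGRATE_ISOLATE. */
		ret = isolate_range(outer_start, outer_end);
		if (ret)
			return ret;

		/* Migrate pages out of the (unaligned) range we actually want. */
		ret = __alloc_contig_migrate_range(start, end);
		if (!ret)
			/* Pull the now-free pages out of the buddy allocator. */
			alloc_contig_freed_pages(outer_start, outer_end, flags);

		/* Restore the pageblocks' original migratetype. */
		undo_isolate_range(outer_start, outer_end);
		return ret;
	}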