On 10.02.21 15:09, Oscar Salvador wrote:
On Wed, Feb 10, 2021 at 09:56:37AM +0100, David Hildenbrand wrote:
On 08.02.21 11:38, Oscar Salvador wrote:
alloc_contig_range is not prepared to handle hugetlb pages and will
fail if it ever sees one, but since they can be migrated like any other
page (LRU and Movable), it makes sense to handle them as well.

For now, do it only when coming from alloc_contig_range.

Signed-off-by: Oscar Salvador <osalva...@suse.de>
---
   mm/compaction.c | 17 +++++++++++++++++
   mm/vmscan.c     |  5 +++--
   2 files changed, 20 insertions(+), 2 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index e5acb9714436..89cd2e60da29 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -940,6 +940,22 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
                        goto isolate_fail;
                }
+               /*
+                * Handle hugetlb pages only when coming from alloc_contig
+                */
+               if (PageHuge(page) && cc->alloc_contig) {
+                       if (page_count(page)) {

I wonder if we should care about races here. What if someone concurrently
allocates/frees?

Note that PageHuge() succeeds on tail pages but isolate_huge_page() does
not; I assume we'll have to handle that as well.

I wonder if it would make sense to move some of the magic to hugetlb code
and handle it there, with less chance of races (isolate if used,
alloc-and-dissolve if not).
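
For concreteness, a rough sketch of what such a hugetlb-side helper could
look like (the helper name, the return convention and the stubbed-out
free-page path are made up here for illustration, not the eventual API):

#include <linux/errno.h>
#include <linux/hugetlb.h>
#include <linux/mm.h>

/*
 * Hypothetical helper, only sketching the idea above: isolate the huge
 * page if it is in use, otherwise replace-and-dissolve it so the range
 * can be handed to alloc_contig_range(). Not the actual patch.
 */
static int hugetlb_isolate_or_dissolve(struct page *page,
                                       struct list_head *list)
{
        if (!PageHuge(page))
                return -EINVAL;

        if (page_count(page)) {
                /*
                 * In-use huge page: isolate it for migration.
                 * isolate_huge_page() re-checks the page under
                 * hugetlb_lock, so racing with a concurrent free just
                 * makes it return false.
                 */
                return isolate_huge_page(page, list) ? 0 : -EBUSY;
        }

        /*
         * Free huge page sitting in the pool: allocate a fresh huge page
         * to keep the pool size stable, then dissolve this one so its
         * memory goes back to the buddy allocator. Stubbed out here, as
         * that path is exactly the "magic" being discussed.
         */
        return -ENOSYS;
}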

Yes, it makes sense to keep the magic in hugetlb code.
Note, though, that removing all races might be tricky.

isolate_huge_page() checks for PageHuge under hugetlb_lock, so there is
a window between an unlocked PageHuge() check and the subsequent call to
isolate_huge_page(). But we should be fine, as isolate_huge_page() will
simply fail if the page is no longer HugeTLB by then.
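
In code terms, the caller side of that pattern is roughly the following
(a minimal sketch; the helper name try_isolate_hugetlb() is invented and
this is not the patch itself):

/*
 * Minimal sketch of the check-then-isolate sequence described above.
 * The PageHuge() check is done without any lock and can race with a
 * free/dissolve; isolate_huge_page() re-validates the page under
 * hugetlb_lock and returns false if it is no longer an active HugeTLB
 * page, so the caller just skips it.
 */
static bool try_isolate_hugetlb(struct page *page, struct list_head *pagelist)
{
        if (!PageHuge(page))            /* unlocked, racy */
                return false;

        /* Re-checked under hugetlb_lock inside isolate_huge_page(). */
        return isolate_huge_page(page, pagelist);
}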

Also, since isolate_migratepages_block() gets called with pageblock-aligned
ranges, we should never be handling tail pages in the core of the function,
e.g. the same way we handle THP:
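
For reference, the existing compound-page skip in
isolate_migratepages_block() looks roughly like the fragment below (a
from-memory sketch of that check, not a verbatim quote of the tree):

                /*
                 * Compound pages (THP, hugetlbfs) are skipped in one go
                 * when the caller is not alloc_contig: advancing low_pfn
                 * past the whole compound page means the tail pages are
                 * never visited individually. The order read is racy, but
                 * the worst case is skipping a bit too much.
                 */
                if (PageCompound(page) && !cc->alloc_contig) {
                        const unsigned int order = compound_order(page);

                        if (likely(order < MAX_ORDER))
                                low_pfn += (1UL << order) - 1;
                        goto isolate_fail;
                }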

Gigantic pages? (spoiler: see my comments to next patch :) )

--
Thanks,

David / dhildenb
