There are two code paths that invoke __populate_section_memmap():

  * sparse_init_nid()
  * sparse_add_section()

For both cases, we are sure the memory range is sub-section aligned
(a sketch after the list below illustrates the consequence):

  * we pass PAGES_PER_SECTION to sparse_init_nid()
  * we check the range with check_pfn_span() before calling
    sparse_add_section()
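
As a quick illustration (my own user-space sketch, not kernel code; it
assumes PAGES_PER_SUBSECTION == 512, i.e. 2M sub-sections with 4K pages
as on x86_64), the rounding removed below is a no-op once pfn and
nr_pages are sub-section aligned:

  #include <assert.h>

  #define PAGES_PER_SUBSECTION  512UL  /* assumed x86_64 value */
  #define PAGE_SUBSECTION_MASK  (~(PAGES_PER_SUBSECTION - 1))
  #define ALIGN(x, a)           (((x) + (a) - 1) & ~((a) - 1))

  int main(void)
  {
          /* a sub-section aligned range, as both callers guarantee */
          unsigned long pfn = 4 * PAGES_PER_SUBSECTION;
          unsigned long nr_pages = 2 * PAGES_PER_SUBSECTION;
          unsigned long end = ALIGN(pfn + nr_pages, PAGES_PER_SUBSECTION);

          pfn &= PAGE_SUBSECTION_MASK;

          /* for aligned input the rounding changes nothing */
          assert(pfn == 4 * PAGES_PER_SUBSECTION);
          assert(end - pfn == nr_pages);
          return 0;
  }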

Also, in the counterpart of __populate_section_memmap(), we don't do
such a calculation and check, since the range has already been
validated by check_pfn_span() in __remove_pages().
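
For reference, the check both paths rely on looks roughly like this
(paraphrased from memory of check_pfn_span() in mm/sparse.c, not a
verbatim quote):

  static int check_pfn_span(unsigned long pfn, unsigned long nr_pages,
                            const char *reason)
  {
          /*
           * The minimum granularity is a sub-section with
           * SPARSEMEM_VMEMMAP and a full section otherwise.
           */
          unsigned long min_align;

          if (IS_ENABLED(CONFIG_SPARSEMEM_VMEMMAP))
                  min_align = PAGES_PER_SUBSECTION;
          else
                  min_align = PAGES_PER_SECTION;
          if (!IS_ALIGNED(pfn, min_align) ||
              !IS_ALIGNED(nr_pages, min_align)) {
                  WARN(1, "%s: [%lx, %lx] is not sub-section aligned\n",
                       reason, pfn, pfn + nr_pages - 1);
                  return -EINVAL;
          }
          return 0;
  }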

Drop the calculation and check to keep the function simple and
consistent with its counterpart.

Signed-off-by: Wei Yang <[email protected]>
---
 mm/sparse-vmemmap.c | 16 ++--------------
 1 file changed, 2 insertions(+), 14 deletions(-)

diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index 0db7738d76e9..24b01ebae111 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -247,20 +247,8 @@ int __meminit vmemmap_populate_basepages(unsigned long start,
 struct page * __meminit __populate_section_memmap(unsigned long pfn,
                unsigned long nr_pages, int nid, struct vmem_altmap *altmap)
 {
-       unsigned long start;
-       unsigned long end;
-
-       /*
-        * The minimum granularity of memmap extensions is
-        * PAGES_PER_SUBSECTION as allocations are tracked in the
-        * 'subsection_map' bitmap of the section.
-        */
-       end = ALIGN(pfn + nr_pages, PAGES_PER_SUBSECTION);
-       pfn &= PAGE_SUBSECTION_MASK;
-       nr_pages = end - pfn;
-
-       start = (unsigned long) pfn_to_page(pfn);
-       end = start + nr_pages * sizeof(struct page);
+       unsigned long start = (unsigned long) pfn_to_page(pfn);
+       unsigned long end = start + nr_pages * sizeof(struct page);
 
        if (vmemmap_populate(start, end, nid, altmap))
                return NULL;
-- 
2.20.1 (Apple Git-117)
