From: Ralph Campbell <rcampb...@nvidia.com>

The mmotm patch [1] adds hugetlbfs support for HMM but the initial
PFN used to fill the HMM range->pfns[] array doesn't properly
compute the starting PFN offset.
This can be tested by running test-hugetlbfs-read from [2].

Fix the initial PFN by converting the byte offset within the huge
page into units of the device's page size before adding it to the PFN.

Andrew, this should probably be squashed into Jerome's patch.

[1] https://marc.info/?l=linux-mm&m=155432003506068&w=2
("mm/hmm: mirror hugetlbfs (snapshoting, faulting and DMA mapping)")
[2] https://gitlab.freedesktop.org/glisse/svm-cl-tests

Signed-off-by: Ralph Campbell <rcampb...@nvidia.com>
Cc: Jérôme Glisse <jgli...@redhat.com>
Cc: Ira Weiny <ira.we...@intel.com>
Cc: John Hubbard <jhubb...@nvidia.com>
Cc: Dan Williams <dan.j.willi...@intel.com>
Cc: Arnd Bergmann <a...@arndb.de>
Cc: Balbir Singh <bsinghar...@gmail.com>
Cc: Dan Carpenter <dan.carpen...@oracle.com>
Cc: Matthew Wilcox <wi...@infradead.org>
Cc: Souptick Joarder <jrdr.li...@gmail.com>
Cc: Andrew Morton <a...@linux-foundation.org>
---
 mm/hmm.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/hmm.c b/mm/hmm.c
index def451a56c3e..fcf8e4fb5770 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -868,7 +868,7 @@ static int hmm_vma_walk_hugetlb_entry(pte_t *pte, unsigned long hmask,
                goto unlock;
        }
 
-       pfn = pte_pfn(entry) + (start & mask);
+       pfn = pte_pfn(entry) + ((start & mask) >> range->page_shift);
        for (; addr < end; addr += size, i++, pfn += pfn_inc)
                range->pfns[i] = hmm_device_entry_from_pfn(range, pfn) |
                                 cpu_flags;
-- 
2.20.1