Nothing uses PFN_DEV anymore, so there is no need to create devmap pXd entries when mapping a PFN. Instead, special mappings will be created, which ensures vm_normal_page_pXd() will not return a page for mappings that lack an associated struct page.

This could change behaviour slightly on architectures where pXd_devmap() does not imply pXd_special(): the normal-page checks would previously have fallen through to the VM_PFNMAP/VM_MIXEDMAP checks, which in theory at least could have returned a page.

However, vm_normal_page_pXd() should never have been returning pages for pXd_devmap() entries anyway, so anything relying on that behaviour would have been a bug.
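To illustrate the point (a simplified sketch, not the exact mm/memory.c code): a pmd_mkspecial() entry always trips the early pmd_special() check in vm_normal_page_pmd(), whereas a devmap-only entry on an architecture where pmd_devmap() does not imply pmd_special() would fall through to the VM_PFNMAP/VM_MIXEDMAP handling:

  /*
   * Simplified sketch of the vm_normal_page_pmd() checks; the real
   * function in mm/memory.c handles more cases (CoW mappings, etc).
   */
  struct page *vm_normal_page_pmd(struct vm_area_struct *vma,
				  unsigned long addr, pmd_t pmd)
  {
	unsigned long pfn = pmd_pfn(pmd);

	/* Special entries never map a struct page: bail out early. */
	if (unlikely(pmd_special(pmd)))
		return NULL;

	/*
	 * A devmap entry that is not also marked special reaches this
	 * point and is handled by the VM_PFNMAP/VM_MIXEDMAP checks,
	 * which can return a page for a VM_MIXEDMAP vma with a valid
	 * pfn.
	 */
	if (unlikely(vma->vm_flags & (VM_PFNMAP | VM_MIXEDMAP))) {
		if (vma->vm_flags & VM_MIXEDMAP) {
			if (!pfn_valid(pfn))
				return NULL;
			/* falls through: returns a page */
		}
	}

	return pfn_to_page(pfn);
  }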
Signed-off-by: Alistair Popple <apop...@nvidia.com>

---
Changes since v1:
 - New for v2
---
 mm/huge_memory.c | 12 ++----------
 1 file changed, 2 insertions(+), 10 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index b096240..6514e25 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1415,11 +1415,7 @@ static int insert_pmd(struct vm_area_struct *vma, unsigned long addr,
 		add_mm_counter(mm, mm_counter_file(fop.folio), HPAGE_PMD_NR);
 	} else {
 		entry = pmd_mkhuge(pfn_t_pmd(fop.pfn, prot));
-
-		if (pfn_t_devmap(fop.pfn))
-			entry = pmd_mkdevmap(entry);
-		else
-			entry = pmd_mkspecial(entry);
+		entry = pmd_mkspecial(entry);
 	}
 	if (write) {
 		entry = pmd_mkyoung(pmd_mkdirty(entry));
@@ -1565,11 +1561,7 @@ static void insert_pud(struct vm_area_struct *vma, unsigned long addr,
 		add_mm_counter(mm, mm_counter_file(fop.folio), HPAGE_PUD_NR);
 	} else {
 		entry = pud_mkhuge(pfn_t_pud(fop.pfn, prot));
-
-		if (pfn_t_devmap(fop.pfn))
-			entry = pud_mkdevmap(entry);
-		else
-			entry = pud_mkspecial(entry);
+		entry = pud_mkspecial(entry);
 	}
 	if (write) {
 		entry = pud_mkyoung(pud_mkdirty(entry));
-- 
git-series 0.9.1