We set up the cache mode but ... don't forward the updated pgprot to
insert_pfn_pud().

This is only a problem on x86-64 with PAT when mapping PFNs using PUDs
that require a special cachemode.

Fix it by using the proper pgprot where the cachemode was set up.

It is unclear in which configurations we would get the cachemode wrong:
getting there through vfio seems possible. Getting cachemodes wrong is
usually ... bad.
As the fix is easy, let's backport it to stable.

Identified by code inspection.
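
For context, a heavily simplified sketch of the pre-fix call flow (not
part of this patch; details trimmed from mm/huge_memory.c):

	vm_fault_t vmf_insert_pfn_pud(struct vm_fault *vmf, pfn_t pfn, bool write)
	{
		pgprot_t pgprot = vmf->vma->vm_page_prot;
		...
		/* may adjust pgprot, e.g., on x86-64 with PAT */
		pfnmap_setup_cachemode_pfn(pfn_t_to_pfn(pfn), &pgprot);
		...
		/* old code: the updated pgprot is not passed along ... */
		insert_pfn_pud(vma, addr, vmf->pud, pfn, write);
		...
	}

	static void insert_pfn_pud(struct vm_area_struct *vma, unsigned long addr,
			pud_t *pud, pfn_t pfn, bool write)
	{
		/* ... and the unmodified vma->vm_page_prot is used instead */
		pgprot_t prot = vma->vm_page_prot;
		...
	}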

Fixes: 7b806d229ef1 ("mm: remove vmf_insert_pfn_xxx_prot() for huge page-table entries")
Reviewed-by: Dan Williams <dan.j.willi...@intel.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoa...@oracle.com>
Reviewed-by: Jason Gunthorpe <j...@nvidia.com>
Tested-by: Dan Williams <dan.j.willi...@intel.com>
Cc: <sta...@vger.kernel.org>
Signed-off-by: David Hildenbrand <da...@redhat.com>
---
 mm/huge_memory.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index d3e66136e41a3..49b98082c5401 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1516,10 +1516,9 @@ static pud_t maybe_pud_mkwrite(pud_t pud, struct vm_area_struct *vma)
 }
 
 static void insert_pfn_pud(struct vm_area_struct *vma, unsigned long addr,
-               pud_t *pud, pfn_t pfn, bool write)
+               pud_t *pud, pfn_t pfn, pgprot_t prot, bool write)
 {
        struct mm_struct *mm = vma->vm_mm;
-       pgprot_t prot = vma->vm_page_prot;
        pud_t entry;
 
        if (!pud_none(*pud)) {
@@ -1581,7 +1580,7 @@ vm_fault_t vmf_insert_pfn_pud(struct vm_fault *vmf, pfn_t pfn, bool write)
        pfnmap_setup_cachemode_pfn(pfn_t_to_pfn(pfn), &pgprot);
 
        ptl = pud_lock(vma->vm_mm, vmf->pud);
-       insert_pfn_pud(vma, addr, vmf->pud, pfn, write);
+       insert_pfn_pud(vma, addr, vmf->pud, pfn, pgprot, write);
        spin_unlock(ptl);
 
        return VM_FAULT_NOPAGE;
@@ -1625,7 +1624,7 @@ vm_fault_t vmf_insert_folio_pud(struct vm_fault *vmf, struct folio *folio,
                add_mm_counter(mm, mm_counter_file(folio), HPAGE_PUD_NR);
        }
        insert_pfn_pud(vma, addr, vmf->pud, pfn_to_pfn_t(folio_pfn(folio)),
-               write);
+                      vma->vm_page_prot, write);
        spin_unlock(ptl);
 
        return VM_FAULT_NOPAGE;
-- 
2.49.0
