On 02/09/2020 at 13:42, Aneesh Kumar K.V wrote:
powerpc used to set the PTE-specific flags in set_pte_at(), which is
different from the other architectures. To be consistent with the other
architectures, update pfn_pte() to set _PAGE_PTE on ppc64. Also, drop the
now-unused pte_mkpte().
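
For reference, the pfn_pte() change on book3s64 amounts to roughly the
sketch below (a paraphrase of the patch intent, not the exact hunk;
pte_basic_t and PTE_RPN_MASK are the existing book3s64 definitions):

static inline pte_t pfn_pte(unsigned long pfn, pgprot_t pgprot)
{
	VM_BUG_ON(pfn >> (64 - PAGE_SHIFT));
	VM_BUG_ON((pfn << PAGE_SHIFT) & ~PTE_RPN_MASK);

	/*
	 * Tag the entry as a leaf PTE at creation time instead of
	 * in set_pte_at().
	 */
	return __pte(((pte_basic_t)pfn << PAGE_SHIFT) |
		     pgprot_val(pgprot) | _PAGE_PTE);
}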

We add a VM_WARN_ON() to catch callers of set_pte_at() that do not set
the _PAGE_PTE bit. We will remove it after a few releases.
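
To illustrate what the warning catches, a hypothetical caller (mm, addr,
ptep, pfn and prot are made-up locals):

	/* Trips the VM_WARN_ON(): the raw PTE value lacks _PAGE_PTE */
	set_pte_at(mm, addr, ptep,
		   __pte((pfn << PAGE_SHIFT) | pgprot_val(prot)));

	/* Fine: pfn_pte() now sets _PAGE_PTE itself */
	set_pte_at(mm, addr, ptep, pfn_pte(pfn, prot));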

With respect to huge pmd entries, pmd_mkhuge() takes care of adding the
_PAGE_PTE bit.
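
On radix that is roughly the following (simplified sketch; the hash
variant additionally sets H_PAGE_THP_HUGE):

static inline pmd_t radix__pmd_mkhuge(pmd_t pmd)
{
	/* A huge pmd is a leaf entry, so it carries _PAGE_PTE as well */
	return __pmd(pmd_val(pmd) | _PAGE_PTE);
}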

Signed-off-by: Aneesh Kumar K.V <aneesh.ku...@linux.ibm.com>

Reviewed-by: Christophe Leroy <christophe.le...@csgroup.eu>

Small nit below.

---
  arch/powerpc/include/asm/book3s/64/pgtable.h | 15 +++++++++------
  arch/powerpc/include/asm/nohash/pgtable.h    |  5 -----
  arch/powerpc/mm/pgtable.c                    |  5 -----
  3 files changed, 9 insertions(+), 16 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
index 079211968987..2382fd516f6b 100644
--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
@@ -823,6 +818,14 @@ static inline int pte_none(pte_t pte)
  static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr,
                                pte_t *ptep, pte_t pte, int percpu)
  {
+
+       VM_WARN_ON(!(pte_raw(pte) & cpu_to_be64(_PAGE_PTE)));
+       /*
+        * Keep adding _PAGE_PTE here until we are sure we handle
+        * _PAGE_PTE in all the callers.
+        */
+        pte = __pte_raw(pte_raw(pte) | cpu_to_be64(_PAGE_PTE));

Wrong alignment: there is a leading space.

+
        if (radix_enabled())
                return radix__set_pte_at(mm, addr, ptep, pte, percpu);
        return hash__set_pte_at(mm, addr, ptep, pte, percpu);

Christophe
