Nicholas Piggin <npig...@gmail.com> writes:

> Commit 1b2443a547f9 ("powerpc/book3s64: Avoid multiple endian conversion
> in pte helpers") changed the bitwise tests in pte_access_permitted to
> use the pte_write() and pte_present() helpers rather than testing the
> _PAGE_WRITE and _PAGE_PRESENT bits directly.
>
> After the pte_present() change, it now returns true for ptes which are
> !_PAGE_PRESENT and _PAGE_INVALID, which is the combination used by
> pmdp_invalidate to synchronise against lock-free lookups.
> pte_access_permitted is used by pmd_access_permitted, so allowing
> lock-free GUP access to proceed with such PTEs breaks this
> synchronisation.
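
For context, the helpers in question look roughly like this after that
commit (paraphrased from arch/powerpc/include/asm/book3s/64/pgtable.h of
that era, not quoted verbatim; the comments are illustrative):

static inline int pte_present(pte_t pte)
{
	/*
	 * An entry is considered present if either _PAGE_PRESENT or
	 * _PAGE_INVALID is set, so a pte that was only temporarily
	 * invalidated still reads as present.
	 */
	return !!(pte_raw(pte) & cpu_to_be64(_PAGE_PRESENT | _PAGE_INVALID));
}

static inline bool pte_access_permitted(pte_t pte, bool write)
{
	/*
	 * Note the !pte_present() test does NOT reject the
	 * !_PAGE_PRESENT + _PAGE_INVALID combination described above.
	 */
	if (!pte_present(pte) || !pte_user(pte) || !pte_read(pte))
		return false;

	if (write && !pte_write(pte))
		return false;

	return true;
}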
>
> This bug has been observed on an HPT host, causing random crashes and
> corruption in guests, usually together with bad PMD messages in the host.
>
> Fix this by adding an explicit check in pmd_access_permitted, and
> documenting the condition there.
>
> The pte_write() change should be okay: it prevents GUP from falling
> back to the slow path when encountering savedwrite ptes, which matches
> the behaviour of x86 (which does not implement savedwrite).
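
For reference, pte_write() on book3s64 folds savedwrite in, roughly as
below (again paraphrased rather than verbatim, so treat the exact shape
as approximate):

static inline bool __pte_write(pte_t pte)
{
	return !!(pte_raw(pte) & cpu_to_be64(_PAGE_WRITE));
}

static inline int pte_write(pte_t pte)
{
	/* A savedwrite pte is a prot-none pte with the write bit saved. */
	return __pte_write(pte) || pte_savedwrite(pte);
}

So a raw _PAGE_WRITE test fails on savedwrite ptes where pte_write()
succeeds, which is why the helper lets GUP stay on the fast path.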
>

Reviewed-by: Aneesh Kumar K.V <aneesh.ku...@linux.ibm.com>

> Fixes: 1b2443a547f9 ("powerpc/book3s64: Avoid multiple endian conversion in pte helpers")
> Cc: Aneesh Kumar K.V <aneesh.ku...@linux.ibm.com>
> Cc: Christophe Leroy <christophe.le...@c-s.fr>
> Signed-off-by: Nicholas Piggin <npig...@gmail.com>
> ---
>
> I accounted for Aneesh's and Christophe's feedback, except I couldn't
> find a good way to replace the ifdef with IS_ENABLED because of
> _PAGE_INVALID etc., but I did tidy that up a bit.
>
> Patch 1 solves a problem I can hit quite reliably when running HPT-on-HPT
> KVM. Patch 2 fixes an issue Aneesh noticed when inspecting the code for
> similar bugs. They should probably both be merged into stable kernels
> once they are upstream.
>
>  arch/powerpc/include/asm/book3s/64/pgtable.h | 30 ++++++++++++++++++++
>  arch/powerpc/mm/book3s64/pgtable.c           |  3 ++
>  2 files changed, 33 insertions(+)
>
> diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
> index 7dede2e34b70..ccf00a8b98c6 100644
> --- a/arch/powerpc/include/asm/book3s/64/pgtable.h
> +++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
> @@ -876,6 +876,23 @@ static inline int pmd_present(pmd_t pmd)
>       return false;
>  }
>  
> +static inline int pmd_is_serializing(pmd_t pmd)
> +{
> +     /*
> +      * If the pmd is undergoing a split, the _PAGE_PRESENT bit is clear
> +      * and _PAGE_INVALID is set (see pmd_present, pmdp_invalidate).
> +      *
> +      * This condition may also occur transiently while a pmd is being
> +      * invalidated and flushed (see ptep_modify_prot_start), so callers
> +      * must ensure this case is fine as well.
> +      */
> +     if ((pmd_raw(pmd) & cpu_to_be64(_PAGE_PRESENT | _PAGE_INVALID)) ==
> +                                             cpu_to_be64(_PAGE_INVALID))
> +             return true;
> +
> +     return false;
> +}
> +
>  static inline int pmd_bad(pmd_t pmd)
>  {
>       if (radix_enabled())
> @@ -1092,6 +1109,19 @@ static inline int pmd_protnone(pmd_t pmd)
>  #define pmd_access_permitted pmd_access_permitted
>  static inline bool pmd_access_permitted(pmd_t pmd, bool write)
>  {
> +     /*
> +      * pmdp_invalidate sets this combination (which is not caught by
> +      * the !pte_present() check in pte_access_permitted), to prevent
> +      * lock-free lookups, as part of the serialize_against_pte_lookup()
> +      * synchronisation.
> +      *
> +      * This also catches the case where the PTE's hardware PRESENT bit
> +      * is cleared while the TLB is flushed, which is suboptimal but
> +      * should not be frequent.
> +      */
> +     if (pmd_is_serializing(pmd))
> +             return false;
> +
>       return pte_access_permitted(pmd_pte(pmd), write);
>  }
>  
> diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c
> index 16bda049187a..ff98b663c83e 100644
> --- a/arch/powerpc/mm/book3s64/pgtable.c
> +++ b/arch/powerpc/mm/book3s64/pgtable.c
> @@ -116,6 +116,9 @@ pmd_t pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
>       /*
>        * This ensures that generic code that rely on IRQ disabling
>        * to prevent a parallel THP split work as expected.
> +      *
> +      * Marking the entry with _PAGE_INVALID set and _PAGE_PRESENT clear
> +      * requires a special case check in pmd_access_permitted.
>        */
>       serialize_against_pte_lookup(vma->vm_mm);
>       return __pmd(old_pmd);
> -- 
> 2.20.1
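
To make the fixed check concrete, here is a small self-contained
userspace model of the bit logic (the bit values and helper names are
stand-ins for illustration, not the kernel's definitions):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Stand-in bit values; the kernel's actual definitions differ. */
#define PG_PRESENT	0x8000000000000000ULL
#define PG_INVALID	0x2000000000000000ULL

/* pte_present() semantics after 1b2443a547f9: present OR invalid. */
static bool model_pte_present(uint64_t e)
{
	return e & (PG_PRESENT | PG_INVALID);
}

/* Combination written by pmdp_invalidate: INVALID set, PRESENT clear. */
static bool model_pmd_is_serializing(uint64_t e)
{
	return (e & (PG_PRESENT | PG_INVALID)) == PG_INVALID;
}

int main(void)
{
	uint64_t pmd = PG_INVALID;	/* entry undergoing pmdp_invalidate */

	/* The present-style check alone would let a lookup proceed: */
	printf("present check passes: %d (the bug)\n",
	       model_pte_present(pmd));

	/* The explicit serializing check refuses it: */
	printf("serializing check:    %d (the fix refuses access)\n",
	       model_pmd_is_serializing(pmd));
	return 0;
}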
