Re: [PATCH 1/2] powerpc/64s: Fix THP PMD collapse serialisation

2019-06-11, Michael Ellerman
On Fri, 2019-06-07 at 03:56:35 UTC, Nicholas Piggin wrote:
> Commit 1b2443a547f9 ("powerpc/book3s64: Avoid multiple endian conversion
> in pte helpers") changed the actual bitwise tests in pte_access_permitted
> by using pte_write() and pte_present() helpers rather than raw bitwise
> testing _PAGE_WRITE and _PAGE_PRESENT bits.
> 
> The pte_present change now returns true for ptes which are !_PAGE_PRESENT
> and _PAGE_INVALID, which is the combination used by pmdp_invalidate to
> synchronize access from lock-free lookups. pte_access_permitted is used by
> pmd_access_permitted, so allowing GUP lock free access to proceed with
> such PTEs breaks this synchronisation.
> 
> This bug has been observed on HPT host, with random crashes and corruption
> in guests, usually together with bad PMD messages in the host.
> 
> Fix this by adding an explicit check in pmd_access_permitted, and
> documenting the condition explicitly.
> 
> The pte_write() change should be okay, and would prevent GUP from falling
> back to the slow path when encountering savedwrite ptes, which matches
> what x86 (that does not implement savedwrite) does.
> 
> Fixes: 1b2443a547f9 ("powerpc/book3s64: Avoid multiple endian conversion in pte helpers")
> Cc: Aneesh Kumar K.V 
> Cc: Christophe Leroy 
> Signed-off-by: Nicholas Piggin 
> Reviewed-by: Aneesh Kumar K.V 

Applied to powerpc fixes, thanks.

https://git.kernel.org/powerpc/c/33258a1db165cf43a9e6382587ad06e9

cheers
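
For context, a simplified sketch of the helpers involved (bodies abridged,
not the exact kernel code), showing why an entry that pmdp_invalidate() has
marked !_PAGE_PRESENT && _PAGE_INVALID still looks accessible after commit
1b2443a547f9:

static inline int pte_present(pte_t pte)
{
        /* after 1b2443a547f9: also true for _PAGE_INVALID-only entries */
        return !!(pte_raw(pte) & cpu_to_be64(_PAGE_PRESENT | _PAGE_INVALID));
}

static inline bool pte_access_permitted(pte_t pte, bool write)
{
        if (!pte_present(pte))  /* no longer rejects an invalidated entry */
                return false;
        if (write && !pte_write(pte))
                return false;
        return true;            /* user/read checks omitted in this sketch */
}

/*
 * pmd_access_permitted() forwards to pte_access_permitted(pmd_pte(pmd)),
 * so before the fix a pmd that pmdp_invalidate() had marked was still
 * reported as accessible and lock-free GUP kept using it.
 */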


Re: [PATCH 1/2] powerpc/64s: Fix THP PMD collapse serialisation

2019-06-06, Nicholas Piggin
Excerpts from Christophe Leroy's message of June 7, 2019 3:34 pm:
> 
> 
> Le 07/06/2019 à 05:56, Nicholas Piggin a écrit :
>> Commit 1b2443a547f9 ("powerpc/book3s64: Avoid multiple endian conversion
>> in pte helpers") changed the actual bitwise tests in pte_access_permitted
>> by using pte_write() and pte_present() helpers rather than raw bitwise
>> testing _PAGE_WRITE and _PAGE_PRESENT bits.
>> 
>> The pte_present change now returns true for ptes which are !_PAGE_PRESENT
>> and _PAGE_INVALID, which is the combination used by pmdp_invalidate to
>> synchronize access from lock-free lookups. pte_access_permitted is used by
>> pmd_access_permitted, so allowing GUP lock free access to proceed with
>> such PTEs breaks this synchronisation.
>> 
>> This bug has been observed on HPT host, with random crashes and corruption
>> in guests, usually together with bad PMD messages in the host.
>> 
>> Fix this by adding an explicit check in pmd_access_permitted, and
>> documenting the condition explicitly.
>> 
>> The pte_write() change should be okay, and would prevent GUP from falling
>> back to the slow path when encountering savedwrite ptes, which matches
>> what x86 (that does not implement savedwrite) does.
>> 
>> Fixes: 1b2443a547f9 ("powerpc/book3s64: Avoid multiple endian conversion in pte helpers")
>> Cc: Aneesh Kumar K.V 
>> Cc: Christophe Leroy 
>> Signed-off-by: Nicholas Piggin 
>> ---
>> 
>> I accounted for Aneesh's and Christophe's feedback, except I couldn't
>> find a good way to replace the ifdef with IS_ENABLED because of
>> _PAGE_INVALID etc., but at least cleaned that up a bit nicer.
> 
> I guess the standard way is to add a pmd_is_serializing() which always 
> returns false in book3s/32/pgtable.h and in nohash/pgtable.h


> 
>> 
>> Patch 1 solves a problem I can hit quite reliably running HPT/HPT KVM.
>> Patch 2 was noticed by Aneesh when inspecting code for similar bugs.
>> They should probably both be merged in stable kernels after upstream.
>> 
>>   arch/powerpc/include/asm/book3s/64/pgtable.h | 30 
>>   arch/powerpc/mm/book3s64/pgtable.c   |  3 ++
>>   2 files changed, 33 insertions(+)
>> 
>> diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
>> index 7dede2e34b70..ccf00a8b98c6 100644
>> --- a/arch/powerpc/include/asm/book3s/64/pgtable.h
>> +++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
>> @@ -876,6 +876,23 @@ static inline int pmd_present(pmd_t pmd)
>>  return false;
>>   }
>>   
>> +static inline int pmd_is_serializing(pmd_t pmd)
> 
> should be static inline bool instead of int ?

I think just about all the p?d_blah boolean functions in the tree are
int at the moment, so I followed that pattern.

Might be a good tree-wide change to make at some point.

Thanks,
Nick
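
(The pattern referred to here: predicate helpers in this header, such as
pmd_present() in the diff above, are declared as returning int. A tree-wide
cleanup would only flip the return type; a hypothetical bool variant of the
new helper, behaviour unchanged:)

static inline bool pmd_is_serializing(pmd_t pmd)
{
        return (pmd_raw(pmd) & cpu_to_be64(_PAGE_PRESENT | _PAGE_INVALID)) ==
                cpu_to_be64(_PAGE_INVALID);
}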




Re: [PATCH 1/2] powerpc/64s: Fix THP PMD collapse serialisation

2019-06-06, Aneesh Kumar K.V
Nicholas Piggin  writes:

> Commit 1b2443a547f9 ("powerpc/book3s64: Avoid multiple endian conversion
> in pte helpers") changed the actual bitwise tests in pte_access_permitted
> by using pte_write() and pte_present() helpers rather than raw bitwise
> testing _PAGE_WRITE and _PAGE_PRESENT bits.
>
> The pte_present change now returns true for ptes which are !_PAGE_PRESENT
> and _PAGE_INVALID, which is the combination used by pmdp_invalidate to
> synchronize access from lock-free lookups. pte_access_permitted is used by
> pmd_access_permitted, so allowing GUP lock free access to proceed with
> such PTEs breaks this synchronisation.
>
> This bug has been observed on HPT host, with random crashes and corruption
> in guests, usually together with bad PMD messages in the host.
>
> Fix this by adding an explicit check in pmd_access_permitted, and
> documenting the condition explicitly.
>
> The pte_write() change should be okay, and would prevent GUP from falling
> back to the slow path when encountering savedwrite ptes, which matches
> what x86 (that does not implement savedwrite) does.
>

Reviewed-by: Aneesh Kumar K.V 

> Fixes: 1b2443a547f9 ("powerpc/book3s64: Avoid multiple endian conversion in pte helpers")
> Cc: Aneesh Kumar K.V 
> Cc: Christophe Leroy 
> Signed-off-by: Nicholas Piggin 
> ---
>
> I accounted for Aneesh's and Christophe's feedback, except I couldn't
> find a good way to replace the ifdef with IS_ENABLED because of
> _PAGE_INVALID etc., but at least cleaned that up a bit nicer.
>
> Patch 1 solves a problem I can hit quite reliably running HPT/HPT KVM.
> Patch 2 was noticed by Aneesh when inspecting code for similar bugs.
> They should probably both be merged in stable kernels after upstream.
>
>  arch/powerpc/include/asm/book3s/64/pgtable.h | 30 
>  arch/powerpc/mm/book3s64/pgtable.c   |  3 ++
>  2 files changed, 33 insertions(+)
>
> diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
> index 7dede2e34b70..ccf00a8b98c6 100644
> --- a/arch/powerpc/include/asm/book3s/64/pgtable.h
> +++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
> @@ -876,6 +876,23 @@ static inline int pmd_present(pmd_t pmd)
>   return false;
>  }
>  
> +static inline int pmd_is_serializing(pmd_t pmd)
> +{
> + /*
> +  * If the pmd is undergoing a split, the _PAGE_PRESENT bit is clear
> +  * and _PAGE_INVALID is set (see pmd_present, pmdp_invalidate).
> +  *
> +  * This condition may also occur when flushing a pmd while flushing
> +  * it (see ptep_modify_prot_start), so callers must ensure this
> +  * case is fine as well.
> +  */
> + if ((pmd_raw(pmd) & cpu_to_be64(_PAGE_PRESENT | _PAGE_INVALID)) ==
> + cpu_to_be64(_PAGE_INVALID))
> + return true;
> +
> + return false;
> +}
> +
>  static inline int pmd_bad(pmd_t pmd)
>  {
>   if (radix_enabled())
> @@ -1092,6 +1109,19 @@ static inline int pmd_protnone(pmd_t pmd)
>  #define pmd_access_permitted pmd_access_permitted
>  static inline bool pmd_access_permitted(pmd_t pmd, bool write)
>  {
> + /*
> +  * pmdp_invalidate sets this combination (which is not caught by
> +  * !pte_present() check in pte_access_permitted), to prevent
> +  * lock-free lookups, as part of the serialize_against_pte_lookup()
> +  * synchronisation.
> +  *
> +  * This also catches the case where the PTE's hardware PRESENT bit is
> +  * cleared while TLB is flushed, which is suboptimal but should not
> +  * be frequent.
> +  */
> + if (pmd_is_serializing(pmd))
> + return false;
> +
>   return pte_access_permitted(pmd_pte(pmd), write);
>  }
>  
> diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c
> index 16bda049187a..ff98b663c83e 100644
> --- a/arch/powerpc/mm/book3s64/pgtable.c
> +++ b/arch/powerpc/mm/book3s64/pgtable.c
> @@ -116,6 +116,9 @@ pmd_t pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
>   /*
>* This ensures that generic code that rely on IRQ disabling
>* to prevent a parallel THP split work as expected.
> +  *
> +  * Marking the entry with _PAGE_INVALID && ~_PAGE_PRESENT requires
> +  * a special case check in pmd_access_permitted.
>*/
>   serialize_against_pte_lookup(vma->vm_mm);
>   return __pmd(old_pmd);
> -- 
> 2.20.1



Re: [PATCH 1/2] powerpc/64s: Fix THP PMD collapse serialisation

2019-06-06, Christophe Leroy




Le 07/06/2019 à 05:56, Nicholas Piggin a écrit :

Commit 1b2443a547f9 ("powerpc/book3s64: Avoid multiple endian conversion
in pte helpers") changed the actual bitwise tests in pte_access_permitted
by using pte_write() and pte_present() helpers rather than raw bitwise
testing _PAGE_WRITE and _PAGE_PRESENT bits.

The pte_present change now returns true for ptes which are !_PAGE_PRESENT
and _PAGE_INVALID, which is the combination used by pmdp_invalidate to
synchronize access from lock-free lookups. pte_access_permitted is used by
pmd_access_permitted, so allowing GUP lock free access to proceed with
such PTEs breaks this synchronisation.

This bug has been observed on HPT host, with random crashes and corruption
in guests, usually together with bad PMD messages in the host.

Fix this by adding an explicit check in pmd_access_permitted, and
documenting the condition explicitly.

The pte_write() change should be okay, and would prevent GUP from falling
back to the slow path when encountering savedwrite ptes, which matches
what x86 (that does not implement savedwrite) does.

Fixes: 1b2443a547f9 ("powerpc/book3s64: Avoid multiple endian conversion in pte helpers")
Cc: Aneesh Kumar K.V 
Cc: Christophe Leroy 
Signed-off-by: Nicholas Piggin 
---

I accounted for Aneesh's and Christophe's feedback, except I couldn't
find a good way to replace the ifdef with IS_ENABLED because of
_PAGE_INVALID etc., but at least cleaned that up a bit nicer.


I guess the standard way is to add a pmd_is_serializing() which always 
returns false in book3s/32/pgtable.h and in nohash/pgtable.h
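
A minimal sketch of that suggestion (hypothetical, not what was merged; only
book3s64 marks pmds this way during invalidation):

/* e.g. in book3s/32/pgtable.h and nohash/pgtable.h */
static inline bool pmd_is_serializing(pmd_t pmd)
{
        /* these platforms never mark a pmd with _PAGE_INVALID */
        return false;
}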




Patch 1 solves a problem I can hit quite reliably running HPT/HPT KVM.
Patch 2 was noticed by Aneesh when inspecting code for similar bugs.
They should probably both be merged in stable kernels after upstream.

  arch/powerpc/include/asm/book3s/64/pgtable.h | 30 
  arch/powerpc/mm/book3s64/pgtable.c   |  3 ++
  2 files changed, 33 insertions(+)

diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
index 7dede2e34b70..ccf00a8b98c6 100644
--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
@@ -876,6 +876,23 @@ static inline int pmd_present(pmd_t pmd)
return false;
  }
  
+static inline int pmd_is_serializing(pmd_t pmd)


should be static inline bool instead of int ?

Christophe


+{
+   /*
+* If the pmd is undergoing a split, the _PAGE_PRESENT bit is clear
+* and _PAGE_INVALID is set (see pmd_present, pmdp_invalidate).
+*
+* This condition may also occur when flushing a pmd while flushing
+* it (see ptep_modify_prot_start), so callers must ensure this
+* case is fine as well.
+*/
+   if ((pmd_raw(pmd) & cpu_to_be64(_PAGE_PRESENT | _PAGE_INVALID)) ==
+   cpu_to_be64(_PAGE_INVALID))
+   return true;
+
+   return false;
+}
+
  static inline int pmd_bad(pmd_t pmd)
  {
if (radix_enabled())
@@ -1092,6 +1109,19 @@ static inline int pmd_protnone(pmd_t pmd)
  #define pmd_access_permitted pmd_access_permitted
  static inline bool pmd_access_permitted(pmd_t pmd, bool write)
  {
+   /*
+* pmdp_invalidate sets this combination (which is not caught by
+* !pte_present() check in pte_access_permitted), to prevent
+* lock-free lookups, as part of the serialize_against_pte_lookup()
+* synchronisation.
+*
+* This also catches the case where the PTE's hardware PRESENT bit is
+* cleared while TLB is flushed, which is suboptimal but should not
+* be frequent.
+*/
+   if (pmd_is_serializing(pmd))
+   return false;
+
return pte_access_permitted(pmd_pte(pmd), write);
  }
  
diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c

index 16bda049187a..ff98b663c83e 100644
--- a/arch/powerpc/mm/book3s64/pgtable.c
+++ b/arch/powerpc/mm/book3s64/pgtable.c
@@ -116,6 +116,9 @@ pmd_t pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
/*
 * This ensures that generic code that rely on IRQ disabling
 * to prevent a parallel THP split work as expected.
+*
+* Marking the entry with _PAGE_INVALID && ~_PAGE_PRESENT requires
+* a special case check in pmd_access_permitted.
 */
serialize_against_pte_lookup(vma->vm_mm);
return __pmd(old_pmd);



[PATCH 1/2] powerpc/64s: Fix THP PMD collapse serialisation

2019-06-06, Nicholas Piggin
Commit 1b2443a547f9 ("powerpc/book3s64: Avoid multiple endian conversion
in pte helpers") changed the actual bitwise tests in pte_access_permitted
by using pte_write() and pte_present() helpers rather than raw bitwise
testing _PAGE_WRITE and _PAGE_PRESENT bits.

The pte_present change now returns true for ptes which are !_PAGE_PRESENT
and _PAGE_INVALID, which is the combination used by pmdp_invalidate to
synchronize access from lock-free lookups. pte_access_permitted is used by
pmd_access_permitted, so allowing GUP lock free access to proceed with
such PTEs breaks this synchronisation.

This bug has been observed on HPT host, with random crashes and corruption
in guests, usually together with bad PMD messages in the host.

Fix this by adding an explicit check in pmd_access_permitted, and
documenting the condition explicitly.

The pte_write() change should be okay, and would prevent GUP from falling
back to the slow path when encountering savedwrite ptes, which matches
what x86 (that does not implement savedwrite) does.

Fixes: 1b2443a547f9 ("powerpc/book3s64: Avoid multiple endian conversion in pte helpers")
Cc: Aneesh Kumar K.V 
Cc: Christophe Leroy 
Signed-off-by: Nicholas Piggin 
---

I accounted for Aneesh's and Christophe's feedback, except I couldn't
find a good way to replace the ifdef with IS_ENABLED because of
_PAGE_INVALID etc., but at least cleaned that up a bit nicer.

Patch 1 solves a problem I can hit quite reliably running HPT/HPT KVM.
Patch 2 was noticed by Aneesh when inspecting code for similar bugs.
They should probably both be merged in stable kernels after upstream.

 arch/powerpc/include/asm/book3s/64/pgtable.h | 30 
 arch/powerpc/mm/book3s64/pgtable.c   |  3 ++
 2 files changed, 33 insertions(+)

diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
index 7dede2e34b70..ccf00a8b98c6 100644
--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
@@ -876,6 +876,23 @@ static inline int pmd_present(pmd_t pmd)
return false;
 }
 
+static inline int pmd_is_serializing(pmd_t pmd)
+{
+   /*
+* If the pmd is undergoing a split, the _PAGE_PRESENT bit is clear
+* and _PAGE_INVALID is set (see pmd_present, pmdp_invalidate).
+*
+* This condition may also occur when flushing a pmd while flushing
+* it (see ptep_modify_prot_start), so callers must ensure this
+* case is fine as well.
+*/
+   if ((pmd_raw(pmd) & cpu_to_be64(_PAGE_PRESENT | _PAGE_INVALID)) ==
+   cpu_to_be64(_PAGE_INVALID))
+   return true;
+
+   return false;
+}
+
 static inline int pmd_bad(pmd_t pmd)
 {
if (radix_enabled())
@@ -1092,6 +1109,19 @@ static inline int pmd_protnone(pmd_t pmd)
 #define pmd_access_permitted pmd_access_permitted
 static inline bool pmd_access_permitted(pmd_t pmd, bool write)
 {
+   /*
+* pmdp_invalidate sets this combination (which is not caught by
+* !pte_present() check in pte_access_permitted), to prevent
+* lock-free lookups, as part of the serialize_against_pte_lookup()
+* synchronisation.
+*
+* This also catches the case where the PTE's hardware PRESENT bit is
+* cleared while TLB is flushed, which is suboptimal but should not
+* be frequent.
+*/
+   if (pmd_is_serializing(pmd))
+   return false;
+
return pte_access_permitted(pmd_pte(pmd), write);
 }
 
diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c
index 16bda049187a..ff98b663c83e 100644
--- a/arch/powerpc/mm/book3s64/pgtable.c
+++ b/arch/powerpc/mm/book3s64/pgtable.c
@@ -116,6 +116,9 @@ pmd_t pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
/*
 * This ensures that generic code that rely on IRQ disabling
 * to prevent a parallel THP split work as expected.
+*
+* Marking the entry with _PAGE_INVALID && ~_PAGE_PRESENT requires
+* a special case check in pmd_access_permitted.
 */
serialize_against_pte_lookup(vma->vm_mm);
return __pmd(old_pmd);
-- 
2.20.1
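
To tie the pieces together, a simplified sketch of both sides of the
serialisation (the serialize_against_pte_lookup() body is abridged from
arch/powerpc/mm/book3s64/pgtable.c; lockless_huge_lookup(), gup_slow_path()
and use_huge_mapping() are hypothetical stand-ins for the fast-GUP logic in
mm/gup.c):

/*
 * Lock-free lookups run with interrupts disabled, so broadcasting an IPI
 * to the mm's CPUs and waiting for it guarantees that any walker which
 * could still see the old pmd has finished by the time this returns.
 */
static void do_nothing(void *unused) { }

void serialize_against_pte_lookup(struct mm_struct *mm)
{
        smp_mb();
        smp_call_function_many(mm_cpumask(mm), do_nothing, NULL, 1);
}

/*
 * Hypothetical fast-GUP side: with the fix, a pmd that pmdp_invalidate()
 * marked (!_PAGE_PRESENT && _PAGE_INVALID) fails pmd_access_permitted(),
 * so the walker drops to the locked slow path instead of dereferencing a
 * stale huge mapping.
 */
static int lockless_huge_lookup(pmd_t pmd, bool write)
{
        if (!pmd_access_permitted(pmd, write))
                return gup_slow_path();         /* hypothetical helper */

        return use_huge_mapping(pmd);           /* hypothetical helper */
}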