Re: [PATCH] arm64: mm: reset address tag set by kasan sw tagging

2020-06-12 Thread Shyam Thombre

Hi Catalin,


On 6/10/2020 5:06 PM, Catalin Marinas wrote:

On Wed, Jun 10, 2020 at 04:39:44PM +0530, Shyam Thombre wrote:

KASAN sw tagging sets a random tag of 8 bits in the top byte of the pointer
returned by the memory allocating functions. So for the functions unaware
of this change, the top 8 bits of the address must be reset, which is done
by the function arch_kasan_reset_tag().

Signed-off-by: Shyam Thombre 
---
  arch/arm64/mm/mmu.c | 1 +
  1 file changed, 1 insertion(+)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index e7fbc62..eae7655 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -723,6 +723,7 @@ int kern_addr_valid(unsigned long addr)
pmd_t *pmdp, pmd;
pte_t *ptep, pte;
  
+	addr = arch_kasan_reset_tag(addr);
	if ((((long)addr) >> VA_BITS) != -1UL)
		return 0;


It would be interesting to know what fails without this patch. The only
user seems to be read_kcore() and, at a quick look, I don't see how it
can generate tagged addresses.



This issue is seen in downstream GPU drivers; it does not currently appear
to affect any upstream users.
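
To make the failure mode concrete, here is a minimal standalone sketch
(illustrative only, not the kernel code) of how a software tag in bits 56-63
trips the sign check used by kern_addr_valid(), assuming VA_BITS = 48:

/*
 * Illustrative only: models the kern_addr_valid() sign check with a
 * hypothetical tagged kernel address. Assumes VA_BITS = 48 and a KASAN
 * software tag stored in bits 56-63.
 */
#include <stdint.h>
#include <stdio.h>

#define VA_BITS		48
#define TAG_SHIFT	56

/* Clear the tag byte by sign-extending from bit 55 (one possible form). */
static uint64_t reset_tag(uint64_t addr)
{
	return (uint64_t)((int64_t)(addr << 8) >> 8);
}

int main(void)
{
	uint64_t addr = 0xffff000012345678ULL;		/* untagged kernel VA */
	uint64_t tagged = (addr & ~(0xffULL << TAG_SHIFT)) |
			  (0x5aULL << TAG_SHIFT);	/* random tag 0x5a */

	/* The check in kern_addr_valid(): the top bits must all be ones. */
	printf("untagged passes: %d\n", ((int64_t)addr >> VA_BITS) == -1L);
	printf("tagged passes:   %d\n", ((int64_t)tagged >> VA_BITS) == -1L);
	printf("reset passes:    %d\n",
	       ((int64_t)reset_tag(tagged) >> VA_BITS) == -1L);
	return 0;
}

Built with an ordinary userspace toolchain this should print 1 / 0 / 1: once
tagged, the address no longer sign-extends to -1 when shifted right by
VA_BITS, which is why the tag has to be reset before that check.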



--
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project


Re: [PATCH] arm64: mm: reset address tag set by kasan sw tagging

2020-06-15 Thread Will Deacon
On Wed, 10 Jun 2020 16:39:44 +0530, Shyam Thombre wrote:
> KASAN sw tagging sets a random tag of 8 bits in the top byte of the pointer
> returned by the memory allocating functions. So for the functions unaware
> of this change, the top 8 bits of the address must be reset, which is done
> by the function arch_kasan_reset_tag().

Applied to arm64 (for-next/fixes), thanks!

[1/1] arm64: mm: reset address tag set by kasan sw tagging
  https://git.kernel.org/arm64/c/8dd4daa04278

Cheers,
-- 
Will

https://fixes.arm64.dev
https://next.arm64.dev
https://will.arm64.dev


Re: [PATCH] arm64: mm: reset address tag set by kasan sw tagging

2020-06-10 Thread Catalin Marinas
On Wed, Jun 10, 2020 at 04:39:44PM +0530, Shyam Thombre wrote:
> KASAN sw tagging sets a random tag of 8 bits in the top byte of the pointer
> returned by the memory allocating functions. So for the functions unaware
> of this change, the top 8 bits of the address must be reset, which is done
> by the function arch_kasan_reset_tag().
> 
> Signed-off-by: Shyam Thombre 
> ---
>  arch/arm64/mm/mmu.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index e7fbc62..eae7655 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -723,6 +723,7 @@ int kern_addr_valid(unsigned long addr)
>   pmd_t *pmdp, pmd;
>   pte_t *ptep, pte;
>  
> + addr = arch_kasan_reset_tag(addr);
>   if ((((long)addr) >> VA_BITS) != -1UL)
>   return 0;

It would be interesting to know what fails without this patch. The only
user seems to be read_kcore() and, at a quick look, I don't see how it
can generate tagged addresses.
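
For reference, the helper named in the commit message reduces, roughly, to
the shape below on arm64 (a reconstructed sketch, not a verbatim copy of
arch/arm64/include/asm/memory.h): with CONFIG_KASAN_SW_TAGS it clears the
tag byte by sign-extending from bit 55, otherwise it is an identity, so the
extra line in kern_addr_valid() costs nothing on untagged kernels.

/* Reconstructed sketch of the arm64 tag-reset helper, not the real header. */
#include <linux/bitops.h>	/* sign_extend64() */

#ifdef CONFIG_KASAN_SW_TAGS
#define arch_kasan_reset_tag(addr) \
	((__typeof__(addr))sign_extend64((u64)(addr), 55))
#else
#define arch_kasan_reset_tag(addr)	(addr)
#endif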

-- 
Catalin