On Wed, Jun 10, 2020 at 04:39:44PM +0530, Shyam Thombre wrote:
> KASAN software tagging sets a random 8-bit tag in the top byte of the
> pointer returned by the memory allocation functions. For functions
> unaware of this change, the top 8 bits of the address must be reset,
> which is done by arch_kasan_reset_tag().
> 
> Signed-off-by: Shyam Thombre <sthom...@codeaurora.org>
> ---
>  arch/arm64/mm/mmu.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index e7fbc62..eae7655 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -723,6 +723,7 @@ int kern_addr_valid(unsigned long addr)
>       pmd_t *pmdp, pmd;
>       pte_t *ptep, pte;
>  
> +     addr = arch_kasan_reset_tag(addr);
>       if ((((long)addr) >> VA_BITS) != -1UL)
>               return 0;
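
For context, a tagged address would indeed trip the VA_BITS sign check
quoted above, since the tag replaces the all-ones top byte of a kernel
linear address. Below is a minimal userspace sketch of that, not kernel
code: VA_BITS=48, the example address and the 0x5a tag are made up, and
reset_tag() only models the sign extension from bit 55 that
arch_kasan_reset_tag() roughly amounts to on arm64, not the exact kernel
definition.

#include <stdint.h>
#include <stdio.h>

#define VA_BITS			48
#define KASAN_TAG_SHIFT		56

/* Model of resetting the tag: sign-extend from bit 55 into bits 63:56. */
static uint64_t reset_tag(uint64_t addr)
{
	return (uint64_t)(((int64_t)(addr << 8)) >> 8);
}

int main(void)
{
	uint64_t linear = 0xffff000012345678ULL;	/* untagged kernel VA */
	uint64_t tagged = (linear & ~(0xffULL << KASAN_TAG_SHIFT)) |
			  (0x5aULL << KASAN_TAG_SHIFT);	/* tag 0x5a in top byte */

	/* The check from kern_addr_valid(): top bits must be all ones. */
	printf("untagged passes: %d\n", ((int64_t)linear >> VA_BITS) == -1L);
	printf("tagged passes:   %d\n", ((int64_t)tagged >> VA_BITS) == -1L);
	printf("after reset:     %d\n",
	       ((int64_t)reset_tag(tagged) >> VA_BITS) == -1L);
	return 0;
}

So the patch only matters if a tagged address can actually reach this
function in the first place.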

It would be interesting to know what fails without this patch. The only
user seems to be read_kcore() and, at a quick look, I don't see how it
can generate tagged addresses.

-- 
Catalin
