On Mon, Aug 19, 2019 at 05:37:36PM +0200, Andrey Konovalov wrote:
> On Mon, Aug 19, 2019 at 5:03 PM Mark Rutland <mark.rutl...@arm.com> wrote:
> >
> > On Mon, Aug 19, 2019 at 04:05:22PM +0200, Andrey Konovalov wrote:
> > > On Mon, Aug 19, 2019 at 3:34 PM Will Deacon <w...@kernel.org> wrote:
> > > >
> > > > On Mon, Aug 19, 2019 at 02:23:48PM +0100, Mark Rutland wrote:
> > > > > On Mon, Aug 19, 2019 at 01:56:26PM +0100, Will Deacon wrote:
> > > > > > On Mon, Aug 19, 2019 at 07:44:20PM +0800, Walter Wu wrote:
> > > > > > > __arm_v7s_unmap() calls iopte_deref() to translate a phys
> > > > > > > address to a virt address, but that modifies the pointer tag
> > > > > > > to 0xff, so there is a false positive.
> > > > > > >
> > > > > > > When tag-based KASAN is enabled, phys_to_virt() needs to
> > > > > > > restore the original pointer tag in order to avoid KASAN
> > > > > > > reporting a spurious memory corruption.
> > > > > >
> > > > > > Hmm. Which tree did you see this on? We've recently queued a
> > > > > > load of fixes in this area, but I /thought/ they were only
> > > > > > needed after the support for 52-bit virtual addressing in the
> > > > > > kernel.
> > > > >
> > > > > I'm seeing similar issues in the virtio blk code (splat below),
> > > > > atop the arm64 for-next/core branch. I think this is a latent
> > > > > issue, and people are only just starting to test with
> > > > > KASAN_SW_TAGS.
> > > > >
> > > > > It looks like the virtio blk code will round-trip a
> > > > > SLUB-allocated pointer from virt->page->virt, losing the
> > > > > per-object tag in the process.
> > > > >
> > > > > Our page_to_virt() seems to get a per-page tag, but this only
> > > > > makes sense if you're dealing with the page allocator, rather
> > > > > than something like SLUB, which carves a page into smaller
> > > > > objects, giving each object a distinct tag.
> > > > >
> > > > > Any round-trip of a pointer from SLUB is going to lose the
> > > > > per-object tag.
> > > >
> > > > Urgh, I wonder how this is supposed to work?
> > > >
> > > > If we end up having to check the KASAN shadow for *_to_virt(),
> > > > then why do we need to store anything in the page flags at all?
> > > > Andrey?
> > >
> > > As per 2813b9c0 ("kasan, mm, arm64: tag non slab memory allocated
> > > via pagealloc") we should only save a non-0xff tag in page flags
> > > for non slab pages.
> > >
> > > Could you share your .config so I can reproduce this?
> >
> > I wrote a test (below) to do so. :)
> >
> > It fires with arm64 defconfig, + CONFIG_TEST_KASAN=m.
> >
> > With Andrey Ryabinin's patch it works as expected with no KASAN
> > splats for the two new test cases.
>
> OK, Andrey's patch makes sense and fixes both Mark's test patch and
> reports from CONFIG_IOMMU_IO_PGTABLE_ARMV7S_SELFTEST.
>
> Tested-by: Andrey Konovalov <andreyk...@google.com>
> Reviewed-by: Andrey Konovalov <andreyk...@google.com>
>
> on both patches.
>
> > Thanks,
> > Mark.
> >
> > ---->8----
> > From 7e8569b558fca21ad4e80fddae659591bc84ce1f Mon Sep 17 00:00:00 2001
> > From: Mark Rutland <mark.rutl...@arm.com>
> > Date: Mon, 19 Aug 2019 15:39:32 +0100
> > Subject: [PATCH] lib/test_kasan: add roundtrip tests
> >
> > In several places we needs to be able to operate on pointers which have
>
> "needs" => "need"
Thanks! I'll spin a standalone v2 of this with that fixed and your tags
folded in.

Mark.

> > gone via a roundtrip:
> >
> >     virt -> {phys,page} -> virt
> >
> > With KASAN_SW_TAGS, we can't preserve the tag for SLUB objects, and the
> > {phys,page} -> virt conversion will use KASAN_TAG_KERNEL.
> >
> > This patch adds tests to ensure that this works as expected, without
> > false positives.
> >
> > Signed-off-by: Mark Rutland <mark.rutl...@arm.com>
> > Cc: Andrey Ryabinin <aryabi...@virtuozzo.com>
> > Cc: Andrey Konovalov <andreyk...@google.com>
> > Cc: Will Deacon <will.dea...@arm.com>
> > ---
> >  lib/test_kasan.c | 40 ++++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 40 insertions(+)
> >
> > diff --git a/lib/test_kasan.c b/lib/test_kasan.c
> > index b63b367a94e8..cf7b93f0d90c 100644
> > --- a/lib/test_kasan.c
> > +++ b/lib/test_kasan.c
> > @@ -19,6 +19,8 @@
> >  #include <linux/string.h>
> >  #include <linux/uaccess.h>
> >
> > +#include <asm/page.h>
> > +
> >  /*
> >   * Note: test functions are marked noinline so that their names appear in
> >   * reports.
> > @@ -337,6 +339,42 @@ static noinline void __init kmalloc_uaf2(void)
> >  	kfree(ptr2);
> >  }
> >
> > +static noinline void __init kfree_via_page(void)
> > +{
> > +	char *ptr;
> > +	size_t size = 8;
> > +	struct page *page;
> > +	unsigned long offset;
> > +
> > +	pr_info("invalid-free false positive (via page)\n");
> > +	ptr = kmalloc(size, GFP_KERNEL);
> > +	if (!ptr) {
> > +		pr_err("Allocation failed\n");
> > +		return;
> > +	}
> > +
> > +	page = virt_to_page(ptr);
> > +	offset = offset_in_page(ptr);
> > +	kfree(page_address(page) + offset);
> > +}
> > +
> > +static noinline void __init kfree_via_phys(void)
> > +{
> > +	char *ptr;
> > +	size_t size = 8;
> > +	phys_addr_t phys;
> > +
> > +	pr_info("invalid-free false positive (via phys)\n");
> > +	ptr = kmalloc(size, GFP_KERNEL);
> > +	if (!ptr) {
> > +		pr_err("Allocation failed\n");
> > +		return;
> > +	}
> > +
> > +	phys = virt_to_phys(ptr);
> > +	kfree(phys_to_virt(phys));
> > +}
> > +
> >  static noinline void __init kmem_cache_oob(void)
> >  {
> >  	char *p;
> > @@ -737,6 +775,8 @@ static int __init kmalloc_tests_init(void)
> >  	kmalloc_uaf();
> >  	kmalloc_uaf_memset();
> >  	kmalloc_uaf2();
> > +	kfree_via_page();
> > +	kfree_via_phys();
> >  	kmem_cache_oob();
> >  	memcg_accounted_kmem_cache();
> >  	kasan_stack_oob();
> > --
> > 2.11.0
> >