From: Andrey Konovalov <andreyk...@google.com>

[ Upstream commit a71012242837fe5e67d8c999cfc357174ed5dba0 ]

With tag based KASAN page_address() looks at the page flags to see
whether the resulting pointer needs to have a tag set.  Since we don't
want to set a tag when page_address() is called on SLAB pages, we call
page_kasan_tag_reset() in kasan_poison_slab().  However in
allocate_slab() page_address() is called before kasan_poison_slab().
Fix it by changing the order.

[andreyk...@google.com: fix compilation error when CONFIG_SLUB_DEBUG=n]
  Link: http://lkml.kernel.org/r/ac27cc0bbaeb414ed77bcd6671a877cf3546d56e.1550066133.git.andreyk...@google.com
Link: http://lkml.kernel.org/r/cd895d627465a3f1c712647072d17f10883be2a1.1549921721.git.andreyk...@google.com
Signed-off-by: Andrey Konovalov <andreyk...@google.com>
Cc: Alexander Potapenko <gli...@google.com>
Cc: Andrey Ryabinin <aryabi...@virtuozzo.com>
Cc: Catalin Marinas <catalin.mari...@arm.com>
Cc: Christoph Lameter <c...@linux.com>
Cc: David Rientjes <rient...@google.com>
Cc: Dmitry Vyukov <dvyu...@google.com>
Cc: Evgeniy Stepanov <euge...@google.com>
Cc: Joonsoo Kim <iamjoonsoo....@lge.com>
Cc: Kostya Serebryany <k...@google.com>
Cc: Pekka Enberg <penb...@kernel.org>
Cc: Qian Cai <c...@lca.pw>
Cc: Vincenzo Frascino <vincenzo.frasc...@arm.com>
Signed-off-by: Andrew Morton <a...@linux-foundation.org>
Signed-off-by: Linus Torvalds <torva...@linux-foundation.org>
Signed-off-by: Sasha Levin <sas...@kernel.org>
---
 mm/slub.c | 19 +++++++++++++++----
 1 file changed, 15 insertions(+), 4 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index e3629cd7aff1..d1e053d48f46 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1075,6 +1075,16 @@ static void setup_object_debug(struct kmem_cache *s, struct page *page,
 	init_tracking(s, object);
 }
 
+static void setup_page_debug(struct kmem_cache *s, void *addr, int order)
+{
+	if (!(s->flags & SLAB_POISON))
+		return;
+
+	metadata_access_enable();
+	memset(addr, POISON_INUSE, PAGE_SIZE << order);
+	metadata_access_disable();
+}
+
 static inline int alloc_consistency_checks(struct kmem_cache *s,
 					struct page *page,
 					void *object, unsigned long addr)
@@ -1330,6 +1340,8 @@ slab_flags_t kmem_cache_flags(unsigned int object_size,
 #else /* !CONFIG_SLUB_DEBUG */
 static inline void setup_object_debug(struct kmem_cache *s,
 			struct page *page, void *object) {}
+static inline void setup_page_debug(struct kmem_cache *s,
+			void *addr, int order) {}
 
 static inline int alloc_debug_processing(struct kmem_cache *s,
 	struct page *page, void *object, unsigned long addr) { return 0; }
@@ -1640,12 +1652,11 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
 	if (page_is_pfmemalloc(page))
 		SetPageSlabPfmemalloc(page);
 
-	start = page_address(page);
+	kasan_poison_slab(page);
 
-	if (unlikely(s->flags & SLAB_POISON))
-		memset(start, POISON_INUSE, PAGE_SIZE << order);
+	start = page_address(page);
 
-	kasan_poison_slab(page);
+	setup_page_debug(s, start, order);
 
 	shuffle = shuffle_freelist(s, page);
 
-- 
2.19.1
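
For context (not part of the patch itself): a condensed sketch of the
relevant part of allocate_slab() after this change, with allocation,
error handling and the freelist setup omitted. The function names match
the diff above; the abbreviated structure and the comments are mine.

	static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
	{
		struct page *page;
		void *start;
		...
		/*
		 * Resets the KASAN tag stored in page->flags (via
		 * page_kasan_tag_reset()), so that subsequent page_address()
		 * calls return an untagged pointer for this slab page.
		 */
		kasan_poison_slab(page);

		/*
		 * Safe now: before this patch, page_address() ran first and
		 * could return a pointer carrying a stale tag from the flags.
		 */
		start = page_address(page);

		/*
		 * SLAB_POISON memset of the whole slab, moved out of line into
		 * setup_page_debug() so it compiles away when CONFIG_SLUB_DEBUG=n.
		 */
		setup_page_debug(s, start, order);
		...
	}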
With tag based KASAN page_address() looks at the page flags to see whether the resulting pointer needs to have a tag set. Since we don't want to set a tag when page_address() is called on SLAB pages, we call page_kasan_tag_reset() in kasan_poison_slab(). However in allocate_slab() page_address() is called before kasan_poison_slab(). Fix it by changing the order. [andreyk...@google.com: fix compilation error when CONFIG_SLUB_DEBUG=n] Link: http://lkml.kernel.org/r/ac27cc0bbaeb414ed77bcd6671a877cf3546d56e.1550066133.git.andreyk...@google.com Link: http://lkml.kernel.org/r/cd895d627465a3f1c712647072d17f10883be2a1.1549921721.git.andreyk...@google.com Signed-off-by: Andrey Konovalov <andreyk...@google.com> Cc: Alexander Potapenko <gli...@google.com> Cc: Andrey Ryabinin <aryabi...@virtuozzo.com> Cc: Catalin Marinas <catalin.mari...@arm.com> Cc: Christoph Lameter <c...@linux.com> Cc: David Rientjes <rient...@google.com> Cc: Dmitry Vyukov <dvyu...@google.com> Cc: Evgeniy Stepanov <euge...@google.com> Cc: Joonsoo Kim <iamjoonsoo....@lge.com> Cc: Kostya Serebryany <k...@google.com> Cc: Pekka Enberg <penb...@kernel.org> Cc: Qian Cai <c...@lca.pw> Cc: Vincenzo Frascino <vincenzo.frasc...@arm.com> Signed-off-by: Andrew Morton <a...@linux-foundation.org> Signed-off-by: Linus Torvalds <torva...@linux-foundation.org> Signed-off-by: Sasha Levin <sas...@kernel.org> --- mm/slub.c | 19 +++++++++++++++---- 1 file changed, 15 insertions(+), 4 deletions(-) diff --git a/mm/slub.c b/mm/slub.c index e3629cd7aff1..d1e053d48f46 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -1075,6 +1075,16 @@ static void setup_object_debug(struct kmem_cache *s, struct page *page, init_tracking(s, object); } +static void setup_page_debug(struct kmem_cache *s, void *addr, int order) +{ + if (!(s->flags & SLAB_POISON)) + return; + + metadata_access_enable(); + memset(addr, POISON_INUSE, PAGE_SIZE << order); + metadata_access_disable(); +} + static inline int alloc_consistency_checks(struct kmem_cache *s, struct page *page, void *object, unsigned long addr) @@ -1330,6 +1340,8 @@ slab_flags_t kmem_cache_flags(unsigned int object_size, #else /* !CONFIG_SLUB_DEBUG */ static inline void setup_object_debug(struct kmem_cache *s, struct page *page, void *object) {} +static inline void setup_page_debug(struct kmem_cache *s, + void *addr, int order) {} static inline int alloc_debug_processing(struct kmem_cache *s, struct page *page, void *object, unsigned long addr) { return 0; } @@ -1640,12 +1652,11 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node) if (page_is_pfmemalloc(page)) SetPageSlabPfmemalloc(page); - start = page_address(page); + kasan_poison_slab(page); - if (unlikely(s->flags & SLAB_POISON)) - memset(start, POISON_INUSE, PAGE_SIZE << order); + start = page_address(page); - kasan_poison_slab(page); + setup_page_debug(s, start, order); shuffle = shuffle_freelist(s, page); -- 2.19.1