4.7-stable review patch. If anyone has any objections, please let me know.
------------------

From: Alexander Potapenko <[email protected]>

commit c146a2b98eb5898eb0fab15a332257a4102ecae9 upstream.

When looking up the nearest SLUB object for a given address, correctly
calculate its offset if SLAB_RED_ZONE is enabled for that cache.

Previously, when KASAN had detected an error on an object from a cache
with SLAB_RED_ZONE set, the actual start address of the object was
miscalculated, which led to random stacks having been reported.

Fixes: 7ed2f9e663854db ("mm, kasan: SLAB support")
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Alexander Potapenko <[email protected]>
Cc: Andrey Konovalov <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Dmitry Vyukov <[email protected]>
Cc: Steven Rostedt (Red Hat) <[email protected]>
Cc: Joonsoo Kim <[email protected]>
Cc: Kostya Serebryany <[email protected]>
Cc: Andrey Ryabinin <[email protected]>
Cc: Kuthonuzo Luruo <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
 include/linux/slub_def.h |   10 ++++++----
 mm/slub.c                |    2 +-
 2 files changed, 7 insertions(+), 5 deletions(-)

--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -114,15 +114,17 @@ static inline void sysfs_slab_remove(str
 void object_err(struct kmem_cache *s, struct page *page,
 			u8 *object, char *reason);
 
+void *fixup_red_left(struct kmem_cache *s, void *p);
+
 static inline void *nearest_obj(struct kmem_cache *cache, struct page *page,
 				void *x) {
 	void *object = x - (x - page_address(page)) % cache->size;
 	void *last_object = page_address(page) +
 		(page->objects - 1) * cache->size;
-	if (unlikely(object > last_object))
-		return last_object;
-	else
-		return object;
+	void *result = (unlikely(object > last_object)) ? last_object : object;
+
+	result = fixup_red_left(cache, result);
+	return result;
 }
 
 #endif /* _LINUX_SLUB_DEF_H */
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -124,7 +124,7 @@ static inline int kmem_cache_debug(struc
 #endif
 }
 
-static inline void *fixup_red_left(struct kmem_cache *s, void *p)
+inline void *fixup_red_left(struct kmem_cache *s, void *p)
 {
 	if (kmem_cache_debug(s) && s->flags & SLAB_RED_ZONE)
 		p += s->red_left_pad;
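[Reviewer note, not part of the upstream commit: the standalone userspace
sketch below mirrors the patched nearest_obj()/fixup_red_left() arithmetic to
show why the left redzone offset matters. struct mock_cache, its fields, and
the constants used here are simplified stand-ins, not kernel APIs.]

/*
 * Illustrative sketch: with SLAB_RED_ZONE, each cache->size slot starts
 * with a left redzone of red_left_pad bytes, so rounding an address down
 * to the slot start lands on the redzone, not the object. fixup_red_left()
 * steps over that padding to get the real object start.
 */
#include <stdio.h>
#include <stddef.h>
#include <stdint.h>

struct mock_cache {
	size_t size;          /* stride between objects, including redzones */
	size_t red_left_pad;  /* left redzone placed before each object     */
	int red_zone;         /* stand-in for (flags & SLAB_RED_ZONE)       */
};

/* Mirrors the patched fixup_red_left(): skip the left redzone. */
static void *fixup_red_left(const struct mock_cache *c, void *p)
{
	if (c->red_zone)
		p = (char *)p + c->red_left_pad;
	return p;
}

/* Mirrors the patched nearest_obj(): round down to the slot start,
 * clamp to the last slot, then step over the left redzone. */
static void *nearest_obj(const struct mock_cache *c, void *page_addr,
			 unsigned int objects, void *x)
{
	uintptr_t off = (uintptr_t)x - (uintptr_t)page_addr;
	void *object = (char *)x - off % c->size;
	void *last_object = (char *)page_addr + (objects - 1) * c->size;
	void *result = (object > last_object) ? last_object : object;

	return fixup_red_left(c, result);
}

int main(void)
{
	static char page[4096];
	struct mock_cache c = { .size = 256, .red_left_pad = 64, .red_zone = 1 };
	void *addr = page + 3 * c.size + 100;	/* somewhere inside slot 3 */

	/* Without fixup_red_left() this would report the slot (redzone)
	 * start at page + 768 instead of the object start at page + 832,
	 * which is how KASAN ended up printing unrelated stacks. */
	printf("object start: %p (slot start %p)\n",
	       nearest_obj(&c, page, 4096 / c.size, addr),
	       (void *)(page + 3 * c.size));
	return 0;
}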

