This patch series adds proper validation of the object passed to ksize():
until now, ksize() has been unconditionally unpoisoning the entire memory
region associated with an allocation. This can lead to various undetected
bugs, for example a double-kzfree() going unnoticed because the second
kzfree() re-unpoisons the already-freed object via ksize().
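
To illustrate the kind of bug this hides, here is a minimal sketch (not a
patch from this series; it mirrors the double-kzfree test added in patch 2):

  char *ptr = kmalloc(16, GFP_KERNEL);

  kzfree(ptr);
  /*
   * Double-free: kzfree() calls ksize() to find out how much memory to
   * clear. Because ksize() unconditionally unpoisoned the object, the
   * subsequent memset() and kfree() of already-freed memory went
   * unreported by KASAN.
   */
  kzfree(ptr);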

To address this correctly for all allocators, and because we still need
access to an unchecked size lookup, we introduce __ksize() and refactor
the common ksize() logic into slab_common.c.
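
The refactored ksize() in slab_common.c then looks roughly as follows
(a sketch, not the exact patch; __ksize() is the new per-allocator helper
and __kasan_check_read() is introduced in patch 1):

  size_t ksize(const void *objp)
  {
          size_t size;

          /*
           * Check that the object is valid before unpoisoning anything:
           * __kasan_check_read() generates a KASAN report if it is not,
           * and we return 0 so callers do not write to (and potentially
           * corrupt) an invalid memory region.
           */
          if (unlikely(ZERO_OR_NULL_PTR(objp)) ||
              !__kasan_check_read(objp, 1))
                  return 0;

          size = __ksize(objp);
          /*
           * ksize() callers may use the whole allocated area, so the
           * entire object is unpoisoned here.
           */
          kasan_unpoison_shadow(objp, size);
          return size;
  }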

Furthermore, we introduce __kasan_check_{read,write}(), which can be used
even in compilation units where KASAN instrumentation is disabled (as is
the case for slab_common.c). See the inline comment for why
__kasan_check_read() is chosen to check the validity of an object inside
ksize().
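
For reference, the new declarations in <linux/kasan-checks.h> look roughly
like this (a sketch; with KASAN disabled the checks compile away and
always report success):

  #ifdef CONFIG_KASAN
  /* Return true if the access was valid, false if a report was generated. */
  bool __kasan_check_read(const volatile void *p, unsigned int size);
  bool __kasan_check_write(const volatile void *p, unsigned int size);
  #else
  static inline bool __kasan_check_read(const volatile void *p,
                                        unsigned int size)
  {
          return true;
  }
  static inline bool __kasan_check_write(const volatile void *p,
                                         unsigned int size)
  {
          return true;
  }
  #endif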

Previous version:
http://lkml.kernel.org/r/20190624110532.41065-1-el...@google.com

v2:
* Complete rewrite of patch, refactoring ksize() and relying on
  kasan_check_read for validation.

Marco Elver (4):
  mm/kasan: Introduce __kasan_check_{read,write}
  lib/test_kasan: Add test for double-kzfree detection
  mm/slab: Refactor common ksize KASAN logic into slab_common.c
  mm/kasan: Add object validation in ksize()

 include/linux/kasan-checks.h | 35 ++++++++++++++++++++++------
 include/linux/kasan.h        |  7 ++++--
 include/linux/slab.h         |  1 +
 lib/test_kasan.c             | 17 ++++++++++++++
 mm/kasan/common.c            | 14 +++++------
 mm/kasan/generic.c           | 13 ++++++-----
 mm/kasan/kasan.h             | 10 +++++++-
 mm/kasan/tags.c              | 12 ++++++----
 mm/slab.c                    | 28 +++++-----------------
 mm/slab_common.c             | 45 ++++++++++++++++++++++++++++++++++++
 mm/slob.c                    |  4 ++--
 mm/slub.c                    | 14 ++---------
 12 files changed, 135 insertions(+), 65 deletions(-)

-- 
2.22.0.410.gd8fdbe21b5-goog
