On Fri, May 22, 2020 at 06:00:52PM -0400, Qian Cai wrote:
> diff --git a/mm/z3fold.c b/mm/z3fold.c
> index 8c3bb5e508b8..460b0feced26 100644
> --- a/mm/z3fold.c
> +++ b/mm/z3fold.c
> @@ -43,6 +43,7 @@
>  #include <linux/spinlock.h>
>  #include <linux/zpool.h>
>  #include <linux/magic.h>
> +#include <linux/kmemleak.h>
>  
>  /*
>   * NCHUNKS_ORDER determines the internal allocation granularity, effectively
> @@ -215,6 +216,8 @@ static inline struct z3fold_buddy_slots *alloc_slots(struct z3fold_pool *pool,
>                                (gfp & ~(__GFP_HIGHMEM | __GFP_MOVABLE)));
>  
>       if (slots) {
> +             /* It will be freed separately in free_handle(). */
> +             kmemleak_not_leak(slots);
>               memset(slots->slot, 0, sizeof(slots->slot));
>               slots->pool = (unsigned long)pool;
>               rwlock_init(&slots->lock);

Acked-by: Catalin Marinas <catalin.mari...@arm.com>

An alternative would have been a kmemleak_alloc(zhdr, sizeof(*zhdr), 1, gfp)
in init_z3fold_page() and a corresponding kmemleak_free() in
free_z3fold_page() (if !headless), since kmemleak doesn't track page
allocations. The advantage is that kmemleak would then scan the header and
follow the slots pointer, so it could still catch a real slots leak. But if
the code makes it clear enough that the slots are always freed, keeping the
kmemleak_not_leak() annotation is fine.

-- 
Catalin
