On 4/13/19 2:26 AM, Qian Cai wrote:
> has_unmovable_pages() is used when allocating CMA and gigantic pages
> as well as by memory hotplug. The latter doesn't know how to offline
> the CMA pool properly now, but if an unused (free) CMA page is
> encountered, then has_unmovable_pages() happily considers it free
> memory and propagates this up the call chain. Memory offlining code
> then frees the page without a proper CMA tear down, which leads to
> accounting issues. Moreover, if the same memory range is onlined
> again, the memory never gets back to the CMA pool.
> 
> State after memory offline:
>  # grep cma /proc/vmstat
>  nr_free_cma 205824
> 
>  # cat /sys/kernel/debug/cma/cma-kvm_cma/count
>  209920
> 
> Also, kmemleak still thinks those memory addresses are reserved, but
> they have already been used by the buddy allocator after onlining.
> 
> Offlined Pages 4096
> kmemleak: Cannot insert 0xc000201f7d040008 into the object search tree
> (overlaps existing)
> Call Trace:
> [c00000003dc2faf0] [c000000000884b2c] dump_stack+0xb0/0xf4 (unreliable)
> [c00000003dc2fb30] [c000000000424fb4] create_object+0x344/0x380
> [c00000003dc2fbf0] [c0000000003d178c] __kmalloc_node+0x3ec/0x860
> [c00000003dc2fc90] [c000000000319078] kvmalloc_node+0x58/0x110
> [c00000003dc2fcd0] [c000000000484d9c] seq_read+0x41c/0x620
> [c00000003dc2fd60] [c0000000004472bc] __vfs_read+0x3c/0x70
> [c00000003dc2fd80] [c0000000004473ac] vfs_read+0xbc/0x1a0
> [c00000003dc2fdd0] [c00000000044783c] ksys_read+0x7c/0x140
> [c00000003dc2fe20] [c00000000000b108] system_call+0x5c/0x70
> kmemleak: Kernel memory leak detector disabled
> kmemleak: Object 0xc000201cc8000000 (size 13757317120):
> kmemleak:   comm "swapper/0", pid 0, jiffies 4294937297
> kmemleak:   min_count = -1
> kmemleak:   count = 0
> kmemleak:   flags = 0x5
> kmemleak:   checksum = 0
> kmemleak:   backtrace:
>      cma_declare_contiguous+0x2a4/0x3b0
>      kvm_cma_reserve+0x11c/0x134
>      setup_arch+0x300/0x3f8
>      start_kernel+0x9c/0x6e8
>      start_here_common+0x1c/0x4b0
> kmemleak: Automatic memory scanning thread ended

There's nothing here about what the patch does, except the subject,
which is long forgotten by the time the reader gets here. What about
something like:

This patch fixes the situation by treating CMA pageblocks as unmovable,
except when has_unmovable_pages() is called as part of CMA allocation.
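The "except" part is decided through the migratetype argument: if I
read the callers right, only alloc_contig_range() for CMA isolates
ranges with MIGRATE_CMA, so has_unmovable_pages() can tell the CMA
allocator apart from memory hotplug. Roughly (a sketch only - the
quoted hunk below is truncated, so the exact body and the "unmovable"
bail-out label are my reconstruction):

	if (is_migrate_cma(get_pageblock_migratetype(page))) {
		/* only the CMA allocator isolates with MIGRATE_CMA */
		if (is_migrate_cma(migratetype))
			return false;	/* movable for CMA's purposes */
		/* everyone else (memory hotplug) must back off */
		goto unmovable;
	}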

> Acked-by: Michal Hocko <mho...@suse.com>
> Signed-off-by: Qian Cai <c...@lca.pw>

Acked-by: Vlastimil Babka <vba...@suse.cz>

> @@ -8015,17 +8018,20 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
>        * can still lead to having bootmem allocations in zone_movable.
>        */
>  
> -     /*
> -      * CMA allocations (alloc_contig_range) really need to mark isolate
> -      * CMA pageblocks even when they are not movable in fact so consider
> -      * them movable here.
> -      */
> -     if (is_migrate_cma(migratetype) &&
> -                     is_migrate_cma(get_pageblock_migratetype(page)))
> -             return false;
> +     if (is_migrate_cma(get_pageblock_migratetype(page))) {

Nit: since you were already refactoring a bit as part of the patch,
this could use is_migrate_cma_page(page).
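With the helper applied, the branch would read:

	if (is_migrate_cma_page(page)) {

(it expands to essentially the same get_pageblock_migratetype()
comparison, so this is purely cosmetic)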
