On Wed, 13 May 2020 17:05:25 +0300 Konstantin Khlebnikov 
<khlebni...@yandex-team.ru> wrote:

> Function isolate_migratepages_block() runs some checks outside of lru_lock
> when choosing pages for migration. After checking PageLRU() it checks for
> extra page references by comparing page_count() and page_mapcount(). Between
> these two checks the page could be removed from the LRU, freed, and
> reallocated by slab.
> 
> As a result this race triggers VM_BUG_ON(PageSlab()) in page_mapcount().
> The race window is tiny. For a certain workload this happens around once a year.
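
For context, page_mapcount() at the time read roughly as follows (a paraphrase
of include/linux/mm.h, not an exact quote); its VM_BUG_ON_PAGE() is the one
reported at mm.h:628 in the dump below:

static inline int page_mapcount(struct page *page)
{
	/* Fires if the page has meanwhile been freed and reused by slab. */
	VM_BUG_ON_PAGE(PageSlab(page), page);

	if (unlikely(PageCompound(page)))
		return __page_mapcount(page);
	return atomic_read(&page->_mapcount) + 1;
}

This is also why the fix below open-codes atomic_read(&page->_mapcount) + 1
for the 0-order case instead of calling page_mapcount() outside the lock.
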
> 
> 
>  page:ffffea0105ca9380 count:1 mapcount:0 mapping:ffff88ff7712c180 index:0x0 compound_mapcount: 0
>  flags: 0x500000000008100(slab|head)
>  raw: 0500000000008100 dead000000000100 dead000000000200 ffff88ff7712c180
>  raw: 0000000000000000 0000000080200020 00000001ffffffff 0000000000000000
>  page dumped because: VM_BUG_ON_PAGE(PageSlab(page))
>  ------------[ cut here ]------------
>  kernel BUG at ./include/linux/mm.h:628!
>  invalid opcode: 0000 [#1] SMP NOPTI
>  CPU: 77 PID: 504 Comm: kcompactd1 Tainted: G        W         4.19.109-27 #1
>  Hardware name: Yandex T175-N41-Y3N/MY81-EX0-Y3N, BIOS R05 06/20/2019
>  RIP: 0010:isolate_migratepages_block+0x986/0x9b0
> 
> 
> To fix this, open-code page_mapcount() in the racy check for the 0-order case
> and recheck carefully under lru_lock, where the page cannot escape from the LRU.
> 
> Also extend the check to file pages and the swap cache, accounting for their
> extra reference.
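
As a rough illustration of the reference accounting the new check relies on
(the struct and function below are a made-up userspace model, not kernel code):
an unpinned 0-order LRU page is expected to hold one reference per PTE mapping,
plus one more if the page cache or swap cache holds it; anything above that is
treated as an extra pin and isolation is skipped.

#include <stdbool.h>
#include <stdio.h>

/* Illustrative stand-in for the fields the check looks at, not struct page. */
struct page_model {
	int  count;         /* page_count(): total references          */
	int  raw_mapcount;  /* page->_mapcount, stored as mapcount - 1 */
	bool anon;          /* PageAnon()                              */
	bool swap_cache;    /* PageSwapCache()                         */
};

/* Mirrors the patch's racy test for 0-order pages. */
static bool has_extra_refs(const struct page_model *p)
{
	int mapcount  = p->raw_mapcount + 1;          /* open-coded page_mapcount() */
	int cache_ref = !p->anon || p->swap_cache;    /* mapping or swap cache ref  */

	return p->count > mapcount + cache_ref;
}

int main(void)
{
	/* File page mapped twice; the page cache holds the third reference. */
	struct page_model clean  = { .count = 3, .raw_mapcount = 1, .anon = false };
	/* The same page with one additional pin, e.g. an in-flight I/O. */
	struct page_model pinned = { .count = 4, .raw_mapcount = 1, .anon = false };

	printf("clean:  extra refs? %d\n", has_extra_refs(&clean));   /* prints 0 */
	printf("pinned: extra refs? %d\n", has_extra_refs(&pinned));  /* prints 1 */
	return 0;
}

The recheck in the second hunk can then call page_mapcount() safely: with
lru_lock held and the page still on the LRU, it cannot be freed and reused
underneath the check.
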

It sounds like a cc:stable is appropriate?

> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -935,12 +935,16 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
>               }
>  
>               /*
> -              * Migration will fail if an anonymous page is pinned in memory,
> +              * Migration will fail if a page is pinned in memory,
>                * so avoid taking lru_lock and isolating it unnecessarily in an
> -              * admittedly racy check.
> +              * admittedly racy check; handle only the simplest 0-order case here.
> +              *
> +              * Open code page_mapcount() to avoid VM_BUG_ON(PageSlab(page)).
> +              * The page could have an extra reference from its mapping or the swap cache.
>                */
> -             if (!page_mapping(page) &&
> -                 page_count(page) > page_mapcount(page))
> +             if (!PageCompound(page) &&
> +                 page_count(page) > atomic_read(&page->_mapcount) + 1 +
> +                             (!PageAnon(page) || PageSwapCache(page)))
>                       goto isolate_fail;
>  
>               /*
> @@ -975,6 +979,11 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
>                               low_pfn += compound_nr(page) - 1;
>                               goto isolate_fail;
>                       }
> +
> +                     /* Recheck the page's extra references under lru_lock */
> +                     if (page_count(page) > page_mapcount(page) +
> +                                 (!PageAnon(page) || PageSwapCache(page)))
> +                             goto isolate_fail;
>               }
>  
>               lruvec = mem_cgroup_page_lruvec(page, pgdat);
