On 2016/3/26 3:22, Andrew Morton wrote:

> On Fri, 25 Mar 2016 14:56:04 +0800 Xishi Qiu <[email protected]> wrote:
> 
>> It is incorrect to use next_node() to find a target node; it may
>> return MAX_NUMNODES or an offline node, which will lead to a crash
>> in the buddy allocator.
>>
>> ...
>>
>> --- a/mm/page_isolation.c
>> +++ b/mm/page_isolation.c
>> @@ -289,11 +289,11 @@ struct page *alloc_migrate_target(struct page *page, unsigned long private,
>>       * now as a simple work-around, we use the next node for destination.
>>       */
>>      if (PageHuge(page)) {
>> -            nodemask_t src = nodemask_of_node(page_to_nid(page));
>> -            nodemask_t dst;
>> -            nodes_complement(dst, src);
>> +            int node = next_online_node(page_to_nid(page));
>> +            if (node == MAX_NUMNODES)
>> +                    node = first_online_node;
>>              return alloc_huge_page_node(page_hstate(compound_head(page)),
>> -                                        next_node(page_to_nid(page), dst));
>> +                                        node);
>>      }
>>  
>>      if (PageHighMem(page))
> 
> Indeed.  Can you tell us more about the circumstances under which the
> kernel will crash?  I need to decide which kernel version(s) need the
> patch, but the changelog doesn't contain the info needed to make this
> decision (it should).
> 

Hi Andrew,

I read the v4.4 code and found the following path that may trigger the bug:

alloc_migrate_target()
        alloc_huge_page_node()  // the node may be offline or MAX_NUMNODES
                __alloc_buddy_huge_page_no_mpol()
                        __alloc_buddy_huge_page()
                                __hugetlb_alloc_buddy_huge_page()
                                        alloc_pages_node()
                                                __alloc_pages_node()
                                                        VM_BUG_ON(nid < 0 || nid >= MAX_NUMNODES);
                                                        VM_WARN_ON(!node_online(nid));
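
To illustrate the difference outside the kernel, here is a minimal
userspace sketch of the two node-selection strategies. MAX_NUMNODES,
the online mask, and the next_node() helper below are simplified
stand-ins for the kernel's nodemask API, chosen only for this demo:

#include <stdio.h>

#define MAX_NUMNODES 8

/* Pretend only nodes 0 and 2 are online. */
static const unsigned long online_mask = (1UL << 0) | (1UL << 2);

/* Like the kernel's next_node(): first node > n set in mask. */
static int next_node(int n, unsigned long mask)
{
	for (n = n + 1; n < MAX_NUMNODES; n++)
		if (mask & (1UL << n))
			return n;
	return MAX_NUMNODES;
}

int main(void)
{
	int nid = 2;	/* node of the source page */

	/* Old logic: complement of the source node's mask. The
	 * complement also contains every offline node ID, so the
	 * result is usually just nid + 1, online or not. */
	unsigned long src = 1UL << nid;
	unsigned long dst = ~src & ((1UL << MAX_NUMNODES) - 1);
	printf("old: target = %d (offline!)\n", next_node(nid, dst));

	/* Fixed logic: next online node, wrapping around to the
	 * first online node when we fall off the end of the mask. */
	int node = next_node(nid, online_mask);
	if (node == MAX_NUMNODES)
		node = next_node(-1, online_mask); /* first_online_node */
	printf("new: target = %d (always online)\n", node);

	return 0;
}

With nodes 0 and 2 online, the old complement-based lookup returns
node 3, which is offline, while the fixed logic wraps around to
node 0. In the kernel, the offline node ID chosen by the old code
reaches __alloc_pages_node() and trips the checks shown above.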

Thanks,
Xishi Qiu
