On 9/2/20 7:25 PM, Mike Kravetz wrote:
> On 9/2/20 3:49 AM, Vlastimil Babka wrote:
>> On 9/1/20 3:46 AM, Wei Yang wrote:
>>> The page allocated from buddy is not on any list, so using list_add()
>>> is enough.
>>>
>>> Signed-off-by: Wei Yang <richard.weiy...@linux.alibaba.com>
>>> Reviewed-by: Baoquan He <b...@redhat.com>
>>> Reviewed-by: Mike Kravetz <mike.krav...@oracle.com>
>>> ---
>>>  mm/hugetlb.c | 2 +-
>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>>> index 441b7f7c623e..c9b292e664c4 100644
>>> --- a/mm/hugetlb.c
>>> +++ b/mm/hugetlb.c
>>> @@ -2405,7 +2405,7 @@ struct page *alloc_huge_page(struct vm_area_struct *vma,
>>>                     h->resv_huge_pages--;
>>>             }
>>>             spin_lock(&hugetlb_lock);
>>> -           list_move(&page->lru, &h->hugepage_activelist);
>>> +           list_add(&page->lru, &h->hugepage_activelist);
>> 
>> Hmm, how does that list_move() actually not crash today?
>> Page has been taken from free lists, thus there was list_del() and page->lru
>> should be poisoned.
>> list_move() does __list_del_entry() which will either detect the poison with
>> CONFIG_DEBUG_LIST, or crash accessing the poison, no?
>> Am I missing something, or does it mean this code is actually never executed
>> in the wild?
>> 
> 
> There is not enough context in the diff, but the hugetlb page was not taken
> from the free list.  Rather, it was just created by a call to
> alloc_buddy_huge_page_with_mpol().  As part of the allocation/creation
> prep_new_huge_page will be called which will INIT_LIST_HEAD(&page->lru).

Ah, so indeed I was missing something :) Thanks. Then this is indeed an
optimization and not a bugfix, and doesn't need stable@. Sorry for the noise.
