On Mon, Apr 05, 2021 at 04:00:41PM -0700, Mike Kravetz wrote:
> free_pool_huge_page was called with hugetlb_lock held. It would remove
> a hugetlb page, and then free the corresponding pages to the lower level
> allocators such as buddy. free_pool_huge_page was called in a loop to
> remove hugetlb pages and these loops could hold the hugetlb_lock for a
> considerable time.
>
> Create new routine remove_pool_huge_page to replace free_pool_huge_page.
> remove_pool_huge_page will remove the hugetlb page, and it must be
> called with the hugetlb_lock held. It will return the removed page and
> it is the responsibility of the caller to free the page to the lower
> level allocators. The hugetlb_lock is dropped before freeing to these
> allocators, which results in shorter lock hold times.
>
> Add a new helper routine to call update_and_free_page for a list of pages.
>
> Note: Some changes to the routine return_unused_surplus_pages are in
> need of explanation. Commit e5bbc8a6c992 ("mm/hugetlb.c: fix reservation
> race when freeing surplus pages") modified this routine to address a
> race which could occur when dropping the hugetlb_lock in the loop that
> removes pool pages. Accounting changes introduced in that commit were
> subtle and took some thought to understand. This commit removes the
> cond_resched_lock() and the potential race. Therefore, remove the
> subtle code and restore the more straightforward accounting, effectively
> reverting the commit.
>
> Signed-off-by: Mike Kravetz <mike.krav...@oracle.com>
> Reviewed-by: Muchun Song <songmuc...@bytedance.com>
> Acked-by: Michal Hocko <mho...@suse.com>
Reviewed-by: Oscar Salvador <osalva...@suse.de> It crossed my mind that we could use update_and_free_pages_bulk() from dissolve_free_huge_pages() <-> dissolve_free_huge_page() but the latter looks too hairy already and it is probably not worth it. -- Oscar Salvador SUSE L3