Re: [External] Re: [PATCH v20 6/9] mm: hugetlb: alloc the vmemmap pages associated with each HugeTLB page

2021-04-20 Thread Mike Kravetz
On 4/20/21 1:46 AM, Muchun Song wrote: > On Tue, Apr 20, 2021 at 7:20 AM Mike Kravetz wrote: >> >> On 4/15/21 1:40 AM, Muchun Song wrote: >>> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h >>> index 0abed7e766b8..6e970a7d3480 100644 >>>

Re: [PATCH v20 7/9] mm: hugetlb: add a kernel parameter hugetlb_free_vmemmap

2021-04-19 Thread Mike Kravetz
> mm/hugetlb_vmemmap.c | 24 > 5 files changed, 69 insertions(+), 2 deletions(-) Thanks, Reviewed-by: Mike Kravetz -- Mike Kravetz

Re: [PATCH v20 6/9] mm: hugetlb: alloc the vmemmap pages associated with each HugeTLB page

2021-04-19 Thread Mike Kravetz
ge_vmemmap(h, page); > + if (!rc) { > + /* > + * Move PageHWPoison flag from head page to the raw > + * error page, which makes any subpages rather than > + * the error page reusable. > +

Re: [PATCH v20 5/9] mm: hugetlb: defer freeing of HugeTLB pages

2021-04-16 Thread Mike Kravetz
ageHuge(page), page) in page_hstate is going to trigger because a previous call to remove_hugetlb_page() will set_compound_page_dtor(page, NULL_COMPOUND_DTOR) Note how h(hstate) is grabbed before calling update_and_free_page in existing code. We could potentially drop the !PageHuge(page) in page_hstate. Or,

Re: [PATCH v20 4/9] mm: hugetlb: free the vmemmap pages associated with each HugeTLB page

2021-04-16 Thread Mike Kravetz
Tested-by: Chen Huang > Tested-by: Bodeddula Balasubramaniam > Acked-by: Michal Hocko There may need to be some trivial rebasing due to Oscar's changes when they go in. Reviewed-by: Mike Kravetz -- Mike Kravetz

Re: [PATCH v20 3/9] mm: hugetlb: gather discrete indexes of tail page

2021-04-16 Thread Mike Kravetz
Tested-by: Bodeddula Balasubramaniam > Acked-by: Michal Hocko Thanks, Reviewed-by: Mike Kravetz -- Mike Kravetz

Re: [PATCH v8 5/7] mm: Make alloc_contig_range handle free hugetlb pages

2021-04-15 Thread Mike Kravetz
se above we retry as the window race is quite small and we have high > chances to succeed next time. > > With regard to the allocation, we restrict it to the node the page belongs > to with __GFP_THISNODE, meaning we do not fallback on other node's zones. > > Note that gigantic hugetlb pages are fenced off since there is a cyclic > dependency between them and alloc_contig_range. > > Signed-off-by: Oscar Salvador > Acked-by: Michal Hocko Reviewed-by: Mike Kravetz -- Mike Kravetz

Re: [PATCH v8 3/7] mm,hugetlb: Drop clearing of flag from prep_new_huge_page

2021-04-15 Thread Mike Kravetz
see where Michal's suggestion was coming from (list the allocators that do the clearing). Also, listing this as a left over would be a good idea. Reviewed-by: Mike Kravetz -- Mike Kravetz

Re: [PATCH v7 6/7] mm: Make alloc_contig_range handle in-use hugetlb pages

2021-04-14 Thread Mike Kravetz
On 4/13/21 9:52 PM, Oscar Salvador wrote: > On Tue, Apr 13, 2021 at 03:48:53PM -0700, Mike Kravetz wrote: >> The label free_new is: >> >> free_new: >> spin_unlock_irq(&hugetlb_lock); >> __free_pages(new_page, huge_page_order(h)); >> >>

Re: [PATCH v7 4/7] mm,hugetlb: Split prep_new_huge_page functionality

2021-04-14 Thread Mike Kravetz
On 4/13/21 9:59 PM, Oscar Salvador wrote: > On Tue, Apr 13, 2021 at 02:33:41PM -0700, Mike Kravetz wrote: >>> -static void prep_new_huge_page(struct hstate *h, struct page *page, int >>> nid) >>> +/* >>> + * Must be called with the huge

Re: [PATCH v7 3/7] mm,hugetlb: Clear HPageFreed outside of the lock

2021-04-14 Thread Mike Kravetz
> > Yes, but I do not think that is really possible unless I missed something. > Let us see what Mike thinks of it, if there are no objections, we can > get rid of the clearing flag right there. > Thanks for crawling through that code Oscar! I do not think you missed anything. Let's just get rid of the flag clearing. -- Mike Kravetz

Re: [PATCH v7 7/7] mm,page_alloc: Drop unnecessary checks from pfn_range_valid_contig

2021-04-13 Thread Mike Kravetz
and > --- > mm/page_alloc.c | 6 -- > 1 file changed, 6 deletions(-) Acked-by: Mike Kravetz -- Mike Kravetz

Re: [PATCH v7 6/7] mm: Make alloc_contig_range handle in-use hugetlb pages

2021-04-13 Thread Mike Kravetz
he interface to recognize in-use HugeTLB pages so we can migrate > them, and have much better chances to succeed the call. > > Signed-off-by: Oscar Salvador > Reviewed-by: Mike Kravetz > Acked-by: Michal Hocko One small issue/question/request below. > diff --git a/mm/hu

Re: [PATCH v7 5/7] mm: Make alloc_contig_range handle free hugetlb pages

2021-04-13 Thread Mike Kravetz
return ret; > +} > + > +int isolate_or_dissolve_huge_page(struct page *page) > +{ > + struct hstate *h; > + struct page *head; > + > + /* > + * The page might have been dissolved from under our feet, so make sure > + * to carefully check the state under the lock. > + * Return success when racing as if we dissolved the page ourselves. > + */ > + spin_lock_irq(&hugetlb_lock); > + if (PageHuge(page)) { > + head = compound_head(page); > + h = page_hstate(head); > + } else { > + spin_unlock(&hugetlb_lock); Should that be spin_unlock_irq(&hugetlb_lock)? Other than that, it looks good. -- Mike Kravetz

Re: [PATCH v7 4/7] mm,hugetlb: Split prep_new_huge_page functionality

2021-04-13 Thread Mike Kravetz
ng the destructor to this routine. set_compound_page_dtor(page, HUGETLB_PAGE_DTOR); That way, PageHuge() will be false until it 'really' is a huge page. If not, we could potentially go into that retry loop in dissolve_free_huge_page or alloc_and_dissolve_huge_page in patch 5. --

Re: [PATCH v7 3/7] mm,hugetlb: Clear HPageFreed outside of the lock

2021-04-13 Thread Mike Kravetz
e list via put_page/free_huge_page so the appropriate flags will be cleared before anyone notices. I'm wondering if we should just do a set_page_private(page, 0) here in prep_new_huge_page since we now use that field for flags. Or, is that overkill? -- Mike Kravetz

Re: [PATCH v7 2/7] mm,compaction: Let isolate_migratepages_{range,block} return error codes

2021-04-13 Thread Mike Kravetz
pfn to be > scanned, we reuse the cc->migrate_pfn field to keep track of that. > > Signed-off-by: Oscar Salvador > Acked-by: Vlastimil Babka Acked-by: Mike Kravetz -- Mike Kravetz

Re: [PATCH v7 1/7] mm,page_alloc: Bail out earlier on -ENOMEM in alloc_contig_migrate_range

2021-04-13 Thread Mike Kravetz
ve some cycles by backing off earlier > > Signed-off-by: Oscar Salvador > Acked-by: Vlastimil Babka > Reviewed-by: David Hildenbrand > Acked-by: Michal Hocko Acked-by: Mike Kravetz -- Mike Kravetz

Re: [PATCH v2 5/5] mm/hugetlb: remove unused variable pseudo_vma in remove_inode_hugepages()

2021-04-12 Thread Mike Kravetz
On 4/10/21 12:23 AM, Miaohe Lin wrote: > The local variable pseudo_vma is not used anymore. > > Signed-off-by: Miaohe Lin Thanks, That should have been removed with 1b426bac66e6 ("hugetlb: use same fault hash key for shared and private mappings"). Reviewed-by: Mike Kravetz -- Mike Kravetz

Re: [PATCH v2 4/5] mm/hugeltb: handle the error case in hugetlb_fix_reserve_counts()

2021-04-12 Thread Mike Kravetz
memory could possibly fail too. We should correctly handle > these cases. > > Fixes: b5cec28d36f5 ("hugetlbfs: truncate_hugepages() takes a range of pages") > Signed-off-by: Miaohe Lin Thanks, Reviewed-by: Mike Kravetz -- Mike Kravetz

Re: [PATCH v2 3/5] mm/hugeltb: clarify (chg - freed) won't go negative in hugetlb_unreserve_pages()

2021-04-12 Thread Mike Kravetz
nd also avoid confusion. > > Signed-off-by: Miaohe Lin Thanks, Reviewed-by: Mike Kravetz -- Mike Kravetz

Re: [PATCH v2 2/5] mm/hugeltb: simplify the return code of __vma_reservation_common()

2021-04-12 Thread Mike Kravetz
is set here. Simplify the return code to make it more > clear. > > Signed-off-by: Miaohe Lin Thanks, Reviewed-by: Mike Kravetz -- Mike Kravetz

Re: [PATCH v5 4/8] hugetlb: create remove_hugetlb_page() to separate functionality

2021-04-12 Thread Mike Kravetz
On 4/12/21 12:33 AM, Oscar Salvador wrote: > On Fri, Apr 09, 2021 at 01:52:50PM -0700, Mike Kravetz wrote: >> The new remove_hugetlb_page() routine is designed to remove a hugetlb >> page from hugetlbfs processing. It will remove the page from the active >> or free list, u

Re: [PATCH 0/9] userfaultfd: add minor fault handling for shmem

2021-04-09 Thread Mike Kravetz
both Peter's error handling and the hugetlbfs >> minor faulting patches are ready to go. (Peter's most importantly; we >> should establish that as a base, and put all the burden on resolving >> conflicts with it on us instead of you :).) >> >> My memory was that Peter's

[PATCH v5 0/8] make hugetlb put_page safe for all calling contexts

2021-04-09 Thread Mike Kravetz
de. - Use Michal's suggestion to batch pages for freeing. This eliminated the need to recalculate loop control variables when dropping the lock. - Added lockdep_assert_held() calls - Rebased to v5.12-rc3-mmotm-2021-03-17-22-24 Mike Kravetz (8): mm/cma: change cma mutex to irq safe spinlock hugetlb: no

[PATCH v5 6/8] hugetlb: change free_pool_huge_page to remove_pool_huge_page

2021-04-09 Thread Mike Kravetz
his commit removes the cond_resched_lock() and the potential race. Therefore, remove the subtle code and restore the more straight forward accounting effectively reverting the commit. Signed-off-by: Mike Kravetz Reviewed-by: Muchun Song Acked-by: Michal Hocko Reviewed-by: Oscar Salvador --- mm

[PATCH v5 7/8] hugetlb: make free_huge_page irq safe

2021-04-09 Thread Mike Kravetz
k irq safe in a similar manner. - Revert the !in_task check and workqueue handoff. [1] https://lore.kernel.org/linux-mm/f1c03b05bc43a...@google.com/ Signed-off-by: Mike Kravetz Acked-by: Michal Hocko Reviewed-by: Muchun Song Reviewed-by: Oscar Salvador --- mm/hugetlb.c

[PATCH v5 5/8] hugetlb: call update_and_free_page without hugetlb_lock

2021-04-09 Thread Mike Kravetz
page to reduce long hold times. The ugly unlock/lock cycle in free_pool_huge_page will be removed in a subsequent patch which restructures free_pool_huge_page. Signed-off-by: Mike Kravetz Acked-by: Michal Hocko Reviewed-by: Muchun Song Reviewed-by: Miaohe Lin Reviewed-by: Oscar Salvador

[PATCH v5 8/8] hugetlb: add lockdep_assert_held() calls for hugetlb_lock

2021-04-09 Thread Mike Kravetz
After making hugetlb lock irq safe and separating some functionality done under the lock, add some lockdep_assert_held to help verify locking. Signed-off-by: Mike Kravetz Acked-by: Michal Hocko Reviewed-by: Miaohe Lin Reviewed-by: Muchun Song Reviewed-by: Oscar Salvador --- mm/hugetlb.c | 9

[PATCH v5 2/8] hugetlb: no need to drop hugetlb_lock to call cma_release

2021-04-09 Thread Mike Kravetz
Now that cma_release is non-blocking and irq safe, there is no need to drop hugetlb_lock before calling. Signed-off-by: Mike Kravetz Acked-by: Roman Gushchin Acked-by: Michal Hocko Reviewed-by: Oscar Salvador Reviewed-by: David Hildenbrand --- mm/hugetlb.c | 6 -- 1 file changed, 6

[PATCH v5 4/8] hugetlb: create remove_hugetlb_page() to separate functionality

2021-04-09 Thread Mike Kravetz
uce any changes to functionality. Signed-off-by: Mike Kravetz Acked-by: Michal Hocko Reviewed-by: Miaohe Lin Reviewed-by: Muchun Song --- mm/hugetlb.c | 65 1 file changed, 40 insertions(+), 25 deletions(-) diff --git a/mm/hugetlb.c b

[PATCH v5 3/8] hugetlb: add per-hstate mutex to synchronize user adjustments

2021-04-09 Thread Mike Kravetz
pages. It makes little sense to allow multiple adjustment to the number of hugetlb pages in parallel. Add a mutex to the hstate and use it to only allow one hugetlb page adjustment at a time. This will synchronize modifications to the next_nid_to_alloc variable. Signed-off-by: Mike Kravetz Acked

[PATCH v5 1/8] mm/cma: change cma mutex to irq safe spinlock

2021-04-09 Thread Mike Kravetz
ged to a (irq aware) spin lock. The bitmap processing should be quite fast in typical case but if cma sizes grow to TB then we will likely need to replace the lock by a more optimized bitmap implementation. Signed-off-by: Mike Kravetz Acked-by: Michal Hocko Reviewed-by: David Hildenbrand Acked-by: Ro

Re: [PATCH v4 0/8] make hugetlb put_page safe for all calling contexts

2021-04-09 Thread Mike Kravetz
-ass.net > > might need attention and that this: > > hugetlb-make-free_huge_page-irq-safe.patch > > might need updating. > Thank you Andrew! I will send a v5 shortly based on dropping the above patch. -- Mike Kravetz

Re: [PATCH 3/4] mm/hugeltb: fix potential wrong gbl_reserve value for hugetlb_acct_memory()

2021-04-08 Thread Mike Kravetz
On 4/8/21 8:01 PM, Miaohe Lin wrote: > On 2021/4/9 6:53, Mike Kravetz wrote: >> >> Yes, add a comment to hugetlb_unreserve_pages saying that !resv_map >> implies freed == 0. >> > > Sounds good! > >> It would also be helpful to check fo

Re: [PATCH 4/4] mm/hugeltb: handle the error case in hugetlb_fix_reserve_counts()

2021-04-08 Thread Mike Kravetz
> + reserved = true; > } > + > + if (!reserved) > + pr_warn("hugetlb: fix reserve count failed\n"); We should expand this warning message a bit to indicate what this may mean to the user. Add something like: "Huge Page Reserved count may go negative". -- Mike Kravetz

Re: [PATCH 3/4] mm/hugeltb: fix potential wrong gbl_reserve value for hugetlb_acct_memory()

2021-04-08 Thread Mike Kravetz
On 4/7/21 8:26 PM, Miaohe Lin wrote: > On 2021/4/8 11:24, Miaohe Lin wrote: >> On 2021/4/8 4:53, Mike Kravetz wrote: >>> On 4/7/21 12:24 AM, Miaohe Lin wrote: >>>> Hi: >>>> On 2021/4/7 10:49, Mike Kravetz wrote: >>>>> On 4/2/21 2:32 AM,

Re: [PATCH 2/4] mm/hugeltb: simplify the return code of __vma_reservation_common()

2021-04-08 Thread Mike Kravetz
On 4/7/21 7:44 PM, Miaohe Lin wrote: > On 2021/4/8 5:23, Mike Kravetz wrote: >> On 4/6/21 8:09 PM, Miaohe Lin wrote: >>> On 2021/4/7 10:37, Mike Kravetz wrote: >>>> On 4/6/21 7:05 PM, Miaohe Lin wrote: >>>>> Hi: >>>>> On 2021/4/7 8:53, Mi

Re: [PATCH v4 0/8] make hugetlb put_page safe for all calling contexts

2021-04-07 Thread Mike Kravetz
ou suggest. Please do not start until we get an Ack from Oscar as he will need to participate. Remove patches for this series in your tree from Mike Kravetz: - hugetlb: add lockdep_assert_held() calls for hugetlb_lock - hugetlb: fix irq locking omissions - hugetlb: make free_huge_page irq safe - huget

Re: [PATCH 2/4] mm/hugeltb: simplify the return code of __vma_reservation_common()

2021-04-07 Thread Mike Kravetz
On 4/6/21 8:09 PM, Miaohe Lin wrote: > On 2021/4/7 10:37, Mike Kravetz wrote: >> On 4/6/21 7:05 PM, Miaohe Lin wrote: >>> Hi: >>> On 2021/4/7 8:53, Mike Kravetz wrote: >>>> On 4/2/21 2:32 AM, Miaohe Lin wrote: >>>>> It's guaranteed t

Re: [PATCH 3/4] mm/hugeltb: fix potential wrong gbl_reserve value for hugetlb_acct_memory()

2021-04-07 Thread Mike Kravetz
On 4/7/21 12:24 AM, Miaohe Lin wrote: > Hi: > On 2021/4/7 10:49, Mike Kravetz wrote: >> On 4/2/21 2:32 AM, Miaohe Lin wrote: >>> The resv_map could be NULL since this routine can be called in the evict >>> inode path for all hugetlbfs inodes. So we could have chg

Re: [PATCH 3/4] mm/hugeltb: fix potential wrong gbl_reserve value for hugetlb_acct_memory()

2021-04-06 Thread Mike Kravetz
lb pages can be allocated/associated with the file. As a result, remove_inode_hugepages will never find any huge pages associated with the inode and the passed value 'freed' will always be zero. Does that sound correct? -- Mike Kravetz > > Fixes: b5cec28d36f5 ("hugetlbfs: truncate_hu

Re: [PATCH 2/4] mm/hugeltb: simplify the return code of __vma_reservation_common()

2021-04-06 Thread Mike Kravetz
On 4/6/21 7:05 PM, Miaohe Lin wrote: > Hi: > On 2021/4/7 8:53, Mike Kravetz wrote: >> On 4/2/21 2:32 AM, Miaohe Lin wrote: >>> It's guaranteed that the vma is associated with a resv_map, i.e. either >>> VM_MAYSHARE or HPAGE_RESV_OWNER, when the code reaches here or

Re: [PATCH 2/4] mm/hugeltb: simplify the return code of __vma_reservation_common()

2021-04-06 Thread Mike Kravetz
, we never want to indicate reservations are available. The ternary makes sure a positive value is never returned. -- Mike Kravetz > - return ret < 0 ? ret : 0; > + return ret; > } > > static long vma_needs_reservation(struct hstate *h, >

Re: [PATCH 1/4] mm/hugeltb: remove redundant VM_BUG_ON() in region_add()

2021-04-06 Thread Mike Kravetz
On 4/2/21 2:32 AM, Miaohe Lin wrote: > The same VM_BUG_ON() check is already done in the callee. Remove this extra > one to simplify the code slightly. > > Signed-off-by: Miaohe Lin Thanks, Reviewed-by: Mike Kravetz -- Mike Kravetz > --- > mm/hugetlb.c | 1 - > 1 fil

Re: [PATCH v4 4/8] hugetlb: create remove_hugetlb_page() to separate functionality

2021-04-06 Thread Mike Kravetz
On 4/6/21 6:41 AM, Oscar Salvador wrote: > On Mon, Apr 05, 2021 at 04:00:39PM -0700, Mike Kravetz wrote: >> +static void remove_hugetlb_page(struct hstate *h, struct page *page, >> +bool adjust_surplus) >> +{ >> +

Re: [PATCH v4 4/8] hugetlb: create remove_hugetlb_page() to separate functionality

2021-04-06 Thread Mike Kravetz
On 4/6/21 2:56 AM, Michal Hocko wrote: > On Mon 05-04-21 16:00:39, Mike Kravetz wrote: >> The new remove_hugetlb_page() routine is designed to remove a hugetlb >> page from hugetlbfs processing. It will remove the page from the active >> or free list, update global counters

[PATCH v4 0/8] make hugetlb put_page safe for all calling contexts

2021-04-05 Thread Mike Kravetz
v1 - Add Roman's cma_release_nowait() patches. This eliminated the need to do a workqueue handoff in hugetlb code. - Use Michal's suggestion to batch pages for freeing. This eliminated the need to recalculate loop control variables when dropping the lock. - Added lockdep_assert_held() calls

[PATCH v4 4/8] hugetlb: create remove_hugetlb_page() to separate functionality

2021-04-05 Thread Mike Kravetz
uce any changes to functionality. Signed-off-by: Mike Kravetz --- mm/hugetlb.c | 88 ++-- 1 file changed, 51 insertions(+), 37 deletions(-) diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 8497a3598c86..df2a3d1f632b 100644 --- a/mm/hugetlb.c +++ b

[PATCH v4 1/8] mm/cma: change cma mutex to irq safe spinlock

2021-04-05 Thread Mike Kravetz
ged to a (irq aware) spin lock. The bitmap processing should be quite fast in typical case but if cma sizes grow to TB then we will likely need to replace the lock by a more optimized bitmap implementation. Signed-off-by: Mike Kravetz --- mm/cma.c | 18 +- mm/cma.h | 2 +-

[PATCH v4 3/8] hugetlb: add per-hstate mutex to synchronize user adjustments

2021-04-05 Thread Mike Kravetz
pages. It makes little sense to allow multiple adjustment to the number of hugetlb pages in parallel. Add a mutex to the hstate and use it to only allow one hugetlb page adjustment at a time. This will synchronize modifications to the next_nid_to_alloc variable. Signed-off-by: Mike Kravetz Acked

[PATCH v4 7/8] hugetlb: make free_huge_page irq safe

2021-04-05 Thread Mike Kravetz
k irq safe in a similar manner. - Revert the !in_task check and workqueue handoff. [1] https://lore.kernel.org/linux-mm/f1c03b05bc43a...@google.com/ Signed-off-by: Mike Kravetz Acked-by: Michal Hocko Reviewed-by: Muchun Song --- mm/hugetlb.c

[PATCH v4 8/8] hugetlb: add lockdep_assert_held() calls for hugetlb_lock

2021-04-05 Thread Mike Kravetz
After making hugetlb lock irq safe and separating some functionality done under the lock, add some lockdep_assert_held to help verify locking. Signed-off-by: Mike Kravetz Acked-by: Michal Hocko Reviewed-by: Miaohe Lin Reviewed-by: Muchun Song --- mm/hugetlb.c | 9 + 1 file changed, 9

[PATCH v4 2/8] hugetlb: no need to drop hugetlb_lock to call cma_release

2021-04-05 Thread Mike Kravetz
Now that cma_release is non-blocking and irq safe, there is no need to drop hugetlb_lock before calling. Signed-off-by: Mike Kravetz Acked-by: Roman Gushchin Acked-by: Michal Hocko --- mm/hugetlb.c | 6 -- 1 file changed, 6 deletions(-) diff --git a/mm/hugetlb.c b/mm/hugetlb.c index

[PATCH v4 5/8] hugetlb: call update_and_free_page without hugetlb_lock

2021-04-05 Thread Mike Kravetz
page to reduce long hold times. The ugly unlock/lock cycle in free_pool_huge_page will be removed in a subsequent patch which restructures free_pool_huge_page. Signed-off-by: Mike Kravetz --- mm/hugetlb.c | 43 +-- 1 file changed, 33 insertions(+), 10

[PATCH v4 6/8] hugetlb: change free_pool_huge_page to remove_pool_huge_page

2021-04-05 Thread Mike Kravetz
his commit removes the cond_resched_lock() and the potential race. Therefore, remove the subtle code and restore the more straight forward accounting effectively reverting the commit. Signed-off-by: Mike Kravetz Reviewed-by: Muchun Song Acked-by: Michal Hocko --- mm/huge

Re: [External] [PATCH v3 7/8] hugetlb: make free_huge_page irq safe

2021-04-03 Thread Mike Kravetz
On 4/2/21 10:59 PM, Muchun Song wrote: > On Sat, Apr 3, 2021 at 4:56 AM Mike Kravetz wrote: >> >> On 4/2/21 5:47 AM, Muchun Song wrote: >>> On Wed, Mar 31, 2021 at 11:42 AM Mike Kravetz >>> wrote: >>>> >>>> Commit c77c0a8ac4c5 ("mm/hugetlb: defer freeing of huge pages if in non-task context")

Re: [External] [PATCH v3 7/8] hugetlb: make free_huge_page irq safe

2021-04-02 Thread Mike Kravetz
On 4/2/21 5:47 AM, Muchun Song wrote: > On Wed, Mar 31, 2021 at 11:42 AM Mike Kravetz wrote: >> >> Commit c77c0a8ac4c5 ("mm/hugetlb: defer freeing of huge pages if in >> non-task context") was added to address the issue of free_huge_page >> being called fro

[PATCH v3 3/8] hugetlb: add per-hstate mutex to synchronize user adjustments

2021-03-30 Thread Mike Kravetz
pages. It makes little sense to allow multiple adjustment to the number of hugetlb pages in parallel. Add a mutex to the hstate and use it to only allow one hugetlb page adjustment at a time. This will synchronize modifications to the next_nid_to_alloc variable. Signed-off-by: Mike Kravetz Acked

[PATCH v3 6/8] hugetlb: change free_pool_huge_page to remove_pool_huge_page

2021-03-30 Thread Mike Kravetz
his commit removes the cond_resched_lock() and the potential race. Therefore, remove the subtle code and restore the more straight forward accounting effectively reverting the commit. Signed-off-by: Mike Kravetz Reviewed-by: Muchun Song Acked-by: Michal Hocko --- mm/huge

[PATCH v3 2/8] hugetlb: no need to drop hugetlb_lock to call cma_release

2021-03-30 Thread Mike Kravetz
Now that cma_release is non-blocking and irq safe, there is no need to drop hugetlb_lock before calling. Signed-off-by: Mike Kravetz Acked-by: Roman Gushchin Acked-by: Michal Hocko --- mm/hugetlb.c | 6 -- 1 file changed, 6 deletions(-) diff --git a/mm/hugetlb.c b/mm/hugetlb.c index

[PATCH v3 7/8] hugetlb: make free_huge_page irq safe

2021-03-30 Thread Mike Kravetz
k irq safe in a similar manner. - Revert the !in_task check and workqueue handoff. [1] https://lore.kernel.org/linux-mm/f1c03b05bc43a...@google.com/ Signed-off-by: Mike Kravetz Acked-by: Michal Hocko Reviewed-by: Muchun Song --- mm/hugetlb.c

[PATCH v3 4/8] hugetlb: create remove_hugetlb_page() to separate functionality

2021-03-30 Thread Mike Kravetz
not introduce any changes to functionality. Signed-off-by: Mike Kravetz Acked-by: Michal Hocko Reviewed-by: Miaohe Lin Reviewed-by: Muchun Song --- mm/hugetlb.c | 67 1 file changed, 42 insertions(+), 25 deletions(-) diff --git a/mm/hugetlb.c b/mm

[PATCH v3 0/8] make hugetlb put_page safe for all calling contexts

2021-03-30 Thread Mike Kravetz
eliminated the need to do a workqueue handoff in hugetlb code. - Use Michal's suggestion to batch pages for freeing. This eliminated the need to recalculate loop control variables when dropping the lock. - Added lockdep_assert_held() calls - Rebased to v5.12-rc3-mmotm-2021-03-17-22-24 Mike K

[PATCH v3 5/8] hugetlb: call update_and_free_page without hugetlb_lock

2021-03-30 Thread Mike Kravetz
page to reduce long hold times. The ugly unlock/lock cycle in free_pool_huge_page will be removed in a subsequent patch which restructures free_pool_huge_page. Signed-off-by: Mike Kravetz Acked-by: Michal Hocko Reviewed-by: Muchun Song Reviewed-by: Miaohe Lin --- mm/hugetlb.c | 31

[PATCH v3 1/8] mm/cma: change cma mutex to irq safe spinlock

2021-03-30 Thread Mike Kravetz
ged to a (irq aware) spin lock. The bitmap processing should be quite fast in typical case but if cma sizes grow to TB then we will likely need to replace the lock by a more optimized bitmap implementation. Signed-off-by: Mike Kravetz --- mm/cma.c | 18 +- mm/cma.h | 2 +-

[PATCH v3 8/8] hugetlb: add lockdep_assert_held() calls for hugetlb_lock

2021-03-30 Thread Mike Kravetz
After making hugetlb lock irq safe and separating some functionality done under the lock, add some lockdep_assert_held to help verify locking. Signed-off-by: Mike Kravetz Acked-by: Michal Hocko Reviewed-by: Miaohe Lin Reviewed-by: Muchun Song --- mm/hugetlb.c | 9 + 1 file changed, 9

Re: [External] [PATCH v2 5/8] hugetlb: call update_and_free_page without hugetlb_lock

2021-03-30 Thread Mike Kravetz
On 3/29/21 7:21 PM, Muchun Song wrote: > On Tue, Mar 30, 2021 at 7:24 AM Mike Kravetz wrote: >> >> With the introduction of remove_hugetlb_page(), there is no need for >> update_and_free_page to hold the hugetlb lock. Change all callers to >> drop the lock before cal

Re: [PATCH v2 1/8] mm/cma: change cma mutex to irq safe spinlock

2021-03-30 Thread Mike Kravetz
On 3/30/21 1:01 AM, Michal Hocko wrote: > On Mon 29-03-21 16:23:55, Mike Kravetz wrote: >> Ideally, cma_release could be called from any context. However, that is >> not possible because a mutex is used to protect the per-area bitmap. >> Change the bitmap to an irq safe s

Re: [PATCH v2 1/8] mm/cma: change cma mutex to irq safe spinlock

2021-03-29 Thread Mike Kravetz
On 3/29/21 6:20 PM, Song Bao Hua (Barry Song) wrote: > > >> -----Original Message----- >> From: Mike Kravetz [mailto:mike.krav...@oracle.com] >> Sent: Tuesday, March 30, 2021 12:24 PM >> To: linux...@kvack.org; linux-kernel@vger.kernel.org >> Cc: Roman Gushch

[PATCH v2 8/8] hugetlb: add lockdep_assert_held() calls for hugetlb_lock

2021-03-29 Thread Mike Kravetz
After making hugetlb lock irq safe and separating some functionality done under the lock, add some lockdep_assert_held to help verify locking. Signed-off-by: Mike Kravetz Acked-by: Michal Hocko Reviewed-by: Miaohe Lin Reviewed-by: Muchun Song --- mm/hugetlb.c | 9 + 1 file changed, 9

[PATCH v2 7/8] hugetlb: make free_huge_page irq safe

2021-03-29 Thread Mike Kravetz
k irq safe in a similar manner. - Revert the !in_task check and workqueue handoff. [1] https://lore.kernel.org/linux-mm/f1c03b05bc43a...@google.com/ Signed-off-by: Mike Kravetz Acked-by: Michal Hocko Reviewed-by: Muchun Song --- mm/hugetlb.c

[PATCH v2 5/8] hugetlb: call update_and_free_page without hugetlb_lock

2021-03-29 Thread Mike Kravetz
page to reduce long hold times. The ugly unlock/lock cycle in free_pool_huge_page will be removed in a subsequent patch which restructures free_pool_huge_page. Signed-off-by: Mike Kravetz Acked-by: Michal Hocko Reviewed-by: Muchun Song --- mm/hugetlb.c | 32 +++- 1

[PATCH v2 6/8] hugetlb: change free_pool_huge_page to remove_pool_huge_page

2021-03-29 Thread Mike Kravetz
his commit removes the cond_resched_lock() and the potential race. Therefore, remove the subtle code and restore the more straight forward accounting effectively reverting the commit. Signed-off-by: Mike Kravetz --- mm/hugetlb.c | 95 +--- 1 file c

[PATCH v2 4/8] hugetlb: create remove_hugetlb_page() to separate functionality

2021-03-29 Thread Mike Kravetz
not introduce any changes to functionality. Signed-off-by: Mike Kravetz Acked-by: Michal Hocko Reviewed-by: Miaohe Lin Reviewed-by: Muchun Song --- mm/hugetlb.c | 67 1 file changed, 42 insertions(+), 25 deletions(-) diff --git a/mm/hugetlb.c b/mm

[PATCH v2 2/8] hugetlb: no need to drop hugetlb_lock to call cma_release

2021-03-29 Thread Mike Kravetz
Now that cma_release is non-blocking and irq safe, there is no need to drop hugetlb_lock before calling. Signed-off-by: Mike Kravetz --- mm/hugetlb.c | 6 -- 1 file changed, 6 deletions(-) diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 3c3e4baa4156..1d62f0492e7b 100644 --- a/mm/hugetlb.c

[PATCH v2 0/8] make hugetlb put_page safe for all calling contexts

2021-03-29 Thread Mike Kravetz
ased to v5.12-rc3-mmotm-2021-03-17-22-24 Mike Kravetz (8): mm/cma: change cma mutex to irq safe spinlock hugetlb: no need to drop hugetlb_lock to call cma_release hugetlb: add per-hstate mutex to synchronize user adjustments hugetlb: create remove_hugetlb_page() to separate functionalit

[PATCH v2 1/8] mm/cma: change cma mutex to irq safe spinlock

2021-03-29 Thread Mike Kravetz
Ideally, cma_release could be called from any context. However, that is not possible because a mutex is used to protect the per-area bitmap. Change the bitmap to an irq safe spinlock. Signed-off-by: Mike Kravetz --- mm/cma.c | 20 +++- mm/cma.h | 2 +- mm

[PATCH v2 3/8] hugetlb: add per-hstate mutex to synchronize user adjustments

2021-03-29 Thread Mike Kravetz
pages. It makes little sense to allow multiple adjustment to the number of hugetlb pages in parallel. Add a mutex to the hstate and use it to only allow one hugetlb page adjustment at a time. This will synchronize modifications to the next_nid_to_alloc variable. Signed-off-by: Mike Kravetz Acked

Re: [External] [PATCH 7/8] hugetlb: make free_huge_page irq safe

2021-03-29 Thread Mike Kravetz
On 3/29/21 12:49 AM, Michal Hocko wrote: > On Sat 27-03-21 15:06:36, Muchun Song wrote: >> On Thu, Mar 25, 2021 at 8:29 AM Mike Kravetz wrote: >>> >>> Commit c77c0a8ac4c5 ("mm/hugetlb: defer freeing of huge pages if in >>> non-task context") wa

Re: [PATCH 1/8] mm: cma: introduce cma_release_nowait()

2021-03-29 Thread Mike Kravetz
On 3/29/21 12:46 AM, Michal Hocko wrote: > On Fri 26-03-21 14:32:01, Mike Kravetz wrote: > [...] >> - Just change the mutex to an irq safe spinlock. > > Yes please. > >> AFAICT, the potential >> downsides could be: >> - Interrupts disabled during

Re: [External] [PATCH 5/8] hugetlb: call update_and_free_page without hugetlb_lock

2021-03-28 Thread Mike Kravetz
On 3/26/21 11:54 PM, Muchun Song wrote: > On Thu, Mar 25, 2021 at 8:29 AM Mike Kravetz wrote: >> >> With the introduction of remove_hugetlb_page(), there is no need for >> update_and_free_page to hold the hugetlb lock. Change all callers to >> drop the lock before cal

Re: [PATCH 1/8] mm: cma: introduce cma_release_nowait()

2021-03-26 Thread Mike Kravetz
On 3/25/21 4:49 PM, Mike Kravetz wrote: > On 3/25/21 4:19 PM, Roman Gushchin wrote: >> On Thu, Mar 25, 2021 at 01:12:51PM -0700, Minchan Kim wrote: >>> On Thu, Mar 25, 2021 at 06:15:11PM +0100, David Hildenbrand wrote: >>>> On 25.03.21 17:56, Mike Kravetz wrote: &g

Re: [PATCH 0/8] make hugetlb put_page safe for all calling contexts

2021-03-26 Thread Mike Kravetz
On 3/25/21 6:42 PM, Miaohe Lin wrote: > Hi: > On 2021/3/25 8:28, Mike Kravetz wrote: >> This effort is the result a recent bug report [1]. In subsequent >> discussions [2], it was deemed necessary to properly fix the hugetlb > > Many thanks for the effort. I h

Re: [PATCH 4/8] hugetlb: create remove_hugetlb_page() to separate functionality

2021-03-26 Thread Mike Kravetz
On 3/25/21 7:10 PM, Miaohe Lin wrote: > On 2021/3/25 8:28, Mike Kravetz wrote: >> The new remove_hugetlb_page() routine is designed to remove a hugetlb >> page from hugetlbfs processing. It will remove the page from the active >> or free list, update global counters and

Re: [PATCH 1/8] mm: cma: introduce cma_release_nowait()

2021-03-25 Thread Mike Kravetz
On 3/25/21 4:19 PM, Roman Gushchin wrote: > On Thu, Mar 25, 2021 at 01:12:51PM -0700, Minchan Kim wrote: >> On Thu, Mar 25, 2021 at 06:15:11PM +0100, David Hildenbrand wrote: >>> On 25.03.21 17:56, Mike Kravetz wrote: >>>> On 3/25/21 3:22 AM, Michal Hocko wrote:

Re: [PATCH 5/8] hugetlb: call update_and_free_page without hugetlb_lock

2021-03-25 Thread Mike Kravetz
On 3/25/21 12:39 PM, Michal Hocko wrote: > On Thu 25-03-21 10:12:05, Mike Kravetz wrote: >> On 3/25/21 3:55 AM, Michal Hocko wrote: >>> On Wed 24-03-21 17:28:32, Mike Kravetz wrote: >>>> With the introduction of remove_hugetlb_page(), there is no need for >

Re: [PATCH 7/8] hugetlb: make free_huge_page irq safe

2021-03-25 Thread Mike Kravetz
On 3/25/21 4:21 AM, Michal Hocko wrote: > On Wed 24-03-21 17:28:34, Mike Kravetz wrote: >> Commit c77c0a8ac4c5 ("mm/hugetlb: defer freeing of huge pages if in >> non-task context") was added to address the issue of free_huge_page >> being called from irq

Re: [PATCH 6/8] hugetlb: change free_pool_huge_page to remove_pool_huge_page

2021-03-25 Thread Mike Kravetz
On 3/25/21 4:06 AM, Michal Hocko wrote: > On Wed 24-03-21 17:28:33, Mike Kravetz wrote: > [...] >> @@ -2074,17 +2067,16 @@ static int gather_surplus_pages(struct hstate *h, >> long delta) >> * to the associated reservation map. >> * 2) Free any unused su

Re: [PATCH 5/8] hugetlb: call update_and_free_page without hugetlb_lock

2021-03-25 Thread Mike Kravetz
On 3/25/21 3:55 AM, Michal Hocko wrote: > On Wed 24-03-21 17:28:32, Mike Kravetz wrote: >> With the introduction of remove_hugetlb_page(), there is no need for >> update_and_free_page to hold the hugetlb lock. Change all callers to >> drop the lock before calling.

Re: [PATCH 1/8] mm: cma: introduce cma_release_nowait()

2021-03-25 Thread Mike Kravetz
On 3/25/21 3:22 AM, Michal Hocko wrote: > On Thu 25-03-21 10:56:38, David Hildenbrand wrote: >> On 25.03.21 01:28, Mike Kravetz wrote: >>> From: Roman Gushchin >>> >>> cma_release() has to lock the cma_lock mutex to clear the cma bitmap. >>> It makes

[PATCH 5/8] hugetlb: call update_and_free_page without hugetlb_lock

2021-03-24 Thread Mike Kravetz
page to reduce long hold times. The ugly unlock/lock cycle in free_pool_huge_page will be removed in a subsequent patch which restructures free_pool_huge_page. Signed-off-by: Mike Kravetz --- mm/hugetlb.c | 34 +- 1 file changed, 29 insertions(+), 5 deletions

[PATCH 7/8] hugetlb: make free_huge_page irq safe

2021-03-24 Thread Mike Kravetz
task check and workqueue handoff. [1] https://lore.kernel.org/linux-mm/f1c03b05bc43a...@google.com/ Signed-off-by: Mike Kravetz --- mm/hugetlb.c| 169 +--- mm/hugetlb_cgroup.c | 8 +-- 2 files changed, 67 insertions(+), 110 deletions(-)

[PATCH 6/8] hugetlb: change free_pool_huge_page to remove_pool_huge_page

2021-03-24 Thread Mike Kravetz
allocators. The hugetlb_lock is dropped before freeing to these allocators which results in shorter lock hold times. Signed-off-by: Mike Kravetz --- mm/hugetlb.c | 88 ++-- 1 file changed, 51 insertions(+), 37 deletions(-) diff --git a/mm/hugetlb.c b

[PATCH 4/8] hugetlb: create remove_hugetlb_page() to separate functionality

2021-03-24 Thread Mike Kravetz
, the 'page' can be treated as a normal compound page or a collection of base size pages. remove_hugetlb_page is to be called with the hugetlb_lock held. Creating this routine and separating functionality is in preparation for restructuring code to reduce lock hold times. Signed-off-by: Mike Kravetz

[PATCH 2/8] mm: hugetlb: don't drop hugetlb_lock around cma_release() call

2021-03-24 Thread Mike Kravetz
From: Roman Gushchin Replace blocking cma_release() with a non-blocking cma_release_nowait() call, so there is no more need to temporarily drop hugetlb_lock. Signed-off-by: Roman Gushchin Signed-off-by: Mike Kravetz --- mm/hugetlb.c | 11 +++ 1 file changed, 3 insertions(+), 8

[PATCH 8/8] hugetlb: add lockdep_assert_held() calls for hugetlb_lock

2021-03-24 Thread Mike Kravetz
After making hugetlb lock irq safe and separating some functionality done under the lock, add some lockdep_assert_held to help verify locking. Signed-off-by: Mike Kravetz --- mm/hugetlb.c | 9 + 1 file changed, 9 insertions(+) diff --git a/mm/hugetlb.c b/mm/hugetlb.c index e4c441b878f2

[PATCH 0/8] make hugetlb put_page safe for all calling contexts

2021-03-24 Thread Mike Kravetz
eliminated the need to recalculate loop control variables when dropping the lock. - Added lockdep_assert_held() calls - Rebased to v5.12-rc3-mmotm-2021-03-17-22-24 Mike Kravetz (6): hugetlb: add per-hstate mutex to synchronize user adjustments hugetlb: create remove_hugetlb_page() to separate fu

[PATCH 1/8] mm: cma: introduce cma_release_nowait()

2021-03-24 Thread Mike Kravetz
-by: Roman Gushchin [mike.krav...@oracle.com: rebased to v5.12-rc3-mmotm-2021-03-17-22-24] Signed-off-by: Mike Kravetz --- include/linux/cma.h | 2 + mm/cma.c| 93 + mm/cma.h| 5 +++ 3 files changed, 100 insertions(+) diff --git

[PATCH 3/8] hugetlb: add per-hstate mutex to synchronize user adjustments

2021-03-24 Thread Mike Kravetz
pages. It makes little sense to allow multiple adjustments to the number of hugetlb pages in parallel. Add a mutex to the hstate and use it to only allow one hugetlb page adjustment at a time. This will synchronize modifications to the next_nid_to_alloc variable. Signed-off-by: Mike Kravetz
