On 4/20/21 1:46 AM, Muchun Song wrote:
> On Tue, Apr 20, 2021 at 7:20 AM Mike Kravetz wrote:
>>
>> On 4/15/21 1:40 AM, Muchun Song wrote:
>>> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
>>> index 0abed7e766b8..6e970a7d3480 100644
>>> mm/hugetlb_vmemmap.c | 24
> 5 files changed, 69 insertions(+), 2 deletions(-)
Thanks,
Reviewed-by: Mike Kravetz
--
Mike Kravetz
update_and_free_page(h, head, false);
> - return 0;
> +
> + rc = alloc_huge_page_vmemmap(h, page);
> + if (!rc) {
> + /*
> + * Move PageHWPoison flag from head page to the raw
> + * error page, which makes any subpages rather than the error page reusable.
The VM_BUG_ON_PAGE(!PageHuge(page), page) in page_hstate is going to
trigger because a previous call to remove_hugetlb_page() will
set_compound_page_dtor(page, NULL_COMPOUND_DTOR)
Note how h (the hstate) is grabbed before calling update_and_free_page in the
existing code.
We could potentially drop the !PageHuge(page) check in page_hstate.
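For illustration, a minimal sketch of that ordering constraint (call sites
and arguments are simplified, not the exact upstream code):

	/*
	 * page_hstate() does VM_BUG_ON_PAGE(!PageHuge(page), page), and
	 * PageHuge() becomes false once remove_hugetlb_page() resets the
	 * compound destructor, so look up the hstate first.
	 */
	struct hstate *h = page_hstate(page);	/* while still PageHuge() */

	remove_hugetlb_page(h, page, false);	/* sets NULL_COMPOUND_DTOR */
	update_and_free_page(h, page);		/* h was cached above */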
Tested-by: Chen Huang
> Tested-by: Bodeddula Balasubramaniam
> Acked-by: Michal Hocko
There may need to be some trivial rebasing due to Oscar's changes
when they go in.
Reviewed-by: Mike Kravetz
--
Mike Kravetz
Tested-by: Bodeddula Balasubramaniam
> Acked-by: Michal Hocko
Thanks,
Reviewed-by: Mike Kravetz
--
Mike Kravetz
>
> In the case above we retry, as the race window is quite small and we have a
> high chance of succeeding next time.
>
> With regard to the allocation, we restrict it to the node the page belongs
> to with __GFP_THISNODE, meaning we do not fall back to other nodes' zones.
>
> Note that gigantic hugetlb pages are fenced off since there is a cyclic
> dependency between them and alloc_contig_range.
>
> Signed-off-by: Oscar Salvador
> Acked-by: Michal Hocko
Reviewed-by: Mike Kravetz
--
Mike Kravetz
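For reference, a sketch of the node-restricted allocation described in the
commit message above (alloc_fresh_huge_page() and its argument list are
assumed from the hugetlb code of that era):

	/*
	 * Restrict the allocation to the node of the old page:
	 * __GFP_THISNODE means no fallback to other nodes' zones.
	 */
	gfp_t gfp_mask = htlb_alloc_mask(h) | __GFP_THISNODE;
	struct page *new_page;

	new_page = alloc_fresh_huge_page(h, gfp_mask, page_to_nid(old_page),
					 NULL, NULL);
	if (!new_page)
		return -ENOMEM;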
where
Michal's suggestion was coming from (list the allocators that do the
clearing).
Also, listing this as a leftover would be a good idea.
Reviewed-by: Mike Kravetz
--
Mike Kravetz
On 4/13/21 9:52 PM, Oscar Salvador wrote:
> On Tue, Apr 13, 2021 at 03:48:53PM -0700, Mike Kravetz wrote:
>> The label free_new is:
>>
>> free_new:
>> spin_unlock_irq(&hugetlb_lock);
>> __free_pages(new_page, huge_page_order(h));
>>
>
On 4/13/21 9:59 PM, Oscar Salvador wrote:
> On Tue, Apr 13, 2021 at 02:33:41PM -0700, Mike Kravetz wrote:
>>> -static void prep_new_huge_page(struct hstate *h, struct page *page, int nid)
>>> +/*
>>> + * Must be called with the huge
> Yes, but I do not think that is really possible unless I missed something.
> Let us see what Mike thinks of it, if there are no objections, we can
> get rid of the clearing flag right there.
>
Thanks for crawling through that code, Oscar!
I do not think you missed anything. Let's just get rid of the flag
clearing.
--
Mike Kravetz
and
> ---
> mm/page_alloc.c | 6 --
> 1 file changed, 6 deletions(-)
Acked-by: Mike Kravetz
--
Mike Kravetz
interface to recognize in-use HugeTLB pages so we can migrate
> them, and have a much better chance of the call succeeding.
>
> Signed-off-by: Oscar Salvador
> Reviewed-by: Mike Kravetz
> Acked-by: Michal Hocko
One small issue/question/request below.
> diff --git a/mm/hu
> + __free_pages(new_page, huge_page_order(h));
> +
> + return ret;
> +}
> +
> +int isolate_or_dissolve_huge_page(struct page *page)
> +{
> + struct hstate *h;
> + struct page *head;
> +
> + /*
> + * The page might have been dissolved from under our feet, so make sure
> + * to carefully check the state under the lock.
> + * Return success when racing as if we dissolved the page ourselves.
> + */
> + spin_lock_irq(&hugetlb_lock);
> + if (PageHuge(page)) {
> + head = compound_head(page);
> + h = page_hstate(head);
> + } else {
> + spin_unlock(&hugetlb_lock);
Should be spin_unlock_irq(&hugetlb_lock);
Other than that, it looks good.
--
Mike Kravetz
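The corrected pairing would look like this (sketch; only the unlock variant
changes, and the early return is assumed from context):

	spin_lock_irq(&hugetlb_lock);
	if (PageHuge(page)) {
		head = compound_head(page);
		h = page_hstate(head);
	} else {
		spin_unlock_irq(&hugetlb_lock);	/* matches spin_lock_irq() */
		return 0;
	}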
move setting the destructor to this routine.
set_compound_page_dtor(page, HUGETLB_PAGE_DTOR);
That way, PageHuge() will be false until it 'really' is a huge page.
If not, we could potentially go into that retry loop in
dissolve_free_huge_page or alloc_and_dissolve_huge_page i
e list via put_page/free_huge_page so the appropriate
flags will be cleared before anyone notices.
I'm wondering if we should just do a set_page_private(page, 0) here in
prep_new_huge_page since we now use that field for flags. Or, is that
overkill?
--
Mike Kravetz
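A sketch combining the two suggestions above (set the destructor inside
prep_new_huge_page so PageHuge() only becomes true once the page is fully
prepped, and clear page private since that field now carries flags); an
illustration, not the final upstream version:

static void prep_new_huge_page(struct hstate *h, struct page *page, int nid)
{
	INIT_LIST_HEAD(&page->lru);
	set_page_private(page, 0);	/* field now carries hugetlb flags */
	set_compound_page_dtor(page, HUGETLB_PAGE_DTOR);
	/* only from this point on does PageHuge(page) return true */
	spin_lock_irq(&hugetlb_lock);
	h->nr_huge_pages++;
	h->nr_huge_pages_node[nid]++;
	spin_unlock_irq(&hugetlb_lock);
}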
pfn to be
> scanned, we reuse the cc->migrate_pfn field to keep track of that.
>
> Signed-off-by: Oscar Salvador
> Acked-by: Vlastimil Babka
Acked-by: Mike Kravetz
--
Mike Kravetz
save some cycles by backing off earlier
>
> Signed-off-by: Oscar Salvador
> Acked-by: Vlastimil Babka
> Reviewed-by: David Hildenbrand
> Acked-by: Michal Hocko
Acked-by: Mike Kravetz
--
Mike Kravetz
On 4/10/21 12:23 AM, Miaohe Lin wrote:
> The local variable pseudo_vma is not used anymore.
>
> Signed-off-by: Miaohe Lin
Thanks,
That should have been removed with 1b426bac66e6 ("hugetlb: use same fault
hash key for shared and private mappings").
Reviewed-by: Mike Kravetz
--
Mike Kravetz
memory could possibly fail too. We should correctly handle
> these cases.
>
> Fixes: b5cec28d36f5 ("hugetlbfs: truncate_hugepages() takes a range of pages")
> Signed-off-by: Miaohe Lin
Thanks,
Reviewed-by: Mike Kravetz
--
Mike Kravetz
make it clear and also avoid confusion.
>
> Signed-off-by: Miaohe Lin
Thanks,
Reviewed-by: Mike Kravetz
--
Mike Kravetz
HPAGE_RESV_OWNER is set here. Simplify the return code to make it more
> clear.
>
> Signed-off-by: Miaohe Lin
Thanks,
Reviewed-by: Mike Kravetz
--
Mike Kravetz
On 4/12/21 12:33 AM, Oscar Salvador wrote:
> On Fri, Apr 09, 2021 at 01:52:50PM -0700, Mike Kravetz wrote:
>> The new remove_hugetlb_page() routine is designed to remove a hugetlb
>> page from hugetlbfs processing. It will remove the page from the active
>> or free list, u
Please let me know.
>>
>> From my perspective, both Peter's error handling and the hugetlbfs
>> minor faulting patches are ready to go. (Peter's most importantly; we
>> should establish that as a base, and put all the burden on resolving
>> conflicts with it o
workqueue handoff in hugetlb code.
- Use Michal's suggestion to batch pages for freeing (sketch below). This eliminated
the need to recalculate loop control variables when dropping the lock.
- Added lockdep_assert_held() calls
- Rebased to v5.12-rc3-mmotm-2021-03-17-22-24
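The batching pattern referred to in the second bullet, condensed to a
sketch (update_and_free_page() as in this series, after the lock-drop
rework):

	LIST_HEAD(page_list);
	struct page *page, *next;

	spin_lock_irq(&hugetlb_lock);
	/*
	 * For each page selected for freeing:
	 *	remove_hugetlb_page(h, page, false);
	 *	list_add(&page->lru, &page_list);
	 */
	spin_unlock_irq(&hugetlb_lock);

	/* Free the whole batch after dropping the lock: no loop control
	 * variables to recalculate. */
	list_for_each_entry_safe(page, next, &page_list, lru)
		update_and_free_page(h, page);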
Mike Kravetz (8):
mm/cma: change cma mutex to irq safe spinlock
This commit removes the
cond_resched_lock() and the potential race. Therefore, remove the
subtle code and restore the more straightforward accounting, effectively
reverting the commit.
Signed-off-by: Mike Kravetz
Reviewed-by: Muchun Song
Acked-by: Michal Hocko
Reviewed-by: Oscar Salvador
---
mm
l lock irq safe in a similar manner.
- Revert the !in_task check and workqueue handoff.
[1] https://lore.kernel.org/linux-mm/f1c03b05bc43a...@google.com/
Signed-off-by: Mike Kravetz
Acked-by: Michal Hocko
Reviewed-by: Muchun Song
Reviewed-by: Oscar Salvador
---
mm/hu
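The irq-safe conversion mentioned above boils down to using the irq-aware
lock variants on every path reachable from softirq context; a sketch:

void free_huge_page(struct page *page)
{
	unsigned long flags;

	/* ... */
	spin_lock_irqsave(&hugetlb_lock, flags);
	/* ... update counters, enqueue or free the page ... */
	spin_unlock_irqrestore(&hugetlb_lock, flags);
}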
page to reduce
long hold times.
The ugly unlock/lock cycle in free_pool_huge_page will be removed in
a subsequent patch which restructures free_pool_huge_page.
Signed-off-by: Mike Kravetz
Acked-by: Michal Hocko
Reviewed-by: Muchun Song
Reviewed-by: Miaohe Lin
Reviewed-by: Oscar Salvador
After making hugetlb lock irq safe and separating some functionality
done under the lock, add some lockdep_assert_held() calls to help verify
locking.
Signed-off-by: Mike Kravetz
Acked-by: Michal Hocko
Reviewed-by: Miaohe Lin
Reviewed-by: Muchun Song
Reviewed-by: Oscar Salvador
---
mm/hugetlb.c | 9
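Each of these assertions is a one-liner that documents and verifies the
routine's locking contract, e.g. (function chosen for illustration):

static void remove_hugetlb_page(struct hstate *h, struct page *page,
				bool adjust_surplus)
{
	lockdep_assert_held(&hugetlb_lock);	/* warns if lock not held */
	/* ... body unchanged ... */
}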
Now that cma_release is non-blocking and irq safe, there is no need to
drop hugetlb_lock before calling.
Signed-off-by: Mike Kravetz
Acked-by: Roman Gushchin
Acked-by: Michal Hocko
Reviewed-by: Oscar Salvador
Reviewed-by: David Hildenbrand
---
mm/hugetlb.c | 6 --
1 file changed, 6
it should not
introduce any changes to functionality.
Signed-off-by: Mike Kravetz
Acked-by: Michal Hocko
Reviewed-by: Miaohe Lin
Reviewed-by: Muchun Song
---
mm/hugetlb.c | 65
1 file changed, 40 insertions(+), 25 deletions(-)
diff --
pages.
It makes little sense to allow multiple adjustments to the number of
hugetlb pages in parallel. Add a mutex to the hstate and use it to only
allow one hugetlb page adjustment at a time. This will synchronize
modifications to the next_nid_to_alloc variable.
Signed-off-by: Mike Kravetz
Acked
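A condensed sketch of that mutex (the resize_lock field name follows the
series; the body is abbreviated):

struct hstate {
	struct mutex resize_lock;	/* serializes pool size changes */
	/* ... */
};

static int set_max_huge_pages(struct hstate *h, unsigned long count,
			      int nid, nodemask_t *nodes_allowed)
{
	mutex_lock(&h->resize_lock);	/* one adjustment at a time */
	spin_lock(&hugetlb_lock);
	/* ... grow or shrink pools; next_nid_to_alloc is now stable ... */
	spin_unlock(&hugetlb_lock);
	mutex_unlock(&h->resize_lock);
	return 0;
}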
changed to
an (irq aware) spin lock. The bitmap processing should be quite fast in the
typical case, but if cma sizes grow to TB then we will likely need to
replace the lock with a more optimized bitmap implementation.
Signed-off-by: Mike Kravetz
Acked-by: Michal Hocko
Reviewed-by: David Hildenbrand
Acked-b
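Condensed from the patch, the bitmap-clearing path then looks roughly like:

static void cma_clear_bitmap(struct cma *cma, unsigned long pfn,
			     unsigned int count)
{
	unsigned long bitmap_no, bitmap_count;
	unsigned long flags;

	bitmap_no = (pfn - cma->base_pfn) >> cma->order_per_bit;
	bitmap_count = cma_bitmap_pages_to_bits(cma, count);

	spin_lock_irqsave(&cma->lock, flags);	/* was mutex_lock() */
	bitmap_clear(cma->bitmap, bitmap_no, bitmap_count);
	spin_unlock_irqrestore(&cma->lock, flags);
}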
-ass.net
>
> might need attention and that this:
>
> hugetlb-make-free_huge_page-irq-safe.patch
>
> might need updating.
>
Thank you Andrew!
I will send a v5 shortly based on dropping the above patch.
--
Mike Kravetz
On 4/8/21 8:01 PM, Miaohe Lin wrote:
> On 2021/4/9 6:53, Mike Kravetz wrote:
>>
>> Yes, add a comment to hugetlb_unreserve_pages saying that !resv_map
>> implies freed == 0.
>>
>
> Sounds good!
>
>> It would also be helpful to check for (
> + } else if (!rsv_adjust) {
> + reserved = true;
> }
> +
> + if (!reserved)
> + pr_warn("hugetlb: fix reserve count failed\n");
We should expand this warning message a bit to indicate what this may
mean to the user. Add something like:
"Huge Page Reserved count may go negative".
--
Mike Kravetz
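That is, something along these lines:

	if (!reserved)
		pr_warn("hugetlb: Huge Page Reserved count may go negative.\n");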
On 4/7/21 8:26 PM, Miaohe Lin wrote:
> On 2021/4/8 11:24, Miaohe Lin wrote:
>> On 2021/4/8 4:53, Mike Kravetz wrote:
>>> On 4/7/21 12:24 AM, Miaohe Lin wrote:
>>>> Hi:
>>>> On 2021/4/7 10:49, Mike Kravetz wrote:
>>>>> On 4/2/21 2:32 AM, Miaohe Lin wrote:
On 4/7/21 7:44 PM, Miaohe Lin wrote:
> On 2021/4/8 5:23, Mike Kravetz wrote:
>> On 4/6/21 8:09 PM, Miaohe Lin wrote:
>>> On 2021/4/7 10:37, Mike Kravetz wrote:
>>>> On 4/6/21 7:05 PM, Miaohe Lin wrote:
>>>>> Hi:
>>>>> On 2021/4/7 8:53, Mike Kravetz wrote:
ing you suggest. Please do
not start until we get an Ack from Oscar as he will need to participate.
Remove patches for this series in your tree from Mike Kravetz:
- hugetlb: add lockdep_assert_held() calls for hugetlb_lock
- hugetlb: fix irq locking omissions
- hugetlb: make free_huge_page irq safe
-
On 4/6/21 8:09 PM, Miaohe Lin wrote:
> On 2021/4/7 10:37, Mike Kravetz wrote:
>> On 4/6/21 7:05 PM, Miaohe Lin wrote:
>>> Hi:
>>> On 2021/4/7 8:53, Mike Kravetz wrote:
>>>> On 4/2/21 2:32 AM, Miaohe Lin wrote:
>>>>> It's guarant
On 4/7/21 12:24 AM, Miaohe Lin wrote:
> Hi:
> On 2021/4/7 10:49, Mike Kravetz wrote:
>> On 4/2/21 2:32 AM, Miaohe Lin wrote:
>>> The resv_map could be NULL since this routine can be called in the evict
>>> inode path for all hugetlbfs inodes. So we could have chg = 0
pages can be allocated/associated
with the file. As a result, remove_inode_hugepages will never find any
huge pages associated with the inode and the passed value 'freed' will
always be zero.
Does that sound correct?
--
Mike Kravetz
>
> Fixes: b5cec28d36f5 ("hugetlbfs: trun
On 4/6/21 7:05 PM, Miaohe Lin wrote:
> Hi:
> On 2021/4/7 8:53, Mike Kravetz wrote:
>> On 4/2/21 2:32 AM, Miaohe Lin wrote:
>>> It's guaranteed that the vma is associated with a resv_map, i.e. either
>>> VM_MAYSHARE or HPAGE_RESV_OWNER, when the code reaches here
HPAGE_RESV_OWNER. In this case, we
never want to indicate reservations are available. The ternary makes
sure a positive value is never returned.
--
Mike Kravetz
> - return ret < 0 ? ret : 0;
> + return ret;
> }
>
> static long vma_needs_reservation(struct hstate *h,
>
On 4/2/21 2:32 AM, Miaohe Lin wrote:
> The same VM_BUG_ON() check is already done in the callee. Remove this extra
> one to simplify the code slightly.
>
> Signed-off-by: Miaohe Lin
Thanks,
Reviewed-by: Mike Kravetz
--
Mike Kravetz
> ---
> mm/hugetlb.c | 1 -
> 1 fil
On 4/6/21 6:41 AM, Oscar Salvador wrote:
> On Mon, Apr 05, 2021 at 04:00:39PM -0700, Mike Kravetz wrote:
>> +static void remove_hugetlb_page(struct hstate *h, struct page *page,
>> +bool adjust_surplus)
>> +{
>> +
On 4/6/21 2:56 AM, Michal Hocko wrote:
> On Mon 05-04-21 16:00:39, Mike Kravetz wrote:
>> The new remove_hugetlb_page() routine is designed to remove a hugetlb
>> page from hugetlbfs processing. It will remove the page from the active
>> or free list, update global counters
viewed-by: from v1
RFC -> v1
- Add Roman's cma_release_nowait() patches. This eliminated the need
to do a workqueue handoff in hugetlb code.
- Use Michal's suggestion to batch pages for freeing. This eliminated
the need to recalculate loop control variables when dropping the lo
it should not
introduce any changes to functionality.
Signed-off-by: Mike Kravetz
---
mm/hugetlb.c | 88 ++--
1 file changed, 51 insertions(+), 37 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 8497a3598c86..df2a3d1f632b 100644
---
changed to
an (irq aware) spin lock. The bitmap processing should be quite fast in the
typical case, but if cma sizes grow to TB then we will likely need to
replace the lock with a more optimized bitmap implementation.
Signed-off-by: Mike Kravetz
---
mm/cma.c | 18 +-
mm/cma.h |
l lock irq safe in a similar manner.
- Revert the !in_task check and workqueue handoff.
[1] https://lore.kernel.org/linux-mm/f1c03b05bc43a...@google.com/
Signed-off-by: Mike Kravetz
Acked-by: Michal Hocko
Reviewed-by: Muchun Song
---
mm/hu
After making hugetlb lock irq safe and separating some functionality
done under the lock, add some lockdep_assert_held() calls to help verify
locking.
Signed-off-by: Mike Kravetz
Acked-by: Michal Hocko
Reviewed-by: Miaohe Lin
Reviewed-by: Muchun Song
---
mm/hugetlb.c | 9 +
1 file changed, 9
Now that cma_release is non-blocking and irq safe, there is no need to
drop hugetlb_lock before calling.
Signed-off-by: Mike Kravetz
Acked-by: Roman Gushchin
Acked-by: Michal Hocko
---
mm/hugetlb.c | 6 --
1 file changed, 6 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index
page to reduce
long hold times.
The ugly unlock/lock cycle in free_pool_huge_page will be removed in
a subsequent patch which restructures free_pool_huge_page.
Signed-off-by: Mike Kravetz
---
mm/hugetlb.c | 43 +--
1 file changed, 33 insertions(+), 10
This commit removes the
cond_resched_lock() and the potential race. Therefore, remove the
subtle code and restore the more straightforward accounting, effectively
reverting the commit.
Signed-off-by: Mike Kravetz
Reviewed-by: Muchun Song
Acked-by: Michal Hocko
---
mm/huge
On 4/2/21 10:59 PM, Muchun Song wrote:
> On Sat, Apr 3, 2021 at 4:56 AM Mike Kravetz wrote:
>>
>> On 4/2/21 5:47 AM, Muchun Song wrote:
>>> On Wed, Mar 31, 2021 at 11:42 AM Mike Kravetz
>>> wrote:
>>>>
>>>> Commit c77c0a8ac4c5 ("mm/hugetlb: defer freeing of huge pages if in non-task context")
On 4/2/21 5:47 AM, Muchun Song wrote:
> On Wed, Mar 31, 2021 at 11:42 AM Mike Kravetz wrote:
>>
>> Commit c77c0a8ac4c5 ("mm/hugetlb: defer freeing of huge pages if in
>> non-task context") was added to address the issue of free_huge_page
>> being called fro
This commit should not
introduce any changes to functionality.
Signed-off-by: Mike Kravetz
Acked-by: Michal Hocko
Reviewed-by: Miaohe Lin
Reviewed-by: Muchun Song
---
mm/hugetlb.c | 67
1 file changed, 42 insertions(+), 25 deletions(-)
diff --git
patches. This eliminated the need
to do a workqueue handoff in hugetlb code.
- Use Michal's suggestion to batch pages for freeing. This eliminated
the need to recalculate loop control variables when dropping the lock.
- Added lockdep_assert_held() calls
- Rebased to v5.12-rc3-mmotm
page to reduce
long hold times.
The ugly unlock/lock cycle in free_pool_huge_page will be removed in
a subsequent patch which restructures free_pool_huge_page.
Signed-off-by: Mike Kravetz
Acked-by: Michal Hocko
Reviewed-by: Muchun Song
Reviewed-by: Miaohe Lin
---
mm/hugetlb.c | 31
On 3/29/21 7:21 PM, Muchun Song wrote:
> On Tue, Mar 30, 2021 at 7:24 AM Mike Kravetz wrote:
>>
>> With the introduction of remove_hugetlb_page(), there is no need for
>> update_and_free_page to hold the hugetlb lock. Change all callers to
>> drop the lock before cal
On 3/30/21 1:01 AM, Michal Hocko wrote:
> On Mon 29-03-21 16:23:55, Mike Kravetz wrote:
>> Ideally, cma_release could be called from any context. However, that is
>> not possible because a mutex is used to protect the per-area bitmap.
>> Change the bitmap to an irq safe s
On 3/29/21 6:20 PM, Song Bao Hua (Barry Song) wrote:
>
>
>> -----Original Message-----
>> From: Mike Kravetz [mailto:mike.krav...@oracle.com]
>> Sent: Tuesday, March 30, 2021 12:24 PM
>> To: linux...@kvack.org; linux-kernel@vger.kernel.org
>> Cc: Roman Gushch
page to reduce
long hold times.
The ugly unlock/lock cycle in free_pool_huge_page will be removed in
a subsequent patch which restructures free_pool_huge_page.
Signed-off-by: Mike Kravetz
Acked-by: Michal Hocko
Reviewed-by: Muchun Song
---
mm/hugetlb.c | 32 +++-
1
This commit removes the
cond_resched_lock() and the potential race. Therefore, remove the
subtle code and restore the more straightforward accounting, effectively
reverting the commit.
Signed-off-by: Mike Kravetz
---
mm/hugetlb.c | 95 +---
1 file c
Now that cma_release is non-blocking and irq safe, there is no need to
drop hugetlb_lock before calling.
Signed-off-by: Mike Kravetz
---
mm/hugetlb.c | 6 --
1 file changed, 6 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 3c3e4baa4156..1d62f0492e7b 100644
--- a/mm/hugetlb.c
- Added lockdep_assert_held() calls
- Rebased to v5.12-rc3-mmotm-2021-03-17-22-24
Mike Kravetz (8):
mm/cma: change cma mutex to irq safe spinlock
hugetlb: no need to drop hugetlb_lock to call cma_release
hugetlb: add per-hstate mutex to synchronize user adjustments
hugetlb: create remove_hugetlb_page()
Ideally, cma_release could be called from any context. However, that is
not possible because a mutex is used to protect the per-area bitmap.
Change that mutex to an irq-safe spinlock.
Signed-off-by: Mike Kravetz
---
mm/cma.c | 20 +++-
mm/cma.h | 2 +-
mm
On 3/29/21 12:49 AM, Michal Hocko wrote:
> On Sat 27-03-21 15:06:36, Muchun Song wrote:
>> On Thu, Mar 25, 2021 at 8:29 AM Mike Kravetz wrote:
>>>
>>> Commit c77c0a8ac4c5 ("mm/hugetlb: defer freeing of huge pages if in
>>> non-task context") wa
On 3/29/21 12:46 AM, Michal Hocko wrote:
> On Fri 26-03-21 14:32:01, Mike Kravetz wrote:
> [...]
>> - Just change the mutex to an irq safe spinlock.
>
> Yes please.
>
>> AFAICT, the potential
>> downsides could be:
>> - Interrupts disabled during
On 3/26/21 11:54 PM, Muchun Song wrote:
> On Thu, Mar 25, 2021 at 8:29 AM Mike Kravetz wrote:
>>
>> With the introduction of remove_hugetlb_page(), there is no need for
>> update_and_free_page to hold the hugetlb lock. Change all callers to
>> drop the lock before cal
On 3/25/21 4:49 PM, Mike Kravetz wrote:
> On 3/25/21 4:19 PM, Roman Gushchin wrote:
>> On Thu, Mar 25, 2021 at 01:12:51PM -0700, Minchan Kim wrote:
>>> On Thu, Mar 25, 2021 at 06:15:11PM +0100, David Hildenbrand wrote:
>>>> On 25.03.21 17:56, Mike Kravetz wrote:
On 3/25/21 6:42 PM, Miaohe Lin wrote:
> Hi:
> On 2021/3/25 8:28, Mike Kravetz wrote:
>> This effort is the result of a recent bug report [1]. In subsequent
>> discussions [2], it was deemed necessary to properly fix the hugetlb
>
> Many thanks for the effort. I have read t
On 3/25/21 7:10 PM, Miaohe Lin wrote:
> On 2021/3/25 8:28, Mike Kravetz wrote:
>> The new remove_hugetlb_page() routine is designed to remove a hugetlb
>> page from hugetlbfs processing. It will remove the page from the active
>> or free list, update global counters and
On 3/25/21 4:19 PM, Roman Gushchin wrote:
> On Thu, Mar 25, 2021 at 01:12:51PM -0700, Minchan Kim wrote:
>> On Thu, Mar 25, 2021 at 06:15:11PM +0100, David Hildenbrand wrote:
>>> On 25.03.21 17:56, Mike Kravetz wrote:
>>>> On 3/25/21 3:22 AM, Michal Hocko wrote:
On 3/25/21 12:39 PM, Michal Hocko wrote:
> On Thu 25-03-21 10:12:05, Mike Kravetz wrote:
>> On 3/25/21 3:55 AM, Michal Hocko wrote:
>>> On Wed 24-03-21 17:28:32, Mike Kravetz wrote:
>>>> With the introduction of remove_hugetlb_page(), there is no need for
>>
On 3/25/21 4:21 AM, Michal Hocko wrote:
> On Wed 24-03-21 17:28:34, Mike Kravetz wrote:
>> Commit c77c0a8ac4c5 ("mm/hugetlb: defer freeing of huge pages if in
>> non-task context") was added to address the issue of free_huge_page
>> being called from irq
On 3/25/21 4:06 AM, Michal Hocko wrote:
> On Wed 24-03-21 17:28:33, Mike Kravetz wrote:
> [...]
>> @@ -2074,17 +2067,16 @@ static int gather_surplus_pages(struct hstate *h, long delta)
>> * to the associated reservation map.
>> * 2) Free any unused su
On 3/25/21 3:55 AM, Michal Hocko wrote:
> On Wed 24-03-21 17:28:32, Mike Kravetz wrote:
>> With the introduction of remove_hugetlb_page(), there is no need for
>> update_and_free_page to hold the hugetlb lock. Change all callers to
>> drop the lock before calling.
>
On 3/25/21 3:22 AM, Michal Hocko wrote:
> On Thu 25-03-21 10:56:38, David Hildenbrand wrote:
>> On 25.03.21 01:28, Mike Kravetz wrote:
>>> From: Roman Gushchin
>>>
>>> cma_release() has to lock the cma_lock mutex to clear the cma bitmap.
>>> It makes
page to reduce
long hold times.
The ugly unlock/lock cycle in free_pool_huge_page will be removed in
a subsequent patch which restructures free_pool_huge_page.
Signed-off-by: Mike Kravetz
---
mm/hugetlb.c | 34 +-
1 file changed, 29 insertions(+), 5 deletions
- Revert the !in_task check and workqueue handoff.
[1] https://lore.kernel.org/linux-mm/f1c03b05bc43a...@google.com/
Signed-off-by: Mike Kravetz
---
mm/hugetlb.c | 169 +---
mm/hugetlb_cgroup.c | 8 +--
2 files changed, 67 insertions(+), 110 deletions(-)
di
allocators. The hugetlb_lock is dropped before freeing to these
allocators, which results in shorter lock hold times.
Signed-off-by: Mike Kravetz
---
mm/hugetlb.c | 88 ++--
1 file changed, 51 insertions(+), 37 deletions(-)
diff --git a/mm/hugetlb.c b
call, the 'page' can be treated as a normal compound page or
a collection of base size pages.
remove_hugetlb_page is to be called with the hugetlb_lock held.
Creating this routine and separating functionality is in preparation for
restructuring code to reduce lock hold times.
Signed-
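A condensed sketch of the routine as described (caller holds hugetlb_lock;
details abbreviated from the series):

static void remove_hugetlb_page(struct hstate *h, struct page *page,
				bool adjust_surplus)
{
	int nid = page_to_nid(page);

	list_del(&page->lru);		/* off the active or free list */
	if (adjust_surplus) {
		h->surplus_huge_pages--;
		h->surplus_huge_pages_node[nid]--;
	}
	set_page_refcounted(page);
	/* from here on, treated as a normal compound page */
	set_compound_page_dtor(page, NULL_COMPOUND_DTOR);
	h->nr_huge_pages--;
	h->nr_huge_pages_node[nid]--;
}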
From: Roman Gushchin
Replace blocking cma_release() with a non-blocking cma_release_nowait()
call, so there is no more need to temporarily drop hugetlb_lock.
Signed-off-by: Roman Gushchin
Signed-off-by: Mike Kravetz
---
mm/hugetlb.c | 11 +++
1 file changed, 3 insertions(+), 8
After making hugetlb lock irq safe and separating some functionality
done under the lock, add some lockdep_assert_held to help verify
locking.
Signed-off-by: Mike Kravetz
---
mm/hugetlb.c | 9 +
1 file changed, 9 insertions(+)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index e4c441b878f2
This eliminated
the need to recalculate loop control variables when dropping the lock.
- Added lockdep_assert_held() calls
- Rebased to v5.12-rc3-mmotm-2021-03-17-22-24
Mike Kravetz (6):
hugetlb: add per-hstate mutex to synchronize user adjustments
hugetlb: create remove_hugetlb_page() to s
Signed-off-by: Roman Gushchin
[mike.krav...@oracle.com: rebased to v5.12-rc3-mmotm-2021-03-17-22-24]
Signed-off-by: Mike Kravetz
---
include/linux/cma.h | 2 +
mm/cma.c | 93 +
mm/cma.h| 5 +++
3 files changed, 100 insertions(+)
diff --