/testing/selftests/mm/run_vmtests.sh | 4
> 1 file changed, 4 insertions(+)
>
Acked-by: Zi Yan
Best Regards,
Yan, Zi
> usleep(TICK);
> }
>
> - madvise(p, nr_hpages * hpage_pmd_size, MADV_NOHUGEPAGE);
> -
> return timeout == -1;
> }
>
I assume you are going to just remove this madvise based on your discussion
with David. With that, feel free to add Reviewed-by: Zi Yan
Thanks.
Best Regards,
Yan, Zi
When running hugevm tests on a machine without a kernel config present, e.g.,
a VM running a kernel without CONFIG_IKCONFIG_PROC or /boot/config-*,
skip the hugevm tests, which read the kernel config to get page table level
information.
Signed-off-by: Zi Yan
Acked-by: Lorenzo Stoakes
---
.../selftests
When userfaultfd is not compiled into the kernel, userfaultfd() returns -1,
causing the guard_regions.uffd tests to fail. Skip the tests instead.
Signed-off-by: Zi Yan
Reviewed-by: Lorenzo Stoakes
Reviewed-by: Pedro Falcato
---
tools/testing/selftests/mm/guard-regions.c | 17 +++--
1 file
Two guard_regions tests on userfaultfd fail when userfaultfd is not
present. Skip them instead.
The hugevm test reads the kernel config to get page table level information
and fails when neither /proc/config.gz nor /boot/config-* is present. Skip
it instead.
Zi Yan (2):
selftests/mm: skip
On 15 May 2025, at 14:49, Lorenzo Stoakes wrote:
> On Thu, May 15, 2025 at 02:46:41PM -0400, Zi Yan wrote:
>> On 15 May 2025, at 14:41, Lorenzo Stoakes wrote:
>>
>>> Ah you got to this first :) thanks!
>>>
>>> Could you do this with a cover letter thoug
n overview, thanks!
>
> On Thu, May 15, 2025 at 02:23:32PM -0400, Zi Yan wrote:
>> When userfaultfd is not compiled into the kernel, userfaultfd() returns -1,
>> causing the uffd tests in madv_guard to fail. Skip the tests instead.
>
> 'madv_guard'? I'd just say the guard_re
When running hugevm tests on a machine without a kernel config present, e.g.,
a VM running a kernel without CONFIG_IKCONFIG_PROC or /boot/config-*,
skip the hugevm tests, which read the kernel config to get page table level
information.
Signed-off-by: Zi Yan
---
.../selftests/mm/va_high_addr_switch.sh
When userfaultfd is not compiled into the kernel, userfaultfd() returns -1,
causing the uffd tests in madv_guard to fail. Skip the tests instead.
Signed-off-by: Zi Yan
---
tools/testing/selftests/mm/guard-regions.c | 17 +++--
1 file changed, 15 insertions(+), 2 deletions(-)
diff --git a
On 14 May 2025, at 15:51, David Hildenbrand wrote:
> Thanks a bunch for the review!
>
>>> +
>>> + if (PageOffline(page) && PageOfflineSkippable(page))
>>> + continue;
>>> +
>>
>> Some comment like "Skippable PageOffline() pages are not migratable but are
>> skipped duri
e pages for memory offlining.
> */
> - if ((flags & MEMORY_OFFLINE) && PageOffline(page))
> + if ((flags & MEMORY_OFFLINE) && PageOffline(page) &&
> + PageOfflineSkippable(page))
> continue;
With this change, do we no longer give non-virtio-mem drivers a chance
to decrease the PageOffline(page) refcount? Or is virtio-mem the only
driver doing this?
>
> if (__PageMovable(page) || PageLRU(page))
> @@ -577,11 +572,11 @@ __test_page_isolated_in_pageblock(unsigned long pfn, unsigned long end_pfn,
> /* A HWPoisoned page cannot be also PageBuddy */
> pfn++;
> else if ((flags & MEMORY_OFFLINE) && PageOffline(page) &&
> - !page_count(page))
> + PageOfflineSkippable(page))
The same question as above.
> /*
> - * The responsible driver agreed to skip PageOffline()
> - * pages when offlining memory by dropping its
> - * reference in MEM_GOING_OFFLINE.
> + * If the page is a skippable PageOffline() page,
> + * we can offline the memory block, as the driver will
> + * re-discover them when re-onlining the memory.
> */
> pfn++;
> else
> --
> 2.49.0
Otherwise, LGTM. Acked-by: Zi Yan
--
Best Regards,
Yan, Zi
On 14 May 2025, at 13:28, David Hildenbrand wrote:
>>>
>>> Note that PageOffline() is a bit confusing because it's "Memory block
>>> online but page is logically offline (e.g., has a memmap that can be
>>> touched, but the page content should not be touched)".
>>
>> So PageOffline() is before me
On 14 May 2025, at 10:12, David Hildenbrand wrote:
> On 14.05.25 15:45, Zi Yan wrote:
>> On 14 May 2025, at 7:15, David Hildenbrand wrote:
>>
>>> This is a requirement for making PageOffline pages not have a refcount
>>> in the long future ("frozen"),
car Salvador
> Cc: Vlastimil Babka
> Cc: Suren Baghdasaryan
> Cc: Michal Hocko
> Cc: Brendan Jackman
> Cc: Johannes Weiner
> Cc: Zi Yan
> Cc: "Matthew Wilcox (Oracle)"
>
> David Hildenbrand (2):
> mm/memory_hotplug: PG_offline_skippable for off
On 10 Mar 2025, at 12:14, Zi Yan wrote:
> On 7 Mar 2025, at 12:39, Zi Yan wrote:
>
>> This is a preparation patch, both added functions are not used yet.
>>
>> The added __split_unmapped_folio() is able to split a folio with its
>> mapping removed in two manners:
On 10 Mar 2025, at 4:54, Hugh Dickins wrote:
> On Thu, 6 Mar 2025, Zi Yan wrote:
>> On 5 Mar 2025, at 17:38, Hugh Dickins wrote:
>>> On Wed, 5 Mar 2025, Zi Yan wrote:
>>>> On 5 Mar 2025, at 16:03, Hugh Dickins wrote:
>>>>>
>>>>> Beyon
On 10 Mar 2025, at 13:32, Zi Yan wrote:
> On 10 Mar 2025, at 12:14, Zi Yan wrote:
>
>> On 7 Mar 2025, at 12:39, Zi Yan wrote:
>>
>>> This is a preparation patch, both added functions are not used yet.
>>>
>>> The added __split_unmapped_folio() is able
On 10 Mar 2025, at 13:00, Matthew Wilcox wrote:
> On Mon, Mar 10, 2025 at 12:42:06PM -0400, Zi Yan wrote:
>>> Because of the “Careful” comment. But new_folio->* should be fine,
>>> since it is the same as new_head. So I probably can replace all
>>> new
On 10 Mar 2025, at 12:39, Zi Yan wrote:
> On 10 Mar 2025, at 12:30, Matthew Wilcox wrote:
>
>> On Fri, Mar 07, 2025 at 12:39:55PM -0500, Zi Yan wrote:
>>> + for (index = new_nr_pages; index < nr_pages; index += new_nr_pages) {
>>> +
f-f9a7-2853b5318...@google.com/
Cc: sta...@vger.kernel.org
Signed-off-by: Zi Yan
---
mm/huge_memory.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 3d3ebdc002d5..373781b21e5c 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3304,7 +
On 10 Mar 2025, at 12:30, Matthew Wilcox wrote:
> On Fri, Mar 07, 2025 at 12:39:55PM -0500, Zi Yan wrote:
>> +for (index = new_nr_pages; index < nr_pages; index += new_nr_pages) {
>> +struct page *head = &folio->page;
>> +struc
On 7 Mar 2025, at 12:39, Zi Yan wrote:
> This is a preparation patch, both added functions are not used yet.
>
> The added __split_unmapped_folio() is able to split a folio with its
> mapping removed in two manners: 1) uniform split (the existing way), and
> 2) buddy allocat
Now that split_huge_page_to_list_to_order() uses the new backend split code in
__split_unmapped_folio(), the old __split_huge_page() and
__split_huge_page_tail() can be removed.
Signed-off-by: Zi Yan
Cc: Baolin Wang
Cc: David Hildenbrand
Cc: Hugh Dickins
Cc: John Hubbard
Cc: Kefeng Wang
Cc
outside __folio_split() in the
following commit.
Signed-off-by: Zi Yan
Cc: Baolin Wang
Cc: David Hildenbrand
Cc: Hugh Dickins
Cc: John Hubbard
Cc: Kefeng Wang
Cc: Kirill A. Shutemov
Cc: Matthew Wilcox
Cc: Miaohe Lin
Cc: Ryan Roberts
Cc: Yang Shi
Cc: Yu Zhao
Cc: Kairui Song
---
mm
It splits page cache folios to orders from 0 to 8 at different in-folio
offsets.
Signed-off-by: Zi Yan
Cc: Baolin Wang
Cc: David Hildenbrand
Cc: Hugh Dickins
Cc: John Hubbard
Cc: Kefeng Wang
Cc: Kirill A. Shutemov
Cc: Matthew Wilcox
Cc: Miaohe Lin
Cc: Ryan Roberts
Cc: Yang Shi
Cc: Yu
-...@nvidia.com
Signed-off-by: Zi Yan
Cc: Baolin Wang
Cc: David Hildenbrand
Cc: Hugh Dickins
Cc: John Hubbard
Cc: Kefeng Wang
Cc: Kirill A. Shutemov
Cc: Matthew Wilcox
Cc: Miaohe Lin
Cc: Ryan Roberts
Cc: Yang Shi
Cc: Yu Zhao
Cc: Kairui Song
Signed-off-by: Andrew Morton
---
include
This allows testing folio_split() by specifying an additional in-folio
page offset parameter to the split_huge_page debugfs interface.
Signed-off-by: Zi Yan
Cc: Baolin Wang
Cc: David Hildenbrand
Cc: Hugh Dickins
Cc: John Hubbard
Cc: Kefeng Wang
Cc: Kirill A. Shutemov
Cc: Matthew Wilcox
Cc
-off-by: Zi Yan
Cc: Baolin Wang
Cc: David Hildenbrand
Cc: Hugh Dickins
Cc: John Hubbard
Cc: Kefeng Wang
Cc: Kirill A. Shutemov
Cc: Matthew Wilcox
Cc: Miaohe Lin
Cc: Ryan Roberts
Cc: Yang Shi
Cc: Yu Zhao
Cc: Kairui Song
---
mm/huge_memory.c | 112
given page to one lower
order.
Signed-off-by: Zi Yan
Cc: Baolin Wang
Cc: David Hildenbrand
Cc: Hugh Dickins
Cc: John Hubbard
Cc: Kefeng Wang
Cc: Kirill A. Shutemov
Cc: Matthew Wilcox
Cc: Miaohe Lin
Cc: Ryan Roberts
Cc: Yang Shi
Cc: Yu Zhao
Cc: Kairui Song
---
mm/huge_memory.c | 348
|
---
Signed-off-by: Zi Yan
Cc: Baolin Wang
Cc: David Hildenbrand
Cc: Hugh Dickins
Cc: John Hubbard
Cc: Kefeng Wang
Cc: Kirill A. Shutemov
Cc: Miaohe Lin
Cc: Matthew Wilcox
Cc: Ryan Roberts
Cc: Yang Shi
Cc: Yu Zhao
Cc: Zi Yan
Cc: Kairui Song
---
Documentation/core-api/xarray.rst | 14
/
[12]
https://lore.kernel.org/linux-mm/d45d4f01-e5a5-47e6-8724-01610cc19...@nvidia.com/
[13] https://lore.kernel.org/linux-mm/20250226210032.2044041-1-...@nvidia.com/
[14]
https://lore.kernel.org/linux-mm/2fae27fe-6e2e-3587-4b68-072118d80...@google.com/
[15] https://lore.kernel.org/all/202503031630
On 6 Mar 2025, at 11:21, Zi Yan wrote:
> On 5 Mar 2025, at 17:38, Hugh Dickins wrote:
>
>> On Wed, 5 Mar 2025, Zi Yan wrote:
>>> On 5 Mar 2025, at 16:03, Hugh Dickins wrote:
>>>>
>>>> Beyond checking that, I didn't have time yesterday to investi
On 6 Mar 2025, at 4:19, David Hildenbrand wrote:
> On 05.03.25 22:08, Zi Yan wrote:
>> On 5 Mar 2025, at 15:50, Hugh Dickins wrote:
>>
>>> On Wed, 5 Mar 2025, Zi Yan wrote:
>>>> On 4 Mar 2025, at 6:49, Hugh Dickins wrote:
>>>>>
>>>>>
On 5 Mar 2025, at 17:38, Hugh Dickins wrote:
> On Wed, 5 Mar 2025, Zi Yan wrote:
>> On 5 Mar 2025, at 16:03, Hugh Dickins wrote:
>>>
>>> Beyond checking that, I didn't have time yesterday to investigate
>>> further, but I'll try again today (still u
On 5 Mar 2025, at 16:03, Hugh Dickins wrote:
> On Tue, 4 Mar 2025, Zi Yan wrote:
>> On 4 Mar 2025, at 6:49, Hugh Dickins wrote:
>>>
>>> I'd been unable to complete even a single iteration of my "kernel builds
>>> on huge tmpfs while swapping to SSD"
On 5 Mar 2025, at 15:50, Hugh Dickins wrote:
> On Wed, 5 Mar 2025, Zi Yan wrote:
>> On 4 Mar 2025, at 6:49, Hugh Dickins wrote:
>>>
>>> I think (might be wrong, I'm in a rush) my mods are all to this
>>> "add two new (not yet used) functions for fol
On 4 Mar 2025, at 6:49, Hugh Dickins wrote:
> On Wed, 26 Feb 2025, Zi Yan wrote:
>
>> This is a preparation patch, both added functions are not used yet.
>>
>> The added __split_unmapped_folio() is able to split a folio with its
>> mapping removed in two manners:
On 4 Mar 2025, at 15:29, Andrew Morton wrote:
> On Tue, 04 Mar 2025 11:20:53 -0500 Zi Yan wrote:
>
>> Do you mind folding Hugh’s fixes to this patch? Let me know if you prefer
>> a V10. Thanks.
>
> I think a new series, please. I'll remove the current version from m
On 4 Mar 2025, at 6:49, Hugh Dickins wrote:
> On Wed, 26 Feb 2025, Zi Yan wrote:
>
>> This is a preparation patch, both added functions are not used yet.
>>
>> The added __split_unmapped_folio() is able to split a folio with its
>> mapping removed in two manners:
On 26 Feb 2025, at 16:00, Zi Yan wrote:
> Instead of splitting the large folio uniformly during truncation, try to
> use buddy allocator like split at the start of truncation range to
> minimize the number of resulting folios if it is supported.
> try_folio_split() is intro
On 27 Feb 2025, at 10:14, Matthew Wilcox wrote:
> On Thu, Feb 27, 2025 at 05:55:43AM +, Matthew Wilcox wrote:
>> On Wed, Feb 26, 2025 at 04:00:25PM -0500, Zi Yan wrote:
>>> +static int __split_unmapped_folio(struct folio *folio, int new_order,
>>> + st
It splits page cache folios to orders from 0 to 8 at different in-folio
offsets.
Signed-off-by: Zi Yan
Cc: Baolin Wang
Cc: David Hildenbrand
Cc: Hugh Dickins
Cc: John Hubbard
Cc: Kefeng Wang
Cc: Kirill A. Shutemov
Cc: Matthew Wilcox
Cc: Miaohe Lin
Cc: Ryan Roberts
Cc: Yang Shi
Cc: Yu
..10] can be dropped.
One possible optimization is to make folio_split() split a folio based
on a given range, like [3..10] above. But that complicates folio_split(),
so it will be investigated when necessary.
Signed-off-by: Zi Yan
Cc: Baolin Wang
Cc: David Hildenbrand
Cc: Hugh Dickins
Cc
This allows testing folio_split() by specifying an additional in-folio
page offset parameter to the split_huge_page debugfs interface.
Signed-off-by: Zi Yan
Cc: Baolin Wang
Cc: David Hildenbrand
Cc: Hugh Dickins
Cc: John Hubbard
Cc: Kefeng Wang
Cc: Kirill A. Shutemov
Cc: Matthew Wilcox
Cc
used outside __folio_split() in the
following commit.
Signed-off-by: Zi Yan
Cc: Baolin Wang
Cc: David Hildenbrand
Cc: Hugh Dickins
Cc: John Hubbard
Cc: Kefeng Wang
Cc: Kirill A. Shutemov
Cc: Matthew Wilcox
Cc: Miaohe Lin
Cc: Ryan Roberts
Cc: Yang Shi
Cc: Yu Zhao
Cc: Kairui Song
---
mm
Now that split_huge_page_to_list_to_order() uses the new backend split code in
__folio_split_without_mapping(), the old __split_huge_page() and
__split_huge_page_tail() can be removed.
Signed-off-by: Zi Yan
Cc: Baolin Wang
Cc: David Hildenbrand
Cc: Hugh Dickins
Cc: John Hubbard
Cc: Kefeng Wang
-off-by: Zi Yan
Cc: Baolin Wang
Cc: David Hildenbrand
Cc: Hugh Dickins
Cc: John Hubbard
Cc: Kefeng Wang
Cc: Kirill A. Shutemov
Cc: Matthew Wilcox
Cc: Miaohe Lin
Cc: Ryan Roberts
Cc: Yang Shi
Cc: Yu Zhao
Cc: Kairui Song
---
mm/huge_memory.c | 338
This is a preparation patch for folio_split().
In the upcoming patch folio_split() will share folio unmapping and
remapping code with split_huge_page_to_list_to_order(), so move the code
to a common function __folio_split() first.
Signed-off-by: Zi Yan
Cc: Baolin Wang
Cc: David Hildenbrand
Cc
|
---
Signed-off-by: Zi Yan
Cc: Baolin Wang
Cc: David Hildenbrand
Cc: Hugh Dickins
Cc: John Hubbard
Cc: Kefeng Wang
Cc: Kirill A. Shutemov
Cc: Miaohe Lin
Cc: Matthew Wilcox
Cc: Ryan Roberts
Cc: Yang Shi
Cc: Yu Zhao
Cc: Zi Yan
Cc: Kairui Song
---
Documentation/core-api/xarray.rst | 14
https://lore.kernel.org/linux-mm/20250211155034.268962-1-...@nvidia.com/
[10] https://lore.kernel.org/all/67af65cb.050a0220.21dd3.004a@google.com/
[11] https://lore.kernel.org/linux-mm/20250218235012.1542225-1-...@nvidia.com/
Zi Yan (8):
xarray: add xas_try_split() to split a multi-index entry
mm/
On 26 Feb 2025, at 10:07, Baolin Wang wrote:
> On 2025/2/26 23:00, Zi Yan wrote:
>> On 26 Feb 2025, at 2:11, Baolin Wang wrote:
>>
>>> Hi Zi,
>>>
>>> On 2025/2/19 07:50, Zi Yan wrote:
>>>> A preparation patch for non-uniform folio split, whic
On 26 Feb 2025, at 2:11, Baolin Wang wrote:
> Hi Zi,
>
> On 2025/2/19 07:50, Zi Yan wrote:
>> A preparation patch for non-uniform folio split, which always split a
>> folio into half iteratively, and minimal xarray entry split.
>>
>> Currently, xas_split_alloc()
0] https://lore.kernel.org/all/67af65cb.050a0220.21dd3.004a@google.com/
Zi Yan (8):
xarray: add xas_try_split() to split a multi-index entry
mm/huge_memory: add two new (not yet used) functions for folio_split()
mm/huge_memory: move folio split common code to __folio_split()
mm/huge_memory:
|
---
Signed-off-by: Zi Yan
Cc: Baolin Wang
Cc: David Hildenbrand
Cc: Hugh Dickins
Cc: John Hubbard
Cc: Kefeng Wang
Cc: Kirill A. Shutemov
Cc: Miaohe Lin
Cc: Matthew Wilcox
Cc: Ryan Roberts
Cc: Yang Shi
Cc: Yu Zhao
Cc: Zi Yan
---
Documentation/core-api/xarray.rst | 14 ++-
include/linux
It splits page cache folios to orders from 0 to 8 at different in-folio
offsets.
Signed-off-by: Zi Yan
Cc: Baolin Wang
Cc: David Hildenbrand
Cc: Hugh Dickins
Cc: John Hubbard
Cc: Kefeng Wang
Cc: Kirill A. Shutemov
Cc: Matthew Wilcox
Cc: Miaohe Lin
Cc: Ryan Roberts
Cc: Yang Shi
Cc: Yu
..10] can be dropped.
One possible optimization is to make folio_split() split a folio based
on a given range, like [3..10] above. But that complicates folio_split(),
so it will be investigated when necessary.
Signed-off-by: Zi Yan
Cc: Baolin Wang
Cc: David Hildenbrand
Cc: Hugh Dickins
Cc
This allows testing folio_split() by specifying an additional in-folio
page offset parameter to the split_huge_page debugfs interface.
Signed-off-by: Zi Yan
Cc: Baolin Wang
Cc: David Hildenbrand
Cc: Hugh Dickins
Cc: John Hubbard
Cc: Kefeng Wang
Cc: Kirill A. Shutemov
Cc: Matthew Wilcox
Cc
Now that split_huge_page_to_list_to_order() uses the new backend split code in
__folio_split_without_mapping(), the old __split_huge_page() and
__split_huge_page_tail() can be removed.
Signed-off-by: Zi Yan
Cc: Baolin Wang
Cc: David Hildenbrand
Cc: Hugh Dickins
Cc: John Hubbard
Cc: Kefeng Wang
used outside __folio_split() in the
following commit.
Signed-off-by: Zi Yan
Cc: Baolin Wang
Cc: David Hildenbrand
Cc: Hugh Dickins
Cc: John Hubbard
Cc: Kefeng Wang
Cc: Kirill A. Shutemov
Cc: Matthew Wilcox
Cc: Miaohe Lin
Cc: Ryan Roberts
Cc: Yang Shi
Cc: Yu Zhao
---
mm/huge_memory.c | 160
This is a preparation patch for folio_split().
In the upcoming patch folio_split() will share folio unmapping and
remapping code with split_huge_page_to_list_to_order(), so move the code
to a common function __folio_split() first.
Signed-off-by: Zi Yan
Cc: Baolin Wang
Cc: David Hildenbrand
Cc
-off-by: Zi Yan
Cc: Baolin Wang
Cc: David Hildenbrand
Cc: Hugh Dickins
Cc: John Hubbard
Cc: Kefeng Wang
Cc: Kirill A. Shutemov
Cc: Matthew Wilcox
Cc: Miaohe Lin
Cc: Ryan Roberts
Cc: Yang Shi
Cc: Yu Zhao
---
mm/huge_memory.c | 339 ++-
1 file
On 18 Feb 2025, at 10:44, David Hildenbrand wrote:
> On 17.02.25 23:05, Zi Yan wrote:
>> On 17 Feb 2025, at 16:44, David Hildenbrand wrote:
>>
>>> On 11.02.25 16:50, Zi Yan wrote:
>>>> It is a preparation patch for non-uniform folio split, which always spli
On 17 Feb 2025, at 23:12, Andrew Morton wrote:
> On Mon, 17 Feb 2025 10:22:44 -0500 Zi Yan wrote:
>
>>>
>>> Thanks. The patch below should fix it.
>>>
>>> I am going to send V8, since
>>> 1. there have been 4 fixes so far for V7, a new series
On 17 Feb 2025, at 16:44, David Hildenbrand wrote:
> On 11.02.25 16:50, Zi Yan wrote:
>> It is a preparation patch for non-uniform folio split, which always split
>> a folio into half iteratively, and minimal xarray entry split.
>>
>> Currently, xas_split_alloc() and x
On 16 Feb 2025, at 9:17, Zi Yan wrote:
> On 16 Feb 2025, at 5:32, David Hildenbrand wrote:
>
>> On 11.02.25 16:50, Zi Yan wrote:
>>> folio_split() splits a large folio in the same way as buddy allocator
>>> splits a large free page for allocation. The purpose is
On 16 Feb 2025, at 5:32, David Hildenbrand wrote:
> On 11.02.25 16:50, Zi Yan wrote:
>> folio_split() splits a large folio in the same way as buddy allocator
>> splits a large free page for allocation. The purpose is to minimize the
>> number of folios after the split. For e
On 11 Feb 2025, at 10:50, Zi Yan wrote:
> This is a preparation patch, both added functions are not used yet.
>
> The added __split_unmapped_folio() is able to split a folio with
> its mapping removed in two manners: 1) uniform split (the existing way),
> and 2) buddy alloc
On 14 Feb 2025, at 17:06, David Hildenbrand wrote:
> On 14.02.25 23:03, Zi Yan wrote:
>> On 14 Feb 2025, at 16:59, David Hildenbrand wrote:
>>
>>> On 11.02.25 16:50, Zi Yan wrote:
>>>> This is a preparation patch, both added functions are not used yet.
>>
On 14 Feb 2025, at 16:59, David Hildenbrand wrote:
> On 11.02.25 16:50, Zi Yan wrote:
>> This is a preparation patch, both added functions are not used yet.
>>
>> The added __split_unmapped_folio() is able to split a folio with
>> its mapping removed in two manners: 1)
On 11 Feb 2025, at 19:57, Zi Yan wrote:
> On 11 Feb 2025, at 10:50, Zi Yan wrote:
>
>> It is a preparation patch for non-uniform folio split, which always split
>> a folio into half iteratively, and minimal xarray entry split.
>>
>> Currently, xas_split_alloc() an
On 11 Feb 2025, at 10:50, Zi Yan wrote:
> It is a preparation patch for non-uniform folio split, which always split
> a folio into half iteratively, and minimal xarray entry split.
>
> Currently, xas_split_alloc() and xas_split() always split all slots from a
> multi-index entry
..10] can be dropped.
One possible optimization is to make folio_split() split a folio
based on a given range, like [3..10] above. But that complicates
folio_split(), so it will be investigated when necessary.
Signed-off-by: Zi Yan
---
include/linux/huge_mm.h | 36
It splits page cache folios to orders from 0 to 8 at different in-folio
offsets.
Signed-off-by: Zi Yan
---
.../selftests/mm/split_huge_page_test.c | 34 +++
1 file changed, 27 insertions(+), 7 deletions(-)
diff --git a/tools/testing/selftests/mm/split_huge_page_test.c
b
This allows testing folio_split() by specifying an additional in-folio
page offset parameter to the split_huge_page debugfs interface.
Signed-off-by: Zi Yan
---
mm/huge_memory.c | 47 ++-
1 file changed, 34 insertions(+), 13 deletions(-)
diff --git a/mm
Now that split_huge_page_to_list_to_order() uses the new backend split code in
__folio_split_without_mapping(), the old __split_huge_page() and
__split_huge_page_tail() can be removed.
Signed-off-by: Zi Yan
---
mm/huge_memory.c | 207 ---
1 file changed
outside __folio_split() in the
following commit.
Signed-off-by: Zi Yan
---
mm/huge_memory.c | 137 ++-
1 file changed, 100 insertions(+), 37 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 21ebe2dec5a4..400dfe8a6e60 100644
--- a/mm
This is a preparation patch for folio_split().
In the upcoming patch folio_split() will share folio unmapping and
remapping code with split_huge_page_to_list_to_order(), so move the code
to a common function __folio_split() first.
Signed-off-by: Zi Yan
---
mm/huge_memory.c | 107
-off-by: Zi Yan
---
mm/huge_memory.c | 349 ++-
1 file changed, 348 insertions(+), 1 deletion(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index a0277f4154c2..12d3f515c408 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3262,7 +3262,6
|
---
Signed-off-by: Zi Yan
---
Documentation/core-api/xarray.rst | 14 ++-
include/linux/xarray.h | 7 ++
lib/test_xarray.c | 47 +++
lib/xarray.c | 136 ++
tools/testing/radix-tree/Makefile | 1 +
5 files changed
[8] https://lore.kernel.org/linux-mm/20250205031417.1771278-1-...@nvidia.com/
Zi Yan (8):
xarray: add xas_try_split() to split a multi-index entry.
mm/huge_memory: add two new (not yet used) functions for folio_split()
mm/huge_memory: move folio split common code to __folio_split()
mm/huge_memor
On 7 Feb 2025, at 9:25, Matthew Wilcox wrote:
> On Fri, Feb 07, 2025 at 09:11:39AM -0500, Zi Yan wrote:
>> Existing uniform split requires 2^(order % XA_CHUNK_SHIFT) xa_node
>> allocations
>> during split, when the folio needs to be split to order-0. But non-uniform
>>
On 6 Feb 2025, at 3:01, Andrew Morton wrote:
> On Tue, 4 Feb 2025 22:14:10 -0500 Zi Yan wrote:
>
>> This patchset adds a new buddy allocator like (or non-uniform) large folio
>> split to reduce the total number of after-split folios, the amount of memory
>> needed for
This is a preparation patch for folio_split().
In the upcoming patch folio_split() will share folio unmapping and
remapping code with split_huge_page_to_list_to_order(), so move the code
to a common function __folio_split() first.
Signed-off-by: Zi Yan
---
mm/huge_memory.c | 107
..10] can be dropped.
One possible optimization is to make folio_split() split a folio
based on a given range, like [3..10] above. But that complicates
folio_split(), so it will be investigated when necessary.
Signed-off-by: Zi Yan
---
include/linux/huge_mm.h | 36
It splits page cache folios to orders from 0 to 8 at different in-folio
offsets.
Signed-off-by: Zi Yan
---
.../selftests/mm/split_huge_page_test.c | 34 +++
1 file changed, 27 insertions(+), 7 deletions(-)
diff --git a/tools/testing/selftests/mm/split_huge_page_test.c
b
This allows testing folio_split() by specifying an additional in-folio
page offset parameter to the split_huge_page debugfs interface.
Signed-off-by: Zi Yan
---
mm/huge_memory.c | 47 ++-
1 file changed, 34 insertions(+), 13 deletions(-)
diff --git a/mm
Now that split_huge_page_to_list_to_order() uses the new backend split code in
__folio_split_without_mapping(), the old __split_huge_page() and
__split_huge_page_tail() can be removed.
Signed-off-by: Zi Yan
---
mm/huge_memory.c | 207 ---
1 file changed
non_uniform_split_supported() are added
to factor out check code and will be used outside __folio_split() in the
following commit.
Signed-off-by: Zi Yan
---
mm/huge_memory.c | 134 ++-
1 file changed, 97 insertions(+), 37 deletions(-)
diff --git a/mm/huge_memory.c b/mm
e.kernel.org/linux-mm/20250116211042.741543-1-...@nvidia.com/
Zi Yan (7):
mm/huge_memory: add two new (not yet used) functions for folio_split()
mm/huge_memory: move folio split common code to __folio_split()
mm/huge_memory: add buddy allocator like folio_split()
mm/huge_memory: remove the old, unused __s
-off-by: Zi Yan
---
mm/huge_memory.c | 350 ++-
1 file changed, 349 insertions(+), 1 deletion(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index de72713b1c45..1948d86ac4ce 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3149,7 +3149,6
Now split_huge_page*() supports shmem THP split to any lower order.
Test it.
The test now reads file content out after split to check if the split
corrupts the file data.
Signed-off-by: Zi Yan
Reviewed-by: Baolin Wang
Tested-by: Baolin Wang
---
tools/testing/selftests/mm
Commit 4d684b5f92ba ("mm: shmem: add large folio support for tmpfs") has
added large folio support to shmem. Remove the restriction in
split_huge_page*().
Signed-off-by: Zi Yan
Reviewed-by: Baolin Wang
---
mm/huge_memory.c | 8 +---
1 file changed, 1 insertion(+), 7 deletion
Commit acd7ccb284b8 ("mm: shmem: add large folio support for tmpfs")
changes huge=always to allocate THP/mTHP based on write size, and
split_huge_page_test does not write PMD-sized data, so file-backed THP is
not created during the test. Fix it by writing PMD-sized data.
Signed-off-by: Zi Y
On Wed Jan 22, 2025 at 10:27 AM EST, David Hildenbrand wrote:
> On 22.01.25 16:16, Zi Yan wrote:
>> On Wed Jan 22, 2025 at 9:26 AM EST, David Hildenbrand wrote:
>>> On 22.01.25 13:40, Zi Yan wrote:
>>>> Commit acd7ccb284b8 ("mm: shmem: add large folio support f
On Wed Jan 22, 2025 at 9:26 AM EST, David Hildenbrand wrote:
> On 22.01.25 13:40, Zi Yan wrote:
>> Commit acd7ccb284b8 ("mm: shmem: add large folio support for tmpfs")
>> changes huge=always to allocate THP/mTHP based on write size and
>> split_huge_page_test does n
Now split_huge_page*() supports shmem THP split to any lower order.
Test it.
The test now reads file content out after split to check if the split
corrupts the file data.
Signed-off-by: Zi Yan
Reviewed-by: Baolin Wang
Tested-by: Baolin Wang
---
.../selftests/mm/split_huge_page_test.c
Commit 4d684b5f92ba ("mm: shmem: add large folio support for tmpfs") has
added large folio support to shmem. Remove the restriction in
split_huge_page*().
Signed-off-by: Zi Yan
Reviewed-by: Baolin Wang
---
mm/huge_memory.c | 8 +---
1 file changed, 1 insertion(+), 7 deletion
to "force" to force
THP allocation.
Signed-off-by: Zi Yan
Reviewed-by: Baolin Wang
Tested-by: Baolin Wang
---
.../selftests/mm/split_huge_page_test.c | 48 +--
1 file changed, 45 insertions(+), 3 deletions(-)
diff --git a/tools/testing/selftests/mm/split_huge_
On Wed Jan 22, 2025 at 1:32 AM EST, Baolin Wang wrote:
>
>
> On 2025/1/17 05:10, Zi Yan wrote:
>> Commit acd7ccb284b8 ("mm: shmem: add large folio support for tmpfs")
>> changes huge=always to allocate THP/mTHP based on write size and
>> split_huge_page_test
-off-by: Zi Yan
---
mm/huge_memory.c | 350 ++-
1 file changed, 349 insertions(+), 1 deletion(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index deb4e72daeb9..c98a373babbb 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3146,7 +3146,6