migration and the pmd-related work.
Signed-off-by: Naoya Horiguchi
Signed-off-by: Zi Yan
---
arch/x86/mm/gup.c | 3 +++
fs/proc/task_mmu.c | 20 +++---
mm/gup.c | 8 ++
mm/huge_memory.c | 76 +++---
mm/memcontrol.c
From: Zi Yan
Hi all,
This patchset is based on Naoya Horiguchi's page migration enhancement
for thp patchset with additional IBM ppc64 support, and I have rebased it
on the latest upstream commit.
The motivation is that 4KB page migration underutilizes memory
bandwidth compared to 2M
From: Naoya Horiguchi
Introduce a separate check routine related to the MPOL_MF_INVERT flag. This patch
is just cleanup, with no behavioral change.
Signed-off-by: Naoya Horiguchi
---
mm/mempolicy.c | 16 +++-
1 file changed, 11 insertions(+), 5 deletions(-)
diff --git a/mm/mempolicy.c b/
From: Naoya Horiguchi
Introduces CONFIG_ARCH_ENABLE_HUGEPAGE_MIGRATION to limit thp migration
functionality to x86_64, which should be safer as a first step.
Signed-off-by: Naoya Horiguchi
---
arch/x86/Kconfig| 4
include/linux/huge_mm.h | 10 ++
mm/Kconfig
From: Naoya Horiguchi
The soft dirty bit is designed to be preserved across page migration, so this patch
preserves it over thp migration too.
This patch changes the bit used for _PAGE_SWP_SOFT_DIRTY, because it's necessary
for thp migration (i.e. both _PAGE_PSE and _PAGE_PRESENT are used to detect
p
From: Naoya Horiguchi
While testing thp migration, I saw a BUG_ON triggered due to the race between
soft offline and unpoison (what I actually saw was a "bad page" warning about freeing a
page with PageActive set, and the subsequent bug messages differ each time.)
I tried to solve a similar problem a few
From: Naoya Horiguchi
This patch enables thp migration for soft offline.
Signed-off-by: Naoya Horiguchi
---
mm/memory-failure.c | 31 ---
1 file changed, 12 insertions(+), 19 deletions(-)
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index e105f91..36eb064
From: Zi Yan
Signed-off-by: Zi Yan
---
arch/powerpc/Kconfig | 4
arch/powerpc/include/asm/book3s/64/pgtable.h | 23 +++
2 files changed, 27 insertions(+)
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 927d2ab..84ffd4c 100644
From: Naoya Horiguchi
This patch enables thp migration for move_pages(2).
Signed-off-by: Naoya Horiguchi
---
mm/migrate.c | 24 +---
1 file changed, 21 insertions(+), 3 deletions(-)
diff --git a/mm/migrate.c b/mm/migrate.c
index dfca530..132e8db 100644
--- a/mm/migrate.c
+
From: Naoya Horiguchi
This patch enables thp migration for memory hotremove. Stub definition of
prep_transhuge_page() is added for CONFIG_TRANSPARENT_HUGEPAGE=n.
Signed-off-by: Naoya Horiguchi
---
include/linux/huge_mm.h | 3 +++
mm/memory_hotplug.c | 8
mm/page_isolation.c |
From: Naoya Horiguchi
This patch enables thp migration for mbind(2) and migrate_pages(2).
Signed-off-by: Naoya Horiguchi
---
mm/mempolicy.c | 92 --
1 file changed, 70 insertions(+), 22 deletions(-)
diff --git a/mm/mempolicy.c b/mm/mempo
From: Naoya Horiguchi
This patch prepares thp migration's core code. This code will be enabled when
unmap_and_move() stops unconditionally splitting thp and get_new_page() starts
to allocate destination thps.
Signed-off-by: Naoya Horiguchi
---
arch/x86/include/asm/pgtable.h| 11 ++
arch/
From: Naoya Horiguchi
This patch makes it possible to support thp migration gradually. If you fail
to allocate a destination page as a thp, you just split the source thp as we
do now, and then enter the normal page migration. If you succeed in allocating a
destination thp, you enter thp migration. Su
On 26 Sep 2016, at 11:22, zi@sent.com wrote:
> From: Zi Yan
>
> Hi all,
>
> This patchset is based on Naoya Horiguchi's page migration enhancement
> for thp patchset with additional IBM ppc64 support, and I have rebased it
> on the latest upstream commit.
>
>
Signed-off-by: Zi Yan
---
include/linux/migrate.h | 15 ++-
mm/memory_hotplug.c | 4 +++-
2 files changed, 17 insertions(+), 2 deletions(-)
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index f80c9882403a..f67755ae72c9 100644
--- a/include/linux/migrate.h
+++ b/include
From: Naoya Horiguchi
TTU_MIGRATION is used to convert ptes into migration entries until thp split
completes. This behavior conflicts with the thp migration added in later patches,
so let's introduce a new TTU flag specifically for freezing.
try_to_unmap() is used both for thp split (via freeze_page()) an
Signed-off-by: Naoya Horiguchi
ChangeLog v1 -> v5:
- read soft dirty bit from correct place (*src_pmd) in copy_huge_pmd()
- add missing soft dirty bit transfer in change_huge_pmd()
Signed-off-by: Zi Yan
---
arch/x86/include/asm/pgtable.h | 17 +
fs/proc/task_mmu.c
From: Naoya Horiguchi
Introduce a separate check routine related to the MPOL_MF_INVERT flag.
This patch is just cleanup, with no behavioral change.
Signed-off-by: Naoya Horiguchi
Signed-off-by: Zi Yan
---
mm/mempolicy.c | 22 +-
1 file changed, 17 insertions(+), 5 deletions
From: Zi Yan
Hi all,
The patches are rebased on mmotm-2017-06-16-13-59 with the feedbacks
(the kbuild bot warning and error) from v6 patches.
Hi Kirill, I have cleaned up Patch 5 and Patch 6, so PTE-mapped THP migration is
handled fully by existing code. Can you review these two patches
or thp allocations.
Signed-off-by: Zi Yan
---
mm/mempolicy.c | 108 +
1 file changed, 79 insertions(+), 29 deletions(-)
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index a6160e9ce8dc..539144ec9c59 100644
--- a/mm/mempolicy.c
+++ b/mm/mempo
From: Naoya Horiguchi
This patch enables thp migration for move_pages(2).
Signed-off-by: Naoya Horiguchi
ChangeLog: v1 -> v5:
- fix page counting
ChangeLog: v5 -> v6:
- drop changes on soft-offline in unmap_and_move()
Signed-off-by: Zi Yan
---
mm/migrate.
From: Zi Yan
This patch adds thp migration's core code, including conversions
between a PMD entry and a swap entry, setting PMD migration entry,
removing PMD migration entry, and waiting on PMD migration entries.
This patch makes it possible to support thp migration.
If you fail to alloc
From: Zi Yan
If one of the callers of page migration starts to handle thp,
memory management code starts to see pmd migration entries, so we need
to prepare for that before enabling it. This patch changes various code
points which check the status of given pmds in order to prevent races
between thp migration
("x86/mm: Move swap offset/type up in PTE to
work around erratum"). So let's move _PAGE_SWP_SOFT_DIRTY to bit 1.
Bit 7 is used as reserved (always clear), so please don't use it for
other purpose.
Signed-off-by: Naoya Horiguchi
Signed-off-by: Zi Yan
Acked-by: Dave Hansen
---
From: Naoya Horiguchi
Introduces CONFIG_ARCH_ENABLE_THP_MIGRATION to limit thp migration
functionality to x86_64, which should be safer as a first step.
ChangeLog v1 -> v2:
- fixed config name in subject and patch description
Signed-off-by: Naoya Horiguchi
Reviewed-by: Anshuman Khandual
---
On 21 Jun 2017, at 7:23, Kirill A. Shutemov wrote:
> On Tue, Jun 20, 2017 at 07:07:10PM -0400, Zi Yan wrote:
>> From: Zi Yan
>>
>> This patch adds thp migration's core code, including conversions
>> between a PMD entry and a swap entry, setting PMD migration
On 21 Jun 2017, at 7:49, Kirill A. Shutemov wrote:
> On Tue, Jun 20, 2017 at 07:07:11PM -0400, Zi Yan wrote:
>> @@ -1220,6 +1238,9 @@ int do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t
>> orig_pmd)
>> if (unlikely(!pmd_same(*vmf->pmd, orig_pmd)))
>>
On 21 Jun 2017, at 10:50, Kirill A. Shutemov wrote:
> On Wed, Jun 21, 2017 at 10:37:30AM -0400, Zi Yan wrote:
>> On 21 Jun 2017, at 7:23, Kirill A. Shutemov wrote:
>>
>>> On Tue, Jun 20, 2017 at 07:07:10PM -0400, Zi Yan wrote:
>>>> From: Zi Yan
>>>>
>>> Cc: "Kirill A. Shutemov"
>>> Cc: David Rientjes
>>> Cc: Arnd Bergmann
>>> Cc: Hugh Dickins
>>> Cc: "J.r.me Glisse"
>>> Cc: Daniel Colascione
>>> Cc: Zi Yan
>>> Cc: Naoya Horiguchi
>>>
On 05/25/2017 07:35 PM, Zi Yan wrote:
> On 25 May 2017, at 18:43, Andrew Morton wrote:
>
>> On Thu, 25 May 2017 13:19:54 -0400 "Zi Yan" wrote:
>>
>>> On 25 May 2017, at 13:06, kbuild test robot wrote:
>>>
>>>> Hi Zi,
>>>>
On 3 Nov 2017, at 3:52, Huang, Ying wrote:
> From: Huang Ying
>
> If THP migration is enabled, the following situation is possible,
>
> - A THP is mapped at source address
> - Migration is started to move the THP to another node
> - Page fault occurs
> - The PMD (migration entry) is copied to the
On 4 Nov 2017, at 23:01, huang ying wrote:
On Fri, Nov 3, 2017 at 11:00 PM, Zi Yan wrote:
On 3 Nov 2017, at 3:52, Huang, Ying wrote:
From: Huang Ying
If THP migration is enabled, the following situation is possible,
- A THP is mapped at source address
- Migration is started to move the
>> [truncated x86-64 register dump (RSI, RDI, RBP, R08-R13) from the reported crash]
From: Zi Yan
We need to deposit pre-allocated PTE page table when a PMD migration
entry is copied in copy_huge_pmd(). Otherwise, we will leak the
pre-allocated page and cause a NULL pointer dereference later
in zap_huge_pmd().
The missing counters during PMD migration entry copy process are
Hi Michal,
On 28 Aug 2018, at 11:45, Michal Hocko wrote:
> On Tue 28-08-18 17:42:06, Michal Hocko wrote:
>> On Tue 28-08-18 11:36:59, Jerome Glisse wrote:
>>> On Tue, Aug 28, 2018 at 05:24:14PM +0200, Michal Hocko wrote:
>>>> On Fri 24-08-18 20:05:46, Zi Yan wrote:
Hi Jérôme,
On 24 Aug 2018, at 15:25, jgli...@redhat.com wrote:
> From: Jérôme Glisse
>
> Before this patch migration pmd entry (!pmd_present()) would have
> been treated as a bad entry (pmd_bad() returns true on migration
> pmd entry). The outcome was that device driver would believe that
> the
>
>>> With this change, whenever an application issues MADV_DONTNEED on a
>>> memory region, the region is marked as "space-efficient". For such
>>> regions, a hugepage is not immediately allocated on first write.
>>
>> Kirill didn't like it in the previous version and I do not like this
>> either
Michal Hocko wrote:
> On Mon 29-01-18 22:00:11, Zi Yan wrote:
>> From: Zi Yan
>>
>> migrate_pages() requires at least down_read(mmap_sem) to protect
>> related page tables and VMAs from changing. Let's do it in
>> do_page_moves() for both do_move_pages_t
Hugh Dickins wrote:
> On Mon, 29 Jan 2018, Zi Yan wrote:
>> From: Zi Yan
>>
>> migrate_pages() requires at least down_read(mmap_sem) to protect
>> related page tables and VMAs from changing. Let's do it in
>
> Page tables are protected by their locks.
On 30 Jan 2018, at 11:10, Michal Hocko wrote:
> On Tue 30-01-18 10:52:58, Zi Yan wrote:
>>
>>
>> Michal Hocko wrote:
>>> On Mon 29-01-18 22:00:11, Zi Yan wrote:
>>>> From: Zi Yan
>>>>
>>>> migrate_pages() requires at least dow
From: Zi Yan
When CONFIG_DMA_ENGINE_RAID is enabled, the unmap pool size can reach
256. But in struct dmaengine_unmap_data, map_cnt is only a u8, which wraps
to 0 if the unmap pool is maximally used. This triggers a BUG() when
struct dmaengine_unmap_data is freed. Use u16 to fix the problem.
Signed
Hi Michal,
I discovered that this patch does not hold mmap_sem while migrating pages in
do_move_pages_to_node().
A simple fix below moves mmap_sem from add_page_for_migration()
to the outermost do_pages_move():
diff --git a/mm/migrate.c b/mm/migrate.c
index 5d0dc7b85f90..28b9e126cb38 100644
--- a/m
On 29 Jan 2018, at 17:35, Andrew Morton wrote:
> On Mon, 29 Jan 2018 17:06:14 -0500 "Zi Yan" wrote:
>
>> I discover that this patch does not hold mmap_sem while migrating pages in
>> do_move_pages_to_node().
>>
>> A simple fix below moves mmap_sem from ad
From: Zi Yan
migrate_pages() requires at least down_read(mmap_sem) to protect
related page tables and VMAs from changing. Let's do it in
do_page_moves() for both do_move_pages_to_node() and
add_page_for_migration().
Also add this lock requirement in the comment of migrate_pages().
Signe
On 12 Jan 2018, at 11:56, Vinod Koul wrote:
On Mon, Jan 08, 2018 at 10:50:50AM -0500, Zi Yan wrote:
From: Zi Yan
When CONFIG_DMA_ENGINE_RAID is enabled, the unmap pool size can reach
256. But in struct dmaengine_unmap_data, map_cnt is only a u8, which wraps
to 0 if the unmap pool is maximally
On 5 Apr 2018, at 21:57, huang ying wrote:
> On Wed, Apr 4, 2018 at 11:02 PM, Zi Yan wrote:
>> On 3 Apr 2018, at 23:22, Huang, Ying wrote:
>>
>>> From: Huang Ying
>>>
>>> mmap_sem will be read locked when calling follow_pmd_mask(). But this
>>
On 9 Nov 2018, at 8:11, Mel Gorman wrote:
> On Fri, Nov 09, 2018 at 03:13:18PM +0300, Kirill A. Shutemov wrote:
>> On Thu, Nov 08, 2018 at 10:48:58PM -0800, Anthony Yznaga wrote:
>>> The basic idea as outlined by Mel Gorman in [2] is:
>>>
>>> 1) On first fault in a sufficiently sized range, alloca
e(old_pmd);
> - young = pmd_young(old_pmd);
> - soft_dirty = pmd_soft_dirty(old_pmd);
>
> /*
> * Withdraw the table only after we mark the pmd entry invalid.
>
This one should fix the issue. Thanks.
Reviewed-by: Zi Yan
Fixes 84c3fc4e9c563 ("mm:
cc: Naoya Horiguchi (who proposed to use !_PAGE_PRESENT && !_PAGE_PSE for x86
PMD migration entry check)
On 8 Oct 2018, at 23:58, Anshuman Khandual wrote:
> A normal mapped THP page at PMD level should be correctly differentiated
> from a PMD migration entry while walking the page table. A mapped
On 10 Oct 2018, at 0:05, Anshuman Khandual wrote:
> On 10/09/2018 07:28 PM, Zi Yan wrote:
>> cc: Naoya Horiguchi (who proposed to use !_PAGE_PRESENT && !_PAGE_PSE for x86
>> PMD migration entry check)
>>
>> On 8 Oct 2018, at 23:58, Anshuman Khandual wrote:
>
On 15 Oct 2018, at 0:06, Anshuman Khandual wrote:
> On 10/15/2018 06:23 AM, Zi Yan wrote:
>> On 12 Oct 2018, at 4:00, Anshuman Khandual wrote:
>>
>>> On 10/10/2018 06:13 PM, Zi Yan wrote:
>>>> On 10 Oct 2018, at 0:05, Anshuman Khandual wrote:
>>>
On 5 Nov 2018, at 21:20, Daniel Jordan wrote:
> Hi Zi,
>
> On Mon, Nov 05, 2018 at 01:49:14PM -0500, Zi Yan wrote:
>> On 5 Nov 2018, at 11:55, Daniel Jordan wrote:
>>
>> Do you think if it makes sense to use ktask for huge page migration (the data
>> copy part)?
&
> Cc: Andrew Morton
> Cc: Greg Kroah-Hartman
> Cc: Zi Yan
> Cc: Kirill A. Shutemov
> Cc: "H. Peter Anvin"
> Cc: Anshuman Khandual
> Cc: Dave Hansen
> Cc: David Nellans
> Cc: Ingo Molnar
> Cc: Mel Gorman
> Cc: Minchan Kim
> Cc: Naoya Horiguc
On 12 Oct 2018, at 4:00, Anshuman Khandual wrote:
> On 10/10/2018 06:13 PM, Zi Yan wrote:
>> On 10 Oct 2018, at 0:05, Anshuman Khandual wrote:
>>
>>> On 10/09/2018 07:28 PM, Zi Yan wrote:
>>>> cc: Naoya Horiguchi (who proposed to use !_PAGE_PRESENT &&
On 4 Oct 2018, at 16:17, David Rientjes wrote:
> On Wed, 26 Sep 2018, Kirill A. Shutemov wrote:
>
>> On Tue, Sep 25, 2018 at 02:03:26PM +0200, Michal Hocko wrote:
>>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>>> index c3bc7e9c9a2a..c0bcede31930 100644
>>> --- a/mm/huge_memory.c
>>> +++ b/m
Hi Daniel,
On 5 Nov 2018, at 11:55, Daniel Jordan wrote:
> Hi,
>
> This version addresses some of the feedback from Andrew and Michal last year
> and describes the plan for tackling the rest. I'm posting now since I'll be
> presenting ktask at Plumbers next week.
>
> Andrew, you asked about para
On 25 Oct 2018, at 4:10, Anshuman Khandual wrote:
> On 10/16/2018 08:01 PM, Zi Yan wrote:
>> On 15 Oct 2018, at 0:06, Anshuman Khandual wrote:
>>
>>> On 10/15/2018 06:23 AM, Zi Yan wrote:
>>>> On 12 Oct 2018, at 4:00, Anshuman Khandual wrote:
>>>
Hi Andrea,
On 16 Oct 2018, at 22:09, Andrea Arcangeli wrote:
> Hello Zi,
>
> On Sun, Oct 14, 2018 at 08:53:55PM -0400, Zi Yan wrote:
>> Hi Andrea, what is the purpose/benefit of making x86’s pmd_present() returns
>> true
>> for a THP under splitting? Does it
Hi David,
On 22 Oct 2018, at 17:04, David Rientjes wrote:
On Tue, 16 Oct 2018, Mel Gorman wrote:
I consider this to be an unfortunate outcome. On the one hand, we have a
problem that three people can trivially reproduce with known test cases
and a patch shown to resolve the problem. Two of t
/r/CAOMGZ=g52r-30rzvhgxebktw7rllwbgadvyeo--iizcd3up...@mail.gmail.com
>
> Signed-off-by: Kirill A. Shutemov
> Reported-by: Vegard Nossum
> Fixes: 616b8371539a ("mm: thp: enable thp migration in generic path")
> Cc: [v4.14+]
> Cc: Zi Yan
> Cc: Naoya Horiguchi
> C
On 16 Jul 2024, at 7:13, Mike Rapoport wrote:
> From: "Mike Rapoport (Microsoft)"
>
> Move numa_emulation code from arch/x86 to mm/numa_emulation.c
>
> This code will be later reused by arch_numa.
>
> No functional changes.
>
> Signed-off-by: Mike Rapoport (Microsoft)
> ---
> arch/x86/Kconfig
On 3 Apr 2018, at 23:22, Huang, Ying wrote:
> From: Huang Ying
>
> mmap_sem will be read locked when calling follow_pmd_mask(). But this
> cannot prevent PMD from being changed for all cases when PTL is
> unlocked, for example, from pmd_trans_huge() to pmd_none() via
> MADV_DONTNEED. So it is p
On 5 Apr 2018, at 12:03, Michal Hocko wrote:
> On Thu 05-04-18 18:55:51, Kirill A. Shutemov wrote:
>> On Thu, Apr 05, 2018 at 05:05:47PM +0200, Michal Hocko wrote:
>>> On Thu 05-04-18 16:40:45, Kirill A. Shutemov wrote:
On Thu, Apr 05, 2018 at 02:48:30PM +0200, Michal Hocko wrote:
>>> [...]
>
On 5 Apr 2018, at 15:04, Michal Hocko wrote:
> On Thu 05-04-18 13:58:43, Zi Yan wrote:
>> On 5 Apr 2018, at 12:03, Michal Hocko wrote:
>>
>>> On Thu 05-04-18 18:55:51, Kirill A. Shutemov wrote:
>>>> On Thu, Apr 05, 2018 at 05:05:47PM +0200, Michal Hocko w
From: Zi Yan
Signed-off-by: Zi Yan
Cc: Catalin Marinas
Cc: Will Deacon
Cc: Steve Capper
Cc: Marc Zyngier
Cc: Kristina Martsenko
Cc: Dan Williams
Cc: linux-arm-ker...@lists.infradead.org
Cc: linux...@kvack.org
---
arch/arm64/include/asm/pgtable.h | 2 ++
1 file changed, 2 insertions
From: Zi Yan
Signed-off-by: Zi Yan
Cc: Russell King
Cc: Christoffer Dall
Cc: Marc Zyngier
Cc: linux-arm-ker...@lists.infradead.org
Cc: linux...@kvack.org
---
arch/arm/include/asm/pgtable.h | 2 ++
1 file changed, 2 insertions(+)
diff --git a/arch/arm/include/asm/pgtable.h b/arch/arm
From: Zi Yan
Signed-off-by: Zi Yan
Cc: Vineet Gupta
Cc: linux-snps-...@lists.infradead.org
Cc: linux...@kvack.org
---
arch/arc/include/asm/pgtable.h | 2 ++
1 file changed, 2 insertions(+)
diff --git a/arch/arc/include/asm/pgtable.h b/arch/arc/include/asm/pgtable.h
index 08fe33830d4b
From: Zi Yan
pmd swap soft dirty support is added, too.
Signed-off-by: Zi Yan
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: Michael Ellerman
Cc: "Aneesh Kumar K.V"
Cc: Ram Pai
Cc: Balbir Singh
Cc: Naoya Horiguchi
Cc: linuxppc-...@lists.ozlabs.org
Cc: linux...@kvack.org
From: Zi Yan
Hi all,
THP migration is only enabled on x86_64 with a special
ARCH_ENABLE_THP_MIGRATION macro. This patchset enables THP migration for
all architectures that use transparent hugepage, so that the special macro can
be dropped. Instead, THP migration is enabled/disabled via
/sys/kernel
From: Zi Yan
pmd swap soft dirty support is added, too.
Signed-off-by: Zi Yan
Cc: Martin Schwidefsky
Cc: Heiko Carstens
Cc: Janosch Frank
Cc: Naoya Horiguchi
Cc: linux-s...@vger.kernel.org
Cc: linux...@kvack.org
---
arch/s390/include/asm/pgtable.h | 5 +
1 file changed, 5 insertions
From: Zi Yan
Signed-off-by: Zi Yan
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: "Kirill A. Shutemov"
Cc: x...@kernel.org
Cc: linux...@kvack.org
---
arch/x86/include/asm/pgtable-2level.h | 2 ++
arch/x86/include/asm/pgtable-3level.h | 2 ++
2 files changed, 4 insertions(+)
diff --git
From: Zi Yan
Signed-off-by: Zi Yan
Cc: Ralf Baechle
Cc: James Hogan
Cc: Michal Hocko
Cc: Ingo Molnar
Cc: Andrew Morton
Cc: linux-m...@linux-mips.org
Cc: linux...@kvack.org
---
arch/mips/include/asm/pgtable-64.h | 2 ++
1 file changed, 2 insertions(+)
diff --git a/arch/mips/include/asm
From: Zi Yan
Signed-off-by: Zi Yan
Cc: "David S. Miller"
Cc: sparcli...@vger.kernel.org
Cc: linux...@kvack.org
---
arch/sparc/include/asm/pgtable_32.h | 2 ++
arch/sparc/include/asm/pgtable_64.h | 2 ++
2 files changed, 4 insertions(+)
diff --git a/arch/sparc/include/asm/pgtable_32
From: Zi Yan
Remove CONFIG_ARCH_ENABLE_THP_MIGRATION. thp migration is enabled along
with transparent hugepage and can be toggled via
/sys/kernel/mm/transparent_hugepage/enable_thp_migration.
Signed-off-by: Zi Yan
Cc: linux...@kvack.org
Cc: Vineet Gupta
Cc: linux-snps-...@lists.infradead.org
On 16 Apr 2019, at 10:30, Dave Hansen wrote:
> On 4/16/19 12:47 AM, Michal Hocko wrote:
>> You definitely have to follow policy. You cannot demote to a node which
>> is outside of the cpuset/mempolicy because you are breaking contract
>> expected by the userspace. That implies doing a rmap walk.
>
On 16 Apr 2019, at 11:55, Dave Hansen wrote:
> On 4/16/19 8:33 AM, Zi Yan wrote:
>>> We have a reasonable argument that demotion is better than
>>> swapping. So, we could say that even if a VMA has a strict NUMA
>>> policy, demoting pages mapped there pages st
On 19 Feb 2019, at 20:38, Anshuman Khandual wrote:
On 02/19/2019 06:26 PM, Matthew Wilcox wrote:
On Tue, Feb 19, 2019 at 01:12:07PM +0530, Anshuman Khandual wrote:
But the location of this temp page matters as well because you would
like to
saturate the inter node interface. It needs to be eit
On 27 Mar 2019, at 6:08, Keith Busch wrote:
> On Tue, Mar 26, 2019 at 08:41:15PM -0700, Yang Shi wrote:
>> On 3/26/19 5:35 PM, Keith Busch wrote:
>>> migration nodes have higher free capacity than source nodes. And since
>>> your attempting THP's without ever splitting them, that also requires
>>>
On 27 Mar 2019, at 10:05, Dave Hansen wrote:
> On 3/27/19 10:00 AM, Zi Yan wrote:
>> I ask this because I observe that migrating a list of pages can
>> achieve higher throughput compared to migrating individual page.
>> For example, migrating 512 4KB pages can achieve ~
On 27 Mar 2019, at 11:00, Dave Hansen wrote:
> On 3/27/19 10:48 AM, Zi Yan wrote:
>> For 40MB/s vs 750MB/s, they were using sys_migrate_pages(). Sorry
>> about the confusion there. As I measure only the migrate_pages() in
>> the kernel, the throughput becomes: migrating 4KB
From: Zi Yan
A copy_page_multithread() function is added to migrate huge pages
in a multi-threaded way, which provides higher throughput than
a single-threaded copy.
Internally, copy_page_multithread() splits and distributes a huge page
across multiple threads, then sends them as jobs to
From: Zi Yan
This is only done for the basic exchange-pages path, because we might
need to lock multiple files when doing concurrent page exchanges,
which could easily cause deadlocks.
Signed-off-by: Zi Yan
---
mm/exchange.c | 284 ++
mm
From: Zi Yan
Signed-off-by: Zi Yan
---
include/linux/mm_inline.h | 21 +
mm/vmscan.c | 25 ++---
2 files changed, 23 insertions(+), 23 deletions(-)
diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index 04ec454..b9fbd0b
From: Zi Yan
It unmaps two lists of pages, exchanges them in
exchange_page_lists_mthread(), and finally remaps both lists of
pages.
Signed-off-by: Zi Yan
---
include/linux/exchange.h | 2 +
mm/exchange.c| 397 +++
mm
From: Zi Yan
An option is added to the move_pages() syscall to use multi-threaded
page migration.
Signed-off-by: Zi Yan
---
include/linux/migrate_mode.h | 1 +
include/uapi/linux/mempolicy.h | 2 ++
mm/migrate.c | 29 +++--
3 files changed, 22
From: Zi Yan
This prepares for the following patches to provide a user API to
manipulate pages in two memory nodes with the help of memcg.
missing memcg_max_size_node()
Signed-off-by: Zi Yan
---
arch/x86/entry/syscalls/syscall_64.tbl | 1 +
include/linux/sched/coredump.h | 1
From: Zi Yan
It prepares for the following patches to enable memcg-based NUMA
node page migration. We are going to limit memory usage in each node
on a per-memcg basis.
Signed-off-by: Zi Yan
---
include/linux/cgroup-defs.h | 1 +
include/linux/memcontrol.h | 67
From: Zi Yan
The syscall allows users to trigger page list scanning to actively
move pages between the active/inactive lists according to page
references. This is limited to the memcg to which the process belongs;
it does not impact the global LRU lists, which belong to the root
memcg.
Signed-off-by: Zi
From: Zi Yan
Exchange two pages using multi threads. Exchange two lists of pages
using multi threads.
Signed-off-by: Zi Yan
---
mm/Makefile| 1 +
mm/exchange.c | 15 ++--
mm/exchange_page.c | 229 +
mm/internal.h | 5
From: Zi Yan
Users can use the syscall to exchange two lists of pages, similar
to move_pages() syscall.
Signed-off-by: Zi Yan
---
arch/x86/entry/syscalls/syscall_64.tbl | 1 +
include/linux/syscalls.h | 5 +
mm/exchange.c | 346
From: Zi Yan
Enable exchanging THPs in the process. It also needs to take care of
exchanging PTE-mapped THPs.
Signed-off-by: Zi Yan
---
include/linux/exchange.h | 2 ++
mm/exchange.c| 73 +---
mm/migrate.c | 2 +-
3 files
From: Zi Yan
With MPOL_F_MEMCG set and MPOL_PREFERRED used, we will enforce
the memory limit set in the corresponding memcg.
Signed-off-by: Zi Yan
---
include/uapi/linux/mempolicy.h | 3 ++-
mm/mempolicy.c | 36
2 files changed, 38
On 28 Jan 2021, at 16:53, Mike Kravetz wrote:
> On 1/28/21 10:26 AM, Joao Martins wrote:
>> For a given hugepage backing a VA, there's a rather inefficient
>> loop which is solely responsible for storing subpages in the GUP
>> @pages/@vmas array. For each subpage we check whether it's within
>> range o
On 11 Feb 2021, at 18:44, Mike Kravetz wrote:
> On 2/11/21 12:47 PM, Zi Yan wrote:
>> On 28 Jan 2021, at 16:53, Mike Kravetz wrote:
>>
>>> On 1/28/21 10:26 AM, Joao Martins wrote:
>>>> For a given hugepage backing a VA, there's a rather inefficient
>>
On 18 Feb 2021, at 12:25, Jason Gunthorpe wrote:
> On Thu, Feb 18, 2021 at 02:45:54PM +, Matthew Wilcox wrote:
>> On Wed, Feb 17, 2021 at 11:02:52AM -0800, Andrew Morton wrote:
>>> On Wed, 17 Feb 2021 10:49:25 -0800 Mike Kravetz
>>> wrote:
page structs are not guaranteed to be contiguou
On 18 Feb 2021, at 12:32, Jason Gunthorpe wrote:
> On Thu, Feb 18, 2021 at 12:27:58PM -0500, Zi Yan wrote:
>> On 18 Feb 2021, at 12:25, Jason Gunthorpe wrote:
>>
>>> On Thu, Feb 18, 2021 at 02:45:54PM +, Matthew Wilcox wrote:
>>>> On Wed, Feb 17, 2021 at 1
On 18 Feb 2021, at 12:51, Mike Kravetz wrote:
> On 2/18/21 9:40 AM, Zi Yan wrote:
>> On 18 Feb 2021, at 12:32, Jason Gunthorpe wrote:
>>
>>> On Thu, Feb 18, 2021 at 12:27:58PM -0500, Zi Yan wrote:
>>>> On 18 Feb 2021, at 12:25, Jason Gunthorpe wrote:
>>&
On 28 Jan 2021, at 5:49, Saravanan D wrote:
> To help with debugging the sluggishness caused by TLB miss/reload,
> we introduce monotonic lifetime hugepage split event counts since
> system state: SYSTEM_RUNNING to be displayed as part of
> /proc/vmstat in x86 servers
>
> The lifetime split event
On 28 Jan 2021, at 11:41, Dave Hansen wrote:
> On 1/28/21 8:33 AM, Zi Yan wrote:
>>> One of the many lasting (as we don't coalesce back) sources for
>>> huge page splits is tracing as the granular page
>>> attribute/permission changes would force the kernel t
On 24 Mar 2021, at 15:16, David Rientjes wrote:
> On Mon, 22 Mar 2021, Zi Yan wrote:
>
>> From: Zi Yan
>>
>> We did not have a direct user interface of splitting the compound page
>> backing a THP and there is no need unless we want to expose the THP
>> i