d_leaf_supported()
when making the decision.
Thanks,
--
Peter Xu
On Thu, Aug 22, 2024 at 05:22:03PM +, LEROY Christophe wrote:
>
>
> Le 18/07/2024 à 00:02, Peter Xu a écrit :
> > Introduce two more sub-options for PGTABLE_HAS_HUGE_LEAVES:
> >
> >- PGTABLE_HAS_PMD_LEAVES: set when there can be PMD mappings
> >-
ct()
should apply to the 1G dax range properly.
Thanks,
--
Peter Xu
x...@kernel.org
Cc: linuxppc-dev@lists.ozlabs.org
Fixes: a00cc7d9dd93 ("mm, x86: add support for PUD-sized transparent hugepages")
Fixes: 27af67f35631 ("powerpc/book3s64/mm: enable transparent pud hugepage")
Signed-off-by: Peter Xu
---
include/linux/huge_mm.h | 24 +++
, simplify the pud_modify()/pmd_modify() comments on shadow stack
pgtable entries to reference pte_modify() to avoid duplicating the whole
paragraph three times.
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: Dave Hansen
Cc: x...@kernel.org
Signed-off-by: Peter Xu
---
arch/x86/include/asm
a separate effort.
[1]
https://lore.kernel.org/all/59d518698f664e07c036a5098833d7b56b953305.ca...@intel.com
Cc: "Edgecombe, Rick P"
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: Dave Hansen
Cc: x...@kernel.org
Acked-by: David Hildenbrand
Signed-off-by: Peter Xu
---
Molnar
Cc: Borislav Petkov
Cc: Dave Hansen
Cc: x...@kernel.org
Acked-by: Dave Hansen
Reviewed-by: David Hildenbrand
Signed-off-by: Peter Xu
---
arch/x86/include/asm/pgtable.h | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include
Ellerman
Cc: Nicholas Piggin
Cc: Christophe Leroy
Cc: linuxppc-dev@lists.ozlabs.org
Cc: Aneesh Kumar K.V
Signed-off-by: Peter Xu
---
arch/powerpc/include/asm/book3s/64/pgtable.h | 3 +++
arch/powerpc/mm/book3s64/pgtable.c | 20
2 files changed, 23 insertions(+)
diff
coming on any known archs.
Cc: k...@vger.kernel.org
Cc: Sean Christopherson
Cc: Paolo Bonzini
Cc: David Rientjes
Cc: Rik van Riel
Signed-off-by: Peter Xu
---
mm/mprotect.c | 32
1 file changed, 16 insertions(+), 16 deletions(-)
diff --git a/mm/mprotect.c b/mm/m
Currently the dax fault handler dumps the vma range when dynamic debugging
is enabled. That's mostly not useful. Dump the (aligned) address instead,
with the order info.
Acked-by: David Hildenbrand
Signed-off-by: Peter Xu
---
drivers/dax/device.c | 6 +++---
1 file changed, 3 insertions(
o not allowed to do smaller than 1G faults in
this case. So skip too.
- Power, as no hardware on hand.
Thanks,
[1] https://gitlab.com/peterx/lkb-harness/-/blob/main/config.json
[2] https://github.com/xzpeter/clibs/blob/master/misc/dax.c
[3] https://github.com/qemu/qemu/blob/master/docs/nvdimm.txt
P
when zapping a huge pud that has PROT_NONE permission.
Here the problem is that x86's pud_leaf() requires both the PRESENT and PSE
bits to be set to report a pud entry as a leaf, but that doesn't look right:
it doesn't follow the pXd_leaf() definition we have stuck with so far, where
PROT_NONE entries should be reported as leaves.
To fix it, change x86's pud_leaf() implementation to check only the PSE bit
when reporting a leaf, irrespective of whether the PRESENT bit is set.
Thanks,
--
Peter Xu
On Thu, Aug 08, 2024 at 02:31:19PM -0700, Sean Christopherson wrote:
> On Thu, Aug 08, 2024, Peter Xu wrote:
> > Hi, Sean,
> >
> > On Thu, Aug 08, 2024 at 08:33:59AM -0700, Sean Christopherson wrote:
> > > On Wed, Aug 07, 2024, Peter Xu wrote:
> > > > m
Hi, Sean,
On Thu, Aug 08, 2024 at 08:33:59AM -0700, Sean Christopherson wrote:
> On Wed, Aug 07, 2024, Peter Xu wrote:
> > mprotect() does mmu notifiers in PMD levels. It's there since 2014 of
> > commit a5338093bfb4 ("mm: move mmu notifier call from change_protectio
On Thu, Aug 08, 2024 at 12:37:21AM +0200, Thomas Gleixner wrote:
> On Wed, Aug 07 2024 at 15:48, Peter Xu wrote:
> > These new helpers will be needed for pud entry updates soon. Introduce
> > these helpers by referencing the pmd ones. Namely:
> >
> > - pudp_in
On Thu, Aug 08, 2024 at 12:28:47AM +0200, Thomas Gleixner wrote:
> On Wed, Aug 07 2024 at 15:48, Peter Xu wrote:
>
> > Subject: mm/x86: arch_check_zapped_pud()
>
> Is not a proper subject line. It clearly lacks a verb.
>
> Subject: mm/x86: Implement arch_check_zapped_p
On Thu, Aug 08, 2024 at 12:22:38AM +0200, Thomas Gleixner wrote:
> On Wed, Aug 07 2024 at 15:48, Peter Xu wrote:
> > An entry should be reported as PUD leaf even if it's PROT_NONE, in which
> > case PRESENT bit isn't there. I hit bad pud without this when testing dax
>
On Wed, Aug 07, 2024 at 02:44:54PM -0700, Andrew Morton wrote:
> On Wed, 7 Aug 2024 17:34:10 -0400 Peter Xu wrote:
>
> > The problem is mprotect() will skip the dax 1G PUD while it shouldn't;
> > meanwhile it'll dump some bad PUD in dmesg. Both of them look like
On Wed, Aug 07, 2024 at 02:23:16PM -0700, Andrew Morton wrote:
> On Wed, 7 Aug 2024 15:48:04 -0400 Peter Xu wrote:
>
> >
> > Tests
> > =====
> >
> > What I did test:
> >
> > - cross-build tests that I normally cover [1]
> >
> > - s
On Wed, Aug 07, 2024 at 02:17:03PM -0700, Andrew Morton wrote:
> On Wed, 7 Aug 2024 15:48:04 -0400 Peter Xu wrote:
>
> >
> > Dax supports pud pages for a while, but mprotect on puds was missing since
> > the start. This series tries to fix that by providing pud
x...@kernel.org
Cc: linuxppc-dev@lists.ozlabs.org
Fixes: a00cc7d9dd93 ("mm, x86: add support for PUD-sized transparent hugepages")
Fixes: 27af67f35631 ("powerpc/book3s64/mm: enable transparent pud hugepage")
Signed-off-by: Peter Xu
---
include/linux/huge_mm.h | 24 +++
These new helpers will be needed for pud entry updates soon. Introduce
these helpers by referencing the pmd ones. Namely:
- pudp_invalidate()
- pud_modify()
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: Dave Hansen
Cc: x...@kernel.org
Signed-off-by: Peter Xu
---
arch/x86
ave Hansen
Reviewed-by: David Hildenbrand
Signed-off-by: Peter Xu
---
arch/x86/include/asm/pgtable.h | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index e39311a89bf4..a2a3bd4c1bda 100644
--- a/arch/x86/i
a separate effort.
[1]
https://lore.kernel.org/all/59d518698f664e07c036a5098833d7b56b953305.ca...@intel.com
Cc: "Edgecombe, Rick P"
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: Dave Hansen
Cc: x...@kernel.org
Acked-by: David Hildenbrand
Signed-off-by: Peter Xu
---
These new helpers will be needed for pud entry updates soon. Introduce
them by referencing the pmd ones. Namely:
- pudp_invalidate()
- pud_modify()
Cc: Michael Ellerman
Cc: Nicholas Piggin
Cc: Christophe Leroy
Cc: linuxppc-dev@lists.ozlabs.org
Cc: Aneesh Kumar K.V
Signed-off-by: Peter Xu
Currently the dax fault handler dumps the vma range when dynamic debugging
is enabled. That's mostly not useful. Dump the (aligned) address instead,
with the order info.
Acked-by: David Hildenbrand
Signed-off-by: Peter Xu
---
drivers/dax/device.c | 6 +++---
1 file changed, 3 insertions(
coming on any known archs.
Cc: k...@vger.kernel.org
Cc: Sean Christopherson
Cc: Paolo Bonzini
Cc: David Rientjes
Cc: Rik van Riel
Signed-off-by: Peter Xu
---
mm/mprotect.c | 32
1 file changed, 16 insertions(+), 16 deletions(-)
diff --git a/mm/mprotect.c b/mm/m
om/peterx/lkb-harness/-/blob/main/config.json
[2] https://github.com/xzpeter/clibs/blob/master/misc/dax.c
[3] https://github.com/qemu/qemu/blob/master/docs/nvdimm.txt
Peter Xu (7):
mm/dax: Dump start address in fault handler
mm/mprotect: Push mmu notifier to PUDs
mm/powerpc: Add missing pud he
On Tue, Aug 06, 2024 at 06:32:10PM +0200, David Hildenbrand wrote:
> On 06.08.24 18:26, Peter Xu wrote:
> > On Tue, Aug 06, 2024 at 03:02:00PM +0200, David Hildenbrand wrote:
> > > > Right.
> > > >
> > > > I don't have a reason to change n
we should look into dropping that PMD counter completely.
No strong opinion here. If we prefer keeping that as a separate topic, I'll
drop this patch. You're right, it's not yet relevant to the fix.
Thanks,
--
Peter Xu
On Wed, Jul 31, 2024 at 02:18:26PM +0200, David Hildenbrand wrote:
> On 15.07.24 21:21, Peter Xu wrote:
> > In 2013, commit 72403b4a0fbd ("mm: numa: return the number of base pages
> > altered by protection changes") introduced "numa_huge_pte_updates" vmstat
&g
On Wed, Jul 31, 2024 at 02:04:38PM +0200, David Hildenbrand wrote:
> On 15.07.24 21:21, Peter Xu wrote:
> > Currently the dax fault handler dumps the vma range when dynamic debugging
> > enabled. That's mostly not useful. Dump the (aligned) address instead
>
On Thu, Jul 25, 2024 at 05:23:48PM -0700, James Houghton wrote:
> On Thu, Jul 25, 2024 at 3:41 PM Peter Xu wrote:
> >
> > On Thu, Jul 25, 2024 at 11:29:49AM -0700, James Houghton wrote:
> > > > - pages += change_pmd_range(tlb, vma, pud, a
e PUD (or PMDs/PTEs underneath) does not have
> this issue. WDYT?
Could you elaborate more on the DONTNEED issue you're mentioning here?
>
> Thanks for this series!
Thanks for reviewing it, James.
--
Peter Xu
On Mon, Jul 15, 2024 at 03:21:34PM -0400, Peter Xu wrote:
> [Based on mm-unstable, commit 31334cf98dbd, July 2nd]
>
> v3:
> - Fix a build issue on i386 PAE config
> - Moved one line from patch 8 to patch 3
>
> v1: https://lore.kernel.org/r/20240621142504.1940209-1-pet...@r
On Tue, Jul 23, 2024 at 10:18:37AM +0200, David Hildenbrand wrote:
> On 22.07.24 17:31, Peter Xu wrote:
> > On Mon, Jul 22, 2024 at 03:29:43PM +0200, David Hildenbrand wrote:
> > > On 18.07.24 00:02, Peter Xu wrote:
> > > > This is an RFC series, so not yet for me
On Mon, Jul 22, 2024 at 03:29:43PM +0200, David Hildenbrand wrote:
> On 18.07.24 00:02, Peter Xu wrote:
> > This is an RFC series, so not yet for merging. Please don't be scared by
> > the code changes: most of them are code movements only.
> >
> > This series
the whole -pud
file, with the hope of reducing the size of the compiled and linked objects.
No functional change intended, only code movement. That said, some "ifdef"
machinery changes are needed to keep all kinds of configurations compiling.
Cc: Jason Gunthorpe
Cc: Matthew Wilcox
Cc: Osca
lies to pxx_leaf() API.
Cc: Alistair Popple
Cc: Dan Williams
Cc: Jason Gunthorpe
Signed-off-by: Peter Xu
---
include/linux/huge_mm.h| 6 +++---
include/linux/pgtable.h| 30 +-
mm/hmm.c | 4 ++--
mm/huge_mapping_pmd.c | 9 +++--
internal.h, huge_mm.h).
Signed-off-by: Peter Xu
---
include/linux/huge_mm.h | 10 ++
include/linux/mm.h | 18 ++
mm/internal.h | 33 -
3 files changed, 28 insertions(+), 33 deletions(-)
diff --git a/include/linux/huge_mm.h b/inc
Make the pmd/pud helpers rely on the new PGTABLE_HAS_*_LEAVES options rather
than on THP alone, as THP is only one form of huge mapping.
Signed-off-by: Peter Xu
---
arch/arm64/include/asm/pgtable.h | 6 ++--
arch/powerpc/include/asm/book3s/64/pgtable.h | 2 +-
arch/powerpc/mm/book3s64
It's always 0 for all archs, and there's no sign of p4d entry support coming
in the near future. Remove it until it's needed for real.
Signed-off-by: Peter Xu
---
arch/arm64/include/asm/pgtable.h | 5 -
arch/powerpc/include/asm/book3s/64/pgtable.h | 5 -
arch/
. However let's
leave that for later as that's the easy part. So far, we use these options
to stably detect per-arch huge mapping support.
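A sketch of what such options could look like in mm/Kconfig. Apart from PGTABLE_HAS_HUGE_LEAVES and PGTABLE_HAS_PMD_LEAVES named above, the option names and select conditions below are assumptions for illustration, not the series' actual hunks:

```
# Illustrative only: exact conditions differ per architecture.
config PGTABLE_HAS_HUGE_LEAVES
	def_bool TRANSPARENT_HUGEPAGE || HUGETLB_PAGE

config PGTABLE_HAS_PMD_LEAVES
	def_bool PGTABLE_HAS_HUGE_LEAVES
	depends on TRANSPARENT_HUGEPAGE

config PGTABLE_HAS_PUD_LEAVES
	def_bool PGTABLE_HAS_HUGE_LEAVES
	depends on HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
```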
Signed-off-by: Peter Xu
---
include/linux/huge_mm.h | 10 +++---
mm/Kconfig | 6 ++
2 files changed, 13 insertions(+), 3 deleti
There are a few tree-wide changes under arch/, but not a lot; to avoid
disturbing too many people, I only copied each arch's open lists, not yet
the arch maintainers.
Tests
=====
My normal 19-archs cross-compilation tests pass with it, and smoke tested
on x86_64 with a local
On Mon, Jul 15, 2024 at 03:21:34PM -0400, Peter Xu wrote:
> [Based on mm-unstable, commit 31334cf98dbd, July 2nd]
I forgot to update this in the cover letter; it's actually based on the
latest, which is 79ae458094ff, as of today (July 15th).
--
Peter Xu
ixes: 27af67f35631 ("powerpc/book3s64/mm: enable transparent pud hugepage")
Signed-off-by: Peter Xu
---
include/linux/huge_mm.h | 24 +++
mm/huge_memory.c| 52 +
mm/mprotect.c | 34 ++-
3
These new helpers will be needed for pud entry updates soon. Introduce
these helpers by referencing the pmd ones. Namely:
- pudp_invalidate()
- pud_modify()
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: Dave Hansen
Cc: x...@kernel.org
Signed-off-by: Peter Xu
---
arch/x86
a separate effort.
[1]
https://lore.kernel.org/all/59d518698f664e07c036a5098833d7b56b953305.ca...@intel.com
Cc: "Edgecombe, Rick P"
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: Dave Hansen
Cc: x...@kernel.org
Signed-off-by: Peter Xu
---
arch/x86/include/asm/pgta
ave Hansen
Signed-off-by: Peter Xu
---
arch/x86/include/asm/pgtable.h | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 65b8e5bb902c..25fc6d809572 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/i
These new helpers will be needed for pud entry updates soon. Introduce
them by referencing the pmd ones. Namely:
- pudp_invalidate()
- pud_modify()
Cc: Michael Ellerman
Cc: Nicholas Piggin
Cc: Christophe Leroy
Cc: linuxppc-dev@lists.ozlabs.org
Cc: Aneesh Kumar K.V
Signed-off-by: Peter Xu
coming on any known archs.
Cc: k...@vger.kernel.org
Cc: Sean Christopherson
Cc: Paolo Bonzini
Cc: David Rientjes
Cc: Rik van Riel
Signed-off-by: Peter Xu
---
mm/mprotect.c | 32
1 file changed, 16 insertions(+), 16 deletions(-)
diff --git a/mm/mprotect.c b/mm/m
ant to do it right if any NUMA developers would like it to exist,
but we should do that with all of the above resolved, both considering PUDs
and getting the accounting correct. That can be done on top when there's a
real need for it.
Cc: Huang Ying
Cc: Mel Gorman
Cc: Alex Thorl
Currently the dax fault handler dumps the vma range when dynamic debugging
is enabled. That's mostly not useful. Dump the (aligned) address instead,
with the order info.
Signed-off-by: Peter Xu
---
drivers/dax/device.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --
k for devdax anyway since smaller-than-1G faults aren't allowed in
this case. So skip it too.
- Power, as no hardware on hand.
Thanks,
[1] https://gitlab.com/peterx/lkb-harness/-/blob/main/config.json
[2] https://github.com/xzpeter/clibs/blob/master/misc/dax.c
[3] https://github.com/qemu/qe
On Fri, Jul 12, 2024 at 12:40:39PM +1000, Alistair Popple wrote:
>
> Peter Xu writes:
>
> > On Tue, Jul 09, 2024 at 02:07:31PM +1000, Alistair Popple wrote:
> >>
> >> Peter Xu writes:
> >>
> >> > Hi, Alistair,
> >> >
>
On Sat, Jul 06, 2024 at 05:16:15PM +0800, kernel test robot wrote:
> Hi Peter,
>
> kernel test robot noticed the following build errors:
>
> [auto build test ERROR on akpm-mm/mm-everything]
>
> url:
> https://github.com/intel-lab-lkp/linux/commits/Peter-Xu/mm-dax-Dum
On Tue, Jul 09, 2024 at 02:07:31PM +1000, Alistair Popple wrote:
>
> Peter Xu writes:
>
> > Hi, Alistair,
> >
> > On Thu, Jun 27, 2024 at 10:54:26AM +1000, Alistair Popple wrote:
> >> Now that DAX is managing page reference counts the same as normal
unt plan, but in the meantime working for pfn injections when
there's no page struct?
Thanks,
--
Peter Xu
ixes: 27af67f35631 ("powerpc/book3s64/mm: enable transparent pud hugepage")
Signed-off-by: Peter Xu
---
include/linux/huge_mm.h | 24 +++
mm/huge_memory.c| 52 +
mm/mprotect.c | 40 ---
3
These new helpers will be needed for pud entry updates soon. Introduce
these helpers by referencing the pmd ones. Namely:
- pudp_invalidate()
- pud_modify()
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: Dave Hansen
Cc: x...@kernel.org
Signed-off-by: Peter Xu
---
arch/x86
a separate effort.
[1]
https://lore.kernel.org/all/59d518698f664e07c036a5098833d7b56b953305.ca...@intel.com
Cc: "Edgecombe, Rick P"
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: Dave Hansen
Cc: x...@kernel.org
Signed-off-by: Peter Xu
---
arch/x86/include/asm/pgta
ave Hansen
Signed-off-by: Peter Xu
---
arch/x86/include/asm/pgtable.h | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 65b8e5bb902c..25fc6d809572 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/i
These new helpers will be needed for pud entry updates soon. Introduce
them by referencing the pmd ones. Namely:
- pudp_invalidate()
- pud_modify()
Cc: Michael Ellerman
Cc: Nicholas Piggin
Cc: Christophe Leroy
Cc: linuxppc-dev@lists.ozlabs.org
Cc: Aneesh Kumar K.V
Signed-off-by: Peter Xu
coming on any known archs.
Cc: k...@vger.kernel.org
Cc: Sean Christopherson
Cc: Paolo Bonzini
Cc: David Rientjes
Cc: Rik van Riel
Signed-off-by: Peter Xu
---
mm/mprotect.c | 26 --
1 file changed, 12 insertions(+), 14 deletions(-)
diff --git a/mm/mprotect.c b/mm/mpro
ant to do it right if any NUMA developers would like it to exist,
but we should do that with all of the above resolved, both considering PUDs
and getting the accounting correct. That can be done on top when there's a
real need for it.
Cc: Huang Ying
Cc: Mel Gorman
Cc: Alex Thorl
Currently the dax fault handler dumps the vma range when dynamic debugging
is enabled. That's mostly not useful. Dump the (aligned) address instead,
with the order info.
Signed-off-by: Peter Xu
---
drivers/dax/device.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --
ge puds (here it's simply a clear_pud.. though), but it won't
work for devdax anyway since smaller-than-1G faults aren't allowed in
this case. So skip it too.
- Power, as no hardware on hand.
Thanks,
[1] https://gitlab.com/peterx/lkb-harness/-/blob/main/config.json
[2] https://git
hit a bug but fixed around this area, so I
forgot to add it back, but I really can't remember. I'll keep an extra eye
on that.
Thanks,
--
Peter Xu
eck only happens when zapping,
and IIUC it means there can still be outliers floating around. I wonder
whether it should rely on page_table_check_pxx_set() in that regard.
Thanks,
--
Peter Xu
On Fri, Jun 21, 2024 at 07:51:26AM -0700, Dave Hansen wrote:
> On 6/21/24 07:25, Peter Xu wrote:
> > These new helpers will be needed for pud entry updates soon. Namely:
> >
> > - pudp_invalidate()
> > - pud_modify()
>
> I think it's also definitely wor
On Fri, Jun 21, 2024 at 10:25:01AM -0400, Peter Xu wrote:
> +pmd_t pudp_invalidate(struct vm_area_struct *vma, unsigned long address,
> + pud_t *pudp)
> +{
> + unsigned long old_pud;
> +
> + VM_WARN_ON_ONCE(!pmd_present(*pmdp));
> + old_pmd = pm
An entry should be reported as a PUD leaf even if it's PROT_NONE, in which
case the PRESENT bit isn't set. Without this I hit a bad pud when testing
dax 1G while zapping a PROT_NONE PUD.
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: Dave Hansen
Cc: x...@kernel.org
Signed-off-by
ixes: 27af67f35631 ("powerpc/book3s64/mm: enable transparent pud hugepage")
Signed-off-by: Peter Xu
---
include/linux/huge_mm.h | 24 +++
mm/huge_memory.c| 52 +
mm/mprotect.c | 40 ---
3
These new helpers will be needed for pud entry updates soon. Namely:
- pudp_invalidate()
- pud_modify()
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: Dave Hansen
Cc: x...@kernel.org
Signed-off-by: Peter Xu
---
arch/x86/include/asm/pgtable.h | 36
These new helpers will be needed for pud entry updates soon. Namely:
- pudp_invalidate()
- pud_modify()
Cc: Michael Ellerman
Cc: Nicholas Piggin
Cc: Christophe Leroy
Cc: linuxppc-dev@lists.ozlabs.org
Cc: Aneesh Kumar K.V
Signed-off-by: Peter Xu
---
arch/powerpc/include/asm/book3s/64
coming on any known archs.
Cc: k...@vger.kernel.org
Cc: Sean Christopherson
Cc: Paolo Bonzini
Cc: David Rientjes
Cc: Rik van Riel
Signed-off-by: Peter Xu
---
mm/mprotect.c | 26 --
1 file changed, 12 insertions(+), 14 deletions(-)
diff --git a/mm/mprotect.c b/mm/mpro
ant to do it right if any NUMA developers would like it to exist,
but we should do that with all of the above resolved, both considering PUDs
and getting the accounting correct. That can be done on top when there's a
real need for it.
Cc: Huang Ying
Cc: Mel Gorman
Cc: Alex Thorl
Currently the dax fault handler dumps the vma range when dynamic debugging
is enabled. That's mostly not useful. Dump the (aligned) address instead,
with the order info.
Signed-off-by: Peter Xu
---
drivers/dax/device.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --
work for devdax anyway since smaller-than-1G faults aren't allowed in
this case. So skip it too.
- Power, as no hardware on hand.
Thanks,
[1] https://gitlab.com/peterx/lkb-harness/-/blob/main/config.json
[2] https://lore.kernel.org/all/202406190956.9j1ucie5-...@intel.com
[2] https://github
er. So maybe worth trying if we're
careful and with some good testing coverages.
Thanks,
--
Peter Xu
where contpte can go over >1 pmds.
>
> I am really curious though how we handle that for THP? Or THP on 8xx
> does not support that size?
I'll leave this to Christophe, but IIUC THP is only PMD_ORDER sized, so it
shouldn't apply to the 8MB pages.
Thanks,
--
Peter Xu
s to a huge page larger than
* PAGE_SIZE of the platform. The PFN format isn't important here.
But now it's a pgtable page, containing cont-ptes. Similarly, I think most
pmd_*() helpers will stop working there if we report it as a leaf.
Thanks,
--
Peter Xu
On Mon, May 27, 2024 at 06:03:30AM +, Christophe Leroy wrote:
>
>
> Le 18/03/2024 à 21:04, pet...@redhat.com a écrit :
> > From: Peter Xu
> >
> > This API is not used anymore, drop it for the whole tree.
>
> Some documentation remain in v6.10
able to
> help, he at least knows mm better than me, but he also has other work.
>
> Hopefully we can make this series work, and replace hugepd. But if we
> can't make that work then there is the possibility of just dropping
> support for 16M/16G pages with HPT/4K pages.
Great, thank you!
--
Peter Xu
checks for
hugetlb in any new code.
Currently Oscar has offered help on that hugetlb project, and Oscar will
start to work on the page_walk API refactoring. I guess the simple way for
now is to work on top of Christophe's series. Some proper review of this
series will definitely make it clearer what we should do next.
Thanks,
--
Peter Xu
On Thu, May 23, 2024 at 05:08:29AM +0200, Oscar Salvador wrote:
> On Wed, May 22, 2024 at 05:46:09PM -0400, Peter Xu wrote:
> > > Now, ProcessB still has the page mapped, so upon re-accessing it,
> > > it will trigger a new MCE event. memory-failure code will see that this
>
for a real hwpoison,
e.g. SIGBUS with the address encoded, then KVM work naturally with that
just like a real MCE.
One other thing we can do is inject the poison into the VA together with the
page backing it, but that would pollute a PFN on the dst host into a genuinely
bad PFN that can't be used by the dst OS anymore, so it's less optimal.
Thanks,
--
Peter Xu
On Wed, May 15, 2024 at 12:21:51PM +0200, Oscar Salvador wrote:
> On Tue, May 14, 2024 at 03:34:24PM -0600, Peter Xu wrote:
> > The question is whether we can't.
> >
> > Now we reserved a swp entry just for hwpoison and it makes sense only
> > because we cached t
On Tue, May 14, 2024 at 10:26:49PM +0200, Oscar Salvador wrote:
> On Fri, May 10, 2024 at 03:29:48PM -0400, Peter Xu wrote:
> > IMHO we shouldn't mention that detail, but only state the effect which is
> > to not report the event to syslog.
> >
> > There's
are mutually
> exclusive).
>
> Reviewed-by: John Hubbard
> Signed-off-by: Axel Rasmussen
Acked-by: Peter Xu
One nitpick below.
> ---
> arch/parisc/mm/fault.c | 7 +--
> arch/powerpc/mm/fault.c | 6 --
> arch/x86/mm/fault.c | 6 --
> include/linux
VM_FAULT_SET_HINDEX(hstate_index(h));
> goto out_mutex;
> }
> diff --git a/mm/memory.c b/mm/memory.c
> index d2155ced45f8..29a833b996ae 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -3910,7 +3910,7 @@ static vm_fault_t handle_pte_marker(struct vm_fault
> *vmf)
>
> /* Higher priority than uffd-wp when data corrupted */
> if (marker & PTE_MARKER_POISONED)
> - return VM_FAULT_HWPOISON;
> + return VM_FAULT_HWPOISON | VM_FAULT_HWPOISON_SIM;
>
> if (pte_marker_entry_uffd_wp(entry))
> return pte_marker_handle_uffd_wp(vmf);
> --
> 2.45.0.118.g7fe29c98d7-goog
>
--
Peter Xu
rand
Fixes: a12083d721d7 ("mm/gup: handle hugepd for follow_page()")
Reviewed-by: David Hildenbrand
Signed-off-by: Peter Xu
---
v1: https://lore.kernel.org/r/20240428190151.201002-1-pet...@redhat.com
This is v2 and dropped the 2nd test patch as a better one can come later,
this p
IIUC it used to be not
> > touched because of pte_write() always returns true with a write prefault.
> >
> > Then we let patch 1 go through first, and drop this one?
>
> Whatever you prefer!
Thanks!
Andrew, would you consider taking patch 1 but ignore this patch 2? Or do
you prefer me to resend?
--
Peter Xu
On Mon, Apr 29, 2024 at 09:28:15AM +0200, David Hildenbrand wrote:
> On 28.04.24 21:01, Peter Xu wrote:
> > Prefault, especially with RW, makes the GUP test too easy, and may not yet
> > reach the core of the test.
> >
> > For example, R/O longterm pins will just
rand
Fixes: a12083d721d7 ("mm/gup: handle hugepd for follow_page()")
Signed-off-by: Peter Xu
---
Note: The target commit to be fixed should just been moved into mm-stable,
so no need to cc stable.
---
mm/gup.c | 64 ++--
1 file chan
n Andrew's
tree with that 16MB huge page.
Thanks,
[1] https://lore.kernel.org/r/20240327152332.950956-1-pet...@redhat.com
Peter Xu (2):
mm/gup: Fix hugepd handling in hugetlb rework
mm/selftests: Don't prefault in gup_longterm tests
mm/gup.c | 64 +++
hs at least to
cover the unshare case for R/O longterm pins, in which case the first R/O
GUP attempt will fault the page in R/O first, then the 2nd will go through
the unshare path, checking whether an unshare is needed.
Cc: David Hildenbrand
Signed-off-by: Peter Xu
---
tools/testing/selftes
head with the Power fix on
hugepd putting this aside.
I hope that before the end of this year, whatever I'll fix can go away, by
removing hugepd completely from Linux. For now that may or may not be as
smooth, so we'd better still fix it.
--
Peter Xu
On Fri, Apr 26, 2024 at 07:28:31PM +0200, David Hildenbrand wrote:
> On 26.04.24 18:12, Peter Xu wrote:
> > On Fri, Apr 26, 2024 at 09:44:58AM -0400, Peter Xu wrote:
> > > On Fri, Apr 26, 2024 at 09:17:47AM +0200, David Hildenbrand wrote:
> > > > On 02.04.24
On Fri, Apr 26, 2024 at 09:44:58AM -0400, Peter Xu wrote:
> On Fri, Apr 26, 2024 at 09:17:47AM +0200, David Hildenbrand wrote:
> > On 02.04.24 14:55, David Hildenbrand wrote:
> > > Let's consistently call the "fast-only" part of GUP "GUP-fast" and rena