[PATCH v5] mm: Avoid unnecessary page fault retries on shared memory types

2022-05-30 Thread Peter Xu
I observed that for shared file-backed page faults, we're very likely to
retry one more time for the 1st write fault upon a missing page.  It's
because we need to release the mmap lock for dirty rate limiting via
balance_dirty_pages_ratelimited() (in fault_dirty_shared_page()).

Then after that throttling we return VM_FAULT_RETRY.

We did that probably because VM_FAULT_RETRY is the only way we can return
to the fault handler at that time telling it we've released the mmap lock.

However that's not ideal because it's very likely the fault does not need
to be retried at all: the pgtable was already installed before the
throttling, so the follow-up fault (taking the mmap read lock, walking the
pgtable, etc.) is in most cases unnecessary.

This not only slows down page faults for shared file-backed mappings, but
also adds mmap lock contention that is in most cases not needed at all.

To observe this, write to some shmem pages and look at the "pgfault" value
in /proc/vmstat: it increases by 2 for each first write to a page simply
because we retried, and the "pgfault" vm event captures exactly that.
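
A minimal reproducer along those lines (my own sketch, not part of the
patch); run it and compare the "pgfault" line of /proc/vmstat before and
after:

/* pgfault-demo.c: sequentially dirty a 400MB shmem-backed mapping (a file
 * on /dev/shm).  Without this patch each first write to a page is expected
 * to bump the "pgfault" counter twice. */
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define SIZE	(400UL << 20)

int main(void)
{
	int fd = open("/dev/shm/pgfault-demo", O_CREAT | O_RDWR, 0600);
	char *p;

	if (fd < 0 || ftruncate(fd, SIZE))
		return 1;
	p = mmap(NULL, SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (p == MAP_FAILED)
		return 1;
	memset(p, 1, SIZE);		/* first write fault on every page */
	munmap(p, SIZE);
	close(fd);
	unlink("/dev/shm/pgfault-demo");
	return 0;
}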

To make it more efficient, add a new VM_FAULT_COMPLETED return code just to
show that we've completed the whole fault and released the lock.  It's also
a hint that we very likely do not need another fault immediately on this
page, because we've just completed it.
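
Conceptually the core-mm side is tiny; a simplified sketch (my paraphrase,
not the literal hunk) of fault_dirty_shared_page() after the change:

	fpin = maybe_unlock_mmap_for_io(vmf, NULL);
	balance_dirty_pages_ratelimited(mapping);
	if (fpin) {
		fput(fpin);
		/* mmap lock already dropped: report completion, not a retry */
		return VM_FAULT_COMPLETED;
	}
	return 0;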

This patch provides a ~12% perf boost on my aarch64 test VM, measured with
a simple program sequentially dirtying a 400MB mmap()ed shmem file; these
are the times it needs:

  Before: 650.980 ms (+-1.94%)
  After:  569.396 ms (+-1.38%)

I believe it could help more than that.

GUP and the s390 pgfault handler (for the gmap code before returning from
the pgfault) need some special care; the rest of the changes in the page
fault handlers should be relatively straightforward.

Another thing to mention is that mm_account_fault() accounts this new
fault type as a generic fault, unlike VM_FAULT_RETRY.
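
In other words (a paraphrase of the existing accounting check, not a new
hunk in this patch), mm_account_fault() only skips retried and failed
faults, so a VM_FAULT_COMPLETED fault still gets counted:

	/* Retries and errors are not accounted; everything else is,
	 * including a fault that returned VM_FAULT_COMPLETED. */
	if (ret & (VM_FAULT_ERROR | VM_FAULT_RETRY))
		return;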

I explicitly didn't touch hmm_vma_fault() and break_ksm() because they do
not handle VM_FAULT_RETRY even with existing code, so I'm literally keeping
them as-is.

Acked-by: Geert Uytterhoeven 
Acked-by: Peter Zijlstra (Intel) 
Acked-by: Johannes Weiner 
Acked-by: Vineet Gupta 
Acked-by: Guo Ren 
Acked-by: Max Filippov 
Acked-by: Christian Borntraeger 
Acked-by: Michael Ellerman  (powerpc)
Acked-by: Catalin Marinas 
Reviewed-by: Alistair Popple 
Reviewed-by: Ingo Molnar 
Signed-off-by: Peter Xu 
---
v5:
- Picked up a few more a-bs
- For s390, further optimize gmap==NULL case [Heiko]
---
 arch/alpha/mm/fault.c |  4 
 arch/arc/mm/fault.c   |  4 
 arch/arm/mm/fault.c   |  4 
 arch/arm64/mm/fault.c |  4 
 arch/csky/mm/fault.c  |  4 
 arch/hexagon/mm/vm_fault.c|  4 
 arch/ia64/mm/fault.c  |  4 
 arch/m68k/mm/fault.c  |  4 
 arch/microblaze/mm/fault.c|  4 
 arch/mips/mm/fault.c  |  4 
 arch/nios2/mm/fault.c |  4 
 arch/openrisc/mm/fault.c  |  4 
 arch/parisc/mm/fault.c|  4 
 arch/powerpc/mm/copro_fault.c |  5 +
 arch/powerpc/mm/fault.c   |  5 +
 arch/riscv/mm/fault.c |  4 
 arch/s390/mm/fault.c  | 12 
 arch/sh/mm/fault.c|  4 
 arch/sparc/mm/fault_32.c  |  4 
 arch/sparc/mm/fault_64.c  |  5 +
 arch/um/kernel/trap.c |  4 
 arch/x86/mm/fault.c   |  4 
 arch/xtensa/mm/fault.c|  4 
 include/linux/mm_types.h  |  2 ++
 mm/gup.c  | 34 +-
 mm/memory.c   |  2 +-
 26 files changed, 139 insertions(+), 2 deletions(-)

diff --git a/arch/alpha/mm/fault.c b/arch/alpha/mm/fault.c
index ec20c1004abf..ef427a6bdd1a 100644
--- a/arch/alpha/mm/fault.c
+++ b/arch/alpha/mm/fault.c
@@ -155,6 +155,10 @@ do_page_fault(unsigned long address, unsigned long mmcsr,
if (fault_signal_pending(fault, regs))
return;
 
+   /* The fault is fully completed (including releasing mmap lock) */
+   if (fault & VM_FAULT_COMPLETED)
+   return;
+
if (unlikely(fault & VM_FAULT_ERROR)) {
if (fault & VM_FAULT_OOM)
goto out_of_memory;
diff --git a/arch/arc/mm/fault.c b/arch/arc/mm/fault.c
index dad27e4d69ff..5ca59a482632 100644
--- a/arch/arc/mm/fault.c
+++ b/arch/arc/mm/fault.c
@@ -146,6 +146,10 @@ void do_page_fault(unsigned long address, struct pt_regs 
*regs)
return;
}
 
+   /* The fault is fully completed (including releasing mmap lock) */
+   if (fault & VM_FAULT_COMPLETED)
+   return;
+
/*
 * Fault retry nuances, mmap_lock already relinquished by core mm
 */
diff --git a/arch/arm/mm/fault.c b/arch/ar

Re: [PATCH v4] mm: Avoid unnecessary page fault retries on shared memory types

2022-05-30 Thread Peter Xu
On Mon, May 30, 2022 at 07:03:31PM +0200, Heiko Carstens wrote:
> On Mon, May 30, 2022 at 12:00:52PM -0400, Peter Xu wrote:
> > On Mon, May 30, 2022 at 11:52:54AM -0400, Peter Xu wrote:
> > > On Mon, May 30, 2022 at 11:35:10AM +0200, Christian Borntraeger wrote:
> > > > > diff --git a/arch/s390/mm/fault.c b/arch/s390/mm/fault.c
> > > > > index 4608cc962ecf..e1d40ca341b7 100644
> > > > > --- a/arch/s390/mm/fault.c
> > > > > +++ b/arch/s390/mm/fault.c
> > > > > @@ -436,12 +436,11 @@ static inline vm_fault_t do_exception(struct 
> > > > > pt_regs *regs, int access)
> > > > >   /* The fault is fully completed (including releasing mmap lock) 
> > > > > */
> > > > >   if (fault & VM_FAULT_COMPLETED) {
> > > > > - /*
> > > > > -  * Gmap will need the mmap lock again, so retake it.  
> > > > > TODO:
> > > > > -  * only conditionally take the lock when CONFIG_PGSTE 
> > > > > set.
> > > > > -  */
> > > > > - mmap_read_lock(mm);
> > > > > - goto out_gmap;
> > > > > + if (gmap) {
> > > > > + mmap_read_lock(mm);
> > > > > + goto out_gmap;
> > > > > + }
>   fault = 0;  <
> > > > > + goto out;
> > 
> > Hmm, right after I replied I found "goto out" could be problematic, since
> > all s390 callers of do_exception() will assume it an error condition (side
> > note: "goto out_gmap" contains one step to clear "fault" to 0).  I'll
> > replace this with "return 0" instead if it looks good to both of you.
> > 
> > I'll wait for a confirmation before reposting.  Thanks,
> 
> Right, that was stupid. Thanks for double checking!
> 
> However could you please add "fault = 0" just in front of the goto out
> like above? I'd like to avoid having returns and gotos mixed.

Sure thing.
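
For reference, the resulting s390 hunk (combining Heiko's suggestion with
the "fault = 0" addition; roughly what v5 is expected to carry) would read:

	/* The fault is fully completed (including releasing mmap lock) */
	if (fault & VM_FAULT_COMPLETED) {
		if (gmap) {
			mmap_read_lock(mm);
			goto out_gmap;
		}
		fault = 0;
		goto out;
	}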

-- 
Peter Xu




Re: [PATCH v4] mm: Avoid unnecessary page fault retries on shared memory types

2022-05-30 Thread Peter Xu
On Mon, May 30, 2022 at 11:52:54AM -0400, Peter Xu wrote:
> On Mon, May 30, 2022 at 11:35:10AM +0200, Christian Borntraeger wrote:
> > 
> > 
> > Am 29.05.22 um 22:33 schrieb Heiko Carstens:
> > [...]
> > > 
> > > Guess the patch below on top of your patch is what we want.
> > > Just for clarification: if gmap is not NULL then the process is a kvm
> > > process. So, depending on the workload, this optimization makes sense.
> > > 
> > > diff --git a/arch/s390/mm/fault.c b/arch/s390/mm/fault.c
> > > index 4608cc962ecf..e1d40ca341b7 100644
> > > --- a/arch/s390/mm/fault.c
> > > +++ b/arch/s390/mm/fault.c
> > > @@ -436,12 +436,11 @@ static inline vm_fault_t do_exception(struct 
> > > pt_regs *regs, int access)
> > >   /* The fault is fully completed (including releasing mmap lock) 
> > > */
> > >   if (fault & VM_FAULT_COMPLETED) {
> > > - /*
> > > -  * Gmap will need the mmap lock again, so retake it.  TODO:
> > > -  * only conditionally take the lock when CONFIG_PGSTE set.
> > > -  */
> > > - mmap_read_lock(mm);
> > > - goto out_gmap;
> > > + if (gmap) {
> > > + mmap_read_lock(mm);
> > > + goto out_gmap;
> > > + }
> > > + goto out;

Hmm, right after I replied I found "goto out" could be problematic, since
all s390 callers of do_exception() will treat it as an error condition
(side note: "goto out_gmap" also clears "fault" to 0 on its way out).  I'll
replace this with "return 0" instead if it looks good to both of you.

I'll wait for a confirmation before reposting.  Thanks,

-- 
Peter Xu




Re: [PATCH v4] mm: Avoid unnecessary page fault retries on shared memory types

2022-05-30 Thread Peter Xu
On Mon, May 30, 2022 at 11:35:10AM +0200, Christian Borntraeger wrote:
> 
> 
> Am 29.05.22 um 22:33 schrieb Heiko Carstens:
> [...]
> > 
> > Guess the patch below on top of your patch is what we want.
> > Just for clarification: if gmap is not NULL then the process is a kvm
> > process. So, depending on the workload, this optimization makes sense.
> > 
> > diff --git a/arch/s390/mm/fault.c b/arch/s390/mm/fault.c
> > index 4608cc962ecf..e1d40ca341b7 100644
> > --- a/arch/s390/mm/fault.c
> > +++ b/arch/s390/mm/fault.c
> > @@ -436,12 +436,11 @@ static inline vm_fault_t do_exception(struct pt_regs 
> > *regs, int access)
> > /* The fault is fully completed (including releasing mmap lock) */
> > if (fault & VM_FAULT_COMPLETED) {
> > -   /*
> > -* Gmap will need the mmap lock again, so retake it.  TODO:
> > -* only conditionally take the lock when CONFIG_PGSTE set.
> > -*/
> > -   mmap_read_lock(mm);
> > -   goto out_gmap;
> > +   if (gmap) {
> > +   mmap_read_lock(mm);
> > +   goto out_gmap;
> > +   }
> > +   goto out;
> 
> Yes, that makes sense. With that
> 
> Acked-by: Christian Borntraeger 

Looks sane, thanks Heiko, Christian.  I'll cook another one.

-- 
Peter Xu




[PATCH v4] mm: Avoid unnecessary page fault retries on shared memory types

2022-05-27 Thread Peter Xu
I observed that for shared file-backed page faults, we're very likely to
retry one more time for the 1st write fault upon a missing page.  It's
because we need to release the mmap lock for dirty rate limiting via
balance_dirty_pages_ratelimited() (in fault_dirty_shared_page()).

Then after that throttling we return VM_FAULT_RETRY.

We did that probably because VM_FAULT_RETRY is the only way we can return
to the fault handler at that time telling it we've released the mmap lock.

However that's not ideal because it's very likely the fault does not need
to be retried at all: the pgtable was already installed before the
throttling, so the follow-up fault (taking the mmap read lock, walking the
pgtable, etc.) is in most cases unnecessary.

This not only slows down page faults for shared file-backed mappings, but
also adds mmap lock contention that is in most cases not needed at all.

To observe this, write to some shmem pages and look at the "pgfault" value
in /proc/vmstat: it increases by 2 for each first write to a page simply
because we retried, and the "pgfault" vm event captures exactly that.

To make it more efficient, add a new VM_FAULT_COMPLETED return code just to
show that we've completed the whole fault and released the lock.  It's also
a hint that we very likely do not need another fault immediately on this
page, because we've just completed it.

This patch provides a ~12% perf boost on my aarch64 test VM, measured with
a simple program sequentially dirtying a 400MB mmap()ed shmem file; these
are the times it needs:

  Before: 650.980 ms (+-1.94%)
  After:  569.396 ms (+-1.38%)

I believe it could help more than that.

GUP and the s390 pgfault handler (for the gmap code before returning from
the pgfault) need some special care; the rest of the changes in the page
fault handlers should be relatively straightforward.

Another thing to mention is that mm_account_fault() accounts this new
fault type as a generic fault, unlike VM_FAULT_RETRY.

I explicitly didn't touch hmm_vma_fault() and break_ksm() because they do
not handle VM_FAULT_RETRY even with existing code, so I'm literally keeping
them as-is.

Acked-by: Geert Uytterhoeven 
Acked-by: Peter Zijlstra (Intel) 
Acked-by: Johannes Weiner 
Acked-by: Vineet Gupta 
Acked-by: Guo Ren 
Acked-by: Max Filippov 
Reviewed-by: Alistair Popple 
Reviewed-by: Ingo Molnar 
Signed-off-by: Peter Xu 
---
v4:
- Picked up a-bs and r-bs
- Fix grammar in the comment of faultin_page() [Ingo]
- Fix s390 for gmap since gmap needs the mmap lock [Heiko]
v3:
- Rebase to akpm/mm-unstable
- Copy arch maintainers
---
 arch/alpha/mm/fault.c |  4 
 arch/arc/mm/fault.c   |  4 
 arch/arm/mm/fault.c   |  4 
 arch/arm64/mm/fault.c |  4 
 arch/csky/mm/fault.c  |  4 
 arch/hexagon/mm/vm_fault.c|  4 
 arch/ia64/mm/fault.c  |  4 
 arch/m68k/mm/fault.c  |  4 
 arch/microblaze/mm/fault.c|  4 
 arch/mips/mm/fault.c  |  4 
 arch/nios2/mm/fault.c |  4 
 arch/openrisc/mm/fault.c  |  4 
 arch/parisc/mm/fault.c|  4 
 arch/powerpc/mm/copro_fault.c |  5 +
 arch/powerpc/mm/fault.c   |  5 +
 arch/riscv/mm/fault.c |  4 
 arch/s390/mm/fault.c  | 12 
 arch/sh/mm/fault.c|  4 
 arch/sparc/mm/fault_32.c  |  4 
 arch/sparc/mm/fault_64.c  |  5 +
 arch/um/kernel/trap.c |  4 
 arch/x86/mm/fault.c   |  4 
 arch/xtensa/mm/fault.c|  4 
 include/linux/mm_types.h  |  2 ++
 mm/gup.c  | 34 +-
 mm/memory.c   |  2 +-
 26 files changed, 139 insertions(+), 2 deletions(-)

diff --git a/arch/alpha/mm/fault.c b/arch/alpha/mm/fault.c
index ec20c1004abf..ef427a6bdd1a 100644
--- a/arch/alpha/mm/fault.c
+++ b/arch/alpha/mm/fault.c
@@ -155,6 +155,10 @@ do_page_fault(unsigned long address, unsigned long mmcsr,
if (fault_signal_pending(fault, regs))
return;
 
+   /* The fault is fully completed (including releasing mmap lock) */
+   if (fault & VM_FAULT_COMPLETED)
+   return;
+
if (unlikely(fault & VM_FAULT_ERROR)) {
if (fault & VM_FAULT_OOM)
goto out_of_memory;
diff --git a/arch/arc/mm/fault.c b/arch/arc/mm/fault.c
index dad27e4d69ff..5ca59a482632 100644
--- a/arch/arc/mm/fault.c
+++ b/arch/arc/mm/fault.c
@@ -146,6 +146,10 @@ void do_page_fault(unsigned long address, struct pt_regs 
*regs)
return;
}
 
+   /* The fault is fully completed (including releasing mmap lock) */
+   if (fault & VM_FAULT_COMPLETED)
+   return;
+
/*
 * Fault retry nuances, mmap_lock already relinquished by core mm
 */
diff --git a/arch/arm/mm

Re: [PATCH v3] mm: Avoid unnecessary page fault retries on shared memory types

2022-05-27 Thread Peter Xu
On Fri, May 27, 2022 at 12:46:31PM +0200, Ingo Molnar wrote:
> 
> * Peter Xu  wrote:
> 
> > This patch provides a ~12% perf boost on my aarch64 test VM with a simple
> > program sequentially dirtying 400MB shmem file being mmap()ed and these are
> > the time it needs:
> >
> >   Before: 650.980 ms (+-1.94%)
> >   After:  569.396 ms (+-1.38%)
> 
> Nice!
> 
> >  arch/x86/mm/fault.c   |  4 
> 
> Reviewed-by: Ingo Molnar 
> 
> Minor comment typo:
> 
> > +   /*
> > +* We should do the same as VM_FAULT_RETRY, but let's not
> > +* return -EBUSY since that's not reflecting the reality on
> > +* what has happened - we've just fully completed a page
> > +* fault, with the mmap lock released.  Use -EAGAIN to show
> > +* that we want to take the mmap lock _again_.
> > +*/
> 
> s/reflecting the reality on what has happened
>  /reflecting the reality of what has happened

Will fix.

> 
> > ret = handle_mm_fault(vma, address, fault_flags, NULL);
> > +
> > +   if (ret & VM_FAULT_COMPLETED) {
> > +   /*
> > +* NOTE: it's a pity that we need to retake the lock here
> > +* to pair with the unlock() in the callers. Ideally we
> > +* could tell the callers so they do not need to unlock.
> > +*/
> > +   mmap_read_lock(mm);
> > +   *unlocked = true;
> > +   return 0;
> 
> Indeed that's a pity - I guess more performance could be gained here, 
> especially in highly parallel threaded workloads?

Yes I think so.

The patch avoids the page fault retry, including the mmap lock/unlock side.
Now if we retake the lock for fixup_user_fault() we still save time on the
pgtable walks, but part of the lock overhead is kept, just with smaller
critical sections.

Some fixup_user_fault() callers won't be affected as long as unlocked==NULL
is passed - e.g. the futex code path (fault_in_user_writeable).  After all
they never needed to retake the lock before/after this patch.

It's about the other callers, and they may need some more touch-ups case by
case.  Examples are follow_fault_pfn() in vfio and hva_to_pfn_remapped() in
KVM: both of them return -EAGAIN when *unlocked==true.  We need to teach
them that "*unlocked==true" does not necessarily mean a retry is needed.

I think I can look into them if this patch can be accepted as a follow up.
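
As a rough illustration of that follow-up (an assumption on my side, not
part of this patch), such callers would need to stop treating
"*unlocked == true" as an implicit retry request:

	/* Hypothetical caller sketch, following the pattern used by e.g.
	 * follow_fault_pfn() in vfio. */
	ret = fixup_user_fault(mm, vaddr, fault_flags, &unlocked);
	if (ret)
		return ret;
	if (unlocked)
		return -EAGAIN;	/* today: always retry the outer lookup */
	/*
	 * With VM_FAULT_COMPLETED, unlocked == true can also mean the fault
	 * already succeeded; the caller only needs to redo whatever lookup
	 * depends on state that may have changed while the mmap lock was
	 * dropped, instead of failing the whole attempt.
	 */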

Thanks for taking a look!

-- 
Peter Xu




Re: [PATCH v3] mm: Avoid unnecessary page fault retries on shared memory types

2022-05-27 Thread Peter Xu
Hi, Heiko,

On Fri, May 27, 2022 at 02:23:42PM +0200, Heiko Carstens wrote:
> On Tue, May 24, 2022 at 07:45:31PM -0400, Peter Xu wrote:
> > I observed that for each of the shared file-backed page faults, we're very
> > likely to retry one more time for the 1st write fault upon no page.  It's
> > because we'll need to release the mmap lock for dirty rate limit purpose
> > with balance_dirty_pages_ratelimited() (in fault_dirty_shared_page()).
> > 
> > Then after that throttling we return VM_FAULT_RETRY.
> > 
> > We did that probably because VM_FAULT_RETRY is the only way we can return
> > to the fault handler at that time telling it we've released the mmap lock.
> > 
> > However that's not ideal because it's very likely the fault does not need
> > to be retried at all since the pgtable was well installed before the
> > throttling, so the next continuous fault (including taking mmap read lock,
> > walk the pgtable, etc.) could be in most cases unnecessary.
> > 
> > It's not only slowing down page faults for shared file-backed, but also add
> > more mmap lock contention which is in most cases not needed at all.
> > 
> > To observe this, one could try to write to some shmem page and look at
> > "pgfault" value in /proc/vmstat, then we should expect 2 counts for each
> > shmem write simply because we retried, and vm event "pgfault" will capture
> > that.
> > 
> > To make it more efficient, add a new VM_FAULT_COMPLETED return code just to
> > show that we've completed the whole fault and released the lock.  It's also
> > a hint that we should very possibly not need another fault immediately on
> > this page because we've just completed it.
> > 
> > This patch provides a ~12% perf boost on my aarch64 test VM with a simple
> > program sequentially dirtying 400MB shmem file being mmap()ed and these are
> > the time it needs:
> > 
> >   Before: 650.980 ms (+-1.94%)
> >   After:  569.396 ms (+-1.38%)
> > 
> > I believe it could help more than that.
> > 
> > We need some special care on GUP and the s390 pgfault handler (for gmap
> > code before returning from pgfault), the rest changes in the page fault
> > handlers should be relatively straightforward.
> > 
> > Another thing to mention is that mm_account_fault() does take this new
> > fault as a generic fault to be accounted, unlike VM_FAULT_RETRY.
> > 
> > I explicitly didn't touch hmm_vma_fault() and break_ksm() because they do
> > not handle VM_FAULT_RETRY even with existing code, so I'm literally keeping
> > them as-is.
> > 
> > Signed-off-by: Peter Xu 
> ...
> > diff --git a/arch/s390/mm/fault.c b/arch/s390/mm/fault.c
> > index e173b6187ad5..9503a7cfaf03 100644
> > --- a/arch/s390/mm/fault.c
> > +++ b/arch/s390/mm/fault.c
> > @@ -339,6 +339,7 @@ static inline vm_fault_t do_exception(struct pt_regs 
> > *regs, int access)
> > unsigned long address;
> > unsigned int flags;
> > vm_fault_t fault;
> > +   bool need_unlock = true;
> > bool is_write;
> >  
> > tsk = current;
> > @@ -433,6 +434,13 @@ static inline vm_fault_t do_exception(struct pt_regs 
> > *regs, int access)
> > goto out_up;
> > goto out;
> > }
> > +
> > +   /* The fault is fully completed (including releasing mmap lock) */
> > +   if (fault & VM_FAULT_COMPLETED) {
> > +   need_unlock = false;
> > +   goto out_gmap;
> > +   }
> > +
> > if (unlikely(fault & VM_FAULT_ERROR))
> > goto out_up;
> >  
> > @@ -452,6 +460,7 @@ static inline vm_fault_t do_exception(struct pt_regs 
> > *regs, int access)
> > mmap_read_lock(mm);
> > goto retry;
> > }
> > +out_gmap:
> > if (IS_ENABLED(CONFIG_PGSTE) && gmap) {
> > address =  __gmap_link(gmap, current->thread.gmap_addr,
> >address);
> > @@ -466,7 +475,8 @@ static inline vm_fault_t do_exception(struct pt_regs 
> > *regs, int access)
> > }
> > fault = 0;
> >  out_up:
> > -   mmap_read_unlock(mm);
> > +   if (need_unlock)
> > +   mmap_read_unlock(mm);
> >  out:
> 
> This seems to be incorrect. __gmap_link() requires the mmap_lock to be
> held. Christian, Janosch, or David, could you please check?

Thanks for pointing that out.  Indeed I see the clue right above the
comment of __gmap_link():

/*
 * ...
 * The mmap_loc

[PATCH v3] mm: Avoid unnecessary page fault retries on shared memory types

2022-05-24 Thread Peter Xu
I observed that for shared file-backed page faults, we're very likely to
retry one more time for the 1st write fault upon a missing page.  It's
because we need to release the mmap lock for dirty rate limiting via
balance_dirty_pages_ratelimited() (in fault_dirty_shared_page()).

Then after that throttling we return VM_FAULT_RETRY.

We did that probably because VM_FAULT_RETRY is the only way we can return
to the fault handler at that time telling it we've released the mmap lock.

However that's not ideal because it's very likely the fault does not need
to be retried at all: the pgtable was already installed before the
throttling, so the follow-up fault (taking the mmap read lock, walking the
pgtable, etc.) is in most cases unnecessary.

This not only slows down page faults for shared file-backed mappings, but
also adds mmap lock contention that is in most cases not needed at all.

To observe this, write to some shmem pages and look at the "pgfault" value
in /proc/vmstat: it increases by 2 for each first write to a page simply
because we retried, and the "pgfault" vm event captures exactly that.

To make it more efficient, add a new VM_FAULT_COMPLETED return code just to
show that we've completed the whole fault and released the lock.  It's also
a hint that we very likely do not need another fault immediately on this
page, because we've just completed it.

This patch provides a ~12% perf boost on my aarch64 test VM, measured with
a simple program sequentially dirtying a 400MB mmap()ed shmem file; these
are the times it needs:

  Before: 650.980 ms (+-1.94%)
  After:  569.396 ms (+-1.38%)

I believe it could help more than that.

GUP and the s390 pgfault handler (for the gmap code before returning from
the pgfault) need some special care; the rest of the changes in the page
fault handlers should be relatively straightforward.

Another thing to mention is that mm_account_fault() accounts this new
fault type as a generic fault, unlike VM_FAULT_RETRY.

I explicitly didn't touch hmm_vma_fault() and break_ksm() because they do
not handle VM_FAULT_RETRY even with existing code, so I'm literally keeping
them as-is.

Signed-off-by: Peter Xu 
---

v3:
- Rebase to akpm/mm-unstable
- Copy arch maintainers
---
 arch/alpha/mm/fault.c |  4 
 arch/arc/mm/fault.c   |  4 
 arch/arm/mm/fault.c   |  4 
 arch/arm64/mm/fault.c |  4 
 arch/csky/mm/fault.c  |  4 
 arch/hexagon/mm/vm_fault.c|  4 
 arch/ia64/mm/fault.c  |  4 
 arch/m68k/mm/fault.c  |  4 
 arch/microblaze/mm/fault.c|  4 
 arch/mips/mm/fault.c  |  4 
 arch/nios2/mm/fault.c |  4 
 arch/openrisc/mm/fault.c  |  4 
 arch/parisc/mm/fault.c|  4 
 arch/powerpc/mm/copro_fault.c |  5 +
 arch/powerpc/mm/fault.c   |  5 +
 arch/riscv/mm/fault.c |  4 
 arch/s390/mm/fault.c  | 12 +++-
 arch/sh/mm/fault.c|  4 
 arch/sparc/mm/fault_32.c  |  4 
 arch/sparc/mm/fault_64.c  |  5 +
 arch/um/kernel/trap.c |  4 
 arch/x86/mm/fault.c   |  4 
 arch/xtensa/mm/fault.c|  4 
 include/linux/mm_types.h  |  2 ++
 mm/gup.c  | 34 +-
 mm/memory.c   |  2 +-
 26 files changed, 138 insertions(+), 3 deletions(-)

diff --git a/arch/alpha/mm/fault.c b/arch/alpha/mm/fault.c
index ec20c1004abf..ef427a6bdd1a 100644
--- a/arch/alpha/mm/fault.c
+++ b/arch/alpha/mm/fault.c
@@ -155,6 +155,10 @@ do_page_fault(unsigned long address, unsigned long mmcsr,
if (fault_signal_pending(fault, regs))
return;
 
+   /* The fault is fully completed (including releasing mmap lock) */
+   if (fault & VM_FAULT_COMPLETED)
+   return;
+
if (unlikely(fault & VM_FAULT_ERROR)) {
if (fault & VM_FAULT_OOM)
goto out_of_memory;
diff --git a/arch/arc/mm/fault.c b/arch/arc/mm/fault.c
index dad27e4d69ff..5ca59a482632 100644
--- a/arch/arc/mm/fault.c
+++ b/arch/arc/mm/fault.c
@@ -146,6 +146,10 @@ void do_page_fault(unsigned long address, struct pt_regs 
*regs)
return;
}
 
+   /* The fault is fully completed (including releasing mmap lock) */
+   if (fault & VM_FAULT_COMPLETED)
+   return;
+
/*
 * Fault retry nuances, mmap_lock already relinquished by core mm
 */
diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c
index a062e07516dd..46cccd6bf705 100644
--- a/arch/arm/mm/fault.c
+++ b/arch/arm/mm/fault.c
@@ -322,6 +322,10 @@ do_page_fault(unsigned long addr, unsigned int fsr, struct 
pt_regs *regs)
return 0;
}
 
+   /* The fault is fully completed (including releasing mmap lock) */
+   

[PATCH v5 03/25] mm/arc: Use general page fault accounting

2020-07-07 Thread Peter Xu
Use the general page fault accounting by passing regs into handle_mm_fault().
It naturally solves the issue of multiple page fault accounting when a page
fault retry happens.

Fix PERF_COUNT_SW_PAGE_FAULTS perf event manually for page fault retries, by
moving it before taking mmap_sem.
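
For context, a condensed sketch of the shared accounting that runs inside
handle_mm_fault() once regs is passed down (my paraphrase, not the literal
helper added by this series):

static inline void mm_account_fault(struct pt_regs *regs, unsigned long address,
				    unsigned int flags, vm_fault_t ret)
{
	bool major;

	/* Skip errors and retries so a retried fault is only counted once. */
	if (ret & (VM_FAULT_ERROR | VM_FAULT_RETRY))
		return;

	/* Major if the final attempt was major, or if the fault was retried. */
	major = (ret & VM_FAULT_MAJOR) || (flags & FAULT_FLAG_TRIED);

	if (major)
		current->maj_flt++;
	else
		current->min_flt++;

	/* regs == NULL means a GUP-originated fault: skip the perf events. */
	if (!regs)
		return;

	if (major)
		perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1, regs, address);
	else
		perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1, regs, address);
}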

CC: Vineet Gupta 
CC: linux-snps-arc@lists.infradead.org
Signed-off-by: Peter Xu 
---
 arch/arc/mm/fault.c | 18 +++---
 1 file changed, 3 insertions(+), 15 deletions(-)

diff --git a/arch/arc/mm/fault.c b/arch/arc/mm/fault.c
index 587dea524e6b..f5657cb68e4f 100644
--- a/arch/arc/mm/fault.c
+++ b/arch/arc/mm/fault.c
@@ -105,6 +105,7 @@ void do_page_fault(unsigned long address, struct pt_regs 
*regs)
if (write)
flags |= FAULT_FLAG_WRITE;
 
+   perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
 retry:
mmap_read_lock(mm);
 
@@ -130,7 +131,7 @@ void do_page_fault(unsigned long address, struct pt_regs 
*regs)
goto bad_area;
}
 
-   fault = handle_mm_fault(vma, address, flags, NULL);
+   fault = handle_mm_fault(vma, address, flags, regs);
 
/* Quick path to respond to signals */
if (fault_signal_pending(fault, regs)) {
@@ -155,22 +156,9 @@ void do_page_fault(unsigned long address, struct pt_regs 
*regs)
 * Major/minor page fault accounting
 * (in case of retry we only land here once)
 */
-   perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
-
-   if (likely(!(fault & VM_FAULT_ERROR))) {
-   if (fault & VM_FAULT_MAJOR) {
-   tsk->maj_flt++;
-   perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1,
- regs, address);
-   } else {
-   tsk->min_flt++;
-   perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1,
- regs, address);
-   }
-
+   if (likely(!(fault & VM_FAULT_ERROR)))
/* Normal return path: fault Handled Gracefully */
return;
-   }
 
if (!user_mode(regs))
goto no_context;
-- 
2.26.2




[PATCH v4 03/26] mm/arc: Use general page fault accounting

2020-06-30 Thread Peter Xu
Use the general page fault accounting by passing regs into handle_mm_fault().
It naturally solves the issue of multiple page fault accounting when a page
fault retry happens.

Fix PERF_COUNT_SW_PAGE_FAULTS perf event manually for page fault retries, by
moving it before taking mmap_sem.

CC: Vineet Gupta 
CC: linux-snps-arc@lists.infradead.org
Signed-off-by: Peter Xu 
---
 arch/arc/mm/fault.c | 18 +++---
 1 file changed, 3 insertions(+), 15 deletions(-)

diff --git a/arch/arc/mm/fault.c b/arch/arc/mm/fault.c
index 1b178dc147fd..5601dec319b5 100644
--- a/arch/arc/mm/fault.c
+++ b/arch/arc/mm/fault.c
@@ -106,6 +106,7 @@ void do_page_fault(unsigned long address, struct pt_regs 
*regs)
if (write)
flags |= FAULT_FLAG_WRITE;
 
+   perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
 retry:
mmap_read_lock(mm);
 
@@ -131,7 +132,7 @@ void do_page_fault(unsigned long address, struct pt_regs 
*regs)
goto bad_area;
}
 
-   fault = handle_mm_fault(vma, address, flags, NULL);
+   fault = handle_mm_fault(vma, address, flags, regs);
 
/* Quick path to respond to signals */
if (fault_signal_pending(fault, regs)) {
@@ -156,22 +157,9 @@ void do_page_fault(unsigned long address, struct pt_regs 
*regs)
 * Major/minor page fault accounting
 * (in case of retry we only land here once)
 */
-   perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
-
-   if (likely(!(fault & VM_FAULT_ERROR))) {
-   if (fault & VM_FAULT_MAJOR) {
-   tsk->maj_flt++;
-   perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1,
- regs, address);
-   } else {
-   tsk->min_flt++;
-   perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1,
- regs, address);
-   }
-
+   if (likely(!(fault & VM_FAULT_ERROR)))
/* Normal return path: fault Handled Gracefully */
return;
-   }
 
if (!user_mode(regs))
goto no_context;
-- 
2.26.2




[PATCH 03/26] mm/arc: Use general page fault accounting

2020-06-26 Thread Peter Xu
Use the general page fault accounting by passing regs into handle_mm_fault().
It naturally solves the issue of multiple page fault accounting when a page
fault retry happens.

Fix PERF_COUNT_SW_PAGE_FAULTS perf event manually for page fault retries, by
moving it before taking mmap_sem.

CC: Vineet Gupta 
CC: linux-snps-arc@lists.infradead.org
Signed-off-by: Peter Xu 
---
 arch/arc/mm/fault.c | 18 +++---
 1 file changed, 3 insertions(+), 15 deletions(-)

diff --git a/arch/arc/mm/fault.c b/arch/arc/mm/fault.c
index 34380139e7a2..68e6849cf086 100644
--- a/arch/arc/mm/fault.c
+++ b/arch/arc/mm/fault.c
@@ -106,6 +106,7 @@ void do_page_fault(unsigned long address, struct pt_regs 
*regs)
if (write)
flags |= FAULT_FLAG_WRITE;
 
+   perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
 retry:
down_read(&mm->mmap_sem);
 
@@ -131,7 +132,7 @@ void do_page_fault(unsigned long address, struct pt_regs 
*regs)
goto bad_area;
}
 
-   fault = handle_mm_fault(vma, address, flags, NULL);
+   fault = handle_mm_fault(vma, address, flags, regs);
 
/* Quick path to respond to signals */
if (fault_signal_pending(fault, regs)) {
@@ -156,22 +157,9 @@ void do_page_fault(unsigned long address, struct pt_regs 
*regs)
 * Major/minor page fault accounting
 * (in case of retry we only land here once)
 */
-   perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
-
-   if (likely(!(fault & VM_FAULT_ERROR))) {
-   if (fault & VM_FAULT_MAJOR) {
-   tsk->maj_flt++;
-   perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1,
- regs, address);
-   } else {
-   tsk->min_flt++;
-   perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1,
- regs, address);
-   }
-
+   if (likely(!(fault & VM_FAULT_ERROR)))
/* Normal return path: fault Handled Gracefully */
return;
-   }
 
if (!user_mode(regs))
goto no_context;
-- 
2.26.2




[PATCH 03/26] mm/arc: Use general page fault accounting

2020-06-19 Thread Peter Xu
Use the general page fault accounting by passing regs into handle_mm_fault().
It naturally solves the issue of multiple page fault accounting when a page
fault retry happens.

Fix PERF_COUNT_SW_PAGE_FAULTS perf event manually for page fault retries, by
moving it before taking mmap_sem.

CC: Vineet Gupta 
CC: linux-snps-arc@lists.infradead.org
Signed-off-by: Peter Xu 
---
 arch/arc/mm/fault.c | 18 +++---
 1 file changed, 3 insertions(+), 15 deletions(-)

diff --git a/arch/arc/mm/fault.c b/arch/arc/mm/fault.c
index 34380139e7a2..68e6849cf086 100644
--- a/arch/arc/mm/fault.c
+++ b/arch/arc/mm/fault.c
@@ -106,6 +106,7 @@ void do_page_fault(unsigned long address, struct pt_regs 
*regs)
if (write)
flags |= FAULT_FLAG_WRITE;
 
+   perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
 retry:
down_read(&mm->mmap_sem);
 
@@ -131,7 +132,7 @@ void do_page_fault(unsigned long address, struct pt_regs 
*regs)
goto bad_area;
}
 
-   fault = handle_mm_fault(vma, address, flags, NULL);
+   fault = handle_mm_fault(vma, address, flags, regs);
 
/* Quick path to respond to signals */
if (fault_signal_pending(fault, regs)) {
@@ -156,22 +157,9 @@ void do_page_fault(unsigned long address, struct pt_regs 
*regs)
 * Major/minor page fault accounting
 * (in case of retry we only land here once)
 */
-   perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
-
-   if (likely(!(fault & VM_FAULT_ERROR))) {
-   if (fault & VM_FAULT_MAJOR) {
-   tsk->maj_flt++;
-   perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1,
- regs, address);
-   } else {
-   tsk->min_flt++;
-   perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1,
- regs, address);
-   }
-
+   if (likely(!(fault & VM_FAULT_ERROR)))
/* Normal return path: fault Handled Gracefully */
return;
-   }
 
if (!user_mode(regs))
goto no_context;
-- 
2.26.2




[PATCH 04/25] mm/arc: Use mm_fault_accounting()

2020-06-15 Thread Peter Xu
Use the new mm_fault_accounting() helper for page fault accounting.

The functional change here is that we now count the whole page fault as a
major fault as long as any of the retried attempts was a major fault.
Previously we only considered the last attempt.
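
The shape of the helper can be inferred from the call site below; a
hypothetical sketch (the real helper lives in the core-mm patch of this
series, which is not shown here):

void mm_fault_accounting(struct task_struct *tsk, struct pt_regs *regs,
			 unsigned long address, bool major)
{
	/* Hypothetical body inferred from the arc call site. */
	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
	if (major) {
		tsk->maj_flt++;
		perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1, regs, address);
	} else {
		tsk->min_flt++;
		perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1, regs, address);
	}
}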

CC: Vineet Gupta 
CC: linux-snps-arc@lists.infradead.org
Signed-off-by: Peter Xu 
---
 arch/arc/mm/fault.c | 15 +++
 1 file changed, 3 insertions(+), 12 deletions(-)

diff --git a/arch/arc/mm/fault.c b/arch/arc/mm/fault.c
index 92b339c7adba..bc89d4b9c59d 100644
--- a/arch/arc/mm/fault.c
+++ b/arch/arc/mm/fault.c
@@ -72,6 +72,7 @@ void do_page_fault(unsigned long address, struct pt_regs 
*regs)
int sig, si_code = SEGV_MAPERR;
unsigned int write = 0, exec = 0, mask;
vm_fault_t fault = VM_FAULT_SIGSEGV;/* handle_mm_fault() output */
+   vm_fault_t major = 0;
unsigned int flags; /* handle_mm_fault() input */
 
/*
@@ -132,6 +133,7 @@ void do_page_fault(unsigned long address, struct pt_regs 
*regs)
}
 
fault = handle_mm_fault(vma, address, flags);
+   major |= fault & VM_FAULT_MAJOR;
 
/* Quick path to respond to signals */
if (fault_signal_pending(fault, regs)) {
@@ -156,20 +158,9 @@ void do_page_fault(unsigned long address, struct pt_regs 
*regs)
 * Major/minor page fault accounting
 * (in case of retry we only land here once)
 */
-   perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
-
if (likely(!(fault & VM_FAULT_ERROR))) {
-   if (fault & VM_FAULT_MAJOR) {
-   tsk->maj_flt++;
-   perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1,
- regs, address);
-   } else {
-   tsk->min_flt++;
-   perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1,
- regs, address);
-   }
-
/* Normal return path: fault Handled Gracefully */
+   mm_fault_accounting(tsk, regs, address, major);
return;
}
 
-- 
2.26.2




Re: [PATCH RFC] mm: arc: fix potential double release of mmap_sem

2018-11-05 Thread Peter Xu
On Tue, Nov 06, 2018 at 12:48:31AM +, Vineet Gupta wrote:
> On 10/31/18 8:24 PM, Peter Xu wrote:
> > In do_page_fault() of ARC we have:
> >
> > ...
> > fault = handle_mm_fault(vma, address, flags);
> >
> > /* If Pagefault was interrupted by SIGKILL, exit page fault "early" 
> > */
> > if (unlikely(fatal_signal_pending(current))) {
> > if ((fault & VM_FAULT_ERROR) && !(fault & VM_FAULT_RETRY))
> > up_read(&mm->mmap_sem); < [1]
> > if (user_mode(regs))
> > return;
> > }
> > ...
> > if (likely(!(fault & VM_FAULT_ERROR))) {
> > ...
> > return;
> > }
> >
> > if (fault & VM_FAULT_OOM)
> > goto out_of_memory;<- [2]
> > else if (fault & VM_FAULT_SIGSEGV)
> > goto bad_area; <- [3]
> > else if (fault & VM_FAULT_SIGBUS)
> > goto do_sigbus;<- [4]
> >
> > Logically it's possible that we might try to release the mmap_sem twice
> > by having a scenario like:
> >
> > - task received SIGKILL,
> > - task handled kernel mode page fault,
> > - handle_mm_fault() returned with one of VM_FAULT_ERROR,
> >
> > Then we'll go into path [1] to release the mmap_sem, however we won't
> > return immediately since user_mode(regs) check will fail (a kernel page
> > fault).  Then we might go into either [2]-[4] and either of them will
> > try to release the mmap_sem again.
> >
> > To fix this, we only release the mmap_sem at [1] when we're sure we'll
> > quit immediately (after we checked with user_mode(regs)).
> 
> Hmm, do_page_fault() needs a serious makeover. There's a known problem in
> the area you touched (with test case) where we fail to relinquish the
> mmap_sem, for which Alexey had provided a fix. But I'm going to redo this
> part now and CC you folks for review. OK?

Fine with me.  Thanks,

-- 
Peter Xu



[PATCH RFC] mm: arc: fix potential double release of mmap_sem

2018-10-31 Thread Peter Xu
In do_page_fault() of ARC we have:

...
fault = handle_mm_fault(vma, address, flags);

/* If Pagefault was interrupted by SIGKILL, exit page fault "early" */
if (unlikely(fatal_signal_pending(current))) {
if ((fault & VM_FAULT_ERROR) && !(fault & VM_FAULT_RETRY))
up_read(&mm->mmap_sem); < [1]
if (user_mode(regs))
return;
}
...
if (likely(!(fault & VM_FAULT_ERROR))) {
...
return;
}

if (fault & VM_FAULT_OOM)
goto out_of_memory;<- [2]
else if (fault & VM_FAULT_SIGSEGV)
goto bad_area; <- [3]
else if (fault & VM_FAULT_SIGBUS)
goto do_sigbus;<- [4]

Logically it's possible that we might try to release the mmap_sem twice
by having a scenario like:

- task received SIGKILL,
- task handled kernel mode page fault,
- handle_mm_fault() returned with one of VM_FAULT_ERROR,

Then we'll go into path [1] to release the mmap_sem, however we won't
return immediately since user_mode(regs) check will fail (a kernel page
fault).  Then we might go into either [2]-[4] and either of them will
try to release the mmap_sem again.

To fix this, we only release the mmap_sem at [1] when we're sure we'll
quit immediately (after we checked with user_mode(regs)).

CC: Vineet Gupta 
CC: "Eric W. Biederman" 
CC: Peter Xu 
CC: Andrew Morton 
CC: Souptick Joarder 
CC: Andrea Arcangeli 
CC: linux-snps-arc@lists.infradead.org
CC: linux-ker...@vger.kernel.org
Signed-off-by: Peter Xu 
---

I noticed this only by reading the code.  I have neither verified the
issue nor tested the patch, since I don't even know how to (I'm totally
unfamiliar with the arc architecture).  However I'm posting this out first
to see whether there's any quick feedback, in case it's a valid issue that
we've been ignoring.
---
 arch/arc/mm/fault.c | 7 +++
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/arch/arc/mm/fault.c b/arch/arc/mm/fault.c
index c9da6102eb4f..2d28c3dad5c1 100644
--- a/arch/arc/mm/fault.c
+++ b/arch/arc/mm/fault.c
@@ -142,11 +142,10 @@ void do_page_fault(unsigned long address, struct pt_regs 
*regs)
fault = handle_mm_fault(vma, address, flags);
 
/* If Pagefault was interrupted by SIGKILL, exit page fault "early" */
-   if (unlikely(fatal_signal_pending(current))) {
-   if ((fault & VM_FAULT_ERROR) && !(fault & VM_FAULT_RETRY))
+   if (unlikely(fatal_signal_pending(current) && user_mode(regs))) {
+   if (!(fault & VM_FAULT_RETRY))
up_read(&mm->mmap_sem);
-   if (user_mode(regs))
-   return;
+   return;
}
 
perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
-- 
2.17.1

