On Thu, Apr 15, 2021 at 01:13:13AM -0600, Yu Zhao wrote:
> Page table scanning doesn't replace the existing rmap walk. It is
> complementary and only happens when it is likely that most of the
> pages on a system under pressure have been referenced, i.e., out of
> *inactive* pages, by definition of
On Thu, Apr 08, 2021 at 08:13:43AM +0100, Matthew Wilcox wrote:
> On Thu, Apr 08, 2021 at 09:00:26AM +0200, Peter Zijlstra wrote:
> > On Wed, Apr 07, 2021 at 10:27:12PM +0100, Matthew Wilcox wrote:
> > > Doing I/O without any lock held already works; it just uses the file
> > > refcount. It would
On Wed, Apr 07, 2021 at 03:50:06AM +0100, Matthew Wilcox wrote:
> On Tue, Apr 06, 2021 at 06:44:59PM -0700, Michel Lespinasse wrote:
> > Performance tuning: as single threaded userspace does not use
> > speculative page faults, it does not require rcu safe vma freeing.
> > T
On Wed, Apr 07, 2021 at 04:40:34PM +0200, Peter Zijlstra wrote:
> On Tue, Apr 06, 2021 at 06:44:49PM -0700, Michel Lespinasse wrote:
> > In the speculative case, call the vm_ops->fault() method from within
> > an rcu read locked section, and verify the mmap sequence lock at th
On Wed, Apr 07, 2021 at 04:47:34PM +0200, Peter Zijlstra wrote:
> On Tue, Apr 06, 2021 at 06:44:34PM -0700, Michel Lespinasse wrote:
> > The counter's write side is hooked into the existing mmap locking API:
> > mmap_write_lock() increments the counter to the n
On Wed, Apr 07, 2021 at 04:35:28PM +0100, Matthew Wilcox wrote:
> On Wed, Apr 07, 2021 at 04:48:44PM +0200, Peter Zijlstra wrote:
> > On Tue, Apr 06, 2021 at 06:44:36PM -0700, Michel Lespinasse wrote:
> > > --- a/arch/x86/mm/fault.c
> > > +++ b/arch/x86/mm/fault.c
>
On Wed, Apr 07, 2021 at 01:14:53PM -0700, Michel Lespinasse wrote:
> On Wed, Apr 07, 2021 at 04:48:44PM +0200, Peter Zijlstra wrote:
> > On Tue, Apr 06, 2021 at 06:44:36PM -0700, Michel Lespinasse wrote:
> > > --- a/arch/x86/mm/fault.c
> > > +++ b/arch/x86/mm/fault
On Wed, Apr 07, 2021 at 04:48:44PM +0200, Peter Zijlstra wrote:
> On Tue, Apr 06, 2021 at 06:44:36PM -0700, Michel Lespinasse wrote:
> > --- a/arch/x86/mm/fault.c
> > +++ b/arch/x86/mm/fault.c
> > @@ -1219,6 +1219,8 @@ void do_user_addr_fault(struct pt_regs *regs,
> >
On Wed, Apr 07, 2021 at 03:35:27AM +0100, Matthew Wilcox wrote:
> On Tue, Apr 06, 2021 at 06:44:49PM -0700, Michel Lespinasse wrote:
> > In the speculative case, call the vm_ops->fault() method from within
> > an rcu read locked section, and verify the mmap sequence lock at th
Hi Bibo,
You introduced this code in commit 7df676974359f back in May.
Could you check that the change is correct ?
Thanks,
On Tue, Apr 06, 2021 at 06:44:28PM -0700, Michel Lespinasse wrote:
> update_mmu_tlb() can be used instead of update_mmu_cache() when the
> page fault handler detects
they will all be adjusted together before use, so they just need to be
consistent with each other, and using the original fault address and
pte allows us to reuse pte_map_lock() without any changes to it.
Signed-off-by: Michel Lespinasse
---
mm/filemap.c | 27 ---
1 file
when finalizing the fault.
Signed-off-by: Michel Lespinasse
---
arch/x86/mm/fault.c | 36 +++
include/linux/vm_event_item.h | 4
mm/vmstat.c | 4
3 files changed, 44 insertions(+)
diff --git a/arch/x86/mm/fault.c b/arch/x86/mm
In the speculative case, we want to avoid direct pmd checks (which
would require some extra synchronization to be safe), and rely on
pte_map_lock which will both lock the page table and verify that the
pmd has not changed from its initial value.
Signed-off-by: Michel Lespinasse
---
mm/memory.c
with any mmap writer.
This is very similar to a seqlock, but both the writer and speculative
readers are allowed to block. In the fail case, the speculative reader
does not spin on the sequence counter; instead it should fall back to
a different mechanism such as grabbing the mmap lock read side
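The counter scheme described above can be sketched in user-space C. All names here (mm_sketch, mm_write_begin, ...) are illustrative stand-ins, not the actual API from the series:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Illustrative stand-in for the mmap sequence counter; an odd value
 * means a writer currently holds the mmap lock. */
struct mm_sketch {
	atomic_ulong seq;
};

/* Write side, hooked into mmap_write_lock()/mmap_write_unlock(). */
static void mm_write_begin(struct mm_sketch *mm)
{
	atomic_fetch_add_explicit(&mm->seq, 1, memory_order_acq_rel);
}

static void mm_write_end(struct mm_sketch *mm)
{
	atomic_fetch_add_explicit(&mm->seq, 1, memory_order_release);
}

/* Speculative read side: sample the counter; unlike a plain seqlock
 * reader, fail immediately instead of spinning if a writer is active. */
static bool mm_read_trylock(struct mm_sketch *mm, unsigned long *seq)
{
	*seq = atomic_load_explicit(&mm->seq, memory_order_acquire);
	return (*seq & 1) == 0;
}

/* Validate that no writer ran since mm_read_trylock() sampled seq. */
static bool mm_read_validate(struct mm_sketch *mm, unsigned long seq)
{
	return atomic_load_explicit(&mm->seq, memory_order_acquire) == seq;
}
```

On validation failure the caller would discard its speculative work and retry under the mmap read lock, as described above.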
This prepares for speculative page faults looking up and copying vmas
under protection of an rcu read lock, instead of the usual mmap read lock.
Signed-off-by: Michel Lespinasse
---
include/linux/mm_types.h | 16 +++-
kernel/fork.c| 11 ++-
2 files changed, 21
where the entire page table walk (higher levels down to ptes)
needs special care in the speculative case.
Signed-off-by: Michel Lespinasse
---
mm/memory.c | 98 ++---
1 file changed, 49 insertions(+), 49 deletions(-)
diff --git a/mm/memory.c b/mm
tables.
Signed-off-by: Michel Lespinasse
---
include/linux/mm.h | 4 +++
mm/memory.c| 77 --
2 files changed, 79 insertions(+), 2 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index d5988e78e6ab..dee8a4833779 100644
--- a
tests that do not have any frequent
concurrent page faults ! This is because rcu safe vma freeing prevents
recently released vmas from being immediately reused in a new thread.
Signed-off-by: Michel Lespinasse
---
kernel/fork.c | 8 +---
1 file changed, 5 insertions(+), 3 deletions(-)
diff
We just need to make sure f2fs_filemap_fault() doesn't block in the
speculative case as it is called with an rcu read lock held.
Signed-off-by: Michel Lespinasse
---
fs/f2fs/file.c | 8 +++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
Set ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT so that the speculative fault
handling code can be compiled on this architecture.
Signed-off-by: Michel Lespinasse
---
arch/arm64/Kconfig | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index e4e1b6550115
Performance tuning: single threaded userspace does not benefit from
speculative page faults, so we turn them off to avoid any related
(small) extra overheads.
Signed-off-by: Michel Lespinasse
---
arch/x86/mm/fault.c | 5 +
1 file changed, 5 insertions(+)
diff --git a/arch/x86/mm/fault.c b
Change mmap_lock_is_contended to return a bool value, rather than an
int which the callers are then supposed to interpret as a bool. This
is to ensure consistency with other mmap lock API functions (such as
the trylock functions).
Signed-off-by: Michel Lespinasse
---
include/linux/mmap_lock.h
the anon case, but maybe not as clear for the file cases.
- Is the Android use case compelling enough to merge the entire patchset ?
- Can we use this as a foundation for other mmap scalability work ?
I hear several proposals involving the idea of RCU based fault handling,
and hope this propo
We just need to make sure ext4_filemap_fault() doesn't block in the
speculative case as it is called with an rcu read lock held.
Signed-off-by: Michel Lespinasse
---
fs/ext4/file.c | 1 +
fs/ext4/inode.c | 7 ++-
2 files changed, 7 insertions(+), 1 deletion(-)
diff --git a/fs/ext4/f
Add a new CONFIG_SPECULATIVE_PAGE_FAULT_STATS config option,
and dump extra statistics about executed spf cases and abort reasons
when the option is set.
Signed-off-by: Michel Lespinasse
---
arch/x86/mm/fault.c | 19 +++---
include/linux/mmap_lock.h | 19 +-
include
Add a speculative field to the vm_operations_struct, which indicates if
the associated file type supports speculative faults.
Initially this is set for files that implement fault() with filemap_fault().
Signed-off-by: Michel Lespinasse
---
fs/btrfs/file.c| 1 +
fs/cifs/file.c | 1 +
fs
anymore, as it is now running within an rcu read lock.
Signed-off-by: Michel Lespinasse
---
fs/xfs/xfs_file.c | 3 +++
mm/memory.c | 22 --
2 files changed, 23 insertions(+), 2 deletions(-)
diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
index a007ca0711d9..b360
() API is kept as a wrapper around
do_handle_mm_fault() so that we do not have to immediately update
every handle_mm_fault() call site.
Signed-off-by: Michel Lespinasse
---
include/linux/mm.h | 12 +---
mm/memory.c| 10 +++---
2 files changed, 16 insertions(+), 6 deletions
trying that unimplemented case.
Signed-off-by: Michel Lespinasse
---
arch/x86/mm/fault.c | 3 ++-
include/linux/mm.h | 14 ++
mm/memory.c | 17 -
3 files changed, 28 insertions(+), 6 deletions(-)
diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
, and that readahead is not
necessary at this time. In all other cases, the fault is aborted to be
handled non-speculatively.
Signed-off-by: Michel Lespinasse
---
mm/filemap.c | 45 -
1 file changed, 44 insertions(+), 1 deletion(-)
diff --git a/mm
in order to satisfy pte_map_lock()'s preconditions.
Signed-off-by: Michel Lespinasse
---
mm/memory.c | 31 ++-
1 file changed, 22 insertions(+), 9 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
index eea72bd78d06..547d9d0ee962 100644
--- a/mm/memory.c
speculative fault handling.
The speculative handling case also does not preallocate page tables,
as it is always called with a pre-existing page table.
Signed-off-by: Michel Lespinasse
---
mm/memory.c | 63 +++--
1 file changed, 42 insertions(+), 21 deleti
: Michel Lespinasse
---
arch/arm64/mm/fault.c | 52 +++
1 file changed, 52 insertions(+)
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index f37d4e3830b7..3757bfbb457a 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -25,6 +25,7
Define the new FAULT_FLAG_SPECULATIVE flag, which indicates when we are
attempting speculative fault handling (without holding the mmap lock).
Signed-off-by: Michel Lespinasse
---
include/linux/mm.h | 5 -
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/include/linux/mm.h b
Change handle_pte_fault() to allow speculative fault execution to proceed
through do_numa_page().
do_swap_page() does not implement speculative execution yet, so it
needs to abort with VM_FAULT_RETRY in that case.
Signed-off-by: Michel Lespinasse
---
mm/memory.c | 15 ++-
1 file
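The abort pattern this patch describes can be sketched as follows; FAULT_FLAG_SPECULATIVE and VM_FAULT_RETRY are names from the series, but the flag values and the vm_fault stand-in here are simplified for illustration:

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified stand-ins; the values are illustrative, not the kernel's. */
#define FAULT_FLAG_SPECULATIVE	0x200
#define VM_FAULT_RETRY		0x0400

struct vm_fault_sketch {
	unsigned int flags;
};

/* Sketch of the do_swap_page() gate: swap-in is not implemented
 * speculatively yet, so bail out and let the arch fault handler
 * retry the whole fault under the mmap read lock. */
static unsigned int do_swap_page_sketch(struct vm_fault_sketch *vmf)
{
	if (vmf->flags & FAULT_FLAG_SPECULATIVE)
		return VM_FAULT_RETRY;
	/* ... the normal swap-in path would run here ... */
	return 0;
}
```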
Defer freeing of vma->vm_file when freeing vmas.
This is to allow speculative page faults in the mapped file case.
Signed-off-by: Michel Lespinasse
---
fs/exec.c | 1 +
kernel/fork.c | 17 +++--
mm/mmap.c | 11 +++
mm/nommu.c| 6 ++
4 files changed,
is set (the original pte was not
pte_none), catch speculative faults and return VM_FAULT_RETRY as
those cases are not implemented yet. Also assert that do_fault()
is not reached in the speculative case.
Signed-off-by: Michel Lespinasse
---
arch/x86/mm/fault.c | 2 +-
mm/memory.c |
identical between the two cases.
This change reduces the code duplication between the two cases.
Signed-off-by: Michel Lespinasse
---
mm/memory.c | 85 +++--
1 file changed, 37 insertions(+), 48 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
change do_numa_page() to use pte_spinlock() when locking the page table,
so that the mmap sequence counter will be validated in the speculative case.
Signed-off-by: Michel Lespinasse
---
mm/memory.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/memory.c b/mm
wp_pfn_shared() or wp_page_shared() (both unreachable as we only
handle anon vmas so far) or handle_userfault() (needs an explicit
abort to handle non-speculatively).
Signed-off-by: Michel Lespinasse
---
mm/memory.c | 12 ++--
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/mm/memory.c
find less readable.
Signed-off-by: Michel Lespinasse
---
include/linux/mmap_lock.h | 32
1 file changed, 16 insertions(+), 16 deletions(-)
diff --git a/include/linux/mmap_lock.h b/include/linux/mmap_lock.h
index 4e27f755766b..8ff276a7560e 100644
--- a/include/l
page faulting code, and some code has to
be added there to try speculative fault handling first.
Signed-off-by: Michel Lespinasse
---
mm/Kconfig | 22 ++
1 file changed, 22 insertions(+)
diff --git a/mm/Kconfig b/mm/Kconfig
index 24c045b24b95..322bda319dea 100644
--- a/mm/Kconfig
when finally committing
the faulted page to the mm address space.
Signed-off-by: Michel Lespinasse
---
mm/memory.c | 74 ++---
1 file changed, 42 insertions(+), 32 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
index fc555fae0844..ab3160719bf3
update_mmu_tlb() can be used instead of update_mmu_cache() when the
page fault handler detects that it lost the race to another page fault.
Signed-off-by: Michel Lespinasse
---
mm/memory.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/memory.c b/mm/memory.c
index
that point the page table lock serializes any further
races with concurrent mmap lock writers.
If the mmap sequence count check fails, both functions will return false
with the pte being left unmapped and unlocked.
Signed-off-by: Michel Lespinasse
---
include/linux/mm.h | 34 +
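The contract stated above -- return true with the pte mapped and locked, or false with it left unmapped and unlocked -- can be sketched like this (pure stand-in types; the real pte_map_lock() operates on page tables and the mmap sequence count):

```c
#include <assert.h>
#include <stdbool.h>

struct mm_sketch {
	unsigned long seq;	/* bumped by concurrent mmap writers */
};

struct pte_sketch {
	bool mapped;
	bool locked;
};

/* Stand-in for pte_map_lock(): map and lock the pte, then recheck
 * the sequence count sampled at the start of the speculative fault.
 * On mismatch, undo both steps so the caller sees a clean failure. */
static bool pte_map_lock_sketch(struct mm_sketch *mm, unsigned long start_seq,
				struct pte_sketch *pte)
{
	pte->mapped = true;
	pte->locked = true;
	if (mm->seq != start_seq) {
		pte->locked = false;
		pte->mapped = false;
		return false;	/* caller falls back to the mmap lock */
	}
	return true;	/* from here the pte lock serializes writers */
}
```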
Set ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT so that the speculative fault
handling code can be compiled on this architecture.
Signed-off-by: Michel Lespinasse
---
arch/x86/Kconfig | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 2792879d398e
Change do_anonymous_page() to handle the speculative case.
This involves aborting speculative faults if they have to allocate a new
anon_vma, and using pte_map_lock() instead of pte_offset_map_lock()
to complete the page fault.
Signed-off-by: Michel Lespinasse
---
mm/memory.c | 17
Change do_swap_page() to allow speculative fault execution to proceed.
Signed-off-by: Michel Lespinasse
---
mm/memory.c | 5 -
1 file changed, 5 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
index ab3160719bf3..6eddd7b4e89c 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3340,11
have come to depend on it (the old "common
law feature" issue).
Just a concern I have, with 0 evidence behind it, so I hope it turns
out not to be an actual issue.
Acked-by: Michel Lespinasse
On Thu, Apr 1, 2021 at 12:51 PM Liam Howlett wrote:
>
> find_vma() will continue to search up
> patch has effectively no overhead unless tracepoints are enabled at
> runtime. If tracepoints are enabled, there is a performance impact, but
> how much depends on exactly what e.g. the BPF program does.
>
> Signed-off-by: Axel Rasmussen
Reviewed-by: Michel Lespinasse
Looks good to me, thanks!
On Wed, Oct 7, 2020 at 11:44 AM Axel Rasmussen wrote:
> The goal of these tracepoints is to be able to debug lock contention
> issues. This lock is acquired on most (all?) mmap / munmap / page fault
> operations, so a multi-threaded process which does a lot of these can
> experience significant co
> this by using
> e.g. u8 (assuming sizeof(bool) is 1, and bool is unsigned; if either of
> these properties don't match, you get EINVAL [2]).
>
> Supporting "bool" explicitly makes hooking this up easier and more
> portable for userspace.
Acked-by: Michel Lespinasse
Lo
On Fri, Oct 2, 2020 at 9:33 AM Jann Horn wrote:
> On Fri, Oct 2, 2020 at 11:18 AM Michel Lespinasse wrote:
> > On Thu, Oct 1, 2020 at 6:25 PM Jann Horn wrote:
> > > Until now, the mmap lock of the nascent mm was ordered inside the mmap
> > > lock
> > > of th
On Thu, Oct 1, 2020 at 6:25 PM Jann Horn wrote:
> Until now, the mmap lock of the nascent mm was ordered inside the mmap lock
> of the old mm (in dup_mmap() and in UML's activate_mm()).
> A following patch will change the exec path to very broadly lock the
> nascent mm, but fine-grained locking sh
On Wed, Sep 30, 2020 at 1:15 PM Jann Horn wrote:
> On Wed, Sep 30, 2020 at 2:50 PM Jann Horn wrote:
> > On Wed, Sep 30, 2020 at 2:30 PM Jason Gunthorpe wrote:
> > > On Tue, Sep 29, 2020 at 06:20:00PM -0700, Jann Horn wrote:
> > > > In preparation for adding a mmap_assert_locked() check in
> > >
ensure that they hold the mmap lock when calling into GUP (unless the mm is
> > not yet globally visible), add an assertion to make sure it stays that way
> > going forward.
Thanks for doing this, there is a lot of value in ensuring that a
function's callers follows the prerequisites.
Acked-by: Michel Lespinasse
ck before doing
> anything with `vma`, but that's because we actually don't do anything with
> it apart from the NULL check.)
>
> Signed-off-by: Jann Horn
Thanks for these cleanups :)
Acked-by: Michel Lespinasse
> only for testing, and it's only reachable by root through
> debugfs, so this doesn't really have any impact; however, if we want to add
> lockdep asserts into the GUP path, we need to have clean locking here.
>
> Signed-off-by: Jann Horn
Acked-by: Michel Lespinasse
Tha
On Sat, Aug 22, 2020 at 9:04 AM Michel Lespinasse wrote:
> - B's implementation could, when lockdep is enabled, always release
> lock A before acquiring lock B. This is not ideal though, since this
> would hinder testing of the not-blocked code path in the acquire
> sequence.
A
On Sat, Aug 22, 2020 at 9:39 AM wrote:
> On Sat, Aug 22, 2020 at 09:04:09AM -0700, Michel Lespinasse wrote:
> > Hi,
> >
> > I am wondering about how to describe the following situation to lockdep:
> >
> > - lock A would be something that's already implement
Hi,
I am wondering about how to describe the following situation to lockdep:
- lock A would be something that's already implemented (a mutex or
possibly a spinlock).
- lock B is a range lock, which I would be writing the code for
(including lockdep hooks). I do not expect lockdep to know about ra
On Wed, Aug 12, 2020 at 7:13 PM Chinwen Chang
wrote:
> smaps_rollup will try to grab mmap_lock and go through the whole vma
> list until it finishes the iterating. When encountering large processes,
> the mmap_lock will be held for a longer time, which may block other
> write requests like mmap an
On Wed, Aug 12, 2020 at 7:14 PM Chinwen Chang
wrote:
>
> Add new API to query if someone wants to acquire mmap_lock
> for write attempts.
>
> Using this instead of rwsem_is_contended makes it more tolerant
> of future changes to the lock type.
>
> Signed-off-by: Chinwen
On Thu, Aug 13, 2020 at 9:11 AM Chinwen Chang
wrote:
> On Thu, 2020-08-13 at 02:53 -0700, Michel Lespinasse wrote:
> > On Wed, Aug 12, 2020 at 7:14 PM Chinwen Chang
> > wrote:
> > > Recently, we have observed some janky issues caused by unpleasantly long
> > >
On Wed, Aug 12, 2020 at 7:14 PM Chinwen Chang
wrote:
> Recently, we have observed some janky issues caused by unpleasantly long
> contention on mmap_lock which is held by smaps_rollup when probing large
> processes. To address the problem, we let smaps_rollup detect if anyone
> wants to acquire mm
pipermail/linux-riscv/2020-June/010335.html
>
> Fixes: 395a21ff859c(riscv: add ARCH_HAS_SET_DIRECT_MAP support)
> Signed-off-by: Atish Patra
Thanks for the fix.
Reviewed-by: Michel Lespinasse
> locking checks exposed the issue that OpenRISC was not taking
> this mmap lock when during page walks for DMA operations. This patch
> locks and unlocks the mmap lock for page walking.
>
> Fixes: 42fc541404f2 ("mmap locking API: add mmap_assert_locked() and
> mmap_assert_write_lo
On Tue, Jun 16, 2020 at 11:07 PM Stafford Horne wrote:
> On Wed, Jun 17, 2020 at 02:35:39PM +0900, Stafford Horne wrote:
> > On Tue, Jun 16, 2020 at 01:47:24PM -0700, Michel Lespinasse wrote:
> > > This makes me wonder actually - maybe there is a latent bug that got
> >
(!rwsem_is_locked(&walk.mm->mmap_lock)) added to
walk_page_range() / walk_page_range_novma() / walk_page_vma() ...
On Tue, Jun 16, 2020 at 12:41 PM Atish Patra wrote:
>
> On Tue, Jun 16, 2020 at 12:19 PM Stafford Horne wrote:
> >
> > On Tue, Jun 16, 2020 at 03:44:49AM -0700, Michel
ffe00107b76b
> >> > [ 10.393096] status: 0120 badaddr:
> >> > cause: 0003
> >> > [ 10.397755] ---[ end trace 861659596ac28841 ]---
> >> >
nts")
> Signed-off-by: Randy Dunlap
> Cc: Mauro Carvalho Chehab
> Cc: Michel Lespinasse
> Cc: Andrew Morton
Acked-by: Michel Lespinasse
Thanks for the fixes !
--
Michel "Walken" Lespinasse
A program is never fully debugged until the last user dies.
On Thu, Jun 4, 2020 at 1:16 AM youling 257 wrote:
> 2020-06-04 13:57 GMT+08:00, Michel Lespinasse :
> > However I would like more information about your report. Did you apply
> > the series yourself ? If so, what base tree did you apply it onto ? If
> > not, what tree did
On Wed, Jun 3, 2020 at 9:35 PM youling 257 wrote:
> I have build error about kernel/sys.c,
>
> kernel/sys.c: In function ‘prctl_set_vma’:
> kernel/sys.c:2392:18: error:
> ‘struct mm_struct’ has no member named ‘mmap_sem’; did you mean
> ‘mmap_base’?
> 2392 | down_write(&mm->mmap_sem);
>
On Thu, May 21, 2020 at 12:42 AM Vlastimil Babka wrote:
> On 5/20/20 7:29 AM, Michel Lespinasse wrote:
> > Convert comments that reference mmap_sem to reference mmap_lock instead.
> >
> > Signed-off-by: Michel Lespinasse
>
> Reviewed-by: Vlastimil Babka
>
Looks good, thanks !
On Wed, May 20, 2020 at 8:22 PM Andrew Morton wrote:
> On Tue, 19 May 2020 22:29:08 -0700 Michel Lespinasse
> wrote:
> > Convert comments that reference mmap_sem to reference mmap_lock instead.
>
> This may not be complete..
>
> From: Andrew Morton
Looks good. I'm not sure if you need a review, but just in case:
On Wed, May 20, 2020 at 8:23 PM Andrew Morton wrote:
> On Tue, 19 May 2020 22:29:01 -0700 Michel Lespinasse
> wrote:
>
> > Convert the last few remaining mmap_sem rwsem calls to use the new
> > mmap lock
On Wed, May 20, 2020 at 12:32 AM John Hubbard wrote:
> On 2020-05-19 19:39, Michel Lespinasse wrote:
> >> That gives you additional options inside internal_get_user_pages_fast(),
> >> such
> >> as, approximately:
> >>
> >> if (!(gup_flags & F
Convert comments that reference old mmap_sem APIs to reference
corresponding new mmap locking APIs instead.
Signed-off-by: Michel Lespinasse
---
Documentation/vm/hmm.rst | 6 +++---
arch/alpha/mm/fault.c | 2 +-
arch/ia64/mm/fault.c | 2 +-
arch/m68k/mm/fault.c
Add new APIs to assert that mmap_sem is held.
Using this instead of rwsem_is_locked and lockdep_assert_held[_write]
makes the assertions more tolerant of future changes to the lock type.
Signed-off-by: Michel Lespinasse
---
arch/x86/events/core.c| 2 +-
fs/userfaultfd.c | 6
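The idea is that callers assert intent rather than poke at the lock type directly. A user-space stand-in (the kernel versions use lockdep when enabled and fall back to rwsem_is_locked()):

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in lock state; the kernel tracks this via the rwsem/lockdep. */
struct mm_sketch {
	int readers;
	bool write_locked;
};

/* Analogue of mmap_assert_locked(): any holder, read or write. */
static bool mmap_lock_held_sketch(const struct mm_sketch *mm)
{
	return mm->readers > 0 || mm->write_locked;
}

/* Analogue of mmap_assert_write_locked(): writer only. */
static bool mmap_write_locked_sketch(const struct mm_sketch *mm)
{
	return mm->write_locked;
}
```

A real assertion would BUG/WARN when these return false; keeping the check behind a helper means a later change of lock type only touches these helpers, not every caller.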
This use is converted manually ahead of the next patch in the series,
as it requires including a new header which the automated conversion
would miss.
Signed-off-by: Michel Lespinasse
Reviewed-by: Daniel Jordan
Reviewed-by: Davidlohr Bueso
Reviewed-by: Laurent Dufour
Reviewed-by: Vlastimil
Rename the mmap_sem field to mmap_lock. Any new uses of this lock
should now go through the new mmap locking api. The mmap_lock is
still implemented as a rwsem, though this could change in the future.
Signed-off-by: Michel Lespinasse
Reviewed-by: Vlastimil Babka
---
arch/ia64/mm/fault.c
Define a new initializer for the mmap locking api.
Initially this just evaluates to __RWSEM_INITIALIZER as the API
is defined as wrappers around rwsem.
Signed-off-by: Michel Lespinasse
Reviewed-by: Laurent Dufour
Reviewed-by: Vlastimil Babka
---
arch/x86/kernel/tboot.c| 2 +-
drivers
least-ugly way of addressing this in the short term.
Signed-off-by: Michel Lespinasse
Reviewed-by: Daniel Jordan
Reviewed-by: Vlastimil Babka
---
include/linux/mmap_lock.h | 14 ++
kernel/bpf/stackmap.c | 17 +
2 files changed, 19 insertions(+), 12 deletions(-)
Convert the last few remaining mmap_sem rwsem calls to use the new
mmap locking API. These were missed by coccinelle for some reason
(I think coccinelle does not support some of the preprocessor
constructs in these files ?)
Signed-off-by: Michel Lespinasse
Reviewed-by: Daniel Jordan
Reviewed-by
ould be delayed for
a bit, so that we'd get a chance to convert any new code that locks
mmap_sem in the -rc1 release before applying that last patch.
Michel Lespinasse (12):
mmap locking API: initial implementation as rwsem wrappers
MMU notifier: use the new mmap locking API
DMA reser
Add API for nested write locks and convert the few call sites doing that.
Signed-off-by: Michel Lespinasse
Reviewed-by: Daniel Jordan
Reviewed-by: Laurent Dufour
Reviewed-by: Vlastimil Babka
---
arch/um/include/asm/mmu_context.h | 3 ++-
include/linux/mmap_lock.h | 5 +
kernel
This use is converted manually ahead of the next patch in the series,
as it requires including a new header which the automated conversion
would miss.
Signed-off-by: Michel Lespinasse
Reviewed-by: Daniel Jordan
Reviewed-by: Laurent Dufour
Reviewed-by: Vlastimil Babka
---
drivers/dma-buf/dma
Convert comments that reference mmap_sem to reference mmap_lock instead.
Signed-off-by: Michel Lespinasse
---
.../admin-guide/mm/numa_memory_policy.rst | 10 ++---
Documentation/admin-guide/mm/userfaultfd.rst | 2 +-
Documentation/filesystems/locking.rst | 2 +-
Documentation/vm
point for replacing the rwsem
implementation with a different one, such as range locks.
Signed-off-by: Michel Lespinasse
Reviewed-by: Daniel Jordan
Reviewed-by: Davidlohr Bueso
Reviewed-by: Laurent Dufour
Reviewed-by: Vlastimil Babka
---
include/linux/mm.h| 1 +
include/linux
On Tue, May 19, 2020 at 11:15 AM John Hubbard wrote:
> On 2020-05-19 08:32, Matthew Wilcox wrote:
> > On Tue, May 19, 2020 at 03:20:40PM +0200, Laurent Dufour wrote:
> >> Le 19/05/2020 à 15:10, Michel Lespinasse a écrit :
> >>> On Mon, May 18, 2020 at 03:45:22
On Mon, May 18, 2020 at 01:07:26PM +0200, Vlastimil Babka wrote:
> Any plan about all the code comments mentioning mmap_sem? :) Not urgent.
It's mostly a sed job, I'll add it in the next version as it seems
the patchset is getting ready for inclusion.
--
Michel "Walken" Lespinasse
A program is n
On Mon, May 18, 2020 at 03:45:22PM +0200, Laurent Dufour wrote:
> Le 24/04/2020 à 03:39, Michel Lespinasse a écrit :
> > Rename the mmap_sem field to mmap_lock. Any new uses of this lock
> > should now go through the new mmap locking api. The mmap_lock is
> > still implement
On Mon, May 18, 2020 at 01:01:33PM +0200, Vlastimil Babka wrote:
> On 4/24/20 3:38 AM, Michel Lespinasse wrote:
> > +static inline void mmap_assert_locked(struct mm_struct *mm)
> > +{
> > + VM_BUG_ON_MM(!lockdep_is_held_type(&mm->mmap_sem, -1), mm);
> > +
On Mon, May 18, 2020 at 12:45:06PM +0200, Vlastimil Babka wrote:
> On 4/22/20 2:14 AM, Michel Lespinasse wrote:
> > Define a new initializer for the mmap locking api.
> > Initially this just evaluates to __RWSEM_INITIALIZER as the API
> > is defined as wrappers around rwsem.
&
On Mon, May 18, 2020 at 12:32:03PM +0200, Vlastimil Babka wrote:
> On 4/22/20 2:14 AM, Michel Lespinasse wrote:
> > Add API for nested write locks and convert the few call sites doing that.
> >
> > Signed-off-by: Michel Lespinasse
> > Reviewed-by: Daniel Jordan
>
On Fri, May 15, 2020 at 9:52 PM Lai Jiangshan
wrote:
>
> On Sat, May 16, 2020 at 12:28 PM Michel Lespinasse wrote:
> >
> > On Fri, May 15, 2020 at 03:59:09PM +, Lai Jiangshan wrote:
> > > latch_tree_find() should be protected by caller via RCU or so.
> > &g
On Fri, May 15, 2020 at 03:59:09PM +, Lai Jiangshan wrote:
> latch_tree_find() should be protected by caller via RCU or so.
> When it find a node in an attempt, the node must be a valid one
> in RCU's point's of view even the tree is (being) updated with a
> new node with the same key which is
On Thu, Apr 30, 2020 at 12:28 AM Juri Lelli wrote:
> > --- a/include/linux/rbtree.h
> > +++ b/include/linux/rbtree.h
> > @@ -141,12 +141,18 @@ static inline void rb_insert_color_cache
> > rb_insert_color(node, &root->rb_root);
> > }
> >
> > -static inline void rb_erase_cached(struct rb_node
code is similar (if you checked and rejected it because of bad code,
please just say so).
Reviewed-by: Michel Lespinasse
I also looked at the other commits in the series, making use of the
helpers, and they seem very reasonable but I did not give them as
thorough a look at this one
On Thu, Oct 03, 2019 at 01:18:55PM -0700, Davidlohr Bueso wrote:
> The vma and anon vma interval tree really wants [a, b) intervals,
> not fully closed. As such convert it to use the new
> interval_tree_gen.h. Because of vma_last_pgoff(), the conversion
> is quite straightforward.
I am not certain