Add a new software event to count succeeded speculative page faults.
Signed-off-by: Laurent Dufour
---
include/uapi/linux/perf_event.h | 1 +
1 file changed, 1 insertion(+)
diff --git a/include/uapi/linux/perf_event.h b/include/uapi/linux/perf_event.h
index e0739a1aa4b2..164383273147 100644
cesses]
[Try the VMA fetch during the speculative path in case of retry]
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
arch/x86/mm/fault.c | 38 +-
1 file changed, 37 insertions(+), 1 deletion(-)
diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index 800de815519c..d9f9236ccb9a 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
still valid as explained
above.
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
include/linux/rmap.h | 12 ++--
mm/memory.c | 8
mm/rmap.c | 5 ++---
3 files changed, 16 insertions(+), 9 deletions(-)
diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 988d176472df..
value as parameter.
Note: The speculative path is turned on for architectures providing support
for the special PTE flag, so only the first block of vm_normal_page is used
during the speculative path.
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
include/linux/mm.h | 7 +--
mm/memory.c | 18
4.12 kernel]
[Build depends on CONFIG_SPECULATIVE_PAGE_FAULT]
[Introduce vm_write_* inline functions depending on
CONFIG_SPECULATIVE_PAGE_FAULT]
[Fix lock dependency between mapping->i_mmap_rwsem and vma->vm_sequence by
using vm_raw_write* functions]
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
include/linux/mm.h
sequence counter which is
updated in unmap_page_range() before locking the pte, and then in
free_pgtables() so when locking the pte the change will be detected.
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
mm/memory.c | 4
1 file changed, 4 insertions(+)
diff --git a/mm/memory.c b/mm/memory.c
for book3e_hugetlb_preload()
called by update_mmu_cache()
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
arch/powerpc/Kconfig | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 73ce5dd07642..acf2696a6505 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
e-page-faults-tt965642.html#none
[2] https://patchwork.kernel.org/patch/687/
Laurent Dufour (20):
mm: Introduce CONFIG_SPECULATIVE_PAGE_FAULT
x86/mm: Define CONFIG_SPECULATIVE_PAGE_FAULT
powerpc/mm: Define CONFIG_SPECULATIVE_PAGE_FAULT
mm: Introduce pte_spinlock for FAULT_FLAG_SPECULATIVE
This configuration variable will be used to build the code needed to
handle speculative page faults.
By default it is turned off, and activated depending on architecture
support.
Suggested-by: Thomas Gleixner <t...@linutronix.de>
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
mm/Kconfig | 3 +++
1 file changed, 3 insertions(+)
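The hunk itself is not shown in this excerpt; given the description above (off by default, selected by architectures that support it), the mm/Kconfig addition would plausibly be a bare, non-user-visible symbol along these lines (a sketch, not the actual patch body):

```
# Selected by architectures where the speculative path is known to be safe
config SPECULATIVE_PAGE_FAULT
	bool
```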
ernel]
[move pte_map_lock()'s definition higher in the file]
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
include/linux/mm.h | 1 +
mm/memory.c | 56 ++
2 files changed, 41 insertions(+), 16 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 47c06fd20f6a..51d950cac772 100644
--- a/include/linux/mm.h
On 08/02/2018 21:53, Andrew Morton wrote:
> On Tue, 6 Feb 2018 17:49:46 +0100 Laurent Dufour
> <lduf...@linux.vnet.ibm.com> wrote:
>
>> This is a port on kernel 4.15 of the work done by Peter Zijlstra to
>> handle page fault without holding the mm semaphore [1].
>>
>> The idea is to try
On 08/02/2018 16:00, Matthew Wilcox wrote:
> On Thu, Feb 08, 2018 at 03:35:58PM +0100, Laurent Dufour wrote:
>> I reviewed that part of code, and I think I could now change the way
>> pte_unmap_safe() is checking for the pte's value. Since we now have all the
>> needed de
On 06/02/2018 21:28, Matthew Wilcox wrote:
> On Tue, Feb 06, 2018 at 05:49:50PM +0100, Laurent Dufour wrote:
>> From: Peter Zijlstra <pet...@infradead.org>
>>
>> One of the side effects of speculating on faults (without holding
>> mmap_sem) is that we can race with free_pgtables() and th
to prevent the write from being split
and intermediate values from being pushed to other CPUs.
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
fs/proc/task_mmu.c | 5 -
fs/userfaultfd.c | 17 +
mm/khugepaged.c | 3 +++
mm/madvise.c | 6 +-
mm/mempolicy.c | 51
d by calling vm_raw_write_end() by the callee once the ptes have
been moved.
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
include/linux/mm.h | 16
mm/mmap.c | 47 ---
mm/mremap.c | 13 +
3 files changed, 61 insertions(+),
pointer.
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
include/linux/swap.h | 10 --
mm/memory.c | 8
mm/swap.c | 6 +++---
3 files changed, 15 insertions(+), 9 deletions(-)
diff --git a/include/linux/swap.h b/include/linux/swap.h
index a1a3f4ed94ce..99377b66ea93 100644
ess speculative page fault.
[1] https://patchwork.kernel.org/patch/5108281/
Cc: Peter Zijlstra (Intel) <pet...@infradead.org>
Cc: Matthew Wilcox <wi...@infradead.org>
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
include/linux/mm_types.h | 4 ++
kernel/fork.c | 3 ++
mm/init-mm.c | 3 ++
mm/internal.h | 6 +++
D against concurrent collapsing operation]
[Try to spin-lock the pte during the speculative path to avoid deadlock with
other CPUs invalidating the TLB and requiring this CPU to catch the
inter-processor interrupt]
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
include/linux/hugetlb_inline.h | 2 +-
include/linux/mm.h | 8 +
include/linux/pagemap.h
This patch adds a set of new trace events to collect the speculative page fault
event failures.
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
include/trace/events/pagefault.h | 87
mm/memory.c | 62 ++--
2 files changed, 136
Add support for the new speculative faults event.
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
tools/include/uapi/linux/perf_event.h | 1 +
tools/perf/util/evsel.c | 1 +
tools/perf/util/parse-events.c | 4
tools/perf/util/parse-events.l | 1 +
tools/perf/util/python.c
for multithreaded processes as there is no
risk of contention on the mmap_sem otherwise.
Built only if CONFIG_SPECULATIVE_PAGE_FAULT is defined (currently for
BOOK3S_64 && SMP).
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
arch/powerpc/mm/fault.c | 31 ++-
1 file changed, 30 in
matched the passed address and release
the reference on the VMA so that it can be freed if needed.
In the case the VMA is freed, can_reuse_spf_vma() will have returned false
as the VMA is no longer in the RB tree.
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
include/linux/mm.h | 5 +-
mm/memory.c | 136
() service which can be called by
passing the value of the vm_flags field.
There are no functional changes expected for the other callers of
maybe_mkwrite().
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
include/linux/mm.h | 9 +++--
mm/memory.c | 6 +++---
2 files changed, 10 insertions(+), 5
migrate_misplaced_page() is only called during page fault handling, so
it is better to pass a pointer to the struct vm_fault instead of the vma.
This way, during the speculative page fault path, the saved vma->vm_flags
can be used.
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
include/linux/migrat
hanges.
This patch also sets the fields in hugetlb_no_page() and
__collapse_huge_page_swapin() even if they are not needed by the callee.
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
include/linux/mm.h | 6 ++
mm/hugetlb.c | 2 ++
mm/khugepaged.c | 2 ++
mm/memory.c
fail in case we find the VMA changed
since we started the fault.
Signed-off-by: Peter Zijlstra (Intel)
[Port to 4.12 kernel]
[Remove the comment about the fault_env structure which has been
implemented as the vm_fault structure in the kernel]
Signed-off-by: Laurent Dufour
---
include/linux/mm.h
When handling a page fault without holding the mmap_sem, the fetch of the
pte lock pointer and the locking will have to be done while ensuring
that the VMA is not modified behind our back.
So move the fetch and locking operations into a dedicated function.
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
mm/memory.c
for book3e_hugetlb_preload()
called by update_mmu_cache()
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
arch/powerpc/Kconfig | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 9d3329811cc1..57c19ee79c00 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
page. In that case one thread will take
much time looping in __read_swap_cache_async(). But in the regular page
fault path, this is even worse since the thread would wait for the semaphore to
be released before starting anything.
[Remove only if !CONFIG_SPECULATIVE_PAGE_FAULT]
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
mm/memory.c | 4
1 file changed, 4 insertions(+)
diff --git a/mm/memory.c b/mm/memory.c
index 5ec6433d6a5c..32b9eb77d95c 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2288,6 +2288,7 @@ int a
[2] https://patchwork.kernel.org/patch/687/
Laurent Dufour (19):
mm: Introduce CONFIG_SPECULATIVE_PAGE_FAULT
x86/mm: Define CONFIG_SPECULATIVE_PAGE_FAULT
powerpc/mm: Define CONFIG_SPECULATIVE_PAGE_FAULT
mm: Introduce pte_spinlock for FAULT_FLAG_SPECULATIVE
mm: Protect VMA modifications
Introduce CONFIG_SPECULATIVE_PAGE_FAULT which turns on the Speculative Page
Fault handler when building for 64-bit with SMP.
Cc: Thomas Gleixner <t...@linutronix.de>
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
arch/x86/Kconfig | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index
On 05/02/2018 02:26, Davidlohr Bueso wrote:
> From: Davidlohr Bueso
>
> Hi,
>
> This patchset is a new version of both the range locking machinery as well
> as a full mmap_sem conversion that makes use of it -- as the worst case
> scenario as all mmap_sem calls are converted to a full range
On 02/02/2018 15:40, Laurent Dufour wrote:
>
>
> On 01/02/2018 00:04, daniel.m.jor...@oracle.com wrote:
>> A common case in release_pages is for the 'pages' list to be in roughly
>> the same order as they are in their LRU. With LRU batch locking, when a
>> sentinel p
On 01/02/2018 00:04, daniel.m.jor...@oracle.com wrote:
> Now that release_pages is scaling better with concurrent removals from
> the LRU, the performance results (included below) showed increased
> contention on lru_lock in the add-to-LRU path.
>
> To alleviate some of this contention, do more
On 01/02/2018 00:04, daniel.m.jor...@oracle.com wrote:
> A common case in release_pages is for the 'pages' list to be in roughly
> the same order as they are in their LRU. With LRU batch locking, when a
> sentinel page is removed, an adjacent non-sentinel page must be promoted
> to a sentinel
>>>> On Tue, 2017-04-25 at 16:27 +0200, Laurent Dufour wrote:
>>>>> The commit b023f46813cd ("memory-hotplug: skip HWPoisoned page when
>>>>> offlining pages") skip the HWPoisoned pages when offlining pages, but
>>>>> this should be skipped when
Hi Andrew,
On 18/01/2018 00:03, Andrew Morton wrote:
> On Fri, 28 Apr 2017 08:30:48 +0200 Michal Hocko wrote:
>
>> On Wed 26-04-17 03:13:04, Naoya Horiguchi wrote:
>>> On Wed, Apr 26, 2017 at 12:10:15PM +1000, Balbir Singh wrote:
>>>> On Tue, 2017-04-25 at
Hi Kirill,
Thanks for reviewing this series.
On 16/01/2018 16:11, Kirill A. Shutemov wrote:
> On Fri, Jan 12, 2018 at 06:25:44PM +0100, Laurent Dufour wrote:
>> --
>> Benchmarks results
>>
>> Base kernel is 4.15-rc6-mmotm-2018-01-04-16-19
>> SPF
On 17/01/2018 04:04, Andi Kleen wrote:
> Laurent Dufour <lduf...@linux.vnet.ibm.com> writes:
>
>> From: Peter Zijlstra <pet...@infradead.org>
>>
>> One of the side effects of speculating on faults (without holding
>> mmap_sem) is that we can race with free_pgtables() and therefore we
>> cannot assu
On 13/01/2018 05:23, Matthew Wilcox wrote:
> On Fri, Jan 12, 2018 at 11:02:51AM -0800, Matthew Wilcox wrote:
>> On Fri, Jan 12, 2018 at 06:26:06PM +0100, Laurent Dufour wrote:
>>> @@ -1354,7 +1354,10 @@ extern int handle_mm_fault(struct vm_area_struct
>>>
On 12/01/2018 19:18, Matthew Wilcox wrote:
> On Fri, Jan 12, 2018 at 06:26:02PM +0100, Laurent Dufour wrote:
>> There is a deadlock when a CPU is doing a speculative page fault and
>> another one is calling do_munmap().
>>
>> The deadlock occurred because the specu
On 15/01/2018 18:49, Thomas Gleixner wrote:
> On Mon, 15 Jan 2018, Laurent Dufour wrote:
>> On 12/01/2018 19:57, Thomas Gleixner wrote:
>>> On Fri, 12 Jan 2018, Laurent Dufour wrote:
>>>
>>>> Introduce CONFIG_SPF which turns on the Speculative Page Faul
Hi Matthew,
Thanks for reviewing this series.
On 12/01/2018 19:48, Matthew Wilcox wrote:
> On Fri, Jan 12, 2018 at 06:26:00PM +0100, Laurent Dufour wrote:
>> -static void __vma_rb_erase(struct vm_area_struct *vma, struct rb_root *root)
>> +static void __vma_rb_erase(struct vm_a
Hi Thomas,
Thanks for reviewing this series.
On 12/01/2018 19:57, Thomas Gleixner wrote:
> On Fri, 12 Jan 2018, Laurent Dufour wrote:
>
>> Introduce CONFIG_SPF which turns on the Speculative Page Fault handler when
>> building for 64bits with SMP.
>>
>> Signe
()
called by update_mmu_cache()
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
arch/powerpc/Kconfig | 4
1 file changed, 4 insertions(+)
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index d99250d9185d..31be1d69b350 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig