()
called by update_mmu_cache()
Signed-off-by: Laurent Dufour
---
arch/powerpc/Kconfig | 4
1 file changed, 4 insertions(+)
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index d99250d9185d..31be1d69b350 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -1209,6
Peter Zijlstra (Intel) <pet...@infradead.org>
[Remove only if !CONFIG_SPF]
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
mm/memory.c | 4
1 file changed, 4 insertions(+)
diff --git a/mm/memory.c b/mm/memory.c
index 8a80986fff48..259f621345b2 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2274,6 +2274,7 @@ int apply_to_page_range(struct mm_struct *mm
4.12 kernel]
[Build depends on CONFIG_SPF]
[Introduce vm_write_* inline function depending on CONFIG_SPF]
[Fix lock dependency between mapping->i_mmap_rwsem and vma->vm_sequence by
using vm_raw_write* functions]
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
include/linux/mm.h | 41 +
include/linux/mm_types.
hanges.
This patch also sets the fields in hugetlb_no_page() and
__collapse_huge_page_swapin even if they are not needed by the callee.
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
include/linux/mm.h | 6 ++
mm/hugetlb.c | 2 ++
mm/khugepaged.c| 2 ++
mm/memory.c
migrate_misplaced_page() is only called during the page fault handling so
it's better to pass the pointer to the struct vm_fault instead of the vma.
This way during the speculative page fault path the saved vma->vm_flags
could be used.
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.i
---
include/linux/migrat
d by calling vm_raw_write_end() by the callee once the ptes have
been moved.
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
include/linux/mm.h | 16
mm/mmap.c | 47 ---
mm/mremap.c| 13 +
3 files changed, 61 insertions(+),
ess speculative page fault.
[1] https://patchwork.kernel.org/patch/5108281/
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
Cc: Peter Zijlstra (Intel) <pet...@infradead.org>
---
include/linux/mm_types.h | 4 ++
kernel/fork.c| 3 ++
mm/init-mm.c | 3 ++
mm/internal.h| 6 +++
mm/mma
() service which can be called by
passing the value of the vm_flags field.
There are no functional changes expected for the other callers of
maybe_mkwrite().
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
include/linux/mm.h | 9 +++--
mm/memory.c| 6 +++---
2 files changed, 10 insertions(+), 5
still valid as explained
above.
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
include/linux/rmap.h | 12 ++--
mm/memory.c | 8
mm/rmap.c| 5 ++---
3 files changed, 16 insertions(+), 9 deletions(-)
diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 988d176472df..
value as parameter.
Note: The speculative path is turned on for architectures providing support
for the special PTE flag, so only the first block of vm_normal_page() is used
during the speculative path.
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
include/linux/mm.h | 7 +
ollapsing operation]
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
include/linux/hugetlb_inline.h | 2 +-
include/linux/mm.h | 8 +
include/linux/pagemap.h| 4 +-
mm/internal.h | 16 +-
mm/memory.c| 321 +
em cgroup oom check]
[Use READ_ONCE to access p*d entries]
[Replace deprecated ACCESS_ONCE() by READ_ONCE() in vma_has_changed()]
[Don't fetch pte again in handle_pte_fault() when running the speculative
path]
[Check PMD against concurrent collapsing operation]
Signed-off-by: Laurent Dufour
This patch adds a set of new trace events to collect the speculative page fault
event failures.
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
include/trace/events/pagefault.h | 87
mm/memory.c | 62 ++--
2 files changed, 136
async_page_fault+0x28/0x30
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
mm/memory.c | 19 ---
1 file changed, 16 insertions(+), 3 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
index 96720cc7ca74..83640079d407 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2472,7 +2472,8 @@ static bool
matched the passed address and release
the reference on the VMA so that it can be freed if needed.
In the case the VMA is freed, can_reuse_spf_vma() will have returned false
as the VMA is no longer in the RB tree.
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
include/linux/mm.h | 5 +-
mm/memory.c| 136
Add a new software event to count succeeded speculative page faults.
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
include/uapi/linux/perf_event.h | 1 +
1 file changed, 1 insertion(+)
diff --git a/include/uapi/linux/perf_event.h b/include/uapi/linux/perf_event.h
for multithreaded process as there is no
risk of contention on the mmap_sem otherwise.
Build on if CONFIG_SPF is defined (currently for BOOK3S_64 && SMP).
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
arch/powerpc/mm/fault.c | 31 ++-
1 fi
Add support for the new speculative faults event.
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
tools/include/uapi/linux/perf_event.h | 1 +
tools/perf/util/evsel.c | 1 +
tools/perf/util/parse-events.c| 4
tools/perf/util/parse-events.l
tools/perf/util/python.c
he VMA fetch during the speculative path in case of retry]
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
arch/x86/mm/fault.c | 38 +-
1 file changed, 37 insertions(+), 1 deletion(-)
diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index 06fe3d51d385..8db69a116521 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
sequence counter which is
updated in unmap_page_range() before locking the pte, and then in
free_pgtables() so when locking the pte the change will be detected.
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
mm/memory.c | 4
1 file changed, 4 insertions(+)
diff --git a/mm/memory.c b/mm/memor
pointer.
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
include/linux/swap.h | 10 --
mm/memory.c | 8
mm/swap.c| 6 +++---
3 files changed, 15 insertions(+), 9 deletions(-)
diff --git a/include/linux/swap.h b/include/linux/swap.h
index a1a3f4ed94ce..99377b66ea93 100644
/
[4] http://ck.kolivas.org/apps/kernbench/kernbench-0.50/
[5] https://lwn.net/Articles/725607/
[6] https://github.com/antonblanchard/will-it-scale.git
Laurent Dufour (19):
x86/mm: Define CONFIG_SPF
powerpc/mm: Define CONFIG_SPF
mm: Introduce pte_spinlock for FAULT_FLAG_SPECULATIVE
mm: P
]
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
include/linux/mm.h | 1 +
mm/memory.c| 56 ++
2 files changed, 41 insertions(+), 16 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 63f7ba11
to prevent write to be split
and intermediate values to be pushed to other CPUs.
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
fs/proc/task_mmu.c | 5 -
fs/userfaultfd.c | 17 +
mm/khugepaged.c| 3 +++
mm/madvise.c | 6 +-
mm/mempolicy.c | 51
When handling page fault without holding the mmap_sem the fetch of the
pte lock pointer and the locking will have to be done while ensuring
that the VMA is not touched behind our back.
So move the fetch and locking operations in a dedicated function.
Signed-off-by: Laurent Dufour <l
fail in case we find the VMA changed
since we started the fault.
Signed-off-by: Peter Zijlstra (Intel)
[Port to 4.12 kernel]
[Remove the comment about the fault_env structure which has been
implemented as the vm_fault structure in the kernel]
Signed-off-by: Laurent Dufour
---
include/linux/mm.h
---
mm/memory.c
Introduce CONFIG_SPF which turns on the Speculative Page Fault handler when
building for 64bits with SMP.
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
arch/x86/Kconfig | 4
1 file changed, 4 insertions(+)
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index a317d5594b6a..d74353b85aaf 100644
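The actual Kconfig text is not visible in these snippets, but a minimal entry matching the description above (enabled when building 64-bit with SMP) might look like the following sketch; the option name and placement are assumed from the changelog.

```kconfig
config SPF
	def_bool y
	depends on X86_64 && SMP
	help
	  Enable the Speculative Page Fault handler, which attempts to
	  handle page faults without taking the mmap_sem.
```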
Hi Andrea,
On 02/11/2017 21:08, Andrea Arcangeli wrote:
> On Thu, Nov 02, 2017 at 06:25:11PM +0100, Laurent Dufour wrote:
>> I think there is some memory barrier missing when the VMA is modified so
>> currently the modifications done in the VMA structure may not be written
>
On 02/11/2017 16:16, Laurent Dufour wrote:
> Hi Andrea,
>
> Thanks for reviewing this series, and sorry for the late answer, I took few
> days off...
>
> On 26/10/2017 12:18, Andrea Arcangeli wrote:
>> Hello Laurent,
>>
>> Message-ID: <7ca80231-fe02-
Hi Andrea,
Thanks for reviewing this series, and sorry for the late answer, I took a few
days off...
On 26/10/2017 12:18, Andrea Arcangeli wrote:
> Hello Laurent,
>
> Message-ID: <7ca80231-fe02-a3a7-84bc-ce81690ea...@intel.com> shows
> significant slowdown even for brk/malloc ops both single and
ill-it-scale.per_process_ops
> read2  1204838  -1.7%  1183993  will-it-scale.per_process_ops
> futex1 5017718  -1.6%  4938677  will-it-scale.per_process_ops
> 1408250  -1.0%  1394022  w
Hi Vlastimil,
Sorry for the late answer, I took a few days off.
On 31/10/2017 14:57, Vlastimil Babka wrote:
> +CC Andrea, Thorsten, Linus
>
> On 10/31/2017 02:20 PM, Vlastimil Babka wrote:
>> On 10/31/2017 01:42 PM, Dmitry Vyukov wrote:
My vm_area_struct is 192 bytes, could be your layout is
net/
[4] http://ck.kolivas.org/apps/kernbench/kernbench-0.50/
[5] https://lwn.net/Articles/725607/
Laurent Dufour (16):
x86/mm: Define CONFIG_SPF
powerpc/mm: Define CONFIG_SPF
mm: Introduce pte_spinlock for FAULT_FLAG_SPECULATIVE
mm: Protect VMA modifications using VMA sequence count
mm: C
]
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
include/linux/mm.h | 1 +
mm/memory.c| 55 ++
2 files changed, 40 insertions(+), 16 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 3cc40742
Peter Zijlstra (Intel) <pet...@infradead.org>
[Remove only if !CONFIG_SPF]
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
mm/memory.c | 7 ++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/mm/memory.c b/mm/memory.c
index 6632c9b357c9..b7a9baf3df8a 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2287,6 +2287,7 @@ int apply_to_page_range(struct
4.12 kernel]
[Build depends on CONFIG_SPF]
[Introduce vm_write_* inline function depending on CONFIG_SPF]
[Fix lock dependency between mapping->i_mmap_rwsem and vma->vm_sequence by
using vm_raw_write* functions]
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
Fix locked by raw function
undo lockdep fix as raw services are now used
---
include/linux/mm.h
Rename vma_is_dead() to vma_has_changed() and move its adding to the next
patch]
[Postpone call to mpol_put() as the policy can be used during the
speculative path]
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
mm/spf: Fix policy free
---
include/linux/mm_types.h | 2 +
kernel/fork.c| 1 +
mm/init-mm.c | 1 +
mm/internal.h| 5 +
using WRITE_ONCE to prevent write to be split
and intermediate values to be pushed to other CPUs.
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
fs/proc/task_mmu.c | 5 -
fs/userfaultfd.c | 17 +
mm/khugepaged.c| 3 +++
mm/madvise.c | 6 +-
mm/mempolicy.c | 51
hanges.
This patch also sets the fields in hugetlb_no_page() and
__collapse_huge_page_swapin even if they are not needed by the callee.
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
include/linux/mm.h | 6 ++
mm/hugetlb.c | 2 ++
mm/khugepaged.c| 2 ++
mm/memory.c
() service which can be called by
passing the value of the vm_flags field.
There are no functional changes expected for the other callers of
maybe_mkwrite().
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
include/linux/mm.h | 9 +++--
mm/memory.c| 6 +++---
2 files changed, 10 insertions(+), 5
pointer.
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
include/linux/swap.h | 11 +--
mm/memory.c | 8
mm/swap.c| 12 ++--
3 files changed, 19 insertions(+), 12 deletions(-)
diff --git a/include/linux/swap.h b/include/linux/swap.h
index cd2f66fdfc2d..a50d64f06bcf
migrate_misplaced_page() is only called during the page fault handling so
it's better to pass the pointer to the struct vm_fault instead of the vma.
This way during the speculative page fault path the saved vma->vm_flags
could be used.
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.i
---
include/linux/migrat
t VMA growing up or down]
[Move check on vm_sequence just before calling handle_pte_fault()]
[Don't build SPF services if !CONFIG_SPF]
[Add mem cgroup oom check]
[Use READ_ONCE to access p*d entries]
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
include/linux/hugetlb_inline.h | 2 +-
include/linux/mm.h | 5 +
include/linux/pagemap.h|
value as parameter.
Note: The speculative path is turned on for architectures providing support
for the special PTE flag, so only the first block of vm_normal_page() is used
during the speculative path.
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
include/linux/mm.h | 7 +
mm/memory.c| 18
async_page_fault+0x28/0x30
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
mm/memory.c | 19 ---
1 file changed, 16 insertions(+), 3 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
index eff40abfc1a6..d1278fc15a91 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2476,7 +2476,8 @@ static bool
This patch adds a set of new trace events to collect the speculative page fault
event failures.
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
include/trace/events/pagefault.h | 87
mm/memory.c | 59 ++-
2 files changed, 135
_ALLOW_RETRY is now done in
handle_speculative_fault()]
[Retry with usual fault path in the case VM_ERROR is returned by
handle_speculative_fault(). This allows signal to be delivered]
[Don't build SPF call if !CONFIG_SPF]
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
arch/x86/mm/fault.c | 21 +
1 file changed, 21 insertions(+)
diff --git
(currently for BOOK3S_64 && SMP).
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
arch/powerpc/mm/fault.c | 17 +
1 file changed, 17 insertions(+)
diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index 4797d08581ce..c018c2554cc8 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/po
Add support for the new speculative faults event.
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
tools/include/uapi/linux/perf_event.h | 1 +
tools/perf/util/evsel.c | 1 +
tools/perf/util/parse-events.c| 4
tools/perf/util/parse-events.l| 1 +
tools/perf/util/python.c
Add a new software event to count succeeded speculative page faults.
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
include/uapi/linux/perf_event.h | 1 +
1 file changed, 1 insertion(+)
diff --git a/include/uapi/linux/perf_event.h b/include/uapi/linux/perf_event.h
index 140ae638cfd6..101e509ee39b 100644
still valid as explained
above.
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
include/linux/rmap.h | 12 ++--
mm/memory.c | 8
mm/rmap.c| 5 ++---
3 files changed, 16 insertions(+), 9 deletions(-)
diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 733d3d8181e2..
sequence counter which is
updated in unmap_page_range() before locking the pte, and then in
free_pgtables() so when locking the pte the change will be detected.
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
mm/memory.c | 4
1 file changed, 4 insertions(+)
diff --git a/mm/memory.c b/mm/memor
()
called by update_mmu_cache()
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
arch/powerpc/Kconfig | 4
1 file changed, 4 insertions(+)
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 809c468edab1..661ba5bcf60e 100644
--- a/arch/powerpc/Kconfig
+++