When handling a page fault without holding the mmap_sem, the fetch of the
pte lock pointer and the locking will have to be done while ensuring
that the VMA is not modified behind our back.
So move the fetch and locking operations into a dedicated function.
Signed-off-by: Laurent Dufour
---
mm/memory.c
called by update_mmu_cache()
Signed-off-by: Laurent Dufour
---
arch/powerpc/Kconfig | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 809c468edab1..661ba5bcf60e 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -1207,6
Introduce CONFIG_SPF which turns on the Speculative Page Fault handler when
building for 64-bit with SMP.
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
arch/x86/Kconfig | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 063f1e0d51aa..a726618b7018 100644
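A minimal sketch of what such a Kconfig entry could look like — the symbol body below is an assumption inferred from the description above, not the actual hunk:

```kconfig
config SPF
	def_bool y
	depends on X86_64 && SMP
	help
	  Turn on the Speculative Page Fault handler, which tries to
	  service page faults without taking the mmap_sem.
```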
On 10/10/2017 23:23, Andrew Morton wrote:
> On Mon, 9 Oct 2017 12:07:51 +0200 Laurent Dufour
> wrote:
>
>> +/*
>> + * Advertise that we call the Speculative Page Fault handler.
>> + */
>> +#if defined(CONFIG_X86_64) && defined(CONFIG_SMP)
>> +#def
ne
[2]
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=da915ad5cf25b5f5d358dd3670c3378d8ae8c03e
[3] http://ebizzy.sourceforge.net/
[4] http://ck.kolivas.org/apps/kernbench/kernbench-0.50/
[5] https://lwn.net/Articles/725607/
Laurent Dufour (14):
mm: Introduce
[Remove only if !__HAVE_ARCH_CALL_SPF]
Signed-off-by: Laurent Dufour
---
mm/memory.c | 7 ++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/mm/memory.c b/mm/memory.c
index 6632c9b357c9..4e4fe233d066 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2287,6 +2287,7 @@ int apply_to_page_range
[Rename vma_is_dead() to vma_has_changed() and move its adding to the next
patch]
[Postpone call to mpol_put() as the policy can be used during the
speculative path]
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
mm/spf: Fix policy free
---
include/linux/mm_types.h | 2 +
kernel/fork.c | 1 +
mm/init-mm.c | 1 +
mm/internal.h | 5 +
using WRITE_ONCE to prevent the write from being split
and intermediate values from being pushed to other CPUs.
Signed-off-by: Laurent Dufour
---
fs/proc/task_mmu.c | 5 -
fs/userfaultfd.c | 17 +
mm/khugepaged.c | 3 +++
mm/madvise.c | 6 +-
mm/mempolicy.c | 51
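As a userspace model of the idea (the kernel's real READ_ONCE()/WRITE_ONCE() macros handle more sizes and cases; the struct and helper names here are illustrative):

```c
#include <assert.h>

/* A volatile access forces the compiler to emit a single, untorn store,
 * so concurrent readers never observe an intermediate value of the word
 * being updated. This models why the series switches vm_flags updates
 * to WRITE_ONCE(). */
#define WRITE_ONCE_UL(x, val) (*(volatile unsigned long *)&(x) = (unsigned long)(val))
#define READ_ONCE_UL(x)       (*(volatile const unsigned long *)&(x))

struct vma_model { unsigned long vm_flags; };

static void vma_set_flags(struct vma_model *vma, unsigned long flags)
{
	WRITE_ONCE_UL(vma->vm_flags, flags); /* one store, never split */
}

static unsigned long vma_get_flags(const struct vma_model *vma)
{
	return READ_ONCE_UL(vma->vm_flags);
}
```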
[Port to 4.12 kernel]
[Build depends on __HAVE_ARCH_CALL_SPF]
[Introduce vm_write_* inline function depending on __HAVE_ARCH_CALL_SPF]
[Fix lock dependency between mapping->i_mmap_rwsem and vma->vm_sequence by
using vm_raw_write* functions]
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
Fix locked by raw function
undo lockdep fix as raw services are now used
---
migrate_misplaced_page() is only called during page fault handling, so
it's better to pass a pointer to the struct vm_fault instead of the vma.
This way, during the speculative page fault path, the saved vma->vm_flags
can be used.
Signed-off-by: Laurent Dufour
---
include/linux/migrat
async_page_fault+0x28/0x30
Signed-off-by: Laurent Dufour
---
mm/memory.c | 19 ++++++++++++++++---
1 file changed, 16 insertions(+), 3 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
index 6761e3007500..8abfc0e12e25 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2476,7 +2476,8 @@ static bool
value as parameter.
Note: The speculative path is turned on for architectures providing support
for the special PTE flag, so only the first block of vm_normal_page() is used
during the speculative path.
Signed-off-by: Laurent Dufour
---
include/linux/mm.h | 7 +--
mm/memory.c | 18
() service which can be called by
passing the value of the vm_flags field.
There are no functional changes expected for the other callers of
maybe_mkwrite().
Signed-off-by: Laurent Dufour
---
include/linux/mm.h | 9 +++--
mm/memory.c | 6 +++---
2 files changed, 10 insertions(+), 5
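In a userspace sketch (types and bit values simplified; the real helper operates on pte_t and the arch pte_mkwrite()), the change amounts to taking the flags value instead of the VMA:

```c
#include <assert.h>

#define VM_WRITE 0x00000002UL

typedef struct { unsigned long val; } pte_t;
#define PTE_RW 0x1UL

static pte_t pte_mkwrite(pte_t pte) { pte.val |= PTE_RW; return pte; }

/* The old form read vma->vm_flags itself; the new form takes the flags
 * value, so the speculative path can pass the snapshot of vm_flags it
 * already validated against the VMA sequence count. */
static pte_t maybe_mkwrite(pte_t pte, unsigned long vma_flags)
{
	if (vma_flags & VM_WRITE)
		pte = pte_mkwrite(pte);
	return pte;
}
```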
pointer.
Signed-off-by: Laurent Dufour
---
include/linux/swap.h | 11 +--
mm/memory.c | 8
mm/swap.c | 12 ++--
3 files changed, 19 insertions(+), 12 deletions(-)
diff --git a/include/linux/swap.h b/include/linux/swap.h
index cd2f66fdfc2d..a50d64f06bcf
currently because:
- require CONFIG_PPC_STD_MMU because of the checks done in
set_access_flags_filter()
- require BOOK3S because we can't support book3e_hugetlb_preload()
called by update_mmu_cache()
Signed-off-by: Laurent Dufour
---
arch/powerpc/include/asm/book3s/64/pgtable.h | 5 +
arch
the processing done in mpol_misplaced()]
[Don't support VMA growing up or down]
[Move check on vm_sequence just before calling handle_pte_fault()]
[Don't build SPF services if !__HAVE_ARCH_CALL_SPF]
[Add mem cgroup oom check]
[Use READ_ONCE to access p*d entries]
Signed-off-by: Laurent Dufour
---
include/linux/hugetlb_inline.h | 2 +-
include/linux/mm.h | 5 +
include/linux/pagemap.h | 4 +-
mm/internal.h | 16 +++
mm/memory.c | 285 +
Add support for the new speculative faults event.
Signed-off-by: Laurent Dufour
---
tools/include/uapi/linux/perf_event.h | 1 +
tools/perf/util/evsel.c | 1 +
tools/perf/util/parse-events.c | 4
tools/perf/util/parse-events.l | 1 +
tools/perf/util/python.c
_ALLOW_RETRY is now done in
handle_speculative_fault()]
[Retry with the usual fault path in the case VM_ERROR is returned by
handle_speculative_fault(). This allows signals to be delivered]
[Don't build SPF call if !__HAVE_ARCH_CALL_SPF]
Signed-off-by: Laurent Dufour
---
arch/x86/include/asm/pgtable_types.h | 7 +++
arch/x86/mm/fault.c
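A simplified model of that control flow (stub handlers; in the kernel the real functions are handle_speculative_fault() and handle_mm_fault(), called from the arch fault handler):

```c
#include <assert.h>

#define VM_FAULT_RETRY 0x0400u
#define VM_FAULT_ERROR 0x0800u /* simplified: one bit stands in for the error mask */

/* Stub: the speculative handler bails out with VM_FAULT_RETRY when the
 * VMA changed under it (or the case is unsupported). */
static unsigned int spf_handler(int vma_changed)
{
	return vma_changed ? VM_FAULT_RETRY : 0;
}

/* Stub for the classic path, entered with mmap_sem held. */
static unsigned int classic_handler(void)
{
	return 0;
}

/* The wiring described above: try the speculative path first; retry
 * with the usual path both when it bails out and when it reports an
 * error, so signals are delivered from the usual path. */
static unsigned int do_page_fault_model(int vma_changed)
{
	unsigned int ret = spf_handler(vma_changed);

	if (ret & (VM_FAULT_RETRY | VM_FAULT_ERROR))
		ret = classic_handler();
	return ret;
}
```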
Add a new software event to count succeeded speculative page faults.
Signed-off-by: Laurent Dufour
---
include/uapi/linux/perf_event.h | 1 +
1 file changed, 1 insertion(+)
diff --git a/include/uapi/linux/perf_event.h b/include/uapi/linux/perf_event.h
index 140ae638cfd6..101e509ee39b 100644
This patch adds a set of new trace events to collect the speculative page
fault event failures.
Signed-off-by: Laurent Dufour
---
include/trace/events/pagefault.h | 87
mm/memory.c | 59 ++-
2 files changed, 135
changes.
This patch also sets the fields in hugetlb_no_page() and
__collapse_huge_page_swapin() even if they are not needed by the callee.
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
include/linux/mm.h | 6 ++
mm/hugetlb.c | 2 ++
mm/khugepaged.c | 2 ++
mm/memory.c
sequence counter which is
updated in unmap_page_range() before locking the pte, and then in
free_pgtables(), so when locking the pte the change will be detected.
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
mm/memory.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git
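The protocol can be modeled in userspace C11 like this (the writer-side names follow the series' vm_write_begin()/vm_write_end(); the reader-side names are illustrative — the kernel uses seqcount primitives rather than raw atomics):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

struct vma_seq {
	atomic_uint vm_sequence; /* odd while an update is in progress */
};

/* Writer side: unmap_page_range()/free_pgtables() bracket their
 * changes with begin/end, bumping the counter each time. */
static void vm_write_begin(struct vma_seq *vma)
{
	atomic_fetch_add_explicit(&vma->vm_sequence, 1, memory_order_release);
}
static void vm_write_end(struct vma_seq *vma)
{
	atomic_fetch_add_explicit(&vma->vm_sequence, 1, memory_order_release);
}

/* Reader side: sample the counter before the speculative walk ... */
static unsigned int vm_read_seqbegin(struct vma_seq *vma)
{
	return atomic_load_explicit(&vma->vm_sequence, memory_order_acquire);
}

/* ... and revalidate when taking the pte lock: an in-progress update
 * (odd value) or a completed one (value moved) is detected. */
static bool vm_read_seqretry(struct vma_seq *vma, unsigned int snap)
{
	return (snap & 1) || vm_read_seqbegin(vma) != snap;
}
```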
still valid as explained
above.
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
include/linux/rmap.h | 12 ++--
mm/memory.c | 8
mm/rmap.c | 5 ++---
3 files changed, 16 insertions(+), 9 deletions(-)
diff --git a/include/linux/rmap.h b/include/
]
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
include/linux/mm.h | 1 +
mm/memory.c| 55 ++
2 files changed, 40 insertions(+), 16 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 3cc40742
fail in case we find the VMA changed
since we started the fault.
Signed-off-by: Peter Zijlstra (Intel)
[Port to 4.12 kernel]
[Remove the comment about the fault_env structure which has been
implemented as the vm_fault structure in the kernel]
Signed-off-by: Laurent Dufour
---
include/linux/mm.h
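A sketch of that fail-instead-of-block contract (userspace model; field and function names are illustrative, not the exact kernel API — in the series the helpers are pte_spinlock()/pte_map_lock()):

```c
#include <assert.h>
#include <stdbool.h>

/* The lock/map helpers fail, rather than blocking, when the VMA changed
 * since the fault started; the caller then falls back to the classic,
 * mmap_sem-protected path. */

struct vmf_model {
	const unsigned int *vm_sequence; /* the VMA's current sequence count */
	unsigned int sequence;           /* snapshot taken at fault start */
};

static bool vma_has_changed(const struct vmf_model *vmf)
{
	unsigned int now = *vmf->vm_sequence;

	/* odd => update in progress; different => update completed */
	return (now & 1) || now != vmf->sequence;
}

static bool pte_map_lock_model(const struct vmf_model *vmf)
{
	if (vma_has_changed(vmf))
		return false; /* caller retries via the usual fault path */
	/* ... take the pte lock and map the pte here ... */
	return true;
}
```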
On 25/09/2017 18:27, Alexei Starovoitov wrote:
> On Mon, Sep 18, 2017 at 12:15 AM, Laurent Dufour
> wrote:
>> Despite the unprovable lockdep warning raised by Sergey, I didn't get any
>> feedback on this series.
>>
>> Is there a chance to get it moved upstream ?
>
On 03/10/2017 03:27, Michael Ellerman wrote:
> Laurent Dufour writes:
>
>> Hi Andrew,
>>
>> On 28/09/2017 22:38, Andrew Morton wrote:
>>> On Thu, 28 Sep 2017 14:29:02 +0200 Laurent Dufour
>>> wrote:
>>>
>>>>> Laurent's [0/n] p
Hi Andrew,
On 28/09/2017 22:38, Andrew Morton wrote:
On Thu, 28 Sep 2017 14:29:02 +0200 Laurent Dufour
wrote:
Laurent's [0/n] provides some nice-looking performance benefits for
workloads which are chosen to show performance benefits(!) but, alas,
no quantitative testing results
Hi Andrew,
On 26/09/2017 01:34, Andrew Morton wrote:
On Mon, 25 Sep 2017 09:27:43 -0700 Alexei Starovoitov
wrote:
On Mon, Sep 18, 2017 at 12:15 AM, Laurent Dufour
wrote:
Despite the unprovable lockdep warning raised by Sergey, I didn't get any
feedback on this series.
Is there a chance
Hi,
On 26/09/2017 01:34, Andrew Morton wrote:
On Mon, 25 Sep 2017 09:27:43 -0700 Alexei Starovoitov
wrote:
On Mon, Sep 18, 2017 at 12:15 AM, Laurent Dufour
wrote:
Despite the unprovable lockdep warning raised by Sergey, I didn't get any
feedback on this series.
Is there a chance to get
Hi Alexei,
On 25/09/2017 at 18:27, Alexei Starovoitov wrote:
On Mon, Sep 18, 2017 at 12:15 AM, Laurent Dufour
wrote:
Despite the unprovable lockdep warning raised by Sergey, I didn't get any
feedback on this series.
Is there a chance to get it moved upstream ?
what is the status
Commit-ID: a3c4fb7c9c2ebfd50b8c60f6c069932bb319bc37
Gitweb: http://git.kernel.org/tip/a3c4fb7c9c2ebfd50b8c60f6c069932bb319bc37
Author: Laurent Dufour
AuthorDate: Mon, 4 Sep 2017 10:32:15 +0200
Committer: Thomas Gleixner
CommitDate: Mon, 25 Sep 2017 09:36:15 +0200
x86/mm: Fix fault
Despite the unprovable lockdep warning raised by Sergey, I didn't get any
feedback on this series.
Is there a chance to get it moved upstream ?
Thanks,
Laurent.
On 08/09/2017 20:06, Laurent Dufour wrote:
> This is a port on kernel 4.13 of the work done by Peter Zijlstra to
> handle page
Hi,
On 14/09/2017 11:40, Sergey Senozhatsky wrote:
> On (09/14/17 11:15), Laurent Dufour wrote:
>> On 14/09/2017 11:11, Sergey Senozhatsky wrote:
>>> On (09/14/17 10:58), Laurent Dufour wrote:
>>> [..]
>>>> That's right, but here this is the sequence coun
On 14/09/2017 11:11, Sergey Senozhatsky wrote:
> On (09/14/17 10:58), Laurent Dufour wrote:
> [..]
>> That's right, but here this is the sequence counter mm->mm_seq, not the
>> vm_seq one.
>
> d'oh... you are right.
So I'm doubting about the probability of a dead
On 14/09/2017 10:13, Sergey Senozhatsky wrote:
> Hi,
>
> On (09/14/17 09:55), Laurent Dufour wrote:
> [..]
>>> so if there are two CPUs, one doing write_seqcount() and the other one
>>> doing read_seqcount() then what can happen is someth
Hi,
On 14/09/2017 02:31, Sergey Senozhatsky wrote:
> Hi,
>
> On (09/13/17 18:56), Laurent Dufour wrote:
>> Hi Sergey,
>>
>> On 13/09/2017 13:53, Sergey Senozhatsky wrote:
>>> Hi,
>>>
>>> On (09/08/17 20:06), Laurent Dufour wrote:
> [
Hi Sergey,
On 13/09/2017 13:53, Sergey Senozhatsky wrote:
> Hi,
>
> On (09/08/17 20:06), Laurent Dufour wrote:
> [..]
>> @@ -903,6 +910,7 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned
>> long start,
>> mm->map_count--;
>>
On 04/09/2017 10:32, Laurent Dufour wrote:
> The commit 7b2d0dbac489 ("x86/mm/pkeys: Pass VMA down in to fault signal
> generation code") pass down a vma pointer to the error path, but that is
> done once the mmap_sem is released when calling mm_fault_error() fro
On 11/09/2017 02:45, Sergey Senozhatsky wrote:
> On (09/08/17 11:24), Laurent Dufour wrote:
>> Hi Sergey,
>>
>> I can't see where such a chain could happen.
>>
>> I tried to recreate it on top of the latest mm tree, to latest stack output
>> but I can't
From: Peter Zijlstra
One of the side effects of speculating on faults (without holding
mmap_sem) is that we can race with free_pgtables() and therefore we
cannot assume the page-tables will stick around.
Remove the reliance on the pte pointer.
Signed-off-by: Peter Zijlstra (Intel)
---
[Rename vma_is_dead() to vma_has_changed() and move its adding to the next
patch]
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
include/linux/mm_types.h | 2 +
kernel/fork.c | 1 +
mm/init-mm.c | 1 +
mm/internal.h | 5 +++
mm/mmap.c | 100 +++
5 files changed, 83 insertions(
[Port to 4.12 kernel]
[Fix lock dependency between mapping->i_mmap_rwsem and vma->vm_sequence]
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
include/linux/mm_types.h | 1 +
mm/memory.c | 2 ++
mm/mmap.c | 21 ++---
3 files changed, 21 insertions(+), 3 deletions(-)