On 22/03/2018 17:13, Matthew Wilcox wrote:
> On Thu, Mar 22, 2018 at 09:06:14AM -0700, Yang Shi wrote:
>> On 3/22/18 2:10 AM, Michal Hocko wrote:
>>> On Wed 21-03-18 15:36:12, Yang Shi wrote:
On 3/21/18 2:23 PM, Michal Hocko wrote:
> On Wed 21-03-18 10:16:41, Yang Shi wrote:
>>
On 22/03/2018 17:05, Matthew Wilcox wrote:
> On Thu, Mar 22, 2018 at 04:54:52PM +0100, Laurent Dufour wrote:
>> On 22/03/2018 16:40, Matthew Wilcox wrote:
>>> On Thu, Mar 22, 2018 at 04:32:00PM +0100, Laurent Dufour wrote:
>>>> Regarding the page fault, why
On 22/03/2018 16:40, Matthew Wilcox wrote:
> On Thu, Mar 22, 2018 at 04:32:00PM +0100, Laurent Dufour wrote:
>> On 21/03/2018 23:46, Matthew Wilcox wrote:
>>> On Wed, Mar 21, 2018 at 02:45:44PM -0700, Yang Shi wrote:
>>>> Marking vma as deleted sounds good. The p
On 21/03/2018 23:46, Matthew Wilcox wrote:
> On Wed, Mar 21, 2018 at 02:45:44PM -0700, Yang Shi wrote:
>> Marking vma as deleted sounds good. The problem for my current approach is
>> the concurrent page fault may succeed if it accesses the not yet unmapped
>> section. Marking deleted vma could
On 17/03/2018 08:51, kernel test robot wrote:
> FYI, we noticed the following commit (built with gcc-7):
>
> commit: b1f0502d04537ef55b0c296823affe332b100eb5 ("mm: VMA sequence count")
> url:
> https://github.com/0day-ci/linux/commits/Laurent-Dufour/Speculative-pa
an be split. This was mostly for device-dax and
> hugetlbfs mappings which have specific alignment constraints.
>
> mappings initiated via shmget/shmat have their original vm_ops
> overwritten with shm_vm_ops. shm_vm_ops functions will call back
> to the original vm_ops if needed. Add such a split function.
On 20/03/2018 22:26, Mike Kravetz wrote:
> On 03/20/2018 10:25 AM, Laurent Dufour wrote:
>> When running the sampler detailed below, the kernel, if built with the VM
>> debug option turned on (as many distros do), is panicking with the following
>> message :
>> kernel
rror("shmdt");
goto out;
}
printf("test done.\n");
ret = 0;
out:
shmctl(shmid, IPC_RMID, NULL);
return ret;
}
--- End of code
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
mm/mmap.c | 11 +++
1 file changed, 11 insertions(+)
diff --git a/mm/mmap.c b/mm/mmap.c
index 188f195883b
On 16/03/2018 11:23, kernel test robot wrote:
> FYI, we noticed the following commit (built with gcc-7):
>
> commit: b33ddf50ebcc740b990dd2e0e8ff0b92c7acf58e ("mm: Protect mm_rb tree
> with a rwlock")
> url:
> https://github.com/0day-ci/linux/commits/Laurent-Du
On 14/03/2018 09:48, Peter Zijlstra wrote:
> On Tue, Mar 13, 2018 at 06:59:47PM +0100, Laurent Dufour wrote:
>> This change is inspired by Peter's proposal patch [1] which was
>> protecting the VMA using SRCU. Unfortunately, SRCU is not scaling well in
>>
On 14/03/2018 14:11, Michal Hocko wrote:
> On Tue 13-03-18 18:59:30, Laurent Dufour wrote:
>> Changes since v8:
>> - Don't check PMD when locking the pte when THP is disabled
>> Thanks to Daniel Jordan for reporting this.
>> - Rebase on 4.16
>
> Is this real
This configuration variable will be used to build the code needed to
handle speculative page fault.
By default it is turned off, and activated depending on architecture
support.
Suggested-by: Thomas Gleixner <t...@linutronix.de>
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
mm/Kconfig | 3 +++
1 file changed, 3 insertions
for book3e_hugetlb_preload()
called by update_mmu_cache()
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
arch/powerpc/Kconfig | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 73ce5dd07642..acf2696a6505 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
migrate_misplaced_page() is only called during the page fault handling so
it's better to pass the pointer to the struct vm_fault instead of the vma.
This way, during the speculative page fault path, the saved vma->vm_flags
can be used.
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
include/linux/migrat
to prevent the write from being split
and intermediate values from being pushed to other CPUs.
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
fs/proc/task_mmu.c | 5 -
fs/userfaultfd.c | 17 +
mm/khugepaged.c| 3 +++
mm/madvise.c | 6 +-
mm/mempolicy.c | 51
is then trapped in cow_user_page().
If VM_FAULT_RETRY is returned, it is passed up to the callers to retry the
page fault while holding the mmap_sem.
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
include/linux/mm.h | 1 +
mm/memory.c| 29 +++--
2 files changed, 20 insertions(+), 10
hanges.
This patch also sets the fields in hugetlb_no_page() and
__collapse_huge_page_swapin even if they are not needed by the callee.
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
include/linux/mm.h | 6 ++
mm/hugetlb.c | 2 ++
mm/khugepaged.c| 2 ++
mm/memory.c
pointer.
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
include/linux/swap.h | 10 --
mm/memory.c | 8
mm/swap.c| 6 +++---
3 files changed, 15 insertions(+), 9 deletions(-)
diff --git a/include/linux/swap.h b/include/linux/swap.h
index 1985940af479..a7dc37e0e405 100644
sequence counter which is
updated in unmap_page_range() before locking the pte, and then in
free_pgtables() so when locking the pte the change will be detected.
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
mm/memory.c | 4
1 file changed, 4 insertions(+)
diff --git a/mm/memory.c b/mm/memor
value as parameter.
Note: The speculative path is turned on for architectures providing support
for the special PTE flag. So only the first block of vm_normal_page is used
during the speculative path.
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
include/linux/mm.h | 7 +--
mm/memory.c| 18
still valid as explained
above.
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
include/linux/rmap.h | 12 ++--
mm/memory.c | 8
mm/rmap.c| 5 ++---
3 files changed, 16 insertions(+), 9 deletions(-)
diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 988d176472df..
ess speculative page fault.
[1] https://patchwork.kernel.org/patch/5108281/
Cc: Peter Zijlstra (Intel) <pet...@infradead.org>
Cc: Matthew Wilcox <wi...@infradead.org>
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
include/linux/mm_types.h | 4 ++
kernel/fork.c| 3 ++
mm/init-mm.c | 3 ++
mm/internal.h| 6 +++
D against concurrent collapsing operation]
[Try spin lock the pte during the speculative path to avoid deadlock with
other CPUs invalidating the TLB and requiring this CPU to catch the
inter-processor interrupt]
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
include/linux/hugetlb_inline.h | 2 +-
include/linux/mm.h | 8 +
include/linux/pagemap.h
Add a new software event to count successful speculative page faults.
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
include/uapi/linux/perf_event.h | 1 +
1 file changed, 1 insertion(+)
diff --git a/include/uapi/linux/perf_event.h b/include/uapi/linux/perf_event.h
index 6f873503552d..a6ddab9edeec 100644
Add support for the new speculative faults event.
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
tools/include/uapi/linux/perf_event.h | 1 +
tools/perf/util/evsel.c | 1 +
tools/perf/util/parse-events.c| 4
tools/perf/util/parse-events.l| 1 +
tools/perf/util/python.c
matched the passed address and release
the reference on the VMA so that it can be freed if needed.
In the case the VMA is freed, can_reuse_spf_vma() will have returned false
as the VMA is no longer in the RB tree.
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
include/linux/mm.h | 5 +-
mm/memory.c| 136
for multithreaded processes as there is no
risk of contention on the mmap_sem otherwise.
Built only if CONFIG_SPECULATIVE_PAGE_FAULT is defined (currently for
BOOK3S_64 && SMP).
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
arch/powerpc/mm/fault.c | 31 ++-
1 file changed, 30 in
cesses]
[Try to the VMA fetch during the speculative path in case of retry]
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
arch/x86/mm/fault.c | 38 +-
1 file changed, 37 insertions(+), 1 deletion(-)
diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index e6af2b464c3d..a73cf227edd6 100644
--- a/arch/x86/mm/fault.c
+++ b/arch
This patch adds a set of new trace events to collect the speculative page fault
event failures.
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
include/trace/events/pagefault.h | 87
mm/memory.c | 62 ++--
2 files changed, 136
() service which can be called by
passing the value of the vm_flags field.
No functional changes are expected for the other callers of
maybe_mkwrite().
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
include/linux/mm.h | 9 +++--
mm/memory.c| 6 +++---
2 files changed, 10 insertions(+), 5
d by calling vm_raw_write_end() by the callee once the ptes have
been moved.
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
include/linux/mm.h | 16
mm/mmap.c | 47 ---
mm/mremap.c| 13 +
3 files changed, 61 insertions(+),
Introduce CONFIG_SPECULATIVE_PAGE_FAULT which turns on the Speculative Page
Fault handler when building for 64bits with SMP.
Cc: Thomas Gleixner <t...@linutronix.de>
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
arch/x86/Kconfig | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index
When handling a page fault without holding the mmap_sem, the fetch of the
pte lock pointer and the locking will have to be done while ensuring
that the VMA is not modified behind our back.
So move the fetch and locking operations into a dedicated function.
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
mm/memory.c
nce events to report number of successful
and failed speculative events.
[1]
http://linux-kernel.2935.n7.nabble.com/RFC-PATCH-0-6-Another-go-at-speculative-page-faults-tt965642.html#none
[2] https://patchwork.kernel.org/patch/687/
Laurent Dufour (20):
mm: Introduce CONFIG_SPECULATIVE_
>
> On 02/16/2018 10:25 AM, Laurent Dufour wrote:
>> +static bool pte_map_lock(struct vm_fault *vmf)
>> +{
> ...snip...
>> + if (!pmd_same(pmdval, vmf->orig_pmd))
>> + goto out;
>
> Since SPF can now call pmd_same without THP, maybe the way to fi
is then trapped in cow_user_page().
If VM_FAULT_RETRY is returned, it is passed up to the callers to retry the
page fault while holding the mmap_sem.
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
include/linux/mm.h | 1 +
mm/memory.c| 37 ++---
2 files changed, 27 insertions(+), 11
pointer.
Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>
---
include/linux/swap.h | 10 --
mm/memory.c | 8
mm/swap.c| 6 +++---
3 files changed, 15 insertions(+), 9 deletions(-)
diff --git a/include/linux/swap.h b/include/linux/swap.h
index a1a3f4ed94ce..99377b66ea93 100644