if it is really locked or not,
as there are explicit APIs to set PG_locked.
I couldn't find any history explaining why PG_locked is set non-atomically.
I believe it could be for performance reasons, though I'm not sure.
Chintan Pandya (2):
page-flags: Make page lock operation atomic
page
Change-Id: I13bdbedc2b198af014d885e1925c93b83ed6660e
Signed-off-by: Chintan Pandya
---
fs/cifs/file.c | 8
fs/pipe.c | 2 +-
include/linux/page-flags.h | 2 +-
include/linux/pagemap.h | 6 +++---
mm/filemap.c | 4 ++--
mm/khugepaged.c | 2 +-
mm/ksm.c
wait-until-set. So, at the least, find out who is
doing the double setting and fix them.
Change-Id: I1295fcb8527ce4b54d5d11c11287fc7516006cf0
Signed-off-by: Chintan Pandya
---
include/linux/page-flags.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/linux/page-flags.h b/include
Commit-ID: 785a19f9d1dd8a4ab2d0633be4656653bd3de1fc
Gitweb: https://git.kernel.org/tip/785a19f9d1dd8a4ab2d0633be4656653bd3de1fc
Author: Chintan Pandya
AuthorDate: Wed, 27 Jun 2018 08:13:47 -0600
Committer: Thomas Gleixner
CommitDate: Wed, 4 Jul 2018 21:37:08 +0200
ioremap: Update
Hi Andrew,
On 6/6/2018 9:15 PM, Will Deacon wrote:
[...]
On Wed, Jun 06, 2018 at 12:31:18PM +0530, Chintan Pandya wrote:
This series of patches brings huge vmap back for arm64.
Patch 1/3 has been taken by Toshi in his series of patches
by name "[PATCH v3 0/3] fix free pmd/pte
On 6/6/2018 9:15 PM, Will Deacon wrote:
Hi Chintan,
Hi Will,
Thanks for sticking with this. I've reviewed the series now and I'm keen
for it to land in mainline. Just a couple of things below.
Thanks for all the reviews so far.
On Wed, Jun 06, 2018 at 12:31:18PM +0530, Chintan Pandya
Add an interface to invalidate intermediate page tables
from TLB for kernel.
Signed-off-by: Chintan Pandya
---
arch/arm64/include/asm/tlbflush.h | 7 +++
1 file changed, 7 insertions(+)
diff --git a/arch/arm64/include/asm/tlbflush.h
b/arch/arm64/include/asm/tlbflush.h
index dfc61d7
From: Chintan Pandya
The following kernel panic was observed on an ARM64 platform
due to a stale TLB entry.
1. ioremap with 4K size, a valid pte page table is set.
2. iounmap it, its pte entry is set to 0.
3. ioremap the same address with 2M size, update its pmd entry with
a new value.
4
and also free the leaking page tables.
Implementation requires,
1) Clearing off the current pud/pmd entry
2) Invalidation of TLB
3) Freeing of the unused next-level page tables
Signed-off-by: Chintan Pandya
---
arch/arm64/mm/mmu.c | 48
1 file
From V2->V3:
- Use the existing page table free interface to do arm64
specific things
From V1->V2:
- Rebased my patches on top of "[PATCH v2 1/2] mm/vmalloc:
Add interfaces to free unmapped page table"
- Honored BBM for ARM64
Chintan Pandya (3):
ioremap: Up
On 6/4/2018 5:43 PM, Will Deacon wrote:
On Fri, Jun 01, 2018 at 06:09:16PM +0530, Chintan Pandya wrote:
Add helper macros to give virtual references to page
tables. These will be used while freeing dangling
page tables.
Signed-off-by: Chintan Pandya
---
arch/arm64/include/asm/pgtable.h
On 6/4/2018 5:44 PM, Will Deacon wrote:
On Fri, Jun 01, 2018 at 06:09:18PM +0530, Chintan Pandya wrote:
Huge mappings have had stability issues due to stale
TLB entries and memory leaks. Since those are
addressed in this series of patches, it is now safe
to allow huge mappings.
Signed
On 6/4/2018 5:43 PM, Will Deacon wrote:
On Fri, Jun 01, 2018 at 06:09:17PM +0530, Chintan Pandya wrote:
Implement pud_free_pmd_page() and pmd_free_pte_page().
Implementation requires,
1) Clearing off the current pud/pmd entry
2) Invalidate TLB which could have previously
valid
positively.
Thanks,
On 6/1/2018 6:09 PM, Chintan Pandya wrote:
This series of patches brings huge vmap back for arm64.
Patch 1/4 has been taken by Toshi in his series of patches
by name "[PATCH v3 0/3] fix free pmd/pte page handlings on x86"
to avoid merge conflict with this series.
The
Add helper macros to give virtual references to page
tables. These will be used while freeing dangling
page tables.
Signed-off-by: Chintan Pandya
---
arch/arm64/include/asm/pgtable.h | 3 +++
1 file changed, 3 insertions(+)
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm
From: Chintan Pandya
The following kernel panic was observed on an ARM64 platform
due to a stale TLB entry.
1. ioremap with 4K size, a valid pte page table is set.
2. iounmap it, its pte entry is set to 0.
3. ioremap the same address with 2M size, update its pmd entry with
a new value.
4
Add an interface to invalidate intermediate page tables
from TLB for kernel.
Signed-off-by: Chintan Pandya
---
arch/arm64/include/asm/tlbflush.h | 7 +++
1 file changed, 7 insertions(+)
diff --git a/arch/arm64/include/asm/tlbflush.h
b/arch/arm64/include/asm/tlbflush.h
index dfc61d7
Implement pud_free_pmd_page() and pmd_free_pte_page().
Implementation requires,
1) Clearing off the current pud/pmd entry
2) Invalidating the TLB, which could still hold a
previously valid, now stale entry
3) Freeing of the unused next-level page tables
Signed-off-by: Chintan Pandya
---
arch/arm64
- Avoid redundant TLB invalidation in one particular case
From V2->V3:
- Use the existing page table free interface to do arm64
specific things
From V1->V2:
- Rebased my patches on top of "[PATCH v2 1/2] mm/vmalloc:
Add interfaces to free unmapped page table"
Huge mappings have had stability issues due to stale
TLB entries and memory leaks. Since those are
addressed in this series of patches, it is now safe
to allow huge mappings.
Signed-off-by: Chintan Pandya
---
arch/arm64/mm/mmu.c | 18 ++
1 file changed, 2 insertions(+), 16
Implement pud_free_pmd_page() and pmd_free_pte_page().
Implementation requires,
1) Clearing off the current pud/pmd entry
2) Invalidating the TLB, which could still hold a
previously valid, now stale entry
3) Freeing of the unused next-level page tables
Signed-off-by: Chintan Pandya
---
arch/arm64
Add an interface to invalidate intermediate page tables
from TLB for kernel.
Signed-off-by: Chintan Pandya <cpan...@codeaurora.org>
---
arch/arm64/include/asm/tlbflush.h | 7 +++
1 file changed, 7 insertions(+)
diff --git a/arch/arm64/include/asm/tlbflush.h
b/arch/arm64/inclu
From: Chintan Pandya <cpan...@codeaurora.org>
The following kernel panic was observed on an ARM64 platform
due to a stale TLB entry.
1. ioremap with 4K size, a valid pte page table is set.
2. iounmap it, its pte entry is set to 0.
3. ioremap the same address with 2M size, update its pmd
Commit 15122ee2c515a ("arm64: Enforce BBM for huge
IO/VMAP mappings") is a temporary work-around until the
issues with CONFIG_HAVE_ARCH_HUGE_VMAP get fixed.
Revert this change as we have fixes for the issue.
Signed-off-by: Chintan Pandya
---
arch/arm64/mm/mmu.c | 8 --
From V2->V3:
- Use the existing page table free interface to do arm64
specific things
From V1->V2:
- Rebased my patches on top of "[PATCH v2 1/2] mm/vmalloc:
Add interfaces to free unmapped page table"
- Honored BBM for ARM64
Chintan Pandya (4):
ioremap: Update p
On 5/24/2018 7:27 PM, Chintan Pandya wrote:
Implement pud_free_pmd_page() and pmd_free_pte_page().
Implementation requires,
1) Clearing off the current pud/pmd entry
2) Invalidating the TLB, which could still hold a
previously valid, now stale entry
3) Freeing of the unused next-level page
This commit 15122ee2c515a ("arm64: Enforce BBM for huge
IO/VMAP mappings") is a temporary work-around until the
issues with CONFIG_HAVE_ARCH_HUGE_VMAP gets fixed.
Revert this change as we have fixes for the issue.
Signed-off-by: Chintan Pandya
---
arch/arm64/mm/mmu.c | 8 --
From: Chintan Pandya
The following kernel panic was observed on an ARM64 platform
due to a stale TLB entry.
1. ioremap with 4K size, a valid pte page table is set.
2. iounmap it, its pte entry is set to 0.
3. ioremap the same address with 2M size, update its pmd entry with
a new value.
4
Implement pud_free_pmd_page() and pmd_free_pte_page().
Implementation requires,
1) Clearing off the current pud/pmd entry
2) Invalidating the TLB, which could still hold a
previously valid, now stale entry
3) Freeing of the unused next-level page tables
Signed-off-by: Chintan Pandya
---
arch/arm64
Add an interface to invalidate intermediate page tables
from TLB for kernel.
Signed-off-by: Chintan Pandya
---
arch/arm64/include/asm/tlbflush.h | 7 +++
1 file changed, 7 insertions(+)
diff --git a/arch/arm64/include/asm/tlbflush.h
b/arch/arm64/include/asm/tlbflush.h
index dfc61d7
x86 implementation
- Re-order pmd/pud clear and table free
- Avoid redundant TLB invalidation in one particular case
From V2->V3:
- Use the existing page table free interface to do arm64
specific things
From V1->V2:
- Rebased my patches on top of "[PATCH v2 1/2] mm/vm
as updated by Toshi.
On Mon, Apr 30, 2018 at 01:11:33PM +0530, Chintan Pandya wrote:
Implement pud_free_pmd_page() and pmd_free_pte_page().
Implementation requires,
1) Clearing off the current pud/pmd entry
2) Invalidating the TLB, which could still hold a
previously valid, now stale entry
3
On 5/23/2018 8:04 PM, Kani, Toshi wrote:
On Wed, 2018-05-23 at 15:01 +0100, Will Deacon wrote:
Hi Chintan,
[as a side note: I'm confused on the status of this patch series, as part
of it was reposted separately by Toshi. Please can you work together?]
I do not know the status of my patch
On 5/4/2018 3:12 AM, Andrew Morton wrote:
On Tue, 17 Apr 2018 16:13:48 +0530 Chintan Pandya
wrote:
A client can call vunmap with some intermediate 'addr'
which may not be the start of the VM area. The entire
unmap code works with vm->vm_start, which is proper,
but the debug object API is cal
On 5/2/2018 1:24 PM, Ganesh Mahendran wrote:
Set ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT for arm64. This
enables Speculative Page Fault handler.
Signed-off-by: Ganesh Mahendran
---
This patch is on top of Laurent's v10 spf
---
arch/arm64/Kconfig | 1 +
1 file changed, 1 insertion(+)
diff
) + 45.468 us | }
6) 2.760 us | vunmap_page_range();
6) ! 505.105 us | }
Signed-off-by: Chintan Pandya
---
mm/vmalloc.c | 29 +++--
1 file changed, 7 insertions(+), 22 deletions(-)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index ebff729..781ce02 100644
On 5/1/2018 4:22 AM, Andrew Morton wrote:
On Mon, 16 Apr 2018 16:29:02 +0530 Chintan Pandya
wrote:
vunmap does page table clear operations twice
when DEBUG_PAGEALLOC_ENABLE_DEFAULT is enabled.
So, clean up the code as that is unintended.
As a perf gain, we save a few microseconds. Below
On 5/1/2018 4:34 AM, Andrew Morton wrote:
should check for it and do a WARN_ONCE so it gets fixed.
Yes, that was an idea in discussion, but it was suggested to me that it
could be intentional. But since you are raising this, I will try to dig
once again and share a patch with WARN_ONCE if
Implement pud_free_pmd_page() and pmd_free_pte_page().
Implementation requires,
1) Clearing off the current pud/pmd entry
2) Invalidating the TLB, which could still hold a
previously valid, now stale entry
3) Freeing of the unused next-level page tables
Signed-off-by: Chintan Pandya
---
arch/arm64
pagetable entry even in map.
Why? Read this:
https://patchwork.kernel.org/patch/10134581/
Pass 'addr' in these interfaces so that proper TLB ops
can be performed.
Signed-off-by: Chintan Pandya <cpan...@codeaurora.org>
---
arch/arm64/mm/mmu.c | 4 ++--
arch/x86/mm/pgtab
Commit 15122ee2c515a ("arm64: Enforce BBM for huge
IO/VMAP mappings") is a temporary work-around until the
issues with CONFIG_HAVE_ARCH_HUGE_VMAP get fixed.
Revert this change as we have fixes for the issue.
Signed-off-by: Chintan Pandya <cpan...@codeaurora.org>
---
arc
Add an interface to invalidate intermediate page tables
from TLB for kernel.
Signed-off-by: Chintan Pandya
---
arch/arm64/include/asm/tlbflush.h | 7 +++
1 file changed, 7 insertions(+)
diff --git a/arch/arm64/include/asm/tlbflush.h
b/arch/arm64/include/asm/tlbflush.h
index dfc61d7
From V1->V2:
- Rebased my patches on top of "[PATCH v2 1/2] mm/vmalloc:
Add interfaces to free unmapped page table"
- Honored BBM for ARM64
Chintan Pandya (4):
ioremap: Update pgtable free interfaces with addr
arm64: tlbflush: Introduce __flush_tlb_kernel_pgtable
arm64: Implement p
On 4/29/2018 2:24 AM, Kani, Toshi wrote:
On Sat, 2018-04-28 at 11:02 +0200, j...@8bytes.org wrote:
On Fri, Apr 27, 2018 at 02:31:51PM +, Kani, Toshi wrote:
So, we can add the step 2 on top of this patch.
1. Clear pud/pmd entry.
2. System wide TLB flush <-- TO BE ADDED BY NEW PATCH
On 4/27/2018 6:18 PM, j...@8bytes.org wrote:
On Fri, Apr 27, 2018 at 05:22:28PM +0530, Chintan Pandya wrote:
I'm a bit confused here. Are you pointing to a race within the ioremap/vmalloc
framework while updating the page table, or a race during TLB ops? Since the
latter is arch-dependent, I would
On 4/27/2018 3:59 PM, Catalin Marinas wrote:
On Tue, Apr 03, 2018 at 01:30:44PM +0530, Chintan Pandya wrote:
Add an interface to invalidate intermediate page tables
from TLB for kernel.
Signed-off-by: Chintan Pandya
---
arch/arm64/include/asm/tlbflush.h | 6 ++
1 file changed, 6
On 4/27/2018 1:07 PM, j...@8bytes.org wrote:
On Thu, Apr 26, 2018 at 10:30:14PM +, Kani, Toshi wrote:
Thanks for the clarification. After reading through SDM one more time, I
agree that we need a TLB purge here. Here is my current understanding.
- INVLPG purges both TLB and
the debug objects corresponding to this vm area.
Here, we actually free 'other' client's debug objects.
Fix this by freeing the debug objects first and then
releasing the VM area.
Signed-off-by: Chintan Pandya
---
mm/vmalloc.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git
API.
Signed-off-by: Chintan Pandya
---
mm/vmalloc.c | 8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 12d675c..033c918 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1124,15 +1124,15 @@ void vm_unmap_ram(const void *mem, unsigned int co
are rebased over tip + my other patch in
review "[PATCH v2] mm: vmalloc: Clean up vunmap to avoid
pgtable ops twice"
Chintan Pandya (2):
mm: vmalloc: Avoid racy handling of debugobjects in vunmap
mm: vmalloc: Pass proper vm_start into debugobjects
From V1->V2:
- Incorpo
Ping...
On 4/3/2018 1:30 PM, Chintan Pandya wrote:
This series of patches are follow up work (and depends on)
Toshi Kani 's patches "fix memory leak/
panic in ioremap huge pages".
This series of patches are tested on 4.9 kernel with Cortex-A75
based SoC.
These patches c
On 4/17/2018 8:39 AM, Anshuman Khandual wrote:
On 04/16/2018 05:39 PM, Chintan Pandya wrote:
On 4/13/2018 5:31 PM, Anshuman Khandual wrote:
On 04/13/2018 05:03 PM, Chintan Pandya wrote:
Client can call vunmap with some intermediate 'addr'
which may not be the start of the VM area. Entire