Re: [PATCH 2/2] mm: vmalloc: Pass proper vm_start into debugobjects

2018-04-16 Thread Chintan Pandya
On 4/13/2018 5:31 PM, Anshuman Khandual wrote: On 04/13/2018 05:03 PM, Chintan Pandya wrote: Client can call vunmap with some intermediate 'addr' which may not be the start of the VM area. The entire unmap code works with vm->vm_start, which is proper, but the debug object API is call

[PATCH v2] mm: vmalloc: Clean up vunmap to avoid pgtable ops twice

2018-04-16 Thread Chintan Pandya
) + 45.468 us |} 6) 2.760 us|vunmap_page_range(); 6) ! 505.105 us | } Signed-off-by: Chintan Pandya --- mm/vmalloc.c | 25 +++-- 1 file changed, 3 insertions(+), 22 deletions(-) diff --git a/mm/vmalloc.c b/mm/vmalloc.c index ebff729..6729400 100644 --- a/mm

Re: [PATCH] mm: vmalloc: Remove double execution of vunmap_page_range

2018-04-13 Thread Chintan Pandya
On 4/13/2018 5:11 PM, Michal Hocko wrote: On Fri 13-04-18 16:57:06, Chintan Pandya wrote: On 4/13/2018 4:39 PM, Michal Hocko wrote: On Fri 13-04-18 16:15:26, Chintan Pandya wrote: On 4/13/2018 4:10 PM, Anshuman Khandual wrote: On 04/13/2018 03:47 PM, Chintan Pandya wrote: On 4/13

[PATCH 2/2] mm: vmalloc: Pass proper vm_start into debugobjects

2018-04-13 Thread Chintan Pandya
s into debug object API. Signed-off-by: Chintan Pandya --- mm/vmalloc.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/mm/vmalloc.c b/mm/vmalloc.c index 9ff21a1..28034c55 100644 --- a/mm/vmalloc.c +++ b/mm/vmalloc.c @@ -1526,8 +1526,8 @@ static void __vunmap(const void *addr

[PATCH 1/2] mm: vmalloc: Avoid racy handling of debugobjects in vunmap

2018-04-13 Thread Chintan Pandya
the debug objects corresponding to this vm area. Here, we actually free 'other' client's debug objects. Fix this by freeing the debug objects first and then releasing the VM area. Signed-off-by: Chintan Pandya --- mm/vmalloc.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-

[PATCH 0/2] vunmap and debug objects

2018-04-13 Thread Chintan Pandya
help debug objects to be in consistent state. We've observed some list corruptions in debug objects. However, no claims that these patches will be fixing them. If one has an opinion that debug object has no use in vmalloc framework, I would raise a patch to remove them from the vunmap leg. Ch

Re: [PATCH] mm: vmalloc: Remove double execution of vunmap_page_range

2018-04-13 Thread Chintan Pandya
On 4/13/2018 4:39 PM, Michal Hocko wrote: On Fri 13-04-18 16:15:26, Chintan Pandya wrote: On 4/13/2018 4:10 PM, Anshuman Khandual wrote: On 04/13/2018 03:47 PM, Chintan Pandya wrote: On 4/13/2018 3:29 PM, Anshuman Khandual wrote: On 04/13/2018 02:46 PM, Chintan Pandya wrote: Unmap

Re: [PATCH] mm: vmalloc: Remove double execution of vunmap_page_range

2018-04-13 Thread Chintan Pandya
On 4/13/2018 4:10 PM, Anshuman Khandual wrote: On 04/13/2018 03:47 PM, Chintan Pandya wrote: On 4/13/2018 3:29 PM, Anshuman Khandual wrote: On 04/13/2018 02:46 PM, Chintan Pandya wrote: Unmap legs call vunmap_page_range() irrespective of whether debug_pagealloc_enabled() is set. So

Re: [PATCH] mm: vmalloc: Remove double execution of vunmap_page_range

2018-04-13 Thread Chintan Pandya
On 4/13/2018 3:29 PM, Anshuman Khandual wrote: On 04/13/2018 02:46 PM, Chintan Pandya wrote: Unmap legs call vunmap_page_range() irrespective of whether debug_pagealloc_enabled() is set. So, remove the redundant check and the optional vunmap_page_range() routines. vunmap_page_range() tears

[PATCH] mm: vmalloc: Remove double execution of vunmap_page_range

2018-04-13 Thread Chintan Pandya
Unmap legs call vunmap_page_range() irrespective of whether debug_pagealloc_enabled() is set. So, remove the redundant check and the optional vunmap_page_range() routines. Signed-off-by: Chintan Pandya --- mm/vmalloc.c | 23 +-- 1 file changed, 1 insertion(+), 22 deletions

Re: [PATCH v8 0/4] Fix issues with huge mapping in ioremap for ARM64

2018-04-05 Thread Chintan Pandya
On 4/3/2018 5:25 PM, Chintan Pandya wrote: On 4/3/2018 2:13 PM, Marc Zyngier wrote: Hi Chintan, Hi Marc, On 03/04/18 09:00, Chintan Pandya wrote: This series of patches is follow-up work on (and depends on) Toshi Kani's patches "fix memory leak/ panic in ioremap huge pages"

Re: [PATCH v8 0/4] Fix issues with huge mapping in ioremap for ARM64

2018-04-03 Thread Chintan Pandya
On 4/3/2018 2:13 PM, Marc Zyngier wrote: Hi Chintan, Hi Marc, On 03/04/18 09:00, Chintan Pandya wrote: This series of patches is follow-up work on (and depends on) Toshi Kani's patches "fix memory leak/ panic in ioremap huge pages". This series of patches was tested on

[PATCH v8 2/4] arm64: tlbflush: Introduce __flush_tlb_kernel_pgtable

2018-04-03 Thread Chintan Pandya
Add an interface to invalidate intermediate page tables from TLB for kernel. Signed-off-by: Chintan Pandya --- arch/arm64/include/asm/tlbflush.h | 6 ++ 1 file changed, 6 insertions(+) diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h index 9e82dd7

[PATCH v8 4/4] Revert "arm64: Enforce BBM for huge IO/VMAP mappings"

2018-04-03 Thread Chintan Pandya
Commit 15122ee2c515a ("arm64: Enforce BBM for huge IO/VMAP mappings") is a temporary work-around until the issues with CONFIG_HAVE_ARCH_HUGE_VMAP get fixed. Revert this change as we have fixes for the issue. Signed-off-by: Chintan Pandya --- arch/arm64/mm/mmu.c | 8 ---

[PATCH v8 1/4] ioremap: Update pgtable free interfaces with addr

2018-04-03 Thread Chintan Pandya
pagetable entry even in map. Why? Read this, https://patchwork.kernel.org/patch/10134581/ Pass 'addr' in these interfaces so that proper TLB ops can be performed. Signed-off-by: Chintan Pandya --- arch/arm64/mm/mmu.c | 4 ++-- arch/x86/mm/pgtable.c | 8 +---

[PATCH v8 3/4] arm64: Implement page table free interfaces

2018-04-03 Thread Chintan Pandya
Implement pud_free_pmd_page() and pmd_free_pte_page(). Implementation requires, 1) Clearing off the current pud/pmd entry 2) Invalidating the TLB, which could still hold a previously valid but now stale entry 3) Freeing the unused next-level page tables Signed-off-by: Chintan Pandya --- arch/arm64

Re: [PATCH v7 1/4] ioremap: Update pgtable free interfaces with addr

2018-04-03 Thread Chintan Pandya
wrong git tree, please drop us a note to help improve the system] url: https://github.com/0day-ci/linux/commits/Chintan-Pandya/ioremap-Update-pgtable-free-interfaces-with-addr/20180329-133736 config: x86_64-rhel (attached as .config) compiler: gcc-7 (Debian 7.3.0-1) 7.3.0 reproduce

[PATCH v8 0/4] Fix issues with huge mapping in ioremap for ARM64

2018-04-03 Thread Chintan Pandya
d redundant TLB invalidation in one particular case From V2->V3: - Use the existing page table free interface to do arm64 specific things From V1->V2: - Rebased my patches on top of "[PATCH v2 1/2] mm/vmalloc: Add interfaces to free unmapped page table" - Honor

[PATCH v7 4/4] Revert "arm64: Enforce BBM for huge IO/VMAP mappings"

2018-03-28 Thread Chintan Pandya
Commit 15122ee2c515a ("arm64: Enforce BBM for huge IO/VMAP mappings") is a temporary work-around until the issues with CONFIG_HAVE_ARCH_HUGE_VMAP get fixed. Revert this change as we have fixes for the issue. Signed-off-by: Chintan Pandya --- arch/arm64/mm/mmu.c | 8 ---

[PATCH v7 3/4] arm64: Implement page table free interfaces

2018-03-28 Thread Chintan Pandya
Implement pud_free_pmd_page() and pmd_free_pte_page(). Implementation requires, 1) Clearing off the current pud/pmd entry 2) Invalidating the TLB, which could still hold a previously valid but now stale entry 3) Freeing the unused next-level page tables Signed-off-by: Chintan Pandya --- arch/arm64

[PATCH v7 2/4] arm64: tlbflush: Introduce __flush_tlb_kernel_pgtable

2018-03-28 Thread Chintan Pandya
Add an interface to invalidate intermediate page tables from TLB for kernel. Signed-off-by: Chintan Pandya --- arch/arm64/include/asm/tlbflush.h | 6 ++ 1 file changed, 6 insertions(+) diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h index 9e82dd7

[PATCH v7 1/4] ioremap: Update pgtable free interfaces with addr

2018-03-28 Thread Chintan Pandya
pagetable entry even in map. Why? Read this, https://patchwork.kernel.org/patch/10134581/ Pass 'addr' in these interfaces so that proper TLB ops can be performed. Signed-off-by: Chintan Pandya --- arch/arm64/mm/mmu.c | 4 ++-- arch/x86/mm/pgtable.c | 8 +---

[PATCH v7 0/4] Fix issues with huge mapping in ioremap for ARM64

2018-03-28 Thread Chintan Pandya
V3: - Use the existing page table free interface to do arm64 specific things From V1->V2: - Rebased my patches on top of "[PATCH v2 1/2] mm/vmalloc: Add interfaces to free unmapped page table" - Honored BBM for ARM64 Chintan Pandya (4): ioremap: Update pgtable free inte

Re: [PATCH v5 1/4] ioremap: Update pgtable free interfaces with addr

2018-03-28 Thread Chintan Pandya
On 3/28/2018 5:20 PM, kbuild test robot wrote: @725 if (!pmd_free_pte_page(&pmd[i])) My bad ! Will fix this in v7 Chintan -- Qualcomm India Private Limited, on behalf of Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum, a Linux Foundation Collaborative

Re: [PATCH v6 0/4] Fix issues with huge mapping in ioremap for ARM64

2018-03-28 Thread Chintan Pandya
I goofed up in making a patch file so enumeration is wrong. I'll upload v7 On 3/28/2018 4:28 PM, Chintan Pandya wrote: This series of patches is follow-up work on (and depends on) Toshi Kani's patches "fix memory leak/ panic in ioremap huge pages". This series of patch

[PATCH v6 1/4] ioremap: Update pgtable free interfaces with addr

2018-03-28 Thread Chintan Pandya
pagetable entry even in map. Why? Read this, https://patchwork.kernel.org/patch/10134581/ Pass 'addr' in these interfaces so that proper TLB ops can be performed. Signed-off-by: Chintan Pandya --- From V4->V6: - No change arch/arm64/mm/mmu.c | 4 ++-- arch/x8

[PATCH v6 4/4] Revert "arm64: Enforce BBM for huge IO/VMAP mappings"

2018-03-28 Thread Chintan Pandya
Commit 15122ee2c515a ("arm64: Enforce BBM for huge IO/VMAP mappings") is a temporary work-around until the issues with CONFIG_HAVE_ARCH_HUGE_VMAP get fixed. Revert this change as we have fixes for the issue. Signed-off-by: Chintan Pandya --- From: V1-> V6: - No change

[PATCH v6 1/2] arm64: Implement page table free interfaces

2018-03-28 Thread Chintan Pandya
Implement pud_free_pmd_page() and pmd_free_pte_page(). Implementation requires, 1) Clearing off the current pud/pmd entry 2) Invalidating the TLB, which could still hold a previously valid but now stale entry 3) Freeing the unused next-level page tables Signed-off-by: Chintan Pandya --- From

[PATCH v6 2/4] arm64: tlbflush: Introduce __flush_tlb_kernel_pgtable

2018-03-28 Thread Chintan Pandya
Add an interface to invalidate intermediate page tables from TLB for kernel. Signed-off-by: Chintan Pandya --- From: V5->V6: - No change arch/arm64/include/asm/tlbflush.h | 6 ++ 1 file changed, 6 insertions(+) diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/

[PATCH v6 0/4] Fix issues with huge mapping in ioremap for ARM64

2018-03-28 Thread Chintan Pandya
->V2: - Rebased my patches on top of "[PATCH v2 1/2] mm/vmalloc: Add interfaces to free unmapped page table" - Honored BBM for ARM64 Chintan Pandya (4): ioremap: Update pgtable free interfaces with addr arm64: tlbflush: Introduce __flush_tlb_kernel_pgtable arm64: Implement page table

Re: [PATCH v5 3/4] arm64: Implement page table free interfaces

2018-03-28 Thread Chintan Pandya
On 3/27/2018 11:30 PM, Will Deacon wrote: Hi Chintan, Hi Will, On Tue, Mar 27, 2018 at 06:54:59PM +0530, Chintan Pandya wrote: Implement pud_free_pmd_page() and pmd_free_pte_page(). Implementation requires, 1) Freeing the unused next-level page tables 2) Clearing off the current

[PATCH v5 1/4] ioremap: Update pgtable free interfaces with addr

2018-03-27 Thread Chintan Pandya
pagetable entry even in map. Why? Read this, https://patchwork.kernel.org/patch/10134581/ Pass 'addr' in these interfaces so that proper TLB ops can be performed. Signed-off-by: Chintan Pandya --- No change in v5. arch/arm64/mm/mmu.c | 4 ++-- arch/x86/mm/pgtable.c | 6

[PATCH v5 3/4] arm64: Implement page table free interfaces

2018-03-27 Thread Chintan Pandya
Implement pud_free_pmd_page() and pmd_free_pte_page(). Implementation requires, 1) Freeing the unused next-level page tables 2) Clearing off the current pud/pmd entry 3) Invalidating the TLB, which could still hold a previously valid but now stale entry Signed-off-by: Chintan Pandya --- V4->

[PATCH v5 4/4] Revert "arm64: Enforce BBM for huge IO/VMAP mappings"

2018-03-27 Thread Chintan Pandya
Commit 15122ee2c515a ("arm64: Enforce BBM for huge IO/VMAP mappings") is a temporary work-around until the issues with CONFIG_HAVE_ARCH_HUGE_VMAP get fixed. Revert this change as we have fixes for the issue. Signed-off-by: Chintan Pandya --- No change in v5 arch/arm64/mm

[PATCH v5 2/4] arm64: tlbflush: Introduce __flush_tlb_kernel_pgtable

2018-03-27 Thread Chintan Pandya
Add an interface to invalidate intermediate page tables from TLB for kernel. Signed-off-by: Chintan Pandya --- Introduced in v5 arch/arm64/include/asm/tlbflush.h | 6 ++ 1 file changed, 6 insertions(+) diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h

[PATCH v5 0/4] Fix issues with huge mapping in ioremap for ARM64

2018-03-27 Thread Chintan Pandya
This series of patches is follow-up work on (and depends on) Toshi Kani's patches "fix memory leak/ panic in ioremap huge pages". This series of patches was tested on 4.9 kernel with Cortex-A75 based SoC. These patches can also go into the '-stable' branch. Chintan Pand

Re: [PATCH v4 2/3] arm64: Implement page table free interfaces

2018-03-26 Thread Chintan Pandya
On 3/26/2018 3:25 PM, Mark Rutland wrote: On Tue, Mar 20, 2018 at 05:15:13PM +0530, Chintan Pandya wrote: +static int __pmd_free_pte_page(pmd_t *pmd, unsigned long addr, bool tlb_inv) +{ + pmd_t *table; + + if (pmd_val(*pmd)) { + table = __va(pmd_val(*pmd

[PATCH v4 2/3] arm64: Implement page table free interfaces

2018-03-20 Thread Chintan Pandya
Implement pud_free_pmd_page() and pmd_free_pte_page(). Implementation requires, 1) Freeing the unused next-level page tables 2) Clearing off the current pud/pmd entry 3) Invalidating the TLB, which could still hold a previously valid but now stale entry Signed-off-by: Chintan Pandya --- arch/arm64

[PATCH v4 1/3] ioremap: Update pgtable free interfaces with addr

2018-03-20 Thread Chintan Pandya
pagetable entry even in map. Why? Read this, https://patchwork.kernel.org/patch/10134581/ Pass 'addr' in these interfaces so that proper TLB ops can be performed. Signed-off-by: Chintan Pandya --- arch/arm64/mm/mmu.c | 4 ++-- arch/x86/mm/pgtable.c | 6 -- include/a

[PATCH v4 3/3] Revert "arm64: Enforce BBM for huge IO/VMAP mappings"

2018-03-20 Thread Chintan Pandya
Commit 15122ee2c515a ("arm64: Enforce BBM for huge IO/VMAP mappings") is a temporary work-around until the issues with CONFIG_HAVE_ARCH_HUGE_VMAP get fixed. Revert this change as we have fixes for the issue. Signed-off-by: Chintan Pandya --- arch/arm64/mm/mmu.c | 8 ---

[PATCH v4 0/3] Fix issues with huge mapping in ioremap for ARM64

2018-03-20 Thread Chintan Pandya
This series of patches is follow-up work on (and depends on) Toshi Kani's patches "fix memory leak/ panic in ioremap huge pages". This series of patches was tested on 4.9 kernel with Cortex-A75 based SoC. Chintan Pandya (3): ioremap: Update pgtable free interfaces with addr a

Re: [PATCH v3 2/3] arm64: Implement page table free interfaces

2018-03-20 Thread Chintan Pandya
On 3/20/2018 12:59 AM, Kani, Toshi wrote: On Mon, 2018-03-19 at 18:10 +0530, Chintan Pandya wrote: Implement pud_free_pmd_page() and pmd_free_pte_page(). Implementation requires, 1) Freeing of the un-used next level page tables 2) Clearing off the current pud/pmd entry 3) Invalidate

Re: [PATCH v3 1/3] ioremap: Update pgtable free interfaces with addr

2018-03-20 Thread Chintan Pandya
On 3/20/2018 12:31 AM, Kani, Toshi wrote: On Mon, 2018-03-19 at 18:10 +0530, Chintan Pandya wrote: This patch ("mm/vmalloc: Add interfaces to free unmapped page table") adds following 2 interfaces to free the page table in case we implement huge mapping. pud_free_pmd_

[PATCH v3 0/3] Fix issues with huge mapping in ioremap for ARM64

2018-03-19 Thread Chintan Pandya
This series of patches is follow-up work on (and depends on) Toshi Kani's patches "fix memory leak/ panic in ioremap huge pages". This series of patches was tested on 4.9 kernel with Cortex-A75 based SoC. Chintan Pandya (3): ioremap: Update pgtable free interfaces with addr a

[PATCH v3 2/3] arm64: Implement page table free interfaces

2018-03-19 Thread Chintan Pandya
Implement pud_free_pmd_page() and pmd_free_pte_page(). Implementation requires, 1) Freeing the unused next-level page tables 2) Clearing off the current pud/pmd entry 3) Invalidating the TLB, which could still hold a previously valid but now stale entry Signed-off-by: Chintan Pandya --- arch/arm64

[PATCH v3 3/3] Revert "arm64: Enforce BBM for huge IO/VMAP mappings"

2018-03-19 Thread Chintan Pandya
Commit 15122ee2c515a ("arm64: Enforce BBM for huge IO/VMAP mappings") is a temporary work-around until the issues with CONFIG_HAVE_ARCH_HUGE_VMAP get fixed. Revert this change as we have fixes for the issue. Signed-off-by: Chintan Pandya --- arch/arm64/mm/mmu.c | 8 ---

[PATCH v3 1/3] ioremap: Update pgtable free interfaces with addr

2018-03-19 Thread Chintan Pandya
pagetable entry even in map. Why ? Read this, https://patchwork.kernel.org/patch/10134581/ Pass 'addr' in these interfaces so that proper TLB ops can be performed. Signed-off-by: Chintan Pandya --- arch/arm64/mm/mmu.c | 4 ++-- arch/x86/mm/pgtable.c | 4 ++-- include/a

Re: [PATCH v2 3/4] arm64: Implement page table free interfaces

2018-03-18 Thread Chintan Pandya
On 3/15/2018 6:48 PM, Mark Rutland wrote: On Thu, Mar 15, 2018 at 06:15:05PM +0530, Chintan Pandya wrote: Implement pud_free_pmd_page() and pmd_free_pte_page(). Make sure, that they are indeed a page table before taking them to free. As mentioned on the prior patch, if the tables we

Re: [PATCH v2 2/4] ioremap: Implement TLB_INV before huge mapping

2018-03-18 Thread Chintan Pandya
On 3/16/2018 8:20 PM, Kani, Toshi wrote: On Fri, 2018-03-16 at 13:10 +0530, Chintan Pandya wrote: On 3/15/2018 9:42 PM, Kani, Toshi wrote: On Thu, 2018-03-15 at 18:15 +0530, Chintan Pandya wrote: Huge mapping changes PMD/PUD which could have valid previous entries. This requires proper TLB

Re: [PATCH v2 2/4] ioremap: Implement TLB_INV before huge mapping

2018-03-16 Thread Chintan Pandya
On 3/15/2018 9:42 PM, Kani, Toshi wrote: On Thu, 2018-03-15 at 18:15 +0530, Chintan Pandya wrote: Huge mapping changes PMD/PUD which could have valid previous entries. This requires proper TLB maintenance on some architectures, like ARM64. Implement BBM (break-before-make) safe TLB

Re: [PATCH v2 2/4] ioremap: Implement TLB_INV before huge mapping

2018-03-16 Thread Chintan Pandya
On 3/15/2018 8:46 PM, Mark Rutland wrote: On Thu, Mar 15, 2018 at 06:55:32PM +0530, Chintan Pandya wrote: On 3/15/2018 6:43 PM, Mark Rutland wrote: On Thu, Mar 15, 2018 at 06:15:04PM +0530, Chintan Pandya wrote: Huge mapping changes PMD/PUD which could have valid previous entries. This

Re: [PATCH v2 2/4] ioremap: Implement TLB_INV before huge mapping

2018-03-15 Thread Chintan Pandya
On 3/15/2018 7:01 PM, Mark Rutland wrote: On Thu, Mar 15, 2018 at 06:15:04PM +0530, Chintan Pandya wrote: @@ -91,10 +93,15 @@ static inline int ioremap_pmd_range(pud_t *pud, unsigned long addr, if (ioremap_pmd_enabled() && ((next - addr) ==

Re: [PATCH v2 2/4] ioremap: Implement TLB_INV before huge mapping

2018-03-15 Thread Chintan Pandya
On 3/15/2018 6:43 PM, Mark Rutland wrote: Hi, As a general note, please wrap commit text to 72 characters. On Thu, Mar 15, 2018 at 06:15:04PM +0530, Chintan Pandya wrote: Huge mapping changes PMD/PUD which could have valid previous entries. This requires proper TLB maintenance on some

[PATCH v2 3/4] arm64: Implement page table free interfaces

2018-03-15 Thread Chintan Pandya
Implement pud_free_pmd_page() and pmd_free_pte_page(). Make sure, that they are indeed a page table before taking them to free. Signed-off-by: Chintan Pandya --- arch/arm64/mm/mmu.c | 20 ++-- 1 file changed, 18 insertions(+), 2 deletions(-) diff --git a/arch/arm64/mm/mmu.c b

[PATCH v2 4/4] Revert "arm64: Enforce BBM for huge IO/VMAP mappings"

2018-03-15 Thread Chintan Pandya
Commit 15122ee2c515a ("arm64: Enforce BBM for huge IO/VMAP mappings") is a temporary work-around until the issues with CONFIG_HAVE_ARCH_HUGE_VMAP get fixed. Revert this change as we have fixes for the issue. Signed-off-by: Chintan Pandya --- arch/arm64/mm/mmu.c | 8 ---

[PATCH v2 2/4] ioremap: Implement TLB_INV before huge mapping

2018-03-15 Thread Chintan Pandya
ating intermediate page_table entries could have been optimized for specific arch. That's the case with ARM64 at least. Signed-off-by: Chintan Pandya --- lib/ioremap.c | 25 +++-- 1 file changed, 19 insertions(+), 6 deletions(-) diff --git a/lib/ioremap.c b/lib/ioremap.c ind

[PATCH v2 1/4] asm/tlbflush: Add flush_tlb_pgtable() for ARM64

2018-03-15 Thread Chintan Pandya
ARM64 MMU implements invalidation of TLB for intermediate page tables for a particular VA. This may or may not be available for other archs. So, provide this API hook only for ARM64, for now. Signed-off-by: Chintan Pandya --- arch/arm64/include/asm/tlbflush.h | 5 + include/asm-generic/tlb.h

[PATCH v2 0/4] Fix issues with huge mapping in ioremap for ARM64

2018-03-15 Thread Chintan Pandya
This series of patches is follow-up work on (and depends on) Toshi Kani's patches "fix memory leak/ panic in ioremap huge pages". The IOREMAP code has been touched up to honor BBM, which is a requirement for some archs (like arm64) and works well with all others. Chintan Pandya (4): as

Re: [PATCH v2 2/2] x86/mm: implement free pmd/pte page interfaces

2018-03-15 Thread Chintan Pandya
On 3/14/2018 11:31 PM, Toshi Kani wrote: Implement pud_free_pmd_page() and pmd_free_pte_page() on x86, which clear a given pud/pmd entry and free up lower level page table(s). Address range associated with the pud/pmd entry must have been purged by INVLPG. fixes: e61ce6ade404e ("mm: change ior

Re: [PATCH v1 0/4] Fix issues with huge mapping in ioremap

2018-03-15 Thread Chintan Pandya
On 3/14/2018 8:08 PM, Kani, Toshi wrote: On Wed, 2018-03-14 at 14:18 +0530, Chintan Pandya wrote: Note: I was working on these patches for quite some time and realized that Toshi Kani has shared some patches addressing the same issue with subject "[PATCH 0/2] fix memory leak / pan

Re: [PATCH v1 4/4] Revert "arm64: Enforce BBM for huge IO/VMAP mappings"

2018-03-14 Thread Chintan Pandya
On 3/14/2018 4:16 PM, Marc Zyngier wrote: On 14/03/18 08:48, Chintan Pandya wrote: This commit 15122ee2c515a ("arm64: Enforce BBM for huge IO/VMAP mappings") is a temporary work-around until the issues with CONFIG_HAVE_ARCH_HUGE_VMAP gets fixed. Revert this change as we have fix

Re: [PATCH v1 3/4] arm64: Fix the page leak in pud/pmd_set_huge

2018-03-14 Thread Chintan Pandya
On 3/14/2018 4:23 PM, Mark Rutland wrote: On Wed, Mar 14, 2018 at 02:18:24PM +0530, Chintan Pandya wrote: While setting a huge page, we need to take care of the previously existing next-level mapping. Since we are going to overwrite the previous mapping, the only reference to the next-level page table will

Re: [PATCH v1 2/4] ioremap: Invalidate TLB after huge mappings

2018-03-14 Thread Chintan Pandya
On 3/14/2018 4:18 PM, Mark Rutland wrote: On Wed, Mar 14, 2018 at 02:18:23PM +0530, Chintan Pandya wrote: If huge mappings are enabled, they can override valid intermediate previous mappings. Some MMU can speculatively pre-fetch these intermediate entries even after unmap. That's be

[PATCH v1 2/4] ioremap: Invalidate TLB after huge mappings

2018-03-14 Thread Chintan Pandya
idate once we override pmd/pud with huge mappings. Signed-off-by: Chintan Pandya --- lib/ioremap.c | 9 +++-- 1 file changed, 7 insertions(+), 2 deletions(-) diff --git a/lib/ioremap.c b/lib/ioremap.c index b808a39..c1e1341 100644 --- a/lib/ioremap.c +++ b/lib/ioremap.c @@ -13,6

[PATCH v1 4/4] Revert "arm64: Enforce BBM for huge IO/VMAP mappings"

2018-03-14 Thread Chintan Pandya
This commit 15122ee2c515a ("arm64: Enforce BBM for huge IO/VMAP mappings") is a temporary work-around until the issues with CONFIG_HAVE_ARCH_HUGE_VMAP gets fixed. Revert this change as we have fixes for the issue. Signed-off-by: Chintan Pandya --- arch/arm64/mm/mmu.c | 8 --

[PATCH v1 3/4] arm64: Fix the page leak in pud/pmd_set_huge

2018-03-14 Thread Chintan Pandya
. Signed-off-by: Chintan Pandya --- arch/arm64/mm/mmu.c | 9 - 1 file changed, 8 insertions(+), 1 deletion(-) diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c index 8c704f1..c0df264 100644 --- a/arch/arm64/mm/mmu.c +++ b/arch/arm64/mm/mmu.c @@ -32,7 +32,7 @@ #include #include

[PATCH v1 1/4] asm/tlbflush: Add flush_tlb_pgtable() for ARM64

2018-03-14 Thread Chintan Pandya
ARM64 MMU implements invalidation of TLB for intermediate page tables for a particular VA. This may or may not be available for other archs. So, provide this API hook only for ARM64, for now. Signed-off-by: Chintan Pandya --- arch/arm64/include/asm/tlbflush.h | 5 + include/asm-generic/tlb.h

[PATCH v1 0/4] Fix issues with huge mapping in ioremap

2018-03-14 Thread Chintan Pandya
err("my tests will run now 1\n"); t = kthread_create(&io_remap_test, NULL, "ioremap-testing"); /* * Do this so that we can run this thread on GOLD cores */ kthread_bind(t, 6); wake_up_process(t); return 0; } late_initcall(iorem

Re: [PATCH v2] slub: use jitter-free reference while printing age

2018-03-08 Thread Chintan Pandya
On 3/8/2018 11:42 PM, Christopher Lameter wrote: On Thu, 8 Mar 2018, Chintan Pandya wrote: In this case, object got freed later but 'age' shows otherwise. This could be because, while printing this info, we print allocation traces first and free traces thereafter. In between,

[PATCH v2] slub: use jitter-free reference while printing age

2018-03-07 Thread Chintan Pandya
while printing this info, we print allocation traces first and free traces thereafter. In between, if we get scheduled out or jiffies increments, (jiffies - t->when) could become meaningless. Use a jitter-free reference to calculate the age. Change-Id: I0846565807a4229748649bbecb1ffb743d71fcd8 Signed

Re: [PATCH] slub: Fix misleading 'age' in verbose slub prints

2018-03-07 Thread Chintan Pandya
On 3/7/2018 11:52 PM, Matthew Wilcox wrote: On Wed, Mar 07, 2018 at 12:13:56PM -0600, Christopher Lameter wrote: On Wed, 7 Mar 2018, Chintan Pandya wrote: In this case, object got freed later but 'age' shows otherwise. This could be because, while printing this info, we print

[PATCH] slub: Fix misleading 'age' in verbose slub prints

2018-03-07 Thread Chintan Pandya
we get scheduled out, (jiffies - t->when) could become meaningless. So, simply print when the object was allocated/freed. Signed-off-by: Chintan Pandya --- mm/slub.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/mm/slub.c b/mm/slub.c index e381728..b173f85 100644 -

Re: [PATCH v3] of: cache phandle nodes to reduce cost of of_find_node_by_phandle()

2018-02-28 Thread Chintan Pandya
On 2/15/2018 6:22 AM, frowand.l...@gmail.com wrote: +static void of_populate_phandle_cache(void) +{ + unsigned long flags; + u32 cache_entries; + struct device_node *np; + u32 phandles = 0; + + raw_spin_lock_irqsave(&devtree_lock, flags); + + kfree(phandle_c

Re: [RFC patch] ioremap: don't set up huge I/O mappings when p4d/pud/pmd is zero

2018-02-20 Thread Chintan Pandya
On 12/28/2017 4:54 PM, Hanjun Guo wrote: From: Hanjun Guo When we using iounmap() to free the 4K mapping, it just clear the PTEs but leave P4D/PUD/PMD unchanged, also will not free the memory of page tables. This will cause issues on ARM64 platform (not sure if other archs have the same issu

Re: [PATCH] of: add early boot allocation of of_find_node_by_phandle() cache

2018-02-16 Thread Chintan Pandya
On 2/15/2018 6:14 AM, frowand.l...@gmail.com wrote: From: Frank Rowand The initial implementation of the of_find_node_by_phandle() cache allocates the cache using kcalloc(). Add an early boot allocation of the cache so it will be usable during early boot. Switch over to the kcalloc() based

Re: [PATCH v3] of: cache phandle nodes to reduce cost of of_find_node_by_phandle()

2018-02-16 Thread Chintan Pandya
increase by one, resulting in a range of 1..n for n phandle values. This implementation should also provide a good reduction of overhead for any range of phandle values that are mostly in a monotonic range. Performance measurements by Chintan Pandya of several implementations of patches that are

Re: [PATCH v2] of: cache phandle nodes to reduce cost of of_find_node_by_phandle()

2018-02-12 Thread Chintan Pandya
On 2/12/2018 11:57 AM, frowand.l...@gmail.com wrote: From: Frank Rowand Create a cache of the nodes that contain a phandle property. Use this cache to find the node for a given phandle value instead of scanning the devicetree to find the node. If the phandle value is not found in the cache,

Re: [PATCH] of: cache phandle nodes to decrease cost of of_find_node_by_phandle()

2018-02-07 Thread Chintan Pandya
On 2/5/2018 5:53 PM, Chintan Pandya wrote: My question was trying to determine whether the numbers reported above are for a debug configuration or a production configuration. My reported numbers are from debug configuration. not a production configuration, I was requesting the numbers

Re: [PATCH] of: cache phandle nodes to decrease cost of of_find_node_by_phandle()

2018-02-05 Thread Chintan Pandya
My question was trying to determine whether the numbers reported above are for a debug configuration or a production configuration. My reported numbers are from debug configuration. not a production configuration, I was requesting the numbers for a production configuration. I'm working on it

Re: [PATCH] of: cache phandle nodes to decrease cost of of_find_node_by_phandle()

2018-02-01 Thread Chintan Pandya
On 2/2/2018 12:40 AM, Frank Rowand wrote: On 02/01/18 02:31, Chintan Pandya wrote: Anyways, will fix this locally and share test results. Thanks, I look forward to the results. Set up for this time was slightly different. So, taken all the numbers again. Boot to shell time (in ms

Re: [PATCH] of: cache phandle nodes to decrease cost of of_find_node_by_phandle()

2018-02-01 Thread Chintan Pandya
On 2/2/2018 2:39 AM, Frank Rowand wrote: On 02/01/18 06:24, Rob Herring wrote: And so far, no one has explained why a bigger cache got slower. Yes, I still find that surprising. I thought a bit about this. And realized that increasing the cache size should help improve the performance onl

Re: [PATCH] of: cache phandle nodes to decrease cost of of_find_node_by_phandle()

2018-02-01 Thread Chintan Pandya
Anyways, will fix this locally and share test results. Thanks, I look forward to the results. Set up for this time was slightly different. So, taken all the numbers again. Boot to shell time (in ms): Experiment 2 [1] Base: 14.843805 14.784842 14.842338 [2] 64 size

Re: [PATCH] of: cache phandle nodes to decrease cost of of_find_node_by_phandle()

2018-01-31 Thread Chintan Pandya
On 2/1/2018 1:35 AM, frowand.l...@gmail.com wrote: From: Frank Rowand + +static void of_populate_phandle_cache(void) +{ + unsigned long flags; + phandle max_phandle; + u32 nodes = 0; + struct device_node *np; + + if (phandle_cache) + return; + +

Re: [PATCH v2] of: use hash based search in of_find_node_by_phandle

2018-01-30 Thread Chintan Pandya
(1) Can you point me to the driver code that is invoking the search? There are many locations. Few of them being, https://source.codeaurora.org/quic/la/kernel/msm-4.9/tree/drivers/of/irq.c?h=msm-4.9#n214 https://source.codeaurora.org/quic/la/kernel/msm-4.9/tree/drivers/irqchip/irq-gic-v3.c?h=msm

Re: [PATCH v2] of: use hash based search in of_find_node_by_phandle

2018-01-29 Thread Chintan Pandya
ne with it. But at present, I have no idea how I will achieve this. If you can share any pointers around this, that would help! Thanks, Chintan Pandya -- The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum, a Linux Foundation Collaborative Project

Re: [PATCH v2] of: use hash based search in of_find_node_by_phandle

2018-01-28 Thread Chintan Pandya
55 15.041847 --> 0 The previously reported 400ms gain for [2] was from a different setup. These tests and the new data are from my own debug setup. When we take any of these patches to production, results might deviate accordingly. Chin

Re: [PATCH v2] of: use hash based search in of_find_node_by_phandle

2018-01-26 Thread Chintan Pandya
ch. Rasmus This is certainly doable if the current approach is not welcomed due to the addition of hlist_node in device_node. Chintan Pandya

[PATCH v2] of: use hash based search in of_find_node_by_phandle

2018-01-26 Thread Chintan Pandya
boot is 400ms. Signed-off-by: Chintan Pandya --- drivers/of/base.c | 8 ++-- drivers/of/fdt.c | 18 ++ include/linux/of.h | 6 ++ 3 files changed, 30 insertions(+), 2 deletions(-) diff --git a/drivers/of/base.c b/drivers/of/base.c index 26618ba..bfbfa99 100644 ---

Re: [PATCH] of: use hash based search in of_find_node_by_phandle

2018-01-26 Thread Chintan Pandya
On 1/26/2018 1:24 AM, Frank Rowand wrote: On 01/25/18 02:14, Chintan Pandya wrote: of_find_node_by_phandle() takes a lot of time finding right node when your intended device is too right-side in the fdt. Reason is, we search each device serially from the fdt, starting from left-most to right

Re: [PATCH] of: use hash based search in of_find_node_by_phandle

2018-01-25 Thread Chintan Pandya
On 1/25/2018 8:20 PM, Rob Herring wrote: On Thu, Jan 25, 2018 at 4:14 AM, Chintan Pandya wrote: of_find_node_by_phandle() takes a lot of time finding Got some numbers for what is "a lot of time"? On my SDM device, I see total saving of 400ms during boot time. For some clients who

[PATCH] of: use hash based search in of_find_node_by_phandle

2018-01-25 Thread Chintan Pandya
. Change-Id: I4a2bc7eff6de142e4f91a7bf474893a45e61c128 Signed-off-by: Chintan Pandya --- drivers/of/base.c | 9 +++-- drivers/of/fdt.c | 18 ++ include/linux/of.h | 6 ++ 3 files changed, 31 insertions(+), 2 deletions(-) diff --git a/drivers/of/base.c b/drivers/of

[PATCH] lowmemorykiller: Avoid excessive/redundant calling of LMK

2015-01-12 Thread Chintan Pandya
cycle waste. Fix that by returning SHRINK_STOP from the shrinker when LMK doesn't find any more work to do. The deciding factor here is: no process found in the selected LMK bucket, or memory conditions are sane. Signed-off-by: Chintan Pandya --- drivers/staging/android/lowmemorykiller.

Re: [PATCH] lowmemorykiller: Avoid excessive/redundant calling of LMK

2015-01-12 Thread Chintan Pandya
Please ignore this patch. My mistake: I merged commit messages applicable to some very old kernel into this patch. Updating shortly. On 01/12/2015 09:38 PM, Chintan Pandya wrote: The global shrinker will invoke lowmem_shrink in a loop. The loop will be run (total_scan_pages/batch_size

[PATCH] lowmemorykiller: Avoid excessive/redundant calling of LMK

2015-01-12 Thread Chintan Pandya
cycle waste. Fix that by giving excessively large batch size so that lowmem_shrink will be called just once and in the same try LMK does the needful. Signed-off-by: Chintan Pandya --- drivers/staging/android/lowmemorykiller.c | 5 - 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a

Re: [PATCH] memcg: Provide knob for force OOM into the memcg

2014-12-17 Thread Chintan Pandya
that it adds to the memcg model and its synchronization requirements from VM hotpaths. Hence, I'm inclined to not add charge moving to version 2 of memcg. Are you saying charge migration is discouraged at runtime? Difficult to live with this limitation. -- Chintan Pandya QUALCOMM IND

Re: [PATCH] memcg: Provide knob for force OOM into the memcg

2014-12-17 Thread Chintan Pandya
me task-selection to be killed by OOM in the kernel rather than userspace deciding by itself.

[PATCH] memcg: Provide knob for force OOM into the memcg

2014-12-16 Thread Chintan Pandya
used process, it can get killed by in-cgroup OOM. To avoid such scenarios, provide a convenient knob by which we can forcefully trigger OOM and make a room for upcoming process. To trigger force OOM, $ echo 1 > //memory.force_oom Signed-off-by: Chintan Pandya --- mm/memcontrol.c |

Re: [PATCH v3 6/8] mm/page_owner: keep track of page owners

2014-12-03 Thread Chintan Pandya
loc(max_size * sizeof(*list)); + + for ( ; ; ) { + ret = read_block(buf, BUF_SIZE, fin); + if (ret< 0) + break; + + add_list(buf, ret); + } + + printf("loaded %d\n", list_size); + + printf("sort

Re: [PATCH v4 2/2] ksm: provide support to use deferrable timers for scanner thread

2014-09-11 Thread Chintan Pandya
s from deep sleep. This is exactly the preference we are looking for. But yes, cannot be generalized for all. I know both RCU and some NOHZ_FULL muck already track when the system is completely idle. This is yet another case of that. Hugh

Re: [PATCH v4 2/2] ksm: provide support to use deferrable timers for scanner thread

2014-09-09 Thread Chintan Pandya
I will publish a new patch with your comments on v4. Thanks, Hugh [PATCH] ksm: avoid periodic wakeup while mergeable mms are quiet Description yet to be written! Reported-by: Chintan Pandya Not-Signed-off-by: Hugh Dickins >>> So looking at Hugh's test results I'm quite sure that
