On 4/13/2018 5:31 PM, Anshuman Khandual wrote:
On 04/13/2018 05:03 PM, Chintan Pandya wrote:
A client can call vunmap() with some intermediate 'addr'
which may not be the start of the VM area. The entire
unmap code works with vm->vm_start, which is proper,
but the debug object API is call
) + 45.468 us |}
6) 2.760 us | vunmap_page_range();
6) ! 505.105 us | }
Signed-off-by: Chintan Pandya
---
mm/vmalloc.c | 25 +++--
1 file changed, 3 insertions(+), 22 deletions(-)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index ebff729..6729400 100644
--- a/mm
On 4/13/2018 5:11 PM, Michal Hocko wrote:
On Fri 13-04-18 16:57:06, Chintan Pandya wrote:
On 4/13/2018 4:39 PM, Michal Hocko wrote:
On Fri 13-04-18 16:15:26, Chintan Pandya wrote:
On 4/13/2018 4:10 PM, Anshuman Khandual wrote:
On 04/13/2018 03:47 PM, Chintan Pandya wrote:
On 4/13
s into debug object API.
Signed-off-by: Chintan Pandya
---
mm/vmalloc.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 9ff21a1..28034c55 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1526,8 +1526,8 @@ static void __vunmap(const void *addr
the debug objects corresponding to this vm area.
Here, we actually free 'other' client's debug objects.
Fix this by freeing the debug objects first and then
releasing the VM area.
Signed-off-by: Chintan Pandya
---
mm/vmalloc.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-
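For readers following the two debugobjects fixes above, here is a minimal sketch of the idea, assuming the mm/vmalloc.c helpers of that era (find_vm_area(), get_vm_area_size(), remove_vm_area()); it illustrates the described ordering and is not the exact upstream hunk:

/*
 * Sketch: run the debug-object checks against the VM area's own
 * start/size (not the caller-supplied 'addr'), and do it before the
 * area is released, so we never touch another client's objects.
 */
static void __vunmap(const void *addr, int deallocate_pages)
{
	struct vm_struct *area;

	if (!addr)
		return;

	area = find_vm_area(addr);
	if (unlikely(!area)) {
		WARN(1, "Trying to vfree() nonexistent vm area (%p)\n", addr);
		return;
	}

	debug_check_no_locks_freed(area->addr, get_vm_area_size(area));
	debug_check_no_obj_freed(area->addr, get_vm_area_size(area));

	remove_vm_area(addr);
	/* ... page freeing and kfree(area) continue as before ... */
}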
help
debug objects to be in a consistent state.
We've observed some list corruptions in debug objects;
however, there is no claim that these patches will fix
them.
If the consensus is that debug objects have no use in the
vmalloc framework, I would raise a patch to remove
them from the vunmap leg.
Ch
On 4/13/2018 4:39 PM, Michal Hocko wrote:
On Fri 13-04-18 16:15:26, Chintan Pandya wrote:
On 4/13/2018 4:10 PM, Anshuman Khandual wrote:
On 04/13/2018 03:47 PM, Chintan Pandya wrote:
On 4/13/2018 3:29 PM, Anshuman Khandual wrote:
On 04/13/2018 02:46 PM, Chintan Pandya wrote:
Unmap
On 4/13/2018 4:10 PM, Anshuman Khandual wrote:
On 04/13/2018 03:47 PM, Chintan Pandya wrote:
On 4/13/2018 3:29 PM, Anshuman Khandual wrote:
On 04/13/2018 02:46 PM, Chintan Pandya wrote:
Unmap legs call vunmap_page_range() irrespective of whether
debug_pagealloc_enabled() is set or not. So
On 4/13/2018 3:29 PM, Anshuman Khandual wrote:
On 04/13/2018 02:46 PM, Chintan Pandya wrote:
Unmap legs call vunmap_page_range() irrespective of whether
debug_pagealloc_enabled() is set or not. So, remove the
redundant check and the optional vunmap_page_range() routines.
vunmap_page_range() tears
Unmap legs call vunmap_page_range() irrespective of whether
debug_pagealloc_enabled() is set or not. So, remove the
redundant check and the optional vunmap_page_range() routines.
Signed-off-by: Chintan Pandya
---
mm/vmalloc.c | 23 +--
1 file changed, 1 insertion(+), 22 deletions
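A hedged sketch of why the check is redundant, using the mm/vmalloc.c names of that era: the regular unmap path already tears the page tables down unconditionally, so a separate debug_pagealloc-only vunmap_page_range() call just walks the same tables twice.

/*
 * Sketch: unmap_vmap_area() is a thin wrapper around
 * vunmap_page_range() and is called regardless of
 * debug_pagealloc_enabled(); only the extra TLB flush needs the check.
 */
static void unmap_vmap_area(struct vmap_area *va)
{
	vunmap_page_range(va->va_start, va->va_end);
}

static void free_unmap_vmap_area(struct vmap_area *va)
{
	flush_cache_vunmap(va->va_start, va->va_end);
	unmap_vmap_area(va);		/* always tears down the PTEs */
	if (debug_pagealloc_enabled())
		flush_tlb_kernel_range(va->va_start, va->va_end);

	free_vmap_area_noflush(va);
}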
On 4/3/2018 5:25 PM, Chintan Pandya wrote:
On 4/3/2018 2:13 PM, Marc Zyngier wrote:
Hi Chintan,
Hi Marc,
On 03/04/18 09:00, Chintan Pandya wrote:
This series of patches is follow-up work on (and depends on)
Toshi Kani's patches "fix memory leak/
panic in ioremap huge pages"
On 4/3/2018 2:13 PM, Marc Zyngier wrote:
Hi Chintan,
Hi Marc,
On 03/04/18 09:00, Chintan Pandya wrote:
This series of patches is follow-up work on (and depends on)
Toshi Kani's patches "fix memory leak/
panic in ioremap huge pages".
This series of patches is tested on
Add an interface to invalidate intermediate page tables
from TLB for kernel.
Signed-off-by: Chintan Pandya
---
arch/arm64/include/asm/tlbflush.h | 6 ++
1 file changed, 6 insertions(+)
diff --git a/arch/arm64/include/asm/tlbflush.h
b/arch/arm64/include/asm/tlbflush.h
index 9e82dd7
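A hedged sketch of what such a hook can look like on arm64, using the existing __tlbi()/dsb() primitives; the shift encodes VA[55:12] into the TLBI payload, and the exact details are illustrative:

/*
 * Sketch: invalidate the TLB (including cached intermediate walks)
 * for one kernel VA. VAAE1IS hits all ASIDs for that VA,
 * inner-shareable, which is what kernel page tables need.
 */
static inline void __flush_tlb_kernel_pgtable(unsigned long kaddr)
{
	unsigned long addr = kaddr >> 12;

	dsb(ishst);		/* make the page-table update visible */
	__tlbi(vaae1is, addr);
	dsb(ish);		/* wait for the invalidation to complete */
}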
This commit 15122ee2c515a ("arm64: Enforce BBM for huge
IO/VMAP mappings") is a temporary work-around until the
issues with CONFIG_HAVE_ARCH_HUGE_VMAP get fixed.
Revert this change as we have fixes for the issue.
Signed-off-by: Chintan Pandya
---
arch/arm64/mm/mmu.c | 8 --
pagetable entry even in map.
Why ? Read this,
https://patchwork.kernel.org/patch/10134581/
Pass 'addr' in these interfaces so that proper TLB ops
can be performed.
Signed-off-by: Chintan Pandya
---
arch/arm64/mm/mmu.c | 4 ++--
arch/x86/mm/pgtable.c | 8 +---
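The interface change itself is small; a hedged sketch of its shape, shown as __weak generic stubs in the style the interfaces were originally introduced with (an architecture that needs a TLB operation overrides them and uses 'addr' for a targeted invalidation):

/*
 * Sketch: both helpers take the virtual address being torn down, so
 * arch code (e.g. arm64) can invalidate exactly that address; the
 * generic fallbacks can simply ignore it.
 */
int __weak pud_free_pmd_page(pud_t *pud, unsigned long addr)
{
	return pud_none(*pud);	/* nothing mapped, nothing to free */
}

int __weak pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
{
	return pmd_none(*pmd);
}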
Implement pud_free_pmd_page() and pmd_free_pte_page().
The implementation requires:
1) Clearing off the current pud/pmd entry
2) Invalidating the TLB, which could still hold the
previously valid entry
3) Freeing the unused next-level page tables
Signed-off-by: Chintan Pandya
---
arch/arm64
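A hedged sketch of the PMD-level arm64 implementation following the three steps above, assuming pmd_table(), pte_free_kernel() and the __flush_tlb_kernel_pgtable() hook from the previous patch; the PUD level is the same pattern one level up:

int pmd_free_pte_page(pmd_t *pmdp, unsigned long addr)
{
	pte_t *table;
	pmd_t pmd;

	pmd = READ_ONCE(*pmdp);

	/* Only a next-level table can be freed here. */
	if (!pmd_table(pmd)) {
		VM_WARN_ON(1);
		return 1;
	}

	table = pte_offset_kernel(pmdp, addr);
	pmd_clear(pmdp);			/* 1) clear the entry        */
	__flush_tlb_kernel_pgtable(addr);	/* 2) invalidate the TLB     */
	pte_free_kernel(NULL, table);		/* 3) free the old PTE table */
	return 1;
}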
wrong git tree, please drop us a note to help
improve the system]
url:
https://github.com/0day-ci/linux/commits/Chintan-Pandya/ioremap-Update-pgtable-free-interfaces-with-addr/20180329-133736
config: x86_64-rhel (attached as .config)
compiler: gcc-7 (Debian 7.3.0-1) 7.3.0
reproduce
d redundant TLB invalidation in one particular case
From V2->V3:
- Use the existing page table free interface to do arm64
specific things
From V1->V2:
- Rebased my patches on top of "[PATCH v2 1/2] mm/vmalloc:
Add interfaces to free unmapped page table"
- Honor
This commit 15122ee2c515a ("arm64: Enforce BBM for huge
IO/VMAP mappings") is a temporary work-around until the
issues with CONFIG_HAVE_ARCH_HUGE_VMAP get fixed.
Revert this change as we have fixes for the issue.
Signed-off-by: Chintan Pandya
---
arch/arm64/mm/mmu.c | 8 --
Implement pud_free_pmd_page() and pmd_free_pte_page().
The implementation requires:
1) Clearing off the current pud/pmd entry
2) Invalidating the TLB, which could still hold the
previously valid entry
3) Freeing the unused next-level page tables
Signed-off-by: Chintan Pandya
---
arch/arm64
Add an interface to invalidate intermediate page tables
from TLB for kernel.
Signed-off-by: Chintan Pandya
---
arch/arm64/include/asm/tlbflush.h | 6 ++
1 file changed, 6 insertions(+)
diff --git a/arch/arm64/include/asm/tlbflush.h
b/arch/arm64/include/asm/tlbflush.h
index 9e82dd7
pagetable entry even in map.
Why ? Read this,
https://patchwork.kernel.org/patch/10134581/
Pass 'addr' in these interfaces so that proper TLB ops
can be performed.
Signed-off-by: Chintan Pandya
---
arch/arm64/mm/mmu.c | 4 ++--
arch/x86/mm/pgtable.c | 8 +---
;V3:
- Use the existing page table free interface to do arm64
specific things
From V1->V2:
- Rebased my patches on top of "[PATCH v2 1/2] mm/vmalloc:
Add interfaces to free unmapped page table"
- Honored BBM for ARM64
Chintan Pandya (4):
ioremap: Update pgtable free inte
On 3/28/2018 5:20 PM, kbuild test robot wrote:
@725 if (!pmd_free_pte_page(&pmd[i]))
My bad ! Will fix this in v7
Chintan
I goofed up while making the patch file, so the enumeration is wrong.
I'll upload v7
On 3/28/2018 4:28 PM, Chintan Pandya wrote:
This series of patches is follow-up work on (and depends on)
Toshi Kani's patches "fix memory leak/
panic in ioremap huge pages".
This series of patch
pagetable entry even in map.
Why ? Read this,
https://patchwork.kernel.org/patch/10134581/
Pass 'addr' in these interfaces so that proper TLB ops
can be performed.
Signed-off-by: Chintan Pandya
---
From V4->V6:
- No change
arch/arm64/mm/mmu.c | 4 ++--
arch/x8
This commit 15122ee2c515a ("arm64: Enforce BBM for huge
IO/VMAP mappings") is a temporary work-around until the
issues with CONFIG_HAVE_ARCH_HUGE_VMAP get fixed.
Revert this change as we have fixes for the issue.
Signed-off-by: Chintan Pandya
---
From V1->V6:
- No change
Implement pud_free_pmd_page() and pmd_free_pte_page().
The implementation requires:
1) Clearing off the current pud/pmd entry
2) Invalidating the TLB, which could still hold the
previously valid entry
3) Freeing the unused next-level page tables
Signed-off-by: Chintan Pandya
---
From
Add an interface to invalidate intermediate page tables
from TLB for kernel.
Signed-off-by: Chintan Pandya
---
From V5->V6:
- No change
arch/arm64/include/asm/tlbflush.h | 6 ++
1 file changed, 6 insertions(+)
diff --git a/arch/arm64/include/asm/tlbflush.h
b/arch/arm64/include/
->V2:
- Rebased my patches on top of "[PATCH v2 1/2] mm/vmalloc:
Add interfaces to free unmapped page table"
- Honored BBM for ARM64
Chintan Pandya (4):
ioremap: Update pgtable free interfaces with addr
arm64: tlbflush: Introduce __flush_tlb_kernel_pgtable
arm64: Implement page table
On 3/27/2018 11:30 PM, Will Deacon wrote:
Hi Chintan,
Hi Will,
On Tue, Mar 27, 2018 at 06:54:59PM +0530, Chintan Pandya wrote:
Implement pud_free_pmd_page() and pmd_free_pte_page().
The implementation requires:
1) Freeing the unused next-level page tables
2) Clearing off the current
pagetable entry even in map.
Why ? Read this,
https://patchwork.kernel.org/patch/10134581/
Pass 'addr' in these interfaces so that proper TLB ops
can be performed.
Signed-off-by: Chintan Pandya
---
No change in v5.
arch/arm64/mm/mmu.c | 4 ++--
arch/x86/mm/pgtable.c | 6
Implement pud_free_pmd_page() and pmd_free_pte_page().
The implementation requires:
1) Freeing the unused next-level page tables
2) Clearing off the current pud/pmd entry
3) Invalidating the TLB, which could still hold the
previously valid entry
Signed-off-by: Chintan Pandya
---
V4->
This commit 15122ee2c515a ("arm64: Enforce BBM for huge
IO/VMAP mappings") is a temporary work-around until the
issues with CONFIG_HAVE_ARCH_HUGE_VMAP get fixed.
Revert this change as we have fixes for the issue.
Signed-off-by: Chintan Pandya
---
No change in v5
arch/arm64/mm
Add an interface to invalidate intermediate page tables
from TLB for kernel.
Signed-off-by: Chintan Pandya
---
Introduced in v5
arch/arm64/include/asm/tlbflush.h | 6 ++
1 file changed, 6 insertions(+)
diff --git a/arch/arm64/include/asm/tlbflush.h
b/arch/arm64/include/asm/tlbflush.h
This series of patches is follow-up work on (and depends on)
Toshi Kani's patches "fix memory leak/
panic in ioremap huge pages".
This series of patches is tested on a 4.9 kernel with a Cortex-A75
based SoC.
These patches can also go into '-stable' branch.
Chintan Pand
On 3/26/2018 3:25 PM, Mark Rutland wrote:
On Tue, Mar 20, 2018 at 05:15:13PM +0530, Chintan Pandya wrote:
+static int __pmd_free_pte_page(pmd_t *pmd, unsigned long addr, bool tlb_inv)
+{
+ pmd_t *table;
+
+ if (pmd_val(*pmd)) {
+ table = __va(pmd_val(*pmd
Implement pud_free_pmd_page() and pmd_free_pte_page().
The implementation requires:
1) Freeing the unused next-level page tables
2) Clearing off the current pud/pmd entry
3) Invalidating the TLB, which could still hold the
previously valid entry
Signed-off-by: Chintan Pandya
---
arch/arm64
pagetable entry even in map.
Why ? Read this,
https://patchwork.kernel.org/patch/10134581/
Pass 'addr' in these interfaces so that proper TLB ops
can be performed.
Signed-off-by: Chintan Pandya
---
arch/arm64/mm/mmu.c | 4 ++--
arch/x86/mm/pgtable.c | 6 --
include/a
This commit 15122ee2c515a ("arm64: Enforce BBM for huge
IO/VMAP mappings") is a temporary work-around until the
issues with CONFIG_HAVE_ARCH_HUGE_VMAP get fixed.
Revert this change as we have fixes for the issue.
Signed-off-by: Chintan Pandya
---
arch/arm64/mm/mmu.c | 8 --
This series of patches is follow-up work on (and depends on)
Toshi Kani's patches "fix memory leak/
panic in ioremap huge pages".
This series of patches is tested on a 4.9 kernel with a Cortex-A75
based SoC.
Chintan Pandya (3):
ioremap: Update pgtable free interfaces with addr
a
On 3/20/2018 12:59 AM, Kani, Toshi wrote:
On Mon, 2018-03-19 at 18:10 +0530, Chintan Pandya wrote:
Implement pud_free_pmd_page() and pmd_free_pte_page().
The implementation requires:
1) Freeing the unused next-level page tables
2) Clearing off the current pud/pmd entry
3) Invalidate
On 3/20/2018 12:31 AM, Kani, Toshi wrote:
On Mon, 2018-03-19 at 18:10 +0530, Chintan Pandya wrote:
This patch ("mm/vmalloc: Add interfaces to free unmapped
page table") adds following 2 interfaces to free the page
table in case we implement huge mapping.
pud_free_pmd_
This series of patches is follow-up work on (and depends on)
Toshi Kani's patches "fix memory leak/
panic in ioremap huge pages".
This series of patches is tested on a 4.9 kernel with a Cortex-A75
based SoC.
Chintan Pandya (3):
ioremap: Update pgtable free interfaces with addr
a
Implement pud_free_pmd_page() and pmd_free_pte_page().
The implementation requires:
1) Freeing the unused next-level page tables
2) Clearing off the current pud/pmd entry
3) Invalidating the TLB, which could still hold the
previously valid entry
Signed-off-by: Chintan Pandya
---
arch/arm64
This commit 15122ee2c515a ("arm64: Enforce BBM for huge
IO/VMAP mappings") is a temporary work-around until the
issues with CONFIG_HAVE_ARCH_HUGE_VMAP get fixed.
Revert this change as we have fixes for the issue.
Signed-off-by: Chintan Pandya
---
arch/arm64/mm/mmu.c | 8 --
pagetable entry even in map.
Why ? Read this,
https://patchwork.kernel.org/patch/10134581/
Pass 'addr' in these interfaces so that proper TLB ops
can be performed.
Signed-off-by: Chintan Pandya
---
arch/arm64/mm/mmu.c | 4 ++--
arch/x86/mm/pgtable.c | 4 ++--
include/a
On 3/15/2018 6:48 PM, Mark Rutland wrote:
On Thu, Mar 15, 2018 at 06:15:05PM +0530, Chintan Pandya wrote:
Implement pud_free_pmd_page() and pmd_free_pte_page().
Make sure that they are indeed page tables before
freeing them.
As mentioned on the prior patch, if the tables we
On 3/16/2018 8:20 PM, Kani, Toshi wrote:
On Fri, 2018-03-16 at 13:10 +0530, Chintan Pandya wrote:
On 3/15/2018 9:42 PM, Kani, Toshi wrote:
On Thu, 2018-03-15 at 18:15 +0530, Chintan Pandya wrote:
Huge mapping changes PMD/PUD which could have
valid previous entries. This requires proper
TLB
On 3/15/2018 9:42 PM, Kani, Toshi wrote:
On Thu, 2018-03-15 at 18:15 +0530, Chintan Pandya wrote:
Huge mapping changes PMD/PUD which could have
valid previous entries. This requires proper
TLB maintenance on some architectures, like
ARM64.
Implement BBM (break-before-make) safe TLB
On 3/15/2018 8:46 PM, Mark Rutland wrote:
On Thu, Mar 15, 2018 at 06:55:32PM +0530, Chintan Pandya wrote:
On 3/15/2018 6:43 PM, Mark Rutland wrote:
On Thu, Mar 15, 2018 at 06:15:04PM +0530, Chintan Pandya wrote:
Huge mapping changes PMD/PUD which could have
valid previous entries. This
On 3/15/2018 7:01 PM, Mark Rutland wrote:
On Thu, Mar 15, 2018 at 06:15:04PM +0530, Chintan Pandya wrote:
@@ -91,10 +93,15 @@ static inline int ioremap_pmd_range(pud_t *pud, unsigned
long addr,
if (ioremap_pmd_enabled() &&
((next - addr) ==
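For context, a hedged sketch of the caller under discussion as it looks once the free interface is wired in, i.e. the PMD step of lib/ioremap.c; the exact condition ordering is illustrative:

/*
 * Sketch: only install a huge mapping if the range is PMD-sized and
 * aligned, and any previously mapped next-level table under this
 * entry has been torn down first.
 */
static inline int ioremap_pmd_range(pud_t *pud, unsigned long addr,
		unsigned long end, phys_addr_t phys_addr, pgprot_t prot)
{
	pmd_t *pmd;
	unsigned long next;

	phys_addr -= addr;
	pmd = pmd_alloc(&init_mm, pud, addr);
	if (!pmd)
		return -ENOMEM;
	do {
		next = pmd_addr_end(addr, end);

		if (ioremap_pmd_enabled() &&
		    ((next - addr) == PMD_SIZE) &&
		    IS_ALIGNED(phys_addr + addr, PMD_SIZE) &&
		    pmd_free_pte_page(pmd, addr)) {
			if (pmd_set_huge(pmd, phys_addr + addr, prot))
				continue;
		}

		if (ioremap_pte_range(pmd, addr, next, phys_addr + addr, prot))
			return -ENOMEM;
	} while (pmd++, addr = next, addr != end);
	return 0;
}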
On 3/15/2018 6:43 PM, Mark Rutland wrote:
Hi,
As a general note, please wrap commit text to 72 characters.
On Thu, Mar 15, 2018 at 06:15:04PM +0530, Chintan Pandya wrote:
Huge mapping changes PMD/PUD which could have
valid previous entries. This requires proper
TLB maintenance on some
Implement pud_free_pmd_page() and pmd_free_pte_page().
Make sure that they are indeed page tables before
freeing them.
Signed-off-by: Chintan Pandya
---
arch/arm64/mm/mmu.c | 20 ++--
1 file changed, 18 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/mm/mmu.c b
This commit 15122ee2c515a ("arm64: Enforce BBM for huge
IO/VMAP mappings") is a temporary work-around until the
issues with CONFIG_HAVE_ARCH_HUGE_VMAP get fixed.
Revert this change as we have fixes for the issue.
Signed-off-by: Chintan Pandya
---
arch/arm64/mm/mmu.c | 8 --
ating
intermediate page_table entries could have
been optimized for a specific arch. That's the
case with ARM64, at least.
Signed-off-by: Chintan Pandya
---
lib/ioremap.c | 25 +++--
1 file changed, 19 insertions(+), 6 deletions(-)
diff --git a/lib/ioremap.c b/lib/ioremap.c
ind
ARM64 MMU implements invalidation of TLB for
intermediate page tables for a particular VA. This
may or may not be available for other architectures. So,
provide this API hook only for ARM64, for now.
Signed-off-by: Chintan Pandya
---
arch/arm64/include/asm/tlbflush.h | 5 +
include/asm-generic/tlb.h
This series of patches is follow-up work on (and depends on)
Toshi Kani's patches "fix memory leak/
panic in ioremap huge pages".
The ioremap code has been touched up to honor BBM, which is a
requirement for some architectures (like arm64), and works well
with all others.
Chintan Pandya (4):
as
On 3/14/2018 11:31 PM, Toshi Kani wrote:
Implement pud_free_pmd_page() and pmd_free_pte_page() on x86, which
clear a given pud/pmd entry and free up lower level page table(s).
Address range associated with the pud/pmd entry must have been purged
by INVLPG.
fixes: e61ce6ade404e ("mm: change ior
On 3/14/2018 8:08 PM, Kani, Toshi wrote:
On Wed, 2018-03-14 at 14:18 +0530, Chintan Pandya wrote:
Note: I was working on these patches for quite some time
and realized that Toshi Kani has shared some patches
addressing the same issue with the subject
"[PATCH 0/2] fix memory leak / pan
On 3/14/2018 4:16 PM, Marc Zyngier wrote:
On 14/03/18 08:48, Chintan Pandya wrote:
This commit 15122ee2c515a ("arm64: Enforce BBM for huge
IO/VMAP mappings") is a temporary work-around until the
issues with CONFIG_HAVE_ARCH_HUGE_VMAP get fixed.
Revert this change as we have fix
On 3/14/2018 4:23 PM, Mark Rutland wrote:
On Wed, Mar 14, 2018 at 02:18:24PM +0530, Chintan Pandya wrote:
While setting a huge page, we need to take care of the
previously existing next-level mapping. Since
we are going to overwrite the previous mapping, the
only reference to the next-level page table will
On 3/14/2018 4:18 PM, Mark Rutland wrote:
On Wed, Mar 14, 2018 at 02:18:23PM +0530, Chintan Pandya wrote:
If huge mappings are enabled, they can override
valid intermediate previous mappings. Some MMUs
can speculatively pre-fetch these intermediate
entries even after unmap. That's be
idate once we override pmd/pud with huge
mappings.
Signed-off-by: Chintan Pandya
---
lib/ioremap.c | 9 +++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/lib/ioremap.c b/lib/ioremap.c
index b808a39..c1e1341 100644
--- a/lib/ioremap.c
+++ b/lib/ioremap.c
@@ -13,6
This commit 15122ee2c515a ("arm64: Enforce BBM for huge
IO/VMAP mappings") is a temporary work-around until the
issues with CONFIG_HAVE_ARCH_HUGE_VMAP get fixed.
Revert this change as we have fixes for the issue.
Signed-off-by: Chintan Pandya
---
arch/arm64/mm/mmu.c | 8 --
.
Signed-off-by: Chintan Pandya
---
arch/arm64/mm/mmu.c | 9 -
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 8c704f1..c0df264 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -32,7 +32,7 @@
#include
#include
ARM64 MMU implements invalidation of TLB for
intermediate page tables for a particular VA. This
may or may not be available for other architectures. So,
provide this API hook only for ARM64, for now.
Signed-off-by: Chintan Pandya
---
arch/arm64/include/asm/tlbflush.h | 5 +
include/asm-generic/tlb.h
err("my tests will run now 1\n");
t = kthread_create(&io_remap_test, NULL, "ioremap-testing");
/*
* Do this so that we can run this thread on GOLD cores
*/
kthread_bind(t, 6);
wake_up_process(t);
return 0;
}
late_initcall(iorem
On 3/8/2018 11:42 PM, Christopher Lameter wrote:
On Thu, 8 Mar 2018, Chintan Pandya wrote:
In this case, object got freed later but 'age'
shows otherwise. This could be because, while
printing this info, we print allocation traces
first and free traces thereafter. In between,
while
printing this info, we print allocation traces
first and free traces thereafter. In between,
if we get scheduled out or jiffies increments,
(jiffies - t->when) could become meaningless.
Use a jitter-free reference to calculate the age.
Change-Id: I0846565807a4229748649bbecb1ffb743d71fcd8
Signed
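A hedged sketch of the jitter-free approach in mm/slub.c terms: take a single jiffies snapshot and use it for both the allocation and the free track, so the two printed ages share one reference; the exact print format is illustrative.

static void print_track(const char *s, struct track *t, unsigned long pr_time)
{
	if (!t->addr)
		return;

	pr_err("INFO: %s in %pS age=%lu cpu=%u pid=%d\n",
	       s, (void *)t->addr, pr_time - t->when, t->cpu, t->pid);
}

static void print_tracking(struct kmem_cache *s, void *object)
{
	unsigned long pr_time = jiffies;	/* one jitter-free reference */

	if (!(s->flags & SLAB_STORE_USER))
		return;

	print_track("Allocated", get_track(s, object, TRACK_ALLOC), pr_time);
	print_track("Freed", get_track(s, object, TRACK_FREE), pr_time);
}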
On 3/7/2018 11:52 PM, Matthew Wilcox wrote:
On Wed, Mar 07, 2018 at 12:13:56PM -0600, Christopher Lameter wrote:
On Wed, 7 Mar 2018, Chintan Pandya wrote:
In this case, object got freed later but 'age' shows
otherwise. This could be because, while printing
this info, we print
we get scheduled
out, (jiffies - t->when) could become meaningless.
So, simply print when the object was allocated/freed.
Signed-off-by: Chintan Pandya
---
mm/slub.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index e381728..b173f85 100644
-
On 2/15/2018 6:22 AM, frowand.l...@gmail.com wrote:
+static void of_populate_phandle_cache(void)
+{
+ unsigned long flags;
+ u32 cache_entries;
+ struct device_node *np;
+ u32 phandles = 0;
+
+ raw_spin_lock_irqsave(&devtree_lock, flags);
+
+ kfree(phandle_c
On 12/28/2017 4:54 PM, Hanjun Guo wrote:
From: Hanjun Guo
When we use iounmap() to free a 4K mapping, it just clears the PTEs
but leaves the P4D/PUD/PMD unchanged, and also does not free the memory
of the page tables.
This will cause issues on the ARM64 platform (not sure if other archs have
the same issu
On 2/15/2018 6:14 AM, frowand.l...@gmail.com wrote:
From: Frank Rowand
The initial implementation of the of_find_node_by_phandle() cache
allocates the cache using kcalloc(). Add an early boot allocation
of the cache so it will be usable during early boot. Switch over
to the kcalloc() based
increase by one, resulting in a range of 1..n
for n phandle values. This implementation should also provide a good
reduction of overhead for any range of phandle values that are mostly
in a monotonic range.
Performance measurements by Chintan Pandya
of several implementations of patches that are
On 2/12/2018 11:57 AM, frowand.l...@gmail.com wrote:
From: Frank Rowand
Create a cache of the nodes that contain a phandle property. Use this
cache to find the node for a given phandle value instead of scanning
the devicetree to find the node. If the phandle value is not found
in the cache,
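A hedged sketch of the lookup side of such a cache: consult a flat, power-of-two table indexed by the masked phandle first, and fall back to the linear devicetree scan on a miss. devtree_lock, for_each_of_allnodes() and of_node_get() are the existing OF primitives; the cache layout itself is illustrative.

static struct device_node **phandle_cache;	/* illustrative flat cache */
static u32 phandle_cache_mask;

struct device_node *of_find_node_by_phandle(phandle handle)
{
	struct device_node *np = NULL;
	unsigned long flags;

	if (!handle)
		return NULL;

	raw_spin_lock_irqsave(&devtree_lock, flags);

	if (phandle_cache)
		np = phandle_cache[handle & phandle_cache_mask];

	if (!np || np->phandle != handle) {
		/* Cache miss: fall back to the linear scan, then remember it. */
		for_each_of_allnodes(np)
			if (np->phandle == handle) {
				if (phandle_cache)
					phandle_cache[handle & phandle_cache_mask] = np;
				break;
			}
	}

	of_node_get(np);
	raw_spin_unlock_irqrestore(&devtree_lock, flags);
	return np;
}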
On 2/5/2018 5:53 PM, Chintan Pandya wrote:
My question was trying to determine whether the numbers reported above
are for a debug configuration or a production configuration.
My reported numbers are from debug configuration.
not a production configuration, I was requesting the numbers
My question was trying to determine whether the numbers reported above
are for a debug configuration or a production configuration.
My reported numbers are from debug configuration.
not a production configuration, I was requesting the numbers for a
production configuration.
I'm working on it
On 2/2/2018 12:40 AM, Frank Rowand wrote:
On 02/01/18 02:31, Chintan Pandya wrote:
Anyways, will fix this locally and share test results.
Thanks, I look forward to the results.
The setup this time was slightly different. So, I have taken all the numbers again.
Boot to shell time (in ms
On 2/2/2018 2:39 AM, Frank Rowand wrote:
On 02/01/18 06:24, Rob Herring wrote:
And so
far, no one has explained why a bigger cache got slower.
Yes, I still find that surprising.
I thought a bit about this and realized that increasing the cache size
should help improve the performance onl
Anyways, will fix this locally and share test results.
Thanks, I look forward to the results.
The setup this time was slightly different. So, I have taken all the numbers
again.
Boot to shell time (in ms): Experiment 2
[1] Base: 14.843805 14.784842 14.842338
[2] 64 size
On 2/1/2018 1:35 AM, frowand.l...@gmail.com wrote:
From: Frank Rowand
+
+static void of_populate_phandle_cache(void)
+{
+ unsigned long flags;
+ phandle max_phandle;
+ u32 nodes = 0;
+ struct device_node *np;
+
+ if (phandle_cache)
+ return;
+
+
(1)
Can you point me to the driver code that is invoking
the search?
There are many locations. A few of them are:
https://source.codeaurora.org/quic/la/kernel/msm-4.9/tree/drivers/of/irq.c?h=msm-4.9#n214
https://source.codeaurora.org/quic/la/kernel/msm-4.9/tree/drivers/irqchip/irq-gic-v3.c?h=msm
ne with it. But at present, I have no idea how I will achieve this. If
you can share any pointers on this, that would help!
Thanks,
Chintan Pandya
55 15.041847 --> 0
The previously reported 400 ms gain for [2] was from a different setup. These
tests and the new data are from my own debug setup. When we take any of these
patches to production, results might deviate accordingly.
Chin
ch.
Rasmus
This is certainly doable if the current approach is not welcomed due to
the addition of an hlist_node in device_node.
Chintan Pandya
boot is 400ms.
Signed-off-by: Chintan Pandya
---
drivers/of/base.c | 8 ++--
drivers/of/fdt.c | 18 ++
include/linux/of.h | 6 ++
3 files changed, 30 insertions(+), 2 deletions(-)
diff --git a/drivers/of/base.c b/drivers/of/base.c
index 26618ba..bfbfa99 100644
---
On 1/26/2018 1:24 AM, Frank Rowand wrote:
On 01/25/18 02:14, Chintan Pandya wrote:
of_find_node_by_phandle() takes a lot of time finding the
right node when your intended device is far to the right
in the fdt. The reason is that we search each device serially
in the fdt, starting from the left-most and moving right
On 1/25/2018 8:20 PM, Rob Herring wrote:
On Thu, Jan 25, 2018 at 4:14 AM, Chintan Pandya wrote:
of_find_node_by_phandle() takes a lot of time finding
Got some numbers for what is "a lot of time"?
On my SDM device, I see a total saving of 400 ms during boot time. For some
clients
who
.
Change-Id: I4a2bc7eff6de142e4f91a7bf474893a45e61c128
Signed-off-by: Chintan Pandya
---
drivers/of/base.c | 9 +++--
drivers/of/fdt.c | 18 ++
include/linux/of.h | 6 ++
3 files changed, 31 insertions(+), 2 deletions(-)
diff --git a/drivers/of/base.c b/drivers/of
cycle waste. Fix that by returning
SHRINK_STOP to the shrinker when LMK doesn't find any
more work to do. The deciding factor here is that no process is
found in the selected LMK bucket or memory conditions are
sane.
Signed-off-by: Chintan Pandya
---
drivers/staging/android/lowmemorykiller.
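A hedged sketch of the idea in shrinker terms, with the LMK policy helpers reduced to placeholders: the scan callback returns SHRINK_STOP as soon as there is nothing to do, so the core shrinker loop stops re-invoking it for this round.

static unsigned long lowmem_scan(struct shrinker *s, struct shrink_control *sc)
{
	struct task_struct *selected;
	unsigned long freed;
	int min_score_adj;

	/* Placeholder helpers standing in for the driver's real policy. */
	min_score_adj = lowmem_compute_threshold();
	if (min_score_adj == OOM_SCORE_ADJ_MAX + 1)
		return SHRINK_STOP;	/* memory conditions are sane */

	selected = lowmem_select_victim(min_score_adj, &freed);
	if (!selected)
		return SHRINK_STOP;	/* no process in the selected bucket */

	send_sig(SIGKILL, selected, 0);
	return freed;			/* pages we expect to get back */
}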
Please ignore this patch. It's entirely my bad that I merged commit messages
applicable to some very old kernel into this patch. I will update it shortly.
On 01/12/2015 09:38 PM, Chintan Pandya wrote:
The global shrinker will invoke lowmem_shrink in a loop.
The loop will be run (total_scan_pages/batch_size
cycle waste. Fix that by giving an
excessively large batch size so that lowmem_shrink will
be called just once, and in that same attempt LMK does the
needful.
Signed-off-by: Chintan Pandya
---
drivers/staging/android/lowmemorykiller.c | 5 -
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a
that it adds to
the memcg model and its synchronization requirements from VM hotpaths.
Hence, I'm inclined to not add charge moving to version 2 of memcg.
Are you saying charge migration is discouraged at runtime? It is difficult to
live with this limitation.
--
Chintan Pandya
me
task selection to be killed by OOM in the kernel rather than userspace
deciding by itself.
--
Chintan Pandya
used
process, it can get killed by in-cgroup OOM. To
avoid such scenarios, provide a convenient knob
by which we can forcefully trigger OOM and make
room for the upcoming process.
To trigger force OOM,
$ echo 1 > //memory.force_oom
Signed-off-by: Chintan Pandya
---
mm/memcontrol.c |
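A hedged sketch of what such a knob can look like as a cgroup control file; the handler name and the call into mem_cgroup_out_of_memory() are assumptions for illustration, and only the cftype wiring is the standard pattern.

static ssize_t mem_cgroup_force_oom_write(struct kernfs_open_file *of,
					  char *buf, size_t nbytes, loff_t off)
{
	struct mem_cgroup *memcg = mem_cgroup_from_css(of_css(of));
	unsigned long val;
	int err;

	err = kstrtoul(strstrip(buf), 10, &val);
	if (err)
		return err;

	if (val)	/* assumed helper: kick the in-cgroup OOM killer */
		mem_cgroup_out_of_memory(memcg, GFP_KERNEL, 0);

	return nbytes;
}

static struct cftype force_oom_files[] = {
	{
		.name = "force_oom",
		.write = mem_cgroup_force_oom_write,
	},
	{ },	/* terminator */
};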
loc(max_size * sizeof(*list));
+
+ for ( ; ; ) {
+ ret = read_block(buf, BUF_SIZE, fin);
+ if (ret < 0)
+ break;
+
+ add_list(buf, ret);
+ }
+
+ printf("loaded %d\n", list_size);
+
+ printf("sort
s from deep sleep.
This is exactly the preference we are looking for. But yes, it cannot be
generalized for all.
I know both RCU and some NOHZ_FULL muck already track when the system is
completely idle. This is yet another case of that.
Hugh
--
Chintan Pandya
I will publish a new patch with your comments on v4 addressed.
Thanks,
Hugh
[PATCH] ksm: avoid periodic wakeup while mergeable mms are quiet
Description yet to be written!
Reported-by: Chintan Pandya
Not-Signed-off-by: Hugh Dickins
>>> So looking at Hugh's test results I'm quite sure that