2015-02-03 18:05 GMT+09:00 Vlastimil Babka vba...@suse.cz:
On 02/03/2015 07:49 AM, Joonsoo Kim wrote:
On Mon, Jan 19, 2015 at 11:05:15AM +0100, Vlastimil Babka wrote:
Hello,
I don't have any elegant idea, but, have some humble opinion.
The point is that migrate scanner should scan whole
2015-02-02 18:59 GMT+09:00 Vlastimil Babka vba...@suse.cz:
On 02/02/2015 08:15 AM, Joonsoo Kim wrote:
This is a preparation step to use the page allocator's anti-fragmentation logic
in compaction. This patch just separates the fallback freepage checking part
from the fallback freepage management part
2015-02-02 19:20 GMT+09:00 Vlastimil Babka vba...@suse.cz:
On 02/02/2015 08:15 AM, Joonsoo Kim wrote:
Compaction has an anti-fragmentation algorithm: freepages
should be larger than pageblock order to finish the compaction if we don't
find any freepage in the requested migratetype's buddy list
2015-02-02 21:56 GMT+09:00 Zhang Yanfei zhangyanfei...@hotmail.com:
Hello Joonsoo,
At 2015/2/2 15:15, Joonsoo Kim wrote:
This is a preparation step to use the page allocator's anti-fragmentation logic
in compaction. This patch just separates the fallback freepage checking part
from the fallback freepage
On Mon, Jan 19, 2015 at 11:05:15AM +0100, Vlastimil Babka wrote:
Even after all the patches compaction received in the last several versions, it
turns out that its effectiveness degrades considerably as the system ages
after reboot. For example, see how success rates of stress-highalloc from
On Mon, Feb 02, 2015 at 09:51:01PM +0800, Zhang Yanfei wrote:
Hello,
At 2015/2/2 18:20, Vlastimil Babka wrote:
On 02/02/2015 08:15 AM, Joonsoo Kim wrote:
Compaction has an anti-fragmentation algorithm: freepages
should be larger than pageblock order to finish the compaction if we
2015-02-04 0:51 GMT+09:00 Vlastimil Babka vba...@suse.cz:
On 02/03/2015 04:00 PM, Joonsoo Kim wrote:
2015-02-03 18:05 GMT+09:00 Vlastimil Babka vba...@suse.cz:
On 02/03/2015 07:49 AM, Joonsoo Kim wrote:
On Mon, Jan 19, 2015 at 11:05:15AM +0100, Vlastimil Babka wrote:
Hello,
I don't have
, classzone_idx from tracepoint output
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
---
include/linux/compaction.h|3 ++
include/trace/events/compaction.h | 74 +
mm/compaction.c | 38 +--
3 files changed, 111
. This would improve readability. For example, it makes us
easily notice whether the current scanner tries to compact a previously
attempted pageblock or not.
Acked-by: Vlastimil Babka vba...@suse.cz
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
---
include/trace/events/compaction.h |2 +-
1 file changed, 1
deferring logic. This patch adds a new tracepoint
to understand the work of the deferring logic. This will also help to check
compaction success and failure.
Changes from v2: Remove reason part from tracepoint output
Changes from v3: Build fix for !CONFIG_COMPACTION
Signed-off-by: Joonsoo Kim iamjoonsoo
for !CONFIG_COMPACTION, !CONFIG_TRACEPOINTS
Acked-by: Vlastimil Babka vba...@suse.cz
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
---
include/linux/compaction.h|1 +
include/trace/events/compaction.h | 49 ++---
mm/compaction.c | 15
vba...@suse.cz
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
---
include/trace/events/compaction.h | 30 +++---
mm/compaction.c |9 ++---
2 files changed, 29 insertions(+), 10 deletions(-)
diff --git a/include/trace/events/compaction.h
b
On Fri, Jan 16, 2015 at 10:40:23PM -0800, Guenter Roeck wrote:
On Fri, Jan 16, 2015 at 03:50:38PM -0800, a...@linux-foundation.org wrote:
The mm-of-the-moment snapshot 2015-01-16-15-50 has been uploaded to
http://www.ozlabs.org/~akpm/mmotm/
mmotm-readme.txt says
README for
. And then,
virt_to_head_page() uses this optimized function to improve performance.
I saw a 1.8% win in a fast-path loop over kmem_cache_alloc/free
(14.063 ns -> 13.810 ns) if the target object is on a tail page.
Change from v2: Add some code comments
Acked-by: Christoph Lameter c...@linux.com
Signed-off-by: Joonsoo Kim
-by: Jesper Dangaard Brouer bro...@redhat.com
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
---
mm/slub.c | 35 +++
1 file changed, 23 insertions(+), 12 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index fe376fe..e7ed6f8 100644
--- a/mm/slub.c
+++ b/mm/slub.c
On Thu, Jan 15, 2015 at 05:16:46PM -0800, Andrew Morton wrote:
On Thu, 15 Jan 2015 16:40:33 +0900 Joonsoo Kim iamjoonsoo@lge.com wrote:
compound_head() is implemented with the assumption that there could be a
race condition when checking the tail flag. This assumption is only true
when we try
On Thu, Jan 15, 2015 at 05:16:27PM -0800, Andrew Morton wrote:
On Thu, 15 Jan 2015 16:41:10 +0900 Joonsoo Kim iamjoonsoo@lge.com wrote:
We now have a tracepoint for the begin event of compaction and it prints
the start position of both scanners, but the tracepoint for the end event of
compaction
On Wed, Jan 21, 2015 at 02:21:07PM -0800, Andrew Morton wrote:
On Wed, 21 Jan 2015 15:11:38 +0100 Michal Hocko mho...@suse.cz wrote:
On Wed 21-01-15 15:06:03, Krzysztof Kozłowski wrote:
[...]
Same here :) [1] . So actually only ARM seems affected (both armv7 and
armv8) because it is
On Wed, Jan 21, 2015 at 09:27:41PM -0500, Sasha Levin wrote:
On 01/21/2015 08:59 PM, Joonsoo Kim wrote:
On Tue, Jan 20, 2015 at 11:31:43PM -0500, Sasha Levin wrote:
Commit mm/slub: optimize alloc/free fastpath by removing preemption
on/off
has added access to percpu memory while
On Wed, Jan 21, 2015 at 12:38:35PM +0800, Huang Ying wrote:
FYI, we noticed the below changes on
git://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master
commit d2dc80750ee05ceb03c9b13b0531a782116d1ade (mm/slub: optimize
alloc/free fastpath by removing preemption on/off)
On Tue, Jan 20, 2015 at 11:31:43PM -0500, Sasha Levin wrote:
Commit mm/slub: optimize alloc/free fastpath by removing preemption on/off
has added access to percpu memory while the code is preemptible.
While those accesses are okay, this creates a huge amount of warnings from
the code that
On Tue, Jan 20, 2015 at 11:33:27PM +0800, Zhang Yanfei wrote:
Hello Minchan,
How are you?
At 2015/1/19 14:55, Minchan Kim wrote:
Hello,
On Sun, Jan 18, 2015 at 04:32:59PM +0800, Hui Zhu wrote:
From: Hui Zhu zhu...@xiaomi.com
The original of this patch [1] is part of Joonsoo's CMA
7cb9d1ed8a785df152cb8934e187031c8ebd1bb2 Mon Sep 17 00:00:00 2001
From: Joonsoo Kim iamjoonsoo@lge.com
Date: Thu, 22 Jan 2015 10:28:58 +0900
Subject: [PATCH] mm/debug_pagealloc: fix build failure on ppc and some other
archs
Kim Phillips reported the following build failure.
LD init/built-in.o
mm
]
(vfs_read+0x7c/0x100)
[    9.811950] [<c00cada4>] (vfs_read) from [<c00cae68>] (SyS_read+0x40/0x8c)
[    9.818810] [<c00cae68>] (SyS_read) from [<c000f160>]
(ret_fast_syscall+0x0/0x30)
I bisected this to:
d2dc80750ee05ceb03c9b13b0531a782116d1ade
Author: Joonsoo Kim iamjoonsoo@lge.com
Date
On Tue, Jan 20, 2015 at 12:38:32PM -0500, Sasha Levin wrote:
Provides a userspace interface to trigger a CMA allocation.
Usage:
echo [pages] > alloc
This would provide testing/fuzzing access to the CMA allocation paths.
Signed-off-by: Sasha Levin sasha.le...@oracle.com
---
about barrier() usage
Acked-by: Christoph Lameter c...@linux.com
Tested-by: Jesper Dangaard Brouer bro...@redhat.com
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
---
mm/slub.c | 35 +++
1 file changed, 23 insertions(+), 12 deletions(-)
diff --git a/mm/slub.c b
. And then,
virt_to_head_page() uses this optimized function to improve performance.
I saw a 1.8% win in a fast-path loop over kmem_cache_alloc/free
(14.063 ns -> 13.810 ns) if the target object is on a tail page.
Acked-by: Christoph Lameter c...@linux.com
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
---
include/linux/mm.h
, classzone_idx from tracepoint output
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
---
include/linux/compaction.h|3 ++
include/trace/events/compaction.h | 74 +
mm/compaction.c | 38 +--
3 files changed, 111
...@suse.cz
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
---
include/linux/compaction.h|2 ++
include/trace/events/compaction.h | 49 ++---
mm/compaction.c | 14 +--
3 files changed, 49 insertions(+), 16 deletions(-)
diff
. This would improve readability. For example, it makes us
easily notice whether the current scanner tries to compact a previously
attempted pageblock or not.
Acked-by: Vlastimil Babka vba...@suse.cz
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
---
include/trace/events/compaction.h |2 +-
1 file changed, 1
deferring logic. This patch adds a new tracepoint
to understand the work of the deferring logic. This will also help to check
compaction success and failure.
Changes from v2: Remove reason part from tracepoint output
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
---
include/linux/compaction.h| 65
vba...@suse.cz
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
---
include/trace/events/compaction.h | 30 +++---
mm/compaction.c |9 ++---
2 files changed, 29 insertions(+), 10 deletions(-)
diff --git a/include/trace/events/compaction.h
b
On Thu, Jan 22, 2015 at 10:12:43AM -0600, Christoph Lameter wrote:
On Thu, 22 Jan 2015, Joonsoo Kim wrote:
Just out of curiosity, new zone? Something like movable zone?
Yes, I named it as ZONE_CMA. Maybe I can send prototype of
implementation within 1 or 2 weeks.
Ugghh. I'd rather
On Wed, Jan 21, 2015 at 04:52:36PM +0300, Stefan Strogin wrote:
Sorry for such a long delay. Now I'll try to answer all the questions
and make a second version.
The original reason why we need a new debugging tool for CMA is
written by Minchan
On Thu, Jan 22, 2015 at 06:35:53PM +0300, Stefan Strogin wrote:
Hello Joonsoo,
On 30/12/14 07:38, Joonsoo Kim wrote:
On Fri, Dec 26, 2014 at 05:39:03PM +0300, Stefan I. Strogin wrote:
/proc/cmainfo contains a list of currently allocated CMA buffers for every
CMA area when
this situation, this patch adds some code to consider zone
overlapping before adding ZONE_CMA.
setup_zone_migrate_reserve() reserves some pages for a specific zone, so it
should consider zone overlap.
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
---
mm/page_alloc.c |4 ++--
1 file changed, 2 insertions
In the following patches, the total reserved page count is needed to
initialize ZONE_CMA. This is a preparation step for that.
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
---
include/linux/cma.h |9 +
mm/cma.c| 17 +
2 files changed, 26 insertions
/10/15/623
[4] https://lkml.org/lkml/2014/5/30/320
Joonsoo Kim (16):
mm/page_alloc: correct highmem memory statistics
mm/writeback: correct dirty page calculation for highmem
mm/highmem: make nr_free_highpages() handles all highmem zones by
itself
mm/vmstat: make node_page_state
this situation, this patch adds some code to consider zone
overlapping before adding ZONE_CMA.
The pfn range argument provided to test_pages_isolated() should be within
a single zone. If not, the zone lock doesn't work to protect the free state of
a buddy freepage.
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
---
mm
or not and add stat of the zone which can be treated
as highmem.
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
---
mm/page_alloc.c | 19 ++-
1 file changed, 14 insertions(+), 5 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 616a2c9..c784035 100644
--- a/mm
nr_free_highpages() manually adds statistics for each highmem zone
and returns the total value for them. Whenever we add a new highmem zone,
we need to revisit this function, which is really troublesome. Make
it handle all highmem zones by itself.
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
existing hooks for MIGRATE_CMA, and many
problems such as the watermark check and reserved page utilization are
resolved by themselves.
This patch only adds the basic infrastructure of ZONE_CMA. In the following
patch, ZONE_CMA is actually populated and used.
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
---
arch
-by: Joonsoo Kim iamjoonsoo@lge.com
---
include/linux/gfp.h | 10 --
include/linux/mm.h |1 +
mm/cma.c| 23 ---
mm/page_alloc.c | 42 +++---
4 files changed, 64 insertions(+), 12 deletions(-)
diff --git a/include
We can use is_highmem() at every callsite of is_highmem_idx(), so
is_highmem_idx() isn't really needed. And, if we introduce a new zone
for CMA, we would need to modify it to adapt to the new zone, which is
inconvenient. Therefore, this patch removes it before introducing
a new zone.
Signed-off-by: Joonsoo
Reserved pages for CMA could be in different zones. To figure out
the memory map correctly, the per-zone number of pages stolen for CMA
is needed.
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
---
mm/cma.c | 28 +++-
1 file changed, 27 insertions(+), 1 deletion
up these steps for preparation of ZONE_CMA.
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
---
mm/page_alloc.c | 15 ---
1 file changed, 8 insertions(+), 7 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 416e036..6030525f 100644
--- a/mm/page_alloc.c
+++ b/mm
-by: Sasha Levin sasha.le...@oracle.com
Acked-by: Joonsoo Kim iamjoonsoo@lge.com
Thanks.
--
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ
On Tue, Feb 10, 2015 at 01:48:06PM -0600, Christoph Lameter wrote:
The major portions are there but there is no support yet for
directly allocating per cpu objects. There could also be more
sophisticated code to exploit the batch freeing.
Signed-off-by: Christoph Lameter c...@linux.com
On Wed, Feb 11, 2015 at 12:18:07PM -0800, David Rientjes wrote:
On Wed, 11 Feb 2015, Christoph Lameter wrote:
This patch is referencing functions that don't exist and can do so since
it's not compiled, but I think this belongs in the next patch. I also
think that this particular
On Fri, Feb 13, 2015 at 01:15:40AM +0300, Stefan Strogin wrote:
Hi all.
Sorry for the long delay. Here is the second attempt to add some facility
for debugging CMA (the first one was mm: cma: add /proc/cmainfo [1]).
This patch set is based on v3.19 and Sasha Levin's patch set
mm: cma:
On Fri, Feb 13, 2015 at 01:15:41AM +0300, Stefan Strogin wrote:
static int cma_debugfs_get(void *data, u64 *val)
{
unsigned long *p = data;
@@ -125,6 +221,52 @@ static int cma_alloc_write(void *data, u64 val)
DEFINE_SIMPLE_ATTRIBUTE(cma_alloc_fops, NULL, cma_alloc_write, "%llu\n");
On Fri, Feb 13, 2015 at 01:15:41AM +0300, Stefan Strogin wrote:
/sys/kernel/debug/cma/cma-N/buffers contains a list of currently allocated
CMA buffers for CMA region N when CONFIG_CMA_DEBUGFS is enabled.
Format is:
base_phys_addr - end_phys_addr (size kB), allocated by PID (comm)
stack
)
+ return -ENOMEM;
+
+ p = cma_alloc(cma, count, CONFIG_CMA_ALIGNMENT);
Alignment is resurrected. Please change it to 0.
Other than that,
Acked-by: Joonsoo Kim iamjoonsoo@lge.com
Thanks.
-by: Joonsoo Kim iamjoonsoo@lge.com
On Fri, Feb 13, 2015 at 01:15:42AM +0300, Stefan Strogin wrote:
From: Dmitry Safonov d.safo...@partner.samsung.com
Here are two functions that provide interface to compute/get used size
and size of biggest free chunk in cma region.
Add that information to debugfs.
Signed-off-by: Dmitry
On Fri, Feb 13, 2015 at 09:47:59AM -0600, Christoph Lameter wrote:
On Fri, 13 Feb 2015, Joonsoo Kim wrote:
I also think that this implementation is SLUB-specific. For example,
in the SLAB case, it is always better to access the local CPU cache first
rather than the page allocator, since SLAB doesn't use list
f7 74 1a
[ 33.608008] RIP [811dcf60] mem_cgroup_low+0x40/0x90
[ 33.608008] RSP 88000cb17a88
[ 33.608008] CR2: 00b0
[ 33.608008] BUG: unable to handle kernel
[ 33.653499] ---[ end trace e264a32717ffda51 ]---
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
...@gmail.com
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
---
mm/nommu.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/mm/nommu.c b/mm/nommu.c
index 7296360..3e67e75 100644
--- a/mm/nommu.c
+++ b/mm/nommu.c
@@ -1213,11 +1213,9 @@ static int do_mmap_private(struct
On Fri, Feb 13, 2015 at 09:49:24AM -0600, Christoph Lameter wrote:
On Fri, 13 Feb 2015, Joonsoo Kim wrote:
+ *p++ = freelist;
+ freelist = get_freepointer(s, freelist);
+ allocated++;
+ }
Fetching all objects while holding
On Sat, Feb 14, 2015 at 02:02:16PM +0900, Gioh Kim wrote:
On 2015-02-12 4:32 PM, Joonsoo Kim wrote:
Until now, reserved pages for CMA are managed together with normal
pages in the same zone. This approach has numerous problems and fixing
them isn't easy. To fix this situation, ZONE_CMA
On Fri, Feb 13, 2015 at 03:40:08PM +0900, Gioh Kim wrote:
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index c8778f7..883e78d 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -210,8 +210,8 @@ int undo_isolate_page_range(unsigned long start_pfn,
unsigned long
max_used_pages is defined as atomic_long_t, so we need to use
unsigned long to hold a temporary value for it rather than int,
which is smaller than unsigned long on 64-bit systems.
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
---
drivers/block/zram/zram_drv.c | 2 +-
1 file changed, 1 insertion
On Fri, Jan 23, 2015 at 03:37:28PM -0600, Christoph Lameter wrote:
This patch adds the basic infrastructure for alloc / free operations
on pointer arrays. It includes a fallback function that can perform
the array operations using the single alloc and free that every
slab allocator performs.
On Thu, Jan 22, 2015 at 10:45:51AM +0900, Joonsoo Kim wrote:
On Wed, Jan 21, 2015 at 09:57:59PM +0900, Akinobu Mita wrote:
2015-01-21 9:07 GMT+09:00 Andrew Morton a...@linux-foundation.org:
On Tue, 20 Jan 2015 15:01:50 -0800 j...@joshtriplett.org wrote:
On Tue, Jan 20, 2015 at 02:02
On Tue, Jan 27, 2015 at 08:35:17AM +0100, Vlastimil Babka wrote:
On 12/10/2014 07:38 AM, Joonsoo Kim wrote:
After your patch is merged, I will resubmit these on top of it.
Hi Joonsoo,
my page stealing patches are now in -mm so are you planning to resubmit this?
At
least patch 1
2015-01-27 17:23 GMT+09:00 Vladimir Davydov vdavy...@parallels.com:
Hi Joonsoo,
On Tue, Jan 27, 2015 at 05:00:09PM +0900, Joonsoo Kim wrote:
On Mon, Jan 26, 2015 at 03:55:29PM +0300, Vladimir Davydov wrote:
@@ -3381,6 +3390,15 @@ void __kmem_cache_shrink(struct kmem_cache *s
On Mon, Jan 26, 2015 at 03:55:29PM +0300, Vladimir Davydov wrote:
To speed up further allocations SLUB may store empty slabs in per
cpu/node partial lists instead of freeing them immediately. This
prevents per memcg caches destruction, because kmem caches created for a
memory cgroup are only
On Mon, Jan 26, 2015 at 09:26:05AM -0500, Sasha Levin wrote:
Provides a userspace interface to trigger a CMA release.
Usage:
echo [pages] > free
This would provide testing/fuzzing access to the CMA release paths.
Signed-off-by: Sasha Levin sasha.le...@oracle.com
---
,
-				     &page_ext->trace, 0);
+	trace.nr_entries = page_ext->nr_entries;
+	trace.entries = &page_ext->trace_entries[0];
+
+	ret += snprint_stack_trace(kbuf + ret, count - ret, &trace, 0);
	if (ret >= count)
goto err;
Acked-by: Joonsoo Kim iamjoonsoo@lge.com
On Mon, Jan 26, 2015 at 09:26:04AM -0500, Sasha Levin wrote:
Provides a userspace interface to trigger a CMA allocation.
Usage:
echo [pages] > alloc
This would provide testing/fuzzing access to the CMA allocation paths.
Signed-off-by: Sasha Levin sasha.le...@oracle.com
---
non-STD_MMU_64 builds to use the generic __kernel_map_pages().
I'd be happy to take this through the powerpc tree for 3.20, but for this:
depends on:
From: Joonsoo Kim iamjoonsoo@lge.com
Date: Thu, 22 Jan 2015 10:28:58 +0900
Subject: [PATCH] mm/debug_pagealloc: fix build failure on ppc
2015-01-28 1:57 GMT+09:00 Christoph Lameter c...@linux.com:
On Tue, 27 Jan 2015, Joonsoo Kim wrote:
IMHO, exposing these options is not a good idea. It's really
implementation specific. And, this flag won't show consistent performance
according to specific slab implementation. For example
2015-01-28 0:08 GMT+09:00 Sasha Levin sasha.le...@oracle.com:
On 01/27/2015 03:06 AM, Joonsoo Kim wrote:
On Mon, Jan 26, 2015 at 09:26:04AM -0500, Sasha Levin wrote:
Provides a userspace interface to trigger a CMA allocation.
Usage:
echo [pages] > alloc
This would provide testing
2015-01-28 5:13 GMT+09:00 Sasha Levin sasha.le...@oracle.com:
On 01/27/2015 01:25 PM, Sasha Levin wrote:
On 01/27/2015 03:10 AM, Joonsoo Kim wrote:
+	if (mem->n <= count) {
+		cma_release(cma, mem->p, mem->n);
+		count -= mem->n
On Tue, Jan 27, 2015 at 09:22:57PM -0500, Sasha Levin wrote:
Provides a userspace interface to trigger a CMA release.
Usage:
echo [pages] > free
This would provide testing/fuzzing access to the CMA release paths.
Signed-off-by: Sasha Levin sasha.le...@oracle.com
---
On Wed, Jan 28, 2015 at 09:30:56AM -0600, Christoph Lameter wrote:
On Wed, 28 Jan 2015, Joonsoo Kim wrote:
GFP_SLAB_ARRAY new is best for large quantities in either allocator since
SLAB also has to construct local metadata structures.
In case of SLAB, there is just a little more work
On Thu, Jan 22, 2015 at 09:48:25PM -0500, Sasha Levin wrote:
On 01/22/2015 03:26 AM, Joonsoo Kim wrote:
On Tue, Jan 20, 2015 at 12:38:32PM -0500, Sasha Levin wrote:
Provides a userspace interface to trigger a CMA allocation.
Usage:
echo [pages] > alloc
This would provide testing
On Sat, Jan 31, 2015 at 08:38:10PM +0800, Zhang Yanfei wrote:
At 2015/1/30 20:34, Joonsoo Kim wrote:
From: Joonsoo iamjoonsoo@lge.com
This is a preparation step to use the page allocator's anti-fragmentation logic
in compaction. This patch just separates the steal decision part from the actual
On Sat, Jan 31, 2015 at 11:58:03PM +0800, Zhang Yanfei wrote:
At 2015/1/30 20:34, Joonsoo Kim wrote:
From: Joonsoo iamjoonsoo@lge.com
Compaction has an anti-fragmentation algorithm: freepages
should be larger than pageblock order to finish the compaction if we don't
find any
On Mon, Feb 02, 2015 at 04:15:46PM +0900, Joonsoo Kim wrote:
freepages with MIGRATE_CMA can be used only for MIGRATE_MOVABLE and
they should not be expanded into other migratetypes' buddy lists,
to protect them from unmovable/reclaimable allocation. Implementing
these requirements
On Fri, Jan 30, 2015 at 03:27:50PM +0100, Vlastimil Babka wrote:
On 01/30/2015 01:34 PM, Joonsoo Kim wrote:
From: Joonsoo iamjoonsoo@lge.com
This is a preparation step to use the page allocator's anti-fragmentation logic
in compaction. This patch just separates the steal decision part from
On Fri, Jan 30, 2015 at 03:43:27PM +0100, Vlastimil Babka wrote:
On 01/30/2015 01:34 PM, Joonsoo Kim wrote:
From: Joonsoo iamjoonsoo@lge.com
Compaction has an anti-fragmentation algorithm: freepages
should be larger than pageblock order to finish the compaction if we don't
migratetype and
increase the chance of fragmentation.
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
---
mm/page_alloc.c | 36 +++-
1 file changed, 19 insertions(+), 17 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 8d52ab1..e64b260 100644
--- a/mm
: 42.20
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
---
mm/compaction.c | 14 --
mm/internal.h | 2 ++
mm/page_alloc.c | 12
3 files changed, 22 insertions(+), 6 deletions(-)
diff --git a/mm/compaction.c b/mm/compaction.c
index 782772d..d40c426 100644
--- a/mm
This is a preparation step to use the page allocator's anti-fragmentation logic
in compaction. This patch just separates the fallback freepage checking part
from the fallback freepage management part. Therefore, there is no functional
change.
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
---
mm
On Mon, Jan 05, 2015 at 09:25:02PM -0500, Steven Rostedt wrote:
On Tue, 6 Jan 2015 10:32:47 +0900
Joonsoo Kim iamjoonsoo@lge.com wrote:
+++ b/mm/slub.c
@@ -2398,13 +2398,15 @@ redo:
* reading from one cpu area. That does not matter as long
* as we end up
On Mon, Jan 05, 2015 at 07:03:12PM -0800, Davidlohr Bueso wrote:
On Mon, 2015-01-05 at 10:36 +0900, Joonsoo Kim wrote:
-	preempt_disable();
-	c = this_cpu_ptr(s->cpu_slab);
+	do {
+		tid = this_cpu_read(s->cpu_slab->tid);
+		c = this_cpu_ptr(s->cpu_slab
On Mon, Jan 05, 2015 at 08:01:45PM -0800, Gregory Fong wrote:
+linux-mm and linux-kernel (not sure how those got removed from cc,
sorry about that)
On Mon, Jan 5, 2015 at 7:58 PM, Gregory Fong gregory.0...@gmail.com wrote:
Hi Joonsoo,
On Wed, May 28, 2014 at 12:04 AM, Joonsoo Kim
in !CONFIG_PREEMPT,
roughly 0.3%. Implementing each case separately would help performance,
but, since it's so marginal, I didn't do that. This also helps
maintenance since we have the same code for all cases.
Tested-by: Jesper Dangaard Brouer bro...@redhat.com
Signed-off-by: Joonsoo Kim
Hello,
On Mon, Jan 05, 2015 at 06:21:39PM +0100, Andreas Mohr wrote:
Hi,
Joonsoo Kim wrote:
+ * Calculate the next globally unique transaction for disambiguiation
disambiguation
Okay.
+	ac->tid = next_tid(ac->tid);
(and all others)
object oriented:
array_cache_next_tid(ac
On Mon, Jan 05, 2015 at 09:28:14AM -0600, Christoph Lameter wrote:
On Mon, 5 Jan 2015, Joonsoo Kim wrote:
index 449fc6b..54656f0 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -168,6 +168,41 @@ typedef unsigned short freelist_idx_t;
#define SLAB_OBJ_MAX_NUM ((1 sizeof(freelist_idx_t
odd behavior or problems in compaction's
internal logic.
And, mode is added to both begin/end tracepoint output, since,
depending on the mode, compaction behavior is quite different.
And, lastly, the status format is changed to a string rather than a
status number for readability.
Signed-off-by: Joonsoo Kim
. This would improve readability. For example, it makes us
easily notice whether the current scanner tries to compact a previously
attempted pageblock or not.
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
---
include/trace/events/compaction.h |2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git
deferring logic. This patch adds a new tracepoint
to understand the work of the deferring logic. This will also help to check
compaction success and failure.
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
---
include/linux/compaction.h| 65 +++--
include/trace/events
It is not well analyzed when/why compaction starts/finishes or not. With
these new tracepoints, we can learn much more about the start/finish reasons
of compaction. I could find the following bug with these tracepoints.
http://www.spinics.net/lists/linux-mm/msg81582.html
Signed-off-by: Joonsoo Kim
It'd be useful to know the current range where compaction works, for detailed
analysis. With it, we can know the pageblock where we actually scan and
isolate, and how many pages we try in that pageblock, and can roughly guess
why it doesn't become a freepage of pageblock order.
On Fri, Jan 09, 2015 at 10:57:10AM +, Mel Gorman wrote:
On Thu, Jan 08, 2015 at 09:46:27AM +0100, Vlastimil Babka wrote:
On 01/08/2015 09:18 AM, Joonsoo Kim wrote:
On Tue, Jan 06, 2015 at 10:05:39AM +0100, Vlastimil Babka wrote:
On 12/03/2014 08:52 AM, Joonsoo Kim wrote:
It'd
On Thu, Jan 08, 2015 at 09:46:27AM +0100, Vlastimil Babka wrote:
On 01/08/2015 09:18 AM, Joonsoo Kim wrote:
On Tue, Jan 06, 2015 at 10:05:39AM +0100, Vlastimil Babka wrote:
On 12/03/2014 08:52 AM, Joonsoo Kim wrote:
It'd be useful to know where both scanners start. And, it also
On Mon, Jan 12, 2015 at 05:35:47PM +0100, Vlastimil Babka wrote:
On 01/12/2015 09:21 AM, Joonsoo Kim wrote:
compaction deferring logic is a heavy hammer that blocks the way to
compaction. It doesn't consider the overall system state, so it
could falsely prevent the user from doing compaction
On Mon, Jan 12, 2015 at 04:53:53PM +0100, Vlastimil Babka wrote:
On 01/12/2015 09:21 AM, Joonsoo Kim wrote:
It is not well analyzed when/why compaction starts/finishes or not. With
these new tracepoints, we can learn much more about the start/finish reasons
of compaction. I can find the following