On 09/16/2013 10:47 AM, Fengguang Wu wrote:
Greetings,
I got the below dmesg and the first bad commit is
commit 7a8010cd36273ff5f6fea5201ef9232f30cebbd9
Author: Vlastimil Babka vba...@suse.cz
Date: Wed Sep 11 14:22:35 2013 -0700
mm: munlock: manual pte walk in fast path instead
On 09/17/2013 03:29 PM, Fengguang Wu wrote:
Hi Vlastimil,
Also, some of the failures during bisect were not due to this bug, but a
WARNING for
list_add corruption which hopefully is not related to munlock. While it is
probably a far stretch,
some kind of memory corruption could also
: Mel Gorman mgor...@suse.de
Cc: Michel Lespinasse wal...@google.com
Cc: Hugh Dickins hu...@google.com
Cc: Rik van Riel r...@redhat.com
Cc: Johannes Weiner han...@cmpxchg.org
Cc: Michal Hocko mho...@suse.cz
Cc: Vlastimil Babka vba...@suse.cz
Signed-off-by: Vlastimil Babka vba...@suse.cz
---
mm
On 10/07/2013 10:21 PM, Robert C Jennings wrote:
Introduce use of the unused SPLICE_F_MOVE flag for vmsplice to zap
pages.
When vmsplice is called with flags (SPLICE_F_GIFT | SPLICE_F_MOVE) the
writer's gift'ed pages would be zapped. This patch supports further work
to move vmsplice'd
On 10/07/2013 10:21 PM, Robert C Jennings wrote:
From: Matt Helsley matt.hels...@gmail.com
It is sometimes useful to move anonymous pages over a pipe rather than
save/swap them. Check the SPLICE_F_GIFT and SPLICE_F_MOVE flags to see
if userspace would like to move such pages. This differs
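For context, below is a minimal userspace sketch of how a writer might call this proposed interface. The zap-on-move semantics is what the patch series proposes; on an unpatched kernel the flag combination is accepted but the pages are copied as usual, and the buffer and pipe setup here are purely illustrative.

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/uio.h>
#include <unistd.h>

int main(void)
{
	int pipefd[2];
	size_t len = 4096;	/* one page, for illustration */
	char *buf;

	/* Page-aligned anonymous memory is what the series targets. */
	if (posix_memalign((void **)&buf, 4096, len) || pipe(pipefd))
		return 1;
	memset(buf, 'x', len);

	struct iovec iov = { .iov_base = buf, .iov_len = len };

	/*
	 * SPLICE_F_GIFT marks the pages as donated to the pipe; with the
	 * proposed patch, SPLICE_F_MOVE would additionally zap them from
	 * the writer's address space instead of leaving a copy behind.
	 */
	ssize_t n = vmsplice(pipefd[1], &iov, 1,
			     SPLICE_F_GIFT | SPLICE_F_MOVE);
	if (n < 0)
		perror("vmsplice");
	else
		printf("spliced %zd bytes\n", n);
	return 0;
}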
On 10/17/2013 03:48 PM, Robert Jennings wrote:
* Vlastimil Babka (vba...@suse.cz) wrote:
On 10/07/2013 10:21 PM, Robert C Jennings wrote:
Introduce use of the unused SPLICE_F_MOVE flag for vmsplice to zap
pages.
When vmsplice is called with flags (SPLICE_F_GIFT | SPLICE_F_MOVE) the
writer's
On 10/10/2013 11:46 PM, Johannes Weiner wrote:
Hi everyone,
here is an update to the cache sizing patches for 3.13.
Changes in this revision
o Drop frequency synchronization between refaulted and demoted pages
and just straight up activate refaulting pages whose access
, as skipping a block due to being
!MIGRATE_MOVABLE is done soon after skipping a block marked to be skipped, both
without locking.
Cc: Mel Gorman mgor...@suse.de
Cc: Rik van Riel r...@redhat.com
Signed-off-by: Vlastimil Babka vba...@suse.cz
---
mm/compaction.c | 5 -
1 file changed, 4 insertions
with
success rates. One of the further patches I'm considering for future
versions is to ignore or clear pageblock skip information for sync
compaction. But in that case, THP clearly should be changed so that it does
not fall back to sync compaction.
Vlastimil Babka (5):
mm
in stress-highalloc benchmark show that the regression by commit
81c0a2bb in phase 3 no longer occurs, and phase 1 and 2 allocation success rates
are significantly improved.
Cc: Mel Gorman mgor...@suse.de
Cc: Rik van Riel r...@redhat.com
Signed-off-by: Vlastimil Babka vba...@suse.cz
---
mm/compaction.c
Cc: Rik van Riel r...@redhat.com
Signed-off-by: Vlastimil Babka vba...@suse.cz
---
mm/compaction.c | 4
1 file changed, 4 insertions(+)
diff --git a/mm/compaction.c b/mm/compaction.c
index f481193..2c2cc4a 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -843,6 +843,10 @@ static int
.
Cc: Mel Gorman mgor...@suse.de
Cc: Rik van Riel r...@redhat.com
Signed-off-by: Vlastimil Babka vba...@suse.cz
---
mm/compaction.c | 16
1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/mm/compaction.c b/mm/compaction.c
index 7c0073e..6a2f0c2 100644
--- a/mm
van Riel r...@redhat.com
Signed-off-by: Vlastimil Babka vba...@suse.cz
---
include/linux/compaction.h | 12
mm/compaction.c | 9 -
mm/page_alloc.c | 5 +
3 files changed, 17 insertions(+), 9 deletions(-)
diff --git a/include/linux/compaction.h b
On 11/26/2013 11:45 AM, Mel Gorman wrote:
On Mon, Nov 25, 2013 at 03:26:08PM +0100, Vlastimil Babka wrote:
Compaction of a zone is finished when the migrate scanner (which begins at
the
zone's lowest pfn) meets the free page scanner (which begins at the zone's
highest pfn). This is detected
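As an aside, the termination condition being described can be modelled in a few self-contained lines; the field names below mirror cc->migrate_pfn and cc->free_pfn from mm/compaction.c, but the struct and the numbers are invented for illustration.

#include <stdbool.h>
#include <stdio.h>

struct zone_model {
	unsigned long migrate_pfn;	/* migrate scanner, moves up from zone start */
	unsigned long free_pfn;		/* free scanner, moves down from zone end */
};

/* Compaction of the zone is finished once the scanners cross. */
static bool compaction_finished(const struct zone_model *z)
{
	return z->free_pfn <= z->migrate_pfn;
}

int main(void)
{
	struct zone_model z = { .migrate_pfn = 0x1000, .free_pfn = 0x2000 };
	const unsigned long pageblock_nr_pages = 512;	/* 2MB blocks with 4K pages */

	while (!compaction_finished(&z)) {
		z.migrate_pfn += pageblock_nr_pages;	/* scan up */
		z.free_pfn -= pageblock_nr_pages;	/* scan down */
	}
	printf("scanners met around pfn 0x%lx\n", z.migrate_pfn);
	return 0;
}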
On 12/04/2013 03:30 PM, Mel Gorman wrote:
This patch adds two tracepoints for compaction begin and end of a zone. Using
this it is possible to calculate how much time a workload is spending
within compaction and potentially debug problems related to cached pfns
for scanning.
I guess for
On 12/05/2013 10:05 AM, Mel Gorman wrote:
On Wed, Dec 04, 2013 at 03:51:57PM +0100, Vlastimil Babka wrote:
On 12/04/2013 03:30 PM, Mel Gorman wrote:
This patch adds two tracepoints for compaction begin and end of a zone. Using
this it is possible to calculate how much time a workload
and potentially debug problems related to cached pfns
for scanning. In combination with the direct reclaim and slab trace points
it should be possible to estimate most allocation-related overhead for
a workload.
Signed-off-by: Mel Gorman mgor...@suse.de
Acked-by: Vlastimil Babka vba...@suse.cz
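For anyone wanting to try the measurement described above, here is a small userspace sketch that enables the two compaction tracepoints and streams the trace output; the tracefs mount point and the exact event names (events/compaction/mm_compaction_begin and .../mm_compaction_end) are assumptions based on the patch description, so adjust them for your system. Root and a mounted debugfs/tracefs are required.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Assumed tracefs location and event names; verify on your system. */
#define TRACE_DIR "/sys/kernel/debug/tracing"

static int enable_event(const char *event)
{
	char path[256];
	int fd;

	snprintf(path, sizeof(path),
		 TRACE_DIR "/events/compaction/%s/enable", event);
	fd = open(path, O_WRONLY);
	if (fd < 0)
		return -1;
	if (write(fd, "1", 1) != 1) {
		close(fd);
		return -1;
	}
	return close(fd);
}

int main(void)
{
	char buf[4096];
	ssize_t n;
	int fd;

	if (enable_event("mm_compaction_begin") ||
	    enable_event("mm_compaction_end")) {
		perror("enable tracepoints");
		return 1;
	}

	/* Timestamps on the begin/end pairs give time spent in compaction. */
	fd = open(TRACE_DIR "/trace_pipe", O_RDONLY);
	if (fd < 0) {
		perror("open trace_pipe");
		return 1;
	}
	while ((n = read(fd, buf, sizeof(buf))) > 0)
		fwrite(buf, 1, n, stdout);
	close(fd);
	return 0;
}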
On 03/15/2014 04:06 AM, Sasha Levin wrote:
On 03/14/2014 07:55 PM, Sasha Levin wrote:
On 12/17/2013 08:00 AM, Vlastimil Babka wrote:
From: Vlastimil Babka vba...@suse.cz
Date: Fri, 13 Dec 2013 14:25:21 +0100
Subject: [PATCH 1/3] mm: munlock: fix a bug where THP tail page is
encountered
On 17.3.2014 22:08, Sasha Levin wrote:
On 03/17/2014 08:38 AM, Vlastimil Babka wrote:
On 03/15/2014 04:06 AM, Sasha Levin wrote:
On 03/14/2014 07:55 PM, Sasha Levin wrote:
On 12/17/2013 08:00 AM, Vlastimil Babka wrote:
From: Vlastimil Babka vba...@suse.cz
Date: Fri, 13 Dec 2013 14:25:21 +0100
On 17.3.2014 23:58, Sasha Levin wrote:
On 03/17/2014 06:20 PM, Vlastimil Babka wrote:
On 17.3.2014 22:08, Sasha Levin wrote:
On 03/17/2014 08:38 AM, Vlastimil Babka wrote:
On 03/15/2014 04:06 AM, Sasha Levin wrote:
On 03/14/2014 07:55 PM, Sasha Levin wrote:
On 12/17/2013 08:00 AM, Vlastimil
On 03/17/2014 11:58 PM, Sasha Levin wrote:
On 03/17/2014 06:20 PM, Vlastimil Babka wrote:
On 17.3.2014 22:08, Sasha Levin wrote:
On 03/17/2014 08:38 AM, Vlastimil Babka wrote:
On 03/15/2014 04:06 AM, Sasha Levin wrote:
On 03/14/2014 07:55 PM, Sasha Levin wrote:
On 12/17/2013 08:00 AM
and break it. Patch 5 reduces the amount of unneeded
set_pageblock_skip calls, and patch 6 fixes the race by making the bit
operations atomic, including reasons for picking this solution instead of
using zone->lock also for set_pageblock_skip().
Vlastimil
Vlastimil Babka (6):
mm: call
...@suse.de
Signed-off-by: Vlastimil Babka vba...@suse.cz
---
mm/page_alloc.c | 20 +++-
mm/page_isolation.c | 23 +--
2 files changed, 24 insertions(+), 19 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 9d6892c..0cb41ec 100644
--- a/mm
, this raciness is not an issue as the bits
are just a heuristic for memory compaction.
Signed-off-by: Vlastimil Babka vba...@suse.cz
---
mm/page_alloc.c | 14 +-
1 file changed, 9 insertions(+), 5 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index fd6a64c..050bf5e 100644
setting the
skip bit again.
Signed-off-by: Vlastimil Babka vba...@suse.cz
---
mm/compaction.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/mm/compaction.c b/mm/compaction.c
index f0db73b..20a75ee 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -529,8 +529,10
To prevent races, set_pageblock_migratetype() should be called with zone->lock
held. This patch adds a debugging assertion and introduces a _nolock variant
for zone init functions.
Signed-off-by: Vlastimil Babka vba...@suse.cz
---
mm/page_alloc.c | 13 ++---
1 file changed, 10 insertions
sites, where
a wrong value does not affect correctness. The function makes sure that the
value does not exceed valid migratetype numbers. Such too-high values are
assumed to be a result of race and caller-supplied fallback value is returned
instead.
Signed-off-by: Vlastimil Babka vba...@suse.cz
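To illustrate the clamping logic this changelog describes, here is a small self-contained model; the enum values and the racy-read stand-in are invented for the sketch, while the real helper reads the pageblock bitmap without zone->lock.

#include <stdio.h>

/* Valid migratetypes in this model; MT_TYPES marks the end of the valid
 * range, playing the role of MIGRATE_TYPES in the kernel. */
enum { MT_UNMOVABLE, MT_RECLAIMABLE, MT_MOVABLE, MT_TYPES };

/* Stand-in for a lockless read of the pageblock bits that may race with
 * a concurrent set_pageblock_migratetype() and return a torn value. */
static int read_pageblock_bits_racy(void)
{
	return 7;	/* pretend a race produced an out-of-range value */
}

/* Fallback logic described above: an out-of-range value is assumed to be
 * the result of a race, and the caller-supplied fallback is returned. */
static int get_migratetype_nolock(int fallback)
{
	int mt = read_pageblock_bits_racy();

	return (mt >= MT_TYPES) ? fallback : mt;
}

int main(void)
{
	printf("migratetype = %d\n", get_migratetype_nolock(MT_MOVABLE));
	return 0;
}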
This patch complements the addition of get_pageblock_migratetype_nolock() for
the case where is_migrate_isolate_page() cannot be called with zone->lock held.
A race with set_pageblock_migratetype() may be detected, in which case a caller
supplied argument is returned.
Signed-off-by: Vlastimil
On 02/14/2014 07:53 AM, Joonsoo Kim wrote:
changes for v2
o include more experiment data in cover letter
o deal with vlastimil's comments mostly about commit description on 4/5
This patchset is related to compaction.
patch 1 fixes an implementation that runs contrary to the purpose of compaction.
On 03/03/2014 09:28 AM, Joonsoo Kim wrote:
On Fri, Feb 28, 2014 at 03:15:04PM +0100, Vlastimil Babka wrote:
set_pageblock_flags_group() is used to set either migratetype or skip bit of a
pageblock. Setting migratetype is done under zone->lock (except from __init
code), however changing the skip
On 03/03/2014 09:22 AM, Joonsoo Kim wrote:
On Fri, Feb 28, 2014 at 03:15:00PM +0100, Vlastimil Babka wrote:
In order to prevent race with set_pageblock_migratetype, most of calls to
get_pageblock_migratetype have been moved under zone->lock. For the remaining
call sites, the extra locking
On 03/04/2014 01:55 AM, Joonsoo Kim wrote:
On Mon, Mar 03, 2014 at 02:54:09PM +0100, Vlastimil Babka wrote:
On 03/03/2014 09:22 AM, Joonsoo Kim wrote:
On Fri, Feb 28, 2014 at 03:15:00PM +0100, Vlastimil Babka wrote:
In order to prevent race with set_pageblock_migratetype, most of calls
On 03/06/2014 03:26 AM, Laura Abbott wrote:
We received several reports of bad page state when freeing CMA pages
previously allocated with alloc_contig_range:
1[ 1258.084111] BUG: Bad page state in process Binder_A pfn:63202
1[ 1258.089763] page:d21130b0 count:0 mapcount:1 mapping: (null)
anyway since we need all
pages to be isolated. Additionally, drop the error checking based on
nr_strict_required and just check the pfn ranges. This matches with
what isolate_freepages_range does.
Signed-off-by: Laura Abbott lau...@codeaurora.org
Acked-by: Vlastimil Babka vba...@suse.cz
---
v2
On 7.3.2014 1:33, Andrew Morton wrote:
On Thu, 6 Mar 2014 10:21:32 -0800 Laura Abbott lau...@codeaurora.org wrote:
We received several reports of bad page state when freeing CMA pages
previously allocated with alloc_contig_range:
1[ 1258.084111] BUG: Bad page state in process Binder_A
-sync.
Signed-off-by: David Rientjes rient...@google.com
Acked-by: Vlastimil Babka vba...@suse.cz
---
mm/compaction.c | 9 ++---
1 file changed, 2 insertions(+), 7 deletions(-)
diff --git a/mm/compaction.c b/mm/compaction.c
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -454,12 +454,13
, not sure right now.
Uh so the triggered assertion is the one added by this very patch, and there
are no more changes wrt this in mainline.
If you can still try debug patches, please try this. Thanks.
From: Vlastimil Babka vba...@suse.cz
Date: Mon, 13 Jan 2014 11:13:53 +0100
Subject: [PATCH
On 01/10/2014 06:48 PM, Motohiro Kosaki wrote:
diff --git a/mm/rmap.c b/mm/rmap.c
index 068522d..b99c742 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1389,9 +1389,19 @@ static int try_to_unmap_cluster(unsigned long
cursor, unsigned int *mapcount,
BUG_ON(!page || PageAnon(page));
On 01/13/2014 03:03 PM, Vlastimil Babka wrote:
On 01/10/2014 06:48 PM, Motohiro Kosaki wrote:
diff --git a/mm/rmap.c b/mm/rmap.c
index 068522d..b99c742 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1389,9 +1389,19 @@ static int try_to_unmap_cluster(unsigned long
cursor, unsigned int *mapcount
On 02/16/2014 03:59 PM, Daniel Borkmann wrote:
From: Vlastimil Babka vba...@suse.cz
[ 4366.519657] [ cut here ]
[ 4366.519709] kernel BUG at mm/mlock.c:528!
[ 4366.519742] invalid opcode: [#1] SMP
[ 4366.519782] Modules linked in: ccm arc4 iwldvm [...]
[ 4366.520488
is highorder or not,
but its criterion for highorder is pageblock order. So calling it once
within a pageblock range is not a problem.
Signed-off-by: Joonsoo Kim iamjoonsoo@lge.com
Acked-by: Vlastimil Babka vba...@suse.cz
diff --git a/mm/compaction.c b/mm/compaction.c
index bbe1260..0d821a2
On 03/04/2014 01:23 AM, Joonsoo Kim wrote:
On Mon, Mar 03, 2014 at 12:02:00PM +0100, Vlastimil Babka wrote:
On 02/14/2014 07:53 AM, Joonsoo Kim wrote:
changes for v2
o include more experiment data in cover letter
o deal with vlastimil's comments mostly about commit description on 4/5
On 03/21/2014 02:53 AM, Sasha Levin wrote:
On 12/13/2013 04:08 AM, Vlastimil Babka wrote:
On 12/13/2013 09:49 AM, Bob Liu wrote:
On 12/13/2013 05:05 AM, Sasha Levin wrote:
On 12/12/2013 07:41 AM, Vlastimil Babka wrote:
On 12/12/2013 06:03 AM, Bob Liu wrote:
On 12/12/2013 11:16 AM, Sasha
On 03/06/2014 06:35 PM, Bartlomiej Zolnierkiewicz wrote:
Pages allocated from MIGRATE_RESERVE migratetype pageblocks
are not freed back to MIGRATE_RESERVE migratetype free
lists in free_pcppages_bulk()->__free_one_page() if we got
to free_pcppages_bulk() through drain_[zone_]pages().
The freeing
On 10/25/2013 05:46 PM, Robert Jennings wrote:
From: Robert C Jennings r...@linux.vnet.ibm.com
This patch set would add the ability to move anonymous user pages from one
process to another through vmsplice without copying data. Moving pages
rather than copying is implemented for a narrow
On 10/25/2013 05:46 PM, Robert Jennings wrote:
From: Robert C Jennings r...@linux.vnet.ibm.com
Introduce use of the unused SPLICE_F_MOVE flag for vmsplice to zap
pages.
When vmsplice is called with flags (SPLICE_F_GIFT | SPLICE_F_MOVE) the
writer's gift'ed pages would be zapped. This
with
the rwsem taken.
Signed-off-by: Davidlohr Bueso davidl...@hp.com
Cc: Michel Lespinasse wal...@google.com
Cc: Vlastimil Babka vba...@suse.cz
Acked-by: Vlastimil Babka vba...@suse.cz
---
mm/mlock.c | 18 +++---
1 file changed, 11 insertions(+), 7 deletions(-)
diff --git a/mm
On 09/18/2013 03:17 AM, Bob Liu wrote:
On 09/17/2013 10:22 PM, Vlastimil Babka wrote:
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -379,10 +379,14 @@ static unsigned long __munlock_pagevec_fill(struct
pagevec *pvec,
/*
* Initialize pte walk starting at the already pinned page where
@oracle.com
Cc: Jörn Engel jo...@logfs.org
Cc: Mel Gorman mgor...@suse.de
Cc: Michel Lespinasse wal...@google.com
Cc: Hugh Dickins hu...@google.com
Cc: Rik van Riel r...@redhat.com
Cc: Johannes Weiner han...@cmpxchg.org
Cc: Michal Hocko mho...@suse.cz
Cc: Vlastimil Babka vba...@suse.cz
Signed-off
On 09/26/2013 02:40 AM, Fengguang Wu wrote:
Hi Vlastimil,
FYI, this bug seems still not fixed in linux-next 20130925.
Hi,
I sent (including you) an RFC patch and later a reviewed patch about a week
ago. I assumed you would test it, but I probably should have made that
request explicit, sorry. Anyway
On 06/05/2014 01:39 AM, David Rientjes wrote:
On Wed, 4 Jun 2014, Vlastimil Babka wrote:
diff --git a/mm/compaction.c b/mm/compaction.c
index ed7102c..f0fd4b5 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -185,47 +185,74 @@ static void update_pageblock_skip(struct compact_control
*cc
On 06/05/2014 02:02 AM, David Rientjes wrote:
On Wed, 4 Jun 2014, Vlastimil Babka wrote:
diff --git a/mm/compaction.c b/mm/compaction.c
index ae7db5f..3dce5a7 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -640,11 +640,18 @@ isolate_migratepages_range(struct zone *zone, struct
On 06/05/2014 02:08 AM, David Rientjes wrote:
On Wed, 4 Jun 2014, Vlastimil Babka wrote:
In direct compaction, we want to allocate the high-order page as soon as
possible, so migrating from a block of pages that contains also unmigratable
pages just adds to allocation latency.
The title
On 06/05/2014 11:30 PM, David Rientjes wrote:
On Thu, 5 Jun 2014, Vlastimil Babka wrote:
diff --git a/mm/compaction.c b/mm/compaction.c
index ae7db5f..3dce5a7 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -640,11 +640,18 @@ isolate_migratepages_range(struct zone *zone
On 06/05/2014 11:38 PM, David Rientjes wrote:
On Thu, 5 Jun 2014, Vlastimil Babka wrote:
Ok, so this obsoletes my patchseries that did something similar. I hope
Your patches 1/3 and 2/3 would still make sense. Checking alloc flags is IMHO
better than checking async here. That way
rechecks anymore.
Signed-off-by: Vlastimil Babka vba...@suse.cz
Cc: Minchan Kim minc...@kernel.org
Cc: Mel Gorman mgor...@suse.de
Cc: Michal Nazarewicz min...@mina86.com
Cc: Naoya Horiguchi n-horigu...@ah.jp.nec.com
Cc: Christoph Lameter c...@linux.com
Cc: Rik van Riel r...@redhat.com
Cc: David Rientjes
in
number of pages scanned by migration scanner. This change is also important to
later allow detecting when a cc->order block of pages cannot be compacted, and
the scanner should skip to the next block instead of wasting time.
Signed-off-by: Vlastimil Babka vba...@suse.cz
Cc: Minchan Kim minc
-by: David Rientjes rient...@google.com
Signed-off-by: Vlastimil Babka vba...@suse.cz
Cc: Minchan Kim minc...@kernel.org
Cc: Mel Gorman mgor...@suse.de
Cc: Joonsoo Kim iamjoonsoo@lge.com
Cc: Michal Nazarewicz min...@mina86.com
Cc: Naoya Horiguchi n-horigu...@ah.jp.nec.com
Cc: Christoph Lameter c
it to gfpflags_to_migratetype().
Signed-off-by: David Rientjes rient...@google.com
Signed-off-by: Vlastimil Babka vba...@suse.cz
Cc: Minchan Kim minc...@kernel.org
Cc: Mel Gorman mgor...@suse.de
Cc: Joonsoo Kim iamjoonsoo@lge.com
Cc: Michal Nazarewicz min...@mina86.com
Cc: Naoya Horiguchi n-horigu
is then used to update cc->free_pfn.
In the mmtests stress-highalloc benchmark, this has resulted in lowering the
ratio between pages scanned by both scanners, from 2.5 free pages per migrate
page, to 2.25 free pages per migrate page, without affecting success rates.
Signed-off-by: Vlastimil Babka vba
that
was missing in the previous attempt, zone statistics are updated etc.
Evaluation is pending.
Signed-off-by: Vlastimil Babka vba...@suse.cz
Cc: Minchan Kim minc...@kernel.org
Cc: Mel Gorman mgor...@suse.de
Cc: Joonsoo Kim iamjoonsoo@lge.com
Cc: Michal Nazarewicz min...@mina86.com
Cc: Naoya Horiguchi n
is not held, the
function however does avoid contended run for async compaction by aborting when
trylock fails. Sync compaction does not use trylock.
Signed-off-by: Vlastimil Babka vba...@suse.cz
Cc: Minchan Kim minc...@kernel.org
Cc: Mel Gorman mgor...@suse.de
Cc: Michal Nazarewicz min...@mina86.com
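The asymmetry described here (async compaction only trylocks and bails on contention, sync compaction blocks on the lock) can be modelled in a few lines of userspace code; the names and the pthread mutex below stand in for zone->lock and are not the kernel implementation.

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

enum compact_mode { MODE_ASYNC, MODE_SYNC };

/* Async callers only trylock and record contention; sync callers wait. */
static bool take_lock(pthread_mutex_t *lock, enum compact_mode mode,
		      bool *contended)
{
	if (mode == MODE_ASYNC) {
		if (pthread_mutex_trylock(lock) != 0) {
			*contended = true;	/* abort instead of spinning */
			return false;
		}
		return true;
	}
	pthread_mutex_lock(lock);		/* sync compaction blocks */
	return true;
}

int main(void)
{
	pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
	bool contended = false;

	if (take_lock(&lock, MODE_ASYNC, &contended))
		pthread_mutex_unlock(&lock);
	printf("contended=%d\n", contended);
	return 0;
}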
-by: Vlastimil Babka vba...@suse.cz
Cc: Minchan Kim minc...@kernel.org
Cc: Mel Gorman mgor...@suse.de
Cc: Joonsoo Kim iamjoonsoo@lge.com
Cc: Michal Nazarewicz min...@mina86.com
Cc: Naoya Horiguchi n-horigu...@ah.jp.nec.com
Cc: Christoph Lameter c...@linux.com
Cc: Rik van Riel r...@redhat.com
---
mm
actually improved a bit.
[rient...@google.com: skip_on_failure logic; cleanups]
Signed-off-by: Vlastimil Babka vba...@suse.cz
Cc: Minchan Kim minc...@kernel.org
Cc: Mel Gorman mgor...@suse.de
Cc: Joonsoo Kim iamjoonsoo@lge.com
Cc: Michal Nazarewicz min...@mina86.com
Cc: Naoya Horiguchi n-horigu
, and not pretend that the recheck under lock guarantees anything. It is
just a heuristic after all.
Signed-off-by: Vlastimil Babka vba...@suse.cz
Cc: Minchan Kim minc...@kernel.org
Cc: Mel Gorman mgor...@suse.de
Cc: Joonsoo Kim iamjoonsoo@lge.com
Cc: Michal Nazarewicz min...@mina86.com
Cc: Naoya Horiguchi n
On 06/09/2014 11:09 AM, David Rientjes wrote:
On Fri, 6 Jun 2014, Vlastimil Babka wrote:
diff --git a/mm/internal.h b/mm/internal.h
index 1a8a0d4..6aa1f74 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -164,7 +164,8 @@ isolate_migratepages_range(struct zone *zone, struct
compact_control *cc
On 06/10/2014 01:50 AM, David Rientjes wrote:
On Mon, 9 Jun 2014, Vlastimil Babka wrote:
Async compaction aborts when it detects zone lock contention or need_resched()
is true. David Rientjes has reported that in practice, most direct async
compactions for THP allocation abort due
On 06/10/2014 01:58 AM, David Rientjes wrote:
On Mon, 9 Jun 2014, Vlastimil Babka wrote:
diff --git a/mm/compaction.c b/mm/compaction.c
index d37f4a8..e1a4283 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -185,54 +185,77 @@ static void update_pageblock_skip(struct compact_control
*cc
On 06/10/2014 12:25 AM, David Rientjes wrote:
On Mon, 9 Jun 2014, Vlastimil Babka wrote:
Sorry, I meant ACCESS_ONCE(page_private(page)) in the migration scanner
Hm but that's breaking the abstraction of page_order(). I don't know if
it's
worse to create a new variant of page_order() or to do
On 06/09/2014 06:04 PM, Kirill A. Shutemov wrote:
Hello everybody,
We've discussed few times that is would be nice to allow huge pages to be
mapped with 4k pages too. Here's my first attempt to actually implement
this. It's early prototype and not stabilized yet, but I want to share it
to
On 05/07/2014 12:19 AM, Naoya Horiguchi wrote:
On Fri, May 02, 2014 at 05:27:55PM +0200, Vlastimil Babka wrote:
The compaction free scanner in isolate_freepages() currently remembers PFN of
the highest pageblock where it successfully isolates, to be used as the
starting pageblock for the next
On 05/06/2014 11:18 PM, Naoya Horiguchi wrote:
On Fri, May 02, 2014 at 05:26:18PM +0200, Vlastimil Babka wrote:
During compaction, update_nr_listpages() has been used to count remaining
non-migrated and free pages after a call to migrate_pages(). The freepages
counting has become unnecessary
compaction.
Signed-off-by: David Rientjes rient...@google.com
---
v3: do not update pageblock skip metadata when skipped due to async per
Vlastimil.
Great.
Acked-by: Vlastimil Babka vba...@suse.cz
include/linux/mmzone.h | 5 ++--
mm/compaction.c | 66
);
+ enum migrate_mode sync, bool *contended);
Everywhere else it's 'mode' and only in this function it's still called
'sync', that's confusing.
Afterwards:
Acked-by: Vlastimil Babka vba...@suse.cz
extern void compact_pgdat(pg_data_t *pgdat, int order);
extern void
events where nr_migrated=0
nr_failed=0. In the stress-highalloc mmtest, this was about 75% of the events.
The mm_compaction_isolate_migratepages event is better for determining that
nothing was isolated for migration, and this one was just duplicating the info.
Signed-off-by: Vlastimil Babka vba
compaction is restarted, not for multiple invocations of
the free scanner during single compaction.
Signed-off-by: Vlastimil Babka vba...@suse.cz
Cc: Minchan Kim minc...@kernel.org
Cc: Mel Gorman mgor...@suse.de
Cc: Joonsoo Kim iamjoonsoo@lge.com
Cc: Bartlomiej Zolnierkiewicz b.zolnier
triggers, then terminate this pageblock scan for
async compaction as well.
Signed-off-by: David Rientjes rient...@google.com
Acked-by: Vlastimil Babka vba...@suse.cz
---
mm/compaction.c | 7 ++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/mm/compaction.c b/mm
On 05/07/2014 03:33 AM, Minchan Kim wrote:
On Mon, May 05, 2014 at 05:50:46PM +0200, Vlastimil Babka wrote:
On 05/05/2014 04:36 PM, Sasha Levin wrote:
On 05/02/2014 08:08 AM, Vlastimil Babka wrote:
On 04/30/2014 11:46 PM, Sasha Levin wrote:
On 04/03/2014 11:40 AM, Vlastimil Babka wrote
On 06/11/2014 03:32 AM, Minchan Kim wrote:
+ if (cc->mode == MIGRATE_ASYNC) {
+ if (need_resched()) {
+ cc->contended = COMPACT_CONTENDED_SCHED;
+ return true;
}
-
+ if (spin_is_locked(lock)) {
Why do you use spin_is_locked
On 06/11/2014 04:12 AM, Minchan Kim wrote:
@@ -314,6 +315,9 @@ static unsigned long isolate_freepages_block(struct
compact_control *cc,
int isolated, i;
struct page *page = cursor;
+ /* Record how far we have got within the block */
+ *start_pfn =
On 06/11/2014 10:16 AM, Joonsoo Kim wrote:
On Wed, Jun 11, 2014 at 11:12:13AM +0900, Minchan Kim wrote:
On Mon, Jun 09, 2014 at 11:26:17AM +0200, Vlastimil Babka wrote:
Unlike the migration scanner, the free scanner remembers the beginning of the
last scanned pageblock in cc->free_pfn. It might
On 06/11/2014 04:48 AM, Minchan Kim wrote:
On Mon, Jun 09, 2014 at 11:26:20AM +0200, Vlastimil Babka wrote:
From: David Rientjes rient...@google.com
struct compact_control currently converts the gfp mask to a migratetype, but we
need the entire gfp mask in a follow-up patch.
Pass the entire
On 06/11/2014 01:54 AM, David Rientjes wrote:
On Tue, 10 Jun 2014, Vlastimil Babka wrote:
I think the compiler is allowed to turn this into
if (ACCESS_ONCE(page_private(page)) > 0 &&
    ACCESS_ONCE(page_private(page)) < MAX_ORDER)
        low_pfn += (1UL << ACCESS_ONCE
On 06/11/2014 03:10 AM, Minchan Kim wrote:
On Mon, Jun 09, 2014 at 11:26:14AM +0200, Vlastimil Babka wrote:
Async compaction aborts when it detects zone lock contention or need_resched()
is true. David Rientjes has reported that in practice, most direct async
compactions for THP allocation
On 06/09/2014 11:26 AM, Vlastimil Babka wrote:
Compaction uses watermark checking to determine if it succeeded in creating
a high-order free page. My testing has shown that this is quite racy and it
can happen that watermark checking in compaction succeeds, and moments later
the watermark
On 06/12/2014 04:20 AM, Minchan Kim wrote:
On Wed, Jun 11, 2014 at 04:56:49PM +0200, Vlastimil Babka wrote:
On 06/09/2014 11:26 AM, Vlastimil Babka wrote:
Compaction uses watermark checking to determine if it succeeded in creating
a high-order free page. My testing has shown that this is quite
On 06/12/2014 02:21 AM, David Rientjes wrote:
On Wed, 11 Jun 2014, Vlastimil Babka wrote:
I hate to belabor this point, but I think gcc does treat it differently.
If you look at the assembly comparing your patch to if you do
unsigned long freepage_order = ACCESS_ONCE(page_private(page
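Spelled out, the alternative being compared reads the racy value once into a local, so the compiler cannot re-load page_private() between the range check and the shift. A sketch of that pattern (kernel-style fragment, not a complete or final patch; ACCESS_ONCE() is the pre-READ_ONCE() idiom for forcing a single volatile load):

	/* Read page_private() exactly once; all later uses see the same value. */
	unsigned long freepage_order = ACCESS_ONCE(page_private(page));

	/* Only trust values that look like a sane buddy order. */
	if (freepage_order > 0 && freepage_order < MAX_ORDER)
		low_pfn += (1UL << freepage_order) - 1;	/* skip the rest of the free page */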
On 06/09/2014 11:06 AM, David Rientjes wrote:
On Fri, 6 Jun 2014, Vlastimil Babka wrote:
Agreed. I was thinking higher than 1GB would be possible once we have
your series that does the pageblock skip for thp, I think the expense
would be constant because we won't needlessly be migrating pages
On 06/12/2014 01:49 AM, Minchan Kim wrote:
On Wed, Jun 11, 2014 at 02:22:30PM +0200, Vlastimil Babka wrote:
On 06/11/2014 03:10 AM, Minchan Kim wrote:
On Mon, Jun 09, 2014 at 11:26:14AM +0200, Vlastimil Babka wrote:
Async compaction aborts when it detects zone lock contention or need_resched
On 05/20/2014 01:37 AM, Andrew Morton wrote:
On Fri, 16 May 2014 11:47:53 +0200 Vlastimil Babka vba...@suse.cz wrote:
Compaction uses compact_checklock_irqsave() function to periodically check
for
lock contention and need_resched() to either abort async compaction, or to
free the lock
On 05/22/2014 05:20 AM, David Rientjes wrote:
On Fri, 16 May 2014, Vlastimil Babka wrote:
Compaction uses compact_checklock_irqsave() function to periodically check for
lock contention and need_resched() to either abort async compaction, or to
free the lock, schedule and retake the lock. When
On 05/22/2014 04:49 AM, David Rientjes wrote:
On Tue, 13 May 2014, Vlastimil Babka wrote:
I wonder what about a process doing e.g. mmap() with MAP_POPULATE. It seems to
me that it would get only MIGRATE_ASYNC here, right? Since gfp_mask would
include __GFP_NO_KSWAPD and it won't have
to update the bitmap if there have been no other changes made in
parallel.
In a test running dd onto tmpfs the overhead of the pageblock-related
functions went from 1.27% in profiles to 0.5%.
Signed-off-by: Mel Gorman mgor...@suse.de
Acked-by: Vlastimil Babka vba...@suse.cz
Hi, I've tested
On 05/22/2014 10:55 AM, David Rientjes wrote:
On Thu, 22 May 2014, Vlastimil Babka wrote:
With -mm, it turns out that while egregious thp fault latencies were
reduced, faulting 64MB of memory backed by thp on a fragmented 128GB
machine can result in latencies of 1-3s for the entire 64MB
On 05/22/2014 03:58 PM, Dave Jones wrote:
Not sure if Sasha has already reported this on -next (It's getting hard
to keep track of all the VM bugs he's been finding), but I hit this overnight
on .15-rc6. First time I've seen this one.
page:ea0004599800 count:0 mapcount:0 mapping:
On 22.5.2014 20:23, Andrew Morton wrote:
On Thu, 22 May 2014 11:24:23 +0200 Vlastimil Babka vba...@suse.cz wrote:
In a test running dd onto tmpfs the overhead of the pageblock-related
functions went from 1.27% in profiles to 0.5%.
Signed-off-by: Mel Gorman mgor...@suse.de
Acked-by: Vlastimil
On 23.5.2014 6:21, Sasha Levin wrote:
On 05/22/2014 09:58 AM, Dave Jones wrote:
Not sure if Sasha has already reported this on -next (It's getting hard
to keep track of all the VM bugs he's been finding), but I hit this overnight
on .15-rc6. First time I've seen this one.
Unfortunately I had
On 05/23/2014 04:48 AM, Shawn Guo wrote:
On 23 May 2014 07:49, Kevin Hilman khil...@linaro.org wrote:
On Fri, May 16, 2014 at 2:47 AM, Vlastimil Babka vba...@suse.cz wrote:
Compaction uses compact_checklock_irqsave() function to periodically check
for
lock contention and need_resched
On 05/22/2014 05:41 PM, Dave Jones wrote:
On Thu, May 22, 2014 at 05:08:09PM +0200, Vlastimil Babka wrote:
RIP: 0010:[bb718d98] [bb718d98]
PageTransHuge.part.23+0xb/0xd
Call Trace:
[bb1728a3] isolate_migratepages_range+0x7a3/0x870
[bb172d90
-by: Vlastimil Babka vba...@suse.cz
Reviewed-by: Naoya Horiguchi n-horigu...@ah.jp.nec.com
Cc: Minchan Kim minc...@kernel.org
Cc: Mel Gorman mgor...@suse.de
Cc: Bartlomiej Zolnierkiewicz b.zolnier...@samsung.com
Cc: Michal Nazarewicz min...@mina86.com
Cc: Christoph Lameter c...@linux.com
Cc: Rik van Riel