0984caa823 ("mm: incorporate zero pages into transparent huge pages")
Acked-by: Kirill A. Shutemov
Acked-by: Vlastimil Babka
Reviewed-by: Andrea Arcangeli
Signed-off-by: Minchan Kim
---
Hello Greg,
This patch should go to -stable, but when you apply it
after merging of linu
> show any notable difference in allocation success rate, but it shows a higher
> compaction success rate.
>
> Compaction success rate (Compaction success * 100 / Compaction stalls, %)
> 18.47 : 28.94
>
> Cc:  # v3.7+
> Fixes: 1fb3f8ca0e9222535a39b884cb67a34628411b9f
> Acked-by: Vlastimil
On 12/18/2014 06:16 PM, James Custer wrote:
> Reading the documentation on pageblock_pfn_to_page, it checks to see if all
> of [start_pfn, end_pfn) is valid and within the same zone. But the validity
> of the entirety of [start_pfn, end_pfn) doesn't seem to be a requirement of
> test_pages_in_a_zone
On 12/16/2014 12:03 AM, a...@linux-foundation.org wrote:
> From: James Custer
> Subject: mm: fix invalid use of pfn_valid_within in test_pages_in_a_zone
>
> Offlining memory by 'echo 0 > /sys/devices/system/memory/memory#/online'
> or reading valid_zones 'cat
> /sys/devices/system/memory/memory#/
and since we are under
prepare_to_wait(), the wake up won't be missed. Also we update the comment
prepare_kswapd_sleep() to hopefully more clearly describe the races it is
preventing.
Fixes: 5515061d22f0 ("mm: throttle direct reclaimers if PF_MEMALLOC reserves
are low and swap is backed by network storage")
zone was
fully balanced in a single iteration. Note that the comment in balance_pgdat()
also says "Wake them", so waking up a single process does not seem intentional.
Thus, replace wake_up() with wake_up_all().
Signed-off-by: Vlastimil Babka
Cc: Mel Gorman
Cc: Johannes Weiner
Cc: Mi
On 22.12.2014 17:25, Vladimir Davydov wrote:
E.g. suppose processes are
governed by FIFO and kswapd happens to have a higher prio than the
process killed by OOM. Then after cond_resched kswapd will be picked for
execution again, and the killed process won't have a chance to remove
itself from
On 19.12.2014 19:28, Vladimir Davydov wrote:
Hi,
On Fri, Dec 19, 2014 at 04:57:47PM +0100, Michal Hocko wrote:
On Fri 19-12-14 14:01:55, Vlastimil Babka wrote:
Charles Shirron and Paul Cassella from Cray Inc have reported kswapd stuck
in a busy loop with nothing left to balance, but
5515061d22f0 ("mm: throttle direct reclaimers if PF_MEMALLOC reserves
are low and swap is backed by network storage")
Signed-off-by: Vlastimil Babka
Cc:  # v3.6+
Cc: Mel Gorman
Cc: Johannes Weiner
Cc: Michal Hocko
Cc: Vladimir Davydov
Cc: Rik van Riel
---
I've CC
show any notable difference in allocation success rate, but it shows a higher
compaction success rate and reduced elapsed time.
Compaction success rate (Compaction success * 100 / Compaction stalls, %)
18.47 : 28.94
Elapsed time (sec)
1429 : 1411
Cc:
Signed-off-by: Joonsoo Kim
Acked-by: Vlastim
On 11/03/2014 09:10 AM, Joonsoo Kim wrote:
On Fri, Oct 31, 2014 at 03:39:13PM +0100, Vlastimil Babka wrote:
+ __isolate_free_page(page, order);
+ set_page_refcounted(page);
+ isolated_page = page;
On 10/31/2014 08:25 AM, Joonsoo Kim wrote:
@@ -571,6 +548,7 @@ static inline void __free_one_page(struct page *page,
unsigned long combined_idx;
unsigned long uninitialized_var(buddy_idx);
struct page *buddy;
+ int max_order = MAX_ORDER;
VM_BUG_ON(!zone_is_i
rid of the skip_counting labels is nice.
Acked-by: Vlastimil Babka
---
mm/page_alloc.c | 14 +++---
1 file changed, 3 insertions(+), 11 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 6df23fe..2bc7768 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -57
On 10/23/2014 10:10 AM, Joonsoo Kim wrote:
> All the callers of __free_one_page() have similar migratetype recheck logic,
> so we can move it to __free_one_page(). This reduces lines of code and helps
> future maintenance. This is also a preparation step for "mm/page_alloc:
> restrict max order of merging
On 10/23/2014 10:10 AM, Joonsoo Kim wrote:
> Changes from v3:
> Add one more check in free_one_page() that checks whether migratetype is
> MIGRATE_ISOLATE or not. Without this, the abovementioned case 1 could happen.
Good catch.
> Cc:
> Signed-off-by: Joonsoo Kim
Acked-by:
e().
>
> In addition to moving up the position of this re-fetch, this patch uses
> an optimization technique, re-fetching the migratetype only if there is
> an isolated pageblock. Pageblock isolation is a rare event, so we can
> avoid re-fetching in the common case with this optimization.
>
> Cc:
&
On 07/11/2014 10:38 AM, Peter Zijlstra wrote:
On Fri, Jul 11, 2014 at 10:33:15AM +0200, Vlastimil Babka wrote:
Quoting Hugh from previous mail in this thread:
[ 363.600969] INFO: task trinity-c327:9203 blocked for more than 120 seconds.
[ 363.605359] Not tainted 3.16.0-rc4-next
On 07/11/2014 10:25 AM, Peter Zijlstra wrote:
On Thu, Jul 10, 2014 at 03:02:29PM -0400, Sasha Levin wrote:
What if we move lockdep's acquisition point to after it actually got the
lock?
NAK, you want to do deadlock detection _before_ you're stuck in a
deadlock.
We'd miss deadlocks, but we do
On 02/13/2014 12:55 PM, Jakub Jelinek wrote:
> On Thu, Feb 13, 2014 at 03:37:08AM -0800, tip-bot for Steven Noonan wrote:
>> Commit-ID: a9f180345f5378ac87d80ed0bea55ba421d83859
>> Gitweb:
>> http://git.kernel.org/tip/a9f180345f5378ac87d80ed0bea55ba421d83859
>> Author: Steven Noonan
>> Au
On 07/09/2014 06:03 PM, Sasha Levin wrote:
On 07/09/2014 08:47 AM, Sasha Levin wrote:
So it would again help to see stacks of other tasks, to see who holds the
i_mutex and where it's stuck...
The stack prints got garbled due to a large number of tasks and too small a
console buffer. I'v
On 07/08/2014 07:09 PM, Ben Hutchings wrote:
On Mon, 2014-06-09 at 11:39 +0200, Vlastimil Babka wrote:
commit 7ed695e069c3cbea5e1fd08f84a04536da91f584 upstream.
Compaction of a zone is finished when the migrate scanner (which begins
at the zone's lowest pfn) meets the free page scanner (
On 07/09/2014 08:35 AM, Hugh Dickins wrote:
On Wed, 9 Jul 2014, Sasha Levin wrote:
On 07/02/2014 03:25 PM, a...@linux-foundation.org wrote:
From: Hugh Dickins
Subject: shmem: fix faulting into a hole while it's punched, take 2
I suspect there's something off with this patch, as the shmem_fal
, so
this patch moves the cached pfn reset to be performed *before* the
values are read.
Signed-off-by: Vlastimil Babka
Acked-by: Mel Gorman
Acked-by: Rik van Riel
Cc: Joonsoo Kim
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
---
I have realized that this should have been CC
() also detects
scanners meeting and sets the compact_blockskip_flush flag to make
kswapd reset the scanner pfn's.
The results in stress-highalloc benchmark show that the "regression" by
commit 81c0a2bb515f in phase 3 no longer occurs, and phase 1 and 2
allocation success rates are also significant
block boundary. This also permits replacing
the end-of-pageblock alignment within the for loop with a simple
pageblock_nr_pages increment.
Signed-off-by: Vlastimil Babka
Reported-by: Heesub Shin
Acked-by: Minchan Kim
Cc: Mel Gorman
Acked-by: Joonsoo Kim
Cc: Bartlomiej Zolnierkiewicz
Cc: Mic
On 06/09/2014 03:30 PM, Greg KH wrote:
On Mon, Jun 09, 2014 at 11:39:15AM +0200, Vlastimil Babka wrote:
commit d3132e4b83e6bd383c74d716f7281d7c3136089c upstream.
Compaction caches pfn's for its migrate and free scanners to avoid
scanning the whole zone each time. In compact_zone(), the c
On 02/16/2014 03:59 PM, Daniel Borkmann wrote:
From: Vlastimil Babka
[ 4366.519657] [ cut here ]
[ 4366.519709] kernel BUG at mm/mlock.c:528!
[ 4366.519742] invalid opcode: [#1] SMP
[ 4366.519782] Modules linked in: ccm arc4 iwldvm [...]
[ 4366.520488] video
to the stable tree,
> please let me know about it.
Hello,
I realized this probably doesn't meet the stable kernel rules regarding
criticality and the race actually being observed to happen, so feel free to
drop it.
Vlastimil
> From 01cc2e58697e34c6ee9a40fb6cebc18bf5a1923f Mon Sep 17 00:00:00 2001
> F