Re: [PATCH] mm,vmscan: only loop back if compaction would fail in all zones

2012-11-27 Thread Valdis Kletnieks
On Sun, 25 Nov 2012 23:10:41 -0500, Johannes Weiner said:

> From: Johannes Weiner 
> Subject: [patch] mm: vmscan: fix endless loop in kswapd balancing
>
> Kswapd does not in all places have the same criteria for when it
> considers a zone balanced.  This leads to zones being not reclaimed
> because they are considered just fine and the compaction checks to
> loop over the zonelist again because they are considered unbalanced,
> causing kswapd to run forever.
>
> Add a function, zone_balanced(), that checks the watermark and if
> compaction has enough free memory to do its job.  Then use it
> uniformly for when kswapd needs to check if a zone is balanced.
>
> Signed-off-by: Johannes Weiner 
> ---
>  mm/vmscan.c | 27 ++-
>  1 file changed, 18 insertions(+), 9 deletions(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 48550c6..3b0aef4 100644

> + if (COMPACTION_BUILD && order && !compaction_suitable(zone, order))
> + return false;

Applying to next-20121117, I had to hand-patch around this other akpm patch:

./Next/merge.log:Applying: mm: use IS_ENABLED(CONFIG_COMPACTION) instead of COMPACTION_BUILD

Probably won't be till tomorrow before I know if this worked; it seems
to take a while before the kswapd storms start hitting (it appears to be
a function of uptime: I see almost none for 8-16 hours, but after 24-30
hours I'll have a spinning kswapd most of the time).




Re: [PATCH] mm,vmscan: only loop back if compaction would fail in all zones

2012-11-26 Thread Rik van Riel

On 11/25/2012 11:10 PM, Johannes Weiner wrote:


From: Johannes Weiner 
Subject: [patch] mm: vmscan: fix endless loop in kswapd balancing

Kswapd does not in all places have the same criteria for when it
considers a zone balanced.  This leads to zones being not reclaimed
because they are considered just fine and the compaction checks to
loop over the zonelist again because they are considered unbalanced,
causing kswapd to run forever.

Add a function, zone_balanced(), that checks the watermark and if
compaction has enough free memory to do its job.  Then use it
uniformly for when kswapd needs to check if a zone is balanced.

Signed-off-by: Johannes Weiner 


Reviewed-by: Rik van Riel 


--
All rights reversed
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH] mm,vmscan: only loop back if compaction would fail in all zones

2012-11-26 Thread Johannes Hirte
Am Sun, 25 Nov 2012 23:10:41 -0500
schrieb Johannes Weiner :

> On Sun, Nov 25, 2012 at 10:15:18PM -0500, Johannes Weiner wrote:
> > On Sun, Nov 25, 2012 at 07:16:45PM -0500, Rik van Riel wrote:
> > > On Sun, 25 Nov 2012 17:44:33 -0500
> > > Johannes Weiner  wrote:
> > > > On Sun, Nov 25, 2012 at 01:29:50PM -0500, Rik van Riel wrote:
> > > 
> > > > > Could you try this patch?
> > > > 
> > > > It's not quite enough because it's not reaching the conditions
> > > > you changed, see analysis in
> > > > https://lkml.org/lkml/2012/11/20/567
> > > 
> > > Johannes,
> > > 
> > > does the patch below fix your problem?
> > 
> > I can not reproduce the problem anymore with my smoke test.
> > 
> > > I suspect it would, because kswapd should only ever run into this
> > > particular problem when we have a tiny memory zone in a pgdat,
> > > and in that case we will also have a larger zone nearby, where
> > > compaction would just succeed.
> > 
> > What if there is a higher order GFP_DMA allocation when the other
> > zones in the system meet the high watermark for this order?
> > 
> > There is something else that worries me: if the preliminary zone
> > scan finds the high watermark of all zones alright, end_zone is at
> > its initialization value, 0.  The final compaction loop at `if
> > (order)' goes through all zones up to and including end_zone, which
> > was never really set to anything meaningful(?) and the only zone
> > considered is the DMA zone again.  Very unlikely, granted, but if
> > you'd ever hit that race and kswapd gets stuck, this will be fun to
> > debug...
> 
> I actually liked your first idea better: force reclaim until the
> compaction watermark is met.  The only problem was that still not
> every check in there agreed when the zone was considered balanced and
> so no actual reclaim happened.
> 
> So how about making everybody agree?  If the high watermark is met but
> not the compaction one, keep doing reclaim AND don't consider the zone
> balanced, AND don't make it contribute to balanced_pages etc.?  This
> makes sure reclaim really does not bail and that the node is never
> considered alright when it's actually not according to compaction.
> This patch fixes the problem too (at least for the smoke test so far)
> and IMO makes the code a bit more understandable.
> 
> We may be able to drop some of the relooping conditions.  We may also
> be able to reduce the pressure from the DMA zone by passing the right
> classzone_idx in there.  Needs more thought.
> 
> ---
> From: Johannes Weiner 
> Subject: [patch] mm: vmscan: fix endless loop in kswapd balancing
> 
> Kswapd does not in all places have the same criteria for when it
> considers a zone balanced.  This leads to zones being not reclaimed
> because they are considered just fine and the compaction checks to
> loop over the zonelist again because they are considered unbalanced,
> causing kswapd to run forever.
> 
> Add a function, zone_balanced(), that checks the watermark and if
> compaction has enough free memory to do its job.  Then use it
> uniformly for when kswapd needs to check if a zone is balanced.
> 
> Signed-off-by: Johannes Weiner 
> ---
>  mm/vmscan.c | 27 ++-
>  1 file changed, 18 insertions(+), 9 deletions(-)
> 
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 48550c6..3b0aef4 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -2397,6 +2397,19 @@ static void age_active_anon(struct zone *zone, struct scan_control *sc)
> 	} while (memcg);
>  }
>  
> +static bool zone_balanced(struct zone *zone, int order,
> +			  unsigned long balance_gap, int classzone_idx)
> +{
> +	if (!zone_watermark_ok_safe(zone, order, high_wmark_pages(zone) +
> +				    balance_gap, classzone_idx, 0))
> +		return false;
> +
> +	if (COMPACTION_BUILD && order && !compaction_suitable(zone, order))
> +		return false;
> +
> +	return true;
> +}
> +
>  /*
>   * pgdat_balanced is used when checking if a node is balanced for high-order
>   * allocations. Only zones that meet watermarks and are in a zone allowed
> @@ -2475,8 +2488,7 @@ static bool prepare_kswapd_sleep(pg_data_t *pgdat, int order, long remaining,
> 			continue;
> 		}
>  
> -		if (!zone_watermark_ok_safe(zone, order, high_wmark_pages(zone),
> -					    i, 0))
> +		if (!zone_balanced(zone, order, 0, i))
> 			all_zones_ok = false;
> 		else
> 			balanced += zone->present_pages;
> @@ -2585,8 +2597,7 @@ static unsigned long balance_pgdat(pg_data_t *pgdat, int order,
> 			break;
> 		}
>  
> -		if (!zone_watermark_ok_safe(zone, order,
> -					    high_wmark_pages(zone), 0, 0)) {
> +		if (!zone_balanced(zone, order, 0, 0)) {
> 			end_zone = i;
> 			break;

Re: [PATCH] mm,vmscan: only loop back if compaction would fail in all zones

2012-11-25 Thread Johannes Weiner
On Sun, Nov 25, 2012 at 10:15:18PM -0500, Johannes Weiner wrote:
> On Sun, Nov 25, 2012 at 07:16:45PM -0500, Rik van Riel wrote:
> > On Sun, 25 Nov 2012 17:44:33 -0500
> > Johannes Weiner  wrote:
> > > On Sun, Nov 25, 2012 at 01:29:50PM -0500, Rik van Riel wrote:
> > 
> > > > Could you try this patch?
> > > 
> > > It's not quite enough because it's not reaching the conditions you
> > > changed, see analysis in https://lkml.org/lkml/2012/11/20/567
> > 
> > Johannes,
> > 
> > does the patch below fix your problem?
> 
> I can not reproduce the problem anymore with my smoke test.
> 
> > I suspect it would, because kswapd should only ever run into this
> > particular problem when we have a tiny memory zone in a pgdat,
> > and in that case we will also have a larger zone nearby, where
> > compaction would just succeed.
> 
> What if there is a higher order GFP_DMA allocation when the other
> zones in the system meet the high watermark for this order?
> 
> There is something else that worries me: if the preliminary zone scan
> finds the high watermark of all zones alright, end_zone is at its
> initialization value, 0.  The final compaction loop at `if (order)'
> goes through all zones up to and including end_zone, which was never
> really set to anything meaningful(?) and the only zone considered is
> the DMA zone again.  Very unlikely, granted, but if you'd ever hit
> that race and kswapd gets stuck, this will be fun to debug...

I actually liked your first idea better: force reclaim until the
compaction watermark is met.  The only problem was that still not
every check in there agreed when the zone was considered balanced and
so no actual reclaim happened.

So how about making everybody agree?  If the high watermark is met but
not the compaction one, keep doing reclaim AND don't consider the zone
balanced, AND don't make it contribute to balanced_pages etc.?  This
makes sure reclaim really does not bail and that the node is never
considered alright when it's actually not according to compaction.
This patch fixes the problem too (at least for the smoke test so far)
and IMO makes the code a bit more understandable.

We may be able to drop some of the relooping conditions.  We may also
be able to reduce the pressure from the DMA zone by passing the right
classzone_idx in there.  Needs more thought.

---
From: Johannes Weiner 
Subject: [patch] mm: vmscan: fix endless loop in kswapd balancing

Kswapd does not in all places have the same criteria for when it
considers a zone balanced.  This leads to zones being not reclaimed
because they are considered just fine and the compaction checks to
loop over the zonelist again because they are considered unbalanced,
causing kswapd to run forever.

Add a function, zone_balanced(), that checks the watermark and if
compaction has enough free memory to do its job.  Then use it
uniformly for when kswapd needs to check if a zone is balanced.

Signed-off-by: Johannes Weiner 
---
 mm/vmscan.c | 27 ++-
 1 file changed, 18 insertions(+), 9 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 48550c6..3b0aef4 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2397,6 +2397,19 @@ static void age_active_anon(struct zone *zone, struct scan_control *sc)
} while (memcg);
 }
 
+static bool zone_balanced(struct zone *zone, int order,
+ unsigned long balance_gap, int classzone_idx)
+{
+   if (!zone_watermark_ok_safe(zone, order, high_wmark_pages(zone) +
+   balance_gap, classzone_idx, 0))
+   return false;
+
+   if (COMPACTION_BUILD && order && !compaction_suitable(zone, order))
+   return false;
+
+   return true;
+}
+
 /*
  * pgdat_balanced is used when checking if a node is balanced for high-order
  * allocations. Only zones that meet watermarks and are in a zone allowed
@@ -2475,8 +2488,7 @@ static bool prepare_kswapd_sleep(pg_data_t *pgdat, int order, long remaining,
continue;
}
 
-   if (!zone_watermark_ok_safe(zone, order, high_wmark_pages(zone),
-   i, 0))
+   if (!zone_balanced(zone, order, 0, i))
all_zones_ok = false;
else
balanced += zone->present_pages;
@@ -2585,8 +2597,7 @@ static unsigned long balance_pgdat(pg_data_t *pgdat, int order,
break;
}
 
-   if (!zone_watermark_ok_safe(zone, order,
-   high_wmark_pages(zone), 0, 0)) {
+   if (!zone_balanced(zone, order, 0, 0)) {
end_zone = i;
break;
} else {
@@ -2662,9 +2673,8 @@ static unsigned long balance_pgdat(pg_data_t *pgdat, int order,
testorder = 0;
 
if ((

Re: [PATCH] mm,vmscan: only loop back if compaction would fail in all zones

2012-11-25 Thread Johannes Weiner
On Sun, Nov 25, 2012 at 07:16:45PM -0500, Rik van Riel wrote:
> On Sun, 25 Nov 2012 17:44:33 -0500
> Johannes Weiner  wrote:
> > On Sun, Nov 25, 2012 at 01:29:50PM -0500, Rik van Riel wrote:
> 
> > > Could you try this patch?
> > 
> > It's not quite enough because it's not reaching the conditions you
> > changed, see analysis in https://lkml.org/lkml/2012/11/20/567
> 
> Johannes,
> 
> does the patch below fix your problem?

I can not reproduce the problem anymore with my smoke test.

> I suspect it would, because kswapd should only ever run into this
> particular problem when we have a tiny memory zone in a pgdat,
> and in that case we will also have a larger zone nearby, where
> compaction would just succeed.

What if there is a higher order GFP_DMA allocation when the other
zones in the system meet the high watermark for this order?

There is something else that worries me: if the preliminary zone scan
finds the high watermark of all zones alright, end_zone is at its
initialization value, 0.  The final compaction loop at `if (order)'
goes through all zones up to and including end_zone, which was never
really set to anything meaningful(?) and the only zone considered is
the DMA zone again.  Very unlikely, granted, but if you'd ever hit
that race and kswapd gets stuck, this will be fun to debug...


[PATCH] mm,vmscan: only loop back if compaction would fail in all zones

2012-11-25 Thread Rik van Riel
On Sun, 25 Nov 2012 17:44:33 -0500
Johannes Weiner  wrote:
> On Sun, Nov 25, 2012 at 01:29:50PM -0500, Rik van Riel wrote:

> > Could you try this patch?
> 
> It's not quite enough because it's not reaching the conditions you
> changed, see analysis in https://lkml.org/lkml/2012/11/20/567

Johannes,

does the patch below fix your problem?

I suspect it would, because kswapd should only ever run into this
particular problem when we have a tiny memory zone in a pgdat,
and in that case we will also have a larger zone nearby, where
compaction would just succeed.

---8<---

Subject: mm,vmscan: only loop back if compaction would fail in all zones

Kswapd frees memory to satisfy two goals:
1) allow allocations to succeed, and
2) balance memory pressure between zones 

Currently, kswapd has an issue where it will loop back to free
more memory if any memory zone in the pgdat has not enough free
memory for compaction.  This can lead to unnecessary overhead,
and even infinite loops in kswapd.

It is better to only loop back to free more memory if all of
the zones in the pgdat have insufficient free memory for
compaction.  That satisfies both of kswapd's goals with less
overhead.

Signed-off-by: Rik van Riel 
---
 mm/vmscan.c |   11 ---
 1 files changed, 8 insertions(+), 3 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index b99ecba..f0d111b 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2790,6 +2790,7 @@ static unsigned long balance_pgdat(pg_data_t *pgdat, int order,
 */
if (order) {
int zones_need_compaction = 1;
+   int compaction_needs_memory = 1;
 
for (i = 0; i <= end_zone; i++) {
struct zone *zone = pgdat->node_zones + i;
@@ -2801,10 +2802,10 @@ static unsigned long balance_pgdat(pg_data_t *pgdat, int order,
sc.priority != DEF_PRIORITY)
continue;
 
-   /* Would compaction fail due to lack of free memory? */
+   /* Is there enough memory for compaction? */
if (COMPACTION_BUILD &&
-   compaction_suitable(zone, order) == COMPACT_SKIPPED)
-   goto loop_again;
+   compaction_suitable(zone, order) != COMPACT_SKIPPED)
+   compaction_needs_memory = 0;
 
/* Confirm the zone is balanced for order-0 */
if (!zone_watermark_ok(zone, 0,
@@ -2822,6 +2823,10 @@ static unsigned long balance_pgdat(pg_data_t *pgdat, int order,
zone_clear_flag(zone, ZONE_CONGESTED);
}
 
+   /* None of the zones had enough free memory for compaction. */
+   if (compaction_needs_memory)
+   goto loop_again;
+
if (zones_need_compaction)
compact_pgdat(pgdat, order);
}