fallback_alloc is called on kmalloc if the preferred node doesn't have
free or partial slabs and there are no pages on the node's free list
(GFP_THISNODE allocations fail). Before invoking the reclaimer, it tries
to locate a free or partial slab on other allowed nodes' lists. While
iterating over the preferred node's zonelist, it skips those zones for
which cpuset_zone_allowed_hardwall returns false. That means that for a
task bound to a specific node using cpusets, fallback_alloc will always
ignore free slabs on other nodes and go directly to the reclaimer,
which, however, may allocate from other nodes if cpuset.mem_hardwall is
unset (the default). As a result, lists of free slabs may grow without
bound on other nodes, which is bad, because inactive slabs are evicted
only by cache_reap, at a very slow rate, and cannot be dropped
forcefully.

To reproduce the issue, run a process that will walk over a directory
tree with lots of files inside a cpuset bound to a node that constantly
experiences memory pressure. Look at num_slabs vs active_slabs growth as
reported by /proc/slabinfo.
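
To make the comparison easier, the two slab counters can be pulled out
of the slabinfo output with a small helper. This is only a sketch: the
field positions assume the slabinfo 2.x line format, where the
"slabdata" section ends each line with <active_slabs> <num_slabs>
<sharedavail>.

```shell
# Print "active_slabs num_slabs" for a given cache name, reading
# /proc/slabinfo-style lines on stdin (slabinfo 2.x layout assumed:
# "... : slabdata <active_slabs> <num_slabs> <sharedavail>").
slab_stats() {
	awk -v cache="$1" '$1 == cache { print $(NF-2), $(NF-1) }'
}

# Example: watch the dentry cache while the reproducer runs.
slab_stats dentry < /proc/slabinfo
```

On an affected kernel, num_slabs keeps climbing while active_slabs
stays roughly flat once the cpuset-bound task starts falling back.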

We should use cpuset_zone_allowed_softwall in fallback_alloc. Since it
can sleep, we only call it for __GFP_WAIT allocations. For atomic
allocations we simply ignore cpusets, which is in agreement with the
cpuset documentation (see the comment to __cpuset_node_allowed_softwall).

Signed-off-by: Vladimir Davydov <vdavy...@parallels.com>
Cc: Christoph Lameter <c...@linux.com>
Cc: Pekka Enberg <penb...@kernel.org>
Cc: David Rientjes <rient...@google.com>
Cc: Joonsoo Kim <iamjoonsoo....@lge.com>
---
 mm/slab.c |   23 ++++++++++++++++-------
 1 file changed, 16 insertions(+), 7 deletions(-)

diff --git a/mm/slab.c b/mm/slab.c
index 2e60bf3dedbb..1d77a4df7ee1 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3049,14 +3049,23 @@ retry:
        for_each_zone_zonelist(zone, z, zonelist, high_zoneidx) {
                nid = zone_to_nid(zone);
 
-               if (cpuset_zone_allowed_hardwall(zone, flags) &&
-                       get_node(cache, nid) &&
-                       get_node(cache, nid)->free_objects) {
-                               obj = ____cache_alloc_node(cache,
-                                       flags | GFP_THISNODE, nid);
-                               if (obj)
-                                       break;
+               if (!get_node(cache, nid) ||
+                   !get_node(cache, nid)->free_objects)
+                       continue;
+
+               if (local_flags & __GFP_WAIT) {
+                       bool allowed;
+
+                       local_irq_enable();
+                       allowed = cpuset_zone_allowed_softwall(zone, flags);
+                       local_irq_disable();
+                       if (!allowed)
+                               continue;
                }
+
+               obj = ____cache_alloc_node(cache, flags | GFP_THISNODE, nid);
+               if (obj)
+                       break;
        }
 
        if (!obj) {
-- 
1.7.10.4
