From: Glauber Costa <[email protected]>

In very low free kernel memory situations, it may be the case that we
have fewer objects to free than our initial batch size. If this is the
case, it is better to shrink those and open space for the new workload
than to keep them and fail the new allocations.
In particular, we are concerned with the direct reclaim case for memcg.
Although this same technique can be applied to other situations just as
well, we will start conservative and apply it for that case, which is
the one that matters the most.

Signed-off-by: Glauber Costa <[email protected]>
Signed-off-by: Vladimir Davydov <[email protected]>
Cc: Dave Chinner <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Rik van Riel <[email protected]>
---
 mm/vmscan.c | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 1997813..b2a5be9 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -281,17 +281,22 @@ shrink_slab_node(struct shrink_control *shrinkctl, struct shrinker *shrinker,
 					nr_pages_scanned, lru_pages,
 					max_pass, delta, total_scan);
 
-	while (total_scan >= batch_size) {
+	while (total_scan > 0) {
 		unsigned long ret;
+		unsigned long nr_to_scan = min(batch_size, total_scan);
 
-		shrinkctl->nr_to_scan = batch_size;
+		if (!shrinkctl->target_mem_cgroup &&
+		    total_scan < batch_size)
+			break;
+
+		shrinkctl->nr_to_scan = nr_to_scan;
 		ret = shrinker->scan_objects(shrinker, shrinkctl);
 		if (ret == SHRINK_STOP)
 			break;
 		freed += ret;
 
-		count_vm_events(SLABS_SCANNED, batch_size);
-		total_scan -= batch_size;
+		count_vm_events(SLABS_SCANNED, nr_to_scan);
+		total_scan -= nr_to_scan;
 
 		cond_resched();
 	}
-- 
1.7.10.4
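
[Editor's note: to make the behavioural change easier to follow, below is a
minimal userspace sketch of the loop this patch produces. It is not kernel
code: the shrinker callback is stubbed to free everything it scans, and the
shrinkctl->target_mem_cgroup check is reduced to a plain boolean flag. With
100 objects and a batch size of 128, the old loop (and global reclaim after
the patch) frees nothing, while memcg direct reclaim drains the sub-batch
remainder.]

#include <stdio.h>
#include <stdbool.h>

static unsigned long min_ul(unsigned long a, unsigned long b)
{
	return a < b ? a : b;
}

/* Stub shrinker callback: pretend every scanned object is freed. */
static unsigned long scan_objects(unsigned long nr_to_scan)
{
	return nr_to_scan;
}

static unsigned long shrink(unsigned long total_scan,
			    unsigned long batch_size, bool memcg_reclaim)
{
	unsigned long freed = 0;

	/* Before the patch this read: while (total_scan >= batch_size) */
	while (total_scan > 0) {
		unsigned long nr_to_scan = min_ul(batch_size, total_scan);

		/*
		 * Only memcg direct reclaim drains a remainder smaller
		 * than the batch; global reclaim keeps the old
		 * full-batches-only behaviour.
		 */
		if (!memcg_reclaim && total_scan < batch_size)
			break;

		freed += scan_objects(nr_to_scan);
		total_scan -= nr_to_scan;
	}
	return freed;
}

int main(void)
{
	printf("global reclaim: freed %lu\n", shrink(100, 128, false)); /* 0 */
	printf("memcg reclaim:  freed %lu\n", shrink(100, 128, true));  /* 100 */
	return 0;
}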
