On 04/20/18 10:20, Roman Gushchin wrote:
> 
> Hi, Randy!
> 
> An updated version below.
> 
> Thanks!

OK, looks good now. Thanks.

FWIW:
Reviewed-by: Randy Dunlap <rdun...@infradead.org> # for Documentation/ only.

> ------------------------------------------------------------
> 
> 
> From 2225fa0b3400431dd803f206b20a9344f0dfcd0a Mon Sep 17 00:00:00 2001
> From: Roman Gushchin <g...@fb.com>
> Date: Fri, 20 Apr 2018 15:24:44 +0100
> Subject: [PATCH 1/2] mm: introduce memory.min
> 
> Memory controller implements the memory.low best-effort memory
> protection mechanism, which works perfectly in many cases and
> allows protecting working sets of important workloads from
> sudden reclaim.
> 
> But it's semantics has a significant limitation: it works
> only until there is a supply of reclaimable memory.
> This makes it pretty useless against any sort of slow memory
> leaks or memory usage increases. This is especially true
> for swapless systems. If swap is enabled, memory soft protection
> effectively postpones problems, allowing a leaking application
> to fill all swap area, which makes no sense.
> The only effective way to guarantee the memory protection
> in this case is to invoke the OOM killer.
> 
> This patch introduces the memory.min interface for cgroup v2
> memory controller. It works very similarly to memory.low
> (sharing the same hierarchical behavior), except that it's
> not disabled if there is no more reclaimable memory in the system.
> 
> Signed-off-by: Roman Gushchin <g...@fb.com>
> Cc: Johannes Weiner <han...@cmpxchg.org>
> Cc: Michal Hocko <mho...@suse.com>
> Cc: Vladimir Davydov <vdavydov....@gmail.com>
> Cc: Tejun Heo <t...@kernel.org>
> ---
>  Documentation/cgroup-v2.txt  | 24 ++++++++++-
>  include/linux/memcontrol.h   | 15 ++++++-
>  include/linux/page_counter.h | 11 ++++-
>  mm/memcontrol.c              | 99 
> ++++++++++++++++++++++++++++++++++++--------
>  mm/page_counter.c            | 63 ++++++++++++++++++++--------
>  mm/vmscan.c                  | 19 ++++++++-
>  6 files changed, 191 insertions(+), 40 deletions(-)
> 
> diff --git a/Documentation/cgroup-v2.txt b/Documentation/cgroup-v2.txt
> index 657fe1769c75..a413118b9c29 100644
> --- a/Documentation/cgroup-v2.txt
> +++ b/Documentation/cgroup-v2.txt
> @@ -1002,6 +1002,26 @@ PAGE_SIZE multiple when read back.
>       The total amount of memory currently being used by the cgroup
>       and its descendants.
>  
> +  memory.min
> +     A read-write single value file which exists on non-root
> +     cgroups.  The default is "0".
> +
> +     Hard memory protection.  If the memory usage of a cgroup
> +     is within its effective min boundary, the cgroup's memory
> +     won't be reclaimed under any conditions. If there is no
> +     unprotected reclaimable memory available, OOM killer
> +     is invoked.
> +
> +     Effective low boundary is limited by memory.min values of
> +     all ancestor cgroups. If there is memory.min overcommitment
> +     (child cgroup or cgroups are requiring more protected memory
> +     than parent will allow), then each child cgroup will get
> +     the part of parent's protection proportional to its
> +     actual memory usage below memory.min.
> +
> +     Putting more memory than generally available under this
> +     protection is discouraged and may lead to constant OOMs.
> +
>    memory.low
>       A read-write single value file which exists on non-root
>       cgroups.  The default is "0".
> @@ -1013,9 +1033,9 @@ PAGE_SIZE multiple when read back.
>  
>       Effective low boundary is limited by memory.low values of
>       all ancestor cgroups. If there is memory.low overcommitment
> -     (child cgroup or cgroups are requiring more protected memory,
> +     (child cgroup or cgroups are requiring more protected memory
>       than parent will allow), then each child cgroup will get
> -     the part of parent's protection proportional to the its
> +     the part of parent's protection proportional to its
>       actual memory usage below memory.low.
>  
>       Putting more memory than generally available under this



-- 
~Randy

Reply via email to