On 2019/01/07 23:38, Michal Hocko wrote:
> From: Michal Hocko <mho...@suse.com>
> 
> Tetsuo has reported [1] that a single process group memcg might easily
> swamp the log with no-eligible oom victim reports due to a race between
> the memcg charge and the oom_reaper.

This explanation is outdated. I reported that one memcg OOM killer invocation
can kill all processes in that memcg. I expect the changelog to be updated.

> 
> Thread 1              Thread2                         oom_reaper
> try_charge            try_charge
>                         mem_cgroup_out_of_memory
>                           mutex_lock(oom_lock)
>   mem_cgroup_out_of_memory
>     mutex_lock(oom_lock)
>                             out_of_memory
>                               select_bad_process
>                               oom_kill_process(current)
>                                 wake_oom_reaper
>                                                         oom_reap_task
>                                                         MMF_OOM_SKIP->victim
>                           mutex_unlock(oom_lock)
>     out_of_memory
>       select_bad_process # no task
> 
> If Thread1 didn't race it would bail out from try_charge and force the
> charge. We can achieve the same by checking tsk_is_oom_victim inside
> the oom_lock and therefore close the race.
> 
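For reference, the bail-out in try_charge() which this paragraph refers to is
roughly the following check (quoting from memory, so the exact condition might
differ in the current tree):

  /*
   * Dying tasks and tasks which already are OOM victims are allowed to
   * bypass the charge so that they can exit quickly and release memory,
   * instead of looping in the charge path.
   */
  if (unlikely(tsk_is_oom_victim(current) ||
               fatal_signal_pending(current) ||
               current->flags & PF_EXITING))
          goto force;

The patch below makes such a thread bail out under oom_lock as well, instead
of selecting another victim.
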
> [1] http://lkml.kernel.org/r/bb2074c0-34fe-8c2c-1c7d-db71338f1...@i-love.sakura.ne.jp
> Signed-off-by: Michal Hocko <mho...@suse.com>
> ---
>  mm/memcontrol.c | 14 +++++++++++++-
>  1 file changed, 13 insertions(+), 1 deletion(-)
> 
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index af7f18b32389..90eb2e2093e7 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -1387,10 +1387,22 @@ static bool mem_cgroup_out_of_memory(struct mem_cgroup *memcg, gfp_t gfp_mask,
>               .gfp_mask = gfp_mask,
>               .order = order,
>       };
> -     bool ret;
> +     bool ret = true;
>  
>       mutex_lock(&oom_lock);

And because of "[PATCH 1/2] mm, oom: marks all killed tasks as oom
victims", mark_oom_victim() will be called on the current thread even if
we used mutex_lock_killable(&oom_lock) here, as you said:

  mutex_lock_killable would take care of exiting task already. I would
  then still prefer to check for mark_oom_victim because that is not racy
  with the exit path clearing signals. I can update my patch to use
  _killable lock variant if we are really going with the memcg specific
  fix.

If the current thread is not yet killed by the OOM killer but can terminate
without invoking the OOM killer, using mutex_lock_killable(&oom_lock) here
saves some processes. What is the race you are referring to by "racy with the
exit path clearing signals"?
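
In other words, I am thinking about something like the sketch below. This is
only an illustration of the _killable variant (not a tested patch); the
tsk_is_oom_victim() check and the bail-out return value follow your patch so
that the caller still forces the charge:

  static bool mem_cgroup_out_of_memory(struct mem_cgroup *memcg, gfp_t gfp_mask,
                                       int order)
  {
          struct oom_control oc = {
                  .memcg = memcg,
                  .gfp_mask = gfp_mask,
                  .order = order,
          };
          bool ret = true;

          /* Give up if current gets a fatal signal while waiting for oom_lock. */
          if (mutex_lock_killable(&oom_lock))
                  return ret;
          /*
           * Same check as in your patch: a thread which already became an OOM
           * victim (e.g. via the oom_reaper race above) must not go on to
           * select yet another victim.
           */
          if (!tsk_is_oom_victim(current))
                  ret = out_of_memory(&oc);
          mutex_unlock(&oom_lock);
          return ret;
  }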

> +
> +     /*
> +      * multi-threaded tasks might race with oom_reaper and gain
> +      * MMF_OOM_SKIP before reaching out_of_memory which can lead
> +      * to out_of_memory failure if the task is the last one in
> +      * memcg which would be a false positive failure reported
> +      */

Not only out_of_memory() failure. The current thread also needlessly tries to
select the next OOM victim. The out_of_memory() failure is merely the result
of the no-eligible-candidate case.
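
For completeness, the failure path being discussed is roughly this part of
out_of_memory() (again from memory, details may differ):

  select_bad_process(oc);
  /* Found nothing?!?! */
  if (!oc->chosen) {
          dump_header(oc, NULL);
          pr_warn("Out of memory and no killable processes...\n");
          /* a memcg OOM does not panic here; the call simply fails */
  }
  ...
  return !!oc->chosen;

That "no killable processes" report is what ends up flooding the log once the
last task in the memcg has been reaped.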

> +     if (tsk_is_oom_victim(current))
> +             goto unlock;
> +
>       ret = out_of_memory(&oc);
> +
> +unlock:
>       mutex_unlock(&oom_lock);
>       return ret;
>  }
> 
