On Thu 15-06-17 15:03:17, David Rientjes wrote:
> On Thu, 15 Jun 2017, Michal Hocko wrote:
> 
> > > Yes, quite a bit in testing.
> > > 
> > > One oom kill shows the system to be oom:
> > > 
> > > [22999.488705] Node 0 Normal free:90484kB min:90500kB ...
> > > [22999.488711] Node 1 Normal free:91536kB min:91948kB ...
> > > 
> > > followed by one or more unnecessary oom kills showing the oom killer
> > > racing with memory freeing of the victim:
> > > 
> > > [22999.510329] Node 0 Normal free:229588kB min:90500kB ...
> > > [22999.510334] Node 1 Normal free:600036kB min:91948kB ...
> > > 
> > > The patch is absolutely required for us to prevent continuous oom killing 
> > > of processes after a single process has been oom killed and its memory is 
> > > in the process of being freed.
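
The numbers above show the race: the first kill happens with free below
min (90484kB < 90500kB on Node 0), while the follow-up kills fire after
the victim's memory has largely been returned (229588kB and 600036kB
free, well above min). In rough terms, victim selection does not wait
for a previous victim whose memory is still being freed. The sketch
below only illustrates that general idea with hypothetical helpers
(oom_victim_still_exiting(), select_and_kill_victim()); it is neither
the patch under discussion nor the actual kernel code.

#include <stdbool.h>

struct oom_control;

/* Hypothetical: true while a previously killed victim still holds memory. */
extern bool oom_victim_still_exiting(void);

/* Hypothetical: pick a new victim and send it SIGKILL. */
extern void select_and_kill_victim(struct oom_control *oc);

static bool out_of_memory_sketch(struct oom_control *oc)
{
	/*
	 * Without a check like this, an allocator that hits the min
	 * watermark while the first victim's memory is still being freed
	 * in __mmput() kills another task, even though enough memory is
	 * about to become free (the free:229588kB min:90500kB case above).
	 */
	if (oom_victim_still_exiting())
		return true;	/* back off and let the allocation retry */

	select_and_kill_victim(oc);
	return true;
}

Whether such a back-off condition can be cleared reliably, so that it
does not itself become a lockup, is what the rest of the thread argues
about.
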
> > 
> > OK, could you play with the patch/idea suggested in
> > http://lkml.kernel.org/r/20170615122031.gl1...@dhcp22.suse.cz?
> > 
> 
> I cannot. I am trying to unblock a stable kernel release to my production
> systems; the problem is obviously fixed by this patch, and I cannot
> experiment with uncompiled and untested patches that introduce otherwise
> unnecessary locking into the __mmput() path and are based on speculation
> rather than hard data that __mmput() stalls for the oom victim's mm for
> some reason.  I was hoping that this fix could make it in time for 4.12,
> since 4.12 kills 1-4 processes unnecessarily for each oom condition; I can
> then review any tested solution you may propose at a later time.

I am sorry, but I have worked really hard to make the oom reaper a
reliable way to get rid of all the potential oom lockups, and I do not
want to reintroduce another potential lockup now. I also do not see why
any solution should be rushed in. I have proposed a way forward, and
unless it is clear that it is not viable, I simply do not agree with any
partial workarounds or shortcuts.
-- 
Michal Hocko
SUSE Labs
