On Tue 11-10-16 14:06:43, Minchan Kim wrote:
> On Mon, Oct 10, 2016 at 09:47:31AM +0200, Michal Hocko wrote:
[...]
> > that close to OOM usually blows up later or starts thrashing very soon.
> > It is true that a particular workload might benefit from every last
> > allocatable page in the system, but [...]
On Sat 08-10-16 00:04:25, Minchan Kim wrote:
[...]
> I can show another log where the reserve is greater than 1%. See the DMA32
> zone free pages. It was a GFP_ATOMIC allocation, so it is different from what
> I posted, but the important thing is that the VM can reserve memory greater
> than 1% because of the race, which was really what [...]
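
For reference, the 1% cap Minchan refers to lives in reserve_highatomic_pageblock()
in mm/page_alloc.c: max_managed is the zone's managed_pages / 100 plus one
pageblock, and the cap is checked before a whole pageblock is added to
nr_reserved_highatomic (the unlocked pre-check is even commented as race-prone
in mainline). Below is a minimal userspace sketch of just that arithmetic; the
zone size, pageblock size, and loop are invented for illustration, and the real
function additionally takes zone->lock and rechecks the cap before reserving.

/*
 * Hypothetical userspace model of the highatomic reservation cap; NOT the
 * kernel source. It only reproduces the arithmetic pattern: the cap
 * (max_managed) is roughly 1% of the zone plus one pageblock, and it is
 * checked *before* a whole pageblock is added to nr_reserved_highatomic.
 */
#include <stdio.h>

#define PAGEBLOCK_NR_PAGES 1024UL	/* assume 4MB pageblocks, 4KB pages */

static unsigned long nr_reserved_highatomic;

/* Modeled after reserve_highatomic_pageblock()'s cap check (simplified). */
static void reserve_one_pageblock(unsigned long managed_pages)
{
	unsigned long max_managed = managed_pages / 100 + PAGEBLOCK_NR_PAGES;

	if (nr_reserved_highatomic >= max_managed)
		return;		/* cap reached, skip the reservation */

	/* A full pageblock is reserved even if this crosses max_managed. */
	nr_reserved_highatomic += PAGEBLOCK_NR_PAGES;
}

int main(void)
{
	/* Assumed zone size: ~700MB of managed pages (4KB pages). */
	unsigned long managed = 179200;
	int i;

	for (i = 0; i < 16; i++)
		reserve_one_pageblock(managed);

	printf("managed: %lu pages, 1%%: %lu pages, reserved: %lu pages\n",
	       managed, managed / 100, nr_reserved_highatomic);
	return 0;
}

Even without any race, the reserve can settle noticeably above a strict 1% of
the zone because the last reservation crosses the cap by up to a full
pageblock; a racy pre-check can push it further still.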
On Fri 07-10-16 14:45:32, Minchan Kim wrote:
> I got an OOM report from the production team with a v4.4 kernel.
> It had enough free memory but failed to allocate an order-0 page and
> finally hit an OOM kill.
> I could reproduce it easily with my test. Have a look below.
> The reason is that the free pages (19M) of the DMA32 zone are reserved for
> HIGHORDERATOMIC and are not unreserved before the OOM.
> [...]
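
The shape of the report follows from how the watermark check treats the
reserve: in v4.4, __zone_watermark_ok() subtracts zone->nr_reserved_highatomic
from the free page count for normal (non-ALLOC_HARDER) allocations, so free
memory parked in MIGRATE_HIGHATOMIC pageblocks cannot satisfy an ordinary
order-0 request. Below is a minimal userspace sketch of that subtraction; the
page counts are invented to mirror the report (~19M free, most of it sitting
in the reserve), not taken from the elided log.

/*
 * Hypothetical userspace model of the order-0 watermark check; NOT the
 * kernel source. It ignores the ALLOC_HIGH/ALLOC_HARDER adjustments and the
 * per-order free-list checks, and keeps only the highatomic subtraction.
 */
#include <stdbool.h>
#include <stdio.h>

static bool watermark_ok(long free_pages, long min, long lowmem_reserve,
			 long nr_reserved_highatomic)
{
	free_pages -= nr_reserved_highatomic;	/* reserve is off limits */
	return free_pages > min + lowmem_reserve;
}

int main(void)
{
	long free_pages = 4855;		/* ~19MB free (4KB pages), assumed */
	long min = 1595;		/* assumed min watermark */
	long reserved = 4096;		/* ~16MB held as highatomic, assumed */

	printf("free: %ld, highatomic reserve: %ld, usable: %ld, min: %ld\n",
	       free_pages, reserved, free_pages - reserved, min);
	printf("order-0 allocation %s the watermark check\n",
	       watermark_ok(free_pages, min, 0, reserved) ? "passes" : "fails");
	return 0;
}

With almost all of the free pages held in the reserve, the usable count drops
below the min watermark and the order-0 allocation fails even though the zone
looks far from empty, which is exactly the situation the thread is about.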