Tetsuo Handa wrote:
> Tetsuo Handa wrote:
> > I got OOM killers while running heavy disk I/O (extracting the kernel
> > source, running lxr's genxref command). (Environment: 4 CPUs / 2048MB
> > RAM / no swap / XFS)
> > Do you think these OOM killers are reasonable? Is the allocator too
> > weak against fragmentation?
>
> Since I cannot establish the workload that caused December 24's natural
> OOM killers, I used the following stressor to generate a similar
> situation.
>

I have come to feel that I am observing a different problem, one that is
currently hidden behind the "too small to fail" memory-allocation rule.
That is, tasks requesting order > 0 pages continuously lose the
competition when tasks requesting order = 0 pages dominate, because
reclaimed pages are stolen by the order = 0 requests before they can be
merged into order > 0 pages (or, possibly, because order > 0 pages are
immediately split back into order = 0 pages to satisfy the order = 0
requests).
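
The following toy userspace program (entirely my own model, not kernel
code; the bitmap, the stealing policy, and all numbers in it are
assumptions made for illustration) shows the shape of the competition I
suspect: reclaim frees single random pages, order = 0 allocators steal
most of them right away, and an order = 2 request must find 4 aligned
contiguous free pages among whatever is left.

/* Toy model of order = 0 requests stealing reclaimed pages. */
#include <stdio.h>
#include <stdlib.h>

#define NR_PAGES 1024

static char page_free[NR_PAGES];	/* 1 = free, 0 = in use */

static int order0_alloc(void)		/* grab any single free page */
{
	for (int i = 0; i < NR_PAGES; i++) {
		if (page_free[i]) {
			page_free[i] = 0;
			return i;
		}
	}
	return -1;
}

static int order2_alloc(void)		/* need 4 aligned free pages */
{
	for (int i = 0; i + 3 < NR_PAGES; i += 4) {
		if (page_free[i] && page_free[i + 1] &&
		    page_free[i + 2] && page_free[i + 3]) {
			page_free[i] = page_free[i + 1] = 0;
			page_free[i + 2] = page_free[i + 3] = 0;
			return i;
		}
	}
	return -1;
}

int main(void)
{
	int order2_ok = 0, order2_fail = 0;

	srand(0);
	for (int round = 0; round < 10000; round++) {
		/* reclaim frees 4 random pages per round ... */
		for (int n = 0; n < 4; n++)
			page_free[rand() % NR_PAGES] = 1;
		/* ... but order = 0 requests steal 3 of them first */
		for (int n = 0; n < 3; n++)
			order0_alloc();
		/* the order = 2 request rarely finds 4 pages together */
		if (order2_alloc() >= 0)
			order2_ok++;
		else
			order2_fail++;
	}
	printf("order-2: %d succeeded, %d failed\n", order2_ok, order2_fail);
	return 0;
}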

Currently, order <= PAGE_ALLOC_COSTLY_ORDER allocations implicitly retry
unless the allocating task is chosen by the OOM killer. Therefore, even
if a task requesting order = 2 pages loses the competition to tasks
requesting order = 0 pages, the order = 2 allocation request is
implicitly retried and the OOM killer is not invoked (though there
remains the problem that tasks requesting order > 0 pages can stall for
as long as tasks requesting order = 0 pages dominate).
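
Here is a compilable sketch of that implicit retry (not the actual
mm/page_alloc.c code; the function bodies below are stubs I made up, and
only PAGE_ALLOC_COSTLY_ORDER's value matches the kernel's):

#include <stdio.h>
#include <stdlib.h>

#define PAGE_ALLOC_COSTLY_ORDER 3	/* same value as the kernel's */

static int rounds_lost;			/* stub state for illustration */

/* Stub: the allocation fails for the first 100 rounds, modeling
 * reclaimed pages being stolen by concurrent order = 0 requests. */
static void *get_page_from_freelist(unsigned int order)
{
	if (rounds_lost < 100) {
		rounds_lost++;
		return NULL;
	}
	return malloc(4096UL << order);
}

static void try_to_free_pages(void)
{
	/* reclaim makes progress here, but in this model the reclaimed
	 * pages have already been consumed by order = 0 requests */
}

static void *alloc_pages_slowpath(unsigned int order)
{
	for (;;) {
		void *page = get_page_from_freelist(order);

		if (page)
			return page;
		try_to_free_pages();
		if (order > PAGE_ALLOC_COSTLY_ORDER)
			return NULL;	/* only costly orders may fail */
		/* order <= 3: implicit retry; no OOM killer from here */
	}
}

int main(void)
{
	void *page = alloc_pages_slowpath(2);

	printf("order-2 allocation %s after losing %d rounds\n",
	       page ? "succeeded" : "stalled", rounds_lost);
	free(page);
	return 0;
}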

But this patchset introduces a limit of 16 retries. Thus, if a task
requesting order = 2 pages loses the competition 16 times in a row to
tasks requesting order = 0 pages, it invokes the OOM killer. To avoid
the OOM killer, we need to make sure that pages reclaimed for order > 0
allocations are not stolen by tasks requesting order = 0 allocations.
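
And a sketch of my reading of the new give-up condition (the function
and variable names here are illustrative, not copied from the patch;
only the cap of 16 comes from the patchset):

#include <stdbool.h>
#include <stdio.h>

#define PAGE_ALLOC_COSTLY_ORDER 3
#define MAX_RECLAIM_RETRIES 16

/* Return true to retry reclaim, false to invoke the OOM killer. */
static bool should_retry(unsigned int order, int no_progress_loops)
{
	if (order > PAGE_ALLOC_COSTLY_ORDER)
		return false;	/* costly orders simply fail */
	/*
	 * A task whose reclaimed pages were stolen by order = 0
	 * allocators in each of the last 16 rounds stops retrying and
	 * calls the OOM killer, even though reclaim itself was making
	 * progress the whole time.
	 */
	return no_progress_loops < MAX_RECLAIM_RETRIES;
}

int main(void)
{
	for (int loops = 14; loops <= 17; loops++)
		printf("order=2, lost rounds=%d -> %s\n", loops,
		       should_retry(2, loops) ? "retry" : "invoke OOM killer");
	return 0;
}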

Does this analysis sound plausible?