On Mon, Jul 2, 2018 at 4:19 PM Andrew Morton <a...@linux-foundation.org> wrote:
>
> Before we go and add a large amount of code to do the shrinker's job
> for it, we should get a full understanding of what's going wrong.  Is
> it because the dentry_lru had a mixture of +ve and -ve dentries?
> Should we have a separate LRU for -ve dentries?  Are we appropriately
> aging the various dentries?  etc.
>
> It could be that tuning/fixing the current code will fix whatever
> problems inspired this patchset.

So I do think that the shrinker is likely the culprit behind the oom
issues. I think it's likely worse when you try to do some kind of
containerization, and dentries are shared.

That said, I think there are likely good reasons to limit excessive
negative dentries even outside the oom issue. Even if we did a perfect
job at shrinking them and took no time at all doing so, the fact that
you can generate an effectively infinite number of negative dentries
and then pollute the dentry hash chains with them _could_ be a
performance problem.

No sane application does that, and we handle the "obvious" cases
already: ie if you create a lot of files in a deep subdirectory and
then do "rm -rf dir", we *will* throw the negative dentries away as we
remove the directories they are in. So it is unlikely to be much of a
problem in practice. But at least in theory you can generate many
millions of negative dentries just to mess with the system, and slow
down good people.

Probably not even remotely to the point of a DoS attack, but certainly
to the point of "we're wasting time".
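As a minimal sketch (not from the original mail) of how that waste is
manufactured: every failed lookup of a nonexistent name leaves a
negative dentry in the dcache, so a trivial loop of stat() calls on
made-up names is enough. The path prefix and the count below are
arbitrary illustration values.

    #include <stdio.h>
    #include <sys/stat.h>

    int main(void)
    {
            struct stat st;
            char name[64];
            long i;

            /* Each stat() of a name that doesn't exist fails with
             * ENOENT, but the lookup still caches a negative dentry
             * for that name. */
            for (i = 0; i < 10 * 1000 * 1000; i++) {
                    snprintf(name, sizeof(name),
                             "/tmp/no-such-file-%ld", i);
                    stat(name, &st);
            }
            return 0;
    }
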

So I do think that restricting negative dentries is a fine concept.
They are useful, but that doesn't mean that it makes sense to fill
memory with them.
                 Linus
