Greetings,

We are starting to support HDP on Ubuntu 12.04 LTS servers, but we are hitting 
the "open file limits" problem. Unfortunately, setting this system-wide on 
Ubuntu seems difficult: no matter what we try, YARN tasks always report 
ulimit -n as 1024 (or, if we attempt to override it, 4096). Something is 
setting a system-wide hard open-file limit of 4096 before the ResourceManager 
and NodeManagers start, and our tasks inherit that limit. This causes all 
sorts of problems; as you probably know, Hadoop really wants this limit to be 
65536 or more.

What I want is to change the system-wide default open-file limit so that the 
Hadoop services (and everything else) pick it up. How do we do that?
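For reference, a process's effective limit can be read from /proc/<pid>/limits, which is more reliable than running ulimit -n in a separate shell (the pgrep pattern below is just a guess at how the NodeManager process is named; adjust it for your setup):

```shell
# What limit does the current shell have?
grep 'Max open files' /proc/self/limits

# What limit is the running daemon actually under?
# (pgrep pattern is an assumption -- match your NodeManager's command line)
pid=$(pgrep -f NodeManager | head -n1)
[ -n "$pid" ] && grep 'Max open files' "/proc/$pid/limits"
```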

We've tried all of the obvious suggestions from Stack Overflow etc., like:


# vi /etc/security/limits.conf

* soft nofile 65536
* hard nofile 65536
root soft nofile 65536
root hard nofile 65536

But none of this seems to affect the RM/NM limits.
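Our current guess is that /etc/security/limits.conf is only applied by pam_limits during a PAM login session, so daemons started by init/Upstart at boot never see it. If that's right, something like the following might be needed; the file paths and job names below are assumptions for the HDP packaging on 12.04, not things we've confirmed:

```shell
# 1) Make sure PAM actually loads pam_limits for sessions, e.g. this line in
#    /etc/pam.d/common-session (and common-session-noninteractive):
#        session required pam_limits.so

# 2) For services started by an /etc/init.d script, raise the limit in the
#    script itself, before the daemon is launched (script name is a guess):
#        # near the top of /etc/init.d/hadoop-yarn-resourcemanager
#        ulimit -n 65536

# 3) For native Upstart jobs, add a limit stanza to the job file
#    (hypothetical job name):
#        # /etc/init/my-hadoop-service.conf
#        limit nofile 65536 65536
```

Is that the right direction, or is there a cleaner way to set a true system-wide default?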

Thanks
john
