In /etc/security/limits.conf you can change the per-user or the system-wide default limit on the number of open files.

Adding this line:

*       hard    nofile  65536

will allow any user to open up to 65536 files.
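
After the user logs back in, it is worth confirming that the new ceiling is actually visible to the JVM. A minimal sketch, assuming a JDK that exposes com.sun.management.UnixOperatingSystemMXBean (Sun-derived JDKs do; the IBM 1.5 JDK may not):

import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;
import com.sun.management.UnixOperatingSystemMXBean;

public class FdLimitCheck {
    public static void main(String[] args) {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        if (os instanceof UnixOperatingSystemMXBean) {
            // Report the file-descriptor ceiling and current usage for this JVM process
            UnixOperatingSystemMXBean unix = (UnixOperatingSystemMXBean) os;
            System.out.println("max fds:  " + unix.getMaxFileDescriptorCount());
            System.out.println("open fds: " + unix.getOpenFileDescriptorCount());
        } else {
            System.out.println("This JVM does not expose file descriptor counts");
        }
    }
}

Note that the 'hard' entry only raises the ceiling; if the reported maximum is still the old value (typically 1024), the soft limit may also need raising, either with ulimit -n or with a matching 'soft' nofile line in limits.conf.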


Keith Fisher wrote:
I'm running Hadoop version 0.17.0 on a Red Hat Enterprise Linux 4.4
box, using an IBM-provided JDK 1.5. I've configured Hadoop for a
localhost (single-node) setup.

I've written a simple test that opens and writes to files in HDFS. I close
the output stream after writing 10 bytes to each file. After 471 files,
I see an exception from DFSClient in the log4j logs for my test:

Exception in createBlockOutputStream java.io.IOException: Too many open files
Abandoning block blk_ .....
DataStreamer Exception: java.net.SocketException: Too many open files
Error recovery for block_ .... bad datanode[0]
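
For reference, a minimal sketch of the kind of test described above (a hedged illustration, not the original code, assuming the stock Hadoop 0.17 FileSystem API and a hypothetical /tmp/fdtest directory):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsOpenCloseTest {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();        // reads hadoop-site.xml from the classpath
        FileSystem fs = FileSystem.get(conf);            // connects to the configured (localhost) HDFS
        byte[] payload = new byte[10];                   // 10 bytes written to each file

        for (int i = 0; i < 1000; i++) {
            Path p = new Path("/tmp/fdtest/file-" + i);  // hypothetical test path
            FSDataOutputStream out = fs.create(p);
            try {
                out.write(payload);
            } finally {
                out.close();                             // stream is closed after every write
            }
        }
        fs.close();
    }
}

The try/finally matches the behaviour described above: each output stream is closed immediately after the 10-byte write.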

I'd appreciate any suggestions on how to resolve this problem.

Thanks.
--
Jason Venner
Attributor - Program the Web <http://www.attributor.com/>
Attributor is hiring Hadoop Wranglers and coding wizards, contact if interested