The most interesting error, in my eyes, is the "too many open files" one. My ulimit is 1024. How high should it be? I don't think I have that many files open in my mappers; they should only be operating on a single file at a time. I can run the job again and capture an lsof if that would be useful.

Thanks for taking the time to reply, by the way.
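As an aside, instead of a full lsof, one quick way to count a task's open descriptors on Linux is to list /proc/<pid>/fd. A minimal sketch (the pid argument is hypothetical; pass whichever task JVM you want to inspect, or "self" from inside the process):

    import java.io.File;

    /** Counts open file descriptors of a process by listing
     *  /proc/<pid>/fd (Linux only). */
    public class FdCount {
        public static void main(String[] args) {
            String pid = args.length > 0 ? args[0] : "self";
            File fdDir = new File("/proc/" + pid + "/fd");
            String[] fds = fdDir.list();
            if (fds == null) {
                System.err.println("Cannot read " + fdDir
                        + " (bad pid, or not a Linux /proc filesystem?)");
                return;
            }
            // Note: listing the directory briefly opens one fd itself.
            System.out.println("Open fds for pid " + pid + ": " + fds.length);
        }
    }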

For the current implementation, you need around 3x that many fds. 1024 is too low for Hadoop. Hadoop's fd requirement will come down, but 1024 would be too low anyway.

Raghu.
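For checking how close a JVM actually gets to the limit from the inside, the Sun/Oracle JDK exposes open and maximum fd counts through com.sun.management.UnixOperatingSystemMXBean. A minimal sketch, assuming a Unix-like system (the class name is illustrative):

    import java.lang.management.ManagementFactory;
    import com.sun.management.UnixOperatingSystemMXBean;

    /** Prints current vs. maximum fd counts for this JVM. */
    public class FdHeadroom {
        public static void main(String[] args) {
            Object os = ManagementFactory.getOperatingSystemMXBean();
            if (os instanceof UnixOperatingSystemMXBean) {
                UnixOperatingSystemMXBean unix = (UnixOperatingSystemMXBean) os;
                System.out.println("open fds: " + unix.getOpenFileDescriptorCount()
                        + " / max: " + unix.getMaxFileDescriptorCount());
            } else {
                System.out.println("fd counts not available on this JVM/platform");
            }
        }
    }

Logging this periodically from a running task would show whether the rough 3x estimate holds for a given job before raising the limit system-wide.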
