https://issues.apache.org/jira/browse/HADOOP-4346 might explain this.
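If you want to confirm that fd exhaustion is what you're hitting, watch the client JVM's descriptor usage while the job runs. A minimal sketch (this assumes a Sun JVM on Unix, where the com.sun.management bean is available; on other JVMs you can count the entries under /proc/<pid>/fd instead):

import java.lang.management.ManagementFactory;
import com.sun.management.UnixOperatingSystemMXBean;

public class FdCheck {
    public static void main(String[] args) {
        // Sun JVMs on Unix expose per-process fd counters through JMX.
        UnixOperatingSystemMXBean os = (UnixOperatingSystemMXBean)
                ManagementFactory.getOperatingSystemMXBean();
        System.out.println("open fds: " + os.getOpenFileDescriptorCount()
                + " / limit: " + os.getMaxFileDescriptorCount());
    }
}

If the open count climbs toward the limit right before the errors start, the jira above is the likely explanation.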

Raghu.

Bryan Duxbury wrote:
> Ok, so, what might I do next to try to diagnose this? Does it sound like it might be an HDFS/MapReduce bug, or should I pore over my own code first?
>
> Also, did any of the other exceptions look interesting?
>
> -Bryan
>
> On Sep 29, 2008, at 10:40 AM, Raghu Angadi wrote:
>
>> Raghu Angadi wrote:
>>> Doug Cutting wrote:
>>>> Raghu Angadi wrote:
>>>>> For the current implementation, you need around 3x fds. 1024 is too low for Hadoop. The Hadoop requirement will come down, but 1024 would be too low anyway.
>>>>
>>>> 1024 is the default on many systems. Shouldn't we try to make the default configuration work well there?
>>>
>>> How can 1024 work well for different kinds of loads?
>>
>> oops! 1024 should work for anyone "working with just one file" for any load. I didn't notice that. My comment can be ignored.
>>
>> Raghu.
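To put the "3x fds" figure from the quoted thread in concrete terms: each open DFS stream involves not just a socket but also NIO selector machinery, and on Linux a Selector by itself holds several descriptors (an epoll instance plus a wakeup pipe, or just the pipe with the older poll-based implementation). A rough sketch that measures this (the exact count depends on the JVM's Selector implementation; again Sun-JVM-specific):

import java.lang.management.ManagementFactory;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;
import com.sun.management.UnixOperatingSystemMXBean;

public class SelectorFdCost {
    public static void main(String[] args) throws Exception {
        UnixOperatingSystemMXBean os = (UnixOperatingSystemMXBean)
                ManagementFactory.getOperatingSystemMXBean();

        long before = os.getOpenFileDescriptorCount();
        SocketChannel ch = SocketChannel.open();  // the socket itself: 1 fd
        Selector sel = Selector.open();           // wakeup pipe (+ epoll fd on Linux)
        long after = os.getOpenFileDescriptorCount();

        // Typically prints 3-4 on Linux, i.e. roughly 3x the single
        // descriptor you might expect per open stream.
        System.out.println("fds per socket+selector: " + (after - before));

        sel.close();
        ch.close();
    }
}

At roughly 3 descriptors per stream, a 1024 ulimit leaves room for only about 300 concurrent streams before counting the JVM's own jars, logs, and other open files. That is why 1024 is too low for fd-heavy loads even though, as noted above, it is fine when working with just one file.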

