[ http://issues.apache.org/jira/browse/HADOOP-210?page=comments#action_12379107 ]

Owen O'Malley commented on HADOOP-210:
--------------------------------------

The file handle limit is fine at 32768.
The kernel is 2.6.9, so it should be fine too.

The problem seems to be that the default thread stack size is 512k, which 
works out to roughly a gigabyte of reserved stack for his 2036 threads 
(2036 * 512k ~= 1018MB). Mahadev is going to take the stack size on the 
Listener threads down to 128k, which should take the pressure off.
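
For illustration, here is a minimal sketch (not the actual HADOOP-210 patch): 
the JDK's four-argument Thread constructor takes a stackSize hint, which is 
one way to request a smaller stack for the handler threads. The class and 
helper names below are hypothetical.

    // A minimal sketch, assuming the fix goes through the four-argument
    // java.lang.Thread constructor; the stackSize argument is only a hint
    // that the VM may round up or ignore.
    public class SmallStackThreadDemo {

        // 128k, the value proposed above for the Listener threads.
        private static final long HANDLER_STACK_SIZE = 128 * 1024;

        // Hypothetical helper; the real Server.java change may differ.
        static Thread newHandlerThread(Runnable target, String name) {
            // A null ThreadGroup means "inherit the current thread's group".
            return new Thread(null, target, name, HANDLER_STACK_SIZE);
        }

        public static void main(String[] args) throws InterruptedException {
            Thread t = newHandlerThread(new Runnable() {
                public void run() {
                    System.out.println("running with a reduced stack request");
                }
            }, "listener-handler-0");
            t.start();
            t.join();
        }
    }

The same pressure can also be relieved JVM-wide with the -Xss flag (e.g. 
java -Xss128k ...), at the cost of shrinking the stack of every thread 
rather than just the connection handlers.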

> Namenode not able to accept connections
> ---------------------------------------
>
>          Key: HADOOP-210
>          URL: http://issues.apache.org/jira/browse/HADOOP-210
>      Project: Hadoop
>         Type: Bug
>   Components: dfs
>  Environment: linux
>     Reporter: Mahadev konar
>     Assignee: Mahadev konar
>
> I am running Owen's random writer on a 627-node cluster (writing 10GB/node).  
> After running for a while (map 12% reduce 1%) I get the following error on 
> the Namenode:
> Exception in thread "Server listener on port 60000" java.lang.OutOfMemoryError: unable to create new native thread
>         at java.lang.Thread.start0(Native Method)
>         at java.lang.Thread.start(Thread.java:574)
>         at org.apache.hadoop.ipc.Server$Listener.run(Server.java:105)
> After this, the namenode does not seem to be accepting connections from any 
> of the clients. All the DFSClient calls time out. Here is a trace for one 
> of them:
> java.net.SocketTimeoutException: timed out waiting for rpc response
>       at org.apache.hadoop.ipc.Client.call(Client.java:305)
>       at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:149)
>       at org.apache.hadoop.dfs.$Proxy1.open(Unknown Source)
>       at org.apache.hadoop.dfs.DFSClient$DFSInputStream.openInfo(DFSClient.java:419)
>       at org.apache.hadoop.dfs.DFSClient$DFSInputStream.<init>(DFSClient.java:406)
>       at org.apache.hadoop.dfs.DFSClient.open(DFSClient.java:171)
>       at org.apache.hadoop.dfs.DistributedFileSystem.openRaw(DistributedFileSystem.java:78)
>       at org.apache.hadoop.fs.FSDataInputStream$Checker.<init>(FSDataInputStream.java:46)
>       at org.apache.hadoop.fs.FSDataInputStream.<init>(FSDataInputStream.java:228)
>       at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:157)
>       at org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:43)
>       at org.apache.hadoop.mapred.MapTask.run(MapTask.java:105)
>       at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:785)
> The namenode shows around 1% CPU utilization at this point (after the 
> OutOfMemoryError has been thrown). I have profiled the NameNode and it 
> seems to be using a maximum heap size of around 57MB (which is not much), so 
> heap size does not seem to be the problem. Could it be failing due to a lack 
> of stack space? Any pointers?

