[ 
https://issues.apache.org/jira/browse/HDFS-1401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12970061#action_12970061
 ] 

Konstantin Boudnik commented on HDFS-1401:
------------------------------------------

I have set 'ulimit -n 600', and here is what I now see pretty much constantly:

{noformat}
2010-12-09 18:52:20,872 DEBUG hdfs.DFSClient (DFSOutputStream.java:run(496)) - 
DataStreamer block blk_4177412089361302133_1001 sending packet packet seqno:365 
offsetInBlock:373760 lastPacketInBlock:false lastByteOffsetInBlock: 375150
2010-12-09 18:52:20,872 INFO  hdfs.DFSClient 
(DFSInputStream.java:blockSeekTo(413)) - Failed to connect to /127.0.0.1:57014, 
add to deadNodes and continue
java.net.SocketException: Too many open files
        at sun.nio.ch.Net.socket0(Native Method)
        at sun.nio.ch.Net.socket(Net.java:97)
        at sun.nio.ch.SocketChannelImpl.<init>(SocketChannelImpl.java:84)
        at 
sun.nio.ch.SelectorProviderImpl.openSocketChannel(SelectorProviderImpl.java:37)
        at java.nio.channels.SocketChannel.open(SocketChannel.java:105)
        at 
org.apache.hadoop.net.StandardSocketFactory.createSocket(StandardSocketFactory.java:63)
        at 
org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:384)
        at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:525)
        at java.io.DataInputStream.read(DataInputStream.java:83)
        at 
org.apache.hadoop.hdfs.TestFileConcurrentReader.tailFile(TestFileConcurrentReader.java:456)
        at 
org.apache.hadoop.hdfs.TestFileConcurrentReader.access$300(TestFileConcurrentReader.java:50)
        at 
org.apache.hadoop.hdfs.TestFileConcurrentReader$4.run(TestFileConcurrentReader.java:395)
        at java.lang.Thread.run(Thread.java:662)
2010-12-09 18:52:20,872 INFO  hdfs.DFSClient 
(DFSInputStream.java:chooseDataNode(577)) - Could not obtain block 
blk_4177412089361302133_1001 from any node: java.io.IOException: No live nodes 
contain current block. Will get new block locations from namenode and retry...
2010-12-09 18:52:20,872 WARN  hdfs.DFSClient 
(DFSInputStream.java:chooseDataNode(592)) - DFS chooseDataNode: got # 1 
IOException, will wait for 1093.6147581144983 msec.
{noformat}

It seems that a block can't be read from a DN.
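For reference, the descriptor-limit setup above can be reproduced like this; the `pgrep` pattern for finding the test JVM is an assumption, so it is left commented out:

```shell
# Sketch: lower the per-shell open-file limit as in the comment above,
# then verify it took effect. Any JVM started from this shell inherits
# the limit, which is what triggers "Too many open files".
ulimit -n 600     # cap open file descriptors for this shell
ulimit -n         # print the new soft limit (should show 600)

# Hypothetical check of fd usage while the test runs (pid lookup is an
# assumption about the process name):
# lsof -p "$(pgrep -f TestFileConcurrentReader)" | wc -l
```

If the descriptor count approaches the limit while the test tails the file, the `SocketException` in the log above is the expected failure mode.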

> TestFileConcurrentReader test case is still timing out / failing
> ----------------------------------------------------------------
>
>                 Key: HDFS-1401
>                 URL: https://issues.apache.org/jira/browse/HDFS-1401
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs client
>    Affects Versions: 0.22.0
>            Reporter: Tanping Wang
>            Priority: Critical
>         Attachments: HDFS-1401.patch
>
>
> The unit test case TestFileConcurrentReader, after its most recent fix in 
> HDFS-1310, still times out when using Java 1.6.0_07; with that version the 
> test case simply hangs. On the Apache Hudson build (which possibly uses a 
> later update of Java) this test case has produced inconsistent results: it 
> sometimes passes and sometimes fails. For example, there is no effective 
> change between the recent builds 423, 424, and 425, yet the test case 
> failed on build 424 and passed on build 425.
> Build 424 (test failed):
> https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/424/testReport/org.apache.hadoop.hdfs/TestFileConcurrentReader/
> Build 425 (test passed):
> https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/425/testReport/org.apache.hadoop.hdfs/TestFileConcurrentReader/

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
