[ https://issues.apache.org/jira/browse/HDFS-693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12886235#action_12886235 ]

Cody Saunders commented on HDFS-693:
------------------------------------

Not sure how directly this is related, but since there are so many suggestions 
out there to set dfs.datanode.socket.write.timeout = 0, note that this setting 
effectively breaks down in this line of code:

long writeTimeout = HdfsConstants.WRITE_TIMEOUT_EXTENSION * nodes.length +
                            datanodeWriteTimeout;

from:

hadoop-0.20.2/src/hdfs/org/apache/hadoop/hdfs/DFSClient.java

"datanodeWriteTimeout" is -ZERO- due to the desire to have infinite write 
timeouts, and per much advice on the web.. however, this particular line 
renders that useless since it adds it to the constant (5000) * # nodes (in this 
case, "2" were involved.. replication maybe).


> java.net.SocketTimeoutException: 480000 millis timeout while waiting for 
> channel to be ready for write exceptions were cast when trying to read file 
> via StreamFile.
> --------------------------------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-693
>                 URL: https://issues.apache.org/jira/browse/HDFS-693
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: data-node
>    Affects Versions: 0.20.1
>            Reporter: Yajun Dong
>         Attachments: HDFS-693.log
>
>
> To rule out a network problem: I found the dataXceiver count was about 30. 
> Also, the output of netstat -a | grep 50075 showed many connections in 
> TIME_WAIT state when this happened.
> A partial log is attached. 

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
