[ https://issues.apache.org/jira/browse/HDFS-3376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13269816#comment-13269816 ]

Tsz Wo (Nicholas), SZE commented on HDFS-3376:
----------------------------------------------

{code}
+      // Don't use the cache on the last attempt - it's possible that there
+      // are arbitrarily many unusable sockets in the cache, but we don't
+      // want to fail the read.
{code}
Just a question: Will the unusable sockets be closed and removed from the cache?

                
> DFSClient fails to make connection to DN if there are many unusable cached 
> sockets
> ----------------------------------------------------------------------------------
>
>                 Key: HDFS-3376
>                 URL: https://issues.apache.org/jira/browse/HDFS-3376
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs client
>    Affects Versions: 2.0.0
>            Reporter: Todd Lipcon
>            Assignee: Todd Lipcon
>            Priority: Critical
>             Fix For: 2.0.0
>
>         Attachments: hdfs-3376.txt
>
>
> After fixing the datanode side of keepalive to properly disconnect stale 
> clients (HDFS-3357), the client side has the following issue: when it 
> connects to a DN, it first tries a configurable number of sockets from 
> the cache. If there are more cached sockets than the configured number 
> of retries, and all of them have been closed by the datanode side, then 
> the client will throw an exception and mark the replica node as dead.
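For illustration only, the behavior described above (preferring cached sockets but skipping the cache on the final attempt) can be sketched roughly as follows. The `Conn` class, `connect` method, and `usable` flag are hypothetical stand-ins, not the actual DFSClient types:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class CachedSocketRetrySketch {

    /** Hypothetical stand-in for a cached DN socket; not the real DFSClient type. */
    static class Conn {
        final boolean usable;
        Conn(boolean usable) { this.usable = usable; }
        void close() { /* no-op in this sketch */ }
    }

    /**
     * Try up to maxRetries connections, preferring cached sockets but
     * skipping the cache on the final attempt, so that an arbitrarily long
     * run of stale cached sockets cannot fail the read.
     */
    static Conn connect(Deque<Conn> cache, int maxRetries) {
        for (int attempt = 0; attempt < maxRetries; attempt++) {
            boolean lastAttempt = (attempt == maxRetries - 1);
            Conn c;
            if (!lastAttempt && !cache.isEmpty()) {
                c = cache.poll();   // poll() removes the socket from the cache
            } else {
                c = new Conn(true); // fresh connections always succeed in this sketch
            }
            if (c.usable) {
                return c;
            }
            c.close();              // stale: close it; it is already out of the cache
        }
        return null;                // all attempts exhausted
    }

    public static void main(String[] args) {
        Deque<Conn> cache = new ArrayDeque<>();
        for (int i = 0; i < 10; i++) {
            cache.add(new Conn(false)); // many stale cached sockets, as in the bug report
        }
        Conn c = connect(cache, 3);
        System.out.println(c != null && c.usable); // prints "true"
    }
}
```

In this sketch a stale socket is both removed from the cache (by `poll()`) and closed before the next attempt, which is the behavior the comment above asks about.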

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira
