[ https://issues.apache.org/jira/browse/HDFS-7168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Colin Patrick McCabe updated HDFS-7168:
---------------------------------------
    Attachment: HDFS-7168.001.patch

> Use excludedNodes consistently in DFSOutputStream
> -------------------------------------------------
>
>                 Key: HDFS-7168
>                 URL: https://issues.apache.org/jira/browse/HDFS-7168
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>         Attachments: HDFS-7168.001.patch
>
>
> We currently have two separate collections of excluded nodes in 
> {{DFSOutputStream#DataStreamer}}.  One is {{DFSOutputStream#failed}}; the other 
> is {{DFSOutputStream#excludedNodes}}.  Both collections deal with blacklisting 
> nodes that we have found to be bad, so we should use {{excludedNodes}} for both.
>
> We should also make this a per-DFSOutputStream variable rather than a 
> per-DataStreamer one.  There is no need to forget all of this information whenever 
> a DataStreamer is torn down.  Since {{DFSOutputStream#excludedNodes}} is a 
> Guava cache, nodes expire out of it once enough time elapses, so they are not 
> permanently blacklisted.
>
> We should also remove {{DFSOutputStream#setTestFilename}}, since it is no 
> longer needed now that we can safely rename streams that are open for write.  
> And {{DFSOutputStream#getBlock}} should be synchronized.
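
For context, here is a minimal sketch of the idea behind relying on the Guava cache for expiry, plus the synchronized {{getBlock}} accessor.  It is not taken from the attached patch: the class name, the 10-minute expiry, and the {{DatanodeInfo}} stand-in are illustrative assumptions only.

{code:java}
import java.util.concurrent.TimeUnit;
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

class ExcludedNodesSketch {
  // Hypothetical stand-in for org.apache.hadoop.hdfs.protocol.DatanodeInfo.
  static class DatanodeInfo { }

  // Entries added after a failure drop out of the cache once the configured
  // interval elapses, so no node stays blacklisted forever.  The 10-minute
  // value is illustrative, not the client's real configuration.
  private final Cache<DatanodeInfo, DatanodeInfo> excludedNodes =
      CacheBuilder.newBuilder()
          .expireAfterWrite(10, TimeUnit.MINUTES)
          .build();

  private Object currentBlock;  // stand-in for the stream's current block

  void addExcluded(DatanodeInfo dn) {
    // Blacklist the node temporarily; expiry handles forgiveness.
    excludedNodes.put(dn, dn);
  }

  DatanodeInfo[] getExcludedForAllocation() {
    // Snapshot of the currently excluded nodes, suitable for passing to the
    // namenode when allocating a new block or replacement datanode.
    return excludedNodes.asMap().keySet().toArray(new DatanodeInfo[0]);
  }

  synchronized Object getBlock() {
    // Synchronized so callers see a consistent view of the current block.
    return currentBlock;
  }
}
{code}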



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
