[ 
https://issues.apache.org/jira/browse/HDFS-3085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G resolved HDFS-3085.
---------------------------------------
    Target Version/s:   (was: )
          Resolution: Won't Fix

Closing this, as the client's failed-DN refreshing functionality has already 
been added in the latest code, IIRC.

> Local data node may need to be reconsidered for reads when reading a very big 
> file, as that local DN may recover in some time.
> ----------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-3085
>                 URL: https://issues.apache.org/jira/browse/HDFS-3085
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: datanode, hdfs-client
>    Affects Versions: 2.0.0-alpha
>            Reporter: Uma Maheswara Rao G
>            Priority: Major
>
> While reading a file, we add a failed DN to the deadNodes list and skip it 
> for further reads.
> If we are reading a very huge file (which may take hours) and a read from the 
> local datanode fails, that node will be added to the deadNodes list and 
> excluded from further reads of that file.
> Even if the local node recovers immediately, it will not be used for further 
> reads; reads continue with the remote nodes, which hurts read performance.
> It would be good if we reconsidered the local node after a certain period, 
> based on some factors.
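The time-based reconsideration proposed above could be sketched roughly as 
follows. This is a minimal, hypothetical illustration, not the actual DFSClient 
code: the class name, method names, and the string key for a datanode are all 
assumptions. The idea is simply to record *when* a node failed and let the 
entry expire, instead of excluding the node for the whole life of a long read.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of an expiring deadNodes list (names are illustrative,
// not the real HDFS client API). A node marked dead is skipped only until
// its entry is older than expiryMs; after that it becomes readable again.
public class DeadNodeTracker {
    private final long expiryMs;
    private final Map<String, Long> deadNodes = new ConcurrentHashMap<>();

    public DeadNodeTracker(long expiryMs) {
        this.expiryMs = expiryMs;
    }

    // Record the time at which a read from this datanode failed.
    public void markDead(String datanode, long nowMs) {
        deadNodes.put(datanode, nowMs);
    }

    // Skip the node only while its failure entry is younger than expiryMs.
    public boolean isDead(String datanode, long nowMs) {
        Long failedAt = deadNodes.get(datanode);
        if (failedAt == null) {
            return false;
        }
        if (nowMs - failedAt >= expiryMs) {
            deadNodes.remove(datanode); // give the node another chance
            return false;
        }
        return true;
    }

    public static void main(String[] args) {
        DeadNodeTracker tracker = new DeadNodeTracker(30_000); // 30s expiry
        tracker.markDead("127.0.0.1:50010", 0);
        System.out.println(tracker.isDead("127.0.0.1:50010", 10_000)); // still skipped
        System.out.println(tracker.isDead("127.0.0.1:50010", 40_000)); // reconsidered
    }
}
```

In a long read, the expiry check would run each time the client picks a 
datanode for the next block, so a recovered local node rejoins the candidate 
set within one expiry interval rather than being excluded for hours.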



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
