[ 
https://issues.apache.org/jira/browse/HDFS-14882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16956033#comment-16956033
 ] 

Ayush Saxena commented on HDFS-14882:
-------------------------------------

Thanx [~hexiaoqiao] for the patch.

{code:java}
-  <property>
-    <name>dfs.namenode.redundancy.considerLoad.factor</name>
-    <value>2.0</value>
-    <description>The factor by which a node's load can exceed the average
-      before being rejected for writes, only if considerLoad is true.
-    </description>
-  </property>
+<property>
+  <name>dfs.namenode.redundancy.considerLoad.factor</name>
+  <value>2.0</value>
+  <description>The factor by which a node's load can exceed the average
+    before being rejected for writes, only if considerLoad is true.
+  </description>
+</property>
{code}

This is not related to the change here; it is just a change in indentation. Maybe we 
should avoid it.
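
For context, my understanding of how this factor gates writer placement is roughly the 
following. This is a minimal sketch with hypothetical names (isOverloaded is not the 
actual BlockPlacementPolicyDefault code), just to illustrate what the description in the 
property above means:

{code:java}
// Sketch of how dfs.namenode.redundancy.considerLoad.factor is typically
// interpreted: reject a node whose transfer load exceeds the cluster average
// by more than the configured factor. Names here are hypothetical, not Hadoop APIs.
public final class ConsiderLoadSketch {
  static boolean isOverloaded(double nodeXceiverCount,
                              double avgClusterLoad,
                              double considerLoadFactor) {
    // e.g. with factor 2.0, a node carrying more than twice the average
    // number of transfers is skipped for writes.
    return nodeXceiverCount > avgClusterLoad * considerLoadFactor;
  }

  public static void main(String[] args) {
    System.out.println(isOverloaded(9, 4, 2.0)); // true: 9 > 4 * 2.0
    System.out.println(isOverloaded(7, 4, 2.0)); // false: 7 <= 8
  }
}
{code}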

Apart from that, LGTM.

> Consider DataNode load when #getBlockLocation
> ---------------------------------------------
>
>                 Key: HDFS-14882
>                 URL: https://issues.apache.org/jira/browse/HDFS-14882
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: namenode
>            Reporter: Xiaoqiao He
>            Assignee: Xiaoqiao He
>            Priority: Major
>         Attachments: HDFS-14882.001.patch, HDFS-14882.002.patch, 
> HDFS-14882.003.patch, HDFS-14882.004.patch, HDFS-14882.005.patch
>
>
> Currently, we consider the load of a DataNode in #chooseTarget for writers, 
> but we do not consider it for readers. Thus, a DataNode's processing slots 
> can be occupied by #BlockSender serving readers, its disk/network becomes a 
> busy workload, and we then hit slow-node exceptions. IIRC the same case has 
> been reported several times. Based on that, I propose to consider load for 
> readers the same way #chooseTarget does for writers.
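
As I read the proposal, a load-aware ordering of replica locations returned to readers 
could look roughly like the sketch below. This is only an illustration with hypothetical 
names (ReplicaLocation, activeXceivers); the actual patch may implement this differently 
inside the NameNode's located-block sorting.

{code:java}
// Sketch of the proposed idea: when answering #getBlockLocations, prefer
// less-loaded replicas so readers avoid DataNodes whose transfer slots are
// already saturated. ReplicaLocation/activeXceivers are hypothetical names.
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public final class ReadConsiderLoadSketch {
  static final class ReplicaLocation {
    final String datanode;
    final int activeXceivers; // current transfer threads on that DataNode
    ReplicaLocation(String datanode, int activeXceivers) {
      this.datanode = datanode;
      this.activeXceivers = activeXceivers;
    }
  }

  // Order replicas so lightly loaded DataNodes are handed to the reader first.
  static void sortByLoad(List<ReplicaLocation> replicas) {
    replicas.sort(Comparator.comparingInt(r -> r.activeXceivers));
  }

  public static void main(String[] args) {
    List<ReplicaLocation> replicas = new ArrayList<>();
    replicas.add(new ReplicaLocation("dn1", 12));
    replicas.add(new ReplicaLocation("dn2", 3));
    replicas.add(new ReplicaLocation("dn3", 7));
    sortByLoad(replicas);
    // Expected order: dn2, dn3, dn1
    replicas.forEach(r -> System.out.println(r.datanode + " load=" + r.activeXceivers));
  }
}
{code}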



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
