[ 
https://issues.apache.org/jira/browse/HDFS-15443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17163966#comment-17163966
 ] 

AMC-team commented on HDFS-15443:
---------------------------------

Sure

But before that, I'm wondering whether we can fall back the parameter to its 
default value (4096) and log a message, just like [~ayushtkn] suggested 
in [HDFS-15439|https://issues.apache.org/jira/browse/HDFS-15439]. Since this is 
a sanity check, falling back to the default value is a safe and conservative 
choice.
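
For concreteness, a rough sketch of the fallback I have in mind (illustrative 
only, not the attached patch; it assumes the DFSConfigKeys constants and a LOG 
field are available where DataXceiverServer reads the configuration):

{code:java}
// Sketch: validate dfs.datanode.max.transfer.threads when it is read,
// and fall back to the default (4096) with a warning instead of letting
// an invalid value break the xceiver-count check later.
int configuredMax = conf.getInt(
    DFSConfigKeys.DFS_DATANODE_MAX_TRANSFER_THREADS_KEY,
    DFSConfigKeys.DFS_DATANODE_MAX_TRANSFER_THREADS_DEFAULT);
if (configuredMax <= 0) {
  LOG.warn("Invalid value " + configuredMax + " for "
      + DFSConfigKeys.DFS_DATANODE_MAX_TRANSFER_THREADS_KEY
      + ", falling back to default "
      + DFSConfigKeys.DFS_DATANODE_MAX_TRANSFER_THREADS_DEFAULT);
  configuredMax = DFSConfigKeys.DFS_DATANODE_MAX_TRANSFER_THREADS_DEFAULT;
}
this.maxXceiverCount = configuredMax;
{code}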

What do you think? [~elgoiri] [~ayushtkn] [~jianghuazhu]

> Setting dfs.datanode.max.transfer.threads to a very small value can cause 
> strange failure.
> ------------------------------------------------------------------------------------------
>
>                 Key: HDFS-15443
>                 URL: https://issues.apache.org/jira/browse/HDFS-15443
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>            Reporter: AMC-team
>            Priority: Major
>         Attachments: HDFS-15443.000.patch, HDFS-15443.001.patch, 
> HDFS-15443.002.patch
>
>
> The configuration parameter dfs.datanode.max.transfer.threads specifies the 
> maximum number of threads to use for transferring data in and out of the DN. 
> This is a vital parameter that needs to be tuned carefully. 
> {code:java}
> // DataXceiverServer.java
> // Make sure the xceiver count is not exceeded
> int curXceiverCount = datanode.getXceiverCount();
> if (curXceiverCount > maxXceiverCount) {
>   throw new IOException("Xceiver count " + curXceiverCount
>       + " exceeds the limit of concurrent xceivers: "
>       + maxXceiverCount);
> }
> {code}
> Many issues have been caused by not setting this parameter to an appropriate 
> value. However, there is no check code to restrict the parameter. 
> Although having a hard-and-fast rule is difficult because we need to consider 
> the number of cores, main memory, etc., *we can prevent users from setting 
> this value to an obviously wrong value by accident* (e.g. a negative value 
> that totally breaks the availability of the datanode).
> *How to fix:*
> Add a proper check for the parameter.


