[ https://issues.apache.org/jira/browse/HDFS-15443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17164680#comment-17164680 ]
Hadoop QA commented on HDFS-15443:
----------------------------------

| (x) *{color:red}-1 overall{color}* |

\\ \\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red} 0m 25s{color} | {color:red} Docker failed to build yetus/hadoop:cce5a6f6094. {color} |

\\ \\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-15443 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/13008375/HDFS-15443.003.patch |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/29553/console |
| versions | git=2.17.1 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.

> Setting dfs.datanode.max.transfer.threads to a very small value can cause
> strange failure.
> ------------------------------------------------------------------------------------------
>
>                 Key: HDFS-15443
>                 URL: https://issues.apache.org/jira/browse/HDFS-15443
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>            Reporter: AMC-team
>            Priority: Major
>     Attachments: HDFS-15443.000.patch, HDFS-15443.001.patch,
>                  HDFS-15443.002.patch, HDFS-15443.003.patch
>
> The configuration parameter dfs.datanode.max.transfer.threads specifies the
> maximum number of threads to use for transferring data in and out of the DN.
> This is a vital parameter that needs to be tuned carefully.
> {code:java}
> // DataXceiverServer.java
> // Make sure the xceiver count is not exceeded
> int curXceiverCount = datanode.getXceiverCount();
> if (curXceiverCount > maxXceiverCount) {
>   throw new IOException("Xceiver count " + curXceiverCount
>       + " exceeds the limit of concurrent xceivers: "
>       + maxXceiverCount);
> }
> {code}
> Many issues have been caused by not setting this parameter to an appropriate
> value. However, there is no check code to restrict the parameter.
> Although a hard-and-fast rule is difficult because we need to consider the
> number of cores, main memory, etc., *we can prevent users from setting this
> value to an obviously wrong value by accident* (e.g. a negative value, which
> totally breaks the availability of the datanode).
> *How to fix:*
> Add proper check code for the parameter.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
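For context, the kind of startup-time check the quoted description asks for could be sketched as follows. This is a minimal illustration, not the attached patch: the class and method names (`TransferThreadsCheck`, `validateMaxTransferThreads`) are hypothetical, and the default of 4096 mirrors the `dfs.datanode.max.transfer.threads` default shipped in hdfs-default.xml.

```java
// Hypothetical sketch: reject an invalid dfs.datanode.max.transfer.threads
// value when the datanode reads its configuration, instead of letting every
// transfer fail later inside DataXceiverServer.
public class TransferThreadsCheck {

    // Default value of dfs.datanode.max.transfer.threads in hdfs-default.xml.
    static final int DEFAULT_MAX_TRANSFER_THREADS = 4096;

    // Illustrative helper (not from the actual patch): returns the configured
    // value if it is usable, otherwise fails fast with a clear message.
    static int validateMaxTransferThreads(int configured) {
        if (configured <= 0) {
            throw new IllegalArgumentException(
                "dfs.datanode.max.transfer.threads must be positive, got "
                    + configured);
        }
        return configured;
    }

    public static void main(String[] args) {
        // A sane value passes through unchanged.
        System.out.println(validateMaxTransferThreads(DEFAULT_MAX_TRANSFER_THREADS));

        // A negative value is rejected immediately at startup.
        try {
            validateMaxTransferThreads(-1);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

Failing fast at configuration-load time gives the operator one clear error message, rather than the per-request "Xceiver count ... exceeds the limit of concurrent xceivers" failures described in the issue.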