[ https://issues.apache.org/jira/browse/HDFS-14859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16936951#comment-16936951 ]

Srinivasu Majeti commented on HDFS-14859:
-----------------------------------------

JUnit failures: In the latest run I could see a couple of errors both with and 
without the patch, so we can ignore the above JUnit tests as unrelated to the 
current patch.
{code:java}
mvn -Dtest=TestUnderReplicatedBlocks,TestDFSShell,TestBalancerRPCDelay,TestHDFSCLI,TestReadStripedFileWithDNFailure,TestBlockTokenWithDFSStriped test

[ERROR] Failures:
[ERROR]   TestHDFSCLI.tearDown:87->CLITestHelper.tearDown:126->CLITestHelper.displayResults:264 One of the tests failed. See the Detailed results to identify the command that failed
[ERROR]   TestDFSShell.testErrOutPut:730  -mkdir returned there is No file or directory but has testChild in the path
[ERROR] Tests run: 64, Failures: 2, Errors: 0, Skipped: 0
{code}
javac error: I could only see the warning below.
{code:java}
[WARNING] /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManagerSafeMode.java:[681,4] [deprecation] Whitebox in org.apache.hadoop.test has been deprecated
{code}
Let me know if this can be ignored.

> Prevent Un-necessary evaluation of costly operation getNumLiveDataNodes when 
> dfs.namenode.safemode.min.datanodes is not zero
> ----------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-14859
>                 URL: https://issues.apache.org/jira/browse/HDFS-14859
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs
>    Affects Versions: 3.1.0, 3.3.0, 3.1.4
>            Reporter: Srinivasu Majeti
>            Assignee: Srinivasu Majeti
>            Priority: Major
>              Labels: block
>         Attachments: HDFS-14859.001.patch, HDFS-14859.002.patch, 
> HDFS-14859.003.patch, HDFS-14859.004.patch, HDFS-14859.005.patch, 
> HDFS-14859.006.patch
>
>
> There have been improvements like HDFS-14171 and HDFS-14632 for the 
> performance issue caused by per-block getNumLiveDataNodes calls. However, 
> those improvements only distinguish whether the 
> dfs.namenode.safemode.min.datanodes parameter is set to 0 or not.
> {code}
>    private boolean areThresholdsMet() {
>      assert namesystem.hasWriteLock();
> -    int datanodeNum = blockManager.getDatanodeManager().getNumLiveDataNodes();
> +    // Calculating the number of live datanodes is time-consuming
> +    // in large clusters. Skip it when datanodeThreshold is zero.
> +    int datanodeNum = 0;
> +    if (datanodeThreshold > 0) {
> +      datanodeNum = blockManager.getDatanodeManager().getNumLiveDataNodes();
> +    }
>      synchronized (this) {
>        return blockSafe >= blockThreshold && datanodeNum >= datanodeThreshold;
>      }
> {code}
> I feel the above logic still leads to unnecessary evaluations of 
> getNumLiveDataNodes when the dfs.namenode.safemode.min.datanodes parameter is 
> set > 0, even though "blockSafe >= blockThreshold" is false for most of the 
> NameNode startup safe mode. We could do something like the following to avoid this:
> {code}
>   private boolean areThresholdsMet() {
>     assert namesystem.hasWriteLock();
>     synchronized (this) {
>       // Short-circuit: only compute the costly live-datanode count once the
>       // block threshold is met and a datanode threshold is configured.
>       return blockSafe >= blockThreshold
>           && (datanodeThreshold <= 0
>               || blockManager.getDatanodeManager().getNumLiveDataNodes()
>                      >= datanodeThreshold);
>     }
>   }
> {code}
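> To illustrate the intended behavior, here is a standalone sketch (the counter 
> and the numLiveDataNodes stand-in are hypothetical, not actual NameNode code) 
> showing that Java's short-circuit && never evaluates the costly call while 
> the block threshold is unmet:
> {code}
> public class ShortCircuitDemo {
>   static int costlyCalls = 0;
> 
>   // Hypothetical stand-in for the costly getNumLiveDataNodes() call.
>   static int numLiveDataNodes() {
>     costlyCalls++;
>     return 5;
>   }
> 
>   static boolean areThresholdsMet(long blockSafe, long blockThreshold,
>                                   int datanodeThreshold) {
>     return blockSafe >= blockThreshold
>         && (datanodeThreshold <= 0
>             || numLiveDataNodes() >= datanodeThreshold);
>   }
> 
>   public static void main(String[] args) {
>     areThresholdsMet(10L, 100L, 3);   // block threshold unmet
>     System.out.println(costlyCalls);  // prints 0: costly call skipped
>     areThresholdsMet(100L, 100L, 3);  // block threshold met
>     System.out.println(costlyCalls);  // prints 1: evaluated once
>   }
> }
> {code}
> Printing 0 and then 1 confirms the expensive call happens at most once per 
> check, and only after the block threshold is reached.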


