[ https://issues.apache.org/jira/browse/HDFS-9434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15015049#comment-15015049 ]
Hudson commented on HDFS-9434:
------------------------------

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #622 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/622/])
HDFS-9434. Recommission a datanode with 500k blocks may pause NN for 30 (szetszwo: rev 8e03e855b6a0cf650f43aac47b1ec642caf493f5)
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt

> Recommission a datanode with 500k blocks may pause NN for 30 seconds
> --------------------------------------------------------------------
>
>                 Key: HDFS-9434
>                 URL: https://issues.apache.org/jira/browse/HDFS-9434
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: namenode
>            Reporter: Tsz Wo Nicholas Sze
>            Assignee: Tsz Wo Nicholas Sze
>             Fix For: 2.7.2
>
>         Attachments: h9434_20151116.patch
>
>
> In BlockManager, processOverReplicatedBlocksOnReCommission is called within
> the namespace lock. There is a (not very useful) log message printed in
> processOverReplicatedBlock. When a storage holds a large number of blocks,
> printing the log message for each block can prevent the NN from processing
> any other operations. We observed that it could pause the NN for 30 seconds
> for a storage with 500k blocks.
> I suggest changing the log message to trace level as a quick fix.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
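The quick fix described above (demoting a hot-path log message to trace level so it is skipped while the namespace lock is held) can be sketched as follows. This is a minimal, self-contained illustration, not the actual BlockManager change: the method and message are hypothetical, and java.util.logging's FINEST level stands in for trace.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class TraceLevelSketch {
    private static final Logger LOG =
        Logger.getLogger(TraceLevelSketch.class.getName());

    // Hypothetical stand-in for per-block processing done under a lock.
    // Before the fix, an info-level message was emitted once per block;
    // after the fix, the message is trace-level and guarded, so at the
    // default log level neither the string concatenation nor the logging
    // call happens.
    static void processOverReplicatedBlock(long blockId) {
        if (LOG.isLoggable(Level.FINEST)) {
            LOG.finest("Processing over-replicated block " + blockId);
        }
        // ... actual over-replication handling would go here ...
    }

    public static void main(String[] args) {
        // At the default level (INFO), the guard is false, so iterating
        // 500k blocks performs no logging work at all.
        for (long id = 0; id < 500_000L; id++) {
            processOverReplicatedBlock(id);
        }
        System.out.println("processed 500000 blocks");
    }
}
```

The guard matters as much as the level change: without it, building the per-block message string still costs CPU time inside the lock even when the message is ultimately discarded.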