[ https://issues.apache.org/jira/browse/HDFS-3087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14486323#comment-14486323 ]
Hudson commented on HDFS-3087:
------------------------------

FAILURE: Integrated in Hadoop-trunk-Commit #7540 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/7540/])
HDFS-8025. Addendum fix for HDFS-3087 Decomissioning on NN restart can complete without blocks being replicated. Contributed by Ming Ma. (wang: rev 5a540c3d3107199f4632e2ad7ee8ff913b107a04)

* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDecommission.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java

> Decommissioning on NN restart can complete without blocks being replicated
> --------------------------------------------------------------------------
>
>                 Key: HDFS-3087
>                 URL: https://issues.apache.org/jira/browse/HDFS-3087
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 0.23.0
>            Reporter: Kihwal Lee
>            Assignee: Rushabh S Shah
>            Priority: Critical
>             Fix For: 2.5.0
>
>         Attachments: HDFS-3087.patch
>
>
> If a data node is added to the exclude list and the name node is restarted,
> decommissioning begins immediately upon data node registration. At that point
> the initial block report has not yet been sent, so the name node believes the
> node has zero blocks and decommissioning completes very quickly, without
> replicating the blocks on that node.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
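The race described in the issue can be sketched in simplified form. This is an illustrative model, not the actual `BlockManager` code: the class and field names (`NodeState`, `blockReportReceived`, `underReplicated`) are hypothetical stand-ins for NameNode state. The point is the guard the fix introduces: a node's decommission must not be declared complete while its zero-block view merely reflects a missing initial block report.

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical, simplified model of per-DataNode state on the NameNode.
class DecommissionCheck {
    static class NodeState {
        int knownBlocks = 0;                 // blocks the NameNode knows on this node
        boolean blockReportReceived = false; // has the initial block report arrived?
        Set<Long> underReplicated = new HashSet<>(); // block IDs still needing replication
    }

    // Pre-fix behavior: with zero known blocks (no report yet), the node
    // trivially has nothing under-replicated, so decommission "completes".
    static boolean isDecommissionedBuggy(NodeState n) {
        return n.underReplicated.isEmpty();
    }

    // Post-fix behavior: a zero-block view cannot be trusted until the
    // initial block report has been processed.
    static boolean isDecommissionedFixed(NodeState n) {
        if (!n.blockReportReceived) {
            return false; // wait for the first block report before deciding
        }
        return n.underReplicated.isEmpty();
    }
}
```

With a freshly registered excluded node, the buggy check completes immediately, while the fixed check holds decommission open until the block report arrives and the node's blocks are replicated elsewhere.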