[ https://issues.apache.org/jira/browse/HDFS-7235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14696548#comment-14696548 ]
Hudson commented on HDFS-7235:
------------------------------

FAILURE: Integrated in Hadoop-trunk-Commit #8298 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/8298/])
HDFS-7235. DataNode#transferBlock should report blocks that don't exist using reportBadBlock (yzhang via cmccabe) (vinayakumarb: rev f2b4bc9b6a1bd3f9dbfc4e85c1b9bde238da3627)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt

> DataNode#transferBlock should report blocks that don't exist using reportBadBlock
> ---------------------------------------------------------------------------------
>
>                 Key: HDFS-7235
>                 URL: https://issues.apache.org/jira/browse/HDFS-7235
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode, namenode
>    Affects Versions: 2.6.0
>            Reporter: Yongjun Zhang
>            Assignee: Yongjun Zhang
>             Fix For: 2.7.0, 2.6.1
>
>         Attachments: HDFS-7235.001.patch, HDFS-7235.002.patch, HDFS-7235.003.patch, HDFS-7235.004.patch, HDFS-7235.005.patch, HDFS-7235.006.patch, HDFS-7235.007.patch, HDFS-7235.007.patch
>
>
> When decommissioning a DN, the process hangs.
> What happens is that when the NN chooses a replica as the source for replicating data from the to-be-decommissioned DN to other DNs, it favors the to-be-decommissioned DN itself as the transfer source (see BlockManager.java).
> However, because of the bad disk, the DN detects the source block to be transferred as an invalid block, using the following logic in FsDatasetImpl.java:
> {code}
> /** Does the block exist and have the given state? */
> private boolean isValid(final ExtendedBlock b, final ReplicaState state) {
>   final ReplicaInfo replicaInfo = volumeMap.get(b.getBlockPoolId(),
>       b.getLocalBlock());
>   return replicaInfo != null
>       && replicaInfo.getState() == state
>       && replicaInfo.getBlockFile().exists();
> }
> {code}
> This method returns false (detecting an invalid block) because the block file doesn't exist, due to the bad disk in this case.
> The key issue we found is that after the DN detects an invalid block for the above reason, it doesn't report the invalid block back to the NN. The NN therefore doesn't know the block is corrupted and keeps sending the data transfer request to the same to-be-decommissioned DN, again and again. This causes an infinite loop, so the decommission process hangs.
> Thanks [~qwertymaniac] for reporting the issue and the initial analysis.
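For context, below is a minimal, self-contained sketch of the behavior the summary describes: DataNode#transferBlock reporting a missing/invalid replica back to the NN via a reportBadBlock-style call instead of failing silently. This is not the actual HDFS-7235 patch; the types Block, Dataset, and NameNodeClient are hypothetical stand-ins for the real Hadoop classes (ExtendedBlock, FsDatasetImpl, and the DN-to-NN RPC), used only to show the control flow.

{code}
// Illustrative sketch only, not the HDFS-7235 patch. All types below are
// hypothetical stand-ins; the point is "report the bad replica, don't skip silently".
public class TransferBlockSketch {

  /** Hypothetical stand-in for an ExtendedBlock. */
  static final class Block {
    final String poolId;
    final long blockId;
    Block(String poolId, long blockId) { this.poolId = poolId; this.blockId = blockId; }
    @Override public String toString() { return poolId + ":blk_" + blockId; }
  }

  /** Hypothetical stand-in for the FsDatasetImpl validity check. */
  interface Dataset {
    boolean isValidBlock(Block b);  // false e.g. when the block file is missing (bad disk)
  }

  /** Hypothetical stand-in for the DN-to-NN reporting channel. */
  interface NameNodeClient {
    void reportBadBlock(Block b);   // tells the NN this replica is corrupt/missing
  }

  private final Dataset dataset;
  private final NameNodeClient nameNode;

  TransferBlockSketch(Dataset dataset, NameNodeClient nameNode) {
    this.dataset = dataset;
    this.nameNode = nameNode;
  }

  /**
   * Sketch of the desired transferBlock behavior: if the local replica is invalid,
   * tell the NN so it can stop picking this DN as the replication source, rather
   * than leaving the NN to retry the same DN forever.
   */
  void transferBlock(Block b) {
    if (!dataset.isValidBlock(b)) {
      // Before the fix, the DN detected the invalid block but did not report it to
      // the NN, so the NN kept re-issuing the same transfer request.
      nameNode.reportBadBlock(b);
      return;
    }
    // ... start the actual transfer to the target DNs (omitted) ...
  }

  public static void main(String[] args) {
    Dataset badDisk = b -> false;  // simulate a missing block file on a bad disk
    NameNodeClient nn = b -> System.out.println("reportBadBlock(" + b + ")");
    new TransferBlockSketch(badDisk, nn).transferBlock(new Block("BP-1", 1073741825L));
  }
}
{code}

With the simulated bad disk above, the sketch issues a single reportBadBlock call instead of looping; the idea in the JIRA is that, once informed, the NN knows the replica is bad and can choose a different source for the decommission-driven replication.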