[ https://issues.apache.org/jira/browse/HDFS-7235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14177518#comment-14177518 ]

Colin Patrick McCabe edited comment on HDFS-7235 at 10/22/14 7:15 PM:
----------------------------------------------------------------------

{code}
ReplicaInfo replicaInfo = null;
synchronized(data) {
  replicaInfo = (ReplicaInfo) data.getReplica(block.getBlockPoolId(),
      block.getBlockId());
}
if (replicaInfo != null
    && replicaInfo.getState() == ReplicaState.FINALIZED
    && !replicaInfo.getBlockFile().exists()) {
{code}
You can't release the lock this way.  Once you release the lock, replicaInfo 
could be mutated at any time.  So you need to do the whole check under the lock.
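A minimal sketch of the pattern being asked for, with the lookup and all three conditions evaluated while still holding the dataset lock. {{ReplicaInfo}} and {{Dataset}} here are toy stand-ins for illustration, not the actual HDFS classes:

{code}
// Toy stand-ins for the HDFS types; they only illustrate the locking pattern.
class ReplicaInfo {
  enum State { FINALIZED, TEMPORARY }
  private volatile State state = State.FINALIZED;
  private volatile boolean blockFileExists = false;
  State getState() { return state; }
  boolean blockFileExists() { return blockFileExists; }
}

class Dataset {
  private final ReplicaInfo replica = new ReplicaInfo();
  ReplicaInfo getReplica(long blockId) { return replica; }
}

public class LockCheck {
  // Evaluate every condition inside the synchronized block, so the replica
  // cannot be mutated between the lookup and the state/file checks.
  static boolean isMissingFinalizedBlock(Dataset data, long blockId) {
    synchronized (data) {
      ReplicaInfo replicaInfo = data.getReplica(blockId);
      return replicaInfo != null
          && replicaInfo.getState() == ReplicaInfo.State.FINALIZED
          && !replicaInfo.blockFileExists();
    }
  }

  public static void main(String[] args) {
    Dataset data = new Dataset();
    // Finalized replica whose block file is missing -> flagged as bad.
    System.out.println(isMissingFinalizedBlock(data, 1L));
  }
}
{code}

Moving the {{if}} inside the {{synchronized(data)}} block means the decision is made against a consistent snapshot of the replica's state.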

{code}
//
// Report back to NN bad block caused by non-existent block file.
// WATCH-OUT: be sure the conditions checked above matches the following
// method in FsDatasetImpl.java:
//   boolean isValidBlock(ExtendedBlock b)
// all other conditions need to be true except that
// replicaInfo.getBlockFile().exists() returns false.
//
{code}
I don't think we need the "WATCH-OUT" part.  We shouldn't be calling 
{{isValidBlock}}, so why do we care whether this check matches that one?

I generally agree with this approach and I think we can get this in if that's 
fixed.



> Can not decommission DN which has invalid block due to bad disk
> ---------------------------------------------------------------
>
>                 Key: HDFS-7235
>                 URL: https://issues.apache.org/jira/browse/HDFS-7235
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode, namenode
>    Affects Versions: 2.6.0
>            Reporter: Yongjun Zhang
>            Assignee: Yongjun Zhang
>         Attachments: HDFS-7235.001.patch, HDFS-7235.002.patch, 
> HDFS-7235.003.patch
>
>
> When decommissioning a DN, the process hangs. 
> What happens is, when the NN chooses a replica as a source to replicate data on 
> the to-be-decommissioned DN to other DNs, it favors choosing the 
> to-be-decommissioned DN itself as the source of the transfer (see BlockManager.java).  
> However, because of the bad disk, the DN detects the source block to be 
> transferred as an invalid block, using the following logic in FsDatasetImpl.java:
> {code}
> /** Does the block exist and have the given state? */
>   private boolean isValid(final ExtendedBlock b, final ReplicaState state) {
>     final ReplicaInfo replicaInfo = volumeMap.get(b.getBlockPoolId(), 
>         b.getLocalBlock());
>     return replicaInfo != null
>         && replicaInfo.getState() == state
>         && replicaInfo.getBlockFile().exists();
>   }
> {code}
> This method returns false (flagging the block as invalid) because, in this 
> case, the block file doesn't exist due to the bad disk. 
> The key issue we found here is that after the DN detects an invalid block for the 
> above reason, it doesn't report the invalid block back to the NN, so the NN doesn't 
> know that the block is corrupted and keeps sending the data transfer request 
> to the same to-be-decommissioned DN, again and again. This causes an infinite 
> loop, so the decommission process hangs.
> Thanks [~qwertymaniac] for reporting the issue and initial analysis.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
