[ https://issues.apache.org/jira/browse/HDFS-9236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982958#comment-14982958 ]

Tony Wu commented on HDFS-9236:
-------------------------------

Thanks a lot to [~walter.k.su] and [~zhz] for the comments!

[~walter.k.su], the DN throws {{RecoveryInProgressException}} only when the 
received recovery ID is less than or equal to the existing RUR recovery ID:
{code:java}
  static ReplicaRecoveryInfo initReplicaRecovery(String bpid, ReplicaMap map,
      Block block, long recoveryId, long xceiverStopTimeout)
      throws IOException {
    ...
    final ReplicaUnderRecovery rur;
    if (replica.getState() == ReplicaState.RUR) {
      rur = (ReplicaUnderRecovery)replica;
      if (rur.getRecoveryID() >= recoveryId) {
        throw new RecoveryInProgressException(
            "rur.getRecoveryID() >= recoveryId = " + recoveryId
            + ", block=" + block + ", rur=" + rur);
      }
      final long oldRecoveryID = rur.getRecoveryID();
      rur.setRecoveryID(recoveryId);
      LOG.info("initReplicaRecovery: update recovery id for " + block
          + " from " + oldRecoveryID + " to " + recoveryId);
    }
    ...
  }
{code}

So if the DN has a block that is already in RUR, and a new block recovery 
starts (with a larger recovery ID), the DN does not throw 
{{RecoveryInProgressException}}.

The patch focuses on what happens after this point, where a buggy DN (or some 
unknown corner case) might report RUR as the replica's original state.

I think your suggestion of moving the check out of {{syncBlock()}} and into 
{{initReplicaRecovery()}} makes sense. I implemented a check that simply 
excludes the replicas whose original state is >= RUR (they won't be used for 
recovery anyway); a rough sketch is below. But the issue with this is that we 
might end up with an empty {{syncList}} and incorrectly tell the NN to drop 
this block. I think the current place for the check in the patch is probably 
the safest. Please let me know what you think.
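
For reference, the alternative looked roughly like this (a sketch against the 
loop in {{DataNode#recoverBlock}} that builds {{syncList}}; variable names are 
approximate and the original-state condition is the only change):
{code:java}
      ReplicaRecoveryInfo info = callInitReplicaRecovery(datanode, rBlock);
      // Sketch: also skip replicas whose original state is RUR or TEMPORARY;
      // they would never participate in the recovery anyway.
      if (info != null
          && info.getGenerationStamp() >= block.getGenerationStamp()
          && info.getNumBytes() > 0
          && info.getOriginalReplicaState().getValue()
              < ReplicaState.RUR.getValue()) {
        syncList.add(new BlockRecord(id, datanode, info));
      }
      // Hazard: if every replica gets excluded, syncList ends up empty and
      // syncBlock() will tell the NN to delete the block.
{code}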

Again thanks a lot for taking the time to look at my patch.



> Missing sanity check for block size during block recovery
> ---------------------------------------------------------
>
>                 Key: HDFS-9236
>                 URL: https://issues.apache.org/jira/browse/HDFS-9236
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: HDFS
>    Affects Versions: 2.7.1
>            Reporter: Tony Wu
>            Assignee: Tony Wu
>         Attachments: HDFS-9236.001.patch, HDFS-9236.002.patch, 
> HDFS-9236.003.patch, HDFS-9236.004.patch, HDFS-9236.005.patch
>
>
> Ran into an issue while running test against faulty data-node code. 
> Currently in DataNode.java:
> {code:java}
>   /** Block synchronization */
>   void syncBlock(RecoveringBlock rBlock,
>                          List<BlockRecord> syncList) throws IOException {
> …
>     // Calculate the best available replica state.
>     ReplicaState bestState = ReplicaState.RWR;
> …
>     // Calculate list of nodes that will participate in the recovery
>     // and the new block size
>     List<BlockRecord> participatingList = new ArrayList<BlockRecord>();
>     final ExtendedBlock newBlock = new ExtendedBlock(bpid, blockId,
>         -1, recoveryId);
>     switch(bestState) {
> …
>     case RBW:
>     case RWR:
>       long minLength = Long.MAX_VALUE;
>       for(BlockRecord r : syncList) {
>         ReplicaState rState = r.rInfo.getOriginalReplicaState();
>         if(rState == bestState) {
>           minLength = Math.min(minLength, r.rInfo.getNumBytes());
>           participatingList.add(r);
>         }
>       }
>       newBlock.setNumBytes(minLength);
>       break;
> …
>     }
> …
>     nn.commitBlockSynchronization(block,
>         newBlock.getGenerationStamp(), newBlock.getNumBytes(), true, false,
>         datanodes, storages);
>   }
> {code}
> This code is called by the DN coordinating the block recovery. In the above 
> case, it is possible for none of the rStates (reported by DNs with copies of 
> the replica being recovered) to match the bestState. This can be caused 
> either by faulty DN code or by stale/modified/corrupted files on the DN. When 
> this happens the DN will end up reporting a minLength of Long.MAX_VALUE.
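> For illustration, a hypothetical DN-side guard (a sketch, not the attached 
> patch) right after the RBW/RWR branch computes minLength would catch this 
> before anything is reported to the NN:
> {code:java}
>       // Sketch: abort the recovery instead of committing an uninitialized
>       // length when no replica matched bestState.
>       if (minLength == Long.MAX_VALUE) {
>         throw new IOException("No replica matched bestState=" + bestState
>             + " for block " + block + "; aborting block recovery");
>       }
>       newBlock.setNumBytes(minLength);
> {code}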
> Unfortunately there is no check on the NN for replica length. See 
> FSNamesystem.java:
> {code:java}
>   void commitBlockSynchronization(ExtendedBlock oldBlock,
>       long newgenerationstamp, long newlength,
>       boolean closeFile, boolean deleteblock, DatanodeID[] newtargets,
>       String[] newtargetstorages) throws IOException {
> …
>       if (deleteblock) {
>         Block blockToDel = ExtendedBlock.getLocalBlock(oldBlock);
>         boolean remove = iFile.removeLastBlock(blockToDel) != null;
>         if (remove) {
>           blockManager.removeBlock(storedBlock);
>         }
>       } else {
>         // update last block
>         if(!copyTruncate) {
>           storedBlock.setGenerationStamp(newgenerationstamp);
>           
>           //>>>> XXX block length is updated without any check <<<<//
>           storedBlock.setNumBytes(newlength);
>         }
> …
>     if (closeFile) {
>       LOG.info("commitBlockSynchronization(oldBlock=" + oldBlock
>           + ", file=" + src
>           + (copyTruncate ? ", newBlock=" + truncatedBlock
>               : ", newgenerationstamp=" + newgenerationstamp)
>           + ", newlength=" + newlength
>           + ", newtargets=" + Arrays.asList(newtargets) + ") successful");
>     } else {
>       LOG.info("commitBlockSynchronization(" + oldBlock + ") successful");
>     }
>   }
> {code}
> After this point the block length becomes Long.MAX_VALUE. Any subsequent 
> block report (even with the correct length) will cause the block to be marked 
> as corrupt. This block could also be the last block of the file; if that 
> happens and the client goes away, the NN won’t be able to recover the lease 
> and close the file because the last block is under-replicated.
> I believe we need a sanity check for the block size on both the DN and the NN 
> to prevent such a case from happening.
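> A matching NN-side guard (again, just a sketch) in 
> {{commitBlockSynchronization()}}, before {{storedBlock.setNumBytes(newlength)}} 
> is called, could reject an obviously invalid length:
> {code:java}
>         // Sketch: refuse to commit a nonsensical replica length.
>         if (newlength < 0 || newlength == Long.MAX_VALUE) {
>           throw new IOException("Invalid block length " + newlength
>               + " for " + oldBlock + " in commitBlockSynchronization");
>         }
> {code}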



