[ https://issues.apache.org/jira/browse/HDFS-9236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14989045#comment-14989045 ]

Yongjun Zhang commented on HDFS-9236:
-------------------------------------

Thanks [~twu] for the offline discussion. Consolidating the condition checking 
doesn't seem quite right. 

We can largely do what your last rev does, with some changes (along the lines 
of my last review):

1. Instead of validReplicaCnt, use candidateReplicaCnt.
2. Add debug logs about the replicas that get filtered out (a sketch of 
possible messages follows the code block below).

{code}
          if (info == null) {
            continue;
          }
          if (info.getGenerationStamp() < block.getGenerationStamp() ||
              info.getNumBytes() <= 0) {
            // Stale generation stamp or zero-length replica; skip it.
            if (LOG.isDebugEnabled()) {
              LOG.debug(...);
            }
            continue;
          }
          // Count the number of candidate replicas found.
          ++candidateReplicaCnt;
          if (info.getOriginalReplicaState().getValue() <=
              ReplicaState.RWR.getValue()) {
            syncList.add(new BlockRecord(id, proxyDN, info));
          } else {
            // State is worse than RWR (RUR or TEMPORARY); exclude the
            // replica from the sync list.
            if (LOG.isDebugEnabled()) {
              LOG.debug(...);
            }
          }
{code}
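
For the elided LOG.debug calls, messages along these lines would do (the exact 
wording is a sketch, not taken from the patch; identifiers are the ones used 
in the loop above):

{code}
            // Hypothetical messages, illustrative only.
            // For the genStamp/length filter:
            LOG.debug("Skipping stale/empty replica on " + id + " for block "
                + block + ": genStamp=" + info.getGenerationStamp()
                + ", numBytes=" + info.getNumBytes());
            // For the state filter:
            LOG.debug("Excluding replica on " + id + " for block " + block
                + ": original state " + info.getOriginalReplicaState()
                + " is worse than " + ReplicaState.RWR);
{code}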

and 

{code}
      // If none of the replicas reported by the DataNodes is in the required
      // original state, report the error.
      if (candidateReplicaCnt > 0 && syncList.isEmpty()) {
        throw new IOException("Found " + candidateReplicaCnt +
            " replica(s) for block " + block + " but none is in " +
            ReplicaState.RWR.name() + " or better state." +
            " datanodeids=" + Arrays.asList(locs));
      }
{code}
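
Separately, syncBlock() itself could guard against the Long.MAX_VALUE 
fall-through described in this JIRA before committing the new length; a 
minimal sketch (placement and message are illustrative, not the final patch):

{code}
      // After scanning syncList for replicas matching bestState:
      if (minLength == Long.MAX_VALUE) {
        // No replica matched bestState; abort rather than report a bogus
        // Long.MAX_VALUE length to the NN.
        throw new IOException("No replica in state " + bestState
            + " for block " + block + "; cannot determine recovered size");
      }
      newBlock.setNumBytes(minLength);
{code}

The NN side would want a similar guard in commitBlockSynchronization() before 
storedBlock.setNumBytes(newlength) is called.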


> Missing sanity check for block size during block recovery
> ---------------------------------------------------------
>
>                 Key: HDFS-9236
>                 URL: https://issues.apache.org/jira/browse/HDFS-9236
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: HDFS
>    Affects Versions: 2.7.1
>            Reporter: Tony Wu
>            Assignee: Tony Wu
>         Attachments: HDFS-9236.001.patch, HDFS-9236.002.patch, 
> HDFS-9236.003.patch, HDFS-9236.004.patch, HDFS-9236.005.patch, 
> HDFS-9236.006.patch
>
>
> Ran into an issue while running tests against faulty data-node code. 
> Currently in DataNode.java:
> {code:java}
>   /** Block synchronization */
>   void syncBlock(RecoveringBlock rBlock,
>                          List<BlockRecord> syncList) throws IOException {
> …
>     // Calculate the best available replica state.
>     ReplicaState bestState = ReplicaState.RWR;
> …
>     // Calculate list of nodes that will participate in the recovery
>     // and the new block size
>     List<BlockRecord> participatingList = new ArrayList<BlockRecord>();
>     final ExtendedBlock newBlock = new ExtendedBlock(bpid, blockId,
>         -1, recoveryId);
>     switch(bestState) {
> …
>     case RBW:
>     case RWR:
>       long minLength = Long.MAX_VALUE;
>       for(BlockRecord r : syncList) {
>         ReplicaState rState = r.rInfo.getOriginalReplicaState();
>         if(rState == bestState) {
>           minLength = Math.min(minLength, r.rInfo.getNumBytes());
>           participatingList.add(r);
>         }
>       }
>       newBlock.setNumBytes(minLength);
>       break;
> …
>     }
> …
>     nn.commitBlockSynchronization(block,
>         newBlock.getGenerationStamp(), newBlock.getNumBytes(), true, false,
>         datanodes, storages);
>   }
> {code}
> This code is called by the DN coordinating the block recovery. In the above 
> case, it is possible for none of the rStates (reported by DNs with copies of 
> the replica being recovered) to match bestState. This can be caused either 
> by faulty DN code or by stale/modified/corrupted files on the DNs. When this 
> happens, the DN ends up reporting a minLength of Long.MAX_VALUE.
> Unfortunately there is no check on the NN for replica length. See 
> FSNamesystem.java:
> {code:java}
>   void commitBlockSynchronization(ExtendedBlock oldBlock,
>       long newgenerationstamp, long newlength,
>       boolean closeFile, boolean deleteblock, DatanodeID[] newtargets,
>       String[] newtargetstorages) throws IOException {
> …
>       if (deleteblock) {
>         Block blockToDel = ExtendedBlock.getLocalBlock(oldBlock);
>         boolean remove = iFile.removeLastBlock(blockToDel) != null;
>         if (remove) {
>           blockManager.removeBlock(storedBlock);
>         }
>       } else {
>         // update last block
>         if(!copyTruncate) {
>           storedBlock.setGenerationStamp(newgenerationstamp);
>           
>           //>>>> XXX block length is updated without any check <<<<//
>           storedBlock.setNumBytes(newlength);
>         }
> …
>     if (closeFile) {
>       LOG.info("commitBlockSynchronization(oldBlock=" + oldBlock
>           + ", file=" + src
>           + (copyTruncate ? ", newBlock=" + truncatedBlock
>               : ", newgenerationstamp=" + newgenerationstamp)
>           + ", newlength=" + newlength
>           + ", newtargets=" + Arrays.asList(newtargets) + ") successful");
>     } else {
>       LOG.info("commitBlockSynchronization(" + oldBlock + ") successful");
>     }
>   }
> {code}
> After this point the block length becomes Long.MAX_VALUE. Any subsequent 
> block report (even one with the correct length) will cause the block to be 
> marked as corrupt. Since this block could be the last block of the file, if 
> the client then goes away the NN won't be able to recover the lease and 
> close the file, because the last block is under-replicated.
> I believe we need a sanity check for the block size on both the DN and the 
> NN to prevent such a case from happening.



