[ 
https://issues.apache.org/jira/browse/HDFS-17564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17862973#comment-17862973
 ] 

ASF GitHub Bot commented on HDFS-17564:
---------------------------------------

hadoop-yetus commented on PR #6911:
URL: https://github.com/apache/hadoop/pull/6911#issuecomment-2208471652

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |  14m 11s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 1 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  45m  5s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 21s |  |  trunk passed with JDK Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  compile  |   1m 18s |  |  trunk passed with JDK Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  checkstyle  |   1m 11s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 25s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  8s |  |  trunk passed with JDK Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javadoc  |   1m 44s |  |  trunk passed with JDK Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 20s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  35m 43s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 11s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 14s |  |  the patch passed with JDK Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javac  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  5s |  |  the patch passed with JDK Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  javac  |   1m  5s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 58s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  |  the patch passed with JDK Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javadoc  |   1m 37s |  |  the patch passed with JDK Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 10s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  35m 56s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | -1 :x: |  unit  | 227m 36s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6911/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 54s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 380m 17s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | Failed junit tests | hadoop.hdfs.server.blockmanagement.TestDatanodeManager |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.46 ServerAPI=1.46 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6911/3/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6911 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux ea2255be361b 5.15.0-113-generic #123-Ubuntu SMP Mon Jun 10 08:16:17 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / a4aaa06477dcd8603c70654dd5d63fd566507b40 |
   | Default Java | Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6911/3/testReport/ |
   | Max. process+thread count | 4184 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6911/3/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> Erasure Coding: Fix the issue of inaccurate metrics when decommission mark 
> busy DN
> ----------------------------------------------------------------------------------
>
>                 Key: HDFS-17564
>                 URL: https://issues.apache.org/jira/browse/HDFS-17564
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Haiyang Hu
>            Assignee: Haiyang Hu
>            Priority: Major
>              Labels: pull-request-available
>
> If a DataNode is marked as busy and contains many EC blocks, then while decommissioning that DataNode, ErasureCodingWork#addTaskToDatanode generates no replication work for ecBlocksToBeReplicated, yet the related metrics (such as DatanodeDescriptor#currApproxBlocksScheduled, pendingReconstruction and needReconstruction) are still updated. A simplified sketch of the resulting accounting gap follows the first code excerpt below.
> *Specific code:*
> BlockManager#scheduleReconstruction -> BlockManager#chooseSourceDatanodes [2628~2650]
> If the DataNode is marked as busy and contains many EC blocks, it will not be added to srcNodes.
> {code:java}
> @VisibleForTesting
> DatanodeDescriptor[] chooseSourceDatanodes(BlockInfo block,
>     List<DatanodeDescriptor> containingNodes,
>     List<DatanodeStorageInfo> nodesContainingLiveReplicas,
>     NumberReplicas numReplicas, List<Byte> liveBlockIndices,
>     List<Byte> liveBusyBlockIndices, List<Byte> excludeReconstructed, int priority) {
>   containingNodes.clear();
>   nodesContainingLiveReplicas.clear();
>   List<DatanodeDescriptor> srcNodes = new ArrayList<>();
>  ...
>   for (DatanodeStorageInfo storage : blocksMap.getStorages(block)) {
>     final DatanodeDescriptor node = getDatanodeDescriptorFromStorage(storage);
>     final StoredReplicaState state = checkReplicaOnStorage(numReplicas, block,
>         storage, corruptReplicas.getNodes(block), false);
>     ...
>     // for EC here need to make sure the numReplicas replicates state correct
>     // because in the scheduleReconstruction it need the numReplicas to check
>     // whether need to reconstruct the ec internal block
>     byte blockIndex = -1;
>     if (isStriped) {
>       blockIndex = ((BlockInfoStriped) block)
>           .getStorageBlockIndex(storage);
>       countLiveAndDecommissioningReplicas(numReplicas, state,
>           liveBitSet, decommissioningBitSet, blockIndex);
>     }
>     if (priority != LowRedundancyBlocks.QUEUE_HIGHEST_PRIORITY
>         && (!node.isDecommissionInProgress() && !node.isEnteringMaintenance())
>         && node.getNumberOfBlocksToBeReplicated() +
>         node.getNumberOfBlocksToBeErasureCoded() >= maxReplicationStreams) {
>       if (isStriped && (state == StoredReplicaState.LIVE
>           || state == StoredReplicaState.DECOMMISSIONING)) {
>         liveBusyBlockIndices.add(blockIndex);
>         //HDFS-16566 ExcludeReconstructed won't be reconstructed.
>         excludeReconstructed.add(blockIndex);
>       }
>       continue; // already reached replication limit
>     }
>     if (node.getNumberOfBlocksToBeReplicated() +
>         node.getNumberOfBlocksToBeErasureCoded() >= replicationStreamsHardLimit) {
>       if (isStriped && (state == StoredReplicaState.LIVE
>           || state == StoredReplicaState.DECOMMISSIONING)) {
>         liveBusyBlockIndices.add(blockIndex);
>         //HDFS-16566 ExcludeReconstructed won't be reconstructed.
>         excludeReconstructed.add(blockIndex);
>       }
>       continue;
>     }
>     if(isStriped || srcNodes.isEmpty()) {
>       srcNodes.add(node);
>       if (isStriped) {
>         liveBlockIndices.add(blockIndex);
>       }
>       continue;
>     }
>    ...
> {code}
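> To make the accounting gap concrete, here is a small, self-contained simplification (plain Java; the class and method names such as BusyDecommissionSketch, addTaskToDatanode and schedule below are hypothetical stand-ins, not the actual Hadoop classes). It only mirrors the flow described above: the counters are bumped when the reconstruction work is built, but because the busy decommissioning DataNode was excluded from srcNodes, zero tasks end up being created, so the counters are only drained by the pending-reconstruction timeout.
> {code:java}
> // Hypothetical simplified model of the reported accounting gap; these names
> // do not exist in Hadoop and only mirror the flow described in this issue.
> class BusyDecommissionSketch {
>   int pendingReconstruction;    // stands in for the pendingReconstruction size
>   int approxBlocksScheduled;    // stands in for currApproxBlocksScheduled
>
>   // Mirrors ErasureCodingWork#addTaskToDatanode: if no decommissioning source
>   // survived chooseSourceDatanodes (it was busy), zero tasks are created.
>   int addTaskToDatanode(int leavingServiceSources, int targets) {
>     return Math.min(leavingServiceSources, targets);  // 0 when sources are all busy
>   }
>
>   // Mirrors the scheduling path: metrics are updated when the work object is
>   // built, regardless of how many tasks addTaskToDatanode actually creates.
>   void schedule(int leavingServiceSources, int targets) {
>     pendingReconstruction++;            // block goes into pendingReconstruction
>     approxBlocksScheduled += targets;   // scheduled counters bumped for targets
>     int created = addTaskToDatanode(leavingServiceSources, targets);
>     if (created == 0) {
>       // Nothing will ever report back, so the two counters above stay inflated
>       // until the pending-reconstruction timeout clears them.
>     }
>   }
> }
> {code}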
> ErasureCodingWork#addTaskToDatanode[149~157]
> {code:java}
> @Override
> void addTaskToDatanode(NumberReplicas numberReplicas) {
>   final DatanodeStorageInfo[] targets = getTargets();
>   assert targets.length > 0;
>   BlockInfoStriped stripedBlk = (BlockInfoStriped) getBlock();
>   ...
>   } else if ((numberReplicas.decommissioning() > 0 ||
>       numberReplicas.liveEnteringMaintenanceReplicas() > 0) &&
>       hasAllInternalBlocks()) {
>     List<Integer> leavingServiceSources = findLeavingServiceSources();
>     // decommissioningSources.size() should be >= targets.length
>     // if leavingServiceSources is empty, no replication work will be created here
>     final int num = Math.min(leavingServiceSources.size(), targets.length);
>     for (int i = 0; i < num; i++) {
>       createReplicationWork(leavingServiceSources.get(i), targets[i]);
>     }
>   ...
> }
> // Since the busy decommissioning DataNode is not present in srcNodes,
> // findLeavingServiceSources returns srcIndices with size 0.
> private List<Integer> findLeavingServiceSources() {
>     // Mark the block in normal node.
>     BlockInfoStriped block = (BlockInfoStriped)getBlock();
>     BitSet bitSet = new BitSet(block.getRealTotalBlockNum());
>     for (int i = 0; i < getSrcNodes().length; i++) {
>       if (getSrcNodes()[i].isInService()) {
>         bitSet.set(liveBlockIndices[i]);
>       }
>     }
>     // If the block is on the node which is decommissioning or
>     // entering_maintenance, and it doesn't exist on other normal nodes,
>     // we just add the node into source list.
>     List<Integer> srcIndices = new ArrayList<>();
>     for (int i = 0; i < getSrcNodes().length; i++) {
>       if ((getSrcNodes()[i].isDecommissionInProgress() ||
>           (getSrcNodes()[i].isEnteringMaintenance() &&
>           getSrcNodes()[i].isAlive())) &&
>           !bitSet.get(liveBlockIndices[i])) {
>         srcIndices.add(i);
>       }
>     }
>     return srcIndices;
>   }
> {code}
> So this logic needs to be fixed to avoid the inaccurate metrics.
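> A minimal sketch of one possible direction (illustrative only; the names below such as WorkSketch, Metrics and scheduleAndAccount are hypothetical, and the actual fix in PR #6911 may differ): only account the block in pendingReconstruction / currApproxBlocksScheduled once at least one replication task has really been created.
> {code:java}
> // Illustrative sketch only, not the actual patch: tie the metric updates to
> // the number of tasks that were really generated. WorkSketch and Metrics are
> // hypothetical stand-ins for the Hadoop types involved.
> class MetricsGuardSketch {
>   interface WorkSketch {
>     /** @return number of replication tasks actually queued (0 when all leaving-service sources were busy). */
>     int addTasksToDatanodes();
>   }
>
>   static class Metrics {
>     int pendingReconstruction;
>     int approxBlocksScheduled;
>   }
>
>   /** Returns true only when work was generated and the metrics were updated. */
>   static boolean scheduleAndAccount(WorkSketch work, Metrics metrics) {
>     int created = work.addTasksToDatanodes();
>     if (created == 0) {
>       // Busy decommissioning DN case: findLeavingServiceSources() came back
>       // empty, so skip the bookkeeping for this block entirely.
>       return false;
>     }
>     metrics.pendingReconstruction++;          // account the block once work exists
>     metrics.approxBlocksScheduled += created; // bump scheduled counters per task
>     return true;
>   }
> }
> {code}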


