[ https://issues.apache.org/jira/browse/HDFS-16269?focusedWorklogId=666119&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-666119 ]
ASF GitHub Bot logged work on HDFS-16269:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 18/Oct/21 03:51
            Start Date: 18/Oct/21 03:51
    Worklog Time Spent: 10m
      Work Description: jianghuazhu commented on pull request #3544:
URL: https://github.com/apache/hadoop/pull/3544#issuecomment-945338893

   Thank you @jojochuang for your comments and reviews.

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

Issue Time Tracking
-------------------

            Worklog Id:     (was: 666119)
            Time Spent: 1h  (was: 50m)

> [Fix] Improve NNThroughputBenchmark#blockReport operation
> ---------------------------------------------------------
>
>                 Key: HDFS-16269
>                 URL: https://issues.apache.org/jira/browse/HDFS-16269
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: benchmarks, namenode
>    Affects Versions: 2.9.2
>            Reporter: JiangHua Zhu
>            Assignee: JiangHua Zhu
>              Priority: Major
>              Labels: pull-request-available
>          Time Spent: 1h
>  Remaining Estimate: 0h
>
> When using NNThroughputBenchmark to verify the blockReport operation, an
> exception is thrown.
> The command used:
> ./bin/hadoop org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark -fs xxxx -op blockReport -datanodes 3 -reports 1
> The exception:
> 21/10/12 14:35:18 INFO namenode.NNThroughputBenchmark: Starting benchmark: blockReport
> 21/10/12 14:35:19 INFO namenode.NNThroughputBenchmark: Creating 10 files with 10 blocks each.
> 21/10/12 14:35:19 ERROR namenode.NNThroughputBenchmark:
> java.lang.ArrayIndexOutOfBoundsException: 50009
>     at org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark$BlockReportStats.addBlocks(NNThroughputBenchmark.java:1161)
>     at org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark$BlockReportStats.generateInputs(NNThroughputBenchmark.java:1143)
>     at org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark$OperationStatsBase.benchmark(NNThroughputBenchmark.java:257)
>     at org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.run(NNThroughputBenchmark.java:1528)
>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>     at org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.runBenchmark(NNThroughputBenchmark.java:1430)
>     at org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.main(NNThroughputBenchmark.java:1550)
> Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 50009
>     at org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark$BlockReportStats.addBlocks(NNThroughputBenchmark.java:1161)
>     at org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark$BlockReportStats.generateInputs(NNThroughputBenchmark.java:1143)
>     at org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark$OperationStatsBase.benchmark(NNThroughputBenchmark.java:257)
>     at org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.run(NNThroughputBenchmark.java:1528)
>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>     at org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.runBenchmark(NNThroughputBenchmark.java:1430)
>     at org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.main(NNThroughputBenchmark.java:1550)
> Inspecting the code shows where the problem occurs:
> private ExtendedBlock addBlocks(String fileName, String clientName)
>     throws IOException {
>   // ...
>   for (DatanodeInfo dnInfo : loc.getLocations()) {
>     int dnIdx = dnInfo.getXferPort() - 1;
>     datanodes[dnIdx].addBlock(loc.getBlock().getLocalBlock());
>   }
>   // ...
> }
> This shows that dnInfo.getXferPort() returns a port number, which should
> not be used as an index into the datanodes array.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
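A note on the root cause described above: the benchmark's datanodes array is sized by the -datanodes argument (3 in the reported command), while getXferPort() returns a real TCP port (e.g. 50010), so the subscript overruns the array. The following self-contained Java sketch reproduces the failure mode and shows one safe alternative: resolving the index from the benchmark's own bookkeeping (here, a map from transfer address to array position). TinyDatanode and the map are illustrative stand-ins, not the Hadoop API or the actual fix adopted in the pull request.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative stand-in for the benchmark's internal datanode type.
class TinyDatanode {
    final String xferAddr;   // e.g. "127.0.0.1:50010"
    int blockCount = 0;
    TinyDatanode(String xferAddr) { this.xferAddr = xferAddr; }
    void addBlock() { blockCount++; }
}

public class BlockReportIndexSketch {
    public static void main(String[] args) {
        // Started with "-datanodes 3", so the array has only 3 slots.
        TinyDatanode[] datanodes = {
            new TinyDatanode("127.0.0.1:50010"),
            new TinyDatanode("127.0.0.1:50011"),
            new TinyDatanode("127.0.0.1:50012"),
        };

        // Buggy pattern from the report: getXferPort() - 1 used as an index.
        int buggyIdx = 50010 - 1;  // a TCP port, far beyond datanodes.length
        try {
            datanodes[buggyIdx].addBlock();
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("port-as-index fails: index " + buggyIdx
                + " vs array length " + datanodes.length);
        }

        // Safe alternative: map each transfer address to its array position.
        Map<String, Integer> addrToIdx = new HashMap<>();
        for (int i = 0; i < datanodes.length; i++) {
            addrToIdx.put(datanodes[i].xferAddr, i);
        }
        int dnIdx = addrToIdx.get("127.0.0.1:50011"); // address a block reports
        datanodes[dnIdx].addBlock();
        System.out.println("resolved index " + dnIdx
            + ", blocks on that node: " + datanodes[dnIdx].blockCount);
    }
}
```

Whatever form the committed fix takes, the key point is the same: the index must come from the benchmark's own datanode bookkeeping, never from the port value itself.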