[ https://issues.apache.org/jira/browse/HDFS-16269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Work on HDFS-16269 started by JiangHua Zhu.
-------------------------------------------

> [Fix] Improve NNThroughputBenchmark#blockReport operation
> ---------------------------------------------------------
>
>                 Key: HDFS-16269
>                 URL: https://issues.apache.org/jira/browse/HDFS-16269
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: benchmarks, namenode
>    Affects Versions: 2.9.2
>            Reporter: JiangHua Zhu
>            Assignee: JiangHua Zhu
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> When using NNThroughputBenchmark to verify the blockReport operation, it
> fails with an ArrayIndexOutOfBoundsException.
> Command used:
> ./bin/hadoop org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark -fs xxxx -op blockReport -datanodes 3 -reports 1
> The exception information:
> 21/10/12 14:35:18 INFO namenode.NNThroughputBenchmark: Starting benchmark: blockReport
> 21/10/12 14:35:19 INFO namenode.NNThroughputBenchmark: Creating 10 files with 10 blocks each.
> 21/10/12 14:35:19 ERROR namenode.NNThroughputBenchmark:
> java.lang.ArrayIndexOutOfBoundsException: 50009
> 	at org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark$BlockReportStats.addBlocks(NNThroughputBenchmark.java:1161)
> 	at org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark$BlockReportStats.generateInputs(NNThroughputBenchmark.java:1143)
> 	at org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark$OperationStatsBase.benchmark(NNThroughputBenchmark.java:257)
> 	at org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.run(NNThroughputBenchmark.java:1528)
> 	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> 	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
> 	at org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.runBenchmark(NNThroughputBenchmark.java:1430)
> 	at org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.main(NNThroughputBenchmark.java:1550)
> Exception in thread "main"
> java.lang.ArrayIndexOutOfBoundsException: 50009
> 	at org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark$BlockReportStats.addBlocks(NNThroughputBenchmark.java:1161)
> 	at org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark$BlockReportStats.generateInputs(NNThroughputBenchmark.java:1143)
> 	at org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark$OperationStatsBase.benchmark(NNThroughputBenchmark.java:257)
> 	at org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.run(NNThroughputBenchmark.java:1528)
> 	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> 	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
> 	at org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.runBenchmark(NNThroughputBenchmark.java:1430)
> 	at org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.main(NNThroughputBenchmark.java:1550)
> Checked the code and found that the problem appears here (abridged):
> private ExtendedBlock addBlocks(String fileName, String clientName)
>     throws IOException {
>   // ...
>   for (DatanodeInfo dnInfo : loc.getLocations()) {
>     int dnIdx = dnInfo.getXferPort() - 1;
>     datanodes[dnIdx].addBlock(loc.getBlock().getLocalBlock());
>   }
>   // ...
> }
> This shows that dnInfo.getXferPort() returns a transfer port number (the
> 50009 in the exception is such a port minus one), which should not be used
> as an index into the datanodes array.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
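The port-vs-index confusion described in the issue can be illustrated with a minimal, self-contained sketch (hypothetical class and method names, not the committed HDFS-16269 patch): record each simulated datanode's xfer port in a map up front, then resolve the port reported in DatanodeInfo back to an array index through that map instead of using the port itself as the index.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch, not the committed HDFS-16269 patch: resolve a
// datanode's slot in a datanodes[] array from its xfer port via a map,
// instead of treating the port number itself as the array index.
public class PortIndexSketch {

    // Build a xfer-port -> array-index map for the simulated datanodes.
    static Map<Integer, Integer> buildPortToIndex(int[] xferPorts) {
        Map<Integer, Integer> portToIndex = new HashMap<>();
        for (int i = 0; i < xferPorts.length; i++) {
            portToIndex.put(xferPorts[i], i);
        }
        return portToIndex;
    }

    public static void main(String[] args) {
        // Three simulated datanodes with illustrative xfer ports around
        // 50010 (the exception's 50009 is such a port minus one).
        int[] xferPorts = {50010, 50011, 50012};
        Map<Integer, Integer> portToIndex = buildPortToIndex(xferPorts);

        // Using 50011 - 1 = 50010 directly as an index would overflow a
        // 3-slot array; the lookup resolves the port to slot 1 instead.
        System.out.println(portToIndex.get(50011)); // prints 1
    }
}
```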