[jira] [Updated] (HDFS-7633) When Datanode has too many blocks, BlockPoolSliceScanner.getNewBlockScanTime throws IllegalArgumentException
[ https://issues.apache.org/jira/browse/HDFS-7633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Walter Su updated HDFS-7633:
----------------------------
    Attachment: HDFS-7633.patch

regenerating patch with 'git diff'

> When Datanode has too many blocks, BlockPoolSliceScanner.getNewBlockScanTime
> throws IllegalArgumentException
>
>                 Key: HDFS-7633
>                 URL: https://issues.apache.org/jira/browse/HDFS-7633
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>    Affects Versions: 2.6.0
>            Reporter: Walter Su
>            Assignee: Walter Su
>            Priority: Minor
>         Attachments: HDFS-7633.patch
>
> Issue:
> When the total number of blocks on one of my DNs reaches 33554432, it
> refuses to accept more blocks. This is the error:
> 2015-01-16 15:21:44,571 | ERROR | DataXceiver for client at /172.1.1.8:50490
> [Receiving block BP-1976278848-172.1.1.2-1419846518085:blk_1221043436_147936990] |
> datasight-198:25009:DataXceiver error processing WRITE_BLOCK operation
> src: /172.1.1.8:50490 dst: /172.1.1.11:25009 |
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:250)
> java.lang.IllegalArgumentException: n must be positive
>         at java.util.Random.nextInt(Random.java:300)
>         at org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner.getNewBlockScanTime(BlockPoolSliceScanner.java:263)
>         at org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner.addBlock(BlockPoolSliceScanner.java:276)
>         at org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.addBlock(DataBlockScanner.java:193)
>         at org.apache.hadoop.hdfs.server.datanode.DataNode.closeBlock(DataNode.java:1733)
>         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:765)
>         at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
>         at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
>         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:232)
>         at java.lang.Thread.run(Thread.java:745)
> Analysis:
> In org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner.getNewBlockScanTime(),
> when blockMap.size() is too big:
> Math.max(blockMap.size(),1) * 600 is int typed and overflows to a negative value;
> Math.max(blockMap.size(),1) * 600 * 1000L is long typed and negative;
> (int)period is Integer.MIN_VALUE;
> Math.abs((int)period) is Integer.MIN_VALUE, which is still negative;
> DFSUtil.getRandom().nextInt(periodInt) then throws IllegalArgumentException.
> I use Java HotSpot (build 1.7.0_05-b05).
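The overflow chain in the analysis can be reproduced in isolation. The snippet below is a minimal, hypothetical sketch; the class name and the "safe" variant are illustrative assumptions, not taken from the attached patch:

    import java.util.Random;

    public class ScanPeriodOverflow {
        public static void main(String[] args) {
            int blockMapSize = 33554432; // 2^25 blocks, as in the report

            // Buggy arithmetic from the analysis: Math.max(..) * 600 is an
            // int * int product that overflows BEFORE the widening to long.
            long period = Math.max(blockMapSize, 1) * 600 * 1000L;
            int periodInt = Math.abs((int) period);
            System.out.println(period);    // -1342177280000 (negative)
            System.out.println(periodInt); // -2147483648: Math.abs(Integer.MIN_VALUE)
                                           // is still Integer.MIN_VALUE
            // new Random().nextInt(periodInt) now throws
            // java.lang.IllegalArgumentException: n must be positive

            // One possible fix (an assumption, not necessarily what the
            // attached patch does): compute the whole product in long
            // arithmetic, then clamp into the int range before nextInt.
            long safePeriod = Math.max(blockMapSize, 1) * 600L * 1000L;
            int safeInt = (int) Math.min(safePeriod, Integer.MAX_VALUE);
            System.out.println(new Random().nextInt(safeInt)); // valid, non-negative
        }
    }

The key point is that the widening has to happen before the first multiplication (600L rather than 600); clamping to Integer.MAX_VALUE then keeps nextInt's argument positive even for extreme block counts.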
[jira] [Updated] (HDFS-7633) When Datanode has too many blocks, BlockPoolSliceScanner.getNewBlockScanTime throws IllegalArgumentException
[ https://issues.apache.org/jira/browse/HDFS-7633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Walter Su updated HDFS-7633:
----------------------------
    Attachment: (was: h7633_20150116.patch)
[jira] [Updated] (HDFS-7633) When Datanode has too many blocks, BlockPoolSliceScanner.getNewBlockScanTime throws IllegalArgumentException
[ https://issues.apache.org/jira/browse/HDFS-7633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Charles Lamb updated HDFS-7633:
-------------------------------
    Summary: When Datanode has too many blocks, BlockPoolSliceScanner.getNewBlockScanTime throws IllegalArgumentException
       (was: When Datanode has too many blocks, BlockPoolSliceScanner.getNewBlockScanTime thows IllegalArgumentException)

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)