[ 
https://issues.apache.org/jira/browse/HDFS-7633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14286901#comment-14286901
 ] 

Walter Su commented on HDFS-7633:
---------------------------------


{color:red}-1 overall{color}.  

    {color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

    {color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
                        Please justify why no new tests are needed for this 
patch.
                        Also please list what manual steps were performed to 
verify this patch.

    {color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

    {color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

    {color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

    {color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version ) warnings.

    {color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.


> When Datanode has too many blocks, BlockPoolSliceScanner.getNewBlockScanTime 
> throws IllegalArgumentException
> ------------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-7633
>                 URL: https://issues.apache.org/jira/browse/HDFS-7633
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>    Affects Versions: 2.6.0
>            Reporter: Walter Su
>            Assignee: Walter Su
>            Priority: Minor
>         Attachments: HDFS-7633.patch
>
>
> issue:
> When the total block count on one of my DNs reaches 33554432, it refuses to accept 
> more blocks. This is the error:
> 2015-01-16 15:21:44,571 | ERROR | DataXceiver for client  at /172.1.1.8:50490 
> [Receiving block 
> BP-1976278848-172.1.1.2-1419846518085:blk_1221043436_147936990] | 
> datasight-198:25009:DataXceiver error processing WRITE_BLOCK operation  src: 
> /172.1.1.8:50490 dst: /172.1.1.11:25009 | 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:250)
> java.lang.IllegalArgumentException: n must be positive
>         at java.util.Random.nextInt(Random.java:300)
>         at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner.getNewBlockScanTime(BlockPoolSliceScanner.java:263)
>         at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner.addBlock(BlockPoolSliceScanner.java:276)
>         at 
> org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.addBlock(DataBlockScanner.java:193)
>         at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.closeBlock(DataNode.java:1733)
>         at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:765)
>         at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
>         at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
>         at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:232)
>         at java.lang.Thread.run(Thread.java:745)
> analysis:
> In 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner.getNewBlockScanTime(),
> when blockMap.size() is too large:
> Math.max(blockMap.size(), 1) * 600 is evaluated in int arithmetic and overflows to a negative value;
> Math.max(blockMap.size(), 1) * 600 * 1000L is therefore a negative long;
> (int) period is Integer.MIN_VALUE;
> Math.abs((int) period) is still Integer.MIN_VALUE, which is negative;
> DFSUtil.getRandom().nextInt(periodInt) then throws IllegalArgumentException (see the sketch below).
> I use Java HotSpot (build 1.7.0_05-b05).
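> A minimal, self-contained sketch of that overflow chain (not the attached patch; the 
> block count 33554432 is taken from the report above, and plain java.util.Random stands 
> in for DFSUtil.getRandom()):
> {code:java}
> public class BlockScanTimeOverflowDemo {
>     public static void main(String[] args) {
>         int blockMapSize = 33554432;                 // block count reported on the DataNode
> 
>         // int * int overflows before the long literal widens the expression
>         long period = Math.max(blockMapSize, 1) * 600 * 1000L;   // -1342177280000
> 
>         // narrowing the negative long keeps only the low 32 bits -> Integer.MIN_VALUE,
>         // and Math.abs(Integer.MIN_VALUE) is still Integer.MIN_VALUE
>         int periodInt = Math.abs((int) period);      // -2147483648
> 
>         System.out.println(period);                  // -1342177280000
>         System.out.println(periodInt);               // -2147483648
> 
>         // same failure as in the stack trace: "n must be positive"
>         new java.util.Random().nextInt(periodInt);   // throws IllegalArgumentException
>     }
> }
> {code}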



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
