[ 
https://issues.apache.org/jira/browse/HBASE-27990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17747196#comment-17747196
 ] 

ConfX commented on HBASE-27990:
-------------------------------

Hi [~wchevreuil],

Thanks for the response! The system should indeed fail when 0 is configured. 
However, it should not crash due to an unhandled ArithmeticException, which is 
both dangerous and uninformative. We believe a better way to handle this is to 
add a precheck, just like what is already done in the constructor of the exact 
same class several lines later:

{code:java}
if (blockNumCapacity >= Integer.MAX_VALUE) {
  // Enough for about 32TB of cache!
  throw new IllegalArgumentException("Cache capacity is too large, only support 32TB now");
}
{code}
and
{code:java}
private void sanityCheckConfigs() {
    Preconditions.checkArgument(acceptableFactor <= 1 && acceptableFactor >= 0,
      ACCEPT_FACTOR_CONFIG_NAME + " must be between 0.0 and 1.0");
...
{code}
 
We have also attached a simple patch for this issue.  [^HBASE-27990.patch] 
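For illustration, here is a minimal standalone sketch of such a precheck. The class and method names below are illustrative only and are not taken from the patch; the point is simply to reject a non-positive {{blockSize}} with an informative message before the division happens:

{code:java}
// Standalone sketch of the proposed precheck; names are illustrative only.
public final class BlockSizeCheckSketch {
  static long blockNumCapacity(long capacity, int blockSize) {
    // Fail fast with an informative message instead of letting the
    // division below throw a bare ArithmeticException.
    if (blockSize <= 0) {
      throw new IllegalArgumentException(
        "hbase.blockcache.minblocksize must be positive, got: " + blockSize);
    }
    return capacity / blockSize;
  }

  public static void main(String[] args) {
    // 32 MB cache with 16 KB blocks -> 2048 blocks.
    System.out.println(blockNumCapacity(32L * 1024 * 1024, 16384));
    try {
      blockNumCapacity(1024L, 0); // rejected before the division
    } catch (IllegalArgumentException e) {
      System.out.println("rejected: " + e.getMessage());
    }
  }
}
{code}

The same style of guard could live alongside the existing {{sanityCheckConfigs()}} checks in the real constructor.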

> BucketCache causes ArithmeticException due to improper blockSize value checking
> --------------------------------------------------------------------------------
>
>                 Key: HBASE-27990
>                 URL: https://issues.apache.org/jira/browse/HBASE-27990
>             Project: HBase
>          Issue Type: Bug
>            Reporter: ConfX
>            Priority: Critical
>         Attachments: HBASE-27990.patch, reproduce.sh
>
>
> h2. What happened
> There is no value checking for the parameter 
> {{{}hbase.blockcache.minblocksize{}}}. This may cause improper calculations 
> and crash the system, e.g. via division by zero.
> h2. Buggy code
> In {{{}BucketCache.java{}}}, there is no value checking for {{blockSize}}, and 
> this variable is used directly to compute {{{}blockNumCapacity{}}}. 
> When {{blockSize}} is mistakenly set to 0, the code performs a division by zero 
> and throws an ArithmeticException that crashes the system.
> {noformat}
>   public BucketCache(String ioEngineName, long capacity, int blockSize, int[] bucketSizes,
>     int writerThreadNum, int writerQLen, String persistencePath, int ioErrorsTolerationDuration,
>     Configuration conf) throws IOException {
>     ...
>     long blockNumCapacity = capacity / blockSize;
>     ...{noformat}
> h2. How to reproduce
> (1) set hbase.blockcache.minblocksize=0
> (2) run org.apache.hadoop.hbase.io.hfile.TestCacheConfig#testBucketCacheConfigL1L2Setup
> You should observe the following failure:
> {noformat}
> java.lang.ArithmeticException: / by zero
>     at org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.<init>(BucketCache.java:282)
>     at org.apache.hadoop.hbase.io.hfile.BlockCacheFactory.createBucketCache(BlockCacheFactory.java:238)
>     at org.apache.hadoop.hbase.io.hfile.BlockCacheFactory.createBlockCache(BlockCacheFactory.java:110)
>     at org.apache.hadoop.hbase.io.hfile.TestCacheConfig.testBucketCacheConfigL1L2Setup(TestCacheConfig.java:325)
>         ...{noformat}
> For an easy reproduction, run the reproduce.sh in the attachment.
> We are happy to provide a patch if this issue is confirmed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
