[ 
https://issues.apache.org/jira/browse/HBASE-22958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16921946#comment-16921946
 ] 

Anoop Sam John commented on HBASE-22958:
----------------------------------------

In HBase 2.x the HMaster (HM) is also a RegionServer and is able to host tables
(by default, no tables are placed on the HM). However, if the HM-side
configuration included the BucketCache (BC) size, the HM initialized the BC,
which tries to allocate this huge off-heap area. You can change the
configuration on the HM side alone to avoid this BC size. HBASE-21290 already
solves this problem: with that change, the BC is initialized only if tables are
enabled to be hosted on the HM, even if the config specifies a size. You can
upgrade to 2.1.1, in which the fix is available.
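The "change the config on the HM side alone" workaround could look like the
following hbase-site.xml fragment, deployed only on the Master host (the
property names are real HBase settings; treating a blank ioengine plus a zero
size as "BucketCache disabled" is a sketch of the documented behavior, not
verified against 2.1.0):

```xml
<!-- hbase-site.xml on the HMaster host ONLY; leave the copy used by
     the RegionServers unchanged. With no ioengine configured (and the
     size set to 0), the Master does not initialize the BucketCache and
     so never tries to allocate the large off-heap region. -->
<property>
  <name>hbase.bucketcache.ioengine</name>
  <value></value>
</property>
<property>
  <name>hbase.bucketcache.size</name>
  <value>0</value>
</property>
```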

> when setting hbase.bucketcache.size too large, hmaster will break down
> ----------------------------------------------------------------------
>
>                 Key: HBASE-22958
>                 URL: https://issues.apache.org/jira/browse/HBASE-22958
>             Project: HBase
>          Issue Type: Bug
>          Components: BucketCache, master
>    Affects Versions: 2.1.0
>         Environment: OS:Centos 7.6
> Java:8U192
> HBase:2.1.0
> Hadoop:3.1.1
>            Reporter: dingwei2019
>            Priority: Major
>
> I want to use the bucket cache. When I set the following config:
>  *in hbase-site.xml file*
>  hbase.bucketcache.ioengine: offheap
>  hbase.bucketcache.combinedcache.enabled: false
>  hbase.bucketcache.size:225280 (220G)
> *in hbase-env.sh file*
>  export HBASE_OFFHEAPSIZE=0G
>  export HBASE_REGIONSERVER_OPTS="-server -Xms100G -Xmx100G -XX:PermSize=5G 
> -XX:MaxPermSize=80G -XX:MaxDirectMemorySize=250G 
> -XX:+HeapDumpOnOutOfMemoryError -XX:+UnlockExperimentalVMOptions -XX:+UseG1GC 
> -XX:MaxGCPauseMillis=100 -XX:GCPauseIntervalMillis=200"
> *below is the error message:*
>  2019-08-31 17:18:03,390 ERROR [main] master.HMasterCommandLine: Master 
> exiting
>  java.lang.RuntimeException: Failed construction of Master: class 
> org.apache.hadoop.hbase.master.HMaster.
>  at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2972)
>  at 
> org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:236)
>  at 
> org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:140)
>  at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>  at 
> org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:149)
>  at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2983)
>  Caused by: java.lang.OutOfMemoryError: Direct buffer memory
>  at java.nio.Bits.reserveMemory(Bits.java:695)
>  at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
>  at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
>  at 
> org.apache.hbase.thirdparty.io.netty.util.internal.PlatformDependent0.<clinit>(PlatformDependent0.java:79)
>  at 
> org.apache.hbase.thirdparty.io.netty.util.internal.PlatformDependent.isAndroid(PlatformDependent.java:208)
>  at 
> org.apache.hbase.thirdparty.io.netty.util.internal.PlatformDependent.<clinit>(PlatformDependent.java:79)
>  at 
> org.apache.hbase.thirdparty.io.netty.util.ConstantPool.<init>(ConstantPool.java:32)
>  at org.apache.hbase.thirdparty.io.netty.util.Signal$1.<init>(Signal.java:27)
>  at org.apache.hbase.thirdparty.io.netty.util.Signal.<clinit>(Signal.java:27)
>  at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.<clinit>(DefaultPromise.java:43)
>  at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:36)
>  at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:58)
>  at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:47)
>  at 
> org.apache.hbase.thirdparty.io.netty.channel.MultithreadEventLoopGroup.<init>(MultithreadEventLoopGroup.java:59)
>  at 
> org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoopGroup.<init>(EpollEventLoopGroup.java:104)
>  at 
> org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoopGroup.<init>(EpollEventLoopGroup.java:91)
>  at 
> org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoopGroup.<init>(EpollEventLoopGroup.java:68)
>  at 
> org.apache.hadoop.hbase.util.NettyEventLoopGroupConfig.<init>(NettyEventLoopGroupConfig.java:61)
>  at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.setupNetty(HRegionServer.java:680)
>  at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:539)
>  at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:484)
>  at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>  at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>  at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>  at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2965)
>  ... 5 more
>  
> I know the master can also store tables, and the master is an instance of a 
> regionserver. But I want to configure the bucket cache only on the 
> regionservers. What can I do?



--
This message was sent by Atlassian Jira
(v8.3.2#803003)
