[ https://issues.apache.org/jira/browse/AMBARI-24382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16563660#comment-16563660 ]
Hari Sekhon commented on AMBARI-24382:
--------------------------------------

This looks like a more severe issue than I first thought. Even changing a setting in the default config group that seems unrelated to the bucket cache, such as
{code:java}
hbase.ipc.server.callqueue.read.ratio
{code}
still causes the new HMaster config to pick up all the "Undefined" values that exist only in the RegionServers config group and write them literally into hbase-site.xml for the HMaster on restart, breaking the HMaster again. The hbase-site.xml then contains the following config it should have omitted:
{code:java}
<property>
  <name>hbase.block.data.cachecompressed</name>
  <value>Undefined</value>
</property>
<property>
  <name>hbase.bucketcache.blockcache.memory.percentage</name>
  <value>Undefined</value>
</property>
<property>
  <name>hbase.bucketcache.blockcache.multi.percentage</name>
  <value>Undefined</value>
</property>
<property>
  <name>hbase.bucketcache.blockcache.single.percentage</name>
  <value>Undefined</value>
</property>
<property>
  <name>hbase.bucketcache.combinedcache.enabled</name>
  <value>Undefined</value>
</property>
<property>
  <name>hbase.bucketcache.ioengine</name>
  <value>Undefined</value>
</property>
<property>
  <name>hbase.bucketcache.size</name>
  <value>Undefined</value>
</property>
...
<property>
  <name>hbase.rs.cacheblocksonwrite</name>
  <value>Undefined</value>
</property>
<property>
  <name>hbase.rs.evictblocksonclose</name>
  <value>Undefined</value>
</property>
...
<property>
  <name>hfile.block.bloom.cacheonwrite</name>
  <value>Undefined</value>
</property>{code}

> Ambari shouldn't deploy "Undefined" Config Values
> -------------------------------------------------
>
>                 Key: AMBARI-24382
>                 URL: https://issues.apache.org/jira/browse/AMBARI-24382
>             Project: Ambari
>          Issue Type: Bug
>          Components: ambari-server
>    Affects Versions: 2.5.2
>            Reporter: Hari Sekhon
>            Priority: Critical
>
> Ambari should not deploy any property with an "Undefined" value (this breaks the HBase Master, as shown below).
>
> When using a config group for HBase RegionServers to enable the Bucket Cache only on RegionServers (see AMBARI-24370), and then enabling a number of bucket-cache-related settings in the default config group, Ambari infers that the bucket cache settings should exist and injects the following properties with a literal value of "Undefined" into hbase-site.xml:
> {code:java}
> <property>
>   <name>hbase.bucketcache.ioengine</name>
>   <value>Undefined</value>
> </property>
> <property>
>   <name>hbase.bucketcache.size</name>
>   <value>Undefined</value>
> </property>{code}
> which breaks the HMaster restart:
> {code:java}
> 2018-07-30 13:26:08,283 ERROR [main] master.HMasterCommandLine: Master exiting
> java.lang.RuntimeException: Failed construction of Master: class org.apache.hadoop.hbase.master.HMaster
> 	at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2824)
> 	at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:235)
> 	at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:139)
> 	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> 	at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
> 	at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2838)
> Caused by: java.lang.NumberFormatException: For input string: "Undefined"
> 	at sun.misc.FloatingDecimal.readJavaFormatString(FloatingDecimal.java:2043)
> 	at sun.misc.FloatingDecimal.parseFloat(FloatingDecimal.java:122)
> 	at java.lang.Float.parseFloat(Float.java:451)
> 	at org.apache.hadoop.conf.Configuration.getFloat(Configuration.java:1400)
> 	at org.apache.hadoop.hbase.io.hfile.CacheConfig.getBucketCache(CacheConfig.java:597)
> 	at org.apache.hadoop.hbase.io.hfile.CacheConfig.getL2(CacheConfig.java:566)
> 	at org.apache.hadoop.hbase.io.hfile.CacheConfig.instantiateBlockCache(CacheConfig.java:650)
> 	at org.apache.hadoop.hbase.io.hfile.CacheConfig.<init>(CacheConfig.java:239)
> 	at org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:591)
> 	at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:425)
> 	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> 	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> 	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> 	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> 	at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2819)
> 	... 5 more
> {code}
> The above two settings were added only to a RegionServers config group, but when the following settings are applied to the default config group, on the assumption that Masters would ignore them because the bucket cache isn't enabled on the Master, it turns out that Ambari infers they should exist, leaves them "Undefined", and deploys them anyway (these settings are from the OpenTSDB HBase performance tuning guide, by the way):
> {code:java}
> hbase.rs.cacheblocksonwrite=true
> hbase.rs.evictblocksonclose=false
> hfile.block.bloom.cacheonwrite=true
> hfile.block.index.cacheonwrite=true
> hbase.block.data.cachecompressed=true
> hbase.bucketcache.blockcache.single.percentage=.99
> hbase.bucketcache.blockcache.multi.percentage=0
> hbase.bucketcache.blockcache.memory.percentage=.01
> {code}
> I've worked around it by moving the config to the RegionServers config group, but then ran into AMBARI-24371 again.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
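For illustration, here is a minimal plain-Java sketch (not Ambari or HBase code; the helper name {{getFloat}} is hypothetical, loosely mirroring the behavior of {{org.apache.hadoop.conf.Configuration.getFloat}} as seen in the stack trace) showing why a literal "Undefined" value is fatal while an omitted property is harmless: an absent property falls back to the default, but a present-but-unparseable string reaches {{Float.parseFloat}} and throws {{NumberFormatException}}:

{code:java}
public class UndefinedConfigRepro {
    // Hypothetical stand-in for Configuration.getFloat(name, defaultValue):
    // a missing (null) raw value falls back to the default, but a value that
    // is present yet not a number is parsed and throws NumberFormatException.
    static float getFloat(String rawValue, float defaultValue) {
        if (rawValue == null) {
            return defaultValue; // property omitted -> default used, no crash
        }
        return Float.parseFloat(rawValue.trim()); // "Undefined" -> throws
    }

    public static void main(String[] args) {
        // Omitted property: safe, the default wins.
        System.out.println(getFloat(null, 0.9f)); // 0.9

        // Property deployed with the literal string "Undefined": the same
        // failure mode that kills HMaster in CacheConfig.getBucketCache.
        try {
            getFloat("Undefined", 0.9f);
        } catch (NumberFormatException e) {
            System.out.println(e.getMessage()); // For input string: "Undefined"
        }
    }
}
{code}

This is why omitting the properties entirely (rather than deploying them with a placeholder value) would leave the Master unaffected.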