Weird, I got the URL from that page previously.  I guess it changed since then 
and I didn't notice when I rebuilt my test cluster.  Thanks for the heads up.

Greg

From: Yusaku Sako <yus...@hortonworks.com>
Reply-To: "user@ambari.apache.org" <user@ambari.apache.org>
Date: Monday, November 3, 2014 3:50 PM
To: "user@ambari.apache.org" <user@ambari.apache.org>
Subject: Re: possible bug in the Ambari API

Hi Greg,

The yum repo you referred to is old and no longer maintained (I just installed 
ambari-server from it and the hash is 
trunk:0e959b0ed80fc1a170cc10b1c75050c88a7b2d06, which is trunk code from Oct 4).
Please use the URLs shown in the Quick Start Guide: 
https://cwiki.apache.org/confluence/display/AMBARI/Quick+Start+Guide

# to test the 1.7.0 branch build - updated nightly
wget -O /etc/yum.repos.d/ambari.repo 
http://s3.amazonaws.com/dev.hortonworks.com/ambari/centos6/1.x/latest/1.7.0/ambari.repo
OR
#  to test the trunk build - updated multiple times a day
wget -O /etc/yum.repos.d/ambari.repo 
http://s3.amazonaws.com/dev.hortonworks.com/ambari/centos6/1.x/latest/trunk/ambari.repo

Thanks,
Yusaku


On Mon, Nov 3, 2014 at 12:42 PM, Greg Hill <greg.h...@rackspace.com> wrote:
/api/v1/stacks/HDP/versions/2.1/services/HBASE/configurations works fine, just 
like any other GET method on a list of resources.

I did a yum update and ambari-server restart on my ambari node to rule that 
out.  Still get the same issue.  Happens for 
/api/v1/stacks/HDP/versions/2.1/services/HDFS/configurations/content as well.

This is my yum repo:
http://s3.amazonaws.com/dev.hortonworks.com/ambari/centos6/1.x/updates/1.7.0.trunk/

Am I missing some header that fixes things?  All I'm passing in is 
X-Requested-By.
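For reference, here's a minimal sketch of roughly how I'm building the request 
(the hostname and credentials below are placeholders, not my real setup):

```python
import base64
import urllib.request

# Placeholder host and credentials -- substitute your own Ambari server.
host = "http://ambari.example.com:8080"
path = "/api/v1/stacks/HDP/versions/2.1/services/HDFS/configurations/content"

req = urllib.request.Request(host + path)
# X-Requested-By is the only extra header being passed.
req.add_header("X-Requested-By", "ambari")
# Basic auth against the Ambari admin account.
token = base64.b64encode(b"admin:admin").decode("ascii")
req.add_header("Authorization", "Basic " + token)

# urllib.request.urlopen(req) would perform the GET against a live server.
```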

Why does a GET on a single resource return two resources anyway?  That seems 
like it should be subdivided further if that's how it works.

Greg

From: Srimanth Gunturi <srima...@hortonworks.com>
Reply-To: "user@ambari.apache.org" <user@ambari.apache.org>
Date: Monday, November 3, 2014 3:17 PM

To: "user@ambari.apache.org" <user@ambari.apache.org>
Subject: Re: possible bug in the Ambari API

Hi Greg,
I attempted the same API on latest 1.7.0 build, and do not see the issue (the 
comma is present between the two configurations).
Do you see the same when you access 
"/api/v1/stacks2/HDP/versions/2.1/stackServices/HBASE/configurations" or  
"/api/v1/stacks2/HDP/versions/2.1/stackServices/HDFS/configurations/content" ?
Regards,
Srimanth



On Mon, Nov 3, 2014 at 12:12 PM, Greg Hill <greg.h...@rackspace.com> wrote:
Also, I still get the same broken response using 'stacks' instead of 'stacks2'.  
Is this a bug that was fixed recently?  I'm using a build from last week.

Greg

From: Greg <greg.h...@rackspace.com>
Reply-To: "user@ambari.apache.org" <user@ambari.apache.org>
Date: Monday, November 3, 2014 3:05 PM

To: "user@ambari.apache.org" <user@ambari.apache.org>
Subject: Re: possible bug in the Ambari API

Oh?  I was basing it off the python client using 'stacks2'.  I figured that 
stacks was deprecated, but I suppose I should have asked.  Neither API is 
documented.  Why are there two?

Greg

From: Jeff Sposetti <j...@hortonworks.com>
Reply-To: "user@ambari.apache.org" <user@ambari.apache.org>
Date: Monday, November 3, 2014 2:54 PM
To: "user@ambari.apache.org" <user@ambari.apache.org>
Subject: Re: possible bug in the Ambari API

Greg, That's the /stacks2 API. Want to try with /stacks (which I think is the 
preferred API resource)?

http://c6401.ambari.apache.org:8080/api/v1/stacks/HDP/versions/2.1/services/HBASE/configurations/content



[
  {
    "href" : 
"http://c6401.ambari.apache.org:8080/api/v1/stacks/HDP/versions/2.1/services/HBASE/configurations/content",
    "StackConfigurations" : {
      "final" : "false",
      "property_description" : "Custom log4j.properties",
      "property_name" : "content",
      "property_type" : [ ],
      "property_value" : "\n# Licensed to the Apache Software Foundation (ASF) 
under one\n# or more contributor license agreements.  See the NOTICE file\n# 
distributed with this work for additional information\n# regarding copyright 
ownership.  The ASF licenses this file\n# to you under the Apache License, 
Version 2.0 (the\n# \"License\"); you may not use this file except in 
compliance\n# with the License.  You may obtain a copy of the License at\n#\n#  
   
http://www.apache.org/licenses/LICENSE-2.0\n#\n#
 Unless required by applicable law or agreed to in writing, software\n# 
distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT 
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the 
License for the specific language governing permissions and\n# limitations 
under the License.\n\n\n# Define some default values that can be overridden by 
system 
properties\nhbase.root.logger=INFO,console\nhbase.security.logger=INFO,console\nhbase.log.dir=.\nhbase.log.file=hbase.log\n\n#
 Define the root logger to the system property 
\"hbase.root.logger\".\nlog4j.rootLogger=${hbase.root.logger}\n\n# Logging 
Threshold\nlog4j.threshold=ALL\n\n#\n# Daily Rolling File 
Appender\n#\nlog4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender\nlog4j.appender.DRFA.File=${hbase.log.dir}/${hbase.log.file}\n\n#
 Rollver at midnight\nlog4j.appender.DRFA.DatePattern=.yyyy-MM-dd\n\n# 30-day 
backup\n#log4j.appender.DRFA.MaxBackupIndex=30\nlog4j.appender.DRFA.layout=org.apache.log4j.PatternLayout\n\n#
 Pattern format: Date LogLevel LoggerName 
LogMessage\nlog4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %-5p [%t] 
%c{2}: %m%n\n\n# Rolling File Appender 
properties\nhbase.log.maxfilesize=256MB\nhbase.log.maxbackupindex=20\n\n# 
Rolling File 
Appender\nlog4j.appender.RFA=org.apache.log4j.RollingFileAppender\nlog4j.appender.RFA.File=${hbase.log.dir}/${hbase.log.file}\n\nlog4j.appender.RFA.MaxFileSize=${hbase.log.maxfilesize}\nlog4j.appender.RFA.MaxBackupIndex=${hbase.log.maxbackupindex}\n\nlog4j.appender.RFA.layout=org.apache.log4j.PatternLayout\nlog4j.appender.RFA.layout.ConversionPattern=%d{ISO8601}
 %-5p [%t] %c{2}: %m%n\n\n#\n# Security audit 
appender\n#\nhbase.security.log.file=SecurityAuth.audit\nhbase.security.log.maxfilesize=256MB\nhbase.security.log.maxbackupindex=20\nlog4j.appender.RFAS=org.apache.log4j.RollingFileAppender\nlog4j.appender.RFAS.File=${hbase.log.dir}/${hbase.security.log.file}\nlog4j.appender.RFAS.MaxFileSize=${hbase.security.log.maxfilesize}\nlog4j.appender.RFAS.MaxBackupIndex=${hbase.security.log.maxbackupindex}\nlog4j.appender.RFAS.layout=org.apache.log4j.PatternLayout\nlog4j.appender.RFAS.layout.ConversionPattern=%d{ISO8601}
 %p %c: 
%m%n\nlog4j.category.SecurityLogger=${hbase.security.logger}\nlog4j.additivity.SecurityLogger=false\n#log4j.logger.SecurityLogger.org.apache.hadoop.hbase.security.access.AccessController=TRACE\n\n#\n#
 Null 
Appender\n#\nlog4j.appender.NullAppender=org.apache.log4j.varia.NullAppender\n\n#\n#
 console\n# Add \"console\" to rootlogger above if you want to use 
this\n#\nlog4j.appender.console=org.apache.log4j.ConsoleAppender\nlog4j.appender.console.target=System.err\nlog4j.appender.console.layout=org.apache.log4j.PatternLayout\nlog4j.appender.console.layout.ConversionPattern=%d{ISO8601}
 %-5p [%t] %c{2}: %m%n\n\n# Custom Logging 
levels\n\nlog4j.logger.org.apache.zookeeper=INFO\n#log4j.logger.org.apache.hadoop.fs.FSNamesystem=DEBUG\nlog4j.logger.org.apache.hadoop.hbase=DEBUG\n#
 Make these two classes INFO-level. Make them DEBUG to see more zk 
debug.\nlog4j.logger.org.apache.hadoop.hbase.zookeeper.ZKUtil=INFO\nlog4j.logger.org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher=INFO\n#log4j.logger.org.apache.hadoop.dfs=DEBUG\n#
 Set this class to log INFO only otherwise its OTT\n# Enable this to get 
detailed connection error/retry logging.\n# 
log4j.logger.org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation=TRACE\n\n\n#
 Uncomment this line to enable tracing on _every_ RPC call (this can be a lot 
of output)\n#log4j.logger.org.apache.hadoop.ipc.HBaseServer.trace=DEBUG\n\n# 
Uncomment the below if you want to remove logging of client region caching'\n# 
and scan of .META. messages\n# 
log4j.logger.org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation=INFO\n#
 log4j.logger.org.apache.hadoop.hbase.client.MetaScanner=INFO\n\n    ",
      "service_name" : "HBASE",
      "stack_name" : "HDP",
      "stack_version" : "2.1",
      "type" : "hbase-log4j.xml"
    }
  },
  {
    "href" : 
"http://c6401.ambari.apache.org:8080/api/v1/stacks/HDP/versions/2.1/services/HBASE/configurations/content",
    "StackConfigurations" : {
      "final" : "false",
      "property_description" : "This is the jinja template for hbase-env.sh 
file",
      "property_name" : "content",
      "property_type" : [ ],
      "property_value" : "\n# Set environment variables here.\n\n# The java 
implementation to use. Java 1.6 required.\nexport 
JAVA_HOME={{java64_home}}\n\n# HBase Configuration directory\nexport 
HBASE_CONF_DIR=${HBASE_CONF_DIR:-{{hbase_conf_dir}}}\n\n# Extra Java CLASSPATH 
elements. Optional.\nexport HBASE_CLASSPATH=${HBASE_CLASSPATH}\n\n# The maximum 
amount of heap to use, in MB. Default is 1000.\n# export 
HBASE_HEAPSIZE=1000\n\n# Extra Java runtime options.\n# Below are what we set 
by default. May only work with SUN JVM.\n# For more on why as well as other 
possible settings,\n# see 
http://wiki.apache.org/hadoop/PerformanceTuning\nexport
 SERVER_GC_OPTS=\"-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps 
-Xloggc:{{log_dir}}/gc.log-`date +'%Y%m%d%H%M'`\"\n# Uncomment below to enable 
java garbage collection logging.\n# export HBASE_OPTS=\"$HBASE_OPTS -verbose:gc 
-XX:+PrintGCDetails -XX:+PrintGCDateStamps 
-Xloggc:$HBASE_HOME/logs/gc-hbase.log\"\n\n# Uncomment and adjust to enable JMX 
exporting\n# See jmxremote.password and jmxremote.access in 
$JRE_HOME/lib/management to configure remote password access.\n# More details 
at: 
http://java.sun.com/javase/6/docs/technotes/guides/management/agent.html\n#\n#
 export HBASE_JMX_BASE=\"-Dcom.sun.management.jmxremote.ssl=false 
-Dcom.sun.management.jmxremote.authenticate=false\"\n# If you want to configure 
BucketCache, specify '-XX: MaxDirectMemorySize=' with proper direct memory 
size\n# export HBASE_THRIFT_OPTS=\"$HBASE_JMX_BASE 
-Dcom.sun.management.jmxremote.port=10103\"\n# export 
HBASE_ZOOKEEPER_OPTS=\"$HBASE_JMX_BASE 
-Dcom.sun.management.jmxremote.port=10104\"\n\n# File naming hosts on which 
HRegionServers will run. $HBASE_HOME/conf/regionservers by default.\nexport 
HBASE_REGIONSERVERS=${HBASE_CONF_DIR}/regionservers\n\n# Extra ssh options. 
Empty by default.\n# export HBASE_SSH_OPTS=\"-o ConnectTimeout=1 -o 
SendEnv=HBASE_CONF_DIR\"\n\n# Where log files are stored. $HBASE_HOME/logs by 
default.\nexport HBASE_LOG_DIR={{log_dir}}\n\n# A string representing this 
instance of hbase. $USER by default.\n# export HBASE_IDENT_STRING=$USER\n\n# 
The scheduling priority for daemon processes. See 'man nice'.\n# export 
HBASE_NICENESS=10\n\n# The directory where pid files are stored. /tmp by 
default.\nexport HBASE_PID_DIR={{pid_dir}}\n\n# Seconds to sleep between slave 
commands. Unset by default. This\n# can be useful in large clusters, where, 
e.g., slave rsyncs can\n# otherwise arrive faster than the master can service 
them.\n# export HBASE_SLAVE_SLEEP=0.1\n\n# Tell HBase whether it should manage 
it's own instance of Zookeeper or not.\nexport HBASE_MANAGES_ZK=false\n\n{% if 
security_enabled %}\nexport HBASE_OPTS=\"$HBASE_OPTS -XX:+UseConcMarkSweepGC 
-XX:ErrorFile={{log_dir}}/hs_err_pid%p.log 
-Djava.security.auth.login.config={{client_jaas_config_file}}\"\nexport 
HBASE_MASTER_OPTS=\"$HBASE_MASTER_OPTS -Xmx{{master_heapsize}} 
-Djava.security.auth.login.config={{master_jaas_config_file}}\"\nexport 
HBASE_REGIONSERVER_OPTS=\"$HBASE_REGIONSERVER_OPTS 
-Xmn{{regionserver_xmn_size}} -XX:CMSInitiatingOccupancyFraction=70  
-Xms{{regionserver_heapsize}} -Xmx{{regionserver_heapsize}} 
-Djava.security.auth.login.config={{regionserver_jaas_config_file}}\"\n{% else 
%}\nexport HBASE_OPTS=\"$HBASE_OPTS -XX:+UseConcMarkSweepGC 
-XX:ErrorFile={{log_dir}}/hs_err_pid%p.log\"\nexport 
HBASE_MASTER_OPTS=\"$HBASE_MASTER_OPTS -Xmx{{master_heapsize}}\"\nexport 
HBASE_REGIONSERVER_OPTS=\"$HBASE_REGIONSERVER_OPTS 
-Xmn{{regionserver_xmn_size}} -XX:CMSInitiatingOccupancyFraction=70  
-Xms{{regionserver_heapsize}} -Xmx{{regionserver_heapsize}}\"\n{% endif %}\n    
",
      "service_name" : "HBASE",
      "stack_name" : "HDP",
      "stack_version" : "2.1",
      "type" : "hbase-env.xml"
    }
  }
]





On Mon, Nov 3, 2014 at 2:45 PM, Greg Hill 
<greg.h...@rackspace.com<mailto:greg.h...@rackspace.com>> wrote:
The more I look at this, the more I think it's just two separate dictionaries 
separated by a space.  That's not a valid response at all; it should be wrapped 
in a list structure.  I'll go file a JIRA ticket.
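In the meantime, the concatenated objects can be recovered client-side by 
decoding them one at a time (a sketch using the stdlib json module, nothing 
Ambari-specific):

```python
import json

def parse_concatenated_json(text):
    """Parse a stream of whitespace-separated JSON documents into a list."""
    decoder = json.JSONDecoder()
    out, idx = [], 0
    while idx < len(text):
        # Skip any whitespace between documents.
        while idx < len(text) and text[idx].isspace():
            idx += 1
        if idx >= len(text):
            break
        # raw_decode parses one document and reports where it ended.
        obj, end = decoder.raw_decode(text, idx)
        out.append(obj)
        idx = end
    return out

# Two objects separated by a space, like the broken response:
docs = parse_concatenated_json('{"a": 1} {"b": 2}')
```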

Greg

From: Greg <greg.h...@rackspace.com>
Reply-To: "user@ambari.apache.org" <user@ambari.apache.org>
Date: Monday, November 3, 2014 12:04 PM
To: "user@ambari.apache.org" <user@ambari.apache.org>
Subject: possible bug in the Ambari API

On the latest Ambari 1.7.0 build, this API call returns invalid JSON that the 
parser chokes on.  Notice the lack of a comma between the end of the first 
'StackConfigurations' structure and the following one.  There's just "} {" 
instead of "}, {"

GET /api/v1/stacks2/HDP/versions/2.1/stackServices/HBASE/configurations/content

{
  "href" : 
"http://c6401.ambari.apache.org:8080/api/v1/stacks2/HDP/versions/2.1/stackServices/HBASE/configurations/content",
  "StackConfigurations" : {
    "final" : "false",
    "property_description" : "Custom log4j.properties",
    "property_name" : "content",
    "property_type" : [ ],
    "property_value" : "\n# Licensed to the Apache Software Foundation (ASF) 
under one\n# or more contributor license agreements.  See the NOTICE file\n# 
distributed with this work for additional information\n# regarding copyright 
ownership.  The ASF licenses this file\n# to you under the Apache License, 
Version 2.0 (the\n# \"License\"); you may not use this file except in 
compliance\n# with the License.  You may obtain a copy of the License at\n#\n#  
   
http://www.apache.org/licenses/LICENSE-2.0\n#\n#
 Unless required by applicable law or agreed to in writing, software\n# 
distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT 
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the 
License for the specific language governing permissions and\n# limitations 
under the License.\n\n\n# Define some default values that can be overridden by 
system 
properties\nhbase.root.logger=INFO,console\nhbase.security.logger=INFO,console\nhbase.log.dir=.\nhbase.log.file=hbase.log\n\n#
 Define the root logger to the system property 
\"hbase.root.logger\".\nlog4j.rootLogger=${hbase.root.logger}\n\n# Logging 
Threshold\nlog4j.threshold=ALL\n\n#\n# Daily Rolling File 
Appender\n#\nlog4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender\nlog4j.appender.DRFA.File=${hbase.log.dir}/${hbase.log.file}\n\n#
 Rollver at midnight\nlog4j.appender.DRFA.DatePattern=.yyyy-MM-dd\n\n# 30-day 
backup\n#log4j.appender.DRFA.MaxBackupIndex=30\nlog4j.appender.DRFA.layout=org.apache.log4j.PatternLayout\n\n#
 Pattern format: Date LogLevel LoggerName 
LogMessage\nlog4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %-5p [%t] 
%c{2}: %m%n\n\n# Rolling File Appender 
properties\nhbase.log.maxfilesize=256MB\nhbase.log.maxbackupindex=20\n\n# 
Rolling File 
Appender\nlog4j.appender.RFA=org.apache.log4j.RollingFileAppender\nlog4j.appender.RFA.File=${hbase.log.dir}/${hbase.log.file}\n\nlog4j.appender.RFA.MaxFileSize=${hbase.log.maxfilesize}\nlog4j.appender.RFA.MaxBackupIndex=${hbase.log.maxbackupindex}\n\nlog4j.appender.RFA.layout=org.apache.log4j.PatternLayout\nlog4j.appender.RFA.layout.ConversionPattern=%d{ISO8601}
 %-5p [%t] %c{2}: %m%n\n\n#\n# Security audit 
appender\n#\nhbase.security.log.file=SecurityAuth.audit\nhbase.security.log.maxfilesize=256MB\nhbase.security.log.maxbackupindex=20\nlog4j.appender.RFAS=org.apache.log4j.RollingFileAppender\nlog4j.appender.RFAS.File=${hbase.log.dir}/${hbase.security.log.file}\nlog4j.appender.RFAS.MaxFileSize=${hbase.security.log.maxfilesize}\nlog4j.appender.RFAS.MaxBackupIndex=${hbase.security.log.maxbackupindex}\nlog4j.appender.RFAS.layout=org.apache.log4j.PatternLayout\nlog4j.appender.RFAS.layout.ConversionPattern=%d{ISO8601}
 %p %c: 
%m%n\nlog4j.category.SecurityLogger=${hbase.security.logger}\nlog4j.additivity.SecurityLogger=false\n#log4j.logger.SecurityLogger.org.apache.hadoop.hbase.security.access.AccessController=TRACE\n\n#\n#
 Null 
Appender\n#\nlog4j.appender.NullAppender=org.apache.log4j.varia.NullAppender\n\n#\n#
 console\n# Add \"console\" to rootlogger above if you want to use 
this\n#\nlog4j.appender.console=org.apache.log4j.ConsoleAppender\nlog4j.appender.console.target=System.err\nlog4j.appender.console.layout=org.apache.log4j.PatternLayout\nlog4j.appender.console.layout.ConversionPattern=%d{ISO8601}
 %-5p [%t] %c{2}: %m%n\n\n# Custom Logging 
levels\n\nlog4j.logger.org.apache.zookeeper=INFO\n#log4j.logger.org.apache.hadoop.fs.FSNamesystem=DEBUG\nlog4j.logger.org.apache.hadoop.hbase=DEBUG\n#
 Make these two classes INFO-level. Make them DEBUG to see more zk 
debug.\nlog4j.logger.org.apache.hadoop.hbase.zookeeper.ZKUtil=INFO\nlog4j.logger.org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher=INFO\n#log4j.logger.org.apache.hadoop.dfs=DEBUG\n#
 Set this class to log INFO only otherwise its OTT\n# Enable this to get 
detailed connection error/retry logging.\n# 
log4j.logger.org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation=TRACE\n\n\n#
 Uncomment this line to enable tracing on _every_ RPC call (this can be a lot 
of output)\n#log4j.logger.org.apache.hadoop.ipc.HBaseServer.trace=DEBUG\n\n# 
Uncomment the below if you want to remove logging of client region caching'\n# 
and scan of .META. messages\n# 
log4j.logger.org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation=INFO\n#
 log4j.logger.org.apache.hadoop.hbase.client.MetaScanner=INFO\n\n    ",
    "service_name" : "HBASE",
    "stack_name" : "HDP",
    "stack_version" : "2.1",
    "type" : "hbase-log4j.xml"
  }
} {
  "href" : 
"http://c6401.ambari.apache.org:8080/api/v1/stacks2/HDP/versions/2.1/stackServices/HBASE/configurations/content",
  "StackConfigurations" : {
    "final" : "false",
    "property_description" : "This is the jinja template for hbase-env.sh file",
    "property_name" : "content",
    "property_type" : [ ],
    "property_value" : "\n# Set environment variables here.\n\n# The java 
implementation to use. Java 1.6 required.\nexport 
JAVA_HOME={{java64_home}}\n\n# HBase Configuration directory\nexport 
HBASE_CONF_DIR=${HBASE_CONF_DIR:-{{hbase_conf_dir}}}\n\n# Extra Java CLASSPATH 
elements. Optional.\nexport HBASE_CLASSPATH=${HBASE_CLASSPATH}\n\n# The maximum 
amount of heap to use, in MB. Default is 1000.\n# export 
HBASE_HEAPSIZE=1000\n\n# Extra Java runtime options.\n# Below are what we set 
by default. May only work with SUN JVM.\n# For more on why as well as other 
possible settings,\n# see 
http://wiki.apache.org/hadoop/PerformanceTuning\nexport
 HBASE_OPTS=\"-XX:+UseConcMarkSweepGC 
-XX:ErrorFile={{log_dir}}/hs_err_pid%p.log\"\nexport 
SERVER_GC_OPTS=\"-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps 
-Xloggc:{{log_dir}}/gc.log-`date +'%Y%m%d%H%M'`\"\n# Uncomment below to enable 
java garbage collection logging.\n# export HBASE_OPTS=\"$HBASE_OPTS -verbose:gc 
-XX:+PrintGCDetails -XX:+PrintGCDateStamps 
-Xloggc:$HBASE_HOME/logs/gc-hbase.log\"\n\n# Uncomment and adjust to enable JMX 
exporting\n# See jmxremote.password and jmxremote.access in 
$JRE_HOME/lib/management to configure remote password access.\n# More details 
at: 
http://java.sun.com/javase/6/docs/technotes/guides/management/agent.html\n#\n#
 export HBASE_JMX_BASE=\"-Dcom.sun.management.jmxremote.ssl=false 
-Dcom.sun.management.jmxremote.authenticate=false\"\nexport 
HBASE_MASTER_OPTS=\"-Xmx{{master_heapsize}}\"\nexport 
HBASE_REGIONSERVER_OPTS=\"-Xmn{{regionserver_xmn_size}} 
-XX:CMSInitiatingOccupancyFraction=70  -Xms{{regionserver_heapsize}} 
-Xmx{{regionserver_heapsize}}\"\n# export HBASE_THRIFT_OPTS=\"$HBASE_JMX_BASE 
-Dcom.sun.management.jmxremote.port=10103\"\n# export 
HBASE_ZOOKEEPER_OPTS=\"$HBASE_JMX_BASE 
-Dcom.sun.management.jmxremote.port=10104\"\n\n# File naming hosts on which 
HRegionServers will run. $HBASE_HOME/conf/regionservers by default.\nexport 
HBASE_REGIONSERVERS=${HBASE_CONF_DIR}/regionservers\n\n# Extra ssh options. 
Empty by default.\n# export HBASE_SSH_OPTS=\"-o ConnectTimeout=1 -o 
SendEnv=HBASE_CONF_DIR\"\n\n# Where log files are stored. $HBASE_HOME/logs by 
default.\nexport HBASE_LOG_DIR={{log_dir}}\n\n# A string representing this 
instance of hbase. $USER by default.\n# export HBASE_IDENT_STRING=$USER\n\n# 
The scheduling priority for daemon processes. See 'man nice'.\n# export 
HBASE_NICENESS=10\n\n# The directory where pid files are stored. /tmp by 
default.\nexport HBASE_PID_DIR={{pid_dir}}\n\n# Seconds to sleep between slave 
commands. Unset by default. This\n# can be useful in large clusters, where, 
e.g., slave rsyncs can\n# otherwise arrive faster than the master can service 
them.\n# export HBASE_SLAVE_SLEEP=0.1\n\n# Tell HBase whether it should manage 
it's own instance of Zookeeper or not.\nexport HBASE_MANAGES_ZK=false\n\n{% if 
security_enabled %}\nexport HBASE_OPTS=\"$HBASE_OPTS 
-Djava.security.auth.login.config={{client_jaas_config_file}}\"\nexport 
HBASE_MASTER_OPTS=\"$HBASE_MASTER_OPTS 
-Djava.security.auth.login.config={{master_jaas_config_file}}\"\nexport 
HBASE_REGIONSERVER_OPTS=\"$HBASE_REGIONSERVER_OPTS 
-Djava.security.auth.login.config={{regionserver_jaas_config_file}}\"\n{% endif 
%}\n    ",
    "service_name" : "HBASE",
    "stack_name" : "HDP",
    "stack_version" : "2.1",
    "type" : "hbase-env.xml"
  }
}



CONFIDENTIALITY NOTICE
NOTICE: This message is intended for the use of the individual or entity to 
which it is addressed and may contain information that is confidential, 
privileged and exempt from disclosure under applicable law. If the reader of 
this message is not the intended recipient, you are hereby notified that any 
printing, copying, dissemination, distribution, disclosure or forwarding of 
this communication is strictly prohibited. If you have received this 
communication in error, please contact the sender immediately and delete it 
from your system. Thank You.




