Hi,

I need to update the Hadoop and HBase *-env.sh files programmatically to 
change, for example, the HADOOP_NAMENODE_OPTS environment variable. I do the 
following:

1.      Check the content of /etc/hadoop/conf/hadoop-env.sh.

cat hadoop-env.sh | grep HADOOP_NAMENODE_OPTS
export HADOOP_NAMENODE_OPTS="-server -XX:ParallelGCThreads=8 
-XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/$USER/hs_err_pid%p.log 
-XX:NewSize=200m -XX:MaxNewSize=640m -Xloggc:/var/log/hadoop/$USER/gc.log-`date 
+'%Y%m%d%H%M'` -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps 
-XX:+PrintGCDateStamps -Xms1024m -Xmx1024m -Dhadoop.security.logger=INFO,DRFAS 
-Dhdfs.audit.logger=INFO,DRFAAUDIT ${HADOOP_NAMENODE_OPTS}"

2.      Stop MapReduce and HDFS using the Ambari web client (I haven't figured 
out the REST calls yet).
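For what it's worth, a hedged sketch of the stop calls I believe the GUI issues: services are stopped by PUTting a desired state of INSTALLED (and restarted with STARTED). The host and cluster names are placeholders, and the command is echoed rather than executed; drop the leading echo to actually send it.

```shell
# Placeholders; adjust to your environment.
AMBARI="http://<ambari-server>:8080"
CLUSTER="mycluster"

stop_service() {
  # Setting the desired state to INSTALLED asks Ambari to stop the
  # service; setting it back to STARTED starts it again.
  local service="$1"
  echo curl -u admin:admin -i -X PUT \
    -d "{\"ServiceInfo\": {\"state\": \"INSTALLED\"}}" \
    "$AMBARI/api/v1/clusters/$CLUSTER/services/$service"
}

stop_service MAPREDUCE
stop_service HDFS
```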

3.      Issue the REST API call, which generates an error.

curl -u admin:admin -i -X PUT -d '{"Clusters": {"desired_config": {"type": 
"hadoop-env.sh", "tag": "seapilot-install", "properties" : { 
"HADOOP_NAMENODE_OPTS" : "-Dcom.sun.management.jmxremote 
-Dcom.sun.management.jmxremote.ssl=false 
-Dcom.sun.management.jmxremote.authenticate=false 
-Dcom.sun.management.jmxremote.port=58004 $HADOOP_NAMENODE_OPTS"}}}}' 
http://<ambari-server>:8080/api/v1/clusters/mycluster
HTTP/1.1 500 Server Error
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Content-Type: text/plain
Server: Jetty(7.6.7.v20120910)
Cache-Control: proxy-revalidate
Content-Length: 188
Proxy-Connection: Keep-Alive
Connection: Keep-Alive
Set-Cookie: AMBARISESSIONID=1i1srtsmvh76f1688i8zcl5995;Path=/
Date: Wed, 28 Aug 2013 17:25:17 GMT

{
  "status" : 500,
  "message" : "org.apache.ambari.server.controller.spi.SystemException: An 
internal system exception occurred: Configuration with that tag exists for 
'hadoop-env.sh'"
}
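Reading the error message, the 500 appears to come from reusing the tag: each desired_config version seems to need a tag that has not been used before for that type, and "seapilot-install" already exists. A sketch of the same PUT with a timestamp-based tag (an assumption on my part, not a verified fix; the command is echoed rather than executed, and <ambari-server> is left as a placeholder):

```shell
# Each configuration version is identified by its tag, so re-sending
# "seapilot-install" is rejected; a timestamp makes a cheap unique tag.
TAG="version$(date +%s)"

# Drop the leading 'echo' to actually send the request.
echo curl -u admin:admin -i -X PUT -d "{\"Clusters\": {\"desired_config\": {
  \"type\": \"hadoop-env.sh\", \"tag\": \"$TAG\", \"properties\": {
  \"HADOOP_NAMENODE_OPTS\": \"-Dcom.sun.management.jmxremote
  -Dcom.sun.management.jmxremote.ssl=false
  -Dcom.sun.management.jmxremote.authenticate=false
  -Dcom.sun.management.jmxremote.port=58004 \$HADOOP_NAMENODE_OPTS\"}}}}" \
  "http://<ambari-server>:8080/api/v1/clusters/mycluster"
```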

4.      Check the content of /etc/hadoop/conf/hadoop-env.sh again; it is unchanged.

cat hadoop-env.sh | grep HADOOP_NAMENODE_OPTS
export HADOOP_NAMENODE_OPTS="-server -XX:ParallelGCThreads=8 
-XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/$USER/hs_err_pid%p.log 
-XX:NewSize=200m -XX:MaxNewSize=640m -Xloggc:/var/log/hadoop/$USER/gc.log-`date 
+'%Y%m%d%H%M'` -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps 
-XX:+PrintGCDateStamps -Xms1024m -Xmx1024m -Dhadoop.security.logger=INFO,DRFAS 
-Dhdfs.audit.logger=INFO,DRFAAUDIT ${HADOOP_NAMENODE_OPTS}"


On a regular Hadoop installation (that is, one downloaded from the Apache 
website), I simply append a new line to hadoop-env.sh, thereby redefining the 
environment variable; for example:

export HADOOP_NAMENODE_OPTS="-Dcom.sun.management.jmxremote
   -Dcom.sun.management.jmxremote.ssl=false
   -Dcom.sun.management.jmxremote.authenticate=false
   -Dcom.sun.management.jmxremote.port=58004
    $HADOOP_NAMENODE_OPTS"
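The appended line works because the second export expands the earlier value, so the original flags are preserved. A minimal demonstration of the pattern, with a shortened placeholder variable name in place of HADOOP_NAMENODE_OPTS:

```shell
# OPTS stands in for HADOOP_NAMENODE_OPTS; the second assignment prepends
# the JMX flag while ${OPTS} keeps whatever was set before.
export OPTS="-Xms1024m -Xmx1024m"
export OPTS="-Dcom.sun.management.jmxremote ${OPTS}"
echo "$OPTS"
# prints: -Dcom.sun.management.jmxremote -Xms1024m -Xmx1024m
```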

I know that I can make the changes directly to the *-env.sh files and then 
start/stop the services without using the Ambari GUI/REST calls, but that 
defeats the purpose.

Can this configuration change be done in Ambari?

Sincerely,

Gunnar

I skate to where the puck is going to be, not where it has been.  - Wayne Gretzky