hadoop-env has no property HADOOP_NAMENODE_OPTS; you should use the dedicated heap properties instead, e.g. namenode_opt_maxnewsize for specifying -XX:MaxNewSize:

      "hadoop-env" : {
        "properties" : {
           "namenode_opt_maxnewsize" :  "16384m"
        }
      }
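
If the goal is a 16GB overall heap, a fuller sketch might look like the following (assuming the standard Ambari hadoop-env property names, where namenode_heapsize maps to -Xms/-Xmx and namenode_opt_maxpermsize to -XX:MaxPermSize; the values here are only illustrative):

      "hadoop-env" : {
        "properties" : {
           "namenode_heapsize" : "16384m",
           "namenode_opt_newsize" : "2048m",
           "namenode_opt_maxnewsize" : "2048m",
           "namenode_opt_maxpermsize" : "512m"
        }
      }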


You may also want to check all available options in 
/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/configuration/hadoop-env.xml.
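
For reference, the heap-related properties typically defined there include the following (names from the stock HDP 2.x hadoop-env definition; the exact set may vary by stack version):

      namenode_heapsize         ->  -Xms / -Xmx for the NameNode
      namenode_opt_newsize      ->  -XX:NewSize
      namenode_opt_maxnewsize   ->  -XX:MaxNewSize
      namenode_opt_permsize     ->  -XX:PermSize
      namenode_opt_maxpermsize  ->  -XX:MaxPermSize
      dtnode_heapsize           ->  -Xmx for the DataNode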


________________________________
From: Stephen Boesch <[email protected]>
Sent: Tuesday, October 13, 2015 9:41 AM
To: [email protected]
Subject: Unable to set the namenode options using blueprints

Given a blueprint that includes the following:

      "hadoop-env" : {
        "properties" : {
           "HADOOP_NAMENODE_OPTS" :  " -XX:InitialHeapSize=16384m 
-XX:MaxHeapSize=16384m -Xmx16384m -XX:MaxPermSize=512m"
        }
      }

The following occurs when creating the cluster:

Error occurred during initialization of VM
Too small initial heap

The logs say:

CommandLine flags: -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log 
-XX:InitialHeapSize=1024 -XX:MaxHeapSize=1024 -XX:MaxNewSize=200 
-XX:MaxTenuringThreshold=6 -XX:NewSize=200 -XX:OldPLABSize=16 
-XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" 
-XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" 
-XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" 
-XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps 
-XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCompressedClassPointers 
-XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC

Notice that none of the options provided in the blueprint appear anywhere in the 
actual JVM launch flags.


It is no wonder the namenode is low on resources given the MaxHeapSize of only 
1GB, which is totally inadequate for a namenode.

BTW this is HA, and both of the namenodes show the same behavior.


