Given a blueprint that includes the following:

      "hadoop-env" : {
        "properties" : {
           "HADOOP_NAMENODE_OPTS" :  " -XX:InitialHeapSize=16384m
-XX:MaxHeapSize=16384m -Xmx16384m -XX:MaxPermSize=512m"
        }
      }

The following occurs when creating the cluster:

Error occurred during initialization of VM
Too small initial heap

The logs say:

CommandLine flags: -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log
-XX:InitialHeapSize=1024 -XX:MaxHeapSize=1024 -XX:MaxNewSize=200
-XX:MaxTenuringThreshold=6 -XX:NewSize=200 -XX:OldPLABSize=16
-XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node"
-XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node"
-XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node"
-XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps
-XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCompressedClassPointers
-XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC


Notice that none of the options provided in the blueprint appear anywhere in the
flags the JVM was actually launched with.


It is no wonder the NameNode is low on resources given the MaxHeapSize of only
1 GB, which is totally inadequate for a NameNode.

By the way, this is an HA cluster, and both NameNodes show the same behavior.
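
For what it's worth, my working theory is that the stack's generated hadoop-env.sh
builds HADOOP_NAMENODE_OPTS itself from the heap-sizing properties in hadoop-env
(namenode_heapsize and friends), so a standalone HADOOP_NAMENODE_OPTS property in
the blueprint never makes it into the generated script. If that is right, a
fragment along these lines would be the way to express the same sizing instead.
The property names and unit suffixes below are my assumption based on the HDP 2.x
hadoop-env defaults, so treat this as a sketch rather than a verified fix:

      "hadoop-env" : {
        "properties" : {
          "namenode_heapsize" : "16384m",
          "namenode_opt_maxpermsize" : "512m"
        }
      }

Corrections welcome if someone knows the definitive list of properties the
hadoop-env template actually reads.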
