[
https://issues.apache.org/jira/browse/AMBARI-11162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14545427#comment-14545427
]
Hudson commented on AMBARI-11162:
---------------------------------
FAILURE: Integrated in Ambari-trunk-Commit #2606 (See
[https://builds.apache.org/job/Ambari-trunk-Commit/2606/])
AMBARI-11162. Ambari should configure NameNode to terminate on
OutOfMemoryError. (aonishuk) (aonishuk:
http://git-wip-us.apache.org/repos/asf?p=ambari.git&a=commit&h=c36ec90919918aeeeb5217f8280d40cc36ae0082)
* ambari-server/src/main/resources/stacks/HDP/2.3/services/HDFS/configuration/hadoop-env.xml
> Ambari should configure NameNode to terminate on OutOfMemoryError.
> ------------------------------------------------------------------
>
> Key: AMBARI-11162
> URL: https://issues.apache.org/jira/browse/AMBARI-11162
> Project: Ambari
> Issue Type: Bug
> Reporter: Andrew Onischuk
> Assignee: Andrew Onischuk
> Fix For: 2.1.0
>
>
> It is dangerous for the NameNode to keep running after it encounters an
> `OutOfMemoryError`. There is legacy code in the RPC handling layer that
> catches `OutOfMemoryError`, and reverting that code is not feasible.
> Instead, the process can be killed externally by setting a JVM option.
> We would like Ambari to configure the NameNode to run with this option:
>
>     -XX:OnOutOfMemoryError="kill -9 %p"
>
> This can be set in `HADOOP_NAMENODE_OPTS` in hadoop-env.sh.
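A minimal sketch of how the option could be appended in hadoop-env.sh. The
actual contents of `HADOOP_NAMENODE_OPTS` vary by installation and Ambari
stack version; this fragment only illustrates the quoting needed so that
`kill -9 %p` (which contains spaces) survives as a single option value:

```shell
# hadoop-env.sh fragment (sketch): append the kill-on-OOM option to the
# existing NameNode JVM options. The inner double quotes are escaped so
# that the command "kill -9 %p" is passed to the JVM as one value;
# %p is expanded by the JVM to the PID of the dying process.
export HADOOP_NAMENODE_OPTS="${HADOOP_NAMENODE_OPTS} -XX:OnOutOfMemoryError=\"kill -9 %p\""
```

With this in place, the HotSpot JVM runs the given command when an
`OutOfMemoryError` is first thrown, terminating the NameNode instead of
letting it limp along in an inconsistent state.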
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)