It appears to be hardcoded in ExecutorRunnable.scala:

val commands = prefixEnv ++ Seq(
      YarnSparkHadoopUtil.expandEnvironment(Environment.JAVA_HOME) + "/bin/java",
      "-server",
      // Kill if OOM is raised - leverage yarn's failure handling to cause rescheduling.
      // Not killing the task leaves various aspects of the executor and (to some extent) the jvm in
      // an inconsistent state.
      // TODO: If the OOM is not recoverable by rescheduling it on different node, then do
      // 'something' to fail job ... akin to blacklisting trackers in mapred ?
      "-XX:OnOutOfMemoryError='kill %p'") ++



