Re: is it possible to disable -XX:OnOutOfMemoryError=kill %p for the executors?

2015-07-08 Thread Konstantinos Kougios

seems you're correct:

2015-07-07 17:21:27,245 WARN
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
Container [pid=38506,containerID=container_1436262805092_0022_01_03]
is running beyond virtual memory limits. Current usage: 4.3 GB of 4.5 GB
physical memory used; 9.5 GB of 9.4 GB virtual memory used. Killing container.
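
The numbers point at YARN's virtual memory check rather than the JVM's OOM
handler: with the default yarn.nodemanager.vmem-pmem-ratio of 2.1, a 4.5 GB
container gets a 4.5 GB x 2.1 ≈ 9.4 GB virtual memory ceiling, which is exactly
the limit in the log. For cases where physical usage is fine and only the vmem
check trips, here is a sketch of the yarn-site.xml settings that relax it (the
property names are standard YARN; the 3.0 value is only an illustrative choice,
and the NodeManagers need a restart after the change):

<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>

<!-- or keep the check but raise the ratio from its 2.1 default -->
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>3.0</value>
</property>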




On 07/07/15 18:28, Marcelo Vanzin wrote:
SIGTERM on YARN generally means the NM is killing your executor 
because it's running over its requested memory limits. Check your NM 
logs to make sure. And then take a look at the memoryOverhead 
setting for driver and executors 
(http://spark.apache.org/docs/latest/running-on-yarn.html).


On Tue, Jul 7, 2015 at 7:43 AM, Kostas Kougios
kostas.koug...@googlemail.com wrote:


I've recompiled Spark after deleting the -XX:OnOutOfMemoryError=kill
declaration, but I am still getting a SIGTERM!







--
Marcelo




Re: is it possible to disable -XX:OnOutOfMemoryError=kill %p for the executors?

2015-07-07 Thread Marcelo Vanzin
SIGTERM on YARN generally means the NM is killing your executor because
it's running over its requested memory limits. Check your NM logs to make
sure. And then take a look at the memoryOverhead setting for driver and
executors (http://spark.apache.org/docs/latest/running-on-yarn.html).
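
A minimal sketch of bumping those overheads at submit time (the values are
illustrative starting points, not recommendations; in Spark 1.x the properties
are in MB and default to 10% of the heap with a 384 MB floor, and the YARN
container is sized as heap plus overhead):

spark-submit \
  --master yarn-cluster \
  --executor-memory 4g \
  --conf spark.yarn.executor.memoryOverhead=1024 \
  --conf spark.yarn.driver.memoryOverhead=1024 \
  ...

That arithmetic is presumably what shows up in the NM log at the top of this
thread: the 4.5 GB physical limit is the 4g heap plus the (default) overhead,
and going past it gets the container killed from outside the JVM.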

On Tue, Jul 7, 2015 at 7:43 AM, Kostas Kougios 
kostas.koug...@googlemail.com wrote:

 I've recompiled Spark after deleting the -XX:OnOutOfMemoryError=kill declaration,
 but I am still getting a SIGTERM!







-- 
Marcelo


Re: is it possible to disable -XX:OnOutOfMemoryError=kill %p for the executors?

2015-07-07 Thread Kostas Kougios
It seems to be hardcoded in ExecutorRunnable.scala:

val commands = prefixEnv ++ Seq(
  YarnSparkHadoopUtil.expandEnvironment(Environment.JAVA_HOME) + "/bin/java",
  "-server",
  // Kill if OOM is raised - leverage yarn's failure handling to cause
  // rescheduling.
  // Not killing the task leaves various aspects of the executor and (to
  // some extent) the jvm in an inconsistent state.
  // TODO: If the OOM is not recoverable by rescheduling it on different
  // node, then do 'something' to fail job ... akin to blacklisting trackers
  // in mapred?
  "-XX:OnOutOfMemoryError='kill %p'") ++
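
Note that because the flag is baked into the container launch command itself,
it cannot be overridden through spark.executor.extraJavaOptions: HotSpot
accepts -XX:OnOutOfMemoryError multiple times and runs every registered
command, so appending options only adds handlers. One way to confirm what a
given executor was actually launched with is to read the script YARN generates
on the NodeManager (the path below is a guess; it depends on
yarn.nodemanager.local-dirs and the submitting user):

grep OnOutOfMemoryError \
  /yarn/nm/usercache/*/appcache/application_*/container_*/launch_container.sh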






Re: is it possible to disable -XX:OnOutOfMemoryError=kill %p for the executors?

2015-07-07 Thread Kostas Kougios
I've recompiled Spark after deleting the -XX:OnOutOfMemoryError=kill declaration,
but I am still getting a SIGTERM!
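
If the handler is gone from the build and the executor still dies with SIGTERM,
the kill is presumably coming from outside the JVM, i.e. the NodeManager's
container monitor, which is what the log excerpt at the top of this thread
later confirmed. A quick check against the NM log (the path assumes a typical
Hadoop layout; adjust it for your log settings):

grep -B1 -A2 'running beyond' /var/log/hadoop-yarn/yarn-*-nodemanager-*.log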


