Re: is it possible to disable -XX:OnOutOfMemoryError=kill %p for the executors?

2015-07-08 Thread Konstantinos Kougios
Seems you're correct:

    2015-07-07 17:21:27,245 WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Container [pid=38506,containerID=container_1436262805092_0022_01_03] is running beyond virtual memory limits. Current usage: 4.3 GB of 4.5 GB
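For reference, the limit the NodeManager enforces is roughly the executor heap plus Spark's YARN memory overhead. A minimal sketch of the default overhead arithmetic, assuming the Spark 1.x defaults of a 10% factor with a 384 MB floor (the names here are illustrative, not the actual constants in the source):

```scala
// Sketch of how Spark's YARN mode sizes the executor container request.
// Factor and floor follow the documented defaults for
// spark.yarn.executor.memoryOverhead; names are illustrative.
val memoryOverheadFactor = 0.10
val memoryOverheadMinMB  = 384

def defaultOverheadMB(executorMemoryMB: Int): Int =
  math.max((memoryOverheadFactor * executorMemoryMB).toInt, memoryOverheadMinMB)

val executorMemoryMB = 4096                                 // e.g. --executor-memory 4g
val overheadMB       = defaultOverheadMB(executorMemoryMB)  // 409
val containerMB      = executorMemoryMB + overheadMB        // 4505

// YARN then rounds the request up to its scheduler increment, which is
// consistent with the ~4.5 GB limit in the log line above.
println(s"container request = $containerMB MB (heap $executorMemoryMB + overhead $overheadMB)")
```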

Re: is it possible to disable -XX:OnOutOfMemoryError=kill %p for the executors?

2015-07-07 Thread Marcelo Vanzin
SIGTERM on YARN generally means the NM is killing your executor because it's running over its requested memory limits. Check your NM logs to make sure. And then take a look at the memoryOverhead setting for driver and executors (http://spark.apache.org/docs/latest/running-on-yarn.html).
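Marcelo's suggestion can be applied at submit time. A hedged config sketch — the 1024 MB values and the jar name are arbitrary examples, not recommendations:

```shell
# Request extra off-heap headroom per executor so the NodeManager's
# container limit sits well above the JVM heap. Values are illustrative;
# my-app.jar is a placeholder.
spark-submit \
  --master yarn \
  --executor-memory 4g \
  --conf spark.yarn.executor.memoryOverhead=1024 \
  --conf spark.yarn.driver.memoryOverhead=1024 \
  my-app.jar
```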

Re: is it possible to disable -XX:OnOutOfMemoryError=kill %p for the executors?

2015-07-07 Thread Kostas Kougios
It seems it is hardcoded in ExecutorRunnable.scala:

    val commands = prefixEnv ++ Seq(
        YarnSparkHadoopUtil.expandEnvironment(Environment.JAVA_HOME) + "/bin/java",
        "-server",
        // Kill if OOM is raised - leverage yarn's failure handling to cause rescheduling.
        // Not killing the

is it possible to disable -XX:OnOutOfMemoryError=kill %p for the executors?

2015-07-07 Thread Kostas Kougios
I get a suspicious SIGTERM on the executors that doesn't seem to come from the driver. The other thing that might send a SIGTERM is the -XX:OnOutOfMemoryError=kill %p java argument that the executor starts with. Now my tasks don't seem to run out of memory, so how can I disable this parameter to debug them?
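One way to see when the signal arrives, rather than guessing between the driver, YARN, and the OnOutOfMemoryError hook, is to install a logging handler for SIGTERM in the JVM. A rough debugging sketch using the unsupported but widely available sun.misc.Signal API (the self-delivered signal at the end is for demonstration only):

```scala
import sun.misc.{Signal, SignalHandler}

// Log SIGTERM instead of dying silently, so the executor log records
// when the signal arrived. sun.misc.Signal is JVM-internal API:
// fine for debugging, not for production.
@volatile var termSeen = false

Signal.handle(new Signal("TERM"), new SignalHandler {
  override def handle(sig: Signal): Unit = {
    termSeen = true
    System.err.println(s"Received SIG${sig.getName} at ${System.currentTimeMillis}")
    // A real handler would exit explicitly after logging, e.g. sys.exit(143);
    // omitted here so the sketch runs to completion.
  }
})

// Demonstration only: deliver SIGTERM to our own process.
Signal.raise(new Signal("TERM"))
Thread.sleep(200)  // the handler runs on a separate JVM thread
println(s"termSeen = $termSeen")
```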

Re: is it possible to disable -XX:OnOutOfMemoryError=kill %p for the executors?

2015-07-07 Thread Kostas Kougios
I've recompiled Spark, deleting the -XX:OnOutOfMemoryError=kill declaration, but I am still getting a SIGTERM!
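Since the SIGTERM survives removing the JVM flag, the NodeManager is the remaining suspect, which the later reply in this thread confirms. A quick check is to grep the NM logs for the container-kill message; a sketch, assuming a typical log location (set NM_LOG_DIR for your distribution):

```shell
# Search NodeManager logs for containers killed over their memory limits.
# /var/log/hadoop-yarn is an assumption; point NM_LOG_DIR at your actual log dir.
NM_LOG_DIR=${NM_LOG_DIR:-/var/log/hadoop-yarn}
grep -Rh "running beyond virtual memory limits" "$NM_LOG_DIR" 2>/dev/null | tail -n 5
```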