Seems you're correct:

2015-07-07 17:21:27,245 WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Container [pid=38506,containerID=container_1436262805092_0022_01_000003] is running beyond virtual memory limits. Current usage: 4.3 GB of 4.5 GB physical memory used; 9.5 GB of 9.4 GB virtual memory used. Killing container.
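As a side note, the 9.4 GB cap in that log line is consistent with YARN's virtual-memory check: the NodeManager multiplies the container's physical allocation by yarn.nodemanager.vmem-pmem-ratio, whose default is 2.1 (the default is an assumption here; your cluster may override it). A quick sanity check:

```python
# Sanity-check the numbers from the NM log: YARN caps virtual memory at
# (container physical memory) * yarn.nodemanager.vmem-pmem-ratio.
# The 2.1 ratio is YARN's default, assumed here rather than read from the log.
container_mem_gb = 4.5   # physical memory granted to the container
vmem_pmem_ratio = 2.1    # yarn.nodemanager.vmem-pmem-ratio (YARN default)

vmem_limit_gb = container_mem_gb * vmem_pmem_ratio
print(vmem_limit_gb)  # ~9.45 GB, reported as "9.4 GB" in the log
```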



On 07/07/15 18:28, Marcelo Vanzin wrote:
SIGTERM on YARN generally means the NM is killing your executor because it's running over its requested memory limits. Check your NM logs to make sure. And then take a look at the "memoryOverhead" setting for driver and executors (http://spark.apache.org/docs/latest/running-on-yarn.html).
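For what it's worth, a sketch of how that overhead can be raised at submit time. In Spark 1.x these YARN-only properties take a value in megabytes; the 1024 figure and the jar name below are illustrative placeholders, not recommendations:

```
# Hypothetical values -- pick an overhead that fits your workload.
spark-submit \
  --master yarn \
  --executor-memory 4g \
  --conf spark.yarn.executor.memoryOverhead=1024 \
  --conf spark.yarn.driver.memoryOverhead=1024 \
  your-app.jar
```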

On Tue, Jul 7, 2015 at 7:43 AM, Kostas Kougios <kostas.koug...@googlemail.com> wrote:

    I've recompiled Spark, deleting the -XX:OnOutOfMemoryError=kill declaration, but I am still getting a SIGTERM!



    --
    View this message in context:
    
http://apache-spark-user-list.1001560.n3.nabble.com/is-it-possible-to-disable-XX-OnOutOfMemoryError-kill-p-for-the-executors-tp23680p23687.html
    Sent from the Apache Spark User List mailing list archive at
    Nabble.com.





--
Marcelo
