[ https://issues.apache.org/jira/browse/SPARK-11801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15037869#comment-15037869 ]
Mridul Muralidharan commented on SPARK-11801:
---------------------------------------------

I have a preference for (a), because I am not sure we can pull off (b) given the number of corner cases :-) But if it can be done decently, sure, why not. What I would not like is inconsistent behavior, where sometimes one behavior is exhibited and other times another, because of the inherent instability at OOM and VM exit. Right now, I know that I will see a task failure at the driver, and I then investigate its cause at the executor: whether it is a JNI crash, an OOM, memory growth that led to YARN killing the executor, etc.

> Notify driver when OOM is thrown before executor JVM is killed
> ---------------------------------------------------------------
>
>                 Key: SPARK-11801
>                 URL: https://issues.apache.org/jira/browse/SPARK-11801
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 1.5.1
>            Reporter: Srinivasa Reddy Vundela
>            Priority: Minor
>
> Here is some background for the issue.
> A customer got an OOM exception in one of the tasks, and the executor was killed with kill %p. It is unclear from the driver logs and the Spark UI why the task or the executor was lost; the customer has to look into the executor logs to see that OOM was the cause.
> It would be helpful if the driver logs and the Spark UI showed the reason for such task failures, by making sure the task updates the driver with the OOM.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
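For context on the "killed with kill %p" behavior mentioned in the issue: one common way this arises is the HotSpot `-XX:OnOutOfMemoryError` flag, which runs an external command when the JVM first throws an OutOfMemoryError, passed to executors via Spark's JVM options. A hedged sketch (the exact flags a given deployment uses are an assumption, not something stated in this issue):

```
# spark-defaults.conf (illustrative):
# run "kill -9 <pid>" as soon as the executor JVM throws an OOM,
# so the JVM dies immediately instead of limping along.
spark.executor.extraJavaOptions  -XX:OnOutOfMemoryError='kill -9 %p'
```

Because the process is killed right away, the executor never gets a chance to report the failure reason back, which is exactly why the driver side only sees a lost task/executor.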
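As a minimal sketch of what "notify the driver before the executor JVM is killed" could look like at the JVM level (this is not Spark's actual code; `notifyDriver` is a hypothetical placeholder for an RPC to the driver, and the OOM is simulated by constructing the error directly): an uncaught-exception handler on the task thread gets one last chance to record the failure reason. As the comment notes, this is best-effort only, since at a real OOM even the handler's own allocations can fail.

```java
// Sketch only: report an OutOfMemoryError before the JVM goes down.
// notifyDriver is a hypothetical stand-in for sending the task-failure
// reason to the driver; a real executor would do this over RPC.
public class OomReporter {
    static volatile String reportedFailure = null;

    // Hypothetical placeholder: in Spark this would be a message to the driver.
    static void notifyDriver(Throwable t) {
        reportedFailure = t.getClass().getName() + ": " + t.getMessage();
    }

    public static void main(String[] args) throws InterruptedException {
        Thread task = new Thread(() -> {
            // Simulated OOM; a real task would hit this on heap exhaustion.
            throw new OutOfMemoryError("simulated heap exhaustion");
        });
        // The handler runs as the thread dies, giving a last (best-effort)
        // chance to tell the driver why the task failed. At a genuine OOM
        // this handler may itself fail to allocate, hence the instability
        // the comment above warns about.
        task.setUncaughtExceptionHandler((t, e) -> {
            if (e instanceof OutOfMemoryError) {
                notifyDriver(e);
            }
        });
        task.start();
        task.join();
        System.out.println(reportedFailure);
    }
}
```

Option (a) in the comment (keep the current fail-fast kill) avoids relying on this fragile window; option (b) would try to make a report like this reliable across the corner cases.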