[ https://issues.apache.org/jira/browse/SPARK-11799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Or updated SPARK-11799:
------------------------------
    Assignee: Srinivasa Reddy Vundela

> Make it explicit in executor logs that uncaught exceptions are thrown during 
> executor shutdown
> ----------------------------------------------------------------------------------------------
>
>                 Key: SPARK-11799
>                 URL: https://issues.apache.org/jira/browse/SPARK-11799
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 1.5.1
>            Reporter: Srinivasa Reddy Vundela
>            Assignee: Srinivasa Reddy Vundela
>            Priority: Minor
>
> Here is some background for the issue.
> The customer got an OOM exception in one of the tasks, and the executor was killed with kill %p. A few shutdown hooks are registered with ShutdownHookManager to clean up the Hadoop temp directories. During this shutdown phase, other still-running tasks throw uncaught exceptions, and the executor logs fill up with them.
> Since the driver logs and the Spark UI do not make it clear why the container was lost, the customer goes through the executor logs and sees a large number of uncaught exceptions.
> It would be much clearer if we prepended these uncaught exceptions with a message such as [Container is in shutdown mode], so that the customer can skip over them.
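> A minimal sketch of the proposed behavior (hypothetical, not the actual Spark patch; ShutdownAwareExceptionHandler and the Demo object are made-up names for illustration): a default uncaught-exception handler checks a shutdown flag set by a JVM shutdown hook and prepends the marker.
> {code:scala}
> import java.util.concurrent.atomic.AtomicBoolean
>
> // Hypothetical sketch: a shutdown flag flipped by a JVM shutdown hook, and a
> // default uncaught-exception handler that prepends a marker once the
> // executor is shutting down (e.g. after kill %p).
> object ShutdownAwareExceptionHandler extends Thread.UncaughtExceptionHandler {
>   private val inShutdown = new AtomicBoolean(false)
>
>   // Record that shutdown has begun so later uncaught exceptions can be tagged.
>   Runtime.getRuntime.addShutdownHook(new Thread(new Runnable {
>     override def run(): Unit = inShutdown.set(true)
>   }))
>
>   override def uncaughtException(thread: Thread, throwable: Throwable): Unit = {
>     val prefix = if (inShutdown.get()) "[Container is in shutdown mode] " else ""
>     // In Spark this would go through the executor's logger rather than stderr.
>     System.err.println(s"${prefix}Uncaught exception in thread ${thread.getName}")
>     throwable.printStackTrace(System.err)
>   }
> }
>
> object Demo {
>   def main(args: Array[String]): Unit = {
>     Thread.setDefaultUncaughtExceptionHandler(ShutdownAwareExceptionHandler)
>     // Any thread that dies with an uncaught exception now gets the prefix
>     // if the JVM is already in its shutdown phase.
>     new Thread(new Runnable {
>       override def run(): Unit = throw new RuntimeException("boom")
>     }).start()
>     Thread.sleep(500)
>   }
> }
> {code}
> With something like this in place, exceptions thrown by tasks during shutdown are easy to skip when scanning the executor logs.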



