[ 
https://issues.apache.org/jira/browse/SPARK-9111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14630648#comment-14630648
 ] 

Zhang, Liye commented on SPARK-9111:
------------------------------------

Hi [~srowen], the memory dump mentioned here is about Spark's memory usage, 
which relates to the umbrella issue 
[SPARK-9103|https://issues.apache.org/jira/browse/SPARK-9103], not a heap 
dump, since we want to know the memory status of each Spark component. It's 
not easy to find out how much memory a specific Spark component uses directly 
from a heap dump, right?

> Dumping the memory info when an executor dies abnormally
> --------------------------------------------------------
>
>                 Key: SPARK-9111
>                 URL: https://issues.apache.org/jira/browse/SPARK-9111
>             Project: Spark
>          Issue Type: Sub-task
>          Components: Spark Core
>            Reporter: Zhang, Liye
>            Priority: Minor
>
> When an executor does not finish normally, we should dump its memory info 
> right before the JVM shuts down, so that if the executor is killed because 
> of an OOM, we can easily check how the memory was used and which part 
> caused the OOM.
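As a rough illustration of the idea in the issue description, the snippet below is a minimal, hypothetical sketch (not Spark's actual implementation): it registers a JVM shutdown hook that logs heap and non-heap usage from the standard `MemoryMXBean` just before the process exits. The class name `MemoryDumpOnExit` and the logging format are invented for this example.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class MemoryDumpOnExit {

    // Hypothetical sketch: register a shutdown hook so that an executor
    // leaves a last memory snapshot in its log right before the JVM exits.
    public static void installHook() {
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
            MemoryUsage heap = mem.getHeapMemoryUsage();
            MemoryUsage nonHeap = mem.getNonHeapMemoryUsage();
            // Print to stderr so the snapshot ends up in the executor log.
            System.err.println("Heap used/committed/max: "
                    + heap.getUsed() + "/" + heap.getCommitted() + "/" + heap.getMax());
            System.err.println("Non-heap used/committed: "
                    + nonHeap.getUsed() + "/" + nonHeap.getCommitted());
        }, "memory-dump-hook"));
    }

    public static void main(String[] args) {
        installHook();
        // Normal executor work would run here; the hook fires on JVM exit.
    }
}
```

Note that a hook like this only fires on orderly JVM shutdown (including an in-JVM `OutOfMemoryError` followed by `System.exit`); it cannot run if the OS kills the process outright, which is one reason per-component accounting as discussed in SPARK-9103 is more informative than a last-gasp dump.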



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
