[jira] [Commented] (SPARK-9111) Dumping the memory info when an executor dies abnormally

2016-01-03 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-9111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15080452#comment-15080452
 ] 

Steve Loughran commented on SPARK-9111:
---

the heap dump option could itself be useful; within a YARN container the 
launch command could be set to something like {{-XX:HeapDumpPath=<>}}; 
the heap dump would then be grabbed automatically by the YARN NodeManager and 
copied to HDFS, where it would eventually be cleaned up by the normal YARN 
history cleanup routines.
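
For context, that could be wired up today through the executor JVM options. The fragment below is a sketch, not tested against any particular Spark/YARN version; the file name is illustrative, and {{<LOG_DIR>}} is the placeholder YARN expands to the container's log directory, which log aggregation then copies to HDFS:

```properties
# Hypothetical spark-defaults.conf fragment: have HotSpot write a heap dump
# on OOM into the YARN container log directory, so log aggregation ships it
# to HDFS and normal log retention eventually deletes it.
spark.executor.extraJavaOptions=-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=<LOG_DIR>/executor-oom.hprof
```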

> Dumping the memory info when an executor dies abnormally
> 
>
> Key: SPARK-9111
> URL: https://issues.apache.org/jira/browse/SPARK-9111
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core
>Reporter: Zhang, Liye
>Priority: Minor
>
> When an executor does not finish normally, we should dump its memory 
> info right before the JVM shuts down, so that if the executor is killed 
> because of OOM we can easily check how the memory was used and which part 
> caused the OOM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-9111) Dumping the memory info when an executor dies abnormally

2015-07-16 Thread Zhang, Liye (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-9111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14630648#comment-14630648
 ] 

Zhang, Liye commented on SPARK-9111:


Hi [~srowen], the memory dump mentioned here is about Spark's own memory 
usage, which relates to the umbrella 
[SPARK-9103|https://issues.apache.org/jira/browse/SPARK-9103], not a 
heap dump, since we want to know the memory status of each Spark 
component. It's not easy to tell how much memory a specific Spark 
component uses directly from a heap dump, right?
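
As a rough illustration of the idea in the issue description (not Spark's implementation; the class name and log format below are made up), a JVM shutdown hook can at least print the coarse JMX heap/non-heap numbers on the way out. A per-Spark-component breakdown would need Spark's own memory accounting from SPARK-9103:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

// Illustrative only: dump coarse JVM memory numbers when the process exits.
public class MemoryDumpHook {

    // Render one memory region as a single log line.
    public static String formatUsage(String label, MemoryUsage u) {
        return String.format("%s: used=%d committed=%d max=%d",
                label, u.getUsed(), u.getCommitted(), u.getMax());
    }

    // Register a shutdown hook that logs heap and non-heap usage to stderr.
    public static void install() {
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
            System.err.println(formatUsage("heap", mem.getHeapMemoryUsage()));
            System.err.println(formatUsage("non-heap", mem.getNonHeapMemoryUsage()));
        }));
    }
}
```

Caveat: shutdown hooks do not run if the container is SIGKILLed, so this only covers exits the JVM itself performs.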



