[jira] [Commented] (SPARK-3682) Add helpful warnings to the UI

2014-11-11 Thread Kay Ousterhout (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-3682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14206891#comment-14206891 ]

Kay Ousterhout commented on SPARK-3682:
---

Some of the metrics you mentioned fall under the additional metrics that are 
hidden by default; as part of this work, it might be nice to automatically show 
a hidden metric when warning the user that its value is problematic.

 Add helpful warnings to the UI
 --

 Key: SPARK-3682
 URL: https://issues.apache.org/jira/browse/SPARK-3682
 Project: Spark
  Issue Type: New Feature
  Components: Web UI
Affects Versions: 1.1.0
Reporter: Sandy Ryza
 Attachments: SPARK-3682Design.pdf


 Spark has a zillion configuration options and a zillion different things that 
 can go wrong with a job.  Improvements like incremental and better metrics 
 and the proposed Spark replay debugger provide more insight into what's going 
 on under the covers.  However, it's difficult for non-advanced users to 
 synthesize this information and understand where to direct their attention. 
 It would be helpful to have some sort of central location in the UI that users 
 could go to for indications about why an app/job is failing or 
 performing poorly.
 Some helpful messages that we could provide:
 * Warn that the tasks in a particular stage are spending a long time in GC.
 * Warn that spark.shuffle.memoryFraction does not fit inside the young 
 generation.
 * Warn that tasks in a particular stage are very short, and that the number 
 of partitions should probably be decreased.
 * Warn that tasks in a particular stage are spilling a lot, and that the 
 number of partitions should probably be increased.
 * Warn that a cached RDD that gets a lot of use does not fit in memory, and a 
 lot of time is being spent recomputing it.
 To start, probably two kinds of warnings would be most helpful.
 * Warnings at the app level that report on misconfigurations and on the 
 general health of executors.
 * Warnings at the job level that indicate why a job might be performing 
 slowly.
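
For concreteness, here is a minimal Scala sketch (Spark 1.x API) of the knobs 
the bullets above point at; the app name, input path, partition count, and 
memoryFraction value are illustrative placeholders, not recommendations:

  import org.apache.spark.{SparkConf, SparkContext}
  import org.apache.spark.storage.StorageLevel

  // Values below are illustrative only.
  val conf = new SparkConf()
    .setAppName("warning-knobs-sketch")              // hypothetical app name
    // Fraction of the executor heap used for shuffle aggregation buffers; one
    // proposed warning would flag when this does not fit in the young gen.
    .set("spark.shuffle.memoryFraction", "0.2")
  val sc = new SparkContext(conf)

  val lines = sc.textFile("hdfs:///path/to/input")   // placeholder path

  // Too few partitions -> each task handles a lot of shuffle data and spills;
  // far too many -> tasks are very short. Both are candidates for warnings.
  val repartitioned = lines.repartition(200)

  // A heavily reused RDD that does not fit in memory is silently recomputed;
  // persisting with a disk fallback avoids the recomputation.
  repartitioned.persist(StorageLevel.MEMORY_AND_DISK)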






[jira] [Commented] (SPARK-3682) Add helpful warnings to the UI

2014-09-25 Thread Arun Ahuja (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-3682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14147730#comment-14147730 ]

Arun Ahuja commented on SPARK-3682:
---

We've been running into a lot of these issues, so this would be very helpful. 
Could you explain this one, though: "Warn that tasks in a particular stage are 
spilling a lot, and that the number of partitions should probably be 
decreased"?

Thanks!







[jira] [Commented] (SPARK-3682) Add helpful warnings to the UI

2014-09-25 Thread Sandy Ryza (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-3682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14148379#comment-14148379 ]

Sandy Ryza commented on SPARK-3682:
---

Oops, that should have read "increased".

When a task fetches more shuffle data from the previous stage than it can fit 
in memory, it needs to spill the extra data to disk.  Increasing the number of 
partitions makes it so that each task will be responsible for dealing with less 
data and will need to spill less.
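
To make that concrete, here is a rough sketch against the Spark 1.x Scala API; 
it assumes an existing SparkContext named sc, and the input path and the sizes 
in the comments are made up for illustration:

  import org.apache.spark.SparkContext._   // pair-RDD operations in Spark 1.x

  // e.g. ~40 GB of map output over 200 reduce tasks is ~200 MB per task and
  // will likely spill; the same 40 GB over 2000 tasks is ~20 MB per task.
  val pairs = sc.textFile("hdfs:///path/to/logs")    // placeholder path
    .map(line => (line.split("\t")(0), 1L))

  // Passing an explicit reduce-side partition count to the shuffle operation
  // (or raising spark.default.parallelism) shrinks the data each task fetches.
  val counts = pairs.reduceByKey(_ + _, 2000)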



