Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/3009#issuecomment-63908091
Alright, I pushed that final cleanup commit. @andrewor14, want to take a
final look at the JsonProtocol backwards-compatibility stuff?
---
If your project is set up f
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3009#issuecomment-63906253
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/23
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3009#issuecomment-63906246
[Test build #23687 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/23687/consoleFull)
for PR 3009 at commit
[`2bbf41a`](https://gith
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/3009#issuecomment-63897855
By the way, there are new Selenium tests for these examples of the new
behavior.
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/3009#issuecomment-63897673
It's not too hard; I'm already done! :smiley:
Here's what a job details page looks like when stages were skipped:
![image](https://cloud.githubuserco
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3009#issuecomment-63897403
[Test build #23687 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/23687/consoleFull)
for PR 3009 at commit
[`2bbf41a`](https://githu
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/3009#discussion_r20685273
--- Diff:
core/src/main/scala/org/apache/spark/ui/jobs/JobProgressListener.scala ---
@@ -214,6 +264,14 @@ class JobProgressListener(conf: SparkConf) extends
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/3009#issuecomment-63896661
I don't mind the current behavior, but IMO skipped is better if it's not
too hard to implement.
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/3009#issuecomment-63883863
Regarding "phantom" stages that are skipped:
What do you think about adding a "skipped" state to visually convey that
there were stage dependencies that _might_
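The "skipped" state Josh proposes could be modeled as one more stage status next to the existing active/completed/failed ones. The sketch below is purely illustrative (these are not the PR's actual types or field names): a stage that a job depends on but that was never submitted, because its shuffle output was already available, is reported as skipped.

```scala
// Hypothetical sketch, not code from the PR: a "skipped" status alongside
// the usual stage states, so the UI can distinguish stages whose output
// was reused from an earlier job.
object StageStatusSketch {
  sealed trait StageStatus
  case object Active    extends StageStatus
  case object Completed extends StageStatus
  case object Failed    extends StageStatus
  case object Skipped   extends StageStatus  // in the DAG, but never ran

  // A stage the job depends on but never submitted is "skipped": its
  // shuffle output was already available from a previous job.
  def statusOf(submitted: Boolean, completed: Boolean, failed: Boolean): StageStatus =
    if (!submitted) Skipped
    else if (failed) Failed
    else if (completed) Completed
    else Active
}
```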
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/3009#discussion_r20677793
--- Diff:
core/src/main/scala/org/apache/spark/ui/jobs/JobProgressListener.scala ---
@@ -144,11 +146,30 @@ class JobProgressListener(conf: SparkConf) exten
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/3009#discussion_r20677746
--- Diff: core/src/main/scala/org/apache/spark/ui/jobs/AllJobsPage.scala ---
@@ -0,0 +1,149 @@
+/*
+ * Licensed to the Apache Software Foundation (A
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/3009#discussion_r20675254
--- Diff:
core/src/main/scala/org/apache/spark/ui/jobs/JobProgressListener.scala ---
@@ -144,11 +146,30 @@ class JobProgressListener(conf: SparkConf) extend
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/3009#discussion_r20675128
--- Diff: core/src/main/scala/org/apache/spark/ui/jobs/StageTable.scala ---
@@ -195,9 +180,10 @@ private[ui] class StageTableBase(
private[ui] cla
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/3009#issuecomment-63873718
@kayousterhout
> Is this still the expected behavior? (this happened from running "val rdd
= sc.parallelize(1 to 10, 2).map((_, 1)).reduceByKey(_+_)" and then co
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/3009#discussion_r20674515
--- Diff: core/src/main/scala/org/apache/spark/ui/jobs/AllJobsPage.scala ---
@@ -0,0 +1,149 @@
+/*
+ * Licensed to the Apache Software Foundation (AS
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/3009#discussion_r20674464
--- Diff: core/src/main/scala/org/apache/spark/ui/jobs/AllJobsPage.scala ---
@@ -0,0 +1,149 @@
+/*
+ * Licensed to the Apache Software Foundation (AS
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/3009#discussion_r20674091
--- Diff:
core/src/main/scala/org/apache/spark/ui/jobs/JobProgressListener.scala ---
@@ -166,6 +188,21 @@ class JobProgressListener(conf: SparkConf) extends
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/3009#discussion_r20673960
--- Diff:
core/src/main/scala/org/apache/spark/ui/jobs/JobProgressListener.scala ---
@@ -144,11 +146,30 @@ class JobProgressListener(conf: SparkConf) extend
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/3009#issuecomment-63862396
Hi Kay,
The behavior you noticed is intentional. If a job completes but doesn't run
all of its stages, it ends up showing as finished with a partially completed
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/3009#discussion_r20669574
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/SparkListener.scala ---
@@ -56,7 +56,11 @@ case class SparkListenerTaskEnd(
extends SparkList
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/3009#discussion_r20669493
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/SparkListener.scala ---
@@ -56,7 +56,11 @@ case class SparkListenerTaskEnd(
extends SparkList
Github user kayousterhout commented on the pull request:
https://github.com/apache/spark/pull/3009#issuecomment-63855330
Also, I thought more about having this in 1.2, and I'm -0.5 on putting this
in 1.2. Given all of the subtleties you ended up running into with this, Josh,
I don't
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/3009#discussion_r20666049
--- Diff: core/src/main/scala/org/apache/spark/ui/jobs/StageTable.scala ---
@@ -195,9 +180,10 @@ private[ui] class StageTableBase(
private[ui] cl
Github user kayousterhout commented on the pull request:
https://github.com/apache/spark/pull/3009#issuecomment-63854023
All superficial comments on the code, but I tried it out and still got this
somewhat icky situation where a completed stage has fewer succeeded tasks than
the total
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/3009#discussion_r20665799
--- Diff: core/src/main/scala/org/apache/spark/ui/jobs/AllJobsPage.scala ---
@@ -0,0 +1,149 @@
+/*
+ * Licensed to the Apache Software Foundation (A
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/3009#discussion_r20665717
--- Diff: core/src/main/scala/org/apache/spark/ui/jobs/AllJobsPage.scala ---
@@ -0,0 +1,149 @@
+/*
+ * Licensed to the Apache Software Foundation (A
Github user kayousterhout commented on a diff in the pull request:
https://github.com/apache/spark/pull/3009#discussion_r20665705
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/SparkListener.scala ---
@@ -56,7 +56,11 @@ case class SparkListenerTaskEnd(
extends Spar
Github user kayousterhout commented on a diff in the pull request:
https://github.com/apache/spark/pull/3009#discussion_r20665632
--- Diff:
core/src/main/scala/org/apache/spark/ui/jobs/JobProgressListener.scala ---
@@ -144,11 +146,30 @@ class JobProgressListener(conf: SparkConf) ex
Github user kayousterhout commented on a diff in the pull request:
https://github.com/apache/spark/pull/3009#discussion_r20665377
--- Diff: core/src/test/scala/org/apache/spark/ui/UISeleniumSuite.scala ---
@@ -17,16 +17,18 @@
package org.apache.spark.ui
-import
Github user kayousterhout commented on a diff in the pull request:
https://github.com/apache/spark/pull/3009#discussion_r20665341
--- Diff:
core/src/main/scala/org/apache/spark/ui/jobs/JobProgressListener.scala ---
@@ -214,6 +264,14 @@ class JobProgressListener(conf: SparkConf) ext
Github user kayousterhout commented on a diff in the pull request:
https://github.com/apache/spark/pull/3009#discussion_r20665208
--- Diff:
core/src/main/scala/org/apache/spark/ui/jobs/JobProgressListener.scala ---
@@ -214,6 +264,14 @@ class JobProgressListener(conf: SparkConf) ext
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/3009#discussion_r20665136
--- Diff: core/src/main/scala/org/apache/spark/ui/jobs/UIData.scala ---
@@ -40,9 +40,22 @@ private[jobs] object UIData {
class JobUIData(
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/3009#discussion_r20665049
--- Diff:
core/src/main/scala/org/apache/spark/ui/jobs/JobProgressListener.scala ---
@@ -166,6 +188,21 @@ class JobProgressListener(conf: SparkConf) extend
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/3009#discussion_r20664971
--- Diff:
core/src/main/scala/org/apache/spark/ui/jobs/JobProgressListener.scala ---
@@ -144,11 +146,30 @@ class JobProgressListener(conf: SparkConf) exten
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/3009#issuecomment-63851287
Hey @JoshRosen I'll take a look at this right now. Is it still WIP by the
way?
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/3009#issuecomment-63844350
I looked through this and took it for a spin locally; LGTM.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3009#issuecomment-63758934
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/23
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3009#issuecomment-63758931
[Test build #23655 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/23655/consoleFull)
for PR 3009 at commit
[`0b77e3e`](https://gith
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/3009#discussion_r20625666
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -269,8 +271,6 @@ class SparkContext(config: SparkConf) extends Logging {
/**
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/3009#issuecomment-63755914
Alright, I think this should be good to review. I addressed the memory
leaks; if you're interested to see whether this works, try deleting any one
cleanup-related line
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3009#issuecomment-63753255
[Test build #23655 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/23655/consoleFull)
for PR 3009 at commit
[`0b77e3e`](https://githu
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/3009#issuecomment-63690828
> There are some DAGScheduler tests that verify that all of the data
structures are empty after jobs run [...] Can we just do something similar for
the UI?
Sur
Github user kayousterhout commented on the pull request:
https://github.com/apache/spark/pull/3009#issuecomment-63688537
There are some DAGScheduler tests that verify that all of the data
structures are empty after jobs run:
https://github.com/apache/spark/blob/master/core/src/
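The emptiness check Kay references could look roughly like the sketch below. It is illustrative only: the map names mirror `JobProgressListener`'s fields, but this helper class is an assumption, not the suite's actual code. The idea is that after all jobs finish, every per-job and per-stage bookkeeping map should be empty, and a test can report which ones are not.

```scala
// Hedged sketch of a "no leaked state" check: after jobs complete, all
// bookkeeping maps in the listener should be empty. Field names mimic
// JobProgressListener's, but this class itself is hypothetical.
import scala.collection.mutable

class ListenerStateSketch {
  val stageIdToData = mutable.HashMap[Int, String]()
  val jobIdToData   = mutable.HashMap[Int, String]()

  // Names of any maps that still hold entries; an empty result means
  // no per-job/per-stage state survived job completion.
  def nonEmptyStructures: Seq[String] =
    Seq(
      "stageIdToData" -> stageIdToData,
      "jobIdToData"   -> jobIdToData
    ).collect { case (name, m) if m.nonEmpty => name }
}
```

A UI test could run a few jobs, then assert `nonEmptyStructures.isEmpty`, giving a readable failure message naming the leaking map.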
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/3009#issuecomment-63604040
Just realized that these phantom stages will lead to memory leaks in the
`stageIdToData` and `stageIdToInfo` maps: we only `remove()` entries from these data
structures as a
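The leak Josh describes arises because entries are only evicted when a stage completes, so a stage that is skipped and never runs is never removed. One hedged fix, sketched below with hypothetical names (this is not Spark's actual cleanup code or config key), is to also trim the map against a retained-stage limit whenever a job finishes, which sweeps up entries for stages that never completed.

```scala
// Illustrative sketch, not the PR's implementation: trim stageIdToData
// against a retention limit so entries for skipped (never-completed)
// stages are still evicted eventually.
import scala.collection.mutable

class StageDataCleanupSketch(retainedStages: Int) {
  val stageIdToData = mutable.HashMap[Int, String]()

  def onStageSeen(stageId: Int): Unit =
    stageIdToData(stageId) = s"data-$stageId"

  // Evict the oldest (lowest-id) entries beyond the retention limit; this
  // also removes entries for stages that were skipped and never completed.
  def trim(): Unit =
    if (stageIdToData.size > retainedStages) {
      val toRemove = stageIdToData.keys.toSeq.sorted.dropRight(retainedStages)
      toRemove.foreach(stageIdToData.remove)
    }
}
```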
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/3009#issuecomment-63596404
@JoshRosen yeah re: "phantom" stage IDs. I also noticed this recently in
relation to something else. It's sort of confusing, but we end up with gaps in
the ID space.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3009#issuecomment-63593498
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/23
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3009#issuecomment-63593492
[Test build #23590 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/23590/consoleFull)
for PR 3009 at commit
[`d69c775`](https://gith
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3009#issuecomment-63588777
[Test build #23590 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/23590/consoleFull)
for PR 3009 at commit
[`d69c775`](https://githu
GitHub user JoshRosen opened a pull request:
https://github.com/apache/spark/pull/3009
[SPARK-4145] [WIP] Web UI job pages
This PR adds two new pages to the Spark Web UI:
- A jobs overview page, which shows details on running / completed / failed
jobs.
- A job details p