Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/3849#issuecomment-68453172
Hmm, looks like this change somehow broke a PySpark Streaming test:
```
==
FAIL: Basi
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3849#issuecomment-68437206
[Test build #24955 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24955/consoleFull)
for PR 3849 at commit
[`0b10526`](https://gith
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3849#issuecomment-68437209
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/24
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3849#issuecomment-68432871
[Test build #24955 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24955/consoleFull)
for PR 3849 at commit
[`0b10526`](https://githu
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3849#issuecomment-68429433
[Test build #24945 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24945/consoleFull)
for PR 3849 at commit
[`8c387ce`](https://gith
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3849#issuecomment-68429438
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/24
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3849#issuecomment-68426611
[Test build #24945 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24945/consoleFull)
for PR 3849 at commit
[`8c387ce`](https://githu
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/3849#discussion_r22375124
--- Diff:
core/src/test/scala/org/apache/spark/scheduler/TaskContextSuite.scala ---
@@ -63,6 +63,33 @@ class TaskContextSuite extends FunSuite with
BeforeA
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3849#issuecomment-68421619
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/24
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3849#issuecomment-68421615
[Test build #24923 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24923/consoleFull)
for PR 3849 at commit
[`cbe4d76`](https://gith
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3849#issuecomment-68418485
[Test build #24923 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24923/consoleFull)
for PR 3849 at commit
[`cbe4d76`](https://githu
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/3849#discussion_r22372823
--- Diff: core/src/main/scala/org/apache/spark/executor/Executor.scala ---
@@ -180,7 +189,7 @@ private[spark] class Executor(
// Run the a
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/3849#issuecomment-68412414
Hmm, looks like this failed MiMa checks due to the addition of a new method
to a public interface:
```
[info] spark-core: found 1 potential binary incompati
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3849#issuecomment-68410774
[Test build #24908 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24908/consoleFull)
for PR 3849 at commit
[`9d8d4d1`](https://gith
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3849#issuecomment-68410782
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/24
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/3849#issuecomment-68408407
Should we at least leave a documentation note to inform users about the
difference in behavior? I'm worried that someone will look at the 1.2 docs,
write some code whi
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/3849#issuecomment-68408128
Hm - we probably just shouldn't backport it; again, I think users might be
depending on the old behavior, and putting it in a patch release is a bit iffy.
---
If your proj
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/3849#issuecomment-68408000
One potential consideration is backporting: if we agree that `attemptId`'s
meaning should be changed to reflect the 1.1.1 documentation, then we should
make the same fix
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/3849#issuecomment-68407820
Annoyingly, it looks like ScalaDoc doesn't display Javadoc annotations, so
in the Scala documentation for 1.2.0 the TaskContext class appears to have lost
all documenta
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/3849#issuecomment-68407800
Ah I see - I didn't see the doc. I'm more on the fence in this case
(because there was a doc that created a specification). So I guess I'm fine
either way.
Github user koeninger commented on the pull request:
https://github.com/apache/spark/pull/3849#issuecomment-68407388
The flip side is that it's already documented as doing the "right" thing:
http://spark.apache.org/docs/1.1.1/api/scala/index.html#org.apache.spark.TaskContext
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/3849#issuecomment-68406594
So personally I don't think we should change the semantics of `attemptId`
because this has been exposed to user applications and they could silently
break if we modify t
Github user koeninger commented on the pull request:
https://github.com/apache/spark/pull/3849#issuecomment-68405794
Thanks for this. Most of the uses of attemptId I've seen look like they
were assuming it meant the 0-based attempt number.
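The assumption koeninger describes above can be sketched in a few lines. This is hypothetical user code, not Spark's API: `is_retry` stands in for any logic that treats `attemptId` as a 0-based attempt number.

```python
def is_retry(attempt_id: int) -> bool:
    """User code that assumes attemptId is a 0-based attempt number:
    any value greater than zero is taken to mean a retry."""
    return attempt_id > 0

# Under the new (attempt-number) semantics, the check behaves as intended:
assert not is_retry(0)   # first attempt
assert is_retry(1)       # first retry

# Under the old semantics, attemptId was a globally unique task ID, so a
# *first* attempt of, say, the 57th task launched in the job would carry
# attemptId == 57 and be wrongly classified as a retry:
assert is_retry(57)
```

This is why code written against the 1.1.1 documentation could silently misbehave: the check is syntactically valid under both semantics but only correct under one of them.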
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/3849#discussion_r22367600
--- Diff: core/src/main/scala/org/apache/spark/executor/Executor.scala ---
@@ -180,7 +189,7 @@ private[spark] class Executor(
// Run the a
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/3849#discussion_r22367616
--- Diff: core/src/main/scala/org/apache/spark/scheduler/Task.scala ---
@@ -44,8 +44,9 @@ import org.apache.spark.util.Utils
*/
private[spark] abs
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/3849#discussion_r22367584
--- Diff: core/src/main/scala/org/apache/spark/executor/Executor.scala ---
@@ -180,7 +189,7 @@ private[spark] class Executor(
// Run the actual
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/3849#discussion_r22367585
--- Diff: core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala
---
@@ -635,7 +635,7 @@ class DAGScheduler(
val rdd = job.finalStage.
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/3849#discussion_r22367524
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerBackend.scala
---
@@ -290,13 +291,22 @@ private[spark] class MesosScheduler
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/3849#discussion_r22367512
--- Diff: core/src/main/scala/org/apache/spark/scheduler/Task.scala ---
@@ -44,8 +44,9 @@ import org.apache.spark.util.Utils
*/
private[spark] abstract
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/3849#discussion_r22367485
--- Diff: core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala
---
@@ -635,7 +635,7 @@ class DAGScheduler(
val rdd = job.finalStage.rdd
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3849#issuecomment-68404370
[Test build #24908 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24908/consoleFull)
for PR 3849 at commit
[`9d8d4d1`](https://githu
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/3849#issuecomment-68404316
/cc @koeninger, who raised this issue on the mailing list, and @yhuai, who
filed the original JIRA issue.
Also, /cc @pwendell, @andrewor14, and @tdas for review
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/3849#discussion_r22367053
--- Diff:
core/src/main/scala/org/apache/spark/executor/MesosExecutorBackend.scala ---
@@ -77,10 +77,13 @@ private[spark] class MesosExecutorBackend
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/3849#discussion_r22367027
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerBackend.scala
---
@@ -290,13 +291,22 @@ private[spark] class MesosSche
GitHub user JoshRosen opened a pull request:
https://github.com/apache/spark/pull/3849
[SPARK-4014] Change TaskContext.attemptId to return attempt number instead
of task ID
This patch modifies `TaskContext.attemptId` to return an attempt number,
which conveys how many times a task
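The behavioral change described above can be illustrated with a toy model. All names here are illustrative, not Spark's actual internals: one counter is globally unique across all task launches (the old `attemptId`), the other is a per-task 0-based attempt number (the new semantics).

```python
import itertools

# Globally unique counter across every task launch in the "job".
_next_task_id = itertools.count()

def launch(attempt_number: int) -> dict:
    """Launch one attempt of a task, recording both possible attemptId values."""
    task_id = next(_next_task_id)
    return {
        "old_attemptId": task_id,         # pre-SPARK-4014: attemptId == unique task ID
        "new_attemptId": attempt_number,  # post-SPARK-4014: 0-based attempt number
    }

# First attempts of three different tasks, then a retry of the third:
a = launch(0)
b = launch(0)
c = launch(0)
c_retry = launch(1)

assert [t["old_attemptId"] for t in (a, b, c, c_retry)] == [0, 1, 2, 3]
assert [t["new_attemptId"] for t in (a, b, c, c_retry)] == [0, 0, 0, 1]
```

Under the old semantics, every launch (including first attempts) got a distinct value, so `attemptId` carried no information about how many times a particular task had been retried; under the new semantics it does.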