Github user sadhen commented on the issue:
https://github.com/apache/spark/pull/11205
@jerryshao I think the 2nd bullet has not been fixed in SPARK-13054.
I use Spark 2.1.1, and I still find that finished tasks remain in
`private val executorIdToTaskIds = new
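A minimal sketch of the bookkeeping being discussed (simplified, with hypothetical method names based on the snippet above — this is not Spark's actual code): if task IDs are added to the per-executor set on task start but never removed on task end, finished tasks accumulate in the map, which is the leak reported here.

```scala
import scala.collection.mutable

// Simplified model of the listener bookkeeping under discussion.
// Field name follows the snippet above; the rest is a hypothetical sketch.
class TaskTracker {
  private val executorIdToTaskIds =
    new mutable.HashMap[String, mutable.HashSet[Long]]

  def onTaskStart(executorId: String, taskId: Long): Unit =
    executorIdToTaskIds.getOrElseUpdate(executorId, mutable.HashSet.empty) += taskId

  // Without this removal step, finished tasks would remain in the map forever,
  // which is the behavior the comment above reports.
  def onTaskEnd(executorId: String, taskId: Long): Unit =
    executorIdToTaskIds.get(executorId).foreach { ids =>
      ids -= taskId
      if (ids.isEmpty) executorIdToTaskIds -= executorId
    }

  def runningTasks(executorId: String): Int =
    executorIdToTaskIds.get(executorId).map(_.size).getOrElse(0)
}
```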
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/11205
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/83067/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/11205
Merged build finished. Test PASSed.
---
---
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/11205
**[Test build #83067 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/83067/testReport)**
for PR 11205 at commit
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/11205
Verified again; it looks like the 2nd bullet is not valid anymore. I cannot
reproduce it in the latest master branch, so this might have already been fixed in
SPARK-13054.
So only the first issue
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/11205
@vanzin , in the current code `stageIdToTaskIndices` cannot be used to
track the number of running tasks, because this structure doesn't remove a task's
index from itself when the task is finished
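A hedged illustration of the point above (a simplified model, not Spark's actual code): a structure that only ever accumulates task indices counts *launched* tasks, not *running* ones, so a separate running-task count has to be maintained alongside it.

```scala
import scala.collection.mutable

// Simplified model: stageIdToTaskIndices only accumulates indices and never
// removes them, so its size counts launched tasks, not currently running ones.
// Names mirror the discussion, but this is a hypothetical sketch.
class StageTracker {
  private val stageIdToTaskIndices =
    new mutable.HashMap[Int, mutable.HashSet[Int]]
  private var numRunningTasks = 0

  def onTaskStart(stageId: Int, taskIndex: Int): Unit = {
    stageIdToTaskIndices.getOrElseUpdate(stageId, mutable.HashSet.empty) += taskIndex
    numRunningTasks += 1
  }

  // The index is NOT removed from stageIdToTaskIndices here, so only the
  // explicit counter reflects tasks that are still running.
  def onTaskEnd(): Unit = numRunningTasks -= 1

  def launched(stageId: Int): Int =
    stageIdToTaskIndices.get(stageId).map(_.size).getOrElse(0)

  def running: Int = numRunningTasks
}
```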
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/11205
**[Test build #83067 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/83067/testReport)**
for PR 11205 at commit
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/11205
This PR is pretty old and a lot has changed since, but it looks like this
can be fixed now by just fixing the code to look at `stageIdToTaskIndices` instead
of keeping `numRunningTasks` around? (Or
Github user rustagi commented on the issue:
https://github.com/apache/spark/pull/11205
Sorry, I haven't been able to confirm this patch because we have not seen the
issue in production for quite some time.
It was much more persistent with 2.0 than with 2.1.
Not sure of the cause.
---
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/11205
I guess the issue still exists; let me verify it again. If it still
exists I will bring the PR up to date with the latest. Thanks!
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/11205
gentle ping @rustagi
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/11205
gentle ping @rustagi, have you maybe had some time to confirm this patch?
It sounds like the only thing we need here is the confirmation.
---
Github user rustagi commented on the issue:
https://github.com/apache/spark/pull/11205
I can confirm that disabling speculation and setting `spark.task.maxFailures` to 1
eliminates this problem. Will try the patch and confirm
---
Github user rustagi commented on the issue:
https://github.com/apache/spark/pull/11205
I am seeing this issue quite frequently. Not sure what is causing it, but
frequently we will get an onTaskEnd event after a stage has ended. This will
cause the numRunningTasks to become negative. If
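The failure mode described above can be sketched as follows (a hedged simplification, not Spark's actual code): if a late `onTaskEnd` is delivered after the stage has already been marked complete, a bare decrement drives the counter negative, whereas checking that the stage is still active keeps the count sane.

```scala
import scala.collection.mutable

// Simplified model of the race described above; a hypothetical sketch only.
class AllocationState {
  private var numRunningTasks = 0
  private val activeStages = mutable.HashSet[Int]()

  def onStageSubmitted(stageId: Int, tasks: Int): Unit = {
    activeStages += stageId
    numRunningTasks += tasks // assume all tasks start immediately, for brevity
  }

  def onStageCompleted(stageId: Int): Unit = {
    activeStages -= stageId
    numRunningTasks = 0 // stage end resets the counter
  }

  // A late onTaskEnd, delivered after onStageCompleted, would push the
  // counter negative without the guard below.
  def onTaskEnd(stageId: Int): Unit =
    if (activeStages.contains(stageId) && numRunningTasks > 0) {
      numRunningTasks -= 1
    }

  def running: Int = numRunningTasks
}
```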