GitHub user sitalkedia opened a pull request: https://github.com/apache/spark/pull/17485
[SPARK-20163] Kill all running tasks in a stage in case of fetch failure

## What changes were proposed in this pull request?

Currently, the scheduler does not kill the running tasks in a stage when it encounters a fetch failure. As a result, we might end up running many duplicate tasks in the cluster. There is already a TODO in TaskSetManager to kill all running tasks, but it has not been implemented yet.

## How was this patch tested?

Unit tests.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/sitalkedia/spark kill_tasks_on_stage_failure

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/17485.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #17485

----

commit ec2ac3484330547b4bd7c756ad9123f6c5b2931b
Author: Sital Kedia <ske...@fb.com>
Date:   2017-03-30T20:38:02Z

    [SPARK-20163] Kill all running tasks in a stage in case of fetch failure

----
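To make the proposed scheduling change concrete, below is a minimal, self-contained sketch of the intended behavior: when one task in a stage fails with a fetch failure, the remaining running attempts in that task set are proactively killed instead of being left to finish as duplicate work. This is an illustration only, not the actual Spark TaskSetManager code; the `KillableBackend` trait and `SimpleTaskSetManager` class are hypothetical names standing in for the real scheduler backend and task set manager.

```scala
import scala.collection.mutable

// Hypothetical stand-in for the scheduler backend's task-kill capability.
trait KillableBackend {
  def killTask(taskId: Long, reason: String): Unit
}

// Simplified model of a task set manager that tracks running attempts and,
// on the first fetch failure, asks the backend to kill every other attempt.
class SimpleTaskSetManager(backend: KillableBackend) {
  private val runningTasks = mutable.Set[Long]()
  private var fetchFailureSeen = false

  def taskStarted(taskId: Long): Unit = runningTasks += taskId

  def taskFinished(taskId: Long): Unit = runningTasks -= taskId

  // Called when a task reports a fetch failure. The key idea in the PR:
  // rather than letting the remaining attempts run to completion (and
  // produce duplicate work), kill them as soon as the stage is doomed.
  def handleFetchFailure(failedTaskId: Long): Unit = {
    runningTasks -= failedTaskId
    if (!fetchFailureSeen) {
      fetchFailureSeen = true
      runningTasks.foreach { tid =>
        backend.killTask(tid, s"Stage failed: fetch failure in task $failedTaskId")
      }
      runningTasks.clear()
    }
  }
}

// Tiny usage example with a backend that just logs kill requests.
object FetchFailureDemo extends App {
  val backend = new KillableBackend {
    def killTask(taskId: Long, reason: String): Unit =
      println(s"kill task $taskId: $reason")
  }
  val tsm = new SimpleTaskSetManager(backend)
  Seq(1L, 2L, 3L).foreach(tsm.taskStarted)
  tsm.handleFetchFailure(2L) // tasks 1 and 3 receive kill requests
}
```

In this sketch the kill is idempotent (guarded by `fetchFailureSeen`), mirroring the fact that a stage only needs to be torn down once even if several of its tasks report fetch failures.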