[GitHub] spark pull request #16389: [SPARK-18981][Core]The job hang problem when spec...
Github user asfgit closed the pull request at: https://github.com/apache/spark/pull/16389
[GitHub] spark pull request #16389: [SPARK-18981][Core]The job hang problem when spec...
Github user zhaorongsheng commented on a diff in the pull request: https://github.com/apache/spark/pull/16389#discussion_r93811187

--- Diff: core/src/test/scala/org/apache/spark/ExecutorAllocationManagerSuite.scala ---
@@ -938,6 +938,33 @@ class ExecutorAllocationManagerSuite
     assert(removeTimes(manager) === Map.empty)
   }
 
+  test("SPARK-18981: maxNumExecutorsNeeded should properly handle speculated tasks") {
+    sc = createSparkContext()
+    val manager = sc.executorAllocationManager.get
+    assert(maxNumExecutorsNeeded(manager) === 0)
+
+    val stageInfo = createStageInfo(0, 1)
+    sc.listenerBus.postToAll(SparkListenerStageSubmitted(stageInfo))
+    assert(maxNumExecutorsNeeded(manager) === 1)
+
+    val taskInfo = createTaskInfo(1, 1, "executor-1")
+    val speculatedTaskInfo = createTaskInfo(2, 1, "executor-1")
+    sc.listenerBus.postToAll(SparkListenerTaskStart(0, 0, taskInfo))
+    assert(maxNumExecutorsNeeded(manager) === 1)
--- End diff --

Yes, the warning 'No stages are running, but numRunningTasks != 0' is printed, and at that point numRunningTasks is reset to 0. But after that the speculated task's end event arrives and numRunningTasks is decremented from 0, going negative. The tests are wrong; I will fix them.
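For context on the reset being discussed, here is a condensed sketch of the stage-completion path in ExecutorAllocationListener, paraphrased from ExecutorAllocationManager.scala of this era. Surrounding bookkeeping is omitted and field and method names are as best recalled, not verbatim source:

    // Paraphrased sketch of the stage-completion handler, not the verbatim
    // Spark source; details and surrounding bookkeeping omitted.
    override def onStageCompleted(stageCompleted: SparkListenerStageCompleted): Unit = {
      allocationManager.synchronized {
        stageIdToNumTasks -= stageCompleted.stageInfo.stageId
        if (stageIdToNumTasks.isEmpty) {
          // No stages left: any nonzero count is treated as stale state.
          if (numRunningTasks != 0) {
            logWarning("No stages are running, but numRunningTasks != 0")
            numRunningTasks = 0
            // A speculative task that ends after this reset would be
            // decremented from 0 in onTaskEnd, driving the count negative.
          }
        }
      }
    }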
[GitHub] spark pull request #16389: [SPARK-18981][Core]The job hang problem when spec...
Github user mridulm commented on a diff in the pull request: https://github.com/apache/spark/pull/16389#discussion_r93787886

--- Diff: core/src/test/scala/org/apache/spark/ExecutorAllocationManagerSuite.scala ---
@@ -938,6 +938,33 @@ class ExecutorAllocationManagerSuite
     assert(removeTimes(manager) === Map.empty)
   }
 
+  test("SPARK-18981: maxNumExecutorsNeeded should properly handle speculated tasks") {
+    sc = createSparkContext()
+    val manager = sc.executorAllocationManager.get
+    assert(maxNumExecutorsNeeded(manager) === 0)
+
+    val stageInfo = createStageInfo(0, 1)
+    sc.listenerBus.postToAll(SparkListenerStageSubmitted(stageInfo))
+    assert(maxNumExecutorsNeeded(manager) === 1)
+
+    val taskInfo = createTaskInfo(1, 1, "executor-1")
+    val speculatedTaskInfo = createTaskInfo(2, 1, "executor-1")
+    sc.listenerBus.postToAll(SparkListenerTaskStart(0, 0, taskInfo))
+    assert(maxNumExecutorsNeeded(manager) === 1)
--- End diff --

This test looks wrong - the taskIndex is higher than numTasks? It would be better for the test to (a sketch of this sequence follows below):
* Launch a stage with 1 task.
* Launch a normal task and 1 speculative task - same taskIndex, but different taskIds.
* Finish the normal task.
* Ensure the stage is completed.
* Now finish the speculative task and check that the bug is not reproduced (it should be reproduced without this fix).
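A sketch of the sequence suggested above, reusing the suite helpers visible in the quoted diff (createSparkContext, createStageInfo, createTaskInfo, maxNumExecutorsNeeded). The SparkListenerTaskEnd arguments follow the Spark 2.x listener API as best recalled here; treat this as an illustration of the suggested ordering, not the final committed test:

    test("SPARK-18981: speculative task ending after its stage completes") {
      sc = createSparkContext()
      val manager = sc.executorAllocationManager.get

      // Stage with a single task.
      val stageInfo = createStageInfo(0, 1)
      sc.listenerBus.postToAll(SparkListenerStageSubmitted(stageInfo))

      // Normal attempt and a speculative copy: same taskIndex (0), different taskIds.
      val taskInfo = createTaskInfo(0, 0, "executor-1")
      val speculativeTaskInfo = createTaskInfo(1, 0, "executor-2")
      sc.listenerBus.postToAll(SparkListenerTaskStart(0, 0, taskInfo))
      sc.listenerBus.postToAll(SparkListenerTaskStart(0, 0, speculativeTaskInfo))

      // Normal task finishes and the stage completes, which resets numRunningTasks.
      sc.listenerBus.postToAll(SparkListenerTaskEnd(0, 0, null, Success, taskInfo, null))
      sc.listenerBus.postToAll(SparkListenerStageCompleted(stageInfo))

      // The speculative task's end event arrives late. Without the fix this
      // decrements numRunningTasks below zero and maxNumExecutorsNeeded()
      // goes negative; with the fix the counter stays at 0.
      sc.listenerBus.postToAll(SparkListenerTaskEnd(0, 0, null, Success, speculativeTaskInfo, null))
      assert(maxNumExecutorsNeeded(manager) === 0)
    }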
[GitHub] spark pull request #16389: [SPARK-18981][Core]The job hang problem when spec...
GitHub user zhaorongsheng opened a pull request: https://github.com/apache/spark/pull/16389 [SPARK-18981][Core]The job hang problem when speculation is on

## What changes were proposed in this pull request?

The root cause of this issue is that `ExecutorAllocationListener` receives the speculated task's end event after the stage-end event has been handled, which resets `numRunningTasks` to 0. The listener then executes `numRunningTasks -= 1`, so `numRunningTasks` becomes negative. When `maxNumExecutorsNeeded()` computes the number of executors needed, the result may be 0 or negative, so `ExecutorAllocationManager` never requests containers and the job hangs.

This PR changes the method `onTaskEnd()` in `ExecutorAllocationListener` to decrement `numRunningTasks` only when `stageIdToNumTasks` contains the taskEnd's stageId.

## How was this patch tested?

This patch is tested in the method `test("SPARK-18981...)` of ExecutorAllocationManagerSuite.scala: create two task infos, one of which is a speculated task, and post the speculated task's end event to the listener after the stage-end event.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/zhaorongsheng/spark master

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/16389.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #16389

commit 1e191136581a22ed7cafd42e8b85e9a057b71171
Author: roncen.zhao
Date: 2016-12-23T16:38:34Z

    resolve the job hang problem when speculation is on
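To make the failure mode concrete, here is a small standalone Scala model of the listener's bookkeeping. Everything here (the object name and simplified handlers) is illustrative only; it mirrors the fields named in the description (stageIdToNumTasks, numRunningTasks) but is not Spark code:

    import scala.collection.mutable

    // Toy model of the ExecutorAllocationListener counters, illustrating why
    // the guard in onTaskEnd matters. Standalone sketch, not Spark source.
    object SpeculationCounterDemo {
      val stageIdToNumTasks = mutable.Map[Int, Int]()
      var numRunningTasks = 0

      def onStageSubmitted(stageId: Int, numTasks: Int): Unit =
        stageIdToNumTasks(stageId) = numTasks

      def onStageCompleted(stageId: Int): Unit = {
        stageIdToNumTasks -= stageId
        // Mirrors the "No stages are running, but numRunningTasks != 0" reset.
        if (stageIdToNumTasks.isEmpty && numRunningTasks != 0) numRunningTasks = 0
      }

      def onTaskStart(stageId: Int): Unit = numRunningTasks += 1

      // The fix: decrement only while the stage is still tracked, so a
      // speculated task ending after stage completion cannot go negative.
      def onTaskEnd(stageId: Int): Unit =
        if (stageIdToNumTasks.contains(stageId)) numRunningTasks -= 1

      def main(args: Array[String]): Unit = {
        onStageSubmitted(0, numTasks = 1)
        onTaskStart(0); onTaskStart(0) // normal attempt + speculative copy
        onTaskEnd(0)                   // normal task finishes
        onStageCompleted(0)            // stage ends; counter reset to 0
        onTaskEnd(0)                   // late speculative task end event
        assert(numRunningTasks == 0)   // without the guard this would be -1
        println(s"numRunningTasks = $numRunningTasks")
      }
    }

Without the guard in onTaskEnd, maxNumExecutorsNeeded() would compute from a negative count and never request executors, which is the hang described above.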