toujours33 commented on code in PR #38711:
URL: https://github.com/apache/spark/pull/38711#discussion_r1028830007


##########
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala:
##########
@@ -749,8 +749,10 @@ private[spark] class ExecutorAllocationManager(
         stageAttemptToNumRunningTask.getOrElse(stageAttempt, 0) + 1
       // If this is the last pending task, mark the scheduler queue as empty
       if (taskStart.taskInfo.speculative) {
-        stageAttemptToSpeculativeTaskIndices.getOrElseUpdate(stageAttempt,
-          new mutable.HashSet[Int]) += taskIndex
+        stageAttemptToSpeculativeTaskIndices
+          .getOrElseUpdate(stageAttempt, new mutable.HashSet[Int]).add(taskIndex)
+        stageAttemptToUnsubmittedSpeculativeTasks

Review Comment:
   I think it's OK here. As the PR description explains, `stageAttemptToUnsubmittedSpeculativeTasks.remove(taskIndex)` should only be called when a speculative task starts or when a task finishes (whether it is speculative or not). Line 754 does nothing if this speculative task was already removed when the task finished, which is expected and fine.
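   A minimal sketch of why that is harmless, assuming `stageAttemptToUnsubmittedSpeculativeTasks` keeps a `mutable.HashSet[Int]` of task indices per stage attempt (the snippet above suggests this but does not show the declaration): calling `remove` on an index that was already dropped when the task finished simply returns `false` and leaves the set unchanged.

   ```scala
   import scala.collection.mutable

   object RemoveIsIdempotent {
     def main(args: Array[String]): Unit = {
       // Hypothetical stand-in for the set of unsubmitted speculative task
       // indices tracked for one stage attempt.
       val unsubmitted = mutable.HashSet[Int](3, 7)

       // Task 7 finishes first, so its index is dropped from the set.
       unsubmitted.remove(7) // returns true, element removed

       // Later the speculative copy of task 7 starts; the second remove is a
       // no-op: it returns false and the set stays as it was.
       val changed = unsubmitted.remove(7)
       println(s"second remove changed the set: $changed")    // false
       println(s"remaining unsubmitted indices: $unsubmitted") // HashSet(3)
     }
   }
   ```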
########## core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala: ########## @@ -749,8 +749,10 @@ private[spark] class ExecutorAllocationManager( stageAttemptToNumRunningTask.getOrElse(stageAttempt, 0) + 1 // If this is the last pending task, mark the scheduler queue as empty if (taskStart.taskInfo.speculative) { - stageAttemptToSpeculativeTaskIndices.getOrElseUpdate(stageAttempt, - new mutable.HashSet[Int]) += taskIndex + stageAttemptToSpeculativeTaskIndices + .getOrElseUpdate(stageAttempt, new mutable.HashSet[Int]).add(taskIndex) + stageAttemptToUnsubmittedSpeculativeTasks Review Comment: I think it's ok here. As PR describes, `stageAttemptToUnsubmittedSpeculativeTasks.remove(taskIndex)` should be called only when speculative task start or task finished(whether it is speculative or not). Line 754 will do nothing if this speculative task has been removed when task finished, which is expected and will be ok. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org --------------------------------------------------------------------- To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org For additional commands, e-mail: reviews-h...@spark.apache.org