Github user andrewor14 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/3783#discussion_r22668989
  
    --- Diff: core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
    @@ -426,39 +426,44 @@ private[spark] class ExecutorAllocationManager(
           }
         }
     
    -    override def onTaskStart(taskStart: SparkListenerTaskStart): Unit = synchronized {
    +    override def onTaskStart(taskStart: SparkListenerTaskStart): Unit = {
           val stageId = taskStart.stageId
           val taskId = taskStart.taskInfo.taskId
           val taskIndex = taskStart.taskInfo.index
           val executorId = taskStart.taskInfo.executorId
     
    -      // If this is the last pending task, mark the scheduler queue as empty
    -      stageIdToTaskIndices.getOrElseUpdate(stageId, new mutable.HashSet[Int]) += taskIndex
    -      val numTasksScheduled = stageIdToTaskIndices(stageId).size
    -      val numTasksTotal = stageIdToNumTasks.getOrElse(stageId, -1)
    -      if (numTasksScheduled == numTasksTotal) {
    -        // No more pending tasks for this stage
    -        stageIdToNumTasks -= stageId
    -        if (stageIdToNumTasks.isEmpty) {
    -          allocationManager.onSchedulerQueueEmpty()
    +      allocationManager.synchronized {
    +        allocationManager.onExecutorAdded(executorId)
    --- End diff --
    
    This seems incorrect to me. This marks the executor as idle every time it runs a task, which will result in many duplicate executor warnings (see [this line](https://github.com/apache/spark/blob/8d45834debc6986e61831d0d6e982d5528dccc51/core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala#L325)). I think we should do a check instead, and add a huge comment explaining the ordering issue:
    ```
    // This guards against the race condition in which the `SparkListenerTaskStart`
    // event is posted before the `SparkListenerBlockManagerAdded` event, which is
    // possible because these events are posted in different threads
    if (!allocationManager.executorIds.contains(executorId)) {
      allocationManager.onExecutorAdded(executorId)
    }
    ```
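    For illustration, here is a minimal, self-contained sketch of that check (not Spark's real classes; `AllocationManager`, `registerIfUnknown`, and `exec-1` are made-up names): whichever listener callback sees the executor first registers it, and the later one becomes a no-op instead of tripping the duplicate-executor warning.
    ```
    // Standalone sketch, not Spark code.
    import scala.collection.mutable

    class AllocationManager {
      // Executors currently known to the manager (Spark's `executorIds`).
      val executorIds = mutable.Set.empty[String]

      def onExecutorAdded(executorId: String): Unit = synchronized {
        if (executorIds.contains(executorId)) {
          // The warning we do not want fired on every task start.
          println(s"Duplicate executor $executorId has registered")
        } else {
          executorIds += executorId
        }
      }
    }

    object GuardedRegistrationSketch {
      def main(args: Array[String]): Unit = {
        val allocationManager = new AllocationManager

        // Guarded registration, as suggested above: only register if unknown.
        def registerIfUnknown(executorId: String): Unit = allocationManager.synchronized {
          if (!allocationManager.executorIds.contains(executorId)) {
            allocationManager.onExecutorAdded(executorId)
          }
        }

        // Simulate the race: the task-start event for exec-1 is processed before
        // the block-manager-added event for the same executor.
        registerIfUnknown("exec-1")   // from onTaskStart
        registerIfUnknown("exec-1")   // from onBlockManagerAdded, now a no-op
        println(allocationManager.executorIds)   // contains exec-1 once, no warning
      }
    }
    ```
    Wrapping the check in `allocationManager.synchronized` is what keeps the contains-check and the registration atomic with respect to the other listener callback, so two event-handling threads cannot both pass the check for the same executor.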

