Github user suyanNone commented on the pull request:

    https://github.com/apache/spark/pull/4055#issuecomment-118257935
  
    @squito you are right, the TaskSet is set to zombie so the remaining tasks are not scheduled.
    Your test case is good; I only modified one place, from
    ```
        runEvent(ExecutorLost("exec-hostA"))
        runEvent(CompletionEvent(taskSets(1).tasks(0),
          FetchFailed(null, firstShuffleId, 2, 0, "Fetch failed"),
          null, null, createFakeTaskInfo(), null))
        // so we resubmit stage 0, which completes happily
        scheduler.resubmitFailedStages()
        val stage0Resubmit = taskSets(2)
        assert(stage0Resubmit.stageId == 0)
        assert(stage0Resubmit.attempt === 1)
    ```
    
    to 
    
    ```
        runEvent(ExecutorLost("exec-hostA"))
        runEvent(CompletionEvent(taskSets(1).tasks(0),
          FetchFailed(null, firstShuffleId, 2, 0, "Fetch failed"),
          null, null, createFakeTaskInfo(), null))
        // so we resubmit stage 0, which completes happily
        Thread.sleep(1000)
        val stage0Resubmit = taskSets(2)
        assert(stage0Resubmit.stageId == 0)
        assert(stage0Resubmit.attempt === 1)
    ```
    
    The reason is that when the DAGScheduler handles a `FetchFailed`, it runs
    ```
    messageScheduler.schedule(new Runnable {
      override def run(): Unit = eventProcessLoop.post(ResubmitFailedStages)
    }, 200, TimeUnit.MILLISECONDS)
    ```
    and `runEvent` is asynchronous, so we need to wait some time to make sure the event was processed.
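    As a side note, a fixed `Thread.sleep(1000)` can be flaky on slow machines. One alternative (a sketch, not part of this PR; `awaitCondition` is a hypothetical helper, not a Spark API) is to poll for the expected state with a timeout instead of sleeping a fixed amount:
    ```scala
    // Hypothetical helper: poll until a condition holds, failing after a timeout,
    // instead of sleeping a fixed duration and hoping the event loop has caught up.
    def awaitCondition(timeoutMs: Long = 5000, intervalMs: Long = 50)(cond: => Boolean): Unit = {
      val deadline = System.currentTimeMillis() + timeoutMs
      while (!cond) {
        if (System.currentTimeMillis() > deadline)
          throw new AssertionError(s"condition not met within ${timeoutMs}ms")
        Thread.sleep(intervalMs)
      }
    }

    // Usage in the test above: wait until the resubmitted TaskSet appears,
    // then make the assertions, e.g.
    //   awaitCondition() { taskSets.size > 2 }
    //   val stage0Resubmit = taskSets(2)
    ```
    This way the test waits only as long as needed and fails with a clear message if the resubmission never happens.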
