Github user vanzin commented on a diff in the pull request:

    https://github.com/apache/spark/pull/17480#discussion_r111298559
  
    --- Diff: core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
    @@ -249,7 +249,14 @@ private[spark] class ExecutorAllocationManager(
        * yarn-client mode when AM re-registers after a failure.
        */
       def reset(): Unit = synchronized {
    -    initializing = true
    +    /**
    +     * When some tasks need to be scheduled and the initial executor count is 0,
    +     * resetting the initializing field may leave it stuck at true in yarn mode.
    +     * SPARK-20079: https://issues.apache.org/jira/browse/SPARK-20079
    +     */
    +    if (maxNumExecutorsNeeded() == 0) {
    +      initializing = true
    --- End diff --
    
    What I'm asking is why reset is trying to change the state at all. The AM may
die in the middle of a stage, or in between two stages, or while the driver is
idle and nothing is happening. The fact is, `reset()` has none of that
information, yet it is deciding to reset things to some initial state that is not
based on the current status of the application at all.
    
    So why is reset doing that? What good is that achieving?
    
    As far as I can see, the only thing reset needs to account for is executors
that the driver thinks it has asked to be killed. Those will die by themselves, so
the code can just empty the "kill request" queue and update the related state
appropriately.
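
    For illustration only, here is a minimal sketch of a `reset()` along those
lines. This is not the real `ExecutorAllocationManager`; the trimmed-down class
and the field name `executorsPendingToRemove` are assumptions made just to show
the idea:

```scala
import scala.collection.mutable

// Hypothetical, trimmed-down stand-in for ExecutorAllocationManager, used only
// to illustrate the suggested behavior of reset().
class AllocationStateSketch {
  // Executors the driver has already asked the cluster manager to kill.
  private val executorsPendingToRemove = mutable.Set.empty[String]
  // The flag the patch under discussion toggles; deliberately left alone here.
  private var initializing = false

  def reset(): Unit = synchronized {
    // Outstanding kill requests will complete on their own once the AM
    // re-registers, so just drop the local bookkeeping for them...
    executorsPendingToRemove.clear()
    // ...and leave `initializing` and the executor targets untouched, since
    // reset() has no information about what the application was doing when
    // the AM failed.
  }
}
```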

