Github user vanzin commented on a diff in the pull request:

    https://github.com/apache/spark/pull/17480#discussion_r110803588
  
    --- Diff: core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
    @@ -249,7 +249,14 @@ private[spark] class ExecutorAllocationManager(
        * yarn-client mode when AM re-registers after a failure.
        */
       def reset(): Unit = synchronized {
    -    initializing = true
    +    /**
     +     * When some tasks need to be scheduled and the initial number of executors is 0,
     +     * resetting the initializing field may cause it to never be set back to false
     +     * on YARN.
    +     * SPARK-20079: https://issues.apache.org/jira/browse/SPARK-20079
    +     */
    +    if (maxNumExecutorsNeeded() == 0) {
    +      initializing = true
    --- End diff --
    
    `updateAndSyncNumExecutorsTarget` is a weird method. It returns a value
that is never used anywhere; the actual variables it sets internally are what
matter...
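
    For context, the call shape being described is roughly the following. This is
a paraphrase from memory rather than the exact source at this revision, so treat
the details (the call from `schedule()`, `clock.getTimeMillis()`, the returned
delta) as assumptions:

    ```scala
    // Hedged paraphrase of the call site, not the actual Spark source: the Int
    // returned by updateAndSyncNumExecutorsTarget (the delta applied to the
    // target) is simply dropped; callers only care about its side effects on
    // internal fields such as numExecutorsTarget.
    private def schedule(): Unit = synchronized {
      val now = clock.getTimeMillis()
      updateAndSyncNumExecutorsTarget(now) // returned value ignored
      // ... remaining scheduling work (e.g. expiring idle executors)
    }
    ```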
    
    But I still don't understand why, when the AM restarts,
`updateAndSyncNumExecutorsTarget` should be a no-op except in this case. What is
different about this case that makes it an exception? Shouldn't
`updateAndSyncNumExecutorsTarget` instead be called from `reset()`, or very soon
after, so the code can update its internal state to match the current status of
the app?
    
    The thing I don't understand is why it is ever ok for
`updateAndSyncNumExecutorsTarget` to just do nothing.
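
    To make the suggestion about `reset()` concrete, here is a rough sketch (not
a tested change, and not what this PR does). Since `updateAndSyncNumExecutorsTarget`
does nothing while `initializing` is true, the sketch derives `initializing` from
the current demand before syncing. Field names beyond `initializing` and
`maxNumExecutorsNeeded()` (`numExecutorsTarget`, `initialNumExecutors`, `clock`)
are recalled from the rest of this file and should be treated as assumptions:

    ```scala
    // Rough sketch: reset, then immediately re-sync toward the app's current demand.
    def reset(): Unit = synchronized {
      // Only stay in the "initializing" state if there is genuinely no demand yet;
      // if tasks are already pending when the AM re-registers, sync right away.
      initializing = maxNumExecutorsNeeded() == 0
      numExecutorsTarget = initialNumExecutors
      // (other bookkeeping from the original reset() is omitted here)
      if (!initializing) {
        updateAndSyncNumExecutorsTarget(clock.getTimeMillis())
      }
    }
    ```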

