Github user squito commented on the issue:

    https://github.com/apache/spark/pull/18874
  
    @srowen you have a good point about a case that becomes worse after this change. Still, I think this change is better on balance.
    
    btw, there are even more odd cases with dynamic allocation right now -- the one I've seen most often is that if you run really short tasks, all in sequence, you probably won't release any executors. Say you first run some really large job with 10k tasks, so you request a bunch of executors. After that, you only ever run 100 tasks at a time, so you could release a bunch of resources. But 60 seconds is enough time for a bunch of short stages to execute, and each stage chooses a random set of executors to run on, so no executor is ever idle for a full 60 seconds.
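    To make the effect concrete, here is a toy simulation (not Spark code; all the numbers -- 150 executors, 100 tasks per stage, 5-second stages, a 60-second idle timeout -- are made up for illustration). Because each short stage picks its executors at random, an executor only hits the idle timeout if it dodges 12 consecutive stages, which almost never happens:

```python
import random

# Toy model of the scenario described above: a big job acquired 150
# executors, but subsequent short stages only need 100 of them at a
# time, chosen uniformly at random. An executor would be released
# only after 60 consecutive idle seconds.
random.seed(42)

num_executors = 150
tasks_per_stage = 100
stage_duration = 5   # seconds per short stage
idle_timeout = 60    # seconds before an idle executor is released

busy_until = {e: 0 for e in range(num_executors)}
ever_released = set()

t = 0
while t < 600:  # ten minutes of back-to-back short stages
    # Each stage lands on a random subset of executors.
    for e in random.sample(range(num_executors), tasks_per_stage):
        busy_until[e] = t + stage_duration
    t += stage_duration
    # Record any executor that has now been idle for the full timeout.
    for e, until in busy_until.items():
        if t - until >= idle_timeout:
            ever_released.add(e)

# An executor must miss 12 stages in a row (60s / 5s) to be released;
# with a 1/3 miss chance per stage, that is about a 2-in-a-million
# streak, so essentially nothing is ever released even though a third
# of the executors are surplus at any given moment.
print(f"executors released at least once: {len(ever_released)}")
```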
    
    Perhaps we could also fix that as part of the larger change Tom was alluding to, which would more tightly integrate dynamic allocation into the scheduler.

