GitHub user sebastienrainville commented on the pull request:

    https://github.com/apache/spark/pull/10924#issuecomment-185010055
  
    I'm not sure we want to use a single rejection delay setting for these two
cases. Arguably we could reject offers for a much longer period for
`unmet constraints`, since AFAIK constraints don't change dynamically and
therefore hold for the lifetime of the framework. It's a bit different with
`reached max cores`: if we lose an executor, we want the scheduler to launch a
new one, ideally without waiting too long for it. I used the same default delay
of 120s for both since it seems like a reasonable value.
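
    As a rough sketch of what I have in mind (the property names and the
helper class are just illustrative here, not necessarily what the PR ends up
with), the two delays could be read independently while sharing the same 120s
default:

    ```scala
    import org.apache.spark.SparkConf

    // Illustrative sketch: two independent offer-rejection delays with a 120s default.
    // Property names are assumptions for the example.
    class OfferRejectionDelays(conf: SparkConf) {
      // Constraints don't change for the lifetime of the framework, so this one
      // could safely be raised well above the default.
      val unmetConstraintsSeconds: Long =
        conf.getTimeAsSeconds("spark.mesos.rejectOfferDurationForUnmetConstraints", "120s")

      // After losing an executor we still want new offers reasonably quickly,
      // so this one should stay comparatively short.
      val reachedMaxCoresSeconds: Long =
        conf.getTimeAsSeconds("spark.mesos.rejectOfferDurationForReachedMaxCores", "120s")
    }
    ```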
    
    And for fine-grained mode, there's no reason not to add the same logic.
I'll make the change and test it. Unfortunately, the `declineOffer` function
from the example can't be reused there because it relies on local variables
declared inside the loop. It really feels like this code needs some refactoring.
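
    For reference, this is the kind of refactoring I mean: a sketch of a
`declineOffer` that takes the driver, the offer, and an optional refuse
duration as parameters instead of closing over loop-local variables, so both
backends could share it (the signature is just a suggestion):

    ```scala
    import org.apache.mesos.Protos.{Filters, Offer}
    import org.apache.mesos.SchedulerDriver

    // Illustrative sketch: a shareable declineOffer that receives everything it
    // needs as parameters rather than relying on variables declared in the offer loop.
    def declineOffer(
        driver: SchedulerDriver,
        offer: Offer,
        refuseSeconds: Option[Long] = None): Unit = {
      refuseSeconds match {
        case Some(seconds) =>
          // Ask Mesos not to re-offer these resources for `seconds` seconds.
          val filters = Filters.newBuilder().setRefuseSeconds(seconds).build()
          driver.declineOffer(offer.getId, filters)
        case None =>
          driver.declineOffer(offer.getId)
      }
    }
    ```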

