Github user ArtRand commented on the issue:

    https://github.com/apache/spark/pull/18098
  
    Hello @gpang, after thinking about this a lot, I'm glad that you ended up 
merging this. However, I think it’s worth considering the implications of 
changing the offer evaluation logic in the driver. My main concern with the 
method you’ve proposed is that, in a cluster with many frameworks and concurrent 
Spark jobs (potentially with heterogeneous locality wait times), this solution 
may not be effective. Further, I believe that your algorithm doesn’t account 
for adding an executor on an agent that already contains an executor 
(https://github.com/apache/spark/pull/18098/files#diff-387c5d0c916278495fc28420571adf9eR534),
 which may be what you want in some situations (because there is no other way 
to increase the cores for an executor already placed on an agent). I realize 
this is an edge case, and that the old behavior (essentially random 
placement) wouldn’t afford better performance. 
    
    With this in mind, I’d like to propose that we make offer evaluation in 
Spark on Mesos a pluggable interface. For example, right now there is no easy 
way to spread executors over agents, pack executors onto agents, or apply 
other context-specific placement behaviors that a user may want. I think one 
of Spark’s strengths is its tunability, and we should expose this to users who 
wish to use it. 
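    
    To make the idea concrete, here is a minimal sketch of what a pluggable 
placement policy could look like. This is purely illustrative: the trait, the 
case classes, and the config key mentioned afterwards are hypothetical and not 
part of the current Spark on Mesos code.
    
    ```scala
    // Hypothetical sketch only -- none of these types exist in Spark today.
    // The Mesos scheduler backend would delegate "which executors go on which
    // offers" to a user-supplied policy.
    
    // A resource offer, reduced to the fields a placement policy cares about.
    case class AgentOffer(offerId: String, agentId: String, cpus: Double, memMb: Double)
    
    // A decision to launch one executor of the given size on a given offer.
    case class ExecutorPlacement(offerId: String, cores: Int, memMb: Int)
    
    trait OfferPlacementPolicy {
      /** Given the outstanding offers and how many executors are still needed,
        * return the placements to launch; offers not mentioned are declined. */
      def place(
          offers: Seq[AgentOffer],
          executorsNeeded: Int,
          coresPerExecutor: Int,
          memMbPerExecutor: Int): Seq[ExecutorPlacement]
    }
    
    // Example policy: spread executors over as many distinct agents as possible.
    class SpreadPolicy extends OfferPlacementPolicy {
      override def place(
          offers: Seq[AgentOffer],
          executorsNeeded: Int,
          coresPerExecutor: Int,
          memMbPerExecutor: Int): Seq[ExecutorPlacement] = {
        offers
          .filter(o => o.cpus >= coresPerExecutor && o.memMb >= memMbPerExecutor)
          .groupBy(_.agentId)            // consider each agent once
          .values
          .map(_.head)                   // at most one executor per agent
          .take(executorsNeeded)
          .map(o => ExecutorPlacement(o.offerId, coresPerExecutor, memMbPerExecutor))
          .toSeq
      }
    }
    ```
    
    A packing policy would instead keep launching executors on the same agent 
until its resources are exhausted before moving on, and the choice of policy 
could be exposed through a (hypothetical) config key such as 
`spark.mesos.offerPlacementPolicy`.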


