Github user Ngone51 commented on the issue:

    https://github.com/apache/spark/pull/22288
  
    As I mentioned at https://github.com/apache/spark/pull/22288#discussion_r216874530, I'm quite worried about this killing behaviour. I think we should kill an executor only if it is idle.
    
    After looking through the discussion above, here are my thoughts:
    
    * with dynamic allocation
    
    Maybe we can add an `onTaskCompletelyBlacklisted()` method to the DA manager's `Listener` and pass it an event, e.g. a `TaskCompletelyBlacklistedEvent`. The DA manager would then allocate a new executor for us (see the first sketch after this list).
    
    * with static allocation
    
    Set a `spark.scheduler.unschedulableTaskSetTimeout` for a `TaskSet`. If a task is completely blacklisted, kill some executors, but only if they're idle (maybe, taking executors' allocation time into account, we should also raise the timeout's upper bound a little for this `TaskSet`). Then wait until the task gets scheduled, or abort on timeout (see the second sketch after this list).
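    To make the dynamic-allocation idea concrete, here is a minimal, self-contained sketch. All names in it (`ExecutorAllocationListener`, `TaskCompletelyBlacklistedEvent`, `SketchAllocationManager`, `targetExecutors`) are hypothetical and only illustrate the proposed hook; this is not the actual patch or Spark's real internals.

```scala
// Hypothetical event carrying which task cannot be scheduled anywhere.
case class TaskCompletelyBlacklistedEvent(stageId: Int, taskIndex: Int)

// Hypothetical listener interface on the dynamic allocation (DA) manager.
trait ExecutorAllocationListener {
  def onTaskCompletelyBlacklisted(event: TaskCompletelyBlacklistedEvent): Unit
}

// Sketch of a DA manager reacting to the event by raising its executor target,
// so the cluster manager brings up a fresh executor that is not yet blacklisted.
class SketchAllocationManager extends ExecutorAllocationListener {
  private var targetExecutors = 0

  override def onTaskCompletelyBlacklisted(event: TaskCompletelyBlacklistedEvent): Unit = {
    targetExecutors += 1
    println(s"task ${event.taskIndex} of stage ${event.stageId} is blacklisted " +
      s"on every executor; raising executor target to $targetExecutors")
  }
}
```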

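    And a second self-contained sketch for the static-allocation path, assuming the proposed `spark.scheduler.unschedulableTaskSetTimeout` config; the 120s default, the helper names and the polling loop are all assumptions made for illustration, not the real scheduler code.

```scala
import java.util.concurrent.TimeUnit

object UnschedulableTaskSetSketch {
  // Hypothetical timeout value; could be stretched a little to cover the time
  // a replacement executor needs to come up and register.
  val unschedulableTaskSetTimeoutMs: Long = TimeUnit.SECONDS.toMillis(120)

  case class ExecutorInfo(id: String, runningTasks: Int) {
    def isIdle: Boolean = runningTasks == 0
  }

  def handleCompletelyBlacklistedTask(
      executors: Seq[ExecutorInfo],
      killExecutor: String => Unit,
      becameSchedulable: () => Boolean,
      abortTaskSet: String => Unit): Unit = {
    // Kill only idle executors; busy ones keep running their tasks.
    executors.filter(_.isIdle).foreach(e => killExecutor(e.id))

    // Wait for the task to become schedulable (e.g. a new executor registered),
    // or give up and abort the TaskSet once the timeout expires.
    val deadline = System.currentTimeMillis() + unschedulableTaskSetTimeoutMs
    while (!becameSchedulable() && System.currentTimeMillis() < deadline) {
      Thread.sleep(1000)
    }
    if (!becameSchedulable()) {
      abortTaskSet(s"TaskSet still unschedulable after $unschedulableTaskSetTimeoutMs ms")
    }
  }
}
```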
