Github user kayousterhout commented on the issue:

    https://github.com/apache/spark/pull/15249
  
    Re: executor blacklisting, one more reason I've heard for this (I think 
from Imran) is that tasks can fail on an executor because of memory pressure -- 
in which case the task may succeed on other executors that have fewer RDDs 
in memory. I'm inclined to keep it since it seems useful, and if someone 
doesn't care about it, they can always set the max executor failures equal to 
the max host failures.
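    As a concrete sketch of that workaround (the exact `spark.blacklist.*` 
property names below are my assumption, based on the keys this line of work 
eventually introduced in Spark 2.1 -- they aren't fixed anywhere in this 
thread): setting the per-node executor threshold to 1 makes a single 
blacklisted executor blacklist its whole host, so the executor-level and 
host-level limits coincide.

    ```properties
    # Sketch only: assumes the spark.blacklist.* configuration keys
    # from the Spark 2.1 blacklisting work (names may differ in this PR).
    spark.blacklist.enabled                          true
    # Blacklist an executor for a stage after 2 failed tasks on it...
    spark.blacklist.stage.maxFailedTasksPerExecutor  2
    # ...and blacklist the whole node as soon as 1 executor on it is
    # blacklisted, collapsing executor-level into host-level blacklisting.
    spark.blacklist.stage.maxFailedExecutorsPerNode  1
    ```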
    
    @mridulm re: (a), it sounds like for this case, task-set-level 
blacklisting, as Imran suggested, is actually preferable to task-level 
blacklisting (since the issues you mentioned all seem like issues that would 
affect all tasks). Is that correct? I'm trying to understand the issue before 
figuring out how to fix it.

