[ 
https://issues.apache.org/jira/browse/SPARK-24416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-24416:
------------------------------------

    Assignee:     (was: Apache Spark)

> Update configuration definition for spark.blacklist.killBlacklistedExecutors
> ----------------------------------------------------------------------------
>
>                 Key: SPARK-24416
>                 URL: https://issues.apache.org/jira/browse/SPARK-24416
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 2.3.0
>            Reporter: Sanket Reddy
>            Priority: Minor
>
> spark.blacklist.killBlacklistedExecutors is defined as:
> (Experimental) If set to "true", allow Spark to automatically kill, and 
> attempt to re-create, executors when they are blacklisted. Note that, when an 
> entire node is added to the blacklist, all of the executors on that node will 
> be killed.
> I presume the killing of blacklisted executors only happens after the stage 
> completes successfully and all of its tasks have finished, or on fetch 
> failures (updateBlacklistForFetchFailure/updateBlacklistForSuccessfulTaskSet). 
> This is confusing because the definition implies the executor is killed and 
> re-created as soon as it is blacklisted. That is not the case: if an executor 
> is blacklisted while a stage is in progress, it is not cleaned up until the 
> stage finishes.
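For context, this flag is only honored when blacklisting itself is enabled. A minimal spark-submit sketch enabling both (the class and jar names below are placeholders, not from this issue):

```shell
# Enable blacklisting and killing of blacklisted executors (Spark 2.3 configs).
# com.example.MyApp and my-app.jar are hypothetical placeholders.
spark-submit \
  --class com.example.MyApp \
  --conf spark.blacklist.enabled=true \
  --conf spark.blacklist.killBlacklistedExecutors=true \
  my-app.jar
```

Even with both set, per the behavior described above, the kill/re-create does not necessarily happen at the moment of blacklisting.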



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
