[ https://issues.apache.org/jira/browse/SPARK-32643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Holden Karau resolved SPARK-32643.
----------------------------------
    Fix Version/s: 3.1.0
       Resolution: Fixed

> [Cleanup] Consolidate state kept in ExecutorDecommissionInfo with 
> TaskSetManager.tidToExecutorKillTimeMapping
> -------------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-32643
>                 URL: https://issues.apache.org/jira/browse/SPARK-32643
>             Project: Spark
>          Issue Type: Sub-task
>          Components: Spark Core
>    Affects Versions: 3.1.0
>            Reporter: Devesh Agrawal
>            Assignee: Devesh Agrawal
>            Priority: Minor
>             Fix For: 3.1.0
>
>
> The decommissioning state is a bit fragmented across two places in the 
> TaskSchedulerImpl:
>  * [https://github.com/apache/spark/pull/29014/] stored the incoming 
> decommission info messages in _TaskSchedulerImpl.executorsPendingDecommission._
>  * [https://github.com/apache/spark/pull/28619/] stored just the executor 
> kill time in the map _TaskSetManager.tidToExecutorKillTimeMapping_ (which in 
> turn is contained in TaskSchedulerImpl).
> While the two states do not really overlap, it is a bit of a code hygiene 
> concern to keep this state in two places.
> With [https://github.com/apache/spark/pull/29422], TaskSchedulerImpl is 
> emerging as the place where all decommissioning bookkeeping is kept within 
> the driver. So consolidate the information in _tidToExecutorKillTimeMapping_ 
> into _ExecutorDecommissionInfo,_ as sketched below.
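>
> As a rough sketch of the intended shape (a minimal illustration only; the 
> field and method names below are assumptions, not the final change):
> {code:scala}
> import scala.collection.mutable
>
> // Before: two disjoint pieces of state.
> //   TaskSchedulerImpl.executorsPendingDecommission: executorId -> decommission info
> //   TaskSetManager.tidToExecutorKillTimeMapping:    taskId     -> expected kill time (ms)
>
> // After: one record per decommissioned executor, kept in TaskSchedulerImpl.
> case class ExecutorDecommissionInfo(
>     message: String,
>     isHostDecommissioned: Boolean,
>     // Expected kill time of the executor, if known (previously held
>     // per-task in tidToExecutorKillTimeMapping).
>     killTimeMs: Option[Long] = None)
>
> class TaskSchedulerImpl {
>   private val executorsPendingDecommission =
>     mutable.HashMap.empty[String, ExecutorDecommissionInfo]
>
>   def executorDecommission(executorId: String, info: ExecutorDecommissionInfo): Unit =
>     executorsPendingDecommission(executorId) = info
>
>   // A TaskSetManager can look the kill time up by executor id instead of
>   // maintaining its own tid -> kill-time map.
>   def getExecutorDecommissionInfo(executorId: String): Option[ExecutorDecommissionInfo] =
>     executorsPendingDecommission.get(executorId)
> }
> {code}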


