[ https://issues.apache.org/jira/browse/SPARK-46681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated SPARK-46681:
-----------------------------------
    Labels: pull-request-available  (was: )

> Refactor `ExecutorFailureTracker#maxNumExecutorFailures` to avoid unnecessary 
> computations when `MAX_EXECUTOR_FAILURES` is configured
> -------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-46681
>                 URL: https://issues.apache.org/jira/browse/SPARK-46681
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 4.0.0
>            Reporter: Yang Jie
>            Priority: Minor
>              Labels: pull-request-available
>
> {code:java}
> def maxNumExecutorFailures(sparkConf: SparkConf): Int = {
>   val effectiveNumExecutors =
>     if (Utils.isStreamingDynamicAllocationEnabled(sparkConf)) {
>       sparkConf.get(STREAMING_DYN_ALLOCATION_MAX_EXECUTORS)
>     } else if (Utils.isDynamicAllocationEnabled(sparkConf)) {
>       sparkConf.get(DYN_ALLOCATION_MAX_EXECUTORS)
>     } else {
>       sparkConf.get(EXECUTOR_INSTANCES).getOrElse(0)
>     }
>   // By default, effectiveNumExecutors is Int.MaxValue if dynamic allocation is enabled.
>   // We need to avoid the integer overflow here.
>   val defaultMaxNumExecutorFailures = math.max(3,
>     if (effectiveNumExecutors > Int.MaxValue / 2) Int.MaxValue
>     else 2 * effectiveNumExecutors)
>   sparkConf.get(MAX_EXECUTOR_FAILURES).getOrElse(defaultMaxNumExecutorFailures)
> } {code}
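>  
> For illustration, the {{Int.MaxValue / 2}} guard matters because doubling a large executor count wraps around in 32-bit {{Int}} arithmetic; a minimal Scala sketch of the behavior it prevents:
> {code:java}
> val n = Int.MaxValue                 // 2147483647
> val doubled = 2 * n                  // -2: the multiplication overflows and wraps
> val guarded =
>   if (n > Int.MaxValue / 2) Int.MaxValue
>   else 2 * n                         // stays at 2147483647
> {code}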
> Currently, {{defaultMaxNumExecutorFailures}} is always computed, even when {{MAX_EXECUTOR_FAILURES}} is explicitly configured and the default value is never used. The default should only be computed when the configuration is absent, e.g. as sketched below.
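>  
> A minimal sketch of one possible shape of the refactor (illustrative only, not necessarily the merged change; it reuses the names from the snippet above and relies on {{Option.getOrElse}} taking its default by name, so the block runs only when {{MAX_EXECUTOR_FAILURES}} is unset):
> {code:java}
> def maxNumExecutorFailures(sparkConf: SparkConf): Int = {
>   sparkConf.get(MAX_EXECUTOR_FAILURES).getOrElse {
>     // Reached only when MAX_EXECUTOR_FAILURES is not configured.
>     val effectiveNumExecutors =
>       if (Utils.isStreamingDynamicAllocationEnabled(sparkConf)) {
>         sparkConf.get(STREAMING_DYN_ALLOCATION_MAX_EXECUTORS)
>       } else if (Utils.isDynamicAllocationEnabled(sparkConf)) {
>         sparkConf.get(DYN_ALLOCATION_MAX_EXECUTORS)
>       } else {
>         sparkConf.get(EXECUTOR_INSTANCES).getOrElse(0)
>       }
>     // Same overflow guard as before.
>     math.max(3,
>       if (effectiveNumExecutors > Int.MaxValue / 2) Int.MaxValue
>       else 2 * effectiveNumExecutors)
>   }
> } {code}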



