[ https://issues.apache.org/jira/browse/SPARK-40481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Dongjoon Hyun updated SPARK-40481:
----------------------------------
    Parent: SPARK-41550
    Issue Type: Sub-task  (was: Improvement)

> Ignore stage fetch failure caused by decommissioned executor
> -------------------------------------------------------------
>
>                 Key: SPARK-40481
>                 URL: https://issues.apache.org/jira/browse/SPARK-40481
>             Project: Spark
>          Issue Type: Sub-task
>          Components: Spark Core
>    Affects Versions: 3.4.0
>            Reporter: Zhongwei Zhu
>            Assignee: Zhongwei Zhu
>            Priority: Minor
>             Fix For: 3.4.0
>
>
> When executor decommission is enabled, many stage failures can be caused by FetchFailed errors from decommissioned executors, which in turn can fail the whole job. It would be better not to count such failures toward `spark.stage.maxConsecutiveAttempts`.
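The idea described above can be illustrated with a minimal Scala sketch. This is not the actual DAGScheduler code; the helper names (`FetchFailure`, `countsTowardLimit`, `isExecutorDecommissioned` via a set of decommissioned executor IDs) are hypothetical and only show the intended rule: a fetch failure coming from a decommissioned executor should not consume one of the stage's consecutive retry attempts.

    // Illustrative sketch only, under the assumptions stated above.
    object StageRetryPolicy {
      // Hypothetical, simplified view of a shuffle fetch failure.
      final case class FetchFailure(execId: String)

      // A failure counts toward spark.stage.maxConsecutiveAttempts only if the
      // executor that served (or failed to serve) the blocks was not decommissioned.
      def countsTowardLimit(
          failure: FetchFailure,
          decommissionedExecutors: Set[String]): Boolean =
        !decommissionedExecutors.contains(failure.execId)

      // The stage is aborted only when the counted failures reach the limit.
      def shouldAbortStage(
          consecutiveCountedFailures: Int,
          maxConsecutiveAttempts: Int): Boolean =
        consecutiveCountedFailures >= maxConsecutiveAttempts
    }

For example, with `maxConsecutiveAttempts = 4`, a failure from an executor in the decommissioned set would leave the counter unchanged, so a job running through a cluster scale-down would not be aborted solely because its shuffle sources were drained.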