[ https://issues.apache.org/jira/browse/SPARK-29771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Apache Spark reassigned SPARK-29771:
------------------------------------

    Assignee: Apache Spark

> Limit executor max failures before failing the application
> -----------------------------------------------------------
>
>                 Key: SPARK-29771
>                 URL: https://issues.apache.org/jira/browse/SPARK-29771
>             Project: Spark
>          Issue Type: Improvement
>          Components: Kubernetes, Spark Core
>    Affects Versions: 3.1.0
>            Reporter: Jackey Lee
>            Assignee: Apache Spark
>            Priority: Major
>
> ExecutorPodsAllocator does not limit the number of executor errors or
> deletions, which may cause executors to restart continuously without the
> application ever failing.
> A simple way to reproduce this: add {{--conf
> spark.executor.extraJavaOptions=-Xmse}} to spark-submit. The invalid JVM
> option makes every executor fail at startup, so executors can restart
> thousands of times without the application failing.
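A minimal sketch of the behaviour this issue asks for, assuming a hypothetical
tracker class and config key (neither is Spark's real API): count executor
failures and signal that the application should be failed once a configurable
maximum is exceeded, similar in spirit to {{spark.yarn.max.executor.failures}}
on YARN.

{code:scala}
import java.util.concurrent.atomic.AtomicInteger

// Hypothetical helper: NOT Spark's ExecutorPodsAllocator, just an illustration
// of counting executor failures against a configurable cap.
class ExecutorFailureTracker(maxExecutorFailures: Int) {
  private val failures = new AtomicInteger(0)

  /** Record one executor failure; returns true once the cap has been reached. */
  def registerFailure(executorId: String, reason: String): Boolean = {
    val count = failures.incrementAndGet()
    println(s"Executor $executorId failed ($reason): $count/$maxExecutorFailures failures")
    count >= maxExecutorFailures
  }
}

object ExecutorFailureTrackerDemo {
  def main(args: Array[String]): Unit = {
    // The limit would come from a config such as a (hypothetical)
    // spark.kubernetes.executor.maxFailures key; hard-coded here for the demo.
    val tracker = new ExecutorFailureTracker(maxExecutorFailures = 3)
    (1 to 4).foreach { i =>
      if (tracker.registerFailure(s"exec-$i", "JVM exited on startup")) {
        // In the real allocator this is where the application would be failed
        // instead of restarting yet another executor pod.
        println("Max executor failures reached; failing the application")
      }
    }
  }
}
{code}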