[jira] [Commented] (SPARK-39601) AllocationFailure should not be treated as exitCausedByApp when driver is shutting down

2022-12-13 Thread Apache Spark (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-39601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17646682#comment-17646682
 ] 

Apache Spark commented on SPARK-39601:
--

User 'pan3793' has created a pull request for this issue:
https://github.com/apache/spark/pull/39053

> AllocationFailure should not be treated as exitCausedByApp when driver is 
> shutting down
> ---
>
> Key: SPARK-39601
> URL: https://issues.apache.org/jira/browse/SPARK-39601
> Project: Spark
>  Issue Type: Bug
>  Components: YARN
>Affects Versions: 3.3.0
>Reporter: Cheng Pan
>Assignee: Cheng Pan
>Priority: Major
> Fix For: 3.4.0
>
>
> I observed that some Spark applications successfully completed all jobs but 
> failed during the shutdown phase with the reason "Max number of executor 
> failures (16) reached". The timeline is:
> Driver - The job succeeds and Spark starts its shutdown procedure.
> {code:java}
> 2022-06-23 19:50:55 CST AbstractConnector INFO - Stopped 
> Spark@74e9431b{HTTP/1.1, (http/1.1)}{0.0.0.0:0}
> 2022-06-23 19:50:55 CST SparkUI INFO - Stopped Spark web UI at 
> http://hadoop2627.xxx.org:28446
> 2022-06-23 19:50:55 CST YarnClusterSchedulerBackend INFO - Shutting down all 
> executors
> {code}
> Driver - A container is allocated successfully during the shutdown phase.
> {code:java}
> 2022-06-23 19:52:21 CST YarnAllocator INFO - Launching container 
> container_e94_1649986670278_7743380_02_25 on host hadoop4388.xxx.org for 
> executor with ID 24 for ResourceProfile Id 0{code}
> Executor - The executor cannot connect to the driver endpoint because the 
> driver has already stopped it.
> {code:java}
> Exception in thread "main" java.lang.reflect.UndeclaredThrowableException
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1911)
>   at 
> org.apache.spark.deploy.SparkHadoopUtil.runAsSparkUser(SparkHadoopUtil.scala:61)
>   at 
> org.apache.spark.executor.CoarseGrainedExecutorBackend$.run(CoarseGrainedExecutorBackend.scala:393)
>   at 
> org.apache.spark.executor.YarnCoarseGrainedExecutorBackend$.main(YarnCoarseGrainedExecutorBackend.scala:81)
>   at 
> org.apache.spark.executor.YarnCoarseGrainedExecutorBackend.main(YarnCoarseGrainedExecutorBackend.scala)
> Caused by: org.apache.spark.SparkException: Exception thrown in awaitResult: 
>   at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:301)
>   at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75)
>   at org.apache.spark.rpc.RpcEnv.setupEndpointRefByURI(RpcEnv.scala:101)
>   at 
> org.apache.spark.executor.CoarseGrainedExecutorBackend$.$anonfun$run$9(CoarseGrainedExecutorBackend.scala:413)
>   at scala.runtime.java8.JFunction1$mcVI$sp.apply(JFunction1$mcVI$sp.java:23)
>   at 
> scala.collection.TraversableLike$WithFilter.$anonfun$foreach$1(TraversableLike.scala:877)
>   at scala.collection.immutable.Range.foreach(Range.scala:158)
>   at 
> scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:876)
>   at 
> org.apache.spark.executor.CoarseGrainedExecutorBackend$.$anonfun$run$7(CoarseGrainedExecutorBackend.scala:411)
>   at 
> org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:62)
>   at 
> org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:61)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893)
>   ... 4 more
> Caused by: org.apache.spark.rpc.RpcEndpointNotFoundException: Cannot find 
> endpoint: spark://CoarseGrainedScheduler@hadoop2627.xxx.org:21956
>   at 
> org.apache.spark.rpc.netty.NettyRpcEnv.$anonfun$asyncSetupEndpointRefByURI$1(NettyRpcEnv.scala:148)
>   at 
> org.apache.spark.rpc.netty.NettyRpcEnv.$anonfun$asyncSetupEndpointRefByURI$1$adapted(NettyRpcEnv.scala:144)
>   at scala.concurrent.Future.$anonfun$flatMap$1(Future.scala:307)
>   at scala.concurrent.impl.Promise.$anonfun$transformWith$1(Promise.scala:41)
>   at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
>   at org.apache.spark.util.ThreadUtils$$anon$1.execute(ThreadUtils.scala:99)
>   at 
> scala.concurrent.impl.ExecutionContextImpl$$anon$4.execute(ExecutionContextImpl.scala:138)
>   at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:72)
>   at 
> scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1(Promise.scala:288)
>   at 
> scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1$adapted(Promise.scala:288)
>   at 
> scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:288){code}
> Driver - YarnAllocator receives the container launch error message and treats 
> it as exitCausedByApp, which counts toward the max number of executor failures.
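
The executor-side failure above is worth unpacking: the Range.foreach frames in 
the stack trace are a retry loop in which the executor repeatedly tries to 
resolve the driver's RPC endpoint by URI before giving up. A minimal, 
self-contained Scala sketch of that shape follows; all names here 
(resolveEndpoint, lookupDriver, the attempt count) are invented for 
illustration, and this is not Spark's actual implementation.

{code:scala}
// Minimal sketch (invented names, NOT Spark's real code) of the retry loop
// visible in the stack trace above: the executor resolves the driver's RPC
// endpoint by URI a few times, then rethrows the last error once the driver
// is gone.
import scala.util.{Failure, Success, Try}

object DriverLookupSketch {
  final class RpcEndpointNotFoundException(uri: String)
      extends Exception(s"Cannot find endpoint: $uri")

  // Stand-in for an RPC-endpoint lookup: succeeds only while the driver's
  // CoarseGrainedScheduler endpoint is still registered.
  def resolveEndpoint(uri: String, driverAlive: Boolean): String =
    if (driverAlive) uri else throw new RpcEndpointNotFoundException(uri)

  def lookupDriver(uri: String, attempts: Int, driverAlive: Boolean): String = {
    var ref: Option[String] = None
    var lastError: Throwable = null
    var i = 0
    while (ref.isEmpty && i < attempts) {
      Try(resolveEndpoint(uri, driverAlive)) match {
        case Success(r) => ref = Some(r)
        case Failure(e) => lastError = e // the real loop also waits between tries
      }
      i += 1
    }
    // Rethrowing here is what surfaces as the executor's fatal startup error,
    // i.e. a nonzero container exit that YARN reports back to the driver.
    ref.getOrElse(throw lastError)
  }

  def main(args: Array[String]): Unit = {
    val uri = "spark://CoarseGrainedScheduler@driver-host:21956"
    println(lookupDriver(uri, attempts = 3, driverAlive = true))  // succeeds
    lookupDriver(uri, attempts = 3, driverAlive = false)          // throws, as in the trace
  }
}
{code}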
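The fix idea pursued by the linked pull requests can be sketched in the same 
spirit: once the driver has begun shutting down, a container that fails at 
launch should no longer be classified as exitCausedByApp, so it is not counted 
toward the executor-failure limit. The sketch below is simplified with invented 
names and is not the actual patch.

{code:scala}
// Simplified sketch of the SPARK-39601 idea (invented names, NOT the actual
// patch): classify container failures as app-caused only while the driver is
// still running.
object ExitClassificationSketch {
  // Stand-in for YARN's container completion report.
  final case class ContainerStatus(exitStatus: Int, diagnostics: String)

  @volatile var driverShuttingDown = false // would be set when SparkContext.stop() begins

  /** Should this completed container count toward the executor-failure limit? */
  def exitCausedByApp(status: ContainerStatus): Boolean =
    if (driverShuttingDown) {
      // Executors that die because the driver already stopped its RPC endpoint
      // are collateral damage of shutdown, not application failures.
      false
    } else {
      status.exitStatus != 0 // simplified: the real allocator inspects many exit codes
    }

  def main(args: Array[String]): Unit = {
    val launchFailure = ContainerStatus(1, "RpcEndpointNotFoundException: Cannot find endpoint")
    println(exitCausedByApp(launchFailure)) // true: a genuine failure while running
    driverShuttingDown = true
    println(exitCausedByApp(launchFailure)) // false: ignore failures during shutdown
  }
}
{code}

With a guard of this kind, the application in the report would have completed 
normally instead of failing with "Max number of executor failures (16) reached" 
after all of its jobs had already succeeded.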

[jira] [Commented] (SPARK-39601) AllocationFailure should not be treated as exitCausedByApp when driver is shutting down

2022-11-11 Thread Apache Spark (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-39601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17632163#comment-17632163
 ] 

Apache Spark commented on SPARK-39601:
--

User 'pan3793' has created a pull request for this issue:
https://github.com/apache/spark/pull/38622

> AllocationFailure should not be treated as exitCausedByApp when driver is 
> shutting down
> ---
>
> Key: SPARK-39601
> URL: https://issues.apache.org/jira/browse/SPARK-39601
> Project: Spark
>  Issue Type: Bug
>  Components: YARN
>Affects Versions: 3.3.0
>Reporter: Cheng Pan
>Priority: Major
>







[jira] [Commented] (SPARK-39601) AllocationFailure should not be treated as exitCausedByApp when driver is shutting down

2022-06-25 Thread Apache Spark (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-39601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17558788#comment-17558788
 ] 

Apache Spark commented on SPARK-39601:
--

User 'pan3793' has created a pull request for this issue:
https://github.com/apache/spark/pull/36991

> AllocationFailure should not be treated as exitCausedByApp when driver is 
> shutting down
> ---
>
> Key: SPARK-39601
> URL: https://issues.apache.org/jira/browse/SPARK-39601
> Project: Spark
>  Issue Type: Bug
>  Components: YARN
>Affects Versions: 3.3.0
>Reporter: Cheng Pan
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org


