[ https://issues.apache.org/jira/browse/SPARK-9446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14649497#comment-14649497 ]

Sean Owen commented on SPARK-9446:
----------------------------------

Resolved by https://github.com/apache/spark/pull/7756

> Clear Active SparkContext in stop() method
> ------------------------------------------
>
>                 Key: SPARK-9446
>                 URL: https://issues.apache.org/jira/browse/SPARK-9446
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 1.4.1
>            Reporter: Ted Yu
>            Assignee: Ted Yu
>            Priority: Minor
>             Fix For: 1.4.2, 1.5.0
>
>
> In the mailing-list thread 'stopped SparkContext remaining active', Andres 
> observed the following in the driver log:
> {code}
> 15/07/29 15:17:09 WARN YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster has disassociated: <address removed>
> 15/07/29 15:17:09 INFO YarnClientSchedulerBackend: Shutting down all executors
> Exception in thread "Yarn application state monitor" org.apache.spark.SparkException: Error asking standalone scheduler to shut down executors
>         at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.stopExecutors(CoarseGrainedSchedulerBackend.scala:261)
>         at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.stop(CoarseGrainedSchedulerBackend.scala:266)
>         at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.stop(YarnClientSchedulerBackend.scala:158)
>         at org.apache.spark.scheduler.TaskSchedulerImpl.stop(TaskSchedulerImpl.scala:416)
>         at org.apache.spark.scheduler.DAGScheduler.stop(DAGScheduler.scala:1411)
>         at org.apache.spark.SparkContext.stop(SparkContext.scala:1644)
>         at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend$$anon$1.run(YarnClientSchedulerBackend.scala:139)
> Caused by: java.lang.InterruptedException
>         at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1325)
>         at scala.concurrent.impl.Promise$DefaultPromise.tryAwait(Promise.scala:208)
>         at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:218)
>         at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
>         at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:190)
>         at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
>         at scala.concurrent.Await$.result(package.scala:190)
> 15/07/29 15:17:09 INFO YarnClientSchedulerBackend: Asking each executor to shut down
>         at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:102)
>         at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:78)
>         at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.stopExecutors(CoarseGrainedSchedulerBackend.scala:257)
>         ... 6 more
> {code}
> The effect of the above exception is that a stopped SparkContext is returned 
> to the user, because SparkContext.clearActiveContext() is never called.
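> A minimal sketch of the idea behind a fix (simplified, hypothetical names; 
> see the linked PR for the actual patch): ensure clearActiveContext() runs 
> even when shutting down the scheduler throws, e.g. via a finally block:
> {code}
> // Illustrative sketch only; the real SparkContext.stop() performs far more
> // cleanup than shown here, and _dagScheduler is a simplified stand-in.
> def stop(): Unit = {
>   try {
>     // Stopping the DAG scheduler tears down the task scheduler and the
>     // executors, and may throw (e.g. the InterruptedException above).
>     _dagScheduler.stop()
>   } finally {
>     // Clear the active-context registry unconditionally so a stopped
>     // SparkContext is never handed back to callers.
>     SparkContext.clearActiveContext()
>   }
> }
> {code}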


