[ https://issues.apache.org/jira/browse/SPARK-15479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15301873#comment-15301873 ]
Rakesh commented on SPARK-15479:
--------------------------------

Please note that the graceful shutdown setting is set to true. It works fine in the local environment.

> Spark job doesn't shut down gracefully in yarn mode.
> -----------------------------------------------
>
>                 Key: SPARK-15479
>                 URL: https://issues.apache.org/jira/browse/SPARK-15479
>             Project: Spark
>          Issue Type: Bug
>          Components: Streaming
>    Affects Versions: 1.5.1
>            Reporter: Rakesh
>
> The issue I am having is similar to the one mentioned here:
> http://stackoverflow.com/questions/36911442/how-to-stop-gracefully-a-spark-streaming-application-on-yarn
>
> I create an RDD from the sequence 1 to 300 and build a streaming DStream out of it:
>
>     val rdd = ssc.sparkContext.parallelize(1 to 300)
>     val dstream = new ConstantInputDStream(ssc, rdd)
>     dstream.foreachRDD { rdd =>
>       rdd.foreach { x =>
>         log(x)
>         Thread.sleep(50)
>       }
>     }
>
> When I kill this job, I expect elements 1 to 300 to be logged before it
> shuts down. That is indeed the case when I run it locally: it waits for the
> job to finish before shutting down.
>
> But when I launch the job in "yarn-cluster" mode, it shuts down abruptly.
> The executor prints the following log and then exits:
>
>     ERROR executor.CoarseGrainedExecutorBackend:
>     Driver xx.xx.xx.xxx:yyyyy disassociated! Shutting down.
>
> This is not a graceful shutdown.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
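For context, the graceful-shutdown setting referred to above is the `spark.streaming.stopGracefullyOnShutdown` configuration flag. A minimal sketch of how it is typically passed at submit time follows; the application class name and jar path are placeholders, not taken from this report:

```shell
# Hypothetical submit command -- class name and jar are illustrative only.
# spark.streaming.stopGracefullyOnShutdown=true asks Spark Streaming to finish
# in-flight batches when the driver receives a shutdown signal, instead of
# stopping immediately.
spark-submit \
  --master yarn-cluster \
  --conf spark.streaming.stopGracefullyOnShutdown=true \
  --class com.example.StreamingApp \
  streaming-app.jar
```

Alternatively, an application can request a graceful stop programmatically via `StreamingContext.stop(stopSparkContext = true, stopGracefully = true)`. Note that the flag only takes effect if the JVM shutdown hook actually runs; if YARN kills the containers outright, the graceful path may never execute, which would be consistent with the behavior reported here.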