Yes, it is set to true.
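For reference, the flag is applied when the context is built, along these lines (a minimal sketch; the app name and batch interval are placeholders, not the actual job's values):

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

// Ask the StreamingContext shutdown hook to call stop(stopGracefully = true)
val conf = new SparkConf()
  .setAppName("stream-job")  // hypothetical app name
  .set("spark.streaming.stopGracefullyOnShutdown", "true")
val ssc = new StreamingContext(conf, Seconds(1))  // placeholder batch interval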
Driver log:

16/05/12 10:18:29 ERROR yarn.ApplicationMaster: RECEIVED SIGNAL 15: SIGTERM
16/05/12 10:18:29 INFO streaming.StreamingContext: Invoking stop(stopGracefully=true) from shutdown hook
16/05/12 10:18:29 INFO scheduler.JobGenerator: Stopping JobGenerator gracefully
16/05/12 10:18:29 INFO scheduler.JobGenerator: Waiting for all received blocks to be consumed for job generation
16/05/12 10:18:29 INFO scheduler.JobGenerator: Waited for all received blocks to be consumed for job generation

Executor log:
16/05/12 10:18:29 ERROR executor.CoarseGrainedExecutorBackend: Driver xx.xx.xx.xx:xxxxx disassociated! Shutting down.
16/05/12 10:18:29 WARN remote.ReliableDeliverySupervisor: Association with remote system [xx.xx.xx.xx:xxxxx] has failed, address is now gated for [5000] ms. Reason: [Disassociated]
16/05/12 10:18:29 INFO storage.DiskBlockManager: Shutdown hook called
16/05/12 10:18:29 INFO processors.StreamJobRunner$: VALUE -------------> 204  // This is the value I am logging
16/05/12 10:18:29 INFO util.ShutdownHookManager: Shutdown hook called
16/05/12 10:18:29 INFO processors.StreamJobRunner$: VALUE -------------> 205
16/05/12 10:18:29 INFO processors.StreamJobRunner$: VALUE -------------> 206
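
Given that the shutdown hook alone does not drain the job on YARN, the fallback I can think of is to stop the context explicitly from the driver when an external marker appears, instead of relying on the hook (a rough sketch; the marker path and poll timeout are hypothetical):

import org.apache.hadoop.fs.{FileSystem, Path}

// Driver-side loop: wake up every 10 s and look for an HDFS marker file.
val fs = FileSystem.get(ssc.sparkContext.hadoopConfiguration)
val marker = new Path("/tmp/stop-streaming-job")  // hypothetical marker path
ssc.start()
var stopped = false
while (!stopped) {
  stopped = ssc.awaitTerminationOrTimeout(10000)  // returns true once terminated
  if (!stopped && fs.exists(marker)) {
    // Graceful stop: finish processing received data before tearing down
    ssc.stop(stopSparkContext = true, stopGracefully = true)
    stopped = true
  }
}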

On Thu, May 12, 2016 at 11:45 AM Deepak Sharma <deepakmc...@gmail.com>
wrote:

> Hi Rakesh
> Did you try setting *spark.streaming.stopGracefullyOnShutdown* to true for
> your Spark configuration instance?
> If not, try this and let us know if it helps.
>
> Thanks
> Deepak
>
> On Thu, May 12, 2016 at 11:42 AM, Rakesh H (Marketing Platform-BLR) <
> rakes...@flipkart.com> wrote:
>
>> The issue I am having is similar to the one mentioned here:
>>
>> http://stackoverflow.com/questions/36911442/how-to-stop-gracefully-a-spark-streaming-application-on-yarn
>>
>> I am creating an RDD from the sequence 1 to 300 and building a streaming
>> DStream out of it:
>>
>> import org.apache.spark.streaming.dstream.ConstantInputDStream
>>
>> val rdd = ssc.sparkContext.parallelize(1 to 300)
>> // Re-emits the same RDD on every batch interval
>> val dstream = new ConstantInputDStream(ssc, rdd)
>> dstream.foreachRDD { rdd =>
>>   rdd.foreach { x =>
>>     log(x)            // custom logging helper
>>     Thread.sleep(50)  // slow things down so the kill arrives mid-job
>>   }
>> }
>>
>>
>> When I kill this job, I expect elements 1 to 300 to be logged before it
>> shuts down. That is indeed the case when I run it locally: it waits for
>> the job to finish before shutting down.
>>
>> But when I launch the job on the cluster in "yarn-cluster" mode, it shuts
>> down abruptly. The executor prints the following log:
>>
>> ERROR executor.CoarseGrainedExecutorBackend: Driver xx.xx.xx.xxx:yyyyy disassociated! Shutting down.
>>
>> and then it shuts down. It is not a graceful shutdown.
>>
>> Does anybody know how to do this on YARN?
>>
>
>
> --
> Thanks
> Deepak
> www.bigdatabig.com
> www.keosha.net
>
