I have a very simple Spark Streaming job running locally in standalone mode. 
There is a custom receiver which reads from a database and passes the data to 
the main job, which prints the total. It is not an actual use case; I am just 
playing around to learn. The problem is that the job gets stuck forever. The 
logic is very simple, so I don't think it is a processing or memory issue. What 
is strange is that if I STOP the job, the output of the job execution suddenly 
appears in the logs, and the other backed-up jobs follow! Can someone help me 
understand what is going on here?

val spark = SparkSession
  .builder()
  .master("local[1]")
  .appName("SocketStream")
  .getOrCreate()

val ssc = new StreamingContext(spark.sparkContext, Seconds(5))
val lines = ssc.receiverStream(new HanaCustomReceiver())

lines.foreachRDD { rdd => println("==============" + rdd.count()) }

ssc.start()
ssc.awaitTermination()
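For reference, here is a minimal sketch of what my custom receiver does. The class body is not shown above, so the field names and the read loop here are placeholders for the actual database query logic:

```scala
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.receiver.Receiver

// Hypothetical sketch of HanaCustomReceiver: the read loop is a
// placeholder, not the actual database code.
class HanaCustomReceiver extends Receiver[String](StorageLevel.MEMORY_ONLY) {

  override def onStart(): Unit = {
    // onStart() must return quickly, so the read loop runs on its own thread.
    new Thread("HANA Receiver") {
      override def run(): Unit = receive()
    }.start()
  }

  override def onStop(): Unit = {
    // The read loop checks isStopped(), so nothing extra to do here.
  }

  private def receive(): Unit = {
    while (!isStopped()) {
      // Placeholder for the real database read; store() hands each
      // record to Spark for inclusion in the next batch.
      store("row-from-db")
      Thread.sleep(1000)
    }
  }
}
```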


(Screenshot: https://i.stack.imgur.com/y1GGr.png)

After terminating the program, the following logs roll, showing execution of 
the batch:

17/06/05 15:56:16 INFO JobGenerator: Stopping JobGenerator immediately
17/06/05 15:56:16 INFO RecurringTimer: Stopped timer for JobGenerator after time 1496696175000
17/06/05 15:56:16 INFO JobGenerator: Stopped JobGenerator
==============100

Thanks!
