Hi,
You're right...killing the Spark Streaming job is the way to go. If a batch
completed successfully, Spark Streaming will recover from the controlled
failure and start where it left off. I don't think there's any other way to
do it.
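For reference, a minimal sketch of the recovery pattern (the checkpoint path
and app name below are placeholders, not from your setup): the StreamingContext
has to be built inside the function passed to StreamingContext.getOrCreate, so
that after the job is killed and resubmitted the driver restores the context
from the checkpoint directory instead of creating a fresh one.

  import org.apache.spark.SparkConf
  import org.apache.spark.streaming.{Seconds, StreamingContext}

  object RestartableStream {
    // Hypothetical checkpoint directory; reuse the same path across restarts.
    val checkpointDir = "hdfs:///checkpoints/my-stream"

    def createContext(): StreamingContext = {
      val conf = new SparkConf().setAppName("my-streaming-job")
      val ssc = new StreamingContext(conf, Seconds(30 * 60)) // 30-minute batches
      ssc.checkpoint(checkpointDir)
      // ... define input DStreams and transformations here ...
      ssc
    }

    def main(args: Array[String]): Unit = {
      // Fresh start: createContext() is called.
      // Restart after a kill: the context is rebuilt from the checkpoint data.
      val ssc = StreamingContext.getOrCreate(checkpointDir, createContext _)
      ssc.start()
      ssc.awaitTermination()
    }
  }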
Pozdrawiam,
Jacek Laskowski
https://about.me/JacekLaskow
Hi,
I am new to Spark Streaming. I have developed a Spark Streaming job which
runs every 30 minutes with a checkpoint directory.
I have to implement a minor change. Shall I kill the Spark Streaming job once
the batch is completed, using the yarn application -kill command, and update
the j