I'm not sure what you mean about the offsets not being picked up. I assume you are using a file-based rather than a Kafka-based data source. Is the incoming data generated as mini-batch files or as a single large file? Have you run into this type of problem before?

On 7/21/22 1:02 PM, KhajaAsmath Mohammed wrote:
Hi,

I am seeing weird behavior in our Spark Structured Streaming application: the offsets are not getting picked up by the streaming job.

If I delete the checkpoint directory and rerun the job, I can see the data for the first batch, but it does not pick up new offsets for subsequent batches while the job is running.

FYI, the job is still running, but it is not picking up new offsets. I cannot figure out where the issue is in this case.

Thanks,
Asmath


---------------------------------------------------------------------
To unsubscribe e-mail: user-unsubscr...@spark.apache.org