I was looking around for some documentation on how checkpointing (or rather, 
delivery semantics) is handled when consuming from Kafka with Structured 
Streaming, and I stumbled across this old documentation (which somehow still 
exists in the latest versions) at 
https://spark.apache.org/docs/latest/streaming-kafka-0-10-integration.html#checkpoints.

This page (which I assume dates from around the Spark 2.4 era?) describes 
storing offsets via checkpointing as the least reliable method, and goes on 
to explain how to commit offsets to Kafka or to an external store instead.
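
For reference, the Kafka-commit pattern that page describes looks roughly like 
this (a sketch based on my reading of the linked DStreams docs; the broker 
address, group id and topic name are placeholders):

    import org.apache.kafka.common.serialization.StringDeserializer
    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    import org.apache.spark.streaming.kafka010._

    val conf = new SparkConf().setAppName("offsets-in-kafka")
    val ssc = new StreamingContext(conf, Seconds(10))

    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "localhost:9092",
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id" -> "my-app",
      "enable.auto.commit" -> (false: java.lang.Boolean)
    )

    val stream = KafkaUtils.createDirectStream[String, String](
      ssc,
      LocationStrategies.PreferConsistent,
      ConsumerStrategies.Subscribe[String, String](Seq("events"), kafkaParams)
    )

    stream.foreachRDD { rdd =>
      // grab the offset ranges before any shuffle/repartition
      val offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
      // ... write the results to the sink here ...
      // commit back to Kafka only after the output has succeeded
      stream.asInstanceOf[CanCommitOffsets].commitAsync(offsetRanges)
    }

    ssc.start()
    ssc.awaitTermination()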

It also says:

    "If you enable Spark checkpointing, offsets will be stored in the 
    checkpoint. (...) Furthermore, you cannot recover from a checkpoint if 
    your application code has changed."
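
For context, the Structured Streaming job I have in mind looks roughly like 
this (a minimal sketch; the broker, pattern and paths are placeholders), and 
as far as I can tell the only place the offsets end up is the 
checkpointLocation:

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().appName("kafka-to-parquet").getOrCreate()

    val df = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "localhost:9092")
      // subscribing by pattern rather than a fixed topic list
      .option("subscribePattern", "events.*")
      .option("startingOffsets", "latest")
      .load()

    val query = df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
      .writeStream
      .format("parquet")
      .option("path", "s3a://my-bucket/events/")
      // source offsets and other progress metadata are written under this directory
      .option("checkpointLocation", "s3a://my-bucket/checkpoints/events/")
      .start()

    query.awaitTermination()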

This all leaves me with several questions:

  1.  Is the above quote still true for Spark 3, i.e. does the checkpoint 
break if you change the application code? What about changing the subscribe 
pattern?

  2.  Why was the option to manually commit offsets asynchronously to Kafka 
removed, when it was deemed more reliable than checkpointing? Not to mention 
that storing offsets in Kafka lets you use all the tools shipped with the 
Kafka distribution to easily reset/rewind offsets on specific topics (see the 
sketch after these questions), which doesn't seem to be possible when using 
checkpoints.

  3.  From a user perspective, storing offsets in Kafka offers more features. 
From a developer perspective, having to re-implement offset storage with 
checkpointing across several output systems (such as HDFS, AWS S3 and other 
object stores) seems like a lot of unnecessary work and re-inventing the 
wheel. Is the discussion leading up to the decision to only support storing 
offsets with checkpointing documented anywhere, perhaps in a JIRA?
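
To make the rewind point in question 2 concrete: when the offsets live in 
Kafka you can move them with kafka-consumer-groups.sh, or programmatically 
via the admin client, roughly like this (a sketch; the group, topic and 
offset below are made-up placeholders, and the group must not be active while 
you do it):

    import org.apache.kafka.clients.admin.{AdminClient, AdminClientConfig}
    import org.apache.kafka.clients.consumer.OffsetAndMetadata
    import org.apache.kafka.common.TopicPartition

    val props = new java.util.Properties()
    props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")
    val admin = AdminClient.create(props)

    // rewind partition 0 of "events" back to offset 0 for the group "my-app"
    val newOffsets = new java.util.HashMap[TopicPartition, OffsetAndMetadata]()
    newOffsets.put(new TopicPartition("events", 0), new OffsetAndMetadata(0L))

    admin.alterConsumerGroupOffsets("my-app", newOffsets).all().get()
    admin.close()

Nothing comparable seems to exist for a Structured Streaming checkpoint, 
short of deleting it and starting over with startingOffsets.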

Thanks for your time
