Hello,

I am using Spark Streaming with the Kafka Direct API, and I am checkpointing
the metadata.
Checkpoints aren't recoverable after a code upgrade, so I lose the last
consumed offsets in that case.
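For reference, my setup looks roughly like the sketch below (the checkpoint
directory, broker list, and topic are placeholders, and I am on the 0.8-style
direct API):

import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

object CheckpointedDirectStream {
  // Placeholder values, not my real configuration
  val checkpointDir = "hdfs:///checkpoints/my-app"
  val kafkaParams   = Map("metadata.broker.list" -> "broker1:9092")
  val topics        = Set("my-topic")

  def createContext(): StreamingContext = {
    val conf = new SparkConf().setAppName("direct-kafka-checkpoint")
    val ssc  = new StreamingContext(conf, Seconds(10))
    ssc.checkpoint(checkpointDir) // metadata checkpointing

    val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
      ssc, kafkaParams, topics)

    // Placeholder processing: just print the message values
    stream.map(_._2).print()
    ssc
  }

  def main(args: Array[String]): Unit = {
    // Rebuilds the context from the checkpoint if one exists,
    // otherwise creates a fresh one
    val ssc = StreamingContext.getOrCreate(checkpointDir, createContext _)
    ssc.start()
    ssc.awaitTermination()
  }
}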

I know I could build a system to store and load the offsets for each batch
(a rough sketch is below), but before implementing it I would like to know:
will checkpoints be able to survive a code upgrade in the foreseeable future?
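
The workaround I have in mind looks something like this (loadOffsets and
saveOffsets are hypothetical helpers backed by an external store such as
ZooKeeper or a database; Spark does not provide them):

import kafka.common.TopicAndPartition
import kafka.message.MessageAndMetadata
import kafka.serializer.StringDecoder
import org.apache.spark.streaming.StreamingContext
import org.apache.spark.streaming.kafka.{HasOffsetRanges, KafkaUtils, OffsetRange}

object ManualOffsets {
  // Hypothetical helpers backed by my own store (ZooKeeper, a database, etc.)
  def loadOffsets(): Map[TopicAndPartition, Long] = ???
  def saveOffsets(ranges: Array[OffsetRange]): Unit = ???

  def buildStream(ssc: StreamingContext, kafkaParams: Map[String, String]) = {
    // Start the direct stream from the offsets I persisted myself,
    // so a code upgrade (which invalidates the checkpoint) does not lose position
    val fromOffsets = loadOffsets()
    val messageHandler =
      (mmd: MessageAndMetadata[String, String]) => (mmd.key, mmd.message)

    val stream = KafkaUtils.createDirectStream[
      String, String, StringDecoder, StringDecoder, (String, String)](
      ssc, kafkaParams, fromOffsets, messageHandler)

    stream.foreachRDD { rdd =>
      val ranges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
      // ... process the batch ...
      saveOffsets(ranges) // persist only after the batch has been processed
    }
  }
}

The idea would be to commit the offsets to my own store only after each batch
succeeds, and to pass them back as fromOffsets when the upgraded job starts.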

Thanks,
Nicolas
