It's pretty much impossible to do across arbitrary code changes. For that,
the best way forward is to store and load the offsets yourselves.
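To make the suggestion concrete, here is a minimal sketch of the store-and-load pattern, kept independent of Spark itself: after each batch completes, persist the last consumed offsets per (topic, partition) somewhere durable, and on startup read them back to seed the direct stream's starting offsets. The file path and function names below are hypothetical; in production you would more likely use ZooKeeper, a database, or Kafka's own offset storage rather than a local JSON file.

```python
import json
import os

# Hypothetical location for the persisted offsets; any durable store works.
OFFSETS_PATH = "offsets.json"

def save_offsets(offsets, path=OFFSETS_PATH):
    """Persist {(topic, partition): offset} after a batch completes."""
    serializable = [
        {"topic": t, "partition": p, "offset": o}
        for (t, p), o in offsets.items()
    ]
    with open(path, "w") as f:
        json.dump(serializable, f)

def load_offsets(path=OFFSETS_PATH):
    """Load previously saved offsets; empty dict means start fresh."""
    if not os.path.exists(path):
        return {}
    with open(path) as f:
        return {(e["topic"], e["partition"]): e["offset"]
                for e in json.load(f)}
```

Because the offsets live outside the Spark checkpoint directory, they survive a code upgrade: you delete the stale checkpoint, redeploy, and pass the loaded offsets as the starting positions when creating the direct stream.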

On Wed, Sep 9, 2015 at 10:19 AM, Nicolas Monchy <nico...@gumgum.com> wrote:

> Hello,
>
> I am using Spark Streaming and the Kafka Direct API and I am checkpointing
> the metadata.
> Checkpoints aren't recoverable if you upgrade code so I am losing the last
> consumed offsets in this case.
>
> I know I can build a system to store and load the offsets for each batch
> but before implementing that I would like to know if checkpoints are going
> to be able to recover a code upgrade in a foreseeable future ?
>
> Thanks,
> Nicolas
>
