Ok. Thank you for the reply.

On Wed, Sep 9, 2015 at 11:40 AM, Tathagata Das <t...@databricks.com> wrote:

> It's pretty much impossible to do across arbitrary code changes. For that,
> the best way forward is to store and load the offsets yourself.
>
> On Wed, Sep 9, 2015 at 10:19 AM, Nicolas Monchy <nico...@gumgum.com>
> wrote:
>
>> Hello,
>>
>> I am using Spark Streaming with the Kafka direct API, and I am
>> checkpointing the metadata.
>> Checkpoints aren't recoverable across a code upgrade, so I lose the
>> last consumed offsets in that case.
>>
>> I know I can build a system to store and load the offsets for each batch,
>> but before implementing that I would like to know whether checkpoints are
>> going to be able to survive a code upgrade in the foreseeable future.
>>
>> Thanks,
>> Nicolas
>>
>
>
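For anyone finding this thread later: the store-and-load approach suggested above looks roughly like the sketch below, against the Spark 1.x direct API (spark-streaming-kafka, Kafka 0.8-style classes). The `saveOffsets` call is a placeholder for whatever durable store you choose (ZooKeeper, a database, HDFS); it is not a Spark API.

```scala
// Sketch: managing Kafka offsets manually with the Spark 1.x direct API,
// so they survive a code upgrade (unlike metadata checkpoints).
import kafka.common.TopicAndPartition
import kafka.message.MessageAndMetadata
import kafka.serializer.StringDecoder
import org.apache.spark.streaming.StreamingContext
import org.apache.spark.streaming.kafka.{HasOffsetRanges, KafkaUtils, OffsetRange}

object OffsetManagedStream {

  // Placeholder: persist offsets atomically with (or after) your output.
  // Back this with ZooKeeper, a DB, HDFS, etc.
  def saveOffsets(ranges: Array[OffsetRange]): Unit = ()

  def createStream(ssc: StreamingContext,
                   kafkaParams: Map[String, String],
                   fromOffsets: Map[TopicAndPartition, Long]) = {
    // With the fromOffsets overload, each partition starts exactly where
    // your stored offsets say -- no reliance on the checkpoint directory.
    val messageHandler =
      (mmd: MessageAndMetadata[String, String]) => (mmd.key, mmd.message)
    val stream = KafkaUtils.createDirectStream[
      String, String, StringDecoder, StringDecoder, (String, String)](
      ssc, kafkaParams, fromOffsets, messageHandler)

    stream.foreachRDD { rdd =>
      // The direct API exposes the exact offset range of every batch.
      val ranges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
      // ... process rdd, then persist the ending offsets for the next run.
      saveOffsets(ranges)
    }
    stream
  }
}
```

On the next application start (including after deploying new code), read the stored offsets back into the `fromOffsets` map instead of restoring from a checkpoint.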


-- 
Nicolas Monchy  |  Software Engineer, Big Data
*GumGum* <http://www.gumgum.com>  |  *Ads that stick*
424 375 8823  |  nico...@gumgum.com
