I can't be sure how every client would handle it, it's unlikely to come up
in practice, and there could be unforeseen issues.

That said, given that offsets are stored in a (signed) Long, I would
suspect that the offset would roll over to negative values and increment
from there. That means instead of 9,223,372,036,854,775,807 potential
offset values, you would actually have 18,446,744,073,709,551,614 potential
values. To put that into perspective, if we assign 1 byte to each offset,
that's just over 18 exabytes.
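
For illustration, here is a minimal Java sketch of the wrap-around I'm
suspecting. This only demonstrates Java's signed-long arithmetic; it is a
hypothetical demo, not Kafka code, and says nothing about how any
particular broker or client version actually behaves at rollover:

    // Hypothetical demo of signed 64-bit rollover; not Kafka code.
    public class OffsetRolloverDemo {
        public static void main(String[] args) {
            long offset = Long.MAX_VALUE;  // 9,223,372,036,854,775,807
            offset++;                      // Java long arithmetic wraps silently
            System.out.println(offset);    // -9223372036854775808 (Long.MIN_VALUE)
        }
    }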

You will likely run into many other issues, well before offset rollover,
if you try to retain 18 exabytes in a single Kafka topic. (And if not, I
would evaluate breaking the topic up into multiple smaller ones.)

Thanks,
Grant


On Sat, Oct 3, 2015 at 8:58 PM, Li Tao <ahumbleco...@gmail.com> wrote:

> It will never happen.
>
> On Thu, Oct 1, 2015 at 4:22 AM, Chad Lung <chad.l...@gmail.com> wrote:
>
> > I saw a previous question (http://search-hadoop.com/m/uyzND1lrGUW1PgKGG)
> > on offset rollovers, but it doesn't look like it was ever answered.
> >
> > Does anyone know what happens when the offset max limit is reached?
> > Overflow, or something else?
> >
> > Thanks,
> >
> > Chad
> >
>



-- 
Grant Henke
Software Engineer | Cloudera
gr...@cloudera.com | twitter.com/gchenke | linkedin.com/in/granthenke
