If the whole cluster goes down and unclean leader election is allowed on the
brokers, some messages that were already exposed to consumers can be lost
when the brokers restart. When that happens, the consumers may need to reset
their offsets, since the committed offsets may no longer be valid. By
default, the offset is reset to the smallest valid offset. You can set
auto.offset.reset to "largest" to avoid re-reading all of the old messages.
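For reference, here is a minimal sketch of setting that property on the 0.8
high-level consumer (the zookeeper.connect and group.id values below are
placeholders to substitute with your own):

    import java.util.Properties;

    import kafka.consumer.Consumer;
    import kafka.consumer.ConsumerConfig;
    import kafka.javaapi.consumer.ConsumerConnector;

    public class OffsetResetExample {
        public static void main(String[] args) {
            // Placeholder connection settings -- use your own ZooKeeper
            // ensemble and consumer group id.
            Properties props = new Properties();
            props.put("zookeeper.connect", "localhost:2181");
            props.put("group.id", "my-consumer-group");
            // When the committed offset is out of range, jump to the largest
            // (latest) offset instead of re-reading from the smallest one.
            props.put("auto.offset.reset", "largest");

            ConsumerConnector consumer =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
            // ... create message streams and consume as usual ...
            consumer.shutdown();
        }
    }

Note that "largest" skips any messages published between the offset reset and
the time the consumer resumes, so use it only if re-reading old messages is
the bigger concern.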

Thanks,

Jun

On Sun, Oct 19, 2014 at 9:58 PM, Yu Yang <yuyan...@gmail.com> wrote:

> Thanks, Jun! Yes, I set the topic replication factor to 3.
>
> On Sun, Oct 19, 2014 at 8:09 PM, Jun Rao <jun...@gmail.com> wrote:
>
> > Did you set the replication factor to be more than 1?
> >
> > Thanks,
> >
> > Jun
> >
> > On Sat, Oct 18, 2014 at 2:32 AM, Yu Yang <yuyan...@gmail.com> wrote:
> >
> > > Hi all,
> > >
> > > We have a Kafka 0.8.1 cluster. We implemented a consumer for the topics
> > > on the cluster using the high-level consumer API. We observed that if
> > > the Kafka cluster went down and was rebooted while the consumer was
> > > running, the consumer would fail to read a few topic partitions due to
> > > a negative lag value. How should we handle disaster recovery without
> > > re-reading the already-processed messages?
> > >
> > > Thanks!
> > >
> > > -Yu
> > >
> >
>