When the producer has exhausted all of its retries, it will drop the message
on the floor. So if the broker is down for too long, there will be data loss.
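
To make the failure visible rather than silent, you can raise the retry
settings and catch the exception the sync producer throws once its retries
are exhausted. A minimal sketch against the 0.8 Java producer API (assuming
the sync producer surfaces exhausted retries as FailedToSendMessageException;
the broker list, topic, and retry values below are placeholders, not
recommendations):

import java.util.Properties;

import kafka.common.FailedToSendMessageException;
import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

Properties props = new Properties();
props.put("metadata.broker.list", "broker1:9092");  // placeholder broker
props.put("serializer.class", "kafka.serializer.StringEncoder");
props.put("producer.type", "sync");
props.put("request.required.acks", "1");
props.put("message.send.max.retries", "10");  // default is 3
props.put("retry.backoff.ms", "1000");        // default is 100

Producer<String, String> producer =
    new Producer<String, String>(new ProducerConfig(props));
try {
    producer.send(new KeyedMessage<String, String>("my-topic", "payload"));
} catch (FailedToSendMessageException e) {
    // All retries are exhausted; without this catch the message is gone.
    // Persist the payload somewhere durable so it can be resent later.
}

Even so, 10 retries at 1000 ms only covers roughly 10 seconds of downtime;
a longer outage will exhaust the retries again.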

Guozhang


On Thu, Jun 5, 2014 at 6:20 AM, Libo Yu <yu_l...@hotmail.com> wrote:

> I want to know why there will be message loss when brokers are down for
> too long.
> I've noticed message loss when brokers are restarted during publishing. It
> is a sync producer with request.required.acks set to 1.
>
> Libo
>
> > Date: Thu, 29 May 2014 20:11:48 -0700
> > Subject: Re: question about synchronous producer
> > From: wangg...@gmail.com
> > To: users@kafka.apache.org
> >
> > Libo,
> >
> > That is correct. You may want to increase retry.backoff.ms in this case.
> > In practice, if the brokers are down for too long, then data loss is
> > usually inevitable.
> >
> > Guozhang
> >
> >
> > On Thu, May 29, 2014 at 2:55 PM, Libo Yu <yu_l...@hotmail.com> wrote:
> >
> > > Hi team,
> > >
> > > Assume I am using a synchronous producer and it has the following
> > > default properties:
> > >
> > > message.send.max.retries = 3
> > > retry.backoff.ms = 100
> > >
> > > I use the Java API Producer.send(message) to send a message.
> > > If the brokers are shut down while send() is being called, what happens?
> > > Will send() retry 3 times with a 100 ms interval and then fail silently?
> > > If I don't want to lose any messages while the brokers are down, what
> > > should I do? Thanks.
> > >
> > > Libo
> > >
> > >
> > >
> > >
> >
> >
> >
> >
> > --
> > -- Guozhang
>
>
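
As for not losing messages until the brokers come back: the producer itself
will not wait forever, so the application has to hold on to failed sends and
replay them. A rough sketch of one way to do this (the queue, sleep interval,
and indefinite loop are illustrative choices, not a recommended design):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;

public class ReplayingSender {
    // Unsent messages wait here until the brokers are reachable again.
    private final BlockingQueue<KeyedMessage<String, String>> pending =
        new LinkedBlockingQueue<KeyedMessage<String, String>>();
    private final Producer<String, String> producer;

    public ReplayingSender(Producer<String, String> producer) {
        this.producer = producer;
    }

    public void enqueue(KeyedMessage<String, String> msg)
            throws InterruptedException {
        pending.put(msg);
    }

    public void drainLoop() throws InterruptedException {
        while (true) {
            KeyedMessage<String, String> msg = pending.take();
            try {
                producer.send(msg);
            } catch (Exception e) {
                // The producer gave up after its own retries. Re-queue the
                // message and back off; note that re-queuing at the tail
                // can reorder messages relative to later sends.
                pending.put(msg);
                Thread.sleep(5000);
            }
        }
    }
}

This only protects against broker downtime, not against the producer process
itself dying; for that, the pending messages would have to live on disk
rather than in memory.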



-- 
-- Guozhang
