Hi Joel,

The error offset 206845418 didn't change. The only thing that changed was
the correlation id, which kept incrementing.

The broker is the follower, and I saw similar error messages for other
topics that this broker is a follower for. As indicated by the log, the
request is coming from a consumer, not a follower. One thing I don't quite
understand is that consumer requests for the topic (test1) should go to the
leader, not the follower, so why were consumer requests hitting this
broker? The other issue I noticed is that the replica fetcher threads on
the follower didn't fetch any data at all from the leader: the log file
size on the follower didn't grow for several hours.
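For reference, one way to double-check which broker is currently the leader for [test1,0] (and which replicas are in the ISR) is the topic describe tool that ships with Kafka. The ZooKeeper address below is a placeholder; substitute your own ensemble:

```shell
# Show partition assignment, current leader, replicas, and ISR for test1.
# localhost:2181 is a placeholder for your ZooKeeper connect string.
kafka-topics.sh --describe --topic test1 --zookeeper localhost:2181
```

Comparing the reported leader against broker 15 would confirm whether consumers with stale metadata are fetching from the wrong broker.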

On Sat, May 23, 2015 at 12:40 AM, Joel Koshy <jjkosh...@gmail.com> wrote:

> When you say "keeps getting below exception" I'm assuming that the
> error offset (206845418) keeps changing - right? We saw a similar
> issue in the past and it turned out to be due to a NIC issue - i.e.,
> it negotiated at a low speed. So the replica fetcher couldn't keep up
> with the leader. i.e., while it caught up within the first segment the
> leader's log would roll (i.e., the segment would get deleted) and we
> would see the out of range error. Is this broker a follower for other
> partitions? Do those partitions show up in these error messages?
>
> On Fri, May 22, 2015 at 03:11:09PM +0800, tao xiao wrote:
> > Hi team,
> >
> > One of the brokers keeps getting below exception.
> >
> > [2015-05-21 23:56:52,687] ERROR [Replica Manager on Broker 15]: Error
> when
> > processing fetch request for partition [test1,0] offset 206845418 from
> > consumer with correlation id 93748260. Possible cause: Request for offset
> > 206845418 but we only have log segments in the range 207804287 to
> > 207804287. (kafka.server.ReplicaManager)
> > This is the follower broker of topic test1 and ISR of that topic has
> only 1
> > broker left right now. Just wanted to know what cause this issue and how
> I
> > can prevent it?
> >
> > --
> > Regards,
> > Tao
>
>


-- 
Regards,
Tao
