Neha,

Thanks for the reply. I'm using the high-level consumer. By the way, I'm
on Kafka 0.7.2 (we built it with Scala 2.10); the consumer is using
default values, with a high ZK timeout value.

As far as I know, my consumers didn't restart; they're running on services
that were not restarted (unless the consumer itself reconnected after
some time).

I don't know if it's part of the reason, but some of my consumers are at
remote sites; they have high latency and experience ZK timeouts here and
there. I have ZK observers at the remote sites with rather high timeout
values, yet they still disconnect from the main site from time to time due
to timeouts.
Because of these ZK timeouts, I've noticed the consumers fail to write
their offsets.
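
For reference, the settings involved here sit together in the consumer
config. This is only a sketch with hypothetical values; the property names
are from the 0.7-era consumer as I recall them, so treat the exact keys as
assumptions and verify against the docs for your build:

```properties
# Hypothetical consumer config illustrating the timeout/offset interaction.
zk.connect=main-site-zk:2181        # hypothetical ZK ensemble address
groupid=remote-site-consumer        # hypothetical consumer group name
zk.sessiontimeout.ms=30000          # the "high ZK timeout" mentioned above
zk.connectiontimeout.ms=30000
autocommit.interval.ms=10000        # offsets are committed to ZK periodically;
                                    # a ZK session loss makes these writes fail
```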


PS: Sorry for the previous spam; my mail client went crazy, and by the
time I realized it, it was too late.

Kindly,

Nicolas

-----Original Message-----
From: Neha Narkhede [mailto:neha.narkh...@gmail.com] 
Sent: Monday, March 11, 2013 23:52
To: users@kafka.apache.org
Subject: Re: OffsetOutOfRangeException with 0 retention

Nicolas,

It seems that you started a consumer from the earliest offset, shut it
down for a long time, and then tried restarting it. At that point, you will
see OffsetOutOfRange exceptions, since the offset your consumer is trying
to fetch has been garbage collected from the server (because it is too
old). If you are using the high level consumer
(ZookeeperConsumerConnector), the consumer will automatically reset the
offset to the earliest or latest, depending on the autooffset.reset config
value.
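
For concreteness, that reset policy is a single line in the consumer
config. A minimal sketch, assuming the 0.7-era value keywords "smallest"
and "largest" (check the docs for your exact version):

```properties
# Reset behaviour when the requested offset no longer exists on the broker:
#   smallest -> restart from the earliest offset still retained
#   largest  -> skip ahead to the latest offset
autooffset.reset=smallest
```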

Which consumer are you using in this test?

Thanks,
Neha


On Mon, Mar 11, 2013 at 2:12 AM, Nicolas Berthet
<nicolasbert...@maaii.com>wrote:

> Hi,
>
>
>
> I'm currently seeing a lot of OffsetOutOfRangeException in my server 
> logs. It's not something that appeared recently; I simply hadn't used 
> Kafka before. I tried to find information on the mailing list, but 
> nothing seems to match my case.
>
>
>
> ERROR error when processing request FetchRequest(topic:test-topic, part:0 offset:3004960 maxSize:1048576) (kafka.server.KafkaRequestHandlers)
>
> kafka.common.OffsetOutOfRangeException: offset 3004960 is out of range
>
>
>
> I understand that, at startup, consumers will ask for a MAX_VALUE 
> offset to trigger this exception and detect the correct offset, right?
>
>
>
> In my case, it happens far too often (much more often than the number 
> of consumer connections would explain), and I also noticed it seems to 
> happen particularly for topics with a "0" retention. Did anybody else 
> suffer from the same symptoms?
>
>
>
> Although it doesn't seem critical (everything appears to work), it's 
> probably far from optimal, and the log is full of these errors.
>
>
>
> Regards,
>
>
>
> Nicolas
>
>
