Hi Sachin,

I'd guess those "few days of topology run" add up to about a week?
Your Kafka spout is probably lagging behind, and the messages at the
requested offsets have already been deleted from the Kafka topic
(log.retention.hours), causing the subsequent fetch requests to throw
`OffsetOutOfRange`.
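
If catching up from the oldest retained message is acceptable, storm-kafka
can recover from this on its own. A minimal sketch, assuming a storm-kafka
version whose KafkaConfig exposes these fields (the values shown are the
defaults, not taken from your config):

// Fall back to startOffsetTime when the requested offset has been deleted
// by retention; this fallback is what logs the "Updating offset" warnings.
spoutConfig.useStartOffsetTimeIfOffsetOutOfRange = true;
spoutConfig.startOffsetTime = kafka.api.OffsetRequest.EarliestTime();
// Upper bound on how far behind the spout may fall before skipping ahead.
spoutConfig.maxOffsetBehind = Long.MAX_VALUE;

Otherwise, increase the topic's retention (log.retention.hours) on the
broker side, or speed up consumption so the spout stops falling behind.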

Thanks,
Guy


On Mon, Nov 23, 2015 at 7:25 AM, Sachin Pasalkar <
sachin_pasal...@symantec.com> wrote:

> Can someone help us on this?
>
> From: Sachin Pasalkar <sachin_pasal...@symantec.com>
> Reply-To: "dev@storm.apache.org" <dev@storm.apache.org>
> Date: Friday, 20 November 2015 11:53 am
> To: "dev@storm.apache.org" <dev@storm.apache.org>
> Subject: Why am I getting OffsetOutOfRange: Updating offset from offset?
>
> Hi,
>
> We are developing an application where, after the topology has been
> running for a few days, we get continuous warning messages:
>
>
> 2015-11-20 05:05:42.226 s.k.KafkaUtils [WARN] Got fetch request with
> offset out of range: [7238824446]
>
> 2015-11-20 05:05:42.229 s.k.t.TridentKafkaEmitter [WARN] OffsetOutOfRange:
> Updating offset from offset = 7238824446 to offset = 7241183683
>
> 2015-11-20 05:05:43.207 s.k.KafkaUtils [WARN] Got fetch request with
> offset out of range: [7022945051]
>
> 2015-11-20 05:05:43.208 s.k.t.TridentKafkaEmitter [WARN] OffsetOutOfRange:
> Updating offset from offset = 7022945051 to offset = 7025309343
>
> 2015-11-20 05:05:44.260 s.k.KafkaUtils [WARN] Got fetch request with
> offset out of range: [7170559432]
>
> 2015-11-20 05:05:44.264 s.k.t.TridentKafkaEmitter [WARN] OffsetOutOfRange:
> Updating offset from offset = 7170559432 to offset = 7172920769
>
> 2015-11-20 05:05:45.332 s.k.KafkaUtils [WARN] Got fetch request with
> offset out of range: [7132495867]……
>
>
> At some point the topology stops processing messages altogether, and I
> need to rebalance it to get it running again.
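>
> For reference, the rebalance I run is roughly the line below (a sketch:
> "my-topology" stands in for our real topology name, and -w is the number
> of seconds to wait before workers are moved):
>
> storm rebalance my-topology -w 10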
>
>
> My spout config is:
>
> BrokerHosts brokers =
>     new ZkHosts((String) stormConfiguration.get(ZOOKEEPER_HOSTS));
> TridentKafkaConfig spoutConfig = new TridentKafkaConfig(brokers,
>     (String) stormConfiguration.get(KAFKA_INPUT_TOPIC));
>
> spoutConfig.scheme = getSpoutScheme(stormConfiguration);
> Boolean forceFromStart = (Boolean) stormConfiguration.get(FORCE_FROM_START);
>
> spoutConfig.ignoreZkOffsets = false;
> spoutConfig.fetchSizeBytes = stormConfiguration.getIntProperty(
>     KAFKA_CONSUMER_FETCH_SIZE_BYTE, KAFKA_CONSUMER_DEFAULT_FETCH_SIZE_BYTE);
> spoutConfig.bufferSizeBytes = stormConfiguration.getIntProperty(
>     KAFKA_CONSUMER_BUFFER_SIZE_BYTE, KAFKA_CONSUMER_DEFAULT_BUFFER_SIZE_BYTE);
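>
> (Aside: forceFromStart is read above but never applied in this snippet.
> If the intent was to honour it, something like the hypothetical line
> below would wire it into the spout config; we do not currently run this:
>
> spoutConfig.ignoreZkOffsets = (forceFromStart != null) && forceFromStart;
> )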
>
> As far as I know, the only thing we are doing wrong is that the topic has
> 12 partitions but we read it with only 1 spout; that's a limitation on
> our side (see the sketch after the log lines below). I am not sure why the
> topology halts, though. It just keeps printing the lines below and does
> nothing else:
>
>
> 2015-11-20 05:44:41.574 b.s.m.n.Server [INFO] Getting metrics for server
> on port 6700
>
> 2015-11-20 05:44:41.574 b.s.m.n.Client [INFO] Getting metrics for client
> connection to Netty-Client-b-bdata-xx.net/xxx.xx.xxx.xxx:6700
>
> 2015-11-20 05:44:41.574 b.s.m.n.Client [INFO] Getting metrics for client
> connection to Netty-Client-b-bdata-xx.net/xxx.xx.xxx.xxx:6709
>
> 2015-11-20 05:44:41.574 b.s.m.n.Client [INFO] Getting metrics for client
> connection to Netty-Client-b-bdata-xx.net/xxx.xx.xxx.xxx:6707
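>
> Separately, on the partition/parallelism point above: if we could raise
> the spout parallelism to match the 12 partitions, a sketch of what I mean
> (assuming an OpaqueTridentKafkaSpout; "kafka-spout" is just a stream id):
>
> OpaqueTridentKafkaSpout spout = new OpaqueTridentKafkaSpout(spoutConfig);
> topology.newStream("kafka-spout", spout).parallelismHint(12);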
>
>
> Thanks,
>
> Sachin
>
>
