Hi Georgy,

Thanks for the reply. I realized the exception was coming from my code, and I
have resolved the problem.

What I found is that even when there is no message in Kafka to be read,
KafkaSpout keeps emitting a null or empty string field. I take that emitted
value and parse it in my code, and that is where the exception is thrown. My
question now is: why would KafkaSpout emit null or empty values when there is
no data in Kafka to be read?
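In case it helps anyone else, here is a rough sketch of the kind of null/empty
guard that avoids the exception in the bolt. The comma split, the field index,
and the class name are placeholders for illustration, not my actual parsing
logic:

import backtype.storm.topology.BasicOutputCollector;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseBasicBolt;
import backtype.storm.tuple.Tuple;

public class GuardedParseBolt extends BaseBasicBolt {

    @Override
    public void execute(Tuple tuple, BasicOutputCollector collector) {
        // KafkaSpout emits a single field; guard against null/empty payloads
        String raw = tuple.getString(0);
        if (raw == null || raw.trim().isEmpty()) {
            return; // nothing to parse; tuple is still acked on return
        }

        // Placeholder parsing: split and check the length before indexing,
        // so a short record cannot cause ArrayIndexOutOfBoundsException
        String[] parts = raw.split(",");
        if (parts.length < 4) {
            return;
        }
        // ... process parts[0] .. parts[3] here ...
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // no output streams declared in this sketch
    }
}

Since this extends BaseBasicBolt, the tuple is acked automatically when
execute() returns, so simply returning on an empty payload is safe.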

Thanks.

--
Kushan Maskey



On Mon, Aug 18, 2014 at 9:39 PM, Georgy Abraham <itsmegeo...@gmail.com>
wrote:

> From the error message, the ArrayIndexOutOfBoundsException is coming from
> your code. Maybe you missed something? You are using the StormSubmitter
> class to run it on the cluster, right?
> I haven't tried with a different Curator version, so I don't know about that.
> ------------------------------
> From: Kushan Maskey
> Sent: 14-08-2014 PM 10:03
> To: user@storm.incubator.apache.org
> Subject: java.lang.ArrayIndexOutOfBoundsException: 3 at
> backtype.storm.utils.DisruptorQueue.consumeBatchToCursor
>
>
> I am getting this error message in the Storm UI. The topology works fine on
> LocalCluster.
>
>
> java.lang.RuntimeException: java.lang.ArrayIndexOutOfBoundsException: 3
>     at backtype.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:128)
>     at backtype.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:99)
>     at backtype.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:80)
>     at backtype.storm.daemon.executor$fn__5641$fn__5653$fn__5700.invoke(executor.clj:746)
>     at backtype.storm.util$async_loop$fn__457.invoke(util.clj:431)
>     at clojure.lang.AFn.run(AFn.java:24)
>     at java.lang.Thread.run(Thread.java:744)
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 3
>     at <my package>.method(<My Class>.java:135)
>     at <My Class>.method(<MyClass>.java:83)
>     at <MyBolt>.execute(<MyBolt>.java:56)
>     at backtype.storm.topology.BasicBoltExecutor.execute(BasicBoltExecutor.java:50)
>     at backtype.storm.daemon.executor$fn__5641$tuple_action_fn__5643.invoke(executor.clj:631)
>     at backtype.storm.daemon.executor$mk_task_receiver$fn__5564.invoke(executor.clj:399)
>     at backtype.storm.disruptor$clojure_handler$reify__745.onEvent(disruptor.clj:58)
>     at backtype.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:125)
>     ... 6 more
>
>
> I am wondering if it has to do with the Curator version, because the Storm
> distribution comes with Curator 2.4.0 and I think we have to use Curator
> 2.5.0.
>
> I am using Storm 0.9.2 with kafka_2.10-0.8.1.1 and ZooKeeper 3.4.5.
>
> --
> Kushan Maskey
> 817.403.7500
>
