Hi experts,
My Storm topology reads data from Kafka, and Kafka is just a single node.
The topology's spout is very slow, and I found this exception in Kafka's log:

[2014-04-09 14:58:53,729] ERROR error when processing request FetchRequest(topic:topic.nginx, part:0 offset:948810259194 maxSize:1048576) (kafka.server.KafkaRequestHandlers)
kafka.common.OffsetOutOfRangeException: offset 948810259194 is out of range
        at kafka.log.Log$.findRange(Log.scala:46)
        at kafka.log.Log.read(Log.scala:264)
        at kafka.server.KafkaRequestHandlers.kafka$server$KafkaRequestHandlers$$readMessageSet(KafkaRequestHandlers.scala:112)
        at kafka.server.KafkaRequestHandlers$$anonfun$2.apply(KafkaRequestHandlers.scala:101)
        at kafka.server.KafkaRequestHandlers$$anonfun$2.apply(KafkaRequestHandlers.scala:100)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:206)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:206)
        at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:34)
        at scala.collection.mutable.ArrayOps.foreach(ArrayOps.scala:34)
        at scala.collection.TraversableLike$class.map(TraversableLike.scala:206)
        at scala.collection.mutable.ArrayOps.map(ArrayOps.scala:34)
        at kafka.server.KafkaRequestHandlers.handleMultiFetchRequest(KafkaRequestHandlers.scala:100)
        at kafka.server.KafkaRequestHandlers$$anonfun$handlerFor$3.apply(KafkaRequestHandlers.scala:40)
        at kafka.server.KafkaRequestHandlers$$anonfun$handlerFor$3.apply(KafkaRequestHandlers.scala:40)
        at kafka.network.Processor.handle(SocketServer.scala:296)
        at kafka.network.Processor.read(SocketServer.scala:319)
        at kafka.network.Processor.run(SocketServer.scala:214)
        at java.lang.Thread.run(Thread.java:662)

Does this mean Kafka is holding too much data, or is it something else?
Is this a bug?
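
To check whether the offset the spout is requesting is actually still on the broker, I put together a small sketch that asks the broker for the valid offset range of the partition. This is only a sketch assuming the Kafka 0.7-era SimpleConsumer Java API (which this stack trace appears to come from); "kafka-host" and port 9092 are placeholders for my broker:

import kafka.javaapi.consumer.SimpleConsumer;

public class CheckOffsets {
    public static void main(String[] args) {
        // Connect to the single Kafka broker (placeholder host/port,
        // 10s socket timeout, 64 KB buffer).
        SimpleConsumer consumer =
                new SimpleConsumer("kafka-host", 9092, 10000, 64 * 1024);
        try {
            String topic = "topic.nginx";
            int partition = 0;

            // -2L is Kafka's sentinel "time" for the earliest offset still
            // on disk, -1L for the latest offset.
            long[] earliest = consumer.getOffsetsBefore(topic, partition, -2L, 1);
            long[] latest   = consumer.getOffsetsBefore(topic, partition, -1L, 1);

            System.out.println("valid offset range: "
                    + earliest[0] + " .. " + latest[0]);
            // If the offset the spout requests (948810259194 in the log above)
            // falls outside this range, the broker throws
            // OffsetOutOfRangeException exactly as in the stack trace.
        } finally {
            consumer.close();
        }
    }
}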

Thanks a lot.
