Hi All,

I have a Kafka-Storm topology that ingests events published to Kafka and processes that data.

Apart from some latency, everything has been working well. But recently I ran into an issue that I haven't been able to solve yet.

I publish events from Logstash to Kafka, and the Storm topology subscribes to them for further processing. I can see that the source record count and the number of events processed by Storm differ by a significant amount: out of roughly 200 million events to be processed, about 10 million are getting lost, since Storm only shows acknowledgements for 190 million events.
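For reference, my wiring looks roughly like the simplified sketch below (this assumes storm-kafka-client on Storm 2.x; the broker address, topic name, consumer group, and the ProcessingBolt are placeholders rather than my actual names). The point I'm trying to verify is whether at-least-once processing is configured end to end, since tuples that fail or time out without an explicit ack would show up exactly as this kind of count mismatch:

import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.storm.Config;
import org.apache.storm.StormSubmitter;
import org.apache.storm.kafka.spout.KafkaSpout;
import org.apache.storm.kafka.spout.KafkaSpoutConfig;
import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.tuple.Tuple;

public class KafkaIngestTopology {

    public static void main(String[] args) throws Exception {
        // Placeholder broker, topic, and consumer group for illustration only.
        KafkaSpoutConfig<String, String> spoutConfig =
            KafkaSpoutConfig.builder("kafka-broker:9092", "events-topic")
                .setProp(ConsumerConfig.GROUP_ID_CONFIG, "storm-ingest-group")
                // At-least-once: offsets are committed only after the tuple tree is acked,
                // so failed or timed-out tuples are replayed instead of silently dropped.
                .setProcessingGuarantee(KafkaSpoutConfig.ProcessingGuarantee.AT_LEAST_ONCE)
                .build();

        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("kafka-spout", new KafkaSpout<>(spoutConfig), 4);
        builder.setBolt("processing-bolt", new ProcessingBolt(), 8)
               .shuffleGrouping("kafka-spout");

        Config conf = new Config();
        conf.setNumAckers(2);            // acker tasks must be > 0 for replay to work
        conf.setMessageTimeoutSecs(120); // tuples not acked within this window are replayed
        conf.setMaxSpoutPending(5000);   // cap in-flight tuples so timeouts don't cascade

        StormSubmitter.submitTopology("kafka-ingest", conf, builder.createTopology());
    }

    /** Stand-in for the real processing bolt; the key point is the explicit ack. */
    public static class ProcessingBolt extends BaseRichBolt {
        private OutputCollector collector;

        @Override
        public void prepare(Map<String, Object> topoConf, TopologyContext context,
                            OutputCollector collector) {
            this.collector = collector;
        }

        @Override
        public void execute(Tuple input) {
            // ... actual processing would go here ...
            collector.ack(input); // without this ack, tuples time out and look "lost"
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            // no downstream stream declared in this sketch
        }
    }
}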

I'm stuck on this issue and looking for expert advice.

Thanks!
