Ok so that narrows down the original problem.

Here's another test to try:

#1
conf.setNumAckers((Integer) conf.get(Config.TOPOLOGY_WORKERS));
conf.setMaxSpoutPending(0);

#2
conf.setNumAckers((Integer) conf.get(Config.TOPOLOGY_WORKERS));
conf.setMaxSpoutPending(1000); (this one will throttle the input rate, so
it will take a bit longer to run.)
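For context, here is a fuller sketch of how those settings fit together (a hedged sketch only, assuming a Storm 0.9-era API; the worker count of 25 matches your cluster but is otherwise arbitrary):

```java
import backtype.storm.Config;

public class TopologyConfigSketch {
    public static Config build() {
        Config conf = new Config();
        conf.setNumWorkers(25);  // stores Config.TOPOLOGY_WORKERS in the map

        // One acker executor per worker; the cast is needed because
        // conf.get() returns Object while setNumAckers() takes an int.
        conf.setNumAckers((Integer) conf.get(Config.TOPOLOGY_WORKERS));

        // Test #1: no cap on in-flight (pending) tuples per spout task.
        conf.setMaxSpoutPending(0);
        // Test #2 would instead cap in-flight tuples at 1000:
        // conf.setMaxSpoutPending(1000);
        return conf;
    }
}
```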

Make sure you always call collector.ack() on the input tuple.
Check your fail count to see if there are any failed tuples; these will be
replayed.
Be sure to read
https://github.com/nathanmarz/storm/wiki/Guaranteeing-message-processing
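The ack/fail discipline above can be sketched as a bolt like this (a hypothetical example, not your topology's code; `process()` stands in for whatever work your bolts actually do):

```java
import java.util.Map;

import backtype.storm.task.OutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseRichBolt;
import backtype.storm.tuple.Tuple;

public class AckingBolt extends BaseRichBolt {
    private OutputCollector collector;

    @Override
    public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
    }

    @Override
    public void execute(Tuple input) {
        try {
            process(input);        // hypothetical business logic
            collector.ack(input);  // every input tuple must be acked...
        } catch (Exception e) {
            collector.fail(input); // ...or failed, so the spout replays it
        }
    }

    private void process(Tuple input) {
        // your work here
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // no output stream in this sketch
    }
}
```

If a bolt neither acks nor fails a tuple, the tuple eventually times out (Config.TOPOLOGY_MESSAGE_TIMEOUT_SECS) and counts against max spout pending in the meantime, which can stall the spouts.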

-Jason


On Thu, Apr 10, 2014 at 10:11 AM, Shobha <shobha.p...@gmail.com> wrote:

> Thanks Jason for your answer. We tried disabling the ackers, and it worked.
>
> Could you please explain why it behaved differently when acking was
> enabled? The Storm cluster has 25 acker threads for 25 workers.
>
> Do we need to try any different configurations?
>
> Regards,
> Shobha Rani Poli
>
>
> On Thu, Apr 10, 2014 at 3:41 AM, Jason Jackson <jasonj...@gmail.com> wrote:
>
>> does the problem still occur if you disable acking? conf.setNumAckers(0).
>>
>>
>> On Wed, Apr 9, 2014 at 9:48 AM, Shobha <shobha.p...@gmail.com> wrote:
>>
>>> Hi All,
>>> We have a Storm cluster with 5 nodes, 25 spouts, 250 bolts, and 25
>>> workers. Storm is consuming messages from a Kafka queue that holds 214M
>>> items. After consuming 181M items, the spouts stopped emitting. We
>>> killed the topology and redeployed it, and then the spouts started
>>> emitting the remaining items.
>>>
>>> We noticed that only a few spouts emitted items after killing and
>>> redeploying the topology.
>>>
>>> Has anyone faced this problem? If so, could you please suggest what
>>> might be the cause?
>>>
>>>
>>> Regards,
>>> Shobha Rani Poli
>>>
>>
>>
>
