Why have you configured only 32 tasks on 36 executors? Set the number of
tasks to at least 36 so every executor has a task to run.
Looks like your Python bolt takes some time to process each tuple. You may
need to tune that code or give it more threads (executors).
If you are not maxing out resource usage, set the number of tasks to 72 and
see if that helps.
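
For reference, here is a minimal sketch of where executors vs. tasks are set
when building the topology. It assumes the Storm 0.9.x Java API
(backtype.storm packages) and hypothetical component names; substitute your
actual spout and bolt instances:

import backtype.storm.Config;
import backtype.storm.StormSubmitter;
import backtype.storm.topology.IRichBolt;
import backtype.storm.topology.IRichSpout;
import backtype.storm.topology.TopologyBuilder;

public class ParallelismSketch {
    // kafkaSpout, formatFilterBolt and processBolt are placeholders for
    // your actual spout and (multilang) bolt instances.
    public static void buildAndSubmit(IRichSpout kafkaSpout,
                                      IRichBolt formatFilterBolt,
                                      IRichBolt processBolt) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();

        builder.setSpout("kafka_spout", kafkaSpout, 3);

        builder.setBolt("format_filter_bolt", formatFilterBolt, 1)
               .shuffleGrouping("kafka_spout");

        // Third argument is the parallelism hint (number of executors).
        // setNumTasks keeps tasks >= executors; try 36 first, then 72.
        builder.setBolt("process_bolt", processBolt, 36)
               .setNumTasks(72)
               .shuffleGrouping("format_filter_bolt");

        Config conf = new Config();
        conf.setMaxSpoutPending(250);

        StormSubmitter.submitTopology("my_topology", conf,
                                      builder.createTopology());
    }
}

Since the task count is fixed at submission time, setting 72 tasks up front
also lets you rebalance the running topology up to 72 executors later
without resubmitting.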

Srikanth

On Fri, Apr 24, 2015 at 2:16 PM, Sid <jack8danie...@gmail.com> wrote:

> My storm topology has python bolts using multilang support.
>
> Kafka_spout(java) -> format_filter_bolt -> process_bolt
>
> My storm cluster has three 32-core EC2 instances. format_filter_bolt has 1
> executor and 1 task, process_bolt has 36 executors and 32 tasks. I have max
> spout pending = 250.
>
> I observe that format_filter_bolt has an execute latency of 0.018 and a
> process latency of 7.585, while process_bolt has an execute latency of
> 0.008 and a process latency of 94664.242!
>
> These numbers already look strange. Comparing the two bolts, the one with
> the lower execute latency, handling fewer tuples, has an orders-of-magnitude
> larger process latency.
>
> Looking for suggestions on what I can do to improve the latencies without
> throwing more computation at it. Thanks
>
