I believe that an acker tracks an entire tuple tree, which might make this
"non-trivial". Maybe it could be used as an optimization if your topology
were entirely local or shuffle, but I'm not sure how common that is.
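(For context on what the acker tracks: it does not store the tree explicitly; it keeps a running XOR of the 64-bit ids of every tuple that is anchored or acked, and the tree is complete when the XOR returns to zero. A minimal Python sketch of that trick, with made-up tuple ids — not Storm's actual code:)

```python
import random

class AckerEntry:
    """Tracks one spout tuple's tree as a single XOR accumulator,
    the way Storm's acker does, instead of storing the whole tree."""
    def __init__(self):
        self.val = 0

    def update(self, tuple_id):
        # Each tuple id is XORed in when the tuple is anchored (emitted)
        # and XORed in again when it is acked; since x ^ x == 0, once
        # every emitted tuple has been acked the accumulator is zero.
        self.val ^= tuple_id

    def is_complete(self):
        return self.val == 0

entry = AckerEntry()
ids = [random.getrandbits(64) for _ in range(3)]
for t in ids:          # bolts anchor three child tuples
    entry.update(t)
assert not entry.is_complete()
for t in ids:          # each child tuple is eventually acked
    entry.update(t)
assert entry.is_complete()
```

The point is that "tracking the whole tree" is cheap in memory but still costs one ack message per tuple over the transport, which is where the network load comes from.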

Also, I thought 0mq automatically does micro-batching on its own, which
allows it to surpass the performance of native sockets in these types of
scenarios.
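(The details of 0mq's internal batching are opaque to me, but the general idea is just to amortize fixed per-send costs by flushing messages in groups. A toy sketch of the concept, not 0mq's actual implementation:)

```python
class MicroBatcher:
    """Toy illustration of micro-batching: buffer outgoing messages and
    hand them to the underlying transport in groups, so fixed per-send
    costs (syscalls, lock acquisitions) are paid once per batch."""
    def __init__(self, send_batch, max_batch=64):
        self.send_batch = send_batch   # callback standing in for the real transport
        self.max_batch = max_batch
        self.buf = []

    def send(self, msg):
        self.buf.append(msg)
        if len(self.buf) >= self.max_batch:
            self.flush()

    def flush(self):
        if self.buf:
            self.send_batch(self.buf)
            self.buf = []

batches = []
b = MicroBatcher(batches.append, max_batch=3)
for i in range(7):
    b.send(i)
b.flush()
# 7 messages leave as 3 transport-level sends: [0,1,2], [3,4,5], [6]
```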
On Dec 1, 2014 5:44 PM, "Vladi Feigin" <vladi...@gmail.com> wrote:

> Hi All,
>
> We use Storm 0.82. Our throughput is 400K messages per sec.
> From the Storm UI we calculate that the total latency of all spouts and
> bolts is ~3.5 min to process 10 min of data, but in reality it takes 13
> min! Obviously this creates huge backlogs.
> We don't have time-out failures at all, and we don't see any other
> exceptions. So our main suspect is Storm's acking mechanism, which uses a
> lot of network bandwidth.
> (BTW, if you have another opinion, please let me know.)
> We think the fact that all ack messages go via 0mq, even when the acker
> bolt runs in the same worker, causes this huge performance drop. An ack is
> sent per tuple (micro-batches are not supported), which is inefficient.
> As far as we know, there is no way to make the acker bolt work with local
> shuffle (as is possible for other bolts).
> We'd like to ask your opinion regarding a new proposed feature in Storm:
> support for local acking. That means if an acker runs locally in the same
> worker, send the ack messages via the local Disruptor queue (like
> localShuffle) rather than via 0mq.
>
> Does it make sense? What do you think?
>
> If you think the root cause of our problem is something else, please let
> us know.
> Thank you in advance,
> Vladi