Re: Decreasing Complete latency with growing number of executors

2015-05-20 Thread Dima Dragan
Nathan, thank you very much for your explanation and time! I've got it. On Wed, May 20, 2015 at 4:19 PM, Nathan Leung wrote: > I'll use a somewhat canned example for illustrative purposes. …

Re: Decreasing Complete latency with growing number of executors

2015-05-20 Thread Nathan Leung
I'll use a somewhat canned example for illustrative purposes. Let's say the spout has emitted 1000 tuples and they are sitting in the output queue, and your bolt is the only bolt in the system. If you have 2 executors and each tuple takes 5.6 ms, each bolt can process 178.5 tuples / second …
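
To make the arithmetic concrete, here is a back-of-envelope sketch in Java. It is not Storm's internal accounting, just the queue-drain model implied by Nathan's numbers (1000 queued tuples, 5.6 ms per tuple): the last queued tuple's complete latency is dominated by how long it waits for the queue to drain, which shrinks as executors are added.

    public class QueueDrainSketch {
        public static void main(String[] args) {
            int queuedTuples = 1000;  // tuples sitting in the spout's output queue
            double perTupleMs = 5.6;  // execute time per tuple, from the example
            for (int executors : new int[] {2, 4, 8, 16, 64}) {
                double ratePerBolt = 1000.0 / perTupleMs;  // ~178.5 tuples/second each
                double lastTupleWaitMs = (queuedTuples / (double) executors) * perTupleMs;
                System.out.printf("executors=%d  rate/bolt=%.1f/s  last tuple waits ~%.0f ms%n",
                        executors, ratePerBolt, lastTupleWaitMs);
            }
        }
    }

With 2 executors the last queued tuple waits roughly 2800 ms; with 64 it waits roughly 88 ms, even though the per-tuple execute time never changed.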

Re: Decreasing Complete latency with growing number of executors

2015-05-20 Thread Dima Dragan
Nathan, process and execute latency are growing; does that mean we spend more time processing each tuple because it spends more time in the bolt queue? I thought that "Complete latency" and "Process latency" should be correlated. Am I right? On Wed, May 20, 2015 at 2:10 PM, Nathan Leung wrote: …

Re: Decreasing Complete latency with growing number of executors

2015-05-20 Thread Nathan Leung
My point with increased throughput was that if you have items queued from the spout waiting to be processed, that counts towards the complete latency for the spout. If your bolts go through the tuples faster (and as you add more they do; you have a 6x speedup from more bolts) then you will see the complete latency …
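
Restated as a rough relation (my gloss, not Nathan's exact wording): complete latency ≈ queue wait + execute latency. If adding bolts raises aggregate throughput from R to roughly 6R, the queue-wait term shrinks by about 6x, so complete latency can fall even while the per-tuple process/execute latency rises.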

Re: Decreasing Complete latency with growing number of executors

2015-05-20 Thread Dima Dragan
Thank you, Jeffery and Devang, for your answers. Jeffery, since I use shuffle grouping, I think serialization will remain, but there will be no network delays (to remove those there is localOrShuffle grouping). For all experiments I use only one worker, so it does not explain why the complete latency …
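
For reference, the grouping mentioned above is chosen when wiring the topology. A minimal wiring excerpt (Storm 0.9-era API; the component names and the SpoutA/BoltB/BoltC classes are hypothetical placeholders for the ones in this thread):

    import backtype.storm.topology.TopologyBuilder;

    TopologyBuilder builder = new TopologyBuilder();
    builder.setSpout("spoutA", new SpoutA());
    builder.setBolt("boltB", new BoltB(), 1).shuffleGrouping("spoutA");
    // localOrShuffleGrouping prefers target executors in the same worker
    // process, skipping the network hop when a local executor exists:
    builder.setBolt("boltC", new BoltC(), 32).localOrShuffleGrouping("boltB");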

Re: Decreasing Complete latency with growing number of executors

2015-05-19 Thread Devang Shah
Was the number of workers or number of ackers changed across your experiments? What are the numbers you used? When you have many executors, increasing the ackers reduces the complete latency. Thanks and Regards, Devang On 20 May 2015 03:15, "Jeffery Maass" wrote: > Maybe the difference has to …
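
Both knobs Devang asks about are set on the topology Config before submission; a short sketch with illustrative values (the counts here are assumptions, not the thread's actual settings):

    import backtype.storm.Config;

    Config conf = new Config();
    conf.setNumWorkers(1);  // Dima's experiments ran in a single worker
    conf.setNumAckers(4);   // extra acker executors can shorten ack turnaround,
                            // which shows up directly in complete latency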

Re: Decreasing Complete latency with growing number of executors

2015-05-19 Thread Jeffery Maass
Maybe the difference has to do with where the executors were running. If your entire topology is running within the same worker, it would mean that serialization for the worker-to-worker networking layer is left out of the picture. I suppose that would mean the complete latency could decrease.

Re: Decreasing Complete latency with growing number of executors

2015-05-19 Thread Dima Dragan
Thanks, Nathan, for your answer, but I'm afraid you understood me wrong: with executors increased by 32x, each executor's throughput *increased* by 5x, but complete latency dropped. On Tue, May 19, 2015 at 5:16 PM, Nathan Leung wrote: > It depends on your application and the characteristics …

Re: Decreasing Complete latency with growing number of executors

2015-05-19 Thread Nathan Leung
It depends on your application and the characteristics of the I/O. You increased executors by 32x and each executor's throughput dropped by 5x, so it makes sense that latency will drop. On May 19, 2015 9:54 AM, "Dima Dragan" wrote: > Hi everyone, > > I have found a strange behavior in topology metrics. …

Decreasing Complete latency with growing number of executors

2015-05-19 Thread Dima Dragan
Hi everyone, I have found a strange behavior in topology metrics. Let's say we have a 1-node, 2-core machine and a simple Storm topology: Spout A -> Bolt B -> Bolt C. Bolt B splits a message into 320 parts and emits each (shuffle grouping) to Bolt C. Bolts B and C also make some read/write operations to a db …
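
For readers reconstructing the setup, Bolt B's split step might look roughly like the sketch below (Storm 0.9-era API; the field names and tuple layout are my assumptions, and the DB reads/writes are elided):

    import backtype.storm.topology.BasicOutputCollector;
    import backtype.storm.topology.OutputFieldsDeclarer;
    import backtype.storm.topology.base.BaseBasicBolt;
    import backtype.storm.tuple.Fields;
    import backtype.storm.tuple.Tuple;
    import backtype.storm.tuple.Values;

    public class BoltB extends BaseBasicBolt {
        private static final int PARTS = 320;

        @Override
        public void execute(Tuple input, BasicOutputCollector collector) {
            String message = input.getString(0);
            // Emit 320 parts per incoming message; the topology wiring
            // shuffle-groups each emitted tuple to an executor of Bolt C.
            for (int part = 0; part < PARTS; part++) {
                collector.emit(new Values(message, part));
            }
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            declarer.declare(new Fields("message", "part"));
        }
    }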