Nathan,
Thank you very much for your explanation and time!
I've got it.
On Wed, May 20, 2015 at 4:19 PM, Nathan Leung wrote:
I'll use a somewhat canned example for illustrative purposes. Let's say
the spout has emitted 1000 tuples and they are sitting in the output queue,
and your bolt is the only bolt in the system.
If you have 2 executors, each tuple takes 5.6ms. This means that each bolt
executor can process 178.5 tuples / second.
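As a back-of-the-envelope check of that arithmetic (only the 5.6 ms figure
and the 1000 queued tuples come from the message; the class and the rest
are illustrative):

    public class QueueDrainEstimate {
        public static void main(String[] args) {
            double executeMsPerTuple = 5.6; // time one executor spends on a tuple
            int executors = 2;
            int queuedTuples = 1000;        // tuples waiting in the spout's output queue

            double perExecutorRate = 1000.0 / executeMsPerTuple; // ~178.5 tuples/s
            double combinedRate = executors * perExecutorRate;   // ~357 tuples/s
            double drainSeconds = queuedTuples / combinedRate;   // ~2.8 s

            // Every millisecond a tuple sits in that queue is counted in the
            // complete latency the spout reports for it.
            System.out.printf("per executor: %.1f tuples/s, queue drains in ~%.1f s%n",
                    perExecutorRate, drainSeconds);
        }
    }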
Nathan,
Process and execute latency are growing; does that mean we spend more time
processing each tuple because it spends more time in the bolt's queue?
I thought that "complete latency" and "process latency" should be
correlated. Am I right?
On Wed, May 20, 2015 at 2:10 PM, Nathan Leung wrote:
My point with increased throughput was that if you have items queued from
the spout waiting to be processed, that counts towards the complete latency
for the spout. If your bolts go through the tuples faster (and as you add
more they do; you have a 6x speedup from more bolts), then you will see the
complete latency decrease.
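(Viewed through Little's law, complete latency ~ tuples in flight /
throughput, so a 6x throughput gain on the same backlog cuts latency
roughly 6x. A minimal sketch; only the 6x factor comes from the message,
the absolute numbers are assumed:)

    public class LittlesLawSketch {
        public static void main(String[] args) {
            double tuplesInFlight = 1000.0; // queued + in-progress tuples (assumed)
            double throughput = 178.5;      // tuples/s with the original bolt count (assumed)

            double before = tuplesInFlight / throughput;      // ~5.6 s
            double after = tuplesInFlight / (6 * throughput); // ~0.93 s: same backlog, 6x speedup

            System.out.printf("complete latency: ~%.2f s before, ~%.2f s after%n",
                    before, after);
        }
    }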
Thank you, Jeffrey and Devang for your answers.
Jeffrey, since I use shuffle grouping, I think the network serialization
will remain, but there will be no network delays (to remove it there is the
localOrShuffle grouping). For all experiments I use only one worker, so that
does not explain why complete latency decreased.
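For reference, the two groupings differ by a single call when wiring the
topology (backtype.storm API of that era; the component ids are made up,
and the spout/bolt instances are passed in rather than invented):

    import backtype.storm.topology.IRichBolt;
    import backtype.storm.topology.IRichSpout;
    import backtype.storm.topology.TopologyBuilder;

    public class GroupingSketch {
        static TopologyBuilder wire(IRichSpout spoutA, IRichBolt boltB, IRichBolt boltC) {
            TopologyBuilder builder = new TopologyBuilder();
            builder.setSpout("spoutA", spoutA, 1);
            builder.setBolt("boltB", boltB, 2).shuffleGrouping("spoutA");
            // shuffle: a tuple may go to any boltC executor, even in another worker
            builder.setBolt("boltC", boltC, 4).shuffleGrouping("boltB");
            // the local-preferring variant mentioned above would instead be:
            // builder.setBolt("boltC", boltC, 4).localOrShuffleGrouping("boltB");
            return builder;
        }
    }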
Was the number of workers or number of ackers changed across your
experiments? What are the numbers you used?
When you have many executors, increasing the ackers reduces the complete
latency.
Thanks and Regards,
Devang
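A minimal sketch of how those two knobs are set when submitting a topology
(the values are arbitrary examples; TestWordSpout is just a stand-in spout
from storm-core's testing package):

    import backtype.storm.Config;
    import backtype.storm.StormSubmitter;
    import backtype.storm.testing.TestWordSpout;
    import backtype.storm.topology.TopologyBuilder;

    public class AckerSketch {
        public static void main(String[] args) throws Exception {
            TopologyBuilder builder = new TopologyBuilder();
            builder.setSpout("spout", new TestWordSpout(), 1);

            Config conf = new Config();
            conf.setNumWorkers(1); // one worker: all tuple transfer stays in-process
            conf.setNumAckers(4);  // acker executors; if unset it defaults to the worker count
            StormSubmitter.submitTopology("acker-sketch", conf, builder.createTopology());
        }
    }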
On 20 May 2015 03:15, "Jeffery Maass" wrote:
Maybe the difference has to do with where the executors were running. If
your entire topology is running within the same worker, the serialization
for the worker-to-worker networking layer is left out of the picture. I
suppose that would mean the complete latency could decrease.
Thanks, Nathan, for your answer,
but I'm afraid you understood me wrong: with executors increased by 32x,
each executor's throughput *increased* by 5x, yet complete latency dropped.
On Tue, May 19, 2015 at 5:16 PM, Nathan Leung wrote:
It depends on your application and the characteristics of the I/O. You
increased executors by 32x and each executor's throughput dropped by 5x, so
it makes sense that latency will drop.
On May 19, 2015 9:54 AM, "Dima Dragan" wrote:
Hi everyone,
I have found a strange behavior in topology metrics.
Let's say we have a 1-node, 2-core machine and a simple Storm topology:
Spout A -> Bolt B -> Bolt C
Bolt B splits each message into 320 parts and emits each part (shuffle
grouping) to Bolt C. Bolts B and C also make some read/write operations
against a DB.
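A sketch of what Bolt B's split-and-emit might look like (the class, the
field names, and the DB comment are placeholders, not Dima's actual code):

    import backtype.storm.topology.BasicOutputCollector;
    import backtype.storm.topology.OutputFieldsDeclarer;
    import backtype.storm.topology.base.BaseBasicBolt;
    import backtype.storm.tuple.Fields;
    import backtype.storm.tuple.Tuple;
    import backtype.storm.tuple.Values;

    public class SplitBolt extends BaseBasicBolt {
        private static final int PARTS = 320;

        @Override
        public void execute(Tuple input, BasicOutputCollector collector) {
            String message = input.getStringByField("message");
            // ... the read/write against the DB would happen around here ...
            for (int i = 0; i < PARTS; i++) {
                collector.emit(new Values(message, i)); // one output tuple per part
            }
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            declarer.declare(new Fields("message", "part"));
        }
    }

Each of the 320 parts must be acked before the original spout tuple's tree
completes, which ties the reported complete latency to how fast Bolt C and
the ackers keep up.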