Over how long a period do you see the complete latency decreasing? Does it 
stabilize at some point?

It’s typical for a topology to start out with relatively high latency and then 
speed up as the worker JVMs “warm up” (JIT compilation kicks in, caches fill, 
etc.). The warm-up period can last several minutes.

The complete latency metric should give you a good idea of what end-to-end 
latency is like. Just realize that the latency numbers in Storm UI are 
approximations due to sampling: Storm samples only a small percentage of tuples 
when calculating latencies. You can configure it to sample all tuples, but that 
will kill performance and should only be done for debugging purposes.
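For reference, the sampling rate is controlled by the topology.stats.sample.rate 
setting (it defaults to 0.05, i.e. roughly 5% of tuples). A sketch of raising it 
to sample everything for a debugging run — set it in storm.yaml or in the 
per-topology config map:

```yaml
# storm.yaml (or per-topology Config) — debugging only:
# sample every tuple for metrics instead of the default 5%
topology.stats.sample.rate: 1.0
```

Remember to set it back (or remove the override) before any performance 
measurement, since full sampling adds measurable overhead on every tuple.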

-Taylor


> On Jul 14, 2017, at 7:29 AM, preethini v <preethin...@gmail.com> wrote:
> 
> Hi,
> 
> I am running WordCountTopology with 3 worker nodes. The parallelism of spout, 
> split and count is 5, 8 and 12 respectively. I have enabled acking to measure 
> the complete latency of the topology.
> 
> I am considering complete latency as a measure of end-to-end latency.
> 
> Complete latency is the time from when a tuple is emitted by a Spout until 
> Spout.ack() is called for it. Thus it covers the tuple's processing time, the 
> time it spends in the internal input/output buffers, and the time until the 
> ack for the tuple is received by the Spout.
> 
> The stats from storm UI show that the complete latency for a topology keeps 
> decreasing with time. 
> 
> 1. Is this normal? 
> 2. If yes, What explains the continuous decreasing complete latency value? 
> 3. Is complete latency a good measure of end-to-end latency of a topology?
> 
> Thanks,
> Preethini
