Re: Decreasing value of Complete Latency in Storm UI

2017-07-19 Thread preethini v
Hi,

Thanks.

I see that the latency stabilizes over time. I ran the word-count topology with
3 worker nodes, and the latency stabilized after about 2 hours.

Is there any other way I can measure the end-to-end latency of a topology
other than the "complete latency" shown in the Storm UI?
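One common alternative (a sketch only, not part of Storm's API; the field name and functions here are hypothetical) is to attach an emit timestamp to each tuple as an extra field and compute the difference when the tuple reaches the final bolt. A toy standalone simulation of the idea:

```python
import time

# Hypothetical illustration: the spout tags each tuple with its emit time,
# and the last bolt in the chain computes end-to-end latency on arrival.
# (In a real topology the timestamp would travel as an extra tuple field.)

def emit(word):
    """Spout side: tag the tuple with its emit time (epoch seconds)."""
    return {"word": word, "emitted_at": time.time()}

def final_bolt(tuple_, latencies):
    """Sink side: record end-to-end latency for this tuple."""
    latencies.append(time.time() - tuple_["emitted_at"])

latencies = []
for w in ["the", "quick", "brown", "fox"]:
    t = emit(w)
    # ... tuple would flow through the split/count bolts here ...
    final_bolt(t, latencies)

avg_ms = 1000 * sum(latencies) / len(latencies)
print(f"measured tuples: {len(latencies)}, avg latency: {avg_ms:.3f} ms")
```

One caveat with this approach: if the spout and the final bolt run on different worker nodes, the measurement is only as good as the clock synchronization between those machines.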

Best,
Preethini






Re: Decreasing value of Complete Latency in Storm UI

2017-07-16 Thread Ambud Sharma
If I may add, it is also explained by the potential surge of tuples when the
topology starts; the topology will eventually reach an equilibrium at the
normal latency of your components.
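This startup effect can be illustrated with a toy queue model (standalone Python with made-up numbers, not Storm code): an initial backlog inflates the queueing delay of early tuples, and the per-window average latency decays toward the steady-state value as the backlog drains.

```python
# Toy model: tuples pile up in a burst at startup, then the topology
# catches up. Latency per tuple = queueing delay + fixed service time.
SERVICE_MS = 5.0          # steady-state processing time per tuple
backlog = 200             # tuples queued up before the workers warmed up

window_avgs = []
for window in range(5):
    latencies = []
    for _ in range(100):  # 100 tuples per reporting window
        queue_delay = backlog * SERVICE_MS * 0.01   # crude backlog penalty
        latencies.append(SERVICE_MS + queue_delay)
        if backlog > 0:
            backlog -= 1  # backlog drains as the topology catches up
    window_avgs.append(sum(latencies) / len(latencies))

# Average latency falls window over window, then flattens at SERVICE_MS.
print([round(a, 2) for a in window_avgs])
```

The first windows report inflated averages purely because of the startup surge; once the backlog is gone, the reported latency settles at the components' normal service time.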



Re: Decreasing value of Complete Latency in Storm UI

2017-07-14 Thread P. Taylor Goetz
Over how long a period do you see the complete latency decreasing? Does it 
stabilize at some point?

It’s typical for a topology to start out slow in terms of latency, then speed 
up as the worker JVMs “warm up.” The warm-up period can last several minutes.

The complete latency metric should give you a good idea of what end-to-end 
latency is like. Just realize that the latency numbers in Storm UI are 
approximations due to sampling: Storm samples a small percentage of tuples for 
calculating latencies. You can configure it to sample all tuples, but that will 
kill performance and should only be done for debugging purposes.
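The sample rate is controlled by the `topology.stats.sample.rate` config (default 0.05, i.e. 5% of tuples). How well a 5% sample approximates the true average can be sketched with a standalone simulation (plain Python, made-up latencies; per-tuple random sampling here is a simplified stand-in for Storm's actual sampling mechanism):

```python
import random

random.seed(42)

# Fake per-tuple latencies in milliseconds.
latencies = [random.gauss(40, 8) for _ in range(100_000)]

SAMPLE_RATE = 0.05  # Storm's default topology.stats.sample.rate

# Keep roughly 5% of tuples, as Storm does for its latency stats.
sampled = [x for x in latencies if random.random() < SAMPLE_RATE]

true_avg = sum(latencies) / len(latencies)
sampled_avg = sum(sampled) / len(sampled)
print(f"true avg: {true_avg:.2f} ms, sampled avg: {sampled_avg:.2f} ms "
      f"({len(sampled)} of {len(latencies)} tuples)")
```

With a reasonable tuple volume the sampled average lands very close to the true one, which is why the UI numbers are trustworthy as approximations even at a 5% sample rate.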

-Taylor





Decreasing value of Complete Latency in Storm UI

2017-07-14 Thread preethini v
Hi,

I am running WordCountTopology with 3 worker nodes. The parallelism of the
spout, the split bolt, and the count bolt is 5, 8, and 12, respectively. I have
enabled acking to measure the complete latency of the topology.

I am considering complete latency as a measure of end-to-end latency.

Complete latency is the time from when a tuple is emitted by a spout until
Spout.ack() is called for it. Thus, it includes the tuple's processing time,
the time it spends in internal input/output buffers, and the time until the
ack for the tuple is received by the spout.

The stats in the Storm UI show that the complete latency for a topology keeps
decreasing over time.

1. Is this normal?
2. If yes, what explains the continuously decreasing complete latency value?
3. Is complete latency a good measure of the end-to-end latency of a topology?

Thanks,
Preethini
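The emit-to-ack bookkeeping that "complete latency" describes can be sketched in a few lines (a hypothetical illustration in Python, not Storm's internals): the spout remembers the emit time of each in-flight message id and computes the latency when the ack for that id arrives.

```python
import time

# Sketch of complete-latency bookkeeping (hypothetical, not Storm code):
# the spout records the emit time per anchored message id, and computes
# the emit-to-ack latency when ack() is called for that id.
class LatencyTrackingSpout:
    def __init__(self):
        self.inflight = {}            # message id -> emit timestamp
        self.complete_latencies = []  # seconds, one entry per acked tuple

    def emit(self, msg_id):
        self.inflight[msg_id] = time.time()

    def ack(self, msg_id):
        emitted = self.inflight.pop(msg_id)
        self.complete_latencies.append(time.time() - emitted)

spout = LatencyTrackingSpout()
spout.emit("tuple-1")
# ... tuple is processed by the split/count bolts, then fully acked ...
spout.ack("tuple-1")
print(f"complete latency: {spout.complete_latencies[0] * 1000:.3f} ms")
```

Note that the ack only fires once every tuple in the anchored tree has been processed, which is why complete latency includes downstream queueing and processing, not just the spout's own work.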