It is really hard to tell without more information.  Off the top of my head it 
might have something to do with the system time on different hosts.  Getting 
the current time in milliseconds is full of issues, especially with leap 
seconds etc., but it is even more problematic between machines because the time 
is not guaranteed to be synced very closely.  That would be my first guess.  If 
they are all on the same machine (you are not switching hosts), then my next 
guess would be a bug in the code somewhere, or a misinterpretation of the 
results.
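Just to put rough numbers on that first guess (these are completely made up, 
only to show how the arithmetic goes wrong when the two stamps come from 
different clocks):

// Hypothetical values only -- not measurements from your topology.
long trueLatencyMs = 3;    // what the tuple actually took
long clockSkewMs   = 101;  // how far the "end" host's clock runs ahead of the "start" host's
long measuredMs    = trueLatencyMs + clockSkewMs;  // the delta you would compute: 104 ms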
Do you have a reproducible use case that you can share?

- Bobby


On Monday, July 24, 2017, 10:13:59 AM CDT, preethini v <[email protected]> 
wrote:

Hi,
I measure the latency of a Storm topology in the two ways below, and I see a 
huge difference in the values.
Approach 1: attach a start time to every tuple, note the end time for that 
tuple in ack(), and calculate the delta between the start and end times (a 
rough sketch of this is further down).
Latency value is ~ 104 ms.
Approach 2: use the "complete latency" value reported in the Storm UI.
Latency value is ~ 2-3 ms.
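
Roughly, Approach 1 looks like the sketch below. It is simplified and the class 
and field names are just placeholders; I keep the start time keyed by the 
tuple's message id so ack() can look it up, and both timestamps come from the 
JVM running the spout executor.

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.storm.spout.SpoutOutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichSpout;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Values;

// Placeholder spout: emits one word per tuple and measures per-tuple latency
// as (ack time - emit time) in wall-clock milliseconds.
public class TimedWordSpout extends BaseRichSpout {
    private SpoutOutputCollector collector;
    private final Map<Object, Long> startTimes = new ConcurrentHashMap<>();

    @Override
    public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
        this.collector = collector;
    }

    @Override
    public void nextTuple() {
        Object msgId = UUID.randomUUID().toString();
        startTimes.put(msgId, System.currentTimeMillis());  // start time for this tuple
        collector.emit(new Values("hello"), msgId);          // anchored, so ack()/fail() fire
    }

    @Override
    public void ack(Object msgId) {
        Long start = startTimes.remove(msgId);
        if (start != null) {
            long latencyMs = System.currentTimeMillis() - start;  // Approach 1 latency
            System.out.println("tuple latency: " + latencyMs + " ms");
        }
    }

    @Override
    public void fail(Object msgId) {
        startTimes.remove(msgId);
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("word"));
    }
}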
Could someone please explain why there is such a huge difference between the 
two latency calculations? If it is not done on a timestamp basis, how does 
Storm's internal metrics system calculate the complete latency?

Thanks,
Preethini
