GitHub user HeartSaVioR commented on the pull request:
https://github.com/apache/storm/pull/1379#issuecomment-215665798
I'm done with the functional tests.
To experiment with this behavior, I made an odd topology that modifies
ExclamationTopology so that the spout emits 1000 tuples and then sleeps 5
seconds in nextTuple():
https://gist.github.com/HeartSaVioR/9ebac9f7f8503e878b0009a37c2b686b
I also turned off backpressure so it doesn't affect the test, and set both
the receive buffer size and the send buffer size to 16384; a sketch of the
setup follows below.
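For reference, here is a minimal sketch of the setup. It is not the exact
code from the gist above; the class name and word list are illustrative:
```
import java.util.Map;
import java.util.Random;
import java.util.UUID;

import org.apache.storm.spout.SpoutOutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichSpout;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Values;
import org.apache.storm.utils.Utils;

// Spout that emits a burst of 1000 tuples and then blocks for 5 seconds,
// so most of a tuple's lifetime passes while the spout thread is asleep.
public class SleepingWordSpout extends BaseRichSpout {
    private SpoutOutputCollector collector;
    private final String[] words = {"nathan", "mike", "jackson", "golda", "bertels"};
    private final Random rand = new Random();

    @Override
    public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
        this.collector = collector;
    }

    @Override
    public void nextTuple() {
        for (int i = 0; i < 1000; i++) {
            // anchor each tuple with a message id so complete latency is tracked
            collector.emit(new Values(words[rand.nextInt(words.length)]),
                           UUID.randomUUID().toString());
        }
        Utils.sleep(5000); // the 5-sec pause that dominates the old complete latency
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("word"));
    }
}
```
and the topology config fragment for the test, assuming the standard Storm
1.x config keys for the settings described above:
```
Config conf = new Config();
conf.put(Config.TOPOLOGY_BACKPRESSURE_ENABLE, false);          // keep backpressure out of the measurement
conf.put(Config.TOPOLOGY_EXECUTOR_RECEIVE_BUFFER_SIZE, 16384); // receive buffer size
conf.put(Config.TOPOLOGY_EXECUTOR_SEND_BUFFER_SIZE, 16384);    // send buffer size
```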
> before patch
* complete latency
```
10:word __complete-latency {default=5211.189090909091}
11:word __complete-latency {default=5193.9646017699115}
12:word __complete-latency {default=5230.410909090909}
13:word __complete-latency {default=5188.668965517241}
14:word __complete-latency {default=5214.798181818182}
15:word __complete-latency {default=5204.954545454545}
16:word __complete-latency {default=5205.9745454545455}
17:word __complete-latency {default=5210.390909090909}
18:word __complete-latency {default=5223.48}
19:word __complete-latency {default=5182.4290909090905}
```
(The average complete latency is over 5 seconds, as I intended.)
* process latency for bolts
```
1:__acker __process-latency {exclaim2:__ack_ack=0.13774403470715835, exclaim1:__ack_ack=0.004287245444801715, word:__ack_init=0.004836109618484685}
2:__acker __process-latency {exclaim2:__ack_ack=0.16721132897603486, exclaim1:__ack_ack=0.0038503850385038503, word:__ack_init=0.0049261083743842365}
3:__acker __process-latency {exclaim2:__ack_ack=0.14852037967615858, exclaim1:__ack_ack=0.006379585326953748, word:__ack_init=0.002738225629791895}
5:exclaim1 __process-latency {word:default=0.31357254290171604}
6:exclaim1 __process-latency {word:default=0.29818181818181816}
7:exclaim1 __process-latency {word:default=0.27874774232390126}
8:exclaim2 __process-latency {exclaim1:default=0.24564586357039186}
9:exclaim2 __process-latency {exclaim1:default=0.23582848837209303}
```
> after patch
* complete latency
```
10:word __complete-latency {default=25.16}
11:word __complete-latency {default=34.832727272727276}
12:word __complete-latency {default=30.92}
13:word __complete-latency {default=24.403636363636362}
14:word __complete-latency {default=47.261818181818185}
15:word __complete-latency {default=34.67272727272727}
16:word __complete-latency {default=36.11090909090909}
17:word __complete-latency {default=22.966010733452595}
18:word __complete-latency {default=46.26989619377163}
19:word __complete-latency {default=24.95090909090909}
```
* process latency for bolts
```
1:__acker __process-latency {exclaim2:__ack_ack=0.15648854961832062, exclaim1:__ack_ack=0.003131524008350731, word:__ack_init=0.0065252854812398045}
2:__acker __process-latency {exclaim2:__ack_ack=0.19327731092436976, exclaim1:__ack_ack=0.005393743257820928, word:__ack_init=0.005910800644814616}
3:__acker __process-latency {exclaim2:__ack_ack=0.17433155080213902, exclaim1:__ack_ack=0.004909983633387889, word:__ack_init=0.002696871628910464}
5:exclaim1 __process-latency {word:default=0.3125}
6:exclaim1 __process-latency {word:default=0.363871635610766}
7:exclaim1 __process-latency {word:default=0.31440526001195457}
8:exclaim2 __process-latency {exclaim1:default=0.27243125904486254}
9:exclaim2 __process-latency {exclaim1:default=0.2510775862068966}
```
While the sum of the process latencies is still not close to the complete
latency (which makes sense, since process latency excludes time spent
waiting in queues, serde, transfer, and so on), applying the patch brings
the complete latency much closer to the time tuples actually spend in the
topology.
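To put rough numbers on that with the after-patch figures above: the
per-tuple bolt work sums to well under 1 ms (about 0.31 ms at exclaim1 +
0.27 ms at exclaim2 + roughly 0.16 ms at the acker, so around 0.74 ms),
while the reported complete latency is about 23~47 ms; the remaining gap is
queue wait, serde, and transfer. Before the patch, the reported value also
absorbed the 5-second sleep in nextTuple(), which is why it hovered around
5200 ms.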