Github user HeartSaVioR commented on the issue:
https://github.com/apache/storm/pull/2203
@ptgoetz @revans2
Here's my result on the performance test:
CLI option: `org.apache.storm.starter.ThroughputVsLatency 50000 -c topology.max.spout.pending=5000`, so it utilizes 4 workers by default.
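For reference, the full submission looks roughly like the sketch below (the storm-starter jar path is only a placeholder; adjust it to your build):
```
# rough sketch; 50000 is the target rate (tuples/sec), the default parallelism gives 4 workers
bin/storm jar examples/storm-starter/target/storm-starter-*.jar \
  org.apache.storm.starter.ThroughputVsLatency 50000 \
  -c topology.max.spout.pending=5000
```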
> 1.2.0-SNAPSHOT
```
uptime: 210 acked: 1,503,840 acked/sec: 50,128.00 failed: 0 99%: 34,734,079 99.9%: 48,103,423 min: 6,541,312 max: 78,381,055 mean: 13,820,460.15 stddev: 4,842,731.32 user: 173,260 sys: 31,980 gc: 2,583 mem: 669.23
uptime: 240 acked: 1,503,740 acked/sec: 50,124.67 failed: 0 99%: 33,456,127 99.9%: 53,346,303 min: 6,479,872 max: 80,412,671 mean: 13,642,800.22 stddev: 4,618,847.37 user: 172,510 sys: 31,690 gc: 2,513 mem: 614.37
uptime: 270 acked: 1,503,280 acked/sec: 50,109.33 failed: 0 99%: 33,013,759 99.9%: 48,201,727 min: 6,574,080 max: 78,315,519 mean: 13,656,544.79 stddev: 4,527,513.03 user: 173,000 sys: 31,370 gc: 2,513 mem: 740.95
```
```
uptime: 211 acked: 1,503,700 acked/sec: 48,506.45 failed: 0 99%: 35,061,759 99.9%: 53,444,607 min: 6,516,736 max: 84,410,367 mean: 14,023,839.49 stddev: 4,968,132.86 user: 173,490 sys: 33,240 gc: 2,453 mem: 1,042.78
uptime: 241 acked: 1,503,920 acked/sec: 50,130.67 failed: 0 99%: 32,882,687 99.9%: 53,968,895 min: 6,574,080 max: 81,133,567 mean: 13,700,645.27 stddev: 4,592,749.72 user: 173,260 sys: 33,190 gc: 2,465 mem: 1,007.66
uptime: 271 acked: 1,503,260 acked/sec: 50,108.67 failed: 0 99%: 33,275,903 99.9%: 56,262,655 min: 6,582,272 max: 81,199,103 mean: 13,710,314.71 stddev: 4,676,515.80 user: 173,910 sys: 32,430 gc: 2,440 mem: 1,065.66
```
> Metrics V2
```
uptime: 211 acked: 1,503,580 acked/sec: 50,119.33 failed: 0 99%: 40,861,695 99.9%: 62,783,487 min: 6,496,256 max: 106,692,607 mean: 14,696,942.76 stddev: 6,041,492.37 user: 187,800 sys: 32,170 gc: 2,646 mem: 779.90
uptime: 241 acked: 1,541,060 acked/sec: 51,368.67 failed: 0 99%: 41,779,199 99.9%: 70,778,879 min: 6,639,616 max: 113,115,135 mean: 14,872,133.61 stddev: 6,435,291.03 user: 219,910 sys: 36,630 gc: 3,115 mem: 875.09
uptime: 271 acked: 1,503,780 acked/sec: 50,126.00 failed: 0 99%: 41,189,375 99.9%: 63,733,759 min: 6,529,024 max: 104,267,775 mean: 14,738,586.49 stddev: 6,153,017.22 user: 188,950 sys: 31,950 gc: 2,815 mem: 891.73
```
```
uptime: 210 acked: 1,503,520 acked/sec: 50,117.33 failed: 0 99%: 47,120,383 99.9%: 89,391,103 min: 6,520,832 max: 142,999,551 mean: 15,073,527.31 stddev: 7,453,484.52 user: 186,330 sys: 33,150 gc: 2,808 mem: 1,096.18
uptime: 241 acked: 1,540,960 acked/sec: 49,708.39 failed: 0 99%: 36,012,031 99.9%: 51,281,919 min: 6,688,768 max: 74,055,679 mean: 14,267,705.37 stddev: 4,971,492.31 user: 186,600 sys: 33,710 gc: 2,462 mem: 1,192.20
uptime: 271 acked: 1,541,440 acked/sec: 51,381.33 failed: 0 99%: 40,075,263 99.9%: 55,181,311 min: 6,725,632 max: 76,021,759 mean: 14,850,073.94 stddev: 5,832,959.05 user: 187,380 sys: 32,640 gc: 2,577 mem: 1,036.03
```
The 99% and 99.9% latency values seem to indicate a performance hit. Below are the 99.9% latency values (in ms), averaged per block:
* 1.2.0-SNAPSHOT
  * avg(48, 53, 48) => 49.6
  * avg(53, 53, 56) => 54
* metrics_v2
  * avg(62, 70, 63) => 65
  * avg(89, 51, 55) => 65
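For clarity, here is a quick shell sketch of that arithmetic, assuming the histogram values in the logs are nanoseconds (which matches the 48/53/48 figures above). The list truncates each sample to whole milliseconds before averaging, so the exact averages come out slightly higher, but the gap between the two builds is the same:
```
# illustrative helper only: average three 99.9% latencies (ns) and print milliseconds
avg_ms() { awk -v a="$1" -v b="$2" -v c="$3" 'BEGIN { printf "%.1f ms\n", (a + b + c) / 3 / 1e6 }'; }

avg_ms 48103423 53346303 48201727   # 1.2.0-SNAPSHOT, first block  -> 49.9 ms
avg_ms 53444607 53968895 56262655   # 1.2.0-SNAPSHOT, second block -> 54.6 ms
avg_ms 62783487 70778879 63733759   # Metrics V2, first block      -> 65.8 ms
avg_ms 89391103 51281919 55181311   # Metrics V2, second block     -> 65.3 ms
```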
We could also test from another angle: push a higher rate and see which version reaches its limit first.
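A rough sketch of that (the 100000 rate is only an example value):
```
# example only: same benchmark at a higher target rate; watch for non-zero failed counts or falling acked/sec
bin/storm jar examples/storm-starter/target/storm-starter-*.jar \
  org.apache.storm.starter.ThroughputVsLatency 100000 \
  -c topology.max.spout.pending=5000
```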