Github user revans2 commented on the issue:
https://github.com/apache/storm/pull/2241
Sorry, but we also need to think about low throughput use cases. I have
several that I care about, and I am seeing very long latency at low
throughput: the min in some cases is 5 seconds, the max can be up to 20
seconds, the average is around 10 seconds, and CPU utilization is 500%.
This too needs to be addressed.
```
500 1 -c topology.workers=1
uptime: 30 acked: 4,000 acked/sec: 133.33 failed: 0 99%: 9,923,723,263 99.9%: 9,999,220,735 min: 79,036,416 max: 10,015,997,951 mean: 5,861,829,371.65 stddev: 2,744,502,279.38 user: 0 sys: 0 gc: 0 mem: 0.00
uptime: 60 acked: 15,000 acked/sec: 500.00 failed: 0 99%: 14,646,509,567 99.9%: 14,973,665,279 min: 53,084,160 max: 15,023,996,927 mean: 7,410,713,531.31 stddev: 3,187,842,885.35 user: 0 sys: 0 gc: 0 mem: 0.00
uptime: 90 acked: 16,000 acked/sec: 533.33 failed: 0 99%: 14,747,172,863 99.9%: 14,990,442,495 min: 37,486,592 max: 15,032,385,535 mean: 7,947,532,282.45 stddev: 3,104,232,967.22 user: 0 sys: 0 gc: 0 mem: 0.00
uptime: 120 acked: 14,000 acked/sec: 466.67 failed: 0 99%: 14,856,224,767 99.9%: 14,998,831,103 min: 65,208,320 max: 15,023,996,927 mean: 9,071,752,875.48 stddev: 3,337,053,852.19 user: 0 sys: 0 gc: 0 mem: 0.00
uptime: 150 acked: 13,000 acked/sec: 433.33 failed: 0 99%: 14,914,945,023 99.9%: 14,998,831,103 min: 4,999,610,368 max: 15,074,328,575 mean: 10,374,946,814.88 stddev: 2,794,778,136.42 user: 0 sys: 0 gc: 0 mem: 0.00
uptime: 180 acked: 16,000 acked/sec: 533.33 failed: 0 99%: 14,940,110,847 99.9%: 15,049,162,751 min: 5,007,998,976 max: 15,602,810,879 mean: 10,539,964,609.74 stddev: 2,796,155,497.39 user: 0 sys: 0 gc: 0 mem: 0.00
uptime: 210 acked: 15,000 acked/sec: 500.00 failed: 0 99%: 14,881,390,591 99.9%: 14,998,831,103 min: 5,003,804,672 max: 15,015,608,319 mean: 9,616,077,147.72 stddev: 2,781,415,317.06 user: 0 sys: 0 gc: 0 mem: 0.00
uptime: 240 acked: 10,000 acked/sec: 333.33 failed: 0 99%: 14,889,779,199 99.9%: 15,007,219,711 min: 5,003,804,672 max: 15,015,608,319 mean: 9,840,073,724.86 stddev: 2,806,028,726.32 user: 0 sys: 0 gc: 0 mem: 0.00
uptime: 270 acked: 16,000 acked/sec: 533.33 failed: 0 99%: 17,951,621,119 99.9%: 19,780,337,663 min: 5,003,804,672 max: 20,015,218,687 mean: 10,556,609,171.18 stddev: 3,010,780,308.43 user: 0 sys: 0 gc: 0 mem: 0.00
uptime: 300 acked: 15,000 acked/sec: 500.00 failed: 0 99%: 14,898,167,807 99.9%: 14,998,831,103 min: 51,445,760 max: 15,023,996,927 mean: 9,694,508,448.06 stddev: 3,087,190,409.09 user: 0 sys: 0 gc: 0 mem: 0.00
```
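One note on reading the numbers above: the 99%, 99.9%, min, max, mean, and stddev columns are end-to-end latencies in nanoseconds, so 9,923,723,263 is roughly 9.9 seconds, which is where the 5/10/20 second figures come from. A minimal sketch of producing this kind of report, assuming an HdrHistogram-style recorder like the one the ThroughputVsLatency benchmark uses (the recorded values here are made up):
```java
import org.HdrHistogram.Histogram;

public class LatencyReport {
    public static void main(String[] args) {
        // Track 1 ns .. 1 hour at 3 significant digits, matching the
        // nanosecond-resolution columns in the output above.
        Histogram histo = new Histogram(3600L * 1000 * 1000 * 1000, 3);

        // The real benchmark records each acked tuple's end-to-end
        // latency in nanoseconds; these three samples are made up.
        histo.recordValue(79_036_416L);      // ~79 ms
        histo.recordValue(5_861_829_371L);   // ~5.9 s
        histo.recordValue(10_015_997_951L);  // ~10 s

        System.out.printf(
            "99%%: %,d 99.9%%: %,d min: %,d max: %,d mean: %,.2f stddev: %,.2f%n",
            histo.getValueAtPercentile(99.0),
            histo.getValueAtPercentile(99.9),
            histo.getMinValue(),
            histo.getMaxValue(),
            histo.getMean(),
            histo.getStdDeviation());
    }
}
```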
I am fine with the goals and the design work being done for this. If you
can do better than the stuff I did for the disruptor, by all means rip out
my code and make things better. The low throughput issue was one I had to
fix with my initial patches to the disruptor; people do care about this. I
am not trying to be a jerk, I am just trying to keep my customers happy,
share some of my experience doing something similar in the past, and also
hopefully make Storm much, much better in the end.
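For context on where that low throughput fix came in: batching is great for throughput, but a partially filled batch can sit in the queue indefinitely when tuples arrive slowly, which produces exactly the kind of multi-second latency shown above. The fix was, roughly, to flush partial batches on a timer so a tuple never waits longer than the flush interval regardless of rate. A minimal sketch of that idea (the class and its names are illustrative, not Storm's actual DisruptorQueue code):
```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;

// Illustrative timed-flush batcher: a batch is published when full, and
// a background timer flushes partially filled batches so that low-rate
// traffic is not stuck waiting behind an unfilled batch.
public class TimedBatcher<T> {
    private final int batchSize;
    private final Consumer<List<T>> downstream;
    private final List<T> current = new ArrayList<>();

    public TimedBatcher(int batchSize, long flushIntervalMs, Consumer<List<T>> downstream) {
        this.batchSize = batchSize;
        this.downstream = downstream;
        ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
        timer.scheduleAtFixedRate(this::flush, flushIntervalMs, flushIntervalMs,
                                  TimeUnit.MILLISECONDS);
    }

    public synchronized void add(T item) {
        current.add(item);
        if (current.size() >= batchSize) {
            publish();
        }
    }

    public synchronized void flush() {
        if (!current.isEmpty()) {
            publish();
        }
    }

    // Caller holds the lock (both call sites are synchronized).
    private void publish() {
        downstream.accept(new ArrayList<>(current));
        current.clear();
    }
}
```
The point of the timer is that worst-case queueing delay is bounded by the flush interval rather than by how long it takes to fill a batch, which is what blows up at low throughput.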
I apologize if I offended anyone. It was not my intention, but I really
was shocked to see a patch everyone was touting as the best thing since
sliced bread turn out decidedly worse in every way for a topology that
worked really well before. I was able to max out the default configuration,
with a parallelism of 4, at 100,000 sentences per second fully acked. The
new patch could only handle a third of that, and not when there is more
than one worker.