Github user revans2 commented on the issue:

    https://github.com/apache/storm/pull/2241
  
    -1
    
    Perhaps I am running into some odd issues here, and if I can be corrected I
would be happy to change my vote, but nothing I have run with this patch is
better in any way.  Are all of the results from micro benchmarks?  Did anyone
run a real topology with this patch before posting all of these wonderful
results to Twitter?  I am not one to swear, but WTF?
    
    I built stock 2.0.0-SNAPSHOT
(450ed637f92c3f387681a47b4b667f17eeecac1f) and compared it to the exact same
code with this patch merged on top of it (a clean merge).  I am running:
    
    ```
    $ java -version
    java version "1.8.0_121"
    Java(TM) SE Runtime Environment (build 1.8.0_121-b13)
    Java HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode)
    
    Sierra 10.12.6
    MacBook Pro (Retina, 15-inch, Mid 2015)
    2.8 GHz Intel Core i7
    16 GB 1600 MHz DDR3
    ```
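    
    For anyone who wants to reproduce the comparison, the setup was roughly the
following (the branch names and the pull request fetch ref here are
illustrative, not the exact commands I ran):
    
    ```
    # build stock 2.0.0-SNAPSHOT at the commit above
    git checkout 450ed637f92c3f387681a47b4b667f17eeecac1f -b stock-2.0.0
    mvn clean install -DskipTests
    
    # fetch this PR, merge it on top (a clean merge), and rebuild
    git fetch https://github.com/apache/storm.git pull/2241/head:pr-2241
    git checkout -b stock-plus-pr-2241 450ed637f92c3f387681a47b4b667f17eeecac1f
    git merge pr-2241
    mvn clean install -DskipTests
    ```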
    
    I ran the ThroughputVsLatency topology with several different options and 
no changes at all to the default storm.yaml.
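    
    As a concrete example, a single run was along these lines (the jar path and
the argument order are approximate and may need adjusting for your build):
    
    ```
    # submit ThroughputVsLatency from storm-starter; the first argument is the
    # target rate in sentences/second (approximate invocation)
    $ storm jar examples/storm-starter/target/storm-starter-2.0.0-SNAPSHOT.jar \
        org.apache.storm.starter.ThroughputVsLatency 10000
    ```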
    
    With this patch I found that:
    
    1. A topology with more than one worker appears to be unable to send any
messages between workers (or it takes so long that most of the messages time
out), so I switched all of my tests to a single worker.
    2. When processing a nearly idle topology (500 sentences/second) the CPU
utilization was everything my box could give it (all 8 cores fully utilized),
compared to about half of one core used by stock Storm.
    3. The latency is absolutely horrible.  The minimum latency for a somewhat
idle topology was 1 to 4 seconds to do a word count.  For a topology processing
10,000 sentences per second it dropped to 800 ms.  The maximum latency was 15
seconds in all of these cases.  Compare that to stock Storm, which has a
minimum latency of around 3 to 4 ms in the normal case.
    4. The system bolt metrics do not work, or at least I was not able to get
any of them back.  I compared memory and CPU usage through top instead, which
worked out OK.
    5. Memory usage is insane.  The resident memory was 2 GB for almost all of
the workers no matter the throughput.  That is 1 GB more than stock Storm at
the same 10,000 sentences per second.
    6. The maximum throughput is about 1/4 of what it is on stock Storm.  With
stock Storm I was able to do 100,000 sentences per second on my laptop, both
with a single worker and with 4 workers (although the CPU usage was higher in
the latter case).  With this patch I was able to do 30,000 sentences per second
in the best case, but on average it could only do about 25,000.
    
    I am happy to make all of my numbers public.  I also plan to rerun the
tests on a Linux box to see if the results are any different.

