Github user HeartSaVioR commented on a diff in the pull request:

    https://github.com/apache/storm/pull/2241#discussion_r158007018
  
    --- Diff: conf/defaults.yaml ---
    @@ -253,11 +278,17 @@ topology.trident.batch.emit.interval.millis: 500
     topology.testing.always.try.serialize: false
     topology.classpath: null
     topology.environment: null
    -topology.bolts.outgoing.overflow.buffer.enable: false
    -topology.disruptor.wait.timeout.millis: 1000
    -topology.disruptor.batch.size: 100
    -topology.disruptor.batch.timeout.millis: 1
    -topology.disable.loadaware.messaging: false
    +
    +topology.transfer.buffer.size: 1000   # size of recv queue for transfer worker thread
    +topology.transfer.batch.size: 1       # can be no larger than half of `topology.transfer.buffer.size`
    +
    +topology.executor.receive.buffer.size: 32768  # size of recv queue for spouts & bolts. Will be internally rounded up to next power of 2 (if not already a power of 2)
    +topology.producer.batch.size: 1               # can be no larger than half of `topology.executor.receive.buffer.size`
    +
    +topology.batch.flush.interval.millis: 1  # Flush tuples are disabled if this is set to 0 or if (topology.producer.batch.size=1 and topology.transfer.batch.size=1).
    +topology.spout.recvq.skips: 3  # Check recvQ once every N invocations of Spout's nextTuple() [when ACKs disabled]
    +
    +topology.disable.loadaware.messaging: false   # load aware messaging can degrade throughput
    --- End diff ---
    
    It may be better to describe the cases where we recommend enabling load aware messaging and the cases where we recommend disabling it. As written, the comment may mislead users into thinking it is always better to disable this option.
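    
    For reference, these defaults can be overridden per topology at submission time. Below is a minimal sketch assuming the keys from the diff above are set as plain config entries; the class name, the chosen values, and the load aware trade-off noted in the comments are illustrative, not a recommendation.
    
    ```java
    import org.apache.storm.Config;
    
    public class LoadAwareTuningSketch {
        public static void main(String[] args) {
            Config conf = new Config();
    
            // Batch sizes should be no larger than half of the corresponding buffer sizes.
            conf.put("topology.transfer.buffer.size", 1000);
            conf.put("topology.transfer.batch.size", 1);
            conf.put("topology.executor.receive.buffer.size", 32768);
            conf.put("topology.producer.batch.size", 1);
    
            // Flushing is effectively disabled if this is 0, or if both batch sizes are 1.
            conf.put("topology.batch.flush.interval.millis", 1);
    
            // With ACKs disabled, check the spout's receive queue once every 3 nextTuple() calls.
            conf.put("topology.spout.recvq.skips", 3);
    
            // Illustrative only: a latency-sensitive topology with evenly loaded downstream
            // executors might disable load aware messaging; topologies with skewed load
            // generally benefit from leaving it enabled (the default, false).
            conf.put("topology.disable.loadaware.messaging", true);
    
            // Pass conf to StormSubmitter.submitTopology(...) together with the built topology.
            System.out.println(conf);
        }
    }
    ```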

