Oh right, sorry: it was introduced in Storm 1.0.0.

And max-spout-pending only throttles the calls to nextTuple(), so tuple
latency shouldn't be affected by it directly.
One thing you may want to check: the event handler on the Spout is
single-threaded, which means the same thread calls nextTuple(), ack(),
and fail().
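
For reference, that setting is the topology.max.spout.pending config key. A minimal sketch of setting it programmatically follows; the value 500 is only illustrative, and note the Config class lives under backtype.storm in 0.10 and org.apache.storm from 1.0:

```java
import org.apache.storm.Config; // backtype.storm.Config on Storm 0.10

public class TopologyConfigSketch {
    public static Config build() {
        Config conf = new Config();
        // Cap on un-acked tuples in flight per spout task.
        // 500 is an example value; tune it for your topology.
        conf.setMaxSpoutPending(500);
        return conf;
    }
}
```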

http://storm.apache.org/releases/1.0.0/Concepts.html
"It is imperative that nextTuple does not block for any spout
implementation, because Storm calls all the spout methods on the same
thread."

So if your topology spends non-negligible time in nextTuple(), that could
affect tuple latency.
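
Roughly, a spout that follows that rule polls an in-memory buffer and returns immediately when it is empty, instead of blocking on an external source. A minimal sketch of the pattern (class and method names are illustrative, not Storm's actual spout API; a background thread would normally fill the buffer from Kafka, a socket, etc.):

```java
import java.util.concurrent.ConcurrentLinkedQueue;

public class NonBlockingSpoutSketch {
    // Thread-safe buffer filled by a separate consumer thread.
    private final ConcurrentLinkedQueue<String> buffer = new ConcurrentLinkedQueue<>();

    public void offer(String msg) {
        buffer.add(msg);
    }

    // Analogue of nextTuple(): emit at most one tuple, never block.
    // Returns the next message, or null if nothing was available.
    public String nextTuple() {
        return buffer.poll(); // poll() returns null immediately when empty
    }

    public static void main(String[] args) {
        NonBlockingSpoutSketch spout = new NonBlockingSpoutSketch();
        System.out.println(spout.nextTuple()); // prints "null": no data, but no blocking
        spout.offer("event-1");
        System.out.println(spout.nextTuple()); // prints "event-1"
    }
}
```

If the nextTuple() analogue instead waited on the source (e.g. a blocking take()), it would also delay ack() and fail() processing, since all three run on the same thread.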

Hope this helps.

Thanks,
Jungtaek Lim (HeartSaVioR)

On Wed, Apr 20, 2016 at 10:21 AM, Kevin Conaway <kevin.a.cona...@gmail.com> wrote:

> We are already sending our metrics to graphite but I don't see
> __skipped-max-spout being logged.  Was that added after 0.10?  I don't even
> see a reference to it in the codebase.
>
> We are capturing the queue metrics for each component (send_queue and
> receive_queue) and the population for each is very low, around 0.01% of
> capacity (< 10 entries at any given time), so I'm not sure that items are
> sitting in those particular disruptor queues.
>
> On Tue, Apr 19, 2016 at 8:54 PM, Jungtaek Lim <kabh...@gmail.com> wrote:
>
>> Hi Kevin,
>>
>> You can attach a metrics consumer to log additional information for the
>> topology, such as disruptor queue metrics, __skipped-max-spout for spouts,
>> etc.
>> Please refer to
>> https://github.com/apache/storm/blob/master/storm-core/src/jvm/org/apache/storm/metric/LoggingMetricsConsumer.java
>> for how to set it up.
>>
>> Please note that it may affect overall throughput (I guess the impact
>> would be relatively small), so you may want to attach it only while
>> testing / debugging a performance issue.
>>
>> Thanks,
>> Jungtaek Lim (HeartSaVioR)
>>
>>
>> On Wed, Apr 20, 2016 at 9:41 AM, Kevin Conaway <kevin.a.cona...@gmail.com> wrote:
>>
>>> In Storm 0.10, is there a way to monitor the _maxSpoutPending_ value?  I
>>> don't see it exposed in any of the metrics that Storm publishes, but I'd
>>> like to be able to see how much time each tuple spends waiting and how big
>>> the pending queue is for each spout task.  Is this possible?
>>>
>>> Our topology has one spout and one bolt.  The average bolt execute
>>> latency is ~20ms but the overall spout latency is ~60ms.  I'd like to know
>>> where those other 40ms are being spent.
>>>
>>> Thank you
>>>
>>>
>>> --
>>> Kevin Conaway
>>> http://www.linkedin.com/pub/kevin-conaway/7/107/580/
>>> https://github.com/kevinconaway
>>>
>>
>
>
> --
> Kevin Conaway
> http://www.linkedin.com/pub/kevin-conaway/7/107/580/
> https://github.com/kevinconaway
>
