Each executor has a backup unbounded buffer behind its queue, since without
that a deadlock may occur.
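To make the role of that overflow buffer concrete, here is a minimal sketch
(an assumed structure for illustration only, not Storm's actual code): a
bounded main queue backed by an unbounded overflow, so a publisher never
blocks on a full queue - blocking there is what could produce the deadlock
mentioned above.

import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.ArrayBlockingQueue;

class ExecutorQueueSketch<T> {
    private final ArrayBlockingQueue<T> main;              // bounded main queue
    private final Queue<T> overflow = new ArrayDeque<>();  // unbounded backup buffer

    ExecutorQueueSketch(int capacity) {
        main = new ArrayBlockingQueue<>(capacity);
    }

    // Publishers never block: if the bounded queue is full (or older items are
    // already waiting in overflow), spill into the unbounded overflow instead.
    synchronized void publish(T tuple) {
        if (!overflow.isEmpty() || !main.offer(tuple)) {
            overflow.add(tuple);
        }
    }

    // The consumer drains the overflow back into the main queue as space frees up.
    synchronized T poll() {
        T tuple = main.poll();
        while (!overflow.isEmpty() && main.offer(overflow.peek())) {
            overflow.remove();
        }
        return tuple;
    }
}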

'topology.max.spout.pending' was introduced much earlier than backpressure.
It was the only way to throttle a topology, and it is still valid for
topologies that don't have backpressure activated.
Backpressure doesn't work smoothly, so setting a good value for max spout
pending is still better than relying on backpressure. (Indeed, we disable
backpressure by default.) This should be addressed.
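For reference, a minimal sketch of how that throttle is typically set in a
topology's config (the Storm 1.x package name and the value 1000 are
assumptions for illustration; 0.9.x/0.10.x use backtype.storm.Config):

import org.apache.storm.Config;

public class MaxSpoutPendingExample {
    public static void main(String[] args) {
        Config conf = new Config();
        // Cap the number of un-acked tuples a single spout task may have in
        // flight; this only takes effect when tuples are emitted with message
        // ids (i.e. reliable processing).
        conf.setMaxSpoutPending(1000);  // same as topology.max.spout.pending: 1000
        // Backpressure is disabled by default, so the cap above remains the
        // main throttle; this line just makes that default explicit.
        conf.put(Config.TOPOLOGY_BACKPRESSURE_ENABLE, false);
        // ... then pass conf to StormSubmitter.submitTopology(...) as usual.
    }
}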

- Jungtaek Lim (HeartSaVioR)

On Fri, Mar 3, 2017 at 7:42 AM, Erik Weathers <eweath...@groupon.com> wrote:

The miguno blog post is a bit out of date; it predates the switch from
ZeroMQ to Netty as the communication layer between workers.

Notably, Netty has an unbounded buffer (at least in version 0.9.6):

   - https://github.com/apache/storm/blob/v0.9.6/storm-core/src/jvm/backtype/storm/messaging/netty/Server.java#L97


On Thu, Mar 2, 2017 at 1:28 PM, David Koch <ogd...@googlemail.com> wrote:

Hi,

The Storm documentation mentions setting topology.max.spout.pending as a
way of preventing "queue explosion"[1]. What is meant by this? Tuples
piling up and eventually causing out-of-memory exceptions? If I understand
correctly, the topology's queue and buffer sizes are all limited [2], so
at what point could something explode, even without limiting the maximum
number of pending tuples and/or without the backpressure mechanism activated?
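For concreteness, these are the bounded internal buffers that [2] describes;
a minimal sketch with illustrative values (not recommendations), assuming
the Storm 1.x Config class:

import org.apache.storm.Config;

public class InternalBufferSizes {
    public static void main(String[] args) {
        Config conf = new Config();
        // Per-executor incoming tuple queue (bounded ring buffer).
        conf.put(Config.TOPOLOGY_EXECUTOR_RECEIVE_BUFFER_SIZE, 1024);
        // Per-executor outgoing tuple queue (bounded ring buffer).
        conf.put(Config.TOPOLOGY_EXECUTOR_SEND_BUFFER_SIZE, 1024);
        // Per-worker transfer queue feeding the network layer.
        conf.put(Config.TOPOLOGY_TRANSFER_BUFFER_SIZE, 1024);
    }
}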

Thanks,

David

[1]
http://storm.apache.org/releases/1.0.0/Running-topologies-on-a-production-cluster.html
[2]
http://www.michael-noll.com/blog/2013/06/21/understanding-storm-internal-message-buffers/
