I'm sorry if I was not clear.  In production I have several ActiveMQ brokers
on remote machines, all feeding a common ActiveMQ broker that I refer to as
a hub.  The hub establishes full-duplex static connections with each of the
remote brokers.  A message on any given remote broker is placed into a
queue named ESB, and the hub has a queue named ESB as well.  When the
message arrives at the hub, decision logic determines which remote ActiveMQ
broker should receive the message next and places the message in a queue
destined for that remote broker.  We have used this topology for years with
almost no errors.
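For anyone following along, the bridge I mean looks roughly like the
networkConnector below.  This is only a sketch of our setup - "hub-host",
the port, and the broker name are placeholders, not our real values:

```xml
<!-- Sketch of a remote broker's bridge to the hub (placeholder names). -->
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="remote-1">
  <networkConnectors>
    <!-- duplex="true" makes the single static connection carry traffic
         in both directions, which is what I mean by "full duplex". -->
    <networkConnector name="to-hub"
                      uri="static:(tcp://hub-host:61616)"
                      duplex="true"/>
  </networkConnectors>
</broker>
```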

Our DBA is testing new files, larger and more numerous than before, that
will be sent through this network of brokers.  The activemq log on the hub
indicates that Producer Flow Control is kicking in and halting the flow of
messages.  Unfortunately, I never see the messages start to flow again,
even after waiting tens of minutes.
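The knobs I have been experimenting with are the usual ones inside the
broker element.  The limits below are placeholders for illustration, not a
recommendation - disabling producerFlowControl just shifts the pressure to
store/temp usage rather than removing it:

```xml
<!-- Sketch: per-destination PFC switch plus the broker-wide usage limits
     that waitForSpace() blocks on.  All sizes are placeholder values. -->
<destinationPolicy>
  <policyMap>
    <policyEntries>
      <policyEntry queue=">" producerFlowControl="false"/>
    </policyEntries>
  </policyMap>
</destinationPolicy>

<systemUsage>
  <systemUsage sendFailIfNoSpace="false">
    <memoryUsage><memoryUsage limit="512 mb"/></memoryUsage>
    <storeUsage><storeUsage limit="8 gb"/></storeUsage>
    <tempUsage><tempUsage limit="4 gb"/></tempUsage>
  </systemUsage>
</systemUsage>
```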

My test environment is not as large as the production environment, which is
why I scaled down some of the values.  I was hoping to reproduce the same
problem using smaller messages, and it seems to have worked - I got the
same hung producer.

This is my first time digging into the code for Producer Flow Control, and
it looks fairly straightforward.  The "waitForSpace" method should wait
until there is some more space for the message, but it is acting as if the
consumer is not freeing up any space, and I do not understand why.  The
consumer should not be blocked.
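My reading of that code is that it boils down to a plain Java wait/notify
handshake.  The toy class below is my own sketch of that pattern, not
ActiveMQ's actual implementation - but it shows the failure mode I think I
am seeing: if the "decrease usage" side never runs, the producer parks in
wait() forever.

```java
// Minimal sketch (NOT ActiveMQ's real code) of the wait/notify pattern
// behind waitForSpace(): the producer blocks while usage is at the limit,
// and every decrease must notify waiters or the producer hangs forever.
public class WaitForSpaceDemo {

    static class MemoryUsage {
        private final int limit;
        private int usage;

        MemoryUsage(int limit) { this.limit = limit; }

        // Producer side: block until the consumer frees some space.
        synchronized void waitForSpace() throws InterruptedException {
            while (usage >= limit) {
                wait(); // parked here is exactly the "hung producer" symptom
            }
            usage++;
        }

        // Consumer side: free space and wake any blocked producers.
        // If this never runs (consumer blocked or gone), waitForSpace()
        // never returns.
        synchronized void decreaseUsage() {
            usage--;
            notifyAll();
        }
    }

    public static void main(String[] args) throws Exception {
        MemoryUsage mem = new MemoryUsage(1);
        mem.waitForSpace(); // fill the single slot

        Thread producer = new Thread(() -> {
            try {
                mem.waitForSpace(); // blocks: limit already reached
                System.out.println("producer unblocked after space freed");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();

        Thread.sleep(100);   // let the producer park in wait()
        mem.decreaseUsage(); // consumer frees space -> producer resumes
        producer.join(2000);
        System.out.println("done");
    }
}
```

Comment out the decreaseUsage() call and the producer thread never prints,
which matches the behavior I am seeing on the hub.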

I may have to set up remote debugging (via Eclipse) and dig into this to
clear up my misunderstanding.
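If it helps anyone else, the plan is the standard JPDA attach; port 5005 is
an arbitrary choice, and I believe (but have not yet verified on our
version) that the bin/activemq start script passes ACTIVEMQ_DEBUG_OPTS
through to the JVM:

```shell
# Placeholder debug setup for attaching Eclipse to a running broker.
export ACTIVEMQ_DEBUG_OPTS="-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005"
bin/activemq console
```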



--
View this message in context: 
http://activemq.2283324.n4.nabble.com/Is-Producer-Flow-Control-Necessary-When-Sending-Persistant-Messages-tp4676925p4676953.html
Sent from the ActiveMQ - User mailing list archive at Nabble.com.
