I've never seen any code that responds to slow consumers like that.  There
are a few plugins that allow you to abort (i.e. disconnect) slow consumers,
but you have to enable them explicitly and you'd see logging about how the
broker was aborting them.

Are you using a persistence store, or just storing all messages in the
memory store?

What does the broker process's resource utilization look like when the
problem occurs?  CPU, network I/O, disk I/O, garbage collections, etc.?

And why aren't you using either PFC or one of the pending message
strategies?  Preventing the broker from running out of memory or other
resources is their primary purpose.
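For context, a minimal sketch of what re-enabling PFC in a destination policy might look like (the memoryLimit value here is illustrative, not a recommendation):

```xml
<!-- Illustrative only: with producerFlowControl="true", producers block
     (or receive ProducerAck throttling) once this destination's memory
     limit is reached, instead of the broker exhausting its memory.
     Tune memoryLimit for your workload. -->
<policyEntry queue=">" producerFlowControl="true" memoryLimit="64mb"/>
```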

Also, I was confused by the question about delivering messages to consumers
at the same rate.  For one thing, no, it's not possible and you wouldn't
want it if it was (the slowest consumer would get overwhelmed and/or the
fastest consumer would sit idle, depending on the rate).  But more
fundamentally, nothing you wrote seemed to indicate that there was a
problem with the rate at which ActiveMQ was delivering messages to the
different consumers.  If anything, it sounds like you need to speed up your
consumers' logic or add more consumers so that the backlog never gets
unmanageable even during the worst burst periods.

Tim

---------- Original message ----------
ActiveMQ version 5.11.3
No producer flow control.

<policyEntry queue=">" timeBeforeDispatchStarts="5000"
             producerFlowControl="false" maxPageSize="1000" useCache="true"
             expireMessagesPeriod="0" optimizedDispatch="true">
  <dispatchPolicy>
    <roundRobinDispatchPolicy/>
  </dispatchPolicy>
  <messageGroupMapFactory>
    <simpleMessageGroupMapFactory/>
  </messageGroupMapFactory>
  <pendingMessageLimitStrategy>
    <!-- limit="-1" means no pending-message limit is applied -->
    <constantPendingMessageLimitStrategy limit="-1"/>
  </pendingMessageLimitStrategy>
</policyEntry>


I've noticed the following issue:

- Consumers are consuming at a consistent rate.
- The producer starts sending a burst of messages.
- The queue starts growing.
- Once queue depth reaches around 400K+, ActiveMQ stops delivering messages
to the consumers.
- Eventually, memory usage reaches 100%, and it starts impacting other
queues and producers/consumers.

To get out of this situation, I enable the consumers to drop certain kinds
of messages (to trigger faster consumption).
After this change, ActiveMQ resumes delivering messages, and the queue gets
drained.

It looks to me like ActiveMQ detects the slow consumers on a growing queue
and stops delivering messages.
Is it possible to configure ActiveMQ to deliver to all the consumers at an
equal rate, i.e., not to balance deliveries based on consumption rate? I'm
trying to avoid the system freeze.
