Thanks for closing the loop on this one.

Yep, the destination limits have a parent/child relationship with system
usage; it is a bit like pocket money: all of the money comes from the
parent's income, so whatever a destination uses reduces what is left in
the shared pot.

If hard limits are in place and they are expected to be reached, share
the system usage among destinations using per-destination memory limits
that are each a portion of the total. This works for a static number of
destinations.
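A minimal sketch of that static split, assuming the default 64mb system
usage shared across four known queue prefixes (the queue names and the
16mb slices here are only illustrative):

 <destinationPolicy>
   <policyMap>
     <policyEntries>
       <!-- hypothetical queues: each gets a fixed 16mb slice of the 64mb system usage -->
       <policyEntry queue="orders.>"  producerFlowControl="true" memoryLimit="16mb"/>
       <policyEntry queue="billing.>" producerFlowControl="true" memoryLimit="16mb"/>
       <policyEntry queue="audit.>"   producerFlowControl="true" memoryLimit="16mb"/>
       <policyEntry queue="events.>"  producerFlowControl="true" memoryLimit="16mb"/>
     </policyEntries>
   </policyMap>
 </destinationPolicy>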

When the number of destinations is dynamic, you can leverage the
cursorMemoryHighWaterMark percentage via a destination policy. Reduce
it so that the available limit is shared across destinations on an
as-needed basis; a value of 5% would ensure that 20 destinations can
each viably use some memory for caching messages without blocking.
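A sketch of that, applying the 5% value from above via a wildcard policy
entry (the systemUsage memoryUsage still provides the total):

 <destinationPolicy>
   <policyMap>
     <policyEntries>
       <!-- each destination's cursor stops caching messages once it has used 5% of the available memory -->
       <policyEntry queue=">" producerFlowControl="true" cursorMemoryHighWaterMark="5"/>
     </policyEntries>
   </policyMap>
 </destinationPolicy>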

In the single-destination case, the default value of 70% with the
default store cursor ensures caching stops before the system limit is
reached, even if the destination limit == the system limit, so there is
no need to block a send.

When the vmCursor is used, reaching the limit means sends will block,
because all messages are always kept in memory.

Final note: with limits, the JVM's max heap (-Xmx) should exceed the
system usage value, possibly by a factor of 2 depending on the usage
pattern and GC spikes. In ActiveMQ, messages in memory are the only
resource accounted for when checking usage limits, so everything else
(destinations, JMX, store caches, etc.) needs JVM resources. Using all
of the available heap for messages will quickly lead to OOM, so don't
do that.
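As a rough example, with the 200mb memoryUsage limit from the config
below, something like the following when starting the broker leaves
headroom for everything else (the variable and value are assumptions;
adjust for your install and usage pattern):

 # set in the shell or startup script that launches the broker (assumed)
 export ACTIVEMQ_OPTS="-Xmx512m $ACTIVEMQ_OPTS"
 bin/activemq start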


On 1 December 2011 22:04, Bryan <brya...@gmail.com> wrote:
> For those interested, I resolved my issue. ActiveMQ flow control kicks in
> when the queue memory limit is reached, or more importantly, when the system
> memory usage limit is reached. By default, both the per-queue and system memory
> limits are set to 64mb. If you have more than one queue in use, then you
> will generally hit the system memory limit before the queue limit if there
> are slow consumers and you are using a VM pending queue policy. All queues
> will then end up being throttled based on the shared system memory, and this
> can result in a deadlock. The deadlock on the shared memory limit I consider
> to be a bug in ActiveMQ.
>
> Thus to avoid a deadlock, set the system memory usage limit to be high
> enough that it will never be reached before the per-queue limits, e.g. set
> it to (per queue limit) X (number of queues). Once I did this, there were no
> more deadlocks as a result of producer flow control. Theoretically a
> queue-specific deadlock could still happen if you are consuming then
> re-queuing messages to the same queue, but that isn't an issue for me.
>
> The following sets a 1mb limit per queue, and 200mb limit on system usage,
> thus you can have several queues full before you hit the system usage limit.
>
>  <destinationPolicy>
>        <policyMap>
>          <policyEntries>
>                <policyEntry queue=">" producerFlowControl="true" memoryLimit="1mb">
>                  <pendingQueuePolicy>
>                        <vmQueueCursor/>
>                  </pendingQueuePolicy>
>                </policyEntry>
>          </policyEntries>
>        </policyMap>
> </destinationPolicy>
>
> <systemUsage>
>        <systemUsage>
>                <memoryUsage>
>                        <memoryUsage limit="200 mb"/>
>                </memoryUsage>
>                <storeUsage>
>                        <storeUsage limit="1 gb"/>
>                </storeUsage>
>                <tempUsage>
>                        <tempUsage limit="100 mb"/>
>                </tempUsage>
>        </systemUsage>
> </systemUsage>
>
>
>
> --
> View this message in context: 
> http://activemq.2283324.n4.nabble.com/Throttling-deadlock-tp4124447p4142420.html
> Sent from the ActiveMQ - User mailing list archive at Nabble.com.



-- 
http://fusesource.com
http://blog.garytully.com
