Gary,

I've been trying to do pretty much the same thing as Scott, and I
can't get it to work either - no matter what I do I can still blow
the broker up with an OOME.

What I want to do is configure my broker so that it is impossible to
run it out of memory or lock it up: messages should just continue to
be saved to disk until we hit the <storeUsage> limit.

My interpretation of what you wrote in this thread is that if I set
the <memoryUsage> limit to something sensible like 100 mb, and set the
queue <policyEntry> memoryLimit to something higher than 100 mb, then
messages should go to the store until the store is full.

However, with that configuration on the latest ActiveMQ 5.3.1 snapshot
(Thu Jan 28) the broker ignores the <memoryUsage> setting, and if the
<policyEntry> memoryLimit is set high enough it will quite happily run
out of heap space. If the memoryLimit is set below the heap limit
(-Xmx), then at some point (it appears to always be the same number of
messages in my test) the broker just locks up: the producer stops
sending messages, I can't connect a consumer to get any messages off,
and JMX stops returning any information.

My test configuration is this:

        <destinationPolicy>
            <policyMap>
                <policyEntries>
                    <policyEntry queue=">" memoryLimit="10gb"
optimizedDispatch="true" producerFlowControl="false">
                        <pendingQueuePolicy>
                            <fileQueueCursor/>
                        </pendingQueuePolicy>
                    </policyEntry>
                </policyEntries>
            </policyMap>
        </destinationPolicy>

        <managementContext>
            <managementContext createConnector="false"/>
        </managementContext>

        <persistenceAdapter>
            <!-- while testing, let's turn off write sync so the tests run quicker -->
            <kahaDB directory="/var/spool/activemq/kahadb"
                    enableJournalDiskSyncs="false"/>
        </persistenceAdapter>

        <systemUsage>
            <systemUsage>
                <memoryUsage>
                    <memoryUsage limit="100 mb"/>
                </memoryUsage>
                <storeUsage>
                    <storeUsage limit="20 gb" name="store"/>
                </storeUsage>
                <tempUsage>
                    <tempUsage limit="100 mb"/>
                </tempUsage>
            </systemUsage>
        </systemUsage>

        <transportConnectors>
            <transportConnector name="tcp" uri="tcp://localhost:61616"/>
        </transportConnectors>


I also set the following in /etc/activemq.conf:

ACTIVEMQ_OPTS="-Xmx1500m -XX:MaxPermSize=256m -Dorg.apache.activemq.UseDedicatedTaskRunner=false"

I would be very interested in hearing what I am doing wrong.

Regards,

Mats Henrikson



On 16 February 2010 08:29, Gary Tully <gary.tu...@gmail.com> wrote:
> First thing is you need to use the FilePendingQueueMessageStoragePolicy, as
> that will offload message references to the file system when the
> SystemUsage.MemoryUsage limit is reached.
>
> So 1) add the following to the broker policy entry:
>
>        PendingQueueMessageStoragePolicy pendingQueuePolicy =
>                new FilePendingQueueMessageStoragePolicy();
>        policy.setPendingQueuePolicy(pendingQueuePolicy);
>
> With flow control on, you need to configure a lower SystemUsage, as the use
> of disk space by the file-based cursors is determined by the shared
> SystemUsage.memoryLimit, which by default is the same value as the memory
> limit for a destination. With a single destination, the flow control kicks
> in before the system usage limit, so no spooling to disk occurs.
>
> 2) Configure a SystemUsage.MemoryLimit that is less than the default
> destination memory limit of 64M:
>
>        brokerService.getSystemUsage().getMemoryUsage().setLimit(1024 * 1024 * 63);
>
> This should do it once you add a TempStore() limit to implement 5.
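
For what it's worth, my reading of those two steps as a single
programmatic broker setup is roughly the sketch below. It just mirrors
the limits from my XML test config above; the class name, directory,
URI and exact numbers are placeholders, and I haven't verified that it
behaves any differently - if I've mistranslated either step, please
shout.

    import java.io.File;

    import org.apache.activemq.broker.BrokerService;
    import org.apache.activemq.broker.region.policy.FilePendingQueueMessageStoragePolicy;
    import org.apache.activemq.broker.region.policy.PolicyEntry;
    import org.apache.activemq.broker.region.policy.PolicyMap;
    import org.apache.activemq.store.kahadb.KahaDBPersistenceAdapter;

    public class SpoolToDiskBroker {
        public static void main(String[] args) throws Exception {
            BrokerService broker = new BrokerService();

            // 1) per-queue policy: file cursor so pending messages can be
            //    spooled to disk, flow control off as in my XML config
            PolicyEntry policy = new PolicyEntry();
            policy.setQueue(">");
            policy.setOptimizedDispatch(true);
            policy.setProducerFlowControl(false);
            policy.setPendingQueuePolicy(new FilePendingQueueMessageStoragePolicy());
            policy.setMemoryLimit(10L * 1024 * 1024 * 1024);   // 10gb, as in the XML

            PolicyMap policyMap = new PolicyMap();
            policyMap.setDefaultEntry(policy);
            broker.setDestinationPolicy(policyMap);

            // 2) system-wide memory usage lower than the destination memory
            //    limit, so the cursor should start spooling before the heap fills
            broker.getSystemUsage().getMemoryUsage().setLimit(100L * 1024 * 1024);
            broker.getSystemUsage().getStoreUsage().setLimit(20L * 1024 * 1024 * 1024);
            broker.getSystemUsage().getTempUsage().setLimit(100L * 1024 * 1024);

            // same persistence adapter settings as the XML test config
            KahaDBPersistenceAdapter kahaDB = new KahaDBPersistenceAdapter();
            kahaDB.setDirectory(new File("/var/spool/activemq/kahadb"));
            kahaDB.setEnableJournalDiskSyncs(false);
            broker.setPersistenceAdapter(kahaDB);

            broker.addConnector("tcp://localhost:61616");
            broker.start();
            broker.waitUntilStopped();
        }
    }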
