OK, so hypothetically, if we had one queue, then there should be no
out-of-memory errors using that configuration?
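
(If I have the arithmetic right: a single queue capped at 1mb could never
push the shared memory usage anywhere near the 70% cursor high water mark of
the 20mb system limit, i.e. 1mb is well under 14mb, so neither paging nor
heap exhaustion should ever trigger.)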

Also, this causes us issues anyway, as our solution has a potentially
unbounded number of queues on the system, so we cannot judge an appropriate
per-destination memoryLimit to set in the destination policy.

Maybe it makes more sense for the destinations to share a single memory pool
amongst all allocated queues, rather than giving each queue its own fixed limit?
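
Something like the sketch below is what I have in mind (untested; my
understanding is that omitting the per-destination memoryLimit makes each
queue draw from the shared systemUsage memoryUsage pool instead of holding a
private 1mb):

        <destinationPolicy>
            <policyMap>
                <policyEntries>
                    <!-- no memoryLimit attribute: all queues share the
                         20mb systemUsage memoryUsage pool -->
                    <policyEntry queue=">" producerFlowControl="false">
                        <pendingQueuePolicy>
                            <fileQueueCursor/>
                        </pendingQueuePolicy>
                    </policyEntry>
                </policyEntries>
            </policyMap>
        </destinationPolicy>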

We are testing these configurations as we go and are seeing some OOM issues
with this configuration on 5.3.1. This does seem to be related to the number
of producers. Is there also some kind of per-producer cost for connecting to
the broker, and is that cost known, so we can come up with some kind of
best-guess sizing?
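
(One workaround we may try in the meantime, on the assumption that the OOM
comes from producers outrunning the file cursor while flow control is off,
is turning flow control back on, or telling the broker to fail sends rather
than exhaust the heap; a rough, untested sketch:)

        <policyEntry queue=">" producerFlowControl="true">
            <pendingQueuePolicy>
                <fileQueueCursor/>
            </pendingQueuePolicy>
        </policyEntry>

        <!-- or fail fast when the broker runs out of space -->
        <systemUsage sendFailIfNoSpace="true">
            <!-- memoryUsage/storeUsage/tempUsage as before -->
        </systemUsage>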

Thanks for all your help and advice.


Gary Tully wrote:
> 
> 70% is the magic number: there is a cursorHighWaterMark destination policy
> that defaults to 70% of system usage. Your config can go to 80%.
> 
> On 7 April 2010 12:15, Richard Holt <richard_h...@btopenworld.com> wrote:
> 
>>
>> Hi Gary,
>>
>> It appears I am really not understanding this, so could you clarify?
>>
>> You wrote:
>>
>> "Which ever destination tips it over the limit will have its memory usage
>> flushed to disk because it is
>> configured with a fileQueueCursor."
>>
>> and then wrote:
>>
>> "The salient point being that paging to disk is dependent on systemUsage
>> configuration, not individual/per destination memory usage."
>>
>> To me, this says that once the combined destinations' usage exceeds the
>> systemUsage/memoryUsage of 20mb, the destination which tipped it over is
>> flushed.
>>
>> However, given that I have 16 * 1mb queues, how can this ever occur?
>> Surely this means my maximum memory is 16mb?
>>
>> So, flushing must be occurring when destinations exceed the 1mb value,
>> irrespective of the systemUsage/memoryUsage setting...
>>
>> Or does it?
>>
>>
>>
>> Gary Tully wrote:
>> >
>> > Richard, you have configured a 1mb limit for every queue because you
>> > use the ">". So the broker will keep up to 1mb of space in memory for
>> > each queue, up to 16mb for 16 queues.
>> >
>> > The paging (via fileQueueCursor) kicks in for each destination only
>> > when 70% of the shared system limit (20mb) is reached. Whichever
>> > destination tips it over the limit will have its memory usage flushed
>> > to disk, because it is configured with a fileQueueCursor. System usage
>> > will reduce by 1mb, and the rest of the destinations will continue to
>> > use memory up to their limit.
>> >
>> > The salient point being that paging to disk is dependent on systemUsage
>> > configuration, not individual/per destination memory usage.
>> >
>> >
>> > On 7 April 2010 11:22, Richard Holt <richard_h...@btopenworld.com>
>> wrote:
>> >
>> >>
>> >> Sorry about the thread hijack; however, if I have something like this:
>> >>
>> >>        <destinationPolicy>
>> >>            <policyMap>
>> >>                <policyEntries>
>> >>                    <policyEntry queue=">" memoryLimit="1mb"
>> >> producerFlowControl="false">
>> >>                        <pendingQueuePolicy>
>> >>                            <fileQueueCursor/>
>> >>                        </pendingQueuePolicy>
>> >>                    </policyEntry>
>> >>                </policyEntries>
>> >>            </policyMap>
>> >>        </destinationPolicy>
>> >>
>> >>                <systemUsage>
>> >>                  <systemUsage>
>> >>                        <memoryUsage>
>> >>                          <memoryUsage limit="20mb"/>
>> >>                        </memoryUsage>
>> >>                        <storeUsage>
>> >>                          <storeUsage limit="100mb"/>
>> >>                        </storeUsage>
>> >>                        <tempUsage>
>> >>                          <tempUsage limit="100mb"/>
>> >>                        </tempUsage>
>> >>                  </systemUsage>
>> >>                </systemUsage>
>> >>
>> >> and an Xmx of 32mb:
>> >>
>> >> Once a queue reaches its 1mb limit of messages, it will start to page
>> >> out to disk?
>> >>
>> >> And the ActiveMQ system reserves 20mb of memory for itself?
>> >>
>> >> Is the 1mb then applied per queue? So if I have 16 queues I in fact
>> >> need 16mb, because then the 20mb + 16 * 1mb will exceed my Xmx value?
>> >>
>> >> Sorry if I am being obtuse, but like the original poster I am trying
>> >> to see how the three memory settings relate to each other.
>> >>
>> >> Coincidentally, I have tested the above on 5.3.1 and it works really
>> >> well. If I use 5.3 it breaks, which I think is due to this
>> >> https://issues.apache.org/activemq/browse/AMQ-2610 and this
>> >>
>> >>
>> >>
>> >
>> >
>> > --
>> > http://blog.garytully.com
>> >
>> > Open Source Integration
>> > http://fusesource.com
>> >
>> >
>>
>>
>>
> 
> 
> -- 
> http://blog.garytully.com
> 
> Open Source Integration
> http://fusesource.com
> 
> 

