ActiveMQ version 5.5.1
Broker config: memory limit 256M, store limit 10G, temp (swap) limit 10G,
persistence=true, producerFlowControl=true
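For reference, the limits above correspond to a systemUsage block like the following in activemq.xml (a sketch of my setup; the exact limit syntax is the standard 5.x form):

```xml
<systemUsage>
  <systemUsage>
    <!-- broker-wide memory limit for messages held in RAM -->
    <memoryUsage>
      <memoryUsage limit="256 mb"/>
    </memoryUsage>
    <!-- persistent message store limit -->
    <storeUsage>
      <storeUsage limit="10 gb"/>
    </storeUsage>
    <!-- temp/swap space for non-persistent messages spooled to disk -->
    <tempUsage>
      <tempUsage limit="10 gb"/>
    </tempUsage>
  </systemUsage>
</systemUsage>
```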

I'm using the fileQueueCursor destination policy
(http://activemq.apache.org/message-cursors.html) for a queue.  It appears
to page messages into the cursor properly, without blocking, once memory
usage reaches 70%.  The queue is read by a transacted-session consumer that
keeps its transaction open across many messages (>10k).  The messages
themselves are large (~800k each).  What I see is that the queue's
pagedInMessages data structure grows without bound and holds a reference to
every message dispatched to the consumer.  Eventually the broker runs out
of memory.
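For concreteness, the destination policy I'm using looks roughly like this (a sketch; the catch-all queue=">" pattern and the memoryLimit value are illustrative, and cursorMemoryHighWaterMark is shown only to make the 70% threshold explicit, since 70 is already the default):

```xml
<destinationPolicy>
  <policyMap>
    <policyEntries>
      <!-- file-based cursor: pending messages spill to disk
           instead of being held in broker memory -->
      <policyEntry queue=">" producerFlowControl="true"
                   cursorMemoryHighWaterMark="70">
        <pendingQueuePolicy>
          <fileQueueCursor/>
        </pendingQueuePolicy>
      </policyEntry>
    </policyEntries>
  </policyMap>
</destinationPolicy>
```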

Producer flow control does not get triggered (though if it did, my
transaction would not be able to receive all the messages it needs and
would never complete).

Is it just not possible to use ActiveMQ in this way?  What I mean is that
I'd like the total size of an open transaction's messages (message count *
message size) to be bounded by the disk store limits, not by memory limits.
Since the messages are persisted, it seems like this should be possible.

Any suggestions for what I should do differently?  Would the
connection.setTransactedIndividualAck setting in 5.6 help here?
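If that setting does help, I assume it could also be enabled via the connection URI rather than in code — something like the following, assuming the property is exposed as a jms.* option on the connection factory in 5.6 (I have not verified this):

```
tcp://localhost:61616?jms.transactedIndividualAck=true
```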

I'm happy to work on fixes/tests for this but need some guidance.


--
View this message in context: 
http://activemq.2283324.n4.nabble.com/out-of-memory-using-producer-flow-control-and-fileQueueCursor-tp4415752p4415752.html
Sent from the ActiveMQ - User mailing list archive at Nabble.com.
