For high performance we have an optimised acknowledgement mode, which acknowledges messages in batches rather than on every single message. This is on by default in 4.0, but we've since disabled it as some folks have experienced problems in some scenarios. Batching acknowledgements causes the client to keep consumed messages around until they are acknowledged, which further increases RAM usage. Given you are constrained on heap size, you might want to disable this feature...
See http://incubator.apache.org/activemq/connection-configuration-uri.html, so connect via tcp://localhost:61616?jms.optimizeAcknowledge=false (in 4.0.1 this will be the default value). Other than that, as I said, if you don't want a heap of such a large size then just reduce the prefetch; things won't run quite as fast, but if you have so little RAM on the boxes running ActiveMQ then you have to trade off RAM against performance. On 6/12/06, skarthik <[EMAIL PROTECTED]> wrote:
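For illustration, both tuning knobs mentioned above can be combined as options on a single broker connection URI. This is just a sketch: the prefetch value of 1000 is an arbitrary example, and the exact option names (jms.optimizeAcknowledge, jms.prefetchPolicy.topicPrefetch) should be checked against the connection-configuration-uri page linked above for your ActiveMQ version:

```
tcp://localhost:61616?jms.optimizeAcknowledge=false&jms.prefetchPolicy.topicPrefetch=1000
```

Passing this URI to the ActiveMQConnectionFactory disables batched acknowledgements and lowers the topic prefetch, trading some throughput for a smaller consumer heap.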
James,

Thanks for your valuable time following up on this issue. Well, I tried out some more test scenarios over the weekend and now DON'T think it is a memory leak problem, but definitely some kind of cleanup problem. I would also like to reiterate that these issues did not exist in the RC2 build.

I ran the test programs submitted in JIRA with the following parameters: the producer publishes messages of 10K size, the consumer heap was increased to 220 MB (to allow the consumer to store more than 20,000 messages, if required), and the default topic prefetch value was used (I think you mentioned it was 4000). With this heap size, the consumer runs out of memory after consuming around 19,000 messages.

I then increased the consumer heap size to 225 MB, and a peculiar thing happens. After using up almost all of the heap (225 MB, after consuming about 21,500-odd messages), the whole heap (about 210 MB) is freed up in one shot. This cycle (of using up to 225 MB and then freeing up all of it) repeats after every 21,500-odd messages, and the consumer never runs out of memory. So the question is: why does it wait until 225 MB of memory is used up before some kind of cleanup kicks in?

BTW, since the memory profile test case reported by Feng (my co-worker) in the JIRA issue tracker involved messages of 64K and a heap size of only 64 MB, I can see why it might be difficult to be convinced that a problem exists. But I don't think the prefetch size is the issue, since the consumer consumes messages at the same rate as the producer, unless the messages are kept around even after the consumer has processed them (my understanding of ActiveMQ internals/behavior is not deep). Also, for our actual application (not the test code), we would like to keep the default prefetch size (around 4000-5000).

Anyway, hope you guys get a chance to look at this problem, as time is running out for us (we have a deadline to deliver by this month-end).
I would also like to mention that the reason we are stressing this issue is that, using ActiveMQ 4.0, our actual application (not the test code submitted) does not pass our QA department. The RC2 build does, but it has some known issues. Thanks, Karthik -- View this message in context: http://www.nabble.com/4.0-Consumer-OutOfMemoryError-bug--t1707655.html#a4822824 Sent from the ActiveMQ - User forum at Nabble.com.
-- James ------- http://radio.weblogs.com/0112098/
