I've seen the same behaviour today with ActiveMQ 5.1.0 (JDBC-only storage) and transactional Java consumers. The cause in my case was the AMQ-1838 bug, which resulted in a huge number of messages being paged in; the queue's fetcher had its AtomicLong lastMessageId set to a number slightly larger than the maximum message id for that queue in the database. I'm not sure why that happened. I suspect the broker read past the messages and, after running out of memory, simply started to discard them during page-in (not in the database; a broker restart always fixed the problem). So I applied the patch attached to AMQ-1838, and I could then receive 1 million messages without any trouble.
The fastest way to see whether you've run into the same issue is to watch the memory consumed by ActiveMQ during message consumption. Without the bug (i.e. with the patch) it stays flat; with the bug it grows quickly to the configured memory limit.

--Mario

On Fri, Aug 29, 2008 at 5:01 PM, Bryan Murphy <[EMAIL PROTECTED]> wrote:
> More information...
> There were ~4,000 messages still pending in the queue. I took a closer
> look at the two active consumers under jconsole: the consumer that was
> running had approximately 800 messages in its
> MessageCountAwaitingAcknowledge; the consumer that was idle had 0.
>
> I put a few new messages into the queue, and this caused the idle
> process to unblock and start processing messages. Whatever this did, it
> unstuck the idle process, and I now have significantly fewer than 4,000
> messages, and the count continues to tick downward.
>
> It's pretty clear that ActiveMQ decided to stop sending messages to my
> consumer.
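For reference, the memory check Mario describes can be automated instead of eyeballed in jconsole. Below is a minimal sketch that polls the broker's MemoryPercentUsage attribute over JMX while a consumer drains the queue. The JMX URL, port 1099, and the broker name "localhost" are assumptions for a default ActiveMQ 5.x setup with JMX enabled; adjust them for your installation. With the AMQ-1838 patch the reported percentage should stay roughly flat; without it, it climbs toward the configured limit.

```java
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class BrokerMemoryWatch {

    // Builds the broker MBean name used by ActiveMQ 5.x
    // (pre-5.8 naming scheme: BrokerName=<name>,Type=Broker).
    static ObjectName brokerObjectName(String brokerName) throws Exception {
        return new ObjectName(
            "org.apache.activemq:BrokerName=" + brokerName + ",Type=Broker");
    }

    // Polls MemoryPercentUsage a few times, one second apart.
    static void pollMemory(String jmxUrl, String brokerName) throws Exception {
        JMXConnector jmxc =
            JMXConnectorFactory.connect(new JMXServiceURL(jmxUrl));
        try {
            MBeanServerConnection conn = jmxc.getMBeanServerConnection();
            ObjectName broker = brokerObjectName(brokerName);
            for (int i = 0; i < 10; i++) {
                Object pct = conn.getAttribute(broker, "MemoryPercentUsage");
                System.out.println("MemoryPercentUsage = " + pct);
                Thread.sleep(1000);
            }
        } finally {
            jmxc.close();
        }
    }

    public static void main(String[] args) throws Exception {
        if (args.length < 1) {
            // No JMX URL given; nothing to poll. Example URL shown below.
            System.out.println(
                "usage: BrokerMemoryWatch service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi [brokerName]");
            return;
        }
        pollMemory(args[0], args.length > 1 ? args[1] : "localhost");
    }
}
```

Run it while your consumers are working through a backlog; a steadily climbing percentage that plateaus at the memory limit matches the buggy behaviour described above.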