I think you are experiencing automatic producer flow control in
response to the memory utilisation limits, see:
http://activemq.apache.org/what-happens-with-a-fast-producer-and-slow-consumer.html

The details of how to disable are at:
http://activemq.apache.org/producer-flow-control.html
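
As a rough sketch of what that looks like in broker configuration (attribute
names per the producer-flow-control page; the wildcard destinations and the
memory limit value here are just illustrations, not recommendations):

```xml
<!-- fragment of conf/activemq.xml - illustrative only -->
<destinationPolicy>
  <policyMap>
    <policyEntries>
      <!-- turn off producer flow control for all queues -->
      <policyEntry queue=">" producerFlowControl="false" memoryLimit="10mb"/>
      <!-- and for all topics, if needed -->
      <policyEntry topic=">" producerFlowControl="false"/>
    </policyEntries>
  </policyMap>
</destinationPolicy>
```

Note that with flow control off, a fast producer will spill messages to the
persistent store rather than block, so the systemUsage limits still matter.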

Just fyi, there is a bunch of good usage information hidden in:
http://activemq.apache.org/faq.html

On the slowness of recovery at restart: this is a known issue that is
being addressed for 6.0.
There is a cleanup task, working alongside the checkpointing, that can
reduce the amount of work outstanding on a restart. It runs every 30
seconds or so, but it is only relevant while messages are being
consumed. With a large backlog of unconsumed messages, the active data
file list grows and the collection of indexes that need to be
rebuilt on restart (recovery) gets large.

There are a bunch of related configuration options (data and index
file size, etc.) on the Kaha store that may help somewhat here, but the
underlying problem will not be addressed until 6.0.
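
For reference, a sketch of where those options live (attribute names as
documented for the 5.x AMQ/Kaha persistence adapter; the values below are
only examples to show the shape, not tuned recommendations):

```xml
<!-- fragment of conf/activemq.xml - illustrative only -->
<persistenceAdapter>
  <amqPersistenceAdapter directory="${activemq.base}/data"
                         maxFileLength="32mb"
                         checkpointInterval="30000"
                         cleanupInterval="30000"
                         indexBinSize="8192"/>
</persistenceAdapter>
```

Smaller data files can be reclaimed sooner by the cleanup task; a larger
index bin size can help when the store holds a large message backlog.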


2008/11/14 bonnyr <[EMAIL PROTECTED]>:
>
> Gary,
>
> Thanks for the pointers. I'll try those. In the meantime I'm having other
> issues...
> I've cleared up the data directory in the broker area (we're using kaha for
> persistence)
> and restarted the test. I am not able to create the error again !?
> However, I've noticed another, more depressing problem :) - as long as there
> are consumers
> and producers, and the consumers keep up, I get a sustained rate that is
> reasonably stable. So far so
> good. When the consumer dies and the queue builds up, the rate gradually
> drops until (at around 1M msgs in the queue)
> it stops almost completely and will allow around 3 m/s. Connecting the
> consumer at this point does not
> help completely. In fact, the consumer is not able to consume at the maximum
> rate I've observed
> on the test rig (~1200 m/s) but rather at ~250 m/s and at that point the
> producer has recovered its
> rate to ~100 m/s. Why is that?
>
> Also, I've shut down the broker and restarted it (there were ~800K messages
> in the queue plus whatever
> other operations related to acks/deletions stored in the roll forward files)
> and it has taken so far 1h 20m
> to go through about 80% of the data files (in the journal directory) but
> what's more troubling is that
> the operation gets slower and slower, almost exponentially. Again, any idea
> as to why?
>
> Cheers,
>
> Bonny
>
> Gary Tully wrote:
>>
>>> This test is the first time we're seeing this message - our applications
>>> are
>>> in production environments
>>> and this is not being exhibited at all, so I think our batch glue layer
>>> is
>>> not the source of the problem.
>>>
>> Ok, I am not thinking it is the source of the problem but it may be
>> the key to reproducing the problem in a test case.
>>
>>> Also, the code I'm using in the test uses other glue classes which are
>>> part
>>> of our framework and cannot
>>> be easily separated to form a stand alone test case. Could you perhaps
>>> point
>>> me to a test case that
>>> perhaps resembles my setup and I'll be happy to modify it to test what
>>> I'm
>>> trying to achieve?
>>>
>> Great. Possibly look at
>> http://svn.apache.org/repos/asf/activemq/trunk/activemq-core/src/test/java/org/apache/activemq/usecases/CompositePublishTest.java
>> The support class that it extends could provide some useful
>> scaffolding, i.e. it starts an embedded broker with consumers and
>> producers and validates that the produced messages were consumed.
>> Another example of the same sort of thing, creating consumers and
>> firing off loads of messages, asserting behavior is:
>> https://svn.apache.org/repos/asf/activemq/trunk/activemq-core/src/test/java/org/apache/activemq/JmsQueueSendReceiveUsingTwoSessionsTest.java
>>
>> Hope this helps get you started.
>>
>>
>
> --
> View this message in context: 
> http://www.nabble.com/AMQ-5.2.0-RC3%3A-JMS-Exception-Could-not-correlate-acknowledgment-with-dispatched-message-tp20475017p20494692.html
> Sent from the ActiveMQ - User mailing list archive at Nabble.com.
>
>