For the queue case with backlogs (when the consumers don't keep up),
you may want to experiment with
<kahaDB concurrentStoreAndDispatchQueues="false" />
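As a rough sketch, combined with the kahaDB attributes you already have, that would look something like the following in activemq.xml (directory paths are placeholders; adjust to your install):

```xml
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost">
  <persistenceAdapter>
    <!-- concurrentStoreAndDispatchQueues="false" forces messages to be
         stored before dispatch, which can reduce write-lock contention
         between producers and the async-ack cleanup when a backlog exists -->
    <kahaDB directory="${activemq.base}/data/kahadb"
            concurrentStoreAndDispatchQueues="false"
            enableJournalDiskSyncs="false"
            indexWriteBatchSize="1000"
            enableIndexWriteAsync="true"/>
  </persistenceAdapter>
</broker>
```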


On 12 September 2011 01:08, bbansal <bhup...@groupon.com> wrote:
> Hello folks,
>
> I am evaluating ActiveMQ for some simple scenarios. The web-server will push
> notifications to the queue/topic to be consumed by one or many consumers.
> The one requirement is that the web-server should not be impacted and
> should be able to write at its own speed even if consumers go down.
>
> ActiveMQ is performing very well at about 1500 QPS (8 producer threads,
> persistence, KahaDB). The KahaDB parameters being used are
>
> enableJournalDiskSyncs="false" indexWriteBatchSize="1000"
> enableIndexWriteAsync="true"
>
> The system works great when consumers are all caught up. The issue arises
> when I test scenarios with backlogged data (running the producers for
> 30 minutes or so) and then start the consumers. The consumers show a good
> consumption rate, but the producers (8 threads, same as before) cannot do
> more than 120 QPS, a degradation of more than 90%.
>
> I ran a profiler (JProfiler) on the code, and it looks like the writers
> are getting stuck waiting for write locks while competing with
> removeAsyncMessages() and the calls that clear messages acknowledged by
> clients.
>
> I have seen similar complaints from other folks. Are there settings we
> can use to fix the problem? I don't want to degrade any guarantee level
> (e.g. disable acks).
>
> I would be more than happy to run experiments with different settings if
> folks have suggestions.
>
>
>
> --
> View this message in context: 
> http://activemq.2283324.n4.nabble.com/Backlog-data-causes-producers-to-slow-down-tp3806018p3806018.html
> Sent from the ActiveMQ - User mailing list archive at Nabble.com.
>



-- 
http://fusesource.com
http://blog.garytully.com
