Hi Gary,
We have also observed this problem: when the backlog piles up (e.g. the
consumers are disconnected for some reason, such as a network outage), the
producers slow down as well, even though producer flow control is disabled and
sends are asynchronous.
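
For reference, here is a minimal sketch of the producer setup we use; the
broker URL, queue name and payload below are only placeholders, and flow
control itself is switched off on the broker side via producerFlowControl="false"
in the destination policy:

import javax.jms.Connection;
import javax.jms.DeliveryMode;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class AsyncProducerSketch {
    public static void main(String[] args) throws JMSException {
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        factory.setUseAsyncSend(true); // send() no longer blocks on a broker ack
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer =
                session.createProducer(session.createQueue("TEST.QUEUE"));
        producer.setDeliveryMode(DeliveryMode.PERSISTENT);
        producer.send(session.createTextMessage("notification"));
        connection.close();
    }
}

Even with this setup, the send rate collapses once the store has a large backlog.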

Thanks and regards
Kaustubh

On Tue, Sep 13, 2011 at 3:18 AM, bbansal <bhup...@groupon.com> wrote:

> Hey Gary,
>
> I will try to write a test case, but based on my JProfiler snapshot it looks
> to me like the contention is on the write lock, between the removeMessages()
> calls made after acks are received from the client side and the incoming
> producer messages.
>
> I am going to play with the producer-flow-control settings and the other
> configurations mentioned in this thread, and report back if I see a
> significant difference.
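>
> As a starting point, here is a minimal sketch of the broker-side policy I plan
> to try, using an embedded broker as a stand-in for activemq.xml (the memory
> limit and connector URL are just placeholders):
>
> import org.apache.activemq.broker.BrokerService;
> import org.apache.activemq.broker.region.policy.PolicyEntry;
> import org.apache.activemq.broker.region.policy.PolicyMap;
>
> public class FlowControlOffBroker {
>     public static void main(String[] args) throws Exception {
>         // Stop the broker from throttling producers when destination
>         // memory or store limits are hit.
>         PolicyEntry policy = new PolicyEntry();
>         policy.setQueue(">");                    // apply to all queues
>         policy.setProducerFlowControl(false);
>         policy.setMemoryLimit(64 * 1024 * 1024); // 64 MB, placeholder
>
>         PolicyMap policyMap = new PolicyMap();
>         policyMap.setDefaultEntry(policy);
>
>         BrokerService broker = new BrokerService();
>         broker.setPersistent(true);
>         broker.setDestinationPolicy(policyMap);
>         broker.addConnector("tcp://localhost:61616");
>         broker.start();
>         broker.waitUntilStopped();
>     }
> }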
>
> Best
> Bhupesh
>
>
> On Mon, Sep 12, 2011 at 9:17 AM, Gary Tully [via ActiveMQ] wrote:
>
> > Regarding the results of your JProfiler profiling, it would be good to
> > identify whether there is a real contention problem there.
> > If you can generate a simple JUnit test case that demonstrates the
> > behavior you are seeing, please open a JIRA issue and we can
> > investigate some more.
> > A test case will help focus the analysis.
> >
> > On 12 September 2011 01:08, bbansal <[hidden email]> wrote:
> >
> > > Hello folks,
> > >
> > > I am evaluating ActiveMQ for some simple scenarios. The web server will
> > > push notifications to a queue/topic to be consumed by one or many
> > > consumers.
> > >
> > > The one requirement is that the web servers should not be impacted: they
> > > should be able to write at their own speed even if the consumers go down.
> > >
> > > ActiveMQ is performing very well at about 1500 QPS (8 producer threads,
> > > persistence, KahaDB). The KahaDB parameters being used are:
> > >
> > > enableJournalDiskSyncs="false" indexWriteBatchSize="1000"
> > > enableIndexWriteAsync="true"
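> > >
> > > For completeness, a minimal sketch of how the broker is being set up with
> > > those KahaDB options, shown programmatically rather than via activemq.xml
> > > (the data directory and connector URL are placeholders):
> > >
> > > import java.io.File;
> > > import org.apache.activemq.broker.BrokerService;
> > > import org.apache.activemq.store.kahadb.KahaDBPersistenceAdapter;
> > >
> > > public class KahaDbBrokerSketch {
> > >     public static void main(String[] args) throws Exception {
> > >         KahaDBPersistenceAdapter kahaDb = new KahaDBPersistenceAdapter();
> > >         kahaDb.setDirectory(new File("data/kahadb"));  // placeholder path
> > >         kahaDb.setEnableJournalDiskSyncs(false); // no fsync per journal write
> > >         kahaDb.setIndexWriteBatchSize(1000);     // batch index updates
> > >         kahaDb.setEnableIndexWriteAsync(true);   // write the index asynchronously
> > >
> > >         BrokerService broker = new BrokerService();
> > >         broker.setPersistenceAdapter(kahaDb);
> > >         broker.addConnector("tcp://localhost:61616");
> > >         broker.start();
> > >         broker.waitUntilStopped();
> > >     }
> > > }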
> > >
> > > The system works great if the consumers are all caught up. The issue is
> > > when I test scenarios with backlogged data (keep the producers running for
> > > 30 minutes or so) and then start the consumers. The consumers show a good
> > > consumption rate, but the producers (8 threads, same as before) cannot do
> > > more than 120 QPS.
> > >
> > > That is a degradation of more than 90%.
> > >
> > > I ran a profiler (JProfiler) on the code, and it looks like the writers
> > > are getting stuck waiting for write locks while competing with
> > > removeAsyncMessages(), i.e. the call that clears messages that have been
> > > acknowledged by the clients.
> > >
> > > I have seen similar complaints from some other folks. Are there settings
> > > we can use to fix this problem? I don't want to degrade any guarantee
> > > level (e.g. by disabling acks).
> > >
> > > I would be more than happy to run experiments with different settings if
> > > folks have suggestions.
> > >
> > >
> > >
> >
> >
> >
> > --
> > http://fusesource.com
> > http://blog.garytully.com
> >
> >
>
>
>
