Kevin Burton has already identified that GCing destinations is horribly
inefficient when the number of destinations is large; search the archives
for his posts for more details.  He has proposed some fixes (in Git, but
against an earlier version), but as far as I know no one in the community,
myself included, has spent the time to rebase his changes onto the current
baseline.  I don't think it would take that long to do.

In your new thread dumps, the thread doing the indexing is holding locks
that no one else is waiting for, so I don't see why you claim that indexing
is now the point of contention.

Four threads are waiting on 0x0000000780169de0 and one thread is waiting on
0x00000007812b3bd0, but I don't see any threads holding those locks.  Is
this a full thread dump, or just a partial?
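You can check this mechanically: a holder shows up in the dump as "locked
<addr>" on the same address the waiters show.  A minimal sketch (the dump
excerpt below is made up to keep it self-contained; against the real
attachments you would grep the saved dump files):

```shell
# Hypothetical thread-dump excerpt; the lock address matches the one above.
cat > dump-sample.txt <<'EOF'
"Thread-A" prio=5 tid=0x1 nid=0x1 in Object.wait()
   java.lang.Thread.State: WAITING (on object monitor)
        - waiting on <0x0000000780169de0> (a java.lang.Object)
"Thread-B" prio=5 tid=0x2 nid=0x2 runnable
   java.lang.Thread.State: RUNNABLE
        - locked <0x0000000780169de0> (a java.lang.Object)
EOF

# Count waiters on the monitor:
grep -c 'waiting on <0x0000000780169de0>' dump-sample.txt
# Count holders (a complete dump should show the holder, if one exists):
grep -c 'locked <0x0000000780169de0>' dump-sample.txt
```

If the second grep finds nothing across the whole dump, the dump is likely
truncated.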

On Thu, Mar 31, 2016 at 5:43 AM, Christopher Shannon <
christopher.l.shan...@gmail.com> wrote:

> The CountStatisticImpl class is basically just a wrapper for an atomic
> long.  It's used all over the place (destinations, subscriptions, inside
> KahaDB, etc.) to keep track of various metrics in a non-blocking way.
> There's a DestinationStatistics object for each destination, and it has
> several of those counters in it.  There's an option to disable metrics
> tracking, but that will only prevent the counters from actually
> incrementing, not stop the allocations.  Some of the metrics are required
> by parts of the broker and won't honor the disable flag (such as the
> message counts KahaDB needs), but I plan on going back and double-checking
> all of those metrics at some point soon to make sure everything that can
> honor that flag does.  Since you have a lot of destinations, you are
> seeing a lot of those counters.
>
> If you disable disk syncs then you need to be aware that you are risking
> message loss.  Since you are no longer waiting for the data to be
> persisted to disk before sending the ack to the producer, there's a
> chance of losing messages if something happens (like a power outage).
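For reference, this is the KahaDB knob in question (a sketch, not a
recommendation; enableJournalDiskSyncs defaults to true precisely because
of the loss window described above):

```xml
<persistenceAdapter>
    <!-- Trades durability for throughput: acks can be sent before data hits disk. -->
    <kahaDB directory="${activemq.data}/kahadb"
            enableJournalDiskSyncs="false"/>
</persistenceAdapter>
```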
>
> On Wed, Mar 30, 2016 at 2:41 PM, Shobhana <shobh...@quickride.in> wrote:
>
> > Hi Tim & Christopher,
> >
> > I tried version 5.13.2, but as you suspected, it did not solve my
> > problem.
> >
> > We don't have any wildcard subscriptions.  Most of the topics have a
> > maximum of 8 subscriptions (it ranges between 2 and 8), and a few topics
> > (~25-30 so far) have more than 8 (this is not fixed; it depends on the
> > number of users interested in these specific topics; the max I have seen
> > is 40).
> >
> > Btw, I just realized that I have set a very low value for destination
> > inactivity (30 secs), and hence many destinations are getting removed
> > very early.  Later, when a message is published to the same destination,
> > the destination gets created again.  I will correct this by increasing
> > this timeout to appropriate values based on each destination (varying
> > from 1 hour to 1 day).
> >
> > Today, after upgrading to version 5.13.2 in my test env, I tried
> > different configurations to see if there is any improvement.  In
> > particular, I disabled journal disk syncs (since many threads were
> > waiting on KahaDB-level operations) and also disabled metadata updates.
> > With these changes, the contention moved to a different place (KahaDB
> > index updates; see attached thread dumps).
> >
> > ThreadDump1.txt
> > <http://activemq.2283324.n4.nabble.com/file/n4710055/ThreadDump1.txt>
> > ThreadDump2.txt
> > <http://activemq.2283324.n4.nabble.com/file/n4710055/ThreadDump2.txt>
> >
> > I will test again after increasing the index cache size (currently set
> > to the default of 10000) to 100000 and see if it makes any improvement.
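That setting also lives on the KahaDB adapter (a sketch; indexCacheSize is
the number of index pages kept in memory, so raising it also raises heap
usage):

```xml
<persistenceAdapter>
    <!-- Default is 10000 index pages; a larger cache reduces index disk reads. -->
    <kahaDB directory="${activemq.data}/kahadb"
            indexCacheSize="100000"/>
</persistenceAdapter>
```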
> >
> > Also, the histo reports showed a huge number (1393177) of
> > org.apache.activemq.management.CountStatisticImpl instances and 1951637
> > instances of java.util.concurrent.locks.ReentrantLock$NonfairSync.  See
> > the attached histo for the complete report.
> >
> > histo.txt
> > <http://activemq.2283324.n4.nabble.com/file/n4710055/histo.txt>
> >
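If regenerating the histogram, the jmap output can be filtered directly.  A
minimal sketch (the snapshot file and its two sample rows are made up here
to keep it self-contained; against a live broker you would capture
`jmap -histo:live <pid>` first):

```shell
# Stand-in for a saved `jmap -histo:live <pid>` snapshot (sample rows only).
cat > histo-sample.txt <<'EOF'
   1:       1951637       62452384  java.util.concurrent.locks.ReentrantLock$NonfairSync
   2:       1393177       44581664  org.apache.activemq.management.CountStatisticImpl
EOF

# Pull out just the ActiveMQ statistics counters' instance count:
awk '/CountStatisticImpl/ { print $2 }' histo-sample.txt
# prints 1393177
```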
> > What are these org.apache.activemq.management.CountStatisticImpl
> > instances?  Is there any way to avoid them?
> >
> > Thanks,
> > Shobhana
> >
> > --
> > View this message in context:
> >
> http://activemq.2283324.n4.nabble.com/ActiveMQ-with-KahaDB-as-persistent-store-becomes-very-slow-almost-unresponsive-after-creating-large-s-tp4709985p4710055.html
> > Sent from the ActiveMQ - User mailing list archive at Nabble.com.
> >
>
