FWIW I changed the contains method as follows:

@Override
public boolean contains(MessageReference message) {
    if (message != null) {
        // O(1) lookup by message ID instead of scanning every pending message
        return map.containsKey(message.getMessageId());
    }
    return false;
}

With this change my test run dropped from 41 minutes to 29 minutes.  Can we get
this change into the upcoming 5.13 release?
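
For anyone curious, here is a rough standalone sketch of why the keyed lookup
matters at this queue depth.  It is not ActiveMQ code: a plain LinkedHashMap
(chosen only for a predictable iteration order, not necessarily what
OrderedPendingList uses internally) stands in for the list's map, and the class
name and IDs are made up.  It just contrasts the O(n) scan over map.values()
with the O(1) containsKey() lookup at around 100,000 entries:

import java.util.LinkedHashMap;
import java.util.Map;

public class ContainsSketch {
    public static void main(String[] args) {
        // Populate a map of ~100,000 entries, roughly the size of the queue in question.
        Map<String, String> map = new LinkedHashMap<>();
        for (int i = 0; i < 100_000; i++) {
            map.put("ID:producer-" + i, "message-" + i);
        }
        // Worst case for the scan: the value inserted last.
        String lastValue = "message-99999";
        String lastKey = "ID:producer-99999";

        // Old approach: linear scan over every value, O(n) per contains() call.
        long t0 = System.nanoTime();
        boolean foundByScan = false;
        for (String value : map.values()) {
            if (value.equals(lastValue)) {
                foundByScan = true;
                break;
            }
        }
        long scanMicros = (System.nanoTime() - t0) / 1_000;

        // New approach: hash lookup on the key, O(1) per contains() call.
        long t1 = System.nanoTime();
        boolean foundByKey = map.containsKey(lastKey);
        long keyMicros = (System.nanoTime() - t1) / 1_000;

        System.out.printf("scan: %b in %d us, containsKey: %b in %d us%n",
                foundByScan, scanMicros, foundByKey, keyMicros);
    }
}

It is not a proper benchmark (single pass, no warmup), just an illustration of
where the time in contains() was going.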

On Thu, Nov 26, 2015 at 11:44 AM, David Sitsky <david.sit...@gmail.com>
wrote:

> Hi,
>
> I have updated my application from ActiveMQ 5.3 to 5.11.1 and have noticed
> a performance degradation issue.  Running a number of jstacks, I can see the
> broker is often stuck here:
>
> "Queue:master-items" Id=122 RUNNABLE
> at
> org.apache.activemq.broker.region.cursors.OrderedPendingList.contains(OrderedPendingList.java:144)
> at
> org.apache.activemq.broker.region.Queue.doPageInForDispatch(Queue.java:1930)
> at org.apache.activemq.broker.region.Queue.pageInMessages(Queue.java:2119)
> at org.apache.activemq.broker.region.Queue.iterate(Queue.java:1596)
> -  locked java.lang.Object@253c3089
> at
> org.apache.activemq.thread.DedicatedTaskRunner.runTask(DedicatedTaskRunner.java:112)
> at
> org.apache.activemq.thread.DedicatedTaskRunner$1.run(DedicatedTaskRunner.java:42)
>
> Number of locked synchronizers = 1
> - java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync@2eb46567
>
> For this specific queue, there are a large number of items in it... around
> 100,000.  However, I noticed the code for contains() has:
>
>     public boolean contains(MessageReference message) {
>         if (message != null) {
>             for (PendingNode value : map.values()) {
>                 if (value.getMessage().equals(message)) {
>                     return true;
>                 }
>             }
>         }
>         return false;
>     }
>
> This will obviously be very slow.  Given the Map is keyed by message ID,
> can't we do a map.containsKey(message.getMessageId()) instead?  I noticed the
> remove() method already does this.  I am not familiar with the internals of
> ActiveMQ, so I don't know the ramifications of this change.
>
> Cheers,
> David
>
