Hi all,

When aggregating messages from an ActiveMQ queue into a LevelDB aggregation 
repository, we see that over time (~1h) the consumption of messages decreases 
significantly, at worst down to 0.5 msg/s.

Using a normal message consumer, without aggregation, the throughput is >40 msg/s.

We have tried different settings, but without any significant improvement.

Moving from Camel 2.25.0 to 3.2.0 was a big improvement, but it is still not 
fast enough.

I know there are no quick fixes, but if anyone has had the same issue, please 
let me know what to look for.

We see this in all of our environments, from local dev machines to the TEST 
and PROD environments.

Our solution requires that messages be processed in sequential order, so we 
cannot use any async options, multiple consumers, or any thread-related 
processing where messages could end up in the wrong order.

The ActiveMQ 5.15.9 setup is standard, apart from an increased heap to handle 
the load.
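Roughly, the route looks like this (a simplified sketch, not our actual code: 
the queue name, strategy class, and exact completion options are illustrative):

```java
import java.util.Map;

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.leveldb.LevelDBAggregationRepository;
import org.apache.camel.model.dataformat.JsonLibrary;

public class AggregateRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Persistent aggregation state in LevelDB; repo name and file are illustrative.
        LevelDBAggregationRepository repo =
            new LevelDBAggregationRepository("aggRepo", "data/leveldb.dat");

        from("activemq:queue:in?concurrentConsumers=1")   // single consumer keeps order
            .unmarshal().json(JsonLibrary.Jackson, Map.class) // JSON -> Map<String,Object>
            .aggregate(constant(true), new ListAggregationStrategy()) // appends Map to a List
                .aggregationRepository(repo)
                .completionInterval(30 * 60 * 1000L)      // flush roughly every 30 minutes
            .to("direct:process");
    }
}
```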



This snapshot was taken after aggregating >3000 messages. After 30 minutes we 
flush the aggregation and process the messages; aggregation then continues 
with a slight increase in throughput at the start, but it soon starts 
degrading again.

Each message is a JSON string unmarshalled to a Map<String,Object>, which is 
then aggregated into a List. We have checked and measured the unmarshal step 
and can't see anything that can be improved there.
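To be clear about what the aggregation itself does, here is the logic sketched 
without Camel classes (names are illustrative): each incoming Map body is 
appended to a List that serves as the running aggregate, mirroring what an 
AggregationStrategy would do on old/new exchanges.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class AggregateSketch {

    // Mirrors AggregationStrategy.aggregate(oldExchange, newExchange):
    // on the first message there is no previous aggregate, so start a new List,
    // then append the new Map body and return the List as the new aggregate.
    static List<Map<String, Object>> aggregate(List<Map<String, Object>> oldBody,
                                               Map<String, Object> newBody) {
        if (oldBody == null) {
            oldBody = new ArrayList<>();
        }
        oldBody.add(newBody);
        return oldBody;
    }

    public static void main(String[] args) {
        List<Map<String, Object>> agg = null;
        for (int i = 0; i < 3; i++) {
            Map<String, Object> msg = new HashMap<>();
            msg.put("seq", i);
            agg = aggregate(agg, msg);
        }
        System.out.println(agg.size()); // prints 3
    }
}
```

The per-message work here is trivial, which is why we suspect the repository 
persistence rather than the strategy itself.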


ANY input is much appreciated.

Happy Easter to all who celebrate it, and to those who don't, I wish you a 
nice weekend.

M
