Hello guys,

I have an ActiveMQ setup which uses:

- a shared KahaDB store between the 2 brokers
- persistence enabled for KahaDB
- KahaDB located on NFSv4
- 1 GB of memory per broker
- clients connecting with:
failover:(tcp://broker1:61616,tcp://broker2:61616)?randomize=false&backup=true&trackMessages=true
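For reference, the relevant broker-side part looks roughly like this (broker name and directory path are illustrative, not our exact values):

```xml
<broker xmlns="http://activemq.apache.org/schema/core"
        brokerName="broker1" persistent="true">
  <!-- Both brokers point at the same KahaDB directory on the NFSv4 mount;
       the slave blocks on the shared file lock until the master goes down -->
  <persistenceAdapter>
    <kahaDB directory="/mnt/nfs/activemq/kahadb"/>
  </persistenceAdapter>

  <transportConnectors>
    <transportConnector name="openwire" uri="tcp://0.0.0.0:61616"/>
  </transportConnectors>
</broker>
```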

During a JMeter test, we simulated 500 producers simultaneously sending
messages to a single queue in order to fill up KahaDB. We wanted messages
only to be produced, never consumed.

During the test, the JVM could not handle the increasing load, so it
crashed because of GC. The passive broker became the master, but messages
were never delivered after the failover. It was as if the queue was stuck
or something like that.

Messages in other queues were still delivered, but the queue we created for
the test stayed in error, as if it had been banned from ActiveMQ.

Can someone help me prevent this case, please?
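One idea I had but have not validated: cap the store and rely on producer flow control, so producers block (or fail fast) instead of the broker's JVM dying when KahaDB fills up. Something like this in activemq.xml (the elements are standard ActiveMQ config, but the limit values here are made up for illustration):

```xml
<destinationPolicy>
  <policyMap>
    <policyEntries>
      <!-- throttle producers on all queues when limits are reached -->
      <policyEntry queue=">" producerFlowControl="true"/>
    </policyEntries>
  </policyMap>
</destinationPolicy>

<systemUsage>
  <!-- sendFailIfNoSpace makes sends fail with an exception
       instead of blocking forever once a limit is hit -->
  <systemUsage sendFailIfNoSpace="true">
    <memoryUsage>
      <memoryUsage limit="512 mb"/>
    </memoryUsage>
    <storeUsage>
      <storeUsage limit="8 gb"/>
    </storeUsage>
    <tempUsage>
      <tempUsage limit="1 gb"/>
    </tempUsage>
  </systemUsage>
</systemUsage>
```

Would that be the right way to protect the broker in this scenario?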

Any help with this weird behaviour would be appreciated :)

--
Sent from: http://activemq.2283324.n4.nabble.com/ActiveMQ-User-f2341805.html
