I strongly believe the messages on the MyEventQueue1 and MyEventQueue2 queues
are consumed successfully.
As indicated, many duplicate messages are detected for the same queue, and
they end up in the DLQ. I deployed the broker DLQ plugin
(discardingDLQBrokerPlugin) to discard all of these messages.
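
For reference, this is roughly how I have the plugin wired into activemq.xml
(a minimal sketch, not a copy of my exact config):

<plugins>
  <!-- drop every message routed to a DLQ instead of persisting it -->
  <discardingDLQBrokerPlugin dropAll="true"
                             dropTemporaryTopics="true"
                             dropTemporaryQueues="true"/>
</plugins>

The plugins element sits inside the broker element in activemq.xml.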

I did stop the broker DLQ plugin and inspected a sample of these DLQ
messages; all of them carry the dlqDeliveryFailureCause header with the
detail: java.lang.Throwable: duplicate from store for queue. These sample
messages were indeed processed/consumed by the application, hence my belief
that these messages are consumed successfully. I do not think there is any
expiration set on these messages.

The symptom I see: the messages are no longer there, yet the journal logs say
otherwise (and they keep growing). Occasionally a few data files get removed,
but the journal log files are deleted at a much slower rate than they are
created. This behavior is observed with concurrentStoreAndDispatchQueues set
to true. When I restart the broker, all of these journal log files
(db-xxx.log) get removed/deleted.

When I switch the mKahaDB instance for the 2 queues to
concurrentStoreAndDispatchQueues=false, I can see the journal log files get
deleted, i.e. there is only 1 journal log file at any point in time. The
number in db-<number>.log increases fairly quickly, but at any point in time
there is only 1 journal log file.
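
For clarity, the mKahaDB layout I am describing is roughly the following
(a sketch only; the directory and the catch-all entry are placeholders, the
queue names are the real ones):

<persistenceAdapter>
  <mKahaDB directory="${activemq.data}/mkahadb">
    <filteredPersistenceAdapters>
      <!-- dedicated stores for the two busy event queues -->
      <filteredKahaDB queue="MyEventQueue1">
        <persistenceAdapter>
          <kahaDB concurrentStoreAndDispatchQueues="false"/>
        </persistenceAdapter>
      </filteredKahaDB>
      <filteredKahaDB queue="MyEventQueue2">
        <persistenceAdapter>
          <kahaDB concurrentStoreAndDispatchQueues="false"/>
        </persistenceAdapter>
      </filteredKahaDB>
      <!-- catch-all store for every other destination -->
      <filteredKahaDB>
        <persistenceAdapter>
          <kahaDB/>
        </persistenceAdapter>
      </filteredKahaDB>
    </filteredPersistenceAdapters>
  </mKahaDB>
</persistenceAdapter>

Flipping concurrentStoreAndDispatchQueues back to true on those two kahaDB
entries brings back the growing-journal behavior described above.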

I believe the message workload is too fast to be handled by the mKahaDB
storage, so I need to test other KahaDB options to get the best result. I am
looking at indexCacheSize, indexWriteBatchSize, compactAcksIgnoresStoreGrowth,
and possibly setting journalDiskSyncStrategy="never" to see the results on
these 2 queues.
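
Something along these lines on the nested kahaDB entries for those two queues
(the values here are starting points I still need to benchmark, not
recommendations):

<kahaDB concurrentStoreAndDispatchQueues="false"
        indexCacheSize="100000"
        indexWriteBatchSize="10000"
        compactAcksIgnoresStoreGrowth="true"
        journalDiskSyncStrategy="never"/>

I am aware that journalDiskSyncStrategy="never" trades durability for
throughput (a broker or OS crash can lose messages not yet synced to disk),
so that one would only go in after testing.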

Do share if you have any ideas.

--
Sent from: http://activemq.2283324.n4.nabble.com/ActiveMQ-User-f2341805.html
