To be clear, you're not running the optimizer from that project, only the reader, right?
What is the behavior if you delete the index file and start the broker with only the journal files (which will cause the index to be rebuilt)? I'm trying to determine whether 1) the index is invalid, or 2) the broker is simply loading the KahaDB messages in a way that appears incorrect based on what you're seeing from the reader, irrespective of whether the content is read from the index or the journal files.

FYI, troubleshooting problems with KahaDB data files is often very difficult, because you typically don't notice that something is wrong until days/weeks/months later, which makes it very hard to identify which piece of code (if any) has the actual bug. It will be significantly easier to troubleshoot this problem if you have or can find a way to reliably reproduce it with a short, simple set of steps; have you had any luck in reliably reproducing the problem?

The typical cause of old journal files not being deleted is that there is at least one message in the file that has not yet been consumed (or discarded). Messages in a DLQ count as unconsumed messages; for many users, DLQ messages are the reason why journal files are "inexplicably" not being deleted. Is it possible that your broker has old unconsumed messages?

Tim

On Tue, Oct 31, 2017 at 8:30 PM, t-watana <watanabe.tak...@jp.fujitsu.com> wrote:
> The old journal files aren't cleaned up when running ActiveMQ under the
> following conditions:
>
> * Using ActiveMQ 5.13.1
> * Using MQTT (QoS=2)
> * The MQTT client is Eclipse Paho 1.0.0
> * The subscribers' CleanSession is false.
> * There are over 1,000 subscribers.
> * There are over 1,000 topics.
> * Topics and subscribers are paired one-to-one.
> * The subscribers' offline timeout is about one month
>   (<broker offlineDurableSubscriberTimeout="2592000000" in activemq.xml).
> * Subscribers aren't always active; they repeatedly disconnect and
>   reconnect.
> * Publish frequency is at most 100 messages a day, and the broker has been
>   running for over 3 months.
>
> I tried to analyze the journal files with
> amq-kahadb-tool (https://github.com/Hill30/amq-kahadb-tool),
> and found some strange subscribers:
>
> * A KAHA_SUBSCRIPTION_COMMAND is recorded in db-1.log (indicating the
>   subscriber is a durable subscriber).
> * Several KAHA_ADD_MESSAGE_COMMANDs are recorded, but some have no
>   corresponding KAHA_REMOVE_MESSAGE_COMMAND (indicating the subscriber
>   has pending messages).
> * There is no KAHA_SUBSCRIPTION_COMMAND of the kind that should be
>   recorded when an offline timeout occurs (indicating the durable
>   subscription is still valid).
>
> So, judging from the commands recorded in the journal files, there are
> offline durable subscribers with pending messages.
> However, when starting ActiveMQ from the same KahaDB files (index and
> journal), the loaded index (db.data) indicates that these subscribers do
> not exist in memory.
>
> Why does this inconsistency occur between the journal files and the index?
> Also, what could cause the old journal files to not be deleted?
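[Editor's note: when chasing lingering journal files like the ones described above, it helps to first see which `db-<N>.log` files exist, how big they are, and how old they are, before digging into their contents with a reader tool. The following is a minimal standalone Python sketch, not part of ActiveMQ or amq-kahadb-tool; the `data/kahadb` path and the `db-<N>.log` naming convention are assumptions based on a default KahaDB layout.]

```python
import os
import time


def list_journal_files(kahadb_dir):
    """Return (name, size_bytes, age_days) for each KahaDB journal file
    in kahadb_dir, ordered by the numeric journal id in the file name.

    Journal files are assumed to follow the db-<N>.log naming scheme;
    the index (db.data) and other files are ignored.
    """
    now = time.time()
    entries = []
    for name in os.listdir(kahadb_dir):
        if name.startswith("db-") and name.endswith(".log"):
            path = os.path.join(kahadb_dir, name)
            st = os.stat(path)
            age_days = (now - st.st_mtime) / 86400.0
            entries.append((name, st.st_size, age_days))
    # Sort numerically on <N>, so db-2.log sorts before db-10.log.
    entries.sort(key=lambda e: int(e[0][3:-4]))
    return entries


if __name__ == "__main__":
    # Hypothetical default location; adjust to your broker's data directory.
    for name, size, age in list_journal_files("data/kahadb"):
        print(f"{name}\t{size} bytes\t{age:.1f} days old")
```

Old files with recent siblings still present are the ones worth opening with a journal reader to look for unmatched KAHA_ADD_MESSAGE_COMMANDs.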