Hi Gary,

I'd love to be able to provide a failing unit test for you. However, this exception occurs infrequently and, so far, only on one machine. I made a backup copy of the message store prior to rebuilding it, in case you think examining it might help. It contains only a single 32MB journal file, in addition to the other files. I'm assuming you have the tooling to dive in; I wouldn't know where to start. Give me some pointers and I'll gladly help out.
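For what it's worth, here's the sort of thing I was picturing for poking at the backup myself. It's completely untested, and I'm only guessing at the kahadb PageFile API from the package names in the stack trace below, so please treat it as a rough sketch rather than something I know works:

import java.io.File;

import org.apache.activemq.store.kahadb.disk.page.PageFile;
import org.apache.activemq.store.kahadb.disk.page.Transaction;

public class InspectBackupIndex {
    public static void main(String[] args) throws Exception {
        // Point this at the backup copy of the kahadb directory. I'm
        // assuming "db" is the prefix that resolves to db.data/db.redo.
        PageFile pageFile = new PageFile(new File(args[0]), "db");
        pageFile.load();
        try {
            System.out.println("page count: " + pageFile.getPageCount());
            Transaction tx = pageFile.tx();
            // Page 19 is the one the broker complained about ("Chunk
            // stream does not exist, page: 19 is marked free"). I'm
            // guessing load(pageId, null) reads just the page header.
            System.out.println("page 19: " + tx.load(19, null));
        } finally {
            pageFile.unload();
        }
    }
}

Would something along those lines even tell us anything useful, or is there better tooling for this already?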
I suppose I could run some unit test code coverage tools to see whether
any areas stand out as lacking coverage. Which packages should I zero
in on? Is there any support for code coverage analysis in the ActiveMQ
build tools? Just wondering.

Thanks,
Paul

On Thu, Oct 3, 2013 at 5:37 PM, Gary Tully <gary.tu...@gmail.com> wrote:
> I really thought all of these kahaDB index corruption issues were
> sorted, but if you can reproduce in 5.9 we have a problem that needs
> work. The key to any resolution is a reproducible test case.
>
> In any event, you should be able to recover by rebuilding the index,
> just deleting the db.data file before a restart.
>
> On 3 October 2013 17:04, Paul Gale <paul.n.g...@gmail.com> wrote:
> > Anyone have any thoughts about this?
> >
> > Thanks,
> > Paul
> >
> > On Tue, Oct 1, 2013 at 12:50 AM, Paul Gale <paul.n.g...@gmail.com> wrote:
> >> Hi,
> >>
> >> I'm using ActiveMQ 5.8.0 on RHEL 6.1.
> >>
> >> I have noticed the following exception appearing in my broker's log
> >> file. It appears to be related to the once-hourly check for expired
> >> messages, as these occur at _exactly_ the same time past the hour,
> >> as given by the expireMessagesPeriod attribute on our topics. I have
> >> no other scheduled checks that go off hourly. Scheduler support is
> >> set to false for the broker (if that matters). The message store is
> >> located on an NFS v3 mount.
> >>
> >> Any thoughts as to what could be causing this, or is this a
> >> well-known issue that's fixed in 5.9.0? I couldn't find anything in
> >> JIRA about this other than AMQ-3906, which was not resolved;
> >> although 'Chunk stream does not exist' has cropped up a few times,
> >> it appears to have been fixed in earlier releases.
> >>
> >> INFO | jvm 1 | 2013/09/26 05:01:08.362 | WARN | Topic | Failed to browse Topic: Auction.Event | ActiveMQ Broker[queue01.qa1] Scheduler
> >> INFO | jvm 1 | 2013/09/26 05:01:08.362 | java.io.EOFException: Chunk stream does not exist, page: 19 is marked free
> >> INFO | jvm 1 | 2013/09/26 05:01:08.362 |   at org.apache.activemq.store.kahadb.disk.page.Transaction$2.readPage(Transaction.java:470)
> >> INFO | jvm 1 | 2013/09/26 05:01:08.362 |   at org.apache.activemq.store.kahadb.disk.page.Transaction$2.<init>(Transaction.java:447)
> >> INFO | jvm 1 | 2013/09/26 05:01:08.362 |   at org.apache.activemq.store.kahadb.disk.page.Transaction.openInputStream(Transaction.java:444)
> >> INFO | jvm 1 | 2013/09/26 05:01:08.363 |   at org.apache.activemq.store.kahadb.disk.page.Transaction.load(Transaction.java:420)
> >> INFO | jvm 1 | 2013/09/26 05:01:08.363 |   at org.apache.activemq.store.kahadb.disk.page.Transaction.load(Transaction.java:377)
> >> INFO | jvm 1 | 2013/09/26 05:01:08.363 |   at org.apache.activemq.store.kahadb.disk.index.BTreeIndex.loadNode(BTreeIndex.java:262)
> >> INFO | jvm 1 | 2013/09/26 05:01:08.363 |   at org.apache.activemq.store.kahadb.disk.index.BTreeIndex.getRoot(BTreeIndex.java:174)
> >> INFO | jvm 1 | 2013/09/26 05:01:08.363 |   at org.apache.activemq.store.kahadb.disk.index.BTreeIndex.iterator(BTreeIndex.java:232)
> >> INFO | jvm 1 | 2013/09/26 05:01:08.363 |   at org.apache.activemq.store.kahadb.MessageDatabase$MessageOrderIndex$MessageOrderIterator.<init>(MessageDatabase.java:2757)
> >> INFO | jvm 1 | 2013/09/26 05:01:08.363 |   at org.apache.activemq.store.kahadb.MessageDatabase$MessageOrderIndex.iterator(MessageDatabase.java:2739)
> >> INFO | jvm 1 | 2013/09/26 05:01:08.363 |   at org.apache.activemq.store.kahadb.KahaDBStore$KahaDBMessageStore$3.execute(KahaDBStore.java:526)
> >> INFO | jvm 1 | 2013/09/26 05:01:08.363 |   at org.apache.activemq.store.kahadb.disk.page.Transaction.execute(Transaction.java:779)
> >> INFO | jvm 1 | 2013/09/26 05:01:08.363 |   at org.apache.activemq.store.kahadb.KahaDBStore$KahaDBMessageStore.recover(KahaDBStore.java:522)
> >> INFO | jvm 1 | 2013/09/26 05:01:08.363 |   at org.apache.activemq.store.ProxyTopicMessageStore.recover(ProxyTopicMessageStore.java:62)
> >> INFO | jvm 1 | 2013/09/26 05:01:08.364 |   at org.apache.activemq.broker.region.Topic.doBrowse(Topic.java:578)
> >> INFO | jvm 1 | 2013/09/26 05:01:08.364 |   at org.apache.activemq.broker.region.Topic.access$100(Topic.java:65)
> >> INFO | jvm 1 | 2013/09/26 05:01:08.364 |   at org.apache.activemq.broker.region.Topic$6.run(Topic.java:703)
> >> INFO | jvm 1 | 2013/09/26 05:01:08.364 |   at org.apache.activemq.thread.SchedulerTimerTask.run(SchedulerTimerTask.java:33)
> >> INFO | jvm 1 | 2013/09/26 05:01:08.364 |   at java.util.TimerThread.mainLoop(Timer.java:555)
> >> INFO | jvm 1 | 2013/09/26 05:01:08.364 |   at java.util.TimerThread.run(Timer.java:505)
> >>
> >> Thanks,
> >> Paul
>
> --
> http://redhat.com
> http://blog.garytully.com
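P.S. On your point that the key to any resolution is a reproducible test case: the harness I'd start a failing test from looks roughly like the code below. It's only a sketch; the aggressive expireMessagesPeriod is just my guess at how to make the expiry sweep (the Topic.doBrowse() path in the stack trace) run frequently, and as written it doesn't yet provoke the corruption on its own.

import java.io.File;

import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.Topic;

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.broker.region.policy.PolicyEntry;
import org.apache.activemq.broker.region.policy.PolicyMap;
import org.apache.activemq.store.kahadb.KahaDBPersistenceAdapter;

public class ExpirySweepRepro {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        broker.setPersistent(true);

        // KahaDB store on a local directory; in our real setup the store
        // lives on an NFSv3 mount, which may well be part of the story.
        KahaDBPersistenceAdapter kahaDB = new KahaDBPersistenceAdapter();
        kahaDB.setDirectory(new File("target/kahadb-repro"));
        broker.setPersistenceAdapter(kahaDB);

        // Mirror our per-topic policy, but sweep every few seconds
        // instead of hourly so the expiry check runs often.
        PolicyEntry topicPolicy = new PolicyEntry();
        topicPolicy.setExpireMessagesPeriod(5000);
        PolicyMap policyMap = new PolicyMap();
        policyMap.setDefaultEntry(topicPolicy);
        broker.setDestinationPolicy(policyMap);
        broker.start();

        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory(broker.getVmConnectorURI());
        Connection connection = factory.createConnection();
        connection.setClientID("repro");
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Topic topic = session.createTopic("Auction.Event");

        // An offline durable subscription, so the short-lived messages are
        // actually written to the KahaDB topic store before they expire.
        session.createDurableSubscriber(topic, "idle-sub").close();

        MessageProducer producer = session.createProducer(topic);
        producer.setTimeToLive(1000);
        for (int i = 0; i < 1000; i++) {
            producer.send(session.createTextMessage("event-" + i));
        }

        // Let a few expiry sweeps run, then shut down cleanly.
        Thread.sleep(30000);
        connection.close();
        broker.stop();
    }
}

If that shape looks reasonable I'll flesh it out, run it against the NFS mount where we actually see the problem, and report back.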