Sorry, that last paragraph should have read:
--
The thing that brought this to my attention was that storePercentUsage was
hitting 100% for a multi-GB max store size. I suspect this is due to some
crusty data in the indexes themselves - after 5993 x 8mb journals, perhaps
a 79mb *index* is not unreasonable. When the *indexes* are deleted and
rebuilt, they go down to 2mb.
--
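
(For anyone who wants to reproduce the numbers: the storePercentUsage figure
above comes from the broker MBean, so it can be read over JMX with something
like the sketch below. The connector URL and brokerName are placeholders for
whatever the installation actually uses, not the values from this setup.)

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class StoreUsageCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder connector URL and broker name - adjust for the actual install
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            ObjectName broker = new ObjectName(
                    "org.apache.activemq:type=Broker,brokerName=localhost");
            // StorePercentUsage is the attribute behind the storePercentUsage figure
            System.out.println("StorePercentUsage = "
                    + mbs.getAttribute(broker, "StorePercentUsage"));
        } finally {
            connector.close();
        }
    }
}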

Jakub


On Tue, Jun 27, 2017 at 10:20 AM, Jakub Korab <jakub.korab.li...@gmail.com>
wrote:

> Hi all,
>
> I am seeing some odd behaviour in a 5.14.5 ActiveMQ instance that used
> mKahaDB - I have not seen this sort of thing before, and was wondering if
> anyone could shed some light on it.
>
> The directory listing below is for one fragment of an mKahaDB
> installation. It is configured for 8mb journal files, and accepts messages
> for a single queue only. Messages are 60-80kb in size.
>
> What I am seeing is a very large index (db.data), often far in excess of
> the total size of the journals. The following is typical of what is
> happening in the other mKahaDB fragments as well:
>
> -rw-r--r-- 1 activemq activemq  8388608 Jun 19 18:07 db-5985.log
> -rw-r--r-- 1 activemq activemq  8388608 Jun 19 18:09 db-5986.log
> -rw-r--r-- 1 activemq activemq  8388608 Jun 19 18:11 db-5987.log
> -rw-r--r-- 1 activemq activemq  8388608 Jun 19 18:14 db-5988.log
> -rw-r--r-- 1 activemq activemq  8388608 Jun 19 18:16 db-5989.log
> -rw-r--r-- 1 activemq activemq    99401 Jun 20 00:13 db-5991.log
> -rw-r--r-- 1 activemq activemq  8388608 Jun 20 01:00 db-5990.log
> -rw-r--r-- 1 activemq activemq      728 Jun 20 01:03 db-5993.log
> -rw-r--r-- 1 activemq activemq  8388608 Jun 20 09:32 db-5992.log
> -rw-r--r-- 1 activemq activemq  3291400 Jun 20 17:54 db.redo
> -rw-r--r-- 1 activemq activemq 79540224 Jun 20 17:54 db.data
> -rw-r--r-- 1 activemq activemq  8388608 Jun 20 17:54 db-5994.log
>
> Points of note:
> - Total journal size is 67.2mb, indexes are 79.5mb.
> - db-5990.log is written after db-5991.log, db-5992.log after db-5993.log
> (non-sequential writes)
>
> The thing that brought this to my attention was that storePercentUsage was
> hitting 100% for a multi-GB max store size. I suspect this is due to some
> crusty data in the indexes themselves - after 5993 x 8mb journals, perhaps
> a 79mb journal is not unreasonable. When the journals are deleted and
> rebuilt, they go down to 2mb.
>
> Any ideas as to why the index is so large, and what might cause journals
> to be written out of order?
>
> Thanks,
>
> Jakub
>
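
For context, the setup described in the quoted message (one KahaDB store per
destination, 8mb journal files) corresponds roughly to the embedded-broker
configuration below. This is a sketch from memory of the 5.x API rather than
the actual activemq.xml in use, so treat the class and property names as
assumptions; the XML config maps onto the same properties.

import java.util.Arrays;

import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.store.kahadb.FilteredKahaDBPersistenceAdapter;
import org.apache.activemq.store.kahadb.KahaDBPersistenceAdapter;
import org.apache.activemq.store.kahadb.MultiKahaDBPersistenceAdapter;

public class MKahaDbBroker {
    public static void main(String[] args) throws Exception {
        // Template KahaDB instance: 8mb journal files, as in the listing above
        KahaDBPersistenceAdapter kahaDb = new KahaDBPersistenceAdapter();
        kahaDb.setJournalMaxFileLength(8 * 1024 * 1024);

        // perDestination=true gives one store fragment (journals + db.data) per queue
        FilteredKahaDBPersistenceAdapter filtered = new FilteredKahaDBPersistenceAdapter();
        filtered.setPersistenceAdapter(kahaDb);
        filtered.setPerDestination(true);

        MultiKahaDBPersistenceAdapter mKahaDb = new MultiKahaDBPersistenceAdapter();
        mKahaDb.setFilteredPersistenceAdapters(Arrays.asList(filtered));

        BrokerService broker = new BrokerService();
        broker.setPersistenceAdapter(mKahaDb);
        broker.start();
        broker.waitUntilStopped();
    }
}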
