[ https://issues.apache.org/jira/browse/AMQ-5235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14303208#comment-14303208 ]
Dmytro Karimov commented on AMQ-5235:
-------------------------------------

Also, when all queues are empty, activemq-data isn't empty:

{code}
dmitriy@storage1:~$ ls /mnt/data/activemq-data/
0000000000000000.log        0000000070928795.log.crc32  00000000c81efbc7.log.crc32  000000014530eada.log.crc32  000000019036cbb3.log.crc32
0000000000000000.log.crc32  0000000076d3a105.log.crc32  00000000ce600118.log.crc32  000000014b71e842.log.crc32  000000019676cceb.log
000000000642d171.log        000000007d13b42f.log.crc32  00000000dae237a2.log.crc32  0000000151b2f3b4.log.crc32  000000019676cceb.log.crc32
000000000642d171.log.crc32  000000008354bc7f.log.crc32  00000000e1226f24.log.crc32  0000000157f43825.log.crc32  00000001af76efcc.log
000000000c841ca6.log.crc32  000000008994ced1.log.crc32  00000000eda512f5.log.crc32  000000015e35748b.log.crc32  00000001b31c19ac.index
0000000012c4b726.log.crc32  000000008fd5f323.log.crc32  00000000f3e57ad6.log.crc32  000000016476a79f.log        dirty.index
000000001904e4ec.log.crc32  0000000096172ab9.log.crc32  00000000fa2680ee.log.crc32  000000016476a79f.log.crc32  hs_err_pid28664.log
000000001f465d31.log.crc32  000000009c586165.log.crc32  000000010067dc54.log.crc32  000000016ab6c2d5.log        hs_err_pid28664.log.crc32
0000000025869104.log.crc32  00000000a2997b85.log.crc32  0000000106a8e7df.log.crc32  000000016ab6c2d5.log.crc32  lock
000000003849599b.log.crc32  00000000a8daece5.log.crc32  000000010ce9ed67.log.crc32  0000000170f6c43b.log        nodeid.txt
000000003e8a51b9.log.crc32  00000000af1b286a.log.crc32  00000001132b130d.log.crc32  0000000170f6c43b.log.crc32  plist.index
0000000044caba2d.log.crc32  00000000b55c8640.log.crc32  00000001196bd712.log.crc32  000000017d76c6bc.log.crc32  store-version.txt
000000006a515f8b.log.crc32  00000000c1debb6e.log.crc32  000000013eef74de.log.crc32  000000019036cbb3.log
{code}

> erroneous temp percent used
> ---------------------------
>
>                 Key: AMQ-5235
>                 URL: https://issues.apache.org/jira/browse/AMQ-5235
>             Project: ActiveMQ
>          Issue Type: Bug
>          Components: activemq-leveldb-store
> Affects Versions: 5.9.0
> Environment: debian (quality testing and production)
> Reporter: anselme dewavrin
>
> Dear all,
> We have an ActiveMQ 5.9 configured with 1 GB of tempUsage allowed, purely as a precaution, because we only use persistent messages (about 6000 messages per day). After several days of use, the temp usage increases, and even shows values above the total amount of data on disk. Here it shows 45% of its 1 GB limit for the following files:
>
> find activemq-data -ls
> 76809801      4 drwxr-xr-x   5 anselme anselme      4096 Jun 19 10:24 activemq-data
> 76809813      4 -rw-r--r--   1 anselme anselme        24 Jun 16 16:13 activemq-data/store-version.txt
> 76809817      4 drwxr-xr-x   2 anselme anselme      4096 Jun 16 16:13 activemq-data/dirty.index
> 76809811      4 -rw-r--r--   2 anselme anselme      2437 Jun 16 12:06 activemq-data/dirty.index/000008.sst
> 76809820      4 -rw-r--r--   1 anselme anselme        16 Jun 16 16:13 activemq-data/dirty.index/CURRENT
> 76809819     80 -rw-r--r--   1 anselme anselme     80313 Jun 16 16:13 activemq-data/dirty.index/000011.sst
> 76809822      0 -rw-r--r--   1 anselme anselme         0 Jun 16 16:13 activemq-data/dirty.index/LOCK
> 76809810    300 -rw-r--r--   2 anselme anselme    305206 Jun 16 11:51 activemq-data/dirty.index/000005.sst
> 76809821   2048 -rw-r--r--   1 anselme anselme   2097152 Jun 19 11:30 activemq-data/dirty.index/000012.log
> 76809818   1024 -rw-r--r--   1 anselme anselme   1048576 Jun 16 16:13 activemq-data/dirty.index/MANIFEST-000010
> 76809816      0 -rw-r--r--   1 anselme anselme         0 Jun 16 16:13 activemq-data/lock
> 76809815 102400 -rw-r--r--   1 anselme anselme 104857600 Jun 19 11:30 activemq-data/0000000000f0faaf.log
> 76809823 102400 -rw-r--r--   1 anselme anselme 104857600 Jun 16 11:50 activemq-data/0000000000385f46.log
> 76809807      4 drwxr-xr-x   2 anselme anselme      4096 Jun 16 16:13 activemq-data/0000000000f0faaf.index
> 76809808    420 -rw-r--r--   1 anselme anselme    429264 Jun 16 16:13 activemq-data/0000000000f0faaf.index/000009.log
> 76809811      4 -rw-r--r--   2 anselme anselme      2437 Jun 16 12:06 activemq-data/0000000000f0faaf.index/000008.sst
> 76809812      4 -rw-r--r--   1 anselme anselme       165 Jun 16 16:13 activemq-data/0000000000f0faaf.index/MANIFEST-000007
> 76809809      4 -rw-r--r--   1 anselme anselme        16 Jun 16 16:13 activemq-data/0000000000f0faaf.index/CURRENT
> 76809810    300 -rw-r--r--   2 anselme anselme    305206 Jun 16 11:51 activemq-data/0000000000f0faaf.index/000005.sst
> 76809814 102400 -rw-r--r--   1 anselme anselme 104857600 Jun 12 21:06 activemq-data/0000000000000000.log
> 76809802      4 drwxr-xr-x   2 anselme anselme      4096 Jun 16 16:13 activemq-data/plist.index
> 76809803      4 -rw-r--r--   1 anselme anselme        16 Jun 16 16:13 activemq-data/plist.index/CURRENT
> 76809806      0 -rw-r--r--   1 anselme anselme         0 Jun 16 16:13 activemq-data/plist.index/LOCK
> 76809805   1024 -rw-r--r--   1 anselme anselme   1048576 Jun 16 16:13 activemq-data/plist.index/000003.log
> 76809804   1024 -rw-r--r--   1 anselme anselme   1048576 Jun 16 16:13 activemq-data/plist.index/MANIFEST-000002
>
> The problem is that in our production system it once blocked producers with a tempUsage at 122%, even though the disk was empty.
> So we investigated, ran the broker in a debugger, and found how the usage is calculated. It is in the Scala LevelDB files: it is not based on what is actually on disk, but on what the broker thinks is on disk. It multiplies the size of one log by the number of logs known to a certain hashmap.
> I think the entries of the hashmap are not removed when the log files are purged.
> Could you confirm?
> Thanks in advance
> Anselme

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
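The failure mode the reporter describes can be sketched in a few lines of Java. This is only an illustration of the hypothesis (stale hashmap entries inflating the estimate), not ActiveMQ's actual code: the class `LogUsageTracker` and its method names are hypothetical, and the 100 MB log size and 1 GB limit are taken from the report above.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the accounting bug hypothesized in AMQ-5235.
// Names here are illustrative, not ActiveMQ's real API.
public class LogUsageTracker {
    static final long LOG_SIZE = 104857600L;  // 100 MB journal logs, as in the listing above
    static final long TEMP_LIMIT = 1L << 30;  // the reporter's 1 GB tempUsage limit

    // Journal log position -> file name, standing in for the hashmap mentioned in the report.
    private final Map<Long, String> logRefs = new HashMap<>();

    void logCreated(long position) {
        logRefs.put(position, String.format("%016x.log", position));
    }

    // Buggy purge: the file is deleted on disk, but the map entry is never removed.
    void logPurgedBuggy(long position) {
        /* forgot: logRefs.remove(position); */
    }

    // Fixed purge: drop the bookkeeping entry along with the file.
    void logPurgedFixed(long position) {
        logRefs.remove(position);
    }

    // Usage as described in the report: size of one log times the number of known logs.
    long usageEstimate() {
        return LOG_SIZE * logRefs.size();
    }

    int usagePercent() {
        return (int) (usageEstimate() * 100 / TEMP_LIMIT);
    }

    public static void main(String[] args) {
        LogUsageTracker t = new LogUsageTracker();
        for (int i = 0; i < 13; i++) t.logCreated((long) i * LOG_SIZE);
        // Purge all but the newest log; the buggy purge leaves all 13 map entries behind,
        // so the estimate counts 1.3 GB even though only one 100 MB log remains on disk.
        for (int i = 0; i < 12; i++) t.logPurgedBuggy((long) i * LOG_SIZE);
        System.out.println("estimated usage: " + t.usagePercent() + "%");  // prints 126%
    }
}
```

With the fixed purge, the estimate drops back to the size of the logs actually remaining, which is consistent with the reporter's observation of percentages above 100% on a near-empty disk only after days of log rollover and purging.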