[jira] [Commented] (AMQ-6936) Dead loop on log file reading

2018-03-23 Thread Rural Hunter (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-6936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16411151#comment-16411151
 ] 

Rural Hunter commented on AMQ-6936:
---

OK, that's nice, thanks!

> Dead loop on log file reading
> -
>
> Key: AMQ-6936
> URL: https://issues.apache.org/jira/browse/AMQ-6936
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: KahaDB
>Affects Versions: 5.13.4
>Reporter: Rural Hunter
>Priority: Critical
>
> We restarted our activemq instance (5.13.4) but found it hung on recovering 
> data. I took the data, ran a debug session, and found the following:
>  # The dead loop is in Journal.getNextLocation(Location location)
>  # I added logging to the method and this is what I got:
>  2018-03-23 14:56:43 [DEBUG] Journal Get next location for: 7:33554239
>  2018-03-23 14:56:43 [DEBUG] Journal Reading location: 7:33554330
>  2018-03-23 14:56:43 [DEBUG] Journal Location 7:33554330: offset=33554330, 
> size=28, type=2
>  2018-03-23 14:56:43 [DEBUG] Journal Reading location: 7:33554358
>  2018-03-23 14:56:43 [DEBUG] Journal Location 7:33554358: offset=33554358, 
> size=91, type=1
>  2018-03-23 14:56:43 [DEBUG] Journal Get next location for: 7:33554358
>  2018-03-23 14:56:43 [DEBUG] Journal Reading location: 7:33554449
>  2018-03-23 14:56:43 [DEBUG] Journal Location 7:33554449: offset=33554449, 
> size=0, type=0
>  2018-03-23 14:56:43 [DEBUG] Journal Reading location: 7:33554432
>  2018-03-23 14:56:43 [DEBUG] Journal Location 7:33554432: offset=33554432, 
> size=0, type=0
>  2018-03-23 14:56:43 [DEBUG] Journal Reading location: 7:33554432
>  2018-03-23 14:56:43 [DEBUG] Journal Location 7:33554432: offset=33554432, 
> size=0, type=0
>  2018-03-23 14:56:43 [DEBUG] Journal Reading location: 7:33554432
>  2018-03-23 14:56:43 [DEBUG] Journal Location 7:33554432: offset=33554432, 
> size=0, type=0
>  2018-03-23 14:56:43 [DEBUG] Journal Reading location: 7:33554432
>  2018-03-23 14:56:43 [DEBUG] Journal Location 7:33554432: offset=33554432, 
> size=0, type=0
>  2018-03-23 14:56:43 [DEBUG] Journal Reading location: 7:33554432
>  2018-03-23 14:56:43 [DEBUG] Journal Location 7:33554432: offset=33554432, 
> size=0, type=0
>  # 33554432 is the max log file size. activemq reads a location at the end of 
> the file whose size is 0, and this causes the dead loop.
> I don't know whether this is a problem when the file is saved or whether 
> something needs to be fixed when reading.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (AMQ-6936) Dead loop on log file reading

2018-03-23 Thread Rural Hunter (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-6936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16411049#comment-16411049
 ] 

Rural Hunter edited comment on AMQ-6936 at 3/23/18 9:13 AM:


I'm not sure whether the file size is allowed to be greater than 
maxFileLength. If it is allowed, then this might need to be changed:

  
{code:java}
else if (cur.getType() == 0) {
    // eof - jump to next datafile
    cur.setOffset(maxFileLength);  // => change to: cur.setOffset(dataFile.getLength());
}
{code}
 

With the change above, my activemq instance can start up.
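As a quick check on the suggested change, here is plain arithmetic with the sizes reported in this thread. The names are illustrative only (this is not the Journal source); it just shows that the current jump target is the zero-size record itself, while the proposed one is past it:

{code:java}
// Plain arithmetic with the sizes from this thread (illustrative names, not the
// Journal source): why setOffset(maxFileLength) re-reads the same record while
// setOffset(dataFile.getLength()) moves past it.
public class EofJumpCheck {
    public static void main(String[] args) {
        int maxFileLength    = 33554432; // configured journal file size
        int dataFileLength   = 33554454; // actual size of db-7.log on disk
        int zeroRecordOffset = 33554432; // where the size=0, type=0 record sits

        // current behaviour: jump target == the zero record's own offset -> loops
        System.out.println("maxFileLength jump lands on the zero record: "
                + (maxFileLength == zeroRecordOffset));
        // proposed behaviour: jump target is past the last byte of this file,
        // so the reader rolls over to the next data file -> terminates
        System.out.println("file-length jump is past the zero record: "
                + (dataFileLength > zeroRecordOffset));
    }
}
{code}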


was (Author: ruralhunter):
I'm not sure whether the file size is allowed to be greater than 
maxFileLength. If it is allowed, then this might need to be changed:

    else if (cur.getType() == 0) {
        // eof - jump to next datafile
        cur.setOffset(maxFileLength);  => change to: cur.setOffset(dataFile.getLength());
    }

With the change above, my activemq instance can start up.

> Dead loop on log file reading
> -
>
> Key: AMQ-6936
> URL: https://issues.apache.org/jira/browse/AMQ-6936
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: KahaDB
>Affects Versions: 5.13.4
>Reporter: Rural Hunter
>Priority: Critical
>
> We restarted our activemq instance (5.13.4) but found it hung on recovering 
> data. I took the data, ran a debug session, and found the following:
>  # The dead loop is in Journal.getNextLocation(Location location)
>  # I added logging to the method and this is what I got:
>  2018-03-23 14:56:43 [DEBUG] Journal Get next location for: 7:33554239
>  2018-03-23 14:56:43 [DEBUG] Journal Reading location: 7:33554330
>  2018-03-23 14:56:43 [DEBUG] Journal Location 7:33554330: offset=33554330, 
> size=28, type=2
>  2018-03-23 14:56:43 [DEBUG] Journal Reading location: 7:33554358
>  2018-03-23 14:56:43 [DEBUG] Journal Location 7:33554358: offset=33554358, 
> size=91, type=1
>  2018-03-23 14:56:43 [DEBUG] Journal Get next location for: 7:33554358
>  2018-03-23 14:56:43 [DEBUG] Journal Reading location: 7:33554449
>  2018-03-23 14:56:43 [DEBUG] Journal Location 7:33554449: offset=33554449, 
> size=0, type=0
>  2018-03-23 14:56:43 [DEBUG] Journal Reading location: 7:33554432
>  2018-03-23 14:56:43 [DEBUG] Journal Location 7:33554432: offset=33554432, 
> size=0, type=0
>  2018-03-23 14:56:43 [DEBUG] Journal Reading location: 7:33554432
>  2018-03-23 14:56:43 [DEBUG] Journal Location 7:33554432: offset=33554432, 
> size=0, type=0
>  2018-03-23 14:56:43 [DEBUG] Journal Reading location: 7:33554432
>  2018-03-23 14:56:43 [DEBUG] Journal Location 7:33554432: offset=33554432, 
> size=0, type=0
>  2018-03-23 14:56:43 [DEBUG] Journal Reading location: 7:33554432
>  2018-03-23 14:56:43 [DEBUG] Journal Location 7:33554432: offset=33554432, 
> size=0, type=0
>  2018-03-23 14:56:43 [DEBUG] Journal Reading location: 7:33554432
>  2018-03-23 14:56:43 [DEBUG] Journal Location 7:33554432: offset=33554432, 
> size=0, type=0
>  # 33554432 is the max log file size. activemq reads a location at the end of 
> the file whose size is 0, and this causes the dead loop.
> I don't know whether this is a problem when the file is saved or whether 
> something needs to be fixed when reading.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (AMQ-6936) Dead loop on log file reading

2018-03-23 Thread Rural Hunter (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-6936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16411049#comment-16411049
 ] 

Rural Hunter commented on AMQ-6936:
---

I'm not sure whether the file size is allowed to be greater than 
maxFileLength. If it is allowed, then this might need to be changed:

    else if (cur.getType() == 0) {
        // eof - jump to next datafile
        cur.setOffset(maxFileLength);  => change to: cur.setOffset(dataFile.getLength());
    }

With the change above, my activemq instance can start up.

> Dead loop on log file reading
> -
>
> Key: AMQ-6936
> URL: https://issues.apache.org/jira/browse/AMQ-6936
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: KahaDB
>Affects Versions: 5.13.4
>Reporter: Rural Hunter
>Priority: Critical
>
> We restarted our activemq instance (5.13.4) but found it hung on recovering 
> data. I took the data, ran a debug session, and found the following:
>  # The dead loop is in Journal.getNextLocation(Location location)
>  # I added logging to the method and this is what I got:
>  2018-03-23 14:56:43 [DEBUG] Journal Get next location for: 7:33554239
>  2018-03-23 14:56:43 [DEBUG] Journal Reading location: 7:33554330
>  2018-03-23 14:56:43 [DEBUG] Journal Location 7:33554330: offset=33554330, 
> size=28, type=2
>  2018-03-23 14:56:43 [DEBUG] Journal Reading location: 7:33554358
>  2018-03-23 14:56:43 [DEBUG] Journal Location 7:33554358: offset=33554358, 
> size=91, type=1
>  2018-03-23 14:56:43 [DEBUG] Journal Get next location for: 7:33554358
>  2018-03-23 14:56:43 [DEBUG] Journal Reading location: 7:33554449
>  2018-03-23 14:56:43 [DEBUG] Journal Location 7:33554449: offset=33554449, 
> size=0, type=0
>  2018-03-23 14:56:43 [DEBUG] Journal Reading location: 7:33554432
>  2018-03-23 14:56:43 [DEBUG] Journal Location 7:33554432: offset=33554432, 
> size=0, type=0
>  2018-03-23 14:56:43 [DEBUG] Journal Reading location: 7:33554432
>  2018-03-23 14:56:43 [DEBUG] Journal Location 7:33554432: offset=33554432, 
> size=0, type=0
>  2018-03-23 14:56:43 [DEBUG] Journal Reading location: 7:33554432
>  2018-03-23 14:56:43 [DEBUG] Journal Location 7:33554432: offset=33554432, 
> size=0, type=0
>  2018-03-23 14:56:43 [DEBUG] Journal Reading location: 7:33554432
>  2018-03-23 14:56:43 [DEBUG] Journal Location 7:33554432: offset=33554432, 
> size=0, type=0
>  2018-03-23 14:56:43 [DEBUG] Journal Reading location: 7:33554432
>  2018-03-23 14:56:43 [DEBUG] Journal Location 7:33554432: offset=33554432, 
> size=0, type=0
>  # 33554432 is the max log file size. activemq reads a location at the end of 
> the file whose size is 0, and this causes the dead loop.
> I don't know whether this is a problem when the file is saved or whether 
> something needs to be fixed when reading.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (AMQ-6936) Dead loop on log file reading

2018-03-23 Thread Rural Hunter (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-6936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16411023#comment-16411023
 ] 

Rural Hunter commented on AMQ-6936:
---

These are the data files:

-rw-rw-r-- 1 activemq activemq 33554432 Mar 23 11:46 db-13.log
-rw-rw-r-- 1 activemq activemq 33554432 Mar 23 11:46 db-1.log
-rw-rw-r-- 1 activemq activemq 33554432 Mar 23 11:46 db-2.log
-rw-rw-r-- 1 activemq activemq 33554432 Mar 23 11:46 db-4.log
-rw-rw-r-- 1 activemq activemq 33554432 Mar 23 11:46 db-5.log
-rw-rw-r-- 1 activemq activemq 33554454 Mar 23 11:46 db-7.log
-rw-rw-r-- 1 activemq activemq    32768 Mar 23 12:18 db.data
-rw-rw-r-- 1 activemq activemq    28720 Mar 23 11:46 db.redo

 

db-7.log is larger than maxFileLength (DEFAULT_MAX_FILE_LENGTH = 33554432).

 

> Dead loop on log file reading
> -
>
> Key: AMQ-6936
> URL: https://issues.apache.org/jira/browse/AMQ-6936
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: KahaDB
>Affects Versions: 5.13.4
>Reporter: Rural Hunter
>Priority: Critical
>
> We restarted our activemq instance (5.13.4) but found it hung on recovering 
> data. I took the data, ran a debug session, and found the following:
>  # The dead loop is in Journal.getNextLocation(Location location)
>  # I added logging to the method and this is what I got:
>  2018-03-23 14:56:43 [DEBUG] Journal Get next location for: 7:33554239
>  2018-03-23 14:56:43 [DEBUG] Journal Reading location: 7:33554330
>  2018-03-23 14:56:43 [DEBUG] Journal Location 7:33554330: offset=33554330, 
> size=28, type=2
>  2018-03-23 14:56:43 [DEBUG] Journal Reading location: 7:33554358
>  2018-03-23 14:56:43 [DEBUG] Journal Location 7:33554358: offset=33554358, 
> size=91, type=1
>  2018-03-23 14:56:43 [DEBUG] Journal Get next location for: 7:33554358
>  2018-03-23 14:56:43 [DEBUG] Journal Reading location: 7:33554449
>  2018-03-23 14:56:43 [DEBUG] Journal Location 7:33554449: offset=33554449, 
> size=0, type=0
>  2018-03-23 14:56:43 [DEBUG] Journal Reading location: 7:33554432
>  2018-03-23 14:56:43 [DEBUG] Journal Location 7:33554432: offset=33554432, 
> size=0, type=0
>  2018-03-23 14:56:43 [DEBUG] Journal Reading location: 7:33554432
>  2018-03-23 14:56:43 [DEBUG] Journal Location 7:33554432: offset=33554432, 
> size=0, type=0
>  2018-03-23 14:56:43 [DEBUG] Journal Reading location: 7:33554432
>  2018-03-23 14:56:43 [DEBUG] Journal Location 7:33554432: offset=33554432, 
> size=0, type=0
>  2018-03-23 14:56:43 [DEBUG] Journal Reading location: 7:33554432
>  2018-03-23 14:56:43 [DEBUG] Journal Location 7:33554432: offset=33554432, 
> size=0, type=0
>  2018-03-23 14:56:43 [DEBUG] Journal Reading location: 7:33554432
>  2018-03-23 14:56:43 [DEBUG] Journal Location 7:33554432: offset=33554432, 
> size=0, type=0
>  # 33554432 is the max log file size. activemq reads a location at the end of 
> the file whose size is 0, and this causes the dead loop.
> I don't know whether this is a problem when the file is saved or whether 
> something needs to be fixed when reading.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMQ-6936) Dead loop on log file reading

2018-03-23 Thread Rural Hunter (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-6936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rural Hunter updated AMQ-6936:
--
Description: 
We restarted our activemq instance (5.13.4) but found it hung on recovering 
data. I took the data, ran a debug session, and found the following:
 # The dead loop is in Journal.getNextLocation(Location location)
 # I added logging to the method and this is what I got:
 2018-03-23 14:56:43 [DEBUG] Journal Get next location for: 7:33554239
 2018-03-23 14:56:43 [DEBUG] Journal Reading location: 7:33554330
 2018-03-23 14:56:43 [DEBUG] Journal Location 7:33554330: offset=33554330, 
size=28, type=2
 2018-03-23 14:56:43 [DEBUG] Journal Reading location: 7:33554358
 2018-03-23 14:56:43 [DEBUG] Journal Location 7:33554358: offset=33554358, 
size=91, type=1
 2018-03-23 14:56:43 [DEBUG] Journal Get next location for: 7:33554358
 2018-03-23 14:56:43 [DEBUG] Journal Reading location: 7:33554449
 2018-03-23 14:56:43 [DEBUG] Journal Location 7:33554449: offset=33554449, 
size=0, type=0
 2018-03-23 14:56:43 [DEBUG] Journal Reading location: 7:33554432
 2018-03-23 14:56:43 [DEBUG] Journal Location 7:33554432: offset=33554432, 
size=0, type=0
 2018-03-23 14:56:43 [DEBUG] Journal Reading location: 7:33554432
 2018-03-23 14:56:43 [DEBUG] Journal Location 7:33554432: offset=33554432, 
size=0, type=0
 2018-03-23 14:56:43 [DEBUG] Journal Reading location: 7:33554432
 2018-03-23 14:56:43 [DEBUG] Journal Location 7:33554432: offset=33554432, 
size=0, type=0
 2018-03-23 14:56:43 [DEBUG] Journal Reading location: 7:33554432
 2018-03-23 14:56:43 [DEBUG] Journal Location 7:33554432: offset=33554432, 
size=0, type=0
 2018-03-23 14:56:43 [DEBUG] Journal Reading location: 7:33554432
 2018-03-23 14:56:43 [DEBUG] Journal Location 7:33554432: offset=33554432, 
size=0, type=0
 # 33554432 is the max log file size. activemq reads a location at the end of 
the file whose size is 0, and this causes the dead loop.

I don't know whether this is a problem when the file is saved or whether 
something needs to be fixed when reading.

  was:
We restarted our activemq instance (5.13.4) but found it hung on recovering 
data. I took the data, ran a debug session, and found the following:
 # The dead loop is in Journal.getNextLocation(Location location)
 # I added logging to the method and this is what I got:
2018-03-23 14:56:43 [DEBUG] Journal Location 7:33554239: offset=33554239, 
size=91, type=1
2018-03-23 14:56:43 [DEBUG] Journal Get next location for: 7:33554239
2018-03-23 14:56:43 [DEBUG] Journal Reading location: 7:33554330
2018-03-23 14:56:43 [DEBUG] Journal Location 7:33554330: offset=33554330, 
size=28, type=2
2018-03-23 14:56:43 [DEBUG] Journal Reading location: 7:33554358
2018-03-23 14:56:43 [DEBUG] Journal Location 7:33554358: offset=33554358, 
size=91, type=1
2018-03-23 14:56:43 [DEBUG] Journal Get next location for: 7:33554358
2018-03-23 14:56:43 [DEBUG] Journal Reading location: 7:33554449
2018-03-23 14:56:43 [DEBUG] Journal Location 7:33554449: offset=33554449, 
size=0, type=0
2018-03-23 14:56:43 [DEBUG] Journal Reading location: 7:33554432
2018-03-23 14:56:43 [DEBUG] Journal Location 7:33554432: offset=33554432, 
size=0, type=0
2018-03-23 14:56:43 [DEBUG] Journal Reading location: 7:33554432
2018-03-23 14:56:43 [DEBUG] Journal Location 7:33554432: offset=33554432, 
size=0, type=0
2018-03-23 14:56:43 [DEBUG] Journal Reading location: 7:33554432
2018-03-23 14:56:43 [DEBUG] Journal Location 7:33554432: offset=33554432, 
size=0, type=0
2018-03-23 14:56:43 [DEBUG] Journal Reading location: 7:33554432
2018-03-23 14:56:43 [DEBUG] Journal Location 7:33554432: offset=33554432, 
size=0, type=0
2018-03-23 14:56:43 [DEBUG] Journal Reading location: 7:33554432
2018-03-23 14:56:43 [DEBUG] Journal Location 7:33554432: offset=33554432, 
size=0, type=0
 # 33554432 is the max log file size. activemq reads a location at the end of 
the file whose size is 0, and this causes the dead loop.

I don't know whether this is a problem when the file is saved or whether 
something needs to be fixed when reading.


> Dead loop on log file reading
> -
>
> Key: AMQ-6936
> URL: https://issues.apache.org/jira/browse/AMQ-6936
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: KahaDB
>Affects Versions: 5.13.4
>Reporter: Rural Hunter
>Priority: Critical
>
> We restarted our activemq instance (5.13.4) but found it hung on recovering 
> data. I took the data, ran a debug session, and found the following:
>  # The dead loop is in Journal.getNextLocation(Location location)
>  # I added logging to the method and this is what I got:
>  2018-03-23 14:56:43 [DEBUG] Journal Get next location for: 7:33554239
>  2018-03-23 14:56:43 [DEBUG] Journal Reading location: 7:33554330
>  2018-03-23 14:56:43 [DEBUG] Journal Location 7:33554330: offset=33554330, 
> size=28, type=2
>  2018-03-23 14:56:43 [DEBUG] Journal Reading location: 7:33554358
>  2018-03-23 14:56:

[jira] [Commented] (AMQ-6931) KahaDB files cannot be loaded if they contain KAHA_ACK_MESSAGE_FILE_MAP_COMMAND messages

2018-03-23 Thread Rural Hunter (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-6931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16410961#comment-16410961
 ] 

Rural Hunter commented on AMQ-6931:
---

I just raised another issue, AMQ-6936; not sure if it is related to this one.

> KahaDB files cannot be loaded if they contain 
> KAHA_ACK_MESSAGE_FILE_MAP_COMMAND messages
> 
>
> Key: AMQ-6931
> URL: https://issues.apache.org/jira/browse/AMQ-6931
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: KahaDB
>Affects Versions: 5.13.1
>Reporter: Tim Bain
>Priority: Major
>
> As reported on the users mailing list 
> ([http://activemq.2283324.n4.nabble.com/Re-failed-to-start-ActiveMQ-td4737631.html]),
>  a KahaDB file that contains a message of type "CmdType: 
> KAHA_ACK_MESSAGE_FILE_MAP_COMMAND" (output is from 
> [https://github.com/Hill30/amq-kahadb-tool]) was found to have a location 
> size of 0, which is less than 
> org.apache.activemq.store.kahadb.disk.journal.Journal.RECORD_HEAD_SPACE. This 
> in turn causes a NegativeArraySizeException to be thrown in 
> org.apache.activemq.store.kahadb.disk.journal.DataFileAccessor.readRecord() 
> when attempting to start the broker.
> Either the data file should not have a location size of 0 (in which case, we 
> need to figure out how it happened and prevent it from happening again), or 
> it's valid to have a location size of 0 and we need to account for that 
> possibility when constructing the array in readRecord().
> One fact that may or may not be relevant: the file that contains the 
> zero-size records is the most recent journal file in the data directory, and 
> the persistence store had reached the store limit. It was not clear from the 
> mailing list thread whether other files also had the problem, nor whether it 
> would have occurred if the store limit had not been reached.
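For readers who want to see the failing arithmetic, here is a minimal sketch. It is not the DataFileAccessor source; the RECORD_HEAD_SPACE value and the method shape are assumptions for illustration only.

{code:java}
// Minimal sketch (illustrative names, not the DataFileAccessor source) of the
// failing arithmetic described above: sizing a payload buffer from a location
// whose recorded size is 0 goes negative once the header length is subtracted.
public class ZeroSizeRecordSketch {
    static final int RECORD_HEAD_SPACE = 4 + 1; // assumption: 4-byte size field plus 1-byte type

    static byte[] readPayload(int locationSize) {
        // locationSize == 0  ->  0 - RECORD_HEAD_SPACE < 0  ->  NegativeArraySizeException
        return new byte[locationSize - RECORD_HEAD_SPACE];
    }

    public static void main(String[] args) {
        try {
            readPayload(0);
        } catch (NegativeArraySizeException e) {
            System.out.println("zero-size location rejected: " + e);
        }
    }
}
{code}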



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (AMQ-6936) Dead loop on log file reading

2018-03-23 Thread Rural Hunter (JIRA)
Rural Hunter created AMQ-6936:
-

 Summary: Dead loop on log file reading
 Key: AMQ-6936
 URL: https://issues.apache.org/jira/browse/AMQ-6936
 Project: ActiveMQ
  Issue Type: Bug
  Components: KahaDB
Affects Versions: 5.13.4
Reporter: Rural Hunter


We restarted our activemq instance (5.13.4) but found it hung on recovering 
data. I took the data, ran a debug session, and found the following:
 # The dead loop is in Journal.getNextLocation(Location location)
 # I added logging to the method and this is what I got:
2018-03-23 14:56:43 [DEBUG] Journal Location 7:33554239: offset=33554239, 
size=91, type=1
2018-03-23 14:56:43 [DEBUG] Journal Get next location for: 7:33554239
2018-03-23 14:56:43 [DEBUG] Journal Reading location: 7:33554330
2018-03-23 14:56:43 [DEBUG] Journal Location 7:33554330: offset=33554330, 
size=28, type=2
2018-03-23 14:56:43 [DEBUG] Journal Reading location: 7:33554358
2018-03-23 14:56:43 [DEBUG] Journal Location 7:33554358: offset=33554358, 
size=91, type=1
2018-03-23 14:56:43 [DEBUG] Journal Get next location for: 7:33554358
2018-03-23 14:56:43 [DEBUG] Journal Reading location: 7:33554449
2018-03-23 14:56:43 [DEBUG] Journal Location 7:33554449: offset=33554449, 
size=0, type=0
2018-03-23 14:56:43 [DEBUG] Journal Reading location: 7:33554432
2018-03-23 14:56:43 [DEBUG] Journal Location 7:33554432: offset=33554432, 
size=0, type=0
2018-03-23 14:56:43 [DEBUG] Journal Reading location: 7:33554432
2018-03-23 14:56:43 [DEBUG] Journal Location 7:33554432: offset=33554432, 
size=0, type=0
2018-03-23 14:56:43 [DEBUG] Journal Reading location: 7:33554432
2018-03-23 14:56:43 [DEBUG] Journal Location 7:33554432: offset=33554432, 
size=0, type=0
2018-03-23 14:56:43 [DEBUG] Journal Reading location: 7:33554432
2018-03-23 14:56:43 [DEBUG] Journal Location 7:33554432: offset=33554432, 
size=0, type=0
2018-03-23 14:56:43 [DEBUG] Journal Reading location: 7:33554432
2018-03-23 14:56:43 [DEBUG] Journal Location 7:33554432: offset=33554432, 
size=0, type=0
 # 33554432 is the max log file size. activemq reads a location at the end of 
the file whose size is 0, and this causes the dead loop.

I don't know whether this is a problem when the file is saved or whether 
something needs to be fixed when reading.
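To make the loop shape easier to see, here is a minimal, self-contained sketch of the pattern in the DEBUG log above. It is not the ActiveMQ Journal source; the class, the fake readRecordSize() method, and the constants are assumptions taken from the log and the file listing in this thread (maxFileLength 33554432, db-7.log 33554454 bytes).

{code:java}
// Minimal sketch (NOT the ActiveMQ Journal source; names and the fake read are
// assumptions) of the loop shape in the DEBUG log above: once a zero-size record
// is hit, the reader jumps to maxFileLength, which is the same zero-size record.
public class DeadLoopSketch {
    static final int MAX_FILE_LENGTH = 33554432; // configured journal file size
    static final int FILE_LENGTH     = 33554454; // actual size of db-7.log on disk

    // Fake read: offsets at or past MAX_FILE_LENGTH but still inside the real file
    // return the size=0, type=0 record seen in the log.
    static int readRecordSize(int offset) {
        return offset >= MAX_FILE_LENGTH && offset < FILE_LENGTH ? 0 : 91;
    }

    public static void main(String[] args) {
        int offset = 33554449; // the first size=0 location from the log
        for (int i = 0; i < 5; i++) { // capped so the sketch terminates
            if (readRecordSize(offset) == 0) {
                offset = MAX_FILE_LENGTH; // "eof - jump to next datafile" branch
            }
            System.out.println("Reading location: 7:" + offset);
        }
        // Prints "Reading location: 7:33554432" every time - the same pattern as
        // the DEBUG log, because the jump target never leaves this data file.
    }
}
{code}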



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (AMQ-3453) Tool for reading messages in journal files

2016-10-25 Thread Rural Hunter (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-3453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15604828#comment-15604828
 ] 

Rural Hunter commented on AMQ-3453:
---

I just found this tool: https://github.com/Hill30/amq-kahadb-tool

> Tool for reading messages in journal files
> --
>
> Key: AMQ-3453
> URL: https://issues.apache.org/jira/browse/AMQ-3453
> Project: ActiveMQ
>  Issue Type: Wish
>  Components: Broker
>Reporter: Hariharan
>  Labels: tools
>
> Hi,
> I am facing some serious issues with AMQ. Some inputs from my side on what I 
> see:
> 1. I am using AMQ 5.5.
> 2. The KahaDB persistence mechanism is being used.
> 3. The max journal file size is set to 64 MB.
> 4. All messages posted on queues and topics are being consumed successfully 
> by the clients.
> 5. I checked the pending messages by clicking the browseMessages button in 
> the Operations tab under each queue and topic; a lot of them were empty while 
> some had messages that varied over time. So I am sure the messages are 
> being acknowledged properly.
> But what I see is that the journal files are not getting cleaned up 
> regularly. Some of them do get cleaned up, but a lot of them stay back. A 
> snapshot is given below. 
> Total size of this folder is 1.6 GB and growing.
> {code:xml}
> -rw-rw-r-- 1 manih manih0 Aug  6 09:15 lock
> -rw-rw-r-- 1 manih manih 67110696 Aug  8 10:37 db-8.log
> -rw-rw-r-- 1 manih manih 67115667 Aug  8 13:34 db-9.log
> -rw-rw-r-- 1 manih manih 67109195 Aug  9 03:43 db-13.log
> -rw-rw-r-- 1 manih manih 67108941 Aug  9 07:42 db-14.log
> -rw-rw-r-- 1 manih manih 67110336 Aug  9 15:56 db-16.log
> -rw-rw-r-- 1 manih manih 67108973 Aug  9 18:41 db-17.log
> -rw-rw-r-- 1 manih manih 67109661 Aug 10 06:06 db-20.log
> -rw-rw-r-- 1 manih manih 67112421 Aug 10 14:43 db-22.log
> -rw-rw-r-- 1 manih manih 67108882 Aug 10 20:30 db-24.log
> -rw-rw-r-- 1 manih manih 67109313 Aug 11 09:19 db-27.log
> -rw-rw-r-- 1 manih manih 67109241 Aug 11 16:30 db-29.log
> -rw-rw-r-- 1 manih manih 67108976 Aug 12 06:02 db-33.log
> -rw-rw-r-- 1 manih manih 67116308 Aug 12 11:11 db-34.log
> -rw-rw-r-- 1 manih manih 67116690 Aug 12 15:54 db-35.log
> -rw-rw-r-- 1 manih manih 67109627 Aug 12 18:57 db-36.log
> -rw-rw-r-- 1 manih manih 67111521 Aug 13 02:54 db-37.log
> -rw-rw-r-- 1 manih manih 67114239 Aug 13 12:13 db-38.log
> -rw-rw-r-- 1 manih manih 67117361 Aug 13 21:26 db-39.log
> -rw-rw-r-- 1 manih manih 67117848 Aug 14 06:46 db-40.log
> -rw-rw-r-- 1 manih manih 67113289 Aug 15 13:09 db-46.log
> -rw-rw-r-- 1 manih manih 67109565 Aug 15 17:05 db-47.log
> -rw-rw-r-- 1 manih manih 67115347 Aug 15 21:32 db-48.log
> -rw-rw-r-- 1 manih manih 67109370 Aug 16 05:10 db-50.log
> -rw-rw-r-- 1 manih manih  3291400 Aug 16 08:36 db.redo
> -rw-rw-r-- 1 manih manih 53694464 Aug 16 08:36 db.data
> -rw-rw-r-- 1 manih manih 66584576 Aug 16 08:36 db-51.log
> {code}
> If all messages are being consumed regularly, then I am not sure why the old 
> files are staying back while some created after them got cleaned up.
> I am not able to read the files as they are in binary format. I am also not 
> sure if I can go ahead and delete the old files.
> Is there any tool that can help me find out what messages these files are 
> actually holding? If there is no such tool, could the AMQ developers develop 
> one? 
> I am sure it would be really useful for a lot of us.
> Thanks in advance.
> Hari



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMQ-6332) Invisible message prevents storage garbage clean

2016-06-21 Thread Rural Hunter (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-6332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15343201#comment-15343201
 ] 

Rural Hunter commented on AMQ-6332:
---

OK, I put it here: 
http://activemq.2283324.n4.nabble.com/5-13-3-Invisible-message-prevents-storage-garbage-clean-td4713229.html

> Invisible message prevents storage garbage clean
> 
>
> Key: AMQ-6332
> URL: https://issues.apache.org/jira/browse/AMQ-6332
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: KahaDB
>Affects Versions: 5.13.3
>Reporter: Rural Hunter
>
> I noticed some old data files are not cleaned up. So I referred to the docs to 
> check which queue is using them. I found they are reserved for one busy queue. 
> The queue has several hundred messages going through every second. There 
> couldn't be any old messages left there. I also checked the queue in the admin 
> UI and didn't find any old messages in it. New messages are in and out all 
> the time in that queue. 
> If I delete the queue in the admin UI, the old data files are cleaned up soon. 
> Or if I restart activemq, the old data files also get cleaned up.
> I don't know how to track down this problem. I can modify the source to debug 
> it if someone can guide me.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMQ-6332) Invisible message prevents storage garbage clean

2016-06-21 Thread Rural Hunter (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-6332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rural Hunter updated AMQ-6332:
--
Description: 
I noticed some old data files are not cleaned up. So I referred to the docs to 
check which queue is using them. I found they are reserved for one busy queue. 
The queue has several hundred messages going through every second. There 
couldn't be any old messages left there. I also checked the queue in the admin 
UI and didn't find any old messages in it. New messages are in and out all the 
time in that queue. 
If I delete the queue in the admin UI, the old data files are cleaned up soon. 
Or if I restart activemq, the old data files also get cleaned up.
I don't know how to track down this problem. I can modify the source to debug 
it if someone can guide me.

  was:
I noticed some old data files are not cleaned. So I referred the doc to check 
which queue is using them. I found they are reserved for one busy queue. The 
queue has several hundreds of messages going through every seconds. There 
couldn't be any old message left there. I also checked the queue in admin ui 
and didn't find any old message in it. The new messsages are in and out all the 
time in that queue. 
If I delete the queue in admin ui, the old data files are cleaned up soon.  Or 
if I restart activemq, the old data files also get cleaned up.
I don't know how to track this problem. I can modify the source to debug it if 
someone can guide me.


> Invisible message prevents storage garbage clean
> 
>
> Key: AMQ-6332
> URL: https://issues.apache.org/jira/browse/AMQ-6332
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: KahaDB
>Affects Versions: 5.13.3
>Reporter: Rural Hunter
>
> I noticed some old data files are not cleaned up. So I referred to the docs to 
> check which queue is using them. I found they are reserved for one busy queue. 
> The queue has several hundred messages going through every second. There 
> couldn't be any old messages left there. I also checked the queue in the admin 
> UI and didn't find any old messages in it. New messages are in and out all 
> the time in that queue. 
> If I delete the queue in the admin UI, the old data files are cleaned up soon. 
> Or if I restart activemq, the old data files also get cleaned up.
> I don't know how to track down this problem. I can modify the source to debug 
> it if someone can guide me.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMQ-6332) Invisible message prevents storage garbage clean

2016-06-21 Thread Rural Hunter (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-6332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rural Hunter updated AMQ-6332:
--
Summary: Invisible message prevents storage garbage clean  (was: Invisible 
prevents storage garbage clean)

> Invisible message prevents storage garbage clean
> 
>
> Key: AMQ-6332
> URL: https://issues.apache.org/jira/browse/AMQ-6332
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: KahaDB
>Affects Versions: 5.13.3
>Reporter: Rural Hunter
>
> I noticed some old data files are not cleaned. So I referred the doc to check 
> which queue is using them. I found they are reserved for one busy queue. The 
> queue has several hundreds of messages going through every seconds. There 
> couldn't be any old message left there. I also checked the queue in admin ui 
> and didn't find any old message in it. The new messsages are in and out all 
> the time in that queue. 
> If I delete the queue in admin ui, the old data files are cleaned up soon.  
> Or if I restart activemq, the old data files also get cleaned up.
> I don't know how to track this problem. I can modify the source to debug it 
> if someone can guide me.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (AMQ-6332) Invisible prevents storage garbage clean

2016-06-21 Thread Rural Hunter (JIRA)
Rural Hunter created AMQ-6332:
-

 Summary: Invisible prevents storage garbage clean
 Key: AMQ-6332
 URL: https://issues.apache.org/jira/browse/AMQ-6332
 Project: ActiveMQ
  Issue Type: Bug
  Components: KahaDB
Affects Versions: 5.13.3
Reporter: Rural Hunter


I noticed some old data files are not cleaned. So I referred the doc to check 
which queue is using them. I found they are reserved for one busy queue. The 
queue has several hundreds of messages going through every seconds. There 
couldn't be any old message left there. I also checked the queue in admin ui 
and didn't find any old message in it. The new messsages are in and out all the 
time in that queue. 
If I delete the queue in admin ui, the old data files are cleaned up soon.  Or 
if I restart activemq, the old data files also get cleaned up.
I don't know how to track this problem. I can modify the source to debug it if 
someone can guide me.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMQ-5889) Support a single port for all wire protocols

2016-06-07 Thread Rural Hunter (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15320059#comment-15320059
 ] 

Rural Hunter commented on AMQ-5889:
---

http://activemq.2283324.n4.nabble.com/nio-buffer-memory-problem-td4712706.html

Could the changes in this ticket lead to the behavior described in this problem?

> Support a single port for all wire protocols
> 
>
> Key: AMQ-5889
> URL: https://issues.apache.org/jira/browse/AMQ-5889
> Project: ActiveMQ
>  Issue Type: New Feature
>  Components: Broker
>Affects Versions: 5.11.1
>Reporter: Christopher L. Shannon
>Assignee: Christopher L. Shannon
> Fix For: 5.13.0
>
>
> Both Apollo and Artemis support the ability to use a single port for all 
> protocols and to have automatic detection for the protocol being used.  It 
> would be nice to be able to support at least a subset of this feature in the 
> 5.x broker as well.
> Ideally we should at least be able to detect OpenWire, MQTT, STOMP, and AMQP 
> over a TCP, SSL, and NIO transport.  Websockets and HTTP would be a bonus but 
> could be more difficult to implement depending on how this could work with 
> Jetty so that would take some investigation.
> This is especially useful in environments where having to open up several new 
> ports can be difficult because of firewall and security restrictions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (AMQ-6203) KahaDB: Allow rewrite of message acks in older logs which prevent cleanup

2016-06-01 Thread Rural Hunter (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-6203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rural Hunter updated AMQ-6203:
--
Comment: was deleted

(was: I read the code. So is the setting compactAcksIgnoresStoreGrowth can make 
the ack compaction run at busy time?)

> KahaDB: Allow rewrite of message acks in older logs which prevent cleanup
> -
>
> Key: AMQ-6203
> URL: https://issues.apache.org/jira/browse/AMQ-6203
> Project: ActiveMQ
>  Issue Type: Improvement
>  Components: KahaDB
>Affects Versions: 5.13.0, 5.13.1, 5.12.3, 5.13.2
>Reporter: Timothy Bish
>Assignee: Timothy Bish
> Fix For: 5.14.0, 5.13.3
>
>
> There are cases where a chain of journal logs can grow due to acks for 
> messages in older logs needing to be kept so that on recovery proper state 
> can be restored and older messages not be resurrected.  
> In many cases just moving the acks from one log forward to a new log can free 
> an entire chain during subsequent GC cycles.  The 'compacted' ack log can be 
> written during the time between GC cycles without the index lock being held 
> meaning normal broker operations can continue.  
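A toy model of the ack-forwarding idea described above may help. The types and numbers are invented and this is not the KahaDB implementation; it only illustrates why copying acks forward unpins an older chain of files.

{code:java}
// Toy model (invented types, not the KahaDB implementation) of the idea described
// above: a newer journal file that only carries acks for messages stored in older
// files keeps the whole chain alive; rewriting those acks into the current file
// leaves the old chain unreferenced, so the next GC cycle can remove it.
import java.util.*;

public class AckForwardSketch {
    public static void main(String[] args) {
        // journal file id -> message ids acked by records stored in that file
        Map<Integer, Set<String>> acksHeldInFile = new TreeMap<>();
        acksHeldInFile.put(3, new TreeSet<>(Arrays.asList("msg-1", "msg-2")));

        System.out.println("before: acks held in files " + acksHeldInFile.keySet()
                + " -> files 1..3 cannot be GC'd");

        // "compaction": copy the acks forward into the newest file, drop file 3's copy
        int newestFile = 9;
        acksHeldInFile.put(newestFile, acksHeldInFile.remove(3));

        System.out.println("after:  acks held in files " + acksHeldInFile.keySet()
                + " -> files 1..3 are now GC-able");
    }
}
{code}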



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (AMQ-6203) KahaDB: Allow rewrite of message acks in older logs which prevent cleanup

2016-06-01 Thread Rural Hunter (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-6203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rural Hunter updated AMQ-6203:
--
Comment: was deleted

(was: OK, good. I will try and see.)

> KahaDB: Allow rewrite of message acks in older logs which prevent cleanup
> -
>
> Key: AMQ-6203
> URL: https://issues.apache.org/jira/browse/AMQ-6203
> Project: ActiveMQ
>  Issue Type: Improvement
>  Components: KahaDB
>Affects Versions: 5.13.0, 5.13.1, 5.12.3, 5.13.2
>Reporter: Timothy Bish
>Assignee: Timothy Bish
> Fix For: 5.14.0, 5.13.3
>
>
> There are cases where a chain of journal logs can grow due to acks for 
> messages in older logs needing to be kept so that on recovery proper state 
> can be restored and older messages not be resurrected.  
> In many cases just moving the acks from one log forward to a new log can free 
> an entire chain during subsequent GC cycles.  The 'compacted' ack log can be 
> written during the time between GC cycles without the index lock being held 
> meaning normal broker operations can continue.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMQ-6203) KahaDB: Allow rewrite of message acks in older logs which prevent cleanup

2016-06-01 Thread Rural Hunter (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-6203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15311559#comment-15311559
 ] 

Rural Hunter commented on AMQ-6203:
---

OK, good. I will try and see.

> KahaDB: Allow rewrite of message acks in older logs which prevent cleanup
> -
>
> Key: AMQ-6203
> URL: https://issues.apache.org/jira/browse/AMQ-6203
> Project: ActiveMQ
>  Issue Type: Improvement
>  Components: KahaDB
>Affects Versions: 5.13.0, 5.13.1, 5.12.3, 5.13.2
>Reporter: Timothy Bish
>Assignee: Timothy Bish
> Fix For: 5.14.0, 5.13.3
>
>
> There are cases where a chain of journal logs can grow due to acks for 
> messages in older logs needing to be kept so that on recovery proper state 
> can be restored and older messages not be resurrected.  
> In many cases just moving the acks from one log forward to a new log can free 
> an entire chain during subsequent GC cycles.  The 'compacted' ack log can be 
> written during the time between GC cycles without the index lock being held 
> meaning normal broker operations can continue.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMQ-6203) KahaDB: Allow rewrite of message acks in older logs which prevent cleanup

2016-06-01 Thread Rural Hunter (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-6203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15311558#comment-15311558
 ] 

Rural Hunter commented on AMQ-6203:
---

OK, good. I will try and see.

> KahaDB: Allow rewrite of message acks in older logs which prevent cleanup
> -
>
> Key: AMQ-6203
> URL: https://issues.apache.org/jira/browse/AMQ-6203
> Project: ActiveMQ
>  Issue Type: Improvement
>  Components: KahaDB
>Affects Versions: 5.13.0, 5.13.1, 5.12.3, 5.13.2
>Reporter: Timothy Bish
>Assignee: Timothy Bish
> Fix For: 5.14.0, 5.13.3
>
>
> There are cases where a chain of journal logs can grow due to acks for 
> messages in older logs needing to be kept so that on recovery proper state 
> can be restored and older messages not be resurrected.  
> In many cases just moving the acks from one log forward to a new log can free 
> an entire chain during subsequent GC cycles.  The 'compacted' ack log can be 
> written during the time between GC cycles without the index lock being held 
> meaning normal broker operations can continue.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMQ-6203) KahaDB: Allow rewrite of message acks in older logs which prevent cleanup

2016-06-01 Thread Rural Hunter (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-6203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15310029#comment-15310029
 ] 

Rural Hunter commented on AMQ-6203:
---

I read the code. So can the setting compactAcksIgnoresStoreGrowth make the 
ack compaction run at busy times?

> KahaDB: Allow rewrite of message acks in older logs which prevent cleanup
> -
>
> Key: AMQ-6203
> URL: https://issues.apache.org/jira/browse/AMQ-6203
> Project: ActiveMQ
>  Issue Type: Improvement
>  Components: KahaDB
>Affects Versions: 5.13.0, 5.13.1, 5.12.3, 5.13.2
>Reporter: Timothy Bish
>Assignee: Timothy Bish
> Fix For: 5.14.0, 5.13.3
>
>
> There are cases where a chain of journal logs can grow due to acks for 
> messages in older logs needing to be kept so that on recovery proper state 
> can be restored and older messages not be resurrected.  
> In many cases just moving the acks from one log forward to a new log can free 
> an entire chain during subsequent GC cycles.  The 'compacted' ack log can be 
> written during the time between GC cycles without the index lock being held 
> meaning normal broker operations can continue.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMQ-6203) KahaDB: Allow rewrite of message acks in older logs which prevent cleanup

2016-06-01 Thread Rural Hunter (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-6203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15310028#comment-15310028
 ] 

Rural Hunter commented on AMQ-6203:
---

I read the code. So can the setting compactAcksIgnoresStoreGrowth make the 
ack compaction run at busy times?

> KahaDB: Allow rewrite of message acks in older logs which prevent cleanup
> -
>
> Key: AMQ-6203
> URL: https://issues.apache.org/jira/browse/AMQ-6203
> Project: ActiveMQ
>  Issue Type: Improvement
>  Components: KahaDB
>Affects Versions: 5.13.0, 5.13.1, 5.12.3, 5.13.2
>Reporter: Timothy Bish
>Assignee: Timothy Bish
> Fix For: 5.14.0, 5.13.3
>
>
> There are cases where a chain of journal logs can grow due to acks for 
> messages in older logs needing to be kept so that on recovery proper state 
> can be restored and older messages not be resurrected.  
> In many cases just moving the acks from one log forward to a new log can free 
> an entire chain during subsequent GC cycles.  The 'compacted' ack log can be 
> written during the time between GC cycles without the index lock being held 
> meaning normal broker operations can continue.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMQ-6203) KahaDB: Allow rewrite of message acks in older logs which prevent cleanup

2016-05-30 Thread Rural Hunter (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-6203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15307100#comment-15307100
 ] 

Rural Hunter commented on AMQ-6203:
---

OK, then it is not very useful on a really busy production server.

> KahaDB: Allow rewrite of message acks in older logs which prevent cleanup
> -
>
> Key: AMQ-6203
> URL: https://issues.apache.org/jira/browse/AMQ-6203
> Project: ActiveMQ
>  Issue Type: Improvement
>  Components: KahaDB
>Affects Versions: 5.13.0, 5.13.1, 5.12.3, 5.13.2
>Reporter: Timothy Bish
>Assignee: Timothy Bish
> Fix For: 5.14.0, 5.13.3
>
>
> There are cases where a chain of journal logs can grow due to acks for 
> messages in older logs needing to be kept so that on recovery proper state 
> can be restored and older messages not be resurrected.  
> In many cases just moving the acks from one log forward to a new log can free 
> an entire chain during subsequent GC cycles.  The 'compacted' ack log can be 
> written during the time between GC cycles without the index lock being held 
> meaning normal broker operations can continue.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMQ-6203) KahaDB: Allow rewrite of message acks in older logs which prevent cleanup

2016-05-26 Thread Rural Hunter (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-6203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15302109#comment-15302109
 ] 

Rural Hunter commented on AMQ-6203:
---

I tried 5.13.3, but it seems the compaction actually can never start under heavy 
load (such as our production), as I always see this log:
2016-05-26 21:47:36,153 | TRACE | Journal activity detected, no Ack compaction 
scheduled. |org.apache.activemq.store.kahadb.MessageDatabase | ActiveMQ Journal 
Checkpoint Worker
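Reading the TRACE message together with the compactAcksIgnoresStoreGrowth question above suggests a gate of roughly the following shape. This is a hypothetical sketch, not the MessageDatabase source; the method name and parameters are invented for illustration.

{code:java}
// Hypothetical sketch (not the MessageDatabase source) of the gate that the TRACE
// line above and the compactAcksIgnoresStoreGrowth name suggest: with the default
// setting, any journal activity skips compaction, so a busy broker never compacts.
public class AckCompactionGateSketch {
    static boolean shouldScheduleCompaction(boolean journalActivityDetected,
                                            boolean compactAcksIgnoresStoreGrowth) {
        if (journalActivityDetected && !compactAcksIgnoresStoreGrowth) {
            System.out.println("Journal activity detected, no Ack compaction scheduled.");
            return false;
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(shouldScheduleCompaction(true, false)); // false: busy broker, default setting
        System.out.println(shouldScheduleCompaction(true, true));  // true: flag set, compaction scheduled
    }
}
{code}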

> KahaDB: Allow rewrite of message acks in older logs which prevent cleanup
> -
>
> Key: AMQ-6203
> URL: https://issues.apache.org/jira/browse/AMQ-6203
> Project: ActiveMQ
>  Issue Type: Improvement
>  Components: KahaDB
>Affects Versions: 5.13.0, 5.13.1, 5.12.3, 5.13.2
>Reporter: Timothy Bish
>Assignee: Timothy Bish
> Fix For: 5.14.0, 5.13.3
>
>
> There are cases where a chain of journal logs can grow due to acks for 
> messages in older logs needing to be kept so that on recovery proper state 
> can be restored and older messages not be resurrected.  
> In many cases just moving the acks from one log forward to a new log can free 
> an entire chain during subsequent GC cycles.  The 'compacted' ack log can be 
> written during the time between GC cycles without the index lock being held 
> meaning normal broker operations can continue.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMQ-6287) Message ack rewrite task doesn't properly release index lock

2016-05-26 Thread Rural Hunter (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-6287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15301785#comment-15301785
 ] 

Rural Hunter commented on AMQ-6287:
---

Did this actually land in release 5.13.3? I see it in the release notes: 
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12311210&version=12335045
But 5.13.3 was released on 05/03 while this change was made on 05/11. How could 
that happen?

> Message ack rewrite task doesn't properly release index lock
> 
>
> Key: AMQ-6287
> URL: https://issues.apache.org/jira/browse/AMQ-6287
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 5.13.3
>Reporter: Christopher L. Shannon
>Assignee: Christopher L. Shannon
> Fix For: 5.14.0, 5.13.3
>
>
> The AckCompactionRunner that processes the ack rewrites doesn't properly 
> release the index lock in a try/finally block so it is possible that the lock 
> isn't released, such as when no journal to advance is found.
> This only currently affects 5.13.3 as in master it was already fixed by this 
> commit: 
> https://git-wip-us.apache.org/repos/asf?p=activemq.git;a=commit;h=62bdbb0db5dc4354f0e00fd5259b3db53eb1432d
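The fix pattern described above is the usual try/finally guard around the index lock. A generic sketch, with illustrative names rather than the AckCompactionRunner source:

{code:java}
// Generic sketch of the fix pattern described above (illustrative names, not the
// AckCompactionRunner source): take the index lock and release it in finally so an
// early return (e.g. no journal to advance found) or an exception cannot leak it.
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class IndexLockSketch {
    private final ReentrantReadWriteLock indexLock = new ReentrantReadWriteLock();

    void compactAcks(boolean journalToAdvanceFound) {
        indexLock.writeLock().lock();
        try {
            if (!journalToAdvanceFound) {
                return; // early exit still releases the lock via finally
            }
            // ... rewrite acks into a new journal file ...
        } finally {
            indexLock.writeLock().unlock();
        }
    }

    public static void main(String[] args) {
        new IndexLockSketch().compactAcks(false);
        System.out.println("lock released even on early return");
    }
}
{code}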



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMQ-6203) KahaDB: Allow rewrite of message acks in older logs which prevent cleanup

2016-05-24 Thread Rural Hunter (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-6203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15297931#comment-15297931
 ] 

Rural Hunter commented on AMQ-6203:
---

May I know if this affects 5.10.2? We are on this version and always see store 
file leaks and have to restart to clean up.

> KahaDB: Allow rewrite of message acks in older logs which prevent cleanup
> -
>
> Key: AMQ-6203
> URL: https://issues.apache.org/jira/browse/AMQ-6203
> Project: ActiveMQ
>  Issue Type: Improvement
>  Components: KahaDB
>Affects Versions: 5.13.0, 5.13.1, 5.12.3, 5.13.2
>Reporter: Timothy Bish
>Assignee: Timothy Bish
> Fix For: 5.14.0, 5.13.3
>
>
> There are cases where a chain of journal logs can grow due to acks for 
> messages in older logs needing to be kept so that on recovery proper state 
> can be restored and older messages not be resurrected.  
> In many cases just moving the acks from one log forward to a new log can free 
> an entire chain during subsequent GC cycles.  The 'compacted' ack log can be 
> written during the time between GC cycles without the index lock being held 
> meaning normal broker operations can continue.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMQ-4943) Corrupted KahaDB store: java.lang.NegativeArraySizeException

2015-09-16 Thread Rural Hunter (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14791516#comment-14791516
 ] 

Rural Hunter commented on AMQ-4943:
---

I got the same problem after I encountered a "Too many open files" error, 
raised the ulimit, and then restarted activemq 5.7.0.

> Corrupted KahaDB store: java.lang.NegativeArraySizeException
> 
>
> Key: AMQ-4943
> URL: https://issues.apache.org/jira/browse/AMQ-4943
> Project: ActiveMQ
>  Issue Type: Bug
> Environment: activemq-5.5.1-fuse-10-16
>Reporter: Lionel Cons
>Priority: Critical
>
> One of our brokers went crazy and logged _many_ exceptions looking like this:
> 2013-12-19 12:27:09,007 [BrokerService[foobar] Task-13] ERROR 
> AbstractStoreCursor - 
> org.apache.activemq.broker.region.cursors.QueueStorePrefetch@654e3615:Consumer.prod.whatever,batchResetNeeded=false,storeHasMessages=true,size=2461725,cacheEnabled=false,maxBatchSize:200
>  - Failed to fill batch
> java.lang.RuntimeException: java.io.IOException: Invalid location: 43153:28, 
> : java.lang.NegativeArraySizeException
>   at 
> org.apache.activemq.broker.region.cursors.AbstractStoreCursor.fillBatch(AbstractStoreCursor.java:277)
>   at 
> org.apache.activemq.broker.region.cursors.AbstractStoreCursor.reset(AbstractStoreCursor.java:110)
>   at 
> org.apache.activemq.broker.region.cursors.StoreQueueCursor.reset(StoreQueueCursor.java:157)
>   at 
> org.apache.activemq.broker.region.Queue.doPageInForDispatch(Queue.java:1746)
>   at 
> org.apache.activemq.broker.region.Queue.pageInMessages(Queue.java:1962)
>   at org.apache.activemq.broker.region.Queue.iterate(Queue.java:1470)
>   at 
> org.apache.activemq.thread.PooledTaskRunner.runTask(PooledTaskRunner.java:122)
>   at 
> org.apache.activemq.thread.PooledTaskRunner$1.run(PooledTaskRunner.java:43)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
>   at java.lang.Thread.run(Thread.java:662)
> Caused by: java.io.IOException: Invalid location: 43153:28, : 
> java.lang.NegativeArraySizeException
>   at 
> org.apache.kahadb.journal.DataFileAccessor.readRecord(DataFileAccessor.java:94)
>   at org.apache.kahadb.journal.Journal.read(Journal.java:601)
>   at 
> org.apache.activemq.store.kahadb.MessageDatabase.load(MessageDatabase.java:908)
>   at 
> org.apache.activemq.store.kahadb.KahaDBStore.loadMessage(KahaDBStore.java:1024)
>   at 
> org.apache.activemq.store.kahadb.KahaDBStore$KahaDBMessageStore$4.execute(KahaDBStore.java:552)
>   at org.apache.kahadb.page.Transaction.execute(Transaction.java:771)
>   at 
> org.apache.activemq.store.kahadb.KahaDBStore$KahaDBMessageStore.recoverNextMessages(KahaDBStore.java:541)
>   at 
> org.apache.activemq.store.ProxyMessageStore.recoverNextMessages(ProxyMessageStore.java:88)
>   at 
> org.apache.activemq.broker.region.cursors.QueueStorePrefetch.doFillBatch(QueueStorePrefetch.java:97)
>   at 
> org.apache.activemq.broker.region.cursors.AbstractStoreCursor.fillBatch(AbstractStoreCursor.java:274)
>   ... 10 more
> 2013-12-19 12:27:09,007 [BrokerService[foobar] Task-13] ERROR Queue - Failed 
> to page in more queue messages 
> java.lang.RuntimeException: java.lang.RuntimeException: java.io.IOException: 
> Invalid location: 43153:28, : java.lang.NegativeArraySizeException
>   at 
> org.apache.activemq.broker.region.cursors.AbstractStoreCursor.reset(AbstractStoreCursor.java:113)
>   at 
> org.apache.activemq.broker.region.cursors.StoreQueueCursor.reset(StoreQueueCursor.java:157)
>   at 
> org.apache.activemq.broker.region.Queue.doPageInForDispatch(Queue.java:1746)
>   at 
> org.apache.activemq.broker.region.Queue.pageInMessages(Queue.java:1962)
>   at org.apache.activemq.broker.region.Queue.iterate(Queue.java:1470)
>   at 
> org.apache.activemq.thread.PooledTaskRunner.runTask(PooledTaskRunner.java:122)
>   at 
> org.apache.activemq.thread.PooledTaskRunner$1.run(PooledTaskRunner.java:43)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
>   at java.lang.Thread.run(Thread.java:662)
> Caused by: java.lang.RuntimeException: java.io.IOException: Invalid location: 
> 43153:28, : java.lang.NegativeArraySizeException
>   at 
> org.apache.activemq.broker.region.cursors.AbstractStoreCursor.fillBatch(AbstractStoreCursor.java:277)
>   at 
> org.apache.activemq.broker.region.cursors.AbstractStoreCursor.reset(AbstractStoreCursor.java:110)
>   ... 9 more
> Caused by: java.io.IOException: Invalid locati