[jira] [Commented] (AMQ-5542) KahaDB data files containing acknowledgements are deleted during cleanup

2015-01-28 Thread Sergiy Barlabanov (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14296173#comment-14296173
 ] 

Sergiy Barlabanov commented on AMQ-5542:


This is correct. This is what we have in production. We have some messages in 
DLQs; they stay there for at most 2 days, after which a special job removes 
messages older than 2 days from the DLQs.
Now we expect that ActiveMQ will always retain nearly all data files for the 
last two days because of those messages in the DLQs. This would take quite a 
lot of space, which we did not expect when we set up the servers - maybe we 
will have to get a larger SAN volume for that.

Another consideration is that this affects not only the space eaten by KahaDB 
but also the time ActiveMQ needs to recover after a crash: ActiveMQ will need 
quite a lot of time to replay all the data files that are sitting there just 
because of a few DLQ messages.
The periodic cleanup may also take more time to check all the data files (and 
KahaDB cleanup is a single-threaded, storage-blocking operation).

> KahaDB data files containing acknowledgements are deleted during cleanup
> 
>
> Key: AMQ-5542
> URL: https://issues.apache.org/jira/browse/AMQ-5542
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Message Store
>Affects Versions: 5.10.0, 5.10.1
>Reporter: Sergiy Barlabanov
> Attachments: AMQ-5542.patch, AdjustedAMQ2832Test.patch
>
>
> AMQ-2832 was not fixed cleanly.
> The commit dd68c61e65f24b7dc498b36e34960a4bc46ded4b by Gary from 8.10.2010 
> introduced a problem by deleting too many files.
> Scenario we are currently facing in production:
> Data file #1 contains unconsumed messages sitting in a DLQ, so this file is 
> not a cleanup candidate.
> The next file, #2, contains acks for some messages from file #1. This file is 
> not a cleanup candidate either (because of the ackMessageFileMap logic).
> The next file, #3, contains acks for some messages from file #2, and this 
> file is deleted during the cleanup procedure. So on broker restart all 
> messages from #2 whose acks were in the deleted file #3 are replayed!
> The reason is the gcCandidates variable, which is a copy of gcCandidateSet 
> (see MessageDatabase#checkpointUpdate at the end of the method - 
> org/apache/activemq/store/kahadb/MessageDatabase.java:1659 on the 5.10.0 tag). 
> So when a candidate is removed from gcCandidateSet 
> (org/apache/activemq/store/kahadb/MessageDatabase.java:1668 on the 5.10.0 tag), 
> gcCandidates still contains that candidate and the comparison at 
> org/apache/activemq/store/kahadb/MessageDatabase.java:1666 gives the wrong 
> result!
> I will try to adjust AMQ2832Test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (AMQ-5542) KahaDB data files containing acknowledgements are deleted during cleanup

2015-01-27 Thread Sergiy Barlabanov (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14294463#comment-14294463
 ] 

Sergiy Barlabanov edited comment on AMQ-5542 at 1/28/15 12:19 AM:
--

Tim, I think you are right. Otherwise KahaDB would lose acks and replay 
messages.
If this is true, then the current cleanup mechanism has to be reconsidered. It 
will not work well for scenarios where messages may stay unconsumed for some 
time; some sort of compaction has to be done.
In the current project we nearly always have some messages sitting in DLQs for 
a maximum of 2 days. This means that we would always keep nearly all data 
files for the last two days.
Currently it is OK, we have enough space on the SAN. But what if it were 2 
weeks instead of 2 days?
I think in that case we would use the JDBC store.
Another possibility would be to use mKahaDB and put the DLQs into a separate 
store. That store would not grow fast since there would not be much traffic.
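
For reference, a minimal activemq.xml sketch of such an mKahaDB layout (the 
directory and the DLQ destination pattern below are only placeholders - adjust 
them to the broker's actual data directory and DLQ naming):

{code}
<persistenceAdapter>
  <mKahaDB directory="${activemq.data}/mkahadb">
    <filteredPersistenceAdapters>
      <!-- DLQ destinations get their own journal, so long-lived DLQ messages
           do not pin the data files of the main store -->
      <filteredKahaDB queue="DLQ.>">
        <persistenceAdapter>
          <kahaDB/>
        </persistenceAdapter>
      </filteredKahaDB>
      <!-- everything else stays in the default store -->
      <filteredKahaDB>
        <persistenceAdapter>
          <kahaDB/>
        </persistenceAdapter>
      </filteredKahaDB>
    </filteredPersistenceAdapters>
  </mKahaDB>
</persistenceAdapter>
{code}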


was (Author: barlabanov):
I think, you are right. Otherwise KahaDB will loose acks and replay messages.
If this is true, than the current cleanup mechanism have to be reconsidered. It 
will not work well for scenarios, where messages may stay unconsumed for some 
time. Some sort of compaction has to be done.
In the current project we have nearly always some messages in DLQs sitting 
there for maximum 2 days. This means that we would always have nearly all data 
files for the last two days.
Currently it is ok, we have enough of place on the SAN. But what if that would 
be 2 weeks instead of 2 days?
I think in this case we would use JDBC storage.
Another possibility would be to use mKahaDB and put DLQs to a separate storage. 
That storage would not grow fast since there would not be much traffic.

> KahaDB data files containing acknowledgements are deleted during cleanup
> 
>
> Key: AMQ-5542
> URL: https://issues.apache.org/jira/browse/AMQ-5542
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Message Store
>Affects Versions: 5.10.0, 5.10.1
>Reporter: Sergiy Barlabanov
> Attachments: AMQ-5542.patch, AdjustedAMQ2832Test.patch
>
>
> AMQ-2832 was not fixed cleanly.
> The commit dd68c61e65f24b7dc498b36e34960a4bc46ded4b by Gary from 8.10.2010 
> introduced a problem by deleting too many files.
> Scenario we are currently facing in production:
> Data file #1 contains unconsumed messages sitting in a DLQ, so this file is 
> not a cleanup candidate.
> The next file, #2, contains acks for some messages from file #1. This file is 
> not a cleanup candidate either (because of the ackMessageFileMap logic).
> The next file, #3, contains acks for some messages from file #2, and this 
> file is deleted during the cleanup procedure. So on broker restart all 
> messages from #2 whose acks were in the deleted file #3 are replayed!
> The reason is the gcCandidates variable, which is a copy of gcCandidateSet 
> (see MessageDatabase#checkpointUpdate at the end of the method - 
> org/apache/activemq/store/kahadb/MessageDatabase.java:1659 on the 5.10.0 tag). 
> So when a candidate is removed from gcCandidateSet 
> (org/apache/activemq/store/kahadb/MessageDatabase.java:1668 on the 5.10.0 tag), 
> gcCandidates still contains that candidate and the comparison at 
> org/apache/activemq/store/kahadb/MessageDatabase.java:1666 gives the wrong 
> result!
> I will try to adjust AMQ2832Test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMQ-5542) KahaDB data files containing acknowledgements are deleted during cleanup

2015-01-27 Thread Sergiy Barlabanov (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14294463#comment-14294463
 ] 

Sergiy Barlabanov commented on AMQ-5542:


I think you are right. Otherwise KahaDB would lose acks and replay messages.
If this is true, then the current cleanup mechanism has to be reconsidered. It 
will not work well for scenarios where messages may stay unconsumed for some 
time; some sort of compaction has to be done.
In the current project we nearly always have some messages sitting in DLQs for 
a maximum of 2 days. This means that we would always keep nearly all data 
files for the last two days.
Currently it is OK, we have enough space on the SAN. But what if it were 2 
weeks instead of 2 days?
I think in that case we would use the JDBC store.
Another possibility would be to use mKahaDB and put the DLQs into a separate 
store. That store would not grow fast since there would not be much traffic.

> KahaDB data files containing acknowledgements are deleted during cleanup
> 
>
> Key: AMQ-5542
> URL: https://issues.apache.org/jira/browse/AMQ-5542
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Message Store
>Affects Versions: 5.10.0, 5.10.1
>Reporter: Sergiy Barlabanov
> Attachments: AMQ-5542.patch, AdjustedAMQ2832Test.patch
>
>
> AMQ-2832 was not fixed cleanly.
> The commit dd68c61e65f24b7dc498b36e34960a4bc46ded4b by Gary from 8.10.2010 
> introduced a problem by deleting too many files.
> Scenario we are currently facing in production:
> Data file #1 contains unconsumed messages sitting in a DLQ, so this file is 
> not a cleanup candidate.
> The next file, #2, contains acks for some messages from file #1. This file is 
> not a cleanup candidate either (because of the ackMessageFileMap logic).
> The next file, #3, contains acks for some messages from file #2, and this 
> file is deleted during the cleanup procedure. So on broker restart all 
> messages from #2 whose acks were in the deleted file #3 are replayed!
> The reason is the gcCandidates variable, which is a copy of gcCandidateSet 
> (see MessageDatabase#checkpointUpdate at the end of the method - 
> org/apache/activemq/store/kahadb/MessageDatabase.java:1659 on the 5.10.0 tag). 
> So when a candidate is removed from gcCandidateSet 
> (org/apache/activemq/store/kahadb/MessageDatabase.java:1668 on the 5.10.0 tag), 
> gcCandidates still contains that candidate and the comparison at 
> org/apache/activemq/store/kahadb/MessageDatabase.java:1666 gives the wrong 
> result!
> I will try to adjust AMQ2832Test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMQ-5542) KahaDB data files containing acknowledgements are deleted during cleanup

2015-01-27 Thread Sergiy Barlabanov (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14294349#comment-14294349
 ] 

Sergiy Barlabanov commented on AMQ-5542:


AMQ-2736 actually introduced the problem. See Gary's comment: 
https://issues.apache.org/jira/browse/AMQ-2736#comment-12942986.
It was thought to be a logical error, but it was not. It is not possible to 
simply drop files when they contain acks pointing to other files which are 
blocked.

> KahaDB data files containing acknowledgements are deleted during cleanup
> 
>
> Key: AMQ-5542
> URL: https://issues.apache.org/jira/browse/AMQ-5542
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Message Store
>Affects Versions: 5.10.0, 5.10.1
>Reporter: Sergiy Barlabanov
> Attachments: AMQ-5542.patch, AdjustedAMQ2832Test.patch
>
>
> AMQ-2832 was not fixed cleanly.
> The commit dd68c61e65f24b7dc498b36e34960a4bc46ded4b by Gary from 8.10.2010 
> introduced a problem by deleting too many files.
> Scenario we are currently facing in production:
> Data file #1 contains unconsumed messages sitting in a DLQ, so this file is 
> not a cleanup candidate.
> The next file, #2, contains acks for some messages from file #1. This file is 
> not a cleanup candidate either (because of the ackMessageFileMap logic).
> The next file, #3, contains acks for some messages from file #2, and this 
> file is deleted during the cleanup procedure. So on broker restart all 
> messages from #2 whose acks were in the deleted file #3 are replayed!
> The reason is the gcCandidates variable, which is a copy of gcCandidateSet 
> (see MessageDatabase#checkpointUpdate at the end of the method - 
> org/apache/activemq/store/kahadb/MessageDatabase.java:1659 on the 5.10.0 tag). 
> So when a candidate is removed from gcCandidateSet 
> (org/apache/activemq/store/kahadb/MessageDatabase.java:1668 on the 5.10.0 tag), 
> gcCandidates still contains that candidate and the comparison at 
> org/apache/activemq/store/kahadb/MessageDatabase.java:1666 gives the wrong 
> result!
> I will try to adjust AMQ2832Test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (AMQ-5542) KahaDB data files containing acknowledgements are deleted during cleanup

2015-01-27 Thread Sergiy Barlabanov (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14294349#comment-14294349
 ] 

Sergiy Barlabanov edited comment on AMQ-5542 at 1/27/15 10:49 PM:
--

AMQ-2736 actually introduced the problem. See Gary's comment: 
https://issues.apache.org/jira/browse/AMQ-2736#comment-12942986.
It was thought to be a logical error, but it was not. It is not possible to 
simply drop files when they contain acks pointing to other files which are 
blocked.


was (Author: barlabanov):
AMQ-2736 actually introduced the problem. See the commend of Gary 
https://issues.apache.org/jira/browse/AMQ-2736#comment-12942986.
It was thought to be a logical error, but it was not. It is not possible just 
to drop the files, when they contain acks pointing to other files, which are 
blocked.

> KahaDB data files containing acknowledgements are deleted during cleanup
> 
>
> Key: AMQ-5542
> URL: https://issues.apache.org/jira/browse/AMQ-5542
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Message Store
>Affects Versions: 5.10.0, 5.10.1
>Reporter: Sergiy Barlabanov
> Attachments: AMQ-5542.patch, AdjustedAMQ2832Test.patch
>
>
> AMQ-2832 was not fixed cleanly.
> The commit dd68c61e65f24b7dc498b36e34960a4bc46ded4b by Gary from 8.10.2010 
> introduced a problem by deleting too many files.
> Scenario we are currently facing in production:
> Data file #1 contains unconsumed messages sitting in a DLQ, so this file is 
> not a cleanup candidate.
> The next file, #2, contains acks for some messages from file #1. This file is 
> not a cleanup candidate either (because of the ackMessageFileMap logic).
> The next file, #3, contains acks for some messages from file #2, and this 
> file is deleted during the cleanup procedure. So on broker restart all 
> messages from #2 whose acks were in the deleted file #3 are replayed!
> The reason is the gcCandidates variable, which is a copy of gcCandidateSet 
> (see MessageDatabase#checkpointUpdate at the end of the method - 
> org/apache/activemq/store/kahadb/MessageDatabase.java:1659 on the 5.10.0 tag). 
> So when a candidate is removed from gcCandidateSet 
> (org/apache/activemq/store/kahadb/MessageDatabase.java:1668 on the 5.10.0 tag), 
> gcCandidates still contains that candidate and the comparison at 
> org/apache/activemq/store/kahadb/MessageDatabase.java:1666 gives the wrong 
> result!
> I will try to adjust AMQ2832Test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMQ-5542) KahaDB data files containing acknowledgements are deleted during cleanup

2015-01-27 Thread Sergiy Barlabanov (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14294333#comment-14294333
 ] 

Sergiy Barlabanov commented on AMQ-5542:


The consequence of the fix is that, if you are really unlucky, you will get all 
files blocked, beginning with the one holding the unconsumed message(s) and 
continuing through every subsequent file that contains acks pointing to the 
files before it.
So having some messages unconsumed for a long time (like messages in a DLQ) 
can, in such an unlucky case, eat quite a lot of space.

> KahaDB data files containing acknowledgements are deleted during cleanup
> 
>
> Key: AMQ-5542
> URL: https://issues.apache.org/jira/browse/AMQ-5542
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Message Store
>Affects Versions: 5.10.0, 5.10.1
>Reporter: Sergiy Barlabanov
> Attachments: AMQ-5542.patch, AdjustedAMQ2832Test.patch
>
>
> AMQ-2832 was not fixed cleanly.
> The commit dd68c61e65f24b7dc498b36e34960a4bc46ded4b by Gary from 8.10.2010 
> introduced a problem by deleting too many files.
> Scenario we are currently facing in production:
> Data file #1 contains unconsumed messages sitting in a DLQ, so this file is 
> not a cleanup candidate.
> The next file, #2, contains acks for some messages from file #1. This file is 
> not a cleanup candidate either (because of the ackMessageFileMap logic).
> The next file, #3, contains acks for some messages from file #2, and this 
> file is deleted during the cleanup procedure. So on broker restart all 
> messages from #2 whose acks were in the deleted file #3 are replayed!
> The reason is the gcCandidates variable, which is a copy of gcCandidateSet 
> (see MessageDatabase#checkpointUpdate at the end of the method - 
> org/apache/activemq/store/kahadb/MessageDatabase.java:1659 on the 5.10.0 tag). 
> So when a candidate is removed from gcCandidateSet 
> (org/apache/activemq/store/kahadb/MessageDatabase.java:1668 on the 5.10.0 tag), 
> gcCandidates still contains that candidate and the comparison at 
> org/apache/activemq/store/kahadb/MessageDatabase.java:1666 gives the wrong 
> result!
> I will try to adjust AMQ2832Test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMQ-5542) KahaDB data files containing acknowledgements are deleted during cleanup

2015-01-27 Thread Sergiy Barlabanov (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergiy Barlabanov updated AMQ-5542:
---
Attachment: AMQ-5542.patch

This is a possible patch.
It just removes the copy of gcCandidateSet; it is unclear to me what the 
purpose of that copy was.
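
For illustration only, here is a small self-contained sketch of the pattern the 
patch removes (this is NOT the actual MessageDatabase code; the class, method 
and variable names below are made up): pruning GC candidates while checking 
membership against a frozen copy of the candidate set instead of the live 
gcCandidateSet.

{code}
import java.util.*;

// Toy model of the cleanup decision described in the issue: a candidate data
// file may only be deleted if every file it acknowledges will also be deleted.
public class AckFilePruningSketch {

    // ackMap: data file -> files whose messages it acknowledges.
    // candidates: files currently considered deletable.
    // useStaleCopy: if true, membership checks go against a frozen copy,
    // mimicking the gcCandidates copy of gcCandidateSet.
    static SortedSet<Integer> prune(Map<Integer, Set<Integer>> ackMap,
                                    SortedSet<Integer> candidates,
                                    boolean useStaleCopy) {
        SortedSet<Integer> live = new TreeSet<Integer>(candidates);
        SortedSet<Integer> lookup = useStaleCopy ? new TreeSet<Integer>(live) : live;
        for (Integer candidate : new TreeSet<Integer>(live)) {
            Set<Integer> referenced = ackMap.get(candidate);
            if (referenced == null) {
                continue;
            }
            for (Integer ref : referenced) {
                if (!lookup.contains(ref)) {
                    // The candidate holds acks for a file that will be kept,
                    // so the candidate itself must be kept as well.
                    live.remove(candidate);
                    break;
                }
            }
        }
        return live; // whatever is left here is deleted by the cleanup
    }

    public static void main(String[] args) {
        // Scenario from the description: #1 holds unconsumed DLQ messages and
        // is never a candidate, #2 holds acks for #1, #3 holds acks for #2.
        Map<Integer, Set<Integer>> ackMap = new HashMap<Integer, Set<Integer>>();
        ackMap.put(2, Collections.singleton(1));
        ackMap.put(3, Collections.singleton(2));
        SortedSet<Integer> candidates = new TreeSet<Integer>(Arrays.asList(2, 3));

        System.out.println("deleted using the live set: " + prune(ackMap, candidates, false)); // []
        System.out.println("deleted using a stale copy: " + prune(ackMap, candidates, true));  // [3]
    }
}
{code}

With the live set nothing gets deleted; with the stale copy file #3 is deleted 
even though it still carries the acks for file #2, which is exactly the replay 
scenario described in the issue.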

> KahaDB data files containing acknowledgements are deleted during cleanup
> 
>
> Key: AMQ-5542
> URL: https://issues.apache.org/jira/browse/AMQ-5542
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Message Store
>Affects Versions: 5.10.0, 5.10.1
>Reporter: Sergiy Barlabanov
> Attachments: AMQ-5542.patch, AdjustedAMQ2832Test.patch
>
>
> AMQ-2832 was not fixed cleanly.
> The commit dd68c61e65f24b7dc498b36e34960a4bc46ded4b by Gary from 8.10.2010 
> introduced a problem by deleting too many files.
> Scenario we are currently facing in production:
> Data file #1 contains unconsumed messages sitting in a DLQ, so this file is 
> not a cleanup candidate.
> The next file, #2, contains acks for some messages from file #1. This file is 
> not a cleanup candidate either (because of the ackMessageFileMap logic).
> The next file, #3, contains acks for some messages from file #2, and this 
> file is deleted during the cleanup procedure. So on broker restart all 
> messages from #2 whose acks were in the deleted file #3 are replayed!
> The reason is the gcCandidates variable, which is a copy of gcCandidateSet 
> (see MessageDatabase#checkpointUpdate at the end of the method - 
> org/apache/activemq/store/kahadb/MessageDatabase.java:1659 on the 5.10.0 tag). 
> So when a candidate is removed from gcCandidateSet 
> (org/apache/activemq/store/kahadb/MessageDatabase.java:1668 on the 5.10.0 tag), 
> gcCandidates still contains that candidate and the comparison at 
> org/apache/activemq/store/kahadb/MessageDatabase.java:1666 gives the wrong 
> result!
> I will try to adjust AMQ2832Test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMQ-5542) KahaDB data files containing acknowledgements are deleted during cleanup

2015-01-27 Thread Sergiy Barlabanov (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergiy Barlabanov updated AMQ-5542:
---
Attachment: AdjustedAMQ2832Test.patch

This is an adjustment to AMQ2832Test.java; the adjusted test currently fails on 
trunk, 5.10.1, and 5.10.0.
The adjustment is relative to the trunk version of AMQ2832Test.java.


> KahaDB data files containing acknowledgements are deleted during cleanup
> 
>
> Key: AMQ-5542
> URL: https://issues.apache.org/jira/browse/AMQ-5542
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Message Store
>Affects Versions: 5.10.0, 5.10.1
>Reporter: Sergiy Barlabanov
> Attachments: AdjustedAMQ2832Test.patch
>
>
> AMQ-2832 was not fixed cleanly.
> The commit dd68c61e65f24b7dc498b36e34960a4bc46ded4b by Gary from 8.10.2010 
> introduced a problem by deleting too many files.
> Scenario we are currently facing in production:
> Data file #1 contains unconsumed messages sitting in a DLQ, so this file is 
> not a cleanup candidate.
> The next file, #2, contains acks for some messages from file #1. This file is 
> not a cleanup candidate either (because of the ackMessageFileMap logic).
> The next file, #3, contains acks for some messages from file #2, and this 
> file is deleted during the cleanup procedure. So on broker restart all 
> messages from #2 whose acks were in the deleted file #3 are replayed!
> The reason is the gcCandidates variable, which is a copy of gcCandidateSet 
> (see MessageDatabase#checkpointUpdate at the end of the method - 
> org/apache/activemq/store/kahadb/MessageDatabase.java:1659 on the 5.10.0 tag). 
> So when a candidate is removed from gcCandidateSet 
> (org/apache/activemq/store/kahadb/MessageDatabase.java:1668 on the 5.10.0 tag), 
> gcCandidates still contains that candidate and the comparison at 
> org/apache/activemq/store/kahadb/MessageDatabase.java:1666 gives the wrong 
> result!
> I will try to adjust AMQ2832Test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (AMQ-5542) KahaDB data files containing acknowledgements are deleted during cleanup

2015-01-27 Thread Sergiy Barlabanov (JIRA)
Sergiy Barlabanov created AMQ-5542:
--

 Summary: KahaDB data files containing acknowledgements are deleted 
during cleanup
 Key: AMQ-5542
 URL: https://issues.apache.org/jira/browse/AMQ-5542
 Project: ActiveMQ
  Issue Type: Bug
  Components: Message Store
Affects Versions: 5.10.1, 5.10.0
Reporter: Sergiy Barlabanov


AMQ-2832 was not fixed cleanly.
The commit dd68c61e65f24b7dc498b36e34960a4bc46ded4b by Gary from 8.10.2010 
introduced a problem by deleting too many files.

Scenario we are currently facing in production:
Data file #1 contains unconsumed messages sitting in a DLQ, so this file is not 
a cleanup candidate.
The next file, #2, contains acks for some messages from file #1. This file is 
not a cleanup candidate either (because of the ackMessageFileMap logic).
The next file, #3, contains acks for some messages from file #2, and this file 
is deleted during the cleanup procedure. So on broker restart all messages from 
#2 whose acks were in the deleted file #3 are replayed!
The reason is the gcCandidates variable, which is a copy of gcCandidateSet (see 
MessageDatabase#checkpointUpdate at the end of the method - 
org/apache/activemq/store/kahadb/MessageDatabase.java:1659 on the 5.10.0 tag). 
So when a candidate is removed from gcCandidateSet 
(org/apache/activemq/store/kahadb/MessageDatabase.java:1668 on the 5.10.0 tag), 
gcCandidates still contains that candidate and the comparison at 
org/apache/activemq/store/kahadb/MessageDatabase.java:1666 gives the wrong 
result!
I will try to adjust AMQ2832Test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (AMQ-5249) "cursor got duplicate" error after upgrade

2015-01-15 Thread Sergiy Barlabanov (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14278500#comment-14278500
 ] 

Sergiy Barlabanov edited comment on AMQ-5249 at 1/15/15 9:51 AM:
-

In our case, after getting quite a lot of "cursor got duplicate" warnings for a 
DLQ, we start to observe a "Problem retrieving message for browse" error with a 
NullPointerException for that DLQ when trying to browse the queue. A restart 
helps.

2015-01-14 20:26:00,659 | ERROR | Problem retrieving message for browse | 
org.apache.activemq.broker.region.Queue | ActiveMQ Broker[dcdng] Scheduler
java.lang.NullPointerException


was (Author: barlabanov):
In our case after getting "cursor got duplicate" in a DQL, we start to observe 
"Problem retrieving message for browse" error with NullPointerException for 
that DLQ when trying to browse the queue. Restart helps.

2015-01-14 20:26:00,659 | ERROR | Problem retrieving message for browse | 
org.apache.activemq.broker.region.Queue | ActiveMQ Broker[dcdng] Scheduler
java.lang.NullPointerException

> "cursor got duplicate" error after upgrade
> --
>
> Key: AMQ-5249
> URL: https://issues.apache.org/jira/browse/AMQ-5249
> Project: ActiveMQ
>  Issue Type: Bug
>Affects Versions: 5.9.1, 5.10.0
>Reporter: Rural Hunter
>
> I was using 5.9.0 and met one problem, so I tried to upgrade ActiveMQ. I 
> tried both 5.9.1 and 5.10.0 and encountered the same problem. I saw messages 
> fill the DLQ very quickly. I checked both the producer and the consumer 
> clients, but there was no error. I checked the ActiveMQ log and found it full 
> of these warnings:
> 2014-06-27 23:22:09,337 | WARN  | 
> org.apache.activemq.broker.region.cursors.QueueStorePrefetch@19117501:com.cyyun.webmon.spider.update,batchResetNeeded=false,storeHasMessages=true,size=211,cacheEnabled=true,maxBatchSize:200,hasSpace:true
>  - cursor got duplicate: ID:211.com-52399-1400732399425-1:1:235992:1:1, 4 | 
> org.apache.activemq.broker.region.cursors.AbstractStoreCursor | ActiveMQ 
> Broker[localhost] Scheduler
> 2014-06-27 23:22:09,337 | WARN  | 
> org.apache.activemq.broker.region.cursors.QueueStorePrefetch@19117501:com..update,batchResetNeeded=false,storeHasMessages=true,size=211,cacheEnabled=true,maxBatchSize:200,hasSpace:true
>  - cursor got duplicate: ID:nbzjjf22805-34129-1403880308671-1:1:28:1:1, 4 | 
> org.apache.activemq.broker.region.cursors.AbstractStoreCursor | ActiveMQ 
> Broker[localhost] Scheduler
> 2014-06-27 23:22:09,338 | WARN  | 
> org.apache.activemq.broker.region.cursors.QueueStorePrefetch@19117501:com.x.update,batchResetNeeded=false,storeHasMessages=true,size=211,cacheEnabled=true,maxBatchSize:200,hasSpace:true
>  - cursor got duplicate: ID:jxncxnj2-48598-1403856107346-1:1:6007:1:1, 4 | 
> org.apache.activemq.broker.region.cursors.AbstractStoreCursor | ActiveMQ 
> Broker[localhost] Scheduler
> 2014-06-27 23:22:09,338 | WARN  | 
> org.apache.activemq.broker.region.cursors.QueueStorePrefetch@19117501:com..update,batchResetNeeded=false,storeHasMessages=true,size=211,cacheEnabled=true,maxBatchSize:200,hasSpace:true
>  - cursor got duplicate: ID:jxnc17-60227-1400730816361-1:1:149072:1:1, 4 | 
> org.apache.activemq.broker.region.cursors.AbstractStoreCursor | ActiveMQ 
> Broker[localhost] Scheduler
> 2014-06-27 23:22:09,339 | WARN  | 
> org.apache.activemq.broker.region.cursors.QueueStorePrefetch@19117501:com..update,batchResetNeeded=false,storeHasMessages=true,size=211,cacheEnabled=true,maxBatchSize:200,hasSpace:true
>  - cursor got duplicate: ID:cyyun-46954-1403800808565-1:1:9765:1:1, 4 | 
> org.apache.activemq.broker.region.cursors.AbstractStoreCursor | ActiveMQ 
> Broker[localhost] Scheduler
> 2014-06-27 23:22:09,339 | WARN  | 
> org.apache.activemq.broker.region.cursors.QueueStorePrefetch@19117501:com..update,batchResetNeeded=false,storeHasMessages=true,size=211,cacheEnabled=true,maxBatchSize:200,hasSpace:true
>  - cursor got duplicate: ID:ubuntu-55495-1403497638437-1:1:53086:1:1, 4 | 
> org.apache.activemq.broker.region.cursors.AbstractStoreCursor | ActiveMQ 
> Broker[localhost] Scheduler
> 2014-06-27 23:22:09,340 | WARN  | 
> org.apache.activemq.broker.region.cursors.QueueStorePrefetch@19117501:com..update,batchResetNeeded=false,storeHasMessages=true,size=211,cacheEnabled=true,maxBatchSize:200,hasSpace:true
>  - cursor got duplicate: ID:cyyun-39030-1403880008363-1:1:70:1:1, 4 | 
> org.apache.activemq.broker.region.cursors.AbstractStoreCursor | ActiveMQ 
> Broker[localhost] Scheduler
> The problem mostly happens right after ActiveMQ starts, and sometimes it 
> happened after ActiveMQ had worked normally for a while.
> For now I have rolled back to 5.9.0 and the problem doesn't occur.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMQ-5249) "cursor got duplicate" error after upgrade

2015-01-15 Thread Sergiy Barlabanov (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14278500#comment-14278500
 ] 

Sergiy Barlabanov commented on AMQ-5249:


In our case, after getting "cursor got duplicate" warnings for a DLQ, we start 
to observe a "Problem retrieving message for browse" error with a 
NullPointerException for that DLQ when trying to browse the queue. A restart 
helps.

2015-01-14 20:26:00,659 | ERROR | Problem retrieving message for browse | 
org.apache.activemq.broker.region.Queue | ActiveMQ Broker[dcdng] Scheduler
java.lang.NullPointerException

> "cursor got duplicate" error after upgrade
> --
>
> Key: AMQ-5249
> URL: https://issues.apache.org/jira/browse/AMQ-5249
> Project: ActiveMQ
>  Issue Type: Bug
>Affects Versions: 5.9.1, 5.10.0
>Reporter: Rural Hunter
>
> I was using 5.9.0 and met one problem, so I tried to upgrade ActiveMQ. I 
> tried both 5.9.1 and 5.10.0 and encountered the same problem. I saw messages 
> fill the DLQ very quickly. I checked both the producer and the consumer 
> clients, but there was no error. I checked the ActiveMQ log and found it full 
> of these warnings:
> 2014-06-27 23:22:09,337 | WARN  | 
> org.apache.activemq.broker.region.cursors.QueueStorePrefetch@19117501:com.cyyun.webmon.spider.update,batchResetNeeded=false,storeHasMessages=true,size=211,cacheEnabled=true,maxBatchSize:200,hasSpace:true
>  - cursor got duplicate: ID:211.com-52399-1400732399425-1:1:235992:1:1, 4 | 
> org.apache.activemq.broker.region.cursors.AbstractStoreCursor | ActiveMQ 
> Broker[localhost] Scheduler
> 2014-06-27 23:22:09,337 | WARN  | 
> org.apache.activemq.broker.region.cursors.QueueStorePrefetch@19117501:com..update,batchResetNeeded=false,storeHasMessages=true,size=211,cacheEnabled=true,maxBatchSize:200,hasSpace:true
>  - cursor got duplicate: ID:nbzjjf22805-34129-1403880308671-1:1:28:1:1, 4 | 
> org.apache.activemq.broker.region.cursors.AbstractStoreCursor | ActiveMQ 
> Broker[localhost] Scheduler
> 2014-06-27 23:22:09,338 | WARN  | 
> org.apache.activemq.broker.region.cursors.QueueStorePrefetch@19117501:com.x.update,batchResetNeeded=false,storeHasMessages=true,size=211,cacheEnabled=true,maxBatchSize:200,hasSpace:true
>  - cursor got duplicate: ID:jxncxnj2-48598-1403856107346-1:1:6007:1:1, 4 | 
> org.apache.activemq.broker.region.cursors.AbstractStoreCursor | ActiveMQ 
> Broker[localhost] Scheduler
> 2014-06-27 23:22:09,338 | WARN  | 
> org.apache.activemq.broker.region.cursors.QueueStorePrefetch@19117501:com..update,batchResetNeeded=false,storeHasMessages=true,size=211,cacheEnabled=true,maxBatchSize:200,hasSpace:true
>  - cursor got duplicate: ID:jxnc17-60227-1400730816361-1:1:149072:1:1, 4 | 
> org.apache.activemq.broker.region.cursors.AbstractStoreCursor | ActiveMQ 
> Broker[localhost] Scheduler
> 2014-06-27 23:22:09,339 | WARN  | 
> org.apache.activemq.broker.region.cursors.QueueStorePrefetch@19117501:com..update,batchResetNeeded=false,storeHasMessages=true,size=211,cacheEnabled=true,maxBatchSize:200,hasSpace:true
>  - cursor got duplicate: ID:cyyun-46954-1403800808565-1:1:9765:1:1, 4 | 
> org.apache.activemq.broker.region.cursors.AbstractStoreCursor | ActiveMQ 
> Broker[localhost] Scheduler
> 2014-06-27 23:22:09,339 | WARN  | 
> org.apache.activemq.broker.region.cursors.QueueStorePrefetch@19117501:com..update,batchResetNeeded=false,storeHasMessages=true,size=211,cacheEnabled=true,maxBatchSize:200,hasSpace:true
>  - cursor got duplicate: ID:ubuntu-55495-1403497638437-1:1:53086:1:1, 4 | 
> org.apache.activemq.broker.region.cursors.AbstractStoreCursor | ActiveMQ 
> Broker[localhost] Scheduler
> 2014-06-27 23:22:09,340 | WARN  | 
> org.apache.activemq.broker.region.cursors.QueueStorePrefetch@19117501:com..update,batchResetNeeded=false,storeHasMessages=true,size=211,cacheEnabled=true,maxBatchSize:200,hasSpace:true
>  - cursor got duplicate: ID:cyyun-39030-1403880008363-1:1:70:1:1, 4 | 
> org.apache.activemq.broker.region.cursors.AbstractStoreCursor | ActiveMQ 
> Broker[localhost] Scheduler
> The problem mostly happens right after ActiveMQ starts, and sometimes it 
> happened after ActiveMQ had worked normally for a while.
> For now I have rolled back to 5.9.0 and the problem doesn't occur.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMQ-5445) Message acknowledged despite of an exception thrown by a message driven bean

2015-01-09 Thread Sergiy Barlabanov (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14271003#comment-14271003
 ] 

Sergiy Barlabanov commented on AMQ-5445:


Any comments on this issue?

> Message acknowledged despite of an exception thrown by a message driven bean
> 
>
> Key: AMQ-5445
> URL: https://issues.apache.org/jira/browse/AMQ-5445
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: JCA Container
>Affects Versions: 5.10.0
> Environment: Windows, Glassfish 3.1.2.2 with AMQ RAR, ActiveMQ 5.10.0 
> running standalone.
>Reporter: Sergiy Barlabanov
>
> When a Glassfish server is going down, messages currently being delivered to 
> an MDB are acknowledged with the following message coming from 
> org.apache.activemq.ra.ServerSessionImpl:
> {{Local transaction had not been commited. Commiting now.}}
> Having analyzed the problem, we discovered that when Glassfish is going down, 
> the method endpoint#beforeDelivery 
> (org.apache.activemq.ra.ServerSessionImpl#beforeDelivery) does not start an 
> XA transaction. So ActiveMQ starts a local transaction in 
> org.apache.activemq.ActiveMQSession#doStartTransaction. After that, 
> ActiveMQSession#run tries to call messageListener.onMessage() and it fails 
> with an exception. The exception is handled in ActiveMQSession#run but is not 
> propagated to org.apache.activemq.ra.ServerSessionImpl#afterDelivery. And in 
> org.apache.activemq.ra.ServerSessionImpl#afterDelivery there is a finally {} 
> clause which commits the session if there is a local transaction (and there 
> is one - see above) despite the exception that occurred before.
> This last commit seems inappropriate. In the case of a transaction (local or 
> not) the corresponding message must not be acknowledged - it must be rolled 
> back (no session commit).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMQ-5445) Message acknowledged despite of an exception thrown by a message driven bean

2014-11-20 Thread Sergiy Barlabanov (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergiy Barlabanov updated AMQ-5445:
---
Description: 
When a Glassfish server is going down, messages currently being delivered to 
an MDB are acknowledged with the following message coming from 
org.apache.activemq.ra.ServerSessionImpl:

{{Local transaction had not been commited. Commiting now.}}

Having analyzed the problem, we discovered that when Glassfish is going down, 
the method endpoint#beforeDelivery 
(org.apache.activemq.ra.ServerSessionImpl#beforeDelivery) does not start an XA 
transaction. So ActiveMQ starts a local transaction in 
org.apache.activemq.ActiveMQSession#doStartTransaction. After that, 
ActiveMQSession#run tries to call messageListener.onMessage() and it fails with 
an exception. The exception is handled in ActiveMQSession#run but is not 
propagated to org.apache.activemq.ra.ServerSessionImpl#afterDelivery. And in 
org.apache.activemq.ra.ServerSessionImpl#afterDelivery there is a finally {} 
clause which commits the session if there is a local transaction (and there is 
one - see above) despite the exception that occurred before.
This last commit seems inappropriate. In the case of a transaction (local or 
not) the corresponding message must not be acknowledged - it must be rolled 
back (no session commit).

  was:
When a Glassfish server is going down, messages being currently delivered to a 
MDB, are acknowledged with the following message coming from 
org.apache.activemq.ra.ServerSessionImpl:

Local transaction had not been commited. Commiting now.

Having analyzed the problem, we discovered, that when Glassfish is going down 
the method endpoint#beforeDelivery 
(org.apache.activemq.ra.ServerSessionImpl#beforeDelivery) does not start an XA 
transaction. So ActiveMQ starts a local transaction in 
org.apache.activemq.ActiveMQSession#doStartTransaction. After that 
ActiveMQSession#run tries to call messageListener.onMessage() and it fails with 
an exception. Exception is handled in ActiveMQSession#run, but is not 
propagated to org.apache.activemq.ra.ServerSessionImpl#afterDelivery. And in 
org.apache.activemq.ra.ServerSessionImpl#afterDelivery there is finally {} 
clause, which commits the session if there is local transaction (and there is 
one - see above) despite the exception occurred before.
This last commit seems to be inappropriate. In case of a transaction (local or 
not) the corresponding message may not be acknowledged - it must be rollbacked 
(no session commit).


> Message acknowledged despite of an exception thrown by a message driven bean
> 
>
> Key: AMQ-5445
> URL: https://issues.apache.org/jira/browse/AMQ-5445
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: JCA Container
>Affects Versions: 5.10.0
> Environment: Windows, Glassfish 3.1.2.2 with AMQ RAR, ActiveMQ 5.10.0 
> running standalone.
>Reporter: Sergiy Barlabanov
>
> When a Glassfish server is going down, messages currently being delivered to 
> an MDB are acknowledged with the following message coming from 
> org.apache.activemq.ra.ServerSessionImpl:
> {{Local transaction had not been commited. Commiting now.}}
> Having analyzed the problem, we discovered that when Glassfish is going down, 
> the method endpoint#beforeDelivery 
> (org.apache.activemq.ra.ServerSessionImpl#beforeDelivery) does not start an 
> XA transaction. So ActiveMQ starts a local transaction in 
> org.apache.activemq.ActiveMQSession#doStartTransaction. After that, 
> ActiveMQSession#run tries to call messageListener.onMessage() and it fails 
> with an exception. The exception is handled in ActiveMQSession#run but is not 
> propagated to org.apache.activemq.ra.ServerSessionImpl#afterDelivery. And in 
> org.apache.activemq.ra.ServerSessionImpl#afterDelivery there is a finally {} 
> clause which commits the session if there is a local transaction (and there 
> is one - see above) despite the exception that occurred before.
> This last commit seems inappropriate. In the case of a transaction (local or 
> not) the corresponding message must not be acknowledged - it must be rolled 
> back (no session commit).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (AMQ-5445) Message acknowledged despite of an exception thrown by a message driven bean

2014-11-20 Thread Sergiy Barlabanov (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14219502#comment-14219502
 ] 

Sergiy Barlabanov edited comment on AMQ-5445 at 11/20/14 3:46 PM:
--

I guess the patch would be to extend the catch {} clause in 
org.apache.activemq.ActiveMQSession#run and make it look something like this:

{code}
} catch (Throwable e) {
    LOG.error("error dispatching message: ", e);
    // A problem while invoking the MessageListener does not
    // in general indicate a problem with the connection to the broker, i.e.
    // it will usually be sufficient to let the afterDelivery() method either
    // commit or roll back in order to deal with the exception.
    // However, we notify any registered client internal exception listener
    // of the problem.
    connection.onClientInternalException(e);
    if (transactionContext != null && transactionContext.isInLocalTransaction()) {
        try {
            rollback();
        } catch (Throwable rollbackException) {
            LOG.error("Error while trying to rollback the session", rollbackException);
            connection.onClientInternalException(rollbackException);
        }
    }
{code}

Or maybe there is another way to let 
org.apache.activemq.ra.ServerSessionImpl#afterDelivery know that it has to 
roll back instead of committing. I did not find an easy way to do this. It 
would be better to roll back in ServerSessionImpl#afterDelivery - that is also 
mentioned in the comment in that catch {} clause. But currently 
ServerSessionImpl#afterDelivery does not know whether it has to roll back or 
commit; it just commits.




was (Author: barlabanov):
And the patch would be I guess to extend the catch {} clause in 
org.apache.activemq.ActiveMQSession#run and make it look something like this:

{code}
} catch (Throwable e) {
    LOG.error("error dispatching message: ", e);
    // A problem while invoking the MessageListener does not
    // in general indicate a problem with the connection to the broker, i.e.
    // it will usually be sufficient to let the afterDelivery() method either
    // commit or roll back in order to deal with the exception.
    // However, we notify any registered client internal exception listener
    // of the problem.
    connection.onClientInternalException(e);
    if (transactionContext != null && transactionContext.isInLocalTransaction()) {
        try {
            rollback();
        } catch (Throwable rollbackException) {
            LOG.error("Error while trying to rollback the session", rollbackException);
            connection.onClientInternalException(rollbackException);
        }
    }
{code}

or may be there is another way to let 
org.apache.activemq.ra.ServerSessionImpl#afterDelivery know that it has to 
rollback instead of committing.



> Message acknowledged despite of an exception thrown by a message driven bean
> 
>
> Key: AMQ-5445
> URL: https://issues.apache.org/jira/browse/AMQ-5445
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: JCA Container
>Affects Versions: 5.10.0
> Environment: Windows, Glassfish 3.1.2.2 with AMQ RAR, ActiveMQ 5.10.0 
> running standalone.
>Reporter: Sergiy Barlabanov
>
> When a Glassfish server is going down, messages currently being delivered to 
> an MDB are acknowledged with the message coming from 
> org.apache.activemq.ra.ServerSessionImpl:
> Local transaction had not been commited. Commiting now.
> Having analyzed the problem, we discovered that when Glassfish is going down, 
> the method endpoint#beforeDelivery 
> (org.apache.activemq.ra.ServerSessionImpl#beforeDelivery) does not start an 
> XA transaction. So ActiveMQ starts a local transaction in 
> org.apache.activemq.ActiveMQSession#doStartTransaction. After that, 
> ActiveMQSession#run tries to call messageListener.onMessage() and it fails 
> with an exception. The exception is handled in ActiveMQSession#run but is not 
> propagated to org.apache.activemq.ra.ServerSessionImpl#afterDelivery. And in 
> org.apache.activemq.ra.ServerSessionImpl#afterDelivery there is a finally {} 
> clause which commits the session if there is a local transaction (and there 
> is one - see above) despite the exception that occurred before.
> This last commit seems inappropriate. In the case of a transaction (local or 
> not) the corresponding message must not be acknowledged - it must be rolled 
> back (no session commit).

[jira] [Updated] (AMQ-5445) Message acknowledged despite of an exception thrown by a message driven bean

2014-11-20 Thread Sergiy Barlabanov (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergiy Barlabanov updated AMQ-5445:
---
Description: 
When a Glassfish server is going down, messages currently being delivered to 
an MDB are acknowledged with the following message coming from 
org.apache.activemq.ra.ServerSessionImpl:

Local transaction had not been commited. Commiting now.

Having analyzed the problem, we discovered that when Glassfish is going down, 
the method endpoint#beforeDelivery 
(org.apache.activemq.ra.ServerSessionImpl#beforeDelivery) does not start an XA 
transaction. So ActiveMQ starts a local transaction in 
org.apache.activemq.ActiveMQSession#doStartTransaction. After that, 
ActiveMQSession#run tries to call messageListener.onMessage() and it fails with 
an exception. The exception is handled in ActiveMQSession#run but is not 
propagated to org.apache.activemq.ra.ServerSessionImpl#afterDelivery. And in 
org.apache.activemq.ra.ServerSessionImpl#afterDelivery there is a finally {} 
clause which commits the session if there is a local transaction (and there is 
one - see above) despite the exception that occurred before.
This last commit seems inappropriate. In the case of a transaction (local or 
not) the corresponding message must not be acknowledged - it must be rolled 
back (no session commit).

  was:
When a Glassfish server is going down, messages being currently delivered to a 
MDB, are acknowledged with the message coming from 
org.apache.activemq.ra.ServerSessionImpl:

Local transaction had not been commited. Commiting now.

Having analyzed the problem, we discovered, that when Glassfish is going down 
the method endpoint#beforeDelivery 
(org.apache.activemq.ra.ServerSessionImpl#beforeDelivery) does not start an XA 
transaction. So ActiveMQ starts a local transaction in 
org.apache.activemq.ActiveMQSession#doStartTransaction. After that 
ActiveMQSession#run tries to call messageListener.onMessage() and it fails with 
an exception. Exception is handled in ActiveMQSession#run, but is not 
propagated to org.apache.activemq.ra.ServerSessionImpl#afterDelivery. And in 
org.apache.activemq.ra.ServerSessionImpl#afterDelivery there is finally {} 
clause, which commits the session if there is local transaction (and there is 
one - see above) despite the exception occurred before.
This last commit seems to be inappropriate. In case of a transaction (local or 
not) the corresponding message may not be acknowledged - it must be rollbacked 
(no session commit).


> Message acknowledged despite of an exception thrown by a message driven bean
> 
>
> Key: AMQ-5445
> URL: https://issues.apache.org/jira/browse/AMQ-5445
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: JCA Container
>Affects Versions: 5.10.0
> Environment: Windows, Glassfish 3.1.2.2 with AMQ RAR, ActiveMQ 5.10.0 
> running standalone.
>Reporter: Sergiy Barlabanov
>
> When a Glassfish server is going down, messages currently being delivered to 
> an MDB are acknowledged with the following message coming from 
> org.apache.activemq.ra.ServerSessionImpl:
> Local transaction had not been commited. Commiting now.
> Having analyzed the problem, we discovered that when Glassfish is going down, 
> the method endpoint#beforeDelivery 
> (org.apache.activemq.ra.ServerSessionImpl#beforeDelivery) does not start an 
> XA transaction. So ActiveMQ starts a local transaction in 
> org.apache.activemq.ActiveMQSession#doStartTransaction. After that, 
> ActiveMQSession#run tries to call messageListener.onMessage() and it fails 
> with an exception. The exception is handled in ActiveMQSession#run but is not 
> propagated to org.apache.activemq.ra.ServerSessionImpl#afterDelivery. And in 
> org.apache.activemq.ra.ServerSessionImpl#afterDelivery there is a finally {} 
> clause which commits the session if there is a local transaction (and there 
> is one - see above) despite the exception that occurred before.
> This last commit seems inappropriate. In the case of a transaction (local or 
> not) the corresponding message must not be acknowledged - it must be rolled 
> back (no session commit).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMQ-5445) Message acknowledged despite of an exception thrown by a message driven bean

2014-11-20 Thread Sergiy Barlabanov (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergiy Barlabanov updated AMQ-5445:
---
Description: 
When a Glassfish server is going down, messages currently being delivered to 
an MDB are acknowledged with the message coming from 
org.apache.activemq.ra.ServerSessionImpl:

Local transaction had not been commited. Commiting now.

Having analyzed the problem, we discovered that when Glassfish is going down, 
the method endpoint#beforeDelivery 
(org.apache.activemq.ra.ServerSessionImpl#beforeDelivery) does not start an XA 
transaction. So ActiveMQ starts a local transaction in 
org.apache.activemq.ActiveMQSession#doStartTransaction. After that, 
ActiveMQSession#run tries to call messageListener.onMessage() and it fails with 
an exception. The exception is handled in ActiveMQSession#run but is not 
propagated to org.apache.activemq.ra.ServerSessionImpl#afterDelivery. And in 
org.apache.activemq.ra.ServerSessionImpl#afterDelivery there is a finally {} 
clause which commits the session if there is a local transaction (and there is 
one - see above) despite the exception that occurred before.
This last commit seems inappropriate. In the case of a transaction (local or 
not) the corresponding message must not be acknowledged - it must be rolled 
back (no session commit).

  was:
When a Glassfish server is going down, messages being currently delivered to a 
MDB, are acknowledge with the message coming from 
org.apache.activemq.ra.ServerSessionImpl:

Local transaction had not been commited. Commiting now.

Having analyzed the problem, we discovered, that when Glassfish is going down 
the method endpoint#beforeDelivery 
(org.apache.activemq.ra.ServerSessionImpl#beforeDelivery) does not start an XA 
transaction. So ActiveMQ starts a local transaction in 
org.apache.activemq.ActiveMQSession#doStartTransaction. After that 
ActiveMQSession#run tries to call messageListener.onMessage() and it fails with 
an exception. Exception is handled in ActiveMQSession#run, but is not 
propagated to org.apache.activemq.ra.ServerSessionImpl#afterDelivery. And in 
org.apache.activemq.ra.ServerSessionImpl#afterDelivery there is finally {} 
clause, which commits the session if there is local transaction (and there is 
one - see above) despite the exception occurred before.
This last commit seems to be inappropriate. In case of a transaction (local or 
not) the corresponding message may not be acknowledged - it must be rollbacked 
(no session commit).


> Message acknowledged despite of an exception thrown by a message driven bean
> 
>
> Key: AMQ-5445
> URL: https://issues.apache.org/jira/browse/AMQ-5445
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: JCA Container
>Affects Versions: 5.10.0
> Environment: Windows, Glassfish 3.1.2.2 with AMQ RAR, ActiveMQ 5.10.0 
> running standalone.
>Reporter: Sergiy Barlabanov
>
> When a Glassfish server is going down, messages currently being delivered to 
> an MDB are acknowledged with the message coming from 
> org.apache.activemq.ra.ServerSessionImpl:
> Local transaction had not been commited. Commiting now.
> Having analyzed the problem, we discovered that when Glassfish is going down, 
> the method endpoint#beforeDelivery 
> (org.apache.activemq.ra.ServerSessionImpl#beforeDelivery) does not start an 
> XA transaction. So ActiveMQ starts a local transaction in 
> org.apache.activemq.ActiveMQSession#doStartTransaction. After that, 
> ActiveMQSession#run tries to call messageListener.onMessage() and it fails 
> with an exception. The exception is handled in ActiveMQSession#run but is not 
> propagated to org.apache.activemq.ra.ServerSessionImpl#afterDelivery. And in 
> org.apache.activemq.ra.ServerSessionImpl#afterDelivery there is a finally {} 
> clause which commits the session if there is a local transaction (and there 
> is one - see above) despite the exception that occurred before.
> This last commit seems inappropriate. In the case of a transaction (local or 
> not) the corresponding message must not be acknowledged - it must be rolled 
> back (no session commit).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (AMQ-5445) Message acknowledged despite of an exception thrown by a message driven bean

2014-11-20 Thread Sergiy Barlabanov (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14219502#comment-14219502
 ] 

Sergiy Barlabanov edited comment on AMQ-5445 at 11/20/14 3:45 PM:
--

I guess the patch would be to extend the catch {} clause in 
org.apache.activemq.ActiveMQSession#run and make it look something like this:

{code}
} catch (Throwable e) {
    LOG.error("error dispatching message: ", e);
    // A problem while invoking the MessageListener does not
    // in general indicate a problem with the connection to the broker, i.e.
    // it will usually be sufficient to let the afterDelivery() method either
    // commit or roll back in order to deal with the exception.
    // However, we notify any registered client internal exception listener
    // of the problem.
    connection.onClientInternalException(e);
    if (transactionContext != null && transactionContext.isInLocalTransaction()) {
        try {
            rollback();
        } catch (Throwable rollbackException) {
            LOG.error("Error while trying to rollback the session", rollbackException);
            connection.onClientInternalException(rollbackException);
        }
    }
{code}

Or maybe there is another way to let 
org.apache.activemq.ra.ServerSessionImpl#afterDelivery know that it has to 
roll back instead of committing.




was (Author: barlabanov):
And the patch would be I guess to extend the catch {} clause in 
org.apache.activemq.ActiveMQSession#run and make it look something like this:


} catch (Throwable e) {
    LOG.error("error dispatching message: ", e);
    // A problem while invoking the MessageListener does not
    // in general indicate a problem with the connection to the broker, i.e.
    // it will usually be sufficient to let the afterDelivery() method either
    // commit or roll back in order to deal with the exception.
    // However, we notify any registered client internal exception listener
    // of the problem.
    connection.onClientInternalException(e);
    if (transactionContext != null && transactionContext.isInLocalTransaction()) {
        try {
            rollback();
        } catch (Throwable rollbackException) {
            LOG.error("Error while trying to rollback the session", rollbackException);
            connection.onClientInternalException(rollbackException);
        }
    }


or may be there is another way to let 
org.apache.activemq.ra.ServerSessionImpl#afterDelivery know that it has to 
rollback instead of committing.



> Message acknowledged despite of an exception thrown by a message driven bean
> 
>
> Key: AMQ-5445
> URL: https://issues.apache.org/jira/browse/AMQ-5445
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: JCA Container
>Affects Versions: 5.10.0
> Environment: Windows, Glassfish 3.1.2.2 with AMQ RAR, ActiveMQ 5.10.0 
> running standalone.
>Reporter: Sergiy Barlabanov
>
> When a Glassfish server is going down, messages currently being delivered to 
> an MDB are acknowledged with the message coming from 
> org.apache.activemq.ra.ServerSessionImpl:
> Local transaction had not been commited. Commiting now.
> Having analyzed the problem, we discovered that when Glassfish is going down, 
> the method endpoint#beforeDelivery 
> (org.apache.activemq.ra.ServerSessionImpl#beforeDelivery) does not start an 
> XA transaction. So ActiveMQ starts a local transaction in 
> org.apache.activemq.ActiveMQSession#doStartTransaction. After that, 
> ActiveMQSession#run tries to call messageListener.onMessage() and it fails 
> with an exception. The exception is handled in ActiveMQSession#run but is not 
> propagated to org.apache.activemq.ra.ServerSessionImpl#afterDelivery. And in 
> org.apache.activemq.ra.ServerSessionImpl#afterDelivery there is a finally {} 
> clause which commits the session if there is a local transaction (and there 
> is one - see above) despite the exception that occurred before.
> This last commit seems inappropriate. In the case of a transaction (local or 
> not) the corresponding message must not be acknowledged - it must be rolled 
> back (no session commit).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (AMQ-5445) Message acknowledged despite of an exception thrown by a message driven bean

2014-11-20 Thread Sergiy Barlabanov (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14219502#comment-14219502
 ] 

Sergiy Barlabanov edited comment on AMQ-5445 at 11/20/14 3:43 PM:
--

I guess the patch would be to extend the catch {} clause in 
org.apache.activemq.ActiveMQSession#run and make it look something like this:


} catch (Throwable e) {
    LOG.error("error dispatching message: ", e);
    // A problem while invoking the MessageListener does not
    // in general indicate a problem with the connection to the broker, i.e.
    // it will usually be sufficient to let the afterDelivery() method either
    // commit or roll back in order to deal with the exception.
    // However, we notify any registered client internal exception listener
    // of the problem.
    connection.onClientInternalException(e);
    if (transactionContext != null && transactionContext.isInLocalTransaction()) {
        try {
            rollback();
        } catch (Throwable rollbackException) {
            LOG.error("Error while trying to rollback the session", rollbackException);
            connection.onClientInternalException(rollbackException);
        }
    }


Or maybe there is another way to let 
org.apache.activemq.ra.ServerSessionImpl#afterDelivery know that it has to 
roll back instead of committing.




was (Author: barlabanov):
And the patch would be I guess to extend the catch {} clause in 
org.apache.activemq.ActiveMQSession#run and make it look something like this:


} catch (Throwable e) {
    LOG.error("error dispatching message: ", e);
    // A problem while invoking the MessageListener does not
    // in general indicate a problem with the connection to the broker, i.e.
    // it will usually be sufficient to let the afterDelivery() method either
    // commit or roll back in order to deal with the exception.
    // However, we notify any registered client internal exception listener
    // of the problem.
    connection.onClientInternalException(e);
    if (transactionContext != null && transactionContext.isInLocalTransaction()) {
        try {
            rollback();
        } catch (Throwable rollbackException) {
            LOG.error("Error while trying to rollback the session", e);
            connection.onClientInternalException(e);
        }
    }


Or maybe there is another way to let 
org.apache.activemq.ra.ServerSessionImpl#afterDelivery know that it has to 
roll back instead of committing.



> Message acknowledged despite of an exception thrown by a message driven bean
> 
>
> Key: AMQ-5445
> URL: https://issues.apache.org/jira/browse/AMQ-5445
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: JCA Container
>Affects Versions: 5.10.0
> Environment: Windows, Glassfish 3.1.2.2 with AMQ RAR, ActiveMQ 5.10.0 
> running standalone.
>Reporter: Sergiy Barlabanov
>
> When a Glassfish server is going down, messages being currently delivered to 
> a MDB, are acknowledge with the message coming from 
> org.apache.activemq.ra.ServerSessionImpl:
> Local transaction had not been commited. Commiting now.
> Having analyzed the problem, we discovered, that when Glassfish is going down 
> the method endpoint#beforeDelivery 
> (org.apache.activemq.ra.ServerSessionImpl#beforeDelivery) does not start an 
> XA transaction. So ActiveMQ starts a local transaction in 
> org.apache.activemq.ActiveMQSession#doStartTransaction. After that 
> ActiveMQSession#run tries to call messageListener.onMessage() and it fails 
> with an exception. Exception is handled in ActiveMQSession#run, but is not 
> propagated to org.apache.activemq.ra.ServerSessionImpl#afterDelivery. And in 
> org.apache.activemq.ra.ServerSessionImpl#afterDelivery there is finally {} 
> clause, which commits the session if there is local transaction (and there is 
> one - see above) despite the exception occurred before.
> This last commit seems to be inappropriate. In case of a transaction (local 
> or not) the corresponding message may not be acknowledged - it must be 
> rollbacked (no session commit).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMQ-5445) Message acknowledged despite of an exception thrown by a message driven bean

2014-11-20 Thread Sergiy Barlabanov (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14219502#comment-14219502
 ] 

Sergiy Barlabanov commented on AMQ-5445:


I guess the patch would be to extend the catch {} clause in 
org.apache.activemq.ActiveMQSession#run and make it look something like this:


} catch (Throwable e) {
    LOG.error("error dispatching message: ", e);
    // A problem while invoking the MessageListener does not in general indicate
    // a problem with the connection to the broker, i.e. it will usually be
    // sufficient to let the afterDelivery() method either commit or roll back
    // in order to deal with the exception.
    // However, we notify any registered client internal exception listener
    // of the problem.
    connection.onClientInternalException(e);
    if (transactionContext != null && transactionContext.isInLocalTransaction()) {
        try {
            rollback();
        } catch (Throwable rollbackException) {
            LOG.error("Error while trying to rollback the session", e);
            connection.onClientInternalException(e);
        }
    }


Or maybe there is another way to let 
org.apache.activemq.ra.ServerSessionImpl#afterDelivery know that it has to 
roll back instead of committing.



> Message acknowledged despite of an exception thrown by a message driven bean
> 
>
> Key: AMQ-5445
> URL: https://issues.apache.org/jira/browse/AMQ-5445
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: JCA Container
>Affects Versions: 5.10.0
> Environment: Windows, Glassfish 3.1.2.2 with AMQ RAR, ActiveMQ 5.10.0 
> running standalone.
>Reporter: Sergiy Barlabanov
>
> When a Glassfish server is going down, messages being currently delivered to 
> a MDB, are acknowledge with the message coming from 
> org.apache.activemq.ra.ServerSessionImpl:
> Local transaction had not been commited. Commiting now.
> Having analyzed the problem, we discovered, that when Glassfish is going down 
> the method endpoint#beforeDelivery 
> (org.apache.activemq.ra.ServerSessionImpl#beforeDelivery) does not start an 
> XA transaction. So ActiveMQ starts a local transaction in 
> org.apache.activemq.ActiveMQSession#doStartTransaction. After that 
> ActiveMQSession#run tries to call messageListener.onMessage() and it fails 
> with an exception. Exception is handled in ActiveMQSession#run, but is not 
> propagated to org.apache.activemq.ra.ServerSessionImpl#afterDelivery. And in 
> org.apache.activemq.ra.ServerSessionImpl#afterDelivery there is finally {} 
> clause, which commits the session if there is local transaction (and there is 
> one - see above) despite the exception occurred before.
> This last commit seems to be inappropriate. In case of a transaction (local 
> or not) the corresponding message may not be acknowledged - it must be 
> rollbacked (no session commit).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMQ-5445) Message acknowledged despite of an exception thrown by a message driven bean

2014-11-20 Thread Sergiy Barlabanov (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergiy Barlabanov updated AMQ-5445:
---
Environment: Windows, Glassfish 3.1.2.2 with AMQ RAR, ActiveMQ 5.10.0 
running standalone.  (was: Windows, Glassfish 3.1.2.2 witch AMQ RAR, ActiveMQ 
5.10.0 running standalone.)

> Message acknowledged despite of an exception thrown by a message driven bean
> 
>
> Key: AMQ-5445
> URL: https://issues.apache.org/jira/browse/AMQ-5445
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: JCA Container
>Affects Versions: 5.10.0
> Environment: Windows, Glassfish 3.1.2.2 with AMQ RAR, ActiveMQ 5.10.0 
> running standalone.
>Reporter: Sergiy Barlabanov
>
> When a Glassfish server is going down, messages being currently delivered to 
> a MDB, are acknowledge with the message coming from 
> org.apache.activemq.ra.ServerSessionImpl:
> Local transaction had not been commited. Commiting now.
> Having analyzed the problem, we discovered, that when Glassfish is going down 
> the method endpoint#beforeDelivery 
> (org.apache.activemq.ra.ServerSessionImpl#beforeDelivery) does not start an 
> XA transaction. So ActiveMQ starts a local transaction in 
> org.apache.activemq.ActiveMQSession#doStartTransaction. After that 
> ActiveMQSession#run tries to call messageListener.onMessage() and it fails 
> with an exception. Exception is handled in ActiveMQSession#run, but is not 
> propagated to org.apache.activemq.ra.ServerSessionImpl#afterDelivery. And in 
> org.apache.activemq.ra.ServerSessionImpl#afterDelivery there is finally {} 
> clause, which commits the session if there is local transaction (and there is 
> one - see above) despite the exception occurred before.
> This last commit seems to be inappropriate. In case of a transaction (local 
> or not) the corresponding message may not be acknowledged - it must be 
> rollbacked (no session commit).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (AMQ-5445) Message acknowledged despite of an exception thrown by a message driven bean

2014-11-20 Thread Sergiy Barlabanov (JIRA)
Sergiy Barlabanov created AMQ-5445:
--

 Summary: Message acknowledged despite of an exception thrown by a 
message driven bean
 Key: AMQ-5445
 URL: https://issues.apache.org/jira/browse/AMQ-5445
 Project: ActiveMQ
  Issue Type: Bug
  Components: JCA Container
Affects Versions: 5.10.0
 Environment: Windows, Glassfish 3.1.2.2 witch AMQ RAR, ActiveMQ 5.10.0 
running standalone.
Reporter: Sergiy Barlabanov


When a Glassfish server is going down, messages currently being delivered to an 
MDB are acknowledged, with the following message coming from 
org.apache.activemq.ra.ServerSessionImpl:

Local transaction had not been commited. Commiting now.

Having analyzed the problem, we discovered that when Glassfish is going down, 
the method endpoint#beforeDelivery 
(org.apache.activemq.ra.ServerSessionImpl#beforeDelivery) does not start an XA 
transaction. So ActiveMQ starts a local transaction in 
org.apache.activemq.ActiveMQSession#doStartTransaction. After that, 
ActiveMQSession#run tries to call messageListener.onMessage(), which fails with 
an exception. The exception is handled in ActiveMQSession#run, but it is not 
propagated to org.apache.activemq.ra.ServerSessionImpl#afterDelivery. And 
org.apache.activemq.ra.ServerSessionImpl#afterDelivery has a finally {} clause 
that commits the session if there is a local transaction (and there is one - 
see above), despite the exception that occurred before.
This last commit seems inappropriate. In case of a transaction (local or not), 
the corresponding message must not be acknowledged (no session commit).
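
For orientation, a minimal sketch of the kind of message-driven bean involved in this 
scenario; the bean class and its activation config are hypothetical and not taken from 
this report. Any unhandled exception thrown from onMessage() must end in a rollback of 
the delivery, never in an acknowledgement.

{code:java}
import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;

// Hypothetical MDB used only to illustrate the failure scenario described above.
@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
    @ActivationConfigProperty(propertyName = "destination", propertyValue = "example.queue")
})
public class FailingMdb implements MessageListener {

    @Override
    public void onMessage(Message message) {
        // If this throws (for example while the container is shutting down),
        // the delivery must be rolled back, not committed by afterDelivery().
        throw new RuntimeException("processing failed during container shutdown");
    }
}
{code}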



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMQ-5445) Message acknowledged despite of an exception thrown by a message driven bean

2014-11-20 Thread Sergiy Barlabanov (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergiy Barlabanov updated AMQ-5445:
---
Description: 
When a Glassfish server is going down, messages currently being delivered to an 
MDB are acknowledged, with the following message coming from 
org.apache.activemq.ra.ServerSessionImpl:

Local transaction had not been commited. Commiting now.

Having analyzed the problem, we discovered that when Glassfish is going down, 
the method endpoint#beforeDelivery 
(org.apache.activemq.ra.ServerSessionImpl#beforeDelivery) does not start an XA 
transaction. So ActiveMQ starts a local transaction in 
org.apache.activemq.ActiveMQSession#doStartTransaction. After that, 
ActiveMQSession#run tries to call messageListener.onMessage(), which fails with 
an exception. The exception is handled in ActiveMQSession#run, but it is not 
propagated to org.apache.activemq.ra.ServerSessionImpl#afterDelivery. And 
org.apache.activemq.ra.ServerSessionImpl#afterDelivery has a finally {} clause 
that commits the session if there is a local transaction (and there is one - 
see above), despite the exception that occurred before.
This last commit seems inappropriate. In case of a transaction (local or not), 
the corresponding message must not be acknowledged - it must be rolled back 
(no session commit).

  was:
When a Glassfish server is going down, messages currently being delivered to an 
MDB are acknowledged, with the following message coming from 
org.apache.activemq.ra.ServerSessionImpl:

Local transaction had not been commited. Commiting now.

Having analyzed the problem, we discovered that when Glassfish is going down, 
the method endpoint#beforeDelivery 
(org.apache.activemq.ra.ServerSessionImpl#beforeDelivery) does not start an XA 
transaction. So ActiveMQ starts a local transaction in 
org.apache.activemq.ActiveMQSession#doStartTransaction. After that, 
ActiveMQSession#run tries to call messageListener.onMessage(), which fails with 
an exception. The exception is handled in ActiveMQSession#run, but it is not 
propagated to org.apache.activemq.ra.ServerSessionImpl#afterDelivery. And 
org.apache.activemq.ra.ServerSessionImpl#afterDelivery has a finally {} clause 
that commits the session if there is a local transaction (and there is one - 
see above), despite the exception that occurred before.
This last commit seems inappropriate. In case of a transaction (local or not), 
the corresponding message must not be acknowledged (no session commit).


> Message acknowledged despite of an exception thrown by a message driven bean
> 
>
> Key: AMQ-5445
> URL: https://issues.apache.org/jira/browse/AMQ-5445
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: JCA Container
>Affects Versions: 5.10.0
> Environment: Windows, Glassfish 3.1.2.2 witch AMQ RAR, ActiveMQ 
> 5.10.0 running standalone.
>Reporter: Sergiy Barlabanov
>
> When a Glassfish server is going down, messages being currently delivered to 
> a MDB, are acknowledge with the message coming from 
> org.apache.activemq.ra.ServerSessionImpl:
> Local transaction had not been commited. Commiting now.
> Having analyzed the problem, we discovered, that when Glassfish is going down 
> the method endpoint#beforeDelivery 
> (org.apache.activemq.ra.ServerSessionImpl#beforeDelivery) does not start an 
> XA transaction. So ActiveMQ starts a local transaction in 
> org.apache.activemq.ActiveMQSession#doStartTransaction. After that 
> ActiveMQSession#run tries to call messageListener.onMessage() and it fails 
> with an exception. Exception is handled in ActiveMQSession#run, but is not 
> propagated to org.apache.activemq.ra.ServerSessionImpl#afterDelivery. And in 
> org.apache.activemq.ra.ServerSessionImpl#afterDelivery there is finally {} 
> clause, which commits the session if there is local transaction (and there is 
> one - see above) despite the exception occurred before.
> This last commit seems to be inappropriate. In case of a transaction (local 
> or not) the corresponding message may not be acknowledged - it must be 
> rollbacked (no session commit).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMQ-5258) Connection reference leak in PooledConnectionFactory leading to expired connections stuck in the pool

2014-07-03 Thread Sergiy Barlabanov (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergiy Barlabanov updated AMQ-5258:
---

Description: 
org.apache.activemq.jms.pool.PooledConnectionFactory creates a connection on 
startup without giving it back to the pool:

{code:java}
public void start() {
    LOG.debug("Staring the PooledConnectionFactory: create on start = {}", isCreateConnectionOnStartup());
    stopped.set(false);
    if (isCreateConnectionOnStartup()) {
        try {
            // warm the pool by creating a connection during startup
            createConnection();
        } catch (JMSException e) {
            LOG.warn("Create pooled connection during start failed. This exception will be ignored.", e);
        }
    }
}
{code}

As a result, close() is never called on the PooledConnection, 
decrementReferenceCount() is never called on the ConnectionPool, and 
referenceCount never becomes 0.
Later, if an exception occurs and hasExpired of ConnectionPool is set to true, 
the connection is still not closed, because the expiredCheck() method of 
ConnectionPool only calls close() when referenceCount is 0.

So we end up with a dead ConnectionPool instance, and all usages result in 
"XXX closed" errors.

The fix would be to call close() right after createConnection() in 
PooledConnectionFactory#start() so that referenceCount goes back to 0. 
Something like this:

{code:java}
public void start() {
    LOG.debug("Staring the PooledConnectionFactory: create on start = {}", isCreateConnectionOnStartup());
    stopped.set(false);
    if (isCreateConnectionOnStartup()) {
        try {
            // warm the pool by creating a connection during startup
            createConnection().close(); // <--- makes sure referenceCount goes to 0
        } catch (JMSException e) {
            LOG.warn("Create pooled connection during start failed. This exception will be ignored.", e);
        }
    }
}
{code}
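
A hedged usage sketch of how the warm-up path above gets triggered; the setter names 
mirror the getters used in start(), and the vm:// URL and class name are illustrative, 
not taken from the report.

{code:java}
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.jms.pool.PooledConnectionFactory;

public class PoolWarmupSketch {
    public static void main(String[] args) {
        PooledConnectionFactory pool = new PooledConnectionFactory();
        pool.setConnectionFactory(new ActiveMQConnectionFactory("vm://localhost"));
        pool.setCreateConnectionOnStartup(true);
        pool.start();   // the warm-up connection is created here (see start() above)
        // Without the proposed close(), that connection's referenceCount stays above 0,
        // so a later expiry can never actually close it.
        pool.stop();
    }
}
{code}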

  was:
org.apache.activemq.jms.pool.PooledConnectionFactory creates a connection on 
startup without giving it back to the pool. As a result, close() is never 
called on the PooledConnection, decrementReferenceCount() is never called on 
the ConnectionPool, and referenceCount never becomes 0.
Later, if an exception occurs and hasExpired of ConnectionPool is set to true, 
the connection is still not closed, because expiredCheck of ConnectionPool 
only calls close() when referenceCount is 0.
So we end up with a dead ConnectionPool instance, and all usages result in 
"XXX closed" errors.

The fix would be to call close() right after createConnection() in 
PooledConnectionFactory#start() so that referenceCount goes back to 0. 
Something like this:

{code:java}
public void start() {
    LOG.debug("Staring the PooledConnectionFactory: create on start = {}", isCreateConnectionOnStartup());
    stopped.set(false);
    if (isCreateConnectionOnStartup()) {
        try {
            // warm the pool by creating a connection during startup
            createConnection().close(); // <--- makes sure referenceCount goes to 0
        } catch (JMSException e) {
            LOG.warn("Create pooled connection during start failed. This exception will be ignored.", e);
        }
    }
}
{code}


> Connection reference leak in PooledConnectionFactory leading to expired 
> connections stuck in the pool
> -
>
> Key: AMQ-5258
> URL: https://issues.apache.org/jira/browse/AMQ-5258
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: activemq-pool
>Affects Versions: 5.9.1, 5.10.0
>Reporter: Sergiy Barlabanov
>
> org.apache.activemq.jms.pool.PooledConnectionFactory creates a connection on 
> startup without giving it back to the pool:
> {code:java}
> public void start() {
> LOG.debug("Staring the PooledConnectionFactory: create on start = 
> {}", isCreateConnectionOnStartup());
> stopped.set(false);
> if (isCreateConnectionOnStartup()) {
> try {
> // warm the pool by creating a connection during startup
> createConnection();
> } catch (JMSException e) {
> LOG.warn("Create pooled connection during start failed. This 
> exception will be ignored.", e);
> }
> }
> }
> {code}
> So no close() method of PooledConnection is called and so no 
> decrementReferenceCount is called on ConnectionPool. So referenceCount never 
> becomes 0.
> Later on if an exception occurs and hasExpired of ConnectionPool is set to 
> true, the connection will not be closed since expiredCheck() method of 
> ConnectionPool always compares reference

[jira] [Updated] (AMQ-5258) Connection reference leak in PooledConnectionFactory leading to expired connections stuck in the pool

2014-07-03 Thread Sergiy Barlabanov (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergiy Barlabanov updated AMQ-5258:
---

Description: 
org.apache.activemq.jms.pool.PooledConnectionFactory creates a connection on 
startup without giving it back to the pool. As a result, close() is never 
called on the PooledConnection, decrementReferenceCount() is never called on 
the ConnectionPool, and referenceCount never becomes 0.
Later, if an exception occurs and hasExpired of ConnectionPool is set to true, 
the connection is still not closed, because expiredCheck of ConnectionPool 
only calls close() when referenceCount is 0.
So we end up with a dead ConnectionPool instance, and all usages result in 
"XXX closed" errors.

The fix would be to call close() right after createConnection() in 
PooledConnectionFactory#start() so that referenceCount goes back to 0. 
Something like this:

{code:java}
public void start() {
    LOG.debug("Staring the PooledConnectionFactory: create on start = {}", isCreateConnectionOnStartup());
    stopped.set(false);
    if (isCreateConnectionOnStartup()) {
        try {
            // warm the pool by creating a connection during startup
            createConnection().close(); // <--- makes sure referenceCount goes to 0
        } catch (JMSException e) {
            LOG.warn("Create pooled connection during start failed. This exception will be ignored.", e);
        }
    }
}
{code}

  was:
org.apache.activemq.jms.pool.PooledConnectionFactory creates a connection on 
startup without giving it back to the pool. As a result, close() is never 
called on the PooledConnection, decrementReferenceCount() is never called on 
the ConnectionPool, and referenceCount never becomes 0.
Later, if an exception occurs and hasExpired of ConnectionPool is set to true, 
the connection is still not closed, because expiredCheck of ConnectionPool 
only calls close() when referenceCount is 0.
So we end up with a dead ConnectionPool instance, and all usages result in 
"XXX closed" errors.

The fix would be to call close() right after createConnection() in 
PooledConnectionFactory#start() so that referenceCount goes back to 0. 
Something like this:

public void start() {
    LOG.debug("Staring the PooledConnectionFactory: create on start = {}", isCreateConnectionOnStartup());
    stopped.set(false);
    if (isCreateConnectionOnStartup()) {
        try {
            // warm the pool by creating a connection during startup
            createConnection().close(); // <--- makes sure referenceCount goes to 0
        } catch (JMSException e) {
            LOG.warn("Create pooled connection during start failed. This exception will be ignored.", e);
        }
    }
}



> Connection reference leak in PooledConnectionFactory leading to expired 
> connections stuck in the pool
> -
>
> Key: AMQ-5258
> URL: https://issues.apache.org/jira/browse/AMQ-5258
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: activemq-pool
>Affects Versions: 5.9.1, 5.10.0
>Reporter: Sergiy Barlabanov
>
> org.apache.activemq.jms.pool.PooledConnectionFactory creates a connection on 
> startup without giving it back to the pool. So no close() method of 
> PooledConnection is called and so no decrementReferenceCount is called on 
> ConnectionPool. So referenceCount never becomes 0.
> Later on if an exception occurs and hasExpired of ConnectionPool is set to 
> true, the connection will not be closed since expiredCheck of ConnectionPool 
> compares referenceCount with 0 and does close() only if it is 0.
> So we have a dead ConnectionPool instance and all usages result in "XXX 
> closed" errors.
> The fix would be to add call to close() just after doing createConnection() 
> in PooledConnectionFactory#start() to make referenceCount go to 0. Something 
> like this:
> {code:java}
> public void start() {
> LOG.debug("Staring the PooledConnectionFactory: create on start = 
> {}", isCreateConnectionOnStartup());
> stopped.set(false);
> if (isCreateConnectionOnStartup()) {
> try {
> // warm the pool by creating a connection during startup
> createConnection().close(); // <--- makes sure referenceCount 
> goes to 0
> } catch (JMSException e) {
> LOG.warn("Create pooled connection during start failed. This 
> exception will be ignored.", e);
> }
> }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (AMQ-5258) Connection reference leak in PooledConnectionFactory leading to expired connections stuck in the pool

2014-07-03 Thread Sergiy Barlabanov (JIRA)
Sergiy Barlabanov created AMQ-5258:
--

 Summary: Connection reference leak in PooledConnectionFactory 
leading to expired connections stuck in the pool
 Key: AMQ-5258
 URL: https://issues.apache.org/jira/browse/AMQ-5258
 Project: ActiveMQ
  Issue Type: Bug
  Components: activemq-pool
Affects Versions: 5.10.0, 5.9.1
Reporter: Sergiy Barlabanov


org.apache.activemq.jms.pool.PooledConnectionFactory creates a connection on 
startup without giving it back to the pool. As a result, close() is never 
called on the PooledConnection, decrementReferenceCount() is never called on 
the ConnectionPool, and referenceCount never becomes 0.
Later, if an exception occurs and hasExpired of ConnectionPool is set to true, 
the connection is still not closed, because expiredCheck of ConnectionPool 
only calls close() when referenceCount is 0.
So we end up with a dead ConnectionPool instance, and all usages result in 
"XXX closed" errors.

The fix would be to call close() right after createConnection() in 
PooledConnectionFactory#start() so that referenceCount goes back to 0. 
Something like this:

public void start() {
    LOG.debug("Staring the PooledConnectionFactory: create on start = {}", isCreateConnectionOnStartup());
    stopped.set(false);
    if (isCreateConnectionOnStartup()) {
        try {
            // warm the pool by creating a connection during startup
            createConnection().close(); // <--- makes sure referenceCount goes to 0
        } catch (JMSException e) {
            LOG.warn("Create pooled connection during start failed. This exception will be ignored.", e);
        }
    }
}
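
For illustration, a simplified model of the reference counting described above - not the 
actual org.apache.activemq.jms.pool.ConnectionPool code - showing why a warm-up 
connection that is never closed can never pass the expiry check:

{code:java}
// Simplified illustration only; the real ConnectionPool has more state and logic.
class RefCountedPoolEntrySketch {

    private int referenceCount;
    private boolean expired;
    private boolean closed;

    synchronized void incrementReferenceCount() {
        referenceCount++;       // done when a pooled connection is handed out
    }

    synchronized void decrementReferenceCount() {
        referenceCount--;       // done from PooledConnection.close()
        expiredCheck();
    }

    synchronized void setHasExpired() {
        expired = true;         // set when an error or expiry condition is detected
    }

    synchronized boolean expiredCheck() {
        // Close only when nobody references the connection any more. A warm-up
        // connection whose close() was never called keeps referenceCount at 1,
        // so this branch is never taken and the dead entry stays in the pool.
        if (expired && referenceCount == 0) {
            closed = true;
        }
        return closed;
    }
}
{code}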




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (AMQ-5136) MemoryUsage is not decremented on a JMS topic when rolling back a transacted session

2014-04-28 Thread Sergiy Barlabanov (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13982938#comment-13982938
 ] 

Sergiy Barlabanov commented on AMQ-5136:


Can anybody comment on this issue? Any idea when it will be fixed? The fix would 
be quite easy: just add a correct rollback listener (see the sketch below).
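
A hedged sketch of that kind of fix - registering an afterRollback() callback alongside 
the existing afterCommit() - assuming the org.apache.activemq.transaction.Synchronization 
callback class; the surrounding transaction and message variables are illustrative 
stand-ins for what Topic#doMessageSend has at hand.

{code:java}
import org.apache.activemq.broker.region.MessageReference;
import org.apache.activemq.transaction.Synchronization;
import org.apache.activemq.transaction.Transaction;

// Illustrative only: the transaction and message come from the surrounding
// Topic#doMessageSend context and are assumptions here.
final class ReleaseOnRollbackSketch {

    static void register(Transaction transaction, final MessageReference message) {
        transaction.addSynchronization(new Synchronization() {
            @Override
            public void afterCommit() throws Exception {
                // existing behaviour: release the reference once the send is committed
                message.decrementReferenceCount();
            }

            @Override
            public void afterRollback() throws Exception {
                // missing piece described in this issue: also release the reference
                // when the transacted send is rolled back, so MemoryUsage goes down
                message.decrementReferenceCount();
            }
        });
    }
}
{code}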

> MemoryUsage is not decremented on a JMS topic when rolling back a transacted 
> session
> 
>
> Key: AMQ-5136
> URL: https://issues.apache.org/jira/browse/AMQ-5136
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 5.8.0, 5.9.0
>Reporter: Sergiy Barlabanov
> Attachments: activemqbug.zip
>
>
> When sending a message to a topic using a transacted session, memory usage is 
> not decremented correctly when session.rollback is called. It is decremented 
> on commit but not on rollback. This has quite bad consequences since after 
> some time depending on the system memory usage settings and amount of rolled 
> back messages, the broker starts to reject messages saying that Usage Manager 
> memory limit is reached. The only solution is to restart the broker.
> I created a small Maven project (see the attachment) with a unit test 
> starting an embedded broker and two test methods: one sending messages with 
> session.commit and another with session.rollback(). The last one fails to 
> assert the memory usage. In the output one can see quite a lot of error logs 
> written by ActiveMQ. The problem is reproducible with 5.8.0 and 5.9.0.
> The problem seems to be in 
> org.apache.activemq.broker.region.Topic#doMessageSend method where a 
> transaction synchronization is registered. In the transaction synchronization 
> only afterCommit is supplied, but no afterRollback. So there seems to be 
> nobody calling message.decrementReferenceCount().



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (AMQ-5136) MemoryUsage is not decremented on a JMS topic when rolling back a transacted session

2014-04-04 Thread Sergiy Barlabanov (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergiy Barlabanov updated AMQ-5136:
---

Attachment: activemqbug.zip

Maven project with the test.

> MemoryUsage is not decremented on a JMS topic when rolling back a transacted 
> session
> 
>
> Key: AMQ-5136
> URL: https://issues.apache.org/jira/browse/AMQ-5136
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 5.8.0, 5.9.0
>Reporter: Sergiy Barlabanov
> Attachments: activemqbug.zip
>
>
> When sending a message to a topic using a transacted session, memory usage is 
> not decremented correctly when session.rollback is called. It is decremented 
> on commit but not on rollback. This has quite bad consequences since after 
> some time depending on the system memory usage settings and amount of rolled 
> back messages, the broker starts to reject messages saying that Usage Manager 
> memory limit is reached. The only solution is to restart the broker.
> I created a small Maven project (see the attachment) with a unit test 
> starting an embedded broker and two test methods: one sending messages with 
> session.commit and another with session.rollback(). The last one fails to 
> assert the memory usage. In the output one can see quite a lot of error logs 
> written by ActiveMQ. The problem is reproducible with 5.8.0 and 5.9.0.
> The problem seems to be in 
> org.apache.activemq.broker.region.Topic#doMessageSend method where a 
> transaction synchronization is registered. In the transaction synchronization 
> only afterCommit is supplied, but no afterRollback. So there seems to be 
> nobody calling message.decrementReferenceCount().



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (AMQ-5136) MemoryUsage is not decremented on a JMS topic when rolling back a transacted session ->

2014-04-04 Thread Sergiy Barlabanov (JIRA)
Sergiy Barlabanov created AMQ-5136:
--

 Summary: MemoryUsage is not decremented on a JMS topic when 
rolling back a transacted session -> 
 Key: AMQ-5136
 URL: https://issues.apache.org/jira/browse/AMQ-5136
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker
Affects Versions: 5.9.0, 5.8.0
Reporter: Sergiy Barlabanov


When sending a message to a topic using a transacted session, memory usage is 
not decremented correctly when session.rollback() is called. It is decremented 
on commit but not on rollback. This has quite bad consequences: after some 
time, depending on the system memory usage settings and the amount of rolled 
back messages, the broker starts to reject messages, saying that the Usage 
Manager memory limit is reached. The only solution is to restart the broker.
I created a small Maven project (see the attachment) with a unit test starting 
an embedded broker and two test methods: one sending messages with 
session.commit() and another with session.rollback(). The latter fails to 
assert the memory usage. In the output one can see quite a lot of error logs 
written by ActiveMQ. The problem is reproducible with 5.8.0 and 5.9.0.
The problem seems to be in the 
org.apache.activemq.broker.region.Topic#doMessageSend method, where a 
transaction synchronization is registered. Only afterCommit is supplied in 
that synchronization, but no afterRollback, so there seems to be nobody 
calling message.decrementReferenceCount().
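
A hedged sketch of the kind of check the attached test project performs; the destination 
name, the vm:// URL and the final usage check are assumptions, not the actual attachment.

{code:java}
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Session;

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.broker.BrokerService;

public class RollbackMemoryUsageSketch {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        broker.setPersistent(false);
        broker.start();
        try {
            ConnectionFactory cf = new ActiveMQConnectionFactory("vm://localhost?create=false");
            Connection connection = cf.createConnection();
            connection.start();
            Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
            MessageProducer producer = session.createProducer(session.createTopic("test.topic"));
            producer.send(session.createTextMessage("payload"));
            session.rollback();   // per this issue, MemoryUsage is not decremented here
            connection.close();

            // Expected to drop back to 0 after the rollback; per this report it does not.
            System.out.println("memory usage: " + broker.getSystemUsage().getMemoryUsage().getUsage());
        } finally {
            broker.stop();
        }
    }
}
{code}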




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (AMQ-5136) MemoryUsage is not decremented on a JMS topic when rolling back a transacted session

2014-04-04 Thread Sergiy Barlabanov (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergiy Barlabanov updated AMQ-5136:
---

Summary: MemoryUsage is not decremented on a JMS topic when rolling back a 
transacted session  (was: MemoryUsage is not decremented on a JMS topic when 
rolling back a transacted session -> )

> MemoryUsage is not decremented on a JMS topic when rolling back a transacted 
> session
> 
>
> Key: AMQ-5136
> URL: https://issues.apache.org/jira/browse/AMQ-5136
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 5.8.0, 5.9.0
>Reporter: Sergiy Barlabanov
>
> When sending a message to a topic using a transacted session, memory usage is 
> not decremented correctly when session.rollback is called. It is decremented 
> on commit but not on rollback. This has quite bad consequences since after 
> some time depending on the system memory usage settings and amount of rolled 
> back messages, the broker starts to reject messages saying that Usage Manager 
> memory limit is reached. The only solution is to restart the broker.
> I created a small Maven project (see the attachment) with a unit test 
> starting an embedded broker and two test methods: one sending messages with 
> session.commit and another with session.rollback(). The last one fails to 
> assert the memory usage. In the output one can see quite a lot of error logs 
> written by ActiveMQ. The problem is reproducible with 5.8.0 and 5.9.0.
> The problem seems to be in 
> org.apache.activemq.broker.region.Topic#doMessageSend method where a 
> transaction synchronization is registered. In the transaction synchronization 
> only afterCommit is supplied, but no afterRollback. So there seems to be 
> nobody calling message.decrementReferenceCount().



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (AMQ-4166) RedeliveryPlugin causes a deadlock with JobSchedulerImpl

2012-11-14 Thread Sergiy Barlabanov (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13497033#comment-13497033
 ] 

Sergiy Barlabanov commented on AMQ-4166:


Just tried our application with ActiveMQ 5.8-SNAPSHOT. No deadlocks detected. 
Looks good. optimizedDispatch was set to true.

One question: would it be enough to take revision 1407640 to get the bug 
patched in 5.7.0? It looks like message expiry is not the only problem causing 
the deadlock. You wrote something about "split schedule and execute in the 
scheduler or let the redelivery plugin do the schedule async".

> RedeliveryPlugin causes a deadlock with JobSchedulerImpl
> 
>
> Key: AMQ-4166
> URL: https://issues.apache.org/jira/browse/AMQ-4166
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 5.7.0
> Environment: Reproduced on Windows 8, Windows Vista, MacOS X
> with Oracle jdk 1.7.0_07. ActiveMQ is started embedded using RAR inside 
> Glassfish 3.1.2.2.
>Reporter: Sergiy Barlabanov
> Attachments: broker-config.xml, stack-trace-1.txt, stack-trace-2.txt
>
>
> Originates from the forum discussion 
> http://activemq.2283324.n4.nabble.com/RedeliveryPlugin-causes-a-deadlock-with-JobSchedulerImpl-in-ActiveMQ-5-7-0-tt4659019.html
> we have RedeliveryPlugin causing thread deadlock together with 
> JobSchedulerImpl. ActiveMQ version is 5.7.0. We activated RedeliveryPlugin in 
> our broker config xml (see below). There two stacktraces below as well. One 
> is from ActiveMQ VMTransport thread, which tries to send a message to a dead 
> letter queue using RedeliveryPlugin. RedeliveryPlugin just tries to 
> reschedule the message for redelivery and for that it calls JobSchedulerImpl 
> and blocks on its synchronized method "schedule". On the way "consumersLock" 
> is locked. 
> Another stack trace is from JobScheduler:JMS thread, which fires a job to 
> redeliver some message and tries to send it using the same queue used by the 
> VMTransport thread. And it blocks on that consumersLock locked by the 
> VMTransport thread. And this occurs in JobSchedulerImpl#mainLoop method 
> inside synchronized {} block causing a deadlock, since the VMTransport thread 
> tries to call another synchronized method of JobSchedulerImpl. The art how 
> RedeliveryPlugin and JobSchedulerImpl are programmed seems to be quite 
> dangerous, since they both access the queues and try to acquire queue locks. 
> And additionally synchronized methods of JobSchedulerImpl are called directly 
> from RedeliveryPlugin making that to a nice source of thread deadlocks. And I 
> see no measures taken in the code to avoid these deadlocks.
> We can reproduce it quite often if we start ActiveMQ with empty stores 
> (kahadb and scheduler stores are deleted manually from the file system before 
> startup). But looking at the code, I would say that the problem may occur in 
> any situation in any deployment scenario (standalone or embedded in a JEE 
> container). It is just enough to have some Transport thread redelivering a 
> message and the JobScheduler thread trying to fire a job at the same moment 
> on the same queue.
> And another strange thing, which is may be has nothing to do with the 
> deadlock but is still strange, is that according to the stack trace 
> RedeliveryPlugin tries to redeliver an expired message.
> broker config and the stack traces are attached to the issue.
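
To make the lock ordering easier to follow, a generic illustration of the pattern 
described above; this is deliberately not the actual ActiveMQ code, just two threads 
taking the same two monitors in opposite order.

{code:java}
// Illustrative only: "queueConsumersLock" stands in for the queue's consumersLock and
// "schedulerMonitor" for JobSchedulerImpl's synchronized methods.
public class DeadlockPatternSketch {

    private static final Object queueConsumersLock = new Object();
    private static final Object schedulerMonitor = new Object();

    public static void main(String[] args) {
        Thread vmTransport = new Thread(() -> {
            synchronized (queueConsumersLock) {        // dispatch path holds the queue lock...
                pause();
                synchronized (schedulerMonitor) {      // ...then calls the synchronized schedule()
                    // RedeliveryPlugin would schedule the redelivery here
                }
            }
        }, "VMTransport");

        Thread jobScheduler = new Thread(() -> {
            synchronized (schedulerMonitor) {          // mainLoop() runs inside the scheduler monitor...
                pause();
                synchronized (queueConsumersLock) {    // ...then sends to the same queue, needing its lock
                    // the scheduled redelivery would be sent here
                }
            }
        }, "JobScheduler:JMS");

        vmTransport.start();
        jobScheduler.start();                          // opposite acquisition order => deadlock
    }

    private static void pause() {
        try {
            Thread.sleep(100);                          // widen the race window for illustration
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
{code}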

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (AMQ-4166) RedeliveryPlugin causes a deadlock with JobSchedulerImpl

2012-11-14 Thread Sergiy Barlabanov (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13497034#comment-13497034
 ] 

Sergiy Barlabanov commented on AMQ-4166:


Just a few notes: when I tried the resource adapter of 5.8-SNAPSHOT, the 
following problems occurred:
1. activemq-core.jar was not inside the RAR. This resulted in a 
ClassNotFoundException for XBeanBrokerFactory.
2. activemq-spring.jar was missing in the RAR as well. This resulted in the 
missing namespace exception (http://activemq.apache.org/schema/core), since 
activemq.xsd is inside activemq-spring.jar.

I had to add the missing jars manually in order to get the RAR deployed.

> RedeliveryPlugin causes a deadlock with JobSchedulerImpl
> 
>
> Key: AMQ-4166
> URL: https://issues.apache.org/jira/browse/AMQ-4166
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 5.7.0
> Environment: Reproduced on Windows 8, Windows Vista, MacOS X
> with Oracle jdk 1.7.0_07. ActiveMQ is started embedded using RAR inside 
> Glassfish 3.1.2.2.
>Reporter: Sergiy Barlabanov
> Attachments: broker-config.xml, stack-trace-1.txt, stack-trace-2.txt
>
>
> Originates from the forum discussion 
> http://activemq.2283324.n4.nabble.com/RedeliveryPlugin-causes-a-deadlock-with-JobSchedulerImpl-in-ActiveMQ-5-7-0-tt4659019.html
> we have RedeliveryPlugin causing thread deadlock together with 
> JobSchedulerImpl. ActiveMQ version is 5.7.0. We activated RedeliveryPlugin in 
> our broker config xml (see below). There two stacktraces below as well. One 
> is from ActiveMQ VMTransport thread, which tries to send a message to a dead 
> letter queue using RedeliveryPlugin. RedeliveryPlugin just tries to 
> reschedule the message for redelivery and for that it calls JobSchedulerImpl 
> and blocks on its synchronized method "schedule". On the way "consumersLock" 
> is locked. 
> Another stack trace is from JobScheduler:JMS thread, which fires a job to 
> redeliver some message and tries to send it using the same queue used by the 
> VMTransport thread. And it blocks on that consumersLock locked by the 
> VMTransport thread. And this occurs in JobSchedulerImpl#mainLoop method 
> inside synchronized {} block causing a deadlock, since the VMTransport thread 
> tries to call another synchronized method of JobSchedulerImpl. The art how 
> RedeliveryPlugin and JobSchedulerImpl are programmed seems to be quite 
> dangerous, since they both access the queues and try to acquire queue locks. 
> And additionally synchronized methods of JobSchedulerImpl are called directly 
> from RedeliveryPlugin making that to a nice source of thread deadlocks. And I 
> see no measures taken in the code to avoid these deadlocks.
> We can reproduce it quite often if we start ActiveMQ with empty stores 
> (kahadb and scheduler stores are deleted manually from the file system before 
> startup). But looking at the code, I would say that the problem may occur in 
> any situation in any deployment scenario (standalone or embedded in a JEE 
> container). It is just enough to have some Transport thread redelivering a 
> message and the JobScheduler thread trying to fire a job at the same moment 
> on the same queue.
> And another strange thing, which is may be has nothing to do with the 
> deadlock but is still strange, is that according to the stack trace 
> RedeliveryPlugin tries to redeliver an expired message.
> broker config and the stack traces are attached to the issue.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (AMQ-4166) RedeliveryPlugin causes a deadlock with JobSchedulerImpl

2012-11-13 Thread Sergiy Barlabanov (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13496233#comment-13496233
 ] 

Sergiy Barlabanov commented on AMQ-4166:


I will try 5.8-SNAPSHOT in the next few days and report back.

> RedeliveryPlugin causes a deadlock with JobSchedulerImpl
> 
>
> Key: AMQ-4166
> URL: https://issues.apache.org/jira/browse/AMQ-4166
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 5.7.0
> Environment: Reproduced on Windows 8, Windows Vista, MacOS X
> with Oracle jdk 1.7.0_07. ActiveMQ is started embedded using RAR inside 
> Glassfish 3.1.2.2.
>Reporter: Sergiy Barlabanov
> Attachments: broker-config.xml, stack-trace-1.txt, stack-trace-2.txt
>
>
> Originates from the forum discussion 
> http://activemq.2283324.n4.nabble.com/RedeliveryPlugin-causes-a-deadlock-with-JobSchedulerImpl-in-ActiveMQ-5-7-0-tt4659019.html
> we have RedeliveryPlugin causing thread deadlock together with 
> JobSchedulerImpl. ActiveMQ version is 5.7.0. We activated RedeliveryPlugin in 
> our broker config xml (see below). There two stacktraces below as well. One 
> is from ActiveMQ VMTransport thread, which tries to send a message to a dead 
> letter queue using RedeliveryPlugin. RedeliveryPlugin just tries to 
> reschedule the message for redelivery and for that it calls JobSchedulerImpl 
> and blocks on its synchronized method "schedule". On the way "consumersLock" 
> is locked. 
> Another stack trace is from JobScheduler:JMS thread, which fires a job to 
> redeliver some message and tries to send it using the same queue used by the 
> VMTransport thread. And it blocks on that consumersLock locked by the 
> VMTransport thread. And this occurs in JobSchedulerImpl#mainLoop method 
> inside synchronized {} block causing a deadlock, since the VMTransport thread 
> tries to call another synchronized method of JobSchedulerImpl. The art how 
> RedeliveryPlugin and JobSchedulerImpl are programmed seems to be quite 
> dangerous, since they both access the queues and try to acquire queue locks. 
> And additionally synchronized methods of JobSchedulerImpl are called directly 
> from RedeliveryPlugin making that to a nice source of thread deadlocks. And I 
> see no measures taken in the code to avoid these deadlocks.
> We can reproduce it quite often if we start ActiveMQ with empty stores 
> (kahadb and scheduler stores are deleted manually from the file system before 
> startup). But looking at the code, I would say that the problem may occur in 
> any situation in any deployment scenario (standalone or embedded in a JEE 
> container). It is just enough to have some Transport thread redelivering a 
> message and the JobScheduler thread trying to fire a job at the same moment 
> on the same queue.
> And another strange thing, which is may be has nothing to do with the 
> deadlock but is still strange, is that according to the stack trace 
> RedeliveryPlugin tries to redeliver an expired message.
> broker config and the stack traces are attached to the issue.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (AMQ-4166) RedeliveryPlugin causes a deadlock with JobSchedulerImpl

2012-11-09 Thread Sergiy Barlabanov (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13493921#comment-13493921
 ] 

Sergiy Barlabanov commented on AMQ-4166:


With this deadlock problem, the per-destination redelivery configuration 
announced as a new feature for 5.7.0 is actually unusable. I see no 
workaround besides falling back to configuring redelivery on the 
ConnectionFactory.

> RedeliveryPlugin causes a deadlock with JobSchedulerImpl
> 
>
> Key: AMQ-4166
> URL: https://issues.apache.org/jira/browse/AMQ-4166
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 5.7.0
> Environment: Reproduced on Windows 8, Windows Vista, MacOS X
> with Oracle jdk 1.7.0_07. ActiveMQ is started embedded using RAR inside 
> Glassfish 3.1.2.2.
>Reporter: Sergiy Barlabanov
> Attachments: broker-config.xml, stack-trace-1.txt, stack-trace-2.txt
>
>
> Originates from the forum discussion 
> http://activemq.2283324.n4.nabble.com/RedeliveryPlugin-causes-a-deadlock-with-JobSchedulerImpl-in-ActiveMQ-5-7-0-tt4659019.html
> we have RedeliveryPlugin causing thread deadlock together with 
> JobSchedulerImpl. ActiveMQ version is 5.7.0. We activated RedeliveryPlugin in 
> our broker config xml (see below). There two stacktraces below as well. One 
> is from ActiveMQ VMTransport thread, which tries to send a message to a dead 
> letter queue using RedeliveryPlugin. RedeliveryPlugin just tries to 
> reschedule the message for redelivery and for that it calls JobSchedulerImpl 
> and blocks on its synchronized method "schedule". On the way "consumersLock" 
> is locked. 
> Another stack trace is from JobScheduler:JMS thread, which fires a job to 
> redeliver some message and tries to send it using the same queue used by the 
> VMTransport thread. And it blocks on that consumersLock locked by the 
> VMTransport thread. And this occurs in JobSchedulerImpl#mainLoop method 
> inside synchronized {} block causing a deadlock, since the VMTransport thread 
> tries to call another synchronized method of JobSchedulerImpl. The art how 
> RedeliveryPlugin and JobSchedulerImpl are programmed seems to be quite 
> dangerous, since they both access the queues and try to acquire queue locks. 
> And additionally synchronized methods of JobSchedulerImpl are called directly 
> from RedeliveryPlugin making that to a nice source of thread deadlocks. And I 
> see no measures taken in the code to avoid these deadlocks.
> We can reproduce it quite often if we start ActiveMQ with empty stores 
> (kahadb and scheduler stores are deleted manually from the file system before 
> startup). But looking at the code, I would say that the problem may occur in 
> any situation in any deployment scenario (standalone or embedded in a JEE 
> container). It is just enough to have some Transport thread redelivering a 
> message and the JobScheduler thread trying to fire a job at the same moment 
> on the same queue.
> And another strange thing, which is may be has nothing to do with the 
> deadlock but is still strange, is that according to the stack trace 
> RedeliveryPlugin tries to redeliver an expired message.
> broker config and the stack traces are attached to the issue.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (AMQ-4166) RedeliveryPlugin causes a deadlock with JobSchedulerImpl

2012-11-09 Thread Sergiy Barlabanov (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergiy Barlabanov updated AMQ-4166:
---

Attachment: stack-trace-2.txt
stack-trace-1.txt
broker-config.xml

Broker config and the stack traces of the participating threads.

> RedeliveryPlugin causes a deadlock with JobSchedulerImpl
> 
>
> Key: AMQ-4166
> URL: https://issues.apache.org/jira/browse/AMQ-4166
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 5.7.0
> Environment: Reproduced on Windows 8, Windows Vista, MacOS X
> with Oracle jdk 1.7.0_07. ActiveMQ is started embedded using RAR inside 
> Glassfish 3.1.2.2.
>Reporter: Sergiy Barlabanov
> Attachments: broker-config.xml, stack-trace-1.txt, stack-trace-2.txt
>
>
> Originates from the forum discussion 
> http://activemq.2283324.n4.nabble.com/RedeliveryPlugin-causes-a-deadlock-with-JobSchedulerImpl-in-ActiveMQ-5-7-0-tt4659019.html
> we have RedeliveryPlugin causing thread deadlock together with 
> JobSchedulerImpl. ActiveMQ version is 5.7.0. We activated RedeliveryPlugin in 
> our broker config xml (see below). There two stacktraces below as well. One 
> is from ActiveMQ VMTransport thread, which tries to send a message to a dead 
> letter queue using RedeliveryPlugin. RedeliveryPlugin just tries to 
> reschedule the message for redelivery and for that it calls JobSchedulerImpl 
> and blocks on its synchronized method "schedule". On the way "consumersLock" 
> is locked. 
> Another stack trace is from JobScheduler:JMS thread, which fires a job to 
> redeliver some message and tries to send it using the same queue used by the 
> VMTransport thread. And it blocks on that consumersLock locked by the 
> VMTransport thread. And this occurs in JobSchedulerImpl#mainLoop method 
> inside synchronized {} block causing a deadlock, since the VMTransport thread 
> tries to call another synchronized method of JobSchedulerImpl. The art how 
> RedeliveryPlugin and JobSchedulerImpl are programmed seems to be quite 
> dangerous, since they both access the queues and try to acquire queue locks. 
> And additionally synchronized methods of JobSchedulerImpl are called directly 
> from RedeliveryPlugin making that to a nice source of thread deadlocks. And I 
> see no measures taken in the code to avoid these deadlocks.
> We can reproduce it quite often if we start ActiveMQ with empty stores 
> (kahadb and scheduler stores are deleted manually from the file system before 
> startup). But looking at the code, I would say that the problem may occur in 
> any situation in any deployment scenario (standalone or embedded in a JEE 
> container). It is just enough to have some Transport thread redelivering a 
> message and the JobScheduler thread trying to fire a job at the same moment 
> on the same queue.
> And another strange thing, which is may be has nothing to do with the 
> deadlock but is still strange, is that according to the stack trace 
> RedeliveryPlugin tries to redeliver an expired message.
> Our broker configuration:
> http://activemq.apache.org/schema/core"; brokerName="dcdng" 
> useJmx="true" useShutdownHook="false" schedulerSupport="false">
> 
> 
> 
> 
> 
>  journalMaxFileLength="10mb"/>
> 
> 
> 
> 
> 
> 
> 
> 
>  
> 
> 
>  useQueueForQueueMessages="true" processExpired="false" enableAudit="false"/>
> 
> 
> 
> 
> 
> 
>  sendToDlqIfMaxRetriesExceeded="true">
> 
> 
> 
>  maximumRedeliveries="10" 
>   redeliveryDelay="1000"/>
> 
> 
>redeliveryDelay="1000"/>
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Stack trace #1:
> Name: ActiveMQ VMTransport: vm://dcdng#101-1 
> State: BLOCKED on org.apache.activemq.broker.scheduler.JobSchedulerImpl@6a135124 owned by: JobScheduler:JMS 
> Total blocked: 22  Total waited: 13 
> org.apache.activemq.broker.scheduler.JobSchedulerImpl.schedule(JobSchedulerImpl

[jira] [Updated] (AMQ-4166) RedeliveryPlugin causes a deadlock with JobSchedulerImpl

2012-11-09 Thread Sergiy Barlabanov (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergiy Barlabanov updated AMQ-4166:
---

Description: 
Originates from the forum discussion 
http://activemq.2283324.n4.nabble.com/RedeliveryPlugin-causes-a-deadlock-with-JobSchedulerImpl-in-ActiveMQ-5-7-0-tt4659019.html

We have RedeliveryPlugin causing a thread deadlock together with 
JobSchedulerImpl. The ActiveMQ version is 5.7.0. We activated RedeliveryPlugin 
in our broker config XML (attached to the issue), and two stack traces are 
attached as well. One is from an ActiveMQ VMTransport thread, which tries to 
send a message to a dead letter queue using RedeliveryPlugin. RedeliveryPlugin 
tries to reschedule the message for redelivery and for that calls 
JobSchedulerImpl, blocking on its synchronized method "schedule". On the way, 
"consumersLock" is locked.
The other stack trace is from the JobScheduler:JMS thread, which fires a job to 
redeliver some message and tries to send it using the same queue used by the 
VMTransport thread. It blocks on the consumersLock held by the VMTransport 
thread. This happens in the JobSchedulerImpl#mainLoop method inside a 
synchronized {} block, causing a deadlock, since the VMTransport thread is 
trying to call another synchronized method of JobSchedulerImpl. The way 
RedeliveryPlugin and JobSchedulerImpl are programmed seems quite dangerous, 
since they both access the queues and try to acquire queue locks. In addition, 
synchronized methods of JobSchedulerImpl are called directly from 
RedeliveryPlugin, making this a prime source of thread deadlocks, and I see no 
measures taken in the code to avoid them.
We can reproduce it quite often if we start ActiveMQ with empty stores (the 
kahadb and scheduler stores are deleted manually from the file system before 
startup). But looking at the code, I would say the problem may occur in any 
situation and in any deployment scenario (standalone or embedded in a JEE 
container). It is enough to have some Transport thread redelivering a message 
while the JobScheduler thread is trying to fire a job at the same moment on 
the same queue.
Another strange thing, which may have nothing to do with the deadlock, is that 
according to the stack trace RedeliveryPlugin tries to redeliver an expired 
message.

broker config and the stack traces are attached to the issue.

  was:
Originates from the forum discussion 
http://activemq.2283324.n4.nabble.com/RedeliveryPlugin-causes-a-deadlock-with-JobSchedulerImpl-in-ActiveMQ-5-7-0-tt4659019.html

we have RedeliveryPlugin causing thread deadlock together with 
JobSchedulerImpl. ActiveMQ version is 5.7.0. We activated RedeliveryPlugin in 
our broker config xml (see below). There two stacktraces below as well. One is 
from ActiveMQ VMTransport thread, which tries to send a message to a dead 
letter queue using RedeliveryPlugin. RedeliveryPlugin just tries to reschedule 
the message for redelivery and for that it calls JobSchedulerImpl and blocks on 
its synchronized method "schedule". On the way "consumersLock" is locked. 
Another stack trace is from JobScheduler:JMS thread, which fires a job to 
redeliver some message and tries to send it using the same queue used by the 
VMTransport thread. And it blocks on that consumersLock locked by the 
VMTransport thread. And this occurs in JobSchedulerImpl#mainLoop method inside 
synchronized {} block causing a deadlock, since the VMTransport thread tries to 
call another synchronized method of JobSchedulerImpl. The art how 
RedeliveryPlugin and JobSchedulerImpl are programmed seems to be quite 
dangerous, since they both access the queues and try to acquire queue locks. 
And additionally synchronized methods of JobSchedulerImpl are called directly 
from RedeliveryPlugin making that to a nice source of thread deadlocks. And I 
see no measures taken in the code to avoid these deadlocks.
We can reproduce it quite often if we start ActiveMQ with empty stores (kahadb 
and scheduler stores are deleted manually from the file system before startup). 
But looking at the code, I would say that the problem may occur in any 
situation in any deployment scenario (standalone or embedded in a JEE 
container). It is just enough to have some Transport thread redelivering a 
message and the JobScheduler thread trying to fire a job at the same moment on 
the same queue.
And another strange thing, which is may be has nothing to do with the deadlock 
but is still strange, is that according to the stack trace RedeliveryPlugin 
tries to redeliver an expired message.

Our broker configuration:

http://activemq.apache.org/schema/core"; brokerName="dcdng" 
useJmx="true" useShutdownHook="false" schedulerSupport="false">

















   

[jira] [Created] (AMQ-4166) RedeliveryPlugin causes a deadlock with JobSchedulerImpl

2012-11-09 Thread Sergiy Barlabanov (JIRA)
Sergiy Barlabanov created AMQ-4166:
--

 Summary: RedeliveryPlugin causes a deadlock with JobSchedulerImpl
 Key: AMQ-4166
 URL: https://issues.apache.org/jira/browse/AMQ-4166
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker
Affects Versions: 5.7.0
 Environment: Reproduced on Windows 8, Windows Vista, MacOS X
with Oracle jdk 1.7.0_07. ActiveMQ is started embedded using RAR inside 
Glassfish 3.1.2.2.
Reporter: Sergiy Barlabanov


Originates from the forum discussion 
http://activemq.2283324.n4.nabble.com/RedeliveryPlugin-causes-a-deadlock-with-JobSchedulerImpl-in-ActiveMQ-5-7-0-tt4659019.html

We have RedeliveryPlugin causing a thread deadlock together with 
JobSchedulerImpl. The ActiveMQ version is 5.7.0. We activated RedeliveryPlugin 
in our broker config XML (see below). There are two stack traces below as 
well. One is from an ActiveMQ VMTransport thread, which tries to send a 
message to a dead letter queue using RedeliveryPlugin. RedeliveryPlugin tries 
to reschedule the message for redelivery and for that calls JobSchedulerImpl, 
blocking on its synchronized method "schedule". On the way, "consumersLock" is 
locked.
The other stack trace is from the JobScheduler:JMS thread, which fires a job to 
redeliver some message and tries to send it using the same queue used by the 
VMTransport thread. It blocks on the consumersLock held by the VMTransport 
thread. This happens in the JobSchedulerImpl#mainLoop method inside a 
synchronized {} block, causing a deadlock, since the VMTransport thread is 
trying to call another synchronized method of JobSchedulerImpl. The way 
RedeliveryPlugin and JobSchedulerImpl are programmed seems quite dangerous, 
since they both access the queues and try to acquire queue locks. In addition, 
synchronized methods of JobSchedulerImpl are called directly from 
RedeliveryPlugin, making this a prime source of thread deadlocks, and I see no 
measures taken in the code to avoid them.
We can reproduce it quite often if we start ActiveMQ with empty stores (the 
kahadb and scheduler stores are deleted manually from the file system before 
startup). But looking at the code, I would say the problem may occur in any 
situation and in any deployment scenario (standalone or embedded in a JEE 
container). It is enough to have some Transport thread redelivering a message 
while the JobScheduler thread is trying to fire a job at the same moment on 
the same queue.
Another strange thing, which may have nothing to do with the deadlock, is that 
according to the stack trace RedeliveryPlugin tries to redeliver an expired 
message.

Our broker configuration:

http://activemq.apache.org/schema/core"; brokerName="dcdng" 
useJmx="true" useShutdownHook="false" schedulerSupport="false">


















 








































Stack trace #1:


Name: ActiveMQ VMTransport: vm://dcdng#101-1 
State: BLOCKED on org.apache.activemq.broker.scheduler.JobSchedulerImpl@6a135124 owned by: JobScheduler:JMS 
Total blocked: 22  Total waited: 13 
org.apache.activemq.broker.scheduler.JobSchedulerImpl.schedule(JobSchedulerImpl.java:110)
org.apache.activemq.broker.scheduler.SchedulerBroker.send(SchedulerBroker.java:185)
org.apache.activemq.broker.BrokerFilter.send(BrokerFilter.java:129)
org.apache.activemq.broker.CompositeDestinationBroker.send(CompositeDestinationBroker.java:96)
org.apache.activemq.broker.TransactionBroker.send(TransactionBroker.java:317)
org.apache.activemq.broker.MutableBrokerFilter.send(MutableBrokerFilter.java:135)
org.apache.activemq.broker.MutableBrokerFilter.send(MutableBrokerFilter.java:135)
org.apache.activemq.broker.util.RedeliveryPlugin.scheduleRedelivery(RedeliveryPlugin.java:190)
org.apache.activemq.broker.util.RedeliveryPlugin.sendToDeadLetterQueue(RedeliveryPlugin.java:144)
org.apache.activemq.broker.MutableBrokerFilter.sendToDeadLetterQueue(MutableBrokerFilter.java:274)
org.apache.activemq.broker.region.RegionBroker.messageExpired(RegionBroker.java:798)
org.apache.activemq.broker.Broke