[jira] [Commented] (AMQ-4924) Duplicate messages are left in the persistence store

2013-12-11 Thread Ron Koerner (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13845437#comment-13845437
 ] 

Ron Koerner commented on AMQ-4924:
--

So if I have no duplex connection, I need to activate auditNetworkProducers on 
the receiving TransportConnector (in my case "LOCAL"), and if I have a duplex 
connection, I need to activate checkDuplicateMessagesOnDuplex on the sending 
NetworkConnector (in my case "REMOTE").
But when messages are sent from REMOTE to LOCAL, the DemandForwardingBridge on 
the LOCAL side actually performs the duplicate check, even though the option is 
configured only on the REMOTE side.
As soon as 5.10 is released I will change my setup accordingly. For now I can 
use supportFailOver as a workaround.
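
For reference, a minimal Java sketch of that setup (this is not the attached test 
case; the setAuditNetworkProducers and setCheckDuplicateMessagesOnDuplex setters 
are assumed to mirror the XML attribute names, the latter only existing as of 5.10):
{code}
import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.broker.TransportConnector;
import org.apache.activemq.network.NetworkConnector;

public class DuplicateSuppressionSketch {
    public static void main(String[] args) throws Exception {
        // LOCAL broker: receives the forwarded messages, so the producer audit
        // belongs on its transport connector (covers the non-duplex case).
        BrokerService local = new BrokerService();
        local.setBrokerName("LOCAL");
        local.setDataDirectory("target/local-data");
        TransportConnector localConnector = local.addConnector("tcp://0.0.0.0:61616");
        localConnector.setAuditNetworkProducers(true); // XML: auditNetworkProducers="true"
        local.start();

        // REMOTE broker: initiates the duplex bridge towards LOCAL.
        BrokerService remote = new BrokerService();
        remote.setBrokerName("REMOTE");
        remote.setDataDirectory("target/remote-data");
        NetworkConnector bridge = remote.addNetworkConnector("static:(tcp://localhost:61616)");
        bridge.setDuplex(true);
        // Assumed setter for the new checkDuplicateMessagesOnDuplex attribute (5.10+).
        bridge.setCheckDuplicateMessagesOnDuplex(true);
        remote.start();
    }
}
{code}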

Thank you very much!

Other people with this problem may appreciate a hint to use 
auditNetworkProducers or checkDuplicateMessagesOnDuplex whenever they run into 
the situation that the cursor detects a duplicate.


> Duplicate messages are left in the persistence store
> 
>
> Key: AMQ-4924
> URL: https://issues.apache.org/jira/browse/AMQ-4924
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 5.8.0, 5.9.0
>Reporter: Ron Koerner
>Assignee: Rob Davies
> Fix For: 5.10.0
>
> Attachments: AMQ4924.java
>
>
> We have a local and remote broker connected with a duplex bridge, which is 
> initiated by the remote broker.
> Producers are attached to the remote broker, one consumer to the local broker.
> The following scenario causes messages to be left in the local store, which 
> are replayed when the local broker is restarted:
> # messages are forwarded from the remote broker to the local broker
> # messages are dispatched to the local consumer
> # the connection between the local and remote broker fails
> # the local broker tries to acknowledge the message reception to the remote 
> broker, which fails
> # the remote broker reconnects
> # the messages are resent
> # the local broker correctly identifies them as duplicates, but puts them 
> into the store nevertheless where they remain until the local broker is 
> restarted
> # other messages are produced and consumed without a problem
> # the local broker is restarted
> # the duplicates are now delivered to the local consumer again and of course 
> out of order
> This behaviour can be identified by a queue size which does not seem to 
> shrink below a certain number, even if a consumer is connected and consuming 
> other messages.
> When the log level is set to TRACE these messages indicate the problem:
> {code}
> 2013-12-06 20:35:17,405 TRACE .a.a.b.r.c.AbstractStoreCursor - 
> org.apache.activemq.broker.region.cursors.QueueStorePrefetch@c0bc4f:testqueue,batchResetNeeded=false,storeHasMessages=true,size=0,cacheEnabled=true,maxBatchSize:1
>  - cursor got duplicate: ID:smcexp5-58011-1386358514283-7:1:1:1:1, 4 
> [ActiveMQ VMTransport: vm://LOCAL#19-1]
> 2013-12-06 20:35:17,412 TRACE .a.a.b.r.c.AbstractStoreCursor - 
> org.apache.activemq.broker.region.cursors.QueueStorePrefetch@c0bc4f:testqueue,batchResetNeeded=false,storeHasMessages=false,size=1,cacheEnabled=false,maxBatchSize:1
>  - fillBatch [ActiveMQ BrokerService[LOCAL] Task-2]
> {code}



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Commented] (AMQ-4924) Duplicate messages are left in the persistence store

2013-12-11 Thread Ron Koerner (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13845343#comment-13845343
 ] 

Ron Koerner commented on AMQ-4924:
--

After looking at your change, I wondered whether the problem also occurs without a 
duplex bridge and reran my test. It does happen without duplex as well. I may also 
have misunderstood your code, but I was wondering how the remote side can know 
which messages need to be resent, since sometimes the messages themselves get lost 
and other times only the response/ack gets lost (as in my case).




--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Comment Edited] (AMQ-4924) Duplicate messages are left in the persistence store

2013-12-11 Thread Ron Koerner (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13845322#comment-13845322
 ] 

Ron Koerner edited comment on AMQ-4924 at 12/11/13 11:50 AM:
-

This also seems to be a duplicate of 
https://issues.apache.org/jira/browse/AMQ-3473.
Unfortunately, setting auditNetworkProducers = true does not suppress the extra 
message in the store.


was (Author: ron.koerner):
This also seems to be a duplicate of 
https://issues.apache.org/jira/browse/AMQ-3473.




--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Comment Edited] (AMQ-4924) Duplicate messages are left in the persistence store

2013-12-11 Thread Ron Koerner (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13845321#comment-13845321
 ] 

Ron Koerner edited comment on AMQ-4924 at 12/11/13 11:48 AM:
-

It also seems to help to enable "supportFailOver" in the broker.

I don't know about the individual advantages or disadvantages for each 
solution. Can you elaborate? Is there a reason not to activate supportFailOver 
or checkDuplicateMessagesOnDuplex permanently/by default?



was (Author: ron.koerner):
It also seems to help to enable "supportFailOver" in the broker (tested by me) 
or "auditNetworkProducers" on the receiving TransportConnector (according to 
Gary Tully).

I don't know about the individual advantages or disadvantages for each 
solution. Can you elaborate? Is there a reason not to activate supportFailOver 
or checkDuplicateMessagesOnDuplex permanently/by default?




--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Updated] (AMQ-4924) Duplicate messages are left in the persistence store

2013-12-11 Thread Ron Koerner (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ron Koerner updated AMQ-4924:
-


This also seems to be a duplicate of 
https://issues.apache.org/jira/browse/AMQ-3473.




--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Commented] (AMQ-4924) Duplicate messages are left in the persistence store

2013-12-11 Thread Ron Koerner (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13845321#comment-13845321
 ] 

Ron Koerner commented on AMQ-4924:
--

It also seems to help to enable "supportFailOver" in the broker (tested by me) 
or "auditNetworkProducers" on the receiving TransportConnector (according to 
Gary Tully).

I don't know about the individual advantages or disadvantages for each 
solution. Can you elaborate? Is there a reason not to activate supportFailOver 
or checkDuplicateMessagesOnDuplex permanently/by default?
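
A hypothetical sketch of both workarounds in broker code; it assumes the 
supportFailOver and auditNetworkProducers XML attributes map to same-named setters, 
as Spring-style broker attributes usually do:
{code}
import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.broker.TransportConnector;

public class WorkaroundSketch {
    public static void main(String[] args) throws Exception {
        BrokerService local = new BrokerService();
        local.setBrokerName("LOCAL");

        // Workaround 1 (tested above): enable failover support on the broker.
        // Assumption: <broker supportFailOver="true"/> maps to this setter.
        local.setSupportFailOver(true);

        // Workaround 2 (per Gary Tully): audit producers on the receiving
        // transport connector so resent network messages are recognised.
        TransportConnector connector = local.addConnector("tcp://0.0.0.0:61616");
        connector.setAuditNetworkProducers(true);

        local.start();
        local.waitUntilStarted();
    }
}
{code}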




--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Updated] (AMQ-4924) Duplicate messages are left in the persistence store

2013-12-06 Thread Ron Koerner (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ron Koerner updated AMQ-4924:
-

Attachment: AMQ4924.java

Attached a new version in which the local consumer is disconnected and reconnected 
to rule out prefetch issues.
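
The disconnect/reconnect itself is just plain JMS; a rough sketch of the idea 
(queue name and broker URL are placeholders, not taken from the attachment):
{code}
import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class ReconnectConsumerSketch {
    public static void main(String[] args) throws Exception {
        Connection connection =
                new ActiveMQConnectionFactory("tcp://localhost:61616").createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue("testqueue");

        // First consumer: drain what it gets, then close it so anything still
        // sitting in its prefetch buffer is released back to the broker.
        MessageConsumer consumer = session.createConsumer(queue);
        while (consumer.receive(1000) != null) { /* drain */ }
        consumer.close();

        // Fresh consumer: if the queue size still does not shrink now, the
        // leftover messages live in the store and are not merely prefetched.
        consumer = session.createConsumer(queue);
        Message message;
        while ((message = consumer.receive(1000)) != null) {
            System.out.println("still in store: " + message.getJMSMessageID());
        }
        connection.close();
    }
}
{code}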




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (AMQ-4924) Duplicate messages are left in the persistence store

2013-12-06 Thread Ron Koerner (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ron Koerner updated AMQ-4924:
-

Attachment: (was: AMQ4924.java)




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (AMQ-4924) Duplicate messages are left in the persistence store

2013-12-06 Thread Ron Koerner (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ron Koerner updated AMQ-4924:
-

Attachment: AMQ4924.java

Attached a test case that reproduces the problem. It can be reproduced with both 
KahaDB and LevelDB.
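
For anyone rebuilding the test, switching the store under test is a one-liner on 
the BrokerService (the directory path is an example; the LevelDB adapter, 
org.apache.activemq.leveldb.LevelDBStore, can be plugged in the same way):
{code}
import java.io.File;
import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.store.kahadb.KahaDBPersistenceAdapter;

public class StoreSetupSketch {
    static BrokerService newLocalBroker() throws Exception {
        BrokerService broker = new BrokerService();
        broker.setBrokerName("LOCAL");

        // KahaDB variant; for LevelDB, set an org.apache.activemq.leveldb.LevelDBStore
        // with its own directory instead.
        KahaDBPersistenceAdapter kahaDB = new KahaDBPersistenceAdapter();
        kahaDB.setDirectory(new File("target/local-kahadb"));
        broker.setPersistenceAdapter(kahaDB);

        broker.addConnector("tcp://0.0.0.0:61616");
        return broker;
    }
}
{code}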




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (AMQ-4924) Duplicate messages are left in the persistence store

2013-12-06 Thread Ron Koerner (JIRA)
Ron Koerner created AMQ-4924:


 Summary: Duplicate messages are left in the persistence store
 Key: AMQ-4924
 URL: https://issues.apache.org/jira/browse/AMQ-4924
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker
Affects Versions: 5.9.0, 5.8.0
Reporter: Ron Koerner


We have a local and remote broker connected with a duplex bridge, which is 
initiated by the remote broker.
Producers are attached to the remote broker, one consumer to the local broker.
The following scenario causes messages to be left in the local store, which are 
replayed when the local broker is restarted:
# messages are forwarded from the remote broker to the local broker
# messages are dispatched to the local consumer
# the connection between the local and remote broker fails
# the local broker tries to acknowledge the message reception to the remote 
broker, which fails
# the remote broker reconnects
# the messages are resent
# the local broker correctly identifies them as duplicates, but puts them into 
the store nevertheless where they remain until the local broker is restarted
# other messages are produced and consumed without a problem
# the local broker is restarted
# the duplicates are now delivered to the local consumer again and of course 
out of order

This behaviour can be identified by a queue size which does not seem to shrink 
below a certain number, even if a consumer is connected and consuming other 
messages.

When the log level is set to TRACE these messages indicate the problem:
{code}
2013-12-06 20:35:17,405 TRACE .a.a.b.r.c.AbstractStoreCursor - 
org.apache.activemq.broker.region.cursors.QueueStorePrefetch@c0bc4f:testqueue,batchResetNeeded=false,storeHasMessages=true,size=0,cacheEnabled=true,maxBatchSize:1
 - cursor got duplicate: ID:smcexp5-58011-1386358514283-7:1:1:1:1, 4 [ActiveMQ 
VMTransport: vm://LOCAL#19-1]
2013-12-06 20:35:17,412 TRACE .a.a.b.r.c.AbstractStoreCursor - 
org.apache.activemq.broker.region.cursors.QueueStorePrefetch@c0bc4f:testqueue,batchResetNeeded=false,storeHasMessages=false,size=1,cacheEnabled=false,maxBatchSize:1
 - fillBatch [ActiveMQ BrokerService[LOCAL] Task-2]
{code}
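
For readers without the attachment, the topology from the description boils down to 
something like the sketch below (ports, queue name and data directories are made 
up; the link failure, lost ack, reconnect and resend are what AMQ4924.java 
simulates and are not reproduced here):
{code}
import javax.jms.Connection;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.network.NetworkConnector;

public class DuplexTopologySketch {
    public static void main(String[] args) throws Exception {
        // LOCAL broker: owns the persistent store the duplicates end up in.
        BrokerService local = new BrokerService();
        local.setBrokerName("LOCAL");
        local.setDataDirectory("target/local-data");
        local.addConnector("tcp://localhost:61616");
        local.start();

        // REMOTE broker: initiates the duplex bridge towards LOCAL.
        BrokerService remote = new BrokerService();
        remote.setBrokerName("REMOTE");
        remote.setDataDirectory("target/remote-data");
        remote.addConnector("tcp://localhost:61617");
        NetworkConnector bridge = remote.addNetworkConnector("static:(tcp://localhost:61616)");
        bridge.setDuplex(true);
        remote.start();

        // Producers attach to REMOTE, the consumer to LOCAL, as in the report.
        Connection producerConn =
                new ActiveMQConnectionFactory("tcp://localhost:61617").createConnection();
        producerConn.start();
        Session ps = producerConn.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = ps.createProducer(ps.createQueue("testqueue"));
        producer.send(ps.createTextMessage("payload"));

        Connection consumerConn =
                new ActiveMQConnectionFactory("tcp://localhost:61616").createConnection();
        consumerConn.start();
        Session cs = consumerConn.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = cs.createConsumer(cs.createQueue("testqueue"));
        System.out.println("received: " + consumer.receive(5000));
    }
}
{code}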




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (AMQ-4805) IllegalStateException during thread.Scheduler.schedualPeriodically

2013-11-25 Thread Ron Koerner (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13831404#comment-13831404
 ] 

Ron Koerner commented on AMQ-4805:
--

I seem to have the same problem with ActiveMQ 5.8.0. It occurs out of the blue 
(no preceding errors at or above INFO level) and doesn't go away until the 
broker is restarted.
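
The stack trace below bottoms out in java.util.Timer, and that part at least is 
plain JDK behaviour: once a Timer has been cancelled, every further schedule() call 
fails with exactly this exception. A minimal illustration (JDK only, no broker code 
involved):
{code}
import java.util.Timer;
import java.util.TimerTask;

public class CancelledTimerDemo {
    public static void main(String[] args) {
        Timer timer = new Timer("Scheduler-like shared timer");
        timer.cancel(); // something cancels the shared timer once...

        // ...and every later scheduling attempt on it fails:
        timer.schedule(new TimerTask() {
            @Override
            public void run() { /* never runs */ }
        }, 1000);
        // throws java.lang.IllegalStateException: Timer already cancelled.
    }
}
{code}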

{code}
java.lang.IllegalStateException: Timer already cancelled.
at java.util.Timer.sched(Timer.java:354) ~[na:1.6.0_21-ea]
at java.util.Timer.schedule(Timer.java:222) ~[na:1.6.0_21-ea]
at 
org.apache.activemq.thread.Scheduler.schedualPeriodically(Scheduler.java:50) 
~[activemq-client-5.8.0.jar:5.8.0]
at org.apache.activemq.broker.region.Topic.start(Topic.java:549) 
~[activemq-broker-5.8.0.jar:5.8.0]
at 
org.apache.activemq.broker.region.AbstractRegion.addDestination(AbstractRegion.java:142)
 ~[activemq-broker-5.8.0.jar:5.8.0]
at 
org.apache.activemq.broker.region.RegionBroker.addDestination(RegionBroker.java:277)
 ~[activemq-broker-5.8.0.jar:5.8.0]
at 
org.apache.activemq.broker.BrokerFilter.addDestination(BrokerFilter.java:145) 
~[activemq-broker-5.8.0.jar:5.8.0]
at 
org.apache.activemq.advisory.AdvisoryBroker.addDestination(AdvisoryBroker.java:174)
 ~[activemq-broker-5.8.0.jar:5.8.0]
at 
org.apache.activemq.broker.BrokerFilter.addDestination(BrokerFilter.java:145) 
~[activemq-broker-5.8.0.jar:5.8.0]
at 
org.apache.activemq.broker.BrokerFilter.addDestination(BrokerFilter.java:145) 
~[activemq-broker-5.8.0.jar:5.8.0]
at 
org.apache.activemq.broker.MutableBrokerFilter.addDestination(MutableBrokerFilter.java:151)
 ~[activemq-broker-5.8.0.jar:5.8.0]
at 
org.apache.activemq.broker.region.RegionBroker.send(RegionBroker.java:387) 
~[activemq-broker-5.8.0.jar:5.8.0]
at 
org.apache.activemq.broker.jmx.ManagedRegionBroker.send(ManagedRegionBroker.java:282)
 ~[activemq-broker-5.8.0.jar:5.8.0]
at 
org.apache.activemq.advisory.AdvisoryBroker.fireAdvisory(AdvisoryBroker.java:550)
 ~[activemq-broker-5.8.0.jar:5.8.0]
at 
org.apache.activemq.advisory.AdvisoryBroker.fireConsumerAdvisory(AdvisoryBroker.java:499)
 ~[activemq-broker-5.8.0.jar:5.8.0]
at 
org.apache.activemq.advisory.AdvisoryBroker.fireConsumerAdvisory(AdvisoryBroker.java:485)
 ~[activemq-broker-5.8.0.jar:5.8.0]
at 
org.apache.activemq.advisory.AdvisoryBroker.addConsumer(AdvisoryBroker.java:97) 
~[activemq-broker-5.8.0.jar:5.8.0]
at 
org.apache.activemq.broker.BrokerFilter.addConsumer(BrokerFilter.java:89) 
~[activemq-broker-5.8.0.jar:5.8.0]
at 
org.apache.activemq.broker.BrokerFilter.addConsumer(BrokerFilter.java:89) 
~[activemq-broker-5.8.0.jar:5.8.0]
at 
org.apache.activemq.broker.MutableBrokerFilter.addConsumer(MutableBrokerFilter.java:95)
 ~[activemq-broker-5.8.0.jar:5.8.0]
at 
org.apache.activemq.broker.TransportConnection.processAddConsumer(TransportConnection.java:619)
 ~[activemq-broker-5.8.0.jar:5.8.0]
at 
org.apache.activemq.command.ConsumerInfo.visit(ConsumerInfo.java:332) 
~[activemq-client-5.8.0.jar:5.8.0]
at 
org.apache.activemq.broker.TransportConnection.service(TransportConnection.java:329)
 ~[activemq-broker-5.8.0.jar:5.8.0]
at 
org.apache.activemq.broker.TransportConnection$1.onCommand(TransportConnection.java:184)
 [activemq-broker-5.8.0.jar:5.8.0]
at 
org.apache.activemq.transport.ResponseCorrelator.onCommand(ResponseCorrelator.java:116)
 [activemq-client-5.8.0.jar:5.8.0]
at 
org.apache.activemq.transport.MutexTransport.onCommand(MutexTransport.java:50) 
[activemq-client-5.8.0.jar:5.8.0]
at 
org.apache.activemq.transport.vm.VMTransport.doDispatch(VMTransport.java:138) 
[activemq-broker-5.8.0.jar:5.8.0]
at 
org.apache.activemq.transport.vm.VMTransport.dispatch(VMTransport.java:127) 
[activemq-broker-5.8.0.jar:5.8.0]
at 
org.apache.activemq.transport.vm.VMTransport.oneway(VMTransport.java:104) 
[activemq-broker-5.8.0.jar:5.8.0]
at 
org.apache.activemq.transport.MutexTransport.oneway(MutexTransport.java:68) 
[activemq-client-5.8.0.jar:5.8.0]
at 
org.apache.activemq.transport.ResponseCorrelator.oneway(ResponseCorrelator.java:60)
 [activemq-client-5.8.0.jar:5.8.0]
at 
org.apache.activemq.network.DemandForwardingBridgeSupport.addSubscription(DemandForwardingBridgeSupport.java:898)
 [activemq-broker-5.8.0.jar:5.8.0]
at 
org.apache.activemq.network.DemandForwardingBridgeSupport.addConsumerInfo(DemandForwardingBridgeSupport.java:1176)
 [activemq-broker-5.8.0.jar:5.8.0]
at 
org.apache.activemq.network.DemandForwardingBridgeSupport.serviceRemoteConsumerAdvisory(DemandForwardingBridgeSupport.java:761)
 [activemq-broker-5.8.0.jar:5.8.0]
at 
org.apache.activemq.network.DemandForwardingBridgeSupport.serviceRemoteCommand(DemandForwardingBridgeSupport.java:596)
 [acti

[jira] [Commented] (AMQ-4097) Broker-to-Broker Reconnect fails wrongly due to duplicate name

2013-11-21 Thread Ron Koerner (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13828815#comment-13828815
 ] 

Ron Koerner commented on AMQ-4097:
--

Hi Amine,

usually that means that the broker has a lot of things to do. Especially 
broker logins/logouts seem to be quite critical. We have one environment with 
about 50 connected remote brokers which have about 1 consumers shared among 
them. These are all connected over one leased line, so when that line breaks 
down, all remote brokers are disconnected due to the missing keepalive. That 
causes a cascade of teardowns which can take minutes, especially once the line 
comes back and the remote brokers try to log in again.

We do not have problems with a high number of messages in either direction or 
with a single logout/login.

Regards,
Ron

> Broker-to-Broker Reconnect fails wrongly due to duplicate name
> --
>
> Key: AMQ-4097
> URL: https://issues.apache.org/jira/browse/AMQ-4097
> Project: ActiveMQ
>  Issue Type: Bug
>Affects Versions: 5.7.0
> Environment: A central broker to which a lot (50+) of external 
> brokers connect with a duplex bridge. A special routing/firewall is used 
> which can affect timing but not order of TCP packets. This can be simulated 
> by using socat.
> Actually we are using 5.7-SNAPSHOT of 2012-08-31.
>Reporter: Ron Koerner
>
> The situation is as follows:
> - an external broker A connects
> - time passes
> - a lot of external brokers disconnect including A
> - A reconnects (as well as all the other external brokers)
> - wrong message about duplicate name is generated
> In the log it looks like this:
> {code}
> 2012-10-08 17:11:19,835 INFO  .DemandForwardingBridgeSupport - Network 
> connection between vm://c04ptec#278 and 
> tcp:///127.0.0.1:54191(cbox-56BU902442) has been established. 
> [StartLocalBridge: localBroker=vm://c04ptec#278]
> ...
> ... a lot more of the following with different ports
> 2012-10-08 17:37:01,958 WARN  .DemandForwardingBridgeSupport - Network 
> connection between vm://c04ptec#278 and tcp:///127.0.0.1:54191 shutdown due 
> to a remote error: java.io.EOFException [ActiveMQ NIO Worker 193]
> ... more of these
> 2012-10-08 17:37:03,438 INFO  emq.broker.TransportConnection - Started 
> responder end of duplex bridge cBox 56BU902442 to cBox 
> Proxy@ID:P013SPWMK1WN-39320-1349704902319-0:1 [ActiveMQ NIO Worker 215]
> ...
> 2012-10-08 17:37:03,922 WARN  emq.broker.TransportConnection - Failed to add 
> Connection ID:c04ptec-51799-1349706422094-242:2, reason: 
> javax.jms.InvalidClientIDException: Broker: c04ptec - Client: cBox 56BU902442 
> to cBox Proxy_cbox-56BU902442_inbound_c04ptec already connected from 
> vm://c04ptec#278 [StartLocalBridge: localBroker=vm://c04ptec#478]
> 2012-10-08 17:37:03,923 INFO  .DemandForwardingBridgeSupport - Network 
> connection between vm://c04ptec#478 and tcp:///127.0.0.1:56529 shutdown due 
> to a local error: javax.jms.InvalidClientIDException: Broker: c04ptec - 
> Client: cBox 56BU902442 to cBox Proxy_cbox-56BU902442_inbound_c04ptec already 
> connected from vm://c04ptec#278 [StartLocalBridge: 
> localBroker=vm://c04ptec#478]
> ...
> 2012-10-08 17:37:04,036 INFO  .DemandForwardingBridgeSupport - c04ptec bridge 
> to cbox-56BU902442 stopped [ActiveMQ Task-182]
> ...
> 2012-10-08 17:37:06,540 INFO  emq.broker.TransportConnection - Started 
> responder end of duplex bridge cBox 56BU902442 to cBox 
> Proxy@ID:P013SPWMK1WN-39320-1349704902319-0:1 [ActiveMQ NIO Worker 207]
> ...
> 2012-10-08 17:37:06,548 WARN  emq.broker.TransportConnection - Failed to add 
> Connection ID:c04ptec-51799-1349706422094-292:1, reason: 
> javax.jms.InvalidClientIDException: Broker: c04ptec - Client: cBox 56BU902442 
> to cBox Proxy_cbox-56BU902442_inbound_c04ptec already connected from 
> vm://c04ptec#278 [StartLocalBridge: localBroker=vm://c04ptec#570]
> 2012-10-08 17:37:06,548 INFO  .DemandForwardingBridgeSupport - Network 
> connection between vm://c04ptec#570 and tcp:///127.0.0.1:56576 shutdown due 
> to a local error: javax.jms.InvalidClientIDException: Broker: c04ptec - 
> Client: cBox 56BU902442 to cBox Proxy_cbox-56BU902442_inbound_c04ptec already 
> connected from vm://c04ptec#278 [StartLocalBridge: 
> localBroker=vm://c04ptec#570]
> ...
> 2012-10-08 17:37:06,559 INFO  .DemandForwardingBridgeSupport - c04ptec bridge 
> to cbox-56BU902442 stopped [ActiveMQ Task-204]
> ...
> 2012-10-08 17:37:24,417 INFO  .DemandForwardingBridgeSupport - c04ptec bridge 
> to cbox-56BU902442 stopped [ActiveMQ Task-73]
> ...
> 2012-10-08 17:37:25,103 INFO  emq.broker.TransportConnection - Started 
> responder end of duplex bridge cBox 56BU902442 to cBox 
> Proxy@ID:P013SPWMK1WN-39320-1349704902319-0:1 [ActiveMQ NIO Worker 268]
> ...
> 2012-10-08 17:37:29,110 INFO  .DemandForwardingBri

[jira] [Commented] (AMQ-4097) Broker-to-Broker Reconnect fails wrongly due to duplicate name

2013-11-20 Thread Ron Koerner (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13827665#comment-13827665
 ] 

Ron Koerner commented on AMQ-4097:
--

Hi Amine,

what you wrote is probably a prerequisite to get it working at all. We are 
usually encoding source and destination hostnames in the network-connector name 
to be on the safe side.

Anyway, the problem also occurs under heavy load with distinct network 
connector names.

Regards,
Ron


[jira] [Created] (AMQ-4838) java.lang.ClassCastException: org.apache.activemq.store.kahadb.data.KahaTraceCommand cannot be cast to org.apache.activemq.store.kahadb.data.KahaAddMessageCommand

2013-10-30 Thread Ron Koerner (JIRA)
Ron Koerner created AMQ-4838:


 Summary: java.lang.ClassCastException: 
org.apache.activemq.store.kahadb.data.KahaTraceCommand cannot be cast to 
org.apache.activemq.store.kahadb.data.KahaAddMessageCommand
 Key: AMQ-4838
 URL: https://issues.apache.org/jira/browse/AMQ-4838
 Project: ActiveMQ
  Issue Type: Bug
Affects Versions: 5.8.0
 Environment: Standalone single ActiveMQ 5.8.0, Linux
Reporter: Ron Koerner
Priority: Critical


I got this exception out of the blue on a standalone ActiveMQ broker which had 
been running for 8 days at that time. I set the priority to critical as we lost 
a lot of data due to this bug.

{code}
2013-10-29 16:09:54,439 ERROR on.cursors.AbstractStoreCursor - 
org.apache.activemq.broker.region.cursors.QueueStorePrefetch@261c44:esf.deubait.ice,batchResetNeeded=false,storeHasMessages=true,size=41413,cacheEnabled=false,maxBatchSize:200
 - Failed to fill batch [ActiveMQ Transport: tcp:///10.254.98.20:36270@6909]
java.lang.RuntimeException: java.lang.ClassCastException: 
org.apache.activemq.store.kahadb.data.KahaTraceCommand cannot be cast to
 org.apache.activemq.store.kahadb.data.KahaAddMessageCommand
at 
org.apache.activemq.broker.region.cursors.AbstractStoreCursor.fillBatch(AbstractStoreCursor.java:277)
 ~[activemq-broker
-5.8.0.jar:5.8.0]
at 
org.apache.activemq.broker.region.cursors.AbstractStoreCursor.reset(AbstractStoreCursor.java:110)
 ~[activemq-broker-5.8
.0.jar:5.8.0]
at 
org.apache.activemq.broker.region.cursors.StoreQueueCursor.reset(StoreQueueCursor.java:157)
 [activemq-broker-5.8.0.jar:
5.8.0]
at 
org.apache.activemq.broker.region.Queue.doPageInForDispatch(Queue.java:1775) 
[activemq-broker-5.8.0.jar:5.8.0]
at 
org.apache.activemq.broker.region.Queue.pageInMessages(Queue.java:2003) 
[activemq-broker-5.8.0.jar:5.8.0]
at org.apache.activemq.broker.region.Queue.iterate(Queue.java:1491) 
[activemq-broker-5.8.0.jar:5.8.0]
at org.apache.activemq.broker.region.Queue.wakeup(Queue.java:1709) 
[activemq-broker-5.8.0.jar:5.8.0]
at org.apache.activemq.broker.region.Queue.messageSent(Queue.java:1704) 
[activemq-broker-5.8.0.jar:5.8.0]
at 
org.apache.activemq.broker.region.Queue.doMessageSend(Queue.java:795) 
[activemq-broker-5.8.0.jar:5.8.0]
at org.apache.activemq.broker.region.Queue.send(Queue.java:721) 
[activemq-broker-5.8.0.jar:5.8.0]
at 
org.apache.activemq.broker.region.AbstractRegion.send(AbstractRegion.java:406) 
[activemq-broker-5.8.0.jar:5.8.0]
at 
org.apache.activemq.broker.region.RegionBroker.send(RegionBroker.java:392) 
[activemq-broker-5.8.0.jar:5.8.0]
at 
org.apache.activemq.broker.jmx.ManagedRegionBroker.send(ManagedRegionBroker.java:282)
 [activemq-broker-5.8.0.jar:5.8.0]
at org.apache.activemq.broker.BrokerFilter.send(BrokerFilter.java:129) 
[activemq-broker-5.8.0.jar:5.8.0]
at 
org.apache.activemq.broker.CompositeDestinationBroker.send(CompositeDestinationBroker.java:96)
 [activemq-broker-5.8.0.j
ar:5.8.0]
at 
org.apache.activemq.broker.TransactionBroker.send(TransactionBroker.java:317) 
[activemq-broker-5.8.0.jar:5.8.0]
at org.apache.activemq.broker.BrokerFilter.send(BrokerFilter.java:129) 
[activemq-broker-5.8.0.jar:5.8.0]
at 
org.apache.activemq.security.AuthorizationBroker.send(AuthorizationBroker.java:202)
 [activemq-broker-5.8.0.jar:5.8.0]
at org.apache.activemq.broker.BrokerFilter.send(BrokerFilter.java:129) 
[activemq-broker-5.8.0.jar:5.8.0]
at 
org.apache.activemq.broker.MutableBrokerFilter.send(MutableBrokerFilter.java:135)
 [activemq-broker-5.8.0.jar:5.8.0]
at 
org.apache.activemq.broker.TransportConnection.processMessage(TransportConnection.java:499)
 [activemq-broker-5.8.0.jar:
5.8.0]
at 
org.apache.activemq.command.ActiveMQMessage.visit(ActiveMQMessage.java:749) 
[activemq-client-5.8.0.jar:5.8.0]
at 
org.apache.activemq.broker.TransportConnection.service(TransportConnection.java:329)
 [activemq-broker-5.8.0.jar:5.8.0]
at 
org.apache.activemq.broker.TransportConnection$1.onCommand(TransportConnection.java:184)
 [activemq-broker-5.8.0.jar:5.8
.0]
at 
org.apache.activemq.transport.MutexTransport.onCommand(MutexTransport.java:50) 
[activemq-client-5.8.0.jar:5.8.0]
at 
org.apache.activemq.transport.WireFormatNegotiator.onCommand(WireFormatNegotiator.java:113)
 [activemq-client-5.8.0.jar:
5.8.0]
at 
org.apache.activemq.transport.AbstractInactivityMonitor.onCommand(AbstractInactivityMonitor.java:288)
 [activemq-client-
5.8.0.jar:5.8.0]
at 
org.apache.activemq.transport.TransportSupport.doConsume(TransportSupport.java:83)
 [activemq-client-5.8.0.jar:5.8.0]
at 
org.apache.activemq.transport.tcp.TcpTransport.doRun(TcpTransport.java:214) 
[activemq-client-5.8.0.jar:5.8.0]
at 
org.apache.activemq.tr

[jira] [Commented] (AMQ-4097) Broker-to-Broker Reconnect fails wrongly due to duplicate name

2013-07-29 Thread Ron Koerner (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13722285#comment-13722285
 ] 

Ron Koerner commented on AMQ-4097:
--

Hi Vee,

I'm not using the actual socat but something socat-like that I developed 
myself. As it is company property I can't disclose it. You may be able to 
duplicate it by looking at the proxy example in Netty.

Regards,
Ron


[jira] [Commented] (AMQ-4097) Broker-to-Broker Reconnect fails wrongly due to duplicate name

2013-05-23 Thread Ron Koerner (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13665049#comment-13665049
 ] 

Ron Koerner commented on AMQ-4097:
--

Hi Edwin,
"initialReconnectDelay" may work if you have only one broker connecting. We 
have more than 40 brokers connecting to a central broker, so unless they have 
all different "initialReconnectDelay" settings several brokers will reconnect 
at the same time causing high load and there slowing down the connection 
teardown.
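
A sketch of what per-broker staggering could look like on the remote side. This is 
only an illustration: it assumes the static discovery agent accepts these reconnect 
options as URI parameters (check the networkConnector documentation for your 
version), and host, port and the delay formula are made up:
{code}
import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.network.NetworkConnector;

public class StaggeredReconnectSketch {
    public static void main(String[] args) throws Exception {
        String brokerName = args.length > 0 ? args[0] : "remote-01";
        BrokerService broker = new BrokerService();
        broker.setBrokerName(brokerName);

        // Derive a per-broker delay from the name so 40+ brokers do not all
        // reconnect in the same instant after a line outage.
        long initialDelayMs = 5000 + (Math.abs(brokerName.hashCode()) % 40) * 1000L;

        // Assumed URI options on the static: discovery agent.
        NetworkConnector nc = broker.addNetworkConnector(
                "static:(tcp://central:61616)?initialReconnectDelay=" + initialDelayMs
                + "&useExponentialBackOff=false");
        nc.setDuplex(true);
        broker.start();
    }
}
{code}
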
Regards,
Ron


[jira] [Commented] (AMQ-4097) Broker-to-Broker Reconnect fails wrongly due to duplicate name

2013-05-22 Thread Ron Koerner (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13664030#comment-13664030
 ] 

Ron Koerner commented on AMQ-4097:
--

Hi Edwin,

the problem is still there with ActiveMQ 5.8.0.

We are using something a little bit more powerful than socat, but from a 
network point of view it is just two socats in a row with some additional 
routing based on the IP of the connecting broker.

I ran some extensive tests and found that the error is actually correct. Under 
load the broker is simply too slow to tear down the previous connection when the 
disconnect and reconnect happen in quick succession; sometimes it takes the 
broker minutes to tear down a connection completely.

Unfortunately, the code around creating and destroying connections is such a 
complicated tangle of threads, singletons, per-connection objects, 
per-configured-bridge objects and shared state that I don't think there is an 
easy solution to our problem.

To work around the problem I added a connection delay to my "socat", so that a 
new connection can only be attempted 30 seconds after the previous connect or 
disconnect.
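
(A rough, self-contained sketch of that idea follows. We actually use socat plus 
custom routing rather than Java, so the class below, its name and the fixed 
30-second gap are only illustrative, and error handling and socket cleanup are 
deliberately minimal.)

{code}
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

// Sketch of a TCP forwarder that refuses to hand a new connection to the
// central broker until 30 seconds have passed since the previous one, giving
// the broker time to tear down the old bridge first.
public class ThrottledForwarderSketch {
    private static final long MIN_GAP_MILLIS = 30000L;

    public static void main(String[] args) throws Exception {
        int listenPort = Integer.parseInt(args[0]);
        String targetHost = args[1];
        int targetPort = Integer.parseInt(args[2]);
        long lastConnect = 0L;

        ServerSocket server = new ServerSocket(listenPort);
        while (true) {
            Socket client = server.accept();
            long wait = lastConnect + MIN_GAP_MILLIS - System.currentTimeMillis();
            if (wait > 0) {
                Thread.sleep(wait);                 // enforce the 30 second gap
            }
            lastConnect = System.currentTimeMillis();
            Socket target = new Socket(targetHost, targetPort);
            pump(client.getInputStream(), target.getOutputStream());
            pump(target.getInputStream(), client.getOutputStream());
        }
    }

    // Copies bytes in one direction until EOF; one thread per direction.
    private static void pump(final InputStream in, final OutputStream out) {
        new Thread(new Runnable() {
            public void run() {
                byte[] buf = new byte[8192];
                try {
                    int n;
                    while ((n = in.read(buf)) != -1) {
                        out.write(buf, 0, n);
                        out.flush();
                    }
                } catch (IOException ignored) {
                    // peer closed the connection; let the thread end
                }
            }
        }).start();
    }
}
{code}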

Regards,
Ron

> Broker-to-Broker Reconnect fails wrongly due to duplicate name
> --
>
> Key: AMQ-4097
> URL: https://issues.apache.org/jira/browse/AMQ-4097
> Project: ActiveMQ
>  Issue Type: Bug
>Affects Versions: 5.7.0
> Environment: A central broker to which a lot (50+) of external 
> brokers connect with a duplex bridge. A special routing/firewall is used 
> which can affect timing but not order of TCP packets. This can be simulated 
> by using socat.
> Actually we are using 5.7-SNAPSHOT of 2012-08-31.
>Reporter: Ron Koerner
>
> The situation is as follows:
> - an external broker A connects
> - time passes
> - a lot of external brokers disconnect including A
> - A reconnects (as well as all the other external brokers)
> - wrong message about duplicate name is generated
> In the log it looks like this:
> {code}
> 2012-10-08 17:11:19,835 INFO  .DemandForwardingBridgeSupport - Network 
> connection between vm://c04ptec#278 and 
> tcp:///127.0.0.1:54191(cbox-56BU902442) has been established. 
> [StartLocalBridge: localBroker=vm://c04ptec#278]
> ...
> ... a lot more of the following with different ports
> 2012-10-08 17:37:01,958 WARN  .DemandForwardingBridgeSupport - Network 
> connection between vm://c04ptec#278 and tcp:///127.0.0.1:54191 shutdown due 
> to a remote error: java.io.EOFException [ActiveMQ NIO Worker 193]
> ... more of these
> 2012-10-08 17:37:03,438 INFO  emq.broker.TransportConnection - Started 
> responder end of duplex bridge cBox 56BU902442 to cBox 
> Proxy@ID:P013SPWMK1WN-39320-1349704902319-0:1 [ActiveMQ NIO Worker 215]
> ...
> 2012-10-08 17:37:03,922 WARN  emq.broker.TransportConnection - Failed to add 
> Connection ID:c04ptec-51799-1349706422094-242:2, reason: 
> javax.jms.InvalidClientIDException: Broker: c04ptec - Client: cBox 56BU902442 
> to cBox Proxy_cbox-56BU902442_inbound_c04ptec already connected from 
> vm://c04ptec#278 [StartLocalBridge: localBroker=vm://c04ptec#478]
> 2012-10-08 17:37:03,923 INFO  .DemandForwardingBridgeSupport - Network 
> connection between vm://c04ptec#478 and tcp:///127.0.0.1:56529 shutdown due 
> to a local error: javax.jms.InvalidClientIDException: Broker: c04ptec - 
> Client: cBox 56BU902442 to cBox Proxy_cbox-56BU902442_inbound_c04ptec already 
> connected from vm://c04ptec#278 [StartLocalBridge: 
> localBroker=vm://c04ptec#478]
> ...
> 2012-10-08 17:37:04,036 INFO  .DemandForwardingBridgeSupport - c04ptec bridge 
> to cbox-56BU902442 stopped [ActiveMQ Task-182]
> ...
> 2012-10-08 17:37:06,540 INFO  emq.broker.TransportConnection - Started 
> responder end of duplex bridge cBox 56BU902442 to cBox 
> Proxy@ID:P013SPWMK1WN-39320-1349704902319-0:1 [ActiveMQ NIO Worker 207]
> ...
> 2012-10-08 17:37:06,548 WARN  emq.broker.TransportConnection - Failed to add 
> Connection ID:c04ptec-51799-1349706422094-292:1, reason: 
> javax.jms.InvalidClientIDException: Broker: c04ptec - Client: cBox 56BU902442 
> to cBox Proxy_cbox-56BU902442_inbound_c04ptec already connected from 
> vm://c04ptec#278 [StartLocalBridge: localBroker=vm://c04ptec#570]
> 2012-10-08 17:37:06,548 INFO  .DemandForwardingBridgeSupport - Network 
> connection between vm://c04ptec#570 and tcp:///127.0.0.1:56576 shutdown due 
> to a local error: javax.jms.InvalidClientIDException: Broker: c04ptec - 
> Client: cBox 56BU902442 to cBox Proxy_cbox-56BU902442_inbound_c04ptec already 
> connected from vm://c04ptec#278 [StartLocalBridge: 
> localBroker=vm://c04ptec#570]
> ...
> 2012-10-08 17:37:06,559 INFO  .DemandForwardingBridgeSupport - c04ptec bridge 
> to cbox-56BU902442 stopped [ActiveMQ Task-204]
> ...
> 2012-10-08 17:37:24,417 INFO  .DemandFo

[jira] [Updated] (AMQ-4484) NetworkConnectors create a consumer for queue://

2013-04-24 Thread Ron Koerner (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ron Koerner updated AMQ-4484:
-

Attachment: StringToListOfActiveMQDestinationConverter.patch

> NetworkConnectors create a consumer for queue://
> 
>
> Key: AMQ-4484
> URL: https://issues.apache.org/jira/browse/AMQ-4484
> Project: ActiveMQ
>  Issue Type: Bug
>Affects Versions: 5.8.0
>Reporter: Ron Koerner
> Attachments: StringToListOfActiveMQDestinationConverter.patch
>
>
> At least for duplex NetworkConnectors a consumer for queue:// is added on the 
> remote broker, if there are no statically included destinations.
> This is caused by the string "[]" incorrectly converted to a list with one 
> element, instead of an empty list.
> The attached patch fixes the problem.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (AMQ-4484) NetworkConnectors create a consumer for queue://

2013-04-24 Thread Ron Koerner (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ron Koerner updated AMQ-4484:
-

Patch Info: Patch Available

> NetworkConnectors create a consumer for queue://
> 
>
> Key: AMQ-4484
> URL: https://issues.apache.org/jira/browse/AMQ-4484
> Project: ActiveMQ
>  Issue Type: Bug
>Affects Versions: 5.8.0
>Reporter: Ron Koerner
>
> At least for duplex NetworkConnectors a consumer for queue:// is added on the 
> remote broker, if there are no statically included destinations.
> This is caused by the string "[]" incorrectly converted to a list with one 
> element, instead of an empty list.
> The attached patch fixes the problem.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (AMQ-4484) NetworkConnectors create a consumer for queue://

2013-04-24 Thread Ron Koerner (JIRA)
Ron Koerner created AMQ-4484:


 Summary: NetworkConnectors create a consumer for queue://
 Key: AMQ-4484
 URL: https://issues.apache.org/jira/browse/AMQ-4484
 Project: ActiveMQ
  Issue Type: Bug
Affects Versions: 5.8.0
Reporter: Ron Koerner


At least for duplex NetworkConnectors a consumer for queue:// is added on the 
remote broker, if there are no statically included destinations.

This is caused by the string "[]" incorrectly converted to a list with one 
element, instead of an empty list.

The attached patch fixes the problem.
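
(Not the attached patch itself, which is not reproduced here, but a minimal 
sketch of the conversion rule it is meant to restore. The real converter 
produces ActiveMQDestination objects; this sketch only shows the list-splitting 
step, and the class and method names are made up.)

{code}
import java.util.ArrayList;
import java.util.List;

// Illustrative only: "[]" must become an empty list, not a one-element list
// containing a blank entry.
class BracketedListSketch {
    static List<String> convert(String value) {
        String body = value.trim();
        if (body.startsWith("[") && body.endsWith("]")) {
            body = body.substring(1, body.length() - 1).trim();
        }
        List<String> result = new ArrayList<String>();
        if (body.length() == 0) {
            return result;                          // "[]" -> no destinations at all
        }
        for (String part : body.split(",")) {
            result.add(part.trim());
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(convert("[]"));                      // []
        System.out.println(convert("[queue://A, queue://B]"));  // [queue://A, queue://B]
    }
}
{code}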

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (AMQ-4097) Broker-to-Broker Reconnect fails wrongly due to duplicate name

2012-10-08 Thread Ron Koerner (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ron Koerner updated AMQ-4097:
-

Environment: 
A central broker to which a lot (50+) of external brokers connect with a duplex 
bridge. A special routing/firewall is used which can affect timing but not 
order of TCP packets. This can be simulated by using socat.
Actually we are using 5.7-SNAPSHOT of 2012-08-31.

  was:A central broker to which a lot (50+) of external brokers connect with a 
duplex bridge. A special routing/firewall is used which can affect timing but 
not order of TCP packets. This can be simulated by using socat.


> Broker-to-Broker Reconnect fails wrongly due to duplicate name
> --
>
> Key: AMQ-4097
> URL: https://issues.apache.org/jira/browse/AMQ-4097
> Project: ActiveMQ
>  Issue Type: Bug
>Affects Versions: 5.7.0
> Environment: A central broker to which a lot (50+) of external 
> brokers connect with a duplex bridge. A special routing/firewall is used 
> which can affect timing but not order of TCP packets. This can be simulated 
> by using socat.
> Actually we are using 5.7-SNAPSHOT of 2012-08-31.
>Reporter: Ron Koerner
>
> The situation is as follows:
> - an external broker A connects
> - time passes
> - a lot of external brokers disconnect including A
> - A reconnects (as well as all the other external brokers)
> - wrong message about duplicate name is generated
> In the log it looks like this:
> {code}
> 2012-10-08 17:11:19,835 INFO  .DemandForwardingBridgeSupport - Network 
> connection between vm://c04ptec#278 and 
> tcp:///127.0.0.1:54191(cbox-56BU902442) has been established. 
> [StartLocalBridge: localBroker=vm://c04ptec#278]
> ...
> ... a lot more of the following with different ports
> 2012-10-08 17:37:01,958 WARN  .DemandForwardingBridgeSupport - Network 
> connection between vm://c04ptec#278 and tcp:///127.0.0.1:54191 shutdown due 
> to a remote error: java.io.EOFException [ActiveMQ NIO Worker 193]
> ... more of these
> 2012-10-08 17:37:03,438 INFO  emq.broker.TransportConnection - Started 
> responder end of duplex bridge cBox 56BU902442 to cBox 
> Proxy@ID:P013SPWMK1WN-39320-1349704902319-0:1 [ActiveMQ NIO Worker 215]
> ...
> 2012-10-08 17:37:03,922 WARN  emq.broker.TransportConnection - Failed to add 
> Connection ID:c04ptec-51799-1349706422094-242:2, reason: 
> javax.jms.InvalidClientIDException: Broker: c04ptec - Client: cBox 56BU902442 
> to cBox Proxy_cbox-56BU902442_inbound_c04ptec already connected from 
> vm://c04ptec#278 [StartLocalBridge: localBroker=vm://c04ptec#478]
> 2012-10-08 17:37:03,923 INFO  .DemandForwardingBridgeSupport - Network 
> connection between vm://c04ptec#478 and tcp:///127.0.0.1:56529 shutdown due 
> to a local error: javax.jms.InvalidClientIDException: Broker: c04ptec - 
> Client: cBox 56BU902442 to cBox Proxy_cbox-56BU902442_inbound_c04ptec already 
> connected from vm://c04ptec#278 [StartLocalBridge: 
> localBroker=vm://c04ptec#478]
> ...
> 2012-10-08 17:37:04,036 INFO  .DemandForwardingBridgeSupport - c04ptec bridge 
> to cbox-56BU902442 stopped [ActiveMQ Task-182]
> ...
> 2012-10-08 17:37:06,540 INFO  emq.broker.TransportConnection - Started 
> responder end of duplex bridge cBox 56BU902442 to cBox 
> Proxy@ID:P013SPWMK1WN-39320-1349704902319-0:1 [ActiveMQ NIO Worker 207]
> ...
> 2012-10-08 17:37:06,548 WARN  emq.broker.TransportConnection - Failed to add 
> Connection ID:c04ptec-51799-1349706422094-292:1, reason: 
> javax.jms.InvalidClientIDException: Broker: c04ptec - Client: cBox 56BU902442 
> to cBox Proxy_cbox-56BU902442_inbound_c04ptec already connected from 
> vm://c04ptec#278 [StartLocalBridge: localBroker=vm://c04ptec#570]
> 2012-10-08 17:37:06,548 INFO  .DemandForwardingBridgeSupport - Network 
> connection between vm://c04ptec#570 and tcp:///127.0.0.1:56576 shutdown due 
> to a local error: javax.jms.InvalidClientIDException: Broker: c04ptec - 
> Client: cBox 56BU902442 to cBox Proxy_cbox-56BU902442_inbound_c04ptec already 
> connected from vm://c04ptec#278 [StartLocalBridge: 
> localBroker=vm://c04ptec#570]
> ...
> 2012-10-08 17:37:06,559 INFO  .DemandForwardingBridgeSupport - c04ptec bridge 
> to cbox-56BU902442 stopped [ActiveMQ Task-204]
> ...
> 2012-10-08 17:37:24,417 INFO  .DemandForwardingBridgeSupport - c04ptec bridge 
> to cbox-56BU902442 stopped [ActiveMQ Task-73]
> ...
> 2012-10-08 17:37:25,103 INFO  emq.broker.TransportConnection - Started 
> responder end of duplex bridge cBox 56BU902442 to cBox 
> Proxy@ID:P013SPWMK1WN-39320-1349704902319-0:1 [ActiveMQ NIO Worker 268]
> ...
> 2012-10-08 17:37:29,110 INFO  .DemandForwardingBridgeSupport - Network 
> connection between vm://c04ptec#594 and 
> tcp:///127.0.0.1:56656(cbox-56BU902442) has been established. 
> [StartLocalBridge: localBroker=vm://c04ptec#594]
> ...
> 2012-10

[jira] [Updated] (AMQ-4097) Broker-to-Broker Reconnect fails wrongly due to duplicate name

2012-10-08 Thread Ron Koerner (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ron Koerner updated AMQ-4097:
-

Description: 
The situation is as follows:

- an external broker A connects
- time passes
- a lot of external brokers disconnect including A
- A reconnects (as well as all the other external brokers)
- wrong message about duplicate name is generated

In the log it looks like this:
{code}
2012-10-08 17:11:19,835 INFO  .DemandForwardingBridgeSupport - Network 
connection between vm://c04ptec#278 and tcp:///127.0.0.1:54191(cbox-56BU902442) 
has been established. [StartLocalBridge: localBroker=vm://c04ptec#278]

...

... a lot more of the following with different ports
2012-10-08 17:37:01,958 WARN  .DemandForwardingBridgeSupport - Network 
connection between vm://c04ptec#278 and tcp:///127.0.0.1:54191 shutdown due to 
a remote error: java.io.EOFException [ActiveMQ NIO Worker 193]
... more of these

2012-10-08 17:37:03,438 INFO  emq.broker.TransportConnection - Started 
responder end of duplex bridge cBox 56BU902442 to cBox 
Proxy@ID:P013SPWMK1WN-39320-1349704902319-0:1 [ActiveMQ NIO Worker 215]

...

2012-10-08 17:37:03,922 WARN  emq.broker.TransportConnection - Failed to add 
Connection ID:c04ptec-51799-1349706422094-242:2, reason: 
javax.jms.InvalidClientIDException: Broker: c04ptec - Client: cBox 56BU902442 
to cBox Proxy_cbox-56BU902442_inbound_c04ptec already connected from 
vm://c04ptec#278 [StartLocalBridge: localBroker=vm://c04ptec#478]
2012-10-08 17:37:03,923 INFO  .DemandForwardingBridgeSupport - Network 
connection between vm://c04ptec#478 and tcp:///127.0.0.1:56529 shutdown due to 
a local error: javax.jms.InvalidClientIDException: Broker: c04ptec - Client: 
cBox 56BU902442 to cBox Proxy_cbox-56BU902442_inbound_c04ptec already connected 
from vm://c04ptec#278 [StartLocalBridge: localBroker=vm://c04ptec#478]

...

2012-10-08 17:37:04,036 INFO  .DemandForwardingBridgeSupport - c04ptec bridge 
to cbox-56BU902442 stopped [ActiveMQ Task-182]

...

2012-10-08 17:37:06,540 INFO  emq.broker.TransportConnection - Started 
responder end of duplex bridge cBox 56BU902442 to cBox 
Proxy@ID:P013SPWMK1WN-39320-1349704902319-0:1 [ActiveMQ NIO Worker 207]

...

2012-10-08 17:37:06,548 WARN  emq.broker.TransportConnection - Failed to add 
Connection ID:c04ptec-51799-1349706422094-292:1, reason: 
javax.jms.InvalidClientIDException: Broker: c04ptec - Client: cBox 56BU902442 
to cBox Proxy_cbox-56BU902442_inbound_c04ptec already connected from 
vm://c04ptec#278 [StartLocalBridge: localBroker=vm://c04ptec#570]
2012-10-08 17:37:06,548 INFO  .DemandForwardingBridgeSupport - Network 
connection between vm://c04ptec#570 and tcp:///127.0.0.1:56576 shutdown due to 
a local error: javax.jms.InvalidClientIDException: Broker: c04ptec - Client: 
cBox 56BU902442 to cBox Proxy_cbox-56BU902442_inbound_c04ptec already connected 
from vm://c04ptec#278 [StartLocalBridge: localBroker=vm://c04ptec#570]
...
2012-10-08 17:37:06,559 INFO  .DemandForwardingBridgeSupport - c04ptec bridge 
to cbox-56BU902442 stopped [ActiveMQ Task-204]

...

2012-10-08 17:37:24,417 INFO  .DemandForwardingBridgeSupport - c04ptec bridge 
to cbox-56BU902442 stopped [ActiveMQ Task-73]

...

2012-10-08 17:37:25,103 INFO  emq.broker.TransportConnection - Started 
responder end of duplex bridge cBox 56BU902442 to cBox 
Proxy@ID:P013SPWMK1WN-39320-1349704902319-0:1 [ActiveMQ NIO Worker 268]

...

2012-10-08 17:37:29,110 INFO  .DemandForwardingBridgeSupport - Network 
connection between vm://c04ptec#594 and tcp:///127.0.0.1:56656(cbox-56BU902442) 
has been established. [StartLocalBridge: localBroker=vm://c04ptec#594]

...

2012-10-08 17:37:59,669 WARN  .DemandForwardingBridgeSupport - Network 
connection between vm://c04ptec#594 and tcp:///127.0.0.1:56656(cbox-56BU902442) 
was interrupted during establishment. [StartLocalBridge: 
localBroker=vm://c04ptec#594]

...

2012-10-08 17:38:09,005 INFO  .DemandForwardingBridgeSupport - c04ptec bridge 
to cbox-56BU902442 stopped [ActiveMQ Task-228]

...

2012-10-08 17:38:18,681 INFO  emq.broker.TransportConnection - Started 
responder end of duplex bridge cBox 56BU902442 to cBox 
Proxy@ID:P013SPWMK1WN-39320-1349704902319-0:1 [ActiveMQ NIO Worker 292]
2012-10-08 17:38:18,681 WARN  emq.broker.TransportConnection - Failed to add 
Connection ID:P013SPWMK1WN-39320-1349704902319-152:1, reason: 
javax.jms.InvalidClientIDException: Broker: c04ptec - Client: cBox 56BU902442 
to cBox Proxy_cbox-56BU902442_outbound already connected from vm://c04ptec#594 
[ActiveMQ NIO Worker 292]
2012-10-08 17:38:18,682 WARN  er.TransportConnection.Service - Async error 
occurred: javax.jms.InvalidClientIDException: Broker: c04ptec - Client: cBox 
56BU902442 to cBox Proxy_cbox-56BU902442_outbound already connected from 
vm://c04ptec#594 [ActiveMQ NIO Worker 292]
javax.jms.InvalidClientIDException: Broker: c04ptec - Client: cBox 56BU902442 
to cBox Proxy_cbox-56BU90244

[jira] [Updated] (AMQ-4097) Broker-to-Broker Reconnect fails wrongly due to duplicate name

2012-10-08 Thread Ron Koerner (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ron Koerner updated AMQ-4097:
-

Description: 
The situation is as follows:

- an external broker A connects
- time passes
- a lot of external brokers disconnect including A
- A reconnects (as well as all the other external brokers)
- wrong message about duplicate name is generated

In the log it looks like this:
{code}
2012-10-08 17:11:19,835 INFO  .DemandForwardingBridgeSupport - Network 
connection between vm://c04ptec#278 and tcp:///127.0.0.1:54191(cbox-56BU902442) 
has been established. [StartLocalBridge: localBroker=vm://c04ptec#278]

...

... a lot more of the following with different ports
2012-10-08 17:37:01,958 WARN  .DemandForwardingBridgeSupport - Network 
connection between vm://c04ptec#278 and tcp:///127.0.0.1:54191 shutdown due to 
a remote error: java.io.EOFException [ActiveMQ NIO Worker 193]
... more of these

2012-10-08 17:37:03,438 INFO  emq.broker.TransportConnection - Started 
responder end of duplex bridge cBox 56BU902442 to cBox 
Proxy@ID:P013SPWMK1WN-39320-1349704902319-0:1 [ActiveMQ NIO Worker 215]

...

2012-10-08 17:37:03,922 WARN  emq.broker.TransportConnection - Failed to add 
Connection ID:c04ptec-51799-1349706422094-242:2, reason: 
javax.jms.InvalidClientIDException: Broker: c04ptec - Client: cBox 56BU902442 
to cBox Proxy_cbox-56BU902442_inbound_c04ptec already connected from 
vm://c04ptec#278 [StartLocalBridge: localBroker=vm://c04ptec#478]
2012-10-08 17:37:03,923 INFO  .DemandForwardingBridgeSupport - Network 
connection between vm://c04ptec#478 and tcp:///127.0.0.1:56529 shutdown due to 
a local error: javax.jms.InvalidClientIDException: Broker: c04ptec - Client: 
cBox 56BU902442 to cBox Proxy_cbox-56BU902442_inbound_c04ptec already connected 
from vm://c04ptec#278 [StartLocalBridge: localBroker=vm://c04ptec#478]

...

2012-10-08 17:37:04,036 INFO  .DemandForwardingBridgeSupport - c04ptec bridge 
to cbox-56BU902442 stopped [ActiveMQ Task-182]

...

2012-10-08 17:37:06,540 INFO  emq.broker.TransportConnection - Started 
responder end of duplex bridge cBox 56BU902442 to cBox 
Proxy@ID:P013SPWMK1WN-39320-1349704902319-0:1 [ActiveMQ NIO Worker 207]

...

2012-10-08 17:37:06,548 WARN  emq.broker.TransportConnection - Failed to add 
Connection ID:c04ptec-51799-1349706422094-292:1, reason: 
javax.jms.InvalidClientIDException: Broker: c04ptec - Client: cBox 56BU902442 
to cBox Proxy_cbox-56BU902442_inbound_c04ptec already connected from 
vm://c04ptec#278 [StartLocalBridge: localBroker=vm://c04ptec#570]
2012-10-08 17:37:06,548 INFO  .DemandForwardingBridgeSupport - Network 
connection between vm://c04ptec#570 and tcp:///127.0.0.1:56576 shutdown due to 
a local error: javax.jms.InvalidClientIDException: Broker: c04ptec - Client: 
cBox 56BU902442 to cBox Proxy_cbox-56BU902442_inbound_c04ptec already connected 
from vm://c04ptec#278 [StartLocalBridge: localBroker=vm://c04ptec#570]
...
2012-10-08 17:37:06,559 INFO  .DemandForwardingBridgeSupport - c04ptec bridge 
to cbox-56BU902442 stopped [ActiveMQ Task-204]

...

2012-10-08 17:37:24,417 INFO  .DemandForwardingBridgeSupport - c04ptec bridge 
to cbox-56BU902442 stopped [ActiveMQ Task-73]

...

2012-10-08 17:37:25,103 INFO  emq.broker.TransportConnection - Started 
responder end of duplex bridge cBox 56BU902442 to cBox 
Proxy@ID:P013SPWMK1WN-39320-1349704902319-0:1 [ActiveMQ NIO Worker 268]

...

2012-10-08 17:37:29,110 INFO  .DemandForwardingBridgeSupport - Network 
connection between vm://c04ptec#594 and tcp:///127.0.0.1:56656(cbox-56BU902442) 
has been established. [StartLocalBridge: localBroker=vm://c04ptec#594]

...

2012-10-08 17:37:59,669 WARN  .DemandForwardingBridgeSupport - Network 
connection between vm://c04ptec#594 and tcp:///127.0.0.1:56656(cbox-56BU902442) 
was interrupted during establishment. [StartLocalBridge: 
localBroker=vm://c04ptec#594]

...

2012-10-08 17:38:09,005 INFO  .DemandForwardingBridgeSupport - c04ptec bridge 
to cbox-56BU902442 stopped [ActiveMQ Task-228]

...

2012-10-08 17:38:18,681 INFO  emq.broker.TransportConnection - Started 
responder end of duplex bridge cBox 56BU902442 to cBox 
Proxy@ID:P013SPWMK1WN-39320-1349704902319-0:1 [ActiveMQ NIO Worker 292]
2012-10-08 17:38:18,681 WARN  emq.broker.TransportConnection - Failed to add 
Connection ID:P013SPWMK1WN-39320-1349704902319-152:1, reason: 
javax.jms.InvalidClientIDException: Broker: c04ptec - Client: cBox 56BU902442 
to cBox Proxy_cbox-56BU902442_outbound already connected from vm://c04ptec#594 
[ActiveMQ NIO Worker 292]
2012-10-08 17:38:18,682 WARN  er.TransportConnection.Service - Async error 
occurred: javax.jms.InvalidClientIDException: Broker: c04ptec - Client: cBox 
56BU902442 to cBox Proxy_cbox-56BU902442_outbound already connected from 
vm://c04ptec#594 [ActiveMQ NIO Worker 292]
javax.jms.InvalidClientIDException: Broker: c04ptec - Client: cBox 56BU902442 
to cBox Proxy_cbox-56BU90244

[jira] [Updated] (AMQ-4097) Broker-to-Broker Reconnect fails wrongly due to duplicate name

2012-10-08 Thread Ron Koerner (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ron Koerner updated AMQ-4097:
-

Summary: Broker-to-Broker Reconnect fails wrongly due to duplicate name  
(was: Broker-to-Broker Reconnect fails due to duplicate name)

> Broker-to-Broker Reconnect fails wrongly due to duplicate name
> --
>
> Key: AMQ-4097
> URL: https://issues.apache.org/jira/browse/AMQ-4097
> Project: ActiveMQ
>  Issue Type: Bug
>Affects Versions: 5.7.0
> Environment: A central broker to which a lot (50+) of external 
> brokers connect with a duplex bridge. A special routing/firewall is used 
> which can affect timing but not order of TCP packets. This can be simulated 
> by using socat.
>Reporter: Ron Koerner
>
> The situation is as follows:
> - an external broker A connects
> - time passes
> - a lot of external brokers disconnect including A
> - A reconnects
> - wrong message about duplicate name is generated
> A logfile excerpt is attached.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (AMQ-4097) Broker-to-Broker Reconnect fails due to duplicate name

2012-10-08 Thread Ron Koerner (JIRA)
Ron Koerner created AMQ-4097:


 Summary: Broker-to-Broker Reconnect fails due to duplicate name
 Key: AMQ-4097
 URL: https://issues.apache.org/jira/browse/AMQ-4097
 Project: ActiveMQ
  Issue Type: Bug
Affects Versions: 5.7.0
 Environment: A central broker to which a lot (50+) of external brokers 
connect with a duplex bridge. A special routing/firewall is used which can 
affect timing but not order of TCP packets. This can be simulated by using 
socat.
Reporter: Ron Koerner


The situation is as follows:

- an external broker A connects
- time passes
- a lot of external brokers disconnect including A
- A reconnects
- wrong message about duplicate name is generated

A logfile excerpt is attached.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (AMQ-3993) NetworkBridge sometimes stops trying to reconnect after connection is lost

2012-08-27 Thread Ron Koerner (JIRA)
Ron Koerner created AMQ-3993:


 Summary: NetworkBridge sometimes stops trying to reconnect after 
connection is lost
 Key: AMQ-3993
 URL: https://issues.apache.org/jira/browse/AMQ-3993
 Project: ActiveMQ
  Issue Type: Bug
Affects Versions: 5.6.0
 Environment: using static:// networkConnector (i.e. 
SimpleDiscoveryAgent)
Reporter: Ron Koerner
 Attachments: reconnect-problem-annotated.txt

After losing the connection due to a shutdown of the peer, the broker tries to 
rebuild the connection once, fails again and then stops trying.

While this also happens with a standard setup, it seems to happen much more 
often with a certain type of firewall which always accepts a connection, but 
closes it if the real destination cannot be reached.

This can be simulated by using a "socat" forwarder between the two brokers.

The problem seems to lie in the following sequence of events, a race condition 
involving the use of {{event.failed}} in {{SimpleDiscoveryAgent.serviceFailed}} 
and {{bridges}} in {{DiscoveryNetworkConnector}}:

# connection "failure" due to ShutdownInfo
#- event.failed=true
#- bridge is unregistered
# start establishing a new connection
#- event.failed=false
#- bridge is not yet registered
# second connection failure of the old connection due to EOF
#- not blocked, since event.failed==false
#- event.failed=true
#- bridge would be unregistered, but currently there is none
#- wait one second (continued below)
# new connection is started
#- bridge is registered
# receive multiple connection failures of the new connection
#- all blocked, since event.failed=true
# continue after one second, try to establish a new connection
#- blocked, since bridge is already registered

To fix this problem a NetworkBridge should probably not be allowed to call 
{{SimpleDiscoveryAgent.serviceFailed}} more than once, since {{event.failed}} 
cannot keep track of multiple connections at a time.
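
(A minimal sketch of what such a once-only guard could look like, assuming the 
bridge keeps one reporter per connection; the class and method names are made 
up and this is not the actual ActiveMQ code.)

{code}
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch: each bridge/connection reports its failure to the
// discovery agent at most once, so a late EOF from an already-abandoned
// connection cannot reset event.failed for the connection that replaced it.
class OnceOnlyFailureReporter {
    private final AtomicBoolean reported = new AtomicBoolean(false);
    private final Runnable serviceFailedCallback; // e.g. wraps agent.serviceFailed(event)

    OnceOnlyFailureReporter(Runnable serviceFailedCallback) {
        this.serviceFailedCallback = serviceFailedCallback;
    }

    /** Propagates the failure only on the first call; later calls are ignored. */
    boolean reportFailure() {
        if (reported.compareAndSet(false, true)) {
            serviceFailedCallback.run();
            return true;
        }
        return false;
    }
}
{code}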

The chain of events involves a lot of race conditions. If the second failure of 
the old connection occurs before the new connection is started (which seems to 
be the case most of the time), or if the new connection's bridge is registered 
before the EOF occurs, the problem does not manifest.

Attached is a log excerpt with my comments about the state of event.failed and 
bridges.


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (AMQ-3993) NetworkBridge sometimes stops trying to reconnect after connection is lost

2012-08-27 Thread Ron Koerner (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-3993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ron Koerner updated AMQ-3993:
-

Attachment: reconnect-problem-annotated.txt

> NetworkBridge sometimes stops trying to reconnect after connection is lost
> --
>
> Key: AMQ-3993
> URL: https://issues.apache.org/jira/browse/AMQ-3993
> Project: ActiveMQ
>  Issue Type: Bug
>Affects Versions: 5.6.0
> Environment: using static:// networkConnector (i.e. 
> SimpleDiscoveryAgent)
>Reporter: Ron Koerner
> Attachments: reconnect-problem-annotated.txt
>
>
> After losing the connection due to a shutdown of the peer, the broker tries 
> to rebuild the connection once, fails again and then stops trying.
> While this also happens with a standard setup, it seems to happen much more 
> often with a certain type of firewall which always accepts a connection, but 
> closes it if the real destination cannot be reached.
> This can be simulated by using a "socat" forwarder between the two brokers.
> The problem seems to lie in the following sequence of events, a race 
> condition involving the use of {{event.failed}} in 
> {{SimpleDiscoveryAgent.serviceFailed}} and {{bridges}} in 
> {{DiscoveryNetworkConnector}}:
> # connection "failure" due to ShutdownInfo
> #- event.failed=true
> #- bridge is unregistered
> # start establishing a new connection
> #- event.failed=false
> #- bridge is not yet registered
> # second connection failure of the old connection due to EOF
> #- not blocked, since event.failed==false
> #- event.failed=true
> #- bridge would be unregistered, but currently there is none
> #- wait one second (continued below)
> # new connection is started
> #- bridge is registered
> # receive multiple connection failures of the new connection
> #- all blocked, since event.failed=true
> # continue after one second, try to establish a new connection
> #- blocked, since bridge is already registered
> To fix this problem a NetworkBridge should probably not be allowed to call 
> {{SimpleDiscoveryAgent.serviceFailed}} more than once, since {{event.failed}} 
> cannot keep track of multiple connections at a time.
> The chain of events involves a lot of race conditions. If the second failure 
> of the old connection occurs before the new connection is started (which 
> seems to be the case most of the time), or if the new connection's bridge is 
> registered before the EOF occurs, the problem does not manifest.
> Attached is a log excerpt with my comments about the state of event.failed 
> and bridges.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (AMQ-3887) Occasional Null Pointer Exception during NetworkConnector connection

2012-08-21 Thread Ron Koerner (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-3887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13438598#comment-13438598
 ] 

Ron Koerner commented on AMQ-3887:
--

That seems to help.
Anyway, I think there is another lurking bug. When safeWaitUntilStarted() 
returns because disposed.get() is true, it looks to the caller exactly the same 
as when it returns because the latch was released.
Wherever safeWaitUntilStarted() is used there should be a check afterwards 
whether disposed.get() is true, and the caller should act accordingly.
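
(A minimal, self-contained sketch of the pattern I mean, with made-up names; 
the real bridge is far more involved, but the shape of the check is the same: 
the latch is released on dispose as well as on start, so the caller must 
re-check disposed.)

{code}
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch only: the latch is released both when the bridge starts and when it
// is disposed, so every caller re-checks 'disposed' after the wait.
class BridgeSketch {
    private final CountDownLatch startedLatch = new CountDownLatch(1);
    private final AtomicBoolean disposed = new AtomicBoolean(false);

    void start()   { startedLatch.countDown(); }
    void dispose() { disposed.set(true); startedLatch.countDown(); }

    private void safeWaitUntilStarted() throws InterruptedException {
        if (!disposed.get()) {
            startedLatch.await();
        }
    }

    void serviceCommand(Runnable command) throws InterruptedException {
        safeWaitUntilStarted();
        if (disposed.get()) {
            return;              // released by dispose(), not by a real start
        }
        command.run();           // normal command handling
    }
}
{code}
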
Otherwise, thanks for your help.

> Occasional Null Pointer Exception during NetworkConnector connection
> 
>
> Key: AMQ-3887
> URL: https://issues.apache.org/jira/browse/AMQ-3887
> Project: ActiveMQ
>  Issue Type: Bug
>Affects Versions: 5.6.0
> Environment: SLES 10
>Reporter: Ron Koerner
>Assignee: Gary Tully
> Fix For: 5.7.0
>
>
> While starting a duplex NetworkConnector an NPE can be observed on the 
> receiving side.
> {code}
> 2012-06-18 17:34:24,571 INFO  .DemandForwardingBridgeSupport - Network 
> connection between vm://proxy-cbpi001#8 and tcp:///169.254.
> 0.5:59412(cbox-56BU101117) has been established. [StartLocalBridge: 
> localBroker=vm://proxy-cbpi001#8]
> 2012-06-18 17:34:24,577 WARN  .DemandForwardingBridgeSupport - Caught an 
> exception processing local command [BrokerService[proxy-c
> bpi001] Task-19]
> java.lang.NullPointerException: null
> at 
> org.apache.activemq.network.DemandForwardingBridgeSupport.configureMessage(DemandForwardingBridgeSupport.java:644)
>  ~[ac
> tivemq-core-5.6.0.jar:5.6.0]
> at 
> org.apache.activemq.network.DemandForwardingBridgeSupport.serviceLocalCommand(DemandForwardingBridgeSupport.java:675)
>  ~
> [activemq-core-5.6.0.jar:5.6.0]
> at 
> org.apache.activemq.network.DemandForwardingBridgeSupport$1.onCommand(DemandForwardingBridgeSupport.java:139)
>  [activemq
> -core-5.6.0.jar:5.6.0]
> at 
> org.apache.activemq.transport.ResponseCorrelator.onCommand(ResponseCorrelator.java:116)
>  [activemq-core-5.6.0.jar:5.6.0]
> at 
> org.apache.activemq.transport.MutexTransport.onCommand(MutexTransport.java:50)
>  [activemq-core-5.6.0.jar:5.6.0]
> at 
> org.apache.activemq.transport.vm.VMTransport.doDispatch(VMTransport.java:135) 
> [activemq-core-5.6.0.jar:5.6.0]
> at 
> org.apache.activemq.transport.vm.VMTransport.dispatch(VMTransport.java:124) 
> [activemq-core-5.6.0.jar:5.6.0]
> at 
> org.apache.activemq.transport.vm.VMTransport.oneway(VMTransport.java:103) 
> [activemq-core-5.6.0.jar:5.6.0]
> at 
> org.apache.activemq.transport.MutexTransport.oneway(MutexTransport.java:68) 
> [activemq-core-5.6.0.jar:5.6.0]
> at 
> org.apache.activemq.transport.ResponseCorrelator.oneway(ResponseCorrelator.java:60)
>  [activemq-core-5.6.0.jar:5.6.0]
> at 
> org.apache.activemq.broker.TransportConnection.dispatch(TransportConnection.java:1307)
>  [activemq-core-5.6.0.jar:5.6.0]
> at 
> org.apache.activemq.broker.TransportConnection.processDispatch(TransportConnection.java:837)
>  [activemq-core-5.6.0.jar:5
> .6.0]
> at 
> org.apache.activemq.broker.TransportConnection.iterate(TransportConnection.java:872)
>  [activemq-core-5.6.0.jar:5.6.0]
> at 
> org.apache.activemq.thread.PooledTaskRunner.runTask(PooledTaskRunner.java:122)
>  [activemq-core-5.6.0.jar:5.6.0]
> at 
> org.apache.activemq.thread.PooledTaskRunner$1.run(PooledTaskRunner.java:43) 
> [activemq-core-5.6.0.jar:5.6.0]
> at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown 
> Source) [na:1.6.0_20]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) 
> [na:1.6.0_20]
> at java.lang.Thread.run(Unknown Source) [na:1.6.0_20]
> {code}
> The other broker will eventually connect, but with about a hundred 
> connecting brokers this occurs too often to ignore.
> As this seems to be a race condition it is quite difficult to reproduce 
> reliably. I assume producerInfo is accessed in configureMessage before it is 
> initialized in startRemoteBridge.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (AMQ-3887) Occasional Null Pointer Exception during NetworkConnector connection

2012-07-19 Thread Ron Koerner (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-3887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13418183#comment-13418183
 ] 

Ron Koerner commented on AMQ-3887:
--

My broker configuration contains some proprietary code which allows local 
connections for everyone while requiring remote connections to be 
authenticated. I'll probably have to ask our legal department whether I'm 
allowed to open-source it.
I built a SNAPSHOT with {{waitStarted()}} and will monitor the behaviour.

> Occasional Null Pointer Exception during NetworkConnector connection
> 
>
> Key: AMQ-3887
> URL: https://issues.apache.org/jira/browse/AMQ-3887
> Project: ActiveMQ
>  Issue Type: Bug
>Affects Versions: 5.6.0
> Environment: SLES 10
>Reporter: Ron Koerner
>Assignee: Timothy Bish
> Fix For: 5.7.0
>
>
> While starting a duplex NetworkConnector an NPE can be observed on the 
> receiving side.
> {code}
> 2012-06-18 17:34:24,571 INFO  .DemandForwardingBridgeSupport - Network 
> connection between vm://proxy-cbpi001#8 and tcp:///169.254.
> 0.5:59412(cbox-56BU101117) has been established. [StartLocalBridge: 
> localBroker=vm://proxy-cbpi001#8]
> 2012-06-18 17:34:24,577 WARN  .DemandForwardingBridgeSupport - Caught an 
> exception processing local command [BrokerService[proxy-c
> bpi001] Task-19]
> java.lang.NullPointerException: null
> at 
> org.apache.activemq.network.DemandForwardingBridgeSupport.configureMessage(DemandForwardingBridgeSupport.java:644)
>  ~[ac
> tivemq-core-5.6.0.jar:5.6.0]
> at 
> org.apache.activemq.network.DemandForwardingBridgeSupport.serviceLocalCommand(DemandForwardingBridgeSupport.java:675)
>  ~
> [activemq-core-5.6.0.jar:5.6.0]
> at 
> org.apache.activemq.network.DemandForwardingBridgeSupport$1.onCommand(DemandForwardingBridgeSupport.java:139)
>  [activemq
> -core-5.6.0.jar:5.6.0]
> at 
> org.apache.activemq.transport.ResponseCorrelator.onCommand(ResponseCorrelator.java:116)
>  [activemq-core-5.6.0.jar:5.6.0]
> at 
> org.apache.activemq.transport.MutexTransport.onCommand(MutexTransport.java:50)
>  [activemq-core-5.6.0.jar:5.6.0]
> at 
> org.apache.activemq.transport.vm.VMTransport.doDispatch(VMTransport.java:135) 
> [activemq-core-5.6.0.jar:5.6.0]
> at 
> org.apache.activemq.transport.vm.VMTransport.dispatch(VMTransport.java:124) 
> [activemq-core-5.6.0.jar:5.6.0]
> at 
> org.apache.activemq.transport.vm.VMTransport.oneway(VMTransport.java:103) 
> [activemq-core-5.6.0.jar:5.6.0]
> at 
> org.apache.activemq.transport.MutexTransport.oneway(MutexTransport.java:68) 
> [activemq-core-5.6.0.jar:5.6.0]
> at 
> org.apache.activemq.transport.ResponseCorrelator.oneway(ResponseCorrelator.java:60)
>  [activemq-core-5.6.0.jar:5.6.0]
> at 
> org.apache.activemq.broker.TransportConnection.dispatch(TransportConnection.java:1307)
>  [activemq-core-5.6.0.jar:5.6.0]
> at 
> org.apache.activemq.broker.TransportConnection.processDispatch(TransportConnection.java:837)
>  [activemq-core-5.6.0.jar:5
> .6.0]
> at 
> org.apache.activemq.broker.TransportConnection.iterate(TransportConnection.java:872)
>  [activemq-core-5.6.0.jar:5.6.0]
> at 
> org.apache.activemq.thread.PooledTaskRunner.runTask(PooledTaskRunner.java:122)
>  [activemq-core-5.6.0.jar:5.6.0]
> at 
> org.apache.activemq.thread.PooledTaskRunner$1.run(PooledTaskRunner.java:43) 
> [activemq-core-5.6.0.jar:5.6.0]
> at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown 
> Source) [na:1.6.0_20]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) 
> [na:1.6.0_20]
> at java.lang.Thread.run(Unknown Source) [na:1.6.0_20]
> {code}
> The other broker will eventually connect, but with about a hundred 
> connecting brokers this occurs too often to ignore.
> As this seems to be a race condition it is quite difficult to reproduce 
> reliably. I assume producerInfo is accessed in configureMessage before it is 
> initialized in startRemoteBridge.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Comment Edited] (AMQ-3887) Occasional Null Pointer Exception during NetworkConnector connection

2012-07-18 Thread Ron Koerner (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-3887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13417003#comment-13417003
 ] 

Ron Koerner edited comment on AMQ-3887 at 7/18/12 11:37 AM:


I tried, but it seems the problem only manifests in a live system.
Anyway, I think it is a race condition introduced by the following facts:

* {{DemandForwardingBridgeSupport.start}}
** synchronously connects the {{local/remoteBroker}} Transports to 
{{serviceLocal/RemoteCommand}}
** asynchronously runs {{startRemoteBridge}}
* someone sends a messageDispatch command to {{localBroker}} (whenever, maybe 
even before DFBS.start)
* [BrokerService[smcufs02] Task-147] delivers the message after 
{{DemandForwardingBridgeSupport.start}} 
** to {{DemandForwardingBridgeSupport.serviceLocalCommand}}
** which calls {{configureMessage}} which uses {{producerInfo}} which is not 
yet set
** and {{startLocalBridge}} may not even be started yet
* {{startRemoteBridge}}
** eventually creates {{producerInfo}}

I don't think {{serviceLocalCommand}} should handle message dispatches before 
the bridge is completely started. Therefore a {{waitStarted()}} after {{if 
(command.isMessageDispatch())}} in {{serviceLocalCommand}} should solve the 
problem.

Note that {{serviceRemoteCommand}} already includes a {{waitStarted()}} for 
message dispatches.

Maybe both methods should also check {{disposed}} after waiting. They do check 
at the beginning, but if {{waitStarted}} really blocks, that may change in the 
meantime.


  was (Author: ron.koerner):
I tried, but it seems the problem only manifests in a live system.
Anyway, I think it is a race condition introduced by the following facts

* {{DemandForwardingBridgeSupport.start}}
** synchronously connects the {{local/remoteBroker}} Transports to 
{{serviceLocal/RemoteCommand}}
** asynchronously runs {{startRemoteBride}}
* someone sends a messageDispatch command to {{localBroker}} (whenever, maybe 
even before DFBS.start)
* {{\[BrokerService[smcufs02] Task-147\] }} delivers the message after 
{{DemandForwardingBridgeSupport.start}} 
** to {{DemandForwardingBridgeSupport.serviceLocalCommand}}
** which calls {{configureMessage}} which uses {{producerInfo}} which is not 
yet set
** and {{startLocalBridge}} may not even be started yet
* {{startRemoteBridge}}
** eventually creates {{producerInfo}}

I don't think {{serviceLocalCommand}} should handle message dispatches before 
the bridge is completely started. Therefore a {{waitStarted()}} after {{if 
(command.isMessageDispatch())}} in {{serviceLocalCommand}} should solve the 
problem.

It is to note that {{serviceRemoteCommand}} already includes a 
{{waitStarted()}} for message dispatches.

Maybe both methods should make a checked for {{disposed}} after waiting. They 
do check at the beginning, but if {{waitStarted}} really blocks, that may 
change in the meantime.

  
> Occasional Null Pointer Exception during NetworkConnector connection
> 
>
> Key: AMQ-3887
> URL: https://issues.apache.org/jira/browse/AMQ-3887
> Project: ActiveMQ
>  Issue Type: Bug
>Affects Versions: 5.6.0
> Environment: SLES 10
>Reporter: Ron Koerner
>Assignee: Timothy Bish
> Fix For: 5.7.0
>
>
> While starting a duplex NetworkConnector an NPE can be observed on the 
> receiving side.
> {code}
> 2012-06-18 17:34:24,571 INFO  .DemandForwardingBridgeSupport - Network 
> connection between vm://proxy-cbpi001#8 and tcp:///169.254.
> 0.5:59412(cbox-56BU101117) has been established. [StartLocalBridge: 
> localBroker=vm://proxy-cbpi001#8]
> 2012-06-18 17:34:24,577 WARN  .DemandForwardingBridgeSupport - Caught an 
> exception processing local command [BrokerService[proxy-c
> bpi001] Task-19]
> java.lang.NullPointerException: null
> at 
> org.apache.activemq.network.DemandForwardingBridgeSupport.configureMessage(DemandForwardingBridgeSupport.java:644)
>  ~[ac
> tivemq-core-5.6.0.jar:5.6.0]
> at 
> org.apache.activemq.network.DemandForwardingBridgeSupport.serviceLocalCommand(DemandForwardingBridgeSupport.java:675)
>  ~
> [activemq-core-5.6.0.jar:5.6.0]
> at 
> org.apache.activemq.network.DemandForwardingBridgeSupport$1.onCommand(DemandForwardingBridgeSupport.java:139)
>  [activemq
> -core-5.6.0.jar:5.6.0]
> at 
> org.apache.activemq.transport.ResponseCorrelator.onCommand(ResponseCorrelator.java:116)
>  [activemq-core-5.6.0.jar:5.6.0]
> at 
> org.apache.activemq.transport.MutexTransport.onCommand(MutexTransport.java:50)
>  [activemq-core-5.6.0.jar:5.6.0]
> at 
> org.apache.activemq.transport.vm.VMTransport.doDispatch(VMTransport.java:135) 
> [activemq-core-5.6.0.jar:5.6.0]
> at 
> org.apache.activemq.transport.vm.VMTra

[jira] [Commented] (AMQ-3887) Occasional Null Pointer Exception during NetworkConnector connection

2012-07-18 Thread Ron Koerner (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-3887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13417003#comment-13417003
 ] 

Ron Koerner commented on AMQ-3887:
--

I tried, but it seems the problem only manifests in a live system.
Anyway, I think it is a race condition introduced by the following facts:

* {{DemandForwardingBridgeSupport.start}}
** synchronously connects the {{local/remoteBroker}} Transports to 
{{serviceLocal/RemoteCommand}}
** asynchronously runs {{startRemoteBridge}}
* someone sends a messageDispatch command to {{localBroker}} (whenever, maybe 
even before DFBS.start)
* {{\[BrokerService[smcufs02] Task-147\] }} delivers the message after 
{{DemandForwardingBridgeSupport.start}} 
** to {{DemandForwardingBridgeSupport.serviceLocalCommand}}
** which calls {{configureMessage}} which uses {{producerInfo}} which is not 
yet set
** and {{startLocalBridge}} may not even be started yet
* {{startRemoteBridge}}
** eventually creates {{producerInfo}}

I don't think {{serviceLocalCommand}} should handle message dispatches before 
the bridge is completely started. Therefore a {{waitStarted()}} after {{if 
(command.isMessageDispatch())}} in {{serviceLocalCommand}} should solve the 
problem.

Note that {{serviceRemoteCommand}} already includes a {{waitStarted()}} for 
message dispatches.

Maybe both methods should also check {{disposed}} after waiting. They do check 
at the beginning, but if {{waitStarted}} really blocks, that may change in the 
meantime.
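
(A heavily simplified, hypothetical model of that suggestion, with made-up 
names; it only shows the ordering: local dispatches wait for start-up, as the 
remote side already does, and then bail out if the bridge was disposed while 
they waited, so producerInfo is never used before it is set.)

{code}
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch, not the real DemandForwardingBridgeSupport: a local
// message dispatch blocks until start-up completes, then re-checks disposal.
class LocalDispatchGuardSketch {
    private final CountDownLatch started = new CountDownLatch(1);
    private final AtomicBoolean disposed = new AtomicBoolean(false);
    private volatile Object producerInfo;             // only set once start-up completes

    void finishStart() { producerInfo = new Object(); started.countDown(); }
    void dispose()     { disposed.set(true); started.countDown(); }

    void serviceLocalDispatch(String message) throws InterruptedException {
        started.await();                              // the proposed waitStarted() guard
        if (disposed.get()) {
            return;                                   // released by dispose(), not a real start
        }
        // configureMessage() can now safely use producerInfo
        System.out.println("configure " + message + " with " + producerInfo);
    }
}
{code}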


> Occasional Null Pointer Exception during NetworkConnector connection
> 
>
> Key: AMQ-3887
> URL: https://issues.apache.org/jira/browse/AMQ-3887
> Project: ActiveMQ
>  Issue Type: Bug
>Affects Versions: 5.6.0
> Environment: SLES 10
>Reporter: Ron Koerner
>Assignee: Timothy Bish
> Fix For: 5.7.0
>
>
> While starting a duplex NetworkConnector an NPE can be observed on the 
> receiving side.
> {code}
> 2012-06-18 17:34:24,571 INFO  .DemandForwardingBridgeSupport - Network 
> connection between vm://proxy-cbpi001#8 and tcp:///169.254.
> 0.5:59412(cbox-56BU101117) has been established. [StartLocalBridge: 
> localBroker=vm://proxy-cbpi001#8]
> 2012-06-18 17:34:24,577 WARN  .DemandForwardingBridgeSupport - Caught an 
> exception processing local command [BrokerService[proxy-c
> bpi001] Task-19]
> java.lang.NullPointerException: null
> at 
> org.apache.activemq.network.DemandForwardingBridgeSupport.configureMessage(DemandForwardingBridgeSupport.java:644)
>  ~[ac
> tivemq-core-5.6.0.jar:5.6.0]
> at 
> org.apache.activemq.network.DemandForwardingBridgeSupport.serviceLocalCommand(DemandForwardingBridgeSupport.java:675)
>  ~
> [activemq-core-5.6.0.jar:5.6.0]
> at 
> org.apache.activemq.network.DemandForwardingBridgeSupport$1.onCommand(DemandForwardingBridgeSupport.java:139)
>  [activemq
> -core-5.6.0.jar:5.6.0]
> at 
> org.apache.activemq.transport.ResponseCorrelator.onCommand(ResponseCorrelator.java:116)
>  [activemq-core-5.6.0.jar:5.6.0]
> at 
> org.apache.activemq.transport.MutexTransport.onCommand(MutexTransport.java:50)
>  [activemq-core-5.6.0.jar:5.6.0]
> at 
> org.apache.activemq.transport.vm.VMTransport.doDispatch(VMTransport.java:135) 
> [activemq-core-5.6.0.jar:5.6.0]
> at 
> org.apache.activemq.transport.vm.VMTransport.dispatch(VMTransport.java:124) 
> [activemq-core-5.6.0.jar:5.6.0]
> at 
> org.apache.activemq.transport.vm.VMTransport.oneway(VMTransport.java:103) 
> [activemq-core-5.6.0.jar:5.6.0]
> at 
> org.apache.activemq.transport.MutexTransport.oneway(MutexTransport.java:68) 
> [activemq-core-5.6.0.jar:5.6.0]
> at 
> org.apache.activemq.transport.ResponseCorrelator.oneway(ResponseCorrelator.java:60)
>  [activemq-core-5.6.0.jar:5.6.0]
> at 
> org.apache.activemq.broker.TransportConnection.dispatch(TransportConnection.java:1307)
>  [activemq-core-5.6.0.jar:5.6.0]
> at 
> org.apache.activemq.broker.TransportConnection.processDispatch(TransportConnection.java:837)
>  [activemq-core-5.6.0.jar:5
> .6.0]
> at 
> org.apache.activemq.broker.TransportConnection.iterate(TransportConnection.java:872)
>  [activemq-core-5.6.0.jar:5.6.0]
> at 
> org.apache.activemq.thread.PooledTaskRunner.runTask(PooledTaskRunner.java:122)
>  [activemq-core-5.6.0.jar:5.6.0]
> at 
> org.apache.activemq.thread.PooledTaskRunner$1.run(PooledTaskRunner.java:43) 
> [activemq-core-5.6.0.jar:5.6.0]
> at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown 
> Source) [na:1.6.0_20]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) 
> [na:1.6.0_20]
> at java.lang.Thread.run(Unknown Source) [na:1.6.0_20]
> {code}
> The other broker will eve

[jira] [Comment Edited] (AMQ-3887) Occasional Null Pointer Exception during NetworkConnector connection

2012-07-13 Thread Ron Koerner (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-3887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13413720#comment-13413720
 ] 

Ron Koerner edited comment on AMQ-3887 at 7/13/12 1:31 PM:
---

With the latest 5.7-SNAPSHOT the problem is still there:

{code}
2012-07-13 15:02:25,987 INFO  .DemandForwardingBridgeSupport - Network 
connection between vm://smcufs02#32 and 
tcp:///127.0.0.1:43274(cbox-5300653612) has been established. 
[StartLocalBridge: localBroker=vm://smcufs02#32]
2012-07-13 15:02:25,988 INFO  emq.broker.TransportConnection - Started 
responder end of duplex bridge cBox 5300653612 to cBox 
Proxy@ID:5300653612-48838-1341420908942-0:1 [ActiveMQ NIO Worker 1]
2012-07-13 15:02:25,989 INFO  .ConnectorAuthenticationBroker - checked login of 
checkit:xdev-5300653612 [ActiveMQ NIO Worker 1]
2012-07-13 15:02:26,021 WARN  .DemandForwardingBridgeSupport - Caught an 
exception processing local command [BrokerService[smcufs02] Task-147]
java.lang.NullPointerException: null
at 
org.apache.activemq.network.DemandForwardingBridgeSupport.configureMessage(DemandForwardingBridgeSupport.java:673)
 ~[activemq-core-5.7-SNAPSHOT.jar:5.7-SNAPSHOT]
at 
org.apache.activemq.network.DemandForwardingBridgeSupport.serviceLocalCommand(DemandForwardingBridgeSupport.java:707)
 ~[activemq-core-5.7-SNAPSHOT.jar:5.7-SNAPSHOT]
at 
org.apache.activemq.network.DemandForwardingBridgeSupport$1.onCommand(DemandForwardingBridgeSupport.java:165)
 [activemq-core-5.7-SNAPSHOT.jar:5.7-SNAPSHOT]
at 
org.apache.activemq.transport.ResponseCorrelator.onCommand(ResponseCorrelator.java:116)
 [activemq-core-5.7-SNAPSHOT.jar:5.7-SNAPSHOT]
at 
org.apache.activemq.transport.MutexTransport.onCommand(MutexTransport.java:50) 
[activemq-core-5.7-SNAPSHOT.jar:5.7-SNAP
SHOT]
at 
org.apache.activemq.transport.vm.VMTransport.doDispatch(VMTransport.java:137) 
[activemq-core-5.7-SNAPSHOT.jar:5.7-SNAPSHOT]
at 
org.apache.activemq.transport.vm.VMTransport.dispatch(VMTransport.java:126) 
[activemq-core-5.7-SNAPSHOT.jar:5.7-SNAPSHOT]
at 
org.apache.activemq.transport.vm.VMTransport.oneway(VMTransport.java:103) 
[activemq-core-5.7-SNAPSHOT.jar:5.7-SNAPSHOT]
at 
org.apache.activemq.transport.MutexTransport.oneway(MutexTransport.java:68) 
[activemq-core-5.7-SNAPSHOT.jar:5.7-SNAPSHOT]
at 
org.apache.activemq.transport.ResponseCorrelator.oneway(ResponseCorrelator.java:60)
 [activemq-core-5.7-SNAPSHOT.jar:5.7-SNAPSHOT]
at 
org.apache.activemq.broker.TransportConnection.dispatch(TransportConnection.java:1307)
 [activemq-core-5.7-SNAPSHOT.jar:5.7-SNAPSHOT]
at 
org.apache.activemq.broker.TransportConnection.processDispatch(TransportConnection.java:837)
 [activemq-core-5.7-SNAPSHOT.jar:5.7-SNAPSHOT]
at 
org.apache.activemq.broker.TransportConnection.iterate(TransportConnection.java:872)
 [activemq-core-5.7-SNAPSHOT.jar:5.7-SNAPSHOT]
at 
org.apache.activemq.thread.PooledTaskRunner.runTask(PooledTaskRunner.java:122) 
[activemq-core-5.7-SNAPSHOT.jar:5.7-SNAPSHOT]
at 
org.apache.activemq.thread.PooledTaskRunner$1.run(PooledTaskRunner.java:43) 
[activemq-core-5.7-SNAPSHOT.jar:5.7-SNAPSHOT]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 [na:1.6.0_20]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) 
[na:1.6.0_20]
at java.lang.Thread.run(Thread.java:619) [na:1.6.0_20]
{code}

  was (Author: ron.koerner):
With the latest 5.7-SNAPSHOT the problem is still there:


2012-07-13 15:02:25,987 INFO  .DemandForwardingBridgeSupport - Network 
connection between vm://smcufs02#32 and 
tcp:///127.0.0.1:43274(cbox-5300653612) has been established. 
[StartLocalBridge: localBroker=vm://smcufs02#32]
2012-07-13 15:02:25,988 INFO  emq.broker.TransportConnection - Started 
responder end of duplex bridge cBox 5300653612 to cBox 
Proxy@ID:5300653612-48838-1341420908942-0:1 [ActiveMQ NIO Worker 1]
2012-07-13 15:02:25,989 INFO  .ConnectorAuthenticationBroker - checked login of 
checkit:xdev-5300653612 [ActiveMQ NIO Worker 1]
2012-07-13 15:02:26,021 WARN  .DemandForwardingBridgeSupport - Caught an 
exception processing local command [BrokerService[smcufs02] Task-147]
java.lang.NullPointerException: null
at 
org.apache.activemq.network.DemandForwardingBridgeSupport.configureMessage(DemandForwardingBridgeSupport.java:673)
 ~[activemq-core-5.7-SNAPSHOT.jar:5.7-SNAPSHOT]
at 
org.apache.activemq.network.DemandForwardingBridgeSupport.serviceLocalCommand(DemandForwardingBridgeSupport.java:707)
 ~[activemq-core-5.7-SNAPSHOT.jar:5.7-SNAPSHOT]
at 
org.apache.activemq.network.DemandForwardingBridgeSupport$1.onCommand(DemandForwardingBridgeSupport.java:165)
 [activemq-core-5.7-SNAPSHOT.jar:

[jira] [Reopened] (AMQ-3887) Occasional Null Pointer Exception during NetworkConnector connection

2012-07-13 Thread Ron Koerner (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-3887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ron Koerner reopened AMQ-3887:
--


With the latest 5.7-SNAPSHOT the problem is still there:


2012-07-13 15:02:25,987 INFO  .DemandForwardingBridgeSupport - Network 
connection between vm://smcufs02#32 and 
tcp:///127.0.0.1:43274(cbox-5300653612) has been established. 
[StartLocalBridge: localBroker=vm://smcufs02#32]
2012-07-13 15:02:25,988 INFO  emq.broker.TransportConnection - Started 
responder end of duplex bridge cBox 5300653612 to cBox 
Proxy@ID:5300653612-48838-1341420908942-0:1 [ActiveMQ NIO Worker 1]
2012-07-13 15:02:25,989 INFO  .ConnectorAuthenticationBroker - checked login of 
checkit:xdev-5300653612 [ActiveMQ NIO Worker 1]
2012-07-13 15:02:26,021 WARN  .DemandForwardingBridgeSupport - Caught an 
exception processing local command [BrokerService[smcufs02] Task-147]
java.lang.NullPointerException: null
at 
org.apache.activemq.network.DemandForwardingBridgeSupport.configureMessage(DemandForwardingBridgeSupport.java:673)
 ~[activemq-core-5.7-SNAPSHOT.jar:5.7-SNAPSHOT]
at 
org.apache.activemq.network.DemandForwardingBridgeSupport.serviceLocalCommand(DemandForwardingBridgeSupport.java:707)
 ~[activemq-core-5.7-SNAPSHOT.jar:5.7-SNAPSHOT]
at 
org.apache.activemq.network.DemandForwardingBridgeSupport$1.onCommand(DemandForwardingBridgeSupport.java:165)
 [activemq-core-5.7-SNAPSHOT.jar:5.7-SNAPSHOT]
at 
org.apache.activemq.transport.ResponseCorrelator.onCommand(ResponseCorrelator.java:116)
 [activemq-core-5.7-SNAPSHOT.jar:5.7-SNAPSHOT]
at 
org.apache.activemq.transport.MutexTransport.onCommand(MutexTransport.java:50) 
[activemq-core-5.7-SNAPSHOT.jar:5.7-SNAPSHOT]
at 
org.apache.activemq.transport.vm.VMTransport.doDispatch(VMTransport.java:137) 
[activemq-core-5.7-SNAPSHOT.jar:5.7-SNAPSHOT]
at 
org.apache.activemq.transport.vm.VMTransport.dispatch(VMTransport.java:126) 
[activemq-core-5.7-SNAPSHOT.jar:5.7-SNAPSHOT]
at 
org.apache.activemq.transport.vm.VMTransport.oneway(VMTransport.java:103) 
[activemq-core-5.7-SNAPSHOT.jar:5.7-SNAPSHOT]
at 
org.apache.activemq.transport.MutexTransport.oneway(MutexTransport.java:68) 
[activemq-core-5.7-SNAPSHOT.jar:5.7-SNAPSHOT]
at 
org.apache.activemq.transport.ResponseCorrelator.oneway(ResponseCorrelator.java:60)
 [activemq-core-5.7-SNAPSHOT.jar:5.7-SNAPSHOT]
at 
org.apache.activemq.broker.TransportConnection.dispatch(TransportConnection.java:1307)
 [activemq-core-5.7-SNAPSHOT.jar:5.7-SNAPSHOT]
at 
org.apache.activemq.broker.TransportConnection.processDispatch(TransportConnection.java:837)
 [activemq-core-5.7-SNAPSHOT.jar:5.7-SNAPSHOT]
at 
org.apache.activemq.broker.TransportConnection.iterate(TransportConnection.java:872)
 [activemq-core-5.7-SNAPSHOT.jar:5.7-SNAPSHOT]
at 
org.apache.activemq.thread.PooledTaskRunner.runTask(PooledTaskRunner.java:122) 
[activemq-core-5.7-SNAPSHOT.jar:5.7-SNAPSHOT]
at 
org.apache.activemq.thread.PooledTaskRunner$1.run(PooledTaskRunner.java:43) 
[activemq-core-5.7-SNAPSHOT.jar:5.7-SNAPSHOT]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 [na:1.6.0_20]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) 
[na:1.6.0_20]
at java.lang.Thread.run(Thread.java:619) [na:1.6.0_20]


> Occasional Null Pointer Exception during NetworkConnector connection
> 
>
> Key: AMQ-3887
> URL: https://issues.apache.org/jira/browse/AMQ-3887
> Project: ActiveMQ
>  Issue Type: Bug
>Affects Versions: 5.6.0
> Environment: SLES 10
>Reporter: Ron Koerner
>Assignee: Timothy Bish
> Fix For: 5.7.0
>
>
> While starting a duplex NetworkConnector, an NPE can be observed on the 
> receiving side.
> {code}
> 2012-06-18 17:34:24,571 INFO  .DemandForwardingBridgeSupport - Network 
> connection between vm://proxy-cbpi001#8 and 
> tcp:///169.254.0.5:59412(cbox-56BU101117) has been established. [StartLocalBridge: 
> localBroker=vm://proxy-cbpi001#8]
> 2012-06-18 17:34:24,577 WARN  .DemandForwardingBridgeSupport - Caught an 
> exception processing local command [BrokerService[proxy-cbpi001] Task-19]
> java.lang.NullPointerException: null
> at 
> org.apache.activemq.network.DemandForwardingBridgeSupport.configureMessage(DemandForwardingBridgeSupport.java:644)
>  ~[activemq-core-5.6.0.jar:5.6.0]
> at 
> org.apache.activemq.network.DemandForwardingBridgeSupport.serviceLocalCommand(DemandForwardingBridgeSupport.java:675)
>  ~[activemq-core-5.6.0.jar:5.6.0]
> at 
> org.apache.activemq.network.DemandForwardingBridgeSupport$1.onCommand(DemandForwardingBridgeSupport.java:139)
>  [ac

[jira] [Created] (AMQ-3887) Occasional Null Pointer Exception during NetworkConnector connection

2012-06-18 Thread Ron Koerner (JIRA)
Ron Koerner created AMQ-3887:


 Summary: Occasional Null Pointer Exception during NetworkConnector 
connection
 Key: AMQ-3887
 URL: https://issues.apache.org/jira/browse/AMQ-3887
 Project: ActiveMQ
  Issue Type: Bug
Affects Versions: 5.6.0
 Environment: SLES 10
Reporter: Ron Koerner


While starting a duplex NetworkConnector, an NPE can be observed on the 
receiving side.
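
For context, a minimal programmatic sketch of the kind of duplex network connector involved; the broker name, discovery URI, host and port are placeholders, not the actual configuration:

{code}
import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.network.NetworkConnector;

// Illustrative only: an initiating broker that opens a duplex bridge to a receiving broker.
public class DuplexBridgeSketch {
    public static void main(String[] args) throws Exception {
        BrokerService remoteBroker = new BrokerService();
        remoteBroker.setBrokerName("remote");          // placeholder name
        remoteBroker.setPersistent(false);
        NetworkConnector nc =
                remoteBroker.addNetworkConnector("static:(tcp://receiving-broker-host:61616)");
        nc.setDuplex(true); // a single connection carries traffic in both directions
        remoteBroker.start();
    }
}
{code}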

{code}
2012-06-18 17:34:24,571 INFO  .DemandForwardingBridgeSupport - Network 
connection between vm://proxy-cbpi001#8 and 
tcp:///169.254.0.5:59412(cbox-56BU101117) has been established. [StartLocalBridge: 
localBroker=vm://proxy-cbpi001#8]
2012-06-18 17:34:24,577 WARN  .DemandForwardingBridgeSupport - Caught an 
exception processing local command [BrokerService[proxy-cbpi001] Task-19]
java.lang.NullPointerException: null
at 
org.apache.activemq.network.DemandForwardingBridgeSupport.configureMessage(DemandForwardingBridgeSupport.java:644)
 ~[activemq-core-5.6.0.jar:5.6.0]
at 
org.apache.activemq.network.DemandForwardingBridgeSupport.serviceLocalCommand(DemandForwardingBridgeSupport.java:675)
 ~[activemq-core-5.6.0.jar:5.6.0]
at 
org.apache.activemq.network.DemandForwardingBridgeSupport$1.onCommand(DemandForwardingBridgeSupport.java:139)
 [activemq-core-5.6.0.jar:5.6.0]
at 
org.apache.activemq.transport.ResponseCorrelator.onCommand(ResponseCorrelator.java:116)
 [activemq-core-5.6.0.jar:5.6.0]
at 
org.apache.activemq.transport.MutexTransport.onCommand(MutexTransport.java:50) 
[activemq-core-5.6.0.jar:5.6.0]
at 
org.apache.activemq.transport.vm.VMTransport.doDispatch(VMTransport.java:135) 
[activemq-core-5.6.0.jar:5.6.0]
at 
org.apache.activemq.transport.vm.VMTransport.dispatch(VMTransport.java:124) 
[activemq-core-5.6.0.jar:5.6.0]
at 
org.apache.activemq.transport.vm.VMTransport.oneway(VMTransport.java:103) 
[activemq-core-5.6.0.jar:5.6.0]
at 
org.apache.activemq.transport.MutexTransport.oneway(MutexTransport.java:68) 
[activemq-core-5.6.0.jar:5.6.0]
at 
org.apache.activemq.transport.ResponseCorrelator.oneway(ResponseCorrelator.java:60)
 [activemq-core-5.6.0.jar:5.6.0]
at 
org.apache.activemq.broker.TransportConnection.dispatch(TransportConnection.java:1307)
 [activemq-core-5.6.0.jar:5.6.0]
at 
org.apache.activemq.broker.TransportConnection.processDispatch(TransportConnection.java:837)
 [activemq-core-5.6.0.jar:5.6.0]
at 
org.apache.activemq.broker.TransportConnection.iterate(TransportConnection.java:872)
 [activemq-core-5.6.0.jar:5.6.0]
at 
org.apache.activemq.thread.PooledTaskRunner.runTask(PooledTaskRunner.java:122) 
[activemq-core-5.6.0.jar:5.6.0]
at 
org.apache.activemq.thread.PooledTaskRunner$1.run(PooledTaskRunner.java:43) 
[activemq-core-5.6.0.jar:5.6.0]
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown 
Source) [na:1.6.0_20]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) 
[na:1.6.0_20]
at java.lang.Thread.run(Unknown Source) [na:1.6.0_20]
{code}

The other broker will eventually connect, but with about a hundred connecting 
brokers this occurs too often to ignore.

As this seems to be a race condition, it is quite difficult to reproduce 
reliably. I assume producerInfo is accessed in configureMessage before it is 
initialized in startRemoteBridge.
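
A toy reconstruction of that suspected ordering, not the actual DemandForwardingBridgeSupport code; apart from producerInfo, configureMessage and startRemoteBridge, all names are made up for the sketch:

{code}
// Suspected race: producerInfo is assigned asynchronously in startRemoteBridge(),
// but configureMessage() may run first on a dispatch thread and dereference the
// still-null field, producing the NPE reported above.
public class BridgeRaceSketch {

    static class ProducerInfoStub {
        final String producerId;
        ProducerInfoStub(String producerId) { this.producerId = producerId; }
    }

    static class MessageStub {
        String producerId;
    }

    private volatile ProducerInfoStub producerInfo; // set only once the remote bridge is up

    void startRemoteBridge() {
        // ... remote handshake omitted ...
        producerInfo = new ProducerInfoStub("bridge-producer");
    }

    void configureMessage(MessageStub message) {
        // If this runs before startRemoteBridge() completes, producerInfo is still null.
        message.producerId = producerInfo.producerId;
    }
}
{code}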

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (AMQ-3873) Occasional deadlock during startup

2012-06-15 Thread Ron Koerner (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-3873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13295664#comment-13295664
 ] 

Ron Koerner commented on AMQ-3873:
--

I'll try to find time. Currently a one-second delay between connections seems 
to be a workaround, but it is certainly no solution.
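
A minimal sketch of that interim workaround, assuming the startup connections are opened in a loop; the factory, method name and connection count are illustrative:

{code}
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;

// Illustrative only: pace the startup connections one second apart instead of
// opening them back to back.
public class PacedStartup {
    static void connectWithDelay(ConnectionFactory factory, int connections)
            throws JMSException, InterruptedException {
        for (int i = 0; i < connections; i++) {
            Connection connection = factory.createConnection();
            connection.start();
            Thread.sleep(1000L); // one-second delay between successive connections
        }
    }
}
{code}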

> Occasional deadlock during startup
> --
>
> Key: AMQ-3873
> URL: https://issues.apache.org/jira/browse/AMQ-3873
> Project: ActiveMQ
>  Issue Type: Bug
>Affects Versions: 5.6.0
> Environment: Suse Linux, CentOS Linux
> Out-of-the-Box standalone broker with additional beans
>Reporter: Ron Koerner
>Assignee: Timothy Bish
>
> During startup I occasionally get deadlocks. I never had those on earlier 
> versions, including a 5.6-SNAPSHOT from January.
> My activemq.xml contains a number of beans which each autowire the 
> BrokerService and make connections to 
> brokerService.getVmConnectorURI().toString()+"?async=false"
> To avoid the beans starting faster than the broker and creating a second 
> broker by trying to connect, the broker is instantiated with start=false, and 
> a special bean listening for the Spring ContextRefreshedEvent starts the 
> broker and runs each bean's connection methods.
> Therefore a number of VmConnections will be made serially but in rapid 
> succession.
> This fails at different points, but always with the following thread lock 
> analysis:
> {code}
> Found one Java-level deadlock:
> =
> "ActiveMQ Task-3":
>   waiting for ownable synchronizer 0x9f288120, (a 
> java.util.concurrent.locks.ReentrantLock$NonfairSync),
>   which is held by "BrokerService[smcufs02] Task-1"
> "BrokerService[smcufs02] Task-1":
>   waiting to lock monitor 0x0807a650 (object 0x9f2880d8, a 
> java.util.concurrent.atomic.AtomicBoolean),
>   which is held by "ActiveMQ Task-3"
> Java stack information for the threads listed above:
> ===
> "ActiveMQ Task-3":
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x9f288120> (a 
> java.util.concurrent.locks.ReentrantLock$NonfairSync)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:747)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:778)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1114)
> at 
> java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:186)
> at 
> java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:262)
> at 
> org.apache.activemq.transport.MutexTransport.oneway(MutexTransport.java:66)
> at 
> org.apache.activemq.transport.ResponseCorrelator.oneway(ResponseCorrelator.java:60)
> at 
> org.apache.activemq.broker.TransportConnection.dispatch(TransportConnection.java:1307)
> at 
> org.apache.activemq.broker.TransportConnection.processDispatch(TransportConnection.java:837)
> at 
> org.apache.activemq.broker.TransportConnection.dispatchSync(TransportConnection.java:798)
> at 
> org.apache.activemq.broker.TransportConnection$1.onCommand(TransportConnection.java:152)
> at 
> org.apache.activemq.transport.ResponseCorrelator.onCommand(ResponseCorrelator.java:116)
> at 
> org.apache.activemq.transport.MutexTransport.onCommand(MutexTransport.java:50)
> at 
> org.apache.activemq.transport.vm.VMTransport.doDispatch(VMTransport.java:135)
> at 
> org.apache.activemq.transport.vm.VMTransport.start(VMTransport.java:156)
> - locked <0x9f2880d8> (a java.util.concurrent.atomic.AtomicBoolean)
> at 
> org.apache.activemq.transport.TransportFilter.start(TransportFilter.java:58)
> at 
> org.apache.activemq.transport.TransportFilter.start(TransportFilter.java:58)
> at 
> org.apache.activemq.broker.TransportConnection.start(TransportConnection.java:914)
> - locked <0x9f2e4f98> (a 
> org.apache.activemq.broker.TransportConnection)
> at 
> org.apache.activemq.broker.TransportConnector$1$1.run(TransportConnector.java:227)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:619)
> "BrokerService[smcufs02] Task-1":
> at 
> org.apache.activemq.transport.vm.VMTransport.dispatch(VMTransport.java:114)
> - waiting to lock <0x9f2880d8> (a 
> java.util.concurrent.atomic.AtomicBoolean)
> at 
> org.apache.activem

[jira] [Updated] (AMQ-3873) Occasional deadlock during startup

2012-06-01 Thread Ron Koerner (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-3873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ron Koerner updated AMQ-3873:
-

Description: 
During startup I occasionally get deadlocks. I never had those on earlier 
versions, including a 5.6-SNAPSHOT from January.

My activemq.xml contains a number of beans which each autowire the 
BrokerService and make connections to 
brokerService.getVmConnectorURI().toString()+"?async=false"

To avoid the beans starting faster than the broker and creating a second broker 
by trying to connect, the broker is instantiated with start=false, and a special 
bean listening for the Spring ContextRefreshedEvent starts the broker and runs 
each bean's connection methods.

Therefore a number of VmConnections will be made serially but in rapid 
succession.
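
A rough sketch of this startup arrangement; the VmClient interface and the wiring are assumptions for illustration, not the actual beans:

{code}
import java.util.List;
import javax.jms.JMSException;
import org.apache.activemq.broker.BrokerService;
import org.springframework.context.ApplicationListener;
import org.springframework.context.event.ContextRefreshedEvent;

// Sketch: the broker bean is created with start=false; once the Spring context is
// refreshed, this listener starts the broker and then lets each dependent bean open
// its vm:// connection in turn.
public class BrokerStartupListener implements ApplicationListener<ContextRefreshedEvent> {

    /** Illustrative stand-in for the beans that connect to the embedded broker. */
    public interface VmClient {
        void connect(String vmUri) throws JMSException;
    }

    private final BrokerService brokerService;
    private final List<VmClient> clients;

    public BrokerStartupListener(BrokerService brokerService, List<VmClient> clients) {
        this.brokerService = brokerService;
        this.clients = clients;
    }

    @Override
    public void onApplicationEvent(ContextRefreshedEvent event) {
        try {
            brokerService.start(); // broker was instantiated with start=false
            String vmUri = brokerService.getVmConnectorURI().toString() + "?async=false";
            for (VmClient client : clients) {
                client.connect(vmUri); // serial connections in rapid succession
            }
        } catch (Exception e) {
            throw new IllegalStateException("Broker startup failed", e);
        }
    }
}
{code}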

This fails at different points, but always with the following thread lock 
analysis:

{code}
Found one Java-level deadlock:
=
"ActiveMQ Task-3":
  waiting for ownable synchronizer 0x9f288120, (a 
java.util.concurrent.locks.ReentrantLock$NonfairSync),
  which is held by "BrokerService[smcufs02] Task-1"
"BrokerService[smcufs02] Task-1":
  waiting to lock monitor 0x0807a650 (object 0x9f2880d8, a 
java.util.concurrent.atomic.AtomicBoolean),
  which is held by "ActiveMQ Task-3"

Java stack information for the threads listed above:
===
"ActiveMQ Task-3":
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  <0x9f288120> (a 
java.util.concurrent.locks.ReentrantLock$NonfairSync)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:747)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:778)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1114)
at 
java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:186)
at java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:262)
at 
org.apache.activemq.transport.MutexTransport.oneway(MutexTransport.java:66)
at 
org.apache.activemq.transport.ResponseCorrelator.oneway(ResponseCorrelator.java:60)
at 
org.apache.activemq.broker.TransportConnection.dispatch(TransportConnection.java:1307)
at 
org.apache.activemq.broker.TransportConnection.processDispatch(TransportConnection.java:837)
at 
org.apache.activemq.broker.TransportConnection.dispatchSync(TransportConnection.java:798)
at 
org.apache.activemq.broker.TransportConnection$1.onCommand(TransportConnection.java:152)
at 
org.apache.activemq.transport.ResponseCorrelator.onCommand(ResponseCorrelator.java:116)
at 
org.apache.activemq.transport.MutexTransport.onCommand(MutexTransport.java:50)
at 
org.apache.activemq.transport.vm.VMTransport.doDispatch(VMTransport.java:135)
at 
org.apache.activemq.transport.vm.VMTransport.start(VMTransport.java:156)
- locked <0x9f2880d8> (a java.util.concurrent.atomic.AtomicBoolean)
at 
org.apache.activemq.transport.TransportFilter.start(TransportFilter.java:58)
at 
org.apache.activemq.transport.TransportFilter.start(TransportFilter.java:58)
at 
org.apache.activemq.broker.TransportConnection.start(TransportConnection.java:914)
- locked <0x9f2e4f98> (a org.apache.activemq.broker.TransportConnection)
at 
org.apache.activemq.broker.TransportConnector$1$1.run(TransportConnector.java:227)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:619)
"BrokerService[smcufs02] Task-1":
at 
org.apache.activemq.transport.vm.VMTransport.dispatch(VMTransport.java:114)
- waiting to lock <0x9f2880d8> (a 
java.util.concurrent.atomic.AtomicBoolean)
at 
org.apache.activemq.transport.vm.VMTransport.oneway(VMTransport.java:103)
at 
org.apache.activemq.transport.MutexTransport.oneway(MutexTransport.java:68)
at 
org.apache.activemq.transport.ResponseCorrelator.oneway(ResponseCorrelator.java:60)
at 
org.apache.activemq.broker.TransportConnection.dispatch(TransportConnection.java:1307)
at 
org.apache.activemq.broker.TransportConnection.processDispatch(TransportConnection.java:837)
at 
org.apache.activemq.broker.TransportConnection.iterate(TransportConnection.java:872)
at 
org.apache.activemq.thread.PooledTaskRunner.runTask(PooledTaskRunner.java:122)
at 
org.apache.activemq.thread.PooledTaskRunner$1.run(PooledTaskRunner.java:43)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoo

[jira] [Created] (AMQ-3873) Occasional deadlock during startup

2012-06-01 Thread Ron Koerner (JIRA)
Ron Koerner created AMQ-3873:


 Summary: Occasional deadlock during startup
 Key: AMQ-3873
 URL: https://issues.apache.org/jira/browse/AMQ-3873
 Project: ActiveMQ
  Issue Type: Bug
Affects Versions: 5.6.0
 Environment: Suse Linux, CentOS Linux
Out-of-the-Box standalone broker with additional beans
Reporter: Ron Koerner


During startup I occasionally get deadlocks. I never had those on earlier 
versions, including a 5.6-SNAPSHOT from January.

My activemq.xml contains a number of beans which each autowire the 
BrokerService and make connections to 
brokerService.getVmConnectorURI().toString()+"?async=false"

To avoid the beans starting faster than the broker and creating a second broker 
by trying to connect, the broker is instantiated with start=false, and a special 
bean listening for the Spring ContextRefreshedEvent starts the broker and runs 
each bean's connection methods.

Therefore a number of VmConnections will be made serially but in rapid 
succession.

This fails at different points, but always with the following thread lock 
analysis:

Found one Java-level deadlock:
=
"ActiveMQ Task-3":
  waiting for ownable synchronizer 0x9f288120, (a 
java.util.concurrent.locks.ReentrantLock$NonfairSync),
  which is held by "BrokerService[smcufs02] Task-1"
"BrokerService[smcufs02] Task-1":
  waiting to lock monitor 0x0807a650 (object 0x9f2880d8, a 
java.util.concurrent.atomic.AtomicBoolean),
  which is held by "ActiveMQ Task-3"

Java stack information for the threads listed above:
===
"ActiveMQ Task-3":
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  <0x9f288120> (a 
java.util.concurrent.locks.ReentrantLock$NonfairSync)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:747)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:778)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1114)
at 
java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:186)
at java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:262)
at 
org.apache.activemq.transport.MutexTransport.oneway(MutexTransport.java:66)
at 
org.apache.activemq.transport.ResponseCorrelator.oneway(ResponseCorrelator.java:60)
at 
org.apache.activemq.broker.TransportConnection.dispatch(TransportConnection.java:1307)
at 
org.apache.activemq.broker.TransportConnection.processDispatch(TransportConnection.java:837)
at 
org.apache.activemq.broker.TransportConnection.dispatchSync(TransportConnection.java:798)
at 
org.apache.activemq.broker.TransportConnection$1.onCommand(TransportConnection.java:152)
at 
org.apache.activemq.transport.ResponseCorrelator.onCommand(ResponseCorrelator.java:116)
at 
org.apache.activemq.transport.MutexTransport.onCommand(MutexTransport.java:50)
at 
org.apache.activemq.transport.vm.VMTransport.doDispatch(VMTransport.java:135)
at 
org.apache.activemq.transport.vm.VMTransport.start(VMTransport.java:156)
- locked <0x9f2880d8> (a java.util.concurrent.atomic.AtomicBoolean)
at 
org.apache.activemq.transport.TransportFilter.start(TransportFilter.java:58)
at 
org.apache.activemq.transport.TransportFilter.start(TransportFilter.java:58)
at 
org.apache.activemq.broker.TransportConnection.start(TransportConnection.java:914)
- locked <0x9f2e4f98> (a org.apache.activemq.broker.TransportConnection)
at 
org.apache.activemq.broker.TransportConnector$1$1.run(TransportConnector.java:227)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:619)
"BrokerService[smcufs02] Task-1":
at 
org.apache.activemq.transport.vm.VMTransport.dispatch(VMTransport.java:114)
- waiting to lock <0x9f2880d8> (a 
java.util.concurrent.atomic.AtomicBoolean)
at 
org.apache.activemq.transport.vm.VMTransport.oneway(VMTransport.java:103)
at 
org.apache.activemq.transport.MutexTransport.oneway(MutexTransport.java:68)
at 
org.apache.activemq.transport.ResponseCorrelator.oneway(ResponseCorrelator.java:60)
at 
org.apache.activemq.broker.TransportConnection.dispatch(TransportConnection.java:1307)
at 
org.apache.activemq.broker.TransportConnection.processDispatch(TransportConnection.java:837)
at 
org.apache.activemq.broker.TransportConnection.iterate(TransportConnection.java:872)
at 
org.apache.