[jira] [Commented] (AMQ-5260) Cross talk over duplex network connection can lead to blocking (alternative take)

2014-10-06 Thread matteo rulli (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14160061#comment-14160061
 ] 

matteo rulli commented on AMQ-5260:
---

After four days of testing with the patch I proposed in the previous comment, 
the broker is still running without any deadlock.

So I would like to hear from an ActiveMQ expert whether what I call a _patch_ 
can really be considered a proper fix, or merely a workaround with potential 
drawbacks.

Thanks.

> Cross talk over duplex network connection can lead to blocking (alternative 
> take)
> -
>
> Key: AMQ-5260
> URL: https://issues.apache.org/jira/browse/AMQ-5260
> Project: ActiveMQ
>  Issue Type: Bug
>Affects Versions: 5.9.0, 5.10.0
>Reporter: matteo rulli
> Attachments: AMQ5260AdvancedTest.java, AMQ_5260.patch, 
> AMQ_5260_2.patch, AMQ_5260_3.patch, deadlock.jpg, debug.jpg
>
>
> Pretty much the same description as in AMQ-4328. 
> 
> !deadlock.jpg!
> h2. Stacktraces:
> Stacktrace no.1:
> {noformat}
> Name: ActiveMQ NIO Worker 12
> State: BLOCKED on java.net.URI@1bae2b28 owned by: ActiveMQ Transport: 
> tcp:///10.0.1.219:61616@57789
> Total blocked: 2  Total waited: 67
> Stack trace: 
>  
> org.apache.activemq.network.DemandForwardingBridgeSupport.serviceRemoteConsumerAdvisory(DemandForwardingBridgeSupport.java:714)
> org.apache.activemq.network.DemandForwardingBridgeSupport.serviceRemoteCommand(DemandForwardingBridgeSupport.java:581)
> org.apache.activemq.network.DemandForwardingBridgeSupport$3.onCommand(DemandForwardingBridgeSupport.java:191)
> org.apache.activemq.transport.ResponseCorrelator.onCommand(ResponseCorrelator.java:116)
> org.apache.activemq.transport.MutexTransport.onCommand(MutexTransport.java:50)
> org.apache.activemq.transport.WireFormatNegotiator.onCommand(WireFormatNegotiator.java:113)
> org.apache.activemq.transport.AbstractInactivityMonitor.onCommand(AbstractInactivityMonitor.java:270)
> org.apache.activemq.transport.TransportSupport.doConsume(TransportSupport.java:83)
> org.apache.activemq.transport.nio.NIOTransport.serviceRead(NIOTransport.java:138)
> org.apache.activemq.transport.nio.NIOTransport$1.onSelect(NIOTransport.java:69)
> org.apache.activemq.transport.nio.SelectorSelection.onSelect(SelectorSelection.java:94)
> org.apache.activemq.transport.nio.SelectorWorker$1.run(SelectorWorker.java:119)
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
> java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
> java.lang.Thread.run(Unknown Source)
> {noformat}
> 
> Stack trace no. 2:
> {noformat}
> Name: ActiveMQ Transport: tcp:///10.0.1.219:61616@57789
> State: WAITING on 
> java.util.concurrent.locks.ReentrantLock$NonfairSync@3cdbfa3e owned by: 
> ActiveMQ BrokerService[master2] Task-4
> Total blocked: 19  Total waited: 3
> Stack trace: 
>  sun.misc.Unsafe.park(Native Method)
> java.util.concurrent.locks.LockSupport.park(Unknown Source)
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(Unknown
>  Source)
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(Unknown 
> Source)
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(Unknown Source)
> java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(Unknown Source)
> java.util.concurrent.locks.ReentrantLock.lock(Unknown Source)
> org.apache.activemq.transport.MutexTransport.oneway(MutexTransport.java:66)
> org.apache.activemq.transport.ResponseCorrelator.oneway(ResponseCorrelator.java:60)
> org.apache.activemq.broker.TransportConnection.dispatch(TransportConnection.java:1339)
> org.apache.activemq.broker.TransportConnection.processDispatch(TransportConnection.java:858)
> org.apache.activemq.broker.TransportConnection.dispatchSync(TransportConnection.java:818)
> org.apache.activemq.broker.TransportConnection$1.onCommand(TransportConnection.java:151)
> org.apache.activemq.transport.ResponseCorrelator.onCommand(ResponseCorrelator.java:116)
> org.apache.activemq.transport.MutexTransport.onCommand(MutexTransport.java:50)
> org.apache.activemq.transport.vm.VMTransport.doDispatch(VMTransport.java:138)
> org.apache.activemq.transport.vm.VMTransport.dispatch(VMTransport.java:127)
>- locked java.util.concurrent.atomic.AtomicBoolean@689389da
> org.apache.activemq.transport.vm.VMTransport.oneway(VMTransport.java:104)
> org.apache.activemq.transport.MutexTransport.oneway(MutexTransport.java:68)
> org.apache.activemq.transport.ResponseCorrelator.asyncRequest(ResponseCorrelator.java:81)
> org.apache.activemq.transport.ResponseCorrelator.request(ResponseCorrelator.java:86)
> org.apache.activemq.network.DemandForwardingBridgeSupport.addSubscription(DemandForwardingBridgeSupport.java:856)
> org.apache.activemq.network.DemandForwardingBridgeSupport.add

[jira] [Commented] (AMQ-5260) Cross talk over duplex network connection can lead to blocking (alternative take)

2014-09-26 Thread matteo rulli (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14149239#comment-14149239
 ] 

matteo rulli commented on AMQ-5260:
---

Thank you for your prompt reply! Now I understand the role of MutexTransport. 
By the way, I noticed that the deadlock involves two MutexTransport instances 
that wrap *VMTransports*. Apparently, if I apply the following hack
{code:java}
--- MutexTransport.java Thu Jun 05 14:48:36 2014
+++ MutexTransport.edited.java  Fri Sep 26 09:11:41 2014
@@ -19,17 +19,31 @@
 import java.io.IOException;
 import java.util.concurrent.locks.ReentrantLock;
 
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
 /**
  * Thread safe Transport Filter that serializes calls to and from the Transport Stack.
  */
 public class MutexTransport extends TransportFilter {
 
-    private final ReentrantLock writeLock = new ReentrantLock();
+    private static final Logger LOG = LoggerFactory.getLogger(MutexTransport.class);
+    private static final ReentrantLock vmWriteLock = new ReentrantLock();
+    private ReentrantLock writeLock;
     private boolean syncOnCommand;
 
     public MutexTransport(Transport next) {
         super(next);
         this.syncOnCommand = false;
+
+        writeLock = null;
+        if (next != null && next.toString().startsWith("vm://")) {
+            writeLock = vmWriteLock;
+            LOG.error("#@mrul# vm transport with mutex: " + next);
+        } else {
+            writeLock = new ReentrantLock();
+            LOG.error("#@mrul# non-vm transport with mutex: " + next);
+        }
     }
 
     public MutexTransport(Transport next, boolean syncOnCommand) {
{code}

the deadlock disappears. I noticed that the network bridge creates many 
VMTransports of the form vm://brokerName# along essentially three different 
thread paths, and all these local VMTransports trigger many intertwined 
command exchanges that somehow produce the deadlock:

h1. VMTransports creation PATH 1
{noformat}
java.lang.Thread.getStackTrace(Unknown Source)
org.apache.activemq.transport.vm.VMTransport.<init>(VMTransport.java:73)
org.apache.activemq.transport.vm.VMTransportServer$1.<init>(VMTransportServer.java:77)
org.apache.activemq.transport.vm.VMTransportServer.connect(VMTransportServer.java:77)
org.apache.activemq.transport.vm.VMTransportFactory.doCompositeConnect(VMTransportFactory.java:147)
org.apache.activemq.transport.vm.VMTransportFactory.doConnect(VMTransportFactory.java:54)
org.apache.activemq.transport.TransportFactory.connect(TransportFactory.java:64)
org.apache.activemq.network.NetworkConnector.createLocalTransport(NetworkConnector.java:154)
org.apache.activemq.network.DiscoveryNetworkConnector.onServiceAdd(DiscoveryNetworkConnector.java:136)
org.apache.activemq.transport.discovery.simple.SimpleDiscoveryAgent.start(SimpleDiscoveryAgent.java:89)
org.apache.activemq.network.DiscoveryNetworkConnector.handleStart(DiscoveryNetworkConnector.java:205)
org.apache.activemq.network.NetworkConnector$1.doStart(NetworkConnector.java:59)
org.apache.activemq.util.ServiceSupport.start(ServiceSupport.java:55)
org.apache.activemq.network.NetworkConnector.start(NetworkConnector.java:159)
org.apache.activemq.broker.BrokerService.startAllConnectors(BrokerService.java:2501)
org.apache.activemq.broker.BrokerService.doStartBroker(BrokerService.java:693)
org.apache.activemq.broker.BrokerService.startBroker(BrokerService.java:659)
org.apache.activemq.broker.BrokerService.start(BrokerService.java:595)
org.apache.activemq.JmsMultipleBrokersTestSupport.startAllBrokers(JmsMultipleBrokersTestSupport.java:277)
{noformat}

h1. VMTransports creation PATH 2
{noformat}
org.apache.activemq.transport.vm.VMTransport.<init>(VMTransport.java:73)
org.apache.activemq.transport.vm.VMTransportServer$1.<init>(VMTransportServer.java:77)
org.apache.activemq.transport.vm.VMTransportServer.connect(VMTransportServer.java:77)
org.apache.activemq.transport.vm.VMTransportFactory.doCompositeConnect(VMTransportFactory.java:147)
org.apache.activemq.transport.vm.VMTransportFactory.doConnect(VMTransportFactory.java:54)
org.apache.activemq.transport.TransportFactory.connect(TransportFactory.java:64)
org.apache.activemq.network.NetworkBridgeFactory.createLocalTransport(NetworkBridgeFactory.java:80)
org.apache.activemq.network.DemandForwardingBridgeSupport.start(DemandForwardingBridgeSupport.java:184)
org.apache.activemq.network.DiscoveryNetworkConnector.onServiceAdd(DiscoveryNetworkConnector.java:152)
org.apache.activemq.transport.discovery.simple.SimpleDiscoveryAgent.start(SimpleDiscoveryAgent.java:89)
org.apache.activemq.network.DiscoveryNetworkConnector.handleStart(DiscoveryNetworkConnector.java:205)
org.apache.activemq.network.NetworkConnector$1.doStart(NetworkConnector.java:59)
org.apache.activemq.util.ServiceSupport.start(ServiceSupport.java:55)
org.apache.activemq.network.NetworkConnector.start(NetworkConnector.java:159)
org.a
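
As a side note on the hack above: sharing one global vmWriteLock breaks the 
cycle by collapsing the two lock instances into a single one, at the price of 
serializing all VM transports in the JVM. Another common way to break such a 
lock-ordering cycle is to acquire with a timeout and back off instead of 
blocking forever. A minimal sketch of that pattern in plain Java (illustrative 
only, not a proposed ActiveMQ change; the class and method names are made up):

{code:java}
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TimedLockPair {

    /**
     * Runs the action while holding both locks. Timed tryLock plus backoff
     * means an inconsistent acquisition order can no longer hang forever:
     * on failure we release everything and retry.
     */
    public static void withBoth(ReentrantLock a, ReentrantLock b, Runnable action)
            throws InterruptedException {
        while (true) {
            if (a.tryLock(50, TimeUnit.MILLISECONDS)) {
                try {
                    if (b.tryLock(50, TimeUnit.MILLISECONDS)) {
                        try {
                            action.run();
                            return;
                        } finally {
                            b.unlock();
                        }
                    }
                } finally {
                    a.unlock();
                }
            }
            // could not get both locks: back off with jitter before retrying
            Thread.sleep(ThreadLocalRandom.current().nextInt(5, 25));
        }
    }
}
{code}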

[jira] [Commented] (AMQ-5260) Cross talk over duplex network connection can lead to blocking (alternative take)

2014-09-25 Thread matteo rulli (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14147898#comment-14147898
 ] 

matteo rulli commented on AMQ-5260:
---

Just to better understand what is going on here, could anyone explain 
*why we need a MutexTransport? What is its role?* The reason we have a 
deadlock here is that two different threads try to acquire the same pair of 
_org.apache.activemq.transport.MutexTransport.writeLock_ instances in a 
different order. So, in order to work out a decent workaround, I would like to 
understand the role of this _writeLock_: what resources is it supposed to 
protect? 
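
As a self-contained illustration of that pattern: two threads crossing the 
same pair of locks in opposite order are enough to deadlock. A toy sketch in 
plain Java (not ActiveMQ code); running it hangs both threads, and jconsole's 
deadlock detection reports them as waiting on each other's ReentrantLock:

{code:java}
import java.util.concurrent.locks.ReentrantLock;

public class CrossedLockOrder {

    public static void main(String[] args) {
        ReentrantLock lockA = new ReentrantLock(); // stands in for one transport's writeLock
        ReentrantLock lockB = new ReentrantLock(); // stands in for the other transport's writeLock

        new Thread(() -> acquireBoth(lockA, lockB), "bridge-1").start(); // A then B
        new Thread(() -> acquireBoth(lockB, lockA), "bridge-2").start(); // B then A: deadlock
    }

    private static void acquireBoth(ReentrantLock first, ReentrantLock second) {
        first.lock();
        try {
            pause(); // widen the race window so the deadlock shows up reliably
            second.lock();
            second.unlock();
        } finally {
            first.unlock();
        }
    }

    private static void pause() {
        try { Thread.sleep(100); } catch (InterruptedException ignored) { }
    }
}
{code}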

By the way, did you manage to reproduce the issue on your side using the 
provided "test case"?

Thank you.


[jira] [Comment Edited] (AMQ-5260) Cross talk over duplex network connection can lead to blocking (alternative take)

2014-09-15 Thread matteo rulli (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14133752#comment-14133752
 ] 

matteo rulli edited comment on AMQ-5260 at 9/15/14 10:06 AM:
-

Ok, we managed to reproduce the issue systematically in a dedicated test case 
(please find it attached to this issue as *AMQ5260AdvancedTest.java*). 
This JUnit test must be placed in the _org.apache.activemq.bugs_ package within 
the official ActiveMQ source distribution (version 5.10.0). To reproduce the 
deadlock within a few seconds, the following configuration must be applied (VM 
arguments in the JUnit test launch configuration):
{noformat}
-Dorg.apache.activemq.UseDedicatedTaskRunner=false 
-Dbroker.ip.address= 
-Dnum.producers.spec.topic=5 
-Dnum.consumers.spec.topic=5 
-Dnum.producers.gen.topic=5 
-Dnum.consumers.gen.topic=5 
-Dnum.messages.sec=200
{noformat}

For the time being, the problem seems related to a concurrency issue triggered 
by concurrent DemandForwardingBridgeSupport#addSubscription and 
DemandForwardingBridgeSupport#onCommand invocations.

To reproduce the issue it is necessary to create a test that forces such 
invocations (a minimal topology sketch follows the list):
# create two brokers, each with a duplex network connector to the other
# create a generic topic with some producers publishing to it and some 
consumers consuming from it; half of the producers/consumers are connected to 
one broker, the other half to the other. This scenario should trigger the 
DemandForwardingBridgeSupport#onCommand invocation
# create a specific topic with one producer and some consumers; the producer 
is connected to one broker, the consumers to the other. This scenario should 
trigger the DemandForwardingBridgeSupport#addSubscription invocation.
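
A minimal sketch of the two-broker duplex topology from step 1 (illustrative 
only: broker names, ports and the static discovery URI are assumptions, not 
taken from the attached test):

{code:java}
import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.network.NetworkConnector;

public class DuplexPairSetup {

    public static void main(String[] args) throws Exception {
        BrokerService brokerA = createBroker("brokerA", 61616, 61617);
        BrokerService brokerB = createBroker("brokerB", 61617, 61616);
        brokerA.start();
        brokerB.start();
        // ...attach the producers/consumers from steps 2 and 3 to the two tcp ports...
    }

    private static BrokerService createBroker(String name, int ownPort, int peerPort) throws Exception {
        BrokerService broker = new BrokerService();
        broker.setBrokerName(name);
        broker.setPersistent(false);
        broker.setUseJmx(true); // lets jconsole attach for deadlock detection
        broker.addConnector("tcp://localhost:" + ownPort);
        // duplex bridge: one connection carries traffic in both directions
        NetworkConnector nc = broker.addNetworkConnector("static:(tcp://localhost:" + peerPort + ")");
        nc.setDuplex(true);
        return broker;
    }
}
{code}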

A few seconds after the JUnit test starts, just press the _detect deadlock_ 
button in jconsole.
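
The check behind that button can also be scripted from inside the test JVM; a 
minimal sketch using the standard ThreadMXBean API:

{code:java}
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class DeadlockProbe {

    public static void main(String[] args) throws InterruptedException {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        while (true) {
            // covers monitors and ownable synchronizers such as ReentrantLock
            long[] ids = threads.findDeadlockedThreads();
            if (ids != null) {
                for (ThreadInfo info : threads.getThreadInfo(ids, Integer.MAX_VALUE)) {
                    System.err.println(info);
                }
                return;
            }
            Thread.sleep(1000);
        }
    }
}
{code}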

As a side note, the deadlock shows up both with two duplex network connectors 
(A to B and B to A) and with a single duplex network connector (A to B only) 
between the two brokers.


[jira] [Updated] (AMQ-5260) Cross talk over duplex network connection can lead to blocking (alternative take)

2014-09-15 Thread matteo rulli (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

matteo rulli updated AMQ-5260:
--
Attachment: AMQ5260AdvancedTest.java


[jira] [Commented] (AMQ-5260) Cross talk over duplex network connection can lead to blocking (alternative take)

2014-07-16 Thread matteo rulli (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14064635#comment-14064635
 ] 

matteo rulli commented on AMQ-5260:
---

I upgraded AMQ to 5.10.0 (thus rolling back all my awkward and tentative 
patches) and I detected the very same deadlock:

{noformat}
Name: ActiveMQ NIO Worker 4006
State: BLOCKED on java.net.URI@5602eaf4 owned by: ActiveMQ Transport: 
tcp:///10.0.1.219:61616@58001
Total blocked: 1  Total waited: 3

Stack trace: 
 
org.apache.activemq.network.DemandForwardingBridgeSupport.serviceRemoteConsumerAdvisory(DemandForwardingBridgeSupport.java:763)
org.apache.activemq.network.DemandForwardingBridgeSupport.serviceRemoteCommand(DemandForwardingBridgeSupport.java:614)
org.apache.activemq.network.DemandForwardingBridgeSupport$3.onCommand(DemandForwardingBridgeSupport.java:224)
org.apache.activemq.transport.ResponseCorrelator.onCommand(ResponseCorrelator.java:116)
org.apache.activemq.transport.MutexTransport.onCommand(MutexTransport.java:50)
org.apache.activemq.transport.WireFormatNegotiator.onCommand(WireFormatNegotiator.java:113)
org.apache.activemq.transport.AbstractInactivityMonitor.onCommand(AbstractInactivityMonitor.java:270)
org.apache.activemq.transport.TransportSupport.doConsume(TransportSupport.java:83)
org.apache.activemq.transport.nio.NIOTransport.serviceRead(NIOTransport.java:138)
org.apache.activemq.transport.nio.NIOTransport$1.onSelect(NIOTransport.java:69)
org.apache.activemq.transport.nio.SelectorSelection.onSelect(SelectorSelection.java:94)
org.apache.activemq.transport.nio.SelectorWorker$1.run(SelectorWorker.java:119)
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
java.lang.Thread.run(Unknown Source)
{noformat}

{noformat}
Name: ActiveMQ Transport: tcp:///10.0.1.219:61616@58001
State: WAITING on java.util.concurrent.locks.ReentrantLock$NonfairSync@598f6549 
owned by: ActiveMQ BrokerService[master2] Task-3288
Total blocked: 734  Total waited: 692

Stack trace: 
 sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.park(Unknown Source)
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(Unknown
 Source)
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(Unknown 
Source)
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(Unknown Source)
java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(Unknown Source)
java.util.concurrent.locks.ReentrantLock.lock(Unknown Source)
org.apache.activemq.transport.MutexTransport.oneway(MutexTransport.java:66)
org.apache.activemq.transport.ResponseCorrelator.oneway(ResponseCorrelator.java:60)
org.apache.activemq.broker.TransportConnection.dispatch(TransportConnection.java:1370)
org.apache.activemq.broker.TransportConnection.processDispatch(TransportConnection.java:889)
org.apache.activemq.broker.TransportConnection.dispatchSync(TransportConnection.java:849)
org.apache.activemq.broker.TransportConnection$1.onCommand(TransportConnection.java:150)
org.apache.activemq.transport.ResponseCorrelator.onCommand(ResponseCorrelator.java:116)
org.apache.activemq.transport.MutexTransport.onCommand(MutexTransport.java:50)
org.apache.activemq.transport.vm.VMTransport.doDispatch(VMTransport.java:138)
org.apache.activemq.transport.vm.VMTransport.dispatch(VMTransport.java:130)
   - locked java.util.concurrent.atomic.AtomicBoolean@56b2aa50
org.apache.activemq.transport.vm.VMTransport.oneway(VMTransport.java:107)
org.apache.activemq.transport.MutexTransport.oneway(MutexTransport.java:68)
org.apache.activemq.transport.ResponseCorrelator.asyncRequest(ResponseCorrelator.java:81)
org.apache.activemq.transport.ResponseCorrelator.request(ResponseCorrelator.java:86)
org.apache.activemq.network.DemandForwardingBridgeSupport.addSubscription(DemandForwardingBridgeSupport.java:905)
org.apache.activemq.network.DemandForwardingBridgeSupport.addConsumerInfo(DemandForwardingBridgeSupport.java:1178)
org.apache.activemq.network.DemandForwardingBridgeSupport.serviceRemoteConsumerAdvisory(DemandForwardingBridgeSupport.java:763)
   - locked java.net.URI@5602eaf4
org.apache.activemq.network.DemandForwardingBridgeSupport.serviceRemoteCommand(DemandForwardingBridgeSupport.java:614)
org.apache.activemq.network.DemandForwardingBridgeSupport$3.onCommand(DemandForwardingBridgeSupport.java:224)
org.apache.activemq.transport.ResponseCorrelator.onCommand(ResponseCorrelator.java:116)
org.apache.activemq.transport.MutexTransport.onCommand(MutexTransport.java:50)
org.apache.activemq.transport.failover.FailoverTransport$3.onCommand(FailoverTransport.java:208)
org.apache.activemq.transport.WireFormatNegotiator.onCommand(WireFormatNegotiator.java:113)
org.apache.activemq.transport.AbstractInactivityMonitor.onCommand(AbstractInactivityMonitor.java:270)
org.apache.activemq.transport.TransportSupport.doConsume(Transport

[jira] [Updated] (AMQ-5260) Cross talk over duplex network connection can lead to blocking (alternative take)

2014-07-16 Thread matteo rulli (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

matteo rulli updated AMQ-5260:
--

Affects Version/s: 5.10.0


[jira] [Comment Edited] (AMQ-5260) Cross talk over duplex network connection can lead to blocking (alternative take)

2014-07-11 Thread matteo rulli (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14058553#comment-14058553
 ] 

matteo rulli edited comment on AMQ-5260 at 7/11/14 8:39 AM:


I tried to fix the second deadlock with the patch contained in 
*AMQ_5260_2.patch*.

After a few hours I ran into another deadlock: this is due to the fact that 
_org.apache.activemq.transport.MutexTransport.writeLock_ is not static. This 
causes different transport instances (in our case VMTransport and the 
failover/OpenWire transport) to cross different writeLock barriers in a 
different order (see the stack traces below).

STACKTRACE 1
{noformat}
Name: ActiveMQ BrokerService[master2] Task-119
State: WAITING on java.util.concurrent.locks.ReentrantLock$NonfairSync@766eae77 
owned by: ActiveMQ Transport: tcp:///10.0.1.219:61616@64702
Total blocked: 1  Total waited: 94

Stack trace: 
 sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.park(Unknown Source)
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(Unknown
 Source)
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(Unknown 
Source)
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(Unknown Source)
java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(Unknown Source)
java.util.concurrent.locks.ReentrantLock.lock(Unknown Source)
org.apache.activemq.transport.MutexTransport.oneway(MutexTransport.java:66)
org.apache.activemq.transport.ResponseCorrelator.oneway(ResponseCorrelator.java:60)
org.apache.activemq.network.DemandForwardingBridgeSupport.serviceLocalCommand(DemandForwardingBridgeSupport.java:930)
org.apache.activemq.network.DemandForwardingBridgeSupport$2.onCommand(DemandForwardingBridgeSupport.java:177)
org.apache.activemq.transport.ResponseCorrelator.onCommand(ResponseCorrelator.java:116)
org.apache.activemq.transport.MutexTransport.onCommand(MutexTransport.java:50)
org.apache.activemq.transport.vm.VMTransport.doDispatch(VMTransport.java:170)
org.apache.activemq.transport.vm.VMTransport.dispatch(VMTransport.java:157)
org.apache.activemq.transport.vm.VMTransport.oneway(VMTransport.java:112)
org.apache.activemq.transport.MutexTransport.oneway(MutexTransport.java:68)
org.apache.activemq.transport.ResponseCorrelator.oneway(ResponseCorrelator.java:60)
org.apache.activemq.broker.TransportConnection.dispatch(TransportConnection.java:1339)
org.apache.activemq.broker.TransportConnection.processDispatch(TransportConnection.java:858)
org.apache.activemq.broker.TransportConnection.iterate(TransportConnection.java:904)
org.apache.activemq.thread.PooledTaskRunner.runTask(PooledTaskRunner.java:129)
org.apache.activemq.thread.PooledTaskRunner$1.run(PooledTaskRunner.java:47)
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
java.lang.Thread.run(Unknown Source)
{noformat}



STACKTRACE 2

{noformat}
Name: ActiveMQ Transport: tcp:///10.0.1.219:61616@64702
State: WAITING on java.util.concurrent.locks.ReentrantLock$NonfairSync@61e2cf66 
owned by: ActiveMQ BrokerService[master2] Task-119
Total blocked: 8  Total waited: 22

Stack trace: 
 sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.park(Unknown Source)
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(Unknown
 Source)
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(Unknown 
Source)
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(Unknown Source)
java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(Unknown Source)
java.util.concurrent.locks.ReentrantLock.lock(Unknown Source)
org.apache.activemq.transport.MutexTransport.oneway(MutexTransport.java:66)
org.apache.activemq.transport.ResponseCorrelator.oneway(ResponseCorrelator.java:60)
org.apache.activemq.broker.TransportConnection.dispatch(TransportConnection.java:1339)
org.apache.activemq.broker.TransportConnection.processDispatch(TransportConnection.java:858)
org.apache.activemq.broker.TransportConnection.dispatchSync(TransportConnection.java:818)
org.apache.activemq.broker.TransportConnection$1.onCommand(TransportConnection.java:151)
org.apache.activemq.transport.ResponseCorrelator.onCommand(ResponseCorrelator.java:116)
org.apache.activemq.transport.MutexTransport.onCommand(MutexTransport.java:50)
org.apache.activemq.transport.vm.VMTransport.doDispatch(VMTransport.java:170)
org.apache.activemq.transport.vm.VMTransport.dispatch(VMTransport.java:157)
org.apache.activemq.transport.vm.VMTransport.oneway(VMTransport.java:112)
org.apache.activemq.transport.MutexTransport.oneway(MutexTransport.java:68)
org.apache.activemq.transport.ResponseCorrelator.asyncRequest(ResponseCorrelator.java:81)
org.apache.activemq.transport.ResponseCorrelator.request(ResponseCorrelator.java:86)
org.apache.activemq.network.DemandForwardingBridgeSupport.a


[jira] [Updated] (AMQ-5260) Cross talk over duplex network connection can lead to blocking (alternative take)

2014-07-11 Thread matteo rulli (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

matteo rulli updated AMQ-5260:
--

Attachment: AMQ_5260_3.patch
AMQ_5260_2.patch
AMQ_5260.patch


[jira] [Comment Edited] (AMQ-5260) Cross talk over duplex network connection can lead to blocking (alternative take)

2014-07-11 Thread matteo rulli (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14057439#comment-14057439
 ] 

matteo rulli edited comment on AMQ-5260 at 7/11/14 8:34 AM:


I tried with the following patch (see patch attachment *AMQ_5260.patch*):
{noformat}
--- \activemq-parent-5.9.0-orig\activemq-broker\src\main\java\org\apache\activemq\network\DemandForwardingBridgeSupport.java	Tue Oct 15 00:41:46 2013
+++ \activemq-parent-5.9.0\activemq-broker\src\main\java\org\apache\activemq\network\DemandForwardingBridgeSupport.java	Thu Jul 10 11:51:44 2014
@@ -30,6 +30,8 @@
 import java.util.concurrent.TimeoutException;
 import java.util.concurrent.atomic.AtomicBoolean;
 import java.util.concurrent.atomic.AtomicLong;
+import java.util.concurrent.locks.Lock;
+import java.util.concurrent.locks.ReentrantLock;
 
 import javax.management.ObjectName;
 
@@ -125,6 +127,8 @@
     private Transport duplexInboundLocalBroker = null;
     private ProducerInfo duplexInboundLocalProducerInfo;
 
+    private static final Lock consumerInfoLock = new ReentrantLock();
+
     public DemandForwardingBridgeSupport(NetworkBridgeConfiguration configuration, Transport localBroker, Transport remoteBroker) {
         this.configuration = configuration;
         this.localBroker = localBroker;
@@ -708,11 +712,16 @@
                 return;
             }
 
+            /* ----- AMQ-5260 start ----- */
             // in a cyclic network there can be multiple bridges per broker that can propagate
             // a network subscription so there is a need to synchronize on a shared entity
-            synchronized (brokerService.getVmConnectorURI()) {
-                addConsumerInfo(info);
-            }
+            //synchronized (brokerService.getVmConnectorURI()) {
+            //    addConsumerInfo(info);
+            //}
+            // the lock has been moved into the addConsumerInfo method to overcome AMQ-5260
+            addConsumerInfo(info);
+            /* ----- AMQ-5260 end ----- */
+
         } else if (data.getClass() == DestinationInfo.class) {
             // It's a destination info - we want to pass up information about temporary destinations
             final DestinationInfo destInfo = (DestinationInfo) data;
@@ -1115,20 +1124,31 @@
     }
 
     protected void addConsumerInfo(final ConsumerInfo consumerInfo) throws IOException {
-        ConsumerInfo info = consumerInfo.copy();
-        addRemoteBrokerToBrokerPath(info);
-        DemandSubscription sub = createDemandSubscription(info);
-        if (sub != null) {
-            if (duplicateSuppressionIsRequired(sub)) {
-                undoMapRegistration(sub);
-            } else {
-                if (consumerInfo.isDurable()) {
-                    sub.getDurableRemoteSubs().add(new SubscriptionInfo(sub.getRemoteInfo().getClientId(), consumerInfo.getSubscriptionName()));
-                }
-                addSubscription(sub);
-                LOG.debug("{} new demand subscription: {}", configuration.getBrokerName(), sub);
-            }
-        }
+        boolean addSubscription = false;
+        DemandSubscription sub = null;
+        consumerInfoLock.lock();
+        try {
+            ConsumerInfo info = consumerInfo.copy();
+            addRemoteBrokerToBrokerPath(info);
+            sub = createDemandSubscription(info);
+            if (sub != null) {
+                if (duplicateSuppressionIsRequired(sub)) {
+                    undoMapRegistration(sub);
+                } else {
+                    if (consumerInfo.isDurable()) {
+                        sub.getDurableRemoteSubs().add(new SubscriptionInfo(sub.getRemoteInfo().getClientId(), consumerInfo.getSubscriptionName()));
+                    }
+                    addSubscription = true;
+                    LOG.debug("{} new demand subscription: {}", configuration.getBrokerName(), sub);
+                }
+            }
+        } finally {
+            consumerInfoLock.unlock();
+        }
+        if (addSubscription && sub != null) {
+            addSubscription(sub);
+            LOG.debug("{} new demand subscription: {} has been added", configuration.getBrokerName(), sub);
+        }
     }
 
     private void undoMapRegistration(DemandSubscription sub) {
{noformat}
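
The essence of the patch is a lock-narrowing pattern: the new ReentrantLock guards only the in-memory bookkeeping (the ConsumerInfo copy, duplicate suppression, map registration), while the blocking addSubscription() round trip is issued only after the lock has been released. Below is a minimal, self-contained sketch of that pattern; the class and the blockingSend placeholder are illustrative stand-ins, not ActiveMQ code:
{code}
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

// Sketch of the lock-narrowing idea: mutate shared state under the lock,
// but perform the blocking transport request only after unlocking, so no
// thread ever waits on the transport while holding the shared lock.
public class LockNarrowingSketch {
    private static final Lock consumerInfoLock = new ReentrantLock();
    private final Set<String> registered = new HashSet<>();

    public void addConsumer(String consumerId) throws InterruptedException {
        boolean mustPropagate;
        consumerInfoLock.lock();
        try {
            // in-memory bookkeeping only (stands in for the copy /
            // duplicate-suppression / registration work in addConsumerInfo)
            mustPropagate = registered.add(consumerId);
        } finally {
            consumerInfoLock.unlock();
        }
        if (mustPropagate) {
            // blocking round trip (stands in for addSubscription ->
            // ResponseCorrelator.request), issued with no lock held
            blockingSend(consumerId);
        }
    }

    private void blockingSend(String consumerId) throws InterruptedException {
        Thread.sleep(10); // placeholder for the synchronous transport call
        System.out.println("propagated subscription for " + consumerId);
    }

    public static void main(String[] args) throws InterruptedException {
        new LockNarrowingSketch().addConsumer("consumer-1");
    }
}
{code}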

But then I ran into another deadlock:

STACKTRACE 1:
{noformat}
Name: ActiveMQ BrokerService[master2] Task-106
State: WAITING on java.util.concurrent.locks.ReentrantLock$NonfairSync@2c8aad83 
owned by: ActiveMQ Transport: tcp:///10.0.1.219:61616@64215
Total blocked: 0  Total waited: 6

Stack trace: 
 sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.park(Unknown Source)
java.

[jira] [Commented] (AMQ-5260) Cross talk over duplex network connection can lead to blocking (alternative take)

2014-07-10 Thread matteo rulli (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14057439#comment-14057439
 ] 

matteo rulli commented on AMQ-5260:
---

I tried with the following patch:
{noformat}
--- \activemq-parent-5.9.0-orig\activemq-broker\src\main\java\org\apache\activemq\network\DemandForwardingBridgeSupport.java	Tue Oct 15 00:41:46 2013
+++ \activemq-parent-5.9.0\activemq-broker\src\main\java\org\apache\activemq\network\DemandForwardingBridgeSupport.java	Thu Jul 10 11:51:44 2014
@@ -30,6 +30,8 @@
 import java.util.concurrent.TimeoutException;
 import java.util.concurrent.atomic.AtomicBoolean;
 import java.util.concurrent.atomic.AtomicLong;
+import java.util.concurrent.locks.Lock;
+import java.util.concurrent.locks.ReentrantLock;
 
 import javax.management.ObjectName;
 
@@ -125,6 +127,8 @@
     private Transport duplexInboundLocalBroker = null;
     private ProducerInfo duplexInboundLocalProducerInfo;
 
+    private static final Lock consumerInfoLock = new ReentrantLock();
+
     public DemandForwardingBridgeSupport(NetworkBridgeConfiguration configuration, Transport localBroker, Transport remoteBroker) {
         this.configuration = configuration;
         this.localBroker = localBroker;
@@ -708,11 +712,16 @@
                 return;
             }
 
+            /* ----- AMQ-5260 start ----- */
             // in a cyclic network there can be multiple bridges per broker that can propagate
             // a network subscription so there is a need to synchronize on a shared entity
-            synchronized (brokerService.getVmConnectorURI()) {
-                addConsumerInfo(info);
-            }
+            //synchronized (brokerService.getVmConnectorURI()) {
+            //    addConsumerInfo(info);
+            //}
+            // the lock has been moved into the addConsumerInfo method to overcome AMQ-5260
+            addConsumerInfo(info);
+            /* ----- AMQ-5260 end ----- */
+
         } else if (data.getClass() == DestinationInfo.class) {
             // It's a destination info - we want to pass up information about temporary destinations
             final DestinationInfo destInfo = (DestinationInfo) data;
@@ -1115,20 +1124,31 @@
     }
 
     protected void addConsumerInfo(final ConsumerInfo consumerInfo) throws IOException {
-        ConsumerInfo info = consumerInfo.copy();
-        addRemoteBrokerToBrokerPath(info);
-        DemandSubscription sub = createDemandSubscription(info);
-        if (sub != null) {
-            if (duplicateSuppressionIsRequired(sub)) {
-                undoMapRegistration(sub);
-            } else {
-                if (consumerInfo.isDurable()) {
-                    sub.getDurableRemoteSubs().add(new SubscriptionInfo(sub.getRemoteInfo().getClientId(), consumerInfo.getSubscriptionName()));
-                }
-                addSubscription(sub);
-                LOG.debug("{} new demand subscription: {}", configuration.getBrokerName(), sub);
-            }
-        }
+        boolean addSubscription = false;
+        DemandSubscription sub = null;
+        consumerInfoLock.lock();
+        try {
+            ConsumerInfo info = consumerInfo.copy();
+            addRemoteBrokerToBrokerPath(info);
+            sub = createDemandSubscription(info);
+            if (sub != null) {
+                if (duplicateSuppressionIsRequired(sub)) {
+                    undoMapRegistration(sub);
+                } else {
+                    if (consumerInfo.isDurable()) {
+                        sub.getDurableRemoteSubs().add(new SubscriptionInfo(sub.getRemoteInfo().getClientId(), consumerInfo.getSubscriptionName()));
+                    }
+                    addSubscription = true;
+                    LOG.debug("{} new demand subscription: {}", configuration.getBrokerName(), sub);
+                }
+            }
+        } finally {
+            consumerInfoLock.unlock();
+        }
+        if (addSubscription && sub != null) {
+            addSubscription(sub);
+            LOG.debug("{} new demand subscription: {} has been added", configuration.getBrokerName(), sub);
+        }
     }
 
     private void undoMapRegistration(DemandSubscription sub) {
{noformat}

But then I ran into another deadlock:

STACKTRACE 1:
{noformat}
Name: ActiveMQ BrokerService[master2] Task-106
State: WAITING on java.util.concurrent.locks.ReentrantLock$NonfairSync@2c8aad83 
owned by: ActiveMQ Transport: tcp:///10.0.1.219:61616@64215
Total blocked: 0  Total waited: 6

Stack trace: 
 sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.park(Unknown Source)
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(Unknown
 Source)
j

[jira] [Comment Edited] (AMQ-5260) Cross talk over duplex network connection can lead to blocking (alternative take)

2014-07-04 Thread matteo rulli (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14052479#comment-14052479
 ] 

matteo rulli edited comment on AMQ-5260 at 7/4/14 2:41 PM:
---

This is what one sees in remote debug if one stops the thread _ActiveMQ NIO Worker 12_ reported in the stack traces above:

!debug.jpg!


*DemandForwardingBridgeSupport.java, line 714*


was (Author: matteor):
This is what one sees in remote debug if one stops the thread _ActiveMQ NIO Worker 12_ reported in the stack traces above:

!debug.jpg!


> Cross talk over duplex network connection can lead to blocking (alternative 
> take)
> -
>
> Key: AMQ-5260
> URL: https://issues.apache.org/jira/browse/AMQ-5260
> Project: ActiveMQ
>  Issue Type: Bug
>Affects Versions: 5.9.0
>Reporter: matteo rulli
> Attachments: deadlock.jpg, debug.jpg
>
>
> Essentially the same description as AMQ-4328. 
> 
> !deadlock.jpg!
> h2. Stacktraces:
> Stacktrace no.1:
> {noformat}
> Name: ActiveMQ NIO Worker 12
> State: BLOCKED on java.net.URI@1bae2b28 owned by: ActiveMQ Transport: 
> tcp:///10.0.1.219:61616@57789
> Total blocked: 2  Total waited: 67
> Stack trace: 
>  
> org.apache.activemq.network.DemandForwardingBridgeSupport.serviceRemoteConsumerAdvisory(DemandForwardingBridgeSupport.java:714)
> org.apache.activemq.network.DemandForwardingBridgeSupport.serviceRemoteCommand(DemandForwardingBridgeSupport.java:581)
> org.apache.activemq.network.DemandForwardingBridgeSupport$3.onCommand(DemandForwardingBridgeSupport.java:191)
> org.apache.activemq.transport.ResponseCorrelator.onCommand(ResponseCorrelator.java:116)
> org.apache.activemq.transport.MutexTransport.onCommand(MutexTransport.java:50)
> org.apache.activemq.transport.WireFormatNegotiator.onCommand(WireFormatNegotiator.java:113)
> org.apache.activemq.transport.AbstractInactivityMonitor.onCommand(AbstractInactivityMonitor.java:270)
> org.apache.activemq.transport.TransportSupport.doConsume(TransportSupport.java:83)
> org.apache.activemq.transport.nio.NIOTransport.serviceRead(NIOTransport.java:138)
> org.apache.activemq.transport.nio.NIOTransport$1.onSelect(NIOTransport.java:69)
> org.apache.activemq.transport.nio.SelectorSelection.onSelect(SelectorSelection.java:94)
> org.apache.activemq.transport.nio.SelectorWorker$1.run(SelectorWorker.java:119)
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
> java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
> java.lang.Thread.run(Unknown Source)
> {noformat}
> 
> stack trace no.2
> {noformat}
> Name: ActiveMQ Transport: tcp:///10.0.1.219:61616@57789
> State: WAITING on 
> java.util.concurrent.locks.ReentrantLock$NonfairSync@3cdbfa3e owned by: 
> ActiveMQ BrokerService[master2] Task-4
> Total blocked: 19  Total waited: 3
> Stack trace: 
>  sun.misc.Unsafe.park(Native Method)
> java.util.concurrent.locks.LockSupport.park(Unknown Source)
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(Unknown
>  Source)
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(Unknown 
> Source)
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(Unknown Source)
> java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(Unknown Source)
> java.util.concurrent.locks.ReentrantLock.lock(Unknown Source)
> org.apache.activemq.transport.MutexTransport.oneway(MutexTransport.java:66)
> org.apache.activemq.transport.ResponseCorrelator.oneway(ResponseCorrelator.java:60)
> org.apache.activemq.broker.TransportConnection.dispatch(TransportConnection.java:1339)
> org.apache.activemq.broker.TransportConnection.processDispatch(TransportConnection.java:858)
> org.apache.activemq.broker.TransportConnection.dispatchSync(TransportConnection.java:818)
> org.apache.activemq.broker.TransportConnection$1.onCommand(TransportConnection.java:151)
> org.apache.activemq.transport.ResponseCorrelator.onCommand(ResponseCorrelator.java:116)
> org.apache.activemq.transport.MutexTransport.onCommand(MutexTransport.java:50)
> org.apache.activemq.transport.vm.VMTransport.doDispatch(VMTransport.java:138)
> org.apache.activemq.transport.vm.VMTransport.dispatch(VMTransport.java:127)
>- locked java.util.concurrent.atomic.AtomicBoolean@689389da
> org.apache.activemq.transport.vm.VMTransport.oneway(VMTransport.java:104)
> org.apache.activemq.transport.MutexTransport.oneway(MutexTransport.java:68)
> org.apache.activemq.transport.ResponseCorrelator.asyncRequest(ResponseCorrelator.java:81)
> org.apache.activemq.transport.ResponseCorrelator.request(ResponseCorrelator.java:86)
> org.apache.activemq.network.DemandForwardingBridgeSupport.addSubscription(DemandForwardingBridgeSupport.java:856)
> org.apache.activemq.network.DemandForwardingBridgeSupport.addConsumerInf

[jira] [Updated] (AMQ-5260) Cross talk over duplex network connection can lead to blocking (alternative take)

2014-07-04 Thread matteo rulli (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

matteo rulli updated AMQ-5260:
--

Attachment: debug.jpg

> Cross talk over duplex network connection can lead to blocking (alternative 
> take)
> -
>
> Key: AMQ-5260
> URL: https://issues.apache.org/jira/browse/AMQ-5260
> Project: ActiveMQ
>  Issue Type: Bug
>Affects Versions: 5.9.0
>Reporter: matteo rulli
> Attachments: deadlock.jpg, debug.jpg
>
>
> Essentially the same description as AMQ-4328. 
> 
> !deadlock.jpg!
> h2. Stacktraces:
> Stacktrace no.1:
> {noformat}
> Name: ActiveMQ NIO Worker 12
> State: BLOCKED on java.net.URI@1bae2b28 owned by: ActiveMQ Transport: 
> tcp:///10.0.1.219:61616@57789
> Total blocked: 2  Total waited: 67
> Stack trace: 
>  
> org.apache.activemq.network.DemandForwardingBridgeSupport.serviceRemoteConsumerAdvisory(DemandForwardingBridgeSupport.java:714)
> org.apache.activemq.network.DemandForwardingBridgeSupport.serviceRemoteCommand(DemandForwardingBridgeSupport.java:581)
> org.apache.activemq.network.DemandForwardingBridgeSupport$3.onCommand(DemandForwardingBridgeSupport.java:191)
> org.apache.activemq.transport.ResponseCorrelator.onCommand(ResponseCorrelator.java:116)
> org.apache.activemq.transport.MutexTransport.onCommand(MutexTransport.java:50)
> org.apache.activemq.transport.WireFormatNegotiator.onCommand(WireFormatNegotiator.java:113)
> org.apache.activemq.transport.AbstractInactivityMonitor.onCommand(AbstractInactivityMonitor.java:270)
> org.apache.activemq.transport.TransportSupport.doConsume(TransportSupport.java:83)
> org.apache.activemq.transport.nio.NIOTransport.serviceRead(NIOTransport.java:138)
> org.apache.activemq.transport.nio.NIOTransport$1.onSelect(NIOTransport.java:69)
> org.apache.activemq.transport.nio.SelectorSelection.onSelect(SelectorSelection.java:94)
> org.apache.activemq.transport.nio.SelectorWorker$1.run(SelectorWorker.java:119)
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
> java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
> java.lang.Thread.run(Unknown Source)
> {noformat}
> 
> stack trace no.2
> {noformat}
> Name: ActiveMQ Transport: tcp:///10.0.1.219:61616@57789
> State: WAITING on 
> java.util.concurrent.locks.ReentrantLock$NonfairSync@3cdbfa3e owned by: 
> ActiveMQ BrokerService[master2] Task-4
> Total blocked: 19  Total waited: 3
> Stack trace: 
>  sun.misc.Unsafe.park(Native Method)
> java.util.concurrent.locks.LockSupport.park(Unknown Source)
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(Unknown
>  Source)
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(Unknown 
> Source)
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(Unknown Source)
> java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(Unknown Source)
> java.util.concurrent.locks.ReentrantLock.lock(Unknown Source)
> org.apache.activemq.transport.MutexTransport.oneway(MutexTransport.java:66)
> org.apache.activemq.transport.ResponseCorrelator.oneway(ResponseCorrelator.java:60)
> org.apache.activemq.broker.TransportConnection.dispatch(TransportConnection.java:1339)
> org.apache.activemq.broker.TransportConnection.processDispatch(TransportConnection.java:858)
> org.apache.activemq.broker.TransportConnection.dispatchSync(TransportConnection.java:818)
> org.apache.activemq.broker.TransportConnection$1.onCommand(TransportConnection.java:151)
> org.apache.activemq.transport.ResponseCorrelator.onCommand(ResponseCorrelator.java:116)
> org.apache.activemq.transport.MutexTransport.onCommand(MutexTransport.java:50)
> org.apache.activemq.transport.vm.VMTransport.doDispatch(VMTransport.java:138)
> org.apache.activemq.transport.vm.VMTransport.dispatch(VMTransport.java:127)
>- locked java.util.concurrent.atomic.AtomicBoolean@689389da
> org.apache.activemq.transport.vm.VMTransport.oneway(VMTransport.java:104)
> org.apache.activemq.transport.MutexTransport.oneway(MutexTransport.java:68)
> org.apache.activemq.transport.ResponseCorrelator.asyncRequest(ResponseCorrelator.java:81)
> org.apache.activemq.transport.ResponseCorrelator.request(ResponseCorrelator.java:86)
> org.apache.activemq.network.DemandForwardingBridgeSupport.addSubscription(DemandForwardingBridgeSupport.java:856)
> org.apache.activemq.network.DemandForwardingBridgeSupport.addConsumerInfo(DemandForwardingBridgeSupport.java:1128)
> org.apache.activemq.network.DemandForwardingBridgeSupport.serviceRemoteConsumerAdvisory(DemandForwardingBridgeSupport.java:714)
>- locked java.net.URI@1bae2b28
> org.apache.activemq.network.DemandForwardingBridgeSupport.serviceRemoteCommand(DemandForwardingBridgeSupport.java:581)
> org.apache.activemq.network.DemandForwardingBridgeSupport$3.onCommand(DemandForwardin

[jira] [Comment Edited] (AMQ-5260) Cross talk over duplex network connection can lead to blocking (alternative take)

2014-07-04 Thread matteo rulli (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14052479#comment-14052479
 ] 

matteo rulli edited comment on AMQ-5260 at 7/4/14 2:40 PM:
---

This is what one sees in remote debug if one stops the thread _ActiveMQ NIO Worker 12_ reported in the stack traces above:

!debug.jpg!



was (Author: matteor):
This is what one sees in remote debug if one stops the thread reported in the stack traces above:

!debug.jpg!


> Cross talk over duplex network connection can lead to blocking (alternative 
> take)
> -
>
> Key: AMQ-5260
> URL: https://issues.apache.org/jira/browse/AMQ-5260
> Project: ActiveMQ
>  Issue Type: Bug
>Affects Versions: 5.9.0
>Reporter: matteo rulli
> Attachments: deadlock.jpg, debug.jpg
>
>
> Essentially the same description as AMQ-4328. 
> 
> !deadlock.jpg!
> h2. Stacktraces:
> Stacktrace no.1:
> {noformat}
> Name: ActiveMQ NIO Worker 12
> State: BLOCKED on java.net.URI@1bae2b28 owned by: ActiveMQ Transport: 
> tcp:///10.0.1.219:61616@57789
> Total blocked: 2  Total waited: 67
> Stack trace: 
>  
> org.apache.activemq.network.DemandForwardingBridgeSupport.serviceRemoteConsumerAdvisory(DemandForwardingBridgeSupport.java:714)
> org.apache.activemq.network.DemandForwardingBridgeSupport.serviceRemoteCommand(DemandForwardingBridgeSupport.java:581)
> org.apache.activemq.network.DemandForwardingBridgeSupport$3.onCommand(DemandForwardingBridgeSupport.java:191)
> org.apache.activemq.transport.ResponseCorrelator.onCommand(ResponseCorrelator.java:116)
> org.apache.activemq.transport.MutexTransport.onCommand(MutexTransport.java:50)
> org.apache.activemq.transport.WireFormatNegotiator.onCommand(WireFormatNegotiator.java:113)
> org.apache.activemq.transport.AbstractInactivityMonitor.onCommand(AbstractInactivityMonitor.java:270)
> org.apache.activemq.transport.TransportSupport.doConsume(TransportSupport.java:83)
> org.apache.activemq.transport.nio.NIOTransport.serviceRead(NIOTransport.java:138)
> org.apache.activemq.transport.nio.NIOTransport$1.onSelect(NIOTransport.java:69)
> org.apache.activemq.transport.nio.SelectorSelection.onSelect(SelectorSelection.java:94)
> org.apache.activemq.transport.nio.SelectorWorker$1.run(SelectorWorker.java:119)
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
> java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
> java.lang.Thread.run(Unknown Source)
> {noformat}
> 
> stack trace no.2
> {noformat}
> Name: ActiveMQ Transport: tcp:///10.0.1.219:61616@57789
> State: WAITING on 
> java.util.concurrent.locks.ReentrantLock$NonfairSync@3cdbfa3e owned by: 
> ActiveMQ BrokerService[master2] Task-4
> Total blocked: 19  Total waited: 3
> Stack trace: 
>  sun.misc.Unsafe.park(Native Method)
> java.util.concurrent.locks.LockSupport.park(Unknown Source)
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(Unknown
>  Source)
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(Unknown 
> Source)
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(Unknown Source)
> java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(Unknown Source)
> java.util.concurrent.locks.ReentrantLock.lock(Unknown Source)
> org.apache.activemq.transport.MutexTransport.oneway(MutexTransport.java:66)
> org.apache.activemq.transport.ResponseCorrelator.oneway(ResponseCorrelator.java:60)
> org.apache.activemq.broker.TransportConnection.dispatch(TransportConnection.java:1339)
> org.apache.activemq.broker.TransportConnection.processDispatch(TransportConnection.java:858)
> org.apache.activemq.broker.TransportConnection.dispatchSync(TransportConnection.java:818)
> org.apache.activemq.broker.TransportConnection$1.onCommand(TransportConnection.java:151)
> org.apache.activemq.transport.ResponseCorrelator.onCommand(ResponseCorrelator.java:116)
> org.apache.activemq.transport.MutexTransport.onCommand(MutexTransport.java:50)
> org.apache.activemq.transport.vm.VMTransport.doDispatch(VMTransport.java:138)
> org.apache.activemq.transport.vm.VMTransport.dispatch(VMTransport.java:127)
>- locked java.util.concurrent.atomic.AtomicBoolean@689389da
> org.apache.activemq.transport.vm.VMTransport.oneway(VMTransport.java:104)
> org.apache.activemq.transport.MutexTransport.oneway(MutexTransport.java:68)
> org.apache.activemq.transport.ResponseCorrelator.asyncRequest(ResponseCorrelator.java:81)
> org.apache.activemq.transport.ResponseCorrelator.request(ResponseCorrelator.java:86)
> org.apache.activemq.network.DemandForwardingBridgeSupport.addSubscription(DemandForwardingBridgeSupport.java:856)
> org.apache.activemq.network.DemandForwardingBridgeSupport.addConsumerInfo(DemandForwardingBridgeSupport.java:1128)
> org.apache.activemq.network.

[jira] [Commented] (AMQ-5260) Cross talk over duplex network connection can lead to blocking (alternative take)

2014-07-04 Thread matteo rulli (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14052479#comment-14052479
 ] 

matteo rulli commented on AMQ-5260:
---

This is what one sees in remote debug if one stops the thread reported in the stack traces above:

!debug.jpg!


> Cross talk over duplex network connection can lead to blocking (alternative 
> take)
> -
>
> Key: AMQ-5260
> URL: https://issues.apache.org/jira/browse/AMQ-5260
> Project: ActiveMQ
>  Issue Type: Bug
>Affects Versions: 5.9.0
>Reporter: matteo rulli
> Attachments: deadlock.jpg
>
>
> Essentially the same description as AMQ-4328. 
> 
> !deadlock.jpg!
> h2. Stacktraces:
> Stacktrace no.1:
> {noformat}
> Name: ActiveMQ NIO Worker 12
> State: BLOCKED on java.net.URI@1bae2b28 owned by: ActiveMQ Transport: 
> tcp:///10.0.1.219:61616@57789
> Total blocked: 2  Total waited: 67
> Stack trace: 
>  
> org.apache.activemq.network.DemandForwardingBridgeSupport.serviceRemoteConsumerAdvisory(DemandForwardingBridgeSupport.java:714)
> org.apache.activemq.network.DemandForwardingBridgeSupport.serviceRemoteCommand(DemandForwardingBridgeSupport.java:581)
> org.apache.activemq.network.DemandForwardingBridgeSupport$3.onCommand(DemandForwardingBridgeSupport.java:191)
> org.apache.activemq.transport.ResponseCorrelator.onCommand(ResponseCorrelator.java:116)
> org.apache.activemq.transport.MutexTransport.onCommand(MutexTransport.java:50)
> org.apache.activemq.transport.WireFormatNegotiator.onCommand(WireFormatNegotiator.java:113)
> org.apache.activemq.transport.AbstractInactivityMonitor.onCommand(AbstractInactivityMonitor.java:270)
> org.apache.activemq.transport.TransportSupport.doConsume(TransportSupport.java:83)
> org.apache.activemq.transport.nio.NIOTransport.serviceRead(NIOTransport.java:138)
> org.apache.activemq.transport.nio.NIOTransport$1.onSelect(NIOTransport.java:69)
> org.apache.activemq.transport.nio.SelectorSelection.onSelect(SelectorSelection.java:94)
> org.apache.activemq.transport.nio.SelectorWorker$1.run(SelectorWorker.java:119)
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
> java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
> java.lang.Thread.run(Unknown Source)
> {noformat}
> 
> stack trace no.2
> {noformat}
> Name: ActiveMQ Transport: tcp:///10.0.1.219:61616@57789
> State: WAITING on 
> java.util.concurrent.locks.ReentrantLock$NonfairSync@3cdbfa3e owned by: 
> ActiveMQ BrokerService[master2] Task-4
> Total blocked: 19  Total waited: 3
> Stack trace: 
>  sun.misc.Unsafe.park(Native Method)
> java.util.concurrent.locks.LockSupport.park(Unknown Source)
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(Unknown
>  Source)
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(Unknown 
> Source)
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(Unknown Source)
> java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(Unknown Source)
> java.util.concurrent.locks.ReentrantLock.lock(Unknown Source)
> org.apache.activemq.transport.MutexTransport.oneway(MutexTransport.java:66)
> org.apache.activemq.transport.ResponseCorrelator.oneway(ResponseCorrelator.java:60)
> org.apache.activemq.broker.TransportConnection.dispatch(TransportConnection.java:1339)
> org.apache.activemq.broker.TransportConnection.processDispatch(TransportConnection.java:858)
> org.apache.activemq.broker.TransportConnection.dispatchSync(TransportConnection.java:818)
> org.apache.activemq.broker.TransportConnection$1.onCommand(TransportConnection.java:151)
> org.apache.activemq.transport.ResponseCorrelator.onCommand(ResponseCorrelator.java:116)
> org.apache.activemq.transport.MutexTransport.onCommand(MutexTransport.java:50)
> org.apache.activemq.transport.vm.VMTransport.doDispatch(VMTransport.java:138)
> org.apache.activemq.transport.vm.VMTransport.dispatch(VMTransport.java:127)
>- locked java.util.concurrent.atomic.AtomicBoolean@689389da
> org.apache.activemq.transport.vm.VMTransport.oneway(VMTransport.java:104)
> org.apache.activemq.transport.MutexTransport.oneway(MutexTransport.java:68)
> org.apache.activemq.transport.ResponseCorrelator.asyncRequest(ResponseCorrelator.java:81)
> org.apache.activemq.transport.ResponseCorrelator.request(ResponseCorrelator.java:86)
> org.apache.activemq.network.DemandForwardingBridgeSupport.addSubscription(DemandForwardingBridgeSupport.java:856)
> org.apache.activemq.network.DemandForwardingBridgeSupport.addConsumerInfo(DemandForwardingBridgeSupport.java:1128)
> org.apache.activemq.network.DemandForwardingBridgeSupport.serviceRemoteConsumerAdvisory(DemandForwardingBridgeSupport.java:714)
>- locked java.net.URI@1bae2b28
> org.apache.activemq.network.DemandForwardingBridgeSupport.serviceRemoteCommand

[jira] [Updated] (AMQ-5260) Cross talk over duplex network connection can lead to blocking (alternative take)

2014-07-04 Thread matteo rulli (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

matteo rulli updated AMQ-5260:
--

Description: 
Essentially the same description as AMQ-4328. 

!deadlock.jpg!

h2. Stacktraces:
Stacktrace no.1:
{noformat}
Name: ActiveMQ NIO Worker 12
State: BLOCKED on java.net.URI@1bae2b28 owned by: ActiveMQ Transport: 
tcp:///10.0.1.219:61616@57789
Total blocked: 2  Total waited: 67

Stack trace: 
 
org.apache.activemq.network.DemandForwardingBridgeSupport.serviceRemoteConsumerAdvisory(DemandForwardingBridgeSupport.java:714)
org.apache.activemq.network.DemandForwardingBridgeSupport.serviceRemoteCommand(DemandForwardingBridgeSupport.java:581)
org.apache.activemq.network.DemandForwardingBridgeSupport$3.onCommand(DemandForwardingBridgeSupport.java:191)
org.apache.activemq.transport.ResponseCorrelator.onCommand(ResponseCorrelator.java:116)
org.apache.activemq.transport.MutexTransport.onCommand(MutexTransport.java:50)
org.apache.activemq.transport.WireFormatNegotiator.onCommand(WireFormatNegotiator.java:113)
org.apache.activemq.transport.AbstractInactivityMonitor.onCommand(AbstractInactivityMonitor.java:270)
org.apache.activemq.transport.TransportSupport.doConsume(TransportSupport.java:83)
org.apache.activemq.transport.nio.NIOTransport.serviceRead(NIOTransport.java:138)
org.apache.activemq.transport.nio.NIOTransport$1.onSelect(NIOTransport.java:69)
org.apache.activemq.transport.nio.SelectorSelection.onSelect(SelectorSelection.java:94)
org.apache.activemq.transport.nio.SelectorWorker$1.run(SelectorWorker.java:119)
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
java.lang.Thread.run(Unknown Source)
{noformat}

stack trace no.2
{noformat}
Name: ActiveMQ Transport: tcp:///10.0.1.219:61616@57789
State: WAITING on java.util.concurrent.locks.ReentrantLock$NonfairSync@3cdbfa3e 
owned by: ActiveMQ BrokerService[master2] Task-4
Total blocked: 19  Total waited: 3

Stack trace: 
 sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.park(Unknown Source)
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(Unknown
 Source)
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(Unknown 
Source)
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(Unknown Source)
java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(Unknown Source)
java.util.concurrent.locks.ReentrantLock.lock(Unknown Source)
org.apache.activemq.transport.MutexTransport.oneway(MutexTransport.java:66)
org.apache.activemq.transport.ResponseCorrelator.oneway(ResponseCorrelator.java:60)
org.apache.activemq.broker.TransportConnection.dispatch(TransportConnection.java:1339)
org.apache.activemq.broker.TransportConnection.processDispatch(TransportConnection.java:858)
org.apache.activemq.broker.TransportConnection.dispatchSync(TransportConnection.java:818)
org.apache.activemq.broker.TransportConnection$1.onCommand(TransportConnection.java:151)
org.apache.activemq.transport.ResponseCorrelator.onCommand(ResponseCorrelator.java:116)
org.apache.activemq.transport.MutexTransport.onCommand(MutexTransport.java:50)
org.apache.activemq.transport.vm.VMTransport.doDispatch(VMTransport.java:138)
org.apache.activemq.transport.vm.VMTransport.dispatch(VMTransport.java:127)
   - locked java.util.concurrent.atomic.AtomicBoolean@689389da
org.apache.activemq.transport.vm.VMTransport.oneway(VMTransport.java:104)
org.apache.activemq.transport.MutexTransport.oneway(MutexTransport.java:68)
org.apache.activemq.transport.ResponseCorrelator.asyncRequest(ResponseCorrelator.java:81)
org.apache.activemq.transport.ResponseCorrelator.request(ResponseCorrelator.java:86)
org.apache.activemq.network.DemandForwardingBridgeSupport.addSubscription(DemandForwardingBridgeSupport.java:856)
org.apache.activemq.network.DemandForwardingBridgeSupport.addConsumerInfo(DemandForwardingBridgeSupport.java:1128)
org.apache.activemq.network.DemandForwardingBridgeSupport.serviceRemoteConsumerAdvisory(DemandForwardingBridgeSupport.java:714)
   - locked java.net.URI@1bae2b28
org.apache.activemq.network.DemandForwardingBridgeSupport.serviceRemoteCommand(DemandForwardingBridgeSupport.java:581)
org.apache.activemq.network.DemandForwardingBridgeSupport$3.onCommand(DemandForwardingBridgeSupport.java:191)
org.apache.activemq.transport.ResponseCorrelator.onCommand(ResponseCorrelator.java:116)
org.apache.activemq.transport.MutexTransport.onCommand(MutexTransport.java:50)
org.apache.activemq.transport.failover.FailoverTransport$3.onCommand(FailoverTransport.java:196)
org.apache.activemq.transport.WireFormatNegotiator.onCommand(WireFormatNegotiator.java:113)
org.apache.activemq.transport.AbstractInactivityMonitor.onCommand(AbstractInactivityMonitor.java:270)
org.apache.activemq.transport.TransportSupport.doConsume(TransportSupport.java:83)
org.apache.activemq.trans

[jira] [Updated] (AMQ-5260) Cross talk over duplex network connection can lead to blocking (alternative take)

2014-07-04 Thread matteo rulli (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

matteo rulli updated AMQ-5260:
--

Attachment: deadlock.jpg

> Cross talk over duplex network connection can lead to blocking (alternative 
> take)
> -
>
> Key: AMQ-5260
> URL: https://issues.apache.org/jira/browse/AMQ-5260
> Project: ActiveMQ
>  Issue Type: Bug
>Affects Versions: 5.9.0
>Reporter: matteo rulli
> Attachments: deadlock.jpg
>
>
> Essentially the same description as AMQ-4328. 
> 
> !deadlock.jpg!
> h2. Stacktraces:
> Stacktrace no.1:
> {noformat}
> Name: ActiveMQ NIO Worker 12
> State: BLOCKED on java.net.URI@1bae2b28 owned by: ActiveMQ Transport: 
> tcp:///10.0.1.219:61616@57789
> Total blocked: 2  Total waited: 67
> Stack trace: 
>  
> org.apache.activemq.network.DemandForwardingBridgeSupport.serviceRemoteConsumerAdvisory(DemandForwardingBridgeSupport.java:714)
> org.apache.activemq.network.DemandForwardingBridgeSupport.serviceRemoteCommand(DemandForwardingBridgeSupport.java:581)
> org.apache.activemq.network.DemandForwardingBridgeSupport$3.onCommand(DemandForwardingBridgeSupport.java:191)
> org.apache.activemq.transport.ResponseCorrelator.onCommand(ResponseCorrelator.java:116)
> org.apache.activemq.transport.MutexTransport.onCommand(MutexTransport.java:50)
> org.apache.activemq.transport.WireFormatNegotiator.onCommand(WireFormatNegotiator.java:113)
> org.apache.activemq.transport.AbstractInactivityMonitor.onCommand(AbstractInactivityMonitor.java:270)
> org.apache.activemq.transport.TransportSupport.doConsume(TransportSupport.java:83)
> org.apache.activemq.transport.nio.NIOTransport.serviceRead(NIOTransport.java:138)
> org.apache.activemq.transport.nio.NIOTransport$1.onSelect(NIOTransport.java:69)
> org.apache.activemq.transport.nio.SelectorSelection.onSelect(SelectorSelection.java:94)
> org.apache.activemq.transport.nio.SelectorWorker$1.run(SelectorWorker.java:119)
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
> java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
> java.lang.Thread.run(Unknown Source)
> {noformat}
> 
> stack trace no.2
> {noformat}
> Name: ActiveMQ Transport: tcp:///10.0.1.219:61616@57789
> State: WAITING on 
> java.util.concurrent.locks.ReentrantLock$NonfairSync@3cdbfa3e owned by: 
> ActiveMQ BrokerService[master2] Task-4
> Total blocked: 19  Total waited: 3
> Stack trace: 
>  sun.misc.Unsafe.park(Native Method)
> java.util.concurrent.locks.LockSupport.park(Unknown Source)
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(Unknown
>  Source)
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(Unknown 
> Source)
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(Unknown Source)
> java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(Unknown Source)
> java.util.concurrent.locks.ReentrantLock.lock(Unknown Source)
> org.apache.activemq.transport.MutexTransport.oneway(MutexTransport.java:66)
> org.apache.activemq.transport.ResponseCorrelator.oneway(ResponseCorrelator.java:60)
> org.apache.activemq.broker.TransportConnection.dispatch(TransportConnection.java:1339)
> org.apache.activemq.broker.TransportConnection.processDispatch(TransportConnection.java:858)
> org.apache.activemq.broker.TransportConnection.dispatchSync(TransportConnection.java:818)
> org.apache.activemq.broker.TransportConnection$1.onCommand(TransportConnection.java:151)
> org.apache.activemq.transport.ResponseCorrelator.onCommand(ResponseCorrelator.java:116)
> org.apache.activemq.transport.MutexTransport.onCommand(MutexTransport.java:50)
> org.apache.activemq.transport.vm.VMTransport.doDispatch(VMTransport.java:138)
> org.apache.activemq.transport.vm.VMTransport.dispatch(VMTransport.java:127)
>- locked java.util.concurrent.atomic.AtomicBoolean@689389da
> org.apache.activemq.transport.vm.VMTransport.oneway(VMTransport.java:104)
> org.apache.activemq.transport.MutexTransport.oneway(MutexTransport.java:68)
> org.apache.activemq.transport.ResponseCorrelator.asyncRequest(ResponseCorrelator.java:81)
> org.apache.activemq.transport.ResponseCorrelator.request(ResponseCorrelator.java:86)
> org.apache.activemq.network.DemandForwardingBridgeSupport.addSubscription(DemandForwardingBridgeSupport.java:856)
> org.apache.activemq.network.DemandForwardingBridgeSupport.addConsumerInfo(DemandForwardingBridgeSupport.java:1128)
> org.apache.activemq.network.DemandForwardingBridgeSupport.serviceRemoteConsumerAdvisory(DemandForwardingBridgeSupport.java:714)
>- locked java.net.URI@1bae2b28
> org.apache.activemq.network.DemandForwardingBridgeSupport.serviceRemoteCommand(DemandForwardingBridgeSupport.java:581)
> org.apache.activemq.network.DemandForwardingBridgeSupport$3.onCommand(DemandForwardingBridgeSuppor

[jira] [Created] (AMQ-5260) Cross talk over duplex network connection can lead to blocking (alternative take)

2014-07-04 Thread matteo rulli (JIRA)
matteo rulli created AMQ-5260:
-

 Summary: Cross talk over duplex network connection can lead to 
blocking (alternative take)
 Key: AMQ-5260
 URL: https://issues.apache.org/jira/browse/AMQ-5260
 Project: ActiveMQ
  Issue Type: Bug
Affects Versions: 5.9.0
Reporter: matteo rulli


Essentially the same description as AMQ-4328. 

!deadlock.jpg!

h2. Stacktraces:
Stacktrace no.1:
{noformat}
Name: ActiveMQ NIO Worker 12
State: BLOCKED on java.net.URI@1bae2b28 owned by: ActiveMQ Transport: 
tcp:///10.0.1.219:61616@57789
Total blocked: 2  Total waited: 67

Stack trace: 
 
org.apache.activemq.network.DemandForwardingBridgeSupport.serviceRemoteConsumerAdvisory(DemandForwardingBridgeSupport.java:714)
org.apache.activemq.network.DemandForwardingBridgeSupport.serviceRemoteCommand(DemandForwardingBridgeSupport.java:581)
org.apache.activemq.network.DemandForwardingBridgeSupport$3.onCommand(DemandForwardingBridgeSupport.java:191)
org.apache.activemq.transport.ResponseCorrelator.onCommand(ResponseCorrelator.java:116)
org.apache.activemq.transport.MutexTransport.onCommand(MutexTransport.java:50)
org.apache.activemq.transport.WireFormatNegotiator.onCommand(WireFormatNegotiator.java:113)
org.apache.activemq.transport.AbstractInactivityMonitor.onCommand(AbstractInactivityMonitor.java:270)
org.apache.activemq.transport.TransportSupport.doConsume(TransportSupport.java:83)
org.apache.activemq.transport.nio.NIOTransport.serviceRead(NIOTransport.java:138)
org.apache.activemq.transport.nio.NIOTransport$1.onSelect(NIOTransport.java:69)
org.apache.activemq.transport.nio.SelectorSelection.onSelect(SelectorSelection.java:94)
org.apache.activemq.transport.nio.SelectorWorker$1.run(SelectorWorker.java:119)
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
java.lang.Thread.run(Unknown Source)
{noformat}

stack trace no.2
{noformat}
Name: ActiveMQ Transport: tcp:///10.0.1.219:61616@57789
State: WAITING on java.util.concurrent.locks.ReentrantLock$NonfairSync@3cdbfa3e 
owned by: ActiveMQ BrokerService[master2] Task-4
Total blocked: 19  Total waited: 3

Stack trace: 
 sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.park(Unknown Source)
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(Unknown
 Source)
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(Unknown 
Source)
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(Unknown Source)
java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(Unknown Source)
java.util.concurrent.locks.ReentrantLock.lock(Unknown Source)
org.apache.activemq.transport.MutexTransport.oneway(MutexTransport.java:66)
org.apache.activemq.transport.ResponseCorrelator.oneway(ResponseCorrelator.java:60)
org.apache.activemq.broker.TransportConnection.dispatch(TransportConnection.java:1339)
org.apache.activemq.broker.TransportConnection.processDispatch(TransportConnection.java:858)
org.apache.activemq.broker.TransportConnection.dispatchSync(TransportConnection.java:818)
org.apache.activemq.broker.TransportConnection$1.onCommand(TransportConnection.java:151)
org.apache.activemq.transport.ResponseCorrelator.onCommand(ResponseCorrelator.java:116)
org.apache.activemq.transport.MutexTransport.onCommand(MutexTransport.java:50)
org.apache.activemq.transport.vm.VMTransport.doDispatch(VMTransport.java:138)
org.apache.activemq.transport.vm.VMTransport.dispatch(VMTransport.java:127)
   - locked java.util.concurrent.atomic.AtomicBoolean@689389da
org.apache.activemq.transport.vm.VMTransport.oneway(VMTransport.java:104)
org.apache.activemq.transport.MutexTransport.oneway(MutexTransport.java:68)
org.apache.activemq.transport.ResponseCorrelator.asyncRequest(ResponseCorrelator.java:81)
org.apache.activemq.transport.ResponseCorrelator.request(ResponseCorrelator.java:86)
org.apache.activemq.network.DemandForwardingBridgeSupport.addSubscription(DemandForwardingBridgeSupport.java:856)
org.apache.activemq.network.DemandForwardingBridgeSupport.addConsumerInfo(DemandForwardingBridgeSupport.java:1128)
org.apache.activemq.network.DemandForwardingBridgeSupport.serviceRemoteConsumerAdvisory(DemandForwardingBridgeSupport.java:714)
   - locked java.net.URI@1bae2b28
org.apache.activemq.network.DemandForwardingBridgeSupport.serviceRemoteCommand(DemandForwardingBridgeSupport.java:581)
org.apache.activemq.network.DemandForwardingBridgeSupport$3.onCommand(DemandForwardingBridgeSupport.java:191)
org.apache.activemq.transport.ResponseCorrelator.onCommand(ResponseCorrelator.java:116)
org.apache.activemq.transport.MutexTransport.onCommand(MutexTransport.java:50)
org.apache.activemq.transport.failover.FailoverTransport$3.onCommand(FailoverTransport.java:196)
org.apache.activemq.transport.WireFormatNegotiator.onCommand(WireFormatNegotiator.java:113)
org.apache.activemq.transport

[jira] [Updated] (AMQ-4328) Cross talk over duplex network connection can lead to blocking

2014-07-04 Thread matteo rulli (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

matteo rulli updated AMQ-4328:
--

Attachment: deadlock.jpg

> Cross talk over duplex network connection can lead to blocking
> --
>
> Key: AMQ-4328
> URL: https://issues.apache.org/jira/browse/AMQ-4328
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Connector
>Affects Versions: 5.7.0, 5.8.0
>Reporter: Gary Tully
>Assignee: Gary Tully
>  Labels: bridge, duplex, hang, network
> Fix For: 5.9.0
>
> Attachments: deadlock.jpg
>
>
> with active forwarding in both directions a duplex network connector can 
> block. in 5.8, threads of the form:{code}"ActiveMQ BrokerService[xx] Task-10" 
> daemon prio=10 tid=0xb35d1c00 nid=0xc64 runnable [0xb3369000]
>java.lang.Thread.State: RUNNABLE
>   at java.net.SocketOutputStream.socketWrite0(Native Method)
>   at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92)
>   at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
>   at 
> org.apache.activemq.transport.tcp.TcpBufferedOutputStream.flush(TcpBufferedOutputStream.java:115)
>   at java.io.DataOutputStream.flush(DataOutputStream.java:106)
>   at 
> org.apache.activemq.transport.tcp.TcpTransport.oneway(TcpTransport.java:176)
>   at 
> org.apache.activemq.transport.AbstractInactivityMonitor.doOnewaySend(AbstractInactivityMonitor.java:322)
>   at 
> org.apache.activemq.transport.AbstractInactivityMonitor.oneway(AbstractInactivityMonitor.java:304)
>   at 
> org.apache.activemq.transport.TransportFilter.oneway(TransportFilter.java:85)
>   at 
> org.apache.activemq.transport.WireFormatNegotiator.oneway(WireFormatNegotiator.java:104)
>   at 
> org.apache.activemq.transport.MutexTransport.oneway(MutexTransport.java:68)
>   at 
> org.apache.activemq.transport.ResponseCorrelator.asyncRequest(ResponseCorrelator.java:81)
>   at 
> org.apache.activemq.network.DemandForwardingBridgeSupport.serviceLocalCommand(DemandForwardingBridgeSupport.java:994)
>   at 
> org.apache.activemq.network.DemandForwardingBridgeSupport$2.onCommand(DemandForwardingBridgeSupport.java:201)
>   at 
> org.apache.activemq.transport.ResponseCorrelator.onCommand(ResponseCorrelator.java:116)
>   at 
> org.apache.activemq.transport.MutexTransport.onCommand(MutexTransport.java:50)
>   at 
> org.apache.activemq.transport.vm.VMTransport.doDispatch(VMTransport.java:138)
>   at 
> org.apache.activemq.transport.vm.VMTransport.dispatch(VMTransport.java:127)
>   - locked <0x647f4650> (a java.util.concurrent.atomic.AtomicBoolean)
>   at 
> org.apache.activemq.transport.vm.VMTransport.oneway(VMTransport.java:104)
>   at 
> org.apache.activemq.transport.MutexTransport.oneway(MutexTransport.java:68)
>   at 
> org.apache.activemq.transport.ResponseCorrelator.oneway(ResponseCorrelator.java:60)
>   at 
> org.apache.activemq.broker.TransportConnection.dispatch(TransportConnection.java:1378)
>   at 
> org.apache.activemq.broker.TransportConnection.processDispatch(TransportConnection.java:897)
>   at 
> org.apache.activemq.broker.TransportConnection.iterate(TransportConnection.java:943)
>   at 
> org.apache.activemq.thread.PooledTaskRunner.runTask(PooledTaskRunner.java:129)
>   at 
> org.apache.activemq.thread.PooledTaskRunner$1.run(PooledTaskRunner.java:47)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>   at java.lang.Thread.run(Thread.java:662)
> "ActiveMQ Transport: tcp:///xx:61616@40803" prio=10 tid=0xb3525400 nid=0xbec 
> waiting on condition [0xb3276000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0x64657028> (a 
> java.util.concurrent.locks.ReentrantLock$NonfairSync)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:156)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:811)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:842)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1178)
>   at 
> java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:186)
>   at java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:262)
>   at 
> org.apache.activemq.transport.MutexTransport.oneway(MutexTransport.java:66)
>   at 
> org.apache.activemq.transport.ResponseCorrelator.oneway(ResponseCorrelator.java:60)
>   at 
> org.apac

[jira] [Updated] (AMQ-5129) Substitute TimeTask with ScheduledExecutorService in org.apache.activemq.thread.Scheduler

2014-03-31 Thread matteo rulli (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

matteo rulli updated AMQ-5129:
--

Attachment: proposed-patch.txt

> Substitute TimeTask with ScheduledExecutorService in 
> org.apache.activemq.thread.Scheduler
> -
>
> Key: AMQ-5129
> URL: https://issues.apache.org/jira/browse/AMQ-5129
> Project: ActiveMQ
>  Issue Type: Wish
>Affects Versions: 5.9.0
>Reporter: matteo rulli
> Attachments: proposed-patch.txt
>
>
> Since Timer has only one execution thread, a long-running task can delay 
> other scheduled tasks. Besides, a runtime exception thrown in a TimerTask 
> kills that single thread, bringing down the entire Scheduler.
> I suspect all this could relate to AMQ-3938: sometimes in very busy 
> environments I experience exactly the same problem, a slow leak due to temp 
> queues that are not deleted. Since 
> org.apache.activemq.broker.region.RegionBroker uses a Scheduler to activate 
> purgeInactiveDestinations, a crashed timer could explain why 
> purgeInactiveDestinations stops working.
> I attached a tentative patch to migrate the timer to 
> ScheduledExecutorService. Hope this helps.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (AMQ-5129) Substitute TimeTask with ScheduledExecutorService in org.apache.activemq.thread.Scheduler

2014-03-31 Thread matteo rulli (JIRA)
matteo rulli created AMQ-5129:
-

 Summary: Substitute TimeTask with ScheduledExecutorService in 
org.apache.activemq.thread.Scheduler
 Key: AMQ-5129
 URL: https://issues.apache.org/jira/browse/AMQ-5129
 Project: ActiveMQ
  Issue Type: Wish
Affects Versions: 5.9.0
Reporter: matteo rulli


Since Timer has only one execution thread, a long-running task can delay other scheduled tasks. Besides, a runtime exception thrown in a TimerTask kills that single thread, bringing down the entire Scheduler.

I suspect all this could relate to AMQ-3938: sometimes in very busy environments I experience exactly the same problem, a slow leak due to temp queues that are not deleted. Since org.apache.activemq.broker.region.RegionBroker uses a Scheduler to activate purgeInactiveDestinations, a crashed timer could explain why purgeInactiveDestinations stops working.

I attached a tentative patch to migrate the timer to ScheduledExecutorService. Hope this helps.
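
To make the failure mode concrete, here is a small stand-alone demonstration (an illustration only, not part of the attached patch) of how an unchecked exception kills a Timer's single thread while a ScheduledExecutorService keeps accepting work:
{code}
import java.util.Timer;
import java.util.TimerTask;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class TimerVsScheduledExecutor {
    public static void main(String[] args) throws InterruptedException {
        // A Timer runs every task on one thread; an unchecked exception
        // terminates that thread and implicitly cancels the whole Timer.
        Timer timer = new Timer();
        timer.schedule(new TimerTask() {
            @Override public void run() { throw new RuntimeException("boom"); }
        }, 0);
        Thread.sleep(200); // let the failing task run and kill the thread
        try {
            timer.schedule(new TimerTask() {
                @Override public void run() { System.out.println("never runs"); }
            }, 0);
        } catch (IllegalStateException e) {
            System.out.println("Timer is dead: " + e.getMessage());
        }

        // A ScheduledExecutorService only suppresses further runs of the
        // failing periodic task; the executor itself keeps working.
        ScheduledExecutorService ses = Executors.newSingleThreadScheduledExecutor();
        ses.scheduleAtFixedRate(() -> { throw new RuntimeException("boom"); },
                0, 50, TimeUnit.MILLISECONDS);
        ses.schedule(() -> System.out.println("executor still alive"),
                200, TimeUnit.MILLISECONDS);
        Thread.sleep(400);
        ses.shutdown();
    }
}
{code}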



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (AMQ-4889) ProxyConnector memory usage skyrockets when several ssl handshakes fails

2013-12-09 Thread matteo rulli (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13843367#comment-13843367
 ] 

matteo rulli commented on AMQ-4889:
---

With the last checks and updates the ProxyConnector has been working great for days now: heap memory consumption is stable. I'll try to improve/simplify the test case in ProxyConnIssue.rar in the coming days.

It would be great if I could get some feedback on the latest patches I created.

> ProxyConnector memory usage skyrockets when several ssl handshakes fails
> 
>
> Key: AMQ-4889
> URL: https://issues.apache.org/jira/browse/AMQ-4889
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 5.8.0, 5.9.0
> Environment: Seen in Windows 7 64bit, Windows Server 2008 R2 and 
> Linux RHEL 6.3 64 bit
>Reporter: matteo rulli
>Assignee: Timothy Bish
> Attachments: AMQ4889.patch, NIOSSLTransport_patch_AMQ_4889.txt, 
> ProxyConnIssue.rar, ProxyConnection_patch_AMQ_4889.txt, 
> ProxyConnector_patch_AMQ_4889.txt, after_lsof.txt, after_netstat.txt, 
> lsof.txt, netstat.txt, sockstat.txt
>
>
> See 
> [nabble|http://activemq.2283324.n4.nabble.com/Proxy-Connector-memory-consumption-td4674255.html]
>  for further details.
> To reproduce the issue:
> # Start embedded proxy broker and the AMQ broker that are embedded in 
> *AMQTestBroker* project (see attachments);
> # Start the *AMQTestConsumer* project; This program repeatedly tries opening 
> a connection to the ProxyConnector with wrong certificates.
> # Open jconsole to monitor AMQTestBroker memory usage: you should experience 
> an OOM error within one hour with the suggested settings (Xmx = 2048m).
> Launch configurations and test keystores are attached to this issue along 
> with the java projects.
> This behavior seems to affect _ProxyConnector_ only, running the test against 
> a standard nio-based _TransportConnector_ does not seem to produce anomalous 
> memory consumptions.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Comment Edited] (AMQ-4889) ProxyConnector memory usage skyrockets when several ssl handshakes fails

2013-12-04 Thread matteo rulli (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13839065#comment-13839065
 ] 

matteo rulli edited comment on AMQ-4889 at 12/4/13 4:54 PM:


Actually, swapping the start() call and the connections.add() in the setAcceptListener method of ProxyConnector turned out not to be a good idea: the _connections.remove()_ in TransportFilter.stop may be called before connections.add() is invoked. This shows up as a tiny yet continuous memory leak, with objects accumulating in the connections collection.

Therefore I modified the ProxyConnector code again to fix this race condition as well: connections.add() is now invoked _before_ start(), and the removal is handled inside a try-catch block (see the updated ProxyConnector_patch_AMQ_4889.txt patch).

Besides, I had to modify the equals method in ProxyConnection to avoid an NPE in case the local or remote transports are null. I updated the ProxyConnection_patch_AMQ_4889.txt file with this fix. I'm not completely satisfied with this ProxyConnection.equals() method, since it seems a little awkward to me; maybe you can suggest a better strategy for purging the connections list.

The patch has been running for hours now and the issue seems to be fixed. I'll double-check RAM consumption tomorrow for stronger confirmation.
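
For illustration only, a stripped-down sketch of the register-before-start ordering described above; the Connection interface and tracker are hypothetical stand-ins for ProxyConnection/ProxyConnector, not the actual patch:
{code}
import java.util.Set;
import java.util.concurrent.CopyOnWriteArraySet;

// Sketch of the ordering fix: register the connection before starting it,
// and undo the registration if start() fails, so a concurrent
// stop()/remove() can never race ahead of the add().
public class ConnectionTracker {
    interface Connection { void start() throws Exception; }

    private final Set<Connection> connections = new CopyOnWriteArraySet<>();

    public void accept(Connection c) {
        connections.add(c);          // visible to stop()/cleanup immediately
        try {
            c.start();
        } catch (Exception e) {
            connections.remove(c);   // failed starts must not accumulate
        }
    }

    public int tracked() { return connections.size(); }

    public static void main(String[] args) {
        ConnectionTracker t = new ConnectionTracker();
        t.accept(() -> { throw new Exception("handshake failed"); });
        System.out.println("tracked after failed start: " + t.tracked()); // 0
    }
}
{code}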



was (Author: matteor):
Actually, swapping the start() call and the connections.add() in the setAcceptListener method of ProxyConnector turned out not to be a good idea: the _connections.remove()_ in TransportFilter.stop may be called before connections.add() is invoked. This shows up as a tiny yet continuous memory leak, with objects accumulating in the connections collection.

Therefore I modified the ProxyConnector code again to fix this race condition as well: connections.add() is now invoked _before_ start(), and the removal is handled inside a try-catch block (see the updated ProxyConnector_patch_AMQ_4889.txt patch).

Besides, I had to modify the equals method in ProxyConnection to avoid an NPE in case the local or remote transports are null. I updated the ProxyConnection_patch_AMQ_4889.txt file with this fix. I'm not completely satisfied with this ProxyConnection.equals() method, since it seems a little awkward to me; maybe you can suggest a better strategy for purging the connections list.


> ProxyConnector memory usage skyrockets when several ssl handshakes fails
> 
>
> Key: AMQ-4889
> URL: https://issues.apache.org/jira/browse/AMQ-4889
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 5.8.0, 5.9.0
> Environment: Seen in Windows 7 64bit, Windows Server 2008 R2 and 
> Linux RHEL 6.3 64 bit
>Reporter: matteo rulli
>Assignee: Timothy Bish
> Attachments: AMQ4889.patch, NIOSSLTransport_patch_AMQ_4889.txt, 
> ProxyConnIssue.rar, ProxyConnection_patch_AMQ_4889.txt, 
> ProxyConnector_patch_AMQ_4889.txt, after_lsof.txt, after_netstat.txt, 
> lsof.txt, netstat.txt, sockstat.txt
>
>
> See 
> [nabble|http://activemq.2283324.n4.nabble.com/Proxy-Connector-memory-consumption-td4674255.html]
>  for further details.
> To reproduce the issue:
> # Start embedded proxy broker and the AMQ broker that are embedded in 
> *AMQTestBroker* project (see attachments);
> # Start the *AMQTestConsumer* project; This program repeatedly tries opening 
> a connection to the ProxyConnector with wrong certificates.
> # Open jconsole to monitor AMQTestBroker memory usage: you should experience 
> an OOM error within one hour with the suggested settings (Xmx = 2048m).
> Launch configurations and test keystores are attached to this issue along 
> with the java projects.
> This behavior seems to affect _ProxyConnector_ only, running the test against 
> a standard nio-based _TransportConnector_ does not seem to produce anomalous 
> memory consumptions.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (AMQ-4889) ProxyConnector memory usage skyrockets when several ssl handshakes fails

2013-12-04 Thread matteo rulli (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13839065#comment-13839065
 ] 

matteo rulli commented on AMQ-4889:
---

Actually, swapping the start() call and the connections.add() in the setAcceptListener method of ProxyConnector turned out not to be a good idea: the _connections.remove()_ in TransportFilter.stop may be called before connections.add() is invoked. This shows up as a tiny yet continuous memory leak, with objects accumulating in the connections collection.

Therefore I modified the ProxyConnector code again to fix this race condition as well: connections.add() is now invoked _before_ start(), and the removal is handled inside a try-catch block (see the updated ProxyConnector_patch_AMQ_4889.txt patch).

Besides, I had to modify the equals method in ProxyConnection to avoid an NPE in case the local or remote transports are null. I updated the ProxyConnection_patch_AMQ_4889.txt file with this fix. I'm not completely satisfied with this ProxyConnection.equals() method, since it seems a little awkward to me; maybe you can suggest a better strategy for purging the connections list.
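
As an aside on the equals() concern, a null-tolerant comparison can be expressed with java.util.Objects; this sketch uses hypothetical field names and is not the actual ProxyConnection code:
{code}
import java.util.Objects;

// Sketch of a null-tolerant equals/hashCode pair for a connection that is
// identified by its two transports; field names are hypothetical.
final class ConnKey {
    private final Object localTransport;   // may be null mid-shutdown
    private final Object remoteTransport;  // may be null mid-shutdown

    ConnKey(Object local, Object remote) {
        this.localTransport = local;
        this.remoteTransport = remote;
    }

    @Override public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof ConnKey)) return false;
        ConnKey other = (ConnKey) o;
        // Objects.equals is null-safe, so no NPE when a transport is null.
        return Objects.equals(localTransport, other.localTransport)
            && Objects.equals(remoteTransport, other.remoteTransport);
    }

    @Override public int hashCode() {
        return Objects.hash(localTransport, remoteTransport);
    }
}
{code}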




[jira] [Updated] (AMQ-4889) ProxyConnector memory usage skyrockets when several ssl handshakes fails

2013-12-04 Thread matteo rulli (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

matteo rulli updated AMQ-4889:
--

Attachment: (was: ProxyConnector_patch_AMQ_4889.txt)



[jira] [Updated] (AMQ-4889) ProxyConnector memory usage skyrockets when several ssl handshakes fails

2013-12-04 Thread matteo rulli (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

matteo rulli updated AMQ-4889:
--

Attachment: (was: ProxyConnection_patch_AMQ_4889.txt)



[jira] [Updated] (AMQ-4889) ProxyConnector memory usage skyrockets when several ssl handshakes fails

2013-12-04 Thread matteo rulli (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

matteo rulli updated AMQ-4889:
--

Attachment: ProxyConnector_patch_AMQ_4889.txt
ProxyConnection_patch_AMQ_4889.txt



[jira] [Comment Edited] (AMQ-4889) ProxyConnector memory usage skyrockets when several ssl handshakes fails

2013-12-02 Thread matteo rulli (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13836369#comment-13836369
 ] 

matteo rulli edited comment on AMQ-4889 at 12/2/13 8:39 AM:


There is still an issue with long-term memory consumption due to another bug in 
ProxyConnector: the missing equals and hashCode methods in ProxyConnection 
caused a memory leak in the connections collection inside ProxyConnector.

I added another patch to solve this issue as well.


was (Author: matteor):
Still an issue with long-term memory consumption due to another bug in 
ProxyConnector: the missing equals and hashCode methods in ProxyConnection 
caused a memory leak in the connections collection inside ProxyConnector.

Added another patch to solve this issue as well.



[jira] [Updated] (AMQ-4889) ProxyConnector memory usage skyrockets when several ssl handshakes fails

2013-12-02 Thread matteo rulli (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

matteo rulli updated AMQ-4889:
--

Attachment: ProxyConnector_patch_AMQ_4889.txt
ProxyConnection_patch_AMQ_4889.txt

There is still an issue with long-term memory consumption due to another bug in 
ProxyConnector: the missing equals and hashCode methods in ProxyConnection 
caused a memory leak in the connections collection inside ProxyConnector.

Added another patch to solve this issue as well; a short demo of the leak 
mechanism follows.
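
To make the mechanism concrete, here is a self-contained demo of how a 
collection keyed by identity fails to purge logically equal entries; the Conn 
class and the endpoint string are purely illustrative:
{noformat}
import java.util.ArrayList;
import java.util.List;

// Demonstrates how List.remove(Object) depends on equals(): with the default
// identity-based equals, a logically equivalent instance does not match the
// stored one, so the stale entry stays in the collection and accumulates.
public class EqualsLeakDemo {
    static class Conn { // no equals/hashCode: identity semantics
        final String id;
        Conn(String id) { this.id = id; }
    }

    public static void main(String[] args) {
        List<Conn> connections = new ArrayList<>();
        connections.add(new Conn("tcp://10.0.1.219:61616"));
        // Attempt to purge the "same" connection via a new,
        // equal-by-content instance:
        connections.remove(new Conn("tcp://10.0.1.219:61616"));
        System.out.println("entries left: " + connections.size()); // 1 -> leak
    }
}
{noformat}
With a value-based equals/hashCode, the remove() call would match the stored 
entry and the list would be purged.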



[jira] [Updated] (AMQ-4889) ProxyConnector memory usage skyrockets when several ssl handshakes fails

2013-12-02 Thread matteo rulli (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

matteo rulli updated AMQ-4889:
--

Attachment: (was: ProxyConnector_patch_AMQ_4889.txt)



[jira] [Updated] (AMQ-4889) ProxyConnector memory usage skyrockets when several ssl handshakes fails

2013-11-29 Thread matteo rulli (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

matteo rulli updated AMQ-4889:
--

Attachment: after_netstat.txt
after_lsof.txt
sockstat.txt
netstat.txt
lsof.txt
NIOSSLTransport_patch_AMQ_4889.txt
ProxyConnector_patch_AMQ_4889.txt



[jira] [Commented] (AMQ-4889) ProxyConnector memory usage skyrockets when several ssl handshakes fails

2013-11-29 Thread matteo rulli (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13835322#comment-13835322
 ] 

matteo rulli commented on AMQ-4889:
---

Actually, the problem in ProxyConnector hid something more than a simple bug in 
the ProxyConnector.java file.

Apparently some clean-up invocations were missing in NIOSSLTransport as well; 
this resulted in many leaked file descriptors on Linux (RHEL 6.3 64 bit).
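
As a rough illustration only, the missing tear-down amounts to something like 
the following plain java.nio sketch; the method and member names are 
illustrative, not the actual NIOSSLTransport internals:
{noformat}
import java.io.IOException;
import java.nio.channels.SelectionKey;
import java.nio.channels.SocketChannel;

// Generic tear-down sketch: cancel the selector registration and close the
// channel even on the SSL-handshake error path; otherwise the socket's file
// descriptor stays open (the "can't identify protocol" entries in lsof).
final class NioCleanupSketch {
    static void shutdown(SelectionKey key, SocketChannel channel) {
        if (key != null) {
            key.cancel(); // deregister from the selector first
        }
        if (channel != null) {
            try {
                channel.close(); // releases the underlying file descriptor
            } catch (IOException ignored) {
                // best-effort clean-up on the error path
            }
        }
    }
}
{noformat}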

Before the attached patches, the lsof command reported tons of entries like
{noformat}
java    6530 platone    8u  sock    0,6  0t0 2489682586 can't identify protocol
java    6530 platone    9u  sock    0,6  0t0 2489745273 can't identify protocol
java    6530 platone   15u  sock    0,6  0t0 2489727545 can't identify protocol
java    6530 platone   66u  sock    0,6  0t0 2489683982 can't identify protocol
java    6530 platone   69u  sock    0,6  0t0 2489684335 can't identify protocol
java    6530 platone   72u  sock    0,6  0t0 2489684339 can't identify protocol
java    6530 platone   74u  sock    0,6  0t0 2489684688 can't identify protocol
java    6530 platone   76u  sock    0,6  0t0 2489685055 can't identify protocol
java    6530 platone   77u  sock    0,6  0t0 2489716539 can't identify protocol
java    6530 platone   97u  sock    0,6  0t0 2489683431 can't identify protocol
java    6530 platone   98u  sock    0,6  0t0 2489683450 can't identify protocol
java    6530 platone   99u  sock    0,6  0t0 2489684695 can't identify protocol
java    6530 platone  100u  sock    0,6  0t0 2489683990 can't identify protocol
java    6530 platone  101u  sock    0,6  0t0 2489702245 can't identify protocol
java    6530 platone  102u  sock    0,6  0t0 2489685058 can't identify protocol
...
{noformat}
(see attached lsof.txt) with the following socket stats:
{noformat}
sockets: used 3067
TCP: inuse 21 orphan 0 tw 0 alloc 2808 mem 2648
UDP: inuse 6 mem 5
UDPLITE: inuse 0
RAW: inuse 1
FRAG: inuse 0 memory 0
{noformat}
and the following netstat outcome:
{noformat}
...
tcp        0      0 :::192.168.24.82:44201   :::192.168.16.166:61616  ESTABLISHED 6530/java   off (0.00/0/0)
tcp        0      0 :::192.168.24.82:61619   :::192.168.24.66:64601   CLOSE_WAIT  6530/java   off (0.00/0/0)
tcp        0      0 :::192.168.24.82:61619   :::192.168.24.67:9033    CLOSE_WAIT  6530/java   off (0.00/0/0)
tcp        0      0 :::192.168.24.82:61619   :::192.168.24.66:56924   CLOSE_WAIT  6530/java   off (0.00/0/0)
tcp        0      0 :::192.168.24.82:61619   :::192.168.24.66:33021   CLOSE_WAIT  6530/java   off (0.00/0/0)
tcp        0      0 :::192.168.24.82:61619   :::192.168.24.67:56879   CLOSE_WAIT  6530/java   off (0.00/0/0)
tcp        0      0 :::192.168.24.82:61619   :::192.168.24.67:51487   CLOSE_WAIT  6530/java   off (0.00/0/0)
tcp        0      0 :::192.168.24.82:61619   :::192.168.24.66:35295   CLOSE_WAIT  6530/java   off (0.00/0/0)
tcp        0      0 :::192.168.24.82:61619   :::192.168.24.67:49529   CLOSE_WAIT  6530/java   off (0.00/0/0)
tcp        0      0 :::192.168.24.82:61619   :::192.168.24.67:51309   CLOSE_WAIT  6530/java   off (0.00/0/0)
... many other CLOSE_WAIT
{noformat}
We applied the attached patches and everything ran better: lsof stopped 
reporting the _can't identify protocol_ records and the number of connections 
in CLOSE_WAIT dropped to zero.

So, does this make sense to you?


[jira] [Updated] (AMQ-4889) ProxyConnector memory usage skyrockets when several ssl handshakes fails

2013-11-29 Thread matteo rulli (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

matteo rulli updated AMQ-4889:
--

Attachment: (was: AMQ-4889-patch__5.9.0.txt)



[jira] [Updated] (AMQ-4889) ProxyConnector memory usage skyrockets when several ssl handshakes fails

2013-11-28 Thread matteo rulli (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

matteo rulli updated AMQ-4889:
--

Attachment: AMQ-4889-patch__5.9.0.txt

I attached a possible patch. As far as I can see, the problem was adding 
ProxyConnection objects into a collection inside ProxyConnector before starting 
them; a sketch of the failure mode follows.
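
A minimal, runnable sketch of that failure mode, with illustrative names (not 
the actual ProxyConnector code):
{noformat}
import java.util.ArrayList;
import java.util.List;

// Sketch of the failure mode described above (illustrative names): the entry
// is registered before start(); when start() throws (e.g. on a failed SSL
// handshake) nothing removes it, so the collection grows on every bad client.
public class LeakOnFailedStartDemo {
    interface Connection { void start() throws Exception; }

    static final List<Connection> connections = new ArrayList<>();

    static void accept(Connection c) throws Exception {
        connections.add(c);
        c.start(); // if this throws, the entry above is never removed -> leak
    }

    public static void main(String[] args) {
        for (int i = 0; i < 3; i++) {
            try {
                accept(() -> { throw new Exception("handshake failed"); });
            } catch (Exception expected) { /* client retries */ }
        }
        System.out.println("leaked entries: " + connections.size()); // 3
    }
}
{noformat}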

Maybe an amq guru can check this and give some feedback?

Thanks!



[jira] [Created] (AMQ-4889) ProxyConnector memory usage skyrockets when several ssl handshakes fails

2013-11-15 Thread matteo rulli (JIRA)
matteo rulli created AMQ-4889:
-

 Summary: ProxyConnector memory usage skyrockets when several ssl 
handshakes fails
 Key: AMQ-4889
 URL: https://issues.apache.org/jira/browse/AMQ-4889
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker
Affects Versions: 5.9.0, 5.8.0
 Environment: Seen in Windows 7 64bit, Windows Server 2008 R2 and Linux 
RHEL 6.3 64 bit
Reporter: matteo rulli
 Attachments: ProxyConnIssue.rar

See 
[nabble|http://activemq.2283324.n4.nabble.com/Proxy-Connector-memory-consumption-td4674255.html]
 for further details.

To reproduce the issue:
# Start the embedded proxy broker and the AMQ broker contained in the 
*AMQTestBroker* project (see attachments);
# Start the *AMQTestConsumer* project; this program repeatedly tries to open a 
connection to the ProxyConnector with wrong certificates;
# Open jconsole to monitor AMQTestBroker memory usage: you should experience an 
OOM error within one hour with the suggested settings (Xmx = 2048m).

Launch configurations and test keystores are attached to this issue along with 
the Java projects.

This behavior seems to affect _ProxyConnector_ only; running the test against a 
standard NIO-based _TransportConnector_ does not seem to produce anomalous 
memory consumption.






[jira] [Updated] (AMQ-4889) ProxyConnector memory usage skyrockets when several ssl handshakes fails

2013-11-15 Thread matteo rulli (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

matteo rulli updated AMQ-4889:
--

Attachment: ProxyConnIssue.rar

--
This message was sent by Atlassian JIRA
(v6.1#6144)