[jira] [Created] (AMQ-5789) ActiveMQSslConnectionFactory hardcodes the KeyStore type
Hadrian Zbarcea created AMQ-5789: Summary: ActiveMQSslConnectionFactory hardcodes the KeyStore type Key: AMQ-5789 URL: https://issues.apache.org/jira/browse/AMQ-5789 Project: ActiveMQ Issue Type: Improvement Components: Broker Affects Versions: 5.10.0 Reporter: Hadrian Zbarcea Assignee: Hadrian Zbarcea Fix For: 5.10.3, 5.11.2, 5.12.0 The issue is present in earlier versions, but only 5.10.x and up are maintained right now. At the very minimum we should use KeyStore.getDefaultType(), but even that is not sufficient, as one may use a different keystore type (such as pkcs12, bks, or keychain on OS X), so it should be configurable. The default should be getDefaultType, as defined in java.security (which is jks by default, so there shouldn't be any impact on current users). I am testing a patch and should be able to commit it in a day or two. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
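A minimal sketch of the configurable-type approach described above. The class and method names are hypothetical illustrations, not the actual ActiveMQSslConnectionFactory patch:

```java
import java.security.KeyStore;

// Illustrative sketch only: resolves a configurable keystore type,
// falling back to the JVM default (KeyStore.getDefaultType()) when no
// type is configured. Names are assumptions, not the ActiveMQ API.
public class KeyStoreTypeResolver {

    // Returns the configured type (e.g. "pkcs12", "bks"), or the
    // platform default when the configured value is null or blank.
    public static String resolveType(String configuredType) {
        if (configuredType == null || configuredType.trim().isEmpty()) {
            return KeyStore.getDefaultType();
        }
        return configuredType;
    }

    // Creates an empty KeyStore instance of the resolved type.
    public static KeyStore newKeyStore(String configuredType) throws Exception {
        return KeyStore.getInstance(resolveType(configuredType));
    }
}
```

This keeps jks-based deployments working unchanged while letting users opt into pkcs12, bks, and platform-specific types.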
[jira] [Comment Edited] (AMQ-5785) Deadlock between NIO worker and Broker.Servic threads
[ https://issues.apache.org/jira/browse/AMQ-5785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14553030#comment-14553030 ] Christopher L. Shannon edited comment on AMQ-5785 at 5/20/15 8:19 PM: -- This appears to probably be the same race condition issue I reported here: https://issues.apache.org/jira/browse/AMQ-5712 [~tabish121] uploaded a patch to that Jira that I've been running for a while and it has solved the deadlock problem for me. It hasn't been committed to master yet but if you don't mind applying the patch yourself it should at least help you stop the deadlock until it gets fixed permanently. was (Author: christopher.l.shannon): This appears to probably be the same race condition issue I reported here: https://issues.apache.org/jira/browse/AMQ-5712 [~tabish121] uploaded a patch to that Jira that I've been running for a while and it has solved the deadlock problem. It hasn't been committed to master yet but if you don't mind applying the patch yourself it should at least help you stop the deadlock until it gets fixed permanently. > Deadlock between NIO worker and Broker.Servic threads > - > > Key: AMQ-5785 > URL: https://issues.apache.org/jira/browse/AMQ-5785 > Project: ActiveMQ > Issue Type: Bug > Components: Broker >Affects Versions: 5.10.0 > Environment: Physical Machine (192 GB RAM, 24 VCPU), RHEL 5.9, Java > 1.7 > ActiveMQ runs on 4 GB heap >Reporter: Sree Panchajanyam D >Priority: Critical > Attachments: threaddump16214.txt > > > During the peak loads we are encountering a recurring deadlock issue in > ActiveMQ broker. The threads that are deadlocked are > ActiveMQ NIO Worker - trying to add message to FilePendingCursor > Broker.Service Worker - that is trying to expire message from > FilePendingCursor. 
> =
> Found one Java-level deadlock:
> =
> "ActiveMQ NIO Worker 1003":
>   waiting to lock monitor 0x2aeeb515a4f8 (object 0x0007807da3e8, a org.apache.activemq.broker.region.cursors.FilePendingMessageCursor),
>   which is held by "ActiveMQ BrokerService.worker.1"
> "ActiveMQ BrokerService.worker.1":
>   waiting for ownable synchronizer 0x00077ac84b40, (a java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync),
>   which is held by "ActiveMQ NIO Worker 1003"
> Java stack information for the threads listed above:
> ===
> "ActiveMQ NIO Worker 1003":
>   at org.apache.activemq.broker.region.cursors.FilePendingMessageCursor.addMessageLast(FilePendingMessageCursor.java:207)
>   - waiting to lock <0x0007807da3e8> (a org.apache.activemq.broker.region.cursors.FilePendingMessageCursor)
>   at org.apache.activemq.broker.region.cursors.StoreQueueCursor.addMessageLast(StoreQueueCursor.java:96)
>   - locked <0x0007784e8c88> (a org.apache.activemq.broker.region.cursors.StoreQueueCursor)
>   at org.apache.activemq.broker.region.Queue.sendMessage(Queue.java:1855)
>   at org.apache.activemq.broker.region.Queue.doMessageSend(Queue.java:939)
>   at org.apache.activemq.broker.region.Queue.send(Queue.java:733)
>   at org.apache.activemq.broker.region.AbstractRegion.send(AbstractRegion.java:424)
>   at org.apache.activemq.broker.region.RegionBroker.send(RegionBroker.java:445)
>   at org.apache.activemq.broker.jmx.ManagedRegionBroker.send(ManagedRegionBroker.java:297)
>   at org.apache.activemq.broker.CompositeDestinationBroker.send(CompositeDestinationBroker.java:96)
>   at org.apache.activemq.broker.TransactionBroker.send(TransactionBroker.java:307)
>   at org.apache.activemq.broker.BrokerFilter.send(BrokerFilter.java:147)
>   at org.apache.activemq.broker.UserIDBroker.send(UserIDBroker.java:56)
>   at org.apache.activemq.broker.MutableBrokerFilter.send(MutableBrokerFilter.java:152)
>   at org.apache.activemq.broker.TransportConnection.processMessage(TransportConnection.java:496)
>   at org.apache.activemq.command.ActiveMQMessage.visit(ActiveMQMessage.java:756)
>   at org.apache.activemq.broker.TransportConnection.service(TransportConnection.java:294)
>   at org.apache.activemq.broker.TransportConnection$1.onCommand(TransportConnection.java:148)
>   at org.apache.activemq.transport.MutexTransport.onCommand(MutexTransport.java:50)
>   at org.apache.activemq.transport.WireFormatNegotiator.onCommand(WireFormatNegotiator.java:113)
>   at org.apache.activemq.transport.AbstractInactivityMonitor.onCommand(AbstractInactivityMonitor.java:270)
>   at org.apache.activemq.transport.TransportSupport.doConsume(TransportSupport.java:83)
>   at org.ap
[jira] [Commented] (AMQ-5785) Deadlock between NIO worker and Broker.Servic threads
[ https://issues.apache.org/jira/browse/AMQ-5785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14553030#comment-14553030 ] Christopher L. Shannon commented on AMQ-5785: - This appears to probably be the same race condition issue I reported here: https://issues.apache.org/jira/browse/AMQ-5712 [~tabish121] uploaded a patch to that Jira that I've been running for a while and it has solved the deadlock problem. It hasn't been committed to master yet but if you don't mind applying the patch yourself it should at least help you stop the deadlock until it gets fixed permanently. > Deadlock between NIO worker and Broker.Servic threads > - > > Key: AMQ-5785 > URL: https://issues.apache.org/jira/browse/AMQ-5785 > Project: ActiveMQ > Issue Type: Bug > Components: Broker >Affects Versions: 5.10.0 > Environment: Physical Machine (192 GB RAM, 24 VCPU), RHEL 5.9, Java > 1.7 > ActiveMQ runs on 4 GB heap >Reporter: Sree Panchajanyam D >Priority: Critical > Attachments: threaddump16214.txt > > > During the peak loads we are encountering a recurring deadlock issue in > ActiveMQ broker. The threads that are deadlocked are > ActiveMQ NIO Worker - trying to add message to FilePendingCursor > Broker.Service Worker - that is trying to expire message from > FilePendingCursor. 
> =
> Found one Java-level deadlock:
> =
> "ActiveMQ NIO Worker 1003":
>   waiting to lock monitor 0x2aeeb515a4f8 (object 0x0007807da3e8, a org.apache.activemq.broker.region.cursors.FilePendingMessageCursor),
>   which is held by "ActiveMQ BrokerService.worker.1"
> "ActiveMQ BrokerService.worker.1":
>   waiting for ownable synchronizer 0x00077ac84b40, (a java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync),
>   which is held by "ActiveMQ NIO Worker 1003"
> Java stack information for the threads listed above:
> ===
> "ActiveMQ NIO Worker 1003":
>   at org.apache.activemq.broker.region.cursors.FilePendingMessageCursor.addMessageLast(FilePendingMessageCursor.java:207)
>   - waiting to lock <0x0007807da3e8> (a org.apache.activemq.broker.region.cursors.FilePendingMessageCursor)
>   at org.apache.activemq.broker.region.cursors.StoreQueueCursor.addMessageLast(StoreQueueCursor.java:96)
>   - locked <0x0007784e8c88> (a org.apache.activemq.broker.region.cursors.StoreQueueCursor)
>   at org.apache.activemq.broker.region.Queue.sendMessage(Queue.java:1855)
>   at org.apache.activemq.broker.region.Queue.doMessageSend(Queue.java:939)
>   at org.apache.activemq.broker.region.Queue.send(Queue.java:733)
>   at org.apache.activemq.broker.region.AbstractRegion.send(AbstractRegion.java:424)
>   at org.apache.activemq.broker.region.RegionBroker.send(RegionBroker.java:445)
>   at org.apache.activemq.broker.jmx.ManagedRegionBroker.send(ManagedRegionBroker.java:297)
>   at org.apache.activemq.broker.CompositeDestinationBroker.send(CompositeDestinationBroker.java:96)
>   at org.apache.activemq.broker.TransactionBroker.send(TransactionBroker.java:307)
>   at org.apache.activemq.broker.BrokerFilter.send(BrokerFilter.java:147)
>   at org.apache.activemq.broker.UserIDBroker.send(UserIDBroker.java:56)
>   at org.apache.activemq.broker.MutableBrokerFilter.send(MutableBrokerFilter.java:152)
>   at org.apache.activemq.broker.TransportConnection.processMessage(TransportConnection.java:496)
>   at org.apache.activemq.command.ActiveMQMessage.visit(ActiveMQMessage.java:756)
>   at org.apache.activemq.broker.TransportConnection.service(TransportConnection.java:294)
>   at org.apache.activemq.broker.TransportConnection$1.onCommand(TransportConnection.java:148)
>   at org.apache.activemq.transport.MutexTransport.onCommand(MutexTransport.java:50)
>   at org.apache.activemq.transport.WireFormatNegotiator.onCommand(WireFormatNegotiator.java:113)
>   at org.apache.activemq.transport.AbstractInactivityMonitor.onCommand(AbstractInactivityMonitor.java:270)
>   at org.apache.activemq.transport.TransportSupport.doConsume(TransportSupport.java:83)
>   at org.apache.activemq.transport.nio.NIOTransport.serviceRead(NIOTransport.java:138)
>   at org.apache.activemq.transport.nio.NIOTransport$1.onSelect(NIOTransport.java:69)
>   at org.apache.activemq.transport.nio.SelectorSelection.onSelect(SelectorSelection.java:94)
>   at org.apache.activemq.transport.nio.SelectorWorker$1.run(SelectorWorker.java:119)
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at java.util.concurrent.ThreadPoolExecu
[jira] [Updated] (AMQ-5694) ActiveMQTempDestination.delete() can block forever
[ https://issues.apache.org/jira/browse/AMQ-5694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Endre Stølsvik updated AMQ-5694: Description: As described in AMQ-5681, we have a setup where we query the broker over JMS every 10 seconds for a StatisticsMessage. This bug concerns a somewhat related problem: in that code path, we delete the temporary reply-to queue after we've read the data. We've now several times ended up with a peculiar situation where the thread seems to have died. Today I grabbed a JMX console and had a look, and the thread is not dead; it is just blocked on the delete call. (see screenshot) We have another problem with a shared JDBC "cluster" (the "single master, hot standbys" setup), where the nodes lose master status and effectively go down. What I believe happens is that the thread sends the delete message and then goes into a blocking wait for the reply, which never appears, probably because the broker that was master and should have sent it is now dead. The delete code path should probably have some sane timeout and just raise some JMSException - or something. was: As described in AMQ-5681, we have a setup where we query the broker over JMS every 10 seconds for a StatisticsMessage. This bug concerns a somewhat related problem: in that code path, we delete the temporary reply-to queue after we've read the data. We've now several times ended up with a peculiar situation where the thread seems to have died. Today I grabbed a JMX console and had a look, and the thread is not dead; it is just blocked on the delete call. We have another problem with a shared JDBC "cluster" (the "single master, hot standbys" setup), where the nodes lose master status and effectively go down. What I believe happens is that the thread sends the delete message and then goes into a blocking wait for the reply, which never appears, probably because the broker that was master and should have sent it is now dead.
The delete code path should probably have some sane timeout and just raise some JMSException - or something. > ActiveMQTempDestination.delete() can block forever > -- > > Key: AMQ-5694 > URL: https://issues.apache.org/jira/browse/AMQ-5694 > Project: ActiveMQ > Issue Type: Bug >Reporter: Endre Stølsvik > Attachments: thread hangs.png > > > As described in AMQ-5681, we have a setup where we query the broker > over JMS every 10 seconds for a StatisticsMessage. > This bug concerns a somewhat related problem: in that code path, we > delete the temporary reply-to queue after we've read the data. > We've now several times ended up with a peculiar situation where the thread > seems to have died. Today I grabbed a JMX console and had a look, and the > thread is not dead; it is just blocked on the delete call. (see screenshot) > We have another problem with a shared JDBC "cluster" (the "single master, hot > standbys" setup), where the nodes lose master status and effectively go down. > What I believe happens is that the thread sends the delete message, and then > goes into a blocking wait for the reply, which never appears, probably > because the broker that was master and should have sent it is now dead. > The delete code path should probably have some sane timeout and just > raise some JMSException - or something. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
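The "sane timeout" the report suggests could be sketched roughly as follows; the executor-based wrapper and the deleteCall stand-in are assumptions for illustration, not the ActiveMQ client API:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Illustrative sketch: run a blocking operation (such as a temp
// destination delete that waits for a broker reply) on an executor and
// bound the wait, so a dead master surfaces as a TimeoutException
// instead of hanging the calling thread forever.
public class TimedDelete {

    public static void deleteWithTimeout(Callable<Void> deleteCall,
                                         long timeout, TimeUnit unit) throws Exception {
        ExecutorService exec = Executors.newSingleThreadExecutor();
        try {
            // Future.get(timeout, unit) throws TimeoutException when the
            // reply never arrives within the bound.
            exec.submit(deleteCall).get(timeout, unit);
        } finally {
            exec.shutdownNow(); // interrupt the worker if it is still waiting
        }
    }
}
```

A production fix would more likely time out the response correlation inside the transport itself, but the caller-side shape is the same: a bounded wait that converts "no reply" into an exception.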
[jira] [Created] (AMQ-5788) Failover issue: IllegalStateException: Cannot remove session | producer from connection that had not been registered
Endre Stølsvik created AMQ-5788: --- Summary: Failover issue: IllegalStateException: Cannot remove session | producer from connection that had not been registered Key: AMQ-5788 URL: https://issues.apache.org/jira/browse/AMQ-5788 Project: ActiveMQ Issue Type: Bug Reporter: Endre Stølsvik I am evidently having major problems with the failover TCP transport, getting these types of exceptions after a master has shut down and one of the "hot standbys" (using the JDBC "cluster" configuration) takes over (acquires the lease):
{code}
2015-05-20 16:11:51,186 [ActiveMQ Transport: tcp:///127.0.0.1:50160@61617] DEBUG o.a.a.b.TransportConnection.Service - Error occured while processing async command: RemoveInfo {commandId = 274, responseRequired = false, objectId = ID:SVGD122-63456-1432130352376-1:2:136, lastDeliveredSequenceId = 0}, exception: java.lang.IllegalStateException: Cannot remove session from connection that had not been registered: ID:SVGD122-63456-1432130352376-1:2
java.lang.IllegalStateException: Cannot remove session from connection that had not been registered: ID:SVGD122-63456-1432130352376-1:2
        at org.apache.activemq.broker.TransportConnection.processRemoveSession(TransportConnection.java:722) ~[activemq-broker-5.11.1.jar:5.11.1]
        at org.apache.activemq.command.RemoveInfo.visit(RemoveInfo.java:74) ~[activemq-client-5.11.1.jar:5.11.1]
        at org.apache.activemq.broker.TransportConnection.service(TransportConnection.java:334) ~[activemq-broker-5.11.1.jar:5.11.1]
        at org.apache.activemq.broker.TransportConnection$1.onCommand(TransportConnection.java:188) [activemq-broker-5.11.1.jar:5.11.1]
        at org.apache.activemq.transport.MutexTransport.onCommand(MutexTransport.java:50) [activemq-client-5.11.1.jar:5.11.1]
        at org.apache.activemq.transport.WireFormatNegotiator.onCommand(WireFormatNegotiator.java:113) [activemq-client-5.11.1.jar:5.11.1]
        at org.apache.activemq.transport.AbstractInactivityMonitor.onCommand(AbstractInactivityMonitor.java:270) [activemq-client-5.11.1.jar:5.11.1]
        at org.apache.activemq.transport.TransportSupport.doConsume(TransportSupport.java:83) [activemq-client-5.11.1.jar:5.11.1]
        at org.apache.activemq.transport.tcp.TcpTransport.doRun(TcpTransport.java:214) [activemq-client-5.11.1.jar:5.11.1]
        at org.apache.activemq.transport.tcp.TcpTransport.run(TcpTransport.java:196) [activemq-client-5.11.1.jar:5.11.1]
        at java.lang.Thread.run(Thread.java:745) [na:1.8.0_25]
2015-05-20 16:11:51,186 [ActiveMQ Transport: tcp:///127.0.0.1:50160@61617] WARN o.a.a.b.TransportConnection.Service - Async error occurred: java.lang.IllegalStateException: Cannot remove session from connection that had not been registered: ID:SVGD122-63456-1432130352376-1:2
        at org.apache.activemq.broker.TransportConnection.processRemoveSession(TransportConnection.java:722) ~[activemq-broker-5.11.1.jar:5.11.1]
        at org.apache.activemq.command.RemoveInfo.visit(RemoveInfo.java:74) ~[activemq-client-5.11.1.jar:5.11.1]
        at org.apache.activemq.broker.TransportConnection.service(TransportConnection.java:334) ~[activemq-broker-5.11.1.jar:5.11.1]
        at org.apache.activemq.broker.TransportConnection$1.onCommand(TransportConnection.java:188) [activemq-broker-5.11.1.jar:5.11.1]
        at org.apache.activemq.transport.MutexTransport.onCommand(MutexTransport.java:50) [activemq-client-5.11.1.jar:5.11.1]
        at org.apache.activemq.transport.WireFormatNegotiator.onCommand(WireFormatNegotiator.java:113) [activemq-client-5.11.1.jar:5.11.1]
        at org.apache.activemq.transport.AbstractInactivityMonitor.onCommand(AbstractInactivityMonitor.java:270) [activemq-client-5.11.1.jar:5.11.1]
        at org.apache.activemq.transport.TransportSupport.doConsume(TransportSupport.java:83) [activemq-client-5.11.1.jar:5.11.1]
        at org.apache.activemq.transport.tcp.TcpTransport.doRun(TcpTransport.java:214) [activemq-client-5.11.1.jar:5.11.1]
        at org.apache.activemq.transport.tcp.TcpTransport.run(TcpTransport.java:196) [activemq-client-5.11.1.jar:5.11.1]
        at java.lang.Thread.run(Thread.java:745) [na:1.8.0_25]
{code}
{code}
2015-05-20 16:11:47,280 [ActiveMQ Transport: tcp:///127.0.0.1:50146@61617] DEBUG o.a.a.b.TransportConnection.Service - Error occured while processing async command: ActiveMQMessage {commandId = 177, responseRequired = false, messageId = ID:SVGD122-63456-1432130352376-1:1:1:1:89, originalDestination = null, originalTransactionId = null, producerId = ID:SVGD122-63456-1432130352376-1:1:1:1, destination = queue://ActiveMQ.Statistics.Broker, transactionId = null, expiration = 0, timestamp = 1432131107279, arrival = 0, brokerInTime = 0, brokerOutTime = 0, correlationId = null, replyTo = topic://ABC.JWActiveMQ.StatisticsMessage, persistent = false, type = nu
[jira] [Resolved] (AMQ-5787) VMTransport uses broken double checked locking
[ https://issues.apache.org/jira/browse/AMQ-5787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Timothy Bish resolved AMQ-5787. --- Resolution: Fixed Fix Version/s: 5.12.0 Assignee: Timothy Bish Applied the PR, thanks! > VMTransport uses broken double checked locking > -- > > Key: AMQ-5787 > URL: https://issues.apache.org/jira/browse/AMQ-5787 > Project: ActiveMQ > Issue Type: Bug > Components: Broker, Transport >Affects Versions: 5.11.1 >Reporter: james >Assignee: Timothy Bish >Priority: Critical > Fix For: 5.12.0 > > > the VMTransport.getMessageQueue() method uses the "double checked locking" > idiom to avoid the synchronization overhead if the messageQueue has already > been instantiated. however, this idiom is broken unless the reference is > marked as volatile. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (AMQ-4900) With AMQP transport, Delivery Annotations are stored with the message
[ https://issues.apache.org/jira/browse/AMQ-4900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14552880#comment-14552880 ] Timothy Bish commented on AMQ-4900: --- I've added a fix that removes the incoming delivery annotations from messages when using the "JMS" transformer. We lack the toolset needed to strip this information from incoming messages when using the "NATIVE" or "RAW" transformers without doing a complete decode and re-encode of the message, which sort of defeats the purpose of those two transformers. > With AMQP transport, Delivery Annotations are stored with the message > - > > Key: AMQ-4900 > URL: https://issues.apache.org/jira/browse/AMQ-4900 > Project: ActiveMQ > Issue Type: Bug > Components: Broker >Affects Versions: 5.8.0 >Reporter: Ted Ross >Assignee: Timothy Bish > > If a message that has Delivery Annotations is transferred to a queue in the > broker, the content of the annotations is stored and appended to the message > when sent to a consumer. > I'm not 100% certain, but I believe that annotations should not be stored > with the message in a queue. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (AMQ-4900) With AMQP transport, Delivery Annotations are stored with the message
[ https://issues.apache.org/jira/browse/AMQ-4900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Timothy Bish reassigned AMQ-4900: - Assignee: Timothy Bish (was: Kevin Earls) > With AMQP transport, Delivery Annotations are stored with the message > - > > Key: AMQ-4900 > URL: https://issues.apache.org/jira/browse/AMQ-4900 > Project: ActiveMQ > Issue Type: Bug > Components: Broker >Affects Versions: 5.8.0 >Reporter: Ted Ross >Assignee: Timothy Bish > > If a message that has Delivery Annotations is transferred to a queue in the > broker, the content of the annotations is stored and appended to the message > when sent to a consumer. > I'm not 100% certain, but I believe that annotations should not be stored > with the message in a queue. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (AMQ-5776) Implement and test maxFrameSize across all protocols
[ https://issues.apache.org/jira/browse/AMQ-5776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14552865#comment-14552865 ] ASF GitHub Bot commented on AMQ-5776: - Github user cshannon closed the pull request at: https://github.com/apache/activemq/pull/99 > Implement and test maxFrameSize across all protocols > - > > Key: AMQ-5776 > URL: https://issues.apache.org/jira/browse/AMQ-5776 > Project: ActiveMQ > Issue Type: New Feature > Components: Broker >Affects Versions: 5.11.1 >Reporter: Christopher L. Shannon >Assignee: Timothy Bish > Fix For: 5.12.0 > > > The {{maxFrameSize}} option that currently exists for the OpenWire protocol > should be implemented and tested across all protocols based on the discussion > seen here: AMQ-5774. This will help prevent DoS attacks across any protocol > that is used. Subtasks will be created for the different protocol/transports. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (AMQ-5787) VMTransport uses broken double checked locking
[ https://issues.apache.org/jira/browse/AMQ-5787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14552854#comment-14552854 ] ASF GitHub Bot commented on AMQ-5787: - GitHub user cshannon opened a pull request: https://github.com/apache/activemq/pull/101 Fixing missing volatile on references in VMTransport Fixing missing volatile on references in VMTransport to prevent a synchronization bug. This resolves https://issues.apache.org/jira/browse/AMQ-5787 You can merge this pull request into a Git repository by running: $ git pull https://github.com/cshannon/activemq AMQ-5787 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/activemq/pull/101.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #101 commit 1e59f8b6a8af0feeb706175d1ffef45aaa82c382 Author: Christopher L. Shannon (cshannon) Date: 2015-05-20T18:38:59Z Fixing missing volatile on references in VMTransport to prevent a synchronization bug. This resolves https://issues.apache.org/jira/browse/AMQ-5787 > VMTransport uses broken double checked locking > -- > > Key: AMQ-5787 > URL: https://issues.apache.org/jira/browse/AMQ-5787 > Project: ActiveMQ > Issue Type: Bug > Components: Broker, Transport >Affects Versions: 5.11.1 >Reporter: james >Priority: Critical > > the VMTransport.getMessageQueue() method uses the "double checked locking" > idiom to avoid the synchronization overhead if the messageQueue has already > been instantiated. however, this idiom is broken unless the reference is > marked as volatile. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (AMQ-5787) VMTransport uses broken double checked locking
[ https://issues.apache.org/jira/browse/AMQ-5787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14552855#comment-14552855 ] Christopher L. Shannon commented on AMQ-5787: - Good catch, there are actually 3 variables with the same problem. messageQueue and taskRunner have the double-checked locking problem, but taskRunnerFactory should also be volatile to make sure it's visible to all threads, since it is lazily initialized in getTaskRunner() if it is null. I pushed up a pull request to fix all 3. > VMTransport uses broken double checked locking > -- > > Key: AMQ-5787 > URL: https://issues.apache.org/jira/browse/AMQ-5787 > Project: ActiveMQ > Issue Type: Bug > Components: Broker, Transport >Affects Versions: 5.11.1 >Reporter: james >Priority: Critical > > the VMTransport.getMessageQueue() method uses the "double checked locking" > idiom to avoid the synchronization overhead if the messageQueue has already > been instantiated. however, this idiom is broken unless the reference is > marked as volatile. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (AMQ-5787) VMTransport uses broken double checked locking
james created AMQ-5787: -- Summary: VMTransport uses broken double checked locking Key: AMQ-5787 URL: https://issues.apache.org/jira/browse/AMQ-5787 Project: ActiveMQ Issue Type: Bug Components: Broker, Transport Affects Versions: 5.11.1 Reporter: james Priority: Critical The VMTransport.getMessageQueue() method uses the "double checked locking" idiom to avoid the synchronization overhead if the messageQueue has already been instantiated. However, this idiom is broken unless the reference is marked as volatile. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
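For reference, a minimal sketch of the corrected idiom the report refers to: without `volatile`, a second thread may observe a partially constructed object under the Java Memory Model. The class and field names here are illustrative, not the actual VMTransport code.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Illustrative double-checked locking with a volatile field: the
// volatile write/read pair is what makes lazy initialization safe.
public class LazyQueueHolder {

    // Marking the reference volatile is the fix AMQ-5787 describes.
    private volatile BlockingQueue<Object> messageQueue;

    public BlockingQueue<Object> getMessageQueue() {
        BlockingQueue<Object> result = messageQueue; // single volatile read on the fast path
        if (result == null) {
            synchronized (this) {
                result = messageQueue;               // re-check under the lock
                if (result == null) {
                    messageQueue = result = new LinkedBlockingQueue<Object>();
                }
            }
        }
        return result;
    }
}
```

The local `result` variable avoids a second volatile read on the common already-initialized path.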
[jira] [Commented] (AMQ-5578) preallocate journal files
[ https://issues.apache.org/jira/browse/AMQ-5578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14552808#comment-14552808 ] james commented on AMQ-5578: Two things. First, I'm curious what the status of this issue is? The code seems to be largely complete, but the issue is still open. I am interested in this issue because we have encountered a performance bottleneck which seems to be improved by this patch. Second, I have a proposed additional option for the preallocation strategy. We use ActiveMQ embedded within our application, so I don't want to use the ZEROS strategy because it requires allocating a 32MB chunk of memory. However, our application can run in scenarios where the underlying filesystem is networked. This makes OS_KERNEL_COPY somewhat less ideal, since it puts the temp file in the same filesystem as the real data files. While this strategy works well in terms of the overall performance boost, it can cause up to a 5 second delay in the application layer while the new journal file is being preallocated. Provided below is a variant on the ZEROS policy which I have called CHUNKED_ZEROS. It uses a small (1MB) buffer to do the preallocation. This seems to provide a similar overall performance improvement while reducing the max application layer delay to 3 seconds (and not having the potentially large overhead of the ZEROS strategy).
{code}
private static final int PREALLOC_CHUNK_SIZE = 1 << 20;

private void doPreallocationChunkedZeros(RecoverableRandomAccessFile file) {
    ByteBuffer buffer = ByteBuffer.allocate(PREALLOC_CHUNK_SIZE);
    buffer.position(0);
    buffer.limit(PREALLOC_CHUNK_SIZE);
    try {
        FileChannel channel = file.getChannel();
        int remLen = maxFileLength;
        while (remLen > 0) {
            if (remLen < buffer.remaining()) {
                buffer.limit(remLen);
            }
            int writeLen = channel.write(buffer);
            remLen -= writeLen;
            buffer.rewind();
        }
        channel.force(false);
        channel.position(0);
    } catch (IOException e) {
        LOG.error("Could not preallocate journal file with zeros! Will continue without preallocation", e);
    }
}
{code}
> preallocate journal files > - > > Key: AMQ-5578 > URL: https://issues.apache.org/jira/browse/AMQ-5578 > Project: ActiveMQ > Issue Type: Improvement > Components: Message Store >Affects Versions: 5.11.0 >Reporter: Gary Tully >Assignee: Gary Tully > Labels: journal, kahaDB, perfomance > Fix For: 5.12.0 > > > Our journals are append only, however we use the size to track journal > rollover on recovery and replay. We can improve performance if we never > update the size on disk and preallocate on creation. > Rework journal logic to ensure size is never updated. This will allow the > configuration option from https://issues.apache.org/jira/browse/AMQ-4947 to > be the default. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (AMQ-5748) Add the ability to get Message Size from a Message Store
[ https://issues.apache.org/jira/browse/AMQ-5748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14552685#comment-14552685 ] Christopher L. Shannon commented on AMQ-5748: - I think adding the ability to get a metrics bean is a good idea, either as part of this issue or a new issue. In my case, if I'm querying the count, I usually want to know the size as well as other metrics about the store, so it would be nice to prevent iterating over the index more than once. Having a counter in DestinationMetrics is good but sometimes querying the actual store is helpful too. Maybe adding a method to the MessageStore interface (in addition to the count and message size methods) to return a metrics bean could work. I can play around with it some and see how it goes. Another thing I want to look at adding after this issue is resolved is expanding the metrics of the store to be able to query the message size for an individual consumer on a destination and not just the overall destination which this issue does. This will probably require iterating over the indexes relating to the subscriptions to figure out which consumer hasn't acknowledged the message in the store yet. > Add the ability to get Message Size from a Message Store > > > Key: AMQ-5748 > URL: https://issues.apache.org/jira/browse/AMQ-5748 > Project: ActiveMQ > Issue Type: New Feature > Components: Broker >Affects Versions: 5.11.1 >Reporter: Christopher L. Shannon >Priority: Minor > > Currently, the {{MessageStore}} interface supports getting a count for > messages ready to deliver using the {{getMessageCount}} method. It would > also be very useful to be able to retrieve the message sizes for those counts > as well for keeping track of metrics. > I've created a pull request to address this that adds a {{getMessageSize}} > method that focuses specifically on KahaDB and the Memory store. 
The KahaDB > store uses the same strategy as the existing {{getMessageCount}} method, > which is to iterate over the index and total up the size of the messages. > There are unit tests to show the size calculation and a unit test that shows > a store based on version 5 working with the new version (the index is rebuilt) > One extra issue is that the size was not being serialized to the index (it > was not included in the marshaller) so that required making a slight change > and adding a new marshaller for {{Location}} to store the size in the > location index of the store. Without this change, the size computation would > not work when the broker was restarted since the size was not serialized. > Note that I wasn't sure the best way to handle the new marshaller and version > compatibilities so I incremented the KahaDB version from 5 to 6. If an old > version of the index is loaded, the index should be detected as corrupted and > be rebuilt with the new format. If there is a better way to handle this > upgrade let me know and the patch can certainly be updated. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
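The single-pass metrics bean discussed above might be shaped roughly like this sketch; the class name, the factory method, and the list-of-sizes stand-in for the store index are assumptions for illustration, not the committed ActiveMQ API:

```java
import java.util.List;

// Illustrative sketch: a value object carrying both the message count
// and the total message size, computed from one walk of the store's
// index instead of two separate iterations.
public class StoreMetrics {

    public final long messageCount;
    public final long messageSize;

    public StoreMetrics(long messageCount, long messageSize) {
        this.messageCount = messageCount;
        this.messageSize = messageSize;
    }

    // Walks the "index" (modeled here as a list of per-message sizes)
    // once, accumulating both statistics in the same pass.
    public static StoreMetrics of(List<Long> messageSizes) {
        long count = 0;
        long total = 0;
        for (long size : messageSizes) {
            count++;
            total += size;
        }
        return new StoreMetrics(count, total);
    }
}
```

A `MessageStore.getMetrics()` method returning such a bean would let callers that want both figures avoid the double index scan mentioned in the comment.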
[jira] [Created] (AMQ-5786) ActiveMQ failed to start with KahaDB reporting corrupt journal records and throwing NegativeArraySizeException
Martin Lichtin created AMQ-5786: --- Summary: ActiveMQ failed to start with KahaDB reporting corrupt journal records and throwing NegativeArraySizeException Key: AMQ-5786 URL: https://issues.apache.org/jira/browse/AMQ-5786 Project: ActiveMQ Issue Type: Bug Components: KahaDB Affects Versions: 5.11.1 Environment: Karaf 3.0.3 Reporter: Martin Lichtin Priority: Critical ActiveMQ failed to start up, with the following information: {noformat} 2015-05-20 14:23:21,709 | INFO | apache.activemq.server]) | Journal | tore.kahadb.disk.journal.Journal 219 | 105 - org.apache.activemq.activemq-osgi - 5.11.1 | Corrupt journal records found in 'activemq/kahadb/db-1.log' between offsets: 5504795..5505130 2015-05-20 14:23:21,725 | INFO | apache.activemq.server]) | Journal | tore.kahadb.disk.journal.Journal 219 | 105 - org.apache.activemq.activemq-osgi - 5.11.1 | Corrupt journal records found in 'activemq/kahadb/db-1.log' between offsets: 5611475..5612818 2015-05-20 14:23:21,749 | INFO | apache.activemq.server]) | Journal | tore.kahadb.disk.journal.Journal 219 | 105 - org.apache.activemq.activemq-osgi - 5.11.1 | Corrupt journal records found in 'activemq/kahadb/db-1.log' between offsets: 6139835..6140254 2015-05-20 14:23:21,756 | INFO | apache.activemq.server]) | Journal | tore.kahadb.disk.journal.Journal 219 | 105 - org.apache.activemq.activemq-osgi - 5.11.1 | Corrupt journal records found in 'activemq/kahadb/db-1.log' between offsets: 6246179..6247270 2015-05-20 14:23:21,765 | INFO | apache.activemq.server]) | Journal | tore.kahadb.disk.journal.Journal 219 | 105 - org.apache.activemq.activemq-osgi - 5.11.1 | Corrupt journal records found in 'activemq/kahadb/db-1.log' between offsets: 6512519..6520426 2015-05-20 14:23:21,789 | INFO | apache.activemq.server]) | Journal | tore.kahadb.disk.journal.Journal 219 | 105 - org.apache.activemq.activemq-osgi - 5.11.1 | Corrupt journal records found in 'activemq/kahadb/db-1.log' between offsets: 7606018..7627848 2015-05-20 14:23:21,794 | INFO | 
apache.activemq.server]) | Journal | tore.kahadb.disk.journal.Journal 219 | 105 - org.apache.activemq.activemq-osgi - 5.11.1 | Corrupt journal records found in 'activemq/kahadb/db-1.log' between offsets: 7630473..7650409 2015-05-20 14:23:21,796 | INFO | apache.activemq.server]) | Journal | tore.kahadb.disk.journal.Journal 219 | 105 - org.apache.activemq.activemq-osgi - 5.11.1 | Corrupt journal records found in 'activemq/kahadb/db-1.log' between offsets: 7650488..7703243 2015-05-20 14:23:22,065 | INFO | apache.activemq.server]) | MessageDatabase | .kahadb.MessageDatabase$Metadata 168 | 105 - org.apache.activemq.activemq-osgi - 5.11.1 | KahaDB is version 5 2015-05-20 14:23:22,188 | INFO | apache.activemq.server]) | MessageDatabase | emq.store.kahadb.MessageDatabase 603 | 105 - org.apache.activemq.activemq-osgi - 5.11.1 | Recovering from the journal ... 2015-05-20 14:23:22,188 | ERROR | apache.activemq.server]) | BrokerService | he.activemq.broker.BrokerService 609 | 105 - org.apache.activemq.activemq-osgi - 5.11.1 | Failed to start Apache ActiveMQ ([broker-amq, null], java.io.IOException: Invalid location: 1:6516763, : java.lang.NegativeArraySizeException) 2015-05-20 14:23:22,189 | INFO | apache.activemq.server]) | BrokerService | he.activemq.broker.BrokerService 758 | 105 - org.apache.activemq.activemq-osgi - 5.11.1 | Apache ActiveMQ 5.11.1 (broker-amq, null) is shutting down 2015-05-20 14:23:22,194 | INFO | apache.activemq.server]) | TransportConnector | tivemq.broker.TransportConnector 291 | 105 - org.apache.activemq.activemq-osgi - 5.11.1 | Connector openwire stopped 2015-05-20 14:23:22,372 | INFO | apache.activemq.server]) | JobSchedulerStoreImpl| .scheduler.JobSchedulerStoreImpl 259 | 105 - org.apache.activemq.activemq-osgi - 5.11.1 | JobSchedulerStore: activemq/broker-amq/scheduler stopped. 
2015-05-20 14:23:22,372 | INFO | apache.activemq.server]) | PListStoreImpl | tore.kahadb.plist.PListStoreImpl 356 | 105 - org.apache.activemq.activemq-osgi - 5.11.1 | PListStore:[/integrator/int1/proc/broker-amq/karaf-std-0.2-440/activemq/broker-amq/tmp_storage] stopped 2015-05-20 14:23:22,372 | INFO | apache.activemq.server]) | KahaDBStore | ctivemq.store.kahadb.KahaDBStore 245 | 105 - org.apache.activemq.activemq-osgi - 5.11.1 | Stopping async queue tasks 2015-05-20 14:23:22,372 | INFO | apache.activemq.server]) | KahaDBStore | ctivemq.store.kahadb.KahaDBStore 259 | 105 -
[jira] [Resolved] (ACTIVEMQ6-111) Journal directory created even if persistence is disabled
[ https://issues.apache.org/jira/browse/ACTIVEMQ6-111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Justin Bertram resolved ACTIVEMQ6-111. -- Resolution: Fixed Fix Version/s: 1.0.0 Resolved via 4b833cf5e79324453b7fa51a4158e890b23048b5 > Journal directory created even if persistence is disabled > - > > Key: ACTIVEMQ6-111 > URL: https://issues.apache.org/jira/browse/ACTIVEMQ6-111 > Project: ActiveMQ Artemis > Issue Type: Bug >Reporter: Justin Bertram >Assignee: Justin Bertram > Fix For: 1.0.0 > > > The directory for the journal is still created even if the configuration sets persistence to false. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
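The shape of the fix can be sketched like this; the method and field names are assumed for illustration, not the actual Artemis code. The journal directory is only created when persistence is actually enabled.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class JournalDirGuard {

    /** Create the journal directory only when persistence is on. */
    public static Path initJournalDirectory(Path journalDir, boolean persistenceEnabled)
            throws IOException {
        if (!persistenceEnabled) {
            return null; // a non-persistent broker needs nothing on disk
        }
        return Files.createDirectories(journalDir);
    }

    public static void main(String[] args) throws IOException {
        Path base = Files.createTempDirectory("broker");
        Path journal = base.resolve("journal");
        initJournalDirectory(journal, false);
        System.out.println("persistence off, dir exists: " + Files.exists(journal));
        initJournalDirectory(journal, true);
        System.out.println("persistence on, dir exists: " + Files.exists(journal));
    }
}
```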
[jira] [Updated] (AMQ-5785) Deadlock between NIO worker and Broker.Servic threads
[ https://issues.apache.org/jira/browse/AMQ-5785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sree Panchajanyam D updated AMQ-5785: - Attachment: threaddump16214.txt Attaching the thread dump. This issue is occurring every day at peak hours. We upgraded to 5.10.0 from 5.6.0 about 4 months ago; the issue started occurring about a week ago. > Deadlock between NIO worker and Broker.Servic threads > - > > Key: AMQ-5785 > URL: https://issues.apache.org/jira/browse/AMQ-5785 > Project: ActiveMQ > Issue Type: Bug > Components: Broker >Affects Versions: 5.10.0 > Environment: Physical Machine (192 GB RAM, 24 VCPU), RHEL 5.9, Java > 1.7 > ActiveMQ runs on 4 GB heap >Reporter: Sree Panchajanyam D >Priority: Critical > Attachments: threaddump16214.txt > > > During the peak loads we are encountering a recurring deadlock issue in > ActiveMQ broker. The threads that are deadlocked are > ActiveMQ NIO Worker - trying to add message to FilePendingCursor > Broker.Service Worker - that is trying to expire message from > FilePendingCursor. 
> = > Found one Java-level deadlock: > = > "ActiveMQ NIO Worker 1003": > waiting to lock monitor 0x2aeeb515a4f8 (object 0x0007807da3e8, a > org.apache.activemq.broker.region.cursors.FilePendingMessageCursor), > which is held by "ActiveMQ BrokerService.worker.1" > "ActiveMQ BrokerService.worker.1": > waiting for ownable synchronizer 0x00077ac84b40, (a > java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync), > which is held by "ActiveMQ NIO Worker 1003" > Java stack information for the threads listed above: > === > "ActiveMQ NIO Worker 1003": > at > org.apache.activemq.broker.region.cursors.FilePendingMessageCursor.addMessageLast(FilePendingMessageCursor.java:207) > - waiting to lock <0x0007807da3e8> (a > org.apache.activemq.broker.region.cursors.FilePendingMessageCursor) > at > org.apache.activemq.broker.region.cursors.StoreQueueCursor.addMessageLast(StoreQueueCursor.java:96) > - locked <0x0007784e8c88> (a > org.apache.activemq.broker.region.cursors.StoreQueueCursor) > at org.apache.activemq.broker.region.Queue.sendMessage(Queue.java:1855) > at org.apache.activemq.broker.region.Queue.doMessageSend(Queue.java:939) > at org.apache.activemq.broker.region.Queue.send(Queue.java:733) > at > org.apache.activemq.broker.region.AbstractRegion.send(AbstractRegion.java:424) > at > org.apache.activemq.broker.region.RegionBroker.send(RegionBroker.java:445) > at > org.apache.activemq.broker.jmx.ManagedRegionBroker.send(ManagedRegionBroker.java:297) > at > org.apache.activemq.broker.CompositeDestinationBroker.send(CompositeDestinationBroker.java:96) > at > org.apache.activemq.broker.TransactionBroker.send(TransactionBroker.java:307) > at org.apache.activemq.broker.BrokerFilter.send(BrokerFilter.java:147) > at org.apache.activemq.broker.UserIDBroker.send(UserIDBroker.java:56) > at > org.apache.activemq.broker.MutableBrokerFilter.send(MutableBrokerFilter.java:152) > at > org.apache.activemq.broker.TransportConnection.processMessage(TransportConnection.java:496) > at > 
org.apache.activemq.command.ActiveMQMessage.visit(ActiveMQMessage.java:756) > at > org.apache.activemq.broker.TransportConnection.service(TransportConnection.java:294) > at > org.apache.activemq.broker.TransportConnection$1.onCommand(TransportConnection.java:148) > at > org.apache.activemq.transport.MutexTransport.onCommand(MutexTransport.java:50) > at > org.apache.activemq.transport.WireFormatNegotiator.onCommand(WireFormatNegotiator.java:113) > at > org.apache.activemq.transport.AbstractInactivityMonitor.onCommand(AbstractInactivityMonitor.java:270) > at > org.apache.activemq.transport.TransportSupport.doConsume(TransportSupport.java:83) > at > org.apache.activemq.transport.nio.NIOTransport.serviceRead(NIOTransport.java:138) > at > org.apache.activemq.transport.nio.NIOTransport$1.onSelect(NIOTransport.java:69) > at > org.apache.activemq.transport.nio.SelectorSelection.onSelect(SelectorSelection.java:94) > at > org.apache.activemq.transport.nio.SelectorWorker$1.run(SelectorWorker.java:119) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > at java.lang.Thread.run(Thread.java:744) > "ActiveMQ BrokerService.worker.1": > at sun.misc.Unsafe.park(Native Method) > - parking to wait for <0x00077ac84b40> (a > java.util.concurrent.lo
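The trace above is a classic lock-ordering cycle: the NIO worker holds the cursor's monitor and waits for the queue's ReentrantReadWriteLock, while the expiry worker holds the write lock and waits for the monitor. A minimal illustrative sketch (not ActiveMQ code) of the standard remedy, making both paths take the two locks in the same order so no cycle can form:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class LockOrderingSketch {
    private final Object cursorMonitor = new Object();
    private final ReentrantReadWriteLock queueLock = new ReentrantReadWriteLock();
    int pending; // messages sitting in the (pretend) pending cursor

    /** Send path: queue lock first, then the cursor monitor. */
    void addMessage() {
        queueLock.writeLock().lock();
        try {
            synchronized (cursorMonitor) {
                pending++;
            }
        } finally {
            queueLock.writeLock().unlock();
        }
    }

    /** Expiry path: same lock order as addMessage, so no cycle can form. */
    void expireMessage() {
        queueLock.writeLock().lock();
        try {
            synchronized (cursorMonitor) {
                if (pending > 0) {
                    pending--;
                }
            }
        } finally {
            queueLock.writeLock().unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        LockOrderingSketch s = new LockOrderingSketch();
        Thread producer = new Thread(() -> { for (int i = 0; i < 1000; i++) s.addMessage(); });
        Thread expirer = new Thread(() -> { for (int i = 0; i < 1000; i++) s.expireMessage(); });
        producer.start();
        expirer.start();
        producer.join();
        expirer.join();
        System.out.println("finished without deadlock, pending=" + s.pending);
    }
}
```

In the broker the two paths acquire the monitor and the read-write lock in opposite orders, which is exactly the precondition for the deadlock shown in the dump.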
[jira] [Updated] (AMQ-5785) Deadlock between NIO worker and Broker.Servic threads
[ https://issues.apache.org/jira/browse/AMQ-5785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sree Panchajanyam D updated AMQ-5785: - Description: During the peak loads we are encountering a recurring deadlock issue in ActiveMQ broker. The threads that are deadlocked are ActiveMQ NIO Worker - trying to add message to FilePendingCursor Broker.Service Worker - that is trying to expire message from FilePendingCursor. = Found one Java-level deadlock: = "ActiveMQ NIO Worker 1003": waiting to lock monitor 0x2aeeb515a4f8 (object 0x0007807da3e8, a org.apache.activemq.broker.region.cursors.FilePendingMessageCursor), which is held by "ActiveMQ BrokerService.worker.1" "ActiveMQ BrokerService.worker.1": waiting for ownable synchronizer 0x00077ac84b40, (a java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync), which is held by "ActiveMQ NIO Worker 1003" Java stack information for the threads listed above: === "ActiveMQ NIO Worker 1003": at org.apache.activemq.broker.region.cursors.FilePendingMessageCursor.addMessageLast(FilePendingMessageCursor.java:207) - waiting to lock <0x0007807da3e8> (a org.apache.activemq.broker.region.cursors.FilePendingMessageCursor) at org.apache.activemq.broker.region.cursors.StoreQueueCursor.addMessageLast(StoreQueueCursor.java:96) - locked <0x0007784e8c88> (a org.apache.activemq.broker.region.cursors.StoreQueueCursor) at org.apache.activemq.broker.region.Queue.sendMessage(Queue.java:1855) at org.apache.activemq.broker.region.Queue.doMessageSend(Queue.java:939) at org.apache.activemq.broker.region.Queue.send(Queue.java:733) at org.apache.activemq.broker.region.AbstractRegion.send(AbstractRegion.java:424) at org.apache.activemq.broker.region.RegionBroker.send(RegionBroker.java:445) at org.apache.activemq.broker.jmx.ManagedRegionBroker.send(ManagedRegionBroker.java:297) at org.apache.activemq.broker.CompositeDestinationBroker.send(CompositeDestinationBroker.java:96) at 
org.apache.activemq.broker.TransactionBroker.send(TransactionBroker.java:307) at org.apache.activemq.broker.BrokerFilter.send(BrokerFilter.java:147) at org.apache.activemq.broker.UserIDBroker.send(UserIDBroker.java:56) at org.apache.activemq.broker.MutableBrokerFilter.send(MutableBrokerFilter.java:152) at org.apache.activemq.broker.TransportConnection.processMessage(TransportConnection.java:496) at org.apache.activemq.command.ActiveMQMessage.visit(ActiveMQMessage.java:756) at org.apache.activemq.broker.TransportConnection.service(TransportConnection.java:294) at org.apache.activemq.broker.TransportConnection$1.onCommand(TransportConnection.java:148) at org.apache.activemq.transport.MutexTransport.onCommand(MutexTransport.java:50) at org.apache.activemq.transport.WireFormatNegotiator.onCommand(WireFormatNegotiator.java:113) at org.apache.activemq.transport.AbstractInactivityMonitor.onCommand(AbstractInactivityMonitor.java:270) at org.apache.activemq.transport.TransportSupport.doConsume(TransportSupport.java:83) at org.apache.activemq.transport.nio.NIOTransport.serviceRead(NIOTransport.java:138) at org.apache.activemq.transport.nio.NIOTransport$1.onSelect(NIOTransport.java:69) at org.apache.activemq.transport.nio.SelectorSelection.onSelect(SelectorSelection.java:94) at org.apache.activemq.transport.nio.SelectorWorker$1.run(SelectorWorker.java:119) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:744) "ActiveMQ BrokerService.worker.1": at sun.misc.Unsafe.park(Native Method) - parking to wait for <0x00077ac84b40> (a java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:834) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:867) at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1197) at java.util.concurrent.locks.ReentrantReadWriteLock$WriteLock.lock(ReentrantReadWriteLock.java:945) at org.apache.activemq.broker.region.Queue.messageExpired(Queue.java:1841) at org.apache.activemq.broker.region.cursors.FilePendingMessageCursor.discardExpiredMessage(FilePendingMessageCursor.java:474) at org.apache.activemq.broker.region.cursors.FilePendi
[jira] [Created] (AMQ-5785) Deadlock between NIO worker and Broker.Servic threads
Sree Panchajanyam D created AMQ-5785: Summary: Deadlock between NIO worker and Broker.Servic threads Key: AMQ-5785 URL: https://issues.apache.org/jira/browse/AMQ-5785 Project: ActiveMQ Issue Type: Bug Components: Broker Affects Versions: 5.10.0 Environment: Physical Machine (192 GB RAM, 24 VCPU), RHEL 5.9, Java 1.7 ActiveMQ runs on 4 GB heap Reporter: Sree Panchajanyam D Priority: Critical During the peak loads we are encountering a recurring deadlock issue in ActiveMQ broker. The threads that are deadlocked are ActiveMQ NIO Worker - trying to add message to FilePendingCursor Broker.Service Worker - that is trying to expire message from FilePendingCursor. ===Found one Java-level deadlock: = "ActiveMQ NIO Worker 1003": waiting to lock monitor 0x2aeeb515a4f8 (object 0x0007807da3e8, a org.apache.activemq.broker.region.cursors.FilePendingMessageCursor), which is held by "ActiveMQ BrokerService.worker.1" "ActiveMQ BrokerService.worker.1": waiting for ownable synchronizer 0x00077ac84b40, (a java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync), which is held by "ActiveMQ NIO Worker 1003" Java stack information for the threads listed above: === "ActiveMQ NIO Worker 1003": at org.apache.activemq.broker.region.cursors.FilePendingMessageCursor.addMessageLast(FilePendingMessageCursor.java:207) - waiting to lock <0x0007807da3e8> (a org.apache.activemq.broker.region.cursors.FilePendingMessageCursor) at org.apache.activemq.broker.region.cursors.StoreQueueCursor.addMessageLast(StoreQueueCursor.java:96) - locked <0x0007784e8c88> (a org.apache.activemq.broker.region.cursors.StoreQueueCursor) at org.apache.activemq.broker.region.Queue.sendMessage(Queue.java:1855) at org.apache.activemq.broker.region.Queue.doMessageSend(Queue.java:939) at org.apache.activemq.broker.region.Queue.send(Queue.java:733) at org.apache.activemq.broker.region.AbstractRegion.send(AbstractRegion.java:424) at org.apache.activemq.broker.region.RegionBroker.send(RegionBroker.java:445) at 
org.apache.activemq.broker.jmx.ManagedRegionBroker.send(ManagedRegionBroker.java:297) at org.apache.activemq.broker.CompositeDestinationBroker.send(CompositeDestinationBroker.java:96) at org.apache.activemq.broker.TransactionBroker.send(TransactionBroker.java:307) at org.apache.activemq.broker.BrokerFilter.send(BrokerFilter.java:147) at org.apache.activemq.broker.UserIDBroker.send(UserIDBroker.java:56) at org.apache.activemq.broker.MutableBrokerFilter.send(MutableBrokerFilter.java:152) at org.apache.activemq.broker.TransportConnection.processMessage(TransportConnection.java:496) at org.apache.activemq.command.ActiveMQMessage.visit(ActiveMQMessage.java:756) at org.apache.activemq.broker.TransportConnection.service(TransportConnection.java:294) at org.apache.activemq.broker.TransportConnection$1.onCommand(TransportConnection.java:148) at org.apache.activemq.transport.MutexTransport.onCommand(MutexTransport.java:50) at org.apache.activemq.transport.WireFormatNegotiator.onCommand(WireFormatNegotiator.java:113) at org.apache.activemq.transport.AbstractInactivityMonitor.onCommand(AbstractInactivityMonitor.java:270) at org.apache.activemq.transport.TransportSupport.doConsume(TransportSupport.java:83) at org.apache.activemq.transport.nio.NIOTransport.serviceRead(NIOTransport.java:138) at org.apache.activemq.transport.nio.NIOTransport$1.onSelect(NIOTransport.java:69) at org.apache.activemq.transport.nio.SelectorSelection.onSelect(SelectorSelection.java:94) at org.apache.activemq.transport.nio.SelectorWorker$1.run(SelectorWorker.java:119) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:744) "ActiveMQ BrokerService.worker.1": at sun.misc.Unsafe.park(Native Method) - parking to wait for <0x00077ac84b40> (a java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:834) at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:867) at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1197) at java.util.concurr
[jira] [Resolved] (AMQ-5164) QueueMasterSlaveSingleUrlTest.testAdvisory fails
[ https://issues.apache.org/jira/browse/AMQ-5164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gary Tully resolved AMQ-5164. - Resolution: Fixed Fix Version/s: 5.12.0 Assignee: Gary Tully Seems exponential backoff on failover was allowing the receive to timeout. Test reinstated. > QueueMasterSlaveSingleUrlTest.testAdvisory fails > > > Key: AMQ-5164 > URL: https://issues.apache.org/jira/browse/AMQ-5164 > Project: ActiveMQ > Issue Type: Bug >Reporter: Kevin Earls >Assignee: Gary Tully > Fix For: 5.12.0 > > > This test currently fails with the following error: > testAdvisory(org.apache.activemq.broker.ft.QueueMasterSlaveSingleUrlTest) > Time elapsed: 24.891 sec <<< FAILURE! > junit.framework.AssertionFailedError: Didn't received advisory > at junit.framework.Assert.fail(Assert.java:57) > at junit.framework.Assert.assertTrue(Assert.java:22) > at junit.framework.Assert.assertNotNull(Assert.java:256) > at junit.framework.TestCase.assertNotNull(TestCase.java:426) > at > org.apache.activemq.broker.ft.QueueMasterSlaveTestSupport.testAdvisory(QueueMasterSlaveTestSupport.java:153) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:606) > at junit.framework.TestCase.runTest(TestCase.java:176) > at junit.framework.TestCase.runBare(TestCase.java:141) > at > org.apache.activemq.CombinationTestSupport.runBare(CombinationTestSupport.java:107) > at > org.apache.activemq.CombinationTestSupport.runBare(CombinationTestSupport.java:113) > at junit.framework.TestResult$1.protect(TestResult.java:122) > at junit.framework.TestResult.runProtected(TestResult.java:142) > at junit.framework.TestResult.run(TestResult.java:125) > at junit.framework.TestCase.run(TestCase.java:129) > at junit.framework.TestSuite.runTest(TestSuite.java:255) 
> at junit.framework.TestSuite.run(TestSuite.java:250) > at > org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:84) > at > org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:254) > at > org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:149) > at > org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124) > at > org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200) > at > org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153) > at > org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103) > Results : > Failed tests: > > QueueMasterSlaveSingleUrlTest>CombinationTestSupport.runBare:113->CombinationTestSupport.runBare:107->QueueMasterSlaveTestSupport.testAdvisory:153 > Didn't received advisory > Tests run: 1, Failures: 1, Errors: 0, Skipped: 0 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
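The resolution comment above points at a timing interaction worth spelling out: with exponential backoff on failover reconnects, the cumulative delay grows quickly and can outlast a consumer's receive(timeout). A small sketch with illustrative numbers (not the test's actual settings):

```java
public class BackoffSketch {

    /** Count how many reconnect attempts complete before the receive timeout. */
    public static int attemptsWithin(long initialDelayMs, double multiplier, long receiveTimeoutMs) {
        long delay = initialDelayMs;
        long elapsed = 0;
        int attempts = 0;
        while (elapsed + delay <= receiveTimeoutMs) {
            elapsed += delay;               // wait out this backoff interval
            delay = (long) (delay * multiplier); // next interval doubles (say)
            attempts++;
        }
        return attempts;
    }

    public static void main(String[] args) {
        // With a 10 ms initial delay doubling each time, only a handful of
        // reconnect attempts fit inside a 1000 ms receive timeout.
        System.out.println(attemptsWithin(10, 2.0, 1000) + " attempts fit in 1000 ms");
    }
}
```

Capping or disabling the backoff multiplier in tests keeps reconnects well inside the receive window, which matches the "test reinstated" outcome.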
[jira] [Resolved] (AMQ-5783) Failed to browse Topic: XXXXX java.io.EOFException: Chunk stream does not exist, page: y is marked free
[ https://issues.apache.org/jira/browse/AMQ-5783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gary Tully resolved AMQ-5783. - Resolution: Fixed fix in http://git-wip-us.apache.org/repos/asf/activemq/commit/3fdf9861 An empty topic in the store did a partial cleanup of index resources but there remained some stale references in the metadata. It is properly cleaned up now. Also an MBean was left dangling when a sub was removed. Thanks to pat fox for the nice test :-) > Failed to browse Topic: X java.io.EOFException: Chunk stream does not > exist, page: y is marked free > --- > > Key: AMQ-5783 > URL: https://issues.apache.org/jira/browse/AMQ-5783 > Project: ActiveMQ > Issue Type: Bug > Components: KahaDB, Message Store >Affects Versions: 5.11.0 >Reporter: Gary Tully >Assignee: Gary Tully > Labels: durable_subscription > Fix For: 5.12.0 > > > When an offline durable subscriber is timed out > (offlineDurableSubscriberTimeout configured) periodically see the following > WARNING message. 
> {code} > 2015-05-13 13:45:08,472 [sage] Scheduler] - WARN Topic >- Failed to browse Topic: X > java.io.EOFException: Chunk stream does not exist, page: 39 is marked free > at > org.apache.activemq.store.kahadb.disk.page.Transaction$2.readPage(Transaction.java:470) > at > org.apache.activemq.store.kahadb.disk.page.Transaction$2.(Transaction.java:447) > at > org.apache.activemq.store.kahadb.disk.page.Transaction.openInputStream(Transaction.java:444) > at > org.apache.activemq.store.kahadb.disk.page.Transaction.load(Transaction.java:420) > at > org.apache.activemq.store.kahadb.disk.page.Transaction.load(Transaction.java:377) > at > org.apache.activemq.store.kahadb.disk.index.BTreeIndex.loadNode(BTreeIndex.java:266) > at > org.apache.activemq.store.kahadb.disk.index.BTreeIndex.getRoot(BTreeIndex.java:174) > at > org.apache.activemq.store.kahadb.disk.index.BTreeIndex.iterator(BTreeIndex.java:236) > at > org.apache.activemq.store.kahadb.MessageDatabase$MessageOrderIndex$MessageOrderIterator.(MessageDatabase.java:3033) > at > org.apache.activemq.store.kahadb.MessageDatabase$MessageOrderIndex.iterator(MessageDatabase.java:2985) > at > org.apache.activemq.store.kahadb.KahaDBStore$KahaDBMessageStore$4.execute(KahaDBStore.java:564) > at > org.apache.activemq.store.kahadb.disk.page.Transaction.execute(Transaction.java:779) > at > org.apache.activemq.store.kahadb.KahaDBStore$KahaDBMessageStore.recover(KahaDBStore.java:558) > at > org.apache.activemq.store.ProxyTopicMessageStore.recover(ProxyTopicMessageStore.java:62) > at org.apache.activemq.broker.region.Topic.doBrowse(Topic.java:589) > at org.apache.activemq.broker.region.Topic.access$100(Topic.java:65) > at org.apache.activemq.broker.region.Topic$6.run(Topic.java:722) > at > org.apache.activemq.thread.SchedulerTimerTask.run(SchedulerTimerTask.java:33) > at java.util.TimerThread.mainLoop(Timer.java:555) > at java.util.TimerThread.run(Timer.java:505) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (AMQ-4842) activemq-unit-tests - A test keep failing testVirtualTopicFailover
[ https://issues.apache.org/jira/browse/AMQ-4842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gary Tully resolved AMQ-4842. - Resolution: Fixed Fix Version/s: (was: 5.9.1) (was: 5.10.0) 5.12.0 test works now and is reinstated. > activemq-unit-tests - A test keep failing testVirtualTopicFailover > -- > > Key: AMQ-4842 > URL: https://issues.apache.org/jira/browse/AMQ-4842 > Project: ActiveMQ > Issue Type: Test > Components: Test Cases >Affects Versions: 5.9.0 >Reporter: Claus Ibsen >Assignee: Gary Tully > Fix For: 5.12.0 > > > Failed tests: > testVirtualTopicFailover(org.apache.activemq.broker.ft.kahaDbJdbcLeaseQueueMasterSlaveTest): > Get message after failover -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Reopened] (AMQ-4842) activemq-unit-tests - A test keep failing testVirtualTopicFailover
[ https://issues.apache.org/jira/browse/AMQ-4842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gary Tully reopened AMQ-4842: - Assignee: Gary Tully (was: Claus Ibsen) Peeked at that test and it needed some work. The problem was the store not being shared between master and slave, so the consumer queue did not exist on the slave. > activemq-unit-tests - A test keep failing testVirtualTopicFailover > -- > > Key: AMQ-4842 > URL: https://issues.apache.org/jira/browse/AMQ-4842 > Project: ActiveMQ > Issue Type: Test > Components: Test Cases >Affects Versions: 5.9.0 >Reporter: Claus Ibsen >Assignee: Gary Tully > Fix For: 5.9.1, 5.10.0 > > > Failed tests: > testVirtualTopicFailover(org.apache.activemq.broker.ft.kahaDbJdbcLeaseQueueMasterSlaveTest): > Get message after failover -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (AMQ-5784) Jetty9 websockets don't work with mqtt
Lukas Treyer created AMQ-5784: - Summary: Jetty9 websockets don't work with mqtt Key: AMQ-5784 URL: https://issues.apache.org/jira/browse/AMQ-5784 Project: ActiveMQ Issue Type: Bug Components: Connector, MQTT Reporter: Lukas Treyer ActiveMQ in combination with Jetty 9 does not work for WebSockets + MQTT. Patch: http://pastebin.com/phcsJHR5 I tested with the chat example that ships with ActiveMQ in all browsers. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (AMQ-5430) LevelDB on NFS creates .nfs files
[ https://issues.apache.org/jira/browse/AMQ-5430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14552051#comment-14552051 ] Matthew Faulkner commented on AMQ-5430: --- Hi, I also see this problem in ActiveMQ 5.11.1. > LevelDB on NFS creates .nfs files > > > Key: AMQ-5430 > URL: https://issues.apache.org/jira/browse/AMQ-5430 > Project: ActiveMQ > Issue Type: Bug > Components: activemq-leveldb-store, Broker >Affects Versions: 5.10.0 >Reporter: Anuj Khandelwal > > We are currently testing LevelDB on NFS. > LevelDB creates .log files in the leveldb directory to store the actual message data, > and these files rotate after reaching 100 MB. These .log files get deleted when > all the messages are consumed from a particular file. > Issue: after all the messages are consumed I can see that the files are getting > deleted, but internally .nfs files of the same size are created. > We have to restart the process to delete those .nfs files. > From my understanding, it seems that the LevelDB store keeps the old log files > open after they are deleted. > Below is the snapshot of files: > amqt...@kepler19.nyc:/u/amqtest/dev/leveldb> ls -a > .nfs0082e7befafe > .nfs00960d1eeb46 > .nfs01033243ea15 > .nfs00614cf1eaef > .nfs00960d1aee3e > .nfs01033242e52d > dirty.index > store-version.txt > .nfs0082e7c3000100c5 > .nfs00960d1ff27f > 724ff92c.index > lock > 724ff92c.log > plist.index > -- > amqt...@kepler19.nyc:/u/amqtest/dev/leveldb> du -sh .nfs* > 107M .nfs00614cf1eaef > 101M .nfs0082e7befafe > 101M .nfs0082e7c3000100c5 > 108M .nfs00960d1aee3e > 106M .nfs00960d1eeb46 > 104M .nfs00960d1ff27f > 101M .nfs01033242e52d > 101M .nfs01033243ea15 > Thanks, > Anuj -- This message was sent by Atlassian JIRA (v6.3.4#6332)
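The .nfs files are the NFS client's "silly rename" at work: deleting a file that some process still holds open renames it to .nfsXXXX until the last handle closes, which matches the report that the store keeps old log files open after deleting them. An illustrative sketch (not LevelDB store code) of the safe pattern, closing the handle before deleting:

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class CloseBeforeDelete {
    public static void main(String[] args) throws IOException {
        Path log = Files.createTempFile("journal", ".log");
        FileChannel channel = FileChannel.open(log, StandardOpenOption.WRITE);
        // Release the handle first ...
        channel.close();
        // ... so an NFS client can really remove the file instead of
        // silly-renaming it to .nfsXXXX until the handle goes away.
        Files.delete(log);
        System.out.println("deleted: " + !Files.exists(log));
    }
}
```

On a local filesystem delete-while-open works either way; the ordering only becomes visible on NFS, which is why the leftover files appear there and not in local testing.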
[jira] [Created] (AMQ-5783) Failed to browse Topic: XXXXX java.io.EOFException: Chunk stream does not exist, page: y is marked free
Gary Tully created AMQ-5783: --- Summary: Failed to browse Topic: X java.io.EOFException: Chunk stream does not exist, page: y is marked free Key: AMQ-5783 URL: https://issues.apache.org/jira/browse/AMQ-5783 Project: ActiveMQ Issue Type: Bug Components: KahaDB, Message Store Affects Versions: 5.11.0 Reporter: Gary Tully Assignee: Gary Tully Fix For: 5.12.0 When an offline durable subscriber is timed out (offlineDurableSubscriberTimeout configured) periodically see the following WARNING message. {code} 2015-05-13 13:45:08,472 [sage] Scheduler] - WARN Topic - Failed to browse Topic: X java.io.EOFException: Chunk stream does not exist, page: 39 is marked free at org.apache.activemq.store.kahadb.disk.page.Transaction$2.readPage(Transaction.java:470) at org.apache.activemq.store.kahadb.disk.page.Transaction$2.(Transaction.java:447) at org.apache.activemq.store.kahadb.disk.page.Transaction.openInputStream(Transaction.java:444) at org.apache.activemq.store.kahadb.disk.page.Transaction.load(Transaction.java:420) at org.apache.activemq.store.kahadb.disk.page.Transaction.load(Transaction.java:377) at org.apache.activemq.store.kahadb.disk.index.BTreeIndex.loadNode(BTreeIndex.java:266) at org.apache.activemq.store.kahadb.disk.index.BTreeIndex.getRoot(BTreeIndex.java:174) at org.apache.activemq.store.kahadb.disk.index.BTreeIndex.iterator(BTreeIndex.java:236) at org.apache.activemq.store.kahadb.MessageDatabase$MessageOrderIndex$MessageOrderIterator.(MessageDatabase.java:3033) at org.apache.activemq.store.kahadb.MessageDatabase$MessageOrderIndex.iterator(MessageDatabase.java:2985) at org.apache.activemq.store.kahadb.KahaDBStore$KahaDBMessageStore$4.execute(KahaDBStore.java:564) at org.apache.activemq.store.kahadb.disk.page.Transaction.execute(Transaction.java:779) at org.apache.activemq.store.kahadb.KahaDBStore$KahaDBMessageStore.recover(KahaDBStore.java:558) at org.apache.activemq.store.ProxyTopicMessageStore.recover(ProxyTopicMessageStore.java:62) at 
org.apache.activemq.broker.region.Topic.doBrowse(Topic.java:589) at org.apache.activemq.broker.region.Topic.access$100(Topic.java:65) at org.apache.activemq.broker.region.Topic$6.run(Topic.java:722) at org.apache.activemq.thread.SchedulerTimerTask.run(SchedulerTimerTask.java:33) at java.util.TimerThread.mainLoop(Timer.java:555) at java.util.TimerThread.run(Timer.java:505) {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (AMQ-5603) Consider preallocation of journal files in batch increments
[ https://issues.apache.org/jira/browse/AMQ-5603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14552002#comment-14552002 ] Gary Tully commented on AMQ-5603: - I guess we should reuse gc'ed journal files and maybe do a pool preallocation, outside of the index locks, such that we can preallocate independently of writes. Maybe the gc thread could ensure the preallocation pool is topped up. > Consider preallocation of journal files in batch increments > --- > > Key: AMQ-5603 > URL: https://issues.apache.org/jira/browse/AMQ-5603 > Project: ActiveMQ > Issue Type: New Feature > Components: Message Store >Reporter: Christian Posta >Priority: Minor > Labels: kahaDB, performance > > Right now (as of the ActiveMQ 5.12 release) we preallocate journal files, but the > only scope is the entire journal file. The potential issue with that is that if a > user configures large journal file sizes, we can end up stalling writes > during log rotation because of the allocation process. The allocation can be done in > user space or deferred to kernel space (configurable), but either way it would be > good to avoid this issue altogether by preallocating in small batches regardless > of the journal's max file size. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
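The pool idea from the comment above can be sketched as follows; this is an assumed simplification (byte arrays stand in for preallocated journal files), not the ActiveMQ implementation. Preallocation happens off the write path, reclaimed files are recycled, and rotation just takes a ready file.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class JournalFilePool {
    private final Deque<byte[]> pool = new ArrayDeque<>();
    private final int targetSize;
    private final int fileLength;

    public JournalFilePool(int targetSize, int fileLength) {
        this.targetSize = targetSize;
        this.fileLength = fileLength;
        topUp(); // preallocate up front, outside any index lock
    }

    /** Called by the cleanup/GC thread to keep the pool full. */
    public final synchronized void topUp() {
        while (pool.size() < targetSize) {
            pool.add(new byte[fileLength]); // stand-in for preallocating a file
        }
    }

    /** Log rotation takes a ready-made file instead of allocating inline. */
    public synchronized byte[] take() {
        byte[] file = pool.poll();
        return file != null ? file : new byte[fileLength]; // inline fallback
    }

    /** A reclaimed (gc'ed) journal file is recycled rather than discarded. */
    public synchronized void recycle(byte[] file) {
        if (pool.size() < targetSize) {
            pool.add(file);
        }
    }

    public static void main(String[] args) {
        JournalFilePool pool = new JournalFilePool(2, 1024);
        byte[] first = pool.take();
        pool.recycle(first); // simulate GC returning an old journal file
        pool.topUp();
        System.out.println("pooled file length: " + pool.take().length);
    }
}
```

Because take() only polls a deque under a short lock, rotation no longer pays the allocation cost while holding the index lock, which is the stall the issue describes.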