[jira] [Commented] (AMQ-5734) Support MQTT 3.1 silent subscription fail

2015-04-29 Thread Gary Tully (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14519143#comment-14519143
 ] 

Gary Tully commented on AMQ-5734:
-

Dejan - can you peek at 
https://fisheye6.atlassian.com/changelog/activemq-git?cs=f5283a904589eb4a9344fe42885cdacf92de68dd

I found that the retained message (body size == 0) was not being accounted for 
in the test. Can you verify that that message should be dispatched?

> Support MQTT 3.1 silent subscription fail
> -
>
> Key: AMQ-5734
> URL: https://issues.apache.org/jira/browse/AMQ-5734
> Project: ActiveMQ
>  Issue Type: Improvement
>  Components: MQTT
>Affects Versions: 5.11.0
>Reporter: Dejan Bosanac
>Assignee: Dejan Bosanac
> Fix For: 5.12.0
>
>
> There's a difference in what the 3.1 and 3.1.1 specs say should happen when an 
> unauthorised subscribe happens. In 3.1, the broker doesn't inform the client, 
> so it just silently ignores the problem. In 3.1.1 it sends a special QoS back. 
> We currently only implement the latter. We also need to implement the 3.1 
> behaviour for older clients.
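
For illustration only, a minimal broker-side sketch of the two behaviours described
above, assuming the usual MQTT protocol levels (3 for 3.1, 4 for 3.1.1) and the 3.1.1
SUBACK failure return code 0x80; the class and method names are hypothetical, not the
ActiveMQ MQTT transport API:

{code}
// Hypothetical sketch - not the ActiveMQ MQTT transport API.
public class SubAckSketch {
    static final byte SUBACK_FAILURE = (byte) 0x80; // MQTT 3.1.1 failure return code

    // Returns the SUBACK return code for a subscribe request.
    static byte onSubscribe(byte requestedQoS, boolean authorised, int protocolLevel) {
        if (authorised) {
            return requestedQoS;              // grant at the requested QoS
        }
        if (protocolLevel >= 4) {             // 3.1.1: report the failure explicitly
            return SUBACK_FAILURE;
        }
        // 3.1: no failure code exists, so ack normally but never dispatch
        // anything to the subscription (the "silent fail").
        return requestedQoS;
    }

    public static void main(String[] args) {
        System.out.println(onSubscribe((byte) 1, false, 4)); // -128 (0x80)
        System.out.println(onSubscribe((byte) 1, false, 3)); // 1, silently ignored
    }
}
{code}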



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMQ-5748) Add the ability to get Message Size from a Message Store

2015-04-28 Thread Gary Tully (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14517112#comment-14517112
 ] 

Gary Tully commented on AMQ-5748:
-

Yep - demand for it is a key concern, because it will force an index rebuild so 
it will impact users. Folks that have a large journal will see a really slow 
startup the first time! Maybe we need to implement an auto-upgrade for the 
marshaller, so that we just rewrite the index with a 0 size and only folks that 
want to see the size metric will have to replay the journal. 
There are some examples of this in the StoredDestinationMarshaller, where it 
checks versions.

I mentioned mKahaDB because it will give you a kahaDB instance per destination 
- which would make the store size ~= the destination message size accumulator. 
So it would work today, or just need some minor tweaks. In some ways that may 
be sufficient. Just a thought :-)

I guess the size-based flow control would be the killer app, and for that we 
need to be able to recreate from the index, unless we push mKahaDB for that use 
case too!
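
A minimal sketch of the version-check idea, assuming a hypothetical location
marshaller (the class and field layout are illustrative, not the actual KahaDB
marshallers): an old-format record has no size field, so the size defaults to 0 and
the index can be rewritten without replaying the journal. The version-6 cutoff is
taken from the proposed version bump in this issue.

{code}
import java.io.DataInput;
import java.io.IOException;

// Hypothetical version-aware unmarshalling - illustrative only.
public class LocationWithSizeSketch {
    int dataFileId;
    int offset;
    int size; // 0 when read from an old-format index

    static LocationWithSizeSketch read(DataInput in, int indexVersion) throws IOException {
        LocationWithSizeSketch l = new LocationWithSizeSketch();
        l.dataFileId = in.readInt();
        l.offset = in.readInt();
        // Only the newer index format persists the size; older formats default it
        // to 0, and only users who want the metric replay the journal.
        l.size = indexVersion >= 6 ? in.readInt() : 0;
        return l;
    }
}
{code}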

> Add the ability to get Message Size from a Message Store
> 
>
> Key: AMQ-5748
> URL: https://issues.apache.org/jira/browse/AMQ-5748
> Project: ActiveMQ
>  Issue Type: New Feature
>  Components: Broker
>Affects Versions: 5.11.1
>Reporter: Christopher L. Shannon
>Priority: Minor
>
> Currently, the {{MessageStore}} interface supports getting a count for 
> messages ready to deliver using the {{getMessageCount}} method.  It would 
> also be very useful to be able to retrieve the message sizes for those counts 
> as well for keeping track of metrics.
> I've created a pull request to address this that adds a {{getMessageSize}} 
> method that focuses specifically on KahaDB and the Memory store.  The KahaDB 
> store uses the same strategy as the existing {{getMessageCount}} method, 
> which is to iterate over the index and total up the size of the messages.  
> There are unit tests to show the size calculation and a unit test that shows 
> a store based on version 5 working with the new version (the index is rebuilt).
> One extra issue is that the size was not being serialized to the index (it 
> was not included in the marshaller) so that required making a slight change 
> and adding a new marshaller for {{Location}} to store the size in the 
> location index of the store.  Without this change, the size computation would 
> not work when the broker was restarted since the size was not serialized.
> Note that I wasn't sure the best way to handle the new marshaller and version 
> compatibilities so I incremented the KahaDB version from 5 to 6. If an old 
> version of the index is loaded, the index should be detected as corrupted and 
> be rebuilt with the new format.  If there is a better way to handle this 
> upgrade let me know and the patch can certainly be updated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMQ-5748) Add the ability to get Message Size from a Message Store

2015-04-28 Thread Gary Tully (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14516962#comment-14516962
 ] 

Gary Tully commented on AMQ-5748:
-

Hi Christopher, this is a substantial contrib and it looks good :-)

My first thought was how different this would be from mKahaDB with a store per 
destination and the existing store size metric. I guess that would be more 
coarse, but it would be per destination. I think that would need a tweak on 
mKahaDB to isolate the store size, but that would be trivial in comparison. How 
would that map to your use case?


On the change to MessageStore, if the intent is some metric that is accessed 
frequently, maybe introduce a destinationStatistic that is backed and 
initialised by the index. It would mean that the calculation is always up to 
date and there will be no store blocking on the index iterations.

Am I correct in thinking this is a precursor to doing flow control based on 
total message size? If that is the case, extending the destination statistic 
to encompass this would make sense. That is the pattern for messageCount.
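
A rough sketch of the destinationStatistic idea, using illustrative class names
rather than the actual ActiveMQ statistics API: the size counter is seeded from the
index once at startup and then maintained on every add/remove, so reads never have
to iterate the index.

{code}
import java.util.concurrent.atomic.AtomicLong;

// Illustrative only - not the ActiveMQ DestinationStatistics API.
public class MessageSizeStatisticSketch {
    private final AtomicLong totalBytes = new AtomicLong();

    // Seed once from the store index at startup.
    public void initialise(long sizeFromIndex) {
        totalBytes.set(sizeFromIndex);
    }

    // Maintain on every store add/remove so reads never touch the index.
    public void messageAdded(long bytes)   { totalBytes.addAndGet(bytes); }
    public void messageRemoved(long bytes) { totalBytes.addAndGet(-bytes); }

    public long getTotalMessageSize() {
        return totalBytes.get();
    }
}
{code}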

> Add the ability to get Message Size from a Message Store
> 
>
> Key: AMQ-5748
> URL: https://issues.apache.org/jira/browse/AMQ-5748
> Project: ActiveMQ
>  Issue Type: New Feature
>  Components: Broker
>Affects Versions: 5.11.1
>Reporter: Christopher L. Shannon
>Priority: Minor
>
> Currently, the {{MessageStore}} interface supports getting a count for 
> messages ready to deliver using the {{getMessageCount}} method.  It would 
> also be very useful to be able to retrieve the message sizes for those counts 
> as well for keeping track of metrics.
> I've created a pull request to address this that adds a {{getMessageSize}} 
> method that focuses specifically on KahaDB and the Memory store.  The KahaDB 
> store uses the same strategy as the existing {{getMessageCount}} method, 
> which is to iterate over the index and total up the size of the messages.  
> There are unit tests to show the size calculation and a unit test that shows 
> a store based on version 5 working with the new version (the index is rebuilt).
> One extra issue is that the size was not being serialized to the index (it 
> was not included in the marshaller) so that required making a slight change 
> and adding a new marshaller for {{Location}} to store the size in the 
> location index of the store.  Without this change, the size computation would 
> not work when the broker was restarted since the size was not serialized.
> Note that I wasn't sure the best way to handle the new marshaller and version 
> compatibilities so I incremented the KahaDB version from 5 to 6. If an old 
> version of the index is loaded, the index should be detected as corrupted and 
> be rebuilt with the new format.  If there is a better way to handle this 
> upgrade let me know and the patch can certainly be updated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (AMQ-5746) Slave broker not registering JMX mBean when scheduler is enabled

2015-04-27 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully resolved AMQ-5746.
-
Resolution: Fixed

fix and test in http://git-wip-us.apache.org/repos/asf/activemq/commit/c1290511

The scheduler store needs to be started after MBean registration.

> Slave broker not registering JMX mBean when scheduler is enabled
> 
>
> Key: AMQ-5746
> URL: https://issues.apache.org/jira/browse/AMQ-5746
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker, Job Scheduler
>Affects Versions: 5.11.0
>Reporter: Gary Tully
>Assignee: Gary Tully
> Fix For: 5.12.0
>
>
> {code}
> <broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost" 
> dataDirectory="${activemq.data}" schedulerSupport="true">
> {code}
> The slave broker does not register its MBean information via JMX.  From 
> looking at the log of the slave broker, the following can be seen:
> {code}
> INFO | Database /scheduler/lock is locked... waiting 10 seconds for the 
> database to be unlocked. Reason: java.io.IOException: File '/scheduler/lock' 
> could not be locked.
> {code}
> In this case it appears the broker is stuck trying to lock the scheduler lock 
> file before the mBean can be registered.
> Without the scheduler enabled, the MBean is registered in JMX and the log shows 
> the broker is just waiting to lock the kahadb lock file:
> {code}
> INFO | Database kahadb/lock is locked... waiting 5 seconds for the database 
> to be unlocked. Reason: java.io.IOException: File 'kahadb/lock' could not be 
> locked.
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMQ-5746) Slave broker not registering JMX mBean when scheduler is enabled

2015-04-27 Thread Gary Tully (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14514245#comment-14514245
 ] 

Gary Tully commented on AMQ-5746:
-

Related PR with test case - thanks Martyn: 
https://github.com/apache/activemq/pull/80

> Slave broker not registering JMX mBean when scheduler is enabled
> 
>
> Key: AMQ-5746
> URL: https://issues.apache.org/jira/browse/AMQ-5746
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker, Job Scheduler
>Affects Versions: 5.11.0
>Reporter: Gary Tully
>Assignee: Gary Tully
> Fix For: 5.12.0
>
>
> {code}
> <broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost" 
> dataDirectory="${activemq.data}" schedulerSupport="true">
> {code}
> The slave broker does not register its MBean information via JMX.  From 
> looking at the log of the slave broker, the following can be seen:
> {code}
> INFO | Database /scheduler/lock is locked... waiting 10 seconds for the 
> database to be unlocked. Reason: java.io.IOException: File '/scheduler/lock' 
> could not be locked.
> {code}
> In this case it appears the broker is stuck trying to lock the scheduler lock 
> file before the mBean can be registered.
> Without the scheduler enabled, the MBean is registered in JMX and the log shows 
> the broker is just waiting to lock the kahadb lock file:
> {code}
> INFO | Database kahadb/lock is locked... waiting 5 seconds for the database 
> to be unlocked. Reason: java.io.IOException: File 'kahadb/lock' could not be 
> locked.
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (AMQ-5746) Slave broker not registering JMX mBean when scheduler is enabled

2015-04-27 Thread Gary Tully (JIRA)
Gary Tully created AMQ-5746:
---

 Summary: Slave broker not registering JMX mBean when scheduler is 
enabled
 Key: AMQ-5746
 URL: https://issues.apache.org/jira/browse/AMQ-5746
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker, Job Scheduler
Affects Versions: 5.11.0
Reporter: Gary Tully
Assignee: Gary Tully
 Fix For: 5.12.0


{code}
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost" 
dataDirectory="${activemq.data}" schedulerSupport="true">
{code}

The slave broker does not register its MBean information via JMX.  From 
looking at the log of the slave broker, the following can be seen:

{code}
INFO | Database /scheduler/lock is locked... waiting 10 seconds for the 
database to be unlocked. Reason: java.io.IOException: File '/scheduler/lock' 
could not be locked.
{code}

In this case it appears the broker is stuck trying to lock the scheduler lock 
file before the mBean can be registered.

Without the scheduler enabled, the MBean is registered in JMX and the log shows the 
broker is just waiting to lock the kahadb lock file:

{code}
INFO | Database kahadb/lock is locked... waiting 5 seconds for the database to 
be unlocked. Reason: java.io.IOException: File 'kahadb/lock' could not be 
locked.
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMQ-3222) Failover and SimpleDiscovery - query parameters getting dropped

2015-04-24 Thread Gary Tully (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-3222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14511171#comment-14511171
 ] 

Gary Tully commented on AMQ-3222:
-

I would expect {code}static:(tcp://hostname:port?socketBufferSize=2498560){code} 
to work also. The discovered. prefix option with static will work too, but in the 
static case there is only one URL to augment.

> Failover and SimpleDiscovery - query parameters getting dropped
> ---
>
> Key: AMQ-3222
> URL: https://issues.apache.org/jira/browse/AMQ-3222
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker, Transport
>Affects Versions: 5.4.2
>Reporter: Gary Tully
>Assignee: Gary Tully
>  Labels: failover, networkBridge, networkConnector
> Fix For: 5.5.0
>
>
> Using failover with static discovery makes sense in a master slave scenario. 
> With simple static discovery with a pair of URIs, {code}<networkConnector uri="static:(tcp://localhost:32258,tcp://localhost:32259)"/>{code} static 
> discovery will continue to try and connect to both uris which leads to 
> repeated logging of failed attempts to bridge to the slave that is not active 
> yet.
> Using failover{code}<networkConnector uri="static:(failover:(tcp://localhost:32258,tcp://localhost:32259)?randomize=false&maxReconnectAttempts=1)"/>{code}
>  improves on this as the failover: transport will be content with just one 
> uri, from the broker perspective there is a single network bridge, rather 
> than two.
> Currently query parameters are not correctly applied.
> Query parameter parsing is problematic when options for a transport are 
> duplicated by the discovery mechanism, e.g: maxReconnectAttempts. There have 
> been some related efforts to resolve this, 
> https://issues.apache.org/jira/browse/AMQ-2981 and 
> https://issues.apache.org/jira/browse/AMQ-2598. Parameters are stripped from 
> transport uris and applied to both the transport and the discovery mechanism. 
> The end result, and fix, is that additional transport options that need to be 
> applied to discovered transport (which typically have all query parameters 
> removed) need to be isolated from normal query parameters using a dot 
> (prefixed with "discovered.") notation. e.g: 
> {code}discovery:(multicast://default)?initialReconnectDelay=100&discovered.closeAsync=false"{code}
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (AMQ-5743) purged of 0 messages | org.apache.activemq.broker.region.Queue logged when clearing a temp queue

2015-04-24 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully resolved AMQ-5743.
-
   Resolution: Fixed
Fix Version/s: 5.12.0

Thanks for reporting.
Fix in http://git-wip-us.apache.org/repos/asf/activemq/commit/23ecbe80

> purged of 0 messages | org.apache.activemq.broker.region.Queue logged when 
> clearing a temp queue
> 
>
> Key: AMQ-5743
> URL: https://issues.apache.org/jira/browse/AMQ-5743
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 5.11.0, 5.11.1
>Reporter: Aleksandar Ivanisevic
>Assignee: Gary Tully
>Priority: Minor
> Fix For: 5.12.0
>
>
> Since upgrading to 5.11 I'm getting a lot of these in the logs:
> 2015-03-10 09:50:36,212 | INFO  | 
> temp-queue://ID:STATS2-37496-1425976359943-3:1604:1 purged of 0 messages | 
> org.apache.activemq.broker.region.Queue | ActiveMQ Transport: 
> ssl:///213.198.74.90:36525
> According to Gary Tully this is an oversight; quote from the users mailing 
> list:
> "that is an oversight, when purge is called from a jmx op, we want to
> log the event, but it is also called to clear a temp queue when the
> temp queue is deleted and that should not generate a log message. I
> think that is your case.
> Can you open a jira to track this so that others will see the
> resolution. thanks"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (AMQ-5743) purged of 0 messages | org.apache.activemq.broker.region.Queue logged when clearing a temp queue

2015-04-24 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully reassigned AMQ-5743:
---

Assignee: Gary Tully

> purged of 0 messages | org.apache.activemq.broker.region.Queue logged when 
> clearing a temp queue
> 
>
> Key: AMQ-5743
> URL: https://issues.apache.org/jira/browse/AMQ-5743
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 5.11.0, 5.11.1
>Reporter: Aleksandar Ivanisevic
>Assignee: Gary Tully
>Priority: Minor
>
> Since upgrading to 5.11 I'm getting a lot of these in the logs:
> 2015-03-10 09:50:36,212 | INFO  | 
> temp-queue://ID:STATS2-37496-1425976359943-3:1604:1 purged of 0 messages | 
> org.apache.activemq.broker.region.Queue | ActiveMQ Transport: 
> ssl:///213.198.74.90:36525
> According to Gary Tully this is an oversight; quote from the users mailing 
> list:
> "that is an oversight, when purge is called from a jmx op, we want to
> log the event, but it is also called to clear a temp queue when the
> temp queue is deleted and that should not generate a log message. I
> think that is your case.
> Can you open a jira to track this so that others will see the
> resolution. thanks"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (AMQ-5742) Destination dispatched count statistic not reflecting redelivery/redispatch

2015-04-23 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully resolved AMQ-5742.
-
Resolution: Fixed

fix in http://git-wip-us.apache.org/repos/asf/activemq/commit/c85fa67e

> Destination dispatched count statistic not reflecting redelivery/redispatch
> ---
>
> Key: AMQ-5742
> URL: https://issues.apache.org/jira/browse/AMQ-5742
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 5.11.0
>Reporter: Gary Tully
>Assignee: Gary Tully
>  Labels: dispatchedCount, jmx, statistics
> Fix For: 5.12.0
>
>
> Tracking down an intermittent failure of 
> org.apache.activemq.network.DemandForwardingBridgeTest.testSendThenAddConsumer
> the problem turned out to be a decrement of the dispatched count when the 
> consumer was removed.
> So before the removal, most of the time, the stat was 1, and the test passed. 
> But if the removal was complete, the dispatched count was decremented in 
> error by the unacked message count. This is wrong. The dispatched stat needs 
> to reflect what happened :-) Otherwise it tracks the dequeue count.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (AMQ-5742) Destination dispatched count statistic not reflecting redelivery/redispatch

2015-04-23 Thread Gary Tully (JIRA)
Gary Tully created AMQ-5742:
---

 Summary: Destination dispatched count statistic not reflecting 
redelivery/redispatch
 Key: AMQ-5742
 URL: https://issues.apache.org/jira/browse/AMQ-5742
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker
Affects Versions: 5.11.0
Reporter: Gary Tully
Assignee: Gary Tully
 Fix For: 5.12.0


Tracking down an intermittent failure of 
org.apache.activemq.network.DemandForwardingBridgeTest.testSendThenAddConsumer

the problem turned out to be a decrement of the dispatched count when the 
consumer was removed.
So before the removal, most of the time, the stat was 1, and the test passed. 
But if the removal was complete, the dispatched count was decremented in error 
by the unacked message count. This is wrong. The dispatched stat needs to 
reflect what happened :-) Otherwise it tracks the dequeue count.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (AMQ-5735) increment redeliverCounter in the absence of client supplied information

2015-04-22 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully resolved AMQ-5735.
-
Resolution: Fixed

resolved in http://git-wip-us.apache.org/repos/asf/activemq/commit/eb6c0826

The removeInfo was working off of 0 in error, which was the root cause of many 
of the broken tests. The last sequenceId is now propagated as expected.

> increment redeliverCounter in the absence of client supplied information
> 
>
> Key: AMQ-5735
> URL: https://issues.apache.org/jira/browse/AMQ-5735
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker, JMS client
>Affects Versions: 5.11.0
>Reporter: Gary Tully
>Assignee: Gary Tully
>  Labels: redelivery
> Fix For: 5.12.0
>
>
> A consumer remove info contains the lastDeliveredSequenceId the consumer has 
> delivered. In the absence of this information, for example when the consumer 
> does a system exit or the connection drops - this information will be lost. 
> The broker should increment redelivery to reflect the delivery attempt.
> Currently lastDeliveredSequenceId==0 indicates missing information but that 
> ignores the fact that the messageid broker sequence could be 0. A value of -1 
> should indicate nothing was delivered, > -1 the last delivered sequence, and from 
> the broker side, -2 should indicate that the broker provided the information and 
> that redelivery should be incremented, i.e. -2 covers the client abort case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMQ-5709) Logging of "Database ... is locked" should be done on level DEBUG

2015-04-22 Thread Gary Tully (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14507811#comment-14507811
 ] 

Gary Tully commented on AMQ-5709:
-

Logging once at WARN and every time at DEBUG works fine. This won't fill a 
production log, but dev folks can still see some action in debug mode. The frequency 
is managed by the lock acquire interval. 
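
A minimal sketch of that pattern with SLF4J (illustrative only, not the actual
SharedFileLocker code): the first failed acquire is logged at WARN, every retry
after that at DEBUG.

{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Illustrative only - log the lock wait once at WARN, then at DEBUG on every retry.
public class LockWaitLoggingSketch {
    private static final Logger LOG = LoggerFactory.getLogger(LockWaitLoggingSketch.class);
    private boolean warned;

    void onLockBusy(String file, long waitMillis) {
        String msg = "Database " + file + " is locked... waiting "
                + (waitMillis / 1000) + " seconds for the database to be unlocked.";
        if (!warned) {
            LOG.warn(msg);
            warned = true;
        } else {
            LOG.debug(msg);
        }
    }
}
{code}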

> Logging of "Database ... is locked" should be done on level DEBUG
> -
>
> Key: AMQ-5709
> URL: https://issues.apache.org/jira/browse/AMQ-5709
> Project: ActiveMQ
>  Issue Type: Improvement
>Affects Versions: 5.11.1
>Reporter: Christian Schneider
>Assignee: clebert suconic
> Fix For: 5.12.0
>
>
> The SharedFileLocker tries to acquire a lock on the activemq lock file.  
> Every time it cannot acquire the lock, it outputs the log message below at INFO 
> level. On the slave it will try this forever until the master is down.
> So I propose we only log at DEBUG level so the messages do not fill up a log 
> with a global default INFO log level.
> 2015-04-07 12:35:36,522 | INFO  | Database .../activemq/data/lock is 
> locked... waiting 10 seconds for the database to be unlocked. Reason: 
> java.io.IOException: File '.../activemq/data/lock' could not be locked. | 
> org.apache.activemq.store.SharedFileLocker | main



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMQ-5552) introduce a smoke-test profile that is enabled by default and during release:prepare

2015-04-22 Thread Gary Tully (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14507206#comment-14507206
 ] 

Gary Tully commented on AMQ-5552:
-

The existing surefire profiles using the property activemq.tests=X provide a 
template for this. It needs to be replicated in each module to take a subset of 
tests.

> introduce a smoke-test profile that is enabled by default and during 
> release:prepare
> 
>
> Key: AMQ-5552
> URL: https://issues.apache.org/jira/browse/AMQ-5552
> Project: ActiveMQ
>  Issue Type: Improvement
>  Components: Distribution, Test Cases
>Affects Versions: 5.11.0
>Reporter: Gary Tully
>  Labels: distribution, mvn, smoke, tests, validation
> Fix For: 5.12.0
>
>
> Users should be able to do $>mvn install on trunk or the source distribution 
> and get a validated (smoke-tested) distribution in < 10  minutes.
> The smoke-test profile should be enabled for release:prepare
> At the moment, more than 3k tests are run, they are not reliable enough, and 
> the build takes a number of hours. This gives a bad first impression.
> Of course we should continue to improve the test suite, but this has a totally 
> different focus.
> The smoke-test profile takes a smart cross section of tests in each module 
> that validate core functionality. 
> It will be an interesting challenge to get the selection right; balancing 
> typical use cases with coverage with speed etc.
> The tests should be:
>  * representative of the module functionality
>  * clean - no hard-coded ports, using only space under target
>  * fast
>  * reliable
>  * can be run in parallel (maybe if it allows more tests to be run in the 
> same time frame)
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (AMQ-5735) increment redeliverCounter in the absence of client supplied information

2015-04-21 Thread Gary Tully (JIRA)
Gary Tully created AMQ-5735:
---

 Summary: increment redeliverCounter in the absence of client 
supplied information
 Key: AMQ-5735
 URL: https://issues.apache.org/jira/browse/AMQ-5735
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker, JMS client
Affects Versions: 5.11.0
Reporter: Gary Tully
Assignee: Gary Tully
 Fix For: 5.12.0


A consumer remove info contains the lastDeliveredSequenceId the consumer has 
delivered. In the absence of this information, for example when the consumer 
does a system exit or the connection drops - this information will be lost. The 
broker should increment redelivery to reflect the delivery attempt.
Currently lastDeliveredSequenceId==0 indicates missing information but that 
ignores the fact that the messageid broker sequence could be 0. A value of -1 
should indicate nothing was delivered, > -1 the last delivered sequence, and from the 
broker side, -2 should indicate that the broker provided the information and that 
redelivery should be incremented, i.e. -2 covers the client abort case.
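
As a rough illustration of the sentinel convention described above (an illustrative
helper, not the actual OpenWire RemoveInfo handling):

{code}
// Illustrative only - sentinel values for lastDeliveredSequenceId.
public class LastDeliveredSketch {
    static final long NOTHING_DELIVERED = -1; // client reported no deliveries
    static final long BROKER_DETERMINED = -2; // client aborted; broker fills in the gap

    static boolean shouldIncrementRedelivery(long lastDeliveredSequenceId) {
        // -2: the broker provided the value, so count the delivery attempt.
        return lastDeliveredSequenceId == BROKER_DETERMINED;
    }

    public static void main(String[] args) {
        System.out.println(shouldIncrementRedelivery(-1)); // false - nothing delivered
        System.out.println(shouldIncrementRedelivery(42)); // false - client acked up to 42
        System.out.println(shouldIncrementRedelivery(-2)); // true  - client abort case
    }
}
{code}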



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMQ-5578) preallocate journal files

2015-04-20 Thread Gary Tully (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14503014#comment-14503014
 ] 

Gary Tully commented on AMQ-5578:
-

Resolved an additional problem - preallocation was overwriting existing 
data!
http://git-wip-us.apache.org/repos/asf/activemq/commit/4a821186

> preallocate journal files
> -
>
> Key: AMQ-5578
> URL: https://issues.apache.org/jira/browse/AMQ-5578
> Project: ActiveMQ
>  Issue Type: Improvement
>  Components: Message Store
>Affects Versions: 5.11.0
>Reporter: Gary Tully
>Assignee: Gary Tully
>  Labels: journal, kahaDB, perfomance
> Fix For: 5.12.0
>
>
> Our journals are append only, however we use the size to track journal 
> rollover on recovery and replay. We can improve performance if we never 
> update the size on disk and preallocate on creation.
> Rework journal logic to ensure size is never updated. This will allow the 
> configuration option from https://issues.apache.org/jira/browse/AMQ-4947 to 
> be the default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMQ-5448) Update Documentation for unix/linux

2015-04-13 Thread Gary Tully (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14492359#comment-14492359
 ] 

Gary Tully commented on AMQ-5448:
-

[~scoopex] you should be able to edit the wiki now.

> Update Documentation for unix/linux
> ---
>
> Key: AMQ-5448
> URL: https://issues.apache.org/jira/browse/AMQ-5448
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 5.10.0
>Reporter: Marc Schöchlin
>
> The current unix/linux related documentation is not in sync with the behavior 
> documented in the wiki.
> The following pages are outdated/redundant because they refer to the outdated behaviour:
> - http://activemq.apache.org/activemq-command-line-tools-reference.html
> - 
> http://activemq.apache.org/getting-started.html#GettingStarted-UnixBinaryInstallation
> - http://activemq.apache.org/unix-service.html
> - http://activemq.apache.org/unix-shell-script.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMQ-5709) Logging of "Database ... is locked" should be done on level DEBUG

2015-04-07 Thread Gary Tully (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14483063#comment-14483063
 ] 

Gary Tully commented on AMQ-5709:
-

Maybe we reduce the frequency - there was a fix a few years back for producers 
blocked on flow control to log a WARN or INFO level message every 30 seconds [1].
Or log once at INFO and subsequent attempts (at the lock retry period) at DEBUG 
level. That may be simpler.

[1] 
org.apache.activemq.broker.region.Destination#DEFAULT_BLOCKED_PRODUCER_WARNING_INTERVAL

> Logging of "Database ... is locked" should be done on level DEBUG
> -
>
> Key: AMQ-5709
> URL: https://issues.apache.org/jira/browse/AMQ-5709
> Project: ActiveMQ
>  Issue Type: Improvement
>Affects Versions: 5.11.1
>Reporter: Christian Schneider
> Fix For: 5.x, 6.0.0
>
>
> The SharedFileLocker tries to acquire a lock on the activemq lock file.  
> Every time it cannot acquire the lock, it outputs the log message below at INFO 
> level. On the slave it will try this forever until the master is down.
> So I propose we only log at DEBUG level so the messages do not fill up a log 
> with a global default INFO log level.
> 2015-04-07 12:35:36,522 | INFO  | Database .../activemq/data/lock is 
> locked... waiting 10 seconds for the database to be unlocked. Reason: 
> java.io.IOException: File '.../activemq/data/lock' could not be locked. | 
> org.apache.activemq.store.SharedFileLocker | main



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMQ-5706) Add persistJMSRedelivered to Topic policies

2015-04-03 Thread Gary Tully (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394450#comment-14394450
 ] 

Gary Tully commented on AMQ-5706:
-

That will be more difficult to achieve than it seems because durable topic 
subscribers share a single instance of the message. Flipping the redelivered 
flag for one would affect all of the rest.
I guess the use case is valid for durable subs - but it would require some 
major surgery. Would virtual topics work in your use case?
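
For reference, a minimal sketch of how the existing queue-side flag can be set
programmatically today, assuming the 5.x PolicyEntry/PolicyMap API (the XML
destinationPolicy form is equivalent); the topic-side support is what this issue
asks for:

{code}
import java.util.Arrays;
import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.broker.region.policy.PolicyEntry;
import org.apache.activemq.broker.region.policy.PolicyMap;

// Sketch: enable persistJMSRedelivered for all queues via a policy entry.
public class PersistRedeliveredConfig {
    public static void main(String[] args) throws Exception {
        PolicyEntry entry = new PolicyEntry();
        entry.setQueue(">");                  // apply to all queues
        entry.setPersistJMSRedelivered(true); // persist the redelivered flag before dispatch

        PolicyMap map = new PolicyMap();
        map.setPolicyEntries(Arrays.asList(entry));

        BrokerService broker = new BrokerService();
        broker.setDestinationPolicy(map);
        broker.start();
    }
}
{code}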

> Add persistJMSRedelivered to Topic policies
> ---
>
> Key: AMQ-5706
> URL: https://issues.apache.org/jira/browse/AMQ-5706
> Project: ActiveMQ
>  Issue Type: New Feature
>  Components: Broker
>Affects Versions: 5.11.1
>Reporter: Christopher L. Shannon
>Priority: Minor
>
> Currently the property {{persistJMSRedelivered}} can be set for Queue 
> policies as seen on this page 
> http://activemq.apache.org/per-destination-policies.html.  I would like to 
> also be able to set this property on a Topic policy as well.  It seems that 
> most of the work has already been done to support this as the 
> {{BaseDestination}} class is where this property is set so there shouldn't be 
> a reason why this property couldn't also apply to topics.  The 
> {{PolicyEntry}} class needs to be modified to allow this property to be 
> applied.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (AMQ-5703) kahadb - index recovery - corrupt journal records cannot be skipped

2015-04-01 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully resolved AMQ-5703.
-
Resolution: Fixed

fix and test in http://git-wip-us.apache.org/repos/asf/activemq/commit/a7178a46

> kahadb - index recovery - corrupt journal records cannot be skipped
> ---
>
> Key: AMQ-5703
> URL: https://issues.apache.org/jira/browse/AMQ-5703
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: KahaDB, Message Store
>Affects Versions: 5.11.0
>Reporter: Gary Tully
>Assignee: Gary Tully
>  Labels: kahadb, recovery
> Fix For: 5.12.0
>
>
> Working with some corrupt data stores - if corruption occurs in the middle of 
> a journal and the index needs to be rebuilt we get:{code}java.io.EOFException
>   at java.io.RandomAccessFile.readFully(RandomAccessFile.java:446)
>   at java.io.RandomAccessFile.readFully(RandomAccessFile.java:424)
>   at 
> org.apache.activemq.util.RecoverableRandomAccessFile.readFully(RecoverableRandomAccessFile.java:75)
>   at 
> org.apache.activemq.store.kahadb.disk.journal.DataFileAccessor.readRecord(DataFileAccessor.java:87)
>   at 
> org.apache.activemq.store.kahadb.disk.journal.Journal.read(Journal.java:641)
>   at 
> org.apache.activemq.store.kahadb.MessageDatabase.load(MessageDatabase.java:1014)
>   at 
> org.apache.activemq.store.kahadb.MessageDatabase.recover(MessageDatabase.java:606)
>   at 
> org.apache.activemq.store.kahadb.MessageDatabase.open(MessageDatabase.java:400)
>   at 
> org.apache.activemq.store.kahadb.MessageDatabase.load(MessageDatabase.java:418)
>   at 
> org.apache.activemq.store.kahadb.MessageDatabase.doStart(MessageDatabase.java:262)
>   at 
> org.apache.activemq.store.kahadb.KahaDBStore.doStart(KahaDBStore.java:206)
>   at org.apache.activemq.util.ServiceSupport.start(ServiceSupport.java:55)
>   at 
> org.apache.activemq.store.kahadb.KahaDBPersistenceAdapter.doStart(KahaDBPersistenceAdapter.java:223)
>   at org.apache.activemq.util.ServiceSupport.start(ServiceSupport.java:55)
>   at 
> org.apache.activemq.broker.BrokerService.doStartPersistenceAdapter(BrokerService.java:652)
>   at 
> org.apache.activemq.broker.BrokerService.startPersistenceAdapter(BrokerService.java:641)
>   at 
> org.apache.activemq.broker.BrokerService.start(BrokerService.java:606){code}
> when attempting to read an invalid location. This stops further recovery, and the 
> entire journal needs to be removed to make progress.
> We have already identified the corrupt record; we just need to skip it when 
> we replay.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (AMQ-5703) kahadb - index recovery - corrupt journal records cannot be skipped

2015-04-01 Thread Gary Tully (JIRA)
Gary Tully created AMQ-5703:
---

 Summary: kahadb - index recovery - corrupt journal records cannot 
be skipped
 Key: AMQ-5703
 URL: https://issues.apache.org/jira/browse/AMQ-5703
 Project: ActiveMQ
  Issue Type: Bug
  Components: KahaDB, Message Store
Affects Versions: 5.11.0
Reporter: Gary Tully
Assignee: Gary Tully
 Fix For: 5.12.0


Working with some corrupt data stores - if corruption occurs in the middle of a 
journal and the index needs to be rebuilt we get:{code}java.io.EOFException
at java.io.RandomAccessFile.readFully(RandomAccessFile.java:446)
at java.io.RandomAccessFile.readFully(RandomAccessFile.java:424)
at 
org.apache.activemq.util.RecoverableRandomAccessFile.readFully(RecoverableRandomAccessFile.java:75)
at 
org.apache.activemq.store.kahadb.disk.journal.DataFileAccessor.readRecord(DataFileAccessor.java:87)
at 
org.apache.activemq.store.kahadb.disk.journal.Journal.read(Journal.java:641)
at 
org.apache.activemq.store.kahadb.MessageDatabase.load(MessageDatabase.java:1014)
at 
org.apache.activemq.store.kahadb.MessageDatabase.recover(MessageDatabase.java:606)
at 
org.apache.activemq.store.kahadb.MessageDatabase.open(MessageDatabase.java:400)
at 
org.apache.activemq.store.kahadb.MessageDatabase.load(MessageDatabase.java:418)
at 
org.apache.activemq.store.kahadb.MessageDatabase.doStart(MessageDatabase.java:262)
at 
org.apache.activemq.store.kahadb.KahaDBStore.doStart(KahaDBStore.java:206)
at org.apache.activemq.util.ServiceSupport.start(ServiceSupport.java:55)
at 
org.apache.activemq.store.kahadb.KahaDBPersistenceAdapter.doStart(KahaDBPersistenceAdapter.java:223)
at org.apache.activemq.util.ServiceSupport.start(ServiceSupport.java:55)
at 
org.apache.activemq.broker.BrokerService.doStartPersistenceAdapter(BrokerService.java:652)
at 
org.apache.activemq.broker.BrokerService.startPersistenceAdapter(BrokerService.java:641)
at 
org.apache.activemq.broker.BrokerService.start(BrokerService.java:606){code}
when attempting to read an invalid location. This stops further recovery, and the 
entire journal needs to be removed to make progress.
We have already identified the corrupt record; we just need to skip it when we 
replay.
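
A rough sketch of the skip-on-replay idea, assuming a hypothetical list of
already-identified corrupt ranges (illustrative only, not the KahaDB Journal API):

{code}
import java.util.List;

// Illustrative only - skip journal records that fall inside known-corrupt ranges
// instead of aborting the whole index rebuild with an EOFException.
public class ReplaySkipSketch {
    static class Range {
        final long start, end;
        Range(long start, long end) { this.start = start; this.end = end; }
        boolean contains(long offset) { return offset >= start && offset < end; }
    }

    static long nextReplayOffset(long offset, int recordLength, List<Range> corruptRanges) {
        for (Range r : corruptRanges) {
            if (r.contains(offset)) {
                return r.end;          // jump past the corrupt region and keep replaying
            }
        }
        return offset + recordLength;  // normal case: advance to the next record
    }
}
{code}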



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMQ-5558) Make some activemq jar executable and able to send/receive messages

2015-03-31 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully updated AMQ-5558:

Summary: Make some activemq jar executable and able to send/receive 
messages  (was: Make activemq-client jar executable and able to send/receive 
messages)

> Make some activemq jar executable and able to send/receive messages
> ---
>
> Key: AMQ-5558
> URL: https://issues.apache.org/jira/browse/AMQ-5558
> Project: ActiveMQ
>  Issue Type: New Feature
>Affects Versions: 5.11.0
>Reporter: Dejan Bosanac
>Assignee: Dejan Bosanac
> Fix For: 5.12.0
>
>
> It would be nice to have a basic verification/example tool built directly into 
> activemq-client.jar, so that folks can do something like 
> {code}java -jar lib/activemq-client-xxx.jar producer
> java -jar lib/activemq-client-xxx.jar consumer{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMQ-5659) Add safety measure against infinite loop when store exception prevents message removal

2015-03-12 Thread Gary Tully (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14358771#comment-14358771
 ] 

Gary Tully commented on AMQ-5659:
-

Thanks for the patch. One thought: would it not be simpler to just break out of 
the loop on an exception?

{code}
} catch(IOException e) {
   break; // ensure we don't loop
}{code}



> Add safety measure against infinite loop when store exception prevents 
> message removal
> --
>
> Key: AMQ-5659
> URL: https://issues.apache.org/jira/browse/AMQ-5659
> Project: ActiveMQ
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 5.7.0
> Environment: ServiceMix 4.5.3
>Reporter: metatech
> Attachments: purge_queue_abort_loop.patch
>
>
> When the broker is configured with a database store, the "purge" operation 
> enters an infinite loop when the message removal operation fails, for 
> instance when the broker datasource is being restarted (see example stack 
> trace below). 
> Here is a patch which adds a safety measure, in case the "dequeue" count of 
> the queue does not increase between 2 message removal operations.  The check 
> is not guaranteed to detect the problem on the next iteration, because a 
> business consumer might also be dequeuing messages from the queue.  But the 
> "purge" is probably much faster than the business consumer, so if it fails to 
> remove 2 messages in a row, it is enough to detect the problem and abort the 
> infinite loop.
> {code}
> 2015-03-05 15:38:30,353 | WARN  | 14571659-2202099 |  | 
> JDBCPersistenceAdapter   | Could not get JDBC connection: Data source 
> is closed
> java.sql.SQLException: Data source is closed
>   at 
> org.apache.commons.dbcp.BasicDataSource.createDataSource(BasicDataSource.java:1362)
>   at 
> org.apache.commons.dbcp.BasicDataSource.getConnection(BasicDataSource.java:1044)
>   at 
> org.apache.activemq.store.jdbc.TransactionContext.getConnection(TransactionContext.java:58)
>   at 
> org.apache.activemq.store.jdbc.adapter.DefaultJDBCAdapter.getStoreSequenceId(DefaultJDBCAdapter.java:285)
>   at 
> org.apache.activemq.store.jdbc.JDBCPersistenceAdapter.getStoreSequenceIdForMessageId(JDBCPersistenceAdapter.java:787)
>   at 
> org.apache.activemq.store.jdbc.JDBCMessageStore.removeMessage(JDBCMessageStore.java:194)
>   at 
> org.apache.activemq.store.memory.MemoryTransactionStore.removeMessage(MemoryTransactionStore.java:358)
>   at 
> org.apache.activemq.store.memory.MemoryTransactionStore$1.removeAsyncMessage(MemoryTransactionStore.java:166)
>   at org.apache.activemq.broker.region.Queue.acknowledge(Queue.java:846)
>   at 
> org.apache.activemq.broker.region.Queue.removeMessage(Queue.java:1602)
>   at 
> org.apache.activemq.broker.region.Queue.removeMessage(Queue.java:1594)
>   at 
> org.apache.activemq.broker.region.Queue.removeMessage(Queue.java:1579)
>   at org.apache.activemq.broker.region.Queue.purge(Queue.java:1158)
>   at org.apache.activemq.broker.jmx.QueueView.purge(QueueView.java:54)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMQ-5658) ActiveMQ will not start after KahaDB Corruption due to "Protocol message contained an invalid tag (zero)" error

2015-03-12 Thread Gary Tully (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14358690#comment-14358690
 ] 

Gary Tully commented on AMQ-5658:
-

Are you using checkForCorruptJournalFiles and checksumJournalFiles?

If you are, any corrupt journal regions should be removed from the index and so 
should never be read.

Please add your kahadb configuration and maybe a full stack trace for the 
exceptions. There may be a case for dealing 
with the exception at a lower level.

Unfortunately there is not much in the line of tooling that can help in this 
case.
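
For reference, a minimal sketch of enabling those checks programmatically, assuming
the 5.x KahaDBPersistenceAdapter setters that correspond to the kahaDB XML
attributes checkForCorruptJournalFiles and checksumJournalFiles:

{code}
import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.store.kahadb.KahaDBPersistenceAdapter;

// Sketch: enable journal checksums and corruption checks on recovery.
public class KahaDbCorruptionChecks {
    public static void main(String[] args) throws Exception {
        KahaDBPersistenceAdapter kahaDB = new KahaDBPersistenceAdapter();
        kahaDB.setChecksumJournalFiles(true);         // write a checksum with each record
        kahaDB.setCheckForCorruptJournalFiles(true);  // validate and drop corrupt regions on recovery

        BrokerService broker = new BrokerService();
        broker.setPersistenceAdapter(kahaDB);
        broker.start();
    }
}
{code}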

> ActiveMQ will not start after KahaDB Corruption due to "Protocol message 
> contained an invalid tag (zero)" error
> ---
>
> Key: AMQ-5658
> URL: https://issues.apache.org/jira/browse/AMQ-5658
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Message Store
>Affects Versions: 5.10.0
> Environment: Windows 7
>Reporter: Paul Manning
>  Labels: corruption, journal, kahaDB, messageStore, protobuf
>
> We experienced an ActiveMQ crash where the KahaDB data files were corrupted. 
> The machine was powered down abruptly (pull the plug).
> When the machine restarted, ActiveMQ would not start and the following 
> entries were in the activemq.log:
> 2015-03-05 09:25:46,791 | INFO  | Corrupt journal records found in 
> 'c:\work\09_git\vc-core\vc-server\build\data\kahadb\db-131.log' between 
> offsets: 31054572..31231936 | 
> org.apache.activemq.store.kahadb.disk.journal.Journal | WrapperSimpleAppMain
> followed eventually by: 
> 2015-03-05 09:25:48,375 | ERROR | Failed to start Apache ActiveMQ 
> ([broker-USATL-L-008043.americas.abb.com-0, null], 
> org.apache.activemq.protobuf.InvalidProtocolBufferException: Protocol message 
> contained an invalid tag (zero).) | org.apache.activemq.broker.BrokerService 
> | WrapperSimpleAppMain
> Removing the .data files and the corrupted db-131.log file allows ActiveMQ to 
> restart. However, in that case, we experience message loss. 
> Is it possible to only lose the corrupted record instead of the whole data 
> file? 
> Tracing through the code, it does not appear that there is any attempt to 
> catch the InvalidProtocolBufferException exception and discard the corrupted 
> record. The exception is raised from CodedInputStream.readTag() during the 
> MessageDatabase.recover() process.
> It is worth noting that we have not been able to reproduce this error. I 
> imagine that this type of corruption is rare, but is there any way for a user 
> to recover from this? Any tools, etc.?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMQ-4540) NetworkBridge - don't wait for ever for demandSubscription pending send responses on remove

2015-03-12 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully updated AMQ-4540:

Fix Version/s: (was: 5.12.0)

> NetworkBridge - don't wait for ever for demandSubscription pending send 
> responses on remove
> ---
>
> Key: AMQ-4540
> URL: https://issues.apache.org/jira/browse/AMQ-4540
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 5.8.0
>Reporter: Gary Tully
> Attachments: NetworkBridgeRemoveInflightTest.java
>
>
> In a networkbridge, a demandSub tracks outstanding asyncRequests and waits for 
> them to complete on removal such that the messages can be acked correctly 
> when the send completes.
> If the send is blocked on pfc on the remote broker, it may not return for 
> some time, which blocks other removals, leaving messages stuck inflight to 
> networked subscriptions.
> The wait ensures that a message send will not be a duplicate, but blocking 
> forever does not make sense, especially considering that removes are 
> serialised.
> We need some openwire command that can cancel pending sends to sort out this 
> case, but even then we need to time out at some stage in case the other end 
> cannot respond.
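
A minimal sketch of the bounded wait suggested above, assuming a hypothetical set of
outstanding async sends tracked with a CountDownLatch (illustrative only, not the
DemandForwardingBridge code):

{code}
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Illustrative only - wait for pending sends on subscription removal, but give up
// after a timeout instead of blocking the (serialised) remove forever.
public class BoundedRemoveWaitSketch {
    private final CountDownLatch pendingSends;

    BoundedRemoveWaitSketch(int outstandingSends) {
        this.pendingSends = new CountDownLatch(outstandingSends);
    }

    void onSendResponse() {
        pendingSends.countDown();
    }

    // Returns true if all responses arrived, false if we timed out and moved on.
    boolean awaitPendingSends(long timeoutMillis) throws InterruptedException {
        return pendingSends.await(timeoutMillis, TimeUnit.MILLISECONDS);
    }
}
{code}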



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (ACTIVEMQ6-87) Strip @author tags from Java source

2015-03-10 Thread Gary Tully (JIRA)

[ 
https://issues.apache.org/jira/browse/ACTIVEMQ6-87?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14355160#comment-14355160
 ] 

Gary Tully commented on ACTIVEMQ6-87:
-

Go for it. It will be good to start from a clean slate.

> Strip @author tags from Java source
> ---
>
> Key: ACTIVEMQ6-87
> URL: https://issues.apache.org/jira/browse/ACTIVEMQ6-87
> Project: Apache ActiveMQ 6
>  Issue Type: Bug
>Affects Versions: 6.0.0
>Reporter: Justin Bertram
>Assignee: Justin Bertram
> Fix For: 6.1.0
>
>
> Way back in 2004 [the Apache Board officially discouraged the use of 'author' 
> tags|http://www.apache.org/foundation/records/minutes/2004/board_minutes_2004_02_18.txt]
>  in source code.  They are not _banned_, but merely discouraged.  However, I 
> think it's probably a good idea to strip them from our source all the same.  
> Other projects (e.g. 
> [Camel|https://issues.apache.org/jira/browse/CAMEL-1812]) have done the same.
> Here are a few other reasons to remove them (taken from 
> [here|https://issues.jboss.org/browse/JBRULES-2895]):
> The author tags in the java files are a maintenance nightmare:
> - A large percentage is wrong, incomplete or inaccurate.
> - Most of the time, it only contains the original author. Many files are 
> completely refactored/expanded by other authors.
> - Git is accurate, that is the canonical source to find the correct author.
> To find the correct author of a piece of code, you always have to 
> double-check with git; you cannot assume the author from the author tag alone.
> - Author tags promote "code ownership", which is bad in the long run.
> - If people work on a piece they perceive as being owned by someone else, 
> they tend to:
> -- only fix what they are assigned to fix, not everything that's broken.
> -- discard responsibility if that code doesn't work properly.
> -- be scared of stepping on the toes of the owner
> - Instead of "code ownership", we need "module leadership" and "peer reviews".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (AMQ-5640) negative TotalMessageCount in JMX Broker MBean

2015-03-05 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully resolved AMQ-5640.
-
   Resolution: Fixed
Fix Version/s: 5.12.0

fix in http://git-wip-us.apache.org/repos/asf/activemq/commit/ab28b771

thanks for the nice test :-)

> negative TotalMessageCount in JMX Broker MBean
> --
>
> Key: AMQ-5640
> URL: https://issues.apache.org/jira/browse/AMQ-5640
> Project: ActiveMQ
>  Issue Type: Improvement
>  Components: Broker, JMX
>Affects Versions: 5.11.0
>Reporter: Torsten Mielke
>Assignee: Gary Tully
>  Labels: broker, jmx
> Fix For: 5.12.0
>
> Attachments: TotalMessageCountTest.java
>
>
> Starting a broker with a few messages on a queue and consuming these messages 
> will cause the TotalMessageCount property on the Broker MBean to go to a 
> negative value. 
> That value should never go negative.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (AMQ-5640) negative TotalMessageCount in JMX Broker MBean

2015-03-05 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully reassigned AMQ-5640:
---

Assignee: Gary Tully

> negative TotalMessageCount in JMX Broker MBean
> --
>
> Key: AMQ-5640
> URL: https://issues.apache.org/jira/browse/AMQ-5640
> Project: ActiveMQ
>  Issue Type: Improvement
>  Components: Broker, JMX
>Affects Versions: 5.11.0
>Reporter: Torsten Mielke
>Assignee: Gary Tully
>  Labels: broker, jmx
> Attachments: TotalMessageCountTest.java
>
>
> Starting a broker with a few messages on a queue and consuming these messages 
> will cause the TotalMessageCount property on the Broker MBean to go to a 
> negative value. 
> That value should never go negative.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (AMQ-5568) Deleting lock file on broker shut down can take a master broker down

2015-03-05 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully resolved AMQ-5568.
-
Resolution: Fixed

fix in http://git-wip-us.apache.org/repos/asf/activemq/commit/8c66fba0

> Deleting lock file on broker shut down can take a master broker down
> 
>
> Key: AMQ-5568
> URL: https://issues.apache.org/jira/browse/AMQ-5568
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker, Message Store
>Affects Versions: 5.11.0
>Reporter: Torsten Mielke
>Assignee: Gary Tully
>  Labels: persistence
> Fix For: 5.12.0
>
>
> This problem may only occur on a shared file system master/slave setup. 
> I can reproduce reliably on a NFSv4 mount using a persistence adapter 
> configuration like 
> {code}
> 
>   
> 
>   
> 
> {code}
> However the problem is also reproducible using kahaDB.
> Two broker instances compete for the lock on the shared storage (e.g. 
> leveldb or kahadb). Let's say broker A becomes master and broker B slave.
> If broker A loses access to the NFS share, it will shut down. As part of 
> shutting down, it tries to delete the lock file of the persistence adapter. Now, 
> since the NFS share is gone, all file I/O calls hang for a good while before 
> returning errors. As such the broker shutdown gets delayed.
> In the meantime the slave broker B (not affected by the NFS problem) grabs 
> the lock and becomes master.
> If the NFS mount is restored while broker A (the previous master) still hangs 
> on the file i/o operations (as part of its shutdown routine), the attempt to 
> delete the persistence adapter lock file will finally succeed and broker A 
> shuts down. 
> Deleting the lock file however also affects the new master broker B who 
> periodically runs a keepAlive() check on the lock. That check verifies the 
> file still exists and the FileLock is still valid. As the lock file got 
> deleted, keepAlive() fails on broker B and that broker shuts down as well. 
> The overall result is that both broker instances have shut down despite an 
> initially successful failover.
> Using restartAllowed=true is not an option either as this can cause other 
> problems in an NFS based master/slave setup.
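
A rough sketch of the keepAlive() check described above, assuming a plain java.nio
FileLock (illustrative only, not the actual SharedFileLocker implementation):

{code}
import java.io.File;
import java.nio.channels.FileLock;

// Illustrative only - the keepAlive() style check: the lock is healthy only if
// the lock file still exists and the FileLock is still valid.
public class LockKeepAliveSketch {
    private final File lockFile;
    private final FileLock lock;

    LockKeepAliveSketch(File lockFile, FileLock lock) {
        this.lockFile = lockFile;
        this.lock = lock;
    }

    boolean keepAlive() {
        // If another broker (or a stale shutdown) deleted the file, give up mastership.
        return lock != null && lock.isValid() && lockFile.exists();
    }
}
{code}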



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMQ-5568) Deleting lock file on broker shut down can take a master broker down

2015-03-05 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully updated AMQ-5568:

Fix Version/s: 5.12.0

> Deleting lock file on broker shut down can take a master broker down
> 
>
> Key: AMQ-5568
> URL: https://issues.apache.org/jira/browse/AMQ-5568
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker, Message Store
>Affects Versions: 5.11.0
>Reporter: Torsten Mielke
>Assignee: Gary Tully
>  Labels: persistence
> Fix For: 5.12.0
>
>
> This problem may only occur on a shared file system master/slave setup. 
> I can reproduce reliably on a NFSv4 mount using a persistence adapter 
> configuration like 
> {code}
> 
>   
> 
>   
> 
> {code}
> However the problem is also reproducible using kahaDB.
> Two broker instances compete for the lock on the shared storage (e.g. 
> leveldb or kahadb). Let's say broker A becomes master and broker B slave.
> If broker A loses access to the NFS share, it will shut down. As part of 
> shutting down, it tries to delete the lock file of the persistence adapter. Now, 
> since the NFS share is gone, all file I/O calls hang for a good while before 
> returning errors. As such the broker shutdown gets delayed.
> In the meantime the slave broker B (not affected by the NFS problem) grabs 
> the lock and becomes master.
> If the NFS mount is restored while broker A (the previous master) still hangs 
> on the file i/o operations (as part of its shutdown routine), the attempt to 
> delete the persistence adapter lock file will finally succeed and broker A 
> shuts down. 
> Deleting the lock file however also affects the new master broker B who 
> periodically runs a keepAlive() check on the lock. That check verifies the 
> file still exists and the FileLock is still valid. As the lock file got 
> deleted, keepAlive() fails on broker B and that broker shuts down as well. 
> The overall result is that both broker instances have shut down despite an 
> initially successful failover.
> Using restartAllowed=true is not an option either as this can cause other 
> problems in an NFS based master/slave setup.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMQ-5640) negative TotalMessageCount in JMX Broker MBean

2015-03-05 Thread Gary Tully (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14348731#comment-14348731
 ] 

Gary Tully commented on AMQ-5640:
-

That metric sets reset=false - it should be properly initialized on startup. 
Note: it is only relevant to queues though - durable subs would be out of the 
mix, I think.
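
A minimal sketch of the initialise-on-startup idea (illustrative counters, not the
ActiveMQ statistics classes): seed the broker-wide total from the recovered store
counts so later dequeues cannot push it negative.

{code}
import java.util.concurrent.atomic.AtomicLong;

// Illustrative only - seed the total from the store on startup, then track deltas.
public class TotalMessageCountSketch {
    private final AtomicLong totalMessageCount = new AtomicLong();

    // Called once per queue during startup recovery with the store's message count.
    public void addRecoveredMessages(long countFromStore) {
        totalMessageCount.addAndGet(countFromStore);
    }

    public void onEnqueue() { totalMessageCount.incrementAndGet(); }
    public void onDequeue() { totalMessageCount.decrementAndGet(); }

    public long getTotalMessageCount() {
        return totalMessageCount.get();
    }
}
{code}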

> negative TotalMessageCount in JMX Broker MBean
> --
>
> Key: AMQ-5640
> URL: https://issues.apache.org/jira/browse/AMQ-5640
> Project: ActiveMQ
>  Issue Type: Improvement
>  Components: Broker, JMX
>Affects Versions: 5.11.0
>Reporter: Torsten Mielke
>  Labels: broker, jmx
>
> Starting a broker with a few messages on a queue and consuming these messages 
> will cause the TotalMessageCount property on the Broker MBean to go to a 
> negative value. 
> That value should never go negative.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (AMQ-5568) Deleting lock file on broker shut down can take a master broker down

2015-03-05 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully reassigned AMQ-5568:
---

Assignee: Gary Tully

> Deleting lock file on broker shut down can take a master broker down
> 
>
> Key: AMQ-5568
> URL: https://issues.apache.org/jira/browse/AMQ-5568
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker, Message Store
>Affects Versions: 5.11.0
>Reporter: Torsten Mielke
>Assignee: Gary Tully
>  Labels: persistence
>
> This problem may only occur on a shared file system master/slave setup. 
> I can reproduce reliably on a NFSv4 mount using a persistence adapter 
> configuration like 
> {code}
> 
>   
> 
>   
> 
> {code}
> However the problem is also reproducible using kahaDB.
> Two broker instances compete for the lock on the shared storage (e.g. 
> leveldb or kahadb). Let's say broker A becomes master and broker B slave.
> If broker A loses access to the NFS share, it will shut down. As part of 
> shutting down, it tries to delete the lock file of the persistence adapter. Now, 
> since the NFS share is gone, all file I/O calls hang for a good while before 
> returning errors. As such the broker shutdown gets delayed.
> In the meantime the slave broker B (not affected by the NFS problem) grabs 
> the lock and becomes master.
> If the NFS mount is restored while broker A (the previous master) still hangs 
> on the file i/o operations (as part of its shutdown routine), the attempt to 
> delete the persistence adapter lock file will finally succeed and broker A 
> shuts down. 
> Deleting the lock file however also affects the new master broker B, which 
> periodically runs a keepAlive() check on the lock. That check verifies the 
> file still exists and the FileLock is still valid. As the lock file got 
> deleted, keepAlive() fails on broker B and that broker shuts down as well. 
> The overall result is that both broker instances have shut down despite an 
> initially successful failover.
> Using restartAllowed=true is not an option either as this can cause other 
> problems in an NFS based master/slave setup.
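The keepAlive() behaviour described above can be sketched roughly as follows. This is an illustrative model only (the class and method layout is made up, not the actual org.apache.activemq lock implementation), assuming the master holds a java.nio FileLock on the shared lock file:
{code}
import java.io.File;
import java.nio.channels.FileLock;

// Illustrative sketch of the periodic lock check a master broker runs.
class SharedFileLockKeeper {
    private final File lockFile;
    private final FileLock fileLock;

    SharedFileLockKeeper(File lockFile, FileLock fileLock) {
        this.lockFile = lockFile;
        this.fileLock = fileLock;
    }

    // Returns false when the lock can no longer be trusted. If the old master
    // deletes the shared lock file while shutting down, exists() becomes false
    // here and the current master gives up, as described in the report above.
    boolean keepAlive() {
        return fileLock != null && fileLock.isValid() && lockFile.exists();
    }
}
{code}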



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (AMQ-5639) Allow advisory messages to traverse a broker network

2015-03-04 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully resolved AMQ-5639.
-
Resolution: Fixed

fix in http://git-wip-us.apache.org/repos/asf/activemq/commit/11afd5f0

The required advisories need to be included in the statically included destinations 
because there are no advisory messages produced for advisory destinations (to avoid 
looping forever). Automatic forwarding depends on seeing the consumer advisory to be 
aware of demand.

Consumer and TempDestination advisories are still suppressed because they 
terminate at the bridge.
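A minimal sketch of what that looks like on the broker configuration side, statically including the connection advisory topic on a network connector (the broker name and connector URI are placeholders, not taken from the commit):
{code}
import java.util.Arrays;

import org.apache.activemq.advisory.AdvisorySupport;
import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.command.ActiveMQDestination;
import org.apache.activemq.network.NetworkConnector;

public class StaticAdvisoryForwarding {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        broker.setBrokerName("brokerA");

        NetworkConnector nc = broker.addNetworkConnector("static:(tcp://remote-host:61616)");
        // Demand-based forwarding cannot kick in for advisory topics (no
        // advisory-for-advisory messages exist), so include them statically.
        nc.setStaticallyIncludedDestinations(
                Arrays.<ActiveMQDestination>asList(AdvisorySupport.getConnectionAdvisoryTopic()));

        broker.start();
    }
}
{code}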



> Allow advisory messages to traverse a broker network
> 
>
> Key: AMQ-5639
> URL: https://issues.apache.org/jira/browse/AMQ-5639
> Project: ActiveMQ
>  Issue Type: Improvement
>  Components: Broker, networkbridge
>Affects Versions: 5.11.0
>Reporter: Gary Tully
>Assignee: Gary Tully
>  Labels: advisory, networkBridge
> Fix For: 5.12.0
>
>
> Currently the filter applied to forwarding consumers is very restrictive. It 
> will suppress all advisory messages. But only two types of advisory are used 
> by the network bridge.
> Allowing the propagation of selective advisories, like new connection 
> advisories, is handy for monitoring at the application level.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (AMQ-5639) Allow advisory messages to traverse a broker network

2015-03-04 Thread Gary Tully (JIRA)
Gary Tully created AMQ-5639:
---

 Summary: Allow advisory messages to traverse a broker network
 Key: AMQ-5639
 URL: https://issues.apache.org/jira/browse/AMQ-5639
 Project: ActiveMQ
  Issue Type: Improvement
  Components: networkbridge, Broker
Affects Versions: 5.11.0
Reporter: Gary Tully
Assignee: Gary Tully
 Fix For: 5.12.0


Currently the filter applied to forwarding consumers is very restrictive. It 
will suppress all advisory messages. But only two types of advisory are used by 
the network bridge.

Allowing the propagation of selective advisories, like new connection 
advisories, is handy for monitoring at the application level.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (AMQ-5630) Provide a way to disable durable subscriptions from configuration.

2015-03-04 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully resolved AMQ-5630.
-
Resolution: Fixed

boolean attribute rejectDurableConsumers added 
in http://git-wip-us.apache.org/repos/asf/activemq/commit/741e3aad
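A minimal usage sketch, assuming the new attribute is exposed as a setRejectDurableConsumers(boolean) setter on BrokerService (the attribute name comes from the issue; the setter name is an assumption):
{code}
import org.apache.activemq.broker.BrokerService;

public class RejectDurableSubs {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        // With virtual topics handling topic subscriptions via queues, refuse
        // regular durable subscribers and fail their subscribe with an exception.
        broker.setRejectDurableConsumers(true); // assumed setter for the new attribute
        broker.start();
    }
}
{code}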

> Provide a way to disable durable subscriptions from configuration.
> --
>
> Key: AMQ-5630
> URL: https://issues.apache.org/jira/browse/AMQ-5630
> Project: ActiveMQ
>  Issue Type: New Feature
>  Components: Broker
>Affects Versions: 5.11.0
>Reporter: Gary Tully
>Assignee: Gary Tully
>  Labels: configuration, durable, topic, virtual
> Fix For: 5.12.0
>
>
> When virtual topics are used exclusively - so each topic sub gets a shared or 
> individual queue - it would be nice to be able to enforce that regular 
> durable subs are not allowed and reject them with an exception.
> A broker boolean attribute: rejectDurableConsumers would be perfect.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (AMQ-5630) Provide a way to disable durable subscriptions from configuration.

2015-03-03 Thread Gary Tully (JIRA)
Gary Tully created AMQ-5630:
---

 Summary: Provide a way to disable durable subscriptions from 
configuration.
 Key: AMQ-5630
 URL: https://issues.apache.org/jira/browse/AMQ-5630
 Project: ActiveMQ
  Issue Type: New Feature
  Components: Broker
Affects Versions: 5.11.0
Reporter: Gary Tully
Assignee: Gary Tully
 Fix For: 5.12.0


When virtual topics are used exclusively - so each topic sub gets a shared or 
individual queue - it would be nice to be able to enforce that regular durable 
subs are not allowed and reject them with an exception.
A broker boolean attribute: rejectDurableConsumers would be perfect.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (AMQ-5626) kahadb - inconsumable low/med priority message after restart

2015-03-02 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully resolved AMQ-5626.
-
Resolution: Fixed

An issue assigning the next id for the index caused an overwrite when the next 
message was not high priority.
fix and test in http://git-wip-us.apache.org/repos/asf/activemq/commit/ecebd241

> kahadb - inconsumable low/med priority message after restart
> 
>
> Key: AMQ-5626
> URL: https://issues.apache.org/jira/browse/AMQ-5626
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: KahaDB, Message Store
>Affects Versions: 5.11.0
>Reporter: Gary Tully
>Assignee: Gary Tully
>  Labels: kahadb, priority
> Fix For: 5.12.0
>
>
> with priority support enabled, on occasion after a restart some low priority 
> messages are inconsumable.
> The cursor.queueSize() is reporting messages available on queue but they 
> cannot be consumed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (AMQ-5620) deadlock on shutdown - kahadb and local tx rollback

2015-03-02 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully resolved AMQ-5620.
-
Resolution: Fixed

avoid the deadlock by allowing journal access during appender close.
http://git-wip-us.apache.org/repos/asf/activemq/commit/260e28ec

> deadlock on shutdown - kahadb and local tx rollback
> ---
>
> Key: AMQ-5620
> URL: https://issues.apache.org/jira/browse/AMQ-5620
> Project: ActiveMQ
>  Issue Type: Bug
>Affects Versions: 5.11.0
>Reporter: Gary Tully
>Assignee: Gary Tully
>  Labels: deadlock, kahadb, shutdown
> Fix For: 5.12.0
>
>
> Deadlock found in a potential test case:{code}Found one Java-level deadlock:
> =
> "ActiveMQ BrokerService[localhost] Task-1":
>   waiting to lock monitor 0x7feeeb80d108 (object 0x0007f67294c8, a 
> org.apache.activemq.store.kahadb.disk.journal.Journal),
>   which is held by "pool-2-thread-1"
> "pool-2-thread-1":
>   waiting to lock monitor 0x7feee8a34428 (object 0x0007f67091f8, a 
> java.lang.Object),
>   which is held by "ActiveMQ BrokerService[localhost] Task-1"
> Java stack information for the threads listed above:
> ===
> "ActiveMQ BrokerService[localhost] Task-1":
>   at 
> org.apache.activemq.store.kahadb.disk.journal.Journal.getCurrentWriteFile(Journal.java:420)
>   - waiting to lock <0x0007f67294c8> (a 
> org.apache.activemq.store.kahadb.disk.journal.Journal)
>   at 
> org.apache.activemq.store.kahadb.disk.journal.DataFileAppender.enqueue(DataFileAppender.java:209)
>   - locked <0x0007f67091f8> (a java.lang.Object)
>   at 
> org.apache.activemq.store.kahadb.disk.journal.DataFileAppender.storeItem(DataFileAppender.java:148)
>   at 
> org.apache.activemq.store.kahadb.disk.journal.Journal.write(Journal.java:647)
>   at 
> org.apache.activemq.store.kahadb.MessageDatabase.store(MessageDatabase.java:977)
>   at 
> org.apache.activemq.store.kahadb.MessageDatabase.store(MessageDatabase.java:959)
>   at 
> org.apache.activemq.store.kahadb.KahaDBTransactionStore.rollback(KahaDBTransactionStore.java:313)
>   at 
> org.apache.activemq.transaction.LocalTransaction.rollback(LocalTransaction.java:94)
>   - locked <0x0007f6729698> (a 
> org.apache.activemq.store.kahadb.KahaDBTransactionStore)
>   at 
> org.apache.activemq.broker.TransactionBroker.removeConnection(TransactionBroker.java:323)
>   at 
> org.apache.activemq.broker.MutableBrokerFilter.removeConnection(MutableBrokerFilter.java:137)
>   at 
> org.apache.activemq.broker.TransportConnection.processRemoveConnection(TransportConnection.java:862)
>   - locked <0x0007f6729810> (a 
> org.apache.activemq.broker.jmx.ManagedTransportConnection)
>   at 
> org.apache.activemq.broker.TransportConnection.doStop(TransportConnection.java:1187)
>   at 
> org.apache.activemq.broker.TransportConnection$4.run(TransportConnection.java:1117)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> "pool-2-thread-1":
>   at 
> org.apache.activemq.store.kahadb.disk.journal.DataFileAppender.close(DataFileAppender.java:257)
>   - waiting to lock <0x0007f67091f8> (a java.lang.Object)
>   at 
> org.apache.activemq.store.kahadb.disk.journal.Journal.close(Journal.java:474)
>   - locked <0x0007f67294c8> (a 
> org.apache.activemq.store.kahadb.disk.journal.Journal)
>   at 
> org.apache.activemq.store.kahadb.MessageDatabase.close(MessageDatabase.java:438)
>   at 
> org.apache.activemq.store.kahadb.MessageDatabase.unload(MessageDatabase.java:466)
>   at 
> org.apache.activemq.store.kahadb.MessageDatabase.doStop(MessageDatabase.java:268)
>   at 
> org.apache.activemq.store.kahadb.KahaDBStore.doStop(KahaDBStore.java:288)
>   at org.apache.activemq.util.ServiceSupport.stop(ServiceSupport.java:71)
>   at org.apache.activemq.util.ServiceStopper.stop(ServiceStopper.java:41)
>   at org.apache.activemq.broker.BrokerService.stop(BrokerService.java:792)
>   at 
> org.apache.activemq.store.kahadb.PriorityMessageRestartBrokerTest.stopRestartBroker(PriorityMessageRestartBrokerTest.java:525)
>   at 
> org.apache.activemq.store.kahadb.PriorityMessageRestartBrokerTest.access$200(PriorityMessageRestartBrokerTest.java:70)
>   at 
> org.apache.activemq.store.kahadb.PriorityMessageRestartBrokerTest$BrokerRestartTask.run(PriorityMessageRestartBrokerTest.java:513)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask

[jira] [Created] (AMQ-5626) kahadb - inconsumable low/med priority message after restart

2015-03-02 Thread Gary Tully (JIRA)
Gary Tully created AMQ-5626:
---

 Summary: kahadb - inconsumable low/med priority message after 
restart
 Key: AMQ-5626
 URL: https://issues.apache.org/jira/browse/AMQ-5626
 Project: ActiveMQ
  Issue Type: Bug
  Components: KahaDB, Message Store
Affects Versions: 5.11.0
Reporter: Gary Tully
Assignee: Gary Tully
 Fix For: 5.12.0


with priority support enabled, on occasion after a restart some low priority 
messages are inconsumable.
The cursor.queueSize() is reporting messages available on queue but they cannot 
be consumed.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (AMQ-5620) deadlock on shutdown - kahadb and local tx rollback

2015-02-26 Thread Gary Tully (JIRA)
Gary Tully created AMQ-5620:
---

 Summary: deadlock on shutdown - kahadb and local tx rollback
 Key: AMQ-5620
 URL: https://issues.apache.org/jira/browse/AMQ-5620
 Project: ActiveMQ
  Issue Type: Bug
Affects Versions: 5.11.0
Reporter: Gary Tully
Assignee: Gary Tully
 Fix For: 5.12.0


Deadlock found in a potential test case:{code}Found one Java-level deadlock:
=
"ActiveMQ BrokerService[localhost] Task-1":
  waiting to lock monitor 0x7feeeb80d108 (object 0x0007f67294c8, a 
org.apache.activemq.store.kahadb.disk.journal.Journal),
  which is held by "pool-2-thread-1"
"pool-2-thread-1":
  waiting to lock monitor 0x7feee8a34428 (object 0x0007f67091f8, a 
java.lang.Object),
  which is held by "ActiveMQ BrokerService[localhost] Task-1"

Java stack information for the threads listed above:
===
"ActiveMQ BrokerService[localhost] Task-1":
at 
org.apache.activemq.store.kahadb.disk.journal.Journal.getCurrentWriteFile(Journal.java:420)
- waiting to lock <0x0007f67294c8> (a 
org.apache.activemq.store.kahadb.disk.journal.Journal)
at 
org.apache.activemq.store.kahadb.disk.journal.DataFileAppender.enqueue(DataFileAppender.java:209)
- locked <0x0007f67091f8> (a java.lang.Object)
at 
org.apache.activemq.store.kahadb.disk.journal.DataFileAppender.storeItem(DataFileAppender.java:148)
at 
org.apache.activemq.store.kahadb.disk.journal.Journal.write(Journal.java:647)
at 
org.apache.activemq.store.kahadb.MessageDatabase.store(MessageDatabase.java:977)
at 
org.apache.activemq.store.kahadb.MessageDatabase.store(MessageDatabase.java:959)
at 
org.apache.activemq.store.kahadb.KahaDBTransactionStore.rollback(KahaDBTransactionStore.java:313)
at 
org.apache.activemq.transaction.LocalTransaction.rollback(LocalTransaction.java:94)
- locked <0x0007f6729698> (a 
org.apache.activemq.store.kahadb.KahaDBTransactionStore)
at 
org.apache.activemq.broker.TransactionBroker.removeConnection(TransactionBroker.java:323)
at 
org.apache.activemq.broker.MutableBrokerFilter.removeConnection(MutableBrokerFilter.java:137)
at 
org.apache.activemq.broker.TransportConnection.processRemoveConnection(TransportConnection.java:862)
- locked <0x0007f6729810> (a 
org.apache.activemq.broker.jmx.ManagedTransportConnection)
at 
org.apache.activemq.broker.TransportConnection.doStop(TransportConnection.java:1187)
at 
org.apache.activemq.broker.TransportConnection$4.run(TransportConnection.java:1117)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
"pool-2-thread-1":
at 
org.apache.activemq.store.kahadb.disk.journal.DataFileAppender.close(DataFileAppender.java:257)
- waiting to lock <0x0007f67091f8> (a java.lang.Object)
at 
org.apache.activemq.store.kahadb.disk.journal.Journal.close(Journal.java:474)
- locked <0x0007f67294c8> (a 
org.apache.activemq.store.kahadb.disk.journal.Journal)
at 
org.apache.activemq.store.kahadb.MessageDatabase.close(MessageDatabase.java:438)
at 
org.apache.activemq.store.kahadb.MessageDatabase.unload(MessageDatabase.java:466)
at 
org.apache.activemq.store.kahadb.MessageDatabase.doStop(MessageDatabase.java:268)
at 
org.apache.activemq.store.kahadb.KahaDBStore.doStop(KahaDBStore.java:288)
at org.apache.activemq.util.ServiceSupport.stop(ServiceSupport.java:71)
at org.apache.activemq.util.ServiceStopper.stop(ServiceStopper.java:41)
at org.apache.activemq.broker.BrokerService.stop(BrokerService.java:792)
at 
org.apache.activemq.store.kahadb.PriorityMessageRestartBrokerTest.stopRestartBroker(PriorityMessageRestartBrokerTest.java:525)
at 
org.apache.activemq.store.kahadb.PriorityMessageRestartBrokerTest.access$200(PriorityMessageRestartBrokerTest.java:70)
at 
org.apache.activemq.store.kahadb.PriorityMessageRestartBrokerTest$BrokerRestartTask.run(PriorityMessageRestartBrokerTest.java:513)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at jav

[jira] [Resolved] (AMQ-4483) Improve DLQ handling

2015-02-25 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully resolved AMQ-4483.
-
   Resolution: Fixed
Fix Version/s: (was: 5.9.0)
   5.12.0

reworked isDLQ flag as a destination option via 
http://git-wip-us.apache.org/repos/asf/activemq/commit/be919fbc

> Improve DLQ handling
> 
>
> Key: AMQ-4483
> URL: https://issues.apache.org/jira/browse/AMQ-4483
> Project: ActiveMQ
>  Issue Type: New Feature
>Affects Versions: 5.8.0
>Reporter: Dejan Bosanac
>Assignee: Gary Tully
> Fix For: 5.12.0
>
>
> Provide the way to see if destination is DLQ and retry all messages in it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (AMQ-5614) Support message expiration in DLQ

2015-02-25 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully resolved AMQ-5614.
-
Resolution: Fixed

DeadLetterStrategies now support an expiration attribute, in milliseconds till 
expiry. It defaults to 0.
In case of a loop, such as setting expiration on a shared strategy enabled as the 
default, the audit will reject the duplicates.
Best used selectively with individual dead letter strategies and per-destination 
policies.

http://git-wip-us.apache.org/repos/asf/activemq/commit/0142c4dc
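A minimal sketch of the selective, per-destination usage suggested above, assuming the new expiration attribute maps to a setExpiration(long millis) setter on the dead letter strategy (the attribute name and default of 0 come from the comment above; the setter name, queue names and values are illustrative):
{code}
import java.util.Arrays;

import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.broker.region.policy.IndividualDeadLetterStrategy;
import org.apache.activemq.broker.region.policy.PolicyEntry;
import org.apache.activemq.broker.region.policy.PolicyMap;

public class DlqExpiryConfig {
    public static void main(String[] args) throws Exception {
        IndividualDeadLetterStrategy dlqStrategy = new IndividualDeadLetterStrategy();
        dlqStrategy.setQueuePrefix("DLQ.");
        dlqStrategy.setExpiration(60 * 1000); // assumed setter: DLQ entries expire after a minute

        PolicyEntry ordersPolicy = new PolicyEntry();
        ordersPolicy.setQueue("ORDERS.>");              // apply selectively, per destination
        ordersPolicy.setDeadLetterStrategy(dlqStrategy);

        PolicyMap policyMap = new PolicyMap();
        policyMap.setPolicyEntries(Arrays.asList(ordersPolicy));

        BrokerService broker = new BrokerService();
        broker.setDestinationPolicy(policyMap);
        broker.start();
    }
}
{code}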

> Support message expiration in DLQ
> -
>
> Key: AMQ-5614
> URL: https://issues.apache.org/jira/browse/AMQ-5614
> Project: ActiveMQ
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 5.11.0
>Reporter: Gary Tully
>Assignee: Gary Tully
> Fix For: 5.12.0
>
>
> Currently messages in the DLQ don't expire. With the option to expire, only 
> timely messages remain and can be processed.
> Immediate expiry today can be achieved on a per-destination basis with the 
> discarding strategy of the discarding DLQ broker plugin.
> Using message expiry is a little more intuitive and more useful, because 
> stale messages will be auto removed from the ops radar and timely messages 
> can be dealt with as appropriate.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (AMQ-5614) Support message expiration in DLQ

2015-02-25 Thread Gary Tully (JIRA)
Gary Tully created AMQ-5614:
---

 Summary: Support message expiration in DLQ
 Key: AMQ-5614
 URL: https://issues.apache.org/jira/browse/AMQ-5614
 Project: ActiveMQ
  Issue Type: Improvement
  Components: Broker
Affects Versions: 5.11.0
Reporter: Gary Tully
Assignee: Gary Tully
 Fix For: 5.12.0


Currently messages in the DLQ don't expire. With the option to expire, only 
timely messages remain and can be processed.
Immediate expiry today can be achieved on a per-destination basis with the 
discarding strategy of the discarding DLQ broker plugin.
Using message expiry is a little more intuitive and more useful, because stale 
messages will be auto removed from the ops radar and timely messages can be 
dealt with as appropriate.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMQ-5424) Broker at 100% CPU when idle after Network Connection reconnect with duplicates sent

2015-02-25 Thread Gary Tully (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14336500#comment-14336500
 ] 

Gary Tully commented on AMQ-5424:
-

I did some rework of AMQ-4485 because of AMQ-5266. Part of that was to fix up 
the counters/stats if the store traps a duplicate;
see: 
https://github.com/apache/activemq/blob/85b9c81a3f2431b8272c19acf4e4b1cddeb25c5e/activemq-broker/src/main/java/org/apache/activemq/broker/region/Queue.java#L770

If the counters are correct, the spin can probably be avoided, so I think this 
may not reproduce on 5.11.0.

> Broker at 100% CPU when idle after Network Connection reconnect with 
> duplicates sent
> 
>
> Key: AMQ-5424
> URL: https://issues.apache.org/jira/browse/AMQ-5424
> Project: ActiveMQ
>  Issue Type: Bug
>Affects Versions: 5.10.0
>Reporter: Pete Bertrand
> Fix For: NEEDS_REVIEW
>
> Attachments: activemq.xml, thread-dump.txt
>
>
> In a network of 2 brokers (A and B) with durable queued messages 
> going from A to B over a duplex NetworkConnector,
> if A is stopped and restarted while messages are in-flight, 
> and if replayed messages from A are recognized as duplicates on B,
> then 30 seconds after B goes idle, B's CPU goes to 100%.
> I have attached the thread dump to the ticket.
> From what I have been able to figure out, the dequeue counter does not count
> moving the duplicate into the DLQ. The counters show a pending message when
> there is none in the persisted queue. So when the scheduler kicks in 30 
> seconds
> after the broker goes idle, it says "I have a pending message, fetch it from 
> the DB"
> but the fetch returns 0 messages. Immediately the scheduler still sees pending
> messages and does a DB fetch, with no results. This is where the CPU is 
> spinning.
> See the attached thread dump.
> So, in detail:
> It appears that after A is restarted and it replays messages that have not 
> been ACKed,
> B receives duplicate messages and sends them to the DLQ. Here is the warning 
> from the log:
> {noformat}
>   WARN | duplicate message from store 
> ID:host-lnx-59946-1415221396197-1:1:1:1:468, redirecting for dlq processing | 
> org.apache.activemq.broker.region.Queue | ActiveMQ VMTransport: 
> vm://broker1#11-1
> {noformat}
> After all messages are delivered and the brokers are idle for 30 seconds and 
> the CPU on B is now 100%, if you use the WebConsole and look at the queues on 
> B you see the following:
> {noformat}
>                 Number Of
>  Queue          Pending    Number Of   Messages   Messages
>  Name           Messages   Consumers   Enqueued   Dequeued
>  ActiveMQ.DLQ   1          0           1          0
>  TEST.FOO       1          1           469        468
> {noformat}
> On this test run, only one message was a duplicate. It was moved to the DLQ, 
> but the TEST.FOO counters show it as pending. The counters are out of sync 
> with actual messages in the persisted queue, because the duplicate message is 
> now in the DLQ and not in the TEST.FOO queue.
> At this point if you purge TEST.FOO, CPU on B goes back to normal because 
> this clears the pending message counter.
> +*Steps to reproduce*+
> Set up 2 brokers as follows:
>   *producer* ==> *broker-A*  <==  duplex network connection  ==>  *broker-B* 
> ==>  *consumer*
> 1) Download the binary distribution of AMQ 5.10.0 and extract 
> apache-activemq-5.10.0-bin.tar.gz
> 2) Create two brokers
> {noformat}
>  $ ACTIVEMQ_HOME/bin/activemq create /path/to/brokers/broker-a
>  $ ACTIVEMQ_HOME/bin/activemq create /path/to/brokers/broker-b
> {noformat}
> 3) Update broker-a to connect to broker-b with a duplex connection.
>_You can use the attached *activemq.xml*_. It does the following:
> - Sets transport for broker-a to port 61610
> - Sets up networkConnector to connect to broker-b on 61616
> - Does not start jetty web console on broker-a to avoid port conflict
> broker-b is un-modified and defaults to port 61616
> 4) Start the brokers
> {noformat}
>  $ broker-a/bin/broker-a start
>  $ broker-b/bin/broker-b start
> {noformat}
> 5) Start consumer connected to broker-b and producer connected to broker-a
> {noformat}
>  $ ant consumer -Durl=tcp://localhost:61616 -Ddurable=true
>  $ ant producer -Durl=tcp://localhost:61610 -Ddurable=true
> {noformat}
> 6) Stop broker-a before producer is finished sending messages, then restart
> {noformat}
>  $ broker-a/bin/broker-a stop
>  $ broker-a/bin/broker-a start
> {noformat}
> 7) Look at broker-b logs for duplicates, look at broker-b web console for 
> pending messages
>  http://localhost:8161/admin/queues.jsp
> 8) 30 seconds after going idle, broker-b CPU will goto 100%
> 9) Purge TEST.FOO on broker-b, pending messages will reset and CPU will go 
> back to normal.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

[jira] [Reopened] (AMQ-4483) Improve DLQ handling

2015-02-24 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully reopened AMQ-4483:
-
  Assignee: Gary Tully  (was: Dejan Bosanac)

Retry is fine - but the dependence on the DLQ strategy to determine if a destination 
is a DLQ is flawed, because the DLQ itself typically will not have a strategy. 
Existing tests use a default policy entry so they work ok, but once an individual 
strategy is used it falls down.

> Improve DLQ handling
> 
>
> Key: AMQ-4483
> URL: https://issues.apache.org/jira/browse/AMQ-4483
> Project: ActiveMQ
>  Issue Type: New Feature
>Affects Versions: 5.8.0
>Reporter: Dejan Bosanac
>Assignee: Gary Tully
> Fix For: 5.9.0
>
>
> Provide the way to see if destination is DLQ and retry all messages in it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (AMQ-5229) Queue; be able to pause/resume dispatch of message to all consumers

2015-02-23 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully resolved AMQ-5229.
-
   Resolution: Fixed
Fix Version/s: 5.12.0

implemented pause/resume/isPaused queue view mbean ops and attribute

when paused, there is no dispatch to regular queue consumers; send and browse 
work as normal. Any inflight messages will continue inflight till acked as 
normal.
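A minimal sketch of driving the new operations over JMX (the JMX URL, broker name and queue name are placeholders; the pause/resume/isPaused names come from the comment above):
{code}
import javax.management.JMX;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

import org.apache.activemq.broker.jmx.QueueViewMBean;

public class PauseQueueDispatch {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection connection = connector.getMBeanServerConnection();
            ObjectName queueName = new ObjectName(
                    "org.apache.activemq:type=Broker,brokerName=localhost,"
                            + "destinationType=Queue,destinationName=TEST.FOO");
            QueueViewMBean queue = JMX.newMBeanProxy(connection, queueName, QueueViewMBean.class);

            queue.pause();                                      // stop dispatch to consumers
            System.out.println("paused: " + queue.isPaused());  // sends and browses still work
            queue.resume();                                     // dispatch picks up again
        }
    }
}
{code}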

> Queue; be able to pause/resume dispatch of message to all consumers
> ---
>
> Key: AMQ-5229
> URL: https://issues.apache.org/jira/browse/AMQ-5229
> Project: ActiveMQ
>  Issue Type: Improvement
>  Components: Broker
>Reporter: Pat Fox
>Assignee: Gary Tully
> Fix For: 5.12.0
>
>
> It would be good to be able to pause/resume the dispatch of messages from a 
> queue to the queue's consumers.
> When the queue is "paused":
> -  NO messages are sent to the associated consumers
> -  messages can still be enqueued on the queue
> -  the queue can still be browsed
> -  all the JMX counters for the queue remain available and correct.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (AMQ-5229) Queue; be able to pause/resume dispatch of message to all consumers

2015-02-23 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully reassigned AMQ-5229:
---

Assignee: Gary Tully

> Queue; be able to pause/resume dispatch of message to all consumers
> ---
>
> Key: AMQ-5229
> URL: https://issues.apache.org/jira/browse/AMQ-5229
> Project: ActiveMQ
>  Issue Type: Improvement
>  Components: Broker
>Reporter: Pat Fox
>Assignee: Gary Tully
>
> It would be good to be able to pause/resume the dispatch of messages from a 
> queue to the queue's consumers.
> When the queue is "paused":
> -  NO messages are sent to the associated consumers
> -  messages can still be enqueued on the queue
> -  the queue can still be browsed
> -  all the JMX counters for the queue remain available and correct.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (AMQ-5458) MBean to help testing replicated levelDB

2015-02-23 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully resolved AMQ-5458.
-
   Resolution: Fixed
Fix Version/s: (was: 5.12.0)
   5.11.0

> MBean to help testing replicated levelDB
> 
>
> Key: AMQ-5458
> URL: https://issues.apache.org/jira/browse/AMQ-5458
> Project: ActiveMQ
>  Issue Type: New Feature
>Reporter: Hiram Chirino
>Assignee: Hiram Chirino
> Fix For: 5.11.0
>
>
> Would be nice if, when you set a system property like 
> 'org.apache.activemq.leveldb.test=true', you then get an MBean for leveldb 
> stores that allows you to suspend/resume calls around the journal writes, 
> deletes and force operations, so you can more easily write tests that validate 
> consistency and recovery.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (AMQ-5578) preallocate journal files

2015-02-13 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully resolved AMQ-5578.
-
Resolution: Fixed

implemented for kahadb - levelDB already does the right thing[1]

[1] 
https://github.com/apache/activemq/blob/master/activemq-leveldb-store/src/main/scala/org/apache/activemq/leveldb/RecordLog.scala#L202

> preallocate journal files
> -
>
> Key: AMQ-5578
> URL: https://issues.apache.org/jira/browse/AMQ-5578
> Project: ActiveMQ
>  Issue Type: Improvement
>  Components: Message Store
>Affects Versions: 5.11.0
>Reporter: Gary Tully
>Assignee: Gary Tully
>  Labels: journal, kahaDB, perfomance
> Fix For: 5.12.0
>
>
> Our journals are append-only; however, we use the file size to track journal 
> rollover on recovery and replay. We can improve performance if we never 
> update the size on disk and instead preallocate it on creation.
> Rework journal logic to ensure size is never updated. This will allow the 
> configuration option from https://issues.apache.org/jira/browse/AMQ-4947 to 
> be the default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMQ-5520) MulticastDiscoveryAgent may use a network that is not multicast enabled, fails to startup

2015-02-11 Thread Gary Tully (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316371#comment-14316371
 ] 

Gary Tully commented on AMQ-5520:
-

seems that test failure points to a 
problem{code}org.springframework.beans.factory.BeanCreationException: Error 
creating bean with name 'broker1' defined in class path resource 
[spring-embedded-pooled.xml]: Invocation of init method failed; nested 
exception is org.springframework.beans.factory.BeanCreationException: Error 
creating bean with name 'org.apache.activemq.xbean.XBeanBrokerService#0' 
defined in class path resource [activemq.xml]: Invocation of init method 
failed; nested exception is java.net.SocketException: No such device
at java.net.NetworkInterface.supportsMulticast0(Native Method)
at 
java.net.NetworkInterface.supportsMulticast(NetworkInterface.java:417)
at 
org.apache.activemq.transport.discovery.multicast.MulticastDiscoveryAgent.findNetworkInterface(MulticastDiscoveryAgent.java:347)
at 
org.apache.activemq.transport.discovery.multicast.MulticastDiscoveryAgent.start(MulticastDiscoveryAgent.java:324)
at 
org.apache.activemq.broker.TransportConnector.start(TransportConnector.java:253)
at 
org.apache.activemq.broker.BrokerService.startTransportConnector(BrokerService.java:2593)
at 
org.apache.activemq.broker.BrokerService.startAllConnectors(BrokerService.java:2506)
at 
org.apache.activemq.broker.BrokerService.doStartBroker(BrokerService.java:710)
at 
org.apache.activemq.broker.BrokerService.startBroker(BrokerService.java:670)
at 
org.apache.activemq.broker.BrokerService.start(BrokerService.java:606)
at 
org.apache.activemq.xbean.XBeanBrokerService.afterPropertiesSet(XBeanBrokerService.java:73)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606){code}

wrapping the supportsMulticast call to ignore this exception.
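A minimal sketch of that kind of guard (an illustrative helper, not the actual MulticastDiscoveryAgent.findNetworkInterface code): treat an interface that throws from supportsMulticast() as unusable instead of letting the exception fail broker startup.
{code}
import java.net.NetworkInterface;
import java.net.SocketException;

final class MulticastInterfaceCheck {
    static boolean usableForMulticast(NetworkInterface nic) {
        try {
            // only pick interfaces that are up and multicast capable
            return nic.isUp() && nic.supportsMulticast();
        } catch (SocketException e) {
            // e.g. "No such device" from stale or virtual interfaces - skip it
            return false;
        }
    }
}
{code}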

> MulticastDiscoveryAgent may use a network that is not multicast enabled, 
> fails to startup
> -
>
> Key: AMQ-5520
> URL: https://issues.apache.org/jira/browse/AMQ-5520
> Project: ActiveMQ
>  Issue Type: Bug
>Affects Versions: 5.10.0
>Reporter: Daniel Kulp
>Assignee: Daniel Kulp
> Fix For: 5.11.0
>
>
> If no network interface is explicitly set, it calls 
> mcast.joinGroup(inetAddress); without setting one on the MulticastSocket.  In 
> that case, MulticastSocket then calls NetworkInterface.getDefault(). The 
> "default" interface on a system is relatively unpredictable and COULD result 
> in a network interface that isn't even multicast enabled. On my mac, it's 
> selecting a "awdl0" interface which doesn't support multicast.   If I have 
> Parallels running, it sometimes picks up one of those interfaces.   It also 
> sometimes picks up an ipv6 only network interface which also doesn't support 
> the ipv4 broadcast address.
> It would be better to enumerate the network interfaces and at least make sure 
> we grab one that support multicast on ipv4 and is "up".
> Note:  this causes some test failures on my machine.  Specifically 
> SpringTest.testSenderWithSpringXmlEmbeddedPooledBrokerConfiguredViaXml fails.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (AMQ-5578) preallocate journal files

2015-02-11 Thread Gary Tully (JIRA)
Gary Tully created AMQ-5578:
---

 Summary: preallocate journal files
 Key: AMQ-5578
 URL: https://issues.apache.org/jira/browse/AMQ-5578
 Project: ActiveMQ
  Issue Type: Improvement
  Components: Message Store
Affects Versions: 5.11.0
Reporter: Gary Tully
Assignee: Gary Tully
 Fix For: 5.12.0


Our journals are append-only; however, we use the file size to track journal rollover 
on recovery and replay. We can improve performance if we never update the size 
on disk and instead preallocate it on creation.
Rework journal logic to ensure size is never updated. This will allow the 
configuration option from https://issues.apache.org/jira/browse/AMQ-4947 to be 
the default.
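A minimal sketch of the preallocation idea (illustrative only; the actual kahaDB change and its configuration live in the commit for this issue): reserve the journal file's full length when it is created, so its on-disk size never changes afterwards.
{code}
import java.io.File;
import java.io.RandomAccessFile;

final class JournalPreallocation {
    static void preallocate(File journalFile, long journalSizeBytes) throws Exception {
        try (RandomAccessFile raf = new RandomAccessFile(journalFile, "rw")) {
            raf.setLength(journalSizeBytes); // reserve the full size up front
            raf.getFD().sync();              // push the new length to disk
        }
    }
}
{code}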




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMQ-1940) Negative queue size (reproducible)

2015-02-11 Thread Gary Tully (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-1940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316120#comment-14316120
 ] 

Gary Tully commented on AMQ-1940:
-

@Sergey - can you make a junit test case that can reproduce - peek at the 
activemq-camel module if you need to use camel routes

> Negative queue size (reproducible)
> --
>
> Key: AMQ-1940
> URL: https://issues.apache.org/jira/browse/AMQ-1940
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 5.2.0
> Environment: Found on Windows but reproduced under Linux
>Reporter: Vadim Chekan
>Assignee: Rob Davies
>Priority: Critical
> Fix For: 5.3.0
>
> Attachments: Main.java, Picture 6.png, QueuePurgeTest.java.diff.txt
>
>
> When you "purge" queue from web admin console, it zeroes queue message
> counter. But if you have an active consumer at that time which
> pre-fetched messages than your consumer will keep sending ack as it
> process messages from its buffer. ActiveMQ will keep decrement counter
> upon receiving each ack. So when consumer is done queue will show
> MINUS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (AMQ-5567) JDBC XA - Store COMMIT FAILED: java.io.IOException: Could not remove prepared transaction state from message add for sequenceId

2015-02-06 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully resolved AMQ-5567.
-
Resolution: Fixed

The issue was eager dispatch pushed out by a concurrent send. The push to the cursor 
needs to happen on the commit outcome only. Also, to avoid replay of messages 
from the store when the outcome overlaps with a full cache, the commit assigns a new 
sequenceId, which ensures order.

>  JDBC XA - Store COMMIT FAILED:  java.io.IOException: Could not remove 
> prepared transaction state from message add for sequenceId
> -
>
> Key: AMQ-5567
> URL: https://issues.apache.org/jira/browse/AMQ-5567
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: JDBC, Message Store
>Affects Versions: 5.10.0, 5.11.0
>Reporter: Gary Tully
>Assignee: Gary Tully
>  Labels: XA
> Fix For: 5.12.0
>
>
> Occasional commit failure. Stack trace of the form{code}
> WARN  | .129:36295@61616 | XATransaction| 
> tivemq.transaction.XATransaction   91 | 121 - 
> org.apache.activemq.activemq-osgi  | Store COMMIT FAILED: 
> java.io.IOException: Could not remove prepared transaction state from message 
> add for sequenceId: 1753267
>   at 
> org.apache.activemq.store.jdbc.adapter.DefaultJDBCAdapter.doCommitAddOp(DefaultJDBCAdapter.java:1049)
>   at 
> org.apache.activemq.store.jdbc.JDBCPersistenceAdapter.commitAdd(JDBCPersistenceAdapter.java:775)
>   at 
> org.apache.activemq.store.jdbc.JdbcMemoryTransactionStore$1.run(JdbcMemoryTransactionStore.java:108)
>   at 
> org.apache.activemq.store.memory.MemoryTransactionStore$Tx.commit(MemoryTransactionStore.java:101)
>   at 
> org.apache.activemq.store.memory.MemoryTransactionStore.commit(MemoryTransactionStore.java:269)
>   at 
> org.apache.activemq.transaction.XATransaction.storeCommit(XATransaction.java:86)
>   at 
> org.apache.activemq.transaction.XATransaction.commit(XATransaction.java:76)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (AMQ-5567) JDBC XA - Store COMMIT FAILED: java.io.IOException: Could not remove prepared transaction state from message add for sequenceId

2015-02-06 Thread Gary Tully (JIRA)
Gary Tully created AMQ-5567:
---

 Summary:  JDBC XA - Store COMMIT FAILED:  java.io.IOException: 
Could not remove prepared transaction state from message add for sequenceId
 Key: AMQ-5567
 URL: https://issues.apache.org/jira/browse/AMQ-5567
 Project: ActiveMQ
  Issue Type: Bug
  Components: JDBC, Message Store
Affects Versions: 5.11.0, 5.10.0
Reporter: Gary Tully
Assignee: Gary Tully
 Fix For: 5.12.0


Occasional commit failure. Stack trace of the form{code}
WARN  | .129:36295@61616 | XATransaction| 
tivemq.transaction.XATransaction   91 | 121 - org.apache.activemq.activemq-osgi 
 | Store COMMIT FAILED: 
java.io.IOException: Could not remove prepared transaction state from message 
add for sequenceId: 1753267
at 
org.apache.activemq.store.jdbc.adapter.DefaultJDBCAdapter.doCommitAddOp(DefaultJDBCAdapter.java:1049)
at 
org.apache.activemq.store.jdbc.JDBCPersistenceAdapter.commitAdd(JDBCPersistenceAdapter.java:775)
at 
org.apache.activemq.store.jdbc.JdbcMemoryTransactionStore$1.run(JdbcMemoryTransactionStore.java:108)
at 
org.apache.activemq.store.memory.MemoryTransactionStore$Tx.commit(MemoryTransactionStore.java:101)
at 
org.apache.activemq.store.memory.MemoryTransactionStore.commit(MemoryTransactionStore.java:269)
at 
org.apache.activemq.transaction.XATransaction.storeCommit(XATransaction.java:86)
at 
org.apache.activemq.transaction.XATransaction.commit(XATransaction.java:76)
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMQ-5178) ActiveMQ Karaf - Add CLI command to create queue/topic

2015-02-03 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully updated AMQ-5178:

Fix Version/s: (was: 5.11.0)
   5.12.0

> ActiveMQ Karaf - Add CLI command to create queue/topic
> --
>
> Key: AMQ-5178
> URL: https://issues.apache.org/jira/browse/AMQ-5178
> Project: ActiveMQ
>  Issue Type: New Feature
>  Components: OSGi/Karaf
>Reporter: Claus Ibsen
>Assignee: Jean-Baptiste Onofré
>Priority: Minor
> Fix For: 5.12.0
>
>
> We have the activemq commands in Karaf where you can see some broker details.
> It would be good to have a command to create a queue/topic which you can do 
> today using the web console.
> For example people on SO have asked about this
> http://stackoverflow.com/questions/23562106/how-to-add-new-queues-to-jboss-a-mq-using-the-command-line-console-interface



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMQ-5310) activemq-client - Throws IllegalStateException in receive method which should be a JMSException

2015-02-03 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully updated AMQ-5310:

Fix Version/s: (was: 5.11.0)
   5.12.0

> activemq-client - Throws IllegalStateException in receive method which should 
> be a JMSException
> ---
>
> Key: AMQ-5310
> URL: https://issues.apache.org/jira/browse/AMQ-5310
> Project: ActiveMQ
>  Issue Type: Bug
>Affects Versions: 5.10.0
>Reporter: Claus Ibsen
>Assignee: Dejan Bosanac
> Fix For: 5.12.0
>
> Attachments: amq-5310.patch
>
>
> There is a change in activemq-client in the receive method, where it does a 
> checkClosed call that throws an IllegalStateException, which is not supposed 
> to happen according to the JMS spec. This causes camel-jms / spring jms to not 
> shut down nicely and causes the JMS listener to hang, and also other side-effects.
> For example the camel example pojo messaging demonstrates that. Just start 
> the example according to its readme, and then shutdown the JVM with ctrl + c, 
> and it hangs
> {code}
>  [Thu Aug 07 10:33:27 CEST 2014]; root of context hierarchy]
> 2014-08-07 10:33:42,965 [Thread-1   ] INFO  SpringCamelContext
>  - Apache Camel 2.14-SNAPSHOT (CamelContext: camel-1) is shutting down
> 2014-08-07 10:33:42,971 [sonnel.records]] WARN  
> ultJmsMessageListenerContainer - Setup of JMS message listener invoker failed 
> for destination 'personnel.records' - trying to recover. Cause: The Consumer 
> is closed
> javax.jms.IllegalStateException: The Consumer is closed
>   at 
> org.apache.activemq.ActiveMQMessageConsumer.checkClosed(ActiveMQMessageConsumer.java:861)
>   at 
> org.apache.activemq.ActiveMQMessageConsumer.receive(ActiveMQMessageConsumer.java:618)
>   at 
> org.apache.activemq.jms.pool.PooledMessageConsumer.receive(PooledMessageConsumer.java:67)
>   at 
> org.springframework.jms.listener.AbstractPollingMessageListenerContainer.receiveMessage(AbstractPollingMessageListenerContainer.java:430)
>   at 
> org.springframework.jms.listener.AbstractPollingMessageListenerContainer.doReceiveAndExecute(AbstractPollingMessageListenerContainer.java:310)
>   at 
> org.springframework.jms.listener.AbstractPollingMessageListenerContainer.receiveAndExecute(AbstractPollingMessageListenerContainer.java:263)
>   at 
> org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.invokeListener(DefaultMessageListenerContainer.java:1101)
>   at 
> org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.executeOngoingLoop(DefaultMessageListenerContainer.java:1093)
>   at 
> org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.run(DefaultMessageListenerContainer.java:990)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:744)
> 2014-08-07 10:33:42,975 [Thread-1   ] DEBUG BeanComponent 
>  - Clearing BeanInfo cache[size=2, hits=0, misses=2, evicted=0]
> 2014-08-07 10:33:42,975 [sonnel.records]] ERROR 
> ultJmsMessageListenerContainer - Could not refresh JMS Connection for 
> destination 'personnel.records' - retrying in 5000 ms. Cause: null
> java.lang.NullPointerException
>   at 
> org.springframework.jms.listener.AbstractJmsListeningContainer.refreshSharedConnection(AbstractJmsListeningContainer.java:392)
>   at 
> org.springframework.jms.listener.DefaultMessageListenerContainer.refreshConnectionUntilSuccessful(DefaultMessageListenerContainer.java:885)
>   at 
> org.springframework.jms.listener.DefaultMessageListenerContainer.recoverAfterListenerSetupFailure(DefaultMessageListenerContainer.java:861)
>   at 
> org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.run(DefaultMessageListenerContainer.java:1012)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:744)
> 2014-08-07 10:33:42,975 [Thread-1   ] DEBUG TimerListenerManager  
>  - Removed TimerListener: 
> org.apache.camel.management.mbean.ManagedCamelContext@2a2bc16
> {code}
> As you can see from this stacktrace there is a NPE error inside spring jms 
> which causes it not to shutdown correctly also.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMQ-4710) The first heart-beat after a connection becomes idle isn't sent as quickly as it should be

2015-02-03 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully updated AMQ-4710:

Fix Version/s: (was: 5.11.0)
   5.12.0

> The first heart-beat after a connection becomes idle isn't sent as quickly as 
> it should be
> --
>
> Key: AMQ-4710
> URL: https://issues.apache.org/jira/browse/AMQ-4710
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: stomp
>Affects Versions: 5.8.0
>Reporter: Andy Wilkinson
>Assignee: Timothy Bish
> Fix For: 5.12.0
>
> Attachments: amq-4710.diff
>
>
> After ActiveMQ sends a stomp frame, it may not send a heart-beat for up to 
> almost 2x the negotiated interval.
> The following test should illustrate the problem:
> {code}
> import org.junit.Test;
> import static org.junit.Assert.*;
> public class ActiveMqHeartbeatTests {
>   @Test
>   public void heartbeats() throws Exception {
>   BrokerService broker = createAndStartBroker();
>   Socket socket = null;
>   try {
>   socket = new Socket("localhost", 61613);
>   byte[] connectFrame = 
> "CONNECT\nheart-beat:0,1\naccept-version:1.2\n\n\0".getBytes();
>   socket.getOutputStream().write(connectFrame);
>   byte[] buffer = new byte[4096];
>   long lastReadTime = System.currentTimeMillis();
>   while (true) {
>   int read = socket.getInputStream().read(buffer);
>   byte[] frame = Arrays.copyOf(buffer, read);
>   long now = System.currentTimeMillis();
>   long timeSinceLastRead = now - lastReadTime;
>   lastReadTime = now;
>   System.out.println(new String(frame));
>   System.out.println("Time since last read: " + 
> timeSinceLastRead + "ms");
>   if (timeSinceLastRead > 15000) {
>   fail("Data not received for " + 
> timeSinceLastRead + "ms");
>   }
>   }
>   } finally {
>   if (socket != null) {
>   socket.close();
>   }
>   broker.stop();
>   }
>   }
>   private BrokerService createAndStartBroker() throws Exception {
>   BrokerService broker = new BrokerService();
>   broker.addConnector("stomp://localhost:61613");
>   broker.setStartAsync(false);
>   broker.setDeleteAllMessagesOnStartup(true);
>   broker.start();
>   return broker;
>   }
> }
> {code}
> For the initial read of the CONNECTED frame I see:
> {noformat}
> Time since last read: 49ms
> {noformat}
> However, it's then almost 20 seconds before a heart-beat's sent:
> {noformat}
> Time since last read: 19994ms
> {noformat}
> If I comment out the fail(…) line in the test, after the first heartbeat 
> taking almost twice the negotiated interval to be sent, things settle down and 
> a heartbeat's received once per interval.
> It looks like the write checker wakes up once per negotiated interval. The 
> first time it wakes up, it notices that the CONNECTED frame was sent and does 
> nothing. It then sleeps for a further interval before checking again. As the 
> CONNECTED frame was sent very early in the first interval, this leads to it 
> taking almost two intervals for the first heart-beat to be sent. From this 
> point, as no further data frames are sent, the write checker wakes up and 
> sends a heart-beat once per interval.
> In short, I don't think ActiveMQ is adhering to the requirement that "the 
> sender MUST send new data over the network connection at least every <n> 
> milliseconds".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMQ-4848) activemq:list - Should include the transport connectors and their urls

2015-02-03 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully updated AMQ-4848:

Fix Version/s: (was: 5.11.0)
   5.12.0

> activemq:list - Should include the transport connectors and their urls
> --
>
> Key: AMQ-4848
> URL: https://issues.apache.org/jira/browse/AMQ-4848
> Project: ActiveMQ
>  Issue Type: Improvement
>  Components: OSGi/Karaf
>Reporter: Claus Ibsen
>Assignee: Jean-Baptiste Onofré
>Priority: Minor
> Fix For: 5.12.0
>
>
> When using the karaf commands you can get a bit of information about the 
> broker.
> But we need to output the transport connectors in use as well, so you can see 
> the types and urls, e.g. tcp://localhost:61616, amqp:xxx and so forth.
> We could add that to the list command that shows the broker names.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMQ-4277) activemq-web - REST GET 204

2015-02-03 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully updated AMQ-4277:

Fix Version/s: (was: 5.11.0)
   5.12.0

> activemq-web - REST GET 204
> ---
>
> Key: AMQ-4277
> URL: https://issues.apache.org/jira/browse/AMQ-4277
> Project: ActiveMQ
>  Issue Type: Bug
>Affects Versions: 5.7.0
> Environment: windows 7 and windows xp
>Reporter: Helen Huang
> Fix For: 5.12.0
>
> Attachments: AMQ4277TestPatch.txt, RestTest.java
>
>
> Using 5.7 REST API I can GET a message from a topic but if a new message is 
> POSTed to that same topic before the GET has been reissued (less than 20ms 
> behind the POST and using the same session as the previous GET) the GET will 
> timeout with a 204 and does not retrieve the message. This may be as designed 
> for topics (queues are not an option) but I am just looking for confirmation. 
> I assumed the first GET would have provided a topic subscription and the 
> broker would hold a topic message for which there is a subscriber for longer 
> than 20ms? This is not a stress test and is recreated with a simple producer 
> and a separate consumer usually within the first couple of message exchanges. 
> It has been noticed that when the topic message is POSTed without an 
> outstanding GET to receive it, the following exception is logged: 
> 2013-01-22 01:09:37,484 | DEBUG | Async client internal exception occurred 
> with no exception listener registered: java.lang.IllegalStateException: 
> DISPATCHED,initial | org.apache.activemq.ActiveMQConnection | ActiveMQ 
> Session Task-1 
> java.lang.IllegalStateException: DISPATCHED,initial 
> at 
> org.eclipse.jetty.server.AsyncContinuation.dispatch(AsyncContinuation.java:408)
>  
> at 
> org.eclipse.jetty.server.AsyncContinuation.resume(AsyncContinuation.java:815) 
> at 
> org.apache.activemq.web.MessageServlet$Listener.onMessageAvailable(MessageServlet.java:409)
>  
> at 
> org.apache.activemq.ActiveMQMessageConsumer.dispatch(ActiveMQMessageConsumer.java:1343)
>  
> at 
> org.apache.activemq.ActiveMQSessionExecutor.dispatch(ActiveMQSessionExecutor.java:131)
>  
> at 
> org.apache.activemq.ActiveMQSessionExecutor.iterate(ActiveMQSessionExecutor.java:202)
>  
> at 
> org.apache.activemq.thread.PooledTaskRunner.runTask(PooledTaskRunner.java:129)
>  
> at 
> org.apache.activemq.thread.PooledTaskRunner$1.run(PooledTaskRunner.java:47) 
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
>  
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>  
> at java.lang.Thread.run(Thread.java:722) 
>   
> Are there any configuration modifications available to have the topic 
> messages retained for at least 1 second?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMQ-4980) MessageGroupLateArrivalsTest.testConsumerLateToBigPartyGetsNewGroup fails intermittently

2015-02-03 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully updated AMQ-4980:

Fix Version/s: (was: 5.11.0)
   5.12.0

> MessageGroupLateArrivalsTest.testConsumerLateToBigPartyGetsNewGroup fails 
> intermittently
> 
>
> Key: AMQ-4980
> URL: https://issues.apache.org/jira/browse/AMQ-4980
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Test Cases
>Reporter: Kevin Earls
>Assignee: Kevin Earls
>Priority: Minor
> Fix For: 5.12.0
>
>
> The MessageGroupLateArrivalsTest.testConsumerLateToBigPartyGetsNewGroup test 
> fails intermittently on CI boxes with the following error.
> Error Message
> worker1 received 2 messages from groups [C] expected:<4> but was:<2>
> Stacktrace
> junit.framework.AssertionFailedError: worker1 received 2 messages from groups 
> [C] expected:<4> but was:<2>
>   at junit.framework.Assert.fail(Assert.java:57)
>   at junit.framework.Assert.failNotEquals(Assert.java:329)
>   at junit.framework.Assert.assertEquals(Assert.java:78)
>   at junit.framework.Assert.assertEquals(Assert.java:234)
>   at junit.framework.TestCase.assertEquals(TestCase.java:401)
>   at 
> org.apache.activemq.usecases.MessageGroupLateArrivalsTest.testConsumerLateToBigPartyGetsNewGroup(MessageGroupLateArrivalsTest.java:211)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at junit.framework.TestCase.runTest(TestCase.java:176)
>   at junit.framework.TestCase.runBare(TestCase.java:141)
>   at 
> org.apache.activemq.CombinationTestSupport.runBare(CombinationTestSupport.java:107)
>   at junit.framework.TestResult$1.protect(TestResult.java:122)
>   at junit.framework.TestResult.runProtected(TestResult.java:142)
>   at junit.framework.TestResult.run(TestResult.java:125)
>   at junit.framework.TestCase.run(TestCase.java:129)
>   at junit.framework.TestSuite.runTest(TestSuite.java:255)
>   at junit.framework.TestSuite.run(TestSuite.java:250)
>   at 
> org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:84)
>   at 
> org.apache.maven.surefire.junit4.JUnit4TestSet.execute(JUnit4TestSet.java:53)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:123)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:104)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:164)
>   at 
> org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:110)
>   at 
> org.apache.maven.surefire.booter.SurefireStarter.invokeProvider(SurefireStarter.java:175)
>   at 
> org.apache.maven.surefire.booter.SurefireStarter.runSuitesInProcessWhenForked(SurefireStarter.java:81)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:68)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMQ-4471) Inconsistent messages with the WebSocket/Stomp Demo

2015-02-03 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully updated AMQ-4471:

Fix Version/s: (was: 5.11.0)
   5.12.0

> Inconsistent messages with the WebSocket/Stomp Demo
> ---
>
> Key: AMQ-4471
> URL: https://issues.apache.org/jira/browse/AMQ-4471
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: stomp, Transport
>Affects Versions: 5.8.0
>Reporter: Matthias Weßendorf
>Priority: Minor
> Fix For: 5.12.0
>
>
> Playing with the "demo/websocket/index.html" demo (5.8.0), I see 
> inconsistent messaging behaviour.
> With two browsers (FF and Chrome), a message does not always reach the 
> other browser:
> * TEST in FF => displayed in Chrome (and FF)
> * TEST (1) in Chrome => displayed in both
> * TEST (2) in Chrome => this time, only visible in Chrome; no message arrived 
> at the Firefox browser



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMQ-4929) remove old and unused org.apache.activemq.broker.BrokerService#setSupportFailOver

2015-02-03 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully updated AMQ-4929:

Fix Version/s: (was: 5.11.0)
   5.12.0

> remove old and unused 
> org.apache.activemq.broker.BrokerService#setSupportFailOver
> -
>
> Key: AMQ-4929
> URL: https://issues.apache.org/jira/browse/AMQ-4929
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 5.9.0
>Reporter: Gary Tully
> Fix For: 5.12.0
>
>
> There is a bunch of duplicate detection in TransactionBroker that is disabled 
> by default, is not tested, and duplicates work done elsewhere (the store and 
> the producerAudit). It should be removed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMQ-4975) DbRestartJDBCQueueMasterSlaveLeaseQuiesceTest.testSendReceive fails intermittently

2015-02-03 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully updated AMQ-4975:

Fix Version/s: (was: 5.11.0)
   5.12.0

> DbRestartJDBCQueueMasterSlaveLeaseQuiesceTest.testSendReceive fails 
> intermittently
> --
>
> Key: AMQ-4975
> URL: https://issues.apache.org/jira/browse/AMQ-4975
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Test Cases
>Reporter: Kevin Earls
>Priority: Minor
> Fix For: 5.12.0
>
>
> This test fails intermittently with the error below.  It typically fails at 
> around message 180-185, where it looks like it receives the same message 
> twice.
> (This test is defined in JmsSendReceiveTestSupport.  I'll add an overridden 
> no-op version in DbRestartJDBCQueueMasterSlaveLeaseQuiesceTest for now so it 
> doesn't cause CI builds to fail)
> ---
>  T E S T S
> ---
> Running 
> org.apache.activemq.broker.ft.DbRestartJDBCQueueMasterSlaveLeaseQuiesceTest
> Tests run: 4, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 70.702 sec 
> <<< FAILURE! - in 
> org.apache.activemq.broker.ft.DbRestartJDBCQueueMasterSlaveLeaseQuiesceTest
> testSendReceive(org.apache.activemq.broker.ft.DbRestartJDBCQueueMasterSlaveLeaseQuiesceTest)
>   Time elapsed: 18.286 sec  <<< FAILURE!
> junit.framework.ComparisonFailure: Message: 181 expected: 18[1] at Thu Jan 16 16:02...> but was: 16:02...>
>   at junit.framework.Assert.assertEquals(Assert.java:100)
>   at junit.framework.TestCase.assertEquals(TestCase.java:261)
>   at 
> org.apache.activemq.JmsSendReceiveTestSupport.assertMessagesReceivedAreValid(JmsSendReceiveTestSupport.java:165)
>   at 
> org.apache.activemq.JmsSendReceiveTestSupport.assertMessagesAreReceived(JmsSendReceiveTestSupport.java:128)
>   at 
> org.apache.activemq.JmsSendReceiveTestSupport.testSendReceive(JmsSendReceiveTestSupport.java:104)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at junit.framework.TestCase.runTest(TestCase.java:176)
>   at junit.framework.TestCase.runBare(TestCase.java:141)
>   at 
> org.apache.activemq.CombinationTestSupport.runBare(CombinationTestSupport.java:107)
>   at 
> org.apache.activemq.CombinationTestSupport.runBare(CombinationTestSupport.java:113)
>   at junit.framework.TestResult$1.protect(TestResult.java:122)
>   at junit.framework.TestResult.runProtected(TestResult.java:142)
>   at junit.framework.TestResult.run(TestResult.java:125)
>   at junit.framework.TestCase.run(TestCase.java:129)
>   at junit.framework.TestSuite.runTest(TestSuite.java:255)
>   at junit.framework.TestSuite.run(TestSuite.java:250)
>   at 
> org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:84)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> Results :
> Failed tests: 
>   
> DbRestartJDBCQueueMasterSlaveLeaseQuiesceTest>CombinationTestSupport.runBare:113->CombinationTestSupport.runBare:107->JmsSendReceiveTestSupport.testSendReceive:104->JmsSendReceiveTestSupport.assertMessagesAreReceived:128->JmsSendReceiveTestSupport.assertMessagesReceivedAreValid:165
>  Message: 181 expected: but 
> was:
> Tests run: 4, Failures: 1, Errors: 0, Skipped: 0



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMQ-4944) Implement reliable forwarding in broker networks

2015-02-03 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully updated AMQ-4944:

Fix Version/s: (was: 5.11.0)
   5.12.0

> Implement reliable forwarding in broker networks
> 
>
> Key: AMQ-4944
> URL: https://issues.apache.org/jira/browse/AMQ-4944
> Project: ActiveMQ
>  Issue Type: New Feature
>  Components: Broker
>Reporter: Gary Tully
>  Labels: duplicate, failover, networks, recovery
> Fix For: 5.12.0
>
>
> See some detail in https://issues.apache.org/jira/browse/AMQ-4465
> Essentially when we forward we do a send and individual ack. If we miss the 
> send reply due to connection failure or broker death we will have a duplicate 
> send. The other end may suppress but this cannot be guaranteed.
> We need to do batch 2pc between the brokers such that we can always recover 
> and guarantee at most once delivery.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMQ-4886) AMQ2149LevelDBTest hangs or fails frequently

2015-02-03 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully updated AMQ-4886:

Fix Version/s: (was: 5.11.0)
   5.12.0

> AMQ2149LevelDBTest hangs or fails frequently
> 
>
> Key: AMQ-4886
> URL: https://issues.apache.org/jira/browse/AMQ-4886
> Project: ActiveMQ
>  Issue Type: Bug
>Reporter: Kevin Earls
>Assignee: Kevin Earls
> Fix For: 5.12.0
>
> Attachments: AMQ2149LevelDBTest.stack
>
>
> I'll update this as I get more information, but this test suite has multiple 
> cases that hang and time out frequently 
> (testTopicTransactionalOrderWithRestart and testTopicOrderWithRestart seem to 
> do so most frequently.)  
> It can also hang in tearDown, which causes the whole suite to hang without 
> timing out, which can be a problem when run under Hudson or Jenkins.  
> I will attach a stack trace of the tearDown hang, and also update 
> AMQ2149Test to prevent this.  I'm also going to update the test to use JUnit4 
> and reduce the timeouts from 30 to 5 minutes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMQ-4973) UnreliableUdpTransportTest and MulticastTransportTest have test failures

2015-02-03 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully updated AMQ-4973:

Fix Version/s: (was: 5.11.0)
   5.12.0

> UnreliableUdpTransportTest and MulticastTransportTest have test failures
> 
>
> Key: AMQ-4973
> URL: https://issues.apache.org/jira/browse/AMQ-4973
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Test Cases
>Reporter: Kevin Earls
>Priority: Minor
> Fix For: 5.12.0
>
>
> The testSendingMediumMessage and testSendingLargeMessage test cases fail for 
> both of these as shown below.  
> UnreliableUdpTransportTest uses 
> org.apache.activemq.transport.reliable.ReliableTransport, which is 
> deprecated.  Should we continue to run these tests?
> ---
>  T E S T S
> ---
> Running org.apache.activemq.transport.multicast.MulticastTransportTest
> Tests run: 3, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 40.6 sec <<< 
> FAILURE! - in org.apache.activemq.transport.multicast.MulticastTransportTest
> testSendingMediumMessage(org.apache.activemq.transport.multicast.MulticastTransportTest)
>   Time elapsed: 40.402 sec  <<< FAILURE!
> junit.framework.AssertionFailedError: Should have received a Command by now!
>   at junit.framework.Assert.fail(Assert.java:57)
>   at junit.framework.Assert.assertTrue(Assert.java:22)
>   at junit.framework.Assert.assertNotNull(Assert.java:256)
>   at junit.framework.TestCase.assertNotNull(TestCase.java:426)
>   at 
> org.apache.activemq.transport.udp.UdpTestSupport.assertCommandReceived(UdpTestSupport.java:257)
>   at 
> org.apache.activemq.transport.udp.UdpTestSupport.assertSendTextMessage(UdpTestSupport.java:112)
>   at 
> org.apache.activemq.transport.udp.UdpTestSupport.testSendingMediumMessage(UdpTestSupport.java:84)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at junit.framework.TestCase.runTest(TestCase.java:176)
>   at junit.framework.TestCase.runBare(TestCase.java:141)
>   at junit.framework.TestResult$1.protect(TestResult.java:122)
>   at junit.framework.TestResult.runProtected(TestResult.java:142)
>   at junit.framework.TestResult.run(TestResult.java:125)
>   at junit.framework.TestCase.run(TestCase.java:129)
>   at junit.framework.TestSuite.runTest(TestSuite.java:255)
>   at junit.framework.TestSuite.run(TestSuite.java:250)
>   at 
> org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:84)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> testSendingLargeMessage(org.apache.activemq.transport.multicast.MulticastTransportTest)
>   Time elapsed: 0.009 sec  <<< FAILURE!
> junit.framework.AssertionFailedError: Failed to send to transport: 
> java.net.SocketException: Socket is closed
>   at junit.framework.Assert.fail(Assert.java:57)
>   at junit.framework.TestCase.fail(TestCase.java:227)
>   at 
> org.apache.activemq.transport.udp.UdpTestSupport.assertSendTextMessage(UdpTestSupport.java:123)
>   at 
> org.apache.activemq.transport.udp.UdpTestSupport.testSendingLargeMessage(UdpTestSupport.java:90)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at junit.framework.TestCase.runTest(TestCase.java:176)
>   at junit.framework.TestCase.runBare(TestCase.java:141)
>   at junit.framework.TestResult$1.protect(TestResult.java:122)
>   at junit.framework.TestResult.runProtected(TestResult.java:142)
>   at junit.framework.TestResult.run(TestResult.java:125)
>   at junit.framework.TestCase.run(TestCase.java:129)
>   at junit.framework.TestSuite.runTest(TestSuite.j

[jira] [Updated] (AMQ-5303) MQTT Subscriptions on VirtualTopic prefixed destinations failed retained tests.

2015-02-03 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully updated AMQ-5303:

Fix Version/s: (was: 5.11.0)
   5.12.0

> MQTT Subscriptions on VirtualTopic prefixed destinations failed retained 
> tests.
> ---
>
> Key: AMQ-5303
> URL: https://issues.apache.org/jira/browse/AMQ-5303
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: MQTT
>Affects Versions: 5.10.0
>Reporter: Timothy Bish
> Fix For: 5.12.0
>
>
> For an MQTT Subscription on a Virtual Topic such as "VirtualTopic.FOO" the 
> retained message contract doesn't seem to be honoured 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMQ-4767) Extend ActiveMQ Camel component to support INDIVIDUAL_MESSAGE ack mode

2015-02-03 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully updated AMQ-4767:

Fix Version/s: (was: 5.11.0)
   5.12.0

> Extend ActiveMQ Camel component to support INDIVIDUAL_MESSAGE ack mode
> --
>
> Key: AMQ-4767
> URL: https://issues.apache.org/jira/browse/AMQ-4767
> Project: ActiveMQ
>  Issue Type: New Feature
>  Components: activemq-camel
>Affects Versions: 5.8.0
> Environment: All
>Reporter: Matt Pavlovich
>Assignee: Claus Ibsen
> Fix For: 5.12.0
>
>
> It would be really helpful to have a per-message acknowledgement that does not 
> acknowledge all previous messages; this would be an ActiveMQ-only 
> acknowledgement mode.
> It would be really handy for Camel to do things like 
>  from: activemq:queue:My.Queue?concurrentConsumers=5& 
> acknowledgementModeName=INDIVIDUAL_MESSAGE
>  to: jetty:http://somewebendpoint/
> and get quasi-transacted behavior.
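
As background for the request above, the JMS client already exposes an ActiveMQ-specific per-message mode via ActiveMQSession.INDIVIDUAL_ACKNOWLEDGE. A minimal sketch of plain JMS usage (broker URL and queue name are placeholders), separate from the Camel component wiring being requested:

{code:language=java}
import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.ActiveMQSession;

public class IndividualAckExample {
    public static void main(String[] args) throws Exception {
        // Placeholder broker URL and queue name.
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        // ActiveMQ-only mode: acknowledge() applies to this message only,
        // not to all messages received before it on the session.
        Session session = connection.createSession(false, ActiveMQSession.INDIVIDUAL_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(session.createQueue("My.Queue"));
        Message message = consumer.receive(5000);
        if (message != null) {
            // e.g. forward to an HTTP endpoint here; ack only on success
            message.acknowledge();
        }
        connection.close();
    }
}
{code}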



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMQ-4244) RA activation spec maxMessagesPerSessions property not honored.

2015-02-03 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully updated AMQ-4244:

Fix Version/s: (was: 5.11.0)
   5.12.0

> RA activation spec maxMessagesPerSessions property not honored.
> ---
>
> Key: AMQ-4244
> URL: https://issues.apache.org/jira/browse/AMQ-4244
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: JCA Container
>Affects Versions: 5.7.0
>Reporter: Hiram Chirino
> Fix For: 5.12.0
>
>
> It seems like it was supposed to control prefetching; right now it has no 
> effect.  See:
> https://github.com/apache/activemq/blob/trunk/activemq-ra/src/main/java/org/apache/activemq/ra/ActiveMQEndpointWorker.java#L170
> Connection settings are currently the only way to configure prefetching when 
> used in a resource adapter.
> It would be nice if the maxMessagesPerSessions setting could be renamed to 
> something simpler like prefetchSize, and, if set, it would override the 
> connection defaults.
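
Until the activation spec property works, prefetch can only be tuned through connection settings as noted above. A minimal sketch of both styles, assuming a local broker URL:

{code:language=java}
import org.apache.activemq.ActiveMQConnectionFactory;

public class PrefetchExample {
    public static void main(String[] args) throws Exception {
        // Option 1: set the prefetch policy on the connection factory.
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        factory.getPrefetchPolicy().setQueuePrefetch(1);

        // Option 2: encode it in the broker URL instead.
        ActiveMQConnectionFactory viaUrl = new ActiveMQConnectionFactory(
            "tcp://localhost:61616?jms.prefetchPolicy.queuePrefetch=1");

        factory.createConnection().close();
        viaUrl.createConnection().close();
    }
}
{code}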



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMQ-2040) Improve message browsing

2015-02-03 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-2040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully updated AMQ-2040:

Fix Version/s: (was: 5.11.0)
   5.12.0

> Improve message browsing
> 
>
> Key: AMQ-2040
> URL: https://issues.apache.org/jira/browse/AMQ-2040
> Project: ActiveMQ
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 5.2.0
>Reporter: Dejan Bosanac
>Assignee: Dejan Bosanac
> Fix For: 5.12.0
>
>
> Currently the browse() method returns at most 400 messages (or all of them if 
> there are fewer than that). Allow configuring the number of messages returned, 
> and allow fetching messages beyond the first page with a method such as 
> browse(int page).
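
For context, this is what browsing looks like from a JMS client today; the page size is capped broker-side, so the client cannot page past it. A minimal sketch with placeholder connection details:

{code:language=java}
import java.util.Enumeration;
import javax.jms.Connection;
import javax.jms.Queue;
import javax.jms.QueueBrowser;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class BrowseExample {
    public static void main(String[] args) throws Exception {
        Connection connection = new ActiveMQConnectionFactory("tcp://localhost:61616").createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue("TEST.QUEUE");
        // The broker limits how many messages the browser sees; a browse(int page)
        // style API, as proposed above, would let clients walk beyond that limit.
        QueueBrowser browser = session.createBrowser(queue);
        Enumeration<?> messages = browser.getEnumeration();
        int count = 0;
        while (messages.hasMoreElements()) {
            messages.nextElement();
            count++;
        }
        System.out.println("browsed " + count + " messages");
        connection.close();
    }
}
{code}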



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMQ-4777) ActiveMQ broker silently fails to start in Karaf

2015-02-03 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully updated AMQ-4777:

Fix Version/s: (was: 5.11.0)
   5.12.0

> ActiveMQ broker silently fails to start in Karaf
> 
>
> Key: AMQ-4777
> URL: https://issues.apache.org/jira/browse/AMQ-4777
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: OSGi/Karaf
>Affects Versions: 5.8.0
> Environment: Kubuntu 64-bit, Oracle JDK 7u40, Karaf 2.3.3
>Reporter: Amichai Rothman
>Assignee: Jean-Baptiste Onofré
> Fix For: 5.12.0
>
>
> I have an application, deployed into Karaf. I install the activemq-broker 
> feature, and one of the app bundles uses ActiveMQConnectionFactory to get a 
> javax.jms.Connection instance and then uses the JMS API only (no other 
> ActiveMQ-specific APIs in use). I'll note that this bundle depends on another 
> bundle which uses the JMS API as well (no ActiveMQ imports there at all). The 
> application also depends on other karaf features such as cxf-dosgi.
> ActiveMQ fails to start in a couple of ways, depending on the order of 
> installation of the features and app:
> At first I got a LinkageError:
> java.lang.LinkageError: loader constraint violation: when resolving method 
> "org.apache.activemq.ActiveMQConnectionFactory.createConne
> ction()Ljavax/jms/Connection;" the class loader (instance of 
> org/eclipse/osgi/internal/baseadaptor/DefaultClassLoader) of the current
>  class, com/myprojectgroup/bus/activemq/ActiveMQSession, and the class loader 
> (instance of org/eclipse/osgi/internal/baseadaptor/D
> efaultClassLoader) for resolved class, 
> org/apache/activemq/ActiveMQConnectionFactory, have different Class objects 
> for the type Ljava
> x/jms/Connection; used in the signature
> at 
> com.myprojectgroup.bus.activemq.ActiveMQSession.createConnection(ActiveMQSession.java:38)
> at 
> com.myprojectgroup.messaging.jms.Session.open(Session.java:70)[68:com.myprojectgroup.messaging:0.1.0.SNAPSHOT]
> at 
> com.myprojectgroup.messaging.jms.Session$1.run(Session.java:108)[68:com.myprojectgroup.messaging:0.1.0.SNAPSHOT]
> The only bundle that exports the javax.jms package is 
> org.apache.geronimo.specs.geronimo-jms_1.1_spec, though after further 
> investigation I found that the activemq-web-console bundle does have another 
> copy of it internally, which I think might be the cause of the conflict (it 
> does not import the package in the manifest, so I can't just remove the 
> package from the jar since it won't find the package exported by the 
> geronimo-jms bundle without a corresponding import declaration). The 
> activemq-osgi bundle has a DynamicImport-Package: *, which may further 
> complicate things. If it also had an explicit Import-Package directive for 
> the statically-linked classes it uses such as javax.jms ones, perhaps it 
> would go through the regular bundle classloading mechanism and avoid the 
> problem (I think explicit imports take precedence over dynamic imports, though 
> I'm not sure).
> Next, I played around with reordering the installation, and got to another 
> failure mode where there is no exception at all or any error in the logs, but 
> the broker simply fails to start silently. Perhaps this is also caused by 
> classloader issues but they are just occurring somewhere within ActiveMQ and 
> being silently ignored. This mode of failure I've managed to recreate easily:
> On a fresh stock installation of Karaf 2.3.3,  add two feature urls:
> features:addurl mvn:org.apache.activemq/activemq-karaf/5.8.0/xml/features 
> mvn:org.apache.cxf.dosgi/cxf-dosgi/1.6-SNAPSHOT/xml/features
> (you might need to add the apache snapshot repo to karaf for it to find the 
> dosgi snapshot feature)
> And then install them in this order:
> features:install -v cxf-dosgi-discovery-distributed activemq-broker
> The broker will not be started, although the logs will have no errors.
> Strangely, if you now uninstall the activemq-web-console bundle (not feature) 
> and restart karaf, the broker will start ok. Also if you install only 
> activemq-broker and not dosgi, it will start ok.
> To simplify, I found that instead of the dosgi feature it's enough to install 
> and start the mvn:org.osgi/org.osgi.compendium/4.3.1 bundle before installing 
> activemq-broker to make it fail - this is from looking at the innards of the 
> dosgi feature, however if I remove this bundle from the feature the broker 
> still fails to start, so it's not the only bundle causing problems, but just 
> an example.
> The only workaround I've found so far is to install everything, then 
> uninstall the activemq-web-console bundle and restart - then everything works 
> as it should. Strangely, not installing it in the first place doesn't work 
> either - the broker won'

[jira] [Updated] (AMQ-5009) Switch activemq-all from shaded jar to pom dependency aggregator

2015-02-03 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully updated AMQ-5009:

Fix Version/s: (was: 5.11.0)
   5.12.0

> Switch activemq-all from shaded jar to pom dependency aggregator
> 
>
> Key: AMQ-5009
> URL: https://issues.apache.org/jira/browse/AMQ-5009
> Project: ActiveMQ
>  Issue Type: Improvement
>Affects Versions: 5.9.0
>Reporter: Michael O'Cleirigh
>  Labels: patch
> Fix For: 5.12.0
>
> Attachments: 
> on-trunk-AMQ-5009-switch-to-pom-based-dependency-aggregation-.patch, 
> on-v5.9.0-AMQ-5009-switch-to-pom-based-dependency-aggregation-.patch
>
>
> I encountered an issue when adding a dependency on activemq-all into my 
> project; it caused a collision with our existing dependency on *slf4j-log4j12*
> {code}
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/root/.m2/repository/org/slf4j/slf4j-log4j12/1.6.2/slf4j-log4j12-1.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/root/.m2/repository/org/apache/activemq/activemq-all/5.9.0/activemq-all-5.9.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> {code}
> The main issue is that because the activemq-all jar is shaded there is no way 
> for us to use a Maven dependency *exclusion* to keep out the 
> *slf4j-log4j12* artifact contributed by activemq-all.
> There is a straightforward fix to this issue.  
> Switch from the *maven-shade-plugin* to a pom dependency that 
> aggregates the activemq dependencies in a Maven-controllable way. 
> By making the packaging of the *activemq-all* artifact pom, only the 
> transitive dependencies that it declares are included when it is pulled into a 
> project; the pom artifact itself is not.
> I've tested that this works in our project that depends on activemq-all 5.9.0 
> and also have a patch prepared against the current 5.10-SNAPSHOT trunk.
> The only difference for the consumer of the activemq-all artifact is that they 
> have to specify the dependency *type* as pom.
> For example:
> {code}
> <dependency>
>   <groupId>org.apache.activemq</groupId>
>   <artifactId>activemq-all</artifactId>
>   <version>5.9.0</version>
>   <type>pom</type>
> </dependency>
> {code}
> The Apache Wicket project uses the same approach, see 
> [here|http://repo1.maven.org/maven2/org/apache/wicket/wicket/6.13.0/wicket-6.13.0.pom]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMQ-4155) Can't start Blueprint broker in Apache Karaf without having Spring JARs

2015-02-03 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully updated AMQ-4155:

Fix Version/s: (was: 5.11.0)
   5.12.0

> Can't start Blueprint broker in Apache Karaf without having Spring JARs
> ---
>
> Key: AMQ-4155
> URL: https://issues.apache.org/jira/browse/AMQ-4155
> Project: ActiveMQ
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 5.8.0
> Environment: Apache Karaf 2.3.0
>Reporter: Victor Antonovich
> Fix For: 5.12.0
>
>
> ActiveMQ 5.8-SNAPSHOT (compiled from trunk) is unable to start blueprint 
> broker:
> {code}
> karaf@root> features:addurl 
> mvn:org.apache.activemq/activemq-karaf/5.8-SNAPSHOT/xml/features
> karaf@root> features:install activemq activemq-blueprint
> karaf@root> activemq:create-broker -t blueprint
> Creating file: @|green 
> /usr/java/apache-karaf-2.3.0/deploy/localhost-broker.xml|
> Default ActiveMQ Broker (localhost) configuration file created at: 
> /usr/java/apache-karaf-2.3.0/deploy/localhost-broker.xml
> Please review the configuration and modify to suite your needs.  
> 0
> karaf@root> la | grep localhost-broker
> [ 105] [Active ] [Failure ] [   80] localhost-broker.xml (0.0.0)
> {code}
> Blueprint component doesn't start due to the {{NoClassDefFoundError}}:
> {code}
> 2012-11-02 16:26:24,702 | ERROR | rint Extender: 3 | BlueprintContainerImpl   
> | container.BlueprintContainerImpl  375 | 7 - 
> org.apache.aries.blueprint.core - 1.0.1 | Unable to start blueprint container 
> for bundle localhost-broker.xml
> org.osgi.service.blueprint.container.ComponentDefinitionException: Error when 
> instanciating bean .component-2 of class class 
> org.apache.activemq.xbean.XBeanBrokerService
>   at 
> org.apache.aries.blueprint.container.BeanRecipe.getInstance(BeanRecipe.java:333)[7:org.apache.aries.blueprint.core:1.0.1]
>   at 
> org.apache.aries.blueprint.container.BeanRecipe.internalCreate2(BeanRecipe.java:806)[7:org.apache.aries.blueprint.core:1.0.1]
>   at 
> org.apache.aries.blueprint.container.BeanRecipe.internalCreate(BeanRecipe.java:787)[7:org.apache.aries.blueprint.core:1.0.1]
>   at 
> org.apache.aries.blueprint.di.AbstractRecipe$1.call(AbstractRecipe.java:79)[7:org.apache.aries.blueprint.core:1.0.1]
>   at 
> java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)[:1.6.0_37]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:138)[:1.6.0_37]
>   at 
> org.apache.aries.blueprint.di.AbstractRecipe.create(AbstractRecipe.java:88)[7:org.apache.aries.blueprint.core:1.0.1]
>   at 
> org.apache.aries.blueprint.container.BlueprintRepository.createInstances(BlueprintRepository.java:245)[7:org.apache.aries.blueprint.core:1.0.1]
>   at 
> org.apache.aries.blueprint.container.BlueprintRepository.createAll(BlueprintRepository.java:183)[7:org.apache.aries.blueprint.core:1.0.1]
>   at 
> org.apache.aries.blueprint.container.BlueprintContainerImpl.instantiateEagerComponents(BlueprintContainerImpl.java:646)[7:org.apache.aries.blueprint.core:1.0.1]
>   at 
> org.apache.aries.blueprint.container.BlueprintContainerImpl.doRun(BlueprintContainerImpl.java:353)[7:org.apache.aries.blueprint.core:1.0.1]
>   at 
> org.apache.aries.blueprint.container.BlueprintContainerImpl.run(BlueprintContainerImpl.java:252)[7:org.apache.aries.blueprint.core:1.0.1]
>   at 
> org.apache.aries.blueprint.utils.threading.impl.DiscardableRunnable.run(DiscardableRunnable.java:48)[7:org.apache.aries.blueprint.core:1.0.1]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)[:1.6.0_37]
>   at 
> java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)[:1.6.0_37]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:138)[:1.6.0_37]
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:98)[:1.6.0_37]
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:206)[:1.6.0_37]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)[:1.6.0_37]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)[:1.6.0_37]
>   at java.lang.Thread.run(Thread.java:662)[:1.6.0_37]
> Caused by: java.lang.NoClassDefFoundError: 
> org/springframework/beans/BeansException
>   at 
> org.apache.activemq.xbean.XBeanBrokerService.(XBeanBrokerService.java:47)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)[:1.6.0_37]
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)[:1.6.0_37]
>   at 
> sun.reflec

[jira] [Updated] (AMQ-5137) make networkConnector decreaseNetworkConsumerPriority="true" the default

2015-02-03 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully updated AMQ-5137:

Fix Version/s: (was: 5.11.0)
   5.12.0

> make networkConnector decreaseNetworkConsumerPriority="true" the default
> 
>
> Key: AMQ-5137
> URL: https://issues.apache.org/jira/browse/AMQ-5137
> Project: ActiveMQ
>  Issue Type: Improvement
>Affects Versions: 5.9.0
>Reporter: Gary Tully
>  Labels: mesh, networkConnectors
> Fix For: 5.12.0
>
>
> It makes sense to bias local consumers, to avoid hops where possible.
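
The flag itself already exists; this issue only proposes changing its default. A minimal sketch of enabling it explicitly on a programmatically configured broker (the remote address is a placeholder):

{code:language=java}
import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.network.NetworkConnector;

public class NetworkPriorityExample {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        broker.setBrokerName("local");
        // Placeholder remote broker address.
        NetworkConnector nc = broker.addNetworkConnector("static:(tcp://remote-host:61616)");
        // Lower the priority of networked consumers so local consumers are preferred,
        // avoiding unnecessary hops across the bridge.
        nc.setDecreaseNetworkConsumerPriority(true);
        broker.start();
    }
}
{code}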



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMQ-4540) NetworkBridge - don't wait for ever for demandSubscription pending send responses on remove

2015-02-03 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully updated AMQ-4540:

Fix Version/s: (was: 5.11.0)
   5.12.0

> NetworkBridge - don't wait for ever for demandSubscription pending send 
> responses on remove
> ---
>
> Key: AMQ-4540
> URL: https://issues.apache.org/jira/browse/AMQ-4540
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 5.8.0
>Reporter: Gary Tully
> Fix For: 5.12.0
>
> Attachments: NetworkBridgeRemoveInflightTest.java
>
>
> In a network bridge, a demandSubscription tracks outstanding asyncRequests and 
> waits for them to complete on removal, so that the messages can be acked 
> correctly when the send completes.
> If the send is blocked by producer flow control on the remote broker, it may 
> not return for some time, which blocks other removals, leaving messages stuck 
> inflight to networked subscriptions.
> The wait ensures that a message send will not be a duplicate, but blocking 
> forever does not make sense, especially considering that removes are 
> serialised.
> We need some OpenWire command that can cancel pending sends to sort out this 
> case, but even then we need to time out at some stage in case the other end 
> cannot respond.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMQ-5321) activeMQ levelDB

2015-02-03 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully updated AMQ-5321:

Fix Version/s: (was: 5.11.0)
   5.12.0

> activeMQ  levelDB
> -
>
> Key: AMQ-5321
> URL: https://issues.apache.org/jira/browse/AMQ-5321
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: activemq-leveldb-store
>Affects Versions: 5.10.0
> Environment: windows 7
>Reporter: Kevin
> Fix For: 5.12.0
>
>
> https://issues.apache.org/jira/browse/AMQ-5257, which was duplicated by 
> https://issues.apache.org/jira/browse/AMQ-5105, was claimed to be fixed in 
> 5.11.0, but when I used the unreleased 5.11.0 binaries from 
> https://repository.apache.org/content/repositories/snapshots/org/apache/activemq/apache-activemq/5.11-SNAPSHOT/
> it is not fixed yet.
> here is what I got:
> PSHOT\bin\win32>activemq.bat
> wrapper  | --> Wrapper Started as Console
> wrapper  | Launching a JVM...
> jvm 1| Wrapper (Version 3.2.3) http://wrapper.tanukisoftware.org
> jvm 1|   Copyright 1999-2006 Tanuki Software, Inc.  All Rights Reserved.
> jvm 1|
> jvm 1| Java Runtime: Oracle Corporation 1.7.0_67 C:\Program Files 
> (x86)\Java
> \jre7
> jvm 1|   Heap sizes: current=15872k  free=12305k  max=1013632k
> jvm 1| JVM args: -Dactivemq.home=../.. -Dactivemq.base=../.. 
> -Djavax.net
> .ssl.keyStorePassword=password -Djavax.net.ssl.trustStorePassword=password 
> -Djav
> ax.net.ssl.keyStore=../../conf/broker.ks 
> -Djavax.net.ssl.trustStore=../../conf/b
> roker.ts -Dcom.sun.management.jmxremote 
> -Dorg.apache.activemq.UseDedicatedTaskRu
> nner=true -Djava.util.logging.config.file=logging.properties 
> -Dactivemq.conf=../
> ../conf -Dactivemq.data=../../data 
> -Djava.security.auth.login.config=../../conf/
> login.config -Xmx1024m -Djava.library.path=../../bin/win32 
> -Dwrapper.key=7JJvTVF
> 5VnXQi50z -Dwrapper.port=32000 -Dwrapper.jvm.port.min=31000 
> -Dwrapper.jvm.port.m
> ax=31999 -Dwrapper.pid=6236 -Dwrapper.version=3.2.3 
> -Dwrapper.native_library=wra
> pper -Dwrapper.cpu.timeout=10 -Dwrapper.jvmid=1
> jvm 1| Extensions classpath:
> jvm 1|   
> [..\..\lib,..\..\lib\camel,..\..\lib\optional,..\..\lib\web,..\..\l
> ib\extra]
> jvm 1| ACTIVEMQ_HOME: ..\..
> jvm 1| ACTIVEMQ_BASE: ..\..
> jvm 1| ACTIVEMQ_CONF: ..\..\conf
> jvm 1| ACTIVEMQ_DATA: ..\..\data
> jvm 1| Loading message broker from: xbean:activemq.xml
> jvm 1|  INFO | Refreshing 
> org.apache.activemq.xbean.XBeanBrokerFactory$1@193
> c227: startup date [Wed Aug 13 10:00:30 EDT 2014]; root of context hierarchy
> jvm 1|  INFO | Using Persistence Adapter: Replicated 
> LevelDB[C:\ActiveMQ\apa
> che-activemq-5.11-20140808.003936-58-bin\apache-activemq-5.11-SNAPSHOT\bin\win32
> \..\..\data\leveldb, bosvsvm01:2181//activemq/leveldb-stores]
> jvm 1|  INFO | Starting StateChangeDispatcher
> jvm 1|  INFO | Client environment:zookeeper.version=3.4.5-1392090, built 
> on
> 09/30/2012 17:52 GMT
> jvm 1|  INFO | Client environment:host.name=WMT-VS009.bost.local
> jvm 1|  INFO | Client environment:java.version=1.7.0_67
> jvm 1|  INFO | Client environment:java.vendor=Oracle Corporation
> jvm 1|  INFO | Client environment:java.home=C:\Program Files 
> (x86)\Java\jre7
> jvm 1|  INFO | Client 
> environment:java.class.path=../../bin/wrapper.jar;../.
> ./bin/activemq.jar
> jvm 1|  INFO | Client environment:java.library.path=../../bin/win32
> jvm 1|  INFO | Client 
> environment:java.io.tmpdir=C:\Users\george\AppData\Lo
> cal\Temp\
> jvm 1|  INFO | Client environment:java.compiler=
> jvm 1|  INFO | Client environment:os.name=Windows 7
> jvm 1|  INFO | Client environment:os.arch=x86
> jvm 1|  INFO | Client environment:os.version=6.1
> jvm 1|  INFO | Client environment:user.name=george
> jvm 1|  INFO | Client environment:user.home=C:\Users\george
> jvm 1|  INFO | Client 
> environment:user.dir=C:\ActiveMQ\apache-activemq-5.11-
> 20140808.003936-58-bin\apache-activemq-5.11-SNAPSHOT\bin\win32
> jvm 1|  INFO | Initiating client connection, connectString=bosvsvm01:2181
>  sessionTimeout=2000 
> watcher=org.apache.activemq.leveldb.replicated.groups.ZKCli
> ent@1fbdfd0
> jvm 1|  WARN | SASL configuration failed: 
> javax.security.auth.login.LoginExc
> eption: No JAAS configuration section named 'Client' was found in specified 
> JAAS
>  configuration file: '../../conf/login.config'. Will continue connection to 
> Zook
> eeper server without SASL authentication, if Zookeeper server allows it.
> jvm 1|  INFO | Opening socket connection to server 
> brlvsvolap01.bluecrest.lo
> cal/10.42.0.109:2181
> jvm 1|  WARN | unprocessed event state: AuthFailed
> jvm 1|  INFO | Socket connection established to bosvsvm01:2181.bos.local
> /10.42.0.109

[jira] [Updated] (AMQ-5470) AMQP - delayed authentication from SASL connect leads to race on client end.

2015-02-03 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully updated AMQ-5470:

Fix Version/s: (was: 5.11.0)
   5.12.0

> AMQP - delayed authentication from SASL connect leads to race on client end.
> 
>
> Key: AMQ-5470
> URL: https://issues.apache.org/jira/browse/AMQ-5470
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: AMQP
>Affects Versions: 5.10.0
>Reporter: Timothy Bish
>Assignee: Timothy Bish
> Fix For: 5.12.0
>
> Attachments: AMQ-5470.patch
>
>
> We currently delay checking the credentials provided during the SASL 
> negotiation and also checking if anonymous client connects are legal until 
> after opening the proton connection and then we send an error condition 
> indicating the failure and close the connection.  This can lead to a race on 
> the client end where, for a brief moment in time, it looks as if the connection 
> succeeded.  During that time the client might attempt some further action and 
> then fail in an odd way as the connection is closed under it.  
> We should look into authenticating immediately and failing the SASL handshake 
> if not authorized.  We should also consider whether we want to support raw 
> connections with a SASL handshake as well since without at least a SASL 
> ANONYMOUS handshake we can get back into this issue unless we just forcibly 
> close the socket on a client if we don't support anonymous connections.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)



[jira] [Updated] (AMQ-2870) Messages that don't match a message selector for a durable subscription are stored causing the persistent store to eventually fill up

2015-02-03 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-2870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully updated AMQ-2870:

Fix Version/s: (was: 5.11.0)
   5.12.0

> Messages that don't match a message selector for a durable subscription are 
> stored causing the persistent store to eventually fill up
> -
>
> Key: AMQ-2870
> URL: https://issues.apache.org/jira/browse/AMQ-2870
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 5.3.2
>Reporter: Gary Tully
>Assignee: Gary Tully
> Fix For: 5.4.1, 5.12.0
>
>
> With a durable sub, ack entries are created on a message send for each 
> durable sub, but if the durable sub does not match the message due to a 
> selector, the message remains unacked and pending, so it can fill up the 
> store. Any unmatched message should be acked immediately.
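
To make the scenario concrete, a minimal sketch of the kind of selective durable subscription affected (broker URL, topic and selector are placeholders); with the fix, messages that do not match the selector are acked immediately instead of accumulating in the store:

{code:language=java}
import javax.jms.Connection;
import javax.jms.Session;
import javax.jms.Topic;
import javax.jms.TopicSubscriber;
import org.apache.activemq.ActiveMQConnectionFactory;

public class SelectiveDurableSubExample {
    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.setClientID("durable-client"); // required for durable subscriptions
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Topic topic = session.createTopic("PRICES");
        // Only messages with region = 'EU' match; messages sent to PRICES that do
        // not match should not remain pending for this durable subscription.
        TopicSubscriber subscriber = session.createDurableSubscriber(topic, "eu-sub", "region = 'EU'", false);
        System.out.println(subscriber.receive(1000));
        connection.close();
    }
}
{code}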



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMQ-3598) Unprivileged users can receive messages from a protected topic when using wildcards in destination

2015-02-03 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-3598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully updated AMQ-3598:

Fix Version/s: (was: 5.11.0)
   5.12.0

> Unprivileged users can receive messages from a protected topic when using 
> wildcards in destination
> --
>
> Key: AMQ-3598
> URL: https://issues.apache.org/jira/browse/AMQ-3598
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 5.5.0, 5.5.1
> Environment: OS: Mac OS X 10.6.8
> JRE/JDK: 1.6.0_29
> ActiveMQ: 5.5.0
>Reporter: Thorsten Panitz
>  Labels: authorization, security
> Fix For: 5.12.0
>
> Attachments: AMQ-3598.patch, ActiveMQAuthorizationBug.zip
>
>
> A consumer can receive messages from protected queues/topics if he uses a 
> Destination which contains a wildcard as described 
> [here|http://activemq.apache.org/wildcards.html]:
> {code:language=java}
> Destination queue = new ActiveMQQueue("messages.>");
> Destination topic = new ActiveMQTopic(">");
> {code}
> We are using the default authentication/authorization system as described in 
> [Security 
> Authentication/Authorization|http://activemq.apache.org/security.html#Security-Authorization]
>  with the following configuration:
> {code:title=broker.xml|language=xml}
> 
> 
> 
>username="admin"
>   password="admin"
>   groups="admins"/>
>username="user"
>   password="user"
>   groups="users"/>
> 
> 
> 
> 
> 
> 
>  read="admins"
> write="admins"
> admin="admins"/>
>  read="admins"
> write="admins"
> admin="admins"/>
>  read="admins, users"
> write="admins, users"
> admin="admins, users"/>
>  read="admins, users"
> write="admins, users"
> admin="admins, users"/>
> 
> 
> 
> 
> 
> {code}
> As expected, clients connecting as "user" to the topic "messages.cat2" get 
> an exception ("User user is not authorized to read from: 
> topic://messages.cat2"). Surprisingly, "user" can receive messages from topic 
> "messages.cat2" if it creates a consumer with the destination "messages.>":
> {code:title=consumer.java|language=java}
> final Destination destination = new ActiveMQTopic("messages.>");
> final Connection conn = new ActiveMQConnectionFactory("user", "user", 
> BROKER_URL).createConnection();
> final Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
> final MessageConsumer consumer = session.createConsumer(destination);
> conn.start();
> closure.run();
> final Message message = consumer.receive(TIMEOUT);
> session.close();
> conn.close(); 
> {code}
> IMHO this behaviour is a security problem as an unprivileged user can receive 
> messages from a protected topic or queue!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMQ-4990) Add support for the changes in MQTT 3.1.1

2015-02-03 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully updated AMQ-4990:

Fix Version/s: (was: 5.11.0)
   5.12.0

> Add support for the changes in MQTT 3.1.1
> -
>
> Key: AMQ-4990
> URL: https://issues.apache.org/jira/browse/AMQ-4990
> Project: ActiveMQ
>  Issue Type: New Feature
>Reporter: Hiram Chirino
>Assignee: Hiram Chirino
> Fix For: 5.12.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMQ-4890) Allow configuration of the message cursor for temporary queues

2015-02-03 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully updated AMQ-4890:

Fix Version/s: (was: 5.11.0)
   5.12.0

> Allow configuration of the message cursor for temporary queues
> --
>
> Key: AMQ-4890
> URL: https://issues.apache.org/jira/browse/AMQ-4890
> Project: ActiveMQ
>  Issue Type: Improvement
>  Components: Broker
>Reporter: Gary Tully
> Fix For: 5.12.0
>
>
> This one is an old nut:
> http://mail-archives.apache.org/mod_mbox/activemq-users/201208.mbox/%3ccah+vqmnec+ktoh5j2hvp9hyd34hao1u7pggp38pn0kbks11...@mail.gmail.com%3E
> but it makes sense to allow the FilePendingMessageCursor to be configured 
> via a policy entry so that temp destinations are not limited by memory.
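
A minimal sketch of the kind of policy wiring being asked for, assuming the existing FilePendingQueueMessageStoragePolicy; today this takes effect for regular destinations, and the issue is about letting it cover temp destinations as well:

{code:language=java}
import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.broker.region.policy.FilePendingQueueMessageStoragePolicy;
import org.apache.activemq.broker.region.policy.PolicyEntry;
import org.apache.activemq.broker.region.policy.PolicyMap;

public class FileCursorPolicyExample {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();

        PolicyEntry entry = new PolicyEntry();
        // Spool pending messages to a file-based cursor instead of holding them in memory.
        entry.setPendingQueuePolicy(new FilePendingQueueMessageStoragePolicy());

        PolicyMap policyMap = new PolicyMap();
        policyMap.setDefaultEntry(entry); // per this issue, temp destinations should be coverable too
        broker.setDestinationPolicy(policyMap);
        broker.start();
    }
}
{code}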



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMQ-5458) MBean to help testing replicated levelDB

2015-02-03 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully updated AMQ-5458:

Fix Version/s: (was: 5.11.0)
   5.12.0

> MBean to help testing replicated levelDB
> 
>
> Key: AMQ-5458
> URL: https://issues.apache.org/jira/browse/AMQ-5458
> Project: ActiveMQ
>  Issue Type: New Feature
>Reporter: Hiram Chirino
>Assignee: Hiram Chirino
> Fix For: 5.12.0
>
>
> It would be nice if, when you set a system property like 
> 'org.apache.activemq.leveldb.test=true', you then got an MBean for leveldb 
> stores that allows you to suspend/resume calls around the journal writes, 
> deletes and force operations, so you can more easily write tests that validate 
> consistency and recovery.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMQ-4761) Activemq console commands - Better human readable output

2015-02-03 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully updated AMQ-4761:

Fix Version/s: (was: 5.11.0)
   5.12.0

> Activemq console commands - Better human readable output
> 
>
> Key: AMQ-4761
> URL: https://issues.apache.org/jira/browse/AMQ-4761
> Project: ActiveMQ
>  Issue Type: Improvement
>  Components: OSGi/Karaf
>Reporter: Claus Ibsen
>Assignee: Jean-Baptiste Onofré
> Fix For: 5.12.0
>
>
> The activemq commands, which you can use from the command line and in the 
> Karaf shell, write their output to the console.
> However, the output is not well formatted or sorted, so it is a bit hard for 
> humans to read.
> We should improve this to make the output easier to read, by grouping related 
> information and presenting tabular data where it makes sense.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMQ-4232) Kahadb does not allow to obtain count of used/free bytes in the storage

2015-02-03 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully updated AMQ-4232:

Fix Version/s: (was: 5.11.0)
   5.12.0

> Kahadb does not allow to obtain count of used/free bytes in the storage
> ---
>
> Key: AMQ-4232
> URL: https://issues.apache.org/jira/browse/AMQ-4232
> Project: ActiveMQ
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 5.4.2, 5.7.0
> Environment: fedora15, ubuntu12.10; debian; probably independent
>Reporter: Tomáš Martinec
>  Labels: hangs, kahadb, usage
> Fix For: 5.12.0
>
> Attachments: extending.kahadb.diff
>
>
> The full story is on the user forum under the title:
> "Activemq 5.4.2 hangs when the temp disk usage is used"
> TempUsage.retrieveUsage() always returns the size of the allocated storage 
> instead of the actual usage. I could not see which KahaDB methods return 
> the needed information, so I extended the KahaDB interface.
> Note that this issue may cause other problems such as AMQ-4136.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMQ-5463) Enable client to set ContainerId so it can be discovered if running in Docker for example

2015-02-03 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully updated AMQ-5463:

Fix Version/s: (was: 5.11.0)
   5.12.0

> Enable client to set ContainerId so it can be discovered if running in Docker 
> for example
> ---
>
> Key: AMQ-5463
> URL: https://issues.apache.org/jira/browse/AMQ-5463
> Project: ActiveMQ
>  Issue Type: New Feature
>  Components: Broker
>Reporter: Rob Davies
>Assignee: Rob Davies
> Fix For: 5.12.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMQ-5124) Exception logged on startup: jolokia-agent: Cannot start discovery multicast handler

2015-02-03 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully updated AMQ-5124:

Fix Version/s: (was: 5.11.0)
   5.12.0

> Exception logged on startup: jolokia-agent: Cannot start discovery multicast 
> handler
> 
>
> Key: AMQ-5124
> URL: https://issues.apache.org/jira/browse/AMQ-5124
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 5.11.0
>Reporter: Hiram Chirino
> Fix For: 5.12.0
>
>
> jolokia-agent is barfing an ugly exception on the console on activemq 
> startup: 
>  WARN | jolokia-agent: Cannot start discovery multicast handler: 
> java.net.SocketException: Can't assign requested address
> java.net.SocketException: Can't assign requested address
>   at java.net.PlainDatagramSocketImpl.join(Native Method)
>   at 
> java.net.AbstractPlainDatagramSocketImpl.joinGroup(AbstractPlainDatagramSocketImpl.java:202)
>   at java.net.MulticastSocket.joinGroup(MulticastSocket.java:402)
>   at 
> org.jolokia.discovery.MulticastUtil.joinMcGroupsOnAllNetworkInterfaces(MulticastUtil.java:136)
>   at 
> org.jolokia.discovery.MulticastUtil.newMulticastSocket(MulticastUtil.java:38)
>   at 
> org.jolokia.discovery.MulticastSocketListenerThread.(MulticastSocketListenerThread.java:60)
>   at 
> org.jolokia.discovery.DiscoveryMulticastResponder.start(DiscoveryMulticastResponder.java:75)
>   at 
> org.jolokia.http.AgentServlet.initDiscoveryMulticast(AgentServlet.java:176)
>   at org.jolokia.http.AgentServlet.init(AgentServlet.java:162)
>   at 
> org.eclipse.jetty.servlet.ServletHolder.initServlet(ServletHolder.java:477)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMQ-5471) Configuration server for Network of Brokers

2015-02-03 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully updated AMQ-5471:

Fix Version/s: (was: 5.11.0)
   5.12.0

> Configuration server for Network of Brokers
> ---
>
> Key: AMQ-5471
> URL: https://issues.apache.org/jira/browse/AMQ-5471
> Project: ActiveMQ
>  Issue Type: New Feature
>  Components: Broker
>Reporter: Hadrian Zbarcea
>Assignee: Hadrian Zbarcea
> Fix For: 5.12.0
>
>
> Brokers start from an xbean configuration, which is usually found in ./conf 
> or other places on the local disk. In a NOB topology, however, it is hard to 
> distribute the configuration files and maintain them across brokers, 
> especially with a growing or elastically changing number of brokers.
> ActiveMQ already supports reading the xbean configuration from an http:// 
> URL, so it would be very helpful to have a REST service that manages the 
> configuration for all the brokers. 
> I started such a service on [github|https://github.com/hzbarcea/activemq-nob] 
> but plan to contribute it to ASF once it's in a decent shape, in a couple of 
> weeks or so.
> The service uses the local file system and appropriate conventions to store 
> all the relevant broker configuration resources (e.g. could be generated with 
> ./bin/activemq create  minus certificates probably). I plan to 
> enhance it later to support a git repository for the configuration, so that 
> it's versioned, so that an operator could roll out a new NOB topology, or 
> roll back to the previous configuration.
> Feedback appreciated.
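
For reference, the capability the proposal builds on: a broker can already bootstrap from an xbean configuration fetched over HTTP. A minimal sketch; the config-server URL below is hypothetical, and xbean/Spring support must be on the classpath:

{code:language=java}
import java.net.URI;
import org.apache.activemq.broker.BrokerFactory;
import org.apache.activemq.broker.BrokerService;

public class RemoteConfigExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical URL of a configuration service serving per-broker xbean XML.
        URI configUri = new URI("xbean:http://config-server:8080/nob/broker-a.xml");
        BrokerService broker = BrokerFactory.createBroker(configUri);
        broker.start();
        broker.waitUntilStopped();
    }
}
{code}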



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMQ-5393) Update disk based limits periodically

2015-02-03 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully updated AMQ-5393:

Fix Version/s: (was: 5.11.0)
   5.12.0

> Update disk based limits periodically
> -
>
> Key: AMQ-5393
> URL: https://issues.apache.org/jira/browse/AMQ-5393
> Project: ActiveMQ
>  Issue Type: New Feature
>Affects Versions: 5.10.0
>Reporter: Dejan Bosanac
> Fix For: 5.12.0
>
>
> At the moment, we set store and temp limits at broker startup based on the 
> configuration and available space. It's possible that other artefacts such as 
> logs can reduce the available disk space so that our limits no longer have any 
> effect. It'd be good to periodically check the usable space left and adjust 
> the limits accordingly.
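
For context, these limits are currently evaluated once at startup; a minimal sketch of setting them programmatically (the values are placeholders). The proposal is to re-check usable disk space on a schedule and adjust them:

{code:language=java}
import org.apache.activemq.broker.BrokerService;

public class UsageLimitsExample {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        // Placeholder limits, evaluated only once at startup today.
        broker.getSystemUsage().getStoreUsage().setLimit(8L * 1024 * 1024 * 1024); // 8 GB for the store
        broker.getSystemUsage().getTempUsage().setLimit(2L * 1024 * 1024 * 1024);  // 2 GB for temp files
        broker.start();
    }
}
{code}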



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (AMQ-5542) KahaDB data files containing acknowledgements are deleted during cleanup

2015-02-02 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully resolved AMQ-5542.
-
   Resolution: Fixed
Fix Version/s: 5.12.0

Test and fix applied (reverted the change from AMQ-2736), with thanks. All tests 
look good. The copy looks plain wrong to me.

> KahaDB data files containing acknowledgements are deleted during cleanup
> 
>
> Key: AMQ-5542
> URL: https://issues.apache.org/jira/browse/AMQ-5542
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Message Store
>Affects Versions: 5.10.0, 5.10.1
>Reporter: Sergiy Barlabanov
>Assignee: Gary Tully
> Fix For: 5.12.0
>
> Attachments: AMQ-5542.patch, AdjustedAMQ2832Test.patch
>
>
> AMQ-2832 was not fixed cleanly.
> The commit dd68c61e65f24b7dc498b36e34960a4bc46ded4b by Gary from 8.10.2010 
> introduced a problem by deleting too many files.
> Scenarios we are facing currently in production:
> Data file #1 contains unconsumed messages sitting in a DLQ. So this file is 
> not a cleanup candidate.
> The next file #2 contains acks of some messages from file #1. This file is 
> not a cleanup candidate (because of ackMessageFileMap logic).
> The next file #3 contains acks of some messages from file #2. And this file 
> is deleted during the cleanup procedure. So on Broker restart all messages 
> from #2, whose acks were from the deleted file #3, are replayed!
> The reason is the gcCandidates variable, which is a copy of gcCandidateSet (see 
> MessageDatabase#checkpointUpdate at the end of the method - 
> org/apache/activemq/store/kahadb/MessageDatabase.java:1659 on the 5.10.0 tag). 
> So when a candidate is deleted from gcCandidateSet 
> (org/apache/activemq/store/kahadb/MessageDatabase.java:1668 on the 5.10.0 tag), 
> gcCandidates still contains that candidate and the comparison at 
> org/apache/activemq/store/kahadb/MessageDatabase.java:1666 gives the wrong 
> result!
> I will try to adjust AMQ2832Test.
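
To make the failure mode concrete, a generic illustration (not the actual KahaDB code) of how checking a stale copy while removing entries from the live candidate set lets a file that holds needed acks slip through:

{code:language=java}
import java.util.Arrays;
import java.util.Map;
import java.util.Set;
import java.util.TreeMap;
import java.util.TreeSet;

public class StaleCopyIllustration {
    public static void main(String[] args) {
        // Files 2 and 3 start out as deletion candidates; file 1 is in use (DLQ messages).
        Set<Integer> gcCandidateSet = new TreeSet<>(Arrays.asList(2, 3));
        // data file -> the file whose messages it acknowledges
        Map<Integer, Integer> acksFor = new TreeMap<>();
        acksFor.put(2, 1); // file 2 acks messages in file 1
        acksFor.put(3, 2); // file 3 acks messages in file 2

        Set<Integer> staleCopy = new TreeSet<>(gcCandidateSet); // the problematic copy

        for (Map.Entry<Integer, Integer> e : acksFor.entrySet()) {
            int ackFile = e.getKey();
            int ackedFile = e.getValue();
            // If the acked file is being kept, the file holding its acks must be kept too.
            // Checking the stale copy misses that file 2 was just retained, so file 3
            // stays in the candidate set and its acks are lost on deletion.
            if (!staleCopy.contains(ackedFile)) {
                gcCandidateSet.remove(ackFile); // keep the ack file
            }
        }
        System.out.println("files that will be deleted: " + gcCandidateSet); // [3]
    }
}
{code}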



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMQ-5552) introduce a smoke-test profile that is enabled by default and during release:prepare

2015-01-30 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully updated AMQ-5552:

Description: 
Users should be able to do $>mvn install on trunk or the source distribution 
and get a validated (smoke-tested) distribution in < 10  minutes.
The smoke-test profile should be enabled for release:prepare

At the moment, more than 3k tests are run; they are not reliable enough and the 
build runs for a number of hours. This gives a bad first impression.
Of course we should continue to improve the test suite, but this has a totally 
different focus.

The smoke-test profile takes a smart cross section of tests in each module that 
validate core functionality. 

It will be an interesting challenge to get the selection right; balancing 
typical use cases with coverage with speed etc.

The tests should be:
 * representative of the module functionality
 * clean - no hard-coded ports and use only space on target
 * fast
 * reliable
 * can be run in parallel (maybe if it allows more tests to be run in the same 
time frame)
 

  was:
Users should be able to do $>mvn install on trunk or the source distribution 
and get a validated (smoke-tested) distribution in < 10  minutes.
The smoke-test profile should be enabled for release:prepare

At the moment, more than 3k tests are run; they are not reliable enough and the 
build runs for a number of hours. This gives a bad first impression.
Of course we should continue to improve the test suite, but this has a totally 
different focus.

The smoke-test profile takes a smart cross section of tests in each module that 
validate core functionality. 

It will be an interesting challenge to get the selection right; balancing 
typical use cases with coverage with speed etc.

But the tests should be:
 * clean - no hard-coded ports and use only space on target
 * fast
 * reliable
 * can be run in parallel (maybe if it allows more tests to be run in the same 
time frame)
 


> introduce a smoke-test profile that is enabled by default and during 
> release:prepare
> 
>
> Key: AMQ-5552
> URL: https://issues.apache.org/jira/browse/AMQ-5552
> Project: ActiveMQ
>  Issue Type: Improvement
>  Components: Distribution, Test Cases
>Affects Versions: 5.11.0
>Reporter: Gary Tully
>  Labels: distribution, mvn, smoke, tests, validation
> Fix For: 5.12.0
>
>
> Users should be able to do $>mvn install on trunk or the source distribution 
> and get a validated (smoke-tested) distribution in < 10  minutes.
> The smoke-test profile should be enabled for release:prepare
> At the moment, more than 3k tests are run; they are not reliable enough and 
> the build runs for a number of hours. This gives a bad first impression.
> Of course we should continue to improve the test suite, but this has a totally 
> different focus.
> The smoke-test profile takes a smart cross section of tests in each module 
> that validate core functionality. 
> It will be an interesting challenge to get the selection right; balancing 
> typical use cases with coverage with speed etc.
> The tests should be:
>  * representative of the module functionality
>  * clean - no hard-coded ports and use only space on target
>  * fast
>  * reliable
>  * can be run in parallel (maybe if it allows more tests to be run in the 
> same time frame)
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMQ-5552) introduce a smoke-test profile that is enabled by default and during release:prepare

2015-01-30 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully updated AMQ-5552:

Summary: introduce a smoke-test profile that is enabled by default and 
during release:prepare  (was: introduce a smoke-test profile that is enable by 
default and during release:prepare)

> introduce a smoke-test profile that is enabled by default and during 
> release:prepare
> 
>
> Key: AMQ-5552
> URL: https://issues.apache.org/jira/browse/AMQ-5552
> Project: ActiveMQ
>  Issue Type: Improvement
>  Components: Distribution, Test Cases
>Affects Versions: 5.11.0
>Reporter: Gary Tully
>  Labels: distribution, mvn, smoke, tests, validation
> Fix For: 5.12.0
>
>
> Users should be able to do $>mvn install on trunk or the source distribution 
> and get a validated (smoke-tested) distribution in < 10  minutes.
> The smoke-test profile should be enabled for release:prepare
> At the moment, more than 3k tests are run, they are not reliable enough, and 
> the build takes a number of hours. This gives a bad first impression.
> Of course we should continue to improve the test suite, but this has a totally 
> different focus.
> The smoke-test profile takes a smart cross section of tests in each module 
> that validate core functionality. 
> It will be an interesting challenge to get the selection right, balancing 
> typical use cases against coverage, speed, etc.
> But the tests should be:
>  * clean - no hard-coded ports and use only space on target
>  * fast
>  * reliable
>  * can be run in parallel (maybe if it allows more tests to be run in the 
> same time frame)
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (AMQ-5552) introduce a smoke-test profile that is enable by default and during release:prepare

2015-01-30 Thread Gary Tully (JIRA)
Gary Tully created AMQ-5552:
---

 Summary: introduce a smoke-test profile that is enable by default 
and during release:prepare
 Key: AMQ-5552
 URL: https://issues.apache.org/jira/browse/AMQ-5552
 Project: ActiveMQ
  Issue Type: Improvement
  Components: Distribution, Test Cases
Affects Versions: 5.11.0
Reporter: Gary Tully
 Fix For: 5.12.0


Users should be able to do $>mvn install on trunk or the source distribution 
and get a validated (smoke-tested) distribution in < 10  minutes.
The smoke-test profile should be enabled for release:prepare

At the moment, more than 3k tests are run, they are not reliable enough, and 
the build takes a number of hours. This gives a bad first impression.
Of course we should continue to improve the test suite, but this has a totally 
different focus.

The smoke-test profile takes a smart cross section of tests in each module that 
validate core functionality. 

It will be an interesting challenge to get the selection right, balancing 
typical use cases against coverage, speed, etc.

But the tests should be:
 * clean - no hard-coded ports and uses only space on target (see the sketch after this list)
 * fast
 * reliable
 * can be run in parallel (maybe if it allows more tests to be run in the same 
time frame)
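
To make the "clean" criterion concrete, here is a minimal, hypothetical sketch (written as a plain main() rather than an actual JUnit test, and not an existing test in the suite) of the kind of check a smoke-test profile could run: the broker binds to an OS-assigned port and keeps its working files under target/, so parallel runs do not collide and nothing is left behind in the workspace.

{code:java}
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.broker.BrokerService;

public class EmbeddedBrokerSmokeTest {

    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        broker.setPersistent(false);
        broker.setUseJmx(false);
        // keep all working files under target so a clean build leaves nothing behind
        broker.setDataDirectory("target/activemq-data");
        // port 0 lets the OS pick a free port, so parallel runs do not collide
        broker.addConnector("tcp://localhost:0");
        broker.start();
        broker.waitUntilStarted();

        String brokerUri = broker.getTransportConnectors().get(0).getPublishableConnectString();
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(brokerUri);
        factory.createConnection().close(); // core functionality: a client can connect

        broker.stop();
        broker.waitUntilStopped();
    }
}
{code}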
 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMQ-5542) KahaDB data files containing acknowledgements are deleted during cleanup

2015-01-29 Thread Gary Tully (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14296645#comment-14296645
 ] 

Gary Tully commented on AMQ-5542:
-

I will review and try to recall the need for the copy. Great find. A patch with 
a test case is always much appreciated :-)

> KahaDB data files containing acknowledgements are deleted during cleanup
> 
>
> Key: AMQ-5542
> URL: https://issues.apache.org/jira/browse/AMQ-5542
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Message Store
>Affects Versions: 5.10.0, 5.10.1
>Reporter: Sergiy Barlabanov
>Assignee: Gary Tully
> Attachments: AMQ-5542.patch, AdjustedAMQ2832Test.patch
>
>
> AMQ-2832 was not fixed cleanly.
> The commit dd68c61e65f24b7dc498b36e34960a4bc46ded4b by Gary from 8.10.2010 
> introduced a problem by deleting too many files.
> Scenarios we are facing currently in production:
> Data file #1 contains unconsumed messages sitting in a DLQ. So this file is 
> not a cleanup candidate.
> The next file #2 contains acks of some messages from file #1. This file is 
> not a cleanup candidate (because of ackMessageFileMap logic).
> The next file #3 contains acks of some messages from file #2. And this file 
> is deleted during the cleanup procedure. So on Broker restart all messages 
> from #2, whose acks were from the deleted file #3, are replayed!
> The reason is gcCandidates variable, which is a copy of gcCandidateSet (see 
> MessageDatabase#checkpointUpdate at the end of the method - 
> org/apache/activemq/store/kahadb/MessageDatabase.java:1659 on 5.10.0 tag). So 
> when a candidate is deleted from gcCandidateSet 
> (org/apache/activemq/store/kahadb/MessageDatabase.java:1668 on 5.10.0 tag), 
> gcCandidates still contains that candidate and the comparison on 
> org/apache/activemq/store/kahadb/MessageDatabase.java:1666 works wrong!
> I will try to adjust AMQ2832Test.
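
To illustrate the failure mode described above, here is a minimal, self-contained sketch (hypothetical names, not the actual MessageDatabase code): once a copy of the candidate set is taken, removals from the live set are not reflected in the copy, so a decision made against the stale copy can still treat a protected data file as deletable.

{code:java}
import java.util.SortedSet;
import java.util.TreeSet;

public class StaleCandidateCopy {
    public static void main(String[] args) {
        // Journal data files that initially look unreferenced.
        SortedSet<Integer> gcCandidateSet = new TreeSet<>();
        gcCandidateSet.add(1);
        gcCandidateSet.add(2);
        gcCandidateSet.add(3);

        // A copy taken before ack references are accounted for.
        SortedSet<Integer> gcCandidates = new TreeSet<>(gcCandidateSet);

        // File #3 holds acks that are still needed, so the live set drops it.
        gcCandidateSet.remove(3);

        // Deciding against the stale copy still reports file #3 as deletable.
        System.out.println("stale copy: " + gcCandidates);   // [1, 2, 3]
        System.out.println("live set:   " + gcCandidateSet); // [1, 2]
    }
}
{code}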



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (AMQ-5542) KahaDB data files containing acknowledgements are deleted during cleanup

2015-01-29 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully reassigned AMQ-5542:
---

Assignee: Gary Tully

> KahaDB data files containing acknowledgements are deleted during cleanup
> 
>
> Key: AMQ-5542
> URL: https://issues.apache.org/jira/browse/AMQ-5542
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Message Store
>Affects Versions: 5.10.0, 5.10.1
>Reporter: Sergiy Barlabanov
>Assignee: Gary Tully
> Attachments: AMQ-5542.patch, AdjustedAMQ2832Test.patch
>
>
> AMQ-2832 was not fixed cleanly.
> The commit dd68c61e65f24b7dc498b36e34960a4bc46ded4b by Gary from 8.10.2010 
> introduced a problem by deleting too many files.
> Scenarios we are facing currently in production:
> Data file #1 contains unconsumed messages sitting in a DLQ. So this file is 
> not a cleanup candidate.
> The next file #2 contains acks of some messages from file #1. This file is 
> not a cleanup candidate (because of ackMessageFileMap logic).
> The next file #3 contains acks of some messages from file #2. And this file 
> is deleted during the cleanup procedure. So on Broker restart all messages 
> from #2, whose acks were from the deleted file #3, are replayed!
> The reason is gcCandidates variable, which is a copy of gcCandidateSet (see 
> MessageDatabase#checkpointUpdate at the end of the method - 
> org/apache/activemq/store/kahadb/MessageDatabase.java:1659 on 5.10.0 tag). So 
> when a candidate is deleted from gcCandidateSet 
> (org/apache/activemq/store/kahadb/MessageDatabase.java:1668 on 5.10.0 tag), 
> gcCandidates still contains that candidate and the comparison on 
> org/apache/activemq/store/kahadb/MessageDatabase.java:1666 works wrong!
> I will try to adjust AMQ2832Test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMQ-5542) KahaDB data files containing acknowledgements are deleted during cleanup

2015-01-28 Thread Gary Tully (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14295309#comment-14295309
 ] 

Gary Tully commented on AMQ-5542:
-

mKahaDB is the current answer to the compaction/rewrite problem; that is why it 
emerged. Partition based on the average length of time a message spends in a 
queue.
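
For reference, a rough sketch of that partitioning done programmatically. The destination filter ("ActiveMQ.DLQ") is only an example, and the setter names assume the standard mKahaDB bean API (MultiKahaDBPersistenceAdapter / FilteredKahaDBPersistenceAdapter), so they should be checked against the ActiveMQ version in use:

{code:java}
import java.io.File;
import java.util.Arrays;

import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.store.kahadb.FilteredKahaDBPersistenceAdapter;
import org.apache.activemq.store.kahadb.KahaDBPersistenceAdapter;
import org.apache.activemq.store.kahadb.MultiKahaDBPersistenceAdapter;

public class MultiKahaDBPartitioning {
    public static void main(String[] args) throws Exception {
        // Slow-moving destinations (e.g. a DLQ) get their own journal so their
        // long-lived messages do not pin data files shared with busy queues.
        FilteredKahaDBPersistenceAdapter slow = new FilteredKahaDBPersistenceAdapter();
        slow.setQueue("ActiveMQ.DLQ");
        slow.setPersistenceAdapter(new KahaDBPersistenceAdapter());

        // Everything else shares a second journal.
        FilteredKahaDBPersistenceAdapter rest = new FilteredKahaDBPersistenceAdapter();
        rest.setPersistenceAdapter(new KahaDBPersistenceAdapter());

        MultiKahaDBPersistenceAdapter mKahaDB = new MultiKahaDBPersistenceAdapter();
        mKahaDB.setDirectory(new File("target/mkahadb"));
        mKahaDB.setFilteredPersistenceAdapters(Arrays.asList(slow, rest));

        BrokerService broker = new BrokerService();
        broker.setPersistenceAdapter(mKahaDB);
        broker.start();
        broker.waitUntilStarted();
    }
}
{code}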

> KahaDB data files containing acknowledgements are deleted during cleanup
> 
>
> Key: AMQ-5542
> URL: https://issues.apache.org/jira/browse/AMQ-5542
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Message Store
>Affects Versions: 5.10.0, 5.10.1
>Reporter: Sergiy Barlabanov
> Attachments: AMQ-5542.patch, AdjustedAMQ2832Test.patch
>
>
> AMQ-2832 was not fixed cleanly.
> The commit dd68c61e65f24b7dc498b36e34960a4bc46ded4b by Gary from 8.10.2010 
> introduced a problem by deleting too many files.
> Scenarios we are facing currently in production:
> Data file #1 contains unconsumed messages sitting in a DLQ. So this file is 
> not a cleanup candidate.
> The next file #2 contains acks of some messages from file #1. This file is 
> not a cleanup candidate (because of ackMessageFileMap logic).
> The next file #3 contains acks of some messages from file #2. And this file 
> is deleted during the cleanup procedure. So on Broker restart all messages 
> from #2, whose acks were from the deleted file #3, are replayed!
> The reason is gcCandidates variable, which is a copy of gcCandidateSet (see 
> MessageDatabase#checkpointUpdate at the end of the method - 
> org/apache/activemq/store/kahadb/MessageDatabase.java:1659 on 5.10.0 tag). So 
> when a candidate is deleted from gcCandidateSet 
> (org/apache/activemq/store/kahadb/MessageDatabase.java:1668 on 5.10.0 tag), 
> gcCandidates still contains that candidate and the comparison on 
> org/apache/activemq/store/kahadb/MessageDatabase.java:1666 works wrong!
> I will try to adjust AMQ2832Test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (AMQ-5540) KahaDB can't fail over to the slave if the master is unable to write to disk

2015-01-27 Thread Gary Tully (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14294318#comment-14294318
 ] 

Gary Tully edited comment on AMQ-5540 at 1/27/15 10:34 PM:
---

Please attach some logs or stack traces so that it is possible to ascertain the 
state of the master. And attach your XML config.


was (Author: gtully):
Please attach some logs or stack traces so that it is possible to ascertain the 
state of the master. And attach your XML config.

> KahaDB can't fail over to the slave if the master is unable to write to disk
> 
>
> Key: AMQ-5540
> URL: https://issues.apache.org/jira/browse/AMQ-5540
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker, Message Store
>Affects Versions: 5.10.0
> Environment: Using Master-slave topology with shared kahadb. 
> Using KahaDB on NFS. 
>Reporter: Anuj Khandelwal
>
> This is coming from 
> http://activemq.2283324.n4.nabble.com/kahadb-corruption-quot-Checkpoint-failed-java-io-IOException-Input-output-error-quot-td4690378.html#a4690442
>  . 
> KahaDB can't fail over to the slave if the master is unable to write to disk 
> when it shuts down (because it couldn't write to disk). KahaDB should be able 
> to detect such failures and allow the slave broker to acquire the lock.
> Thanks,
> Anuj



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMQ-5540) KahaDB can't fail over to the slave if the master is unable to write to disk

2015-01-27 Thread Gary Tully (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14294318#comment-14294318
 ] 

Gary Tully commented on AMQ-5540:
-

Please attach some logs or stack traces so that it is possible to ascertain the 
state of the master. And attach your XML config.

> KahaDB can't fail over to the slave if the master is unable to write to disk
> 
>
> Key: AMQ-5540
> URL: https://issues.apache.org/jira/browse/AMQ-5540
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker, Message Store
>Affects Versions: 5.10.0
> Environment: Using Master-slave topology with shared kahadb. 
> Using KahaDB on NFS. 
>Reporter: Anuj Khandelwal
>
> This is coming from 
> http://activemq.2283324.n4.nabble.com/kahadb-corruption-quot-Checkpoint-failed-java-io-IOException-Input-output-error-quot-td4690378.html#a4690442
>  . 
> KahaDB can't fail over to the slave if the master is unable to write to disk 
> when it shuts down (because it couldn't write to disk). KahaDB should be able 
> to detect such failures and allow the slave broker to acquire the lock.
> Thanks,
> Anuj



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (AMQ-5541) Support preemptive redelivery flag for non persistent messages

2015-01-27 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully resolved AMQ-5541.
-
Resolution: Fixed

The policy entry now eagerly sets the redelivered flag on non-persistent broker 
messages (only a persistent one would need to be rewritten) so that, in the 
event of an abortive connection close, the message reflects the dispatch attempt.
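
For context, a minimal sketch of enabling this policy entry on an embedded broker. This assumes the standard PolicyEntry/PolicyMap setters and is not taken from the fix itself:

{code:java}
import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.broker.region.policy.PolicyEntry;
import org.apache.activemq.broker.region.policy.PolicyMap;

public class PersistJMSRedeliveredConfig {
    public static void main(String[] args) throws Exception {
        // Harden the redelivered flag for every destination.
        PolicyEntry policy = new PolicyEntry();
        policy.setPersistJMSRedelivered(true);

        PolicyMap policyMap = new PolicyMap();
        policyMap.setDefaultEntry(policy);

        BrokerService broker = new BrokerService();
        broker.setDestinationPolicy(policyMap);
        broker.start();
        broker.waitUntilStarted();
    }
}
{code}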

> Support preemptive redelivery flag for non persistent messages
> --
>
> Key: AMQ-5541
> URL: https://issues.apache.org/jira/browse/AMQ-5541
> Project: ActiveMQ
>  Issue Type: Improvement
>  Components: Broker
>Reporter: Gary Tully
>Assignee: Gary Tully
> Fix For: 5.12.0
>
>
> policy entry setPersistJMSRedelivered gives us a hardened redelivery flag 
> even in the event of a restart
> https://issues.apache.org/jira/browse/AMQ-5068
> This feature is conditional on the message being persistent, but it need not 
> be. For non-persistent messages, we can eagerly set the redelivered flag. 
> This is useful for connection drops and/or STOMP clients that close their 
> socket, if they ever care about their delivery flag.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (AMQ-5541) Support preemptive redelivery flag for non persistent messages

2015-01-27 Thread Gary Tully (JIRA)
Gary Tully created AMQ-5541:
---

 Summary: Support preemptive redelivery flag for non persistent 
messages
 Key: AMQ-5541
 URL: https://issues.apache.org/jira/browse/AMQ-5541
 Project: ActiveMQ
  Issue Type: Improvement
  Components: Broker
Reporter: Gary Tully
Assignee: Gary Tully
 Fix For: 5.12.0


policy entry setPersistJMSRedelivered gives us a hardened redelivery flag even 
in the event of a restart
https://issues.apache.org/jira/browse/AMQ-5068

This feature is conditional on the message being persistent, but it need not be. 
For non-persistent messages, we can eagerly set the redelivered flag. This is 
useful for connection drops and/or STOMP clients that close their socket, if 
they ever care about their delivery flag.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMQ-5537) Network Connector Throughput

2015-01-27 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully updated AMQ-5537:

Fix Version/s: (was: 5.11.0)

> Network Connector Throughput
> 
>
> Key: AMQ-5537
> URL: https://issues.apache.org/jira/browse/AMQ-5537
> Project: ActiveMQ
>  Issue Type: Improvement
>  Components: Connector
>Affects Versions: 5.x
> Environment: Network of Brokers. Platform agnostic. Local Broker has 
> a networkConnector defined to forward all messages to a remote broker.
>Reporter: Ehud Eshet
>
> *Requirement*
> 1.  Allow network connector to use transactions when forwarding persistent 
> messages.
> 2. Provide the following new network connector properties:
> maxMessagesPerTransaction - when specified and greater than 1, use transactions.
> maxTransactionLatencyMillis - commit immediately when the time passed since the 
> last commit exceeds the specified value.
> Let's say both parameters are set to 1000.
> The network connector should then commit after every 1000 messages, or when more 
> than 1000ms have passed since the last commit (whichever comes first).
> *Background*
> Throughput for persistent messages is significantly lower.
> When using transactions and committing every 1000 messages, throughput on a 
> local broker with LevelDB is about 12,000 messages of 1KB per second.
> The network connector does not use transactions, so its throughput is limited 
> to a few hundred messages per second.
> When imitating network connector functionality (receive from local broker and 
> send to remote broker) using transactions on both sessions, I managed to have 
> a sustained throughput of 10,000 messages/sec stored on local broker plus up 
> to 11,000 messages/s forwarded to remote broker (forwarding throughput must 
> be higher to allow catch up after reconnect).
> *Sample code*
> {code:title=TransactionalStoreAndForward.java|borderStyle=solid}
> import java.util.Date;
> import javax.jms.*;
> import javax.jms.Connection;
> import javax.jms.Message;
> import org.apache.activemq.*;
> import org.apache.activemq.broker.*;
>
> public class TransactionalStoreAndForward implements Runnable
> {
>     private final String m_queueName;
>     private final ActiveMQConnectionFactory m_fromAMQF, m_toAMQF;
>
>     private Connection m_fromConn = null, m_toConn = null;
>     private Session m_fromSess = null, m_toSess = null;
>     private MessageConsumer m_msgConsumer = null;
>     private MessageProducer m_msgProducer = null;
>
>     private boolean m_cont = true;
>
>     public static final int MAX_MESSAGES_PER_TRANSACTION = 500;
>     public static final long MAX_TRANSACTION_LATENCY_MILLIS = 5000L;
>
>     public TransactionalStoreAndForward(String fromUri, String toUri, String queueName)
>     {
>         m_fromAMQF = new ActiveMQConnectionFactory(fromUri);
>         m_toAMQF = new ActiveMQConnectionFactory(toUri);
>         m_queueName = queueName;
>     }
>
>     @Override
>     public void run()
>     {
>         while (m_cont)
>         {
>             connect();
>             process();
>         }
>     }
>
>     private void process()
>     {
>         long txMessages = 0, totalMessages = 0, lastPrintMessages = 0;
>         long startTime = 0L;
>         long lastTxTime = startTime, lastPrintTime = startTime;
>
>         Message msg = null;
>
>         try {
>             while (m_cont)
>             {
>                 while ((msg = m_msgConsumer.receive(MAX_TRANSACTION_LATENCY_MILLIS)) != null)
>                 {
>                     if (startTime == 0) {
>                         startTime = System.currentTimeMillis();
>                         lastTxTime = startTime;
>                         lastPrintTime = startTime;
>                     }
>
>                     m_msgProducer.send(msg);
>                     txMessages++;
>                     totalMessages++;
>
>                     if (txMessages == MAX_MESSAGES_PER_TRANSACTION ||
>                         System.currentTimeMillis() - lastTxTime > MAX_TRANSACTION_LATENCY_MILLIS)
>                     {
>                         m_toSess.commit();
>                         m_fromSess.commit();
>                         lastTxTime = System.currentTimeMillis();
>
