[jira] [Updated] (ACTIVEMQ6-18) Update contributor section of the README
[ https://issues.apache.org/jira/browse/ACTIVEMQ6-18?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Martyn Taylor updated ACTIVEMQ6-18: --- Fix Version/s: 6.0.0 Update contributor section of the README - Key: ACTIVEMQ6-18 URL: https://issues.apache.org/jira/browse/ACTIVEMQ6-18 Project: Apache ActiveMQ 6 Issue Type: Task Reporter: Martyn Taylor Assignee: Martyn Taylor Fix For: 6.0.0 Update the README to reflect the new commit, review, push process for code submissions. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (ACTIVEMQ6-5) Replace components with non-WildFly ones
[ https://issues.apache.org/jira/browse/ACTIVEMQ6-5?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andy Taylor updated ACTIVEMQ6-5: Assignee: Martyn Taylor Replace components with non-WildFly ones - Key: ACTIVEMQ6-5 URL: https://issues.apache.org/jira/browse/ACTIVEMQ6-5 Project: Apache ActiveMQ 6 Issue Type: Improvement Reporter: Martyn Taylor Assignee: Martyn Taylor Fix For: 6.0.0 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (ACTIVEMQ6-17) Add Broker Interceptor - like the Camel Broker Component in ActiveMQ 5
[ https://issues.apache.org/jira/browse/ACTIVEMQ6-17?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andy Taylor updated ACTIVEMQ6-17: - Fix Version/s: 6.1.0 Add Broker Interceptor - like the Camel Broker Component in ActiveMQ 5 -- Key: ACTIVEMQ6-17 URL: https://issues.apache.org/jira/browse/ACTIVEMQ6-17 Project: Apache ActiveMQ 6 Issue Type: Bug Reporter: Rob Davies Assignee: Andy Taylor Fix For: 6.1.0 One of the main reasons ActiveMQ is popular is its flexibility. Allowing Camel to intercept messages inside the broker (Camel Broker Component) - will allow some of the current BrokerPlugins (like TimeStamp) - to be implemented using Camel routes. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (ACTIVEMQ6-17) Add Broker Interceptor - like the Camel Broker Component in ActiveMQ 5
[ https://issues.apache.org/jira/browse/ACTIVEMQ6-17?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andy Taylor reassigned ACTIVEMQ6-17: Assignee: Andy Taylor Add Broker Interceptor - like the Camel Broker Component in ActiveMQ 5 -- Key: ACTIVEMQ6-17 URL: https://issues.apache.org/jira/browse/ACTIVEMQ6-17 Project: Apache ActiveMQ 6 Issue Type: Bug Reporter: Rob Davies Assignee: Andy Taylor Fix For: 6.1.0 One of the main reasons ActiveMQ is popular is its flexibility. Allowing Camel to intercept messages inside the broker (Camel Broker Component) - will allow some of the current BrokerPlugins (like TimeStamp) - to be implemented using Camel routes. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (ACTIVEMQ6-21) Queues created from management should be part of the configuration
[ https://issues.apache.org/jira/browse/ACTIVEMQ6-21?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andy Taylor updated ACTIVEMQ6-21: - Fix Version/s: 6.1.0 Queues created from management should be part of the configuration -- Key: ACTIVEMQ6-21 URL: https://issues.apache.org/jira/browse/ACTIVEMQ6-21 Project: Apache ActiveMQ 6 Issue Type: Improvement Reporter: clebert suconic Fix For: 6.1.0 When you create a queue / topic / any type of destination we will offer in the future from management, it must automatically be set as part of the configuration, so that if all the data is reset the destinations are preserved as part of the configuration. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (ACTIVEMQ6-23) Improve connection load balancing
[ https://issues.apache.org/jira/browse/ACTIVEMQ6-23?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andy Taylor updated ACTIVEMQ6-23: - Fix Version/s: 6.1.0 Improve connection load balancing - Key: ACTIVEMQ6-23 URL: https://issues.apache.org/jira/browse/ACTIVEMQ6-23 Project: Apache ActiveMQ 6 Issue Type: New Feature Reporter: clebert suconic Fix For: 6.1.0 Move the connection load balancing to the server side by allowing the cluster to decide where a client should connect on initial connect, removing the functionality from the client. All the client should need to know on reconnect or failover is its initial server's id. We can then plumb in more dynamic load balancing based on queue depth etc. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (ACTIVEMQ6-23) Improve connection load balancing
[ https://issues.apache.org/jira/browse/ACTIVEMQ6-23?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andy Taylor updated ACTIVEMQ6-23: - Description: Move the connection load balancing to the server side by allowing the cluster to decide where a client should connect on initial connect, removing the functionality from the client. All the client should need to know on reconnect or failover is its initial server's id. We can then plumb in more dynamic load balancing based on queue depth etc. (was: This will be an integrated gateway with Zoo Keeper. A Connection ID will set what node is coming, and the gateway will work as a proxy to the node where the connection is sitting on.) Summary: Improve connection load balancing (was: Broker Gateway to massive scale up clients) Improve connection load balancing - Key: ACTIVEMQ6-23 URL: https://issues.apache.org/jira/browse/ACTIVEMQ6-23 Project: Apache ActiveMQ 6 Issue Type: New Feature Reporter: clebert suconic Fix For: 6.1.0 Move the connection load balancing to the server side by allowing the cluster to decide where a client should connect on initial connect, removing the functionality from the client. All the client should need to know on reconnect or failover is its initial server's id. We can then plumb in more dynamic load balancing based on queue depth etc. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (ACTIVEMQ6-24) Lazy conversions on Protocols / Persistency
[ https://issues.apache.org/jira/browse/ACTIVEMQ6-24?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andy Taylor updated ACTIVEMQ6-24: - Fix Version/s: 6.1.0 Lazy conversions on Protocols / Persistency --- Key: ACTIVEMQ6-24 URL: https://issues.apache.org/jira/browse/ACTIVEMQ6-24 Project: Apache ActiveMQ 6 Issue Type: New Feature Reporter: clebert suconic Fix For: 6.1.0 We should do as Apollo does when converting between protocols and persisting data: keep the data native to the protocol as much as we can and only convert when needed. Right now we are always converting to an internal format independently of the protocol. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (ACTIVEMQ6-25) Smarter Redistribution of Messaging
[ https://issues.apache.org/jira/browse/ACTIVEMQ6-25?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andy Taylor updated ACTIVEMQ6-25: - Fix Version/s: 6.1.0 Smarter Redistribution of Messaging --- Key: ACTIVEMQ6-25 URL: https://issues.apache.org/jira/browse/ACTIVEMQ6-25 Project: Apache ActiveMQ 6 Issue Type: New Feature Reporter: clebert suconic Fix For: 6.1.0 This is dependent on ACTIVEMQ6-19: since we are disabling Server Side Load Balancing, a client without messages should kick in redistribution in a smarter fashion. It's also desirable to deal with filtered transfers. We have had issues on both ActiveMQ 5 (AFAIK) and HornetQ about consumers demanding messages for a specific filter. It's not easy to deal with... I'm not sure if it's possible, but we should take that into consideration when designing this feature. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (ACTIVEMQ6-26) Improve Page Transactions on the persisted Journal
[ https://issues.apache.org/jira/browse/ACTIVEMQ6-26?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andy Taylor updated ACTIVEMQ6-26: - Fix Version/s: 6.1.0 Improve Page Transactions on the persisted Journal -- Key: ACTIVEMQ6-26 URL: https://issues.apache.org/jira/browse/ACTIVEMQ6-26 Project: Apache ActiveMQ 6 Issue Type: New Feature Reporter: clebert suconic Fix For: 6.1.0 The model donated from HornetQ will set one element on the journal and in memory for paged messages that were sent transactionally. When writing on the page model, we must track whether the transaction (if transactional) was paged or not. The footprint is really small; however, we have had some issues raised. I don't think they are really important, as we could solve this by introducing other queue types based on big data. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (AMQ-5430) LevelDB on NFS creates .nfs files
[ https://issues.apache.org/jira/browse/AMQ-5430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14207876#comment-14207876 ] Anuj Khandelwal commented on AMQ-5430: -- Any updates here? I have also verified that these files are held open by the ActiveMQ broker even after deletion [lsof +D /path_of_levelDB]. This is a serious issue because we have a lot of messages going through the persistence store and it will fill up the storage. Please take a look. Thanks, Anuj LevelDB on NFS creates .nfs files Key: AMQ-5430 URL: https://issues.apache.org/jira/browse/AMQ-5430 Project: ActiveMQ Issue Type: Bug Components: activemq-leveldb-store Affects Versions: 5.10.0 Reporter: Anuj Khandelwal We are currently testing LevelDB on NFS. LevelDB creates .log files in the leveldb directory to store the actual message/data, and these files rotate after 100MB. The .log files get deleted when all the messages are consumed from a particular file. Issue: After all the messages are consumed I can see that the files are getting deleted, but internally .nfs files of the same size are being created. We have to restart the process to delete those .nfs files. From my understanding: it seems that the LevelDB store keeps the old log files open after they were deleted. Below is the snapshot of files: amqt...@kepler19.nyc:/u/amqtest/dev/leveldb ls -a .nfs0082e7befafe .nfs00960d1eeb46 .nfs01033243ea15 .nfs00614cf1eaef .nfs00960d1aee3e .nfs01033242e52d dirty.index store-version.txt .nfs0082e7c3000100c5 .nfs00960d1ff27f 724ff92c.index lock 724ff92c.log plist.index -- amqt...@kepler19.nyc:/u/amqtest/dev/leveldb du -sh .nfs* 107M .nfs00614cf1eaef 101M .nfs0082e7befafe 101M .nfs0082e7c3000100c5 108M .nfs00960d1aee3e 106M .nfs00960d1eeb46 104M .nfs00960d1ff27f 101M .nfs01033242e52d 101M .nfs01033243ea15 Thanks, Anuj -- This message was sent by Atlassian JIRA (v6.3.4#6332)
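The behaviour Anuj describes matches the standard NFS "silly rename": when a client deletes a file that some process still holds open, the NFS client renames it to a hidden .nfsXXXX file instead of removing it, and the space is only freed once the last open handle goes away. A minimal local sketch of the underlying deleted-but-open pattern (paths and sizes are illustrative only, and a local filesystem simply drops the directory entry rather than creating a .nfs file):

```shell
# Demonstrate that a deleted file stays alive while a process holds it open.
# On NFS this exact situation is what produces the .nfsXXXX files above.
tmpdir=$(mktemp -d)
dd if=/dev/zero of="$tmpdir/data.log" bs=1024 count=100 2>/dev/null
tail -f "$tmpdir/data.log" >/dev/null 2>&1 &   # a process holding the file open
holder=$!
rm "$tmpdir/data.log"                          # directory entry gone, inode still held
ls -a "$tmpdir"                                # no data.log listed on a local fs
kill "$holder" 2>/dev/null                     # closing the last handle frees the space
rm -rf "$tmpdir"
```

This suggests the fix belongs in the LevelDB store (closing retired log files promptly) rather than in the NFS configuration.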
[jira] [Updated] (ACTIVEMQ6-28) Rebalance load after failover or scale up nodes
[ https://issues.apache.org/jira/browse/ACTIVEMQ6-28?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andy Taylor updated ACTIVEMQ6-28: - Fix Version/s: 6.1.0 Rebalance load after failover or scale up nodes --- Key: ACTIVEMQ6-28 URL: https://issues.apache.org/jira/browse/ACTIVEMQ6-28 Project: Apache ActiveMQ 6 Issue Type: New Feature Reporter: clebert suconic Fix For: 6.1.0 Rebalance load when a node ends up with too much of it after certain events. It could be after failover, or after scaling up (starting new nodes on the cluster). It's not certain if this is just about connections or also about data (queues and topics). We believe that if we just did this at the connection level, ACTIVEMQ6-25 would take care of redistributing the data. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (ACTIVEMQ6-29) Reconnect to any live
[ https://issues.apache.org/jira/browse/ACTIVEMQ6-29?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andy Taylor updated ACTIVEMQ6-29: - Description: Right now clients will always connect to their backup or original lives. We could, in a configurable fashion, have the client connect anywhere after a certain number of retries to its original nodes. (This will only apply to core clients, since AMQP clients will always reconnect to their original addresses). was: Right now clients will always connect to their backup or original lives. We could in a configurable fashion have the client connecting anywhere after a certain number of retries to their original nodes. (This will only be applied to core clients since AMQP will always reconnect to their original addresses). The gateway defined on ACTIVEMQ6-23 could help with any AMQP client failing over and reconnecting it to the proper nodes. Reconnect to any live - Key: ACTIVEMQ6-29 URL: https://issues.apache.org/jira/browse/ACTIVEMQ6-29 Project: Apache ActiveMQ 6 Issue Type: New Feature Reporter: clebert suconic Right now clients will always connect to their backup or original lives. We could, in a configurable fashion, have the client connect anywhere after a certain number of retries to its original nodes. (This will only apply to core clients, since AMQP clients will always reconnect to their original addresses). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (ACTIVEMQ6-29) Reconnect to any live
[ https://issues.apache.org/jira/browse/ACTIVEMQ6-29?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andy Taylor updated ACTIVEMQ6-29: - Fix Version/s: 6.1.0 Reconnect to any live - Key: ACTIVEMQ6-29 URL: https://issues.apache.org/jira/browse/ACTIVEMQ6-29 Project: Apache ActiveMQ 6 Issue Type: New Feature Reporter: clebert suconic Fix For: 6.1.0 Right now clients will always connect to their backup or original lives. We could, in a configurable fashion, have the client connect anywhere after a certain number of retries to its original nodes. (This will only apply to core clients, since AMQP clients will always reconnect to their original addresses). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (ACTIVEMQ6-29) Reconnect to any live
[ https://issues.apache.org/jira/browse/ACTIVEMQ6-29?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14207878#comment-14207878 ] Andy Taylor commented on ACTIVEMQ6-29: -- I don't think we should do it after n retries; we should be deterministic: either HA or just reconnect to any node. Reconnect to any live - Key: ACTIVEMQ6-29 URL: https://issues.apache.org/jira/browse/ACTIVEMQ6-29 Project: Apache ActiveMQ 6 Issue Type: New Feature Reporter: clebert suconic Fix For: 6.1.0 Right now clients will always connect to their backup or original lives. We could, in a configurable fashion, have the client connect anywhere after a certain number of retries to its original nodes. (This will only apply to core clients, since AMQP clients will always reconnect to their original addresses). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (ACTIVEMQ6-30) allow backup servers to service multiple live servers
[ https://issues.apache.org/jira/browse/ACTIVEMQ6-30?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andy Taylor updated ACTIVEMQ6-30: - Summary: allow backup servers to service multiple live servers (was: Implement a limited number of backup nodes available) allow backup servers to service multiple live servers - Key: ACTIVEMQ6-30 URL: https://issues.apache.org/jira/browse/ACTIVEMQ6-30 Project: Apache ActiveMQ 6 Issue Type: New Feature Reporter: clebert suconic We called this N+M backup with shared storage, but the same policy could be applied with replication. You have a few nodes elected to be backups. Say you have, for example, 3 nodes as dedicated backups (M) for 10 live nodes (N), resulting in N+M nodes. Could we implement the same using replication? That point is optional though... depending on the design we could elect to implement this only for shared-storage options. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (ACTIVEMQ6-31) Disaster recovery
[ https://issues.apache.org/jira/browse/ACTIVEMQ6-31?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andy Taylor updated ACTIVEMQ6-31: - Summary: Disaster recovery (was: Multiple backup options on a node / Disaster recovery option) Disaster recovery - Key: ACTIVEMQ6-31 URL: https://issues.apache.org/jira/browse/ACTIVEMQ6-31 Project: Apache ActiveMQ 6 Issue Type: New Feature Reporter: clebert suconic We should have the option of having multiple backups for a node. When doing so we should specify what service level we are expected to have in sync mode. A Remote site could introduce latency and other issues hence we should limit the possibility of syncing the OperationContext with remote backups through a configuration option. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (ACTIVEMQ6-31) Disaster recovery
[ https://issues.apache.org/jira/browse/ACTIVEMQ6-31?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andy Taylor updated ACTIVEMQ6-31: - Description: allow some sort of disaster recovery (was: We should have the option of having multiple backups for a node. When doing so we should specify what service level we are expected to have in sync mode. A Remote site could introduce latency and other issues hence we should limit the possibility of syncing the OperationContext with remote backups through a configuration option.) Disaster recovery - Key: ACTIVEMQ6-31 URL: https://issues.apache.org/jira/browse/ACTIVEMQ6-31 Project: Apache ActiveMQ 6 Issue Type: New Feature Reporter: clebert suconic allow some sort of disaster recovery -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (ACTIVEMQ6-2) port fixes from HornetQ master
[ https://issues.apache.org/jira/browse/ACTIVEMQ6-2?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andy Taylor resolved ACTIVEMQ6-2. - Resolution: Fixed port fixes from HornetQ master -- Key: ACTIVEMQ6-2 URL: https://issues.apache.org/jira/browse/ACTIVEMQ6-2 Project: Apache ActiveMQ 6 Issue Type: Task Reporter: Andy Taylor Assignee: Andy Taylor Fix For: 6.0.0 since the initial donation was a while ago we should apply all the fixes up to now before we start to repackage -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (ACTIVEMQ6-10) Remove JBoss 4 integration (hornetq-service-sar).
[ https://issues.apache.org/jira/browse/ACTIVEMQ6-10?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andy Taylor updated ACTIVEMQ6-10: - Assignee: Martyn Taylor Remove JBoss 4 integration (hornetq-service-sar). - Key: ACTIVEMQ6-10 URL: https://issues.apache.org/jira/browse/ACTIVEMQ6-10 Project: Apache ActiveMQ 6 Issue Type: Improvement Reporter: Martyn Taylor Assignee: Martyn Taylor Fix For: 6.0.0 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (ACTIVEMQ6-19) Disable Server Side Load Balancing
[ https://issues.apache.org/jira/browse/ACTIVEMQ6-19?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andy Taylor updated ACTIVEMQ6-19: - Fix Version/s: 6.1.0 Disable Server Side Load Balancing -- Key: ACTIVEMQ6-19 URL: https://issues.apache.org/jira/browse/ACTIVEMQ6-19 Project: Apache ActiveMQ 6 Issue Type: New Feature Reporter: clebert suconic Fix For: 6.1.0 It should be possible to disable server side load balancing on message producers with a configuration option. Once we introduce this configuration the default should be off (meaning server side load balancing = no) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMQ-5174) Cannot use the JDBCIOExceptionHandler when kahadb is configured with lease-database-locker
[ https://issues.apache.org/jira/browse/AMQ-5174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Paul Gale updated AMQ-5174: --- Affects Version/s: 5.10.0 Cannot use the JDBCIOExceptionHandler when kahadb is configured with lease-database-locker -- Key: AMQ-5174 URL: https://issues.apache.org/jira/browse/AMQ-5174 Project: ActiveMQ Issue Type: Bug Components: Message Store Affects Versions: 5.9.0, 5.9.1, 5.10.0 Reporter: Paul Gale Priority: Critical Fix For: 5.11.0 The {{JDBCIOExceptionHandler}} is limited to operating with the {{JDBCPersistenceAdapter}}. It should be allowed to work in combination with the {{KahaDBPersistenceAdapter}} if it's configured to use a {{LeaseDatabaseLocker}} as a locker. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (ACTIVEMQ6-31) Disaster recovery
[ https://issues.apache.org/jira/browse/ACTIVEMQ6-31?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14208092#comment-14208092 ] clebert suconic commented on ACTIVEMQ6-31: -- This feature is really about having multiple active backups, being replicated or shared storage. Disaster recovery - Key: ACTIVEMQ6-31 URL: https://issues.apache.org/jira/browse/ACTIVEMQ6-31 Project: Apache ActiveMQ 6 Issue Type: New Feature Reporter: clebert suconic allow some sort of disaster recovery -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (AMQ-5174) Cannot use the JDBCIOExceptionHandler when kahadb is configured with lease-database-locker
[ https://issues.apache.org/jira/browse/AMQ-5174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14208094#comment-14208094 ] Gary Tully commented on AMQ-5174: - If the instanceof dependency is changed to org.apache.activemq.broker.LockableServiceSupport, that may suffice. Cannot use the JDBCIOExceptionHandler when kahadb is configured with lease-database-locker -- Key: AMQ-5174 URL: https://issues.apache.org/jira/browse/AMQ-5174 Project: ActiveMQ Issue Type: Bug Components: Message Store Affects Versions: 5.9.0, 5.9.1, 5.10.0 Reporter: Paul Gale Priority: Critical Fix For: 5.11.0 The {{JDBCIOExceptionHandler}} is limited to operating with the {{JDBCPersistenceAdapter}}. It should be allowed to work in combination with the {{KahaDBPersistenceAdapter}} if it's configured to use a {{LeaseDatabaseLocker}} as a locker. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (AMQ-5174) Cannot use the JDBCIOExceptionHandler when kahadb is configured with lease-database-locker
[ https://issues.apache.org/jira/browse/AMQ-5174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14208094#comment-14208094 ] Gary Tully edited comment on AMQ-5174 at 11/12/14 3:00 PM: --- If the instanceof dependency is changed to org.apache.activemq.broker.LockableServiceSupport, that may suffice. Maybe add a LeaseLockerIOExceptionHandler that will work with any store that implements LockableServiceSupport, and deprecate the JDBCIOExceptionHandler. was (Author: gtully): If the instanceof dependency is changed to org.apache.activemq.broker.LockableServiceSupport that may suffice. Cannot use the JDBCIOExceptionHandler when kahadb is configured with lease-database-locker -- Key: AMQ-5174 URL: https://issues.apache.org/jira/browse/AMQ-5174 Project: ActiveMQ Issue Type: Bug Components: Message Store Affects Versions: 5.9.0, 5.9.1, 5.10.0 Reporter: Paul Gale Priority: Critical Fix For: 5.11.0 The {{JDBCIOExceptionHandler}} is limited to operating with the {{JDBCPersistenceAdapter}}. It should be allowed to work in combination with the {{KahaDBPersistenceAdapter}} if it's configured to use a {{LeaseDatabaseLocker}} as a locker. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
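Gary's suggestion amounts to widening the handler's type check from the concrete JDBC adapter to the shared locker contract, so any lease-locked store qualifies. A rough sketch with simplified stand-in types (these are not the real ActiveMQ classes; the interface and adapter names below only illustrate the shape of the change):

```java
// Sketch of the proposed change: check against a broad locker contract
// instead of one concrete persistence adapter. All types here are
// simplified stand-ins for illustration, not ActiveMQ's real API.

interface LockableServiceSupport {            // stand-in for org.apache.activemq.broker.LockableServiceSupport
    boolean isUseLock();                      // whether a (lease) locker is configured
}

class JdbcPersistenceAdapter implements LockableServiceSupport {
    public boolean isUseLock() { return true; }
}

class KahaDbPersistenceAdapter implements LockableServiceSupport {
    public boolean isUseLock() { return true; }
}

public class LeaseLockerIoExceptionHandlerSketch {
    // Before: handler only accepted JdbcPersistenceAdapter instances.
    // After: any store implementing the locker contract is acceptable,
    // so KahaDB with a LeaseDatabaseLocker works too.
    static boolean supports(Object persistenceAdapter) {
        return persistenceAdapter instanceof LockableServiceSupport
                && ((LockableServiceSupport) persistenceAdapter).isUseLock();
    }

    public static void main(String[] args) {
        System.out.println(supports(new JdbcPersistenceAdapter()));   // accepted
        System.out.println(supports(new KahaDbPersistenceAdapter())); // now also accepted
        System.out.println(supports(new Object()));                   // rejected
    }
}
```

Keeping the check on an interface rather than a class also means future stores get the handler for free, which is the rationale behind deprecating the JDBC-specific handler.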
PR Process on github.com/activemq6 repo
I would like us to use the pull request approach on github.com/apache/activemq-6. To me that's a no-brainer... PR commits are the best approach... simpler to handle and review... and you could even get tests running to validate that the changes are consistent and accurate with the testsuite. It really improves collaboration with anyone interested in contributing code. However, we have two issues to get sorted to make that work: I - PR builds We will need a bot user that would have push authorization on the mirror: https://wiki.jenkins-ci.org/display/JENKINS/GitHub+pull+request+builder+plugin II - Authorization for committers to close PRs. We will merge PRs manually, but we still need a way to close PRs when we reject issues.
Re: PR Process on github.com/activemq6 repo
I pressed send prematurely by accident... I googled for it and there seems to be an apacheBot user we could use for the PR builds. I would need some help setting up a job for the PR build on the ci.apache. and we also need the authorization to close PRs when they are rejected. Any ideas on how to get it sorted? On Wed, Nov 12, 2014 at 11:21 AM, Clebert Suconic clebert.suco...@gmail.com wrote: I would like us to use the pull request approach on github.com/apache/activemq-6. To me that's a no-brainer... PR commits are the best approach... simpler to handle and review... and you could even get tests running to validate that the changes are consistent and accurate with the testsuite. It really improves collaboration with anyone interested in contributing code. However, we have two issues to get sorted to make that work: I - PR builds We will need a bot user that would have push authorization on the mirror: https://wiki.jenkins-ci.org/display/JENKINS/GitHub+pull+request+builder+plugin II - Authorization for committers to close PRs. We will merge PRs manually, but we still need a way to close PRs when we reject issues. -- Clebert Suconic http://community.jboss.org/people/clebert.suco...@jboss.com http://clebertsuconic.blogspot.com
[jira] [Commented] (AMQ-5383) JMS 2.0 Client APIs Support
[ https://issues.apache.org/jira/browse/AMQ-5383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14208224#comment-14208224 ] Gary Tully commented on AMQ-5383: - [~johndament] is there any chance of a geronimo release with your jms 2.0 spec jar? Wondering how best to get something into maven central. JMS 2.0 Client APIs Support --- Key: AMQ-5383 URL: https://issues.apache.org/jira/browse/AMQ-5383 Project: ActiveMQ Issue Type: New Feature Components: JMS client Reporter: John D. Ament I'd like to propose improving upon ActiveMQ by adding client API support to the JMS client. One way to do this would be to throw out the current client JAR and replace it with a new one based on JMS 2. That wouldn't be a good idea; many people are happy to use JMS 1.1 and I feel we should let them continue. Another way is to create a new JAR, activemq-jm2-client, which extends the existing 1.1 client JAR for 2.0 APIs. This would allow the core activemq to remain on Java 6, and allow just this API JAR to be on Java 7 (due to the AutoCloseable requirement). I've already started putting together a geronimo spec jar for JMS 2 support. You can see that here: https://github.com/johnament/geronimo-specs/tree/jms2/geronimo-jms_2.0_spec -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (ACTIVEMQ6-22) Review OpenWire Implementation
[ https://issues.apache.org/jira/browse/ACTIVEMQ6-22?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] clebert suconic updated ACTIVEMQ6-22: - Description: This is a review of the current OpenWire implementation on the activemq6 branch. I would suggest someone with experience on the ActiveMQ 5 team review the implementation. was: This is a review of the open wire implementation donated by HornetQ. I would suggest someone from activemq5 team to review the implementation. Review OpenWire Implementation -- Key: ACTIVEMQ6-22 URL: https://issues.apache.org/jira/browse/ACTIVEMQ6-22 Project: Apache ActiveMQ 6 Issue Type: Improvement Affects Versions: 6.0.0 Reporter: clebert suconic This is a review of the current OpenWire implementation on the activemq6 branch. I would suggest someone with experience on the ActiveMQ 5 team review the implementation. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
Re: PR Process on github.com/activemq6 repo
On 12 November 2014 16:21, Clebert Suconic clebert.suco...@gmail.com wrote: I would like us to use the pull request approach on github.com/apache/activemq-6. To me that's a no-brainer... PR commits are the best approach... simpler to handle and review... and you could even get tests running to validate that the changes are consistent and accurate with the testsuite. It really improves collaboration with anyone interested in contributing code. However, we have two issues to get sorted to make that work: I - PR builds We will need a bot user that would have push authorization on the mirror: https://wiki.jenkins-ci.org/display/JENKINS/GitHub+pull+request+builder+plugin I doubt infra would be happy with a bot user having push rights to the repo. There is a pull request build process (sounds like you already found that) leveraging the ASF Jenkins instances, though. You can log in to Jenkins with your existing committer LDAP credentials, but you will need to be granted rights (by the PMC chair usually) to actually configure anything on it. https://blogs.apache.org/infra/entry/github_pull_request_builds_now II - Authorization for committers to close PRs. We will merge PRs manually, but we still need a way to close PRs when we reject issues. In the past this has been ruled out; committers can request addition to the 'Apache' organization on GitHub that the mirrors live under, but only infra team members have the necessary rights to update anything within it. The expectation seems to be that you either file JIRA requests with infra to clean up stale pull requests, ask the person who opened it to close it, or leverage commit messages to do so ("This closes #foo" etc.) when merging to the ASF-hosted git repo. Robbie
[jira] [Updated] (ACTIVEMQ6-19) Disable Server Side Load Balancing on message routing
[ https://issues.apache.org/jira/browse/ACTIVEMQ6-19?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] clebert suconic updated ACTIVEMQ6-19: - Summary: Disable Server Side Load Balancing on message routing (was: Disable Server Side Load Balancing) Disable Server Side Load Balancing on message routing - Key: ACTIVEMQ6-19 URL: https://issues.apache.org/jira/browse/ACTIVEMQ6-19 Project: Apache ActiveMQ 6 Issue Type: New Feature Reporter: clebert suconic Fix For: 6.1.0 It should be possible to disable server side load balancing on message producers with a configuration option. Once we introduce this configuration the default should be off (meaning server side load balancing = no) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
Re: PR Process on github.com/activemq6 repo
In the past this has been ruled out; committers can request addition to the 'Apache' organization on GitHub that the mirrors live under, but only infra team members have the necessary rights to update anything within it. The expectation seems to be that you either file JIRA requests with infra to clean up stale pull requests, ask the person who opened it to close it, or leverage commit messages to do so ("This closes #foo" etc.) when merging to the ASF-hosted git repo. I got the impression that this empty merge commit message would show up in the logs. I don't want to grow the log with any message just to close a PR. If that's not happening it's probably ok... I'm just afraid of the history building up anyway when we reject PRs.
Re: PR Process on github.com/activemq6 repo
On 12 November 2014 16:55, Clebert Suconic clebert.suco...@gmail.com wrote: In the past this has been ruled out; committers can request addition to the 'Apache' organization on GitHub that the mirrors live under, but only infra team members have the necessary rights to update anything within it. The expectation seems to be that you either file JIRA requests with infra to clean up stale pull requests, ask the person who opened it to close it, or leverage commit messages to do so ("This closes #foo" etc.) when merging to the ASF-hosted git repo. I got the impression that this empty merge commit message would show up in the logs. I don't want to grow the log with any message just to close a PR. If that's not happening it's probably ok... I'm just afraid of the history building up anyway when we reject PRs. The only way for a committer to close pull requests is via the log messages, so far as I am aware; otherwise you need to ask infra or the original requester to close. That said, as I have never actually merged a pull request I'm not necessarily the best person to comment; I'm just repeating what I have read or been told in the past. I have only ever asked infra to close a couple of stale PRs after the changes were actually committed via raising a JIRA and attaching a patch, and the resultant commits didn't reference the pull request and the original requesters didn't respond to requests to close them. Robbie
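The commit-message route Robbie mentions can be tried locally. A sketch of the merge workflow (the branch name, repo layout, and PR number 42 are placeholders; whether the pull request actually closes depends on the ASF mirror tooling recognising the "This closes #n" phrase):

```shell
# Merge a contribution with a message carrying the "This closes #<n>" hint.
# Everything below runs in a throwaway local repository for illustration.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "dev@example.org"        # placeholder identity
git config user.name "Committer"
git commit -q --allow-empty -m "initial"
base=$(git rev-parse --abbrev-ref HEAD)        # master or main, depending on git version
git checkout -q -b contributor-pr
git commit -q --allow-empty -m "ACTIVEMQ6-18 update contributor section of the README"
git checkout -q "$base"
# The merge commit carries the closing hint for the GitHub mirror:
git merge -q --no-ff contributor-pr -m "Merge contributor-pr. This closes #42"
git log -1 --format=%s
```

The advantage over filing infra tickets is that the close happens as a side effect of the merge itself, with no extra empty commits in the history.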
[jira] [Issue Comment Deleted] (ACTIVEMQ6-16) Integrate with Jolokia and HawtIO to offer up Web management console
[ https://issues.apache.org/jira/browse/ACTIVEMQ6-16?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] clebert suconic updated ACTIVEMQ6-16: - Comment: was deleted (was: Right... we were face to face and we captured the item... we will work out the wording accordingly next week.) Integrate with Jolokia and HawtIO to offer up Web management console Key: ACTIVEMQ6-16 URL: https://issues.apache.org/jira/browse/ACTIVEMQ6-16 Project: Apache ActiveMQ 6 Issue Type: New Feature Reporter: Martyn Taylor -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (AMQ-5433) AMQP messages stuck in broker when receiver detaches while receiving
Chuck Rolke created AMQ-5433: Summary: AMQP messages stuck in broker when receiver detaches while receiving Key: AMQ-5433 URL: https://issues.apache.org/jira/browse/AMQ-5433 Project: ActiveMQ Issue Type: Bug Components: AMQP Affects Versions: 5.11.0 Environment: broker: snapshot apache-activemq-5.11-20141022.222801-124-bin.tar.gz Wed Oct 22 22:28:02 UTC 2014 host: Linux Fedora 19 x64 client: Amqp.Net Lite self tests Reporter: Chuck Rolke [Amqp.Net Lite|https://amqpnetlite.codeplex.com/] has a set of self tests that work OK when run against a Qpidd 0.30 (proton 0.8) broker. However, when run against ActiveMQ 5.10 or a 5.11 SNAPSHOT, one self test causes a batch of messages to get stuck in the broker. The self test in question is [LinkTests.cs CloseBusyReceiver|https://amqpnetlite.codeplex.com/SourceControl/latest#test/Test.Amqp.NetMF/LinkTests.cs]. The test:
* opens a sender and a receiver to node q1
* sends 20 messages
* when the receiver receives the first message, it closes the receiver, which causes an AMQP detach
* then opens a second receiver and tries to drain the 20 messages.
The test fails because the second receiver never receives anything. The AMQ web console shows the 20 messages in the queue. Running the test again without clearing the queue results in 40 messages in the queue, and the self test fails because the first receiver never receives any messages and thus never closes the busy receiver. The first 20 messages jam the queue. Also, the messages are not received by tools that normally work fine, like qpid-receive. This effect is 100% repeatable. Protocol trace file to be attached. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMQ-5433) AMQP messages stuck in broker when receiver detaches while receiving
[ https://issues.apache.org/jira/browse/AMQ-5433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chuck Rolke updated AMQ-5433: - Attachment: lite_58_amq5.11_closeBusyReceiver_run_2_times.htm..htm AMQP protocol trace of the self test running twice. The command line was
{noformat}
mstest /testcontainer:.\bin\debug\test.amqp.net\test.amqp.net.dll ^
/test:Test.Amqp.LinkTests.TestMethod_CloseBusyReceiver ^
/detail:debugTrace
{noformat}
AMQP messages stuck in broker when receiver detaches while receiving Key: AMQ-5433 URL: https://issues.apache.org/jira/browse/AMQ-5433 Project: ActiveMQ Issue Type: Bug Components: AMQP Affects Versions: 5.11.0 Environment: broker: snapshot apache-activemq-5.11-20141022.222801-124-bin.tar.gz Wed Oct 22 22:28:02 UTC 2014 host: Linux Fedora 19 x64 client: Amqp.Net Lite self tests Reporter: Chuck Rolke Attachments: lite_58_amq5.11_closeBusyReceiver_run_2_times.htm..htm [Amqp.Net Lite|https://amqpnetlite.codeplex.com/] has a set of self tests that work OK when run against a Qpidd 0.30 (proton 0.8) broker. However, when run against ActiveMQ 5.10 or a 5.11 SNAPSHOT, one self test causes a batch of messages to get stuck in the broker. The self test in question is [LinkTests.cs CloseBusyReceiver|https://amqpnetlite.codeplex.com/SourceControl/latest#test/Test.Amqp.NetMF/LinkTests.cs]. The test:
* opens a sender and a receiver to node q1
* sends 20 messages
* when the receiver receives the first message, it closes the receiver, which causes an AMQP detach
* then opens a second receiver and tries to drain the 20 messages.
The test fails because the second receiver never receives anything. The AMQ web console shows the 20 messages in the queue. Running the test again without clearing the queue results in 40 messages in the queue, and the self test fails because the first receiver never receives any messages and thus never closes the busy receiver. The first 20 messages jam the queue. Also, the messages are not received by tools that normally work fine, like qpid-receive. This effect is 100% repeatable. Protocol trace file to be attached. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (AMQ-5434) Expose consumer details via JMX
Robin Huiser created AMQ-5434: - Summary: Expose consumer details via JMX Key: AMQ-5434 URL: https://issues.apache.org/jira/browse/AMQ-5434 Project: ActiveMQ Issue Type: New Feature Components: JMX Affects Versions: 5.10.0 Environment: Any Reporter: Robin Huiser Priority: Minor Fix For: NEEDS_REVIEW Via the ActiveMQ console, details are presented per connection / client about: * Selector * Enqueues * Dequeues * Dispatched ...etc It would be really great if 3rd party monitoring tooling (such as HawtIO) could pick up these details too through JMX. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
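For context, third-party tools pick such details up through standard JMX attribute access. A minimal, self-contained sketch of the mechanism — the MBean interface, object name, and attribute names below are hypothetical illustrations, not ActiveMQ's actual management beans:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Sketch only: a hypothetical per-consumer stats MBean. The real broker
// would need to expose its own beans following its existing JMX layout.
public class ConsumerStatsDemo {

    // Standard MBean pattern: interface name = implementation name + "MBean".
    public interface ConsumerStatsMBean {
        String getSelector();
        long getDispatched();
    }

    public static class ConsumerStats implements ConsumerStatsMBean {
        @Override public String getSelector()   { return "JMSPriority > 4"; }
        @Override public long   getDispatched() { return 42L; }
    }

    // Register the MBean, then read an attribute back the way a monitoring
    // tool (HawtIO, JConsole, ...) would.
    public static long readDispatched() throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName(
            "org.example.broker:type=Consumer,clientId=demo"); // hypothetical name
        server.registerMBean(new ConsumerStats(), name);
        return (Long) server.getAttribute(name, "Dispatched");
    }

    public static void main(String[] args) throws Exception {
        System.out.println("Dispatched = " + readDispatched()); // prints 42
    }
}
```

Once the attributes are on the platform MBeanServer, any JMX-aware tool can discover and poll them without broker-specific code.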
[jira] [Updated] (ACTIVEMQ6-25) Smarter Redistribution of Messaging
[ https://issues.apache.org/jira/browse/ACTIVEMQ6-25?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] clebert suconic updated ACTIVEMQ6-25: - Description: This is dependent on ACTIVEMQ6-19: since we are disabling Server Side Load Balancing, a client without messages should kick off redistribution in a smarter fashion. It's also desirable to deal with filtered transfers. We have had issues on both ActiveMQ 5 (AFAIK) and HornetQ with consumers demanding messages for a specific filter. It's not easy to deal with... I'm not sure if it's possible, but we should take that into consideration when designing this feature. The issue here is when you have multiple nodes... messages are sent on one node... the second node has a consumer but no local producers... messages won't be delivered to that node in this case. A smarter algorithm for redistribution should be put in place. We can discuss implementation details later, but one idea would be to request a single message from the remote node any time the local consumer is exhausted... and have some statistics-based redistribution for when messages are not evenly distributed in the case of a single producer and multiple queues. was: This is dependent on ACTIVEMQ6-19: since we are disabling Server Side Load Balancing, a client without messages should kick off redistribution in a smarter fashion. It's also desirable to deal with filtered transfers. We have had issues on both ActiveMQ 5 (AFAIK) and HornetQ with consumers demanding messages for a specific filter. It's not easy to deal with... I'm not sure if it's possible, but we should take that into consideration when designing this feature.
Smarter Redistribution of Messaging --- Key: ACTIVEMQ6-25 URL: https://issues.apache.org/jira/browse/ACTIVEMQ6-25 Project: Apache ActiveMQ 6 Issue Type: New Feature Reporter: clebert suconic Fix For: 6.1.0 This is dependent on ACTIVEMQ6-19: since we are disabling Server Side Load Balancing, a client without messages should kick off redistribution in a smarter fashion. It's also desirable to deal with filtered transfers. We have had issues on both ActiveMQ 5 (AFAIK) and HornetQ with consumers demanding messages for a specific filter. It's not easy to deal with... I'm not sure if it's possible, but we should take that into consideration when designing this feature. The issue here is when you have multiple nodes... messages are sent on one node... the second node has a consumer but no local producers... messages won't be delivered to that node in this case. A smarter algorithm for redistribution should be put in place. We can discuss implementation details later, but one idea would be to request a single message from the remote node any time the local consumer is exhausted... and have some statistics-based redistribution for when messages are not evenly distributed in the case of a single producer and multiple queues. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (ACTIVEMQ6-6) Extract any JBoss SPI code replace with generic
[ https://issues.apache.org/jira/browse/ACTIVEMQ6-6?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14208645#comment-14208645 ] clebert suconic commented on ACTIVEMQ6-6: - Mainly: the RA needs to be totally generic. That way it can work well on any application server. I'm not going to create a separate task just for cleaning the RA as we have this task here... if we make sure the RA is cleaned we won't need another task. Extract any JBoss SPI code replace with generic --- Key: ACTIVEMQ6-6 URL: https://issues.apache.org/jira/browse/ACTIVEMQ6-6 Project: Apache ActiveMQ 6 Issue Type: Improvement Reporter: Martyn Taylor Fix For: 6.0.0 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (ACTIVEMQ6-32) Integration with Discovery / Zoo-keeper
clebert suconic created ACTIVEMQ6-32: Summary: Integration with Discovery / Zoo-keeper Key: ACTIVEMQ6-32 URL: https://issues.apache.org/jira/browse/ACTIVEMQ6-32 Project: Apache ActiveMQ 6 Issue Type: Bug Reporter: clebert suconic Fix For: 6.1.0 We should add Zoo-keeper as an option on discovering both servers and nodes. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (ACTIVEMQ6-23) Improve connection load balancing
[ https://issues.apache.org/jira/browse/ACTIVEMQ6-23?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14208681#comment-14208681 ] clebert suconic commented on ACTIVEMQ6-23: -- There could/should be a proxy component responsible only for transferring load and doing some load balancing. I believe there is a similar component in ActiveMQ 5 doing that. Improve connection load balancing - Key: ACTIVEMQ6-23 URL: https://issues.apache.org/jira/browse/ACTIVEMQ6-23 Project: Apache ActiveMQ 6 Issue Type: New Feature Reporter: clebert suconic Fix For: 6.1.0 Move the connection load balancing to the server side by allowing the cluster to decide where a client should connect to on initial connect, removing the functionality from the client. All the client should need to know on reconnect or failover is its initial server's id. We can then plumb in more dynamic load balancing based on queue depth etc. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (AMQ-5383) JMS 2.0 Client APIs Support
[ https://issues.apache.org/jira/browse/AMQ-5383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14208800#comment-14208800 ] John D. Ament commented on AMQ-5383: If someone has commit access to geronimo, I can provide the spec impls. JMS 2.0 Client APIs Support --- Key: AMQ-5383 URL: https://issues.apache.org/jira/browse/AMQ-5383 Project: ActiveMQ Issue Type: New Feature Components: JMS client Reporter: John D. Ament I'd like to propose improving upon ActiveMQ by adding client API support to the JMS client. One way to do this would be to throw out the current client JAR and replace it with a new one based on JMS 2. That wouldn't be a good idea; many people are happy to use JMS 1.1 and I feel we should let them continue. Another way is to create a new JAR, activemq-jm2-client, which extends the existing 1.1 client JAR with the 2.0 APIs. This would allow the core activemq to remain on Java 6, and allow just this API JAR to be on Java 7 (due to the AutoCloseable requirement). I've already started putting together a geronimo spec jar for JMS 2 support. You can see that here: https://github.com/johnament/geronimo-specs/tree/jms2/geronimo-jms_2.0_spec -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (AMQ-5383) JMS 2.0 Client APIs Support
[ https://issues.apache.org/jira/browse/AMQ-5383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14208800#comment-14208800 ] John D. Ament edited comment on AMQ-5383 at 11/12/14 10:16 PM: --- If someone has commit access to geronimo, I can provide the spec api jar. was (Author: johndament): If someone has commit access to geronimo, I can provide the spec impls. JMS 2.0 Client APIs Support --- Key: AMQ-5383 URL: https://issues.apache.org/jira/browse/AMQ-5383 Project: ActiveMQ Issue Type: New Feature Components: JMS client Reporter: John D. Ament I'd like to propose improving upon ActiveMQ by adding client API support to the JMS client. One way to do this would be to throw out the current client JAR and replace it with a new one based on JMS 2. That wouldn't be a good idea; many people are happy to use JMS 1.1 and I feel we should let them continue. Another way is to create a new JAR, activemq-jm2-client, which extends the existing 1.1 client JAR with the 2.0 APIs. This would allow the core activemq to remain on Java 6, and allow just this API JAR to be on Java 7 (due to the AutoCloseable requirement). I've already started putting together a geronimo spec jar for JMS 2 support. You can see that here: https://github.com/johnament/geronimo-specs/tree/jms2/geronimo-jms_2.0_spec -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (ACTIVEMQ6-32) Smarter Discovery support with different modules
[ https://issues.apache.org/jira/browse/ACTIVEMQ6-32?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] clebert suconic updated ACTIVEMQ6-32: - Description: We currently have a plugin module supporting different discovery modules. It currently supports JGroups and UDP. We could investigate adding another integration point with ZooKeeper, which would be an extra discovery option. was: We should add Zoo-keeper as an option on discovering both servers and nodes. Summary: Smarter Discovery support with different modules (was: Integration with Discovery / Zoo-keeper) Smarter Discovery support with different modules Key: ACTIVEMQ6-32 URL: https://issues.apache.org/jira/browse/ACTIVEMQ6-32 Project: Apache ActiveMQ 6 Issue Type: Bug Reporter: clebert suconic Fix For: 6.1.0 We currently have a plugin module supporting different discovery modules. It currently supports JGroups and UDP. We could investigate adding another integration point with ZooKeeper, which would be an extra discovery option. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
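As a sketch of what such a pluggable discovery setup looks like — all names below are illustrative, not the actual ActiveMQ discovery SPI — each mechanism (JGroups, UDP, ZooKeeper) would sit behind one common interface:

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical discovery SPI: JGroups/UDP/ZooKeeper would each be one
// implementation behind the same interface. Names are illustrative only.
public class DiscoveryDemo {

    public interface DiscoveryModule {
        /** Returns the connector URLs of the currently known cluster nodes. */
        List<String> discoverNodes();
    }

    // A static-list stand-in; a ZooKeeper module would instead list child
    // znodes under some agreed path (e.g. /brokers) to build this list.
    public static class StaticDiscovery implements DiscoveryModule {
        private final List<String> nodes;
        public StaticDiscovery(String... nodes) { this.nodes = Arrays.asList(nodes); }
        @Override public List<String> discoverNodes() { return nodes; }
    }

    public static void main(String[] args) {
        DiscoveryModule discovery =
            new StaticDiscovery("tcp://nodeA:61616", "tcp://nodeB:61616");
        System.out.println(discovery.discoverNodes());
    }
}
```

The point of the interface is that callers depend only on `discoverNodes()`, so adding a ZooKeeper-backed module is a drop-in extension rather than a core change.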
[jira] [Updated] (ACTIVEMQ6-16) Integrate with Jolokia to offer up Web management console integration points
[ https://issues.apache.org/jira/browse/ACTIVEMQ6-16?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] clebert suconic updated ACTIVEMQ6-16: - Summary: Integrate with Jolokia to offer up Web management console integration points (was: Integrate with Jolokia and HawtIO to offer up Web management console) Integrate with Jolokia to offer up Web management console integration points Key: ACTIVEMQ6-16 URL: https://issues.apache.org/jira/browse/ACTIVEMQ6-16 Project: Apache ActiveMQ 6 Issue Type: New Feature Reporter: Martyn Taylor -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (ACTIVEMQ6-16) Integrate with Jolokia to offer up Web management console integration points
[ https://issues.apache.org/jira/browse/ACTIVEMQ6-16?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] clebert suconic updated ACTIVEMQ6-16: - Description: Some tools use Jolokia as an API through JMX. We could add that as an option for such tools. Integrate with Jolokia to offer up Web management console integration points Key: ACTIVEMQ6-16 URL: https://issues.apache.org/jira/browse/ACTIVEMQ6-16 Project: Apache ActiveMQ 6 Issue Type: New Feature Reporter: Martyn Taylor Some tools use Jolokia as an API through JMX. We could add that as an option for such tools. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (ACTIVEMQ6-29) Reconnect to any live
[ https://issues.apache.org/jira/browse/ACTIVEMQ6-29?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14208824#comment-14208824 ] clebert suconic commented on ACTIVEMQ6-29: -- It would make sense to keep this configurable. Try the backup, if configured, N times... then go to any node if it's not available? We can discuss that later... I know we have two options here... and we will figure out what's best later. Reconnect to any live - Key: ACTIVEMQ6-29 URL: https://issues.apache.org/jira/browse/ACTIVEMQ6-29 Project: Apache ActiveMQ 6 Issue Type: New Feature Reporter: clebert suconic Fix For: 6.1.0 Right now clients will always connect to their backup or original lives. We could in a configurable fashion have the client connecting anywhere after a certain number of retries to their original nodes. (This will only be applied to core clients since AMQP will always reconnect to their original addresses). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (ACTIVEMQ6-30) allow backup servers to service multiple live servers
[ https://issues.apache.org/jira/browse/ACTIVEMQ6-30?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] clebert suconic updated ACTIVEMQ6-30: - Description: On this you could have a number of backup nodes (M) serving a number of live nodes (N). In that scenario you would always have N+M nodes, and you would only support M lives failing. Right now a backup can only support one live; with this we would allow it to serve multiple lives. I'm not sure yet if this applies to replication as well. It's easy to implement with shared storage. We will need to investigate whether this makes sense with replication or not. (To be discussed during design discussions before the implementation) was: We called this N+M backup with shared storage, but the same policy could be applied with replication. You have a few nodes elected to be backups. Say you have for example 3 nodes as dedicated backups (M) for 10 live nodes (N), resulting in N+M nodes. We could implement the same using replication? That point is optional though... depending on the design we could elect to implement this only for shared storage options. allow backup servers to service multiple live servers - Key: ACTIVEMQ6-30 URL: https://issues.apache.org/jira/browse/ACTIVEMQ6-30 Project: Apache ActiveMQ 6 Issue Type: New Feature Reporter: clebert suconic On this you could have a number of backup nodes (M) serving a number of live nodes (N). In that scenario you would always have N+M nodes, and you would only support M lives failing. Right now a backup can only support one live; with this we would allow it to serve multiple lives. I'm not sure yet if this applies to replication as well. It's easy to implement with shared storage. We will need to investigate whether this makes sense with replication or not. (To be discussed during design discussions before the implementation) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (ACTIVEMQ6-32) Smarter Discovery support with different modules
[ https://issues.apache.org/jira/browse/ACTIVEMQ6-32?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] clebert suconic updated ACTIVEMQ6-32: - Issue Type: New Feature (was: Bug) Smarter Discovery support with different modules Key: ACTIVEMQ6-32 URL: https://issues.apache.org/jira/browse/ACTIVEMQ6-32 Project: Apache ActiveMQ 6 Issue Type: New Feature Reporter: clebert suconic Fix For: 6.1.0 We currently have a plugin module supporting different discovery modules. It currently supports JGroups and UDP. We could investigate adding another integration point with ZooKeeper, which would be an extra discovery option. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (ACTIVEMQ6-12) Compare and Contrast ActiveMQ6 and ActiveMQ 5.X embeddability.
[ https://issues.apache.org/jira/browse/ACTIVEMQ6-12?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] clebert suconic updated ACTIVEMQ6-12: - Issue Type: Task (was: Bug) Compare and Contrast ActiveMQ6 and ActiveMQ 5.X embeddability. - Key: ACTIVEMQ6-12 URL: https://issues.apache.org/jira/browse/ACTIVEMQ6-12 Project: Apache ActiveMQ 6 Issue Type: Task Reporter: Martyn Taylor Fix For: 6.0.0 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (ACTIVEMQ6-33) Generic integration with SASL Frameworks
[ https://issues.apache.org/jira/browse/ACTIVEMQ6-33?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] clebert suconic updated ACTIVEMQ6-33: - Fix Version/s: 6.1.0 Summary: Generic integration with SASL Frameworks (was: Generic integration with SASL) Generic integration with SASL Frameworks Key: ACTIVEMQ6-33 URL: https://issues.apache.org/jira/browse/ACTIVEMQ6-33 Project: Apache ActiveMQ 6 Issue Type: New Feature Reporter: clebert suconic Fix For: 6.1.0 Right now we are bound to User/Password or anonymous on SASL. We should use some framework that would allow SASL integration with a bigger number of possibilities. We should investigate options from the JDK for this... or whether there is any other framework available. I believe this only affects AMQP, but as part of this issue we should investigate whether there is any interest in extending SASL to any other protocol. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (ACTIVEMQ6-33) Generic integration with SASL
clebert suconic created ACTIVEMQ6-33: Summary: Generic integration with SASL Key: ACTIVEMQ6-33 URL: https://issues.apache.org/jira/browse/ACTIVEMQ6-33 Project: Apache ActiveMQ 6 Issue Type: New Feature Reporter: clebert suconic Right now we are bound to User/Password or anonymous on SASL. We should use some framework that would allow SASL integration with a bigger number of possibilities. We should investigate options from the JDK for this... or whether there is any other framework available. I believe this only affects AMQP, but as part of this issue we should investigate whether there is any interest in extending SASL to any other protocol. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
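One of the JDK options alluded to is the built-in javax.security.sasl framework, which already negotiates mechanisms pluggably. A minimal sketch of building a PLAIN client's initial response with it (the credentials, protocol, and server name are placeholders, not anything from the broker):

```java
import javax.security.auth.callback.Callback;
import javax.security.auth.callback.NameCallback;
import javax.security.auth.callback.PasswordCallback;
import javax.security.sasl.Sasl;
import javax.security.sasl.SaslClient;

// Sketch: the JDK's SASL framework negotiating PLAIN. Swapping in another
// mechanism (e.g. DIGEST-MD5) is just a change to the mechanism array.
public class SaslPlainDemo {

    public static byte[] initialResponse() throws Exception {
        SaslClient client = Sasl.createSaslClient(
            new String[]{"PLAIN"},   // mechanisms to try, in preference order
            null,                    // no separate authorization id
            "amqp", "localhost",     // protocol / server name (illustrative)
            null,                    // no mechanism properties
            callbacks -> {           // supplies credentials on demand
                for (Callback cb : callbacks) {
                    if (cb instanceof NameCallback) {
                        ((NameCallback) cb).setName("guest");
                    } else if (cb instanceof PasswordCallback) {
                        ((PasswordCallback) cb).setPassword("guest".toCharArray());
                    }
                }
            });
        // PLAIN sends "authzid NUL authcid NUL passwd" as its first frame
        return client.hasInitialResponse()
            ? client.evaluateChallenge(new byte[0]) : new byte[0];
    }

    public static void main(String[] args) throws Exception {
        System.out.println(initialResponse().length + " bytes");
    }
}
```

The mechanism-agnostic `SaslClient` loop is what would let the broker's transport negotiate anything a provider supplies, rather than hard-coding user/password.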
[jira] [Updated] (ACTIVEMQ6-10) Remove JBoss 4 integration (hornetq-service-sar).
[ https://issues.apache.org/jira/browse/ACTIVEMQ6-10?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] clebert suconic updated ACTIVEMQ6-10: - Description: The initial donation has some old JBoss code on it that we need to get rid of as we move forward Remove JBoss 4 integration (hornetq-service-sar). - Key: ACTIVEMQ6-10 URL: https://issues.apache.org/jira/browse/ACTIVEMQ6-10 Project: Apache ActiveMQ 6 Issue Type: Improvement Reporter: Martyn Taylor Assignee: Martyn Taylor Fix For: 6.0.0 The initial donation has some old JBoss code on it that we need to get rid of as we move forward -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (ACTIVEMQ6-17) Add Broker Interceptor - like the Camel Broker Component in ActiveMQ 5
[ https://issues.apache.org/jira/browse/ACTIVEMQ6-17?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] clebert suconic updated ACTIVEMQ6-17: - Issue Type: New Feature (was: Bug) Add Broker Interceptor - like the Camel Broker Component in ActiveMQ 5 -- Key: ACTIVEMQ6-17 URL: https://issues.apache.org/jira/browse/ACTIVEMQ6-17 Project: Apache ActiveMQ 6 Issue Type: New Feature Reporter: Rob Davies Assignee: Andy Taylor Fix For: 6.1.0 One of the main reasons ActiveMQ is popular is its flexibility. Allowing Camel to intercept messages inside the broker (Camel Broker Component) - will allow some of the current BrokerPlugins (like TimeStamp) - to be implemented using Camel routes. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (AMQ-5435) Persistence Adapter Starting Thread is still alive after stopping a slave broker with lease database locker
Shi Lei created AMQ-5435: Summary: Persistence Adapter Starting Thread is still alive after stopping a slave broker with lease database locker Key: AMQ-5435 URL: https://issues.apache.org/jira/browse/AMQ-5435 Project: ActiveMQ Issue Type: Bug Components: Broker Affects Versions: 5.10.0 Environment: Windows, JDK7 Reporter: Shi Lei I am using jdbc master/slave with lease database lock. I found that if I call broker.stop to stop a slave broker service (which means it is trying to get the lease locker and has not got it yet), its Persistence Adapter Starting Thread is still alive. If I create and start a new broker in the same Java VM, there will be 2 Persistence Adapter Starting Threads inside the same Java VM. At this time, if the master broker goes down, the stopped broker will get the database lease locker, but somehow it cannot start the broker. Now I have 2 broker services in the same VM: one has got the locker but cannot start the broker, the other is still requesting the locker. The root cause is that after stopping the broker, LeaseDatabaseLocker.isStopping() is false and LeaseDatabaseLocker.isStopped() is true. In LeaseDatabaseLocker.doStart:
{noformat}
while (!isStopping()) {
    Connection connection = null;
    PreparedStatement statement = null;
    try {
        connection = getConnection();
        initTimeDiff(connection);
        statement = connection.prepareStatement(sql);
        setQueryTimeout(statement);
        now = System.currentTimeMillis() + diffFromCurrentTime;
        statement.setString(1, getLeaseHolderId());
        statement.setLong(2, now + lockAcquireSleepInterval);
        statement.setLong(3, now);
        int result = statement.executeUpdate();
        if (result == 1) {
            // we got the lease, verify we still have it
            if (keepAlive()) {
                break;
            }
        }
        reportLeasOwnerShipAndDuration(connection);
    } catch (Exception e) {
        LOG.debug(getLeaseHolderId() + " lease acquire failure: " + e, e);
        if (isStopping()) {
            throw new Exception("Cannot start broker as being asked to shut down. Interrupted attempt to acquire lock: " + e, e);
        }
        if (handleStartException) {
            lockable.getBrokerService().handleIOException(IOExceptionSupport.create(e));
        }
    } finally {
        close(statement);
        close(connection);
    }
    LOG.info(getLeaseHolderId() + " failed to acquire lease. Sleeping for " + lockAcquireSleepInterval + " milli(s) before trying again...");
    TimeUnit.MILLISECONDS.sleep(lockAcquireSleepInterval);
}
if (isStopping()) {
    throw new RuntimeException(getLeaseHolderId() + " failing lease acquire due to stop");
}
LOG.info(getLeaseHolderId() + ", becoming master with lease expiry " + new Date(now) + " on dataSource: " + dataSource);
{noformat}
I think we should replace the isStopping() check with isStopping() || isStopped(). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
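The reporter's proposed fix — treating a fully stopped locker the same as a stopping one — can be sketched in isolation with a stub (this is not the real LeaseDatabaseLocker, just the two flags and the loop condition in question):

```java
// Stub sketch of the proposed loop-exit condition. After broker.stop()
// completes, isStopping() is false again but isStopped() is true, so the
// original condition lets the acquire loop keep running.
public class LeaseLoopSketch {
    private volatile boolean stopping;
    private volatile boolean stopped;

    public void beginStop()  { stopping = true; }
    public void finishStop() { stopping = false; stopped = true; }

    // Original: while (!isStopping()) { ... } — misses the stopped case.
    public boolean keepTryingOriginal() { return !stopping; }

    // Proposed: also bail out once the locker has been stopped.
    public boolean keepTryingProposed() { return !(stopping || stopped); }

    public static void main(String[] args) {
        LeaseLoopSketch l = new LeaseLoopSketch();
        l.finishStop(); // broker.stop() has completed
        System.out.println("original keeps trying: " + l.keepTryingOriginal()); // true -> thread stays alive
        System.out.println("proposed keeps trying: " + l.keepTryingProposed()); // false -> loop exits
    }
}
```

With the combined check, the Persistence Adapter Starting Thread's retry loop exits once stop completes, instead of leaking a live thread per stopped broker instance.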
[jira] [Updated] (AMQ-5435) Persistence Adapter Starting Thread is still alive after stopping a slave broker with lease database locker
[ https://issues.apache.org/jira/browse/AMQ-5435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shi Lei updated AMQ-5435: - Description: I am using jdbc master/slave with lease database lock.
{noformat}
<amq:broker id="broker" startAsync="true">
{noformat}
I found that if I call broker.stop to stop a slave broker service (which means it is trying to get the lease locker and has not got it yet), its Persistence Adapter Starting Thread is still alive. If I create and start a new broker in the same Java VM, there will be 2 Persistence Adapter Starting Threads inside the same Java VM. At this time, if the master broker goes down, the stopped broker will get the database lease locker, but somehow it cannot start the broker. Now I have 2 broker services in the same VM: one has got the locker but cannot start the broker, the other is still requesting the locker. The root cause is that after stopping the broker, LeaseDatabaseLocker.isStopping() is false and LeaseDatabaseLocker.isStopped() is true. In LeaseDatabaseLocker.doStart:
{noformat}
while (!isStopping()) {
    Connection connection = null;
    PreparedStatement statement = null;
    try {
        connection = getConnection();
        initTimeDiff(connection);
        statement = connection.prepareStatement(sql);
        setQueryTimeout(statement);
        now = System.currentTimeMillis() + diffFromCurrentTime;
        statement.setString(1, getLeaseHolderId());
        statement.setLong(2, now + lockAcquireSleepInterval);
        statement.setLong(3, now);
        int result = statement.executeUpdate();
        if (result == 1) {
            // we got the lease, verify we still have it
            if (keepAlive()) {
                break;
            }
        }
        reportLeasOwnerShipAndDuration(connection);
    } catch (Exception e) {
        LOG.debug(getLeaseHolderId() + " lease acquire failure: " + e, e);
        if (isStopping()) {
            throw new Exception("Cannot start broker as being asked to shut down. Interrupted attempt to acquire lock: " + e, e);
        }
        if (handleStartException) {
            lockable.getBrokerService().handleIOException(IOExceptionSupport.create(e));
        }
    } finally {
        close(statement);
        close(connection);
    }
    LOG.info(getLeaseHolderId() + " failed to acquire lease. Sleeping for " + lockAcquireSleepInterval + " milli(s) before trying again...");
    TimeUnit.MILLISECONDS.sleep(lockAcquireSleepInterval);
}
if (isStopping()) {
    throw new RuntimeException(getLeaseHolderId() + " failing lease acquire due to stop");
}
LOG.info(getLeaseHolderId() + ", becoming master with lease expiry " + new Date(now) + " on dataSource: " + dataSource);
{noformat}
I think we should replace the isStopping() check with isStopping() || isStopped(). was: I am using jdbc master/slave with lease database lock. I found that if I call broker.stop to stop a slave broker service (which means it is trying to get the lease locker and has not got it yet), its Persistence Adapter Starting Thread is still alive. If I create and start a new broker in the same Java VM, there will be 2 Persistence Adapter Starting Threads inside the same Java VM. At this time, if the master broker goes down, the stopped broker will get the database lease locker, but somehow it cannot start the broker. Now I have 2 broker services in the same VM: one has got the locker but cannot start the broker, the other is still requesting the locker.
The root cause is that after stopping the broker, LeaseDatabaseLocker.isStopping() is false and LeaseDatabaseLocker.isStopped() is true. In LeaseDatabaseLocker.doStart:
{noformat}
while (!isStopping()) {
    Connection connection = null;
    PreparedStatement statement = null;
    try {
        connection = getConnection();
        initTimeDiff(connection);
        statement = connection.prepareStatement(sql);
        setQueryTimeout(statement);
        now = System.currentTimeMillis() + diffFromCurrentTime;
        statement.setString(1, getLeaseHolderId());
        statement.setLong(2, now + lockAcquireSleepInterval);
        statement.setLong(3, now);
        int result = statement.executeUpdate();
        if (result == 1) {
            // we got the lease, verify we still have it
            if (keepAlive()) {
                break;
            }
        }
        reportLeasOwnerShipAndDuration(connection);
    } catch (Exception e) {
{noformat}
[jira] [Commented] (ACTIVEMQ6-29) Reconnect to any live
[ https://issues.apache.org/jira/browse/ACTIVEMQ6-29?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14209437#comment-14209437 ] Andy Taylor commented on ACTIVEMQ6-29: -- Configurable yes, but deterministic. It should either be one or the other, I think. Reconnect to any live - Key: ACTIVEMQ6-29 URL: https://issues.apache.org/jira/browse/ACTIVEMQ6-29 Project: Apache ActiveMQ 6 Issue Type: New Feature Reporter: clebert suconic Fix For: 6.1.0 Right now clients will always connect to their backup or original lives. We could in a configurable fashion have the client connecting anywhere after a certain number of retries to their original nodes. (This will only be applied to core clients since AMQP will always reconnect to their original addresses). -- This message was sent by Atlassian JIRA (v6.3.4#6332)