[jira] [Updated] (AMQ-5659) Add safety measure against infinite loop when store exception prevents message removal

2015-03-19 Thread metatech (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

metatech updated AMQ-5659:
--
Attachment: (was: purge_queue_abort_loop.patch)

 Add safety measure against infinite loop when store exception prevents 
 message removal
 --

 Key: AMQ-5659
 URL: https://issues.apache.org/jira/browse/AMQ-5659
 Project: ActiveMQ
  Issue Type: Improvement
  Components: Broker
Affects Versions: 5.7.0
 Environment: ServiceMix 4.5.3
Reporter: metatech
 Attachments: purge_queue_abort_loop_v3.patch


 When the broker is configured with a database store, the purge operation 
 enters an infinite loop when the message removal operation fails, for 
 instance when the broker datasource is being restarted (see example stack 
 trace below). 
 Here is a patch which adds a safety measure for the case where the dequeue count of
 the queue does not increase between 2 message removal operations.  The check
 is not guaranteed to detect the problem on the next iteration, because a
 business consumer might also be dequeuing messages from the queue.  But the
 purge is probably much faster than the business consumer, so failing to
 remove 2 messages in a row is enough to detect the problem and abort the
 infinite loop.
 {code}
 2015-03-05 15:38:30,353 | WARN  | 14571659-2202099 |  | 
 JDBCPersistenceAdapter   | Could not get JDBC connection: Data source 
 is closed
 java.sql.SQLException: Data source is closed
   at 
 org.apache.commons.dbcp.BasicDataSource.createDataSource(BasicDataSource.java:1362)
   at 
 org.apache.commons.dbcp.BasicDataSource.getConnection(BasicDataSource.java:1044)
   at 
 org.apache.activemq.store.jdbc.TransactionContext.getConnection(TransactionContext.java:58)
   at 
 org.apache.activemq.store.jdbc.adapter.DefaultJDBCAdapter.getStoreSequenceId(DefaultJDBCAdapter.java:285)
   at 
 org.apache.activemq.store.jdbc.JDBCPersistenceAdapter.getStoreSequenceIdForMessageId(JDBCPersistenceAdapter.java:787)
   at 
 org.apache.activemq.store.jdbc.JDBCMessageStore.removeMessage(JDBCMessageStore.java:194)
   at 
 org.apache.activemq.store.memory.MemoryTransactionStore.removeMessage(MemoryTransactionStore.java:358)
   at 
 org.apache.activemq.store.memory.MemoryTransactionStore$1.removeAsyncMessage(MemoryTransactionStore.java:166)
   at org.apache.activemq.broker.region.Queue.acknowledge(Queue.java:846)
   at 
 org.apache.activemq.broker.region.Queue.removeMessage(Queue.java:1602)
   at 
 org.apache.activemq.broker.region.Queue.removeMessage(Queue.java:1594)
   at 
 org.apache.activemq.broker.region.Queue.removeMessage(Queue.java:1579)
   at org.apache.activemq.broker.region.Queue.purge(Queue.java:1158)
   at org.apache.activemq.broker.jmx.QueueView.purge(QueueView.java:54)
 {code}
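
Below is a minimal, self-contained Java sketch of the safety measure described
above. It is illustrative only (not the attached purge_queue_abort_loop patch):
the Store interface, the dequeueCount field and the threshold of 3 consecutive
failures (the tolerance the later patch versions in this thread describe) are
assumptions standing in for the broker's real Queue/DestinationStatistics code.
{code}
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Queue;

public class PurgeLoopGuard {

    static final int MAX_CONSECUTIVE_FAILURES = 3;

    interface Store {
        /** @return true if the message was really removed from the persistent store */
        boolean remove(String messageId);
    }

    // Stands in for destinationStatistics.getDequeues().getCount() in the broker.
    static long dequeueCount;

    static void purge(Queue<String> queue, Store store) {
        int consecutiveFailures = 0;
        long previous = dequeueCount;
        while (!queue.isEmpty()) {
            String id = queue.peek();
            if (store.remove(id)) {       // removal can fail, e.g. "Data source is closed"
                queue.poll();
                dequeueCount++;
            }
            if (dequeueCount == previous) {
                // No progress between two removal operations: likely a fatal store error.
                if (++consecutiveFailures >= MAX_CONSECUTIVE_FAILURES) {
                    throw new IllegalStateException("Purge aborted: store is not removing messages");
                }
            } else {
                consecutiveFailures = 0;  // transient failure recovered, keep purging
            }
            previous = dequeueCount;
        }
    }

    public static void main(String[] args) {
        Queue<String> q = new ArrayDeque<>(Arrays.asList("m1", "m2", "m3"));
        try {
            purge(q, id -> false);        // a store that always fails triggers the abort
        } catch (IllegalStateException expected) {
            System.out.println(expected.getMessage());
        }
    }
}
{code}
As the description notes, a concurrent business consumer could also bump the
dequeue count, so the guard only aborts once several removal attempts in a row
make no progress.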



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMQ-5659) Add safety measure against infinite loop when store exception prevents message removal

2015-03-19 Thread metatech (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

metatech updated AMQ-5659:
--
Attachment: (was: purge_queue_abort_loop_v2.patch)

 Add safety measure against infinite loop when store exception prevents 
 message removal
 --

 Key: AMQ-5659
 URL: https://issues.apache.org/jira/browse/AMQ-5659
 Project: ActiveMQ
  Issue Type: Improvement
  Components: Broker
Affects Versions: 5.7.0
 Environment: ServiceMix 4.5.3
Reporter: metatech
 Attachments: purge_queue_abort_loop.patch, 
 purge_queue_abort_loop_v3.patch


 When the broker is configured with a database store, the purge operation 
 enters an infinite loop when the message removal operation fails, for 
 instance when the broker datasource is being restarted (see example stack 
 trace below). 
 Here is a patch which adds a safety measure for the case where the dequeue count of
 the queue does not increase between 2 message removal operations.  The check
 is not guaranteed to detect the problem on the next iteration, because a
 business consumer might also be dequeuing messages from the queue.  But the
 purge is probably much faster than the business consumer, so failing to
 remove 2 messages in a row is enough to detect the problem and abort the
 infinite loop.
 {code}
 2015-03-05 15:38:30,353 | WARN  | 14571659-2202099 |  | 
 JDBCPersistenceAdapter   | Could not get JDBC connection: Data source 
 is closed
 java.sql.SQLException: Data source is closed
   at 
 org.apache.commons.dbcp.BasicDataSource.createDataSource(BasicDataSource.java:1362)
   at 
 org.apache.commons.dbcp.BasicDataSource.getConnection(BasicDataSource.java:1044)
   at 
 org.apache.activemq.store.jdbc.TransactionContext.getConnection(TransactionContext.java:58)
   at 
 org.apache.activemq.store.jdbc.adapter.DefaultJDBCAdapter.getStoreSequenceId(DefaultJDBCAdapter.java:285)
   at 
 org.apache.activemq.store.jdbc.JDBCPersistenceAdapter.getStoreSequenceIdForMessageId(JDBCPersistenceAdapter.java:787)
   at 
 org.apache.activemq.store.jdbc.JDBCMessageStore.removeMessage(JDBCMessageStore.java:194)
   at 
 org.apache.activemq.store.memory.MemoryTransactionStore.removeMessage(MemoryTransactionStore.java:358)
   at 
 org.apache.activemq.store.memory.MemoryTransactionStore$1.removeAsyncMessage(MemoryTransactionStore.java:166)
   at org.apache.activemq.broker.region.Queue.acknowledge(Queue.java:846)
   at 
 org.apache.activemq.broker.region.Queue.removeMessage(Queue.java:1602)
   at 
 org.apache.activemq.broker.region.Queue.removeMessage(Queue.java:1594)
   at 
 org.apache.activemq.broker.region.Queue.removeMessage(Queue.java:1579)
   at org.apache.activemq.broker.region.Queue.purge(Queue.java:1158)
   at org.apache.activemq.broker.jmx.QueueView.purge(QueueView.java:54)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMQ-5659) Add safety measure against infinite loop when store exception prevents message removal

2015-03-13 Thread metatech (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

metatech updated AMQ-5659:
--
Attachment: purge_queue_abort_loop_v3.patch

V3 of the patch fixes the incorrect threshold

 Add safety measure against infinite loop when store exception prevents 
 message removal
 --

 Key: AMQ-5659
 URL: https://issues.apache.org/jira/browse/AMQ-5659
 Project: ActiveMQ
  Issue Type: Improvement
  Components: Broker
Affects Versions: 5.7.0
 Environment: ServiceMix 4.5.3
Reporter: metatech
 Attachments: purge_queue_abort_loop.patch, 
 purge_queue_abort_loop_v2.patch, purge_queue_abort_loop_v3.patch


 When the broker is configured with a database store, the purge operation 
 enters an infinite loop when the message removal operation fails, for 
 instance when the broker datasource is being restarted (see example stack 
 trace below). 
 Here is a patch which adds a safety measure for the case where the dequeue count of
 the queue does not increase between 2 message removal operations.  The check
 is not guaranteed to detect the problem on the next iteration, because a
 business consumer might also be dequeuing messages from the queue.  But the
 purge is probably much faster than the business consumer, so failing to
 remove 2 messages in a row is enough to detect the problem and abort the
 infinite loop.
 {code}
 2015-03-05 15:38:30,353 | WARN  | 14571659-2202099 |  | 
 JDBCPersistenceAdapter   | Could not get JDBC connection: Data source 
 is closed
 java.sql.SQLException: Data source is closed
   at 
 org.apache.commons.dbcp.BasicDataSource.createDataSource(BasicDataSource.java:1362)
   at 
 org.apache.commons.dbcp.BasicDataSource.getConnection(BasicDataSource.java:1044)
   at 
 org.apache.activemq.store.jdbc.TransactionContext.getConnection(TransactionContext.java:58)
   at 
 org.apache.activemq.store.jdbc.adapter.DefaultJDBCAdapter.getStoreSequenceId(DefaultJDBCAdapter.java:285)
   at 
 org.apache.activemq.store.jdbc.JDBCPersistenceAdapter.getStoreSequenceIdForMessageId(JDBCPersistenceAdapter.java:787)
   at 
 org.apache.activemq.store.jdbc.JDBCMessageStore.removeMessage(JDBCMessageStore.java:194)
   at 
 org.apache.activemq.store.memory.MemoryTransactionStore.removeMessage(MemoryTransactionStore.java:358)
   at 
 org.apache.activemq.store.memory.MemoryTransactionStore$1.removeAsyncMessage(MemoryTransactionStore.java:166)
   at org.apache.activemq.broker.region.Queue.acknowledge(Queue.java:846)
   at 
 org.apache.activemq.broker.region.Queue.removeMessage(Queue.java:1602)
   at 
 org.apache.activemq.broker.region.Queue.removeMessage(Queue.java:1594)
   at 
 org.apache.activemq.broker.region.Queue.removeMessage(Queue.java:1579)
   at org.apache.activemq.broker.region.Queue.purge(Queue.java:1158)
   at org.apache.activemq.broker.jmx.QueueView.purge(QueueView.java:54)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMQ-5659) Add safety measure against infinite loop when store exception prevents message removal

2015-03-13 Thread metatech (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14360133#comment-14360133
 ] 

metatech commented on AMQ-5659:
---

Gary, breaking out of the for loop is not enough. We really want to break out of 
both nested loops and also skip the reset of the queue count performed at the end 
of the method.

 Add safety measure against infinite loop when store exception prevents 
 message removal
 --

 Key: AMQ-5659
 URL: https://issues.apache.org/jira/browse/AMQ-5659
 Project: ActiveMQ
  Issue Type: Improvement
  Components: Broker
Affects Versions: 5.7.0
 Environment: ServiceMix 4.5.3
Reporter: metatech
 Attachments: purge_queue_abort_loop.patch, 
 purge_queue_abort_loop_v2.patch


 When the broker is configured with a database store, the purge operation 
 enters an infinite loop when the message removal operation fails, for 
 instance when the broker datasource is being restarted (see example stack 
 trace below). 
 Here is a patch which adds a safety measure for the case where the dequeue count of
 the queue does not increase between 2 message removal operations.  The check
 is not guaranteed to detect the problem on the next iteration, because a
 business consumer might also be dequeuing messages from the queue.  But the
 purge is probably much faster than the business consumer, so failing to
 remove 2 messages in a row is enough to detect the problem and abort the
 infinite loop.
 {code}
 2015-03-05 15:38:30,353 | WARN  | 14571659-2202099 |  | 
 JDBCPersistenceAdapter   | Could not get JDBC connection: Data source 
 is closed
 java.sql.SQLException: Data source is closed
   at 
 org.apache.commons.dbcp.BasicDataSource.createDataSource(BasicDataSource.java:1362)
   at 
 org.apache.commons.dbcp.BasicDataSource.getConnection(BasicDataSource.java:1044)
   at 
 org.apache.activemq.store.jdbc.TransactionContext.getConnection(TransactionContext.java:58)
   at 
 org.apache.activemq.store.jdbc.adapter.DefaultJDBCAdapter.getStoreSequenceId(DefaultJDBCAdapter.java:285)
   at 
 org.apache.activemq.store.jdbc.JDBCPersistenceAdapter.getStoreSequenceIdForMessageId(JDBCPersistenceAdapter.java:787)
   at 
 org.apache.activemq.store.jdbc.JDBCMessageStore.removeMessage(JDBCMessageStore.java:194)
   at 
 org.apache.activemq.store.memory.MemoryTransactionStore.removeMessage(MemoryTransactionStore.java:358)
   at 
 org.apache.activemq.store.memory.MemoryTransactionStore$1.removeAsyncMessage(MemoryTransactionStore.java:166)
   at org.apache.activemq.broker.region.Queue.acknowledge(Queue.java:846)
   at 
 org.apache.activemq.broker.region.Queue.removeMessage(Queue.java:1602)
   at 
 org.apache.activemq.broker.region.Queue.removeMessage(Queue.java:1594)
   at 
 org.apache.activemq.broker.region.Queue.removeMessage(Queue.java:1579)
   at org.apache.activemq.broker.region.Queue.purge(Queue.java:1158)
   at org.apache.activemq.broker.jmx.QueueView.purge(QueueView.java:54)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (AMQ-5659) Add safety measure against infinite loop when store exception prevents message removal

2015-03-12 Thread metatech (JIRA)
metatech created AMQ-5659:
-

 Summary: Add safety measure against infinite loop when store 
exception prevents message removal
 Key: AMQ-5659
 URL: https://issues.apache.org/jira/browse/AMQ-5659
 Project: ActiveMQ
  Issue Type: Improvement
  Components: Broker
Affects Versions: 5.7.0
 Environment: ServiceMix 4.5.3
Reporter: metatech


When the broker is configured with a database store, the purge operation 
enters an infinite loop when the message removal operation fails, for instance 
when the broker datasource is being restarted (see example stack trace below). 

Here is a patch which adds a safety measure for the case where the dequeue count of the 
queue does not increase between 2 message removal operations.  The check is 
not guaranteed to detect the problem on the next iteration, because a business 
consumer might also be dequeuing messages from the queue.  But the purge is 
probably much faster than the business consumer, so failing to remove 2 
messages in a row is enough to detect the problem and abort the infinite 
loop.

{code}
2015-03-05 15:38:30,353 | WARN  | 14571659-2202099 |  | JDBCPersistenceAdapter  
 | Could not get JDBC connection: Data source is closed
java.sql.SQLException: Data source is closed
at 
org.apache.commons.dbcp.BasicDataSource.createDataSource(BasicDataSource.java:1362)
at 
org.apache.commons.dbcp.BasicDataSource.getConnection(BasicDataSource.java:1044)
at 
org.apache.activemq.store.jdbc.TransactionContext.getConnection(TransactionContext.java:58)
at 
org.apache.activemq.store.jdbc.adapter.DefaultJDBCAdapter.getStoreSequenceId(DefaultJDBCAdapter.java:285)
at 
org.apache.activemq.store.jdbc.JDBCPersistenceAdapter.getStoreSequenceIdForMessageId(JDBCPersistenceAdapter.java:787)
at 
org.apache.activemq.store.jdbc.JDBCMessageStore.removeMessage(JDBCMessageStore.java:194)
at 
org.apache.activemq.store.memory.MemoryTransactionStore.removeMessage(MemoryTransactionStore.java:358)
at 
org.apache.activemq.store.memory.MemoryTransactionStore$1.removeAsyncMessage(MemoryTransactionStore.java:166)
at org.apache.activemq.broker.region.Queue.acknowledge(Queue.java:846)
at 
org.apache.activemq.broker.region.Queue.removeMessage(Queue.java:1602)
at 
org.apache.activemq.broker.region.Queue.removeMessage(Queue.java:1594)
at 
org.apache.activemq.broker.region.Queue.removeMessage(Queue.java:1579)
at org.apache.activemq.broker.region.Queue.purge(Queue.java:1158)
at org.apache.activemq.broker.jmx.QueueView.purge(QueueView.java:54)
{code}







--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMQ-5659) Add safety measure against infinite loop when store exception prevents message removal

2015-03-12 Thread metatech (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

metatech updated AMQ-5659:
--
Attachment: purge_queue_abort_loop.patch

 Add safety measure against infinite loop when store exception prevents 
 message removal
 --

 Key: AMQ-5659
 URL: https://issues.apache.org/jira/browse/AMQ-5659
 Project: ActiveMQ
  Issue Type: Improvement
  Components: Broker
Affects Versions: 5.7.0
 Environment: ServiceMix 4.5.3
Reporter: metatech
 Attachments: purge_queue_abort_loop.patch


 When the broker is configured with a database store, the purge operation 
 enters an infinite loop when the message removal operation fails, for 
 instance when the broker datasource is being restarted (see example stack 
 trace below). 
 Here is a patch which adds a safety measure for the case where the dequeue count of
 the queue does not increase between 2 message removal operations.  The check
 is not guaranteed to detect the problem on the next iteration, because a
 business consumer might also be dequeuing messages from the queue.  But the
 purge is probably much faster than the business consumer, so failing to
 remove 2 messages in a row is enough to detect the problem and abort the
 infinite loop.
 {code}
 2015-03-05 15:38:30,353 | WARN  | 14571659-2202099 |  | 
 JDBCPersistenceAdapter   | Could not get JDBC connection: Data source 
 is closed
 java.sql.SQLException: Data source is closed
   at 
 org.apache.commons.dbcp.BasicDataSource.createDataSource(BasicDataSource.java:1362)
   at 
 org.apache.commons.dbcp.BasicDataSource.getConnection(BasicDataSource.java:1044)
   at 
 org.apache.activemq.store.jdbc.TransactionContext.getConnection(TransactionContext.java:58)
   at 
 org.apache.activemq.store.jdbc.adapter.DefaultJDBCAdapter.getStoreSequenceId(DefaultJDBCAdapter.java:285)
   at 
 org.apache.activemq.store.jdbc.JDBCPersistenceAdapter.getStoreSequenceIdForMessageId(JDBCPersistenceAdapter.java:787)
   at 
 org.apache.activemq.store.jdbc.JDBCMessageStore.removeMessage(JDBCMessageStore.java:194)
   at 
 org.apache.activemq.store.memory.MemoryTransactionStore.removeMessage(MemoryTransactionStore.java:358)
   at 
 org.apache.activemq.store.memory.MemoryTransactionStore$1.removeAsyncMessage(MemoryTransactionStore.java:166)
   at org.apache.activemq.broker.region.Queue.acknowledge(Queue.java:846)
   at 
 org.apache.activemq.broker.region.Queue.removeMessage(Queue.java:1602)
   at 
 org.apache.activemq.broker.region.Queue.removeMessage(Queue.java:1594)
   at 
 org.apache.activemq.broker.region.Queue.removeMessage(Queue.java:1579)
   at org.apache.activemq.broker.region.Queue.purge(Queue.java:1158)
   at org.apache.activemq.broker.jmx.QueueView.purge(QueueView.java:54)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMQ-5659) Add safety measure against infinite loop when store exception prevents message removal

2015-03-12 Thread metatech (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14358852#comment-14358852
 ] 

metatech commented on AMQ-5659:
---

Hi Gary. The code tries to abort only on fatal/unrecoverable errors.  If the 
error is transient (only for one message), the purge goes on.
Here is another version of the patch which detects 3 failures in a row before 
aborting, so that transient errors are ignored.

 Add safety measure against infinite loop when store exception prevents 
 message removal
 --

 Key: AMQ-5659
 URL: https://issues.apache.org/jira/browse/AMQ-5659
 Project: ActiveMQ
  Issue Type: Improvement
  Components: Broker
Affects Versions: 5.7.0
 Environment: ServiceMix 4.5.3
Reporter: metatech
 Attachments: purge_queue_abort_loop.patch, 
 purge_queue_abort_loop_v2.patch


 When the broker is configured with a database store, the purge operation 
 enters an infinite loop when the message removal operation fails, for 
 instance when the broker datasource is being restarted (see example stack 
 trace below). 
 Here is a patch which adds a safety measure for the case where the dequeue count of
 the queue does not increase between 2 message removal operations.  The check
 is not guaranteed to detect the problem on the next iteration, because a
 business consumer might also be dequeuing messages from the queue.  But the
 purge is probably much faster than the business consumer, so failing to
 remove 2 messages in a row is enough to detect the problem and abort the
 infinite loop.
 {code}
 2015-03-05 15:38:30,353 | WARN  | 14571659-2202099 |  | 
 JDBCPersistenceAdapter   | Could not get JDBC connection: Data source 
 is closed
 java.sql.SQLException: Data source is closed
   at 
 org.apache.commons.dbcp.BasicDataSource.createDataSource(BasicDataSource.java:1362)
   at 
 org.apache.commons.dbcp.BasicDataSource.getConnection(BasicDataSource.java:1044)
   at 
 org.apache.activemq.store.jdbc.TransactionContext.getConnection(TransactionContext.java:58)
   at 
 org.apache.activemq.store.jdbc.adapter.DefaultJDBCAdapter.getStoreSequenceId(DefaultJDBCAdapter.java:285)
   at 
 org.apache.activemq.store.jdbc.JDBCPersistenceAdapter.getStoreSequenceIdForMessageId(JDBCPersistenceAdapter.java:787)
   at 
 org.apache.activemq.store.jdbc.JDBCMessageStore.removeMessage(JDBCMessageStore.java:194)
   at 
 org.apache.activemq.store.memory.MemoryTransactionStore.removeMessage(MemoryTransactionStore.java:358)
   at 
 org.apache.activemq.store.memory.MemoryTransactionStore$1.removeAsyncMessage(MemoryTransactionStore.java:166)
   at org.apache.activemq.broker.region.Queue.acknowledge(Queue.java:846)
   at 
 org.apache.activemq.broker.region.Queue.removeMessage(Queue.java:1602)
   at 
 org.apache.activemq.broker.region.Queue.removeMessage(Queue.java:1594)
   at 
 org.apache.activemq.broker.region.Queue.removeMessage(Queue.java:1579)
   at org.apache.activemq.broker.region.Queue.purge(Queue.java:1158)
   at org.apache.activemq.broker.jmx.QueueView.purge(QueueView.java:54)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMQ-5659) Add safety measure against infinite loop when store exception prevents message removal

2015-03-12 Thread metatech (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

metatech updated AMQ-5659:
--
Attachment: purge_queue_abort_loop_v2.patch

 Add safety measure against infinite loop when store exception prevents 
 message removal
 --

 Key: AMQ-5659
 URL: https://issues.apache.org/jira/browse/AMQ-5659
 Project: ActiveMQ
  Issue Type: Improvement
  Components: Broker
Affects Versions: 5.7.0
 Environment: ServiceMix 4.5.3
Reporter: metatech
 Attachments: purge_queue_abort_loop.patch, 
 purge_queue_abort_loop_v2.patch


 When the broker is configured with a database store, the purge operation 
 enters an infinite loop when the message removal operation fails, for 
 instance when the broker datasource is being restarted (see example stack 
 trace below). 
 Here is a patch which adds a safety measure for the case where the dequeue count of
 the queue does not increase between 2 message removal operations.  The check
 is not guaranteed to detect the problem on the next iteration, because a
 business consumer might also be dequeuing messages from the queue.  But the
 purge is probably much faster than the business consumer, so failing to
 remove 2 messages in a row is enough to detect the problem and abort the
 infinite loop.
 {code}
 2015-03-05 15:38:30,353 | WARN  | 14571659-2202099 |  | 
 JDBCPersistenceAdapter   | Could not get JDBC connection: Data source 
 is closed
 java.sql.SQLException: Data source is closed
   at 
 org.apache.commons.dbcp.BasicDataSource.createDataSource(BasicDataSource.java:1362)
   at 
 org.apache.commons.dbcp.BasicDataSource.getConnection(BasicDataSource.java:1044)
   at 
 org.apache.activemq.store.jdbc.TransactionContext.getConnection(TransactionContext.java:58)
   at 
 org.apache.activemq.store.jdbc.adapter.DefaultJDBCAdapter.getStoreSequenceId(DefaultJDBCAdapter.java:285)
   at 
 org.apache.activemq.store.jdbc.JDBCPersistenceAdapter.getStoreSequenceIdForMessageId(JDBCPersistenceAdapter.java:787)
   at 
 org.apache.activemq.store.jdbc.JDBCMessageStore.removeMessage(JDBCMessageStore.java:194)
   at 
 org.apache.activemq.store.memory.MemoryTransactionStore.removeMessage(MemoryTransactionStore.java:358)
   at 
 org.apache.activemq.store.memory.MemoryTransactionStore$1.removeAsyncMessage(MemoryTransactionStore.java:166)
   at org.apache.activemq.broker.region.Queue.acknowledge(Queue.java:846)
   at 
 org.apache.activemq.broker.region.Queue.removeMessage(Queue.java:1602)
   at 
 org.apache.activemq.broker.region.Queue.removeMessage(Queue.java:1594)
   at 
 org.apache.activemq.broker.region.Queue.removeMessage(Queue.java:1579)
   at org.apache.activemq.broker.region.Queue.purge(Queue.java:1158)
   at org.apache.activemq.broker.jmx.QueueView.purge(QueueView.java:54)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMQ-4598) Negative JMX QueueSize attribute in due to purging a queue

2015-02-17 Thread metatech (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14324048#comment-14324048
 ] 

metatech commented on AMQ-4598:
---

This fix works, but the queue size then hides all the inflight messages because 
the counter goes immediately to 0, although inflight messages are still being 
processed.
The behaviour differs between a queue that is not purged (all inflight messages 
are counted) and a queue that is purged (inflight messages are not counted).
It would be more consistent to always reset the counter to the number of 
inflight messages in the queue (see attachment 
amq_negative_queue_size_after_purge.diff).
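
As a toy illustration of that consistency argument (the counter values are
made up, and this only mirrors the idea of the attached diff rather than
reproducing broker code):
{code}
import java.util.concurrent.atomic.AtomicLong;

public class PurgeCounterReset {
    static final AtomicLong queueSize = new AtomicLong(1000); // QueueSize counter
    static final AtomicLong inflight  = new AtomicLong(42);   // prefetched, not yet acked

    public static void main(String[] args) {
        // Current behaviour: forcing the counter to zero hides the 42 in-flight messages,
        // and their later acknowledgements can drive QueueSize negative.
        queueSize.set(0);

        // Proposed behaviour: reset to the in-flight count instead, so purged and
        // non-purged queues report QueueSize consistently.
        queueSize.set(inflight.get());
        System.out.println("QueueSize after purge = " + queueSize.get()); // 42
    }
}
{code}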


 Negative JMX QueueSize attribute in due to purging a queue
 --

 Key: AMQ-4598
 URL: https://issues.apache.org/jira/browse/AMQ-4598
 Project: ActiveMQ
  Issue Type: Bug
Affects Versions: 5.8.0
Reporter: Dejan Bosanac
Assignee: Dejan Bosanac
 Fix For: 5.9.0

 Attachments: amq_negative_queue_size_after_purge.diff


 If you purge a queue that has a bunch of messages prefetched you may end up 
 with a negative QueueSize in JMX as the prefetched messages get acked after 
 the purge. 
 This behavior should be considered a bug as many users depend on these JMX 
 statistics.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMQ-4598) Negative JMX QueueSize attribute in due to purging a queue

2015-02-17 Thread metatech (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

metatech updated AMQ-4598:
--
Attachment: amq_negative_queue_size_after_purge.diff

 Negative JMX QueueSize attribute in due to purging a queue
 --

 Key: AMQ-4598
 URL: https://issues.apache.org/jira/browse/AMQ-4598
 Project: ActiveMQ
  Issue Type: Bug
Affects Versions: 5.8.0
Reporter: Dejan Bosanac
Assignee: Dejan Bosanac
 Fix For: 5.9.0

 Attachments: amq_negative_queue_size_after_purge.diff


 If you purge a queue that has a bunch of messages prefetched you may end up 
 with a negative QueueSize in JMX as the prefetched messages get acked after 
 the purge. 
 This behavior should be considered a bug as many users depend on these JMX 
 statistics.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMQ-3472) Negative number of pending messages in broker

2015-02-17 Thread metatech (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-3472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14323912#comment-14323912
 ] 

metatech commented on AMQ-3472:
---

A workaround in 5.7 is to replace the following line in the purge method of class 
org.apache.activemq.broker.region.Queue:
-this.destinationStatistics.getMessages().setCount(0);
+this.destinationStatistics.getMessages().setCount(this.destinationStatistics.getInflight().getCount());
AMQ-4598, included in ActiveMQ 5.8, probably has a real fix.


 Negative number of pending messages in broker
 -

 Key: AMQ-3472
 URL: https://issues.apache.org/jira/browse/AMQ-3472
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker
Affects Versions: 5.5.0
 Environment: RHEL 5.5, Sun Java 1.6.0_21, Tomcat 6, Synchronous 
 message consumers, 10k messages per consumer prefetch, 2-10 million messages 
 a day,  in-container broker.
Reporter: Marcin Depinski

 After the purge and deletion of a queue and a restart of the broker, the 
 broker shows a negative number of pending messages. Similar to AMQ-1693. Seen 
 this at least a dozen times in our environment. Another restart with no 
 messages pending and no consumers will zero the counter correctly but we 
 can't be restarting our app every time this happens. Things still seem to 
 work after the counter goes negative but if anything inside ActiveMQ is 
 counting on this counter being correct well... let me know if I can help



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMQ-3692) ActiveMQ OSGi bundle should be stopped when broker stops itself

2015-01-15 Thread metatech (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-3692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14278518#comment-14278518
 ] 

metatech commented on AMQ-3692:
---

Here is a workaround that allows using the Blueprint XBean syntax, with 
blueprint.aries.xml-validation:=false.
I have not yet found how to enhance the validator to allow it.
{code}
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
   xmlns:cm="http://aries.apache.org/blueprint/xmlns/blueprint-cm/v1.0.0"
   xmlns:ext="http://aries.apache.org/blueprint/xmlns/blueprint-ext/v1.0.0"
   xmlns:amq="http://activemq.apache.org/schema/core"
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xsi:schemaLocation="
http://www.osgi.org/xmlns/blueprint/v1.0.0
http://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd
http://activemq.apache.org/schema/core
http://activemq.apache.org/schema/core/activemq-core-5.5.0.xsd">

<manifest>
Bundle-SymbolicName: activemq-broker.xml; blueprint.aries.xml-validation:=false
</manifest>

<broker>

<shutdownHooks>
<bean class="org.apache.activemq.util.osgi.BrokerBundleWatcher"
      xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
<property name="bundleContext" ref="blueprintBundleContext"/>
</bean>
</shutdownHooks>

</broker>
</blueprint>
{code}

Without it, the error is the following in ServiceMix 4.5.3 : 

{code}
Unable to start blueprint container for bundle activemq-broker.xml
org.osgi.service.blueprint.container.ComponentDefinitionException: Unable to 
validate xml
at org.apache.aries.blueprint.container.Parser.validate(Parser.java:288)
at 
org.apache.aries.blueprint.container.BlueprintContainerImpl.doRun(BlueprintContainerImpl.java:281)
at 
org.apache.aries.blueprint.container.BlueprintContainerImpl.run(BlueprintContainerImpl.java:230)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Caused by: org.xml.sax.SAXParseException: cvc-complex-type.2.4.c: The matching 
wildcard is strict, but no declaration can be found for element 'bean'.
{code}



 ActiveMQ OSGi bundle should be stopped when broker stops itself
 ---

 Key: AMQ-3692
 URL: https://issues.apache.org/jira/browse/AMQ-3692
 Project: ActiveMQ
  Issue Type: New Feature
  Components: Broker
Affects Versions: 5.4.2
 Environment: ServiceMix 4.3
Reporter: metatech
Assignee: Hiram Chirino
 Fix For: 5.9.0

 Attachments: BrokerBundleWatcher.patch, BrokerBundleWatcher_v2.patch, 
 BrokerBundleWatcher_v3.patch, BrokerService.patch, activemq-broker.xml


 In case of error, the ActiveMQ broker can stop itself.
 In an OSGi/Blueprint environment, however, the bundle is still in the 
 Active/Created state, which misleads external monitoring software into 
 thinking that the broker is running fine.
 This patch stops the bundle when the broker stops itself.
 This patch can also auto-restart the bundle, which will restart the broker.
 This is critical in a Master/Slave configuration : when the connection to 
 the database is lost, the broker cannot maintain the DB exclusive lock, and 
 it stops itself.  The bundle should be stopped and started again, so that it 
 re-enters the Creating state, in which it waits to obtain the DB lock 
 again.
 The class BrokerBundleWatcher needs to be registered with the 
 shutdownHooks property of the ActiveMQ BrokerService.  However, there is 
 a limitation with the XBean syntax in a Blueprint XML, which does not allow 
 defining inner beans.  The workaround is to define the activemq-broker.xml 
 in full native Blueprint syntax (no XBean).
 The patch also provides a modified version of the BrokerService that injects 
 its own reference into the ShutdownHooks which implement the 
 BrokerServiceAware interface.
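
For orientation only, here is a hypothetical sketch of such a shutdown hook. It
is not the attached BrokerBundleWatcher.patch: the class name, constructor and
restart flag are invented, and it assumes the hook can be registered as a plain
Runnable through the shutdownHooks property mentioned above.
{code}
import org.osgi.framework.Bundle;
import org.osgi.framework.BundleContext;

public class BundleStoppingShutdownHook implements Runnable {

    private final BundleContext bundleContext; // e.g. the injected blueprintBundleContext
    private final boolean restart;

    public BundleStoppingShutdownHook(BundleContext bundleContext, boolean restart) {
        this.bundleContext = bundleContext;
        this.restart = restart;
    }

    @Override
    public void run() {
        // Run asynchronously so the broker's own shutdown sequence is not blocked.
        new Thread(() -> {
            try {
                Bundle bundle = bundleContext.getBundle();
                bundle.stop();      // leave the misleading Active/Created state
                if (restart) {
                    bundle.start(); // re-enter Creating and wait for the DB lock again
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }, "broker-bundle-watcher").start();
    }
}
{code}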



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (AMQ-3696) Slave broker cannot be stopped in a JDBC Master/Slave configuration within OSGi

2014-12-10 Thread metatech (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-3696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14240929#comment-14240929
 ] 

metatech edited comment on AMQ-3696 at 12/10/14 10:57 AM:
--

[~dejanb] I only had the time and opportunity to test this new feature last 
week in ServiceMix 4.5.3 (including ActiveMQ 5.7.0).  It indeed works fine. 
Thanks !


was (Author: metatech):
[~dejanb] I only had the time and opportunity to test this new feature last 
week in ServiceMix 4.5.0 (including ActiveMQ 5.7.0).  It indeed works fine. 
Thanks !

 Slave broker cannot be stopped in a JDBC Master/Slave configuration within 
 OSGi
 ---

 Key: AMQ-3696
 URL: https://issues.apache.org/jira/browse/AMQ-3696
 Project: ActiveMQ
  Issue Type: Improvement
  Components: Broker
Affects Versions: 5.4.2
 Environment: ServiceMix 4.3
Reporter: metatech
Assignee: Dejan Bosanac
 Fix For: 5.7.0

 Attachments: DatabaseLockerUnblocker.patch

   Original Estimate: 1h
  Remaining Estimate: 1h

 A Blueprint container cannot be stopped while it is in the Creating state, 
 because both operations are synchronized in BlueprintContainerImpl.
 The impact is that a slave broker cannot be stopped. Fortunately, before the 
 broker itself is stopped, the OSGi services are first unregistered, which 
 calls the configured OSGi unregistration listeners.
 This patch provides a class which is an OSGi service unregistration listener, 
 allowing the database locker to be stopped while it is blocked in the Creating 
 state.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMQ-3696) Slave broker cannot be stopped in a JDBC Master/Slave configuration within OSGi

2014-12-10 Thread metatech (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-3696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14240929#comment-14240929
 ] 

metatech commented on AMQ-3696:
---

[~dejanb] I only had the time and opportunity to test this new feature last 
week in ServiceMix 4.5.0 (including ActiveMQ 5.7.0).  It indeed works fine. 
Thanks !

 Slave broker cannot be stopped in a JDBC Master/Slave configuration within 
 OSGi
 ---

 Key: AMQ-3696
 URL: https://issues.apache.org/jira/browse/AMQ-3696
 Project: ActiveMQ
  Issue Type: Improvement
  Components: Broker
Affects Versions: 5.4.2
 Environment: ServiceMix 4.3
Reporter: metatech
Assignee: Dejan Bosanac
 Fix For: 5.7.0

 Attachments: DatabaseLockerUnblocker.patch

   Original Estimate: 1h
  Remaining Estimate: 1h

 A Blueprint container cannot be stopped while it is in the Creating state, 
 because both operations are synchronized in BlueprintContainerImpl.
 The impact is that a slave broker cannot be stopped. Fortunately, before the 
 broker itself is stopped, the OSGi services are first unregistered, which 
 calls the configured OSGi unregistration listeners.
 This patch provides a class which is an OSGi service unregistration listener, 
 allowing the database locker to be stopped while it is blocked in the Creating 
 state.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMQ-4122) Lease Database Locker failover broken

2014-03-06 Thread metatech (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

metatech updated AMQ-4122:
--

Attachment: amq_dual_master_5.7_backport.patch

For those using ServiceMix 4.5, the file amq_dual_master_5.7_backport.patch 
provides a backport for ActiveMQ 5.7. 

 Lease Database Locker failover broken
 -

 Key: AMQ-4122
 URL: https://issues.apache.org/jira/browse/AMQ-4122
 Project: ActiveMQ
  Issue Type: Bug
Affects Versions: 5.7.0
 Environment: Java 7u9, SUSE 11, Mysql
Reporter: st.h
Assignee: Gary Tully
 Fix For: 5.8.0

 Attachments: AMQ4122.patch, activemq-kyle.xml, activemq.xml, 
 activemq.xml, amq_dual_master_5.7_backport.patch, mysql.log


 We are using ActiveMQ 5.7.0 together with a mysql database and could not 
 observe correct failover behavior with lease database locker.
 It seems that there is a race condition, which prevents the correct failover 
 procedure.
 We noticed that when starting up two instances, both instances become 
 master.
 We did several tests, including the following, and could not observe the 
 intended functionality:
 - shut down all instances
 - manipulate the database lock so that one node holds the lock, and set the 
 expiry time in the distant future
 - start up both instances. Both instances are unable to acquire the lock, as the 
 lock hasn't expired, which should be correct behavior.
 - update the expiry time in the database, so that the lock is expired.
 - the first instance notices the expired lock and becomes master
 - when the second instance checks for the lock, it also updates the database and 
 becomes master.
 To my understanding the second instance should not be able to update the 
 lock, as it is held by the first instance, and should not be able to become 
 master.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (AMQ-4790) Avoid thread creation storm after machine suspend/resume

2013-10-08 Thread metatech (JIRA)
metatech created AMQ-4790:
-

 Summary: Avoid thread creation storm after machine suspend/resume
 Key: AMQ-4790
 URL: https://issues.apache.org/jira/browse/AMQ-4790
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker
Affects Versions: 5.5.1
 Environment: ServiceMix 4.4.2
Reporter: metatech
Priority: Minor


My ActiveMQ broker is running and I suspend my PC.  When I resume the machine 
the next day or (much worse) after a weekend, my PC is busy at 100% CPU for a 
few minutes.  Because many threads are busy in parallel, the PC is completely 
unresponsive and even the mouse cursor can hardly move.
I noticed that it only happens when my ServiceMix is started, and when the 
embedded ActiveMQ broker is configured with a JDBC persistence adapter.
After investigating the code, I found out that the lock keep-alive feature 
uses scheduleAtFixedRate to update a DB lock every 30 seconds.  When 
the PC is resumed, this generates a backlog of thousands of calls.  It is 
better to use scheduleWithFixedDelay instead, which waits until the previous 
call has finished before firing a new one.
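
A standalone sketch of the scheduling difference being described (the 30-second
period comes from the report; the task body is a placeholder, and this is not
the actual patch):
{code}
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class KeepAliveScheduling {
    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        Runnable keepAlive = () -> { /* refresh the DB lock here */ };

        // scheduleAtFixedRate tries to catch up after the clock jumps forward:
        // following a long suspend, the missed executions fire back to back.
        scheduler.scheduleAtFixedRate(keepAlive, 30, 30, TimeUnit.SECONDS);

        // scheduleWithFixedDelay waits 30s after each completed run, so at most
        // one execution is pending when the machine resumes.
        scheduler.scheduleWithFixedDelay(keepAlive, 30, 30, TimeUnit.SECONDS);

        scheduler.shutdown(); // demo only; a real keep-alive would stay scheduled
    }
}
{code}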




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (AMQ-4790) Avoid thread creation storm after machine suspend/resume

2013-10-08 Thread metatech (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

metatech updated AMQ-4790:
--

Attachment: activemq_jdbc_no_thread_storm.patch

 Avoid thread creation storm after machine suspend/resume
 

 Key: AMQ-4790
 URL: https://issues.apache.org/jira/browse/AMQ-4790
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker
Affects Versions: 5.5.1
 Environment: ServiceMix 4.4.2
Reporter: metatech
Priority: Minor
 Attachments: activemq_jdbc_no_thread_storm.patch


 My ActiveMQ broker is running and I suspend my PC.  When I resume the machine 
 the next day or (much worse) after a weekend, my PC is busy at 100% CPU for 
 a few minutes.  Because many threads are busy in parallel, the PC is 
 completely unresponsive and even the mouse cursor can hardly move.
 I noticed that it only happens when my ServiceMix is started, and when 
 the embedded ActiveMQ broker is configured with a JDBC persistence adapter.
 After investigating the code, I found out that the lock keep-alive 
 feature uses scheduleAtFixedRate to update a DB lock every 30 
 seconds.  When the PC is resumed, this generates a backlog of thousands of 
 calls.  It is better to use scheduleWithFixedDelay instead, which waits 
 until the previous call has finished before firing a new one.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (AMQ-3692) ActiveMQ OSGi bundle should be stopped when broker stops itself

2013-09-02 Thread metatech (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-3692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13756105#comment-13756105
 ] 

metatech commented on AMQ-3692:
---

For the record, this design choice has the following impact : in a cluster of 2 
nodes with a common database (using the DefaultDatabaseLocker), it means that 
once both nodes have at some point lost their connection to the database, neither 
of the 2 ActiveMQ broker nodes is ever restarted, and therefore all JMS 
routes are in error until a manual restart of the ActiveMQ broker bundle is 
launched.
For our mission-critical usage of ServiceMix, this was not considered 
acceptable, and this is the reason why we created the BrokerBundleWatcher.  
Another (less elegant) solution would be to write a script that monitors the 
broker and restarts the ActiveMQ broker in case of error.



 ActiveMQ OSGi bundle should be stopped when broker stops itself
 ---

 Key: AMQ-3692
 URL: https://issues.apache.org/jira/browse/AMQ-3692
 Project: ActiveMQ
  Issue Type: New Feature
  Components: Broker
Affects Versions: 5.4.2
 Environment: ServiceMix 4.3
Reporter: metatech
Assignee: Hiram Chirino
 Fix For: 5.9.0

 Attachments: activemq-broker.xml, BrokerBundleWatcher.patch, 
 BrokerBundleWatcher_v2.patch, BrokerBundleWatcher_v3.patch, 
 BrokerService.patch


 In case of error, the ActiveMQ broker can stop itself.
 In an OSGi/Blueprint environment, however, the bundle is still in the 
 Active/Created state, which misleads external monitoring software into 
 thinking that the broker is running fine.
 This patch stops the bundle when the broker stops itself.
 This patch can also auto-restart the bundle, which will restart the broker.
 This is critical in a Master/Slave configuration : when the connection to 
 the database is lost, the broker cannot maintain the DB exclusive lock, and 
 it stops itself.  The bundle should be stopped and started again, so that it 
 re-enters the Creating state, in which it waits to obtain the DB lock 
 again.
 The class BrokerBundleWatcher needs to be registered with the 
 shutdownHooks property of the ActiveMQ BrokerService.  However, there is 
 a limitation with the XBean syntax in a Blueprint XML, which does not allow 
 defining inner beans.  The workaround is to define the activemq-broker.xml 
 in full native Blueprint syntax (no XBean).
 The patch also provides a modified version of the BrokerService that injects 
 its own reference into the ShutdownHooks which implement the 
 BrokerServiceAware interface.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (AMQ-4489) Newly received messages with higher priority are never consumed, until broker is restarted

2013-06-18 Thread metatech (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13686632#comment-13686632
 ] 

metatech commented on AMQ-4489:
---

[~gtully] : Gary, my understanding of this problem is the following : with an 
ActiveMQ broker configured with JDBC persistence, if the queue depth is larger 
than what can fit in memory, any non-FIFO consumer on the queue may be blocked 
from receiving messages in the queue, because messages in the JMS store are only 
considered for consumption after in-memory messages.  This problem can happen 
when JMS priorities are used, but also when JMS message selectors are used.
Is that correct ?


 Newly received messages with higher priority are never consumed, until broker 
 is restarted
 --

 Key: AMQ-4489
 URL: https://issues.apache.org/jira/browse/AMQ-4489
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker, Message Store
Affects Versions: 5.5.1
 Environment: ServiceMix 4.4.2, using Camel producers/consumers
Reporter: metatech
 Attachments: MessagePriorityTest_frozen.java, 
 MessagePriorityTest.java, MessagePriorityTest_workaround.java


 We configured message prioritization according to the following page :
 http://activemq.apache.org/how-can-i-support-priority-queues.html
 We use a JDBC adapter for message persistence, in an Oracle database.
 Prioritisation is enabled on the queue with the prioritizedMessages option, 
 and we also specify a memory limit for the queue (24 MB)
 We use ActiveMQ 5.5.1 within ServiceMix 4.4.2, and use Camel JMS 
 producers/consumers.
 Messages can have 2 priorities : 4 (normal) for non-business hours and 9 
 (high) for business hours.
 The scenario to reproduce the problem is the following : 
 1. Enqueue 1000 normal and 1000 high messages.
 2. All high messages are consumed first.
 3. After a few normal messages are consumed, enqueue additional 1000 high 
 messages.
 4. All normal messages are consumed before high messages.
 5. All additional high 1000 messages are never consumed.
 6. Restart broker.
 7. All additional high 1000 messages start getting consumed.
 In production, we have a producer with high peaks during the night 
 (10,000-100,000 messages/hour), and 6 consumers (about 5,000-10,000 
 messages/hour), so the queue can reach 100,000-200,000 messages at some 
 periods of the day. Messages are small (200 bytes).
 We enabled SQL query tracing on the broker (with log4jdbc), and we see that 
 the logic with which the findNextMessagesByPriorityStatement query is 
 called does not seem correct in the JDBCMessageStore.recoverNextMessages 
 method :
 At step 2, we see the following query being executed :
 SELECT ID, MSG FROM ACTIVEMQ_MSGS WHERE CONTAINER='priorityQueue' AND ((ID > 
 200 AND PRIORITY = 9) OR PRIORITY < 9) ORDER BY PRIORITY DESC, ID
 At step 4, we see the following query being executed :
 SELECT ID, MSG FROM ACTIVEMQ_MSGS WHERE CONTAINER='priorityQueue' AND ((ID > 
 1200 AND PRIORITY = 4) OR PRIORITY < 4) ORDER BY PRIORITY DESC, ID
 The problem is that the value for the last priority stored in the 
 lastRecoveredPriority variable of the JDBCMessageStore stays permanently at 
 4, until step 6, where it is reset to 9.
 We tried changing the priority to a constant '9' in the query.  It works OK 
 until step 3, where only 200 messages are consumed.
 Our understanding is that there should be one lastRecoveredSequenceId 
 variable for each priority level, so that the last message consumed but not 
 yet removed from the DB is memorized, and the priority should probably 
 also be reset to 9 every time the query is executed.
 Can you have a look please ?
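
Purely as an illustration of that per-priority bookkeeping idea (the class and
method names are hypothetical; the real JDBCMessageStore keeps a single
lastRecoveredPriority/lastRecoveredSequenceId pair):
{code}
import java.util.HashMap;
import java.util.Map;

public class PerPriorityRecovery {
    // One "last recovered sequence id" per JMS priority level (0..9).
    private final Map<Integer, Long> lastRecoveredSeqId = new HashMap<>();

    /** Remember how far recovery got for a given priority level. */
    public void messageRecovered(int priority, long sequenceId) {
        lastRecoveredSeqId.merge(priority, sequenceId, Math::max);
    }

    /** Lower bound to plug into findNextMessagesByPriorityStatement for that priority. */
    public long recoverFrom(int priority) {
        return lastRecoveredSeqId.getOrDefault(priority, -1L);
    }

    public static void main(String[] args) {
        PerPriorityRecovery r = new PerPriorityRecovery();
        r.messageRecovered(9, 200);  // high-priority batch recovered up to id 200
        r.messageRecovered(4, 1200); // normal-priority batch recovered up to id 1200
        // A new high-priority message with id 1300 remains visible to recovery,
        // because the priority-9 cursor stayed at 200 instead of jumping to 1200.
        System.out.println(r.recoverFrom(9)); // prints 200
    }
}
{code}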

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (AMQ-4490) JDBCMessageStore fails to retrieve message after 200 messages when cache is disabled

2013-05-15 Thread metatech (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13658214#comment-13658214
 ] 

metatech commented on AMQ-4490:
---

Problem cannot be reproduced on 5.6, closing this ticket.

 JDBCMessageStore fails to retrieve message after 200 messages when cache is 
 disabled
 

 Key: AMQ-4490
 URL: https://issues.apache.org/jira/browse/AMQ-4490
 Project: ActiveMQ
  Issue Type: Bug
  Components: Message Store
Affects Versions: 5.5.1
 Environment: ServiceMix 4.4.2
Reporter: metatech

 When trying to reproduce bug AMQ-4489, we found that the JDBCMessageStore 
 fails to retrieve all messages from the store when useCache=false.
 The existing unit test JDBCMessagePriorityTest reproduces it (see below).
 A similar problem occurs when MemoryLimit on the queue is used (which forces 
 the messages to be written to and later read from the JDBC message store).
 Can you please have a look ?
 ---
 Test set: org.apache.activemq.store.jdbc.JDBCMessagePriorityTest
 ---
 Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 12.94 sec <<< 
 FAILURE!
 testQueues 
 {useCache=false}(org.apache.activemq.store.jdbc.JDBCMessagePriorityTest)  
 Time elapsed: 6.656 sec  <<< FAILURE!
 junit.framework.AssertionFailedError: Message 200 was null
  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (AMQ-4490) JDBCMessageStore fails to retrieve message after 200 messages when cache is disabled

2013-05-15 Thread metatech (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

metatech closed AMQ-4490.
-

   Resolution: Fixed
Fix Version/s: 5.6.0

 JDBCMessageStore fails to retrieve message after 200 messages when cache is 
 disabled
 

 Key: AMQ-4490
 URL: https://issues.apache.org/jira/browse/AMQ-4490
 Project: ActiveMQ
  Issue Type: Bug
  Components: Message Store
Affects Versions: 5.5.1
 Environment: ServiceMix 4.4.2
Reporter: metatech
 Fix For: 5.6.0


 When trying to reproduce bug AMQ-4489, we found that the JDBCMessageStore 
 fails to retrieve all messages from the store when useCache=false.
 The existing unit test JDBCMessagePriorityTest reproduces it (see below).
 A similar problem occurs when MemoryLimit on the queue is used (which forces 
 the messages to be written to and later read from the JDBC message store).
 Can you please have a look ?
 ---
 Test set: org.apache.activemq.store.jdbc.JDBCMessagePriorityTest
 ---
 Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 12.94 sec <<< 
 FAILURE!
 testQueues 
 {useCache=false}(org.apache.activemq.store.jdbc.JDBCMessagePriorityTest)  
 Time elapsed: 6.656 sec  <<< FAILURE!
 junit.framework.AssertionFailedError: Message 200 was null
  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (AMQ-3692) ActiveMQ OSGi bundle should be stopped when broker stops itself

2013-05-03 Thread metatech (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-3692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

metatech updated AMQ-3692:
--

Attachment: BrokerBundleWatcher_v3.patch

Here is a new version that uses the BundleContext instead of the Bundle as 
input parameter, making it compatible with both Spring and Blueprint.
It also improves some cases where parallel restarts were being initiated.

 ActiveMQ OSGi bundle should be stopped when broker stops itself
 ---

 Key: AMQ-3692
 URL: https://issues.apache.org/jira/browse/AMQ-3692
 Project: ActiveMQ
  Issue Type: New Feature
  Components: Broker
Affects Versions: 5.4.2
 Environment: ServiceMix 4.3
Reporter: metatech
 Fix For: 5.9.0

 Attachments: activemq-broker.xml, BrokerBundleWatcher.patch, 
 BrokerBundleWatcher_v2.patch, BrokerBundleWatcher_v3.patch, 
 BrokerService.patch


 In case of error, the ActiveMQ broker can stop itself.
 In an OSGi/Blueprint environment, however, the bundle is still in the 
 Active/Created state, which misleads external monitoring software into 
 thinking that the broker is running fine.
 This patch stops the bundle when the broker stops itself.
 This patch can also auto-restart the bundle, which will restart the broker.
 This is critical in a Master/Slave configuration : when the connection to 
 the database is lost, the broker cannot maintain the DB exclusive lock, and 
 it stops itself.  The bundle should be stopped and started again, so that it 
 re-enters the Creating state, in which it waits to obtain the DB lock 
 again.
 The class BrokerBundleWatcher needs to be registered with the 
 shutdownHooks property of the ActiveMQ BrokerService.  However, there is 
 a limitation with the XBean syntax in a Blueprint XML, which does not allow 
 defining inner beans.  The workaround is to define the activemq-broker.xml 
 in full native Blueprint syntax (no XBean).
 The patch also provides a modified version of the BrokerService that injects 
 its own reference into the ShutdownHooks which implement the 
 BrokerServiceAware interface.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (AMQ-4489) Newly received messages with higher priority are never consumed, until broker is restarted

2013-04-30 Thread metatech (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13645373#comment-13645373
 ] 

metatech commented on AMQ-4489:
---

Gary, thanks for looking into this.
I just tried with the ZIP from Hudson build ActiveMQ-Java7 #187 and the problem 
is still present.
Beware that the asserts have been disabled in the test driver; otherwise it 
is impossible to see how many messages are not sorted according to priority (it 
aborts on the first failed one).  The easiest way to see it is to change the 
redirectTestOutputToFile parameter in the root pom.xml to false and launch 
the following command :
mvn -Dtest=JDBCMessagePriorityTest#testQueues test.


 Newly received messages with higher priority are never consumed, until broker 
 is restarted
 --

 Key: AMQ-4489
 URL: https://issues.apache.org/jira/browse/AMQ-4489
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker, Message Store
Affects Versions: 5.5.1
 Environment: ServiceMix 4.4.2, using Camel producers/consumers
Reporter: metatech
 Attachments: MessagePriorityTest_frozen.java, 
 MessagePriorityTest.java, MessagePriorityTest_workaround.java


 We configured message prioritization according to the following page :
 http://activemq.apache.org/how-can-i-support-priority-queues.html
 We use a JDBC adapter for message persistence, in an Oracle database.
 Prioritisation is enabled on the queue with the prioritizedMessages option, 
 and we also specify a memory limit for the queue (24 MB)
 We use ActiveMQ 5.5.1 within ServiceMix 4.4.2, and use Camel JMS 
 producers/consumers.
 Messages can have 2 priorities : 4 (normal) for non-business hours and 9 
 (high) for business hours.
 The scenario to reproduce the problem is the following : 
 1. Enqueue 1000 normal and 1000 high messages.
 2. All high messages are consumed first.
 3. After a few normal messages are consumed, enqueue additional 1000 high 
 messages.
 4. All normal messages are consumed before high messages.
 5. All additional high 1000 messages are never consumed.
 6. Restart broker.
 7. All additional high 1000 messages start getting consumed.
 In production, we have a producer with high peaks during the night 
 (10,000-100,000 messages/hour), and 6 consumers (about 5,000-10,000 
 messages/hour), so the queue can reach 100,000-200,000 messages at some 
 periods of the day. Messages are small (200 bytes).
 We enabled SQL query tracing on the broker (with log4jdbc), and we see that 
 the logic with which the findNextMessagesByPriorityStatement query is 
 called does not seem correct in the JDBCMessageStore.recoverNextMessages 
 method :
 At step 2, we see the following query being executed :
 SELECT ID, MSG FROM ACTIVEMQ_MSGS WHERE CONTAINER='priorityQueue' AND ((ID > 
 200 AND PRIORITY = 9) OR PRIORITY < 9) ORDER BY PRIORITY DESC, ID
 At step 4, we see the following query being executed :
 SELECT ID, MSG FROM ACTIVEMQ_MSGS WHERE CONTAINER='priorityQueue' AND ((ID > 
 1200 AND PRIORITY = 4) OR PRIORITY < 4) ORDER BY PRIORITY DESC, ID
 The problem is that the value for the last priority stored in the 
 lastRecoveredPriority variable of the JDBCMessageStore stays permanently at 
 4, until step 6, where it is reset to 9.
 We tried changing the priority to a constant '9' in the query.  It works OK 
 until step 3, where only 200 messages are consumed.
 Our understanding is that there should be one lastRecoveredSequenceId 
 variable for each priority level, so that the last consumed message but not 
 yet removed from the DB is memorized, and also the priority should probably 
 also be reset to 9 every time the query is executed.
 Can you have a look please ?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (AMQ-4489) Newly received messages with higher priority are never consumed, until broker is restarted

2013-04-29 Thread metatech (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

metatech updated AMQ-4489:
--

Attachment: MessagePriorityTest.java

The problem can be reproduced in ActiveMQ 5.8.0.
Here is a test driver to reproduce it.
Replace MessagePriorityTest.java in a vanilla installation and run the test :
mvn -Dtest=JDBCMessagePriorityTest test
Note : asserts have been disabled to avoid stopping on the first error.

 Newly received messages with higher priority are never consumed, until broker 
 is restarted
 --

 Key: AMQ-4489
 URL: https://issues.apache.org/jira/browse/AMQ-4489
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker, Message Store
Affects Versions: 5.5.1
 Environment: ServiceMix 4.4.2, using Camel producers/consumers
Reporter: metatech
 Attachments: MessagePriorityTest.java


 We configured message prioritization according to the following page :
 http://activemq.apache.org/how-can-i-support-priority-queues.html
 We use a JDBC adapter for message persistence, in an Oracle database.
 Prioritisation is enabled on the queue with the prioritizedMessages option, 
 and we also specify a memory limit for the queue (24 MB)
 We use ActiveMQ 5.5.1 within ServiceMix 4.4.2, and use Camel JMS 
 producers/consumers.
 Message can have 2 priorities : 4 (normal) for non-business hours and 9 
 (high) for business hours.
 The scenario to reproduce the problem is the following : 
 1. Enqueue 1000 normal and 1000 high messages.
 2. All high messages are consumed first.
 3. After a few normal messages are consumed, enqueue additional 1000 high 
 messages.
 4. All normal messages are consumed before high messages.
 5. All additional high 1000 messages are never consumed.
 6. Restart broker.
 7. All additional high 1000 messages start getting consumed.
 In production, we have a producer with high peaks during the night 
 (10,000-100,000 messages/hour), and 6 consumers (about 5,000-10,000 
 messages/hour), so the queue can reach 100,000-200,000 messages at some 
 periods of the day. Messages are small (200 bytes).
 We enabled SQL query tracing on the broker (with log4jdbc), and we see that 
 the logic with which the findNextMessagesByPriorityStatement query is 
 called does not seem correct in the JDBCMessageStore.recoverNextMessages 
 method :
 At step 2, we see the following query being executed :
 SELECT ID, MSG FROM ACTIVEMQ_MSGS WHERE CONTAINER='priorityQueue' AND ((ID > 
 200 AND PRIORITY = 9) OR PRIORITY < 9) ORDER BY PRIORITY DESC, ID
 At step 4, we see the following query being executed :
 SELECT ID, MSG FROM ACTIVEMQ_MSGS WHERE CONTAINER='priorityQueue' AND ((ID > 
 1200 AND PRIORITY = 4) OR PRIORITY < 4) ORDER BY PRIORITY DESC, ID
 The problem is that the last recovered priority, stored in the 
 lastRecoveredPriority variable of the JDBCMessageStore, stays permanently at 
 4 until step 6, where it is reset to 9.
 We tried changing the priority to constant '9' in the query.  It works OK 
 until step 3, where only 200 messages are consumed.
 Our understanding is that there should be one lastRecoveredSequenceId 
 variable for each priority level, so that the last consumed message but not 
 yet removed from the DB is memorized, and also the priority should probably 
 also be reset to 9 every time the query is executed.
 Can you have a look please ?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (AMQ-4489) Newly received messages with higher priority are never consumed, until broker is restarted

2013-04-29 Thread metatech (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

metatech updated AMQ-4489:
--

Attachment: MessagePriorityTest_workaround.java

Here is a new version of the test driver that almost solves the problem of the 
priorities not being taken into account.  The workaround is to restart the 
broker before message consumption.  In real life, this is of course not 
possible, but it can help find the root cause of the problem.  Instead of 
hundreds of messages not being prioritized properly, only 2 messages are not, 
and this minor problem can also be solved with queuePrefetch=0 instead of 1.
Note : the test driver does not reproduce the problem where the messages with 
high priority are never consumed anymore (this problem could not be isolated).
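
For reference, the queuePrefetch=0 workaround can be applied on the connection 
factory.  A minimal sketch, assuming a default broker URL and queue name (both 
are illustrative, not taken from the attached test driver):
{code}
// Minimal sketch: queuePrefetch=0 so every dispatch goes back to the store.
// Broker URL and queue name are illustrative assumptions.
import javax.jms.Connection;
import javax.jms.MessageConsumer;
import javax.jms.Session;

import org.apache.activemq.ActiveMQConnectionFactory;

public class ZeroPrefetchConsumer {
    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        factory.getPrefetchPolicy().setQueuePrefetch(0);   // the queuePrefetch=0 workaround

        Connection con = factory.createConnection();
        con.start();
        Session session = con.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(session.createQueue("priorityQueue"));
        System.out.println("First message: " + consumer.receive(2000));
        con.close();
    }
}
{code}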

 Newly received messages with higher priority are never consumed, until broker 
 is restarted
 --

 Key: AMQ-4489
 URL: https://issues.apache.org/jira/browse/AMQ-4489
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker, Message Store
Affects Versions: 5.5.1
 Environment: ServiceMix 4.4.2, using Camel producers/consumers
Reporter: metatech
 Attachments: MessagePriorityTest.java, 
 MessagePriorityTest_workaround.java


 We configured message prioritization according to the following page :
 http://activemq.apache.org/how-can-i-support-priority-queues.html
 We use a JDBC adapter for message persistence, in an Oracle database.
 Prioritisation is enabled on the queue with the prioritizedMessages option, 
 and we also specify a memory limit for the queue (24 MB)
 We use ActiveMQ 5.5.1 within ServiceMix 4.4.2, and use Camel JMS 
 producers/consumers.
 Message can have 2 priorities : 4 (normal) for non-business hours and 9 
 (high) for business hours.
 The scenario to reproduce the problem is the following : 
 1. Enqueue 1000 normal and 1000 high messages.
 2. All high messages are consumed first.
 3. After a few normal messages are consumed, enqueue additional 1000 high 
 messages.
 4. All normal messages are consumed before high messages.
 5. All additional high 1000 messages are never consumed.
 6. Restart broker.
 7. All additional high 1000 messages start getting consumed.
 In production, we have a producer with high peaks during the night 
 (10,000-100,000 messages/hour), and 6 consumers (about 5,000-10,000 
 messages/hour), so the queue can reach 100,000-200,000 messages at some 
 periods of the day. Messages are small (200 bytes).
 We enabled SQL query tracing on the broker (with log4jdbc), and we see that 
 the logic with which the findNextMessagesByPriorityStatement query is 
 called does not seem correct in the JDBCMessageStore.recoverNextMessages 
 method :
 At step 2, we see the following query being executed :
 SELECT ID, MSG FROM ACTIVEMQ_MSGS WHERE CONTAINER='priorityQueue' AND ((ID > 
 200 AND PRIORITY = 9) OR PRIORITY < 9) ORDER BY PRIORITY DESC, ID
 At step 4, we see the following query being executed :
 SELECT ID, MSG FROM ACTIVEMQ_MSGS WHERE CONTAINER='priorityQueue' AND ((ID > 
 1200 AND PRIORITY = 4) OR PRIORITY < 4) ORDER BY PRIORITY DESC, ID
 The problem is that the last recovered priority, stored in the 
 lastRecoveredPriority variable of the JDBCMessageStore, stays permanently at 
 4 until step 6, where it is reset to 9.
 We tried changing the priority to constant '9' in the query.  It works OK 
 until step 3, where only 200 messages are consumed.
 Our understanding is that there should be one lastRecoveredSequenceId 
 variable for each priority level, so that the last consumed message but not 
 yet removed from the DB is memorized, and also the priority should probably 
 also be reset to 9 every time the query is executed.
 Can you have a look please ?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (AMQ-4489) Newly received messages with higher priority are never consumed, until broker is restarted

2013-04-29 Thread metatech (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

metatech updated AMQ-4489:
--

Attachment: MessagePriorityTest_frozen.java

A third version (_frozen) of the test driver reproduces the frozen consumption 
of messages.  After 3600 messages, there are still 1200 messages in the queue, 
but the browser sees 0.  The workaround of restarting the broker resumes 
message consumption.
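
The "browser sees 0" observation can be checked with a plain JMS QueueBrowser.  
A minimal sketch, with the broker URL and queue name as assumptions:
{code}
// Minimal sketch: count what a JMS QueueBrowser can see on the queue.
// Broker URL and queue name are illustrative assumptions.
import java.util.Enumeration;

import javax.jms.Connection;
import javax.jms.QueueBrowser;
import javax.jms.Session;

import org.apache.activemq.ActiveMQConnectionFactory;

public class QueueDepthProbe {
    public static void main(String[] args) throws Exception {
        Connection con =
                new ActiveMQConnectionFactory("tcp://localhost:61616").createConnection();
        con.start();
        Session session = con.createSession(false, Session.AUTO_ACKNOWLEDGE);
        QueueBrowser browser = session.createBrowser(session.createQueue("priorityQueue"));

        int browsed = 0;
        Enumeration<?> e = browser.getEnumeration();
        while (e.hasMoreElements()) {
            e.nextElement();
            browsed++;
        }
        System.out.println("Messages visible to the browser: " + browsed);
        con.close();
    }
}
{code}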

 Newly received messages with higher priority are never consumed, until broker 
 is restarted
 --

 Key: AMQ-4489
 URL: https://issues.apache.org/jira/browse/AMQ-4489
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker, Message Store
Affects Versions: 5.5.1
 Environment: ServiceMix 4.4.2, using Camel producers/consumers
Reporter: metatech
 Attachments: MessagePriorityTest_frozen.java, 
 MessagePriorityTest.java, MessagePriorityTest_workaround.java


 We configured message prioritization according to the following page :
 http://activemq.apache.org/how-can-i-support-priority-queues.html
 We use a JDBC adapter for message persistence, in an Oracle database.
 Prioritisation is enabled on the queue with the prioritizedMessages option, 
 and we also specify a memory limit for the queue (24 MB)
 We use ActiveMQ 5.5.1 within ServiceMix 4.4.2, and use Camel JMS 
 producers/consumers.
 Message can have 2 priorities : 4 (normal) for non-business hours and 9 
 (high) for business hours.
 The scenario to reproduce the problem is the following : 
 1. Enqueue 1000 normal and 1000 high messages.
 2. All high messages are consumed first.
 3. After a few normal messages are consumed, enqueue additional 1000 high 
 messages.
 4. All normal messages are consumed before high messages.
 5. All additional high 1000 messages are never consumed.
 6. Restart broker.
 7. All additional high 1000 messages start getting consumed.
 In production, we have a producer with high peaks during the night 
 (10,000-100,000 messages/hour), and 6 consumers (about 5,000-10,000 
 messages/hour), so the queue can reach 100,000-200,000 messages at some 
 periods of the day. Messages are small (200 bytes).
 We enabled SQL query tracing on the broker (with log4jdbc), and we see that 
 the logic with which the findNextMessagesByPriorityStatement query is 
 called does not seem correct in the JDBCMessageStore.recoverNextMessages 
 method :
 At step 2, we see the following query being executed :
 SELECT ID, MSG FROM ACTIVEMQ_MSGS WHERE CONTAINER='priorityQueue' AND ((ID > 
 200 AND PRIORITY = 9) OR PRIORITY < 9) ORDER BY PRIORITY DESC, ID
 At step 4, we see the following query being executed :
 SELECT ID, MSG FROM ACTIVEMQ_MSGS WHERE CONTAINER='priorityQueue' AND ((ID > 
 1200 AND PRIORITY = 4) OR PRIORITY < 4) ORDER BY PRIORITY DESC, ID
 The problem is that the last recovered priority, stored in the 
 lastRecoveredPriority variable of the JDBCMessageStore, stays permanently at 
 4 until step 6, where it is reset to 9.
 We tried changing the priority to constant '9' in the query.  It works OK 
 until step 3, where only 200 messages are consumed.
 Our understanding is that there should be one lastRecoveredSequenceId 
 variable for each priority level, so that the last consumed message but not 
 yet removed from the DB is memorized, and also the priority should probably 
 also be reset to 9 every time the query is executed.
 Can you have a look please ?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (AMQ-4489) Newly received messages with higher priority are never consumed, until broker is restarted

2013-04-26 Thread metatech (JIRA)
metatech created AMQ-4489:
-

 Summary: Newly received messages with higher priority are never 
consumed, until broker is restarted
 Key: AMQ-4489
 URL: https://issues.apache.org/jira/browse/AMQ-4489
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker, Message Store
Affects Versions: 5.5.1
 Environment: ServiceMix 4.4.2, using Camel producers/consumers
Reporter: metatech


We configured message prioritization according to the following page :
http://activemq.apache.org/how-can-i-support-priority-queues.html
We use a JDBC adapter for message persistence, in an Oracle database.
We also specify a memory limit for the queue (24 MB)
We use ActiveMQ 5.5.1 within ServiceMix 4.4.2, and use Camel JMS 
producers/consumers.
Message can have 2 priorities : 4 (normal) for non-business hours and 9 (high) 
for business hours.

The scenario to reproduce the problem is the following : 
1. Enqueue 1000 normal and 1000 high messages.
2. All high messages are consumed first.
3. After a few low messages are consumed, enqueue additional 1000 high 
messages.
4. All low messages are consumed before high messages.
5. All additional 1000 messages are never consumed.
6. Restart broker.
7. All additional 1000 messages start getting consumed.

In production, we have a producer with high peaks during the night 
(10,000-100,000 messages/hour), and 6 consumers (about 5,000-10,000 
messages/hour), so the queue can reach 100,000-200,000 messages at some periods 
of the day. Messages are small (200 bytes).

We enabled SQL query tracing on the broker (with log4jdbc), and we see that the 
logic with which the findNextMessagesByPriorityStatement query is called does 
not seem correct in the JDBCMessageStore.recoverNextMessages method :

At step 2, we see the following query being executed :
SELECT ID, MSG FROM ACTIVEMQ_MSGS WHERE CONTAINER='priorityQueue' AND ((ID > 
200 AND PRIORITY = 9) OR PRIORITY < 9) ORDER BY PRIORITY DESC, ID

At step 4, we see the following query being executed :
SELECT ID, MSG FROM ACTIVEMQ_MSGS WHERE CONTAINER='priorityQueue' AND ((ID > 
1200 AND PRIORITY = 4) OR PRIORITY < 4) ORDER BY PRIORITY DESC, ID

The problem is that the last recovered priority, stored in the 
lastRecoveredPriority variable of the JDBCMessageStore, stays permanently at 
4 until step 6, where it is reset to 9.

We tried changing the priority to constant '9' in the query.  It works OK until 
step 3, where only 200 messages are consumed.

Our understanding is that there should be one lastRecoveredSequenceId 
variable for each priority level, so that the last consumed message but not 
yet removed from the DB is memorized, and also the priority should probably 
also be reset to 9 every time the query is executed.

Can you have a look please ?
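
To illustrate the proposal above, a rough sketch of what per-priority 
bookkeeping could look like.  This is not the actual JDBCMessageStore code; the 
class, field and method names are assumptions:
{code}
// Rough illustration of "one lastRecoveredSequenceId per priority level".
// Not the actual org.apache.activemq.store.jdbc.JDBCMessageStore code;
// class, field and method names are assumptions.
public class PerPriorityRecoveryCursor {

    private static final int MAX_PRIORITY = 9;

    // index = priority level, value = last sequence id recovered for that priority
    private final long[] lastRecoveredSequenceId = new long[MAX_PRIORITY + 1];

    /** Remember the sequence id of the last message recovered at the given priority. */
    public void messageRecovered(int priority, long sequenceId) {
        lastRecoveredSequenceId[priority] = sequenceId;
    }

    /** Sequence id to resume from when the next batch for this priority is fetched. */
    public long resumeFrom(int priority) {
        return lastRecoveredSequenceId[priority];
    }

    /** Each recovery pass restarts from the highest priority, as suggested above. */
    public int startPriority() {
        return MAX_PRIORITY;
    }
}
{code}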


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (AMQ-4490) JDBCMessageStore fails to retrieve message after 200 messages when cache is disabled

2013-04-26 Thread metatech (JIRA)
metatech created AMQ-4490:
-

 Summary: JDBCMessageStore fails to retrieve message after 200 
messages when cache is disabled
 Key: AMQ-4490
 URL: https://issues.apache.org/jira/browse/AMQ-4490
 Project: ActiveMQ
  Issue Type: Bug
  Components: Message Store
Affects Versions: 5.5.1
 Environment: ServiceMix 4.4.2
Reporter: metatech


When trying to reproduce bug AMQ-4489, we found that the JDBCMessageStore fails 
to retrieve all messages from the store when useCache=false.

The existing unit test JDBCMessagePriorityTest reproduces it (see below).
A similar problem occurs when MemoryLimit on the queue is used (which forces 
the messages to be written to and later read from the JDBC message store).
Can you please have a look ?

---
Test set: org.apache.activemq.store.jdbc.JDBCMessagePriorityTest
---
Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 12.94 sec <<< 
FAILURE!
testQueues 
{useCache=false}(org.apache.activemq.store.jdbc.JDBCMessagePriorityTest)  Time 
elapsed: 6.656 sec <<< FAILURE!
junit.framework.AssertionFailedError: Message 200 was null
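
For reference, a minimal sketch of how useCache=false and a memory limit can be 
set programmatically on an embedded broker for such a test.  The broker name 
and the omitted JDBC persistence adapter wiring are assumptions:
{code}
// Minimal sketch: embedded broker with useCache=false and a 24 MB queue memory limit.
// Broker name and the omitted JDBC persistence adapter wiring are assumptions.
import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.broker.region.policy.PolicyEntry;
import org.apache.activemq.broker.region.policy.PolicyMap;

public class CacheDisabledBroker {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        broker.setBrokerName("testBroker");
        broker.setPersistent(true);                 // a JDBC persistence adapter would be set here

        PolicyEntry entry = new PolicyEntry();
        entry.setUseCache(false);                   // force reads to go back to the message store
        entry.setMemoryLimit(24 * 1024 * 1024);     // 24 MB, as in the report

        PolicyMap policyMap = new PolicyMap();
        policyMap.setDefaultEntry(entry);           // apply to all destinations
        broker.setDestinationPolicy(policyMap);

        broker.start();
        broker.waitUntilStopped();
    }
}
{code}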
 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (AMQ-4489) Newly received messages with higher priority are never consumed, until broker is restarted

2013-04-26 Thread metatech (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

metatech updated AMQ-4489:
--

Description: 
We configured message prioritization according to the following page :
http://activemq.apache.org/how-can-i-support-priority-queues.html
We use a JDBC adapter for message persistence, in an Oracle database.
Prioritisation is enabled on the queue with the prioritizeMessages option, 
and we also specify a memory limit for the queue (24 MB)
We use ActiveMQ 5.5.1 within ServiceMix 4.4.2, and use Camel JMS 
producers/consumers.
Message can have 2 priorities : 4 (normal) for non-business hours and 9 (high) 
for business hours.

The scenario to reproduce the problem is the following : 
1. Enqueue 1000 normal and 1000 high messages.
2. All high messages are consumed first.
3. After a few low messages are consumed, enqueue additional 1000 high 
messages.
4. All low messages are consumed before high messages.
5. All additional 1000 messages are never consumed.
6. Restart broker.
7. All additional 1000 messages start getting consumed.

In production, we have a producer with high peaks during the night 
(10,000-100,000 messages/hour), and 6 consumers (about 5,000-10,000 
messages/hour), so the queue can reach 100,000-200,000 messages at some periods 
of the day. Messages are small (200 bytes).

We enabled SQL query tracing on the broker (with log4jdbc), and we see that the 
logic with which the findNextMessagesByPriorityStatement query is called does 
not seem correct in the JDBCMessageStore.recoverNextMessages method :

At step 2, we see the following query being executed :
SELECT ID, MSG FROM ACTIVEMQ_MSGS WHERE CONTAINER='priorityQueue' AND ((ID > 
200 AND PRIORITY = 9) OR PRIORITY < 9) ORDER BY PRIORITY DESC, ID

At step 4, we see the following query being executed :
SELECT ID, MSG FROM ACTIVEMQ_MSGS WHERE CONTAINER='priorityQueue' AND ((ID > 
1200 AND PRIORITY = 4) OR PRIORITY < 4) ORDER BY PRIORITY DESC, ID

The problem is that the last recovered priority, stored in the 
lastRecoveredPriority variable of the JDBCMessageStore, stays permanently at 
4 until step 6, where it is reset to 9.

We tried changing the priority to constant '9' in the query.  It works OK until 
step 3, where only 200 messages are consumed.

Our understanding is that there should be one lastRecoveredSequenceId 
variable for each priority level, so that the last consumed message but not 
yet removed from the DB is memorized, and also the priority should probably 
also be reset to 9 every time the query is executed.

Can you have a look please ?


  was:
We configured message prioritization according to the following page :
http://activemq.apache.org/how-can-i-support-priority-queues.html
We use a JDBC adapter for message persistence, in an Oracle database.
We also specify a memory limit for the queue (24 MB)
We use ActiveMQ 5.5.1 within ServiceMix 4.4.2, and use Camel JMS 
producers/consumers.
Message can have 2 priorities : 4 (normal) for non-business hours and 9 (high) 
for business hours.

The scenario to reproduce the problem is the following : 
1. Enqueue 1000 normal and 1000 high messages.
2. All high messages are consumed first.
3. After a few low messages are consumed, enqueue additional 1000 high 
messages.
4. All low messages are consumed before high messages.
5. All additional 1000 messages are never consumed.
6. Restart broker.
7. All additional 1000 messages start getting consumed.

In production, we have a producer with high peaks during the night 
(10,000-100,000 messages/hour), and 6 consumers (about 5,000-10,000 
messages/hour), so the queue can reach 100,000-200,000 messages at some periods 
of the day. Messages are small (200 bytes).

We enabled SQL query tracing on the broker (with log4jdbc), and we see that the 
logic with which the findNextMessagesByPriorityStatement query is called does 
not seem correct in the JDBCMessageStore.recoverNextMessages method :

At step 2, we see the following query being executed :
SELECT ID, MSG FROM ACTIVEMQ_MSGS WHERE CONTAINER='priorityQueue' AND ((ID > 
200 AND PRIORITY = 9) OR PRIORITY < 9) ORDER BY PRIORITY DESC, ID

At step 4, we see the following query being executed :
SELECT ID, MSG FROM ACTIVEMQ_MSGS WHERE CONTAINER='priorityQueue' AND ((ID > 
1200 AND PRIORITY = 4) OR PRIORITY < 4) ORDER BY PRIORITY DESC, ID

The problem is that the last recovered priority, stored in the 
lastRecoveredPriority variable of the JDBCMessageStore, stays permanently at 
4 until step 6, where it is reset to 9.

We tried changing the priority to constant '9' in the query.  It works OK until 
step 3, where only 200 messages are consumed.

Our understanding is that there should be one lastRecoveredSequenceId 
variable for each priority level, so that the last consumed message but not 
yet removed from the DB is memorized, and also the priority should probably 
also be reset to 9 every time the query is executed.

[jira] [Updated] (AMQ-4489) Newly received messages with higher priority are never consumed, until broker is restarted

2013-04-26 Thread metatech (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

metatech updated AMQ-4489:
--

Description: 
We configured message prioritization according to the following page :
http://activemq.apache.org/how-can-i-support-priority-queues.html
We use a JDBC adapter for message persistence, in an Oracle database.
Prioritisation is enabled on the queue with the prioritizedMessages option, 
and we also specify a memory limit for the queue (24 MB)
We use ActiveMQ 5.5.1 within ServiceMix 4.4.2, and use Camel JMS 
producers/consumers.
Message can have 2 priorities : 4 (normal) for non-business hours and 9 (high) 
for business hours.

The scenario to reproduce the problem is the following : 
1. Enqueue 1000 normal and 1000 high messages.
2. All high messages are consumed first.
3. After a few low messages are consumed, enqueue additional 1000 high 
messages.
4. All low messages are consumed before high messages.
5. All additional 1000 messages are never consumed.
6. Restart broker.
7. All additional 1000 messages start getting consumed.

In production, we have a producer with high peaks during the night 
(10,000-100,000 messages/hour), and 6 consumers (about 5,000-10,000 
messages/hour), so the queue can reach 100,000-200,000 messages at some periods 
of the day. Messages are small (200 bytes).

We enabled SQL query tracing on the broker (with log4jdbc), and we see that the 
logic with which the findNextMessagesByPriorityStatement query is called does 
not seem correct in the JDBCMessageStore.recoverNextMessages method :

At step 2, we see the following query being executed :
SELECT ID, MSG FROM ACTIVEMQ_MSGS WHERE CONTAINER='priorityQueue' AND ((ID > 
200 AND PRIORITY = 9) OR PRIORITY < 9) ORDER BY PRIORITY DESC, ID

At step 4, we see the following query being executed :
SELECT ID, MSG FROM ACTIVEMQ_MSGS WHERE CONTAINER='priorityQueue' AND ((ID > 
1200 AND PRIORITY = 4) OR PRIORITY < 4) ORDER BY PRIORITY DESC, ID

The problem is that the last recovered priority, stored in the 
lastRecoveredPriority variable of the JDBCMessageStore, stays permanently at 
4 until step 6, where it is reset to 9.

We tried changing the priority to constant '9' in the query.  It works OK until 
step 3, where only 200 messages are consumed.

Our understanding is that there should be one lastRecoveredSequenceId 
variable for each priority level, so that the last consumed message but not 
yet removed from the DB is memorized, and also the priority should probably 
also be reset to 9 every time the query is executed.

Can you have a look please ?


  was:
We configured message prioritization according to the following page :
http://activemq.apache.org/how-can-i-support-priority-queues.html
We use a JDBC adapter for message persistence, in an Oracle database.
Prioritisation is enabled on the queue with the prioritizeMessages option, 
and we also specify a memory limit for the queue (24 MB)
We use ActiveMQ 5.5.1 within ServiceMix 4.4.2, and use Camel JMS 
producers/consumers.
Message can have 2 priorities : 4 (normal) for non-business hours and 9 (high) 
for business hours.

The scenario to reproduce the problem is the following : 
1. Enqueue 1000 normal and 1000 high messages.
2. All high messages are consumed first.
3. After a few low messages are consumed, enqueue additional 1000 high 
messages.
4. All low messages are consumed before high messages.
5. All additional 1000 messages are never consumed.
6. Restart broker.
7. All additional 1000 messages start getting consumed.

In production, we have a producer with high peaks during the night 
(10,000-100,000 messages/hour), and 6 consumers (about 5,000-10,000 
messages/hour), so the queue can reach 100,000-200,000 messages at some periods 
of the day. Messages are small (200 bytes).

We enabled SQL query tracing on the broker (with log4jdbc), and we see that the 
logic with which the findNextMessagesByPriorityStatement query is called does 
not seem correct in the JDBCMessageStore.recoverNextMessages method :

At step 2, we see the following query being executed :
SELECT ID, MSG FROM ACTIVEMQ_MSGS WHERE CONTAINER='priorityQueue' AND ((ID > 
200 AND PRIORITY = 9) OR PRIORITY < 9) ORDER BY PRIORITY DESC, ID

At step 4, we see the following query being executed :
SELECT ID, MSG FROM ACTIVEMQ_MSGS WHERE CONTAINER='priorityQueue' AND ((ID > 
1200 AND PRIORITY = 4) OR PRIORITY < 4) ORDER BY PRIORITY DESC, ID

The problem is that the last recovered priority, stored in the 
lastRecoveredPriority variable of the JDBCMessageStore, stays permanently at 
4 until step 6, where it is reset to 9.

We tried changing the priority to constant '9' in the query.  It works OK until 
step 3, where only 200 messages are consumed.

Our understanding is that there should be one lastRecoveredSequenceId 
variable for each priority level, so that the last consumed message but not 
yet removed from the DB is memorized, and also the priority should probably 
also be reset to 9 every time the query is executed.

[jira] [Updated] (AMQ-4489) Newly received messages with higher priority are never consumed, until broker is restarted

2013-04-26 Thread metatech (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

metatech updated AMQ-4489:
--

Description: 
We configured message prioritization according to the following page :
http://activemq.apache.org/how-can-i-support-priority-queues.html
We use a JDBC adapter for message persistence, in an Oracle database.
Prioritisation is enabled on the queue with the prioritizedMessages option, 
and we also specify a memory limit for the queue (24 MB)
We use ActiveMQ 5.5.1 within ServiceMix 4.4.2, and use Camel JMS 
producers/consumers.
Message can have 2 priorities : 4 (normal) for non-business hours and 9 (high) 
for business hours.

The scenario to reproduce the problem is the following : 
1. Enqueue 1000 normal and 1000 high messages.
2. All high messages are consumed first.
3. After a few normal messages are consumed, enqueue additional 1000 high 
messages.
4. All normal messages are consumed before high messages.
5. All additional high 1000 messages are never consumed.
6. Restart broker.
7. All additional high 1000 messages start getting consumed.

In production, we have a producer with high peaks during the night 
(10,000-100,000 messages/hour), and 6 consumers (about 5,000-10,000 
messages/hour), so the queue can reach 100,000-200,000 messages at some periods 
of the day. Messages are small (200 bytes).

We enabled SQL query tracing on the broker (with log4jdbc), and we see that the 
logic with which the findNextMessagesByPriorityStatement query is called does 
not seem correct in the JDBCMessageStore.recoverNextMessages method :

At step 2, we see the following query being executed :
SELECT ID, MSG FROM ACTIVEMQ_MSGS WHERE CONTAINER='priorityQueue' AND ((ID > 
200 AND PRIORITY = 9) OR PRIORITY < 9) ORDER BY PRIORITY DESC, ID

At step 4, we see the following query being executed :
SELECT ID, MSG FROM ACTIVEMQ_MSGS WHERE CONTAINER='priorityQueue' AND ((ID > 
1200 AND PRIORITY = 4) OR PRIORITY < 4) ORDER BY PRIORITY DESC, ID

The problem is that the last recovered priority, stored in the 
lastRecoveredPriority variable of the JDBCMessageStore, stays permanently at 
4 until step 6, where it is reset to 9.

We tried changing the priority to constant '9' in the query.  It works OK until 
step 3, where only 200 messages are consumed.

Our understanding is that there should be one lastRecoveredSequenceId 
variable for each priority level, so that the last consumed message but not 
yet removed from the DB is memorized, and also the priority should probably 
also be reset to 9 every time the query is executed.

Can you have a look please ?


  was:
We configured message prioritization according to the following page :
http://activemq.apache.org/how-can-i-support-priority-queues.html
We use a JDBC adapter for message persistence, in an Oracle database.
Prioritisation is enabled on the queue with the prioritizedMessages option, 
and we also specify a memory limit for the queue (24 MB)
We use ActiveMQ 5.5.1 within ServiceMix 4.4.2, and use Camel JMS 
producers/consumers.
Message can have 2 priorities : 4 (normal) for non-business hours and 9 (high) 
for business hours.

The scenario to reproduce the problem is the following : 
1. Enqueue 1000 normal and 1000 high messages.
2. All high messages are consumed first.
3. After a few low messages are consumed, enqueue additional 1000 high 
messages.
4. All low messages are consumed before high messages.
5. All additional 1000 messages are never consumed.
6. Restart broker.
7. All additional 1000 messages start getting consumed.

In production, we have a producer with high peaks during the night 
(10,000-100,000 messages/hour), and 6 consumers (about 5,000-10,000 
messages/hour), so the queue can reach 100,000-200,000 messages at some periods 
of the day. Messages are small (200 bytes).

We enabled SQL query tracing on the broker (with log4jdbc), and we see that the 
logic with which the findNextMessagesByPriorityStatement query is called does 
not seem correct in the JDBCMessageStore.recoverNextMessages method :

At step 2, we see the following query being executed :
SELECT ID, MSG FROM ACTIVEMQ_MSGS WHERE CONTAINER='priorityQueue' AND ((ID > 
200 AND PRIORITY = 9) OR PRIORITY < 9) ORDER BY PRIORITY DESC, ID

At step 4, we see the following query being executed :
SELECT ID, MSG FROM ACTIVEMQ_MSGS WHERE CONTAINER='priorityQueue' AND ((ID > 
1200 AND PRIORITY = 4) OR PRIORITY < 4) ORDER BY PRIORITY DESC, ID

The problem is that the last recovered priority, stored in the 
lastRecoveredPriority variable of the JDBCMessageStore, stays permanently at 
4 until step 6, where it is reset to 9.

We tried changing the priority to constant '9' in the query.  It works OK until 
step 3, where only 200 messages are consumed.

Our understanding is that there should be one lastRecoveredSequenceId 
variable for each priority level, so that the last consumed message but not 
yet removed from the DB is memorized, and also the priority should probably 
also be reset to 9 every time the query is executed.

[jira] [Commented] (AMQ-3692) ActiveMQ OSGi bundle should be stopped when broker stops itself

2012-10-02 Thread metatech (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-3692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13467970#comment-13467970
 ] 

metatech commented on AMQ-3692:
---

Gary: the SpringOsgiContextHook uses Spring DM-specific API, where the 
BrokerBundleWatcher only uses generic OSGi API, and is therefore also 
compatible with Blueprint.  There is one minor modification to 
BrokerBundleWatcher which would be required to be compatible with Spring DM : 
Blueprint provides a blueprintBundle built-in variable, for which I did not 
find an equivalent for Spring DM.  However, Spring DM provides a 
bundleContext built-in variable (see 6.6. Accessing the BundleContext), from 
which the bundle can easily be obtained with the getBundle() method.  A new 
method setBundleContext would need to be added to the class.  The 
BrokerBundleWatcher could then supersede the SpringOsgiContextHook, I think.
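
As an illustration of the mechanism discussed here, a rough sketch of a 
shutdown hook that stops (and optionally restarts) its own bundle through the 
generic OSGi API.  This is not the attached BrokerBundleWatcher patch; the 
class name, constructor and restart flag are assumptions:
{code}
// Rough sketch, not the attached BrokerBundleWatcher patch: a shutdown hook that
// stops (and optionally restarts) its own bundle using only the generic OSGi API.
import org.osgi.framework.Bundle;
import org.osgi.framework.BundleContext;

public class BundleStopHook implements Runnable {

    private final BundleContext bundleContext;   // injected by Blueprint or Spring DM
    private final boolean restart;               // true = stop, then start again

    public BundleStopHook(BundleContext bundleContext, boolean restart) {
        this.bundleContext = bundleContext;
        this.restart = restart;
    }

    public void run() {
        // Run in a separate thread so the broker's own shutdown is not blocked;
        // the thread has no loop, it runs once and terminates.
        new Thread(new Runnable() {
            public void run() {
                try {
                    Bundle bundle = bundleContext.getBundle();
                    bundle.stop();
                    if (restart) {
                        bundle.start();
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }, "broker-bundle-stop").start();
    }
}
{code}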

 ActiveMQ OSGi bundle should be stopped when broker stops itself
 ---

 Key: AMQ-3692
 URL: https://issues.apache.org/jira/browse/AMQ-3692
 Project: ActiveMQ
  Issue Type: New Feature
  Components: Broker
Affects Versions: 5.4.2
 Environment: ServiceMix 4.3
Reporter: metatech
 Fix For: 5.8.0

 Attachments: activemq-broker.xml, BrokerBundleWatcher.patch, 
 BrokerBundleWatcher_v2.patch, BrokerService.patch


 In case of error, the ActiveMQ broker can stop itself.
 In an OSGi/Blueprint environment, the bundle is however still in 
 Active/Created state, which misleads an external monitoring software into 
 thinking that the broker is running fine.
 This patch stops the bundle when the broker stops itself.
 This patch can also auto-restart the bundle, which will restart the broker.
 This is critical in an Master/Slave configuration : when the connection to 
 the database is lost, the broker cannot maintain the DB exclusive lock, and 
 it stops itself.  The bundle should be stopped and started again, so that it 
 enters again the Creating state, in which it waits to obtain the DB lock 
 again.
 The class BrokerBundleWatcher needs to be registered with the 
 shutdownHooks property of the ActiveMQ BrokerService.  However, there is 
 a limitation with the XBean syntax in a Blueprint XML, which does not allow 
 to define inner beans.  The workaround is to define the activemq-broker.xml 
 in full native Blueprint syntax (no XBean).
 The patch also provides a modified version of the BrokerService, that injects 
 its own reference into the ShutdownHook's which implement the 
 BrokerServiceAware interface.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (AMQ-3692) ActiveMQ OSGi bundle should be stopped when broker stops itself

2012-10-02 Thread metatech (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-3692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13467970#comment-13467970
 ] 

metatech edited comment on AMQ-3692 at 10/3/12 5:59 AM:


Gary: the SpringOsgiContextHook uses Spring DM-specific API, whereas the 
BrokerBundleWatcher only uses generic OSGi API, and is therefore also 
compatible with Blueprint.  There is one minor modification to 
BrokerBundleWatcher which would be required to be compatible with Spring DM : 
Blueprint provides a blueprintBundle built-in variable, for which I did not 
find an equivalent for Spring DM.  However, Spring DM provides a 
bundleContext built-in variable (see 6.6. Accessing the BundleContext), from 
which the bundle can easily be obtained with the getBundle() method.  A new 
method setBundleContext would need to be added to the class.  The 
BrokerBundleWatcher could then supersede the SpringOsgiContextHook, I think.

  was (Author: metatech):
Gary: the SpringOsgiContextHook uses Spring DM-specific API, where the 
BrokerBundleWatcher only uses generic OSGi API, and is therefore also 
compatible with Blueprint.  There is one minor modification to 
BrokerBundleWatcher which would be required to be compatible with Spring DM : 
Blueprint provides a blueprintBundle built-in variable, for which I did not 
find an equivalent for Spring DM.  However, Spring DM provides a 
bundleContext built-in variable (see 6.6. Accessing the BundleContext), from 
which the bundle can easily be obtained with the getBundle() method.  A new 
method setBundleContext would need to be added to the class.  The 
BrokerBundleWatcher could then supersede the SpringOsgiContextHook, I think.
  
 ActiveMQ OSGi bundle should be stopped when broker stops itself
 ---

 Key: AMQ-3692
 URL: https://issues.apache.org/jira/browse/AMQ-3692
 Project: ActiveMQ
  Issue Type: New Feature
  Components: Broker
Affects Versions: 5.4.2
 Environment: ServiceMix 4.3
Reporter: metatech
 Fix For: 5.8.0

 Attachments: activemq-broker.xml, BrokerBundleWatcher.patch, 
 BrokerBundleWatcher_v2.patch, BrokerService.patch


 In case of error, the ActiveMQ broker can stop itself.
 In an OSGi/Blueprint environment, the bundle is however still in 
 Active/Created state, which misleads an external monitoring software into 
 thinking that the broker is running fine.
 This patch stops the bundle when the broker stops itself.
 This patch can also auto-restart the bundle, which will restart the broker.
 This is critical in an Master/Slave configuration : when the connection to 
 the database is lost, the broker cannot maintain the DB exclusive lock, and 
 it stops itself.  The bundle should be stopped and started again, so that it 
 enters again the Creating state, in which it waits to obtain the DB lock 
 again.
 The class BrokerBundleWatcher needs to be registered with the 
 shutdownHooks property of the ActiveMQ BrokerService.  However, there is 
 a limitation with the XBean syntax in a Blueprint XML, which does not allow 
 to define inner beans.  The workaround is to define the activemq-broker.xml 
 in full native Blueprint syntax (no XBean).
 The patch also provides a modified version of the BrokerService, that injects 
 its own reference into the ShutdownHook's which implement the 
 BrokerServiceAware interface.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (AMQ-3692) ActiveMQ OSGi bundle should be stopped when broker stops itself

2012-09-22 Thread metatech (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-3692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13461113#comment-13461113
 ] 

metatech commented on AMQ-3692:
---

Claus: it is a normal ShutdownHook : the object is created during broker 
creation, but the run method is only executed when the shutdown is triggered. 
 The thread started within the run has no loop : it only runs once and then 
terminates.  So I think there should be no additional thread termination 
mechanism.  

 ActiveMQ OSGi bundle should be stopped when broker stops itself
 ---

 Key: AMQ-3692
 URL: https://issues.apache.org/jira/browse/AMQ-3692
 Project: ActiveMQ
  Issue Type: New Feature
  Components: Broker
Affects Versions: 5.4.2
 Environment: ServiceMix 4.3
Reporter: metatech
 Fix For: 5.8.0

 Attachments: activemq-broker.xml, BrokerBundleWatcher.patch, 
 BrokerBundleWatcher_v2.patch, BrokerService.patch


 In case of error, the ActiveMQ broker can stop itself.
 In an OSGi/Blueprint environment, the bundle is however still in 
 Active/Created state, which misleads an external monitoring software into 
 thinking that the broker is running fine.
 This patch stops the bundle when the broker stops itself.
 This patch can also auto-restart the bundle, which will restart the broker.
 This is critical in an Master/Slave configuration : when the connection to 
 the database is lost, the broker cannot maintain the DB exclusive lock, and 
 it stops itself.  The bundle should be stopped and started again, so that it 
 enters again the Creating state, in which it waits to obtain the DB lock 
 again.
 The class BrokerBundleWatcher needs to be registered with the 
 shutdownHooks property of the ActiveMQ BrokerService.  However, there is 
 a limitation with the XBean syntax in a Blueprint XML, which does not allow 
 to define inner beans.  The workaround is to define the activemq-broker.xml 
 in full native Blueprint syntax (no XBean).
 The patch also provides a modified version of the BrokerService, that injects 
 its own reference into the ShutdownHook's which implement the 
 BrokerServiceAware interface.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (AMQ-3692) ActiveMQ OSGi bundle should be stopped when broker stops itself

2012-09-19 Thread metatech (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-3692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13458946#comment-13458946
 ] 

metatech commented on AMQ-3692:
---

Claus, the attached activemq-broker.xml shows how the class is registered as a 
shutdown hook of the broker.
The ActiveMQ Karaf commands must be run manually : the idea of the patch is 
that the broker is monitored and restarted automatically, without human 
intervention or a polling script.


 ActiveMQ OSGi bundle should be stopped when broker stops itself
 ---

 Key: AMQ-3692
 URL: https://issues.apache.org/jira/browse/AMQ-3692
 Project: ActiveMQ
  Issue Type: New Feature
  Components: Broker
Affects Versions: 5.4.2
 Environment: ServiceMix 4.3
Reporter: metatech
 Fix For: NEEDS_REVIEWED

 Attachments: activemq-broker.xml, BrokerBundleWatcher.patch, 
 BrokerBundleWatcher_v2.patch, BrokerService.patch

   Original Estimate: 1h
  Remaining Estimate: 1h

 In case of error, the ActiveMQ broker can stop itself.
 In an OSGi/Blueprint environment, the bundle is however still in 
 Active/Created state, which misleads an external monitoring software into 
 thinking that the broker is running fine.
 This patch stops the bundle when the broker stops itself.
 This patch can also auto-restart the bundle, which will restart the broker.
 This is critical in an Master/Slave configuration : when the connection to 
 the database is lost, the broker cannot maintain the DB exclusive lock, and 
 it stops itself.  The bundle should be stopped and started again, so that it 
 enters again the Creating state, in which it waits to obtain the DB lock 
 again.
 The class BrokerBundleWatcher needs to be registered with the 
 shutdownHooks property of the ActiveMQ BrokerService.  However, there is 
 a limitation with the XBean syntax in a Blueprint XML, which does not allow 
 to define inner beans.  The workaround is to define the activemq-broker.xml 
 in full native Blueprint syntax (no XBean).
 The patch also provides a modified version of the BrokerService, that injects 
 its own reference into the ShutdownHook's which implement the 
 BrokerServiceAware interface.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (AMQ-3640) activemq-broker.xml stays in Blueprint GracePeriod state forever

2012-07-30 Thread metatech (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-3640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13424835#comment-13424835
 ] 

metatech commented on AMQ-3640:
---

For the record, with ServiceMix 4.4.2 the problem is also reproducible.  I 
believe it is due to Felix 3.0.9 still being used.

 activemq-broker.xml stays in Blueprint GracePeriod state forever
 --

 Key: AMQ-3640
 URL: https://issues.apache.org/jira/browse/AMQ-3640
 Project: ActiveMQ
  Issue Type: Bug
  Components: activemq-camel
Affects Versions: 5.4.2
 Environment: ServiceMix 4.3
Reporter: metatech

 In a small proportion (about 20%) of first starts of ServiceMix, 
 the bundle activemq-broker.xml stays in the GracePeriod state forever (i.e. no 
 timeout after 5 minutes).  It is waiting for a namespace handler for the 
 namespace http://activemq.apache.org/schema/core, which should be exported 
 by the bundle activemq-blueprint, which is declared as an OSGi fragment of 
 the activemq-core bundle.  When the problem occurs, the fragment is not 
 properly attached to the host bundle.  See the extract of the Karaf console : 
 {code}
 [  46] [Active ] [] [   ] [   60] activemq-core (5.4.2)
 [  56] [Installed  ] [] [   ] [   60] activemq-blueprint 
 (5.4.2)
 [  59] [Active ] [GracePeriod ] [   ] [   60] activemq-broker.xml 
 (0.0.0)
 {code}
 There are no Hosts: and Fragments: lines.
 The activemq-blueprint stays in Installed state instead of Resolved, and 
 cannot be started manually because it is a fragment bundle.
 Proposed short-term workaround (untested) : merge both bundles 
 activemq-core and activemq-blueprint.
 Long-term solution : I guess the problem is due to a race condition between 
 the starting of the bundles and the discovery of the host-fragment 
 relationship : maybe a 2-phase mechanism is needed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira