Jenkins build is unstable: ActiveMQ-Java7 #187

2013-04-29 Thread Apache Jenkins Server
See https://builds.apache.org/job/ActiveMQ-Java7/187/



[jira] [Created] (AMQ-4493) Temporary destinations via STOMP fails using chained Request/Reply

2013-04-29 Thread Jan-Helge Bergesen (JIRA)
Jan-Helge Bergesen created AMQ-4493:
---

 Summary: Temporary destinations via STOMP fails using chained 
Request/Reply 
 Key: AMQ-4493
 URL: https://issues.apache.org/jira/browse/AMQ-4493
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker, stomp
Affects Versions: 5.8.0
 Environment: Standalone application, 2 Camel XML apps (2.10.3), 
ActiveMQ vanilla install (5.8.0)
Reporter: Jan-Helge Bergesen


It seems that the conversion from STOMP temporary destination names (i.e. 
/temp-queue/xx and /temp-topic/xx) to the exposed internal representation (i.e. 
/remote-temp-queue/ID\:x) fails under certain conditions.

Will attach a test setup that demonstrates the behavior on both 5.8.0 and 5.7.0.
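
For context, a STOMP request that uses a temporary reply destination looks roughly like this (destination names are illustrative; ^@ marks the NUL frame terminator):

```
SEND
destination:/queue/service.request
reply-to:/temp-queue/response

do-something
^@
```

On other connections the broker exposes that reply-to as an internal name of the form /remote-temp-queue/ID\:x; the report is that this mapping breaks when the reply is itself part of a second, chained request/reply hop.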

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (AMQ-4493) Temporary destinations via STOMP fails using chained Request/Reply

2013-04-29 Thread Jan-Helge Bergesen (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan-Helge Bergesen updated AMQ-4493:


Attachment: jms-request-reply.zip

Test setup with 3 applications.
The ZIP file contains a PowerPoint summary as well as STOMP on-wire traces.

 Temporary destinations via STOMP fails using chained Request/Reply 
 ---

 Key: AMQ-4493
 URL: https://issues.apache.org/jira/browse/AMQ-4493
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker, stomp
Affects Versions: 5.8.0
 Environment: Standalone application, 2 Camel XML apps (2.10.3), 
 ActiveMQ vanilla install (5.8.0)
Reporter: Jan-Helge Bergesen
 Attachments: jms-request-reply.zip





[jira] [Updated] (AMQ-4493) Temporary destinations via STOMP fails using chained Request/Reply

2013-04-29 Thread Jan-Helge Bergesen (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan-Helge Bergesen updated AMQ-4493:


Attachment: observation.png

Attached graphic summary.

 Temporary destinations via STOMP fails using chained Request/Reply 
 ---

 Key: AMQ-4493
 URL: https://issues.apache.org/jira/browse/AMQ-4493
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker, stomp
Affects Versions: 5.8.0
 Environment: Standalone application, 2 Camel XML apps (2.10.3), 
 ActiveMQ vanilla install (5.8.0)
Reporter: Jan-Helge Bergesen
 Attachments: jms-request-reply.zip, observation.png





[jira] [Updated] (AMQ-4493) Temporary destinations via STOMP fails using chained Request/Reply

2013-04-29 Thread Jan-Helge Bergesen (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan-Helge Bergesen updated AMQ-4493:


Environment: 
1 x Standalone application, 2 x Camel XML apps (2.10.3), ActiveMQ vanilla 
install (5.8.0).
Run on Windows 7, and also with the broker running on Red Hat Enterprise Linux 
Server release 6.3 (Santiago)


  was:Standalone application, 2 Camel XML apps (2.10.3), ActiveMQ vanilla 
install (5.8.0)


 Temporary destinations via STOMP fails using chained Request/Reply 
 ---

 Key: AMQ-4493
 URL: https://issues.apache.org/jira/browse/AMQ-4493
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker, stomp
Affects Versions: 5.8.0
 Environment: 1 x Standalone application, 2 x Camel XML apps (2.10.3), 
 ActiveMQ vanilla install (5.8.0).
 Run on Windows 7, but also with broker run on Red Hat Enterprise Linux Server 
 release 6.3 (Santiago)
Reporter: Jan-Helge Bergesen
 Attachments: jms-request-reply.zip, observation.png





[jira] [Updated] (AMQ-4489) Newly received messages with higher priority are never consumed, until broker is restarted

2013-04-29 Thread metatech (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

metatech updated AMQ-4489:
--

Attachment: MessagePriorityTest.java

Problem can be reproduced in ActiveMQ 5.8.0.
Here is a test driver to reproduce it.
Replace the MessagePriorityTest.java in a vanilla installation and run the test:
mvn -Dtest=JDBCMessagePriorityTest test
Note: asserts have been disabled to avoid stopping on the first error.

 Newly received messages with higher priority are never consumed, until broker 
 is restarted
 --

 Key: AMQ-4489
 URL: https://issues.apache.org/jira/browse/AMQ-4489
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker, Message Store
Affects Versions: 5.5.1
 Environment: ServiceMix 4.4.2, using Camel producers/consumers
Reporter: metatech
 Attachments: MessagePriorityTest.java


 We configured message prioritization according to the following page:
 http://activemq.apache.org/how-can-i-support-priority-queues.html
 We use a JDBC adapter for message persistence, in an Oracle database.
 Prioritization is enabled on the queue with the prioritizedMessages option, 
 and we also specify a memory limit for the queue (24 MB).
 We use ActiveMQ 5.5.1 within ServiceMix 4.4.2, and use Camel JMS 
 producers/consumers.
 Messages can have 2 priorities: 4 (normal) for non-business hours and 9 
 (high) for business hours.
 The scenario to reproduce the problem is the following:
 1. Enqueue 1000 normal and 1000 high messages.
 2. All high messages are consumed first.
 3. After a few normal messages are consumed, enqueue an additional 1000 high 
 messages.
 4. All normal messages are consumed before high messages.
 5. The additional 1000 high messages are never consumed.
 6. Restart the broker.
 7. The additional 1000 high messages start getting consumed.
 In production, we have a producer with high peaks during the night 
 (10,000-100,000 messages/hour), and 6 consumers (about 5,000-10,000 
 messages/hour), so the queue can reach 100,000-200,000 messages at some 
 periods of the day. Messages are small (200 bytes).
 We enabled SQL query tracing on the broker (with log4jdbc), and we see that 
 the logic with which the findNextMessagesByPriorityStatement query is 
 called does not seem correct in the JDBCMessageStore.recoverNextMessages 
 method:
 At step 2, we see the following query being executed:
 SELECT ID, MSG FROM ACTIVEMQ_MSGS WHERE CONTAINER='priorityQueue' AND ((ID > 
 200 AND PRIORITY = 9) OR PRIORITY < 9) ORDER BY PRIORITY DESC, ID
 At step 4, we see the following query being executed:
 SELECT ID, MSG FROM ACTIVEMQ_MSGS WHERE CONTAINER='priorityQueue' AND ((ID > 
 1200 AND PRIORITY = 4) OR PRIORITY < 4) ORDER BY PRIORITY DESC, ID
 The problem is that the value of the last priority, stored in the 
 lastRecoveredPriority variable of the JDBCMessageStore, stays permanently at 
 4 until step 6, where it is reset to 9.
 We tried changing the priority to the constant '9' in the query. It works OK 
 until step 3, where only 200 messages are consumed.
 Our understanding is that there should be one lastRecoveredSequenceId 
 variable for each priority level, so that the last consumed message not yet 
 removed from the DB is memorized, and the priority should probably also be 
 reset to 9 every time the query is executed.
 Can you have a look please?
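
The suggested fix above, one last-recovered sequence id per priority level, can be sketched roughly as follows (class and method names are illustrative, not ActiveMQ's actual JDBCMessageStore API):

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of the reporter's proposal: track one last-recovered
// sequence id per priority level instead of a single lastRecoveredPriority.
// Class and method names here are illustrative, not ActiveMQ's actual code.
class PriorityRecoveryTracker {
    private final Map<Integer, Long> lastSeqByPriority = new HashMap<>();

    // Record that a message with the given priority and sequence id
    // was handed to the cursor but not yet removed from the store.
    void recovered(int priority, long sequenceId) {
        lastSeqByPriority.merge(priority, sequenceId, Math::max);
    }

    // The bound to use in the "ID > ?" clause for this priority;
    // 0 means "recover from the beginning" for that level.
    long lastSequenceFor(int priority) {
        return lastSeqByPriority.getOrDefault(priority, 0L);
    }
}
```

With per-priority bounds, the "ID > ?" value passed to findNextMessagesByPriorityStatement would advance independently for each priority instead of a single value getting stuck at 4.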



[jira] [Updated] (AMQ-4489) Newly received messages with higher priority are never consumed, until broker is restarted

2013-04-29 Thread metatech (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

metatech updated AMQ-4489:
--

Attachment: MessagePriorityTest_workaround.java

Here is a new version of the test driver that almost solves the problem of the 
priorities not being taken into account. The workaround is to restart the broker 
before message consumption. In real life this is of course not possible, but 
it can help find the root cause of the problem. Instead of hundreds of 
messages not being prioritized properly, only 2 are not, and this minor 
problem can also be solved with queuePrefetch=0 instead of 1.
Note: the test driver does not reproduce the problem where the messages with 
high priority are never consumed anymore (this problem could not be isolated).

 Newly received messages with higher priority are never consumed, until broker 
 is restarted
 --

 Key: AMQ-4489
 URL: https://issues.apache.org/jira/browse/AMQ-4489
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker, Message Store
Affects Versions: 5.5.1
 Environment: ServiceMix 4.4.2, using Camel producers/consumers
Reporter: metatech
 Attachments: MessagePriorityTest.java, 
 MessagePriorityTest_workaround.java





[jira] [Updated] (AMQ-4489) Newly received messages with higher priority are never consumed, until broker is restarted

2013-04-29 Thread metatech (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

metatech updated AMQ-4489:
--

Attachment: MessagePriorityTest_frozen.java

A third version (_frozen) of the test driver reproduces the frozen 
consumption of messages. After 3600 messages, there are still 1200 messages in 
the queue, but the browser sees 0. The workaround of restarting the broker 
resumes message consumption.

 Newly received messages with higher priority are never consumed, until broker 
 is restarted
 --

 Key: AMQ-4489
 URL: https://issues.apache.org/jira/browse/AMQ-4489
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker, Message Store
Affects Versions: 5.5.1
 Environment: ServiceMix 4.4.2, using Camel producers/consumers
Reporter: metatech
 Attachments: MessagePriorityTest_frozen.java, 
 MessagePriorityTest.java, MessagePriorityTest_workaround.java





[jira] [Created] (AMQ-4494) Cursor hasSpace() doesn't check system usage

2013-04-29 Thread Dejan Bosanac (JIRA)
Dejan Bosanac created AMQ-4494:
--

 Summary: Cursor hasSpace() doesn't check system usage
 Key: AMQ-4494
 URL: https://issues.apache.org/jira/browse/AMQ-4494
 Project: ActiveMQ
  Issue Type: Bug
Affects Versions: 5.8.0
Reporter: Dejan Bosanac
Assignee: Dejan Bosanac
 Fix For: 5.9.0


When checking for space, every cursor checks its destination's memory usage. 
Unfortunately, that doesn't check the parent (system usage), so with a large 
number of destinations, when the total sum of per-destination limits is larger 
than the total system memory limit, we can break the system usage memory limit.
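
A minimal sketch of the missing check described above (names are illustrative; ActiveMQ's real SystemUsage/MemoryUsage hierarchy is more involved): a child usage should only report space when its parent does too.

```java
// Illustrative sketch, not ActiveMQ's actual usage classes: a cursor's
// hasSpace() should consult the parent (system) usage, not only the
// per-destination limit.
class UsageNode {
    private final long limit;
    private long used;
    private final UsageNode parent;   // null for the root (system) usage

    UsageNode(long limit, UsageNode parent) {
        this.limit = limit;
        this.parent = parent;
    }

    void add(long bytes) {
        used += bytes;
        if (parent != null) parent.add(bytes);   // usage rolls up to the parent
    }

    // Space exists only if this node AND every ancestor is under its limit.
    boolean hasSpace() {
        return used < limit && (parent == null || parent.hasSpace());
    }
}
```

Checking only the destination node is exactly the reported bug: each destination can be under its own limit while their sum has already broken the system limit.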



[jira] [Created] (AMQ-4495) Improve cursor memory management

2013-04-29 Thread Dejan Bosanac (JIRA)
Dejan Bosanac created AMQ-4495:
--

 Summary: Improve cursor memory management
 Key: AMQ-4495
 URL: https://issues.apache.org/jira/browse/AMQ-4495
 Project: ActiveMQ
  Issue Type: Bug
Reporter: Dejan Bosanac


As it currently stands, the store queue cursor will cache producer messages 
until it gets to 70% (the high watermark) of its usage. After that, caching 
stops and messages go only to the store. When a consumer comes, messages get 
dispatched to it, but memory isn't released until they are acked. The problem 
is with the use case where producer flow control is off and we have a prefetch 
large enough to get all our messages from the cache. Then the cursor basically 
gets empty and, as message acks release memory one by one, we go to the store 
and try to batch one message at a time. You can guess that things start to be 
really slow at that point. 

The solution for this scenario is to wait with batching until we have more 
space, so that store access is optimized. We can do this by adding a new limit 
(smaller than the high watermark) which will be used as the limit below which 
we start filling the cursor from the store again.

All this led us to the following questions:

1. Why do we use 70% as the limit (instead of 100%) at which we stop caching 
producer messages?

2. Would a solution that stops caching producer messages at 100% of usage and 
then starts batching messages from the store when usage drops below the high 
watermark value be enough? Of course, the high watermark would be configurable, 
but 100% by default so we don't alter any behavior for regular use cases.
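
The proposed change can be sketched as a pair of thresholds (percentages and names are illustrative, not actual ActiveMQ configuration):

```java
// Sketch of the proposed refill rule: stop caching producer messages at the
// high watermark, and only resume batching from the store once usage drops
// below a new, lower refill limit, so store reads happen in larger batches.
// Names and percentages are illustrative, not ActiveMQ configuration.
class CursorRefillPolicy {
    private final int highWatermarkPct;   // e.g. 100: stop caching here
    private final int refillLimitPct;     // resume store batching below this

    CursorRefillPolicy(int highWatermarkPct, int refillLimitPct) {
        if (refillLimitPct >= highWatermarkPct)
            throw new IllegalArgumentException("refill limit must be below high watermark");
        this.highWatermarkPct = highWatermarkPct;
        this.refillLimitPct = refillLimitPct;
    }

    boolean cacheProducerMessage(int usagePct) {
        return usagePct < highWatermarkPct;   // cache until the watermark is hit
    }

    boolean batchFromStore(int usagePct) {
        return usagePct < refillLimitPct;     // wait for space before hitting the store
    }
}
```

The gap between the two limits is what allows the store to be read in larger batches instead of one message per ack.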



[DISCUSS] Rethinking cursor memory management

2013-04-29 Thread Dejan Bosanac
I was given the task of researching some configuration limits for a use
case, and as part of it I went thoroughly through the current memory
management for cursors. I discussed it with Gary and we came up with some
changes that should be made. But before that, it'd be good to have a wider
discussion of the proposed changes.

As it currently stands, the store queue cursor will cache producer messages
until it gets to 70% (the high watermark) of its usage. After that, caching
stops and messages go only to the store. When a consumer comes, messages get
dispatched to it, but memory isn't released until they are acked. The
problem is with the use case where producer flow control is off and we have
a prefetch large enough to get all our messages from the cache. Then the
cursor basically gets empty and, as message acks release memory one by one,
we go to the store and try to batch one message at a time. You can guess
that things start to be really slow at that point.

The solution for this scenario is to wait with batching until we have more
space, so that store access is optimized. We can do this by adding a new
limit (smaller than the high watermark) which will be used as the limit
below which we start filling the cursor from the store again.

All this led us to the following questions:

1. Why do we use 70% as the limit (instead of 100%) at which we stop caching
producer messages?

2. Would a solution that stops caching producer messages at 100% of usage
and then starts batching messages from the store when usage drops below the
high watermark value be enough? Of course, the high watermark would be
configurable, but 100% by default so we don't alter any behavior for
regular use cases.

I can't find a reason why this shouldn't work, but if anyone has any
comments or other ideas on this, please post them here or on the relevant
Jira (https://issues.apache.org/jira/browse/AMQ-4495)


Regards
--
Dejan Bosanac
--
Red Hat, Inc.
FuseSource is now part of Red Hat
dbosa...@redhat.com
Twitter: @dejanb
Blog: http://sensatic.net
ActiveMQ in Action: http://www.manning.com/snyder/


[jira] [Assigned] (AMQNET-434) FailoverTransport Memory Leak with TransactionState

2013-04-29 Thread Timothy Bish (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQNET-434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish reassigned AMQNET-434:
---

Assignee: Timothy Bish  (was: Jim Gomes)

 FailoverTransport Memory Leak with TransactionState
 ---

 Key: AMQNET-434
 URL: https://issues.apache.org/jira/browse/AMQNET-434
 Project: ActiveMQ .Net
  Issue Type: Bug
Affects Versions: 1.5.6
Reporter: Daniel Marbach
Assignee: Timothy Bish

 I'm hunting down a possible memory leak. We have the following problem in 
 production:
 when the consumer/subscriber endpoint runs for a long time with failover 
 transport enabled, the memory grows indefinitely. 
 I used YouTrack and AntsProfiler to hunt down the issue. The retention path 
 I see in production is the following:
 The FailoverTransport nested class FailoverTask has two 
 ConnectionStateTrackers; these keep a dictionary which links the ConnectionId 
 to the ConnectionState. The ConnectionState itself has a dictionary which 
 links the TransactionId to the TransactionState. The TransactionState tracks 
 commands. BUT these commands are never freed from the transaction state 
 and stay there forever, which will blow up the memory at some point. 
 I'm currently investigating how to fix this but must first properly 
 understand the code. I opened this issue in the hope that it will ring a 
 bell for you guys.
 Daniel
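
The direction of a fix implied by the report might look like this (a rough Java sketch; the actual Apache.NMS.ActiveMQ classes are C# and differ): the transaction state should drop its tracked commands once the transaction completes.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch, not the actual Apache.NMS.ActiveMQ classes: a
// transaction state that releases its tracked replay commands once the
// transaction completes, so the ConnectionState map no longer pins them
// in memory for the lifetime of the connection.
class TrackedTransactionState {
    private final List<Object> commands = new ArrayList<>();
    private boolean completed;

    void track(Object command) {
        if (!completed) commands.add(command);
    }

    // On commit or rollback, drop the replay commands instead of
    // keeping them forever.
    void complete() {
        completed = true;
        commands.clear();
    }

    int trackedCount() { return commands.size(); }
}
```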



[jira] [Updated] (AMQNET-434) FailoverTransport Memory Leak with TransactionState

2013-04-29 Thread Timothy Bish (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQNET-434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish updated AMQNET-434:


Fix Version/s: 1.6.0

 FailoverTransport Memory Leak with TransactionState
 ---

 Key: AMQNET-434
 URL: https://issues.apache.org/jira/browse/AMQNET-434
 Project: ActiveMQ .Net
  Issue Type: Bug
Affects Versions: 1.5.6
Reporter: Daniel Marbach
Assignee: Timothy Bish
 Fix For: 1.6.0





[jira] [Commented] (AMQNET-434) FailoverTransport Memory Leak with TransactionState

2013-04-29 Thread Timothy Bish (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQNET-434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13644520#comment-13644520
 ] 

Timothy Bish commented on AMQNET-434:
-

While working on some unit tests for the ConnectionStateTracker over the 
weekend, I found a few things so far.

 FailoverTransport Memory Leak with TransactionState
 ---

 Key: AMQNET-434
 URL: https://issues.apache.org/jira/browse/AMQNET-434
 Project: ActiveMQ .Net
  Issue Type: Bug
Affects Versions: 1.5.6
Reporter: Daniel Marbach
Assignee: Timothy Bish
 Fix For: 1.6.0





[jira] [Created] (AMQNET-435) ConnectionStateTracker not properly enforcing cache limits

2013-04-29 Thread Timothy Bish (JIRA)
Timothy Bish created AMQNET-435:
---

 Summary: ConnectionStateTracker not properly enforcing cache limits
 Key: AMQNET-435
 URL: https://issues.apache.org/jira/browse/AMQNET-435
 Project: ActiveMQ .Net
  Issue Type: Bug
  Components: ActiveMQ
Affects Versions: 1.5.6
Reporter: Timothy Bish
Assignee: Timothy Bish
 Fix For: 1.6.0


The ConnectionStateTracker doesn't properly limit the size of its message 
cache and allows a build-up of Message commands or MessagePull commands 
beyond the configured cache size. 
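
A size-enforcing cache of the kind the tracker is expected to maintain can be sketched like this (a rough Java sketch; names are illustrative, and the actual Apache.NMS code is C#):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch of the expected behavior: a message cache that
// actually enforces its configured size by evicting the eldest entry
// on insert, rather than growing without bound.
class BoundedMessageCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxCacheSize;

    BoundedMessageCache(int maxCacheSize) {
        super(16, 0.75f, true);   // access-order, LRU-style eviction
        this.maxCacheSize = maxCacheSize;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxCacheSize;   // evict once over the limit
    }
}
```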



[jira] [Resolved] (AMQNET-435) ConnectionStateTracker not properly enforcing cache limits

2013-04-29 Thread Timothy Bish (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQNET-435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish resolved AMQNET-435.
-

Resolution: Fixed

Fixed on trunk

 ConnectionStateTracker not properly enforcing cache limits
 -

 Key: AMQNET-435
 URL: https://issues.apache.org/jira/browse/AMQNET-435
 Project: ActiveMQ .Net
  Issue Type: Bug
  Components: ActiveMQ
Affects Versions: 1.5.6
Reporter: Timothy Bish
Assignee: Timothy Bish
 Fix For: 1.6.0





[jira] [Created] (AMQNET-436) Pull Consumers can return immediately when connection is not active and a timed receive is called.

2013-04-29 Thread Timothy Bish (JIRA)
Timothy Bish created AMQNET-436:
---

 Summary: Pull Consumers can return immediately when connection is 
not active and a timed receive is called.
 Key: AMQNET-436
 URL: https://issues.apache.org/jira/browse/AMQNET-436
 Project: ActiveMQ .Net
  Issue Type: Improvement
  Components: ActiveMQ
Affects Versions: 1.5.6
Reporter: Timothy Bish
Assignee: Timothy Bish
Priority: Minor
 Fix For: 1.6.0


For Connections that use Failover and MessageConsumers that have a zero 
prefetch, a timed receive can simply return immediately instead of blocking or 
throwing an exception if a send timeout is configured.  



[jira] [Commented] (AMQNET-434) FailoverTransport Memory Leak with TransactionState

2013-04-29 Thread Timothy Bish (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQNET-434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13644823#comment-13644823
 ] 

Timothy Bish commented on AMQNET-434:
-

Make sure you add any additional information your profiler tool provides to aid 
in tracking these down. 

 FailoverTransport Memory Leak with TransactionState
 ---

 Key: AMQNET-434
 URL: https://issues.apache.org/jira/browse/AMQNET-434
 Project: ActiveMQ .Net
  Issue Type: Bug
Affects Versions: 1.5.6
Reporter: Daniel Marbach
Assignee: Timothy Bish
 Fix For: 1.6.0





[jira] [Updated] (AMQNET-425) Add support for updateClusterClients to Failover Transport

2013-04-29 Thread Timothy Bish (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQNET-425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish updated AMQNET-425:


Summary: Add support for updateClusterClients to Failover Transport  (was: 
Dynamic Failover & Apache.NMS.activeMQ 1.5.5 issue)

 Add support for updateClusterClients to Failover Transport
 --

 Key: AMQNET-425
 URL: https://issues.apache.org/jira/browse/AMQNET-425
 Project: ActiveMQ .Net
  Issue Type: Bug
  Components: NMS
Affects Versions: 1.5.5, 1.5.6
 Environment: We have developed a .NET 4.0 application (Producer & 
 Consumer) using:
 -Apache.NMS 1.5.0
 -Apache.NMS.activeMQ 1.5.5
  vmware vSphere 4.1
 -CentOS 6 32-bit (AQsrv1 & AQsrv2)
 -JRE6
 -Fuse Message Broker v5.5.1
Reporter: Marc Rodrigues
Assignee: Jim Gomes
Priority: Critical

 Hello,
 The NMS failover client does not yet support the updateClusterClients 
 option, so it won't react to the list of brokers provided by the server.
 The workaround is to provide the static list or use some other discovery 
 mechanism like multicast.
 Issue reported via forum:
 Our goal is to use a network of brokers with failover and dynamic discovery 
 (the production environment will have around 100 brokers).
 So we have configured 2 CentOS virtual machines for test:
 vmware vSphere 4.1
 -CentOS 6 32-bit (AQsrv1 & AQsrv2)
 -JRE6
 -Fuse Message Broker v5.5.1 (see the activemq.xml configuration at the end)
 We have developed a .NET 4.0 application (Producer & Consumer) using:
 -Apache.NMS 1.5.0
 -Apache.NMS.activeMQ 1.5.5
 If I use the following URI to connect:
 activemq:failover://(tcp://AQsrv1:61616,tcp://AQsrv2:61616)?transport.randomize=true&transport.startupMaxReconnectAttempts=0&transport.timeout=2000&nms.AsyncSend=true
 then when I stop the ActiveMQ process on AQsrv1, the failover works fine and 
 the application continues to send/consume messages on AQsrv2.
 Now, if I use the following URI:
 activemq:failover://(tcp://AQsrv1:61616)?transport.randomize=true&transport.startupMaxReconnectAttempts=0&transport.timeout=2000&nms.AsyncSend=true
 then when I stop the ActiveMQ process on AQsrv1, the failover no longer 
 works and my application is not able to connect to AQsrv2 automatically, 
 even though I'm supposed to receive a list of brokers (including AQsrv2) 
 from AQsrv1.
 activemq.xml on both servers:
 <networkConnectors>
   <networkConnector
     name="ncelab-activeMQ"
     uri="multicast://default?group=ncelab"
     dynamicOnly="true"
     networkTTL="2"
     duplex="true"
     prefetchSize="1"
     decreaseNetworkConsumerPriority="false" />
 </networkConnectors>
 <persistenceAdapter>
   <kahaDB directory="${activemq.data}/kahadb"/>
 </persistenceAdapter>
 <transportConnectors>
   <transportConnector name="openwire"
     uri="tcp://0.0.0.0:61616"
     discoveryUri="multicast://default?group=ncelab"
     enableStatusMonitor="true"
     rebalanceClusterClients="true"
     updateClusterClients="true"
     updateClusterClientsOnRemove="true" />
 </transportConnectors>

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (AMQNET-425) Add support for updateClusterClients to Failover Transport

2013-04-29 Thread Timothy Bish (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQNET-425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish updated AMQNET-425:


Issue Type: Improvement  (was: Bug)

 Add support for updateClusterClients to Failover Transport
 --

 Key: AMQNET-425
 URL: https://issues.apache.org/jira/browse/AMQNET-425
 Project: ActiveMQ .Net
  Issue Type: Improvement
  Components: NMS
Affects Versions: 1.5.5, 1.5.6
 Environment: We have developed a .NET 4.0 application (Producer & 
 Consumer) using:
 -Apache.NMS 1.5.0
 -Apache.NMS.activeMQ 1.5.5
  vmware vSphere 4.1
 -CentOS 6 32-bit (AQsrv1 & AQsrv2)
 -JRE6
 -Fuse Message Broker v5.5.1
Reporter: Marc Rodrigues
Assignee: Jim Gomes
Priority: Critical

 Hello,
 The NMS failover client does not yet support the updateClusterClients 
 option, so it won't react to the list of brokers provided by the server.
 The workaround is to provide the static list or use some other discovery 
 mechanism like multicast.
 Issue reported via forum:
 Our goal is to use a network of brokers with failover and dynamic discovery 
 (the production environment will have around 100 brokers).
 So we have configured 2 CentOS virtual machines for test:
 vmware vSphere 4.1
 -CentOS 6 32-bit (AQsrv1 & AQsrv2)
 -JRE6
 -Fuse Message Broker v5.5.1 (see the activemq.xml configuration at the end)
 We have developed a .NET 4.0 application (Producer & Consumer) using:
 -Apache.NMS 1.5.0
 -Apache.NMS.activeMQ 1.5.5
 If I use the following URI to connect:
 activemq:failover://(tcp://AQsrv1:61616,tcp://AQsrv2:61616)?transport.randomize=true&transport.startupMaxReconnectAttempts=0&transport.timeout=2000&nms.AsyncSend=true
 then when I stop the ActiveMQ process on AQsrv1, the failover works fine and 
 the application continues to send/consume messages on AQsrv2.
 Now, if I use the following URI:
 activemq:failover://(tcp://AQsrv1:61616)?transport.randomize=true&transport.startupMaxReconnectAttempts=0&transport.timeout=2000&nms.AsyncSend=true
 then when I stop the ActiveMQ process on AQsrv1, the failover no longer 
 works and my application is not able to connect to AQsrv2 automatically, 
 even though I'm supposed to receive a list of brokers (including AQsrv2) 
 from AQsrv1.
 activemq.xml on both servers:
 <networkConnectors>
   <networkConnector
     name="ncelab-activeMQ"
     uri="multicast://default?group=ncelab"
     dynamicOnly="true"
     networkTTL="2"
     duplex="true"
     prefetchSize="1"
     decreaseNetworkConsumerPriority="false" />
 </networkConnectors>
 <persistenceAdapter>
   <kahaDB directory="${activemq.data}/kahadb"/>
 </persistenceAdapter>
 <transportConnectors>
   <transportConnector name="openwire"
     uri="tcp://0.0.0.0:61616"
     discoveryUri="multicast://default?group=ncelab"
     enableStatusMonitor="true"
     rebalanceClusterClients="true"
     updateClusterClients="true"
     updateClusterClientsOnRemove="true" />
 </transportConnectors>

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (AMQNET-425) Add support for updateClusterClients to Failover Transport

2013-04-29 Thread Timothy Bish (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQNET-425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish updated AMQNET-425:


Priority: Major  (was: Critical)

 Add support for updateClusterClients to Failover Transport
 --

 Key: AMQNET-425
 URL: https://issues.apache.org/jira/browse/AMQNET-425
 Project: ActiveMQ .Net
  Issue Type: Improvement
  Components: NMS
Affects Versions: 1.5.5, 1.5.6
 Environment: We have developed a .NET 4.0 application (Producer & 
 Consumer) using:
 -Apache.NMS 1.5.0
 -Apache.NMS.activeMQ 1.5.5
  vmware vSphere 4.1
 -CentOS 6 32-bit (AQsrv1 & AQsrv2)
 -JRE6
 -Fuse Message Broker v5.5.1
Reporter: Marc Rodrigues
Assignee: Jim Gomes

 Hello,
 The NMS failover client does not yet support the updateClusterClients 
 option, so it won't react to the list of brokers provided by the server.
 The workaround is to provide the static list or use some other discovery 
 mechanism like multicast.
 Issue reported via forum:
 Our goal is to use a network of brokers with failover and dynamic discovery 
 (the production environment will have around 100 brokers).
 So we have configured 2 CentOS virtual machines for test:
 vmware vSphere 4.1
 -CentOS 6 32-bit (AQsrv1 & AQsrv2)
 -JRE6
 -Fuse Message Broker v5.5.1 (see the activemq.xml configuration at the end)
 We have developed a .NET 4.0 application (Producer & Consumer) using:
 -Apache.NMS 1.5.0
 -Apache.NMS.activeMQ 1.5.5
 If I use the following URI to connect:
 activemq:failover://(tcp://AQsrv1:61616,tcp://AQsrv2:61616)?transport.randomize=true&transport.startupMaxReconnectAttempts=0&transport.timeout=2000&nms.AsyncSend=true
 then when I stop the ActiveMQ process on AQsrv1, the failover works fine and 
 the application continues to send/consume messages on AQsrv2.
 Now, if I use the following URI:
 activemq:failover://(tcp://AQsrv1:61616)?transport.randomize=true&transport.startupMaxReconnectAttempts=0&transport.timeout=2000&nms.AsyncSend=true
 then when I stop the ActiveMQ process on AQsrv1, the failover no longer 
 works and my application is not able to connect to AQsrv2 automatically, 
 even though I'm supposed to receive a list of brokers (including AQsrv2) 
 from AQsrv1.
 activemq.xml on both servers:
 <networkConnectors>
   <networkConnector
     name="ncelab-activeMQ"
     uri="multicast://default?group=ncelab"
     dynamicOnly="true"
     networkTTL="2"
     duplex="true"
     prefetchSize="1"
     decreaseNetworkConsumerPriority="false" />
 </networkConnectors>
 <persistenceAdapter>
   <kahaDB directory="${activemq.data}/kahadb"/>
 </persistenceAdapter>
 <transportConnectors>
   <transportConnector name="openwire"
     uri="tcp://0.0.0.0:61616"
     discoveryUri="multicast://default?group=ncelab"
     enableStatusMonitor="true"
     rebalanceClusterClients="true"
     updateClusterClients="true"
     updateClusterClientsOnRemove="true" />
 </transportConnectors>

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (AMQNET-403) Add support for priority URIs to Failover transport

2013-04-29 Thread Timothy Bish (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQNET-403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish updated AMQNET-403:


Summary: Add support for priority URIs to Failover transport  (was: 
Failover transport: support priority urls)

 Add support for priority URIs to Failover transport
 ---

 Key: AMQNET-403
 URL: https://issues.apache.org/jira/browse/AMQNET-403
 Project: ActiveMQ .Net
  Issue Type: New Feature
  Components: ActiveMQ
Reporter: Boris Rybalkin
Assignee: Jim Gomes
Priority: Minor

 Currently priorityBackup parameter is not supported in NMS library.
 Is it possible to have similar to what java library has:
 failover:(tcp://local:61616,tcp://remote:61616)?priorityBackup=true

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (AMQNET-403) Add support for priority URIs to Failover transport

2013-04-29 Thread Timothy Bish (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQNET-403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish updated AMQNET-403:


Fix Version/s: 1.6.0

 Add support for priority URIs to Failover transport
 ---

 Key: AMQNET-403
 URL: https://issues.apache.org/jira/browse/AMQNET-403
 Project: ActiveMQ .Net
  Issue Type: New Feature
  Components: ActiveMQ
Reporter: Boris Rybalkin
Assignee: Jim Gomes
Priority: Minor
 Fix For: 1.6.0


 Currently priorityBackup parameter is not supported in NMS library.
 Is it possible to have similar to what java library has:
 failover:(tcp://local:61616,tcp://remote:61616)?priorityBackup=true

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (AMQNET-403) Add support for priority URIs to Failover transport

2013-04-29 Thread Timothy Bish (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQNET-403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish reassigned AMQNET-403:
---

Assignee: Timothy Bish  (was: Jim Gomes)

 Add support for priority URIs to Failover transport
 ---

 Key: AMQNET-403
 URL: https://issues.apache.org/jira/browse/AMQNET-403
 Project: ActiveMQ .Net
  Issue Type: New Feature
  Components: ActiveMQ
Reporter: Boris Rybalkin
Assignee: Timothy Bish
Priority: Minor
 Fix For: 1.6.0


 Currently priorityBackup parameter is not supported in NMS library.
 Is it possible to have similar to what java library has:
 failover:(tcp://local:61616,tcp://remote:61616)?priorityBackup=true

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (AMQNET-425) Add support for updateClusterClients to Failover Transport

2013-04-29 Thread Timothy Bish (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQNET-425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish reassigned AMQNET-425:
---

Assignee: Timothy Bish  (was: Jim Gomes)

 Add support for updateClusterClients to Failover Transport
 --

 Key: AMQNET-425
 URL: https://issues.apache.org/jira/browse/AMQNET-425
 Project: ActiveMQ .Net
  Issue Type: Improvement
  Components: NMS
Affects Versions: 1.5.5, 1.5.6
 Environment: We have developed a .NET 4.0 application (Producer & 
 Consumer) using:
 -Apache.NMS 1.5.0
 -Apache.NMS.activeMQ 1.5.5
  vmware vSphere 4.1
 -CentOS 6 32-bit (AQsrv1 & AQsrv2)
 -JRE6
 -Fuse Message Broker v5.5.1
Reporter: Marc Rodrigues
Assignee: Timothy Bish

 Hello,
 The NMS failover client does not yet support the updateClusterClients 
 option, so it won't react to the list of brokers provided by the server.
 The workaround is to provide the static list or use some other discovery 
 mechanism like multicast.
 Issue reported via forum:
 Our goal is to use a network of brokers with failover and dynamic discovery 
 (the production environment will have around 100 brokers).
 So we have configured 2 CentOS virtual machines for test:
 vmware vSphere 4.1
 -CentOS 6 32-bit (AQsrv1 & AQsrv2)
 -JRE6
 -Fuse Message Broker v5.5.1 (see the activemq.xml configuration at the end)
 We have developed a .NET 4.0 application (Producer & Consumer) using:
 -Apache.NMS 1.5.0
 -Apache.NMS.activeMQ 1.5.5
 If I use the following URI to connect:
 activemq:failover://(tcp://AQsrv1:61616,tcp://AQsrv2:61616)?transport.randomize=true&transport.startupMaxReconnectAttempts=0&transport.timeout=2000&nms.AsyncSend=true
 then when I stop the ActiveMQ process on AQsrv1, the failover works fine and 
 the application continues to send/consume messages on AQsrv2.
 Now, if I use the following URI:
 activemq:failover://(tcp://AQsrv1:61616)?transport.randomize=true&transport.startupMaxReconnectAttempts=0&transport.timeout=2000&nms.AsyncSend=true
 then when I stop the ActiveMQ process on AQsrv1, the failover no longer 
 works and my application is not able to connect to AQsrv2 automatically, 
 even though I'm supposed to receive a list of brokers (including AQsrv2) 
 from AQsrv1.
 activemq.xml on both servers:
 <networkConnectors>
   <networkConnector
     name="ncelab-activeMQ"
     uri="multicast://default?group=ncelab"
     dynamicOnly="true"
     networkTTL="2"
     duplex="true"
     prefetchSize="1"
     decreaseNetworkConsumerPriority="false" />
 </networkConnectors>
 <persistenceAdapter>
   <kahaDB directory="${activemq.data}/kahadb"/>
 </persistenceAdapter>
 <transportConnectors>
   <transportConnector name="openwire"
     uri="tcp://0.0.0.0:61616"
     discoveryUri="multicast://default?group=ncelab"
     enableStatusMonitor="true"
     rebalanceClusterClients="true"
     updateClusterClients="true"
     updateClusterClientsOnRemove="true" />
 </transportConnectors>

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (AMQNET-425) Add support for updateClusterClients to Failover Transport

2013-04-29 Thread Timothy Bish (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQNET-425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish updated AMQNET-425:


Fix Version/s: 1.6.0

 Add support for updateClusterClients to Failover Transport
 --

 Key: AMQNET-425
 URL: https://issues.apache.org/jira/browse/AMQNET-425
 Project: ActiveMQ .Net
  Issue Type: Improvement
  Components: NMS
Affects Versions: 1.5.5, 1.5.6
 Environment: We have developed a .NET 4.0 application (Producer & 
 Consumer) using:
 -Apache.NMS 1.5.0
 -Apache.NMS.activeMQ 1.5.5
  vmware vSphere 4.1
 -CentOS 6 32-bit (AQsrv1 & AQsrv2)
 -JRE6
 -Fuse Message Broker v5.5.1
Reporter: Marc Rodrigues
Assignee: Timothy Bish
 Fix For: 1.6.0


 Hello,
 The NMS failover client does not yet support the updateClusterClients 
 option, so it won't react to the list of brokers provided by the server.
 The workaround is to provide the static list or use some other discovery 
 mechanism like multicast.
 Issue reported via forum:
 Our goal is to use a network of brokers with failover and dynamic discovery 
 (the production environment will have around 100 brokers).
 So we have configured 2 CentOS virtual machines for test:
 vmware vSphere 4.1
 -CentOS 6 32-bit (AQsrv1 & AQsrv2)
 -JRE6
 -Fuse Message Broker v5.5.1 (see the activemq.xml configuration at the end)
 We have developed a .NET 4.0 application (Producer & Consumer) using:
 -Apache.NMS 1.5.0
 -Apache.NMS.activeMQ 1.5.5
 If I use the following URI to connect:
 activemq:failover://(tcp://AQsrv1:61616,tcp://AQsrv2:61616)?transport.randomize=true&transport.startupMaxReconnectAttempts=0&transport.timeout=2000&nms.AsyncSend=true
 then when I stop the ActiveMQ process on AQsrv1, the failover works fine and 
 the application continues to send/consume messages on AQsrv2.
 Now, if I use the following URI:
 activemq:failover://(tcp://AQsrv1:61616)?transport.randomize=true&transport.startupMaxReconnectAttempts=0&transport.timeout=2000&nms.AsyncSend=true
 then when I stop the ActiveMQ process on AQsrv1, the failover no longer 
 works and my application is not able to connect to AQsrv2 automatically, 
 even though I'm supposed to receive a list of brokers (including AQsrv2) 
 from AQsrv1.
 activemq.xml on both servers:
 <networkConnectors>
   <networkConnector
     name="ncelab-activeMQ"
     uri="multicast://default?group=ncelab"
     dynamicOnly="true"
     networkTTL="2"
     duplex="true"
     prefetchSize="1"
     decreaseNetworkConsumerPriority="false" />
 </networkConnectors>
 <persistenceAdapter>
   <kahaDB directory="${activemq.data}/kahadb"/>
 </persistenceAdapter>
 <transportConnectors>
   <transportConnector name="openwire"
     uri="tcp://0.0.0.0:61616"
     discoveryUri="multicast://default?group=ncelab"
     enableStatusMonitor="true"
     rebalanceClusterClients="true"
     updateClusterClients="true"
     updateClusterClientsOnRemove="true" />
 </transportConnectors>

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (AMQ-4489) Newly received messages with higher priority are never consumed, until broker is restarted

2013-04-29 Thread Gary Tully (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13644843#comment-13644843
 ] 

Gary Tully commented on AMQ-4489:
-

I just gave org.apache.activemq.store.JDBCMessagePriorityTest#testQueues (with 
your modified MessagePriorityTest) a run on trunk and it works. Is that the 
test that should show the problem?
Maybe try against 5.9-SNAPSHOT, or did I miss something?

 Newly received messages with higher priority are never consumed, until broker 
 is restarted
 --

 Key: AMQ-4489
 URL: https://issues.apache.org/jira/browse/AMQ-4489
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker, Message Store
Affects Versions: 5.5.1
 Environment: ServiceMix 4.4.2, using Camel producers/consumers
Reporter: metatech
 Attachments: MessagePriorityTest_frozen.java, 
 MessagePriorityTest.java, MessagePriorityTest_workaround.java


 We configured message prioritization according to the following page :
 http://activemq.apache.org/how-can-i-support-priority-queues.html
 We use a JDBC adapter for message persistence, in an Oracle database.
 Prioritisation is enabled on the queue with the prioritizedMessages option, 
 and we also specify a memory limit for the queue (24 MB).
 We use ActiveMQ 5.5.1 within ServiceMix 4.4.2, and use Camel JMS 
 producers/consumers.
 Messages can have 2 priorities: 4 (normal) for non-business hours and 9 
 (high) for business hours.
 The scenario to reproduce the problem is the following:
 1. Enqueue 1000 normal and 1000 high messages.
 2. All high messages are consumed first.
 3. After a few normal messages are consumed, enqueue additional 1000 high 
 messages.
 4. All normal messages are consumed before high messages.
 5. All additional high 1000 messages are never consumed.
 6. Restart broker.
 7. All additional high 1000 messages start getting consumed.
 In production, we have a producer with high peaks during the night 
 (10,000-100,000 messages/hour), and 6 consumers (about 5,000-10,000 
 messages/hour), so the queue can reach 100,000-200,000 messages at some 
 periods of the day. Messages are small (200 bytes).
 We enabled SQL query tracing on the broker (with log4jdbc), and we see that 
 the logic with which the findNextMessagesByPriorityStatement query is 
 called does not seem correct in the JDBCMessageStore.recoverNextMessages 
 method:
 At step 2, we see the following query being executed:
 SELECT ID, MSG FROM ACTIVEMQ_MSGS WHERE CONTAINER='priorityQueue' AND ((ID > 
 200 AND PRIORITY = 9) OR PRIORITY < 9) ORDER BY PRIORITY DESC, ID
 At step 4, we see the following query being executed:
 SELECT ID, MSG FROM ACTIVEMQ_MSGS WHERE CONTAINER='priorityQueue' AND ((ID > 
 1200 AND PRIORITY = 4) OR PRIORITY < 4) ORDER BY PRIORITY DESC, ID
 The problem is that the value of the last priority, stored in the 
 lastRecoveredPriority variable of the JDBCMessageStore, stays permanently at 
 4 until step 6, where it is reset to 9.
 We tried changing the priority to the constant '9' in the query. It works OK 
 until step 3, where only 200 messages are consumed.
 Our understanding is that there should be one lastRecoveredSequenceId 
 variable for each priority level, so that the last consumed message not yet 
 removed from the DB is memorized per priority, and the priority should 
 probably be reset to 9 every time the query is executed.
 Can you have a look, please?
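
The fix the reporter suggests can be sketched as follows; the names here are hypothetical illustrations, not the actual JDBCMessageStore fields. The idea is one recovery cursor per priority level, so a recovery pass at priority 9 is not bounded by a sequence id that was only reached while draining priority 4:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: one last-recovered sequence id per priority level,
// instead of a single (lastRecoveredSequenceId, lastRecoveredPriority) pair.
public class PriorityCursorSketch {
    private final Map<Integer, Long> lastRecoveredByPriority = new HashMap<>();

    // Record that messages up to sequenceId were recovered at this priority.
    public void recovered(int priority, long sequenceId) {
        lastRecoveredByPriority.merge(priority, sequenceId, Math::max);
    }

    // Lower bound for the next recovery query at this priority: skip only
    // the ids already handed out at this same priority level.
    public long lowerBound(int priority) {
        return lastRecoveredByPriority.getOrDefault(priority, 0L);
    }
}
```

With this bookkeeping, the priority-9 messages enqueued at step 3 would still be visible: the lower bound for priority 9 stays at 0 even after 1200 priority-4 messages have been recovered.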

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (AMQ-4494) Cursor hasSpace() doesn't check system usage

2013-04-29 Thread Christian Posta (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13644961#comment-13644961
 ] 

Christian Posta commented on AMQ-4494:
--

Also tracked at this JIRA: https://issues.apache.org/jira/browse/AMQ-4467

 Cursor hasSpace() doesn't check system usage
 

 Key: AMQ-4494
 URL: https://issues.apache.org/jira/browse/AMQ-4494
 Project: ActiveMQ
  Issue Type: Bug
Affects Versions: 5.8.0
Reporter: Dejan Bosanac
Assignee: Dejan Bosanac
 Fix For: 5.9.0


 When checking for space, every cursor checks its destination's memory usage. 
 Unfortunately, that check doesn't consult the parent (system usage), so with 
 a large number of destinations, when the total of the per-destination limits 
 exceeds the total system memory limit, we can break the system usage memory 
 limit.
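
The check being described amounts to making hasSpace() walk the usage hierarchy. A minimal sketch, using hypothetical names rather than the actual ActiveMQ usage classes:

```java
// Hypothetical sketch of a two-level usage check; not the actual
// ActiveMQ cursor or MemoryUsage implementation.
public class UsageSketch {
    public static class MemoryUsage {
        public final long limit;
        public long used;
        public final MemoryUsage parent; // null at the top (system usage)

        public MemoryUsage(long limit, MemoryUsage parent) {
            this.limit = limit;
            this.parent = parent;
        }

        public void increase(long bytes) {
            used += bytes;
            if (parent != null) parent.increase(bytes); // roll up to system usage
        }

        // hasSpace() must consult the whole chain: a destination can be under
        // its own limit while the shared system usage is already exhausted.
        public boolean hasSpace() {
            if (used >= limit) return false;
            return parent == null || parent.hasSpace();
        }
    }
}
```

In this sketch, two destinations each under their own 80-byte limit can still jointly exhaust a 100-byte system limit, and only the chained check notices.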

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (AMQ-2826) Look at the possibility of incorporating a cassandra persistence adapter from http://github.com/ticktock/qsandra

2013-04-29 Thread Les Hazlewood (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-2826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13645002#comment-13645002
 ] 

Les Hazlewood commented on AMQ-2826:


Hi Scott,

Has there been any movement on QSandra in the last two years? Do you know of 
anyone using it in production? Or is it now a defunct project? Any 
information would be helpful!

Thanks,

Les

 Look at the possibility of incorporating a cassandra persistence adapter from 
 http://github.com/ticktock/qsandra 
 -

 Key: AMQ-2826
 URL: https://issues.apache.org/jira/browse/AMQ-2826
 Project: ActiveMQ
  Issue Type: New Feature
  Components: Message Store
Affects Versions: 5.3.2
Reporter: Scott Clasen

 I am the author of http://github.com/ticktock/qsandra, which is a Cassandra 
 persistence adapter for ActiveMQ. I am willing to donate it if it is 
 something that is of interest to ActiveMQ.
 The only current trouble is that it needs JDK 1.6, so it would probably need 
 to wait until (if and when) ActiveMQ 5.x is built with JDK 6.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (AMQNET-433) Correctly rethrow exceptions without swallowing the stack trace

2013-04-29 Thread Daniel Marbach (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQNET-433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13645292#comment-13645292
 ] 

Daniel Marbach commented on AMQNET-433:
---

Hi Jim,
That is what I consider a correct and detailed answer, and one I find much 
more polite.
Daniel

 Correctly rethrow exceptions without swallowing the stack trace
 ---

 Key: AMQNET-433
 URL: https://issues.apache.org/jira/browse/AMQNET-433
 Project: ActiveMQ .Net
  Issue Type: Bug
  Components: ActiveMQ
Reporter: Daniel Marbach
Assignee: Jim Gomes
Priority: Trivial
 Fix For: 1.6.0


 When looking through the code I saw a lot of the following code snippets:
 try
 {
 }
 catch(AnyException ex)
 {
// do something
throw ex;
 }
 This WILL rewrite the stack trace and is not considered best practice. I 
 suggest you change the appropriate places in the code to:
 try
 {
 }
 catch(AnyException ex)
 {
// do something
throw;
 }
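
The same principle can be demonstrated in Java (the broker's language). Here `fillInStackTrace()` plays the role of C#'s `throw ex;`, overwriting the original frames, while a plain `throw e;` preserves them (note that in Java, unlike C#, `throw e;` already keeps the original trace):

```java
// Demonstrates stack-trace preservation when re-throwing.
public class RethrowDemo {
    public static void boom() {
        throw new IllegalStateException("boom");
    }

    public static void rethrowPreserving() {
        try {
            boom();
        } catch (IllegalStateException e) {
            throw e; // top stack frame still points at boom()
        }
    }

    public static void rethrowResetting() {
        try {
            boom();
        } catch (IllegalStateException e) {
            // fillInStackTrace() replaces the trace with the current frame,
            // analogous to the C# `throw ex;` pitfall above
            throw (IllegalStateException) e.fillInStackTrace();
        }
    }

    public static void main(String[] args) {
        try {
            rethrowPreserving();
        } catch (IllegalStateException e) {
            System.out.println("preserving: " + e.getStackTrace()[0].getMethodName());
        }
        try {
            rethrowResetting();
        } catch (IllegalStateException e) {
            System.out.println("resetting: " + e.getStackTrace()[0].getMethodName());
        }
    }
}
```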

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (AMQNET-434) FailoverTransport Memory Leak with TransactionState

2013-04-29 Thread Daniel Marbach (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQNET-434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13645293#comment-13645293
 ] 

Daniel Marbach commented on AMQNET-434:
---

We found the root cause of the problem and will add a patch.



 FailoverTransport Memory Leak with TransactionState
 ---

 Key: AMQNET-434
 URL: https://issues.apache.org/jira/browse/AMQNET-434
 Project: ActiveMQ .Net
  Issue Type: Bug
Affects Versions: 1.5.6
Reporter: Daniel Marbach
Assignee: Timothy Bish
 Fix For: 1.6.0

 Attachments: ConnectionStateTrackerMemoryLeak.cs


 I'm hunting down a possible memory leak. We have the following problem in 
 production: when the consumer/subscriber endpoint runs for a long time with 
 failover transport enabled, memory grows indefinitely.
 I used YouTrack and AntsProfiler to hunt down the issue. The retention path 
 I see in production is the following:
 the FailoverTransport nested class FailoverTask holds two 
 ConnectionStateTrackers; each keeps a dictionary which links a ConnectionId 
 to its ConnectionState. The ConnectionState itself has a dictionary which 
 links the transactionId to the TransactionState. The TransactionState tracks 
 commands. BUT these commands are never freed from the transaction state; 
 they stay there forever, which will eventually blow up the memory.
 I'm currently investigating how to fix this but must first properly 
 understand the code. I opened this issue in the hope that it will ring a 
 bell for you guys.
 Daniel

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (AMQNET-434) FailoverTransport Memory Leak with TransactionState

2013-04-29 Thread Daniel Marbach (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQNET-434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Marbach updated AMQNET-434:
--

Attachment: ConnectionStateTrackerMemoryLeak.cs

Fixes removal of TransactionState in DTC case

 FailoverTransport Memory Leak with TransactionState
 ---

 Key: AMQNET-434
 URL: https://issues.apache.org/jira/browse/AMQNET-434
 Project: ActiveMQ .Net
  Issue Type: Bug
Affects Versions: 1.5.6
Reporter: Daniel Marbach
Assignee: Timothy Bish
 Fix For: 1.6.0

 Attachments: ConnectionStateTrackerMemoryLeak.cs


 I'm hunting down a possible memory leak. We have the following problem in 
 production: when the consumer/subscriber endpoint runs for a long time with 
 failover transport enabled, memory grows indefinitely.
 I used YouTrack and AntsProfiler to hunt down the issue. The retention path 
 I see in production is the following:
 the FailoverTransport nested class FailoverTask holds two 
 ConnectionStateTrackers; each keeps a dictionary which links a ConnectionId 
 to its ConnectionState. The ConnectionState itself has a dictionary which 
 links the transactionId to the TransactionState. The TransactionState tracks 
 commands. BUT these commands are never freed from the transaction state; 
 they stay there forever, which will eventually blow up the memory.
 I'm currently investigating how to fix this but must first properly 
 understand the code. I opened this issue in the hope that it will ring a 
 bell for you guys.
 Daniel

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (AMQNET-434) FailoverTransport Memory Leak with TransactionState

2013-04-29 Thread Daniel Marbach (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQNET-434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13645295#comment-13645295
 ] 

Daniel Marbach commented on AMQNET-434:
---

Sorry, the file should be named .patch; I messed that up.

 FailoverTransport Memory Leak with TransactionState
 ---

 Key: AMQNET-434
 URL: https://issues.apache.org/jira/browse/AMQNET-434
 Project: ActiveMQ .Net
  Issue Type: Bug
Affects Versions: 1.5.6
Reporter: Daniel Marbach
Assignee: Timothy Bish
 Fix For: 1.6.0

 Attachments: ConnectionStateTrackerMemoryLeak.cs


 I'm hunting down a possible memory leak. We have the following problem in 
 production: when the consumer/subscriber endpoint runs for a long time with 
 failover transport enabled, memory grows indefinitely.
 I used YouTrack and AntsProfiler to hunt down the issue. The retention path 
 I see in production is the following:
 the FailoverTransport nested class FailoverTask holds two 
 ConnectionStateTrackers; each keeps a dictionary which links a ConnectionId 
 to its ConnectionState. The ConnectionState itself has a dictionary which 
 links the transactionId to the TransactionState. The TransactionState tracks 
 commands. BUT these commands are never freed from the transaction state; 
 they stay there forever, which will eventually blow up the memory.
 I'm currently investigating how to fix this but must first properly 
 understand the code. I opened this issue in the hope that it will ring a 
 bell for you guys.
 Daniel

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira