[jira] [Resolved] (AMQNET-454) Add Apache Qpid provider to NMS

2016-01-08 Thread Jim Gomes (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQNET-454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Gomes resolved AMQNET-454.
--
Resolution: Fixed

Looks like this feature has been sufficiently developed to close this issue. 
New issues can be opened to address specific problems/enhancements.

> Add Apache Qpid provider to NMS
> ---
>
> Key: AMQNET-454
> URL: https://issues.apache.org/jira/browse/AMQNET-454
> Project: ActiveMQ .Net
>  Issue Type: New Feature
>  Components: NMS
>Affects Versions: 1.6.0
>Reporter: Chuck Rolke
>Assignee: Jim Gomes
> Attachments: Apache.NMS.AMQP-21-Add-Map-Text-Message-tests.patch, 
> Apache.NMS.AMQP-22-add-more-tests.patch, 
> Apache.NMS.AMQP-23a-MessageDeliveryTest.cs.patch, 
> Apache.NMS.AMQP-23b-MSConnectionFactoryTest.cs.patch, 
> Apache.NMS.AMQP-23c-NmsConsoleTracer.cs.patch, 
> Apache.NMS.AMQP-23d-addTraceStatements.patch, 
> Apache.NMS.AMQP-23e-addFilesToTestProject.patch, 
> Apache.NMS.AMQP-24-tidy-up.patch, Apache.NMS.AMQP-25-use-qpid-0.28.patch, 
> Apache.NMS.AMQP-26-hook-in-session-ack.patch, 
> Apache.NMS.AMQP-27-nant-unmanaged-copy.patch, 
> Apache.NMS.AMQP-28-close-qpid-sender-receiver.patch, 
> Apache.NMS.AMQP-29-stop-sessions-before-connection.patch, 
> Apache.NMS.AMQP-30-lock-x86-only.patch, 
> Apache.NMS.AMQP-Add-message-cloning-19.patch, 
> Apache.NMS.AMQP-add-connection-property-table-17.patch, 
> Apache.NMS.AMQP-add-hello-world-example-11.patch, 
> Apache.NMS.AMQP-add-hello-world-example-retry-12.patch, 
> Apache.NMS.AMQP-add-hello-world-to-vs2008-18.patch, 
> Apache.NMS.AMQP-add-message-conversions-06.patch, 
> Apache.NMS.AMQP-add-message-test-20.patch, 
> Apache.NMS.AMQP-add-topic-05.patch, 
> Apache.NMS.AMQP-connectionProperties-07.patch, 
> Apache.NMS.AMQP-copyrights-conn-str-fix-09.patch, 
> Apache.NMS.AMQP-fix-destination-to-use-qpid-address-10.patch, 
> Apache.NMS.AMQP-fix-helloworld-13.patch, 
> Apache.NMS.AMQP-fix-list-message-body-15.patch, 
> Apache.NMS.AMQP-fix-map-message-body-14.patch, 
> Apache.NMS.AMQP-fix-replyTo-and-receive-timeouts-16.patch, 
> Apache.NMS.AMQP-object-lifecycle-04.patch, 
> Apache.NMS.AMQP-provider-configs-03.patch, 
> Apache.NMS.AMQP-qpid-object-lifecycle-02.patch, 
> Apache.NMS.AMQP-set-connection-credentials-08.patch, RELEASE.txt, 
> vendor-Apache.QPID-00-replace-debug-with-release.patch, 
> vendor-QPid-nant-01.patch
>
>
> NMS includes various providers: ActiveMQ, STOMP, MSMQ, EMS, and WCF. This 
> issue proposes to add [Apache Qpid|http://qpid.apache.org/index.html] as 
> another provider.
> Qpid has a [Messaging .NET 
> Binding|http://qpid.apache.org/releases/qpid-0.24/programming/book/ch05.html] 
> that is layered on top of the native C++ Qpid Messaging client. The Qpid .NET 
> binding is attractive as the hook for tying in Qpid as an NMS provider.
> The proposed NMS provider supports [AMQP 
> 1.0|http://qpid.apache.org/amqp.html] by including [Qpid 
> Proton|http://qpid.apache.org/proton/index.html] libraries.
> From a high level, this addition to Apache.NMS would consist of two parts:
> * Add Qpid as a vendor kit. This includes both the Qpid .NET Binding and Qpid 
> Proton in a single kit.
> * Add the new provider, with code linking NMS to Qpid.





[jira] [Work logged] (AMQNET-185) Add new provider implementation for IBM WebSphere MQ (formerly MQSeries)

2016-01-08 Thread Jim Gomes (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQNET-185?focusedWorklogId=23349&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-23349
 ]

Jim Gomes logged work on AMQNET-185:


Author: Jim Gomes
Created on: 08/Jan/16 18:52
Start Date: 08/Jan/16 18:52
Worklog Time Spent: 2h 

Issue Time Tracking
---

Worklog Id: (was: 23349)
Time Spent: 2h
Remaining Estimate: 0h

> Add new provider implementation for IBM WebSphere MQ (formerly MQSeries)
> 
>
> Key: AMQNET-185
> URL: https://issues.apache.org/jira/browse/AMQNET-185
> Project: ActiveMQ .Net
>  Issue Type: New Feature
>  Components: XMS
>Affects Versions: 1.2.0
>Reporter: Jim Gomes
>Assignee: Jim Gomes
>Priority: Minor
> Fix For: 1.8.0
>
> Attachments: Apache.NMS.XMS.7z
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> An additional provider implementation for interfacing with IBM WebSphere MQ 
> would greatly enhance the cross-broker support of NMS.  This new provider 
> implementation can be implemented in a similar fashion to the TIBCO EMS 
> provider.  The new provider should be named Apache.NMS.XMS.  The IBM 
> WebSphere MQ .NET client is informally, but commonly, referred to as XMS .NET.
> The URI prefix for the provider should be XMS:, in the same way that the EMS: 
> prefix is used for TIBCO.
> A new Component module should be added to JIRA to track this provider.  The 
> Component module should be named XMS.





[jira] [Resolved] (AMQNET-185) Add new provider implementation for IBM WebSphere MQ (formerly MQSeries)

2016-01-08 Thread Jim Gomes (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQNET-185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Gomes resolved AMQNET-185.
--
Resolution: Fixed

> Add new provider implementation for IBM WebSphere MQ (formerly MQSeries)
> 
>
> Key: AMQNET-185
> URL: https://issues.apache.org/jira/browse/AMQNET-185
> Project: ActiveMQ .Net
>  Issue Type: New Feature
>  Components: XMS
>Affects Versions: 1.2.0
>Reporter: Jim Gomes
>Assignee: Jim Gomes
>Priority: Minor
> Fix For: 1.8.0
>
> Attachments: Apache.NMS.XMS.7z
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> An additional provider implementation for interfacing with IBM WebSphere MQ 
> would greatly enhance the cross-broker support of NMS.  This new provider 
> implementation can be implemented in a similar fashion to the TIBCO EMS 
> provider.  The new provider should be named Apache.NMS.XMS.  The IBM 
> WebSphere MQ .NET client is informally, but commonly, referred to as XMS .NET.
> The URI prefix for the provider should be XMS:, in the same way that the EMS: 
> prefix is used for TIBCO.
> A new Component module should be added to JIRA to track this provider.  The 
> Component module should be named XMS.





[jira] [Resolved] (AMQNET-516) Create wiki pages in Confluence for new XMS provider

2016-01-08 Thread Jim Gomes (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQNET-516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Gomes resolved AMQNET-516.
--
Resolution: Fixed

> Create wiki pages in Confluence for new XMS provider
> 
>
> Key: AMQNET-516
> URL: https://issues.apache.org/jira/browse/AMQNET-516
> Project: ActiveMQ .Net
>  Issue Type: Sub-task
>  Components: XMS
>Affects Versions: 1.8.0
>Reporter: Jim Gomes
>Assignee: Jim Gomes
>Priority: Minor
>  Labels: documentation
> Fix For: 1.8.0
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> The main documentation page should have links added to the new XMS provider 
> information page.
> See this page: http://activemq.apache.org/nms/nms.html





[jira] [Resolved] (AMQ-6116) Improve security context authorization cache

2016-01-08 Thread Dejan Bosanac (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-6116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dejan Bosanac resolved AMQ-6116.

Resolution: Fixed

> Improve security context authorization cache
> 
>
> Key: AMQ-6116
> URL: https://issues.apache.org/jira/browse/AMQ-6116
> Project: ActiveMQ
>  Issue Type: Improvement
>Affects Versions: 5.13.0
>Reporter: Dejan Bosanac
>Assignee: Dejan Bosanac
> Fix For: 5.14.0
>
>
> The read cache is never used, so we should remove it. Also, there's a 
> potential for the write cache to never be cleaned if the authentication 
> plugin doesn't do it.





[jira] [Commented] (AMQ-6116) Improve security context authorization cache

2016-01-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-6116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15089419#comment-15089419
 ] 

ASF subversion and git services commented on AMQ-6116:
--

Commit 5f8a3df5a4fc0822897cc1abdcd4d99924285937 in activemq's branch 
refs/heads/master from [~dejanb]
[ https://git-wip-us.apache.org/repos/asf?p=activemq.git;h=5f8a3df ]

https://issues.apache.org/jira/browse/AMQ-6116 - improve security context


> Improve security context authorization cache
> 
>
> Key: AMQ-6116
> URL: https://issues.apache.org/jira/browse/AMQ-6116
> Project: ActiveMQ
>  Issue Type: Improvement
>Affects Versions: 5.13.0
>Reporter: Dejan Bosanac
>Assignee: Dejan Bosanac
> Fix For: 5.14.0
>
>
> The read cache is never used, so we should remove it. Also, there's a 
> potential for the write cache to never be cleaned if the authentication 
> plugin doesn't do it.





[jira] [Created] (AMQ-6117) QueueView method output is not in sync with actual queue

2016-01-08 Thread Jo Vandermeeren (JIRA)
Jo Vandermeeren created AMQ-6117:


 Summary: QueueView method output is not in sync with actual queue
 Key: AMQ-6117
 URL: https://issues.apache.org/jira/browse/AMQ-6117
 Project: ActiveMQ
  Issue Type: Bug
  Components: JMX
Affects Versions: 5.13.0
Reporter: Jo Vandermeeren


After upgrading from 5.10.2 to 5.13.0, it seems that the data provided by the 
QueueView methods is not in sync with the actual queue.

When removing messages from the DLQ via QueueView.removeMessage(String), the 
message is actually removed from the queue, but QueueView.browse() still lists 
the message.

When new messages arrive on the DLQ via JMS, the output of QueueView.browse() 
still lists the stale message.

Only when an action is performed via the ActiveMQ admin console (e.g. 
refreshing browse.jsp for that queue) is the JMX output refreshed. 
The QueueSize attribute of the queue, however, is always accurate when accessed 
via JMX.
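
For reference, a minimal JMX sketch of the behaviour described above, assuming 
ActiveMQ's standard QueueViewMBean and the default 5.x ObjectName layout (the 
connector URL, broker name and queue name below are illustrative):

{code:java}
import javax.management.MBeanServerConnection;
import javax.management.MBeanServerInvocationHandler;
import javax.management.ObjectName;
import javax.management.openmbean.CompositeData;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

import org.apache.activemq.broker.jmx.QueueViewMBean;

public class DlqBrowseCheck {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
        try (JMXConnector jmxc = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection conn = jmxc.getMBeanServerConnection();
            ObjectName name = new ObjectName(
                    "org.apache.activemq:type=Broker,brokerName=localhost,"
                    + "destinationType=Queue,destinationName=ActiveMQ.DLQ");
            QueueViewMBean queue = MBeanServerInvocationHandler
                    .newProxyInstance(conn, name, QueueViewMBean.class, true);

            CompositeData[] listed = queue.browse();
            if (listed.length > 0) {
                String id = (String) listed[0].get("JMSMessageID");
                queue.removeMessage(id);   // removes the message from the store
            }
            // Per this report, on 5.13.0 the removed message may still appear
            // in browse() output while QueueSize stays accurate:
            System.out.println("browse() entries: " + queue.browse().length
                    + ", QueueSize: " + queue.getQueueSize());
        }
    }
}
{code}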





[jira] [Created] (AMQ-6116) Improve security context authorization cache

2016-01-08 Thread Dejan Bosanac (JIRA)
Dejan Bosanac created AMQ-6116:
--

 Summary: Improve security context authorization cache
 Key: AMQ-6116
 URL: https://issues.apache.org/jira/browse/AMQ-6116
 Project: ActiveMQ
  Issue Type: Improvement
Affects Versions: 5.13.0
Reporter: Dejan Bosanac
Assignee: Dejan Bosanac
 Fix For: 5.14.0


The read cache is never used, so we should remove it. Also, there's a potential 
for the write cache to never be cleaned if the authentication plugin doesn't do 
it.
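
A rough sketch of the cleanup responsibility this implies, assuming the 5.x 
SecurityContext API (the plugin class and the clearing point are illustrative, 
not the actual patch):

{code:java}
import org.apache.activemq.broker.Broker;
import org.apache.activemq.broker.BrokerFilter;
import org.apache.activemq.broker.ConnectionContext;
import org.apache.activemq.command.ConnectionInfo;
import org.apache.activemq.security.SecurityContext;

public class CacheClearingAuthBroker extends BrokerFilter {

    public CacheClearingAuthBroker(Broker next) {
        super(next);
    }

    @Override
    public void removeConnection(ConnectionContext context, ConnectionInfo info,
                                 Throwable error) throws Exception {
        super.removeConnection(context, info, error);
        SecurityContext security = context.getSecurityContext();
        if (security != null) {
            // The per-connection authorization cache lives on the
            // SecurityContext; if the authentication plugin never clears it,
            // entries accumulate for as long as the context is referenced.
            security.getAuthorizedWriteDests().clear();
            context.setSecurityContext(null);
        }
    }
}
{code}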





[jira] [Updated] (AMQ-6115) No more browse/consume possible after #checkpoint run

2016-01-08 Thread Klaus Pittig (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-6115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Klaus Pittig updated AMQ-6115:
--
Description: 
We are currently facing a problem when using ActiveMQ with a large number of 
persistent queues (250), each holding 1000 persistent TextMessages of 10 KB.
Our scenario requires these messages to remain in storage for a long time 
(days), until they are consumed (large amounts of data are staged for 
distribution to many consumers, which may be offline for some days).

This issue is independent of the JVM, OS and persistence adapter (KahaDB, 
LevelDB), given enough free space and memory.
We tested this behaviour with ActiveMQ 5.11.2, 5.13.0 and 5.5.1.

After the persistence store is filled with these messages (we use a simple unit 
test that always produces the same message) and the broker is restarted, we can 
browse/consume some queues _until_ the #checkpoint call after 30 seconds.

This call causes the broker to use all available memory and never release it 
for other tasks such as queue browse/consume. Internally the MessageCursor 
seems to decide that there is not enough memory and stops delivery of queue 
content to browsers/consumers.

=> Is there a way to avoid this behaviour or fix it?
The expectation is that we can consume/browse any queue under all 
circumstances.

Besides the above-mentioned settings, we use the following settings for the 
broker (btw: changing the memoryLimit to a lower value like 1mb does not change 
the situation):
{code:xml}
<!-- full broker XML not preserved; the relevant destinationPolicy policyEntry
     used optimizedDispatch="true" and memoryLimit="128mb" -->
{code}

If we set the *cursorMemoryHighWaterMark* in the destinationPolicy to a higher 
value like *150* or *600*, depending on the difference between memoryUsage and 
the available heap space, the situation is relieved a bit as a workaround, but 
in my view this is not really an option for production systems.

Attached is some information from Oracle Mission Control and JProfiler showing 
those ActiveMQTextMessage instances that are never released from memory.
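
As a sketch only, the workaround above expressed with the embedded-broker API; 
the values mirror the configuration quoted earlier, and activemq-broker's 
PolicyEntry exposes cursorMemoryHighWaterMark directly:

{code:java}
import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.broker.region.policy.PolicyEntry;
import org.apache.activemq.broker.region.policy.PolicyMap;

public class CursorWaterMarkWorkaround {
    public static void main(String[] args) throws Exception {
        PolicyEntry entry = new PolicyEntry();
        entry.setQueue(">");                        // apply to all queues
        entry.setOptimizedDispatch(true);
        entry.setMemoryLimit(128 * 1024 * 1024);    // memoryLimit="128mb"
        entry.setCursorMemoryHighWaterMark(150);    // the workaround value

        PolicyMap policyMap = new PolicyMap();
        policyMap.setDefaultEntry(entry);

        BrokerService broker = new BrokerService();
        broker.setDestinationPolicy(policyMap);
        broker.start();
    }
}
{code}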


  was:
We are currently facing a problem when using ActiveMQ with a large number of 
persistent queues (250), each holding 1000 persistent TextMessages of 10 KB.
Our scenario requires these messages to remain in storage for a long time 
(days), until they are consumed (large amounts of data are staged for 
distribution to many consumers, which may be offline for some days).

This issue is independent of the JVM, OS and persistence adapter (KahaDB, 
LevelDB), given enough free space and memory.
We tested this behaviour with ActiveMQ 5.11.2, 5.13.0 and 5.5.1.

After the persistence store is filled with these messages (we use a simple unit 
test that always produces the same message) and the broker is restarted, we can 
browse/consume some queues _until_ the #checkpoint call after 30 seconds.

This call causes the broker to use all available memory and never release it 
for other tasks such as queue browse/consume. Internally the MessageCursor seem 
to decide that there is not enough memory and stops delivery of queue content 
to browsers/consumers.

=> Is there a way to avoid this behaviour or fix it?
The expectation is that we can consume/browse any queue under all 
circumstances.

Besides the above-mentioned settings, we use the following settings for the 
broker (btw: changing the memoryLimit to a lower value like 1mb does not change 
the situation):
{code:xml}
<!-- full broker XML not preserved; the relevant destinationPolicy policyEntry
     used optimizedDispatch="true" and memoryLimit="128mb" -->
{code}

If we set the *cursorMemoryHighWaterMark* in the destinationPolicy to a higher 
value like *150* or *600*, depending on the difference between memoryUsage and 
the available heap space, the situation is relieved a bit as a workaround, but 
in my view this is not really an option for production systems.

Attached is some information from Oracle Mission Control and JProfiler showing 
those ActiveMQTextMessage instances that are never released from memory.



> No more browse/consume possible after #checkpoint run
> -
>
> Key: AMQ-6115
> URL: https://issues.apache.org/jira/browse/AMQ-6115
> Pr

[jira] [Updated] (AMQ-6115) No more browse/consume possible after #checkpoint run

2016-01-08 Thread Klaus Pittig (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-6115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Klaus Pittig updated AMQ-6115:
--
Attachment: Bildschirmfoto 2016-01-08 um 12.09.34.png

JProfiler .hprof heapmem comparison before and after #checkpoint


> No more browse/consume possible after #checkpoint run
> -
>
> Key: AMQ-6115
> URL: https://issues.apache.org/jira/browse/AMQ-6115
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: activemq-leveldb-store, Broker, KahaDB
>Affects Versions: 5.5.1, 5.11.2, 5.13.0
> Environment: OS=Linux,MacOS,Windows, Java=1.7,1.8, Xmx=1024m, 
> SystemUsage Memory Limit 500 MB, Temp Limit 1 GB, Storage 80 GB
>Reporter: Klaus Pittig
> Attachments: Bildschirmfoto 2016-01-08 um 12.09.34.png, 
> Bildschirmfoto 2016-01-08 um 13.29.08.png
>
>
> We are currently facing a problem when using ActiveMQ with a large number of 
> persistent queues (250), each holding 1000 persistent TextMessages of 10 KB.
> Our scenario requires these messages to remain in storage for a long time 
> (days), until they are consumed (large amounts of data are staged for 
> distribution to many consumers, which may be offline for some days).
> This issue is independent of the JVM, OS and persistence adapter (KahaDB, 
> LevelDB), given enough free space and memory.
> We tested this behaviour with ActiveMQ 5.11.2, 5.13.0 and 5.5.1.
> After the persistence store is filled with these messages (we use a simple 
> unit test that always produces the same message) and the broker is restarted, 
> we can browse/consume some queues _until_ the #checkpoint call after 30 
> seconds.
> This call causes the broker to use all available memory and never release it 
> for other tasks such as queue browse/consume. Internally the MessageCursor 
> seems to decide that there is not enough memory and stops delivery of queue 
> content to browsers/consumers.
> => Is there a way to avoid this behaviour or fix it?
> The expectation is that we can consume/browse any queue under all 
> circumstances.
> Besides the above-mentioned settings, we use the following settings for the 
> broker (btw: changing the memoryLimit to a lower value like 1mb does not 
> change the situation):
> {code:xml}
> <!-- full broker XML not preserved; the relevant destinationPolicy policyEntry
>      used optimizedDispatch="true" and memoryLimit="128mb" -->
> {code}
> If we set the *cursorMemoryHighWaterMark* in the destinationPolicy to a 
> higher value like *150* or *600*, depending on the difference between 
> memoryUsage and the available heap space, the situation is relieved a bit as 
> a workaround, but in my view this is not really an option for production 
> systems.
> Attached is some information from Oracle Mission Control and JProfiler showing 
> those ActiveMQTextMessage instances that are never released from memory.





[jira] [Updated] (AMQ-6115) No more browse/consume possible after #checkpoint run

2016-01-08 Thread Klaus Pittig (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-6115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Klaus Pittig updated AMQ-6115:
--
Attachment: Bildschirmfoto 2016-01-08 um 13.29.08.png

Oracle Mission Control JOverflow Analysis showing unreleasable 
ActiveMQTextMessages


> No more browse/consume possible after #checkpoint run
> -
>
> Key: AMQ-6115
> URL: https://issues.apache.org/jira/browse/AMQ-6115
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: activemq-leveldb-store, Broker, KahaDB
>Affects Versions: 5.5.1, 5.11.2, 5.13.0
> Environment: OS=Linux,MacOS,Windows, Java=1.7,1.8, Xmx=1024m, 
> SystemUsage Memory Limit 500 MB, Temp Limit 1 GB, Storage 80 GB
>Reporter: Klaus Pittig
> Attachments: Bildschirmfoto 2016-01-08 um 13.29.08.png
>
>
> We are currently facing a problem when using ActiveMQ with a large number of 
> persistent queues (250), each holding 1000 persistent TextMessages of 10 KB.
> Our scenario requires these messages to remain in storage for a long time 
> (days), until they are consumed (large amounts of data are staged for 
> distribution to many consumers, which may be offline for some days).
> This issue is independent of the JVM, OS and persistence adapter (KahaDB, 
> LevelDB), given enough free space and memory.
> We tested this behaviour with ActiveMQ 5.11.2, 5.13.0 and 5.5.1.
> After the persistence store is filled with these messages (we use a simple 
> unit test that always produces the same message) and the broker is restarted, 
> we can browse/consume some queues _until_ the #checkpoint call after 30 
> seconds.
> This call causes the broker to use all available memory and never release it 
> for other tasks such as queue browse/consume. Internally the MessageCursor 
> seems to decide that there is not enough memory and stops delivery of queue 
> content to browsers/consumers.
> => Is there a way to avoid this behaviour or fix it?
> The expectation is that we can consume/browse any queue under all 
> circumstances.
> Besides the above-mentioned settings, we use the following settings for the 
> broker (btw: changing the memoryLimit to a lower value like 1mb does not 
> change the situation):
> {code:xml}
> <!-- full broker XML not preserved; the relevant destinationPolicy policyEntry
>      used optimizedDispatch="true" and memoryLimit="128mb" -->
> {code}
> If we set the *cursorMemoryHighWaterMark* in the destinationPolicy to a 
> higher value like *150* or *600*, depending on the difference between 
> memoryUsage and the available heap space, the situation is relieved a bit as 
> a workaround, but in my view this is not really an option for production 
> systems.
> Attached is some information from Oracle Mission Control and JProfiler showing 
> those ActiveMQTextMessage instances that are never released from memory.





[jira] [Created] (AMQ-6115) No more browse/consume possible after #checkpoint run

2016-01-08 Thread Klaus Pittig (JIRA)
Klaus Pittig created AMQ-6115:
-

 Summary: No more browse/consume possible after #checkpoint run
 Key: AMQ-6115
 URL: https://issues.apache.org/jira/browse/AMQ-6115
 Project: ActiveMQ
  Issue Type: Bug
  Components: activemq-leveldb-store, Broker, KahaDB
Affects Versions: 5.13.0, 5.11.2, 5.5.1
 Environment: OS=Linux,MacOS,Windows, Java=1.7,1.8, Xmx=1024m, 
SystemUsage Memory Limit 500 MB, Temp Limit 1 GB, Storage 80 GB
Reporter: Klaus Pittig


We are currently facing a problem when using ActiveMQ with a large number of 
persistent queues (250), each holding 1000 persistent TextMessages of 10 KB.
Our scenario requires these messages to remain in storage for a long time 
(days), until they are consumed (large amounts of data are staged for 
distribution to many consumers, which may be offline for some days).

This issue is independent of the JVM, OS and persistence adapter (KahaDB, 
LevelDB), given enough free space and memory.
We tested this behaviour with ActiveMQ 5.11.2, 5.13.0 and 5.5.1.

After the persistence store is filled with these messages (we use a simple unit 
test that always produces the same message) and the broker is restarted, we can 
browse/consume some queues _until_ the #checkpoint call after 30 seconds.

This call causes the broker to use all available memory and never release it 
for other tasks such as queue browse/consume. Internally the MessageCursor 
seems to decide that there is not enough memory and stops delivery of queue 
content to browsers/consumers.

=> Is there a way to avoid this behaviour or fix it?
The expectation is that we can consume/browse any queue under all 
circumstances.

Besides the above-mentioned settings, we use the following settings for the 
broker (btw: changing the memoryLimit to a lower value like 1mb does not change 
the situation):
{code:xml}
<!-- full broker XML not preserved; the relevant destinationPolicy policyEntry
     used optimizedDispatch="true" and memoryLimit="128mb" -->
{code}

If we set the *cursorMemoryHighWaterMark* in the destinationPolicy to a higher 
value like *150* or *600*, depending on the difference between memoryUsage and 
the available heap space, the situation is relieved a bit as a workaround, but 
in my view this is not really an option for production systems.

Attached is some information from Oracle Mission Control and JProfiler showing 
those ActiveMQTextMessage instances that are never released from memory.






[jira] [Updated] (ARTEMIS-160) After failback backup prints warnings to log

2016-01-08 Thread Martyn Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martyn Taylor updated ARTEMIS-160:
--
Fix Version/s: (was: 1.2.0)
   1.3.0

> After failback backup prints warnings to log
> 
>
> Key: ARTEMIS-160
> URL: https://issues.apache.org/jira/browse/ARTEMIS-160
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 1.0.0
>Reporter: Jeff Mesnil
>Assignee: Justin Bertram
> Fix For: 1.3.0
>
>
> We integrate Artemis into our app server.
> When the Artemis server is stopped, we want to unregister any JNDI bindings 
> for the JMS resources.
> For failback, the only way to detect that the Artemis server is stopped is to 
> use the ActivateCallback callback on the Artemis *core* server. There is no 
> way to be notified when the JMS server (wrapping the core server) is stopped.
> This leads to a window where we remove JNDI bindings from the JMS server 
> before it is deactivated, but the actual operation is performed after it was 
> deactivated, and the server prints WARNING logs:
> {noformat}
> 15:34:59,123 WARN [org.wildfly.extension.messaging-activemq] (ServerService 
> Thread Pool – 4) WFLYMSGAMQ0004: Failed to destroy queue: ExpiryQueue: 
> java.lang.IllegalStateException: Cannot access JMS Server, core server is not 
> yet active
> at 
> org.apache.activemq.artemis.jms.server.impl.JMSServerManagerImpl.checkInitialised(JMSServerManagerImpl.java:1640)
> at 
> org.apache.activemq.artemis.jms.server.impl.JMSServerManagerImpl.access$1100(JMSServerManagerImpl.java:101)
> at 
> org.apache.activemq.artemis.jms.server.impl.JMSServerManagerImpl$3.runException(JMSServerManagerImpl.java:752)
> at 
> org.apache.activemq.artemis.jms.server.impl.JMSServerManagerImpl.runAfterActive(JMSServerManagerImpl.java:1847)
> at 
> org.apache.activemq.artemis.jms.server.impl.JMSServerManagerImpl.removeQueueFromBindingRegistry(JMSServerManagerImpl.java:741)
> at 
> org.wildfly.extension.messaging.activemq.jms.JMSQueueService$2.run(JMSQueueService.java:101)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> at org.jboss.threads.JBossThread.run(JBossThread.java:320)
> 15:34:59,123 WARN [org.wildfly.extension.messaging-activemq] (ServerService 
> Thread Pool – 68) WFLYMSGAMQ0004: Failed to destroy queue: AsyncQueue: 
> java.lang.IllegalStateException: Cannot access JMS Server, core server is not 
> yet active
> at 
> org.apache.activemq.artemis.jms.server.impl.JMSServerManagerImpl.checkInitialised(JMSServerManagerImpl.java:1640)
> at 
> org.apache.activemq.artemis.jms.server.impl.JMSServerManagerImpl.access$1100(JMSServerManagerImpl.java:101)
> at 
> org.apache.activemq.artemis.jms.server.impl.JMSServerManagerImpl$3.runException(JMSServerManagerImpl.java:752)
> at 
> org.apache.activemq.artemis.jms.server.impl.JMSServerManagerImpl.runAfterActive(JMSServerManagerImpl.java:1847)
> at 
> org.apache.activemq.artemis.jms.server.impl.JMSServerManagerImpl.removeQueueFromBindingRegistry(JMSServerManagerImpl.java:741)
> at 
> org.wildfly.extension.messaging.activemq.jms.JMSQueueService$2.run(JMSQueueService.java:101)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> at org.jboss.threads.JBossThread.run(JBossThread.java:320)
> 15:34:59,123 WARN [org.wildfly.extension.messaging-activemq] (ServerService 
> Thread Pool – 9) WFLYMSGAMQ0004: Failed to destroy queue: DLQ: 
> java.lang.IllegalStateException: Cannot access JMS Server, core server is not 
> yet active
> at 
> org.apache.activemq.artemis.jms.server.impl.JMSServerManagerImpl.checkInitialised(JMSServerManagerImpl.java:1640)
> at 
> org.apache.activemq.artemis.jms.server.impl.JMSServerManagerImpl.access$1100(JMSServerManagerImpl.java:101)
> at 
> org.apache.activemq.artemis.jms.server.impl.JMSServerManagerImpl$3.runException(JMSServerManagerImpl.java:752)
> at 
> org.apache.activemq.artemis.jms.server.impl.JMSServerManagerImpl.runAfterActive(JMSServerManagerImpl.java:1847)
> at 
> org.apache.activemq.artemis.jms.server.impl.JMSServerManagerImpl.removeQueueFromBindingRegistry(JMSServerManagerImpl.java:741)
> at 
> org.wildfly.extension.messaging.activemq.jms.JMSQueueService$2.run(JMSQueueService.java:101)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> at org.jboss.threads.JBossThread.run(JBossThread.java:320)
> {noformat}
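
For context, a minimal sketch of the integration point described above, 
assuming the Artemis 1.x ActivateCallback interface (its exact method set 
varies slightly between versions):

{code:java}
import org.apache.activemq.artemis.core.server.ActivateCallback;
import org.apache.activemq.artemis.core.server.ActiveMQServer;

public class JndiLifecycleHook {

    public static void install(ActiveMQServer server) {
        // The callback is only offered by the *core* server; there is no
        // equivalent notification from the wrapping JMS server, which is the
        // gap this issue describes.
        server.registerActivateCallback(new ActivateCallback() {
            @Override
            public void preActivate() {
            }

            @Override
            public void activated() {
                // bind JNDI entries for the JMS resources here
            }

            @Override
            public void deActivate() {
                // unbind JNDI entries here, before core deactivation completes
            }
        });
    }
}
{code}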




[jira] [Updated] (ARTEMIS-57) the 'to' field of AMQP messages gets cleared within the broker

2016-01-08 Thread Martyn Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-57?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martyn Taylor updated ARTEMIS-57:
-
Fix Version/s: (was: 1.2.0)
   1.3.0

> the 'to' field of AMQP messages gets cleared within the broker
> --
>
> Key: ARTEMIS-57
> URL: https://issues.apache.org/jira/browse/ARTEMIS-57
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP
>Affects Versions: 1.0.0
>Reporter: Robbie Gemmell
> Fix For: 1.3.0
>
>
> When sending and receiving AMQP messages, the 'to' field of the Properties 
> section (which is meant to be immutable) is cleared as the message transits 
> through the broker.
> The encoding on the wire of a message Properties section as it was sent to 
> the broker:
> {noformat}
> properties
>   message-id:    "localhost.localdomain-54104-1418386723622-0:1:1:1-1"
>   to:            "myQueue"
>   creation-time: 2014/12/12 12:18:44.423
>   (user-id, subject, reply-to, correlation-id, content-type, content-encoding,
>    absolute-expiry-time, group-id, group-sequence, reply-to-group-id unset)
> {noformat}
> The encoding on the wire on its way to a consumer:
> {noformat}
> properties
>   message-id:    (cleared)
>   to:            (cleared)
>   creation-time: 2014/12/12 12:18:44.423
>   (all other fields unset)
> {noformat}





[jira] [Updated] (ARTEMIS-164) Add examples from qpid JMS

2016-01-08 Thread Martyn Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martyn Taylor updated ARTEMIS-164:
--
Fix Version/s: (was: 1.2.0)
   1.3.0

> Add examples from qpid JMS
> --
>
> Key: ARTEMIS-164
> URL: https://issues.apache.org/jira/browse/ARTEMIS-164
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: clebert suconic
>Assignee: Justin Bertram
>Priority: Minor
> Fix For: 1.3.0
>
>






[jira] [Updated] (ARTEMIS-175) CLI improvement with --docker

2016-01-08 Thread Martyn Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martyn Taylor updated ARTEMIS-175:
--
Fix Version/s: (was: 1.2.0)
   1.3.0

> CLI improvement with --docker
> -
>
> Key: ARTEMIS-175
> URL: https://issues.apache.org/jira/browse/ARTEMIS-175
> Project: ActiveMQ Artemis
>  Issue Type: Task
>Reporter: clebert suconic
>Priority: Minor
> Fix For: 1.3.0
>
>
> that would help to keep servers alive





[jira] [Updated] (ARTEMIS-46) AMQP interop: Active broker does not respect the "drain" flag.

2016-01-08 Thread Martyn Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-46?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martyn Taylor updated ARTEMIS-46:
-
Fix Version/s: (was: 1.2.0)
   1.3.0

> AMQP interop: Active broker does not respect the "drain" flag.
> --
>
> Key: ARTEMIS-46
> URL: https://issues.apache.org/jira/browse/ARTEMIS-46
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP
>Affects Versions: 1.0.0
>Reporter: Alan Conway
>Priority: Minor
> Fix For: 1.3.0
>
>
> The drain flag on the AMQP flow performative allows a client to request 
> confirmation that it has received the last available message that it has 
> credit to receive.
> To reproduce, use the qpid-send and qpid-receive clients from 
> http://svn.apache.org/repos/asf/qpid/trunk/qpid/. Create a JMS queue 'foo' on 
> the active broker, then run:
> $ qpid-send -a jms.queue.foo -b localhost:5455 --content-string XXX 
> --connection-options='{protocol:amqp1.0}'
> $ qpid-receive -a jms.queue.foo -b localhost:5455 
> --connection-options='{protocol:amqp1.0}' --log-enable trace+:Protocol
> qpid-receive hangs; the last line of output is:
> 2014-11-24 15:15:46 [Protocol] trace [58e8ee08-0f33-426b-b77a-450f7c3d976c]: 
> 0 -> @flow(19) [next-incoming-id=2, incoming-window=2147483647, 
> next-outgoing-id=0, outgoing-window=0, handle=0, delivery-count=1, 
> link-credit=1, drain=true]
> This shows that qpid-receive sent a flow with drain=true but never received a 
> response.
> Why is this important? Without the drain flag it is impossible for a client 
> to implement the simple behavior "get the next message" correctly. The flow 
> response tells the client immediately "there are no more messages available 
> for you". Without it, the client can only use a timeout, which is unreliable 
> (if too short, the client may give up while the message is in flight) and 
> inefficient (if too long, the client will wait needlessly for messages that 
> the broker knows are not presently available).
> The spec 2.6.7 is a little ambiguous about whether this is a SHOULD or a MUST 
> behavior, but without it, it is impossible to implement the use cases 
> described in the following section.
> AMQP 1.0 specification 2.7.6
> drain
> The drain flag indicates how the sender SHOULD behave when insufficient 
> messages are available to consume the current link-credit. If set, the sender 
> will (after sending all available messages) advance the delivery-count as 
> much as possible, consuming all link-credit, and send the flow state to the 
> receiver. Only the receiver can independently modify this field. The sender's 
> value is always the last known value indicated by the receiver.
> If the link-credit is less than or equal to zero, i.e., the delivery-count is 
> the same as or greater than the delivery-limit, a sender MUST NOT send more 
> messages. If the link-credit is reduced by the receiver when transfers are 
> in-flight, the receiver MAY either handle the excess messages normally or 
> detach the link with a transfer-limit-exceeded error code.
> Figure 2.40: Flow Control
>   +----------+                 +----------+
>   |  Sender  |---transfer----->| Receiver |
>   +----------+                 +----------+
>   if link-credit <= 0 then pause
> If the sender's drain flag is set and there are no available messages, the 
> sender MUST advance its delivery-count until link-credit is zero, and send 
> its updated flow state to the receiver.
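
As an illustration only, a hedged proton-j sketch of the "get the next 
message" pattern that depends on the broker honouring drain (transport pumping 
omitted for brevity):

{code:java}
import org.apache.qpid.proton.engine.Delivery;
import org.apache.qpid.proton.engine.Receiver;

public class DrainReceive {

    /** Request one message, draining any unused credit. */
    public static Delivery receiveNext(Receiver receiver) {
        receiver.drain(1);                 // flow with link-credit=1, drain=true
        while (receiver.draining()) {
            // ... pump the transport / wait for incoming frames here ...
            Delivery delivery = receiver.current();
            if (delivery != null && delivery.isReadable()) {
                return delivery;           // a message arrived in time
            }
        }
        return null;  // drain response arrived: nothing presently available
    }
}
{code}

If the broker never answers the drain request, the draining flag never clears 
and this loop hangs, which is exactly the qpid-receive symptom shown above.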





[jira] [Updated] (ARTEMIS-155) Incoming AMQP connection using "cut-through" ANONYMOUS SASL fails

2016-01-08 Thread Martyn Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martyn Taylor updated ARTEMIS-155:
--
Fix Version/s: (was: 1.2.0)
   1.3.0

> Incoming AMQP connection using "cut-through" ANONYMOUS SASL fails
> -
>
> Key: ARTEMIS-155
> URL: https://issues.apache.org/jira/browse/ARTEMIS-155
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP
>Affects Versions: 1.0.0
>Reporter: Ted Ross
>Priority: Minor
> Fix For: 1.3.0
>
>
> When connecting an AMQP 1.0 connection to the broker using SASL ANONYMOUS, 
> the following exchange occurs:
> {noformat}
>   Client                            Broker
> init(SASL) ->
> sasl.init (ANON) ->
> init(AMQP) ->
> open ->
>  <- init(SASL)
>  <- sasl.mechanisms
>  <- sasl.outcome(OK)
>  <- init(AMQP)
>  socket closed by broker after timeout
> {noformat}
> It appears that the broker doesn't process the open frame.





[jira] [Updated] (ARTEMIS-75) Recovery Manager should raise/log error when multiple ActiveMQRegistry service providers are registered.

2016-01-08 Thread Martyn Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-75?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martyn Taylor updated ARTEMIS-75:
-
Fix Version/s: (was: 1.2.0)
   1.3.0

> Recovery Manager should raise/log error when multiple ActiveMQRegistry 
> service providers are registered.
> 
>
> Key: ARTEMIS-75
> URL: https://issues.apache.org/jira/browse/ARTEMIS-75
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Reporter: Martyn Taylor
>Assignee: Martyn Taylor
> Fix For: 1.3.0
>
>
> Since we are now using the Service Loader to load instances of the 
> ActiveMQRegistry, it is now possible that more than one service provider for 
> the registry can be loaded.  We should log an error if more than one service 
> provider is found.
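
A generic sketch of the proposed check, using the plain java.util.ServiceLoader 
mechanism (the System.err call stands in for whatever logger the broker uses):

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.ServiceLoader;

public final class SingleProviderLoader {

    /** Load a service, complaining loudly if more than one provider is found. */
    public static <T> T loadSingle(Class<T> service) {
        List<T> providers = new ArrayList<>();
        for (T provider : ServiceLoader.load(service)) {
            providers.add(provider);
        }
        if (providers.size() > 1) {
            System.err.println("Expected one provider for " + service.getName()
                    + " but found " + providers.size());
        }
        return providers.isEmpty() ? null : providers.get(0);
    }
}
{code}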





[jira] [Updated] (ARTEMIS-56) the message-id of AMQP messages gets cleared within the broker

2016-01-08 Thread Martyn Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-56?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martyn Taylor updated ARTEMIS-56:
-
Fix Version/s: (was: 1.2.0)
   1.3.0

> the message-id of AMQP messages gets cleared within the broker
> --
>
> Key: ARTEMIS-56
> URL: https://issues.apache.org/jira/browse/ARTEMIS-56
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP
>Affects Versions: 1.0.0
>Reporter: Robbie Gemmell
> Fix For: 1.3.0
>
>
> When sending and receiving AMQP messages, the message-id field of the 
> Properties section (which is meant to be immutable) is cleared as the message 
> transits through the broker.
> The encoding on the wire of a message Properties section as it was sent to 
> the broker:
> {noformat}
> properties
>   message-id:    "localhost.localdomain-54104-1418386723622-0:1:1:1-1"
>   to:            "myQueue"
>   creation-time: 2014/12/12 12:18:44.423
>   (user-id, subject, reply-to, correlation-id, content-type, content-encoding,
>    absolute-expiry-time, group-id, group-sequence, reply-to-group-id unset)
> {noformat}
> The encoding on the wire on its way to a consumer:
> {noformat}
> properties
>   message-id:    (cleared)
>   to:            (cleared)
>   creation-time: 2014/12/12 12:18:44.423
>   (all other fields unset)
> {noformat}





[jira] [Updated] (ARTEMIS-59) AMQP messages published transactionally should be accepted using a TransactionalState

2016-01-08 Thread Martyn Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-59?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martyn Taylor updated ARTEMIS-59:
-
Fix Version/s: (was: 1.2.0)
   1.3.0

> AMQP messages published transactionally should be accepted using a 
> TransactionalState
> -
>
> Key: ARTEMIS-59
> URL: https://issues.apache.org/jira/browse/ARTEMIS-59
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP
>Affects Versions: 1.0.0
>Reporter: Robbie Gemmell
> Fix For: 1.3.0
>
>
> Currently, when an incoming AMQP message is part of a transaction, it is 
> accepted using the regular Accepted terminal state on the disposition reply. 
> According to the spec [1], the disposition should actually use a 
> TransactionalState with an Accepted outcome.
> Similar issue to AMQ-5352 for ActiveMQ 5.
> [1] 
> http://docs.oasis-open.org/amqp/core/v1.0/os/amqp-core-transactions-v1.0-os.html#doc-idp111808
> The issue can be seen in the following protocol traces.
> The transactional message transfer from the producer:
> {noformat}
> transfer
>   handle:       1
>   delivery-id:  1
>   delivery-tag: "0"
>   state:        TransactionalState
>     txn-id:     00 00 00 00 7f ff ff ff
> header
>   (durable set; priority, ttl, first-acquirer, delivery-count unset)
> message-annotations
>   x-opt-jms-msg-type: 5
>   x-opt-to-type:      0
> properties
>   message-id:    "localhost.localdomain-48953-1418405040878-0:1:1:1-1"
>   to:            "myQueue"
>   creation-time: 2014/12/12 17:24:01.614
> amqp-value
>   "Hello world!"
> {noformat}
> The disposition for this message can then be seen being updated by the broker 
> to the Accepted state, rather than a TransactionalState identifying the 
> transaction and containing the Accepted outcome:
> {noformat}
> disposition
>   first: 1
>   last:  1
>   state: accepted   (non-transactional state)
> {noformat}
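
For reference, a minimal proton-j sketch of the disposition the spec calls 
for, assuming proton-j's transaction types (txn-id handling simplified):

{code:java}
import org.apache.qpid.proton.amqp.Binary;
import org.apache.qpid.proton.amqp.messaging.Accepted;
import org.apache.qpid.proton.amqp.transaction.TransactionalState;
import org.apache.qpid.proton.engine.Delivery;

public class TxnDisposition {

    /** Accept a transferred message inside the transaction identified by txnId. */
    public static void acceptInTransaction(Delivery delivery, Binary txnId) {
        TransactionalState state = new TransactionalState();
        state.setTxnId(txnId);                     // ties the outcome to the txn
        state.setOutcome(Accepted.getInstance());  // Accepted, wrapped in txn state
        delivery.disposition(state);               // instead of plain Accepted
        delivery.settle();
    }
}
{code}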





[jira] [Updated] (ARTEMIS-60) Transactionally consumed AMQP messages are settled without any disposition state.

2016-01-08 Thread Martyn Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-60?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martyn Taylor updated ARTEMIS-60:
-
Fix Version/s: (was: 1.2.0)
   1.3.0

> Transactionally consumed AMQP messages are settled without any disposition 
> state.
> -
>
> Key: ARTEMIS-60
> URL: https://issues.apache.org/jira/browse/ARTEMIS-60
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP
>Affects Versions: 1.0.0
>Reporter: Robbie Gemmell
> Fix For: 1.3.0
>
>
> When the broker receives an unsettled disposition frame from a consumer 
> accepting a message using TransactionalState to make it part of a 
> transaction, it settles the message but does so with no state at all. This 
> process causes a settled disposition frame to be sent to the client which 
> contains no state. The message should retain TransactionalState linking it to 
> the transaction and its outcome.
> Similar issue to AMQ-5456 for ActiveMQ 5.
> The issue can be seen in the protocol trace below:
> {noformat}
> disposition
>   state: TransactionalState
>     txn-id:  00 00 00 00 00 00 00 0d
>     outcome: accepted
> {noformat}
> {noformat}
> disposition
>   first:   1
>   last:    1
>   settled: true
>   state:   (none)
> {noformat}





[jira] [Updated] (ARTEMIS-114) Port existing ActiveMQ 5.x examples

2016-01-08 Thread Martyn Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martyn Taylor updated ARTEMIS-114:
--
Fix Version/s: (was: 1.2.0)
   1.3.0

> Port existing ActiveMQ 5.x examples
> ---
>
> Key: ARTEMIS-114
> URL: https://issues.apache.org/jira/browse/ARTEMIS-114
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Reporter: Hiram Chirino
>Assignee: clebert suconic
> Fix For: 1.3.0
>
>
> Once ARTEMIS-113 is implemented, porting over the examples from ActiveMQ 5 
> should be easier.





[jira] [Updated] (ARTEMIS-203) Investigate removal of namespaces for queues / topics

2016-01-08 Thread Martyn Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martyn Taylor updated ARTEMIS-203:
--
Fix Version/s: 1.3.0

> Investigate removal of namespaces for queues / topics
> -
>
> Key: ARTEMIS-203
> URL: https://issues.apache.org/jira/browse/ARTEMIS-203
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Affects Versions: 1.0.0, 1.1.0, 1.2.0
>Reporter: Martyn Taylor
> Fix For: 1.3.0
>
>
> We are currently using address namespacing on the broker to determine the 
> message producer client type; for example, producing from a JMS client 
> results in messages being posted to jms.queue.*** where *** is the address 
> provided by the client. This has caused some confusion for users who were 
> expecting to produce and consume to the same address using different clients 
> (see the sketch after this list).
> This also has a couple of other consequences:
> 1. Messages cannot be produced to the same queue by AMQP and JMS.
> 2. Consumers need to be aware of the producer client type so they can 
> subscribe to the appropriately namespaced address.
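
A short JMS-side sketch of the mismatch (connection URL and queue name are 
illustrative; the jms.queue. prefix is the namespacing discussed above):

{code:java}
import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;

import org.apache.activemq.artemis.jms.client.ActiveMQJMSConnectionFactory;

public class NamespaceMismatch {
    public static void main(String[] args) throws Exception {
        ActiveMQJMSConnectionFactory cf =
                new ActiveMQJMSConnectionFactory("tcp://localhost:61616");
        try (Connection connection = cf.createConnection()) {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("orders");
            // On the broker this lands on the address "jms.queue.orders", so an
            // AMQP consumer must subscribe to "jms.queue.orders", not "orders".
            MessageProducer producer = session.createProducer(queue);
            producer.send(session.createTextMessage("hello"));
        }
    }
}
{code}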





[jira] [Updated] (ARTEMIS-161) Graceful shutdown: add a timeout to stop Artemis

2016-01-08 Thread Martyn Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martyn Taylor updated ARTEMIS-161:
--
Fix Version/s: (was: 1.2.0)
   1.3.0

> Graceful shutdown: add a timeout to stop Artemis
> 
>
> Key: ARTEMIS-161
> URL: https://issues.apache.org/jira/browse/ARTEMIS-161
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 1.0.0
>Reporter: Jeff Mesnil
>Assignee: Justin Bertram
> Fix For: 1.3.0
>
>
> We want to provide a graceful shutdown for Artemis to leave some time for JMS 
> clients to finish their work before stopping the server.
> This is also covered by ARTEMIS-72, which deals with refusing new remote 
> connections once the shutdown process is started (while keeping in-vm 
> connections open).
> This issue is about specifying a timeout when stopping the ActiveMQServer.
> It is possible to provide a general shutdown timeout in the server 
> configuration, but this is not suitable.
> A shutdown process is contextual: it may be a quick shutdown in case of 
> emergency (with a timeout of a few seconds) or a long timeout (several hours) 
> in case of a planned upgrade, for example.
> This parameter should be specified when the admin starts the shutdown process 
> and be passed to the ActiveMQServer (and its wrapping JMSServerManager) stop() 
> method.
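
A sketch of the contextual timeout from the admin side, emulated with 
java.util.concurrent since the stop(timeout) signature is exactly what this 
issue requests:

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

import org.apache.activemq.artemis.core.server.ActiveMQServer;

public class GracefulStop {

    /** Emulates stop-with-timeout until the server API offers one. */
    public static void stop(ActiveMQServer server, long timeout, TimeUnit unit)
            throws Exception {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        Future<?> pending = executor.submit(() -> {
            server.stop();   // blocks until the broker has shut down
            return null;
        });
        try {
            pending.get(timeout, unit);  // seconds in an emergency, hours for upgrades
        } catch (TimeoutException e) {
            pending.cancel(true);        // stop waiting and escalate as needed
        } finally {
            executor.shutdownNow();
        }
    }
}
{code}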





[jira] [Updated] (ARTEMIS-213) Fix WireFormatNegotiationTest

2016-01-08 Thread Martyn Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martyn Taylor updated ARTEMIS-213:
--
Fix Version/s: (was: 1.2.0)
   1.3.0

> Fix WireFormatNegotiationTest
> -
>
> Key: ARTEMIS-213
> URL: https://issues.apache.org/jira/browse/ARTEMIS-213
> Project: ActiveMQ Artemis
>  Issue Type: Sub-task
>  Components: OpenWire
>Affects Versions: 1.0.0
>Reporter: Howard Gao
>Assignee: clebert suconic
> Fix For: 1.3.0
>
>
> A client can negotiate a working wireformat version with the broker. If a 
> client requests an earlier version of the wireformat, the broker should 
> respect it, e.g. if a client requests:
> "tcp://localhost:61616?wireFormat.version=1"
> the server should confirm that the version to be used is 1.
> Test: WireformatNegociationTest
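
A minimal OpenWire client sketch of that negotiation, using the standard 5.x 
connection factory (broker URL illustrative):

{code:java}
import javax.jms.Connection;

import org.apache.activemq.ActiveMQConnectionFactory;

public class WireFormatVersionCheck {
    public static void main(String[] args) throws Exception {
        // Request OpenWire version 1; the broker should negotiate down to it
        // rather than insisting on its own newer version.
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
                "tcp://localhost:61616?wireFormat.version=1");
        Connection connection = factory.createConnection();
        connection.start();   // exercises the WireFormatInfo exchange
        connection.close();
    }
}
{code}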





[jira] [Updated] (ARTEMIS-203) Investigate removal of namespaces for queues / topics

2016-01-08 Thread Martyn Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martyn Taylor updated ARTEMIS-203:
--
Fix Version/s: (was: 1.2.0)

> Investigate removal of namespaces for queues / topics
> -
>
> Key: ARTEMIS-203
> URL: https://issues.apache.org/jira/browse/ARTEMIS-203
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Affects Versions: 1.0.0, 1.1.0, 1.2.0
>Reporter: Martyn Taylor
>
> We are currently using address namespacing on the broker to determine the 
> message producer client type; for example, producing from a JMS client 
> results in messages being posted to jms.queue.*** where *** is the address 
> provided by the client. This has caused some confusion for users who were 
> expecting to produce and consume to the same address using different clients.
> This also has a couple of other consequences:
> 1. Messages cannot be produced to the same queue by AMQP and JMS.
> 2. Consumers need to be aware of the producer client type so they can 
> subscribe to the appropriately namespaced address.





[jira] [Updated] (ARTEMIS-203) Investigate removal of namespaces for queues / topics

2016-01-08 Thread Martyn Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martyn Taylor updated ARTEMIS-203:
--
Affects Version/s: 1.2.0

> Investigate removal of namespaces for queues / topics
> -
>
> Key: ARTEMIS-203
> URL: https://issues.apache.org/jira/browse/ARTEMIS-203
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Affects Versions: 1.0.0, 1.1.0, 1.2.0
>Reporter: Martyn Taylor
>
> We are currently using address namespacing on the broker to determine the 
> message producer client type; for example, producing from a JMS client 
> results in messages being posted to jms.queue.*** where *** is the address 
> provided by the client. This has caused some confusion for users who were 
> expecting to produce and consume to the same address using different clients.
> This also has a couple of other consequences:
> 1. Messages cannot be produced to the same queue by AMQP and JMS.
> 2. Consumers need to be aware of the producer client type so they can 
> subscribe to the appropriately namespaced address.


