[jira] [Created] (ARTEMIS-3813) Support getName from management

2022-05-03, Justin Bertram (Jira)
Justin Bertram created ARTEMIS-3813:
---

 Summary: Support getName from management 
 Key: ARTEMIS-3813
 URL: https://issues.apache.org/jira/browse/ARTEMIS-3813
 Project: ActiveMQ Artemis
  Issue Type: Improvement
Reporter: Justin Bertram
Assignee: Justin Bertram








[jira] [Commented] (ARTEMIS-3809) LargeMessageControllerImpl hangs the message consumer

2022-05-03, David Bennion (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17531405#comment-17531405
 ] 

David Bennion commented on ARTEMIS-3809:


Thank you sir!  I'll be happy to verify the change on my side as well.

> LargeMessageControllerImpl hangs the message consumer
> 
>
> Key: ARTEMIS-3809
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3809
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.21.0
> Environment: OS: Windows Server 2019
> JVM: OpenJDK 64-Bit Server VM Temurin-17.0.1+12
> Max Memory (-Xmx): 6GB
> Allocated to JVM: 4.168GB
> Currently in use: 3.398GB  (heap 3.391GB, non-heap 0.123GB)
>Reporter: David Bennion
>Priority: Major
>  Labels: test-stability
> Attachments: image-2022-05-03-10-51-46-872.png
>
>
> I wondered if this might be a recurrence of ARTEMIS-2293, but this happens 
> on 2.21.0 and I can see the code change in LargeMessageControllerImpl.
> We are using the default min-large-message-size of 100K.
> Many messages are passing through the broker when this happens. I would 
> anticipate that most of the messages are smaller than 100K, but clearly some 
> of them must exceed it. After some number of messages, a particular consumer 
> ceases to consume messages.
> After the system became "hung", I captured a stack trace and identified 
> that the system is stuck in an Object.wait() for a notify() that never 
> appears to come.
> Here is the trace I was able to capture:
> {code:java}
> Thread-2 (ActiveMQ-client-global-threads) id=78 state=TIMED_WAITING
>     - waiting on <0x43523a75> (a 
> org.apache.activemq.artemis.core.client.impl.LargeMessageControllerImpl)
>     - locked <0x43523a75> (a 
> org.apache.activemq.artemis.core.client.impl.LargeMessageControllerImpl)
>     at  java.base@17.0.1/java.lang.Object.wait(Native Method)
>     at 
> org.apache.activemq.artemis.core.client.impl.LargeMessageControllerImpl.waitCompletion(LargeMessageControllerImpl.java:294)
>     at 
> org.apache.activemq.artemis.core.client.impl.LargeMessageControllerImpl.saveBuffer(LargeMessageControllerImpl.java:268)
>     at 
> org.apache.activemq.artemis.core.client.impl.ClientLargeMessageImpl.checkBuffer(ClientLargeMessageImpl.java:157)
>     at 
> org.apache.activemq.artemis.core.client.impl.ClientLargeMessageImpl.getBodyBuffer(ClientLargeMessageImpl.java:89)
>     at mypackage.MessageListener.handleMessage(MessageListener.java:46)
> {code}
>  
> The app can run either as a single node using the InVM transport or as a 
> cluster using TCP. To my knowledge, I have only seen this issue occur with 
> InVM.
> I am not an expert in this code, but I can tell from the call stack that 0 
> must be the value of timeWait passed into waitCompletion(). From what I can 
> discern of the code changes in 2.21.0, it should adjust readTimeout to the 
> timeout of the message (I think?) so that the read eventually gives up 
> rather than remaining blocked forever.
> We have persistenceEnabled = false, which leads me to believe that the only 
> disk activity for messages should be related to large messages(?).
> On a machine and context where this was consistently happening, I adjusted 
> min-large-message-size upwards and the problem went away. This makes sense 
> for my application, but ultimately if a message crosses the threshold to 
> become large, it appears to hang the consumer indefinitely.
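
For illustration, here is a minimal, hypothetical sketch of the guarded-wait pattern the trace points at (not the Artemis source): with a timeWait of 0, the loop simply re-waits every time it wakes, so if the final packet never arrives there is no notify() that lets it exit.

{code:java}
// Hypothetical sketch, not Artemis source: the consumer thread loops in
// wait(); unless addPacket(true) eventually runs, no notify() ever ends it.
public class HangSketch {

   private boolean streamEnded = false;

   public synchronized void addPacket(boolean last) {
      if (last) {
         streamEnded = true;
      }
      notifyAll();
   }

   public synchronized void waitCompletion(long timeWait) throws InterruptedException {
      long readTimeout = timeWait == 0 ? 30_000 : timeWait; // assumed 30s default
      while (!streamEnded) {
         wait(readTimeout); // wakes after 30s, sees streamEnded still false, waits again
      }
   }
}
{code}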





[jira] [Assigned] (AMQ-8591) ActiveMQ 5.17.0 cannot start when in a symlinked directory (Docker)

2022-05-03, Jira


 [ 
https://issues.apache.org/jira/browse/AMQ-8591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Baptiste Onofré reassigned AMQ-8591:
-

Assignee: Jean-Baptiste Onofré

> ActiveMQ 5.17.0 cannot start when in a symlinked directory (Docker)
> --
>
> Key: AMQ-8591
> URL: https://issues.apache.org/jira/browse/AMQ-8591
> Project: ActiveMQ
>  Issue Type: Bug
>Affects Versions: 5.17.0
>Reporter: Petr Újezdský
>Assignee: Jean-Baptiste Onofré
>Priority: Major
>
> Up to version 5.16, this Dockerfile (with only the version changed) worked 
> fine:
> [https://github.com/rmohr/docker-activemq/blob/master/5.15.9/Dockerfile]
> Since version 5.17.0, the command
> {code:java}
> ln -s /opt/$ACTIVEMQ $ACTIVEMQ_HOME && \
> {code}
> must be rewritten as a simple {{mv}}
> {code:java}
> mv /opt/$ACTIVEMQ $ACTIVEMQ_HOME && \
> {code}
> and the command {{chown -R activemq:activemq /opt/$ACTIVEMQ}} removed.
> Otherwise, startup fails with the trace below (a sketch of the likely path 
> mechanics follows it):
> {code:java}
> WARN | Failed startup of context o.e.j.w.WebAppContext@5d1b1c2a{ActiveMQ 
> Console,/admin,file:///opt/activemq/webapps/admin/,UNAVAILABLE}
> org.springframework.beans.factory.BeanDefinitionStoreException: IOException 
> parsing XML document from ServletContext resource 
> [/WEB-INF/webconsole-embedded.xml]; nested exception is 
> java.io.FileNotFoundException: Could not open ServletContext resource 
> [/WEB-INF/webconsole-embedded.xml]
> at 
> org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:342)
>  ~[spring-beans-5.3.16.jar:5.3.16]
> at 
> org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:310)
>  ~[spring-beans-5.3.16.jar:5.3.16]
> at 
> org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:196)
>  ~[spring-beans-5.3.16.jar:5.3.16]
> at 
> org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:232)
>  ~[spring-beans-5.3.16.jar:5.3.16]
> at 
> org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:203)
>  ~[spring-beans-5.3.16.jar:5.3.16]
> at 
> org.springframework.web.context.support.XmlWebApplicationContext.loadBeanDefinitions(XmlWebApplicationContext.java:125)
>  ~[spring-web-5.3.16.jar:5.3.16]
> at 
> org.springframework.web.context.support.XmlWebApplicationContext.loadBeanDefinitions(XmlWebApplicationContext.java:94)
>  ~[spring-web-5.3.16.jar:5.3.16]
> at 
> org.springframework.context.support.AbstractRefreshableApplicationContext.refreshBeanFactory(AbstractRefreshableApplicationContext.java:130)
>  ~[spring-context-5.3.16.jar:5.3.16]
> at 
> org.springframework.context.support.AbstractApplicationContext.obtainFreshBeanFactory(AbstractApplicationContext.java:671)
>  ~[spring-context-5.3.16.jar:5.3.16]
> at 
> org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:553)
>  ~[spring-context-5.3.16.jar:5.3.16]
> at 
> org.apache.activemq.web.WebConsoleStarter.createWebapplicationContext(WebConsoleStarter.java:71)
>  ~[?:?]
> at 
> org.apache.activemq.web.WebConsoleStarter.contextInitialized(WebConsoleStarter.java:44)
>  ~[?:?]
> at 
> org.eclipse.jetty.server.handler.ContextHandler.callContextInitialized(ContextHandler.java:1073)
>  ~[jetty-server-9.4.45.v20220203.jar:9.4.45.v20220203]
> at 
> org.eclipse.jetty.servlet.ServletContextHandler.callContextInitialized(ServletContextHandler.java:572)
>  ~[jetty-servlet-9.4.45.v20220203.jar:9.4.45.v20220203]
> at 
> org.eclipse.jetty.server.handler.ContextHandler.contextInitialized(ContextHandler.java:1002)
>  ~[jetty-server-9.4.45.v20220203.jar:9.4.45.v20220203]
> at 
> org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:746) 
> ~[jetty-servlet-9.4.45.v20220203.jar:9.4.45.v20220203]
> at 
> org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:379)
>  ~[jetty-servlet-9.4.45.v20220203.jar:9.4.45.v20220203]
> at 
> org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1449) 
> ~[jetty-webapp-9.4.45.v20220203.jar:9.4.45.v20220203]
> at 
> org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1414) 
> ~[jetty-webapp-9.4.45.v20220203.jar:9.4.45.v20220203]
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:916)
>  ~[jetty-server-9.4.45.v20220203.jar:9.4.45.v20220203]
> at 
> org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:288
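
The report does not pinpoint the root cause, but one plausible mechanism (an assumption on my part, not ActiveMQ code) is how Java resolves symlinked paths: absolute and canonical paths diverge under a symlinked install directory, so code that mixes the two can look for webapps/ in two different places.

{code:java}
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical illustration only: shows how a symlinked "home" yields two
// different answers depending on which path API is used.
public class SymlinkPathDemo {
   public static void main(String[] args) throws Exception {
      Path target = Files.createTempDirectory("activemq-home");
      Path link = target.resolveSibling("activemq-link");
      Files.createSymbolicLink(link, target); // may require privileges on Windows

      File home = link.toFile();
      System.out.println(home.getAbsolutePath());  // ends in .../activemq-link
      System.out.println(home.getCanonicalPath()); // resolves to .../activemq-home...
   }
}
{code}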

[jira] [Created] (AMQ-8594) Upgrade to Pax Logging 1.11.16

2022-05-03, Jira
Jean-Baptiste Onofré created AMQ-8594:
-

 Summary: Upgrade to Pax Logging 1.11.16
 Key: AMQ-8594
 URL: https://issues.apache.org/jira/browse/AMQ-8594
 Project: ActiveMQ
  Issue Type: Dependency upgrade
Reporter: Jean-Baptiste Onofré
Assignee: Jean-Baptiste Onofré
 Fix For: 5.18.0, 5.17.2








[jira] [Commented] (ARTEMIS-3809) LargeMessageControllerImpl hangs the message consumer

2022-05-03, Justin Bertram (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17531385#comment-17531385
 ] 

Justin Bertram commented on ARTEMIS-3809:
-

> I can't see that packetAdded ever gets reset to false, so that appears to 
> be the one-way door on this.

Agreed. This is the culprit as far as I can tell. It looks like the client 
begins to download the message and gets at least one packet, but then fails 
to get any more and is therefore stuck. The fix looks fairly easy; writing 
the test will be the hard part.

Thanks for your help on this!
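
For illustration, here is a minimal sketch of the flag-reset idea (one reading of the fix, not the actual patch): clear packetAdded whenever the wait window is re-armed, so the timeout measures progress per window instead of being disarmed forever by the first packet.

{code:java}
// Hypothetical sketch, not the Artemis patch: the deadline is re-armed each
// time a packet arrives, so the timeout fires if a full readTimeout window
// ever passes with no progress.
public class WaitCompletionSketch {

   private boolean streamEnded = false;
   private boolean packetAdded = false;

   public synchronized void addPacket(boolean last) {
      packetAdded = true;
      streamEnded = last;
      notifyAll();
   }

   public synchronized void waitCompletion(long readTimeout) throws InterruptedException {
      long deadline = System.currentTimeMillis() + readTimeout;
      while (!streamEnded) {
         wait(Math.max(1, deadline - System.currentTimeMillis()));
         if (streamEnded) {
            return;
         }
         if (packetAdded) {
            packetAdded = false;                                 // progress: re-arm
            deadline = System.currentTimeMillis() + readTimeout;
         } else if (System.currentTimeMillis() > deadline) {
            throw new IllegalStateException("timed out waiting for the next packet");
         }
      }
   }
}
{code}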



[jira] [Comment Edited] (ARTEMIS-3809) LargeMessageControllerImpl hangs the message consumer

2022-05-03, Justin Bertram (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17531385#comment-17531385
 ] 

Justin Bertram edited comment on ARTEMIS-3809 at 5/3/22 7:55 PM:
-

bq. I can't see that packetAdded ever gets reset to false, so that appears to 
be the one-way door on this.

Agreed. This is the culprit as far as I can tell. It looks like the client 
begins to download the message and gets at least one packet, but then fails 
to get any more and is therefore stuck. The fix looks fairly easy; writing 
the test will be the hard part.

Thanks for your help on this!


was (Author: jbertram):
> I can't see that packetAdded ever gets reset to false, so that appears to 
> be the one-way door on this.

Agreed. This is the culprit as far as I can tell. It looks like the client 
begins to download the message and gets at least one packet, but then fails 
to get any more and is therefore stuck. The fix looks fairly easy; writing 
the test will be the hard part.

Thanks for your help on this!



[jira] [Work logged] (ARTEMIS-3759) Allow for Mirroring (Broker Connections) to specify a specific set of addresses to send events, as is done for a cluster connection

2022-05-03, ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3759?focusedWorklogId=765677&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-765677
 ]

ASF GitHub Bot logged work on ARTEMIS-3759:
---

Author: ASF GitHub Bot
Created on: 03/May/22 19:47
Start Date: 03/May/22 19:47
Worklog Time Spent: 10m 
  Work Description: iliya-gr commented on PR #4054:
URL: 
https://github.com/apache/activemq-artemis/pull/4054#issuecomment-1116497576

   > I will spend some time reviewing this tomorrow after I finish the release 
on 2.22.
   
   Do I need to squash the commits beforehand?




Issue Time Tracking
---

Worklog Id: (was: 765677)
Time Spent: 2.5h  (was: 2h 20m)

> Allow for Mirroring (Broker Connections) to specify a specific set of 
> addresses to send events, as is done for a cluster connection  
> -
>
> Key: ARTEMIS-3759
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3759
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: AMQP
>Affects Versions: 2.19.1, 2.21.0
>Reporter: Mikhail Lukyanov
>Priority: Major
> Attachments: ImageAddressSyntax.png, ImageInternalQueues.png
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> If the target broker of a mirror is in a cluster, then the broker's internal 
> cluster queues are mirrored as well, and messages often accumulate in those 
> queues. In theory, internal cluster queues should not be mirrored; mirroring 
> them does not make much sense.
> Therefore, it would be convenient to be able to configure the addresses (and 
> their queues) for which mirroring is performed and events are sent 
> (message-acknowledgements, queue-removal, queue-creation). The syntax used 
> to specify the *_addresses_* of a *_cluster connection_* is well suited for 
> this.
> Mirrored internal cluster queues:
>  !ImageInternalQueues.png!
> Address syntax:
>  !ImageAddressSyntax.png!
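
For reference, the cluster-connection syntax the description points at looks roughly like this in broker.xml (a sketch: a comma-separated address list where a leading ! excludes matches); whether a broker-connection mirror should accept the same element is exactly what this issue proposes.

{code:xml}
<!-- Sketch of the existing cluster-connection address filter; a mirror
     adopting this syntax is the proposal here, not current behavior. -->
<cluster-connection name="my-cluster">
   <address>eu,!eu.internal</address>
   <connector-ref>netty-connector</connector-ref>
</cluster-connection>
{code}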





[jira] [Commented] (ARTEMIS-3809) LargeMessageControllerImpl hangs the message consumer

2022-05-03, David Bennion (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17531331#comment-17531331
 ] 

David Bennion commented on ARTEMIS-3809:


The stuck message, from inspecting the outStream:

ClientLargeMessageImpl[messageID=38664715608, durable=false, 
address=analyze, userID=null, properties=TypedProperties[...SNIPPED..., 
_AMQ_LARGE_SIZE=108314]]

So it is barely over the 100K threshold (108,314 bytes against the 
102,400-byte default).



[jira] [Commented] (ARTEMIS-3809) LargeMessageControllerImpl hangs the message consumer

2022-05-03, David Bennion (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17531328#comment-17531328
 ] 

David Bennion commented on ARTEMIS-3809:


Compressed messages are off for us currently.



[jira] [Commented] (ARTEMIS-3809) LargeMessageControllerImpl hangs the message consumer

2022-05-03, David Bennion (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17531314#comment-17531314
 ] 

David Bennion commented on ARTEMIS-3809:


(timeOut - System.currentTimeMillis())/1000 is currently -1973, and the gap 
just keeps growing.

I can't see that packetAdded ever gets reset to false, so that appears to be 
the one-way door on this.



[jira] [Commented] (ARTEMIS-3809) LargeMessageControllerImpl hangs the message consumer

2022-05-03, David Bennion (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17531305#comment-17531305
 ] 

David Bennion commented on ARTEMIS-3809:


Because packetAdded = true, this timeout never triggers here:
{code:java}
            } else if (System.currentTimeMillis() > timeOut && !packetAdded) {
               throw ActiveMQClientMessageBundle.BUNDLE.timeoutOnLargeMessage();
            }{code}
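
For illustration, a tiny standalone demo of why that guard is dead once the flag latches (hypothetical code mirroring the condition above):

{code:java}
// Once packetAdded is true and never cleared, the right-hand side of the &&
// is permanently false, so the timeout branch can never be taken no matter
// how far past timeOut the clock gets.
public class DeadGuardDemo {
   public static void main(String[] args) {
      boolean packetAdded = true;                             // latched by the first packet
      long timeOut = System.currentTimeMillis() - 1_973_000L; // deadline ~1973s in the past
      if (System.currentTimeMillis() > timeOut && !packetAdded) {
         System.out.println("timeout fires");
      } else {
         System.out.println("timeout never fires");           // always this branch
      }
   }
}
{code}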



[jira] [Work logged] (ARTEMIS-3808) Support starting/stopping the embedded web server via management

2022-05-03, ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3808?focusedWorklogId=765547&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-765547
 ]

ASF GitHub Bot logged work on ARTEMIS-3808:
---

Author: ASF GitHub Bot
Created on: 03/May/22 17:01
Start Date: 03/May/22 17:01
Worklog Time Spent: 10m 
  Work Description: gemmellr commented on code in PR #4061:
URL: https://github.com/apache/activemq-artemis/pull/4061#discussion_r863953256


##
artemis-commons/src/main/java/org/apache/activemq/artemis/core/server/ActiveMQComponent.java:
##
@@ -29,4 +31,8 @@ default void asyncStop(Runnable callback) throws Exception {
}
 
boolean isStarted();
+
+   default SimpleString getName() {
+  return SimpleString.toSimpleString("");
+   }

Review Comment:
   Using String would seem nicer than expanding the use of SimpleString for 
something it doesn't seem needed for. It seemed like you had to change many 
existing instances of using String already.
   
   Also the description says "This was necessary in order to actually find the 
WebServerComponent in the broker's list of components". Would a simple 
alternative be a marker interface, supplied from the broker and implemented by 
the base WebServerComponent class, allowing operating on it without needing any 
of the widespread String vs SimpleString changes or random-constant name-based 
coordination?
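
   A rough sketch of that alternative (hypothetical names, not code from this PR):

{code:java}
// Hypothetical sketch of the marker-interface idea: the broker supplies the
// marker, the base WebServerComponent implements it, and callers locate the
// component by type instead of by a name constant.
public interface WebServerComponentMarker extends ActiveMQComponent {
}

// e.g. inside a management method that already throws Exception:
for (ActiveMQComponent component : server.getExternalComponents()) {
   if (component instanceof WebServerComponentMarker) {
      component.stop();
   }
}
{code}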



##
artemis-commons/src/main/java/org/apache/activemq/artemis/utils/ComponentConstants.java:
##
@@ -0,0 +1,24 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.activemq.artemis.utils;
+
+import org.apache.activemq.artemis.api.core.SimpleString;
+
+public final class ComponentConstants {
+
+   public static final SimpleString WEB_SERVER = 
RandomUtil.randomSimpleString();

Review Comment:
   A random constant being used for coordination essentially inside the broker 
seems a bit icky.



##
artemis-server/src/main/java/org/apache/activemq/artemis/core/management/impl/ActiveMQServerControlImpl.java:
##
@@ -4479,5 +4482,39 @@ public void replay(String startScan, String endScan, 
String address, String targ
 
   server.replay(startScanDate, endScanDate, address, target, filter);
}
+
+   @Override
+   public void stopEmbeddedWebServer() throws Exception {
+  for (ActiveMQComponent component : server.getExternalComponents()) {

Review Comment:
   The stop/start/restart operations seem like they should perhaps be 
coordinated to avoid falling over each other.
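
   One simple shape for that coordination, as a sketch (stopEmbeddedWebServer is from this PR; the other names and helpers are assumed):

{code:java}
// Hypothetical sketch: a single lock serializes the management operations so
// a restart cannot interleave with a concurrent stop or start.
private final Object webServerLock = new Object();

public void stopEmbeddedWebServer() throws Exception {
   synchronized (webServerLock) {
      stopWebServerComponent();      // assumed helper
   }
}

public void startEmbeddedWebServer() throws Exception {
   synchronized (webServerLock) {
      startWebServerComponent();     // assumed helper
   }
}

public void restartEmbeddedWebServer() throws Exception {
   synchronized (webServerLock) {    // stop + start as one atomic step
      stopWebServerComponent();
      startWebServerComponent();
   }
}
{code}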



##
artemis-web/src/test/java/org/apache/activemq/cli/test/WebServerComponentTest.java:
##
@@ -175,31 +166,31 @@ public void testComponentStopBehavior() throws Exception {
   Assert.assertFalse(webServerComponent.isStarted());
   webServerComponent.configure(webServerDTO, "./src/test/resources/", 
"./src/test/resources/");
   webServerComponent.start();
-  final int port = webServerComponent.getPort();
   // Make the connection attempt.
-  CountDownLatch latch = new CountDownLatch(1);
-  final ClientHandler clientHandler = new ClientHandler(latch);
-  bootstrap.group(group).channel(NioSocketChannel.class).handler(new 
ChannelInitializer() {
- @Override
- protected void initChannel(Channel ch) throws Exception {
-ch.pipeline().addLast(new HttpClientCodec());
-ch.pipeline().addLast(clientHandler);
- }
-  });
-  Channel ch = bootstrap.connect("localhost", port).sync().channel();
+  verifyConnection(webServerComponent.getPort());
+  Assert.assertTrue(webServerComponent.isStarted());
 
-  URI uri = new URI(URL);
-  // Prepare the HTTP request.
-  HttpRequest request = new DefaultFullHttpRequest(HttpVersion.HTTP_1_1, 
HttpMethod.GET, uri.getRawPath());
-  request.headers().set(HttpHeaderNames.HOST, "localhost");
+  //usual stop won't actually stop it
+  webServerComponent.stop();
+  assertTrue(webServerComponent.isStarted());
 
-  // Send the HTTP request.
-  ch.writeAndFlush(request);
-  assertTrue(latch.await(5, TimeUnit.SECONDS));

[jira] [Commented] (ARTEMIS-3811) QueueBinding type will clash with cluster connections

2022-05-03, ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17531299#comment-17531299
 ] 

ASF subversion and git services commented on ARTEMIS-3811:
--

Commit ee51a7806da5f258dbb1ed56271c56d3e992383b in activemq-artemis's branch 
refs/heads/main from Clebert Suconic
[ https://gitbox.apache.org/repos/asf?p=activemq-artemis.git;h=ee51a7806d ]

ARTEMIS-3811 Cluster connections clashing with ANYCast addresses

This is a test fix for AMQPClusterReplicaTest


> QueueBinding type will clash with cluster connections
> -
>
> Key: ARTEMIS-3811
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3811
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.22.0
>Reporter: Clebert Suconic
>Assignee: Clebert Suconic
>Priority: Major
> Fix For: 2.23.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> - You create a cluster connection
> - add a local queue with type=ANYCAST
> The remote queue added will have type MULTICAST, and validateAddress will 
> fail.
> This is making AMQPClusterReplicaTest fail.





[jira] [Work logged] (ARTEMIS-3811) QueueBinding type will clash with cluster connections

2022-05-03, ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3811?focusedWorklogId=765542&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-765542
 ]

ASF GitHub Bot logged work on ARTEMIS-3811:
---

Author: ASF GitHub Bot
Created on: 03/May/22 16:55
Start Date: 03/May/22 16:55
Worklog Time Spent: 10m 
  Work Description: clebertsuconic merged PR #4063:
URL: https://github.com/apache/activemq-artemis/pull/4063




Issue Time Tracking
---

Worklog Id: (was: 765542)
Time Spent: 20m  (was: 10m)



[jira] [Commented] (ARTEMIS-3809) LargeMessageControllerImpl hangs the message consumer

2022-05-03, David Bennion (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17531297#comment-17531297
 ] 

David Bennion commented on ARTEMIS-3809:


OK, I was able to validate the above in my debugger. When we hang, we are 
continually looping and waiting 30 seconds at a time. streamEnded is false, 
but packetAdded = true, so it is an interesting state. I'll keep looking 
around here and maybe get lucky before investing the work to make this 
reproducible.

 

!image-2022-05-03-10-51-46-872.png!



[jira] [Updated] (ARTEMIS-3809) LargeMessageControllerImpl hangs the message consumer

2022-05-03, David Bennion (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Bennion updated ARTEMIS-3809:
---
Attachment: image-2022-05-03-10-51-46-872.png



[jira] [Work logged] (ARTEMIS-3808) Support starting/stopping the embedded web server via management

2022-05-03, ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3808?focusedWorklogId=765508&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-765508
 ]

ASF GitHub Bot logged work on ARTEMIS-3808:
---

Author: ASF GitHub Bot
Created on: 03/May/22 15:48
Start Date: 03/May/22 15:48
Worklog Time Spent: 10m 
  Work Description: jbertram commented on code in PR #4061:
URL: https://github.com/apache/activemq-artemis/pull/4061#discussion_r863923053


##
artemis-server/src/main/java/org/apache/activemq/artemis/core/management/impl/ActiveMQServerControlImpl.java:
##
@@ -4479,5 +4482,39 @@ public void replay(String startScan, String endScan, 
String address, String targ
 
   server.replay(startScanDate, endScanDate, address, target, filter);
}
+
+   @Override
+   public void stopEmbeddedWebServer() throws Exception {
+  for (ActiveMQComponent component : server.getExternalComponents()) {

Review Comment:
   Stopping the embedded web server from the web console is not a problem. It's 
having the thread survive so that it can _start_ it again that's the problem.





Issue Time Tracking
---

Worklog Id: (was: 765508)
Time Spent: 1h 50m  (was: 1h 40m)

> Support starting/stopping the embedded web server via management
> ---
>
> Key: ARTEMIS-3808
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3808
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Reporter: Justin Bertram
>Assignee: Justin Bertram
>Priority: Major
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> It would be useful to be able to cycle the embedded web server if, for 
> example, one needed to renew the SSL certificates.





[jira] [Created] (ARTEMIS-3812) Upgrade Micrometer

2022-05-03, Justin Bertram (Jira)
Justin Bertram created ARTEMIS-3812:
---

 Summary: Upgrade Micrometer
 Key: ARTEMIS-3812
 URL: https://issues.apache.org/jira/browse/ARTEMIS-3812
 Project: ActiveMQ Artemis
  Issue Type: Dependency upgrade
Reporter: Justin Bertram
Assignee: Justin Bertram








[jira] [Comment Edited] (ARTEMIS-3809) LargeMessageControllerImpl hangs the message consumer

2022-05-03, David Bennion (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17531244#comment-17531244
 ] 

David Bennion edited comment on ARTEMIS-3809 at 5/3/22 3:43 PM:


Just ran and put a breakpoint on LargeMessageControllerImpl:294 (attempting 
to get it to occur again). Verified that readTimeout *IS* set to 30000 (30 
seconds), as you suggested.

I am now suspecting that we are stuck in that while loop and will never get a 
streamEnded, given that every time I get a stack trace while we are stuck, it 
still points to that line. It is a matter of nanoseconds to run the loop and 
get back to the 30s wait() call.

Will try to substantiate this with the debugger in context when the issue 
actually happens (slow going).


was (Author: JIRAUSER288845):
Just ran and put a breakpoint on LargeMessageControllerImpl:294 (attempting 
to get it to occur again). Verified that readTimeout *IS* set to 30000 (30 
seconds), as you suggested.

I am now suspecting that we are stuck in that while loop and will never get a 
streamEnded.

Will try to substantiate this with the debugger in context when the issue 
actually happens (slow going).

> LargeMessageControllerImpl hangs the message consume
> 
>
> Key: ARTEMIS-3809
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3809
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.21.0
> Environment: OS: Windows Server 2019
> JVM: OpenJDK 64-Bit Server VM Temurin-17.0.1+12
> Max Memory (-Xmx): 6GB
> Allocated to JVM: 4.168GB
> Currently in use: 3.398GB  (heap 3.391GB, non-heap 0.123GB)
>Reporter: David Bennion
>Priority: Major
>  Labels: test-stability
>
> I wondered if this might be a recurrence of issue ARTEMIS-2293 but this 
> happens on 2.21.0 and I can see the code change in 
> LargeMessageControllerImpl.  
> Using the default min-large-message-size of 100K. (defaults)
> Many messages are passing through the broker when this happens.  I would 
> anticipate that most of the messages are smaller than 100K, but clearly some 
> of them must exceed.  After some number of messages, a particular consumer 
> ceases to consume messages.
> After the system became "hung" I was able to get a stack trace and I was able 
> to identify that the system is stuck in an Object.wait() for a notify that 
> appears to never come.
> Here is the trace I was able to capture:
> {code:java}
> Thread-2 (ActiveMQ-client-global-threads) id=78 state=TIMED_WAITING
>     - waiting on <0x43523a75> (a 
> org.apache.activemq.artemis.core.client.impl.LargeMessageControllerImpl)
>     - locked <0x43523a75> (a 
> org.apache.activemq.artemis.core.client.impl.LargeMessageControllerImpl)
>     at  java.base@17.0.1/java.lang.Object.wait(Native Method)
>     at 
> org.apache.activemq.artemis.core.client.impl.LargeMessageControllerImpl.waitCompletion(LargeMessageControllerImpl.java:294)
>     at 
> org.apache.activemq.artemis.core.client.impl.LargeMessageControllerImpl.saveBuffer(LargeMessageControllerImpl.java:268)
>     at 
> org.apache.activemq.artemis.core.client.impl.ClientLargeMessageImpl.checkBuffer(ClientLargeMessageImpl.java:157)
>     at 
> org.apache.activemq.artemis.core.client.impl.ClientLargeMessageImpl.getBodyBuffer(ClientLargeMessageImpl.java:89)
>     at mypackage.MessageListener.handleMessage(MessageListener.java:46)
> {code}
>  
> The app can run either as a single node using the InVM transporter or as a 
> cluster using the TCP.  To my knowledge, I have only seen this issue occur on 
> the InVM. 
> I am no expert in this code, but I can tell from the call stack that 0 must 
> be the value of timeWait passed into waitCompletion().  But from what I can 
> discern of the code changes in 2.21.0, it should be adjusting the 
> readTimeout to the timeout of the message (I think?) such that it causes the 
> read to eventually give up rather than remaining blocked forever.
> We have persistenceEnabled = false, which leads me to believe that the only 
> disk activity for messages should be related to large messages(?).  
> On a machine and context where this was consistently happening, I adjusted 
> the min-large-message-size upwards and the problem went away.  This makes 
> sense for my application, but ultimately if a message crosses the 
> threshold to become large it appears to hang the consumer indefinitely. 
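
The workaround described above can be expressed on the core client as 
follows; a minimal sketch assuming the core ServerLocator API, with a 
placeholder 1 MiB threshold:
{code:java}
// Sketch of the described workaround: raise the threshold at which messages
// are treated as "large" (the default is 100KiB). The value is a placeholder.
import org.apache.activemq.artemis.api.core.client.ActiveMQClient;
import org.apache.activemq.artemis.api.core.client.ServerLocator;

public class MinLargeMessageSizeExample {
    public static void main(String[] args) throws Exception {
        ServerLocator locator = ActiveMQClient.createServerLocator("vm://0");
        locator.setMinLargeMessageSize(1024 * 1024); // 1 MiB instead of 100KiB
        System.out.println("minLargeMessageSize=" + locator.getMinLargeMessageSize());
    }
}
{code}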



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (ARTEMIS-3809) LargeMessageControllerImpl hangs the message consume

2022-05-03 Thread David Bennion (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17531244#comment-17531244
 ] 

David Bennion commented on ARTEMIS-3809:


Just ran and put a breakpoint on LargeMessageControllerImpl:294 (attempting 
to get it to occur again).  Verified that readTimeout *IS* set to 30000 (30 
seconds), as you suggested. 

I now suspect that we are stuck in that while loop and will never get a 
streamEnded. 

Will try to substantiate this with the debugger in the context when the issue 
actually happens (slow going).

> LargeMessageControllerImpl hangs the message consume
> 
>
> Key: ARTEMIS-3809
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3809
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.21.0
> Environment: OS: Windows Server 2019
> JVM: OpenJDK 64-Bit Server VM Temurin-17.0.1+12
> Max Memory (-Xmx): 6GB
> Allocated to JVM: 4.168GB
> Currently in use: 3.398GB  (heap 3.391GB, non-heap 0.123GB)
>Reporter: David Bennion
>Priority: Major
>  Labels: test-stability
>
> I wondered if this might be a recurrence of issue ARTEMIS-2293 but this 
> happens on 2.21.0 and I can see the code change in 
> LargeMessageControllerImpl.  
> Using the default min-large-message-size of 100K. (defaults)
> Many messages are passing through the broker when this happens.  I would 
> anticipate that most of the messages are smaller than 100K, but clearly some 
> of them must exceed.  After some number of messages, a particular consumer 
> ceases to consume messages.
> After the system became "hung" I was able to get a stack trace and identify 
> that the system is stuck in an Object.wait() for a notify that 
> appears to never come.
> Here is the trace I was able to capture:
> {code:java}
> Thread-2 (ActiveMQ-client-global-threads) id=78 state=TIMED_WAITING
>     - waiting on <0x43523a75> (a 
> org.apache.activemq.artemis.core.client.impl.LargeMessageControllerImpl)
>     - locked <0x43523a75> (a 
> org.apache.activemq.artemis.core.client.impl.LargeMessageControllerImpl)
>     at  java.base@17.0.1/java.lang.Object.wait(Native Method)
>     at 
> org.apache.activemq.artemis.core.client.impl.LargeMessageControllerImpl.waitCompletion(LargeMessageControllerImpl.java:294)
>     at 
> org.apache.activemq.artemis.core.client.impl.LargeMessageControllerImpl.saveBuffer(LargeMessageControllerImpl.java:268)
>     at 
> org.apache.activemq.artemis.core.client.impl.ClientLargeMessageImpl.checkBuffer(ClientLargeMessageImpl.java:157)
>     at 
> org.apache.activemq.artemis.core.client.impl.ClientLargeMessageImpl.getBodyBuffer(ClientLargeMessageImpl.java:89)
>     at mypackage.MessageListener.handleMessage(MessageListener.java:46)
> {code}
>  
> The app can run either as a single node using the InVM transporter or as a 
> cluster using the TCP.  To my knowledge, I have only seen this issue occur on 
> the InVM. 
> I am no expert in this code, but I can tell from the call stack that 0 must 
> be the value of timeWait passed into waitCompletion().  But from what I can 
> discern of the code changes in 2.21.0, it should be adjusting the 
> readTimeout to the timeout of the message (I think?) such that it causes the 
> read to eventually give up rather than remaining blocked forever.
> We have persistenceEnabled = false, which leads me to believe that the only 
> disk activity for messages should be related to large messages(?).  
> On a machine and context where this was consistently happening, I adjusted 
> the min-large-message-size upwards and the problem went away.  This makes 
> sense for my application, but ultimately if a message crosses the 
> threshold to become large it appears to hang the consumer indefinitely. 



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Comment Edited] (ARTEMIS-3809) LargeMessageControllerImpl hangs the message consume

2022-05-03 Thread David Bennion (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17531237#comment-17531237
 ] 

David Bennion edited comment on ARTEMIS-3809 at 5/3/22 2:47 PM:


Thanks Justin.  
{quote}Since the {{wait}} happens in a {{while}} loop it could be looping on 
the same {{LargeMessageControllerImpl}} instance. So the lock ID in itself is 
not sufficient to say that execution of that thread has not proceeded in any 
way between thread dumps.
{quote}
Ok.  Yes, I get this in principle.  The behavior we observed from this is that 
all of the other messages timed out in the queues.  We have 60-second timeouts 
on all these messages.  I pulled a thread dump this morning and this machine 
is still hung in the same place and with the same object hash.  We ran a large 
batch of messages yesterday and after this issue a large number came back as 
timed out.  So I am virtually certain that we are hung on a single message in 
a single place; otherwise the message would have eventually timed out, the 
consumer would have become unblocked, and it wouldn't be hanging in this same 
location.  I have not yet been able to get this machine into a debugger to 
attempt to inspect it.  (There is some work involved in that due to network 
segmentation and security stuff, but I'll work my way around that.)
{quote}The default call timeout on the {{ServerLocator}} should be {{30000}} 
milliseconds. Therefore, if you are not setting it to {{0}}, I don't see how a 
value of {{0}} could be used here.
{quote}
OK, that surprises me.  I will search again to validate that what I said is 
true.  I cannot see any references in our code other than the one we just put 
in.  But is it possible that there is some other configuration call that does 
this as a side effect?  I can clearly see that the current number must be 
either 0 or astronomically large.
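
For reference, pinning the call timeout explicitly on the core client would 
rule out a stray 0 coming from elsewhere in the configuration; a minimal 
sketch assuming the core ServerLocator API (the vm://0 URL is a placeholder):
{code:java}
import org.apache.activemq.artemis.api.core.client.ActiveMQClient;
import org.apache.activemq.artemis.api.core.client.ServerLocator;

public class CallTimeoutExample {
    public static void main(String[] args) throws Exception {
        ServerLocator locator = ActiveMQClient.createServerLocator("vm://0");
        locator.setCallTimeout(30_000); // the stated default; 0 would wait forever
        System.out.println("callTimeout=" + locator.getCallTimeout());
    }
}
{code}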
{quote}The main thing I'm interested in at this point is {{TRACE}} logging on 
{{{}org.apache.activemq.artemis.core.protocol.core.impl.RemotingConnectionImpl{}}}.
 This will show what packets are arriving to the client. Specifically I want to 
see packets of the type {{SESS_RECEIVE_LARGE_MSG}} and 
{{{}SESS_RECEIVE_CONTINUATION{}}}.
{quote}
Got it.  Unless I am mistaken, I won't see any of those logs from 
RemotingConnectionImpl in my logs, because so far I have only been able to 
make this happen on InVM.  For instance, I see these logs:
{noformat}
05/02/2022 15:19:09.440 TRACE (.InVMConnector)) [InVMConnection      ]  
InVMConnection [serverID=0, id=1e5a49e7-ca54-11ec-99a3-0050568ed7da]::packet 
sent done
05/02/2022 15:19:09.440 TRACE (0.0-2081182161)) [InVMConnection      ]  
InVMConnection [serverID=0, id=1e5a49e7-ca54-11ec-99a3-0050568ed7da]::Sending 
inVM packet
05/02/2022 15:19:09.440 TRACE (0.0-2081182161)) [InVMConnection      ]  
InVMConnection [serverID=0, id=1e5a49e7-ca54-11ec-99a3-0050568ed7da]::packet 
sent done
05/02/2022 15:19:09.440 TRACE (.InVMConnector)) [InVMConnection      ]  
InVMConnection [serverID=0, id=1e5a49e7-ca54-11ec-99a3-0050568ed7da]::Sending 
inVM packet
05/02/2022 15:19:09.440 TRACE (.InVMConnector)) [InVMConnection      ]  
InVMConnection [serverID=0, id=1e5a49e7-ca54-11ec-99a3-0050568ed7da]::packet 
sent done
05/02/2022 15:19:09.441 TRACE (0.0-2081182161)) [InVMConnection      ]  
InVMConnection [serverID=0, id=1e5a49e7-ca54-11ec-99a3-0050568ed7da]::Sending 
inVM packet
05/02/2022 15:19:09.441 TRACE (0.0-2081182161)) [InVMConnection      ]  
InVMConnection [serverID=0, id=1e5a49e7-ca54-11ec-99a3-0050568ed7da]::packet 
sent done
05/02/2022 15:19:09.441 TRACE (.InVMConnector)) [InVMConnection      ]  
InVMConnection [serverID=0, id=1e5a49e7-ca54-11ec-99a3-0050568ed7da]::Sending 
inVM packet{noformat}
 

But these logs just appear over and over like this and don't really seem to 
contain anything interesting.
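
For anyone following along, one way to turn on the requested TRACE logging 
programmatically, assuming the client routes JBoss Logging to 
java.util.logging (with a log4j binding, the logger level would instead be 
set in the log4j configuration):
{code:java}
import java.util.logging.ConsoleHandler;
import java.util.logging.Level;
import java.util.logging.Logger;

public class EnableRemotingTrace {
    public static void main(String[] args) {
        // FINEST is JUL's closest analogue to TRACE
        Logger logger = Logger.getLogger(
            "org.apache.activemq.artemis.core.protocol.core.impl.RemotingConnectionImpl");
        logger.setLevel(Level.FINEST);
        ConsoleHandler handler = new ConsoleHandler();
        handler.setLevel(Level.FINEST);
        logger.addHandler(handler);
    }
}
{code}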
{quote}Ultimately the best way to proceed is for you to provide a way to 
reproduce the problem. That way I can put a debugger on it and really see 
what's happening.
{quote}
I get it.  I will see what I can do.  It is hard enough to get this one to 
happen even with all our code connected.

 


was (Author: JIRAUSER288845):
Thanks Justin.  
{quote}Since the {{wait}} happens in a {{while}} loop it could be looping on 
the same {{LargeMessageControllerImpl}} instance. So the lock ID in itself is 
not sufficient to say that execution of that thread has not proceeded in any 
way between thread dumps.
{quote}
Ok.  Yes, I get this in principle.  The behavior we observed from this is that 
all of the other messages timed out in the queues.  We have 60-second timeouts 
on all these messages.  I pulled a thread dump this morning and this machine 
is still hung in the same place and with the same object hash.  We ran a large 
batch of messages yesterday and after this issue a large number came back as 
timed out.  

[jira] [Commented] (ARTEMIS-3809) LargeMessageControllerImpl hangs the message consume

2022-05-03 Thread David Bennion (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17531237#comment-17531237
 ] 

David Bennion commented on ARTEMIS-3809:


Thanks Justin.  
{quote}Since the {{wait}} happens in a {{while}} loop it could be looping on 
the same {{LargeMessageControllerImpl}} instance. So the lock ID in itself is 
not sufficient to say that execution of that thread has not proceeded in any 
way between thread dumps.
{quote}
Ok.  Yes, I get this in principle.  The behavior we observed from this is that 
all of the other messages timed out in the queues.  We have 60-second timeouts 
on all these messages.  I pulled a thread dump this morning and this machine 
is still hung in the same place and with the same object hash.  We ran a large 
batch of messages yesterday and after this issue a large number came back as 
timed out.  So I am virtually certain that we are hung on a single message in 
a single place; otherwise the message would have eventually timed out, the 
consumer would have become unblocked, and it wouldn't be hanging in this same 
location.  I have not yet been able to get this machine into a debugger to 
attempt to inspect it.  (There is some work involved in that due to network 
segmentation and security stuff, but I'll work my way around that.)
{quote}The default call timeout on the {{ServerLocator}} should be {{30000}} 
milliseconds. Therefore, if you are not setting it to {{0}}, I don't see how a 
value of {{0}} could be used here.
{quote}
OK, that surprises me.  I will search again to validate that what I said is 
true.  I cannot see any references in our code other than the one we just put 
in.  But is it possible that there is some other configuration call that does 
this as a side effect?  I can clearly see that the current number must be 
either 0 or astronomically large.
{quote}The main thing I'm interested in at this point is {{TRACE}} logging on 
{{{}org.apache.activemq.artemis.core.protocol.core.impl.RemotingConnectionImpl{}}}.
 This will show what packets are arriving to the client. Specifically I want to 
see packets of the type {{SESS_RECEIVE_LARGE_MSG}} and 
{{{}SESS_RECEIVE_CONTINUATION{}}}.
{quote}
Got it.  Unless I am mistaken, I won't see any of those logs from 
RemotingConnectionImpl in my logs, because so far I have only been able to 
make this happen on InVM.  For instance, I see these logs:

[insert log here]

 

But these logs just appear over and over like this and don't really seem to 
contain anything interesting.
{quote}Ultimately the best way to proceed is for you to provide a way to 
reproduce the problem. That way I can put a debugger on it and really see 
what's happening.
{quote}
I get it.  I will see what I can do.  It is hard enough to get this one to 
happen even with all our code connected.

 

> LargeMessageControllerImpl hangs the message consume
> 
>
> Key: ARTEMIS-3809
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3809
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.21.0
> Environment: OS: Windows Server 2019
> JVM: OpenJDK 64-Bit Server VM Temurin-17.0.1+12
> Max Memory (-Xmx): 6GB
> Allocated to JVM: 4.168GB
> Currently in use: 3.398GB  (heap 3.391GB, non-heap 0.123GB)
>Reporter: David Bennion
>Priority: Major
>  Labels: test-stability
>
> I wondered if this might be a recurrence of issue ARTEMIS-2293 but this 
> happens on 2.21.0 and I can see the code change in 
> LargeMessageControllerImpl.  
> Using the default min-large-message-size of 100K. (defaults)
> Many messages are passing through the broker when this happens.  I would 
> anticipate that most of the messages are smaller than 100K, but clearly some 
> of them must exceed.  After some number of messages, a particular consumer 
> ceases to consume messages.
> After the system became "hung" I was able to get a stack trace and identify 
> that the system is stuck in an Object.wait() for a notify that 
> appears to never come.
> Here is the trace I was able to capture:
> {code:java}
> Thread-2 (ActiveMQ-client-global-threads) id=78 state=TIMED_WAITING
>     - waiting on <0x43523a75> (a 
> org.apache.activemq.artemis.core.client.impl.LargeMessageControllerImpl)
>     - locked <0x43523a75> (a 
> org.apache.activemq.artemis.core.client.impl.LargeMessageControllerImpl)
>     at  java.base@17.0.1/java.lang.Object.wait(Native Method)
>     at 
> org.apache.activemq.artemis.core.client.impl.LargeMessageControllerImpl.waitCompletion(LargeMessageControllerImpl.java:294)
>     at 
> org.apache.activemq.artemis.core.client.impl.LargeMessageControllerImpl.saveBuffer(LargeMessageControllerImpl.java:268)
>     at 
> org.apache.activemq.artemis.core.client.impl.Cl

[jira] [Work logged] (AMQ-8593) Deprecate masterslave discovery agent and add a new leaderfollower discovery agent

2022-05-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/AMQ-8593?focusedWorklogId=765457&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-765457
 ]

ASF GitHub Bot logged work on AMQ-8593:
---

Author: ASF GitHub Bot
Created on: 03/May/22 13:46
Start Date: 03/May/22 13:46
Worklog Time Spent: 10m 
  Work Description: mattrpav commented on PR #835:
URL: https://github.com/apache/activemq/pull/835#issuecomment-1116119107

   +1 'staticfailover' makes sense as the uri prefix. 




Issue Time Tracking
---

Worklog Id: (was: 765457)
Time Spent: 2h 10m  (was: 2h)

> Deprecate masterslave discovery agent and add a new leaderfollower discovery 
> agent
> --
>
> Key: AMQ-8593
> URL: https://issues.apache.org/jira/browse/AMQ-8593
> Project: ActiveMQ
>  Issue Type: Task
>  Components: Network of Brokers
>Affects Versions: 5.17.2
>Reporter: Lucas Tétreault
>Priority: Major
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> This [tweet|https://twitter.com/owenblacker/status/1517156221207212032] 
> raised the issue of non-inclusive terminology in the [AWS 
> docs|https://docs.aws.amazon.com/amazon-mq/latest/developer-guide/amazon-mq-creating-configuring-network-of-brokers.html#creating-configuring-network-of-brokers-configure-network-connectors]
>  and suggested that we should replace masterslave with a more inclusive name 
> for the network connector transport. The AWS docs refer to a feature of 
> ActiveMQ that is a convenience discovery agent: 
> [https://activemq.apache.org/networks-of-brokers#masterslave-discovery]
> Replacing master/slave nomenclature in ActiveMQ was raised in July 2020 
> AMQ-7514 and there have been some attempts at making some changes 
> ([#679|https://github.com/apache/activemq/pull/679], 
> [#714|https://github.com/apache/activemq/pull/714], 
> [#788|https://github.com/apache/activemq/pull/788]) however we have not been 
> able to come to an agreement on nomenclature so these efforts seem to have 
> stalled out.
> If we are able to come to an agreement on nomenclature in this PR, we can 
> move forward with removing more non-inclusive terminology on the website (I 
> will follow up with some PRs to the website), in discussions with the 
> community and of course in this codebase. This will remove adoption barriers 
> and make ActiveMQ a more approachable and inclusive project for everyone! 
> Other Apache projects such as Solr and Kafka have moved from master/slave to 
> leader/follower. Leader/follower is also recommended by the 
> [IETF|https://tools.ietf.org/id/draft-knodel-terminology-02.html] and 
> [inclusivenaming.org|https://inclusivenaming.org/word-lists/tier-1/] which is 
> supported by companies such as Cisco, Intel, and RedHat.
> If we can't come to an agreement on Leader/Follower or some other 
> nomenclature I will, at the very least, create a follow up PR to remove the 
> masterslave transport since it is just a convenience method to use 
> static+failover with ?randomize=false&maxReconnectAttempts=0.
> This change leaves the masterslave: transport in place but provides a new 
> alias leaderfollower: for now but we can easily remove it in 5.18.0.
> PR: https://github.com/apache/activemq/pull/835



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Work logged] (ARTEMIS-3811) QueueBinding type will clash with cluster connections

2022-05-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3811?focusedWorklogId=765451&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-765451
 ]

ASF GitHub Bot logged work on ARTEMIS-3811:
---

Author: ASF GitHub Bot
Created on: 03/May/22 13:37
Start Date: 03/May/22 13:37
Worklog Time Spent: 10m 
  Work Description: clebertsuconic opened a new pull request, #4063:
URL: https://github.com/apache/activemq-artemis/pull/4063

   This is a test fix for AMQPClusterReplicaTest




Issue Time Tracking
---

Worklog Id: (was: 765451)
Remaining Estimate: 0h
Time Spent: 10m

> QueueBinding type will clash with cluster connections
> -
>
> Key: ARTEMIS-3811
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3811
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.22.0
>Reporter: Clebert Suconic
>Assignee: Clebert Suconic
>Priority: Major
> Fix For: 2.23.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> - You create a cluster connection,
> - add a local queue with type=ANYCAST
> The remote queue added will have a type MULTICAST, and validateAddress will 
> fail.
> This is making AMQPClusterReplicaTest fail



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (ARTEMIS-3811) QueueBinding type will clash with cluster connections

2022-05-03 Thread Clebert Suconic (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Clebert Suconic updated ARTEMIS-3811:
-
Summary: QueueBinding type will clash with cluster connections  (was: 
QueueBinding type will clash with cluser)

> QueueBinding type will clash with cluster connections
> -
>
> Key: ARTEMIS-3811
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3811
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.22.0
>Reporter: Clebert Suconic
>Assignee: Clebert Suconic
>Priority: Major
> Fix For: 2.23.0
>
>
> - You create a cluster connection,
> - add a local queue with type=ANYCAST
> The remote queue added will have a type MULTICAST, and validateAddress will 
> fail.
> This is making AMQPClusterReplicaTest fail



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (ARTEMIS-3811) QueueBinding type will clash with cluser

2022-05-03 Thread Clebert Suconic (Jira)
Clebert Suconic created ARTEMIS-3811:


 Summary: QueueBinding type will clash with cluser
 Key: ARTEMIS-3811
 URL: https://issues.apache.org/jira/browse/ARTEMIS-3811
 Project: ActiveMQ Artemis
  Issue Type: Bug
Affects Versions: 2.22.0
Reporter: Clebert Suconic
Assignee: Clebert Suconic
 Fix For: 2.23.0


- You create a cluster connection,
- add a local queue with type=ANYCAST

The remote queue added will have a type MULTICAST, and validateAddress will 
fail.


This is making AMQPClusterReplicaTest fail



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Work logged] (AMQ-8593) Deprecate masterslave discovery agent and add a new leaderfollower discovery agent

2022-05-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/AMQ-8593?focusedWorklogId=765390&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-765390
 ]

ASF GitHub Bot logged work on AMQ-8593:
---

Author: ASF GitHub Bot
Created on: 03/May/22 12:02
Start Date: 03/May/22 12:02
Worklog Time Spent: 10m 
  Work Description: gemmellr commented on PR #835:
URL: https://github.com/apache/activemq/pull/835#issuecomment-1116019688

   > @gemmellr
   > 
   > > The other config type is "static":
   > > ```
   > > <networkConnector uri="static:(uri1,uri2)"/>
   > > ```
   > > The one under discussion differs in using "failover" transports. So what 
   > > about something like simply "staticfailover":
   > > ```
   > > <networkConnector uri="staticfailover:(uri1,uri2)"/>
   > > ```
   > 
   > Why make a new name at all? Why is simply nesting the existing ones not 
   > suitable here, e.g.
   > 
   > static:(failover:(uri1,uri2)?randomize=false&maxReconnectAttempts=0)
   
   Maybe it would be, though apparently in the past it was considered not to be 
and this convenience was created instead. At this point I don't see a 
particular benefit in trying to unwind the convenience, so just changing the 
prefix seems the simplest to explain to upgraders and the simplest for them to 
action, and so is what I would do. As it actually seemed to have nothing in 
particular to do with topology or its naming, it seemed odd to get caught up on 
that here at all, and so I suggested an obvious, very literal name that fits 
what it's used for.
   
   I'm not against just removing it if everyone else wants to unwind the 
convenience entirely and then cover things as needed for upgraders, but I think 
the rename is simpler for everyone. I don't plan to spend more time on this 
than I have already.




Issue Time Tracking
---

Worklog Id: (was: 765390)
Time Spent: 2h  (was: 1h 50m)

> Deprecate masterslave discovery agent and add a new leaderfollower discovery 
> agent
> --
>
> Key: AMQ-8593
> URL: https://issues.apache.org/jira/browse/AMQ-8593
> Project: ActiveMQ
>  Issue Type: Task
>  Components: Network of Brokers
>Affects Versions: 5.17.2
>Reporter: Lucas Tétreault
>Priority: Major
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> This [tweet|https://twitter.com/owenblacker/status/1517156221207212032] 
> raised the issue of non-inclusive terminology in the [AWS 
> docs|https://docs.aws.amazon.com/amazon-mq/latest/developer-guide/amazon-mq-creating-configuring-network-of-brokers.html#creating-configuring-network-of-brokers-configure-network-connectors]
>  and suggested that we should replace masterslave with a more inclusive name 
> for the network connector transport. The AWS docs refer to a feature of 
> ActiveMQ that is a convenience discovery agent: 
> [https://activemq.apache.org/networks-of-brokers#masterslave-discovery]
> Replacing master/slave nomenclature in ActiveMQ was raised in July 2020 
> AMQ-7514 and there have been some attempts at making some changes 
> ([#679|https://github.com/apache/activemq/pull/679], 
> [#714|https://github.com/apache/activemq/pull/714], 
> [#788|https://github.com/apache/activemq/pull/788]) however we have not been 
> able to come to an agreement on nomenclature so these efforts seem to have 
> stalled out.
> If we are able to come to an agreement on nomenclature in this PR, we can 
> move forward with removing more non-inclusive terminology on the website (I 
> will follow up with some PRs to the website), in discussions with the 
> community and of course in this codebase. This will remove adoption barriers 
> and make ActiveMQ a more approachable and inclusive project for everyone! 
> Other Apache projects such as Solr and Kafka have moved from master/slave to 
> leader/follower. Leader/follower is also recommended by the 
> [IETF|https://tools.ietf.org/id/draft-knodel-terminology-02.html] and 
> [inclusivenaming.org|https://inclusivenaming.org/word-lists/tier-1/] which is 
> supported by companies such as Cisco, Intel, and RedHat.
> If we can't come to an agreement on Leader/Follower or some other 
> nomenclature I will, at the very least, create a follow up PR to remove the 
> masterslave transport since it is just a convenience method to use 
> static+failover with ?randomize=false&maxReconnectAttempts=0.
> This change leaves the masterslave: transport in place but provides a new 
> alias leaderfollower: for now but we can easily remove it in 5.18.0.
> PR: https://github.com/apache/activemq/pull/835



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Work logged] (ARTEMIS-3707) ResourceAdapter improvements

2022-05-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3707?focusedWorklogId=765370&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-765370
 ]

ASF GitHub Bot logged work on ARTEMIS-3707:
---

Author: ASF GitHub Bot
Created on: 03/May/22 11:26
Start Date: 03/May/22 11:26
Worklog Time Spent: 10m 
  Work Description: gemmellr commented on PR #4044:
URL: 
https://github.com/apache/activemq-artemis/pull/4044#issuecomment-1115990630

   You have definitely misunderstood me. Though I did query at one point why 
the new rar modules were being added, as they seemed largely a duplication of 
the example and their presence wasn't clear given that the documentation also 
being added said you _must_ build your own like the example; we then discussed 
why on the PR. I reviewed the PR multiple times after that without saying 
'remove these', and even added stuff to the build myself so you could use it 
to more easily action some prior feedback I had given; plus, when you asked if 
the PR was dead I said no. You then closed it and raised the other, somewhat 
different PRs.




Issue Time Tracking
---

Worklog Id: (was: 765370)
Time Spent: 8h 40m  (was: 8.5h)

> ResourceAdapter improvements
> 
>
> Key: ARTEMIS-3707
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3707
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Reporter: Waldi
>Priority: Major
> Fix For: 2.23.0
>
>  Time Spent: 8h 40m
>  Remaining Estimate: 0h
>
> Hi Everybody,
> I've tried to install the resourceAdapter in OpenLiberty/WLP by myself. I 
> ran into some trouble and fixed it myself. Now I would like to provide the 
> modifications and a small piece of documentation for the resourceAdapter, if 
> you are interested.
>  * Sample config / project for openliberty/WLP
>  * remove usage of the transactionManager in the resource adapter
>  * fix the jakarta ra.xml namespaces and classnames
>  * create a maven build for a rar archive with fewer dependencies and 
> therefore a smaller footprint.
> I took notice of ARTEMIS-1487 and ARTEMIS-1614. In my opinion, with my 
> contributions, we can close these issues. Can you tell me your thoughts and 
> give me feedback?



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Work logged] (AMQ-8593) Deprecate masterslave discovery agent and add a new leaderfollower discovery agent

2022-05-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/AMQ-8593?focusedWorklogId=765365&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-765365
 ]

ASF GitHub Bot logged work on AMQ-8593:
---

Author: ASF GitHub Bot
Created on: 03/May/22 11:18
Start Date: 03/May/22 11:18
Worklog Time Spent: 10m 
  Work Description: michaelpearce-gain commented on PR #835:
URL: https://github.com/apache/activemq/pull/835#issuecomment-1115984626

   So, replacing the wording of master/slave with either primary/secondary or 
leader/follower, as I stated, requires a proposal and a vote on the dev 
mailing list, plain and simple; there is no way to dance around it in Slack or 
PRs.
   
   But for this PR I think just deprecation (not yet removal) of the 
convenience method is simplest and cleanest, IMO.




Issue Time Tracking
---

Worklog Id: (was: 765365)
Time Spent: 1h 50m  (was: 1h 40m)

> Deprecate masterslave discovery agent and add a new leaderfollower discovery 
> agent
> --
>
> Key: AMQ-8593
> URL: https://issues.apache.org/jira/browse/AMQ-8593
> Project: ActiveMQ
>  Issue Type: Task
>  Components: Network of Brokers
>Affects Versions: 5.17.2
>Reporter: Lucas Tétreault
>Priority: Major
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> This [tweet|https://twitter.com/owenblacker/status/1517156221207212032] 
> raised the issue of non-inclusive terminology in the [AWS 
> docs|https://docs.aws.amazon.com/amazon-mq/latest/developer-guide/amazon-mq-creating-configuring-network-of-brokers.html#creating-configuring-network-of-brokers-configure-network-connectors]
>  and suggested that we should replace masterslave with a more inclusive name 
> for the network connector transport. The AWS docs refer to a feature of 
> ActiveMQ that is a convenience discovery agent: 
> [https://activemq.apache.org/networks-of-brokers#masterslave-discovery]
> Replacing master/slave nomenclature in ActiveMQ was raised in July 2020 
> AMQ-7514 and there have been some attempts at making some changes 
> ([#679|https://github.com/apache/activemq/pull/679], 
> [#714|https://github.com/apache/activemq/pull/714], 
> [#788|https://github.com/apache/activemq/pull/788]) however we have not been 
> able to come to an agreement on nomenclature so these efforts seem to have 
> stalled out.
> If we are able to come to an agreement on nomenclature in this PR, we can 
> move forward with removing more non-inclusive terminology on the website (I 
> will follow up with some PRs to the website), in discussions with the 
> community and of course in this codebase. This will remove adoption barriers 
> and make ActiveMQ a more approachable and inclusive project for everyone! 
> Other Apache projects such as Solr and Kafka have moved from master/slave to 
> leader/follower. Leader/follower is also recommended by the 
> [IETF|https://tools.ietf.org/id/draft-knodel-terminology-02.html] and 
> [inclusivenaming.org|https://inclusivenaming.org/word-lists/tier-1/] which is 
> supported by companies such as Cisco, Intel, and RedHat.
> If we can't come to an agreement on Leader/Follower or some other 
> nomenclature I will, at the very least, create a follow up PR to remove the 
> masterslave transport since it is just a convenience method to use 
> static+failover with ?randomize=false&maxReconnectAttempts=0.
> This change leaves the masterslave: transport in place but provides a new 
> alias leaderfollower: for now but we can easily remove it in 5.18.0.
> PR: https://github.com/apache/activemq/pull/835



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Work logged] (AMQ-8593) Deprecate masterslave discovery agent and add a new leaderfollower discovery agent

2022-05-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/AMQ-8593?focusedWorklogId=765362&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-765362
 ]

ASF GitHub Bot logged work on AMQ-8593:
---

Author: ASF GitHub Bot
Created on: 03/May/22 11:09
Start Date: 03/May/22 11:09
Worklog Time Spent: 10m 
  Work Description: lucastetreault commented on PR #835:
URL: https://github.com/apache/activemq/pull/835#issuecomment-1115978234

   > > I still think it would be valuable to try to get to a consensus on what 
terms should replace master/slave. Any thoughts on how we can go about that?
   > 
   > essentially requires a vote on the dev mailing list if you need to have a 
   > decision, for the community and PMC to cast their votes.
   > 
   > This said, for this change, is it even needed that we make a new name? 
   > Why not go with simply nesting the existing names/URL transports, per the 
   > same comment to Robbie, which I believe you even highlighted in Slack 
   > yourself? Are we really saving anything by telling users to put:
   > 
   > (already supported) static:(failover:(uri1,uri2))
   > 
   > vs
   > 
   > staticfailover:(uri1,uri2)
   
   I'm fine if the solution in the codebase is to simply remove the convenience 
method. Even so, I think we need to agree on new terms. For example, to update 
https://activemq.apache.org/networks-of-brokers#masterslave-discovery with new 
instructions, how do I refer to the pair of brokers using shared storage if not 
master/slave? I know Matt proposed some wording on Slack that dances around the 
problem, but there are lots of other places in the docs that could use updating 
as well :) 




Issue Time Tracking
---

Worklog Id: (was: 765362)
Time Spent: 1h 40m  (was: 1.5h)

> Deprecate masterslave discovery agent and add a new leaderfollower discovery 
> agent
> --
>
> Key: AMQ-8593
> URL: https://issues.apache.org/jira/browse/AMQ-8593
> Project: ActiveMQ
>  Issue Type: Task
>  Components: Network of Brokers
>Affects Versions: 5.17.2
>Reporter: Lucas Tétreault
>Priority: Major
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> This [tweet|https://twitter.com/owenblacker/status/1517156221207212032] 
> raised the issue of non-inclusive terminology in the [AWS 
> docs|https://docs.aws.amazon.com/amazon-mq/latest/developer-guide/amazon-mq-creating-configuring-network-of-brokers.html#creating-configuring-network-of-brokers-configure-network-connectors]
>  and suggested that we should replace masterslave with a more inclusive name 
> for the network connector transport. The AWS docs refer to a feature of 
> ActiveMQ that is a convenience discovery agent: 
> [https://activemq.apache.org/networks-of-brokers#masterslave-discovery]
> Replacing master/slave nomenclature in ActiveMQ was raised in July 2020 
> AMQ-7514 and there have been some attempts at making some changes 
> ([#679|https://github.com/apache/activemq/pull/679], 
> [#714|https://github.com/apache/activemq/pull/714], 
> [#788|https://github.com/apache/activemq/pull/788]) however we have not been 
> able to come to an agreement on nomenclature so these efforts seem to have 
> stalled out.
> If we are able to come to an agreement on nomenclature in this PR, we can 
> move forward with removing more non-inclusive terminology on the website (I 
> will follow up with some PRs to the website), in discussions with the 
> community and of course in this codebase. This will remove adoption barriers 
> and make ActiveMQ a more approachable and inclusive project for everyone! 
> Other Apache projects such as Solr and Kafka have moved from master/slave to 
> leader/follower. Leader/follower is also recommended by the 
> [IETF|https://tools.ietf.org/id/draft-knodel-terminology-02.html] and 
> [inclusivenaming.org|https://inclusivenaming.org/word-lists/tier-1/] which is 
> supported by companies such as Cisco, Intel, and RedHat.
> If we can't come to an agreement on Leader/Follower or some other 
> nomenclature I will, at the very least, create a follow up PR to remove the 
> masterslave transport since it is just a convenience method to use 
> static+failover with ?randomize=false&maxReconnectAttempts=0.
> This change leaves the masterslave: transport in place but provides a new 
> alias leaderfollower: for now but we can easily remove it in 5.18.0.
> PR: https://github.com/apache/activemq/pull/835



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Work logged] (AMQ-8593) Deprecate masterslave discovery agent and add a new leaderfollower discovery agent

2022-05-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/AMQ-8593?focusedWorklogId=765358&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-765358
 ]

ASF GitHub Bot logged work on AMQ-8593:
---

Author: ASF GitHub Bot
Created on: 03/May/22 10:51
Start Date: 03/May/22 10:51
Worklog Time Spent: 10m 
  Work Description: michaelpearce-gain commented on PR #835:
URL: https://github.com/apache/activemq/pull/835#issuecomment-1115965841

   > I still think it would be valuable to try to get to a consensus on what 
terms should replace master/slave. Any thoughts on how we can go about that?
   
   essentially requires a vote on the dev mailing list if you need to have a 
decision, for the community and PMC to cast their votes.
   
   This said, for this change, is it even needed? Why not go with simply 
nesting the existing names/URL transports, per the same comment to Robbie, 
which I believe you even highlighted in Slack yourself? 




Issue Time Tracking
---

Worklog Id: (was: 765358)
Time Spent: 1.5h  (was: 1h 20m)

> Deprecate masterslave discovery agent and add a new leaderfollower discovery 
> agent
> --
>
> Key: AMQ-8593
> URL: https://issues.apache.org/jira/browse/AMQ-8593
> Project: ActiveMQ
>  Issue Type: Task
>  Components: Network of Brokers
>Affects Versions: 5.17.2
>Reporter: Lucas Tétreault
>Priority: Major
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> This [tweet|https://twitter.com/owenblacker/status/1517156221207212032] 
> raised the issue of non-inclusive terminology in the [AWS 
> docs|https://docs.aws.amazon.com/amazon-mq/latest/developer-guide/amazon-mq-creating-configuring-network-of-brokers.html#creating-configuring-network-of-brokers-configure-network-connectors]
>  and suggested that we should replace masterslave with a more inclusive name 
> for the network connector transport. The AWS docs refer to a feature of 
> ActiveMQ that is a convenience discovery agent: 
> [https://activemq.apache.org/networks-of-brokers#masterslave-discovery]
> Replacing master/slave nomenclature in ActiveMQ was raised in July 2020 
> AMQ-7514 and there have been some attempts at making some changes 
> ([#679|https://github.com/apache/activemq/pull/679], 
> [#714|https://github.com/apache/activemq/pull/714], 
> [#788|https://github.com/apache/activemq/pull/788]) however we have not been 
> able to come to an agreement on nomenclature so these efforts seem to have 
> stalled out.
> If we are able to come to an agreement on nomenclature in this PR, we can 
> move forward with removing more non-inclusive terminology on the website (I 
> will follow up with some PRs to the website), in discussions with the 
> community and of course in this codebase. This will remove adoption barriers 
> and make ActiveMQ a more approachable and inclusive project for everyone! 
> Other Apache projects such as Solr and Kafka have moved from master/slave to 
> leader/follower. Leader/follower is also recommended by the 
> [IETF|https://tools.ietf.org/id/draft-knodel-terminology-02.html] and 
> [inclusivenaming.org|https://inclusivenaming.org/word-lists/tier-1/] which is 
> supported by companies such as Cisco, Intel, and RedHat.
> If we can't come to an agreement on Leader/Follower or some other 
> nomenclature I will, at the very least, create a follow up PR to remove the 
> masterslave transport since it is just a convenience method to use 
> static+failover with ?randomize=false&maxReconnectAttempts=0.
> This change leaves the masterslave: transport in place but provides a new 
> alias leaderfollower: for now but we can easily remove it in 5.18.0.
> PR: https://github.com/apache/activemq/pull/835



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Work logged] (AMQ-8593) Deprecate masterslave discovery agent and add a new leaderfollower discovery agent

2022-05-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/AMQ-8593?focusedWorklogId=765355&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-765355
 ]

ASF GitHub Bot logged work on AMQ-8593:
---

Author: ASF GitHub Bot
Created on: 03/May/22 10:47
Start Date: 03/May/22 10:47
Worklog Time Spent: 10m 
  Work Description: michaelpearce-gain commented on PR #835:
URL: https://github.com/apache/activemq/pull/835#issuecomment-1115963086

   
   > The other config type is "static":
   > 
   > ```
   > <networkConnector uri="static:(uri1,uri2)"/>
   > ```
   > 
   > The one under discussion differs in using "failover" transports. So what 
   > about something like simply "staticfailover":
   > 
   > ```
   > <networkConnector uri="staticfailover:(uri1,uri2)"/>
   > ```
   
   Why make a new name at all? Why is simply nesting the existing ones not 
suitable here, e.g.
   
   static:(failover:(uri1,uri2)?randomize=false&maxReconnectAttempts=0)




Issue Time Tracking
---

Worklog Id: (was: 765355)
Time Spent: 1h 20m  (was: 1h 10m)

> Deprecate masterslave discovery agent and add a new leaderfollower discovery 
> agent
> --
>
> Key: AMQ-8593
> URL: https://issues.apache.org/jira/browse/AMQ-8593
> Project: ActiveMQ
>  Issue Type: Task
>  Components: Network of Brokers
>Affects Versions: 5.17.2
>Reporter: Lucas Tétreault
>Priority: Major
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> This [tweet|https://twitter.com/owenblacker/status/1517156221207212032] 
> raised the issue of non-inclusive terminology in the [AWS 
> docs|https://docs.aws.amazon.com/amazon-mq/latest/developer-guide/amazon-mq-creating-configuring-network-of-brokers.html#creating-configuring-network-of-brokers-configure-network-connectors]
>  and suggested that we should replace masterslave with a more inclusive name 
> for the network connector transport. The AWS docs refer to a feature of 
> ActiveMQ that is a convenience discovery agent: 
> [https://activemq.apache.org/networks-of-brokers#masterslave-discovery]
> Replacing master/slave nomenclature in ActiveMQ was raised in July 2020 
> AMQ-7514 and there have been some attempts at making some changes 
> ([#679|https://github.com/apache/activemq/pull/679], 
> [#714|https://github.com/apache/activemq/pull/714], 
> [#788|https://github.com/apache/activemq/pull/788]) however we have not been 
> able to come to an agreement on nomenclature so these efforts seem to have 
> stalled out.
> If we are able to come to an agreement on nomenclature in this PR, we can 
> move forward with removing more non-inclusive terminology on the website (I 
> will follow up with some PRs to the website), in discussions with the 
> community and of course in this codebase. This will remove adoption barriers 
> and make ActiveMQ a more approachable and inclusive project for everyone! 
> Other Apache projects such as Solr and Kafka have moved from master/slave to 
> leader/follower. Leader/follower is also recommended by the 
> [IETF|https://tools.ietf.org/id/draft-knodel-terminology-02.html] and 
> [inclusivenaming.org|https://inclusivenaming.org/word-lists/tier-1/] which is 
> supported by companies such as Cisco, Intel, and RedHat.
> If we can't come to an agreement on Leader/Follower or some other 
> nomenclature I will, at the very least, create a follow up PR to remove the 
> masterslave transport since it is just a convenience method to use 
> static+failover with ?randomize=false&maxReconnectAttempts=0.
> This change leaves the masterslave: transport in place but provides a new 
> alias leaderfollower: for now but we can easily remove it in 5.18.0.
> PR: https://github.com/apache/activemq/pull/835



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Work logged] (AMQ-8593) Deprecate masterslave discovery agent and add a new leaderfollower discovery agent

2022-05-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/AMQ-8593?focusedWorklogId=765353&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-765353
 ]

ASF GitHub Bot logged work on AMQ-8593:
---

Author: ASF GitHub Bot
Created on: 03/May/22 10:38
Start Date: 03/May/22 10:38
Worklog Time Spent: 10m 
  Work Description: lucastetreault commented on PR #835:
URL: https://github.com/apache/activemq/pull/835#issuecomment-1115956606

   > Def needs a deprecation log entry if old config is read up, so that users 
are alerted maybe if still using old config.
   
   How about: `LOG.warn("masterSlave is deprecated and will be removed in a 
future release. Use {newname} instead.");` 
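   
   Something like that could presumably be hooked in where the legacy prefix 
is resolved; a hedged sketch (the resolver class and the `staticfailover:` 
replacement name are illustrative assumptions, not actual ActiveMQ code):
   ```
   // Illustrative only: warn when the deprecated prefix is seen and rewrite
   // it to the (hypothetical) replacement prefix.
   import org.slf4j.Logger;
   import org.slf4j.LoggerFactory;

   public class LegacyPrefixResolver {
       private static final Logger LOG = LoggerFactory.getLogger(LegacyPrefixResolver.class);

       public static String resolve(String uri) {
           if (uri.startsWith("masterslave:")) {
               LOG.warn("masterslave: is deprecated and will be removed in a future release. "
                       + "Use staticfailover: instead.");
               return "staticfailover:" + uri.substring("masterslave:".length());
           }
           return uri;
       }
   }
   ```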
   
   > The one under discussion differs in using "failover" transports. So what 
about something like simply "staticfailover":
   
   I like that. Very literal which is probably good in this case. 
   
   I still think it would be valuable to try to get to a consensus on what 
terms should replace master/slave. Any thoughts on how we can go about that? 
   
   
   
   




Issue Time Tracking
---

Worklog Id: (was: 765353)
Time Spent: 1h 10m  (was: 1h)

> Deprecate masterslave discovery agent and add a new leaderfollower discovery 
> agent
> --
>
> Key: AMQ-8593
> URL: https://issues.apache.org/jira/browse/AMQ-8593
> Project: ActiveMQ
>  Issue Type: Task
>  Components: Network of Brokers
>Affects Versions: 5.17.2
>Reporter: Lucas Tétreault
>Priority: Major
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> This [tweet|https://twitter.com/owenblacker/status/1517156221207212032] 
> raised the issue of non-inclusive terminology in the [AWS 
> docs|https://docs.aws.amazon.com/amazon-mq/latest/developer-guide/amazon-mq-creating-configuring-network-of-brokers.html#creating-configuring-network-of-brokers-configure-network-connectors]
>  and suggested that we should replace masterslave with a more inclusive name 
> for the network connector transport. The AWS docs refer to a feature of 
> ActiveMQ that is a convenience discovery agent: 
> [https://activemq.apache.org/networks-of-brokers#masterslave-discovery]
> Replacing master/slave nomenclature in ActiveMQ was raised in July 2020 
> AMQ-7514 and there have been some attempts at making some changes 
> ([#679|https://github.com/apache/activemq/pull/679], 
> [#714|https://github.com/apache/activemq/pull/714], 
> [#788|https://github.com/apache/activemq/pull/788]) however we have not been 
> able to come to an agreement on nomenclature so these efforts seem to have 
> stalled out.
> If we are able to come to an agreement on nomenclature in this PR, we can 
> move forward with removing more non-inclusive terminology on the website (I 
> will follow up with some PRs to the website), in discussions with the 
> community and of course in this codebase. This will remove adoption barriers 
> and make ActiveMQ a more approachable and inclusive project for everyone! 
> Other Apache projects such as Solr and Kafka have moved from master/slave to 
> leader/follower. Leader/follower is also recommended by the 
> [IETF|https://tools.ietf.org/id/draft-knodel-terminology-02.html] and 
> [inclusivenaming.org|https://inclusivenaming.org/word-lists/tier-1/] which is 
> supported by companies such as Cisco, Intel, and RedHat.
> If we can't come to an agreement on Leader/Follower or some other 
> nomenclature I will, at the very least, create a follow up PR to remove the 
> masterslave transport since it is just a convenience method to use 
> static+failover with ?randomize=false&maxReconnectAttempts=0.
> This change leaves the masterslave: transport in place but provides a new 
> alias leaderfollower: for now but we can easily remove it in 5.18.0.
> PR: https://github.com/apache/activemq/pull/835



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Work logged] (AMQ-8593) Deprecate masterslave discovery agent and add a new leaderfollower discovery agent

2022-05-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/AMQ-8593?focusedWorklogId=765350&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-765350
 ]

ASF GitHub Bot logged work on AMQ-8593:
---

Author: ASF GitHub Bot
Created on: 03/May/22 10:05
Start Date: 03/May/22 10:05
Worklog Time Spent: 10m 
  Work Description: gemmellr commented on code in PR #835:
URL: https://github.com/apache/activemq/pull/835#discussion_r863632302


##
activemq-client/src/main/java/org/apache/activemq/transport/discovery/masterslave/MasterSlaveDiscoveryAgentFactory.java:
##
@@ -27,6 +27,7 @@
 import java.net.URI;
 import java.util.Map;
 
+@Deprecated

Review Comment:
   Ditto



##
activemq-client/src/main/java/org/apache/activemq/transport/discovery/masterslave/MasterSlaveDiscoveryAgent.java:
##
@@ -24,7 +24,10 @@
 /**
  * A static DiscoveryAgent that supports connecting to a Master / Slave tuple
  * of brokers.
+ *
+ * @deprecated This class is superseded by HADiscoveryAgent and will be 
removed in 5.18.0.
  */
+@Deprecated

Review Comment:
   Can it use the 'for removal' flag? If so it seems like it should, though I 
realise most things aren't going to touch the code itself, so it would really 
only be for descriptive purposes.
   
   (I've yet to use it myself so I'm not clear on when it came in) 
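   
   For reference, the `forRemoval` and `since` elements were added to 
`@Deprecated` in Java 9 (JEP 277). A hedged, self-contained illustration; the 
version string is a placeholder:
   ```
   public class ForRemovalExample {
       @Deprecated(since = "5.17.2", forRemoval = true)
       public static void legacyConvenience() {
           // kept only for the deprecation window; superseded as described above
       }
   }
   ```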



##
activemq-unit-tests/src/test/java/org/apache/activemq/network/NetworkConnectionsTest.java:
##
@@ -5,9 +5,9 @@
  * The ASF licenses this file to You under the Apache License, Version 2.0
  * (the "License"); you may not use this file except in compliance with
  * the License.  You may obtain a copy of the License at
- *
- *  http://www.apache.org/licenses/LICENSE-2.0
- *
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 

Review Comment:
   Seems like this should be unwound...but if a change actually needs made, I'd 
make the licence header a comment rather than Javadoc.





Issue Time Tracking
---

Worklog Id: (was: 765350)
Time Spent: 1h  (was: 50m)

> Deprecate masterslave discovery agent and add a new leaderfollower discovery 
> agent
> --
>
> Key: AMQ-8593
> URL: https://issues.apache.org/jira/browse/AMQ-8593
> Project: ActiveMQ
>  Issue Type: Task
>  Components: Network of Brokers
>Affects Versions: 5.17.2
>Reporter: Lucas Tétreault
>Priority: Major
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> This [tweet|https://twitter.com/owenblacker/status/1517156221207212032] 
> raised the issue of non-inclusive terminology in the [AWS 
> docs|https://docs.aws.amazon.com/amazon-mq/latest/developer-guide/amazon-mq-creating-configuring-network-of-brokers.html#creating-configuring-network-of-brokers-configure-network-connectors]
>  and suggested that we should replace masterslave with a more inclusive name 
> for the network connector transport. The AWS docs refer to a feature of 
> ActiveMQ that is a convenience discovery agent: 
> [https://activemq.apache.org/networks-of-brokers#masterslave-discovery]
> Replacing master/slave nomenclature in ActiveMQ was raised in July 2020 
> AMQ-7514 and there have been some attempts at making some changes 
> ([#679|https://github.com/apache/activemq/pull/679], 
> [#714|https://github.com/apache/activemq/pull/714], 
> [#788|https://github.com/apache/activemq/pull/788]) however we have not been 
> able to come to an agreement on nomenclature so these efforts seem to have 
> stalled out.
> If we are able to come to an agreement on nomenclature in this PR, we can 
> move forward with removing more non-inclusive terminology on the website (I 
> will follow up with some PRs to the website), in discussions with the 
> community and of course in this codebase. This will remove adoption barriers 
> and make ActiveMQ a more approachable and inclusive project for everyone! 
> Other Apache projects such as Solr and Kafka have moved from master/slave to 
> leader/follower. Leader/follower is also recommended by the 
> [IETF|https://tools.ietf.org/id/draft-knodel-terminology-02.html] and 
> [inclusivenaming.org|https://inclusivenaming.org/word-lists/tier-1/] which is 
> supported by companies such as Cisco, Intel, and RedHat.
> If we can't come to an agreement on Leader/Follower or some other 
> nomenclature I will, at the very least, create a follow up PR to remove the 
> masterslave transport since it is just a convenience method to use 
> static+failover with ?randomize=false&maxReconnectAttempts=0.
> This change leaves the masterslave: transport in place but provides a new 
> alias leaderfollower: for now but we can easily remove it in 5.18.0.
> PR: https://github.com/apache/activemq/pull/835



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

[jira] [Work logged] (AMQ-8593) Deprecate masterslave discovery agent and add a new leaderfollower discovery agent

2022-05-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/AMQ-8593?focusedWorklogId=765349&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-765349
 ]

ASF GitHub Bot logged work on AMQ-8593:
---

Author: ASF GitHub Bot
Created on: 03/May/22 09:57
Start Date: 03/May/22 09:57
Worklog Time Spent: 10m 
  Work Description: gemmellr commented on PR #835:
URL: https://github.com/apache/activemq/pull/835#issuecomment-1115923343

   Since the docs say this mechanism is essentially an alternative means of 
specifying static "failover:" transport usages with some unnamed options, and 
the code seemingly confirms that, looking like just a loop for creating a 
"failover://(looped-given-servers)?randomize=false&maxReconnectAttempts=0" 
string, why not just name it more literally for what the config actually does, 
rather than trying to come up with some new abstract topology name that it 
seems people have a hard time agreeing on?
   
   The other config type is "static":
   ```
   <networkConnector uri="static:(uri1,uri2)"/>
   ```
   The one under discussion differs in using "failover" transports. So what 
about something like simply "staticfailover":
   ```
   <networkConnector uri="staticfailover:(uri1,uri2)"/>
   ```
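   
   Per that reading, the convenience seemingly amounts to folding the 
configured broker URIs into a single failover: URI; a hedged sketch (names are 
illustrative, not the actual ActiveMQ code):
   ```
   import java.util.List;

   public class StaticFailoverUriBuilder {
       // Fold the given broker URIs into one failover: URI with the options
       // described above (randomize=false, maxReconnectAttempts=0).
       static String build(List<String> brokerUris) {
           return "failover://(" + String.join(",", brokerUris)
                   + ")?randomize=false&maxReconnectAttempts=0";
       }

       public static void main(String[] args) {
           // prints failover://(tcp://host1:61616,tcp://host2:61616)?randomize=false&maxReconnectAttempts=0
           System.out.println(build(List.of("tcp://host1:61616", "tcp://host2:61616")));
       }
   }
   ```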




Issue Time Tracking
---

Worklog Id: (was: 765349)
Time Spent: 50m  (was: 40m)

> Deprecate masterslave discovery agent and add a new leaderfollower discovery 
> agent
> --
>
> Key: AMQ-8593
> URL: https://issues.apache.org/jira/browse/AMQ-8593
> Project: ActiveMQ
>  Issue Type: Task
>  Components: Network of Brokers
>Affects Versions: 5.17.2
>Reporter: Lucas Tétreault
>Priority: Major
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> This [tweet|https://twitter.com/owenblacker/status/1517156221207212032] 
> raised the issue of non-inclusive terminology in the [AWS 
> docs|https://docs.aws.amazon.com/amazon-mq/latest/developer-guide/amazon-mq-creating-configuring-network-of-brokers.html#creating-configuring-network-of-brokers-configure-network-connectors]
>  and suggested that we should replace masterslave with a more inclusive name 
> for the network connector transport. The AWS docs refer to a feature of 
> ActiveMQ that is a convenience discovery agent: 
> [https://activemq.apache.org/networks-of-brokers#masterslave-discovery]
> Replacing master/slave nomenclature in ActiveMQ was raised in July 2020 
> AMQ-7514 and there have been some attempts at making some changes 
> ([#679|https://github.com/apache/activemq/pull/679], 
> [#714|https://github.com/apache/activemq/pull/714], 
> [#788|https://github.com/apache/activemq/pull/788]) however we have not been 
> able to come to an agreement on nomenclature so these efforts seem to have 
> stalled out.
> If we are able to come to an agreement on nomenclature in this PR, we can 
> move forward with removing more non-inclusive terminology on the website (I 
> will follow up with some PRs to the website), in discussions with the 
> community and of course in this codebase. This will remove adoption barriers 
> and make ActiveMQ a more approachable and inclusive project for everyone! 
> Other Apache projects such as Solr and Kafka have moved from master/slave to 
> leader/follower. Leader/follower is also recommended by the 
> [IETF|https://tools.ietf.org/id/draft-knodel-terminology-02.html] and 
> [inclusivenaming.org|https://inclusivenaming.org/word-lists/tier-1/] which is 
> supported by companies such as Cisco, Intel, and RedHat.
> If we can't come to an agreement on Leader/Follower or some other 
> nomenclature I will, at the very least, create a follow up PR to remove the 
> masterslave transport since it is just a convenience method to use 
> static+failover with ?randomize=false&maxReconnectAttempts=0.
> This change leaves the masterslave: transport in place but provides a new 
> alias leaderfollower: for now but we can easily remove it in 5.18.0.
> PR: https://github.com/apache/activemq/pull/835



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Work logged] (AMQ-8593) Deprecate masterslave discovery agent and add a new leaderfollower discovery agent

2022-05-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/AMQ-8593?focusedWorklogId=765346&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-765346
 ]

ASF GitHub Bot logged work on AMQ-8593:
---

Author: ASF GitHub Bot
Created on: 03/May/22 09:41
Start Date: 03/May/22 09:41
Worklog Time Spent: 10m 
  Work Description: gemmellr commented on PR #835:
URL: https://github.com/apache/activemq/pull/835#issuecomment-1115911687

   > I assumed we could merge this into a 5.17.x release and then it would be 
fair game to remove in 5.18.0. Is there some written guidance on due notice for 
this type of change that I could refer to?
   
   5.17.x and 5.18.x won't necessarily coexist for as long as some previous 
release lines have (or maybe at all), and it seems likely some folks may update 
to one of the earlier 5.17.x releases that already exist, stick there for a 
while, and not see this in any later 5.17.x release before e.g. trying 5.18.0. 
Overall, it may be better to consider some period of time where the deprecated 
option's availability overlaps the new one, rather than just a specific 
version.




Issue Time Tracking
---

Worklog Id: (was: 765346)
Time Spent: 40m  (was: 0.5h)

> Deprecate masterslave discovery agent and add a new leaderfollower discovery 
> agent
> --
>
> Key: AMQ-8593
> URL: https://issues.apache.org/jira/browse/AMQ-8593
> Project: ActiveMQ
>  Issue Type: Task
>  Components: Network of Brokers
>Affects Versions: 5.17.2
>Reporter: Lucas Tétreault
>Priority: Major
>  Time Spent: 40m
>  Remaining Estimate: 0h



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Work logged] (AMQ-8593) Deprecate masterslave discovery agent and add a new leaderfollower discovery agent

2022-05-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/AMQ-8593?focusedWorklogId=765322&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-765322
 ]

ASF GitHub Bot logged work on AMQ-8593:
---

Author: ASF GitHub Bot
Created on: 03/May/22 08:35
Start Date: 03/May/22 08:35
Worklog Time Spent: 10m 
  Work Description: michaelpearce-gain commented on PR #835:
URL: https://github.com/apache/activemq/pull/835#issuecomment-1115859305

   It definitely needs a deprecation log entry when the old config is read, so 
that users who are still using the old config are alerted.
   
   Also, I'm a little -1 on using the terminology "ha": ha = highly available, 
and there are many ways to be "highly available", of which leader/follower 
(primary/secondary) is just one. On naming, I believe those two were the front 
runners for replacing the master/slave terminology.
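   
   For the deprecation log entry, a minimal sketch of the kind of warning being 
asked for might look like this (hypothetical class and method names, not the 
actual ActiveMQ code), emitting a warning whenever a URI with the legacy scheme 
is parsed:
   ```
   import org.slf4j.Logger;
   import org.slf4j.LoggerFactory;

   // Hypothetical sketch only: warn when the legacy "masterslave:" scheme is
   // still in use. Names here are illustrative, not ActiveMQ internals.
   public class DeprecationWarningSketch {
       private static final Logger LOG =
               LoggerFactory.getLogger(DeprecationWarningSketch.class);

       static void checkScheme(String uri) {
           if (uri.startsWith("masterslave:")) {
               LOG.warn("The masterslave: discovery agent is deprecated; "
                       + "use leaderfollower: instead (uri: {})", uri);
           }
       }
   }
   ```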




Issue Time Tracking
---

Worklog Id: (was: 765322)
Time Spent: 0.5h  (was: 20m)

> Deprecate masterslave discovery agent and add a new leaderfollower discovery 
> agent
> --
>
> Key: AMQ-8593
> URL: https://issues.apache.org/jira/browse/AMQ-8593
> Project: ActiveMQ
>  Issue Type: Task
>  Components: Network of Brokers
>Affects Versions: 5.17.2
>Reporter: Lucas Tétreault
>Priority: Major
>  Time Spent: 0.5h
>  Remaining Estimate: 0h



--
This message was sent by Atlassian Jira
(v8.20.7#820007)