[jira] [Updated] (AMQ-9013) ERROR | XXXX, no longer able to keep the exclusive lock so giving up being a master

2022-08-23 Thread DannyChan (Jira)


 [ 
https://issues.apache.org/jira/browse/AMQ-9013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DannyChan updated AMQ-9013:
---
Description: 
===

2022-08-15

Updated to 5.16.5; the problem still exists.

===

In our production environment, ActiveMQ often restarts randomly. JDBC Master 
Slave is configured as described in the official documentation. We saw two 
failovers within half an hour, and the VM and Oracle were both healthy during 
the failovers. Please analyze the problem, thanks.
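
For reference, the lease-based lock these logs describe is configured on the 
JDBC persistence adapter. A minimal sketch of such a locker (illustrative only; 
the reporter's actual activemq.xml is not shown, and the datasource reference 
is hypothetical — the 4000 ms interval matches the "Sleeping for 4000 milli(s)" 
lines below):

{code:xml}
<persistenceAdapter>
  <jdbcPersistenceAdapter dataSource="#oracle-ds">
    <locker>
      <!-- The master must keep renewing a lease row in the database;
           if a renewal misses the expiry window (GC pause, DB latency),
           the broker logs "no longer able to keep the exclusive lock
           so giving up being a master" and restarts as a slave. -->
      <lease-database-locker lockAcquireSleepInterval="4000"/>
    </locker>
  </jdbcPersistenceAdapter>
</persistenceAdapter>
{code}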

 

1 activemq log:
first failover
INFO   | jvm 1    | 2022/07/31 16:25:01 |  INFO | 1 Lease held by 2 
till Sun Jul 31 16:25:03 CST 2022
INFO   | jvm 1    | 2022/07/31 16:25:01 | ERROR | 1, no longer able to keep 
the exclusive lock so giving up being a master
INFO   | jvm 1    | 2022/07/31 16:25:01 |  INFO | Apache ActiveMQ 5.9.0 (1, 
ID:1-50824-1658747295825-0:1) is shutting down
INFO   | jvm 1    | 2022/07/31 16:25:03 |  INFO | Connector openwire stopped
INFO   | jvm 1    | 2022/07/31 16:25:03 |  INFO | 
PListStore:[/home/apache-activemq-5.9.0/bin/linux-x86-64/../../data/1/tmp_storage]
 stopped
INFO   | jvm 1    | 2022/07/31 16:25:03 |  INFO | Apache ActiveMQ 5.9.0 (1, 
ID:1-50824-1658747295825-0:1) uptime 11 days 13 hours
INFO   | jvm 1    | 2022/07/31 16:25:03 |  INFO | Apache ActiveMQ 5.9.0 (1, 
ID:1-50824-1658747295825-0:1) is shutdown
INFO   | jvm 1    | 2022/07/31 16:25:03 |  INFO | Closing 
org.apache.activemq.xbean.XBeanBrokerFactory$1@6415d653: startup date [Wed Jul 
20 03:15:12 CST 2022]; root of context hierarchy
INFO   | jvm 1    | 2022/07/31 16:25:03 |  INFO | Destroying Spring 
FrameworkServlet 'dispatcher'
INFO   | jvm 1    | 2022/07/31 16:25:03 |  INFO | Destroying hawtio 
authentication filter
INFO   | jvm 1    | 2022/07/31 16:25:03 | Restarting broker
INFO   | jvm 1    | 2022/07/31 16:25:03 | Loading message broker from: 
xbean:activemq.xml
INFO   | jvm 1    | 2022/07/31 16:25:03 |  INFO | Refreshing 
org.apache.activemq.xbean.XBeanBrokerFactory$1@294574e5: startup date [Sun Jul 
31 16:25:03 CST 2022]; root of context hierarchy
INFO   | jvm 1    | 2022/07/31 16:25:03 |  INFO | 
PListStore:[/home/apache-activemq-5.9.0/bin/linux-x86-64/../../data/1/tmp_storage]
 started
INFO   | jvm 1    | 2022/07/31 16:25:03 |  INFO | Using Persistence Adapter: 
JDBCPersistenceAdapter(org.apache.commons.dbcp.BasicDataSource@9158362)
INFO   | jvm 1    | 2022/07/31 16:25:03 |  INFO | JMX consoles can connect to 
service:jmx:rmi:///jndi/rmi://x.x.x.x:10616/jmxrmi
INFO   | jvm 1    | 2022/07/31 16:25:03 |  INFO | Database adapter driver 
override recognized for : [oracle_jdbc_driver] - adapter: class 
org.apache.activemq.store.jdbc.adapter.OracleJDBCAdapter
INFO   | jvm 1    | 2022/07/31 16:25:15 |  INFO | 1 attempting to acquire 
exclusive lease to become the master
INFO   | jvm 1    | 2022/07/31 16:25:15 |  INFO | 1 Lease held by 2 
till Sun Jul 31 16:25:17 CST 2022
INFO   | jvm 1    | 2022/07/31 16:25:15 |  INFO | 1 failed to acquire 
lease.  Sleeping for 4000 milli(s) before trying again...
second failover
INFO   | jvm 1    | 2022/07/31 16:37:16 |  INFO | 1 Lease held by 2 
till Sun Jul 31 16:37:20 CST 2022
INFO   | jvm 1    | 2022/07/31 16:37:16 |  INFO | 1 failed to acquire 
lease.  Sleeping for 4000 milli(s) before trying again...
INFO   | jvm 1    | 2022/07/31 16:37:20 |  INFO | 1, becoming master with 
lease expiry Sun Jul 31 16:37:20 CST 2022 on dataSource: 
org.apache.commons.dbcp.BasicDataSource@9158362
INFO   | jvm 1    | 2022/07/31 16:37:20 |  INFO | Apache ActiveMQ 5.9.0 (1, 
ID:01-50824-1658747295825-0:2) is starting
INFO   | jvm 1    | 2022/07/31 16:37:20 |  INFO | Listening for connections at: 
tcp://01:61616?maximumConnections=1000&wireFormat.maxFrameSize=104857600
INFO   | jvm 1    | 2022/07/31 16:37:20 |  INFO | Connector openwire started
INFO   | jvm 1    | 2022/07/31 16:37:20 |  INFO | Apache ActiveMQ 5.9.0 (1, 
ID:01-50824-1658747295825-0:2) started
INFO   | jvm 1    | 2022/07/31 16:37:20 |  INFO | For help or more information 
please see: [http://activemq.apache.org|http://activemq.apache.org/]
INFO   | jvm 1    | 2022/07/31 16:37:21 |  INFO | Welcome to hawtio 1.2-M23 : 
[http://hawt.io/] : Don't cha wish your console was hawt like me? ;)
INFO   | jvm 1    | 2022/07/31 16:37:21 |  INFO | Starting hawtio 
authentication filter, JAAS realm: "activemq" authorized role: "admins" role 
principal classes: "org.apache.activemq.jaas.GroupPrincipal"
INFO   | jvm 1    | 2022/07/31 16:37:21 |  INFO | Using file upload directory: 
/tmp/uploads
INFO   | jvm 1    | 2022/07/31 16:37:21 |  INFO | jolokia-agent: Using access 
restrictor classpath:/jolokia-access.xml
INFO   | jvm 1    | 2022/07/31 16:37:21 |  INFO | ActiveMQ WebConsole available 
at [http://localhost:8151/]
INFO   | jvm 1    | 2022/07/31 16:37:21 |  INFO | Initializing Spring 
FrameworkServlet 'dispatcher'

2 activemq log:
first failover
INFO   

[jira] [Created] (AMQ-9061) Memory Leak when Trying to Connect to Inactive ActiveMQ Process

2022-08-23 Thread Joe (Jira)
Joe created AMQ-9061:


 Summary: Memory Leak when Trying to Connect to Inactive ActiveMQ 
Process
 Key: AMQ-9061
 URL: https://issues.apache.org/jira/browse/AMQ-9061
 Project: ActiveMQ
  Issue Type: Bug
  Components: AMQP
Affects Versions: 5.15.2
Reporter: Joe


When ActiveMQ is shut down and we try to connect, we are seeing approximately 2 
threads created per second that are never closed. 

Here is an example of the thread status from jstack:

 
{code:java}
"ActiveMQ Connection Executor: unconnected" #246 daemon prio=5 os_prio=0 
tid=0x7f43b8016000 nid=0xbc3b waiting on condition [0x7f44401d7000]
   java.lang.Thread.State: WAITING (parking)
        at sun.misc.Unsafe.park(Native Method)
        - parking to wait for  <0x88681188> (a 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
        at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
        at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
 {code}
 

 

If it's relevant, we try to connect through an instance of 
ActiveMQConnectionFactory, using this function

 
createConnection(String userName, String password)
We do receive and catch the expected error that the connection to ActiveMQ was 
refused.

Of course, this eventually causes our application to crash.
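
The "ActiveMQ Connection Executor" threads in the dump are ThreadPoolExecutor 
workers parked in LinkedBlockingQueue.take(), i.e. idle pool threads whose 
owning executor was never shut down. A stdlib-only sketch of that leak pattern 
(hypothetical code, not the ActiveMQ client itself): an executor is created per 
connection attempt, the attempt fails, and shutdown() is never called, so each 
attempt strands one parked worker.

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ExecutorLeakSketch {

    // Simulates ten failed "connection attempts", each creating a
    // single-thread executor that is never shut down. Every executor
    // leaves its worker parked in LinkedBlockingQueue.take(), the same
    // WAITING (parking) state shown in the jstack output above.
    static int leakTen() throws InterruptedException {
        int before = Thread.activeCount();
        for (int i = 0; i < 10; i++) {
            ExecutorService exec = Executors.newSingleThreadExecutor(r -> {
                Thread t = new Thread(r, "leaky-connection-executor-sketch");
                t.setDaemon(true); // daemon only so this sketch can exit
                return t;
            });
            exec.submit(() -> { /* simulated connection task */ });
            // The bug being illustrated: no exec.shutdown() on failure.
        }
        Thread.sleep(200); // give the workers time to start and park
        return Thread.activeCount() - before;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("stranded worker threads: " + leakTen());
    }
}
{code}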



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (AMQ-9061) Memory Leak when Trying to Connect to Inactive ActiveMQ Process

2022-08-23 Thread Joe (Jira)


 [ 
https://issues.apache.org/jira/browse/AMQ-9061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe updated AMQ-9061:
-
Description: 
When ActiveMQ is shut down and we try to connect, we are seeing approximately 2 
threads created per second that are never closed. 

Here is an example of the thread status from jstack:

 
{code:java}
"ActiveMQ Connection Executor: unconnected" #246 daemon prio=5 os_prio=0 
tid=0x7f43b8016000 nid=0xbc3b waiting on condition [0x7f44401d7000]
   java.lang.Thread.State: WAITING (parking)
        at sun.misc.Unsafe.park(Native Method)
        - parking to wait for  <0x88681188> (a 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
        at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
        at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
 {code}
 

 

If it's relevant, we try to connect through an instance of 
ActiveMQConnectionFactory, using this function

 
{code:java}
createConnection(String userName, String password) {code}

We do receive and catch the expected error that the connection to ActiveMQ was 
refused.

Of course, this eventually causes our application to crash.

  was:
When ActiveMQ is shut down and we try to connect, we are seeing approximately 2 
threads created per second that are never closed. 

Here is an example of the thread status from jstack:

 
{code:java}
"ActiveMQ Connection Executor: unconnected" #246 daemon prio=5 os_prio=0 
tid=0x7f43b8016000 nid=0xbc3b waiting on condition [0x7f44401d7000]
   java.lang.Thread.State: WAITING (parking)
        at sun.misc.Unsafe.park(Native Method)
        - parking to wait for  <0x88681188> (a 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
        at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
        at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
 {code}
 

 

If it's relevant, we try to connect through an instance of 
ActiveMQConnectionFactory, using this function

 
createConnection(String userName, String password)
We do receive and catch the expected error that the connection to ActiveMQ was 
refused.

Of course, this eventually causes our application to crash.


> Memory Leak when Trying to Connect to Inactive ActiveMQ Process
> ---
>
> Key: AMQ-9061
> URL: https://issues.apache.org/jira/browse/AMQ-9061
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: AMQP
>Affects Versions: 5.15.2
>Reporter: Joe
>Priority: Major
>
> When ActiveMQ is shut down and we try to connect, we are seeing approximately 
> 2 threads created per second that are never closed. 
> Here is an example of the thread status from jstack:
>  
> {code:java}
> "ActiveMQ Connection Executor: unconnected" #246 daemon prio=5 os_prio=0 
> tid=0x7f43b8016000 nid=0xbc3b waiting on condition [0x7f44401d7000]
>    java.lang.Thread.State: WAITING (parking)
>         at sun.misc.Unsafe.park(Native Method)
>         - parking to wait for  <0x88681188> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
>         at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>         at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
>         at 
> java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
>         at 
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
>         at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
>         at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>         at java.lang.Thread.run(Thread.java:748)
>  {code}
>  
>  
> If it's relevant, we try to connect through an instance of 
> ActiveMQConnectionFactory, using this 

[jira] [Comment Edited] (ARTEMIS-3831) Scale-down fails when using same discovery-group used by Broker cluster connection

2022-08-23 Thread Bob Maloney (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17583767#comment-17583767
 ] 

Bob Maloney edited comment on ARTEMIS-3831 at 8/23/22 5:35 PM:
---

Receiving the same error. Are there example config files for the possible 
workaround in the description? Aside from scale-down, I have clustering 
operational in Kubernetes.

Note that the error can be replicated with a single cluster-enabled broker. For 
the workaround, I've essentially duplicated the existing configs. No errors on 
startup, but still receiving AMQ222181 on shutdown.

ha policy
{code:xml}
  
 

   true
   

 
   
{code}
acceptor/connector (for separate port)
{code:xml}
 ... 
 tcp://0.0.0.0:61618

 
 tcp://0.0.0.0:61619

  

  
 tcp://0.0.0.0:61618
 
 tcp://0.0.0.0:61619
  
{code}
broadcast-group
{code:xml}
  
 
2000
jgroups.xml
artemis_broadcast_channel
netty-connector
 
 
 
2000
jgroups_2.xml
jgroups_broadcast_channel
jgroups-netty-connector
 
  
{code}
discovery-group
{code:xml}
  
 
jgroups.xml
artemis_broadcast_channel
1
 
 
 
jgroups_2.xml
jgroups_broadcast_channel
1
 
   
{code}
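
The two stripped blocks above (broadcast-group and discovery-group) have lost 
their XML tags; judging from the values that survive, they pair two JGroups 
channels roughly as follows. This is a reconstruction sketch: element names are 
taken from the Artemis configuration schema, the group names are hypothetical, 
and the refresh-timeout value "1" is kept as shown although it may be truncated.

{code:xml}
<broadcast-groups>
   <broadcast-group name="bg-cluster">
      <broadcast-period>2000</broadcast-period>
      <jgroups-file>jgroups.xml</jgroups-file>
      <jgroups-channel>artemis_broadcast_channel</jgroups-channel>
      <connector-ref>netty-connector</connector-ref>
   </broadcast-group>
   <broadcast-group name="bg-scale-down">
      <broadcast-period>2000</broadcast-period>
      <jgroups-file>jgroups_2.xml</jgroups-file>
      <jgroups-channel>jgroups_broadcast_channel</jgroups-channel>
      <connector-ref>jgroups-netty-connector</connector-ref>
   </broadcast-group>
</broadcast-groups>

<discovery-groups>
   <discovery-group name="dg-cluster">
      <jgroups-file>jgroups.xml</jgroups-file>
      <jgroups-channel>artemis_broadcast_channel</jgroups-channel>
      <refresh-timeout>1</refresh-timeout>
   </discovery-group>
   <discovery-group name="dg-scale-down">
      <jgroups-file>jgroups_2.xml</jgroups-file>
      <jgroups-channel>jgroups_broadcast_channel</jgroups-channel>
      <refresh-timeout>1</refresh-timeout>
   </discovery-group>
</discovery-groups>
{code}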
cluster-connection
{code:xml}
  
 

netty-connector
1000
5000
5
5000
500
1.0
5000
-1
-1
true
ON_DEMAND
1
32000
3
1000
2

 
 
 

jgroups-netty-connector
1000
5000
5
5000
500
1.0
5000
-1
-1
true
ON_DEMAND
1
32000
3
1000
2

 
  
{code}
New jgroups_2.xml, with the only change being a separate bind_port. 
jgroups-kubernetes is used for server discovery
{code:xml}
<config xmlns="urn:org:jgroups" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/jgroups.xsd">

















{code}
 


was (Author: JIRAUSER281196):
Receiving the same error. Are there example config files for the possible 
workaround in the description? Aside from scale-down, I have clustering 
operational in Kubernetes.

Note that the error can be replicated with a single cluster-enabled broker. For 
the workaround, I've essentially duplicated the existing configs, but nothing 
stands out that now one JGroups channel will be used by the broker versus the 
other used by scale-down. No errors on startup, but still receiving AMQ222181 
on shutdown.

ha policy
{code:xml}
  
 

   true
   

 
   
{code}
acceptor/connector (for separate port)
{code:xml}
 ... 
 tcp://0.0.0.0:61618

 
 tcp://0.0.0.0:61619

  

  
 tcp://0.0.0.0:61618
 
 tcp://0.0.0.0:61619
  
{code}
broadcast-group
{code:xml}
  
 
2000
jgroups.xml
artemis_broadcast_channel
netty-connector
 
 
 
2000
jgroups_2.xml
jgroups_broadcast_channel
jgroups-netty-connector
 
  
{code}
discovery-group
{code:xml}
  
 
jgroups.xml
artemis_broadcast_channel
1
 
 
 
jgroups_2.xml
jgroups_broadcast_channel
1
 
   
{code}
cluster-connection
{code:xml}
  
 

netty-connector
1000
5000
5
5000
500
1.0
5000
-1
-1
true
ON_DEMAND
1
32000
3
1000
2

 
 
 

jgroups-netty-connector
1000
5000
5
5000
500
1.0
5000
-1
-1
true
ON_DEMAND
1
32000
3
1000
2

 
  
{code}
New jgroups_2.xml, with the only change being a separate bind_port. 

[jira] [Comment Edited] (ARTEMIS-3831) Scale-down fails when using same discovery-group used by Broker cluster connection

2022-08-23 Thread Bob Maloney (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17583767#comment-17583767
 ] 

Bob Maloney edited comment on ARTEMIS-3831 at 8/23/22 5:34 PM:
---

Receiving the same error. Are there example config files for the possible 
workaround in the description? Aside from scale-down, I have clustering 
operational in Kubernetes.

Note that the error can be replicated with a single cluster-enabled broker. For 
the workaround, I've essentially duplicated the existing configs, but nothing 
stands out that now one JGroups channel will be used by the broker versus the 
other used by scale-down. No errors on startup, but still receiving AMQ222181 
on shutdown.

ha policy
{code:xml}
  
 

   true
   

 
   
{code}
acceptor/connector (for separate port)
{code:xml}
 ... 
 tcp://0.0.0.0:61618

 
 tcp://0.0.0.0:61619

  

  
 tcp://0.0.0.0:61618
 
 tcp://0.0.0.0:61619
  
{code}
broadcast-group
{code:xml}
  
 
2000
jgroups.xml
artemis_broadcast_channel
netty-connector
 
 
 
2000
jgroups_2.xml
jgroups_broadcast_channel
jgroups-netty-connector
 
  
{code}
discovery-group
{code:xml}
  
 
jgroups.xml
artemis_broadcast_channel
1
 
 
 
jgroups_2.xml
jgroups_broadcast_channel
1
 
   
{code}
cluster-connection
{code:xml}
  
 

netty-connector
1000
5000
5
5000
500
1.0
5000
-1
-1
true
ON_DEMAND
1
32000
3
1000
2

 
 
 

jgroups-netty-connector
1000
5000
5
5000
500
1.0
5000
-1
-1
true
ON_DEMAND
1
32000
3
1000
2

 
  
{code}
New jgroups_2.xml, with the only change being a separate bind_port. 
jgroups-kubernetes is used for server discovery
{code:xml}
<config xmlns="urn:org:jgroups" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/jgroups.xsd">

















{code}
 


was (Author: JIRAUSER281196):
Receiving the same error. Are there example config files for the possible 
workaround in the description? Aside from scale-down, I have clustering 
operational in Kubernetes.

Note that the error can be replicated with a single cluster-enabled broker. For 
the workaround, I've essentially duplicated the existing configs, but nothing 
stands out that now one JGroups channel will be used by the broker versus the 
other used by scale-down. No errors on startup, but still receiving AMQ222181 
on shutdown.

acceptor/connector (for separate port)
{code:xml}
 ... 
 tcp://0.0.0.0:61618

 
 tcp://0.0.0.0:61619

  

  
 tcp://0.0.0.0:61618
 
 tcp://0.0.0.0:61619
  
{code}
broadcast-group
{code:xml}
  
 
2000
jgroups.xml
artemis_broadcast_channel
netty-connector
 
 
 
2000
jgroups_2.xml
jgroups_broadcast_channel
jgroups-netty-connector
 
  
{code}
discovery-group
{code:xml}
  
 
jgroups.xml
artemis_broadcast_channel
1
 
 
 
jgroups_2.xml
jgroups_broadcast_channel
1
 
   
{code}
cluster-connection
{code:xml}
  
 

netty-connector
1000
5000
5
5000
500
1.0
5000
-1
-1
true
ON_DEMAND
1
32000
3
1000
2

 
 
 

jgroups-netty-connector
1000
5000
5
5000
500
1.0
5000
-1
-1
true
ON_DEMAND
1
32000
3
1000
2

 
  
{code}
New jgroups_2.xml, with the only change being a separate bind_port. 
jgroups-kubernetes is 

[jira] [Commented] (ARTEMIS-3831) Scale-down fails when using same discovery-group used by Broker cluster connection

2022-08-23 Thread Bob Maloney (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17583767#comment-17583767
 ] 

Bob Maloney commented on ARTEMIS-3831:
--

Receiving the same error. Are there example config files for the possible 
workaround in the description? Aside from scale-down, I have clustering 
operational in Kubernetes.

Note that the error can be replicated with a single cluster-enabled broker. For 
the workaround, I've essentially duplicated the existing configs, but nothing 
stands out that now one JGroups channel will be used by the broker versus the 
other used by scale-down. No errors on startup, but still receiving AMQ222181 
on shutdown.

acceptor/connector (for separate port)
{code:xml}
 ... 
 tcp://0.0.0.0:61618

 
 tcp://0.0.0.0:61619

  

  
 tcp://0.0.0.0:61618
 
 tcp://0.0.0.0:61619
  
{code}
broadcast-group
{code:xml}
  
 
2000
jgroups.xml
artemis_broadcast_channel
netty-connector
 
 
 
2000
jgroups_2.xml
jgroups_broadcast_channel
jgroups-netty-connector
 
  
{code}
discovery-group
{code:xml}
  
 
jgroups.xml
artemis_broadcast_channel
1
 
 
 
jgroups_2.xml
jgroups_broadcast_channel
1
 
   
{code}
cluster-connection
{code:xml}
  
 

netty-connector
1000
5000
5
5000
500
1.0
5000
-1
-1
true
ON_DEMAND
1
32000
3
1000
2

 
 
 

jgroups-netty-connector
1000
5000
5
5000
500
1.0
5000
-1
-1
true
ON_DEMAND
1
32000
3
1000
2

 
  
{code}
New jgroups_2.xml, with the only change being a separate bind_port. 
jgroups-kubernetes is used for server discovery
{code:xml}
<config xmlns="urn:org:jgroups" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/jgroups.xsd">

















{code}
 

> Scale-down fails when using same discovery-group used by Broker cluster 
> connection
> --
>
> Key: ARTEMIS-3831
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3831
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.19.1
>Reporter: Apache Dev
>Priority: Major
>
> Using 2 Live brokers in cluster.
> Both having the following HA Policy:
> {code}
> <ha-policy>
>    <live-only>
>       <scale-down>
>          <enabled>true</enabled>
>          <discovery-group-ref discovery-group-name="activemq-discovery-group"/>
>       </scale-down>
>    </live-only>
> </ha-policy>
> {code}
> where "activemq-discovery-group" is using JGroups TCPPING:
> {code}
> 
> 
> ...
> ...
> 1
> 
> 
> {code}
> and it is used by the cluster of 2 brokers:
> {code}
> <cluster-connections>
>    <cluster-connection name="...">
>       <connector-ref>netty-connector</connector-ref>
>       <retry-interval>5000</retry-interval>
>       <use-duplicate-detection>true</use-duplicate-detection>
>       <message-load-balancing>OFF</message-load-balancing>
>       <max-hops>1</max-hops>
>       <discovery-group-ref discovery-group-name="activemq-discovery-group"/>
>    </cluster-connection>
> </cluster-connections>
> {code}
> Issue is that when shutdown happens, scale-down fails:
> {code}
> org.apache.activemq.artemis.core.server  W AMQ222181: 
> Unable to scaleDown messages
> ActiveMQInternalErrorException[errorType=INTERNAL_ERROR 
> message=AMQ219004: Failed to initialise session factory]
> at 
> org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.initialize(ServerLocatorImpl.java:272)
> at 
> org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.createSessionFactory(ServerLocatorImpl.java:655)
> at 
> org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.connect(ServerLocatorImpl.java:554)
> at 
> org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.connect(ServerLocatorImpl.java:533)
> at 
> org.apache.activemq.artemis.core.server.LiveNodeLocator.connectToCluster(LiveNodeLocator.java:85)
> at 
> org.apache.activemq.artemis.core.server.impl.LiveOnlyActivation.connectToScaleDownTarget(LiveOnlyActivation.java:146)
> at 
> 

[jira] [Updated] (ARTEMIS-3955) Consolidate Subject on RemotingConnection

2022-08-23 Thread Justin Bertram (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Justin Bertram updated ARTEMIS-3955:

Description: There's really no need to have {{subject}} _and_ 
{{auditSubject}}. These can be combined into one field for simplicity and 
clarity.

> Consolidate Subject on RemotingConnection
> -
>
> Key: ARTEMIS-3955
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3955
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Reporter: Justin Bertram
>Assignee: Justin Bertram
>Priority: Major
>
> There's really no need to have {{subject}} _and_ {{auditSubject}}. These can 
> be combined into one field for simplicity and clarity.





[jira] [Updated] (ARTEMIS-3955) Consolidate Subject on RemotingConnection

2022-08-23 Thread Justin Bertram (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Justin Bertram updated ARTEMIS-3955:

Description: There's really no need to have {{subject}} _and_ 
{{auditSubject}} on 
{{org.apache.activemq.artemis.spi.core.protocol.RemotingConnection}}. These can 
be combined into one field for simplicity and clarity.  (was: There's really no 
need to have {{subject}} _and_ {{auditSubject}}. These can be combined into one 
field for simplicity and clarity.)

> Consolidate Subject on RemotingConnection
> -
>
> Key: ARTEMIS-3955
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3955
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Reporter: Justin Bertram
>Assignee: Justin Bertram
>Priority: Major
>
> There's really no need to have {{subject}} _and_ {{auditSubject}} on 
> {{org.apache.activemq.artemis.spi.core.protocol.RemotingConnection}}. These 
> can be combined into one field for simplicity and clarity.





[jira] [Created] (ARTEMIS-3955) Consolidate Subject on RemotingConnection

2022-08-23 Thread Justin Bertram (Jira)
Justin Bertram created ARTEMIS-3955:
---

 Summary: Consolidate Subject on RemotingConnection
 Key: ARTEMIS-3955
 URL: https://issues.apache.org/jira/browse/ARTEMIS-3955
 Project: ActiveMQ Artemis
  Issue Type: Improvement
Reporter: Justin Bertram
Assignee: Justin Bertram








[jira] [Commented] (ARTEMIS-3953) project compile failed on mac

2022-08-23 Thread Justin Bertram (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17583689#comment-17583689
 ] 

Justin Bertram commented on ARTEMIS-3953:
-

Given this error message:

bq. The environment variable JAVA_HOME is not correctly set.

I believe this is an environmental issue.

> project compile failed on mac 
> --
>
> Key: ARTEMIS-3953
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3953
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.25.0
>Reporter: gongping.zhu
>Priority: Major
>
> When I git clone the project and import it into IDEA, it cannot be compiled 
> successfully with Maven.
>  
> The element below needs to be added to each maven-javadoc-plugin configuration:
> ${java.home}/bin/javadoc
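
The element the reporter refers to is the plugin's javadocExecutable parameter; 
the pom.xml fragment would look roughly like this (a sketch; plugin version 
omitted):

{code:xml}
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-javadoc-plugin</artifactId>
  <configuration>
    <!-- resolve javadoc relative to the JDK running the build -->
    <javadocExecutable>${java.home}/bin/javadoc</javadocExecutable>
  </configuration>
</plugin>
{code}

As the comment above notes, though, pointing JAVA_HOME at a full JDK is the 
usual fix.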





[jira] [Resolved] (ARTEMIS-3953) project compile failed on mac

2022-08-23 Thread Justin Bertram (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Justin Bertram resolved ARTEMIS-3953.
-
Resolution: Not A Bug

> project compile failed on mac 
> --
>
> Key: ARTEMIS-3953
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3953
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.25.0
>Reporter: gongping.zhu
>Priority: Major
>
> When I git clone the project and import it into IDEA, it cannot be compiled 
> successfully with Maven.
>  
> The element below needs to be added to each maven-javadoc-plugin configuration:
> ${java.home}/bin/javadoc





[jira] [Commented] (ARTEMIS-3949) Internally synchronize methods in ClientSession implementations

2022-08-23 Thread Peter Machon (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17583639#comment-17583639
 ] 

Peter Machon commented on ARTEMIS-3949:
---

First of all, thanks for all the comments and suggestions.

Plus, I've added an EDIT to the original question ... maybe that's not common 
here?!

> Internally synchronize methods in ClientSession implementations
> ---
>
> Key: ARTEMIS-3949
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3949
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Affects Versions: 2.24.0
>Reporter: Peter Machon
>Priority: Major
>
> {{ClientSessionImpl}} has two internal functions i.e. {{startCall}} and 
> {{{}endCall{}}}. These function count concurrent access and throw in case of 
> concurrent access.
> They are used e.g. in {{ClientProducerImpl#doSend}} method and in the 
> {{ClientSessionImpl#acknowledge}} method.
> This forces user code to synchronize the use of the session object. That is a 
> pain for two reasons:
>  # From a user perspective it is not even clear, which methods are internally 
> counting concurrent access. E.g. the {{doSend}} method does not even belong 
> to the session.
>  # The session object is not accessible from the user code at any time. E.g. 
> the {{ClientMessageImpl}} internally uses the {{{}ClientSession{}}}'s 
> {{acknowledge}} method. From user code it is not necessarily clear which 
> session the {{ClientMessage}} belongs to. Thus, it would require user code to 
> e.g. implement their own message store just to be able to synchronize the 
> right session.
> Solution:
> The {{ClientSessionImpl}} and all other internal objects like 
> {{{}ClientProducerImpl{}}}, {{{}ClientMessageImpl{}}}, and similar have full 
> access and knowledge about their synchronization needs. I thus suggest to 
> implement synchronization where needed instead of leaving the user alone with 
> this issue, where the solution actually means to reimplement a lot of 
> functionality of the client.
> e.g.
> {code:java}
> startCall();
> try {
>sessionContext.sendACK(false, blockOnAcknowledge, consumer, message);
> } finally {
>endCall();
> }{code}
>  
> could be replaced with something like
> {code:java}
> synchronized(this) {
>sessionContext.sendACK(false, blockOnAcknowledge, consumer, message);
> }{code}
>  
> *EDIT:*
> Clicking through the client code, I realized that there actually is 
> synchronization on the send method in {{{}ChannelImpl{}}}:
> {code:java}
>// This must never called by more than one thread concurrently
>private boolean send(final Packet packet, final int reconnectID, final 
> boolean flush, final boolean batch) {
>   if (invokeInterceptors(packet, interceptors, connection) != null) {
>  return false;
>   }
>   synchronized (sendLock) {
>   ...
>   }
> }
> {code}
> Even though, the comment explicitly says not to call this message 
> concurrently, there is a synchronization block enclosing all the logic of the 
> function.
> Might the comment be deprecated and the concurrency warning thus too? Do I 
> miss something?





[jira] [Closed] (ARTEMIS-3949) Internally synchronize methods in ClientSession implementations

2022-08-23 Thread Robbie Gemmell (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robbie Gemmell closed ARTEMIS-3949.
---
Resolution: Information Provided

I have added some more detail to the javadoc via ARTEMIS-3954 around the 
session+children thread model and the related effect of setting a handle. 
Closing this one following the previous comments around it being as 
intended/expected.


> Internally synchronize methods in ClientSession implementations
> ---
>
> Key: ARTEMIS-3949
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3949
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Affects Versions: 2.24.0
>Reporter: Peter Machon
>Priority: Major
>
> {{ClientSessionImpl}} has two internal functions i.e. {{startCall}} and 
> {{{}endCall{}}}. These function count concurrent access and throw in case of 
> concurrent access.
> They are used e.g. in {{ClientProducerImpl#doSend}} method and in the 
> {{ClientSessionImpl#acknowledge}} method.
> This forces user code to synchronize the use of the session object. That is a 
> pain for two reasons:
>  # From a user perspective it is not even clear, which methods are internally 
> counting concurrent access. E.g. the {{doSend}} method does not even belong 
> to the session.
>  # The session object is not accessible from the user code at any time. E.g. 
> the {{ClientMessageImpl}} internally uses the {{{}ClientSession{}}}'s 
> {{acknowledge}} method. From user code it is not necessarily clear which 
> session the {{ClientMessage}} belongs to. Thus, it would require user code to 
> e.g. implement their own message store just to be able to synchronize the 
> right session.
> Solution:
> The {{ClientSessionImpl}} and all other internal objects like 
> {{{}ClientProducerImpl{}}}, {{{}ClientMessageImpl{}}}, and similar have full 
> access and knowledge about their synchronization needs. I thus suggest to 
> implement synchronization where needed instead of leaving the user alone with 
> this issue, where the solution actually means to reimplement a lot of 
> functionality of the client.
> e.g.
> {code:java}
> startCall();
> try {
>sessionContext.sendACK(false, blockOnAcknowledge, consumer, message);
> } finally {
>endCall();
> }{code}
>  
> could be replaced with something like
> {code:java}
> synchronized(this) {
>sessionContext.sendACK(false, blockOnAcknowledge, consumer, message);
> }{code}
>  
> *EDIT:*
> Clicking through the client code, I realized that there actually is 
> synchronization on the send method in {{{}ChannelImpl{}}}:
> {code:java}
>// This must never called by more than one thread concurrently
>private boolean send(final Packet packet, final int reconnectID, final 
> boolean flush, final boolean batch) {
>   if (invokeInterceptors(packet, interceptors, connection) != null) {
>  return false;
>   }
>   synchronized (sendLock) {
>   ...
>   }
> }
> {code}
> Even though the comment explicitly says not to call this method 
> concurrently, there is a synchronization block enclosing all the logic of 
> the function.
> Might the comment be outdated, and the concurrency warning with it? Am I 
> missing something?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (ARTEMIS-3954) add some detail to core-client thread model javadoc note

2022-08-23 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17583577#comment-17583577
 ] 

ASF subversion and git services commented on ARTEMIS-3954:
--

Commit f6bca09afaca1fd1ae97ca520784945dbf63bdd5 in activemq-artemis's branch 
refs/heads/main from Robbie Gemmell
[ https://gitbox.apache.org/repos/asf?p=activemq-artemis.git;h=f6bca09afa ]

ARTEMIS-3954: add more detail to core-client javadoc around the session and 
children thread model being like the JMS client


> add some detail to core-client thread model javadoc note
> 
>
> Key: ARTEMIS-3954
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3954
> Project: ActiveMQ Artemis
>  Issue Type: Task
>  Components: API
>Affects Versions: 2.24.0
>Reporter: Robbie Gemmell
>Priority: Minor
>
> Add some more detail to the core-client thread model Javadoc note, 
> elaborating that its session etc. are single threaded, i.e. the model is 
> basically the same as that of the JMS client it underpins. Follows discussion 
> on ARTEMIS-3949.





[jira] [Resolved] (ARTEMIS-3954) add some detail to core-client thread model javadoc note

2022-08-23 Thread Robbie Gemmell (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robbie Gemmell resolved ARTEMIS-3954.
-
Fix Version/s: 2.25.0
 Assignee: Robbie Gemmell
   Resolution: Fixed

> add some detail to core-client thread model javadoc note
> 
>
> Key: ARTEMIS-3954
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3954
> Project: ActiveMQ Artemis
>  Issue Type: Task
>  Components: API
>Affects Versions: 2.24.0
>Reporter: Robbie Gemmell
>Assignee: Robbie Gemmell
>Priority: Minor
> Fix For: 2.25.0
>
>
> Add some more detail to the core-client thread model Javadoc note, 
> elaborating that its session etc. are single threaded, i.e. the model is 
> basically the same as that of the JMS client it underpins. Follows discussion 
> on ARTEMIS-3949.





[jira] [Created] (ARTEMIS-3954) add some detail to core-client thread model javadoc note

2022-08-23 Thread Robbie Gemmell (Jira)
Robbie Gemmell created ARTEMIS-3954:
---

 Summary: add some detail to core-client thread model javadoc note
 Key: ARTEMIS-3954
 URL: https://issues.apache.org/jira/browse/ARTEMIS-3954
 Project: ActiveMQ Artemis
  Issue Type: Task
  Components: API
Affects Versions: 2.24.0
Reporter: Robbie Gemmell


Add some more detail to the core-client thread model Javadoc note, elaborating 
that its session etc. are single threaded, i.e. the model is basically the 
same as that of the JMS client it underpins. Follows discussion on 
ARTEMIS-3949.





[jira] [Commented] (ARTEMIS-3949) Internally synchronize methods in ClientSession implementations

2022-08-23 Thread Robbie Gemmell (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17583534#comment-17583534
 ] 

Robbie Gemmell commented on ARTEMIS-3949:
-

{quote}
But following your description, having asynchronous handlers, some handlers 
might fail and messages would still be acknowledged, if any message that 
arrived later calls acknowledge.
Did I get that right? 
{quote}
Close, but not exactly. The call acknowledges the entire session's delivered 
messages; there is no ordering / sequence consideration at all. It isn't 'ack 
messages arriving before this one', it's 'all delivered messages'. Any call to 
acknowledge will ack *all* messages delivered to that point, regardless of the 
sequence of arrival or which message you call the operation on. Again, this is 
how JMS mandates CLIENT_ACK mode behaves.

Since you are meant to be using the handler [thread] itself to handle the 
messages, and indeed the session, this normally isn't so surprising, since it 
is typical to handle and acknowledge messages in the order they arrive, and 
there is only one handler thread delivering on the session. But if you are 
passing your messages off from the handler, entirely against the expected 
thread model, and processing them in other threads, then clearly you can't 
just call acknowledge (in violation of the thread model) without coordination, 
since if you did you would never have _any_ idea what you were actually 
acknowledging. You should really be coordinating things back to the handler to 
do the ack, e.g. via a blocking work queue; or using multiple 
sessions+handlers so you don't need to coordinate (each session having its own 
delivery thread); or not using the handler API at all and coordinating worker 
threads while using the consumer.receive() APIs. 
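The "coordinate things back to the handler" option mentioned above could look roughly like this. It is a minimal sketch with placeholder types: `Message`, `ackQueue`, and the `acked` list are illustrative stand-ins, not Core client API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of funnelling acknowledgements back to the single handler
// (session) thread through a blocking queue. Illustrative names only;
// this is not Core client API.
public class AckCoordination {
    static class Message {
        final int id;
        Message(int id) { this.id = id; }
    }

    public static List<Integer> run() throws InterruptedException {
        BlockingQueue<Message> ackQueue = new LinkedBlockingQueue<>();
        List<Integer> acked = new ArrayList<>();

        // Worker thread: does the heavy processing off the session thread,
        // then hands the finished message back instead of acking itself.
        Thread worker = new Thread(() -> {
            Message m = new Message(1);
            // ... processing happens here ...
            ackQueue.add(m);
        });
        worker.start();
        worker.join();

        // Handler (session) thread: the only thread that ever "acks",
        // so the session's single-threaded model is respected.
        Message done = ackQueue.take();
        acked.add(done.id);  // stands in for message.acknowledge()
        return acked;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run());  // prints [1]
    }
}
```

The design choice here is simply that only one thread ever touches the session, which is the property the thread model requires.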

{quote}
Does the individualAcknowledge() method definitely only acknowledge this one 
message or is there also something else to know?
{quote}
I've never used the Core client directly, but given the method's existence, 
and its use for implementing the 'individual ack' session mode extension in 
the JMS client, I would expect it does exactly what it says on the tin.

> Internally synchronize methods in ClientSession implementations
> ---
>
> Key: ARTEMIS-3949
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3949
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Affects Versions: 2.24.0
>Reporter: Peter Machon
>Priority: Major
>
> {{ClientSessionImpl}} has two internal methods, {{startCall}} and 
> {{{}endCall{}}}, which count concurrent access and throw when concurrent 
> access is detected.
> They are used e.g. in {{ClientProducerImpl#doSend}} method and in the 
> {{ClientSessionImpl#acknowledge}} method.
> This forces user code to synchronize the use of the session object. That is a 
> pain for two reasons:
>  # From a user perspective it is not even clear which methods are internally 
> counting concurrent access. E.g. the {{doSend}} method does not even belong 
> to the session.
>  # The session object is not accessible from user code at all times. E.g. 
> the {{ClientMessageImpl}} internally uses the {{{}ClientSession{}}}'s 
> {{acknowledge}} method. From user code it is not necessarily clear which 
> session the {{ClientMessage}} belongs to. Thus, it would require user code to 
> e.g. implement their own message store just to be able to synchronize the 
> right session.
> Solution:
> The {{ClientSessionImpl}} and all other internal objects like 
> {{{}ClientProducerImpl{}}}, {{{}ClientMessageImpl{}}}, and similar have full 
> access and knowledge about their synchronization needs. I thus suggest 
> implementing synchronization where needed instead of leaving the user alone 
> with this issue, where the workaround effectively means reimplementing a lot 
> of the client's functionality.
> e.g.
> {code:java}
> startCall();
> try {
>sessionContext.sendACK(false, blockOnAcknowledge, consumer, message);
> } finally {
>endCall();
> }{code}
>  
> could be replaced with something like
> {code:java}
> synchronized(this) {
>sessionContext.sendACK(false, blockOnAcknowledge, consumer, message);
> }{code}
>  
> *EDIT:*
> Clicking through the client code, I realized that there actually is 
> synchronization on the send method in {{{}ChannelImpl{}}}:
> {code:java}
>// This must never called by more than one thread concurrently
>private boolean send(final Packet packet, final int reconnectID, final 
> boolean flush, final boolean batch) {
>   if (invokeInterceptors(packet, interceptors, connection) != null) {
>  return false;
>   }
>   synchronized (sendLock) {
>   ...
>   }
> }
> {code}
> Even though the comment explicitly says not to call this method 
> concurrently, there is a synchronization block 

[jira] [Updated] (ARTEMIS-3949) Internally synchronize methods in ClientSession implementations

2022-08-23 Thread Peter Machon (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Machon updated ARTEMIS-3949:
--
Description: 
{{ClientSessionImpl}} has two internal methods, {{startCall}} and 
{{{}endCall{}}}, which count concurrent access and throw when concurrent 
access is detected.
They are used e.g. in {{ClientProducerImpl#doSend}} method and in the 
{{ClientSessionImpl#acknowledge}} method.

This forces user code to synchronize the use of the session object. That is a 
pain for two reasons:
 # From a user perspective it is not even clear which methods are internally 
counting concurrent access. E.g. the {{doSend}} method does not even belong to 
the session.
 # The session object is not accessible from user code at all times. E.g. 
the {{ClientMessageImpl}} internally uses the {{{}ClientSession{}}}'s 
{{acknowledge}} method. From user code it is not necessarily clear which 
session the {{ClientMessage}} belongs to. Thus, it would require user code to 
e.g. implement their own message store just to be able to synchronize the right 
session.

Solution:

The {{ClientSessionImpl}} and all other internal objects like 
{{{}ClientProducerImpl{}}}, {{{}ClientMessageImpl{}}}, and similar have full 
access and knowledge about their synchronization needs. I thus suggest 
implementing synchronization where needed instead of leaving the user alone 
with this issue, where the workaround effectively means reimplementing a lot 
of the client's functionality.

e.g.
{code:java}
startCall();
try {
   sessionContext.sendACK(false, blockOnAcknowledge, consumer, message);
} finally {
   endCall();
}{code}
 
could be replaced with something like
{code:java}
synchronized(this) {
   sessionContext.sendACK(false, blockOnAcknowledge, consumer, message);
}{code}
 

*EDIT:*

Clicking through the client code, I realized that there actually is 
synchronization on the send method in {{{}ChannelImpl{}}}:
{code:java}
   // This must never called by more than one thread concurrently
   private boolean send(final Packet packet, final int reconnectID, final 
boolean flush, final boolean batch) {
  if (invokeInterceptors(packet, interceptors, connection) != null) {
 return false;
  }

  synchronized (sendLock) {
  ...
  }
}
{code}
Even though the comment explicitly says not to call this method concurrently, 
there is a synchronization block enclosing all the logic of the function.

Might the comment be outdated, and the concurrency warning with it? Am I 
missing something?

  was:
{{ClientSessionImpl}} has two internal functions i.e. {{startCall}} and 
{{endCall}}. These function count concurrent access and throw in case of 
concurrent access.
They are used e.g. in {{ClientProducerImpl#doSend}} method and in the 
{{ClientSessionImpl#acknowledge}} method.

This forces user code to synchronize the use of the session object. That is a 
pain for two reasons:
 # From a user perspective it is not even clear, which methods are internally 
counting concurrent access. E.g. the {{doSend}} method does not even belong to 
the session.
 # The session object is not accessible from the user code at any time. E.g. 
the {{ClientMessageImpl}} internally uses the {{ClientSession}}'s 
{{acknowledge}} method. From user code it is not necessarily clear which 
session the {{ClientMessage}} belongs to. Thus, it would require user code to 
e.g. implement their own message store just to be able to synchronize the right 
session.

Solution:

The {{ClientSessionImpl}} and all other internal objects like 
{{{}ClientProducerImpl{}}}, {{{}ClientMessageImpl{}}}, and similar have full 
access and knowledge about their synchronization needs. I thus suggest to 
implement synchronization where needed instead of leaving the user alone with 
this issue, where the solution actually means to reimplement a lot of 
functionality of the client.

e.g.
{code:java}
startCall();
try {
   sessionContext.sendACK(false, blockOnAcknowledge, consumer, message);
} finally {
   endCall();
}{code}
 
could be replaced with something like
{code:java}
synchronized(this) {
   sessionContext.sendACK(false, blockOnAcknowledge, consumer, message);
}{code}
 


> Internally synchronize methods in ClientSession implementations
> ---
>
> Key: ARTEMIS-3949
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3949
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Affects Versions: 2.24.0
>Reporter: Peter Machon
>Priority: Major
>
> {{ClientSessionImpl}} has two internal methods, {{startCall}} and 
> {{{}endCall{}}}, which count concurrent access and throw when concurrent 
> access is detected.
> They are used e.g. in {{ClientProducerImpl#doSend}} method and in the 
> {{ClientSessionImpl#acknowledge}} method.
> This 

[jira] [Commented] (AMQ-8133) Consider adding IBM Z (s390x) into Apache ActiveMQ Jenkins CI

2022-08-23 Thread snehal (Jira)


[ 
https://issues.apache.org/jira/browse/AMQ-8133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17583514#comment-17583514
 ] 

snehal commented on AMQ-8133:
-

[~mattrpav] 
Hi,
Hope you are doing well. I am following up on adding s390x Jenkins CI support.
Let us know if anything more is needed from our end.

> Consider adding IBM Z (s390x) into Apache ActiveMQ Jenkins CI
> -
>
> Key: AMQ-8133
> URL: https://issues.apache.org/jira/browse/AMQ-8133
> Project: ActiveMQ
>  Issue Type: Improvement
>Affects Versions: 5.16.1
>Reporter: Ruixin (Peter) Bao
>Assignee: Matt Pavlovich
>Priority: Minor
>  Labels: None
>
> Hi,
> I would like to add support for IBM Z (s390x) into Apache ActiveMQ Jenkins 
> CI. Currently I can use the 5.16.1 tar.gz file from 
> [http://archive.apache.org/dist/activemq/] and Java OpenJDK 8 to successfully 
> run the web console. Wondering what process I should follow to add CI 
> support for s390x? Happy to discuss and help with the process. Thanks.





[jira] [Assigned] (ARTEMIS-3950) dont prepare unused debug detail during xml data import processing

2022-08-23 Thread Robbie Gemmell (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robbie Gemmell reassigned ARTEMIS-3950:
---

Assignee: Robbie Gemmell

> dont prepare unused debug detail during xml data import processing
> --
>
> Key: ARTEMIS-3950
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3950
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Affects Versions: 2.24.0
>Reporter: Robbie Gemmell
>Assignee: Robbie Gemmell
>Priority: Trivial
> Fix For: 2.25.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The xml data import command for the cli prepares some details in a 
> StringBuilder for use in debug logging. Though it does avoid calling 
> toString() on the builder if the logger isn't enabled for debug output, it 
> would be better not to prepare the detail at all, since doing so takes 
> various steps, including toString() of message metadata.
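The improvement being described, i.e. guarding the whole detail-building step rather than only the final toString() on the builder, can be sketched like this. The names are illustrative stand-ins, not the actual Artemis CLI code.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Minimal sketch of the pattern described above: skip building the debug
// detail entirely unless debug logging is enabled. Illustrative names,
// not the actual Artemis CLI code.
public class LazyDebugDetail {
    public static final AtomicInteger toStringCalls = new AtomicInteger();

    // Stand-in for message metadata with an expensive toString().
    public static class Metadata {
        @Override
        public String toString() {
            toStringCalls.incrementAndGet();
            return "metadata";
        }
    }

    public static String log(boolean debugEnabled, Metadata md) {
        if (!debugEnabled) {
            return null;  // no StringBuilder, no toString(), no work at all
        }
        return "imported message: " + md;  // toString() cost paid only here
    }

    public static void main(String[] args) {
        log(false, new Metadata());
        System.out.println(toStringCalls.get());  // prints 0
        log(true, new Metadata());
        System.out.println(toStringCalls.get());  // prints 1
    }
}
```

The counter makes the saving visible: with debug disabled, the metadata's toString() is never invoked at all.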





[jira] [Resolved] (ARTEMIS-3950) dont prepare unused debug detail during xml data import processing

2022-08-23 Thread Robbie Gemmell (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robbie Gemmell resolved ARTEMIS-3950.
-
Resolution: Fixed

> dont prepare unused debug detail during xml data import processing
> --
>
> Key: ARTEMIS-3950
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3950
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Affects Versions: 2.24.0
>Reporter: Robbie Gemmell
>Priority: Trivial
> Fix For: 2.25.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The xml data import command for the cli prepares some details in a 
> StringBuilder for use in debug logging. Though it does avoid calling 
> toString() on the builder if the logger isn't enabled for debug output, it 
> would be better not to prepare the detail at all, since doing so takes 
> various steps, including toString() of message metadata.





[jira] [Work logged] (ARTEMIS-3950) dont prepare unused debug detail during xml data import processing

2022-08-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3950?focusedWorklogId=802793=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-802793
 ]

ASF GitHub Bot logged work on ARTEMIS-3950:
---

Author: ASF GitHub Bot
Created on: 23/Aug/22 10:27
Start Date: 23/Aug/22 10:27
Worklog Time Spent: 10m 
  Work Description: asfgit merged PR #4184:
URL: https://github.com/apache/activemq-artemis/pull/4184




Issue Time Tracking
---

Worklog Id: (was: 802793)
Time Spent: 20m  (was: 10m)

> dont prepare unused debug detail during xml data import processing
> --
>
> Key: ARTEMIS-3950
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3950
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Affects Versions: 2.24.0
>Reporter: Robbie Gemmell
>Priority: Trivial
> Fix For: 2.25.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The xml data import command for the cli prepares some details in a 
> StringBuilder for use in debug logging. Though it does avoid calling 
> toString() on the builder if the logger isn't enabled for debug output, it 
> would be better not to prepare the detail at all, since doing so takes 
> various steps, including toString() of message metadata.





[jira] [Commented] (ARTEMIS-3950) dont prepare unused debug detail during xml data import processing

2022-08-23 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17583505#comment-17583505
 ] 

ASF subversion and git services commented on ARTEMIS-3950:
--

Commit 734e7f4ae52215566f4d51b2be2f62f810f63ab0 in activemq-artemis's branch 
refs/heads/main from Robbie Gemmell
[ https://gitbox.apache.org/repos/asf?p=activemq-artemis.git;h=734e7f4ae5 ]

ARTEMIS-3950: dont prepare unused debug detail during xml data import processing


> dont prepare unused debug detail during xml data import processing
> --
>
> Key: ARTEMIS-3950
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3950
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Affects Versions: 2.24.0
>Reporter: Robbie Gemmell
>Priority: Trivial
> Fix For: 2.25.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The xml data import command for the cli prepares some details in a 
> StringBuilder for use in debug logging. Though it does avoid calling 
> toString() on the builder if the logger isn't enabled for debug output, it 
> would be better not to prepare the detail at all, since doing so takes 
> various steps, including toString() of message metadata.





[jira] [Commented] (ARTEMIS-3953) project compile failed on mac

2022-08-23 Thread Robbie Gemmell (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17583427#comment-17583427
 ] 

Robbie Gemmell commented on ARTEMIS-3953:
-

What was your JAVA_HOME set to, given that's what it was complaining about?

Setting the javadocExecutable value as you did does not seem appropriate, 
since it is platform-specific and shouldn't be needed. As Justin said, there 
are also other folks developing on Macs, which would make me expect this to 
work overall, so I'd wonder if that points to something different about your 
particular env, though it is interesting that updating Maven resolved it.

> project compile failed on mac 
> --
>
> Key: ARTEMIS-3953
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3953
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.25.0
>Reporter: gongping.zhu
>Priority: Major
>
> When I git clone the project and import it into IDEA, it cannot be compiled 
> successfully with Maven.
>  
> It needs the element below added to each maven-javadoc-plugin configuration:
> ${java.home}/bin/javadoc





[jira] [Comment Edited] (ARTEMIS-3953) project compile failed on mac

2022-08-23 Thread gongping.zhu (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17583391#comment-17583391
 ] 

gongping.zhu edited comment on ARTEMIS-3953 at 8/23/22 7:20 AM:


Outside IDEA it still can't compile.

The error is as below; the Maven version is Apache Maven 3.6.2:

[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-javadoc-plugin:3.0.1:jar (fallback-javadoc-jar) 
on project artemis-website: MavenReportException: Error while generating 
Javadoc: Unable to find javadoc command: The environment variable JAVA_HOME is 
not correctly set.

After upgrading the Maven version to 3.8.6 the problem is solved.


was (Author: JIRAUSER293605):
when outside IDEA it still can't compile

 

and the error as below,the version is Apache Maven 3.6.2

 

[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-javadoc-plugin:3.0.1:jar (fallback-javadoc-jar) 
on project artemis-website: MavenReportException: Error while generating 
Javadoc: Unable to find javadoc command: The environment variable JAVA_HOME is 
not correctly set.

 

after upgrade maven version to 3.8.6 problem still exist

> project compile failed on mac 
> --
>
> Key: ARTEMIS-3953
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3953
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.25.0
>Reporter: gongping.zhu
>Priority: Major
>
> When I git clone the project and import it into IDEA, it cannot be compiled 
> successfully with Maven.
>  
> It needs the element below added to each maven-javadoc-plugin configuration:
> ${java.home}/bin/javadoc





[jira] [Comment Edited] (ARTEMIS-3953) project compile failed on mac

2022-08-23 Thread gongping.zhu (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17583391#comment-17583391
 ] 

gongping.zhu edited comment on ARTEMIS-3953 at 8/23/22 7:01 AM:


Outside IDEA it still can't compile.

The error is as below; the Maven version is Apache Maven 3.6.2:

[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-javadoc-plugin:3.0.1:jar (fallback-javadoc-jar) 
on project artemis-website: MavenReportException: Error while generating 
Javadoc: Unable to find javadoc command: The environment variable JAVA_HOME is 
not correctly set.

After upgrading the Maven version to 3.8.6 the problem still exists.


was (Author: JIRAUSER293605):
when outside IDEA it still can't compile

 

and the error as below,the version is Apache Maven 3.6.2

 

[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-javadoc-plugin:3.0.1:jar (fallback-javadoc-jar) 
on project artemis-website: MavenReportException: Error while generating 
Javadoc: Unable to find javadoc command: The environment variable JAVA_HOME is 
not correctly set.

 

compile passed after upgrade maven version to 3.8.6

> project compile failed on mac 
> --
>
> Key: ARTEMIS-3953
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3953
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.25.0
>Reporter: gongping.zhu
>Priority: Major
>
> When I git clone the project and import it into IDEA, it cannot be compiled 
> successfully with Maven.
>  
> It needs the element below added to each maven-javadoc-plugin configuration:
> ${java.home}/bin/javadoc





[jira] [Comment Edited] (ARTEMIS-3953) project compile failed on mac

2022-08-23 Thread gongping.zhu (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17583391#comment-17583391
 ] 

gongping.zhu edited comment on ARTEMIS-3953 at 8/23/22 6:59 AM:


Outside IDEA it still can't compile.

The error is as below; the Maven version is Apache Maven 3.6.2:

[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-javadoc-plugin:3.0.1:jar (fallback-javadoc-jar) 
on project artemis-website: MavenReportException: Error while generating 
Javadoc: Unable to find javadoc command: The environment variable JAVA_HOME is 
not correctly set.

Compilation passed after upgrading the Maven version to 3.8.6.


was (Author: JIRAUSER293605):
when outside IDEA it still can't compile

 

and the error as below,the version is Apache Maven 3.6.2

 

[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-javadoc-plugin:3.0.1:jar (fallback-javadoc-jar) 
on project artemis-website: MavenReportException: Error while generating 
Javadoc: Unable to find javadoc command: The environment variable JAVA_HOME is 
not correctly set.

 

 

> project compile failed on mac 
> --
>
> Key: ARTEMIS-3953
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3953
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.25.0
>Reporter: gongping.zhu
>Priority: Major
>
> When I git clone the project and import it into IDEA, it cannot be compiled 
> successfully with Maven.
>  
> It needs the element below added to each maven-javadoc-plugin configuration:
> ${java.home}/bin/javadoc





[jira] [Comment Edited] (ARTEMIS-3953) project compile failed on mac

2022-08-23 Thread gongping.zhu (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17583391#comment-17583391
 ] 

gongping.zhu edited comment on ARTEMIS-3953 at 8/23/22 6:28 AM:


Outside IDEA it still can't compile.

The error is as below; the Maven version is Apache Maven 3.6.2:

[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-javadoc-plugin:3.0.1:jar (fallback-javadoc-jar) 
on project artemis-website: MavenReportException: Error while generating 
Javadoc: Unable to find javadoc command: The environment variable JAVA_HOME is 
not correctly set.

 

 


was (Author: JIRAUSER293605):
when outside IDEA it still can't 

the version is Apache Maven 3.6.2

> project compile failed on mac 
> --
>
> Key: ARTEMIS-3953
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3953
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.25.0
>Reporter: gongping.zhu
>Priority: Major
>
> When I git clone the project and import it into IDEA, it cannot be compiled 
> successfully with Maven.
>  
> It needs the element below added to each maven-javadoc-plugin configuration:
> ${java.home}/bin/javadoc





[jira] [Commented] (ARTEMIS-3953) project compile failed on mac

2022-08-23 Thread gongping.zhu (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17583391#comment-17583391
 ] 

gongping.zhu commented on ARTEMIS-3953:
---

Outside IDEA it still can't compile.

> project compile failed on mac 
> --
>
> Key: ARTEMIS-3953
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3953
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.25.0
>Reporter: gongping.zhu
>Priority: Major
>
> When I git clone the project and import it into IDEA, it cannot be compiled 
> successfully with Maven.
>  
> It needs the element below added to each maven-javadoc-plugin configuration:
> ${java.home}/bin/javadoc





[jira] [Commented] (ARTEMIS-3953) project compile failed on mac

2022-08-23 Thread gongping.zhu (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17583390#comment-17583390
 ] 

gongping.zhu commented on ARTEMIS-3953:
---

Outside IDEA it still can't compile.

> project compile failed on mac 
> --
>
> Key: ARTEMIS-3953
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3953
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.25.0
>Reporter: gongping.zhu
>Priority: Major
>
> When I git clone the project and import it into IDEA, it cannot be compiled 
> successfully with Maven.
>  
> It needs the element below added to each maven-javadoc-plugin configuration:
> ${java.home}/bin/javadoc





[jira] [Comment Edited] (ARTEMIS-3953) project compile failed on mac

2022-08-23 Thread gongping.zhu (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17583391#comment-17583391
 ] 

gongping.zhu edited comment on ARTEMIS-3953 at 8/23/22 6:25 AM:


Outside IDEA it still can't compile.

The version is Apache Maven 3.6.2.


was (Author: JIRAUSER293605):
when outside IDEA it still can't 

> project compile failed on mac 
> --
>
> Key: ARTEMIS-3953
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3953
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.25.0
>Reporter: gongping.zhu
>Priority: Major
>
> When I git clone the project and import it into IDEA, it cannot be compiled 
> successfully with Maven.
>  
> It needs the element below added to each maven-javadoc-plugin configuration:
> ${java.home}/bin/javadoc


