[jira] [Work logged] (ARTEMIS-2859) Strange Address Sizes on clustered topics.

2020-08-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2859?focusedWorklogId=467081&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-467081
 ]

ASF GitHub Bot logged work on ARTEMIS-2859:
---

Author: ASF GitHub Bot
Created on: 06/Aug/20 05:40
Start Date: 06/Aug/20 05:40
Worklog Time Spent: 10m 
  Work Description: swerner0 commented on pull request #3238:
URL: https://github.com/apache/activemq-artemis/pull/3238#issuecomment-669713344


   We were initially seeing an issue with clustered topics, but then saw 
negative address size warnings on a single broker when we had a consumer on the 
full topic and another on a wildcard, which looked strange. Also relates to 
ARTEMIS-2768.
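
Below is a minimal, hypothetical JMS sketch of the scenario described in that 
comment (one consumer on the full topic, one on a matching wildcard, and a slow 
publisher), for orientation only; the broker URL and topic names are 
illustrative and are not taken from the attached TestClusteredTopic.java.

{code:java}
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Session;

import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class TopicWildcardSketch {
    public static void main(String[] args) throws Exception {
        // Illustrative URL; the real reproduction uses the attached broker.xml cluster.
        ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = cf.createConnection();
        try {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

            // One consumer on the full topic name and one on a wildcard that also matches it.
            MessageConsumer full = session.createConsumer(session.createTopic("news.europe"));
            MessageConsumer wildcard = session.createConsumer(session.createTopic("news.#"));

            // A publisher sending a message every few seconds, as in the report.
            MessageProducer producer = session.createProducer(session.createTopic("news.europe"));
            for (int i = 0; i < 10; i++) {
                producer.send(session.createTextMessage("message " + i));
                full.receive(1000);
                wildcard.receive(1000);
                Thread.sleep(3000); // watch the AddressSize attribute in JMX while this runs
            }
        } finally {
            connection.close();
        }
    }
}
{code}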
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 467081)
Time Spent: 20m  (was: 10m)

> Strange Address Sizes on clustered topics.
> --
>
> Key: ARTEMIS-2859
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2859
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.12.0, 2.14.0
> Environment: uname -a
> Linux tarek02 4.4.0-78-generic #99-Ubuntu SMP Thu Apr 27 15:29:09 UTC 2017 
> x86_64 x86_64 x86_64 GNU/Linux
> java version "1.8.0_251"
> Java(TM) SE Runtime Environment (build 1.8.0_251-b08)
> Java HotSpot(TM) 64-Bit Server VM (build 25.251-b08, mixed mode)
>Reporter: Tarek Hammoud
>Priority: Major
> Attachments: TestClusteredTopic.java, broker.xml, 
> image-2020-08-03-14-05-54-676.png, image-2020-08-03-14-05-54-720.png, 
> screenshot.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> !screenshot.png! Hello,
> We are seeing some strange AddressSizes in JMX for simple clustered topics. 
> The problem was observed on 2.12.0 in production but can also be reproduced 
> on 2.14.0. I set up a 3-node cluster (sample broker.xml attached). The test 
> program creates multiple clustered topic consumers. A publisher sends a 
> message every few seconds. The JMX console shows a strange address size on 
> one of the nodes. It is easy to reproduce with the attached test program, and 
> it seems to be fine with queues. 
> Thank you for your help in advance.[^TestClusteredTopic.java][^broker.xml]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (ARTEMIS-2859) Strange Address Sizes on clustered topics.

2020-08-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2859?focusedWorklogId=467079&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-467079
 ]

ASF GitHub Bot logged work on ARTEMIS-2859:
---

Author: ASF GitHub Bot
Created on: 06/Aug/20 05:28
Start Date: 06/Aug/20 05:28
Worklog Time Spent: 10m 
  Work Description: swerner0 opened a new pull request #3238:
URL: https://github.com/apache/activemq-artemis/pull/3238


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 467079)
Remaining Estimate: 0h
Time Spent: 10m

> Strange Address Sizes on clustered topics.
> --
>
> Key: ARTEMIS-2859
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2859
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.12.0, 2.14.0
> Environment: uname -a
> Linux tarek02 4.4.0-78-generic #99-Ubuntu SMP Thu Apr 27 15:29:09 UTC 2017 
> x86_64 x86_64 x86_64 GNU/Linux
> java version "1.8.0_251"
> Java(TM) SE Runtime Environment (build 1.8.0_251-b08)
> Java HotSpot(TM) 64-Bit Server VM (build 25.251-b08, mixed mode)
>Reporter: Tarek Hammoud
>Priority: Major
> Attachments: TestClusteredTopic.java, broker.xml, 
> image-2020-08-03-14-05-54-676.png, image-2020-08-03-14-05-54-720.png, 
> screenshot.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> !screenshot.png! Hello,
> We are seeing some strange AddressSizes in JMX for simple clustered topics. 
> The problem was observed on 2.12.0 in production but can also be reproduced 
> on 2.14.0. I set up a 3-node cluster (sample broker.xml attached). The test 
> program creates multiple clustered topic consumers. A publisher sends a 
> message every few seconds. The JMX console shows a strange address size on 
> one of the nodes. It is easy to reproduce with the attached test program, and 
> it seems to be fine with queues. 
> Thank you for your help in advance.[^TestClusteredTopic.java][^broker.xml]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (ARTEMIS-2868) Split Brain on Replication could "damage" the Topology, isolating the server

2020-08-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2868?focusedWorklogId=467062&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-467062
 ]

ASF GitHub Bot logged work on ARTEMIS-2868:
---

Author: ASF GitHub Bot
Created on: 06/Aug/20 03:36
Start Date: 06/Aug/20 03:36
Worklog Time Spent: 10m 
  Work Description: clebertsuconic commented on pull request #3232:
URL: https://github.com/apache/activemq-artemis/pull/3232#issuecomment-669663656


   I will reopen this when I settle tests.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 467062)
Time Spent: 40m  (was: 0.5h)

> Split Brain on Replication could "damage" the Topology, isolating the server
> 
>
> Key: ARTEMIS-2868
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2868
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.14.0
>Reporter: Clebert Suconic
>Priority: Major
> Fix For: 2.15.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The issue is that after a split brain, during a reconnect, the live broker 
> will send a topology update to the live, replacing the topology.
>  
> As a result, new replicas will not be able to reconnect after the split 
> brain, since the topology will return a wrong address, probably pointing the 
> backup towards itself.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (ARTEMIS-2868) Split Brain on Replication could "damage" the Topology, isolating the server

2020-08-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2868?focusedWorklogId=467061&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-467061
 ]

ASF GitHub Bot logged work on ARTEMIS-2868:
---

Author: ASF GitHub Bot
Created on: 06/Aug/20 03:36
Start Date: 06/Aug/20 03:36
Worklog Time Spent: 10m 
  Work Description: clebertsuconic closed pull request #3232:
URL: https://github.com/apache/activemq-artemis/pull/3232


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 467061)
Time Spent: 0.5h  (was: 20m)

> Split Brain on Replication could "damage" the Topology, isolating the server
> 
>
> Key: ARTEMIS-2868
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2868
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.14.0
>Reporter: Clebert Suconic
>Priority: Major
> Fix For: 2.15.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The issue is that after a split brain, during a reconnect, the live broker 
> will send a topology update to the live, replacing the topology.
>  
> As a result, new replicas will not be able to reconnect after the split 
> brain, since the topology will return a wrong address, probably pointing the 
> backup towards itself.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (ARTEMIS-2873) Configuration Managed Queues are being auto deleted but should not.

2020-08-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2873?focusedWorklogId=467015&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-467015
 ]

ASF GitHub Bot logged work on ARTEMIS-2873:
---

Author: ASF GitHub Bot
Created on: 05/Aug/20 23:47
Start Date: 05/Aug/20 23:47
Worklog Time Spent: 10m 
  Work Description: michaelandrepearce commented on pull request #3236:
URL: https://github.com/apache/activemq-artemis/pull/3236#issuecomment-669601946


   replaced by https://github.com/apache/activemq-artemis/pull/3237



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 467015)
Time Spent: 40m  (was: 0.5h)

> Configuration Managed Queues are being auto deleted but should not.
> ---
>
> Key: ARTEMIS-2873
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2873
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.14.0
>Reporter: Michael Andre Pearce
>Priority: Major
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Auto Delete queues and Auto Delete created queues are meant to allow auto 
> deletion of queues created by clients. Configuration managed queues should 
> not be auto deleted; these should be deleted only by removing them from 
> configuration. It seems there is a bug where configuration managed queues can 
> be auto deleted; ensure this is not the case.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (ARTEMIS-2873) Configuration Managed Queues are being auto deleted but should not.

2020-08-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2873?focusedWorklogId=467016&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-467016
 ]

ASF GitHub Bot logged work on ARTEMIS-2873:
---

Author: ASF GitHub Bot
Created on: 05/Aug/20 23:47
Start Date: 05/Aug/20 23:47
Worklog Time Spent: 10m 
  Work Description: michaelandrepearce edited a comment on pull request 
#3236:
URL: https://github.com/apache/activemq-artemis/pull/3236#issuecomment-669601946


   created the branch in the wrong place; replaced by 
https://github.com/apache/activemq-artemis/pull/3237



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 467016)
Time Spent: 50m  (was: 40m)

> Configuration Managed Queues are being auto deleted but should not.
> ---
>
> Key: ARTEMIS-2873
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2873
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.14.0
>Reporter: Michael Andre Pearce
>Priority: Major
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Auto Delete queues and Auto Delete created queues are meant to allow auto 
> deletion of queues created by clients. Configuration managed queues should 
> not be auto deleted; these should be deleted only by removing them from 
> configuration. It seems there is a bug where configuration managed queues can 
> be auto deleted; ensure this is not the case.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-2873) Configuration Managed Queues are being auto deleted but should not.

2020-08-05 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17171830#comment-17171830
 ] 

ASF subversion and git services commented on ARTEMIS-2873:
--

Commit e1594cb5f2961fda57b3459cc1f2c310f7c5253a in activemq-artemis's branch 
refs/heads/ARTEMIS-2873 from Michael Pearce
[ https://gitbox.apache.org/repos/asf?p=activemq-artemis.git;h=e1594cb ]

[ARTEMIS-2873] Ensure configuration managed queues are not auto deleted;
these should only be removed if removed in configuration.
Auto Delete Queues and Auto Delete Created Queues should only apply to NON 
configuration managed queues.


> Configuration Managed Queues are being auto deleted but should not.
> ---
>
> Key: ARTEMIS-2873
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2873
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.14.0
>Reporter: Michael Andre Pearce
>Priority: Major
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Auto Delete queues and Auto Delete created queues are meant to allow auto 
> deletion of queues created by clients. Configuration managed queues should 
> not be auto deleted; these should be deleted only by removing them from 
> configuration. It seems there is a bug where configuration managed queues can 
> be auto deleted; ensure this is not the case.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (ARTEMIS-2873) Configuration Managed Queues are being auto deleted but should not.

2020-08-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2873?focusedWorklogId=467013&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-467013
 ]

ASF GitHub Bot logged work on ARTEMIS-2873:
---

Author: ASF GitHub Bot
Created on: 05/Aug/20 23:30
Start Date: 05/Aug/20 23:30
Worklog Time Spent: 10m 
  Work Description: michaelandrepearce opened a new pull request #3237:
URL: https://github.com/apache/activemq-artemis/pull/3237


   …d these should only be removed if removed in configuration.
   
   Auto Delete Queues and Auto Delete Created Queues should only apply to NON 
configuration managed queues.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 467013)
Time Spent: 0.5h  (was: 20m)

> Configuration Managed Queues are being auto deleted but should not.
> ---
>
> Key: ARTEMIS-2873
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2873
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.14.0
>Reporter: Michael Andre Pearce
>Priority: Major
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Auto Delete queues and Auto Delete created queues are meant to allow auto 
> deletion of queues created by clients. Configuration managed queues should 
> not be auto deleted; these should be deleted only by removing them from 
> configuration. It seems there is a bug where configuration managed queues can 
> be auto deleted; ensure this is not the case.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (ARTEMIS-2873) Configuration Managed Queues are being auto deleted but should not.

2020-08-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2873?focusedWorklogId=467012&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-467012
 ]

ASF GitHub Bot logged work on ARTEMIS-2873:
---

Author: ASF GitHub Bot
Created on: 05/Aug/20 23:27
Start Date: 05/Aug/20 23:27
Worklog Time Spent: 10m 
  Work Description: michaelandrepearce closed pull request #3236:
URL: https://github.com/apache/activemq-artemis/pull/3236


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 467012)
Time Spent: 20m  (was: 10m)

> Configuration Managed Queues are being auto deleted but should not.
> ---
>
> Key: ARTEMIS-2873
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2873
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.14.0
>Reporter: Michael Andre Pearce
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Auto Delete queues and Auto Delete created queues are meant to allow auto 
> deletion of queues created by clients. Configuration managed queues should 
> not be auto deleted; these should be deleted only by removing them from 
> configuration. It seems there is a bug where configuration managed queues can 
> be auto deleted; ensure this is not the case.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-2873) Configuration Managed Queues are being auto deleted but should not.

2020-08-05 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17171822#comment-17171822
 ] 

ASF subversion and git services commented on ARTEMIS-2873:
--

Commit 667e3f7a0197e7836e05720975a2be6c4b2da1ea in activemq-artemis's branch 
refs/heads/ARTEMIS-2873 from Michael Pearce
[ https://gitbox.apache.org/repos/asf?p=activemq-artemis.git;h=667e3f7 ]

[ARTEMIS-2873] Ensure configuration managed queues are not auto deleted; these 
should only be removed if removed in configuration.

Auto Delete Queues and Auto Delete Created Queues should only apply to NON 
configuration managed queues.


> Configuration Managed Queues are being auto deleted but should not.
> ---
>
> Key: ARTEMIS-2873
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2873
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.14.0
>Reporter: Michael Andre Pearce
>Priority: Major
>
> Auto Delete queues and Auto Delete created queues are meant to allow auto 
> deletion of queues created by clients. Configuration managed queues should 
> not be auto deleted; these should be deleted only by removing them from 
> configuration. It seems there is a bug where configuration managed queues can 
> be auto deleted; ensure this is not the case.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (ARTEMIS-2873) Configuration Managed Queues are being auto deleted but should not.

2020-08-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2873?focusedWorklogId=467011&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-467011
 ]

ASF GitHub Bot logged work on ARTEMIS-2873:
---

Author: ASF GitHub Bot
Created on: 05/Aug/20 23:13
Start Date: 05/Aug/20 23:13
Worklog Time Spent: 10m 
  Work Description: michaelandrepearce opened a new pull request #3236:
URL: https://github.com/apache/activemq-artemis/pull/3236


   …d these should only be removed if removed in configuration.
   
   Auto Delete Queues and Auto Delete Created Queues should only apply to NON 
configuration managed queues.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 467011)
Remaining Estimate: 0h
Time Spent: 10m

> Configuration Managed Queues are being auto deleted but should not.
> ---
>
> Key: ARTEMIS-2873
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2873
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.14.0
>Reporter: Michael Andre Pearce
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Auto Delete queues and Auto Delete created queues are meant to allow auto 
> deletion of queues created by clients. Configuration managed queues should 
> not be auto deleted; these should be deleted only by removing them from 
> configuration. It seems there is a bug where configuration managed queues can 
> be auto deleted; ensure this is not the case.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (ARTEMIS-2873) Configuration Managed Queues are being auto deleted but should not.

2020-08-05 Thread Michael Andre Pearce (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Andre Pearce updated ARTEMIS-2873:
--
Summary: Configuration Managed Queues are being auto deleted but should 
not.  (was: Configuration Managed Queues are being auto deleted.)

> Configuration Managed Queues are being auto deleted but should not.
> ---
>
> Key: ARTEMIS-2873
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2873
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.14.0
>Reporter: Michael Andre Pearce
>Priority: Major
>
> Auto Delete queues and Auto Delete created queues are meant to allow auto 
> deletion of queues created by clients. Configuration managed queues should 
> not be auto deleted; these should be deleted only by removing them from 
> configuration. It seems there is a bug where configuration managed queues can 
> be auto deleted; ensure this is not the case.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (ARTEMIS-2873) Configuration Managed Queues are being auto deleted.

2020-08-05 Thread Michael Andre Pearce (Jira)
Michael Andre Pearce created ARTEMIS-2873:
-

 Summary: Configuration Managed Queues are being auto deleted.
 Key: ARTEMIS-2873
 URL: https://issues.apache.org/jira/browse/ARTEMIS-2873
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: Broker
Affects Versions: 2.14.0
Reporter: Michael Andre Pearce


Auto Delete queues and Auto Delete created queues are meant to allow auto 
deletion of queues created by clients. Configuration managed queues should not 
be auto deleted; these should be deleted only by removing them from 
configuration. It seems there is a bug where configuration managed queues can 
be auto deleted; ensure this is not the case.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (ARTEMIS-2868) Split Brain on Replication could "damage" the Topology, isolating the server

2020-08-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2868?focusedWorklogId=466973&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-466973
 ]

ASF GitHub Bot logged work on ARTEMIS-2868:
---

Author: ASF GitHub Bot
Created on: 05/Aug/20 20:39
Start Date: 05/Aug/20 20:39
Worklog Time Spent: 10m 
  Work Description: clebertsuconic commented on pull request #3232:
URL: https://github.com/apache/activemq-artemis/pull/3232#issuecomment-669494068


   Tests are showing the warning regularly. I need to do some tweaking before 
merging.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 466973)
Time Spent: 20m  (was: 10m)

> Split Brain on Replication could "damage" the Topology, isolating the server
> 
>
> Key: ARTEMIS-2868
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2868
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.14.0
>Reporter: Clebert Suconic
>Priority: Major
> Fix For: 2.15.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The issue is that after a split brain, during a reconnect, the live broker 
> will send a topology update to the live, replacing the topology.
>  
> As a result, new replicas will not be able to reconnect after the split 
> brain, since the topology will return a wrong address, probably pointing the 
> backup towards itself.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (ARTEMIS-2872) Support FQQN syntax for security-settings

2020-08-05 Thread Justin Bertram (Jira)
Justin Bertram created ARTEMIS-2872:
---

 Summary: Support FQQN syntax for security-settings
 Key: ARTEMIS-2872
 URL: https://issues.apache.org/jira/browse/ARTEMIS-2872
 Project: ActiveMQ Artemis
  Issue Type: Improvement
Reporter: Justin Bertram
Assignee: Justin Bertram






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (ARTEMIS-2871) update to proton-j 0.33.6 and qpid-jms 0.53.0

2020-08-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2871?focusedWorklogId=466854&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-466854
 ]

ASF GitHub Bot logged work on ARTEMIS-2871:
---

Author: ASF GitHub Bot
Created on: 05/Aug/20 15:53
Start Date: 05/Aug/20 15:53
Worklog Time Spent: 10m 
  Work Description: gemmellr opened a new pull request #3234:
URL: https://github.com/apache/activemq-artemis/pull/3234


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 466854)
Remaining Estimate: 0h
Time Spent: 10m

> update to proton-j 0.33.6 and qpid-jms 0.53.0
> -
>
> Key: ARTEMIS-2871
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2871
> Project: ActiveMQ Artemis
>  Issue Type: Task
>  Components: AMQP
>Reporter: Robbie Gemmell
>Priority: Major
> Fix For: 2.15.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> update to proton-j 0.33.6 and qpid-jms 0.53.0



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (ARTEMIS-2871) update to proton-j 0.33.6 and qpid-jms 0.53.0

2020-08-05 Thread Robbie Gemmell (Jira)
Robbie Gemmell created ARTEMIS-2871:
---

 Summary: update to proton-j 0.33.6 and qpid-jms 0.53.0
 Key: ARTEMIS-2871
 URL: https://issues.apache.org/jira/browse/ARTEMIS-2871
 Project: ActiveMQ Artemis
  Issue Type: Task
  Components: AMQP
Reporter: Robbie Gemmell
 Fix For: 2.15.0


update to proton-j 0.33.6 and qpid-jms 0.53.0



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-2861) Add queue name as a parameter to ActiveMQSecurityManager

2020-08-05 Thread Justin Bertram (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17171571#comment-17171571
 ] 

Justin Bertram commented on ARTEMIS-2861:
-

I plan on implementing this. I just haven't had the bandwidth yet. I'll create 
a new issue for that work.

> Add queue name as a parameter to ActiveMQSecurityManager 
> -
>
> Key: ARTEMIS-2861
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2861
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Affects Versions: 2.14.0
>Reporter: Luís Alves
>Priority: Major
>
> Currently I am trying to integrate Artemis with OpenID Connect (OAuth 2.0) 
> and User-Managed Access 2.0 (UMA 2.0) using the Keycloak implementation. I 
> want to have fine-grained access control over operations on addresses and 
> queues (subscriptions), as described in 
> https://issues.apache.org/jira/browse/ARTEMIS-592. I've investigated the 
> approach proposed in 
> https://medium.com/@joelicious/extending-artemis-security-with-oauth2-7fd9b3dffe3
>  and it solves the authN part. For the authZ part I've already had some 
> feedback here 
> https://stackoverflow.com/questions/63191001/activemq-artemis-activemqsecuritymanager4-verify-clientid-subscription,
>  but I think org.apache.activemq.artemis.core.server.SecuritySettingPlugin 
> will not give the needed control. So I'm proposing that the latest 
> ActiveMQSecurityManager implementation adds the queue name, since the 
> calling method:
> {code:java}
> @Override
> public void check(final SimpleString address,
>                   final SimpleString queue,
>                   final CheckType checkType,
>                   final SecurityAuth session) throws Exception {
> {code}
> already has this information. 
> Using UMA 2.0, each address can be a resource and we can have 
> SEND, CONSUME, CREATE_ADDRESS, DELETE_ADDRESS, CREATE_DURABLE_QUEUE, DELETE_DURABLE_QUEUE, CREATE_NON_DURABLE_QUEUE, DELETE_NON_DURABLE_QUEUE, MANAGE and BROWSE
>  as scopes, which I think are quite fine grained. Depending on the use case, a 
> subscription can also be a resource.
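
A purely hypothetical sketch of how such a check could map the address, queue 
and CheckType onto a UMA-style resource/scope decision once the queue name is 
available; the UmaAuthorizationClient interface and the FQQN-style resource 
naming are assumptions for illustration, not an existing Artemis or Keycloak 
API:

{code:java}
// Hypothetical types for illustration only.
interface UmaAuthorizationClient {
    boolean isAuthorized(String subject, String resource, String scope);
}

class UmaSecurityCheck {
    private final UmaAuthorizationClient authzClient;

    UmaSecurityCheck(UmaAuthorizationClient authzClient) {
        this.authzClient = authzClient;
    }

    /** Returns true if the authenticated user may perform checkType on address/queue. */
    boolean check(String user, String address, String queue, String checkType) {
        // The address is the UMA resource and the CheckType name (SEND, CONSUME, ...)
        // is the scope, as suggested in the description above.
        String resource = address;
        // For subscription-level control, the subscription queue can itself be a
        // resource, named FQQN-style (address::queue) as discussed in the comments.
        if (queue != null && !queue.equals(address)) {
            resource = address + "::" + queue;
        }
        return authzClient.isAuthorized(user, resource, checkType);
    }
}
{code}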



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (AMQ-8019) Support for InactivityMonitor over https and failover

2020-08-05 Thread Diptesh Chakraborty (Jira)


 [ 
https://issues.apache.org/jira/browse/AMQ-8019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Diptesh Chakraborty updated AMQ-8019:
-
Summary: Support for InactivityMonitor over https and failover  (was: 
Support for managing InactivityMonitor over https and failover)

> Support for InactivityMonitor over https and failover
> -
>
> Key: AMQ-8019
> URL: https://issues.apache.org/jira/browse/AMQ-8019
> Project: ActiveMQ
>  Issue Type: Bug
>Affects Versions: 5.15.9
>Reporter: Diptesh Chakraborty
>Priority: Major
>  Labels: HttpsURLConnection, InactivityMonitor
>
> With the below configuration on
>  
> *Client configuration:*
> failover: 
> ([https://.com:8443)?nested.useInactivityMonitor=false&useExponentialBackOff=false&initialReconnectDelay=7000&maxReconnectAttempts=4&maxReconnectDelay=7000|https://.com:8443)/?nested.useInactivityMonitor=false&useExponentialBackOff=false&initialReconnectDelay=7000&maxReconnectAttempts=4&maxReconnectDelay=7000]
>  
> *Broker configuration*
> https://.com:8443"/>
>  
> With the above setup, I am still encountering the following in
>  
> *activemq.log*
> at java.lang.Thread.run(Thread.java:745)[:1.7.0_131] at 
> java.lang.Thread.run(Thread.java:745)[:1.7.0_131]2020-08-05 21:50:01,066 | 
> WARN  | Transport Connection to: blockingQueue_458574741 failed: 
> org.apache.activemq.transport.InactivityIOException: Channel was inactive for 
> too (>3) long: blockingQueue_458574741 | 
> org.apache.activemq.broker.TransportConnection.Transport | ActiveMQ 
> InactivityMonitor Worker
>  
> *Logs at client side*
> 188078 [ActiveMQ Task-3] INFO 
> org.apache.activemq.transport.failover.FailoverTransport - Successfully 
> reconnected to 
> [https://.com:8443?useInactivityMonitor=false|https://.com:8443/?useInactivityMonitor=false]
> WARN | Transport 
> ([https://x.com:8443?useInactivityMonitor=false|https://x.com:8443/?useInactivityMonitor=false])
>  failed , attempting to automatically reconnect: {}
> WARN | Transport 
> ([https://x.com.com:8443?useInactivityMonitor=false|https://x.com.com:8443/?useInactivityMonitor=false])
>  failed , attempting to automatically reconnect: {}
> java.io.IOException: Failed to perform GET on: 
> [https://.com:8443|https://.com:8443/]
> Reason: null at 
> org.apache.activemq.util.IOExceptionSupport.create(IOExceptionSupport.java:34)
>  at 
> org.apache.activemq.transport.http.HttpClientTransport.run(HttpClientTransport.java:208)
>  at java.lang.Thread.run(Unknown Source)
> Caused by: java.io.EOFException
> at java.io.DataInputStream.readInt(Unknown Source) at 
> org.apache.activemq.transport.util.TextWireFormat.unmarshal(TextWireFormat.java:52)
>  at 
> org.apache.activemq.transport.http.HttpClientTransport.run(HttpClientTransport.java:199)
>  ... 1 more



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-2852) Huge performance decrease between versions 2.2.0 and 2.13.0

2020-08-05 Thread Francesco Nigro (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17171564#comment-17171564
 ] 

Francesco Nigro commented on ARTEMIS-2852:
--

Let me think about it until tomorrow and I will answer inline (y)
In the meantime I've added a commit to 
https://github.com/franz1981/activemq-artemis/tree/speed_up_core_mmap fixing a 
behavioural change we've introduced on replication that could be a possible 
cause of the observed regression (regardless of the test case's validity).
If you want to try it with the current setup, that would be great.

> Huge performance decrease between versions 2.2.0 and 2.13.0
> ---
>
> Key: ARTEMIS-2852
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2852
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Kasper Kondzielski
>Priority: Major
> Attachments: Selection_433.png, Selection_434.png, Selection_440.png, 
> Selection_441.png, Selection_451.png
>
>
> Hi,
> Recently, we started to prepare a new revision of our blog post in which we 
> test various implementations of replicated queues. The previous version can 
> be found here:  [https://softwaremill.com/mqperf/]
> We updated the artemis binary to 2.13.0, regenerated the configuration file 
> and applied all the performance tricks you told us last time. In particular 
> these were:
>  * the {{Xmx}} java parameter bumped to {{16G}} (now bumped to {{48G}})
>  * in {{broker.xml}}, the {{global-max-size}} setting changed to {{8G}} (this 
> one we forgot to set, but we suspect that it is not the issue)
>  * {{journal-type}} set to {{MAPPED}}
>  * {{journal-datasync}}, {{journal-sync-non-transactional}} and 
> {{journal-sync-transactional}} all set to false
> Apart from that, we changed the machine type we use to r5.2xlarge (8 cores, 64 
> GiB memory, network bandwidth up to 10 Gbps, storage bandwidth up to 4,750 
> Mbps) and we decided to always run twice as many receivers as senders.
> From our tests it looks like version 2.13.0 is not scaling as well, with the 
> increase of senders and receivers, as version 2.2.0 (previously tested). 
> Basically it is not scaling at all, as the throughput stays almost at the same 
> level, while previously it used to grow linearly.
> Here you can find our test results for both versions: 
> [https://docs.google.com/spreadsheets/d/1kr9fzSNLD8bOhMkP7K_4axBQiKel1aJtpxsBCOy9ugU/edit?usp=sharing]
> We are aware that there is now a dedicated page in the documentation about 
> performance tuning, but we are surprised that the same settings as before 
> perform much worse.
> Maybe there is an obvious property we overlooked which should be turned on? 
> All changes between those versions, together with the final configuration, can 
> be found in this merged PR: 
> [https://github.com/softwaremill/mqperf/commit/6bfae489e11a250dc9e6ef59719782f839e8874a]
>  
> Charts showing the machines' usage are attached. Memory consumed by the 
> artemis process didn't exceed ~16 GB. Bandwidth and CPU weren't bottlenecks 
> either. 
> p.s. I wanted to ask this question on the mailing list/Nabble forum first, but 
> it seems that I don't have permission to do so even though I registered & 
> subscribed. Is that intentional?
>  
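
For orientation only, a hedged sketch of roughly equivalent settings expressed 
through Artemis's embedded-broker API rather than broker.xml; the property 
names mirror the broker.xml elements listed above, while the acceptor URL and 
the disabled security flag are assumptions, not part of the benchmark setup:

{code:java}
import org.apache.activemq.artemis.core.config.Configuration;
import org.apache.activemq.artemis.core.config.impl.ConfigurationImpl;
import org.apache.activemq.artemis.core.server.JournalType;
import org.apache.activemq.artemis.core.server.embedded.EmbeddedActiveMQ;

public class TunedBrokerSketch {
    public static void main(String[] args) throws Exception {
        Configuration config = new ConfigurationImpl()
                .setGlobalMaxSize(8L * 1024 * 1024 * 1024)   // global-max-size = 8G
                .setJournalType(JournalType.MAPPED)          // journal-type = MAPPED
                .setJournalDatasync(false)                   // journal-datasync = false
                .setJournalSyncNonTransactional(false)       // journal-sync-non-transactional = false
                .setJournalSyncTransactional(false)          // journal-sync-transactional = false
                .setSecurityEnabled(false)                   // illustrative, not from the report
                .addAcceptorConfiguration("artemis", "tcp://0.0.0.0:61616"); // illustrative acceptor

        EmbeddedActiveMQ broker = new EmbeddedActiveMQ();
        broker.setConfiguration(config);
        broker.start();
    }
}
{code}

(The {{Xmx}} setting is a JVM argument rather than broker configuration, so it 
is not represented here.)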



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (AMQ-7470) ActiveMQ producer thread hangs on setXid

2020-08-05 Thread Andreas Baumgart (Jira)


[ 
https://issues.apache.org/jira/browse/AMQ-7470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17171531#comment-17171531
 ] 

Andreas Baumgart commented on AMQ-7470:
---

Are there any updates on this issue? It's also a blocker for us.

From the stack trace I can see that this syncSendPacket method is called:

public Response syncSendPacket(Command command) throws JMSException {
    return syncSendPacket(command, 0);
}

Wouldn't it be better to specify a concrete timeout here instead of 0, which I 
guess means waiting forever? This would at least prevent ActiveMQ from hanging 
forever, and it would allow our application to recover from the situation by 
handling the timeout exception.
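
A minimal, hypothetical sketch of the change being suggested, assuming the 
existing syncSendPacket(Command, int) overload is used with a bounded value 
instead of 0; the timeout value and the way the timed overload reports expiry 
are assumptions that would need to be verified against ActiveMQConnection:

{code:java}
public Response syncSendPacket(Command command) throws JMSException {
    // Hypothetical: delegate with a bounded timeout instead of 0 (wait forever).
    // The concrete value (or a configurable connection property) and the way the
    // timed overload signals expiry would need to be checked in ActiveMQConnection.
    return syncSendPacket(command, 30_000);
}
{code}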

> ActiveMQ producer thread hangs on setXid
> 
>
> Key: AMQ-7470
> URL: https://issues.apache.org/jira/browse/AMQ-7470
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: AMQP, Broker, JMS client
>Affects Versions: 5.15.6
>Reporter: Rajesh Pote
>Assignee: Jean-Baptiste Onofré
>Priority: Blocker
>
> I've noticed issues with distributed transactions (XA) on Karaf when using 
> ActiveMQ with JDBC storage (Postgres). After some time (it isn't 
> deterministic) I observed 'idle in transaction' sessions on the database side 
> (in a different schema than the one used by ActiveMQ). After debugging, it 
> seems that the reason why transactions are hanging is ActiveMQ and the 
> org.apache.activemq.transport.FutureResponse.getResult method, which waits 
> forever for a response.
> {code}
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x000768585aa8> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
> at 
> java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:403)
> at 
> org.apache.activemq.transport.FutureResponse.getResult(FutureResponse.java:48)
> at 
> org.apache.activemq.transport.ResponseCorrelator.request(ResponseCorrelator.java:87)
> at 
> org.apache.activemq.ActiveMQConnection.syncSendPacket(ActiveMQConnection.java:1388)
> at 
> org.apache.activemq.ActiveMQConnection.syncSendPacket(ActiveMQConnection.java:1428)
> at 
> org.apache.activemq.TransactionContext.setXid(TransactionContext.java:751)
> at 
> org.apache.activemq.TransactionContext.invokeBeforeEnd(TransactionContext.java:424)
> at 
> org.apache.activemq.TransactionContext.end(TransactionContext.java:408)
> at 
> org.apache.geronimo.transaction.manager.WrapperNamedXAResource.end(WrapperNamedXAResource.java:61)
> at 
> org.apache.geronimo.transaction.manager.TransactionImpl.endResources(TransactionImpl.java:588)
> at 
> org.apache.geronimo.transaction.manager.TransactionImpl.endResources(TransactionImpl.java:567)
> at 
> org.apache.geronimo.transaction.manager.TransactionImpl.beforePrepare(TransactionImpl.java:414)
> at 
> org.apache.geronimo.transaction.manager.TransactionImpl.commit(TransactionImpl.java:262)
> at 
> org.apache.geronimo.transaction.manager.TransactionManagerImpl.commit(TransactionManagerImpl.java:252)
> at 
> org.springframework.transaction.jta.JtaTransactionManager.doCommit(JtaTransactionManager.java:1020)
> at 
> org.springframework.transaction.support.AbstractPlatformTransactionManager.processCommit(AbstractPlatformTransactionManager.java:761)
> at 
> org.springframework.transaction.support.AbstractPlatformTransactionManager.commit(AbstractPlatformTransactionManager.java:730)
> at 
> org.apache.aries.transaction.internal.AriesPlatformTransactionManager.commit(AriesPlatformTransactionManager.java:75)
> at 
> org.springframework.transaction.interceptor.TransactionAspectSupport.commitTransactionAfterReturning(TransactionAspectSupport.java:484)
> at 
> org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:291)
> at 
> org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:96)
> at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
> at 
> org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:655)
> . custom service
> {code}
> {code}
> "DefaultMessageListenerContainer-3" #13199 prio=5 os_prio=0 
> tid=0x7fb8687e6800 nid=0x3954 waiting on condition [0x7fb7b0b98000]
>java.lang.Thread.State: WAITING (parkin

[jira] [Comment Edited] (ARTEMIS-2861) Add queue name as a parameter to ActiveMQSecurityManager

2020-08-05 Thread Jira


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17171318#comment-17171318
 ] 

Luís Alves edited comment on ARTEMIS-2861 at 8/5/20, 2:32 PM:
--

I would say that DELETE_DURABLE_QUEUE & DELETE_NON_DURABLE_QUEUE should also be 
included, so only the subscriber (owner) can cancel the subscription. And they 
can for sure be deleted administratively (e.g. old subscriptions without 
interaction, or by the address owner that doesn't want the subscriber to 
receive updates anymore). 

Regarding :: it seems a great idea :), as that way it's easy to know how to 
break it apart and, as you said, it aligns with the FQQN.

Do you plan to implement this? Will you open a new specific ticket for that?


was (Author: luisalves00):
I would say that DELETE_DURABLE_QUEUE & DELETE_NON_DURABLE_QUEUE should also be 
included, so only the subscriber (owner) can cancel subscription. And they can 
for sure be deleted administratively (to old subscriptions without interaction 
or by the address owner that don't want the subscriber to receive updates 
anymore). 

Regarding :: seems a great idea :), as that way it's easy to know how to break 
and as you said aligns with the FQQN.

> Add queue name as a parameter to ActiveMQSecurityManager 
> -
>
> Key: ARTEMIS-2861
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2861
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Affects Versions: 2.14.0
>Reporter: Luís Alves
>Priority: Major
>
> Currently I am trying to integrate Artemis with OpenID Connect (OAuth 2.0) 
> and User-Managed Access 2.0 (UMA 2.0) using the Keycloak implementation. I 
> want to have fine-grained access control over operations on addresses and 
> queues (subscriptions), as described in 
> https://issues.apache.org/jira/browse/ARTEMIS-592. I've investigated the 
> approach proposed in 
> https://medium.com/@joelicious/extending-artemis-security-with-oauth2-7fd9b3dffe3
>  and it solves the authN part. For the authZ part I've already had some 
> feedback here 
> https://stackoverflow.com/questions/63191001/activemq-artemis-activemqsecuritymanager4-verify-clientid-subscription,
>  but I think org.apache.activemq.artemis.core.server.SecuritySettingPlugin 
> will not give the needed control. So I'm proposing that the latest 
> ActiveMQSecurityManager implementation adds the queue name, since the 
> calling method:
> {code:java}
> @Override
> public void check(final SimpleString address,
>                   final SimpleString queue,
>                   final CheckType checkType,
>                   final SecurityAuth session) throws Exception {
> {code}
> already has this information. 
> Using UMA 2.0, each address can be a resource and we can have 
> SEND, CONSUME, CREATE_ADDRESS, DELETE_ADDRESS, CREATE_DURABLE_QUEUE, DELETE_DURABLE_QUEUE, CREATE_NON_DURABLE_QUEUE, DELETE_NON_DURABLE_QUEUE, MANAGE and BROWSE
>  as scopes, which I think are quite fine grained. Depending on the use case, a 
> subscription can also be a resource.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (ARTEMIS-2852) Huge performance decrease between versions 2.2.0 and 2.13.0

2020-08-05 Thread Kasper Kondzielski (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17171471#comment-17171471
 ] 

Kasper Kondzielski edited comment on ARTEMIS-2852 at 8/5/20, 1:33 PM:
--

??Yep, that seems more correct:??

Great, thanks for clarification!

??In addition, this does seems something that should be fixed in the original 
version too, looking at the comment of ??[~amp001]?? on the original blog post??

Omg, you are right. In my defense, I will say that I wasn't even here 3 years 
ago. I also remember going through all these comments on the original version 
once I started doing these tests, but it seems that I never reached the 
end for some reason. Anyway, these aren't excuses, so please accept my apology. 

??3 HA pairs to achieve HA with no split brain, but NO load balancing, just 1 
live processing messages??

Just to be clear - server-side LB can be turned off by not specifying 
cluster-connections, yes? 

??And I've a proposal: to understand the scaling capability, even using a 
single broker (that as been done until now) is important to understand how the 
scalability of the whole system behave IMO.??
 :) I also thought about it (but in the context of another queue). That would 
by all means also be an interesting benchmark, but a different one from the one 
we defined and are trying to accomplish.

??Considering that clients (code and machines) basically doesn't do any 
processing with the messages I'm not quite sure that load balancing is needed 
here at all, hence I'm not 100% sure adding redistribution on the other 2 nodes 
is meaningful...??

My simple mental model suggests that it might be beneficial, but I will leave 
it for you guys to decide :) 


was (Author: kkondzielski):
??Yep, that seems more correct:??

Great, thanks for clarification!

??In addition, this does seems something that should be fixed in the original 
version too, looking at the comment of ??[~amp001]?? on the original blog post??

Omg, you are right. To my defense I will say that I wasn't even here 3 years 
ago. I also remember going through all these comments of the original version 
once I started doing these tests, but it seems that I have never reached the 
end for some reason. Anyway these aren't excuses, so please accept my apology. 

??3 HA pairs to achieve HA with no split brain, but NO load balancing, just 1 
live processing messages??

Just to be clear - server-side LB can be turned off by not specifying 
cluster-connections, yes? 

??And I've a proposal: to understand the scaling capability, even using a 
single broker (that as been done until now) is important to understand how the 
scalability of the whole system behave IMO.??
 :) I also though about it (but in the context of another queue). That by any 
means would be also an interesting benchmark, but a different one from the one 
we defined and trying to accomplish.

??Considering that clients (code and machines) basically doesn't do any 
processing with the messages I'm not quite sure that load balancing is needed 
here at all, hence I'm not 100% sure adding redistribution on the other 2 nodes 
is meaningful...??

My simple mental model suggest that it might be beneficial, but I will leave it 
for you guys to decide :) 

> Huge performance decrease between versions 2.2.0 and 2.13.0
> ---
>
> Key: ARTEMIS-2852
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2852
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Kasper Kondzielski
>Priority: Major
> Attachments: Selection_433.png, Selection_434.png, Selection_440.png, 
> Selection_441.png, Selection_451.png
>
>
> Hi,
> Recently, we started to prepare a new revision of our blog post in which we 
> test various implementations of replicated queues. The previous version can 
> be found here:  [https://softwaremill.com/mqperf/]
> We updated the artemis binary to 2.13.0, regenerated the configuration file 
> and applied all the performance tricks you told us last time. In particular 
> these were:
>  * the {{Xmx}} java parameter bumped to {{16G}} (now bumped to {{48G}})
>  * in {{broker.xml}}, the {{global-max-size}} setting changed to {{8G}} (this 
> one we forgot to set, but we suspect that it is not the issue)
>  * {{journal-type}} set to {{MAPPED}}
>  * {{journal-datasync}}, {{journal-sync-non-transactional}} and 
> {{journal-sync-transactional}} all set to false
> Apart from that, we changed the machine type we use to r5.2xlarge (8 cores, 64 
> GiB memory, network bandwidth up to 10 Gbps, storage bandwidth up to 4,750 
> Mbps) and we decided to always run twice as many receivers as senders.
> From our tests it looks like version 2.13.0 is not scaling as well, with the 
> increase of senders and receivers, as version 2.2.0 (previously 

[jira] [Updated] (AMQ-8019) Support for managing InactivityMonitor over https and failover

2020-08-05 Thread Diptesh Chakraborty (Jira)


 [ 
https://issues.apache.org/jira/browse/AMQ-8019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Diptesh Chakraborty updated AMQ-8019:
-
Labels: HttpsURLConnection InactivityMonitor  (was: InactivityMonitor)

> Support for managing InactivityMonitor over https and failover
> --
>
> Key: AMQ-8019
> URL: https://issues.apache.org/jira/browse/AMQ-8019
> Project: ActiveMQ
>  Issue Type: Bug
>Affects Versions: 5.15.9
>Reporter: Diptesh Chakraborty
>Priority: Major
>  Labels: HttpsURLConnection, InactivityMonitor
>
> With the below configuration on
>  
> *Client configuration:*
> failover: 
> ([https://.com:8443)?nested.useInactivityMonitor=false&useExponentialBackOff=false&initialReconnectDelay=7000&maxReconnectAttempts=4&maxReconnectDelay=7000|https://.com:8443)/?nested.useInactivityMonitor=false&useExponentialBackOff=false&initialReconnectDelay=7000&maxReconnectAttempts=4&maxReconnectDelay=7000]
>  
> *Broker configuration*
> https://.com:8443"/>
>  
> With the above setup, I am still encountering the following in
>  
> *activemq.log*
> at java.lang.Thread.run(Thread.java:745)[:1.7.0_131] at 
> java.lang.Thread.run(Thread.java:745)[:1.7.0_131]2020-08-05 21:50:01,066 | 
> WARN  | Transport Connection to: blockingQueue_458574741 failed: 
> org.apache.activemq.transport.InactivityIOException: Channel was inactive for 
> too (>3) long: blockingQueue_458574741 | 
> org.apache.activemq.broker.TransportConnection.Transport | ActiveMQ 
> InactivityMonitor Worker
>  
> *Logs at client side*
> 188078 [ActiveMQ Task-3] INFO 
> org.apache.activemq.transport.failover.FailoverTransport - Successfully 
> reconnected to 
> [https://.com:8443?useInactivityMonitor=false|https://.com:8443/?useInactivityMonitor=false]
> WARN | Transport 
> ([https://x.com:8443?useInactivityMonitor=false|https://x.com:8443/?useInactivityMonitor=false])
>  failed , attempting to automatically reconnect: {}
> WARN | Transport 
> ([https://x.com.com:8443?useInactivityMonitor=false|https://x.com.com:8443/?useInactivityMonitor=false])
>  failed , attempting to automatically reconnect: {}
> java.io.IOException: Failed to perform GET on: 
> [https://.com:8443|https://.com:8443/]
> Reason: null at 
> org.apache.activemq.util.IOExceptionSupport.create(IOExceptionSupport.java:34)
>  at 
> org.apache.activemq.transport.http.HttpClientTransport.run(HttpClientTransport.java:208)
>  at java.lang.Thread.run(Unknown Source)
> Caused by: java.io.EOFException
> at java.io.DataInputStream.readInt(Unknown Source) at 
> org.apache.activemq.transport.util.TextWireFormat.unmarshal(TextWireFormat.java:52)
>  at 
> org.apache.activemq.transport.http.HttpClientTransport.run(HttpClientTransport.java:199)
>  ... 1 more



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (AMQ-8019) Support for managing InactivityMonitor over https and failover

2020-08-05 Thread Diptesh Chakraborty (Jira)


 [ 
https://issues.apache.org/jira/browse/AMQ-8019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Diptesh Chakraborty updated AMQ-8019:
-
Description: 
With the below configuration on

 

*Client configuration:*

failover: 
([https://.com:8443)?nested.useInactivityMonitor=false&useExponentialBackOff=false&initialReconnectDelay=7000&maxReconnectAttempts=4&maxReconnectDelay=7000|https://.com:8443)/?nested.useInactivityMonitor=false&useExponentialBackOff=false&initialReconnectDelay=7000&maxReconnectAttempts=4&maxReconnectDelay=7000]

 

*Broker configuration*

https://.com:8443"/>

 

With the above setup, I am still encountering the following in

 

*activemq.log*

at java.lang.Thread.run(Thread.java:745)[:1.7.0_131] at 
java.lang.Thread.run(Thread.java:745)[:1.7.0_131]2020-08-05 21:50:01,066 | WARN 
 | Transport Connection to: blockingQueue_458574741 failed: 
org.apache.activemq.transport.InactivityIOException: Channel was inactive for 
too (>3) long: blockingQueue_458574741 | 
org.apache.activemq.broker.TransportConnection.Transport | ActiveMQ 
InactivityMonitor Worker

 

*Logs at client side*

188078 [ActiveMQ Task-3] INFO 
org.apache.activemq.transport.failover.FailoverTransport - Successfully 
reconnected to 
[https://.com:8443?useInactivityMonitor=false|https://.com:8443/?useInactivityMonitor=false]

WARN | Transport 
([https://x.com:8443?useInactivityMonitor=false|https://x.com:8443/?useInactivityMonitor=false])
 failed , attempting to automatically reconnect: {}

WARN | Transport 
([https://x.com.com:8443?useInactivityMonitor=false|https://x.com.com:8443/?useInactivityMonitor=false])
 failed , attempting to automatically reconnect: {}

java.io.IOException: Failed to perform GET on: 
[https://.com:8443|https://.com:8443/]

Reason: null at 
org.apache.activemq.util.IOExceptionSupport.create(IOExceptionSupport.java:34) 
at 
org.apache.activemq.transport.http.HttpClientTransport.run(HttpClientTransport.java:208)
 at java.lang.Thread.run(Unknown Source)

Caused by: java.io.EOFException

at java.io.DataInputStream.readInt(Unknown Source) at 
org.apache.activemq.transport.util.TextWireFormat.unmarshal(TextWireFormat.java:52)
 at 
org.apache.activemq.transport.http.HttpClientTransport.run(HttpClientTransport.java:199)
 ... 1 more

  was:
With the below configuration on

 

*Client configuration:*

failover:([https://.com:8443)?nested.useInactivityMonitor=false&useExponentialBackOff=false&initialReconnectDelay=7000&maxReconnectAttempts=4&maxReconnectDelay=7000|https://.com:8443)/?nested.useInactivityMonitor=false&useExponentialBackOff=false&initialReconnectDelay=7000&maxReconnectAttempts=4&maxReconnectDelay=7000]

 

*Broker configuration*

https://.com:8443"/>

 

With the above setup, I am still encountering the following in

 

*activemq.log*

at java.lang.Thread.run(Thread.java:745)[:1.7.0_131] at 
java.lang.Thread.run(Thread.java:745)[:1.7.0_131]2020-08-05 21:50:01,066 | WARN 
 | Transport Connection to: blockingQueue_458574741 failed: 
org.apache.activemq.transport.InactivityIOException: Channel was inactive for 
too (>3) long: blockingQueue_458574741 | 
org.apache.activemq.broker.TransportConnection.Transport | ActiveMQ 
InactivityMonitor Worker

 

*Logs at client side*

188078 [ActiveMQ Task-3] INFO 
org.apache.activemq.transport.failover.FailoverTransport - Successfully 
reconnected to 
[https://.com:8443?useInactivityMonitor=false|https://.com:8443/?useInactivityMonitor=false]

WARN | Transport 
([https://x.com:8443?useInactivityMonitor=false|https://x.com:8443/?useInactivityMonitor=false])
 failed , attempting to automatically reconnect: {}

WARN | Transport 
([https://x.com.com:8443?useInactivityMonitor=false|https://x.com.com:8443/?useInactivityMonitor=false])
 failed , attempting to automatically reconnect: {}

java.io.IOException: Failed to perform GET on: 
[https://.com:8443|https://.com:8443/]

Reason: null at 
org.apache.activemq.util.IOExceptionSupport.create(IOExceptionSupport.java:34) 
at 
org.apache.activemq.transport.http.HttpClientTransport.run(HttpClientTransport.java:208)
 at java.lang.Thread.run(Unknown Source)

Caused by: java.io.EOFException

at java.io.DataInputStream.readInt(Unknown Source) at 
org.apache.activemq.transport.util.TextWireFormat.unmarshal(TextWireFormat.java:52)
 at 
org.apache.activemq.transport.http.HttpClientTransport.run(HttpClientTransport.java:199)
 ... 1 more


> Support for managing InactivityMonitor over https and failover
> --
>
> Key: AMQ-8019
> URL: https://issues.apache.org/jira/browse/AMQ-8019
> Project: ActiveMQ
>  Issue Type: Bug
>Affects Versions: 5.15.9
>Reporter: Diptesh Chakraborty
>Priority: Major
>  Labels: InactivityMonitor
>
> 

[jira] [Created] (AMQ-8019) Support for managing InactivityMonitor over https and failover

2020-08-05 Thread Diptesh Chakraborty (Jira)
Diptesh Chakraborty created AMQ-8019:


 Summary: Support for managing InactivityMonitor over https and 
failover
 Key: AMQ-8019
 URL: https://issues.apache.org/jira/browse/AMQ-8019
 Project: ActiveMQ
  Issue Type: Bug
Affects Versions: 5.15.9
Reporter: Diptesh Chakraborty


With the below configuration on

 

*Client configuration:*

failover:([https://.com:8443)?nested.useInactivityMonitor=false&useExponentialBackOff=false&initialReconnectDelay=7000&maxReconnectAttempts=4&maxReconnectDelay=7000|https://.com:8443)/?nested.useInactivityMonitor=false&useExponentialBackOff=false&initialReconnectDelay=7000&maxReconnectAttempts=4&maxReconnectDelay=7000]

 

*Broker configuration*

https://.com:8443"/>

 

With the above setup, I am still encountering the following in

 

*activemq.log*

at java.lang.Thread.run(Thread.java:745)[:1.7.0_131]
at java.lang.Thread.run(Thread.java:745)[:1.7.0_131]
2020-08-05 21:50:01,066 | WARN | Transport Connection to: blockingQueue_458574741 failed: 
org.apache.activemq.transport.InactivityIOException: Channel was inactive for 
too (>3) long: blockingQueue_458574741 | 
org.apache.activemq.broker.TransportConnection.Transport | ActiveMQ 
InactivityMonitor Worker

 

*Logs at client side*

188078 [ActiveMQ Task-3] INFO 
org.apache.activemq.transport.failover.FailoverTransport - Successfully 
reconnected to https://.com:8443?useInactivityMonitor=false

WARN | Transport (https://x.com:8443?useInactivityMonitor=false) 
failed, attempting to automatically reconnect: {}

WARN | Transport (https://x.com.com:8443?useInactivityMonitor=false) 
failed, attempting to automatically reconnect: {}

java.io.IOException: Failed to perform GET on: https://.com:8443

Reason: null at 
org.apache.activemq.util.IOExceptionSupport.create(IOExceptionSupport.java:34) 
at 
org.apache.activemq.transport.http.HttpClientTransport.run(HttpClientTransport.java:208)
 at java.lang.Thread.run(Unknown Source)

Caused by: java.io.EOFException

at java.io.DataInputStream.readInt(Unknown Source) at 
org.apache.activemq.transport.util.TextWireFormat.unmarshal(TextWireFormat.java:52)
 at 
org.apache.activemq.transport.http.HttpClientTransport.run(HttpClientTransport.java:199)
 ... 1 more
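
For reference, a minimal client-side sketch of the same setup. This is a hypothetical example only: the broker host is a placeholder, the options mirror the failover URL quoted above (with the nested option applied to the inner https URI), and it assumes the ActiveMQ 5.x http client module is on the classpath.

{code:java}
// Hypothetical sketch: broker.example.com is a placeholder host.
import javax.jms.Connection;
import org.apache.activemq.ActiveMQConnectionFactory;

public class NoInactivityMonitorClient {
    public static void main(String[] args) throws Exception {
        String url = "failover:(https://broker.example.com:8443)"
                + "?nested.useInactivityMonitor=false"   // disable the inactivity monitor on each inner transport
                + "&useExponentialBackOff=false"
                + "&initialReconnectDelay=7000"
                + "&maxReconnectAttempts=4"
                + "&maxReconnectDelay=7000";
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(url);
        Connection connection = factory.createConnection();
        try {
            connection.start();
            // create sessions, producers and consumers here as usual
        } finally {
            connection.close();
        }
    }
}
{code}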



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-2870) CORE connection failure sometimes doesn't cleanup sessions

2020-08-05 Thread Justin Bertram (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17171477#comment-17171477
 ] 

Justin Bertram commented on ARTEMIS-2870:
-

I recently resolved ARTEMIS-2856 which seems potentially related to this. At 
the very least I recommend you move to 2.14.0 and even perhaps build your own 
version of 2.14.0 with the fix from ARTEMIS-2856 or just use 2.15.0-SNAPSHOT. 
If you can reproduce the issue with the fix for ARTEMIS-2856 on at least 
version 2.14.0 then I'll investigate further.

> CORE connection failure sometimes doesn't cleanup sessions
> --
>
> Key: ARTEMIS-2870
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2870
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.10.1
>Reporter: Markus Meierhofer
>Priority: Critical
> Attachments: artemis.log, broker.xml, duplicated consumers.png
>
>
> h3. Summary
> Since the upgrade of our deployed artemis instances from version 2.6.4 to 
> 2.10.1 we have noticed the problem that sometimes, a connection failure 
> doesn't include the cleanup of its connected sessions, leading to "zombie" 
> consumers and producers on queues.
>  
> h3. The issue
> Our Artemis Clients are connected to the broker via the provided JMS 
> abstraction, using the default connection TTL of 60 seconds. We are using 
> both JMS Topics and JMS Queues.
> As most of our Clients are mobile and in a WiFi, connection losses may occur 
> frequently, depending on the quality of the network. When the client is 
> disconnected for 60 seconds, the broker usually closes the connection and 
> cleans up all the sessions connected to it. The mobile Clients then 
> reconnect when they are online again. What we have noticed is that after many 
> connection failures, messages may be sent twice to the mobile clients. 
> When analyzing the problem on the broker console, we found out that there 
> were two consumers connected to each of the queues one mobile client usually 
> consumes from. One of them belonged to the new connection of the mobile 
> Client, which is fine.
> The other consumer belonged to a session whose connection already failed and 
> was closed at that time. When analyzing the logs, we saw that for these 
> connections, it contained a "Connection failure to ... has been detected" 
> line, but no following "clearing up resources for session ..." log lines for 
> these connections.
>  
> h3. Instance of the issue
>  
> The broken Session is the "7a9292cb-xxx" in the picture. In the logs you can 
> see that the connection failure was detected, but the session was never 
> cleared by the broker (mind the timestamp).
> !duplicated consumers.png!
> {code:java}
> [WARN 2020-07-27 14:33:29,794  Thread-13  
> org.apache.activemq.artemis.core.client]: AMQ212037: Connection failure to 
> /10.255.0.2:54812 has been detected: syscall:read(..) failed: Connection 
> reset by peer [code=GENERIC_EXCEPTION]
> [WARN 2020-07-29 09:31:30,828 Thread-20   
> org.apache.activemq.artemis.core.client]: AMQ212037: Connection failure to 
> /10.255.0.2:55994 has been detected: AMQ229014: Did not receive data from 
> /10.255.0.2:55994 within the 60,000ms connection TTL. The connection will now 
> be closed. [code=CONNECTION_TIMEDOUT]
> {code}
>  
> Attached you can find the full [^artemis.log] and our [^broker.xml]
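
For context on the 60-second TTL mentioned in the description above, a minimal client-side sketch follows. It is hypothetical: host and port are placeholders, and connectionTTL is the core client URI parameter (in milliseconds) that controls how long the broker waits without data before failing the connection.

{code:java}
// Hypothetical sketch: broker.example.com:61616 is a placeholder endpoint.
import javax.jms.Connection;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class ConnectionTtlClient {
    public static void main(String[] args) throws Exception {
        // connectionTTL=60000 matches the default 60s TTL discussed in this issue
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://broker.example.com:61616?connectionTTL=60000");
        Connection connection = factory.createConnection();
        try {
            connection.start();
            // sessions and consumers created from this connection share its TTL
        } finally {
            connection.close();
        }
    }
}
{code}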



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-2852) Huge performance decrease between versions 2.2.0 and 2.13.0

2020-08-05 Thread Kasper Kondzielski (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17171471#comment-17171471
 ] 

Kasper Kondzielski commented on ARTEMIS-2852:
-

??Yep, that seems more correct:??

Great, thanks for clarification!

??In addition, this does seems something that should be fixed in the original 
version too, looking at the comment of ??[~amp001]?? on the original blog post??

Omg, you are right. In my defense I will say that I wasn't even here 3 years 
ago. I also remember going through all these comments of the original version 
once I started doing these tests, but it seems that I have never reached the 
end for some reason. Anyway these aren't excuses, so please accept my apology. 

??3 HA pairs to achieve HA with no split brain, but NO load balancing, just 1 
live processing messages??

Just to be clear - server-side LB can be turned off by not specifying 
cluster-connections, yes? 

??And I've a proposal: to understand the scaling capability, even using a 
single broker (that as been done until now) is important to understand how the 
scalability of the whole system behave IMO.??
 :) I also thought about it (but in the context of another queue). That would 
certainly also be an interesting benchmark, but a different one from the one 
we defined and are trying to accomplish.

??Considering that clients (code and machines) basically doesn't do any 
processing with the messages I'm not quite sure that load balancing is needed 
here at all, hence I'm not 100% sure adding redistribution on the other 2 nodes 
is meaningful...??

My simple mental model suggests that it might be beneficial, but I will leave it 
for you guys to decide :) 
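
As a point of reference for the load-balancing question above, a minimal broker.xml sketch of one replicated live/backup pair with server-side load balancing disabled. This is a hypothetical sketch only: connector names are placeholders, and it assumes the usual 2.x schema where replication still needs a cluster connection while message-load-balancing can be set to OFF.

{code:xml}
<!-- Hypothetical sketch: live broker. "this-broker" and "backup-broker" are placeholder connector names. -->
<ha-policy>
   <replication>
      <master/>
   </replication>
</ha-policy>
<cluster-connections>
   <cluster-connection name="my-cluster">
      <connector-ref>this-broker</connector-ref>
      <!-- keep the cluster connection for replication, but do no server-side balancing -->
      <message-load-balancing>OFF</message-load-balancing>
      <static-connectors>
         <connector-ref>backup-broker</connector-ref>
      </static-connectors>
   </cluster-connection>
</cluster-connections>

<!-- Hypothetical sketch: backup broker. -->
<ha-policy>
   <replication>
      <slave/>
   </replication>
</ha-policy>
{code}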

> Huge performance decrease between versions 2.2.0 and 2.13.0
> ---
>
> Key: ARTEMIS-2852
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2852
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Kasper Kondzielski
>Priority: Major
> Attachments: Selection_433.png, Selection_434.png, Selection_440.png, 
> Selection_441.png, Selection_451.png
>
>
> Hi,
> Recently, we started to prepare a new revision of our blog-post in which we 
> test various implementations of replicated queues. Previous version can be 
> found here:  [https://softwaremill.com/mqperf/]
> We updated artemis binary to 2.13.0, regenerated configuration file and 
> applied all the performance tricks you told us last time. In particular these 
> were:
>  * the {{Xmx}} java parameter bumped to {{16G (now bumped to 48G)}}
>  * in {{broker.xml}}, the {{global-max-size}} setting changed to {{8G (this 
> one we forgot to set, but we suspect that it is not the issue)}}
>  * {{journal-type}} set to {{MAPPED}}
>  * {{journal-datasync}}, {{journal-sync-non-transactional}} and 
> {{journal-sync-transactional}} all set to false
> Apart from that we changed machines' type we use to r5.2xlarge ( 8 cores, 64 
> GIB memory, Network bandwidth Up to 10 Gbps, Storage bandwidth Up to 4,750 
> Mbps) and we decided to always run twice as many receivers as senders.
> From our tests it looks like version 2.13.0 is not scaling as well, with the 
> increase of senders and receivers, as version 2.2.0 (previously tested). 
> Basically it is not scaling at all, as the throughput stays almost at the same 
> level, while previously it used to grow linearly.
> Here you can find our tests results for both versions: 
> [https://docs.google.com/spreadsheets/d/1kr9fzSNLD8bOhMkP7K_4axBQiKel1aJtpxsBCOy9ugU/edit?usp=sharing]
> We are aware that now there is a dedicated page in documentation about 
> performance tuning, but we are surprised that same settings as before 
> performs much worse.
> Maybe there is an obvious property which we overlooked which should be turned 
> on? 
> All changes between those versions together with the final configuration can 
> be found on this merged PR: 
> [https://github.com/softwaremill/mqperf/commit/6bfae489e11a250dc9e6ef59719782f839e8874a]
>  
> Charts showing machines' usage in attachments. Memory consumed by artemis 
> process didn't exceed ~16 GB. Bandwidth and CPU weren't bottlenecks either. 
> p.s. I wanted to ask this question on mailing list/nabble forum first but it 
> seems that I don't have permissions to do so even though I registered & 
> subscribed. Is that intentional?
>  
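
For reference, a minimal sketch of how the tuning settings listed in the quoted description would look under <core> in broker.xml. This is a hypothetical sketch: element names follow the standard schema, and the global-max-size value is written out in bytes.

{code:xml}
<!-- Hypothetical sketch of the tuning settings described above, inside <core>. -->
<journal-type>MAPPED</journal-type>
<journal-datasync>false</journal-datasync>
<journal-sync-non-transactional>false</journal-sync-non-transactional>
<journal-sync-transactional>false</journal-sync-transactional>
<!-- 8 GiB expressed in bytes -->
<global-max-size>8589934592</global-max-size>
{code}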



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-2852) Huge performance decrease between versions 2.2.0 and 2.13.0

2020-08-05 Thread Francesco Nigro (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17171447#comment-17171447
 ] 

Francesco Nigro commented on ARTEMIS-2852:
--

[~kkondzielski]
In addition, this does seem like something that should be fixed in the original 
version too, looking at the comment of [~amp001] on the original blog post

{quote}It's by design in artemis also if you configure a multi master as in it 
will shared over the masters - it's named cluster load balancing. And is 
transparent from a client/user perspective. I sent you link to the clustering 
doc and also sample deployment diagram with settings for a three master setup 
in the GitHub discussion thread.

In the docs live is used to describe master nodes and backup is slave 
Essentially you setup three ha pairs in a cluster group, (or even colocated 
live/backups) it is a lot easier to setup with udp discovery as it all self 
discovers and configures or you can use jgroups or static if needed.

By the looks of your ansible code you're making a cluster with one master and 
two slaves your actually almost there, you just need make two more masters and 
one more slave.

Or you could do co-located to reduce the nodes in half but you have to do that 
in broker.xml. It probably quicker though as you have mostly there just to 
create two more masters and the extra slave you can always re-work it to do 
co-lo later.

Once you do that just check the artemis logs and just make sure you see it all 
join. Update you client url list if using static, or easier is to use the same 
UDP discovery if it is available.{quote}

> Huge performance decrease between versions 2.2.0 and 2.13.0
> ---
>
> Key: ARTEMIS-2852
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2852
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Kasper Kondzielski
>Priority: Major
> Attachments: Selection_433.png, Selection_434.png, Selection_440.png, 
> Selection_441.png, Selection_451.png
>
>
> Hi,
> Recently, we started to prepare a new revision of our blog-post in which we 
> test various implementations of replicated queues. Previous version can be 
> found here:  [https://softwaremill.com/mqperf/]
> We updated artemis binary to 2.13.0, regenerated configuration file and 
> applied all the performance tricks you told us last time. In particular these 
> were:
>  * the {{Xmx}} java parameter bumped to {{16G (now bumped to 48G)}}
>  * in {{broker.xml}}, the {{global-max-size}} setting changed to {{8G (this 
> one we forgot to set, but we suspect that it is not the issue)}}
>  * {{journal-type}} set to {{MAPPED}}
>  * {{journal-datasync}}, {{journal-sync-non-transactional}} and 
> {{journal-sync-transactional}} all set to false
> Apart from that we changed machines' type we use to r5.2xlarge ( 8 cores, 64 
> GIB memory, Network bandwidth Up to 10 Gbps, Storage bandwidth Up to 4,750 
> Mbps) and we decided to always run twice as many receivers as senders.
> From our tests it looks like version 2.13.0 is not scaling as well, with the 
> increase of senders and receivers, as version 2.2.0 (previously tested). 
> Basically it is not scaling at all, as the throughput stays almost at the same 
> level, while previously it used to grow linearly.
> Here you can find our tests results for both versions: 
> [https://docs.google.com/spreadsheets/d/1kr9fzSNLD8bOhMkP7K_4axBQiKel1aJtpxsBCOy9ugU/edit?usp=sharing]
> We are aware that now there is a dedicated page in documentation about 
> performance tuning, but we are surprised that same settings as before 
> performs much worse.
> Maybe there is an obvious property which we overlooked which should be turned 
> on? 
> All changes between those versions together with the final configuration can 
> be found on this merged PR: 
> [https://github.com/softwaremill/mqperf/commit/6bfae489e11a250dc9e6ef59719782f839e8874a]
>  
> Charts showing machines' usage in attachments. Memory consumed by artemis 
> process didn't exceed ~16 GB. Bandwidth and CPU weren't bottlenecks either. 
> p.s. I wanted to ask this question on mailing list/nabble forum first but it 
> seems that I don't have permissions to do so even though I registered & 
> subscribed. Is that intentional?
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (ARTEMIS-2852) Huge performance decrease between versions 2.2.0 and 2.13.0

2020-08-05 Thread Francesco Nigro (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17171422#comment-17171422
 ] 

Francesco Nigro edited comment on ARTEMIS-2852 at 8/5/20, 11:52 AM:


[~kkondzielski]

Yep, that seems more correct:
- the 3 live-backup pairs are there to prevent split brain from happening 
- having a symmetric cluster should be ok to get proper redistribution of 
messages (and achieve server side load balancing)
- ideally a round robin client side load balancing would help clients to get 
proper load balancing for each client node 

I've still another question (and more later, just need to think more about it):
- what's the exact meaning of threads/sender nodes/receiver nodes in terms of 
the number of connections/sessions, etc.? 

And I've a proposal: to understand the scaling capability, even using a single 
broker (as has been done until now) is important to understand how the 
scalability of the whole system behaves, IMO. 
Would be nice to have 3 baselines: 
1) single broker (without HA) 
2) single HA pair (1 live- 1 backup)
3) 3 HA pairs to achieve HA with no split brain, but NO load balancing, just 1 
live processing messages

Considering that clients (code and machines) basically don't do any 
processing with the messages, I'm not quite sure that load balancing is needed 
here at all, hence I'm not 100% sure adding redistribution on the other 2 nodes 
is meaningful...
Probably [~jbertram] has some thoughts on this, given that he was going to add an 
OFF_WITH_REDISTRIBUTION option recently...


was (Author: nigro@gmail.com):
[~kkondzielski]

Yep, that seems more correct:
- the 3 live-backup pairs are there to save split brain to happen 
- having a symmetric cluster should be ok to get proper redistribution of 
messages (and achieve server side load balancing)
- ideally a round robin client side load balancing would help clients to get 
proper load balancing for each client node 

I've still another question (and more later, just need to think more about it):
- what's the exact meaning of threads/sender nodes/receiver nodes in term of 
number of connections/sessions etc etc? 

And I've a proposal: to understand the scaling capability, even using a single 
broker (that as been done until now) is important to understand how the 
scalability of the whole system behave IMO. 
Would be nice to have 3 baseline: 
1) single broker (without HA) 
2) single HA pair (1 live- 1 backup)
3) 3 HA pairs to achieve HA with no split brain, but NO load balancing, just 1 
live processing messages

Considering that clients (code and machines) basically doesn't do any 
processing with the messages I'm not quite sure that load balancing is needed 
here at all, hence I'm not 100% sure adding redistribution on the other 2 nodes 
is meaningful...
Probably [~jbertram] has some thoughts on this, given that was going to add an 
OFF_WITH_REDISTRIBUTION option recently...

> Huge performance decrease between versions 2.2.0 and 2.13.0
> ---
>
> Key: ARTEMIS-2852
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2852
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Kasper Kondzielski
>Priority: Major
> Attachments: Selection_433.png, Selection_434.png, Selection_440.png, 
> Selection_441.png, Selection_451.png
>
>
> Hi,
> Recently, we started to prepare a new revision of our blog-post in which we 
> test various implementations of replicated queues. Previous version can be 
> found here:  [https://softwaremill.com/mqperf/]
> We updated artemis binary to 2.13.0, regenerated configuration file and 
> applied all the performance tricks you told us last time. In particular these 
> were:
>  * the {{Xmx}} java parameter bumped to {{16G (now bumped to 48G)}}
>  * in {{broker.xml}}, the {{global-max-size}} setting changed to {{8G (this 
> one we forgot to set, but we suspect that it is not the issue)}}
>  * {{journal-type}} set to {{MAPPED}}
>  * {{journal-datasync}}, {{journal-sync-non-transactional}} and 
> {{journal-sync-transactional}} all set to false
> Apart from that we changed machines' type we use to r5.2xlarge ( 8 cores, 64 
> GIB memory, Network bandwidth Up to 10 Gbps, Storage bandwidth Up to 4,750 
> Mbps) and we decided to always run twice as many receivers as senders.
> From our tests it looks like version 2.13.0 is not scaling as well, with the 
> increase of senders and receivers, as version 2.2.0 (previously tested). 
> Basically it is not scaling at all, as the throughput stays almost at the same 
> level, while previously it used to grow linearly.
> Here you can find our tests results for both versions: 
> [https://docs.google.com/spreadsheets/d/1kr9fzSNLD8bOhMkP7K_4axBQiKel1aJtpxsBCOy9ugU/edit?usp=sharing]
> We are aware that now there is a ded

[jira] [Comment Edited] (ARTEMIS-2852) Huge performance decrease between versions 2.2.0 and 2.13.0

2020-08-05 Thread Francesco Nigro (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17171422#comment-17171422
 ] 

Francesco Nigro edited comment on ARTEMIS-2852 at 8/5/20, 11:52 AM:


[~kkondzielski]

Yep, that seems more correct:
- the 3 live-backup pairs are there to prevent split brain from happening 
- having a symmetric cluster should be ok to get proper redistribution of 
messages (and achieve server side load balancing)
- ideally a round robin client side load balancing would help clients to get 
proper load balancing for each client node 

I've still another question (and more later, just need to think more about it):
- what's the exact meaning of threads/sender nodes/receiver nodes in terms of 
the number of connections/sessions, etc.? 

And I've a proposal: to understand the scaling capability, even using a single 
broker (as has been done until now) is important to understand how the 
scalability of the whole system behaves, IMO. 
Would be nice to have 3 baselines: 
1) single broker (without HA) 
2) single HA pair (1 live- 1 backup)
3) 3 HA pairs to achieve HA with no split brain, but NO load balancing, just 1 
live processing messages

Considering that clients (code and machines) basically don't do any 
processing with the messages, I'm not quite sure that load balancing is needed 
here at all, hence I'm not 100% sure adding redistribution on the other 2 nodes 
is meaningful...
Probably [~jbertram] has some thoughts on this, given that he was going to add an 
OFF_WITH_REDISTRIBUTION option recently...


was (Author: nigro@gmail.com):
[~kkondzielski]

Yep, that seems more correct:
- the 3 live-backup pairs are there to save split brain to happen 
- having a symmetric cluster should be ok to get proper redistribution of 
messages (and achieve server side load balancing)
- ideally a round robin client side load balancing would help clients to get 
proper load balancing for each client node 

I've still another question (and more later, just need to think more about it):
- what's the exact meaning of threads/sender nodes/receiver nodes in term of 
number of connections/sessions etc etc? 

And I've a proposal: to understand the scaling capability, even using a single 
broker (that as been done until now) is important to understand how the 
scalability of the whole system behave IMO. 
Would be nice to have 2 baseline: 
1) single broker (without HA) 
2) single HA pair (1 live- 1 backup)

And later adding the other 2 live-backup pairs to check how the number changes. 
wdyt?


> Huge performance decrease between versions 2.2.0 and 2.13.0
> ---
>
> Key: ARTEMIS-2852
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2852
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Kasper Kondzielski
>Priority: Major
> Attachments: Selection_433.png, Selection_434.png, Selection_440.png, 
> Selection_441.png, Selection_451.png
>
>
> Hi,
> Recently, we started to prepare a new revision of our blog-post in which we 
> test various implementations of replicated queues. Previous version can be 
> found here:  [https://softwaremill.com/mqperf/]
> We updated artemis binary to 2.13.0, regenerated configuration file and 
> applied all the performance tricks you told us last time. In particular these 
> were:
>  * the {{Xmx}} java parameter bumped to {{16G (now bumped to 48G)}}
>  * in {{broker.xml}}, the {{global-max-size}} setting changed to {{8G (this 
> one we forgot to set, but we suspect that it is not the issue)}}
>  * {{journal-type}} set to {{MAPPED}}
>  * {{journal-datasync}}, {{journal-sync-non-transactional}} and 
> {{journal-sync-transactional}} all set to false
> Apart from that we changed machines' type we use to r5.2xlarge ( 8 cores, 64 
> GIB memory, Network bandwidth Up to 10 Gbps, Storage bandwidth Up to 4,750 
> Mbps) and we decided to always run twice as many receivers as senders.
> From our tests it looks like version 2.13.0 is not scaling as well, with the 
> increase of senders and receivers, as version 2.2.0 (previously tested). 
> Basically it is not scaling at all, as the throughput stays almost at the same 
> level, while previously it used to grow linearly.
> Here you can find our tests results for both versions: 
> [https://docs.google.com/spreadsheets/d/1kr9fzSNLD8bOhMkP7K_4axBQiKel1aJtpxsBCOy9ugU/edit?usp=sharing]
> We are aware that now there is a dedicated page in documentation about 
> performance tuning, but we are surprised that same settings as before 
> performs much worse.
> Maybe there is an obvious property which we overlooked which should be turned 
> on? 
> All changes between those versions together with the final configuration can 
> be found on this merged PR: 
> [https://github.com/softwaremill/mqperf/commit/6bfae48

[jira] [Updated] (ARTEMIS-2870) CORE connection failure sometimes doesn't cleanup sessions

2020-08-05 Thread Markus Meierhofer (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Markus Meierhofer updated ARTEMIS-2870:
---
Priority: Critical  (was: Major)

> CORE connection failure sometimes doesn't cleanup sessions
> --
>
> Key: ARTEMIS-2870
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2870
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.10.1
>Reporter: Markus Meierhofer
>Priority: Critical
> Attachments: artemis.log, broker.xml, duplicated consumers.png
>
>
> h3. Summary
> Since the upgrade of our deployed artemis instances from version 2.6.4 to 
> 2.10.1 we have noticed the problem that sometimes, a connection failure 
> doesn't include the cleanup of its connected sessions, leading to "zombie" 
> consumers and producers on queues.
>  
> h3. The issue
> Our Artemis Clients are connected to the broker via the provided JMS 
> abstraction, using the default connection TTL of 60 seconds. We are using 
> both JMS Topics and JMS Queues.
> As most of our Clients are mobile and in a WiFi, connection losses may occur 
> frequently, depending on the quality of the network. When the client is 
> disconnected for 60 seconds, the broker usually closes the connection and 
> cleans up all the sessions connected to it. The mobile Clients then 
> reconnect when they are online again. What we have noticed is that after many 
> connection failures, messages may be sent twice to the mobile clients. 
> When analyzing the problem on the broker console, we found out that there 
> were two consumers connected to each of the queues one mobile client usually 
> consumes from. One of them belonged to the new connection of the mobile 
> Client, which is fine.
> The other consumer belonged to a session whose connection already failed and 
> was closed at that time. When analyzing the logs, we saw that for these 
> connections, it contained a "Connection failure to ... has been detected" 
> line, but no following "clearing up resources for session ..." log lines for 
> these connections.
>  
> h3. Instance of the issue
>  
> The broken Session is the "7a9292cb-xxx" in the picture. In the logs you can 
> see that the connection failure was detected, but the session was never 
> cleared by the broker (mind the timestamp).
> !duplicated consumers.png!
> {code:java}
> [WARN 2020-07-27 14:33:29,794  Thread-13  
> org.apache.activemq.artemis.core.client]: AMQ212037: Connection failure to 
> /10.255.0.2:54812 has been detected: syscall:read(..) failed: Connection 
> reset by peer [code=GENERIC_EXCEPTION]
> [WARN 2020-07-29 09:31:30,828 Thread-20   
> org.apache.activemq.artemis.core.client]: AMQ212037: Connection failure to 
> /10.255.0.2:55994 has been detected: AMQ229014: Did not receive data from 
> /10.255.0.2:55994 within the 60,000ms connection TTL. The connection will now 
> be closed. [code=CONNECTION_TIMEDOUT]
> {code}
>  
> Attached you can find the full [^artemis.log] and our [^broker.xml]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (ARTEMIS-2852) Huge performance decrease between versions 2.2.0 and 2.13.0

2020-08-05 Thread Francesco Nigro (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17171422#comment-17171422
 ] 

Francesco Nigro edited comment on ARTEMIS-2852 at 8/5/20, 11:04 AM:


[~kkondzielski]

Yep, that seems more correct:
- the 3 live-backup pairs are there to prevent split brain from happening 
- having a symmetric cluster should be ok to get proper redistribution of 
messages (and achieve server side load balancing)
- ideally a round robin client side load balancing would help clients to get 
proper load balancing for each client node 

I've still another question (and more later, just need to think more about it):
- what's the exact meaning of threads/sender nodes/receiver nodes in terms of 
the number of connections/sessions, etc.? 

And I've a proposal: to understand the scaling capability, even using a single 
broker (as has been done until now) is important to understand how the 
scalability of the whole system behaves, IMO. 
Would be nice to have 2 baselines: 
1) single broker (without HA) 
2) single HA pair (1 live- 1 backup)

And later adding the other 2 live-backup pairs to check how the number changes. 
wdyt?



was (Author: nigro@gmail.com):
Yep, that seems more correct:
- the 3 live-backup pairs are there to save split brain to happen 
- having a symmetric cluster should be ok to get proper redistribution of 
messages (and achieve server side load balancing)
- ideally a round robin client side load balancing would help clients to get 
proper load balancing for each client node 

I've still another question (and more later, just need to think more about it):
- what's the exact meaning of threads/sender nodes/receiver nodes in term of 
number of connections/sessions etc etc? 

And I've a proposal: to understand the scaling capability, even using a single 
broker (that as been done until now) is important to understand how the 
scalability of the whole system behave IMO. 
Would be nice to have 2 baseline: 
1) single broker (without HA) 
2) single HA pair (1 live- 1 backup)

And later adding the other 2 live-backup pairs to check how the number changes. 
wdyt?


> Huge performance decrease between versions 2.2.0 and 2.13.0
> ---
>
> Key: ARTEMIS-2852
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2852
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Kasper Kondzielski
>Priority: Major
> Attachments: Selection_433.png, Selection_434.png, Selection_440.png, 
> Selection_441.png, Selection_451.png
>
>
> Hi,
> Recently, we started to prepare a new revision of our blog-post in which we 
> test various implementations of replicated queues. Previous version can be 
> found here:  [https://softwaremill.com/mqperf/]
> We updated artemis binary to 2.13.0, regenerated configuration file and 
> applied all the performance tricks you told us last time. In particular these 
> were:
>  * the {{Xmx}} java parameter bumped to {{16G (now bumped to 48G)}}
>  * in {{broker.xml}}, the {{global-max-size}} setting changed to {{8G (this 
> one we forgot to set, but we suspect that it is not the issue)}}
>  * {{journal-type}} set to {{MAPPED}}
>  * {{journal-datasync}}, {{journal-sync-non-transactional}} and 
> {{journal-sync-transactional}} all set to false
> Apart from that we changed machines' type we use to r5.2xlarge ( 8 cores, 64 
> GIB memory, Network bandwidth Up to 10 Gbps, Storage bandwidth Up to 4,750 
> Mbps) and we decided to always run twice as many receivers as senders.
> From our tests it looks like version 2.13.0 is not scaling as well, with the 
> increase of senders and receivers, as version 2.2.0 (previously tested). 
> Basically it is not scaling at all, as the throughput stays almost at the same 
> level, while previously it used to grow linearly.
> Here you can find our tests results for both versions: 
> [https://docs.google.com/spreadsheets/d/1kr9fzSNLD8bOhMkP7K_4axBQiKel1aJtpxsBCOy9ugU/edit?usp=sharing]
> We are aware that now there is a dedicated page in documentation about 
> performance tuning, but we are surprised that same settings as before 
> performs much worse.
> Maybe there is an obvious property which we overlooked which should be turned 
> on? 
> All changes between those versions together with the final configuration can 
> be found on this merged PR: 
> [https://github.com/softwaremill/mqperf/commit/6bfae489e11a250dc9e6ef59719782f839e8874a]
>  
> Charts showing machines' usage in attachments. Memory consumed by artemis 
> process didn't exceed ~16 GB. Bandwidth and CPU weren't bottlenecks either. 
> p.s. I wanted to ask this question on mailing list/nabble forum first but it 
> seems that I don't have permissions to do so even though I registered & 
> subscribed. Is that intentional?
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

[jira] [Commented] (ARTEMIS-2852) Huge performance decrease between versions 2.2.0 and 2.13.0

2020-08-05 Thread Francesco Nigro (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17171422#comment-17171422
 ] 

Francesco Nigro commented on ARTEMIS-2852:
--

Yep, that seems more correct:
- the 3 live-backup pairs are there to prevent split brain from happening 
- having a symmetric cluster should be ok to get proper redistribution of 
messages (and achieve server side load balancing)
- ideally a round robin client side load balancing would help clients to get 
proper load balancing for each client node 

I've still another question (and more later, just need to think more about it):
- what's the exact meaning of threads/sender nodes/receiver nodes in terms of 
the number of connections/sessions, etc.? 

And I've a proposal: to understand the scaling capability, even using a single 
broker (as has been done until now) is important to understand how the 
scalability of the whole system behaves, IMO. 
Would be nice to have 2 baselines: 
1) single broker (without HA) 
2) single HA pair (1 live- 1 backup)

And later adding the other 2 live-backup pairs to check how the number changes. 
wdyt?


> Huge performance decrease between versions 2.2.0 and 2.13.0
> ---
>
> Key: ARTEMIS-2852
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2852
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Kasper Kondzielski
>Priority: Major
> Attachments: Selection_433.png, Selection_434.png, Selection_440.png, 
> Selection_441.png, Selection_451.png
>
>
> Hi,
> Recently, we started to prepare a new revision of our blog-post in which we 
> test various implementations of replicated queues. Previous version can be 
> found here:  [https://softwaremill.com/mqperf/]
> We updated artemis binary to 2.13.0, regenerated configuration file and 
> applied all the performance tricks you told us last time. In particular these 
> were:
>  * the {{Xmx}} java parameter bumped to {{16G (now bumped to 48G)}}
>  * in {{broker.xml}}, the {{global-max-size}} setting changed to {{8G (this 
> one we forgot to set, but we suspect that it is not the issue)}}
>  * {{journal-type}} set to {{MAPPED}}
>  * {{journal-datasync}}, {{journal-sync-non-transactional}} and 
> {{journal-sync-transactional}} all set to false
> Apart from that we changed machines' type we use to r5.2xlarge ( 8 cores, 64 
> GIB memory, Network bandwidth Up to 10 Gbps, Storage bandwidth Up to 4,750 
> Mbps) and we decided to always run twice as many receivers as senders.
> From our tests it looks like version 2.13.0 is not scaling as well, with the 
> increase of senders and receivers, as version 2.2.0 (previously tested). 
> Basically it is not scaling at all, as the throughput stays almost at the same 
> level, while previously it used to grow linearly.
> Here you can find our tests results for both versions: 
> [https://docs.google.com/spreadsheets/d/1kr9fzSNLD8bOhMkP7K_4axBQiKel1aJtpxsBCOy9ugU/edit?usp=sharing]
> We are aware that now there is a dedicated page in documentation about 
> performance tuning, but we are surprised that same settings as before 
> performs much worse.
> Maybe there is an obvious property which we overlooked which should be turned 
> on? 
> All changes between those versions together with the final configuration can 
> be found on this merged PR: 
> [https://github.com/softwaremill/mqperf/commit/6bfae489e11a250dc9e6ef59719782f839e8874a]
>  
> Charts showing machines' usage in attachments. Memory consumed by artemis 
> process didn't exceed ~16 GB. Bandwidth and CPU weren't bottlenecks either. 
> p.s. I wanted to ask this question on mailing list/nabble forum first but it 
> seems that I don't have permissions to do so even though I registered & 
> subscribed. Is that intentional?
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (ARTEMIS-2852) Huge performance decrease between versions 2.2.0 and 2.13.0

2020-08-05 Thread Kasper Kondzielski (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17171394#comment-17171394
 ] 

Kasper Kondzielski edited comment on ARTEMIS-2852 at 8/5/20, 10:02 AM:
---

I think that you got it right. We have 1 master and 2 slaves. 
 We wanted to achieve safe and persistent data replication. That's why we chose 
master-slave configuration, as it is the only one which guarantees replication. 
I know that the additional slave isn't used as only a single slave can be 
connected to a given master. I think that this is actually a leftover from a 
previous configuration and I just left it as it was. 

Maybe it would be easier to describe what we were trying to achieve based on 
some real example of another queue. Take a look at rabbitMq with their quorum 
queues for example. Given a cluster of 3 nodes, each node participates equally 
in message processing and data replication, i.e. data won't be lost even 
if any of them goes down.

Having said that I started to think that our test might be a little bit unfair, 
since we configured data replication (using master-slave approach) but we 
didn't take care of message redistribution.  Am I right, that a cluster of 3 
master nodes connected with each other and 3 slave nodes, each connected with 
a particular master node, would be a more appropriate solution?

Something like that:

!Selection_451.png!

Which also should solve the splitbrain problem.

Keep in mind that in our tests we are not scaling the cluster but rather the number 
of senders and receivers.

 


was (Author: kkondzielski):
I think that you got it right. We have 1 master and 2 slaves. 
We wanted to achieve safe and persistent data replication. That's why we chose 
master-slave configuration, as it is the only one which guarantees replication. 
I know that the additional slave isn't used as only a single slave can be 
connected to a given master. I think that this is actually a leftover from a 
previous configurations and I just left is as it was. 

Maybe it would be easier to describe what we were trying to achieve based on 
some real example of another queue. Take a look at rabbitMq with their quorum 
queues for example. Given a cluster of 3 nodes each node participates equally 
to message processing and data replication. i.e. words data won't be lost even 
if any of them goes down.

Having said that I started to think that our test might be a little bit unfair, 
since we configured data replication (using master-slave approach) but we 
didn't take care of message redistribution.  Am I right, that a cluster of 3 
master nodes connected with each others and 3 slave nodes, each connected with 
a particular master node, would be a more appropriate solution?

Something like that:

!Selection_451.png!

Keep in mind that in our tests we are not scaling the cluster but rather amount 
of sender and receivers.

 

> Huge performance decrease between versions 2.2.0 and 2.13.0
> ---
>
> Key: ARTEMIS-2852
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2852
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Kasper Kondzielski
>Priority: Major
> Attachments: Selection_433.png, Selection_434.png, Selection_440.png, 
> Selection_441.png, Selection_451.png
>
>
> Hi,
> Recently, we started to prepare a new revision of our blog-post in which we 
> test various implementations of replicated queues. Previous version can be 
> found here:  [https://softwaremill.com/mqperf/]
> We updated artemis binary to 2.13.0, regenerated configuration file and 
> applied all the performance tricks you told us last time. In particular these 
> were:
>  * the {{Xmx}} java parameter bumped to {{16G (now bumped to 48G)}}
>  * in {{broker.xml}}, the {{global-max-size}} setting changed to {{8G (this 
> one we forgot to set, but we suspect that it is not the issue)}}
>  * {{journal-type}} set to {{MAPPED}}
>  * {{journal-datasync}}, {{journal-sync-non-transactional}} and 
> {{journal-sync-transactional}} all set to false
> Apart from that we changed machines' type we use to r5.2xlarge ( 8 cores, 64 
> GIB memory, Network bandwidth Up to 10 Gbps, Storage bandwidth Up to 4,750 
> Mbps) and we decided to always run twice as many receivers as senders.
> From our tests it looks like version 2.13.0 is not scaling as well, with the 
> increase of senders and receivers, as version 2.2.0 (previously tested). 
> Basically it is not scaling at all, as the throughput stays almost at the same 
> level, while previously it used to grow linearly.
> Here you can find our tests results for both versions: 
> [https://docs.google.com/spreadsheets/d/1kr9fzSNLD8bOhMkP7K_4axBQiKel1aJtpxsBCOy9ugU/edit?usp=sharing]
> We are aware that now there is a dedicated page in d

[jira] [Commented] (ARTEMIS-2852) Huge performance decrease between versions 2.2.0 and 2.13.0

2020-08-05 Thread Kasper Kondzielski (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17171394#comment-17171394
 ] 

Kasper Kondzielski commented on ARTEMIS-2852:
-

I think that you got it right. We have 1 master and 2 slaves. 
We wanted to achieve safe and persistent data replication. That's why we chose 
master-slave configuration, as it is the only one which guarantees replication. 
I know that the additional slave isn't used as only a single slave can be 
connected to a given master. I think that this is actually a leftover from a 
previous configuration and I just left it as it was. 

Maybe it would be easier to describe what we were trying to achieve based on 
some real example of another queue. Take a look at rabbitMq with their quorum 
queues for example. Given a cluster of 3 nodes, each node participates equally 
in message processing and data replication, i.e. data won't be lost even 
if any of them goes down.

Having said that I started to think that our test might be a little bit unfair, 
since we configured data replication (using master-slave approach) but we 
didn't take care of message redistribution.  Am I right, that a cluster of 3 
master nodes connected with each other and 3 slave nodes, each connected with 
a particular master node, would be a more appropriate solution?

Something like that:

!Selection_451.png!

Keep in mind that in our tests we are not scaling the cluster but rather the number 
of senders and receivers.

 

> Huge performance decrease between versions 2.2.0 and 2.13.0
> ---
>
> Key: ARTEMIS-2852
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2852
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Kasper Kondzielski
>Priority: Major
> Attachments: Selection_433.png, Selection_434.png, Selection_440.png, 
> Selection_441.png, Selection_451.png
>
>
> Hi,
> Recently, we started to prepare a new revision of our blog-post in which we 
> test various implementations of replicated queues. Previous version can be 
> found here:  [https://softwaremill.com/mqperf/]
> We updated artemis binary to 2.13.0, regenerated configuration file and 
> applied all the performance tricks you told us last time. In particular these 
> were:
>  * the {{Xmx}} java parameter bumped to {{16G (now bumped to 48G)}}
>  * in {{broker.xml}}, the {{global-max-size}} setting changed to {{8G (this 
> one we forgot to set, but we suspect that it is not the issue)}}
>  * {{journal-type}} set to {{MAPPED}}
>  * {{journal-datasync}}, {{journal-sync-non-transactional}} and 
> {{journal-sync-transactional}} all set to false
> Apart from that we changed machines' type we use to r5.2xlarge ( 8 cores, 64 
> GIB memory, Network bandwidth Up to 10 Gbps, Storage bandwidth Up to 4,750 
> Mbps) and we decided to always run twice as many receivers as senders.
> From our tests it looks like version 2.13.0 is not scaling as well, with the 
> increase of senders and receivers, as version 2.2.0 (previously tested). 
> Basically it is not scaling at all, as the throughput stays almost at the same 
> level, while previously it used to grow linearly.
> Here you can find our tests results for both versions: 
> [https://docs.google.com/spreadsheets/d/1kr9fzSNLD8bOhMkP7K_4axBQiKel1aJtpxsBCOy9ugU/edit?usp=sharing]
> We are aware that now there is a dedicated page in documentation about 
> performance tuning, but we are surprised that same settings as before 
> performs much worse.
> Maybe there is an obvious property which we overlooked which should be turned 
> on? 
> All changes between those versions together with the final configuration can 
> be found on this merged PR: 
> [https://github.com/softwaremill/mqperf/commit/6bfae489e11a250dc9e6ef59719782f839e8874a]
>  
> Charts showing machines' usage in attachments. Memory consumed by artemis 
> process didn't exceed ~16 GB. Bandwidth and CPU weren't bottlenecks either. 
> p.s. I wanted to ask this question on mailing list/nabble forum first but it 
> seems that I don't have permissions to do so even though I registered & 
> subscribed. Is that intentional?
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (ARTEMIS-2852) Huge performance decrease between versions 2.2.0 and 2.13.0

2020-08-05 Thread Kasper Kondzielski (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kasper Kondzielski updated ARTEMIS-2852:

Attachment: Selection_451.png

> Huge performance decrease between versions 2.2.0 and 2.13.0
> ---
>
> Key: ARTEMIS-2852
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2852
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Kasper Kondzielski
>Priority: Major
> Attachments: Selection_433.png, Selection_434.png, Selection_440.png, 
> Selection_441.png, Selection_451.png
>
>
> Hi,
> Recently, we started to prepare a new revision of our blog-post in which we 
> test various implementations of replicated queues. Previous version can be 
> found here:  [https://softwaremill.com/mqperf/]
> We updated artemis binary to 2.13.0, regenerated configuration file and 
> applied all the performance tricks you told us last time. In particular these 
> were:
>  * the {{Xmx}} java parameter bumped to {{16G (now bumped to 48G)}}
>  * in {{broker.xml}}, the {{global-max-size}} setting changed to {{8G (this 
> one we forgot to set, but we suspect that it is not the issue)}}
>  * {{journal-type}} set to {{MAPPED}}
>  * {{journal-datasync}}, {{journal-sync-non-transactional}} and 
> {{journal-sync-transactional}} all set to false
> Apart from that we changed machines' type we use to r5.2xlarge ( 8 cores, 64 
> GIB memory, Network bandwidth Up to 10 Gbps, Storage bandwidth Up to 4,750 
> Mbps) and we decided to always run twice as many receivers as senders.
> From our tests it looks like version 2.13.0 is not scaling as well, with the 
> increase of senders and receivers, as version 2.2.0 (previously tested). 
> Basically it is not scaling at all, as the throughput stays almost at the same 
> level, while previously it used to grow linearly.
> Here you can find our tests results for both versions: 
> [https://docs.google.com/spreadsheets/d/1kr9fzSNLD8bOhMkP7K_4axBQiKel1aJtpxsBCOy9ugU/edit?usp=sharing]
> We are aware that now there is a dedicated page in documentation about 
> performance tuning, but we are surprised that same settings as before 
> performs much worse.
> Maybe there is an obvious property which we overlooked which should be turned 
> on? 
> All changes between those versions together with the final configuration can 
> be found on this merged PR: 
> [https://github.com/softwaremill/mqperf/commit/6bfae489e11a250dc9e6ef59719782f839e8874a]
>  
> Charts showing machines' usage in attachments. Memory consumed by artemis 
> process didn't exceed ~16 GB. Bandwidth and CPU weren't bottlenecks either. 
> p.s. I wanted to ask this question on mailing list/nabble forum first but it 
> seems that I don't have permissions to do so even though I registered & 
> subscribed. Is that intentional?
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (ARTEMIS-2852) Huge performance decrease between versions 2.2.0 and 2.13.0

2020-08-05 Thread Francesco Nigro (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17171356#comment-17171356
 ] 

Francesco Nigro edited comment on ARTEMIS-2852 at 8/5/20, 8:59 AM:
---

[~kkondzielski] 
I've looked at 
https://github.com/softwaremill/mqperf/blob/16c76aec30c5b68ff3bee1c2783cf6c35fb1ad8c/ansible/roles/artemis/tasks/main.yml
 but I don't understand how it works:
it seems to me you have 1 master and 2 slaves; that's not a proper 
configuration either to scale or to prevent split-brain from happening.
In your configuration you won't have message load balancing (ie messages won't 
be sent across cluster members, because slaves are "passive"), slaves won't 
participate in quorum decisions (ie split brain can still happen) and only 1 
slave can receive the replicated journal records (ie there is no increased 
availability)...
Basically only the master is being queried for its messages... what are you trying 
to achieve?
Maybe I've misunderstood the architecture?


was (Author: nigro@gmail.com):
[~kkondzielski] 
I've looked on 
https://github.com/softwaremill/mqperf/blob/16c76aec30c5b68ff3bee1c2783cf6c35fb1ad8c/ansible/roles/artemis/tasks/main.yml
 but I don't understand how it works:
it seems to me you have 1 master and 2 slaves, that's not a proper 
configuration nor to scale or save split-brain to happen.
In your configuration you won't have message load balancing (ie messages won't 
be sent across cluster members, because slaves are "passive"), slaves won't 
participate on quorum decisions (ie split brain can still happen) and only 1 
slave can receive the replicated journal records (ie there is no increased 
availability)...
Basically only master is being queried for its messages...what are you trying 
to achieve?

> Huge performance decrease between versions 2.2.0 and 2.13.0
> ---
>
> Key: ARTEMIS-2852
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2852
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Kasper Kondzielski
>Priority: Major
> Attachments: Selection_433.png, Selection_434.png, Selection_440.png, 
> Selection_441.png
>
>
> Hi,
> Recently, we started to prepare a new revision of our blog-post in which we 
> test various implementations of replicated queues. Previous version can be 
> found here:  [https://softwaremill.com/mqperf/]
> We updated artemis binary to 2.13.0, regenerated configuration file and 
> applied all the performance tricks you told us last time. In particular these 
> were:
>  * the {{Xmx}} java parameter bumped to {{16G (now bumped to 48G)}}
>  * in {{broker.xml}}, the {{global-max-size}} setting changed to {{8G (this 
> one we forgot to set, but we suspect that it is not the issue)}}
>  * {{journal-type}} set to {{MAPPED}}
>  * {{journal-datasync}}, {{journal-sync-non-transactional}} and 
> {{journal-sync-transactional}} all set to false
> Apart from that we changed machines' type we use to r5.2xlarge ( 8 cores, 64 
> GIB memory, Network bandwidth Up to 10 Gbps, Storage bandwidth Up to 4,750 
> Mbps) and we decided to always run twice as many receivers as senders.
> From our tests it looks like version 2.13.0 is not scaling as well, with the 
> increase of senders and receivers, as version 2.2.0 (previously tested). 
> Basically it is not scaling at all, as the throughput stays almost at the same 
> level, while previously it used to grow linearly.
> Here you can find our tests results for both versions: 
> [https://docs.google.com/spreadsheets/d/1kr9fzSNLD8bOhMkP7K_4axBQiKel1aJtpxsBCOy9ugU/edit?usp=sharing]
> We are aware that now there is a dedicated page in documentation about 
> performance tuning, but we are surprised that same settings as before 
> performs much worse.
> Maybe there is an obvious property which we overlooked which should be turned 
> on? 
> All changes between those versions together with the final configuration can 
> be found on this merged PR: 
> [https://github.com/softwaremill/mqperf/commit/6bfae489e11a250dc9e6ef59719782f839e8874a]
>  
> Charts showing machines' usage in attachments. Memory consumed by artemis 
> process didn't exceed ~16 GB. Bandwidth and CPU weren't bottlenecks either. 
> p.s. I wanted to ask this question on mailing list/nabble forum first but it 
> seems that I don't have permissions to do so even though I registered & 
> subscribed. Is that intentional?
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-2852) Huge performance decrease between versions 2.2.0 and 2.13.0

2020-08-05 Thread Francesco Nigro (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17171356#comment-17171356
 ] 

Francesco Nigro commented on ARTEMIS-2852:
--

[~kkondzielski] 
I've looked at 
https://github.com/softwaremill/mqperf/blob/16c76aec30c5b68ff3bee1c2783cf6c35fb1ad8c/ansible/roles/artemis/tasks/main.yml
 but I don't understand how it works:
it seems to me you have 1 master and 2 slaves; that's not a proper 
configuration either to scale or to prevent split-brain from happening.
In your configuration you won't have message load balancing (ie messages won't 
be sent across cluster members, because slaves are "passive"), slaves won't 
participate in quorum decisions (ie split brain can still happen) and only 1 
slave can receive the replicated journal records (ie there is no increased 
availability)...
Basically only the master is being queried for its messages... what are you trying 
to achieve?

> Huge performance decrease between versions 2.2.0 and 2.13.0
> ---
>
> Key: ARTEMIS-2852
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2852
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Kasper Kondzielski
>Priority: Major
> Attachments: Selection_433.png, Selection_434.png, Selection_440.png, 
> Selection_441.png
>
>
> Hi,
> Recently, we started to prepare a new revision of our blog-post in which we 
> test various implementations of replicated queues. Previous version can be 
> found here:  [https://softwaremill.com/mqperf/]
> We updated artemis binary to 2.13.0, regenerated configuration file and 
> applied all the performance tricks you told us last time. In particular these 
> were:
>  * the {{Xmx}} java parameter bumped to {{16G (now bumped to 48G)}}
>  * in {{broker.xml}}, the {{global-max-size}} setting changed to {{8G (this 
> one we forgot to set, but we suspect that it is not the issue)}}
>  * {{journal-type}} set to {{MAPPED}}
>  * {{journal-datasync}}, {{journal-sync-non-transactional}} and 
> {{journal-sync-transactional}} all set to false
> Apart from that we changed machines' type we use to r5.2xlarge ( 8 cores, 64 
> GIB memory, Network bandwidth Up to 10 Gbps, Storage bandwidth Up to 4,750 
> Mbps) and we decided to always run twice as many receivers as senders.
> From our tests it looks like version 2.13.0 is not scaling as well, with the 
> increase of senders and receivers, as version 2.2.0 (previously tested). 
> Basically it is not scaling at all, as the throughput stays almost at the same 
> level, while previously it used to grow linearly.
> Here you can find our tests results for both versions: 
> [https://docs.google.com/spreadsheets/d/1kr9fzSNLD8bOhMkP7K_4axBQiKel1aJtpxsBCOy9ugU/edit?usp=sharing]
> We are aware that now there is a dedicated page in documentation about 
> performance tuning, but we are surprised that same settings as before 
> performs much worse.
> Maybe there is an obvious property which we overlooked which should be turned 
> on? 
> All changes between those versions together with the final configuration can 
> be found on this merged PR: 
> [https://github.com/softwaremill/mqperf/commit/6bfae489e11a250dc9e6ef59719782f839e8874a]
>  
> Charts showing machines' usage in attachments. Memory consumed by artemis 
> process didn't exceed ~16 GB. Bandwidth and CPU weren't bottlenecks either. 
> p.s. I wanted to ask this question on mailing list/nabble forum first but it 
> seems that I don't have permissions to do so even though I registered & 
> subscribed. Is that intentional?
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (ARTEMIS-2866) AMQ214015: Failed to execute connection life cycle listener

2020-08-05 Thread Jira


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiri Daněk closed ARTEMIS-2866.
---
Resolution: Not A Bug

The behavior does not depend on the broker version. I saw the exact same 
behavior with all broker versions from 2.8.0 to 2.14.0.

The message {{ERROR - AMQ214015: Failed to execute connection life cycle 
listener}} seems to be harmless and unrelated to my problem. I do not know 
what's causing it or what to do about it, but it has no bearing on my issue.

My problem was that I was creating the queue on the broker with

{code:java}
Configuration configuration = new ConfigurationImpl();
configuration.addQueueConfiguration(new CoreQueueConfiguration().setAddress(someQueue));
{code}

and I ended up with an autocreated address.

When I replaced the above with

{code:java}
configuration.addQueueConfiguration(new QueueConfiguration(someQueue)
.setAddress(someQueue)
.setAutoCreated(false)
.setRoutingType(RoutingType.ANYCAST)
.setAutoDelete(false)
.setPurgeOnNoConsumers(false));
{code}

the test failure went away.

The test code is

{code:java}
// call session.recover() and see that the unacked message is returned to the queue each time
JmsConnectionFactory factory = new JmsConnectionFactory(brokerUrl);
try (Connection connection = factory.createConnection()) {
    connection.start();
    try (Session session = connection.createSession(Session.CLIENT_ACKNOWLEDGE)) {
        // this will use up the default delivery retry count; the message gets discarded by the broker
        for (int i = 0; i < 10; i++) {
            try (MessageConsumer consumer = session.createConsumer(destination)) {
                Message message = consumer.receive(1000);
                assertThat(message).isNotNull();

                if (i != 0) {
                    assertThat(message.getJMSRedelivered()).isTrue();
                }
                long deliveryCount = message.getLongProperty("JMSXDeliveryCount");
                assertThat(deliveryCount).isEqualTo(i + 1);

                session.recover();
            } // close consumer
        }
    }
}
{code}

> AMQ214015: Failed to execute connection life cycle listener
> ---
>
> Key: ARTEMIS-2866
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2866
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP
>Affects Versions: 2.14.0
>Reporter: Jiri Daněk
>Priority: Major
>
> I've upgraded Artemis from version 2.10.1 to 2.14.0 and have now started 
> seeing an occasional test failure in my project 
> https://github.com/rh-messaging/cli-java.
> https://travis-ci.org/github/rh-messaging/cli-java/jobs/714822739#L17121
> {noformat}
> ERROR - AMQ214015: Failed to execute connection life cycle listener
> java.lang.NullPointerException
>   at 
> org.apache.activemq.artemis.utils.actors.ProcessorBase.onAddedTaskIfNotRunning(ProcessorBase.java:186)
>   at 
> org.apache.activemq.artemis.utils.actors.ProcessorBase.task(ProcessorBase.java:174)
>   at 
> org.apache.activemq.artemis.utils.actors.OrderedExecutor.execute(OrderedExecutor.java:54)
>   at 
> org.apache.activemq.artemis.core.remoting.impl.netty.ActiveMQChannelHandler.exceptionCaught(ActiveMQChannelHandler.java:106)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:302)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:264)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:248)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:241)
>   at 
> io.netty.channel.DefaultChannelPipeline$HeadContext.channelInactive(DefaultChannelPipeline.java:1405)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:262)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:248)
>   at 
> io.netty.channel.DefaultChannelPipeline.fireChannelInactive(DefaultChannelPipeline.java:901)
>   at 
> io.netty.channel.AbstractChannel$AbstractUnsafe$8.run(AbstractChannel.java:818)
>   at 
> io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
>   at 
> io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
>   at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java

[jira] [Commented] (ARTEMIS-2861) Add queue name as a parameter to ActiveMQSecurityManager

2020-08-05 Thread Jira


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17171318#comment-17171318
 ] 

Luís Alves commented on ARTEMIS-2861:
-

I would say that DELETE_DURABLE_QUEUE & DELETE_NON_DURABLE_QUEUE should also be 
included, so that only the subscriber (owner) can cancel a subscription. They 
can of course still be deleted administratively (for old subscriptions with no 
interaction, or by the address owner who no longer wants the subscriber to 
receive updates). 

Regarding "::": it seems a great idea :), as that way it's easy to know how to 
split the name apart and, as you said, it aligns with the FQQN.
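
For reference, a minimal sketch of what a plugin could do once the queue name 
is available (the method below uses the *proposed* signature quoted in the 
issue description, not an existing Artemis interface; the owner-prefix naming 
convention is purely an assumption for illustration):

{code:java}
import org.apache.activemq.artemis.api.core.SimpleString;
import org.apache.activemq.artemis.core.security.CheckType;
import org.apache.activemq.artemis.core.security.SecurityAuth;

public class OwnerOnlyUnsubscribeCheck {

   // Rejects DELETE_*_QUEUE unless the authenticated user "owns" the
   // subscription queue, based on an assumed "<user>.<subscription>" naming
   // convention (e.g. "alice.news-updates").
   public void check(final SimpleString address,
                     final SimpleString queue,
                     final CheckType checkType,
                     final SecurityAuth session) throws Exception {
      boolean isDelete = checkType == CheckType.DELETE_DURABLE_QUEUE
            || checkType == CheckType.DELETE_NON_DURABLE_QUEUE;
      if (isDelete && queue != null) {
         String user = session.getUsername();
         if (user == null || !queue.toString().startsWith(user + ".")) {
            throw new SecurityException(
                  "only the owning subscriber may delete " + queue);
         }
      }
   }
}
{code}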

> Add queue name as a parameter to ActiveMQSecurityManager 
> -
>
> Key: ARTEMIS-2861
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2861
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Affects Versions: 2.14.0
>Reporter: Luís Alves
>Priority: Major
>
> I am currently trying to integrate Artemis with OpenID Connect (OAuth 2.0) 
> and User-Managed Access 2.0 (UMA 2.0) using the Keycloak implementation. I 
> want to have fine-grained access control over operations on addresses and 
> queues (subscriptions), as described in 
> https://issues.apache.org/jira/browse/ARTEMIS-592. I've investigated the 
> approach proposed in 
> https://medium.com/@joelicious/extending-artemis-security-with-oauth2-7fd9b3dffe3
>  and it solves the authN part. For the authZ part I've already had some 
> feedback here 
> https://stackoverflow.com/questions/63191001/activemq-artemis-activemqsecuritymanager4-verify-clientid-subscription,
>  but I think org.apache.activemq.artemis.core.server.SecuritySettingPlugin 
> will not give the needed control. So I'm proposing that the latest 
> ActiveMQSecurityManager implementation adds the queue name as a parameter, 
> since the calling method:
> {code:java}
>  @Override
>public void check(final SimpleString address,
>  final SimpleString queue,
>  final CheckType checkType,
>  final SecurityAuth session) throws Exception {
> {code}
> already has this information. 
> Using UMA 2.0, each address can be a resource and we can have 
> SEND, CONSUME, CREATE_ADDRESS, DELETE_ADDRESS, CREATE_DURABLE_QUEUE, DELETE_DURABLE_QUEUE, CREATE_NON_DURABLE_QUEUE, DELETE_NON_DURABLE_QUEUE, MANAGE and BROWSE
>  as scopes, which I think are quite fine-grained. Depending on the use case, a 
> subscription can also be a resource.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (ARTEMIS-2870) CORE connection failure sometimes doesn't cleanup sessions

2020-08-05 Thread Markus Meierhofer (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Markus Meierhofer updated ARTEMIS-2870:
---
 Attachment: duplicated consumers.png
 artemis.log
 broker.xml
Description: 
h3. Summary

Since upgrading our deployed Artemis instances from version 2.6.4 to 2.10.1, we 
have noticed that sometimes a connection failure is not followed by the cleanup 
of its connected sessions, leading to "zombie" consumers and producers on 
queues.

 
h3. The issue

Our Artemis clients are connected to the broker via the provided JMS 
abstraction, using the default connection TTL of 60 seconds. We are using both 
JMS topics and JMS queues.
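
For reference, a minimal sketch of how these client-side settings can be made 
explicit on the core JMS connection factory (illustrative only: the host name 
and the exact values are assumptions; 60,000 ms simply mirrors the default 
connection TTL mentioned above):

{code:java}
import javax.jms.Connection;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class ConnectionTtlSketch {
   public static void main(String[] args) throws Exception {
      // connectionTTL: how long the broker keeps the connection alive without
      // receiving data from the client; clientFailureCheckPeriod: how long the
      // client waits for data from the broker before declaring the connection
      // failed.
      ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
            "tcp://broker:61616"
            + "?connectionTTL=60000"
            + "&clientFailureCheckPeriod=30000"
            + "&reconnectAttempts=-1");
      try (Connection connection = factory.createConnection()) {
         connection.start();
         // ... create sessions, consumers and producers as usual
      }
   }
}
{code}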

As most of our clients are mobile and on WiFi, connection losses may occur 
frequently, depending on the quality of the network. When a client is 
disconnected for 60 seconds, the broker usually closes the connection and 
cleans up all the sessions connected to it. The mobile clients then reconnect 
when they are online again. What we have noticed is that after many connection 
failures, messages may be sent twice to the mobile clients. When analyzing the 
problem on the broker console, we found out that there were two consumers 
connected to each of the queues a mobile client usually consumes from. One of 
them belonged to the new connection of the mobile client, which is fine.

The other consumer belonged to a session whose connection had already failed 
and been closed at that time. When analyzing the logs, we saw that for these 
connections the log contained a "Connection failure to ... has been detected" 
line, but no following "clearing up resources for session ..." lines.

 
h3. Instance of the issue

 

The broken session is "7a9292cb-xxx" in the picture. In the logs you can see 
that the connection failure was detected, but the session was never cleaned up 
by the broker (note the timestamps).

!duplicated consumers.png!
{code:java}
[WARN 2020-07-27 14:33:29,794  Thread-13  
org.apache.activemq.artemis.core.client]: AMQ212037: Connection failure to 
/10.255.0.2:54812 has been detected: syscall:read(..) failed: Connection reset 
by peer [code=GENERIC_EXCEPTION]
[WARN 2020-07-29 09:31:30,828 Thread-20   
org.apache.activemq.artemis.core.client]: AMQ212037: Connection failure to 
/10.255.0.2:55994 has been detected: AMQ229014: Did not receive data from 
/10.255.0.2:55994 within the 60,000ms connection TTL. The connection will now 
be closed. [code=CONNECTION_TIMEDOUT]
{code}
 

Attached you can find the full [^artemis.log] and our [^broker.xml]


[jira] [Created] (ARTEMIS-2870) CORE connection failure sometimes doesn't cleanup sessions

2020-08-05 Thread Markus Meierhofer (Jira)
Markus Meierhofer created ARTEMIS-2870:
--

 Summary: CORE connection failure sometimes doesn't cleanup sessions
 Key: ARTEMIS-2870
 URL: https://issues.apache.org/jira/browse/ARTEMIS-2870
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: Broker
Affects Versions: 2.10.1
Reporter: Markus Meierhofer


h3. Summary


Since upgrading our deployed Artemis instances from version 2.6.4 to 2.10.1, we 
have noticed that sometimes a connection failure is not followed by the cleanup 
of its connected sessions, leading to "zombie" consumers and producers on 
queues.

 
h3. The issue


Our Artemis clients are connected to the broker via the provided JMS 
abstraction, using the default connection TTL of 60 seconds. We are using both 
JMS topics and JMS queues.

As most of our clients are mobile and on WiFi, connection losses may occur 
frequently, depending on the quality of the network. When a client is 
disconnected for 60 seconds, the broker usually closes the connection and 
cleans up all the sessions connected to it. The mobile clients then reconnect 
when they are online again. What we have noticed is that after many connection 
failures, messages may be sent twice to the mobile clients. When analyzing the 
problem on the broker console, we found out that there were two consumers 
connected to each of the queues a mobile client usually consumes from. One of 
them belonged to the new connection of the mobile client, which is fine.

The other consumer belonged to a session whose connection had already failed 
and been closed at that time. When analyzing the logs, we saw that for these 
connections the log contained a "Connection failure to ... has been detected" 
line, but no following "clearing up resources for session ..." lines.

 
h3. Instance of the issue

 

The broken session is "7a9292cb-xxx" in the picture. In the logs you can see 
that the connection failure was detected, but the session was never cleaned up 
by the broker (note the timestamps).

!duplicated consumers.png!
{code:java}
[WARN 2020-07-27 14:33:29,794  Thread-13  
org.apache.activemq.artemis.core.client]: AMQ212037: Connection failure to 
/10.255.0.2:54812 has been detected: syscall:read(..) failed: Connection reset 
by peer [code=GENERIC_EXCEPTION]
[WARN 2020-07-29 09:31:30,828 Thread-20   
org.apache.activemq.artemis.core.client]: AMQ212037: Connection failure to 
/10.255.0.2:55994 has been detected: AMQ229014: Did not receive data from 
/10.255.0.2:55994 within the 60,000ms connection TTL. The connection will now 
be closed. [code=CONNECTION_TIMEDOUT]
{code}
 

-Attached you can find the full [^artemis.log] and our [^broker.xml]-

I could not upload the files to Jira, therefore I uploaded the full 
artemis.log, our broker.xml and the picture of the sessions at 
https://we.tl/t-jfcwRr4xM7



--
This message was sent by Atlassian Jira
(v8.3.4#803005)