[jira] [Created] (AMQ-8017) Testing

2020-08-03 Thread Eeranna Kuruva (Jira)
Eeranna Kuruva created AMQ-8017:
---

 Summary: Testing
 Key: AMQ-8017
 URL: https://issues.apache.org/jira/browse/AMQ-8017
 Project: ActiveMQ
  Issue Type: New Feature
 Environment: Testing
Reporter: Eeranna Kuruva


Testing



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (AMQNET-565) Dotnet core port

2020-08-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/AMQNET-565?focusedWorklogId=466028&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-466028
 ]

ASF GitHub Bot logged work on AMQNET-565:
-

Author: ASF GitHub Bot
Created on: 04/Aug/20 05:00
Start Date: 04/Aug/20 05:00
Worklog Time Spent: 10m 
  Work Description: Havret commented on pull request #9:
URL: 
https://github.com/apache/activemq-nms-openwire/pull/9#issuecomment-668380192


   @michaelandrepearce once it's merged I can give it a try. 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 466028)
Time Spent: 13h 50m  (was: 13h 40m)

> Dotnet core port 
> -
>
> Key: AMQNET-565
> URL: https://issues.apache.org/jira/browse/AMQNET-565
> Project: ActiveMQ .Net
>  Issue Type: New Feature
>  Components: ActiveMQ
>Reporter: Wojtek Kulma
>Priority: Major
>  Time Spent: 13h 50m
>  Remaining Estimate: 0h
>
> Apache.NMS.ActiveMQ should be ported for dotnet core. 
> For now, the following error is raised:
> D:\RiderProjects\syncro [master ≡ +1 ~1 -1 !]> dotnet add package 
> Apache.NMS.ActiveMQ
> Microsoft (R) Build Engine version 15.1.1012.6693
> Copyright (C) Microsoft Corporation. All rights reserved.
>   Writing C:\Users\wkulma\AppData\Local\Temp\tmp9A2E.tmp
> info : Adding PackageReference for package 'Apache.NMS.ActiveMQ' into project 
> 'D:\RiderProjects\syncro\syncro.fsproj'.
> log  : Restoring packages for D:\RiderProjects\syncro\syncro.fsproj...
> info :   GET 
> https://api.nuget.org/v3-flatcontainer/apache.nms.activemq/index.json
> info :   CACHE https://api.nuget.org/v3-flatcontainer/fsharp.core/index.json
> info :   CACHE 
> https://api.nuget.org/v3-flatcontainer/fsharp.core/4.1.17/fsharp.core.4.1.17.nupkg
> info :   CACHE 
> https://api.nuget.org/v3-flatcontainer/fsharp.net.sdk/index.json
> info :   CACHE 
> https://api.nuget.org/v3-flatcontainer/fsharp.net.sdk/1.0.5/fsharp.net.sdk.1.0.5.nupkg
> info :   OK 
> https://api.nuget.org/v3-flatcontainer/apache.nms.activemq/index.json 611ms
> info :   GET 
> https://api.nuget.org/v3-flatcontainer/apache.nms.activemq/1.7.2/apache.nms.activemq.1.7.2.nupkg
> info :   OK 
> https://api.nuget.org/v3-flatcontainer/apache.nms.activemq/1.7.2/apache.nms.activemq.1.7.2.nupkg
>  481ms
> error: Package Apache.NMS.ActiveMQ 1.7.2 is not compatible with netcoreapp1.1 
> (.NETCoreApp,Version=v1.1). Package Apache.NMS.ActiveMQ 1.7.2 supports:
> error:   - net20 (.NETFramework,Version=v2.0)
> error:   - net35 (.NETFramework,Version=v3.5)
> error:   - net40 (.NETFramework,Version=v4.0)
> error: Package Apache.NMS 1.7.1 is not compatible with netcoreapp1.1 
> (.NETCoreApp,Version=v1.1). Package Apache.NMS 1.7.1 supports:
> error:   - net20 (.NETFramework,Version=v2.0)
> error:   - net20-cf (.NETFramework,Version=v2.0,Profile=CompactFramework)
> error:   - net35 (.NETFramework,Version=v3.5)
> error:   - net40 (.NETFramework,Version=v4.0)
> error: One or more packages are incompatible with .NETCoreApp,Version=v1.1.
> error: Package 'Apache.NMS.ActiveMQ' is incompatible with 'all' frameworks in 
> project 'D:\RiderProjects\syncro\syncro.fsproj'.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-2859) Strange Address Sizes on clustered topics.

2020-08-03 Thread Tarek Hammoud (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17170427#comment-17170427
 ] 

Tarek Hammoud commented on ARTEMIS-2859:


This might be related to https://issues.apache.org/jira/browse/ARTEMIS-2768, as 
the bridge is also using wildcards, which would explain the offsetting counters.

> Strange Address Sizes on clustered topics.
> --
>
> Key: ARTEMIS-2859
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2859
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.12.0, 2.14.0
> Environment: uname -a
> Linux tarek02 4.4.0-78-generic #99-Ubuntu SMP Thu Apr 27 15:29:09 UTC 2017 
> x86_64 x86_64 x86_64 GNU/Linux
> java version "1.8.0_251"
> Java(TM) SE Runtime Environment (build 1.8.0_251-b08)
> Java HotSpot(TM) 64-Bit Server VM (build 25.251-b08, mixed mode)
>Reporter: Tarek Hammoud
>Priority: Major
> Attachments: TestClusteredTopic.java, broker.xml, 
> image-2020-08-03-14-05-54-676.png, image-2020-08-03-14-05-54-720.png, 
> screenshot.png
>
>
> !screenshot.png! Hello,
> We are seeing some strange AddressSizes in JMX for simple clustered topics. 
> The problem was observed on 2.12.0 in production but can also be reproduced 
> on 2.14.0. I set up a 3-node cluster (sample broker.xml attached). The test 
> program creates multiple clustered topic consumers. A publisher sends a 
> message every few seconds. The JMX console shows a strange address size on 
> one of the nodes. It is easy to reproduce with the attached test program, and 
> it seems to be fine with queues. 
> Thank you for your help in advance. [^TestClusteredTopic.java][^broker.xml]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (ARTEMIS-2768) Negative AddressSize in JMX

2020-08-03 Thread Tarek Hammoud (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tarek Hammoud updated ARTEMIS-2768:
---
Affects Version/s: 2.14.0

> Negative AddressSize in JMX
> ---
>
> Key: ARTEMIS-2768
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2768
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.11.0, 2.12.0, 2.14.0
>Reporter: Tarek Hammoud
>Priority: Major
> Attachments: TestWildCard.java
>
>
> Hello,
> I see a negative address size in JMX. This happens when there are two 
> consumers: one listening on the full topic name, the other listening on the 
> wildcard topic. I can easily reproduce it with the attached [^TestWildCard.java]
> ^Broker shows:^
> ^2020-05-17 09:24:34,826 WARN [org.apache.activemq.artemis.core.server] 
> AMQ14: Destination global.topic.FooBar has an inconsistent and negative 
> address size=-152.^
> ^2020-05-17 09:24:34,827 WARN [org.apache.activemq.artemis.core.server] 
> AMQ14: Destination global.topic.FooBar has an inconsistent and negative 
> address size=-88.^
> ^2020-05-17 09:24:34,828 WARN [org.apache.activemq.artemis.core.server] 
> AMQ14: Destination global.topic.FooBar has an inconsistent and negative 
> address size=-944.^
> ^2020-05-17 09:24:34,828 WARN [org.apache.activemq.artemis.core.server] 
> AMQ14: Destination global.topic.FooBar has an inconsistent and negative 
> address size=-1,800.^
> ^2020-05-17 09:24:34,829 WARN [org.apache.activemq.artemis.core.server] 
> AMQ14: Destination global.topic.FooBar has an inconsistent and negative 
> address size=-1,736.^
> ^2020-05-17 09:24:34,829 WARN [org.apache.activemq.artemis.core.server] 
> AMQ14: Destination global.topic.FooBar has an inconsistent and negative 
> address size=-2,592.^
> ^2020-05-17 09:24:34,829 WARN [org.apache.activemq.artemis.core.server] 
> AMQ14: Destination global.topic.FooBar has an inconsistent and negative 
> address size=-3,448.^
> ^2020-05-17 09:24:34,830 WARN [org.apache.activemq.artemis.core.server] 
> AMQ14: Destination global.topic.FooBar has an inconsistent and negative 
> address size=-4,304.^
> ^2020-05-17 09:24:34,830 WARN [org.apache.activemq.artemis.core.server] 
> AMQ14: Destination global.topic.FooBar has an inconsistent and negative 
> address size=-5,160.^
> ^2020-05-17 09:24:34,830 WARN [org.apache.activemq.artemis.core.server] 
> AMQ14: Destination global.topic.FooBar has an inconsistent and negative 
> address size=-5,096.^
> ^2020-05-17 09:24:34,831 WARN [org.apache.activemq.artemis.core.server] 
> AMQ14: Destination global.topic.FooBar has an inconsistent and negative 
> address size=-5,952.^
> ^2020-05-17 09:24:34,831 WARN [org.apache.activemq.artemis.core.server] 
> AMQ14: Destination global.topic.FooBar has an inconsistent and negative 
> address size=-6,016.^
> ^2020-05-17 09:24:34,832 WARN [org.apache.activemq.artemis.core.server] 
> AMQ14: Destination global.topic.FooBar has an inconsistent and negative 
> address size=-6,080.^
> ^2020-05-17 09:24:34,832 WARN [org.apache.activemq.artemis.core.server] 
> AMQ14: Destination global.topic.FooBar has an inconsistent and negative 
> address size=-6,016.^
> ^2020-05-17 09:24:34,832 WARN [org.apache.activemq.artemis.core.server] 
> AMQ14: Destination global.topic.FooBar has an inconsistent and negative 
> address size=-6,872.^
> ^2020-05-17 09:24:34,832 WARN [org.apache.activemq.artemis.core.server] 
> AMQ14: Destination global.topic.FooBar has an inconsistent and negative 
> address size=-7,728.^
> ^2020-05-17 09:24:34,833 WARN [org.apache.activemq.artemis.core.server] 
> AMQ14: Destination global.topic.FooBar has an inconsistent and negative 
> address size=-8,584.^
> ^2020-05-17 09:24:34,833 WARN [org.apache.activemq.artemis.core.server] 
> AMQ14: Destination global.topic.FooBar has an inconsistent and negative 
> address size=-8,520.^
> ^2020-05-17 09:24:34,833 WARN [org.apache.activemq.artemis.core.server] 
> AMQ14: Destination global.topic.FooBar has an inconsistent and negative 
> address size=-9,376.^
> ^2020-05-17 09:24:34,834 WARN [org.apache.activemq.artemis.core.server] 
> AMQ14: Destination global.topic.FooBar has an inconsistent and negative 
> address size=-10,232.^
> ^2020-05-17 09:24:34,834 WARN [org.apache.activemq.artemis.core.server] 
> AMQ14: Destination global.topic.FooBar has an inconsistent and negative 
> address size=-10,168.^
> ^2020-05-17 09:24:34,834 WARN [org.apache.activemq.artemis.core.server] 
> AMQ14: Destination global.topic.FooBar has an inconsistent and negative 
> address size=-10,104.^
> ^2020-05-17 09:24:34,835 WARN [org.apache.activemq.artemis.core.server] 
> AMQ14: Destination global.topic.FooBar has an inconsistent and negative 
> address size=-10,960
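
For readers trying to reproduce this, here is a minimal JMS sketch approximating the scenario in the attached TestWildCard.java (the broker URL, topic name, and wildcard syntax below are assumptions based on Artemis defaults, not taken from the attachment):

{code:java}
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.Topic;

import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

// Sketch: one consumer on the full topic name, one on a wildcard match,
// then a burst of sends; afterwards the address size can be checked in JMX.
public class WildCardSketch {
    public static void main(String[] args) throws Exception {
        ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616"); // assumed broker URL
        try (Connection connection = cf.createConnection()) {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

            Topic fullTopic = session.createTopic("global.topic.FooBar");
            Topic wildcardTopic = session.createTopic("global.topic.#"); // '#' = default Artemis any-words wildcard

            MessageConsumer fullConsumer = session.createConsumer(fullTopic);
            MessageConsumer wildcardConsumer = session.createConsumer(wildcardTopic);

            MessageProducer producer = session.createProducer(fullTopic);
            for (int i = 0; i < 100; i++) {
                producer.send(session.createTextMessage("message " + i));
            }
            // Drain both consumers so the address size should return to zero.
            for (int i = 0; i < 100; i++) {
                fullConsumer.receive(1000);
                wildcardConsumer.receive(1000);
            }
        }
    }
}
{code}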

[jira] [Commented] (ARTEMIS-2861) Add queue name as a parameter to ActiveMQSecurityManager

2020-08-03 Thread Justin Bertram (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17170382#comment-17170382
 ] 

Justin Bertram commented on ARTEMIS-2861:
-

bq. Which check type is "creating consumers"? 

{{org.apache.activemq.artemis.core.security.CheckType#CONSUME}}

You wouldn't see this when the queue was created. You would only see this when 
the consumer was created on the queue. Also, in order to see the queue name as 
part of {{address}} the initial authorization check on just the address name 
(i.e. without the queue name concatenated to it) must fail.

The use-case for ARTEMIS-592 involved statically created durable subscription 
queues so there was no explicit check for {{CREATE_DURABLE_QUEUE}} from a 
client. Since your use-case is different it may be necessary to enhance the 
broker to concatenate the queue name for {{CREATE_DURABLE_QUEUE}} checks as 
well.
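
To make the mapping concrete, here is a rough sketch of how such a check could be translated into a UMA-style resource and scope. This is illustration only, not the broker SPI, and the address/queue concatenation shown is an assumption:

{code:java}
import org.apache.activemq.artemis.core.security.CheckType;

// Hypothetical helper for illustration: turns an authorization check into a
// UMA-style resource + scope pair. The queue name is only meaningful here when
// the broker has concatenated it to the address, as described above.
public final class UmaMappingSketch {

    static String toScope(CheckType checkType) {
        // The CheckType names line up with the scopes listed in the issue
        // (SEND, CONSUME, CREATE_DURABLE_QUEUE, ...).
        return checkType.name();
    }

    static String toResource(String address, String queue) {
        if (queue == null || queue.isEmpty()) {
            return address;
        }
        // The separator is an assumption for illustration; the broker's actual
        // concatenation may differ.
        return address + "." + queue;
    }

    public static void main(String[] args) {
        System.out.println(toResource("global.topic.FooBar", "my-subscription")
                + " requires scope " + toScope(CheckType.CONSUME));
    }
}
{code}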

> Add queue name as a parameter to ActiveMQSecurityManager 
> -
>
> Key: ARTEMIS-2861
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2861
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Affects Versions: 2.14.0
>Reporter: Luís Alves
>Priority: Major
>
> I am currently trying to integrate Artemis with OpenID Connect (OAuth 2.0) 
> and User-Managed Access 2.0 (UMA 2.0) using the Keycloak implementation. I want 
> to have fine-grained access control over operations on addresses and queues 
> (subscriptions), as described in 
> https://issues.apache.org/jira/browse/ARTEMIS-592. I've investigated as 
> proposed in 
> https://medium.com/@joelicious/extending-artemis-security-with-oauth2-7fd9b3dffe3
>  and it solves the authN part. For the authZ part I've already had some 
> feedback here 
> https://stackoverflow.com/questions/63191001/activemq-artemis-activemqsecuritymanager4-verify-clientid-subscription,
>  but I think org.apache.activemq.artemis.core.server.SecuritySettingPlugin 
> will not give the needed control. So I'm proposing that the latest 
> ActiveMQSecurityManager implementation add the queue name, since the 
> calling method:
> {code:java}
>  @Override
>public void check(final SimpleString address,
>  final SimpleString queue,
>  final CheckType checkType,
>  final SecurityAuth session) throws Exception {
> {code}
> already has this information. 
> Using UMA 2.0 each address can be a resource and we can have: 
> SEND,CONSUME,CREATE_ADDRESS,DELETE_ADDRESS,CREATE_DURABLE_QUEUE,DELETE_DURABLE_QUEUE,CREATE_NON_DURABLE_QUEUE,DELETE_NON_DURABLE_QUEUE,MANAGE,BROWSE
>  as scopes, which I think are quite fine-grained. Depending on the use case, a 
> subscription can also be a resource.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (ARTEMIS-2852) Huge performance decrease between versions 2.2.0 and 2.13.0

2020-08-03 Thread Kasper Kondzielski (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17170373#comment-17170373
 ] 

Kasper Kondzielski edited comment on ARTEMIS-2852 at 8/3/20, 7:37 PM:
--

All the tests and the machines' setup (via ansible) are open source and can be 
found here: [https://github.com/softwaremill/mqperf] (the machine types 
committed into the repository are t3.macro for development purposes; normally 
we use r5.2xlarge).

Having said that, I think I could do a few more tests, as I imagine it will 
take me far less time to do that than for you to understand them from the 
ground up.

Apart from the obvious parameters like xmx and so on, what are the values which 
need to be restored in order to achieve this apples-to-apples comparison? What 
about `UseStringDeduplication`: should I turn it off?


was (Author: kkondzielski):
All the tests and the machines' setup (via ansible) are open source and can be 
found here: [https://github.com/softwaremill/mqperf] (the machine types 
committed into the repository are t3.macro for development purposes; normally 
we use r5.2xlarge).

Having said that, I think I could do a few more tests, as I imagine it will 
take me far less time to do that than for you to understand them from the 
ground up.

Apart from the obvious parameters like xmx and so on, what are the values which 
need to be restored in order to achieve this apples-to-apples comparison? What 
about `UseStringDeduplication`: should I turn it on?

> Huge performance decrease between versions 2.2.0 and 2.13.0
> ---
>
> Key: ARTEMIS-2852
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2852
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Kasper Kondzielski
>Priority: Major
> Attachments: Selection_433.png, Selection_434.png, Selection_440.png, 
> Selection_441.png
>
>
> Hi,
> Recently, we started to prepare a new revision of our blog-post in which we 
> test various implementations of replicated queues. Previous version can be 
> found here:  [https://softwaremill.com/mqperf/]
> We updated the artemis binary to 2.13.0, regenerated the configuration file, and 
> applied all the performance tricks you told us last time. In particular, these 
> were:
>  * the {{Xmx}} java parameter bumped to {{16G (now bumped to 48G)}}
>  * in {{broker.xml}}, the {{global-max-size}} setting changed to {{8G (this 
> one we forgot to set, but we suspect that it is not the issue)}}
>  * {{journal-type}} set to {{MAPPED}}
>  * {{journal-datasync}}, {{journal-sync-non-transactional}} and 
> {{journal-sync-transactional}} all set to false
> Apart from that, we changed the machine type we use to r5.2xlarge (8 cores, 64 
> GiB memory, network bandwidth up to 10 Gbps, storage bandwidth up to 4,750 
> Mbps) and we decided to always run twice as many receivers as senders.
> From our tests it looks like version 2.13.0 does not scale as well with the 
> increase of senders and receivers as version 2.2.0 (previously tested). 
> Basically, it is not scaling at all, as the throughput stays at almost the same 
> level, while previously it used to grow linearly.
> Here you can find our test results for both versions: 
> [https://docs.google.com/spreadsheets/d/1kr9fzSNLD8bOhMkP7K_4axBQiKel1aJtpxsBCOy9ugU/edit?usp=sharing]
> We are aware that there is now a dedicated page in the documentation about 
> performance tuning, but we are surprised that the same settings as before 
> perform much worse.
> Maybe there is an obvious property which we overlooked that should be turned 
> on? 
> All changes between those versions together with the final configuration can 
> be found on this merged PR: 
> [https://github.com/softwaremill/mqperf/commit/6bfae489e11a250dc9e6ef59719782f839e8874a]
>  
> Charts showing the machines' usage are attached. Memory consumed by the artemis 
> process didn't exceed ~16 GB. Bandwidth and CPU weren't bottlenecks either. 
> p.s. I wanted to ask this question on the mailing list/nabble forum first, but it 
> seems that I don't have permission to do so even though I registered & 
> subscribed. Is that intentional?
>  
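
For readers mapping the settings quoted above onto configuration, they correspond to broker.xml elements roughly as follows (a sketch only; the values are the ones quoted in the description, the exact size syntax may differ, and the rest of the configuration is omitted):

{code:xml}
<core xmlns="urn:activemq:core">
   <!-- values quoted in the description above -->
   <global-max-size>8G</global-max-size>
   <journal-type>MAPPED</journal-type>
   <journal-datasync>false</journal-datasync>
   <journal-sync-non-transactional>false</journal-sync-non-transactional>
   <journal-sync-transactional>false</journal-sync-transactional>
</core>
{code}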



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (ARTEMIS-2852) Huge performance decrease between versions 2.2.0 and 2.13.0

2020-08-03 Thread Kasper Kondzielski (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17170373#comment-17170373
 ] 

Kasper Kondzielski edited comment on ARTEMIS-2852 at 8/3/20, 7:37 PM:
--

All the tests and the machines' setup (via ansible) are open source and can be 
found here: [https://github.com/softwaremill/mqperf] (the machine types 
committed into the repository are t3.macro for development purposes; normally 
we use r5.2xlarge).

Having said that, I think I could do a few more tests, as I imagine it will 
take me far less time to do that than for you to understand them from the 
ground up.

Apart from the obvious parameters like xmx and so on, what are the values which 
need to be restored in order to achieve this apples-to-apples comparison? What 
about `UseStringDeduplication`: should I turn it on?


was (Author: kkondzielski):
All the tests and the machines' setup (via ansible) are open source and can be 
found here: [https://github.com/softwaremill/mqperf] (the machine types 
committed into the repository are t3.macro for development purposes; normally 
we use r5.2xlarge).

Having said that, I think I could do a few more tests, as I imagine it will 
take me far less time to do that than for you to understand them from the 
ground up.

Apart from the obvious parameters like xmx and so on, what are the values which 
need to be restored in order to achieve this apples-to-apples comparison? 

> Huge performance decrease between versions 2.2.0 and 2.13.0
> ---
>
> Key: ARTEMIS-2852
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2852
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Kasper Kondzielski
>Priority: Major
> Attachments: Selection_433.png, Selection_434.png, Selection_440.png, 
> Selection_441.png
>
>
> Hi,
> Recently, we started to prepare a new revision of our blog-post in which we 
> test various implementations of replicated queues. Previous version can be 
> found here:  [https://softwaremill.com/mqperf/]
> We updated the artemis binary to 2.13.0, regenerated the configuration file, and 
> applied all the performance tricks you told us last time. In particular, these 
> were:
>  * the {{Xmx}} java parameter bumped to {{16G (now bumped to 48G)}}
>  * in {{broker.xml}}, the {{global-max-size}} setting changed to {{8G (this 
> one we forgot to set, but we suspect that it is not the issue)}}
>  * {{journal-type}} set to {{MAPPED}}
>  * {{journal-datasync}}, {{journal-sync-non-transactional}} and 
> {{journal-sync-transactional}} all set to false
> Apart from that, we changed the machine type we use to r5.2xlarge (8 cores, 64 
> GiB memory, network bandwidth up to 10 Gbps, storage bandwidth up to 4,750 
> Mbps) and we decided to always run twice as many receivers as senders.
> From our tests it looks like version 2.13.0 does not scale as well with the 
> increase of senders and receivers as version 2.2.0 (previously tested). 
> Basically, it is not scaling at all, as the throughput stays at almost the same 
> level, while previously it used to grow linearly.
> Here you can find our test results for both versions: 
> [https://docs.google.com/spreadsheets/d/1kr9fzSNLD8bOhMkP7K_4axBQiKel1aJtpxsBCOy9ugU/edit?usp=sharing]
> We are aware that there is now a dedicated page in the documentation about 
> performance tuning, but we are surprised that the same settings as before 
> perform much worse.
> Maybe there is an obvious property which we overlooked that should be turned 
> on? 
> All changes between those versions together with the final configuration can 
> be found on this merged PR: 
> [https://github.com/softwaremill/mqperf/commit/6bfae489e11a250dc9e6ef59719782f839e8874a]
>  
> Charts showing the machines' usage are attached. Memory consumed by the artemis 
> process didn't exceed ~16 GB. Bandwidth and CPU weren't bottlenecks either. 
> p.s. I wanted to ask this question on the mailing list/nabble forum first, but it 
> seems that I don't have permission to do so even though I registered & 
> subscribed. Is that intentional?
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (ARTEMIS-2852) Huge performance decrease between versions 2.2.0 and 2.13.0

2020-08-03 Thread Kasper Kondzielski (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17170373#comment-17170373
 ] 

Kasper Kondzielski edited comment on ARTEMIS-2852 at 8/3/20, 7:36 PM:
--

All the tests and the machines' setup (via ansible) are open source and can be 
found here: [https://github.com/softwaremill/mqperf] (the machine types 
committed into the repository are t3.macro for development purposes; normally 
we use r5.2xlarge).

Having said that, I think I could do a few more tests, as I imagine it will 
take me far less time to do that than for you to understand them from the 
ground up.

Apart from the obvious parameters like xmx and so on, what are the values which 
need to be restored in order to achieve this apples-to-apples comparison? 


was (Author: kkondzielski):
All the tests and the machines' setup (via ansible) are open source and can be 
found here: [https://github.com/softwaremill/mqperf] (the machine types 
committed into the repository are t3.macro for development purposes; normally 
we use r5.2xlarge).

Having said that, I think I could do a few more tests, as I imagine it will 
take me far less time to do that than for you to understand them from the 
ground up.

> Huge performance decrease between versions 2.2.0 and 2.13.0
> ---
>
> Key: ARTEMIS-2852
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2852
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Kasper Kondzielski
>Priority: Major
> Attachments: Selection_433.png, Selection_434.png, Selection_440.png, 
> Selection_441.png
>
>
> Hi,
> Recently, we started to prepare a new revision of our blog-post in which we 
> test various implementations of replicated queues. Previous version can be 
> found here:  [https://softwaremill.com/mqperf/]
> We updated the artemis binary to 2.13.0, regenerated the configuration file, and 
> applied all the performance tricks you told us last time. In particular, these 
> were:
>  * the {{Xmx}} java parameter bumped to {{16G (now bumped to 48G)}}
>  * in {{broker.xml}}, the {{global-max-size}} setting changed to {{8G (this 
> one we forgot to set, but we suspect that it is not the issue)}}
>  * {{journal-type}} set to {{MAPPED}}
>  * {{journal-datasync}}, {{journal-sync-non-transactional}} and 
> {{journal-sync-transactional}} all set to false
> Apart from that, we changed the machine type we use to r5.2xlarge (8 cores, 64 
> GiB memory, network bandwidth up to 10 Gbps, storage bandwidth up to 4,750 
> Mbps) and we decided to always run twice as many receivers as senders.
> From our tests it looks like version 2.13.0 does not scale as well with the 
> increase of senders and receivers as version 2.2.0 (previously tested). 
> Basically, it is not scaling at all, as the throughput stays at almost the same 
> level, while previously it used to grow linearly.
> Here you can find our test results for both versions: 
> [https://docs.google.com/spreadsheets/d/1kr9fzSNLD8bOhMkP7K_4axBQiKel1aJtpxsBCOy9ugU/edit?usp=sharing]
> We are aware that there is now a dedicated page in the documentation about 
> performance tuning, but we are surprised that the same settings as before 
> perform much worse.
> Maybe there is an obvious property which we overlooked that should be turned 
> on? 
> All changes between those versions together with the final configuration can 
> be found on this merged PR: 
> [https://github.com/softwaremill/mqperf/commit/6bfae489e11a250dc9e6ef59719782f839e8874a]
>  
> Charts showing the machines' usage are attached. Memory consumed by the artemis 
> process didn't exceed ~16 GB. Bandwidth and CPU weren't bottlenecks either. 
> p.s. I wanted to ask this question on the mailing list/nabble forum first, but it 
> seems that I don't have permission to do so even though I registered & 
> subscribed. Is that intentional?
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (ARTEMIS-2852) Huge performance decrease between versions 2.2.0 and 2.13.0

2020-08-03 Thread Kasper Kondzielski (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17170373#comment-17170373
 ] 

Kasper Kondzielski edited comment on ARTEMIS-2852 at 8/3/20, 7:32 PM:
--

All the tests and the machines' setup (via ansible) are open source and can be 
found here: [https://github.com/softwaremill/mqperf] (the machine types 
committed into the repository are t3.macro for development purposes; normally 
we use r5.2xlarge).

Having said that, I think I could do a few more tests, as I imagine it will 
take me far less time to do that than for you to understand them from the 
ground up.


was (Author: kkondzielski):
All the tests and the machines' setup (via ansible) are open source and can be 
found here: [https://github.com/softwaremill/mqperf] (the machine types 
committed into the repository are t3.macro for development purposes; normally 
we use r5.2xlarge).

Having said that, I think I could do some more tests, as I imagine it will 
take me far less time to do that than for you to understand them from the 
ground up.

> Huge performance decrease between versions 2.2.0 and 2.13.0
> ---
>
> Key: ARTEMIS-2852
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2852
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Kasper Kondzielski
>Priority: Major
> Attachments: Selection_433.png, Selection_434.png, Selection_440.png, 
> Selection_441.png
>
>
> Hi,
> Recently, we started to prepare a new revision of our blog-post in which we 
> test various implementations of replicated queues. Previous version can be 
> found here:  [https://softwaremill.com/mqperf/]
> We updated the artemis binary to 2.13.0, regenerated the configuration file, and 
> applied all the performance tricks you told us last time. In particular, these 
> were:
>  * the {{Xmx}} java parameter bumped to {{16G (now bumped to 48G)}}
>  * in {{broker.xml}}, the {{global-max-size}} setting changed to {{8G (this 
> one we forgot to set, but we suspect that it is not the issue)}}
>  * {{journal-type}} set to {{MAPPED}}
>  * {{journal-datasync}}, {{journal-sync-non-transactional}} and 
> {{journal-sync-transactional}} all set to false
> Apart from that, we changed the machine type we use to r5.2xlarge (8 cores, 64 
> GiB memory, network bandwidth up to 10 Gbps, storage bandwidth up to 4,750 
> Mbps) and we decided to always run twice as many receivers as senders.
> From our tests it looks like version 2.13.0 does not scale as well with the 
> increase of senders and receivers as version 2.2.0 (previously tested). 
> Basically, it is not scaling at all, as the throughput stays at almost the same 
> level, while previously it used to grow linearly.
> Here you can find our test results for both versions: 
> [https://docs.google.com/spreadsheets/d/1kr9fzSNLD8bOhMkP7K_4axBQiKel1aJtpxsBCOy9ugU/edit?usp=sharing]
> We are aware that there is now a dedicated page in the documentation about 
> performance tuning, but we are surprised that the same settings as before 
> perform much worse.
> Maybe there is an obvious property which we overlooked that should be turned 
> on? 
> All changes between those versions together with the final configuration can 
> be found on this merged PR: 
> [https://github.com/softwaremill/mqperf/commit/6bfae489e11a250dc9e6ef59719782f839e8874a]
>  
> Charts showing the machines' usage are attached. Memory consumed by the artemis 
> process didn't exceed ~16 GB. Bandwidth and CPU weren't bottlenecks either. 
> p.s. I wanted to ask this question on the mailing list/nabble forum first, but it 
> seems that I don't have permission to do so even though I registered & 
> subscribed. Is that intentional?
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (ARTEMIS-2852) Huge performance decrease between versions 2.2.0 and 2.13.0

2020-08-03 Thread Kasper Kondzielski (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17170373#comment-17170373
 ] 

Kasper Kondzielski edited comment on ARTEMIS-2852 at 8/3/20, 7:31 PM:
--

All the tests and the machines' setup (via ansible) are open source and can be 
found here: [https://github.com/softwaremill/mqperf] (the machine types 
committed into the repository are t3.macro for development purposes; normally 
we use r5.2xlarge).

Having said that, I think I could do some more tests, as I imagine it will 
take me far less time to do that than for you to understand them from the 
ground up.


was (Author: kkondzielski):
Running these tests costs some real money, as we are using r5.2xlarge machines 
on aws. I might ask my company if they would be willing to put more money into 
that, but I am not really convinced that it is the best way to approach this 
problem, since the feedback loop would be huge. All the tests and the machines' 
setup (via ansible) are open source and can be found here: 
[https://github.com/softwaremill/mqperf] (the machine types committed into the 
repository are t3.macro for development purposes).

Having said that, I think I could run 5 or 6 more tests, as I imagine it will 
take me far less time to do that than for you to understand them from the 
ground up.

> Huge performance decrease between versions 2.2.0 and 2.13.0
> ---
>
> Key: ARTEMIS-2852
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2852
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Kasper Kondzielski
>Priority: Major
> Attachments: Selection_433.png, Selection_434.png, Selection_440.png, 
> Selection_441.png
>
>
> Hi,
> Recently, we started to prepare a new revision of our blog-post in which we 
> test various implementations of replicated queues. Previous version can be 
> found here:  [https://softwaremill.com/mqperf/]
> We updated the artemis binary to 2.13.0, regenerated the configuration file, and 
> applied all the performance tricks you told us last time. In particular, these 
> were:
>  * the {{Xmx}} java parameter bumped to {{16G (now bumped to 48G)}}
>  * in {{broker.xml}}, the {{global-max-size}} setting changed to {{8G (this 
> one we forgot to set, but we suspect that it is not the issue)}}
>  * {{journal-type}} set to {{MAPPED}}
>  * {{journal-datasync}}, {{journal-sync-non-transactional}} and 
> {{journal-sync-transactional}} all set to false
> Apart from that, we changed the machine type we use to r5.2xlarge (8 cores, 64 
> GiB memory, network bandwidth up to 10 Gbps, storage bandwidth up to 4,750 
> Mbps) and we decided to always run twice as many receivers as senders.
> From our tests it looks like version 2.13.0 does not scale as well with the 
> increase of senders and receivers as version 2.2.0 (previously tested). 
> Basically, it is not scaling at all, as the throughput stays at almost the same 
> level, while previously it used to grow linearly.
> Here you can find our test results for both versions: 
> [https://docs.google.com/spreadsheets/d/1kr9fzSNLD8bOhMkP7K_4axBQiKel1aJtpxsBCOy9ugU/edit?usp=sharing]
> We are aware that there is now a dedicated page in the documentation about 
> performance tuning, but we are surprised that the same settings as before 
> perform much worse.
> Maybe there is an obvious property which we overlooked that should be turned 
> on? 
> All changes between those versions together with the final configuration can 
> be found on this merged PR: 
> [https://github.com/softwaremill/mqperf/commit/6bfae489e11a250dc9e6ef59719782f839e8874a]
>  
> Charts showing the machines' usage are attached. Memory consumed by the artemis 
> process didn't exceed ~16 GB. Bandwidth and CPU weren't bottlenecks either. 
> p.s. I wanted to ask this question on the mailing list/nabble forum first, but it 
> seems that I don't have permission to do so even though I registered & 
> subscribed. Is that intentional?
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (ARTEMIS-2852) Huge performance decrease between versions 2.2.0 and 2.13.0

2020-08-03 Thread Kasper Kondzielski (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17170373#comment-17170373
 ] 

Kasper Kondzielski edited comment on ARTEMIS-2852 at 8/3/20, 7:26 PM:
--

Running these tests costs some real money, as we are using r5.2xlarge machines 
on aws. I might ask my company if they would be willing to put more money into 
that, but I am not really convinced that it is the best way to approach this 
problem, since the feedback loop would be huge. All the tests and the machines' 
setup (via ansible) are open source and can be found here: 
[https://github.com/softwaremill/mqperf] (the machine types committed into the 
repository are t3.macro for development purposes).

Having said that, I think I could run 5 or 6 more tests, as I imagine it will 
take me far less time to do that than for you to understand them from the 
ground up.


was (Author: kkondzielski):
Running these tests costs some real money, as we are using r5.2xlarge machines 
on aws. I might ask my company if they would be willing to put more money into 
that, but I am not really convinced that it is the best way to approach this 
problem, since the feedback loop would be huge. All the tests and the machines' 
setup (via ansible) are open source and can be found here: 
[https://github.com/softwaremill/mqperf] (the machine types committed into the 
repository are t3.macro for development purposes).

> Huge performance decrease between versions 2.2.0 and 2.13.0
> ---
>
> Key: ARTEMIS-2852
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2852
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Kasper Kondzielski
>Priority: Major
> Attachments: Selection_433.png, Selection_434.png, Selection_440.png, 
> Selection_441.png
>
>
> Hi,
> Recently, we started to prepare a new revision of our blog-post in which we 
> test various implementations of replicated queues. Previous version can be 
> found here:  [https://softwaremill.com/mqperf/]
> We updated the artemis binary to 2.13.0, regenerated the configuration file, and 
> applied all the performance tricks you told us last time. In particular, these 
> were:
>  * the {{Xmx}} java parameter bumped to {{16G (now bumped to 48G)}}
>  * in {{broker.xml}}, the {{global-max-size}} setting changed to {{8G (this 
> one we forgot to set, but we suspect that it is not the issue)}}
>  * {{journal-type}} set to {{MAPPED}}
>  * {{journal-datasync}}, {{journal-sync-non-transactional}} and 
> {{journal-sync-transactional}} all set to false
> Apart from that, we changed the machine type we use to r5.2xlarge (8 cores, 64 
> GiB memory, network bandwidth up to 10 Gbps, storage bandwidth up to 4,750 
> Mbps) and we decided to always run twice as many receivers as senders.
> From our tests it looks like version 2.13.0 does not scale as well with the 
> increase of senders and receivers as version 2.2.0 (previously tested). 
> Basically, it is not scaling at all, as the throughput stays at almost the same 
> level, while previously it used to grow linearly.
> Here you can find our test results for both versions: 
> [https://docs.google.com/spreadsheets/d/1kr9fzSNLD8bOhMkP7K_4axBQiKel1aJtpxsBCOy9ugU/edit?usp=sharing]
> We are aware that there is now a dedicated page in the documentation about 
> performance tuning, but we are surprised that the same settings as before 
> perform much worse.
> Maybe there is an obvious property which we overlooked that should be turned 
> on? 
> All changes between those versions together with the final configuration can 
> be found on this merged PR: 
> [https://github.com/softwaremill/mqperf/commit/6bfae489e11a250dc9e6ef59719782f839e8874a]
>  
> Charts showing the machines' usage are attached. Memory consumed by the artemis 
> process didn't exceed ~16 GB. Bandwidth and CPU weren't bottlenecks either. 
> p.s. I wanted to ask this question on the mailing list/nabble forum first, but it 
> seems that I don't have permission to do so even though I registered & 
> subscribed. Is that intentional?
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-2852) Huge performance decrease between versions 2.2.0 and 2.13.0

2020-08-03 Thread Kasper Kondzielski (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17170373#comment-17170373
 ] 

Kasper Kondzielski commented on ARTEMIS-2852:
-

Running these tests costs some real money, as we are using r5.2xlarge machines 
on aws. I might ask my company if they would be willing to put more money into 
that, but I am not really convinced that it is the best way to approach this 
problem, since the feedback loop would be huge. All the tests and the machines' 
setup (via ansible) are open source and can be found here: 
[https://github.com/softwaremill/mqperf] (the machine types committed into the 
repository are t3.macro for development purposes).

> Huge performance decrease between versions 2.2.0 and 2.13.0
> ---
>
> Key: ARTEMIS-2852
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2852
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Kasper Kondzielski
>Priority: Major
> Attachments: Selection_433.png, Selection_434.png, Selection_440.png, 
> Selection_441.png
>
>
> Hi,
> Recently, we started to prepare a new revision of our blog-post in which we 
> test various implementations of replicated queues. Previous version can be 
> found here:  [https://softwaremill.com/mqperf/]
> We updated the artemis binary to 2.13.0, regenerated the configuration file, and 
> applied all the performance tricks you told us last time. In particular, these 
> were:
>  * the {{Xmx}} java parameter bumped to {{16G (now bumped to 48G)}}
>  * in {{broker.xml}}, the {{global-max-size}} setting changed to {{8G (this 
> one we forgot to set, but we suspect that it is not the issue)}}
>  * {{journal-type}} set to {{MAPPED}}
>  * {{journal-datasync}}, {{journal-sync-non-transactional}} and 
> {{journal-sync-transactional}} all set to false
> Apart from that, we changed the machine type we use to r5.2xlarge (8 cores, 64 
> GiB memory, network bandwidth up to 10 Gbps, storage bandwidth up to 4,750 
> Mbps) and we decided to always run twice as many receivers as senders.
> From our tests it looks like version 2.13.0 does not scale as well with the 
> increase of senders and receivers as version 2.2.0 (previously tested). 
> Basically, it is not scaling at all, as the throughput stays at almost the same 
> level, while previously it used to grow linearly.
> Here you can find our test results for both versions: 
> [https://docs.google.com/spreadsheets/d/1kr9fzSNLD8bOhMkP7K_4axBQiKel1aJtpxsBCOy9ugU/edit?usp=sharing]
> We are aware that there is now a dedicated page in the documentation about 
> performance tuning, but we are surprised that the same settings as before 
> perform much worse.
> Maybe there is an obvious property which we overlooked that should be turned 
> on? 
> All changes between those versions together with the final configuration can 
> be found on this merged PR: 
> [https://github.com/softwaremill/mqperf/commit/6bfae489e11a250dc9e6ef59719782f839e8874a]
>  
> Charts showing the machines' usage are attached. Memory consumed by the artemis 
> process didn't exceed ~16 GB. Bandwidth and CPU weren't bottlenecks either. 
> p.s. I wanted to ask this question on the mailing list/nabble forum first, but it 
> seems that I don't have permission to do so even though I registered & 
> subscribed. Is that intentional?
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (ARTEMIS-2852) Huge performance decrease between versions 2.2.0 and 2.13.0

2020-08-03 Thread Kasper Kondzielski (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17170224#comment-17170224
 ] 

Kasper Kondzielski edited comment on ARTEMIS-2852 at 8/3/20, 7:12 PM:
--

Let me start from the beginning. In our tests we try to compare various queues. 
Obviously most of these queues can be fine-tuned to the extreme in terms 
of performance, but this requires a lot of very specialized knowledge. As we 
are time-constrained, we can't allow ourselves to dive so deeply into each 
implementation. That's why in most cases we just use the default 
configuration with some best-practice tune-ups, if they are publicly and easily 
available. This is the kind of comparison I had in mind when I opened this 
issue. I compared artemis 2.2.0, with all the best practices we knew at the time 
of testing, against the current version 2.9.0 with all the best practices we know 
now. So it is not about comparing isolated binaries with all other things fixed 
to some values, but rather about giving practical insight into the performance, 
which would benefit most people.

 

Still, when I saw the results I was concerned about that decrease. There can be 
three causes for that:
 * I did something terribly wrong in the configuration
 * there is a bug in the current version
 * there is nothing wrong with either the implementation or the configuration, and 
the performance decrease was a result of e.g. stabilizing and hardening the 
system

In the first two cases I think that it is justified to create this issue. 
Although it might not be the best name for this issue, without having the 
option to compare these results against the previous version I wouldn't even know 
that it might be an issue.

 

In case of the third option it still might be good to let users know about such 
a characteristic, especially for those who migrated all the way from 2.2.0 up to 
2.13.0.


was (Author: kkondzielski):
Let me start from the beginning. In our tests we try to compare various queues. 
Obviously most of these queues can be fine-tuned to the extreme in terms 
of performance, but this requires a lot of very specialized knowledge. As we 
are time-constrained, we can't allow ourselves to dive so deeply into each 
implementation. That's why in most cases we just use the default 
configuration with some best-practice tune-ups, if they are publicly and easily 
available. This is the kind of comparison I had in mind when I opened this 
issue. I compared artemis 2.2.0, with all the best practices we knew at the time 
of testing, against the current version 2.9.0 with all the best practices we know 
now. So it is not about comparing isolated binaries with all other things fixed 
to some values, but rather about giving practical insight into the performance, 
which would benefit most people.

 

Still, when I saw the results I was concerned about that decrease. There can be 
three causes for that:
 * I did something terribly wrong in the configuration
 * there is a bug in the current version
 * there is nothing wrong with either the implementation or the configuration, and 
the performance decrease was a result of e.g. stabilizing and hardening the 
system

In the first two cases I think that it is justified to create this issue. 
Although it might not be the best name for this issue, without having the 
option to compare these results against the previous version I wouldn't even know 
that it might be an issue.

 

In case of the third option it still might be good to let users know about such 
a characteristic, especially for those who migrated all the way from 2.2.0 up to 
2.13.0.

> Huge performance decrease between versions 2.2.0 and 2.13.0
> ---
>
> Key: ARTEMIS-2852
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2852
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Kasper Kondzielski
>Priority: Major
> Attachments: Selection_433.png, Selection_434.png, Selection_440.png, 
> Selection_441.png
>
>
> Hi,
> Recently, we started to prepare a new revision of our blog-post in which we 
> test various implementations of replicated queues. Previous version can be 
> found here:  [https://softwaremill.com/mqperf/]
> We updated the artemis binary to 2.13.0, regenerated the configuration file, and 
> applied all the performance tricks you told us last time. In particular, these 
> were:
>  * the {{Xmx}} java parameter bumped to {{16G (now bumped to 48G)}}
>  * in {{broker.xml}}, the {{global-max-size}} setting changed to {{8G (this 
> one we forgot to set, but we suspect that it is not the issue)}}
>  * {{journal-type}} set to {{MAPPED}}
>  * {{journal-datasync}}, {{journal-sync-non-transactional}} and 
> {{journal-sync-transactional}} all set to false
> Apart from that we changed machines

[jira] [Commented] (ARTEMIS-2859) Strange Address Sizes on clustered topics.

2020-08-03 Thread Tarek Hammoud (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17170360#comment-17170360
 ] 

Tarek Hammoud commented on ARTEMIS-2859:


It seems that some internal cluster address always reports the negative of the 
address size of the topic itself. These strange address sizes, reported on 
multiple tickets, are getting very concerning for a production environment, as 
paging and producers can be affected by these numbers. Thank you. 

!image-2020-08-03-14-05-54-720.png!!image-2020-08-03-14-05-54-676.png!
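
One way to track the reported value over time, rather than eyeballing the console, is a small JMX client; a sketch follows. The JMX service URL and the ObjectName pattern are assumptions based on a default Artemis 2.x broker and will likely need adjusting:

{code:java}
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

// Sketch: polls the AddressSize attribute of one address so the drift can be logged.
public class AddressSizeProbe {
    public static void main(String[] args) throws Exception {
        // Assumed JMX endpoint; adjust to however the broker exposes JMX.
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            // Assumed ObjectName layout; substitute the actual broker and address names.
            ObjectName address = new ObjectName(
                "org.apache.activemq.artemis:broker=\"0.0.0.0\",component=addresses,address=\"global.topic.FooBar\"");
            for (int i = 0; i < 60; i++) {
                Object size = mbs.getAttribute(address, "AddressSize");
                System.out.println("AddressSize=" + size);
                Thread.sleep(5_000);
            }
        }
    }
}
{code}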

> Strange Address Sizes on clustered topics.
> --
>
> Key: ARTEMIS-2859
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2859
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.12.0, 2.14.0
> Environment: uname -a
> Linux tarek02 4.4.0-78-generic #99-Ubuntu SMP Thu Apr 27 15:29:09 UTC 2017 
> x86_64 x86_64 x86_64 GNU/Linux
> java version "1.8.0_251"
> Java(TM) SE Runtime Environment (build 1.8.0_251-b08)
> Java HotSpot(TM) 64-Bit Server VM (build 25.251-b08, mixed mode)
>Reporter: Tarek Hammoud
>Priority: Major
> Attachments: TestClusteredTopic.java, broker.xml, 
> image-2020-08-03-14-05-54-676.png, image-2020-08-03-14-05-54-720.png, 
> screenshot.png
>
>
> !screenshot.png! Hello,
> We are seeing some strange AddressSizes in JMX for simple clustered topics. 
> The problem was observed on 2.12.0 in production but can also be reproduced 
> on 2.14.0. I set up a 3-node cluster (sample broker.xml attached). The test 
> program creates multiple clustered topic consumers. A publisher sends a 
> message every few seconds. The JMX console shows a strange address size on 
> one of the nodes. It is easy to reproduce with the attached test program, and 
> it seems to be fine with queues. 
> Thank you for your help in advance. [^TestClusteredTopic.java][^broker.xml]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (ARTEMIS-2859) Strange Address Sizes on clustered topics.

2020-08-03 Thread Tarek Hammoud (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tarek Hammoud updated ARTEMIS-2859:
---
Attachment: image-2020-08-03-14-05-54-720.png

> Strange Address Sizes on clustered topics.
> --
>
> Key: ARTEMIS-2859
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2859
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.12.0, 2.14.0
> Environment: uname -a
> Linux tarek02 4.4.0-78-generic #99-Ubuntu SMP Thu Apr 27 15:29:09 UTC 2017 
> x86_64 x86_64 x86_64 GNU/Linux
> java version "1.8.0_251"
> Java(TM) SE Runtime Environment (build 1.8.0_251-b08)
> Java HotSpot(TM) 64-Bit Server VM (build 25.251-b08, mixed mode)
>Reporter: Tarek Hammoud
>Priority: Major
> Attachments: TestClusteredTopic.java, broker.xml, 
> image-2020-08-03-14-05-54-676.png, image-2020-08-03-14-05-54-720.png, 
> screenshot.png
>
>
> !screenshot.png! Hello,
> We are seeing some strange AddressSizes in JMX for simple clustered topics. 
> The problem was observed on 2.12.0 in production but can also be reproduced 
> on 2.14.0. I set up a 3-node cluster (sample broker.xml attached). The test 
> program creates multiple clustered topic consumers. A publisher sends a 
> message every few seconds. The JMX console shows a strange address size on 
> one of the nodes. It is easy to reproduce with the attached test program, and 
> it seems to be fine with queues. 
> Thank you for your help in advance. [^TestClusteredTopic.java][^broker.xml]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (ARTEMIS-2859) Strange Address Sizes on clustered topics.

2020-08-03 Thread Tarek Hammoud (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tarek Hammoud updated ARTEMIS-2859:
---
Attachment: image-2020-08-03-14-05-54-676.png

> Strange Address Sizes on clustered topics.
> --
>
> Key: ARTEMIS-2859
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2859
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.12.0, 2.14.0
> Environment: uname -a
> Linux tarek02 4.4.0-78-generic #99-Ubuntu SMP Thu Apr 27 15:29:09 UTC 2017 
> x86_64 x86_64 x86_64 GNU/Linux
> java version "1.8.0_251"
> Java(TM) SE Runtime Environment (build 1.8.0_251-b08)
> Java HotSpot(TM) 64-Bit Server VM (build 25.251-b08, mixed mode)
>Reporter: Tarek Hammoud
>Priority: Major
> Attachments: TestClusteredTopic.java, broker.xml, 
> image-2020-08-03-14-05-54-676.png, image-2020-08-03-14-05-54-720.png, 
> screenshot.png
>
>
> !screenshot.png! Hello,
> We are seeing some strange AddressSizes in JMX for simple clustered topics. 
> The problem was observed on 2.12.0 in production but can also be reproduced 
> on 2.14.0. I set up a 3-node cluster (sample broker.xml attached). The test 
> program creates multiple clustered topic consumers. A publisher sends a 
> message every few seconds. The JMX console shows a strange address size on 
> one of the nodes. It is easy to reproduce with the attached test program, and 
> it seems to be fine with queues. 
> Thank you for your help in advance. [^TestClusteredTopic.java][^broker.xml]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (ARTEMIS-2864) Firefox can't open WebConsole using localhost

2020-08-03 Thread Justin Bertram (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Justin Bertram resolved ARTEMIS-2864.
-
Resolution: Cannot Reproduce

> Firefox can't open WebConsole using localhost
> -
>
> Key: ARTEMIS-2864
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2864
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Web Console
>Affects Versions: 2.14.0
> Environment: * macOS 10.15.2
>  * Firefox 78.0.2
>  * Works on Chrome 84
>Reporter: Duy Dao
>Priority: Minor
> Attachments: Bildschirmfoto 2020-08-03 um 20.38.39.png
>
>
> I can't open the Web Console on a fresh broker using the default config. 
> After a successful login on http://localhost:8161, I get 403 (Forbidden) on 
> the jolokia endpoints.
> This only happens on Firefox 78.0.2 (macOS); it works when I use Chrome 84 or 
> when I use the IP instead of localhost (e.g. http://127.0.0.1:8161, after 
> modifying the jolokia-access.xml).
> As a workaround, I can use Chrome instead of Firefox or the IP instead of 
> localhost.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-2864) Firefox can't open WebConsole using localhost

2020-08-03 Thread Duy Dao (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17170350#comment-17170350
 ] 

Duy Dao commented on ARTEMIS-2864:
--

Hi [~jbertram] ,

thanks for getting in touch with me & sorry for the bad description:

The steps to reproduce this error on my client are:
 * Open the web console page
 * Enter the login information & press login
 * I get redirected to the login page (no error notification)
 * I see "403 Forbidden" responses in the Dev Console (see screenshot)

I've tried to clear/disable the cache and it has no effect. What does work is:
 * Change the domain (e.g. IP instead of localhost)
 * Change the port (e.g. 8162 instead of 8161)
 * Use a private window

I'm pretty sure the problem is on my end, and I just wanted to document the 
workaround in case someone stumbles on the same error. Please feel free to 
close this issue :)
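
For anyone hitting the same behaviour: since the issue description mentions modifying jolokia-access.xml to allow the IP, one possible angle is Jolokia's origin checking. The snippet below is a hedged sketch of adding an extra allow-origin entry; the layout of the file actually shipped with the broker may differ:

{code:xml}
<restrict>
  <cors>
    <!-- default localhost entry plus an explicit extra origin (adjust as needed) -->
    <allow-origin>*://localhost*</allow-origin>
    <allow-origin>*://127.0.0.1*</allow-origin>
    <strict-checking/>
  </cors>
</restrict>
{code}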

 

> Firefox can't open WebConsole using localhost
> -
>
> Key: ARTEMIS-2864
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2864
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Web Console
>Affects Versions: 2.14.0
> Environment: * macOS 10.15.2
>  * Firefox 78.0.2
>  * Works on Chrome 84
>Reporter: Duy Dao
>Priority: Minor
> Attachments: Bildschirmfoto 2020-08-03 um 20.38.39.png
>
>
> I can't open the Web Console on a fresh broker using the default config. 
> After a successful login on http://localhost:8161, I get 403 (Forbidden) on 
> the jolokia endpoints.
> This only happens on Firefox 78.0.2 (macOS); it works when I use Chrome 84 or 
> when I use the IP instead of localhost (e.g. http://127.0.0.1:8161, after 
> modifying the jolokia-access.xml).
> As a workaround, I can use Chrome instead of Firefox or the IP instead of 
> localhost.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-2852) Huge performance decrease between versions 2.2.0 and 2.13.0

2020-08-03 Thread Justin Bertram (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17170349#comment-17170349
 ] 

Justin Bertram commented on ARTEMIS-2852:
-

Again, that's fair. However, as developers working on the code-base we're 
curious as to whether or not this is actually a regression. To that end, 
it would be great if you could actually perform an apples-to-apples comparison to 
help us gain clarity here. At this point you have the tests, and we don't. Is 
this something you'd be willing to do?

> Huge performance decrease between versions 2.2.0 and 2.13.0
> ---
>
> Key: ARTEMIS-2852
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2852
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Kasper Kondzielski
>Priority: Major
> Attachments: Selection_433.png, Selection_434.png, Selection_440.png, 
> Selection_441.png
>
>
> Hi,
> Recently, we started to prepare a new revision of our blog-post in which we 
> test various implementations of replicated queues. Previous version can be 
> found here:  [https://softwaremill.com/mqperf/]
> We updated the artemis binary to 2.13.0, regenerated the configuration file, and 
> applied all the performance tricks you told us last time. In particular, these 
> were:
>  * the {{Xmx}} java parameter bumped to {{16G (now bumped to 48G)}}
>  * in {{broker.xml}}, the {{global-max-size}} setting changed to {{8G (this 
> one we forgot to set, but we suspect that it is not the issue)}}
>  * {{journal-type}} set to {{MAPPED}}
>  * {{journal-datasync}}, {{journal-sync-non-transactional}} and 
> {{journal-sync-transactional}} all set to false
> Apart from that, we changed the machine type we use to r5.2xlarge (8 cores, 64 
> GiB memory, network bandwidth up to 10 Gbps, storage bandwidth up to 4,750 
> Mbps) and we decided to always run twice as many receivers as senders.
> From our tests it looks like version 2.13.0 is not scaling as well, with the 
> increase of senders and receivers, as version 2.2.0 (previously tested). 
> Basically it is not scaling at all, as the throughput stays at almost the same 
> level, while previously it used to grow linearly.
> Here you can find our tests results for both versions: 
> [https://docs.google.com/spreadsheets/d/1kr9fzSNLD8bOhMkP7K_4axBQiKel1aJtpxsBCOy9ugU/edit?usp=sharing]
> We are aware that there is now a dedicated page in the documentation about 
> performance tuning, but we are surprised that the same settings as before 
> perform much worse.
> Maybe there is an obvious property that we overlooked which should be turned 
> on?
> All changes between those versions together with the final configuration can 
> be found on this merged PR: 
> [https://github.com/softwaremill/mqperf/commit/6bfae489e11a250dc9e6ef59719782f839e8874a]
>  
> Charts showing the machines' usage are in the attachments. Memory consumed by 
> the artemis process didn't exceed ~16 GB. Bandwidth and CPU weren't 
> bottlenecks either.
> p.s. I wanted to ask this question on the mailing list/Nabble forum first, but 
> it seems that I don't have permissions to do so even though I registered & 
> subscribed. Is that intentional?
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (ARTEMIS-2864) Firefox can't open WebConsole using localhost

2020-08-03 Thread Duy Dao (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duy Dao updated ARTEMIS-2864:
-
Attachment: Bildschirmfoto 2020-08-03 um 20.38.39.png

> Firefox can't open WebConsole using localhost
> -
>
> Key: ARTEMIS-2864
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2864
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Web Console
>Affects Versions: 2.14.0
> Environment: * macOS 10.15.2
>  * Firefox 78.0.2
>  * Works on Chrome 84
>Reporter: Duy Dao
>Priority: Minor
> Attachments: Bildschirmfoto 2020-08-03 um 20.38.39.png
>
>
> I can't open the Web Console on a fresh broker using the default config. 
> After a successful login on http://localhost:8161, I get 403 (Forbidden) on 
> the jolokia endpoints.
> This only happens on Firefox 78.0.2 (macOS); it works when I use Chrome 84 or 
> when I use the IP instead of localhost (e.g. http://127.0.0.1:8161 after 
> modifying the jolokia-access.xml).
> As a workaround, I can use Chrome instead of Firefox or the IP instead of 
> localhost.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-2864) Firefox can't open WebConsole using localhost

2020-08-03 Thread Justin Bertram (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17170339#comment-17170339
 ] 

Justin Bertram commented on ARTEMIS-2864:
-

I'm a bit confused by the description of the problem. You say you can't open 
the Web Console, but then you say you successfully log in to 
http://localhost:8161, which presumably is the web console. Can you clarify 
this? Also, what exactly do you mean by "I get 403 (Forbidden) on the jolokia 
endpoints"? How does Jolokia fit into the picture?

Have you tried Firefox 79 (assuming it's available for macOS)? I just created a 
fresh install of 2.14.0 and used Firefox 79 to log in to the web console with 
no issues. To be clear, I'm on Linux and have no access to macOS.

> Firefox can't open WebConsole using localhost
> -
>
> Key: ARTEMIS-2864
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2864
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Web Console
>Affects Versions: 2.14.0
> Environment: * macOS 10.15.2
>  * Firefox 78.0.2
>  * Works on Chrome 84
>Reporter: Duy Dao
>Priority: Minor
>
> I can't open the Web Console on a fresh broker using the default config. 
> After a successful login on http://localhost:8161, I get 403 (Forbidden) on 
> the jolokia endpoints.
> This only happens on Firefox 78.0.2 (macOS); it works when I use Chrome 84 or 
> when I use the IP instead of localhost (e.g. http://127.0.0.1:8161 after 
> modifying the jolokia-access.xml).
> As a workaround, I can use Chrome instead of Firefox or the IP instead of 
> localhost.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (ARTEMIS-2864) Firefox can't open WebConsole using localhost

2020-08-03 Thread Duy Dao (Jira)
Duy Dao created ARTEMIS-2864:


 Summary: Firefox can't open WebConsole using localhost
 Key: ARTEMIS-2864
 URL: https://issues.apache.org/jira/browse/ARTEMIS-2864
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: Web Console
Affects Versions: 2.14.0
 Environment: * macOS 10.15.2
 * Firefox 78.0.2
 * Works on Chrome 84
Reporter: Duy Dao


I can't open the Web Console on a fresh broker using the default config. After 
a successful login on http://localhost:8161, I get 403 (Forbidden) on the 
jolokia endpoints.

This only happens on Firefox 78.0.2 (macOS); it works when I use Chrome 84 or 
when I use the IP instead of localhost (e.g. http://127.0.0.1:8161 after 
modifying the jolokia-access.xml).

As a workaround, I can use Chrome instead of Firefox or the IP instead of 
localhost.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-2852) Huge performance decrease between versions 2.2.0 and 2.13.0

2020-08-03 Thread Kasper Kondzielski (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17170224#comment-17170224
 ] 

Kasper Kondzielski commented on ARTEMIS-2852:
-

Let me start from the beginning. In our tests we try to compare various queues. 
Obviously most of these queues can be fine-tuned to the very extreme in terms 
of performance, but this requires a lot of very specialized knowledge. As we 
are time constrained we can't allow ourselves to dive so deeply into each 
implementation. That's why for most of the cases we just use the default 
configuration with some best-practice tune-ups if they are publicly and easily 
available. This is the kind of comparison I had in mind when I opened this 
issue. I compared artemis 2.2.0 with all the best practices we knew at the time 
of testing with the current version 2.9.0 and all the best practices we know 
now. So it is not about comparing isolated binaries with all other things fixed 
to some values, but rather giving a practical insight into the performance 
which would benefit most people.

 

Still, when I saw the results I was concerned about that decrease. There can be 
three causes for that:
 * I did something terribly wrong in the configuration
 * there is a bug in the current version
 * there is nothing wrong with either the implementation or the configuration, 
and the performance decrease was a result of e.g. stabilizing and hardening the 
system

In the first two cases I think that it is justified to create this issue. 
Although it might not be the best name for this issue, without having the 
option to compare these results against the previous version I wouldn't even 
know that it might be an issue.

 

In the case of the third option it still might be good to let users know about 
such a characteristic, especially those who migrated all the way from 2.2.0 up 
to 2.13.0.

> Huge performance decrease between versions 2.2.0 and 2.13.0
> ---
>
> Key: ARTEMIS-2852
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2852
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Kasper Kondzielski
>Priority: Major
> Attachments: Selection_433.png, Selection_434.png, Selection_440.png, 
> Selection_441.png
>
>
> Hi,
> Recently, we started to prepare a new revision of our blog-post in which we 
> test various implementations of replicated queues. Previous version can be 
> found here:  [https://softwaremill.com/mqperf/]
> We updated artemis binary to 2.13.0, regenerated configuration file and 
> applied all the performance tricks you told us last time. In particular these 
> were:
>  * the {{Xmx}} java parameter bumped to {{16G (now bumped to 48G)}}
>  * in {{broker.xml}}, the {{global-max-size}} setting changed to {{8G (this 
> one we forgot to set, but we suspect that it is not the issue)}}
>  * {{journal-type}} set to {{MAPPED}}
>  * {{journal-datasync}}, {{journal-sync-non-transactional}} and 
> {{journal-sync-transactional}} all set to false
> Apart from that, we changed the machine type we use to r5.2xlarge (8 cores, 64 
> GiB memory, network bandwidth up to 10 Gbps, storage bandwidth up to 4,750 
> Mbps) and we decided to always run twice as many receivers as senders.
> From our tests it looks like version 2.13.0 is not scaling as well, with the 
> increase of senders and receivers, as version 2.2.0 (previously tested). 
> Basically it is not scaling at all, as the throughput stays at almost the same 
> level, while previously it used to grow linearly.
> Here you can find our tests results for both versions: 
> [https://docs.google.com/spreadsheets/d/1kr9fzSNLD8bOhMkP7K_4axBQiKel1aJtpxsBCOy9ugU/edit?usp=sharing]
> We are aware that there is now a dedicated page in the documentation about 
> performance tuning, but we are surprised that the same settings as before 
> perform much worse.
> Maybe there is an obvious property that we overlooked which should be turned 
> on?
> All changes between those versions together with the final configuration can 
> be found on this merged PR: 
> [https://github.com/softwaremill/mqperf/commit/6bfae489e11a250dc9e6ef59719782f839e8874a]
>  
> Charts showing the machines' usage are in the attachments. Memory consumed by 
> the artemis process didn't exceed ~16 GB. Bandwidth and CPU weren't 
> bottlenecks either.
> p.s. I wanted to ask this question on the mailing list/Nabble forum first, but 
> it seems that I don't have permissions to do so even though I registered & 
> subscribed. Is that intentional?
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-2861) Add queue name as a parameter to ActiveMQSecurityManager

2020-08-03 Thread Jira


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17170218#comment-17170218
 ] 

Luís Alves commented on ARTEMIS-2861:
-

Which check type is "creating consumers"? 
I need to recheck tomorrow but when my consumer connected and tried to create a 
durable subscription I didn't get the concatenation.

e.g.:
{code:java}
String validateUserAndRole(String user,                  //clientID
                           String password,              //clientSecret
                           Set roles,                    //don't need them
                           CheckType checkType,          //CREATE_DURABLE_QUEUE
                           String address,               //"org.activemq.premium.news" - must confirm, but I think
                                                         //the queue name ("myClient.mySub") wasn't concatenated
                           RemotingConnection remotingConnection,
                           String securityDomain);
{code}

Also... how do I know where to break it if my whole address is split by dots (.)?


> Add queue name as a parameter to ActiveMQSecurityManager 
> -
>
> Key: ARTEMIS-2861
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2861
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Affects Versions: 2.14.0
>Reporter: Luís Alves
>Priority: Major
>
> Currently I was trying to integrate Artemis with OpenId Connect (Oauth2.0) 
> and User Managed Access 2.0 (UMA 2.0) using Keycloak implementation. I want 
> to have fine grained access control over operations over addresses and queues 
> (subscriptions) like described on 
> https://issues.apache.org/jira/browse/ARTEMIS-592. I've investigated as 
> proposed in 
> https://medium.com/@joelicious/extending-artemis-security-with-oauth2-7fd9b3dffe3
>  and it solves the authN part. For the authZ part I've already had some 
> feedback here 
> https://stackoverflow.com/questions/63191001/activemq-artemis-activemqsecuritymanager4-verify-clientid-subscription,
>  but I think org.apache.activemq.artemis.core.server.SecuritySettingPlugin 
> will not give the needed control. So I'm proposing that the latest 
> ActiveMQSecurityManager implementation adds the queue name, as the 
> calling method:
> {code:java}
>  @Override
>public void check(final SimpleString address,
>  final SimpleString queue,
>  final CheckType checkType,
>  final SecurityAuth session) throws Exception {
> {code}
> already has this information. 
> Using UMA 2.0 each address can be a resource and we can have: 
> SEND,CONSUME,CREATE_ADDRESS,DELETE_ADDRESS,CREATE_DURABLE_QUEUE,DELETE_DURABLE_QUEUE,CREATE_NON_DURABLE_QUEUE,DELETE_NON_DURABLE_QUEUE,MANAGE,BROWSE
>  as scopes, which I think are quite fine grained. Depending on the use case a 
> subscription also can be a resource.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-2861) Add queue name as a parameter to ActiveMQSecurityManager

2020-08-03 Thread Justin Bertram (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17170192#comment-17170192
 ] 

Justin Bertram commented on ARTEMIS-2861:
-

If you have additional questions about how to accomplish your goal I recommend 
you use the [ActiveMQ users mailing 
list|http://activemq.apache.org/contact/#mailing].

> Add queue name as a parameter to ActiveMQSecurityManager 
> -
>
> Key: ARTEMIS-2861
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2861
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Affects Versions: 2.14.0
>Reporter: Luís Alves
>Priority: Major
>
> Currently I was trying to integrate Artemis with OpenId Connect (Oauth2.0) 
> and User Managed Access 2.0 (UMA 2.0) using Keycloak implementation. I want 
> to have fine grained access control over operations over addresses and queues 
> (subscriptions) like described on 
> https://issues.apache.org/jira/browse/ARTEMIS-592. I've investigated as 
> proposed in 
> https://medium.com/@joelicious/extending-artemis-security-with-oauth2-7fd9b3dffe3
>  and it solves the authN part. For the authZ part I've already had some 
> feedback here 
> https://stackoverflow.com/questions/63191001/activemq-artemis-activemqsecuritymanager4-verify-clientid-subscription,
>  but I think org.apache.activemq.artemis.core.server.SecuritySettingPlugin 
> will not give the needed control. So I'm proposing that the latest 
> ActiveMQSecurityManager implementation adds the queue name, as the 
> calling method:
> {code:java}
>  @Override
>public void check(final SimpleString address,
>  final SimpleString queue,
>  final CheckType checkType,
>  final SecurityAuth session) throws Exception {
> {code}
> already has this information. 
> Using UMA 2.0 each address can be a resource and we can have: 
> SEND,CONSUME,CREATE_ADDRESS,DELETE_ADDRESS,CREATE_DURABLE_QUEUE,DELETE_DURABLE_QUEUE,CREATE_NON_DURABLE_QUEUE,DELETE_NON_DURABLE_QUEUE,MANAGE,BROWSE
>  as scopes, which I think are quite fine grained. Depending on the use case a 
> subscription also can be a resource.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (ARTEMIS-2861) Add queue name as a parameter to ActiveMQSecurityManager

2020-08-03 Thread Justin Bertram (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Justin Bertram updated ARTEMIS-2861:

Component/s: (was: API)

> Add queue name as a parameter to ActiveMQSecurityManager 
> -
>
> Key: ARTEMIS-2861
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2861
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Affects Versions: 2.14.0
>Reporter: Luís Alves
>Priority: Major
>
> Currently I was trying to integrate Artemis with OpenId Connect (Oauth2.0) 
> and User Managed Access 2.0 (UMA 2.0) using Keycloak implementation. I want 
> to have fine grained access control over operations over addresses and queues 
> (subscriptions) like described on 
> https://issues.apache.org/jira/browse/ARTEMIS-592. I've investigated as 
> proposed in 
> https://medium.com/@joelicious/extending-artemis-security-with-oauth2-7fd9b3dffe3
>  and it solves the authN part. For the authZ part I've already had some 
> feedback here 
> https://stackoverflow.com/questions/63191001/activemq-artemis-activemqsecuritymanager4-verify-clientid-subscription,
>  but I think org.apache.activemq.artemis.core.server.SecuritySettingPlugin 
> will not give the needed control. So I'm proposing that the latest 
> ActiveMQSecurityManager implementation adds the queue name, as the 
> calling method:
> {code:java}
>  @Override
>public void check(final SimpleString address,
>  final SimpleString queue,
>  final CheckType checkType,
>  final SecurityAuth session) throws Exception {
> {code}
> already has this information. 
> Using UMA 2.0 each address can be a resource and we can have: 
> SEND,CONSUME,CREATE_ADDRESS,DELETE_ADDRESS,CREATE_DURABLE_QUEUE,DELETE_DURABLE_QUEUE,CREATE_NON_DURABLE_QUEUE,DELETE_NON_DURABLE_QUEUE,MANAGE,BROWSE
>  as scopes, which I think are quite fine grained. Depending on the use case a 
> subscription also can be a resource.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (ARTEMIS-2861) Add queue name as a parameter to ActiveMQSecurityManager

2020-08-03 Thread Justin Bertram (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Justin Bertram resolved ARTEMIS-2861.
-
Resolution: Information Provided

> Add queue name as a parameter to ActiveMQSecurityManager 
> -
>
> Key: ARTEMIS-2861
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2861
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: API
>Affects Versions: 2.14.0
>Reporter: Luís Alves
>Priority: Major
>
> Currently I was trying to integrate Artemis with OpenId Connect (Oauth2.0) 
> and User Managed Access 2.0 (UMA 2.0) using Keycloak implementation. I want 
> to have fine grained access control over operations over addresses and queues 
> (subscriptions) like described on 
> https://issues.apache.org/jira/browse/ARTEMIS-592. I've investigated as 
> proposed in 
> https://medium.com/@joelicious/extending-artemis-security-with-oauth2-7fd9b3dffe3
>  and it solves the authN part. For the authZ part I've already had some 
> feedback here 
> https://stackoverflow.com/questions/63191001/activemq-artemis-activemqsecuritymanager4-verify-clientid-subscription,
>  but I think org.apache.activemq.artemis.core.server.SecuritySettingPlugin 
> will not give the needed control. So I'm proposing that the latest 
> ActiveMQSecurityManager implementation adds the queue name, as the 
> calling method:
> {code:java}
>  @Override
>public void check(final SimpleString address,
>  final SimpleString queue,
>  final CheckType checkType,
>  final SecurityAuth session) throws Exception {
> {code}
> already has this information. 
> Using UMA 2.0 each address can be a resource and we can have: 
> SEND,CONSUME,CREATE_ADDRESS,DELETE_ADDRESS,CREATE_DURABLE_QUEUE,DELETE_DURABLE_QUEUE,CREATE_NON_DURABLE_QUEUE,DELETE_NON_DURABLE_QUEUE,MANAGE,BROWSE
>  as scopes, which I think are quite fine grained. Depending on the use case a 
> subscription also can be a resource.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-2861) Add queue name as a parameter to ActiveMQSecurityManager

2020-08-03 Thread Justin Bertram (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17170172#comment-17170172
 ] 

Justin Bertram commented on ARTEMIS-2861:
-

As part of the work for ARTEMIS-592 the name of the queue is *already* passed 
into the {{check}} method as part of the {{address}} when creating consumers or 
browsers. The {{address}} and {{queue}} names are concatenated with a {{.}} 
character. You can simply decompose the name in your implementation of 
{{org.apache.activemq.artemis.spi.core.security.ActiveMQSecurityManager4}}.
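
A rough sketch of how that decomposition might look in an 
{{ActiveMQSecurityManager4}} implementation is below. The signature is the one 
quoted earlier in this thread; {{knownAddresses}} and the prefix-matching 
strategy are illustrative assumptions (one way to cope with queue names that 
themselves contain dots), not behaviour provided by the broker.

{code:java}
// Sketch only. Assumes the application knows its own address names up front
// (knownAddresses); prefix matching is one way to split "address.queue" when
// queue names such as "myClient.mySub" contain dots themselves.
public String validateUserAndRole(String user,
                                  String password,
                                  Set roles,
                                  CheckType checkType,
                                  String address,
                                  RemotingConnection remotingConnection,
                                  String securityDomain) {
   String addressPart = address;
   String queuePart = null;
   if (checkType == CheckType.CONSUME || checkType == CheckType.CREATE_DURABLE_QUEUE) {
      for (String known : knownAddresses) {                    // e.g. "org.activemq.premium.news"
         if (address.startsWith(known + ".")) {
            addressPart = known;
            queuePart = address.substring(known.length() + 1); // e.g. "myClient.mySub"
            break;
         }
      }
   }
   // map addressPart/queuePart plus checkType onto the UMA resource/scope check here
   return user;   // return the validated user on success
}
{code}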

> Add queue name as a parameter to ActiveMQSecurityManager 
> -
>
> Key: ARTEMIS-2861
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2861
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: API
>Affects Versions: 2.14.0
>Reporter: Luís Alves
>Priority: Major
>
> Currently I was trying to integrate Artemis with OpenId Connect (Oauth2.0) 
> and User Managed Access 2.0 (UMA 2.0) using Keycloak implementation. I want 
> to have fine grained access control over operations over addresses and queues 
> (subscriptions) like described on 
> https://issues.apache.org/jira/browse/ARTEMIS-592. I've investigated as 
> proposed in 
> https://medium.com/@joelicious/extending-artemis-security-with-oauth2-7fd9b3dffe3
>  and it solves the authN part. For the authZ part I've already had some 
> feedback here 
> https://stackoverflow.com/questions/63191001/activemq-artemis-activemqsecuritymanager4-verify-clientid-subscription,
>  but I think org.apache.activemq.artemis.core.server.SecuritySettingPlugin 
> will not give the needed control. So I'm proposing that the latest 
> ActiveMQSecurityManager implementation adds the queue name, as the 
> calling method:
> {code:java}
>  @Override
>public void check(final SimpleString address,
>  final SimpleString queue,
>  final CheckType checkType,
>  final SecurityAuth session) throws Exception {
> {code}
> already has this information. 
> Using UMA 2.0 each address can be a resource and we can have: 
> SEND,CONSUME,CREATE_ADDRESS,DELETE_ADDRESS,CREATE_DURABLE_QUEUE,DELETE_DURABLE_QUEUE,CREATE_NON_DURABLE_QUEUE,DELETE_NON_DURABLE_QUEUE,MANAGE,BROWSE
>  as scopes, which I think are quite fine grained. Depending on the use case a 
> subscription also can be a resource.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (ARTEMIS-2863) Support pausing dispatch during group rebalance

2020-08-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2863?focusedWorklogId=465768&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-465768
 ]

ASF GitHub Bot logged work on ARTEMIS-2863:
---

Author: ASF GitHub Bot
Created on: 03/Aug/20 15:51
Start Date: 03/Aug/20 15:51
Worklog Time Spent: 10m 
  Work Description: clebertsuconic commented on pull request #3230:
URL: https://github.com/apache/activemq-artemis/pull/3230#issuecomment-668098882


   You actually rebased with a merge commit. You should fix your merge in 
there and have it flat, please?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 465768)
Time Spent: 0.5h  (was: 20m)

> Support pausing dispatch during group rebalance
> ---
>
> Key: ARTEMIS-2863
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2863
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 2.14.0
>Reporter: Michael Andre Pearce
>Assignee: Michael Andre Pearce
>Priority: Major
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Currently, dispatch is not paused on rebalance; as such, inflight messages to 
> a consumer being rebalanced may cause out-of-order issues, as the new consumer 
> may be dispatched a message for the same group and process it faster.
>  
> As such it would be good to have the ability to pause dispatch when a group 
> rebalance occurs, waiting until the delivering ("inflight") messages are 
> processed before re-dispatching.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (ARTEMIS-2863) Support pausing dispatch during group rebalance

2020-08-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2863?focusedWorklogId=465766&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-465766
 ]

ASF GitHub Bot logged work on ARTEMIS-2863:
---

Author: ASF GitHub Bot
Created on: 03/Aug/20 15:48
Start Date: 03/Aug/20 15:48
Worklog Time Spent: 10m 
  Work Description: clebertsuconic commented on pull request #3230:
URL: https://github.com/apache/activemq-artemis/pull/3230#issuecomment-668097201


   Can you rebase? You have a merge branch commit in there.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 465766)
Time Spent: 20m  (was: 10m)

> Support pausing dispatch during group rebalance
> ---
>
> Key: ARTEMIS-2863
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2863
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 2.14.0
>Reporter: Michael Andre Pearce
>Assignee: Michael Andre Pearce
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Currently, dispatch is not paused on rebalance; as such, inflight messages to 
> a consumer being rebalanced may cause out-of-order issues, as the new consumer 
> may be dispatched a message for the same group and process it faster.
>  
> As such it would be good to have the ability to pause dispatch when a group 
> rebalance occurs, waiting until the delivering ("inflight") messages are 
> processed before re-dispatching.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (ARTEMIS-2863) Support pausing dispatch during group rebalance

2020-08-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2863?focusedWorklogId=465752&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-465752
 ]

ASF GitHub Bot logged work on ARTEMIS-2863:
---

Author: ASF GitHub Bot
Created on: 03/Aug/20 15:27
Start Date: 03/Aug/20 15:27
Worklog Time Spent: 10m 
  Work Description: michaelandrepearce opened a new pull request #3230:
URL: https://github.com/apache/activemq-artemis/pull/3230


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 465752)
Remaining Estimate: 0h
Time Spent: 10m

> Support pausing dispatch during group rebalance
> ---
>
> Key: ARTEMIS-2863
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2863
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 2.14.0
>Reporter: Michael Andre Pearce
>Assignee: Michael Andre Pearce
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently, dispatch is not paused on rebalance; as such, inflight messages to 
> a consumer being rebalanced may cause out-of-order issues, as the new consumer 
> may be dispatched a message for the same group and process it faster.
>  
> As such it would be good to have the ability to pause dispatch when a group 
> rebalance occurs, waiting until the delivering ("inflight") messages are 
> processed before re-dispatching.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (ARTEMIS-2863) Support pausing dispatch during group rebalance

2020-08-03 Thread Michael Andre Pearce (Jira)
Michael Andre Pearce created ARTEMIS-2863:
-

 Summary: Support pausing dispatch during group rebalance
 Key: ARTEMIS-2863
 URL: https://issues.apache.org/jira/browse/ARTEMIS-2863
 Project: ActiveMQ Artemis
  Issue Type: Improvement
  Components: Broker
Affects Versions: 2.14.0
Reporter: Michael Andre Pearce
Assignee: Michael Andre Pearce


Currently, dispatch is not paused on rebalance; as such, inflight messages to a 
consumer being rebalanced may cause out-of-order issues, as the new consumer 
may be dispatched a message for the same group and process it faster.

 

As such it would be good to have the ability to pause dispatch when a group 
rebalance occurs, waiting until the delivering ("inflight") messages are 
processed before re-dispatching.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (ARTEMIS-2852) Huge performance decrease between versions 2.2.0 and 2.13.0

2020-08-03 Thread Justin Bertram (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17170088#comment-17170088
 ] 

Justin Bertram edited comment on ARTEMIS-2852 at 8/3/20, 2:58 PM:
--

bq. ...keeping the configuration exactly the same is not our priority.  I.e. we 
are not trying to do apple-to-apple comparison.

That's fair enough, but for what it's worth you opened this issue specifically 
by comparing the performance of 2.13.0 and 2.2.0. If you're not trying to do an 
apple-to-apple comparison it's confusing to identify a problem in the context 
of such a comparison. If the comparison isn't actually apple-to-apple then I'm 
not sure this issue is valid as it currently stands. How can you say 
performance has decreased when you aren't conducting the same test? I'm not 
saying there's no issue here potentially. I'm just saying it might have nothing 
to do with any kind of regression.


was (Author: jbertram):
> ...keeping the configuration exactly the same is not our priority.  I.e. we 
> are not trying to do apple-to-apple comparison.

That's fair enough, but for what it's worth you opened this issue specifically 
by comparing the performance of 2.13.0 and 2.2.0. If you're not trying to do an 
apple-to-apple comparison it's confusing to identify a problem in the context 
of such a comparison. If the comparison isn't actually apple-to-apple then I'm 
not sure this issue is valid as it currently stands. How can you say 
performance has decreased when you aren't conducting the same test? I'm not 
saying there's no issue here potentially. I'm just saying it might have nothing 
to do with any kind of regression.

> Huge performance decrease between versions 2.2.0 and 2.13.0
> ---
>
> Key: ARTEMIS-2852
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2852
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Kasper Kondzielski
>Priority: Major
> Attachments: Selection_433.png, Selection_434.png, Selection_440.png, 
> Selection_441.png
>
>
> Hi,
> Recently, we started to prepare a new revision of our blog-post in which we 
> test various implementations of replicated queues. Previous version can be 
> found here:  [https://softwaremill.com/mqperf/]
> We updated artemis binary to 2.13.0, regenerated configuration file and 
> applied all the performance tricks you told us last time. In particular these 
> were:
>  * the {{Xmx}} java parameter bumped to {{16G (now bumped to 48G)}}
>  * in {{broker.xml}}, the {{global-max-size}} setting changed to {{8G (this 
> one we forgot to set, but we suspect that it is not the issue)}}
>  * {{journal-type}} set to {{MAPPED}}
>  * {{journal-datasync}}, {{journal-sync-non-transactional}} and 
> {{journal-sync-transactional}} all set to false
> Apart from that, we changed the machine type we use to r5.2xlarge (8 cores, 64 
> GiB memory, network bandwidth up to 10 Gbps, storage bandwidth up to 4,750 
> Mbps) and we decided to always run twice as many receivers as senders.
> From our tests it looks like version 2.13.0 is not scaling as well, with the 
> increase of senders and receivers, as version 2.2.0 (previously tested). 
> Basically it is not scaling at all, as the throughput stays at almost the same 
> level, while previously it used to grow linearly.
> Here you can find our tests results for both versions: 
> [https://docs.google.com/spreadsheets/d/1kr9fzSNLD8bOhMkP7K_4axBQiKel1aJtpxsBCOy9ugU/edit?usp=sharing]
> We are aware that there is now a dedicated page in the documentation about 
> performance tuning, but we are surprised that the same settings as before 
> perform much worse.
> Maybe there is an obvious property that we overlooked which should be turned 
> on?
> All changes between those versions together with the final configuration can 
> be found on this merged PR: 
> [https://github.com/softwaremill/mqperf/commit/6bfae489e11a250dc9e6ef59719782f839e8874a]
>  
> Charts showing the machines' usage are in the attachments. Memory consumed by 
> the artemis process didn't exceed ~16 GB. Bandwidth and CPU weren't 
> bottlenecks either.
> p.s. I wanted to ask this question on the mailing list/Nabble forum first, but 
> it seems that I don't have permissions to do so even though I registered & 
> subscribed. Is that intentional?
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-2852) Huge performance decrease between versions 2.2.0 and 2.13.0

2020-08-03 Thread Justin Bertram (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17170088#comment-17170088
 ] 

Justin Bertram commented on ARTEMIS-2852:
-

> ...keeping the configuration exactly the same is not our priority.  I.e. we 
> are not trying to do apple-to-apple comparison.

That's fair enough, but for what it's worth you opened this issue specifically 
by comparing the performance of 2.13.0 and 2.2.0. If you're not trying to do an 
apple-to-apple comparison it's confusing to identify a problem in the context 
of such a comparison. If the comparison isn't actually apple-to-apple then I'm 
not sure this issue is valid as it currently stands. How can you say 
performance has decreased when you aren't conducting the same test? I'm not 
saying there's no issue here potentially. I'm just saying it might have nothing 
to do with any kind of regression.

> Huge performance decrease between versions 2.2.0 and 2.13.0
> ---
>
> Key: ARTEMIS-2852
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2852
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Kasper Kondzielski
>Priority: Major
> Attachments: Selection_433.png, Selection_434.png, Selection_440.png, 
> Selection_441.png
>
>
> Hi,
> Recently, we started to prepare a new revision of our blog-post in which we 
> test various implementations of replicated queues. Previous version can be 
> found here:  [https://softwaremill.com/mqperf/]
> We updated artemis binary to 2.13.0, regenerated configuration file and 
> applied all the performance tricks you told us last time. In particular these 
> were:
>  * the {{Xmx}} java parameter bumped to {{16G (now bumped to 48G)}}
>  * in {{broker.xml}}, the {{global-max-size}} setting changed to {{8G (this 
> one we forgot to set, but we suspect that it is not the issue)}}
>  * {{journal-type}} set to {{MAPPED}}
>  * {{journal-datasync}}, {{journal-sync-non-transactional}} and 
> {{journal-sync-transactional}} all set to false
> Apart from that, we changed the machine type we use to r5.2xlarge (8 cores, 64 
> GiB memory, network bandwidth up to 10 Gbps, storage bandwidth up to 4,750 
> Mbps) and we decided to always run twice as many receivers as senders.
> From our tests it looks like version 2.13.0 is not scaling as well, with the 
> increase of senders and receivers, as version 2.2.0 (previously tested). 
> Basically it is not scaling at all, as the throughput stays at almost the same 
> level, while previously it used to grow linearly.
> Here you can find our tests results for both versions: 
> [https://docs.google.com/spreadsheets/d/1kr9fzSNLD8bOhMkP7K_4axBQiKel1aJtpxsBCOy9ugU/edit?usp=sharing]
> We are aware that there is now a dedicated page in the documentation about 
> performance tuning, but we are surprised that the same settings as before 
> perform much worse.
> Maybe there is an obvious property that we overlooked which should be turned 
> on?
> All changes between those versions together with the final configuration can 
> be found on this merged PR: 
> [https://github.com/softwaremill/mqperf/commit/6bfae489e11a250dc9e6ef59719782f839e8874a]
>  
> Charts showing the machines' usage are in the attachments. Memory consumed by 
> the artemis process didn't exceed ~16 GB. Bandwidth and CPU weren't 
> bottlenecks either.
> p.s. I wanted to ask this question on the mailing list/Nabble forum first, but 
> it seems that I don't have permissions to do so even though I registered & 
> subscribed. Is that intentional?
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (ARTEMIS-2852) Huge performance decrease between versions 2.2.0 and 2.13.0

2020-08-03 Thread Francesco Nigro (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17170044#comment-17170044
 ] 

Francesco Nigro edited comment on ARTEMIS-2852 at 8/3/20, 2:49 PM:
---

[~kkondzielski] I've noticed another important thing: on 2.13 to disable 
buffering on MAPPED journal the 

{code:xml}
<journal-buffer-timeout>0</journal-buffer-timeout>
{code}

should be set like this, see comment on broker.xml:
{quote} Note: If you specify 0 the system will perform writes directly to the 
disk.
We recommend this to be 0 if you are using journalType=MAPPED and 
journal-datasync=false.{quote}


was (Author: nigro@gmail.com):
[~kkondzielski] I've noticed another important thing: on 2.13 to disable 
buffering on MAPPED journal the 

{code:xml}
<journal-buffer-timeout>0</journal-buffer-timeout>
{code}

should be set like this, see comment on broker.xml:
{quote} Note: If you specify 0 the system will perform writes directly to the 
disk.
We recommend this to be 0 if you are using journalType=MAPPED and 
journal-datasync=false.{quote}

> Huge performance decrease between versions 2.2.0 and 2.13.0
> ---
>
> Key: ARTEMIS-2852
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2852
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Kasper Kondzielski
>Priority: Major
> Attachments: Selection_433.png, Selection_434.png, Selection_440.png, 
> Selection_441.png
>
>
> Hi,
> Recently, we started to prepare a new revision of our blog-post in which we 
> test various implementations of replicated queues. Previous version can be 
> found here:  [https://softwaremill.com/mqperf/]
> We updated artemis binary to 2.13.0, regenerated configuration file and 
> applied all the performance tricks you told us last time. In particular these 
> were:
>  * the {{Xmx}} java parameter bumped to {{16G (now bumped to 48G)}}
>  * in {{broker.xml}}, the {{global-max-size}} setting changed to {{8G (this 
> one we forgot to set, but we suspect that it is not the issue)}}
>  * {{journal-type}} set to {{MAPPED}}
>  * {{journal-datasync}}, {{journal-sync-non-transactional}} and 
> {{journal-sync-transactional}} all set to false
> Apart from that, we changed the machine type we use to r5.2xlarge (8 cores, 64 
> GiB memory, network bandwidth up to 10 Gbps, storage bandwidth up to 4,750 
> Mbps) and we decided to always run twice as many receivers as senders.
> From our tests it looks like version 2.13.0 is not scaling as well, with the 
> increase of senders and receivers, as version 2.2.0 (previously tested). 
> Basically it is not scaling at all, as the throughput stays at almost the same 
> level, while previously it used to grow linearly.
> Here you can find our tests results for both versions: 
> [https://docs.google.com/spreadsheets/d/1kr9fzSNLD8bOhMkP7K_4axBQiKel1aJtpxsBCOy9ugU/edit?usp=sharing]
> We are aware that there is now a dedicated page in the documentation about 
> performance tuning, but we are surprised that the same settings as before 
> perform much worse.
> Maybe there is an obvious property that we overlooked which should be turned 
> on?
> All changes between those versions together with the final configuration can 
> be found on this merged PR: 
> [https://github.com/softwaremill/mqperf/commit/6bfae489e11a250dc9e6ef59719782f839e8874a]
>  
> Charts showing the machines' usage are in the attachments. Memory consumed by 
> the artemis process didn't exceed ~16 GB. Bandwidth and CPU weren't 
> bottlenecks either.
> p.s. I wanted to ask this question on the mailing list/Nabble forum first, but 
> it seems that I don't have permissions to do so even though I registered & 
> subscribed. Is that intentional?
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-2852) Huge performance decrease between versions 2.2.0 and 2.13.0

2020-08-03 Thread Kasper Kondzielski (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17170081#comment-17170081
 ] 

Kasper Kondzielski commented on ARTEMIS-2852:
-

?? I've noticed another important thing: on 2.13 to disable buffering on 
MAPPED journal the...??

Great catch, thanks!

??I cannot say if it's the motivation behind the scalability issue, but I think 
that to have a proper apple-to-apple comparison makes sense to have a 
similar/same configuration.??

To be honest we are more interested in finding out what the artemis mq 
throughput is when compared to other queues rather than to its previous 
versions. Although having our tests span multiple versions of a given queue 
gives us the additional benefit of seeing how the performance has changed over 
time, keeping the configuration exactly the same is not our priority. I.e. we 
are not trying to do apple-to-apple comparison.

> Huge performance decrease between versions 2.2.0 and 2.13.0
> ---
>
> Key: ARTEMIS-2852
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2852
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Kasper Kondzielski
>Priority: Major
> Attachments: Selection_433.png, Selection_434.png, Selection_440.png, 
> Selection_441.png
>
>
> Hi,
> Recently, we started to prepare a new revision of our blog-post in which we 
> test various implementations of replicated queues. Previous version can be 
> found here:  [https://softwaremill.com/mqperf/]
> We updated artemis binary to 2.13.0, regenerated configuration file and 
> applied all the performance tricks you told us last time. In particular these 
> were:
>  * the {{Xmx}} java parameter bumped to {{16G (now bumped to 48G)}}
>  * in {{broker.xml}}, the {{global-max-size}} setting changed to {{8G (this 
> one we forgot to set, but we suspect that it is not the issue)}}
>  * {{journal-type}} set to {{MAPPED}}
>  * {{journal-datasync}}, {{journal-sync-non-transactional}} and 
> {{journal-sync-transactional}} all set to false
> Apart from that, we changed the machine type we use to r5.2xlarge (8 cores, 64 
> GiB memory, network bandwidth up to 10 Gbps, storage bandwidth up to 4,750 
> Mbps) and we decided to always run twice as many receivers as senders.
> From our tests it looks like version 2.13.0 is not scaling as well, with the 
> increase of senders and receivers, as version 2.2.0 (previously tested). 
> Basically it is not scaling at all, as the throughput stays at almost the same 
> level, while previously it used to grow linearly.
> Here you can find our tests results for both versions: 
> [https://docs.google.com/spreadsheets/d/1kr9fzSNLD8bOhMkP7K_4axBQiKel1aJtpxsBCOy9ugU/edit?usp=sharing]
> We are aware that there is now a dedicated page in the documentation about 
> performance tuning, but we are surprised that the same settings as before 
> perform much worse.
> Maybe there is an obvious property that we overlooked which should be turned 
> on?
> All changes between those versions together with the final configuration can 
> be found on this merged PR: 
> [https://github.com/softwaremill/mqperf/commit/6bfae489e11a250dc9e6ef59719782f839e8874a]
>  
> Charts showing the machines' usage are in the attachments. Memory consumed by 
> the artemis process didn't exceed ~16 GB. Bandwidth and CPU weren't 
> bottlenecks either.
> p.s. I wanted to ask this question on the mailing list/Nabble forum first, but 
> it seems that I don't have permissions to do so even though I registered & 
> subscribed. Is that intentional?
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (ARTEMIS-2862) Activation failure can result in zombie broker

2020-08-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2862?focusedWorklogId=465728&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-465728
 ]

ASF GitHub Bot logged work on ARTEMIS-2862:
---

Author: ASF GitHub Bot
Created on: 03/Aug/20 14:38
Start Date: 03/Aug/20 14:38
Worklog Time Spent: 10m 
  Work Description: jbertram opened a new pull request #3229:
URL: https://github.com/apache/activemq-artemis/pull/3229


   In certain cases with shared-store HA a broker's activation can fail but
   the broker will still be holding the journal lock. This results in a
   "zombie" broker which can't actually service clients and prevents the
   backup from activating.
   
   This commit adds an ActivationFailureListener to catch activation
   failures and stop the broker completely.
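
A minimal sketch of that idea on an embedded ActiveMQServer is below; the 
listener and registration method names are assumptions taken from the commit 
description rather than from the patch itself.

{code:java}
// Sketch only: stop the broker when activation fails so it releases the
// shared-store journal lock and the backup can activate.
// registerActivationFailureListener and the callback shape are assumed names.
server.registerActivationFailureListener(exception -> {
   try {
      server.stop();
   } catch (Exception e) {
      // the broker is already in a failed state; just log and give up the lock
   }
});
{code}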



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 465728)
Remaining Estimate: 0h
Time Spent: 10m

> Activation failure can result in zombie broker
> --
>
> Key: ARTEMIS-2862
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2862
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Justin Bertram
>Assignee: Justin Bertram
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In certain cases with shared-store HA a broker's activation can fail but the 
> broker will still be holding the journal lock. This results in a "zombie" 
> broker which can't actually service clients and prevents the backup from 
> activating.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (ARTEMIS-2862) Activation failure can result in zombie broker

2020-08-03 Thread Justin Bertram (Jira)
Justin Bertram created ARTEMIS-2862:
---

 Summary: Activation failure can result in zombie broker
 Key: ARTEMIS-2862
 URL: https://issues.apache.org/jira/browse/ARTEMIS-2862
 Project: ActiveMQ Artemis
  Issue Type: Bug
Reporter: Justin Bertram
Assignee: Justin Bertram


In certain cases with shared-store HA a broker's activation can fail but the 
broker will still be holding the journal lock. This results in a "zombie" 
broker which can't actually service clients and prevents the backup from 
activating.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (ARTEMIS-2852) Huge performance decrease between versions 2.2.0 and 2.13.0

2020-08-03 Thread Francesco Nigro (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17170029#comment-17170029
 ] 

Francesco Nigro edited comment on ARTEMIS-2852 at 8/3/20, 2:07 PM:
---

[~kkondzielski]

{quote}So, we will probably need to decrease this value down to 32 GB which is 
the largest value that supports coops by default, right? {quote}

Yep, and I see too that -XX:+UseStringDeduplication is enabled on 2.13.0 while 
it isn't on 2.2.0: maybe it could increase some of the GC phase costs, and it 
would be better to drop it on 2.13.0.

I cannot say if it's the motivation behind the scalability issue, but I think 
that to have a proper apple-to-apple comparison makes sense to have a 
similar/same configuration.


was (Author: nigro@gmail.com):
[~kkondzielski]

{quote}So, we will probably need to decrease this value down to 32 GB which is 
the largest value that supports coops by default, right? {quote}

Yep, and I see too that -XX:+UseStringDeduplication is enabled on 2.13.0 while 
it isn't on 2.2.0: maybe it could increase some of the GC phase costs, and it 
would be better to drop it on 2.13.0.

I cannot say if it's the motivation behind the scalability issue, but I think 
that to have a proper apple-to-apple comparison it makes sense to make the 2 
configurations as similar as possible.

> Huge performance decrease between versions 2.2.0 and 2.13.0
> ---
>
> Key: ARTEMIS-2852
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2852
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Kasper Kondzielski
>Priority: Major
> Attachments: Selection_433.png, Selection_434.png, Selection_440.png, 
> Selection_441.png
>
>
> Hi,
> Recently, we started to prepare a new revision of our blog-post in which we 
> test various implementations of replicated queues. Previous version can be 
> found here:  [https://softwaremill.com/mqperf/]
> We updated artemis binary to 2.13.0, regenerated configuration file and 
> applied all the performance tricks you told us last time. In particular these 
> were:
>  * the {{Xmx}} java parameter bumped to {{16G (now bumped to 48G)}}
>  * in {{broker.xml}}, the {{global-max-size}} setting changed to {{8G (this 
> one we forgot to set, but we suspect that it is not the issue)}}
>  * {{journal-type}} set to {{MAPPED}}
>  * {{journal-datasync}}, {{journal-sync-non-transactional}} and 
> {{journal-sync-transactional}} all set to false
> Apart from that, we changed the machine type we use to r5.2xlarge (8 cores, 64 
> GiB memory, network bandwidth up to 10 Gbps, storage bandwidth up to 4,750 
> Mbps) and we decided to always run twice as many receivers as senders.
> From our tests it looks like version 2.13.0 is not scaling as well, with the 
> increase of senders and receivers, as version 2.2.0 (previously tested). 
> Basically it is not scaling at all, as the throughput stays at almost the same 
> level, while previously it used to grow linearly.
> Here you can find our tests results for both versions: 
> [https://docs.google.com/spreadsheets/d/1kr9fzSNLD8bOhMkP7K_4axBQiKel1aJtpxsBCOy9ugU/edit?usp=sharing]
> We are aware that there is now a dedicated page in the documentation about 
> performance tuning, but we are surprised that the same settings as before 
> perform much worse.
> Maybe there is an obvious property that we overlooked which should be turned 
> on?
> All changes between those versions together with the final configuration can 
> be found on this merged PR: 
> [https://github.com/softwaremill/mqperf/commit/6bfae489e11a250dc9e6ef59719782f839e8874a]
>  
> Charts showing the machines' usage are in the attachments. Memory consumed by 
> the artemis process didn't exceed ~16 GB. Bandwidth and CPU weren't 
> bottlenecks either.
> p.s. I wanted to ask this question on the mailing list/Nabble forum first, but 
> it seems that I don't have permissions to do so even though I registered & 
> subscribed. Is that intentional?
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-2852) Huge performance decrease between versions 2.2.0 and 2.13.0

2020-08-03 Thread Francesco Nigro (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17170044#comment-17170044
 ] 

Francesco Nigro commented on ARTEMIS-2852:
--

[~kkondzielski] I've noticed another important thing: on 2.13 to disable 
buffering on MAPPED journal the 

{code:xml}
<journal-buffer-timeout>0</journal-buffer-timeout>
{code}

should be set like this, see comment on broker.xml:
{quote} Note: If you specify 0 the system will perform writes directly to the 
disk.
We recommend this to be 0 if you are using journalType=MAPPED and 
journal-datasync=false.{quote}
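
Put together with the journal settings listed in the issue description, the 
relevant broker.xml fragment would look roughly like this (a sketch of the 
journal elements only):

{code:xml}
<!-- journal tuning discussed in this thread; goes inside <core> in broker.xml -->
<journal-type>MAPPED</journal-type>
<journal-datasync>false</journal-datasync>
<journal-sync-non-transactional>false</journal-sync-non-transactional>
<journal-sync-transactional>false</journal-sync-transactional>
<!-- 0 = write directly to disk; recommended with MAPPED and datasync=false -->
<journal-buffer-timeout>0</journal-buffer-timeout>
{code}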

> Huge performance decrease between versions 2.2.0 and 2.13.0
> ---
>
> Key: ARTEMIS-2852
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2852
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Kasper Kondzielski
>Priority: Major
> Attachments: Selection_433.png, Selection_434.png, Selection_440.png, 
> Selection_441.png
>
>
> Hi,
> Recently, we started to prepare a new revision of our blog-post in which we 
> test various implementations of replicated queues. Previous version can be 
> found here:  [https://softwaremill.com/mqperf/]
> We updated artemis binary to 2.13.0, regenerated configuration file and 
> applied all the performance tricks you told us last time. In particular these 
> were:
>  * the {{Xmx}} java parameter bumped to {{16G (now bumped to 48G)}}
>  * in {{broker.xml}}, the {{global-max-size}} setting changed to {{8G (this 
> one we forgot to set, but we suspect that it is not the issue)}}
>  * {{journal-type}} set to {{MAPPED}}
>  * {{journal-datasync}}, {{journal-sync-non-transactional}} and 
> {{journal-sync-transactional}} all set to false
> Apart from that, we changed the machine type we use to r5.2xlarge (8 cores, 64 
> GiB memory, network bandwidth up to 10 Gbps, storage bandwidth up to 4,750 
> Mbps) and we decided to always run twice as many receivers as senders.
> From our tests it looks like version 2.13.0 is not scaling as well, with the 
> increase of senders and receivers, as version 2.2.0 (previously tested). 
> Basically it is not scaling at all, as the throughput stays at almost the same 
> level, while previously it used to grow linearly.
> Here you can find our tests results for both versions: 
> [https://docs.google.com/spreadsheets/d/1kr9fzSNLD8bOhMkP7K_4axBQiKel1aJtpxsBCOy9ugU/edit?usp=sharing]
> We are aware that there is now a dedicated page in the documentation about 
> performance tuning, but we are surprised that the same settings as before 
> perform much worse.
> Maybe there is an obvious property that we overlooked which should be turned 
> on?
> All changes between those versions together with the final configuration can 
> be found on this merged PR: 
> [https://github.com/softwaremill/mqperf/commit/6bfae489e11a250dc9e6ef59719782f839e8874a]
>  
> Charts showing the machines' usage are in the attachments. Memory consumed by 
> the artemis process didn't exceed ~16 GB. Bandwidth and CPU weren't 
> bottlenecks either.
> p.s. I wanted to ask this question on the mailing list/Nabble forum first, but 
> it seems that I don't have permissions to do so even though I registered & 
> subscribed. Is that intentional?
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (ARTEMIS-2852) Huge performance decrease between versions 2.2.0 and 2.13.0

2020-08-03 Thread Francesco Nigro (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17170029#comment-17170029
 ] 

Francesco Nigro edited comment on ARTEMIS-2852 at 8/3/20, 1:17 PM:
---

[~kkondzielski]

{quote}So, we will probably need to decrease this value down to 32 GB which is 
the largest value that supports coops by default, right? {quote}

Yep, and I see too that -XX:+UseStringDeduplication is enabled on 2.13.0 while 
it isn't on 2.2.0: maybe it could increase some of the GC phase costs, and it 
would be better to drop it on 2.13.0.

I cannot say if it's the motivation behind the scalability issue, but I think 
that to have a proper apple-to-apple comparison it makes sense to make the 2 
configurations as similar as possible.


was (Author: nigro@gmail.com):
[~kkondzielski]

{quote}So, we will probably need to decrease this value down to 32 GB which is 
the largest value that supports coops by default, right? {quote}

Yep, and I see too that -XX:+UseStringDeduplication is enabled on 2.13.0 while 
it isn't on 2.2.0: maybe it could increase some of the GC phase costs, and it 
would be better to drop it on 2.13.0.

> Huge performance decrease between versions 2.2.0 and 2.13.0
> ---
>
> Key: ARTEMIS-2852
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2852
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Kasper Kondzielski
>Priority: Major
> Attachments: Selection_433.png, Selection_434.png, Selection_440.png, 
> Selection_441.png
>
>
> Hi,
> Recently, we started to prepare a new revision of our blog-post in which we 
> test various implementations of replicated queues. Previous version can be 
> found here:  [https://softwaremill.com/mqperf/]
> We updated artemis binary to 2.13.0, regenerated configuration file and 
> applied all the performance tricks you told us last time. In particular these 
> were:
>  * the {{Xmx}} java parameter bumped to {{16G (now bumped to 48G)}}
>  * in {{broker.xml}}, the {{global-max-size}} setting changed to {{8G (this 
> one we forgot to set, but we suspect that it is not the issue)}}
>  * {{journal-type}} set to {{MAPPED}}
>  * {{journal-datasync}}, {{journal-sync-non-transactional}} and 
> {{journal-sync-transactional}} all set to false
> Apart from that, we changed the machine type we use to r5.2xlarge (8 cores, 64 
> GiB memory, network bandwidth up to 10 Gbps, storage bandwidth up to 4,750 
> Mbps), and we decided to always run twice as many receivers as senders.
> From our tests it looks like version 2.13.0 is not scaling as well with the 
> increase of senders and receivers as version 2.2.0 (previously tested) did. 
> Basically it is not scaling at all, as the throughput stays at almost the same 
> level, while previously it used to grow linearly.
> Here you can find our test results for both versions: 
> [https://docs.google.com/spreadsheets/d/1kr9fzSNLD8bOhMkP7K_4axBQiKel1aJtpxsBCOy9ugU/edit?usp=sharing]
> We are aware that there is now a dedicated page in the documentation about 
> performance tuning, but we are surprised that the same settings as before 
> perform much worse.
> Maybe there is an obvious property which we overlooked and which should be 
> turned on?
> All changes between those versions, together with the final configuration, can 
> be found in this merged PR: 
> [https://github.com/softwaremill/mqperf/commit/6bfae489e11a250dc9e6ef59719782f839e8874a]
>  
> Charts showing machine usage are in the attachments. Memory consumed by the 
> Artemis process didn't exceed ~16 GB. Bandwidth and CPU weren't bottlenecks 
> either.
> p.s. I wanted to ask this question on mailing list/nabble forum first but it 
> seems that I don't have permissions to do so even though I registered & 
> subscribed. Is that intentional?
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-2852) Huge performance decrease between versions 2.2.0 and 2.13.0

2020-08-03 Thread Francesco Nigro (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17170029#comment-17170029
 ] 

Francesco Nigro commented on ARTEMIS-2852:
--

[~kkondzielski]

{quote}So, we will probably need to decrease this value down to 32 GB which is 
the largest value that supports coops by default, right? {quote}

Yep, and I also see that -XX:+UseStringDeduplication is enabled on 2.13.0 while it 
isn't on 2.2.0: maybe it could increase the cost of some of the GC phases, and it 
would be better to drop it on 2.13.0

> Huge performance decrease between versions 2.2.0 and 2.13.0
> ---
>
> Key: ARTEMIS-2852
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2852
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Kasper Kondzielski
>Priority: Major
> Attachments: Selection_433.png, Selection_434.png, Selection_440.png, 
> Selection_441.png
>
>
> Hi,
> Recently, we started to prepare a new revision of our blog-post in which we 
> test various implementations of replicated queues. Previous version can be 
> found here:  [https://softwaremill.com/mqperf/]
> We updated artemis binary to 2.13.0, regenerated configuration file and 
> applied all the performance tricks you told us last time. In particular these 
> were:
>  * the {{Xmx}} java parameter bumped to {{16G (now bumped to 48G)}}
>  * in {{broker.xml}}, the {{global-max-size}} setting changed to {{8G (this 
> one we forgot to set, but we suspect that it is not the issue)}}
>  * {{journal-type}} set to {{MAPPED}}
>  * {{journal-datasync}}, {{journal-sync-non-transactional}} and 
> {{journal-sync-transactional}} all set to false
> Apart from that, we changed the machine type we use to r5.2xlarge (8 cores, 64 
> GiB memory, network bandwidth up to 10 Gbps, storage bandwidth up to 4,750 
> Mbps), and we decided to always run twice as many receivers as senders.
> From our tests it looks like version 2.13.0 is not scaling as well with the 
> increase of senders and receivers as version 2.2.0 (previously tested) did. 
> Basically it is not scaling at all, as the throughput stays at almost the same 
> level, while previously it used to grow linearly.
> Here you can find our test results for both versions: 
> [https://docs.google.com/spreadsheets/d/1kr9fzSNLD8bOhMkP7K_4axBQiKel1aJtpxsBCOy9ugU/edit?usp=sharing]
> We are aware that there is now a dedicated page in the documentation about 
> performance tuning, but we are surprised that the same settings as before 
> perform much worse.
> Maybe there is an obvious property which we overlooked and which should be 
> turned on?
> All changes between those versions, together with the final configuration, can 
> be found in this merged PR: 
> [https://github.com/softwaremill/mqperf/commit/6bfae489e11a250dc9e6ef59719782f839e8874a]
>  
> Charts showing machine usage are in the attachments. Memory consumed by the 
> Artemis process didn't exceed ~16 GB. Bandwidth and CPU weren't bottlenecks 
> either.
> p.s. I wanted to ask this question on mailing list/nabble forum first but it 
> seems that I don't have permissions to do so even though I registered & 
> subscribed. Is that intentional?
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (ARTEMIS-2852) Huge performance decrease between versions 2.2.0 and 2.13.0

2020-08-03 Thread Francesco Nigro (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17169990#comment-17169990
 ] 

Francesco Nigro edited comment on ARTEMIS-2852 at 8/3/20, 1:04 PM:
---

Another question: I see 

{quote}the Xmx java parameter bumped to 16G (now bumped to 48G){quote}

Does it mean that the perf results on 2.2.0 were using 16G while now you're using 
48G?
If yes, it could affect the results, because some of the G1 GC phases have a cost 
that is linear in the live set (or the heap size, depending on the phase), so 
increasing the heap size doesn't mean we would get better performance. In 
particular, with 48G we no longer get COOPS support (compressed pointers), hence 
the number of stored objects and their density (including internal data 
structures of AMQ) are no longer "right". 
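A minimal way to verify the COOPS point above on a given heap size (a hedged sketch, not part of the broker or the benchmark; run it with the same -Xmx used for the broker, e.g. -Xmx31g versus -Xmx48g):

{code:java}
import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

// Prints the configured max heap and whether compressed oops are still in use.
// HotSpot normally disables UseCompressedOops once -Xmx exceeds roughly 32 GB.
public class CoopsCheck {
   public static void main(String[] args) {
      HotSpotDiagnosticMXBean hotspot =
            ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
      long maxHeap = Runtime.getRuntime().maxMemory();
      System.out.printf("Max heap: %.1f GiB%n", maxHeap / (1024.0 * 1024 * 1024));
      System.out.println("UseCompressedOops = "
            + hotspot.getVMOption("UseCompressedOops").getValue());
   }
}
{code}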



was (Author: nigro@gmail.com):
Another question: I see 

{quote}the Xmx java parameter bumped to 16G (now bumped to 48G){quote}

Does it mean that the perf results on 2.2.0 were using 16G while now you're using 
48G?
If yes, it could affect the results, because some of the G1 GC pauses have a cost 
that is linear in the live set (or the heap size, depending on the phase), so 
increasing the heap size doesn't mean we would get better performance. In 
particular, with 48G we no longer get COOPS support (compressed pointers), hence 
the number of stored objects and their density (including internal data 
structures of AMQ) are no longer "right". 


> Huge performance decrease between versions 2.2.0 and 2.13.0
> ---
>
> Key: ARTEMIS-2852
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2852
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Kasper Kondzielski
>Priority: Major
> Attachments: Selection_433.png, Selection_434.png, Selection_440.png, 
> Selection_441.png
>
>
> Hi,
> Recently, we started to prepare a new revision of our blog-post in which we 
> test various implementations of replicated queues. Previous version can be 
> found here:  [https://softwaremill.com/mqperf/]
> We updated artemis binary to 2.13.0, regenerated configuration file and 
> applied all the performance tricks you told us last time. In particular these 
> were:
>  * the {{Xmx}} java parameter bumped to {{16G (now bumped to 48G)}}
>  * in {{broker.xml}}, the {{global-max-size}} setting changed to {{8G (this 
> one we forgot to set, but we suspect that it is not the issue)}}
>  * {{journal-type}} set to {{MAPPED}}
>  * {{journal-datasync}}, {{journal-sync-non-transactional}} and 
> {{journal-sync-transactional}} all set to false
> Apart from that, we changed the machine type we use to r5.2xlarge (8 cores, 64 
> GiB memory, network bandwidth up to 10 Gbps, storage bandwidth up to 4,750 
> Mbps), and we decided to always run twice as many receivers as senders.
> From our tests it looks like version 2.13.0 is not scaling as well with the 
> increase of senders and receivers as version 2.2.0 (previously tested) did. 
> Basically it is not scaling at all, as the throughput stays at almost the same 
> level, while previously it used to grow linearly.
> Here you can find our test results for both versions: 
> [https://docs.google.com/spreadsheets/d/1kr9fzSNLD8bOhMkP7K_4axBQiKel1aJtpxsBCOy9ugU/edit?usp=sharing]
> We are aware that there is now a dedicated page in the documentation about 
> performance tuning, but we are surprised that the same settings as before 
> perform much worse.
> Maybe there is an obvious property which we overlooked and which should be 
> turned on?
> All changes between those versions, together with the final configuration, can 
> be found in this merged PR: 
> [https://github.com/softwaremill/mqperf/commit/6bfae489e11a250dc9e6ef59719782f839e8874a]
>  
> Charts showing machine usage are in the attachments. Memory consumed by the 
> Artemis process didn't exceed ~16 GB. Bandwidth and CPU weren't bottlenecks 
> either.
> p.s. I wanted to ask this question on mailing list/nabble forum first but it 
> seems that I don't have permissions to do so even though I registered & 
> subscribed. Is that intentional?
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (ARTEMIS-2852) Huge performance decrease between versions 2.2.0 and 2.13.0

2020-08-03 Thread Kasper Kondzielski (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17170003#comment-17170003
 ] 

Kasper Kondzielski edited comment on ARTEMIS-2852 at 8/3/20, 12:43 PM:
---

??Another question: I see??
{quote}??the Xmx java parameter bumped to 16G (now bumped to 48G)??
{quote}
??It means that the perf results on 2.2.0 was using 16G while now you're using 
48G???

Yes. That's very interesting. So we will probably need to decrease this value 
to 32 GB, which is the largest value that supports COOPS by default, right? 


was (Author: kkondzielski):
??Another question: I see??
{quote}??the Xmx java parameter bumped to 16G (now bumped to 48G)??
{quote}
??It means that the perf results on 2.2.0 was using 16G while now you're using 
48G???

Yes. That's very interesting. So we will probably need to decrease this value 
to 32 GB, which is the largest supported value by default, right? 

> Huge performance decrease between versions 2.2.0 and 2.13.0
> ---
>
> Key: ARTEMIS-2852
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2852
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Kasper Kondzielski
>Priority: Major
> Attachments: Selection_433.png, Selection_434.png, Selection_440.png, 
> Selection_441.png
>
>
> Hi,
> Recently, we started to prepare a new revision of our blog-post in which we 
> test various implementations of replicated queues. Previous version can be 
> found here:  [https://softwaremill.com/mqperf/]
> We updated artemis binary to 2.13.0, regenerated configuration file and 
> applied all the performance tricks you told us last time. In particular these 
> were:
>  * the {{Xmx}} java parameter bumped to {{16G (now bumped to 48G)}}
>  * in {{broker.xml}}, the {{global-max-size}} setting changed to {{8G (this 
> one we forgot to set, but we suspect that it is not the issue)}}
>  * {{journal-type}} set to {{MAPPED}}
>  * {{journal-datasync}}, {{journal-sync-non-transactional}} and 
> {{journal-sync-transactional}} all set to false
> Apart from that, we changed the machine type we use to r5.2xlarge (8 cores, 64 
> GiB memory, network bandwidth up to 10 Gbps, storage bandwidth up to 4,750 
> Mbps), and we decided to always run twice as many receivers as senders.
> From our tests it looks like version 2.13.0 is not scaling as well with the 
> increase of senders and receivers as version 2.2.0 (previously tested) did. 
> Basically it is not scaling at all, as the throughput stays at almost the same 
> level, while previously it used to grow linearly.
> Here you can find our test results for both versions: 
> [https://docs.google.com/spreadsheets/d/1kr9fzSNLD8bOhMkP7K_4axBQiKel1aJtpxsBCOy9ugU/edit?usp=sharing]
> We are aware that there is now a dedicated page in the documentation about 
> performance tuning, but we are surprised that the same settings as before 
> perform much worse.
> Maybe there is an obvious property which we overlooked and which should be 
> turned on?
> All changes between those versions, together with the final configuration, can 
> be found in this merged PR: 
> [https://github.com/softwaremill/mqperf/commit/6bfae489e11a250dc9e6ef59719782f839e8874a]
>  
> Charts showing machine usage are in the attachments. Memory consumed by the 
> Artemis process didn't exceed ~16 GB. Bandwidth and CPU weren't bottlenecks 
> either.
> p.s. I wanted to ask this question on mailing list/nabble forum first but it 
> seems that I don't have permissions to do so even though I registered & 
> subscribed. Is that intentional?
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-2852) Huge performance decrease between versions 2.2.0 and 2.13.0

2020-08-03 Thread Kasper Kondzielski (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17170003#comment-17170003
 ] 

Kasper Kondzielski commented on ARTEMIS-2852:
-

??Another question: I see??
{quote}??the Xmx java parameter bumped to 16G (now bumped to 48G)??
{quote}
??It means that the perf results on 2.2.0 was using 16G while now you're using 
48G???

Yes. That's very interesting. So we will probably need to decrease this value 
to 32 GB, which is the largest supported value by default, right? 

> Huge performance decrease between versions 2.2.0 and 2.13.0
> ---
>
> Key: ARTEMIS-2852
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2852
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Kasper Kondzielski
>Priority: Major
> Attachments: Selection_433.png, Selection_434.png, Selection_440.png, 
> Selection_441.png
>
>
> Hi,
> Recently, we started to prepare a new revision of our blog-post in which we 
> test various implementations of replicated queues. Previous version can be 
> found here:  [https://softwaremill.com/mqperf/]
> We updated artemis binary to 2.13.0, regenerated configuration file and 
> applied all the performance tricks you told us last time. In particular these 
> were:
>  * the {{Xmx}} java parameter bumped to {{16G (now bumped to 48G)}}
>  * in {{broker.xml}}, the {{global-max-size}} setting changed to {{8G (this 
> one we forgot to set, but we suspect that it is not the issue)}}
>  * {{journal-type}} set to {{MAPPED}}
>  * {{journal-datasync}}, {{journal-sync-non-transactional}} and 
> {{journal-sync-transactional}} all set to false
> Apart from that, we changed the machine type we use to r5.2xlarge (8 cores, 64 
> GiB memory, network bandwidth up to 10 Gbps, storage bandwidth up to 4,750 
> Mbps), and we decided to always run twice as many receivers as senders.
> From our tests it looks like version 2.13.0 is not scaling as well with the 
> increase of senders and receivers as version 2.2.0 (previously tested) did. 
> Basically it is not scaling at all, as the throughput stays at almost the same 
> level, while previously it used to grow linearly.
> Here you can find our test results for both versions: 
> [https://docs.google.com/spreadsheets/d/1kr9fzSNLD8bOhMkP7K_4axBQiKel1aJtpxsBCOy9ugU/edit?usp=sharing]
> We are aware that there is now a dedicated page in the documentation about 
> performance tuning, but we are surprised that the same settings as before 
> perform much worse.
> Maybe there is an obvious property which we overlooked and which should be 
> turned on?
> All changes between those versions, together with the final configuration, can 
> be found in this merged PR: 
> [https://github.com/softwaremill/mqperf/commit/6bfae489e11a250dc9e6ef59719782f839e8874a]
>  
> Charts showing machine usage are in the attachments. Memory consumed by the 
> Artemis process didn't exceed ~16 GB. Bandwidth and CPU weren't bottlenecks 
> either.
> p.s. I wanted to ask this question on mailing list/nabble forum first but it 
> seems that I don't have permissions to do so even though I registered & 
> subscribed. Is that intentional?
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-2852) Huge performance decrease between versions 2.2.0 and 2.13.0

2020-08-03 Thread Francesco Nigro (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17169990#comment-17169990
 ] 

Francesco Nigro commented on ARTEMIS-2852:
--

Another question: I see 

{quote}the Xmx java parameter bumped to 16G (now bumped to 48G){quote}

Does it mean that the perf results on 2.2.0 were using 16G while now you're using 
48G?
If yes, it could affect the results, because some of the G1 GC pauses have a cost 
that is linear in the live set (or the heap size, depending on the phase), so 
increasing the heap size doesn't mean we would get better performance. In 
particular, with 48G we no longer get COOPS support (compressed pointers), hence 
the number of stored objects and their density (including internal data 
structures of AMQ) are no longer "right". 


> Huge performance decrease between versions 2.2.0 and 2.13.0
> ---
>
> Key: ARTEMIS-2852
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2852
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Kasper Kondzielski
>Priority: Major
> Attachments: Selection_433.png, Selection_434.png, Selection_440.png, 
> Selection_441.png
>
>
> Hi,
> Recently, we started to prepare a new revision of our blog-post in which we 
> test various implementations of replicated queues. Previous version can be 
> found here:  [https://softwaremill.com/mqperf/]
> We updated artemis binary to 2.13.0, regenerated configuration file and 
> applied all the performance tricks you told us last time. In particular these 
> were:
>  * the {{Xmx}} java parameter bumped to {{16G (now bumped to 48G)}}
>  * in {{broker.xml}}, the {{global-max-size}} setting changed to {{8G (this 
> one we forgot to set, but we suspect that it is not the issue)}}
>  * {{journal-type}} set to {{MAPPED}}
>  * {{journal-datasync}}, {{journal-sync-non-transactional}} and 
> {{journal-sync-transactional}} all set to false
> Apart from that, we changed the machine type we use to r5.2xlarge (8 cores, 64 
> GiB memory, network bandwidth up to 10 Gbps, storage bandwidth up to 4,750 
> Mbps), and we decided to always run twice as many receivers as senders.
> From our tests it looks like version 2.13.0 is not scaling as well with the 
> increase of senders and receivers as version 2.2.0 (previously tested) did. 
> Basically it is not scaling at all, as the throughput stays at almost the same 
> level, while previously it used to grow linearly.
> Here you can find our test results for both versions: 
> [https://docs.google.com/spreadsheets/d/1kr9fzSNLD8bOhMkP7K_4axBQiKel1aJtpxsBCOy9ugU/edit?usp=sharing]
> We are aware that there is now a dedicated page in the documentation about 
> performance tuning, but we are surprised that the same settings as before 
> perform much worse.
> Maybe there is an obvious property which we overlooked and which should be 
> turned on?
> All changes between those versions, together with the final configuration, can 
> be found in this merged PR: 
> [https://github.com/softwaremill/mqperf/commit/6bfae489e11a250dc9e6ef59719782f839e8874a]
>  
> Charts showing machine usage are in the attachments. Memory consumed by the 
> Artemis process didn't exceed ~16 GB. Bandwidth and CPU weren't bottlenecks 
> either.
> p.s. I wanted to ask this question on mailing list/nabble forum first but it 
> seems that I don't have permissions to do so even though I registered & 
> subscribed. Is that intentional?
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (AMQNET-565) Dotnet core port

2020-08-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/AMQNET-565?focusedWorklogId=465661&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-465661
 ]

ASF GitHub Bot logged work on AMQNET-565:
-

Author: ASF GitHub Bot
Created on: 03/Aug/20 12:32
Start Date: 03/Aug/20 12:32
Worklog Time Spent: 10m 
  Work Description: killnine commented on pull request #9:
URL: 
https://github.com/apache/activemq-nms-openwire/pull/9#issuecomment-667996988


   > @killnine I think you need to merge the proposed changes from @rafal-gain 
into your branch, as that is what this PR is from; then this can all be merged. 
After that I guess we just need someone to release it. @Havret, would you mind?
   
   Nice, I merged in his changes. Thanks!



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 465661)
Time Spent: 13h 40m  (was: 13.5h)

> Dotnet core port 
> -
>
> Key: AMQNET-565
> URL: https://issues.apache.org/jira/browse/AMQNET-565
> Project: ActiveMQ .Net
>  Issue Type: New Feature
>  Components: ActiveMQ
>Reporter: Wojtek Kulma
>Priority: Major
>  Time Spent: 13h 40m
>  Remaining Estimate: 0h
>
> Apache.NMS.ActiveMQ should be ported for dotnet core. 
> For now the following error is raised:
> D:\RiderProjects\syncro [master ≡ +1 ~1 -1 !]> dotnet add package 
> Apache.NMS.ActiveMQ
> Microsoft (R) Build Engine version 15.1.1012.6693
> Copyright (C) Microsoft Corporation. All rights reserved.
>   Writing C:\Users\wkulma\AppData\Local\Temp\tmp9A2E.tmp
> info : Adding PackageReference for package 'Apache.NMS.ActiveMQ' into project 
> 'D:\RiderProjects\syncro\syncro.fsproj'.
> log  : Restoring packages for D:\RiderProjects\syncro\syncro.fsproj...
> info :   GET 
> https://api.nuget.org/v3-flatcontainer/apache.nms.activemq/index.json
> info :   CACHE https://api.nuget.org/v3-flatcontainer/fsharp.core/index.json
> info :   CACHE 
> https://api.nuget.org/v3-flatcontainer/fsharp.core/4.1.17/fsharp.core.4.1.17.nupkg
> info :   CACHE 
> https://api.nuget.org/v3-flatcontainer/fsharp.net.sdk/index.json
> info :   CACHE 
> https://api.nuget.org/v3-flatcontainer/fsharp.net.sdk/1.0.5/fsharp.net.sdk.1.0.5.nupkg
> info :   OK 
> https://api.nuget.org/v3-flatcontainer/apache.nms.activemq/index.json 611ms
> info :   GET 
> https://api.nuget.org/v3-flatcontainer/apache.nms.activemq/1.7.2/apache.nms.activemq.1.7.2.nupkg
> info :   OK 
> https://api.nuget.org/v3-flatcontainer/apache.nms.activemq/1.7.2/apache.nms.activemq.1.7.2.nupkg
>  481ms
> error: Package Apache.NMS.ActiveMQ 1.7.2 is not compatible with netcoreapp1.1 
> (.NETCoreApp,Version=v1.1). Package Apache.NMS.ActiveMQ 1.7.2 supports:
> error:   - net20 (.NETFramework,Version=v2.0)
> error:   - net35 (.NETFramework,Version=v3.5)
> error:   - net40 (.NETFramework,Version=v4.0)
> error: Package Apache.NMS 1.7.1 is not compatible with netcoreapp1.1 
> (.NETCoreApp,Version=v1.1). Package Apache.NMS 1.7.1 supports:
> error:   - net20 (.NETFramework,Version=v2.0)
> error:   - net20-cf (.NETFramework,Version=v2.0,Profile=CompactFramework)
> error:   - net35 (.NETFramework,Version=v3.5)
> error:   - net40 (.NETFramework,Version=v4.0)
> error: One or more packages are incompatible with .NETCoreApp,Version=v1.1.
> error: Package 'Apache.NMS.ActiveMQ' is incompatible with 'all' frameworks in 
> project 'D:\RiderProjects\syncro\syncro.fsproj'.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (AMQNET-565) Dotnet core port

2020-08-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/AMQNET-565?focusedWorklogId=465654&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-465654
 ]

ASF GitHub Bot logged work on AMQNET-565:
-

Author: ASF GitHub Bot
Created on: 03/Aug/20 12:08
Start Date: 03/Aug/20 12:08
Worklog Time Spent: 10m 
  Work Description: michaelandrepearce commented on pull request #9:
URL: 
https://github.com/apache/activemq-nms-openwire/pull/9#issuecomment-667986929


   @killnine I think you need to merge the proposed changes from @rafal-gain 
into your branch, as that is what this PR is from; then this can all be merged. 
After that I guess we just need someone to release it. @Havret, would you mind?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 465654)
Time Spent: 13.5h  (was: 13h 20m)

> Dotnet core port 
> -
>
> Key: AMQNET-565
> URL: https://issues.apache.org/jira/browse/AMQNET-565
> Project: ActiveMQ .Net
>  Issue Type: New Feature
>  Components: ActiveMQ
>Reporter: Wojtek Kulma
>Priority: Major
>  Time Spent: 13.5h
>  Remaining Estimate: 0h
>
> Apache.NMS.ActiveMQ should be ported for dotnet core. 
> For now the following error is raised:
> D:\RiderProjects\syncro [master ≡ +1 ~1 -1 !]> dotnet add package 
> Apache.NMS.ActiveMQ
> Microsoft (R) Build Engine version 15.1.1012.6693
> Copyright (C) Microsoft Corporation. All rights reserved.
>   Writing C:\Users\wkulma\AppData\Local\Temp\tmp9A2E.tmp
> info : Adding PackageReference for package 'Apache.NMS.ActiveMQ' into project 
> 'D:\RiderProjects\syncro\syncro.fsproj'.
> log  : Restoring packages for D:\RiderProjects\syncro\syncro.fsproj...
> info :   GET 
> https://api.nuget.org/v3-flatcontainer/apache.nms.activemq/index.json
> info :   CACHE https://api.nuget.org/v3-flatcontainer/fsharp.core/index.json
> info :   CACHE 
> https://api.nuget.org/v3-flatcontainer/fsharp.core/4.1.17/fsharp.core.4.1.17.nupkg
> info :   CACHE 
> https://api.nuget.org/v3-flatcontainer/fsharp.net.sdk/index.json
> info :   CACHE 
> https://api.nuget.org/v3-flatcontainer/fsharp.net.sdk/1.0.5/fsharp.net.sdk.1.0.5.nupkg
> info :   OK 
> https://api.nuget.org/v3-flatcontainer/apache.nms.activemq/index.json 611ms
> info :   GET 
> https://api.nuget.org/v3-flatcontainer/apache.nms.activemq/1.7.2/apache.nms.activemq.1.7.2.nupkg
> info :   OK 
> https://api.nuget.org/v3-flatcontainer/apache.nms.activemq/1.7.2/apache.nms.activemq.1.7.2.nupkg
>  481ms
> error: Package Apache.NMS.ActiveMQ 1.7.2 is not compatible with netcoreapp1.1 
> (.NETCoreApp,Version=v1.1). Package Apache.NMS.ActiveMQ 1.7.2 supports:
> error:   - net20 (.NETFramework,Version=v2.0)
> error:   - net35 (.NETFramework,Version=v3.5)
> error:   - net40 (.NETFramework,Version=v4.0)
> error: Package Apache.NMS 1.7.1 is not compatible with netcoreapp1.1 
> (.NETCoreApp,Version=v1.1). Package Apache.NMS 1.7.1 supports:
> error:   - net20 (.NETFramework,Version=v2.0)
> error:   - net20-cf (.NETFramework,Version=v2.0,Profile=CompactFramework)
> error:   - net35 (.NETFramework,Version=v3.5)
> error:   - net40 (.NETFramework,Version=v4.0)
> error: One or more packages are incompatible with .NETCoreApp,Version=v1.1.
> error: Package 'Apache.NMS.ActiveMQ' is incompatible with 'all' frameworks in 
> project 'D:\RiderProjects\syncro\syncro.fsproj'.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (ARTEMIS-2854) Non-durable subscribers may stop receiving after failover

2020-08-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2854?focusedWorklogId=465645&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-465645
 ]

ASF GitHub Bot logged work on ARTEMIS-2854:
---

Author: ASF GitHub Bot
Created on: 03/Aug/20 11:55
Start Date: 03/Aug/20 11:55
Worklog Time Spent: 10m 
  Work Description: gaohoward opened a new pull request #3228:
URL: https://github.com/apache/activemq-artemis/pull/3228


   In a cluster scenario where non-durable subscribers fail over to
   the backup while another live node is forwarding messages to them,
   there is a chance that the live node keeps the old remote
   bindings for the subscriptions, and messages routed to those
   old remote bindings will result in "binding not found".



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 465645)
Remaining Estimate: 0h
Time Spent: 10m

> Non-durable subscribers may stop receiving after failover
> -
>
> Key: ARTEMIS-2854
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2854
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.14.0
>Reporter: Howard Gao
>Assignee: Howard Gao
>Priority: Major
> Fix For: 2.15.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In a cluster scenario where non-durable subscribers fail over to the backup while 
> another live node is forwarding messages to them, there is a chance that the 
> live node keeps the old remote bindings for the subscriptions, and messages 
> routed to those old remote bindings will result in "binding not found".
> For example, suppose there are 2 live-backup pairs in the cluster: Live1 with 
> backup1, and Live2 with backup2. A non-durable subscriber connects to Live1, and 
> messages are sent to Live2 and then redistributed to the subscription on Live1.
> Now Live1 crashes and backup1 becomes live. The subscriber fails over to 
> backup1.
> In the meantime Live2 re-connects to backup1 too. During this process Live2 
> doesn't successfully remove the old remote binding for the subscription, and it 
> still points to the old temp queue's id (which is gone with Live1, as it's a 
> temp queue).
> So, after failover, messages are still routed to the old queue, which is no 
> longer there. The subscriber stays idle without receiving new messages.
> The code concerned is:
> https://github.com/apache/activemq-artemis/blob/master/artemis-server/src/main/java/org/apache/activemq/artemis/core/server/cluster/impl/ClusterConnectionImpl.java#L1239
> The code doesn't handle the case where the old remote binding is still in the 
> map and its key (clusterName) is the same as that of the new remote binding 
> (which references the new temp queue recreated on failover).
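A small illustrative sketch of the failure mode described above (plain Java, not the actual ClusterConnectionImpl code; the class, map key and queue ids are made up): the remote bindings are effectively keyed by clusterName, so if the stale entry is not removed before the post-failover binding arrives, lookups keep resolving to the temp queue id that died with Live1.

{code:java}
import java.util.HashMap;
import java.util.Map;

public class StaleRemoteBindingSketch {

   // Minimal stand-in for a remote binding: the cluster name it is keyed by
   // and the id of the (temporary) queue it forwards to.
   record RemoteBinding(String clusterName, long targetQueueId) { }

   public static void main(String[] args) {
      Map<String, RemoteBinding> bindings = new HashMap<>();

      // Binding created while Live1 was alive, forwarding to its temp queue (id 101).
      bindings.put("sub.cluster1", new RemoteBinding("sub.cluster1", 101L));

      // After failover the subscriber's temp queue is recreated on backup1 with a
      // new id (202). If the stale entry is not removed first, keeping the existing
      // entry leaves messages routed to queue 101, which no longer exists.
      bindings.putIfAbsent("sub.cluster1", new RemoteBinding("sub.cluster1", 202L));

      System.out.println("Messages would be routed to queue id "
            + bindings.get("sub.cluster1").targetQueueId()); // prints 101
   }
}
{code}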



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (ARTEMIS-2852) Huge performance decrease between versions 2.2.0 and 2.13.0

2020-08-03 Thread Francesco Nigro (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17169905#comment-17169905
 ] 

Francesco Nigro edited comment on ARTEMIS-2852 at 8/3/20, 10:40 AM:


Oops sorry I wrote 2.9.0 by accident :) ok thanks !(y)


was (Author: nigro@gmail.com):
Tips sorry I wrote 2.9.0 by accident :) ok thanks !(y)

> Huge performance decrease between versions 2.2.0 and 2.13.0
> ---
>
> Key: ARTEMIS-2852
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2852
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Kasper Kondzielski
>Priority: Major
> Attachments: Selection_433.png, Selection_434.png, Selection_440.png, 
> Selection_441.png
>
>
> Hi,
> Recently, we started to prepare a new revision of our blog-post in which we 
> test various implementations of replicated queues. Previous version can be 
> found here:  [https://softwaremill.com/mqperf/]
> We updated artemis binary to 2.13.0, regenerated configuration file and 
> applied all the performance tricks you told us last time. In particular these 
> were:
>  * the {{Xmx}} java parameter bumped to {{16G (now bumped to 48G)}}
>  * in {{broker.xml}}, the {{global-max-size}} setting changed to {{8G (this 
> one we forgot to set, but we suspect that it is not the issue)}}
>  * {{journal-type}} set to {{MAPPED}}
>  * {{journal-datasync}}, {{journal-sync-non-transactional}} and 
> {{journal-sync-transactional}} all set to false
> Apart from that, we changed the machine type we use to r5.2xlarge (8 cores, 64 
> GiB memory, network bandwidth up to 10 Gbps, storage bandwidth up to 4,750 
> Mbps), and we decided to always run twice as many receivers as senders.
> From our tests it looks like version 2.13.0 is not scaling as well with the 
> increase of senders and receivers as version 2.2.0 (previously tested) did. 
> Basically it is not scaling at all, as the throughput stays at almost the same 
> level, while previously it used to grow linearly.
> Here you can find our test results for both versions: 
> [https://docs.google.com/spreadsheets/d/1kr9fzSNLD8bOhMkP7K_4axBQiKel1aJtpxsBCOy9ugU/edit?usp=sharing]
> We are aware that there is now a dedicated page in the documentation about 
> performance tuning, but we are surprised that the same settings as before 
> perform much worse.
> Maybe there is an obvious property which we overlooked and which should be 
> turned on?
> All changes between those versions, together with the final configuration, can 
> be found in this merged PR: 
> [https://github.com/softwaremill/mqperf/commit/6bfae489e11a250dc9e6ef59719782f839e8874a]
>  
> Charts showing machine usage are in the attachments. Memory consumed by the 
> Artemis process didn't exceed ~16 GB. Bandwidth and CPU weren't bottlenecks 
> either.
> p.s. I wanted to ask this question on mailing list/nabble forum first but it 
> seems that I don't have permissions to do so even though I registered & 
> subscribed. Is that intentional?
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-2852) Huge performance decrease between versions 2.2.0 and 2.13.0

2020-08-03 Thread Francesco Nigro (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17169905#comment-17169905
 ] 

Francesco Nigro commented on ARTEMIS-2852:
--

Tips sorry I wrote 2.9.0 by accident :) ok thanks !(y)

> Huge performance decrease between versions 2.2.0 and 2.13.0
> ---
>
> Key: ARTEMIS-2852
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2852
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Kasper Kondzielski
>Priority: Major
> Attachments: Selection_433.png, Selection_434.png, Selection_440.png, 
> Selection_441.png
>
>
> Hi,
> Recently, we started to prepare a new revision of our blog-post in which we 
> test various implementations of replicated queues. Previous version can be 
> found here:  [https://softwaremill.com/mqperf/]
> We updated artemis binary to 2.13.0, regenerated configuration file and 
> applied all the performance tricks you told us last time. In particular these 
> were:
>  * the {{Xmx}} java parameter bumped to {{16G (now bumped to 48G)}}
>  * in {{broker.xml}}, the {{global-max-size}} setting changed to {{8G (this 
> one we forgot to set, but we suspect that it is not the issue)}}
>  * {{journal-type}} set to {{MAPPED}}
>  * {{journal-datasync}}, {{journal-sync-non-transactional}} and 
> {{journal-sync-transactional}} all set to false
> Apart from that, we changed the machine type we use to r5.2xlarge (8 cores, 64 
> GiB memory, network bandwidth up to 10 Gbps, storage bandwidth up to 4,750 
> Mbps), and we decided to always run twice as many receivers as senders.
> From our tests it looks like version 2.13.0 is not scaling as well with the 
> increase of senders and receivers as version 2.2.0 (previously tested) did. 
> Basically it is not scaling at all, as the throughput stays at almost the same 
> level, while previously it used to grow linearly.
> Here you can find our test results for both versions: 
> [https://docs.google.com/spreadsheets/d/1kr9fzSNLD8bOhMkP7K_4axBQiKel1aJtpxsBCOy9ugU/edit?usp=sharing]
> We are aware that there is now a dedicated page in the documentation about 
> performance tuning, but we are surprised that the same settings as before 
> perform much worse.
> Maybe there is an obvious property which we overlooked and which should be 
> turned on?
> All changes between those versions, together with the final configuration, can 
> be found in this merged PR: 
> [https://github.com/softwaremill/mqperf/commit/6bfae489e11a250dc9e6ef59719782f839e8874a]
>  
> Charts showing machine usage are in the attachments. Memory consumed by the 
> Artemis process didn't exceed ~16 GB. Bandwidth and CPU weren't bottlenecks 
> either.
> p.s. I wanted to ask this question on mailing list/nabble forum first but it 
> seems that I don't have permissions to do so even though I registered & 
> subscribed. Is that intentional?
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-2852) Huge performance decrease between versions 2.2.0 and 2.13.0

2020-08-03 Thread Kasper Kondzielski (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17169899#comment-17169899
 ] 

Kasper Kondzielski commented on ARTEMIS-2852:
-

We were using 2.2.0 from the official release channel.

> Huge performance decrease between versions 2.2.0 and 2.13.0
> ---
>
> Key: ARTEMIS-2852
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2852
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Kasper Kondzielski
>Priority: Major
> Attachments: Selection_433.png, Selection_434.png, Selection_440.png, 
> Selection_441.png
>
>
> Hi,
> Recently, we started to prepare a new revision of our blog-post in which we 
> test various implementations of replicated queues. Previous version can be 
> found here:  [https://softwaremill.com/mqperf/]
> We updated artemis binary to 2.13.0, regenerated configuration file and 
> applied all the performance tricks you told us last time. In particular these 
> were:
>  * the {{Xmx}} java parameter bumped to {{16G (now bumped to 48G)}}
>  * in {{broker.xml}}, the {{global-max-size}} setting changed to {{8G (this 
> one we forgot to set, but we suspect that it is not the issue)}}
>  * {{journal-type}} set to {{MAPPED}}
>  * {{journal-datasync}}, {{journal-sync-non-transactional}} and 
> {{journal-sync-transactional}} all set to false
> Apart from that, we changed the machine type we use to r5.2xlarge (8 cores, 64 
> GiB memory, network bandwidth up to 10 Gbps, storage bandwidth up to 4,750 
> Mbps), and we decided to always run twice as many receivers as senders.
> From our tests it looks like version 2.13.0 is not scaling as well with the 
> increase of senders and receivers as version 2.2.0 (previously tested) did. 
> Basically it is not scaling at all, as the throughput stays at almost the same 
> level, while previously it used to grow linearly.
> Here you can find our test results for both versions: 
> [https://docs.google.com/spreadsheets/d/1kr9fzSNLD8bOhMkP7K_4axBQiKel1aJtpxsBCOy9ugU/edit?usp=sharing]
> We are aware that there is now a dedicated page in the documentation about 
> performance tuning, but we are surprised that the same settings as before 
> perform much worse.
> Maybe there is an obvious property which we overlooked and which should be 
> turned on?
> All changes between those versions, together with the final configuration, can 
> be found in this merged PR: 
> [https://github.com/softwaremill/mqperf/commit/6bfae489e11a250dc9e6ef59719782f839e8874a]
>  
> Charts showing machine usage are in the attachments. Memory consumed by the 
> Artemis process didn't exceed ~16 GB. Bandwidth and CPU weren't bottlenecks 
> either.
> p.s. I wanted to ask this question on mailing list/nabble forum first but it 
> seems that I don't have permissions to do so even though I registered & 
> subscribed. Is that intentional?
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-2852) Huge performance decrease between versions 2.2.0 and 2.13.0

2020-08-03 Thread Francesco Nigro (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17169886#comment-17169886
 ] 

Francesco Nigro commented on ARTEMIS-2852:
--

Thanks :) in summer things tend to calm down a bit ;)
Just a question: was the latest post using 2.9.0 from the website or some 
specific commit from upstream? 

> Huge performance decrease between versions 2.2.0 and 2.13.0
> ---
>
> Key: ARTEMIS-2852
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2852
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Kasper Kondzielski
>Priority: Major
> Attachments: Selection_433.png, Selection_434.png, Selection_440.png, 
> Selection_441.png
>
>
> Hi,
> Recently, we started to prepare a new revision of our blog-post in which we 
> test various implementations of replicated queues. Previous version can be 
> found here:  [https://softwaremill.com/mqperf/]
> We updated artemis binary to 2.13.0, regenerated configuration file and 
> applied all the performance tricks you told us last time. In particular these 
> were:
>  * the {{Xmx}} java parameter bumped to {{16G (now bumped to 48G)}}
>  * in {{broker.xml}}, the {{global-max-size}} setting changed to {{8G (this 
> one we forgot to set, but we suspect that it is not the issue)}}
>  * {{journal-type}} set to {{MAPPED}}
>  * {{journal-datasync}}, {{journal-sync-non-transactional}} and 
> {{journal-sync-transactional}} all set to false
> Apart from that, we changed the machine type we use to r5.2xlarge (8 cores, 64 
> GiB memory, network bandwidth up to 10 Gbps, storage bandwidth up to 4,750 
> Mbps), and we decided to always run twice as many receivers as senders.
> From our tests it looks like version 2.13.0 is not scaling as well with the 
> increase of senders and receivers as version 2.2.0 (previously tested) did. 
> Basically it is not scaling at all, as the throughput stays at almost the same 
> level, while previously it used to grow linearly.
> Here you can find our test results for both versions: 
> [https://docs.google.com/spreadsheets/d/1kr9fzSNLD8bOhMkP7K_4axBQiKel1aJtpxsBCOy9ugU/edit?usp=sharing]
> We are aware that there is now a dedicated page in the documentation about 
> performance tuning, but we are surprised that the same settings as before 
> perform much worse.
> Maybe there is an obvious property which we overlooked and which should be 
> turned on?
> All changes between those versions, together with the final configuration, can 
> be found in this merged PR: 
> [https://github.com/softwaremill/mqperf/commit/6bfae489e11a250dc9e6ef59719782f839e8874a]
>  
> Charts showing machine usage are in the attachments. Memory consumed by the 
> Artemis process didn't exceed ~16 GB. Bandwidth and CPU weren't bottlenecks 
> either.
> p.s. I wanted to ask this question on mailing list/nabble forum first but it 
> seems that I don't have permissions to do so even though I registered & 
> subscribed. Is that intentional?
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-2852) Huge performance decrease between versions 2.2.0 and 2.13.0

2020-08-03 Thread Kasper Kondzielski (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17169791#comment-17169791
 ] 

Kasper Kondzielski commented on ARTEMIS-2852:
-

Hi [~nigro@gmail.com] ,

No problem, we still have a couple of queues to test and we want to make this 
data as accurate as possible.

> Huge performance decrease between versions 2.2.0 and 2.13.0
> ---
>
> Key: ARTEMIS-2852
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2852
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Kasper Kondzielski
>Priority: Major
> Attachments: Selection_433.png, Selection_434.png, Selection_440.png, 
> Selection_441.png
>
>
> Hi,
> Recently, we started to prepare a new revision of our blog-post in which we 
> test various implementations of replicated queues. Previous version can be 
> found here:  [https://softwaremill.com/mqperf/]
> We updated artemis binary to 2.13.0, regenerated configuration file and 
> applied all the performance tricks you told us last time. In particular these 
> were:
>  * the {{Xmx}} java parameter bumped to {{16G (now bumped to 48G)}}
>  * in {{broker.xml}}, the {{global-max-size}} setting changed to {{8G (this 
> one we forgot to set, but we suspect that it is not the issue)}}
>  * {{journal-type}} set to {{MAPPED}}
>  * {{journal-datasync}}, {{journal-sync-non-transactional}} and 
> {{journal-sync-transactional}} all set to false
> Apart from that, we changed the machine type we use to r5.2xlarge (8 cores, 64 
> GiB memory, network bandwidth up to 10 Gbps, storage bandwidth up to 4,750 
> Mbps), and we decided to always run twice as many receivers as senders.
> From our tests it looks like version 2.13.0 is not scaling as well with the 
> increase of senders and receivers as version 2.2.0 (previously tested) did. 
> Basically it is not scaling at all, as the throughput stays at almost the same 
> level, while previously it used to grow linearly.
> Here you can find our test results for both versions: 
> [https://docs.google.com/spreadsheets/d/1kr9fzSNLD8bOhMkP7K_4axBQiKel1aJtpxsBCOy9ugU/edit?usp=sharing]
> We are aware that there is now a dedicated page in the documentation about 
> performance tuning, but we are surprised that the same settings as before 
> perform much worse.
> Maybe there is an obvious property which we overlooked and which should be 
> turned on?
> All changes between those versions, together with the final configuration, can 
> be found in this merged PR: 
> [https://github.com/softwaremill/mqperf/commit/6bfae489e11a250dc9e6ef59719782f839e8874a]
>  
> Charts showing machine usage are in the attachments. Memory consumed by the 
> Artemis process didn't exceed ~16 GB. Bandwidth and CPU weren't bottlenecks 
> either.
> p.s. I wanted to ask this question on mailing list/nabble forum first but it 
> seems that I don't have permissions to do so even though I registered & 
> subscribed. Is that intentional?
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (ARTEMIS-2861) Add queue name as a parameter to ActiveMQSecurityManager

2020-08-03 Thread Jira


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luís Alves updated ARTEMIS-2861:

Description: 
I am currently trying to integrate Artemis with OpenID Connect (OAuth 2.0) and 
User-Managed Access 2.0 (UMA 2.0) using the Keycloak implementation. I want to 
have fine-grained access control over operations on addresses and queues 
(subscriptions), as described in 
https://issues.apache.org/jira/browse/ARTEMIS-592. I've investigated the approach 
proposed in 
https://medium.com/@joelicious/extending-artemis-security-with-oauth2-7fd9b3dffe3
 and it solves the authN part. For the authZ part I've already had some 
feedback here 
https://stackoverflow.com/questions/63191001/activemq-artemis-activemqsecuritymanager4-verify-clientid-subscription,
 but I think org.apache.activemq.artemis.core.server.SecuritySettingPlugin will 
not give the needed control. So I'm proposing that the latest 
ActiveMQSecurityManager implementation adds the queue name as a parameter, since 
the calling method:

{code:java}
@Override
public void check(final SimpleString address,
                  final SimpleString queue,
                  final CheckType checkType,
                  final SecurityAuth session) throws Exception {
{code}

already has this information.
Using UMA 2.0, each address can be a resource and we can have 
SEND, CONSUME, CREATE_ADDRESS, DELETE_ADDRESS, CREATE_DURABLE_QUEUE, 
DELETE_DURABLE_QUEUE, CREATE_NON_DURABLE_QUEUE, DELETE_NON_DURABLE_QUEUE, 
MANAGE and BROWSE as scopes, which I think is quite fine-grained. Depending on 
the use case, a subscription can also be a resource.




  was:
I am currently trying to integrate Artemis with OpenID Connect (OAuth 2.0) and 
User-Managed Access 2.0 (UMA 2.0) using the Keycloak implementation. I want to 
have fine-grained access control over operations on addresses and queues 
(subscriptions), as described in 
https://issues.apache.org/jira/browse/ARTEMIS-592. I've investigated the approach 
proposed in 
https://medium.com/@joelicious/extending-artemis-security-with-oauth2-7fd9b3dffe3
 and it solves the authN part. For the authZ part I've already had some 
feedback here 
https://stackoverflow.com/questions/63191001/activemq-artemis-activemqsecuritymanager4-verify-clientid-subscription,
 but I think org.apache.activemq.artemis.core.server.SecuritySettingPlugin will 
not give the needed control. So I'm proposing that the latest 
ActiveMQSecurityManager implementation adds the queue name as a parameter, since 
the calling method:

{code:java}
@Override
public void check(final SimpleString address,
                  final SimpleString queue,
                  final CheckType checkType,
                  final SecurityAuth session) throws Exception {
{code}

already has this information. 




> Add queue name as a parameter to ActiveMQSecurityManager 
> -
>
> Key: ARTEMIS-2861
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2861
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: API
>Affects Versions: 2.14.0
>Reporter: Luís Alves
>Priority: Major
>
> I am currently trying to integrate Artemis with OpenID Connect (OAuth 2.0) 
> and User-Managed Access 2.0 (UMA 2.0) using the Keycloak implementation. I want 
> to have fine-grained access control over operations on addresses and queues 
> (subscriptions), as described in 
> https://issues.apache.org/jira/browse/ARTEMIS-592. I've investigated the 
> approach proposed in 
> https://medium.com/@joelicious/extending-artemis-security-with-oauth2-7fd9b3dffe3
>  and it solves the authN part. For the authZ part I've already had some 
> feedback here 
> https://stackoverflow.com/questions/63191001/activemq-artemis-activemqsecuritymanager4-verify-clientid-subscription,
>  but I think org.apache.activemq.artemis.core.server.SecuritySettingPlugin 
> will not give the needed control. So I'm proposing that the latest 
> ActiveMQSecurityManager implementation adds the queue name as a parameter, 
> since the calling method:
> {code:java}
> @Override
> public void check(final SimpleString address,
>                   final SimpleString queue,
>                   final CheckType checkType,
>                   final SecurityAuth session) throws Exception {
> {code}
> already has this information.
> Using UMA 2.0, each address can be a resource and we can have 
> SEND, CONSUME, CREATE_ADDRESS, DELETE_ADDRESS, CREATE_DURABLE_QUEUE, 
> DELETE_DURABLE_QUEUE, CREATE_NON_DURABLE_QUEUE, DELETE_NON_DURABLE_QUEUE, 
> MANAGE and BROWSE as scopes, which I think is quite fine-grained. Depending on 
> the use case, a subscription can also be a resource.
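A hedged sketch of what the proposal could look like from a plugin author's point of view (the interface name, method signature and scope mapping below are illustrative only, not an existing Artemis API):

{code:java}
// Hypothetical queue-aware authorization hook: the queue (subscription) name is
// passed alongside the address, and the operation name maps directly to a
// UMA 2.0 scope on the address resource.
public interface QueueAwareSecurityManager {

   /**
    * @param user      the authenticated principal
    * @param address   the address, treated as the UMA resource
    * @param queue     the queue/subscription the operation targets (may be null)
    * @param operation the operation name, e.g. "SEND", "CONSUME", "BROWSE"
    */
   boolean authorize(String user, String address, String queue, String operation);

   // One possible mapping: use the operation name directly as the UMA scope,
   // matching the scope list suggested in the description above.
   static String toUmaScope(String operation) {
      return operation;
   }
}
{code}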



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (ARTEMIS-2861) Add queue name as a parameter to ActiveMQSecurityManager

2020-08-03 Thread Jira


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luís Alves updated ARTEMIS-2861:

Description: 
I am currently trying to integrate Artemis with OpenID Connect (OAuth 2.0) and 
User-Managed Access 2.0 (UMA 2.0) using the Keycloak implementation. I want to 
have fine-grained access control over operations on addresses and queues 
(subscriptions), as described in 
https://issues.apache.org/jira/browse/ARTEMIS-592. I've investigated the approach 
proposed in 
https://medium.com/@joelicious/extending-artemis-security-with-oauth2-7fd9b3dffe3
 and it solves the authN part. For the authZ part I've already had some 
feedback here 
https://stackoverflow.com/questions/63191001/activemq-artemis-activemqsecuritymanager4-verify-clientid-subscription,
 but I think org.apache.activemq.artemis.core.server.SecuritySettingPlugin will 
not give the needed control. So I'm proposing that the latest 
ActiveMQSecurityManager implementation adds the queue name as a parameter, since 
the calling method:

{code:java}
@Override
public void check(final SimpleString address,
                  final SimpleString queue,
                  final CheckType checkType,
                  final SecurityAuth session) throws Exception {
{code}

already has this information. 



  was:I am currently trying to integrate Artemis with OpenID Connect 
(OAuth 2.0) and User-Managed Access 2.0 (UMA 2.0) using the Keycloak implementation. 
I want to have fine-grained access control over operations on addresses and 
queues (subscriptions), as described in 
https://issues.apache.org/jira/browse/ARTEMIS-592. I've investigated the approach 
proposed in 
https://medium.com/@joelicious/extending-artemis-security-with-oauth2-7fd9b3dffe3
 and it solves the authN part. For the authZ part I've already had some 
feedback here 
https://stackoverflow.com/questions/63191001/activemq-artemis-activemqsecuritymanager4-verify-clientid-subscription,
 but I think org.apache.activemq.artemis.core.server.SecuritySettingPlugin will 
not give the needed control. So I'm proposing that the latest 
ActiveMQSecurityManager implementation adds the queue name as a parameter to the 
calling method. 


> Add queue name as a parameter to ActiveMQSecurityManager 
> -
>
> Key: ARTEMIS-2861
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2861
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: API
>Affects Versions: 2.14.0
>Reporter: Luís Alves
>Priority: Major
>
> I am currently trying to integrate Artemis with OpenID Connect (OAuth 2.0) 
> and User-Managed Access 2.0 (UMA 2.0) using the Keycloak implementation. I want 
> to have fine-grained access control over operations on addresses and queues 
> (subscriptions), as described in 
> https://issues.apache.org/jira/browse/ARTEMIS-592. I've investigated the 
> approach proposed in 
> https://medium.com/@joelicious/extending-artemis-security-with-oauth2-7fd9b3dffe3
>  and it solves the authN part. For the authZ part I've already had some 
> feedback here 
> https://stackoverflow.com/questions/63191001/activemq-artemis-activemqsecuritymanager4-verify-clientid-subscription,
>  but I think org.apache.activemq.artemis.core.server.SecuritySettingPlugin 
> will not give the needed control. So I'm proposing that the latest 
> ActiveMQSecurityManager implementation adds the queue name as a parameter, 
> since the calling method:
> {code:java}
> @Override
> public void check(final SimpleString address,
>                   final SimpleString queue,
>                   final CheckType checkType,
>                   final SecurityAuth session) throws Exception {
> {code}
> already has this information. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (ARTEMIS-2861) Add queue name as a parameter to ActiveMQSecurityManager

2020-08-03 Thread Jira
Luís Alves created ARTEMIS-2861:
---

 Summary: Add queue name as a parameter to ActiveMQSecurityManager 
 Key: ARTEMIS-2861
 URL: https://issues.apache.org/jira/browse/ARTEMIS-2861
 Project: ActiveMQ Artemis
  Issue Type: Improvement
  Components: API
Affects Versions: 2.14.0
Reporter: Luís Alves


I am currently trying to integrate Artemis with OpenID Connect (OAuth 2.0) and 
User-Managed Access 2.0 (UMA 2.0) using the Keycloak implementation. I want to 
have fine-grained access control over operations on addresses and queues 
(subscriptions), as described in 
https://issues.apache.org/jira/browse/ARTEMIS-592. I've investigated the approach 
proposed in 
https://medium.com/@joelicious/extending-artemis-security-with-oauth2-7fd9b3dffe3
 and it solves the authN part. For the authZ part I've already had some 
feedback here 
https://stackoverflow.com/questions/63191001/activemq-artemis-activemqsecuritymanager4-verify-clientid-subscription,
 but I think org.apache.activemq.artemis.core.server.SecuritySettingPlugin will 
not give the needed control. So I'm proposing that the latest 
ActiveMQSecurityManager implementation adds the queue name as a parameter to the 
calling method. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)