[jira] [Created] (ARTEMIS-4047) Artemis does not send message to consumer AMQP

2022-10-13 Thread daves (Jira)
daves created ARTEMIS-4047:
--

 Summary: Artemis does not send message to consumer AMQP
 Key: ARTEMIS-4047
 URL: https://issues.apache.org/jira/browse/ARTEMIS-4047
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: AMQP, Broker
Affects Versions: 2.25.0
Reporter: daves
 Attachments: 1.PNG, 2.PNG, 3.PNG, 4.PNG, 5.PNG, All.zip

The broker does not send messages from one of many existing queues to the 
connected consumer.

According to the UI, the queue contains ~15k messages.
I’m not able to consume any of these messages. I also tried to read a message 
using the browse function of the UI/console, but that does not work either. 
The message was created by an AMQP client and should be consumed by another 
AMQP client.

I tried to capture the situation in a few screenshots… 
I don’t know which data can help you to understand the situation, so I’ve 
collected everything:
 * Logs
 * Broker
 * Data

Please let me know if there are any other data I should add to the ticket.

 

I don’t think that the code of my client is relevant since the problem only 
exists for a single queue…but here it is anyway:

{code:java}
using Amqp;
using Amqp.Framing;
using Amqp.Types;
namespace Test;
public sealed class MessageConsumer
{
    private readonly String _address;
    private readonly CancellationToken _cancellationToken;
    private readonly String _consumerName;
    private readonly String[] _destinations;
    public MessageConsumer( String address, String consumerName, String[] 
destinations, CancellationToken cancellationToken )
    {
        _address = address;
        _consumerName = consumerName;
        _destinations = destinations;
        _cancellationToken = cancellationToken;
    }
    public async Task StartReceivingMessages()
    {
        await Task.Yield();
        while ( !_cancellationToken.IsCancellationRequested )
        {
            var connectionFactory = new ConnectionFactory();
            var address = new Address( _address );
            try
            {
                var connection = await connectionFactory.CreateAsync( address );
                var session = ( (IConnection) connection ).CreateSession();
                var receivers = new List<ReceiverLink>();
                foreach ( var destination in _destinations )
                {
                    var receiver = session.CreateReceiver( 
$"{_consumerName}_{destination}",
                                                           new Source
                                                           {
                                                               Address = 
destination,
                                                               Capabilities = 
new[] { new Symbol( "queue" ) }
                                                           } );
                    receivers.Add( receiver );
                }
                while ( !_cancellationToken.IsCancellationRequested )
                    foreach ( var receiver in receivers )
                    {
                        // ReceiveAsync( TimeSpan.Zero ); blocks forever and no 
messages will be received 
                        var message = await receiver.ReceiveAsync( 
TimeSpan.FromMilliseconds( 1 ) );
                        if ( message == null )
                            continue;
                        receiver.Accept( message );
                        Console.WriteLine( $"{_consumerName} - Received message 
with id: '{message.Properties.MessageId}'" );
                    }
            }
            catch ( Exception ex )
            {
                Console.WriteLine( $"{_consumerName} - Connection error in 
producer '{_consumerName}' {ex.Message} => create new connection." );
                await Task.Delay( 1000, CancellationToken.None );
            }
        }
    }
}
{code}
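As an independent cross-check (not part of the client above), browsing the stuck queue over AMQP with a different client, for example the Qpid JMS client, would look roughly like the sketch below; the broker URL and queue name are placeholders and would need to match the real setup:

{code:java}
import java.util.Enumeration;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.QueueBrowser;
import javax.jms.Session;
import org.apache.qpid.jms.JmsConnectionFactory;

public class BrowseCheck {
    public static void main(String[] args) throws Exception {
        // Placeholders: point these at the real broker and the affected queue.
        ConnectionFactory factory = new JmsConnectionFactory("amqp://localhost:5672");
        try (Connection connection = factory.createConnection()) {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            QueueBrowser browser = session.createBrowser(session.createQueue("the-stuck-queue"));
            int count = 0;
            // Count what the broker is willing to hand out to a browser.
            for (Enumeration<?> e = browser.getEnumeration(); e.hasMoreElements(); e.nextElement()) {
                count++;
            }
            System.out.println("Browsed " + count + " messages");
        }
    }
}
{code}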


--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (ARTEMIS-4047) Artemis does not send message to consumer AMQP

2022-10-13 Thread daves (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

daves updated ARTEMIS-4047:
---
Affects Version/s: 2.26.0

> Artemis does not send message to consumer AMQP
> --
>
> Key: ARTEMIS-4047
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4047
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP, Broker
>Affects Versions: 2.25.0, 2.26.0
>Reporter: daves
>Priority: Major
> Attachments: 1.PNG, 2.PNG, 3.PNG, 4.PNG, 5.PNG, All.zip
>
>
> The broker does not send messages from one of many existing queues to the 
> connected consumer.
> According to the UI, the queue contains ~15k messages.
> I’m not able to consume any of these messages. I also tried to read a message 
> using the browse function of the UI/console, but that does not work either. 
> The message was created by an AMQP client and should be consumed by another 
> AMQP client.
> I tried to capture the situation in a few screenshots… 
> I don’t know which data can help you to understand the situation, so I’ve 
> collected everything:
>  * Logs
>  * Broker
>  * Data
> Please let me know if there are any other data I should add to the ticket.
>  
> I don’t think that the code of my client is relevant since the problem only 
> exists for a single queue…but here it is anyway:
>  
>  
> {code:java}
> using Amqp;
> using Amqp.Framing;
> using Amqp.Types;
> namespace Test;
> public sealed class MessageConsumer
> {
>     private readonly String _address;
>     private readonly CancellationToken _cancellationToken;
>     private readonly String _consumerName;
>     private readonly String[] _destinations;
>     public MessageConsumer( String address, String consumerName, String[] 
> destinations, CancellationToken cancellationToken )
>     {
>         _address = address;
>         _consumerName = consumerName;
>         _destinations = destinations;
>         _cancellationToken = cancellationToken;
>     }
>     public async Task StartReceivingMessages()
>     {
>         await Task.Yield();
>         while ( !_cancellationToken.IsCancellationRequested )
>         {
>             var connectionFactory = new ConnectionFactory();
>             var address = new Address( _address );
>             try
>             {
>                 var connection = await connectionFactory.CreateAsync( address 
> );
>                 var session = ( (IConnection) connection ).CreateSession();
>                 var receivers = new List<ReceiverLink>();
>                 foreach ( var destination in _destinations )
>                 {
>                     var receiver = session.CreateReceiver( 
> $"{_consumerName}_{destination}",
>                                                            new Source
>                                                            {
>                                                                Address = 
> destination,
>                                                                Capabilities = 
> new[] { new Symbol( "queue" ) }
>                                                            } );
>                     receivers.Add( receiver );
>                 }
>                 while ( !_cancellationToken.IsCancellationRequested )
>                     foreach ( var receiver in receivers )
>                     {
>                         // ReceiveAsync( TimeSpan.Zero ); blocks forever and 
> no messages will be received 
>                         var message = await receiver.ReceiveAsync( 
> TimeSpan.FromMilliseconds( 1 ) );
>                         if ( message == null )
>                             continue;
>                         receiver.Accept( message );
>                         Console.WriteLine( $"{_consumerName} - Received 
> message with id: '{message.Properties.MessageId}'" );
>                     }
>             }
>             catch ( Exception ex )
>             {
>                 Console.WriteLine( $"{_consumerName} - Connection error in 
> producer '{_consumerName}' {ex.Message} => create new connection." );
>                 await Task.Delay( 1000, CancellationToken.None );
>             }
>         }
>     }
> }
> {code}
>  
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (ARTEMIS-4047) Artemis does not send message to consumer AMQP

2022-10-13 Thread daves (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17616971#comment-17616971
 ] 

daves commented on ARTEMIS-4047:


I saw that there is a newer version, 2.26. I’ve updated my broker to this 
version, but the problem still exists. … Also, restarting the broker does not 
change anything, and restarting the client does not change anything either.

> Artemis does not send message to consumer AMQP
> --
>
> Key: ARTEMIS-4047
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4047
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP, Broker
>Affects Versions: 2.25.0
>Reporter: daves
>Priority: Major
> Attachments: 1.PNG, 2.PNG, 3.PNG, 4.PNG, 5.PNG, All.zip
>
>
> The broker does not send messages from one of many existing queues to the 
> connected consumer.
> According to the UI, the queue contains ~15k messages.
> I’m not able to consume any of these messages. I also tried to read a message 
> using the browse function of the UI/console, but that does not work either. 
> The message was created by an AMQP client and should be consumed by another 
> AMQP client.
> I tried to capture the situation in a few screenshots… 
> I don’t know which data can help you to understand the situation, so I’ve 
> collected everything:
>  * Logs
>  * Broker
>  * Data
> Please let me know if there are any other data I should add to the ticket.
>  
> I don’t think that the code of my client is relevant since the problem only 
> exists for a single queue…but here it is anyway:
>  
>  
> {code:java}
> using Amqp;
> using Amqp.Framing;
> using Amqp.Types;
> namespace Test;
> public sealed class MessageConsumer
> {
>     private readonly String _address;
>     private readonly CancellationToken _cancellationToken;
>     private readonly String _consumerName;
>     private readonly String[] _destinations;
>     public MessageConsumer( String address, String consumerName, String[] 
> destinations, CancellationToken cancellationToken )
>     {
>         _address = address;
>         _consumerName = consumerName;
>         _destinations = destinations;
>         _cancellationToken = cancellationToken;
>     }
>     public async Task StartReceivingMessages()
>     {
>         await Task.Yield();
>         while ( !_cancellationToken.IsCancellationRequested )
>         {
>             var connectionFactory = new ConnectionFactory();
>             var address = new Address( _address );
>             try
>             {
>                 var connection = await connectionFactory.CreateAsync( address 
> );
>                 var session = ( (IConnection) connection ).CreateSession();
>                 var receivers = new List<ReceiverLink>();
>                 foreach ( var destination in _destinations )
>                 {
>                     var receiver = session.CreateReceiver( 
> $"{_consumerName}_{destination}",
>                                                            new Source
>                                                            {
>                                                                Address = 
> destination,
>                                                                Capabilities = 
> new[] { new Symbol( "queue" ) }
>                                                            } );
>                     receivers.Add( receiver );
>                 }
>                 while ( !_cancellationToken.IsCancellationRequested )
>                     foreach ( var receiver in receivers )
>                     {
>                         // ReceiveAsync( TimeSpan.Zero ); blocks forever and 
> no messages will be received 
>                         var message = await receiver.ReceiveAsync( 
> TimeSpan.FromMilliseconds( 1 ) );
>                         if ( message == null )
>                             continue;
>                         receiver.Accept( message );
>                         Console.WriteLine( $"{_consumerName} - Received 
> message with id: '{message.Properties.MessageId}'" );
>                     }
>             }
>             catch ( Exception ex )
>             {
>                 Console.WriteLine( $"{_consumerName} - Connection error in 
> producer '{_consumerName}' {ex.Message} => create new connection." );
>                 await Task.Delay( 1000, CancellationToken.None );
>             }
>         }
>     }
> }
> {code}
>  
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (ARTEMIS-4048) stop the '-all' client modules grabbing their own -SNAPSHOT sources unnecessarily during build

2022-10-13 Thread Robbie Gemmell (Jira)
Robbie Gemmell created ARTEMIS-4048:
---

 Summary: stop the '-all' client modules grabbing their own 
-SNAPSHOT sources unnecessarily during build
 Key: ARTEMIS-4048
 URL: https://issues.apache.org/jira/browse/ARTEMIS-4048
 Project: ActiveMQ Artemis
  Issue Type: Task
Affects Versions: 2.26.0
Reporter: Robbie Gemmell
Assignee: Robbie Gemmell
 Fix For: 2.27.0


The three '-all' partial-shaded uber client modules can each unnecessarily 
download their own 6MB previously-shaded -SNAPSHOT sources from 
repository.apache.org while they are being built, if a sufficiently new copy 
isn't already in the local Maven repo. Typically this happens if a developer 
hasn't run 'mvn install' yet that day, or on every build in a CI env that 
doesn't cache previously installed snapshot output. Taking GHA CI jobs as an 
example, where 4 builds occur for the various jobs on each run, it can thus grab 
about 6x3x4=72MB of -SNAPSHOT sources unnecessarily on every push to a PR and/or main.

The cause is that the shading produces a sources jar, which naturally 
incorporates the original module sources (though they have no real content in 
this instance), but finds they have not been prepared yet before the shading 
occurs (the parent pom arranges that at a later phase), so it has to go looking 
and downloads them if it doesn't find something 'up to date enough' in the local 
repo. The remote version is the already-shaded 6MB output of a prior shading 
run, rather than the module's original basically-empty one.

The fix is to ensure the module's original [basically-empty] sources jar is 
always produced during the build, before the shading process, meaning it never 
needs to go looking for it and potentially download it. Similarly, the module's 
original [basically-empty] main jar should always be created, so that the 
shading always operates on that, as opposed to potentially operating on the 
already-shaded renamed artifact output from a prior run, as it currently does if 
you don't run mvn clean.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (ARTEMIS-4048) stop the '-all' client modules grabbing their own -SNAPSHOT sources unnecessarily during build

2022-10-13 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17617006#comment-17617006
 ] 

ASF subversion and git services commented on ARTEMIS-4048:
--

Commit 8cba446e2b1fd2259c1b07a960adcb47158c666c in activemq-artemis's branch 
refs/heads/main from Robbie Gemmell
[ https://gitbox.apache.org/repos/asf?p=activemq-artemis.git;h=8cba446e2b ]

ARTEMIS-4048: stop -all client modules needlessly grabbing their own -SNAPSHOT 
sources during build

- Always produce the original basically-empty sources jar for the
  module, before the shading, meaning its always available.
- Do the same for the main jar, to avoid the shading operating on a
  previously-shaded renamed output from a prior run if not cleaning.


> stop the '-all' client modules grabbing their own -SNAPSHOT sources 
> unnecessarily during build
> --
>
> Key: ARTEMIS-4048
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4048
> Project: ActiveMQ Artemis
>  Issue Type: Task
>Affects Versions: 2.26.0
>Reporter: Robbie Gemmell
>Assignee: Robbie Gemmell
>Priority: Major
> Fix For: 2.27.0
>
>
> The three '-all' partial-shaded uber client modules can each unnecessarily 
> download their own 6MB previously-shaded -SNAPSHOT sources from 
> repository.apache.org while they are being built, if a sufficiently new copy 
> isn't already in the local Maven repo. Typically this happens if a developer 
> hasn't run 'mvn install' yet that day, or on every build in a CI env that 
> doesn't cache previously installed snapshot output. Taking GHA CI jobs as an 
> example, where 4 builds occur for the various jobs on each run, it can thus 
> grab about 6x3x4=72MB of -SNAPSHOT sources unnecessarily on every push to a 
> PR and/or main.
> The cause is that the shading produces a sources jar, which naturally 
> incorporates the original module sources (though they have no real content 
> in this instance), but finds they have not been prepared yet before the 
> shading occurs (the parent pom arranges that at a later phase), so it has to 
> go looking and downloads them if it doesn't find something 'up to date 
> enough' in the local repo. The remote version is the already-shaded 6MB 
> output of a prior shading run, rather than the module's original 
> basically-empty one.
> The fix is to ensure the module's original [basically-empty] sources jar is 
> always produced during the build, before the shading process, meaning it 
> never needs to go looking for it and potentially download it. Similarly, the 
> module's original [basically-empty] main jar should always be created, so 
> that the shading always operates on that, as opposed to potentially 
> operating on the already-shaded renamed artifact output from a prior run, as 
> it currently does if you don't run mvn clean.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (ARTEMIS-4048) stop the '-all' client modules grabbing their own -SNAPSHOT sources unnecessarily during build

2022-10-13 Thread Robbie Gemmell (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robbie Gemmell resolved ARTEMIS-4048.
-
Resolution: Fixed

> stop the '-all' client modules grabbing their own -SNAPSHOT sources 
> unnecessarily during build
> --
>
> Key: ARTEMIS-4048
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4048
> Project: ActiveMQ Artemis
>  Issue Type: Task
>Affects Versions: 2.26.0
>Reporter: Robbie Gemmell
>Assignee: Robbie Gemmell
>Priority: Major
> Fix For: 2.27.0
>
>
> The three '-all' partial-shaded uber client modules can each unnecessarily 
> download their own 6MB previously-shaded -SNAPSHOT sources from 
> repository.apache.org while they are being built, if a sufficiently new copy 
> isn't already in the local Maven repo. Typically this happens if a developer 
> hasn't run 'mvn install' yet that day, or on every build in a CI env that 
> doesn't cache previously installed snapshot output. Taking GHA CI jobs as an 
> example, where 4 builds occur for the various jobs on each run, it can thus 
> grab about 6x3x4=72MB of -SNAPSHOT sources unnecessarily on every push to a 
> PR and/or main.
> The cause is that the shading produces a sources jar, which naturally 
> incorporates the original module sources (though they have no real content 
> in this instance), but finds they have not been prepared yet before the 
> shading occurs (the parent pom arranges that at a later phase), so it has to 
> go looking and downloads them if it doesn't find something 'up to date 
> enough' in the local repo. The remote version is the already-shaded 6MB 
> output of a prior shading run, rather than the module's original 
> basically-empty one.
> The fix is to ensure the module's original [basically-empty] sources jar is 
> always produced during the build, before the shading process, meaning it 
> never needs to go looking for it and potentially download it. Similarly, the 
> module's original [basically-empty] main jar should always be created, so 
> that the shading always operates on that, as opposed to potentially 
> operating on the already-shaded renamed artifact output from a prior run, as 
> it currently does if you don't run mvn clean.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (AMQ-9107) Closing many consumers causes CPU to spike to 100%

2022-10-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/AMQ-9107?focusedWorklogId=816617&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-816617
 ]

ASF GitHub Bot logged work on AMQ-9107:
---

Author: ASF GitHub Bot
Created on: 13/Oct/22 14:09
Start Date: 13/Oct/22 14:09
Worklog Time Spent: 10m 
  Work Description: mattrpav commented on code in PR #908:
URL: https://github.com/apache/activemq/pull/908#discussion_r994672165


##
activemq-broker/src/main/java/org/apache/activemq/broker/jmx/ManagedRegionBroker.java:
##
@@ -215,6 +217,7 @@ public ObjectName registerSubscription(ConnectionContext 
context, Subscription s
 registerSubscription(objectName, sub.getConsumerInfo(), key, 
view);
 }
 subscriptionMap.put(sub, objectName);
+consumerSubscriptionMap.put(sub.getConsumerInfo(), sub);

Review Comment:
   Was this tested with an offline durable topic subscription? I'm thinking 
this will return null here





Issue Time Tracking
---

Worklog Id: (was: 816617)
Time Spent: 40m  (was: 0.5h)

> Closing many consumers causes CPU to spike to 100%
> --
>
> Key: AMQ-9107
> URL: https://issues.apache.org/jira/browse/AMQ-9107
> Project: ActiveMQ
>  Issue Type: Bug
>Affects Versions: 5.17.1, 5.16.5
>Reporter: Lucas Tétreault
>Assignee: Jean-Baptiste Onofré
>Priority: Major
> Fix For: 5.18.0, 5.16.6, 5.17.3
>
> Attachments: example.zip, image-2022-10-07-00-12-39-657.png, 
> image-2022-10-07-00-17-30-657.png
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> When there are many consumers (~188k) on a queue, closing them is incredibly 
> expensive and causes the CPU to spike to 100% while the consumers are closed. 
> Tested on an Amazon MQ mq.m5.large instance (2 vcpu, 8gb memory).
> I have attached a minimal recreation of the issue where the following 
> happens: 
> 1/ Open 100 connections.
> 2/ Create consumers as fast as we can on all of those connections until we 
> hit at least 188k consumers.
> 3/ Sleep for 5 minutes so we can observe the CPU come back down after opening 
> all those connections.
> 4/ Start closing consumers as fast as we can.
> 5/ After all consumers are closed, sleep for 5 minutes to observe the CPU 
> come back down after closing all the connections.
>  
> In this example it seems 5 minutes wasn't actually sufficient time for the 
> CPU to come back down and the consumer and connection counts seem to hit 0 at 
> the same time: 
> !image-2022-10-07-00-12-39-657.png|width=757,height=353!
>  
> In a previous test with more time sleeping after closing all the consumers we 
> can see the CPU come back down before we close the connections. 
> !image-2022-10-07-00-17-30-657.png|width=764,height=348!
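The attached example.zip is not reproduced in this thread; as a rough sketch only, the kind of load described above could be generated with the ActiveMQ 5.x JMS client along these lines (broker URL, queue name, and counts are placeholders):

{code:java}
import java.util.ArrayList;
import java.util.List;
import javax.jms.Connection;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class ManyConsumers {
    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        List<Connection> connections = new ArrayList<>();
        List<MessageConsumer> consumers = new ArrayList<>();
        // 1/ open connections, 2/ create many consumers spread across them
        for (int c = 0; c < 100; c++) {
            Connection connection = factory.createConnection();
            connection.start();
            connections.add(connection);
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            for (int i = 0; i < 1_880; i++) {           // ~188k consumers in total
                consumers.add(session.createConsumer(session.createQueue("TEST.QUEUE")));
            }
        }
        Thread.sleep(5 * 60 * 1000);                    // 3/ let CPU settle
        for (MessageConsumer consumer : consumers) {    // 4/ close consumers as fast as possible
            consumer.close();
        }
        Thread.sleep(5 * 60 * 1000);                    // 5/ observe CPU after closing
        for (Connection connection : connections) {
            connection.close();
        }
    }
}
{code}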



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (AMQ-9107) Closing many consumers causes CPU to spike to 100%

2022-10-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/AMQ-9107?focusedWorklogId=816616&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-816616
 ]

ASF GitHub Bot logged work on AMQ-9107:
---

Author: ASF GitHub Bot
Created on: 13/Oct/22 14:09
Start Date: 13/Oct/22 14:09
Worklog Time Spent: 10m 
  Work Description: mattrpav commented on PR #908:
URL: https://github.com/apache/activemq/pull/908#issuecomment-1277675599

   In the doStop()

Issue Time Tracking
---

Worklog Id: (was: 816616)
Time Spent: 0.5h  (was: 20m)

> Closing many consumers causes CPU to spike to 100%
> --
>
> Key: AMQ-9107
> URL: https://issues.apache.org/jira/browse/AMQ-9107
> Project: ActiveMQ
>  Issue Type: Bug
>Affects Versions: 5.17.1, 5.16.5
>Reporter: Lucas Tétreault
>Assignee: Jean-Baptiste Onofré
>Priority: Major
> Fix For: 5.18.0, 5.16.6, 5.17.3
>
> Attachments: example.zip, image-2022-10-07-00-12-39-657.png, 
> image-2022-10-07-00-17-30-657.png
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> When there are many consumers (~188k) on a queue, closing them is incredibly 
> expensive and causes the CPU to spike to 100% while the consumers are closed. 
> Tested on an Amazon MQ mq.m5.large instance (2 vcpu, 8gb memory).
> I have attached a minimal recreation of the issue where the following 
> happens: 
> 1/ Open 100 connections.
> 2/ Create consumers as fast as we can on all of those connections until we 
> hit at least 188k consumers.
> 3/ Sleep for 5 minutes so we can observe the CPU come back down after opening 
> all those connections.
> 4/ Start closing consumers as fast as we can.
> 5/ After all consumers are closed, sleep for 5 minutes to observe the CPU 
> come back down after closing all the connections.
>  
> In this example it seems 5 minutes wasn't actually sufficient time for the 
> CPU to come back down and the consumer and connection counts seem to hit 0 at 
> the same time: 
> !image-2022-10-07-00-12-39-657.png|width=757,height=353!
>  
> In a previous test with more time sleeping after closing all the consumers we 
> can see the CPU come back down before we close the connections. 
> !image-2022-10-07-00-17-30-657.png|width=764,height=348!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (AMQ-9107) Closing many consumers causes CPU to spike to 100%

2022-10-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/AMQ-9107?focusedWorklogId=816622&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-816622
 ]

ASF GitHub Bot logged work on AMQ-9107:
---

Author: ASF GitHub Bot
Created on: 13/Oct/22 14:12
Start Date: 13/Oct/22 14:12
Worklog Time Spent: 10m 
  Work Description: jbonofre commented on code in PR #908:
URL: https://github.com/apache/activemq/pull/908#discussion_r994700232


##
activemq-broker/src/main/java/org/apache/activemq/broker/jmx/ManagedRegionBroker.java:
##
@@ -215,6 +217,7 @@ public ObjectName registerSubscription(ConnectionContext 
context, Subscription s
 registerSubscription(objectName, sub.getConsumerInfo(), key, 
view);
 }
 subscriptionMap.put(sub, objectName);
+consumerSubscriptionMap.put(sub.getConsumerInfo(), sub);

Review Comment:
   Good catch, that's possible. At least we should test if `sub` is not null.
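A minimal sketch of the kind of guard being discussed, keyed off the diff above (names follow the diff; this is not the merged change):

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.activemq.broker.region.Subscription;
import org.apache.activemq.command.ConsumerInfo;

class ConsumerTrackingSketch {
    private final Map<ConsumerInfo, Subscription> consumerSubscriptionMap = new ConcurrentHashMap<>();

    void track(Subscription sub) {
        // Guard both cases raised in the review: a null subscription and a subscription
        // without a ConsumerInfo (e.g. an offline durable topic subscription).
        if (sub != null && sub.getConsumerInfo() != null) {
            consumerSubscriptionMap.put(sub.getConsumerInfo(), sub);
        }
    }
}
{code}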





Issue Time Tracking
---

Worklog Id: (was: 816622)
Time Spent: 50m  (was: 40m)

> Closing many consumers causes CPU to spike to 100%
> --
>
> Key: AMQ-9107
> URL: https://issues.apache.org/jira/browse/AMQ-9107
> Project: ActiveMQ
>  Issue Type: Bug
>Affects Versions: 5.17.1, 5.16.5
>Reporter: Lucas Tétreault
>Assignee: Jean-Baptiste Onofré
>Priority: Major
> Fix For: 5.18.0, 5.16.6, 5.17.3
>
> Attachments: example.zip, image-2022-10-07-00-12-39-657.png, 
> image-2022-10-07-00-17-30-657.png
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> When there are many consumers (~188k) on a queue, closing them is incredibly 
> expensive and causes the CPU to spike to 100% while the consumers are closed. 
> Tested on an Amazon MQ mq.m5.large instance (2 vcpu, 8gb memory).
> I have attached a minimal recreation of the issue where the following 
> happens: 
> 1/ Open 100 connections.
> 2/ Create consumers as fast as we can on all of those connections until we 
> hit at least 188k consumers.
> 3/ Sleep for 5 minutes so we can observe the CPU come back down after opening 
> all those connections.
> 4/ Start closing consumers as fast as we can.
> 5/ After all consumers are closed, sleep for 5 minutes to observe the CPU 
> come back down after closing all the connections.
>  
> In this example it seems 5 minutes wasn't actually sufficient time for the 
> CPU to come back down and the consumer and connection counts seem to hit 0 at 
> the same time: 
> !image-2022-10-07-00-12-39-657.png|width=757,height=353!
>  
> In a previous test with more time sleeping after closing all the consumers we 
> can see the CPU come back down before we close the connections. 
> !image-2022-10-07-00-17-30-657.png|width=764,height=348!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (ARTEMIS-4045) AMQ224041: Failed to deliver in mirror

2022-10-13 Thread Stephen Baker (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen Baker updated ARTEMIS-4045:
---
Attachment: proposed-fix.log

> AMQ224041: Failed to deliver in mirror
> --
>
> Key: ARTEMIS-4045
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4045
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Stephen Baker
>Priority: Major
> Attachments: expire_error_mirror.log, proposed-fix.log
>
>
> I saw the following stack trace when running artemis 2.25 to artemis 2.25 in 
> a dual mirror configuration with docker instances.
> The side that has the error is the only side running the message expiry scan.
> Messages were added to the other side through JMS with a short (10s) expiry.
> {code:java}
> artemis-test-artemis-1-m-1   | 2022-10-12 22:02:13,468 ERROR 
> [org.apache.activemq.artemis.core.server] AMQ224041: Failed to deliver: 
> java.lang.IllegalStateException: this method requires to be called within the 
> handler, use the executor
> artemis-test-artemis-1-m-1   |     at 
> org.apache.activemq.artemis.protocol.amqp.proton.handler.ProtonHandler.requireHandler(ProtonHandler.java:210)
>  [artemis-amqp-protocol-2.25.0.jar:2.25.0]
> artemis-test-artemis-1-m-1   |     at 
> org.apache.activemq.artemis.protocol.amqp.proton.AMQPConnectionContext.requireInHandler(AMQPConnectionContext.java:197)
>  [artemis-amqp-protocol-2.25.0.jar:2.25.0]
> artemis-test-artemis-1-m-1   |     at 
> org.apache.activemq.artemis.protocol.amqp.proton.ProtonAbstractReceiver.settle(ProtonAbstractReceiver.java:185)
>  [artemis-amqp-protocol-2.25.0.jar:2.25.0]
> artemis-test-artemis-1-m-1   |     at 
> org.apache.activemq.artemis.protocol.amqp.connect.mirror.AMQPMirrorControllerTarget$ACKMessageOperation.run(AMQPMirrorControllerTarget.java:125)
>  [artemis-amqp-protocol-2.25.0.jar:2.25.0]
> artemis-test-artemis-1-m-1   |     at 
> org.apache.activemq.artemis.protocol.amqp.connect.mirror.AMQPMirrorControllerTarget.performAck(AMQPMirrorControllerTarget.java:388)
>  [artemis-amqp-protocol-2.25.0.jar:2.25.0]
> artemis-test-artemis-1-m-1   |     at 
> org.apache.activemq.artemis.protocol.amqp.connect.mirror.AMQPMirrorControllerTarget.lambda$performAck$2(AMQPMirrorControllerTarget.java:377)
>  [artemis-amqp-protocol-2.25.0.jar:2.25.0]
> artemis-test-artemis-1-m-1   |     at 
> org.apache.activemq.artemis.core.server.impl.QueueImpl$2.skipDelivery(QueueImpl.java:1203)
>  [artemis-server-2.25.0.jar:2.25.0]
> artemis-test-artemis-1-m-1   |     at 
> org.apache.activemq.artemis.core.server.impl.QueueImpl.doInternalPoll(QueueImpl.java:2932)
>  [artemis-server-2.25.0.jar:2.25.0]
> artemis-test-artemis-1-m-1   |     at 
> org.apache.activemq.artemis.core.server.impl.QueueImpl.deliver(QueueImpl.java:2991)
>  [artemis-server-2.25.0.jar:2.25.0]
> artemis-test-artemis-1-m-1   |     at 
> org.apache.activemq.artemis.core.server.impl.QueueImpl$DeliverRunner.run(QueueImpl.java:4250)
>  [artemis-server-2.25.0.jar:2.25.0]
> artemis-test-artemis-1-m-1   |     at 
> org.apache.activemq.artemis.utils.actors.OrderedExecutor.doTask(OrderedExecutor.java:56)
>  [artemis-commons-2.25.0.jar:]
> artemis-test-artemis-1-m-1   |     at 
> org.apache.activemq.artemis.utils.actors.OrderedExecutor.doTask(OrderedExecutor.java:31)
>  [artemis-commons-2.25.0.jar:]
> artemis-test-artemis-1-m-1   |     at 
> org.apache.activemq.artemis.utils.actors.ProcessorBase.executePendingTasks(ProcessorBase.java:67)
>  [artemis-commons-2.25.0.jar:]
> artemis-test-artemis-1-m-1   |     at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>  [java.base:]
> artemis-test-artemis-1-m-1   |     at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>  [java.base:]
> artemis-test-artemis-1-m-1   |     at 
> org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1.run(ActiveMQThreadFactory.java:118)
>  [artemis-commons-2.25.0.jar:]
> artemis-test-artemis-1-m-1   |{code}
> I believe the stack may be enough to diagnose the issue. It's very 
> specifically calling run directly where all the other code paths run it 
> through an executor, and the error says that it can't be run directly.
>  
> From AMQPMirrorControllerTarget
> {code:java}
> switch (retry) {
>case 0:
>   // first retry, after IO Operations
>   sessionSPI.getSessionContext().executeOnCompletion(new 
> RunnableCallback(() -> performAck(nodeID, messageID, targetQueue, 
> ackMessageOperation, reason, (short) 1)));
>   return;
>case 1:
>   // second retry after the queue is flushed the temporary adds
>   targetQueue.flushOnIntermediate(() -> {
>  recoverContext();
>  performAck(nodeID, messageID, targetQueue, ackMessageOperation, 
> reason, (short)2);
>   });
>   return;
>   

[jira] [Commented] (ARTEMIS-4045) AMQ224041: Failed to deliver in mirror

2022-10-13 Thread Stephen Baker (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17617087#comment-17617087
 ] 

Stephen Baker commented on ARTEMIS-4045:


[^proposed-fix.log] shows a trace running with [~clebert.suco...@jboss.com] 's 
proposed-fix patch. The error is gone. The second retry is at: 2022-10-13 
14:10:56,514
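For context, the invariant behind "this method requires to be called within the handler, use the executor" can be illustrated with plain java.util.concurrent; this is a generic sketch, not Artemis internals and not the proposed fix:

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class HandlerThreadExample {
    private final ExecutorService handlerExecutor = Executors.newSingleThreadExecutor();
    private volatile Thread handlerThread;

    public HandlerThreadExample() throws Exception {
        // Record the single handler thread up front.
        handlerExecutor.submit(() -> { handlerThread = Thread.currentThread(); }).get();
    }

    private void requireHandler() {
        if (Thread.currentThread() != handlerThread) {
            throw new IllegalStateException("this method requires to be called within the handler, use the executor");
        }
    }

    // Calling this directly from an arbitrary thread throws, mirroring the AMQ224041 stack trace...
    public void settle() {
        requireHandler();
        // ... settle the delivery ...
    }

    // ...so callers are expected to hop onto the handler's executor first.
    public void settleLater() {
        handlerExecutor.execute(this::settle);
    }
}
{code}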

> AMQ224041: Failed to deliver in mirror
> --
>
> Key: ARTEMIS-4045
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4045
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Stephen Baker
>Priority: Major
> Attachments: expire_error_mirror.log, proposed-fix.log
>
>
> I saw the following stack trace when running artemis 2.25 to artemis 2.25 in 
> a dual mirror configuration with docker instances.
> The side that has the error is the only side running the message expiry scan.
> Messages were added to the other side through JMS with a short (10s) expiry.
> {code:java}
> artemis-test-artemis-1-m-1   | 2022-10-12 22:02:13,468 ERROR 
> [org.apache.activemq.artemis.core.server] AMQ224041: Failed to deliver: 
> java.lang.IllegalStateException: this method requires to be called within the 
> handler, use the executor
> artemis-test-artemis-1-m-1   |     at 
> org.apache.activemq.artemis.protocol.amqp.proton.handler.ProtonHandler.requireHandler(ProtonHandler.java:210)
>  [artemis-amqp-protocol-2.25.0.jar:2.25.0]
> artemis-test-artemis-1-m-1   |     at 
> org.apache.activemq.artemis.protocol.amqp.proton.AMQPConnectionContext.requireInHandler(AMQPConnectionContext.java:197)
>  [artemis-amqp-protocol-2.25.0.jar:2.25.0]
> artemis-test-artemis-1-m-1   |     at 
> org.apache.activemq.artemis.protocol.amqp.proton.ProtonAbstractReceiver.settle(ProtonAbstractReceiver.java:185)
>  [artemis-amqp-protocol-2.25.0.jar:2.25.0]
> artemis-test-artemis-1-m-1   |     at 
> org.apache.activemq.artemis.protocol.amqp.connect.mirror.AMQPMirrorControllerTarget$ACKMessageOperation.run(AMQPMirrorControllerTarget.java:125)
>  [artemis-amqp-protocol-2.25.0.jar:2.25.0]
> artemis-test-artemis-1-m-1   |     at 
> org.apache.activemq.artemis.protocol.amqp.connect.mirror.AMQPMirrorControllerTarget.performAck(AMQPMirrorControllerTarget.java:388)
>  [artemis-amqp-protocol-2.25.0.jar:2.25.0]
> artemis-test-artemis-1-m-1   |     at 
> org.apache.activemq.artemis.protocol.amqp.connect.mirror.AMQPMirrorControllerTarget.lambda$performAck$2(AMQPMirrorControllerTarget.java:377)
>  [artemis-amqp-protocol-2.25.0.jar:2.25.0]
> artemis-test-artemis-1-m-1   |     at 
> org.apache.activemq.artemis.core.server.impl.QueueImpl$2.skipDelivery(QueueImpl.java:1203)
>  [artemis-server-2.25.0.jar:2.25.0]
> artemis-test-artemis-1-m-1   |     at 
> org.apache.activemq.artemis.core.server.impl.QueueImpl.doInternalPoll(QueueImpl.java:2932)
>  [artemis-server-2.25.0.jar:2.25.0]
> artemis-test-artemis-1-m-1   |     at 
> org.apache.activemq.artemis.core.server.impl.QueueImpl.deliver(QueueImpl.java:2991)
>  [artemis-server-2.25.0.jar:2.25.0]
> artemis-test-artemis-1-m-1   |     at 
> org.apache.activemq.artemis.core.server.impl.QueueImpl$DeliverRunner.run(QueueImpl.java:4250)
>  [artemis-server-2.25.0.jar:2.25.0]
> artemis-test-artemis-1-m-1   |     at 
> org.apache.activemq.artemis.utils.actors.OrderedExecutor.doTask(OrderedExecutor.java:56)
>  [artemis-commons-2.25.0.jar:]
> artemis-test-artemis-1-m-1   |     at 
> org.apache.activemq.artemis.utils.actors.OrderedExecutor.doTask(OrderedExecutor.java:31)
>  [artemis-commons-2.25.0.jar:]
> artemis-test-artemis-1-m-1   |     at 
> org.apache.activemq.artemis.utils.actors.ProcessorBase.executePendingTasks(ProcessorBase.java:67)
>  [artemis-commons-2.25.0.jar:]
> artemis-test-artemis-1-m-1   |     at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>  [java.base:]
> artemis-test-artemis-1-m-1   |     at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>  [java.base:]
> artemis-test-artemis-1-m-1   |     at 
> org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1.run(ActiveMQThreadFactory.java:118)
>  [artemis-commons-2.25.0.jar:]
> artemis-test-artemis-1-m-1   |{code}
> I believe the stack may be enough to diagnose the issue. It's very 
> specifically calling run directly where all the other code paths run it 
> through an executor, and the error says that it can't be run directly.
>  
> From AMQPMirrorControllerTarget
> {code:java}
> switch (retry) {
>case 0:
>   // first retry, after IO Operations
>   sessionSPI.getSessionContext().executeOnCompletion(new 
> RunnableCallback(() -> performAck(nodeID, messageID, targetQueue, 
> ackMessageOperation, reason, (short) 1)));
>   return;
>case 1:
>   // second retry after the queue is flushed the temporary adds
>   targe

[jira] [Work logged] (AMQ-9107) Closing many consumers causes CPU to spike to 100%

2022-10-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/AMQ-9107?focusedWorklogId=816642&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-816642
 ]

ASF GitHub Bot logged work on AMQ-9107:
---

Author: ASF GitHub Bot
Created on: 13/Oct/22 14:42
Start Date: 13/Oct/22 14:42
Worklog Time Spent: 10m 
  Work Description: mattrpav commented on PR #908:
URL: https://github.com/apache/activemq/pull/908#issuecomment-1277726391

   > registeredMBeans.clear();
   
   We probably at least need a unit test to verify after stop that all the 
collections are size == 0




Issue Time Tracking
---

Worklog Id: (was: 816642)
Time Spent: 1h  (was: 50m)

> Closing many consumers causes CPU to spike to 100%
> --
>
> Key: AMQ-9107
> URL: https://issues.apache.org/jira/browse/AMQ-9107
> Project: ActiveMQ
>  Issue Type: Bug
>Affects Versions: 5.17.1, 5.16.5
>Reporter: Lucas Tétreault
>Assignee: Jean-Baptiste Onofré
>Priority: Major
> Fix For: 5.18.0, 5.16.6, 5.17.3
>
> Attachments: example.zip, image-2022-10-07-00-12-39-657.png, 
> image-2022-10-07-00-17-30-657.png
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> When there are many consumers (~188k) on a queue, closing them is incredibly 
> expensive and causes the CPU to spike to 100% while the consumers are closed. 
> Tested on an Amazon MQ mq.m5.large instance (2 vcpu, 8gb memory).
> I have attached a minimal recreation of the issue where the following 
> happens: 
> 1/ Open 100 connections.
> 2/ Create consumers as fast as we can on all of those connections until we 
> hit at least 188k consumers.
> 3/ Sleep for 5 minutes so we can observe the CPU come back down after opening 
> all those connections.
> 4/ Start closing consumers as fast as we can.
> 5/ After all consumers are closed, sleep for 5 minutes to observe the CPU 
> come back down after closing all the connections.
>  
> In this example it seems 5 minutes wasn't actually sufficient time for the 
> CPU to come back down and the consumer and connection counts seem to hit 0 at 
> the same time: 
> !image-2022-10-07-00-12-39-657.png|width=757,height=353!
>  
> In a previous test with more time sleeping after closing all the consumers we 
> can see the CPU come back down before we close the connections. 
> !image-2022-10-07-00-17-30-657.png|width=764,height=348!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (ARTEMIS-4020) switch to using SLF4J for logging API and use Log4j 2 for broker distribution

2022-10-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4020?focusedWorklogId=816657&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-816657
 ]

ASF GitHub Bot logged work on ARTEMIS-4020:
---

Author: ASF GitHub Bot
Created on: 13/Oct/22 15:20
Start Date: 13/Oct/22 15:20
Worklog Time Spent: 10m 
  Work Description: tabish121 opened a new pull request, #4257:
URL: https://github.com/apache/activemq-artemis/pull/4257

   Attempt to standardize all Logger declarations to a single variable name, 
which makes the code more consistent and makes finding usages of loggers in the 
code a bit easier.
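   For illustration, the conventional SLF4J declaration being standardized on looks roughly like this (the class name is a placeholder; the exact variable name chosen is in the PR, not in this thread):

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class SomeBrokerComponent {
    // Conventional SLF4J logger declaration; the PR standardizes every class on one variable name.
    private static final Logger logger = LoggerFactory.getLogger(SomeBrokerComponent.class);

    void doWork() {
        logger.debug("doing work");
    }
}
{code}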




Issue Time Tracking
---

Worklog Id: (was: 816657)
Time Spent: 9h 10m  (was: 9h)

> switch to using SLF4J for logging API and use Log4j 2 for broker distribution
> -
>
> Key: ARTEMIS-4020
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4020
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Reporter: Robbie Gemmell
>Assignee: Robbie Gemmell
>Priority: Major
> Fix For: 2.27.0
>
>  Time Spent: 9h 10m
>  Remaining Estimate: 0h
>
> Switch to using [SLF4J|https://www.slf4j.org/] as the logging API for the 
> code base, with end users supplying and configuring an SLF4J-supporting 
> logging implementation of their choice based on their needs.
> For the client, applications will need to supply an SLF4J binding to a 
> logging implementation of their choice to enable logging. An example of doing 
> so using [Log4J 2|https://logging.apache.org/log4j/2.x/manual/index.html] is 
> given in (/will be, once the release is out) the [client logging 
> documentation|https://activemq.apache.org/components/artemis/documentation/latest/logging.html#logging-in-a-client-application].
> For the broker, the assembly distribution will include [Log4J 
> 2|https://logging.apache.org/log4j/2.x/manual/index.html] as its logging 
> implementation, with the "artemis create" CLI command used to create broker 
> instances now creating a log4j2.properties configuration within the broker 
> instance's etc/ directory to configure Log4J. Details for upgrading an 
> existing broker instance are given in (/will be, once the release is out) the 
> [version upgrade 
> documentation|https://activemq.apache.org/components/artemis/documentation/latest/versions.html].



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (ARTEMIS-4035) All consumers of federated queue drop if only one consumer drops

2022-10-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4035?focusedWorklogId=816667&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-816667
 ]

ASF GitHub Bot logged work on ARTEMIS-4035:
---

Author: ASF GitHub Bot
Created on: 13/Oct/22 15:36
Start Date: 13/Oct/22 15:36
Worklog Time Spent: 10m 
  Work Description: asfgit closed pull request #4249: ARTEMIS-4035 all 
consumers of federated queue drop if only one consum…
URL: https://github.com/apache/activemq-artemis/pull/4249




Issue Time Tracking
---

Worklog Id: (was: 816667)
Time Spent: 0.5h  (was: 20m)

> All consumers of federated queue drop if only one consumer drops
> 
>
> Key: ARTEMIS-4035
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4035
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Justin Bertram
>Assignee: Justin Bertram
>Priority: Major
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Scenario:
> - 2 nodes.
> - 2 federated queues in an upstream configuration.
> - One consumer for each federated queue connected to just one of the brokers.
> - Open the web console of the broker that the consumers are connected to. All 
> the consumers are there.
> - Open the web console of the other broker. The same consumers from before 
> are there (i.e. the federation is working).
> - Drop one consumer from the broker and then all the consumers from the other 
> node are dropped. Federation no longer works.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (ARTEMIS-4035) All consumers of federated queue drop if only one consumer drops

2022-10-13 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17617147#comment-17617147
 ] 

ASF subversion and git services commented on ARTEMIS-4035:
--

Commit 0ab098e4561a335d3db5bc6b484437918d316b05 in activemq-artemis's branch 
refs/heads/main from Justin Bertram
[ https://gitbox.apache.org/repos/asf?p=activemq-artemis.git;h=0ab098e456 ]

ARTEMIS-4035 all consumers of federated queue drop if only one consumer drops


> All consumers of federated queue drop if only one consumer drops
> 
>
> Key: ARTEMIS-4035
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4035
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Justin Bertram
>Assignee: Justin Bertram
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Scenario:
> - 2 nodes.
> - 2 federated queues in an upstream configuration.
> - One consumer for each federated queue connected to just one of the brokers.
> - Open the web console of the broker that the consumers are connected to. All 
> the consumers are there.
> - Open the web console of the other broker. The same consumers from before 
> are there (i.e. the federation is working).
> - Drop one consumer from the broker and then all the consumers from the other 
> node are dropped. Federation no longer works.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (ARTEMIS-4035) All consumers of federated queue drop if only one consumer drops

2022-10-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4035?focusedWorklogId=816671&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-816671
 ]

ASF GitHub Bot logged work on ARTEMIS-4035:
---

Author: ASF GitHub Bot
Created on: 13/Oct/22 15:48
Start Date: 13/Oct/22 15:48
Worklog Time Spent: 10m 
  Work Description: jbertram commented on PR #4249:
URL: 
https://github.com/apache/activemq-artemis/pull/4249#issuecomment-1277826509

   @clebertsuconic, I just saw your comment after I merged. I'm running the 
full test-suite now. FWIW, I ran all the federations tests manually and they 
all passed.




Issue Time Tracking
---

Worklog Id: (was: 816671)
Time Spent: 40m  (was: 0.5h)

> All consumers of federated queue drop if only one consumer drops
> 
>
> Key: ARTEMIS-4035
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4035
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Justin Bertram
>Assignee: Justin Bertram
>Priority: Major
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Scenario:
> - 2 nodes.
> - 2 federated queues in an upstream configuration.
> - One consumer for each federated queue connected to just one of the brokers.
> - Open the web console of the broker that the consumers are connected to. All 
> the consumers are there.
> - Open the web console of the other broker. The same consumers from before 
> are there (i.e. the federation is working).
> - Drop one consumer from the broker and then all the consumers from the other 
> node are dropped. Federation no longer works.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (ARTEMIS-4042) DefaultSensitiveStringCodec - read ARTEMIS_DEFAULT_SENSITIVE_STRING_CODEC_KEY env if system property is not set

2022-10-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4042?focusedWorklogId=816689&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-816689
 ]

ASF GitHub Bot logged work on ARTEMIS-4042:
---

Author: ASF GitHub Bot
Created on: 13/Oct/22 16:22
Start Date: 13/Oct/22 16:22
Worklog Time Spent: 10m 
  Work Description: asfgit closed pull request #4254: ARTEMIS-4042 - read 
sensitive string codec env var if system property…
URL: https://github.com/apache/activemq-artemis/pull/4254




Issue Time Tracking
---

Worklog Id: (was: 816689)
Time Spent: 40m  (was: 0.5h)

> DefaultSensitiveStringCodec - read ARTEMIS_DEFAULT_SENSITIVE_STRING_CODEC_KEY 
> env if system property is not set 
> 
>
> Key: ARTEMIS-4042
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4042
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Configuration
>Affects Versions: 2.26.0
>Reporter: Gary Tully
>Assignee: Gary Tully
>Priority: Major
> Fix For: 2.27.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Following up on ARTEMIS-3488, to avoid expansion of the env var on the 
> command line, if it is not set as a system property, attempt to read directly 
> from the environment.
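A minimal sketch of the fallback described (the key name below follows the issue title; the actual property name used by DefaultSensitiveStringCodec may differ):

{code:java}
public class CodecKeySketch {
    static String resolveKey() {
        // Prefer the system property; if it is not set, read the environment variable directly
        // instead of relying on shell expansion of the env var on the command line.
        String key = System.getProperty("ARTEMIS_DEFAULT_SENSITIVE_STRING_CODEC_KEY");
        if (key == null || key.isEmpty()) {
            key = System.getenv("ARTEMIS_DEFAULT_SENSITIVE_STRING_CODEC_KEY");
        }
        return key;
    }
}
{code}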



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (ARTEMIS-4042) DefaultSensitiveStringCodec - read ARTEMIS_DEFAULT_SENSITIVE_STRING_CODEC_KEY env if system property is not set

2022-10-13 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17617170#comment-17617170
 ] 

ASF subversion and git services commented on ARTEMIS-4042:
--

Commit 8a6e29ccde3525e2012417ac41777529853a5bf0 in activemq-artemis's branch 
refs/heads/main from Gary Tully
[ https://gitbox.apache.org/repos/asf?p=activemq-artemis.git;h=8a6e29ccde ]

ARTEMIS-4042 - read sensitive string codec env var if system property is not set


> DefaultSensitiveStringCodec - read ARTEMIS_DEFAULT_SENSITIVE_STRING_CODEC_KEY 
> env if system property is not set 
> 
>
> Key: ARTEMIS-4042
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4042
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Configuration
>Affects Versions: 2.26.0
>Reporter: Gary Tully
>Assignee: Gary Tully
>Priority: Major
> Fix For: 2.27.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Following up on ARTEMIS-3488, to avoid expansion of the env var on the 
> command line, if it is not set as a system property, attempt to read directly 
> from the environment.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (ARTEMIS-4042) DefaultSensitiveStringCodec - read ARTEMIS_DEFAULT_SENSITIVE_STRING_CODEC_KEY env if system property is not set

2022-10-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4042?focusedWorklogId=816688&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-816688
 ]

ASF GitHub Bot logged work on ARTEMIS-4042:
---

Author: ASF GitHub Bot
Created on: 13/Oct/22 16:22
Start Date: 13/Oct/22 16:22
Worklog Time Spent: 10m 
  Work Description: jbertram commented on PR #4254:
URL: 
https://github.com/apache/activemq-artemis/pull/4254#issuecomment-1277875051

   @gtully they must be. I don't see how your change could be related to those 
failures. I'll merge this now.




Issue Time Tracking
---

Worklog Id: (was: 816688)
Time Spent: 0.5h  (was: 20m)

> DefaultSensitiveStringCodec - read ARTEMIS_DEFAULT_SENSITIVE_STRING_CODEC_KEY 
> env if system property is not set 
> 
>
> Key: ARTEMIS-4042
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4042
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Configuration
>Affects Versions: 2.26.0
>Reporter: Gary Tully
>Assignee: Gary Tully
>Priority: Major
> Fix For: 2.27.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Following up on ARTEMIS-3488, to avoid expansion of the env var on the 
> command line, if it is not set as a system property, attempt to read directly 
> from the environment.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (ARTEMIS-4013) PostgresLargeObjectManager does incorrectly unwrap the jdbc connection

2022-10-13 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17617176#comment-17617176
 ] 

ASF subversion and git services commented on ARTEMIS-4013:
--

Commit 9a44d3e0ea3259f6b187f48ae5f18e7f3c1a51b0 in activemq-artemis's branch 
refs/heads/main from Johannes Edmeier
[ https://gitbox.apache.org/repos/asf?p=activemq-artemis.git;h=9a44d3e0ea ]

ARTEMIS-4013 proper cxn unwrap in PostgresLargeObjectManager


> PostgresLargeObjectManager does incorrectly unwrap the jdbc connection
> --
>
> Key: ARTEMIS-4013
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4013
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.25.0
>Reporter: Johannes Edmeier
>Priority: Minor
>
> The {{PostgresLargeObjectManager}} unwraps the JDBC connection in an unusual, 
> non-conformant way. It should just do {{connection.unwrap(PGConnection.class)}} 
> instead of the current reflection-based approach.
> This currently prevents usage of Artemis alongside Testcontainers as it 
> results in this exception:
> {noformat}
> java.lang.ClassCastException: class org.testcontainers.jdbc.ConnectionWrapper 
> cannot be cast to class org.postgresql.PGConnection 
> (org.testcontainers.jdbc.ConnectionWrapper and org.postgresql.PGConnection 
> are in unnamed module of loader 'app')
>   at 
> org.apache.activemq.artemis.jdbc.store.file.PostgresLargeObjectManager.createLO(PostgresLargeObjectManager.java:69)
>   at 
> org.apache.activemq.artemis.jdbc.store.file.PostgresSequentialSequentialFileDriver.createFile(PostgresSequentialSequentialFileDriver.java:64)
>   at 
> org.apache.activemq.artemis.jdbc.store.file.JDBCSequentialFileFactoryDriver.openFile(JDBCSequentialFileFactoryDriver.java:109)
>   at 
> org.apache.activemq.artemis.jdbc.store.file.JDBCSequentialFile.load(JDBCSequentialFile.java:110)
>   at 
> org.apache.activemq.artemis.jdbc.store.file.JDBCSequentialFile.open(JDBCSequentialFile.java:104)
>   at 
> org.apache.activemq.artemis.core.paging.impl.PagingStoreFactoryDatabase.reloadStores(PagingStoreFactoryDatabase.java:220)
>   at 
> org.apache.activemq.artemis.core.paging.impl.PagingManagerImpl.reloadStores(PagingManagerImpl.java:326)
>   at 
> org.apache.activemq.artemis.core.paging.impl.PagingManagerImpl.start(PagingManagerImpl.java:430)
>   at 
> org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl.initialisePart1(ActiveMQServerImpl.java:3160)
>   at 
> org.apache.activemq.artemis.core.server.impl.LiveOnlyActivation.run(LiveOnlyActivation.java:68)
>   at 
> org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl.internalStart(ActiveMQServerImpl.java:655)
>   at 
> org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl.start(ActiveMQServerImpl.java:568)
>   at 
> org.apache.activemq.artemis.core.server.embedded.EmbeddedActiveMQ.start(EmbeddedActiveMQ.java:116)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
>   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:568){noformat}
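A short sketch of the standard JDBC unwrap the reporter suggests; it goes through java.sql.Wrapper and therefore also works when the connection is wrapped, e.g. by Testcontainers' ConnectionWrapper (this is an illustration, not the merged change):

{code:java}
import java.sql.Connection;
import java.sql.SQLException;
import org.postgresql.PGConnection;

public class UnwrapSketch {
    static PGConnection pgConnection(Connection connection) throws SQLException {
        // Standard JDBC unwrap: works for plain PostgreSQL connections and for wrappers
        // that delegate to one, instead of casting or reflecting on the concrete class.
        if (connection.isWrapperFor(PGConnection.class)) {
            return connection.unwrap(PGConnection.class);
        }
        throw new SQLException("Connection cannot be unwrapped to PGConnection");
    }
}
{code}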



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (ARTEMIS-4013) PostgresLargeObjectManager does incorrectly unwrap the jdbc connection

2022-10-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4013?focusedWorklogId=816695&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-816695
 ]

ASF GitHub Bot logged work on ARTEMIS-4013:
---

Author: ASF GitHub Bot
Created on: 13/Oct/22 16:32
Start Date: 13/Oct/22 16:32
Worklog Time Spent: 10m 
  Work Description: asfgit closed pull request #4231: ARTEMIS-4013 proper 
connection unwrap in PostgresLargeObjectManager
URL: https://github.com/apache/activemq-artemis/pull/4231




Issue Time Tracking
---

Worklog Id: (was: 816695)
Remaining Estimate: 0h
Time Spent: 10m

> PostgresLargeObjectManager does incorrectly unwrap the jdbc connection
> --
>
> Key: ARTEMIS-4013
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4013
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.25.0
>Reporter: Johannes Edmeier
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The {{PostgresLargeObjectManager}} unwraps the connection in an unusual, 
> non-conformant way. It should simply call 
> {{connection.unwrap(PGConnection.class)}} instead of using custom reflection.
> This currently prevents using Artemis alongside Testcontainers, as it 
> results in this exception:
> {noformat}
> java.lang.ClassCastException: class org.testcontainers.jdbc.ConnectionWrapper 
> cannot be cast to class org.postgresql.PGConnection 
> (org.testcontainers.jdbc.ConnectionWrapper and org.postgresql.PGConnection 
> are in unnamed module of loader 'app')
>   at 
> org.apache.activemq.artemis.jdbc.store.file.PostgresLargeObjectManager.createLO(PostgresLargeObjectManager.java:69)
>   at 
> org.apache.activemq.artemis.jdbc.store.file.PostgresSequentialSequentialFileDriver.createFile(PostgresSequentialSequentialFileDriver.java:64)
>   at 
> org.apache.activemq.artemis.jdbc.store.file.JDBCSequentialFileFactoryDriver.openFile(JDBCSequentialFileFactoryDriver.java:109)
>   at 
> org.apache.activemq.artemis.jdbc.store.file.JDBCSequentialFile.load(JDBCSequentialFile.java:110)
>   at 
> org.apache.activemq.artemis.jdbc.store.file.JDBCSequentialFile.open(JDBCSequentialFile.java:104)
>   at 
> org.apache.activemq.artemis.core.paging.impl.PagingStoreFactoryDatabase.reloadStores(PagingStoreFactoryDatabase.java:220)
>   at 
> org.apache.activemq.artemis.core.paging.impl.PagingManagerImpl.reloadStores(PagingManagerImpl.java:326)
>   at 
> org.apache.activemq.artemis.core.paging.impl.PagingManagerImpl.start(PagingManagerImpl.java:430)
>   at 
> org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl.initialisePart1(ActiveMQServerImpl.java:3160)
>   at 
> org.apache.activemq.artemis.core.server.impl.LiveOnlyActivation.run(LiveOnlyActivation.java:68)
>   at 
> org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl.internalStart(ActiveMQServerImpl.java:655)
>   at 
> org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl.start(ActiveMQServerImpl.java:568)
>   at 
> org.apache.activemq.artemis.core.server.embedded.EmbeddedActiveMQ.start(EmbeddedActiveMQ.java:116)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
>   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:568){noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (ARTEMIS-4002) Allow $ARTEMIS_LOGGING_CONF override by respecting pre-existing value in artemis script

2022-10-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4002?focusedWorklogId=816698&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-816698
 ]

ASF GitHub Bot logged work on ARTEMIS-4002:
---

Author: ASF GitHub Bot
Created on: 13/Oct/22 16:35
Start Date: 13/Oct/22 16:35
Worklog Time Spent: 10m 
  Work Description: jbertram commented on PR #4223:
URL: 
https://github.com/apache/activemq-artemis/pull/4223#issuecomment-1277890063

   @gtully, there's conflicts on this now. Given that and @gemmellr's previous 
comment I think that this PR probably just needs to be closed.




Issue Time Tracking
---

Worklog Id: (was: 816698)
Time Spent: 20m  (was: 10m)

> Allow $ARTEMIS_LOGGING_CONF override by respecting pre-existing value in 
> artemis script
> ---
>
> Key: ARTEMIS-4002
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4002
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Configuration
>Affects Versions: 2.25.0
>Reporter: Gary Tully
>Assignee: Gary Tully
>Priority: Trivial
> Fix For: 2.27.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> We set the logging system property using an env var; however, we don't allow 
> that env var to be provided externally because it is currently overwritten.
> Since the environment can be easily modified, it would be great to be able 
> to provide an alternative logging configuration file by setting the 
> $ARTEMIS_LOGGING_CONF variable.
> We just have to set it only when it is empty!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (ARTEMIS-2476) New MQTT subscriptions receive older (not last published) retained message.

2022-10-13 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17617178#comment-17617178
 ] 

ASF subversion and git services commented on ARTEMIS-2476:
--

Commit ea04426bcd2b3dab266abd17c2568524a5e3b6b5 in activemq-artemis's branch 
refs/heads/main from Justin Bertram
[ https://gitbox.apache.org/repos/asf?p=activemq-artemis.git;h=ea04426bcd ]

ARTEMIS-4037 refactor MQTTRetainMessageManagerTest

Commit 5a42de5fa6ee1b96f6f3e404f5a3d11a702e1776 called my attention to
this test. It really needs to be refactored because:

 - It belongs in the integration-tests module rather than the MQTT
   protocol module.
 - It is using a lot of non-standard components (e.g.
   EmbeddedJMSResource, Awaitility, etc.).
 - It is overly complicated (e.g. using its own MqttClientService).

This commit resolves all those problems. The new implementation is quite
a bit different but still equivalent. I reverted the original fix from
ARTEMIS-2476 and the test still fails.


> New MQTT subscriptions receive older (not last published) retained message.
> ---
>
> Key: ARTEMIS-2476
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2476
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: MQTT
>Affects Versions: 2.10.0, 2.10.1
>Reporter: Assen Sharlandjiev
>Priority: Blocker
> Fix For: 2.12.0
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> I have observed that new MQTT subscriptions on a given topic receive older 
> retained messages. Instead of getting the latest retained message published 
> on the topic, the new subscription receives an older message, published 
> before that last one. 
>  
> I have created a [mqtt-test|https://github.com/assens/mqtt-test] project that 
> demonstrates the problem. Check the readme, and run the Artemis broker test:
>  
> mvn -Dtest=ArtemisTest verify
>  
> The test project contains multiple MQTT broker tests. Only the Artemis 
> broker tests fail.
>  
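
A minimal sketch of the expected retained-message behaviour described above, using the Eclipse Paho MQTT v3 client (not the linked mqtt-test project; broker URL, topic and payloads are assumptions). A brand-new subscriber should receive only the most recently retained payload:
{code:java}
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttException;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public final class RetainedMessageSketch {

   public static void main(String[] args) throws MqttException, InterruptedException {
      String broker = "tcp://localhost:1883"; // assumed broker URL
      String topic = "retained/example";      // assumed topic

      // Publish two retained messages on the same topic.
      MqttClient publisher = new MqttClient(broker, "retained-publisher");
      publisher.connect();
      MqttMessage first = new MqttMessage("first".getBytes());
      first.setRetained(true);
      publisher.publish(topic, first);
      MqttMessage second = new MqttMessage("second".getBytes());
      second.setRetained(true);
      publisher.publish(topic, second);
      publisher.disconnect();

      // A brand-new subscription should now receive only the latest retained
      // payload ("second"); receiving "first" is the reported bug.
      MqttClient subscriber = new MqttClient(broker, "late-subscriber");
      subscriber.connect();
      subscriber.subscribe(topic, (t, message) ->
         System.out.println("retained payload: " + new String(message.getPayload())));
      Thread.sleep(2_000); // give the broker time to deliver the retained message
      subscriber.disconnect();
   }
}{code}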



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (ARTEMIS-4037) Use random alphanumeric strings for MQTTRetainMessageManagerTest

2022-10-13 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17617177#comment-17617177
 ] 

ASF subversion and git services commented on ARTEMIS-4037:
--

Commit ea04426bcd2b3dab266abd17c2568524a5e3b6b5 in activemq-artemis's branch 
refs/heads/main from Justin Bertram
[ https://gitbox.apache.org/repos/asf?p=activemq-artemis.git;h=ea04426bcd ]

ARTEMIS-4037 refactor MQTTRetainMessageManagerTest

Commit 5a42de5fa6ee1b96f6f3e404f5a3d11a702e1776 called my attention to
this test. It really needs to be refactored because:

 - It belongs in the integration-tests module rather than the MQTT
   protocol module.
 - It is using a lot of non-standard components (e.g.
   EmbeddedJMSResource, Awaitility, etc.).
 - It is overly complicated (e.g. using its own MqttClientService).

This commit resolves all those problems. The new implementation is quite
a bit different but still equivalent. I reverted the original fix from
ARTEMIS-2476 and the test still fails.


> Use random alphanumeric strings for MQTTRetainMessageManagerTest
> 
>
> Key: ARTEMIS-4037
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4037
> Project: ActiveMQ Artemis
>  Issue Type: Task
>Reporter: Domenico Francesco Bruscino
>Assignee: Domenico Francesco Bruscino
>Priority: Major
> Fix For: 2.27.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (ARTEMIS-4037) Use random alphanumeric strings for MQTTRetainMessageManagerTest

2022-10-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4037?focusedWorklogId=816699&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-816699
 ]

ASF GitHub Bot logged work on ARTEMIS-4037:
---

Author: ASF GitHub Bot
Created on: 13/Oct/22 16:36
Start Date: 13/Oct/22 16:36
Worklog Time Spent: 10m 
  Work Description: asfgit closed pull request #4255: ARTEMIS-4037 refactor 
MQTTRetainMessageManagerTest
URL: https://github.com/apache/activemq-artemis/pull/4255




Issue Time Tracking
---

Worklog Id: (was: 816699)
Time Spent: 0.5h  (was: 20m)

> Use random alphanumeric strings for MQTTRetainMessageManagerTest
> 
>
> Key: ARTEMIS-4037
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4037
> Project: ActiveMQ Artemis
>  Issue Type: Task
>Reporter: Domenico Francesco Bruscino
>Assignee: Domenico Francesco Bruscino
>Priority: Major
> Fix For: 2.27.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (ARTEMIS-4020) switch to using SLF4J for logging API and use Log4j 2 for broker distribution

2022-10-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4020?focusedWorklogId=816701&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-816701
 ]

ASF GitHub Bot logged work on ARTEMIS-4020:
---

Author: ASF GitHub Bot
Created on: 13/Oct/22 16:38
Start Date: 13/Oct/22 16:38
Worklog Time Spent: 10m 
  Work Description: gemmellr commented on PR #4257:
URL: 
https://github.com/apache/activemq-artemis/pull/4257#issuecomment-1277893221

   I just pushed another PR that removed a couple files this changed, sorry :)




Issue Time Tracking
---

Worklog Id: (was: 816701)
Time Spent: 9h 20m  (was: 9h 10m)

> switch to using SLF4J for logging API and use Log4j 2 for broker distribution
> -
>
> Key: ARTEMIS-4020
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4020
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Reporter: Robbie Gemmell
>Assignee: Robbie Gemmell
>Priority: Major
> Fix For: 2.27.0
>
>  Time Spent: 9h 20m
>  Remaining Estimate: 0h
>
> Switch to using [SLF4J|https://www.slf4j.org/] as the logging API for the 
> code base, with end-users supplying and configuring an SLF4J-supporting 
> logging implementation of their choice based on their needs.
> For the client, applications will need to supply an SLF4J binding to a 
> logging implementation of their choice to enable logging. An example of doing 
> so using [Log4J 2|https://logging.apache.org/log4j/2.x/manual/index.html] is 
> given in (/will be, once the release is out) the [client logging 
> documentation|https://activemq.apache.org/components/artemis/documentation/latest/logging.html#logging-in-a-client-application].
> For the broker, the assembly distribution will include [Log4J 
> 2|https://logging.apache.org/log4j/2.x/manual/index.html] as its logging 
> implementation, with the "artemis create" CLI command used to create broker 
> instances now creating a log4j2.properties configuration within the broker 
> instance's etc/ directory to configure Log4J. Details for upgrading an 
> existing broker instance are given in (/will be, once the release is out) the 
> [version upgrade 
> documentation|https://activemq.apache.org/components/artemis/documentation/latest/versions.html].
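
A minimal sketch of what client-side logging against the SLF4J API looks like (illustrative only; the class name, messages and broker URL are assumptions). Output only appears if an SLF4J-supporting implementation, for example Log4j 2 with its SLF4J binding, is on the classpath:
{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public final class ClientLoggingSketch {

   // The code base logs against the SLF4J API only; which implementation
   // (Log4j 2, Logback, ...) handles these statements is decided by whatever
   // binding the application puts on its classpath.
   private static final Logger logger = LoggerFactory.getLogger(ClientLoggingSketch.class);

   public static void main(String[] args) {
      logger.info("connecting to broker {}", "tcp://localhost:61616"); // parameterized message
      logger.debug("emitted only if the bound implementation enables DEBUG for this logger");
   }
}{code}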



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (ARTEMIS-4002) Allow $ARTEMIS_LOGGING_CONF override by respecting pre-existing value in artemis script

2022-10-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4002?focusedWorklogId=816703&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-816703
 ]

ASF GitHub Bot logged work on ARTEMIS-4002:
---

Author: ASF GitHub Bot
Created on: 13/Oct/22 16:39
Start Date: 13/Oct/22 16:39
Worklog Time Spent: 10m 
  Work Description: clebertsuconic commented on PR #4223:
URL: 
https://github.com/apache/activemq-artemis/pull/4223#issuecomment-1277894253

   +1, this should be closed...
   
   @gtully  ?




Issue Time Tracking
---

Worklog Id: (was: 816703)
Time Spent: 0.5h  (was: 20m)

> Allow $ARTEMIS_LOGGING_CONF override by respecting pre-existing value in 
> artemis script
> ---
>
> Key: ARTEMIS-4002
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4002
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Configuration
>Affects Versions: 2.25.0
>Reporter: Gary Tully
>Assignee: Gary Tully
>Priority: Trivial
> Fix For: 2.27.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> We set the logging system property using an env var; however, we don't allow 
> that env var to be provided externally because it is currently overwritten.
> Since the environment can be easily modified, it would be great to be able 
> to provide an alternative logging configuration file by setting the 
> $ARTEMIS_LOGGING_CONF variable.
> We just have to set it only when it is empty!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (ARTEMIS-4035) All consumers of federated queue drop if only one consumer drops

2022-10-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4035?focusedWorklogId=816706&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-816706
 ]

ASF GitHub Bot logged work on ARTEMIS-4035:
---

Author: ASF GitHub Bot
Created on: 13/Oct/22 16:49
Start Date: 13/Oct/22 16:49
Worklog Time Spent: 10m 
  Work Description: jbertram commented on PR #4249:
URL: 
https://github.com/apache/activemq-artemis/pull/4249#issuecomment-1277905051

   Full test-suite looks good. No related failures.




Issue Time Tracking
---

Worklog Id: (was: 816706)
Time Spent: 50m  (was: 40m)

> All consumers of federated queue drop if only one consumer drops
> 
>
> Key: ARTEMIS-4035
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4035
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Justin Bertram
>Assignee: Justin Bertram
>Priority: Major
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Scenario:
> - 2 nodes.
> - 2 federated queues in an upstream configuration.
> - One consumer for each federated queue connected to just one of the brokers.
> - Open the web console of the broker that the consumers are connected to. All 
> the consumers are there.
> - Open the web console of the other broker. The same consumers from before 
> are there (i.e. the federation is working).
> - Drop one consumer from the broker, and then all the consumers from the 
> other node are dropped. Federation no longer works.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (ARTEMIS-4047) Artemis does not send message to consumer AMQP

2022-10-13 Thread Justin Bertram (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Justin Bertram updated ARTEMIS-4047:

Description: 
The broker does not send messages from one of many existing queues to the 
connected consumer.

According to the UI the queue contains ~15k messages.
I’m not able to consume any of these messages. I also tried to read a message 
using the browse function of the UI/console but that does not work either. 
The message was created by an AMQP client and should be consumed by another 
AMQP client.

I tried to capture the situation in a few screenshots… 
I don’t know which data can help you to understand the situation, so I’ve 
collected everything:
 * Logs
 * Broker
 * Data

Please let me know if there are any other data I should add to the ticket. 

I don’t think that the code of my client is relevant since the problem only 
exists for a single queue… but here it is anyway: 
{code:java}
using Amqp;
using Amqp.Framing;
using Amqp.Types;
namespace Test;
public sealed class MessageConsumer
{
    private readonly String _address;
    private readonly CancellationToken _cancellationToken;
    private readonly String _consumerName;
    private readonly String[] _destinations;
    public MessageConsumer( String address, String consumerName, String[] 
destinations, CancellationToken cancellationToken )
    {
        _address = address;
        _consumerName = consumerName;
        _destinations = destinations;
        _cancellationToken = cancellationToken;
    }
    public async Task StartReceivingMessages()
    {
        await Task.Yield();
        while ( !_cancellationToken.IsCancellationRequested )
        {
            var connectionFactory = new ConnectionFactory();
            var address = new Address( _address );
            try
            {
                var connection = await connectionFactory.CreateAsync( address );
                var session = ( (IConnection) connection ).CreateSession();
                var receivers = new List();
                foreach ( var destination in _destinations )
                {
                    var receiver = session.CreateReceiver( 
$"{_consumerName}_{destination}",
                                                           new Source
                                                           {
                                                               Address = 
destination,
                                                               Capabilities = 
new[] { new Symbol( "queue" ) }
                                                           } );
                    receivers.Add( receiver );
                }
                while ( !_cancellationToken.IsCancellationRequested )
                    foreach ( var receiver in receivers )
                    {
                        // ReceiveAsync( TimeSpan.Zero ); blocks forever and no 
messages will be received 
                        var message = await receiver.ReceiveAsync( 
TimeSpan.FromMilliseconds( 1 ) );
                        if ( message == null )
                            continue;
                        receiver.Accept( message );
                        Console.WriteLine( $"{_consumerName} - Received message 
with id: '{message.Properties.MessageId}'" );
                    }
            }
            catch ( Exception ex )
            {
                Console.WriteLine( $"{_consumerName} - Connection error in 
producer '{_consumerName}' {ex.Message} => create new connection." );
                await Task.Delay( 1000, CancellationToken.None );
            }
        }
    }
}{code}

  was:
The broker does not send messages from one of many existing queues to the 
connected consumer.

According to the UI the queue does contain ~15k messages.
I’m not able to consume any of these messages. I also tried to read a message 
using the browse function of the UI/console but that does not work eighter. 
The message was created by a AMQP client and should be consumed by another AMQP 
client.

I tried to capture the situation in a few screenshots… 
I don’t know which data can help you to understand the situation, so I’ve 
collected everything:
 * Logs
 * Broker
 * Data

Please let me know if there are any other data I should add to the ticket.

 

I don’t think that the code of my client is relevant since the problem only 
exist for a single queue…but here it is anyway:

 

 
{code:java}
using Amqp;
using Amqp.Framing;
using Amqp.Types;
namespace Test;
public sealed class MessageConsumer
{
    private readonly String _address;
    private readonly CancellationToken _cancellationToken;
    private readonly String _consumerName;
    private readonly String[] _destinations;
    public MessageConsumer( String address, String consumerName, String[] 
destinations, CancellationToken cancellationToken )
    {
        _address = address;
        _consumerName = consumerName;
        _destinat

[jira] [Work logged] (ARTEMIS-4020) switch to using SLF4J for logging API and use Log4j 2 for broker distribution

2022-10-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4020?focusedWorklogId=816711&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-816711
 ]

ASF GitHub Bot logged work on ARTEMIS-4020:
---

Author: ASF GitHub Bot
Created on: 13/Oct/22 17:03
Start Date: 13/Oct/22 17:03
Worklog Time Spent: 10m 
  Work Description: asfgit merged PR #4257:
URL: https://github.com/apache/activemq-artemis/pull/4257




Issue Time Tracking
---

Worklog Id: (was: 816711)
Time Spent: 9.5h  (was: 9h 20m)

> switch to using SLF4J for logging API and use Log4j 2 for broker distribution
> -
>
> Key: ARTEMIS-4020
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4020
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Reporter: Robbie Gemmell
>Assignee: Robbie Gemmell
>Priority: Major
> Fix For: 2.27.0
>
>  Time Spent: 9.5h
>  Remaining Estimate: 0h
>
> Switch to using [SLF4J|https://www.slf4j.org/] as the logging API for the 
> code base, with end-users supplying and configuring an SLF4J-supporting 
> logging implementation of their choice based on their needs.
> For the client, applications will need to supply an SLF4J binding to a 
> logging implementation of their choice to enable logging. An example of doing 
> so using [Log4J 2|https://logging.apache.org/log4j/2.x/manual/index.html] is 
> given in (/will be, once the release is out) the [client logging 
> documentation|https://activemq.apache.org/components/artemis/documentation/latest/logging.html#logging-in-a-client-application].
> For the broker, the assembly distribution will include [Log4J 
> 2|https://logging.apache.org/log4j/2.x/manual/index.html] as its logging 
> implementation, with the "artemis create" CLI command used to create broker 
> instances now creating a log4j2.properties configuration within the broker 
> instance's etc/ directory to configure Log4J. Details for upgrading an 
> existing broker instance are given in (/will be, once the release is out) the 
> [version upgrade 
> documentation|https://activemq.apache.org/components/artemis/documentation/latest/versions.html].



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (ARTEMIS-4020) switch to using SLF4J for logging API and use Log4j 2 for broker distribution

2022-10-13 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17617197#comment-17617197
 ] 

ASF subversion and git services commented on ARTEMIS-4020:
--

Commit b900a1e4bd6965bc2b6842242249d3a4dcc9c825 in activemq-artemis's branch 
refs/heads/main from Timothy Bish
[ https://gitbox.apache.org/repos/asf?p=activemq-artemis.git;h=b900a1e4bd ]

ARTEMIS-4020 Standardize the naming of Logger types for consistency

Attempt to standardize all Logger declaration to a singular variable name
which makes the code more consistent and make finding usages of loggers in
the code a bit easier.


> switch to using SLF4J for logging API and use Log4j 2 for broker distribution
> -
>
> Key: ARTEMIS-4020
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4020
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Reporter: Robbie Gemmell
>Assignee: Robbie Gemmell
>Priority: Major
> Fix For: 2.27.0
>
>  Time Spent: 9h 20m
>  Remaining Estimate: 0h
>
> Switch to using [SLF4J|https://www.slf4j.org/] as the logging API for the 
> code base, with end-users supplying and configuring an SLF4J-supporting 
> logging implementation of their choice based on their needs.
> For the client, applications will need to supply an SLF4J binding to a 
> logging implementation of their choice to enable logging. An example of doing 
> so using [Log4J 2|https://logging.apache.org/log4j/2.x/manual/index.html] is 
> given in (/will be, once the release is out) the [client logging 
> documentation|https://activemq.apache.org/components/artemis/documentation/latest/logging.html#logging-in-a-client-application].
> For the broker, the assembly distribution will include [Log4J 
> 2|https://logging.apache.org/log4j/2.x/manual/index.html] as its logging 
> implementation, with the "artemis create" CLI command used to create broker 
> instances now creating a log4j2.properties configuration within the broker 
> instance's etc/ directory to configure Log4J. Details for upgrading an 
> existing broker instance are given in (/will be, once the release is out) the 
> [version upgrade 
> documentation|https://activemq.apache.org/components/artemis/documentation/latest/versions.html].



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (ARTEMIS-4025) properties config - provide error status for invalid properties

2022-10-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4025?focusedWorklogId=816719&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-816719
 ]

ASF GitHub Bot logged work on ARTEMIS-4025:
---

Author: ASF GitHub Bot
Created on: 13/Oct/22 17:23
Start Date: 13/Oct/22 17:23
Worklog Time Spent: 10m 
  Work Description: gtully merged PR #4241:
URL: https://github.com/apache/activemq-artemis/pull/4241




Issue Time Tracking
---

Worklog Id: (was: 816719)
Time Spent: 1h 50m  (was: 1h 40m)

> properties config -  provide error status for invalid properties
> 
>
> Key: ARTEMIS-4025
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4025
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Configuration
>Affects Versions: 2.26.0
>Reporter: Gary Tully
>Assignee: Gary Tully
>Priority: Major
> Fix For: 2.27.0
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Following up on the availability of a [status json|ARTEMIS-4007] - trap any 
> errors when a bean util property fails to apply, for any invalid property. 
> Currently all failures to find setters are silently ignored.
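
A minimal, illustrative sketch (not the broker's implementation) of detecting a property name with no matching setter via standard bean introspection, so it can be reported instead of silently ignored:
{code:java}
import java.beans.BeanInfo;
import java.beans.IntrospectionException;
import java.beans.Introspector;
import java.beans.PropertyDescriptor;
import java.util.ArrayList;
import java.util.List;

public final class PropertyErrorTrapSketch {

   // Return an error entry for every property name that has no writable setter
   // on the target bean, instead of silently skipping it.
   public static List<String> unknownProperties(Object bean, List<String> propertyNames)
         throws IntrospectionException {
      BeanInfo info = Introspector.getBeanInfo(bean.getClass());
      List<String> errors = new ArrayList<>();
      for (String name : propertyNames) {
         boolean writable = false;
         for (PropertyDescriptor descriptor : info.getPropertyDescriptors()) {
            if (descriptor.getName().equals(name) && descriptor.getWriteMethod() != null) {
               writable = true;
               break;
            }
         }
         if (!writable) {
            errors.add("no setter found for property '" + name + "'"); // would feed a status/errors field
         }
      }
      return errors;
   }
}{code}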



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (ARTEMIS-4028) properties config - provide checksum read status for properties

2022-10-13 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17617209#comment-17617209
 ] 

ASF subversion and git services commented on ARTEMIS-4028:
--

Commit 3b981b3920dc2214e943d9de81bd991fb7182eb6 in activemq-artemis's branch 
refs/heads/main from Gary Tully
[ https://gitbox.apache.org/repos/asf?p=activemq-artemis.git;h=3b981b3920 ]

ARTEMIS-4028 - add alder32 checksum to the status of properties files read by 
the broker


> properties config - provide checksum read status for properties
> ---
>
> Key: ARTEMIS-4028
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4028
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Configuration
>Affects Versions: 2.26.0
>Reporter: Gary Tully
>Assignee: Gary Tully
>Priority: Major
> Fix For: 2.27.0
>
>
> When properties are first applied or updated, seeing the checksum of the 
> applied content allows verification that the application has completed.
> On reload/reapply, where there is a period before the broker detects the need 
> to update, the status can provide a mechanism to verify when changes have 
> been applied.
> To this end, I will add an alder32 checksum field to the properties status 
> and apply the same pattern to the re-loadable properties from the jaas login 
> modules.
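
The "alder32" checksum mentioned above appears to be the Adler-32 algorithm, which the JDK provides as {{java.util.zip.Adler32}}. A minimal sketch of computing the same kind of checksum over a properties file on the operator side for comparison (illustrative only; the file name and the exact bytes the broker checksums are assumptions):
{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.zip.Adler32;

public final class PropertiesChecksumSketch {

   // Compute the Adler-32 checksum of a file's raw bytes so it can be compared
   // against the checksum the broker reports in its status.
   public static long adler32Of(Path propertiesFile) throws IOException {
      Adler32 checksum = new Adler32();
      checksum.update(Files.readAllBytes(propertiesFile));
      return checksum.getValue(); // unsigned 32-bit value returned as a long
   }

   public static void main(String[] args) throws IOException {
      System.out.printf("%08x%n", adler32Of(Path.of("broker.properties"))); // assumed file name
   }
}{code}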



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (ARTEMIS-4025) properties config - provide error status for invalid properties

2022-10-13 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17617208#comment-17617208
 ] 

ASF subversion and git services commented on ARTEMIS-4025:
--

Commit 03ef286cc8d3124f339515cbb644cd4c0156237d in activemq-artemis's branch 
refs/heads/main from Gary Tully
[ https://gitbox.apache.org/repos/asf?p=activemq-artemis.git;h=03ef286cc8 ]

ARTEMIS-4025 - trap failure to apply properties as errors and report via the 
broker status in a configuration/properties/errors status field


> properties config -  provide error status for invalid properties
> 
>
> Key: ARTEMIS-4025
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4025
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Configuration
>Affects Versions: 2.26.0
>Reporter: Gary Tully
>Assignee: Gary Tully
>Priority: Major
> Fix For: 2.27.0
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Following up on the availability of a [status json|ARTEMIS-4007] - trap any 
> errors when a bean util property fails to apply, for any invalid property. 
> Currently all failures to find setters are silently ignored.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (ARTEMIS-4047) Artemis does not send message to consumer AMQP

2022-10-13 Thread Justin Bertram (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17617254#comment-17617254
 ] 

Justin Bertram commented on ARTEMIS-4047:
-

I think this is related to a bug involving page counters (i.e. an internal 
house-keeping stat related to paged messages). When I performed a {{data 
print}} on the data you uploaded I see that there are actually *0* messages in 
the {{TransactionInformationMessage}} queue. However, there is this record:
{noformat}
recordID=18826397;userRecordType=41;isUpdate=false;compactCount=7;PageCountRecordInc
 [queueID=2922, value=15350, persistentSize=7867226]{noformat}
I believe this is why the console is reporting that 
{{TransactionInformationMessage}} has 15,350 messages.

[~clebertsuconic], does this sound right to you?

> Artemis does not send message to consumer AMQP
> --
>
> Key: ARTEMIS-4047
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4047
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP, Broker
>Affects Versions: 2.25.0, 2.26.0
>Reporter: daves
>Priority: Major
> Attachments: 1.PNG, 2.PNG, 3.PNG, 4.PNG, 5.PNG, All.zip
>
>
> The broker does not send messages from one of many existing queues to the 
> connected consumer.
> According to the UI the queue contains ~15k messages.
> I’m not able to consume any of these messages. I also tried to read a message 
> using the browse function of the UI/console but that does not work either. 
> The message was created by an AMQP client and should be consumed by another 
> AMQP client.
> I tried to capture the situation in a few screenshots… 
> I don’t know which data can help you to understand the situation, so I’ve 
> collected everything:
>  * Logs
>  * Broker
>  * Data
> Please let me know if there are any other data I should add to the ticket. 
> I don’t think that the code of my client is relevant since the problem only 
> exists for a single queue… but here it is anyway: 
> {code:java}
> using Amqp;
> using Amqp.Framing;
> using Amqp.Types;
> namespace Test;
> public sealed class MessageConsumer
> {
>     private readonly String _address;
>     private readonly CancellationToken _cancellationToken;
>     private readonly String _consumerName;
>     private readonly String[] _destinations;
>     public MessageConsumer( String address, String consumerName, String[] 
> destinations, CancellationToken cancellationToken )
>     {
>         _address = address;
>         _consumerName = consumerName;
>         _destinations = destinations;
>         _cancellationToken = cancellationToken;
>     }
>     public async Task StartReceivingMessages()
>     {
>         await Task.Yield();
>         while ( !_cancellationToken.IsCancellationRequested )
>         {
>             var connectionFactory = new ConnectionFactory();
>             var address = new Address( _address );
>             try
>             {
>                 var connection = await connectionFactory.CreateAsync( address 
> );
>                 var session = ( (IConnection) connection ).CreateSession();
>                 var receivers = new List();
>                 foreach ( var destination in _destinations )
>                 {
>                     var receiver = session.CreateReceiver( 
> $"{_consumerName}_{destination}",
>                                                            new Source
>                                                            {
>                                                                Address = 
> destination,
>                                                                Capabilities = 
> new[] { new Symbol( "queue" ) }
>                                                            } );
>                     receivers.Add( receiver );
>                 }
>                 while ( !_cancellationToken.IsCancellationRequested )
>                     foreach ( var receiver in receivers )
>                     {
>                         // ReceiveAsync( TimeSpan.Zero ); blocks forever and 
> no messages will be received 
>                         var message = await receiver.ReceiveAsync( 
> TimeSpan.FromMilliseconds( 1 ) );
>                         if ( message == null )
>                             continue;
>                         receiver.Accept( message );
>                         Console.WriteLine( $"{_consumerName} - Received 
> message with id: '{message.Properties.MessageId}'" );
>                     }
>             }
>             catch ( Exception ex )
>             {
>                 Console.WriteLine( $"{_consumerName} - Connection error in 
> producer '{_consumerName}' {ex.Message} => create new connection." );
>                 await Task.Delay( 1000, CancellationToken.None );
>             }
>         }
>     }
> }{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

[jira] [Commented] (ARTEMIS-4046) mqtt $share topic can not work

2022-10-13 Thread gongping.zhu (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17617433#comment-17617433
 ] 

gongping.zhu commented on ARTEMIS-4046:
---

When I upgraded to 2.26.0 and used the MQTT.x [v1.8.3] client tool to connect 
to the server and subscribe to the topic $share/hello/hello/#,

it did not work correctly.

After I changed back to 2.25.0 it worked correctly.

> mqtt $share topic can not work
> --
>
> Key: ARTEMIS-4046
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4046
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.26.0
>Reporter: gongping.zhu
>Priority: Major
>
> When I use version 2.25.0, I can correctly use the share topic mechanism. 
> When I upgrade to 2.26.0, the share topic mechanism does not work.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (ARTEMIS-4046) mqtt $share topic can not work

2022-10-13 Thread Justin Bertram (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17617448#comment-17617448
 ] 

Justin Bertram commented on ARTEMIS-4046:
-

As far as I can tell the only change between 2.25.0 and 2.26.0 relevant for 
MQTT is ARTEMIS-3913 (which was for you) and it's not clear how that would 
impact your use-case. Can you provide a test-case or at least more details 
about what exactly you're doing and what exactly isn't working? I just ran a 
quick test with 2 consumers subscribing to {{$share/hello/hello/#}}. I sent 2 
messages to {{hello/foo}} and both consumers were able to receive a message 
indicating that the shared subscription was working fine.

Without more detail I'm afraid I can't really investigate any further, and I 
will be forced to close this Jira.
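
A minimal sketch of the quick test described above, using the Eclipse Paho MQTT v3 client (broker URL, client ids and payloads are assumptions): two consumers share the subscription {{$share/hello/hello/#}}, and two publishes to {{hello/foo}} should be split between them rather than delivered to both:
{code:java}
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttException;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public final class SharedSubscriptionSketch {

   public static void main(String[] args) throws MqttException, InterruptedException {
      String broker = "tcp://localhost:1883"; // assumed broker URL

      // Two consumers in the same share group "hello" on the filter hello/#.
      MqttClient consumer1 = new MqttClient(broker, "shared-consumer-1");
      MqttClient consumer2 = new MqttClient(broker, "shared-consumer-2");
      consumer1.connect();
      consumer2.connect();
      consumer1.subscribe("$share/hello/hello/#", (topic, message) ->
         System.out.println("consumer-1 got: " + new String(message.getPayload())));
      consumer2.subscribe("$share/hello/hello/#", (topic, message) ->
         System.out.println("consumer-2 got: " + new String(message.getPayload())));

      // Two publishes to hello/foo should be split across the share group,
      // i.e. each consumer receives one message.
      MqttClient producer = new MqttClient(broker, "shared-producer");
      producer.connect();
      producer.publish("hello/foo", new MqttMessage("one".getBytes()));
      producer.publish("hello/foo", new MqttMessage("two".getBytes()));
      producer.disconnect();

      Thread.sleep(2_000); // give the broker time to dispatch
      consumer1.disconnect();
      consumer2.disconnect();
   }
}{code}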

> mqtt $share topic can not work
> --
>
> Key: ARTEMIS-4046
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4046
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.26.0
>Reporter: gongping.zhu
>Priority: Major
>
> When I use version 2.25.0, I can correctly use the share topic mechanism. 
> When I upgrade to 2.26.0, the share topic mechanism does not work.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (AMQ-9107) Closing many consumers causes CPU to spike to 100%

2022-10-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/AMQ-9107?focusedWorklogId=816865&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-816865
 ]

ASF GitHub Bot logged work on AMQ-9107:
---

Author: ASF GitHub Bot
Created on: 14/Oct/22 05:14
Start Date: 14/Oct/22 05:14
Worklog Time Spent: 10m 
  Work Description: lucastetreault commented on PR #908:
URL: https://github.com/apache/activemq/pull/908#issuecomment-1278493980

   Hey Matt, as discussed on slack. I'll have a look and add some tests in the 
next few days. Just wanted to add a comment here in case others were following 
along :) 




Issue Time Tracking
---

Worklog Id: (was: 816865)
Time Spent: 1h 10m  (was: 1h)

> Closing many consumers causes CPU to spike to 100%
> --
>
> Key: AMQ-9107
> URL: https://issues.apache.org/jira/browse/AMQ-9107
> Project: ActiveMQ
>  Issue Type: Bug
>Affects Versions: 5.17.1, 5.16.5
>Reporter: Lucas Tétreault
>Assignee: Jean-Baptiste Onofré
>Priority: Major
> Fix For: 5.18.0, 5.16.6, 5.17.3
>
> Attachments: example.zip, image-2022-10-07-00-12-39-657.png, 
> image-2022-10-07-00-17-30-657.png
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> When there are many consumers (~188k) on a queue, closing them is incredibly 
> expensive and causes the CPU to spike to 100% while the consumers are closed. 
> Tested on an Amazon MQ mq.m5.large instance (2 vcpu, 8gb memory).
> I have attached a minimal recreation of the issue where the following 
> happens: 
> 1/ Open 100 connections.
> 2/ Create consumers as fast as we can on all of those connections until we 
> hit at least 188k consumers.
> 3/ Sleep for 5 minutes so we can observe the CPU come back down after opening 
> all those connections.
> 4/ Start closing consumers as fast as we can.
> 5/ After all consumers are closed, sleep for 5 minutes to observe the CPU 
> come back down after closing all the connections.
>  
> In this example it seems 5 minutes wasn't actually sufficient time for the 
> CPU to come back down and the consumer and connection counts seem to hit 0 at 
> the same time: 
> !image-2022-10-07-00-12-39-657.png|width=757,height=353!
>  
> In a previous test with more time sleeping after closing all the consumers we 
> can see the CPU come back down before we close the connections. 
> !image-2022-10-07-00-17-30-657.png|width=764,height=348!
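
A condensed, hedged sketch of the recreation steps above using the ActiveMQ 5.x JMS client (the attached example.zip is the authoritative recreation; the broker URL, queue name and scaled-down counts here are assumptions):
{code:java}
import java.util.ArrayList;
import java.util.List;

import javax.jms.Connection;
import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;

import org.apache.activemq.ActiveMQConnectionFactory;

public final class ConsumerChurnSketch {

   public static void main(String[] args) throws JMSException {
      ActiveMQConnectionFactory factory =
            new ActiveMQConnectionFactory("tcp://localhost:61616"); // assumed broker URL
      List<Connection> connections = new ArrayList<>();
      List<MessageConsumer> consumers = new ArrayList<>();

      for (int c = 0; c < 10; c++) {                 // scaled down from 100 connections
         Connection connection = factory.createConnection();
         connection.start();
         connections.add(connection);
         Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
         Queue queue = session.createQueue("example.queue");
         for (int i = 0; i < 1_000; i++) {           // scaled down from ~188k consumers
            consumers.add(session.createConsumer(queue));
         }
      }

      long start = System.nanoTime();
      for (MessageConsumer consumer : consumers) {   // step 4: close consumers as fast as possible
         consumer.close();
      }
      System.out.printf("closed %d consumers in %d ms%n",
            consumers.size(), (System.nanoTime() - start) / 1_000_000);

      for (Connection connection : connections) {    // step 5: then close the connections
         connection.close();
      }
   }
}{code}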



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (AMQ-9107) Closing many consumers causes CPU to spike to 100%

2022-10-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/AMQ-9107?focusedWorklogId=816868&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-816868
 ]

ASF GitHub Bot logged work on AMQ-9107:
---

Author: ASF GitHub Bot
Created on: 14/Oct/22 05:24
Start Date: 14/Oct/22 05:24
Worklog Time Spent: 10m 
  Work Description: jbonofre commented on PR #908:
URL: https://github.com/apache/activemq/pull/908#issuecomment-1278500436

   @lucastetreault thanks ! Much appreciated. You can create a new PR, I will 
do a review. Thanks again !




Issue Time Tracking
---

Worklog Id: (was: 816868)
Time Spent: 1h 20m  (was: 1h 10m)

> Closing many consumers causes CPU to spike to 100%
> --
>
> Key: AMQ-9107
> URL: https://issues.apache.org/jira/browse/AMQ-9107
> Project: ActiveMQ
>  Issue Type: Bug
>Affects Versions: 5.17.1, 5.16.5
>Reporter: Lucas Tétreault
>Assignee: Jean-Baptiste Onofré
>Priority: Major
> Fix For: 5.18.0, 5.16.6, 5.17.3
>
> Attachments: example.zip, image-2022-10-07-00-12-39-657.png, 
> image-2022-10-07-00-17-30-657.png
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> When there are many consumers (~188k) on a queue, closing them is incredibly 
> expensive and causes the CPU to spike to 100% while the consumers are closed. 
> Tested on an Amazon MQ mq.m5.large instance (2 vcpu, 8gb memory).
> I have attached a minimal recreation of the issue where the following 
> happens: 
> 1/ Open 100 connections.
> 2/ Create consumers as fast as we can on all of those connections until we 
> hit at least 188k consumers.
> 3/ Sleep for 5 minutes so we can observe the CPU come back down after opening 
> all those connections.
> 4/ Start closing consumers as fast as we can.
> 5/ After all consumers are closed, sleep for 5 minutes to observe the CPU 
> come back down after closing all the connections.
>  
> In this example it seems 5 minutes wasn't actually sufficient time for the 
> CPU to come back down and the consumer and connection counts seem to hit 0 at 
> the same time: 
> !image-2022-10-07-00-12-39-657.png|width=757,height=353!
>  
> In a previous test with more time sleeping after closing all the consumers we 
> can see the CPU come back down before we close the connections. 
> !image-2022-10-07-00-17-30-657.png|width=764,height=348!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)