[jira] [Commented] (ARTEMIS-4240) Consumer stuck handling Large Message

2023-04-17 Thread Apache Dev (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17713281#comment-17713281
 ] 

Apache Dev commented on ARTEMIS-4240:
-

Thanks Justin!
{quote}However, we also detect invalid concurrent {{Session}} usage and log a 
WARN message. Despite the fact the spec says the {{Session}} is not 
thread-safe, it doesn't require this kind of illegal access detection. I think 
that's similar to what you're describing here. Correct me if I'm wrong.
{quote}
Yes, I think it would be useful to log a WARN in this situation, where 
troubleshooting is not straightforward because no explicit error occurs and the 
30-second timeout may go unnoticed.
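
For illustration only, a minimal application-level sketch of the kind of detection 
being suggested (the class and method names here are hypothetical and not part of 
the Artemis client; a real implementation would live inside the client itself):
{code:java}
import java.util.Collections;
import java.util.Map;
import java.util.WeakHashMap;
import javax.jms.Message;

// Hypothetical guard: remembers which thread a message was delivered on and
// logs a WARN if a different thread later reads it.
public final class DeliveryThreadGuard {

   private static final Map<Message, Thread> DELIVERY_THREADS =
         Collections.synchronizedMap(new WeakHashMap<>());

   // Call from MessageListener.onMessage() to record the consumer thread.
   public static void recordDelivery(Message message) {
      DELIVERY_THREADS.put(message, Thread.currentThread());
   }

   // Call from application code right before it reads the message body.
   public static void warnIfForeignThread(Message message) {
      Thread deliveryThread = DELIVERY_THREADS.get(message);
      if (deliveryThread != null && deliveryThread != Thread.currentThread()) {
         System.err.println("WARN: message delivered on " + deliveryThread.getName()
               + " is being read from " + Thread.currentThread().getName());
      }
   }
}
{code}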

 

> Consumer stuck handling Large Message
> -
>
> Key: ARTEMIS-4240
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4240
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker, JMS
>Affects Versions: 2.19.1, 2.28.0
>Reporter: Apache Dev
>Assignee: Justin Bertram
>Priority: Critical
>
> In this scenario:
>  * "core" protocol
>  * JMS consumer APIs
>  * non-persistent messaging
>  * client connection configured with {{minLargeMessageSize=2147483647}} in 
> order to disable Large Messages
> a consumer correctly receives all messages having a small size.
> But when a message > 1 MiB is received, the consumer thread is stuck for 30 
> seconds with this stack:
> {noformat}
> 3XMTHREADINFO  "Thread-3 (ActiveMQ-client-global-threads)" 
> J9VMThread:0x006D9300, omrthread_t:0x7FB7A002C248, 
> java/lang/Thread:0xE0BD22F8, state:P, prio=5
> 3XMJAVALTHREAD(java/lang/Thread getId:0x37, isDaemon:true)
> 3XMJAVALTHRCCLsun/misc/Launcher$AppClassLoader(0xE00298C0)
> 3XMTHREADINFO1(native thread ID:0xDD80, native priority:0x5, 
> native policy:UNKNOWN, vmstate:P, vm thread flags:0x000a0081)
> 3XMTHREADINFO2(native stack address range 
> from:0x7FB85EE4F000, to:0x7FB85EE8F000, size:0x4)
> 3XMCPUTIME   CPU usage total: 0.025981590 secs, current 
> category="Application"
> 3XMTHREADBLOCK Parked on: 
> java/util/concurrent/locks/AbstractQueuedSynchronizer$ConditionObject@0xE0DA3B80
>  Owned by: 
> 3XMHEAPALLOC Heap bytes allocated since last GC cycle=0 (0x0)
> 3XMTHREADINFO3   Java callstack:
> 4XESTACKTRACEat sun/misc/Unsafe.park(Native Method)
> 4XESTACKTRACEat 
> java/util/concurrent/locks/LockSupport.parkNanos(LockSupport.java:226)
> 4XESTACKTRACEat 
> java/util/concurrent/locks/AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2089)
> 4XESTACKTRACEat 
> java/util/concurrent/LinkedBlockingQueue.poll(LinkedBlockingQueue.java:478)
> 4XESTACKTRACEat 
> org/apache/activemq/artemis/core/client/impl/LargeMessageControllerImpl.popPacket(LargeMessageControllerImpl.java:1123)
> 4XESTACKTRACEat 
> org/apache/activemq/artemis/core/client/impl/LargeMessageControllerImpl.checkForPacket(LargeMessageControllerImpl.java:1167)
> 4XESTACKTRACEat 
> org/apache/activemq/artemis/core/client/impl/LargeMessageControllerImpl.discardUnusedPackets(LargeMessageControllerImpl.java:135)
> 4XESTACKTRACEat 
> org/apache/activemq/artemis/core/client/impl/ClientLargeMessageImpl.discardBody(ClientLargeMessageImpl.java:142)
> 4XESTACKTRACEat 
> org/apache/activemq/artemis/core/client/impl/ClientConsumerImpl.callOnMessage(ClientConsumerImpl.java:1031)
> 4XESTACKTRACEat 
> org/apache/activemq/artemis/core/client/impl/ClientConsumerImpl.access$400(ClientConsumerImpl.java:49)
> 4XESTACKTRACEat 
> org/apache/activemq/artemis/core/client/impl/ClientConsumerImpl$Runner.run(ClientConsumerImpl.java:1129)
> 4XESTACKTRACEat 
> org/apache/activemq/artemis/utils/actors/OrderedExecutor.doTask(OrderedExecutor.java:42)
> 4XESTACKTRACEat 
> org/apache/activemq/artemis/utils/actors/OrderedExecutor.doTask(OrderedExecutor.java:31)
> 4XESTACKTRACEat 
> org/apache/activemq/artemis/utils/actors/ProcessorBase.executePendingTasks(ProcessorBase.java:65)
> 4XESTACKTRACEat 
> org/apache/activemq/artemis/utils/actors/ProcessorBase$$Lambda$6/0xe00d1330.run(Bytecode
>  PC:4)
> 4XESTACKTRACEat 
> java/util/concurrent/ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1160(Compiled
>  Code))
> 4XESTACKTRACEat 
> java/util/concurrent/ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
> 4XESTACKTRACEat 
> org/apache/activemq/artemis/utils/ActiveMQThreadFactory$1.run(ActiveMQThreadFactory.java:118)
> 3XMTHREADINFO3 

[jira] [Work logged] (ARTEMIS-4241) Paging + FQQN is broken

2023-04-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4241?focusedWorklogId=857462&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-857462
 ]

ASF GitHub Bot logged work on ARTEMIS-4241:
---

Author: ASF GitHub Bot
Created on: 17/Apr/23 19:51
Start Date: 17/Apr/23 19:51
Worklog Time Spent: 10m 
  Work Description: brusdev merged PR #4436:
URL: https://github.com/apache/activemq-artemis/pull/4436




Issue Time Tracking
---

Worklog Id: (was: 857462)
Remaining Estimate: 0h
Time Spent: 10m

> Paging + FQQN is broken
> ---
>
> Key: ARTEMIS-4241
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4241
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Justin Bertram
>Assignee: Justin Bertram
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When I produce messages using the CLI, e.g.:
> {noformat}
> ./bin/artemis producer --destination queue://pagingQueue --message-count 
> 1000 --text-size 9000{noformat}
> I can see that the corresponding address starts paging:
> {noformat}
> 2022-09-27 09:32:08,633 WARN  [org.apache.activemq.artemis.core.server] 
> AMQ222038: Starting paging on address 'pagingQueue'; size is currently: 
> 114,458 bytes; max-size-bytes: 100,000; global-size-bytes: 114,458{noformat}
> And I can see it reflected properly in the {{data}} folder.
> However, when I produce messages with the CLI using the FQQN syntax, e.g.:
> {noformat}
> $ ./bin/artemis producer --destination pagingQueue::pagingQueue 
> --message-count 1000 --text-size 9000{noformat}
> It seems that the address starts paging due to the following log:
> {noformat}
> 2022-09-27 09:32:37,740 WARN  [org.apache.activemq.artemis.core.server] 
> AMQ222038: Starting paging on address 'pagingQueue::pagingQueue'; size is 
> currently: 114,606 bytes; max-size-bytes: 100,000; global-size-bytes: 
> 114,926{noformat}
> But there's no change in the {{data/paging}} folder.
> I can see that the address starts paging when the global size increases after 
> ingesting some more data, as shown by the {{WARN}} below.
> {noformat}
> 2022-09-27 09:35:16,340 WARN  [org.apache.activemq.artemis.core.server] 
> AMQ222038: Starting paging on address 'pagingQueue'; size is currently: 
> 100,032 bytes; max-size-bytes: 100,000; global-size-bytes: 
> 29,954,895{noformat}





[jira] [Commented] (ARTEMIS-4241) Paging + FQQN is broken

2023-04-17 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17713268#comment-17713268
 ] 

ASF subversion and git services commented on ARTEMIS-4241:
--

Commit 673481369f2a197098b728ef4c55ea16d3faa070 in activemq-artemis's branch 
refs/heads/main from Justin Bertram
[ https://gitbox.apache.org/repos/asf?p=activemq-artemis.git;h=673481369f ]

ARTEMIS-4241 paging + FQQN is broken


> Paging + FQQN is broken
> ---
>
> Key: ARTEMIS-4241
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4241
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Justin Bertram
>Assignee: Justin Bertram
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When I produce messages using the CLI, e.g.:
> {noformat}
> ./bin/artemis producer --destination queue://pagingQueue --message-count 
> 1000 --text-size 9000{noformat}
> I can see that the corresponding address starts paging:
> {noformat}
> 2022-09-27 09:32:08,633 WARN  [org.apache.activemq.artemis.core.server] 
> AMQ222038: Starting paging on address 'pagingQueue'; size is currently: 
> 114,458 bytes; max-size-bytes: 100,000; global-size-bytes: 114,458{noformat}
> And I can see it reflected properly in the {{data}} folder.
> However, when I produce messages with the CLI using the FQQN syntax, e.g.:
> {noformat}
> $ ./bin/artemis producer --destination pagingQueue::pagingQueue 
> --message-count 1000 --text-size 9000{noformat}
> It seems that the address starts paging due to the following log:
> {noformat}
> 2022-09-27 09:32:37,740 WARN  [org.apache.activemq.artemis.core.server] 
> AMQ222038: Starting paging on address 'pagingQueue::pagingQueue'; size is 
> currently: 114,606 bytes; max-size-bytes: 100,000; global-size-bytes: 
> 114,926{noformat}
> But there's no change in the {{data/paging}} folder.
> I can see that the address starts paging when the global size increases after 
> ingesting some more data, as shown by the {{WARN}} below.
> {noformat}
> 2022-09-27 09:35:16,340 WARN  [org.apache.activemq.artemis.core.server] 
> AMQ222038: Starting paging on address 'pagingQueue'; size is currently: 
> 100,032 bytes; max-size-bytes: 100,000; global-size-bytes: 
> 29,954,895{noformat}





[jira] [Resolved] (ARTEMIS-4241) Paging + FQQN is broken

2023-04-17 Thread Domenico Francesco Bruscino (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Domenico Francesco Bruscino resolved ARTEMIS-4241.
--
Fix Version/s: 2.29.0
   Resolution: Fixed

> Paging + FQQN is broken
> ---
>
> Key: ARTEMIS-4241
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4241
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Justin Bertram
>Assignee: Justin Bertram
>Priority: Major
> Fix For: 2.29.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When I produce messages using the CLI, e.g.:
> {noformat}
> ./bin/artemis producer --destination queue://pagingQueue --message-count 
> 1000 --text-size 9000{noformat}
> I can see that the corresponding address starts paging:
> {noformat}
> 2022-09-27 09:32:08,633 WARN  [org.apache.activemq.artemis.core.server] 
> AMQ222038: Starting paging on address 'pagingQueue'; size is currently: 
> 114,458 bytes; max-size-bytes: 100,000; global-size-bytes: 114,458{noformat}
> And I can see it reflected properly in the {{data}} folder.
> However, when I produce messages with the CLI using the FQQN syntax, e.g.:
> {noformat}
> $ ./bin/artemis producer --destination pagingQueue::pagingQueue 
> --message-count 1000 --text-size 9000{noformat}
> It seems that the address starts paging due to the following log:
> {noformat}
> 2022-09-27 09:32:37,740 WARN  [org.apache.activemq.artemis.core.server] 
> AMQ222038: Starting paging on address 'pagingQueue::pagingQueue'; size is 
> currently: 114,606 bytes; max-size-bytes: 100,000; global-size-bytes: 
> 114,926{noformat}
> But there's no change in the {{data/paging}} folder.
> I can see that the address starts paging when the global size increases after 
> ingesting some more data, as shown by the {{WARN}} below.
> {noformat}
> 2022-09-27 09:35:16,340 WARN  [org.apache.activemq.artemis.core.server] 
> AMQ222038: Starting paging on address 'pagingQueue'; size is currently: 
> 100,032 bytes; max-size-bytes: 100,000; global-size-bytes: 
> 29,954,895{noformat}





[jira] [Comment Edited] (ARTEMIS-4240) Consumer stuck handling Large Message

2023-04-17 Thread Justin Bertram (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17713253#comment-17713253
 ] 

Justin Bertram edited comment on ARTEMIS-4240 at 4/17/23 6:54 PM:
--

bq. ...what happens is that the consumer thread passes the "response" message 
directly to the thread making the "request", which is blocked waiting for it.

Typically the thread sending the request is the one consuming the response so 
there's no need for any multi-threaded handling of the message.
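
For context, a minimal sketch of that conventional single-threaded request-reply 
shape, using a temporary queue; class and method names are illustrative only, and 
it assumes the {{Connection}} is already started:
{code:java}
import javax.jms.*;

public class RequestReplyExample {

   // One thread performs the whole exchange, so no cross-thread hand-off of
   // the reply message is needed.
   public static String requestReply(Session session, Queue requestQueue, String body) throws JMSException {
      TemporaryQueue replyQueue = session.createTemporaryQueue();
      MessageProducer producer = session.createProducer(requestQueue);
      MessageConsumer replyConsumer = session.createConsumer(replyQueue);
      try {
         TextMessage request = session.createTextMessage(body);
         request.setJMSReplyTo(replyQueue);
         producer.send(request);

         // The same thread that sent the request blocks here for the reply.
         TextMessage reply = (TextMessage) replyConsumer.receive(30_000);
         return reply == null ? null : reply.getText();
      } finally {
         replyConsumer.close();
         producer.close();
         replyQueue.delete();
      }
   }
}
{code}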

bq. My doubt however is about why Broker sends the messages as Large, even if 
client does not use such APIs and tries to prevent their usage with 
minLargeMessageSize=2147483647.

I explained this 
[previously|https://issues.apache.org/jira/browse/ARTEMIS-4240?focusedCommentId=17712015&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17712015].
 In short, if the broker _doesn't_ convert it to a "large" message then it 
won't fit in the journal buffer and sending the message will simply fail.

bq. ...this implies that application code must be aware of it...

I think an argument could be made that the JMS specification does not guarantee 
that the {{Message}} is thread-safe. As noted previously, the only objects 
which are explicitly identified as being thread-safe are {{Destination}}, 
{{ConnectionFactory}}, and {{Connection}}, and that makes things rather tricky 
when you start introducing other threads into your JMS applications.

bq. ...this seems to have been implemented by ARTEMIS-2228 for a specific 
management API use-case.

The use-case from the Jira is specific to management, but the fix applies to 
all use-cases involving sending messages to the broker since the problem could 
occur in a non-management use-case.

bq. I suppose that client-side it is not useful receiving a streamed Large 
message if client is not aware of it and does not handle the streaming...

In just about every use-case the differences between large and non-large 
messages are completely transparent. This is a bit of an edge case where the 
application code is arguably incorrect.

bq. It could be useful to have an explicit broker configuration to disable the 
Large message handling in such cases...

I think that could be useful as well. In this situation, the sending of the 
original message (which would normally be converted to a "large" message) would 
simply fail instead, and the producer would then need to deal with the failure.

bq. Also, it would be useful to detect client-side unintended reading of the 
message from a thread which is not the consumer one.

To be clear, the exception(s) thrown when making an illegal call from within a 
{{MessageListener}} or {{CompletionListener}} are mandated by the JMS 
specification. However, we also detect invalid concurrent {{Session}} usage and 
log a WARN message. Despite the fact the spec says the {{Session}} is not 
thread-safe, it doesn't require this kind of illegal access detection. I think 
that's similar to what you're describing here. Correct me if I'm wrong.


was (Author: jbertram):
> ...what happens is that the consumer thread passes the "response" message 
> directly to the thread making the "request", which is blocked waiting for it.

Typically the thread sending the request is the one consuming the response so 
there's no need for any multi-threaded handling of the message.

> My doubt however is about why Broker sends the messages as Large, even if 
> client does not use such APIs and tries to prevent their usage with 
> minLargeMessageSize=2147483647.

I explained this 
[previously|https://issues.apache.org/jira/browse/ARTEMIS-4240?focusedCommentId=17712015&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17712015].
 In short, if the broker _doesn't_ convert it to a "large" message then it 
won't fit in the journal buffer and sending the message will simply fail.

> ...this implies that application code must be aware of it...

I think an argument could be made that the JMS specification does not guarantee 
that the {{Message}} is thread-safe. As noted previously, the only objects 
which are explicitly identified as being thread-safe are {{Destination}}, 
{{ConnectionFactory}}, and {{Connection}}, and that makes things rather tricky 
when you start introducing other threads into your JMS applications.

> ...this seems to have been implemented by ARTEMIS-2228 for a specific 
> management API use-case.

The use-case from the Jira is specific to management, but the fix applies to 
all use-cases involving sending messages to the broker since the problem could 
occur in a non-management use-case.

> I suppose that client-side it is not useful receiving a streamed Large 
> message if client is not aware of it and does not handle the streaming...

In just about every use-case the differences between large and non-large

[jira] [Commented] (ARTEMIS-4240) Consumer stuck handling Large Message

2023-04-17 Thread Justin Bertram (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17713253#comment-17713253
 ] 

Justin Bertram commented on ARTEMIS-4240:
-

> ...what happens is that the consumer thread passes the "response" message 
> directly to the thread making the "request", which is blocked waiting for it.

Typically the thread sending the request is the one consuming the response so 
there's no need for any multi-threaded handling of the message.

> My doubt however is about why Broker sends the messages as Large, even if 
> client does not use such APIs and tries to prevent their usage with 
> minLargeMessageSize=2147483647.

I explained this 
[previously|https://issues.apache.org/jira/browse/ARTEMIS-4240?focusedCommentId=17712015&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17712015].
 In short, if the broker _doesn't_ convert it to a "large" message then it 
won't fit in the journal buffer and sending the message will simply fail.

> ...this implies that application code must be aware of it...

I think an argument could be made that the JMS specification does not guarantee 
that the {{Message}} is thread-safe. As noted previously, the only objects 
which are explicitly identified as being thread-safe are {{Destination}}, 
{{ConnectionFactory}}, and {{Connection}}, and that makes things rather tricky 
when you start introducing other threads into your JMS applications.

> ...this seems to have been implemented by ARTEMIS-2228 for a specific 
> management API use-case.

The use-case from the Jira is specific to management, but the fix applies to 
all use-cases involving sending messages to the broker since the problem could 
occur in a non-management use-case.

> I suppose that client-side it is not useful receiving a streamed Large 
> message if client is not aware of it and does not handle the streaming...

In just about every use-case the differences between large and non-large 
messages are completely transparent. This is a bit of an edge case where the 
application code is arguably incorrect.

> It could be useful to have an explicit broker configuration to disable the 
> Large message handling in such cases...

I think that could be useful as well. In this situation, the sending of the 
original message (which would normally be converted to a "large" message) would 
simply fail instead, and the producer would then need to deal with the failure.

> Also, it would be useful to detect client-side unintended reading of the 
> message from a thread which is not the consumer one.

To be clear, the exception(s) thrown when making an illegal call from within a 
{{MessageListener}} or {{CompletionListener}} are mandated by the JMS 
specification. However, we also detect invalid concurrent {{Session}} usage and 
log a WARN message. Despite the fact the spec says the {{Session}} is not 
thread-safe, it doesn't require this kind of illegal access detection. I think 
that's similar to what you're describing here. Correct me if I'm wrong.

> Consumer stuck handling Large Message
> -
>
> Key: ARTEMIS-4240
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4240
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker, JMS
>Affects Versions: 2.19.1, 2.28.0
>Reporter: Apache Dev
>Assignee: Justin Bertram
>Priority: Critical
>
> In this scenario:
>  * "core" protocol
>  * JMS consumer APIs
>  * non-persistent messaging
>  * client connection configured with {{minLargeMessageSize=2147483647}} in 
> order to disable Large Messages
> a consumer correctly receives all messages having a small size.
> But when a message > 1 MiB is received, the consumer thread is stuck for 30 
> seconds with this stack:
> {noformat}
> 3XMTHREADINFO  "Thread-3 (ActiveMQ-client-global-threads)" 
> J9VMThread:0x006D9300, omrthread_t:0x7FB7A002C248, 
> java/lang/Thread:0xE0BD22F8, state:P, prio=5
> 3XMJAVALTHREAD(java/lang/Thread getId:0x37, isDaemon:true)
> 3XMJAVALTHRCCLsun/misc/Launcher$AppClassLoader(0xE00298C0)
> 3XMTHREADINFO1(native thread ID:0xDD80, native priority:0x5, 
> native policy:UNKNOWN, vmstate:P, vm thread flags:0x000a0081)
> 3XMTHREADINFO2(native stack address range 
> from:0x7FB85EE4F000, to:0x7FB85EE8F000, size:0x4)
> 3XMCPUTIME   CPU usage total: 0.025981590 secs, current 
> category="Application"
> 3XMTHREADBLOCK Parked on: 
> java/util/concurrent/locks/AbstractQueuedSynchronizer$ConditionObject@0xE0DA3B80
>  Owned by: 
> 3XMHEAPALLOC Heap bytes allocated since last GC cycle=0 (0x0)
> 3XMTHREADINFO3   Java callstack:
> 4XESTACKTRACEat sun/misc/Unsafe.park(Native Method)
> 4XESTACKTRACE  

[jira] [Comment Edited] (ARTEMIS-4240) Consumer stuck handling Large Message

2023-04-17 Thread Apache Dev (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17713222#comment-17713222
 ] 

Apache Dev edited comment on ARTEMIS-4240 at 4/17/23 5:41 PM:
--

Thanks guys!

Just to clarify, I simplified the scenario for the reproducer, but in the real 
scenario we have a request-reply pattern, and the issue happens when the reply 
message is received.
Actually, ExecutorService is not used because what happens is that the consumer 
thread passes the "response" message directly to the thread making the 
"request", which is blocked waiting for it.
That's why we do not need multiple consumers: we already have threads ready to 
process their own reply message, and the "onMessage" is actually a dispatcher 
which awakens them with the received reply message.
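
For illustration, a minimal sketch of such a dispatcher based on 
{{JMSCorrelationID}} and a {{CompletableFuture}} per pending request (class and 
method names are hypothetical, not the reporter's actual code):
{code:java}
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;

// Hypothetical dispatcher: onMessage() only hands the reply to the thread
// that is already blocked waiting for it, keyed by JMSCorrelationID.
public class ReplyDispatcher implements MessageListener {

   private final Map<String, CompletableFuture<Message>> pending = new ConcurrentHashMap<>();

   // Called by the requesting thread before it sends the request.
   public CompletableFuture<Message> expectReply(String correlationId) {
      CompletableFuture<Message> future = new CompletableFuture<>();
      pending.put(correlationId, future);
      return future;
   }

   @Override
   public void onMessage(Message reply) {
      try {
         CompletableFuture<Message> future = pending.remove(reply.getJMSCorrelationID());
         if (future != null) {
            // The requester is unblocked here; it will read the body on its own thread.
            future.complete(reply);
         }
      } catch (JMSException e) {
         throw new RuntimeException(e);
      }
   }
}
{code}
With this shape the waiting thread, not the delivery thread, ends up reading the 
message body, which is exactly the cross-thread access discussed in this issue.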

My doubt however is about why Broker sends the messages as Large, even if 
client does not use such APIs and tries to prevent their usage with 
minLargeMessageSize=2147483647.
This implies that application code must be aware of it, either avoiding the 
pattern which is causing the race condition or inspecting the type of message 
(for example, checking for the property "_AMQ_LARGE_SIZE") in order to decide 
how to handle it.
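
A minimal sketch of that kind of check on the consumer side, assuming only the 
standard JMS {{Message#propertyExists}} call and the property name quoted above 
(whether the property is actually set is not verified here):
{code:java}
import javax.jms.JMSException;
import javax.jms.Message;

public final class LargeMessageCheck {

   // Sketch only: branch on the property mentioned above to decide how the
   // application handles the received message.
   public static boolean looksLikeLargeMessage(Message message) throws JMSException {
      return message.propertyExists("_AMQ_LARGE_SIZE");
   }
}
{code}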

Broker seems to decide to handle the message as Large in: 
{{org.apache.activemq.artemis.core.persistence.impl.journal.LargeServerMessageImpl#checkLargeMessage}}

And this seems to have been implemented by ARTEMIS-2228 for a specific 
management API use-case.

I suppose that client-side it is not useful receiving a streamed Large message 
if client is not aware of it and does not handle the streaming, because the 
full message will end up being fully loaded in the heap.
It could be useful to have an explicit broker configuration to disable the 
Large message handling in such cases (configuration that we obtained messing 
with  and  as a workaround).

Also, it would be useful to detect client-side unintended reading of the 
message from a thread which is not the consumer one. A similar mechanism 
already happens for example for "AMQ139006: It is illegal to call this method 
from within a Message Listener".

 


was (Author: apachedev):
Thanks guys!

Just to clarify, I simplified the scenario for the reproducer, but in the real 
scenario we have a request-reply pattern, and the issue happens when the reply 
message is received.
Actually, ExecutorService is not used because what happens is that the consumer 
thread passes the "response" message directly to the thread making the 
"request", which is blocked waiting for it.
That's why we do not need multiple consumers: we already have threads ready to 
process their own reply message, and the "onMessage" is actually a dispatcher 
which awakens them with the received reply message.

My doubt however is about why Broker sends the messages as Large, even if 
client does not use such APIs and tries to prevent their usage with 
minLargeMessageSize=2147483647.
This implies that application code must be aware of it, either avoiding the 
pattern which is causing the race condition or inspecting the type of message 
(for example, checking for the property "_AMQ_LARGE_SIZE") in order to decide 
how to handle it.

Broker seems to decide to handle the message as Large in: 
{{org.apache.activemq.artemis.core.persistence.impl.journal.LargeServerMessageImpl#checkLargeMessage}}

And this seems to have been implemented by ARTEMIS-2228 for a specific 
management API use-case.

I suppose that client-side it is not useful receiving a streamed Large message, 
because application code does not handle the streaming, so the full message 
will end up being fully in the heap.
It could be useful to have an explicit broker configuration to disable the 
Large message handling in such cases (configuration that we obtained messing 
with  and  as a workaround).

Also, it would be useful to detect client-side unintended reading of the 
message from a thread which is not the consumer one. A similar mechanism 
already happens for example for "AMQ139006: It is illegal to call this method 
from within a Message Listener".

 

> Consumer stuck handling Large Message
> -
>
> Key: ARTEMIS-4240
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4240
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker, JMS
>Affects Versions: 2.19.1, 2.28.0
>Reporter: Apache Dev
>Assignee: Justin Bertram
>Priority: Critical
>
> In this scenario:
>  * "core" protocol
>  * JMS consumer APIs
>  * non-persistent messaging
>  * client connection configured with {{minLargeMessageSize=2147483647}} in 
> order to disable Large Messages
> a consumer correctly receives all messages having a small size.
> But when a message > 1 MiB is received, the consumer thread is stuck for 

[jira] [Comment Edited] (ARTEMIS-4240) Consumer stuck handling Large Message

2023-04-17 Thread Apache Dev (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17713222#comment-17713222
 ] 

Apache Dev edited comment on ARTEMIS-4240 at 4/17/23 5:36 PM:
--

Thanks guys!

Just to clarify, I simplified the scenario for the reproducer, but in the real 
scenario we have a request-reply pattern, and the issue happens when the reply 
message is received.
Actually, ExecutorService is not used because what happens is that the consumer 
thread passes the "response" message directly to the thread making the 
"request", which is blocked waiting for it.
That's why we do not need multiple consumers: we already have threads ready to 
process their own reply message, and the "onMessage" is actually a dispatcher 
which awakens them with the received reply message.

My doubt however is about why Broker sends the messages as Large, even if 
client does not use such APIs and tries to prevent their usage with 
minLargeMessageSize=2147483647.
This implies that application code must be aware of it, either avoiding the 
pattern which is causing the race condition or inspecting the type of message 
(for example, checking for the property "_AMQ_LARGE_SIZE") in order to decide 
how to handle it.

Broker seems to decide to handle the message as Large in: 
{{org.apache.activemq.artemis.core.persistence.impl.journal.LargeServerMessageImpl#checkLargeMessage}}

And this seems to have been implemented by ARTEMIS-2228 for a specific 
management API use-case.

I suppose that client-side it is not useful receiving a streamed Large message, 
because application code does not handle the streaming, so the full message 
will end up being fully in the heap.
It could be useful to have an explicit broker configuration to disable the 
Large message handling in such cases (configuration that we obtained messing 
with  and  as a workaround).

Also, it would be useful to detect client-side unintended reading of the 
message from a thread which is not the consumer one. A similar mechanism 
already happens for example for "AMQ139006: It is illegal to call this method 
from within a Message Listener".

 


was (Author: apachedev):
Thanks guys!

Just to clarify, I simplified the scenario for the reproducer, but in the real 
scenario we have a request-reply pattern, and the issue happens when the reply 
message is received.
Actually, ExecutorService is not used because what happens is that the consumer 
thread passes the "response" message directly to the thread making the 
"request", which is blocked waiting for it.
That's why we do not need multiple consumers: we already have threads ready to 
process their own reply message, and the "onMessage" is actually a dispatcher 
which awakens them with the received reply message.

My doubt however is about why Broker sends the messages as Large, even if 
client does not use such APIs and tries to prevent their usage with 
minLargeMessageSize=2147483647.
This implies that application code must be aware of it, either avoiding the 
pattern which is causing the race condition or inspecting the type of message 
(for example, checking for the property "_AMQ_LARGE_SIZE") in order to decide 
how to handle it.

Broker seems to decide to handle the message as Large in: 
{{org.apache.activemq.artemis.core.persistence.impl.journal.LargeServerMessageImpl#checkLargeMessage}}

And this seems to have been implemented by ARTEMIS-2228 for a specific 
management API use-case.

I suppose that client-side it is not useful receiving a streamed Large message, 
because application code does not handle the streaming, so the full message 
will end up being fully in the heap.
It could be useful to have an explicit broker configuration to disable the 
Large message handling in such cases (configuration that we obtained messing 
with  and  as a workaround).

 

> Consumer stuck handling Large Message
> -
>
> Key: ARTEMIS-4240
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4240
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker, JMS
>Affects Versions: 2.19.1, 2.28.0
>Reporter: Apache Dev
>Assignee: Justin Bertram
>Priority: Critical
>
> In this scenario:
>  * "core" protocol
>  * JMS consumer APIs
>  * non-persistent messaging
>  * client connection configured with {{minLargeMessageSize=2147483647}} in 
> order to disable Large Messages
> a consumer correctly receives all messages having a small size.
> But when a message > 1 MiB is received, the consumer thread is stuck for 30 
> seconds with this stack:
> {noformat}
> 3XMTHREADINFO  "Thread-3 (ActiveMQ-client-global-threads)" 
> J9VMThread:0x006D9300, omrthread_t:0x7FB7A002C248, 
> java/lang/Thread:0xE0BD22F8, state:P, prio=5
> 3XMJAVALTHREAD(java/lang/Thread ge

[jira] [Commented] (ARTEMIS-4240) Consumer stuck handling Large Message

2023-04-17 Thread Apache Dev (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17713222#comment-17713222
 ] 

Apache Dev commented on ARTEMIS-4240:
-

Thanks guys!

Just to clarify, I simplified the scenario for the reproducer, but in the real 
scenario we have a request-reply pattern, and the issue happens when the reply 
message is received.
Actually, ExecutorService is not used because what happens is that the consumer 
thread passes the "response" message directly to the thread making the 
"request", which is blocked waiting for it.
That's why we do not need multiple consumers: we already have threads ready to 
process their own reply message, and the "onMessage" is actually a dispatcher 
which awakens them with the received reply message.

My doubt however is about why Broker sends the messages as Large, even if 
client does not use such APIs and tries to prevent their usage with 
minLargeMessageSize=2147483647.
This implies that application code must be aware of it, either avoiding the 
pattern which is causing the race condition or inspecting the type of message 
(for example, checking for the property "_AMQ_LARGE_SIZE") in order to decide 
how to handle it.

Broker seems to decide to handle the message as Large in: 
{{org.apache.activemq.artemis.core.persistence.impl.journal.LargeServerMessageImpl#checkLargeMessage}}

And this seems to have been implemented by ARTEMIS-2228 for a specific 
management API use-case.

I suppose that client-side it is not useful receiving a streamed Large message, 
because application code does not handle the streaming, so the full message 
will end up being fully in the heap.
It could be useful to have an explicit broker configuration to disable the 
Large message handling in such cases (configuration that we obtained messing 
with  and  as a workaround).

 

> Consumer stuck handling Large Message
> -
>
> Key: ARTEMIS-4240
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4240
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker, JMS
>Affects Versions: 2.19.1, 2.28.0
>Reporter: Apache Dev
>Assignee: Justin Bertram
>Priority: Critical
>
> In this scenario:
>  * "core" protocol
>  * JMS consumer APIs
>  * non-persistent messaging
>  * client connection configured with {{minLargeMessageSize=2147483647}} in 
> order to disable Large Messages
> a consumer correctly receives all messages having a small size.
> But when a message > 1 MiB is received, the consumer thread is stuck for 30 
> seconds with this stack:
> {noformat}
> 3XMTHREADINFO  "Thread-3 (ActiveMQ-client-global-threads)" 
> J9VMThread:0x006D9300, omrthread_t:0x7FB7A002C248, 
> java/lang/Thread:0xE0BD22F8, state:P, prio=5
> 3XMJAVALTHREAD(java/lang/Thread getId:0x37, isDaemon:true)
> 3XMJAVALTHRCCLsun/misc/Launcher$AppClassLoader(0xE00298C0)
> 3XMTHREADINFO1(native thread ID:0xDD80, native priority:0x5, 
> native policy:UNKNOWN, vmstate:P, vm thread flags:0x000a0081)
> 3XMTHREADINFO2(native stack address range 
> from:0x7FB85EE4F000, to:0x7FB85EE8F000, size:0x4)
> 3XMCPUTIME   CPU usage total: 0.025981590 secs, current 
> category="Application"
> 3XMTHREADBLOCK Parked on: 
> java/util/concurrent/locks/AbstractQueuedSynchronizer$ConditionObject@0xE0DA3B80
>  Owned by: 
> 3XMHEAPALLOC Heap bytes allocated since last GC cycle=0 (0x0)
> 3XMTHREADINFO3   Java callstack:
> 4XESTACKTRACEat sun/misc/Unsafe.park(Native Method)
> 4XESTACKTRACEat 
> java/util/concurrent/locks/LockSupport.parkNanos(LockSupport.java:226)
> 4XESTACKTRACEat 
> java/util/concurrent/locks/AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2089)
> 4XESTACKTRACEat 
> java/util/concurrent/LinkedBlockingQueue.poll(LinkedBlockingQueue.java:478)
> 4XESTACKTRACEat 
> org/apache/activemq/artemis/core/client/impl/LargeMessageControllerImpl.popPacket(LargeMessageControllerImpl.java:1123)
> 4XESTACKTRACEat 
> org/apache/activemq/artemis/core/client/impl/LargeMessageControllerImpl.checkForPacket(LargeMessageControllerImpl.java:1167)
> 4XESTACKTRACEat 
> org/apache/activemq/artemis/core/client/impl/LargeMessageControllerImpl.discardUnusedPackets(LargeMessageControllerImpl.java:135)
> 4XESTACKTRACEat 
> org/apache/activemq/artemis/core/client/impl/ClientLargeMessageImpl.discardBody(ClientLargeMessageImpl.java:142)
> 4XESTACKTRACEat 
> org/apache/activemq/artemis/core/client/impl/ClientConsumerImpl.callOnMessage(ClientConsumerImpl.java:1031)
> 4XESTACKTRACEat 
> org/apache/activemq/art

[jira] [Resolved] (ARTEMIS-4240) Consumer stuck handling Large Message

2023-04-17 Thread Justin Bertram (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Justin Bertram resolved ARTEMIS-4240.
-
Resolution: Information Provided

> Consumer stuck handling Large Message
> -
>
> Key: ARTEMIS-4240
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4240
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker, JMS
>Affects Versions: 2.19.1, 2.28.0
>Reporter: Apache Dev
>Assignee: Justin Bertram
>Priority: Critical
>
> In this scenario:
>  * "core" protocol
>  * JMS consumer APIs
>  * non-persistent messaging
>  * client connection configured with {{minLargeMessageSize=2147483647}} in 
> order to disable Large Messages
> a consumer correctly receives all messages having a small size.
> But when a message > 1 MiB is received, the consumer thread is stuck for 30 
> seconds with this stack:
> {noformat}
> 3XMTHREADINFO  "Thread-3 (ActiveMQ-client-global-threads)" 
> J9VMThread:0x006D9300, omrthread_t:0x7FB7A002C248, 
> java/lang/Thread:0xE0BD22F8, state:P, prio=5
> 3XMJAVALTHREAD(java/lang/Thread getId:0x37, isDaemon:true)
> 3XMJAVALTHRCCLsun/misc/Launcher$AppClassLoader(0xE00298C0)
> 3XMTHREADINFO1(native thread ID:0xDD80, native priority:0x5, 
> native policy:UNKNOWN, vmstate:P, vm thread flags:0x000a0081)
> 3XMTHREADINFO2(native stack address range 
> from:0x7FB85EE4F000, to:0x7FB85EE8F000, size:0x4)
> 3XMCPUTIME   CPU usage total: 0.025981590 secs, current 
> category="Application"
> 3XMTHREADBLOCK Parked on: 
> java/util/concurrent/locks/AbstractQueuedSynchronizer$ConditionObject@0xE0DA3B80
>  Owned by: 
> 3XMHEAPALLOC Heap bytes allocated since last GC cycle=0 (0x0)
> 3XMTHREADINFO3   Java callstack:
> 4XESTACKTRACEat sun/misc/Unsafe.park(Native Method)
> 4XESTACKTRACEat 
> java/util/concurrent/locks/LockSupport.parkNanos(LockSupport.java:226)
> 4XESTACKTRACEat 
> java/util/concurrent/locks/AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2089)
> 4XESTACKTRACEat 
> java/util/concurrent/LinkedBlockingQueue.poll(LinkedBlockingQueue.java:478)
> 4XESTACKTRACEat 
> org/apache/activemq/artemis/core/client/impl/LargeMessageControllerImpl.popPacket(LargeMessageControllerImpl.java:1123)
> 4XESTACKTRACEat 
> org/apache/activemq/artemis/core/client/impl/LargeMessageControllerImpl.checkForPacket(LargeMessageControllerImpl.java:1167)
> 4XESTACKTRACEat 
> org/apache/activemq/artemis/core/client/impl/LargeMessageControllerImpl.discardUnusedPackets(LargeMessageControllerImpl.java:135)
> 4XESTACKTRACEat 
> org/apache/activemq/artemis/core/client/impl/ClientLargeMessageImpl.discardBody(ClientLargeMessageImpl.java:142)
> 4XESTACKTRACEat 
> org/apache/activemq/artemis/core/client/impl/ClientConsumerImpl.callOnMessage(ClientConsumerImpl.java:1031)
> 4XESTACKTRACEat 
> org/apache/activemq/artemis/core/client/impl/ClientConsumerImpl.access$400(ClientConsumerImpl.java:49)
> 4XESTACKTRACEat 
> org/apache/activemq/artemis/core/client/impl/ClientConsumerImpl$Runner.run(ClientConsumerImpl.java:1129)
> 4XESTACKTRACEat 
> org/apache/activemq/artemis/utils/actors/OrderedExecutor.doTask(OrderedExecutor.java:42)
> 4XESTACKTRACEat 
> org/apache/activemq/artemis/utils/actors/OrderedExecutor.doTask(OrderedExecutor.java:31)
> 4XESTACKTRACEat 
> org/apache/activemq/artemis/utils/actors/ProcessorBase.executePendingTasks(ProcessorBase.java:65)
> 4XESTACKTRACEat 
> org/apache/activemq/artemis/utils/actors/ProcessorBase$$Lambda$6/0xe00d1330.run(Bytecode
>  PC:4)
> 4XESTACKTRACEat 
> java/util/concurrent/ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1160(Compiled
>  Code))
> 4XESTACKTRACEat 
> java/util/concurrent/ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
> 4XESTACKTRACEat 
> org/apache/activemq/artemis/utils/ActiveMQThreadFactory$1.run(ActiveMQThreadFactory.java:118)
> 3XMTHREADINFO3   Native callstack:
> 4XENATIVESTACK(0x7FB863821952 [libj9prt29.so+0x5c952])
> 4XENATIVESTACK(0x7FB8637EC7E3 [libj9prt29.so+0x277e3])
> 4XENATIVESTACK(0x7FB863821E4A [libj9prt29.so+0x5ce4a])
> 4XENATIVESTACK(0x7FB8637EC7E3 [libj9prt29.so+0x277e3])
> 4XENATIVESTACK(0x7FB8638217E4 [libj9prt29.so+0x5c7e4])
> 4XENATIVESTACK(0x7FB86381DB3F [libj9prt29.so+0x58b3f])
> 4XENATIVESTACK(0x7

[jira] [Commented] (ARTEMIS-4240) Consumer stuck handling Large Message

2023-04-17 Thread Justin Bertram (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17713199#comment-17713199
 ] 

Justin Bertram commented on ARTEMIS-4240:
-

I had a closer look at what is going on under the covers and this basically 
comes down to a race condition between the end of the {{onMessage()}} and the 
execution of the new {{Runnable}} which actually reads the message. The race 
happens because when the {{onMessage()}} finishes there is some clean-up 
required related to the large message, so both the thread executing the 
{{onMessage()}} and the new {{Runnable}} are accessing the message concurrently.

If you _really_ want to use this pattern with large messages where you use an 
{{ExecutorService}} from a {{MessageListener}}, then I would recommend reading 
the message in the {{onMessage()}} and operating on the raw data in the 
{{Runnable}}, e.g.:
{code:java}
consumer.setMessageListener(new MessageListener() {

   final AtomicLong lastProcessedTimestamp = new AtomicLong();

   @Override
   public void onMessage(Message message) {
      long now = System.currentTimeMillis();
      if (lastProcessedTimestamp.compareAndSet(0, now)) {
         System.out.println("Message received");
      } else {
         long before = lastProcessedTimestamp.getAndSet(now);
         System.out.printf("Message received. Elapsed from previous message: %d ms%n",
                           now - before);
      }

      // here we read the message
      try {
         BytesMessage bytesMessage = (BytesMessage) message;
         byte[] buff = new byte[(int) bytesMessage.getBodyLength()];
         bytesMessage.readBytes(buff);
      } catch (JMSException e) {
         throw new RuntimeException(e);
      }

      // process the data asynchronously
      executor.execute(new Runnable() {

         @Override
         public void run() {
            // here we operate on the raw byte array
            completionLatch.countDown();
         }
      });
   }
});{code}
That said, I would recommend not using this pattern at all and instead using 
multiple {{Session}}, {{MessageConsumer}}, and {{MessageListener}} objects to 
get concurrent message processing. I believe the alternative that Robbie 
suggested (using a single-threaded synchronous receiver) is also viable.
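
A minimal sketch of that recommendation, assuming a plain 
{{javax.jms.ConnectionFactory}}; the queue name and concurrency value are 
illustrative:
{code:java}
import javax.jms.*;

public class ConcurrentConsumers {

   // Each worker gets its own Session, MessageConsumer, and MessageListener,
   // so each message is read entirely on its own delivery thread.
   public static void start(ConnectionFactory factory, String queueName, int concurrency) throws JMSException {
      Connection connection = factory.createConnection();
      for (int i = 0; i < concurrency; i++) {
         Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
         MessageConsumer consumer = session.createConsumer(session.createQueue(queueName));
         consumer.setMessageListener(message -> {
            try {
               if (message instanceof BytesMessage) {
                  BytesMessage bytesMessage = (BytesMessage) message;
                  byte[] body = new byte[(int) bytesMessage.getBodyLength()];
                  bytesMessage.readBytes(body);
                  // ... process body ...
               }
            } catch (JMSException e) {
               throw new RuntimeException(e);
            }
         });
      }
      connection.start();
   }
}
{code}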

> Consumer stuck handling Large Message
> -
>
> Key: ARTEMIS-4240
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4240
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker, JMS
>Affects Versions: 2.19.1, 2.28.0
>Reporter: Apache Dev
>Assignee: Justin Bertram
>Priority: Critical
>
> In this scenario:
>  * "core" protocol
>  * JMS consumer APIs
>  * non-persistent messaging
>  * client connection configured with {{minLargeMessageSize=2147483647}} in 
> order to disable Large Messages
> a consumer correctly receives all messages having a small size.
> But when a message > 1 MiB is received, the consumer thread is stuck for 30 
> seconds with this stack:
> {noformat}
> 3XMTHREADINFO  "Thread-3 (ActiveMQ-client-global-threads)" 
> J9VMThread:0x006D9300, omrthread_t:0x7FB7A002C248, 
> java/lang/Thread:0xE0BD22F8, state:P, prio=5
> 3XMJAVALTHREAD(java/lang/Thread getId:0x37, isDaemon:true)
> 3XMJAVALTHRCCLsun/misc/Launcher$AppClassLoader(0xE00298C0)
> 3XMTHREADINFO1(native thread ID:0xDD80, native priority:0x5, 
> native policy:UNKNOWN, vmstate:P, vm thread flags:0x000a0081)
> 3XMTHREADINFO2(native stack address range 
> from:0x7FB85EE4F000, to:0x7FB85EE8F000, size:0x4)
> 3XMCPUTIME   CPU usage total: 0.025981590 secs, current 
> category="Application"
> 3XMTHREADBLOCK Parked on: 
> java/util/concurrent/locks/AbstractQueuedSynchronizer$ConditionObject@0xE0DA3B80
>  Owned by: 
> 3XMHEAPALLOC Heap bytes allocated since last GC cycle=0 (0x0)
> 3XMTHREADINFO3   Java callstack:
> 4XESTACKTRACEat sun/misc/Unsafe.park(Native Method)
> 4XESTACKTRACEat 
> java/util/concurrent/locks/LockSupport.parkNanos(LockSupport.java:226)
> 4XESTACKTRACEat 
> java/util/concurrent/locks/AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2089)
> 4XESTACKTRACEat 
> java/util/concurrent/LinkedBlockingQueue.poll(LinkedBlockingQueue.java:478)
> 4XESTACKTRACEat 
> org/apache/activemq/artemis/core/client/impl/LargeMessageControllerImpl.popPac

[jira] [Commented] (ARTEMIS-4240) Consumer stuck handling Large Message

2023-04-17 Thread Robbie Gemmell (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17713161#comment-17713161
 ] 

Robbie Gemmell commented on ARTEMIS-4240:
-

JMS probably does rule it out as legitimate, however, given that having a 
MessageListener on a consumer dedicates the entire single-threaded Session to 
the onMessage delivery thread. The only methods allowed to be called 
concurrently on the session, and thus from any other thread in a 
MessageListener situation, are the close methods on session/producer/consumer.

In general, for a normal Message, which has been completed before onMessage was 
called, it often probably won't be a problem for a given provider impl as long 
as only one thread at a time operates on it and proper happens-before 
relationships are in place in the application code. For the rather different 
streamed-large-message case, though, where the message has not necessarily been 
completed before onMessage is called, I'd guess that at least the onMessage 
delivery thread interacting with the message object, in addition to possibly the 
IO threads as more payload arrives, needs to be OK, otherwise onMessage may 
never return?

Since the OP seems to want a synchronous receiver that hands off messages to 
another thread to process, calling consumer.receive() in a polling 
loop and using that thread to do the processing (or coordinating with the 
other thread if there is a real reason) may be better anyway; it's not 
immediately clear why you would use a MessageListener there (aside: although it's 
all done on one thread, that's also what Spring does by default under the covers 
for its 'Listener' delivery handling).
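
A minimal sketch of that polling approach, assuming the consumer's 
{{Connection}} is already started; the handler and timeout are illustrative:
{code:java}
import java.util.function.Consumer;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageConsumer;

public class PollingReceiver {

   // Synchronous polling loop: the thread calling receive() is the one that
   // processes (or dispatches) each message, avoiding MessageListener entirely.
   public static void run(MessageConsumer consumer, Consumer<Message> handler) throws JMSException {
      while (!Thread.currentThread().isInterrupted()) {
         Message message = consumer.receive(1000); // wait up to 1 second
         if (message != null) {
            handler.accept(message);
         }
      }
   }
}
{code}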

> Consumer stuck handling Large Message
> -
>
> Key: ARTEMIS-4240
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4240
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker, JMS
>Affects Versions: 2.19.1, 2.28.0
>Reporter: Apache Dev
>Assignee: Justin Bertram
>Priority: Critical
>
> In this scenario:
>  * "core" protocol
>  * JMS consumer APIs
>  * non-persistent messaging
>  * client connection configured with {{minLargeMessageSize=2147483647}} in 
> order to disable Large Messages
> a consumer correctly receives all messages having a small size.
> But when a message > 1 MiB is received, the consumer thread is stuck for 30 
> seconds with this stack:
> {noformat}
> 3XMTHREADINFO  "Thread-3 (ActiveMQ-client-global-threads)" 
> J9VMThread:0x006D9300, omrthread_t:0x7FB7A002C248, 
> java/lang/Thread:0xE0BD22F8, state:P, prio=5
> 3XMJAVALTHREAD(java/lang/Thread getId:0x37, isDaemon:true)
> 3XMJAVALTHRCCLsun/misc/Launcher$AppClassLoader(0xE00298C0)
> 3XMTHREADINFO1(native thread ID:0xDD80, native priority:0x5, 
> native policy:UNKNOWN, vmstate:P, vm thread flags:0x000a0081)
> 3XMTHREADINFO2(native stack address range 
> from:0x7FB85EE4F000, to:0x7FB85EE8F000, size:0x4)
> 3XMCPUTIME   CPU usage total: 0.025981590 secs, current 
> category="Application"
> 3XMTHREADBLOCK Parked on: 
> java/util/concurrent/locks/AbstractQueuedSynchronizer$ConditionObject@0xE0DA3B80
>  Owned by: 
> 3XMHEAPALLOC Heap bytes allocated since last GC cycle=0 (0x0)
> 3XMTHREADINFO3   Java callstack:
> 4XESTACKTRACEat sun/misc/Unsafe.park(Native Method)
> 4XESTACKTRACEat 
> java/util/concurrent/locks/LockSupport.parkNanos(LockSupport.java:226)
> 4XESTACKTRACEat 
> java/util/concurrent/locks/AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2089)
> 4XESTACKTRACEat 
> java/util/concurrent/LinkedBlockingQueue.poll(LinkedBlockingQueue.java:478)
> 4XESTACKTRACEat 
> org/apache/activemq/artemis/core/client/impl/LargeMessageControllerImpl.popPacket(LargeMessageControllerImpl.java:1123)
> 4XESTACKTRACEat 
> org/apache/activemq/artemis/core/client/impl/LargeMessageControllerImpl.checkForPacket(LargeMessageControllerImpl.java:1167)
> 4XESTACKTRACEat 
> org/apache/activemq/artemis/core/client/impl/LargeMessageControllerImpl.discardUnusedPackets(LargeMessageControllerImpl.java:135)
> 4XESTACKTRACEat 
> org/apache/activemq/artemis/core/client/impl/ClientLargeMessageImpl.discardBody(ClientLargeMessageImpl.java:142)
> 4XESTACKTRACEat 
> org/apache/activemq/artemis/core/client/impl/ClientConsumerImpl.callOnMessage(ClientConsumerImpl.java:1031)
> 4XESTACKTRACEat 
> org/apache/activemq/artemis/core/client/impl/ClientConsumerImpl.access$400(ClientConsumerImpl.java:49)
> 4XESTACKTRACEat 
> org/apache/activemq/artemis/core/client/

[jira] [Assigned] (ARTEMIS-4240) Consumer stuck handling Large Message

2023-04-17 Thread Justin Bertram (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Justin Bertram reassigned ARTEMIS-4240:
---

Assignee: Justin Bertram

> Consumer stuck handling Large Message
> -
>
> Key: ARTEMIS-4240
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4240
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker, JMS
>Affects Versions: 2.19.1, 2.28.0
>Reporter: Apache Dev
>Assignee: Justin Bertram
>Priority: Critical
>
> In this scenario:
>  * "core" protocol
>  * JMS consumer APIs
>  * non-persistent messaging
>  * client connection configured with {{minLargeMessageSize=2147483647}} in 
> order to disable Large Messages
> a consumer correctly receives all messages having a small size.
> But when a message > 1 MiB is received, the consumer thread is stuck for 30 
> seconds with this stack:
> {noformat}
> 3XMTHREADINFO  "Thread-3 (ActiveMQ-client-global-threads)" 
> J9VMThread:0x006D9300, omrthread_t:0x7FB7A002C248, 
> java/lang/Thread:0xE0BD22F8, state:P, prio=5
> 3XMJAVALTHREAD(java/lang/Thread getId:0x37, isDaemon:true)
> 3XMJAVALTHRCCLsun/misc/Launcher$AppClassLoader(0xE00298C0)
> 3XMTHREADINFO1(native thread ID:0xDD80, native priority:0x5, 
> native policy:UNKNOWN, vmstate:P, vm thread flags:0x000a0081)
> 3XMTHREADINFO2(native stack address range 
> from:0x7FB85EE4F000, to:0x7FB85EE8F000, size:0x4)
> 3XMCPUTIME   CPU usage total: 0.025981590 secs, current 
> category="Application"
> 3XMTHREADBLOCK Parked on: 
> java/util/concurrent/locks/AbstractQueuedSynchronizer$ConditionObject@0xE0DA3B80
>  Owned by: 
> 3XMHEAPALLOC Heap bytes allocated since last GC cycle=0 (0x0)
> 3XMTHREADINFO3   Java callstack:
> 4XESTACKTRACEat sun/misc/Unsafe.park(Native Method)
> 4XESTACKTRACEat 
> java/util/concurrent/locks/LockSupport.parkNanos(LockSupport.java:226)
> 4XESTACKTRACEat 
> java/util/concurrent/locks/AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2089)
> 4XESTACKTRACEat 
> java/util/concurrent/LinkedBlockingQueue.poll(LinkedBlockingQueue.java:478)
> 4XESTACKTRACEat 
> org/apache/activemq/artemis/core/client/impl/LargeMessageControllerImpl.popPacket(LargeMessageControllerImpl.java:1123)
> 4XESTACKTRACEat 
> org/apache/activemq/artemis/core/client/impl/LargeMessageControllerImpl.checkForPacket(LargeMessageControllerImpl.java:1167)
> 4XESTACKTRACEat 
> org/apache/activemq/artemis/core/client/impl/LargeMessageControllerImpl.discardUnusedPackets(LargeMessageControllerImpl.java:135)
> 4XESTACKTRACEat 
> org/apache/activemq/artemis/core/client/impl/ClientLargeMessageImpl.discardBody(ClientLargeMessageImpl.java:142)
> 4XESTACKTRACEat 
> org/apache/activemq/artemis/core/client/impl/ClientConsumerImpl.callOnMessage(ClientConsumerImpl.java:1031)
> 4XESTACKTRACEat 
> org/apache/activemq/artemis/core/client/impl/ClientConsumerImpl.access$400(ClientConsumerImpl.java:49)
> 4XESTACKTRACEat 
> org/apache/activemq/artemis/core/client/impl/ClientConsumerImpl$Runner.run(ClientConsumerImpl.java:1129)
> 4XESTACKTRACEat 
> org/apache/activemq/artemis/utils/actors/OrderedExecutor.doTask(OrderedExecutor.java:42)
> 4XESTACKTRACEat 
> org/apache/activemq/artemis/utils/actors/OrderedExecutor.doTask(OrderedExecutor.java:31)
> 4XESTACKTRACEat 
> org/apache/activemq/artemis/utils/actors/ProcessorBase.executePendingTasks(ProcessorBase.java:65)
> 4XESTACKTRACEat 
> org/apache/activemq/artemis/utils/actors/ProcessorBase$$Lambda$6/0xe00d1330.run(Bytecode
>  PC:4)
> 4XESTACKTRACEat 
> java/util/concurrent/ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1160(Compiled
>  Code))
> 4XESTACKTRACEat 
> java/util/concurrent/ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
> 4XESTACKTRACEat 
> org/apache/activemq/artemis/utils/ActiveMQThreadFactory$1.run(ActiveMQThreadFactory.java:118)
> 3XMTHREADINFO3   Native callstack:
> 4XENATIVESTACK(0x7FB863821952 [libj9prt29.so+0x5c952])
> 4XENATIVESTACK(0x7FB8637EC7E3 [libj9prt29.so+0x277e3])
> 4XENATIVESTACK(0x7FB863821E4A [libj9prt29.so+0x5ce4a])
> 4XENATIVESTACK(0x7FB8637EC7E3 [libj9prt29.so+0x277e3])
> 4XENATIVESTACK(0x7FB8638217E4 [libj9prt29.so+0x5c7e4])
> 4XENATIVESTACK(0x7FB86381DB3F [libj9prt29.so+0x58b3f])
> 4XENATIVESTACK(0x7FB8

[jira] [Work logged] (ARTEMIS-4212) Unexpected Behavior when Routing Type of Destinations Doesn't Match Clients

2023-04-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4212?focusedWorklogId=857358&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-857358
 ]

ASF GitHub Bot logged work on ARTEMIS-4212:
---

Author: ASF GitHub Bot
Created on: 17/Apr/23 12:14
Start Date: 17/Apr/23 12:14
Worklog Time Spent: 10m 
  Work Description: gemmellr commented on PR #4421:
URL: 
https://github.com/apache/activemq-artemis/pull/4421#issuecomment-1511228213

   Looks good other than still confused about 
https://github.com/apache/activemq-artemis/pull/4421#discussion_r1168574598




Issue Time Tracking
---

Worklog Id: (was: 857358)
Time Spent: 5h 10m  (was: 5h)

> Unexpected Behavior when Routing Type of Destinations Doesn't Match Clients
> ---
>
> Key: ARTEMIS-4212
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4212
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Reporter: Justin Bertram
>Assignee: Justin Bertram
>Priority: Major
>  Time Spent: 5h 10m
>  Remaining Estimate: 0h
>
> When the routing type of an address (and associated queue) does not match the 
> routing type of a client producer, the resultant behavior is a bit unexpected.
> Expected Behavior:
> If a client sends a message to an address / queue with the same name, but a 
> different routing type, the expected behavior would be to throw some sort of 
> InvalidDestinationException (if auto-create is not enabled), or to create the 
> matching address and queue with the appropriate routing type. The routing 
> count on the existing address (with non-matching routing type) should remain 
> unchanged.
> Actual Behavior:
> When sending, for example, to a predefined anycast address and queue from a 
> multicast (Topic) producer, the routed count on the address is incremented, 
> but the message count on the matching queue is not. No indication is given at 
> the client end that the messages failed to get routed - they are silently 
> dropped.
> This is reproducible using a qpid / proton queue producer to send to a 
> multicast address or using a topic producer to send to an anycast address, 
> e.g.:
> 1. Create a broker, setting auto-create-queues and auto-create-addresses to 
> "false" for the catch-all address-setting
> 2. Start the broker and create an address and matching queue with an ANYCAST 
> routing type
> 3. Send 1000 messages to the broker using the same queue name but mismatched 
> routing type:
> {code}
> ./artemis producer --url amqp://localhost:61616 --user admin --password admin 
> --destination topic://{QUEUE NAME} --protocol amqp
> {code}
> No error is emitted and the routing count is incremented by 1000 at the 
> address level, but remains unchanged at the destination level.





[jira] [Work logged] (ARTEMIS-4212) Unexpected Behavior when Routing Type of Destinations Doesn't Match Clients

2023-04-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4212?focusedWorklogId=857356&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-857356
 ]

ASF GitHub Bot logged work on ARTEMIS-4212:
---

Author: ASF GitHub Bot
Created on: 17/Apr/23 11:59
Start Date: 17/Apr/23 11:59
Worklog Time Spent: 10m 
  Work Description: gemmellr commented on code in PR #4421:
URL: https://github.com/apache/activemq-artemis/pull/4421#discussion_r1168577969


##
tests/integration-tests/src/test/java/org/apache/activemq/artemis/tests/integration/amqp/QueueAutoCreationTest.java:
##
@@ -124,7 +124,7 @@ public void testAutoCreateOnTopic() throws Exception {
   Connection connection = factory.createConnection();
   SimpleString addressName = 
UUIDGenerator.getInstance().generateSimpleStringUUID();
   logger.debug("Address is {}", addressName);
-  clientSession.createAddress(addressName, RoutingType.ANYCAST, false);
+  clientSession.createAddress(addressName, RoutingType.MULTICAST, false);

Review Comment:
   Ah, auto-creation...? Though the address is being explicitly pre-created, so it's 
not really doing auto-creation? Auto-update?
   
   Still seems weird to change it in one test but not the other... albeit I 
guess I'd expect a follow-up commit to delete the tests in this class or move 
them elsewhere.





Issue Time Tracking
---

Worklog Id: (was: 857356)
Time Spent: 5h  (was: 4h 50m)

> Unexpected Behavior when Routing Type of Destinations Doesn't Match Clients
> ---
>
> Key: ARTEMIS-4212
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4212
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Reporter: Justin Bertram
>Assignee: Justin Bertram
>Priority: Major
>  Time Spent: 5h
>  Remaining Estimate: 0h
>
> When the routing type of an address (and associated queue) does not match the 
> routing type of a client producer, the resultant behavior is a bit unexpected.
> Expected Behavior:
> If a client sends a message to an address / queue with the same name, but a 
> different routing type, the expected behavior would be to throw some sort of 
> InvalidDestinationException (if auto-create is not enabled), or to create the 
> matching address and queue with the appropriate routing type. The routing 
> count on the existing address (with non-matching routing type) should remain 
> unchanged.
> Actual Behavior:
> When sending, for example, to a predefined anycast address and queue from a 
> multicast (Topic) producer, the routed count on the address is incremented, 
> but the message count on the matching queue is not. No indication is given at 
> the client end that the messages failed to get routed - they are silently 
> dropped.
> This is reproducible using a qpid / proton queue producer to send to a 
> multicast address or using a topic producer to send to an anycast address, 
> e.g.:
> 1. Create a broker, setting auto-create-queues and auto-create-addresses to 
> "false" for the catch-all address-setting
> 2. Start the broker and create an address and matching queue with an ANYCAST 
> routing type
> 3. Send 1000 messages to the broker using the same queue name but mismatched 
> routing type:
> {code}
> ./artemis producer --url amqp://localhost:61616 --user admin --password admin 
> --destination topic://{QUEUE NAME} --protocol amqp
> {code}
> No error is emitted and the routing count is incremented by 1000 at the 
> address level, but remains unchanged at the destination level.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (ARTEMIS-4212) Unexpected Behavior when Routing Type of Destinations Doesn't Match Clients

2023-04-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4212?focusedWorklogId=857353&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-857353
 ]

ASF GitHub Bot logged work on ARTEMIS-4212:
---

Author: ASF GitHub Bot
Created on: 17/Apr/23 11:56
Start Date: 17/Apr/23 11:56
Worklog Time Spent: 10m 
  Work Description: gemmellr commented on code in PR #4421:
URL: https://github.com/apache/activemq-artemis/pull/4421#discussion_r1168574598


##
tests/integration-tests/src/test/java/org/apache/activemq/artemis/tests/integration/amqp/QueueAutoCreationTest.java:
##
@@ -124,7 +124,7 @@ public void testAutoCreateOnTopic() throws Exception {
   Connection connection = factory.createConnection();
   SimpleString addressName = 
UUIDGenerator.getInstance().generateSimpleStringUUID();
   logger.debug("Address is {}", addressName);
-  clientSession.createAddress(addressName, RoutingType.ANYCAST, false);
+  clientSession.createAddress(addressName, RoutingType.MULTICAST, false);

Review Comment:
   That I don't quite understand. The same change was, and still is, being made in 
the basically-same test in 
[AutoCreateJmsDestinationTest.java](https://github.com/apache/activemq-artemis/pull/4421/files#diff-6afa2ca2a98c6d517b31d95108be8a9a803968254c6f27bb739bc0bed395c2ae), 
so now we have 2 tests creating addresses with different routing types, one 
multicast and one anycast, and yet both sending to them as a Topic... isn't that 
exactly 'sending msgs to an address w/ mismatching routing types'?





Issue Time Tracking
---

Worklog Id: (was: 857353)
Time Spent: 4h 50m  (was: 4h 40m)

> Unexpected Behavior when Routing Type of Destinations Doesn't Match Clients
> ---
>
> Key: ARTEMIS-4212
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4212
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Reporter: Justin Bertram
>Assignee: Justin Bertram
>Priority: Major
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> When the routing type of an address (and associated queue) does not match the 
> routing type of a client producer, the resultant behavior is a bit unexpected.
> Expected Behavior:
> If a client sends a message to an address / queue with the same name, but a 
> different routing type, the expected behavior would be to throw some sort of 
> InvalidDestinationException (if auto-create is not enabled), or to create the 
> matching address and queue with the appropriate routing type. The routing 
> count on the existing address (with non-matching routing type) should remain 
> unchanged.
> Actual Behavior:
> When sending, for example, to a predefined anycast address and queue from a 
> multicast (Topic) producer, the routed count on the address is incremented, 
> but the message count on the matching queue is not. No indication is given at 
> the client end that the messages failed to get routed - they are silently 
> dropped.
> This is reproducible using a qpid / proton queue producer to send to a 
> multicast address or using a topic producer to send to an anycast address, 
> e.g.:
> 1. Create a broker, setting auto-create-queues and auto-create-addresses to 
> "false" for the catch-all address-setting
> 2. Start the broker and create an address and matching queue with an ANYCAST 
> routing type
> 3. Send 1000 messages to the broker using the same queue name but mismatched 
> routing type:
> {code}
> ./artemis producer --url amqp://localhost:61616 --user admin --password admin 
> --destination topic://{QUEUE NAME} --protocol amqp
> {code}
> No error is emitted and the routing count is incremented by 1000 at the 
> address level, but remains unchanged at the destination level.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (ARTEMIS-4243) ActiveMQ Artemis CLI fails to export bindings without routing types

2023-04-17 Thread Domenico Francesco Bruscino (Jira)
Domenico Francesco Bruscino created ARTEMIS-4243:


 Summary: ActiveMQ Artemis CLI fails to export bindings without 
routing types
 Key: ARTEMIS-4243
 URL: https://issues.apache.org/jira/browse/ARTEMIS-4243
 Project: ActiveMQ Artemis
  Issue Type: Bug
Affects Versions: 2.28.0
Reporter: Domenico Francesco Bruscino
Assignee: Domenico Francesco Bruscino


The ActiveMQ Artemis CLI fails to export a journal that contains a binding 
without routing types:
{code:java}
$ ./broker/bin/artemis data exp
java.lang.StringIndexOutOfBoundsException: begin 0, end -2, length 0
at java.base/java.lang.String.checkBoundsBeginEnd(String.java:4602)
at java.base/java.lang.String.substring(String.java:2705)
at 
org.apache.activemq.artemis.cli.commands.tools.xml.XmlDataExporter.printBindingsAsXML(XmlDataExporter.java:347)
at 
org.apache.activemq.artemis.cli.commands.tools.xml.XmlDataExporter.printDataAsXML(XmlDataExporter.java:327)
at 
org.apache.activemq.artemis.cli.commands.tools.xml.XmlDataExporter.writeXMLData(XmlDataExporter.java:155)
at 
org.apache.activemq.artemis.cli.commands.tools.xml.XmlDataExporter.writeOutput(XmlDataExporter.java:148)
at 
org.apache.activemq.artemis.cli.commands.tools.xml.XmlDataExporter.process(XmlDataExporter.java:137)
at 
org.apache.activemq.artemis.cli.commands.tools.xml.XmlDataExporter.execute(XmlDataExporter.java:104)
at 
org.apache.activemq.artemis.cli.Artemis.internalExecute(Artemis.java:212)
at org.apache.activemq.artemis.cli.Artemis.execute(Artemis.java:162)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:568)
at org.apache.activemq.artemis.boot.Artemis.execute(Artemis.java:144)
at org.apache.activemq.artemis.boot.Artemis.main(Artemis.java:61)
{code}
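
A hedged guess at the failure mode (an assumption, not stated in the report): 
printBindingsAsXML appears to build a comma-separated list of routing types and strip 
the trailing delimiter with String#substring, which underflows when a binding carries 
no routing types at all. A minimal standalone sketch of that pattern, using a 
hypothetical helper name:
{code:java}
import java.util.EnumSet;
import java.util.Set;

public class TrailingDelimiterExample {

   enum RoutingType { ANYCAST, MULTICAST }

   // Hypothetical helper mirroring the suspected pattern: join the routing types
   // with ", " and chop the trailing separator off the resulting String.
   static String routingTypesAttribute(Set<RoutingType> routingTypes) {
      StringBuilder sb = new StringBuilder();
      for (RoutingType type : routingTypes) {
         sb.append(type.name()).append(", ");
      }
      String joined = sb.toString();
      // With an empty set this becomes "".substring(0, -2) and throws
      // java.lang.StringIndexOutOfBoundsException: begin 0, end -2, length 0
      return joined.substring(0, joined.length() - 2);
   }

   public static void main(String[] args) {
      System.out.println(routingTypesAttribute(EnumSet.of(RoutingType.ANYCAST)));   // prints "ANYCAST"
      System.out.println(routingTypesAttribute(EnumSet.noneOf(RoutingType.class))); // throws
   }
}
{code}
A length check before the substring (or joining without a trailing separator in the 
first place) would avoid the underflow; this is offered only as an illustration, since 
the actual ARTEMIS-4243 fix is not shown here.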



--
This message was sent by Atlassian Jira
(v8.20.10#820010)