[ 
https://issues.apache.org/jira/browse/ARTEMIS-4240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17713253#comment-17713253
 ] 

Justin Bertram edited comment on ARTEMIS-4240 at 4/17/23 6:54 PM:
------------------------------------------------------------------

bq. ...what happens is that the consumer thread passes the "response" message 
directly to the thread making the "request", which is blocked waiting for it.

Typically the thread sending the request is the one consuming the response so 
there's no need for any multi-threaded handling of the message.
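
For illustration, here is a minimal request/reply sketch along those lines (the class, queue name, and timeout are hypothetical, not taken from the reporter's application): the same thread sends the request and blocks on {{receive()}} for the response, so the response {{Message}} never crosses threads.

{code:java}
import javax.jms.*;

public class RequestReplyExample {

   // Sketch only: send the request and wait for the reply on the same thread,
   // so the response Message is never handed to another thread.
   static String requestReply(ConnectionFactory connectionFactory) throws JMSException {
      try (Connection connection = connectionFactory.createConnection()) {
         connection.start();
         Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
         Queue requestQueue = session.createQueue("request.queue");  // hypothetical queue
         TemporaryQueue replyQueue = session.createTemporaryQueue();

         MessageProducer producer = session.createProducer(requestQueue);
         MessageConsumer replyConsumer = session.createConsumer(replyQueue);

         TextMessage request = session.createTextMessage("ping");
         request.setJMSReplyTo(replyQueue);
         producer.send(request);

         // The requesting thread itself consumes the response.
         Message reply = replyConsumer.receive(30_000);  // illustrative timeout
         return reply == null ? null : ((TextMessage) reply).getText();
      }
   }
}
{code}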

bq. My doubt however is about why Broker sends the messages as Large, even if 
client does not use such APIs and tries to prevent their usage with 
minLargeMessageSize=2147483647.

I explained this 
[previously|https://issues.apache.org/jira/browse/ARTEMIS-4240?focusedCommentId=17712015&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17712015].
 In short, if the broker _doesn't_ convert it to a "large" message then it 
won't fit in the journal buffer and sending the message will simply fail.

bq. ...this implies that application code must be aware of it...

I think an argument could be made that the JMS specification does not guarantee 
that the {{Message}} is thread-safe. As noted previously, the only objects 
which are explicitly identified as being thread-safe are {{Destination}}, 
{{ConnectionFactory}}, and {{Connection}}, and that makes things rather tricky 
when you start introducing other threads into your JMS applications.
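
One way to stay inside those guarantees, sketched below with a hypothetical hand-off queue (none of these names come from this issue), is to read the body on the consumer thread inside {{onMessage}} and pass only the extracted data, never the {{Message}} itself, to other threads.

{code:java}
import java.util.concurrent.BlockingQueue;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

// Sketch: extract the payload on the consumer thread, then hand only the
// extracted String (not the Message) to whichever thread is waiting for it.
// Assumes a TextMessage body for simplicity.
public class CopyingListener implements MessageListener {

   private final BlockingQueue<String> responses; // hypothetical hand-off queue

   public CopyingListener(BlockingQueue<String> responses) {
      this.responses = responses;
   }

   @Override
   public void onMessage(Message message) {
      try {
         String body = ((TextMessage) message).getText(); // read here, on the consumer thread
         responses.offer(body);                           // other threads only ever see the copy
      } catch (JMSException e) {
         // application-specific error handling
      }
   }
}
{code}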

bq. ...this seems to have been implemented by ARTEMIS-2228 for a specific 
management API use-case.

The use-case from the Jira is specific to management, but the fix applies to 
all use-cases involving sending messages to the broker since the problem could 
occur in a non-management use-case.

bq. I suppose that client-side it is not useful receiving a streamed Large 
message if client is not aware of it and does not handle the streaming...

In just about every use-case the differences between large and non-large 
messages are completely transparent. This is a bit of an edge case where the 
application code is arguably incorrect.

bq. It could be useful to have an explicit broker configuration to disable the 
Large message handling in such cases...

I think that could be useful as well. In this situation, sending the original 
message (which would normally be converted to a "large" message) would simply 
fail instead, and the producer would then need to deal with the failure.
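
Roughly speaking, the producer side would then look like any other failed send. A hypothetical sketch (no such broker option exists today, and the exception the broker would report is an assumption):

{code:java}
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageProducer;

// Hypothetical: if the broker rejected the over-sized message instead of
// converting it to a "large" message, the producer would see a failed send.
public class SendWithFallback {

   static void sendOrHandleFailure(MessageProducer producer, Message message) {
      try {
         producer.send(message);
      } catch (JMSException tooLarge) {
         // application's choice: log it, split the payload, route it elsewhere, etc.
      }
   }
}
{code}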

bq. Also, it would be useful to detect client-side unintended reading of the 
message from a thread which is not the consumer one.

To be clear, the exception(s) thrown when making an illegal call from within a 
{{MessageListener}} or {{CompletionListener}} are mandated by the JMS 
specification. However, we also detect invalid concurrent {{Session}} usage and 
log a WARN message. Even though the spec says the {{Session}} is not 
thread-safe, it doesn't require this kind of illegal access detection. I think 
that's similar to what you're describing here. Correct me if I'm wrong.
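
For reference, this is the kind of spec-mandated failure I mean (a sketch only, not code from this issue): a listener that tries to close its own {{Session}} must get a {{javax.jms.IllegalStateException}}.

{code:java}
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.Session;

// The JMS spec mandates an IllegalStateException when a listener attempts to
// close the Session (or Connection) it is running on.
public class IllegalCallListener implements MessageListener {

   private final Session ownSession;

   public IllegalCallListener(Session ownSession) {
      this.ownSession = ownSession;
   }

   @Override
   public void onMessage(Message message) {
      try {
         ownSession.close(); // illegal from within this listener
      } catch (javax.jms.IllegalStateException expected) {
         // mandated by the JMS specification
      } catch (javax.jms.JMSException other) {
         // any other failure
      }
   }
}
{code}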


> Consumer stuck handling Large Message
> -------------------------------------
>
>                 Key: ARTEMIS-4240
>                 URL: https://issues.apache.org/jira/browse/ARTEMIS-4240
>             Project: ActiveMQ Artemis
>          Issue Type: Bug
>          Components: Broker, JMS
>    Affects Versions: 2.19.1, 2.28.0
>            Reporter: Apache Dev
>            Assignee: Justin Bertram
>            Priority: Critical
>
> In this scenario:
>  * "core" protocol
>  * JMS consumer APIs
>  * non-persistent messaging
>  * client connection configured with {{minLargeMessageSize=2147483647}} in 
> order to disable Large Messages
> a consumer correctly receives all small messages.
> But when a message > 1 MiB is received, the consumer thread is stuck for 30 
> seconds, with this stack:
> {noformat}
> 3XMTHREADINFO      "Thread-3 (ActiveMQ-client-global-threads)" 
> J9VMThread:0x00000000006D9300, omrthread_t:0x00007FB7A002C248, 
> java/lang/Thread:0x00000000E0BD22F8, state:P, prio=5
> 3XMJAVALTHREAD            (java/lang/Thread getId:0x37, isDaemon:true)
> 3XMJAVALTHRCCL            sun/misc/Launcher$AppClassLoader(0x00000000E00298C0)
> 3XMTHREADINFO1            (native thread ID:0xDD80, native priority:0x5, 
> native policy:UNKNOWN, vmstate:P, vm thread flags:0x000a0081)
> 3XMTHREADINFO2            (native stack address range 
> from:0x00007FB85EE4F000, to:0x00007FB85EE8F000, size:0x40000)
> 3XMCPUTIME               CPU usage total: 0.025981590 secs, current 
> category="Application"
> 3XMTHREADBLOCK     Parked on: 
> java/util/concurrent/locks/AbstractQueuedSynchronizer$ConditionObject@0x00000000E0DA3B80
>  Owned by: <unknown>
> 3XMHEAPALLOC             Heap bytes allocated since last GC cycle=0 (0x0)
> 3XMTHREADINFO3           Java callstack:
> 4XESTACKTRACE                at sun/misc/Unsafe.park(Native Method)
> 4XESTACKTRACE                at 
> java/util/concurrent/locks/LockSupport.parkNanos(LockSupport.java:226)
> 4XESTACKTRACE                at 
> java/util/concurrent/locks/AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2089)
> 4XESTACKTRACE                at 
> java/util/concurrent/LinkedBlockingQueue.poll(LinkedBlockingQueue.java:478)
> 4XESTACKTRACE                at 
> org/apache/activemq/artemis/core/client/impl/LargeMessageControllerImpl.popPacket(LargeMessageControllerImpl.java:1123)
> 4XESTACKTRACE                at 
> org/apache/activemq/artemis/core/client/impl/LargeMessageControllerImpl.checkForPacket(LargeMessageControllerImpl.java:1167)
> 4XESTACKTRACE                at 
> org/apache/activemq/artemis/core/client/impl/LargeMessageControllerImpl.discardUnusedPackets(LargeMessageControllerImpl.java:135)
> 4XESTACKTRACE                at 
> org/apache/activemq/artemis/core/client/impl/ClientLargeMessageImpl.discardBody(ClientLargeMessageImpl.java:142)
> 4XESTACKTRACE                at 
> org/apache/activemq/artemis/core/client/impl/ClientConsumerImpl.callOnMessage(ClientConsumerImpl.java:1031)
> 4XESTACKTRACE                at 
> org/apache/activemq/artemis/core/client/impl/ClientConsumerImpl.access$400(ClientConsumerImpl.java:49)
> 4XESTACKTRACE                at 
> org/apache/activemq/artemis/core/client/impl/ClientConsumerImpl$Runner.run(ClientConsumerImpl.java:1129)
> 4XESTACKTRACE                at 
> org/apache/activemq/artemis/utils/actors/OrderedExecutor.doTask(OrderedExecutor.java:42)
> 4XESTACKTRACE                at 
> org/apache/activemq/artemis/utils/actors/OrderedExecutor.doTask(OrderedExecutor.java:31)
> 4XESTACKTRACE                at 
> org/apache/activemq/artemis/utils/actors/ProcessorBase.executePendingTasks(ProcessorBase.java:65)
> 4XESTACKTRACE                at 
> org/apache/activemq/artemis/utils/actors/ProcessorBase$$Lambda$6/0x00000000e00d1330.run(Bytecode
>  PC:4)
> 4XESTACKTRACE                at 
> java/util/concurrent/ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1160(Compiled
>  Code))
> 4XESTACKTRACE                at 
> java/util/concurrent/ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
> 4XESTACKTRACE                at 
> org/apache/activemq/artemis/utils/ActiveMQThreadFactory$1.run(ActiveMQThreadFactory.java:118)
> 3XMTHREADINFO3           Native callstack:
> 4XENATIVESTACK                (0x00007FB863821952 [libj9prt29.so+0x5c952])
> 4XENATIVESTACK                (0x00007FB8637EC7E3 [libj9prt29.so+0x277e3])
> 4XENATIVESTACK                (0x00007FB863821E4A [libj9prt29.so+0x5ce4a])
> 4XENATIVESTACK                (0x00007FB8637EC7E3 [libj9prt29.so+0x277e3])
> 4XENATIVESTACK                (0x00007FB8638217E4 [libj9prt29.so+0x5c7e4])
> 4XENATIVESTACK                (0x00007FB86381DB3F [libj9prt29.so+0x58b3f])
> 4XENATIVESTACK                (0x00007FB8698B7420 [libpthread.so.0+0x14420])
> 4XENATIVESTACK               pthread_cond_timedwait+0x271 (0x00007FB8698B27D1 
> [libpthread.so.0+0xf7d1])
> 4XENATIVESTACK               omrthread_park+0x184 (0x00007FB8680C9454 
> [libj9thr29.so+0x7454])
> 4XENATIVESTACK                (0x00007FB863D35744 [libj9vm29.so+0xdb744])
> 4XENATIVESTACK                (0x00007FB84DC354B7 [<unknown>+0x0]){noformat} 
> At that point, the consumer thread has already invoked the application 
> "onMessage" callback, but then it blocks, causing the entire consumer to block 
> and not process any new messages for 30 seconds.
> In all cases, no errors or message loss have been detected, just a slowdown due 
> to the blocked consumer.
> "Large messages" are disabled client-side; however, we noticed that the broker 
> decides whether a message has to be converted to a large message when sending it.
> The broker uses the following method in 
> org.apache.activemq.artemis.core.persistence.impl.journal.LargeServerMessageImpl#checkLargeMessage:
> {code:java}
>    /** This will check if a regular message needs to be converted as large message */
>    public static Message checkLargeMessage(Message message, StorageManager storageManager) throws Exception {
>       if (message.isLargeMessage()) {
>          return message; // nothing to be done on this case
>       }
>       if (message.getEncodeSize() + ESTIMATE_RECORD_TRAIL > storageManager.getMaxRecordSize()) {
>          return asLargeMessage(message, storageManager);
>       } else {
>          return message;
>       }
>    }{code}
> As a workaround, it is possible to reconfigure the broker journal with the 
> following settings, so that {{storageManager.getMaxRecordSize()}} 
> always returns a large number, ensuring that the message is never converted to 
> Large:
> {code:xml}
>         <journal-file-size>2147483647</journal-file-size>
>         <journal-buffer-size>2147483647</journal-buffer-size>
> {code}
> It is not clear to me why the broker converts the message to Large even if the 
> client does not require it, and why the broker decides this according to the 
> record size of the storage, as in this scenario only non-persistent messages 
> are used.
> Possibly related to ARTEMIS-3809.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
