[ 
https://issues.apache.org/jira/browse/ARTEMIS-4240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17713161#comment-17713161
 ] 

Robbie Gemmell commented on ARTEMIS-4240:
-----------------------------------------

JMS probably does rule it out as legitimate, however, given that having a 
MessageListener on a consumer dedicates the entire single-threaded Session to 
the onMessage delivery thread. The only methods allowed to be called 
concurrently on the session, and thus from any other thread in a 
MessageListener situation, are the close methods on the session/producer/consumer.
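
As a rough illustration of that constraint (not code from this issue; the broker 
URL and queue name are placeholder assumptions):

{code:java}
import javax.jms.Connection;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;

import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class ListenerThreadingExample {
   public static void main(String[] args) throws Exception {
      ActiveMQConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
      Connection connection = cf.createConnection();
      Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
      Queue queue = session.createQueue("exampleQueue");
      MessageConsumer consumer = session.createConsumer(queue);

      consumer.setMessageListener(message -> {
         // Delivery thread: the Session is dedicated to this callback while it runs,
         // so the session, consumer and delivered message should only be used here.
      });
      connection.start();

      // From any other application thread, the only session-related calls that are
      // legal while deliveries may be in progress are the close() methods:
      consumer.close();                  // allowed concurrently with delivery
      // session.createProducer(queue);  // NOT allowed from a thread other than the delivery thread
      connection.close();
   }
}
{code}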

In general, for a normal Message, which has been completed before onMessage was 
called, it often probably won't be a problem for a given provider impl as long 
as only one thread at a time operates on it and proper happens-before 
relationships are in place in the application code. For the rather different 
streamed-large-message case though, where the message has not necessarily been 
completed before onMessage is called, I'd guess that allowing at least the 
onMessage delivery thread to interact with the message object, in addition to 
possibly the IO threads as more payload arrives, needs to be ok, otherwise 
onMessage may never return?

Since the OP seems to want a synchronous receiver that hands off messages to 
another thread to process, calling consumer.receive(<timeout>) in a polling 
loop and using that thread to do the processing (or to coordinate with the 
other thread if there is a real reason to) may be better anyway; it's not 
immediately clear why you would use a MessageListener there (aside: although 
it's all done on one thread, that's also what Spring does by default under the 
covers for its 'Listener' delivery handling).
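
A minimal sketch of that polling approach, assuming the Artemis core JMS 
client; the broker URL, queue name and the process() method are illustrative 
placeholders, not taken from this issue:

{code:java}
import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;

import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class PollingReceiver {
   public static void main(String[] args) throws Exception {
      ActiveMQConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
      try (Connection connection = cf.createConnection()) {
         connection.start();
         Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
         MessageConsumer consumer = session.createConsumer(session.createQueue("exampleQueue"));
         while (!Thread.currentThread().isInterrupted()) {
            // Poll synchronously with a timeout; null means nothing arrived in time.
            Message message = consumer.receive(1000);
            if (message != null) {
               // The receiving thread does the processing itself, so the Session is
               // never shared between threads.
               process(message);
            }
         }
      }
   }

   private static void process(Message message) {
      // application-specific work goes here
   }
}
{code}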

> Consumer stuck handling Large Message
> -------------------------------------
>
>                 Key: ARTEMIS-4240
>                 URL: https://issues.apache.org/jira/browse/ARTEMIS-4240
>             Project: ActiveMQ Artemis
>          Issue Type: Bug
>          Components: Broker, JMS
>    Affects Versions: 2.19.1, 2.28.0
>            Reporter: Apache Dev
>            Assignee: Justin Bertram
>            Priority: Critical
>
> In this scenario:
>  * "core" protocol
>  * JMS consumer APIs
>  * non-persistent messaging
>  * client connection configured with {{minLargeMessageSize=2147483647}} in 
> order to disable Large Messages
> a consumer correctly receives all messages of a small size.
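> For reference, one way this client-side threshold is typically set with the 
> Artemis core JMS client (illustrative only, not the reporter's exact setup):
> {code:java}
> import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;
>
> public class DisableLargeMessagesClientSide {
>    public static void main(String[] args) {
>       // Threshold passed as a connection URL parameter...
>       ActiveMQConnectionFactory cf =
>             new ActiveMQConnectionFactory("tcp://localhost:61616?minLargeMessageSize=2147483647");
>       // ...or, equivalently, set programmatically on the factory:
>       cf.setMinLargeMessageSize(Integer.MAX_VALUE);
>    }
> }
> {code}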
> But when a message larger than 1 MiB is received, the consumer thread is 
> stuck for 30 seconds, with this stack:
> {noformat}
> 3XMTHREADINFO      "Thread-3 (ActiveMQ-client-global-threads)" J9VMThread:0x00000000006D9300, omrthread_t:0x00007FB7A002C248, java/lang/Thread:0x00000000E0BD22F8, state:P, prio=5
> 3XMJAVALTHREAD            (java/lang/Thread getId:0x37, isDaemon:true)
> 3XMJAVALTHRCCL            sun/misc/Launcher$AppClassLoader(0x00000000E00298C0)
> 3XMTHREADINFO1            (native thread ID:0xDD80, native priority:0x5, native policy:UNKNOWN, vmstate:P, vm thread flags:0x000a0081)
> 3XMTHREADINFO2            (native stack address range from:0x00007FB85EE4F000, to:0x00007FB85EE8F000, size:0x40000)
> 3XMCPUTIME               CPU usage total: 0.025981590 secs, current category="Application"
> 3XMTHREADBLOCK     Parked on: java/util/concurrent/locks/AbstractQueuedSynchronizer$ConditionObject@0x00000000E0DA3B80 Owned by: <unknown>
> 3XMHEAPALLOC             Heap bytes allocated since last GC cycle=0 (0x0)
> 3XMTHREADINFO3           Java callstack:
> 4XESTACKTRACE                at sun/misc/Unsafe.park(Native Method)
> 4XESTACKTRACE                at java/util/concurrent/locks/LockSupport.parkNanos(LockSupport.java:226)
> 4XESTACKTRACE                at java/util/concurrent/locks/AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2089)
> 4XESTACKTRACE                at java/util/concurrent/LinkedBlockingQueue.poll(LinkedBlockingQueue.java:478)
> 4XESTACKTRACE                at org/apache/activemq/artemis/core/client/impl/LargeMessageControllerImpl.popPacket(LargeMessageControllerImpl.java:1123)
> 4XESTACKTRACE                at org/apache/activemq/artemis/core/client/impl/LargeMessageControllerImpl.checkForPacket(LargeMessageControllerImpl.java:1167)
> 4XESTACKTRACE                at org/apache/activemq/artemis/core/client/impl/LargeMessageControllerImpl.discardUnusedPackets(LargeMessageControllerImpl.java:135)
> 4XESTACKTRACE                at org/apache/activemq/artemis/core/client/impl/ClientLargeMessageImpl.discardBody(ClientLargeMessageImpl.java:142)
> 4XESTACKTRACE                at org/apache/activemq/artemis/core/client/impl/ClientConsumerImpl.callOnMessage(ClientConsumerImpl.java:1031)
> 4XESTACKTRACE                at org/apache/activemq/artemis/core/client/impl/ClientConsumerImpl.access$400(ClientConsumerImpl.java:49)
> 4XESTACKTRACE                at org/apache/activemq/artemis/core/client/impl/ClientConsumerImpl$Runner.run(ClientConsumerImpl.java:1129)
> 4XESTACKTRACE                at org/apache/activemq/artemis/utils/actors/OrderedExecutor.doTask(OrderedExecutor.java:42)
> 4XESTACKTRACE                at org/apache/activemq/artemis/utils/actors/OrderedExecutor.doTask(OrderedExecutor.java:31)
> 4XESTACKTRACE                at org/apache/activemq/artemis/utils/actors/ProcessorBase.executePendingTasks(ProcessorBase.java:65)
> 4XESTACKTRACE                at org/apache/activemq/artemis/utils/actors/ProcessorBase$$Lambda$6/0x00000000e00d1330.run(Bytecode PC:4)
> 4XESTACKTRACE                at java/util/concurrent/ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1160(Compiled Code))
> 4XESTACKTRACE                at java/util/concurrent/ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
> 4XESTACKTRACE                at org/apache/activemq/artemis/utils/ActiveMQThreadFactory$1.run(ActiveMQThreadFactory.java:118)
> 3XMTHREADINFO3           Native callstack:
> 4XENATIVESTACK                (0x00007FB863821952 [libj9prt29.so+0x5c952])
> 4XENATIVESTACK                (0x00007FB8637EC7E3 [libj9prt29.so+0x277e3])
> 4XENATIVESTACK                (0x00007FB863821E4A [libj9prt29.so+0x5ce4a])
> 4XENATIVESTACK                (0x00007FB8637EC7E3 [libj9prt29.so+0x277e3])
> 4XENATIVESTACK                (0x00007FB8638217E4 [libj9prt29.so+0x5c7e4])
> 4XENATIVESTACK                (0x00007FB86381DB3F [libj9prt29.so+0x58b3f])
> 4XENATIVESTACK                (0x00007FB8698B7420 [libpthread.so.0+0x14420])
> 4XENATIVESTACK               pthread_cond_timedwait+0x271 (0x00007FB8698B27D1 [libpthread.so.0+0xf7d1])
> 4XENATIVESTACK               omrthread_park+0x184 (0x00007FB8680C9454 [libj9thr29.so+0x7454])
> 4XENATIVESTACK                (0x00007FB863D35744 [libj9vm29.so+0xdb744])
> 4XENATIVESTACK                (0x00007FB84DC354B7 [<unknown>+0x0]){noformat}
> At that point, the consumer thread has already invoked the application 
> "onMessage" callback, but then it blocks, causing the entire consumer to block 
> and not process any new messages for 30 seconds.
> In all cases, no errors or lost messages have been detected, just a slowdown 
> due to the blocked consumer.
> "Large messages" are disabled client-side; however, we noticed that the broker 
> decides whether a message has to be converted to a large message when sending it.
> The broker uses the following method in 
> org.apache.activemq.artemis.core.persistence.impl.journal.LargeServerMessageImpl#checkLargeMessage:
> {code:java}
>    /** This will check if a regular message needs to be converted as large message */
>    public static Message checkLargeMessage(Message message, StorageManager storageManager) throws Exception {
>       if (message.isLargeMessage()) {
>          return message; // nothing to be done on this case
>       }
>       if (message.getEncodeSize() + ESTIMATE_RECORD_TRAIL > storageManager.getMaxRecordSize()) {
>          return asLargeMessage(message, storageManager);
>       } else {
>          return message;
>       }
>    }{code}
> As a workaround, it is possible to reconfigure the broker journal with the 
> following settings, so that {{storageManager.getMaxRecordSize()}} always 
> returns a large value, ensuring that a message is never converted to a large 
> message:
> {code:xml}
>         <journal-file-size>2147483647</journal-file-size>
>         <journal-buffer-size>2147483647</journal-buffer-size>
> {code}
> It is not clear to me why the broker converts the message to a large message 
> even if the client does not require it, nor why the broker decides this based 
> on the record size of the storage, given that only non-persistent messages 
> are used in this scenario. 
> Possibly related to ARTEMIS-3809.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
