[ 
https://issues.apache.org/jira/browse/ARTEMIS-3449?focusedWorklogId=643519&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-643519
 ]

ASF GitHub Bot logged work on ARTEMIS-3449:
-------------------------------------------

                Author: ASF GitHub Bot
            Created on: 30/Aug/21 15:26
            Start Date: 30/Aug/21 15:26
    Worklog Time Spent: 10m 
      Work Description: franz1981 commented on a change in pull request #3711:
URL: https://github.com/apache/activemq-artemis/pull/3711#discussion_r698583729



##########
File path: artemis-protocols/artemis-amqp-protocol/src/main/java/org/apache/activemq/artemis/protocol/amqp/proton/ProtonServerSenderContext.java
##########
@@ -587,43 +587,48 @@ void deliver() {
          LargeBodyReader context = message.getLargeBodyReader();
          try {
             context.open();
+            final ByteBuf tmpFrameBuf = PooledByteBufAllocator.DEFAULT.directBuffer(frameSize);
+            final NettyReadable nettyReadable = new NettyReadable(tmpFrameBuf);
             try {
+
                context.position(position);
                long bodySize = context.getSize();
-
-               ByteBuffer buf = ByteBuffer.allocate(frameSize);
+               // materialize it so we can use its internal NIO buffer
+               tmpFrameBuf.ensureWritable(frameSize);
 
                for (; sender.getLocalState() != EndpointState.CLOSED && position < bodySize; ) {
                   if (!connection.flowControl(this::resume)) {
                      context.close();
                      return;
                   }
-                  buf.clear();
-                  int size = 0;
-
-                  try {
-                     if (position == 0) {
-                        replaceInitialHeader(deliveryAnnotationsToEncode, context, WritableBuffer.ByteBufferWrapper.wrap(buf));
-                     }
-                     size = context.readInto(buf);
-
-                     sender.send(new ReadableBuffer.ByteBufferReader(buf));
-                     position += size;
-                  } catch (java.nio.BufferOverflowException overflowException) {
-                     if (position == 0) {
-                        if (log.isDebugEnabled()) {
-                           log.debug("Delivery of message failed with an overFlowException, retrying again with expandable buffer");
-                        }
-                        // on the very first packet, if the initial header was replaced with a much bigger header (re-encoding)
-                        // we could recover the situation with a retry using an expandable buffer.
-                        // this is tested on org.apache.activemq.artemis.tests.integration.amqp.AmqpMessageDivertsTest
-                        size = retryInitialPacketWithExpandableBuffer(deliveryAnnotationsToEncode, context, buf);
-                     } else {
-                        // if this is not the position 0, something is going on
-                        // we just forward the exception as this is not supposed to happen
-                        throw overflowException;
+                  // using internalNioBuffer to save creating a new ByteBuffer duplicate/slice/view in the loop
+                  ByteBuffer nioBuffer = tmpFrameBuf.internalNioBuffer(0, frameSize);
+                  int bufPosition = nioBuffer.position();
+                  tmpFrameBuf.clear();
+                  final int writtenBytes;
+                  if (position == 0) {
+                     // no need to cache NettyWritable: position should be 0 just once per large message file
+                     replaceInitialHeader(deliveryAnnotationsToEncode, context, new NettyWritable(tmpFrameBuf));
+                     writtenBytes = tmpFrameBuf.writerIndex();
+                     // tested on org.apache.activemq.artemis.tests.integration.amqp.AmqpMessageDivertsTest:
+                     // tmpFrameBuf can grow over the initial capacity
+                     if (nioBuffer.remaining() < writtenBytes) {
+                        // ensure reading at least frameSize from the file
+                        tmpFrameBuf.ensureWritable(frameSize);

Review comment:
       This has to be done because of how Netty works with internalNioBuffer: it
forces the Netty buffer to be materialized from the EMPTY singleton into a proper
one. I still need to confirm that's still valid, TBH.
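
   As a side note, here is a minimal, self-contained sketch of the pattern under
discussion (my own illustration, not the Artemis code; the class name and the
64 KiB frame size are made up): ensureWritable is called before internalNioBuffer
so the pooled buffer is guaranteed to have at least frameSize writable bytes behind
it, and the internal NIO view is then reused instead of allocating a fresh
duplicate/slice per loop iteration.

import io.netty.buffer.ByteBuf;
import io.netty.buffer.PooledByteBufAllocator;

import java.nio.ByteBuffer;

public class InternalNioBufferSketch {
   public static void main(String[] args) {
      final int frameSize = 64 * 1024; // illustrative only, not an Artemis default
      final ByteBuf tmpFrameBuf = PooledByteBufAllocator.DEFAULT.directBuffer(frameSize);
      try {
         // guarantee at least frameSize writable bytes before exposing the internal NIO view
         tmpFrameBuf.ensureWritable(frameSize);
         // internalNioBuffer hands back a window over the buffer's own memory,
         // reusing a cached view rather than creating a new duplicate/slice on every call
         ByteBuffer view = tmpFrameBuf.internalNioBuffer(0, frameSize);
         System.out.println("writable window = " + view.remaining()); // prints frameSize
         // a FileChannel.read(view, filePosition) could now fill this window directly;
         // tmpFrameBuf.writerIndex(bytesRead) would then make those bytes readable for sending
      } finally {
         tmpFrameBuf.release();
      }
   }
}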
   
   > This change seems like it will specifically undo that by making the buffer perpetually the wrong size and make every subsequent write for the message use an off-sized buffer leading to oddball and less efficient transfer framing behaviour.
   
   I'm not sure about this, really: this change is going to send `frameSize` bytes
(or fewer, depending on the remaining data in the large file) per transfer and, if
https://issues.apache.org/jira/browse/ARTEMIS-3026 happens, only the very first
transfer would be `encoded header + frameSize` long, while the subsequent ones
would be `frameSize`-sized as expected.
   It looks very similar to the original code behaviour to me, just without the
exceptional code path.
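
   To put made-up numbers on it (my reading of the above): with `frameSize` =
128 KiB and a 1 MiB large message body, this path would produce one initial
transfer of roughly `encoded header + 128 KiB` (once ARTEMIS-3026 is in) and plain
128 KiB transfers afterwards, rather than every transfer ending up off-sized.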
   
   
   
   




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@activemq.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 643519)
    Time Spent: 2h 20m  (was: 2h 10m)

> Speedup AMQP large message streaming
> ------------------------------------
>
>                 Key: ARTEMIS-3449
>                 URL: https://issues.apache.org/jira/browse/ARTEMIS-3449
>             Project: ActiveMQ Artemis
>          Issue Type: Improvement
>            Reporter: Francesco Nigro
>            Assignee: Francesco Nigro
>            Priority: Major
>          Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> AMQP is using unpooled heap ByteBuffer(s) to stream AMQP large messages: 
> given that the underlying NIO sequential file can use either FileChannel or 
> RandomAccessFile (depending on whether the ByteBuffer is direct or heap 
> based), both paths would benefit from using Netty pooled direct buffers, 
> saving the additional copies performed by RandomAccessFile and reducing GC 
> pressure too.
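
A rough sketch of the direction described above (my own illustration under
assumptions, not the actual Artemis change; the file name, chunk size and class
name are invented): the large message file is read in chunks through a Netty
pooled direct buffer, so the FileChannel read targets the buffer's memory directly
instead of going through the extra copy implied by the heap-buffer/RandomAccessFile
path.

import io.netty.buffer.ByteBuf;
import io.netty.buffer.PooledByteBufAllocator;

import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class PooledLargeMessageReadSketch {
   public static void main(String[] args) throws Exception {
      final int chunkSize = 64 * 1024; // illustrative chunk size

      // shown only for contrast: an unpooled heap buffer forces the file layer to
      // copy through intermediate memory before the data lands here (unused below)
      final ByteBuffer unpooledHeapChunk = ByteBuffer.allocate(chunkSize);

      // pooled direct buffer: taken from (and returned to) Netty's pool, and usable
      // as the direct target of the native read
      final ByteBuf pooledChunk = PooledByteBufAllocator.DEFAULT.directBuffer(chunkSize);
      try (FileChannel file = FileChannel.open(Paths.get("large-message.bin"), StandardOpenOption.READ)) {
         // make sure at least chunkSize bytes are writable before using the internal NIO view
         pooledChunk.ensureWritable(chunkSize);
         long position = 0;
         final long bodySize = file.size();
         while (position < bodySize) {
            pooledChunk.clear();
            // read the next chunk of the file straight into the pooled direct memory
            final int read = file.read(pooledChunk.internalNioBuffer(0, chunkSize), position);
            if (read <= 0) {
               break;
            }
            pooledChunk.writerIndex(read);
            // ... encode and send the chunk here, then move on to the next position ...
            position += read;
         }
      } finally {
         pooledChunk.release();
      }
   }
}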



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
