Github user squito commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21593#discussion_r196637144
  
    --- Diff: 
common/network-common/src/main/java/org/apache/spark/network/protocol/MessageWithHeader.java
 ---
    @@ -137,30 +137,15 @@ protected void deallocate() {
       }
     
       private int copyByteBuf(ByteBuf buf, WritableByteChannel target) throws 
IOException {
    -    ByteBuffer buffer = buf.nioBuffer();
    -    int written = (buffer.remaining() <= NIO_BUFFER_LIMIT) ?
    -      target.write(buffer) : writeNioBuffer(target, buffer);
    +    // SPARK-24578: cap the sub-region's size of returned nio buffer to 
improve the performance
    +    // for the case that the passed-in buffer has too many components.
    +    int length = Math.min(buf.readableBytes(), NIO_BUFFER_LIMIT);
    +    ByteBuffer buffer = buf.nioBuffer(buf.readerIndex(), length);
    --- End diff --
    
    I think you can go one step further here, and call `buf.nioBuffers(int, 
int)` (plural) 
    
https://github.com/netty/netty/blob/4.1/buffer/src/main/java/io/netty/buffer/ByteBuf.java#L2355
    
    that will avoid the copying required to create the merged buffer (though 
it's a bit more involved, since you have to check for an incomplete write 
after each individual `target.write()` call and resume from where it stopped).
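    
    A rough sketch of what the incomplete-write handling could look like 
(hypothetical helper using only `java.nio`, not the actual patch; the 
`ByteBuffer[]` stands in for the array returned by `buf.nioBuffers(int, int)`):
    
    ```java
    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.Channels;
    import java.nio.channels.WritableByteChannel;
    
    public class GatherWriteSketch {
    
      // Writes the given buffers to the channel one by one, stopping at the
      // first incomplete write so the caller can retry the remainder later.
      // Returns the total number of bytes accepted by the channel.
      static long writeBuffers(ByteBuffer[] buffers, WritableByteChannel target)
          throws IOException {
        long totalWritten = 0;
        for (ByteBuffer buffer : buffers) {
          totalWritten += target.write(buffer);
          if (buffer.hasRemaining()) {
            // Partial write: the channel accepted fewer bytes than available.
            // Stop here; remaining buffers must be retried on the next call.
            break;
          }
        }
        return totalWritten;
      }
    
      public static void main(String[] args) throws IOException {
        ByteBuffer[] buffers = {
          ByteBuffer.wrap("hello ".getBytes()),
          ByteBuffer.wrap("world".getBytes())
        };
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (WritableByteChannel target = Channels.newChannel(out)) {
          long written = writeBuffers(buffers, target);
          System.out.println(written + " bytes written: " + out.toString());
        }
      }
    }
    ```
    
    This avoids merging the components into one contiguous buffer, at the cost 
of tracking how far the write got when the channel's socket buffer fills up.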
    
    Also, it's OK to leave this for now, as this is a pretty important fix.


---

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
