Github user vanzin commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22105#discussion_r210353176
  
    --- Diff: common/network-common/src/main/java/org/apache/spark/network/protocol/MessageWithHeader.java ---
    @@ -140,8 +140,24 @@ private int copyByteBuf(ByteBuf buf, WritableByteChannel target) throws IOExcept
         // SPARK-24578: cap the sub-region's size of returned nio buffer to improve the performance
         // for the case that the passed-in buffer has too many components.
         int length = Math.min(buf.readableBytes(), NIO_BUFFER_LIMIT);
    --- End diff ---
    
    That was only briefly discussed in the PR that Ryan linked above... the 
original code actually used 512k.
    
    I think Hadoop's limit is a little low, but maybe 256k is a bit high. IIRC 
socket buffers are 32k by default on Linux, so it seems unlikely you'd be able 
to write 256k in one call (ignoring what IOUtil does internally). But maybe in 
practice it works ok.
    
    If anyone has the time to test this out that would be great.
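
    For reference, a rough sketch of the capping pattern being discussed. The 256k value and the write/skip handling here are illustrative assumptions, not the exact MessageWithHeader implementation; only the Math.min cap before nioBuffer() is taken from the diff above.
    
        // A minimal sketch, not the real MessageWithHeader code: it only
        // illustrates capping the sub-region handed to nioBuffer().
        import java.io.IOException;
        import java.nio.ByteBuffer;
        import java.nio.channels.WritableByteChannel;
        
        import io.netty.buffer.ByteBuf;
        
        class CopyByteBufSketch {
          // Illustrative value; whether 256k is the right cap is what this
          // thread is debating.
          private static final int NIO_BUFFER_LIMIT = 256 * 1024;
        
          int copyByteBuf(ByteBuf buf, WritableByteChannel target) throws IOException {
            // Cap the sub-region so a composite ByteBuf with many components
            // isn't consolidated into one huge contiguous NIO buffer.
            int length = Math.min(buf.readableBytes(), NIO_BUFFER_LIMIT);
            ByteBuffer buffer = buf.nioBuffer(buf.readerIndex(), length);
            // Write whatever the channel accepts and advance the reader index
            // by the same amount; the caller loops until the buffer drains.
            int written = target.write(buffer);
            buf.skipBytes(written);
            return written;
          }
        }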


---
