Github user liyezhang556520 commented on the pull request:

    https://github.com/apache/spark/pull/12083#issuecomment-204632236

> So if we write a 1M buffer, it can only write NIO_BUFFER_LIMIT (512K) at a time, and we need to write the rest 512K again. So in this case we need to copy 1M + 512K bytes. If we divide the 1M buffer into 2 * 512K buffers, then we only need to copy 512K + 512K bytes. Is that correct?

@zsxwing, yes, that's right, so there will be a tremendous number of copies if the data to be written is huge.

> So basically, if the source buffer is not a direct buffer, that class is making a copy of the whole source buffer before trying to write it to the channel. That's, uh, a little silly, but I guess it's something we have to live with...

Yes, we have to live with it if the buffer is not a direct buffer.
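For illustration, here is a minimal sketch of the chunked-write idea being discussed: instead of handing the channel a whole 1M heap buffer (which the JDK would copy in full into a temporary direct buffer before each write), the buffer's limit is temporarily capped so each `write` call sees at most one chunk. The constant name `NIO_BUFFER_LIMIT` and the helper `writeChunked` are hypothetical names for this sketch, not the PR's actual code:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.Channels;
import java.nio.channels.WritableByteChannel;

public class ChunkedWrite {
    // Hypothetical constant mirroring the 512K threshold discussed above.
    static final int NIO_BUFFER_LIMIT = 512 * 1024;

    // Write src to the channel at most NIO_BUFFER_LIMIT bytes per call, so
    // the JDK's temporary direct-buffer copy of a heap buffer is bounded per
    // write instead of covering the whole remaining buffer every time.
    static long writeChunked(WritableByteChannel ch, ByteBuffer src) throws IOException {
        long written = 0;
        int originalLimit = src.limit();
        while (src.hasRemaining()) {
            int chunkEnd = Math.min(src.position() + NIO_BUFFER_LIMIT, originalLimit);
            src.limit(chunkEnd);          // expose at most one chunk
            written += ch.write(src);     // copies at most NIO_BUFFER_LIMIT bytes
            src.limit(originalLimit);     // restore for the next chunk
        }
        return written;
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        ByteBuffer buf = ByteBuffer.allocate(1024 * 1024); // 1M non-direct buffer
        long n = writeChunked(Channels.newChannel(out), buf);
        System.out.println(n + " bytes written, " + out.size() + " bytes received");
    }
}
```

With a 1M heap buffer this performs two 512K writes, so at most 512K is copied per call, matching the 512K + 512K total copy cost described in the quoted comment.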