Github user JoshRosen commented on the issue:

    https://github.com/apache/spark/pull/21175

No, I mean that the code here can simply follow the write call as straight-through code. We don't need to guard against exceptions here, because the duplicate of the buffer is used only by a single thread, so you can omit the try block and just concatenate the try contents with the finally contents. A minor point, but I wanted to comment because I was initially confused about where errors could occur and about thread safety / sharing, until I realized that the modified state does not escape this method.

On Mon, May 14, 2018 at 9:03 PM Wenchen Fan <notificati...@github.com> wrote:

> *@cloud-fan* commented on this pull request.
> ------------------------------
>
> In core/src/main/scala/org/apache/spark/util/io/ChunkedByteBuffer.scala
> <https://github.com/apache/spark/pull/21175#discussion_r188160044>:
>
> > @@ -63,10 +63,15 @@ private[spark] class ChunkedByteBuffer(var chunks: Array[ByteBuffer]) {
> >     */
> >    def writeFully(channel: WritableByteChannel): Unit = {
> >      for (bytes <- getChunks()) {
> > -      while (bytes.remaining() > 0) {
> > -        val ioSize = Math.min(bytes.remaining(), bufferWriteChunkSize)
> > -        bytes.limit(bytes.position() + ioSize)
> > -        channel.write(bytes)
> > +      val curChunkLimit = bytes.limit()
> > +      while (bytes.hasRemaining) {
> > +        try {
> > +          val ioSize = Math.min(bytes.remaining(), bufferWriteChunkSize)
> > +          bytes.limit(bytes.position() + ioSize)
> > +          channel.write(bytes)
> > +        } finally {
>
> Do you mean this is not a real bug that can cause a real workload to fail?
>
> You are receiving this because you commented.
> Reply to this email directly, view it on GitHub
> <https://github.com/apache/spark/pull/21175#discussion_r188160044>, or mute
> the thread
> <https://github.com/notifications/unsubscribe-auth/AADGPJvZNC5LYjHl2WZ44YEIBVGLrehEks5tylODgaJpZM4TptO_>.
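To illustrate the point being made, here is a minimal standalone sketch of the straight-through version: since each chunk is a `duplicate()` with its own private position and limit, mutating the limit cannot be observed outside this method, so the restore can follow the write directly with no try/finally. `WriteFullySketch` and its `bufferWriteChunkSize` constant are hypothetical stand-ins for the real `ChunkedByteBuffer` and its `spark.buffer.write.chunkSize`-derived field; this is not the actual Spark implementation.

```scala
import java.io.ByteArrayOutputStream
import java.nio.ByteBuffer
import java.nio.channels.{Channels, WritableByteChannel}

// Hypothetical sketch, not Spark's actual code.
object WriteFullySketch {
  // Stand-in for the value derived from spark.buffer.write.chunkSize.
  val bufferWriteChunkSize: Int = 64 * 1024

  def writeFully(chunks: Array[ByteBuffer], channel: WritableByteChannel): Unit = {
    for (original <- chunks) {
      // duplicate() shares the bytes but has its own position/limit,
      // so the mutation below never escapes this method.
      val bytes = original.duplicate()
      val curChunkLimit = bytes.limit()
      while (bytes.hasRemaining) {
        // Cap each write at bufferWriteChunkSize by shrinking the limit.
        val ioSize = math.min(bytes.remaining(), bufferWriteChunkSize)
        bytes.limit(bytes.position() + ioSize)
        channel.write(bytes)
        // Restore the full limit as straight-line code; no try/finally
        // is needed because no other thread can see this buffer's state.
        bytes.limit(curChunkLimit)
      }
    }
  }

  def main(args: Array[String]): Unit = {
    // Write 200 KB through the chunked path and check it arrives intact.
    val data = Array.tabulate(200000)(i => (i % 256).toByte)
    val out = new ByteArrayOutputStream()
    writeFully(Array(ByteBuffer.wrap(data)), Channels.newChannel(out))
    assert(out.toByteArray.sameElements(data))
    println("ok")
  }
}
```

Note that the `while` loop already tolerates partial writes: if `channel.write` consumes fewer than `ioSize` bytes, the next iteration simply recomputes the slice from the advanced position.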