hanishakoneru commented on a change in pull request #1685:
URL: https://github.com/apache/ozone/pull/1685#discussion_r553025593
##########
File path:
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/ChunkInputStream.java
##########
@@ -301,36 +348,38 @@ private synchronized void readChunkFromContainer(int len)
throws IOException {
startByteIndex = chunkPosition;
} else {
// Start reading the chunk from the last chunkPosition onwards.
- startByteIndex = bufferOffset + bufferLength;
+ startByteIndex = bufferOffsetWrtChunkData + bufferLength;
}
- // bufferOffset and bufferLength are updated below, but if read fails
+ // bufferOffsetWrtChunkData and bufferLength are updated after the data
+ // is read from Container and put into the buffers, but if read fails
// and is retried, we need the previous position. Position is reset after
// successful read in adjustBufferPosition()
storePosition();
+ long adjustedBuffersOffset, adjustedBuffersLen;
if (verifyChecksum) {
- // Update the bufferOffset and bufferLength as per the checksum
- // boundary requirement.
- computeChecksumBoundaries(startByteIndex, len);
+ // Adjust the chunk offset and length to include required checksum
+ // boundaries
+ Pair<Long, Long> adjustedOffsetAndLength =
+ computeChecksumBoundaries(startByteIndex, len);
+ adjustedBuffersOffset = adjustedOffsetAndLength.getLeft();
+ adjustedBuffersLen = adjustedOffsetAndLength.getRight();
} else {
// Read from the startByteIndex
- bufferOffset = startByteIndex;
- bufferLength = len;
+ adjustedBuffersOffset = startByteIndex;
+ adjustedBuffersLen = len;
}
// Adjust the chunkInfo so that only the required bytes are read from
// the chunk.
final ChunkInfo adjustedChunkInfo = ChunkInfo.newBuilder(chunkInfo)
- .setOffset(bufferOffset + chunkInfo.getOffset())
- .setLen(bufferLength)
+ .setOffset(adjustedBuffersOffset + chunkInfo.getOffset())
+ .setLen(adjustedBuffersLen)
.build();
- ByteString byteString = readChunk(adjustedChunkInfo);
-
- buffers = byteString.asReadOnlyByteBufferList();
- bufferIndex = 0;
- allocated = true;
+ byte[] chunkData = readChunk(adjustedChunkInfo).toByteArray();
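For context on what the `computeChecksumBoundaries` call in the hunk above returns: below is a minimal sketch of checksum-boundary alignment, assuming the chunk's `bytesPerChecksum` is available from its checksum metadata. The class name, method name, parameters, and exact rounding are illustrative assumptions, not the PR's actual implementation.
```java
import org.apache.commons.lang3.tuple.Pair;

// Sketch only: aligns a requested [startByteIndex, startByteIndex + len)
// range so that whole checksum blocks are read, since checksums can only be
// verified on complete bytesPerChecksum-sized blocks.
final class ChecksumBoundarySketch {

  static Pair<Long, Long> alignToChecksumBoundary(
      long startByteIndex, long len, long bytesPerChecksum, long chunkLen) {
    // Round the start of the range down to a checksum boundary.
    long alignedStart = (startByteIndex / bytesPerChecksum) * bytesPerChecksum;
    // Round the end of the range up to a checksum boundary, capped at the
    // chunk length so we never read past the end of the chunk.
    long alignedEnd = Math.min(chunkLen,
        ((startByteIndex + len + bytesPerChecksum - 1) / bytesPerChecksum)
            * bytesPerChecksum);
    // Left = adjusted offset within the chunk, Right = adjusted length.
    return Pair.of(alignedStart, alignedEnd - alignedStart);
  }
}
```
The left and right values would play the same role as `adjustedBuffersOffset` and `adjustedBuffersLen` in the hunk above when building the adjusted `ChunkInfo`.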
Review comment:
```
What if we read in small buffers on the server side itself and send it
across as a list of bytestrings to the client?
```
This might work, but it would require a change in the DN-Client protocol. We would have to analyze the compatibility issues and how to address them.
```
In case, this turns out to be unavoidable, we can also think about doing
bytebuffer.compact() which also does an intrinsic buffer copy to release the
buffers but the logic would be more simpler
```
I am not sure there is much gain in doing this. The code changes in this
PR were required because the existing logic was inaccurate; it only worked
because there was always a single ByteBuffer.
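To make the trade-off concrete, here is a minimal sketch of the kind of client-side buffer copy being discussed: copying the chunk data into independent fixed-capacity buffers so that each one can be released as soon as it has been fully read. The helper name and buffer capacity are assumptions for illustration, not code from this PR.
```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

// Sketch only: splits the chunk data returned by the DN into independent
// read-only buffers, at the cost of one extra copy per buffer.
final class BufferSplitSketch {

  static List<ByteBuffer> copyIntoBuffers(byte[] chunkData, int bufferCapacity) {
    List<ByteBuffer> buffers = new ArrayList<>();
    for (int offset = 0; offset < chunkData.length; offset += bufferCapacity) {
      int length = Math.min(bufferCapacity, chunkData.length - offset);
      ByteBuffer buf = ByteBuffer.allocate(length);
      // Copy this slice out of the shared byte[] so the buffer owns its data
      // and can be dropped independently once it has been read.
      buf.put(chunkData, offset, length);
      buf.flip();
      buffers.add(buf.asReadOnlyBuffer());
    }
    return buffers;
  }
}
```
Whether the extra copies are worth it depends on how much memory the pending buffers hold onto, which is the open question in this thread.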