[ https://issues.apache.org/jira/browse/HDFS-17293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17809229#comment-17809229 ]
ASF GitHub Bot commented on HDFS-17293:
---------------------------------------

hfutatzhanghb commented on PR #6368:
URL: https://github.com/apache/hadoop/pull/6368#issuecomment-1903059289

> > Sir, very nice catch. I think the code below may resolve the problem you
> > found. Please take a look when you are free. I will submit another PR to
> > fix it and add a UT.
> >
> > ```java
> > if (!getStreamer().getAppendChunk()) {
> >   int psize = 0;
> >   if (blockSize == getStreamer().getBytesCurBlock()) {
> >     psize = writePacketSize;
> >   } else if (blockSize - getStreamer().getBytesCurBlock()
> >       + PacketHeader.PKT_MAX_HEADER_LEN < writePacketSize) {
> >     psize = (int) (blockSize - getStreamer().getBytesCurBlock())
> >         + PacketHeader.PKT_MAX_HEADER_LEN;
> >   } else {
> >     psize = (int) Math
> >         .min(blockSize - getStreamer().getBytesCurBlock(), writePacketSize);
> >   }
> >   computePacketChunkSize(psize, bytesPerChecksum);
> > }
> > ```
>
> Thank you very much for investing your time in fixing these bugs. The above
> fix did not take `ChecksumSize` into account, and it would be better for us
> to discuss this issue in the new PR. Please check whether the failed tests
> are related to the modifications in this PR. Thanks again.

@zhangshuyan0 Sir, agreed, let's discuss this issue in the new PR. The failed
tests all passed in my local environment.

> First packet data + checksum size will be set to 516 bytes when writing to a
> new block.
> ---------------------------------------------------------------------------
>
>                 Key: HDFS-17293
>                 URL: https://issues.apache.org/jira/browse/HDFS-17293
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>    Affects Versions: 3.3.6
>            Reporter: farmmamba
>            Assignee: farmmamba
>            Priority: Major
>              Labels: pull-request-available
>
> The first packet size will be set to 516 bytes when writing to a new block.
> In the method computePacketChunkSize, the parameters psize and csize would
> be (0, 512) when writing to a new block. It would be better to use
> writePacketSize.
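For readers following along: the discussion above is about choosing a packet body size that is capped by both writePacketSize and the bytes remaining in the block, while also accounting for per-chunk checksum bytes (which the quoted patch omitted). Below is a minimal, standalone Java sketch of that sizing idea. The method choosePacketSize, the CHECKSUM_SIZE constant, and the hard-coded PKT_MAX_HEADER_LEN value are illustrative assumptions for this sketch, not the actual DFSOutputStream code.

```java
// Hypothetical sketch of the packet sizing discussed in this thread.
// It caps the packet at the bytes left in the block and, unlike the quoted
// patch, also budgets for per-chunk checksum bytes. Constants mirror HDFS
// defaults but are assumptions of this sketch, not the real implementation.
public class PacketSizeSketch {

  // Assumed upper bound on the packet header (cf. PacketHeader.PKT_MAX_HEADER_LEN).
  static final int PKT_MAX_HEADER_LEN = 33;
  // Assumed checksum bytes per chunk (CRC32 is 4 bytes).
  static final int CHECKSUM_SIZE = 4;

  /**
   * Chooses the packet size for the next write: the default writePacketSize,
   * unless the remaining bytes in the block (plus their checksums and the
   * packet header) fit in a smaller packet.
   */
  static int choosePacketSize(long blockSize, long bytesCurBlock,
                              int writePacketSize, int bytesPerChecksum) {
    long remaining = blockSize - bytesCurBlock;
    if (remaining <= 0) {
      // At (or past) the block boundary: fall back to the default size.
      return writePacketSize;
    }
    // Chunks needed for the remaining data, rounded up.
    long chunks = (remaining + bytesPerChecksum - 1) / bytesPerChecksum;
    // Size of a packet carrying all remaining data, checksums included.
    long lastPacket = remaining + chunks * CHECKSUM_SIZE + PKT_MAX_HEADER_LEN;
    return (int) Math.min(lastPacket, writePacketSize);
  }

  public static void main(String[] args) {
    // New block, nothing written yet: use the full writePacketSize rather
    // than degrading to a single-chunk (512 + 4 = 516 byte) packet.
    System.out.println(choosePacketSize(128L * 1024 * 1024, 0, 65536, 512));
    // Near the end of a block: the packet shrinks to fit the leftover bytes.
    System.out.println(choosePacketSize(128L * 1024 * 1024,
        128L * 1024 * 1024 - 1000, 65536, 512));
  }
}
```

With a 128 MB block and 1000 bytes remaining, the second call yields 1000 data bytes + 2 chunk checksums (8 bytes) + 33 header bytes = 1041, instead of the full 65536.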
--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org