[ 
https://issues.apache.org/jira/browse/HDFS-17293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17800196#comment-17800196
 ] 

ASF GitHub Bot commented on HDFS-17293:
---------------------------------------

hfutatzhanghb commented on code in PR #6368:
URL: https://github.com/apache/hadoop/pull/6368#discussion_r1435849521


##########
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java:
##########
@@ -536,8 +536,13 @@ protected void adjustChunkBoundary() {
     }
 
     if (!getStreamer().getAppendChunk()) {
-      final int psize = (int) Math
-          .min(blockSize - getStreamer().getBytesCurBlock(), writePacketSize);
+      int psize = 0;
+      if (blockSize == getStreamer().getBytesCurBlock()) {

Review Comment:
   @Hexiaoqiao Sir, thanks for your reply. I will add unit tests soon when 
I am available.





> First packet data + checksum size will be set to 516 bytes when writing to a 
> new block.
> ---------------------------------------------------------------------------------------
>
>                 Key: HDFS-17293
>                 URL: https://issues.apache.org/jira/browse/HDFS-17293
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>    Affects Versions: 3.3.6
>            Reporter: farmmamba
>            Assignee: farmmamba
>            Priority: Major
>              Labels: pull-request-available
>
> First packet size will be set to 516 bytes when writing to a new block.
> In method computePacketChunkSize, the parameters psize and csize would be 
> (0, 512)
> when writing to a new block. It would be better to use writePacketSize.
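The arithmetic behind the report can be sketched as follows. This is a simplified illustration, not the actual DFSOutputStream code: the constants, the class name, and the helper method are assumptions mirroring typical HDFS defaults (512-byte chunks, 4-byte CRC checksums, 64 KB packets). It shows how the min() in adjustChunkBoundary yields psize == 0 right at a block boundary, so the next packet holds only a single 516-byte chunk (512 data + 4 checksum) instead of a full writePacketSize worth of chunks.

```java
// Hypothetical sketch of the packet-size computation discussed in HDFS-17293.
public class PacketSizeSketch {
  static final int BYTES_PER_CHECKSUM = 512;      // dfs.bytes-per-checksum default
  static final int CHECKSUM_SIZE = 4;             // CRC32/CRC32C checksum is 4 bytes
  static final long BLOCK_SIZE = 128L * 1024 * 1024;
  static final int WRITE_PACKET_SIZE = 64 * 1024; // dfs.client-write-packet-size default

  // Mirrors the min() from adjustChunkBoundary: when bytesCurBlock has just
  // reached blockSize, the remaining space is 0, so psize becomes 0 and
  // computePacketChunkSize falls back to a single chunk per packet.
  static int chunksPerPacket(long bytesCurBlock) {
    int psize = (int) Math.min(BLOCK_SIZE - bytesCurBlock, WRITE_PACKET_SIZE);
    int chunkSize = BYTES_PER_CHECKSUM + CHECKSUM_SIZE; // 516 bytes per chunk
    return Math.max(psize / chunkSize, 1);              // at least one chunk
  }

  public static void main(String[] args) {
    // Mid-block: a packet is packed with many chunks (65536 / 516 = 127).
    System.out.println(chunksPerPacket(0));           // 127
    // Block boundary just reached: psize == 0, only one 516-byte chunk.
    System.out.println(chunksPerPacket(BLOCK_SIZE));  // 1
  }
}
```

Under these assumptions, the first packet written to the new block carries just 516 bytes, which is what the issue proposes to avoid by sizing from writePacketSize instead.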



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
