[ https://issues.apache.org/jira/browse/HDFS-7308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14334569#comment-14334569 ]

Tsz Wo Nicholas Sze commented on HDFS-7308:
-------------------------------------------

Patch looks good to me.

[~stack], I wonder if you could repeat the test you have done for HDFS-7276 with the patch here to see if the packet size can go over 65536?

> DFSClient write packet size may > 64kB
> --------------------------------------
>
>                 Key: HDFS-7308
>                 URL: https://issues.apache.org/jira/browse/HDFS-7308
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: hdfs-client
>            Reporter: Tsz Wo Nicholas Sze
>            Assignee: Takuya Fukudome
>            Priority: Minor
>         Attachments: HDFS-7308.1.patch
>
>
> In DFSOutputStream.computePacketChunkSize(..),
> {code}
> private void computePacketChunkSize(int psize, int csize) {
>   final int chunkSize = csize + getChecksumSize();
>   chunksPerPacket = Math.max(psize/chunkSize, 1);
>   packetSize = chunkSize*chunksPerPacket;
>   if (DFSClient.LOG.isDebugEnabled()) {
>     ...
>   }
> }
> {code}
> We have the following
> || variables || usual values ||
> | psize | dfsClient.getConf().writePacketSize = 64kB |
> | csize | bytesPerChecksum = 512B |
> | getChecksumSize(), i.e. CRC size | 32B |
> | chunkSize = csize + getChecksumSize() | 544B (not a power of two) |
> | psize/chunkSize | 120.47 |
> | chunksPerPacket = max(psize/chunkSize, 1) | 120 |
> | packetSize = chunkSize*chunksPerPacket (not including header) | 65280B |
> | PacketHeader.PKT_MAX_HEADER_LEN | 33B |
> | actual packet size | 65280 + 33 = *65313* < 65536 = 64k |
> It is fortunate that the usual packet size = 65313 < 64k although the calculation above does not guarantee it always happens (e.g. if PKT_MAX_HEADER_LEN=257, then actual packet size=65537 > 64k.) We should fix the computation in order to guarantee actual packet size < 64k.
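
For reference, below is a small standalone sketch of the arithmetic in the table above, together with one possible guard (reserving PacketHeader.PKT_MAX_HEADER_LEN out of the packet budget before dividing). The constants and the guarded variant are only illustrative assumptions, not the contents of HDFS-7308.1.patch.

{code}
/**
 * Illustrative sketch only: reproduces the packet-size arithmetic quoted in
 * HDFS-7308 and shows a header-aware variant of the computation.
 */
public class PacketSizeSketch {
  // "Usual values" from the issue description (hard-coded here, not read from conf).
  static final int WRITE_PACKET_SIZE = 64 * 1024; // psize
  static final int BYTES_PER_CHECKSUM = 512;      // csize
  static final int CHECKSUM_SIZE = 32;            // getChecksumSize() as quoted above
  static final int PKT_MAX_HEADER_LEN = 33;       // PacketHeader.PKT_MAX_HEADER_LEN

  public static void main(String[] args) {
    final int chunkSize = BYTES_PER_CHECKSUM + CHECKSUM_SIZE;            // 544

    // Current computation: the header is not accounted for.
    int chunksPerPacket = Math.max(WRITE_PACKET_SIZE / chunkSize, 1);    // 120
    int packetSize = chunkSize * chunksPerPacket;                        // 65280
    System.out.println("current: " + (packetSize + PKT_MAX_HEADER_LEN)); // 65313

    // One possible guard: reserve the header before dividing, so that
    // packetSize + PKT_MAX_HEADER_LEN <= WRITE_PACKET_SIZE whenever at
    // least one full chunk fits into the remaining body budget.
    final int bodySize = WRITE_PACKET_SIZE - PKT_MAX_HEADER_LEN;         // 65503
    chunksPerPacket = Math.max(bodySize / chunkSize, 1);                 // 120
    packetSize = chunkSize * chunksPerPacket;                            // 65280
    System.out.println("guarded: " + (packetSize + PKT_MAX_HEADER_LEN)); // 65313
  }
}
{code}

With these particular values both variants produce 65313, but only the guarded variant keeps the total below writePacketSize for other header lengths (e.g. in the hypothetical PKT_MAX_HEADER_LEN=257 case from the description it yields 119 chunks and 64993 bytes instead of 65537).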