[ https://issues.apache.org/jira/browse/HDFS-7308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Takuya Fukudome updated HDFS-7308:
----------------------------------
    Attachment: HDFS-7308.2.patch

Hi [~szetszwo] and [~yzhangal],
Thank you for reviewing. I have added a test and attached a new patch.

> DFSClient write packet size may > 64kB
> --------------------------------------
>
>                 Key: HDFS-7308
>                 URL: https://issues.apache.org/jira/browse/HDFS-7308
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: hdfs-client
>            Reporter: Tsz Wo Nicholas Sze
>            Assignee: Takuya Fukudome
>            Priority: Minor
>         Attachments: HDFS-7308.1.patch, HDFS-7308.2.patch
>
>
> In DFSOutputStream.computePacketChunkSize(..),
> {code}
>   private void computePacketChunkSize(int psize, int csize) {
>     final int chunkSize = csize + getChecksumSize();
>     chunksPerPacket = Math.max(psize/chunkSize, 1);
>     packetSize = chunkSize*chunksPerPacket;
>     if (DFSClient.LOG.isDebugEnabled()) {
>       ...
>     }
>   }
> {code}
> We have the following
> || variables || usual values ||
> | psize | dfsClient.getConf().writePacketSize = 64kB |
> | csize | bytesPerChecksum = 512B |
> | getChecksumSize(), i.e. CRC size | 32B |
> | chunkSize = csize + getChecksumSize() | 544B (not a power of two) |
> | psize/chunkSize | 120.47 |
> | chunksPerPacket = max(psize/chunkSize, 1) | 120 |
> | packetSize = chunkSize*chunksPerPacket (not including header) | 65280B |
> | PacketHeader.PKT_MAX_HEADER_LEN | 33B |
> | actual packet size | 65280 + 33 = *65313* < 65536 = 64k |
> It is fortunate that the usual actual packet size, 65313, is < 64kB, but the 
> calculation above does not guarantee this in general (e.g. if 
> PKT_MAX_HEADER_LEN were 257, the actual packet size would be 65537 > 64kB). 
> We should fix the computation so that the actual packet size is always 
> guaranteed to be < 64kB.
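The arithmetic above can be sketched as runnable Java. The constants are taken from the table in the description; the "fixed" variant is only an illustrative sketch of one way to reserve the header length before dividing, not the attached patch:

```java
public class PacketSizeSketch {
    // Stand-in for getChecksumSize(), as listed in the table above.
    static final int CHECKSUM_SIZE = 32;

    // Mirrors computePacketChunkSize(..): the header length is not
    // accounted for, so body + header may exceed psize.
    static int currentPacketSize(int psize, int csize, int headerLen) {
        final int chunkSize = csize + CHECKSUM_SIZE;                // 544
        final int chunksPerPacket = Math.max(psize / chunkSize, 1); // 120
        return chunkSize * chunksPerPacket + headerLen;
    }

    // Illustrative fix (not the attached patch): subtract the maximum
    // header length before dividing, so chunk bytes plus header can
    // never exceed psize.
    static int fixedPacketSize(int psize, int csize, int headerLen) {
        final int chunkSize = csize + CHECKSUM_SIZE;
        final int chunksPerPacket = Math.max((psize - headerLen) / chunkSize, 1);
        return chunkSize * chunksPerPacket + headerLen;
    }

    public static void main(String[] args) {
        // Usual values: 65280 + 33 = 65313 < 65536, as in the table.
        System.out.println(currentPacketSize(65536, 512, 33));
        // Hypothetical 257B header: 65280 + 257 = 65537 > 65536.
        System.out.println(currentPacketSize(65536, 512, 257));
        // Reserving the header first keeps the total within 64kB: 64993.
        System.out.println(fixedPacketSize(65536, 512, 257));
    }
}
```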



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
