[ 
https://issues.apache.org/jira/browse/HDFS-14532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16858722#comment-16858722
 ] 

Kihwal Lee commented on HDFS-14532:
-----------------------------------

Assuming a 4-byte checksum per 512-byte data chunk, a 128 kB checksum buffer 
will hold checksum data covering 16 MB of block data.  That seems wasteful, 
and more so when reads are short and seek-heavy. 
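The arithmetic behind those numbers can be sketched as follows (the constants mirror the defaults mentioned in the comment — 4-byte CRC per 512-byte chunk — and are assumptions for illustration, not values read from the DataNode code):

```python
# Back-of-the-envelope check: how much block data one 128 kB
# checksum buffer covers, given the assumed HDFS defaults.
BYTES_PER_CHECKSUM = 512          # data chunk covered by one checksum
CHECKSUM_SIZE = 4                 # CRC32 checksum is 4 bytes
BUFFER_SIZE = 128 * 1024          # 128 kB checksum buffer

checksums_held = BUFFER_SIZE // CHECKSUM_SIZE        # 32768 checksums
data_covered = checksums_held * BYTES_PER_CHECKSUM   # bytes of data covered

print(data_covered // (1024 * 1024))  # → 16 (MB)
```

A seek-heavy reader that touches only a few kilobytes per read still pays for buffering checksums spanning 16 MB, which is why a smaller buffer looks preferable for that workload.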

> Datanode's BlockSender checksum buffer is too big
> -------------------------------------------------
>
>                 Key: HDFS-14532
>                 URL: https://issues.apache.org/jira/browse/HDFS-14532
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>            Reporter: Daryn Sharp
>            Priority: Major
>         Attachments: Screen Shot 2019-05-31 at 12.32.06 PM.png
>
>
> The BlockSender uses an excessively large 128K buffered input stream – 99% of 
> the entire instance.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
