[ https://issues.apache.org/jira/browse/HADOOP-2758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12572347#action_12572347 ]

Konstantin Shvachko commented on HADOOP-2758:
---------------------------------------------

# The comments are very useful, thanks.
# "4 byte Legth of actual data" should read Le_n_gth.
# BlockSender
#- The *sendBuf* member should be a local variable of sendChunks(). 
We always call the BlockSender() constructor followed by only one call to 
sendChunks(), so there is no advantage to allocating the buffer in the 
constructor and then using it in sendChunks().
#- Same with *maxChunksPerPacket*.
# +1 other than that
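The point in item 3 can be sketched as follows. This is a minimal illustration, not the actual BlockSender code from the patch; the field names sendBuf and sendChunks() come from the comment above, while PACKET_SIZE and the method body are assumed for the example:

```java
import java.nio.ByteBuffer;

class BlockSenderSketch {
    // Assumed packet size for illustration only.
    static final int PACKET_SIZE = 64 * 1024;

    // Before: the buffer is allocated in the constructor and held as a
    // member field for the object's whole lifetime, even though only
    // sendChunks() ever uses it:
    //
    //     private final ByteBuffer sendBuf = ByteBuffer.allocate(PACKET_SIZE);

    long sendChunks() {
        // After: since construction is always followed by exactly one call
        // to sendChunks(), the buffer can be a local variable here, scoped
        // to the single method that needs it.
        ByteBuffer sendBuf = ByteBuffer.allocate(PACKET_SIZE);
        // ... fill sendBuf from the block file and write it to the socket ...
        return sendBuf.capacity();
    }
}
```

The same reasoning applies to maxChunksPerPacket: a value computed once and consumed only inside sendChunks() need not outlive that call as object state.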

> Reduce memory copies when data is read from DFS
> -----------------------------------------------
>
>                 Key: HADOOP-2758
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2758
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: dfs
>            Reporter: Raghu Angadi
>            Assignee: Raghu Angadi
>             Fix For: 0.17.0
>
>         Attachments: HADOOP-2758.patch, HADOOP-2758.patch, HADOOP-2758.patch
>
>
> Currently the datanode and client parts of DFS perform multiple copies of 
> data on the 'read path' (i.e. the path from storage on the datanode to the 
> user buffer on the client). This jira reduces these copies by enhancing the 
> data read protocol and the read implementation on both the datanode and the 
> client. I will describe the changes in the next comment.
> The requirement is that this fix should reduce CPU usage and should not 
> cause a regression in any benchmarks. It might not improve the benchmarks, 
> since most benchmarks are not CPU bound.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
