[ 
https://issues.apache.org/jira/browse/HDFS-6934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14162727#comment-14162727
 ] 

Jing Zhao commented on HDFS-6934:
---------------------------------

The patch looks pretty good to me. Some comments and questions:
# Maybe we should list the changes made by the patch in the jira. Since the 
patch changes both the client and the DataNode code, that will make it easier 
for us to revisit the jira in the future. 
# Do we also need to handle the checksum for short circuit read?
# It would be better to have a javadoc in {{BlockSender#BlockSender}} 
explaining the idea behind the length check.
{code}
-          checksumIn = new DataInputStream(
-              new BufferedInputStream(metaIn, HdfsConstants.IO_FILE_BUFFER_SIZE));
+          if (metaIn.getLength() > BlockMetadataHeader.getHeaderSize()) {
+            checksumIn = new DataInputStream(new BufferedInputStream(
+                metaIn, HdfsConstants.IO_FILE_BUFFER_SIZE));
{code}
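For instance, the javadoc could read something like the following (the exact wording is just a suggestion, based on the description of this jira):
{code}
/**
 * Only set up checksumIn when the meta file actually contains checksum
 * data, i.e., when its length exceeds the header size. For replicas being
 * written to a local RAM disk the checksum computation is skipped on the
 * hot path (the lazy writer computes it later when moving the replica to
 * disk), so the meta file may contain only the header; in that case the
 * block is sent without checksum verification.
 */
{code}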
# It would be great to have more unit tests covering both the read and write 
scenarios.

> Move checksum computation off the hot path when writing to RAM disk
> -------------------------------------------------------------------
>
>                 Key: HDFS-6934
>                 URL: https://issues.apache.org/jira/browse/HDFS-6934
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: datanode
>            Reporter: Arpit Agarwal
>            Assignee: Tsz Wo Nicholas Sze
>         Attachments: h6934_20141003b.patch, h6934_20141005.patch
>
>
> Since local RAM is considered reliable we can avoid writing checksums on the 
> hot path when replicas are being written to a local RAM disk.
> The checksum can be computed by the lazy writer when moving replicas to disk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
