[ https://issues.apache.org/jira/browse/HDFS-1630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13018450#comment-13018450 ]

Hudson commented on HDFS-1630:
------------------------------

Integrated in Hadoop-Hdfs-trunk-Commit #587 (See 
[https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/587/])
    HDFS-1630. Support fsedits checksum. Contributed by Hairong Kuang.


> Checksum fsedits
> ----------------
>
>                 Key: HDFS-1630
>                 URL: https://issues.apache.org/jira/browse/HDFS-1630
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: name-node
>            Reporter: Hairong Kuang
>            Assignee: Hairong Kuang
>             Fix For: 0.23.0
>
>         Attachments: editsChecksum.patch, editsChecksum1.patch, 
> editsChecksum2.patch
>
>
> HDFS-903 calculates an MD5 checksum for a saved image, so that we can verify 
> the integrity of the image at load time.
> The other half of the story is how to verify fsedits. Similarly, we could use 
> the checksum approach, but since an fsedits file grows constantly, a 
> checksum per file does not work. I am thinking of adding a checksum per 
> transaction. Is that doable, or too expensive?

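The per-transaction checksum idea above can be illustrated with a short sketch. This is only an illustration using standard Java APIs; the attached editsChecksum patches define the actual record format, and the class and method names below (EditLogWriter, writeTransaction) are made up for this example rather than taken from the HDFS code.

    import java.io.DataOutputStream;
    import java.io.IOException;
    import java.util.zip.CRC32;

    /**
     * Sketch: append each serialized transaction together with a checksum
     * that covers only that transaction, so a growing edits file can be
     * verified record by record at load time.
     */
    public class EditLogWriter {
      private final DataOutputStream out;

      public EditLogWriter(DataOutputStream out) {
        this.out = out;
      }

      /** Writes one transaction: length prefix, payload, then its CRC32. */
      public void writeTransaction(byte[] txnBytes) throws IOException {
        CRC32 crc = new CRC32();
        crc.update(txnBytes, 0, txnBytes.length);
        out.writeInt(txnBytes.length); // lets the reader re-frame the record
        out.write(txnBytes);           // serialized transaction payload
        out.writeLong(crc.getValue()); // checksum for this transaction only
      }
    }

On load, the reader would recompute the checksum over each payload and compare it with the stored value, so corruption is detected at the granularity of a single transaction rather than the whole file.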
--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
