[
https://issues.apache.org/jira/browse/HADOOP-1134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Raghu Angadi updated HADOOP-1134:
---------------------------------
Attachment: BlockCrcFeatureTestPlan.pdf
Attached test plan (pdf).
> Block level CRCs in HDFS
> ------------------------
>
> Key: HADOOP-1134
> URL: https://issues.apache.org/jira/browse/HADOOP-1134
> Project: Hadoop
> Issue Type: New Feature
> Components: dfs
> Reporter: Raghu Angadi
> Assignee: Raghu Angadi
> Fix For: 0.14.0
>
> Attachments: BlockCrcFeatureTestPlan.pdf,
> BlockLevelCrc-07032007.patch, BlockLevelCrc-07052007.patch,
> BlockLevelCrc-07062007.patch, BlockLevelCrc-07102007.patch,
> BlockLevelCrc-07122007.patch, DfsBlockCrcDesign.htm, HADOOP-1134-01.patch,
> HADOOP-1134-02.patch, HADOOP-1134-03.patch, readBuffer.java
>
>
> Currently CRCs are handled at the FileSystem level and are transparent to core
> HDFS. See the recent improvement HADOOP-928 (which can add checksums to a given
> filesystem) for more about it. Though this has served us well, there are a few
> disadvantages:
> 1) This doubles the namespace in HDFS (or other filesystem implementations). In
> many cases, it nearly doubles the number of blocks. Taking the namenode out of
> CRC handling would nearly double namespace performance, both in terms of CPU
> and memory.
> 2) Since CRCs are transparent to HDFS, it cannot actively detect corrupted
> blocks. With block-level CRCs, the Datanode can periodically verify the
> checksums and report corruptions to the namenode so that new replicas can be
> created (an illustrative sketch of such verification follows below).
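> By way of illustration only (not the actual design), periodic verification on
> the Datanode could look roughly like the Java sketch below. The class name,
> the side-file layout, and the 64KB-chunk / 4-byte-CRC format are assumptions
> made for this example:
>
>   import java.io.DataInputStream;
>   import java.io.FileInputStream;
>   import java.io.IOException;
>   import java.io.InputStream;
>   import java.util.zip.CRC32;
>
>   // Sketch only: verify a block file against a side file of per-chunk CRC32
>   // values. The 64KB chunk size, file layout and class name are assumptions.
>   public class BlockCrcVerifier {
>
>     private static final int CHUNK_SIZE = 64 * 1024;
>
>     // Returns true if every chunk of the block matches its stored CRC.
>     public static boolean verify(String blockFile, String crcFile) throws IOException {
>       FileInputStream block = new FileInputStream(blockFile);
>       DataInputStream crcs = new DataInputStream(new FileInputStream(crcFile));
>       try {
>         byte[] buf = new byte[CHUNK_SIZE];
>         int n;
>         while ((n = readChunk(block, buf)) > 0) {
>           CRC32 crc = new CRC32();
>           crc.update(buf, 0, n);
>           long stored = crcs.readInt() & 0xffffffffL;  // stored as a 4-byte CRC
>           if (crc.getValue() != stored) {
>             return false;  // corruption; Datanode would report this block to the namenode
>           }
>         }
>         return true;
>       } finally {
>         block.close();
>         crcs.close();
>       }
>     }
>
>     // Fill the buffer as far as possible; returns the number of bytes read.
>     private static int readChunk(InputStream in, byte[] buf) throws IOException {
>       int total = 0;
>       while (total < buf.length) {
>         int n = in.read(buf, total, buf.length - total);
>         if (n < 0) break;
>         total += n;
>       }
>       return total;
>     }
>   }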
> We propose to have CRCs maintained for all HDFS data in much the same way as
> in GFS. I will update the jira with detailed requirements and design. This
> will provide the same guarantees as the current implementation and will
> include an upgrade of existing data.
>
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.