[ https://issues.apache.org/jira/browse/HADOOP-1134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12482609 ]

Doug Cutting commented on HADOOP-1134:
--------------------------------------

Another thing to think about is reverting if the upgrade doesn't work.  If the 
upgrade purely adds new files next to block files then reversion is easy until 
you remove the old CRC files.  So the removal of the old CRC files should 
probably be a separate step, only performed after the rest of the upgrade is 
shown to be satisfactory.
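
For instance, the two steps could be kept apart roughly as below (a minimal 
sketch in Java; the file names, layout, and helpers are invented for 
illustration, not the actual HDFS upgrade code):

    import java.io.IOException;
    import java.nio.file.*;

    class ReversibleUpgradeSketch {

      // Step 1: add a new checksum file next to each block file. Nothing
      // existing is touched, so the old on-disk format stays intact.
      static void upgrade(Path dataDir) throws IOException {
        try (DirectoryStream<Path> blocks =
                 Files.newDirectoryStream(dataDir, "blk_*")) {
          for (Path block : blocks) {
            String name = block.getFileName().toString();
            if (name.endsWith(".meta")) continue;  // skip files we just added
            Files.write(dataDir.resolve(name + ".meta"),
                        newBlockChecksum(block));
          }
        }
      }

      // Reverting is trivial while the old CRC files still exist: just
      // delete everything the upgrade added.
      static void revert(Path dataDir) throws IOException {
        try (DirectoryStream<Path> metas =
                 Files.newDirectoryStream(dataDir, "blk_*.meta")) {
          for (Path meta : metas) {
            Files.delete(meta);
          }
        }
      }

      // The separate, final step: run only after the upgraded cluster has
      // been validated. Deleting the old CRC files is the point of no
      // return for reversion.
      static void finalizeUpgrade(Path crcDir) throws IOException {
        try (DirectoryStream<Path> crcs =
                 Files.newDirectoryStream(crcDir, "*.crc")) {
          for (Path crc : crcs) {
            Files.delete(crc);
          }
        }
      }

      static byte[] newBlockChecksum(Path block) {
        return new byte[8];  // placeholder; the CRC source is discussed below
      }
    }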

> I don't think using existing CRCs helps much.

I suspect it would greatly speed the upgrade.  Yes, the filesystem would need 
to be brought up in a read-only mode so that the old CRC files could be read.  
But note that the old CRCs were computed on the client as the data was created 
(as the new CRCs should be).  If a block has been corrupted, simply CRCing its 
data on the datanode would hide that.  So the old CRCs are what we want for 
correctness too.
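
Concretely, the difference might look like this (a rough sketch; the old CRC 
file layout and all names below are assumptions, not actual HDFS code):

    import java.io.*;
    import java.util.zip.CRC32;

    class CrcSourceSketch {

      // Recomputing on the datanode reads the whole block and, worse,
      // yields a checksum that matches whatever bytes are on disk now,
      // so corruption that happened after the client wrote the block
      // is silently blessed.
      static long recomputeFromDisk(File block) throws IOException {
        CRC32 crc = new CRC32();
        byte[] buf = new byte[64 * 1024];
        try (InputStream in = new FileInputStream(block)) {
          int n;
          while ((n = in.read(buf)) > 0) {
            crc.update(buf, 0, n);
          }
        }
        return crc.getValue();
      }

      // Carrying over the client-written CRC touches only the small old
      // CRC file; the block data itself is never read during the upgrade.
      static long carryOverClientCrc(File oldCrcFile) throws IOException {
        try (DataInputStream in =
                 new DataInputStream(new FileInputStream(oldCrcFile))) {
          return in.readLong();  // assumed layout of the old CRC file
        }
      }
    }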


> Block level CRCs in HDFS
> ------------------------
>
>                 Key: HADOOP-1134
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1134
>             Project: Hadoop
>          Issue Type: New Feature
>          Components: dfs
>            Reporter: Raghu Angadi
>         Assigned To: Raghu Angadi
>
> Currently, CRCs are handled at the FileSystem level and are transparent to 
> core HDFS. See the recent improvement HADOOP-928 (which can add checksums to 
> a given filesystem) for more about it. Though this has served us well, there 
> are a few disadvantages:
> 1) This doubles the namespace in HDFS (or other filesystem implementations). 
> In many cases, it nearly doubles the number of blocks. Taking the namenode 
> out of CRCs would nearly double namespace performance, both in terms of CPU 
> and memory.
> 2) Since CRCs are transparent to HDFS, it cannot actively detect corrupted 
> blocks. With block-level CRCs, the datanode can periodically verify the 
> checksums and report corruptions to the namenode so that new replicas can be 
> created (see the sketch after this description).
> We propose to maintain CRCs for all HDFS data in much the same way as GFS 
> does. I will update the jira with detailed requirements and design. This 
> will preserve the same guarantees as the current implementation and will 
> include an upgrade of current data.
>  
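
As a rough illustration of point 2 above (every name here is invented for the 
sketch, not actual HDFS code), a datanode-side scanner could periodically 
re-read each block, compare it against the stored block-level CRC, and report 
mismatches so the namenode can schedule new replicas:

    import java.io.*;
    import java.util.zip.CRC32;

    class BlockVerifierSketch implements Runnable {
      private final File dataDir;                 // assumed flat block directory
      private final CorruptionReporter reporter;  // notifies the namenode

      BlockVerifierSketch(File dataDir, CorruptionReporter reporter) {
        this.dataDir = dataDir;
        this.reporter = reporter;
      }

      // One verification pass; a datanode would run this on a slow timer.
      public void run() {
        File[] files = dataDir.listFiles();
        if (files == null) return;  // directory missing or unreadable
        for (File block : files) {
          String name = block.getName();
          if (!name.startsWith("blk_") || name.endsWith(".meta")) continue;
          try {
            long stored = readStoredCrc(new File(dataDir, name + ".meta"));
            if (stored != computeCrc(block)) {
              reporter.reportCorrupt(name);  // namenode can re-replicate
            }
          } catch (IOException e) {
            reporter.reportCorrupt(name);    // unreadable counts as corrupt
          }
        }
      }

      private static long readStoredCrc(File meta) throws IOException {
        try (DataInputStream in =
                 new DataInputStream(new FileInputStream(meta))) {
          return in.readLong();  // assumed .meta layout
        }
      }

      private static long computeCrc(File block) throws IOException {
        CRC32 crc = new CRC32();
        byte[] buf = new byte[64 * 1024];
        try (InputStream in = new FileInputStream(block)) {
          int n;
          while ((n = in.read(buf)) > 0) {
            crc.update(buf, 0, n);
          }
        }
        return crc.getValue();
      }

      interface CorruptionReporter {
        void reportCorrupt(String blockName);
      }
    }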

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
