[ https://issues.apache.org/jira/browse/HADOOP-2797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12915086#action_12915086 ]

Tanping Wang commented on HADOOP-2797:
--------------------------------------

The test image file currently used, hadoop-14-dfs-dir.tgz, was generated and 
uploaded by hand.  Based on the comments in HADOOP-1629, these FSImages are 
supposed to contain the following categories:

    *  zero length files
    * file with replication set higher than number of datanodes
    * file with no .crc file
    * file with corrupt .crc file
    * file with multiple blocks (will need to set dfs.block.size to a small 
value)
    * file with multiple checksum blocks
    * empty directory
    * all of the above again but with a different io.bytes.per.checksum setting
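As a rough sketch (not taken from this issue), most of the categories above could be produced against a freshly formatted DFS with shell commands along the following lines. The paths and file names are hypothetical, dfs.block.size and io.bytes.per.checksum would need to be set to small values in the cluster configuration beforehand, and the hadoop CLI is stubbed here as a dry-run so the command sequence can be shown without a running cluster:

```shell
# Dry-run stub (hypothetical): replace with the real hadoop CLI when
# running against an actual cluster.
hadoop() { echo "hadoop $*"; }

hadoop fs -mkdir /images/empty-dir                  # empty directory
hadoop fs -touchz /images/zero-len                  # zero length file
hadoop fs -put multi-block.bin /images/multi-block  # file spanning several blocks
hadoop fs -put small.bin /images/high-rep
hadoop fs -setrep 10 /images/high-rep               # replication > number of datanodes
```

The .crc categories (missing or corrupt checksum files) are not covered by plain shell commands and would need the checksum files manipulated directly by the generator program.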


Since the FSImage structure could change in future releases, developers will 
need to generate new FS images.  There should be, at a minimum, a test program 
kept in version control that can generate these FSImages.  Otherwise, this 
test is not very sustainable.

> Withdraw CRC upgrade from HDFS
> ------------------------------
>
>                 Key: HADOOP-2797
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2797
>             Project: Hadoop Common
>          Issue Type: New Feature
>    Affects Versions: 0.13.0
>            Reporter: Robert Chansler
>            Assignee: Raghu Angadi
>            Priority: Critical
>             Fix For: 0.18.0
>
>         Attachments: hadoop-14-dfs-dir.tgz, HADOOP-2797.patch, 
> HADOOP-2797.patch, HADOOP-2797.patch, HADOOP-2797.patch
>
>
> HDFS will no longer support upgrades from versions without CRCs for block 
> data. Users upgrading from version 0.13 or earlier must first upgrade to an 
> intermediate version (0.14, 0.15, 0.16, or 0.17) before upgrading to 0.18 or later.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
