[ https://issues.apache.org/jira/browse/HADOOP-1629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12515067 ]
Doug Cutting commented on HADOOP-1629:
--------------------------------------
> Any suggestions on how large the tar-gzip file should be?
1MB? We should be able to use small block and buffer sizes to get all of the
desired sample files into a meg, no?
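
A note outside the original comment: as a rough sketch of the "small block and
buffer sizes" idea, the image generator could start from a Configuration along
these lines. The property values are assumptions chosen only so that multi-block
and multi-checksum-block files stay tiny; they are not from the issue.

    import org.apache.hadoop.conf.Configuration;

    // Hypothetical helper: a Configuration tuned so that the whole sample file
    // system fits comfortably inside a ~1MB tar-gzip archive.
    public class TinyDfsConf {
      public static Configuration create() {
        Configuration conf = new Configuration();
        conf.setInt("dfs.block.size", 8192);       // small blocks, so multi-block files stay small
        conf.setInt("io.bytes.per.checksum", 512); // several checksum chunks per block
        conf.setInt("io.file.buffer.size", 1024);  // small I/O buffers
        return conf;
      }
    }
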
> Block CRC Unit Tests: upgrade test
> ----------------------------------
>
> Key: HADOOP-1629
> URL: https://issues.apache.org/jira/browse/HADOOP-1629
> Project: Hadoop
> Issue Type: Test
> Components: dfs
> Affects Versions: 0.14.0
> Reporter: Nigel Daley
> Assignee: Raghu Angadi
> Priority: Blocker
> Fix For: 0.14.0
>
>
> HADOOP-1286 introduced a distributed upgrade framework. One or more unit tests
> should be developed that start with a zipped-up Hadoop 0.12 file system (kept
> under version control in Hadoop's src/test directory) and attempt to upgrade
> it to the current version of Hadoop (i.e. the version that the tests are
> running against). The zipped-up file system should include some "interesting"
> files, such as:
> - zero length files
> - file with replication set higher than the number of datanodes
> - file with no .crc file
> - file with corrupt .crc file
> - file with multiple blocks (will need to set dfs.block.size to a small value)
> - file with multiple checksum blocks
> - empty directory
> - all of the above again but with a different io.bytes.per.checksum setting
> The class that generates the zipped-up file system should also be included in
> this patch.
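
The following is not part of the issue text; it is a rough, hypothetical sketch
of what the image-generating class could look like, written against the
0.14-era test APIs (MiniDFSCluster lives under src/test in org.apache.hadoop.dfs
in that code base). Class names, paths and sizes are illustrative, and the
"missing .crc" / "corrupt .crc" cases are only described in comments.

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.dfs.MiniDFSCluster;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Hypothetical generator for the version-controlled test image: it writes
    // the "interesting" files into a MiniDFSCluster whose name/data directories
    // are afterwards tarred and gzipped (e.g. with plain "tar czf") and checked
    // into src/test.
    public class DfsTestImageGenerator {

      public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        conf.setInt("dfs.block.size", 8192);       // force multi-block files
        conf.setInt("io.bytes.per.checksum", 512); // force multiple checksum chunks

        MiniDFSCluster cluster = new MiniDFSCluster(conf, 2, true, null);
        try {
          FileSystem fs = cluster.getFileSystem();

          // zero length file
          fs.create(new Path("/image/empty")).close();

          // file with replication set higher than the number of datanodes (2 here)
          Path overReplicated = new Path("/image/over-replicated");
          writeBytes(fs, overReplicated, 1024);
          fs.setReplication(overReplicated, (short) 5);

          // file with multiple blocks and multiple checksum chunks per block
          writeBytes(fs, new Path("/image/multi-block"), 3 * 8192 + 100);

          // empty directory
          fs.mkdirs(new Path("/image/empty-dir"));

          // The "no .crc file" and "corrupt .crc file" cases would be set up by
          // deleting or overwriting the hidden ".name.crc" companion files that
          // the pre-block-CRC (0.12) client writes next to each file; omitted
          // here. Repeating everything with a different io.bytes.per.checksum
          // would be a second pass with another Configuration.
        } finally {
          cluster.shutdown();
        }
      }

      // Write 'len' bytes of arbitrary data to 'path' using the default
      // replication and the block size configured above.
      private static void writeBytes(FileSystem fs, Path path, int len)
          throws IOException {
        FSDataOutputStream out = fs.create(path);
        out.write(new byte[len]);
        out.close();
      }
    }
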