[ http://issues.apache.org/jira/browse/HADOOP-518?page=comments#action_12434785 ]

Doug Cutting commented on HADOOP-518:
-------------------------------------

Could this be related to HADOOP-320?  In other words, is the bug perhaps that 
'dfs -cp' didn't copy the checksum file?
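
One way to check that hypothesis, as a minimal sketch: assuming the
convention that DFS keeps each file's checksums in a hidden ".name.crc"
file beside the data file, compare the sidecars of the source and the
copy. The class name and the crcFileFor helper are mine for illustration;
the paths are the ones from the report.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CrcSidecarCheck {
    // Build the hidden ".name.crc" sibling for a file (assumed naming convention).
    static Path crcFileFor(Path file) {
        return new Path(file.getParent(), "." + file.getName() + ".crc");
    }

    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        Path src = new Path("foo/bar/bad-file");  // original file from the report
        Path dst = new Path("mumble/new-file");   // copy produced by dfs -cp
        System.out.println("source .crc exists: " + fs.exists(crcFileFor(src)));
        System.out.println("copy .crc exists: " + fs.exists(crcFileFor(dst)));
        // If the source has a .crc but the copy does not, reads of the copy are
        // never verified, which would explain why the copy reads clean.
    }
}

Running this after reproducing the 'dfs -cp' from the report would show
whether the checksum file went missing in the copy.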

> hadoop dfs -cp foo/bar/bad-file mumble/new-file copies a file with a bad checksum
> ---------------------------------------------------------------------------------
>
>                 Key: HADOOP-518
>                 URL: http://issues.apache.org/jira/browse/HADOOP-518
>             Project: Hadoop
>          Issue Type: Bug
>          Components: dfs
>         Environment: red hat
>            Reporter: Dick King
>
> I have a file that reliably generates a checksum error whenever it's read, 
> whether as input to a map/reduce job or by a "dfs -get" command.
> However...
> if I do a "dfs -cp" of the file with the bad checksum, the copy can be read 
> in its entirety without a checksum error.
> I would consider it reasonable for the command to fail, or for the new file 
> to be created but with a checksum error in the same place; this behavior is 
> unsettling.
> -dk

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira