hadoop dfs -cp foo/bar/bad-file mumble/new-file copies a file with a bad checksum
---------------------------------------------------------------------------------

                 Key: HADOOP-518
                 URL: http://issues.apache.org/jira/browse/HADOOP-518
             Project: Hadoop
          Issue Type: Bug
          Components: dfs
         Environment: Red Hat
            Reporter: Dick King


I have a file that reliably generates a checksum error when it's read, whether
as input to a map/reduce job or via a "dfs -get" command.

However...

if I do a "dfs -cp" of the file with the bad checksum, the resulting copy can
be read in its entirety without a checksum error.
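
For reference, a minimal reproduction along these lines, assuming the paths
from the issue summary and that the failed reads surface as a
ChecksumException, would be:

  # reading the original fails partway through with a checksum error
  hadoop dfs -get foo/bar/bad-file /tmp/bad-file

  # copying it within dfs completes without complaint
  hadoop dfs -cp foo/bar/bad-file mumble/new-file

  # and the copy then reads cleanly end to end
  hadoop dfs -get mumble/new-file /tmp/new-file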

I would consider it reasonable for the command either to fail or to create the
new file with a checksum error in the same place; producing a clean, readable
copy of a corrupt file is unsettling.

-dk

