[ https://issues.apache.org/jira/browse/HADOOP-8940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14614637#comment-14614637 ]

Harsh J commented on HADOOP-8940:
---------------------------------

bq. Then it can look at time stamps on the files, and possibly checksums as 
well, to pick up where it left off on a failure.

You could also do this with DistCp's {{-update}} flag, passing
{{-Dmapreduce.framework.name=local}} so the job runs locally against Local FS
{{file:///}} sources. I'm uncertain whether the checksum checks would work,
though, unless the files were written by the checksumming FS. This is useful
for a large number of files, but probably not if what's needed is independent,
file-level, append-like resume.
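As a rough sketch of the suggestion above, the invocation would look something like this (the source and destination paths are placeholders, and {{-update}} makes DistCp skip files whose size/checksum already match at the target, which is what gives the restart-after-failure behavior):

```shell
# Re-runnable local-to-HDFS copy: -update skips files already present and
# matching at the destination, so a failed run can simply be re-executed.
# mapreduce.framework.name=local runs the DistCp job in-process, which is
# needed for file:/// sources that only exist on this machine.
hadoop distcp \
  -Dmapreduce.framework.name=local \
  -update \
  file:///data/to-upload \
  hdfs:///user/example/to-upload
```

Note this resumes at file granularity: a partially transferred file is recopied from the beginning, not appended to.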

> Add a resume feature to the copyFromLocal and put commands
> ----------------------------------------------------------
>
>                 Key: HADOOP-8940
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8940
>             Project: Hadoop Common
>          Issue Type: New Feature
>          Components: tools
>    Affects Versions: 2.0.1-alpha
>            Reporter: Adam Muise
>            Assignee: Mahesh Dharmasena
>            Priority: Minor
>             Fix For: 2.0.1-alpha
>
>
> Add a resume feature to the copyFromLocal command. Failures in large 
> transfers result in a great deal of wasted time. For large files, it would be 
> good to be able to continue from the last good block onwards. The file would 
> have to be unavailable to other clients for reads or regular writes until the 
> "resume" process was completed. 
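Until such a feature exists in {{copyFromLocal}} itself, a crude per-file approximation is possible with existing shell tools, as sketched below. This is a hypothetical workaround, not the proposed feature: all paths are placeholders, it assumes GNU {{dd}} and an HDFS that permits {{-appendToFile}}, and unlike the proposal it does not guard the file against concurrent readers or writers during the resume.

```shell
# Hypothetical resume of an interrupted put: measure how many bytes already
# landed on HDFS, then stream only the remainder of the local file into an
# append. %b in "hdfs dfs -stat" is the file length in bytes.
SRC=/local/big-file            # placeholder local source
DST=/user/example/big-file     # placeholder HDFS destination
DONE=$(hdfs dfs -stat %b "$DST")

# Skip the bytes already transferred (iflag=skip_bytes is GNU dd) and
# append the rest; "-" makes appendToFile read from stdin.
dd if="$SRC" bs=1M iflag=skip_bytes skip="$DONE" | \
  hdfs dfs -appendToFile - "$DST"
```

This only picks up from the destination's current length; it does not verify that the bytes already on HDFS match the local file, which is exactly the checksum/timestamp validation the requested feature would add.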



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
