[ https://issues.apache.org/jira/browse/HDFS-222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12756715#action_12756715 ]

Doug Cutting commented on HDFS-222:
-----------------------------------

This sounds good to me.

It might be simplest if the sources were removed and the entire operation were 
atomic: if it succeeds, the sources are gone; if it fails, the destination is 
unchanged and the sources are still present.  There is no middle ground.
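
A minimal sketch of that all-or-nothing contract, assuming a hypothetical 
concat(trg, srcs) signature (nothing here is settled API; it is only the 
contract spelled out as Javadoc):

    import java.io.IOException;
    import org.apache.hadoop.fs.Path;

    // Sketch only: the name and signature are hypothetical, not committed API.
    public interface AtomicConcat {
      /**
       * Appends the blocks of srcs to trg as one atomic namespace operation:
       * on success the sources no longer exist; on any failure trg is
       * unchanged and every source is still present. No middle ground.
       */
      void concat(Path trg, Path[] srcs) throws IOException;
    }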

Also, what happens if the target already exists?  Should we add an overwrite 
option?  If overwrite is unspecified and the target exists, an error is thrown.  
If it is specified and the target is a plain file, the old target's blocks are 
removed as part of the atomic operation.  If the old target is a directory, an 
error is thrown regardless of overwrite.
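
To make those rules concrete, here is an illustrative version written as 
client-side pre-checks; concatWithOverwrite and atomicConcat are made-up names, 
and in the actual proposal the old target's blocks would be dropped inside the 
atomic operation itself, not by a separate step as shown here:

    import java.io.IOException;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Illustrative decision logic for the overwrite option discussed above.
    public final class OverwriteRules {
      public static void concatWithOverwrite(FileSystem fs, Path target,
          Path[] sources, boolean overwrite) throws IOException {
        if (fs.exists(target)) {
          // A directory target is always an error, regardless of overwrite.
          if (fs.getFileStatus(target).isDirectory()) {
            throw new IOException("Target is a directory: " + target);
          }
          // Without overwrite, an existing target is an error.
          if (!overwrite) {
            throw new IOException("Target already exists: " + target);
          }
          // With overwrite set and a plain-file target, the old blocks are
          // removed; in the proposal this happens inside the same atomic
          // operation, which client-side checks like these cannot guarantee.
        }
        atomicConcat(fs, target, sources); // hypothetical atomic primitive
      }

      private static void atomicConcat(FileSystem fs, Path target,
          Path[] sources) throws IOException {
        // In a real design this would be a single namenode-side operation.
        throw new UnsupportedOperationException("sketch only");
      }
    }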

(Can you guess which issue I've been following?)


> Support for concatenating of files into a single file
> -----------------------------------------------------
>
>                 Key: HDFS-222
>                 URL: https://issues.apache.org/jira/browse/HDFS-222
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>            Reporter: Venkatesh S
>            Assignee: Boris Shkolnik
>
> An API to concatenate files of the same size and replication factor on HDFS 
> into a single larger file.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
