[ 
https://issues.apache.org/jira/browse/HDFS-222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12756805#action_12756805
 ] 

Boris Shkolnik commented on HDFS-222:
-------------------------------------

To simplify (and to avoid the overwrite question), I suggest we concatenate the srcs' 
blocks TO the target file. 
i.e. if we have

File1  {Block11, Block12}
File2  {Block21, Block22}
File3  {Block31, Block32}

and we do
concat(File1, File2, File3)

we get
File1  {Block11, Block12, Block21, Block22, Block31, Block32}
and File2 and File3 are deleted.
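The semantics above can be sketched in a few lines of Java (a hypothetical model, not the actual NameNode code): each file maps to an ordered block list, and concat moves every source's blocks onto the end of the target's list before removing the source from the namespace.

```java
import java.util.*;

// Minimal sketch of the concat semantics described above. The namespace map
// and block-name strings are illustrative assumptions, not HDFS internals.
public class ConcatSketch {
    // file name -> ordered list of its blocks
    static Map<String, List<String>> namespace = new LinkedHashMap<>();

    static void concat(String target, String... srcs) {
        List<String> targetBlocks = namespace.get(target);
        for (String src : srcs) {
            // move the source's blocks onto the target, then delete the source
            targetBlocks.addAll(namespace.remove(src));
        }
    }

    public static void main(String[] args) {
        namespace.put("File1", new ArrayList<>(List.of("Block11", "Block12")));
        namespace.put("File2", new ArrayList<>(List.of("Block21", "Block22")));
        namespace.put("File3", new ArrayList<>(List.of("Block31", "Block32")));

        concat("File1", "File2", "File3");

        System.out.println(namespace.get("File1")); // all six blocks, in order
        System.out.println(namespace.keySet());     // only File1 remains
    }
}
```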



To make things atomic we would need to introduce one new opcode, OP_CONCAT_DELETE, for 
the EditsLog, which would be recorded only once every block has been moved and the 
source files have been deleted (we cannot simply call FsDirectory.delete() for this 
reason). 
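A rough sketch of that journaling step (the record layout and helper names here are assumptions for illustration; only the OP_CONCAT_DELETE opcode name comes from the comment above): one record covers the block moves and the source deletions together, so a replaying NameNode re-applies the whole concat as a single step rather than seeing a half-finished sequence of per-file deletes.

```java
import java.util.*;

// Hypothetical sketch of writing a single OP_CONCAT_DELETE edit-log record.
// The record format is an assumption; the point is that one record captures
// the entire operation, written only after the in-memory changes are done.
public class ConcatEditLogSketch {
    static List<String> editLog = new ArrayList<>();

    static void logConcatDelete(String target, List<String> srcs, long timestamp) {
        // Logging a separate delete per source would let a crash between
        // records leave the namespace half-concatenated on replay.
        editLog.add("OP_CONCAT_DELETE " + target + " "
                + String.join(",", srcs) + " " + timestamp);
    }

    public static void main(String[] args) {
        logConcatDelete("File1", List.of("File2", "File3"), 1L);
        System.out.println(editLog.get(0));
    }
}
```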


> Support for concatenating of files into a single file
> -----------------------------------------------------
>
>                 Key: HDFS-222
>                 URL: https://issues.apache.org/jira/browse/HDFS-222
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>            Reporter: Venkatesh S
>            Assignee: Boris Shkolnik
>
> An API to concatenate files of same size and replication factor on HDFS into 
> a single larger file.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
