[ 
https://issues.apache.org/jira/browse/HDFS-2139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13984296#comment-13984296
 ] 

Guo Ruijing commented on HDFS-2139:
-----------------------------------

comment 1: using a hardlink to copy data is a good idea, but we can still keep a
copy operation in the RPC, like the following, and use a hardlink in the implementation.

message CopyBlockRequestProto {
  required ExtendedBlockProto srcBlock = 1;
  required ExtendedBlockProto dstBlock = 2;
  required uint64 length = 3;
}

In the implementation:
if the platform doesn't support hardlinks, we can use copy
if length == srcBlock length, we can use a hardlink
if length != srcBlock length, we can use copy
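The three rules above could be sketched in the datanode as follows. This is a minimal illustration, not actual HDFS code; `copyBlock` and its file-path arguments are hypothetical, and it assumes the block is addressable as a local file via `java.nio.file`:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class BlockCopy {
    // Hypothetical helper illustrating the commenter's rules:
    // prefer a hard link for a full-length copy, and fall back to a
    // byte copy when hard links are unsupported or when only a
    // prefix (length != source length) is requested.
    static void copyBlock(Path src, Path dst, long length) throws IOException {
        if (length == Files.size(src)) {
            try {
                Files.createLink(dst, src);  // full-length copy: try a hard link
                return;
            } catch (UnsupportedOperationException | IOException e) {
                // platform or filesystem doesn't support hard links: fall back to copy
            }
        }
        // partial copy, or hard link unavailable: copy `length` bytes
        try (InputStream in = Files.newInputStream(src);
             OutputStream out = Files.newOutputStream(dst, StandardOpenOption.CREATE_NEW)) {
            byte[] buf = new byte[8192];
            long remaining = length;
            int n;
            while (remaining > 0
                   && (n = in.read(buf, 0, (int) Math.min(buf.length, remaining))) > 0) {
                out.write(buf, 0, n);
                remaining -= n;
            }
        }
    }
}
```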

> Fast copy for HDFS.
> -------------------
>
>                 Key: HDFS-2139
>                 URL: https://issues.apache.org/jira/browse/HDFS-2139
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>            Reporter: Pritam Damania
>         Attachments: HDFS-2139.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> There is a need to perform fast file copy on HDFS. The fast copy mechanism 
> for a file works as follows:
> 1) Query metadata for all blocks of the source file.
> 2) For each block 'b' of the file, find out its datanode locations.
> 3) For each block of the file, add an empty block to the namesystem for
> the destination file.
> 4) For each location of the block, instruct the datanode to make a local
> copy of that block.
> 5) Once each datanode has copied over its respective blocks, they
> report to the namenode about it.
> 6) Wait for all blocks to be copied and exit.
> This would speed up the copying process considerably by removing top of
> the rack data transfers.
> Note: An extra improvement would be to instruct the datanode to create a
> hardlink of the block file if we are copying a block on the same datanode.
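The six quoted steps amount to a scatter/gather over datanodes. A toy model of that coordination is sketched below; the `Block` record, the `-copy` naming, and the map of completion reports are all illustrative assumptions, not HDFS APIs:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;

public class FastCopySketch {
    // A source block and the datanodes that hold a replica of it (steps 1-2).
    record Block(String id, List<String> locations) {}

    static void fastCopy(List<Block> srcBlocks, ExecutorService pool,
                         Map<String, String> reports) throws Exception {
        List<Future<?>> pending = new ArrayList<>();
        for (Block b : srcBlocks) {
            String dstId = b.id() + "-copy";     // step 3: allocate a destination block
            for (String dn : b.locations()) {    // step 4: each replica holder copies locally
                pending.add(pool.submit(() -> reports.put(dstId + "@" + dn, dn)));
            }
        }
        // steps 5-6: wait until every datanode has reported its copy, then exit
        for (Future<?> f : pending) {
            f.get();
        }
    }
}
```

Because each copy is local to a datanode that already holds the replica, no block data crosses the top-of-rack switch; only the metadata RPCs do.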



--
This message was sent by Atlassian JIRA
(v6.2#6252)
