[ https://issues.apache.org/jira/browse/HDDS-75?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16531337#comment-16531337 ]

Elek, Marton commented on HDDS-75:
----------------------------------

The patch is rebased and is now compatible with trunk. This is the first 
version; some minor modifications are still required (e.g. removing unused 
fields from the proto files).

This version uses the gRPC server of GrpcXceiverService.
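
For illustration only, a server-streaming handler on that gRPC server could 
look like the sketch below. The message and stub names (CopyContainerRequest, 
CopyContainerResponse, CopyContainerServiceGrpc) are hypothetical stand-ins 
for the protoc-generated classes, not the names used in the patch:

{code:java}
// Hedged sketch: the generated stub/message classes below are hypothetical.
import com.google.protobuf.ByteString;
import io.grpc.stub.StreamObserver;

import java.io.IOException;
import java.io.InputStream;

public class CopyContainerService
    extends CopyContainerServiceGrpc.CopyContainerServiceImplBase {

  private static final int CHUNK_SIZE = 1024 * 1024; // 1 MB per response message

  @Override
  public void copyContainer(CopyContainerRequest request,
      StreamObserver<CopyContainerResponse> responseObserver) {
    // openContainerTarball is a hypothetical helper returning the (possibly
    // pre-created) tar stream of the closed container.
    try (InputStream tar = openContainerTarball(request.getContainerName())) {
      byte[] buffer = new byte[CHUNK_SIZE];
      int read;
      while ((read = tar.read(buffer)) != -1) {
        // Stream the tarball back as a sequence of bounded chunks.
        responseObserver.onNext(CopyContainerResponse.newBuilder()
            .setData(ByteString.copyFrom(buffer, 0, read))
            .build());
      }
      responseObserver.onCompleted();
    } catch (IOException e) {
      responseObserver.onError(e);
    }
  }

  private InputStream openContainerTarball(String containerName)
      throws IOException {
    throw new UnsupportedOperationException("illustrative only");
  }
}
{code}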

The patch doesn't contain any throttling or progress tracking. That could be 
done in a separate Jira.
The patch doesn't currently support offsets. I will simplify the 
communication by removing len/offset from the remote call.

Please move it to the "Patch Available" state (I have no permission to do this).

> Ozone: Support CopyContainer
> ----------------------------
>
>                 Key: HDDS-75
>                 URL: https://issues.apache.org/jira/browse/HDDS-75
>             Project: Hadoop Distributed Data Store
>          Issue Type: Improvement
>          Components: Ozone Datanode
>            Reporter: Anu Engineer
>            Assignee: Elek, Marton
>            Priority: Major
>              Labels: OzonePostMerge
>         Attachments: HDDS-75.005.patch, HDFS-11686-HDFS-7240.001.patch, 
> HDFS-11686-HDFS-7240.002.patch, HDFS-11686-HDFS-7240.003.patch, 
> HDFS-11686-HDFS-7240.004.patch
>
>
> Once a container is closed we need to copy the container to the correct pool 
> or re-encode the container to use erasure coding. The copyContainer call 
> allows users to get the container as a tarball from the remote machine.
> The copyContainer is a basic step to move the raw container data from one 
> datanode to another node. It could be used by higher level components such 
> as the SCM, which ensures that the replication rules are satisfied.
> The CopyContainer by default works in a pull model: the destination datanode 
> can read the raw data from one or more source datanodes where the container 
> exists.
> The source provides a binary representation of the container over a common 
> interface which has two methods:
>  # prepare(containerName)
>  # copyData(String containerName, OutputStream destination)
> The prepare phase is called right after the closing event, and the 
> implementation could prepare for the copy by pre-creating a compressed tar 
> file from the container data. As a first step we can provide a simple 
> implementation which creates the tar files on demand (see the sketch below).
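> As a rough sketch (the interface and method javadoc here are illustrative, 
> not the final API), the source side could look like this:
>
> {code:java}
> import java.io.IOException;
> import java.io.OutputStream;
>
> /** Source-side interface for serving closed containers as tarballs. */
> public interface ContainerCopySource {
>
>   /**
>    * Called right after the container close event; an implementation may
>    * pre-create the compressed tar file here so later copies are cheap.
>    */
>   void prepare(String containerName) throws IOException;
>
>   /**
>    * Writes the binary (tar) representation of the closed container to
>    * the destination stream.
>    */
>   void copyData(String containerName, OutputStream destination)
>       throws IOException;
> }
> {code}
> The simple first-step implementation would make prepare() a no-op and build 
> the tar archive inside copyData() on demand; a later optimization can move 
> the archiving work into prepare().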
> The destination datanode should retry the copy if the container in the source 
> node is not yet prepared.
> The raw container data is provided over HTTP. The HTTP endpoint should be 
> separate from the ObjectStore REST API (similar to the distinction between 
> HDFS-7240 and HDFS-13074).
> Long-term the HTTP endpoint should support HTTP Range requests: one container 
> could be copied from multiple sources by the destination (see the ranged-fetch 
> sketch below).
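> A minimal sketch of such a ranged fetch follows; the URL path and port are 
> made up for illustration, only the Range header semantics are standard 
> HTTP/1.1:
>
> {code:java}
> import java.io.InputStream;
> import java.net.HttpURLConnection;
> import java.net.URL;
>
> public class RangedContainerFetch {
>
>   /**
>    * Asks one source datanode for a single byte range of the container
>    * tarball; different ranges can be fetched from different sources.
>    */
>   public static InputStream fetchRange(String sourceHost, String containerName,
>       long firstByte, long lastByte) throws Exception {
>     // Hypothetical endpoint; the real path/port would come from the patch.
>     URL url = new URL("http://" + sourceHost + ":9880/container/" + containerName);
>     HttpURLConnection conn = (HttpURLConnection) url.openConnection();
>     // Standard HTTP/1.1 range header; a 206 Partial Content response
>     // carries only the requested slice of the tarball.
>     conn.setRequestProperty("Range", "bytes=" + firstByte + "-" + lastByte);
>     if (conn.getResponseCode() != HttpURLConnection.HTTP_PARTIAL) {
>       throw new IllegalStateException("source does not support Range requests");
>     }
>     return conn.getInputStream();
>   }
> }
> {code}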



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
