[ https://issues.apache.org/jira/browse/HDDS-75?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16574937#comment-16574937 ]
genericqa commented on HDDS-75:
-------------------------------

| (x) *{color:red}-1 overall{color}* |

|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 6s{color} | {color:red} HDDS-75 does not apply to trunk. Rebase required? Wrong branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |

|| Subsystem || Report/Notes ||
| JIRA Issue | HDDS-75 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12934986/HDDS-75.007.patch |
| Console output | https://builds.apache.org/job/PreCommit-HDDS-Build/739/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.

> Ozone: Support CopyContainer
> ----------------------------
>
>                 Key: HDDS-75
>                 URL: https://issues.apache.org/jira/browse/HDDS-75
>             Project: Hadoop Distributed Data Store
>          Issue Type: Improvement
>          Components: Ozone Datanode
>            Reporter: Anu Engineer
>            Assignee: Elek, Marton
>            Priority: Blocker
>             Fix For: 0.2.1
>
>     Attachments: HDDS-75.005.patch, HDDS-75.006.patch, HDDS-75.007.patch,
> HDFS-11686-HDFS-7240.001.patch, HDFS-11686-HDFS-7240.002.patch,
> HDFS-11686-HDFS-7240.003.patch, HDFS-11686-HDFS-7240.004.patch
>
>
> Once a container is closed, we need to copy the container to the correct pool
> or re-encode it to use erasure coding. CopyContainer allows users to fetch
> the container as a tarball from the remote machine.
> CopyContainer is the basic step for moving raw container data from one
> datanode to another. It could be used by higher-level components such as the
> SCM, which ensures that the replication rules are satisfied.
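The two-method source-side interface outlined in this issue could be sketched in Java as follows. This is a hypothetical illustration, not the committed API: the names `ContainerCopySource` and `OnDemandCopySource` are invented, the in-memory map stands in for on-disk container data, and GZIP is used as a stand-in for the tar packaging.

```java
import java.io.IOException;
import java.io.OutputStream;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.zip.GZIPOutputStream;

/**
 * Hypothetical sketch of the two-method source interface from HDDS-75.
 * Names are illustrative assumptions, not the committed API.
 */
interface ContainerCopySource {
  /** Called right after the container close event; may pre-package the data. */
  void prepare(String containerName) throws IOException;

  /** Streams the packaged container bytes to the destination. */
  void copyData(String containerName, OutputStream destination) throws IOException;
}

/** Simple on-demand variant: packages the data only when copyData is called. */
class OnDemandCopySource implements ContainerCopySource {
  // Stand-in for on-disk container data, keyed by container name.
  private final Map<String, byte[]> containerData = new ConcurrentHashMap<>();

  void put(String containerName, byte[] data) {
    containerData.put(containerName, data);
  }

  @Override
  public void prepare(String containerName) {
    // No-op in the on-demand variant; a later implementation could
    // pre-create the compressed archive here instead.
  }

  @Override
  public void copyData(String containerName, OutputStream destination)
      throws IOException {
    byte[] data = containerData.get(containerName);
    if (data == null) {
      // Signals the destination datanode to retry later, matching the
      // "retry if not yet prepared" behavior described in the issue.
      throw new IOException("Container " + containerName + " not prepared yet");
    }
    // GZIP compression as a stand-in for the tar packaging in the proposal.
    try (GZIPOutputStream gz = new GZIPOutputStream(destination)) {
      gz.write(data);
    }
  }
}
```

A destination datanode would call `copyData` against any replica holding the closed container and unpack the stream locally; the retry-on-missing behavior keeps the pull model robust when a source has not finished preparing.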
> CopyContainer works in a pull model by default: the destination datanode
> reads the raw data from one or more source datanodes where the container
> exists.
> The source provides a binary representation of the container over a common
> interface which has two methods:
> # prepare(containerName)
> # copyData(String containerName, OutputStream destination)
> The prepare phase is called right after the closing event, and the
> implementation could prepare for the copy by pre-creating a compressed tar
> file from the container data. As a first step, we can provide a simple
> implementation which creates the tar files on demand.
> The destination datanode should retry the copy if the container on the source
> node is not yet prepared.
> The raw container data is served over HTTP. The HTTP endpoint should be
> separate from the ObjectStore REST API (similar to the distinction between
> HDFS-7240 and HDFS-13074).
> Long term, the HTTP endpoint should support HTTP Range requests, so that one
> container could be copied from multiple sources by the destination.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org