[ 
https://issues.apache.org/jira/browse/HDFS-7702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14297657#comment-14297657
 ] 

Charles Lamb commented on HDFS-7702:
------------------------------------

Hi [~xiyunyue],

I read over your proposal and have some high level questions.

I am unclear about how your proposal handles failure scenarios. If the source or 
target NN, or one or more of the DNs, fails in the middle of a migration, how are 
things restarted?

Why use Kryo and not protobuf for serialization? Why use Kryo and not the 
existing Hadoop/HDFS protocols and infrastructure for network communications 
between the various nodes?

Is the transfer granularity blockpool only? I infer that from this statement:

bq. The target namenode will notify datanode remove blockpool id which belong 
to the source namenode,

but then this statement:

bq. it will mark delete the involved sub-tree from its own namespace

leads me to believe that it's sub-trees in the namespace.

Could you please clarify this statement:

bq. all read and write operation regarding the same namespace sub-tree is 
forwarding to the target namenode.

Who does the forwarding, the client or the source NN?
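To make the question concrete, here is a minimal sketch (not HDFS code; the sub-tree path, target host, and method names are all assumed for illustration) of the referral-style alternative, where the source NN answers with the target NN's address and the *client* does the forwarding:

```java
import java.util.Optional;

// Hypothetical sketch of "forwarding" for a migrated namespace sub-tree.
// Option (a) would be the source NN proxying the call itself; this shows
// option (b), a referral the client follows.
class ForwardingSketch {
    static final String MIGRATED_PREFIX = "/projects/teamA"; // assumed migrated sub-tree
    static final String TARGET_NN = "nn2.example.com";       // assumed target NN

    // Source NN side: return a referral for paths under the migrated sub-tree.
    static Optional<String> referral(String path) {
        return path.startsWith(MIGRATED_PREFIX)
                ? Optional.of(TARGET_NN)
                : Optional.empty();
    }

    // Client side: follow the referral if one is returned, otherwise stay
    // with the source NN.
    static String open(String path) {
        return referral(path)
                .map(nn -> "opened " + path + " via " + nn)
                .orElse("opened " + path + " via source NN");
    }

    public static void main(String[] args) {
        System.out.println(open("/projects/teamA/data.txt"));
        System.out.println(open("/logs/app.log"));
    }
}
```

Whether the referral lives in the client or the source NN proxies transparently changes the failure and compatibility story considerably, hence the question.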




> Move metadata across namenode - Effort to a real distributed namenode
> ---------------------------------------------------------------------
>
>                 Key: HDFS-7702
>                 URL: https://issues.apache.org/jira/browse/HDFS-7702
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>            Reporter: Ray
>            Assignee: Ray
>
> Implement a tool that can show the in-memory namespace tree structure with 
> weight (size), and an API that can move metadata across different namenodes. 
> The purpose is to move data efficiently and quickly, without moving blocks on 
> datanodes.
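The "weight" reporting part of the proposed tool could be as simple as a post-order walk of the namespace tree. A minimal sketch (not HDFS internals; the node names and sizes are illustrative only):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: per-sub-tree "weight" (total metadata size) over an
// in-memory namespace tree, as the proposed tool might report it.
class NamespaceWeight {
    static class INode {
        final String name;
        final long ownSize;                     // metadata size of this node, bytes
        final List<INode> children = new ArrayList<>();
        INode(String name, long ownSize) { this.name = name; this.ownSize = ownSize; }
        INode add(INode child) { children.add(child); return this; }
    }

    // Post-order walk: a sub-tree's weight is its own size plus its children's.
    static long weight(INode node) {
        long total = node.ownSize;
        for (INode c : node.children) total += weight(c);
        return total;
    }

    public static void main(String[] args) {
        INode root = new INode("/", 100)
                .add(new INode("projects", 200).add(new INode("teamA", 300)))
                .add(new INode("logs", 50));
        System.out.println("weight(/) = " + weight(root)); // 100+200+300+50 = 650
    }
}
```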



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)