[ https://issues.apache.org/jira/browse/HDFS-624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hairong Kuang updated HDFS-624:
-------------------------------

    Attachment: pipelineRecovery2.patch

This patch incorporates the following comments:
1. In FSDataset.append(..), should we check whether newGS > replicaInfo's gs?
   (See the sketch after this list.)
2. BlockNotFoundException should be ReplicaNotFoundException.
3. For ClientProtocol.updatePipeline(), should we rather adopt the following
   signature:
   updatePipeline(String clientName, Block oldBlock, Block newBlock,
       DatanodeID[] newNodes)
4. Maybe updateBlockForPipeline() would be a better name for
   getNewStampForPipeline().
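
For illustration only, a minimal sketch of the check proposed in comment 1,
using a simplified stand-in class rather than the actual FSDataset/ReplicaInfo
code (the class and method names below are assumptions, not part of the patch):

    import java.io.IOException;

    // Sketch only: illustrates the generation-stamp check from comment 1 with a
    // simplified stand-in type; the real patch operates on FSDataset/ReplicaInfo.
    class ReplicaSketch {
      private long generationStamp;

      ReplicaSketch(long generationStamp) {
        this.generationStamp = generationStamp;
      }

      // append must only proceed when the client brings a strictly newer stamp;
      // otherwise a stale or replayed request could reopen the replica.
      void appendWithNewStamp(long newGS) throws IOException {
        if (newGS <= generationStamp) {
          throw new IOException("new generation stamp " + newGS
              + " must be greater than the replica's " + generationStamp);
        }
        generationStamp = newGS;  // bump the stamp before appending any data
      }
    }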

It also includes some changes to the aspects written by Cos and a change to the
fault-injection pipeline tests to make them work with the new pipeline code.

> Client support pipeline recovery
> --------------------------------
>
>                 Key: HDFS-624
>                 URL: https://issues.apache.org/jira/browse/HDFS-624
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>    Affects Versions: Append Branch
>            Reporter: Hairong Kuang
>            Assignee: Hairong Kuang
>             Fix For: Append Branch
>
>         Attachments: HDFS-624-aspects.patch, pipelineRecovery.patch, 
> pipelineRecovery1.patch, pipelineRecovery2.patch
>
>
> This jira aims to
> 1. set up initial pipeline for append;
> 2. recover failed pipeline setup for append;
> 3. set up pipeline to recover failed data streaming.
> The algorithm is described in the pipeline recovery and pipeline setup
> sections of the design document. Pipeline close and failed pipeline close
> are not included in this jira.
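
For readers skimming the description above, a rough sketch of the client-side
recovery flow it implies (assumptions: recovery asks the namenode for a new
generation stamp via updateBlockForPipeline(), restreams to the surviving
datanodes, then commits via updatePipeline(); the stub types and all other
names below are hypothetical stand-ins, not HDFS classes):

    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;

    // Sketch only: the rough recovery loop implied by the design, with
    // hypothetical stand-ins for the namenode RPCs named in this issue.
    class PipelineRecoverySketch {
      interface NamenodeStub {
        // returns a new generation stamp for the block being recovered
        long updateBlockForPipeline(String clientName, long blockId) throws IOException;
        // commits the new generation stamp and datanode list for the block
        void updatePipeline(String clientName, long blockId, long newGS,
            List<String> newNodes) throws IOException;
      }

      static void recover(NamenodeStub namenode, String clientName, long blockId,
          List<String> pipeline, String failedNode) throws IOException {
        List<String> survivors = new ArrayList<String>(pipeline);
        survivors.remove(failedNode);                   // drop the failed datanode
        if (survivors.isEmpty()) {
          throw new IOException("all datanodes in the pipeline failed");
        }
        long newGS = namenode.updateBlockForPipeline(clientName, blockId);
        // ... re-establish streaming to the surviving datanodes with newGS ...
        namenode.updatePipeline(clientName, blockId, newGS, survivors);
      }
    }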

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
