[ https://issues.apache.org/jira/browse/HDFS-3161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13255026#comment-13255026 ]

Todd Lipcon commented on HDFS-3161:
-----------------------------------

Hi Uma/Vinay.

I ran into an issue like this without using append():

- Client writing blk_N_GS1 to DN1, DN9, DN10
- Pipeline failed. commitBlockSynchronization succeeded with DN9 and DN10, setting 
the genstamp so the block becomes blk_N_GS2
- Client closes the pipeline
- NN issues replication request of blk_N_GS2 from DN9 to DN1
- DN1 already has blk_N_GS1 in its ongoingCreates map

I'm not sure if this can cause any serious issue with the block (it didn't in 
my case), but I agree that, if a replication request happens for a block with a 
higher genstamp, it should interrupt the old block's ongoingCreate. If the 
replication request is for a lower genstamp, it should be ignored.
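
To make that concrete, here is a minimal, self-contained sketch of the genstamp 
comparison. Block, ActiveFile, the ongoingCreates map, and onReplicationRequest 
below are simplified stand-ins modeled on the branch-1 FSDataset structures, not 
the actual HDFS classes or methods.

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Sketch only: simplified stand-ins for the DN-side structures, not the
 * real branch-1 FSDataset code.
 */
public class OngoingCreatesSketch {

  /** Minimal stand-in for a block identified by id + generation stamp. */
  static class Block {
    final long blockId;
    final long genStamp;
    Block(long blockId, long genStamp) {
      this.blockId = blockId;
      this.genStamp = genStamp;
    }
  }

  /** Minimal stand-in for the per-block state kept while a block is being written. */
  static class ActiveFile {
    final Block block;
    final Thread writer;   // thread to interrupt if we abandon this create
    ActiveFile(Block block, Thread writer) {
      this.block = block;
      this.writer = writer;
    }
  }

  // Keyed by block id so two generation stamps of the same block collide here.
  private final Map<Long, ActiveFile> ongoingCreates = new ConcurrentHashMap<>();

  /**
   * Hypothetical hook called when a replication request arrives for this block.
   * Returns true if the DN should go ahead and accept the replica.
   */
  boolean onReplicationRequest(Block replicated) {
    ActiveFile old = ongoingCreates.get(replicated.blockId);
    if (old == null) {
      return true;   // nothing stale to clean up
    }
    if (replicated.genStamp > old.block.genStamp) {
      // Newer genstamp wins: interrupt the stale ongoing create and drop its
      // entry so the stale blocksBeingWritten replica cannot shadow the good one.
      old.writer.interrupt();
      ongoingCreates.remove(replicated.blockId);
      return true;
    }
    // Replication request carries an older (or equal) genstamp: ignore it.
    return false;
  }
}
{code}

The key point is just the comparison: a newer genstamp interrupts and evicts the 
stale ongoingCreate entry, while an older one is ignored.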
                
> 20 Append: Excluded DN replica from recovery should be removed from DN.
> -----------------------------------------------------------------------
>
>                 Key: HDFS-3161
>                 URL: https://issues.apache.org/jira/browse/HDFS-3161
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 1.0.0
>            Reporter: suja s
>            Priority: Critical
>             Fix For: 1.0.3
>
>
> 1) DN1->DN2->DN3 are in the pipeline.
> 2) Client is killed abruptly.
> 3) One DN has restarted, say DN3.
> 4) In DN3, info.wasRecoveredOnStartup() will be true.
> 5) NN recovery is triggered; DN3 is skipped from recovery due to the above check.
> 6) Now DN1 and DN2 have blocks with generation stamp 2, while DN3 has an older 
> generation stamp, say 1, and DN3 still has this block's entry in 
> ongoingCreates.
> 7) As part of recovery, the file has been closed and got only two live replicas 
> (from DN1 and DN2).
> 8) So the NN issued the command for replication. Now DN3 also has the replica 
> with the newer generation stamp.
> 9) Now DN3 contains two replicas on disk, plus one entry in ongoingCreates 
> referring to the blocksBeingWritten directory.
> When we call append/leaseRecovery, it may again skip this node for that 
> recovery, as the blockId entry is still present in ongoingCreates with startup 
> recovery true.
> It may keep repeating this dance for every recovery.
> And this stale replica will not be cleaned until we restart the cluster. The 
> actual replica will be transferred to this node only through the replication 
> process.
> Also, that replicated block will unnecessarily get invalidated after subsequent 
> recoveries...
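
To illustrate why the skip in steps 4-5 repeats on every recovery, here is a 
minimal, self-contained sketch. ActiveFile, wasRecoveredOnStartup, and 
ongoingCreates follow the report's terminology, but the class shape and 
canParticipateInRecovery are stand-ins for illustration, not the actual branch-1 
sources.

{code:java}
import java.util.HashMap;
import java.util.Map;

/**
 * Sketch only: a replica that was merely "recovered on startup" keeps its
 * entry in ongoingCreates, so every later block recovery excludes this DN.
 */
public class StartupRecoverySkipSketch {

  /** Stand-in for the per-block state a DN keeps for a block being written. */
  static class ActiveFile {
    final boolean wasRecoveredOnStartup;
    ActiveFile(boolean wasRecoveredOnStartup) {
      this.wasRecoveredOnStartup = wasRecoveredOnStartup;
    }
  }

  private final Map<Long, ActiveFile> ongoingCreates = new HashMap<>();

  /** True if this DN's replica should participate in recovery of blockId. */
  boolean canParticipateInRecovery(long blockId) {
    ActiveFile info = ongoingCreates.get(blockId);
    // The stale entry is never removed, so this check keeps failing on every
    // append/lease recovery until the DN is restarted.
    return info == null || !info.wasRecoveredOnStartup;
  }

  public static void main(String[] args) {
    StartupRecoverySkipSketch dn3 = new StartupRecoverySkipSketch();
    dn3.ongoingCreates.put(1001L, new ActiveFile(true));     // stale entry from step 3
    System.out.println(dn3.canParticipateInRecovery(1001L)); // false, every time
  }
}
{code}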

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
