[ 
https://issues.apache.org/jira/browse/HDFS-9289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14975348#comment-14975348
 ] 

Zhe Zhang commented on HDFS-9289:
---------------------------------

[~lichangleo] I think the log below shows that the client does have the new GS 
{{1106111511603}}, because the parameter {{newBlock}} is passed in from the 
client. So IIUC, even if we check the GS when completing the file, as the patch 
does, it won't stop the client from completing / closing the file. Could you 
describe how you think the patch avoids this error? Thanks.

{code}
2015-10-20 19:49:20,392 [IPC Server handler 63 on 8020] INFO 
namenode.FSNamesystem: 
updatePipeline(BP-1052427332-98.138.108.146-1350583571998:blk_3773617405_1106111498065)
 successfully to 
BP-1052427332-98.138.108.146-1350583571998:blk_3773617405_1106111511603
{code}
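To illustrate the point: a minimal sketch (hypothetical names, not actual HDFS code) of a genStamp equality check at commit time. Such a check only rejects a client holding a *stale* GS; the client in the log above has already learned the new GS via updatePipeline, so the check passes and the file still completes.

{code}
// Hypothetical sketch, not the HDFS implementation.
public class GenStampCheckSketch {
    // Commit succeeds only when the client-reported genStamp matches
    // the NameNode's current genStamp for the block.
    static boolean commitBlock(long nnGenStamp, long clientGenStamp) {
        return clientGenStamp == nnGenStamp;
    }

    public static void main(String[] args) {
        long oldGS = 1106111498065L;  // GS before updatePipeline (from the log)
        long newGS = 1106111511603L;  // GS after updatePipeline (from the log)

        // A client still holding the old GS would be rejected by the check...
        System.out.println(commitBlock(newGS, oldGS)); // false
        // ...but a client that already has the new GS passes it,
        // so the commit-time check alone does not prevent the close.
        System.out.println(commitBlock(newGS, newGS)); // true
    }
}
{code}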

> check genStamp when complete file
> ---------------------------------
>
>                 Key: HDFS-9289
>                 URL: https://issues.apache.org/jira/browse/HDFS-9289
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Chang Li
>            Assignee: Chang Li
>            Priority: Critical
>         Attachments: HDFS-9289.1.patch, HDFS-9289.2.patch
>
>
> We have seen a case of a corrupt block caused by a file completing after a 
> pipelineUpdate, but with the old block genStamp. This caused the replicas on 
> two datanodes in the updated pipeline to be viewed as corrupt. Propose to 
> check the genStamp when committing the block.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
