[ https://issues.apache.org/jira/browse/HDFS-9289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14980535#comment-14980535 ]
Daryn Sharp commented on HDFS-9289:
-----------------------------------

I worked with Chang on this issue and can't think of a scenario in which it's legitimate for the client to misreport the genstamp, whether the pipeline was updated or not. Consider a more extreme case: the client writes more data after the pipeline recovers, then misreports the older genstamp. That's silent data corruption! I'd like to see an exception here rather than later.

> check genStamp when complete file
> ---------------------------------
>
>                 Key: HDFS-9289
>                 URL: https://issues.apache.org/jira/browse/HDFS-9289
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Chang Li
>            Assignee: Chang Li
>            Priority: Critical
>         Attachments: HDFS-9289.1.patch, HDFS-9289.2.patch, HDFS-9289.3.patch, HDFS-9289.4.patch
>
>
> We have seen a case of a corrupt block caused by a file being completed after a pipelineUpdate, but with the old block genStamp. This caused the replicas on two datanodes in the updated pipeline to be viewed as corrupt. Propose to check the genstamp when committing the block.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
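The proposed fix amounts to comparing the client-reported generation stamp against the one the NameNode recorded after the last pipeline update, and failing the commit on a mismatch instead of silently accepting it. Below is a minimal standalone sketch of that idea; the class `BlockInfo` and method `commitBlock` here are hypothetical stand-ins for illustration, not the actual code in the attached patches.

```java
// Hypothetical sketch of the genstamp-at-commit check proposed in HDFS-9289.
// Not the actual HDFS patch: BlockInfo and commitBlock are simplified
// stand-ins for the NameNode-side state and commit path.
public class GenStampCheck {

    /** Minimal stand-in for the block state the NameNode tracks. */
    static final class BlockInfo {
        final long blockId;
        final long genStamp; // bumped on each pipeline recovery/update
        BlockInfo(long blockId, long genStamp) {
            this.blockId = blockId;
            this.genStamp = genStamp;
        }
    }

    /**
     * Accepts the commit only if the client-reported genstamp matches the
     * stored one. A stale genstamp after a pipelineUpdate is rejected with
     * an exception, rather than committed and discovered as corruption later.
     */
    static boolean commitBlock(BlockInfo stored, long reportedGenStamp) {
        if (reportedGenStamp != stored.genStamp) {
            throw new IllegalStateException(
                "commit of block " + stored.blockId
                + " with genstamp " + reportedGenStamp
                + " does not match stored genstamp " + stored.genStamp);
        }
        return true;
    }

    public static void main(String[] args) {
        // Genstamp was bumped from 1001 to 1002 by a pipeline recovery.
        BlockInfo afterRecovery = new BlockInfo(1L, 1002L);

        // Client reports the current genstamp: commit accepted.
        System.out.println(commitBlock(afterRecovery, 1002L));

        // Client misreports the pre-recovery genstamp: rejected eagerly.
        try {
            commitBlock(afterRecovery, 1001L);
        } catch (IllegalStateException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

This matches the comment above: throwing at complete/commit time surfaces the inconsistency immediately, whereas accepting the stale genstamp would mark the up-to-date replicas as corrupt later.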