[ 
https://issues.apache.org/jira/browse/HDFS-13840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16602918#comment-16602918
 ] 

Surendra Singh Lilhore commented on HDFS-13840:
-----------------------------------------------

Thanks [~brahmareddy] for the patch.

Some review comments:

1. In {{checkReplicaCorrupt()}}, the {{isStriped()}} check is not 
required. EC files handle the generation stamp the same way as contiguous files (see the sketch after the snippet below).
{code}
+        if (!storedBlock.isStriped()
+            && storedBlock.getGenerationStamp() > reported
+            .getGenerationStamp()) {
+          return new BlockToMarkCorrupt(new Block(reported), storedBlock,
+              reportedGS,
{code}
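For illustration, a self-contained sketch of the comparison that remains once the {{isStriped()}} guard is dropped; the {{BlockSketch}} class below is a hypothetical stand-in, not the real {{Block}}/{{BlockManager}} types. The point is that the same genstamp comparison applies to striped (EC) and contiguous blocks alike.
{code}
// Stand-in for the real Block type; illustrative only.
class BlockSketch {
  private final long generationStamp;
  BlockSketch(long generationStamp) { this.generationStamp = generationStamp; }
  long getGenerationStamp() { return generationStamp; }
}

class GenstampCheckSketch {
  // True when the reported replica has a lower GS than the stored block --
  // the case the patch marks as corrupt. No isStriped() guard is needed,
  // because EC (striped) and contiguous blocks are compared the same way.
  static boolean reportedReplicaIsStale(BlockSketch storedBlock, BlockSketch reported) {
    return storedBlock.getGenerationStamp() > reported.getGenerationStamp();
  }

  public static void main(String[] args) {
    BlockSketch stored = new BlockSketch(1005);   // NN view after pipeline recovery bumped the GS
    BlockSketch reported = new BlockSketch(1001); // replica reported by the restarted DN
    System.out.println(reportedReplicaIsStale(stored, reported)); // true -> would be marked corrupt
  }
}
{code}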
2. Why is the below change required? If the block gets corrupted during the write and is 
marked as corrupt, then it will be marked as invalid anyway. (The condition is restated after the snippet below.)
{code}
+    if (ucBlock.reportedState == ReplicaState.FINALIZED && (
+        block.findStorageInfo(storageInfo) < 0) || corruptReplicas
+        .isReplicaCorrupt(block, storageInfo.getDatanodeDescriptor())) {
{code}
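Just to make the question concrete, here is the patched condition restated as a standalone predicate; the class and parameter names are illustrative, not the real {{BlockManager}} code.
{code}
class ReportedReplicaConditionSketch {
  // Restates the patched guard: (reported state is FINALIZED && the storage is
  // not yet recorded for the block) OR the replica is already in the
  // corrupt-replicas map. Note the grouping: an already-corrupt replica
  // satisfies the guard on its own.
  static boolean patchedGuard(boolean reportedFinalized,
                              boolean storageUnknown,
                              boolean alreadyMarkedCorrupt) {
    return (reportedFinalized && storageUnknown) || alreadyMarkedCorrupt;
  }

  public static void main(String[] args) {
    // Even when the storage is already known for the block, an already-corrupt
    // replica makes the guard true -- this is what the question above is about.
    System.out.println(patchedGuard(true, false, true)); // true
  }
}
{code}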

> RBW Blocks which are having less GS should be added to Corrupt
> --------------------------------------------------------------
>
>                 Key: HDFS-13840
>                 URL: https://issues.apache.org/jira/browse/HDFS-13840
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Brahma Reddy Battula
>            Assignee: Brahma Reddy Battula
>            Priority: Minor
>         Attachments: HDFS-13840-002.patch, HDFS-13840-003.patch, 
> HDFS-13840-004.patch, HDFS-13840.patch
>
>
> # Start two DNs (DN1, DN2).
>  # Write fileA with rep=2 (don't close it).
>  # Stop DN1.
>  # Write some data to fileA.
>  # Restart DN1.
>  # Get the block locations of fileA.
> Here the RWR-state block will be reported on DN restart and added to the locations.
> IMO, RWR blocks which have a lower GS shouldn't be added, as they give a false 
> positive (in any case the read can fail since its genstamp is lower).
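A minimal repro sketch of the steps quoted above, assuming {{MiniDFSCluster}} test utilities; the path, data sizes and DN index are illustrative, and this is an outline, not the attached test.
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class RwrLowGsRepro {
  public static void main(String[] args) throws Exception {
    Configuration conf = new HdfsConfiguration();
    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).numDataNodes(2).build();
    try {
      cluster.waitActive();
      Path fileA = new Path("/fileA");
      // 1-2. Write fileA with rep=2 and keep the stream open.
      FSDataOutputStream out = cluster.getFileSystem().create(fileA, (short) 2);
      out.write(new byte[1024]);
      out.hflush();
      // 3. Stop DN1 (index 0 here).
      MiniDFSCluster.DataNodeProperties dn1 = cluster.stopDataNode(0);
      // 4. Write some more data; pipeline recovery bumps the block's genstamp.
      out.write(new byte[1024]);
      out.hflush();
      // 5. Restart DN1; it reports its RWR replica with the old (lower) genstamp.
      cluster.restartDataNode(dn1, true);
      cluster.waitActive();
      // 6. Get the block locations of fileA; the stale replica should not show up.
      System.out.println(java.util.Arrays.toString(
          cluster.getFileSystem().getFileBlockLocations(fileA, 0, 2048)));
    } finally {
      cluster.shutdown();
    }
  }
}
{code}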


